DOC: pd.read_csv doc-string clarification #11555
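The docstring being clarified in this diff states that, with `comment='#'`, parsing `'#empty\na,b,c\n1,2,3'` with `header=0` results in `'a,b,c'` being treated as the header. A minimal sketch (not part of the PR; data is illustrative) confirming that interaction between `comment` and `header`:

```python
import io

import pandas as pd

# A fully commented first line is ignored before header handling runs,
# so header=0 points at 'a,b,c', not at '#empty'.
data = "#empty\na,b,c\n1,2,3\n"
df = pd.read_csv(io.StringIO(data), comment="#", header=0)

print(list(df.columns))  # the commented line never becomes the header
print(df.shape)
```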
diff --git a/doc/source/io.rst b/doc/source/io.rst index 36d4bd89261c4..e2f2301beb078 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -72,123 +72,201 @@ CSV & Text files ---------------- The two workhorse functions for reading text files (a.k.a. flat files) are -:func:`~pandas.io.parsers.read_csv` and :func:`~pandas.io.parsers.read_table`. -They both use the same parsing code to intelligently convert tabular -data into a DataFrame object. See the :ref:`cookbook<cookbook.csv>` -for some advanced strategies +:func:`read_csv` and :func:`read_table`. They both use the same parsing code to +intelligently convert tabular data into a DataFrame object. See the +:ref:`cookbook<cookbook.csv>` for some advanced strategies. + +Parsing options +''''''''''''''' + +:func:`read_csv` and :func:`read_table` accept the following arguments: + +Basic ++++++ + +filepath_or_buffer : various + Either a path to a file (a :class:`python:str`, :class:`python:pathlib.Path`, + or :class:`py:py._path.local.LocalPath`), URL (including http, ftp, and S3 + locations), or any object with a ``read()`` method (such as an open file or + :class:`~python:io.StringIO`). +sep : str, defaults to ``','`` for :func:`read_csv`, ``\t`` for :func:`read_table` + Delimiter to use. If sep is ``None``, + will try to automatically determine this. Regular expressions are accepted, + use of a regular expression will force use of the python parsing engine and + will ignore quotes in the data. +delimiter : str, default ``None`` + Alternative argument name for sep. + +Column and Index Locations and Names +++++++++++++++++++++++++++++++++++++ + +header : int or list of ints, default ``'infer'`` + Row number(s) to use as the column names, and the start of the data. Default + behavior is as if ``header=0`` if no ``names`` passed, otherwise as if + ``header=None``. Explicitly pass ``header=0`` to be able to replace existing + names. 
The header can be a list of ints that specify row locations for a + multi-index on the columns e.g. ``[0,1,3]``. Intervening rows that are not + specified will be skipped (e.g. 2 in this example is skipped). Note that + this parameter ignores commented lines and empty lines if + ``skip_blank_lines=True``, so header=0 denotes the first line of data + rather than the first line of the file. +names : array-like, default ``None`` + List of column names to use. If file contains no header row, then you should + explicitly pass ``header=None``. +index_col : int or sequence or ``False``, default ``None`` + Column to use as the row labels of the DataFrame. If a sequence is given, a + MultiIndex is used. If you have a malformed file with delimiters at the end of + each line, you might consider ``index_col=False`` to force pandas to *not* use + the first column as the index (row names). +usecols : array-like, default ``None`` + Return a subset of the columns. Results in much faster parsing time and lower + memory usage +squeeze : boolean, default ``False`` + If the parsed data only contains one column then return a Series. +prefix : str, default ``None`` + Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ... +mangle_dupe_cols : boolean, default ``True`` + Duplicate columns will be specified as 'X.0'...'X.N', rather than 'X'...'X'. + +General Parsing Configuration ++++++++++++++++++++++++++++++ + +dtype : Type name or dict of column -> type, default ``None`` + Data type for data or columns. E.g. ``{'a': np.float64, 'b': np.int32}`` + (unsupported with ``engine='python'``). Use `str` or `object` to preserve and + not interpret dtype. +engine : {``'c'``, ``'python'``} + Parser engine to use. The C engine is faster while the python engine is + currently more feature-complete. +converters : dict, default ``None`` + Dict of functions for converting values in certain columns. Keys can either be + integers or column labels. 
+true_values : list, default ``None`` + Values to consider as ``True``. +false_values : list, default ``None`` + Values to consider as ``False``. +skipinitialspace : boolean, default ``False`` + Skip spaces after delimiter. +skiprows : list-like or integer, default ``None`` + Line numbers to skip (0-indexed) or number of lines to skip (int) at the start + of the file. +skipfooter : int, default ``0`` + Number of lines at bottom of file to skip (unsupported with engine='c'). +nrows : int, default ``None`` + Number of rows of file to read. Useful for reading pieces of large files. + +NA and Missing Data Handling +++++++++++++++++++++++++++++ + +na_values : str, list-like or dict, default ``None`` + Additional strings to recognize as NA/NaN. If dict passed, specific per-column + NA values. By default the following values are interpreted as NaN: + ``'-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A', 'NA', + '#NA', 'NULL', 'NaN', '-NaN', 'nan', '-nan', ''``. +keep_default_na : boolean, default ``True`` + If na_values are specified and keep_default_na is ``False`` the default NaN + values are overridden, otherwise they're appended to. +na_filter : boolean, default ``True`` + Detect missing value markers (empty strings and the value of na_values). In + data without any NAs, passing ``na_filter=False`` can improve the performance + of reading a large file. +verbose : boolean, default ``False`` + Indicate number of NA values placed in non-numeric columns. +skip_blank_lines : boolean, default ``True`` + If ``True``, skip over blank lines rather than interpreting as NaN values. + +Datetime Handling ++++++++++++++++++ + +parse_dates : boolean or list of ints or names or list of lists or dict, default ``False``. + - If ``True`` -> try parsing the index. + - If ``[1, 2, 3]`` -> try parsing columns 1, 2, 3 each as a separate date + column. + - If ``[[1, 3]]`` -> combine columns 1 and 3 and parse as a single date + column. 
+ - If ``{'foo' : [1, 3]}`` -> parse columns 1, 3 as date and call result 'foo'. + A fast-path exists for iso8601-formatted dates. +infer_datetime_format : boolean, default ``False`` + If ``True`` and parse_dates is enabled for a column, attempt to infer the + datetime format to speed up the processing. +keep_date_col : boolean, default ``False`` + If ``True`` and parse_dates specifies combining multiple columns then keep the + original columns. +date_parser : function, default ``None`` + Function to use for converting a sequence of string columns to an array of + datetime instances. The default uses ``dateutil.parser.parser`` to do the + conversion. Pandas will try to call date_parser in three different ways, + advancing to the next if an exception occurs: 1) Pass one or more arrays (as + defined by parse_dates) as arguments; 2) concatenate (row-wise) the string + values from the columns defined by parse_dates into a single array and pass + that; and 3) call date_parser once for each row using one or more strings + (corresponding to the columns defined by parse_dates) as arguments. +dayfirst : boolean, default ``False`` + DD/MM format dates, international and European format. + +Iteration ++++++++++ + +iterator : boolean, default ``False`` + Return `TextFileReader` object for iteration or getting chunks with + ``get_chunk()``. +chunksize : int, default ``None`` + Return `TextFileReader` object for iteration. See :ref:`iterating and chunking + <io.chunking>` below. + +Quoting, Compression, and File Format ++++++++++++++++++++++++++++++++++++++ + +compression : {``'infer'``, ``'gzip'``, ``'bz2'``, ``None``}, default ``'infer'`` + For on-the-fly decompression of on-disk data. If 'infer', then use gzip or bz2 + if filepath_or_buffer is a string ending in '.gz' or '.bz2', respectively, and + no decompression otherwise. Set to ``None`` for no decompression. +thousands : str, default ``None`` + Thousands separator. 
+decimal : str, default ``'.'`` + Character to recognize as decimal point. E.g. use ``','`` for European data. +lineterminator : str (length 1), default ``None`` + Character to break file into lines. Only valid with C parser. +quotechar : str (length 1) + The character used to denote the start and end of a quoted item. Quoted items + can include the delimiter and it will be ignored. +quoting : int or ``csv.QUOTE_*`` instance, default ``None`` + Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of + ``QUOTE_MINIMAL`` (0), ``QUOTE_ALL`` (1), ``QUOTE_NONNUMERIC`` (2) or + ``QUOTE_NONE`` (3). Default (``None``) results in ``QUOTE_MINIMAL`` + behavior. +escapechar : str (length 1), default ``None`` + One-character string used to escape delimiter when quoting is ``QUOTE_NONE``. +comment : str, default ``None`` + Indicates remainder of line should not be parsed. If found at the beginning of + a line, the line will be ignored altogether. This parameter must be a single + character. Like empty lines (as long as ``skip_blank_lines=True``), fully + commented lines are ignored by the parameter `header` but not by `skiprows`. + For example, if ``comment='#'``, parsing '#empty\\na,b,c\\n1,2,3' with + `header=0` will result in 'a,b,c' being treated as the header. +encoding : str, default ``None`` + Encoding to use for UTF when reading/writing (e.g. ``'utf-8'``). `List of + Python standard encodings + <https://docs.python.org/3/library/codecs.html#standard-encodings>`_. +dialect : str or :class:`python:csv.Dialect` instance, default ``None`` + If ``None`` defaults to Excel dialect. Ignored if sep longer than 1 char. See + :class:`python:csv.Dialect` documentation for more details. +tupleize_cols : boolean, default ``False`` + Leave a list of tuples on columns as is (default is to convert to a MultiIndex + on the columns). 
+ +Error Handling +++++++++++++++ -They can take a number of arguments: - - - ``filepath_or_buffer``: Either a path to a file (a :class:`python:str`, - :class:`python:pathlib.Path`, or :class:`py:py._path.local.LocalPath`), URL - (including http, ftp, and S3 locations), or any object with a ``read`` - method (such as an open file or :class:`~python:io.StringIO`). - - ``sep`` or ``delimiter``: A delimiter / separator to split fields - on. With ``sep=None``, ``read_csv`` will try to infer the delimiter - automatically in some cases by "sniffing". - The separator may be specified as a regular expression; for instance - you may use '\|\\s*' to indicate a pipe plus arbitrary whitespace, but ignores quotes in the data when a regex is used in separator. - - ``delim_whitespace``: Parse whitespace-delimited (spaces or tabs) file - (much faster than using a regular expression) - - ``compression``: decompress ``'gzip'`` and ``'bz2'`` formats on the fly. - Set to ``'infer'`` (the default) to guess a format based on the file - extension. - - ``dialect``: string or :class:`python:csv.Dialect` instance to expose more - ways to specify the file format - - ``dtype``: A data type name or a dict of column name to data type. If not - specified, data types will be inferred. (Unsupported with - ``engine='python'``) - - ``header``: row number(s) to use as the column names, and the start of the - data. Defaults to 0 if no ``names`` passed, otherwise ``None``. Explicitly - pass ``header=0`` to be able to replace existing names. The header can be - a list of integers that specify row locations for a multi-index on the columns - E.g. [0,1,3]. Intervening rows that are not specified will be - skipped (e.g. 2 in this example are skipped). Note that this parameter - ignores commented lines and empty lines if ``skip_blank_lines=True`` (the default), - so header=0 denotes the first line of data rather than the first line of the file. 
- - ``skip_blank_lines``: whether to skip over blank lines rather than interpreting - them as NaN values - - ``skiprows``: A collection of numbers for rows in the file to skip. Can - also be an integer to skip the first ``n`` rows - - ``index_col``: column number, column name, or list of column numbers/names, - to use as the ``index`` (row labels) of the resulting DataFrame. By default, - it will number the rows without using any column, unless there is one more - data column than there are headers, in which case the first column is taken - as the index. - - ``names``: List of column names to use as column names. To replace header - existing in file, explicitly pass ``header=0``. - - ``na_values``: optional string or list of strings to recognize as NaN (missing - values), either in addition to or in lieu of the default set. - - ``true_values``: list of strings to recognize as ``True`` - - ``false_values``: list of strings to recognize as ``False`` - - ``keep_default_na``: whether to include the default set of missing values - in addition to the ones specified in ``na_values`` - - ``parse_dates``: if True then index will be parsed as dates - (False by default). You can specify more complicated options to parse - a subset of columns or a combination of columns into a single date column - (list of ints or names, list of lists, or dict) - [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column - [[1, 3]] -> combine columns 1 and 3 and parse as a single date column - {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result 'foo' - - ``keep_date_col``: if True, then date component columns passed into - ``parse_dates`` will be retained in the output (False by default). - - ``date_parser``: function to use to parse strings into datetime - objects. If ``parse_dates`` is True, it defaults to the very robust - ``dateutil.parser``. Specifying this implicitly sets ``parse_dates`` as True. 
- You can also use functions from community supported date converters from - date_converters.py - - ``dayfirst``: if True then uses the DD/MM international/European date format - (This is False by default) - - ``thousands``: specifies the thousands separator. If not None, this character will - be stripped from numeric dtypes. However, if it is the first character in a field, - that column will be imported as a string. In the PythonParser, if not None, - then parser will try to look for it in the output and parse relevant data to numeric - dtypes. Because it has to essentially scan through the data again, this causes a - significant performance hit so only use if necessary. - - ``lineterminator`` : string (length 1), default ``None``, Character to break file into lines. Only valid with C parser - - ``quotechar`` : string, The character to used to denote the start and end of a quoted item. - Quoted items can include the delimiter and it will be ignored. - - ``quoting`` : int, - Controls whether quotes should be recognized. Values are taken from `csv.QUOTE_*` values. - Acceptable values are 0, 1, 2, and 3 for QUOTE_MINIMAL, QUOTE_ALL, - QUOTE_NONNUMERIC and QUOTE_NONE, respectively. - - ``skipinitialspace`` : boolean, default ``False``, Skip spaces after delimiter - - ``escapechar`` : string, to specify how to escape quoted data - - ``comment``: Indicates remainder of line should not be parsed. If found at the - beginning of a line, the line will be ignored altogether. This parameter - must be a single character. Like empty lines, fully commented lines - are ignored by the parameter `header` but not by `skiprows`. For example, - if comment='#', parsing '#empty\n1,2,3\na,b,c' with `header=0` will - result in '1,2,3' being treated as the header. - - ``nrows``: Number of rows to read out of the file. 
Useful to only read a - small portion of a large file - - ``iterator``: If True, return a ``TextFileReader`` to enable reading a file - into memory piece by piece - - ``chunksize``: An number of rows to be used to "chunk" a file into - pieces. Will cause an ``TextFileReader`` object to be returned. More on this - below in the section on :ref:`iterating and chunking <io.chunking>` - - ``skip_footer``: number of lines to skip at bottom of file (default 0) - (Unsupported with ``engine='c'``) - - ``converters``: a dictionary of functions for converting values in certain - columns, where keys are either integers or column labels - - ``encoding``: a string representing the encoding to use for decoding - unicode data, e.g. ``'utf-8``` or ``'latin-1'``. `Full list of Python - standard encodings - <https://docs.python.org/3/library/codecs.html#standard-encodings>`_ - - ``verbose``: show number of NA values inserted in non-numeric columns - - ``squeeze``: if True then output with only one column is turned into Series - - ``error_bad_lines``: if False then any lines causing an error will be skipped :ref:`bad lines <io.bad_lines>` - - ``usecols``: a subset of columns to return, results in much faster parsing - time and lower memory usage. - - ``mangle_dupe_cols``: boolean, default True, then duplicate columns will be specified - as 'X.0'...'X.N', rather than 'X'...'X' - - ``tupleize_cols``: boolean, default False, if False, convert a list of tuples - to a multi-index of columns, otherwise, leave the column index as a list of - tuples - - ``float_precision`` : string, default None. Specifies which converter the C - engine should use for floating-point values. The options are None for the - ordinary converter, 'high' for the high-precision converter, and - 'round_trip' for the round-trip converter. +error_bad_lines : boolean, default ``True`` + Lines with too many fields (e.g. 
a csv line with too many commas) will by + default cause an exception to be raised, and no DataFrame will be returned. If + ``False``, then these "bad lines" will dropped from the DataFrame that is + returned (only valid with C parser). See :ref:`bad lines <io.bad_lines>` + below. +warn_bad_lines : boolean, default ``True`` + If error_bad_lines is ``False``, and warn_bad_lines is ``True``, a warning for + each "bad line" will be output (only valid with C parser). .. ipython:: python :suppress: @@ -500,11 +578,10 @@ Date Handling Specifying Date Columns +++++++++++++++++++++++ -To better facilitate working with datetime data, -:func:`~pandas.io.parsers.read_csv` and :func:`~pandas.io.parsers.read_table` -uses the keyword arguments ``parse_dates`` and ``date_parser`` to allow users -to specify a variety of columns and date/time formats to turn the input text -data into ``datetime`` objects. +To better facilitate working with datetime data, :func:`read_csv` and +:func:`read_table` use the keyword arguments ``parse_dates`` and ``date_parser`` +to allow users to specify a variety of columns and date/time formats to turn the +input text data into ``datetime`` objects. The simplest case is to just pass in ``parse_dates=True``: @@ -929,10 +1006,9 @@ should pass the ``escapechar`` option: Files with Fixed Width Columns '''''''''''''''''''''''''''''' -While ``read_csv`` reads delimited data, the :func:`~pandas.io.parsers.read_fwf` -function works with data files that have known and fixed column widths. -The function parameters to ``read_fwf`` are largely the same as `read_csv` with -two extra parameters: +While ``read_csv`` reads delimited data, the :func:`read_fwf` function works +with data files that have known and fixed column widths. 
The function parameters +to ``read_fwf`` are largely the same as `read_csv` with two extra parameters: - ``colspecs``: A list of pairs (tuples) giving the extents of the fixed-width fields of each line as half-open intervals (i.e., [from, to[ ). diff --git a/doc/source/merging.rst b/doc/source/merging.rst index 074b15bbbcb66..7908428135308 100644 --- a/doc/source/merging.rst +++ b/doc/source/merging.rst @@ -80,29 +80,33 @@ some configurable handling of "what to do with the other axes": pd.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False, keys=None, levels=None, names=None, verify_integrity=False) -- ``objs``: list or dict of Series, DataFrame, or Panel objects. If a dict is - passed, the sorted keys will be used as the `keys` argument, unless it is - passed, in which case the values will be selected (see below) -- ``axis``: {0, 1, ...}, default 0. The axis to concatenate along +- ``objs``: a sequence or mapping of Series, DataFrame, or Panel objects. If a + dict is passed, the sorted keys will be used as the `keys` argument, unless + it is passed, in which case the values will be selected (see below). Any None + objects will be dropped silently unless they are all None in which case a + ValueError will be raised. +- ``axis``: {0, 1, ...}, default 0. The axis to concatenate along. - ``join``: {'inner', 'outer'}, default 'outer'. How to handle indexes on - other axis(es). Outer for union and inner for intersection + other axis(es). Outer for union and inner for intersection. - ``join_axes``: list of Index objects. Specific indexes to use for the other - n - 1 axes instead of performing inner/outer set logic + n - 1 axes instead of performing inner/outer set logic. - ``keys``: sequence, default None. Construct hierarchical index using the - passed keys as the outermost level If multiple levels passed, should + passed keys as the outermost level. If multiple levels passed, should contain tuples. -- ``levels`` : list of sequences, default None. 
If keys passed, specific - levels to use for the resulting MultiIndex. Otherwise they will be inferred - from the keys +- ``levels`` : list of sequences, default None. Specific levels (unique values) + to use for constructing a MultiIndex. Otherwise they will be inferred from the + keys. - ``names``: list, default None. Names for the levels in the resulting - hierarchical index + hierarchical index. - ``verify_integrity``: boolean, default False. Check whether the new concatenated axis contains duplicates. This can be very expensive relative - to the actual data concatenation + to the actual data concatenation. - ``ignore_index`` : boolean, default False. If True, do not use the index values on the concatenation axis. The resulting axis will be labeled 0, ..., n - 1. This is useful if you are concatenating objects where the - concatenation axis does not have meaningful indexing information. + concatenation axis does not have meaningful indexing information. Note + the index values on the other axes are still respected in the join. +- ``copy`` : boolean, default True. If False, do not copy data unnecessarily. Without a little bit of context and example many of these arguments don't make much sense. Let's take the above example. Suppose we wanted to associate diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 4265113076b23..1114df5e08154 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -247,6 +247,8 @@ Other enhancements - ``pivot_table()`` now accepts most iterables for the ``values`` parameter (:issue:`12017`) - Added Google ``BigQuery`` service account authentication support, which enables authentication on remote servers. (:issue:`11881`). For further details see :ref:`here <io.bigquery_authentication>` +- the order of keyword arguments to text file parsing functions (``.read_csv()``, ``.read_table()``, ``.read_fwf()``) changed to group related arguments. (:issue:`#11555`) + .. 
_whatsnew_0180.api_breaking: Backwards incompatible API changes diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 1593716097985..d39540af2ed06 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -28,6 +28,14 @@ import pandas.lib as lib import pandas.parser as _parser +# common NA values +# no longer excluding inf representations +# '1.#INF','-1.#INF', '1.#INF000000', +_NA_VALUES = set([ + '-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', + 'N/A', 'NA', '#NA', 'NULL', 'NaN', '-NaN', 'nan', '-nan', '' +]) + class ParserWarning(Warning): pass @@ -40,70 +48,79 @@ class ParserWarning(Warning): Parameters ---------- -filepath_or_buffer : string or file handle / StringIO - The string could be a URL. Valid URL schemes include - http, ftp, s3, and file. For file URLs, a - host is expected. For instance, a local file could be - file ://localhost/path/to/table.csv +filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any \ +object with a read() method (such as a file handle or StringIO) + The string could be a URL. Valid URL schemes include http, ftp, s3, and + file. For file URLs, a host is expected. For instance, a local file could + be file ://localhost/path/to/table.csv %s -lineterminator : string (length 1), default None - Character to break file into lines. Only valid with C parser -quotechar : string (length 1) - The character used to denote the start and end of a quoted item. Quoted - items can include the delimiter and it will be ignored. -quoting : int or csv.QUOTE_* instance, default None - Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of - QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3). - Default (None) results in QUOTE_MINIMAL behavior. -skipinitialspace : boolean, default False - Skip spaces after delimiter -escapechar : string (length 1), default None - One-character string used to escape delimiter when quoting is QUOTE_NONE. 
-dtype : Type name or dict of column -> type, default None - Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32} - (Unsupported with engine='python') -compression : {'gzip', 'bz2', 'infer', None}, default 'infer' - For on-the-fly decompression of on-disk data. If 'infer', then use gzip or - bz2 if filepath_or_buffer is a string ending in '.gz' or '.bz2', - respectively, and no decompression otherwise. Set to None for no - decompression. -dialect : string or csv.Dialect instance, default None - If None defaults to Excel dialect. Ignored if sep longer than 1 char - See csv.Dialect documentation for more details -header : int, list of ints, default 'infer' +delimiter : str, default None + Alternative argument name for sep. +header : int or list of ints, default 'infer' Row number(s) to use as the column names, and the start of the data. - Defaults to 0 if no ``names`` passed, otherwise ``None``. Explicitly pass - ``header=0`` to be able to replace existing names. The header can be a list - of integers that specify row locations for a multi-index on the columns - E.g. [0,1,3]. Intervening rows that are not specified will be skipped - (e.g. 2 in this example are skipped). Note that this parameter ignores - commented lines and empty lines if ``skip_blank_lines=True``, so header=0 - denotes the first line of data rather than the first line of the file. -skiprows : list-like or integer, default None - Line numbers to skip (0-indexed) or number of lines to skip (int) - at the start of the file + Default behavior is as if set to 0 if no ``names`` passed, otherwise + ``None``. Explicitly pass ``header=0`` to be able to replace existing + names. The header can be a list of integers that specify row locations for + a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not + specified will be skipped (e.g. 2 in this example is skipped). 
Note that + this parameter ignores commented lines and empty lines if + ``skip_blank_lines=True``, so header=0 denotes the first line of data + rather than the first line of the file. +names : array-like, default None + List of column names to use. If file contains no header row, then you + should explicitly pass header=None index_col : int or sequence or False, default None Column to use as the row labels of the DataFrame. If a sequence is given, a MultiIndex is used. If you have a malformed file with delimiters at the end of each line, you might consider index_col=False to force pandas to _not_ use the first column as the index (row names) -names : array-like, default None - List of column names to use. If file contains no header row, then you - should explicitly pass header=None -prefix : string, default None - Prefix to add to column numbers when no header, e.g 'X' for X0, X1, ... -na_values : str, list-like or dict, default None - Additional strings to recognize as NA/NaN. If dict passed, specific - per-column NA values +usecols : array-like, default None + Return a subset of the columns. + Results in much faster parsing time and lower memory usage. +squeeze : boolean, default False + If the parsed data only contains one column then return a Series +prefix : str, default None + Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ... +mangle_dupe_cols : boolean, default True + Duplicate columns will be specified as 'X.0'...'X.N', rather than 'X'...'X' +dtype : Type name or dict of column -> type, default None + Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32} + (Unsupported with engine='python'). Use `str` or `object` to preserve and + not interpret dtype. +%s +converters : dict, default None + Dict of functions for converting values in certain columns. 
Keys can either + be integers or column labels true_values : list, default None Values to consider as True false_values : list, default None Values to consider as False +skipinitialspace : boolean, default False + Skip spaces after delimiter. +skiprows : list-like or integer, default None + Line numbers to skip (0-indexed) or number of lines to skip (int) + at the start of the file +skipfooter : int, default 0 + Number of lines at bottom of file to skip (Unsupported with engine='c') +nrows : int, default None + Number of rows of file to read. Useful for reading pieces of large files +na_values : str or list-like or dict, default None + Additional strings to recognize as NA/NaN. If dict passed, specific + per-column NA values. By default the following values are interpreted as + NaN: `'""" + "'`, `'".join(sorted(_NA_VALUES)) + """'`. keep_default_na : bool, default True If na_values are specified and keep_default_na is False the default NaN - values are overridden, otherwise they're appended to -parse_dates : various, default False - + values are overridden, otherwise they're appended to. +na_filter : boolean, default True + Detect missing value markers (empty strings and the value of na_values). In + data without any NAs, passing na_filter=False can improve the performance + of reading a large file +verbose : boolean, default False + Indicate number of NA values placed in non-numeric columns +skip_blank_lines : boolean, default True + If True, skip over blank lines rather than interpreting as NaN values +parse_dates : boolean or list of ints or names or list of lists or dict, \ +default False * boolean. If True -> try parsing the index. * list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column. @@ -112,12 +129,15 @@ class ParserWarning(Warning): * dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result 'foo' Note: A fast-path exists for iso8601-formatted dates. 
+infer_datetime_format : boolean, default False + If True and parse_dates is enabled for a column, attempt to infer + the datetime format to speed up the processing keep_date_col : boolean, default False If True and parse_dates specifies combining multiple columns then keep the original columns. date_parser : function, default None Function to use for converting a sequence of string columns to an array of - datetime instances. The default uses dateutil.parser.parser to do the + datetime instances. The default uses ``dateutil.parser.parser`` to do the conversion. Pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the @@ -126,8 +146,34 @@ class ParserWarning(Warning): strings (corresponding to the columns defined by parse_dates) as arguments. dayfirst : boolean, default False DD/MM format dates, international and European format +iterator : boolean, default False + Return TextFileReader object for iteration or getting chunks with + ``get_chunk()``. +chunksize : int, default None + Return TextFileReader object for iteration. `See IO Tools docs for more + information + <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_ on + ``iterator`` and ``chunksize``. +compression : {'infer', 'gzip', 'bz2', None}, default 'infer' + For on-the-fly decompression of on-disk data. If 'infer', then use gzip or + bz2 if filepath_or_buffer is a string ending in '.gz' or '.bz2', + respectively, and no decompression otherwise. Set to None for no + decompression. thousands : str, default None Thousands separator +decimal : str, default '.' + Character to recognize as decimal point (e.g. use ',' for European data). +lineterminator : str (length 1), default None + Character to break file into lines. Only valid with C parser. +quotechar : str (length 1), optional + The character used to denote the start and end of a quoted item. 
Quoted + items can include the delimiter and it will be ignored. +quoting : int or csv.QUOTE_* instance, default None + Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of + QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3). + Default (None) results in QUOTE_MINIMAL behavior. +escapechar : str (length 1), default None + One-character string used to escape delimiter when quoting is QUOTE_NONE. comment : str, default None Indicates remainder of line should not be parsed. If found at the beginning of a line, the line will be ignored altogether. This parameter must be a @@ -136,42 +182,13 @@ class ParserWarning(Warning): `skiprows`. For example, if comment='#', parsing '#empty\\na,b,c\\n1,2,3' with `header=0` will result in 'a,b,c' being treated as the header. -decimal : str, default '.' - Character to recognize as decimal point. E.g. use ',' for European data -nrows : int, default None - Number of rows of file to read. Useful for reading pieces of large files -iterator : boolean, default False - Return TextFileReader object for iteration or getting chunks with - ``get_chunk()``. -chunksize : int, default None - Return TextFileReader object for iteration. `See IO Tools docs for more - information - <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_ on - ``iterator`` and ``chunksize``. -skipfooter : int, default 0 - Number of lines at bottom of file to skip (Unsupported with engine='c') -converters : dict, default None - Dict of functions for converting values in certain columns. Keys can either - be integers or column labels -verbose : boolean, default False - Indicate number of NA values placed in non-numeric columns -delimiter : string, default None - Alternative argument name for sep. Regular expressions are accepted. -encoding : string, default None +encoding : str, default None Encoding to use for UTF when reading/writing (ex. 'utf-8'). 
`List of Python standard encodings <https://docs.python.org/3/library/codecs.html#standard-encodings>`_ -squeeze : boolean, default False - If the parsed data only contains one column then return a Series -na_filter : boolean, default True - Detect missing value markers (empty strings and the value of na_values). In - data without any NAs, passing na_filter=False can improve the performance - of reading a large file -usecols : array-like, default None - Return a subset of the columns. - Results in much faster parsing time and lower memory usage. -mangle_dupe_cols : boolean, default True - Duplicate columns will be specified as 'X.0'...'X.N', rather than 'X'...'X' +dialect : str or csv.Dialect instance, default None + If None defaults to Excel dialect. Ignored if sep longer than 1 char + See csv.Dialect documentation for more details tupleize_cols : boolean, default False Leave a list of tuples on columns as is (default is to convert to a Multi Index on the columns) @@ -183,41 +200,34 @@ class ParserWarning(Warning): warn_bad_lines : boolean, default True If error_bad_lines is False, and warn_bad_lines is True, a warning for each "bad line" will be output. (Only valid with C parser). -infer_datetime_format : boolean, default False - If True and parse_dates is enabled for a column, attempt to infer - the datetime format to speed up the processing -skip_blank_lines : boolean, default True - If True, skip over blank lines rather than interpreting as NaN values Returns ------- result : DataFrame or TextParser """ -_csv_params = """sep : string, default ',' - Delimiter to use. If sep is None, will try to automatically determine - this. Regular expressions are accepted. -engine : {'c', 'python'} +# engine is not used in read_fwf() so is factored out of the shared docstring +_engine_doc = """engine : {'c', 'python'}, optional Parser engine to use. 
The C engine is faster while the python engine is currently more feature-complete.""" -_table_params = """sep : string, default \\t (tab-stop) - Delimiter to use. Regular expressions are accepted. -engine : {'c', 'python'} - Parser engine to use. The C engine is faster while the python engine is - currently more feature-complete.""" +_sep_doc = """sep : str, default {default} + Delimiter to use. If sep is None, will try to automatically determine + this. Regular expressions are accepted and will force use of the python + parsing engine and will ignore quotes in the data.""" _read_csv_doc = """ Read CSV (comma-separated) file into DataFrame %s -""" % (_parser_params % _csv_params) +""" % (_parser_params % (_sep_doc.format(default="','"), _engine_doc)) _read_table_doc = """ Read general delimited file into DataFrame %s -""" % (_parser_params % _table_params) +""" % (_parser_params % (_sep_doc.format(default="\\t (tab-stop)"), + _engine_doc)) _fwf_widths = """\ colspecs : list of pairs (int, int) or 'infer'. optional @@ -238,7 +248,7 @@ class ParserWarning(Warning): Also, 'delimiter' is used to specify the filler character of the fields if it is not spaces (e.g., '~'). 
-""" % (_parser_params % _fwf_widths) +""" % (_parser_params % (_fwf_widths, '')) def _read(filepath_or_buffer, kwds): @@ -370,65 +380,76 @@ def _make_parser_function(name, sep=','): def parser_f(filepath_or_buffer, sep=sep, - dialect=None, - compression='infer', - - doublequote=True, - escapechar=None, - quotechar='"', - quoting=csv.QUOTE_MINIMAL, - skipinitialspace=False, - lineterminator=None, + delimiter=None, + # Column and Index Locations and Names header='infer', - index_col=None, names=None, - prefix=None, - skiprows=None, - skipfooter=None, - skip_footer=0, - na_values=None, - true_values=None, - false_values=None, - delimiter=None, - converters=None, - dtype=None, + index_col=None, usecols=None, + squeeze=False, + prefix=None, + mangle_dupe_cols=True, + # General Parsing Configuration + dtype=None, engine=None, - delim_whitespace=False, - as_recarray=False, - na_filter=True, - compact_ints=False, - use_unsigned=False, - low_memory=_c_parser_defaults['low_memory'], - buffer_lines=None, - warn_bad_lines=True, - error_bad_lines=True, + converters=None, + true_values=None, + false_values=None, + skipinitialspace=False, + skiprows=None, + skipfooter=None, + nrows=None, + # NA and Missing Data Handling + na_values=None, keep_default_na=True, - thousands=None, - comment=None, - decimal=b'.', + na_filter=True, + verbose=False, + skip_blank_lines=True, + # Datetime Handling parse_dates=False, + infer_datetime_format=False, keep_date_col=False, - dayfirst=False, date_parser=None, + dayfirst=False, - memory_map=False, - float_precision=None, - nrows=None, + # Iteration iterator=False, chunksize=None, - verbose=False, + # Quoting, Compression, and File Format + compression='infer', + thousands=None, + decimal=b'.', + lineterminator=None, + quotechar='"', + quoting=csv.QUOTE_MINIMAL, + escapechar=None, + comment=None, encoding=None, - squeeze=False, - mangle_dupe_cols=True, + dialect=None, tupleize_cols=False, - infer_datetime_format=False, - skip_blank_lines=True): + 
+ # Error Handling + error_bad_lines=True, + warn_bad_lines=True, + + # Deprecated + skip_footer=0, + + # Internal + doublequote=True, + delim_whitespace=False, + as_recarray=False, + compact_ints=False, + use_unsigned=False, + low_memory=_c_parser_defaults['low_memory'], + buffer_lines=None, + memory_map=False, + float_precision=None): # Alias sep -> delimiter. if delimiter is None: @@ -537,15 +558,6 @@ def read_fwf(filepath_or_buffer, colspecs='infer', widths=None, **kwds): return _read(filepath_or_buffer, kwds) -# common NA values -# no longer excluding inf representations -# '1.#INF','-1.#INF', '1.#INF000000', -_NA_VALUES = set([ - '-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', - 'N/A', 'NA', '#NA', 'NULL', 'NaN', '-NaN', 'nan', '-nan', '' -]) - - class TextFileReader(BaseIterator): """
closes #11555. Updated the IO Tools documentation for read_csv() and read_table() to be consistent with the doc-string, and reordered the keywords to group them more logically. Also updated the merging.rst docs for concat.
https://api.github.com/repos/pandas-dev/pandas/pulls/12256
2016-02-08T06:33:20Z
2016-02-12T03:07:36Z
null
2016-02-12T14:06:46Z
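The reorganized docstring above documents `quoting` in terms of the standard library's `csv.QUOTE_*` constants and `dialect` in terms of `csv.Dialect`. A minimal sketch of what those constants actually do — using plain stdlib `csv`, not pandas itself, so the behavior shown is the csv module's, which the pandas parsers mirror:

```python
import csv
import io

# The docstring lists quoting as one of the csv.QUOTE_* constants;
# their integer values match the numbers given in the docs.
assert (csv.QUOTE_MINIMAL, csv.QUOTE_ALL,
        csv.QUOTE_NONNUMERIC, csv.QUOTE_NONE) == (0, 1, 2, 3)

data = 'a,"b,c",d\n'

# QUOTE_MINIMAL (the default): a quoted field may contain the delimiter.
rows = list(csv.reader(io.StringIO(data), quoting=csv.QUOTE_MINIMAL))
print(rows)  # [['a', 'b,c', 'd']]

# QUOTE_NONE: quote characters are treated as ordinary data, so the
# embedded comma splits the field.
rows = list(csv.reader(io.StringIO(data), quoting=csv.QUOTE_NONE))
print(rows)  # [['a', '"b', 'c"', 'd']]
```

The same constants can be passed straight through to `read_csv(..., quoting=...)`, which is why the docstring documents them by number.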
ENH: Make HDFStore iterable
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 421822380c2da..2216b88769053 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -246,6 +246,7 @@ Other enhancements - ``DataFrame.select_dtypes`` now allows the ``np.float16`` typecode (:issue:`11990`) - ``pivot_table()`` now accepts most iterables for the ``values`` parameter (:issue:`12017`) - Added Google ``BigQuery`` service account authentication support, which enables authentication on remote servers. (:issue:`11881`). For further details see :ref:`here <io.bigquery_authentication>` +- ``HDFStore`` is now iterable: ``for k in store`` is equivalent to ``for k in store.keys()`` (:issue: `12221`). .. _whatsnew_0180.api_breaking: diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index cd8f7699a85d1..c94b387f7554a 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -488,6 +488,9 @@ def keys(self): """ return [n._v_pathname for n in self.groups()] + def __iter__(self): + return iter(self.keys()) + def items(self): """ iterate on key->group diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py index b08d24747bcd3..192ed4e687272 100644 --- a/pandas/io/tests/test_pytables.py +++ b/pandas/io/tests/test_pytables.py @@ -389,8 +389,15 @@ def test_keys(self): store['d'] = tm.makePanel() store['foo/bar'] = tm.makePanel() self.assertEqual(len(store), 5) - self.assertTrue(set( - store.keys()) == set(['/a', '/b', '/c', '/d', '/foo/bar'])) + expected = set(['/a', '/b', '/c', '/d', '/foo/bar']) + self.assertTrue(set(store.keys()) == expected) + self.assertTrue(set(store) == expected) + + def test_iter_empty(self): + + with ensure_clean_path(self.path) as path: + # GH 12221 + self.assertTrue(list(pd.HDFStore(path)) == []) def test_repr(self):
closes #12221. HDFStore is not itself an iterator, but by being iterable it can return an iterator over its contents (i.e. over `.keys()`).
https://api.github.com/repos/pandas-dev/pandas/pulls/12253
2016-02-07T15:51:57Z
2016-02-11T02:55:17Z
null
2016-02-11T06:21:42Z
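The three-line patch above makes the store iterable rather than an iterator: each call to `iter(store)` delegates to `keys()`. The pattern can be sketched with a minimal stand-in class (a hypothetical `Store`, not the real HDFStore, which needs PyTables):

```python
class Store:
    """Minimal stand-in for an HDFStore-like container: iterable, not an iterator."""

    def __init__(self, groups):
        self._groups = list(groups)

    def keys(self):
        return ['/' + g for g in self._groups]

    def __iter__(self):
        # Same one-liner as the patch: every call returns a *fresh*
        # iterator over keys(), so the store itself holds no iteration state.
        return iter(self.keys())


store = Store(['a', 'b', 'foo/bar'])
assert list(store) == store.keys()

# Two independent passes both see every key; had __next__ been defined on
# the store itself, the second pass would find it exhausted.
assert [k for k in store] == [k for k in store]

# An empty store iterates to an empty list, as in test_iter_empty above.
assert list(Store([])) == []
```

This is why the PR body stresses "not an iterator": delegating to `iter(self.keys())` keeps `for k in store` restartable and equivalent to `for k in store.keys()`.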
BUG: Ensure data_columns is always a list (i.e. min_itemsize can exte…
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index cd8f7699a85d1..5d727be9b8fc8 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -3247,7 +3247,7 @@ def validate_data_columns(self, data_columns, min_itemsize): # evaluate the passed data_columns, True == use all columns # take only valide axis labels if data_columns is True: - data_columns = axis_labels + data_columns = list(axis_labels) elif data_columns is None: data_columns = [] @@ -4084,7 +4084,7 @@ def write(self, obj, data_columns=None, **kwargs): obj = DataFrame({name: obj}, index=obj.index) obj.columns = [name] return super(AppendableSeriesTable, self).write( - obj=obj, data_columns=obj.columns, **kwargs) + obj=obj, data_columns=list(obj.columns), **kwargs) def read(self, columns=None, **kwargs): @@ -4185,7 +4185,7 @@ def write(self, obj, data_columns=None, **kwargs): if data_columns is None: data_columns = [] elif data_columns is True: - data_columns = obj.columns[:] + data_columns = list(obj.columns[:]) obj, self.levels = self.validate_multiindex(obj) for n in self.levels: if n not in data_columns: diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py index b08d24747bcd3..5225cfdf837a3 100644 --- a/pandas/io/tests/test_pytables.py +++ b/pandas/io/tests/test_pytables.py @@ -1338,6 +1338,16 @@ def check_col(key, name, size): [[124, 'abcdefqhij'], [346, 'abcdefghijklmnopqrtsuvwxyz']]) self.assertRaises(ValueError, store.append, 'df_new', df_new) + # min_itemsize on Series with Multiindex (GH 10381) + df = tm.makeMixedDataFrame().set_index(['A', 'C']) + store.append('ss', df['B'], min_itemsize={'index': 4}) + tm.assert_series_equal(store.select('ss'), df['B']) + + # min_itemsize with MultiIndex and data_columns=True + store.append('midf', df, data_columns=True, + min_itemsize={'index': 4}) + tm.assert_frame_equal(store.select('midf'), df) + # with nans _maybe_remove(store, 'df') df = tm.makeTimeDataFrame()
…nd it) closes #10381
https://api.github.com/repos/pandas-dev/pandas/pulls/12252
2016-02-07T15:51:05Z
2016-11-16T22:28:04Z
null
2016-11-24T12:50:07Z
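The `list(axis_labels)` coercions above exist because the downstream code mutates `data_columns` in place (min_itemsize keys must be able to extend it), which an immutable Index cannot support. A pure-Python sketch of the repaired logic — a hypothetical helper, not the actual pandas internals:

```python
def validate_data_columns(data_columns, axis_labels, min_itemsize):
    """Always return a mutable list so keys from min_itemsize can extend it."""
    if data_columns is True:
        data_columns = list(axis_labels)   # the fix: coerce to a list
    elif data_columns is None:
        data_columns = []
    if isinstance(min_itemsize, dict):
        existing = set(data_columns)
        # min_itemsize keys other than 'values' must become data columns
        data_columns.extend(k for k in min_itemsize
                            if k != 'values' and k not in existing)
    return data_columns


# data_columns=True with an immutable, Index-like label sequence:
cols = validate_data_columns(True, ('A', 'B'), {'C': 8})
assert cols == ['A', 'B', 'C']

# Without the list() coercion there is nothing mutable to extend:
labels = ('A', 'B')
try:
    labels.extend(['C'])
except AttributeError:
    pass  # immutable sequences cannot be extended in place
```

The new tests in the diff exercise exactly this path: `min_itemsize={'index': 4}` on a MultiIndexed Series, where the index levels have to be folded into `data_columns`.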
DOC: strings longer than min_itemsize are not truncated
diff --git a/doc/source/io.rst b/doc/source/io.rst index 36d4bd89261c4..415e1a7ddaf07 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -2808,8 +2808,8 @@ Storing Mixed Types in a Table ++++++++++++++++++++++++++++++ Storing mixed-dtype data is supported. Strings are stored as a -fixed-width using the maximum size of the appended column. Subsequent -appends will truncate strings at this length. +fixed-width using the maximum size of the appended column. Subsequent attempts +at appending longer strings will raise a ``ValueError``. Passing ``min_itemsize={`values`: size}`` as a parameter to append will set a larger minimum for the string columns. Storing ``floats,
Fixed docs on min_itemsize
https://api.github.com/repos/pandas-dev/pandas/pulls/12251
2016-02-07T15:50:43Z
2016-03-12T17:52:30Z
null
2016-03-12T20:17:03Z
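The corrected wording describes a validate-then-raise contract: strings wider than the column's fixed width raise `ValueError` rather than being silently truncated. A tiny sketch of that contract — a hypothetical `check_fits` function, not the actual PyTables machinery:

```python
def check_fits(value, itemsize):
    """Raise ValueError, rather than silently truncating, when an
    appended string exceeds the column's fixed width."""
    encoded = value.encode('utf-8')
    if len(encoded) > itemsize:
        raise ValueError('string of length %d does not fit in itemsize %d'
                         % (len(encoded), itemsize))
    return encoded.ljust(itemsize)  # pad short strings to the fixed width


assert check_fits('abc', 10) == b'abc       '
try:
    check_fits('abcdefghijklmnop', 10)
except ValueError:
    pass  # longer strings now raise instead of being cut off
```

Passing `min_itemsize={'values': size}` at the first append, as the docs note, simply raises the fixed width so later, longer strings still fit.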
CLN: Remove testing._skip_if_no_cday
diff --git a/pandas/tseries/tests/test_daterange.py b/pandas/tseries/tests/test_daterange.py index 5b0d9b593c344..e88009b34381b 100644 --- a/pandas/tseries/tests/test_daterange.py +++ b/pandas/tseries/tests/test_daterange.py @@ -30,7 +30,6 @@ def test_generate(self): self.assert_numpy_array_equal(rng1, rng2) def test_generate_cday(self): - tm._skip_if_no_cday() rng1 = list(generate_range(START, END, offset=datetools.cday)) rng2 = list(generate_range(START, END, time_rule='C')) self.assert_numpy_array_equal(rng1, rng2) @@ -546,7 +545,6 @@ def test_freq_divides_end_in_nanos(self): class TestCustomDateRange(tm.TestCase): def setUp(self): - tm._skip_if_no_cday() self.rng = cdate_range(START, END) def test_constructor(self): diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py index 901d9f41e3949..726c777535315 100644 --- a/pandas/tseries/tests/test_offsets.py +++ b/pandas/tseries/tests/test_offsets.py @@ -1397,7 +1397,6 @@ def setUp(self): self.d = datetime(2008, 1, 1) self.nd = np_datetime64_compat('2008-01-01 00:00:00Z') - tm._skip_if_no_cday() self.offset = CDay() self.offset2 = CDay(2) @@ -1629,7 +1628,6 @@ class CustomBusinessMonthBase(object): def setUp(self): self.d = datetime(2008, 1, 1) - tm._skip_if_no_cday() self.offset = self._object() self.offset2 = self._object(2) diff --git a/pandas/util/testing.py b/pandas/util/testing.py index 06262edfe0f19..0a1249c246ae6 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -250,14 +250,6 @@ def _skip_if_windows(): import nose raise nose.SkipTest("Running on Windows") - -def _skip_if_no_cday(): - from pandas.core.datetools import cday - if cday is None: - import nose - raise nose.SkipTest("CustomBusinessDay not available.") - - def _skip_if_no_pathlib(): try: from pathlib import Path
CustomBusinessDay is available on NumPy 1.7 or later, so the `_skip_if_no_cday` helper is no longer needed.
https://api.github.com/repos/pandas-dev/pandas/pulls/12249
2016-02-07T13:34:19Z
2016-02-08T15:20:29Z
null
2016-02-08T21:49:08Z
Hdffixes
diff --git a/doc/source/io.rst b/doc/source/io.rst index 36d4bd89261c4..415e1a7ddaf07 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -2808,8 +2808,8 @@ Storing Mixed Types in a Table ++++++++++++++++++++++++++++++ Storing mixed-dtype data is supported. Strings are stored as a -fixed-width using the maximum size of the appended column. Subsequent -appends will truncate strings at this length. +fixed-width using the maximum size of the appended column. Subsequent attempts +at appending longer strings will raise a ``ValueError``. Passing ``min_itemsize={`values`: size}`` as a parameter to append will set a larger minimum for the string columns. Storing ``floats, diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index cd8f7699a85d1..ffe4f4d4cae89 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -488,6 +488,9 @@ def keys(self): """ return [n._v_pathname for n in self.groups()] + def __iter__(self): + return iter(self.keys()) + def items(self): """ iterate on key->group @@ -3247,7 +3250,7 @@ def validate_data_columns(self, data_columns, min_itemsize): # evaluate the passed data_columns, True == use all columns # take only valide axis labels if data_columns is True: - data_columns = axis_labels + data_columns = list(axis_labels) elif data_columns is None: data_columns = [] @@ -4084,7 +4087,7 @@ def write(self, obj, data_columns=None, **kwargs): obj = DataFrame({name: obj}, index=obj.index) obj.columns = [name] return super(AppendableSeriesTable, self).write( - obj=obj, data_columns=obj.columns, **kwargs) + obj=obj, data_columns=list(obj.columns), **kwargs) def read(self, columns=None, **kwargs): @@ -4185,7 +4188,7 @@ def write(self, obj, data_columns=None, **kwargs): if data_columns is None: data_columns = [] elif data_columns is True: - data_columns = obj.columns[:] + data_columns = list(obj.columns[:]) obj, self.levels = self.validate_multiindex(obj) for n in self.levels: if n not in data_columns: diff --git 
a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py index b08d24747bcd3..77ac1f90472fc 100644 --- a/pandas/io/tests/test_pytables.py +++ b/pandas/io/tests/test_pytables.py @@ -389,8 +389,9 @@ def test_keys(self): store['d'] = tm.makePanel() store['foo/bar'] = tm.makePanel() self.assertEqual(len(store), 5) - self.assertTrue(set( - store.keys()) == set(['/a', '/b', '/c', '/d', '/foo/bar'])) + expected = set(['/a', '/b', '/c', '/d', '/foo/bar']) + self.assertTrue(set(store.keys()) == expected) + self.assertTrue(set(store) == expected) def test_repr(self): @@ -1338,6 +1339,16 @@ def check_col(key, name, size): [[124, 'abcdefqhij'], [346, 'abcdefghijklmnopqrtsuvwxyz']]) self.assertRaises(ValueError, store.append, 'df_new', df_new) + # min_itemsize on Series with Multiindex (GH 10381) + df = tm.makeMixedDataFrame().set_index(['A', 'C']) + store.append('ss', df['B'], min_itemsize={'index': 4}) + tm.assert_series_equal(store.select('ss'), df['B']) + + # min_itemsize with MultiIndex and data_columns=True + store.append('midf', df, data_columns=True, + min_itemsize={'index': 4}) + tm.assert_frame_equal(store.select('midf'), df) + # with nans _maybe_remove(store, 'df') df = tm.makeTimeDataFrame()
This fixes #12221 and #10381. Both changes touch the same source file, but let me know if you want separate PRs.
https://api.github.com/repos/pandas-dev/pandas/pulls/12248
2016-02-07T12:06:12Z
2016-02-08T13:48:11Z
null
2016-02-11T06:22:08Z
TST: allow assert_almost_equal to handle pandas instances
diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py index 805ed960c2cf1..cd967ecafd8b4 100644 --- a/pandas/computation/tests/test_eval.py +++ b/pandas/computation/tests/test_eval.py @@ -1580,7 +1580,7 @@ def test_unary_functions(self): expr = "{0}(a)".format(fn) got = self.eval(expr) expect = getattr(np, fn)(a) - pd.util.testing.assert_almost_equal(got, expect) + tm.assert_series_equal(got, expect, check_names=False) def test_binary_functions(self): df = DataFrame({'a': np.random.randn(10), @@ -1601,7 +1601,7 @@ def test_df_use_case(self): parser=self.parser, inplace=True) got = df.e expect = np.arctan2(np.sin(df.a), df.b) - pd.util.testing.assert_almost_equal(got, expect) + tm.assert_series_equal(got, expect, check_names=False) def test_df_arithmetic_subexpression(self): df = DataFrame({'a': np.random.randn(10), @@ -1611,7 +1611,7 @@ def test_df_arithmetic_subexpression(self): parser=self.parser, inplace=True) got = df.e expect = np.sin(df.a + df.b) - pd.util.testing.assert_almost_equal(got, expect) + tm.assert_series_equal(got, expect, check_names=False) def check_result_type(self, dtype, expect_dtype): df = DataFrame({'a': np.random.randn(10).astype(dtype)}) @@ -1623,7 +1623,7 @@ def check_result_type(self, dtype, expect_dtype): expect = np.sin(df.a) self.assertEqual(expect.dtype, got.dtype) self.assertEqual(expect_dtype, got.dtype) - pd.util.testing.assert_almost_equal(got, expect) + tm.assert_series_equal(got, expect, check_names=False) def test_result_types(self): self.check_result_type(np.int32, np.float64) diff --git a/pandas/io/tests/test_cparser.py b/pandas/io/tests/test_cparser.py index 52cb56bea1122..ce6fce7b792b5 100644 --- a/pandas/io/tests/test_cparser.py +++ b/pandas/io/tests/test_cparser.py @@ -130,8 +130,8 @@ def test_integer_thousands_alt(self): thousands='.', header=None) result = reader.read() - expected = [123456, 12500] - tm.assert_almost_equal(result[0], expected) + expected = DataFrame([123456, 12500]) 
+ tm.assert_frame_equal(result, expected) def test_skip_bad_lines(self): # too many lines, see #2430 for why diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py index c073ee3334e55..24a73b3825a70 100644 --- a/pandas/sparse/tests/test_sparse.py +++ b/pandas/sparse/tests/test_sparse.py @@ -440,7 +440,8 @@ def _compare(idx): # Corner case sp = SparseSeries(np.ones(10) * nan) - assert_almost_equal(sp.take([0, 1, 2, 3, 4]), np.repeat(nan, 5)) + exp = pd.Series(np.repeat(nan, 5)) + tm.assert_series_equal(sp.take([0, 1, 2, 3, 4]), exp) def test_setitem(self): self.bseries[5] = 7. @@ -1872,8 +1873,10 @@ def test_setitem(self): assert_sp_frame_equal(self.panel['ItemE'], self.panel['ItemC']) assert_sp_frame_equal(self.panel['ItemF'], self.panel['ItemC']) - assert_almost_equal(self.panel.items, ['ItemA', 'ItemB', 'ItemC', - 'ItemD', 'ItemE', 'ItemF']) + + expected = pd.Index(['ItemA', 'ItemB', 'ItemC', + 'ItemD', 'ItemE', 'ItemF']) + tm.assert_index_equal(self.panel.items, expected) self.assertRaises(Exception, self.panel.__setitem__, 'item6', 1) @@ -1890,11 +1893,12 @@ def _check_loc(item, major, minor, val=1.5): def test_delitem_pop(self): del self.panel['ItemB'] - assert_almost_equal(self.panel.items, ['ItemA', 'ItemC', 'ItemD']) + tm.assert_index_equal(self.panel.items, + pd.Index(['ItemA', 'ItemC', 'ItemD'])) crackle = self.panel['ItemC'] pop = self.panel.pop('ItemC') self.assertIs(pop, crackle) - assert_almost_equal(self.panel.items, ['ItemA', 'ItemD']) + tm.assert_almost_equal(self.panel.items, pd.Index(['ItemA', 'ItemD'])) self.assertRaises(KeyError, self.panel.__delitem__, 'ItemC') diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py index 884a147d7509f..2d7ba6c704e41 100644 --- a/pandas/tests/frame/test_alter_axes.py +++ b/pandas/tests/frame/test_alter_axes.py @@ -10,8 +10,7 @@ from pandas import DataFrame, Series, Index, MultiIndex, RangeIndex import pandas as pd -from pandas.util.testing import 
(assert_almost_equal, - assert_series_equal, +from pandas.util.testing import (assert_series_equal, assert_frame_equal, assertRaisesRegexp) @@ -447,7 +446,7 @@ def test_reset_index(self): stacked.index.labels)): values = lev.take(lab) name = names[i] - assert_almost_equal(values, deleveled[name]) + tm.assert_index_equal(values, Index(deleveled[name])) stacked.index.names = [None, None] deleveled2 = stacked.reset_index() diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 87c263e129361..70e8d3bcf4582 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -28,8 +28,7 @@ from pandas.core.dtypes import DatetimeTZDtype -from pandas.util.testing import (assert_almost_equal, - assert_numpy_array_equal, +from pandas.util.testing import (assert_numpy_array_equal, assert_series_equal, assert_frame_equal, assertRaisesRegexp) @@ -359,7 +358,7 @@ def test_constructor_dict_block(self): expected = [[4., 3., 2., 1.]] df = DataFrame({'d': [4.], 'c': [3.], 'b': [2.], 'a': [1.]}, columns=['d', 'c', 'b', 'a']) - assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) def test_constructor_dict_cast(self): # cast float tests diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py index 6077c8e6f63ee..55631ad831441 100644 --- a/pandas/tests/frame/test_indexing.py +++ b/pandas/tests/frame/test_indexing.py @@ -1555,7 +1555,7 @@ def testit(df): 'mask_c': [False, True, False, True]}) df['mask'] = df.lookup(df.index, 'mask_' + df['label']) exp_mask = alt(df, df.index, 'mask_' + df['label']) - assert_almost_equal(df['mask'], exp_mask) + tm.assert_series_equal(df['mask'], pd.Series(exp_mask, name='mask')) self.assertEqual(df['mask'].dtype, np.bool_) with tm.assertRaises(KeyError): @@ -2070,7 +2070,9 @@ def test_xs_corner(self): df['E'] = 3. 
xs = df.xs(0) - assert_almost_equal(xs, [1., 'foo', 2., 'bar', 3.]) + exp = pd.Series([1., 'foo', 2., 'bar', 3.], + index=list('ABCDE'), name=0) + tm.assert_series_equal(xs, exp) # no columns but Index(dtype=object) df = DataFrame(index=['a', 'b', 'c']) diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py index 1546d18a224cd..0d58dd5402aff 100644 --- a/pandas/tests/frame/test_mutate_columns.py +++ b/pandas/tests/frame/test_mutate_columns.py @@ -7,8 +7,7 @@ from pandas import DataFrame, Series -from pandas.util.testing import (assert_almost_equal, - assert_series_equal, +from pandas.util.testing import (assert_series_equal, assert_frame_equal, assertRaisesRegexp) @@ -125,12 +124,12 @@ def test_insert(self): df.insert(0, 'foo', df['a']) self.assert_numpy_array_equal(df.columns, ['foo', 'c', 'b', 'a']) - assert_almost_equal(df['a'], df['foo']) + tm.assert_series_equal(df['a'], df['foo'], check_names=False) df.insert(2, 'bar', df['c']) self.assert_numpy_array_equal(df.columns, ['foo', 'c', 'bar', 'b', 'a']) - assert_almost_equal(df['c'], df['bar']) + tm.assert_almost_equal(df['c'], df['bar'], check_names=False) # diff dtype diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py index 40ef3188e50f7..a9dfe07c20856 100644 --- a/pandas/tests/series/test_analytics.py +++ b/pandas/tests/series/test_analytics.py @@ -960,10 +960,10 @@ def test_rank(self): filled = self.ts.fillna(np.inf) # rankdata returns a ndarray - exp = Series(rankdata(filled), index=filled.index) + exp = Series(rankdata(filled), index=filled.index, name='ts') exp[mask] = np.nan - assert_almost_equal(ranks, exp) + tm.assert_series_equal(ranks, exp) iseries = Series(np.arange(5).repeat(2)) diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index c5783779c67c8..1289c0da14dd5 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -16,7 
+16,7 @@ from pandas.compat import lrange, range, zip, OrderedDict, long from pandas import compat -from pandas.util.testing import assert_series_equal, assert_almost_equal +from pandas.util.testing import assert_series_equal import pandas.util.testing as tm from .common import TestData @@ -213,7 +213,7 @@ def test_constructor_maskedarray(self): def test_constructor_default_index(self): s = Series([0, 1, 2]) - assert_almost_equal(s.index, np.arange(3)) + tm.assert_index_equal(s.index, pd.Index(np.arange(3))) def test_constructor_corner(self): df = tm.makeTimeDataFrame() diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py index d36cc0c303dfe..11977341df77c 100644 --- a/pandas/tests/series/test_operators.py +++ b/pandas/tests/series/test_operators.py @@ -139,13 +139,15 @@ def test_div(self): assert_series_equal(result, expected) def test_operators(self): - def _check_op(series, other, op, pos_only=False): + def _check_op(series, other, op, pos_only=False, + check_dtype=True): left = np.abs(series) if pos_only else series right = np.abs(other) if pos_only else other cython_or_numpy = op(left, right) python = left.combine(right, op) - tm.assert_almost_equal(cython_or_numpy, python) + tm.assert_series_equal(cython_or_numpy, python, + check_dtype=check_dtype) def check(series, other): simple_ops = ['add', 'sub', 'mul', 'truediv', 'floordiv', 'mod'] @@ -169,15 +171,15 @@ def check(series, other): check(self.ts, self.ts[::2]) check(self.ts, 5) - def check_comparators(series, other): - _check_op(series, other, operator.gt) - _check_op(series, other, operator.ge) - _check_op(series, other, operator.eq) - _check_op(series, other, operator.lt) - _check_op(series, other, operator.le) + def check_comparators(series, other, check_dtype=True): + _check_op(series, other, operator.gt, check_dtype=check_dtype) + _check_op(series, other, operator.ge, check_dtype=check_dtype) + _check_op(series, other, operator.eq, check_dtype=check_dtype) + 
_check_op(series, other, operator.lt, check_dtype=check_dtype) + _check_op(series, other, operator.le, check_dtype=check_dtype) check_comparators(self.ts, 5) - check_comparators(self.ts, self.ts + 1) + check_comparators(self.ts, self.ts + 1, check_dtype=False) def test_operators_empty_int_corner(self): s1 = Series([], [], dtype=np.int32) @@ -1245,10 +1247,14 @@ def test_operators_frame(self): # rpow does not work with DataFrame df = DataFrame({'A': self.ts}) - tm.assert_almost_equal(self.ts + self.ts, self.ts + df['A']) - tm.assert_almost_equal(self.ts ** self.ts, self.ts ** df['A']) - tm.assert_almost_equal(self.ts < self.ts, self.ts < df['A']) - tm.assert_almost_equal(self.ts / self.ts, self.ts / df['A']) + tm.assert_series_equal(self.ts + self.ts, self.ts + df['A'], + check_names=False) + tm.assert_series_equal(self.ts ** self.ts, self.ts ** df['A'], + check_names=False) + tm.assert_series_equal(self.ts < self.ts, self.ts < df['A'], + check_names=False) + tm.assert_series_equal(self.ts / self.ts, self.ts / df['A'], + check_names=False) def test_operators_combine(self): def _check_fill(meth, op, a, b, fill_value=0): diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py index c175630748b38..840aabc15b75e 100644 --- a/pandas/tests/test_groupby.py +++ b/pandas/tests/test_groupby.py @@ -960,10 +960,13 @@ def test_aggregate_item_by_item(self): # GH5782 # odd comparisons can result here, so cast to make easy - assert_almost_equal( - result.xs('foo'), np.array([foo] * K).astype('float64')) - assert_almost_equal( - result.xs('bar'), np.array([bar] * K).astype('float64')) + exp = pd.Series(np.array([foo] * K), index=list('BCD'), + dtype=np.float64, name='foo') + tm.assert_series_equal(result.xs('foo'), exp) + + exp = pd.Series(np.array([bar] * K), index=list('BCD'), + dtype=np.float64, name='bar') + tm.assert_almost_equal(result.xs('bar'), exp) def aggfun(ser): return ser.size @@ -1390,7 +1393,8 @@ def test_frame_groupby(self): for name, group in 
grouped: mean = group.mean() for idx in group.index: - assert_almost_equal(transformed.xs(idx), mean) + tm.assert_series_equal(transformed.xs(idx), mean, + check_names=False) # iterate for weekday, group in grouped: diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py index 69e05e1f4e7ca..72bad407ded9f 100644 --- a/pandas/tests/test_internals.py +++ b/pandas/tests/test_internals.py @@ -702,7 +702,7 @@ def test_reindex_items(self): reindexed = mgr.reindex_axis(['g', 'c', 'a', 'd'], axis=0) self.assertEqual(reindexed.nblocks, 2) - assert_almost_equal(reindexed.items, ['g', 'c', 'a', 'd']) + tm.assert_index_equal(reindexed.items, pd.Index(['g', 'c', 'a', 'd'])) assert_almost_equal( mgr.get('g', fastpath=False), reindexed.get('g', fastpath=False)) assert_almost_equal( @@ -746,7 +746,8 @@ def test_get_numeric_data(self): mgr.set('obj', np.array([1, 2, 3], dtype=np.object_)) numeric = mgr.get_numeric_data() - assert_almost_equal(numeric.items, ['int', 'float', 'complex', 'bool']) + tm.assert_index_equal(numeric.items, + pd.Index(['int', 'float', 'complex', 'bool'])) assert_almost_equal( mgr.get('float', fastpath=False), numeric.get('float', fastpath=False)) @@ -762,7 +763,8 @@ def test_get_numeric_data(self): mgr.get('float').internal_values(), np.array([100., 200., 300.])) numeric2 = mgr.get_numeric_data(copy=True) - assert_almost_equal(numeric.items, ['int', 'float', 'complex', 'bool']) + tm.assert_index_equal(numeric.items, + pd.Index(['int', 'float', 'complex', 'bool'])) numeric2.set('float', np.array([1000., 2000., 3000.])) assert_almost_equal( mgr.get('float', fastpath=False), np.array([100., 200., 300.])) @@ -776,9 +778,9 @@ def test_get_bool_data(self): mgr.set('obj', np.array([True, False, True], dtype=np.object_)) bools = mgr.get_bool_data() - assert_almost_equal(bools.items, ['bool']) - assert_almost_equal( - mgr.get('bool', fastpath=False), bools.get('bool', fastpath=False)) + tm.assert_index_equal(bools.items, pd.Index(['bool'])) + 
assert_almost_equal(mgr.get('bool', fastpath=False), + bools.get('bool', fastpath=False)) assert_almost_equal( mgr.get('bool').internal_values(), bools.get('bool').internal_values()) @@ -946,23 +948,25 @@ def assert_reindex_axis_is_ok(mgr, axis, new_labels, fill_value): reindexed = mgr.reindex_axis(new_labels, axis, fill_value=fill_value) - assert_almost_equal(com.take_nd(mat, indexer, axis, - fill_value=fill_value), - reindexed.as_matrix()) - assert_almost_equal(reindexed.axes[axis], new_labels) + tm.assert_numpy_array_equal(com.take_nd(mat, indexer, axis, + fill_value=fill_value), + reindexed.as_matrix()) + tm.assert_index_equal(reindexed.axes[axis], new_labels) for mgr in self.MANAGERS: for ax in range(mgr.ndim): for fill_value in (None, np.nan, 100.): - yield assert_reindex_axis_is_ok, mgr, ax, [], fill_value + yield (assert_reindex_axis_is_ok, mgr, ax, + pd.Index([]), fill_value) yield (assert_reindex_axis_is_ok, mgr, ax, mgr.axes[ax], fill_value) yield (assert_reindex_axis_is_ok, mgr, ax, mgr.axes[ax][[0, 0, 0]], fill_value) yield (assert_reindex_axis_is_ok, mgr, ax, - ['foo', 'bar', 'baz'], fill_value) + pd.Index(['foo', 'bar', 'baz']), fill_value) yield (assert_reindex_axis_is_ok, mgr, ax, - ['foo', mgr.axes[ax][0], 'baz'], fill_value) + pd.Index(['foo', mgr.axes[ax][0], 'baz']), + fill_value) if mgr.shape[ax] >= 3: yield (assert_reindex_axis_is_ok, mgr, ax, @@ -973,6 +977,7 @@ def assert_reindex_axis_is_ok(mgr, axis, new_labels, fill_value): mgr.axes[ax][[0, 1, 2, 0, 1, 2]], fill_value) def test_reindex_indexer(self): + def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer, fill_value): mat = mgr.as_matrix() @@ -980,18 +985,19 @@ def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer, fill_value=fill_value) reindexed = mgr.reindex_indexer(new_labels, indexer, axis, fill_value=fill_value) - assert_almost_equal(reindexed_mat, reindexed.as_matrix()) - assert_almost_equal(reindexed.axes[axis], new_labels) + 
tm.assert_numpy_array_equal(reindexed_mat, reindexed.as_matrix()) + tm.assert_index_equal(reindexed.axes[axis], new_labels) for mgr in self.MANAGERS: for ax in range(mgr.ndim): for fill_value in (None, np.nan, 100.): - yield (assert_reindex_indexer_is_ok, mgr, ax, [], [], - fill_value) - yield (assert_reindex_indexer_is_ok, mgr, ax, mgr.axes[ax], + yield (assert_reindex_indexer_is_ok, mgr, ax, + pd.Index([]), [], fill_value) + yield (assert_reindex_indexer_is_ok, mgr, ax, + mgr.axes[ax], np.arange(mgr.shape[ax]), fill_value) + yield (assert_reindex_indexer_is_ok, mgr, ax, + pd.Index(['foo'] * mgr.shape[ax]), np.arange(mgr.shape[ax]), fill_value) - yield (assert_reindex_indexer_is_ok, mgr, ax, ['foo'] * - mgr.shape[ax], np.arange(mgr.shape[ax]), fill_value) yield (assert_reindex_indexer_is_ok, mgr, ax, mgr.axes[ax][::-1], np.arange(mgr.shape[ax]), @@ -999,16 +1005,19 @@ def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer, yield (assert_reindex_indexer_is_ok, mgr, ax, mgr.axes[ax], np.arange(mgr.shape[ax])[::-1], fill_value) yield (assert_reindex_indexer_is_ok, mgr, ax, - ['foo', 'bar', 'baz'], [0, 0, 0], fill_value) + pd.Index(['foo', 'bar', 'baz']), + [0, 0, 0], fill_value) yield (assert_reindex_indexer_is_ok, mgr, ax, - ['foo', 'bar', 'baz'], [-1, 0, -1], fill_value) + pd.Index(['foo', 'bar', 'baz']), + [-1, 0, -1], fill_value) yield (assert_reindex_indexer_is_ok, mgr, ax, - ['foo', mgr.axes[ax][0], 'baz'], [-1, -1, -1], - fill_value) + pd.Index(['foo', mgr.axes[ax][0], 'baz']), + [-1, -1, -1], fill_value) if mgr.shape[ax] >= 3: yield (assert_reindex_indexer_is_ok, mgr, ax, - ['foo', 'bar', 'baz'], [0, 1, 2], fill_value) + pd.Index(['foo', 'bar', 'baz']), + [0, 1, 2], fill_value) # test_get_slice(slice_like, axis) # take(indexer, axis) diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py index 9f8d672723954..3430109939f77 100644 --- a/pandas/tests/test_multilevel.py +++ b/pandas/tests/test_multilevel.py @@ -849,7 +849,8 @@ def 
_check_counts(frame, axis=0): self.frame['D'] = 'foo' result = self.frame.count(level=0, numeric_only=True) - assert_almost_equal(result.columns, ['A', 'B', 'C']) + tm.assert_index_equal(result.columns, + pd.Index(['A', 'B', 'C'], name='exp')) def test_count_level_series(self): index = MultiIndex(levels=[['foo', 'bar', 'baz'], ['one', 'two', diff --git a/pandas/tests/test_stats.py b/pandas/tests/test_stats.py index 56f6a80e58eea..85ce1d5127512 100644 --- a/pandas/tests/test_stats.py +++ b/pandas/tests/test_stats.py @@ -9,9 +9,7 @@ from pandas import Series, DataFrame from pandas.compat import product -from pandas.util.testing import (assert_frame_equal, - assert_series_equal, - assert_almost_equal) +from pandas.util.testing import (assert_frame_equal, assert_series_equal) import pandas.util.testing as tm @@ -34,7 +32,7 @@ def test_rank_tie_methods(self): def _check(s, expected, method='average'): result = s.rank(method=method) - assert_almost_equal(result, expected) + tm.assert_series_equal(result, Series(expected)) dtypes = [None, object] disabled = set([(object, 'first')]) @@ -87,6 +85,7 @@ def test_rank_methods_frame(self): sprank = np.apply_along_axis( rankdata, ax, vals, m if m != 'first' else 'ordinal') + sprank = sprank.astype(np.float64) expected = DataFrame(sprank, columns=cols) if LooseVersion(scipy.__version__) >= '0.17.0': diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py index f0bb002a1c96d..54d6b29daf4b1 100644 --- a/pandas/tests/test_strings.py +++ b/pandas/tests/test_strings.py @@ -127,33 +127,35 @@ def test_count(self): values = ['foo', 'foofoo', NA, 'foooofooofommmfoo'] result = strings.str_count(values, 'f[o]+') - exp = [1, 2, NA, 4] + exp = Series([1, 2, NA, 4]) tm.assert_almost_equal(result, exp) result = Series(values).str.count('f[o]+') tm.assertIsInstance(result, Series) - tm.assert_almost_equal(result, exp) + tm.assert_series_equal(result, exp) # mixed mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.] 
rs = strings.str_count(mixed, 'a') - xp = [1, NA, 0, NA, NA, 0, NA, NA, NA] - tm.assert_almost_equal(rs, xp) + xp = np.array([1, NA, 0, NA, NA, 0, NA, NA, NA]) + tm.assert_numpy_array_equal(rs, xp) rs = Series(mixed).str.count('a') + xp = Series([1, NA, 0, NA, NA, 0, NA, NA, NA]) tm.assertIsInstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(rs, xp) # unicode values = [u('foo'), u('foofoo'), NA, u('foooofooofommmfoo')] result = strings.str_count(values, 'f[o]+') - exp = [1, 2, NA, 4] - tm.assert_almost_equal(result, exp) + exp = np.array([1, 2, NA, 4]) + tm.assert_numpy_array_equal(result, exp) result = Series(values).str.count('f[o]+') + exp = Series([1, 2, NA, 4]) tm.assertIsInstance(result, Series) - tm.assert_almost_equal(result, exp) + tm.assert_series_equal(result, exp) def test_contains(self): values = ['foo', NA, 'fooommm__foo', 'mmm_', 'foommm[_]+bar'] @@ -187,12 +189,12 @@ def test_contains(self): # mixed mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.] rs = strings.str_contains(mixed, 'o') - xp = [False, NA, False, NA, NA, True, NA, NA, NA] + xp = Series([False, NA, False, NA, NA, True, NA, NA, NA]) tm.assert_almost_equal(rs, xp) rs = Series(mixed).str.contains('o') tm.assertIsInstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(rs, xp) # unicode values = [u('foo'), NA, u('fooommm__foo'), u('mmm_')] @@ -227,12 +229,12 @@ def test_startswith(self): # mixed mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.] 
rs = strings.str_startswith(mixed, 'f') - xp = [False, NA, False, NA, NA, True, NA, NA, NA] + xp = Series([False, NA, False, NA, NA, True, NA, NA, NA]) tm.assert_almost_equal(rs, xp) rs = Series(mixed).str.startswith('f') tm.assertIsInstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(rs, xp) # unicode values = Series([u('om'), NA, u('foo_nom'), u('nom'), u('bar_foo'), NA, @@ -255,12 +257,12 @@ def test_endswith(self): # mixed mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.] rs = strings.str_endswith(mixed, 'f') - xp = [False, NA, False, NA, NA, False, NA, NA, NA] + xp = Series([False, NA, False, NA, NA, False, NA, NA, NA]) tm.assert_almost_equal(rs, xp) rs = Series(mixed).str.endswith('f') tm.assertIsInstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(rs, xp) # unicode values = Series([u('om'), NA, u('foo_nom'), u('nom'), u('bar_foo'), NA, @@ -310,9 +312,9 @@ def test_lower_upper(self): 2.]) mixed = mixed.str.upper() rs = Series(mixed).str.lower() - xp = ['a', NA, 'b', NA, NA, 'foo', NA, NA, NA] + xp = Series(['a', NA, 'b', NA, NA, 'foo', NA, NA, NA]) tm.assertIsInstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(rs, xp) # unicode values = Series([u('om'), NA, u('nom'), u('nom')]) @@ -389,7 +391,7 @@ def test_replace(self): None, 1, 2.]) rs = Series(mixed).str.replace('BAD[_]*', '') - xp = ['a', NA, 'b', NA, NA, 'foo', NA, NA, NA] + xp = Series(['a', NA, 'b', NA, NA, 'foo', NA, NA, NA]) tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) @@ -426,9 +428,9 @@ def test_repeat(self): 2.]) rs = Series(mixed).str.repeat(3) - xp = ['aaa', NA, 'bbb', NA, NA, 'foofoofoo', NA, NA, NA] + xp = Series(['aaa', NA, 'bbb', NA, NA, 'foofoofoo', NA, NA, NA]) tm.assertIsInstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(rs, xp) # unicode values = Series([u('a'), u('b'), NA, u('c'), NA, u('d')]) @@ -456,9 +458,10 @@ def test_deprecated_match(self): 
with tm.assert_produces_warning(): rs = Series(mixed).str.match('.*(BAD[_]+).*(BAD)') - xp = [('BAD_', 'BAD'), NA, ('BAD_', 'BAD'), NA, NA, [], NA, NA, NA] + xp = Series([('BAD_', 'BAD'), NA, ('BAD_', 'BAD'), + NA, NA, [], NA, NA, NA]) tm.assertIsInstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(rs, xp) # unicode values = Series([u('fooBAD__barBAD'), NA, u('foo')]) @@ -489,9 +492,9 @@ def test_match(self): with tm.assert_produces_warning(): rs = Series(mixed).str.match('.*(BAD[_]+).*(BAD)', as_indexer=True) - xp = [True, NA, True, NA, NA, False, NA, NA, NA] + xp = Series([True, NA, True, NA, NA, False, NA, NA, NA]) tm.assertIsInstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(rs, xp) # unicode values = Series([u('fooBAD__barBAD'), NA, u('foo')]) diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py index 7c3ba2ee8b556..19598a54c6585 100644 --- a/pandas/tests/test_testing.py +++ b/pandas/tests/test_testing.py @@ -116,6 +116,14 @@ def test_assert_almost_equal_inf(self): self._assert_not_almost_equal_both(np.inf, 0) + def test_assert_almost_equal_pandas(self): + self.assert_almost_equal(pd.Index([1., 1.1]), + pd.Index([1., 1.100001])) + self.assert_almost_equal(pd.Series([1., 1.1]), + pd.Series([1., 1.100001])) + self.assert_almost_equal(pd.DataFrame({'a': [1., 1.1]}), + pd.DataFrame({'a': [1., 1.100001]})) + class TestUtilTesting(tm.TestCase): _multiprocess_can_split_ = True diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py index 0b8de24a1bd42..cc4a6ba61306d 100644 --- a/pandas/tests/test_window.py +++ b/pandas/tests/test_window.py @@ -908,8 +908,9 @@ def get_result(obj, window, min_periods=None, freq=None, center=False): assert_almost_equal(series_result[-1], static_comp(trunc_series)) - assert_almost_equal(frame_result.xs(last_date), - trunc_frame.apply(static_comp)) + assert_series_equal(frame_result.xs(last_date), + trunc_frame.apply(static_comp), + check_names=False) 
# GH 7925 if has_center: @@ -984,11 +985,11 @@ def test_ewma(self): def test_ewma_nan_handling(self): s = Series([1.] + [np.nan] * 5 + [1.]) result = s.ewm(com=5).mean() - assert_almost_equal(result, [1.] * len(s)) + tm.assert_series_equal(result, Series([1.] * len(s))) s = Series([np.nan] * 2 + [1.] + [np.nan] * 2 + [1.]) result = s.ewm(com=5).mean() - assert_almost_equal(result, [np.nan] * 2 + [1.] * 4) + tm.assert_series_equal(result, Series([np.nan] * 2 + [1.] * 4)) # GH 7603 s0 = Series([np.nan, 1., 101.]) diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py index fdf38a0869a0b..77ee54d84bf3d 100644 --- a/pandas/tools/tests/test_merge.py +++ b/pandas/tools/tests/test_merge.py @@ -609,12 +609,19 @@ def test_merge_different_column_key_names(self): merged = left.merge(right, left_on='lkey', right_on='rkey', how='outer', sort=True) - assert_almost_equal(merged['lkey'], - ['bar', 'baz', 'foo', 'foo', 'foo', 'foo', np.nan]) - assert_almost_equal(merged['rkey'], - ['bar', np.nan, 'foo', 'foo', 'foo', 'foo', 'qux']) - assert_almost_equal(merged['value_x'], [2, 3, 1, 1, 4, 4, np.nan]) - assert_almost_equal(merged['value_y'], [6, np.nan, 5, 8, 5, 8, 7]) + exp = pd.Series(['bar', 'baz', 'foo', 'foo', 'foo', 'foo', np.nan], + name='lkey') + tm.assert_series_equal(merged['lkey'], exp) + + exp = pd.Series(['bar', np.nan, 'foo', 'foo', 'foo', 'foo', 'qux'], + name='rkey') + tm.assert_series_equal(merged['rkey'], exp) + + exp = pd.Series([2, 3, 1, 1, 4, 4, np.nan], name='value_x') + tm.assert_series_equal(merged['value_x'], exp) + + exp = pd.Series([6, np.nan, 5, 8, 5, 8, 7], name='value_y') + tm.assert_series_equal(merged['value_y'], exp) def test_merge_copy(self): left = DataFrame({'a': 0, 'b': 1}, index=lrange(10)) diff --git a/pandas/tools/tests/test_tile.py b/pandas/tools/tests/test_tile.py index 63dc769f2ed75..55f27e1466a92 100644 --- a/pandas/tools/tests/test_tile.py +++ b/pandas/tools/tests/test_tile.py @@ -63,10 +63,10 @@ def 
test_cut_corner(self): def test_cut_out_of_range_more(self): # #1511 - s = Series([0, -1, 0, 1, -3]) + s = Series([0, -1, 0, 1, -3], name='x') ind = cut(s, [0, 1], labels=False) - exp = [np.nan, np.nan, np.nan, 0, np.nan] - tm.assert_almost_equal(ind, exp) + exp = Series([np.nan, np.nan, np.nan, 0, np.nan], name='x') + tm.assert_series_equal(ind, exp) def test_labels(self): arr = np.tile(np.arange(0, 1.01, 0.1), 4) diff --git a/pandas/tseries/tests/test_timedeltas.py b/pandas/tseries/tests/test_timedeltas.py index d5a057d25e752..4bdd0ed462852 100644 --- a/pandas/tseries/tests/test_timedeltas.py +++ b/pandas/tseries/tests/test_timedeltas.py @@ -670,8 +670,8 @@ def conv(v): # pass thru result = to_timedelta(np.array([np.timedelta64(1, 's')])) - expected = np.array([np.timedelta64(1, 's')]) - tm.assert_almost_equal(result, expected) + expected = pd.Index(np.array([np.timedelta64(1, 's')])) + tm.assert_index_equal(result, expected) # ints result = np.timedelta64(0, 'ns') diff --git a/pandas/util/testing.py b/pandas/util/testing.py index 915fd08e2c0c6..d434b19317dfc 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -102,9 +102,21 @@ def assertNotAlmostEquals(self, *args, **kwargs): return deprecate('assertNotAlmostEquals', self.assertNotAlmostEqual)(*args, **kwargs) -# NOTE: don't pass an NDFrame or index to this function - may not handle it -# well. 
-assert_almost_equal = _testing.assert_almost_equal +def assert_almost_equal(left, right, check_exact=False, **kwargs): + if isinstance(left, pd.Index): + return assert_index_equal(left, right, check_exact=check_exact, + **kwargs) + + elif isinstance(left, pd.Series): + return assert_series_equal(left, right, check_exact=check_exact, + **kwargs) + + elif isinstance(left, pd.DataFrame): + return assert_frame_equal(left, right, check_exact=check_exact, + **kwargs) + + return _testing.assert_almost_equal(left, right, **kwargs) + assert_dict_equal = _testing.assert_dict_equal @@ -714,9 +726,9 @@ def _get_ilevel_values(index, level): msg = '{0} values are different ({1} %)'.format(obj, np.round(diff, 5)) raise_assert_detail(obj, msg, left, right) else: - assert_almost_equal(left.values, right.values, - check_less_precise=check_less_precise, - obj=obj, lobj=left, robj=right) + _testing.assert_almost_equal(left.values, right.values, + check_less_precise=check_less_precise, + obj=obj, lobj=left, robj=right) # metadata comparison if check_names: @@ -766,7 +778,10 @@ def isiterable(obj): return hasattr(obj, '__iter__') def is_sorted(seq): - return assert_almost_equal(seq, np.sort(np.array(seq))) + if isinstance(seq, (Index, Series)): + seq = seq.values + # sorting does not change precisions + return assert_numpy_array_equal(seq, np.sort(np.array(seq))) def assertIs(first, second, msg=''): @@ -865,7 +880,7 @@ def assert_numpy_array_equal(left, right, # compare shape and values if array_equivalent(left, right, strict_nan=strict_nan): - return + return True if err_msg is None: # show detailed error @@ -965,19 +980,23 @@ def assert_series_equal(left, right, check_dtype=True, obj='{0}'.format(obj)) elif check_datetimelike_compat: # we want to check only if we have compat dtypes - # e.g. 
integer and M|m are NOT compat, but we can simply check the values in that case - if is_datetimelike_v_numeric(left, right) or is_datetimelike_v_object(left, right) or needs_i8_conversion(left) or needs_i8_conversion(right): - - # datetimelike may have different objects (e.g. datetime.datetime vs Timestamp) but will compare equal + # e.g. integer and M|m are NOT compat, but we can simply check + # the values in that case + if (is_datetimelike_v_numeric(left, right) or + is_datetimelike_v_object(left, right) or + needs_i8_conversion(left) or + needs_i8_conversion(right)): + + # datetimelike may have different objects (e.g. datetime.datetime + # vs Timestamp) but will compare equal if not Index(left.values).equals(Index(right.values)): - raise AssertionError( - '[datetimelike_compat=True] {0} is not equal to {1}.'.format(left.values, - right.values)) + msg = '[datetimelike_compat=True] {0} is not equal to {1}.' + raise AssertionError(msg.format(left.values, right.values)) else: assert_numpy_array_equal(left.values, right.values) else: - assert_almost_equal(left.get_values(), right.get_values(), - check_less_precise, obj='{0}'.format(obj)) + _testing.assert_almost_equal(left.get_values(), right.get_values(), + check_less_precise, obj='{0}'.format(obj)) # metadata comparison if check_names:
Closes #11584. Because many existing tests expect `assert_almost_equal` to compare `pandas` instances, it seems better to dispatch to the appropriate comparison functions rather than raise. I've also fixed some failure cases caused by the change (comparing a `Series` to a `list`, etc.).
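The dispatch this PR adds to `pandas/util/testing.py` can be sketched with the public `pandas.testing` helpers (a minimal illustration; the scalar/ndarray fallback to the cython `_testing.assert_almost_equal` is elided here):

```python
import pandas as pd
import pandas.testing as tm


def assert_almost_equal(left, right, check_exact=False, **kwargs):
    # Route pandas containers to their dedicated comparators instead of
    # the raw element-wise comparison, mirroring the PR's change.
    if isinstance(left, pd.Index):
        return tm.assert_index_equal(left, right, check_exact=check_exact,
                                     **kwargs)
    if isinstance(left, pd.Series):
        return tm.assert_series_equal(left, right, check_exact=check_exact,
                                      **kwargs)
    if isinstance(left, pd.DataFrame):
        return tm.assert_frame_equal(left, right, check_exact=check_exact,
                                     **kwargs)
    raise NotImplementedError("scalar/ndarray fallback elided in this sketch")


# near-equal floats pass under the default approximate comparison
assert_almost_equal(pd.Series([1.0, 1.1]), pd.Series([1.0, 1.100001]))
assert_almost_equal(pd.Index([1.0, 1.1]), pd.Index([1.0, 1.100001]))
assert_almost_equal(pd.DataFrame({'a': [1.0, 1.1]}),
                    pd.DataFrame({'a': [1.0, 1.100001]}))
```

Note this is the same behavior exercised by the PR's `test_assert_almost_equal_pandas` additions: pandas objects are compared approximately, while also getting the dtype/index/name checks of the dedicated comparators.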
https://api.github.com/repos/pandas-dev/pandas/pulls/12247
2016-02-07T00:59:28Z
2016-02-12T15:04:32Z
null
2016-02-27T05:39:12Z
DEPR: removal of deprecation warnings for float indexers
diff --git a/doc/source/release.rst b/doc/source/release.rst index 155def21fa3a9..dee5c464a1f9c 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -52,6 +52,7 @@ Highlights include: - ``pd.test()`` top-level nose test runner is available (:issue:`4327`) - Adding support for a ``RangeIndex`` as a specialized form of the ``Int64Index`` for memory savings, see :ref:`here <whatsnew_0180.enhancements.rangeindex>`. - API breaking ``.resample`` changes to make it more ``.groupby`` like, see :ref:`here <whatsnew_0180.breaking.resample>`. +- Removal of support for deprecated float indexers; these will now raise a ``TypeError``, see :ref:`here <whatsnew_0180.enhancements.float_indexers>`. See the :ref:`v0.18.0 Whatsnew <whatsnew_0180>` overview for an extensive list of all enhancements and bugs that have been fixed in 0.17.1. diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 2dcb3fbbd5a0d..ec8c5e2c0ba3e 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -21,6 +21,7 @@ Highlights include: - ``pd.test()`` top-level nose test runner is available (:issue:`4327`) - Adding support for a ``RangeIndex`` as a specialized form of the ``Int64Index`` for memory savings, see :ref:`here <whatsnew_0180.enhancements.rangeindex>`. - API breaking ``.resample`` changes to make it more ``.groupby`` like, see :ref:`here <whatsnew_0180.breaking.resample>`. +- Removal of support for deprecated float indexers; these will now raise a ``TypeError``, see :ref:`here <whatsnew_0180.enhancements.float_indexers>`. Check the :ref:`API Changes <whatsnew_0180.api_breaking>` and :ref:`deprecations <whatsnew_0180.deprecations>` before updating. @@ -865,9 +866,45 @@ Deprecations is better handled by matplotlib's `style sheets`_ (:issue:`11783`). +.. _style sheets: http://matplotlib.org/users/style_sheets.html +.. _whatsnew_0180.float_indexers: -.. 
_style sheets: http://matplotlib.org/users/style_sheets.html +Removal of deprecated float indexers +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +In :issue:`4892` indexing with floating point numbers on a non-``Float64Index`` was deprecated (in version 0.14.0). +In 0.18.0, this deprecation warning is removed and these will now raise a ``TypeError``. (:issue:`12165`) + +Previous Behavior: + +.. code-block: + + In [1]: s = Series([1,2,3]) + In [2]: s[1.0] + FutureWarning: scalar indexers for index type Int64Index should be integers and not floating point + Out[2]: 2 + + In [3]: s.iloc[1.0] + FutureWarning: scalar indexers for index type Int64Index should be integers and not floating point + Out[3]: 2 + + In [4]: s.loc[1.0] + FutureWarning: scalar indexers for index type Int64Index should be integers and not floating point + Out[4]: 2 + +New Behavior: + +.. code-block: + + In [4]: s[1.0] + TypeError: cannot do label indexing on <class 'pandas.indexes.range.RangeIndex'> with these indexers [1.0] of <type 'float'> + + In [4]: s.iloc[1.0] + TypeError: cannot do label indexing on <class 'pandas.indexes.range.RangeIndex'> with these indexers [1.0] of <type 'float'> + + In [4]: s.loc[1.0] + TypeError: cannot do label indexing on <class 'pandas.indexes.range.RangeIndex'> with these indexers [1.0] of <type 'float'> .. 
_whatsnew_0180.prior_deprecations: diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index b6f221322df52..22185c5676593 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -115,7 +115,11 @@ def _get_setitem_indexer(self, key): try: return self._convert_to_indexer(key, is_setter=True) - except TypeError: + except TypeError as e: + + # invalid indexer type vs 'other' indexing errors + if 'cannot do' in str(e): + raise raise IndexingError(key) def __setitem__(self, key, value): @@ -312,6 +316,18 @@ def _setitem_with_indexer(self, indexer, value): index = self.obj.index new_index = index.insert(len(index), indexer) + # we have a coerced indexer, e.g. a float + # that matches in an Int64Index, so + # we will not create a duplicate index, rather + # index to that element + # e.g. 0.0 -> 0 + # GH12246 + if index.is_unique: + new_indexer = index.get_indexer([new_index[-1]]) + if (new_indexer != -1).any(): + return self._setitem_with_indexer(new_indexer, + value) + # this preserves dtype of the value new_values = Series([value])._values if len(self.obj._values): @@ -1091,8 +1107,17 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False): """ labels = self.obj._get_axis(axis) - # if we are a scalar indexer and not type correct raise - obj = self._convert_scalar_indexer(obj, axis) + if isinstance(obj, slice): + return self._convert_slice_indexer(obj, axis) + + # try to find out correct indexer, if not type correct raise + try: + obj = self._convert_scalar_indexer(obj, axis) + except TypeError: + + # but we will allow setting + if is_setter: + pass # see if we are positional in nature is_int_index = labels.is_integer() @@ -1131,10 +1156,7 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False): return obj - if isinstance(obj, slice): - return self._convert_slice_indexer(obj, axis) - - elif is_nested_tuple(obj, labels): + if is_nested_tuple(obj, labels): return labels.get_locs(obj) elif is_list_like_indexer(obj): if 
is_bool_indexer(obj): @@ -1278,7 +1300,7 @@ def _get_slice_axis(self, slice_obj, axis=0): labels = obj._get_axis(axis) indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop, - slice_obj.step) + slice_obj.step, kind=self.name) if isinstance(indexer, slice): return self._slice(indexer, axis=axis, kind='iloc') diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py index 9064b77ef7e3d..172f81e5a6423 100644 --- a/pandas/indexes/base.py +++ b/pandas/indexes/base.py @@ -21,12 +21,12 @@ from pandas.core.missing import _clean_reindex_fill_method from pandas.core.common import (isnull, array_equivalent, is_object_dtype, is_datetimetz, ABCSeries, - ABCPeriodIndex, + ABCPeriodIndex, ABCMultiIndex, _values_from_object, is_float, is_integer, is_iterator, is_categorical_dtype, _ensure_object, _ensure_int64, is_bool_indexer, is_list_like, is_bool_dtype, - is_integer_dtype) + is_integer_dtype, is_float_dtype) from pandas.core.strings import StringAccessorMixin from pandas.core.config import get_option @@ -162,7 +162,46 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, if dtype is not None: try: - data = np.array(data, dtype=dtype, copy=copy) + + # we need to avoid having numpy coerce + # things that look like ints/floats to ints unless + # they are actually ints, e.g. 
'0' and 0.0 + # should not be coerced + # GH 11836 + if is_integer_dtype(dtype): + inferred = lib.infer_dtype(data) + if inferred == 'integer': + data = np.array(data, copy=copy, dtype=dtype) + elif inferred in ['floating', 'mixed-integer-float']: + + # if we are actually all equal to integers + # then coerce to integer + from .numeric import Int64Index, Float64Index + try: + res = data.astype('i8') + if (res == data).all(): + return Int64Index(res, copy=copy, + name=name) + except (TypeError, ValueError): + pass + + # return an actual float index + return Float64Index(data, copy=copy, dtype=dtype, + name=name) + + elif inferred == 'string': + pass + else: + data = data.astype(dtype) + elif is_float_dtype(dtype): + inferred = lib.infer_dtype(data) + if inferred == 'string': + pass + else: + data = data.astype(dtype) + else: + data = np.array(data, dtype=dtype, copy=copy) + except (TypeError, ValueError): pass @@ -930,35 +969,32 @@ def _convert_scalar_indexer(self, key, kind=None): kind : optional, type of the indexing operation (loc/ix/iloc/None) right now we are converting - floats -> ints if the index supports it """ - def to_int(): - ikey = int(key) - if ikey != key: - return self._invalid_indexer('label', key) - return ikey - if kind == 'iloc': if is_integer(key): return key - elif is_float(key): - key = to_int() - warnings.warn("scalar indexers for index type {0} should be " - "integers and not floating point".format( - type(self).__name__), - FutureWarning, stacklevel=5) - return key return self._invalid_indexer('label', key) + else: - if is_float(key): - if isnull(key): - return self._invalid_indexer('label', key) - warnings.warn("scalar indexers for index type {0} should be " - "integers and not floating point".format( - type(self).__name__), - FutureWarning, stacklevel=3) - return to_int() + if len(self): + + # we can safely disallow + # if we are not a MultiIndex + # or a Float64Index + # or have mixed inferred type (IOW we have the possiblity + # of a 
float in with say strings) + if is_float(key): + if not (isinstance(self, ABCMultiIndex,) or + self.is_floating() or self.is_mixed()): + return self._invalid_indexer('label', key) + + # we can disallow integers with loc + # if could not contain and integer + elif is_integer(key) and kind == 'loc': + if not (isinstance(self, ABCMultiIndex,) or + self.holds_integer() or self.is_mixed()): + return self._invalid_indexer('label', key) return key @@ -991,14 +1027,6 @@ def f(c): v = getattr(key, c) if v is None or is_integer(v): return v - - # warn if it's a convertible float - if v == int(v): - warnings.warn("slice indexers when using iloc should be " - "integers and not floating point", - FutureWarning, stacklevel=7) - return int(v) - self._invalid_indexer('slice {0} value'.format(c), v) return slice(*[f(c) for c in ['start', 'stop', 'step']]) @@ -1057,7 +1085,7 @@ def is_int(v): indexer = key else: try: - indexer = self.slice_indexer(start, stop, step) + indexer = self.slice_indexer(start, stop, step, kind=kind) except Exception: if is_index_slice: if self.is_integer(): @@ -1891,10 +1919,7 @@ def get_value(self, series, key): s = _values_from_object(series) k = _values_from_object(key) - # prevent integer truncation bug in indexing - if is_float(k) and not self.is_floating(): - raise KeyError - + k = self._convert_scalar_indexer(k, kind='getitem') try: return self._engine.get_value(s, k, tz=getattr(series.dtype, 'tz', None)) @@ -2236,6 +2261,7 @@ def reindex(self, target, method=None, level=None, limit=None, if self.equals(target): indexer = None else: + if self.is_unique: indexer = self.get_indexer(target, method=method, limit=limit, @@ -2722,7 +2748,9 @@ def _maybe_cast_slice_bound(self, label, side, kind): # datetimelike Indexes # reject them if is_float(label): - self._invalid_indexer('slice', label) + if not (kind in ['ix'] and (self.holds_integer() or + self.is_floating())): + self._invalid_indexer('slice', label) # we are trying to find integer bounds on a 
non-integer based index # this is rejected (generally .loc gets you here) diff --git a/pandas/indexes/numeric.py b/pandas/indexes/numeric.py index 61d93284adbbb..17119f7e8ae0b 100644 --- a/pandas/indexes/numeric.py +++ b/pandas/indexes/numeric.py @@ -42,9 +42,14 @@ def _maybe_cast_slice_bound(self, label, side, kind): """ # we are a numeric index, so we accept - # integer/floats directly - if not (com.is_integer(label) or com.is_float(label)): - self._invalid_indexer('slice', label) + # integer directly + if com.is_integer(label): + pass + + # disallow floats only if we not-strict + elif com.is_float(label): + if not (self.is_floating() or kind in ['ix']): + self._invalid_indexer('slice', label) return label @@ -200,6 +205,18 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, if dtype is None: dtype = np.float64 + dtype = np.dtype(dtype) + + # allow integer / object dtypes to be passed, but coerce to float64 + if dtype.kind in ['i', 'O']: + dtype = np.float64 + + elif dtype.kind in ['f']: + pass + + else: + raise TypeError("cannot support {0} dtype in " + "Float64Index".format(dtype)) try: subarr = np.array(data, dtype=dtype, copy=copy) diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py index 55631ad831441..d5d0bd32a9356 100644 --- a/pandas/tests/frame/test_indexing.py +++ b/pandas/tests/frame/test_indexing.py @@ -212,11 +212,9 @@ def test_getitem_boolean(self): assert_frame_equal(subframe_obj, subframe) # test that Series indexers reindex - with tm.assert_produces_warning(UserWarning): - indexer_obj = indexer_obj.reindex(self.tsframe.index[::-1]) - - subframe_obj = self.tsframe[indexer_obj] - assert_frame_equal(subframe_obj, subframe) + indexer_obj = indexer_obj.reindex(self.tsframe.index[::-1]) + subframe_obj = self.tsframe[indexer_obj] + assert_frame_equal(subframe_obj, subframe) # test df[df > 0] for df in [self.tsframe, self.mixed_frame, @@ -1309,38 +1307,26 @@ def test_getitem_setitem_float_labels(self): df = 
DataFrame(np.random.randn(5, 5), index=index) # positional slicing only via iloc! - # stacklevel=False -> needed stacklevel depends on index type - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - result = df.iloc[1.0:5] - - expected = df.reindex([2.5, 3.5, 4.5, 5.0]) - assert_frame_equal(result, expected) - self.assertEqual(len(result), 4) + self.assertRaises(TypeError, lambda: df.iloc[1.0:5]) result = df.iloc[4:5] expected = df.reindex([5.0]) assert_frame_equal(result, expected) self.assertEqual(len(result), 1) - # GH 4892, float indexers in iloc are deprecated - import warnings - warnings.filterwarnings(action='error', category=FutureWarning) - cp = df.copy() def f(): cp.iloc[1.0:5] = 0 - self.assertRaises(FutureWarning, f) + self.assertRaises(TypeError, f) def f(): result = cp.iloc[1.0:5] == 0 # noqa - self.assertRaises(FutureWarning, f) + self.assertRaises(TypeError, f) self.assertTrue(result.values.all()) self.assertTrue((cp.iloc[0:1] == df.iloc[0:1]).values.all()) - warnings.filterwarnings(action='default', category=FutureWarning) - cp = df.copy() cp.iloc[4:5] = 0 self.assertTrue((cp.iloc[4:5] == 0).values.all()) diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py index 735025cfca42e..465879dd62466 100644 --- a/pandas/tests/indexes/test_base.py +++ b/pandas/tests/indexes/test_base.py @@ -180,53 +180,46 @@ def test_constructor_simple_new(self): def test_constructor_dtypes(self): - for idx in [Index(np.array([1, 2, 3], dtype=int)), Index( - np.array( - [1, 2, 3], dtype=int), dtype=int), Index( - np.array( - [1., 2., 3.], dtype=float), dtype=int), Index( - [1, 2, 3], dtype=int), Index( - [1., 2., 3.], dtype=int)]: + for idx in [Index(np.array([1, 2, 3], dtype=int)), + Index(np.array([1, 2, 3], dtype=int), dtype=int), + Index([1, 2, 3], dtype=int)]: self.assertIsInstance(idx, Int64Index) - for idx in [Index(np.array([1., 2., 3.], dtype=float)), Index( - np.array( - [1, 2, 3], dtype=int), dtype=float), 
Index( - np.array( - [1., 2., 3.], dtype=float), dtype=float), Index( - [1, 2, 3], dtype=float), Index( - [1., 2., 3.], dtype=float)]: + # these should coerce + for idx in [Index(np.array([1., 2., 3.], dtype=float), dtype=int), + Index([1., 2., 3.], dtype=int)]: + self.assertIsInstance(idx, Int64Index) + + for idx in [Index(np.array([1., 2., 3.], dtype=float)), + Index(np.array([1, 2, 3], dtype=int), dtype=float), + Index(np.array([1., 2., 3.], dtype=float), dtype=float), + Index([1, 2, 3], dtype=float), + Index([1., 2., 3.], dtype=float)]: self.assertIsInstance(idx, Float64Index) - for idx in [Index(np.array( - [True, False, True], dtype=bool)), Index([True, False, True]), - Index( - np.array( - [True, False, True], dtype=bool), dtype=bool), - Index( - [True, False, True], dtype=bool)]: + for idx in [Index(np.array([True, False, True], dtype=bool)), + Index([True, False, True]), + Index(np.array([True, False, True], dtype=bool), dtype=bool), + Index([True, False, True], dtype=bool)]: self.assertIsInstance(idx, Index) self.assertEqual(idx.dtype, object) - for idx in [Index( - np.array([1, 2, 3], dtype=int), dtype='category'), Index( - [1, 2, 3], dtype='category'), Index( - np.array([np.datetime64('2011-01-01'), np.datetime64( - '2011-01-02')]), dtype='category'), Index( - [datetime(2011, 1, 1), datetime(2011, 1, 2) - ], dtype='category')]: + for idx in [Index(np.array([1, 2, 3], dtype=int), dtype='category'), + Index([1, 2, 3], dtype='category'), + Index(np.array([np.datetime64('2011-01-01'), + np.datetime64('2011-01-02')]), dtype='category'), + Index([datetime(2011, 1, 1), datetime(2011, 1, 2)], dtype='category')]: self.assertIsInstance(idx, CategoricalIndex) - for idx in [Index(np.array([np.datetime64('2011-01-01'), np.datetime64( - '2011-01-02')])), + for idx in [Index(np.array([np.datetime64('2011-01-01'), + np.datetime64('2011-01-02')])), Index([datetime(2011, 1, 1), datetime(2011, 1, 2)])]: self.assertIsInstance(idx, DatetimeIndex) - for idx in [Index( - 
np.array([np.datetime64('2011-01-01'), np.datetime64( - '2011-01-02')]), dtype=object), Index( - [datetime(2011, 1, 1), datetime(2011, 1, 2) - ], dtype=object)]: + for idx in [Index(np.array([np.datetime64('2011-01-01'), + np.datetime64('2011-01-02')]), dtype=object), + Index([datetime(2011, 1, 1), + datetime(2011, 1, 2)], dtype=object)]: self.assertNotIsInstance(idx, DatetimeIndex) self.assertIsInstance(idx, Index) self.assertEqual(idx.dtype, object) @@ -235,10 +228,9 @@ def test_constructor_dtypes(self): 1, 'D')])), Index([timedelta(1), timedelta(1)])]: self.assertIsInstance(idx, TimedeltaIndex) - for idx in [Index( - np.array([np.timedelta64(1, 'D'), np.timedelta64(1, 'D')]), - dtype=object), Index( - [timedelta(1), timedelta(1)], dtype=object)]: + for idx in [Index(np.array([np.timedelta64(1, 'D'), + np.timedelta64(1, 'D')]), dtype=object), + Index([timedelta(1), timedelta(1)], dtype=object)]: self.assertNotIsInstance(idx, TimedeltaIndex) self.assertIsInstance(idx, Index) self.assertEqual(idx.dtype, object) @@ -942,12 +934,17 @@ def test_slice_locs(self): self.assertEqual(idx2.slice_locs(10.5, -1), (0, n)) # int slicing with floats + # GH 4892, these are all TypeErrors idx = Index(np.array([0, 1, 2, 5, 6, 7, 9, 10], dtype=int)) - self.assertEqual(idx.slice_locs(5.0, 10.0), (3, n)) - self.assertEqual(idx.slice_locs(4.5, 10.5), (3, 8)) + self.assertRaises(TypeError, + lambda: idx.slice_locs(5.0, 10.0), (3, n)) + self.assertRaises(TypeError, + lambda: idx.slice_locs(4.5, 10.5), (3, 8)) idx2 = idx[::-1] - self.assertEqual(idx2.slice_locs(8.5, 1.5), (2, 6)) - self.assertEqual(idx2.slice_locs(10.5, -1), (0, n)) + self.assertRaises(TypeError, + lambda: idx2.slice_locs(8.5, 1.5), (2, 6)) + self.assertRaises(TypeError, + lambda: idx2.slice_locs(10.5, -1), (0, n)) def test_slice_locs_dup(self): idx = Index(['a', 'a', 'b', 'c', 'd', 'd']) diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py index d14f7bbc680df..325e0df14a07e 100644 --- 
a/pandas/tests/indexes/test_numeric.py +++ b/pandas/tests/indexes/test_numeric.py @@ -379,10 +379,19 @@ def test_constructor(self): new_index = Int64Index(arr, copy=True) tm.assert_numpy_array_equal(new_index, self.index) val = arr[0] + 3000 + # this should not change index arr[0] = val self.assertNotEqual(new_index[0], val) + # interpret list-like + expected = Int64Index([5, 0]) + for cls in [Index, Int64Index]: + for idx in [cls([5, 0], dtype='int64'), + cls(np.array([5, 0]), dtype='int64'), + cls(Series([5, 0]), dtype='int64')]: + tm.assert_index_equal(idx, expected) + def test_constructor_corner(self): arr = np.array([1, 2, 3, 4], dtype=object) index = Int64Index(arr) diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py index bf8f1aa3f6427..72fff3f82111e 100644 --- a/pandas/tests/test_indexing.py +++ b/pandas/tests/test_indexing.py @@ -1130,7 +1130,9 @@ def test_loc_getitem_label_out_of_range(self): self.check_result('label range', 'loc', 'f', 'ix', 'f', typs=['floats'], fails=TypeError) self.check_result('label range', 'loc', 20, 'ix', 20, - typs=['ints', 'labels', 'mixed'], fails=KeyError) + typs=['ints', 'mixed'], fails=KeyError) + self.check_result('label range', 'loc', 20, 'ix', 20, + typs=['labels'], fails=TypeError) self.check_result('label range', 'loc', 20, 'ix', 20, typs=['ts'], axes=0, fails=TypeError) self.check_result('label range', 'loc', 20, 'ix', 20, typs=['floats'], @@ -4200,7 +4202,7 @@ def f(): def f(): df.ix[100.0, :] = df.ix[0] - self.assertRaises(ValueError, f) + self.assertRaises(TypeError, f) def f(): df.ix[100, :] = df.ix[0] @@ -5120,27 +5122,24 @@ def check_index(index, error): # positional selection result1 = s[5] - result2 = s[5.0] + self.assertRaises(TypeError, lambda: s[5.0]) result3 = s.iloc[5] - result4 = s.iloc[5.0] + self.assertRaises(TypeError, lambda: s.iloc[5.0]) # by value - self.assertRaises(error, lambda: s.loc[5]) - self.assertRaises(error, lambda: s.loc[5.0]) + self.assertRaises(TypeError, lambda: 
s.loc[5]) + self.assertRaises(TypeError, lambda: s.loc[5.0]) # this is fallback, so it works result5 = s.ix[5] - result6 = s.ix[5.0] + self.assertRaises(error, lambda: s.ix[5.0]) - self.assertEqual(result1, result2) self.assertEqual(result1, result3) - self.assertEqual(result1, result4) self.assertEqual(result1, result5) - self.assertEqual(result1, result6) # string-like for index in [tm.makeStringIndex, tm.makeUnicodeIndex]: - check_index(index, KeyError) + check_index(index, TypeError) # datetimelike for index in [tm.makeDateIndex, tm.makeTimedeltaIndex, @@ -5150,32 +5149,21 @@ def check_index(index, error): # exact indexing when found on IntIndex s = Series(np.arange(10), dtype='int64') - result1 = s[5.0] - result2 = s.loc[5.0] - result3 = s.ix[5.0] + self.assertRaises(TypeError, lambda: s[5.0]) + self.assertRaises(TypeError, lambda: s.loc[5.0]) + self.assertRaises(TypeError, lambda: s.ix[5.0]) result4 = s[5] result5 = s.loc[5] result6 = s.ix[5] - self.assertEqual(result1, result2) - self.assertEqual(result1, result3) - self.assertEqual(result1, result4) - self.assertEqual(result1, result5) - self.assertEqual(result1, result6) + self.assertEqual(result4, result5) + self.assertEqual(result4, result6) def test_slice_indexer(self): def check_iloc_compat(s): - # invalid type for iloc (but works with a warning) - # check_stacklevel=False -> impossible to get it right for all - # index types - with self.assert_produces_warning(FutureWarning, - check_stacklevel=False): - s.iloc[6.0:8] - with self.assert_produces_warning(FutureWarning, - check_stacklevel=False): - s.iloc[6.0:8.0] - with self.assert_produces_warning(FutureWarning, - check_stacklevel=False): - s.iloc[6:8.0] + # these are exceptions + self.assertRaises(TypeError, lambda: s.iloc[6.0:8]) + self.assertRaises(TypeError, lambda: s.iloc[6.0:8.0]) + self.assertRaises(TypeError, lambda: s.iloc[6:8.0]) def check_slicing_positional(index): @@ -5229,22 +5217,30 @@ def check_slicing_positional(index): result3 = 
s.loc[2:5] assert_series_equal(result2, result3) - # float slicers on an int index + # float slicers on an int index with ix expected = Series([11, 12, 13], index=[6, 7, 8]) - for method in [lambda x: x.loc, lambda x: x.ix]: - result = method(s)[6.0:8.5] - assert_series_equal(result, expected) + result = s.ix[6.0:8.5] + assert_series_equal(result, expected) - result = method(s)[5.5:8.5] - assert_series_equal(result, expected) + result = s.ix[5.5:8.5] + assert_series_equal(result, expected) + + result = s.ix[5.5:8.0] + assert_series_equal(result, expected) - result = method(s)[5.5:8.0] - assert_series_equal(result, expected) + for method in ['loc', 'iloc']: + # make all float slicing fail for .loc with an int index + self.assertRaises(TypeError, + lambda: getattr(s, method)[6.0:8]) + self.assertRaises(TypeError, + lambda: getattr(s, method)[6.0:8.0]) + self.assertRaises(TypeError, + lambda: getattr(s, method)[6:8.0]) - # make all float slicing fail for [] with an int index - self.assertRaises(TypeError, lambda: s[6.0:8]) - self.assertRaises(TypeError, lambda: s[6.0:8.0]) - self.assertRaises(TypeError, lambda: s[6:8.0]) + # make all float slicing fail for [] with an int index + self.assertRaises(TypeError, lambda: s[6.0:8]) + self.assertRaises(TypeError, lambda: s[6.0:8.0]) + self.assertRaises(TypeError, lambda: s[6:8.0]) check_iloc_compat(s) @@ -5329,121 +5325,343 @@ def test_ix_empty_list_indexer_is_ok(self): assert_frame_equal(df.ix[[]], df.iloc[:0, :], check_index_type=True, check_column_type=True) - def test_deprecate_float_indexers(self): + def test_index_type_coercion(self): + + # GH 11836 + # if we have an index type and set it with something that looks + # to numpy like the same, but is actually, not + # (e.g. 
setting with a float or string '0') + # then we need to coerce to object + + # integer indexes + for s in [Series(range(5)), + Series(range(5), index=range(1, 6))]: + + self.assertTrue(s.index.is_integer()) + + for attr in ['ix', 'loc']: + s2 = s.copy() + getattr(s2, attr)[0.1] = 0 + self.assertTrue(s2.index.is_floating()) + self.assertTrue(getattr(s2, attr)[0.1] == 0) + + s2 = s.copy() + getattr(s2, attr)[0.0] = 0 + exp = s.index + if 0 not in s: + exp = Index(s.index.tolist() + [0]) + tm.assert_index_equal(s2.index, exp) + + s2 = s.copy() + getattr(s2, attr)['0'] = 0 + self.assertTrue(s2.index.is_object()) + + # setitem + s2 = s.copy() + s2[0.1] = 0 + self.assertTrue(s2.index.is_floating()) + self.assertTrue(s2[0.1] == 0) + + s2 = s.copy() + s2[0.0] = 0 + exp = s.index + if 0 not in s: + exp = Index(s.index.tolist() + [0]) + tm.assert_index_equal(s2.index, exp) + + s2 = s.copy() + s2['0'] = 0 + self.assertTrue(s2.index.is_object()) + + for s in [Series(range(5), index=np.arange(5.))]: + + self.assertTrue(s.index.is_floating()) + + for attr in ['ix', 'loc']: + + s2 = s.copy() + getattr(s2, attr)[0.1] = 0 + self.assertTrue(s2.index.is_floating()) + self.assertTrue(getattr(s2, attr)[0.1] == 0) + + s2 = s.copy() + getattr(s2, attr)[0.0] = 0 + tm.assert_index_equal(s2.index, s.index) + + s2 = s.copy() + getattr(s2, attr)['0'] = 0 + self.assertTrue(s2.index.is_object()) + + # setitem + s2 = s.copy() + s2[0.1] = 0 + self.assertTrue(s2.index.is_floating()) + self.assertTrue(s2[0.1] == 0) + + s2 = s.copy() + s2[0.0] = 0 + tm.assert_index_equal(s2.index, s.index) + + s2 = s.copy() + s2['0'] = 0 + self.assertTrue(s2.index.is_object()) + + def test_invalid_scalar_float_indexers(self): # GH 4892 - # deprecate allowing float indexers that are equal to ints to be used - # as indexers in non-float indices + # float_indexers should raise exceptions + # on appropriate Index types & accessors - import warnings - warnings.filterwarnings(action='error', category=FutureWarning) + for 
index in [tm.makeStringIndex, tm.makeUnicodeIndex, + tm.makeCategoricalIndex, + tm.makeDateIndex, tm.makeTimedeltaIndex, + tm.makePeriodIndex]: - def check_index(index): i = index(5) for s in [Series( np.arange(len(i)), index=i), DataFrame( np.random.randn( len(i), len(i)), index=i, columns=i)]: - self.assertRaises(FutureWarning, lambda: s.iloc[3.0]) - # setting + for attr in ['iloc', 'loc', 'ix', '__getitem__']: + def f(): + getattr(s, attr)()[3.0] + self.assertRaises(TypeError, f) + + # setting only fails with iloc as + # the others expand the index def f(): s.iloc[3.0] = 0 - - self.assertRaises(FutureWarning, f) + self.assertRaises(TypeError, f) # fallsback to position selection ,series only s = Series(np.arange(len(i)), index=i) s[3] - self.assertRaises(FutureWarning, lambda: s[3.0]) + self.assertRaises(TypeError, lambda: s[3.0]) - for index in [tm.makeStringIndex, tm.makeUnicodeIndex, - tm.makeDateIndex, tm.makeTimedeltaIndex, - tm.makePeriodIndex]: - check_index(index) + # integer index + for index in [tm.makeIntIndex, tm.makeRangeIndex]: - # ints - i = index(5) - for s in [Series(np.arange(len(i))), DataFrame( - np.random.randn( - len(i), len(i)), index=i, columns=i)]: - self.assertRaises(FutureWarning, lambda: s.iloc[3.0]) + i = index(5) + for s in [Series(np.arange(len(i))), + DataFrame(np.random.randn(len(i), len(i)), + index=i, columns=i)]: + + # any kind of get access should fail + for attr in ['iloc', 'loc', 'ix']: + def f(): + getattr(s, attr)[3.0] + self.assertRaises(TypeError, f) + error = KeyError if isinstance(s, DataFrame) else TypeError + self.assertRaises(error, lambda: s[3.0]) + + # setting only fails with iloc as + def f(): + s.iloc[3.0] = 0 + self.assertRaises(TypeError, f) - # on some arch's this doesn't provide a warning (and thus raise) - # and some it does - try: - s[3.0] - except: - pass + # other indexers will coerce to an object index + # tested explicity in: test_invalid_scalar_float_indexers + # above - # setting - def f(): - 
s.iloc[3.0] = 0 + # floats index + index = tm.makeFloatIndex(5) + for s in [Series(np.arange(len(index)), index=index), + DataFrame(np.random.randn(len(index), len(index)), + index=index, columns=index)]: + + # assert all operations except for iloc are ok + indexer = index[3] + expected = s.iloc[3] - self.assertRaises(FutureWarning, f) + if isinstance(s, Series): + compare = self.assertEqual + else: + compare = tm.assert_series_equal - # floats: these are all ok! - i = np.arange(5.) + for attr in ['loc', 'ix']: - for s in [Series( - np.arange(len(i)), index=i), DataFrame( - np.random.randn( - len(i), len(i)), index=i, columns=i)]: - with tm.assert_produces_warning(False): - s[3.0] + # getting + result = getattr(s, attr)[indexer] + compare(result, expected) - with tm.assert_produces_warning(False): - s[3] + # setting + s2 = s.copy() - self.assertRaises(FutureWarning, lambda: s.iloc[3.0]) + def f(): + getattr(s2, attr)[indexer] = expected + result = getattr(s2, attr)[indexer] + compare(result, expected) - with tm.assert_produces_warning(False): - s.iloc[3] + # random integer is a KeyError + self.assertRaises(KeyError, lambda: getattr(s, attr)[3]) - with tm.assert_produces_warning(False): - s.loc[3.0] + # iloc succeeds with an integer + result = s.iloc[3] + compare(result, expected) - with tm.assert_produces_warning(False): - s.loc[3] + s2 = s.copy() + + def f(): + s2.iloc[3] = expected + result = s2.iloc[3] + compare(result, expected) + + # iloc raises with a float + self.assertRaises(TypeError, lambda: s.iloc[3.0]) def f(): s.iloc[3.0] = 0 + self.assertRaises(TypeError, f) - self.assertRaises(FutureWarning, f) + # getitem - # slices - for index in [tm.makeIntIndex, tm.makeRangeIndex, tm.makeFloatIndex, - tm.makeStringIndex, tm.makeUnicodeIndex, - tm.makeDateIndex, tm.makePeriodIndex]: + # getting + if isinstance(s, DataFrame): + expected = s.iloc[:, 3] + result = s[indexer] + compare(result, expected) + + # setting + s2 = s.copy() + + def f(): + s2[indexer] = 
expected + result = s2[indexer] + compare(result, expected) + + # random integer is a KeyError + result = self.assertRaises(KeyError, lambda: s[3]) + + def test_invalid_slice_float_indexers(self): + + # GH 4892 + # float_indexers should raise exceptions + # on appropriate Index types & accessors + + for index in [tm.makeStringIndex, tm.makeUnicodeIndex, + tm.makeDateIndex, tm.makeTimedeltaIndex, + tm.makePeriodIndex]: index = index(5) - for s in [Series( - range(5), index=index), DataFrame( - np.random.randn(5, 2), index=index)]: + for s in [Series(range(5), index=index), + DataFrame(np.random.randn(5, 2), index=index)]: + + # getitem + for l in [slice(3.0, 4), + slice(3, 4.0), + slice(3.0, 4.0)]: + + def f(): + s.iloc[l] + self.assertRaises(TypeError, f) + + def f(): + s.loc[l] + self.assertRaises(TypeError, f) + + def f(): + s[l] + self.assertRaises(TypeError, f) + + def f(): + s.ix[l] + self.assertRaises(TypeError, f) + + # setitem + for l in [slice(3.0, 4), + slice(3, 4.0), + slice(3.0, 4.0)]: + + def f(): + s.iloc[l] = 0 + self.assertRaises(TypeError, f) + + def f(): + s.loc[l] = 0 + self.assertRaises(TypeError, f) + + def f(): + s[l] = 0 + self.assertRaises(TypeError, f) + + def f(): + s.ix[l] = 0 + self.assertRaises(TypeError, f) + + # same as above, but for Integer based indexes + for index in [tm.makeIntIndex, tm.makeRangeIndex]: + + index = index(5) + for s in [Series(range(5), index=index), + DataFrame(np.random.randn(5, 2), index=index)]: # getitem - self.assertRaises(FutureWarning, lambda: s.iloc[3.0:4]) - self.assertRaises(FutureWarning, lambda: s.iloc[3.0:4.0]) - self.assertRaises(FutureWarning, lambda: s.iloc[3:4.0]) + for l in [slice(3.0, 4), + slice(3, 4.0), + slice(3.0, 4.0)]: + + def f(): + s.iloc[l] + self.assertRaises(TypeError, f) + + def f(): + s.loc[l] + self.assertRaises(TypeError, f) + + def f(): + s[l] + self.assertRaises(TypeError, f) + + # ix allows float slicing + s.ix[l] # setitem - def f(): - s.iloc[3.0:4] = 0 + for l in [slice(3.0, 
4), + slice(3, 4.0), + slice(3.0, 4.0)]: - self.assertRaises(FutureWarning, f) + def f(): + s.iloc[l] = 0 + self.assertRaises(TypeError, f) - def f(): - s.iloc[3:4.0] = 0 + def f(): + s.loc[l] = 0 + self.assertRaises(TypeError, f) - self.assertRaises(FutureWarning, f) + def f(): + s[l] = 0 + self.assertRaises(TypeError, f) - def f(): - s.iloc[3.0:4.0] = 0 + # ix allows float slicing + s.ix[l] = 0 - self.assertRaises(FutureWarning, f) + # same as above, but for floats + index = tm.makeFloatIndex(5) + for s in [Series(range(5), index=index), + DataFrame(np.random.randn(5, 2), index=index)]: - warnings.filterwarnings(action='ignore', category=FutureWarning) + # getitem + for l in [slice(3.0, 4), + slice(3, 4.0), + slice(3.0, 4.0)]: + + # ix is ok + result1 = s.ix[3:4] + result2 = s.ix[3.0:4] + result3 = s.ix[3.0:4.0] + result4 = s.ix[3:4.0] + self.assertTrue(result1.equals(result2)) + self.assertTrue(result1.equals(result3)) + self.assertTrue(result1.equals(result4)) + + # setitem + for l in [slice(3.0, 4), + slice(3, 4.0), + slice(3.0, 4.0)]: + + pass def test_float_index_to_mixed(self): df = DataFrame({0.0: np.random.rand(10), 1.0: np.random.rand(10)}) diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py index 05ca65d6946fb..a25bb525c9970 100644 --- a/pandas/tseries/period.py +++ b/pandas/tseries/period.py @@ -678,7 +678,12 @@ def get_loc(self, key, method=None, tolerance=None): except TypeError: pass - key = Period(key, freq=self.freq) + try: + key = Period(key, freq=self.freq) + except ValueError: + # we cannot construct the Period + # as we have an invalid type + return self._invalid_indexer('label', key) try: return Index.get_loc(self, key.ordinal, method, tolerance) except KeyError:
raise a `TypeError` instead, xref #4892; closes #11836. Similar to numpy in 1.11 ([here](https://github.com/numpy/numpy/pull/6271))
https://api.github.com/repos/pandas-dev/pandas/pulls/12246
2016-02-06T20:09:15Z
2016-02-13T13:34:34Z
null
2016-02-15T20:51:19Z
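The PR above changes `NumericIndex._maybe_cast_slice_bound` so that float slice bounds are rejected on non-float indexes unless the permissive `.ix` accessor is used. A simplified, standalone sketch of that rule (a hypothetical helper, not the actual pandas method, which also handles label casting):

```python
def maybe_cast_slice_bound(label, is_floating_index, kind):
    """Validate a slice bound the way the PR's rule does:
    integers always pass; floats pass only on a float index
    or via the permissive 'ix' accessor; else raise TypeError."""
    if isinstance(label, int):
        # integer bounds are accepted directly on any numeric index
        return label
    if isinstance(label, float):
        # disallow floats unless the index itself is floating,
        # or we are in the not-strict 'ix' kind
        if is_floating_index or kind == 'ix':
            return label
        raise TypeError(
            "cannot do slice indexing with a float bound "
            "on a non-float index: %r" % label)
    # other label types are passed through for later handling
    return label
```

With this rule, `s.loc[6.0:8]` on an integer index fails fast with `TypeError` instead of the prior `FutureWarning`, while `s.ix[6.0:8]` still succeeds, which matches the test changes in the diff.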
TST: Skip scipy NaN test on 0.17 for now
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py index ca5246ba98f89..79e200225d26a 100644 --- a/pandas/tests/test_nanops.py +++ b/pandas/tests/test_nanops.py @@ -380,6 +380,7 @@ def test_nanstd(self): def test_nansem(self): tm.skip_if_no_package('scipy.stats') + tm._skip_if_scipy_0_17() from scipy.stats import sem self.check_funs_ddof(nanops.nansem, sem, allow_complex=False, allow_str=False, allow_date=False, @@ -439,6 +440,7 @@ def _skew_kurt_wrap(self, values, axis=None, func=None): def test_nanskew(self): tm.skip_if_no_package('scipy.stats') + tm._skip_if_scipy_0_17() from scipy.stats import skew func = partial(self._skew_kurt_wrap, func=skew) self.check_funs(nanops.nanskew, func, allow_complex=False, @@ -446,6 +448,7 @@ def test_nanskew(self): def test_nankurt(self): tm.skip_if_no_package('scipy.stats') + tm._skip_if_scipy_0_17() from scipy.stats import kurtosis func1 = partial(kurtosis, fisher=True) func = partial(self._skew_kurt_wrap, func=func1) diff --git a/pandas/util/testing.py b/pandas/util/testing.py index aa5d698301da7..06262edfe0f19 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -217,6 +217,12 @@ def _skip_if_no_scipy(): import nose raise nose.SkipTest('scipy.interpolate missing') +def _skip_if_scipy_0_17(): + import scipy + v = scipy.__version__ + if v >= LooseVersion("0.17.0"): + import nose + raise nose.SkipTest("scipy 0.17") def _skip_if_no_pytz(): try:
Working around https://github.com/pydata/pandas/issues/12240, which we'll leave open until I (or someone more knowledgeable) can properly fix these. This skips the following tests when on scipy 0.17:

- test_nansem
- test_nanskew
- test_nankurt
https://api.github.com/repos/pandas-dev/pandas/pulls/12243
2016-02-06T17:31:31Z
2016-02-06T19:31:59Z
null
2016-11-03T12:38:49Z
BUG: Timestamp subtraction of NaT with timezones
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index d52e0e3098b98..5ffc2fc40701d 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -870,6 +870,8 @@ Bug Fixes - Bug in ``Series.resample`` using a frequency of ``Nano`` when the index is a ``DatetimeIndex`` and contains non-zero nanosecond parts (:issue:`12037`) +- Bug in ``NaT`` subtraction from ``Timestamp`` or ``DatetimeIndex`` with timezones (:issue:`11718`) + - Bug in ``Timedelta.round`` with negative values (:issue:`11690`) - Bug in ``.loc`` against ``CategoricalIndex`` may result in normal ``Index`` (:issue:`11586`) - Bug in ``DataFrame.info`` when duplicated column names exist (:issue:`11761`) diff --git a/pandas/tseries/base.py b/pandas/tseries/base.py index 4b8192edc56ce..d82a229f48de8 100644 --- a/pandas/tseries/base.py +++ b/pandas/tseries/base.py @@ -199,6 +199,27 @@ def inferred_freq(self): except ValueError: return None + def _nat_new(self, box=True): + """ + Return Index or ndarray filled with NaT which has the same + length as the caller. + + Parameters + ---------- + box : boolean, default True + - If True returns a Index as the same as caller. + - If False returns ndarray of np.int64. 
+ """ + result = np.zeros(len(self), dtype=np.int64) + result.fill(tslib.iNaT) + if not box: + return result + + attribs = self._get_attributes_dict() + if not isinstance(self, com.ABCPeriodIndex): + attribs['freq'] = None + return self._simple_new(result, **attribs) + # Try to run function on index first, and then on elements of index # Especially important for group-by functionality def map(self, f): @@ -224,8 +245,8 @@ def sort_values(self, return_indexer=False, ascending=True): sorted_values = np.sort(self.values) attribs = self._get_attributes_dict() freq = attribs['freq'] - from pandas.tseries.period import PeriodIndex - if freq is not None and not isinstance(self, PeriodIndex): + + if freq is not None and not isinstance(self, com.ABCPeriodIndex): if freq.n > 0 and not ascending: freq = freq * -1 elif freq.n < 0 and ascending: diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py index 77aa05bc1189d..9faf6f174115c 100644 --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -740,20 +740,26 @@ def __setstate__(self, state): raise Exception("invalid pickle state") _unpickle_compat = __setstate__ + def _add_datelike(self, other): + # adding a timedeltaindex to a datetimelike + if other is tslib.NaT: + return self._nat_new(box=True) + raise TypeError("cannot add a datelike to a DatetimeIndex") + def _sub_datelike(self, other): # subtract a datetime from myself, yielding a TimedeltaIndex - from pandas import TimedeltaIndex other = Timestamp(other) - + if other is tslib.NaT: + result = self._nat_new(box=False) # require tz compat - if not self._has_same_tz(other): + elif not self._has_same_tz(other): raise TypeError("Timestamp subtraction must have the same " "timezones or no timezones") - - i8 = self.asi8 - result = i8 - other.value - result = self._maybe_mask_results(result, fill_value=tslib.iNaT) + else: + i8 = self.asi8 + result = i8 - other.value + result = self._maybe_mask_results(result, fill_value=tslib.iNaT) return TimedeltaIndex(result, 
name=self.name, copy=False) def _maybe_update_attributes(self, attrs): diff --git a/pandas/tseries/tdi.py b/pandas/tseries/tdi.py index 9129a156848a9..e74879602fa64 100644 --- a/pandas/tseries/tdi.py +++ b/pandas/tseries/tdi.py @@ -317,17 +317,24 @@ def _evaluate_with_timedelta_like(self, other, op, opstr): return NotImplemented def _add_datelike(self, other): - # adding a timedeltaindex to a datetimelike from pandas import Timestamp, DatetimeIndex - other = Timestamp(other) - i8 = self.asi8 - result = i8 + other.value - result = self._maybe_mask_results(result, fill_value=tslib.iNaT) + if other is tslib.NaT: + result = self._nat_new(box=False) + else: + other = Timestamp(other) + i8 = self.asi8 + result = i8 + other.value + result = self._maybe_mask_results(result, fill_value=tslib.iNaT) return DatetimeIndex(result, name=self.name, copy=False) def _sub_datelike(self, other): - raise TypeError("cannot subtract a datelike from a TimedeltaIndex") + from pandas import DatetimeIndex + if other is tslib.NaT: + result = self._nat_new(box=False) + else: + raise TypeError("cannot subtract a datelike from a TimedeltaIndex") + return DatetimeIndex(result, name=self.name, copy=False) def _format_native_types(self, na_rep=u('NaT'), date_format=None, **kwargs): diff --git a/pandas/tseries/tests/test_base.py b/pandas/tseries/tests/test_base.py index 2f28c55ae520f..7ddf3354324f9 100644 --- a/pandas/tseries/tests/test_base.py +++ b/pandas/tseries/tests/test_base.py @@ -341,6 +341,14 @@ def test_add_iadd(self): rng += 1 tm.assert_index_equal(rng, expected) + idx = DatetimeIndex(['2011-01-01', '2011-01-02']) + msg = "cannot add a datelike to a DatetimeIndex" + with tm.assertRaisesRegexp(TypeError, msg): + idx + pd.Timestamp('2011-01-01') + + with tm.assertRaisesRegexp(TypeError, msg): + pd.Timestamp('2011-01-01') + idx + def test_sub_isub(self): for tz in self.tz: # diff @@ -598,6 +606,16 @@ def test_infer_freq(self): tm.assert_index_equal(idx, result) self.assertEqual(result.freq, 
freq) + def test_nat_new(self): + idx = pd.date_range('2011-01-01', freq='D', periods=5, name='x') + result = idx._nat_new() + exp = pd.DatetimeIndex([pd.NaT] * 5, name='x') + tm.assert_index_equal(result, exp) + + result = idx._nat_new(box=False) + exp = np.array([tslib.iNaT] * 5, dtype=np.int64) + tm.assert_numpy_array_equal(result, exp) + class TestTimedeltaIndexOps(Ops): def setUp(self): @@ -777,7 +795,6 @@ def test_add_iadd(self): tm.assert_index_equal(rng, expected) def test_sub_isub(self): - # only test adding/sub offsets as - is now numeric # offset @@ -800,6 +817,15 @@ def test_sub_isub(self): rng -= 1 tm.assert_index_equal(rng, expected) + idx = TimedeltaIndex(['1 day', '2 day']) + msg = "cannot subtract a datelike from a TimedeltaIndex" + with tm.assertRaisesRegexp(TypeError, msg): + idx - pd.Timestamp('2011-01-01') + + result = Timestamp('2011-01-01') + idx + expected = DatetimeIndex(['2011-01-02', '2011-01-03']) + tm.assert_index_equal(result, expected) + def test_ops_compat(self): offsets = [pd.offsets.Hour(2), timedelta(hours=2), @@ -1252,6 +1278,17 @@ def test_infer_freq(self): tm.assert_index_equal(idx, result) self.assertEqual(result.freq, freq) + def test_nat_new(self): + + idx = pd.timedelta_range('1', freq='D', periods=5, name='x') + result = idx._nat_new() + exp = pd.TimedeltaIndex([pd.NaT] * 5, name='x') + tm.assert_index_equal(result, exp) + + result = idx._nat_new(box=False) + exp = np.array([tslib.iNaT] * 5, dtype=np.int64) + tm.assert_numpy_array_equal(result, exp) + class TestPeriodIndexOps(Ops): def setUp(self): @@ -2053,3 +2090,14 @@ def test_take(self): self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) self.assertEqual(result.freq, 'D') + + def test_nat_new(self): + + idx = pd.period_range('2011-01', freq='M', periods=5, name='x') + result = idx._nat_new() + exp = pd.PeriodIndex([pd.NaT] * 5, freq='M', name='x') + tm.assert_index_equal(result, exp) + + result = idx._nat_new(box=False) + exp = 
np.array([tslib.iNaT] * 5, dtype=np.int64) + tm.assert_numpy_array_equal(result, exp) diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py index 4c6ec91ad1f18..381b106b17eb0 100644 --- a/pandas/tseries/tests/test_tslib.py +++ b/pandas/tseries/tests/test_tslib.py @@ -6,6 +6,7 @@ import pandas._period as period import datetime +import pandas as pd from pandas.core.api import Timestamp, Series, Timedelta, Period, to_datetime from pandas.tslib import get_timezone from pandas._period import period_asfreq, period_ordinal @@ -22,6 +23,7 @@ class TestTimestamp(tm.TestCase): + def test_constructor(self): base_str = '2014-07-01 09:00' base_dt = datetime.datetime(2014, 7, 1, 9) @@ -915,37 +917,85 @@ def test_nanosecond_timestamp(self): def test_nat_arithmetic(self): # GH 6873 - nat = tslib.NaT - t = Timestamp('2014-01-01') - dt = datetime.datetime(2014, 1, 1) - delta = datetime.timedelta(3600) - td = Timedelta('5s') i = 2 f = 1.5 - for (left, right) in [(nat, i), (nat, f), (nat, np.nan)]: - self.assertTrue((left / right) is nat) - self.assertTrue((left * right) is nat) - self.assertTrue((right * left) is nat) + for (left, right) in [(pd.NaT, i), (pd.NaT, f), (pd.NaT, np.nan)]: + self.assertIs(left / right, pd.NaT) + self.assertIs(left * right, pd.NaT) + self.assertIs(right * left, pd.NaT) with tm.assertRaises(TypeError): right / left # Timestamp / datetime - for (left, right) in [(nat, nat), (nat, t), (nat, dt)]: + t = Timestamp('2014-01-01') + dt = datetime.datetime(2014, 1, 1) + for (left, right) in [(pd.NaT, pd.NaT), (pd.NaT, t), (pd.NaT, dt)]: # NaT __add__ or __sub__ Timestamp-like (or inverse) returns NaT - self.assertTrue((right + left) is nat) - self.assertTrue((left + right) is nat) - self.assertTrue((left - right) is nat) - self.assertTrue((right - left) is nat) + self.assertIs(right + left, pd.NaT) + self.assertIs(left + right, pd.NaT) + self.assertIs(left - right, pd.NaT) + self.assertIs(right - left, pd.NaT) # timedelta-like # 
offsets are tested in test_offsets.py - for (left, right) in [(nat, delta), (nat, td)]: + + delta = datetime.timedelta(3600) + td = Timedelta('5s') + + for (left, right) in [(pd.NaT, delta), (pd.NaT, td)]: # NaT + timedelta-like returns NaT - self.assertTrue((right + left) is nat) - self.assertTrue((left + right) is nat) - self.assertTrue((right - left) is nat) - self.assertTrue((left - right) is nat) + self.assertIs(right + left, pd.NaT) + self.assertIs(left + right, pd.NaT) + self.assertIs(right - left, pd.NaT) + self.assertIs(left - right, pd.NaT) + + # GH 11718 + tm._skip_if_no_pytz() + import pytz + + t_utc = Timestamp('2014-01-01', tz='UTC') + t_tz = Timestamp('2014-01-01', tz='US/Eastern') + dt_tz = pytz.timezone('Asia/Tokyo').localize(dt) + + for (left, right) in [(pd.NaT, t_utc), (pd.NaT, t_tz), + (pd.NaT, dt_tz)]: + # NaT __add__ or __sub__ Timestamp-like (or inverse) returns NaT + self.assertIs(right + left, pd.NaT) + self.assertIs(left + right, pd.NaT) + self.assertIs(left - right, pd.NaT) + self.assertIs(right - left, pd.NaT) + + def test_nat_arithmetic_index(self): + # GH 11718 + + # datetime + tm._skip_if_no_pytz() + + dti = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], name='x') + exp = pd.DatetimeIndex([pd.NaT, pd.NaT], name='x') + self.assert_index_equal(dti + pd.NaT, exp) + self.assert_index_equal(pd.NaT + dti, exp) + + dti_tz = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], + tz='US/Eastern', name='x') + exp = pd.DatetimeIndex([pd.NaT, pd.NaT], name='x', tz='US/Eastern') + self.assert_index_equal(dti_tz + pd.NaT, exp) + self.assert_index_equal(pd.NaT + dti_tz, exp) + + exp = pd.TimedeltaIndex([pd.NaT, pd.NaT], name='x') + for (left, right) in [(pd.NaT, dti), (pd.NaT, dti_tz)]: + self.assert_index_equal(left - right, exp) + self.assert_index_equal(right - left, exp) + + # timedelta + tdi = pd.TimedeltaIndex(['1 day', '2 day'], name='x') + exp = pd.DatetimeIndex([pd.NaT, pd.NaT], name='x') + for (left, right) in [(pd.NaT, tdi)]: + 
self.assert_index_equal(left + right, exp) + self.assert_index_equal(right + left, exp) + self.assert_index_equal(left - right, exp) + self.assert_index_equal(right - left, exp) class TestTslib(tm.TestCase): @@ -1173,8 +1223,8 @@ def test_resolution(self): period.H_RESO, period.T_RESO, period.S_RESO, period.MS_RESO, period.US_RESO]): - for tz in [None, 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Eastern' - ]: + for tz in [None, 'Asia/Tokyo', 'US/Eastern', + 'dateutil/US/Eastern']: idx = date_range(start='2013-04-01', periods=30, freq=freq, tz=tz) result = period.resolution(idx.asi8, idx.tz) diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx index 49b8f2c19700c..fe5d06e520cbf 100644 --- a/pandas/tslib.pyx +++ b/pandas/tslib.pyx @@ -1055,16 +1055,12 @@ cdef class _Timestamp(datetime): return self + neg_other # a Timestamp-DatetimeIndex -> yields a negative TimedeltaIndex - elif getattr(other,'_typ',None) == 'datetimeindex': - - # we may be passed reverse ops - if get_timezone(getattr(self,'tzinfo',None)) != get_timezone(other.tz): - raise TypeError("Timestamp subtraction must have the same timezones or no timezones") - + elif getattr(other, '_typ', None) == 'datetimeindex': + # timezone comparison is performed in DatetimeIndex._sub_datelike return -other.__sub__(self) # a Timestamp-TimedeltaIndex -> yields a negative TimedeltaIndex - elif getattr(other,'_typ',None) == 'timedeltaindex': + elif getattr(other, '_typ', None) == 'timedeltaindex': return (-other).__add__(self) elif other is NaT: @@ -1157,6 +1153,7 @@ cdef class _NaT(_Timestamp): if isinstance(other, datetime): return NaT result = _Timestamp.__add__(self, other) + # Timestamp.__add__ doesn't return DatetimeIndex/TimedeltaIndex if result is NotImplemented: return result except (OverflowError, OutOfBoundsDatetime): @@ -1164,15 +1161,12 @@ cdef class _NaT(_Timestamp): return NaT def __sub__(self, other): - - if other is NaT: + if isinstance(other, (datetime, timedelta)): return NaT - - if type(self) is datetime: - 
other, self = self, other try: result = _Timestamp.__sub__(self, other) - if result is NotImplemented: + # Timestamp.__sub__ may return DatetimeIndex/TimedeltaIndex + if result is NotImplemented or hasattr(result, '_typ'): return result except (OverflowError, OutOfBoundsDatetime): pass
Closes #11718.

- Ops with `NaT` and `Timestamp` with `tz` result in `NaT`.
- Ops with `NaT` and `DatetimeIndex` / `TimedeltaIndex` result in the corresponding `Index` filled with `NaT`.
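The behavior described above can be checked directly; a minimal sketch, assuming a pandas version that includes this fix:

```python
import pandas as pd

# NaT absorbs arithmetic with tz-aware Timestamps, in either operand order
t_tz = pd.Timestamp('2014-01-01', tz='US/Eastern')
assert (pd.NaT + t_tz) is pd.NaT
assert (t_tz - pd.NaT) is pd.NaT

# Against a DatetimeIndex, the result is an Index of the same shape filled with NaT
dti = pd.DatetimeIndex(['2011-01-01', '2011-01-02'])
result = pd.NaT + dti
assert result.isna().all() and len(result) == len(dti)
```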
https://api.github.com/repos/pandas-dev/pandas/pulls/12241
2016-02-06T14:03:34Z
2016-02-11T23:35:54Z
null
2016-02-11T23:43:23Z
TST: fix some scipy 0.17.0 changes
diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py index 2a97fdad8dd44..7cb0dd249effd 100644 --- a/pandas/tests/test_generic.py +++ b/pandas/tests/test_generic.py @@ -6,6 +6,7 @@ from numpy import nan import pandas as pd +from distutils.version import LooseVersion from pandas import (Index, Series, DataFrame, Panel, isnull, date_range, period_range) from pandas.core.index import MultiIndex @@ -1195,9 +1196,15 @@ def test_interp_alt_scipy(self): assert_frame_equal(result, expectedk) _skip_if_no_pchip() + import scipy result = df.interpolate(method='pchip') expected.ix[2, 'A'] = 3 - expected.ix[5, 'A'] = 6.125 + + if LooseVersion(scipy.__version__) >= '0.17.0': + expected.ix[5, 'A'] = 6.0 + else: + expected.ix[5, 'A'] = 6.125 + assert_frame_equal(result, expected) def test_interp_rowwise(self): diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py index ca5246ba98f89..3738f88c1800b 100644 --- a/pandas/tests/test_nanops.py +++ b/pandas/tests/test_nanops.py @@ -122,7 +122,7 @@ def check_results(self, targ, res, axis): # timedeltas are a beast here def _coerce_tds(targ, res): - if targ.dtype == 'm8[ns]': + if hasattr(targ, 'dtype') and targ.dtype == 'm8[ns]': if len(targ) == 1: targ = targ[0].item() res = res.item() @@ -141,7 +141,8 @@ def _coerce_tds(targ, res): tm.assert_almost_equal(targ, res) except: - if targ.dtype == 'm8[ns]': + # handle timedelta dtypes + if hasattr(targ, 'dtype') and targ.dtype == 'm8[ns]': targ, res = _coerce_tds(targ, res) tm.assert_almost_equal(targ, res) return diff --git a/pandas/tests/test_stats.py b/pandas/tests/test_stats.py index b4cc57cb8216c..56f6a80e58eea 100644 --- a/pandas/tests/test_stats.py +++ b/pandas/tests/test_stats.py @@ -2,6 +2,7 @@ from pandas import compat import nose +from distutils.version import LooseVersion from numpy import nan import numpy as np @@ -47,6 +48,7 @@ def _check(s, expected, method='average'): def test_rank_methods_series(self): tm.skip_if_no_package('scipy', 
'0.13', 'scipy.stats.rankdata') + import scipy from scipy.stats import rankdata xs = np.random.randn(9) @@ -61,10 +63,15 @@ def test_rank_methods_series(self): for m in ['average', 'min', 'max', 'first', 'dense']: result = ts.rank(method=m) sprank = rankdata(vals, m if m != 'first' else 'ordinal') - tm.assert_series_equal(result, Series(sprank, index=index)) + expected = Series(sprank, index=index) + + if LooseVersion(scipy.__version__) >= '0.17.0': + expected = expected.astype('float64') + tm.assert_series_equal(result, expected) def test_rank_methods_frame(self): tm.skip_if_no_package('scipy', '0.13', 'scipy.stats.rankdata') + import scipy from scipy.stats import rankdata xs = np.random.randint(0, 21, (100, 26)) @@ -81,6 +88,9 @@ def test_rank_methods_frame(self): rankdata, ax, vals, m if m != 'first' else 'ordinal') expected = DataFrame(sprank, columns=cols) + + if LooseVersion(scipy.__version__) >= '0.17.0': + expected = expected.astype('float64') tm.assert_frame_equal(result, expected) def test_rank_dense_method(self):
Partially addresses #12235.
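The test changes gate the expected values on `scipy.__version__ >= '0.17.0'` via `LooseVersion`. Since `distutils` is deprecated on recent Pythons, here is a stdlib-only sketch of the same gating idea; the helper names are hypothetical, not part of the patch:

```python
def version_tuple(v):
    """Parse a dotted version string into a comparable tuple of ints.

    Pre-release suffixes are ignored; this is a simplification of the
    LooseVersion comparison the tests use.
    """
    parts = []
    for piece in v.split('.'):
        digits = ''.join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)


def expected_pchip_value(scipy_version):
    # Mirrors the test above: scipy >= 0.17.0 changed the pchip
    # interpolation result at this point from 6.125 to 6.0.
    return 6.0 if version_tuple(scipy_version) >= (0, 17, 0) else 6.125
```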
https://api.github.com/repos/pandas-dev/pandas/pulls/12239
2016-02-05T20:09:44Z
2016-02-06T19:34:23Z
null
2016-02-06T19:34:23Z
DOC: Remove claims of unbiasedness from doc string for std
diff --git a/doc/source/basics.rst b/doc/source/basics.rst index 9ecee2ea86bd1..68ab07bc3df91 100644 --- a/doc/source/basics.rst +++ b/doc/source/basics.rst @@ -483,11 +483,11 @@ optional ``level`` parameter which applies only if the object has a ``mode``, Mode ``abs``, Absolute Value ``prod``, Product of values - ``std``, Unbiased standard deviation + ``std``, Bessel-corrected sample standard deviation ``var``, Unbiased variance - ``sem``, Unbiased standard error of the mean - ``skew``, Unbiased skewness (3rd moment) - ``kurt``, Unbiased kurtosis (4th moment) + ``sem``, Standard error of the mean + ``skew``, Sample skewness (3rd moment) + ``kurt``, Sample kurtosis (4th moment) ``quantile``, Sample quantile (value at %) ``cumsum``, Cumulative sum ``cumprod``, Cumulative product diff --git a/doc/source/computation.rst b/doc/source/computation.rst index 097898f027429..ed3251efc3656 100644 --- a/doc/source/computation.rst +++ b/doc/source/computation.rst @@ -309,10 +309,10 @@ We provide a number of the common statistical functions: :meth:`~Rolling.median`, Arithmetic median of values :meth:`~Rolling.min`, Minimum :meth:`~Rolling.max`, Maximum - :meth:`~Rolling.std`, Unbiased standard deviation + :meth:`~Rolling.std`, Bessel-corrected sample standard deviation :meth:`~Rolling.var`, Unbiased variance - :meth:`~Rolling.skew`, Unbiased skewness (3rd moment) - :meth:`~Rolling.kurt`, Unbiased kurtosis (4th moment) + :meth:`~Rolling.skew`, Sample skewness (3rd moment) + :meth:`~Rolling.kurt`, Sample kurtosis (4th moment) :meth:`~Rolling.quantile`, Sample quantile (value at %) :meth:`~Rolling.apply`, Generic apply :meth:`~Rolling.cov`, Unbiased covariance (binary) diff --git a/pandas/core/generic.py b/pandas/core/generic.py index ce156232ed698..1bea2fb4fcf95 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -4783,7 +4783,7 @@ def mad(self, axis=None, skipna=None, level=None): nanops.nanvar) cls.std = _make_stat_function_ddof( 'std', name, name2, axis_descr, 
- "Return unbiased standard deviation over requested axis." + "Return sample standard deviation over requested axis." "\n\nNormalized by N-1 by default. This can be changed using the " "ddof argument", nanops.nanstd)
xref #12230
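For context on the Bessel correction the doc change refers to, a small worked example in plain Python (not the pandas API): dividing the sum of squared deviations by `n - 1` instead of `n` gives the sample standard deviation that `std` computes by default (`ddof=1`).

```python
import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
mean = sum(data) / n                       # 5.0
ss = sum((x - mean) ** 2 for x in data)    # 32.0

pop_std = math.sqrt(ss / n)            # ddof=0 (population): 2.0
sample_std = math.sqrt(ss / (n - 1))   # ddof=1 (Bessel-corrected): ~2.138
```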
https://api.github.com/repos/pandas-dev/pandas/pulls/12234
2016-02-05T06:12:00Z
2016-02-06T19:46:50Z
null
2016-02-06T19:46:50Z
Read very old stata DTA files
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index ffba681565f48..2d2e673e15e03 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -694,7 +694,7 @@ Bug Fixes - Bug in getitem when the values of a ``Series`` were tz-aware (:issue:`12089`) - Bug in ``Series.str.get_dummies`` when one of the variables was 'name' (:issue:`12180`) - Bug in ``pd.concat`` while concatenating tz-aware NaT series. (:issue:`11693`, :issue:`11755`) - +- Bug in ``pd.read_stata`` with version <= 108 files (:issue:`12232`) - Bug in ``Timedelta.round`` with negative values (:issue:`11690`) diff --git a/pandas/io/stata.py b/pandas/io/stata.py index 8181e69abc60b..bdb48521bd791 100644 --- a/pandas/io/stata.py +++ b/pandas/io/stata.py @@ -851,23 +851,24 @@ def __init__(self, encoding): float32_max = b'\xff\xff\xff\x7e' float64_min = b'\xff\xff\xff\xff\xff\xff\xef\xff' float64_max = b'\xff\xff\xff\xff\xff\xff\xdf\x7f' - self.VALID_RANGE = \ - { - 'b': (-127, 100), - 'h': (-32767, 32740), - 'l': (-2147483647, 2147483620), - 'f': (np.float32(struct.unpack('<f', float32_min)[0]), - np.float32(struct.unpack('<f', float32_max)[0])), - 'd': (np.float64(struct.unpack('<d', float64_min)[0]), - np.float64(struct.unpack('<d', float64_max)[0])) - } - - self.OLD_TYPE_MAPPING = \ - { - 'i': 252, - 'f': 254, - 'b': 251 - } + self.VALID_RANGE = { + 'b': (-127, 100), + 'h': (-32767, 32740), + 'l': (-2147483647, 2147483620), + 'f': (np.float32(struct.unpack('<f', float32_min)[0]), + np.float32(struct.unpack('<f', float32_max)[0])), + 'd': (np.float64(struct.unpack('<d', float64_min)[0]), + np.float64(struct.unpack('<d', float64_max)[0])) + } + + self.OLD_TYPE_MAPPING = { + 98: 251, # byte + 105: 252, # int + 108: 253, # long + 102: 254 # float + # don't know old code for double + } + # These missing values are the generic '.' 
in Stata, and are used # to replace nans self.MISSING_VALUES = { @@ -878,15 +879,14 @@ def __init__(self, encoding): 'd': np.float64( struct.unpack('<d', b'\x00\x00\x00\x00\x00\x00\xe0\x7f')[0]) } - self.NUMPY_TYPE_MAP = \ - { - 'b': 'i1', - 'h': 'i2', - 'l': 'i4', - 'f': 'f4', - 'd': 'f8', - 'Q': 'u8' - } + self.NUMPY_TYPE_MAP = { + 'b': 'i1', + 'h': 'i2', + 'l': 'i4', + 'f': 'f4', + 'd': 'f8', + 'Q': 'u8' + } # Reserved words cannot be used as variable names self.RESERVED_WORDS = ('aggregate', 'array', 'boolean', 'break', @@ -900,12 +900,6 @@ def __init__(self, encoding): 'protected', 'quad', 'rowvector', 'short', 'typedef', 'typename', 'virtual') - def _decode_bytes(self, str, errors=None): - if compat.PY3 or self._encoding is not None: - return str.decode(self._encoding, errors) - else: - return str - class StataReader(StataParser): __doc__ = _stata_reader_doc @@ -1201,11 +1195,14 @@ def _read_old_header(self, first_char): typlist = [ord(self.path_or_buf.read(1)) for i in range(self.nvar)] else: - typlist = [ - self.OLD_TYPE_MAPPING[ - self._decode_bytes(self.path_or_buf.read(1)) - ] for i in range(self.nvar) - ] + buf = self.path_or_buf.read(self.nvar) + typlistb = np.frombuffer(buf, dtype=np.uint8) + typlist = [] + for tp in typlistb: + if tp in self.OLD_TYPE_MAPPING: + typlist.append(self.OLD_TYPE_MAPPING[tp]) + else: + typlist.append(tp - 127) # string try: self.typlist = [self.TYPE_MAP[typ] for typ in typlist] @@ -1533,7 +1530,7 @@ def read(self, nrows=None, convert_dates=None, data[col], self.fmtlist[i]) - if convert_categoricals and self.value_label_dict: + if convert_categoricals and self.format_version > 108: data = self._do_convert_categoricals(data, self.value_label_dict, self.lbllist, diff --git a/pandas/io/tests/data/S4_EDUC1.DTA b/pandas/io/tests/data/S4_EDUC1.DTA new file mode 100644 index 0000000000000..2d5533b7e621c Binary files /dev/null and b/pandas/io/tests/data/S4_EDUC1.DTA differ diff --git a/pandas/io/tests/test_stata.py 
b/pandas/io/tests/test_stata.py index e1e12e47457f9..0389cc4b113cf 100644 --- a/pandas/io/tests/test_stata.py +++ b/pandas/io/tests/test_stata.py @@ -409,9 +409,9 @@ def test_read_write_dta12(self): written_and_read_again.set_index('index'), formatted) def test_read_write_dta13(self): - s1 = Series(2**9, dtype=np.int16) - s2 = Series(2**17, dtype=np.int32) - s3 = Series(2**33, dtype=np.int64) + s1 = Series(2 ** 9, dtype=np.int16) + s2 = Series(2 ** 17, dtype=np.int32) + s3 = Series(2 ** 33, dtype=np.int64) original = DataFrame({'int16': s1, 'int32': s2, 'int64': s3}) original.index.name = 'index' @@ -568,6 +568,20 @@ def test_dates_invalid_column(self): tm.assert_frame_equal(written_and_read_again.set_index('index'), modified) + def test_105(self): + # Data obtained from: + # http://go.worldbank.org/ZXY29PVJ21 + dpath = os.path.join(self.dirpath, 'S4_EDUC1.DTA') + df = pd.read_stata(dpath) + df0 = [[1, 1, 3, -2], [2, 1, 2, -2], [4, 1, 1, -2]] + df0 = pd.DataFrame(df0) + df0.columns = ["clustnum", "pri_schl", "psch_num", "psch_dis"] + df0['clustnum'] = df0["clustnum"].astype(np.int16) + df0['pri_schl'] = df0["pri_schl"].astype(np.int8) + df0['psch_num'] = df0["psch_num"].astype(np.int8) + df0['psch_dis'] = df0["psch_dis"].astype(np.float32) + tm.assert_frame_equal(df.head(3), df0) + def test_date_export_formats(self): columns = ['tc', 'td', 'tw', 'tm', 'tq', 'th', 'ty'] conversions = dict(((c, c) for c in columns))
This should close #12232, although the issue may resurface for files containing double values (I can't determine the old type code for doubles).
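The core of the fix reads the old-format type bytes and maps the known codes to modern ones, treating any other byte as a string whose width is `tp - 127`. A standalone sketch of that decoding logic, simplified from `StataReader._read_old_header` (the function name here is hypothetical):

```python
# Known old-format Stata type bytes -> modern numeric type codes
OLD_TYPE_MAPPING = {
    98: 251,   # 'b' byte
    105: 252,  # 'i' int
    108: 253,  # 'l' long
    102: 254,  # 'f' float
    # old code for double is unknown, per the PR description
}


def decode_typlist(buf):
    """Decode one type byte per variable from an old-format header."""
    return [OLD_TYPE_MAPPING.get(tp, tp - 127) for tp in buf]


decode_typlist(b'bil\x80')  # [251, 252, 253, 1] -- trailing byte is a str1
```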
https://api.github.com/repos/pandas-dev/pandas/pulls/12233
2016-02-05T05:53:06Z
2016-02-08T15:18:59Z
null
2016-02-08T15:19:29Z
Use correct prop_cycle in the mpl_stylesheet.
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 03d9fe75da8cc..103b7484ea138 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -135,7 +135,7 @@ def _mpl_ge_1_5_0(): # Compat with mp 1.5, which uses cycler. import cycler colors = mpl_stylesheet.pop('axes.color_cycle') - mpl_stylesheet['axes.prop_cycle'] = cycler.cycler('color_cycle', colors) + mpl_stylesheet['axes.prop_cycle'] = cycler.cycler('color', colors) def _get_standard_kind(kind):
Fixes https://github.com/pydata/pandas/issues/11727
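The one-line fix hinges on the cycler's property name matching the artist property (`'color'`), since matplotlib 1.5's `axes.prop_cycle` feeds each cycle entry to the artist as keyword arguments. A minimal sketch, assuming `cycler` is installed (it ships with matplotlib); the colors are just example values:

```python
from cycler import cycler  # installed alongside matplotlib

colors = ['#348ABD', '#7A68A6', '#A60628']
prop_cycle = cycler('color', colors)

# Iterating a cycler yields one kwargs dict per entry; the key must be a
# real artist property, which is why 'color_cycle' was wrong here.
assert [d['color'] for d in prop_cycle] == colors
```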
https://api.github.com/repos/pandas-dev/pandas/pulls/12227
2016-02-03T19:35:18Z
2016-02-06T19:49:44Z
null
2016-02-06T19:49:54Z
ERR automatic broadcast for merging different levels, #9455
diff --git a/doc/source/whatsnew/v0.18.1.txt b/doc/source/whatsnew/v0.18.1.txt index dbe446f0a7b4f..6ba8edcca59e8 100644 --- a/doc/source/whatsnew/v0.18.1.txt +++ b/doc/source/whatsnew/v0.18.1.txt @@ -47,7 +47,7 @@ API changes - +- ``pandas.merge()`` and ``DataFrame.join()`` will show a ``UserWarning`` when merging/joining a single- with a multi-leveled dataframe (:issue:`9455`, :issue:`12219`) diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py index 64be1ef460f51..26a88d601ec4b 100644 --- a/pandas/tests/frame/test_axis_select_reindex.py +++ b/pandas/tests/frame/test_axis_select_reindex.py @@ -142,6 +142,31 @@ def test_drop_multiindex_not_lexsorted(self): tm.assert_frame_equal(result, expected) + def test_merge_join_different_levels(self): + # GH 9455 + + # first dataframe + df1 = DataFrame(columns=['a', 'b'], data=[[1, 11], [0, 22]]) + + # second dataframe + columns = MultiIndex.from_tuples([('a', ''), ('c', 'c1')]) + df2 = DataFrame(columns=columns, data=[[1, 33], [0, 44]]) + + # merge + columns = ['a', 'b', ('c', 'c1')] + expected = DataFrame(columns=columns, data=[[1, 11, 33], [0, 22, 44]]) + with tm.assert_produces_warning(UserWarning): + result = pd.merge(df1, df2, on='a') + tm.assert_frame_equal(result, expected) + + # join, see discussion in GH 12219 + columns = ['a', 'b', ('a', ''), ('c', 'c1')] + expected = DataFrame(columns=columns, + data=[[1, 11, 0, 44], [0, 22, 1, 33]]) + with tm.assert_produces_warning(UserWarning): + result = df1.join(df2, on='a') + tm.assert_frame_equal(result, expected) + def test_reindex(self): newFrame = self.frame.reindex(self.ts1.index) diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py index 82fdf0a3d3b46..895f5b74a3e80 100644 --- a/pandas/tools/merge.py +++ b/pandas/tools/merge.py @@ -2,6 +2,8 @@ SQL-style merge routines """ +import warnings + import numpy as np from pandas.compat import range, lrange, lzip, zip, map, filter import pandas.compat as compat @@ 
-193,6 +195,13 @@ def __init__(self, left, right, how='inner', on=None, 'can not merge DataFrame with instance of ' 'type {0}'.format(type(right))) + # warn user when merging between different levels + if left.columns.nlevels != right.columns.nlevels: + msg = ('merging between different levels can give an unintended ' + 'result ({0} levels on the left, {1} on the right)') + msg = msg.format(left.columns.nlevels, right.columns.nlevels) + warnings.warn(msg, UserWarning) + # note this function has side effects (self.left_join_keys, self.right_join_keys, diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py index d5ddfe624e240..f27192dd3f379 100644 --- a/pandas/tools/tests/test_merge.py +++ b/pandas/tools/tests/test_merge.py @@ -459,13 +459,15 @@ def test_join_inner_multiindex(self): # _assert_same_contents(expected, expected2.ix[:, expected.columns]) def test_join_hierarchical_mixed(self): + # GH 2024 df = DataFrame([(1, 2, 3), (4, 5, 6)], columns=['a', 'b', 'c']) new_df = df.groupby(['a']).agg({'b': [np.mean, np.sum]}) other_df = DataFrame( [(1, 2, 3), (7, 10, 6)], columns=['a', 'b', 'd']) other_df.set_index('a', inplace=True) - - result = merge(new_df, other_df, left_index=True, right_index=True) + # GH 9455, 12219 + with tm.assert_produces_warning(UserWarning): + result = merge(new_df, other_df, left_index=True, right_index=True) self.assertTrue(('b', 'mean') in result) self.assertTrue('b' in result)
I'm adding tests to close #9455 after pull request #12158 solved the issue. But I confess I'm a bit surprised by the result:

```
In [2]: df1 = DataFrame(columns=['a', 'b'], data=[[0, 1]])

In [3]: df2a = DataFrame(columns=['c', 'd'], data=[[2, 3]])

In [4]: df2b = DataFrame(columns=['c', 'e'], data=[[4, 5]])

In [5]: df2 = concat([df2a, df2b], keys=['l', 'r'], axis=1)

In [6]: df2.index.name = 'a'

In [7]: df2 = df2.reset_index()

In [8]: df1
Out[8]:
   a  b
0  0  1

In [9]: df2
Out[9]:
   a  l     r
      c  d  c  e
0  0  2  3  4  5

In [10]: merge(df1, df2, on='a')
pandas/tools/merge.py:467: PerformanceWarning: dropping on a non-lexsorted multi-index without a level parameter may impact performance.
  self.right = self.right.drop(right_drop, axis=1)
Out[10]:
   a  b  (l, c)  (l, d)  (r, c)  (r, e)
0  0  1       2       3       4       5
```

Is it all right?
https://api.github.com/repos/pandas-dev/pandas/pulls/12219
2016-02-03T12:42:05Z
2016-04-25T14:38:03Z
null
2016-05-05T16:59:47Z
DEPR: Change boxplot return_type kwarg
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst index 16ef76638ec5b..6e05c3ff0457a 100644 --- a/doc/source/visualization.rst +++ b/doc/source/visualization.rst @@ -456,28 +456,29 @@ columns: .. _visualization.box.return: -Basically, plot functions return :class:`matplotlib Axes <matplotlib.axes.Axes>` as a return value. -In ``boxplot``, the return type can be changed by argument ``return_type``, and whether the subplots is enabled (``subplots=True`` in ``plot`` or ``by`` is specified in ``boxplot``). +.. warning:: -When ``subplots=False`` / ``by`` is ``None``: + The default changed from ``'dict'`` to ``'axes'`` in version 0.19.0. -* if ``return_type`` is ``'dict'``, a dictionary containing the :class:`matplotlib Lines <matplotlib.lines.Line2D>` is returned. The keys are "boxes", "caps", "fliers", "medians", and "whiskers". - This is the default of ``boxplot`` in historical reason. - Note that ``plot.box()`` returns ``Axes`` by default same as other plots. -* if ``return_type`` is ``'axes'``, a :class:`matplotlib Axes <matplotlib.axes.Axes>` containing the boxplot is returned. -* if ``return_type`` is ``'both'`` a namedtuple containing the :class:`matplotlib Axes <matplotlib.axes.Axes>` - and :class:`matplotlib Lines <matplotlib.lines.Line2D>` is returned +In ``boxplot``, the return type can be controlled by the ``return_type``, keyword. The valid choices are ``{"axes", "dict", "both", None}``. +Faceting, created by ``DataFrame.boxplot`` with the ``by`` +keyword, will affect the output type as well: -When ``subplots=True`` / ``by`` is some column of the DataFrame: +================ ======= ========================== +``return_type=`` Faceted Output type +---------------- ------- -------------------------- -* A dict of ``return_type`` is returned, where the keys are the columns - of the DataFrame. The plot has a facet for each column of - the DataFrame, with a separate box for each value of ``by``. 
+``None`` No axes +``None`` Yes 2-D ndarray of axes +``'axes'`` No axes +``'axes'`` Yes Series of axes +``'dict'`` No dict of artists +``'dict'`` Yes Series of dicts of artists +``'both'`` No namedtuple +``'both'`` Yes Series of namedtuples +================ ======= ========================== -Finally, when calling boxplot on a :class:`Groupby` object, a dict of ``return_type`` -is returned, where the keys are the same as the Groupby object. The plot has a -facet for each key, with each facet containing a box for each column of the -DataFrame. +``Groupby.boxplot`` always returns a Series of ``return_type``. .. ipython:: python :okwarning: diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt index a422e667e32a7..f02367a49d44d 100644 --- a/doc/source/whatsnew/v0.19.0.txt +++ b/doc/source/whatsnew/v0.19.0.txt @@ -494,6 +494,7 @@ API changes - ``__setitem__`` will no longer apply a callable rhs as a function instead of storing it. Call ``where`` directly to get the previous behavior. (:issue:`13299`) - Passing ``Period`` with multiple frequencies to normal ``Index`` now returns ``Index`` with ``object`` dtype (:issue:`13664`) - ``PeriodIndex.fillna`` with ``Period`` has different freq now coerces to ``object`` dtype (:issue:`13664`) +- Faceted boxplots from ``DataFrame.boxplot(by=col)`` now return a ``Series`` when ``return_type`` is not None. Previously these returned an ``OrderedDict``. Note that when ``return_type=None``, the default, these still return a 2-D NumPy array. (:issue:`12216`, :issue:`7096`) - More informative exceptions are passed through the csv parser. The exception type would now be the original exception type instead of ``CParserError``. (:issue:`13652`) - ``astype()`` will now accept a dict of column name to data types mapping as the ``dtype`` argument. 
(:issue:`12086`) - The ``pd.read_json`` and ``DataFrame.to_json`` has gained support for reading and writing json lines with ``lines`` option see :ref:`Line delimited json <io.jsonl>` (:issue:`9180`) @@ -1282,9 +1283,9 @@ Removal of prior version deprecations/changes Now legacy time rules raises ``ValueError``. For the list of currently supported offsets, see :ref:`here <timeseries.offset_aliases>` +- The default value for the ``return_type`` parameter for ``DataFrame.plot.box`` and ``DataFrame.boxplot`` changed from ``None`` to ``"axes"``. These methods will now return a matplotlib axes by default instead of a dictionary of artists. See :ref:`here <visualization.box.return>` (:issue:`6581`). - The ``tquery`` and ``uquery`` functions in the ``pandas.io.sql`` module are removed (:issue:`5950`). - .. _whatsnew_0190.performance: Performance Improvements diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py index 7dcc3d6e5734f..9fe1d7cacd38f 100644 --- a/pandas/tests/plotting/common.py +++ b/pandas/tests/plotting/common.py @@ -5,8 +5,8 @@ import os import warnings -from pandas import DataFrame -from pandas.compat import zip, iteritems, OrderedDict +from pandas import DataFrame, Series +from pandas.compat import zip, iteritems from pandas.util.decorators import cache_readonly from pandas.types.api import is_list_like import pandas.util.testing as tm @@ -445,7 +445,8 @@ def _check_box_return_type(self, returned, return_type, expected_keys=None, self.assertIsInstance(r, Axes) return - self.assertTrue(isinstance(returned, OrderedDict)) + self.assertTrue(isinstance(returned, Series)) + self.assertEqual(sorted(returned.keys()), sorted(expected_keys)) for key, value in iteritems(returned): self.assertTrue(isinstance(value, types[return_type])) diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py index d499540827ab0..333792c5ffdb2 100644 --- a/pandas/tests/plotting/test_boxplot_method.py +++ 
b/pandas/tests/plotting/test_boxplot_method.py @@ -92,6 +92,12 @@ def test_boxplot_legacy(self): lines = list(itertools.chain.from_iterable(d.values())) self.assertEqual(len(ax.get_lines()), len(lines)) + @slow + def test_boxplot_return_type_none(self): + # GH 12216; return_type=None & by=None -> axes + result = self.hist_df.boxplot() + self.assertTrue(isinstance(result, self.plt.Axes)) + @slow def test_boxplot_return_type_legacy(self): # API change in https://github.com/pydata/pandas/pull/7096 @@ -103,10 +109,8 @@ def test_boxplot_return_type_legacy(self): with tm.assertRaises(ValueError): df.boxplot(return_type='NOTATYPE') - with tm.assert_produces_warning(FutureWarning): - result = df.boxplot() - # change to Axes in future - self._check_box_return_type(result, 'dict') + result = df.boxplot() + self._check_box_return_type(result, 'axes') with tm.assert_produces_warning(False): result = df.boxplot(return_type='dict') @@ -140,6 +144,7 @@ def _check_ax_limits(col, ax): p = df.boxplot(['height', 'weight', 'age'], by='category') height_ax, weight_ax, age_ax = p[0, 0], p[0, 1], p[1, 0] dummy_ax = p[1, 1] + _check_ax_limits(df['height'], height_ax) _check_ax_limits(df['weight'], weight_ax) _check_ax_limits(df['age'], age_ax) @@ -163,8 +168,7 @@ def test_boxplot_legacy(self): grouped = self.hist_df.groupby(by='gender') with tm.assert_produces_warning(UserWarning): axes = _check_plot_works(grouped.boxplot, return_type='axes') - self._check_axes_shape(list(axes.values()), axes_num=2, layout=(1, 2)) - + self._check_axes_shape(list(axes.values), axes_num=2, layout=(1, 2)) axes = _check_plot_works(grouped.boxplot, subplots=False, return_type='axes') self._check_axes_shape(axes, axes_num=1, layout=(1, 1)) @@ -175,7 +179,7 @@ def test_boxplot_legacy(self): grouped = df.groupby(level=1) with tm.assert_produces_warning(UserWarning): axes = _check_plot_works(grouped.boxplot, return_type='axes') - self._check_axes_shape(list(axes.values()), axes_num=10, layout=(4, 3)) + 
self._check_axes_shape(list(axes.values), axes_num=10, layout=(4, 3)) axes = _check_plot_works(grouped.boxplot, subplots=False, return_type='axes') @@ -184,8 +188,7 @@ def test_boxplot_legacy(self): grouped = df.unstack(level=1).groupby(level=0, axis=1) with tm.assert_produces_warning(UserWarning): axes = _check_plot_works(grouped.boxplot, return_type='axes') - self._check_axes_shape(list(axes.values()), axes_num=3, layout=(2, 2)) - + self._check_axes_shape(list(axes.values), axes_num=3, layout=(2, 2)) axes = _check_plot_works(grouped.boxplot, subplots=False, return_type='axes') self._check_axes_shape(axes, axes_num=1, layout=(1, 1)) @@ -226,8 +229,7 @@ def test_grouped_box_return_type(self): expected_keys=['height', 'weight', 'category']) # now for groupby - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - result = df.groupby('gender').boxplot() + result = df.groupby('gender').boxplot(return_type='dict') self._check_box_return_type( result, 'dict', expected_keys=['Male', 'Female']) @@ -347,7 +349,7 @@ def test_grouped_box_multiple_axes(self): with tm.assert_produces_warning(UserWarning): returned = df.boxplot(column=['height', 'weight', 'category'], by='gender', return_type='axes', ax=axes[0]) - returned = np.array(list(returned.values())) + returned = np.array(list(returned.values)) self._check_axes_shape(returned, axes_num=3, layout=(1, 3)) self.assert_numpy_array_equal(returned, axes[0]) self.assertIs(returned[0].figure, fig) @@ -357,7 +359,7 @@ def test_grouped_box_multiple_axes(self): returned = df.groupby('classroom').boxplot( column=['height', 'weight', 'category'], return_type='axes', ax=axes[1]) - returned = np.array(list(returned.values())) + returned = np.array(list(returned.values)) self._check_axes_shape(returned, axes_num=3, layout=(1, 3)) self.assert_numpy_array_equal(returned, axes[1]) self.assertIs(returned[0].figure, fig) diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py index 
91be0a7a73e35..4d0c1e9213b17 100644 --- a/pandas/tests/plotting/test_frame.py +++ b/pandas/tests/plotting/test_frame.py @@ -1221,6 +1221,9 @@ def test_boxplot_return_type(self): result = df.plot.box(return_type='axes') self._check_box_return_type(result, 'axes') + result = df.plot.box() # default axes + self._check_box_return_type(result, 'axes') + result = df.plot.box(return_type='both') self._check_box_return_type(result, 'both') @@ -1230,7 +1233,7 @@ def test_boxplot_subplots_return_type(self): # normal style: return_type=None result = df.plot.box(subplots=True) - self.assertIsInstance(result, np.ndarray) + self.assertIsInstance(result, Series) self._check_box_return_type(result, None, expected_keys=[ 'height', 'weight', 'category']) diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 1abd11017dbfe..7fd0b1044f9d7 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -2247,7 +2247,7 @@ class BoxPlot(LinePlot): # namedtuple to hold results BP = namedtuple("Boxplot", ['ax', 'lines']) - def __init__(self, data, return_type=None, **kwargs): + def __init__(self, data, return_type='axes', **kwargs): # Do not call LinePlot.__init__ which may fill nan if return_type not in self._valid_return_types: raise ValueError( @@ -2266,7 +2266,7 @@ def _args_adjust(self): self.sharey = False @classmethod - def _plot(cls, ax, y, column_num=None, return_type=None, **kwds): + def _plot(cls, ax, y, column_num=None, return_type='axes', **kwds): if y.ndim == 2: y = [remove_na(v) for v in y] # Boxplot fails with empty arrays, so need to add a NaN @@ -2339,7 +2339,7 @@ def maybe_color_bp(self, bp): def _make_plot(self): if self.subplots: - self._return_obj = compat.OrderedDict() + self._return_obj = Series() for i, (label, y) in enumerate(self._iter_data()): ax = self._get_ax(i) @@ -2691,14 +2691,17 @@ def plot_series(data, kind='line', ax=None, # Series unique grid : Setting this to True will show the grid layout : tuple (optional) (rows, columns) for 
the layout of the plot - return_type : {'axes', 'dict', 'both'}, default 'dict' - The kind of object to return. 'dict' returns a dictionary - whose values are the matplotlib Lines of the boxplot; + return_type : {None, 'axes', 'dict', 'both'}, default None + The kind of object to return. The default is ``axes`` 'axes' returns the matplotlib axes the boxplot is drawn on; + 'dict' returns a dictionary whose values are the matplotlib + Lines of the boxplot; 'both' returns a namedtuple with the axes and dict. - When grouping with ``by``, a dict mapping columns to ``return_type`` - is returned. + When grouping with ``by``, a Series mapping columns to ``return_type`` + is returned, unless ``return_type`` is None, in which case a NumPy + array of axes is returned with the same shape as ``layout``. + See the prose documentation for more. kwds : other plotting keyword arguments to be passed to matplotlib boxplot function @@ -2724,7 +2727,7 @@ def boxplot(data, column=None, by=None, ax=None, fontsize=None, # validate return_type: if return_type not in BoxPlot._valid_return_types: - raise ValueError("return_type must be {None, 'axes', 'dict', 'both'}") + raise ValueError("return_type must be {'axes', 'dict', 'both'}") from pandas import Series, DataFrame if isinstance(data, Series): @@ -2769,23 +2772,19 @@ def plot_group(keys, values, ax): columns = [column] if by is not None: + # Prefer array return type for 2-D plots to match the subplot layout + # https://github.com/pydata/pandas/pull/12216#issuecomment-241175580 result = _grouped_plot_by_column(plot_group, data, columns=columns, by=by, grid=grid, figsize=figsize, ax=ax, layout=layout, return_type=return_type) else: + if return_type is None: + return_type = 'axes' if layout is not None: raise ValueError("The 'layout' keyword is not supported when " "'by' is None") - if return_type is None: - msg = ("\nThe default value for 'return_type' will change to " - "'axes' in a future release.\n To use the future behavior " - "now, 
set return_type='axes'.\n To keep the previous " - "behavior and silence this warning, set " - "return_type='dict'.") - warnings.warn(msg, FutureWarning, stacklevel=3) - return_type = 'dict' if ax is None: ax = _gca() data = data._get_numeric_data() @@ -3104,12 +3103,12 @@ def boxplot_frame_groupby(grouped, subplots=True, column=None, fontsize=None, figsize=figsize, layout=layout) axes = _flatten(axes) - ret = compat.OrderedDict() + ret = Series() for (key, group), ax in zip(grouped, axes): d = group.boxplot(ax=ax, column=column, fontsize=fontsize, rot=rot, grid=grid, **kwds) ax.set_title(pprint_thing(key)) - ret[key] = d + ret.loc[key] = d fig.subplots_adjust(bottom=0.15, top=0.9, left=0.1, right=0.9, wspace=0.2) else: @@ -3175,7 +3174,9 @@ def _grouped_plot_by_column(plotf, data, columns=None, by=None, _axes = _flatten(axes) - result = compat.OrderedDict() + result = Series() + ax_values = [] + for i, col in enumerate(columns): ax = _axes[i] gp_col = grouped[col] @@ -3183,9 +3184,11 @@ def _grouped_plot_by_column(plotf, data, columns=None, by=None, re_plotf = plotf(keys, values, ax, **kwargs) ax.set_title(col) ax.set_xlabel(pprint_thing(by)) - result[col] = re_plotf + ax_values.append(re_plotf) ax.grid(grid) + result = Series(ax_values, index=columns) + # Return axes in multiplot case, maybe revisit later # 985 if return_type is None: result = axes diff --git a/pandas/util/testing.py b/pandas/util/testing.py index f5a93d1f17d00..57bb01e5e0406 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -880,12 +880,12 @@ def assert_attr_equal(attr, left, right, obj='Attributes'): def assert_is_valid_plot_return_object(objs): import matplotlib.pyplot as plt - if isinstance(objs, np.ndarray): - for el in objs.flat: - assert isinstance(el, plt.Axes), ('one of \'objs\' is not a ' - 'matplotlib Axes instance, ' - 'type encountered {0!r}' - ''.format(el.__class__.__name__)) + if isinstance(objs, (pd.Series, np.ndarray)): + for el in objs.ravel(): + msg = ('one 
of \'objs\' is not a matplotlib Axes instance, ' + 'type encountered {0!r}') + assert isinstance(el, (plt.Axes, dict)), msg.format( + el.__class__.__name__) else: assert isinstance(objs, (plt.Artist, tuple, dict)), \ ('objs is neither an ndarray of Artist instances nor a '
Part of https://github.com/pydata/pandas/issues/6581. Deprecation started in https://github.com/pydata/pandas/pull/7096. This changes the default value of `return_type` in `DataFrame.boxplot` and `DataFrame.plot.box` from `None` to `'axes'`.
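A minimal sketch of the behaviour this PR standardizes, assuming matplotlib is installed (the non-interactive Agg backend is used so no window is opened):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 2), columns=["a", "b"])

ax = df.boxplot(return_type="axes")    # the new default: the matplotlib Axes
d = df.boxplot(return_type="dict")     # dict of the matplotlib Lines
res = df.boxplot(return_type="both")   # namedtuple with .ax and .lines
```

With `by=...`, a Series mapping each column to the chosen `return_type` is returned instead.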
https://api.github.com/repos/pandas-dev/pandas/pulls/12216
2016-02-03T02:51:32Z
2016-09-04T10:21:01Z
2016-09-04T10:21:01Z
2017-04-05T02:08:35Z
BUG: Strings like '2E' are incorrectly parsed as valid floats
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 421822380c2da..ac6267a15b513 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -784,6 +784,7 @@ Bug Fixes - Bug in ``read_excel`` failing to read data with one column when ``squeeze=True`` (:issue:`12157`) - Bug in ``.groupby`` where a ``KeyError`` was not raised for a wrong column if there was only one row in the dataframe (:issue:`11741`) - Bug in ``.read_csv`` with dtype specified on empty data producing an error (:issue:`12048`) +- Bug in ``.read_csv`` where strings like ``'2E'`` are treated as valid floats (:issue:`12237`) - Bug in building *pandas* with debugging symbols (:issue:`12123`) diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py index 7c68a44874631..d3020e337322b 100755 --- a/pandas/io/tests/test_parsers.py +++ b/pandas/io/tests/test_parsers.py @@ -29,6 +29,7 @@ import pandas.util.testing as tm import pandas as pd +from pandas.core.common import AbstractMethodError from pandas.compat import parse_date import pandas.lib as lib from pandas import compat @@ -2495,6 +2496,18 @@ def test_float_parser(self): expected = pd.DataFrame([[float(s) for s in data.split(',')]]) tm.assert_frame_equal(result, expected) + def float_precision_choices(self): + raise AbstractMethodError(self) + + def test_scientific_no_exponent(self): + # See PR 12215 + df = DataFrame.from_items([('w', ['2e']), ('x', ['3E']), + ('y', ['42e']), ('z', ['632E'])]) + data = df.to_csv(index=False) + for prec in self.float_precision_choices(): + df_roundtrip = self.read_csv(StringIO(data), float_precision=prec) + tm.assert_frame_equal(df_roundtrip, df) + def test_int64_overflow(self): data = """ID 00013007854817840016671868 @@ -2651,6 +2664,9 @@ def read_table(self, *args, **kwds): kwds['engine'] = 'python' return read_table(*args, **kwds) + def float_precision_choices(self): + return [None] + def test_sniff_delimiter(self): text = """index|A|B|C 
foo|1|2|3 @@ -3409,6 +3425,9 @@ def test_variable_width_unicode(self): class CParserTests(ParserTests): """ base class for CParser Testsing """ + def float_precision_choices(self): + return [None, 'high', 'round_trip'] + def test_buffer_overflow(self): # GH9205 # test certain malformed input files that cause buffer overflows in diff --git a/pandas/src/parse_helper.h b/pandas/src/parse_helper.h index 2cb1a7f017c62..d47e448700029 100644 --- a/pandas/src/parse_helper.h +++ b/pandas/src/parse_helper.h @@ -197,10 +197,12 @@ static double xstrtod(const char *str, char **endptr, char decimal, } // Process string of digits + num_digits = 0; n = 0; while (isdigit(*p)) { n = n * 10 + (*p - '0'); + num_digits++; p++; } @@ -208,6 +210,10 @@ static double xstrtod(const char *str, char **endptr, char decimal, exponent -= n; else exponent += n; + + // If no digits, after the 'e'/'E', un-consume it + if (num_digits == 0) + p--; } diff --git a/pandas/src/parser/tokenizer.c b/pandas/src/parser/tokenizer.c index 2e4a804a577b5..8fd3674047301 100644 --- a/pandas/src/parser/tokenizer.c +++ b/pandas/src/parser/tokenizer.c @@ -2225,10 +2225,12 @@ double xstrtod(const char *str, char **endptr, char decimal, } // Process string of digits + num_digits = 0; n = 0; while (isdigit(*p)) { n = n * 10 + (*p - '0'); + num_digits++; p++; } @@ -2236,6 +2238,10 @@ double xstrtod(const char *str, char **endptr, char decimal, exponent -= n; else exponent += n; + + // If no digits, after the 'e'/'E', un-consume it + if (num_digits == 0) + p--; } @@ -2396,10 +2402,12 @@ double precise_xstrtod(const char *str, char **endptr, char decimal, } // Process string of digits + num_digits = 0; n = 0; while (isdigit(*p)) { n = n * 10 + (*p - '0'); + num_digits++; p++; } @@ -2407,6 +2415,10 @@ double precise_xstrtod(const char *str, char **endptr, char decimal, exponent -= n; else exponent += n; + + // If no digits, after the 'e'/'E', un-consume it + if (num_digits == 0) + p--; } if (exponent > 308) diff --git 
a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py index 8422759192cc3..f3784a246eb4b 100644 --- a/pandas/tests/test_tseries.py +++ b/pandas/tests/test_tseries.py @@ -337,6 +337,13 @@ def test_convert_infs(): assert (result.dtype == np.float64) +def test_scientific_no_exponent(): + # See PR 12215 + arr = np.array(['42E', '2E', '99e', '6e'], dtype='O') + result = lib.maybe_convert_numeric(arr, set(), False, True) + assert np.all(np.isnan(result)) + + def test_convert_objects_ints(): # test that we can detect many kinds of integers dtypes = ['i1', 'i2', 'i4', 'i8', 'u1', 'u2', 'u4', 'u8']
A work colleague, David Chase, encountered some surprising behaviour, which can be reduced to the following. The DataFrame ``` DataFrame({'x': [2.5], 'y': [42], 'z': ['2E']}) ``` does not round-trip correctly through CSV: the string '2E' is interpreted as a valid float, but it should not be (according to the strtod(3) man page, which seems a reasonable spec). This PR changes the three variants of `xstrtod()` to reject a string in which no digits follow the 'e' or 'E', and includes tests for this case.
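A minimal reproduction of the round-trip; with the fix in place, the string survives as-is:

```python
from io import StringIO

import pandas as pd

df = pd.DataFrame({"x": [2.5], "y": [42], "z": ["2E"]})
data = df.to_csv(index=False)

# '2E' has no digits after the exponent marker, so the fixed parser
# leaves it as a string instead of treating it as the float 2.0.
df_roundtrip = pd.read_csv(StringIO(data))
```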
https://api.github.com/repos/pandas-dev/pandas/pulls/12215
2016-02-03T00:12:18Z
2016-02-09T22:03:11Z
null
2016-02-09T22:15:54Z
DOC: Added deprecation to convert_objects docstring #12052
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 958571fdc2218..0556cb95f7d44 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -2724,6 +2724,8 @@ def _convert(self, datetime=False, numeric=False, timedelta=False, def convert_objects(self, convert_dates=True, convert_numeric=False, convert_timedeltas=True, copy=True): """ + Deprecated. + Attempt to infer better dtype for object columns Parameters @@ -2742,6 +2744,15 @@ def convert_objects(self, convert_dates=True, convert_numeric=False, conversion was done). Note: This is meant for internal use, and should not be confused with inplace. + See also + -------- + pandas.to_datetime : Convert argument to datetime. + + pandas.to_timedelta : Convert argument to timedelta. + + pandas.to_numeric : Return a fixed frequency timedelta index, + with day as the default. + Returns ------- converted : same as input object
Added deprecation warning to the convert_objects docstring. Closes #12052.
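A sketch of the explicit converters the new "See also" section points users to in place of the deprecated method:

```python
import pandas as pd

# Instead of df.convert_objects(...), use the per-type converters:
nums = pd.to_numeric(pd.Series(["1", "2", "3"]))
dates = pd.to_datetime(pd.Series(["2016-01-01", "2016-02-01"]))
deltas = pd.to_timedelta(pd.Series(["1 days", "2 days"]))
```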
https://api.github.com/repos/pandas-dev/pandas/pulls/12209
2016-02-02T17:36:47Z
2016-02-06T21:50:04Z
null
2016-02-06T21:50:36Z
STYLE: final flake8 fixes, add back check for travis-ci
diff --git a/.travis.yml b/.travis.yml index 7dbc2fb821162..565d52184d0f1 100644 --- a/.travis.yml +++ b/.travis.yml @@ -164,11 +164,10 @@ script: - echo "script" - ci/run_build_docs.sh - ci/script.sh + - ci/lint.sh # nothing here, or failed tests won't fail travis after_script: - ci/install_test.sh - source activate pandas && ci/print_versions.py - ci/print_skipped.py /tmp/nosetests.xml - - ci/lint.sh - - ci/lint_ok_for_now.sh diff --git a/ci/lint.sh b/ci/lint.sh index 97d318b48469e..4350ecd8b11ed 100755 --- a/ci/lint.sh +++ b/ci/lint.sh @@ -4,17 +4,14 @@ echo "inside $0" source activate pandas -for path in 'core' +RET=0 +for path in 'core' 'io' 'stats' 'compat' 'sparse' 'tools' 'tseries' 'tests' 'computation' 'util' do echo "linting -> pandas/$path" - flake8 pandas/$path --filename '*.py' --statistics -q + flake8 pandas/$path --filename '*.py' + if [ $? -ne "0" ]; then + RET=1 + fi done -RET="$?" - -# we are disabling the return code for now -# to have Travis-CI pass. When the code -# passes linting, re-enable -#exit "$RET" - -exit 0 +exit $RET diff --git a/ci/lint_ok_for_now.sh b/ci/lint_ok_for_now.sh deleted file mode 100755 index eba667fadde06..0000000000000 --- a/ci/lint_ok_for_now.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash - -echo "inside $0" - -source activate pandas - -for path in 'io' 'stats' 'computation' 'tseries' 'util' 'compat' 'tools' 'sparse' 'tests' -do - echo "linting [ok_for_now] -> pandas/$path" - flake8 pandas/$path --filename '*.py' --statistics -q -done - -RET="$?" - -# we are disabling the return code for now -# to have Travis-CI pass. 
When the code -# passes linting, re-enable -#exit "$RET" - -exit 0 diff --git a/pandas/compat/numpy_compat.py b/pandas/compat/numpy_compat.py index f7f5da40d01c5..e4aeb05177aa4 100644 --- a/pandas/compat/numpy_compat.py +++ b/pandas/compat/numpy_compat.py @@ -19,7 +19,8 @@ _np_version_under1p11 = LooseVersion(_np_version) < '1.11' if LooseVersion(_np_version) < '1.7.0': - raise ImportError('this version of pandas is incompatible with numpy < 1.7.0\n' + raise ImportError('this version of pandas is incompatible with ' + 'numpy < 1.7.0\n' 'your numpy version is {0}.\n' 'Please upgrade numpy to >= 1.7.0 to use ' 'this pandas version'.format(_np_version)) @@ -61,7 +62,7 @@ def np_array_datetime64_compat(arr, *args, **kwargs): isinstance(arr, string_and_binary_types): arr = [tz_replacer(s) for s in arr] else: - arr = tz_replacer(s) + arr = tz_replacer(arr) return np.array(arr, *args, **kwargs) diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 33c3a3638ee72..27e932cb54b95 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -51,7 +51,6 @@ from pandas.tseries.index import DatetimeIndex from pandas.tseries.tdi import TimedeltaIndex -import pandas.core.algorithms as algos import pandas.core.base as base import pandas.core.common as com import pandas.core.format as fmt diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py index fbc25e7fdb98d..698bbcb2538b9 100644 --- a/pandas/core/groupby.py +++ b/pandas/core/groupby.py @@ -2310,10 +2310,9 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True): except Exception: all_in_columns = False - if (not any_callable and not all_in_columns - and not any_arraylike and not any_groupers - and match_axis_length - and level is None): + if not any_callable and not all_in_columns and \ + not any_arraylike and not any_groupers and \ + match_axis_length and level is None: keys = [com._asarray_tuplesafe(keys)] if isinstance(level, (tuple, list)): @@ -3695,7 +3694,7 @@ def count(self): return 
self._wrap_agged_blocks(data.items, list(blk)) -from pandas.tools.plotting import boxplot_frame_groupby +from pandas.tools.plotting import boxplot_frame_groupby # noqa DataFrameGroupBy.boxplot = boxplot_frame_groupby diff --git a/pandas/core/internals.py b/pandas/core/internals.py index 6e9005395281c..10053d33d6b51 100644 --- a/pandas/core/internals.py +++ b/pandas/core/internals.py @@ -3981,7 +3981,7 @@ def form_blocks(arrays, names, axes): klass=DatetimeTZBlock, fastpath=True, placement=[i], ) - for i, names, array in datetime_tz_items] + for i, _, array in datetime_tz_items] blocks.extend(dttz_blocks) if len(bool_items): @@ -3999,7 +3999,7 @@ def form_blocks(arrays, names, axes): if len(cat_items) > 0: cat_blocks = [make_block(array, klass=CategoricalBlock, fastpath=True, placement=[i]) - for i, names, array in cat_items] + for i, _, array in cat_items] blocks.extend(cat_blocks) if len(extra_locs): diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py index 05257dd0ac625..a31efc63269b6 100644 --- a/pandas/core/reshape.py +++ b/pandas/core/reshape.py @@ -282,7 +282,7 @@ def _unstack_multiple(data, clocs): for i in range(len(clocs)): val = clocs[i] result = result.unstack(val) - clocs = [val if i > val else val - 1 for val in clocs] + clocs = [v if i > v else v - 1 for v in clocs] return result diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py index 4bacaadd915d1..524508b828980 100644 --- a/pandas/sparse/panel.py +++ b/pandas/sparse/panel.py @@ -520,10 +520,10 @@ def _convert_frames(frames, index, columns, fill_value=np.nan, kind='block'): output[item] = df if index is None: - all_indexes = [df.index for df in output.values()] + all_indexes = [x.index for x in output.values()] index = _get_combined_index(all_indexes) if columns is None: - all_columns = [df.columns for df in output.values()] + all_columns = [x.columns for x in output.values()] columns = _get_combined_index(all_columns) index = _ensure_index(index) diff --git 
a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py index 818e2fb89008d..e68b94342985d 100644 --- a/pandas/tests/frame/test_apply.py +++ b/pandas/tests/frame/test_apply.py @@ -262,8 +262,8 @@ def transform(row): return row def transform2(row): - if (notnull(row['C']) and row['C'].startswith('shin') - and row['A'] == 'foo'): + if (notnull(row['C']) and row['C'].startswith('shin') and + row['A'] == 'foo'): row['D'] = 7 return row diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py index ff9567c8a40b1..6077c8e6f63ee 100644 --- a/pandas/tests/frame/test_indexing.py +++ b/pandas/tests/frame/test_indexing.py @@ -2178,8 +2178,8 @@ def test_where(self): def _safe_add(df): # only add to the numeric items def is_ok(s): - return (issubclass(s.dtype.type, (np.integer, np.floating)) - and s.dtype != 'uint8') + return (issubclass(s.dtype.type, (np.integer, np.floating)) and + s.dtype != 'uint8') return DataFrame(dict([(c, s + 1) if is_ok(s) else (c, s) for c, s in compat.iteritems(df)])) diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py index 52594b982a0d0..6db507f0e4151 100644 --- a/pandas/tests/frame/test_query_eval.py +++ b/pandas/tests/frame/test_query_eval.py @@ -581,8 +581,7 @@ def test_chained_cmp_and_in(self): df = DataFrame(randn(100, len(cols)), columns=cols) res = df.query('a < b < c and a not in b not in c', engine=engine, parser=parser) - ind = ((df.a < df.b) & (df.b < df.c) & ~df.b.isin(df.a) & - ~df.c.isin(df.b)) + ind = (df.a < df.b) & (df.b < df.c) & ~df.b.isin(df.a) & ~df.c.isin(df.b) # noqa expec = df[ind] assert_frame_equal(res, expec) diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py index c5c005beeb69e..2dda4a37e6449 100644 --- a/pandas/tests/frame/test_repr_info.py +++ b/pandas/tests/frame/test_repr_info.py @@ -328,13 +328,13 @@ def test_info_memory_usage(self): res = buf.getvalue().splitlines() 
self.assertTrue(re.match(r"memory usage: [^+]+$", res[-1])) - self.assertTrue(df_with_object_index.memory_usage(index=True, - deep=True).sum() - > df_with_object_index.memory_usage(index=True).sum()) + self.assertGreater(df_with_object_index.memory_usage(index=True, + deep=True).sum(), + df_with_object_index.memory_usage(index=True).sum()) df_object = pd.DataFrame({'a': ['a']}) - self.assertTrue(df_object.memory_usage(deep=True).sum() - > df_object.memory_usage().sum()) + self.assertGreater(df_object.memory_usage(deep=True).sum(), + df_object.memory_usage().sum()) # Test a DataFrame with duplicate columns dtypes = ['int64', 'int64', 'int64', 'float64'] diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py index c0963d885a08d..e7d64324e6590 100644 --- a/pandas/tests/frame/test_reshape.py +++ b/pandas/tests/frame/test_reshape.py @@ -10,7 +10,8 @@ import numpy as np from pandas.compat import u -from pandas import DataFrame, Index, Series, MultiIndex, date_range, Timedelta, Period +from pandas import (DataFrame, Index, Series, MultiIndex, date_range, + Timedelta, Period) import pandas as pd from pandas.util.testing import (assert_series_equal, diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py index 10a5b9dbefe02..99f894bfd3320 100644 --- a/pandas/tests/test_base.py +++ b/pandas/tests/test_base.py @@ -332,7 +332,7 @@ def test_none_comparison(self): self.assertTrue(result.iat[0]) self.assertTrue(result.iat[1]) - result = None == o + result = None == o # noqa self.assertFalse(result.iat[0]) self.assertFalse(result.iat[1]) diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py index 0f99d367de6fd..c175630748b38 100644 --- a/pandas/tests/test_groupby.py +++ b/pandas/tests/test_groupby.py @@ -3858,8 +3858,8 @@ def test_groupby_categorical(self): np.arange(4).repeat(8), levels, ordered=True) exp = CategoricalIndex(expc) self.assert_index_equal(desc_result.index.get_level_values(0), exp) - exp = Index(['count', 
'mean', 'std', 'min', '25%', '50%', '75%', 'max'] - * 4) + exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', + '75%', 'max'] * 4) self.assert_index_equal(desc_result.index.get_level_values(1), exp) def test_groupby_datetime_categorical(self): @@ -3899,8 +3899,8 @@ def test_groupby_datetime_categorical(self): np.arange(4).repeat(8), levels, ordered=True) exp = CategoricalIndex(expc) self.assert_index_equal(desc_result.index.get_level_values(0), exp) - exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', '75%', 'max'] - * 4) + exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', + '75%', 'max'] * 4) self.assert_index_equal(desc_result.index.get_level_values(1), exp) def test_groupby_categorical_index(self): diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py index cd9f44317da49..9f8d672723954 100644 --- a/pandas/tests/test_multilevel.py +++ b/pandas/tests/test_multilevel.py @@ -105,8 +105,8 @@ def test_append_index(self): expected = Index._simple_new( np.array([(1.1, datetime.datetime(2011, 1, 1, tzinfo=tz), 'A'), (1.2, datetime.datetime(2011, 1, 2, tzinfo=tz), 'B'), - (1.3, datetime.datetime(2011, 1, 3, tzinfo=tz), 'C')] - + expected_tuples), None) + (1.3, datetime.datetime(2011, 1, 3, tzinfo=tz), 'C')] + + expected_tuples), None) self.assertTrue(result.equals(expected)) def test_dataframe_constructor(self): diff --git a/pandas/tests/test_style.py b/pandas/tests/test_style.py index b9ca3f331711d..9a427cb26520c 100644 --- a/pandas/tests/test_style.py +++ b/pandas/tests/test_style.py @@ -1,6 +1,13 @@ import os from nose import SkipTest +import copy +import numpy as np +import pandas as pd +from pandas import DataFrame +from pandas.util.testing import TestCase +import pandas.util.testing as tm + # this is a mess. Getting failures on a python 2.7 build with # whenever we try to import jinja, whether it's installed or not. 
# so we're explicitly skipping that one *before* we try to import @@ -14,14 +21,6 @@ except ImportError: raise SkipTest("No Jinja2") -import copy - -import numpy as np -import pandas as pd -from pandas import DataFrame -from pandas.util.testing import TestCase -import pandas.util.testing as tm - class TestStyler(TestCase): @@ -196,8 +195,8 @@ def test_apply_subset(self): expected = dict(((r, c), ['color: baz']) for r, row in enumerate(self.df.index) for c, col in enumerate(self.df.columns) - if row in self.df.loc[slice_].index - and col in self.df.loc[slice_].columns) + if row in self.df.loc[slice_].index and + col in self.df.loc[slice_].columns) self.assertEqual(result, expected) def test_applymap_subset(self): @@ -213,8 +212,8 @@ def f(x): expected = dict(((r, c), ['foo: bar']) for r, row in enumerate(self.df.index) for c, col in enumerate(self.df.columns) - if row in self.df.loc[slice_].index - and col in self.df.loc[slice_].columns) + if row in self.df.loc[slice_].index and + col in self.df.loc[slice_].columns) self.assertEqual(result, expected) def test_empty(self): diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py index 65f90c320bb68..0b8de24a1bd42 100644 --- a/pandas/tests/test_window.py +++ b/pandas/tests/test_window.py @@ -1002,32 +1002,31 @@ def simple_wma(s, w): return (s.multiply(w).cumsum() / w.cumsum()).fillna(method='ffill') for (s, adjust, ignore_na, w) in [ - (s0, True, False, [np.nan, (1. - alpha), 1.]), - (s0, True, True, [np.nan, (1. - alpha), 1.]), - (s0, False, False, [np.nan, (1. - alpha), alpha]), - (s0, False, True, [np.nan, (1. - alpha), alpha]), - (s1, True, False, [(1. - alpha) ** 2, np.nan, 1.]), - (s1, True, True, [(1. - alpha), np.nan, 1.]), - (s1, False, False, [(1. - alpha) ** 2, np.nan, alpha]), - (s1, False, True, [(1. - alpha), np.nan, alpha]), - (s2, True, False, [np.nan, (1. - alpha) - ** 3, np.nan, np.nan, 1., np.nan]), - (s2, True, True, [np.nan, (1. 
- alpha), - np.nan, np.nan, 1., np.nan]), - (s2, False, False, [np.nan, (1. - alpha) - ** 3, np.nan, np.nan, alpha, np.nan]), - (s2, False, True, [np.nan, (1. - alpha), - np.nan, np.nan, alpha, np.nan]), - (s3, True, False, [(1. - alpha) - ** 3, np.nan, (1. - alpha), 1.]), - (s3, True, True, [(1. - alpha) ** - 2, np.nan, (1. - alpha), 1.]), - (s3, False, False, [(1. - alpha) ** 3, np.nan, - (1. - alpha) * alpha, - alpha * ((1. - alpha) ** 2 + alpha)]), - (s3, False, True, [(1. - alpha) ** 2, - np.nan, (1. - alpha) * alpha, alpha]), - ]: + (s0, True, False, [np.nan, (1. - alpha), 1.]), + (s0, True, True, [np.nan, (1. - alpha), 1.]), + (s0, False, False, [np.nan, (1. - alpha), alpha]), + (s0, False, True, [np.nan, (1. - alpha), alpha]), + (s1, True, False, [(1. - alpha) ** 2, np.nan, 1.]), + (s1, True, True, [(1. - alpha), np.nan, 1.]), + (s1, False, False, [(1. - alpha) ** 2, np.nan, alpha]), + (s1, False, True, [(1. - alpha), np.nan, alpha]), + (s2, True, False, [np.nan, (1. - alpha) ** + 3, np.nan, np.nan, 1., np.nan]), + (s2, True, True, [np.nan, (1. - alpha), + np.nan, np.nan, 1., np.nan]), + (s2, False, False, [np.nan, (1. - alpha) ** + 3, np.nan, np.nan, alpha, np.nan]), + (s2, False, True, [np.nan, (1. - alpha), + np.nan, np.nan, alpha, np.nan]), + (s3, True, False, [(1. - alpha) ** + 3, np.nan, (1. - alpha), 1.]), + (s3, True, True, [(1. - alpha) ** + 2, np.nan, (1. - alpha), 1.]), + (s3, False, False, [(1. - alpha) ** 3, np.nan, + (1. - alpha) * alpha, + alpha * ((1. - alpha) ** 2 + alpha)]), + (s3, False, True, [(1. - alpha) ** 2, + np.nan, (1. 
- alpha) * alpha, alpha])]: expected = simple_wma(s, Series(w)) result = s.ewm(com=com, adjust=adjust, ignore_na=ignore_na).mean() diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py index cac5d8ae3cf51..82fdf0a3d3b46 100644 --- a/pandas/tools/merge.py +++ b/pandas/tools/merge.py @@ -470,8 +470,7 @@ def _get_merge_keys(self): def _validate_specification(self): # Hm, any way to make this logic less complicated?? - if (self.on is None and self.left_on is None - and self.right_on is None): + if self.on is None and self.left_on is None and self.right_on is None: if self.left_index and self.right_index: self.left_on, self.right_on = (), () @@ -1185,7 +1184,7 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None): names = list(names) else: # make sure that all of the passed indices have the same nlevels - if not len(set([i.nlevels for i in indexes])) == 1: + if not len(set([idx.nlevels for idx in indexes])) == 1: raise AssertionError("Cannot concat indices that do" " not have the same number of levels") diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py index 120a30199e522..8e920e0a99a7a 100644 --- a/pandas/tools/pivot.py +++ b/pandas/tools/pivot.py @@ -357,8 +357,8 @@ def _convert_by(by): if by is None: by = [] elif (np.isscalar(by) or isinstance(by, (np.ndarray, Index, - Series, Grouper)) - or hasattr(by, '__call__')): + Series, Grouper)) or + hasattr(by, '__call__')): by = [by] else: by = list(by) diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 8f7c0a2b1be9a..03d9fe75da8cc 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -108,8 +108,8 @@ def _mpl_ge_1_3_1(): import matplotlib # The or v[0] == '0' is because their versioneer is # messed up on dev - return (matplotlib.__version__ >= LooseVersion('1.3.1') - or matplotlib.__version__[0] == '0') + return (matplotlib.__version__ >= LooseVersion('1.3.1') or + matplotlib.__version__[0] == '0') except ImportError: return False @@ -117,8 +117,8 @@ def 
_mpl_ge_1_3_1(): def _mpl_ge_1_4_0(): try: import matplotlib - return (matplotlib.__version__ >= LooseVersion('1.4') - or matplotlib.__version__[0] == '0') + return (matplotlib.__version__ >= LooseVersion('1.4') or + matplotlib.__version__[0] == '0') except ImportError: return False @@ -126,8 +126,8 @@ def _mpl_ge_1_4_0(): def _mpl_ge_1_5_0(): try: import matplotlib - return (matplotlib.__version__ >= LooseVersion('1.5') - or matplotlib.__version__[0] == '0') + return (matplotlib.__version__ >= LooseVersion('1.5') or + matplotlib.__version__[0] == '0') except ImportError: return False @@ -1789,10 +1789,10 @@ def _update_stacker(cls, ax, stacking_id, values): ax._stacker_neg_prior[stacking_id] += values def _post_plot_logic(self, ax, data): - condition = (not self._use_dynamic_x() - and data.index.is_all_dates - and not self.subplots - or (self.subplots and self.sharex)) + condition = (not self._use_dynamic_x() and + data.index.is_all_dates and + not self.subplots or + (self.subplots and self.sharex)) index_name = self._get_index_name() @@ -2186,8 +2186,8 @@ def blank_labeler(label, value): # Blank out labels for values of 0 so they don't overlap # with nonzero wedges if labels is not None: - blabels = [blank_labeler(label, value) for - label, value in zip(labels, y)] + blabels = [blank_labeler(l, value) for + l, value in zip(labels, y)] else: blabels = None results = ax.pie(y, labels=blabels, **kwds) @@ -2331,7 +2331,7 @@ def _make_plot(self): self.maybe_color_bp(bp) self._return_obj = ret - labels = [l for l, y in self._iter_data()] + labels = [l for l, _ in self._iter_data()] labels = [com.pprint_thing(l) for l in labels] if not self.use_index: labels = [com.pprint_thing(key) for key in range(len(labels))] diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py index d83b0e3f250ca..a150e55b06ff3 100644 --- a/pandas/tseries/frequencies.py +++ b/pandas/tseries/frequencies.py @@ -6,6 +6,7 @@ import numpy as np +import pandas.core.algorithms as 
algos from pandas.core.algorithms import unique from pandas.tseries.offsets import DateOffset from pandas.util.decorators import cache_readonly @@ -1100,8 +1101,6 @@ def _get_wom_rule(self): return 'WOM-%d%s' % (week, wd) -import pandas.core.algorithms as algos - class _TimedeltaFrequencyInferer(_FrequencyInferer): diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py index a632913fbe4fe..77aa05bc1189d 100644 --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -1079,9 +1079,9 @@ def _maybe_utc_convert(self, other): def _wrap_joined_index(self, joined, other): name = self.name if self.name == other.name else None - if (isinstance(other, DatetimeIndex) - and self.offset == other.offset - and self._can_fast_union(other)): + if (isinstance(other, DatetimeIndex) and + self.offset == other.offset and + self._can_fast_union(other)): joined = self._shallow_copy(joined) joined.name = name return joined diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py index 50c0a1ab7f336..1a666f5ed012b 100644 --- a/pandas/tseries/offsets.py +++ b/pandas/tseries/offsets.py @@ -13,6 +13,7 @@ from pandas.tslib import Timestamp, OutOfBoundsDatetime, Timedelta import functools +import operator __all__ = ['Day', 'BusinessDay', 'BDay', 'CustomBusinessDay', 'CDay', 'CBMonthEnd', 'CBMonthBegin', @@ -111,8 +112,8 @@ def wrapper(self, other): def _is_normalized(dt): - if (dt.hour != 0 or dt.minute != 0 or dt.second != 0 - or dt.microsecond != 0 or getattr(dt, 'nanosecond', 0) != 0): + if (dt.hour != 0 or dt.minute != 0 or dt.second != 0 or + dt.microsecond != 0 or getattr(dt, 'nanosecond', 0) != 0): return False return True @@ -268,8 +269,8 @@ def apply_index(self, i): if (self._use_relativedelta and set(self.kwds).issubset(relativedelta_fast)): - months = ((self.kwds.get('years', 0) * 12 - + self.kwds.get('months', 0)) * self.n) + months = ((self.kwds.get('years', 0) * 12 + + self.kwds.get('months', 0)) * self.n) if months: shifted = tslib.shift_months(i.asi8, 
months) i = i._shallow_copy(shifted) @@ -321,8 +322,8 @@ def __repr__(self): exclude = set(['n', 'inc', 'normalize']) attrs = [] for attr in sorted(self.__dict__): - if ((attr == 'kwds' and len(self.kwds) == 0) - or attr.startswith('_')): + if ((attr == 'kwds' and len(self.kwds) == 0) or + attr.startswith('_')): continue elif attr == 'kwds': kwds_new = {} @@ -2437,8 +2438,6 @@ def onOffset(self, dt): # --------------------------------------------------------------------- # Ticks -import operator - def _tick_comp(op): def f(self, other): diff --git a/pandas/tseries/tdi.py b/pandas/tseries/tdi.py index fafe13e1f2c09..9129a156848a9 100644 --- a/pandas/tseries/tdi.py +++ b/pandas/tseries/tdi.py @@ -522,8 +522,8 @@ def join(self, other, how='left', level=None, return_indexers=False): def _wrap_joined_index(self, joined, other): name = self.name if self.name == other.name else None - if (isinstance(other, TimedeltaIndex) and self.freq == other.freq - and self._can_fast_union(other)): + if (isinstance(other, TimedeltaIndex) and self.freq == other.freq and + self._can_fast_union(other)): joined = self._shallow_copy(joined, name=name) return joined else: diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py index c46d21d2a8759..901d9f41e3949 100644 --- a/pandas/tseries/tests/test_offsets.py +++ b/pandas/tseries/tests/test_offsets.py @@ -4209,8 +4209,9 @@ def _test_offset(self, offset_name, offset_n, tstart, expected_utc_offset): self.assertTrue(timedelta(offset.kwds['days']) + tstart.date() == t.date()) # expect the same hour of day, minute, second, ... 
- self.assertTrue(t.hour == tstart.hour and t.minute == tstart.minute - and t.second == tstart.second) + self.assertTrue(t.hour == tstart.hour and + t.minute == tstart.minute and + t.second == tstart.second) elif offset_name in self.valid_date_offsets_singular: # expect the signular offset value to match between tstart and t datepart_offset = getattr(t, offset_name @@ -4223,8 +4224,10 @@ def _test_offset(self, offset_name, offset_n, tstart, expected_utc_offset): ).tz_convert('US/Pacific')) def _make_timestamp(self, string, hrs_offset, tz): - offset_string = '{hrs:02d}00'.format(hrs=hrs_offset) if hrs_offset >= 0 else \ - '-{hrs:02d}00'.format(hrs=-1 * hrs_offset) + if hrs_offset >= 0: + offset_string = '{hrs:02d}00'.format(hrs=hrs_offset) + else: + offset_string = '-{hrs:02d}00'.format(hrs=-1 * hrs_offset) return Timestamp(string + offset_string).tz_convert(tz) def test_fallback_plural(self): diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py index b9326aa8e1c60..c761a35649ba9 100644 --- a/pandas/tseries/tests/test_resample.py +++ b/pandas/tseries/tests/test_resample.py @@ -1276,9 +1276,9 @@ def test_resample_anchored_multiday(self): s = pd.Series(np.random.randn(5), index=pd.date_range('2014-10-14 23:06:23.206', - periods=3, freq='400L') - | pd.date_range('2014-10-15 23:00:00', - periods=2, freq='2200L')) + periods=3, freq='400L') | + pd.date_range('2014-10-15 23:00:00', + periods=2, freq='2200L')) # Ensure left closing works result = s.resample('2200L').mean() diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py index 9d80489904eb5..5665e502b8558 100644 --- a/pandas/tseries/tests/test_timeseries.py +++ b/pandas/tseries/tests/test_timeseries.py @@ -2451,7 +2451,7 @@ def test_constructor_datetime64_tzformat(self): self.assert_numpy_array_equal(idx.asi8, expected_i8.asi8) tm._skip_if_no_dateutil() - from dateutil.tz import tzoffset + # Non ISO 8601 format results in 
dateutil.tz.tzoffset for freq in ['AS', 'W-SUN']: idx = date_range('2013/1/1 0:00:00-5:00', '2016/1/1 23:59:59-5:00', diff --git a/pandas/tseries/tests/test_timeseries_legacy.py b/pandas/tseries/tests/test_timeseries_legacy.py index 96a9fd67733a1..086f23cd2d4fd 100644 --- a/pandas/tseries/tests/test_timeseries_legacy.py +++ b/pandas/tseries/tests/test_timeseries_legacy.py @@ -2,11 +2,8 @@ from datetime import datetime import sys import os - import nose - import numpy as np -randn = np.random.randn from pandas import (Index, Series, date_range, Timestamp, DatetimeIndex, Int64Index, to_datetime) @@ -24,6 +21,8 @@ import pandas.compat as compat from pandas.core.datetools import BDay +randn = np.random.randn + # infortunately, too much has changed to handle these legacy pickles # class TestLegacySupport(unittest.TestCase): diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py index 5f3f1f09729be..4c6ec91ad1f18 100644 --- a/pandas/tseries/tests/test_tslib.py +++ b/pandas/tseries/tests/test_tslib.py @@ -29,8 +29,8 @@ def test_constructor(self): # confirm base representation is correct import calendar - self.assertEqual(calendar.timegm(base_dt.timetuple()) - * 1000000000, base_expected) + self.assertEqual(calendar.timegm(base_dt.timetuple()) * 1000000000, + base_expected) tests = [(base_str, base_dt, base_expected), ('2014-07-01 10:00', datetime.datetime(2014, 7, 1, 10), @@ -89,8 +89,8 @@ def test_constructor_with_stringoffset(self): # confirm base representation is correct import calendar - self.assertEqual(calendar.timegm(base_dt.timetuple()) - * 1000000000, base_expected) + self.assertEqual(calendar.timegm(base_dt.timetuple()) * 1000000000, + base_expected) tests = [(base_str, base_expected), ('2014-07-01 12:00:00+02:00', diff --git a/pandas/util/nosetester.py b/pandas/util/nosetester.py index fdee9be20afa3..445cb79978fc1 100644 --- a/pandas/util/nosetester.py +++ b/pandas/util/nosetester.py @@ -138,7 +138,8 @@ def test(self, 
label='fast', verbose=1, extra_argv=None, * 'full' - fast (as above) and slow tests as in the 'no -A' option to nosetests - this is the same as ''. * None or '' - run all tests. - * attribute_identifier - string passed directly to nosetests as '-A'. + * attribute_identifier - string passed directly to nosetests + as '-A'. verbose : int, optional Verbosity value for test outputs, in the range 1-10. Default is 1. extra_argv : list, optional @@ -200,8 +201,9 @@ def test(self, label='fast', verbose=1, extra_argv=None, # Reset the warning filters to the default state, # so that running the tests is more repeatable. warnings.resetwarnings() - # Set all warnings to 'warn', this is because the default 'once' - # has the bad property of possibly shadowing later warnings. + # Set all warnings to 'warn', this is because the default + # 'once' has the bad property of possibly shadowing later + # warnings. warnings.filterwarnings('always') # Force the requested warnings to raise for warningtype in raise_warnings:
closes #11928
https://api.github.com/repos/pandas-dev/pandas/pulls/12208
2016-02-02T17:27:08Z
2016-02-03T14:06:28Z
2016-02-03T14:06:28Z
2016-02-03T14:06:28Z
ENH: formatting integers in FloatIndex as floats
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index ffba681565f48..5acda44c63161 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -185,6 +185,48 @@ In addition, ``.round()``, ``.floor()`` and ``.ceil()`` will be available thru t s s.dt.round('D') + +Formatting of integer in FloatIndex +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Integers in ``FloatIndex``, e.g. 1., are now formatted with a decimal point +and a ``0`` digit, e.g. ``1.0`` (:issue:`11713`) + +This change affects the display in jupyter, but also the output of IO methods +like ``.to_csv`` or ``.to_html`` + +Previous Behavior: + +.. code-block:: python + + In [2]: s = Series([1,2,3], index=np.arange(3.)) + + In [3]: s + Out[3]: + 0 1 + 1 2 + 2 3 + dtype: int64 + + In [4]: s.index + Out[4]: Float64Index([0.0, 1.0, 2.0], dtype='float64') + + In [5]: print(s.to_csv(path=None)) + 0,1 + 1,2 + 2,3 + + +New Behavior: + +.. ipython:: python + + s = Series([1,2,3], index=np.arange(3.)) + s + s.index + print(s.to_csv(path=None)) + + .. _whatsnew_0180.enhancements.other: Other enhancements diff --git a/pandas/core/format.py b/pandas/core/format.py index 10b67d6229234..d7f3a669de9f4 100644 --- a/pandas/core/format.py +++ b/pandas/core/format.py @@ -2115,7 +2115,9 @@ def _format_strings(self): abs_vals = np.abs(self.values) # this is pretty arbitrary for now - has_large_values = (abs_vals > 1e8).any() + # large values: more that 8 characters including decimal symbol + # and first digit, hence > 1e6 + has_large_values = (abs_vals > 1e6).any() has_small_values = ((abs_vals < 10**(-self.digits)) & (abs_vals > 0)).any() @@ -2367,7 +2369,7 @@ def just(x): def _trim_zeros(str_floats, na_rep='NaN'): """ - Trims zeros and decimal points. + Trims zeros, leaving just one before the decimal points if need be. 
""" trimmed = str_floats @@ -2379,8 +2381,8 @@ def _cond(values): while _cond(trimmed): trimmed = [x[:-1] if x != na_rep else x for x in trimmed] - # trim decimal points - return [x[:-1] if x.endswith('.') and x != na_rep else x for x in trimmed] + # leave one 0 after the decimal points if need be. + return [x + "0" if x.endswith('.') and x != na_rep else x for x in trimmed] def single_column_table(column, align=None, style=None): diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py index b7691033dfc83..2ad25ad738649 100644 --- a/pandas/tests/test_format.py +++ b/pandas/tests/test_format.py @@ -204,7 +204,7 @@ def test_repr_chop_threshold(self): self.assertEqual(repr(df), ' 0 1\n0 0.0 0.5\n1 0.5 0.0') with option_context("display.chop_threshold", 0.6): - self.assertEqual(repr(df), ' 0 1\n0 0 0\n1 0 0') + self.assertEqual(repr(df), ' 0 1\n0 0.0 0.0\n1 0.0 0.0') with option_context("display.chop_threshold", None): self.assertEqual(repr(df), ' 0 1\n0 0.1 0.5\n1 0.5 -0.1') @@ -753,7 +753,7 @@ def test_to_html_with_empty_string_label(self): def test_to_html_unicode(self): df = DataFrame({u('\u03c3'): np.arange(10.)}) - expected = u'<table border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>\u03c3</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>0</td>\n </tr>\n <tr>\n <th>1</th>\n <td>1</td>\n </tr>\n <tr>\n <th>2</th>\n <td>2</td>\n </tr>\n <tr>\n <th>3</th>\n <td>3</td>\n </tr>\n <tr>\n <th>4</th>\n <td>4</td>\n </tr>\n <tr>\n <th>5</th>\n <td>5</td>\n </tr>\n <tr>\n <th>6</th>\n <td>6</td>\n </tr>\n <tr>\n <th>7</th>\n <td>7</td>\n </tr>\n <tr>\n <th>8</th>\n <td>8</td>\n </tr>\n <tr>\n <th>9</th>\n <td>9</td>\n </tr>\n </tbody>\n</table>' + expected = u'<table border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>\u03c3</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>0.0</td>\n </tr>\n <tr>\n <th>1</th>\n <td>1.0</td>\n </tr>\n <tr>\n <th>2</th>\n 
<td>2.0</td>\n </tr>\n <tr>\n <th>3</th>\n <td>3.0</td>\n </tr>\n <tr>\n <th>4</th>\n <td>4.0</td>\n </tr>\n <tr>\n <th>5</th>\n <td>5.0</td>\n </tr>\n <tr>\n <th>6</th>\n <td>6.0</td>\n </tr>\n <tr>\n <th>7</th>\n <td>7.0</td>\n </tr>\n <tr>\n <th>8</th>\n <td>8.0</td>\n </tr>\n <tr>\n <th>9</th>\n <td>9.0</td>\n </tr>\n </tbody>\n</table>' self.assertEqual(df.to_html(), expected) df = DataFrame({'A': [u('\u03c3')]}) expected = u'<table border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>A</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>\u03c3</td>\n </tr>\n </tbody>\n</table>' @@ -1916,12 +1916,12 @@ def test_to_string_format_na(self): 'B': [np.nan, 'foo', 'foooo', 'fooooo', 'bar']}) result = df.to_string() - expected = (' A B\n' - '0 NaN NaN\n' - '1 -1 foo\n' - '2 -2 foooo\n' - '3 3 fooooo\n' - '4 4 bar') + expected = (' A B\n' + '0 NaN NaN\n' + '1 -1.0 foo\n' + '2 -2.0 foooo\n' + '3 3.0 fooooo\n' + '4 4.0 bar') self.assertEqual(result, expected) def test_to_string_line_width(self): @@ -3760,8 +3760,8 @@ def test_misc(self): def test_format(self): obj = fmt.FloatArrayFormatter(np.array([12, 0], dtype=np.float64)) result = obj.get_result() - self.assertEqual(result[0], " 12") - self.assertEqual(result[1], " 0") + self.assertEqual(result[0], " 12.0") + self.assertEqual(result[1], " 0.0") def test_output_significant_digits(self): # Issue #9764 @@ -3793,15 +3793,15 @@ def test_output_significant_digits(self): def test_too_long(self): # GH 10451 with pd.option_context('display.precision', 4): - # need both a number > 1e8 and something that normally formats to + # need both a number > 1e6 and something that normally formats to # having length > display.precision + 6 df = pd.DataFrame(dict(x=[12345.6789])) self.assertEqual(str(df), ' x\n0 12345.6789') - df = pd.DataFrame(dict(x=[2e8])) - self.assertEqual(str(df), ' x\n0 200000000') - df = pd.DataFrame(dict(x=[12345.6789, 2e8])) + df = pd.DataFrame(dict(x=[2e6])) + 
self.assertEqual(str(df), ' x\n0 2000000.0') + df = pd.DataFrame(dict(x=[12345.6789, 2e6])) self.assertEqual( - str(df), ' x\n0 1.2346e+04\n1 2.0000e+08') + str(df), ' x\n0 1.2346e+04\n1 2.0000e+06') class TestRepr_timedelta64(tm.TestCase):
Ref issue #12164 Previous Behavior: ``` python In [2]: s = Series([1,2,3], index=np.arange(3.)) In [3]: s Out[3]: 0 1 1 2 2 3 dtype: int64 In [4]: s.index Out[4]: Float64Index([0.0, 1.0, 2.0], dtype='float64') In [5]: print(s.to_csv(path=None)) 0,1 1,2 2,3 ``` New behavior ``` python In [2]: s = Series([1,2,3], index=np.arange(3.)) In [3]: s Out[3]: 0.0 1 1.0 2 2.0 3 dtype: int64 In [4]: s.index Out[4]: Float64Index([0.0, 1.0, 2.0], dtype='float64') In [5]: print(s.to_csv(path=None)) 0.0,1 1.0,2 2.0,3 ```
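The patch above changes `_trim_zeros` so that whole numbers in a float column render as `1.0` instead of `1`. A minimal standalone sketch of that trimming rule (an illustrative reimplementation, not the actual pandas code path) might look like:

```python
# Illustrative sketch of the trimming rule the patch describes: strip
# trailing zeros column-wide, but keep one digit after the decimal
# point so whole numbers format as "1.0" rather than "1".

def trim_zeros(str_floats, na_rep='NaN'):
    trimmed = list(str_floats)

    def can_trim(values):
        # only trim while every non-NaN entry still ends in '0'
        non_na = [x for x in values if x != na_rep]
        return len(non_na) > 0 and all(x.endswith('0') for x in non_na)

    while can_trim(trimmed):
        trimmed = [x[:-1] if x != na_rep else x for x in trimmed]

    # leave one 0 after the decimal point if need be
    return [x + '0' if x.endswith('.') and x != na_rep else x
            for x in trimmed]

print(trim_zeros(['1.000', '2.500', 'NaN']))  # ['1.0', '2.5', 'NaN']
print(trim_zeros(['0.0', '1.0', '2.0']))      # ['0.0', '1.0', '2.0']
```

Before this change the final step trimmed the bare decimal point instead of padding it, which is how `1.` collapsed to `1`.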
https://api.github.com/repos/pandas-dev/pandas/pulls/12207
2016-02-02T16:42:46Z
2016-02-05T14:53:09Z
null
2016-02-06T06:54:47Z
DEPR: removal of deprecated sql functions
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 9dca8615af3ae..20436261b1efb 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -487,7 +487,9 @@ Removal of prior version deprecations/changes - Removal of ``expanding_corr_pairwise`` in favor of ``.expanding().corr(pairwise=True)`` (:issue:`4950`) - Removal of ``DataMatrix`` module. This was not imported into the pandas namespace in any event (:issue:`12111`) - Removal of ``cols`` keyword in favor of ``subset`` in ``DataFrame.duplicated()`` and ``DataFrame.drop_duplicates()`` (:issue:`6680`) - +- Removal of the ``read_frame`` and ``frame_query`` (both aliases for ``pd.read_sql``) + and ``write_frame`` (alias of ``to_sql``) functions in the ``pd.io.sql`` namespace, + deprecated since 0.14.0 (:issue:`6292`). .. _whatsnew_0180.performance: diff --git a/pandas/io/sql.py b/pandas/io/sql.py index 63725988c8065..072ca86600ea0 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -1704,66 +1704,3 @@ def get_schema(frame, name, flavor='sqlite', keys=None, con=None, dtype=None): pandas_sql = pandasSQL_builder(con=con, flavor=flavor) return pandas_sql._create_sql_schema(frame, name, keys=keys, dtype=dtype) - - -# legacy names, with depreciation warnings and copied docs - -@Appender(read_sql.__doc__, join='\n') -def read_frame(*args, **kwargs): - """DEPRECATED - use read_sql - """ - warnings.warn("read_frame is deprecated, use read_sql", FutureWarning, - stacklevel=2) - return read_sql(*args, **kwargs) - - -@Appender(read_sql.__doc__, join='\n') -def frame_query(*args, **kwargs): - """DEPRECATED - use read_sql - """ - warnings.warn("frame_query is deprecated, use read_sql", FutureWarning, - stacklevel=2) - return read_sql(*args, **kwargs) - - -def write_frame(frame, name, con, flavor='sqlite', if_exists='fail', **kwargs): - """DEPRECATED - use to_sql - - Write records stored in a DataFrame to a SQL database. 
- - Parameters - ---------- - frame : DataFrame - name : string - con : DBAPI2 connection - flavor : {'sqlite', 'mysql'}, default 'sqlite' - The flavor of SQL to use. - if_exists : {'fail', 'replace', 'append'}, default 'fail' - - fail: If table exists, do nothing. - - replace: If table exists, drop it, recreate it, and insert data. - - append: If table exists, insert data. Create if does not exist. - index : boolean, default False - Write DataFrame index as a column - - Notes - ----- - This function is deprecated in favor of ``to_sql``. There are however - two differences: - - - With ``to_sql`` the index is written to the sql database by default. To - keep the behaviour this function you need to specify ``index=False``. - - The new ``to_sql`` function supports sqlalchemy connectables to work - with different sql flavors. - - See also - -------- - pandas.DataFrame.to_sql - - """ - warnings.warn("write_frame is deprecated, use to_sql", FutureWarning, - stacklevel=2) - - # for backwards compatibility, set index=False when not specified - index = kwargs.pop('index', False) - return to_sql(frame, name, con, flavor=flavor, if_exists=if_exists, - index=index, **kwargs) diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py index 455e27b70055d..a5f6acc113466 100644 --- a/pandas/io/tests/test_sql.py +++ b/pandas/io/tests/test_sql.py @@ -524,12 +524,6 @@ def test_read_sql_view(self): "SELECT * FROM iris_view", self.conn) self._check_iris_loaded_frame(iris_frame) - def test_legacy_read_frame(self): - with tm.assert_produces_warning(FutureWarning): - iris_frame = sql.read_frame( - "SELECT * FROM iris", self.conn) - self._check_iris_loaded_frame(iris_frame) - def test_to_sql(self): sql.to_sql(self.test_frame1, 'test_frame1', self.conn, flavor='sqlite') self.assertTrue( @@ -598,17 +592,6 @@ def test_to_sql_panel(self): self.assertRaises(NotImplementedError, sql.to_sql, panel, 'test_panel', self.conn, flavor='sqlite') - def test_legacy_write_frame(self): - # 
Assume that functionality is already tested above so just do - # quick check that it basically works - with tm.assert_produces_warning(FutureWarning): - sql.write_frame(self.test_frame1, 'test_frame_legacy', self.conn, - flavor='sqlite') - - self.assertTrue( - sql.has_table('test_frame_legacy', self.conn, flavor='sqlite'), - 'Table not written to DB') - def test_roundtrip(self): sql.to_sql(self.test_frame1, 'test_frame_roundtrip', con=self.conn, flavor='sqlite') @@ -2239,7 +2222,7 @@ def test_write_row_by_row(self): self.conn.commit() - result = sql.read_frame("select * from test", con=self.conn) + result = sql.read_sql("select * from test", con=self.conn) result.index = frame.index tm.assert_frame_equal(result, frame) @@ -2254,7 +2237,7 @@ def test_execute(self): sql.execute(ins, self.conn, params=tuple(row)) self.conn.commit() - result = sql.read_frame("select * from test", self.conn) + result = sql.read_sql("select * from test", self.conn) result.index = frame.index[:1] tm.assert_frame_equal(result, frame[:1]) @@ -2327,8 +2310,8 @@ def test_na_roundtrip(self): pass def _check_roundtrip(self, frame): - sql.write_frame(frame, name='test_table', con=self.conn) - result = sql.read_frame("select * from test_table", self.conn) + sql.to_sql(frame, name='test_table', con=self.conn, index=False) + result = sql.read_sql("select * from test_table", self.conn) # HACK! Change this once indexes are handled properly. 
result.index = frame.index @@ -2339,8 +2322,8 @@ def _check_roundtrip(self, frame): frame['txt'] = ['a'] * len(frame) frame2 = frame.copy() frame2['Idx'] = Index(lrange(len(frame2))) + 10 - sql.write_frame(frame2, name='test_table2', con=self.conn) - result = sql.read_frame("select * from test_table2", self.conn, + sql.to_sql(frame2, name='test_table2', con=self.conn, index=False) + result = sql.read_sql("select * from test_table2", self.conn, index_col='Idx') expected = frame.copy() expected.index = Index(lrange(len(frame2))) + 10 @@ -2349,7 +2332,7 @@ def _check_roundtrip(self, frame): def test_tquery(self): frame = tm.makeTimeDataFrame() - sql.write_frame(frame, name='test_table', con=self.conn) + sql.to_sql(frame, name='test_table', con=self.conn, index=False) result = sql.tquery("select A from test_table", self.conn) expected = Series(frame.A.values, frame.index) # not to have name result = Series(result, frame.index) @@ -2367,7 +2350,7 @@ def test_tquery(self): def test_uquery(self): frame = tm.makeTimeDataFrame() - sql.write_frame(frame, name='test_table', con=self.conn) + sql.to_sql(frame, name='test_table', con=self.conn, index=False) stmt = 'INSERT INTO test_table VALUES(2.314, -123.1, 1.234, 2.3)' self.assertEqual(sql.uquery(stmt, con=self.conn), 1) @@ -2387,14 +2370,14 @@ def test_keyword_as_column_names(self): ''' ''' df = DataFrame({'From': np.ones(5)}) - sql.write_frame(df, con=self.conn, name='testkeywords') + sql.to_sql(df, con=self.conn, name='testkeywords', index=False) def test_onecolumn_of_integer(self): # GH 3628 # a column_of_integers dataframe should transfer well to sql mono_df = DataFrame([1, 2], columns=['c0']) - sql.write_frame(mono_df, con=self.conn, name='mono_df') + sql.to_sql(mono_df, con=self.conn, name='mono_df', index=False) # computing the sum via sql con_x = self.conn the_sum = sum([my_c0[0] @@ -2402,7 +2385,7 @@ def test_onecolumn_of_integer(self): # it should not fail, and gives 3 ( Issue #3628 ) self.assertEqual(the_sum, 3) - 
result = sql.read_frame("select * from mono_df", con_x) + result = sql.read_sql("select * from mono_df", con_x) tm.assert_frame_equal(result, mono_df) def test_if_exists(self): @@ -2421,7 +2404,7 @@ def clean_up(test_table_to_drop): # test if invalid value for if_exists raises appropriate error self.assertRaises(ValueError, - sql.write_frame, + sql.to_sql, frame=df_if_exists_1, con=self.conn, name=table_name, @@ -2430,10 +2413,10 @@ def clean_up(test_table_to_drop): clean_up(table_name) # test if_exists='fail' - sql.write_frame(frame=df_if_exists_1, con=self.conn, name=table_name, - flavor='sqlite', if_exists='fail') + sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name, + flavor='sqlite', if_exists='fail') self.assertRaises(ValueError, - sql.write_frame, + sql.to_sql, frame=df_if_exists_1, con=self.conn, name=table_name, @@ -2441,23 +2424,23 @@ def clean_up(test_table_to_drop): if_exists='fail') # test if_exists='replace' - sql.write_frame(frame=df_if_exists_1, con=self.conn, name=table_name, - flavor='sqlite', if_exists='replace') + sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name, + flavor='sqlite', if_exists='replace', index=False) self.assertEqual(sql.tquery(sql_select, con=self.conn), [(1, 'A'), (2, 'B')]) - sql.write_frame(frame=df_if_exists_2, con=self.conn, name=table_name, - flavor='sqlite', if_exists='replace') + sql.to_sql(frame=df_if_exists_2, con=self.conn, name=table_name, + flavor='sqlite', if_exists='replace', index=False) self.assertEqual(sql.tquery(sql_select, con=self.conn), [(3, 'C'), (4, 'D'), (5, 'E')]) clean_up(table_name) # test if_exists='append' - sql.write_frame(frame=df_if_exists_1, con=self.conn, name=table_name, - flavor='sqlite', if_exists='fail') + sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name, + flavor='sqlite', if_exists='fail', index=False) self.assertEqual(sql.tquery(sql_select, con=self.conn), [(1, 'A'), (2, 'B')]) - sql.write_frame(frame=df_if_exists_2, con=self.conn, 
name=table_name, - flavor='sqlite', if_exists='append') + sql.to_sql(frame=df_if_exists_2, con=self.conn, name=table_name, + flavor='sqlite', if_exists='append', index=False) self.assertEqual(sql.tquery(sql_select, con=self.conn), [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')]) clean_up(table_name) @@ -2542,7 +2525,7 @@ def test_write_row_by_row(self): self.conn.commit() - result = sql.read_frame("select * from test", con=self.conn) + result = sql.read_sql("select * from test", con=self.conn) result.index = frame.index tm.assert_frame_equal(result, frame) @@ -2577,7 +2560,7 @@ def test_execute(self): sql.execute(ins, self.conn, params=tuple(row)) self.conn.commit() - result = sql.read_frame("select * from test", self.conn) + result = sql.read_sql("select * from test", self.conn) result.index = frame.index[:1] tm.assert_frame_equal(result, frame[:1]) @@ -2666,9 +2649,9 @@ def _check_roundtrip(self, frame): with warnings.catch_warnings(): warnings.filterwarnings("ignore", "Unknown table.*") cur.execute(drop_sql) - sql.write_frame(frame, name='test_table', - con=self.conn, flavor='mysql') - result = sql.read_frame("select * from test_table", self.conn) + sql.to_sql(frame, name='test_table', + con=self.conn, flavor='mysql', index=False) + result = sql.read_sql("select * from test_table", self.conn) # HACK! Change this once indexes are handled properly. 
result.index = frame.index @@ -2686,9 +2669,9 @@ def _check_roundtrip(self, frame): with warnings.catch_warnings(): warnings.filterwarnings("ignore", "Unknown table.*") cur.execute(drop_sql) - sql.write_frame(frame2, name='test_table2', - con=self.conn, flavor='mysql') - result = sql.read_frame("select * from test_table2", self.conn, + sql.to_sql(frame2, name='test_table2', + con=self.conn, flavor='mysql', index=False) + result = sql.read_sql("select * from test_table2", self.conn, index_col='Idx') expected = frame.copy() @@ -2706,8 +2689,8 @@ def test_tquery(self): drop_sql = "DROP TABLE IF EXISTS test_table" cur = self.conn.cursor() cur.execute(drop_sql) - sql.write_frame(frame, name='test_table', - con=self.conn, flavor='mysql') + sql.to_sql(frame, name='test_table', + con=self.conn, flavor='mysql', index=False) result = sql.tquery("select A from test_table", self.conn) expected = Series(frame.A.values, frame.index) # not to have name result = Series(result, frame.index) @@ -2732,8 +2715,8 @@ def test_uquery(self): drop_sql = "DROP TABLE IF EXISTS test_table" cur = self.conn.cursor() cur.execute(drop_sql) - sql.write_frame(frame, name='test_table', - con=self.conn, flavor='mysql') + sql.to_sql(frame, name='test_table', + con=self.conn, flavor='mysql', index=False) stmt = 'INSERT INTO test_table VALUES(2.314, -123.1, 1.234, 2.3)' self.assertEqual(sql.uquery(stmt, con=self.conn), 1) @@ -2754,8 +2737,8 @@ def test_keyword_as_column_names(self): ''' _skip_if_no_pymysql() df = DataFrame({'From': np.ones(5)}) - sql.write_frame(df, con=self.conn, name='testkeywords', - if_exists='replace', flavor='mysql') + sql.to_sql(df, con=self.conn, name='testkeywords', + if_exists='replace', flavor='mysql', index=False) def test_if_exists(self): _skip_if_no_pymysql() @@ -2774,7 +2757,7 @@ def clean_up(test_table_to_drop): # test if invalid value for if_exists raises appropriate error self.assertRaises(ValueError, - sql.write_frame, + sql.to_sql, frame=df_if_exists_1, 
con=self.conn, name=table_name, @@ -2783,10 +2766,10 @@ def clean_up(test_table_to_drop): clean_up(table_name) # test if_exists='fail' - sql.write_frame(frame=df_if_exists_1, con=self.conn, name=table_name, - flavor='mysql', if_exists='fail') + sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name, + flavor='mysql', if_exists='fail', index=False) self.assertRaises(ValueError, - sql.write_frame, + sql.to_sql, frame=df_if_exists_1, con=self.conn, name=table_name, @@ -2794,23 +2777,23 @@ def clean_up(test_table_to_drop): if_exists='fail') # test if_exists='replace' - sql.write_frame(frame=df_if_exists_1, con=self.conn, name=table_name, - flavor='mysql', if_exists='replace') + sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name, + flavor='mysql', if_exists='replace', index=False) self.assertEqual(sql.tquery(sql_select, con=self.conn), [(1, 'A'), (2, 'B')]) - sql.write_frame(frame=df_if_exists_2, con=self.conn, name=table_name, - flavor='mysql', if_exists='replace') + sql.to_sql(frame=df_if_exists_2, con=self.conn, name=table_name, + flavor='mysql', if_exists='replace', index=False) self.assertEqual(sql.tquery(sql_select, con=self.conn), [(3, 'C'), (4, 'D'), (5, 'E')]) clean_up(table_name) # test if_exists='append' - sql.write_frame(frame=df_if_exists_1, con=self.conn, name=table_name, - flavor='mysql', if_exists='fail') + sql.to_sql(frame=df_if_exists_1, con=self.conn, name=table_name, + flavor='mysql', if_exists='fail', index=False) self.assertEqual(sql.tquery(sql_select, con=self.conn), [(1, 'A'), (2, 'B')]) - sql.write_frame(frame=df_if_exists_2, con=self.conn, name=table_name, - flavor='mysql', if_exists='append') + sql.to_sql(frame=df_if_exists_2, con=self.conn, name=table_name, + flavor='mysql', if_exists='append', index=False) self.assertEqual(sql.tquery(sql_select, con=self.conn), [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')]) clean_up(table_name)
A start on removing some of the deprecated functions. Other deprecated functions to add to this removal: `write_frame`, `tquery`, `uquery`
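The functions being removed were thin deprecation aliases: each emitted a `FutureWarning` and delegated to its replacement. A self-contained sketch of that alias pattern (with a stand-in `read_sql`, since the real one needs a database connection):

```python
import warnings

def read_sql(sql, con):
    # Stand-in for the real pandas.io.sql.read_sql, which this
    # sketch does not depend on.
    return (sql, con)

def read_frame(*args, **kwargs):
    """DEPRECATED - use read_sql (the alias pattern this PR removes)."""
    warnings.warn("read_frame is deprecated, use read_sql",
                  FutureWarning, stacklevel=2)
    return read_sql(*args, **kwargs)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = read_frame("select 1", None)

print(result)                          # ('select 1', None)
print(caught[0].category.__name__)     # FutureWarning
```

Note that `write_frame` was not a pure alias: it defaulted to `index=False`, which is why the test updates above add an explicit `index=False` to each migrated `to_sql` call.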
https://api.github.com/repos/pandas-dev/pandas/pulls/12205
2016-02-02T13:59:39Z
2016-02-08T15:26:10Z
null
2016-02-08T15:26:19Z
CI: install sphinx with conda for doc build
diff --git a/ci/build_docs.sh b/ci/build_docs.sh index a906b78de5389..c0843593f85ff 100755 --- a/ci/build_docs.sh +++ b/ci/build_docs.sh @@ -18,7 +18,6 @@ if [ x"$DOC_BUILD" != x"" ]; then source activate pandas conda install -n pandas -c r r rpy2 --yes - pip install sphinx -U time sudo apt-get $APT_ARGS install dvipng diff --git a/ci/requirements-2.7_DOC_BUILD.run b/ci/requirements-2.7_DOC_BUILD.run index 33033defc8faa..854776762fdb5 100644 --- a/ci/requirements-2.7_DOC_BUILD.run +++ b/ci/requirements-2.7_DOC_BUILD.run @@ -1,4 +1,5 @@ ipython +sphinx nbconvert matplotlib scipy
Sphinx > 1.3.1 is now available in conda, so it is no longer necessary to install it separately with pip to get the latest version.
https://api.github.com/repos/pandas-dev/pandas/pulls/12204
2016-02-02T13:45:37Z
2016-02-02T15:36:14Z
null
2016-02-02T15:36:14Z
Add ability to specify AWS s3 host through AWS_S3_HOST environment variable
diff --git a/pandas/io/common.py b/pandas/io/common.py index f6032266050dd..823754ce3cb6c 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -274,14 +274,15 @@ def get_filepath_or_buffer(filepath_or_buffer, encoding=None, import boto except: raise ImportError("boto is required to handle s3 files") - # Assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY + # Assuming AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_S3_HOST # are environment variables parsed_url = parse_url(filepath_or_buffer) + s3_host = os.environ.get('AWS_S3_HOST','s3.amazonaws.com') try: - conn = boto.connect_s3() + conn = boto.connect_s3(host=s3_host) except boto.exception.NoAuthHandlerFound: - conn = boto.connect_s3(anon=True) + conn = boto.connect_s3(host=s3_host,anon=True) b = conn.get_bucket(parsed_url.netloc, validate=False) if compat.PY2 and (compression == 'gzip' or
The AWS S3 host used by read_csv currently defaults to s3.amazonaws.com, and there is no way to change it. This pull request adds a check of the AWS_S3_HOST environment variable for a user-specified host, falling back to the current s3.amazonaws.com default. It is a simple code change, but it allows users around the globe to use read_csv to read files directly from S3 when the bucket is not located in the US East region.
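The host lookup added in the patch is a one-line environment fallback; isolated from boto, it can be sketched as:

```python
import os

def resolve_s3_host(default='s3.amazonaws.com'):
    # Mirrors the lookup in the patch: prefer a user-specified
    # AWS_S3_HOST, fall back to the standard US East endpoint.
    return os.environ.get('AWS_S3_HOST', default)

os.environ.pop('AWS_S3_HOST', None)
print(resolve_s3_host())  # s3.amazonaws.com

os.environ['AWS_S3_HOST'] = 's3.eu-west-1.amazonaws.com'
print(resolve_s3_host())  # s3.eu-west-1.amazonaws.com
```

The resolved host is then passed to `boto.connect_s3(host=...)` in both the authenticated and anonymous code paths, as the diff shows.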
https://api.github.com/repos/pandas-dev/pandas/pulls/12198
2016-02-01T20:49:46Z
2016-02-02T15:52:52Z
null
2016-02-02T15:53:18Z
TST: work around numpy https://github.com/numpy/numpy/issues/7163
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py index b07565d850847..2900bd565d905 100644 --- a/pandas/tests/frame/test_analytics.py +++ b/pandas/tests/frame/test_analytics.py @@ -13,7 +13,7 @@ from pandas.compat import lrange from pandas import (compat, isnull, notnull, DataFrame, Series, - MultiIndex, date_range, Timestamp) + MultiIndex, date_range, Timestamp, _np_version_under1p11) import pandas as pd import pandas.core.common as com import pandas.core.nanops as nanops @@ -562,8 +562,14 @@ def test_quantile_interpolation(self): df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], columns=['a', 'b', 'c']) result = df.quantile([.25, .5], interpolation='midpoint') - expected = DataFrame([[1.5, 1.5, 1.5], [2.5, 2.5, 2.5]], - index=[.25, .5], columns=['a', 'b', 'c']) + + # https://github.com/numpy/numpy/issues/7163 + if _np_version_under1p11: + expected = DataFrame([[1.5, 1.5, 1.5], [2.5, 2.5, 2.5]], + index=[.25, .5], columns=['a', 'b', 'c']) + else: + expected = DataFrame([[1.5, 1.5, 1.5], [2.0, 2.0, 2.0]], + index=[.25, .5], columns=['a', 'b', 'c']) assert_frame_equal(result, expected) def test_quantile_interpolation_np_lt_1p9(self):
closes #12196 xref https://github.com/numpy/numpy/issues/7163
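The version split in the test comes from how 'midpoint' interpolation handles a quantile that lands exactly on an element. A hand-rolled sketch of the interpolation (for illustration; numpy's implementation is more general):

```python
import math

def quantile_midpoint(sorted_vals, q):
    # 'midpoint' interpolation: locate the fractional position
    # q * (n - 1) and average the two bracketing values.  When the
    # position is exact (lo == hi), the result is the element itself;
    # numpy < 1.11 instead averaged with the next element, which is
    # why the test above expects 2.5 on older numpy but 2.0 on 1.11+.
    pos = q * (len(sorted_vals) - 1)
    lo, hi = math.floor(pos), math.ceil(pos)
    return (sorted_vals[lo] + sorted_vals[hi]) / 2.0

vals = [1, 2, 3]
print(quantile_midpoint(vals, 0.25))  # 1.5  (halfway between 1 and 2)
print(quantile_midpoint(vals, 0.5))   # 2.0  (exact position: lo == hi)
```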
https://api.github.com/repos/pandas-dev/pandas/pulls/12197
2016-02-01T20:49:10Z
2016-02-01T21:20:40Z
null
2016-02-01T21:20:40Z
BUG: concat of tz series with NaT
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index fad4c7e3d5d0a..60fd950bfe725 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -522,6 +522,7 @@ Bug Fixes - Bug in not treating ``NaT`` as a missing value in datetimelikes when factorizing & with ``Categoricals`` (:issue:`12077`) - Bug in getitem when the values of a ``Series`` were tz-aware (:issue:`12089`) - Bug in ``Series.str.get_dummies`` when one of the variables was 'name' (:issue:`12180`) +- Bug in ``pd.concat`` while concatenating tz-aware NaT series. (:issue:`11693`, :issue:`11755`) diff --git a/pandas/core/common.py b/pandas/core/common.py index 508765896e275..da20fd75ceb35 100644 --- a/pandas/core/common.py +++ b/pandas/core/common.py @@ -1643,7 +1643,13 @@ def _possibly_cast_to_datetime(value, dtype, errors='raise'): raise TypeError("cannot convert datetimelike to " "dtype [%s]" % dtype) elif is_datetime64tz: - pass + + # our NaT doesn't support tz's + # this will coerce to DatetimeIndex with + # a matching dtype below + if lib.isscalar(value) and isnull(value): + value = [value] + elif is_timedelta64 and not is_dtype_equal(dtype, _TD_DTYPE): if dtype.name == 'timedelta64[ns]': dtype = _TD_DTYPE @@ -1651,7 +1657,7 @@ def _possibly_cast_to_datetime(value, dtype, errors='raise'): raise TypeError("cannot convert timedeltalike to " "dtype [%s]" % dtype) - if np.isscalar(value): + if lib.isscalar(value): if value == tslib.iNaT or isnull(value): value = tslib.iNaT else: diff --git a/pandas/core/series.py b/pandas/core/series.py index 49182951c0e9d..68ae58737916b 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -2903,7 +2903,7 @@ def create_from_value(value, index, dtype): # return a new empty value suitable for the dtype if is_datetimetz(dtype): - subarr = DatetimeIndex([value] * len(index)) + subarr = DatetimeIndex([value] * len(index), dtype=dtype) else: if not isinstance(dtype, (np.dtype, type(np.dtype))): dtype = 
dtype.dtype @@ -2937,7 +2937,8 @@ def create_from_value(value, index, dtype): # a 1-element ndarray if len(subarr) != len(index) and len(subarr) == 1: - subarr = create_from_value(subarr[0], index, subarr) + subarr = create_from_value(subarr[0], index, + subarr.dtype) elif subarr.ndim > 1: if isinstance(data, np.ndarray): diff --git a/pandas/tests/indexes/test_datetimelike.py b/pandas/tests/indexes/test_datetimelike.py index de505b93da241..b0ca07e84f7ce 100644 --- a/pandas/tests/indexes/test_datetimelike.py +++ b/pandas/tests/indexes/test_datetimelike.py @@ -108,10 +108,6 @@ def test_construction_with_alt(self): expected = i.tz_localize(None).tz_localize('UTC') self.assert_index_equal(i2, expected) - i2 = DatetimeIndex(i, tz='UTC') - expected = i.tz_convert('UTC') - self.assert_index_equal(i2, expected) - # incompat tz/dtype self.assertRaises(ValueError, lambda: DatetimeIndex( i.tz_localize(None).asi8, dtype=i.dtype, tz='US/Pacific')) diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 9133cc2c5a020..6ae24bbccfa74 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -473,6 +473,11 @@ def test_constructor_with_datetime_tz(self): self.assertTrue(s.dtype == 'object') self.assertTrue(lib.infer_dtype(s) == 'datetime') + # with all NaT + s = Series(pd.NaT, index=[0, 1], dtype='datetime64[ns, US/Eastern]') + expected = Series(pd.DatetimeIndex(['NaT', 'NaT'], tz='US/Eastern')) + assert_series_equal(s, expected) + def test_constructor_periodindex(self): # GH7932 # converting a PeriodIndex when put in a Series diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py index 753bbccf850e4..30c2621cd64ef 100644 --- a/pandas/tests/test_groupby.py +++ b/pandas/tests/test_groupby.py @@ -3943,10 +3943,17 @@ def test_groupby_multi_timezone(self): result = df.groupby('tz').date.apply( lambda x: pd.to_datetime(x).dt.tz_localize(x.name)) - expected = 
pd.to_datetime(Series( - ['2000-01-28 22:47:00', '2000-01-29 22:48:00', - '2000-01-31 00:49:00', '2000-01-31 22:50:00', - '2000-01-01 21:50:00'])) + expected = Series([Timestamp('2000-01-28 16:47:00-0600', + tz='America/Chicago'), + Timestamp('2000-01-29 16:48:00-0600', + tz='America/Chicago'), + Timestamp('2000-01-30 16:49:00-0800', + tz='America/Los_Angeles'), + Timestamp('2000-01-31 16:50:00-0600', + tz='America/Chicago'), + Timestamp('2000-01-01 16:50:00-0500', + tz='America/New_York')], + dtype=object) assert_series_equal(result, expected) tz = 'America/Chicago' diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py index 8200989ff84d2..fdf38a0869a0b 100644 --- a/pandas/tools/tests/test_merge.py +++ b/pandas/tools/tests/test_merge.py @@ -1024,6 +1024,63 @@ def test_merge_on_datetime64tz(self): result = pd.merge(left, right, on='key', how='outer') assert_frame_equal(result, expected) + def test_concat_NaT_series(self): + # GH 11693 + # test for merging NaT series with datetime series. 
+ x = Series(date_range('20151124 08:00', '20151124 09:00', + freq='1h', tz='US/Eastern')) + y = Series(pd.NaT, index=[0, 1], dtype='datetime64[ns, US/Eastern]') + expected = Series([x[0], x[1], pd.NaT, pd.NaT]) + + result = concat([x, y], ignore_index=True) + tm.assert_series_equal(result, expected) + + # all NaT with tz + expected = Series(pd.NaT, index=range(4), + dtype='datetime64[ns, US/Eastern]') + result = pd.concat([y, y], ignore_index=True) + tm.assert_series_equal(result, expected) + + # without tz + x = pd.Series(pd.date_range('20151124 08:00', + '20151124 09:00', freq='1h')) + y = pd.Series(pd.date_range('20151124 10:00', + '20151124 11:00', freq='1h')) + y[:] = pd.NaT + expected = pd.Series([x[0], x[1], pd.NaT, pd.NaT]) + result = pd.concat([x, y], ignore_index=True) + tm.assert_series_equal(result, expected) + + # all NaT without tz + x[:] = pd.NaT + expected = pd.Series(pd.NaT, index=range(4), + dtype='datetime64[ns]') + result = pd.concat([x, y], ignore_index=True) + tm.assert_series_equal(result, expected) + + def test_concat_tz_series(self): + # GH 11755 + # tz and no tz + x = Series(date_range('20151124 08:00', + '20151124 09:00', + freq='1h', tz='UTC')) + y = Series(date_range('2012-01-01', '2012-01-02')) + expected = Series([x[0], x[1], y[0], y[1]], + dtype='object') + result = concat([x, y], ignore_index=True) + tm.assert_series_equal(result, expected) + + # GH 11887 + # concat tz and object + x = Series(date_range('20151124 08:00', + '20151124 09:00', + freq='1h', tz='UTC')) + y = Series(['a', 'b']) + expected = Series([x[0], x[1], y[0], y[1]], + dtype='object') + result = concat([x, y], ignore_index=True) + tm.assert_series_equal(result, expected) + def test_indicator(self): # PR #10054. xref #7412 and closes #8790. 
df1 = DataFrame({'col1': [0, 1], 'col_left': [ diff --git a/pandas/tseries/common.py b/pandas/tseries/common.py index f9f90a9377f76..5c31d79dc6780 100644 --- a/pandas/tseries/common.py +++ b/pandas/tseries/common.py @@ -255,14 +255,16 @@ def _concat_compat(to_concat, axis=0): def convert_to_pydatetime(x, axis): # coerce to an object dtype - if x.dtype == _NS_DTYPE: - if hasattr(x, 'tz'): + # if dtype is of datetimetz or timezone + if x.dtype.kind == _NS_DTYPE.kind: + if getattr(x, 'tz', None) is not None: x = x.asobject + else: + shape = x.shape + x = tslib.ints_to_pydatetime(x.view(np.int64).ravel()) + x = x.reshape(shape) - shape = x.shape - x = tslib.ints_to_pydatetime(x.view(np.int64).ravel()) - x = x.reshape(shape) elif x.dtype == _TD_DTYPE: shape = x.shape x = tslib.ints_to_pytimedelta(x.view(np.int64).ravel()) @@ -275,6 +277,12 @@ def convert_to_pydatetime(x, axis): # datetimetz if 'datetimetz' in typs: + # if to_concat have 'datetime' or 'object' + # then we need to coerce to object + if 'datetime' in typs or 'object' in typs: + to_concat = [convert_to_pydatetime(x, axis) for x in to_concat] + return np.concatenate(to_concat, axis=axis) + # we require ALL of the same tz for datetimetz tzs = set([getattr(x, 'tz', None) for x in to_concat]) - set([None]) if len(tzs) == 1: diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py index f7223f803c41a..a632913fbe4fe 100644 --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -242,6 +242,19 @@ def __new__(cls, data=None, raise ValueError("Must provide freq argument if no data is " "supplied") + # if dtype has an embeded tz, capture it + if dtype is not None: + try: + dtype = DatetimeTZDtype.construct_from_string(dtype) + dtz = getattr(dtype, 'tz', None) + if dtz is not None: + if tz is not None and str(tz) != str(dtz): + raise ValueError("cannot supply both a tz and a dtype" + " with a tz") + tz = dtz + except TypeError: + pass + if data is None: return cls._generate(start, end, periods, name, 
freq, tz=tz, normalize=normalize, closed=closed, @@ -272,7 +285,15 @@ def __new__(cls, data=None, data.name = name if tz is not None: - return data.tz_localize(tz, ambiguous=ambiguous) + + # we might already be localized to this tz + # so passing the same tz is ok + # however any other tz is a no-no + if data.tz is None: + return data.tz_localize(tz, ambiguous=ambiguous) + elif str(tz) != str(data.tz): + raise TypeError("Already tz-aware, use tz_convert " + "to convert.") return data @@ -288,6 +309,12 @@ def __new__(cls, data=None, if tz is None: tz = data.tz + else: + # the tz's must match + if str(tz) != str(data.tz): + raise TypeError("Already tz-aware, use tz_convert " + "to convert.") + subarr = data.values if freq is None: diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py index a011652b7f2e2..99cada26464cb 100644 --- a/pandas/tseries/tests/test_timeseries.py +++ b/pandas/tseries/tests/test_timeseries.py @@ -74,7 +74,7 @@ def test_index_unique(self): dups_local = self.dups.index.tz_localize('US/Eastern') dups_local.name = 'foo' result = dups_local.unique() - expected = DatetimeIndex(expected, tz='US/Eastern') + expected = DatetimeIndex(expected).tz_localize('US/Eastern') self.assertTrue(result.tz is not None) self.assertEqual(result.name, 'foo') self.assertTrue(result.equals(expected)) @@ -2473,6 +2473,40 @@ def test_constructor_datetime64_tzformat(self): tz='Asia/Tokyo') self.assert_numpy_array_equal(idx.asi8, expected_i8.asi8) + def test_constructor_dtype(self): + + # passing a dtype with a tz should localize + idx = DatetimeIndex(['2013-01-01', + '2013-01-02'], + dtype='datetime64[ns, US/Eastern]') + expected = DatetimeIndex(['2013-01-01', '2013-01-02'] + ).tz_localize('US/Eastern') + self.assertTrue(idx.equals(expected)) + + idx = DatetimeIndex(['2013-01-01', + '2013-01-02'], + tz='US/Eastern') + self.assertTrue(idx.equals(expected)) + + # if we already have a tz and its not the same, then raise + idx = 
DatetimeIndex(['2013-01-01', '2013-01-02'], + dtype='datetime64[ns, US/Eastern]') + + self.assertRaises(ValueError, + lambda: DatetimeIndex(idx, + dtype='datetime64[ns]')) + + # this is effectively trying to convert tz's + self.assertRaises(TypeError, + lambda: DatetimeIndex(idx, + dtype='datetime64[ns, CET]')) + self.assertRaises(ValueError, + lambda: DatetimeIndex( + idx, tz='CET', + dtype='datetime64[ns, US/Eastern]')) + result = DatetimeIndex(idx, dtype='datetime64[ns, US/Eastern]') + self.assertTrue(idx.equals(result)) + def test_constructor_name(self): idx = DatetimeIndex(start='2000-01-01', periods=1, freq='A', name='TEST') diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx index be1c5af74a95d..49b8f2c19700c 100644 --- a/pandas/tslib.pyx +++ b/pandas/tslib.pyx @@ -3554,6 +3554,10 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2): trans, deltas, typ = _get_dst_info(tz2) trans_len = len(trans) + # if all NaT, return all NaT + if (utc_dates==NPY_NAT).all(): + return utc_dates + # use first non-NaT element # if all-NaT, return all-NaT if (result==NPY_NAT).all():
closes #11693 xref #11736
https://api.github.com/repos/pandas-dev/pandas/pulls/12195
2016-02-01T18:58:36Z
2016-02-01T21:38:25Z
null
2016-02-01T21:38:25Z
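The concat fix in the PR above (GH 11693) can be exercised directly; a minimal sketch distilled from the `test_concat_NaT_series` test, assuming any pandas version in which this fix has long since landed:

```python
import pandas as pd

# a tz-aware datetime series, and an all-NaT series with the same tz-aware dtype
x = pd.Series(pd.date_range('2015-11-24 08:00', periods=2, freq='h',
                            tz='US/Eastern'))
y = pd.Series(pd.NaT, index=[0, 1], dtype='datetime64[ns, US/Eastern]')

# before the fix, concatenating a tz-aware NaT series was mishandled;
# with it, the tz-aware dtype is preserved and the NaTs stay missing
result = pd.concat([x, y], ignore_index=True)
print(result.dtype)         # datetime64[ns, US/Eastern]
print(result.isna().sum())  # 2
```

Note that, as `test_concat_tz_series` also asserts, mixing tz-aware with tz-naive (or object) values instead coerces the result to `object` dtype.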
CLN FloatArrayFormatter
diff --git a/pandas/core/format.py b/pandas/core/format.py index d7f3a669de9f4..adaf462c08479 100644 --- a/pandas/core/format.py +++ b/pandas/core/format.py @@ -2008,7 +2008,7 @@ def format_array(values, formatter, float_format=None, na_rep='NaN', class GenericArrayFormatter(object): def __init__(self, values, digits=7, formatter=None, na_rep='NaN', space=12, float_format=None, justify='right', decimal='.', - quoting=None): + quoting=None, fixed_width=True): self.values = values self.digits = digits self.na_rep = na_rep @@ -2018,6 +2018,7 @@ def __init__(self, values, digits=7, formatter=None, na_rep='NaN', self.justify = justify self.decimal = decimal self.quoting = quoting + self.fixed_width = fixed_width def get_result(self): fmt_values = self._format_strings() @@ -2076,96 +2077,135 @@ class FloatArrayFormatter(GenericArrayFormatter): def __init__(self, *args, **kwargs): GenericArrayFormatter.__init__(self, *args, **kwargs) + # float_format is expected to be a string + # formatter should be used to pass a function if self.float_format is not None and self.formatter is None: - self.formatter = self.float_format - - def _format_with(self, fmt_str): - def _val(x, threshold): - if notnull(x): - if (threshold is None or - abs(x) > get_option("display.chop_threshold")): - return fmt_str % x + if callable(self.float_format): + self.formatter = self.float_format + self.float_format = None + + def _value_formatter(self, float_format=None, threshold=None): + """Returns a function to be applied on each value to format it + """ + + # the float_format parameter supersedes self.float_format + if float_format is None: + float_format = self.float_format + + # we are going to compose different functions, to first convert to + # a string, then replace the decimal symbol, and finally chop according + # to the threshold + + # when there is no float_format, we use str instead of '%g' + # because str(0.0) = '0.0' while '%g' % 0.0 = '0' + if float_format: + def base_formatter(v): + 
return (float_format % v) if notnull(v) else self.na_rep + else: + def base_formatter(v): + return str(v) if notnull(v) else self.na_rep + + if self.decimal != '.': + def decimal_formatter(v): + return base_formatter(v).replace('.', self.decimal, 1) + else: + decimal_formatter = base_formatter + + if threshold is None: + return decimal_formatter + + def formatter(value): + if notnull(value): + if abs(value) > threshold: + return decimal_formatter(value) else: - if fmt_str.endswith("e"): # engineering format - return "0" - else: - return fmt_str % 0 + return decimal_formatter(0.0) else: - return self.na_rep - threshold = get_option("display.chop_threshold") - fmt_values = [_val(x, threshold) for x in self.values] - return _trim_zeros(fmt_values, self.na_rep) + return formatter + + def _format_values_as_array(self): + """Returns a numpy array containing the formatted values + """ - def _format_strings(self): if self.formatter is not None: - fmt_values = [self.formatter(x) for x in self.values] + return np.array([self.formatter(x) for x in self.values]) + + if self.fixed_width: + threshold = get_option("display.chop_threshold") else: - fmt_str = '%% .%df' % self.digits - fmt_values = self._format_with(fmt_str) + threshold = None - if len(fmt_values) > 0: - maxlen = max(len(x) for x in fmt_values) - else: - maxlen = 0 + # if we have a fixed_width, we'll need to try different float_format + def format_values_with(float_format): + formatter = self._value_formatter(float_format, threshold) - too_long = maxlen > self.digits + 6 + # separate the wheat from the chaff + values = self.values + mask = isnull(values) + if hasattr(values, 'to_dense'): # sparse numpy ndarray + values = values.to_dense() + values = np.array(values, dtype='object') + values[mask] = self.na_rep + imask = (~mask).ravel() + values.flat[imask] = np.array([formatter(val) + for val in values.ravel()[imask]]) - abs_vals = np.abs(self.values) + if self.fixed_width: + return _trim_zeros(values, self.na_rep) 
- # this is pretty arbitrary for now - # large values: more that 8 characters including decimal symbol - # and first digit, hence > 1e6 - has_large_values = (abs_vals > 1e6).any() - has_small_values = ((abs_vals < 10**(-self.digits)) & - (abs_vals > 0)).any() + return values - if too_long and has_large_values: - fmt_str = '%% .%de' % self.digits - fmt_values = self._format_with(fmt_str) - elif has_small_values: - fmt_str = '%% .%de' % self.digits - fmt_values = self._format_with(fmt_str) + # There is a special default string when we are fixed-width + # The default is otherwise to use str instead of a formatting string + if self.float_format is None and self.fixed_width: + float_format = '%% .%df' % self.digits + else: + float_format = self.float_format - return fmt_values + formatted_values = format_values_with(float_format) - def get_formatted_data(self): - """Returns the array with its float values converted into strings using - the parameters given at initalisation. + if not self.fixed_width: + return formatted_values - Note: the method `.get_result()` does something similar, but with a - fixed-width output suitable for screen printing. The output here is not - fixed-width. 
- """ - values = self.values - mask = isnull(values) - - # the following variable is to be applied on each value to format it - # according to the string containing the float format, - # self.float_format and the character to use as decimal separator, - # self.decimal - formatter = None - if self.float_format and self.decimal != '.': - formatter = lambda v: ( - (self.float_format % v).replace('.', self.decimal, 1)) - elif self.decimal != '.': # no float format - formatter = lambda v: str(v).replace('.', self.decimal, 1) - elif self.float_format: # no special decimal separator - formatter = lambda v: self.float_format % v - - if formatter is None and not self.quoting: - values = values.astype(str) + # we need do convert to engineering format if some values are too small + # and would appear as 0, or if some values are too big and take too + # much space + + if len(formatted_values) > 0: + maxlen = max(len(x) for x in formatted_values) + too_long = maxlen > self.digits + 6 else: - values = np.array(values, dtype='object') + too_long = False - values[mask] = self.na_rep - if formatter: - imask = (~mask).ravel() - values.flat[imask] = np.array([formatter(val) - for val in values.ravel()[imask]]) + abs_vals = np.abs(self.values) + + # this is pretty arbitrary for now + # large values: more that 8 characters including decimal symbol + # and first digit, hence > 1e6 + has_large_values = (abs_vals > 1e6).any() + has_small_values = ((abs_vals < 10**(-self.digits)) & + (abs_vals > 0)).any() + + if has_small_values or (too_long and has_large_values): + float_format = '%% .%de' % self.digits + formatted_values = format_values_with(float_format) + + return formatted_values - return values + def _format_strings(self): + # shortcut + if self.formatter is not None: + return [self.formatter(x) for x in self.values] + + return list(self._format_values_as_array()) + + def get_result_as_array(self): + """Returns the float values converted into strings using + the parameters given at 
initalisation, as a numpy array + """ + return self._format_values_as_array() class IntArrayFormatter(GenericArrayFormatter): diff --git a/pandas/core/internals.py b/pandas/core/internals.py index 10053d33d6b51..8973ea025e611 100644 --- a/pandas/core/internals.py +++ b/pandas/core/internals.py @@ -1380,8 +1380,9 @@ def to_native_types(self, slicer=None, na_rep='', float_format=None, from pandas.core.format import FloatArrayFormatter formatter = FloatArrayFormatter(values, na_rep=na_rep, float_format=float_format, - decimal=decimal, quoting=quoting) - return formatter.get_formatted_data() + decimal=decimal, quoting=quoting, + fixed_width=False) + return formatter.get_result_as_array() def should_store(self, value): # when inserting a column should not coerce integers to floats diff --git a/pandas/indexes/numeric.py b/pandas/indexes/numeric.py index 61d93284adbbb..fa707056ff2b7 100644 --- a/pandas/indexes/numeric.py +++ b/pandas/indexes/numeric.py @@ -272,8 +272,9 @@ def _format_native_types(self, na_rep='', float_format=None, decimal='.', from pandas.core.format import FloatArrayFormatter formatter = FloatArrayFormatter(self.values, na_rep=na_rep, float_format=float_format, - decimal=decimal, quoting=quoting) - return formatter.get_formatted_data() + decimal=decimal, quoting=quoting, + fixed_width=False) + return formatter.get_result_as_array() def get_value(self, series, key): """ we always want to get an index value, never a value """
I'm working on #12164. This is a work in progress.
https://api.github.com/repos/pandas-dev/pandas/pulls/12194
2016-02-01T16:14:44Z
2016-02-13T15:18:24Z
null
2016-02-22T11:07:11Z
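The `FloatArrayFormatter` refactor above threads `display.chop_threshold` through the fixed-width path (`_value_formatter`). The user-visible behavior it preserves can be sketched as follows; the exact repr spacing varies across pandas versions, so this only illustrates the chopping itself:

```python
import pandas as pd

s = pd.Series([0.1, 0.9])

# with a chop threshold set, values whose magnitude falls below it
# are rendered as zero in the fixed-width repr
with pd.option_context('display.chop_threshold', 0.5):
    chopped = repr(s)

print(chopped)  # 0.1 is displayed as 0.0; 0.9 is unchanged
```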
Improved documentation for DataFrame.join
diff --git a/doc/source/merging.rst b/doc/source/merging.rst index 074b15bbbcb66..feb6e4834a754 100644 --- a/doc/source/merging.rst +++ b/doc/source/merging.rst @@ -558,10 +558,8 @@ DataFrame instance method, with the calling DataFrame being implicitly considered the left object in the join. The related ``DataFrame.join`` method, uses ``merge`` internally for the -index-on-index and index-on-column(s) joins, but *joins on indexes* by default -rather than trying to join on common columns (the default behavior for -``merge``). If you are joining on index, you may wish to use ``DataFrame.join`` -to save yourself some typing. +index-on-index (by default) and column(s)-on-index join. If you are joining on +index only, you may wish to use ``DataFrame.join`` to save yourself some typing. Brief primer on merge methods (relational algebra) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 41a4cd0d77508..1ca0b4e395b3f 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -4318,18 +4318,20 @@ def join(self, other, on=None, how='left', lsuffix='', rsuffix='', Series is passed, its name attribute must be set, and that will be used as the column name in the resulting joined DataFrame on : column name, tuple/list of column names, or array-like - Column(s) to use for joining, otherwise join on index. If multiples + Column(s) in the caller to join on the index in other, + otherwise joins index-on-index. If multiples columns given, the passed DataFrame must have a MultiIndex. Can pass an array as the join key if not already contained in the calling DataFrame. Like an Excel VLOOKUP operation how : {'left', 'right', 'outer', 'inner'} - How to handle indexes of the two objects. 
Default: 'left' - for joining on index, None otherwise - - * left: use calling frame's index - * right: use input frame's index - * outer: form union of indexes - * inner: use intersection of indexes + How to handle the operation of the two objects. Default: 'left' + + * left: use calling frame's index (or column if on is specified) + * right: use other frame's index + * outer: form union of calling frame's index (or column if on is + specified) with other frame's index + * inner: form intersection of calling frame's index (or column if + on is specified) with other frame's index lsuffix : string Suffix to use from left frame's overlapping columns rsuffix : string @@ -4343,6 +4345,77 @@ def join(self, other, on=None, how='left', lsuffix='', rsuffix='', on, lsuffix, and rsuffix options are not supported when passing a list of DataFrame objects + Examples + -------- + >>> caller = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'], + ... 'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']}) + + >>> caller + A key + 0 A0 K0 + 1 A1 K1 + 2 A2 K2 + 3 A3 K3 + 4 A4 K4 + 5 A5 K5 + + >>> other = pd.DataFrame({'key': ['K0', 'K1', 'K2'], + ... 'B': ['B0', 'B1', 'B2']}) + + >>> other + B key + 0 B0 K0 + 1 B1 K1 + 2 B2 K2 + + Join DataFrames using their indexes. + + >>> caller.join(other, lsuffix='_caller', rsuffix='_other') + + >>> A key_caller B key_other + 0 A0 K0 B0 K0 + 1 A1 K1 B1 K1 + 2 A2 K2 B2 K2 + 3 A3 K3 NaN NaN + 4 A4 K4 NaN NaN + 5 A5 K5 NaN NaN + + + If we want to join using the key columns, we need to set key to be + the index in both caller and other. The joined DataFrame will have + key as its index. + + >>> caller.set_index('key').join(other.set_index('key')) + + >>> A B + key + K0 A0 B0 + K1 A1 B1 + K2 A2 B2 + K3 A3 NaN + K4 A4 NaN + K5 A5 NaN + + Another option to join using the key columns is to use the on + parameter. DataFrame.join always uses other's index but we can use any + column in the caller. 
This method preserves the original caller's + index in the result. + + >>> caller.join(other.set_index('key'), on='key') + + >>> A key B + 0 A0 K0 B0 + 1 A1 K1 B1 + 2 A2 K2 B2 + 3 A3 K3 NaN + 4 A4 K4 NaN + 5 A5 K5 NaN + + + See also + -------- + DataFrame.merge : For column(s)-on-columns(s) operations + Returns ------- joined : DataFrame
closes #12188. I modified the description in DataFrame.join to clarify the difference from DataFrame.merge, and also added examples.
https://api.github.com/repos/pandas-dev/pandas/pulls/12193
2016-02-01T16:11:20Z
2016-05-26T12:54:22Z
null
2016-05-26T12:54:50Z
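The column(s)-on-index idiom documented above can be run directly; a condensed sketch of the examples added to the docstring:

```python
import pandas as pd

caller = pd.DataFrame({'key': ['K0', 'K1', 'K2'],
                       'A': ['A0', 'A1', 'A2']})
other = pd.DataFrame({'B': ['B0', 'B1']}, index=['K0', 'K1'])

# column-on-index join: the caller's 'key' column is matched against
# other's index, and the caller's original index is preserved
joined = caller.join(other, on='key')
print(joined)
# 'K2' has no match in other, so its 'B' value is NaN
```

Joining index-on-index instead would be `caller.set_index('key').join(other)`, which makes `key` the index of the result.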
BUG: Handle variables named 'name' in get_dummies, #12180
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 13b7b33fff527..fad4c7e3d5d0a 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -521,6 +521,7 @@ Bug Fixes - Bug in ``read_csv`` when reading from a ``StringIO`` in threads (:issue:`11790`) - Bug in not treating ``NaT`` as a missing value in datetimelikes when factorizing & with ``Categoricals`` (:issue:`12077`) - Bug in getitem when the values of a ``Series`` were tz-aware (:issue:`12089`) +- Bug in ``Series.str.get_dummies`` when one of the variables was 'name' (:issue:`12180`) diff --git a/pandas/core/strings.py b/pandas/core/strings.py index 1ffa836a75a1b..be78c950eff9d 100644 --- a/pandas/core/strings.py +++ b/pandas/core/strings.py @@ -1105,9 +1105,11 @@ def _wrap_result(self, result, use_codes=True, name=None): if not hasattr(result, 'ndim'): return result - name = name or getattr(result, 'name', None) or self._orig.name if result.ndim == 1: + # Wait until we are sure result is a Series or Index before + # checking attributes (GH 12180) + name = name or getattr(result, 'name', None) or self._orig.name if isinstance(self._orig, Index): # if result is a boolean np.array, return the np.array # instead of wrapping it into a boolean Index (GH 8875) diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py index f8255c4b4a410..bc540cc8bf92b 100644 --- a/pandas/tests/test_strings.py +++ b/pandas/tests/test_strings.py @@ -812,6 +812,14 @@ def test_get_dummies(self): idx = Index(['a|b', 'a|c', 'b|c']) idx.str.get_dummies('|') + # GH 12180 + # Dummies named 'name' should work as expected + s = Series(['a', 'b,name', 'b']) + result = s.str.get_dummies(',') + expected = DataFrame([[1, 0, 0], [0, 1, 1], [0, 1, 0]], + columns=['a', 'b', 'name']) + tm.assert_frame_equal(result, expected) + def test_join(self): values = Series(['a_b_c', 'c_d_e', np.nan, 'f_g_h']) result = values.str.split('_').str.join('_')
closes #12180
https://api.github.com/repos/pandas-dev/pandas/pulls/12192
2016-02-01T04:46:39Z
2016-02-01T20:09:16Z
null
2016-02-01T20:09:21Z
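The regression test added above boils down to a tiny reproduction — before the fix, a dummy variable literally named `'name'` tripped the attribute lookup in `_wrap_result`:

```python
import pandas as pd

s = pd.Series(['a', 'b,name', 'b'])

# splitting on ',' yields the dummy columns 'a', 'b' and 'name';
# GH 12180 was a failure precisely when one variable was 'name'
result = s.str.get_dummies(',')
print(list(result.columns))  # ['a', 'b', 'name']
```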
WIP/ENH: allow categoricals in msgpack
diff --git a/pandas/io/packers.py b/pandas/io/packers.py index a16f3600736b8..fc81acdbefd08 100644 --- a/pandas/io/packers.py +++ b/pandas/io/packers.py @@ -47,11 +47,12 @@ from pandas.compat import u from pandas import (Timestamp, Period, Series, DataFrame, # noqa Index, MultiIndex, Float64Index, Int64Index, - Panel, RangeIndex, PeriodIndex, DatetimeIndex) + Panel, RangeIndex, PeriodIndex, DatetimeIndex, + Categorical) from pandas.sparse.api import SparseSeries, SparseDataFrame, SparsePanel from pandas.sparse.array import BlockIndex, IntIndex from pandas.core.generic import NDFrame -from pandas.core.common import needs_i8_conversion +from pandas.core.common import needs_i8_conversion, is_categorical_dtype from pandas.io.common import get_filepath_or_buffer from pandas.core.internals import BlockManager, make_block import pandas.core.internals as internals @@ -170,6 +171,7 @@ def read(fh): # this is platform int, which we need to remap to np.int64 # for compat on windows platforms 7: np.dtype('int64'), + 'category': 'category' } @@ -209,6 +211,14 @@ def convert(values): if dtype == np.object_: return v.tolist() + if is_categorical_dtype(dtype): + return { + 'codes': {'dtype': values.codes.dtype.name, + 'data': convert(values.codes)}, + 'categories': {'dtype': values.categories.dtype.name, + 'data': convert(values.categories.values)} + } + if compressor == 'zlib': # return string arrays like they are @@ -242,6 +252,15 @@ def unconvert(values, dtype, compress=None): if as_is_ext: values = values.data + if is_categorical_dtype(dtype): + return Categorical.from_codes( + unconvert(values['codes']['data'], + dtype_for(values['codes']['dtype']), + compress=compress), + unconvert(values['categories']['data'], + dtype_for(values['categories']['dtype']), + compress=compress)) + if dtype == np.object_: return np.array(values, dtype=object) @@ -495,11 +514,15 @@ def decode(obj): elif typ == 'series': dtype = dtype_for(obj['dtype']) + ctor_dtype = dtype + if 
is_categorical_dtype(dtype): + # Series ctor doesn't take dtype with categorical + ctor_dtype = None index = obj['index'] return globals()[obj['klass']](unconvert(obj['data'], dtype, obj['compress']), index=index, - dtype=dtype, + dtype=ctor_dtype, name=obj['name']) elif typ == 'block_manager': axes = obj['axes'] diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py index 6905225600ae6..9d10732aebe17 100644 --- a/pandas/io/tests/test_packers.py +++ b/pandas/io/tests/test_packers.py @@ -9,7 +9,7 @@ from pandas import compat from pandas.compat import u from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range, - date_range, period_range, Index) + date_range, period_range, Index, Categorical) from pandas.io.packers import to_msgpack, read_msgpack import pandas.util.testing as tm from pandas.util.testing import (ensure_clean, assert_index_equal, @@ -330,11 +330,13 @@ def setUp(self): 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], 'D': date_range('1/1/2009', periods=5), 'E': [0., 1, Timestamp('20100101'), 'foo', 2.], + 'F': Categorical(['a', 'b', 'c', 'd', 'e']) } self.d['float'] = Series(data['A']) self.d['int'] = Series(data['B']) self.d['mixed'] = Series(data['E']) + self.d['categorical'] = Series(data['F']) def test_basic(self): @@ -356,13 +358,14 @@ def setUp(self): 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], 'D': date_range('1/1/2009', periods=5), 'E': [0., 1, Timestamp('20100101'), 'foo', 2.], + 'F': Categorical(['a', 'b', 'c', 'd', 'e']) } self.frame = { 'float': DataFrame(dict(A=data['A'], B=Series(data['A']) + 1)), 'int': DataFrame(dict(A=data['B'], B=Series(data['B']) + 1)), 'mixed': DataFrame(dict([(k, data[k]) - for k in ['A', 'B', 'C', 'D']]))} + for k in ['A', 'B', 'C', 'D', 'F']]))} self.panel = { 'float': Panel(dict(ItemA=self.frame['float'],
closes part of #7621

WIP, but wanted to put it up as is, since it probably interacts with #12129

cc: @kawochen

this is how I modified the schema - open to suggestions

```python
# normal block
{'dtype': dtype, 'data': ExtType(0, <bytes>)}

# Categorical block
{'dtype': 'categorical',
 'data': {'codes': {'dtype': 'int8', 'data': ExtType(0, <bytes>)},
          'categories': {'dtype': dtype, 'data': ExtType(0, <bytes>)}}}
```
https://api.github.com/repos/pandas-dev/pandas/pulls/12191
2016-02-01T01:52:37Z
2016-03-09T17:25:23Z
null
2016-06-08T22:33:56Z
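The schema above serializes a `Categorical` as two plain arrays — the integer codes and the categories — and the `unconvert` path rebuilds it with `Categorical.from_codes`. Stripped of the msgpack framing (which has since been removed from pandas entirely), the round trip the schema relies on looks like this:

```python
import pandas as pd

cat = pd.Categorical(['a', 'b', 'c', 'a'])

# what the packer stores: a small integer codes array plus the categories
codes = cat.codes            # e.g. an int8 array [0, 1, 2, 0]
categories = cat.categories  # Index(['a', 'b', 'c'])

# what the unpacker does: reconstruct without re-factorizing the values
restored = pd.Categorical.from_codes(codes, categories)
print((restored == cat).all())  # True
```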
DEPR: deprecate options.display.mpl_style
diff --git a/doc/source/10min.rst b/doc/source/10min.rst index 8dbe130bffb68..d51290b2a983b 100644 --- a/doc/source/10min.rst +++ b/doc/source/10min.rst @@ -11,10 +11,7 @@ np.random.seed(123456) np.set_printoptions(precision=4, suppress=True) import matplotlib - try: - matplotlib.style.use('ggplot') - except AttributeError: - pd.options.display.mpl_style = 'default' + matplotlib.style.use('ggplot') pd.options.display.max_rows = 15 #### portions of this were borrowed from the diff --git a/doc/source/computation.rst b/doc/source/computation.rst index ed3251efc3656..2b8cf7e41431b 100644 --- a/doc/source/computation.rst +++ b/doc/source/computation.rst @@ -8,10 +8,7 @@ np.set_printoptions(precision=4, suppress=True) import pandas as pd import matplotlib - try: - matplotlib.style.use('ggplot') - except AttributeError: - pd.options.display.mpl_style = 'default' + matplotlib.style.use('ggplot') import matplotlib.pyplot as plt plt.close('all') pd.options.display.max_rows=15 diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst index 1d301c1ee2f19..0dbc79415af0b 100644 --- a/doc/source/cookbook.rst +++ b/doc/source/cookbook.rst @@ -19,10 +19,7 @@ pd.options.display.max_rows=15 import matplotlib - try: - matplotlib.style.use('ggplot') - except AttributeError: - pd.options.display.mpl_style = 'default' + matplotlib.style.use('ggplot') np.set_printoptions(precision=4, suppress=True) diff --git a/doc/source/faq.rst b/doc/source/faq.rst index 39af96fa98df7..e5d659cc31606 100644 --- a/doc/source/faq.rst +++ b/doc/source/faq.rst @@ -14,10 +14,7 @@ Frequently Asked Questions (FAQ) import pandas as pd pd.options.display.max_rows = 15 import matplotlib - try: - matplotlib.style.use('ggplot') - except AttributeError: - pd.options.display.mpl_style = 'default' + matplotlib.style.use('ggplot') import matplotlib.pyplot as plt plt.close('all') diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst index 61f87ebe0db1b..1d1dd59efff52 100644 --- a/doc/source/groupby.rst +++ 
b/doc/source/groupby.rst @@ -10,10 +10,7 @@ import pandas as pd pd.options.display.max_rows = 15 import matplotlib - try: - matplotlib.style.use('ggplot') - except AttributeError: - pd.options.display.mpl_style = 'default' + matplotlib.style.use('ggplot') import matplotlib.pyplot as plt plt.close('all') diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst index 96ae46621dca2..fcc8ac896b9f0 100644 --- a/doc/source/missing_data.rst +++ b/doc/source/missing_data.rst @@ -7,10 +7,7 @@ import pandas as pd pd.options.display.max_rows=15 import matplotlib - try: - matplotlib.style.use('ggplot') - except AttributeError: - pd.options.display.mpl_style = 'default' + matplotlib.style.use('ggplot') import matplotlib.pyplot as plt .. _missing_data: diff --git a/doc/source/options.rst b/doc/source/options.rst index d00ba047a56a6..be1543f20a461 100644 --- a/doc/source/options.rst +++ b/doc/source/options.rst @@ -361,12 +361,6 @@ display.max_seq_items 100 when pretty-printing a long sequence, display.memory_usage True This specifies if the memory usage of a DataFrame should be displayed when the df.info() method is invoked. -display.mpl_style None Setting this to 'default' will modify - the rcParams used by matplotlib - to give plots a more pleasing visual - style by default. Setting this to - None/False restores the values to - their initial value. 
display.multi_sparse True "Sparsify" MultiIndex display (don't display repeated elements in outer levels within groups) diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index d52e0e3098b98..f54774ebe8dd6 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -816,6 +816,15 @@ Deprecations - ``pandas.stats.ols``, ``pandas.stats.plm`` and ``pandas.stats.var`` routines are deprecated and will be removed in a future version (:issue:`6077`) - show a ``FutureWarning`` rather than a ``DeprecationWarning`` on using long-time deprecated syntax in ``HDFStore.select``, where the ``where`` clause is not a string-like (:issue:`12027`) +- The ``pandas.options.display.mpl_style`` configuration has been deprecated + and will be removed in a future version of pandas. This functionality + is better handled by matplotlib's `style sheets`_ (:issue:`11783`). + + + + +.. _style sheets: http://matplotlib.org/users/style_sheets.html + .. _whatsnew_0180.prior_deprecations: Removal of prior version deprecations/changes diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py index 93a6968be8602..cf8a06465a7a4 100644 --- a/pandas/core/config_init.py +++ b/pandas/core/config_init.py @@ -9,6 +9,7 @@ module is imported, register them here rather then in the module. """ +import warnings import pandas.core.config as cf from pandas.core.config import (is_int, is_bool, is_text, is_instance_factory, @@ -222,6 +223,11 @@ Setting this to None/False restores the values to their initial value. """ +pc_mpl_style_deprecation_warning = """ +mpl_style had been deprecated and will be removed in a future version. +Use `matplotlib.pyplot.style.use` instead. 
+""" + pc_memory_usage_doc = """ : bool, string or None This specifies if the memory usage of a DataFrame should be displayed when @@ -246,6 +252,9 @@ def mpl_style_cb(key): + warnings.warn(pc_mpl_style_deprecation_warning, FutureWarning, + stacklevel=4) + import sys from pandas.tools.plotting import mpl_stylesheet global style_backup diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py index add7245561d3f..f224016be1e49 100644 --- a/pandas/tests/test_graphics.py +++ b/pandas/tests/test_graphics.py @@ -3774,9 +3774,15 @@ def test_df_grid_settings(self): plotting._dataframe_kinds, kws={'x': 'a', 'y': 'b'}) def test_option_mpl_style(self): - set_option('display.mpl_style', 'default') - set_option('display.mpl_style', None) - set_option('display.mpl_style', False) + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): + set_option('display.mpl_style', 'default') + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): + set_option('display.mpl_style', None) + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): + set_option('display.mpl_style', False) with tm.assertRaises(ValueError): set_option('display.mpl_style', 'default2')
Closes https://github.com/pydata/pandas/issues/11783. Deprecates pd.options.display.mpl_style in favor of matplotlib's style sheets.
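The change wires a `FutureWarning` into the option callback (`mpl_style_cb`) so that merely setting the option warns. A minimal sketch of that warn-on-set pattern using only the stdlib — `set_mpl_style` here is an illustrative stand-in, not the actual pandas callback:

```python
import warnings

DEPRECATION_MSG = (
    "mpl_style had been deprecated and will be removed in a future version. "
    "Use `matplotlib.pyplot.style.use` instead."
)

def set_mpl_style(value):
    # Warn first (as mpl_style_cb now does), then apply the setting.
    warnings.warn(DEPRECATION_MSG, FutureWarning, stacklevel=2)
    return value  # a real callback would go on to restyle matplotlib

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = set_mpl_style("default")

assert result == "default"
assert any(issubclass(w.category, FutureWarning) for w in caught)
```

This is the behavior the updated test checks with `tm.assert_produces_warning(FutureWarning, check_stacklevel=False)`.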
https://api.github.com/repos/pandas-dev/pandas/pulls/12190
2016-01-31T18:59:52Z
2016-02-11T13:04:13Z
null
2016-02-11T13:04:13Z
BUG: read_excel with squeeze=True
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 58b60fb08920a..62fa6c80e690f 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -565,7 +565,7 @@ Bug Fixes - Bug in ``.plot`` potentially modifying the ``colors`` input when the number of columns didn't match the number of series provided (:issue:`12039`). - +- Bug in ``read_excel`` failing to read data with one column when ``squeeze=True`` (:issue:`12157`) - Bug in ``.groupby`` where a ``KeyError`` was not raised for a wrong column if there was only one row in the dataframe (:issue:`11741`) - Bug in ``.read_csv`` with dtype specified on empty data producing an error (:issue:`12048`) - Bug in building *pandas* with debugging symbols (:issue:`12123`) diff --git a/pandas/io/excel.py b/pandas/io/excel.py index 0642079cc5b34..2972e21f5f120 100644 --- a/pandas/io/excel.py +++ b/pandas/io/excel.py @@ -76,7 +76,7 @@ def read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, index_col=None, names=None, parse_cols=None, parse_dates=False, date_parser=None, na_values=None, thousands=None, convert_float=True, has_index_names=None, converters=None, - engine=None, **kwds): + engine=None, squeeze=False, **kwds): """ Read an Excel table into a pandas DataFrame @@ -133,6 +133,8 @@ def read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, * If list of ints then indicates list of column numbers to be parsed * If string then indicates comma separated list of column names and column ranges (e.g. 
"A:E" or "A,C,E:F") + squeeze : boolean, default False + If the parsed data only contains one column then return a Series na_values : list-like, default None List of additional strings to recognize as NA/NaN thousands : str, default None @@ -171,7 +173,7 @@ def read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, index_col=index_col, parse_cols=parse_cols, parse_dates=parse_dates, date_parser=date_parser, na_values=na_values, thousands=thousands, convert_float=convert_float, has_index_names=has_index_names, - skip_footer=skip_footer, converters=converters, **kwds) + skip_footer=skip_footer, converters=converters, squeeze=squeeze, **kwds) class ExcelFile(object): @@ -227,7 +229,7 @@ def parse(self, sheetname=0, header=0, skiprows=None, skip_footer=0, index_col=None, parse_cols=None, parse_dates=False, date_parser=None, na_values=None, thousands=None, convert_float=True, has_index_names=None, - converters=None, **kwds): + converters=None, squeeze=False, **kwds): """ Parse specified sheet(s) into a DataFrame @@ -246,6 +248,7 @@ def parse(self, sheetname=0, header=0, skiprows=None, skip_footer=0, skip_footer=skip_footer, convert_float=convert_float, converters=converters, + squeeze=squeeze, **kwds) def _should_parse(self, i, parse_cols): @@ -285,7 +288,7 @@ def _parse_excel(self, sheetname=0, header=0, skiprows=None, skip_footer=0, index_col=None, has_index_names=None, parse_cols=None, parse_dates=False, date_parser=None, na_values=None, thousands=None, convert_float=True, - verbose=False, **kwds): + verbose=False, squeeze=False, **kwds): skipfooter = kwds.pop('skipfooter', None) if skipfooter is not None: @@ -452,11 +455,13 @@ def _parse_cell(cell_contents, cell_typ): date_parser=date_parser, skiprows=skiprows, skip_footer=skip_footer, + squeeze=squeeze, **kwds) output[asheetname] = parser.read() - output[asheetname].columns = output[ - asheetname].columns.set_names(header_names) + if not squeeze or isinstance(output[asheetname], DataFrame): + 
output[asheetname].columns = output[ + asheetname].columns.set_names(header_names) if ret_dict: return output diff --git a/pandas/io/tests/data/test_squeeze.xls b/pandas/io/tests/data/test_squeeze.xls new file mode 100644 index 0000000000000..7261f4df13f08 Binary files /dev/null and b/pandas/io/tests/data/test_squeeze.xls differ diff --git a/pandas/io/tests/data/test_squeeze.xlsm b/pandas/io/tests/data/test_squeeze.xlsm new file mode 100644 index 0000000000000..d7fabe802ff52 Binary files /dev/null and b/pandas/io/tests/data/test_squeeze.xlsm differ diff --git a/pandas/io/tests/data/test_squeeze.xlsx b/pandas/io/tests/data/test_squeeze.xlsx new file mode 100644 index 0000000000000..89fc590cebcc7 Binary files /dev/null and b/pandas/io/tests/data/test_squeeze.xlsx differ diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py index 082a26df681a4..a6a189e4f4785 100644 --- a/pandas/io/tests/test_excel.py +++ b/pandas/io/tests/test_excel.py @@ -741,6 +741,24 @@ def test_read_excel_skiprows_list(self): 'skiprows_list', skiprows=np.array([0, 2])) tm.assert_frame_equal(actual, expected) + def test_read_excel_squeeze(self): + # GH 12157 + f = os.path.join(self.dirpath, 'test_squeeze' + self.ext) + + actual = pd.read_excel(f, 'two_columns', index_col=0, squeeze=True) + expected = pd.Series([2, 3, 4], [4, 5, 6], name='b') + expected.index.name = 'a' + tm.assert_series_equal(actual, expected) + + actual = pd.read_excel(f, 'two_columns', squeeze=True) + expected = pd.DataFrame({'a': [4, 5, 6], + 'b': [2, 3, 4]}) + tm.assert_frame_equal(actual, expected) + + actual = pd.read_excel(f, 'one_column', squeeze=True) + expected = pd.Series([1,2,3], name='a') + tm.assert_series_equal(actual, expected) + class XlsReaderTests(XlrdTests, tm.TestCase): ext = '.xls'
closes #12157
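The behavior being tested — return a `Series` when the parsed sheet has exactly one column, otherwise leave the `DataFrame` untouched — can be sketched without an Excel file via `DataFrame.squeeze`, which implements the same rule (assuming a pandas version where `squeeze` accepts an `axis` argument):

```python
import pandas as pd

one_col = pd.DataFrame({"a": [1, 2, 3]})
two_cols = pd.DataFrame({"a": [4, 5, 6], "b": [2, 3, 4]})

# Squeezing along columns converts a single-column frame to a Series...
squeezed = one_col.squeeze(axis="columns")
assert isinstance(squeezed, pd.Series)
assert squeezed.name == "a"

# ...but leaves a multi-column frame alone, mirroring the guard
# `if not squeeze or isinstance(output[asheetname], DataFrame)` in the diff.
assert isinstance(two_cols.squeeze(axis="columns"), pd.DataFrame)
```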
https://api.github.com/repos/pandas-dev/pandas/pulls/12184
2016-01-30T15:47:27Z
2016-01-30T19:22:40Z
null
2016-02-01T00:25:12Z
DOC: reformat install.rst testing section
diff --git a/doc/source/install.rst b/doc/source/install.rst index 049fc75184c96..c3430c74014f0 100644 --- a/doc/source/install.rst +++ b/doc/source/install.rst @@ -192,32 +192,31 @@ installed), make sure you have `nose :: - $ nosetests pandas - .......................................................................... - .......................S.................................................. - .......................................................................... - .......................................................................... - .......................................................................... - .......................................................................... - .......................................................................... - .......................................................................... - .......................................................................... - .......................................................................... - .................S........................................................ - .... + >>> import pandas as pd + >>> pd.test() + Running unit tests for pandas + pandas version 0.18.0 + numpy version 1.10.2 + pandas is installed in pandas + Python version 2.7.11 |Continuum Analytics, Inc.| + (default, Dec 6 2015, 18:57:58) [GCC 4.2.1 (Apple Inc. build 5577)] + nose version 1.3.7 + ..................................................................S...... + ........S................................................................ + ......................................................................... 
+ ---------------------------------------------------------------------- - Ran 818 tests in 21.631s + Ran 9252 tests in 368.339s - OK (SKIP=2) + OK (SKIP=117) Dependencies ------------ * `setuptools <http://pythonhosted.org/setuptools>`__ * `NumPy <http://www.numpy.org>`__: 1.7.1 or higher -* `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher -* `pytz <http://pytz.sourceforge.net/>`__ - * Needed for time zone support +* `python-dateutil <http://labix.org/python-dateutil>`__: 1.5 or higher +* `pytz <http://pytz.sourceforge.net/>`__: Needed for time zone support .. _install.recommended_dependencies: @@ -247,6 +246,7 @@ Optional Dependencies * `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions * `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage. Version 3.0.0 or higher required, Version 3.2.1 or higher highly recommended. * `SQLAlchemy <http://www.sqlalchemy.org>`__: for SQL database support. Version 0.8.1 or higher recommended. + * Besides SQLAlchemy, you also need a database specific driver. Examples of such drivers are `psycopg2 <http://initd.org/psycopg/>`__ for PostgreSQL or `pymysql <https://github.com/PyMySQL/PyMySQL>`__ for MySQL. For @@ -255,12 +255,9 @@ Optional Dependencies You can find an overview of supported drivers for each SQL dialect in the `SQLAlchemy docs <http://docs.sqlalchemy.org/en/latest/dialects/index.html>`__. 
* `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting -* `statsmodels <http://statsmodels.sourceforge.net/>`__ - * Needed for parts of :mod:`pandas.stats` -* `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__ - * Needed for Excel I/O -* `XlsxWriter <https://pypi.python.org/pypi/XlsxWriter>`__ - * Alternative Excel writer +* `statsmodels <http://statsmodels.sourceforge.net/>`__: Needed for parts of :mod:`pandas.stats` +* `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__: Needed for Excel I/O +* `XlsxWriter <https://pypi.python.org/pypi/XlsxWriter>`__: Alternative Excel writer * `Jinja2 <http://jinja.pocoo.org/>`__: Template engine for conditional HTML formatting. * `boto <https://pypi.python.org/pypi/boto>`__: necessary for Amazon S3 access. @@ -275,12 +272,8 @@ Optional Dependencies distributions will have xclip and/or xsel immediately available for installation. * Google's `python-gflags <http://code.google.com/p/python-gflags/>`__ - and `google-api-python-client <http://github.com/google/google-api-python-client>`__ - * Needed for :mod:`~pandas.io.gbq` -* `setuptools <https://pypi.python.org/pypi/setuptools/>`__ - * Needed for :mod:`~pandas.io.gbq` (specifically, it utilizes `pkg_resources`) -* `httplib2 <http://pypi.python.org/pypi/httplib2>`__ - * Needed for :mod:`~pandas.io.gbq` + and `google-api-python-client <http://github.com/google/google-api-python-client>`__: Needed for :mod:`~pandas.io.gbq` +* `httplib2 <http://pypi.python.org/pypi/httplib2>`__: Needed for :mod:`~pandas.io.gbq` * One of the following combinations of libraries is needed to use the top-level :func:`~pandas.io.html.read_html` function: @@ -327,5 +320,5 @@ Optional Dependencies Without the optional dependencies, many useful features will not work. Hence, it is highly recommended that you install these. 
A packaged - distribution like `Enthought Canopy + distribution like `Anaconda <http://docs.continuum.io/anaconda/>`__, or `Enthought Canopy <http://enthought.com/products/canopy>`__ may be worth considering. diff --git a/pandas/util/nosetester.py b/pandas/util/nosetester.py index fc42d8ba7b54a..fdee9be20afa3 100644 --- a/pandas/util/nosetester.py +++ b/pandas/util/nosetester.py @@ -178,7 +178,15 @@ def test(self, label='fast', verbose=1, extra_argv=None, doctest.master = None if raise_warnings is None: - raise_warnings = 'release' + + # default based on if we are released + from pandas import __version__ + from distutils.version import StrictVersion + try: + StrictVersion(__version__) + raise_warnings = 'release' + except ValueError: + raise_warnings = 'develop' _warn_opts = dict(develop=(DeprecationWarning, RuntimeWarning), release=()) @@ -186,20 +194,22 @@ def test(self, label='fast', verbose=1, extra_argv=None, raise_warnings = _warn_opts[raise_warnings] with warnings.catch_warnings(): - # Reset the warning filters to the default state, - # so that running the tests is more repeatable. - warnings.resetwarnings() - # Set all warnings to 'warn', this is because the default 'once' - # has the bad property of possibly shadowing later warnings. - warnings.filterwarnings('always') - # Force the requested warnings to raise - for warningtype in raise_warnings: - warnings.filterwarnings('error', category=warningtype) - # Filter out annoying import messages. - warnings.filterwarnings("ignore", category=FutureWarning) - from numpy.testing.noseclasses import NumpyTestProgram + if len(raise_warnings): + # Reset the warning filters to the default state, + # so that running the tests is more repeatable. + warnings.resetwarnings() + # Set all warnings to 'warn', this is because the default 'once' + # has the bad property of possibly shadowing later warnings. 
+ warnings.filterwarnings('always') + # Force the requested warnings to raise + for warningtype in raise_warnings: + warnings.filterwarnings('error', category=warningtype) + # Filter out annoying import messages. + warnings.filterwarnings("ignore", category=FutureWarning) + + from numpy.testing.noseclasses import NumpyTestProgram argv, plugins = self.prepare_test_args( label, verbose, extra_argv, doctests, coverage) t = NumpyTestProgram(argv=argv, exit=False, plugins=plugins)
TST: `run.test()` now defaults warnings based on release version
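The `nosetester.py` change picks the default `raise_warnings` by whether `pandas.__version__` parses as a strict release version (`StrictVersion` raises `ValueError` on dev builds). A rough sketch of that decision, using a regex approximation in place of `distutils.version.StrictVersion` (which is deprecated in modern Python) — names here are illustrative:

```python
import re

# StrictVersion accepts forms like "0.18.0" or "1.0a1"/"1.0b2"; dev builds
# such as "0.18.0.dev0+1234" fail to parse. This regex approximates that.
_STRICT = re.compile(r"^\d+\.\d+(\.\d+)?([ab]\d+)?$")

def default_raise_warnings(version):
    # Released versions get the quiet 'release' policy; anything that fails
    # to parse (a development build) gets the stricter 'develop' policy.
    return "release" if _STRICT.match(version) else "develop"

assert default_raise_warnings("0.18.0") == "release"
assert default_raise_warnings("0.18.0.dev0+1234") == "develop"
```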
https://api.github.com/repos/pandas-dev/pandas/pulls/12179
2016-01-29T16:20:38Z
2016-01-29T16:39:09Z
null
2016-01-29T16:39:09Z
Add ZIP file decompression and TestCompression.
diff --git a/doc/source/whatsnew/v0.18.1.txt b/doc/source/whatsnew/v0.18.1.txt index 1ced927001a94..ff8c3347c64ff 100644 --- a/doc/source/whatsnew/v0.18.1.txt +++ b/doc/source/whatsnew/v0.18.1.txt @@ -56,6 +56,7 @@ Partial string indexing now matches on ``DateTimeIndex`` when part of a ``MultiI Other Enhancements ^^^^^^^^^^^^^^^^^^ +- ``pd.read_csv()`` now supports opening ZIP files that contain a single CSV (:issue:`12175`) - ``pd.read_msgpack()`` now always gives writeable ndarrays even when compression is used (:issue:`12359`). .. _whatsnew_0181.api: diff --git a/pandas/io/common.py b/pandas/io/common.py index be8c3ccfe08e6..d44057178d27e 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -360,6 +360,21 @@ def _get_handle(path, mode, encoding=None, compression=None): elif compression == 'bz2': import bz2 f = bz2.BZ2File(path, mode) + elif compression == 'zip': + import zipfile + zip_file = zipfile.ZipFile(path) + zip_names = zip_file.namelist() + + if len(zip_names) == 1: + file_name = zip_names.pop() + f = zip_file.open(file_name) + elif len(zip_names) == 0: + raise ValueError('Zero files found in ZIP file {}' + .format(path)) + else: + raise ValueError('Multiple files found in ZIP file.' + ' Only one file per ZIP :{}' + .format(zip_names)) else: raise ValueError('Unrecognized compression type: %s' % compression) diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 36a9abdfbca60..49fbadadfb719 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -158,11 +158,12 @@ class ParserWarning(Warning): information <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_ on ``iterator`` and ``chunksize``. -compression : {'infer', 'gzip', 'bz2', None}, default 'infer' - For on-the-fly decompression of on-disk data. If 'infer', then use gzip or - bz2 if filepath_or_buffer is a string ending in '.gz' or '.bz2', - respectively, and no decompression otherwise. Set to None for no - decompression.
+compression : {'gzip', 'bz2', 'zip', 'infer', None}, default 'infer' + For on-the-fly decompression of on-disk data. If 'infer', then use gzip, + bz2 or zip if filepath_or_buffer is a string ending in '.gz', '.bz2' or + '.zip', respectively, and no decompression otherwise. New in 0.18.1: ZIP + compression. If using 'zip', the ZIP file must contain only one data file + to be read in. Set to None for no decompression. thousands : str, default None Thousands separator decimal : str, default '.' @@ -273,6 +274,8 @@ def _read(filepath_or_buffer, kwds): inferred_compression = 'gzip' elif filepath_or_buffer.endswith('.bz2'): inferred_compression = 'bz2' + elif filepath_or_buffer.endswith('.zip'): + inferred_compression = 'zip' else: inferred_compression = None else: @@ -1397,6 +1400,25 @@ def _wrap_compressed(f, compression, encoding=None): data = bz2.decompress(f.read()) f = StringIO(data) return f + elif compression == 'zip': + import zipfile + zip_file = zipfile.ZipFile(f) + zip_names = zip_file.namelist() + + if len(zip_names) == 1: + file_name = zip_names.pop() + f = zip_file.open(file_name) + return f + + elif len(zip_names) == 0: + raise ValueError('Corrupted or zero files found in compressed ' + 'zip file %s', zip_file.filename) + + else: + raise ValueError('Multiple files found in compressed ' + 'zip file %s', str(zip_names)) + else: raise ValueError('do not recognize compression method %s' % compression) diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py index 9f53fc1ded882..7c7b40d77e821 100755 --- a/pandas/io/tests/test_parsers.py +++ b/pandas/io/tests/test_parsers.py @@ -3,44 +3,39 @@ # flake8: noqa -from datetime import datetime import csv import os -import sys -import re -import nose import platform from distutils.version import LooseVersion +import re +import sys +from datetime import datetime from multiprocessing.pool import ThreadPool -from numpy import nan +import nose import numpy as np -from
pandas.io.common import DtypeWarning +import pandas.lib as lib +import pandas.parser +from numpy import nan +from numpy.testing.decorators import slow +from pandas.lib import Timestamp +import pandas as pd +import pandas.io.parsers as parsers +import pandas.tseries.tools as tools +import pandas.util.testing as tm from pandas import DataFrame, Series, Index, MultiIndex, DatetimeIndex +from pandas import compat from pandas.compat import( StringIO, BytesIO, PY3, range, long, lrange, lmap, u ) -from pandas.io.common import URLError -import pandas.io.parsers as parsers +from pandas.compat import parse_date +from pandas.core.common import AbstractMethodError +from pandas.io.common import DtypeWarning, URLError from pandas.io.parsers import (read_csv, read_table, read_fwf, TextFileReader, TextParser) - -import pandas.util.testing as tm -import pandas as pd - -from pandas.core.common import AbstractMethodError -from pandas.compat import parse_date -import pandas.lib as lib -from pandas import compat -from pandas.lib import Timestamp from pandas.tseries.index import date_range -import pandas.tseries.tools as tools - -from numpy.testing.decorators import slow - -import pandas.parser class ParserTests(object): @@ -2696,7 +2691,166 @@ def test_uneven_lines_with_usecols(self): tm.assert_frame_equal(df, expected) -class TestPythonParser(ParserTests, tm.TestCase): +class CompressionTests(object): + def test_zip(self): + try: + import zipfile + except ImportError: + raise nose.SkipTest('need zipfile to run') + + with open(self.csv1, 'rb') as data_file: + data = data_file.read() + expected = self.read_csv(self.csv1) + + with tm.ensure_clean('test_file.zip') as path: + tmp = zipfile.ZipFile(path, mode='w') + tmp.writestr('test_file', data) + tmp.close() + + result = self.read_csv(path, compression='zip') + tm.assert_frame_equal(result, expected) + + result = self.read_csv(path, compression='infer') + tm.assert_frame_equal(result, expected) + + if self.engine is not 'python': + with 
open(path, 'rb') as f: + result = self.read_csv(f, compression='zip') + tm.assert_frame_equal(result, expected) + + with tm.ensure_clean('combined_zip.zip') as path: + inner_file_names = ['test_file', 'second_file'] + tmp = zipfile.ZipFile(path, mode='w') + for file_name in inner_file_names: + tmp.writestr(file_name, data) + tmp.close() + + self.assertRaisesRegexp(ValueError, 'Multiple files', self.read_csv, + path, compression='zip') + + self.assertRaisesRegexp(ValueError, 'Multiple files', self.read_csv, + path, compression='infer') + + with tm.ensure_clean() as path: + tmp = zipfile.ZipFile(path, mode='w') + tmp.close() + + self.assertRaisesRegexp(ValueError, 'Zero files',self.read_csv, + path, compression='zip') + + with tm.ensure_clean() as path: + with open(path, 'wb') as f: + self.assertRaises(zipfile.BadZipfile, self.read_csv, f, compression='zip') + + + def test_gzip(self): + try: + import gzip + except ImportError: + raise nose.SkipTest('need gzip to run') + + with open(self.csv1, 'rb') as data_file: + data = data_file.read() + expected = self.read_csv(self.csv1) + + with tm.ensure_clean() as path: + tmp = gzip.GzipFile(path, mode='wb') + tmp.write(data) + tmp.close() + + result = self.read_csv(path, compression='gzip') + tm.assert_frame_equal(result, expected) + + with open(path, 'rb') as f: + result = self.read_csv(f, compression='gzip') + tm.assert_frame_equal(result, expected) + + with tm.ensure_clean('test.gz') as path: + tmp = gzip.GzipFile(path, mode='wb') + tmp.write(data) + tmp.close() + result = self.read_csv(path, compression='infer') + tm.assert_frame_equal(result, expected) + + def test_bz2(self): + try: + import bz2 + except ImportError: + raise nose.SkipTest('need bz2 to run') + + with open(self.csv1, 'rb') as data_file: + data = data_file.read() + expected = self.read_csv(self.csv1) + + with tm.ensure_clean() as path: + tmp = bz2.BZ2File(path, mode='wb') + tmp.write(data) + tmp.close() + + result = self.read_csv(path, compression='bz2') + 
tm.assert_frame_equal(result, expected) + + self.assertRaises(ValueError, self.read_csv, + path, compression='bz3') + + with open(path, 'rb') as fin: + if compat.PY3: + result = self.read_csv(fin, compression='bz2') + tm.assert_frame_equal(result, expected) + elif self.engine is not 'python': + self.assertRaises(ValueError, self.read_csv, + fin, compression='bz2') + + with tm.ensure_clean('test.bz2') as path: + tmp = bz2.BZ2File(path, mode='wb') + tmp.write(data) + tmp.close() + result = self.read_csv(path, compression='infer') + tm.assert_frame_equal(result, expected) + + def test_decompression_regex_sep(self): + try: + import gzip + import bz2 + except ImportError: + raise nose.SkipTest('need gzip and bz2 to run') + + with open(self.csv1, 'rb') as data_file: + data = data_file.read() + data = data.replace(b',', b'::') + expected = self.read_csv(self.csv1) + + with tm.ensure_clean() as path: + tmp = gzip.GzipFile(path, mode='wb') + tmp.write(data) + tmp.close() + + # GH 6607 + # Test currently only valid with the python engine because of + # regex sep. Temporarily copied to TestPythonParser. 
+ # Here test for ValueError when passing regex sep: + + with tm.assertRaisesRegexp(ValueError, 'regex sep'): #XXX + result = self.read_csv(path, sep='::', compression='gzip', engine='c') + tm.assert_frame_equal(result, expected) + + with tm.ensure_clean() as path: + tmp = bz2.BZ2File(path, mode='wb') + tmp.write(data) + tmp.close() + + # GH 6607 + with tm.assertRaisesRegexp(ValueError, 'regex sep'): #XXX + result = self.read_csv(path, sep='::', compression='bz2', engine='c') + tm.assert_frame_equal(result, expected) + + self.assertRaises(ValueError, self.read_csv, + path, compression='bz3') + + +class TestPythonParser(ParserTests, CompressionTests, tm.TestCase): + + engine = 'python' def test_negative_skipfooter_raises(self): text = """#foo,a,b,c @@ -2716,12 +2870,12 @@ def test_negative_skipfooter_raises(self): def read_csv(self, *args, **kwds): kwds = kwds.copy() - kwds['engine'] = 'python' + kwds['engine'] = self.engine return read_csv(*args, **kwds) def read_table(self, *args, **kwds): kwds = kwds.copy() - kwds['engine'] = 'python' + kwds['engine'] = self.engine return read_table(*args, **kwds) def float_precision_choices(self): @@ -3521,17 +3675,19 @@ def test_buffer_rd_bytes(self): except Exception as e: pass -class TestCParserHighMemory(CParserTests, tm.TestCase): + +class TestCParserHighMemory(CParserTests, CompressionTests, tm.TestCase): + engine = 'c' def read_csv(self, *args, **kwds): kwds = kwds.copy() - kwds['engine'] = 'c' + kwds['engine'] = self.engine kwds['low_memory'] = False return read_csv(*args, **kwds) def read_table(self, *args, **kwds): kwds = kwds.copy() - kwds['engine'] = 'c' + kwds['engine'] = self.engine kwds['low_memory'] = False return read_table(*args, **kwds) @@ -3832,18 +3988,20 @@ def test_single_char_leading_whitespace(self): tm.assert_frame_equal(result, expected) -class TestCParserLowMemory(CParserTests, tm.TestCase): +class TestCParserLowMemory(CParserTests, CompressionTests, tm.TestCase): + + engine = 'c' def read_csv(self, 
*args, **kwds): kwds = kwds.copy() - kwds['engine'] = 'c' + kwds['engine'] = self.engine kwds['low_memory'] = True kwds['buffer_lines'] = 2 return read_csv(*args, **kwds) def read_table(self, *args, **kwds): kwds = kwds.copy() - kwds['engine'] = 'c' + kwds['engine'] = self.engine kwds['low_memory'] = True kwds['buffer_lines'] = 2 return read_table(*args, **kwds) @@ -4060,86 +4218,6 @@ def test_pure_python_failover(self): expected = DataFrame({'a': [1, 4], 'b': [2, 5], 'c': [3, 6]}) tm.assert_frame_equal(result, expected) - def test_decompression(self): - try: - import gzip - import bz2 - except ImportError: - raise nose.SkipTest('need gzip and bz2 to run') - - data = open(self.csv1, 'rb').read() - expected = self.read_csv(self.csv1) - - with tm.ensure_clean() as path: - tmp = gzip.GzipFile(path, mode='wb') - tmp.write(data) - tmp.close() - - result = self.read_csv(path, compression='gzip') - tm.assert_frame_equal(result, expected) - - result = self.read_csv(open(path, 'rb'), compression='gzip') - tm.assert_frame_equal(result, expected) - - with tm.ensure_clean() as path: - tmp = bz2.BZ2File(path, mode='wb') - tmp.write(data) - tmp.close() - - result = self.read_csv(path, compression='bz2') - tm.assert_frame_equal(result, expected) - - # result = self.read_csv(open(path, 'rb'), compression='bz2') - # tm.assert_frame_equal(result, expected) - - self.assertRaises(ValueError, self.read_csv, - path, compression='bz3') - - with open(path, 'rb') as fin: - if compat.PY3: - result = self.read_csv(fin, compression='bz2') - tm.assert_frame_equal(result, expected) - else: - self.assertRaises(ValueError, self.read_csv, - fin, compression='bz2') - - def test_decompression_regex_sep(self): - try: - import gzip - import bz2 - except ImportError: - raise nose.SkipTest('need gzip and bz2 to run') - - data = open(self.csv1, 'rb').read() - data = data.replace(b',', b'::') - expected = self.read_csv(self.csv1) - - with tm.ensure_clean() as path: - tmp = gzip.GzipFile(path, mode='wb') - 
tmp.write(data) - tmp.close() - - # GH 6607 - # Test currently only valid with the python engine because of - # regex sep. Temporarily copied to TestPythonParser. - # Here test for ValueError when passing regex sep: - - with tm.assertRaisesRegexp(ValueError, 'regex sep'): # XXX - result = self.read_csv(path, sep='::', compression='gzip') - tm.assert_frame_equal(result, expected) - - with tm.ensure_clean() as path: - tmp = bz2.BZ2File(path, mode='wb') - tmp.write(data) - tmp.close() - - # GH 6607 - with tm.assertRaisesRegexp(ValueError, 'regex sep'): # XXX - result = self.read_csv(path, sep='::', compression='bz2') - tm.assert_frame_equal(result, expected) - - self.assertRaises(ValueError, self.read_csv, - path, compression='bz3') def test_memory_map(self): # it works! diff --git a/pandas/parser.pyx b/pandas/parser.pyx index e2ba8d9d07ae2..8bfc0ab8d6c56 100644 --- a/pandas/parser.pyx +++ b/pandas/parser.pyx @@ -567,6 +567,21 @@ cdef class TextReader: else: raise ValueError('Python 2 cannot read bz2 from open file ' 'handle') + elif self.compression == 'zip': + import zipfile + zip_file = zipfile.ZipFile(source) + zip_names = zip_file.namelist() + + if len(zip_names) == 1: + file_name = zip_names.pop() + source = zip_file.open(file_name) + + elif len(zip_names) == 0: + raise ValueError('Zero files found in compressed ' + 'zip file %s', source) + else: + raise ValueError('Multiple files found in compressed ' + 'zip file %s', str(zip_names)) else: raise ValueError('Unrecognized compression type: %s' % self.compression) diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py index a5b86b35d330e..4faf67eda6c78 100644 --- a/pandas/tests/frame/test_to_csv.py +++ b/pandas/tests/frame/test_to_csv.py @@ -994,7 +994,8 @@ def test_to_csv_compression_value_error(self): with ensure_clean() as filename: # zip compression is not supported and should raise ValueError - self.assertRaises(ValueError, df.to_csv, + import zipfile + 
self.assertRaises(zipfile.BadZipfile, df.to_csv, filename, compression="zip") def test_to_csv_date_format(self):
Closes #11413
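The new `zip` branch in `_get_handle` rejects archives unless they contain exactly one member. That validation can be exercised end-to-end with only the stdlib — the pandas call itself is omitted; `open_single_csv` just mirrors the logic the diff adds:

```python
import io
import zipfile

def open_single_csv(zip_bytes):
    """Return a handle to the sole member of a ZIP, as _get_handle does."""
    zf = zipfile.ZipFile(io.BytesIO(zip_bytes))
    names = zf.namelist()
    if len(names) == 0:
        raise ValueError('Zero files found in ZIP file')
    if len(names) > 1:
        raise ValueError('Multiple files found in ZIP file.'
                         ' Only one file per ZIP: {}'.format(names))
    return zf.open(names[0])

# Build an in-memory archive holding a single CSV member, like the test does
# with tm.ensure_clean('test_file.zip').
buf = io.BytesIO()
with zipfile.ZipFile(buf, mode='w') as zf:
    zf.writestr('test_file.csv', 'a,b\n1,2\n')

data = open_single_csv(buf.getvalue()).read()
assert data == b'a,b\n1,2\n'
```

With two members in the archive, the same helper raises the "Multiple files" `ValueError` that `assertRaisesRegexp` checks for in `test_zip`.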
https://api.github.com/repos/pandas-dev/pandas/pulls/12175
2016-01-29T15:15:25Z
2016-03-22T14:32:31Z
null
2016-03-22T16:16:56Z
Iterableiterator
diff --git a/pandas/io/common.py b/pandas/io/common.py index c5f433ceaab4b..8c9c348b9a11c 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -9,7 +9,7 @@ from pandas.compat import StringIO, BytesIO, string_types, text_type from pandas import compat -from pandas.core.common import pprint_thing, is_number +from pandas.core.common import pprint_thing, is_number, AbstractMethodError try: @@ -59,6 +59,20 @@ class DtypeWarning(Warning): pass +class BaseIterator(object): + """Subclass this and provide a "__next__()" method to obtain an iterator. + Useful only when the object being iterated is non-reusable (e.g. OK for a + parser, not for an in-memory table, yes for its iterator).""" + def __iter__(self): + return self + + def __next__(self): + raise AbstractMethodError(self) + +if not compat.PY3: + BaseIterator.next = lambda self: self.__next__() + + try: from boto.s3 import key @@ -394,7 +408,7 @@ def UnicodeReader(f, dialect=csv.excel, encoding="utf-8", **kwds): def UnicodeWriter(f, dialect=csv.excel, encoding="utf-8", **kwds): return csv.writer(f, dialect=dialect, **kwds) else: - class UnicodeReader: + class UnicodeReader(BaseIterator): """ A CSV reader which will iterate over lines in the CSV file "f", @@ -408,16 +422,10 @@ def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds): f = UTF8Recoder(f, encoding) self.reader = csv.reader(f, dialect=dialect, **kwds) - def next(self): + def __next__(self): row = next(self.reader) return [compat.text_type(s, "utf-8") for s in row] - # python 3 iterator - __next__ = next - - def __iter__(self): # pragma: no cover - return self - class UnicodeWriter: """ diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index dc6923b752ac7..1593716097985 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -19,7 +19,8 @@ from pandas.core.config import get_option from pandas.io.date_converters import generic_parser from pandas.io.common import (get_filepath_or_buffer, _validate_header_arg, - _get_handle, 
UnicodeReader, UTF8Recoder) + _get_handle, UnicodeReader, UTF8Recoder, + BaseIterator) from pandas.tseries import tools from pandas.util.decorators import Appender @@ -545,7 +546,7 @@ def read_fwf(filepath_or_buffer, colspecs='infer', widths=None, **kwds): ]) -class TextFileReader(object): +class TextFileReader(BaseIterator): """ Passed dialect overrides any of the related parser options @@ -724,15 +725,8 @@ def _clean_options(self, options, engine): return result, engine - def __iter__(self): - try: - if self.chunksize: - while True: - yield self.read(self.chunksize) - else: - yield self.read() - except StopIteration: - pass + def __next__(self): + return self.get_chunk() def _make_engine(self, engine='c'): if engine == 'c': @@ -2363,7 +2357,7 @@ def _concat_date_cols(date_cols): return rs -class FixedWidthReader(object): +class FixedWidthReader(BaseIterator): """ A reader of fixed-width lines. """ @@ -2417,7 +2411,7 @@ def detect_colspecs(self, n=100): edges = np.where((mask ^ shifted) == 1)[0] return list(zip(edges[::2], edges[1::2])) - def next(self): + def __next__(self): if self.buffer is not None: try: line = next(self.buffer) @@ -2430,9 +2424,6 @@ def next(self): return [line[fromm:to].strip(self.delimiter) for (fromm, to) in self.colspecs] - # Iterator protocol in Python 3 uses __next__() - __next__ = next - class FixedWidthFieldParser(PythonParser): """ diff --git a/pandas/io/sas.py b/pandas/io/sas.py index 39e83b7715cda..49013a98c77ff 100644 --- a/pandas/io/sas.py +++ b/pandas/io/sas.py @@ -10,7 +10,7 @@ from datetime import datetime import pandas as pd -from pandas.io.common import get_filepath_or_buffer +from pandas.io.common import get_filepath_or_buffer, BaseIterator from pandas import compat import struct import numpy as np @@ -242,7 +242,7 @@ def _parse_float_vec(vec): return ieee -class XportReader(object): +class XportReader(BaseIterator): __doc__ = _xport_reader_doc def __init__(self, filepath_or_buffer, index=None, encoding='ISO-8859-1', @@ 
-369,15 +369,8 @@ def _read_header(self): dtype = np.dtype(dtypel) self._dtype = dtype - def __iter__(self): - try: - if self._chunksize: - while True: - yield self.read(self._chunksize) - else: - yield self.read() - except StopIteration: - pass + def __next__(self): + return self.read(nrows=self._chunksize or 1) def _record_count(self): """ diff --git a/pandas/io/stata.py b/pandas/io/stata.py index 8181e69abc60b..e54d0a5c43887 100644 --- a/pandas/io/stata.py +++ b/pandas/io/stata.py @@ -25,7 +25,7 @@ from pandas.util.decorators import Appender import pandas as pd import pandas.core.common as com -from pandas.io.common import get_filepath_or_buffer +from pandas.io.common import get_filepath_or_buffer, BaseIterator from pandas.lib import max_len_string_array, infer_dtype from pandas.tslib import NaT, Timestamp @@ -907,7 +907,7 @@ def _decode_bytes(self, str, errors=None): return str -class StataReader(StataParser): +class StataReader(StataParser, BaseIterator): __doc__ = _stata_reader_doc def __init__(self, path_or_buf, convert_dates=True, @@ -1377,15 +1377,8 @@ def data(self, **kwargs): return self.read(None, **kwargs) - def __iter__(self): - try: - if self._chunksize: - while True: - yield self.read(self._chunksize) - else: - yield self.read() - except StopIteration: - pass + def __next__(self): + return self.read(nrows=self._chunksize or 1) def get_chunk(self, size=None): """ diff --git a/pandas/io/tests/test_common.py b/pandas/io/tests/test_common.py index 55fe3f3357c05..8615b75d87626 100644 --- a/pandas/io/tests/test_common.py +++ b/pandas/io/tests/test_common.py @@ -9,6 +9,8 @@ from pandas.io import common +from pandas import read_csv, concat + try: from pathlib import Path except ImportError: @@ -21,6 +23,14 @@ class TestCommonIOCapabilities(tm.TestCase): + data1 = """index,A,B,C,D +foo,2,3,4,5 +bar,7,8,9,10 +baz,12,13,14,15 +qux,12,13,14,15 +foo2,12,13,14,15 +bar2,12,13,14,15 +""" def test_expand_user(self): filename = '~/sometest' @@ -64,3 +74,16 @@ def 
test_get_filepath_or_buffer_with_buffer(self): input_buffer = StringIO() filepath_or_buffer, _, _ = common.get_filepath_or_buffer(input_buffer) self.assertEqual(filepath_or_buffer, input_buffer) + + def test_iterator(self): + reader = read_csv(StringIO(self.data1), chunksize=1) + result = concat(reader, ignore_index=True) + expected = read_csv(StringIO(self.data1)) + tm.assert_frame_equal(result, expected) + + # GH12153 + it = read_csv(StringIO(self.data1), chunksize=1) + first = next(it) + tm.assert_frame_equal(first, expected.iloc[[0]]) + expected.index = [0 for i in range(len(expected))] + tm.assert_frame_equal(concat(it), expected.iloc[1:]) diff --git a/pandas/io/tests/test_sas.py b/pandas/io/tests/test_sas.py index bca3594f4b47c..3a235eafe9b2c 100644 --- a/pandas/io/tests/test_sas.py +++ b/pandas/io/tests/test_sas.py @@ -88,6 +88,11 @@ def test1_incremental(self): tm.assert_frame_equal(data, data_csv, check_index_type=False) + reader = XportReader(self.file01, index="SEQN", chunksize=1000) + data = pd.concat(reader, axis=0) + + tm.assert_frame_equal(data, data_csv, check_index_type=False) + def test2(self): # Test with SSHSV1_A.XPT diff --git a/pandas/io/tests/test_stata.py b/pandas/io/tests/test_stata.py index e1e12e47457f9..3eb0e0819e2ca 100644 --- a/pandas/io/tests/test_stata.py +++ b/pandas/io/tests/test_stata.py @@ -1033,6 +1033,10 @@ def test_iterator(self): chunk = itr.get_chunk() tm.assert_frame_equal(parsed.iloc[0:5, :], chunk) + # GH12153 + from_chunks = pd.concat(read_stata(fname, chunksize=4)) + tm.assert_frame_equal(parsed, from_chunks) + def test_read_chunks_115(self): files_115 = [self.dta2_115, self.dta3_115, self.dta4_115, self.dta14_115, self.dta15_115, self.dta16_115,
If the approach is not overkill, it can also be used for `UTF8Recoder` and `UnicodeReader` in `pandas/io/common.py`. Closes #12153.
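The pattern this PR introduces is small enough to sketch standalone: a base class whose `__iter__` returns `self`, so subclasses only provide `__next__`. The `ChunkReader` subclass below is hypothetical (not part of the patch) and stands in for the chunked readers in `pandas/io`; it also shows the GH12153 behavior of taking a first chunk with `next()` before iterating the rest.

```python
class BaseIterator(object):
    """Subclass and provide a __next__() method to obtain an iterator.

    Only suitable when the iterated object is single-use (e.g. a
    streaming parser), because __iter__ returns the object itself.
    """
    def __iter__(self):
        return self

    def __next__(self):
        raise NotImplementedError("subclasses must implement __next__")


class ChunkReader(BaseIterator):
    """Toy reader that yields fixed-size chunks from a list of rows."""
    def __init__(self, rows, chunksize):
        self.rows = rows
        self.chunksize = chunksize
        self.pos = 0

    def __next__(self):
        if self.pos >= len(self.rows):
            raise StopIteration
        chunk = self.rows[self.pos:self.pos + self.chunksize]
        self.pos += self.chunksize
        return chunk


reader = ChunkReader([1, 2, 3, 4, 5], chunksize=2)
print(next(reader))          # take the first chunk explicitly -> [1, 2]
print(list(reader))          # the remaining chunks -> [[3, 4], [5]]
```

Because iteration consumes the reader, a second `list(reader)` would yield `[]`; that is exactly the non-reusable case the docstring warns about.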
https://api.github.com/repos/pandas-dev/pandas/pulls/12173
2016-01-29T10:52:53Z
2016-02-06T20:04:13Z
null
2016-02-06T21:02:48Z
ENH: plotting numerical expressions
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 8f7c0a2b1be9a..57a1f9b55316d 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -2421,6 +2421,14 @@ def _plot(data, x=None, y=None, subplots=False, if com.is_integer(y) and not data.columns.holds_integer(): y = data.columns[y] label = kwds['label'] if 'label' in kwds else y + + if com.is_string_like(y) and y not in data.columns: + data = data.assign(**{y:data.eval(y)}) + elif com.is_list_like: + for column in y: + if column not in data.columns: + data = data.assign(**{column:data.eval(column)}) + series = data[y].copy() # Don't modify series.name = label
Passing the `y` argument of a plot through `DataFrame.eval` allows plotting a numerical expression, which makes numerical data exploration much easier, e.g. graphically finding the norm of two curves. See example here: [example notebook](https://gist.github.com/TsvikiHirsh/46fd47bc35dc1874234d) There is no support yet for multi-column plotting, e.g. for `["a","4*b"]`. It should be very easy to apply the same approach to the `index`. This could be very useful for data exploration, since it does not modify the original `DataFrame`.
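The core idea, resolving a `y` that is not an existing column by evaluating it as an expression over the columns, can be sketched without pandas. Here `resolve_column` is a hypothetical helper (not part of the patch) that mimics what `DataFrame.eval` would do, evaluating the expression row by row with column names bound to scalars:

```python
def resolve_column(data, y):
    """Return the values for ``y``: either an existing column, or an
    arithmetic expression evaluated over the columns (row by row)."""
    if y in data:
        return data[y]
    n = len(next(iter(data.values())))
    # stand-in for DataFrame.eval: evaluate once per row, with
    # builtins disabled so only the column names are visible
    return [eval(y, {"__builtins__": {}},
                 {k: v[i] for k, v in data.items()})
            for i in range(n)]


data = {"a": [1.0, 2.0, 3.0], "b": [10.0, 20.0, 30.0]}
print(resolve_column(data, "a"))        # plain column lookup -> [1.0, 2.0, 3.0]
print(resolve_column(data, "4*b - a"))  # expression -> [39.0, 78.0, 117.0]
```

The real implementation would delegate to `DataFrame.eval`, which parses the expression safely and vectorizes it; this sketch only illustrates the column-or-expression dispatch.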
https://api.github.com/repos/pandas-dev/pandas/pulls/12168
2016-01-28T16:44:17Z
2016-03-23T12:56:52Z
null
2016-07-01T16:55:07Z
DEPR: some removals
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 3a188ea20f8a3..9cb9d18edf4ac 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -447,7 +447,7 @@ Removal of prior version deprecations/changes - Removal of ``rolling_corr_pairwise`` in favor of ``.rolling().corr(pairwise=True)`` (:issue:`4950`) - Removal of ``expanding_corr_pairwise`` in favor of ``.expanding().corr(pairwise=True)`` (:issue:`4950`) - Removal of ``DataMatrix`` module. This was not imported into the pandas namespace in any event (:issue:`12111`) - +- Removal of ``cols`` keyword in favor of ``subset`` in ``DataFrame.duplicated()`` and ``DataFrame.drop_duplicates()`` (:issue:`6680`) .. _whatsnew_0180.performance: @@ -544,4 +544,4 @@ Bug Fixes - Bug in ``.skew`` and ``.kurt`` due to roundoff error for highly similar values (:issue:`11974`) -- Bug in ``buffer_rd_bytes`` src->buffer could be freed more than once if reading failed, causing a segfault (:issue:`12098`) +- Bug in ``buffer_rd_bytes`` src->buffer could be freed more than once if reading failed, causing a segfault (:issue:`12098`) diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 907da619b1875..47fab0a2da6af 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -3012,7 +3012,6 @@ def dropna(self, axis=0, how='any', thresh=None, subset=None, @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'}) - @deprecate_kwarg(old_arg_name='cols', new_arg_name='subset', stacklevel=3) def drop_duplicates(self, subset=None, keep='first', inplace=False): """ Return DataFrame with duplicate rows removed, optionally only @@ -3030,7 +3029,6 @@ def drop_duplicates(self, subset=None, keep='first', inplace=False): take_last : deprecated inplace : boolean, default False Whether to drop duplicates in place or to return a copy - cols : kwargs only argument of subset [deprecated] Returns ------- @@ -3047,7 +3045,6 @@ def drop_duplicates(self, subset=None, 
keep='first', inplace=False): @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'}) - @deprecate_kwarg(old_arg_name='cols', new_arg_name='subset', stacklevel=3) def duplicated(self, subset=None, keep='first'): """ Return boolean Series denoting duplicate rows, optionally only @@ -3065,7 +3062,6 @@ def duplicated(self, subset=None, keep='first'): last occurrence. - False : Mark all duplicates as ``True``. take_last : deprecated - cols : kwargs only argument of subset [deprecated] Returns ------- diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py index e1ba981e93d2e..dd8013409eedc 100644 --- a/pandas/tests/frame/test_analytics.py +++ b/pandas/tests/frame/test_analytics.py @@ -1670,40 +1670,6 @@ def test_drop_duplicates_for_take_all(self): expected = df.iloc[[0, 1, 2, 6]] assert_frame_equal(result, expected) - def test_drop_duplicates_deprecated_warning(self): - df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar', - 'foo', 'bar', 'bar', 'foo'], - 'B': ['one', 'one', 'two', 'two', - 'two', 'two', 'one', 'two'], - 'C': [1, 1, 2, 2, 2, 2, 1, 2], - 'D': lrange(8)}) - expected = df[:2] - - # Raises warning - with tm.assert_produces_warning(False): - result = df.drop_duplicates(subset='AAA') - assert_frame_equal(result, expected) - - with tm.assert_produces_warning(FutureWarning): - result = df.drop_duplicates(cols='AAA') - assert_frame_equal(result, expected) - - # Does not allow both subset and cols - self.assertRaises(TypeError, df.drop_duplicates, - kwargs={'cols': 'AAA', 'subset': 'B'}) - - # Does not allow unknown kwargs - self.assertRaises(TypeError, df.drop_duplicates, - kwargs={'subset': 'AAA', 'bad_arg': True}) - - # deprecate take_last - # Raises warning - with tm.assert_produces_warning(FutureWarning): - result = df.drop_duplicates(take_last=False, subset='AAA') - assert_frame_equal(result, expected) - - self.assertRaises(ValueError, df.drop_duplicates, keep='invalid_name') - def 
test_drop_duplicates_tuple(self): df = DataFrame({('AA', 'AB'): ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'bar', 'foo'], @@ -1960,29 +1926,6 @@ def test_drop_duplicates_inplace(self): result = df2 assert_frame_equal(result, expected) - def test_duplicated_deprecated_warning(self): - df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar', - 'foo', 'bar', 'bar', 'foo'], - 'B': ['one', 'one', 'two', 'two', - 'two', 'two', 'one', 'two'], - 'C': [1, 1, 2, 2, 2, 2, 1, 2], - 'D': lrange(8)}) - - # Raises warning - with tm.assert_produces_warning(False): - result = df.duplicated(subset='AAA') - - with tm.assert_produces_warning(FutureWarning): - result = df.duplicated(cols='AAA') # noqa - - # Does not allow both subset and cols - self.assertRaises(TypeError, df.duplicated, - kwargs={'cols': 'AAA', 'subset': 'B'}) - - # Does not allow unknown kwargs - self.assertRaises(TypeError, df.duplicated, - kwargs={'subset': 'AAA', 'bad_arg': True}) - # Rounding def test_round(self): diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx index be1c5af74a95d..b58f597f54e6d 100644 --- a/pandas/tslib.pyx +++ b/pandas/tslib.pyx @@ -233,8 +233,6 @@ class Timestamp(_Timestamp): Offset which Timestamp will have tz : string, pytz.timezone, dateutil.tz.tzfile or None Time zone for time which Timestamp will have. - unit : string - numpy unit used for conversion, if ts_input is int or float """ # Do not add ``dayfirst`` and ``yearfist`` to Timestamp based on the discussion
DEPR: Removal of the `cols` keyword in favor of `subset` in `DataFrame.duplicated()` and `DataFrame.drop_duplicates()`, xref #6680
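The removed handling relied on pandas' `deprecate_kwarg` decorator, which remapped `cols` to `subset` with a `FutureWarning` during the deprecation period. A minimal standalone version of that mechanism (a sketch, not pandas' actual implementation) looks like this:

```python
import warnings
from functools import wraps


def deprecate_kwarg(old_arg_name, new_arg_name):
    """Remap a deprecated keyword to its replacement, warning on use."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if old_arg_name in kwargs:
                if new_arg_name in kwargs:
                    raise TypeError(
                        "Can only specify %r or %r, not both"
                        % (old_arg_name, new_arg_name))
                warnings.warn(
                    "the %r keyword is deprecated, use %r instead"
                    % (old_arg_name, new_arg_name), FutureWarning)
                kwargs[new_arg_name] = kwargs.pop(old_arg_name)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecate_kwarg("cols", "subset")
def drop_duplicates(subset=None):
    """Toy stand-in for DataFrame.drop_duplicates."""
    return subset


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert drop_duplicates(cols="AAA") == "AAA"      # remapped, with warning
    assert issubclass(caught[-1].category, FutureWarning)
```

Removing the decorator, as this PR does, means `cols` now fails with a plain `TypeError` for an unexpected keyword, which is the intended end state of the deprecation cycle.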
https://api.github.com/repos/pandas-dev/pandas/pulls/12165
2016-01-28T13:32:32Z
2016-01-28T21:23:45Z
null
2016-01-29T16:20:32Z
Style display format
diff --git a/doc/source/api.rst b/doc/source/api.rst index c572aa9ae2e03..59f0f0a82a892 100644 --- a/doc/source/api.rst +++ b/doc/source/api.rst @@ -1820,6 +1820,7 @@ Style Application Styler.apply Styler.applymap + Styler.format Styler.set_precision Styler.set_table_styles Styler.set_caption diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index c420b34db7ac8..35b1ee54ff683 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -392,6 +392,7 @@ Other enhancements values it contains (:issue:`11597`) - ``Series`` gained an ``is_unique`` attribute (:issue:`11946`) - ``DataFrame.quantile`` and ``Series.quantile`` now accept ``interpolation`` keyword (:issue:`10174`). +- Added ``DataFrame.style.format`` for more flexible formatting of cell values (:issue:`11692`) - ``DataFrame.select_dtypes`` now allows the ``np.float16`` typecode (:issue:`11990`) - ``pivot_table()`` now accepts most iterables for the ``values`` parameter (:issue:`12017`) - Added Google ``BigQuery`` service account authentication support, which enables authentication on remote servers. (:issue:`11881`). For further details see :ref:`here <io.bigquery_authentication>` diff --git a/pandas/core/style.py b/pandas/core/style.py index a5a42c2bb47a7..15fcec118e7d4 100644 --- a/pandas/core/style.py +++ b/pandas/core/style.py @@ -3,10 +3,11 @@ DataFrames and Series. 
""" from functools import partial +from itertools import product from contextlib import contextmanager from uuid import uuid1 import copy -from collections import defaultdict +from collections import defaultdict, MutableMapping try: from jinja2 import Template @@ -18,7 +19,8 @@ import numpy as np import pandas as pd -from pandas.compat import lzip +from pandas.compat import lzip, range +import pandas.core.common as com from pandas.core.indexing import _maybe_numeric_slice, _non_reducing_slice try: import matplotlib.pyplot as plt @@ -117,11 +119,7 @@ class Styler(object): <tr> {% for c in r %} <{{c.type}} id="T_{{uuid}}{{c.id}}" class="{{c.class}}"> - {% if c.value is number %} - {{c.value|round(precision)}} - {% else %} - {{c.value}} - {% endif %} + {{ c.display_value }} {% endfor %} </tr> {% endfor %} @@ -152,6 +150,15 @@ def __init__(self, data, precision=None, table_styles=None, uuid=None, precision = pd.options.display.precision self.precision = precision self.table_attributes = table_attributes + # display_funcs maps (row, col) -> formatting function + + def default_display_func(x): + if com.is_float(x): + return '{:>.{precision}g}'.format(x, precision=self.precision) + else: + return x + + self._display_funcs = defaultdict(lambda: default_display_func) def _repr_html_(self): """Hooks into Jupyter notebook rich display system.""" @@ -199,10 +206,12 @@ def _translate(self): "class": " ".join([BLANK_CLASS])}] * n_rlvls for c in range(len(clabels[0])): cs = [COL_HEADING_CLASS, "level%s" % r, "col%s" % c] - cs.extend( - cell_context.get("col_headings", {}).get(r, {}).get(c, [])) + cs.extend(cell_context.get( + "col_headings", {}).get(r, {}).get(c, [])) + value = clabels[r][c] row_es.append({"type": "th", - "value": clabels[r][c], + "value": value, + "display_value": value, "class": " ".join(cs)}) head.append(row_es) @@ -231,15 +240,22 @@ def _translate(self): cell_context.get("row_headings", {}).get(r, {}).get(c, [])) row_es = [{"type": "th", "value": 
rlabels[r][c], - "class": " ".join(cs)} for c in range(len(rlabels[r]))] + "class": " ".join(cs), + "display_value": rlabels[r][c]} + for c in range(len(rlabels[r]))] for c, col in enumerate(self.data.columns): cs = [DATA_CLASS, "row%s" % r, "col%s" % c] cs.extend(cell_context.get("data", {}).get(r, {}).get(c, [])) - row_es.append({"type": "td", - "value": self.data.iloc[r][c], - "class": " ".join(cs), - "id": "_".join(cs[1:])}) + formatter = self._display_funcs[(r, c)] + value = self.data.iloc[r, c] + row_es.append({ + "type": "td", + "value": value, + "class": " ".join(cs), + "id": "_".join(cs[1:]), + "display_value": formatter(value) + }) props = [] for x in ctx[r, c]: # have to handle empty styles like [''] @@ -255,6 +271,71 @@ def _translate(self): precision=precision, table_styles=table_styles, caption=caption, table_attributes=self.table_attributes) + def format(self, formatter, subset=None): + """ + Format the text display value of cells. + + .. versionadded:: 0.18.0 + + Parameters + ---------- + formatter: str, callable, or dict + subset: IndexSlice + A argument to DataFrame.loc that restricts which elements + ``formatter`` is applied to. + + Returns + ------- + self : Styler + + Notes + ----- + + ``formatter`` is either an ``a`` or a dict ``{column name: a}`` where + ``a`` is one of + + - str: this will be wrapped in: ``a.format(x)`` + - callable: called with the value of an individual cell + + The default display value for numeric values is the "general" (``g``) + format with ``pd.options.display.precision`` precision. 
+ + Examples + -------- + + >>> df = pd.DataFrame(np.random.randn(4, 2), columns=['a', 'b']) + >>> df.style.format("{:.2%}") + >>> df['c'] = ['a', 'b', 'c', 'd'] + >>> df.style.format({'C': str.upper}) + """ + if subset is None: + row_locs = range(len(self.data)) + col_locs = range(len(self.data.columns)) + else: + subset = _non_reducing_slice(subset) + if len(subset) == 1: + subset = subset, self.data.columns + + sub_df = self.data.loc[subset] + row_locs = self.data.index.get_indexer_for(sub_df.index) + col_locs = self.data.columns.get_indexer_for(sub_df.columns) + + if isinstance(formatter, MutableMapping): + for col, col_formatter in formatter.items(): + # formatter must be callable, so '{}' are converted to lambdas + col_formatter = _maybe_wrap_formatter(col_formatter) + col_num = self.data.columns.get_indexer_for([col])[0] + + for row_num in row_locs: + self._display_funcs[(row_num, col_num)] = col_formatter + else: + # single scalar to format all cells with + locs = product(*(row_locs, col_locs)) + for i, j in locs: + formatter = _maybe_wrap_formatter(formatter) + self._display_funcs[(i, j)] = formatter + return self + def render(self): """ Render the built up styles to HTML @@ -376,7 +457,7 @@ def apply(self, func, axis=0, subset=None, **kwargs): Returns ------- - self + self : Styler Notes ----- @@ -415,7 +496,7 @@ def applymap(self, func, subset=None, **kwargs): Returns ------- - self + self : Styler """ self._todo.append((lambda instance: getattr(instance, '_applymap'), @@ -434,7 +515,7 @@ def set_precision(self, precision): Returns ------- - self + self : Styler """ self.precision = precision return self @@ -453,7 +534,7 @@ def set_table_attributes(self, attributes): Returns ------- - self + self : Styler """ self.table_attributes = attributes return self @@ -489,7 +570,7 @@ def use(self, styles): Returns ------- - self + self : Styler See Also -------- @@ -510,7 +591,7 @@ def set_uuid(self, uuid): Returns ------- - self + self : Styler """ self.uuid = 
uuid return self @@ -527,7 +608,7 @@ def set_caption(self, caption): Returns ------- - self + self : Styler """ self.caption = caption return self @@ -550,7 +631,7 @@ def set_table_styles(self, table_styles): Returns ------- - self + self : Styler Examples -------- @@ -583,7 +664,7 @@ def highlight_null(self, null_color='red'): Returns ------- - self + self : Styler """ self.applymap(self._highlight_null, null_color=null_color) return self @@ -610,7 +691,7 @@ def background_gradient(self, cmap='PuBu', low=0, high=0, axis=0, Returns ------- - self + self : Styler Notes ----- @@ -695,7 +776,7 @@ def bar(self, subset=None, axis=0, color='#d65f5f', width=100): Returns ------- - self + self : Styler """ subset = _maybe_numeric_slice(self.data, subset) subset = _non_reducing_slice(subset) @@ -720,7 +801,7 @@ def highlight_max(self, subset=None, color='yellow', axis=0): Returns ------- - self + self : Styler """ return self._highlight_handler(subset=subset, color=color, axis=axis, max_=True) @@ -742,7 +823,7 @@ def highlight_min(self, subset=None, color='yellow', axis=0): Returns ------- - self + self : Styler """ return self._highlight_handler(subset=subset, color=color, axis=axis, max_=False) @@ -771,3 +852,14 @@ def _highlight_extrema(data, color='yellow', max_=True): extrema = data == data.min().min() return pd.DataFrame(np.where(extrema, attr, ''), index=data.index, columns=data.columns) + + +def _maybe_wrap_formatter(formatter): + if com.is_string_like(formatter): + return lambda x: formatter.format(x) + elif callable(formatter): + return formatter + else: + msg = "Expected a template string or callable, got {} instead".format( + formatter) + raise TypeError(msg) diff --git a/pandas/tests/test_style.py b/pandas/tests/test_style.py index 9a427cb26520c..ef5a966d65545 100644 --- a/pandas/tests/test_style.py +++ b/pandas/tests/test_style.py @@ -136,9 +136,9 @@ def test_index_name(self): expected = [[{'class': 'blank', 'type': 'th', 'value': ''}, {'class': 'col_heading 
level0 col0', 'type': 'th', - 'value': 'B'}, + 'value': 'B', 'display_value': 'B'}, {'class': 'col_heading level0 col1', 'type': 'th', - 'value': 'C'}], + 'value': 'C', 'display_value': 'C'}], [{'class': 'col_heading level2 col0', 'type': 'th', 'value': 'A'}, {'class': 'blank', 'type': 'th', 'value': ''}, @@ -154,7 +154,7 @@ def test_multiindex_name(self): expected = [[{'class': 'blank', 'type': 'th', 'value': ''}, {'class': 'blank', 'type': 'th', 'value': ''}, {'class': 'col_heading level0 col0', 'type': 'th', - 'value': 'C'}], + 'value': 'C', 'display_value': 'C'}], [{'class': 'col_heading level2 col0', 'type': 'th', 'value': 'A'}, {'class': 'col_heading level2 col1', 'type': 'th', @@ -163,6 +163,12 @@ def test_multiindex_name(self): self.assertEqual(result['head'], expected) + def test_numeric_columns(self): + # https://github.com/pydata/pandas/issues/12125 + # smoke test for _translate + df = pd.DataFrame({0: [1, 2, 3]}) + df.style._translate() + def test_apply_axis(self): df = pd.DataFrame({'A': [0, 0], 'B': [1, 1]}) f = lambda x: ['val: %s' % x.max() for v in x] @@ -263,53 +269,51 @@ def test_bar(self): def test_bar_0points(self): df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) result = df.style.bar()._compute().ctx - expected = { - (0, 0): ['width: 10em', ' height: 80%'], - (0, 1): ['width: 10em', ' height: 80%'], - (0, 2): ['width: 10em', ' height: 80%'], - (1, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, ' - 'transparent 0%)'], - (1, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, ' - 'transparent 0%)'], - (1, 2): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, ' - 'transparent 0%)'], - (2, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, ' - 'transparent 0%)'], - (2, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, ' - 'transparent 0%)'], - (2, 2): 
['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, ' - 'transparent 0%)']} + expected = {(0, 0): ['width: 10em', ' height: 80%'], + (0, 1): ['width: 10em', ' height: 80%'], + (0, 2): ['width: 10em', ' height: 80%'], + (1, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%,' + ' transparent 0%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%,' + ' transparent 0%)'], + (1, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%,' + ' transparent 0%)'], + (2, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%' + ', transparent 0%)'], + (2, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%' + ', transparent 0%)'], + (2, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%' + ', transparent 0%)']} self.assertEqual(result, expected) result = df.style.bar(axis=1)._compute().ctx - expected = { - (0, 0): ['width: 10em', ' height: 80%'], - (0, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, ' - 'transparent 0%)'], - (0, 2): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, ' - 'transparent 0%)'], - (1, 0): ['width: 10em', ' height: 80%'], - (1, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, ' - 'transparent 0%)'], - (1, 2): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, ' - 'transparent 0%)'], - (2, 0): ['width: 10em', ' height: 80%'], - (2, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, ' - 'transparent 0%)'], - (2, 2): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, ' - 'transparent 0%)']} + expected = {(0, 0): ['width: 10em', ' height: 80%'], + (0, 1): ['width: 10em', ' height: 80%', + 
'background: linear-gradient(90deg,#d65f5f 50.0%,' + ' transparent 0%)'], + (0, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%' + ', transparent 0%)'], + (1, 0): ['width: 10em', ' height: 80%'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%' + ', transparent 0%)'], + (1, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%' + ', transparent 0%)'], + (2, 0): ['width: 10em', ' height: 80%'], + (2, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%' + ', transparent 0%)'], + (2, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%' + ', transparent 0%)']} self.assertEqual(result, expected) def test_highlight_null(self, null_color='red'): @@ -444,6 +448,73 @@ def test_export(self): self.assertEqual(style1._todo, style2._todo) style2.render() + def test_display_format(self): + df = pd.DataFrame(np.random.random(size=(2, 2))) + ctx = df.style.format("{:0.1f}")._translate() + + self.assertTrue(all(['display_value' in c for c in row] + for row in ctx['body'])) + self.assertTrue(all([len(c['display_value']) <= 3 for c in row[1:]] + for row in ctx['body'])) + self.assertTrue( + len(ctx['body'][0][1]['display_value'].lstrip('-')) <= 3) + + def test_display_format_raises(self): + df = pd.DataFrame(np.random.randn(2, 2)) + with tm.assertRaises(TypeError): + df.style.format(5) + with tm.assertRaises(TypeError): + df.style.format(True) + + def test_display_subset(self): + df = pd.DataFrame([[.1234, .1234], [1.1234, 1.1234]], + columns=['a', 'b']) + ctx = df.style.format({"a": "{:0.1f}", "b": "{0:.2%}"}, + subset=pd.IndexSlice[0, :])._translate() + expected = '0.1' + self.assertEqual(ctx['body'][0][1]['display_value'], expected) + self.assertEqual(ctx['body'][1][1]['display_value'], '1.1234') + self.assertEqual(ctx['body'][0][2]['display_value'], '12.34%') + + raw_11 = '1.1234' + ctx = 
df.style.format("{:0.1f}", + subset=pd.IndexSlice[0, :])._translate() + self.assertEqual(ctx['body'][0][1]['display_value'], expected) + self.assertEqual(ctx['body'][1][1]['display_value'], raw_11) + + ctx = df.style.format("{:0.1f}", + subset=pd.IndexSlice[0, :])._translate() + self.assertEqual(ctx['body'][0][1]['display_value'], expected) + self.assertEqual(ctx['body'][1][1]['display_value'], raw_11) + + ctx = df.style.format("{:0.1f}", + subset=pd.IndexSlice['a'])._translate() + self.assertEqual(ctx['body'][0][1]['display_value'], expected) + self.assertEqual(ctx['body'][0][2]['display_value'], '0.1234') + + ctx = df.style.format("{:0.1f}", + subset=pd.IndexSlice[0, 'a'])._translate() + self.assertEqual(ctx['body'][0][1]['display_value'], expected) + self.assertEqual(ctx['body'][1][1]['display_value'], raw_11) + + ctx = df.style.format("{:0.1f}", + subset=pd.IndexSlice[[0, 1], ['a']])._translate() + self.assertEqual(ctx['body'][0][1]['display_value'], expected) + self.assertEqual(ctx['body'][1][1]['display_value'], '1.1') + self.assertEqual(ctx['body'][0][2]['display_value'], '0.1234') + self.assertEqual(ctx['body'][1][2]['display_value'], '1.1234') + + def test_display_dict(self): + df = pd.DataFrame([[.1234, .1234], [1.1234, 1.1234]], + columns=['a', 'b']) + ctx = df.style.format({"a": "{:0.1f}", "b": "{0:.2%}"})._translate() + self.assertEqual(ctx['body'][0][1]['display_value'], '0.1') + self.assertEqual(ctx['body'][0][2]['display_value'], '12.34%') + df['c'] = ['aaa', 'bbb'] + ctx = df.style.format({"a": "{:0.1f}", "c": str.upper})._translate() + self.assertEqual(ctx['body'][0][1]['display_value'], '0.1') + self.assertEqual(ctx['body'][0][3]['display_value'], 'AAA') + @tm.mplskip class TestStylerMatplotlibDep(TestCase):
Closes https://github.com/pydata/pandas/issues/11692 Closes https://github.com/pydata/pandas/issues/12134 Closes https://github.com/pydata/pandas/issues/12125 This adds a `.format` method to `Styler` for formatting the display value (the actual text) of each scalar value. In the process of cleaning up the template, I close #12134 (spurious 0) and #12125 (KeyError from using `iloc` improperly)
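The dispatch at the heart of `Styler.format` is compact: a template string is wrapped into a callable, a callable passes through, anything else raises. This mirrors the `_maybe_wrap_formatter` helper in the diff, sketched standalone here (underscore dropped so it reads as a plain function):

```python
def maybe_wrap_formatter(formatter):
    """Normalize a formatter to a callable: wrap format strings,
    pass callables through, reject anything else."""
    if isinstance(formatter, str):
        return lambda x: formatter.format(x)
    elif callable(formatter):
        return formatter
    raise TypeError("Expected a template string or callable, "
                    "got %r instead" % (formatter,))


print(maybe_wrap_formatter("{:.2%}")(0.1234))   # '12.34%'
print(maybe_wrap_formatter(str.upper)("aaa"))   # 'AAA'
```

In the PR itself the resulting callables are stored per `(row, col)` position in a `defaultdict`, so unformatted cells fall back to the default precision-based display function.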
https://api.github.com/repos/pandas-dev/pandas/pulls/12162
2016-01-28T03:29:49Z
2016-02-12T20:42:51Z
null
2016-11-03T12:38:51Z
CI: fixup windows builds
diff --git a/conda.recipe/meta.yaml b/conda.recipe/meta.yaml index 279327b93c805..8264f5e9a952b 100644 --- a/conda.recipe/meta.yaml +++ b/conda.recipe/meta.yaml @@ -6,14 +6,13 @@ build: number: {{ environ.get('GIT_DESCRIBE_NUMBER', 0) }} source: - path: ../../ + path: ../ requirements: build: - python - cython - numpy x.x - - libpython # [py2k and win] - setuptools - pytz - python-dateutil diff --git a/pandas/__init__.py b/pandas/__init__.py index 4ba6dbe6a8063..b8c1f082b6cb3 100644 --- a/pandas/__init__.py +++ b/pandas/__init__.py @@ -4,13 +4,8 @@ __docformat__ = 'restructuredtext' -# use the closest tagged version if possible -from ._version import get_versions -v = get_versions() -__version__ = v.get('closest-tag',v['version']) -del get_versions, v - # numpy compat +import numpy as np from pandas.compat.numpy_compat import * try: @@ -23,8 +18,6 @@ "extensions first.".format(module)) from datetime import datetime -import numpy as np - from pandas.info import __doc__ # let init-time option registration happen @@ -50,3 +43,10 @@ from pandas.util.nosetester import NoseTester test = NoseTester().test del NoseTester + +# use the closest tagged version if possible +from ._version import get_versions +v = get_versions() +__version__ = v.get('closest-tag',v['version']) +del get_versions, v + diff --git a/pandas/compat/numpy_compat.py b/pandas/compat/numpy_compat.py index 726a20370f512..f7f5da40d01c5 100644 --- a/pandas/compat/numpy_compat.py +++ b/pandas/compat/numpy_compat.py @@ -1,8 +1,8 @@ """ support numpy compatiblitiy across versions """ +import numpy as np from distutils.version import LooseVersion from pandas.compat import string_types, string_and_binary_types -import numpy as np # TODO: HACK for NumPy 1.5.1 to suppress warnings # is this necessary? 
@@ -19,11 +19,10 @@ _np_version_under1p11 = LooseVersion(_np_version) < '1.11' if LooseVersion(_np_version) < '1.7.0': - from pandas import __version__ - raise ImportError('pandas {0} is incompatible with numpy < 1.7.0\n' - 'your numpy version is {1}.\n' + raise ImportError('this version of pandas is incompatible with numpy < 1.7.0\n' + 'your numpy version is {0}.\n' 'Please upgrade numpy to >= 1.7.0 to use ' - 'this pandas version'.format(__version__, _np_version)) + 'this pandas version'.format(_np_version)) def tz_replacer(s):
closes #12139 closes #12159
https://api.github.com/repos/pandas-dev/pandas/pulls/12159
2016-01-27T20:44:14Z
2016-01-27T22:39:25Z
null
2016-01-27T22:39:25Z
BUG in MultiIndex.drop for not-lexsorted multi-indexes, #12078
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 3a188ea20f8a3..7d312165fab74 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -523,7 +523,7 @@ Bug Fixes - Bug in ``read_sql`` with ``pymysql`` connections failing to return chunked data (:issue:`11522`) - Bug in ``.to_csv`` ignoring formatting parameters ``decimal``, ``na_rep``, ``float_format`` for float indexes (:issue:`11553`) - Bug in ``Int64Index`` and ``Float64Index`` preventing the use of the modulo operator (:issue:`9244`) - +- Bug in ``MultiIndex.drop`` for not lexsorted multi-indexes (:issue:`12078`) - Bug in ``DataFrame`` when masking an empty ``DataFrame`` (:issue:`11859`) @@ -544,4 +544,4 @@ Bug Fixes - Bug in ``.skew`` and ``.kurt`` due to roundoff error for highly similar values (:issue:`11974`) -- Bug in ``buffer_rd_bytes`` src->buffer could be freed more than once if reading failed, causing a segfault (:issue:`12098`) +- Bug in ``buffer_rd_bytes`` src->buffer could be freed more than once if reading failed, causing a segfault (:issue:`12098`) diff --git a/pandas/indexes/multi.py b/pandas/indexes/multi.py index 2d0ad1925daa0..1b7f057de9677 100644 --- a/pandas/indexes/multi.py +++ b/pandas/indexes/multi.py @@ -1083,10 +1083,24 @@ def drop(self, labels, level=None, errors='raise'): for label in labels: try: loc = self.get_loc(label) + # get_loc returns either an integer, a slice, or a boolean + # mask if isinstance(loc, int): inds.append(loc) - else: + elif isinstance(loc, slice): inds.extend(lrange(loc.start, loc.stop)) + elif is_bool_indexer(loc): + if self.lexsort_depth == 0: + warnings.warn('dropping on a non-lexsorted multi-index' + 'without a level parameter may impact ' + 'performance.', + PerformanceWarning, + stacklevel=2) + loc = loc.nonzero()[0] + inds.extend(loc) + else: + msg = 'unsupported indexer of type {}'.format(type(loc)) + raise AssertionError(msg) except KeyError: if errors != 'ignore': raise diff --git 
a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py index 6bc644d84b0d0..6d49f5dcb342e 100644 --- a/pandas/tests/indexes/test_multi.py +++ b/pandas/tests/indexes/test_multi.py @@ -8,6 +8,7 @@ from pandas import (date_range, MultiIndex, Index, CategoricalIndex, compat) +from pandas.io.common import PerformanceWarning from pandas.indexes.base import InvalidIndexError from pandas.compat import range, lrange, u, PY3, long, lzip @@ -1419,6 +1420,28 @@ def test_droplevel_multiple(self): expected = index[:2].droplevel(2).droplevel(0) self.assertTrue(dropped.equals(expected)) + def test_drop_not_lexsorted(self): + # GH 12078 + + # define the lexsorted version of the multi-index + tuples = [('a', ''), ('b1', 'c1'), ('b2', 'c2')] + lexsorted_mi = MultiIndex.from_tuples(tuples, names=['b', 'c']) + self.assertTrue(lexsorted_mi.is_lexsorted()) + + # and the not-lexsorted version + df = pd.DataFrame(columns=['a', 'b', 'c', 'd'], + data=[[1, 'b1', 'c1', 3], [1, 'b2', 'c2', 4]]) + df = df.pivot_table(index='a', columns=['b', 'c'], values='d') + df = df.reset_index() + not_lexsorted_mi = df.columns + self.assertFalse(not_lexsorted_mi.is_lexsorted()) + + # compare the results + self.assert_index_equal(lexsorted_mi, not_lexsorted_mi) + with self.assert_produces_warning(PerformanceWarning): + self.assert_index_equal(lexsorted_mi.drop('a'), + not_lexsorted_mi.drop('a')) + def test_insert(self): # key contained in all levels new_index = self.index.insert(0, ('bar', 'two'))
Closes #12078
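The fix above makes `MultiIndex.drop` handle the boolean-mask return of `get_loc` on non-lexsorted indexes. A minimal sketch of the behavior the PR's test exercises, using the same tuples as the test case (on a lexsorted index, where `get_loc` returns a slice and dropping already worked):

```python
import pandas as pd

# Lexsorted MultiIndex from the PR's test case
mi = pd.MultiIndex.from_tuples([('a', ''), ('b1', 'c1'), ('b2', 'c2')],
                               names=['b', 'c'])

# Dropping a level-0 label removes every tuple under it. Before the fix,
# the same call on a NOT-lexsorted equivalent index hit the boolean-mask
# branch of get_loc and raised instead of dropping (with the patch it
# drops, at the cost of a PerformanceWarning).
result = mi.drop('a')
assert list(result) == [('b1', 'c1'), ('b2', 'c2')]
```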
https://api.github.com/repos/pandas-dev/pandas/pulls/12158
2016-01-27T16:54:27Z
2016-01-28T21:28:09Z
null
2016-01-28T21:46:26Z
BUG: getitem and a series with a non-ndarray values
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index ccdc48bc1dbbb..56e5b75bb0a27 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -483,7 +483,7 @@ Bug Fixes - Compat for numpy 1.11 w.r.t. ``NaT`` comparison changes (:issue:`12049`) - Bug in ``read_csv`` when reading from a ``StringIO`` in threads (:issue:`11790`) - Bug in not treating ``NaT`` as a missing value in datetimelikes when factorizing & with ``Categoricals`` (:issue:`12077`) - +- Bug in getitem when the values of a ``Series`` were tz-aware (:issue:`12089`) diff --git a/pandas/index.pyx b/pandas/index.pyx index 1678e3b280ee5..a7e613ee867c7 100644 --- a/pandas/index.pyx +++ b/pandas/index.pyx @@ -100,7 +100,7 @@ cdef class IndexEngine: hash(val) return val in self.mapping - cpdef get_value(self, ndarray arr, object key): + cpdef get_value(self, ndarray arr, object key, object tz=None): ''' arr : 1-dimensional ndarray ''' @@ -113,7 +113,7 @@ cdef class IndexEngine: return arr[loc] else: if arr.descr.type_num == NPY_DATETIME: - return Timestamp(util.get_value_at(arr, loc)) + return Timestamp(util.get_value_at(arr, loc), tz=tz) elif arr.descr.type_num == NPY_TIMEDELTA: return Timedelta(util.get_value_at(arr, loc)) return util.get_value_at(arr, loc) diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py index 0147000e4380c..9064b77ef7e3d 100644 --- a/pandas/indexes/base.py +++ b/pandas/indexes/base.py @@ -1881,7 +1881,12 @@ def get_value(self, series, key): # use this, e.g. 
DatetimeIndex s = getattr(series, '_values', None) if isinstance(s, Index) and lib.isscalar(key): - return s[key] + try: + return s[key] + except (IndexError, ValueError): + + # invalid type as an indexer + pass s = _values_from_object(series) k = _values_from_object(key) @@ -1891,7 +1896,8 @@ def get_value(self, series, key): raise KeyError try: - return self._engine.get_value(s, k) + return self._engine.get_value(s, k, + tz=getattr(series.dtype, 'tz', None)) except KeyError as e1: if len(self) > 0 and self.inferred_type in ['integer', 'boolean']: raise diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py index 537c2d07443e4..c5ebbca67dc0e 100644 --- a/pandas/tests/series/test_indexing.py +++ b/pandas/tests/series/test_indexing.py @@ -613,6 +613,18 @@ def test_basic_getitem_with_labels(self): expected = s.reindex(arr_inds) assert_series_equal(result, expected) + # GH12089 + # with tz for values + s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"), + index=['a', 'b', 'c']) + expected = Timestamp('2011-01-01', tz='US/Eastern') + result = s.loc['a'] + self.assertEqual(result, expected) + result = s.iloc[0] + self.assertEqual(result, expected) + result = s['a'] + self.assertEqual(result, expected) + def test_basic_setitem_with_labels(self): indices = self.ts.index[[5, 10, 15]] @@ -650,6 +662,26 @@ def test_basic_setitem_with_labels(self): self.assertRaises(Exception, s.__setitem__, inds_notfound, 0) self.assertRaises(Exception, s.__setitem__, arr_inds_notfound, 0) + # GH12089 + # with tz for values + s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"), + index=['a', 'b', 'c']) + s2 = s.copy() + expected = Timestamp('2011-01-03', tz='US/Eastern') + s2.loc['a'] = expected + result = s2.loc['a'] + self.assertEqual(result, expected) + + s2 = s.copy() + s2.iloc[0] = expected + result = s2.iloc[0] + self.assertEqual(result, expected) + + s2 = s.copy() + s2['a'] = expected + result = s2['a'] + 
self.assertEqual(result, expected) + def test_ix_getitem(self): inds = self.series.index[[3, 4, 7]] assert_series_equal(self.series.ix[inds], self.series.reindex(inds)) diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py index bb4f878157595..4d5f64bc95249 100644 --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -1361,7 +1361,8 @@ def get_value_maybe_box(self, series, key): key = Timestamp(key, tz=self.tz) elif not isinstance(key, Timestamp): key = Timestamp(key) - values = self._engine.get_value(_values_from_object(series), key) + values = self._engine.get_value(_values_from_object(series), + key, tz=self.tz) return _maybe_box(self, values, series, key) def get_loc(self, key, method=None, tolerance=None):
closes #12089
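The change threads the index's `tz` into `IndexEngine.get_value`, so scalar lookups on a Series of tz-aware values return a localized `Timestamp`. A short sketch mirroring the PR's test:

```python
import pandas as pd

# Scalar getitem on a tz-aware Series keeps the timezone after the fix,
# instead of returning a naive Timestamp.
s = pd.Series(pd.date_range('2011-01-01', periods=3, tz='US/Eastern'),
              index=['a', 'b', 'c'])
expected = pd.Timestamp('2011-01-01', tz='US/Eastern')

assert s['a'] == expected
assert s.loc['a'] == expected
assert s.iloc[0] == expected
```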
https://api.github.com/repos/pandas-dev/pandas/pulls/12151
2016-01-27T00:41:15Z
2016-01-27T12:59:13Z
null
2016-01-27T12:59:13Z
CI: latest ipython version for doc build
diff --git a/ci/requirements-2.7_DOC_BUILD.run b/ci/requirements-2.7_DOC_BUILD.run index 8e96f87d4fcd2..33033defc8faa 100644 --- a/ci/requirements-2.7_DOC_BUILD.run +++ b/ci/requirements-2.7_DOC_BUILD.run @@ -1,4 +1,4 @@ -ipython=3.2.1 +ipython nbconvert matplotlib scipy
See https://github.com/pydata/pandas/pull/12002#issuecomment-174550631. Doc builds are failing; this tests whether unpinning `ipython` (using the latest version instead of 3.2.1) solves it.
https://api.github.com/repos/pandas-dev/pandas/pulls/12138
2016-01-25T20:12:21Z
2016-01-26T16:26:34Z
null
2016-01-27T23:38:41Z
BUG: style.set_precision(0) displays spurious .0
diff --git a/pandas/core/style.py b/pandas/core/style.py index a5a42c2bb47a7..bced50926e275 100644 --- a/pandas/core/style.py +++ b/pandas/core/style.py @@ -118,7 +118,11 @@ class Styler(object): {% for c in r %} <{{c.type}} id="T_{{uuid}}{{c.id}}" class="{{c.class}}"> {% if c.value is number %} - {{c.value|round(precision)}} + {% if precision %} + {{c.value|round(precision)}} + {% else %} + {{'%.0f'|format(c.value)}} + {% endif %} {% else %} {{c.value}} {% endif %} diff --git a/pandas/tests/test_style.py b/pandas/tests/test_style.py index b9ca3f331711d..188696397e24b 100644 --- a/pandas/tests/test_style.py +++ b/pandas/tests/test_style.py @@ -379,6 +379,12 @@ def test_precision(self): self.assertTrue(s is s2) self.assertEqual(s.precision, 4) + def test_precision_zero(self): + df = pd.DataFrame({'A': 100, 'B': [0, 1, 2, 3, np.nan]}) + df['C'] = df['A'] / df['B'] + result = df.style.set_precision(0).render() + self.assertEqual(result.count('.'), 0) + def test_apply_none(self): def f(x): return pd.DataFrame(np.where(x == x.max(), 'color: red', ''),
Fixes #12134
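The template special-cases precision 0 because rounding to zero decimal places still yields a float, which renders with a trailing `.0`; `'%.0f'`-style formatting produces the integer-looking text instead. The same distinction in plain Python (Jinja's `round` filter behaves like `round()` here):

```python
# Why the template needs a precision-0 branch:
v = 100.0
assert str(round(v, 0)) == '100.0'   # what the old template rendered
assert '%.0f' % v == '100'           # what the fixed template renders
```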
https://api.github.com/repos/pandas-dev/pandas/pulls/12137
2016-01-25T19:43:27Z
2016-02-13T10:24:44Z
null
2016-02-13T10:33:27Z
BUG: set src->buffer = NULL after garbage collecting it in buffer_rd_…
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 81696982d0fde..abca5d7dc033e 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -201,10 +201,6 @@ In addition, ``.round()``, ``.floor()`` and ``.ceil()`` will be available thru t s s.dt.round('D') -.. _whatsnew_0180.api: - -- ``pandas.merge()`` and ``DataFrame.merge()`` will show a specific error message when trying to merge with an object that is not of type ``DataFrame`` or a subclass (:issue:`12081`) - .. _whatsnew_0180.api_breaking: Backwards incompatible API changes @@ -319,29 +315,6 @@ other anchored offsets like ``MonthBegin`` and ``YearBegin``. d = pd.Timestamp('2014-02-15') d + pd.offsets.QuarterBegin(n=0, startingMonth=2) - -Other API Changes -^^^^^^^^^^^^^^^^^ - -- ``DataFrame.between_time`` and ``Series.between_time`` now only parse a fixed set of time strings. Parsing of date strings is no longer supported and raises a ``ValueError``. (:issue:`11818`) - - .. ipython:: python - - s = pd.Series(range(10), pd.date_range('2015-01-01', freq='H', periods=10)) - s.between_time("7:00am", "9:00am") - - This will now raise. - - .. code-block:: python - - In [2]: s.between_time('20150101 07:00:00','20150101 09:00:00') - ValueError: Cannot convert arg ['20150101 07:00:00'] to a time. - -- ``.memory_usage`` now includes values in the index, as does memory_usage in ``.info`` (:issue:`11597`) - -- ``DataFrame.to_latex()`` now supports non-ascii encodings (eg utf-8) in Python 2 with the parameter ``encoding`` (:issue:`7061`) - - Changes to eval ^^^^^^^^^^^^^^^ @@ -397,6 +370,32 @@ assignments are valid for multi-line expressions. g = f / 2.0""", inplace=True) df + +.. _whatsnew_0180.api: + +Other API Changes +^^^^^^^^^^^^^^^^^ + +- ``DataFrame.between_time`` and ``Series.between_time`` now only parse a fixed set of time strings. Parsing of date strings is no longer supported and raises a ``ValueError``. (:issue:`11818`) + + .. 
ipython:: python + + s = pd.Series(range(10), pd.date_range('2015-01-01', freq='H', periods=10)) + s.between_time("7:00am", "9:00am") + + This will now raise. + + .. code-block:: python + + In [2]: s.between_time('20150101 07:00:00','20150101 09:00:00') + ValueError: Cannot convert arg ['20150101 07:00:00'] to a time. + +- ``.memory_usage`` now includes values in the index, as does memory_usage in ``.info`` (:issue:`11597`) + +- ``DataFrame.to_latex()`` now supports non-ascii encodings (eg utf-8) in Python 2 with the parameter ``encoding`` (:issue:`7061`) + +- ``pandas.merge()`` and ``DataFrame.merge()`` will show a specific error message when trying to merge with an object that is not of type ``DataFrame`` or a subclass (:issue:`12081`) + .. _whatsnew_0180.deprecations: Deprecations @@ -502,7 +501,7 @@ Bug Fixes - Bug in ``pd.read_clipboard`` and ``pd.to_clipboard`` functions not supporting Unicode; upgrade included ``pyperclip`` to v1.5.15 (:issue:`9263`) - Bug in ``DataFrame.query`` containing an assignment (:issue:`8664`) -- Bug in ``from_msgpack`` where ``__contains__()`` fails for columns of the unpacked ``DataFrame``, if the ``DataFrame`` has object columns. (:issue: `11880`) +- Bug in ``from_msgpack`` where ``__contains__()`` fails for columns of the unpacked ``DataFrame``, if the ``DataFrame`` has object columns. 
(:issue:`11880`) - Bug in timezone info lost when broadcasting scalar datetime to ``DataFrame`` (:issue:`11682`) @@ -521,7 +520,7 @@ Bug Fixes - Bug in ``Index`` prevents copying name of passed ``Index``, when a new name is not provided (:issue:`11193`) - Bug in ``read_excel`` failing to read any non-empty sheets when empty sheets exist and ``sheetname=None`` (:issue:`11711`) - Bug in ``read_excel`` failing to raise ``NotImplemented`` error when keywords ``parse_dates`` and ``date_parser`` are provided (:issue:`11544`) -- Bug in ``read_sql`` with pymysql connections failing to return chunked data (:issue:`11522`) +- Bug in ``read_sql`` with ``pymysql`` connections failing to return chunked data (:issue:`11522`) - Bug in ``.to_csv`` ignoring formatting parameters ``decimal``, ``na_rep``, ``float_format`` for float indexes (:issue:`11553`) - Bug in ``Int64Index`` and ``Float64Index`` preventing the use of the modulo operator (:issue:`9244`) @@ -529,8 +528,7 @@ Bug Fixes - Bug in ``DataFrame`` when masking an empty ``DataFrame`` (:issue:`11859`) -- Bug in ``.plot`` potentially modifying the ``colors`` input when the number -of columns didn't match the number of series provided (:issue:`12039`). +- Bug in ``.plot`` potentially modifying the ``colors`` input when the number of columns didn't match the number of series provided (:issue:`12039`). - Bug in ``.groupby`` where a ``KeyError`` was not raised for a wrong column if there was only one row in the dataframe (:issue:`11741`) @@ -545,3 +543,5 @@ of columns didn't match the number of series provided (:issue:`12039`). 
- Big in ``.style`` indexes and multi-indexes not appearing (:issue:`11655`) - Bug in ``.skew`` and ``.kurt`` due to roundoff error for highly similar values (:issue:`11974`) + +- Bug in ``buffer_rd_bytes`` src->buffer could be freed more than once if reading failed, causing a segfault (:issue:`12098`) diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py index 11ccb0eba8f72..0797354c1a92e 100755 --- a/pandas/io/tests/test_parsers.py +++ b/pandas/io/tests/test_parsers.py @@ -3667,6 +3667,25 @@ def test_buffer_overflow(self): self.assertIn( 'Buffer overflow caught - possible malformed input file.', str(cperr)) + def test_buffer_rd_bytes(self): + # GH 12098 + # src->buffer can be freed twice leading to a segfault if a corrupt + # gzip file is read with read_csv and the buffer is filled more than + # once before gzip throws an exception + + data = '\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x03\xED\xC3\x41\x09' \ + '\x00\x00\x08\x00\xB1\xB7\xB6\xBA\xFE\xA5\xCC\x21\x6C\xB0' \ + '\xA6\x4D' + '\x55' * 267 + \ + '\x7D\xF7\x00\x91\xE0\x47\x97\x14\x38\x04\x00' \ + '\x1f\x8b\x08\x00VT\x97V\x00\x03\xed]\xefO' + for i in range(100): + try: + _ = self.read_csv(StringIO(data), + compression='gzip', + delim_whitespace=True) + except Exception as e: + pass + def test_single_char_leading_whitespace(self): # GH 9710 data = """\ @@ -4208,6 +4227,25 @@ def test_buffer_overflow(self): self.assertIn( 'Buffer overflow caught - possible malformed input file.', str(cperr)) + def test_buffer_rd_bytes(self): + # GH 12098 + # src->buffer can be freed twice leading to a segfault if a corrupt + # gzip file is read with read_csv and the buffer is filled more than + # once before gzip throws an exception + + data = '\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x03\xED\xC3\x41\x09' \ + '\x00\x00\x08\x00\xB1\xB7\xB6\xBA\xFE\xA5\xCC\x21\x6C\xB0' \ + '\xA6\x4D' + '\x55' * 267 + \ + '\x7D\xF7\x00\x91\xE0\x47\x97\x14\x38\x04\x00' \ + '\x1f\x8b\x08\x00VT\x97V\x00\x03\xed]\xefO' + for i in 
range(100): + try: + _ = self.read_csv(StringIO(data), + compression='gzip', + delim_whitespace=True) + except Exception as e: + pass + def test_single_char_leading_whitespace(self): # GH 9710 data = """\ diff --git a/pandas/src/parser/io.c b/pandas/src/parser/io.c index 0297d1ba49527..566de72804968 100644 --- a/pandas/src/parser/io.c +++ b/pandas/src/parser/io.c @@ -121,6 +121,7 @@ void* buffer_rd_bytes(void *source, size_t nbytes, /* delete old object */ Py_XDECREF(src->buffer); + src->buffer = NULL; args = Py_BuildValue("(i)", nbytes); func = PyObject_GetAttrString(src->obj, "read");
Issue #12098. Adds `src->buffer = NULL;` after garbage-collecting `src->buffer` in the `buffer_rd_bytes` routine in io.c, so that a failed read cannot free the buffer a second time and segfault.
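The one-line C fix follows the standard null-after-free discipline: once `Py_XDECREF(src->buffer)` releases the buffer, the pointer is nulled so a later cleanup pass is a no-op (`Py_XDECREF(NULL)` does nothing). A hypothetical Python model of that invariant, not the actual C code:

```python
class RdSource:
    """Toy model of the patched buffer_rd_bytes cleanup: release the
    buffer, then immediately null the reference so a second cleanup pass
    (e.g. after gzip raises mid-read) cannot release it twice."""

    def __init__(self):
        self.buffer = bytearray(b'chunk')
        self.free_count = 0

    def release_buffer(self):
        if self.buffer is not None:     # Py_XDECREF is a no-op on NULL
            self.free_count += 1        # stands in for the actual free
        self.buffer = None              # the added line: src->buffer = NULL

src = RdSource()
src.release_buffer()
src.release_buffer()   # second pass: harmless no-op, not a double free
assert src.free_count == 1
```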
https://api.github.com/repos/pandas-dev/pandas/pulls/12135
2016-01-25T17:13:39Z
2016-01-27T15:12:25Z
null
2019-09-23T11:23:41Z
to_csv locale dependent format {:n} support
diff --git a/pandas/core/format.py b/pandas/core/format.py index 10b67d6229234..f64caa8e217f4 100644 --- a/pandas/core/format.py +++ b/pandas/core/format.py @@ -2146,11 +2146,11 @@ def get_formatted_data(self): formatter = None if self.float_format and self.decimal != '.': formatter = lambda v: ( - (self.float_format % v).replace('.', self.decimal, 1)) + self.float_format.format(v).replace('.', self.decimal, 1)) elif self.decimal != '.': # no float format formatter = lambda v: str(v).replace('.', self.decimal, 1) elif self.float_format: # no special decimal separator - formatter = lambda v: self.float_format % v + formatter = lambda v: self.float_format.format(v) if formatter is None and not self.quoting: values = values.astype(str)
closes #11812
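The patch switches `FloatArrayFormatter` from `%`-style interpolation to `str.format`, which is what makes locale-aware specs like `'{:n}'` expressible as a `float_format` (note this also changes what kind of string the parameter expects). The difference in plain Python, pinned to the `C` locale so the grouping output is deterministic:

```python
import locale

# '%'-interpolation has no locale-aware presentation type; str.format does.
locale.setlocale(locale.LC_ALL, 'C')            # 'C' locale: no grouping
assert '{:n}'.format(1234567) == '1234567'

# Non-locale specs behave like their '%' counterparts:
assert '{:.2f}'.format(3.14159) == '%.2f' % 3.14159 == '3.14'
```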
https://api.github.com/repos/pandas-dev/pandas/pulls/12132
2016-01-25T07:32:27Z
2016-01-30T15:19:56Z
null
2016-01-30T15:19:56Z
Use system line terminator in to_clipboard by default
diff --git a/pandas/io/clipboard.py b/pandas/io/clipboard.py index 2109e1c5d6d4c..f893402f21f95 100644 --- a/pandas/io/clipboard.py +++ b/pandas/io/clipboard.py @@ -51,7 +51,7 @@ def read_clipboard(**kwargs): # pragma: no cover return read_table(StringIO(text), **kwargs) -def to_clipboard(obj, excel=None, sep=None, **kwargs): # pragma: no cover +def to_clipboard(obj, excel=None, sep=None, line_terminator=None, **kwargs): # pragma: no cover """ Attempt to write text representation of object to the system clipboard The clipboard can be then pasted into Excel for example. @@ -65,6 +65,7 @@ def to_clipboard(obj, excel=None, sep=None, **kwargs): # pragma: no cover if False, write a string representation of the object to the clipboard sep : optional, defaults to tab + line_terminator : optional, defaults to system line terminator other keywords are passed to to_csv Notes @@ -82,8 +83,11 @@ def to_clipboard(obj, excel=None, sep=None, **kwargs): # pragma: no cover try: if sep is None: sep = '\t' + if line_terminator is None: + import os + line_terminator = os.linesep buf = StringIO() - obj.to_csv(buf, sep=sep, **kwargs) + obj.to_csv(buf, sep=sep, line_terminator=line_terminator, **kwargs) clipboard_set(buf.getvalue()) return except:
closes #11720. Adds an optional `line_terminator` argument to `to_clipboard`, defaulting to the system line terminator (`os.linesep`) so that pasted rows split correctly on Windows.
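The defaulting logic the patch adds is small: if the caller passes no terminator, fall back to `os.linesep` before handing off to `to_csv`. A hypothetical standalone sketch of that logic (`clipboard_text` is illustrative, not a pandas function):

```python
import os

def clipboard_text(rows, sep='\t', line_terminator=None):
    # Sketch of the patch's defaulting: no terminator given -> use the
    # platform's os.linesep, so Excel paste works on Windows ('\r\n').
    if line_terminator is None:
        line_terminator = os.linesep
    return ''.join(sep.join(map(str, r)) + line_terminator for r in rows)

# Forcing the Windows terminator regardless of platform:
text = clipboard_text([[1, 2], [3, 4]], line_terminator='\r\n')
assert text == '1\t2\r\n3\t4\r\n'
```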
https://api.github.com/repos/pandas-dev/pandas/pulls/12131
2016-01-25T06:56:29Z
2016-01-30T15:20:09Z
null
2016-01-30T15:20:09Z
REF: reorganize pandas/tests/test_series.py
diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py index 6add74f778404..c073ee3334e55 100644 --- a/pandas/sparse/tests/test_sparse.py +++ b/pandas/sparse/tests/test_sparse.py @@ -34,7 +34,7 @@ from pandas.sparse.tests.test_array import assert_sp_array_equal import pandas.tests.test_panel as test_panel -import pandas.tests.test_series as test_series +from pandas.tests.series.test_misc_api import SharedWithSparse dec = np.testing.dec @@ -116,7 +116,7 @@ def assert_sp_panel_equal(left, right, exact_indices=True): assert (item in left) -class TestSparseSeries(tm.TestCase, test_series.CheckNameIntegration): +class TestSparseSeries(tm.TestCase, SharedWithSparse): _multiprocess_can_split_ = True def setUp(self): diff --git a/pandas/tests/series/__init__.py b/pandas/tests/series/__init__.py new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/pandas/tests/series/common.py b/pandas/tests/series/common.py new file mode 100644 index 0000000000000..613961e1c670f --- /dev/null +++ b/pandas/tests/series/common.py @@ -0,0 +1,30 @@ +from pandas.util.decorators import cache_readonly +import pandas.util.testing as tm +import pandas as pd + +_ts = tm.makeTimeSeries() + + +class TestData(object): + + @cache_readonly + def ts(self): + ts = _ts.copy() + ts.name = 'ts' + return ts + + @cache_readonly + def series(self): + series = tm.makeStringSeries() + series.name = 'series' + return series + + @cache_readonly + def objSeries(self): + objSeries = tm.makeObjectSeries() + objSeries.name = 'objects' + return objSeries + + @cache_readonly + def empty(self): + return pd.Series([], index=[]) diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py new file mode 100644 index 0000000000000..14abad1fac599 --- /dev/null +++ b/pandas/tests/series/test_alter_axes.py @@ -0,0 +1,148 @@ +# coding=utf-8 +# pylint: disable-msg=E1101,W0612 + +import numpy as np +import pandas as pd + +from pandas import Index, Series 
+from pandas.core.index import MultiIndex, RangeIndex + +from pandas.compat import lrange, range, zip +from pandas.util.testing import assert_series_equal, assert_frame_equal +import pandas.util.testing as tm + +from .common import TestData + + +class TestSeriesAlterAxes(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_setindex(self): + # wrong type + series = self.series.copy() + self.assertRaises(TypeError, setattr, series, 'index', None) + + # wrong length + series = self.series.copy() + self.assertRaises(Exception, setattr, series, 'index', + np.arange(len(series) - 1)) + + # works + series = self.series.copy() + series.index = np.arange(len(series)) + tm.assertIsInstance(series.index, Index) + + def test_rename(self): + renamer = lambda x: x.strftime('%Y%m%d') + renamed = self.ts.rename(renamer) + self.assertEqual(renamed.index[0], renamer(self.ts.index[0])) + + # dict + rename_dict = dict(zip(self.ts.index, renamed.index)) + renamed2 = self.ts.rename(rename_dict) + assert_series_equal(renamed, renamed2) + + # partial dict + s = Series(np.arange(4), index=['a', 'b', 'c', 'd'], dtype='int64') + renamed = s.rename({'b': 'foo', 'd': 'bar'}) + self.assert_numpy_array_equal(renamed.index, ['a', 'foo', 'c', 'bar']) + + # index with name + renamer = Series(np.arange(4), + index=Index(['a', 'b', 'c', 'd'], name='name'), + dtype='int64') + renamed = renamer.rename({}) + self.assertEqual(renamed.index.name, renamer.index.name) + + def test_rename_inplace(self): + renamer = lambda x: x.strftime('%Y%m%d') + expected = renamer(self.ts.index[0]) + + self.ts.rename(renamer, inplace=True) + self.assertEqual(self.ts.index[0], expected) + + def test_set_index_makes_timeseries(self): + idx = tm.makeDateIndex(10) + + s = Series(lrange(10)) + s.index = idx + + with tm.assert_produces_warning(FutureWarning): + self.assertTrue(s.is_time_series) + self.assertTrue(s.index.is_all_dates) + + def test_reset_index(self): + df = tm.makeDataFrame()[:5] + ser = 
df.stack() + ser.index.names = ['hash', 'category'] + + ser.name = 'value' + df = ser.reset_index() + self.assertIn('value', df) + + df = ser.reset_index(name='value2') + self.assertIn('value2', df) + + # check inplace + s = ser.reset_index(drop=True) + s2 = ser + s2.reset_index(drop=True, inplace=True) + assert_series_equal(s, s2) + + # level + index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], + labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], + [0, 1, 0, 1, 0, 1]]) + s = Series(np.random.randn(6), index=index) + rs = s.reset_index(level=1) + self.assertEqual(len(rs.columns), 2) + + rs = s.reset_index(level=[0, 2], drop=True) + self.assertTrue(rs.index.equals(Index(index.get_level_values(1)))) + tm.assertIsInstance(rs, Series) + + def test_reset_index_range(self): + # GH 12071 + s = pd.Series(range(2), name='A', dtype='int64') + series_result = s.reset_index() + tm.assertIsInstance(series_result.index, RangeIndex) + series_expected = pd.DataFrame([[0, 0], [1, 1]], + columns=['index', 'A'], + index=RangeIndex(stop=2)) + assert_frame_equal(series_result, series_expected) + + def test_reorder_levels(self): + index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], + labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], + [0, 1, 0, 1, 0, 1]], + names=['L0', 'L1', 'L2']) + s = Series(np.arange(6), index=index) + + # no change, position + result = s.reorder_levels([0, 1, 2]) + assert_series_equal(s, result) + + # no change, labels + result = s.reorder_levels(['L0', 'L1', 'L2']) + assert_series_equal(s, result) + + # rotate, position + result = s.reorder_levels([1, 2, 0]) + e_idx = MultiIndex(levels=[['one', 'two', 'three'], [0, 1], ['bar']], + labels=[[0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1], + [0, 0, 0, 0, 0, 0]], + names=['L1', 'L2', 'L0']) + expected = Series(np.arange(6), index=e_idx) + assert_series_equal(result, expected) + + result = s.reorder_levels([0, 0, 0]) + e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']], + labels=[[0, 0, 0, 0, 
0, 0], [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0]], + names=['L0', 'L0', 'L0']) + expected = Series(range(6), index=e_idx) + assert_series_equal(result, expected) + + result = s.reorder_levels(['L0', 'L0', 'L0']) + assert_series_equal(result, expected) diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py new file mode 100644 index 0000000000000..385767e14113f --- /dev/null +++ b/pandas/tests/series/test_analytics.py @@ -0,0 +1,2100 @@ +# coding=utf-8 +# pylint: disable-msg=E1101,W0612 + +from inspect import getargspec +from itertools import product +from distutils.version import LooseVersion + +import nose +import random + +from numpy import nan +import numpy as np +import pandas as pd + +from pandas import (Index, Series, DataFrame, isnull, notnull, bdate_range, + date_range, _np_version_under1p9) +from pandas.core.index import MultiIndex +from pandas.tseries.index import Timestamp +from pandas.tseries.tdi import Timedelta +import pandas.core.config as cf +import pandas.lib as lib + +import pandas.core.nanops as nanops + +from pandas.compat import lrange, range +from pandas import compat +from pandas.util.testing import (assert_series_equal, assert_almost_equal, + assert_frame_equal, assert_index_equal) +import pandas.util.testing as tm + +from .common import TestData + + +class TestSeriesAnalytics(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_sum_zero(self): + arr = np.array([]) + self.assertEqual(nanops.nansum(arr), 0) + + arr = np.empty((10, 0)) + self.assertTrue((nanops.nansum(arr, axis=1) == 0).all()) + + # GH #844 + s = Series([], index=[]) + self.assertEqual(s.sum(), 0) + + df = DataFrame(np.empty((10, 0))) + self.assertTrue((df.sum(1) == 0).all()) + + def test_nansum_buglet(self): + s = Series([1.0, np.nan], index=[0, 1]) + result = np.nansum(s) + assert_almost_equal(result, 1) + + def test_overflow(self): + # GH 6915 + # overflowing on the smaller int dtypes + for dtype in ['int32', 'int64']: 
+ v = np.arange(5000000, dtype=dtype) + s = Series(v) + + # no bottleneck + result = s.sum(skipna=False) + self.assertEqual(int(result), v.sum(dtype='int64')) + result = s.min(skipna=False) + self.assertEqual(int(result), 0) + result = s.max(skipna=False) + self.assertEqual(int(result), v[-1]) + + # use bottleneck if available + result = s.sum() + self.assertEqual(int(result), v.sum(dtype='int64')) + result = s.min() + self.assertEqual(int(result), 0) + result = s.max() + self.assertEqual(int(result), v[-1]) + + for dtype in ['float32', 'float64']: + v = np.arange(5000000, dtype=dtype) + s = Series(v) + + # no bottleneck + result = s.sum(skipna=False) + self.assertEqual(result, v.sum(dtype=dtype)) + result = s.min(skipna=False) + self.assertTrue(np.allclose(float(result), 0.0)) + result = s.max(skipna=False) + self.assertTrue(np.allclose(float(result), v[-1])) + + # use bottleneck if available + result = s.sum() + self.assertEqual(result, v.sum(dtype=dtype)) + result = s.min() + self.assertTrue(np.allclose(float(result), 0.0)) + result = s.max() + self.assertTrue(np.allclose(float(result), v[-1])) + + def test_sum(self): + self._check_stat_op('sum', np.sum, check_allna=True) + + def test_sum_inf(self): + import pandas.core.nanops as nanops + + s = Series(np.random.randn(10)) + s2 = s.copy() + + s[5:8] = np.inf + s2[5:8] = np.nan + + self.assertTrue(np.isinf(s.sum())) + + arr = np.random.randn(100, 100).astype('f4') + arr[:, 2] = np.inf + + with cf.option_context("mode.use_inf_as_null", True): + assert_almost_equal(s.sum(), s2.sum()) + + res = nanops.nansum(arr, axis=1) + self.assertTrue(np.isinf(res).all()) + + def test_mean(self): + self._check_stat_op('mean', np.mean) + + def test_median(self): + self._check_stat_op('median', np.median) + + # test with integers, test failure + int_ts = Series(np.ones(10, dtype=int), index=lrange(10)) + self.assertAlmostEqual(np.median(int_ts), int_ts.median()) + + def test_mode(self): + s = Series([12, 12, 11, 10, 19, 11]) + exp 
= Series([11, 12])
+        assert_series_equal(s.mode(), exp)
+
+        assert_series_equal(
+            Series([1, 2, 3]).mode(), Series(
+                [], dtype='int64'))
+
+        lst = [5] * 20 + [1] * 10 + [6] * 25
+        np.random.shuffle(lst)
+        s = Series(lst)
+        assert_series_equal(s.mode(), Series([6]))
+
+        s = Series([5] * 10)
+        assert_series_equal(s.mode(), Series([5]))
+
+        s = Series(lst)
+        s[0] = np.nan
+        assert_series_equal(s.mode(), Series([6.]))
+
+        s = Series(list('adfasbasfwewefwefweeeeasdfasnbam'))
+        assert_series_equal(s.mode(), Series(['e']))
+
+        s = Series(['2011-01-03', '2013-01-02', '1900-05-03'], dtype='M8[ns]')
+        assert_series_equal(s.mode(), Series([], dtype="M8[ns]"))
+        s = Series(['2011-01-03', '2013-01-02', '1900-05-03', '2011-01-03',
+                    '2013-01-02'], dtype='M8[ns]')
+        assert_series_equal(s.mode(), Series(['2011-01-03', '2013-01-02'],
+                                             dtype='M8[ns]'))
+
+    def test_prod(self):
+        self._check_stat_op('prod', np.prod)
+
+    def test_min(self):
+        self._check_stat_op('min', np.min, check_objects=True)
+
+    def test_max(self):
+        self._check_stat_op('max', np.max, check_objects=True)
+
+    def test_var_std(self):
+        alt = lambda x: np.std(x, ddof=1)
+        self._check_stat_op('std', alt)
+
+        alt = lambda x: np.var(x, ddof=1)
+        self._check_stat_op('var', alt)
+
+        result = self.ts.std(ddof=4)
+        expected = np.std(self.ts.values, ddof=4)
+        assert_almost_equal(result, expected)
+
+        result = self.ts.var(ddof=4)
+        expected = np.var(self.ts.values, ddof=4)
+        assert_almost_equal(result, expected)
+
+        # 1 - element series with ddof=1
+        s = self.ts.iloc[[0]]
+        result = s.var(ddof=1)
+        self.assertTrue(isnull(result))
+
+        result = s.std(ddof=1)
+        self.assertTrue(isnull(result))
+
+    def test_sem(self):
+        alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
+        self._check_stat_op('sem', alt)
+
+        result = self.ts.sem(ddof=4)
+        expected = np.std(self.ts.values,
+                          ddof=4) / np.sqrt(len(self.ts.values))
+        assert_almost_equal(result, expected)
+
+        # 1 - element series with ddof=1
+        s = self.ts.iloc[[0]]
+        result = s.sem(ddof=1)
+        self.assertTrue(isnull(result))
+
+    def test_skew(self):
+        tm._skip_if_no_scipy()
+
+        from scipy.stats import skew
+        alt = lambda x: skew(x, bias=False)
+        self._check_stat_op('skew', alt)
+
+        # test corner cases, skew() returns NaN unless there's at least 3
+        # values
+        min_N = 3
+        for i in range(1, min_N + 1):
+            s = Series(np.ones(i))
+            df = DataFrame(np.ones((i, i)))
+            if i < min_N:
+                self.assertTrue(np.isnan(s.skew()))
+                self.assertTrue(np.isnan(df.skew()).all())
+            else:
+                self.assertEqual(0, s.skew())
+                self.assertTrue((df.skew() == 0).all())
+
+    def test_kurt(self):
+        tm._skip_if_no_scipy()
+
+        from scipy.stats import kurtosis
+        alt = lambda x: kurtosis(x, bias=False)
+        self._check_stat_op('kurt', alt)
+
+        index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
+                           labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2],
+                                   [0, 1, 0, 1, 0, 1]])
+        s = Series(np.random.randn(6), index=index)
+        self.assertAlmostEqual(s.kurt(), s.kurt(level=0)['bar'])
+
+        # test corner cases, kurt() returns NaN unless there's at least 4
+        # values
+        min_N = 4
+        for i in range(1, min_N + 1):
+            s = Series(np.ones(i))
+            df = DataFrame(np.ones((i, i)))
+            if i < min_N:
+                self.assertTrue(np.isnan(s.kurt()))
+                self.assertTrue(np.isnan(df.kurt()).all())
+            else:
+                self.assertEqual(0, s.kurt())
+                self.assertTrue((df.kurt() == 0).all())
+
+    def test_argsort(self):
+        self._check_accum_op('argsort')
+        argsorted = self.ts.argsort()
+        self.assertTrue(issubclass(argsorted.dtype.type, np.integer))
+
+        # GH 2967 (introduced bug in 0.11-dev I think)
+        s = Series([Timestamp('201301%02d' % (i + 1)) for i in range(5)])
+        self.assertEqual(s.dtype, 'datetime64[ns]')
+        shifted = s.shift(-1)
+        self.assertEqual(shifted.dtype, 'datetime64[ns]')
+        self.assertTrue(isnull(shifted[4]))
+
+        result = s.argsort()
+        expected = Series(lrange(5), dtype='int64')
+        assert_series_equal(result, expected)
+
+        result = shifted.argsort()
+        expected = Series(lrange(4) + [-1], dtype='int64')
+        assert_series_equal(result, expected)
+
+    def test_argsort_stable(self):
+        s = Series(np.random.randint(0, 100, size=10000))
+        mindexer = s.argsort(kind='mergesort')
+        qindexer = s.argsort()
+
+        mexpected = np.argsort(s.values, kind='mergesort')
+        qexpected = np.argsort(s.values, kind='quicksort')
+
+        self.assert_numpy_array_equal(mindexer, mexpected)
+        self.assert_numpy_array_equal(qindexer, qexpected)
+        self.assertFalse(np.array_equal(qindexer, mindexer))
+
+    def test_cumsum(self):
+        self._check_accum_op('cumsum')
+
+    def test_cumprod(self):
+        self._check_accum_op('cumprod')
+
+    def test_cummin(self):
+        self.assert_numpy_array_equal(self.ts.cummin(),
+                                      np.minimum.accumulate(np.array(self.ts)))
+        ts = self.ts.copy()
+        ts[::2] = np.NaN
+        result = ts.cummin()[1::2]
+        expected = np.minimum.accumulate(ts.valid())
+
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_cummax(self):
+        self.assert_numpy_array_equal(self.ts.cummax(),
+                                      np.maximum.accumulate(np.array(self.ts)))
+        ts = self.ts.copy()
+        ts[::2] = np.NaN
+        result = ts.cummax()[1::2]
+        expected = np.maximum.accumulate(ts.valid())
+
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_cummin_datetime64(self):
+        s = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', '2000-1-1',
+                                      'NaT', '2000-1-3']))
+
+        expected = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT',
+                                             '2000-1-1', 'NaT', '2000-1-1']))
+        result = s.cummin(skipna=True)
+        self.assert_series_equal(expected, result)
+
+        expected = pd.Series(pd.to_datetime(
+            ['NaT', '2000-1-2', '2000-1-2', '2000-1-1', '2000-1-1', '2000-1-1'
+             ]))
+        result = s.cummin(skipna=False)
+        self.assert_series_equal(expected, result)
+
+    def test_cummax_datetime64(self):
+        s = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', '2000-1-1',
+                                      'NaT', '2000-1-3']))
+
+        expected = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT',
+                                             '2000-1-2', 'NaT', '2000-1-3']))
+        result = s.cummax(skipna=True)
+        self.assert_series_equal(expected, result)
+
+        expected = pd.Series(pd.to_datetime(
+            ['NaT', '2000-1-2', '2000-1-2', '2000-1-2', '2000-1-2', '2000-1-3'
+             ]))
+        result = s.cummax(skipna=False)
+        self.assert_series_equal(expected, result)
+
+    def test_cummin_timedelta64(self):
+        s = pd.Series(pd.to_timedelta(['NaT',
+                                       '2 min',
+                                       'NaT',
+                                       '1 min',
+                                       'NaT',
+                                       '3 min', ]))
+
+        expected = pd.Series(pd.to_timedelta(['NaT',
+                                              '2 min',
+                                              'NaT',
+                                              '1 min',
+                                              'NaT',
+                                              '1 min', ]))
+        result = s.cummin(skipna=True)
+        self.assert_series_equal(expected, result)
+
+        expected = pd.Series(pd.to_timedelta(['NaT',
+                                              '2 min',
+                                              '2 min',
+                                              '1 min',
+                                              '1 min',
+                                              '1 min', ]))
+        result = s.cummin(skipna=False)
+        self.assert_series_equal(expected, result)
+
+    def test_cummax_timedelta64(self):
+        s = pd.Series(pd.to_timedelta(['NaT',
+                                       '2 min',
+                                       'NaT',
+                                       '1 min',
+                                       'NaT',
+                                       '3 min', ]))
+
+        expected = pd.Series(pd.to_timedelta(['NaT',
+                                              '2 min',
+                                              'NaT',
+                                              '2 min',
+                                              'NaT',
+                                              '3 min', ]))
+        result = s.cummax(skipna=True)
+        self.assert_series_equal(expected, result)
+
+        expected = pd.Series(pd.to_timedelta(['NaT',
+                                              '2 min',
+                                              '2 min',
+                                              '2 min',
+                                              '2 min',
+                                              '3 min', ]))
+        result = s.cummax(skipna=False)
+        self.assert_series_equal(expected, result)
+
+    def test_npdiff(self):
+        raise nose.SkipTest("skipping due to Series no longer being an "
+                            "ndarray")
+
+        # no longer works as the return type of np.diff is now nd.array
+        s = Series(np.arange(5))
+
+        r = np.diff(s)
+        assert_series_equal(Series([nan, 0, 0, 0, nan]), r)
+
+    def _check_stat_op(self, name, alternate, check_objects=False,
+                       check_allna=False):
+        import pandas.core.nanops as nanops
+
+        def testit():
+            f = getattr(Series, name)
+
+            # add some NaNs
+            self.series[5:15] = np.NaN
+
+            # idxmax, idxmin, min, and max are valid for dates
+            if name not in ['max', 'min']:
+                ds = Series(date_range('1/1/2001', periods=10))
+                self.assertRaises(TypeError, f, ds)
+
+            # skipna or no
+            self.assertTrue(notnull(f(self.series)))
+            self.assertTrue(isnull(f(self.series, skipna=False)))
+
+            # check the result is correct
+            nona = self.series.dropna()
+            assert_almost_equal(f(nona), alternate(nona.values))
+            assert_almost_equal(f(self.series), alternate(nona.values))
+
+            allna = self.series * nan
+
+            if check_allna:
+                # xref 9422
+                # bottleneck >= 1.0 give 0.0 for an allna Series sum
+                try:
+                    self.assertTrue(nanops._USE_BOTTLENECK)
+                    import bottleneck as bn  # noqa
+                    self.assertTrue(bn.__version__ >= LooseVersion('1.0'))
+                    self.assertEqual(f(allna), 0.0)
+                except:
+                    self.assertTrue(np.isnan(f(allna)))
+
+            # dtype=object with None, it works!
+            s = Series([1, 2, 3, None, 5])
+            f(s)
+
+            # 2888
+            l = [0]
+            l.extend(lrange(2 ** 40, 2 ** 40 + 1000))
+            s = Series(l, dtype='int64')
+            assert_almost_equal(float(f(s)), float(alternate(s.values)))
+
+            # check date range
+            if check_objects:
+                s = Series(bdate_range('1/1/2000', periods=10))
+                res = f(s)
+                exp = alternate(s)
+                self.assertEqual(res, exp)
+
+            # check on string data
+            if name not in ['sum', 'min', 'max']:
+                self.assertRaises(TypeError, f, Series(list('abc')))
+
+            # Invalid axis.
+            self.assertRaises(ValueError, f, self.series, axis=1)
+
+            # Unimplemented numeric_only parameter.
+            if 'numeric_only' in getargspec(f).args:
+                self.assertRaisesRegexp(NotImplementedError, name, f,
+                                        self.series, numeric_only=True)
+
+        testit()
+
+        try:
+            import bottleneck as bn  # noqa
+            nanops._USE_BOTTLENECK = False
+            testit()
+            nanops._USE_BOTTLENECK = True
+        except ImportError:
+            pass
+
+    def _check_accum_op(self, name):
+        func = getattr(np, name)
+        self.assert_numpy_array_equal(func(self.ts), func(np.array(self.ts)))
+
+        # with missing values
+        ts = self.ts.copy()
+        ts[::2] = np.NaN
+
+        result = func(ts)[1::2]
+        expected = func(np.array(ts.valid()))
+
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_round(self):
+        # numpy.round doesn't preserve metadata, probably a numpy bug,
+        # re: GH #314
+        self.ts.index.name = "index_name"
+        result = self.ts.round(2)
+        expected = Series(np.round(self.ts.values, 2), index=self.ts.index,
+                          name='ts')
+        assert_series_equal(result, expected)
+        self.assertEqual(result.name, self.ts.name)
+
+    def test_built_in_round(self):
+        if not compat.PY3:
+            raise nose.SkipTest(
+                'build in round cannot be overriden prior to Python 3')
+
+        s = Series([1.123, 2.123, 3.123], index=lrange(3))
+        result = round(s)
+        expected_rounded0 = Series([1., 2., 3.], index=lrange(3))
+        self.assert_series_equal(result, expected_rounded0)
+
+        decimals = 2
+        expected_rounded = Series([1.12, 2.12, 3.12], index=lrange(3))
+        result = round(s, decimals)
+        self.assert_series_equal(result, expected_rounded)
+
+    def test_prod_numpy16_bug(self):
+        s = Series([1., 1., 1.], index=lrange(3))
+        result = s.prod()
+        self.assertNotIsInstance(result, Series)
+
+    def test_quantile(self):
+        from numpy import percentile
+
+        q = self.ts.quantile(0.1)
+        self.assertEqual(q, percentile(self.ts.valid(), 10))
+
+        q = self.ts.quantile(0.9)
+        self.assertEqual(q, percentile(self.ts.valid(), 90))
+
+        # object dtype
+        q = Series(self.ts, dtype=object).quantile(0.9)
+        self.assertEqual(q, percentile(self.ts.valid(), 90))
+
+        # datetime64[ns] dtype
+        dts = self.ts.index.to_series()
+        q = dts.quantile(.2)
+        self.assertEqual(q, Timestamp('2000-01-10 19:12:00'))
+
+        # timedelta64[ns] dtype
+        tds = dts.diff()
+        q = tds.quantile(.25)
+        self.assertEqual(q, pd.to_timedelta('24:00:00'))
+
+        # GH7661
+        result = Series([np.timedelta64('NaT')]).sum()
+        self.assertTrue(result is pd.NaT)
+
+        msg = 'percentiles should all be in the interval \\[0, 1\\]'
+        for invalid in [-1, 2, [0.5, -1], [0.5, 2]]:
+            with tm.assertRaisesRegexp(ValueError, msg):
+                self.ts.quantile(invalid)
+
+    def test_quantile_multi(self):
+        from numpy import percentile
+
+        qs = [.1, .9]
+        result = self.ts.quantile(qs)
+        expected = pd.Series([percentile(self.ts.valid(), 10),
+                              percentile(self.ts.valid(), 90)],
+                             index=qs, name=self.ts.name)
+        assert_series_equal(result, expected)
+
+        dts = self.ts.index.to_series()
+        dts.name = 'xxx'
+        result = dts.quantile((.2, .2))
+        expected = Series([Timestamp('2000-01-10 19:12:00'),
+                           Timestamp('2000-01-10 19:12:00')],
+                          index=[.2, .2], name='xxx')
+        assert_series_equal(result, expected)
+
+        result = self.ts.quantile([])
+        expected = pd.Series([], name=self.ts.name, index=Index(
+            [], dtype=float))
+        assert_series_equal(result, expected)
+
+    def test_quantile_interpolation(self):
+        # GH #10174
+        if _np_version_under1p9:
+            raise nose.SkipTest("Numpy version is under 1.9")
+
+        from numpy import percentile
+
+        # interpolation = linear (default case)
+        q = self.ts.quantile(0.1, interpolation='linear')
+        self.assertEqual(q, percentile(self.ts.valid(), 10))
+        q1 = self.ts.quantile(0.1)
+        self.assertEqual(q1, percentile(self.ts.valid(), 10))
+
+        # test with and without interpolation keyword
+        self.assertEqual(q, q1)
+
+    def test_quantile_interpolation_np_lt_1p9(self):
+        # GH #10174
+        if not _np_version_under1p9:
+            raise nose.SkipTest("Numpy version is greater than 1.9")
+
+        from numpy import percentile
+
+        # interpolation = linear (default case)
+        q = self.ts.quantile(0.1, interpolation='linear')
+        self.assertEqual(q, percentile(self.ts.valid(), 10))
+        q1 = self.ts.quantile(0.1)
+        self.assertEqual(q1, percentile(self.ts.valid(), 10))
+
+        # interpolation other than linear
+        expErrMsg = "Interpolation methods other than "
+        with tm.assertRaisesRegexp(ValueError, expErrMsg):
+            self.ts.quantile(0.9, interpolation='nearest')
+
+        # object dtype
+        with tm.assertRaisesRegexp(ValueError, expErrMsg):
+            q = Series(self.ts, dtype=object).quantile(0.7,
+                                                       interpolation='higher')
+
+    def test_all_any(self):
+        ts = tm.makeTimeSeries()
+        bool_series = ts > 0
+        self.assertFalse(bool_series.all())
+        self.assertTrue(bool_series.any())
+
+        # Alternative types, with implicit 'object' dtype.
+        s = Series(['abc', True])
+        self.assertEqual('abc', s.any())  # 'abc' || True => 'abc'
+
+    def test_all_any_params(self):
+        # Check skipna, with implicit 'object' dtype.
+        s1 = Series([np.nan, True])
+        s2 = Series([np.nan, False])
+        self.assertTrue(s1.all(skipna=False))  # nan && True => True
+        self.assertTrue(s1.all(skipna=True))
+        self.assertTrue(np.isnan(s2.any(skipna=False)))  # nan || False => nan
+        self.assertFalse(s2.any(skipna=True))
+
+        # Check level.
+        s = pd.Series([False, False, True, True, False, True],
+                      index=[0, 0, 1, 1, 2, 2])
+        assert_series_equal(s.all(level=0), Series([False, True, False]))
+        assert_series_equal(s.any(level=0), Series([False, True, True]))
+
+        # bool_only is not implemented with level option.
+        self.assertRaises(NotImplementedError, s.any, bool_only=True, level=0)
+        self.assertRaises(NotImplementedError, s.all, bool_only=True, level=0)
+
+        # bool_only is not implemented alone.
+        self.assertRaises(NotImplementedError, s.any, bool_only=True)
+        self.assertRaises(NotImplementedError, s.all, bool_only=True)
+
+    def test_modulo(self):
+
+        # GH3590, modulo as ints
+        p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]})
+        result = p['first'] % p['second']
+        expected = Series(p['first'].values % p['second'].values,
+                          dtype='float64')
+        expected.iloc[0:3] = np.nan
+        assert_series_equal(result, expected)
+
+        result = p['first'] % 0
+        expected = Series(np.nan, index=p.index, name='first')
+        assert_series_equal(result, expected)
+
+        p = p.astype('float64')
+        result = p['first'] % p['second']
+        expected = Series(p['first'].values % p['second'].values)
+        assert_series_equal(result, expected)
+
+        p = p.astype('float64')
+        result = p['first'] % p['second']
+        result2 = p['second'] % p['first']
+        self.assertFalse(np.array_equal(result, result2))
+
+        # GH 9144
+        s = Series([0, 1])
+
+        result = s % 0
+        expected = Series([nan, nan])
+        assert_series_equal(result, expected)
+
+        result = 0 % s
+        expected = Series([nan, 0.0])
+        assert_series_equal(result, expected)
+
+    def test_ops_consistency_on_empty(self):
+
+        # GH 7869
+        # consistency on empty
+
+        # float
+        result = Series(dtype=float).sum()
+        self.assertEqual(result, 0)
+
+        result = Series(dtype=float).mean()
+        self.assertTrue(isnull(result))
+
+        result = Series(dtype=float).median()
+        self.assertTrue(isnull(result))
+
+        # timedelta64[ns]
+        result = Series(dtype='m8[ns]').sum()
+        self.assertEqual(result, Timedelta(0))
+
+        result = Series(dtype='m8[ns]').mean()
+        self.assertTrue(result is pd.NaT)
+
+        result = Series(dtype='m8[ns]').median()
+        self.assertTrue(result is pd.NaT)
+
+    def test_corr(self):
+        tm._skip_if_no_scipy()
+
+        import scipy.stats as stats
+
+        # full overlap
+        self.assertAlmostEqual(self.ts.corr(self.ts), 1)
+
+        # partial overlap
+        self.assertAlmostEqual(self.ts[:15].corr(self.ts[5:]), 1)
+
+        self.assertTrue(isnull(self.ts[:15].corr(self.ts[5:], min_periods=12)))
+
+        ts1 = self.ts[:15].reindex(self.ts.index)
+        ts2 = self.ts[5:].reindex(self.ts.index)
+        self.assertTrue(isnull(ts1.corr(ts2, min_periods=12)))
+
+        # No overlap
+        self.assertTrue(np.isnan(self.ts[::2].corr(self.ts[1::2])))
+
+        # all NA
+        cp = self.ts[:10].copy()
+        cp[:] = np.nan
+        self.assertTrue(isnull(cp.corr(cp)))
+
+        A = tm.makeTimeSeries()
+        B = tm.makeTimeSeries()
+        result = A.corr(B)
+        expected, _ = stats.pearsonr(A, B)
+        self.assertAlmostEqual(result, expected)
+
+    def test_corr_rank(self):
+        tm._skip_if_no_scipy()
+
+        import scipy
+        import scipy.stats as stats
+
+        # kendall and spearman
+        A = tm.makeTimeSeries()
+        B = tm.makeTimeSeries()
+        A[-5:] = A[:5]
+        result = A.corr(B, method='kendall')
+        expected = stats.kendalltau(A, B)[0]
+        self.assertAlmostEqual(result, expected)
+
+        result = A.corr(B, method='spearman')
+        expected = stats.spearmanr(A, B)[0]
+        self.assertAlmostEqual(result, expected)
+
+        # these methods got rewritten in 0.8
+        if scipy.__version__ < LooseVersion('0.9'):
+            raise nose.SkipTest("skipping corr rank because of scipy version "
+                                "{0}".format(scipy.__version__))
+
+        # results from R
+        A = Series(
+            [-0.89926396, 0.94209606, -1.03289164, -0.95445587, 0.76910310, -
+             0.06430576, -2.09704447, 0.40660407, -0.89926396, 0.94209606])
+        B = Series(
+            [-1.01270225, -0.62210117, -1.56895827, 0.59592943, -0.01680292,
+             1.17258718, -1.06009347, -0.10222060, -0.89076239, 0.89372375])
+        kexp = 0.4319297
+        sexp = 0.5853767
+        self.assertAlmostEqual(A.corr(B, method='kendall'), kexp)
+        self.assertAlmostEqual(A.corr(B, method='spearman'), sexp)
+
+    def test_cov(self):
+        # full overlap
+        self.assertAlmostEqual(self.ts.cov(self.ts), self.ts.std() ** 2)
+
+        # partial overlap
+        self.assertAlmostEqual(self.ts[:15].cov(self.ts[5:]),
+                               self.ts[5:15].std() ** 2)
+
+        # No overlap
+        self.assertTrue(np.isnan(self.ts[::2].cov(self.ts[1::2])))
+
+        # all NA
+        cp = self.ts[:10].copy()
+        cp[:] = np.nan
+        self.assertTrue(isnull(cp.cov(cp)))
+
+        # min_periods
+        self.assertTrue(isnull(self.ts[:15].cov(self.ts[5:], min_periods=12)))
+
+        ts1 = self.ts[:15].reindex(self.ts.index)
+        ts2 = self.ts[5:].reindex(self.ts.index)
+        self.assertTrue(isnull(ts1.cov(ts2, min_periods=12)))
+
+    def test_count(self):
+        self.assertEqual(self.ts.count(), len(self.ts))
+
+        self.ts[::2] = np.NaN
+
+        self.assertEqual(self.ts.count(), np.isfinite(self.ts).sum())
+
+        mi = MultiIndex.from_arrays([list('aabbcc'), [1, 2, 2, nan, 1, 2]])
+        ts = Series(np.arange(len(mi)), index=mi)
+
+        left = ts.count(level=1)
+        right = Series([2, 3, 1], index=[1, 2, nan])
+        assert_series_equal(left, right)
+
+        ts.iloc[[0, 3, 5]] = nan
+        assert_series_equal(ts.count(level=1), right - 1)
+
+    def test_dot(self):
+        a = Series(np.random.randn(4), index=['p', 'q', 'r', 's'])
+        b = DataFrame(np.random.randn(3, 4), index=['1', '2', '3'],
+                      columns=['p', 'q', 'r', 's']).T
+
+        result = a.dot(b)
+        expected = Series(np.dot(a.values, b.values), index=['1', '2', '3'])
+        assert_series_equal(result, expected)
+
+        # Check index alignment
+        b2 = b.reindex(index=reversed(b.index))
+        result = a.dot(b)
+        assert_series_equal(result, expected)
+
+        # Check ndarray argument
+        result = a.dot(b.values)
+        self.assertTrue(np.all(result == expected.values))
+        assert_almost_equal(a.dot(b['2'].values), expected['2'])
+
+        # Check series argument
+        assert_almost_equal(a.dot(b['1']), expected['1'])
+        assert_almost_equal(a.dot(b2['1']), expected['1'])
+
+        self.assertRaises(Exception, a.dot, a.values[:3])
+        self.assertRaises(ValueError, a.dot, b.T)
+
+    def test_value_counts_nunique(self):
+
+        # basics.rst doc example
+        series = Series(np.random.randn(500))
+        series[20:500] = np.nan
+        series[10:20] = 5000
+        result = series.nunique()
+        self.assertEqual(result, 11)
+
+    def test_unique(self):
+
+        # 714 also, dtype=float
+        s = Series([1.2345] * 100)
+        s[::2] = np.nan
+        result = s.unique()
+        self.assertEqual(len(result), 2)
+
+        s = Series([1.2345] * 100, dtype='f4')
+        s[::2] = np.nan
+        result = s.unique()
+        self.assertEqual(len(result), 2)
+
+        # NAs in object arrays #714
+        s = Series(['foo'] * 100, dtype='O')
+        s[::2] = np.nan
+        result = s.unique()
+        self.assertEqual(len(result), 2)
+
+        # decision about None
+        s = Series([1, 2, 3, None, None, None], dtype=object)
+        result = s.unique()
+        expected = np.array([1, 2, 3, None], dtype=object)
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_drop_duplicates(self):
+        # check both int and object
+        for s in [Series([1, 2, 3, 3]), Series(['1', '2', '3', '3'])]:
+            expected = Series([False, False, False, True])
+            assert_series_equal(s.duplicated(), expected)
+            assert_series_equal(s.drop_duplicates(), s[~expected])
+            sc = s.copy()
+            sc.drop_duplicates(inplace=True)
+            assert_series_equal(sc, s[~expected])
+
+            expected = Series([False, False, True, False])
+            assert_series_equal(s.duplicated(keep='last'), expected)
+            assert_series_equal(s.drop_duplicates(keep='last'), s[~expected])
+            sc = s.copy()
+            sc.drop_duplicates(keep='last', inplace=True)
+            assert_series_equal(sc, s[~expected])
+
+            # deprecate take_last
+            with tm.assert_produces_warning(FutureWarning):
+                assert_series_equal(s.duplicated(take_last=True), expected)
+            with tm.assert_produces_warning(FutureWarning):
+                assert_series_equal(
+                    s.drop_duplicates(take_last=True), s[~expected])
+            sc = s.copy()
+            with tm.assert_produces_warning(FutureWarning):
+                sc.drop_duplicates(take_last=True, inplace=True)
+            assert_series_equal(sc, s[~expected])
+
+            expected = Series([False, False, True, True])
+            assert_series_equal(s.duplicated(keep=False), expected)
+            assert_series_equal(s.drop_duplicates(keep=False), s[~expected])
+            sc = s.copy()
+            sc.drop_duplicates(keep=False, inplace=True)
+            assert_series_equal(sc, s[~expected])
+
+        for s in [Series([1, 2, 3, 5, 3, 2, 4]),
+                  Series(['1', '2', '3', '5', '3', '2', '4'])]:
+            expected = Series([False, False, False, False, True, True, False])
+            assert_series_equal(s.duplicated(), expected)
+            assert_series_equal(s.drop_duplicates(), s[~expected])
+            sc = s.copy()
+            sc.drop_duplicates(inplace=True)
+            assert_series_equal(sc, s[~expected])
+
+            expected = Series([False, True, True, False, False, False, False])
+            assert_series_equal(s.duplicated(keep='last'), expected)
+            assert_series_equal(s.drop_duplicates(keep='last'), s[~expected])
+            sc = s.copy()
+            sc.drop_duplicates(keep='last', inplace=True)
+            assert_series_equal(sc, s[~expected])
+
+            # deprecate take_last
+            with tm.assert_produces_warning(FutureWarning):
+                assert_series_equal(s.duplicated(take_last=True), expected)
+            with tm.assert_produces_warning(FutureWarning):
+                assert_series_equal(
+                    s.drop_duplicates(take_last=True), s[~expected])
+            sc = s.copy()
+            with tm.assert_produces_warning(FutureWarning):
+                sc.drop_duplicates(take_last=True, inplace=True)
+            assert_series_equal(sc, s[~expected])
+
+            expected = Series([False, True, True, False, True, True, False])
+            assert_series_equal(s.duplicated(keep=False), expected)
+            assert_series_equal(s.drop_duplicates(keep=False), s[~expected])
+            sc = s.copy()
+            sc.drop_duplicates(keep=False, inplace=True)
+            assert_series_equal(sc, s[~expected])
+
+    def test_rank(self):
+        tm._skip_if_no_scipy()
+        from scipy.stats import rankdata
+
+        self.ts[::2] = np.nan
+        self.ts[:10][::3] = 4.
+
+        ranks = self.ts.rank()
+        oranks = self.ts.astype('O').rank()
+
+        assert_series_equal(ranks, oranks)
+
+        mask = np.isnan(self.ts)
+        filled = self.ts.fillna(np.inf)
+
+        # rankdata returns a ndarray
+        exp = Series(rankdata(filled), index=filled.index)
+        exp[mask] = np.nan
+
+        assert_almost_equal(ranks, exp)
+
+        iseries = Series(np.arange(5).repeat(2))
+
+        iranks = iseries.rank()
+        exp = iseries.astype(float).rank()
+        assert_series_equal(iranks, exp)
+        iseries = Series(np.arange(5)) + 1.0
+        exp = iseries / 5.0
+        iranks = iseries.rank(pct=True)
+
+        assert_series_equal(iranks, exp)
+
+        iseries = Series(np.repeat(1, 100))
+        exp = Series(np.repeat(0.505, 100))
+        iranks = iseries.rank(pct=True)
+        assert_series_equal(iranks, exp)
+
+        iseries[1] = np.nan
+        exp = Series(np.repeat(50.0 / 99.0, 100))
+        exp[1] = np.nan
+        iranks = iseries.rank(pct=True)
+        assert_series_equal(iranks, exp)
+
+        iseries = Series(np.arange(5)) + 1.0
+        iseries[4] = np.nan
+        exp = iseries / 4.0
+        iranks = iseries.rank(pct=True)
+        assert_series_equal(iranks, exp)
+
+        iseries = Series(np.repeat(np.nan, 100))
+        exp = iseries.copy()
+        iranks = iseries.rank(pct=True)
+        assert_series_equal(iranks, exp)
+
+        iseries = Series(np.arange(5)) + 1
+        iseries[4] = np.nan
+        exp = iseries / 4.0
+        iranks = iseries.rank(pct=True)
+        assert_series_equal(iranks, exp)
+
+        rng = date_range('1/1/1990', periods=5)
+        iseries = Series(np.arange(5), rng) + 1
+        iseries.ix[4] = np.nan
+        exp = iseries / 4.0
+        iranks = iseries.rank(pct=True)
+        assert_series_equal(iranks, exp)
+
+        iseries = Series([1e-50, 1e-100, 1e-20, 1e-2, 1e-20 + 1e-30, 1e-1])
+        exp = Series([2, 1, 3, 5, 4, 6.0])
+        iranks = iseries.rank()
+        assert_series_equal(iranks, exp)
+
+        values = np.array(
+            [-50, -1, -1e-20, -1e-25, -1e-50, 0, 1e-40, 1e-20, 1e-10, 2, 40
+             ], dtype='float64')
+        random_order = np.random.permutation(len(values))
+        iseries = Series(values[random_order])
+        exp = Series(random_order + 1.0, dtype='float64')
+        iranks = iseries.rank()
+        assert_series_equal(iranks, exp)
+
+    def test_rank_inf(self):
+        raise nose.SkipTest('DataFrame.rank does not currently rank '
+                            'np.inf and -np.inf properly')
+
+        values = np.array(
+            [-np.inf, -50, -1, -1e-20, -1e-25, -1e-50, 0, 1e-40, 1e-20, 1e-10,
+             2, 40, np.inf], dtype='float64')
+        random_order = np.random.permutation(len(values))
+        iseries = Series(values[random_order])
+        exp = Series(random_order + 1.0, dtype='float64')
+        iranks = iseries.rank()
+        assert_series_equal(iranks, exp)
+
+    def test_clip(self):
+        val = self.ts.median()
+
+        self.assertEqual(self.ts.clip_lower(val).min(), val)
+        self.assertEqual(self.ts.clip_upper(val).max(), val)
+
+        self.assertEqual(self.ts.clip(lower=val).min(), val)
+        self.assertEqual(self.ts.clip(upper=val).max(), val)
+
+        result = self.ts.clip(-0.5, 0.5)
+        expected = np.clip(self.ts, -0.5, 0.5)
+        assert_series_equal(result, expected)
+        tm.assertIsInstance(expected, Series)
+
+    def test_clip_types_and_nulls(self):
+
+        sers = [Series([np.nan, 1.0, 2.0, 3.0]), Series([None, 'a', 'b', 'c']),
+                Series(pd.to_datetime(
+                    [np.nan, 1, 2, 3], unit='D'))]
+
+        for s in sers:
+            thresh = s[2]
+            l = s.clip_lower(thresh)
+            u = s.clip_upper(thresh)
+            self.assertEqual(l[notnull(l)].min(), thresh)
+            self.assertEqual(u[notnull(u)].max(), thresh)
+            self.assertEqual(list(isnull(s)), list(isnull(l)))
+            self.assertEqual(list(isnull(s)), list(isnull(u)))
+
+    def test_clip_against_series(self):
+        # GH #6966
+
+        s = Series([1.0, 1.0, 4.0])
+        threshold = Series([1.0, 2.0, 3.0])
+
+        assert_series_equal(s.clip_lower(threshold), Series([1.0, 2.0, 4.0]))
+        assert_series_equal(s.clip_upper(threshold), Series([1.0, 1.0, 3.0]))
+
+        lower = Series([1.0, 2.0, 3.0])
+        upper = Series([1.5, 2.5, 3.5])
+        assert_series_equal(s.clip(lower, upper), Series([1.0, 2.0, 3.5]))
+        assert_series_equal(s.clip(1.5, upper), Series([1.5, 1.5, 3.5]))
+
+    def test_clip_with_datetimes(self):
+
+        # GH 11838
+        # naive and tz-aware datetimes
+
+        t = Timestamp('2015-12-01 09:30:30')
+        s = Series([Timestamp('2015-12-01 09:30:00'), Timestamp(
+            '2015-12-01 09:31:00')])
+        result = s.clip(upper=t)
+        expected = Series([Timestamp('2015-12-01 09:30:00'), Timestamp(
+            '2015-12-01 09:30:30')])
+        assert_series_equal(result, expected)
+
+        t = Timestamp('2015-12-01 09:30:30', tz='US/Eastern')
+        s = Series([Timestamp('2015-12-01 09:30:00', tz='US/Eastern'),
+                    Timestamp('2015-12-01 09:31:00', tz='US/Eastern')])
+        result = s.clip(upper=t)
+        expected = Series([Timestamp('2015-12-01 09:30:00', tz='US/Eastern'),
+                           Timestamp('2015-12-01 09:30:30', tz='US/Eastern')])
+        assert_series_equal(result, expected)
+
+    def test_cummethods_bool(self):
+        # GH 6270
+        # looks like a buggy np.maximum.accumulate for numpy 1.6.1, py 3.2
+        def cummin(x):
+            return np.minimum.accumulate(x)
+
+        def cummax(x):
+            return np.maximum.accumulate(x)
+
+        a = pd.Series([False, False, False, True, True, False, False])
+        b = ~a
+        c = pd.Series([False] * len(b))
+        d = ~c
+        methods = {'cumsum': np.cumsum,
+                   'cumprod': np.cumprod,
+                   'cummin': cummin,
+                   'cummax': cummax}
+        args = product((a, b, c, d), methods)
+        for s, method in args:
+            expected = Series(methods[method](s.values))
+            result = getattr(s, method)()
+            assert_series_equal(result, expected)
+
+        e = pd.Series([False, True, nan, False])
+        cse = pd.Series([0, 1, nan, 1], dtype=object)
+        cpe = pd.Series([False, 0, nan, 0])
+        cmin = pd.Series([False, False, nan, False])
+        cmax = pd.Series([False, True, nan, True])
+        expecteds = {'cumsum': cse,
+                     'cumprod': cpe,
+                     'cummin': cmin,
+                     'cummax': cmax}
+
+        for method in methods:
+            res = getattr(e, method)()
+            assert_series_equal(res, expecteds[method])
+
+    def test_isin(self):
+        s = Series(['A', 'B', 'C', 'a', 'B', 'B', 'A', 'C'])
+
+        result = s.isin(['A', 'C'])
+        expected = Series([True, False, True, False, False, False, True, True])
+        assert_series_equal(result, expected)
+
+    def test_isin_with_string_scalar(self):
+        # GH4763
+        s = Series(['A', 'B', 'C', 'a', 'B', 'B', 'A', 'C'])
+        with tm.assertRaises(TypeError):
+            s.isin('a')
+
+        with tm.assertRaises(TypeError):
+            s = Series(['aaa', 'b', 'c'])
+            s.isin('aaa')
+
+    def test_isin_with_i8(self):
+        # GH 5021
+
+        expected = Series([True, True, False, False, False])
+        expected2 = Series([False, True, False, False, False])
+
+        # datetime64[ns]
+        s = Series(date_range('jan-01-2013', 'jan-05-2013'))
+
+        result = s.isin(s[0:2])
+        assert_series_equal(result, expected)
+
+        result = s.isin(s[0:2].values)
+        assert_series_equal(result, expected)
+
+        # fails on dtype conversion in the first place
+        result = s.isin(s[0:2].values.astype('datetime64[D]'))
+        assert_series_equal(result, expected)
+
+        result = s.isin([s[1]])
+        assert_series_equal(result, expected2)
+
+        result = s.isin([np.datetime64(s[1])])
+        assert_series_equal(result, expected2)
+
+        # timedelta64[ns]
+        s = Series(pd.to_timedelta(lrange(5), unit='d'))
+        result = s.isin(s[0:2])
+        assert_series_equal(result, expected)
+
+    def test_timedelta64_analytics(self):
+        from pandas import date_range
+
+        # index min/max
+        td = Series(date_range('2012-1-1', periods=3, freq='D')) - \
+            Timestamp('20120101')
+
+        result = td.idxmin()
+        self.assertEqual(result, 0)
+
+        result = td.idxmax()
+        self.assertEqual(result, 2)
+
+        # GH 2982
+        # with NaT
+        td[0] = np.nan
+
+        result = td.idxmin()
+        self.assertEqual(result, 1)
+
+        result = td.idxmax()
+        self.assertEqual(result, 2)
+
+        # abs
+        s1 = Series(date_range('20120101', periods=3))
+        s2 = Series(date_range('20120102', periods=3))
+        expected = Series(s2 - s1)
+
+        # this fails as numpy returns timedelta64[us]
+        # result = np.abs(s1-s2)
+        # assert_frame_equal(result,expected)
+
+        result = (s1 - s2).abs()
+        assert_series_equal(result, expected)
+
+        # max/min
+        result = td.max()
+        expected = Timedelta('2 days')
+        self.assertEqual(result, expected)
+
+        result = td.min()
+        expected = Timedelta('1 days')
+        self.assertEqual(result, expected)
+
+    def test_idxmin(self):
+        # test idxmin
+        # _check_stat_op approach can not be used here because of isnull check.
+
+        # add some NaNs
+        self.series[5:15] = np.NaN
+
+        # skipna or no
+        self.assertEqual(self.series[self.series.idxmin()], self.series.min())
+        self.assertTrue(isnull(self.series.idxmin(skipna=False)))
+
+        # no NaNs
+        nona = self.series.dropna()
+        self.assertEqual(nona[nona.idxmin()], nona.min())
+        self.assertEqual(nona.index.values.tolist().index(nona.idxmin()),
+                         nona.values.argmin())
+
+        # all NaNs
+        allna = self.series * nan
+        self.assertTrue(isnull(allna.idxmin()))
+
+        # datetime64[ns]
+        from pandas import date_range
+        s = Series(date_range('20130102', periods=6))
+        result = s.idxmin()
+        self.assertEqual(result, 0)
+
+        s[0] = np.nan
+        result = s.idxmin()
+        self.assertEqual(result, 1)
+
+    def test_idxmax(self):
+        # test idxmax
+        # _check_stat_op approach can not be used here because of isnull check.
+
+        # add some NaNs
+        self.series[5:15] = np.NaN
+
+        # skipna or no
+        self.assertEqual(self.series[self.series.idxmax()], self.series.max())
+        self.assertTrue(isnull(self.series.idxmax(skipna=False)))
+
+        # no NaNs
+        nona = self.series.dropna()
+        self.assertEqual(nona[nona.idxmax()], nona.max())
+        self.assertEqual(nona.index.values.tolist().index(nona.idxmax()),
+                         nona.values.argmax())
+
+        # all NaNs
+        allna = self.series * nan
+        self.assertTrue(isnull(allna.idxmax()))
+
+        from pandas import date_range
+        s = Series(date_range('20130102', periods=6))
+        result = s.idxmax()
+        self.assertEqual(result, 5)
+
+        s[5] = np.nan
+        result = s.idxmax()
+        self.assertEqual(result, 4)
+
+        # Float64Index
+        # GH 5914
+        s = pd.Series([1, 2, 3], [1.1, 2.1, 3.1])
+        result = s.idxmax()
+        self.assertEqual(result, 3.1)
+        result = s.idxmin()
+        self.assertEqual(result, 1.1)
+
+        s = pd.Series(s.index, s.index)
+        result = s.idxmax()
+        self.assertEqual(result, 3.1)
+        result = s.idxmin()
+        self.assertEqual(result, 1.1)
+
+    def test_ptp(self):
+        N = 1000
+        arr = np.random.randn(N)
+        ser = Series(arr)
+        self.assertEqual(np.ptp(ser), np.ptp(arr))
+
+        # GH11163
+        s = Series([3, 5, np.nan, -3, 10])
+        self.assertEqual(s.ptp(), 13)
+        self.assertTrue(pd.isnull(s.ptp(skipna=False)))
+
+        mi = pd.MultiIndex.from_product([['a', 'b'], [1, 2, 3]])
+        s = pd.Series([1, np.nan, 7, 3, 5, np.nan], index=mi)
+
+        expected = pd.Series([6, 2], index=['a', 'b'], dtype=np.float64)
+        self.assert_series_equal(s.ptp(level=0), expected)
+
+        expected = pd.Series([np.nan, np.nan], index=['a', 'b'])
+        self.assert_series_equal(s.ptp(level=0, skipna=False), expected)
+
+        with self.assertRaises(ValueError):
+            s.ptp(axis=1)
+
+        s = pd.Series(['a', 'b', 'c', 'd', 'e'])
+        with self.assertRaises(TypeError):
+            s.ptp()
+
+        with self.assertRaises(NotImplementedError):
+            s.ptp(numeric_only=True)
+
+    def test_datetime_timedelta_quantiles(self):
+        # covers #9694
+        self.assertTrue(pd.isnull(Series([], dtype='M8[ns]').quantile(.5)))
+        self.assertTrue(pd.isnull(Series([], dtype='m8[ns]').quantile(.5)))
+
+    def test_empty_timeseries_redections_return_nat(self):
+        # covers #11245
+        for dtype in ('m8[ns]', 'm8[ns]', 'M8[ns]', 'M8[ns, UTC]'):
+            self.assertIs(Series([], dtype=dtype).min(), pd.NaT)
+            self.assertIs(Series([], dtype=dtype).max(), pd.NaT)
+
+    def test_unique_data_ownership(self):
+        # it works! #1807
+        Series(Series(["a", "c", "b"]).unique()).sort_values()
+
+    def test_replace(self):
+        N = 100
+        ser = Series(np.random.randn(N))
+        ser[0:4] = np.nan
+        ser[6:10] = 0
+
+        # replace list with a single value
+        ser.replace([np.nan], -1, inplace=True)
+
+        exp = ser.fillna(-1)
+        assert_series_equal(ser, exp)
+
+        rs = ser.replace(0., np.nan)
+        ser[ser == 0.] = np.nan
+        assert_series_equal(rs, ser)
+
+        ser = Series(np.fabs(np.random.randn(N)), tm.makeDateIndex(N),
+                     dtype=object)
+        ser[:5] = np.nan
+        ser[6:10] = 'foo'
+        ser[20:30] = 'bar'
+
+        # replace list with a single value
+        rs = ser.replace([np.nan, 'foo', 'bar'], -1)
+
+        self.assertTrue((rs[:5] == -1).all())
+        self.assertTrue((rs[6:10] == -1).all())
+        self.assertTrue((rs[20:30] == -1).all())
+        self.assertTrue((isnull(ser[:5])).all())
+
+        # replace with different values
+        rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3})
+
+        self.assertTrue((rs[:5] == -1).all())
+        self.assertTrue((rs[6:10] == -2).all())
+        self.assertTrue((rs[20:30] == -3).all())
+        self.assertTrue((isnull(ser[:5])).all())
+
+        # replace with different values with 2 lists
+        rs2 = ser.replace([np.nan, 'foo', 'bar'], [-1, -2, -3])
+        assert_series_equal(rs, rs2)
+
+        # replace inplace
+        ser.replace([np.nan, 'foo', 'bar'], -1, inplace=True)
+
+        self.assertTrue((ser[:5] == -1).all())
+        self.assertTrue((ser[6:10] == -1).all())
+        self.assertTrue((ser[20:30] == -1).all())
+
+        ser = Series([np.nan, 0, np.inf])
+        assert_series_equal(ser.replace(np.nan, 0), ser.fillna(0))
+
+        ser = Series([np.nan, 0, 'foo', 'bar', np.inf, None, lib.NaT])
+        assert_series_equal(ser.replace(np.nan, 0), ser.fillna(0))
+        filled = ser.copy()
+        filled[4] = 0
+        assert_series_equal(ser.replace(np.inf, 0), filled)
+
+        ser = Series(self.ts.index)
+        assert_series_equal(ser.replace(np.nan, 0), ser.fillna(0))
+
+        # malformed
+        self.assertRaises(ValueError, ser.replace, [1, 2, 3], [np.nan, 0])
+
+        # make sure that we aren't just masking a TypeError because bools don't
+        # implement indexing
+        with tm.assertRaisesRegexp(TypeError, 'Cannot compare types .+'):
+            ser.replace([1, 2], [np.nan, 0])
+
+        ser = Series([0, 1, 2, 3, 4])
+        result = ser.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
+        assert_series_equal(result, Series([4, 3, 2, 1, 0]))
+
+        # API change from 0.12?
+        # GH 5319
+        ser = Series([0, np.nan, 2, 3, 4])
+        expected = ser.ffill()
+        result = ser.replace([np.nan])
+        assert_series_equal(result, expected)
+
+        ser = Series([0, np.nan, 2, 3, 4])
+        expected = ser.ffill()
+        result = ser.replace(np.nan)
+        assert_series_equal(result, expected)
+        # GH 5797
+        ser = Series(date_range('20130101', periods=5))
+        expected = ser.copy()
+        expected.loc[2] = Timestamp('20120101')
+        result = ser.replace({Timestamp('20130103'): Timestamp('20120101')})
+        assert_series_equal(result, expected)
+        result = ser.replace(Timestamp('20130103'), Timestamp('20120101'))
+        assert_series_equal(result, expected)
+
+    def test_replace_with_single_list(self):
+        ser = Series([0, 1, 2, 3, 4])
+        result = ser.replace([1, 2, 3])
+        assert_series_equal(result, Series([0, 0, 0, 0, 4]))
+
+        s = ser.copy()
+        s.replace([1, 2, 3], inplace=True)
+        assert_series_equal(s, Series([0, 0, 0, 0, 4]))
+
+        # make sure things don't get corrupted when fillna call fails
+        s = ser.copy()
+        with tm.assertRaises(ValueError):
+            s.replace([1, 2, 3], inplace=True, method='crash_cymbal')
+        assert_series_equal(s, ser)
+
+    def test_replace_mixed_types(self):
+        s = Series(np.arange(5), dtype='int64')
+
+        def check_replace(to_rep, val, expected):
+            sc = s.copy()
+            r = s.replace(to_rep, val)
+            sc.replace(to_rep, val, inplace=True)
+            assert_series_equal(expected, r)
+            assert_series_equal(expected, sc)
+
+        # should NOT upcast to float
+        e = Series([0, 1, 2, 3, 4])
+        tr, v = [3], [3.0]
+        check_replace(tr, v, e)
+
+        # MUST upcast to float
+        e = Series([0, 1, 2, 3.5, 4])
+        tr, v = [3], [3.5]
+        check_replace(tr, v, e)
+
+        # casts to object
+        e = Series([0, 1, 2, 3.5, 'a'])
+        tr, v = [3, 4], [3.5, 'a']
+        check_replace(tr, v, e)
+
+        # again casts to object
+        e = Series([0, 1, 2, 3.5, Timestamp('20130101')])
+        tr, v = [3, 4], [3.5, Timestamp('20130101')]
+        check_replace(tr, v, e)
+
+        # casts to float
+        e = Series([0, 1, 2, 3.5, 1])
+        tr, v = [3, 4], [3.5, True]
+        check_replace(tr, v, e)
+
+        # test an object with dates + floats + integers + strings
+        dr = date_range('1/1/2001', '1/10/2001',
+                        freq='D').to_series().reset_index(drop=True)
+        result = dr.astype(object).replace(
+            [dr[0], dr[1], dr[2]], [1.0, 2, 'a'])
+        expected = Series([1.0, 2, 'a'] + dr[3:].tolist(), dtype=object)
+        assert_series_equal(result, expected)
+
+    def test_replace_bool_with_string_no_op(self):
+        s = Series([True, False, True])
+        result = s.replace('fun', 'in-the-sun')
+        tm.assert_series_equal(s, result)
+
+    def test_replace_bool_with_string(self):
+        # nonexistent elements
+        s = Series([True, False, True])
+        result = s.replace(True, '2u')
+        expected = Series(['2u', False, '2u'])
+        tm.assert_series_equal(expected, result)
+
+    def test_replace_bool_with_bool(self):
+        s = Series([True, False, True])
+        result = s.replace(True, False)
+        expected = Series([False] * len(s))
+        tm.assert_series_equal(expected, result)
+
+    def test_replace_with_dict_with_bool_keys(self):
+        s = Series([True, False, True])
+        with tm.assertRaisesRegexp(TypeError, 'Cannot compare types .+'):
+            s.replace({'asdf': 'asdb', True: 'yes'})
+
+    def test_replace2(self):
+        N = 100
+        ser = Series(np.fabs(np.random.randn(N)), tm.makeDateIndex(N),
+                     dtype=object)
+        ser[:5] = np.nan
+        ser[6:10] = 'foo'
+        ser[20:30] = 'bar'
+
+        # replace list with a single value
+        rs = ser.replace([np.nan, 'foo', 'bar'], -1)
+
+        self.assertTrue((rs[:5] == -1).all())
+        self.assertTrue((rs[6:10] == -1).all())
+        self.assertTrue((rs[20:30] == -1).all())
+        self.assertTrue((isnull(ser[:5])).all())
+
+        # replace with different values
+        rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3})
+
+        self.assertTrue((rs[:5] == -1).all())
+        self.assertTrue((rs[6:10] == -2).all())
+        self.assertTrue((rs[20:30] == -3).all())
+        self.assertTrue((isnull(ser[:5])).all())
+
+        # replace with different values with 2 lists
+        rs2 = ser.replace([np.nan, 'foo', 'bar'], [-1, -2, -3])
+        assert_series_equal(rs, rs2)
+
+        # replace inplace
+        ser.replace([np.nan, 'foo', 'bar'], -1,
inplace=True) + self.assertTrue((ser[:5] == -1).all()) + self.assertTrue((ser[6:10] == -1).all()) + self.assertTrue((ser[20:30] == -1).all()) + + def test_repeat(self): + s = Series(np.random.randn(3), index=['a', 'b', 'c']) + + reps = s.repeat(5) + exp = Series(s.values.repeat(5), index=s.index.values.repeat(5)) + assert_series_equal(reps, exp) + + to_rep = [2, 3, 4] + reps = s.repeat(to_rep) + exp = Series(s.values.repeat(to_rep), + index=s.index.values.repeat(to_rep)) + assert_series_equal(reps, exp) + + def test_searchsorted_numeric_dtypes_scalar(self): + s = Series([1, 2, 90, 1000, 3e9]) + r = s.searchsorted(30) + e = 2 + tm.assert_equal(r, e) + + r = s.searchsorted([30]) + e = np.array([2]) + tm.assert_numpy_array_equal(r, e) + + def test_searchsorted_numeric_dtypes_vector(self): + s = Series([1, 2, 90, 1000, 3e9]) + r = s.searchsorted([91, 2e6]) + e = np.array([3, 4]) + tm.assert_numpy_array_equal(r, e) + + def test_search_sorted_datetime64_scalar(self): + s = Series(pd.date_range('20120101', periods=10, freq='2D')) + v = pd.Timestamp('20120102') + r = s.searchsorted(v) + e = 1 + tm.assert_equal(r, e) + + def test_search_sorted_datetime64_list(self): + s = Series(pd.date_range('20120101', periods=10, freq='2D')) + v = [pd.Timestamp('20120102'), pd.Timestamp('20120104')] + r = s.searchsorted(v) + e = np.array([1, 2]) + tm.assert_numpy_array_equal(r, e) + + def test_searchsorted_sorter(self): + # GH8490 + s = Series([3, 1, 2]) + r = s.searchsorted([0, 3], sorter=np.argsort(s)) + e = np.array([0, 2]) + tm.assert_numpy_array_equal(r, e) + + def test_is_unique(self): + # GH11946 + s = Series(np.random.randint(0, 10, size=1000)) + self.assertFalse(s.is_unique) + s = Series(np.arange(1000)) + self.assertTrue(s.is_unique) + + def test_sort_values(self): + + ts = self.ts.copy() + + # 9816 deprecated + with tm.assert_produces_warning(FutureWarning): + ts.sort() + + self.assert_numpy_array_equal(ts, self.ts.sort_values()) + self.assert_numpy_array_equal(ts.index, 
self.ts.sort_values().index) + + ts.sort_values(ascending=False, inplace=True) + self.assert_numpy_array_equal(ts, self.ts.sort_values(ascending=False)) + self.assert_numpy_array_equal(ts.index, self.ts.sort_values( + ascending=False).index) + + # GH 5856/5853 + # Series.sort_values operating on a view + df = DataFrame(np.random.randn(10, 4)) + s = df.iloc[:, 0] + + def f(): + s.sort_values(inplace=True) + + self.assertRaises(ValueError, f) + + # test order/sort inplace + # GH6859 + ts1 = self.ts.copy() + ts1.sort_values(ascending=False, inplace=True) + ts2 = self.ts.copy() + ts2.sort_values(ascending=False, inplace=True) + assert_series_equal(ts1, ts2) + + ts1 = self.ts.copy() + ts1 = ts1.sort_values(ascending=False, inplace=False) + ts2 = self.ts.copy() + ts2 = ts.sort_values(ascending=False) + assert_series_equal(ts1, ts2) + + def test_sort_index(self): + rindex = list(self.ts.index) + random.shuffle(rindex) + + random_order = self.ts.reindex(rindex) + sorted_series = random_order.sort_index() + assert_series_equal(sorted_series, self.ts) + + # descending + sorted_series = random_order.sort_index(ascending=False) + assert_series_equal(sorted_series, + self.ts.reindex(self.ts.index[::-1])) + + def test_sort_index_inplace(self): + + # For #11402 + rindex = list(self.ts.index) + random.shuffle(rindex) + + # descending + random_order = self.ts.reindex(rindex) + result = random_order.sort_index(ascending=False, inplace=True) + self.assertIs(result, None, + msg='sort_index() inplace should return None') + assert_series_equal(random_order, self.ts.reindex(self.ts.index[::-1])) + + # ascending + random_order = self.ts.reindex(rindex) + result = random_order.sort_index(ascending=True, inplace=True) + self.assertIs(result, None, + msg='sort_index() inplace should return None') + assert_series_equal(random_order, self.ts) + + def test_sort_API(self): + + # API for 9816 + + # sortlevel + mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) + s = Series([1, 
2], mi) + backwards = s.iloc[[1, 0]] + + res = s.sort_index(level='A') + assert_series_equal(backwards, res) + + # sort_index + rindex = list(self.ts.index) + random.shuffle(rindex) + + random_order = self.ts.reindex(rindex) + sorted_series = random_order.sort_index(level=0) + assert_series_equal(sorted_series, self.ts) + + # compat on axis + sorted_series = random_order.sort_index(axis=0) + assert_series_equal(sorted_series, self.ts) + + self.assertRaises(ValueError, lambda: random_order.sort_values(axis=1)) + + sorted_series = random_order.sort_index(level=0, axis=0) + assert_series_equal(sorted_series, self.ts) + + self.assertRaises(ValueError, + lambda: random_order.sort_index(level=0, axis=1)) + + def test_order(self): + + # 9816 deprecated + with tm.assert_produces_warning(FutureWarning): + self.ts.order() + + ts = self.ts.copy() + ts[:5] = np.NaN + vals = ts.values + + result = ts.sort_values() + self.assertTrue(np.isnan(result[-5:]).all()) + self.assert_numpy_array_equal(result[:-5], np.sort(vals[5:])) + + result = ts.sort_values(na_position='first') + self.assertTrue(np.isnan(result[:5]).all()) + self.assert_numpy_array_equal(result[5:], np.sort(vals[5:])) + + # something object-type + ser = Series(['A', 'B'], [1, 2]) + # no failure + ser.sort_values() + + # ascending=False + ordered = ts.sort_values(ascending=False) + expected = np.sort(ts.valid().values)[::-1] + assert_almost_equal(expected, ordered.valid().values) + ordered = ts.sort_values(ascending=False, na_position='first') + assert_almost_equal(expected, ordered.valid().values) + + def test_nsmallest_nlargest(self): + # float, int, datetime64 (use i8), timedelta64 (same), + # object that are numbers, object that are strings + + base = [3, 2, 1, 2, 5] + + s_list = [ + Series(base, dtype='int8'), + Series(base, dtype='int16'), + Series(base, dtype='int32'), + Series(base, dtype='int64'), + Series(base, dtype='float32'), + Series(base, dtype='float64'), + Series(base, dtype='uint8'), + Series(base,
dtype='uint16'), + Series(base, dtype='uint32'), + Series(base, dtype='uint64'), + Series(base).astype('timedelta64[ns]'), + Series(pd.to_datetime(['2003', '2002', '2001', '2002', '2005'])), + ] + + raising = [ + Series([3., 2, 1, 2, '5'], dtype='object'), + Series([3., 2, 1, 2, 5], dtype='object'), + # not supported on some archs + # Series([3., 2, 1, 2, 5], dtype='complex256'), + Series([3., 2, 1, 2, 5], dtype='complex128'), + ] + + for r in raising: + dt = r.dtype + msg = "Cannot use method 'n(larg|small)est' with dtype %s" % dt + args = 2, len(r), 0, -1 + methods = r.nlargest, r.nsmallest + for method, arg in product(methods, args): + with tm.assertRaisesRegexp(TypeError, msg): + method(arg) + + for s in s_list: + + assert_series_equal(s.nsmallest(2), s.iloc[[2, 1]]) + + assert_series_equal(s.nsmallest(2, keep='last'), s.iloc[[2, 3]]) + with tm.assert_produces_warning(FutureWarning): + assert_series_equal( + s.nsmallest(2, take_last=True), s.iloc[[2, 3]]) + + assert_series_equal(s.nlargest(3), s.iloc[[4, 0, 1]]) + + assert_series_equal(s.nlargest(3, keep='last'), s.iloc[[4, 0, 3]]) + with tm.assert_produces_warning(FutureWarning): + assert_series_equal( + s.nlargest(3, take_last=True), s.iloc[[4, 0, 3]]) + + empty = s.iloc[0:0] + assert_series_equal(s.nsmallest(0), empty) + assert_series_equal(s.nsmallest(-1), empty) + assert_series_equal(s.nlargest(0), empty) + assert_series_equal(s.nlargest(-1), empty) + + assert_series_equal(s.nsmallest(len(s)), s.sort_values()) + assert_series_equal(s.nsmallest(len(s) + 1), s.sort_values()) + assert_series_equal(s.nlargest(len(s)), s.iloc[[4, 0, 1, 3, 2]]) + assert_series_equal(s.nlargest(len(s) + 1), + s.iloc[[4, 0, 1, 3, 2]]) + + s = Series([3., np.nan, 1, 2, 5]) + assert_series_equal(s.nlargest(), s.iloc[[4, 0, 3, 2]]) + assert_series_equal(s.nsmallest(), s.iloc[[2, 3, 0, 4]]) + + msg = 'keep must be either "first", "last"' + with tm.assertRaisesRegexp(ValueError, msg): + s.nsmallest(keep='invalid') + with 
tm.assertRaisesRegexp(ValueError, msg): + s.nlargest(keep='invalid') + + def test_sortlevel(self): + mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) + s = Series([1, 2], mi) + backwards = s.iloc[[1, 0]] + + res = s.sortlevel('A') + assert_series_equal(backwards, res) + + res = s.sortlevel(['A', 'B']) + assert_series_equal(backwards, res) + + res = s.sortlevel('A', sort_remaining=False) + assert_series_equal(s, res) + + res = s.sortlevel(['A', 'B'], sort_remaining=False) + assert_series_equal(s, res) + + def test_map(self): + index, data = tm.getMixedTypeDict() + + source = Series(data['B'], index=data['C']) + target = Series(data['C'][:4], index=data['D'][:4]) + + merged = target.map(source) + + for k, v in compat.iteritems(merged): + self.assertEqual(v, source[target[k]]) + + # input could be a dict + merged = target.map(source.to_dict()) + + for k, v in compat.iteritems(merged): + self.assertEqual(v, source[target[k]]) + + # function + result = self.ts.map(lambda x: x * 2) + self.assert_numpy_array_equal(result, self.ts * 2) + + # GH 10324 + a = Series([1, 2, 3, 4]) + b = Series(["even", "odd", "even", "odd"], dtype="category") + c = Series(["even", "odd", "even", "odd"]) + + exp = Series(["odd", "even", "odd", np.nan], dtype="category") + self.assert_series_equal(a.map(b), exp) + exp = Series(["odd", "even", "odd", np.nan]) + self.assert_series_equal(a.map(c), exp) + + a = Series(['a', 'b', 'c', 'd']) + b = Series([1, 2, 3, 4], + index=pd.CategoricalIndex(['b', 'c', 'd', 'e'])) + c = Series([1, 2, 3, 4], index=Index(['b', 'c', 'd', 'e'])) + + exp = Series([np.nan, 1, 2, 3]) + self.assert_series_equal(a.map(b), exp) + exp = Series([np.nan, 1, 2, 3]) + self.assert_series_equal(a.map(c), exp) + + a = Series(['a', 'b', 'c', 'd']) + b = Series(['B', 'C', 'D', 'E'], dtype='category', + index=pd.CategoricalIndex(['b', 'c', 'd', 'e'])) + c = Series(['B', 'C', 'D', 'E'], index=Index(['b', 'c', 'd', 'e'])) + + exp = Series([np.nan, 'B', 'C', 'D'], 
dtype='category') + self.assert_series_equal(a.map(b), exp) + exp = Series([np.nan, 'B', 'C', 'D']) + self.assert_series_equal(a.map(c), exp) + + def test_map_compat(self): + # related GH 8024 + s = Series([True, True, False], index=[1, 2, 3]) + result = s.map({True: 'foo', False: 'bar'}) + expected = Series(['foo', 'foo', 'bar'], index=[1, 2, 3]) + assert_series_equal(result, expected) + + def test_map_int(self): + left = Series({'a': 1., 'b': 2., 'c': 3., 'd': 4}) + right = Series({1: 11, 2: 22, 3: 33}) + + self.assertEqual(left.dtype, np.float_) + self.assertTrue(issubclass(right.dtype.type, np.integer)) + + merged = left.map(right) + self.assertEqual(merged.dtype, np.float_) + self.assertTrue(isnull(merged['d'])) + self.assertTrue(not isnull(merged['c'])) + + def test_map_type_inference(self): + s = Series(lrange(3)) + s2 = s.map(lambda x: np.where(x == 0, 0, 1)) + self.assertTrue(issubclass(s2.dtype.type, np.integer)) + + def test_map_decimal(self): + from decimal import Decimal + + result = self.series.map(lambda x: Decimal(str(x))) + self.assertEqual(result.dtype, np.object_) + tm.assertIsInstance(result[0], Decimal) + + def test_map_na_exclusion(self): + s = Series([1.5, np.nan, 3, np.nan, 5]) + + result = s.map(lambda x: x * 2, na_action='ignore') + exp = s * 2 + assert_series_equal(result, exp) + + def test_map_dict_with_tuple_keys(self): + ''' + Due to new MultiIndex-ing behaviour in v0.14.0, + dicts with tuple keys passed to map were being + converted to a multi-index, preventing tuple values + from being mapped properly. 
+ ''' + df = pd.DataFrame({'a': [(1, ), (2, ), (3, 4), (5, 6)]}) + label_mappings = {(1, ): 'A', (2, ): 'B', (3, 4): 'A', (5, 6): 'B'} + df['labels'] = df['a'].map(label_mappings) + df['expected_labels'] = pd.Series(['A', 'B', 'A', 'B'], index=df.index) + # All labels should be filled now + tm.assert_series_equal(df['labels'], df['expected_labels'], + check_names=False) + + def test_apply(self): + assert_series_equal(self.ts.apply(np.sqrt), np.sqrt(self.ts)) + + # elementwise-apply + import math + assert_series_equal(self.ts.apply(math.exp), np.exp(self.ts)) + + # how to handle Series result, #2316 + result = self.ts.apply(lambda x: Series( + [x, x ** 2], index=['x', 'x^2'])) + expected = DataFrame({'x': self.ts, 'x^2': self.ts ** 2}) + tm.assert_frame_equal(result, expected) + + # empty series + s = Series(dtype=object, name='foo', index=pd.Index([], name='bar')) + rs = s.apply(lambda x: x) + tm.assert_series_equal(s, rs) + # check all metadata (GH 9322) + self.assertIsNot(s, rs) + self.assertIs(s.index, rs.index) + self.assertEqual(s.dtype, rs.dtype) + self.assertEqual(s.name, rs.name) + + # index but no data + s = Series(index=[1, 2, 3]) + rs = s.apply(lambda x: x) + tm.assert_series_equal(s, rs) + + def test_apply_same_length_inference_bug(self): + s = Series([1, 2]) + f = lambda x: (x, x + 1) + + result = s.apply(f) + expected = s.map(f) + assert_series_equal(result, expected) + + s = Series([1, 2, 3]) + result = s.apply(f) + expected = s.map(f) + assert_series_equal(result, expected) + + def test_apply_dont_convert_dtype(self): + s = Series(np.random.randn(10)) + + f = lambda x: x if x > 0 else np.nan + result = s.apply(f, convert_dtype=False) + self.assertEqual(result.dtype, object) + + def test_apply_args(self): + s = Series(['foo,bar']) + + result = s.apply(str.split, args=(',', )) + self.assertEqual(result[0], ['foo', 'bar']) + tm.assertIsInstance(result[0], list) + + def test_shift_int(self): + ts = self.ts.astype(int) + shifted = ts.shift(1) + expected 
= ts.astype(float).shift(1) + assert_series_equal(shifted, expected) + + def test_shift_categorical(self): + # GH 9416 + s = pd.Series(['a', 'b', 'c', 'd'], dtype='category') + + assert_series_equal(s.iloc[:-1], s.shift(1).shift(-1).valid()) + + sp1 = s.shift(1) + assert_index_equal(s.index, sp1.index) + self.assertTrue(np.all(sp1.values.codes[:1] == -1)) + self.assertTrue(np.all(s.values.codes[:-1] == sp1.values.codes[1:])) + + sn2 = s.shift(-2) + assert_index_equal(s.index, sn2.index) + self.assertTrue(np.all(sn2.values.codes[-2:] == -1)) + self.assertTrue(np.all(s.values.codes[2:] == sn2.values.codes[:-2])) + + assert_index_equal(s.values.categories, sp1.values.categories) + assert_index_equal(s.values.categories, sn2.values.categories) + + def test_reshape_non_2d(self): + # GH 4554 + x = Series(np.random.random(201), name='x') + self.assertTrue(x.reshape(x.shape, ) is x) + + # GH 2719 + a = Series([1, 2, 3, 4]) + result = a.reshape(2, 2) + expected = a.values.reshape(2, 2) + tm.assert_numpy_array_equal(result, expected) + self.assertTrue(type(result) is type(expected)) + + def test_reshape_2d_return_array(self): + x = Series(np.random.random(201), name='x') + result = x.reshape((-1, 1)) + self.assertNotIsInstance(result, Series) + + result2 = np.reshape(x, (-1, 1)) + self.assertNotIsInstance(result2, Series) + + result = x[:, None] + expected = x.reshape((-1, 1)) + assert_almost_equal(result, expected) + + def test_unstack(self): + from numpy import nan + + index = MultiIndex(levels=[['bar', 'foo'], ['one', 'three', 'two']], + labels=[[1, 1, 0, 0], [0, 1, 0, 2]]) + + s = Series(np.arange(4.), index=index) + unstacked = s.unstack() + + expected = DataFrame([[2., nan, 3.], [0., 1., nan]], + index=['bar', 'foo'], + columns=['one', 'three', 'two']) + + assert_frame_equal(unstacked, expected) + + unstacked = s.unstack(level=0) + assert_frame_equal(unstacked, expected.T) + + index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], + labels=[[0, 0, 0, 0, 
0, 0], [0, 1, 2, 0, 1, 2], + [0, 1, 0, 1, 0, 1]]) + s = Series(np.random.randn(6), index=index) + exp_index = MultiIndex(levels=[['one', 'two', 'three'], [0, 1]], + labels=[[0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]]) + expected = DataFrame({'bar': s.values}, index=exp_index).sortlevel(0) + unstacked = s.unstack(0) + assert_frame_equal(unstacked, expected) + + # GH5873 + idx = pd.MultiIndex.from_arrays([[101, 102], [3.5, np.nan]]) + ts = pd.Series([1, 2], index=idx) + left = ts.unstack() + right = DataFrame([[nan, 1], [2, nan]], index=[101, 102], + columns=[nan, 3.5]) + assert_frame_equal(left, right) + + idx = pd.MultiIndex.from_arrays([['cat', 'cat', 'cat', 'dog', 'dog' + ], ['a', 'a', 'b', 'a', 'b'], + [1, 2, 1, 1, np.nan]]) + ts = pd.Series([1.0, 1.1, 1.2, 1.3, 1.4], index=idx) + right = DataFrame([[1.0, 1.3], [1.1, nan], [nan, 1.4], [1.2, nan]], + columns=['cat', 'dog']) + tpls = [('a', 1), ('a', 2), ('b', nan), ('b', 1)] + right.index = pd.MultiIndex.from_tuples(tpls) + assert_frame_equal(ts.unstack(level=0), right) diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py new file mode 100644 index 0000000000000..72f1cac219998 --- /dev/null +++ b/pandas/tests/series/test_combine_concat.py @@ -0,0 +1,196 @@ +# coding=utf-8 +# pylint: disable-msg=E1101,W0612 + +from datetime import datetime + +from numpy import nan +import numpy as np +import pandas as pd + +from pandas import Series, DataFrame + +from pandas import compat +from pandas.util.testing import assert_series_equal +import pandas.util.testing as tm + +from .common import TestData + + +class TestSeriesCombine(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_append(self): + appendedSeries = self.series.append(self.objSeries) + for idx, value in compat.iteritems(appendedSeries): + if idx in self.series.index: + self.assertEqual(value, self.series[idx]) + elif idx in self.objSeries.index: + self.assertEqual(value,
self.objSeries[idx]) + else: + self.fail("orphaned index!") + + self.assertRaises(ValueError, self.ts.append, self.ts, + verify_integrity=True) + + def test_append_many(self): + pieces = [self.ts[:5], self.ts[5:10], self.ts[10:]] + + result = pieces[0].append(pieces[1:]) + assert_series_equal(result, self.ts) + + def test_combine_first(self): + values = tm.makeIntIndex(20).values.astype(float) + series = Series(values, index=tm.makeIntIndex(20)) + + series_copy = series * 2 + series_copy[::2] = np.NaN + + # nothing used from the input + combined = series.combine_first(series_copy) + + self.assert_numpy_array_equal(combined, series) + + # Holes filled from input + combined = series_copy.combine_first(series) + self.assertTrue(np.isfinite(combined).all()) + + self.assert_numpy_array_equal(combined[::2], series[::2]) + self.assert_numpy_array_equal(combined[1::2], series_copy[1::2]) + + # mixed types + index = tm.makeStringIndex(20) + floats = Series(tm.randn(20), index=index) + strings = Series(tm.makeStringIndex(10), index=index[::2]) + + combined = strings.combine_first(floats) + + tm.assert_dict_equal(strings, combined, compare_keys=False) + tm.assert_dict_equal(floats[1::2], combined, compare_keys=False) + + # corner case + s = Series([1., 2, 3], index=[0, 1, 2]) + result = s.combine_first(Series([], index=[])) + assert_series_equal(s, result) + + def test_update(self): + s = Series([1.5, nan, 3., 4., nan]) + s2 = Series([nan, 3.5, nan, 5.]) + s.update(s2) + + expected = Series([1.5, 3.5, 3., 5., np.nan]) + assert_series_equal(s, expected) + + # GH 3217 + df = DataFrame([{"a": 1}, {"a": 3, "b": 2}]) + df['c'] = np.nan + + # this will fail as long as series is a sub-class of ndarray + # df['c'].update(Series(['foo'],index=[0])) ##### + + def test_concat_empty_series_dtypes_roundtrips(self): + + # round-tripping with self & like self + dtypes = map(np.dtype, ['float64', 'int8', 'uint8', 'bool', 'm8[ns]', + 'M8[ns]']) + + for dtype in dtypes: + 
self.assertEqual(pd.concat([Series(dtype=dtype)]).dtype, dtype) + self.assertEqual(pd.concat([Series(dtype=dtype), + Series(dtype=dtype)]).dtype, dtype) + + def int_result_type(dtype, dtype2): + typs = set([dtype.kind, dtype2.kind]) + if not len(typs - set(['i', 'u', 'b'])) and (dtype.kind == 'i' or + dtype2.kind == 'i'): + return 'i' + elif not len(typs - set(['u', 'b'])) and (dtype.kind == 'u' or + dtype2.kind == 'u'): + return 'u' + return None + + def float_result_type(dtype, dtype2): + typs = set([dtype.kind, dtype2.kind]) + if not len(typs - set(['f', 'i', 'u'])) and (dtype.kind == 'f' or + dtype2.kind == 'f'): + return 'f' + return None + + def get_result_type(dtype, dtype2): + result = float_result_type(dtype, dtype2) + if result is not None: + return result + result = int_result_type(dtype, dtype2) + if result is not None: + return result + return 'O' + + for dtype in dtypes: + for dtype2 in dtypes: + if dtype == dtype2: + continue + + expected = get_result_type(dtype, dtype2) + result = pd.concat([Series(dtype=dtype), Series(dtype=dtype2) + ]).dtype + self.assertEqual(result.kind, expected) + + def test_concat_empty_series_dtypes(self): + + # bools + self.assertEqual(pd.concat([Series(dtype=np.bool_), + Series(dtype=np.int32)]).dtype, np.int32) + self.assertEqual(pd.concat([Series(dtype=np.bool_), + Series(dtype=np.float32)]).dtype, + np.object_) + + # datetimelike + self.assertEqual(pd.concat([Series(dtype='m8[ns]'), + Series(dtype=np.bool)]).dtype, np.object_) + self.assertEqual(pd.concat([Series(dtype='m8[ns]'), + Series(dtype=np.int64)]).dtype, np.object_) + self.assertEqual(pd.concat([Series(dtype='M8[ns]'), + Series(dtype=np.bool)]).dtype, np.object_) + self.assertEqual(pd.concat([Series(dtype='M8[ns]'), + Series(dtype=np.int64)]).dtype, np.object_) + self.assertEqual(pd.concat([Series(dtype='M8[ns]'), + Series(dtype=np.bool_), + Series(dtype=np.int64)]).dtype, np.object_) + + # categorical + self.assertEqual(pd.concat([Series(dtype='category'), + 
Series(dtype='category')]).dtype, + 'category') + self.assertEqual(pd.concat([Series(dtype='category'), + Series(dtype='float64')]).dtype, + np.object_) + self.assertEqual(pd.concat([Series(dtype='category'), + Series(dtype='object')]).dtype, 'category') + + # sparse + result = pd.concat([Series(dtype='float64').to_sparse(), Series( + dtype='float64').to_sparse()]) + self.assertEqual(result.dtype, np.float64) + self.assertEqual(result.ftype, 'float64:sparse') + + result = pd.concat([Series(dtype='float64').to_sparse(), Series( + dtype='float64')]) + self.assertEqual(result.dtype, np.float64) + self.assertEqual(result.ftype, 'float64:sparse') + + result = pd.concat([Series(dtype='float64').to_sparse(), Series( + dtype='object')]) + self.assertEqual(result.dtype, np.object_) + self.assertEqual(result.ftype, 'object:dense') + + def test_combine_first_dt64(self): + from pandas.tseries.tools import to_datetime + s0 = to_datetime(Series(["2010", np.NaN])) + s1 = to_datetime(Series([np.NaN, "2011"])) + rs = s0.combine_first(s1) + xp = to_datetime(Series(['2010', '2011'])) + assert_series_equal(rs, xp) + + s0 = to_datetime(Series(["2010", np.NaN])) + s1 = Series([np.NaN, "2011"]) + rs = s0.combine_first(s1) + xp = Series([datetime(2010, 1, 1), '2011']) + assert_series_equal(rs, xp) diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py new file mode 100644 index 0000000000000..9133cc2c5a020 --- /dev/null +++ b/pandas/tests/series/test_constructors.py @@ -0,0 +1,685 @@ +# coding=utf-8 +# pylint: disable-msg=E1101,W0612 + +from datetime import datetime, timedelta + +from numpy import nan +import numpy as np +import numpy.ma as ma +import pandas as pd + +from pandas import Index, Series, isnull, date_range, period_range +from pandas.core.index import MultiIndex +from pandas.tseries.index import Timestamp, DatetimeIndex +import pandas.core.common as com +import pandas.lib as lib + +from pandas.compat import lrange, range, zip, 
OrderedDict, long +from pandas import compat +from pandas.util.testing import assert_series_equal, assert_almost_equal +import pandas.util.testing as tm + +from .common import TestData + + +class TestSeriesConstructors(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_scalar_conversion(self): + + # Pass in scalar is disabled + scalar = Series(0.5) + self.assertNotIsInstance(scalar, float) + + # coercion + self.assertEqual(float(Series([1.])), 1.0) + self.assertEqual(int(Series([1.])), 1) + self.assertEqual(long(Series([1.])), 1) + + def test_TimeSeries_deprecation(self): + + # deprecation TimeSeries, #10890 + with tm.assert_produces_warning(FutureWarning): + pd.TimeSeries(1, index=date_range('20130101', periods=3)) + + def test_constructor(self): + # Recognize TimeSeries + with tm.assert_produces_warning(FutureWarning): + self.assertTrue(self.ts.is_time_series) + self.assertTrue(self.ts.index.is_all_dates) + + # Pass in Series + derived = Series(self.ts) + with tm.assert_produces_warning(FutureWarning): + self.assertTrue(derived.is_time_series) + self.assertTrue(derived.index.is_all_dates) + + self.assertTrue(tm.equalContents(derived.index, self.ts.index)) + # Ensure new index is not created + self.assertEqual(id(self.ts.index), id(derived.index)) + + # Mixed type Series + mixed = Series(['hello', np.NaN], index=[0, 1]) + self.assertEqual(mixed.dtype, np.object_) + self.assertIs(mixed[1], np.NaN) + + with tm.assert_produces_warning(FutureWarning): + self.assertFalse(self.empty.is_time_series) + self.assertFalse(self.empty.index.is_all_dates) + with tm.assert_produces_warning(FutureWarning): + self.assertFalse(Series({}).is_time_series) + self.assertFalse(Series({}).index.is_all_dates) + self.assertRaises(Exception, Series, np.random.randn(3, 3), + index=np.arange(3)) + + mixed.name = 'Series' + rs = Series(mixed).name + xp = 'Series' + self.assertEqual(rs, xp) + + # raise on MultiIndex GH4187 + m = MultiIndex.from_arrays([[1, 2], [3, 4]]) + 
self.assertRaises(NotImplementedError, Series, m) + + def test_constructor_empty(self): + empty = Series() + empty2 = Series([]) + + # these are Index() and RangeIndex() which don't compare type equal + # but are just .equals + assert_series_equal(empty, empty2, check_index_type=False) + + empty = Series(index=lrange(10)) + empty2 = Series(np.nan, index=lrange(10)) + assert_series_equal(empty, empty2) + + def test_constructor_series(self): + index1 = ['d', 'b', 'a', 'c'] + index2 = sorted(index1) + s1 = Series([4, 7, -5, 3], index=index1) + s2 = Series(s1, index=index2) + + assert_series_equal(s2, s1.sort_index()) + + def test_constructor_iterator(self): + + expected = Series(list(range(10)), dtype='int64') + result = Series(range(10), dtype='int64') + assert_series_equal(result, expected) + + def test_constructor_generator(self): + gen = (i for i in range(10)) + + result = Series(gen) + exp = Series(lrange(10)) + assert_series_equal(result, exp) + + gen = (i for i in range(10)) + result = Series(gen, index=lrange(10, 20)) + exp.index = lrange(10, 20) + assert_series_equal(result, exp) + + def test_constructor_map(self): + # GH8909 + m = map(lambda x: x, range(10)) + + result = Series(m) + exp = Series(lrange(10)) + assert_series_equal(result, exp) + + m = map(lambda x: x, range(10)) + result = Series(m, index=lrange(10, 20)) + exp.index = lrange(10, 20) + assert_series_equal(result, exp) + + def test_constructor_categorical(self): + cat = pd.Categorical([0, 1, 2, 0, 1, 2], ['a', 'b', 'c'], + fastpath=True) + res = Series(cat) + self.assertTrue(res.values.equals(cat)) + + def test_constructor_maskedarray(self): + data = ma.masked_all((3, ), dtype=float) + result = Series(data) + expected = Series([nan, nan, nan]) + assert_series_equal(result, expected) + + data[0] = 0.0 + data[2] = 2.0 + index = ['a', 'b', 'c'] + result = Series(data, index=index) + expected = Series([0.0, nan, 2.0], index=index) + assert_series_equal(result, expected) + + data[1] = 1.0 + result =
Series(data, index=index) + expected = Series([0.0, 1.0, 2.0], index=index) + assert_series_equal(result, expected) + + data = ma.masked_all((3, ), dtype=int) + result = Series(data) + expected = Series([nan, nan, nan], dtype=float) + assert_series_equal(result, expected) + + data[0] = 0 + data[2] = 2 + index = ['a', 'b', 'c'] + result = Series(data, index=index) + expected = Series([0, nan, 2], index=index, dtype=float) + assert_series_equal(result, expected) + + data[1] = 1 + result = Series(data, index=index) + expected = Series([0, 1, 2], index=index, dtype=int) + assert_series_equal(result, expected) + + data = ma.masked_all((3, ), dtype=bool) + result = Series(data) + expected = Series([nan, nan, nan], dtype=object) + assert_series_equal(result, expected) + + data[0] = True + data[2] = False + index = ['a', 'b', 'c'] + result = Series(data, index=index) + expected = Series([True, nan, False], index=index, dtype=object) + assert_series_equal(result, expected) + + data[1] = True + result = Series(data, index=index) + expected = Series([True, True, False], index=index, dtype=bool) + assert_series_equal(result, expected) + + from pandas import tslib + data = ma.masked_all((3, ), dtype='M8[ns]') + result = Series(data) + expected = Series([tslib.iNaT, tslib.iNaT, tslib.iNaT], dtype='M8[ns]') + assert_series_equal(result, expected) + + data[0] = datetime(2001, 1, 1) + data[2] = datetime(2001, 1, 3) + index = ['a', 'b', 'c'] + result = Series(data, index=index) + expected = Series([datetime(2001, 1, 1), tslib.iNaT, + datetime(2001, 1, 3)], index=index, dtype='M8[ns]') + assert_series_equal(result, expected) + + data[1] = datetime(2001, 1, 2) + result = Series(data, index=index) + expected = Series([datetime(2001, 1, 1), datetime(2001, 1, 2), + datetime(2001, 1, 3)], index=index, dtype='M8[ns]') + assert_series_equal(result, expected) + + def test_constructor_default_index(self): + s = Series([0, 1, 2]) + assert_almost_equal(s.index, np.arange(3)) + + def 
test_constructor_corner(self): + df = tm.makeTimeDataFrame() + objs = [df, df] + s = Series(objs, index=[0, 1]) + tm.assertIsInstance(s, Series) + + def test_constructor_sanitize(self): + s = Series(np.array([1., 1., 8.]), dtype='i8') + self.assertEqual(s.dtype, np.dtype('i8')) + + s = Series(np.array([1., 1., np.nan]), copy=True, dtype='i8') + self.assertEqual(s.dtype, np.dtype('f8')) + + def test_constructor_pass_none(self): + s = Series(None, index=lrange(5)) + self.assertEqual(s.dtype, np.float64) + + s = Series(None, index=lrange(5), dtype=object) + self.assertEqual(s.dtype, np.object_) + + # GH 7431 + # inference on the index + s = Series(index=np.array([None])) + expected = Series(index=Index([None])) + assert_series_equal(s, expected) + + def test_constructor_cast(self): + self.assertRaises(ValueError, Series, ['a', 'b', 'c'], dtype=float) + + def test_constructor_dtype_nocast(self): + # GH 1572 + s = Series([1, 2, 3]) + + s2 = Series(s, dtype=np.int64) + + s2[1] = 5 + self.assertEqual(s[1], 5) + + def test_constructor_datelike_coercion(self): + + # GH 9477 + # incorrectly inferring datetimelike when object dtype is + # specified + s = Series([Timestamp('20130101'), 'NOV'], dtype=object) + self.assertEqual(s.iloc[0], Timestamp('20130101')) + self.assertEqual(s.iloc[1], 'NOV') + self.assertTrue(s.dtype == object) + + # the dtype was being reset on the slicing and re-inferred to datetime + # even though the blocks are mixed + belly = '216 3T19'.split() + wing1 = '2T15 4H19'.split() + wing2 = '416 4T20'.split() + mat = pd.to_datetime('2016-01-22 2019-09-07'.split()) + df = pd.DataFrame( + {'wing1': wing1, + 'wing2': wing2, + 'mat': mat}, index=belly) + + result = df.loc['3T19'] + self.assertTrue(result.dtype == object) + result = df.loc['216'] + self.assertTrue(result.dtype == object) + + def test_constructor_dtype_datetime64(self): + import pandas.tslib as tslib + + s = Series(tslib.iNaT, dtype='M8[ns]', index=lrange(5)) + 
self.assertTrue(isnull(s).all()) + + # in theory this should be all nulls, but since + # we are not specifying a dtype, it is ambiguous + s = Series(tslib.iNaT, index=lrange(5)) + self.assertFalse(isnull(s).all()) + + s = Series(nan, dtype='M8[ns]', index=lrange(5)) + self.assertTrue(isnull(s).all()) + + s = Series([datetime(2001, 1, 2, 0, 0), tslib.iNaT], dtype='M8[ns]') + self.assertTrue(isnull(s[1])) + self.assertEqual(s.dtype, 'M8[ns]') + + s = Series([datetime(2001, 1, 2, 0, 0), nan], dtype='M8[ns]') + self.assertTrue(isnull(s[1])) + self.assertEqual(s.dtype, 'M8[ns]') + + # GH3416 + dates = [ + np.datetime64(datetime(2013, 1, 1)), + np.datetime64(datetime(2013, 1, 2)), + np.datetime64(datetime(2013, 1, 3)), + ] + + s = Series(dates) + self.assertEqual(s.dtype, 'M8[ns]') + + s.ix[0] = np.nan + self.assertEqual(s.dtype, 'M8[ns]') + + # invalid astypes + for t in ['s', 'D', 'us', 'ms']: + self.assertRaises(TypeError, s.astype, 'M8[%s]' % t) + + # GH3414 related + self.assertRaises(TypeError, lambda x: Series( + Series(dates).astype('int') / 1000000, dtype='M8[ms]')) + self.assertRaises(TypeError, + lambda x: Series(dates, dtype='datetime64')) + + # invalid dates can be held as object + result = Series([datetime(2, 1, 1)]) + self.assertEqual(result[0], datetime(2, 1, 1, 0, 0)) + + result = Series([datetime(3000, 1, 1)]) + self.assertEqual(result[0], datetime(3000, 1, 1, 0, 0)) + + # don't mix types + result = Series([Timestamp('20130101'), 1], index=['a', 'b']) + self.assertEqual(result['a'], Timestamp('20130101')) + self.assertEqual(result['b'], 1) + + # GH6529 + # coerce datetime64 non-ns properly + dates = date_range('01-Jan-2015', '01-Dec-2015', freq='M') + values2 = dates.view(np.ndarray).astype('datetime64[ns]') + expected = Series(values2, dates) + + for dtype in ['s', 'D', 'ms', 'us', 'ns']: + values1 = dates.view(np.ndarray).astype('M8[{0}]'.format(dtype)) + result = Series(values1, dates) + assert_series_equal(result, expected) + + # leave datetime.date 
alone + dates2 = np.array([d.date() for d in dates.to_pydatetime()], + dtype=object) + series1 = Series(dates2, dates) + self.assert_numpy_array_equal(series1.values, dates2) + self.assertEqual(series1.dtype, object) + + # these will correctly infer a datetime + s = Series([None, pd.NaT, '2013-08-05 15:30:00.000001']) + self.assertEqual(s.dtype, 'datetime64[ns]') + s = Series([np.nan, pd.NaT, '2013-08-05 15:30:00.000001']) + self.assertEqual(s.dtype, 'datetime64[ns]') + s = Series([pd.NaT, None, '2013-08-05 15:30:00.000001']) + self.assertEqual(s.dtype, 'datetime64[ns]') + s = Series([pd.NaT, np.nan, '2013-08-05 15:30:00.000001']) + self.assertEqual(s.dtype, 'datetime64[ns]') + + # tz-aware (UTC and other tz's) + # GH 8411 + dr = date_range('20130101', periods=3) + self.assertTrue(Series(dr).iloc[0].tz is None) + dr = date_range('20130101', periods=3, tz='UTC') + self.assertTrue(str(Series(dr).iloc[0].tz) == 'UTC') + dr = date_range('20130101', periods=3, tz='US/Eastern') + self.assertTrue(str(Series(dr).iloc[0].tz) == 'US/Eastern') + + # non-convertible + s = Series([1479596223000, -1479590, pd.NaT]) + self.assertTrue(s.dtype == 'object') + self.assertTrue(s[2] is pd.NaT) + self.assertTrue('NaT' in str(s)) + + # if we passed a NaT it remains + s = Series([datetime(2010, 1, 1), datetime(2, 1, 1), pd.NaT]) + self.assertTrue(s.dtype == 'object') + self.assertTrue(s[2] is pd.NaT) + self.assertTrue('NaT' in str(s)) + + # if we passed a nan it remains + s = Series([datetime(2010, 1, 1), datetime(2, 1, 1), np.nan]) + self.assertTrue(s.dtype == 'object') + self.assertTrue(s[2] is np.nan) + self.assertTrue('NaN' in str(s)) + + def test_constructor_with_datetime_tz(self): + + # 8260 + # support datetime64 with tz + + dr = date_range('20130101', periods=3, tz='US/Eastern') + s = Series(dr) + self.assertTrue(s.dtype.name == 'datetime64[ns, US/Eastern]') + self.assertTrue(s.dtype == 'datetime64[ns, US/Eastern]') + self.assertTrue(com.is_datetime64tz_dtype(s.dtype)) + 
self.assertTrue('datetime64[ns, US/Eastern]' in str(s)) + + # export + result = s.values + self.assertIsInstance(result, np.ndarray) + self.assertTrue(result.dtype == 'datetime64[ns]') + self.assertTrue(dr.equals(pd.DatetimeIndex(result).tz_localize( + 'UTC').tz_convert(tz=s.dt.tz))) + + # indexing + result = s.iloc[0] + self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern', offset='D')) + result = s[0] + self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern', offset='D')) + + result = s[Series([True, True, False], index=s.index)] + assert_series_equal(result, s[0:2]) + + result = s.iloc[0:1] + assert_series_equal(result, Series(dr[0:1])) + + # concat + result = pd.concat([s.iloc[0:1], s.iloc[1:]]) + assert_series_equal(result, s) + + # astype + result = s.astype(object) + expected = Series(DatetimeIndex(s._values).asobject) + assert_series_equal(result, expected) + + result = Series(s.values).dt.tz_localize('UTC').dt.tz_convert(s.dt.tz) + assert_series_equal(result, s) + + # astype - datetime64[ns, tz] + result = Series(s.values).astype('datetime64[ns, US/Eastern]') + assert_series_equal(result, s) + + result = Series(s.values).astype(s.dtype) + assert_series_equal(result, s) + + result = s.astype('datetime64[ns, CET]') + expected = Series(date_range('20130101 06:00:00', periods=3, tz='CET')) + assert_series_equal(result, expected) + + # short str + self.assertTrue('datetime64[ns, US/Eastern]' in str(s)) + + # formatting with NaT + result = s.shift() + self.assertTrue('datetime64[ns, US/Eastern]' in str(result)) + self.assertTrue('NaT' in str(result)) + + # long str + t = Series(date_range('20130101', periods=1000, tz='US/Eastern')) + self.assertTrue('datetime64[ns, US/Eastern]' in str(t)) + + result = pd.DatetimeIndex(s, freq='infer') + tm.assert_index_equal(result, dr) + + # inference + s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'), + pd.Timestamp('2013-01-02 14:00:00-0800', 
tz='US/Pacific')]) + self.assertTrue(s.dtype == 'datetime64[ns, US/Pacific]') + self.assertTrue(lib.infer_dtype(s) == 'datetime64') + + s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'), + pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Eastern')]) + self.assertTrue(s.dtype == 'object') + self.assertTrue(lib.infer_dtype(s) == 'datetime') + + def test_constructor_periodindex(self): + # GH7932 + # converting a PeriodIndex when put in a Series + + pi = period_range('20130101', periods=5, freq='D') + s = Series(pi) + expected = Series(pi.asobject) + assert_series_equal(s, expected) + + def test_constructor_dict(self): + d = {'a': 0., 'b': 1., 'c': 2.} + result = Series(d, index=['b', 'c', 'd', 'a']) + expected = Series([1, 2, nan, 0], index=['b', 'c', 'd', 'a']) + assert_series_equal(result, expected) + + pidx = tm.makePeriodIndex(100) + d = {pidx[0]: 0, pidx[1]: 1} + result = Series(d, index=pidx) + expected = Series(np.nan, pidx) + expected.ix[0] = 0 + expected.ix[1] = 1 + assert_series_equal(result, expected) + + def test_constructor_dict_multiindex(self): + check = lambda result, expected: tm.assert_series_equal( + result, expected, check_dtype=True, check_series_type=True) + d = {('a', 'a'): 0., ('b', 'a'): 1., ('b', 'c'): 2.} + _d = sorted(d.items()) + ser = Series(d) + expected = Series([x[1] for x in _d], + index=MultiIndex.from_tuples([x[0] for x in _d])) + check(ser, expected) + + d['z'] = 111. 
+ _d.insert(0, ('z', d['z'])) + ser = Series(d) + expected = Series([x[1] for x in _d], index=Index( + [x[0] for x in _d], tupleize_cols=False)) + ser = ser.reindex(index=expected.index) + check(ser, expected) + + def test_constructor_subclass_dict(self): + data = tm.TestSubDict((x, 10.0 * x) for x in range(10)) + series = Series(data) + refseries = Series(dict(compat.iteritems(data))) + assert_series_equal(refseries, series) + + def test_constructor_dict_datetime64_index(self): + # GH 9456 + + dates_as_str = ['1984-02-19', '1988-11-06', '1989-12-03', '1990-03-15'] + values = [42544017.198965244, 1234565, 40512335.181958228, -1] + + def create_data(constructor): + return dict(zip((constructor(x) for x in dates_as_str), values)) + + data_datetime64 = create_data(np.datetime64) + data_datetime = create_data(lambda x: datetime.strptime(x, '%Y-%m-%d')) + data_Timestamp = create_data(Timestamp) + + expected = Series(values, (Timestamp(x) for x in dates_as_str)) + + result_datetime64 = Series(data_datetime64) + result_datetime = Series(data_datetime) + result_Timestamp = Series(data_Timestamp) + + assert_series_equal(result_datetime64, expected) + assert_series_equal(result_datetime, expected) + assert_series_equal(result_Timestamp, expected) + + def test_orderedDict_ctor(self): + # GH3283 + import pandas + import random + data = OrderedDict([('col%s' % i, random.random()) for i in range(12)]) + s = pandas.Series(data) + self.assertTrue(all(s.values == list(data.values()))) + + def test_orderedDict_subclass_ctor(self): + # GH3283 + import pandas + import random + + class A(OrderedDict): + pass + + data = A([('col%s' % i, random.random()) for i in range(12)]) + s = pandas.Series(data) + self.assertTrue(all(s.values == list(data.values()))) + + def test_constructor_list_of_tuples(self): + data = [(1, 1), (2, 2), (2, 3)] + s = Series(data) + self.assertEqual(list(s), data) + + def test_constructor_tuple_of_tuples(self): + data = ((1, 1), (2, 2), (2, 3)) + s = Series(data) + 
self.assertEqual(tuple(s), data) + + def test_constructor_set(self): + values = set([1, 2, 3, 4, 5]) + self.assertRaises(TypeError, Series, values) + values = frozenset(values) + self.assertRaises(TypeError, Series, values) + + def test_fromDict(self): + data = {'a': 0, 'b': 1, 'c': 2, 'd': 3} + + series = Series(data) + self.assertTrue(tm.is_sorted(series.index)) + + data = {'a': 0, 'b': '1', 'c': '2', 'd': datetime.now()} + series = Series(data) + self.assertEqual(series.dtype, np.object_) + + data = {'a': 0, 'b': '1', 'c': '2', 'd': '3'} + series = Series(data) + self.assertEqual(series.dtype, np.object_) + + data = {'a': '0', 'b': '1'} + series = Series(data, dtype=float) + self.assertEqual(series.dtype, np.float64) + + def test_fromValue(self): + + nans = Series(np.NaN, index=self.ts.index) + self.assertEqual(nans.dtype, np.float_) + self.assertEqual(len(nans), len(self.ts)) + + strings = Series('foo', index=self.ts.index) + self.assertEqual(strings.dtype, np.object_) + self.assertEqual(len(strings), len(self.ts)) + + d = datetime.now() + dates = Series(d, index=self.ts.index) + self.assertEqual(dates.dtype, 'M8[ns]') + self.assertEqual(len(dates), len(self.ts)) + + def test_constructor_dtype_timedelta64(self): + + # basic + td = Series([timedelta(days=i) for i in range(3)]) + self.assertEqual(td.dtype, 'timedelta64[ns]') + + td = Series([timedelta(days=1)]) + self.assertEqual(td.dtype, 'timedelta64[ns]') + + td = Series([timedelta(days=1), timedelta(days=2), np.timedelta64( + 1, 's')]) + + self.assertEqual(td.dtype, 'timedelta64[ns]') + + # mixed with NaT + from pandas import tslib + td = Series([timedelta(days=1), tslib.NaT], dtype='m8[ns]') + self.assertEqual(td.dtype, 'timedelta64[ns]') + + td = Series([timedelta(days=1), np.nan], dtype='m8[ns]') + self.assertEqual(td.dtype, 'timedelta64[ns]') + + td = Series([np.timedelta64(300000000), pd.NaT], dtype='m8[ns]') + self.assertEqual(td.dtype, 'timedelta64[ns]') + + # improved inference + # GH5689 + td = 
Series([np.timedelta64(300000000), pd.NaT]) + self.assertEqual(td.dtype, 'timedelta64[ns]') + + td = Series([np.timedelta64(300000000), tslib.iNaT]) + self.assertEqual(td.dtype, 'timedelta64[ns]') + + td = Series([np.timedelta64(300000000), np.nan]) + self.assertEqual(td.dtype, 'timedelta64[ns]') + + td = Series([pd.NaT, np.timedelta64(300000000)]) + self.assertEqual(td.dtype, 'timedelta64[ns]') + + td = Series([np.timedelta64(1, 's')]) + self.assertEqual(td.dtype, 'timedelta64[ns]') + + # these are frequency conversion astypes + # for t in ['s', 'D', 'us', 'ms']: + # self.assertRaises(TypeError, td.astype, 'm8[%s]' % t) + + # valid astype + td.astype('int64') + + # invalid casting + self.assertRaises(TypeError, td.astype, 'int32') + + # this is an invalid casting + def f(): + Series([timedelta(days=1), 'foo'], dtype='m8[ns]') + + self.assertRaises(Exception, f) + + # leave as object here + td = Series([timedelta(days=i) for i in range(3)] + ['foo']) + self.assertEqual(td.dtype, 'object') + + # these will correctly infer a timedelta + s = Series([None, pd.NaT, '1 Day']) + self.assertEqual(s.dtype, 'timedelta64[ns]') + s = Series([np.nan, pd.NaT, '1 Day']) + self.assertEqual(s.dtype, 'timedelta64[ns]') + s = Series([pd.NaT, None, '1 Day']) + self.assertEqual(s.dtype, 'timedelta64[ns]') + s = Series([pd.NaT, np.nan, '1 Day']) + self.assertEqual(s.dtype, 'timedelta64[ns]') diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py new file mode 100644 index 0000000000000..c6593d403ffcc --- /dev/null +++ b/pandas/tests/series/test_datetime_values.py @@ -0,0 +1,395 @@ +# coding=utf-8 +# pylint: disable-msg=E1101,W0612 + +from datetime import datetime + +import numpy as np +import pandas as pd + +from pandas import (Index, Series, DataFrame, bdate_range, + date_range, period_range, timedelta_range) +from pandas.tseries.period import PeriodIndex +from pandas.tseries.index import Timestamp, DatetimeIndex +from pandas.tseries.tdi 
import TimedeltaIndex +import pandas.core.common as com + +from pandas.util.testing import assert_series_equal +import pandas.util.testing as tm + +from .common import TestData + + +class TestSeriesDatetimeValues(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_dt_namespace_accessor(self): + + # GH 7207 + # test .dt namespace accessor + + ok_for_base = ['year', 'month', 'day', 'hour', 'minute', 'second', + 'weekofyear', 'week', 'dayofweek', 'weekday', + 'dayofyear', 'quarter', 'freq', 'days_in_month', + 'daysinmonth'] + ok_for_period = ok_for_base + ['qyear'] + ok_for_period_methods = ['strftime'] + ok_for_dt = ok_for_base + ['date', 'time', 'microsecond', 'nanosecond', + 'is_month_start', 'is_month_end', + 'is_quarter_start', 'is_quarter_end', + 'is_year_start', 'is_year_end', 'tz'] + ok_for_dt_methods = ['to_period', 'to_pydatetime', 'tz_localize', + 'tz_convert', 'normalize', 'strftime', 'round', + 'floor', 'ceil'] + ok_for_td = ['days', 'seconds', 'microseconds', 'nanoseconds'] + ok_for_td_methods = ['components', 'to_pytimedelta', 'total_seconds', + 'round', 'floor', 'ceil'] + + def get_expected(s, name): + result = getattr(Index(s._values), prop) + if isinstance(result, np.ndarray): + if com.is_integer_dtype(result): + result = result.astype('int64') + elif not com.is_list_like(result): + return result + return Series(result, index=s.index) + + def compare(s, name): + a = getattr(s.dt, prop) + b = get_expected(s, prop) + if not (com.is_list_like(a) and com.is_list_like(b)): + self.assertEqual(a, b) + else: + tm.assert_series_equal(a, b) + + # datetimeindex + for s in [Series(date_range('20130101', periods=5)), + Series(date_range('20130101', periods=5, freq='s')), + Series(date_range('20130101 00:00:00', periods=5, freq='ms')) + ]: + for prop in ok_for_dt: + # we test freq below + if prop != 'freq': + compare(s, prop) + + for prop in ok_for_dt_methods: + getattr(s.dt, prop) + + result = s.dt.to_pydatetime() + 
self.assertIsInstance(result, np.ndarray) + self.assertTrue(result.dtype == object) + + result = s.dt.tz_localize('US/Eastern') + expected = Series( + DatetimeIndex(s.values).tz_localize('US/Eastern'), + index=s.index) + tm.assert_series_equal(result, expected) + + tz_result = result.dt.tz + self.assertEqual(str(tz_result), 'US/Eastern') + freq_result = s.dt.freq + self.assertEqual(freq_result, DatetimeIndex(s.values, + freq='infer').freq) + + # let's localize, then convert + result = s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern') + expected = Series( + DatetimeIndex(s.values).tz_localize('UTC').tz_convert( + 'US/Eastern'), index=s.index) + tm.assert_series_equal(result, expected) + + # round + s = Series(pd.to_datetime( + ['2012-01-01 13:00:00', '2012-01-01 12:01:00', + '2012-01-01 08:00:00'])) + result = s.dt.round('D') + expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02', + '2012-01-01'])) + tm.assert_series_equal(result, expected) + + # round with tz + result = s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern').dt.round( + 'D') + expected = Series(pd.to_datetime(['2012-01-01', '2012-01-01', + '2012-01-01']).tz_localize( + 'US/Eastern')) + tm.assert_series_equal(result, expected) + + # floor + s = Series(pd.to_datetime( + ['2012-01-01 13:00:00', '2012-01-01 12:01:00', + '2012-01-01 08:00:00'])) + result = s.dt.floor('D') + expected = Series(pd.to_datetime(['2012-01-01', '2012-01-01', + '2012-01-01'])) + tm.assert_series_equal(result, expected) + + # ceil + s = Series(pd.to_datetime( + ['2012-01-01 13:00:00', '2012-01-01 12:01:00', + '2012-01-01 08:00:00'])) + result = s.dt.ceil('D') + expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02', + '2012-01-02'])) + tm.assert_series_equal(result, expected) + + # datetimeindex with tz + s = Series(date_range('20130101', periods=5, tz='US/Eastern')) + for prop in ok_for_dt: + + # we test freq below + if prop != 'freq': + compare(s, prop) + + for prop in ok_for_dt_methods: + getattr(s.dt, prop) + + 
result = s.dt.to_pydatetime() + self.assertIsInstance(result, np.ndarray) + self.assertTrue(result.dtype == object) + + result = s.dt.tz_convert('CET') + expected = Series(s._values.tz_convert('CET'), index=s.index) + tm.assert_series_equal(result, expected) + + tz_result = result.dt.tz + self.assertEqual(str(tz_result), 'CET') + freq_result = s.dt.freq + self.assertEqual(freq_result, DatetimeIndex(s.values, + freq='infer').freq) + + # timedeltaindex + for s in [Series( + timedelta_range('1 day', periods=5), index=list('abcde')), + Series(timedelta_range('1 day 01:23:45', periods=5, freq='s')), + Series(timedelta_range('2 days 01:23:45.012345', periods=5, + freq='ms'))]: + for prop in ok_for_td: + # we test freq below + if prop != 'freq': + compare(s, prop) + + for prop in ok_for_td_methods: + getattr(s.dt, prop) + + result = s.dt.components + self.assertIsInstance(result, DataFrame) + tm.assert_index_equal(result.index, s.index) + + result = s.dt.to_pytimedelta() + self.assertIsInstance(result, np.ndarray) + self.assertTrue(result.dtype == object) + + result = s.dt.total_seconds() + self.assertIsInstance(result, pd.Series) + self.assertTrue(result.dtype == 'float64') + + freq_result = s.dt.freq + self.assertEqual(freq_result, TimedeltaIndex(s.values, + freq='infer').freq) + + # both + index = date_range('20130101', periods=3, freq='D') + s = Series(date_range('20140204', periods=3, freq='s'), index=index) + tm.assert_series_equal(s.dt.year, Series( + np.array( + [2014, 2014, 2014], dtype='int64'), index=index)) + tm.assert_series_equal(s.dt.month, Series( + np.array( + [2, 2, 2], dtype='int64'), index=index)) + tm.assert_series_equal(s.dt.second, Series( + np.array( + [0, 1, 2], dtype='int64'), index=index)) + tm.assert_series_equal(s.dt.normalize(), pd.Series( + [s[0]] * 3, index=index)) + + # periodindex + for s in [Series(period_range('20130101', periods=5, freq='D'))]: + for prop in ok_for_period: + # we test freq below + if prop != 'freq': + compare(s, prop) 
+ + for prop in ok_for_period_methods: + getattr(s.dt, prop) + + freq_result = s.dt.freq + self.assertEqual(freq_result, PeriodIndex(s.values).freq) + + # test limited display api + def get_dir(s): + results = [r for r in s.dt.__dir__() if not r.startswith('_')] + return list(sorted(set(results))) + + s = Series(date_range('20130101', periods=5, freq='D')) + results = get_dir(s) + tm.assert_almost_equal( + results, list(sorted(set(ok_for_dt + ok_for_dt_methods)))) + + s = Series(period_range('20130101', periods=5, freq='D').asobject) + results = get_dir(s) + tm.assert_almost_equal( + results, list(sorted(set(ok_for_period + ok_for_period_methods)))) + + # 11295 + # ambiguous time error on the conversions + s = Series(pd.date_range('2015-01-01', '2016-01-01', freq='T')) + s = s.dt.tz_localize('UTC').dt.tz_convert('America/Chicago') + results = get_dir(s) + tm.assert_almost_equal( + results, list(sorted(set(ok_for_dt + ok_for_dt_methods)))) + expected = Series(pd.date_range('2015-01-01', '2016-01-01', freq='T', + tz='UTC').tz_convert( + 'America/Chicago')) + tm.assert_series_equal(s, expected) + + # no setting allowed + s = Series(date_range('20130101', periods=5, freq='D')) + with tm.assertRaisesRegexp(ValueError, "modifications"): + s.dt.hour = 5 + + # trying to set a copy + with pd.option_context('chained_assignment', 'raise'): + + def f(): + s.dt.hour[0] = 5 + + self.assertRaises(com.SettingWithCopyError, f) + + def test_dt_accessor_no_new_attributes(self): + # https://github.com/pydata/pandas/issues/10673 + s = Series(date_range('20130101', periods=5, freq='D')) + with tm.assertRaisesRegexp(AttributeError, + "You cannot add any new attribute"): + s.dt.xlabel = "a" + + def test_strftime(self): + # GH 10086 + s = Series(date_range('20130101', periods=5)) + result = s.dt.strftime('%Y/%m/%d') + expected = Series(['2013/01/01', '2013/01/02', '2013/01/03', + '2013/01/04', '2013/01/05']) + tm.assert_series_equal(result, expected) + + s = Series(date_range('2015-02-03 
11:22:33.4567', periods=5)) + result = s.dt.strftime('%Y/%m/%d %H-%M-%S') + expected = Series(['2015/02/03 11-22-33', '2015/02/04 11-22-33', + '2015/02/05 11-22-33', '2015/02/06 11-22-33', + '2015/02/07 11-22-33']) + tm.assert_series_equal(result, expected) + + s = Series(period_range('20130101', periods=5)) + result = s.dt.strftime('%Y/%m/%d') + expected = Series(['2013/01/01', '2013/01/02', '2013/01/03', + '2013/01/04', '2013/01/05']) + tm.assert_series_equal(result, expected) + + s = Series(period_range( + '2015-02-03 11:22:33.4567', periods=5, freq='s')) + result = s.dt.strftime('%Y/%m/%d %H-%M-%S') + expected = Series(['2015/02/03 11-22-33', '2015/02/03 11-22-34', + '2015/02/03 11-22-35', '2015/02/03 11-22-36', + '2015/02/03 11-22-37']) + tm.assert_series_equal(result, expected) + + s = Series(date_range('20130101', periods=5)) + s.iloc[0] = pd.NaT + result = s.dt.strftime('%Y/%m/%d') + expected = Series(['NaT', '2013/01/02', '2013/01/03', '2013/01/04', + '2013/01/05']) + tm.assert_series_equal(result, expected) + + datetime_index = date_range('20150301', periods=5) + result = datetime_index.strftime("%Y/%m/%d") + expected = np.array( + ['2015/03/01', '2015/03/02', '2015/03/03', '2015/03/04', + '2015/03/05'], dtype=object) + self.assert_numpy_array_equal(result, expected) + + period_index = period_range('20150301', periods=5) + result = period_index.strftime("%Y/%m/%d") + expected = np.array( + ['2015/03/01', '2015/03/02', '2015/03/03', '2015/03/04', + '2015/03/05'], dtype=object) + self.assert_numpy_array_equal(result, expected) + + s = Series([datetime(2013, 1, 1, 2, 32, 59), datetime(2013, 1, 2, 14, + 32, 1)]) + result = s.dt.strftime('%Y-%m-%d %H:%M:%S') + expected = Series(["2013-01-01 02:32:59", "2013-01-02 14:32:01"]) + tm.assert_series_equal(result, expected) + + s = Series(period_range('20130101', periods=4, freq='H')) + result = s.dt.strftime('%Y/%m/%d %H:%M:%S') + expected = Series(["2013/01/01 00:00:00", "2013/01/01 01:00:00", + "2013/01/01 
02:00:00", "2013/01/01 03:00:00"]) + tm.assert_series_equal(result, expected) + + s = Series(period_range('20130101', periods=4, freq='L')) + result = s.dt.strftime('%Y/%m/%d %H:%M:%S.%l') + expected = Series( + ["2013/01/01 00:00:00.000", "2013/01/01 00:00:00.001", + "2013/01/01 00:00:00.002", "2013/01/01 00:00:00.003"]) + tm.assert_series_equal(result, expected) + + def test_valid_dt_with_missing_values(self): + + from datetime import date, time + + # GH 8689 + s = Series(date_range('20130101', periods=5, freq='D')) + s.iloc[2] = pd.NaT + + for attr in ['microsecond', 'nanosecond', 'second', 'minute', 'hour', + 'day']: + expected = getattr(s.dt, attr).copy() + expected.iloc[2] = np.nan + result = getattr(s.dt, attr) + tm.assert_series_equal(result, expected) + + result = s.dt.date + expected = Series( + [date(2013, 1, 1), date(2013, 1, 2), np.nan, date(2013, 1, 4), + date(2013, 1, 5)], dtype='object') + tm.assert_series_equal(result, expected) + + result = s.dt.time + expected = Series( + [time(0), time(0), np.nan, time(0), time(0)], dtype='object') + tm.assert_series_equal(result, expected) + + def test_dt_accessor_api(self): + # GH 9322 + from pandas.tseries.common import (CombinedDatetimelikeProperties, + DatetimeProperties) + self.assertIs(Series.dt, CombinedDatetimelikeProperties) + + s = Series(date_range('2000-01-01', periods=3)) + self.assertIsInstance(s.dt, DatetimeProperties) + + for s in [Series(np.arange(5)), Series(list('abcde')), + Series(np.random.randn(5))]: + with tm.assertRaisesRegexp(AttributeError, + "only use .dt accessor"): + s.dt + self.assertFalse(hasattr(s, 'dt')) + + def test_sub_of_datetime_from_TimeSeries(self): + from pandas.tseries.timedeltas import to_timedelta + from datetime import datetime + a = Timestamp(datetime(1993, 0o1, 0o7, 13, 30, 00)) + b = datetime(1993, 6, 22, 13, 30) + a = Series([a]) + result = to_timedelta(np.abs(a - b)) + self.assertEqual(result.dtype, 'timedelta64[ns]') + + def test_between(self): + s = Series(bdate_range('1/1/2000', periods=20).asobject) + 
s[::2] = np.nan + + result = s[s.between(s[3], s[17])] + expected = s[3:18].dropna() + assert_series_equal(result, expected) + + result = s[s.between(s[3], s[17], inclusive=False)] + expected = s[5:16].dropna() + assert_series_equal(result, expected) diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py new file mode 100644 index 0000000000000..502953034ae2d --- /dev/null +++ b/pandas/tests/series/test_dtypes.py @@ -0,0 +1,147 @@ +# coding=utf-8 +# pylint: disable-msg=E1101,W0612 + +import sys +from datetime import datetime +import string + +from numpy import nan +import numpy as np + +from pandas import Series +from pandas.tseries.index import Timestamp +from pandas.tseries.tdi import Timedelta + +from pandas.compat import lrange, range, u +from pandas import compat +from pandas.util.testing import assert_series_equal +import pandas.util.testing as tm + +from .common import TestData + + +class TestSeriesDtypes(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_astype(self): + s = Series(np.random.randn(5), name='foo') + + for dtype in ['float32', 'float64', 'int64', 'int32']: + astyped = s.astype(dtype) + self.assertEqual(astyped.dtype, dtype) + self.assertEqual(astyped.name, s.name) + + def test_dtype(self): + + self.assertEqual(self.ts.dtype, np.dtype('float64')) + self.assertEqual(self.ts.dtypes, np.dtype('float64')) + self.assertEqual(self.ts.ftype, 'float64:dense') + self.assertEqual(self.ts.ftypes, 'float64:dense') + assert_series_equal(self.ts.get_dtype_counts(), Series(1, ['float64'])) + assert_series_equal(self.ts.get_ftype_counts(), Series( + 1, ['float64:dense'])) + + def test_astype_cast_nan_int(self): + df = Series([1.0, 2.0, 3.0, np.nan]) + self.assertRaises(ValueError, df.astype, np.int64) + + def test_astype_cast_object_int(self): + arr = Series(["car", "house", "tree", "1"]) + + self.assertRaises(ValueError, arr.astype, int) + self.assertRaises(ValueError, arr.astype, np.int64) + 
self.assertRaises(ValueError, arr.astype, np.int8) + + arr = Series(['1', '2', '3', '4'], dtype=object) + result = arr.astype(int) + self.assert_numpy_array_equal(result, np.arange(1, 5)) + + def test_astype_datetimes(self): + import pandas.tslib as tslib + + s = Series(tslib.iNaT, dtype='M8[ns]', index=lrange(5)) + s = s.astype('O') + self.assertEqual(s.dtype, np.object_) + + s = Series([datetime(2001, 1, 2, 0, 0)]) + s = s.astype('O') + self.assertEqual(s.dtype, np.object_) + + s = Series([datetime(2001, 1, 2, 0, 0) for i in range(3)]) + s[1] = np.nan + self.assertEqual(s.dtype, 'M8[ns]') + s = s.astype('O') + self.assertEqual(s.dtype, np.object_) + + def test_astype_str(self): + # GH4405 + digits = string.digits + s1 = Series([digits * 10, tm.rands(63), tm.rands(64), tm.rands(1000)]) + s2 = Series([digits * 10, tm.rands(63), tm.rands(64), nan, 1.0]) + types = (compat.text_type, np.str_) + for typ in types: + for s in (s1, s2): + res = s.astype(typ) + expec = s.map(compat.text_type) + assert_series_equal(res, expec) + + # GH9757 + # Test str and unicode on python 2.x and just str on python 3.x + for tt in set([str, compat.text_type]): + ts = Series([Timestamp('2010-01-04 00:00:00')]) + s = ts.astype(tt) + expected = Series([tt('2010-01-04')]) + assert_series_equal(s, expected) + + ts = Series([Timestamp('2010-01-04 00:00:00', tz='US/Eastern')]) + s = ts.astype(tt) + expected = Series([tt('2010-01-04 00:00:00-05:00')]) + assert_series_equal(s, expected) + + td = Series([Timedelta(1, unit='d')]) + s = td.astype(tt) + expected = Series([tt('1 days 00:00:00.000000000')]) + assert_series_equal(s, expected) + + def test_astype_unicode(self): + + # GH7758 + # a bit of magic is required to set the default encoding to utf-8 + digits = string.digits + test_series = [ + Series([digits * 10, tm.rands(63), tm.rands(64), tm.rands(1000)]), + Series([u('データーサイエンス、お前はもう死んでいる')]), + + ] + + former_encoding = None + if not compat.PY3: + # in python we can force the default 
encoding for this test + former_encoding = sys.getdefaultencoding() + reload(sys) # noqa + sys.setdefaultencoding("utf-8") + if sys.getdefaultencoding() == "utf-8": + test_series.append(Series([u('野菜食べないとやばい') + .encode("utf-8")])) + for s in test_series: + res = s.astype("unicode") + expec = s.map(compat.text_type) + assert_series_equal(res, expec) + # restore the former encoding + if former_encoding is not None and former_encoding != "utf-8": + reload(sys) # noqa + sys.setdefaultencoding(former_encoding) + + def test_complexx(self): + + # GH4819 + # complex access for ndarray compat + a = np.arange(5) + b = Series(a + 4j * a) + tm.assert_almost_equal(a, b.real) + tm.assert_almost_equal(4 * a, b.imag) + + b.real = np.arange(5) + 5 + tm.assert_almost_equal(a + 5, b.real) + tm.assert_almost_equal(4 * a, b.imag) diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py new file mode 100644 index 0000000000000..537c2d07443e4 --- /dev/null +++ b/pandas/tests/series/test_indexing.py @@ -0,0 +1,1789 @@ +# coding=utf-8 +# pylint: disable-msg=E1101,W0612 + +from datetime import datetime, timedelta + +from numpy import nan +import numpy as np +import pandas as pd + +from pandas import Index, Series, DataFrame, isnull, date_range +from pandas.core.index import MultiIndex +from pandas.core.indexing import IndexingError +from pandas.tseries.index import Timestamp +from pandas.tseries.tdi import Timedelta +import pandas.core.common as com + +import pandas.core.datetools as datetools + +from pandas.compat import lrange, range +from pandas import compat +from pandas.util.testing import assert_series_equal, assert_almost_equal +import pandas.util.testing as tm + +from .common import TestData + +JOIN_TYPES = ['inner', 'outer', 'left', 'right'] + + +class TestSeriesIndexing(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_get(self): + + # GH 6383 + s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, 45, + 51, 39, 
55, 43, 54, 52, 51, 54])) + + result = s.get(25, 0) + expected = 0 + self.assertEqual(result, expected) + + s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, + 45, 51, 39, 55, 43, 54, 52, 51, 54]), + index=pd.Float64Index( + [25.0, 36.0, 49.0, 64.0, 81.0, 100.0, + 121.0, 144.0, 169.0, 196.0, 1225.0, + 1296.0, 1369.0, 1444.0, 1521.0, 1600.0, + 1681.0, 1764.0, 1849.0, 1936.0], + dtype='object')) + + result = s.get(25, 0) + expected = 43 + self.assertEqual(result, expected) + + # GH 7407 + # with a boolean accessor + df = pd.DataFrame({'i': [0] * 3, 'b': [False] * 3}) + vc = df.i.value_counts() + result = vc.get(99, default='Missing') + self.assertEqual(result, 'Missing') + + vc = df.b.value_counts() + result = vc.get(False, default='Missing') + self.assertEqual(result, 3) + + result = vc.get(True, default='Missing') + self.assertEqual(result, 'Missing') + + def test_delitem(self): + + # GH 5542 + # should delete the item inplace + s = Series(lrange(5)) + del s[0] + + expected = Series(lrange(1, 5), index=lrange(1, 5)) + assert_series_equal(s, expected) + + del s[1] + expected = Series(lrange(2, 5), index=lrange(2, 5)) + assert_series_equal(s, expected) + + # empty + s = Series() + + def f(): + del s[0] + + self.assertRaises(KeyError, f) + + # only 1 left, del, add, del + s = Series(1) + del s[0] + assert_series_equal(s, Series(dtype='int64', index=Index( + [], dtype='int64'))) + s[0] = 1 + assert_series_equal(s, Series(1)) + del s[0] + assert_series_equal(s, Series(dtype='int64', index=Index( + [], dtype='int64'))) + + # Index(dtype=object) + s = Series(1, index=['a']) + del s['a'] + assert_series_equal(s, Series(dtype='int64', index=Index( + [], dtype='object'))) + s['a'] = 1 + assert_series_equal(s, Series(1, index=['a'])) + del s['a'] + assert_series_equal(s, Series(dtype='int64', index=Index( + [], dtype='object'))) + + def test_getitem_setitem_ellipsis(self): + s = Series(np.random.randn(10)) + + np.fix(s) + + result = s[...] 
+ assert_series_equal(result, s) + + s[...] = 5 + self.assertTrue((result == 5).all()) + + def test_getitem_negative_out_of_bounds(self): + s = Series(tm.rands_array(5, 10), index=tm.rands_array(10, 10)) + + self.assertRaises(IndexError, s.__getitem__, -11) + self.assertRaises(IndexError, s.__setitem__, -11, 'foo') + + def test_pop(self): + # GH 6600 + df = DataFrame({'A': 0, 'B': np.arange(5, dtype='int64'), 'C': 0, }) + k = df.iloc[4] + + result = k.pop('B') + self.assertEqual(result, 4) + + expected = Series([0, 0], index=['A', 'C'], name=4) + assert_series_equal(k, expected) + + def test_getitem_get(self): + idx1 = self.series.index[5] + idx2 = self.objSeries.index[5] + + self.assertEqual(self.series[idx1], self.series.get(idx1)) + self.assertEqual(self.objSeries[idx2], self.objSeries.get(idx2)) + + self.assertEqual(self.series[idx1], self.series[5]) + self.assertEqual(self.objSeries[idx2], self.objSeries[5]) + + self.assertEqual( + self.series.get(-1), self.series.get(self.series.index[-1])) + self.assertEqual(self.series[5], self.series.get(self.series.index[5])) + + # missing + d = self.ts.index[0] - datetools.bday + self.assertRaises(KeyError, self.ts.__getitem__, d) + + # None + # GH 5652 + for s in [Series(), Series(index=list('abc'))]: + result = s.get(None) + self.assertIsNone(result) + + def test_iget(self): + + s = Series(np.random.randn(10), index=lrange(0, 20, 2)) + + # 10711, deprecated + with tm.assert_produces_warning(FutureWarning): + s.iget(1) + + # 10711, deprecated + with tm.assert_produces_warning(FutureWarning): + s.irow(1) + + # 10711, deprecated + with tm.assert_produces_warning(FutureWarning): + s.iget_value(1) + + for i in range(len(s)): + result = s.iloc[i] + exp = s[s.index[i]] + assert_almost_equal(result, exp) + + # pass a slice + result = s.iloc[slice(1, 3)] + expected = s.ix[2:4] + assert_series_equal(result, expected) + + # test slice is a view + result[:] = 0 + self.assertTrue((s[1:3] == 0).all()) + + # list of integers + result 
= s.iloc[[0, 2, 3, 4, 5]] + expected = s.reindex(s.index[[0, 2, 3, 4, 5]]) + assert_series_equal(result, expected) + + def test_iget_nonunique(self): + s = Series([0, 1, 2], index=[0, 1, 0]) + self.assertEqual(s.iloc[2], 2) + + def test_getitem_regression(self): + s = Series(lrange(5), index=lrange(5)) + result = s[lrange(5)] + assert_series_equal(result, s) + + def test_getitem_setitem_slice_bug(self): + s = Series(lrange(10), lrange(10)) + result = s[-12:] + assert_series_equal(result, s) + + result = s[-7:] + assert_series_equal(result, s[3:]) + + result = s[:-12] + assert_series_equal(result, s[:0]) + + s = Series(lrange(10), lrange(10)) + s[-12:] = 0 + self.assertTrue((s == 0).all()) + + s[:-12] = 5 + self.assertTrue((s == 0).all()) + + def test_getitem_int64(self): + idx = np.int64(5) + self.assertEqual(self.ts[idx], self.ts[5]) + + def test_getitem_fancy(self): + slice1 = self.series[[1, 2, 3]] + slice2 = self.objSeries[[1, 2, 3]] + self.assertEqual(self.series.index[2], slice1.index[1]) + self.assertEqual(self.objSeries.index[2], slice2.index[1]) + self.assertEqual(self.series[2], slice1[1]) + self.assertEqual(self.objSeries[2], slice2[1]) + + def test_getitem_boolean(self): + s = self.series + mask = s > s.median() + + # passing list is OK + result = s[list(mask)] + expected = s[mask] + assert_series_equal(result, expected) + self.assert_numpy_array_equal(result.index, s.index[mask]) + + def test_getitem_boolean_empty(self): + s = Series([], dtype=np.int64) + s.index.name = 'index_name' + s = s[s.isnull()] + self.assertEqual(s.index.name, 'index_name') + self.assertEqual(s.dtype, np.int64) + + # GH5877 + # indexing with empty series + s = Series(['A', 'B']) + expected = Series(np.nan, index=['C'], dtype=object) + result = s[Series(['C'], dtype=object)] + assert_series_equal(result, expected) + + s = Series(['A', 'B']) + expected = Series(dtype=object, index=Index([], dtype='int64')) + result = s[Series([], dtype=object)] + assert_series_equal(result, 
expected) + + # invalid because of the boolean indexer + # that's empty or not-aligned + def f(): + s[Series([], dtype=bool)] + + self.assertRaises(IndexingError, f) + + def f(): + s[Series([True], dtype=bool)] + + self.assertRaises(IndexingError, f) + + def test_getitem_generator(self): + gen = (x > 0 for x in self.series) + result = self.series[gen] + result2 = self.series[iter(self.series > 0)] + expected = self.series[self.series > 0] + assert_series_equal(result, expected) + assert_series_equal(result2, expected) + + def test_getitem_boolean_object(self): + # using column from DataFrame + + s = self.series + mask = s > s.median() + omask = mask.astype(object) + + # getitem + result = s[omask] + expected = s[mask] + assert_series_equal(result, expected) + + # setitem + s2 = s.copy() + cop = s.copy() + cop[omask] = 5 + s2[mask] = 5 + assert_series_equal(cop, s2) + + # nans raise exception + omask[5:10] = np.nan + self.assertRaises(Exception, s.__getitem__, omask) + self.assertRaises(Exception, s.__setitem__, omask, 5) + + def test_getitem_setitem_boolean_corner(self): + ts = self.ts + mask_shifted = ts.shift(1, freq=datetools.bday) > ts.median() + + # these used to raise...?? 
+ + self.assertRaises(Exception, ts.__getitem__, mask_shifted) + self.assertRaises(Exception, ts.__setitem__, mask_shifted, 1) + # ts[mask_shifted] + # ts[mask_shifted] = 1 + + self.assertRaises(Exception, ts.ix.__getitem__, mask_shifted) + self.assertRaises(Exception, ts.ix.__setitem__, mask_shifted, 1) + # ts.ix[mask_shifted] + # ts.ix[mask_shifted] = 2 + + def test_getitem_setitem_slice_integers(self): + s = Series(np.random.randn(8), index=[2, 4, 6, 8, 10, 12, 14, 16]) + + result = s[:4] + expected = s.reindex([2, 4, 6, 8]) + assert_series_equal(result, expected) + + s[:4] = 0 + self.assertTrue((s[:4] == 0).all()) + self.assertTrue(not (s[4:] == 0).any()) + + def test_getitem_out_of_bounds(self): + # don't segfault, GH #495 + self.assertRaises(IndexError, self.ts.__getitem__, len(self.ts)) + + # GH #917 + s = Series([]) + self.assertRaises(IndexError, s.__getitem__, -1) + + def test_getitem_setitem_integers(self): + # caused bug without test + s = Series([1, 2, 3], ['a', 'b', 'c']) + + self.assertEqual(s.ix[0], s['a']) + s.ix[0] = 5 + self.assertAlmostEqual(s['a'], 5) + + def test_getitem_box_float64(self): + value = self.ts[5] + tm.assertIsInstance(value, np.float64) + + def test_getitem_ambiguous_keyerror(self): + s = Series(lrange(10), index=lrange(0, 20, 2)) + self.assertRaises(KeyError, s.__getitem__, 1) + self.assertRaises(KeyError, s.ix.__getitem__, 1) + + def test_getitem_unordered_dup(self): + obj = Series(lrange(5), index=['c', 'a', 'a', 'b', 'b']) + self.assertTrue(np.isscalar(obj['c'])) + self.assertEqual(obj['c'], 0) + + def test_getitem_dups_with_missing(self): + + # breaks reindex, so need to use .ix internally + # GH 4246 + s = Series([1, 2, 3, 4], ['foo', 'bar', 'foo', 'bah']) + expected = s.ix[['foo', 'bar', 'bah', 'bam']] + result = s[['foo', 'bar', 'bah', 'bam']] + assert_series_equal(result, expected) + + def test_getitem_dups(self): + s = Series(range(5), index=['A', 'A', 'B', 'C', 'C'], dtype=np.int64) + expected = Series([3, 4], 
index=['C', 'C'], dtype=np.int64) + result = s['C'] + assert_series_equal(result, expected) + + def test_getitem_dataframe(self): + rng = list(range(10)) + s = pd.Series(10, index=rng) + df = pd.DataFrame(rng, index=rng) + self.assertRaises(TypeError, s.__getitem__, df > 5) + + def test_setitem_ambiguous_keyerror(self): + s = Series(lrange(10), index=lrange(0, 20, 2)) + + # equivalent of an append + s2 = s.copy() + s2[1] = 5 + expected = s.append(Series([5], index=[1])) + assert_series_equal(s2, expected) + + s2 = s.copy() + s2.ix[1] = 5 + expected = s.append(Series([5], index=[1])) + assert_series_equal(s2, expected) + + def test_setitem_float_labels(self): + # note labels are floats + s = Series(['a', 'b', 'c'], index=[0, 0.5, 1]) + tmp = s.copy() + + s.ix[1] = 'zoo' + tmp.iloc[2] = 'zoo' + + assert_series_equal(s, tmp) + + def test_slice(self): + numSlice = self.series[10:20] + numSliceEnd = self.series[-10:] + objSlice = self.objSeries[10:20] + + self.assertNotIn(self.series.index[9], numSlice.index) + self.assertNotIn(self.objSeries.index[9], objSlice.index) + + self.assertEqual(len(numSlice), len(numSlice.index)) + self.assertEqual(self.series[numSlice.index[0]], + numSlice[numSlice.index[0]]) + + self.assertEqual(numSlice.index[1], self.series.index[11]) + + self.assertTrue(tm.equalContents(numSliceEnd, np.array(self.series)[ + -10:])) + + # test return view + sl = self.series[10:20] + sl[:] = 0 + self.assertTrue((self.series[10:20] == 0).all()) + + def test_slice_can_reorder_not_uniquely_indexed(self): + s = Series(1, index=['a', 'a', 'b', 'b', 'c']) + s[::-1] # it works! 
+ + def test_slice_float_get_set(self): + + self.assertRaises(TypeError, lambda: self.ts[4.0:10.0]) + + def f(): + self.ts[4.0:10.0] = 0 + + self.assertRaises(TypeError, f) + + self.assertRaises(TypeError, self.ts.__getitem__, slice(4.5, 10.0)) + self.assertRaises(TypeError, self.ts.__setitem__, slice(4.5, 10.0), 0) + + def test_slice_floats2(self): + s = Series(np.random.rand(10), index=np.arange(10, 20, dtype=float)) + + self.assertEqual(len(s.ix[12.0:]), 8) + self.assertEqual(len(s.ix[12.5:]), 7) + + i = np.arange(10, 20, dtype=float) + i[2] = 12.2 + s.index = i + self.assertEqual(len(s.ix[12.0:]), 8) + self.assertEqual(len(s.ix[12.5:]), 7) + + def test_slice_float64(self): + + values = np.arange(10., 50., 2) + index = Index(values) + + start, end = values[[5, 15]] + + s = Series(np.random.randn(20), index=index) + + result = s[start:end] + expected = s.iloc[5:16] + assert_series_equal(result, expected) + + result = s.loc[start:end] + assert_series_equal(result, expected) + + df = DataFrame(np.random.randn(20, 3), index=index) + + result = df[start:end] + expected = df.iloc[5:16] + tm.assert_frame_equal(result, expected) + + result = df.loc[start:end] + tm.assert_frame_equal(result, expected) + + def test_setitem(self): + self.ts[self.ts.index[5]] = np.NaN + self.ts[[1, 2, 17]] = np.NaN + self.ts[6] = np.NaN + self.assertTrue(np.isnan(self.ts[6])) + self.assertTrue(np.isnan(self.ts[2])) + self.ts[np.isnan(self.ts)] = 5 + self.assertFalse(np.isnan(self.ts[2])) + + # caught this bug when writing tests + series = Series(tm.makeIntIndex(20).astype(float), + index=tm.makeIntIndex(20)) + + series[::2] = 0 + self.assertTrue((series[::2] == 0).all()) + + # set item that's not contained + s = self.series.copy() + s['foobar'] = 1 + + app = Series([1], index=['foobar'], name='series') + expected = self.series.append(app) + assert_series_equal(s, expected) + + # Test for issue #10193 + key = pd.Timestamp('2012-01-01') + series = pd.Series() + series[key] = 47 + expected = 
pd.Series(47, [key]) + assert_series_equal(series, expected) + + series = pd.Series([], pd.DatetimeIndex([], freq='D')) + series[key] = 47 + expected = pd.Series(47, pd.DatetimeIndex([key], freq='D')) + assert_series_equal(series, expected) + + def test_setitem_dtypes(self): + + # change dtypes + # GH 4463 + expected = Series([np.nan, 2, 3]) + + s = Series([1, 2, 3]) + s.iloc[0] = np.nan + assert_series_equal(s, expected) + + s = Series([1, 2, 3]) + s.loc[0] = np.nan + assert_series_equal(s, expected) + + s = Series([1, 2, 3]) + s[0] = np.nan + assert_series_equal(s, expected) + + s = Series([False]) + s.loc[0] = np.nan + assert_series_equal(s, Series([np.nan])) + + s = Series([False, True]) + s.loc[0] = np.nan + assert_series_equal(s, Series([np.nan, 1.0])) + + def test_set_value(self): + idx = self.ts.index[10] + res = self.ts.set_value(idx, 0) + self.assertIs(res, self.ts) + self.assertEqual(self.ts[idx], 0) + + # equiv + s = self.series.copy() + res = s.set_value('foobar', 0) + self.assertIs(res, s) + self.assertEqual(res.index[-1], 'foobar') + self.assertEqual(res['foobar'], 0) + + s = self.series.copy() + s.loc['foobar'] = 0 + self.assertEqual(s.index[-1], 'foobar') + self.assertEqual(s['foobar'], 0) + + def test_setslice(self): + sl = self.ts[5:20] + self.assertEqual(len(sl), len(sl.index)) + self.assertTrue(sl.index.is_unique) + + def test_basic_getitem_setitem_corner(self): + # invalid tuples, e.g. self.ts[:, None] vs. self.ts[:, 2] + with tm.assertRaisesRegexp(ValueError, 'tuple-index'): + self.ts[:, 2] + with tm.assertRaisesRegexp(ValueError, 'tuple-index'): + self.ts[:, 2] = 2 + + # weird lists. 
[slice(0, 5)] will work but not two slices + result = self.ts[[slice(None, 5)]] + expected = self.ts[:5] + assert_series_equal(result, expected) + + # OK + self.assertRaises(Exception, self.ts.__getitem__, + [5, slice(None, None)]) + self.assertRaises(Exception, self.ts.__setitem__, + [5, slice(None, None)], 2) + + def test_basic_getitem_with_labels(self): + indices = self.ts.index[[5, 10, 15]] + + result = self.ts[indices] + expected = self.ts.reindex(indices) + assert_series_equal(result, expected) + + result = self.ts[indices[0]:indices[2]] + expected = self.ts.ix[indices[0]:indices[2]] + assert_series_equal(result, expected) + + # integer indexes, be careful + s = Series(np.random.randn(10), index=lrange(0, 20, 2)) + inds = [0, 2, 5, 7, 8] + arr_inds = np.array([0, 2, 5, 7, 8]) + result = s[inds] + expected = s.reindex(inds) + assert_series_equal(result, expected) + + result = s[arr_inds] + expected = s.reindex(arr_inds) + assert_series_equal(result, expected) + + def test_basic_setitem_with_labels(self): + indices = self.ts.index[[5, 10, 15]] + + cp = self.ts.copy() + exp = self.ts.copy() + cp[indices] = 0 + exp.ix[indices] = 0 + assert_series_equal(cp, exp) + + cp = self.ts.copy() + exp = self.ts.copy() + cp[indices[0]:indices[2]] = 0 + exp.ix[indices[0]:indices[2]] = 0 + assert_series_equal(cp, exp) + + # integer indexes, be careful + s = Series(np.random.randn(10), index=lrange(0, 20, 2)) + inds = [0, 4, 6] + arr_inds = np.array([0, 4, 6]) + + cp = s.copy() + exp = s.copy() + s[inds] = 0 + s.ix[inds] = 0 + assert_series_equal(cp, exp) + + cp = s.copy() + exp = s.copy() + s[arr_inds] = 0 + s.ix[arr_inds] = 0 + assert_series_equal(cp, exp) + + inds_notfound = [0, 4, 5, 6] + arr_inds_notfound = np.array([0, 4, 5, 6]) + self.assertRaises(Exception, s.__setitem__, inds_notfound, 0) + self.assertRaises(Exception, s.__setitem__, arr_inds_notfound, 0) + + def test_ix_getitem(self): + inds = self.series.index[[3, 4, 7]] + assert_series_equal(self.series.ix[inds], 
self.series.reindex(inds)) + assert_series_equal(self.series.ix[5::2], self.series[5::2]) + + # slice with indices + d1, d2 = self.ts.index[[5, 15]] + result = self.ts.ix[d1:d2] + expected = self.ts.truncate(d1, d2) + assert_series_equal(result, expected) + + # boolean + mask = self.series > self.series.median() + assert_series_equal(self.series.ix[mask], self.series[mask]) + + # ask for index value + self.assertEqual(self.ts.ix[d1], self.ts[d1]) + self.assertEqual(self.ts.ix[d2], self.ts[d2]) + + def test_ix_getitem_not_monotonic(self): + d1, d2 = self.ts.index[[5, 15]] + + ts2 = self.ts[::2][[1, 2, 0]] + + self.assertRaises(KeyError, ts2.ix.__getitem__, slice(d1, d2)) + self.assertRaises(KeyError, ts2.ix.__setitem__, slice(d1, d2), 0) + + def test_ix_getitem_setitem_integer_slice_keyerrors(self): + s = Series(np.random.randn(10), index=lrange(0, 20, 2)) + + # this is OK + cp = s.copy() + cp.ix[4:10] = 0 + self.assertTrue((cp.ix[4:10] == 0).all()) + + # so is this + cp = s.copy() + cp.ix[3:11] = 0 + self.assertTrue((cp.ix[3:11] == 0).values.all()) + + result = s.ix[4:10] + result2 = s.ix[3:11] + expected = s.reindex([4, 6, 8, 10]) + + assert_series_equal(result, expected) + assert_series_equal(result2, expected) + + # non-monotonic, raise KeyError + s2 = s.iloc[lrange(5) + lrange(5, 10)[::-1]] + self.assertRaises(KeyError, s2.ix.__getitem__, slice(3, 11)) + self.assertRaises(KeyError, s2.ix.__setitem__, slice(3, 11), 0) + + def test_ix_getitem_iterator(self): + idx = iter(self.series.index[:10]) + result = self.series.ix[idx] + assert_series_equal(result, self.series[:10]) + + def test_where(self): + s = Series(np.random.randn(5)) + cond = s > 0 + + rs = s.where(cond).dropna() + rs2 = s[cond] + assert_series_equal(rs, rs2) + + rs = s.where(cond, -s) + assert_series_equal(rs, s.abs()) + + rs = s.where(cond) + assert (s.shape == rs.shape) + assert (rs is not s) + + # test alignment + cond = Series([True, False, False, True, False], index=s.index) + s2 = -(s.abs()) + 
+ expected = s2[cond].reindex(s2.index[:3]).reindex(s2.index) + rs = s2.where(cond[:3]) + assert_series_equal(rs, expected) + + expected = s2.abs() + expected.ix[0] = s2[0] + rs = s2.where(cond[:3], -s2) + assert_series_equal(rs, expected) + + self.assertRaises(ValueError, s.where, 1) + self.assertRaises(ValueError, s.where, cond[:3].values, -s) + + # GH 2745 + s = Series([1, 2]) + s[[True, False]] = [0, 1] + expected = Series([0, 2]) + assert_series_equal(s, expected) + + # failures + self.assertRaises(ValueError, s.__setitem__, tuple([[[True, False]]]), + [0, 2, 3]) + self.assertRaises(ValueError, s.__setitem__, tuple([[[True, False]]]), + []) + + # unsafe dtype changes + for dtype in [np.int8, np.int16, np.int32, np.int64, np.float16, + np.float32, np.float64]: + s = Series(np.arange(10), dtype=dtype) + mask = s < 5 + s[mask] = lrange(2, 7) + expected = Series(lrange(2, 7) + lrange(5, 10), dtype=dtype) + assert_series_equal(s, expected) + self.assertEqual(s.dtype, expected.dtype) + + # these are allowed operations, but are upcasted + for dtype in [np.int64, np.float64]: + s = Series(np.arange(10), dtype=dtype) + mask = s < 5 + values = [2.5, 3.5, 4.5, 5.5, 6.5] + s[mask] = values + expected = Series(values + lrange(5, 10), dtype='float64') + assert_series_equal(s, expected) + self.assertEqual(s.dtype, expected.dtype) + + # GH 9731 + s = Series(np.arange(10), dtype='int64') + mask = s > 5 + values = [2.5, 3.5, 4.5, 5.5] + s[mask] = values + expected = Series(lrange(6) + values, dtype='float64') + assert_series_equal(s, expected) + + # can't do these as we are forced to change the itemsize of the input + # to something we cannot + for dtype in [np.int8, np.int16, np.int32, np.float16, np.float32]: + s = Series(np.arange(10), dtype=dtype) + mask = s < 5 + values = [2.5, 3.5, 4.5, 5.5, 6.5] + self.assertRaises(Exception, s.__setitem__, tuple(mask), values) + + # GH3235 + s = Series(np.arange(10), dtype='int64') + mask = s < 5 + s[mask] = lrange(2, 7) + expected = 
Series(lrange(2, 7) + lrange(5, 10), dtype='int64') + assert_series_equal(s, expected) + self.assertEqual(s.dtype, expected.dtype) + + s = Series(np.arange(10), dtype='int64') + mask = s > 5 + s[mask] = [0] * 4 + expected = Series([0, 1, 2, 3, 4, 5] + [0] * 4, dtype='int64') + assert_series_equal(s, expected) + + s = Series(np.arange(10)) + mask = s > 5 + + def f(): + s[mask] = [5, 4, 3, 2, 1] + + self.assertRaises(ValueError, f) + + def f(): + s[mask] = [0] * 5 + + self.assertRaises(ValueError, f) + + # dtype changes + s = Series([1, 2, 3, 4]) + result = s.where(s > 2, np.nan) + expected = Series([np.nan, np.nan, 3, 4]) + assert_series_equal(result, expected) + + # GH 4667 + # setting with None changes dtype + s = Series(range(10)).astype(float) + s[8] = None + result = s[8] + self.assertTrue(isnull(result)) + + s = Series(range(10)).astype(float) + s[s > 8] = None + result = s[isnull(s)] + expected = Series(np.nan, index=[9]) + assert_series_equal(result, expected) + + def test_where_setitem_invalid(self): + + # GH 2702 + # make sure correct exceptions are raised on invalid list assignment + + # slice + s = Series(list('abc')) + + def f(): + s[0:3] = list(range(27)) + + self.assertRaises(ValueError, f) + + s[0:3] = list(range(3)) + expected = Series([0, 1, 2]) + assert_series_equal(s.astype(np.int64), expected, ) + + # slice with step + s = Series(list('abcdef')) + + def f(): + s[0:4:2] = list(range(27)) + + self.assertRaises(ValueError, f) + + s = Series(list('abcdef')) + s[0:4:2] = list(range(2)) + expected = Series([0, 'b', 1, 'd', 'e', 'f']) + assert_series_equal(s, expected) + + # neg slices + s = Series(list('abcdef')) + + def f(): + s[:-1] = list(range(27)) + + self.assertRaises(ValueError, f) + + s[-3:-1] = list(range(2)) + expected = Series(['a', 'b', 'c', 0, 1, 'f']) + assert_series_equal(s, expected) + + # list + s = Series(list('abc')) + + def f(): + s[[0, 1, 2]] = list(range(27)) + + self.assertRaises(ValueError, f) + + s = Series(list('abc')) + + 
def f(): + s[[0, 1, 2]] = list(range(2)) + + self.assertRaises(ValueError, f) + + # scalar + s = Series(list('abc')) + s[0] = list(range(10)) + expected = Series([list(range(10)), 'b', 'c']) + assert_series_equal(s, expected) + + def test_where_broadcast(self): + # Test a variety of differently sized series + for size in range(2, 6): + # Test a variety of boolean indices + for selection in [ + # First element should be set + np.resize([True, False, False, False, False], size), + # Set alternating elements + np.resize([True, False], size), + # No element should be set + np.resize([False], size)]: + + # Test a variety of different numbers as content + for item in [2.0, np.nan, np.finfo(np.float).max, + np.finfo(np.float).min]: + # Test numpy arrays, lists and tuples as the input to be + # broadcast + for arr in [np.array([item]), [item], (item, )]: + data = np.arange(size, dtype=float) + s = Series(data) + s[selection] = arr + # Construct the expected series by taking the source + # data or item based on the selection + expected = Series([item if use_item else data[ + i] for i, use_item in enumerate(selection)]) + assert_series_equal(s, expected) + + s = Series(data) + result = s.where(~selection, arr) + assert_series_equal(result, expected) + + def test_where_inplace(self): + s = Series(np.random.randn(5)) + cond = s > 0 + + rs = s.copy() + + rs.where(cond, inplace=True) + assert_series_equal(rs.dropna(), s[cond]) + assert_series_equal(rs, s.where(cond)) + + rs = s.copy() + rs.where(cond, -s, inplace=True) + assert_series_equal(rs, s.where(cond, -s)) + + def test_where_dups(self): + # GH 4550 + # where crashes with dups in index + s1 = Series(list(range(3))) + s2 = Series(list(range(3))) + comb = pd.concat([s1, s2]) + result = comb.where(comb < 2) + expected = Series([0, 1, np.nan, 0, 1, np.nan], + index=[0, 1, 2, 0, 1, 2]) + assert_series_equal(result, expected) + + # GH 4548 + # inplace updating not working with dups + comb[comb < 1] = 5 + expected = Series([5,
1, 2, 5, 1, 2], index=[0, 1, 2, 0, 1, 2]) + assert_series_equal(comb, expected) + + comb[comb < 2] += 10 + expected = Series([5, 11, 2, 5, 11, 2], index=[0, 1, 2, 0, 1, 2]) + assert_series_equal(comb, expected) + + def test_where_datetime(self): + s = Series(date_range('20130102', periods=2)) + expected = Series([10, 10], dtype='datetime64[ns]') + mask = np.array([False, False]) + + rs = s.where(mask, [10, 10]) + assert_series_equal(rs, expected) + + rs = s.where(mask, 10) + assert_series_equal(rs, expected) + + rs = s.where(mask, 10.0) + assert_series_equal(rs, expected) + + rs = s.where(mask, [10.0, 10.0]) + assert_series_equal(rs, expected) + + rs = s.where(mask, [10.0, np.nan]) + expected = Series([10, None], dtype='datetime64[ns]') + assert_series_equal(rs, expected) + + def test_where_timedelta(self): + s = Series([1, 2], dtype='timedelta64[ns]') + expected = Series([10, 10], dtype='timedelta64[ns]') + mask = np.array([False, False]) + + rs = s.where(mask, [10, 10]) + assert_series_equal(rs, expected) + + rs = s.where(mask, 10) + assert_series_equal(rs, expected) + + rs = s.where(mask, 10.0) + assert_series_equal(rs, expected) + + rs = s.where(mask, [10.0, 10.0]) + assert_series_equal(rs, expected) + + rs = s.where(mask, [10.0, np.nan]) + expected = Series([10, None], dtype='timedelta64[ns]') + assert_series_equal(rs, expected) + + def test_mask(self): + # compare with tested results in test_where + s = Series(np.random.randn(5)) + cond = s > 0 + + rs = s.where(~cond, np.nan) + assert_series_equal(rs, s.mask(cond)) + + rs = s.where(~cond) + rs2 = s.mask(cond) + assert_series_equal(rs, rs2) + + rs = s.where(~cond, -s) + rs2 = s.mask(cond, -s) + assert_series_equal(rs, rs2) + + cond = Series([True, False, False, True, False], index=s.index) + s2 = -(s.abs()) + rs = s2.where(~cond[:3]) + rs2 = s2.mask(cond[:3]) + assert_series_equal(rs, rs2) + + rs = s2.where(~cond[:3], -s2) + rs2 = s2.mask(cond[:3], -s2) + assert_series_equal(rs, rs2) + + 
self.assertRaises(ValueError, s.mask, 1) + self.assertRaises(ValueError, s.mask, cond[:3].values, -s) + + # dtype changes + s = Series([1, 2, 3, 4]) + result = s.mask(s > 2, np.nan) + expected = Series([1, 2, np.nan, np.nan]) + assert_series_equal(result, expected) + + def test_mask_broadcast(self): + # GH 8801 + # copied from test_where_broadcast + for size in range(2, 6): + for selection in [ + # First element should be set + np.resize([True, False, False, False, False], size), + # Set alternating elements + np.resize([True, False], size), + # No element should be set + np.resize([False], size)]: + for item in [2.0, np.nan, np.finfo(np.float).max, + np.finfo(np.float).min]: + for arr in [np.array([item]), [item], (item, )]: + data = np.arange(size, dtype=float) + s = Series(data) + result = s.mask(selection, arr) + expected = Series([item if use_item else data[ + i] for i, use_item in enumerate(selection)]) + assert_series_equal(result, expected) + + def test_mask_inplace(self): + s = Series(np.random.randn(5)) + cond = s > 0 + + rs = s.copy() + rs.mask(cond, inplace=True) + assert_series_equal(rs.dropna(), s[~cond]) + assert_series_equal(rs, s.mask(cond)) + + rs = s.copy() + rs.mask(cond, -s, inplace=True) + assert_series_equal(rs, s.mask(cond, -s)) + + def test_ix_setitem(self): + inds = self.series.index[[3, 4, 7]] + + result = self.series.copy() + result.ix[inds] = 5 + + expected = self.series.copy() + expected[[3, 4, 7]] = 5 + assert_series_equal(result, expected) + + result.ix[5:10] = 10 + expected[5:10] = 10 + assert_series_equal(result, expected) + + # set slice with indices + d1, d2 = self.series.index[[5, 15]] + result.ix[d1:d2] = 6 + expected[5:16] = 6 # because it's inclusive + assert_series_equal(result, expected) + + # set index value + self.series.ix[d1] = 4 + self.series.ix[d2] = 6 + self.assertEqual(self.series[d1], 4) + self.assertEqual(self.series[d2], 6) + + def test_where_numeric_with_string(self): + # GH 9280 + s = pd.Series([1, 2, 3]) + w
= s.where(s > 1, 'X') + + self.assertFalse(com.is_integer(w[0])) + self.assertTrue(com.is_integer(w[1])) + self.assertTrue(com.is_integer(w[2])) + self.assertTrue(isinstance(w[0], str)) + self.assertTrue(w.dtype == 'object') + + w = s.where(s > 1, ['X', 'Y', 'Z']) + self.assertFalse(com.is_integer(w[0])) + self.assertTrue(com.is_integer(w[1])) + self.assertTrue(com.is_integer(w[2])) + self.assertTrue(isinstance(w[0], str)) + self.assertTrue(w.dtype == 'object') + + w = s.where(s > 1, np.array(['X', 'Y', 'Z'])) + self.assertFalse(com.is_integer(w[0])) + self.assertTrue(com.is_integer(w[1])) + self.assertTrue(com.is_integer(w[2])) + self.assertTrue(isinstance(w[0], str)) + self.assertTrue(w.dtype == 'object') + + def test_setitem_boolean(self): + mask = self.series > self.series.median() + + # similar indexed series + result = self.series.copy() + result[mask] = self.series * 2 + expected = self.series * 2 + assert_series_equal(result[mask], expected[mask]) + + # needs alignment + result = self.series.copy() + result[mask] = (self.series * 2)[0:5] + expected = (self.series * 2)[0:5].reindex_like(self.series) + expected[-mask] = self.series[mask] + assert_series_equal(result[mask], expected[mask]) + + def test_ix_setitem_boolean(self): + mask = self.series > self.series.median() + + result = self.series.copy() + result.ix[mask] = 0 + expected = self.series + expected[mask] = 0 + assert_series_equal(result, expected) + + def test_ix_setitem_corner(self): + inds = list(self.series.index[[5, 8, 12]]) + self.series.ix[inds] = 5 + self.assertRaises(Exception, self.series.ix.__setitem__, + inds + ['foo'], 5) + + def test_get_set_boolean_different_order(self): + ordered = self.series.sort_values() + + # setting + copy = self.series.copy() + copy[ordered > 0] = 0 + + expected = self.series.copy() + expected[expected > 0] = 0 + + assert_series_equal(copy, expected) + + # getting + sel = self.series[ordered > 0] + exp = self.series[self.series > 0] + assert_series_equal(sel,
exp) + + def test_setitem_na(self): + # these induce dtype changes + expected = Series([np.nan, 3, np.nan, 5, np.nan, 7, np.nan, 9, np.nan]) + s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10]) + s[::2] = np.nan + assert_series_equal(s, expected) + + # gets coerced to float, right? + expected = Series([np.nan, 1, np.nan, 0]) + s = Series([True, True, False, False]) + s[::2] = np.nan + assert_series_equal(s, expected) + + expected = Series([np.nan, np.nan, np.nan, np.nan, np.nan, 5, 6, 7, 8, + 9]) + s = Series(np.arange(10)) + s[:5] = np.nan + assert_series_equal(s, expected) + + def test_basic_indexing(self): + s = Series(np.random.randn(5), index=['a', 'b', 'a', 'a', 'b']) + + self.assertRaises(IndexError, s.__getitem__, 5) + self.assertRaises(IndexError, s.__setitem__, 5, 0) + + self.assertRaises(KeyError, s.__getitem__, 'c') + + s = s.sort_index() + + self.assertRaises(IndexError, s.__getitem__, 5) + self.assertRaises(IndexError, s.__setitem__, 5, 0) + + def test_int_indexing(self): + s = Series(np.random.randn(6), index=[0, 0, 1, 1, 2, 2]) + + self.assertRaises(KeyError, s.__getitem__, 5) + + self.assertRaises(KeyError, s.__getitem__, 'c') + + # not monotonic + s = Series(np.random.randn(6), index=[2, 2, 0, 0, 1, 1]) + + self.assertRaises(KeyError, s.__getitem__, 5) + + self.assertRaises(KeyError, s.__getitem__, 'c') + + def test_datetime_indexing(self): + from pandas import date_range + + index = date_range('1/1/2000', '1/7/2000') + index = index.repeat(3) + + s = Series(len(index), index=index) + stamp = Timestamp('1/8/2000') + + self.assertRaises(KeyError, s.__getitem__, stamp) + s[stamp] = 0 + self.assertEqual(s[stamp], 0) + + # not monotonic + s = Series(len(index), index=index) + s = s[::-1] + + self.assertRaises(KeyError, s.__getitem__, stamp) + s[stamp] = 0 + self.assertEqual(s[stamp], 0) + + def test_timedelta_assignment(self): + # GH 8209 + s = Series([]) + s.loc['B'] = timedelta(1) + tm.assert_series_equal(s, Series(Timedelta('1 days'), index=['B'])) + + s =
s.reindex(s.index.insert(0, 'A')) + tm.assert_series_equal(s, Series( + [np.nan, Timedelta('1 days')], index=['A', 'B'])) + + result = s.fillna(timedelta(1)) + expected = Series(Timedelta('1 days'), index=['A', 'B']) + tm.assert_series_equal(result, expected) + + s.loc['A'] = timedelta(1) + tm.assert_series_equal(s, expected) + + def test_underlying_data_conversion(self): + + # GH 4080 + df = DataFrame(dict((c, [1, 2, 3]) for c in ['a', 'b', 'c'])) + df.set_index(['a', 'b', 'c'], inplace=True) + s = Series([1], index=[(2, 2, 2)]) + df['val'] = 0 + df + df['val'].update(s) + + expected = DataFrame( + dict(a=[1, 2, 3], b=[1, 2, 3], c=[1, 2, 3], val=[0, 1, 0])) + expected.set_index(['a', 'b', 'c'], inplace=True) + tm.assert_frame_equal(df, expected) + + # GH 3970 + # these are chained assignments as well + pd.set_option('chained_assignment', None) + df = DataFrame({"aa": range(5), "bb": [2.2] * 5}) + df["cc"] = 0.0 + + ck = [True] * len(df) + + df["bb"].iloc[0] = .13 + + # TODO: unused + df_tmp = df.iloc[ck] # noqa + + df["bb"].iloc[0] = .15 + self.assertEqual(df['bb'].iloc[0], 0.15) + pd.set_option('chained_assignment', 'raise') + + # GH 3217 + df = DataFrame(dict(a=[1, 3], b=[np.nan, 2])) + df['c'] = np.nan + df['c'].update(pd.Series(['foo'], index=[0])) + + expected = DataFrame(dict(a=[1, 3], b=[np.nan, 2], c=['foo', np.nan])) + tm.assert_frame_equal(df, expected) + + def test_preserveRefs(self): + seq = self.ts[[5, 10, 15]] + seq[1] = np.NaN + self.assertFalse(np.isnan(self.ts[10])) + + def test_drop(self): + + # unique + s = Series([1, 2], index=['one', 'two']) + expected = Series([1], index=['one']) + result = s.drop(['two']) + assert_series_equal(result, expected) + result = s.drop('two', axis='rows') + assert_series_equal(result, expected) + + # non-unique + # GH 5248 + s = Series([1, 1, 2], index=['one', 'two', 'one']) + expected = Series([1, 2], index=['one', 'one']) + result = s.drop(['two'], axis=0) + assert_series_equal(result, expected) + result = 
s.drop('two') + assert_series_equal(result, expected) + + expected = Series([1], index=['two']) + result = s.drop(['one']) + assert_series_equal(result, expected) + result = s.drop('one') + assert_series_equal(result, expected) + + # single string/tuple-like + s = Series(range(3), index=list('abc')) + self.assertRaises(ValueError, s.drop, 'bc') + self.assertRaises(ValueError, s.drop, ('a', )) + + # errors='ignore' + s = Series(range(3), index=list('abc')) + result = s.drop('bc', errors='ignore') + assert_series_equal(result, s) + result = s.drop(['a', 'd'], errors='ignore') + expected = s.ix[1:] + assert_series_equal(result, expected) + + # bad axis + self.assertRaises(ValueError, s.drop, 'one', axis='columns') + + # GH 8522 + s = Series([2, 3], index=[True, False]) + self.assertTrue(s.index.is_object()) + result = s.drop(True) + expected = Series([3], index=[False]) + assert_series_equal(result, expected) + + def test_align(self): + def _check_align(a, b, how='left', fill=None): + aa, ab = a.align(b, join=how, fill_value=fill) + + join_index = a.index.join(b.index, how=how) + if fill is not None: + diff_a = aa.index.difference(join_index) + diff_b = ab.index.difference(join_index) + if len(diff_a) > 0: + self.assertTrue((aa.reindex(diff_a) == fill).all()) + if len(diff_b) > 0: + self.assertTrue((ab.reindex(diff_b) == fill).all()) + + ea = a.reindex(join_index) + eb = b.reindex(join_index) + + if fill is not None: + ea = ea.fillna(fill) + eb = eb.fillna(fill) + + assert_series_equal(aa, ea) + assert_series_equal(ab, eb) + self.assertEqual(aa.name, 'ts') + self.assertEqual(ea.name, 'ts') + self.assertEqual(ab.name, 'ts') + self.assertEqual(eb.name, 'ts') + + for kind in JOIN_TYPES: + _check_align(self.ts[2:], self.ts[:-5], how=kind) + _check_align(self.ts[2:], self.ts[:-5], how=kind, fill=-1) + + # empty left + _check_align(self.ts[:0], self.ts[:-5], how=kind) + _check_align(self.ts[:0], self.ts[:-5], how=kind, fill=-1) + + # empty right + _check_align(self.ts[:-5], 
self.ts[:0], how=kind) + _check_align(self.ts[:-5], self.ts[:0], how=kind, fill=-1) + + # both empty + _check_align(self.ts[:0], self.ts[:0], how=kind) + _check_align(self.ts[:0], self.ts[:0], how=kind, fill=-1) + + def test_align_fill_method(self): + def _check_align(a, b, how='left', method='pad', limit=None): + aa, ab = a.align(b, join=how, method=method, limit=limit) + + join_index = a.index.join(b.index, how=how) + ea = a.reindex(join_index) + eb = b.reindex(join_index) + + ea = ea.fillna(method=method, limit=limit) + eb = eb.fillna(method=method, limit=limit) + + assert_series_equal(aa, ea) + assert_series_equal(ab, eb) + + for kind in JOIN_TYPES: + for meth in ['pad', 'bfill']: + _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth) + _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth, + limit=1) + + # empty left + _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth) + _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth, + limit=1) + + # empty right + _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth) + _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth, + limit=1) + + # both empty + _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth) + _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth, + limit=1) + + def test_align_nocopy(self): + b = self.ts[:5].copy() + + # do copy + a = self.ts.copy() + ra, _ = a.align(b, join='left') + ra[:5] = 5 + self.assertFalse((a[:5] == 5).any()) + + # do not copy + a = self.ts.copy() + ra, _ = a.align(b, join='left', copy=False) + ra[:5] = 5 + self.assertTrue((a[:5] == 5).all()) + + # do copy + a = self.ts.copy() + b = self.ts[:5].copy() + _, rb = a.align(b, join='right') + rb[:3] = 5 + self.assertFalse((b[:3] == 5).any()) + + # do not copy + a = self.ts.copy() + b = self.ts[:5].copy() + _, rb = a.align(b, join='right', copy=False) + rb[:2] = 5 + self.assertTrue((b[:2] == 5).all()) + + def test_align_sameindex(self): + a, b = self.ts.align(self.ts, 
copy=False) + self.assertIs(a.index, self.ts.index) + self.assertIs(b.index, self.ts.index) + + # a, b = self.ts.align(self.ts, copy=True) + # self.assertIsNot(a.index, self.ts.index) + # self.assertIsNot(b.index, self.ts.index) + + def test_align_multiindex(self): + # GH 10665 + + midx = pd.MultiIndex.from_product([range(2), range(3), range(2)], + names=('a', 'b', 'c')) + idx = pd.Index(range(2), name='b') + s1 = pd.Series(np.arange(12, dtype='int64'), index=midx) + s2 = pd.Series(np.arange(2, dtype='int64'), index=idx) + + # these must be the same results (but flipped) + res1l, res1r = s1.align(s2, join='left') + res2l, res2r = s2.align(s1, join='right') + + expl = s1 + tm.assert_series_equal(expl, res1l) + tm.assert_series_equal(expl, res2r) + expr = pd.Series([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx) + tm.assert_series_equal(expr, res1r) + tm.assert_series_equal(expr, res2l) + + res1l, res1r = s1.align(s2, join='right') + res2l, res2r = s2.align(s1, join='left') + + exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)], + names=('a', 'b', 'c')) + expl = pd.Series([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx) + tm.assert_series_equal(expl, res1l) + tm.assert_series_equal(expl, res2r) + expr = pd.Series([0, 0, 1, 1] * 2, index=exp_idx) + tm.assert_series_equal(expr, res1r) + tm.assert_series_equal(expr, res2l) + + def test_reindex(self): + + identity = self.series.reindex(self.series.index) + + # __array_interface__ is not defined for older numpies + # and on some pythons + try: + self.assertTrue(np.may_share_memory(self.series.index, + identity.index)) + except (AttributeError): + pass + + self.assertTrue(identity.index.is_(self.series.index)) + self.assertTrue(identity.index.identical(self.series.index)) + + subIndex = self.series.index[10:20] + subSeries = self.series.reindex(subIndex) + + for idx, val in compat.iteritems(subSeries): + self.assertEqual(val, self.series[idx]) + + subIndex2 = self.ts.index[10:20] + subTS = self.ts.reindex(subIndex2) 
+
+        for idx, val in compat.iteritems(subTS):
+            self.assertEqual(val, self.ts[idx])
+        stuffSeries = self.ts.reindex(subIndex)
+
+        self.assertTrue(np.isnan(stuffSeries).all())
+
+        # This is extremely important for the Cython code to not screw up
+        nonContigIndex = self.ts.index[::2]
+        subNonContig = self.ts.reindex(nonContigIndex)
+        for idx, val in compat.iteritems(subNonContig):
+            self.assertEqual(val, self.ts[idx])
+
+        # reindex with no arguments should return a copy with the same index
+        result = self.ts.reindex()
+        self.assertFalse((result is self.ts))
+
+    def test_reindex_nan(self):
+        ts = Series([2, 3, 5, 7], index=[1, 4, nan, 8])
+
+        i, j = [nan, 1, nan, 8, 4, nan], [2, 0, 2, 3, 1, 2]
+        assert_series_equal(ts.reindex(i), ts.iloc[j])
+
+        ts.index = ts.index.astype('object')
+
+        # reindex coerces index.dtype to float, loc/iloc doesn't
+        assert_series_equal(ts.reindex(i), ts.iloc[j], check_index_type=False)
+
+    def test_reindex_corner(self):
+        # reindexing an empty series with a fill method used to break; make
+        # sure it still works
+        self.empty.reindex(self.ts.index, method='pad')  # it works
+
+        # corner case: pad empty series
+        reindexed = self.empty.reindex(self.ts.index, method='pad')
+
+        # pass non-Index
+        reindexed = self.ts.reindex(list(self.ts.index))
+        assert_series_equal(self.ts, reindexed)
+
+        # bad fill method
+        ts = self.ts[::2]
+        self.assertRaises(Exception, ts.reindex, self.ts.index, method='foo')
+
+    def test_reindex_pad(self):
+
+        s = Series(np.arange(10), dtype='int64')
+        s2 = s[::2]
+
+        reindexed = s2.reindex(s.index, method='pad')
+        reindexed2 = s2.reindex(s.index, method='ffill')
+        assert_series_equal(reindexed, reindexed2)
+
+        expected = Series([0, 0, 2, 2, 4, 4, 6, 6, 8, 8], index=np.arange(10))
+        assert_series_equal(reindexed, expected)
+
+        # GH4604
+        s = Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
+        new_index = ['a', 'g', 'c', 'f']
+        expected = Series([1, 1, 3, 3], index=new_index)
+
+        # this changes dtype because the ffill happens after
+        result = s.reindex(new_index).ffill()
+
assert_series_equal(result, expected.astype('float64'))
+
+        result = s.reindex(new_index).ffill(downcast='infer')
+        assert_series_equal(result, expected)
+
+        expected = Series([1, 5, 3, 5], index=new_index)
+        result = s.reindex(new_index, method='ffill')
+        assert_series_equal(result, expected)
+
+        # inference of new dtype
+        s = Series([True, False, False, True], index=list('abcd'))
+        new_index = 'agc'
+        result = s.reindex(list(new_index)).ffill()
+        expected = Series([True, True, False], index=list(new_index))
+        assert_series_equal(result, expected)
+
+        # GH4618 shifted series downcasting
+        s = Series(False, index=lrange(0, 5))
+        result = s.shift(1).fillna(method='bfill')
+        expected = Series(False, index=lrange(0, 5))
+        assert_series_equal(result, expected)
+
+    def test_reindex_nearest(self):
+        s = Series(np.arange(10, dtype='int64'))
+        target = [0.1, 0.9, 1.5, 2.0]
+        actual = s.reindex(target, method='nearest')
+        expected = Series(np.around(target).astype('int64'), target)
+        assert_series_equal(expected, actual)
+
+        actual = s.reindex_like(actual, method='nearest')
+        assert_series_equal(expected, actual)
+
+        actual = s.reindex_like(actual, method='nearest', tolerance=1)
+        assert_series_equal(expected, actual)
+
+        actual = s.reindex(target, method='nearest', tolerance=0.2)
+        expected = Series([0, 1, np.nan, 2], target)
+        assert_series_equal(expected, actual)
+
+    def test_reindex_backfill(self):
+        pass
+
+    def test_reindex_int(self):
+        ts = self.ts[::2]
+        int_ts = Series(np.zeros(len(ts), dtype=int), index=ts.index)
+
+        # this should work fine
+        reindexed_int = int_ts.reindex(self.ts.index)
+
+        # if NaNs introduced
+        self.assertEqual(reindexed_int.dtype, np.float_)
+
+        # NO NaNs introduced
+        reindexed_int = int_ts.reindex(int_ts.index[::2])
+        self.assertEqual(reindexed_int.dtype, np.int_)
+
+    def test_reindex_bool(self):
+
+        # A series other than float, int, string, or object
+        ts = self.ts[::2]
+        bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
+
+        #
this should work fine + reindexed_bool = bool_ts.reindex(self.ts.index) + + # if NaNs introduced + self.assertEqual(reindexed_bool.dtype, np.object_) + + # NO NaNs introduced + reindexed_bool = bool_ts.reindex(bool_ts.index[::2]) + self.assertEqual(reindexed_bool.dtype, np.bool_) + + def test_reindex_bool_pad(self): + # fail + ts = self.ts[5:] + bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index) + filled_bool = bool_ts.reindex(self.ts.index, method='pad') + self.assertTrue(isnull(filled_bool[:5]).all()) + + def test_reindex_like(self): + other = self.ts[::2] + assert_series_equal(self.ts.reindex(other.index), + self.ts.reindex_like(other)) + + # GH 7179 + day1 = datetime(2013, 3, 5) + day2 = datetime(2013, 5, 5) + day3 = datetime(2014, 3, 5) + + series1 = Series([5, None, None], [day1, day2, day3]) + series2 = Series([None, None], [day1, day3]) + + result = series1.reindex_like(series2, method='pad') + expected = Series([5, np.nan], index=[day1, day3]) + assert_series_equal(result, expected) + + def test_reindex_fill_value(self): + # ----------------------------------------------------------- + # floats + floats = Series([1., 2., 3.]) + result = floats.reindex([1, 2, 3]) + expected = Series([2., 3., np.nan], index=[1, 2, 3]) + assert_series_equal(result, expected) + + result = floats.reindex([1, 2, 3], fill_value=0) + expected = Series([2., 3., 0], index=[1, 2, 3]) + assert_series_equal(result, expected) + + # ----------------------------------------------------------- + # ints + ints = Series([1, 2, 3]) + + result = ints.reindex([1, 2, 3]) + expected = Series([2., 3., np.nan], index=[1, 2, 3]) + assert_series_equal(result, expected) + + # don't upcast + result = ints.reindex([1, 2, 3], fill_value=0) + expected = Series([2, 3, 0], index=[1, 2, 3]) + self.assertTrue(issubclass(result.dtype.type, np.integer)) + assert_series_equal(result, expected) + + # ----------------------------------------------------------- + # objects + objects = Series([1, 2, 3], 
dtype=object) + + result = objects.reindex([1, 2, 3]) + expected = Series([2, 3, np.nan], index=[1, 2, 3], dtype=object) + assert_series_equal(result, expected) + + result = objects.reindex([1, 2, 3], fill_value='foo') + expected = Series([2, 3, 'foo'], index=[1, 2, 3], dtype=object) + assert_series_equal(result, expected) + + # ------------------------------------------------------------ + # bools + bools = Series([True, False, True]) + + result = bools.reindex([1, 2, 3]) + expected = Series([False, True, np.nan], index=[1, 2, 3], dtype=object) + assert_series_equal(result, expected) + + result = bools.reindex([1, 2, 3], fill_value=False) + expected = Series([False, True, False], index=[1, 2, 3]) + assert_series_equal(result, expected) + + def test_select(self): + n = len(self.ts) + result = self.ts.select(lambda x: x >= self.ts.index[n // 2]) + expected = self.ts.reindex(self.ts.index[n // 2:]) + assert_series_equal(result, expected) + + result = self.ts.select(lambda x: x.weekday() == 2) + expected = self.ts[self.ts.index.weekday == 2] + assert_series_equal(result, expected) + + def test_cast_on_putmask(self): + + # GH 2746 + + # need to upcast + s = Series([1, 2], index=[1, 2], dtype='int64') + s[[True, False]] = Series([0], index=[1], dtype='int64') + expected = Series([0, 2], index=[1, 2], dtype='int64') + + assert_series_equal(s, expected) + + def test_type_promote_putmask(self): + + # GH8387: test that changing types does not break alignment + ts = Series(np.random.randn(100), index=np.arange(100, 0, -1)).round(5) + left, mask = ts.copy(), ts > 0 + right = ts[mask].copy().map(str) + left[mask] = right + assert_series_equal(left, ts.map(lambda t: str(t) if t > 0 else t)) + + s = Series([0, 1, 2, 0]) + mask = s > 0 + s2 = s[mask].map(str) + s[mask] = s2 + assert_series_equal(s, Series([0, '1', '2', 0])) + + s = Series([0, 'foo', 'bar', 0]) + mask = Series([False, True, True, False]) + s2 = s[mask] + s[mask] = s2 + assert_series_equal(s, Series([0, 'foo', 
'bar', 0])) + + def test_head_tail(self): + assert_series_equal(self.series.head(), self.series[:5]) + assert_series_equal(self.series.head(0), self.series[0:0]) + assert_series_equal(self.series.tail(), self.series[-5:]) + assert_series_equal(self.series.tail(0), self.series[0:0]) + + def test_multilevel_preserve_name(self): + index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two', + 'three']], + labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], + [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], + names=['first', 'second']) + s = Series(np.random.randn(len(index)), index=index, name='sth') + + result = s['foo'] + result2 = s.ix['foo'] + self.assertEqual(result.name, s.name) + self.assertEqual(result2.name, s.name) diff --git a/pandas/tests/series/test_internals.py b/pandas/tests/series/test_internals.py new file mode 100644 index 0000000000000..93bd7f0eec7c5 --- /dev/null +++ b/pandas/tests/series/test_internals.py @@ -0,0 +1,310 @@ +# coding=utf-8 +# pylint: disable-msg=E1101,W0612 + +from datetime import datetime + +from numpy import nan +import numpy as np + +from pandas import Series +from pandas.tseries.index import Timestamp +import pandas.lib as lib + +from pandas.util.testing import assert_series_equal +import pandas.util.testing as tm + + +class TestSeriesInternals(tm.TestCase): + + _multiprocess_can_split_ = True + + def test_convert_objects(self): + + s = Series([1., 2, 3], index=['a', 'b', 'c']) + with tm.assert_produces_warning(FutureWarning): + result = s.convert_objects(convert_dates=False, + convert_numeric=True) + assert_series_equal(result, s) + + # force numeric conversion + r = s.copy().astype('O') + r['a'] = '1' + with tm.assert_produces_warning(FutureWarning): + result = r.convert_objects(convert_dates=False, + convert_numeric=True) + assert_series_equal(result, s) + + r = s.copy().astype('O') + r['a'] = '1.' 
+        with tm.assert_produces_warning(FutureWarning):
+            result = r.convert_objects(convert_dates=False,
+                                       convert_numeric=True)
+        assert_series_equal(result, s)
+
+        r = s.copy().astype('O')
+        r['a'] = 'garbled'
+        expected = s.copy()
+        expected['a'] = np.nan
+        with tm.assert_produces_warning(FutureWarning):
+            result = r.convert_objects(convert_dates=False,
+                                       convert_numeric=True)
+        assert_series_equal(result, expected)
+
+        # GH 4119, not converting a mixed type (e.g. floats and object)
+        s = Series([1, 'na', 3, 4])
+        with tm.assert_produces_warning(FutureWarning):
+            result = s.convert_objects(convert_numeric=True)
+        expected = Series([1, np.nan, 3, 4])
+        assert_series_equal(result, expected)
+
+        s = Series([1, '', 3, 4])
+        with tm.assert_produces_warning(FutureWarning):
+            result = s.convert_objects(convert_numeric=True)
+        expected = Series([1, np.nan, 3, 4])
+        assert_series_equal(result, expected)
+
+        # dates
+        s = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0),
+                    datetime(2001, 1, 3, 0, 0)])
+        s2 = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0),
+                     datetime(2001, 1, 3, 0, 0), 'foo', 1.0, 1,
+                     Timestamp('20010104'), '20010105'],
+                    dtype='O')
+        with tm.assert_produces_warning(FutureWarning):
+            result = s.convert_objects(convert_dates=True,
+                                       convert_numeric=False)
+        expected = Series([Timestamp('20010101'), Timestamp('20010102'),
+                           Timestamp('20010103')], dtype='M8[ns]')
+        assert_series_equal(result, expected)
+
+        with tm.assert_produces_warning(FutureWarning):
+            result = s.convert_objects(convert_dates='coerce',
+                                       convert_numeric=False)
+        with tm.assert_produces_warning(FutureWarning):
+            result = s.convert_objects(convert_dates='coerce',
+                                       convert_numeric=True)
+        assert_series_equal(result, expected)
+
+        expected = Series([Timestamp('20010101'), Timestamp('20010102'),
+                           Timestamp('20010103'),
+                           lib.NaT, lib.NaT, lib.NaT, Timestamp('20010104'),
+                           Timestamp('20010105')], dtype='M8[ns]')
+        with tm.assert_produces_warning(FutureWarning):
+            result = s2.convert_objects(convert_dates='coerce',
+                                        convert_numeric=False)
+        assert_series_equal(result, expected)
+        with tm.assert_produces_warning(FutureWarning):
+            result = s2.convert_objects(convert_dates='coerce',
+                                        convert_numeric=True)
+        assert_series_equal(result, expected)
+
+        # preserve all-nans (if convert_dates='coerce')
+        s = Series(['foo', 'bar', 1, 1.0], dtype='O')
+        with tm.assert_produces_warning(FutureWarning):
+            result = s.convert_objects(convert_dates='coerce',
+                                       convert_numeric=False)
+        assert_series_equal(result, s)
+
+        # preserve if non-object
+        s = Series([1], dtype='float32')
+        with tm.assert_produces_warning(FutureWarning):
+            result = s.convert_objects(convert_dates='coerce',
+                                       convert_numeric=False)
+        assert_series_equal(result, s)
+
+        # r = s.copy()
+        # r[0] = np.nan
+        # result = r.convert_objects(convert_dates=True,convert_numeric=False)
+        # self.assertEqual(result.dtype, 'M8[ns]')
+
+        # dateutil parses some single letters into today's value as a date
+        for x in 'abcdefghijklmnopqrstuvwxyz':
+            s = Series([x])
+            with tm.assert_produces_warning(FutureWarning):
+                result = s.convert_objects(convert_dates='coerce')
+            assert_series_equal(result, s)
+            s = Series([x.upper()])
+            with tm.assert_produces_warning(FutureWarning):
+                result = s.convert_objects(convert_dates='coerce')
+            assert_series_equal(result, s)
+
+    def test_convert_objects_preserve_bool(self):
+        s = Series([1, True, 3, 5], dtype=object)
+        with tm.assert_produces_warning(FutureWarning):
+            r = s.convert_objects(convert_numeric=True)
+        e = Series([1, 1, 3, 5], dtype='i8')
+        tm.assert_series_equal(r, e)
+
+    def test_convert_objects_preserve_all_bool(self):
+        s = Series([False, True, False, False], dtype=object)
+        with tm.assert_produces_warning(FutureWarning):
+            r = s.convert_objects(convert_numeric=True)
+        e = Series([False, True, False, False], dtype=bool)
+        tm.assert_series_equal(r, e)
+
+    # GH 10265
+    def test_convert(self):
+        # Tests: All to nans, coerce, true
+        # Test
coercion returns correct type + s = Series(['a', 'b', 'c']) + results = s._convert(datetime=True, coerce=True) + expected = Series([lib.NaT] * 3) + assert_series_equal(results, expected) + + results = s._convert(numeric=True, coerce=True) + expected = Series([np.nan] * 3) + assert_series_equal(results, expected) + + expected = Series([lib.NaT] * 3, dtype=np.dtype('m8[ns]')) + results = s._convert(timedelta=True, coerce=True) + assert_series_equal(results, expected) + + dt = datetime(2001, 1, 1, 0, 0) + td = dt - datetime(2000, 1, 1, 0, 0) + + # Test coercion with mixed types + s = Series(['a', '3.1415', dt, td]) + results = s._convert(datetime=True, coerce=True) + expected = Series([lib.NaT, lib.NaT, dt, lib.NaT]) + assert_series_equal(results, expected) + + results = s._convert(numeric=True, coerce=True) + expected = Series([nan, 3.1415, nan, nan]) + assert_series_equal(results, expected) + + results = s._convert(timedelta=True, coerce=True) + expected = Series([lib.NaT, lib.NaT, lib.NaT, td], + dtype=np.dtype('m8[ns]')) + assert_series_equal(results, expected) + + # Test standard conversion returns original + results = s._convert(datetime=True) + assert_series_equal(results, s) + results = s._convert(numeric=True) + expected = Series([nan, 3.1415, nan, nan]) + assert_series_equal(results, expected) + results = s._convert(timedelta=True) + assert_series_equal(results, s) + + # test pass-through and non-conversion when other types selected + s = Series(['1.0', '2.0', '3.0']) + results = s._convert(datetime=True, numeric=True, timedelta=True) + expected = Series([1.0, 2.0, 3.0]) + assert_series_equal(results, expected) + results = s._convert(True, False, True) + assert_series_equal(results, s) + + s = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)], + dtype='O') + results = s._convert(datetime=True, numeric=True, timedelta=True) + expected = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, + 0)]) + assert_series_equal(results, expected) + 
results = s._convert(datetime=False, numeric=True, timedelta=True)
+        assert_series_equal(results, s)
+
+        td = datetime(2001, 1, 1, 0, 0) - datetime(2000, 1, 1, 0, 0)
+        s = Series([td, td], dtype='O')
+        results = s._convert(datetime=True, numeric=True, timedelta=True)
+        expected = Series([td, td])
+        assert_series_equal(results, expected)
+        results = s._convert(True, True, False)
+        assert_series_equal(results, s)
+
+        s = Series([1., 2, 3], index=['a', 'b', 'c'])
+        result = s._convert(numeric=True)
+        assert_series_equal(result, s)
+
+        # force numeric conversion
+        r = s.copy().astype('O')
+        r['a'] = '1'
+        result = r._convert(numeric=True)
+        assert_series_equal(result, s)
+
+        r = s.copy().astype('O')
+        r['a'] = '1.'
+        result = r._convert(numeric=True)
+        assert_series_equal(result, s)
+
+        r = s.copy().astype('O')
+        r['a'] = 'garbled'
+        result = r._convert(numeric=True)
+        expected = s.copy()
+        expected['a'] = nan
+        assert_series_equal(result, expected)
+
+        # GH 4119, not converting a mixed type (e.g. floats and object)
+        s = Series([1, 'na', 3, 4])
+        result = s._convert(datetime=True, numeric=True)
+        expected = Series([1, nan, 3, 4])
+        assert_series_equal(result, expected)
+
+        s = Series([1, '', 3, 4])
+        result = s._convert(datetime=True, numeric=True)
+        assert_series_equal(result, expected)
+
+        # dates
+        s = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0),
+                    datetime(2001, 1, 3, 0, 0)])
+        s2 = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0),
+                     datetime(2001, 1, 3, 0, 0), 'foo', 1.0, 1,
+                     Timestamp('20010104'), '20010105'], dtype='O')
+
+        result = s._convert(datetime=True)
+        expected = Series([Timestamp('20010101'), Timestamp('20010102'),
+                           Timestamp('20010103')], dtype='M8[ns]')
+        assert_series_equal(result, expected)
+
+        result = s._convert(datetime=True, coerce=True)
+        assert_series_equal(result, expected)
+
+        expected = Series([Timestamp('20010101'), Timestamp('20010102'),
+                           Timestamp('20010103'), lib.NaT, lib.NaT, lib.NaT,
+                           Timestamp('20010104'), Timestamp('20010105')],
+                          dtype='M8[ns]')
+        result = s2._convert(datetime=True, numeric=False, timedelta=False,
+                             coerce=True)
+        assert_series_equal(result, expected)
+        result = s2._convert(datetime=True, coerce=True)
+        assert_series_equal(result, expected)
+
+        s = Series(['foo', 'bar', 1, 1.0], dtype='O')
+        result = s._convert(datetime=True, coerce=True)
+        expected = Series([lib.NaT] * 4)
+        assert_series_equal(result, expected)
+
+        # preserve if non-object
+        s = Series([1], dtype='float32')
+        result = s._convert(datetime=True, coerce=True)
+        assert_series_equal(result, s)
+
+        # r = s.copy()
+        # r[0] = np.nan
+        # result = r._convert(convert_dates=True,convert_numeric=False)
+        # self.assertEqual(result.dtype, 'M8[ns]')
+
+        # dateutil parses some single letters into today's value as a date
+        expected = Series([lib.NaT])
+        for x in 'abcdefghijklmnopqrstuvwxyz':
+            s = Series([x])
+            result = s._convert(datetime=True, coerce=True)
+            assert_series_equal(result, expected)
+            s = Series([x.upper()])
+            result = s._convert(datetime=True, coerce=True)
+            assert_series_equal(result, expected)
+
+    def test_convert_no_arg_error(self):
+        s = Series(['1.0', '2'])
+        self.assertRaises(ValueError, s._convert)
+
+    def test_convert_preserve_bool(self):
+        s = Series([1, True, 3, 5], dtype=object)
+        r = s._convert(datetime=True, numeric=True)
+        e = Series([1, 1, 3, 5], dtype='i8')
+        tm.assert_series_equal(r, e)
+
+    def test_convert_preserve_all_bool(self):
+        s = Series([False, True, False, False], dtype=object)
+        r = s._convert(datetime=True, numeric=True)
+        e = Series([False, True, False, False], dtype=bool)
+        tm.assert_series_equal(r, e)
diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py
new file mode 100644
index 0000000000000..beb023215e6ce
--- /dev/null
+++ b/pandas/tests/series/test_io.py
@@ -0,0 +1,175 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+from datetime import datetime
+
+import numpy as np
+import pandas as pd
+
+from pandas import Series, DataFrame + +from pandas.compat import StringIO, u +from pandas.util.testing import (assert_series_equal, assert_almost_equal, + assert_frame_equal, ensure_clean) +import pandas.util.testing as tm + +from .common import TestData + + +class TestSeriesIO(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_from_csv(self): + + with ensure_clean() as path: + self.ts.to_csv(path) + ts = Series.from_csv(path) + assert_series_equal(self.ts, ts, check_names=False) + self.assertTrue(ts.name is None) + self.assertTrue(ts.index.name is None) + + # GH10483 + self.ts.to_csv(path, header=True) + ts_h = Series.from_csv(path, header=0) + self.assertTrue(ts_h.name == 'ts') + + self.series.to_csv(path) + series = Series.from_csv(path) + self.assertIsNone(series.name) + self.assertIsNone(series.index.name) + assert_series_equal(self.series, series, check_names=False) + self.assertTrue(series.name is None) + self.assertTrue(series.index.name is None) + + self.series.to_csv(path, header=True) + series_h = Series.from_csv(path, header=0) + self.assertTrue(series_h.name == 'series') + + outfile = open(path, 'w') + outfile.write('1998-01-01|1.0\n1999-01-01|2.0') + outfile.close() + series = Series.from_csv(path, sep='|') + checkseries = Series({datetime(1998, 1, 1): 1.0, + datetime(1999, 1, 1): 2.0}) + assert_series_equal(checkseries, series) + + series = Series.from_csv(path, sep='|', parse_dates=False) + checkseries = Series({'1998-01-01': 1.0, '1999-01-01': 2.0}) + assert_series_equal(checkseries, series) + + def test_to_csv(self): + import io + + with ensure_clean() as path: + self.ts.to_csv(path) + + lines = io.open(path, newline=None).readlines() + assert (lines[1] != '\n') + + self.ts.to_csv(path, index=False) + arr = np.loadtxt(path) + assert_almost_equal(arr, self.ts.values) + + def test_to_csv_unicode_index(self): + buf = StringIO() + s = Series([u("\u05d0"), "d2"], index=[u("\u05d0"), u("\u05d1")]) + + s.to_csv(buf, 
encoding='UTF-8') + buf.seek(0) + + s2 = Series.from_csv(buf, index_col=0, encoding='UTF-8') + + assert_series_equal(s, s2) + + def test_tolist(self): + rs = self.ts.tolist() + xp = self.ts.values.tolist() + assert_almost_equal(rs, xp) + + # datetime64 + s = Series(self.ts.index) + rs = s.tolist() + self.assertEqual(self.ts.index[0], rs[0]) + + def test_to_frame(self): + self.ts.name = None + rs = self.ts.to_frame() + xp = pd.DataFrame(self.ts.values, index=self.ts.index) + assert_frame_equal(rs, xp) + + self.ts.name = 'testname' + rs = self.ts.to_frame() + xp = pd.DataFrame(dict(testname=self.ts.values), index=self.ts.index) + assert_frame_equal(rs, xp) + + rs = self.ts.to_frame(name='testdifferent') + xp = pd.DataFrame( + dict(testdifferent=self.ts.values), index=self.ts.index) + assert_frame_equal(rs, xp) + + def test_to_dict(self): + self.assert_numpy_array_equal(Series(self.ts.to_dict()), self.ts) + + def test_to_csv_float_format(self): + + with ensure_clean() as filename: + ser = Series([0.123456, 0.234567, 0.567567]) + ser.to_csv(filename, float_format='%.2f') + + rs = Series.from_csv(filename) + xp = Series([0.12, 0.23, 0.57]) + assert_series_equal(rs, xp) + + def test_to_csv_list_entries(self): + s = Series(['jack and jill', 'jesse and frank']) + + split = s.str.split(r'\s+and\s+') + + buf = StringIO() + split.to_csv(buf) + + def test_to_csv_path_is_none(self): + # GH 8215 + # Series.to_csv() was returning None, inconsistent with + # DataFrame.to_csv() which returned string + s = Series([1, 2, 3]) + csv_str = s.to_csv(path=None) + self.assertIsInstance(csv_str, str) + + def test_timeseries_periodindex(self): + # GH2891 + from pandas import period_range + prng = period_range('1/1/2011', '1/1/2012', freq='M') + ts = Series(np.random.randn(len(prng)), prng) + new_ts = self.round_trip_pickle(ts) + self.assertEqual(new_ts.index.freq, 'M') + + def test_pickle_preserve_name(self): + unpickled = self._pickle_roundtrip_name(self.ts) + 
+        self.assertEqual(unpickled.name, self.ts.name)
+
+    def _pickle_roundtrip_name(self, obj):
+
+        with ensure_clean() as path:
+            obj.to_pickle(path)
+            unpickled = pd.read_pickle(path)
+            return unpickled
+
+    def test_to_frame_expanddim(self):
+        # GH 9762
+
+        class SubclassedSeries(Series):
+
+            @property
+            def _constructor_expanddim(self):
+                return SubclassedFrame
+
+        class SubclassedFrame(DataFrame):
+            pass
+
+        s = SubclassedSeries([1, 2, 3], name='X')
+        result = s.to_frame()
+        self.assertTrue(isinstance(result, SubclassedFrame))
+        expected = SubclassedFrame({'X': [1, 2, 3]})
+        assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/test_misc_api.py b/pandas/tests/series/test_misc_api.py
new file mode 100644
index 0000000000000..acf002f316513
--- /dev/null
+++ b/pandas/tests/series/test_misc_api.py
@@ -0,0 +1,307 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+import numpy as np
+import pandas as pd
+
+from pandas import Index, Series, DataFrame, date_range
+from pandas.tseries.index import Timestamp
+import pandas.core.common as com
+
+from pandas.compat import range
+from pandas import compat
+from pandas.util.testing import (assert_series_equal,
+                                 ensure_clean)
+import pandas.util.testing as tm
+
+from .common import TestData
+
+
+class SharedWithSparse(object):
+
+    def test_scalarop_preserve_name(self):
+        result = self.ts * 2
+        self.assertEqual(result.name, self.ts.name)
+
+    def test_copy_name(self):
+        result = self.ts.copy()
+        self.assertEqual(result.name, self.ts.name)
+
+    def test_copy_index_name_checking(self):
+        # don't want to be able to modify the index stored elsewhere after
+        # making a copy
+
+        self.ts.index.name = None
+        self.assertIsNone(self.ts.index.name)
+        self.assertIs(self.ts, self.ts)
+
+        cp = self.ts.copy()
+        cp.index.name = 'foo'
+        com.pprint_thing(self.ts.index.name)
+        self.assertIsNone(self.ts.index.name)
+
+    def test_append_preserve_name(self):
+        result = self.ts[:5].append(self.ts[5:])
+        self.assertEqual(result.name, self.ts.name)
+
+    def test_binop_maybe_preserve_name(self):
+        # names match, preserve
+        result = self.ts * self.ts
+        self.assertEqual(result.name, self.ts.name)
+        result = self.ts.mul(self.ts)
+        self.assertEqual(result.name, self.ts.name)
+
+        result = self.ts * self.ts[:-2]
+        self.assertEqual(result.name, self.ts.name)
+
+        # names don't match, don't preserve
+        cp = self.ts.copy()
+        cp.name = 'something else'
+        result = self.ts + cp
+        self.assertIsNone(result.name)
+        result = self.ts.add(cp)
+        self.assertIsNone(result.name)
+
+        ops = ['add', 'sub', 'mul', 'div', 'truediv', 'floordiv', 'mod', 'pow']
+        ops = ops + ['r' + op for op in ops]
+        for op in ops:
+            # names match, preserve
+            s = self.ts.copy()
+            result = getattr(s, op)(s)
+            self.assertEqual(result.name, self.ts.name)
+
+            # names don't match, don't preserve
+            cp = self.ts.copy()
+            cp.name = 'changed'
+            result = getattr(s, op)(cp)
+            self.assertIsNone(result.name)
+
+    def test_combine_first_name(self):
+        result = self.ts.combine_first(self.ts[:5])
+        self.assertEqual(result.name, self.ts.name)
+
+    def test_getitem_preserve_name(self):
+        result = self.ts[self.ts > 0]
+        self.assertEqual(result.name, self.ts.name)
+
+        result = self.ts[[0, 2, 4]]
+        self.assertEqual(result.name, self.ts.name)
+
+        result = self.ts[5:10]
+        self.assertEqual(result.name, self.ts.name)
+
+    def test_pickle(self):
+        unp_series = self._pickle_roundtrip(self.series)
+        unp_ts = self._pickle_roundtrip(self.ts)
+        assert_series_equal(unp_series, self.series)
+        assert_series_equal(unp_ts, self.ts)
+
+    def _pickle_roundtrip(self, obj):
+
+        with ensure_clean() as path:
+            obj.to_pickle(path)
+            unpickled = pd.read_pickle(path)
+            return unpickled
+
+    def test_argsort_preserve_name(self):
+        result = self.ts.argsort()
+        self.assertEqual(result.name, self.ts.name)
+
+    def test_sort_index_name(self):
+        result = self.ts.sort_index(ascending=False)
+        self.assertEqual(result.name, self.ts.name)
+
+    def test_to_sparse_pass_name(self):
+        result = self.ts.to_sparse()
+        self.assertEqual(result.name, self.ts.name)
+
+
+class TestSeriesMisc(TestData, SharedWithSparse, tm.TestCase):
+
+    _multiprocess_can_split_ = True
+
+    def test_tab_completion(self):
+        # GH 9910
+        s = Series(list('abcd'))
+        # Series of str values should have .str but not .dt/.cat in __dir__
+        self.assertTrue('str' in dir(s))
+        self.assertTrue('dt' not in dir(s))
+        self.assertTrue('cat' not in dir(s))
+
+        # similarly for .dt
+        s = Series(date_range('1/1/2015', periods=5))
+        self.assertTrue('dt' in dir(s))
+        self.assertTrue('str' not in dir(s))
+        self.assertTrue('cat' not in dir(s))
+
+        # similarly for .cat, but with the twist that str and dt should be
+        # there if the categories are of that type first cat and str
+        s = Series(list('abbcd'), dtype="category")
+        self.assertTrue('cat' in dir(s))
+        self.assertTrue('str' in dir(s))  # as it is a string categorical
+        self.assertTrue('dt' not in dir(s))
+
+        # similar to cat and str
+        s = Series(date_range('1/1/2015', periods=5)).astype("category")
+        self.assertTrue('cat' in dir(s))
+        self.assertTrue('str' not in dir(s))
+        self.assertTrue('dt' in dir(s))  # as it is a datetime categorical
+
+    def test_not_hashable(self):
+        s_empty = Series()
+        s = Series([1])
+        self.assertRaises(TypeError, hash, s_empty)
+        self.assertRaises(TypeError, hash, s)
+
+    def test_contains(self):
+        tm.assert_contains_all(self.ts.index, self.ts)
+
+    def test_iter(self):
+        for i, val in enumerate(self.series):
+            self.assertEqual(val, self.series[i])
+
+        for i, val in enumerate(self.ts):
+            self.assertEqual(val, self.ts[i])
+
+    def test_keys(self):
+        # HACK: By doing this in two stages, we avoid 2to3 wrapping the call
+        # to .keys() in a list()
+        getkeys = self.ts.keys
+        self.assertIs(getkeys(), self.ts.index)
+
+    def test_values(self):
+        self.assert_numpy_array_equal(self.ts, self.ts.values)
+
+    def test_iteritems(self):
+        for idx, val in compat.iteritems(self.series):
+            self.assertEqual(val, self.series[idx])
+
+        for idx, val in compat.iteritems(self.ts):
+            self.assertEqual(val, self.ts[idx])
+
+        # assert is lazy (generators don't define reverse, lists do)
+        self.assertFalse(hasattr(self.series.iteritems(), 'reverse'))
+
+    def test_raise_on_info(self):
+        s = Series(np.random.randn(10))
+        with tm.assertRaises(AttributeError):
+            s.info()
+
+    def test_copy(self):
+
+        for deep in [None, False, True]:
+            s = Series(np.arange(10), dtype='float64')
+
+            # default deep is True
+            if deep is None:
+                s2 = s.copy()
+            else:
+                s2 = s.copy(deep=deep)
+
+            s2[::2] = np.NaN
+
+            if deep is None or deep is True:
+                # Did not modify original Series
+                self.assertTrue(np.isnan(s2[0]))
+                self.assertFalse(np.isnan(s[0]))
+            else:
+
+                # we DID modify the original Series
+                self.assertTrue(np.isnan(s2[0]))
+                self.assertTrue(np.isnan(s[0]))
+
+        # GH 11794
+        # copy of tz-aware
+        expected = Series([Timestamp('2012/01/01', tz='UTC')])
+        expected2 = Series([Timestamp('1999/01/01', tz='UTC')])
+
+        for deep in [None, False, True]:
+            s = Series([Timestamp('2012/01/01', tz='UTC')])
+
+            if deep is None:
+                s2 = s.copy()
+            else:
+                s2 = s.copy(deep=deep)
+
+            s2[0] = pd.Timestamp('1999/01/01', tz='UTC')
+
+            # default deep is True
+            if deep is None or deep is True:
+                assert_series_equal(s, expected)
+                assert_series_equal(s2, expected2)
+            else:
+                assert_series_equal(s, expected2)
+                assert_series_equal(s2, expected2)
+
+    def test_axis_alias(self):
+        s = Series([1, 2, np.nan])
+        assert_series_equal(s.dropna(axis='rows'), s.dropna(axis='index'))
+        self.assertEqual(s.dropna().sum('rows'), 3)
+        self.assertEqual(s._get_axis_number('rows'), 0)
+        self.assertEqual(s._get_axis_name('rows'), 'index')
+
+    def test_numpy_unique(self):
+        # it works!
+        np.unique(self.ts)
+
+    def test_ndarray_compat(self):
+
+        # test numpy compat with Series as sub-class of NDFrame
+        tsdf = DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'],
+                         index=date_range('1/1/2000', periods=1000))
+
+        def f(x):
+            return x[x.argmax()]
+
+        result = tsdf.apply(f)
+        expected = tsdf.max()
+        assert_series_equal(result, expected)
+
+        # .item()
+        s = Series([1])
+        result = s.item()
+        self.assertEqual(result, 1)
+        self.assertEqual(s.item(), s.iloc[0])
+
+        # using an ndarray like function
+        s = Series(np.random.randn(10))
+        result = np.ones_like(s)
+        expected = Series(1, index=range(10), dtype='float64')
+        # assert_series_equal(result,expected)
+
+        # ravel
+        s = Series(np.random.randn(10))
+        tm.assert_almost_equal(s.ravel(order='F'), s.values.ravel(order='F'))
+
+        # compress
+        # GH 6658
+        s = Series([0, 1., -1], index=list('abc'))
+        result = np.compress(s > 0, s)
+        assert_series_equal(result, Series([1.], index=['b']))
+
+        result = np.compress(s < -1, s)
+        # result empty Index(dtype=object) as the same as original
+        exp = Series([], dtype='float64', index=Index([], dtype='object'))
+        assert_series_equal(result, exp)
+
+        s = Series([0, 1., -1], index=[.1, .2, .3])
+        result = np.compress(s > 0, s)
+        assert_series_equal(result, Series([1.], index=[.2]))
+
+        result = np.compress(s < -1, s)
+        # result empty Float64Index as the same as original
+        exp = Series([], dtype='float64', index=Index([], dtype='float64'))
+        assert_series_equal(result, exp)
+
+    def test_str_attribute(self):
+        # GH9068
+        methods = ['strip', 'rstrip', 'lstrip']
+        s = Series([' jack', 'jill ', ' jesse ', 'frank'])
+        for method in methods:
+            expected = Series([getattr(str, method)(x) for x in s.values])
+            assert_series_equal(getattr(Series.str, method)(s.str), expected)
+
+        # str accessor only valid with string values
+        s = Series(range(5))
+        with self.assertRaisesRegexp(AttributeError, 'only use .str accessor'):
+            s.str.repeat(2)
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
new file mode 100644
index 0000000000000..4bd77c01db9d0
--- /dev/null
+++ b/pandas/tests/series/test_missing.py
@@ -0,0 +1,454 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+from datetime import timedelta
+
+from numpy import nan
+import numpy as np
+import pandas as pd
+
+from pandas import Series, isnull
+from pandas.tseries.index import Timestamp
+
+from pandas.compat import range
+from pandas.util.testing import assert_series_equal
+import pandas.util.testing as tm
+
+from .common import TestData
+
+
+class TestSeriesMissingData(TestData, tm.TestCase):
+
+    _multiprocess_can_split_ = True
+
+    def test_timedelta_fillna(self):
+        # GH 3371
+        s = Series([Timestamp('20130101'), Timestamp('20130101'), Timestamp(
+            '20130102'), Timestamp('20130103 9:01:01')])
+        td = s.diff()
+
+        # reg fillna
+        result = td.fillna(0)
+        expected = Series([timedelta(0), timedelta(0), timedelta(1), timedelta(
+            days=1, seconds=9 * 3600 + 60 + 1)])
+        assert_series_equal(result, expected)
+
+        # interpreted as seconds
+        result = td.fillna(1)
+        expected = Series([timedelta(seconds=1), timedelta(0), timedelta(1),
+                           timedelta(days=1, seconds=9 * 3600 + 60 + 1)])
+        assert_series_equal(result, expected)
+
+        result = td.fillna(timedelta(days=1, seconds=1))
+        expected = Series([timedelta(days=1, seconds=1), timedelta(
+            0), timedelta(1), timedelta(days=1, seconds=9 * 3600 + 60 + 1)])
+        assert_series_equal(result, expected)
+
+        result = td.fillna(np.timedelta64(int(1e9)))
+        expected = Series([timedelta(seconds=1), timedelta(0), timedelta(1),
+                           timedelta(days=1, seconds=9 * 3600 + 60 + 1)])
+        assert_series_equal(result, expected)
+
+        from pandas import tslib
+        result = td.fillna(tslib.NaT)
+        expected = Series([tslib.NaT, timedelta(0), timedelta(1),
+                           timedelta(days=1, seconds=9 * 3600 + 60 + 1)],
+                          dtype='m8[ns]')
+        assert_series_equal(result, expected)
+
+        # ffill
+        td[2] = np.nan
+        result = td.ffill()
+        expected = td.fillna(0)
+        expected[0] = np.nan
+        assert_series_equal(result, expected)
+
+        # bfill
+        td[2] = np.nan
+        result = td.bfill()
+        expected = td.fillna(0)
+        expected[2] = timedelta(days=1, seconds=9 * 3600 + 60 + 1)
+        assert_series_equal(result, expected)
+
+    def test_datetime64_fillna(self):
+
+        s = Series([Timestamp('20130101'), Timestamp('20130101'), Timestamp(
+            '20130102'), Timestamp('20130103 9:01:01')])
+        s[2] = np.nan
+
+        # reg fillna
+        result = s.fillna(Timestamp('20130104'))
+        expected = Series([Timestamp('20130101'), Timestamp(
+            '20130101'), Timestamp('20130104'), Timestamp('20130103 9:01:01')])
+        assert_series_equal(result, expected)
+
+        from pandas import tslib
+        result = s.fillna(tslib.NaT)
+        expected = s
+        assert_series_equal(result, expected)
+
+        # ffill
+        result = s.ffill()
+        expected = Series([Timestamp('20130101'), Timestamp(
+            '20130101'), Timestamp('20130101'), Timestamp('20130103 9:01:01')])
+        assert_series_equal(result, expected)
+
+        # bfill
+        result = s.bfill()
+        expected = Series([Timestamp('20130101'), Timestamp('20130101'),
+                           Timestamp('20130103 9:01:01'), Timestamp(
+                               '20130103 9:01:01')])
+        assert_series_equal(result, expected)
+
+        # GH 6587
+        # make sure that we are treating as integer when filling
+        # this also tests inference of a datetime-like with NaT's
+        s = Series([pd.NaT, pd.NaT, '2013-08-05 15:30:00.000001'])
+        expected = Series(
+            ['2013-08-05 15:30:00.000001', '2013-08-05 15:30:00.000001',
+             '2013-08-05 15:30:00.000001'], dtype='M8[ns]')
+        result = s.fillna(method='backfill')
+        assert_series_equal(result, expected)
+
+    def test_datetime64_tz_fillna(self):
+        for tz in ['US/Eastern', 'Asia/Tokyo']:
+            # DatetimeBlock
+            s = Series([Timestamp('2011-01-01 10:00'), pd.NaT, Timestamp(
+                '2011-01-03 10:00'), pd.NaT])
+            result = s.fillna(pd.Timestamp('2011-01-02 10:00'))
+            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp(
+                '2011-01-02 10:00'), Timestamp('2011-01-03 10:00'), Timestamp(
+                '2011-01-02 10:00')])
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna(pd.Timestamp('2011-01-02 10:00', tz=tz))
+            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp(
+                '2011-01-02 10:00', tz=tz), Timestamp('2011-01-03 10:00'),
+                Timestamp('2011-01-02 10:00', tz=tz)])
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna('AAA')
+            expected = Series([Timestamp('2011-01-01 10:00'), 'AAA',
+                               Timestamp('2011-01-03 10:00'), 'AAA'],
+                              dtype=object)
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz),
+                               3: pd.Timestamp('2011-01-04 10:00')})
+            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp(
+                '2011-01-02 10:00', tz=tz), Timestamp('2011-01-03 10:00'),
+                Timestamp('2011-01-04 10:00')])
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna({1: pd.Timestamp('2011-01-02 10:00'),
+                               3: pd.Timestamp('2011-01-04 10:00')})
+            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp(
+                '2011-01-02 10:00'), Timestamp('2011-01-03 10:00'), Timestamp(
+                '2011-01-04 10:00')])
+            self.assert_series_equal(expected, result)
+
+            # DatetimeBlockTZ
+            idx = pd.DatetimeIndex(['2011-01-01 10:00', pd.NaT,
+                                    '2011-01-03 10:00', pd.NaT], tz=tz)
+            s = pd.Series(idx)
+            result = s.fillna(pd.Timestamp('2011-01-02 10:00'))
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp(
+                '2011-01-02 10:00'), Timestamp('2011-01-03 10:00', tz=tz),
+                Timestamp('2011-01-02 10:00')])
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna(pd.Timestamp('2011-01-02 10:00', tz=tz))
+            idx = pd.DatetimeIndex(['2011-01-01 10:00', '2011-01-02 10:00',
+                                    '2011-01-03 10:00', '2011-01-02 10:00'],
+                                   tz=tz)
+            expected = Series(idx)
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna(pd.Timestamp(
+                '2011-01-02 10:00', tz=tz).to_pydatetime())
+            idx = pd.DatetimeIndex(['2011-01-01 10:00', '2011-01-02 10:00',
+                                    '2011-01-03 10:00', '2011-01-02 10:00'],
+                                   tz=tz)
+            expected = Series(idx)
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna('AAA')
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz), 'AAA',
+                               Timestamp('2011-01-03 10:00', tz=tz), 'AAA'],
+                              dtype=object)
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz),
+                               3: pd.Timestamp('2011-01-04 10:00')})
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp(
+                '2011-01-02 10:00', tz=tz), Timestamp(
+                '2011-01-03 10:00', tz=tz), Timestamp('2011-01-04 10:00')])
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz),
+                               3: pd.Timestamp('2011-01-04 10:00', tz=tz)})
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp(
+                '2011-01-02 10:00', tz=tz), Timestamp(
+                '2011-01-03 10:00', tz=tz), Timestamp('2011-01-04 10:00',
+                                                      tz=tz)])
+            self.assert_series_equal(expected, result)
+
+            # filling with a naive/other zone, coerce to object
+            result = s.fillna(Timestamp('20130101'))
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp(
+                '2013-01-01'), Timestamp('2011-01-03 10:00', tz=tz), Timestamp(
+                '2013-01-01')])
+            self.assert_series_equal(expected, result)
+
+            result = s.fillna(Timestamp('20130101', tz='US/Pacific'))
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz),
+                               Timestamp('2013-01-01', tz='US/Pacific'),
+                               Timestamp('2011-01-03 10:00', tz=tz),
+                               Timestamp('2013-01-01', tz='US/Pacific')])
+            self.assert_series_equal(expected, result)
+
+    def test_fillna_int(self):
+        s = Series(np.random.randint(-100, 100, 50))
+        s.fillna(method='ffill', inplace=True)
+        assert_series_equal(s.fillna(method='ffill', inplace=False), s)
+
+    def test_fillna_raise(self):
+        s = Series(np.random.randint(-100, 100, 50))
+        self.assertRaises(TypeError, s.fillna, [1, 2])
+        self.assertRaises(TypeError, s.fillna, (1, 2))
+
+    def test_isnull_for_inf(self):
+        s = Series(['a', np.inf, np.nan, 1.0])
+        with pd.option_context('mode.use_inf_as_null', True):
+            r = s.isnull()
+            dr = s.dropna()
+        e = Series([False, True, True, False])
+        de = Series(['a', 1.0], index=[0, 3])
+        tm.assert_series_equal(r, e)
+        tm.assert_series_equal(dr, de)
+
+    def test_fillna(self):
+        ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5))
+
+        self.assert_numpy_array_equal(ts, ts.fillna(method='ffill'))
+
+        ts[2] = np.NaN
+
+        self.assert_numpy_array_equal(ts.fillna(method='ffill'),
+                                      [0., 1., 1., 3., 4.])
+        self.assert_numpy_array_equal(ts.fillna(method='backfill'),
+                                      [0., 1., 3., 3., 4.])
+
+        self.assert_numpy_array_equal(ts.fillna(value=5), [0., 1., 5., 3., 4.])
+
+        self.assertRaises(ValueError, ts.fillna)
+        self.assertRaises(ValueError, self.ts.fillna, value=0, method='ffill')
+
+        # GH 5703
+        s1 = Series([np.nan])
+        s2 = Series([1])
+        result = s1.fillna(s2)
+        expected = Series([1.])
+        assert_series_equal(result, expected)
+        result = s1.fillna({})
+        assert_series_equal(result, s1)
+        result = s1.fillna(Series(()))
+        assert_series_equal(result, s1)
+        result = s2.fillna(s1)
+        assert_series_equal(result, s2)
+        result = s1.fillna({0: 1})
+        assert_series_equal(result, expected)
+        result = s1.fillna({1: 1})
+        assert_series_equal(result, Series([np.nan]))
+        result = s1.fillna({0: 1, 1: 1})
+        assert_series_equal(result, expected)
+        result = s1.fillna(Series({0: 1, 1: 1}))
+        assert_series_equal(result, expected)
+        result = s1.fillna(Series({0: 1, 1: 1}, index=[4, 5]))
+        assert_series_equal(result, s1)
+
+        s1 = Series([0, 1, 2], list('abc'))
+        s2 = Series([0, np.nan, 2], list('bac'))
+        result = s2.fillna(s1)
+        expected = Series([0, 0, 2.], list('bac'))
+        assert_series_equal(result, expected)
+
+        # limit
+        s = Series(np.nan, index=[0, 1, 2])
+        result = s.fillna(999, limit=1)
+        expected = Series([999, np.nan, np.nan], index=[0, 1, 2])
+        assert_series_equal(result, expected)
+
+        result = s.fillna(999, limit=2)
+        expected = Series([999, 999, np.nan], index=[0, 1, 2])
+        assert_series_equal(result, expected)
+
+        # GH 9043
+        # make sure a string representation of int/float values can be filled
+        # correctly without raising errors or being converted
+        vals = ['0', '1.5', '-0.3']
+        for val in vals:
+            s = Series([0, 1, np.nan, np.nan, 4], dtype='float64')
+            result = s.fillna(val)
+            expected = Series([0, 1, val, val, 4], dtype='object')
+            assert_series_equal(result, expected)
+
+    def test_fillna_bug(self):
+        x = Series([nan, 1., nan, 3., nan], ['z', 'a', 'b', 'c', 'd'])
+        filled = x.fillna(method='ffill')
+        expected = Series([nan, 1., 1., 3., 3.], x.index)
+        assert_series_equal(filled, expected)
+
+        filled = x.fillna(method='bfill')
+        expected = Series([1., 1., 3., 3., nan], x.index)
+        assert_series_equal(filled, expected)
+
+    def test_fillna_inplace(self):
+        x = Series([nan, 1., nan, 3., nan], ['z', 'a', 'b', 'c', 'd'])
+        y = x.copy()
+
+        y.fillna(value=0, inplace=True)
+
+        expected = x.fillna(value=0)
+        assert_series_equal(y, expected)
+
+    def test_fillna_invalid_method(self):
+        try:
+            self.ts.fillna(method='ffil')
+        except ValueError as inst:
+            self.assertIn('ffil', str(inst))
+
+    def test_ffill(self):
+        ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5))
+        ts[2] = np.NaN
+        assert_series_equal(ts.ffill(), ts.fillna(method='ffill'))
+
+    def test_bfill(self):
+        ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5))
+        ts[2] = np.NaN
+        assert_series_equal(ts.bfill(), ts.fillna(method='bfill'))
+
+    def test_timedelta64_nan(self):
+
+        from pandas import tslib
+        td = Series([timedelta(days=i) for i in range(10)])
+
+        # nan ops on timedeltas
+        td1 = td.copy()
+        td1[0] = np.nan
+        self.assertTrue(isnull(td1[0]))
+        self.assertEqual(td1[0].value, tslib.iNaT)
+        td1[0] = td[0]
+        self.assertFalse(isnull(td1[0]))
+
+        td1[1] = tslib.iNaT
+        self.assertTrue(isnull(td1[1]))
+        self.assertEqual(td1[1].value, tslib.iNaT)
+        td1[1] = td[1]
+        self.assertFalse(isnull(td1[1]))
+
+        td1[2] = tslib.NaT
+        self.assertTrue(isnull(td1[2]))
+        self.assertEqual(td1[2].value, tslib.iNaT)
+        td1[2] = td[2]
+        self.assertFalse(isnull(td1[2]))
+
+        # boolean setting
+        # this doesn't work, not sure numpy even supports it
+        # result = td[(td>np.timedelta64(timedelta(days=3))) &
+        # td<np.timedelta64(timedelta(days=7)))] = np.nan
+        # self.assertEqual(isnull(result).sum(), 7)
+
+    # NumPy limitation =(
+
+    # def test_logical_range_select(self):
+    #     np.random.seed(12345)
+    #     selector = -0.5 <= self.ts <= 0.5
+    #     expected = (self.ts >= -0.5) & (self.ts <= 0.5)
+    #     assert_series_equal(selector, expected)
+
+    def test_dropna_empty(self):
+        s = Series([])
+        self.assertEqual(len(s.dropna()), 0)
+        s.dropna(inplace=True)
+        self.assertEqual(len(s), 0)
+
+        # invalid axis
+        self.assertRaises(ValueError, s.dropna, axis=1)
+
+    def test_datetime64_tz_dropna(self):
+        # DatetimeBlock
+        s = Series([Timestamp('2011-01-01 10:00'), pd.NaT, Timestamp(
+            '2011-01-03 10:00'), pd.NaT])
+        result = s.dropna()
+        expected = Series([Timestamp('2011-01-01 10:00'),
+                           Timestamp('2011-01-03 10:00')], index=[0, 2])
+        self.assert_series_equal(result, expected)
+
+        # DatetimeBlockTZ
+        idx = pd.DatetimeIndex(['2011-01-01 10:00', pd.NaT,
+                                '2011-01-03 10:00', pd.NaT],
+                               tz='Asia/Tokyo')
+        s = pd.Series(idx)
+        self.assertEqual(s.dtype, 'datetime64[ns, Asia/Tokyo]')
+        result = s.dropna()
+        expected = Series([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
+                           Timestamp('2011-01-03 10:00', tz='Asia/Tokyo')],
+                          index=[0, 2])
+        self.assertEqual(result.dtype, 'datetime64[ns, Asia/Tokyo]')
+        self.assert_series_equal(result, expected)
+
+    def test_dropna_no_nan(self):
+        for s in [Series([1, 2, 3], name='x'), Series(
+                [False, True, False], name='x')]:
+
+            result = s.dropna()
+            self.assert_series_equal(result, s)
+            self.assertFalse(result is s)
+
+            s2 = s.copy()
+            s2.dropna(inplace=True)
+            self.assert_series_equal(s2, s)
+
+    def test_valid(self):
+        ts = self.ts.copy()
+        ts[::2] = np.NaN
+
+        result = ts.valid()
+        self.assertEqual(len(result), ts.count())
+
+        tm.assert_dict_equal(result, ts, compare_keys=False)
+
+    def test_isnull(self):
+        ser = Series([0, 5.4, 3, nan, -0.001])
+        np.array_equal(ser.isnull(),
+                       Series([False, False, False, True, False]).values)
+        ser = Series(["hi", "", nan])
+        np.array_equal(ser.isnull(), Series([False, False, True]).values)
+
+    def test_notnull(self):
+        ser = Series([0, 5.4, 3, nan, -0.001])
+        np.array_equal(ser.notnull(),
+                       Series([True, True, True, False, True]).values)
+        ser = Series(["hi", "", nan])
+        np.array_equal(ser.notnull(), Series([True, True, False]).values)
+
+    def test_pad_nan(self):
+        x = Series([np.nan, 1., np.nan, 3., np.nan], ['z', 'a', 'b', 'c', 'd'],
+                   dtype=float)
+
+        x.fillna(method='pad', inplace=True)
+
+        expected = Series([np.nan, 1.0, 1.0, 3.0, 3.0],
+                          ['z', 'a', 'b', 'c', 'd'], dtype=float)
+        assert_series_equal(x[1:], expected[1:])
+        self.assertTrue(np.isnan(x[0]), np.isnan(expected[0]))
+
+    def test_dropna_preserve_name(self):
+        self.ts[:5] = np.nan
+        result = self.ts.dropna()
+        self.assertEqual(result.name, self.ts.name)
+        name = self.ts.name
+        ts = self.ts.copy()
+        ts.dropna(inplace=True)
+        self.assertEqual(ts.name, name)
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
new file mode 100644
index 0000000000000..d36cc0c303dfe
--- /dev/null
+++ b/pandas/tests/series/test_operators.py
@@ -0,0 +1,1377 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+from datetime import datetime, timedelta
+import operator
+from itertools import product, starmap
+
+from numpy import nan, inf
+import numpy as np
+import pandas as pd
+
+from pandas import (Index, Series, DataFrame, isnull, bdate_range,
+                    NaT, date_range, timedelta_range,
+                    _np_version_under1p8)
+from pandas.tseries.index import Timestamp
+from pandas.tseries.tdi import Timedelta
+import pandas.core.nanops as nanops
+
+from pandas.compat import range, zip
+from pandas import compat
+from pandas.util.testing import assert_series_equal, assert_almost_equal
+import pandas.util.testing as tm
+
+from .common import TestData
+
+
+class TestSeriesOperators(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_comparisons(self): + left = np.random.randn(10) + right = np.random.randn(10) + left[:3] = np.nan + + result = nanops.nangt(left, right) + expected = (left > right).astype('O') + expected[:3] = np.nan + + assert_almost_equal(result, expected) + + s = Series(['a', 'b', 'c']) + s2 = Series([False, True, False]) + + # it works! + s == s2 + s2 == s + + def test_op_method(self): + def check(series, other, check_reverse=False): + simple_ops = ['add', 'sub', 'mul', 'floordiv', 'truediv', 'pow'] + if not compat.PY3: + simple_ops.append('div') + + for opname in simple_ops: + op = getattr(Series, opname) + + if op == 'div': + alt = operator.truediv + else: + alt = getattr(operator, opname) + + result = op(series, other) + expected = alt(series, other) + tm.assert_almost_equal(result, expected) + if check_reverse: + rop = getattr(Series, "r" + opname) + result = rop(series, other) + expected = alt(other, series) + tm.assert_almost_equal(result, expected) + + check(self.ts, self.ts * 2) + check(self.ts, self.ts[::2]) + check(self.ts, 5, check_reverse=True) + check(tm.makeFloatSeries(), tm.makeFloatSeries(), check_reverse=True) + + def test_neg(self): + assert_series_equal(-self.series, -1 * self.series) + + def test_invert(self): + assert_series_equal(-(self.series < 0), ~(self.series < 0)) + + def test_div(self): + + # no longer do integer div for any ops, but deal with the 0's + p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]}) + result = p['first'] / p['second'] + expected = Series(p['first'].values.astype(float) / p['second'].values, + dtype='float64') + expected.iloc[0:3] = np.inf + assert_series_equal(result, expected) + + result = p['first'] / 0 + expected = Series(np.inf, index=p.index, name='first') + assert_series_equal(result, expected) + + p = p.astype('float64') + result = p['first'] / p['second'] + expected = Series(p['first'].values / 
p['second'].values) + assert_series_equal(result, expected) + + p = DataFrame({'first': [3, 4, 5, 8], 'second': [1, 1, 1, 1]}) + result = p['first'] / p['second'] + assert_series_equal(result, p['first'].astype('float64'), + check_names=False) + self.assertTrue(result.name is None) + self.assertFalse(np.array_equal(result, p['second'] / p['first'])) + + # inf signing + s = Series([np.nan, 1., -1.]) + result = s / 0 + expected = Series([np.nan, np.inf, -np.inf]) + assert_series_equal(result, expected) + + # float/integer issue + # GH 7785 + p = DataFrame({'first': (1, 0), 'second': (-0.01, -0.02)}) + expected = Series([-0.01, -np.inf]) + + result = p['second'].div(p['first']) + assert_series_equal(result, expected, check_names=False) + + result = p['second'] / p['first'] + assert_series_equal(result, expected) + + # GH 9144 + s = Series([-1, 0, 1]) + + result = 0 / s + expected = Series([0.0, nan, 0.0]) + assert_series_equal(result, expected) + + result = s / 0 + expected = Series([-inf, nan, inf]) + assert_series_equal(result, expected) + + result = s // 0 + expected = Series([-inf, nan, inf]) + assert_series_equal(result, expected) + + def test_operators(self): + def _check_op(series, other, op, pos_only=False): + left = np.abs(series) if pos_only else series + right = np.abs(other) if pos_only else other + + cython_or_numpy = op(left, right) + python = left.combine(right, op) + tm.assert_almost_equal(cython_or_numpy, python) + + def check(series, other): + simple_ops = ['add', 'sub', 'mul', 'truediv', 'floordiv', 'mod'] + + for opname in simple_ops: + _check_op(series, other, getattr(operator, opname)) + + _check_op(series, other, operator.pow, pos_only=True) + + _check_op(series, other, lambda x, y: operator.add(y, x)) + _check_op(series, other, lambda x, y: operator.sub(y, x)) + _check_op(series, other, lambda x, y: operator.truediv(y, x)) + _check_op(series, other, lambda x, y: operator.floordiv(y, x)) + _check_op(series, other, lambda x, y: operator.mul(y, 
x)) + _check_op(series, other, lambda x, y: operator.pow(y, x), + pos_only=True) + _check_op(series, other, lambda x, y: operator.mod(y, x)) + + check(self.ts, self.ts * 2) + check(self.ts, self.ts * 0) + check(self.ts, self.ts[::2]) + check(self.ts, 5) + + def check_comparators(series, other): + _check_op(series, other, operator.gt) + _check_op(series, other, operator.ge) + _check_op(series, other, operator.eq) + _check_op(series, other, operator.lt) + _check_op(series, other, operator.le) + + check_comparators(self.ts, 5) + check_comparators(self.ts, self.ts + 1) + + def test_operators_empty_int_corner(self): + s1 = Series([], [], dtype=np.int32) + s2 = Series({'x': 0.}) + tm.assert_series_equal(s1 * s2, Series([np.nan], index=['x'])) + + def test_operators_timedelta64(self): + + # invalid ops + self.assertRaises(Exception, self.objSeries.__add__, 1) + self.assertRaises(Exception, self.objSeries.__add__, + np.array(1, dtype=np.int64)) + self.assertRaises(Exception, self.objSeries.__sub__, 1) + self.assertRaises(Exception, self.objSeries.__sub__, + np.array(1, dtype=np.int64)) + + # seriese ops + v1 = date_range('2012-1-1', periods=3, freq='D') + v2 = date_range('2012-1-2', periods=3, freq='D') + rs = Series(v2) - Series(v1) + xp = Series(1e9 * 3600 * 24, + rs.index).astype('int64').astype('timedelta64[ns]') + assert_series_equal(rs, xp) + self.assertEqual(rs.dtype, 'timedelta64[ns]') + + df = DataFrame(dict(A=v1)) + td = Series([timedelta(days=i) for i in range(3)]) + self.assertEqual(td.dtype, 'timedelta64[ns]') + + # series on the rhs + result = df['A'] - df['A'].shift() + self.assertEqual(result.dtype, 'timedelta64[ns]') + + result = df['A'] + td + self.assertEqual(result.dtype, 'M8[ns]') + + # scalar Timestamp on rhs + maxa = df['A'].max() + tm.assertIsInstance(maxa, Timestamp) + + resultb = df['A'] - df['A'].max() + self.assertEqual(resultb.dtype, 'timedelta64[ns]') + + # timestamp on lhs + result = resultb + df['A'] + values = [Timestamp('20111230'), 
Timestamp('20120101'), + Timestamp('20120103')] + expected = Series(values, name='A') + assert_series_equal(result, expected) + + # datetimes on rhs + result = df['A'] - datetime(2001, 1, 1) + expected = Series( + [timedelta(days=4017 + i) for i in range(3)], name='A') + assert_series_equal(result, expected) + self.assertEqual(result.dtype, 'm8[ns]') + + d = datetime(2001, 1, 1, 3, 4) + resulta = df['A'] - d + self.assertEqual(resulta.dtype, 'm8[ns]') + + # roundtrip + resultb = resulta + d + assert_series_equal(df['A'], resultb) + + # timedeltas on rhs + td = timedelta(days=1) + resulta = df['A'] + td + resultb = resulta - td + assert_series_equal(resultb, df['A']) + self.assertEqual(resultb.dtype, 'M8[ns]') + + # roundtrip + td = timedelta(minutes=5, seconds=3) + resulta = df['A'] + td + resultb = resulta - td + assert_series_equal(df['A'], resultb) + self.assertEqual(resultb.dtype, 'M8[ns]') + + # inplace + value = rs[2] + np.timedelta64(timedelta(minutes=5, seconds=1)) + rs[2] += np.timedelta64(timedelta(minutes=5, seconds=1)) + self.assertEqual(rs[2], value) + + def test_timedeltas_with_DateOffset(self): + + # GH 4532 + # operate with pd.offsets + s = Series([Timestamp('20130101 9:01'), Timestamp('20130101 9:02')]) + + result = s + pd.offsets.Second(5) + result2 = pd.offsets.Second(5) + s + expected = Series([Timestamp('20130101 9:01:05'), Timestamp( + '20130101 9:02:05')]) + assert_series_equal(result, expected) + assert_series_equal(result2, expected) + + result = s - pd.offsets.Second(5) + result2 = -pd.offsets.Second(5) + s + expected = Series([Timestamp('20130101 9:00:55'), Timestamp( + '20130101 9:01:55')]) + assert_series_equal(result, expected) + assert_series_equal(result2, expected) + + result = s + pd.offsets.Milli(5) + result2 = pd.offsets.Milli(5) + s + expected = Series([Timestamp('20130101 9:01:00.005'), Timestamp( + '20130101 9:02:00.005')]) + assert_series_equal(result, expected) + assert_series_equal(result2, expected) + + result = s + 
pd.offsets.Minute(5) + pd.offsets.Milli(5) + expected = Series([Timestamp('20130101 9:06:00.005'), Timestamp( + '20130101 9:07:00.005')]) + assert_series_equal(result, expected) + + # operate with np.timedelta64 correctly + result = s + np.timedelta64(1, 's') + result2 = np.timedelta64(1, 's') + s + expected = Series([Timestamp('20130101 9:01:01'), Timestamp( + '20130101 9:02:01')]) + assert_series_equal(result, expected) + assert_series_equal(result2, expected) + + result = s + np.timedelta64(5, 'ms') + result2 = np.timedelta64(5, 'ms') + s + expected = Series([Timestamp('20130101 9:01:00.005'), Timestamp( + '20130101 9:02:00.005')]) + assert_series_equal(result, expected) + assert_series_equal(result2, expected) + + # valid DateOffsets + for do in ['Hour', 'Minute', 'Second', 'Day', 'Micro', 'Milli', + 'Nano']: + op = getattr(pd.offsets, do) + s + op(5) + op(5) + s + + def test_timedelta_series_ops(self): + # GH11925 + + s = Series(timedelta_range('1 day', periods=3)) + ts = Timestamp('2012-01-01') + expected = Series(date_range('2012-01-02', periods=3)) + assert_series_equal(ts + s, expected) + assert_series_equal(s + ts, expected) + + expected2 = Series(date_range('2011-12-31', periods=3, freq='-1D')) + assert_series_equal(ts - s, expected2) + assert_series_equal(ts + (-s), expected2) + + def test_timedelta64_operations_with_DateOffset(self): + # GH 10699 + td = Series([timedelta(minutes=5, seconds=3)] * 3) + result = td + pd.offsets.Minute(1) + expected = Series([timedelta(minutes=6, seconds=3)] * 3) + assert_series_equal(result, expected) + + result = td - pd.offsets.Minute(1) + expected = Series([timedelta(minutes=4, seconds=3)] * 3) + assert_series_equal(result, expected) + + result = td + Series([pd.offsets.Minute(1), pd.offsets.Second(3), + pd.offsets.Hour(2)]) + expected = Series([timedelta(minutes=6, seconds=3), timedelta( + minutes=5, seconds=6), timedelta(hours=2, minutes=5, seconds=3)]) + assert_series_equal(result, expected) + + result = td + 
pd.offsets.Minute(1) + pd.offsets.Second(12) + expected = Series([timedelta(minutes=6, seconds=15)] * 3) + assert_series_equal(result, expected) + + # valid DateOffsets + for do in ['Hour', 'Minute', 'Second', 'Day', 'Micro', 'Milli', + 'Nano']: + op = getattr(pd.offsets, do) + td + op(5) + op(5) + td + td - op(5) + op(5) - td + + def test_timedelta64_operations_with_timedeltas(self): + + # td operate with td + td1 = Series([timedelta(minutes=5, seconds=3)] * 3) + td2 = timedelta(minutes=5, seconds=4) + result = td1 - td2 + expected = Series([timedelta(seconds=0)] * 3) - Series([timedelta( + seconds=1)] * 3) + self.assertEqual(result.dtype, 'm8[ns]') + assert_series_equal(result, expected) + + result2 = td2 - td1 + expected = (Series([timedelta(seconds=1)] * 3) - Series([timedelta( + seconds=0)] * 3)) + assert_series_equal(result2, expected) + + # roundtrip + assert_series_equal(result + td2, td1) + + # Now again, using pd.to_timedelta, which should build + # a Series or a scalar, depending on input. 
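The comment above relies on `pd.to_timedelta` building either a scalar or an array-like depending on its input; a minimal standalone sketch of that behavior (illustrative only, not part of the test suite):

```python
import pandas as pd

# A single string parses to a scalar Timedelta...
scalar = pd.to_timedelta('00:05:04')

# ...while a list of strings parses to an array-like
# (a TimedeltaIndex) that the Series constructor can wrap.
vector = pd.to_timedelta(['00:05:03'] * 3)
series = pd.Series(vector)
```

This is why `td1` below is wrapped in `Series(...)` while `td2` is used directly as a scalar operand.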
+ td1 = Series(pd.to_timedelta(['00:05:03'] * 3)) + td2 = pd.to_timedelta('00:05:04') + result = td1 - td2 + expected = Series([timedelta(seconds=0)] * 3) - Series([timedelta( + seconds=1)] * 3) + self.assertEqual(result.dtype, 'm8[ns]') + assert_series_equal(result, expected) + + result2 = td2 - td1 + expected = (Series([timedelta(seconds=1)] * 3) - Series([timedelta( + seconds=0)] * 3)) + assert_series_equal(result2, expected) + + # roundtrip + assert_series_equal(result + td2, td1) + + def test_timedelta64_operations_with_integers(self): + + # GH 4521 + # divide/multiply by integers + startdate = Series(date_range('2013-01-01', '2013-01-03')) + enddate = Series(date_range('2013-03-01', '2013-03-03')) + + s1 = enddate - startdate + s1[2] = np.nan + s2 = Series([2, 3, 4]) + expected = Series(s1.values.astype(np.int64) / s2, dtype='m8[ns]') + expected[2] = np.nan + result = s1 / s2 + assert_series_equal(result, expected) + + s2 = Series([20, 30, 40]) + expected = Series(s1.values.astype(np.int64) / s2, dtype='m8[ns]') + expected[2] = np.nan + result = s1 / s2 + assert_series_equal(result, expected) + + result = s1 / 2 + expected = Series(s1.values.astype(np.int64) / 2, dtype='m8[ns]') + expected[2] = np.nan + assert_series_equal(result, expected) + + s2 = Series([20, 30, 40]) + expected = Series(s1.values.astype(np.int64) * s2, dtype='m8[ns]') + expected[2] = np.nan + result = s1 * s2 + assert_series_equal(result, expected) + + for dtype in ['int32', 'int16', 'uint32', 'uint64', 'uint32', 'uint16', + 'uint8']: + s2 = Series([20, 30, 40], dtype=dtype) + expected = Series( + s1.values.astype(np.int64) * s2.astype(np.int64), + dtype='m8[ns]') + expected[2] = np.nan + result = s1 * s2 + assert_series_equal(result, expected) + + result = s1 * 2 + expected = Series(s1.values.astype(np.int64) * 2, dtype='m8[ns]') + expected[2] = np.nan + assert_series_equal(result, expected) + + result = s1 * -1 + expected = Series(s1.values.astype(np.int64) * -1, dtype='m8[ns]') + 
expected[2] = np.nan + assert_series_equal(result, expected) + + # invalid ops + assert_series_equal(s1 / s2.astype(float), + Series([Timedelta('2 days 22:48:00'), Timedelta( + '1 days 23:12:00'), Timedelta('NaT')])) + assert_series_equal(s1 / 2.0, + Series([Timedelta('29 days 12:00:00'), Timedelta( + '29 days 12:00:00'), Timedelta('NaT')])) + + for op in ['__add__', '__sub__']: + sop = getattr(s1, op, None) + if sop is not None: + self.assertRaises(TypeError, sop, 1) + self.assertRaises(TypeError, sop, s2.values) + + def test_timedelta64_conversions(self): + startdate = Series(date_range('2013-01-01', '2013-01-03')) + enddate = Series(date_range('2013-03-01', '2013-03-03')) + + s1 = enddate - startdate + s1[2] = np.nan + + for m in [1, 3, 10]: + for unit in ['D', 'h', 'm', 's', 'ms', 'us', 'ns']: + + # op + expected = s1.apply(lambda x: x / np.timedelta64(m, unit)) + result = s1 / np.timedelta64(m, unit) + assert_series_equal(result, expected) + + if m == 1 and unit != 'ns': + + # astype + result = s1.astype("timedelta64[{0}]".format(unit)) + assert_series_equal(result, expected) + + # reverse op + expected = s1.apply( + lambda x: Timedelta(np.timedelta64(m, unit)) / x) + result = np.timedelta64(m, unit) / s1 + + # astype + s = Series(date_range('20130101', periods=3)) + result = s.astype(object) + self.assertIsInstance(result.iloc[0], datetime) + self.assertTrue(result.dtype == np.object_) + + result = s1.astype(object) + self.assertIsInstance(result.iloc[0], timedelta) + self.assertTrue(result.dtype == np.object_) + + def test_timedelta64_equal_timedelta_supported_ops(self): + ser = Series([Timestamp('20130301'), Timestamp('20130228 23:00:00'), + Timestamp('20130228 22:00:00'), Timestamp( + '20130228 21:00:00')]) + + intervals = 'D', 'h', 'm', 's', 'us' + + # TODO: unused + # npy16_mappings = {'D': 24 * 60 * 60 * 1000000, + # 'h': 60 * 60 * 1000000, + # 'm': 60 * 1000000, + # 's': 1000000, + # 'us': 1} + + def timedelta64(*args): + return 
sum(starmap(np.timedelta64, zip(args, intervals))) + + for op, d, h, m, s, us in product([operator.add, operator.sub], + *([range(2)] * 5)): + nptd = timedelta64(d, h, m, s, us) + pytd = timedelta(days=d, hours=h, minutes=m, seconds=s, + microseconds=us) + lhs = op(ser, nptd) + rhs = op(ser, pytd) + + try: + assert_series_equal(lhs, rhs) + except: + raise AssertionError( + "invalid comparison [op->{0},d->{1},h->{2},m->{3}," + "s->{4},us->{5}]\n{6}\n{7}\n".format(op, d, h, m, s, + us, lhs, rhs)) + + def test_operators_datetimelike(self): + def run_ops(ops, get_ser, test_ser): + + # check that we are getting a TypeError + # with 'operate' (from core/ops.py) for the ops that are not + # defined + for op_str in ops: + op = getattr(get_ser, op_str, None) + with tm.assertRaisesRegexp(TypeError, 'operate'): + op(test_ser) + + # ## timedelta64 ### + td1 = Series([timedelta(minutes=5, seconds=3)] * 3) + td1.iloc[2] = np.nan + td2 = timedelta(minutes=5, seconds=4) + ops = ['__mul__', '__floordiv__', '__pow__', '__rmul__', + '__rfloordiv__', '__rpow__'] + run_ops(ops, td1, td2) + td1 + td2 + td2 + td1 + td1 - td2 + td2 - td1 + td1 / td2 + td2 / td1 + + # ## datetime64 ### + dt1 = Series([Timestamp('20111230'), Timestamp('20120101'), Timestamp( + '20120103')]) + dt1.iloc[2] = np.nan + dt2 = Series([Timestamp('20111231'), Timestamp('20120102'), Timestamp( + '20120104')]) + ops = ['__add__', '__mul__', '__floordiv__', '__truediv__', '__div__', + '__pow__', '__radd__', '__rmul__', '__rfloordiv__', + '__rtruediv__', '__rdiv__', '__rpow__'] + run_ops(ops, dt1, dt2) + dt1 - dt2 + dt2 - dt1 + + # ## datetime64 with timedelta ### + ops = ['__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__', + '__rmul__', '__rfloordiv__', '__rtruediv__', '__rdiv__', + '__rpow__'] + run_ops(ops, dt1, td1) + dt1 + td1 + td1 + dt1 + dt1 - td1 + # TODO: Decide if this ought to work.
+ # td1 - dt1 + + # ## timedelta with datetime64 ### + ops = ['__sub__', '__mul__', '__floordiv__', '__truediv__', '__div__', + '__pow__', '__rmul__', '__rfloordiv__', '__rtruediv__', + '__rdiv__', '__rpow__'] + run_ops(ops, td1, dt1) + td1 + dt1 + dt1 + td1 + + # GH 8260, GH 10763 + # datetime64 with tz + ops = ['__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__', + '__rmul__', '__rfloordiv__', '__rtruediv__', '__rdiv__', + '__rpow__'] + dt1 = Series( + date_range('2000-01-01 09:00:00', periods=5, + tz='US/Eastern'), name='foo') + dt2 = dt1.copy() + dt2.iloc[2] = np.nan + td1 = Series(timedelta_range('1 days 1 min', periods=5, freq='H')) + td2 = td1.copy() + td2.iloc[1] = np.nan + run_ops(ops, dt1, td1) + + result = dt1 + td1[0] + expected = ( + dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + + result = dt2 + td2[0] + expected = ( + dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + + # odd numpy behavior with scalar timedeltas + if not _np_version_under1p8: + result = td1[0] + dt1 + expected = ( + dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + + result = td2[0] + dt2 + expected = ( + dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + + result = dt1 - td1[0] + expected = ( + dt1.dt.tz_localize(None) - td1[0]).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + self.assertRaises(TypeError, lambda: td1[0] - dt1) + + result = dt2 - td2[0] + expected = ( + dt2.dt.tz_localize(None) - td2[0]).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + self.assertRaises(TypeError, lambda: td2[0] - dt2) + + result = dt1 + td1 + expected = ( + dt1.dt.tz_localize(None) + td1).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + + result = dt2 + td2 + expected = ( + dt2.dt.tz_localize(None) +
td2).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + + result = dt1 - td1 + expected = ( + dt1.dt.tz_localize(None) - td1).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + + result = dt2 - td2 + expected = ( + dt2.dt.tz_localize(None) - td2).dt.tz_localize('US/Eastern') + assert_series_equal(result, expected) + + self.assertRaises(TypeError, lambda: td1 - dt1) + self.assertRaises(TypeError, lambda: td2 - dt2) + + def test_ops_nat(self): + # GH 11349 + timedelta_series = Series([NaT, Timedelta('1s')]) + datetime_series = Series([NaT, Timestamp('19900315')]) + nat_series_dtype_timedelta = Series( + [NaT, NaT], dtype='timedelta64[ns]') + nat_series_dtype_timestamp = Series([NaT, NaT], dtype='datetime64[ns]') + single_nat_dtype_datetime = Series([NaT], dtype='datetime64[ns]') + single_nat_dtype_timedelta = Series([NaT], dtype='timedelta64[ns]') + + # subtraction + assert_series_equal(timedelta_series - NaT, nat_series_dtype_timedelta) + assert_series_equal(-NaT + timedelta_series, + nat_series_dtype_timedelta) + + assert_series_equal(timedelta_series - single_nat_dtype_timedelta, + nat_series_dtype_timedelta) + assert_series_equal(-single_nat_dtype_timedelta + timedelta_series, + nat_series_dtype_timedelta) + + assert_series_equal(datetime_series - NaT, nat_series_dtype_timestamp) + assert_series_equal(-NaT + datetime_series, nat_series_dtype_timestamp) + + assert_series_equal(datetime_series - single_nat_dtype_datetime, + nat_series_dtype_timedelta) + with tm.assertRaises(TypeError): + -single_nat_dtype_datetime + datetime_series + + assert_series_equal(datetime_series - single_nat_dtype_timedelta, + nat_series_dtype_timestamp) + assert_series_equal(-single_nat_dtype_timedelta + datetime_series, + nat_series_dtype_timestamp) + + # without a Series wrapping the NaT, it is ambiguous + # whether it is a datetime64 or timedelta64 + # defaults to interpreting it as timedelta64 + assert_series_equal(nat_series_dtype_timestamp - 
NaT, + nat_series_dtype_timestamp) + assert_series_equal(-NaT + nat_series_dtype_timestamp, + nat_series_dtype_timestamp) + + assert_series_equal(nat_series_dtype_timestamp - + single_nat_dtype_datetime, + nat_series_dtype_timedelta) + with tm.assertRaises(TypeError): + -single_nat_dtype_datetime + nat_series_dtype_timestamp + + assert_series_equal(nat_series_dtype_timestamp - + single_nat_dtype_timedelta, + nat_series_dtype_timestamp) + assert_series_equal(-single_nat_dtype_timedelta + + nat_series_dtype_timestamp, + nat_series_dtype_timestamp) + + with tm.assertRaises(TypeError): + timedelta_series - single_nat_dtype_datetime + + # addition + assert_series_equal(nat_series_dtype_timestamp + NaT, + nat_series_dtype_timestamp) + assert_series_equal(NaT + nat_series_dtype_timestamp, + nat_series_dtype_timestamp) + + assert_series_equal(nat_series_dtype_timestamp + + single_nat_dtype_timedelta, + nat_series_dtype_timestamp) + assert_series_equal(single_nat_dtype_timedelta + + nat_series_dtype_timestamp, + nat_series_dtype_timestamp) + + assert_series_equal(nat_series_dtype_timedelta + NaT, + nat_series_dtype_timedelta) + assert_series_equal(NaT + nat_series_dtype_timedelta, + nat_series_dtype_timedelta) + + assert_series_equal(nat_series_dtype_timedelta + + single_nat_dtype_timedelta, + nat_series_dtype_timedelta) + assert_series_equal(single_nat_dtype_timedelta + + nat_series_dtype_timedelta, + nat_series_dtype_timedelta) + + assert_series_equal(timedelta_series + NaT, nat_series_dtype_timedelta) + assert_series_equal(NaT + timedelta_series, nat_series_dtype_timedelta) + + assert_series_equal(timedelta_series + single_nat_dtype_timedelta, + nat_series_dtype_timedelta) + assert_series_equal(single_nat_dtype_timedelta + timedelta_series, + nat_series_dtype_timedelta) + + assert_series_equal(nat_series_dtype_timestamp + NaT, + nat_series_dtype_timestamp) + assert_series_equal(NaT + nat_series_dtype_timestamp, + nat_series_dtype_timestamp) + + 
assert_series_equal(nat_series_dtype_timestamp + + single_nat_dtype_timedelta, + nat_series_dtype_timestamp) + assert_series_equal(single_nat_dtype_timedelta + + nat_series_dtype_timestamp, + nat_series_dtype_timestamp) + + assert_series_equal(nat_series_dtype_timedelta + NaT, + nat_series_dtype_timedelta) + assert_series_equal(NaT + nat_series_dtype_timedelta, + nat_series_dtype_timedelta) + + assert_series_equal(nat_series_dtype_timedelta + + single_nat_dtype_timedelta, + nat_series_dtype_timedelta) + assert_series_equal(single_nat_dtype_timedelta + + nat_series_dtype_timedelta, + nat_series_dtype_timedelta) + + assert_series_equal(nat_series_dtype_timedelta + + single_nat_dtype_datetime, + nat_series_dtype_timestamp) + assert_series_equal(single_nat_dtype_datetime + + nat_series_dtype_timedelta, + nat_series_dtype_timestamp) + + # multiplication + assert_series_equal(nat_series_dtype_timedelta * 1.0, + nat_series_dtype_timedelta) + assert_series_equal(1.0 * nat_series_dtype_timedelta, + nat_series_dtype_timedelta) + + assert_series_equal(timedelta_series * 1, timedelta_series) + assert_series_equal(1 * timedelta_series, timedelta_series) + + assert_series_equal(timedelta_series * 1.5, + Series([NaT, Timedelta('1.5s')])) + assert_series_equal(1.5 * timedelta_series, + Series([NaT, Timedelta('1.5s')])) + + assert_series_equal(timedelta_series * nan, nat_series_dtype_timedelta) + assert_series_equal(nan * timedelta_series, nat_series_dtype_timedelta) + + with tm.assertRaises(TypeError): + datetime_series * 1 + with tm.assertRaises(TypeError): + nat_series_dtype_timestamp * 1 + with tm.assertRaises(TypeError): + datetime_series * 1.0 + with tm.assertRaises(TypeError): + nat_series_dtype_timestamp * 1.0 + + # division + assert_series_equal(timedelta_series / 2, + Series([NaT, Timedelta('0.5s')])) + assert_series_equal(timedelta_series / 2.0, + Series([NaT, Timedelta('0.5s')])) + assert_series_equal(timedelta_series / nan, nat_series_dtype_timedelta) + with 
tm.assertRaises(TypeError): + nat_series_dtype_timestamp / 1.0 + with tm.assertRaises(TypeError): + nat_series_dtype_timestamp / 1 + + def test_ops_datetimelike_align(self): + # GH 7500 + # datetimelike ops need to align + dt = Series(date_range('2012-1-1', periods=3, freq='D')) + dt.iloc[2] = np.nan + dt2 = dt[::-1] + + expected = Series([timedelta(0), timedelta(0), pd.NaT]) + # name is reset + result = dt2 - dt + assert_series_equal(result, expected) + + expected = Series(expected, name=0) + result = (dt2.to_frame() - dt.to_frame())[0] + assert_series_equal(result, expected) + + def test_object_comparisons(self): + s = Series(['a', 'b', np.nan, 'c', 'a']) + + result = s == 'a' + expected = Series([True, False, False, False, True]) + assert_series_equal(result, expected) + + result = s < 'a' + expected = Series([False, False, False, False, False]) + assert_series_equal(result, expected) + + result = s != 'a' + expected = -(s == 'a') + assert_series_equal(result, expected) + + def test_comparison_tuples(self): + # GH11339 + # comparisons vs tuple + s = Series([(1, 1), (1, 2)]) + + result = s == (1, 2) + expected = Series([False, True]) + assert_series_equal(result, expected) + + result = s != (1, 2) + expected = Series([True, False]) + assert_series_equal(result, expected) + + result = s == (0, 0) + expected = Series([False, False]) + assert_series_equal(result, expected) + + result = s != (0, 0) + expected = Series([True, True]) + assert_series_equal(result, expected) + + s = Series([(1, 1), (1, 1)]) + + result = s == (1, 1) + expected = Series([True, True]) + assert_series_equal(result, expected) + + result = s != (1, 1) + expected = Series([False, False]) + assert_series_equal(result, expected) + + s = Series([frozenset([1]), frozenset([1, 2])]) + + result = s == frozenset([1]) + expected = Series([True, False]) + assert_series_equal(result, expected) + + def test_comparison_operators_with_nas(self): + s = Series(bdate_range('1/1/2000', periods=10), 
dtype=object) + s[::2] = np.nan + + # test that comparisons work + ops = ['lt', 'le', 'gt', 'ge', 'eq', 'ne'] + for op in ops: + val = s[5] + + f = getattr(operator, op) + result = f(s, val) + + expected = f(s.dropna(), val).reindex(s.index) + + if op == 'ne': + expected = expected.fillna(True).astype(bool) + else: + expected = expected.fillna(False).astype(bool) + + assert_series_equal(result, expected) + + # FIXME: reversed comparisons against a scalar do not work yet + # result = f(val, s) + # expected = f(val, s.dropna()).reindex(s.index) + # assert_series_equal(result, expected) + + # boolean &, |, ^ should work with object arrays and propagate NAs + + ops = ['and_', 'or_', 'xor'] + mask = s.isnull() + for bool_op in ops: + f = getattr(operator, bool_op) + + filled = s.fillna(s[0]) + + result = f(s < s[9], s > s[3]) + + expected = f(filled < filled[9], filled > filled[3]) + expected[mask] = False + assert_series_equal(result, expected) + + def test_comparison_object_numeric_nas(self): + s = Series(np.random.randn(10), dtype=object) + shifted = s.shift(2) + + ops = ['lt', 'le', 'gt', 'ge', 'eq', 'ne'] + for op in ops: + f = getattr(operator, op) + + result = f(s, shifted) + expected = f(s.astype(float), shifted.astype(float)) + assert_series_equal(result, expected) + + def test_comparison_invalid(self): + + # GH 4968 + # invalid date/int comparisons + s = Series(range(5)) + s2 = Series(date_range('20010101', periods=5)) + + for (x, y) in [(s, s2), (s2, s)]: + self.assertRaises(TypeError, lambda: x == y) + self.assertRaises(TypeError, lambda: x != y) + self.assertRaises(TypeError, lambda: x >= y) + self.assertRaises(TypeError, lambda: x > y) + self.assertRaises(TypeError, lambda: x < y) + self.assertRaises(TypeError, lambda: x <= y) + + def test_more_na_comparisons(self): + left = Series(['a', np.nan, 'c']) + right = Series(['a', np.nan, 'd']) + + result = left == right + expected = Series([True, False, False]) + assert_series_equal(result, expected) + + result = left != right + expected =
Series([False, True, True]) + assert_series_equal(result, expected) + + result = left == np.nan + expected = Series([False, False, False]) + assert_series_equal(result, expected) + + result = left != np.nan + expected = Series([True, True, True]) + assert_series_equal(result, expected) + + def test_comparison_different_length(self): + a = Series(['a', 'b', 'c']) + b = Series(['b', 'a']) + self.assertRaises(ValueError, a.__lt__, b) + + a = Series([1, 2]) + b = Series([2, 3, 4]) + self.assertRaises(ValueError, a.__eq__, b) + + def test_comparison_label_based(self): + + # GH 4947 + # comparisons should be label based + + a = Series([True, False, True], list('bca')) + b = Series([False, True, False], list('abc')) + + expected = Series([True, False, False], list('bca')) + result = a & b + assert_series_equal(result, expected) + + expected = Series([True, False, True], list('bca')) + result = a | b + assert_series_equal(result, expected) + + expected = Series([False, False, True], list('bca')) + result = a ^ b + assert_series_equal(result, expected) + + # rhs is bigger + a = Series([True, False, True], list('bca')) + b = Series([False, True, False, True], list('abcd')) + + expected = Series([True, False, False], list('bca')) + result = a & b + assert_series_equal(result, expected) + + expected = Series([True, False, True], list('bca')) + result = a | b + assert_series_equal(result, expected) + + # filling + + # vs empty + result = a & Series([]) + expected = Series([False, False, False], list('bca')) + assert_series_equal(result, expected) + + result = a | Series([]) + expected = Series([True, False, True], list('bca')) + assert_series_equal(result, expected) + + # vs non-matching + result = a & Series([1], ['z']) + expected = Series([False, False, False], list('bca')) + assert_series_equal(result, expected) + + result = a | Series([1], ['z']) + expected = Series([True, False, True], list('bca')) + assert_series_equal(result, expected) + + # identity + # we would like 
s[s|e] == s to hold for any e, whether empty or not + for e in [Series([]), Series([1], ['z']), Series(['z']), + Series(np.nan, b.index), Series(np.nan, a.index)]: + result = a[a | e] + assert_series_equal(result, a[a]) + + # vs scalars + index = list('bca') + t = Series([True, False, True]) + + for v in [True, 1, 2]: + result = Series([True, False, True], index=index) | v + expected = Series([True, True, True], index=index) + assert_series_equal(result, expected) + + for v in [np.nan, 'foo']: + self.assertRaises(TypeError, lambda: t | v) + + for v in [False, 0]: + result = Series([True, False, True], index=index) | v + expected = Series([True, False, True], index=index) + assert_series_equal(result, expected) + + for v in [True, 1]: + result = Series([True, False, True], index=index) & v + expected = Series([True, False, True], index=index) + assert_series_equal(result, expected) + + for v in [False, 0]: + result = Series([True, False, True], index=index) & v + expected = Series([False, False, False], index=index) + assert_series_equal(result, expected) + for v in [np.nan]: + self.assertRaises(TypeError, lambda: t & v) + + def test_operators_bitwise(self): + # GH 9016: support bitwise op for integer types + index = list('bca') + + s_tft = Series([True, False, True], index=index) + s_fff = Series([False, False, False], index=index) + s_tff = Series([True, False, False], index=index) + s_empty = Series([]) + + # TODO: unused + # s_0101 = Series([0, 1, 0, 1]) + + s_0123 = Series(range(4), dtype='int64') + s_3333 = Series([3] * 4) + s_4444 = Series([4] * 4) + + res = s_tft & s_empty + expected = s_fff + assert_series_equal(res, expected) + + res = s_tft | s_empty + expected = s_tft + assert_series_equal(res, expected) + + res = s_0123 & s_3333 + expected = Series(range(4), dtype='int64') + assert_series_equal(res, expected) + + res = s_0123 | s_4444 + expected = Series(range(4, 8), dtype='int64') + assert_series_equal(res, expected) + + s_a0b1c0 = Series([1], 
list('b')) + + res = s_tft & s_a0b1c0 + expected = s_tff + assert_series_equal(res, expected) + + res = s_tft | s_a0b1c0 + expected = s_tft + assert_series_equal(res, expected) + + n0 = 0 + res = s_tft & n0 + expected = s_fff + assert_series_equal(res, expected) + + res = s_0123 & n0 + expected = Series([0] * 4) + assert_series_equal(res, expected) + + n1 = 1 + res = s_tft & n1 + expected = s_tft + assert_series_equal(res, expected) + + res = s_0123 & n1 + expected = Series([0, 1, 0, 1]) + assert_series_equal(res, expected) + + s_1111 = Series([1] * 4, dtype='int8') + res = s_0123 & s_1111 + expected = Series([0, 1, 0, 1], dtype='int64') + assert_series_equal(res, expected) + + res = s_0123.astype(np.int16) | s_1111.astype(np.int32) + expected = Series([1, 1, 3, 3], dtype='int32') + assert_series_equal(res, expected) + + self.assertRaises(TypeError, lambda: s_1111 & 'a') + self.assertRaises(TypeError, lambda: s_1111 & ['a', 'b', 'c', 'd']) + self.assertRaises(TypeError, lambda: s_0123 & np.NaN) + self.assertRaises(TypeError, lambda: s_0123 & 3.14) + self.assertRaises(TypeError, lambda: s_0123 & [0.1, 4, 3.14, 2]) + + # s_0123 will be all false now because of reindexing like s_tft + assert_series_equal(s_tft & s_0123, Series([False] * 3, list('bca'))) + # s_tft will be all false now because of reindexing like s_0123 + assert_series_equal(s_0123 & s_tft, Series([False] * 4)) + assert_series_equal(s_0123 & False, Series([False] * 4)) + assert_series_equal(s_0123 ^ False, Series([False, True, True, True])) + assert_series_equal(s_0123 & [False], Series([False] * 4)) + assert_series_equal(s_0123 & (False), Series([False] * 4)) + assert_series_equal(s_0123 & Series([False, np.NaN, False, False]), + Series([False] * 4)) + + s_ftft = Series([False, True, False, True]) + assert_series_equal(s_0123 & Series([0.1, 4, -3.14, 2]), s_ftft) + + s_abNd = Series(['a', 'b', np.NaN, 'd']) + res = s_0123 & s_abNd + expected = s_ftft + assert_series_equal(res, expected) + + def 
test_scalar_na_cmp_corners(self): + s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10]) + + def tester(a, b): + return a & b + + self.assertRaises(TypeError, tester, s, datetime(2005, 1, 1)) + + s = Series([2, 3, 4, 5, 6, 7, 8, 9, datetime(2005, 1, 1)]) + s[::2] = np.nan + + expected = Series(True, index=s.index) + expected[::2] = False + assert_series_equal(tester(s, list(s)), expected) + + d = DataFrame({'A': s}) + # TODO: Fix this exception - needs to be fixed! (see GH5035) + # (previously this was a TypeError because series returned + # NotImplemented + self.assertRaises(ValueError, tester, s, d) + + def test_operators_corner(self): + series = self.ts + + empty = Series([], index=Index([])) + + result = series + empty + self.assertTrue(np.isnan(result).all()) + + result = empty + Series([], index=Index([])) + self.assertEqual(len(result), 0) + + # TODO: this returned NotImplemented earlier, what to do? + # deltas = Series([timedelta(1)] * 5, index=np.arange(5)) + # sub_deltas = deltas[::2] + # deltas5 = deltas * 5 + # deltas = deltas + sub_deltas + + # float + int + int_ts = self.ts.astype(int)[:-5] + added = self.ts + int_ts + expected = self.ts.values[:-5] + int_ts.values + self.assert_numpy_array_equal(added[:-5], expected) + + def test_operators_reverse_object(self): + # GH 56 + arr = Series(np.random.randn(10), index=np.arange(10), dtype=object) + + def _check_op(arr, op): + result = op(1., arr) + expected = op(1., arr.astype(float)) + assert_series_equal(result.astype(float), expected) + + _check_op(arr, operator.add) + _check_op(arr, operator.sub) + _check_op(arr, operator.mul) + _check_op(arr, operator.truediv) + _check_op(arr, operator.floordiv) + + def test_series_frame_radd_bug(self): + import operator + + # GH 353 + vals = Series(tm.rands_array(5, 10)) + result = 'foo_' + vals + expected = vals.map(lambda x: 'foo_' + x) + assert_series_equal(result, expected) + + frame = DataFrame({'vals': vals}) + result = 'foo_' + frame + expected = DataFrame({'vals': 
vals.map(lambda x: 'foo_' + x)}) + tm.assert_frame_equal(result, expected) + + # really raise this time + self.assertRaises(TypeError, operator.add, datetime.now(), self.ts) + + def test_operators_frame(self): + # rpow does not work with DataFrame + df = DataFrame({'A': self.ts}) + + tm.assert_almost_equal(self.ts + self.ts, self.ts + df['A']) + tm.assert_almost_equal(self.ts ** self.ts, self.ts ** df['A']) + tm.assert_almost_equal(self.ts < self.ts, self.ts < df['A']) + tm.assert_almost_equal(self.ts / self.ts, self.ts / df['A']) + + def test_operators_combine(self): + def _check_fill(meth, op, a, b, fill_value=0): + exp_index = a.index.union(b.index) + a = a.reindex(exp_index) + b = b.reindex(exp_index) + + amask = isnull(a) + bmask = isnull(b) + + exp_values = [] + for i in range(len(exp_index)): + if amask[i]: + if bmask[i]: + exp_values.append(nan) + continue + exp_values.append(op(fill_value, b[i])) + elif bmask[i]: + if amask[i]: + exp_values.append(nan) + continue + exp_values.append(op(a[i], fill_value)) + else: + exp_values.append(op(a[i], b[i])) + + result = meth(a, b, fill_value=fill_value) + expected = Series(exp_values, exp_index) + assert_series_equal(result, expected) + + a = Series([nan, 1., 2., 3., nan], index=np.arange(5)) + b = Series([nan, 1, nan, 3, nan, 4.], index=np.arange(6)) + + pairings = [] + for op in ['add', 'sub', 'mul', 'pow', 'truediv', 'floordiv']: + fv = 0 + lop = getattr(Series, op) + lequiv = getattr(operator, op) + rop = getattr(Series, 'r' + op) + # bind op at definition time... 
+ requiv = lambda x, y, op=op: getattr(operator, op)(y, x) + pairings.append((lop, lequiv, fv)) + pairings.append((rop, requiv, fv)) + + if compat.PY3: + pairings.append((Series.div, operator.truediv, 1)) + pairings.append((Series.rdiv, lambda x, y: operator.truediv(y, x), + 1)) + else: + pairings.append((Series.div, operator.div, 1)) + pairings.append((Series.rdiv, lambda x, y: operator.div(y, x), 1)) + + for op, equiv_op, fv in pairings: + result = op(a, b) + exp = equiv_op(a, b) + assert_series_equal(result, exp) + _check_fill(op, equiv_op, a, b, fill_value=fv) + # should accept axis=0 or axis='rows' + op(a, b, axis=0) + + def test_ne(self): + ts = Series([3, 4, 5, 6, 7], [3, 4, 5, 6, 7], dtype=float) + expected = [True, True, False, True, True] + self.assertTrue(tm.equalContents(ts.index != 5, expected)) + self.assertTrue(tm.equalContents(~(ts.index == 5), expected)) + + def test_operators_na_handling(self): + from decimal import Decimal + from datetime import date + s = Series([Decimal('1.3'), Decimal('2.3')], + index=[date(2012, 1, 1), date(2012, 1, 2)]) + + result = s + s.shift(1) + result2 = s.shift(1) + s + self.assertTrue(isnull(result[0])) + self.assertTrue(isnull(result2[0])) + + s = Series(['foo', 'bar', 'baz', np.nan]) + result = 'prefix_' + s + expected = Series(['prefix_foo', 'prefix_bar', 'prefix_baz', np.nan]) + assert_series_equal(result, expected) + + result = s + '_suffix' + expected = Series(['foo_suffix', 'bar_suffix', 'baz_suffix', np.nan]) + assert_series_equal(result, expected) + + def test_divide_decimal(self): + ''' resolves issue #9787 ''' + from decimal import Decimal + + expected = Series([Decimal(5)]) + + s = Series([Decimal(10)]) + s = s / Decimal(2) + + tm.assert_series_equal(expected, s) + + s = Series([Decimal(10)]) + s = s // Decimal(2) + + tm.assert_series_equal(expected, s) + + def test_datetime64_with_index(self): + + # arithmetic integer ops with an index + s = Series(np.random.randn(5)) + expected = s - s.index.to_series() 
+ result = s - s.index + assert_series_equal(result, expected) + + # GH 4629 + # arithmetic datetime64 ops with an index + s = Series(date_range('20130101', periods=5), + index=date_range('20130101', periods=5)) + expected = s - s.index.to_series() + result = s - s.index + assert_series_equal(result, expected) + + result = s - s.index.to_period() + assert_series_equal(result, expected) + + df = DataFrame(np.random.randn(5, 2), + index=date_range('20130101', periods=5)) + df['date'] = Timestamp('20130102') + df['expected'] = df['date'] - df.index.to_series() + df['result'] = df['date'] - df.index + assert_series_equal(df['result'], df['expected'], check_names=False) diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py new file mode 100644 index 0000000000000..ec2efdcf40705 --- /dev/null +++ b/pandas/tests/series/test_repr.py @@ -0,0 +1,182 @@ +# coding=utf-8 +# pylint: disable-msg=E1101,W0612 + +from datetime import datetime, timedelta + +import numpy as np +import pandas as pd + +from pandas import (Index, Series, DataFrame, date_range) +from pandas.core.index import MultiIndex + +from pandas.compat import StringIO, lrange, range, u +from pandas import compat +import pandas.util.testing as tm + +from .common import TestData + + +class TestSeriesRepr(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_multilevel_name_print(self): + index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two', + 'three']], + labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], + [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], + names=['first', 'second']) + s = Series(lrange(0, len(index)), index=index, name='sth') + expected = ["first second", "foo one 0", + " two 1", " three 2", + "bar one 3", " two 4", + "baz two 5", " three 6", + "qux one 7", " two 8", + " three 9", "Name: sth, dtype: int64"] + expected = "\n".join(expected) + self.assertEqual(repr(s), expected) + + def test_name_printing(self): + # test small series + s = Series([0, 1, 2]) + s.name 
= "test" + self.assertIn("Name: test", repr(s)) + s.name = None + self.assertNotIn("Name:", repr(s)) + # test big series (diff code path) + s = Series(lrange(0, 1000)) + s.name = "test" + self.assertIn("Name: test", repr(s)) + s.name = None + self.assertNotIn("Name:", repr(s)) + + s = Series(index=date_range('20010101', '20020101'), name='test') + self.assertIn("Name: test", repr(s)) + + def test_repr(self): + str(self.ts) + str(self.series) + str(self.series.astype(int)) + str(self.objSeries) + + str(Series(tm.randn(1000), index=np.arange(1000))) + str(Series(tm.randn(1000), index=np.arange(1000, 0, step=-1))) + + # empty + str(self.empty) + + # with NaNs + self.series[5:7] = np.NaN + str(self.series) + + # with Nones + ots = self.ts.astype('O') + ots[::2] = None + repr(ots) + + # various names + for name in ['', 1, 1.2, 'foo', u('\u03B1\u03B2\u03B3'), + 'loooooooooooooooooooooooooooooooooooooooooooooooooooong', + ('foo', 'bar', 'baz'), (1, 2), ('foo', 1, 2.3), + (u('\u03B1'), u('\u03B2'), u('\u03B3')), + (u('\u03B1'), 'bar')]: + self.series.name = name + repr(self.series) + + biggie = Series(tm.randn(1000), index=np.arange(1000), + name=('foo', 'bar', 'baz')) + repr(biggie) + + # 0 as name + ser = Series(np.random.randn(100), name=0) + rep_str = repr(ser) + self.assertIn("Name: 0", rep_str) + + # tidy repr + ser = Series(np.random.randn(1001), name=0) + rep_str = repr(ser) + self.assertIn("Name: 0", rep_str) + + ser = Series(["a\n\r\tb"], name=["a\n\r\td"], index=["a\n\r\tf"]) + self.assertFalse("\t" in repr(ser)) + self.assertFalse("\r" in repr(ser)) + self.assertFalse("a\n" in repr(ser)) + + # with empty series (#4651) + s = Series([], dtype=np.int64, name='foo') + self.assertEqual(repr(s), 'Series([], Name: foo, dtype: int64)') + + s = Series([], dtype=np.int64, name=None) + self.assertEqual(repr(s), 'Series([], dtype: int64)') + + def test_tidy_repr(self): + a = Series([u("\u05d0")] * 1000) + a.name = 'title1' + repr(a) # should not raise exception + + def 
test_repr_bool_fails(self): + s = Series([DataFrame(np.random.randn(2, 2)) for i in range(5)]) + + import sys + + buf = StringIO() + tmp = sys.stderr + sys.stderr = buf + try: + # it works (with no Cython exception barf)! + repr(s) + finally: + sys.stderr = tmp + self.assertEqual(buf.getvalue(), '') + + def test_repr_name_iterable_indexable(self): + s = Series([1, 2, 3], name=np.int64(3)) + + # it works! + repr(s) + + s.name = (u("\u05d0"), ) * 2 + repr(s) + + def test_repr_should_return_str(self): + # http://docs.python.org/py3k/reference/datamodel.html#object.__repr__ + # http://docs.python.org/reference/datamodel.html#object.__repr__ + # ...The return value must be a string object. + + # (str on py2.x, str (unicode) on py3) + + data = [8, 5, 3, 5] + index1 = [u("\u03c3"), u("\u03c4"), u("\u03c5"), u("\u03c6")] + df = Series(data, index=index1) + self.assertIsInstance(df.__repr__(), str) # both py2 / 3 + + def test_repr_max_rows(self): + # GH 6863 + with pd.option_context('max_rows', None): + str(Series(range(1001))) # should not raise exception + + def test_unicode_string_with_unicode(self): + df = Series([u("\u05d0")], name=u("\u05d1")) + if compat.PY3: + str(df) + else: + compat.text_type(df) + + def test_bytestring_with_unicode(self): + df = Series([u("\u05d0")], name=u("\u05d1")) + if compat.PY3: + bytes(df) + else: + str(df) + + def test_timeseries_repr_object_dtype(self): + index = Index([datetime(2000, 1, 1) + timedelta(i) + for i in range(1000)], dtype=object) + ts = Series(np.random.randn(len(index)), index) + repr(ts) + + ts = tm.makeTimeSeries(1000) + self.assertTrue(repr(ts).splitlines()[-1].startswith('Freq:')) + + ts2 = ts.ix[np.random.randint(0, len(ts) - 1, 400)] + repr(ts2).splitlines()[-1] diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py new file mode 100644 index 0000000000000..00b5f01483e29 --- /dev/null +++ b/pandas/tests/series/test_timeseries.py @@ -0,0 +1,619 @@ +# coding=utf-8 +# pylint:
disable-msg=E1101,W0612 + +from datetime import datetime + +from numpy import nan +import numpy as np + +from pandas import Index, Series, notnull, date_range +from pandas.tseries.index import DatetimeIndex +from pandas.tseries.tdi import TimedeltaIndex + +import pandas.core.datetools as datetools + +from pandas.util.testing import assert_series_equal, assert_almost_equal +import pandas.util.testing as tm + +from .common import TestData + + +class TestSeriesTimeSeries(TestData, tm.TestCase): + + _multiprocess_can_split_ = True + + def test_shift(self): + shifted = self.ts.shift(1) + unshifted = shifted.shift(-1) + + tm.assert_dict_equal(unshifted.valid(), self.ts, compare_keys=False) + + offset = datetools.bday + shifted = self.ts.shift(1, freq=offset) + unshifted = shifted.shift(-1, freq=offset) + + assert_series_equal(unshifted, self.ts) + + unshifted = self.ts.shift(0, freq=offset) + assert_series_equal(unshifted, self.ts) + + shifted = self.ts.shift(1, freq='B') + unshifted = shifted.shift(-1, freq='B') + + assert_series_equal(unshifted, self.ts) + + # corner case + unshifted = self.ts.shift(0) + assert_series_equal(unshifted, self.ts) + + # Shifting with PeriodIndex + ps = tm.makePeriodSeries() + shifted = ps.shift(1) + unshifted = shifted.shift(-1) + tm.assert_dict_equal(unshifted.valid(), ps, compare_keys=False) + + shifted2 = ps.shift(1, 'B') + shifted3 = ps.shift(1, datetools.bday) + assert_series_equal(shifted2, shifted3) + assert_series_equal(ps, shifted2.shift(-1, 'B')) + + self.assertRaises(ValueError, ps.shift, freq='D') + + # legacy support + shifted4 = ps.shift(1, freq='B') + assert_series_equal(shifted2, shifted4) + + shifted5 = ps.shift(1, freq=datetools.bday) + assert_series_equal(shifted5, shifted4) + + # 32-bit taking + # GH 8129 + index = date_range('2000-01-01', periods=5) + for dtype in ['int32', 'int64']: + s1 = Series(np.arange(5, dtype=dtype), index=index) + p = s1.iloc[1] + result = s1.shift(periods=p) + expected = Series([np.nan, 0, 1, 
2, 3], index=index) + assert_series_equal(result, expected) + + # xref 8260 + # with tz + s = Series( + date_range('2000-01-01 09:00:00', periods=5, + tz='US/Eastern'), name='foo') + result = s - s.shift() + assert_series_equal(result, Series( + TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo')) + + # incompat tz + s2 = Series( + date_range('2000-01-01 09:00:00', periods=5, tz='CET'), name='foo') + self.assertRaises(ValueError, lambda: s - s2) + + def test_tshift(self): + # PeriodIndex + ps = tm.makePeriodSeries() + shifted = ps.tshift(1) + unshifted = shifted.tshift(-1) + + assert_series_equal(unshifted, ps) + + shifted2 = ps.tshift(freq='B') + assert_series_equal(shifted, shifted2) + + shifted3 = ps.tshift(freq=datetools.bday) + assert_series_equal(shifted, shifted3) + + self.assertRaises(ValueError, ps.tshift, freq='M') + + # DatetimeIndex + shifted = self.ts.tshift(1) + unshifted = shifted.tshift(-1) + + assert_series_equal(self.ts, unshifted) + + shifted2 = self.ts.tshift(freq=self.ts.index.freq) + assert_series_equal(shifted, shifted2) + + inferred_ts = Series(self.ts.values, Index(np.asarray(self.ts.index)), + name='ts') + shifted = inferred_ts.tshift(1) + unshifted = shifted.tshift(-1) + assert_series_equal(shifted, self.ts.tshift(1)) + assert_series_equal(unshifted, inferred_ts) + + no_freq = self.ts[[0, 5, 7]] + self.assertRaises(ValueError, no_freq.tshift) + + def test_truncate(self): + offset = datetools.bday + + ts = self.ts[::3] + + start, end = self.ts.index[3], self.ts.index[6] + start_missing, end_missing = self.ts.index[2], self.ts.index[7] + + # neither specified + truncated = ts.truncate() + assert_series_equal(truncated, ts) + + # both specified + expected = ts[1:3] + + truncated = ts.truncate(start, end) + assert_series_equal(truncated, expected) + + truncated = ts.truncate(start_missing, end_missing) + assert_series_equal(truncated, expected) + + # start specified + expected = ts[1:] + + truncated = ts.truncate(before=start) + 
assert_series_equal(truncated, expected) + + truncated = ts.truncate(before=start_missing) + assert_series_equal(truncated, expected) + + # end specified + expected = ts[:3] + + truncated = ts.truncate(after=end) + assert_series_equal(truncated, expected) + + truncated = ts.truncate(after=end_missing) + assert_series_equal(truncated, expected) + + # corner case, empty series returned + truncated = ts.truncate(after=self.ts.index[0] - offset) + assert (len(truncated) == 0) + + truncated = ts.truncate(before=self.ts.index[-1] + offset) + assert (len(truncated) == 0) + + self.assertRaises(ValueError, ts.truncate, + before=self.ts.index[-1] + offset, + after=self.ts.index[0] - offset) + + def test_asof(self): + # array or list or dates + N = 50 + rng = date_range('1/1/1990', periods=N, freq='53s') + ts = Series(np.random.randn(N), index=rng) + ts[15:30] = np.nan + dates = date_range('1/1/1990', periods=N * 3, freq='25s') + + result = ts.asof(dates) + self.assertTrue(notnull(result).all()) + lb = ts.index[14] + ub = ts.index[30] + + result = ts.asof(list(dates)) + self.assertTrue(notnull(result).all()) + lb = ts.index[14] + ub = ts.index[30] + + mask = (result.index >= lb) & (result.index < ub) + rs = result[mask] + self.assertTrue((rs == ts[lb]).all()) + + val = result[result.index[result.index >= ub][0]] + self.assertEqual(ts[ub], val) + + self.ts[5:10] = np.NaN + self.ts[15:20] = np.NaN + + val1 = self.ts.asof(self.ts.index[7]) + val2 = self.ts.asof(self.ts.index[19]) + + self.assertEqual(val1, self.ts[4]) + self.assertEqual(val2, self.ts[14]) + + # accepts strings + val1 = self.ts.asof(str(self.ts.index[7])) + self.assertEqual(val1, self.ts[4]) + + # in there + self.assertEqual(self.ts.asof(self.ts.index[3]), self.ts[3]) + + # no as of value + d = self.ts.index[0] - datetools.bday + self.assertTrue(np.isnan(self.ts.asof(d))) + + def test_getitem_setitem_datetimeindex(self): + from pandas import date_range + N = 50 + # testing with timezone, GH #2785 + rng = 
date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern') + ts = Series(np.random.randn(N), index=rng) + + result = ts["1990-01-01 04:00:00"] + expected = ts[4] + self.assertEqual(result, expected) + + result = ts.copy() + result["1990-01-01 04:00:00"] = 0 + result["1990-01-01 04:00:00"] = ts[4] + assert_series_equal(result, ts) + + result = ts["1990-01-01 04:00:00":"1990-01-01 07:00:00"] + expected = ts[4:8] + assert_series_equal(result, expected) + + result = ts.copy() + result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = 0 + result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = ts[4:8] + assert_series_equal(result, ts) + + lb = "1990-01-01 04:00:00" + rb = "1990-01-01 07:00:00" + result = ts[(ts.index >= lb) & (ts.index <= rb)] + expected = ts[4:8] + assert_series_equal(result, expected) + + # repeat all the above with naive datetimes + result = ts[datetime(1990, 1, 1, 4)] + expected = ts[4] + self.assertEqual(result, expected) + + result = ts.copy() + result[datetime(1990, 1, 1, 4)] = 0 + result[datetime(1990, 1, 1, 4)] = ts[4] + assert_series_equal(result, ts) + + result = ts[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] + expected = ts[4:8] + assert_series_equal(result, expected) + + result = ts.copy() + result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = 0 + result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = ts[4:8] + assert_series_equal(result, ts) + + lb = datetime(1990, 1, 1, 4) + rb = datetime(1990, 1, 1, 7) + result = ts[(ts.index >= lb) & (ts.index <= rb)] + expected = ts[4:8] + assert_series_equal(result, expected) + + result = ts[ts.index[4]] + expected = ts[4] + self.assertEqual(result, expected) + + result = ts[ts.index[4:8]] + expected = ts[4:8] + assert_series_equal(result, expected) + + result = ts.copy() + result[ts.index[4:8]] = 0 + result[4:8] = ts[4:8] + assert_series_equal(result, ts) + + # also test partial date slicing + result = ts["1990-01-02"] + expected = ts[24:48] + assert_series_equal(result, expected) + + 
result = ts.copy() + result["1990-01-02"] = 0 + result["1990-01-02"] = ts[24:48] + assert_series_equal(result, ts) + + def test_getitem_setitem_datetime_tz_pytz(self): + tm._skip_if_no_pytz() + from pytz import timezone as tz + + from pandas import date_range + N = 50 + # testing with timezone, GH #2785 + rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern') + ts = Series(np.random.randn(N), index=rng) + + # also test Timestamp tz handling, GH #2789 + result = ts.copy() + result["1990-01-01 09:00:00+00:00"] = 0 + result["1990-01-01 09:00:00+00:00"] = ts[4] + assert_series_equal(result, ts) + + result = ts.copy() + result["1990-01-01 03:00:00-06:00"] = 0 + result["1990-01-01 03:00:00-06:00"] = ts[4] + assert_series_equal(result, ts) + + # repeat with datetimes + result = ts.copy() + result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0 + result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4] + assert_series_equal(result, ts) + + result = ts.copy() + + # comparison dates with datetime MUST be localized! 
+ date = tz('US/Central').localize(datetime(1990, 1, 1, 3)) + result[date] = 0 + result[date] = ts[4] + assert_series_equal(result, ts) + + def test_getitem_setitem_datetime_tz_dateutil(self): + tm._skip_if_no_dateutil() + from dateutil.tz import tzutc + from pandas.tslib import _dateutil_gettz as gettz + + tz = lambda x: tzutc() if x == 'UTC' else gettz( + x) # handle special case for utc in dateutil + + from pandas import date_range + N = 50 + # testing with timezone, GH #2785 + rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern') + ts = Series(np.random.randn(N), index=rng) + + # also test Timestamp tz handling, GH #2789 + result = ts.copy() + result["1990-01-01 09:00:00+00:00"] = 0 + result["1990-01-01 09:00:00+00:00"] = ts[4] + assert_series_equal(result, ts) + + result = ts.copy() + result["1990-01-01 03:00:00-06:00"] = 0 + result["1990-01-01 03:00:00-06:00"] = ts[4] + assert_series_equal(result, ts) + + # repeat with datetimes + result = ts.copy() + result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0 + result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4] + assert_series_equal(result, ts) + + result = ts.copy() + result[datetime(1990, 1, 1, 3, tzinfo=tz('US/Central'))] = 0 + result[datetime(1990, 1, 1, 3, tzinfo=tz('US/Central'))] = ts[4] + assert_series_equal(result, ts) + + def test_getitem_setitem_periodindex(self): + from pandas import period_range + N = 50 + rng = period_range('1/1/1990', periods=N, freq='H') + ts = Series(np.random.randn(N), index=rng) + + result = ts["1990-01-01 04"] + expected = ts[4] + self.assertEqual(result, expected) + + result = ts.copy() + result["1990-01-01 04"] = 0 + result["1990-01-01 04"] = ts[4] + assert_series_equal(result, ts) + + result = ts["1990-01-01 04":"1990-01-01 07"] + expected = ts[4:8] + assert_series_equal(result, expected) + + result = ts.copy() + result["1990-01-01 04":"1990-01-01 07"] = 0 + result["1990-01-01 04":"1990-01-01 07"] = ts[4:8] + assert_series_equal(result, ts) + + lb = 
"1990-01-01 04" + rb = "1990-01-01 07" + result = ts[(ts.index >= lb) & (ts.index <= rb)] + expected = ts[4:8] + assert_series_equal(result, expected) + + # GH 2782 + result = ts[ts.index[4]] + expected = ts[4] + self.assertEqual(result, expected) + + result = ts[ts.index[4:8]] + expected = ts[4:8] + assert_series_equal(result, expected) + + result = ts.copy() + result[ts.index[4:8]] = 0 + result[4:8] = ts[4:8] + assert_series_equal(result, ts) + + def test_asof_periodindex(self): + from pandas import period_range, PeriodIndex + # array or list or dates + N = 50 + rng = period_range('1/1/1990', periods=N, freq='H') + ts = Series(np.random.randn(N), index=rng) + ts[15:30] = np.nan + dates = date_range('1/1/1990', periods=N * 3, freq='37min') + + result = ts.asof(dates) + self.assertTrue(notnull(result).all()) + lb = ts.index[14] + ub = ts.index[30] + + result = ts.asof(list(dates)) + self.assertTrue(notnull(result).all()) + lb = ts.index[14] + ub = ts.index[30] + + pix = PeriodIndex(result.index.values, freq='H') + mask = (pix >= lb) & (pix < ub) + rs = result[mask] + self.assertTrue((rs == ts[lb]).all()) + + ts[5:10] = np.NaN + ts[15:20] = np.NaN + + val1 = ts.asof(ts.index[7]) + val2 = ts.asof(ts.index[19]) + + self.assertEqual(val1, ts[4]) + self.assertEqual(val2, ts[14]) + + # accepts strings + val1 = ts.asof(str(ts.index[7])) + self.assertEqual(val1, ts[4]) + + # in there + self.assertEqual(ts.asof(ts.index[3]), ts[3]) + + # no as of value + d = ts.index[0].to_timestamp() - datetools.bday + self.assertTrue(np.isnan(ts.asof(d))) + + def test_asof_more(self): + from pandas import date_range + s = Series([nan, nan, 1, 2, nan, nan, 3, 4, 5], + index=date_range('1/1/2000', periods=9)) + + dates = s.index[[4, 5, 6, 2, 1]] + + result = s.asof(dates) + expected = Series([2, 2, 3, 1, np.nan], index=dates) + + assert_series_equal(result, expected) + + s = Series([1.5, 2.5, 1, 2, nan, nan, 3, 4, 5], + index=date_range('1/1/2000', periods=9)) + result = s.asof(s.index[0]) 
+ self.assertEqual(result, s[0]) + + def test_asfreq(self): + ts = Series([0., 1., 2.], index=[datetime(2009, 10, 30), datetime( + 2009, 11, 30), datetime(2009, 12, 31)]) + + daily_ts = ts.asfreq('B') + monthly_ts = daily_ts.asfreq('BM') + self.assert_numpy_array_equal(monthly_ts, ts) + + daily_ts = ts.asfreq('B', method='pad') + monthly_ts = daily_ts.asfreq('BM') + self.assert_numpy_array_equal(monthly_ts, ts) + + daily_ts = ts.asfreq(datetools.bday) + monthly_ts = daily_ts.asfreq(datetools.bmonthEnd) + self.assert_numpy_array_equal(monthly_ts, ts) + + result = ts[:0].asfreq('M') + self.assertEqual(len(result), 0) + self.assertIsNot(result, ts) + + def test_diff(self): + # Just run the function + self.ts.diff() + + # int dtype + a = 10000000000000000 + b = a + 1 + s = Series([a, b]) + + rs = s.diff() + self.assertEqual(rs[1], 1) + + # neg n + rs = self.ts.diff(-1) + xp = self.ts - self.ts.shift(-1) + assert_series_equal(rs, xp) + + # 0 + rs = self.ts.diff(0) + xp = self.ts - self.ts + assert_series_equal(rs, xp) + + # datetime diff (GH3100) + s = Series(date_range('20130102', periods=5)) + rs = s - s.shift(1) + xp = s.diff() + assert_series_equal(rs, xp) + + # timedelta diff + nrs = rs - rs.shift(1) + nxp = xp.diff() + assert_series_equal(nrs, nxp) + + # with tz + s = Series( + date_range('2000-01-01 09:00:00', periods=5, + tz='US/Eastern'), name='foo') + result = s.diff() + assert_series_equal(result, Series( + TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo')) + + def test_pct_change(self): + rs = self.ts.pct_change(fill_method=None) + assert_series_equal(rs, self.ts / self.ts.shift(1) - 1) + + rs = self.ts.pct_change(2) + filled = self.ts.fillna(method='pad') + assert_series_equal(rs, filled / filled.shift(2) - 1) + + rs = self.ts.pct_change(fill_method='bfill', limit=1) + filled = self.ts.fillna(method='bfill', limit=1) + assert_series_equal(rs, filled / filled.shift(1) - 1) + + rs = self.ts.pct_change(freq='5D') + filled = self.ts.fillna(method='pad') + 
assert_series_equal(rs, filled / filled.shift(freq='5D') - 1) + + def test_pct_change_shift_over_nas(self): + s = Series([1., 1.5, np.nan, 2.5, 3.]) + + chg = s.pct_change() + expected = Series([np.nan, 0.5, np.nan, 2.5 / 1.5 - 1, .2]) + assert_series_equal(chg, expected) + + def test_autocorr(self): + # Just run the function + corr1 = self.ts.autocorr() + + # Now run it with the lag parameter + corr2 = self.ts.autocorr(lag=1) + + # corr() with lag needs Series of at least length 2 + if len(self.ts) <= 2: + self.assertTrue(np.isnan(corr1)) + self.assertTrue(np.isnan(corr2)) + else: + self.assertEqual(corr1, corr2) + + # Choose a random lag between 1 and length of Series - 2 + # and compare the result with the Series corr() function + n = 1 + np.random.randint(max(1, len(self.ts) - 2)) + corr1 = self.ts.corr(self.ts.shift(n)) + corr2 = self.ts.autocorr(lag=n) + + # corr() with lag needs Series of at least length 2 + if len(self.ts) <= 2: + self.assertTrue(np.isnan(corr1)) + self.assertTrue(np.isnan(corr2)) + else: + self.assertEqual(corr1, corr2) + + def test_first_last_valid(self): + ts = self.ts.copy() + ts[:5] = np.NaN + + index = ts.first_valid_index() + self.assertEqual(index, ts.index[5]) + + ts[-5:] = np.NaN + index = ts.last_valid_index() + self.assertEqual(index, ts.index[-6]) + + ts[:] = np.nan + self.assertIsNone(ts.last_valid_index()) + self.assertIsNone(ts.first_valid_index()) + + ser = Series([], index=[]) + self.assertIsNone(ser.last_valid_index()) + self.assertIsNone(ser.first_valid_index()) + + def test_mpl_compat_hack(self): + result = self.ts[:, np.newaxis] + expected = self.ts.values[:, np.newaxis] + assert_almost_equal(result, expected) + + def test_timeseries_coercion(self): + idx = tm.makeDateIndex(10000) + ser = Series(np.random.randn(len(idx)), idx.astype(object)) + with tm.assert_produces_warning(FutureWarning): + self.assertTrue(ser.is_time_series) + self.assertTrue(ser.index.is_all_dates) + self.assertIsInstance(ser.index, DatetimeIndex) 
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py deleted file mode 100644 index dff7aead5bdba..0000000000000 --- a/pandas/tests/test_series.py +++ /dev/null @@ -1,8649 +0,0 @@ -# coding=utf-8 -# pylint: disable-msg=E1101,W0612 - -import sys -from datetime import datetime, timedelta -import operator -import string -from inspect import getargspec -from itertools import product, starmap -from distutils.version import LooseVersion -import random - -import nose - -from numpy import nan, inf -import numpy as np -import numpy.ma as ma -import pandas as pd - -from pandas import (Index, Series, DataFrame, isnull, notnull, bdate_range, - NaT, date_range, period_range, timedelta_range, - _np_version_under1p8, _np_version_under1p9) -from pandas.core.index import MultiIndex, RangeIndex -from pandas.core.indexing import IndexingError -from pandas.tseries.period import PeriodIndex -from pandas.tseries.index import Timestamp, DatetimeIndex -from pandas.tseries.tdi import Timedelta, TimedeltaIndex -import pandas.core.common as com -import pandas.core.config as cf -import pandas.lib as lib - -import pandas.core.datetools as datetools -import pandas.core.nanops as nanops - -from pandas.compat import StringIO, lrange, range, zip, u, OrderedDict, long -from pandas import compat -from pandas.util.testing import (assert_series_equal, assert_almost_equal, - assert_frame_equal, assert_index_equal, - ensure_clean) -import pandas.util.testing as tm - -# ----------------------------------------------------------------------------- -# Series test cases - -JOIN_TYPES = ['inner', 'outer', 'left', 'right'] - - -class CheckNameIntegration(object): - - _multiprocess_can_split_ = True - - def test_scalarop_preserve_name(self): - result = self.ts * 2 - self.assertEqual(result.name, self.ts.name) - - def test_copy_name(self): - result = self.ts.copy() - self.assertEqual(result.name, self.ts.name) - - def test_copy_index_name_checking(self): - # don't want to be able to modify the 
index stored elsewhere after - # making a copy - - self.ts.index.name = None - self.assertIsNone(self.ts.index.name) - self.assertIs(self.ts, self.ts) - - cp = self.ts.copy() - cp.index.name = 'foo' - com.pprint_thing(self.ts.index.name) - self.assertIsNone(self.ts.index.name) - - def test_append_preserve_name(self): - result = self.ts[:5].append(self.ts[5:]) - self.assertEqual(result.name, self.ts.name) - - def test_dt_namespace_accessor(self): - - # GH 7207 - # test .dt namespace accessor - - ok_for_base = ['year', 'month', 'day', 'hour', 'minute', 'second', - 'weekofyear', 'week', 'dayofweek', 'weekday', - 'dayofyear', 'quarter', 'freq', 'days_in_month', - 'daysinmonth'] - ok_for_period = ok_for_base + ['qyear'] - ok_for_period_methods = ['strftime'] - ok_for_dt = ok_for_base + ['date', 'time', 'microsecond', 'nanosecond', - 'is_month_start', 'is_month_end', - 'is_quarter_start', 'is_quarter_end', - 'is_year_start', 'is_year_end', 'tz'] - ok_for_dt_methods = ['to_period', 'to_pydatetime', 'tz_localize', - 'tz_convert', 'normalize', 'strftime', 'round', - 'floor', 'ceil'] - ok_for_td = ['days', 'seconds', 'microseconds', 'nanoseconds'] - ok_for_td_methods = ['components', 'to_pytimedelta', 'total_seconds', - 'round', 'floor', 'ceil'] - - def get_expected(s, name): - result = getattr(Index(s._values), prop) - if isinstance(result, np.ndarray): - if com.is_integer_dtype(result): - result = result.astype('int64') - elif not com.is_list_like(result): - return result - return Series(result, index=s.index) - - def compare(s, name): - a = getattr(s.dt, prop) - b = get_expected(s, prop) - if not (com.is_list_like(a) and com.is_list_like(b)): - self.assertEqual(a, b) - else: - tm.assert_series_equal(a, b) - - # datetimeindex - for s in [Series(date_range('20130101', periods=5)), - Series(date_range('20130101', periods=5, freq='s')), - Series(date_range('20130101 00:00:00', periods=5, freq='ms')) - ]: - for prop in ok_for_dt: - # we test freq below - if prop != 'freq': - 
compare(s, prop) - - for prop in ok_for_dt_methods: - getattr(s.dt, prop) - - result = s.dt.to_pydatetime() - self.assertIsInstance(result, np.ndarray) - self.assertTrue(result.dtype == object) - - result = s.dt.tz_localize('US/Eastern') - expected = Series( - DatetimeIndex(s.values).tz_localize('US/Eastern'), - index=s.index) - tm.assert_series_equal(result, expected) - - tz_result = result.dt.tz - self.assertEqual(str(tz_result), 'US/Eastern') - freq_result = s.dt.freq - self.assertEqual(freq_result, DatetimeIndex(s.values, - freq='infer').freq) - - # let's localize, then convert - result = s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern') - expected = Series( - DatetimeIndex(s.values).tz_localize('UTC').tz_convert( - 'US/Eastern'), index=s.index) - tm.assert_series_equal(result, expected) - - # round - s = Series(pd.to_datetime( - ['2012-01-01 13:00:00', '2012-01-01 12:01:00', - '2012-01-01 08:00:00'])) - result = s.dt.round('D') - expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02', - '2012-01-01'])) - tm.assert_series_equal(result, expected) - - # round with tz - result = s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern').dt.round( - 'D') - expected = Series(pd.to_datetime(['2012-01-01', '2012-01-01', - '2012-01-01']).tz_localize( - 'US/Eastern')) - tm.assert_series_equal(result, expected) - - # floor - s = Series(pd.to_datetime( - ['2012-01-01 13:00:00', '2012-01-01 12:01:00', - '2012-01-01 08:00:00'])) - result = s.dt.floor('D') - expected = Series(pd.to_datetime(['2012-01-01', '2012-01-01', - '2012-01-01'])) - tm.assert_series_equal(result, expected) - - # ceil - s = Series(pd.to_datetime( - ['2012-01-01 13:00:00', '2012-01-01 12:01:00', - '2012-01-01 08:00:00'])) - result = s.dt.ceil('D') - expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02', - '2012-01-02'])) - tm.assert_series_equal(result, expected) - - # datetimeindex with tz - s = Series(date_range('20130101', periods=5, tz='US/Eastern')) - for prop in ok_for_dt: - - # we test 
freq below - if prop != 'freq': - compare(s, prop) - - for prop in ok_for_dt_methods: - getattr(s.dt, prop) - - result = s.dt.to_pydatetime() - self.assertIsInstance(result, np.ndarray) - self.assertTrue(result.dtype == object) - - result = s.dt.tz_convert('CET') - expected = Series(s._values.tz_convert('CET'), index=s.index) - tm.assert_series_equal(result, expected) - - tz_result = result.dt.tz - self.assertEqual(str(tz_result), 'CET') - freq_result = s.dt.freq - self.assertEqual(freq_result, DatetimeIndex(s.values, - freq='infer').freq) - - # timedeltaindex - for s in [Series( - timedelta_range('1 day', periods=5), index=list('abcde')), - Series(timedelta_range('1 day 01:23:45', periods=5, freq='s')), - Series(timedelta_range('2 days 01:23:45.012345', periods=5, - freq='ms'))]: - for prop in ok_for_td: - # we test freq below - if prop != 'freq': - compare(s, prop) - - for prop in ok_for_td_methods: - getattr(s.dt, prop) - - result = s.dt.components - self.assertIsInstance(result, DataFrame) - tm.assert_index_equal(result.index, s.index) - - result = s.dt.to_pytimedelta() - self.assertIsInstance(result, np.ndarray) - self.assertTrue(result.dtype == object) - - result = s.dt.total_seconds() - self.assertIsInstance(result, pd.Series) - self.assertTrue(result.dtype == 'float64') - - freq_result = s.dt.freq - self.assertEqual(freq_result, TimedeltaIndex(s.values, - freq='infer').freq) - - # both - index = date_range('20130101', periods=3, freq='D') - s = Series(date_range('20140204', periods=3, freq='s'), index=index) - tm.assert_series_equal(s.dt.year, Series( - np.array( - [2014, 2014, 2014], dtype='int64'), index=index)) - tm.assert_series_equal(s.dt.month, Series( - np.array( - [2, 2, 2], dtype='int64'), index=index)) - tm.assert_series_equal(s.dt.second, Series( - np.array( - [0, 1, 2], dtype='int64'), index=index)) - tm.assert_series_equal(s.dt.normalize(), pd.Series( - [s[0]] * 3, index=index)) - - # periodindex - for s in [Series(period_range('20130101', 
periods=5, freq='D'))]: - for prop in ok_for_period: - # we test freq below - if prop != 'freq': - compare(s, prop) - - for prop in ok_for_period_methods: - getattr(s.dt, prop) - - freq_result = s.dt.freq - self.assertEqual(freq_result, PeriodIndex(s.values).freq) - - # test limited display api - def get_dir(s): - results = [r for r in s.dt.__dir__() if not r.startswith('_')] - return list(sorted(set(results))) - - s = Series(date_range('20130101', periods=5, freq='D')) - results = get_dir(s) - tm.assert_almost_equal( - results, list(sorted(set(ok_for_dt + ok_for_dt_methods)))) - - s = Series(period_range('20130101', periods=5, freq='D').asobject) - results = get_dir(s) - tm.assert_almost_equal( - results, list(sorted(set(ok_for_period + ok_for_period_methods)))) - - # 11295 - # ambiguous time error on the conversions - s = Series(pd.date_range('2015-01-01', '2016-01-01', freq='T')) - s = s.dt.tz_localize('UTC').dt.tz_convert('America/Chicago') - results = get_dir(s) - tm.assert_almost_equal( - results, list(sorted(set(ok_for_dt + ok_for_dt_methods)))) - expected = Series(pd.date_range('2015-01-01', '2016-01-01', freq='T', - tz='UTC').tz_convert( - 'America/Chicago')) - tm.assert_series_equal(s, expected) - - # no setting allowed - s = Series(date_range('20130101', periods=5, freq='D')) - with tm.assertRaisesRegexp(ValueError, "modifications"): - s.dt.hour = 5 - - # trying to set a copy - with pd.option_context('chained_assignment', 'raise'): - - def f(): - s.dt.hour[0] = 5 - - self.assertRaises(com.SettingWithCopyError, f) - - def test_dt_accessor_no_new_attributes(self): - # https://github.com/pydata/pandas/issues/10673 - s = Series(date_range('20130101', periods=5, freq='D')) - with tm.assertRaisesRegexp(AttributeError, - "You cannot add any new attribute"): - s.dt.xlabel = "a" - - def test_strftime(self): - # GH 10086 - s = Series(date_range('20130101', periods=5)) - result = s.dt.strftime('%Y/%m/%d') - expected = Series(['2013/01/01', '2013/01/02', 
-                            '2013/01/03',
-                            '2013/01/04', '2013/01/05'])
-        tm.assert_series_equal(result, expected)
-
-        s = Series(date_range('2015-02-03 11:22:33.4567', periods=5))
-        result = s.dt.strftime('%Y/%m/%d %H-%M-%S')
-        expected = Series(['2015/02/03 11-22-33', '2015/02/04 11-22-33',
-                           '2015/02/05 11-22-33', '2015/02/06 11-22-33',
-                           '2015/02/07 11-22-33'])
-        tm.assert_series_equal(result, expected)
-
-        s = Series(period_range('20130101', periods=5))
-        result = s.dt.strftime('%Y/%m/%d')
-        expected = Series(['2013/01/01', '2013/01/02', '2013/01/03',
-                           '2013/01/04', '2013/01/05'])
-        tm.assert_series_equal(result, expected)
-
-        s = Series(period_range(
-            '2015-02-03 11:22:33.4567', periods=5, freq='s'))
-        result = s.dt.strftime('%Y/%m/%d %H-%M-%S')
-        expected = Series(['2015/02/03 11-22-33', '2015/02/03 11-22-34',
-                           '2015/02/03 11-22-35', '2015/02/03 11-22-36',
-                           '2015/02/03 11-22-37'])
-        tm.assert_series_equal(result, expected)
-
-        s = Series(date_range('20130101', periods=5))
-        s.iloc[0] = pd.NaT
-        result = s.dt.strftime('%Y/%m/%d')
-        expected = Series(['NaT', '2013/01/02', '2013/01/03', '2013/01/04',
-                           '2013/01/05'])
-        tm.assert_series_equal(result, expected)
-
-        datetime_index = date_range('20150301', periods=5)
-        result = datetime_index.strftime("%Y/%m/%d")
-        expected = np.array(
-            ['2015/03/01', '2015/03/02', '2015/03/03', '2015/03/04',
-             '2015/03/05'], dtype=object)
-        self.assert_numpy_array_equal(result, expected)
-
-        period_index = period_range('20150301', periods=5)
-        result = period_index.strftime("%Y/%m/%d")
-        expected = np.array(
-            ['2015/03/01', '2015/03/02', '2015/03/03', '2015/03/04',
-             '2015/03/05'], dtype=object)
-        self.assert_numpy_array_equal(result, expected)
-
-        s = Series([datetime(2013, 1, 1, 2, 32, 59), datetime(2013, 1, 2, 14,
-                                                              32, 1)])
-        result = s.dt.strftime('%Y-%m-%d %H:%M:%S')
-        expected = Series(["2013-01-01 02:32:59", "2013-01-02 14:32:01"])
-        tm.assert_series_equal(result, expected)
-
-        s = Series(period_range('20130101', periods=4, freq='H'))
-        result = s.dt.strftime('%Y/%m/%d %H:%M:%S')
-        expected = Series(["2013/01/01 00:00:00", "2013/01/01 01:00:00",
-                           "2013/01/01 02:00:00", "2013/01/01 03:00:00"])
-
-        s = Series(period_range('20130101', periods=4, freq='L'))
-        result = s.dt.strftime('%Y/%m/%d %H:%M:%S.%l')
-        expected = Series(
-            ["2013/01/01 00:00:00.000", "2013/01/01 00:00:00.001",
-             "2013/01/01 00:00:00.002", "2013/01/01 00:00:00.003"])
-        tm.assert_series_equal(result, expected)
-
-    def test_valid_dt_with_missing_values(self):
-
-        from datetime import date, time
-
-        # GH 8689
-        s = Series(date_range('20130101', periods=5, freq='D'))
-        s.iloc[2] = pd.NaT
-
-        for attr in ['microsecond', 'nanosecond', 'second', 'minute', 'hour',
-                     'day']:
-            expected = getattr(s.dt, attr).copy()
-            expected.iloc[2] = np.nan
-            result = getattr(s.dt, attr)
-            tm.assert_series_equal(result, expected)
-
-        result = s.dt.date
-        expected = Series(
-            [date(2013, 1, 1), date(2013, 1, 2), np.nan, date(2013, 1, 4),
-             date(2013, 1, 5)], dtype='object')
-        tm.assert_series_equal(result, expected)
-
-        result = s.dt.time
-        expected = Series(
-            [time(0), time(0), np.nan, time(0), time(0)], dtype='object')
-        tm.assert_series_equal(result, expected)
-
-    def test_dt_accessor_api(self):
-        # GH 9322
-        from pandas.tseries.common import (CombinedDatetimelikeProperties,
-                                           DatetimeProperties)
-        self.assertIs(Series.dt, CombinedDatetimelikeProperties)
-
-        s = Series(date_range('2000-01-01', periods=3))
-        self.assertIsInstance(s.dt, DatetimeProperties)
-
-        for s in [Series(np.arange(5)), Series(list('abcde')),
-                  Series(np.random.randn(5))]:
-            with tm.assertRaisesRegexp(AttributeError,
-                                       "only use .dt accessor"):
-                s.dt
-            self.assertFalse(hasattr(s, 'dt'))
-
-    def test_tab_completion(self):
-        # GH 9910
-        s = Series(list('abcd'))
-        # Series of str values should have .str but not .dt/.cat in __dir__
-        self.assertTrue('str' in dir(s))
-        self.assertTrue('dt' not in dir(s))
-        self.assertTrue('cat' not in dir(s))
-
-        # similiarly for .dt
-        s = Series(date_range('1/1/2015', periods=5))
-        self.assertTrue('dt' in dir(s))
-        self.assertTrue('str' not in dir(s))
-        self.assertTrue('cat' not in dir(s))
-
-        # similiarly for .cat, but with the twist that str and dt should be
-        # there if the categories are of that type first cat and str
-        s = Series(list('abbcd'), dtype="category")
-        self.assertTrue('cat' in dir(s))
-        self.assertTrue('str' in dir(s))  # as it is a string categorical
-        self.assertTrue('dt' not in dir(s))
-
-        # similar to cat and str
-        s = Series(date_range('1/1/2015', periods=5)).astype("category")
-        self.assertTrue('cat' in dir(s))
-        self.assertTrue('str' not in dir(s))
-        self.assertTrue('dt' in dir(s))  # as it is a datetime categorical
-
-    def test_binop_maybe_preserve_name(self):
-        # names match, preserve
-        result = self.ts * self.ts
-        self.assertEqual(result.name, self.ts.name)
-        result = self.ts.mul(self.ts)
-        self.assertEqual(result.name, self.ts.name)
-
-        result = self.ts * self.ts[:-2]
-        self.assertEqual(result.name, self.ts.name)
-
-        # names don't match, don't preserve
-        cp = self.ts.copy()
-        cp.name = 'something else'
-        result = self.ts + cp
-        self.assertIsNone(result.name)
-        result = self.ts.add(cp)
-        self.assertIsNone(result.name)
-
-        ops = ['add', 'sub', 'mul', 'div', 'truediv', 'floordiv', 'mod', 'pow']
-        ops = ops + ['r' + op for op in ops]
-        for op in ops:
-            # names match, preserve
-            s = self.ts.copy()
-            result = getattr(s, op)(s)
-            self.assertEqual(result.name, self.ts.name)
-
-            # names don't match, don't preserve
-            cp = self.ts.copy()
-            cp.name = 'changed'
-            result = getattr(s, op)(cp)
-            self.assertIsNone(result.name)
-
-    def test_combine_first_name(self):
-        result = self.ts.combine_first(self.ts[:5])
-        self.assertEqual(result.name, self.ts.name)
-
-    def test_combine_first_dt64(self):
-        from pandas.tseries.tools import to_datetime
-        s0 = to_datetime(Series(["2010", np.NaN]))
-        s1 = to_datetime(Series([np.NaN, "2011"]))
-        rs = s0.combine_first(s1)
-        xp = to_datetime(Series(['2010', '2011']))
-        assert_series_equal(rs, xp)
-
-        s0 = to_datetime(Series(["2010", np.NaN]))
-        s1 = Series([np.NaN, "2011"])
-        rs = s0.combine_first(s1)
-        xp = Series([datetime(2010, 1, 1), '2011'])
-        assert_series_equal(rs, xp)
-
-    def test_get(self):
-
-        # GH 6383
-        s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, 45,
-                             51, 39, 55, 43, 54, 52, 51, 54]))
-
-        result = s.get(25, 0)
-        expected = 0
-        self.assertEqual(result, expected)
-
-        s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56,
-                             45, 51, 39, 55, 43, 54, 52, 51, 54]),
-                   index=pd.Float64Index(
-                       [25.0, 36.0, 49.0, 64.0, 81.0, 100.0,
-                        121.0, 144.0, 169.0, 196.0, 1225.0,
-                        1296.0, 1369.0, 1444.0, 1521.0, 1600.0,
-                        1681.0, 1764.0, 1849.0, 1936.0],
-                       dtype='object'))
-
-        result = s.get(25, 0)
-        expected = 43
-        self.assertEqual(result, expected)
-
-        # GH 7407
-        # with a boolean accessor
-        df = pd.DataFrame({'i': [0] * 3, 'b': [False] * 3})
-        vc = df.i.value_counts()
-        result = vc.get(99, default='Missing')
-        self.assertEqual(result, 'Missing')
-
-        vc = df.b.value_counts()
-        result = vc.get(False, default='Missing')
-        self.assertEqual(result, 3)
-
-        result = vc.get(True, default='Missing')
-        self.assertEqual(result, 'Missing')
-
-    def test_delitem(self):
-
-        # GH 5542
-        # should delete the item inplace
-        s = Series(lrange(5))
-        del s[0]
-
-        expected = Series(lrange(1, 5), index=lrange(1, 5))
-        assert_series_equal(s, expected)
-
-        del s[1]
-        expected = Series(lrange(2, 5), index=lrange(2, 5))
-        assert_series_equal(s, expected)
-
-        # empty
-        s = Series()
-
-        def f():
-            del s[0]
-
-        self.assertRaises(KeyError, f)
-
-        # only 1 left, del, add, del
-        s = Series(1)
-        del s[0]
-        assert_series_equal(s, Series(dtype='int64', index=Index(
-            [], dtype='int64')))
-        s[0] = 1
-        assert_series_equal(s, Series(1))
-        del s[0]
-        assert_series_equal(s, Series(dtype='int64', index=Index(
-            [], dtype='int64')))
-
-        # Index(dtype=object)
-        s = Series(1, index=['a'])
-        del s['a']
-        assert_series_equal(s, Series(dtype='int64', index=Index(
-            [], dtype='object')))
-        s['a'] = 1
-        assert_series_equal(s, Series(1, index=['a']))
-        del s['a']
-        assert_series_equal(s, Series(dtype='int64', index=Index(
-            [], dtype='object')))
-
-    def test_getitem_preserve_name(self):
-        result = self.ts[self.ts > 0]
-        self.assertEqual(result.name, self.ts.name)
-
-        result = self.ts[[0, 2, 4]]
-        self.assertEqual(result.name, self.ts.name)
-
-        result = self.ts[5:10]
-        self.assertEqual(result.name, self.ts.name)
-
-    def test_getitem_setitem_ellipsis(self):
-        s = Series(np.random.randn(10))
-
-        np.fix(s)
-
-        result = s[...]
-        assert_series_equal(result, s)
-
-        s[...] = 5
-        self.assertTrue((result == 5).all())
-
-    def test_getitem_negative_out_of_bounds(self):
-        s = Series(tm.rands_array(5, 10), index=tm.rands_array(10, 10))
-
-        self.assertRaises(IndexError, s.__getitem__, -11)
-        self.assertRaises(IndexError, s.__setitem__, -11, 'foo')
-
-    def test_multilevel_name_print(self):
-        index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
-                                                                  'three']],
-                           labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
-                                   [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
-                           names=['first', 'second'])
-        s = Series(lrange(0, len(index)), index=index, name='sth')
-        expected = ["first second", "foo one 0",
-                    " two 1", " three 2",
-                    "bar one 3", " two 4",
-                    "baz two 5", " three 6",
-                    "qux one 7", " two 8",
-                    " three 9", "Name: sth, dtype: int64"]
-        expected = "\n".join(expected)
-        self.assertEqual(repr(s), expected)
-
-    def test_multilevel_preserve_name(self):
-        index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
-                                                                  'three']],
-                           labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
-                                   [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
-                           names=['first', 'second'])
-        s = Series(np.random.randn(len(index)), index=index, name='sth')
-
-        result = s['foo']
-        result2 = s.ix['foo']
-        self.assertEqual(result.name, s.name)
-        self.assertEqual(result2.name, s.name)
-
-    def test_name_printing(self):
-        # test small series
-        s = Series([0, 1, 2])
-        s.name = "test"
-        self.assertIn("Name: test", repr(s))
-        s.name = None
-        self.assertNotIn("Name:", repr(s))
-        # test big series (diff code path)
-        s = Series(lrange(0, 1000))
-        s.name = "test"
-        self.assertIn("Name: test", repr(s))
-        s.name = None
-        self.assertNotIn("Name:", repr(s))
-
-        s = Series(index=date_range('20010101', '20020101'), name='test')
-        self.assertIn("Name: test", repr(s))
-
-    def test_pickle_preserve_name(self):
-        unpickled = self._pickle_roundtrip_name(self.ts)
-        self.assertEqual(unpickled.name, self.ts.name)
-
-    def _pickle_roundtrip_name(self, obj):
-
-        with ensure_clean() as path:
-            obj.to_pickle(path)
-            unpickled = pd.read_pickle(path)
-            return unpickled
-
-    def test_argsort_preserve_name(self):
-        result = self.ts.argsort()
-        self.assertEqual(result.name, self.ts.name)
-
-    def test_sort_index_name(self):
-        result = self.ts.sort_index(ascending=False)
-        self.assertEqual(result.name, self.ts.name)
-
-    def test_to_sparse_pass_name(self):
-        result = self.ts.to_sparse()
-        self.assertEqual(result.name, self.ts.name)
-
-
-class TestNanops(tm.TestCase):
-
-    _multiprocess_can_split_ = True
-
-    def test_comparisons(self):
-        left = np.random.randn(10)
-        right = np.random.randn(10)
-        left[:3] = np.nan
-
-        result = nanops.nangt(left, right)
-        expected = (left > right).astype('O')
-        expected[:3] = np.nan
-
-        assert_almost_equal(result, expected)
-
-        s = Series(['a', 'b', 'c'])
-        s2 = Series([False, True, False])
-
-        # it works!
-        s == s2
-        s2 == s
-
-    def test_sum_zero(self):
-        arr = np.array([])
-        self.assertEqual(nanops.nansum(arr), 0)
-
-        arr = np.empty((10, 0))
-        self.assertTrue((nanops.nansum(arr, axis=1) == 0).all())
-
-        # GH #844
-        s = Series([], index=[])
-        self.assertEqual(s.sum(), 0)
-
-        df = DataFrame(np.empty((10, 0)))
-        self.assertTrue((df.sum(1) == 0).all())
-
-    def test_nansum_buglet(self):
-        s = Series([1.0, np.nan], index=[0, 1])
-        result = np.nansum(s)
-        assert_almost_equal(result, 1)
-
-    def test_overflow(self):
-        # GH 6915
-        # overflowing on the smaller int dtypes
-        for dtype in ['int32', 'int64']:
-            v = np.arange(5000000, dtype=dtype)
-            s = Series(v)
-
-            # no bottleneck
-            result = s.sum(skipna=False)
-            self.assertEqual(int(result), v.sum(dtype='int64'))
-            result = s.min(skipna=False)
-            self.assertEqual(int(result), 0)
-            result = s.max(skipna=False)
-            self.assertEqual(int(result), v[-1])
-
-            # use bottleneck if available
-            result = s.sum()
-            self.assertEqual(int(result), v.sum(dtype='int64'))
-            result = s.min()
-            self.assertEqual(int(result), 0)
-            result = s.max()
-            self.assertEqual(int(result), v[-1])
-
-        for dtype in ['float32', 'float64']:
-            v = np.arange(5000000, dtype=dtype)
-            s = Series(v)
-
-            # no bottleneck
-            result = s.sum(skipna=False)
-            self.assertEqual(result, v.sum(dtype=dtype))
-            result = s.min(skipna=False)
-            self.assertTrue(np.allclose(float(result), 0.0))
-            result = s.max(skipna=False)
-            self.assertTrue(np.allclose(float(result), v[-1]))
-
-            # use bottleneck if available
-            result = s.sum()
-            self.assertEqual(result, v.sum(dtype=dtype))
-            result = s.min()
-            self.assertTrue(np.allclose(float(result), 0.0))
-            result = s.max()
-            self.assertTrue(np.allclose(float(result), v[-1]))
-
-
-class SafeForSparse(object):
-    pass
-
-
-_ts = tm.makeTimeSeries()
-
-
-class TestSeries(tm.TestCase, CheckNameIntegration):
-
-    _multiprocess_can_split_ = True
-
-    def setUp(self):
-        self.ts = _ts.copy()
-        self.ts.name = 'ts'
-
-        self.series = tm.makeStringSeries()
-        self.series.name = 'series'
-
-        self.objSeries = tm.makeObjectSeries()
-        self.objSeries.name = 'objects'
-
-        self.empty = Series([], index=[])
-
-    def test_scalar_conversion(self):
-
-        # Pass in scalar is disabled
-        scalar = Series(0.5)
-        self.assertNotIsInstance(scalar, float)
-
-        # coercion
-        self.assertEqual(float(Series([1.])), 1.0)
-        self.assertEqual(int(Series([1.])), 1)
-        self.assertEqual(long(Series([1.])), 1)
-
-    def test_astype(self):
-        s = Series(np.random.randn(5), name='foo')
-
-        for dtype in ['float32', 'float64', 'int64', 'int32']:
-            astyped = s.astype(dtype)
-            self.assertEqual(astyped.dtype, dtype)
-            self.assertEqual(astyped.name, s.name)
-
-    def test_TimeSeries_deprecation(self):
-
-        # deprecation TimeSeries, #10890
-        with tm.assert_produces_warning(FutureWarning):
-            pd.TimeSeries(1, index=date_range('20130101', periods=3))
-
-    def test_constructor(self):
-        # Recognize TimeSeries
-        with tm.assert_produces_warning(FutureWarning):
-            self.assertTrue(self.ts.is_time_series)
-        self.assertTrue(self.ts.index.is_all_dates)
-
-        # Pass in Series
-        derived = Series(self.ts)
-        with tm.assert_produces_warning(FutureWarning):
-            self.assertTrue(derived.is_time_series)
-        self.assertTrue(derived.index.is_all_dates)
-
-        self.assertTrue(tm.equalContents(derived.index, self.ts.index))
-        # Ensure new index is not created
-        self.assertEqual(id(self.ts.index), id(derived.index))
-
-        # Mixed type Series
-        mixed = Series(['hello', np.NaN], index=[0, 1])
-        self.assertEqual(mixed.dtype, np.object_)
-        self.assertIs(mixed[1], np.NaN)
-
-        with tm.assert_produces_warning(FutureWarning):
-            self.assertFalse(self.empty.is_time_series)
-        self.assertFalse(self.empty.index.is_all_dates)
-        with tm.assert_produces_warning(FutureWarning):
-            self.assertFalse(Series({}).is_time_series)
-        self.assertFalse(Series({}).index.is_all_dates)
-        self.assertRaises(Exception, Series, np.random.randn(3, 3),
-                          index=np.arange(3))
-
-        mixed.name = 'Series'
-        rs = Series(mixed).name
-        xp = 'Series'
-        self.assertEqual(rs, xp)
-
-        # raise on MultiIndex GH4187
-        m = MultiIndex.from_arrays([[1, 2], [3, 4]])
-        self.assertRaises(NotImplementedError, Series, m)
-
-    def test_constructor_empty(self):
-        empty = Series()
-        empty2 = Series([])
-
-        # the are Index() and RangeIndex() which don't compare type equal
-        # but are just .equals
-        assert_series_equal(empty, empty2, check_index_type=False)
-
-        empty = Series(index=lrange(10))
-        empty2 = Series(np.nan, index=lrange(10))
-        assert_series_equal(empty, empty2)
-
-    def test_constructor_series(self):
-        index1 = ['d', 'b', 'a', 'c']
-        index2 = sorted(index1)
-        s1 = Series([4, 7, -5, 3], index=index1)
-        s2 = Series(s1, index=index2)
-
-        assert_series_equal(s2, s1.sort_index())
-
-    def test_constructor_iterator(self):
-
-        expected = Series(list(range(10)), dtype='int64')
-        result = Series(range(10), dtype='int64')
-        assert_series_equal(result, expected)
-
-    def test_constructor_generator(self):
-        gen = (i for i in range(10))
-
-        result = Series(gen)
-        exp = Series(lrange(10))
-        assert_series_equal(result, exp)
-
-        gen = (i for i in range(10))
-        result = Series(gen, index=lrange(10, 20))
-        exp.index = lrange(10, 20)
-        assert_series_equal(result, exp)
-
-    def test_constructor_map(self):
-        # GH8909
-        m = map(lambda x: x, range(10))
-
-        result = Series(m)
-        exp = Series(lrange(10))
-        assert_series_equal(result, exp)
-
-        m = map(lambda x: x, range(10))
-        result = Series(m, index=lrange(10, 20))
-        exp.index = lrange(10, 20)
-        assert_series_equal(result, exp)
-
-    def test_constructor_categorical(self):
-        cat = pd.Categorical([0, 1, 2, 0, 1, 2], ['a', 'b', 'c'],
-                             fastpath=True)
-        res = Series(cat)
-        self.assertTrue(res.values.equals(cat))
-
-    def test_constructor_maskedarray(self):
-        data = ma.masked_all((3, ), dtype=float)
-        result = Series(data)
-        expected = Series([nan, nan, nan])
-        assert_series_equal(result, expected)
-
-        data[0] = 0.0
-        data[2] = 2.0
-        index = ['a', 'b', 'c']
-        result = Series(data, index=index)
-        expected = Series([0.0, nan, 2.0], index=index)
-        assert_series_equal(result, expected)
-
-        data[1] = 1.0
-        result = Series(data, index=index)
-        expected = Series([0.0, 1.0, 2.0], index=index)
-        assert_series_equal(result, expected)
-
-        data = ma.masked_all((3, ), dtype=int)
-        result = Series(data)
-        expected = Series([nan, nan, nan], dtype=float)
-        assert_series_equal(result, expected)
-
-        data[0] = 0
-        data[2] = 2
-        index = ['a', 'b', 'c']
-        result = Series(data, index=index)
-        expected = Series([0, nan, 2], index=index, dtype=float)
-        assert_series_equal(result, expected)
-
-        data[1] = 1
-        result = Series(data, index=index)
-        expected = Series([0, 1, 2], index=index, dtype=int)
-        assert_series_equal(result, expected)
-
-        data = ma.masked_all((3, ), dtype=bool)
-        result = Series(data)
-        expected = Series([nan, nan, nan], dtype=object)
-        assert_series_equal(result, expected)
-
-        data[0] = True
-        data[2] = False
-        index = ['a', 'b', 'c']
-        result = Series(data, index=index)
-        expected = Series([True, nan, False], index=index, dtype=object)
-        assert_series_equal(result, expected)
-
-        data[1] = True
-        result = Series(data, index=index)
-        expected = Series([True, True, False], index=index, dtype=bool)
-        assert_series_equal(result, expected)
-
-        from pandas import tslib
-        data = ma.masked_all((3, ), dtype='M8[ns]')
-        result = Series(data)
-        expected = Series([tslib.iNaT, tslib.iNaT, tslib.iNaT], dtype='M8[ns]')
-        assert_series_equal(result, expected)
-
-        data[0] = datetime(2001, 1, 1)
-        data[2] = datetime(2001, 1, 3)
-        index = ['a', 'b', 'c']
-        result = Series(data, index=index)
-        expected = Series([datetime(2001, 1, 1), tslib.iNaT,
-                           datetime(2001, 1, 3)], index=index, dtype='M8[ns]')
-        assert_series_equal(result, expected)
-
-        data[1] = datetime(2001, 1, 2)
-        result = Series(data, index=index)
-        expected = Series([datetime(2001, 1, 1), datetime(2001, 1, 2),
-                           datetime(2001, 1, 3)], index=index, dtype='M8[ns]')
-        assert_series_equal(result, expected)
-
-    def test_constructor_default_index(self):
-        s = Series([0, 1, 2])
-        assert_almost_equal(s.index, np.arange(3))
-
-    def test_constructor_corner(self):
-        df = tm.makeTimeDataFrame()
-        objs = [df, df]
-        s = Series(objs, index=[0, 1])
-        tm.assertIsInstance(s, Series)
-
-    def test_constructor_sanitize(self):
-        s = Series(np.array([1., 1., 8.]), dtype='i8')
-        self.assertEqual(s.dtype, np.dtype('i8'))
-
-        s = Series(np.array([1., 1., np.nan]), copy=True, dtype='i8')
-        self.assertEqual(s.dtype, np.dtype('f8'))
-
-    def test_constructor_pass_none(self):
-        s = Series(None, index=lrange(5))
-        self.assertEqual(s.dtype, np.float64)
-
-        s = Series(None, index=lrange(5), dtype=object)
-        self.assertEqual(s.dtype, np.object_)
-
-        # GH 7431
-        # inference on the index
-        s = Series(index=np.array([None]))
-        expected = Series(index=Index([None]))
-        assert_series_equal(s, expected)
-
-    def test_constructor_cast(self):
-        self.assertRaises(ValueError, Series, ['a', 'b', 'c'], dtype=float)
-
-    def test_constructor_dtype_nocast(self):
-        # 1572
-        s = Series([1, 2, 3])
-
-        s2 = Series(s, dtype=np.int64)
-
-        s2[1] = 5
-        self.assertEqual(s[1], 5)
-
-    def test_constructor_datelike_coercion(self):
-
-        # GH 9477
-        # incorrectly infering on dateimelike looking when object dtype is
-        # specified
-        s = Series([Timestamp('20130101'), 'NOV'], dtype=object)
-        self.assertEqual(s.iloc[0], Timestamp('20130101'))
-        self.assertEqual(s.iloc[1], 'NOV')
-        self.assertTrue(s.dtype == object)
-
-        # the dtype was being reset on the slicing and re-inferred to datetime
-        # even thought the blocks are mixed
-        belly = '216 3T19'.split()
-        wing1 = '2T15 4H19'.split()
-        wing2 = '416 4T20'.split()
-        mat = pd.to_datetime('2016-01-22 2019-09-07'.split())
-        df = pd.DataFrame(
-            {'wing1': wing1,
-             'wing2': wing2,
-             'mat': mat}, index=belly)
-
-        result = df.loc['3T19']
-        self.assertTrue(result.dtype == object)
-        result = df.loc['216']
-        self.assertTrue(result.dtype == object)
-
-    def test_constructor_dtype_datetime64(self):
-        import pandas.tslib as tslib
-
-        s = Series(tslib.iNaT, dtype='M8[ns]', index=lrange(5))
-        self.assertTrue(isnull(s).all())
-
-        # in theory this should be all nulls, but since
-        # we are not specifying a dtype is ambiguous
-        s = Series(tslib.iNaT, index=lrange(5))
-        self.assertFalse(isnull(s).all())
-
-        s = Series(nan, dtype='M8[ns]', index=lrange(5))
-        self.assertTrue(isnull(s).all())
-
-        s = Series([datetime(2001, 1, 2, 0, 0), tslib.iNaT], dtype='M8[ns]')
-        self.assertTrue(isnull(s[1]))
-        self.assertEqual(s.dtype, 'M8[ns]')
-
-        s = Series([datetime(2001, 1, 2, 0, 0), nan], dtype='M8[ns]')
-        self.assertTrue(isnull(s[1]))
-        self.assertEqual(s.dtype, 'M8[ns]')
-
-        # GH3416
-        dates = [
-            np.datetime64(datetime(2013, 1, 1)),
-            np.datetime64(datetime(2013, 1, 2)),
-            np.datetime64(datetime(2013, 1, 3)),
-        ]
-
-        s = Series(dates)
-        self.assertEqual(s.dtype, 'M8[ns]')
-
-        s.ix[0] = np.nan
-        self.assertEqual(s.dtype, 'M8[ns]')
-
-        # invalid astypes
-        for t in ['s', 'D', 'us', 'ms']:
-            self.assertRaises(TypeError, s.astype, 'M8[%s]' % t)
-
-        # GH3414 related
-        self.assertRaises(TypeError, lambda x: Series(
-            Series(dates).astype('int') / 1000000, dtype='M8[ms]'))
-        self.assertRaises(TypeError,
-                          lambda x: Series(dates, dtype='datetime64'))
-
-        # invalid dates can be help as object
-        result = Series([datetime(2, 1, 1)])
-        self.assertEqual(result[0], datetime(2, 1, 1, 0, 0))
-
-        result = Series([datetime(3000, 1, 1)])
-        self.assertEqual(result[0], datetime(3000, 1, 1, 0, 0))
-
-        # don't mix types
-        result = Series([Timestamp('20130101'), 1], index=['a', 'b'])
-        self.assertEqual(result['a'], Timestamp('20130101'))
-        self.assertEqual(result['b'], 1)
-
-        # GH6529
-        # coerce datetime64 non-ns properly
-        dates = date_range('01-Jan-2015', '01-Dec-2015', freq='M')
-        values2 = dates.view(np.ndarray).astype('datetime64[ns]')
-        expected = Series(values2, dates)
-
-        for dtype in ['s', 'D', 'ms', 'us', 'ns']:
-            values1 = dates.view(np.ndarray).astype('M8[{0}]'.format(dtype))
-            result = Series(values1, dates)
-            assert_series_equal(result, expected)
-
-        # leave datetime.date alone
-        dates2 = np.array([d.date() for d in dates.to_pydatetime()],
-                          dtype=object)
-        series1 = Series(dates2, dates)
-        self.assert_numpy_array_equal(series1.values, dates2)
-        self.assertEqual(series1.dtype, object)
-
-        # these will correctly infer a datetime
-        s = Series([None, pd.NaT, '2013-08-05 15:30:00.000001'])
-        self.assertEqual(s.dtype, 'datetime64[ns]')
-        s = Series([np.nan, pd.NaT, '2013-08-05 15:30:00.000001'])
-        self.assertEqual(s.dtype, 'datetime64[ns]')
-        s = Series([pd.NaT, None, '2013-08-05 15:30:00.000001'])
-        self.assertEqual(s.dtype, 'datetime64[ns]')
-        s = Series([pd.NaT, np.nan, '2013-08-05 15:30:00.000001'])
-        self.assertEqual(s.dtype, 'datetime64[ns]')
-
-        # tz-aware (UTC and other tz's)
-        # GH 8411
-        dr = date_range('20130101', periods=3)
-        self.assertTrue(Series(dr).iloc[0].tz is None)
-        dr = date_range('20130101', periods=3, tz='UTC')
-        self.assertTrue(str(Series(dr).iloc[0].tz) == 'UTC')
-        dr = date_range('20130101', periods=3, tz='US/Eastern')
-        self.assertTrue(str(Series(dr).iloc[0].tz) == 'US/Eastern')
-
-        # non-convertible
-        s = Series([1479596223000, -1479590, pd.NaT])
-        self.assertTrue(s.dtype == 'object')
-        self.assertTrue(s[2] is pd.NaT)
-        self.assertTrue('NaT' in str(s))
-
-        # if we passed a NaT it remains
-        s = Series([datetime(2010, 1, 1), datetime(2, 1, 1), pd.NaT])
-        self.assertTrue(s.dtype == 'object')
-        self.assertTrue(s[2] is pd.NaT)
-        self.assertTrue('NaT' in str(s))
-
-        # if we passed a nan it remains
-        s = Series([datetime(2010, 1, 1), datetime(2, 1, 1), np.nan])
-        self.assertTrue(s.dtype == 'object')
-        self.assertTrue(s[2] is np.nan)
-        self.assertTrue('NaN' in str(s))
-
-    def test_constructor_with_datetime_tz(self):
-
-        # 8260
-        # support datetime64 with tz
-
-        dr = date_range('20130101', periods=3, tz='US/Eastern')
-        s = Series(dr)
-        self.assertTrue(s.dtype.name == 'datetime64[ns, US/Eastern]')
-        self.assertTrue(s.dtype == 'datetime64[ns, US/Eastern]')
-        self.assertTrue(com.is_datetime64tz_dtype(s.dtype))
-        self.assertTrue('datetime64[ns, US/Eastern]' in str(s))
-
-        # export
-        result = s.values
-        self.assertIsInstance(result, np.ndarray)
-        self.assertTrue(result.dtype == 'datetime64[ns]')
-        self.assertTrue(dr.equals(pd.DatetimeIndex(result).tz_localize(
-            'UTC').tz_convert(tz=s.dt.tz)))
-
-        # indexing
-        result = s.iloc[0]
-        self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500',
-                                           tz='US/Eastern', offset='D'))
-        result = s[0]
-        self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500',
-                                           tz='US/Eastern', offset='D'))
-
-        result = s[Series([True, True, False], index=s.index)]
-        assert_series_equal(result, s[0:2])
-
-        result = s.iloc[0:1]
-        assert_series_equal(result, Series(dr[0:1]))
-
-        # concat
-        result = pd.concat([s.iloc[0:1], s.iloc[1:]])
-        assert_series_equal(result, s)
-
-        # astype
-        result = s.astype(object)
-        expected = Series(DatetimeIndex(s._values).asobject)
-        assert_series_equal(result, expected)
-
-        result = Series(s.values).dt.tz_localize('UTC').dt.tz_convert(s.dt.tz)
-        assert_series_equal(result, s)
-
-        # astype - datetime64[ns, tz]
-        result = Series(s.values).astype('datetime64[ns, US/Eastern]')
-        assert_series_equal(result, s)
-
-        result = Series(s.values).astype(s.dtype)
-        assert_series_equal(result, s)
-
-        result = s.astype('datetime64[ns, CET]')
-        expected = Series(date_range('20130101 06:00:00', periods=3, tz='CET'))
-        assert_series_equal(result, expected)
-
-        # short str
-        self.assertTrue('datetime64[ns, US/Eastern]' in str(s))
-
-        # formatting with NaT
-        result = s.shift()
-        self.assertTrue('datetime64[ns, US/Eastern]' in str(result))
-        self.assertTrue('NaT' in str(result))
-
-        # long str
-        t = Series(date_range('20130101', periods=1000, tz='US/Eastern'))
-        self.assertTrue('datetime64[ns, US/Eastern]' in str(t))
-
-        result = pd.DatetimeIndex(s, freq='infer')
-        tm.assert_index_equal(result, dr)
-
-        # inference
-        s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'),
-                    pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Pacific')])
-        self.assertTrue(s.dtype == 'datetime64[ns, US/Pacific]')
-        self.assertTrue(lib.infer_dtype(s) == 'datetime64')
-
-        s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'),
-                    pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Eastern')])
-        self.assertTrue(s.dtype == 'object')
-        self.assertTrue(lib.infer_dtype(s) == 'datetime')
-
-    def test_constructor_periodindex(self):
-        # GH7932
-        # converting a PeriodIndex when put in a Series
-
-        pi = period_range('20130101', periods=5, freq='D')
-        s = Series(pi)
-        expected = Series(pi.asobject)
-        assert_series_equal(s, expected)
-
-    def test_constructor_dict(self):
-        d = {'a': 0., 'b': 1., 'c': 2.}
-        result = Series(d, index=['b', 'c', 'd', 'a'])
-        expected = Series([1, 2, nan, 0], index=['b', 'c', 'd', 'a'])
-        assert_series_equal(result, expected)
-
-        pidx = tm.makePeriodIndex(100)
-        d = {pidx[0]: 0, pidx[1]: 1}
-        result = Series(d, index=pidx)
-        expected = Series(np.nan, pidx)
-        expected.ix[0] = 0
-        expected.ix[1] = 1
-        assert_series_equal(result, expected)
-
-    def test_constructor_dict_multiindex(self):
-        check = lambda result, expected: tm.assert_series_equal(
-            result, expected, check_dtype=True, check_series_type=True)
-        d = {('a', 'a'): 0., ('b', 'a'): 1., ('b', 'c'): 2.}
-        _d = sorted(d.items())
-        ser = Series(d)
-        expected = Series([x[1] for x in _d],
-                          index=MultiIndex.from_tuples([x[0] for x in _d]))
-        check(ser, expected)
-
-        d['z'] = 111.
-        _d.insert(0, ('z', d['z']))
-        ser = Series(d)
-        expected = Series([x[1] for x in _d], index=Index(
-            [x[0] for x in _d], tupleize_cols=False))
-        ser = ser.reindex(index=expected.index)
-        check(ser, expected)
-
-    def test_constructor_subclass_dict(self):
-        data = tm.TestSubDict((x, 10.0 * x) for x in range(10))
-        series = Series(data)
-        refseries = Series(dict(compat.iteritems(data)))
-        assert_series_equal(refseries, series)
-
-    def test_constructor_dict_datetime64_index(self):
-        # GH 9456
-
-        dates_as_str = ['1984-02-19', '1988-11-06', '1989-12-03', '1990-03-15']
-        values = [42544017.198965244, 1234565, 40512335.181958228, -1]
-
-        def create_data(constructor):
-            return dict(zip((constructor(x) for x in dates_as_str), values))
-
-        data_datetime64 = create_data(np.datetime64)
-        data_datetime = create_data(lambda x: datetime.strptime(x, '%Y-%m-%d'))
-        data_Timestamp = create_data(Timestamp)
-
-        expected = Series(values, (Timestamp(x) for x in dates_as_str))
-
-        result_datetime64 = Series(data_datetime64)
-        result_datetime = Series(data_datetime)
-        result_Timestamp = Series(data_Timestamp)
-
-        assert_series_equal(result_datetime64, expected)
-        assert_series_equal(result_datetime, expected)
-        assert_series_equal(result_Timestamp, expected)
-
-    def test_orderedDict_ctor(self):
-        # GH3283
-        import pandas
-        import random
-        data = OrderedDict([('col%s' % i, random.random()) for i in range(12)])
-        s = pandas.Series(data)
-        self.assertTrue(all(s.values == list(data.values())))
-
-    def test_orderedDict_subclass_ctor(self):
-        # GH3283
-        import pandas
-        import random
-
-        class A(OrderedDict):
-            pass
-
-        data = A([('col%s' % i, random.random()) for i in range(12)])
-        s = pandas.Series(data)
-        self.assertTrue(all(s.values == list(data.values())))
-
-    def test_constructor_list_of_tuples(self):
-        data = [(1, 1), (2, 2), (2, 3)]
-        s = Series(data)
-        self.assertEqual(list(s), data)
-
-    def test_constructor_tuple_of_tuples(self):
-        data = ((1, 1), (2, 2), (2, 3))
-        s = Series(data)
-        self.assertEqual(tuple(s), data)
-
-    def test_constructor_set(self):
-        values = set([1, 2, 3, 4, 5])
-        self.assertRaises(TypeError, Series, values)
-        values = frozenset(values)
-        self.assertRaises(TypeError, Series, values)
-
-    def test_fromDict(self):
-        data = {'a': 0, 'b': 1, 'c': 2, 'd': 3}
-
-        series = Series(data)
-        self.assertTrue(tm.is_sorted(series.index))
-
-        data = {'a': 0, 'b': '1', 'c': '2', 'd': datetime.now()}
-        series = Series(data)
-        self.assertEqual(series.dtype, np.object_)
-
-        data = {'a': 0, 'b': '1', 'c': '2', 'd': '3'}
-        series = Series(data)
-        self.assertEqual(series.dtype, np.object_)
-
-        data = {'a': '0', 'b': '1'}
-        series = Series(data, dtype=float)
-        self.assertEqual(series.dtype, np.float64)
-
-    def test_setindex(self):
-        # wrong type
-        series = self.series.copy()
-        self.assertRaises(TypeError, setattr, series, 'index', None)
-
-        # wrong length
-        series = self.series.copy()
-        self.assertRaises(Exception, setattr, series, 'index',
-                          np.arange(len(series) - 1))
-
-        # works
-        series = self.series.copy()
-        series.index = np.arange(len(series))
-        tm.assertIsInstance(series.index, Index)
-
-    def test_array_finalize(self):
-        pass
-
-    def test_pop(self):
-        # GH 6600
-        df = DataFrame({'A': 0, 'B': np.arange(5, dtype='int64'), 'C': 0, })
-        k = df.iloc[4]
-
-        result = k.pop('B')
-        self.assertEqual(result, 4)
-
-        expected = Series([0, 0], index=['A', 'C'], name=4)
-        assert_series_equal(k, expected)
-
-    def test_not_hashable(self):
-        s_empty = Series()
-        s = Series([1])
-        self.assertRaises(TypeError, hash, s_empty)
-        self.assertRaises(TypeError, hash, s)
-
-    def test_fromValue(self):
-
-        nans = Series(np.NaN, index=self.ts.index)
-        self.assertEqual(nans.dtype, np.float_)
-        self.assertEqual(len(nans), len(self.ts))
-
-        strings = Series('foo', index=self.ts.index)
-        self.assertEqual(strings.dtype, np.object_)
-        self.assertEqual(len(strings), len(self.ts))
-
-        d = datetime.now()
-        dates = Series(d, index=self.ts.index)
-        self.assertEqual(dates.dtype, 'M8[ns]')
-        self.assertEqual(len(dates), len(self.ts))
-
-    def test_contains(self):
-        tm.assert_contains_all(self.ts.index, self.ts)
-
-    def test_pickle(self):
-        unp_series = self._pickle_roundtrip(self.series)
-        unp_ts = self._pickle_roundtrip(self.ts)
-        assert_series_equal(unp_series, self.series)
-        assert_series_equal(unp_ts, self.ts)
-
-    def _pickle_roundtrip(self, obj):
-
-        with ensure_clean() as path:
-            obj.to_pickle(path)
-            unpickled = pd.read_pickle(path)
-            return unpickled
-
-    def test_getitem_get(self):
-        idx1 = self.series.index[5]
-        idx2 = self.objSeries.index[5]
-
-        self.assertEqual(self.series[idx1], self.series.get(idx1))
-        self.assertEqual(self.objSeries[idx2], self.objSeries.get(idx2))
-
-        self.assertEqual(self.series[idx1], self.series[5])
-        self.assertEqual(self.objSeries[idx2], self.objSeries[5])
-
-        self.assertEqual(
-            self.series.get(-1), self.series.get(self.series.index[-1]))
-        self.assertEqual(self.series[5], self.series.get(self.series.index[5]))
-
-        # missing
-        d = self.ts.index[0] - datetools.bday
-        self.assertRaises(KeyError, self.ts.__getitem__, d)
-
-        # None
-        # GH 5652
-        for s in [Series(), Series(index=list('abc'))]:
-            result = s.get(None)
-            self.assertIsNone(result)
-
-    def test_iget(self):
-
-        s = Series(np.random.randn(10), index=lrange(0, 20, 2))
-
-        # 10711, deprecated
-        with tm.assert_produces_warning(FutureWarning):
-            s.iget(1)
-
-        # 10711, deprecated
-        with tm.assert_produces_warning(FutureWarning):
-            s.irow(1)
-
-        # 10711, deprecated
-        with tm.assert_produces_warning(FutureWarning):
-            s.iget_value(1)
-
-        for i in range(len(s)):
-            result = s.iloc[i]
-            exp = s[s.index[i]]
-            assert_almost_equal(result, exp)
-
-        # pass a slice
-        result = s.iloc[slice(1, 3)]
-        expected = s.ix[2:4]
-        assert_series_equal(result, expected)
-
-        # test slice is a view
-        result[:] = 0
-        self.assertTrue((s[1:3] == 0).all())
-
-        # list of integers
-        result = s.iloc[[0, 2, 3, 4, 5]]
-        expected = s.reindex(s.index[[0, 2, 3, 4, 5]])
-        assert_series_equal(result, expected)
-
-    def test_iget_nonunique(self):
-        s = Series([0, 1, 2], index=[0, 1, 0])
-        self.assertEqual(s.iloc[2], 2)
-
-    def test_getitem_regression(self):
-        s = Series(lrange(5), index=lrange(5))
-        result = s[lrange(5)]
-        assert_series_equal(result, s)
-
-    def test_getitem_setitem_slice_bug(self):
-        s = Series(lrange(10), lrange(10))
-        result = s[-12:]
-        assert_series_equal(result, s)
-
-        result = s[-7:]
-        assert_series_equal(result, s[3:])
-
-        result = s[:-12]
-        assert_series_equal(result, s[:0])
-
-        s = Series(lrange(10), lrange(10))
-        s[-12:] = 0
-        self.assertTrue((s == 0).all())
-
-        s[:-12] = 5
-        self.assertTrue((s == 0).all())
-
-    def test_getitem_int64(self):
-        idx = np.int64(5)
-        self.assertEqual(self.ts[idx], self.ts[5])
-
-    def test_getitem_fancy(self):
-        slice1 = self.series[[1, 2, 3]]
-        slice2 = self.objSeries[[1, 2, 3]]
-        self.assertEqual(self.series.index[2], slice1.index[1])
-        self.assertEqual(self.objSeries.index[2], slice2.index[1])
-        self.assertEqual(self.series[2], slice1[1])
-        self.assertEqual(self.objSeries[2], slice2[1])
-
-    def test_getitem_boolean(self):
-        s = self.series
-        mask = s > s.median()
-
-        # passing list is OK
-        result = s[list(mask)]
-        expected = s[mask]
-        assert_series_equal(result, expected)
-        self.assert_numpy_array_equal(result.index, s.index[mask])
-
-    def test_getitem_boolean_empty(self):
-        s = Series([], dtype=np.int64)
-        s.index.name = 'index_name'
-        s = s[s.isnull()]
-        self.assertEqual(s.index.name, 'index_name')
-        self.assertEqual(s.dtype, np.int64)
-
-        # GH5877
-        # indexing with empty series
-        s = Series(['A', 'B'])
-        expected = Series(np.nan, index=['C'], dtype=object)
-        result = s[Series(['C'], dtype=object)]
-        assert_series_equal(result, expected)
-
-        s = Series(['A', 'B'])
-        expected = Series(dtype=object, index=Index([], dtype='int64'))
-        result = s[Series([], dtype=object)]
-        assert_series_equal(result, expected)
-
-        # invalid because of the boolean indexer
-        # that's empty or not-aligned
-        def f():
-            s[Series([], dtype=bool)]
-
-        self.assertRaises(IndexingError, f)
-
-        def f():
-            s[Series([True], dtype=bool)]
-
-        self.assertRaises(IndexingError, f)
-
-    def test_getitem_generator(self):
-        gen = (x > 0 for x in self.series)
-        result = self.series[gen]
-        result2 = self.series[iter(self.series > 0)]
-        expected = self.series[self.series > 0]
-        assert_series_equal(result, expected)
-        assert_series_equal(result2, expected)
-
-    def test_getitem_boolean_object(self):
-        # using column from DataFrame
-
-        s = self.series
-        mask = s > s.median()
-        omask = mask.astype(object)
-
-        # getitem
-        result = s[omask]
-        expected = s[mask]
-        assert_series_equal(result, expected)
-
-        # setitem
-        s2 = s.copy()
-        cop = s.copy()
-        cop[omask] = 5
-        s2[mask] = 5
-        assert_series_equal(cop, s2)
-
-        # nans raise exception
-        omask[5:10] = np.nan
-        self.assertRaises(Exception, s.__getitem__, omask)
-        self.assertRaises(Exception, s.__setitem__, omask, 5)
-
-    def test_getitem_setitem_boolean_corner(self):
-        ts = self.ts
-        mask_shifted = ts.shift(1, freq=datetools.bday) > ts.median()
-
-        # these used to raise...??
-
-        self.assertRaises(Exception, ts.__getitem__, mask_shifted)
-        self.assertRaises(Exception, ts.__setitem__, mask_shifted, 1)
-        # ts[mask_shifted]
-        # ts[mask_shifted] = 1
-
-        self.assertRaises(Exception, ts.ix.__getitem__, mask_shifted)
-        self.assertRaises(Exception, ts.ix.__setitem__, mask_shifted, 1)
-        # ts.ix[mask_shifted]
-        # ts.ix[mask_shifted] = 2
-
-    def test_getitem_setitem_slice_integers(self):
-        s = Series(np.random.randn(8), index=[2, 4, 6, 8, 10, 12, 14, 16])
-
-        result = s[:4]
-        expected = s.reindex([2, 4, 6, 8])
-        assert_series_equal(result, expected)
-
-        s[:4] = 0
-        self.assertTrue((s[:4] == 0).all())
-        self.assertTrue(not (s[4:] == 0).any())
-
-    def test_getitem_out_of_bounds(self):
-        # don't segfault, GH #495
-        self.assertRaises(IndexError, self.ts.__getitem__, len(self.ts))
-
-        # GH #917
-        s = Series([])
-        self.assertRaises(IndexError, s.__getitem__, -1)
-
-    def test_getitem_setitem_integers(self):
-        # caused bug without test
-        s = Series([1, 2, 3], ['a', 'b', 'c'])
-
-        self.assertEqual(s.ix[0], s['a'])
-        s.ix[0] = 5
-        self.assertAlmostEqual(s['a'], 5)
-
-    def test_getitem_box_float64(self):
-        value = self.ts[5]
-        tm.assertIsInstance(value, np.float64)
-
-    def test_getitem_ambiguous_keyerror(self):
-        s = Series(lrange(10), index=lrange(0, 20, 2))
-        self.assertRaises(KeyError, s.__getitem__, 1)
-        self.assertRaises(KeyError, s.ix.__getitem__, 1)
-
-    def test_getitem_unordered_dup(self):
-        obj = Series(lrange(5), index=['c', 'a', 'a', 'b', 'b'])
-        self.assertTrue(np.isscalar(obj['c']))
-        self.assertEqual(obj['c'], 0)
-
-    def test_getitem_dups_with_missing(self):
-
-        # breaks reindex, so need to use .ix internally
-        # GH 4246
-        s = Series([1, 2, 3, 4], ['foo', 'bar', 'foo', 'bah'])
-        expected = s.ix[['foo', 'bar', 'bah', 'bam']]
-        result = s[['foo', 'bar', 'bah', 'bam']]
-        assert_series_equal(result, expected)
-
-    def test_getitem_dups(self):
-        s = Series(range(5), index=['A', 'A', 'B', 'C', 'C'], dtype=np.int64)
-        expected = Series([3, 4], index=['C', 'C'], dtype=np.int64)
-        result = s['C']
-        assert_series_equal(result, expected)
-
-    def test_getitem_dataframe(self):
-        rng = list(range(10))
-        s = pd.Series(10, index=rng)
-        df = pd.DataFrame(rng, index=rng)
-        self.assertRaises(TypeError, s.__getitem__, df > 5)
-
-    def test_setitem_ambiguous_keyerror(self):
-        s = Series(lrange(10), index=lrange(0, 20, 2))
-
-        # equivalent of an append
-        s2 = s.copy()
-        s2[1] = 5
-        expected = s.append(Series([5], index=[1]))
-        assert_series_equal(s2, expected)
-
-        s2 = s.copy()
-        s2.ix[1] = 5
-        expected = s.append(Series([5], index=[1]))
-        assert_series_equal(s2, expected)
-
-    def test_setitem_float_labels(self):
-        # note labels are floats
-        s = Series(['a', 'b', 'c'], index=[0, 0.5, 1])
-        tmp = s.copy()
-
-        s.ix[1] = 'zoo'
-        tmp.iloc[2] = 'zoo'
-
-        assert_series_equal(s, tmp)
-
-    def test_slice(self):
-        numSlice = self.series[10:20]
-        numSliceEnd = self.series[-10:]
-        objSlice = self.objSeries[10:20]
-
-        self.assertNotIn(self.series.index[9], numSlice.index)
-        self.assertNotIn(self.objSeries.index[9], objSlice.index)
-
-        self.assertEqual(len(numSlice), len(numSlice.index))
-        self.assertEqual(self.series[numSlice.index[0]],
-                         numSlice[numSlice.index[0]])
-
-        self.assertEqual(numSlice.index[1], self.series.index[11])
-
-        self.assertTrue(tm.equalContents(numSliceEnd, np.array(self.series)[
-            -10:]))
-
-        # test return view
-        sl = self.series[10:20]
-        sl[:] = 0
-        self.assertTrue((self.series[10:20] == 0).all())
-
-    def test_slice_can_reorder_not_uniquely_indexed(self):
-        s = Series(1, index=['a', 'a', 'b', 'b', 'c'])
-        s[::-1]  # it works!
-
-    def test_slice_float_get_set(self):
-
-        self.assertRaises(TypeError, lambda: self.ts[4.0:10.0])
-
-        def f():
-            self.ts[4.0:10.0] = 0
-
-        self.assertRaises(TypeError, f)
-
-        self.assertRaises(TypeError, self.ts.__getitem__, slice(4.5, 10.0))
-        self.assertRaises(TypeError, self.ts.__setitem__, slice(4.5, 10.0), 0)
-
-    def test_slice_floats2(self):
-        s = Series(np.random.rand(10), index=np.arange(10, 20, dtype=float))
-
-        self.assertEqual(len(s.ix[12.0:]), 8)
-        self.assertEqual(len(s.ix[12.5:]), 7)
-
-        i = np.arange(10, 20, dtype=float)
-        i[2] = 12.2
-        s.index = i
-        self.assertEqual(len(s.ix[12.0:]), 8)
-        self.assertEqual(len(s.ix[12.5:]), 7)
-
-    def test_slice_float64(self):
-
-        values = np.arange(10., 50., 2)
-        index = Index(values)
-
-        start, end = values[[5, 15]]
-
-        s = Series(np.random.randn(20), index=index)
-
-        result = s[start:end]
-        expected = s.iloc[5:16]
-        assert_series_equal(result, expected)
-
-        result = s.loc[start:end]
-        assert_series_equal(result, expected)
-
-        df = DataFrame(np.random.randn(20, 3), index=index)
-
-        result = df[start:end]
-        expected = df.iloc[5:16]
-        tm.assert_frame_equal(result, expected)
-
-        result = df.loc[start:end]
-        tm.assert_frame_equal(result, expected)
-
-    def test_setitem(self):
-        self.ts[self.ts.index[5]] = np.NaN
-        self.ts[[1, 2, 17]] = np.NaN
-        self.ts[6] = np.NaN
-        self.assertTrue(np.isnan(self.ts[6]))
-        self.assertTrue(np.isnan(self.ts[2]))
-        self.ts[np.isnan(self.ts)] = 5
-        self.assertFalse(np.isnan(self.ts[2]))
-
-        # caught this bug when writing tests
-        series = Series(tm.makeIntIndex(20).astype(float),
-                        index=tm.makeIntIndex(20))
-
-        series[::2] = 0
-        self.assertTrue((series[::2] == 0).all())
-
-        # set item that's not contained
-        s = self.series.copy()
-        s['foobar'] = 1
-
-        app = Series([1], index=['foobar'], name='series')
-        expected = self.series.append(app)
-        assert_series_equal(s, expected)
-
-        # Test for issue #10193
-        key = pd.Timestamp('2012-01-01')
-        series = pd.Series()
-        series[key] = 47
-        expected = pd.Series(47, [key])
-        assert_series_equal(series, expected)
-
-        series = pd.Series([], pd.DatetimeIndex([], freq='D'))
-        series[key] = 47
-        expected = pd.Series(47, pd.DatetimeIndex([key], freq='D'))
-        assert_series_equal(series, expected)
-
-    def test_setitem_dtypes(self):
-
-        # change dtypes
-        # GH 4463
-        expected = Series([np.nan, 2, 3])
-
-        s = Series([1, 2, 3])
-        s.iloc[0] = np.nan
-        assert_series_equal(s, expected)
-
-        s = Series([1, 2, 3])
-        s.loc[0] = np.nan
-        assert_series_equal(s, expected)
-
-        s = Series([1, 2, 3])
-        s[0] = np.nan
-        assert_series_equal(s, expected)
-
-        s = Series([False])
-        s.loc[0] = np.nan
-        assert_series_equal(s, Series([np.nan]))
-
-        s = Series([False, True])
-        s.loc[0] = np.nan
-        assert_series_equal(s, Series([np.nan, 1.0]))
-
-    def test_set_value(self):
-        idx = self.ts.index[10]
-        res = self.ts.set_value(idx, 0)
-        self.assertIs(res, self.ts)
-        self.assertEqual(self.ts[idx], 0)
-
-        # equiv
-        s = self.series.copy()
-        res = s.set_value('foobar', 0)
-        self.assertIs(res, s)
-        self.assertEqual(res.index[-1], 'foobar')
-        self.assertEqual(res['foobar'], 0)
-
-        s = self.series.copy()
-        s.loc['foobar'] = 0
-        self.assertEqual(s.index[-1], 'foobar')
-        self.assertEqual(s['foobar'], 0)
-
-    def test_setslice(self):
-        sl = self.ts[5:20]
-        self.assertEqual(len(sl), len(sl.index))
-        self.assertTrue(sl.index.is_unique)
-
-    def test_basic_getitem_setitem_corner(self):
-        # invalid tuples, e.g. self.ts[:, None] vs. self.ts[:, 2]
-        with tm.assertRaisesRegexp(ValueError, 'tuple-index'):
-            self.ts[:, 2]
-        with tm.assertRaisesRegexp(ValueError, 'tuple-index'):
-            self.ts[:, 2] = 2
-
-        # weird lists. [slice(0, 5)] will work but not two slices
-        result = self.ts[[slice(None, 5)]]
-        expected = self.ts[:5]
-        assert_series_equal(result, expected)
-
-        # OK
-        self.assertRaises(Exception, self.ts.__getitem__,
-                          [5, slice(None, None)])
-        self.assertRaises(Exception, self.ts.__setitem__,
-                          [5, slice(None, None)], 2)
-
-    def test_reshape_non_2d(self):
-        # GH 4554
-        x = Series(np.random.random(201), name='x')
-        self.assertTrue(x.reshape(x.shape, ) is x)
-
-        # GH 2719
-        a = Series([1, 2, 3, 4])
-        result = a.reshape(2, 2)
-        expected = a.values.reshape(2, 2)
-        tm.assert_numpy_array_equal(result, expected)
-        self.assertTrue(type(result) is type(expected))
-
-    def test_reshape_2d_return_array(self):
-        x = Series(np.random.random(201), name='x')
-        result = x.reshape((-1, 1))
-        self.assertNotIsInstance(result, Series)
-
-        result2 = np.reshape(x, (-1, 1))
-        self.assertNotIsInstance(result2, Series)
-
-        result = x[:, None]
-        expected = x.reshape((-1, 1))
-        assert_almost_equal(result, expected)
-
-    def test_basic_getitem_with_labels(self):
-        indices = self.ts.index[[5, 10, 15]]
-
-        result = self.ts[indices]
-        expected = self.ts.reindex(indices)
-        assert_series_equal(result, expected)
-
-        result = self.ts[indices[0]:indices[2]]
-        expected = self.ts.ix[indices[0]:indices[2]]
-        assert_series_equal(result, expected)
-
-        # integer indexes, be careful
-        s = Series(np.random.randn(10), index=lrange(0, 20, 2))
-        inds = [0, 2, 5, 7, 8]
-        arr_inds = np.array([0, 2, 5, 7, 8])
-        result = s[inds]
-        expected = s.reindex(inds)
-        assert_series_equal(result, expected)
-
-        result = s[arr_inds]
-        expected = s.reindex(arr_inds)
-        assert_series_equal(result, expected)
-
-    def test_basic_setitem_with_labels(self):
-        indices = self.ts.index[[5, 10, 15]]
-
-        cp = self.ts.copy()
-        exp = self.ts.copy()
-        cp[indices] = 0
-        exp.ix[indices] = 0
-        assert_series_equal(cp, exp)
-
-        cp = self.ts.copy()
-        exp = self.ts.copy()
-        cp[indices[0]:indices[2]] = 0
-        exp.ix[indices[0]:indices[2]] = 0
-        assert_series_equal(cp, exp)
-
-        # integer indexes, be careful
-        s = Series(np.random.randn(10), index=lrange(0, 20, 2))
-        inds = [0, 4, 6]
-        arr_inds = np.array([0, 4, 6])
-
-        cp = s.copy()
-        exp = s.copy()
-        s[inds] = 0
-        s.ix[inds] = 0
-        assert_series_equal(cp, exp)
-
-        cp = s.copy()
-        exp = s.copy()
-        s[arr_inds] = 0
-        s.ix[arr_inds] = 0
-        assert_series_equal(cp, exp)
-
-        inds_notfound = [0, 4, 5, 6]
-        arr_inds_notfound = np.array([0, 4, 5, 6])
-        self.assertRaises(Exception, s.__setitem__, inds_notfound, 0)
-        self.assertRaises(Exception, s.__setitem__, arr_inds_notfound, 0)
-
-    def test_ix_getitem(self):
-        inds = self.series.index[[3, 4, 7]]
-        assert_series_equal(self.series.ix[inds], self.series.reindex(inds))
-        assert_series_equal(self.series.ix[5::2], self.series[5::2])
-
-        # slice with indices
-        d1, d2 = self.ts.index[[5, 15]]
-        result = self.ts.ix[d1:d2]
-        expected = self.ts.truncate(d1, d2)
-        assert_series_equal(result, expected)
-
-        # boolean
-        mask = self.series > self.series.median()
-        assert_series_equal(self.series.ix[mask], self.series[mask])
-
-        # ask for index value
-        self.assertEqual(self.ts.ix[d1], self.ts[d1])
-        self.assertEqual(self.ts.ix[d2], self.ts[d2])
-
-    def test_ix_getitem_not_monotonic(self):
-        d1, d2 = self.ts.index[[5, 15]]
-
-        ts2 = self.ts[::2][[1, 2, 0]]
-
-        self.assertRaises(KeyError, ts2.ix.__getitem__, slice(d1, d2))
-        self.assertRaises(KeyError, ts2.ix.__setitem__, slice(d1, d2), 0)
-
-    def test_ix_getitem_setitem_integer_slice_keyerrors(self):
-        s = Series(np.random.randn(10), index=lrange(0, 20, 2))
-
-        # this is OK
-        cp = s.copy()
-        cp.ix[4:10] = 0
-        self.assertTrue((cp.ix[4:10] == 0).all())
-
-        # so is this
-        cp = s.copy()
-        cp.ix[3:11] = 0
-        self.assertTrue((cp.ix[3:11] == 0).values.all())
-
-        result = s.ix[4:10]
-        result2 = s.ix[3:11]
-        expected = s.reindex([4, 6, 8, 10])
-
-        assert_series_equal(result, expected)
-        assert_series_equal(result2, expected)
-
-        # non-monotonic, raise KeyError
-        s2 = s.iloc[lrange(5) + lrange(5, 10)[::-1]]
-        self.assertRaises(KeyError, s2.ix.__getitem__, slice(3, 11))
-        self.assertRaises(KeyError, s2.ix.__setitem__, slice(3, 11), 0)
-
-    def test_ix_getitem_iterator(self):
-        idx = iter(self.series.index[:10])
-        result = self.series.ix[idx]
-        assert_series_equal(result, self.series[:10])
-
-    def test_where(self):
-        s = Series(np.random.randn(5))
-        cond = s > 0
-
-        rs = s.where(cond).dropna()
-        rs2 = s[cond]
-        assert_series_equal(rs, rs2)
-
-        rs = s.where(cond, -s)
-        assert_series_equal(rs, s.abs())
-
-        rs = s.where(cond)
-        assert (s.shape == rs.shape)
-        assert (rs is not s)
-
-        # test alignment
-        cond = Series([True, False, False, True, False], index=s.index)
-        s2 = -(s.abs())
-
-        expected = s2[cond].reindex(s2.index[:3]).reindex(s2.index)
-        rs = s2.where(cond[:3])
-        assert_series_equal(rs, expected)
-
-        expected = s2.abs()
-        expected.ix[0] = s2[0]
-        rs = s2.where(cond[:3], -s2)
-        assert_series_equal(rs, expected)
-
-        self.assertRaises(ValueError, s.where, 1)
-        self.assertRaises(ValueError, s.where, cond[:3].values, -s)
-
-        # GH 2745
-        s = Series([1, 2])
-        s[[True, False]] = [0, 1]
-        expected = Series([0, 2])
-        assert_series_equal(s, expected)
-
-        # failures
-        self.assertRaises(ValueError, s.__setitem__, tuple([[[True, False]]]),
-                          [0, 2, 3])
-        self.assertRaises(ValueError, s.__setitem__, tuple([[[True, False]]]),
-                          [])
-
-        # unsafe dtype changes
-        for dtype in [np.int8, np.int16, np.int32, np.int64, np.float16,
-                      np.float32, np.float64]:
-            s = Series(np.arange(10), dtype=dtype)
-            mask = s < 5
-            s[mask] = lrange(2, 7)
-            expected = Series(lrange(2, 7) + lrange(5, 10), dtype=dtype)
-            assert_series_equal(s, expected)
-            self.assertEqual(s.dtype, expected.dtype)
-
-        # these are allowed operations, but are upcasted
-        for dtype in [np.int64, np.float64]:
-            s = Series(np.arange(10), dtype=dtype)
-            mask = s < 5
-            values = [2.5, 3.5, 4.5, 5.5, 6.5]
-            s[mask] = values
-            expected = Series(values + lrange(5, 10), dtype='float64')
-            assert_series_equal(s, expected)
-            self.assertEqual(s.dtype, expected.dtype)
-
-        # GH 9731
-        s = Series(np.arange(10), dtype='int64')
-        mask = s > 5
-        values = [2.5, 3.5, 4.5, 5.5]
-        s[mask] = values
-        expected = Series(lrange(6) + values, dtype='float64')
-        assert_series_equal(s, expected)
-
-        # can't do these as we are forced to change the itemsize of the input
-        # to something we cannot
-        for dtype in [np.int8, np.int16, np.int32, np.float16, np.float32]:
-            s = Series(np.arange(10), dtype=dtype)
-            mask = s < 5
-            values = [2.5, 3.5, 4.5, 5.5, 6.5]
-            self.assertRaises(Exception, s.__setitem__, tuple(mask), values)
-
-        # GH3235
-        s = Series(np.arange(10), dtype='int64')
-        mask = s < 5
-        s[mask] = lrange(2, 7)
-        expected = Series(lrange(2, 7) + lrange(5, 10), dtype='int64')
-        assert_series_equal(s, expected)
-        self.assertEqual(s.dtype, expected.dtype)
-
-        s = Series(np.arange(10), dtype='int64')
-        mask = s > 5
-        s[mask] = [0] * 4
-        expected = Series([0, 1, 2, 3, 4, 5] + [0] * 4, dtype='int64')
-        assert_series_equal(s, expected)
-
-        s = Series(np.arange(10))
-        mask = s > 5
-
-        def f():
-            s[mask] = [5, 4, 3, 2, 1]
-
-        self.assertRaises(ValueError, f)
-
-        def f():
-            s[mask] = [0] * 5
-
-        self.assertRaises(ValueError, f)
-
-        # dtype changes
-        s = Series([1, 2, 3, 4])
-        result = s.where(s > 2, np.nan)
-        expected = Series([np.nan, np.nan, 3, 4])
-        assert_series_equal(result, expected)
-
-        # GH 4667
-        # setting with None changes dtype
-        s = Series(range(10)).astype(float)
-        s[8] = None
-        result = s[8]
-        self.assertTrue(isnull(result))
-
-        s = Series(range(10)).astype(float)
-        s[s > 8] = None
-        result = s[isnull(s)]
-        expected = Series(np.nan, index=[9])
-        assert_series_equal(result, expected)
-
-    def test_where_setitem_invalid(self):
-
-        # GH 2702
-        # make sure correct exceptions are raised on invalid list assignment
-
-        # slice
-        s = Series(list('abc'))
-
-        def f():
-            s[0:3] = list(range(27))
-
-        self.assertRaises(ValueError, f)
-
-        s[0:3] = list(range(3))
-        expected = Series([0, 1, 2])
-        assert_series_equal(s.astype(np.int64), expected, )
-
-        # slice with step
-        s = Series(list('abcdef'))
-
-        def f():
-            s[0:4:2] = list(range(27))
-
-        self.assertRaises(ValueError, f)
-
-        s = Series(list('abcdef'))
-        s[0:4:2] = list(range(2))
-        expected = Series([0, 'b', 1, 'd', 'e', 'f'])
-        assert_series_equal(s, expected)
-
-        # neg slices
-        s = Series(list('abcdef'))
-
-        def f():
-            s[:-1] = list(range(27))
-
-        self.assertRaises(ValueError, f)
-
-        s[-3:-1] = list(range(2))
-        expected = Series(['a', 'b', 'c', 0, 1, 'f'])
-        assert_series_equal(s, expected)
-
-        # list
-        s = Series(list('abc'))
-
-        def f():
-            s[[0, 1, 2]] = list(range(27))
-
-        self.assertRaises(ValueError, f)
-
-        s = Series(list('abc'))
-
-        def f():
-            s[[0, 1, 2]] = list(range(2))
-
-        self.assertRaises(ValueError, f)
-
-        # scalar
-        s = Series(list('abc'))
-        s[0] = list(range(10))
-        expected = Series([list(range(10)), 'b', 'c'])
-        assert_series_equal(s, expected)
-
-    def test_where_broadcast(self):
-        # Test a variety of differently sized series
-        for size in range(2, 6):
-            # Test a variety of boolean indices
-            for selection in [
-                    # First element should be set
-                    np.resize([True, False, False, False, False], size),
-                    # Set alternating elements]
-                    np.resize([True, False], size),
-                    # No element should be set
-                    np.resize([False], size)]:
-
-                # Test a variety of different numbers as content
-                for item in [2.0, np.nan, np.finfo(np.float).max,
-                             np.finfo(np.float).min]:
-                    # Test numpy arrays, lists and tuples as the input to be
-                    # broadcast
-                    for arr in [np.array([item]), [item], (item, )]:
-                        data = np.arange(size, dtype=float)
-                        s = Series(data)
-                        s[selection] = arr
-                        # Construct the expected series by taking the source
-                        # data or item based on the selection
-                        expected = Series([item if use_item else data[
-                            i] for i, use_item in enumerate(selection)])
-                        assert_series_equal(s, expected)
-
-                        s = Series(data)
-                        result = s.where(~selection, arr)
-                        assert_series_equal(result, expected)
-
-    def test_where_inplace(self):
-        s = Series(np.random.randn(5))
-        cond = s > 0
-
-        rs = s.copy()
-
-        rs.where(cond, inplace=True)
-        assert_series_equal(rs.dropna(), s[cond])
-        assert_series_equal(rs, s.where(cond))
-
-        rs = s.copy()
-        rs.where(cond, -s, inplace=True)
-        assert_series_equal(rs, s.where(cond, -s))
-
-    def test_where_dups(self):
-        # GH 4550
-        # where crashes with dups in index
-        s1 = Series(list(range(3)))
-        s2 = Series(list(range(3)))
-        comb = pd.concat([s1, s2])
-        result = comb.where(comb < 2)
-        expected = Series([0, 1, np.nan, 0, 1, np.nan],
-                          index=[0, 1, 2, 0, 1, 2])
-        assert_series_equal(result, expected)
-
-        # GH 4548
-        # inplace updating not working with dups
-        comb[comb < 1] = 5
-        expected = Series([5, 1, 2, 5, 1, 2], index=[0, 1, 2, 0, 1, 2])
-        assert_series_equal(comb, expected)
-
-        comb[comb < 2] += 10
-        expected = Series([5, 11, 2, 5, 11, 2], index=[0, 1, 2, 0, 1, 2])
-        assert_series_equal(comb, expected)
-
-    def test_where_datetime(self):
-        s = Series(date_range('20130102', periods=2))
-        expected = Series([10, 10], dtype='datetime64[ns]')
-        mask = np.array([False, False])
-
-        rs = s.where(mask, [10, 10])
-        assert_series_equal(rs, expected)
-
-        rs = s.where(mask, 10)
-        assert_series_equal(rs, expected)
-
-        rs = s.where(mask, 10.0)
-        assert_series_equal(rs, expected)
-
-        rs = s.where(mask, [10.0, 10.0])
-        assert_series_equal(rs, expected)
-
-        rs = s.where(mask, [10.0, np.nan])
-        expected = Series([10, None], dtype='datetime64[ns]')
-        assert_series_equal(rs, expected)
-
-    def test_where_timedelta(self):
-        s = Series([1, 2], dtype='timedelta64[ns]')
-        expected = Series([10, 10], dtype='timedelta64[ns]')
-        mask = np.array([False, False])
-
-        rs = s.where(mask, [10, 10])
-        assert_series_equal(rs, expected)
-
-        rs = s.where(mask, 10)
-        assert_series_equal(rs, expected)
-
-        rs = s.where(mask, 10.0)
-        assert_series_equal(rs, expected)
-
-        rs = s.where(mask, [10.0, 10.0])
-        assert_series_equal(rs, expected)
-
-        rs = s.where(mask, [10.0, np.nan])
-        expected = Series([10, None], dtype='timedelta64[ns]')
-        assert_series_equal(rs, expected)
-
-    def test_mask(self):
-        # compare with tested results in test_where
-        s = Series(np.random.randn(5))
-        cond = s > 0
-
-        rs = s.where(~cond, np.nan)
-        assert_series_equal(rs, s.mask(cond))
-
-        rs = s.where(~cond)
-        rs2 = s.mask(cond)
-        assert_series_equal(rs, rs2)
-
-        rs = s.where(~cond, -s)
-        rs2 = s.mask(cond, -s)
-        assert_series_equal(rs, rs2)
-
-        cond = Series([True, False, False, True, False], index=s.index)
-        s2 = -(s.abs())
-        rs = s2.where(~cond[:3])
-        rs2 = s2.mask(cond[:3])
-        assert_series_equal(rs, rs2)
-
-        rs = s2.where(~cond[:3], -s2)
-        rs2 = s2.mask(cond[:3], -s2)
-        assert_series_equal(rs, rs2)
-
-        self.assertRaises(ValueError, s.mask, 1)
-        self.assertRaises(ValueError, s.mask, cond[:3].values, -s)
-
-        # dtype changes
-        s = Series([1, 2, 3, 4])
-        result = s.mask(s > 2, np.nan)
-        expected = Series([1, 2, np.nan, np.nan])
-        assert_series_equal(result, expected)
-
-    def test_mask_broadcast(self):
-        # GH 8801
-        # copied from test_where_broadcast
-        for size in range(2, 6):
-            for selection in [
-                    # First element should be set
-                    np.resize([True, False, False, False, False], size),
-                    # Set alternating elements]
-                    np.resize([True, False], size),
-                    # No element should be set
-                    np.resize([False], size)]:
-                for item in [2.0, np.nan, np.finfo(np.float).max,
-                             np.finfo(np.float).min]:
-                    for arr in [np.array([item]), [item], (item, )]:
-                        data = np.arange(size, dtype=float)
-                        s = Series(data)
-                        result = s.mask(selection, arr)
-                        expected = Series([item if use_item else data[
-                            i] for i, use_item in enumerate(selection)])
-                        assert_series_equal(result, expected)
-
-    def test_mask_inplace(self):
-        s = Series(np.random.randn(5))
-        cond = s > 0
-
-        rs = s.copy()
-        rs.mask(cond, inplace=True)
-        assert_series_equal(rs.dropna(), s[~cond])
-        assert_series_equal(rs, s.mask(cond))
-
-        rs = s.copy()
-        rs.mask(cond, -s, inplace=True)
-        assert_series_equal(rs, s.mask(cond, -s))
-
-    def test_drop(self):
-
-        # unique
-        s = Series([1, 2], index=['one', 'two'])
-        expected = Series([1], index=['one'])
-        result = s.drop(['two'])
-        assert_series_equal(result, expected)
-        result = s.drop('two', axis='rows')
-        assert_series_equal(result, expected)
-
-        # non-unique
-        # GH 5248
-        s = Series([1, 1, 2], index=['one', 'two', 'one'])
-        expected = Series([1, 2], index=['one', 'one'])
-        result = s.drop(['two'], axis=0)
-        assert_series_equal(result, expected)
-        result = s.drop('two')
-        assert_series_equal(result, expected)
-
-        expected = Series([1], index=['two'])
-        result = s.drop(['one'])
-        assert_series_equal(result, expected)
-        result = s.drop('one')
-        assert_series_equal(result, expected)
-
-        # single string/tuple-like
-        s = Series(range(3), index=list('abc'))
-        self.assertRaises(ValueError, s.drop, 'bc')
-        self.assertRaises(ValueError, s.drop, ('a', ))
-
-        # errors='ignore'
-        s = Series(range(3), index=list('abc'))
-        result = s.drop('bc', errors='ignore')
-        assert_series_equal(result, s)
-        result = s.drop(['a', 'd'], errors='ignore')
-        expected = s.ix[1:]
-        assert_series_equal(result, expected)
-
-        # bad axis
-        self.assertRaises(ValueError, s.drop, 'one', axis='columns')
-
-        # GH 8522
-        s = Series([2, 3], index=[True, False])
-        self.assertTrue(s.index.is_object())
-        result = s.drop(True)
-        expected = Series([3], index=[False])
-        assert_series_equal(result, expected)
-
-    def test_ix_setitem(self):
-        inds = self.series.index[[3, 4, 7]]
-
-        result = self.series.copy()
-        result.ix[inds] = 5
-
-        expected = self.series.copy()
-        expected[[3, 4, 7]] = 5
-        assert_series_equal(result, expected)
-
-        result.ix[5:10] = 10
-        expected[5:10] = 10
-        assert_series_equal(result, expected)
-
-        # set slice with indices
-        d1, d2 = self.series.index[[5, 15]]
-        result.ix[d1:d2] = 6
-        expected[5:16] = 6  # because it's inclusive
-        assert_series_equal(result, expected)
-
-        # set index value
-        self.series.ix[d1] = 4
-        self.series.ix[d2] = 6
-        self.assertEqual(self.series[d1], 4)
-        self.assertEqual(self.series[d2], 6)
-
-    def test_where_numeric_with_string(self):
-        # GH 9280
-        s = pd.Series([1, 2, 3])
-        w = s.where(s > 1, 'X')
-
-        self.assertFalse(com.is_integer(w[0]))
-        self.assertTrue(com.is_integer(w[1]))
-        self.assertTrue(com.is_integer(w[2]))
-        self.assertTrue(isinstance(w[0], str))
-        self.assertTrue(w.dtype == 'object')
-
-        w = s.where(s > 1, ['X', 'Y', 'Z'])
-        self.assertFalse(com.is_integer(w[0]))
-        self.assertTrue(com.is_integer(w[1]))
-        self.assertTrue(com.is_integer(w[2]))
-        self.assertTrue(isinstance(w[0], str))
-        self.assertTrue(w.dtype == 'object')
-
-        w = s.where(s > 1, np.array(['X', 'Y', 'Z']))
-        self.assertFalse(com.is_integer(w[0]))
-        self.assertTrue(com.is_integer(w[1]))
-        self.assertTrue(com.is_integer(w[2]))
-        self.assertTrue(isinstance(w[0], str))
-        self.assertTrue(w.dtype == 'object')
-
-    def test_setitem_boolean(self):
-        mask = self.series > self.series.median()
-
-        # similiar indexed series
-        result = self.series.copy()
-        result[mask] = self.series * 2
-        expected = self.series * 2
-        assert_series_equal(result[mask], expected[mask])
-
-        # needs alignment
-        result = self.series.copy()
-        result[mask] = (self.series * 2)[0:5]
-        expected = (self.series * 2)[0:5].reindex_like(self.series)
-        expected[-mask] = self.series[mask]
-        assert_series_equal(result[mask], expected[mask])
-
-    def test_ix_setitem_boolean(self):
-        mask = self.series > self.series.median()
-
-        result = self.series.copy()
-        result.ix[mask] = 0
-        expected = self.series
-        expected[mask] = 0
-        assert_series_equal(result, expected)
-
-    def test_ix_setitem_corner(self):
-        inds = list(self.series.index[[5, 8, 12]])
-        self.series.ix[inds] = 5
-        self.assertRaises(Exception, self.series.ix.__setitem__,
-                          inds + ['foo'], 5)
-
-    def test_get_set_boolean_different_order(self):
-        ordered = self.series.sort_values()
-
-        # setting
-        copy = self.series.copy()
-        copy[ordered > 0] = 0
-
-        expected = self.series.copy()
-        expected[expected > 0] = 0
-
-        assert_series_equal(copy, expected)
-
-        # getting
-        sel = self.series[ordered > 0]
-        exp = self.series[self.series > 0]
-        assert_series_equal(sel, exp)
-
-    def test_repr(self):
-        str(self.ts)
-        str(self.series)
-        str(self.series.astype(int))
-        str(self.objSeries)
-
-        str(Series(tm.randn(1000), index=np.arange(1000)))
-        str(Series(tm.randn(1000), index=np.arange(1000, 0, step=-1)))
-
-        # empty
-        str(self.empty)
-
-        # with NaNs
-        self.series[5:7] = np.NaN
-        str(self.series)
-
-        # with Nones
-        ots = self.ts.astype('O')
-        ots[::2] = None
-        repr(ots)
-
-        # various names
-        for name in ['', 1, 1.2, 'foo', u('\u03B1\u03B2\u03B3'),
-                     'loooooooooooooooooooooooooooooooooooooooooooooooooooong',
-                     ('foo', 'bar', 'baz'), (1, 2), ('foo', 1, 2.3),
-                     (u('\u03B1'), u('\u03B2'), u('\u03B3')),
-                     (u('\u03B1'), 'bar')]:
-            self.series.name = name
-            repr(self.series)
-
-        biggie = Series(tm.randn(1000), index=np.arange(1000),
-                        name=('foo', 'bar', 'baz'))
-        repr(biggie)
-
-        # 0 as name
-        ser = Series(np.random.randn(100), name=0)
-        rep_str = repr(ser)
-        self.assertIn("Name: 0", rep_str)
-
-        # tidy repr
-        ser = Series(np.random.randn(1001), name=0)
-        rep_str = repr(ser)
-        self.assertIn("Name: 0", rep_str)
-
-        ser = Series(["a\n\r\tb"], name=["a\n\r\td"], index=["a\n\r\tf"])
-        self.assertFalse("\t" in repr(ser))
-        self.assertFalse("\r" in repr(ser))
-        self.assertFalse("a\n" in repr(ser))
-
-        # with empty series (#4651)
-        s = Series([], dtype=np.int64, name='foo')
-        self.assertEqual(repr(s), 'Series([], Name: foo, dtype: int64)')
-
-        s = Series([], dtype=np.int64, name=None)
-        self.assertEqual(repr(s), 'Series([], dtype: int64)')
-
-    def test_tidy_repr(self):
-        a = Series([u("\u05d0")] * 1000)
-        a.name = 'title1'
-        repr(a)  # should not raise exception
-
-    def test_repr_bool_fails(self):
-        s = Series([DataFrame(np.random.randn(2, 2)) for i in range(5)])
-
-        import sys
-
-        buf = StringIO()
-        tmp = sys.stderr
-        sys.stderr = buf
-        try:
-            # it works (with no Cython exception barf)!
- repr(s) - finally: - sys.stderr = tmp - self.assertEqual(buf.getvalue(), '') - - def test_repr_name_iterable_indexable(self): - s = Series([1, 2, 3], name=np.int64(3)) - - # it works! - repr(s) - - s.name = (u("\u05d0"), ) * 2 - repr(s) - - def test_repr_should_return_str(self): - # http://docs.python.org/py3k/reference/datamodel.html#object.__repr__ - # http://docs.python.org/reference/datamodel.html#object.__repr__ - # ...The return value must be a string object. - - # (str on py2.x, str (unicode) on py3) - - data = [8, 5, 3, 5] - index1 = [u("\u03c3"), u("\u03c4"), u("\u03c5"), u("\u03c6")] - df = Series(data, index=index1) - self.assertTrue(type(df.__repr__() == str)) # both py2 / 3 - - def test_repr_max_rows(self): - # GH 6863 - with pd.option_context('max_rows', None): - str(Series(range(1001))) # should not raise exception - - def test_unicode_string_with_unicode(self): - df = Series([u("\u05d0")], name=u("\u05d1")) - if compat.PY3: - str(df) - else: - compat.text_type(df) - - def test_bytestring_with_unicode(self): - df = Series([u("\u05d0")], name=u("\u05d1")) - if compat.PY3: - bytes(df) - else: - str(df) - - def test_timeseries_repr_object_dtype(self): - index = Index([datetime(2000, 1, 1) + timedelta(i) - for i in range(1000)], dtype=object) - ts = Series(np.random.randn(len(index)), index) - repr(ts) - - ts = tm.makeTimeSeries(1000) - self.assertTrue(repr(ts).splitlines()[-1].startswith('Freq:')) - - ts2 = ts.ix[np.random.randint(0, len(ts) - 1, 400)] - repr(ts2).splitlines()[-1] - - def test_timeseries_periodindex(self): - # GH2891 - from pandas import period_range - prng = period_range('1/1/2011', '1/1/2012', freq='M') - ts = Series(np.random.randn(len(prng)), prng) - new_ts = self.round_trip_pickle(ts) - self.assertEqual(new_ts.index.freq, 'M') - - def test_iter(self): - for i, val in enumerate(self.series): - self.assertEqual(val, self.series[i]) - - for i, val in enumerate(self.ts): - self.assertEqual(val, self.ts[i]) - - def test_keys(self): - 
-        # HACK: By doing this in two stages, we avoid 2to3 wrapping the call
-        # to .keys() in a list()
-        getkeys = self.ts.keys
-        self.assertIs(getkeys(), self.ts.index)
-
-    def test_values(self):
-        self.assert_numpy_array_equal(self.ts, self.ts.values)
-
-    def test_iteritems(self):
-        for idx, val in compat.iteritems(self.series):
-            self.assertEqual(val, self.series[idx])
-
-        for idx, val in compat.iteritems(self.ts):
-            self.assertEqual(val, self.ts[idx])
-
-        # assert is lazy (generators don't define reverse, lists do)
-        self.assertFalse(hasattr(self.series.iteritems(), 'reverse'))
-
-    def test_sum(self):
-        self._check_stat_op('sum', np.sum, check_allna=True)
-
-    def test_sum_inf(self):
-        import pandas.core.nanops as nanops
-
-        s = Series(np.random.randn(10))
-        s2 = s.copy()
-
-        s[5:8] = np.inf
-        s2[5:8] = np.nan
-
-        self.assertTrue(np.isinf(s.sum()))
-
-        arr = np.random.randn(100, 100).astype('f4')
-        arr[:, 2] = np.inf
-
-        with cf.option_context("mode.use_inf_as_null", True):
-            assert_almost_equal(s.sum(), s2.sum())
-
-        res = nanops.nansum(arr, axis=1)
-        self.assertTrue(np.isinf(res).all())
-
-    def test_mean(self):
-        self._check_stat_op('mean', np.mean)
-
-    def test_median(self):
-        self._check_stat_op('median', np.median)
-
-        # test with integers, test failure
-        int_ts = Series(np.ones(10, dtype=int), index=lrange(10))
-        self.assertAlmostEqual(np.median(int_ts), int_ts.median())
-
-    def test_mode(self):
-        s = Series([12, 12, 11, 10, 19, 11])
-        exp = Series([11, 12])
-        assert_series_equal(s.mode(), exp)
-
-        assert_series_equal(
-            Series([1, 2, 3]).mode(), Series(
-                [], dtype='int64'))
-
-        lst = [5] * 20 + [1] * 10 + [6] * 25
-        np.random.shuffle(lst)
-        s = Series(lst)
-        assert_series_equal(s.mode(), Series([6]))
-
-        s = Series([5] * 10)
-        assert_series_equal(s.mode(), Series([5]))
-
-        s = Series(lst)
-        s[0] = np.nan
-        assert_series_equal(s.mode(), Series([6.]))
-
-        s = Series(list('adfasbasfwewefwefweeeeasdfasnbam'))
-        assert_series_equal(s.mode(), Series(['e']))
-
-        s =
            Series(['2011-01-03', '2013-01-02', '1900-05-03'], dtype='M8[ns]')
-        assert_series_equal(s.mode(), Series([], dtype="M8[ns]"))
-        s = Series(['2011-01-03', '2013-01-02', '1900-05-03', '2011-01-03',
-                    '2013-01-02'], dtype='M8[ns]')
-        assert_series_equal(s.mode(), Series(['2011-01-03', '2013-01-02'],
-                                             dtype='M8[ns]'))
-
-    def test_prod(self):
-        self._check_stat_op('prod', np.prod)
-
-    def test_min(self):
-        self._check_stat_op('min', np.min, check_objects=True)
-
-    def test_max(self):
-        self._check_stat_op('max', np.max, check_objects=True)
-
-    def test_var_std(self):
-        alt = lambda x: np.std(x, ddof=1)
-        self._check_stat_op('std', alt)
-
-        alt = lambda x: np.var(x, ddof=1)
-        self._check_stat_op('var', alt)
-
-        result = self.ts.std(ddof=4)
-        expected = np.std(self.ts.values, ddof=4)
-        assert_almost_equal(result, expected)
-
-        result = self.ts.var(ddof=4)
-        expected = np.var(self.ts.values, ddof=4)
-        assert_almost_equal(result, expected)
-
-        # 1 - element series with ddof=1
-        s = self.ts.iloc[[0]]
-        result = s.var(ddof=1)
-        self.assertTrue(isnull(result))
-
-        result = s.std(ddof=1)
-        self.assertTrue(isnull(result))
-
-    def test_sem(self):
-        alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
-        self._check_stat_op('sem', alt)
-
-        result = self.ts.sem(ddof=4)
-        expected = np.std(self.ts.values,
-                          ddof=4) / np.sqrt(len(self.ts.values))
-        assert_almost_equal(result, expected)
-
-        # 1 - element series with ddof=1
-        s = self.ts.iloc[[0]]
-        result = s.sem(ddof=1)
-        self.assertTrue(isnull(result))
-
-    def test_skew(self):
-        tm._skip_if_no_scipy()
-
-        from scipy.stats import skew
-        alt = lambda x: skew(x, bias=False)
-        self._check_stat_op('skew', alt)
-
-        # test corner cases, skew() returns NaN unless there's at least 3
-        # values
-        min_N = 3
-        for i in range(1, min_N + 1):
-            s = Series(np.ones(i))
-            df = DataFrame(np.ones((i, i)))
-            if i < min_N:
-                self.assertTrue(np.isnan(s.skew()))
-                self.assertTrue(np.isnan(df.skew()).all())
-            else:
-                self.assertEqual(0, s.skew())
-                self.assertTrue((df.skew() == 0).all())
-
-    def test_kurt(self):
-        tm._skip_if_no_scipy()
-
-        from scipy.stats import kurtosis
-        alt = lambda x: kurtosis(x, bias=False)
-        self._check_stat_op('kurt', alt)
-
-        index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
-                           labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2],
-                                   [0, 1, 0, 1, 0, 1]])
-        s = Series(np.random.randn(6), index=index)
-        self.assertAlmostEqual(s.kurt(), s.kurt(level=0)['bar'])
-
-        # test corner cases, kurt() returns NaN unless there's at least 4
-        # values
-        min_N = 4
-        for i in range(1, min_N + 1):
-            s = Series(np.ones(i))
-            df = DataFrame(np.ones((i, i)))
-            if i < min_N:
-                self.assertTrue(np.isnan(s.kurt()))
-                self.assertTrue(np.isnan(df.kurt()).all())
-            else:
-                self.assertEqual(0, s.kurt())
-                self.assertTrue((df.kurt() == 0).all())
-
-    def test_argsort(self):
-        self._check_accum_op('argsort')
-        argsorted = self.ts.argsort()
-        self.assertTrue(issubclass(argsorted.dtype.type, np.integer))
-
-        # GH 2967 (introduced bug in 0.11-dev I think)
-        s = Series([Timestamp('201301%02d' % (i + 1)) for i in range(5)])
-        self.assertEqual(s.dtype, 'datetime64[ns]')
-        shifted = s.shift(-1)
-        self.assertEqual(shifted.dtype, 'datetime64[ns]')
-        self.assertTrue(isnull(shifted[4]))
-
-        result = s.argsort()
-        expected = Series(lrange(5), dtype='int64')
-        assert_series_equal(result, expected)
-
-        result = shifted.argsort()
-        expected = Series(lrange(4) + [-1], dtype='int64')
-        assert_series_equal(result, expected)
-
-    def test_argsort_stable(self):
-        s = Series(np.random.randint(0, 100, size=10000))
-        mindexer = s.argsort(kind='mergesort')
-        qindexer = s.argsort()
-
-        mexpected = np.argsort(s.values, kind='mergesort')
-        qexpected = np.argsort(s.values, kind='quicksort')
-
-        self.assert_numpy_array_equal(mindexer, mexpected)
-        self.assert_numpy_array_equal(qindexer, qexpected)
-        self.assertFalse(np.array_equal(qindexer, mindexer))
-
-    def test_reorder_levels(self):
-        index = MultiIndex(levels=[['bar'], ['one',
                                            'two', 'three'], [0, 1]],
-                           labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2],
-                                   [0, 1, 0, 1, 0, 1]],
-                           names=['L0', 'L1', 'L2'])
-        s = Series(np.arange(6), index=index)
-
-        # no change, position
-        result = s.reorder_levels([0, 1, 2])
-        assert_series_equal(s, result)
-
-        # no change, labels
-        result = s.reorder_levels(['L0', 'L1', 'L2'])
-        assert_series_equal(s, result)
-
-        # rotate, position
-        result = s.reorder_levels([1, 2, 0])
-        e_idx = MultiIndex(levels=[['one', 'two', 'three'], [0, 1], ['bar']],
-                           labels=[[0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1],
-                                   [0, 0, 0, 0, 0, 0]],
-                           names=['L1', 'L2', 'L0'])
-        expected = Series(np.arange(6), index=e_idx)
-        assert_series_equal(result, expected)
-
-        result = s.reorder_levels([0, 0, 0])
-        e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']],
-                           labels=[[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0],
-                                   [0, 0, 0, 0, 0, 0]],
-                           names=['L0', 'L0', 'L0'])
-        expected = Series(range(6), index=e_idx)
-        assert_series_equal(result, expected)
-
-        result = s.reorder_levels(['L0', 'L0', 'L0'])
-        assert_series_equal(result, expected)
-
-    def test_cumsum(self):
-        self._check_accum_op('cumsum')
-
-    def test_cumprod(self):
-        self._check_accum_op('cumprod')
-
-    def test_cummin(self):
-        self.assert_numpy_array_equal(self.ts.cummin(),
-                                      np.minimum.accumulate(np.array(self.ts)))
-        ts = self.ts.copy()
-        ts[::2] = np.NaN
-        result = ts.cummin()[1::2]
-        expected = np.minimum.accumulate(ts.valid())
-
-        self.assert_numpy_array_equal(result, expected)
-
-    def test_cummax(self):
-        self.assert_numpy_array_equal(self.ts.cummax(),
-                                      np.maximum.accumulate(np.array(self.ts)))
-        ts = self.ts.copy()
-        ts[::2] = np.NaN
-        result = ts.cummax()[1::2]
-        expected = np.maximum.accumulate(ts.valid())
-
-        self.assert_numpy_array_equal(result, expected)
-
-    def test_cummin_datetime64(self):
-        s = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', '2000-1-1',
-                                      'NaT', '2000-1-3']))
-
-        expected = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT',
-                                             '2000-1-1', 'NaT', '2000-1-1']))
-        result = s.cummin(skipna=True)
-        self.assert_series_equal(expected, result)
-
-        expected = pd.Series(pd.to_datetime(
-            ['NaT', '2000-1-2', '2000-1-2', '2000-1-1', '2000-1-1', '2000-1-1'
-             ]))
-        result = s.cummin(skipna=False)
-        self.assert_series_equal(expected, result)
-
-    def test_cummax_datetime64(self):
-        s = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', '2000-1-1',
-                                      'NaT', '2000-1-3']))
-
-        expected = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT',
-                                             '2000-1-2', 'NaT', '2000-1-3']))
-        result = s.cummax(skipna=True)
-        self.assert_series_equal(expected, result)
-
-        expected = pd.Series(pd.to_datetime(
-            ['NaT', '2000-1-2', '2000-1-2', '2000-1-2', '2000-1-2', '2000-1-3'
-             ]))
-        result = s.cummax(skipna=False)
-        self.assert_series_equal(expected, result)
-
-    def test_cummin_timedelta64(self):
-        s = pd.Series(pd.to_timedelta(['NaT',
-                                       '2 min',
-                                       'NaT',
-                                       '1 min',
-                                       'NaT',
-                                       '3 min', ]))
-
-        expected = pd.Series(pd.to_timedelta(['NaT',
-                                              '2 min',
-                                              'NaT',
-                                              '1 min',
-                                              'NaT',
-                                              '1 min', ]))
-        result = s.cummin(skipna=True)
-        self.assert_series_equal(expected, result)
-
-        expected = pd.Series(pd.to_timedelta(['NaT',
-                                              '2 min',
-                                              '2 min',
-                                              '1 min',
-                                              '1 min',
-                                              '1 min', ]))
-        result = s.cummin(skipna=False)
-        self.assert_series_equal(expected, result)
-
-    def test_cummax_timedelta64(self):
-        s = pd.Series(pd.to_timedelta(['NaT',
-                                       '2 min',
-                                       'NaT',
-                                       '1 min',
-                                       'NaT',
-                                       '3 min', ]))
-
-        expected = pd.Series(pd.to_timedelta(['NaT',
-                                              '2 min',
-                                              'NaT',
-                                              '2 min',
-                                              'NaT',
-                                              '3 min', ]))
-        result = s.cummax(skipna=True)
-        self.assert_series_equal(expected, result)
-
-        expected = pd.Series(pd.to_timedelta(['NaT',
-                                              '2 min',
-                                              '2 min',
-                                              '2 min',
-                                              '2 min',
-                                              '3 min', ]))
-        result = s.cummax(skipna=False)
-        self.assert_series_equal(expected, result)
-
-    def test_npdiff(self):
-        raise nose.SkipTest("skipping due to Series no longer being an "
-                            "ndarray")
-
-        # no longer works as the return type of np.diff is now nd.array
-        s = Series(np.arange(5))
-
-        r =
            np.diff(s)
-        assert_series_equal(Series([nan, 0, 0, 0, nan]), r)
-
-    def _check_stat_op(self, name, alternate, check_objects=False,
-                       check_allna=False):
-        import pandas.core.nanops as nanops
-
-        def testit():
-            f = getattr(Series, name)
-
-            # add some NaNs
-            self.series[5:15] = np.NaN
-
-            # idxmax, idxmin, min, and max are valid for dates
-            if name not in ['max', 'min']:
-                ds = Series(date_range('1/1/2001', periods=10))
-                self.assertRaises(TypeError, f, ds)
-
-            # skipna or no
-            self.assertTrue(notnull(f(self.series)))
-            self.assertTrue(isnull(f(self.series, skipna=False)))
-
-            # check the result is correct
-            nona = self.series.dropna()
-            assert_almost_equal(f(nona), alternate(nona.values))
-            assert_almost_equal(f(self.series), alternate(nona.values))
-
-            allna = self.series * nan
-
-            if check_allna:
-                # xref 9422
-                # bottleneck >= 1.0 gives 0.0 for an allna Series sum
-                try:
-                    self.assertTrue(nanops._USE_BOTTLENECK)
-                    import bottleneck as bn  # noqa
-                    self.assertTrue(bn.__version__ >= LooseVersion('1.0'))
-                    self.assertEqual(f(allna), 0.0)
-                except:
-                    self.assertTrue(np.isnan(f(allna)))
-
-            # dtype=object with None, it works!
-            s = Series([1, 2, 3, None, 5])
-            f(s)
-
-            # 2888
-            l = [0]
-            l.extend(lrange(2 ** 40, 2 ** 40 + 1000))
-            s = Series(l, dtype='int64')
-            assert_almost_equal(float(f(s)), float(alternate(s.values)))
-
-            # check date range
-            if check_objects:
-                s = Series(bdate_range('1/1/2000', periods=10))
-                res = f(s)
-                exp = alternate(s)
-                self.assertEqual(res, exp)
-
-            # check on string data
-            if name not in ['sum', 'min', 'max']:
-                self.assertRaises(TypeError, f, Series(list('abc')))
-
-            # Invalid axis.
-            self.assertRaises(ValueError, f, self.series, axis=1)
-
-            # Unimplemented numeric_only parameter.
-            if 'numeric_only' in getargspec(f).args:
-                self.assertRaisesRegexp(NotImplementedError, name, f,
-                                        self.series, numeric_only=True)
-
-        testit()
-
-        try:
-            import bottleneck as bn  # noqa
-            nanops._USE_BOTTLENECK = False
-            testit()
-            nanops._USE_BOTTLENECK = True
-        except ImportError:
-            pass
-
-    def _check_accum_op(self, name):
-        func = getattr(np, name)
-        self.assert_numpy_array_equal(func(self.ts), func(np.array(self.ts)))
-
-        # with missing values
-        ts = self.ts.copy()
-        ts[::2] = np.NaN
-
-        result = func(ts)[1::2]
-        expected = func(np.array(ts.valid()))
-
-        self.assert_numpy_array_equal(result, expected)
-
-    def test_round(self):
-        # numpy.round doesn't preserve metadata, probably a numpy bug,
-        # re: GH #314
-        self.ts.index.name = "index_name"
-        result = self.ts.round(2)
-        expected = Series(np.round(self.ts.values, 2), index=self.ts.index,
-                          name='ts')
-        assert_series_equal(result, expected)
-        self.assertEqual(result.name, self.ts.name)
-
-    def test_built_in_round(self):
-        if not compat.PY3:
-            raise nose.SkipTest(
-                'built-in round cannot be overridden prior to Python 3')
-
-        s = Series([1.123, 2.123, 3.123], index=lrange(3))
-        result = round(s)
-        expected_rounded0 = Series([1., 2., 3.], index=lrange(3))
-        self.assert_series_equal(result, expected_rounded0)
-
-        decimals = 2
-        expected_rounded = Series([1.12, 2.12, 3.12], index=lrange(3))
-        result = round(s, decimals)
-        self.assert_series_equal(result, expected_rounded)
-
-    def test_prod_numpy16_bug(self):
-        s = Series([1., 1., 1.], index=lrange(3))
-        result = s.prod()
-        self.assertNotIsInstance(result, Series)
-
-    def test_quantile(self):
-        from numpy import percentile
-
-        q = self.ts.quantile(0.1)
-        self.assertEqual(q, percentile(self.ts.valid(), 10))
-
-        q = self.ts.quantile(0.9)
-        self.assertEqual(q, percentile(self.ts.valid(), 90))
-
-        # object dtype
-        q = Series(self.ts, dtype=object).quantile(0.9)
-        self.assertEqual(q, percentile(self.ts.valid(), 90))
-
-        # datetime64[ns] dtype
-        dts =
            self.ts.index.to_series()
-        q = dts.quantile(.2)
-        self.assertEqual(q, Timestamp('2000-01-10 19:12:00'))
-
-        # timedelta64[ns] dtype
-        tds = dts.diff()
-        q = tds.quantile(.25)
-        self.assertEqual(q, pd.to_timedelta('24:00:00'))
-
-        # GH7661
-        result = Series([np.timedelta64('NaT')]).sum()
-        self.assertTrue(result is pd.NaT)
-
-        msg = 'percentiles should all be in the interval \\[0, 1\\]'
-        for invalid in [-1, 2, [0.5, -1], [0.5, 2]]:
-            with tm.assertRaisesRegexp(ValueError, msg):
-                self.ts.quantile(invalid)
-
-    def test_quantile_multi(self):
-        from numpy import percentile
-
-        qs = [.1, .9]
-        result = self.ts.quantile(qs)
-        expected = pd.Series([percentile(self.ts.valid(), 10),
-                              percentile(self.ts.valid(), 90)],
-                             index=qs, name=self.ts.name)
-        assert_series_equal(result, expected)
-
-        dts = self.ts.index.to_series()
-        dts.name = 'xxx'
-        result = dts.quantile((.2, .2))
-        expected = Series([Timestamp('2000-01-10 19:12:00'),
-                           Timestamp('2000-01-10 19:12:00')],
-                          index=[.2, .2], name='xxx')
-        assert_series_equal(result, expected)
-
-        result = self.ts.quantile([])
-        expected = pd.Series([], name=self.ts.name, index=Index(
-            [], dtype=float))
-        assert_series_equal(result, expected)
-
-    def test_quantile_interpolation(self):
-        # GH #10174
-        if _np_version_under1p9:
-            raise nose.SkipTest("Numpy version is under 1.9")
-
-        from numpy import percentile
-
-        # interpolation = linear (default case)
-        q = self.ts.quantile(0.1, interpolation='linear')
-        self.assertEqual(q, percentile(self.ts.valid(), 10))
-        q1 = self.ts.quantile(0.1)
-        self.assertEqual(q1, percentile(self.ts.valid(), 10))
-
-        # test with and without interpolation keyword
-        self.assertEqual(q, q1)
-
-    def test_quantile_interpolation_np_lt_1p9(self):
-        # GH #10174
-        if not _np_version_under1p9:
-            raise nose.SkipTest("Numpy version is greater than 1.9")
-
-        from numpy import percentile
-
-        # interpolation = linear (default case)
-        q = self.ts.quantile(0.1, interpolation='linear')
-        self.assertEqual(q,
                         percentile(self.ts.valid(), 10))
-        q1 = self.ts.quantile(0.1)
-        self.assertEqual(q1, percentile(self.ts.valid(), 10))
-
-        # interpolation other than linear
-        expErrMsg = "Interpolation methods other than "
-        with tm.assertRaisesRegexp(ValueError, expErrMsg):
-            self.ts.quantile(0.9, interpolation='nearest')
-
-        # object dtype
-        with tm.assertRaisesRegexp(ValueError, expErrMsg):
-            q = Series(self.ts, dtype=object).quantile(0.7,
-                                                       interpolation='higher')
-
-    def test_append(self):
-        appendedSeries = self.series.append(self.objSeries)
-        for idx, value in compat.iteritems(appendedSeries):
-            if idx in self.series.index:
-                self.assertEqual(value, self.series[idx])
-            elif idx in self.objSeries.index:
-                self.assertEqual(value, self.objSeries[idx])
-            else:
-                self.fail("orphaned index!")
-
-        self.assertRaises(ValueError, self.ts.append, self.ts,
-                          verify_integrity=True)
-
-    def test_append_many(self):
-        pieces = [self.ts[:5], self.ts[5:10], self.ts[10:]]
-
-        result = pieces[0].append(pieces[1:])
-        assert_series_equal(result, self.ts)
-
-    def test_all_any(self):
-        ts = tm.makeTimeSeries()
-        bool_series = ts > 0
-        self.assertFalse(bool_series.all())
-        self.assertTrue(bool_series.any())
-
-        # Alternative types, with implicit 'object' dtype.
-        s = Series(['abc', True])
-        self.assertEqual('abc', s.any())  # 'abc' || True => 'abc'
-
-    def test_all_any_params(self):
-        # Check skipna, with implicit 'object' dtype.
-        s1 = Series([np.nan, True])
-        s2 = Series([np.nan, False])
-        self.assertTrue(s1.all(skipna=False))  # nan && True => True
-        self.assertTrue(s1.all(skipna=True))
-        self.assertTrue(np.isnan(s2.any(skipna=False)))  # nan || False => nan
-        self.assertFalse(s2.any(skipna=True))
-
-        # Check level.
-        s = pd.Series([False, False, True, True, False, True],
-                      index=[0, 0, 1, 1, 2, 2])
-        assert_series_equal(s.all(level=0), Series([False, True, False]))
-        assert_series_equal(s.any(level=0), Series([False, True, True]))
-
-        # bool_only is not implemented with level option.
-        self.assertRaises(NotImplementedError, s.any, bool_only=True, level=0)
-        self.assertRaises(NotImplementedError, s.all, bool_only=True, level=0)
-
-        # bool_only is not implemented alone.
-        self.assertRaises(NotImplementedError, s.any, bool_only=True)
-        self.assertRaises(NotImplementedError, s.all, bool_only=True)
-
-    def test_op_method(self):
-        def check(series, other, check_reverse=False):
-            simple_ops = ['add', 'sub', 'mul', 'floordiv', 'truediv', 'pow']
-            if not compat.PY3:
-                simple_ops.append('div')
-
-            for opname in simple_ops:
-                op = getattr(Series, opname)
-
-                if op == 'div':
-                    alt = operator.truediv
-                else:
-                    alt = getattr(operator, opname)
-
-                result = op(series, other)
-                expected = alt(series, other)
-                tm.assert_almost_equal(result, expected)
-                if check_reverse:
-                    rop = getattr(Series, "r" + opname)
-                    result = rop(series, other)
-                    expected = alt(other, series)
-                    tm.assert_almost_equal(result, expected)
-
-        check(self.ts, self.ts * 2)
-        check(self.ts, self.ts[::2])
-        check(self.ts, 5, check_reverse=True)
-        check(tm.makeFloatSeries(), tm.makeFloatSeries(), check_reverse=True)
-
-    def test_neg(self):
-        assert_series_equal(-self.series, -1 * self.series)
-
-    def test_invert(self):
-        assert_series_equal(-(self.series < 0), ~(self.series < 0))
-
-    def test_modulo(self):
-
-        # GH3590, modulo as ints
-        p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]})
-        result = p['first'] % p['second']
-        expected = Series(p['first'].values % p['second'].values,
-                          dtype='float64')
-        expected.iloc[0:3] = np.nan
-        assert_series_equal(result, expected)
-
-        result = p['first'] % 0
-        expected = Series(np.nan, index=p.index, name='first')
-        assert_series_equal(result, expected)
-
-        p = p.astype('float64')
-        result = p['first'] % p['second']
-        expected = Series(p['first'].values % p['second'].values)
-        assert_series_equal(result, expected)
-
-        p = p.astype('float64')
-        result = p['first'] % p['second']
-        result2 = p['second'] % p['first']
-        self.assertFalse(np.array_equal(result, result2))
-
-        # GH 9144
-        s = Series([0, 1])
-
-        result = s % 0
-        expected = Series([nan, nan])
-        assert_series_equal(result, expected)
-
-        result = 0 % s
-        expected = Series([nan, 0.0])
-        assert_series_equal(result, expected)
-
-    def test_div(self):
-
-        # no longer do integer div for any ops, but deal with the 0's
-        p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]})
-        result = p['first'] / p['second']
-        expected = Series(p['first'].values.astype(float) / p['second'].values,
-                          dtype='float64')
-        expected.iloc[0:3] = np.inf
-        assert_series_equal(result, expected)
-
-        result = p['first'] / 0
-        expected = Series(np.inf, index=p.index, name='first')
-        assert_series_equal(result, expected)
-
-        p = p.astype('float64')
-        result = p['first'] / p['second']
-        expected = Series(p['first'].values / p['second'].values)
-        assert_series_equal(result, expected)
-
-        p = DataFrame({'first': [3, 4, 5, 8], 'second': [1, 1, 1, 1]})
-        result = p['first'] / p['second']
-        assert_series_equal(result, p['first'].astype('float64'),
-                            check_names=False)
-        self.assertTrue(result.name is None)
-        self.assertFalse(np.array_equal(result, p['second'] / p['first']))
-
-        # inf signing
-        s = Series([np.nan, 1., -1.])
-        result = s / 0
-        expected = Series([np.nan, np.inf, -np.inf])
-        assert_series_equal(result, expected)
-
-        # float/integer issue
-        # GH 7785
-        p = DataFrame({'first': (1, 0), 'second': (-0.01, -0.02)})
-        expected = Series([-0.01, -np.inf])
-
-        result = p['second'].div(p['first'])
-        assert_series_equal(result, expected, check_names=False)
-
-        result = p['second'] / p['first']
-        assert_series_equal(result, expected)
-
-        # GH 9144
-        s = Series([-1, 0, 1])
-
-        result = 0 / s
-        expected = Series([0.0, nan, 0.0])
-        assert_series_equal(result, expected)
-
-        result = s / 0
-        expected = Series([-inf, nan, inf])
-        assert_series_equal(result, expected)
-
-        result = s // 0
-        expected = Series([-inf, nan, inf])
-        assert_series_equal(result,
                            expected)
-
-    def test_operators(self):
-        def _check_op(series, other, op, pos_only=False):
-            left = np.abs(series) if pos_only else series
-            right = np.abs(other) if pos_only else other
-
-            cython_or_numpy = op(left, right)
-            python = left.combine(right, op)
-            tm.assert_almost_equal(cython_or_numpy, python)
-
-        def check(series, other):
-            simple_ops = ['add', 'sub', 'mul', 'truediv', 'floordiv', 'mod']
-
-            for opname in simple_ops:
-                _check_op(series, other, getattr(operator, opname))
-
-            _check_op(series, other, operator.pow, pos_only=True)
-
-            _check_op(series, other, lambda x, y: operator.add(y, x))
-            _check_op(series, other, lambda x, y: operator.sub(y, x))
-            _check_op(series, other, lambda x, y: operator.truediv(y, x))
-            _check_op(series, other, lambda x, y: operator.floordiv(y, x))
-            _check_op(series, other, lambda x, y: operator.mul(y, x))
-            _check_op(series, other, lambda x, y: operator.pow(y, x),
-                      pos_only=True)
-            _check_op(series, other, lambda x, y: operator.mod(y, x))
-
-        check(self.ts, self.ts * 2)
-        check(self.ts, self.ts * 0)
-        check(self.ts, self.ts[::2])
-        check(self.ts, 5)
-
-        def check_comparators(series, other):
-            _check_op(series, other, operator.gt)
-            _check_op(series, other, operator.ge)
-            _check_op(series, other, operator.eq)
-            _check_op(series, other, operator.lt)
-            _check_op(series, other, operator.le)
-
-        check_comparators(self.ts, 5)
-        check_comparators(self.ts, self.ts + 1)
-
-    def test_operators_empty_int_corner(self):
-        s1 = Series([], [], dtype=np.int32)
-        s2 = Series({'x': 0.})
-        tm.assert_series_equal(s1 * s2, Series([np.nan], index=['x']))
-
-    def test_constructor_dtype_timedelta64(self):
-
-        # basic
-        td = Series([timedelta(days=i) for i in range(3)])
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        td = Series([timedelta(days=1)])
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        td = Series([timedelta(days=1), timedelta(days=2), np.timedelta64(
-            1, 's')])
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        # mixed with
NaT
-        from pandas import tslib
-        td = Series([timedelta(days=1), tslib.NaT], dtype='m8[ns]')
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        td = Series([timedelta(days=1), np.nan], dtype='m8[ns]')
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        td = Series([np.timedelta64(300000000), pd.NaT], dtype='m8[ns]')
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        # improved inference
-        # GH5689
-        td = Series([np.timedelta64(300000000), pd.NaT])
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        td = Series([np.timedelta64(300000000), tslib.iNaT])
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        td = Series([np.timedelta64(300000000), np.nan])
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        td = Series([pd.NaT, np.timedelta64(300000000)])
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        td = Series([np.timedelta64(1, 's')])
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        # these are frequency conversion astypes
-        # for t in ['s', 'D', 'us', 'ms']:
-        #    self.assertRaises(TypeError, td.astype, 'm8[%s]' % t)
-
-        # valid astype
-        td.astype('int64')
-
-        # invalid casting
-        self.assertRaises(TypeError, td.astype, 'int32')
-
-        # this is an invalid casting
-        def f():
-            Series([timedelta(days=1), 'foo'], dtype='m8[ns]')
-
-        self.assertRaises(Exception, f)
-
-        # leave as object here
-        td = Series([timedelta(days=i) for i in range(3)] + ['foo'])
-        self.assertEqual(td.dtype, 'object')
-
-        # these will correctly infer a timedelta
-        s = Series([None, pd.NaT, '1 Day'])
-        self.assertEqual(s.dtype, 'timedelta64[ns]')
-        s = Series([np.nan, pd.NaT, '1 Day'])
-        self.assertEqual(s.dtype, 'timedelta64[ns]')
-        s = Series([pd.NaT, None, '1 Day'])
-        self.assertEqual(s.dtype, 'timedelta64[ns]')
-        s = Series([pd.NaT, np.nan, '1 Day'])
-        self.assertEqual(s.dtype, 'timedelta64[ns]')
-
-    def test_operators_timedelta64(self):
-
-        # invalid ops
-        self.assertRaises(Exception, self.objSeries.__add__, 1)
-        self.assertRaises(Exception, self.objSeries.__add__,
-                          np.array(1,
                                   dtype=np.int64))
-        self.assertRaises(Exception, self.objSeries.__sub__, 1)
-        self.assertRaises(Exception, self.objSeries.__sub__,
-                          np.array(1, dtype=np.int64))
-
-        # series ops
-        v1 = date_range('2012-1-1', periods=3, freq='D')
-        v2 = date_range('2012-1-2', periods=3, freq='D')
-        rs = Series(v2) - Series(v1)
-        xp = Series(1e9 * 3600 * 24,
-                    rs.index).astype('int64').astype('timedelta64[ns]')
-        assert_series_equal(rs, xp)
-        self.assertEqual(rs.dtype, 'timedelta64[ns]')
-
-        df = DataFrame(dict(A=v1))
-        td = Series([timedelta(days=i) for i in range(3)])
-        self.assertEqual(td.dtype, 'timedelta64[ns]')
-
-        # series on the rhs
-        result = df['A'] - df['A'].shift()
-        self.assertEqual(result.dtype, 'timedelta64[ns]')
-
-        result = df['A'] + td
-        self.assertEqual(result.dtype, 'M8[ns]')
-
-        # scalar Timestamp on rhs
-        maxa = df['A'].max()
-        tm.assertIsInstance(maxa, Timestamp)
-
-        resultb = df['A'] - df['A'].max()
-        self.assertEqual(resultb.dtype, 'timedelta64[ns]')
-
-        # timestamp on lhs
-        result = resultb + df['A']
-        values = [Timestamp('20111230'), Timestamp('20120101'),
-                  Timestamp('20120103')]
-        expected = Series(values, name='A')
-        assert_series_equal(result, expected)
-
-        # datetimes on rhs
-        result = df['A'] - datetime(2001, 1, 1)
-        expected = Series(
-            [timedelta(days=4017 + i) for i in range(3)], name='A')
-        assert_series_equal(result, expected)
-        self.assertEqual(result.dtype, 'm8[ns]')
-
-        d = datetime(2001, 1, 1, 3, 4)
-        resulta = df['A'] - d
-        self.assertEqual(resulta.dtype, 'm8[ns]')
-
-        # roundtrip
-        resultb = resulta + d
-        assert_series_equal(df['A'], resultb)
-
-        # timedeltas on rhs
-        td = timedelta(days=1)
-        resulta = df['A'] + td
-        resultb = resulta - td
-        assert_series_equal(resultb, df['A'])
-        self.assertEqual(resultb.dtype, 'M8[ns]')
-
-        # roundtrip
-        td = timedelta(minutes=5, seconds=3)
-        resulta = df['A'] + td
-        resultb = resulta - td
-        assert_series_equal(df['A'], resultb)
-        self.assertEqual(resultb.dtype, 'M8[ns]')
-
-        # inplace
-        value = rs[2] +
            np.timedelta64(timedelta(minutes=5, seconds=1))
-        rs[2] += np.timedelta64(timedelta(minutes=5, seconds=1))
-        self.assertEqual(rs[2], value)
-
-    def test_timedeltas_with_DateOffset(self):
-
-        # GH 4532
-        # operate with pd.offsets
-        s = Series([Timestamp('20130101 9:01'), Timestamp('20130101 9:02')])
-
-        result = s + pd.offsets.Second(5)
-        result2 = pd.offsets.Second(5) + s
-        expected = Series([Timestamp('20130101 9:01:05'), Timestamp(
-            '20130101 9:02:05')])
-        assert_series_equal(result, expected)
-        assert_series_equal(result2, expected)
-
-        result = s - pd.offsets.Second(5)
-        result2 = -pd.offsets.Second(5) + s
-        expected = Series([Timestamp('20130101 9:00:55'), Timestamp(
-            '20130101 9:01:55')])
-        assert_series_equal(result, expected)
-        assert_series_equal(result2, expected)
-
-        result = s + pd.offsets.Milli(5)
-        result2 = pd.offsets.Milli(5) + s
-        expected = Series([Timestamp('20130101 9:01:00.005'), Timestamp(
-            '20130101 9:02:00.005')])
-        assert_series_equal(result, expected)
-        assert_series_equal(result2, expected)
-
-        result = s + pd.offsets.Minute(5) + pd.offsets.Milli(5)
-        expected = Series([Timestamp('20130101 9:06:00.005'), Timestamp(
-            '20130101 9:07:00.005')])
-        assert_series_equal(result, expected)
-
-        # operate with np.timedelta64 correctly
-        result = s + np.timedelta64(1, 's')
-        result2 = np.timedelta64(1, 's') + s
-        expected = Series([Timestamp('20130101 9:01:01'), Timestamp(
-            '20130101 9:02:01')])
-        assert_series_equal(result, expected)
-        assert_series_equal(result2, expected)
-
-        result = s + np.timedelta64(5, 'ms')
-        result2 = np.timedelta64(5, 'ms') + s
-        expected = Series([Timestamp('20130101 9:01:00.005'), Timestamp(
-            '20130101 9:02:00.005')])
-        assert_series_equal(result, expected)
-        assert_series_equal(result2, expected)
-
-        # valid DateOffsets
-        for do in ['Hour', 'Minute', 'Second', 'Day', 'Micro', 'Milli',
-                   'Nano']:
-            op = getattr(pd.offsets, do)
-            s + op(5)
-            op(5) + s
-
-    def test_timedelta_series_ops(self):
-        # GH11925
-
-        s =
            Series(timedelta_range('1 day', periods=3))
-        ts = Timestamp('2012-01-01')
-        expected = Series(date_range('2012-01-02', periods=3))
-        assert_series_equal(ts + s, expected)
-        assert_series_equal(s + ts, expected)
-
-        expected2 = Series(date_range('2011-12-31', periods=3, freq='-1D'))
-        assert_series_equal(ts - s, expected2)
-        assert_series_equal(ts + (-s), expected2)
-
-    def test_timedelta64_operations_with_DateOffset(self):
-        # GH 10699
-        td = Series([timedelta(minutes=5, seconds=3)] * 3)
-        result = td + pd.offsets.Minute(1)
-        expected = Series([timedelta(minutes=6, seconds=3)] * 3)
-        assert_series_equal(result, expected)
-
-        result = td - pd.offsets.Minute(1)
-        expected = Series([timedelta(minutes=4, seconds=3)] * 3)
-        assert_series_equal(result, expected)
-
-        result = td + Series([pd.offsets.Minute(1), pd.offsets.Second(3),
-                              pd.offsets.Hour(2)])
-        expected = Series([timedelta(minutes=6, seconds=3), timedelta(
-            minutes=5, seconds=6), timedelta(hours=2, minutes=5, seconds=3)])
-        assert_series_equal(result, expected)
-
-        result = td + pd.offsets.Minute(1) + pd.offsets.Second(12)
-        expected = Series([timedelta(minutes=6, seconds=15)] * 3)
-        assert_series_equal(result, expected)
-
-        # valid DateOffsets
-        for do in ['Hour', 'Minute', 'Second', 'Day', 'Micro', 'Milli',
-                   'Nano']:
-            op = getattr(pd.offsets, do)
-            td + op(5)
-            op(5) + td
-            td - op(5)
-            op(5) - td
-
-    def test_timedelta64_operations_with_timedeltas(self):
-
-        # td operate with td
-        td1 = Series([timedelta(minutes=5, seconds=3)] * 3)
-        td2 = timedelta(minutes=5, seconds=4)
-        result = td1 - td2
-        expected = Series([timedelta(seconds=0)] * 3) - Series([timedelta(
-            seconds=1)] * 3)
-        self.assertEqual(result.dtype, 'm8[ns]')
-        assert_series_equal(result, expected)
-
-        result2 = td2 - td1
-        expected = (Series([timedelta(seconds=1)] * 3) - Series([timedelta(
-            seconds=0)] * 3))
-        assert_series_equal(result2, expected)
-
-        # roundtrip
-        assert_series_equal(result + td2, td1)
-
-        # Now again, using
pd.to_timedelta, which should build
-        # a Series or a scalar, depending on input.
-        td1 = Series(pd.to_timedelta(['00:05:03'] * 3))
-        td2 = pd.to_timedelta('00:05:04')
-        result = td1 - td2
-        expected = Series([timedelta(seconds=0)] * 3) - Series([timedelta(
-            seconds=1)] * 3)
-        self.assertEqual(result.dtype, 'm8[ns]')
-        assert_series_equal(result, expected)
-
-        result2 = td2 - td1
-        expected = (Series([timedelta(seconds=1)] * 3) - Series([timedelta(
-            seconds=0)] * 3))
-        assert_series_equal(result2, expected)
-
-        # roundtrip
-        assert_series_equal(result + td2, td1)
-
-    def test_timedelta64_operations_with_integers(self):
-
-        # GH 4521
-        # divide/multiply by integers
-        startdate = Series(date_range('2013-01-01', '2013-01-03'))
-        enddate = Series(date_range('2013-03-01', '2013-03-03'))
-
-        s1 = enddate - startdate
-        s1[2] = np.nan
-        s2 = Series([2, 3, 4])
-        expected = Series(s1.values.astype(np.int64) / s2, dtype='m8[ns]')
-        expected[2] = np.nan
-        result = s1 / s2
-        assert_series_equal(result, expected)
-
-        s2 = Series([20, 30, 40])
-        expected = Series(s1.values.astype(np.int64) / s2, dtype='m8[ns]')
-        expected[2] = np.nan
-        result = s1 / s2
-        assert_series_equal(result, expected)
-
-        result = s1 / 2
-        expected = Series(s1.values.astype(np.int64) / 2, dtype='m8[ns]')
-        expected[2] = np.nan
-        assert_series_equal(result, expected)
-
-        s2 = Series([20, 30, 40])
-        expected = Series(s1.values.astype(np.int64) * s2, dtype='m8[ns]')
-        expected[2] = np.nan
-        result = s1 * s2
-        assert_series_equal(result, expected)
-
-        for dtype in ['int32', 'int16', 'uint32', 'uint64', 'uint32', 'uint16',
-                      'uint8']:
-            s2 = Series([20, 30, 40], dtype=dtype)
-            expected = Series(
-                s1.values.astype(np.int64) * s2.astype(np.int64),
-                dtype='m8[ns]')
-            expected[2] = np.nan
-            result = s1 * s2
-            assert_series_equal(result, expected)
-
-        result = s1 * 2
-        expected = Series(s1.values.astype(np.int64) * 2, dtype='m8[ns]')
-        expected[2] = np.nan
-        assert_series_equal(result, expected)
-
-        result = s1
* -1 - expected = Series(s1.values.astype(np.int64) * -1, dtype='m8[ns]') - expected[2] = np.nan - assert_series_equal(result, expected) - - # invalid ops - assert_series_equal(s1 / s2.astype(float), - Series([Timedelta('2 days 22:48:00'), Timedelta( - '1 days 23:12:00'), Timedelta('NaT')])) - assert_series_equal(s1 / 2.0, - Series([Timedelta('29 days 12:00:00'), Timedelta( - '29 days 12:00:00'), Timedelta('NaT')])) - - for op in ['__add__', '__sub__']: - sop = getattr(s1, op, None) - if sop is not None: - self.assertRaises(TypeError, sop, 1) - self.assertRaises(TypeError, sop, s2.values) - - def test_timedelta64_conversions(self): - startdate = Series(date_range('2013-01-01', '2013-01-03')) - enddate = Series(date_range('2013-03-01', '2013-03-03')) - - s1 = enddate - startdate - s1[2] = np.nan - - for m in [1, 3, 10]: - for unit in ['D', 'h', 'm', 's', 'ms', 'us', 'ns']: - - # op - expected = s1.apply(lambda x: x / np.timedelta64(m, unit)) - result = s1 / np.timedelta64(m, unit) - assert_series_equal(result, expected) - - if m == 1 and unit != 'ns': - - # astype - result = s1.astype("timedelta64[{0}]".format(unit)) - assert_series_equal(result, expected) - - # reverse op - expected = s1.apply( - lambda x: Timedelta(np.timedelta64(m, unit)) / x) - result = np.timedelta64(m, unit) / s1 - - # astype - s = Series(date_range('20130101', periods=3)) - result = s.astype(object) - self.assertIsInstance(result.iloc[0], datetime) - self.assertTrue(result.dtype == np.object_) - - result = s1.astype(object) - self.assertIsInstance(result.iloc[0], timedelta) - self.assertTrue(result.dtype == np.object_) - - def test_timedelta64_equal_timedelta_supported_ops(self): - ser = Series([Timestamp('20130301'), Timestamp('20130228 23:00:00'), - Timestamp('20130228 22:00:00'), Timestamp( - '20130228 21:00:00')]) - - intervals = 'D', 'h', 'm', 's', 'us' - - # TODO: unused - # npy16_mappings = {'D': 24 * 60 * 60 * 1000000, - # 'h': 60 * 60 * 1000000, - # 'm': 60 * 1000000, - # 's': 
1000000, - # 'us': 1} - - def timedelta64(*args): - return sum(starmap(np.timedelta64, zip(args, intervals))) - - for op, d, h, m, s, us in product([operator.add, operator.sub], - *([range(2)] * 5)): - nptd = timedelta64(d, h, m, s, us) - pytd = timedelta(days=d, hours=h, minutes=m, seconds=s, - microseconds=us) - lhs = op(ser, nptd) - rhs = op(ser, pytd) - - try: - assert_series_equal(lhs, rhs) - except: - raise AssertionError( - "invalid comparsion [op->{0},d->{1},h->{2},m->{3}," - "s->{4},us->{5}]\n{6}\n{7}\n".format(op, d, h, m, s, - us, lhs, rhs)) - - def test_timedelta_assignment(self): - # GH 8209 - s = Series([]) - s.loc['B'] = timedelta(1) - tm.assert_series_equal(s, Series(Timedelta('1 days'), index=['B'])) - - s = s.reindex(s.index.insert(0, 'A')) - tm.assert_series_equal(s, Series( - [np.nan, Timedelta('1 days')], index=['A', 'B'])) - - result = s.fillna(timedelta(1)) - expected = Series(Timedelta('1 days'), index=['A', 'B']) - tm.assert_series_equal(result, expected) - - s.loc['A'] = timedelta(1) - tm.assert_series_equal(s, expected) - - def test_operators_datetimelike(self): - def run_ops(ops, get_ser, test_ser): - - # check that we are getting a TypeError - # with 'operate' (from core/ops.py) for the ops that are not - # defined - for op_str in ops: - op = getattr(get_ser, op_str, None) - with tm.assertRaisesRegexp(TypeError, 'operate'): - op(test_ser) - - # ## timedelta64 ### - td1 = Series([timedelta(minutes=5, seconds=3)] * 3) - td1.iloc[2] = np.nan - td2 = timedelta(minutes=5, seconds=4) - ops = ['__mul__', '__floordiv__', '__pow__', '__rmul__', - '__rfloordiv__', '__rpow__'] - run_ops(ops, td1, td2) - td1 + td2 - td2 + td1 - td1 - td2 - td2 - td1 - td1 / td2 - td2 / td1 - - # ## datetime64 ### - dt1 = Series([Timestamp('20111230'), Timestamp('20120101'), Timestamp( - '20120103')]) - dt1.iloc[2] = np.nan - dt2 = Series([Timestamp('20111231'), Timestamp('20120102'), Timestamp( - '20120104')]) - ops = ['__add__', '__mul__', '__floordiv__', 
'__truediv__', '__div__', - '__pow__', '__radd__', '__rmul__', '__rfloordiv__', - '__rtruediv__', '__rdiv__', '__rpow__'] - run_ops(ops, dt1, dt2) - dt1 - dt2 - dt2 - dt1 - - # ## datetime64 with timetimedelta ### - ops = ['__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__', - '__rmul__', '__rfloordiv__', '__rtruediv__', '__rdiv__', - '__rpow__'] - run_ops(ops, dt1, td1) - dt1 + td1 - td1 + dt1 - dt1 - td1 - # TODO: Decide if this ought to work. - # td1 - dt1 - - # ## timetimedelta with datetime64 ### - ops = ['__sub__', '__mul__', '__floordiv__', '__truediv__', '__div__', - '__pow__', '__rmul__', '__rfloordiv__', '__rtruediv__', - '__rdiv__', '__rpow__'] - run_ops(ops, td1, dt1) - td1 + dt1 - dt1 + td1 - - # 8260, 10763 - # datetime64 with tz - ops = ['__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__', - '__rmul__', '__rfloordiv__', '__rtruediv__', '__rdiv__', - '__rpow__'] - dt1 = Series( - date_range('2000-01-01 09:00:00', periods=5, - tz='US/Eastern'), name='foo') - dt2 = dt1.copy() - dt2.iloc[2] = np.nan - td1 = Series(timedelta_range('1 days 1 min', periods=5, freq='H')) - td2 = td1.copy() - td2.iloc[1] = np.nan - run_ops(ops, dt1, td1) - - result = dt1 + td1[0] - expected = ( - dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - - result = dt2 + td2[0] - expected = ( - dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - - # odd numpy behavior with scalar timedeltas - if not _np_version_under1p8: - result = td1[0] + dt1 - expected = ( - dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - - result = td2[0] + dt2 - expected = ( - dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - - result = dt1 - td1[0] - expected = ( - dt1.dt.tz_localize(None) - td1[0]).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - 
self.assertRaises(TypeError, lambda: td1[0] - dt1) - - result = dt2 - td2[0] - expected = ( - dt2.dt.tz_localize(None) - td2[0]).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - self.assertRaises(TypeError, lambda: td2[0] - dt2) - - result = dt1 + td1 - expected = ( - dt1.dt.tz_localize(None) + td1).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - - result = dt2 + td2 - expected = ( - dt2.dt.tz_localize(None) + td2).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - - result = dt1 - td1 - expected = ( - dt1.dt.tz_localize(None) - td1).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - - result = dt2 - td2 - expected = ( - dt2.dt.tz_localize(None) - td2).dt.tz_localize('US/Eastern') - assert_series_equal(result, expected) - - self.assertRaises(TypeError, lambda: td1 - dt1) - self.assertRaises(TypeError, lambda: td2 - dt2) - - def test_ops_nat(self): - # GH 11349 - timedelta_series = Series([NaT, Timedelta('1s')]) - datetime_series = Series([NaT, Timestamp('19900315')]) - nat_series_dtype_timedelta = Series( - [NaT, NaT], dtype='timedelta64[ns]') - nat_series_dtype_timestamp = Series([NaT, NaT], dtype='datetime64[ns]') - single_nat_dtype_datetime = Series([NaT], dtype='datetime64[ns]') - single_nat_dtype_timedelta = Series([NaT], dtype='timedelta64[ns]') - - # subtraction - assert_series_equal(timedelta_series - NaT, nat_series_dtype_timedelta) - assert_series_equal(-NaT + timedelta_series, - nat_series_dtype_timedelta) - - assert_series_equal(timedelta_series - single_nat_dtype_timedelta, - nat_series_dtype_timedelta) - assert_series_equal(-single_nat_dtype_timedelta + timedelta_series, - nat_series_dtype_timedelta) - - assert_series_equal(datetime_series - NaT, nat_series_dtype_timestamp) - assert_series_equal(-NaT + datetime_series, nat_series_dtype_timestamp) - - assert_series_equal(datetime_series - single_nat_dtype_datetime, - nat_series_dtype_timedelta) - with 
tm.assertRaises(TypeError): - -single_nat_dtype_datetime + datetime_series - - assert_series_equal(datetime_series - single_nat_dtype_timedelta, - nat_series_dtype_timestamp) - assert_series_equal(-single_nat_dtype_timedelta + datetime_series, - nat_series_dtype_timestamp) - - # without a Series wrapping the NaT, it is ambiguous - # whether it is a datetime64 or timedelta64 - # defaults to interpreting it as timedelta64 - assert_series_equal(nat_series_dtype_timestamp - NaT, - nat_series_dtype_timestamp) - assert_series_equal(-NaT + nat_series_dtype_timestamp, - nat_series_dtype_timestamp) - - assert_series_equal(nat_series_dtype_timestamp - - single_nat_dtype_datetime, - nat_series_dtype_timedelta) - with tm.assertRaises(TypeError): - -single_nat_dtype_datetime + nat_series_dtype_timestamp - - assert_series_equal(nat_series_dtype_timestamp - - single_nat_dtype_timedelta, - nat_series_dtype_timestamp) - assert_series_equal(-single_nat_dtype_timedelta + - nat_series_dtype_timestamp, - nat_series_dtype_timestamp) - - with tm.assertRaises(TypeError): - timedelta_series - single_nat_dtype_datetime - - # addition - assert_series_equal(nat_series_dtype_timestamp + NaT, - nat_series_dtype_timestamp) - assert_series_equal(NaT + nat_series_dtype_timestamp, - nat_series_dtype_timestamp) - - assert_series_equal(nat_series_dtype_timestamp + - single_nat_dtype_timedelta, - nat_series_dtype_timestamp) - assert_series_equal(single_nat_dtype_timedelta + - nat_series_dtype_timestamp, - nat_series_dtype_timestamp) - - assert_series_equal(nat_series_dtype_timedelta + NaT, - nat_series_dtype_timedelta) - assert_series_equal(NaT + nat_series_dtype_timedelta, - nat_series_dtype_timedelta) - - assert_series_equal(nat_series_dtype_timedelta + - single_nat_dtype_timedelta, - nat_series_dtype_timedelta) - assert_series_equal(single_nat_dtype_timedelta + - nat_series_dtype_timedelta, - nat_series_dtype_timedelta) - - assert_series_equal(timedelta_series + NaT, nat_series_dtype_timedelta) - 
assert_series_equal(NaT + timedelta_series, nat_series_dtype_timedelta) - - assert_series_equal(timedelta_series + single_nat_dtype_timedelta, - nat_series_dtype_timedelta) - assert_series_equal(single_nat_dtype_timedelta + timedelta_series, - nat_series_dtype_timedelta) - - assert_series_equal(nat_series_dtype_timestamp + NaT, - nat_series_dtype_timestamp) - assert_series_equal(NaT + nat_series_dtype_timestamp, - nat_series_dtype_timestamp) - - assert_series_equal(nat_series_dtype_timestamp + - single_nat_dtype_timedelta, - nat_series_dtype_timestamp) - assert_series_equal(single_nat_dtype_timedelta + - nat_series_dtype_timestamp, - nat_series_dtype_timestamp) - - assert_series_equal(nat_series_dtype_timedelta + NaT, - nat_series_dtype_timedelta) - assert_series_equal(NaT + nat_series_dtype_timedelta, - nat_series_dtype_timedelta) - - assert_series_equal(nat_series_dtype_timedelta + - single_nat_dtype_timedelta, - nat_series_dtype_timedelta) - assert_series_equal(single_nat_dtype_timedelta + - nat_series_dtype_timedelta, - nat_series_dtype_timedelta) - - assert_series_equal(nat_series_dtype_timedelta + - single_nat_dtype_datetime, - nat_series_dtype_timestamp) - assert_series_equal(single_nat_dtype_datetime + - nat_series_dtype_timedelta, - nat_series_dtype_timestamp) - - # multiplication - assert_series_equal(nat_series_dtype_timedelta * 1.0, - nat_series_dtype_timedelta) - assert_series_equal(1.0 * nat_series_dtype_timedelta, - nat_series_dtype_timedelta) - - assert_series_equal(timedelta_series * 1, timedelta_series) - assert_series_equal(1 * timedelta_series, timedelta_series) - - assert_series_equal(timedelta_series * 1.5, - Series([NaT, Timedelta('1.5s')])) - assert_series_equal(1.5 * timedelta_series, - Series([NaT, Timedelta('1.5s')])) - - assert_series_equal(timedelta_series * nan, nat_series_dtype_timedelta) - assert_series_equal(nan * timedelta_series, nat_series_dtype_timedelta) - - with tm.assertRaises(TypeError): - datetime_series * 1 - with 
tm.assertRaises(TypeError): - nat_series_dtype_timestamp * 1 - with tm.assertRaises(TypeError): - datetime_series * 1.0 - with tm.assertRaises(TypeError): - nat_series_dtype_timestamp * 1.0 - - # division - assert_series_equal(timedelta_series / 2, - Series([NaT, Timedelta('0.5s')])) - assert_series_equal(timedelta_series / 2.0, - Series([NaT, Timedelta('0.5s')])) - assert_series_equal(timedelta_series / nan, nat_series_dtype_timedelta) - with tm.assertRaises(TypeError): - nat_series_dtype_timestamp / 1.0 - with tm.assertRaises(TypeError): - nat_series_dtype_timestamp / 1 - - def test_ops_datetimelike_align(self): - # GH 7500 - # datetimelike ops need to align - dt = Series(date_range('2012-1-1', periods=3, freq='D')) - dt.iloc[2] = np.nan - dt2 = dt[::-1] - - expected = Series([timedelta(0), timedelta(0), pd.NaT]) - # name is reset - result = dt2 - dt - assert_series_equal(result, expected) - - expected = Series(expected, name=0) - result = (dt2.to_frame() - dt.to_frame())[0] - assert_series_equal(result, expected) - - def test_timedelta64_functions(self): - from pandas import date_range - - # index min/max - td = Series(date_range('2012-1-1', periods=3, freq='D')) - \ - Timestamp('20120101') - - result = td.idxmin() - self.assertEqual(result, 0) - - result = td.idxmax() - self.assertEqual(result, 2) - - # GH 2982 - # with NaT - td[0] = np.nan - - result = td.idxmin() - self.assertEqual(result, 1) - - result = td.idxmax() - self.assertEqual(result, 2) - - # abs - s1 = Series(date_range('20120101', periods=3)) - s2 = Series(date_range('20120102', periods=3)) - expected = Series(s2 - s1) - - # this fails as numpy returns timedelta64[us] - # result = np.abs(s1-s2) - # assert_frame_equal(result,expected) - - result = (s1 - s2).abs() - assert_series_equal(result, expected) - - # max/min - result = td.max() - expected = Timedelta('2 days') - self.assertEqual(result, expected) - - result = td.min() - expected = Timedelta('1 days') - self.assertEqual(result, expected) - - 
def test_ops_consistency_on_empty(self): - - # GH 7869 - # consistency on empty - - # float - result = Series(dtype=float).sum() - self.assertEqual(result, 0) - - result = Series(dtype=float).mean() - self.assertTrue(isnull(result)) - - result = Series(dtype=float).median() - self.assertTrue(isnull(result)) - - # timedelta64[ns] - result = Series(dtype='m8[ns]').sum() - self.assertEqual(result, Timedelta(0)) - - result = Series(dtype='m8[ns]').mean() - self.assertTrue(result is pd.NaT) - - result = Series(dtype='m8[ns]').median() - self.assertTrue(result is pd.NaT) - - def test_timedelta_fillna(self): - # GH 3371 - s = Series([Timestamp('20130101'), Timestamp('20130101'), Timestamp( - '20130102'), Timestamp('20130103 9:01:01')]) - td = s.diff() - - # reg fillna - result = td.fillna(0) - expected = Series([timedelta(0), timedelta(0), timedelta(1), timedelta( - days=1, seconds=9 * 3600 + 60 + 1)]) - assert_series_equal(result, expected) - - # interprested as seconds - result = td.fillna(1) - expected = Series([timedelta(seconds=1), timedelta(0), timedelta(1), - timedelta(days=1, seconds=9 * 3600 + 60 + 1)]) - assert_series_equal(result, expected) - - result = td.fillna(timedelta(days=1, seconds=1)) - expected = Series([timedelta(days=1, seconds=1), timedelta( - 0), timedelta(1), timedelta(days=1, seconds=9 * 3600 + 60 + 1)]) - assert_series_equal(result, expected) - - result = td.fillna(np.timedelta64(int(1e9))) - expected = Series([timedelta(seconds=1), timedelta(0), timedelta(1), - timedelta(days=1, seconds=9 * 3600 + 60 + 1)]) - assert_series_equal(result, expected) - - from pandas import tslib - result = td.fillna(tslib.NaT) - expected = Series([tslib.NaT, timedelta(0), timedelta(1), - timedelta(days=1, seconds=9 * 3600 + 60 + 1)], - dtype='m8[ns]') - assert_series_equal(result, expected) - - # ffill - td[2] = np.nan - result = td.ffill() - expected = td.fillna(0) - expected[0] = np.nan - assert_series_equal(result, expected) - - # bfill - td[2] = np.nan - result 
= td.bfill() - expected = td.fillna(0) - expected[2] = timedelta(days=1, seconds=9 * 3600 + 60 + 1) - assert_series_equal(result, expected) - - def test_datetime64_fillna(self): - - s = Series([Timestamp('20130101'), Timestamp('20130101'), Timestamp( - '20130102'), Timestamp('20130103 9:01:01')]) - s[2] = np.nan - - # reg fillna - result = s.fillna(Timestamp('20130104')) - expected = Series([Timestamp('20130101'), Timestamp( - '20130101'), Timestamp('20130104'), Timestamp('20130103 9:01:01')]) - assert_series_equal(result, expected) - - from pandas import tslib - result = s.fillna(tslib.NaT) - expected = s - assert_series_equal(result, expected) - - # ffill - result = s.ffill() - expected = Series([Timestamp('20130101'), Timestamp( - '20130101'), Timestamp('20130101'), Timestamp('20130103 9:01:01')]) - assert_series_equal(result, expected) - - # bfill - result = s.bfill() - expected = Series([Timestamp('20130101'), Timestamp('20130101'), - Timestamp('20130103 9:01:01'), Timestamp( - '20130103 9:01:01')]) - assert_series_equal(result, expected) - - # GH 6587 - # make sure that we are treating as integer when filling - # this also tests inference of a datetime-like with NaT's - s = Series([pd.NaT, pd.NaT, '2013-08-05 15:30:00.000001']) - expected = Series( - ['2013-08-05 15:30:00.000001', '2013-08-05 15:30:00.000001', - '2013-08-05 15:30:00.000001'], dtype='M8[ns]') - result = s.fillna(method='backfill') - assert_series_equal(result, expected) - - def test_datetime64_tz_fillna(self): - for tz in ['US/Eastern', 'Asia/Tokyo']: - # DatetimeBlock - s = Series([Timestamp('2011-01-01 10:00'), pd.NaT, Timestamp( - '2011-01-03 10:00'), pd.NaT]) - result = s.fillna(pd.Timestamp('2011-01-02 10:00')) - expected = Series([Timestamp('2011-01-01 10:00'), Timestamp( - '2011-01-02 10:00'), Timestamp('2011-01-03 10:00'), Timestamp( - '2011-01-02 10:00')]) - self.assert_series_equal(expected, result) - - result = s.fillna(pd.Timestamp('2011-01-02 10:00', tz=tz)) - expected = 
Series([Timestamp('2011-01-01 10:00'), Timestamp( - '2011-01-02 10:00', tz=tz), Timestamp('2011-01-03 10:00'), - Timestamp('2011-01-02 10:00', tz=tz)]) - self.assert_series_equal(expected, result) - - result = s.fillna('AAA') - expected = Series([Timestamp('2011-01-01 10:00'), 'AAA', - Timestamp('2011-01-03 10:00'), 'AAA'], - dtype=object) - self.assert_series_equal(expected, result) - - result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz), - 3: pd.Timestamp('2011-01-04 10:00')}) - expected = Series([Timestamp('2011-01-01 10:00'), Timestamp( - '2011-01-02 10:00', tz=tz), Timestamp('2011-01-03 10:00'), - Timestamp('2011-01-04 10:00')]) - self.assert_series_equal(expected, result) - - result = s.fillna({1: pd.Timestamp('2011-01-02 10:00'), - 3: pd.Timestamp('2011-01-04 10:00')}) - expected = Series([Timestamp('2011-01-01 10:00'), Timestamp( - '2011-01-02 10:00'), Timestamp('2011-01-03 10:00'), Timestamp( - '2011-01-04 10:00')]) - self.assert_series_equal(expected, result) - - # DatetimeBlockTZ - idx = pd.DatetimeIndex(['2011-01-01 10:00', pd.NaT, - '2011-01-03 10:00', pd.NaT], tz=tz) - s = pd.Series(idx) - result = s.fillna(pd.Timestamp('2011-01-02 10:00')) - expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp( - '2011-01-02 10:00'), Timestamp('2011-01-03 10:00', tz=tz), - Timestamp('2011-01-02 10:00')]) - self.assert_series_equal(expected, result) - - result = s.fillna(pd.Timestamp('2011-01-02 10:00', tz=tz)) - idx = pd.DatetimeIndex(['2011-01-01 10:00', '2011-01-02 10:00', - '2011-01-03 10:00', '2011-01-02 10:00'], - tz=tz) - expected = Series(idx) - self.assert_series_equal(expected, result) - - result = s.fillna(pd.Timestamp( - '2011-01-02 10:00', tz=tz).to_pydatetime()) - idx = pd.DatetimeIndex(['2011-01-01 10:00', '2011-01-02 10:00', - '2011-01-03 10:00', '2011-01-02 10:00'], - tz=tz) - expected = Series(idx) - self.assert_series_equal(expected, result) - - result = s.fillna('AAA') - expected = Series([Timestamp('2011-01-01 10:00', 
tz=tz), 'AAA', - Timestamp('2011-01-03 10:00', tz=tz), 'AAA'], - dtype=object) - self.assert_series_equal(expected, result) - - result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz), - 3: pd.Timestamp('2011-01-04 10:00')}) - expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp( - '2011-01-02 10:00', tz=tz), Timestamp( - '2011-01-03 10:00', tz=tz), Timestamp('2011-01-04 10:00')]) - self.assert_series_equal(expected, result) - - result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz), - 3: pd.Timestamp('2011-01-04 10:00', tz=tz)}) - expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp( - '2011-01-02 10:00', tz=tz), Timestamp( - '2011-01-03 10:00', tz=tz), Timestamp('2011-01-04 10:00', - tz=tz)]) - self.assert_series_equal(expected, result) - - # filling with a naive/other zone, coerce to object - result = s.fillna(Timestamp('20130101')) - expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp( - '2013-01-01'), Timestamp('2011-01-03 10:00', tz=tz), Timestamp( - '2013-01-01')]) - self.assert_series_equal(expected, result) - - result = s.fillna(Timestamp('20130101', tz='US/Pacific')) - expected = Series([Timestamp('2011-01-01 10:00', tz=tz), - Timestamp('2013-01-01', tz='US/Pacific'), - Timestamp('2011-01-03 10:00', tz=tz), - Timestamp('2013-01-01', tz='US/Pacific')]) - self.assert_series_equal(expected, result) - - def test_fillna_int(self): - s = Series(np.random.randint(-100, 100, 50)) - s.fillna(method='ffill', inplace=True) - assert_series_equal(s.fillna(method='ffill', inplace=False), s) - - def test_fillna_raise(self): - s = Series(np.random.randint(-100, 100, 50)) - self.assertRaises(TypeError, s.fillna, [1, 2]) - self.assertRaises(TypeError, s.fillna, (1, 2)) - - def test_raise_on_info(self): - s = Series(np.random.randn(10)) - with tm.assertRaises(AttributeError): - s.info() - - def test_isnull_for_inf(self): - s = Series(['a', np.inf, np.nan, 1.0]) - with pd.option_context('mode.use_inf_as_null', True): - 
r = s.isnull() - dr = s.dropna() - e = Series([False, True, True, False]) - de = Series(['a', 1.0], index=[0, 3]) - tm.assert_series_equal(r, e) - tm.assert_series_equal(dr, de) - -# TimeSeries-specific - - def test_fillna(self): - ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5)) - - self.assert_numpy_array_equal(ts, ts.fillna(method='ffill')) - - ts[2] = np.NaN - - self.assert_numpy_array_equal(ts.fillna(method='ffill'), - [0., 1., 1., 3., 4.]) - self.assert_numpy_array_equal(ts.fillna(method='backfill'), - [0., 1., 3., 3., 4.]) - - self.assert_numpy_array_equal(ts.fillna(value=5), [0., 1., 5., 3., 4.]) - - self.assertRaises(ValueError, ts.fillna) - self.assertRaises(ValueError, self.ts.fillna, value=0, method='ffill') - - # GH 5703 - s1 = Series([np.nan]) - s2 = Series([1]) - result = s1.fillna(s2) - expected = Series([1.]) - assert_series_equal(result, expected) - result = s1.fillna({}) - assert_series_equal(result, s1) - result = s1.fillna(Series(())) - assert_series_equal(result, s1) - result = s2.fillna(s1) - assert_series_equal(result, s2) - result = s1.fillna({0: 1}) - assert_series_equal(result, expected) - result = s1.fillna({1: 1}) - assert_series_equal(result, Series([np.nan])) - result = s1.fillna({0: 1, 1: 1}) - assert_series_equal(result, expected) - result = s1.fillna(Series({0: 1, 1: 1})) - assert_series_equal(result, expected) - result = s1.fillna(Series({0: 1, 1: 1}, index=[4, 5])) - assert_series_equal(result, s1) - - s1 = Series([0, 1, 2], list('abc')) - s2 = Series([0, np.nan, 2], list('bac')) - result = s2.fillna(s1) - expected = Series([0, 0, 2.], list('bac')) - assert_series_equal(result, expected) - - # limit - s = Series(np.nan, index=[0, 1, 2]) - result = s.fillna(999, limit=1) - expected = Series([999, np.nan, np.nan], index=[0, 1, 2]) - assert_series_equal(result, expected) - - result = s.fillna(999, limit=2) - expected = Series([999, 999, np.nan], index=[0, 1, 2]) - assert_series_equal(result, expected) - - # GH 9043 - # 
make sure a string representation of int/float values can be filled - # correctly without raising errors or being converted - vals = ['0', '1.5', '-0.3'] - for val in vals: - s = Series([0, 1, np.nan, np.nan, 4], dtype='float64') - result = s.fillna(val) - expected = Series([0, 1, val, val, 4], dtype='object') - assert_series_equal(result, expected) - - def test_fillna_bug(self): - x = Series([nan, 1., nan, 3., nan], ['z', 'a', 'b', 'c', 'd']) - filled = x.fillna(method='ffill') - expected = Series([nan, 1., 1., 3., 3.], x.index) - assert_series_equal(filled, expected) - - filled = x.fillna(method='bfill') - expected = Series([1., 1., 3., 3., nan], x.index) - assert_series_equal(filled, expected) - - def test_fillna_inplace(self): - x = Series([nan, 1., nan, 3., nan], ['z', 'a', 'b', 'c', 'd']) - y = x.copy() - - y.fillna(value=0, inplace=True) - - expected = x.fillna(value=0) - assert_series_equal(y, expected) - - def test_fillna_invalid_method(self): - try: - self.ts.fillna(method='ffil') - except ValueError as inst: - self.assertIn('ffil', str(inst)) - - def test_ffill(self): - ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5)) - ts[2] = np.NaN - assert_series_equal(ts.ffill(), ts.fillna(method='ffill')) - - def test_bfill(self): - ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5)) - ts[2] = np.NaN - assert_series_equal(ts.bfill(), ts.fillna(method='bfill')) - - def test_sub_of_datetime_from_TimeSeries(self): - from pandas.tseries.timedeltas import to_timedelta - from datetime import datetime - a = Timestamp(datetime(1993, 0o1, 0o7, 13, 30, 00)) - b = datetime(1993, 6, 22, 13, 30) - a = Series([a]) - result = to_timedelta(np.abs(a - b)) - self.assertEqual(result.dtype, 'timedelta64[ns]') - - def test_datetime64_with_index(self): - - # arithmetic integer ops with an index - s = Series(np.random.randn(5)) - expected = s - s.index.to_series() - result = s - s.index - assert_series_equal(result, expected) - - # GH 4629 - # arithmetic datetime64 
ops with an index - s = Series(date_range('20130101', periods=5), - index=date_range('20130101', periods=5)) - expected = s - s.index.to_series() - result = s - s.index - assert_series_equal(result, expected) - - result = s - s.index.to_period() - assert_series_equal(result, expected) - - df = DataFrame(np.random.randn(5, 2), - index=date_range('20130101', periods=5)) - df['date'] = Timestamp('20130102') - df['expected'] = df['date'] - df.index.to_series() - df['result'] = df['date'] - df.index - assert_series_equal(df['result'], df['expected'], check_names=False) - - def test_timedelta64_nan(self): - - from pandas import tslib - td = Series([timedelta(days=i) for i in range(10)]) - - # nan ops on timedeltas - td1 = td.copy() - td1[0] = np.nan - self.assertTrue(isnull(td1[0])) - self.assertEqual(td1[0].value, tslib.iNaT) - td1[0] = td[0] - self.assertFalse(isnull(td1[0])) - - td1[1] = tslib.iNaT - self.assertTrue(isnull(td1[1])) - self.assertEqual(td1[1].value, tslib.iNaT) - td1[1] = td[1] - self.assertFalse(isnull(td1[1])) - - td1[2] = tslib.NaT - self.assertTrue(isnull(td1[2])) - self.assertEqual(td1[2].value, tslib.iNaT) - td1[2] = td[2] - self.assertFalse(isnull(td1[2])) - - # boolean setting - # this doesn't work, not sure numpy even supports it - # result = td[(td>np.timedelta64(timedelta(days=3))) & - # td<np.timedelta64(timedelta(days=7)))] = np.nan - # self.assertEqual(isnull(result).sum(), 7) - - # NumPy limitiation =( - - # def test_logical_range_select(self): - # np.random.seed(12345) - # selector = -0.5 <= self.ts <= 0.5 - # expected = (self.ts >= -0.5) & (self.ts <= 0.5) - # assert_series_equal(selector, expected) - - def test_operators_na_handling(self): - from decimal import Decimal - from datetime import date - s = Series([Decimal('1.3'), Decimal('2.3')], - index=[date(2012, 1, 1), date(2012, 1, 2)]) - - result = s + s.shift(1) - result2 = s.shift(1) + s - self.assertTrue(isnull(result[0])) - self.assertTrue(isnull(result2[0])) - - s = 
Series(['foo', 'bar', 'baz', np.nan]) - result = 'prefix_' + s - expected = Series(['prefix_foo', 'prefix_bar', 'prefix_baz', np.nan]) - assert_series_equal(result, expected) - - result = s + '_suffix' - expected = Series(['foo_suffix', 'bar_suffix', 'baz_suffix', np.nan]) - assert_series_equal(result, expected) - - def test_object_comparisons(self): - s = Series(['a', 'b', np.nan, 'c', 'a']) - - result = s == 'a' - expected = Series([True, False, False, False, True]) - assert_series_equal(result, expected) - - result = s < 'a' - expected = Series([False, False, False, False, False]) - assert_series_equal(result, expected) - - result = s != 'a' - expected = -(s == 'a') - assert_series_equal(result, expected) - - def test_comparison_tuples(self): - # GH11339 - # comparisons vs tuple - s = Series([(1, 1), (1, 2)]) - - result = s == (1, 2) - expected = Series([False, True]) - assert_series_equal(result, expected) - - result = s != (1, 2) - expected = Series([True, False]) - assert_series_equal(result, expected) - - result = s == (0, 0) - expected = Series([False, False]) - assert_series_equal(result, expected) - - result = s != (0, 0) - expected = Series([True, True]) - assert_series_equal(result, expected) - - s = Series([(1, 1), (1, 1)]) - - result = s == (1, 1) - expected = Series([True, True]) - assert_series_equal(result, expected) - - result = s != (1, 1) - expected = Series([False, False]) - assert_series_equal(result, expected) - - s = Series([frozenset([1]), frozenset([1, 2])]) - - result = s == frozenset([1]) - expected = Series([True, False]) - assert_series_equal(result, expected) - - def test_comparison_operators_with_nas(self): - s = Series(bdate_range('1/1/2000', periods=10), dtype=object) - s[::2] = np.nan - - # test that comparisons work - ops = ['lt', 'le', 'gt', 'ge', 'eq', 'ne'] - for op in ops: - val = s[5] - - f = getattr(operator, op) - result = f(s, val) - - expected = f(s.dropna(), val).reindex(s.index) - - if op == 'ne': - expected = 
expected.fillna(True).astype(bool) - else: - expected = expected.fillna(False).astype(bool) - - assert_series_equal(result, expected) - - # fffffffuuuuuuuuuuuu - # result = f(val, s) - # expected = f(val, s.dropna()).reindex(s.index) - # assert_series_equal(result, expected) - - # boolean &, |, ^ should work with object arrays and propagate NAs - - ops = ['and_', 'or_', 'xor'] - mask = s.isnull() - for bool_op in ops: - f = getattr(operator, bool_op) - - filled = s.fillna(s[0]) - - result = f(s < s[9], s > s[3]) - - expected = f(filled < filled[9], filled > filled[3]) - expected[mask] = False - assert_series_equal(result, expected) - - def test_comparison_object_numeric_nas(self): - s = Series(np.random.randn(10), dtype=object) - shifted = s.shift(2) - - ops = ['lt', 'le', 'gt', 'ge', 'eq', 'ne'] - for op in ops: - f = getattr(operator, op) - - result = f(s, shifted) - expected = f(s.astype(float), shifted.astype(float)) - assert_series_equal(result, expected) - - def test_comparison_invalid(self): - - # GH4968 - # invalid date/int comparisons - s = Series(range(5)) - s2 = Series(date_range('20010101', periods=5)) - - for (x, y) in [(s, s2), (s2, s)]: - self.assertRaises(TypeError, lambda: x == y) - self.assertRaises(TypeError, lambda: x != y) - self.assertRaises(TypeError, lambda: x >= y) - self.assertRaises(TypeError, lambda: x > y) - self.assertRaises(TypeError, lambda: x < y) - self.assertRaises(TypeError, lambda: x <= y) - - def test_more_na_comparisons(self): - left = Series(['a', np.nan, 'c']) - right = Series(['a', np.nan, 'd']) - - result = left == right - expected = Series([True, False, False]) - assert_series_equal(result, expected) - - result = left != right - expected = Series([False, True, True]) - assert_series_equal(result, expected) - - result = left == np.nan - expected = Series([False, False, False]) - assert_series_equal(result, expected) - - result = left != np.nan - expected = Series([True, True, True]) - assert_series_equal(result, expected) 
- - def test_comparison_different_length(self): - a = Series(['a', 'b', 'c']) - b = Series(['b', 'a']) - self.assertRaises(ValueError, a.__lt__, b) - - a = Series([1, 2]) - b = Series([2, 3, 4]) - self.assertRaises(ValueError, a.__eq__, b) - - def test_comparison_label_based(self): - - # GH 4947 - # comparisons should be label based - - a = Series([True, False, True], list('bca')) - b = Series([False, True, False], list('abc')) - - expected = Series([True, False, False], list('bca')) - result = a & b - assert_series_equal(result, expected) - - expected = Series([True, False, True], list('bca')) - result = a | b - assert_series_equal(result, expected) - - expected = Series([False, False, True], list('bca')) - result = a ^ b - assert_series_equal(result, expected) - - # rhs is bigger - a = Series([True, False, True], list('bca')) - b = Series([False, True, False, True], list('abcd')) - - expected = Series([True, False, False], list('bca')) - result = a & b - assert_series_equal(result, expected) - - expected = Series([True, False, True], list('bca')) - result = a | b - assert_series_equal(result, expected) - - # filling - - # vs empty - result = a & Series([]) - expected = Series([False, False, False], list('bca')) - assert_series_equal(result, expected) - - result = a | Series([]) - expected = Series([True, False, True], list('bca')) - assert_series_equal(result, expected) - - # vs non-matching - result = a & Series([1], ['z']) - expected = Series([False, False, False], list('bca')) - assert_series_equal(result, expected) - - result = a | Series([1], ['z']) - expected = Series([True, False, True], list('bca')) - assert_series_equal(result, expected) - - # identity - # we would like s[s|e] == s to hold for any e, whether empty or not - for e in [Series([]), Series([1], ['z']), Series(['z']), - Series(np.nan, b.index), Series(np.nan, a.index)]: - result = a[a | e] - assert_series_equal(result, a[a]) - - # vs scalars - index = list('bca') - t = Series([True, False, 
True]) - - for v in [True, 1, 2]: - result = Series([True, False, True], index=index) | v - expected = Series([True, True, True], index=index) - assert_series_equal(result, expected) - - for v in [np.nan, 'foo']: - self.assertRaises(TypeError, lambda: t | v) - - for v in [False, 0]: - result = Series([True, False, True], index=index) | v - expected = Series([True, False, True], index=index) - assert_series_equal(result, expected) - - for v in [True, 1]: - result = Series([True, False, True], index=index) & v - expected = Series([True, False, True], index=index) - assert_series_equal(result, expected) - - for v in [False, 0]: - result = Series([True, False, True], index=index) & v - expected = Series([False, False, False], index=index) - assert_series_equal(result, expected) - for v in [np.nan]: - self.assertRaises(TypeError, lambda: t & v) - - def test_operators_bitwise(self): - # GH 9016: support bitwise op for integer types - index = list('bca') - - s_tft = Series([True, False, True], index=index) - s_fff = Series([False, False, False], index=index) - s_tff = Series([True, False, False], index=index) - s_empty = Series([]) - - # TODO: unused - # s_0101 = Series([0, 1, 0, 1]) - - s_0123 = Series(range(4), dtype='int64') - s_3333 = Series([3] * 4) - s_4444 = Series([4] * 4) - - res = s_tft & s_empty - expected = s_fff - assert_series_equal(res, expected) - - res = s_tft | s_empty - expected = s_tft - assert_series_equal(res, expected) - - res = s_0123 & s_3333 - expected = Series(range(4), dtype='int64') - assert_series_equal(res, expected) - - res = s_0123 | s_4444 - expected = Series(range(4, 8), dtype='int64') - assert_series_equal(res, expected) - - s_a0b1c0 = Series([1], list('b')) - - res = s_tft & s_a0b1c0 - expected = s_tff - assert_series_equal(res, expected) - - res = s_tft | s_a0b1c0 - expected = s_tft - assert_series_equal(res, expected) - - n0 = 0 - res = s_tft & n0 - expected = s_fff - assert_series_equal(res, expected) - - res = s_0123 & n0 - 
expected = Series([0] * 4) - assert_series_equal(res, expected) - - n1 = 1 - res = s_tft & n1 - expected = s_tft - assert_series_equal(res, expected) - - res = s_0123 & n1 - expected = Series([0, 1, 0, 1]) - assert_series_equal(res, expected) - - s_1111 = Series([1] * 4, dtype='int8') - res = s_0123 & s_1111 - expected = Series([0, 1, 0, 1], dtype='int64') - assert_series_equal(res, expected) - - res = s_0123.astype(np.int16) | s_1111.astype(np.int32) - expected = Series([1, 1, 3, 3], dtype='int32') - assert_series_equal(res, expected) - - self.assertRaises(TypeError, lambda: s_1111 & 'a') - self.assertRaises(TypeError, lambda: s_1111 & ['a', 'b', 'c', 'd']) - self.assertRaises(TypeError, lambda: s_0123 & np.NaN) - self.assertRaises(TypeError, lambda: s_0123 & 3.14) - self.assertRaises(TypeError, lambda: s_0123 & [0.1, 4, 3.14, 2]) - - # s_0123 will be all false now because of reindexing like s_tft - assert_series_equal(s_tft & s_0123, Series([False] * 3, list('bca'))) - # s_tft will be all false now because of reindexing like s_0123 - assert_series_equal(s_0123 & s_tft, Series([False] * 4)) - assert_series_equal(s_0123 & False, Series([False] * 4)) - assert_series_equal(s_0123 ^ False, Series([False, True, True, True])) - assert_series_equal(s_0123 & [False], Series([False] * 4)) - assert_series_equal(s_0123 & (False), Series([False] * 4)) - assert_series_equal(s_0123 & Series([False, np.NaN, False, False]), - Series([False] * 4)) - - s_ftft = Series([False, True, False, True]) - assert_series_equal(s_0123 & Series([0.1, 4, -3.14, 2]), s_ftft) - - s_abNd = Series(['a', 'b', np.NaN, 'd']) - res = s_0123 & s_abNd - expected = s_ftft - assert_series_equal(res, expected) - - def test_between(self): - s = Series(bdate_range('1/1/2000', periods=20).asobject) - s[::2] = np.nan - - result = s[s.between(s[3], s[17])] - expected = s[3:18].dropna() - assert_series_equal(result, expected) - - result = s[s.between(s[3], s[17], inclusive=False)] - expected = s[5:16].dropna() - 
assert_series_equal(result, expected) - - def test_setitem_na(self): - # these induce dtype changes - expected = Series([np.nan, 3, np.nan, 5, np.nan, 7, np.nan, 9, np.nan]) - s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10]) - s[::2] = np.nan - assert_series_equal(s, expected) - - # gets coerced to float - expected = Series([np.nan, 1, np.nan, 0]) - s = Series([True, True, False, False]) - s[::2] = np.nan - assert_series_equal(s, expected) - - expected = Series([np.nan, np.nan, np.nan, np.nan, np.nan, 5, 6, 7, 8, - 9]) - s = Series(np.arange(10)) - s[:5] = np.nan - assert_series_equal(s, expected) - - def test_scalar_na_cmp_corners(self): - s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10]) - - def tester(a, b): - return a & b - - self.assertRaises(TypeError, tester, s, datetime(2005, 1, 1)) - - s = Series([2, 3, 4, 5, 6, 7, 8, 9, datetime(2005, 1, 1)]) - s[::2] = np.nan - - expected = Series(True, index=s.index) - expected[::2] = False - assert_series_equal(tester(s, list(s)), expected) - - d = DataFrame({'A': s}) - # TODO: fix this exception (see GH5035) - # (previously this was a TypeError because series returned - # NotImplemented) - self.assertRaises(ValueError, tester, s, d) - - def test_idxmin(self): - # test idxmin - # _check_stat_op approach cannot be used here because of the isnull check.
- - # add some NaNs - self.series[5:15] = np.NaN - - # skipna or no - self.assertEqual(self.series[self.series.idxmin()], self.series.min()) - self.assertTrue(isnull(self.series.idxmin(skipna=False))) - - # no NaNs - nona = self.series.dropna() - self.assertEqual(nona[nona.idxmin()], nona.min()) - self.assertEqual(nona.index.values.tolist().index(nona.idxmin()), - nona.values.argmin()) - - # all NaNs - allna = self.series * nan - self.assertTrue(isnull(allna.idxmin())) - - # datetime64[ns] - from pandas import date_range - s = Series(date_range('20130102', periods=6)) - result = s.idxmin() - self.assertEqual(result, 0) - - s[0] = np.nan - result = s.idxmin() - self.assertEqual(result, 1) - - def test_idxmax(self): - # test idxmax - # _check_stat_op approach can not be used here because of isnull check. - - # add some NaNs - self.series[5:15] = np.NaN - - # skipna or no - self.assertEqual(self.series[self.series.idxmax()], self.series.max()) - self.assertTrue(isnull(self.series.idxmax(skipna=False))) - - # no NaNs - nona = self.series.dropna() - self.assertEqual(nona[nona.idxmax()], nona.max()) - self.assertEqual(nona.index.values.tolist().index(nona.idxmax()), - nona.values.argmax()) - - # all NaNs - allna = self.series * nan - self.assertTrue(isnull(allna.idxmax())) - - from pandas import date_range - s = Series(date_range('20130102', periods=6)) - result = s.idxmax() - self.assertEqual(result, 5) - - s[5] = np.nan - result = s.idxmax() - self.assertEqual(result, 4) - - # Float64Index - # GH 5914 - s = pd.Series([1, 2, 3], [1.1, 2.1, 3.1]) - result = s.idxmax() - self.assertEqual(result, 3.1) - result = s.idxmin() - self.assertEqual(result, 1.1) - - s = pd.Series(s.index, s.index) - result = s.idxmax() - self.assertEqual(result, 3.1) - result = s.idxmin() - self.assertEqual(result, 1.1) - - def test_ndarray_compat(self): - - # test numpy compat with Series as sub-class of NDFrame - tsdf = DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'], - 
index=date_range('1/1/2000', periods=1000)) - - def f(x): - return x[x.argmax()] - - result = tsdf.apply(f) - expected = tsdf.max() - assert_series_equal(result, expected) - - # .item() - s = Series([1]) - result = s.item() - self.assertEqual(result, 1) - self.assertEqual(s.item(), s.iloc[0]) - - # using an ndarray like function - s = Series(np.random.randn(10)) - result = np.ones_like(s) - expected = Series(1, index=range(10), dtype='float64') - # assert_series_equal(result,expected) - - # ravel - s = Series(np.random.randn(10)) - tm.assert_almost_equal(s.ravel(order='F'), s.values.ravel(order='F')) - - # compress - # GH 6658 - s = Series([0, 1., -1], index=list('abc')) - result = np.compress(s > 0, s) - assert_series_equal(result, Series([1.], index=['b'])) - - result = np.compress(s < -1, s) - # result empty Index(dtype=object) as the same as original - exp = Series([], dtype='float64', index=Index([], dtype='object')) - assert_series_equal(result, exp) - - s = Series([0, 1., -1], index=[.1, .2, .3]) - result = np.compress(s > 0, s) - assert_series_equal(result, Series([1.], index=[.2])) - - result = np.compress(s < -1, s) - # result empty Float64Index as the same as original - exp = Series([], dtype='float64', index=Index([], dtype='float64')) - assert_series_equal(result, exp) - - def test_complexx(self): - - # GH4819 - # complex access for ndarray compat - a = np.arange(5) - b = Series(a + 4j * a) - tm.assert_almost_equal(a, b.real) - tm.assert_almost_equal(4 * a, b.imag) - - b.real = np.arange(5) + 5 - tm.assert_almost_equal(a + 5, b.real) - tm.assert_almost_equal(4 * a, b.imag) - - def test_underlying_data_conversion(self): - - # GH 4080 - df = DataFrame(dict((c, [1, 2, 3]) for c in ['a', 'b', 'c'])) - df.set_index(['a', 'b', 'c'], inplace=True) - s = Series([1], index=[(2, 2, 2)]) - df['val'] = 0 - df - df['val'].update(s) - - expected = DataFrame( - dict(a=[1, 2, 3], b=[1, 2, 3], c=[1, 2, 3], val=[0, 1, 0])) - expected.set_index(['a', 'b', 'c'], 
inplace=True) - tm.assert_frame_equal(df, expected) - - # GH 3970 - # these are chained assignments as well - pd.set_option('chained_assignment', None) - df = DataFrame({"aa": range(5), "bb": [2.2] * 5}) - df["cc"] = 0.0 - - ck = [True] * len(df) - - df["bb"].iloc[0] = .13 - - # TODO: unused - df_tmp = df.iloc[ck] # noqa - - df["bb"].iloc[0] = .15 - self.assertEqual(df['bb'].iloc[0], 0.15) - pd.set_option('chained_assignment', 'raise') - - # GH 3217 - df = DataFrame(dict(a=[1, 3], b=[np.nan, 2])) - df['c'] = np.nan - df['c'].update(pd.Series(['foo'], index=[0])) - - expected = DataFrame(dict(a=[1, 3], b=[np.nan, 2], c=['foo', np.nan])) - tm.assert_frame_equal(df, expected) - - def test_operators_corner(self): - series = self.ts - - empty = Series([], index=Index([])) - - result = series + empty - self.assertTrue(np.isnan(result).all()) - - result = empty + Series([], index=Index([])) - self.assertEqual(len(result), 0) - - # TODO: this returned NotImplemented earlier, what to do? - # deltas = Series([timedelta(1)] * 5, index=np.arange(5)) - # sub_deltas = deltas[::2] - # deltas5 = deltas * 5 - # deltas = deltas + sub_deltas - - # float + int - int_ts = self.ts.astype(int)[:-5] - added = self.ts + int_ts - expected = self.ts.values[:-5] + int_ts.values - self.assert_numpy_array_equal(added[:-5], expected) - - def test_operators_reverse_object(self): - # GH 56 - arr = Series(np.random.randn(10), index=np.arange(10), dtype=object) - - def _check_op(arr, op): - result = op(1., arr) - expected = op(1., arr.astype(float)) - assert_series_equal(result.astype(float), expected) - - _check_op(arr, operator.add) - _check_op(arr, operator.sub) - _check_op(arr, operator.mul) - _check_op(arr, operator.truediv) - _check_op(arr, operator.floordiv) - - def test_series_frame_radd_bug(self): - import operator - - # GH 353 - vals = Series(tm.rands_array(5, 10)) - result = 'foo_' + vals - expected = vals.map(lambda x: 'foo_' + x) - assert_series_equal(result, expected) - - frame = 
DataFrame({'vals': vals}) - result = 'foo_' + frame - expected = DataFrame({'vals': vals.map(lambda x: 'foo_' + x)}) - tm.assert_frame_equal(result, expected) - - # really raise this time - self.assertRaises(TypeError, operator.add, datetime.now(), self.ts) - - def test_operators_frame(self): - # rpow does not work with DataFrame - df = DataFrame({'A': self.ts}) - - tm.assert_almost_equal(self.ts + self.ts, self.ts + df['A']) - tm.assert_almost_equal(self.ts ** self.ts, self.ts ** df['A']) - tm.assert_almost_equal(self.ts < self.ts, self.ts < df['A']) - tm.assert_almost_equal(self.ts / self.ts, self.ts / df['A']) - - def test_operators_combine(self): - def _check_fill(meth, op, a, b, fill_value=0): - exp_index = a.index.union(b.index) - a = a.reindex(exp_index) - b = b.reindex(exp_index) - - amask = isnull(a) - bmask = isnull(b) - - exp_values = [] - for i in range(len(exp_index)): - if amask[i]: - if bmask[i]: - exp_values.append(nan) - continue - exp_values.append(op(fill_value, b[i])) - elif bmask[i]: - if amask[i]: - exp_values.append(nan) - continue - exp_values.append(op(a[i], fill_value)) - else: - exp_values.append(op(a[i], b[i])) - - result = meth(a, b, fill_value=fill_value) - expected = Series(exp_values, exp_index) - assert_series_equal(result, expected) - - a = Series([nan, 1., 2., 3., nan], index=np.arange(5)) - b = Series([nan, 1, nan, 3, nan, 4.], index=np.arange(6)) - - pairings = [] - for op in ['add', 'sub', 'mul', 'pow', 'truediv', 'floordiv']: - fv = 0 - lop = getattr(Series, op) - lequiv = getattr(operator, op) - rop = getattr(Series, 'r' + op) - # bind op at definition time... 
- requiv = lambda x, y, op=op: getattr(operator, op)(y, x) - pairings.append((lop, lequiv, fv)) - pairings.append((rop, requiv, fv)) - - if compat.PY3: - pairings.append((Series.div, operator.truediv, 1)) - pairings.append((Series.rdiv, lambda x, y: operator.truediv(y, x), - 1)) - else: - pairings.append((Series.div, operator.div, 1)) - pairings.append((Series.rdiv, lambda x, y: operator.div(y, x), 1)) - - for op, equiv_op, fv in pairings: - result = op(a, b) - exp = equiv_op(a, b) - assert_series_equal(result, exp) - _check_fill(op, equiv_op, a, b, fill_value=fv) - # should accept axis=0 or axis='rows' - op(a, b, axis=0) - - def test_combine_first(self): - values = tm.makeIntIndex(20).values.astype(float) - series = Series(values, index=tm.makeIntIndex(20)) - - series_copy = series * 2 - series_copy[::2] = np.NaN - - # nothing used from the input - combined = series.combine_first(series_copy) - - self.assert_numpy_array_equal(combined, series) - - # Holes filled from input - combined = series_copy.combine_first(series) - self.assertTrue(np.isfinite(combined).all()) - - self.assert_numpy_array_equal(combined[::2], series[::2]) - self.assert_numpy_array_equal(combined[1::2], series_copy[1::2]) - - # mixed types - index = tm.makeStringIndex(20) - floats = Series(tm.randn(20), index=index) - strings = Series(tm.makeStringIndex(10), index=index[::2]) - - combined = strings.combine_first(floats) - - tm.assert_dict_equal(strings, combined, compare_keys=False) - tm.assert_dict_equal(floats[1::2], combined, compare_keys=False) - - # corner case - s = Series([1., 2, 3], index=[0, 1, 2]) - result = s.combine_first(Series([], index=[])) - assert_series_equal(s, result) - - def test_update(self): - s = Series([1.5, nan, 3., 4., nan]) - s2 = Series([nan, 3.5, nan, 5.]) - s.update(s2) - - expected = Series([1.5, 3.5, 3., 5., np.nan]) - assert_series_equal(s, expected) - - # GH 3217 - df = DataFrame([{"a": 1}, {"a": 3, "b": 2}]) - df['c'] = np.nan - - # this will fail as long as 
series is a sub-class of ndarray - # df['c'].update(Series(['foo'],index=[0])) ##### - - def test_corr(self): - tm._skip_if_no_scipy() - - import scipy.stats as stats - - # full overlap - self.assertAlmostEqual(self.ts.corr(self.ts), 1) - - # partial overlap - self.assertAlmostEqual(self.ts[:15].corr(self.ts[5:]), 1) - - self.assertTrue(isnull(self.ts[:15].corr(self.ts[5:], min_periods=12))) - - ts1 = self.ts[:15].reindex(self.ts.index) - ts2 = self.ts[5:].reindex(self.ts.index) - self.assertTrue(isnull(ts1.corr(ts2, min_periods=12))) - - # No overlap - self.assertTrue(np.isnan(self.ts[::2].corr(self.ts[1::2]))) - - # all NA - cp = self.ts[:10].copy() - cp[:] = np.nan - self.assertTrue(isnull(cp.corr(cp))) - - A = tm.makeTimeSeries() - B = tm.makeTimeSeries() - result = A.corr(B) - expected, _ = stats.pearsonr(A, B) - self.assertAlmostEqual(result, expected) - - def test_corr_rank(self): - tm._skip_if_no_scipy() - - import scipy - import scipy.stats as stats - - # kendall and spearman - A = tm.makeTimeSeries() - B = tm.makeTimeSeries() - A[-5:] = A[:5] - result = A.corr(B, method='kendall') - expected = stats.kendalltau(A, B)[0] - self.assertAlmostEqual(result, expected) - - result = A.corr(B, method='spearman') - expected = stats.spearmanr(A, B)[0] - self.assertAlmostEqual(result, expected) - - # these methods got rewritten in 0.8 - if scipy.__version__ < LooseVersion('0.9'): - raise nose.SkipTest("skipping corr rank because of scipy version " - "{0}".format(scipy.__version__)) - - # results from R - A = Series( - [-0.89926396, 0.94209606, -1.03289164, -0.95445587, 0.76910310, - - 0.06430576, -2.09704447, 0.40660407, -0.89926396, 0.94209606]) - B = Series( - [-1.01270225, -0.62210117, -1.56895827, 0.59592943, -0.01680292, - 1.17258718, -1.06009347, -0.10222060, -0.89076239, 0.89372375]) - kexp = 0.4319297 - sexp = 0.5853767 - self.assertAlmostEqual(A.corr(B, method='kendall'), kexp) - self.assertAlmostEqual(A.corr(B, method='spearman'), sexp) - - def 
test_cov(self): - # full overlap - self.assertAlmostEqual(self.ts.cov(self.ts), self.ts.std() ** 2) - - # partial overlap - self.assertAlmostEqual(self.ts[:15].cov(self.ts[5:]), - self.ts[5:15].std() ** 2) - - # No overlap - self.assertTrue(np.isnan(self.ts[::2].cov(self.ts[1::2]))) - - # all NA - cp = self.ts[:10].copy() - cp[:] = np.nan - self.assertTrue(isnull(cp.cov(cp))) - - # min_periods - self.assertTrue(isnull(self.ts[:15].cov(self.ts[5:], min_periods=12))) - - ts1 = self.ts[:15].reindex(self.ts.index) - ts2 = self.ts[5:].reindex(self.ts.index) - self.assertTrue(isnull(ts1.cov(ts2, min_periods=12))) - - def test_copy(self): - - for deep in [None, False, True]: - s = Series(np.arange(10), dtype='float64') - - # default deep is True - if deep is None: - s2 = s.copy() - else: - s2 = s.copy(deep=deep) - - s2[::2] = np.NaN - - if deep is None or deep is True: - # Did not modify original Series - self.assertTrue(np.isnan(s2[0])) - self.assertFalse(np.isnan(s[0])) - else: - - # we DID modify the original Series - self.assertTrue(np.isnan(s2[0])) - self.assertTrue(np.isnan(s[0])) - - # GH 11794 - # copy of tz-aware - expected = Series([Timestamp('2012/01/01', tz='UTC')]) - expected2 = Series([Timestamp('1999/01/01', tz='UTC')]) - - for deep in [None, False, True]: - s = Series([Timestamp('2012/01/01', tz='UTC')]) - - if deep is None: - s2 = s.copy() - else: - s2 = s.copy(deep=deep) - - s2[0] = pd.Timestamp('1999/01/01', tz='UTC') - - # default deep is True - if deep is None or deep is True: - assert_series_equal(s, expected) - assert_series_equal(s2, expected2) - else: - assert_series_equal(s, expected2) - assert_series_equal(s2, expected2) - - def test_count(self): - self.assertEqual(self.ts.count(), len(self.ts)) - - self.ts[::2] = np.NaN - - self.assertEqual(self.ts.count(), np.isfinite(self.ts).sum()) - - mi = MultiIndex.from_arrays([list('aabbcc'), [1, 2, 2, nan, 1, 2]]) - ts = Series(np.arange(len(mi)), index=mi) - - left = ts.count(level=1) - right = 
Series([2, 3, 1], index=[1, 2, nan]) - assert_series_equal(left, right) - - ts.iloc[[0, 3, 5]] = nan - assert_series_equal(ts.count(level=1), right - 1) - - def test_dtype(self): - - self.assertEqual(self.ts.dtype, np.dtype('float64')) - self.assertEqual(self.ts.dtypes, np.dtype('float64')) - self.assertEqual(self.ts.ftype, 'float64:dense') - self.assertEqual(self.ts.ftypes, 'float64:dense') - assert_series_equal(self.ts.get_dtype_counts(), Series(1, ['float64'])) - assert_series_equal(self.ts.get_ftype_counts(), Series( - 1, ['float64:dense'])) - - def test_dot(self): - a = Series(np.random.randn(4), index=['p', 'q', 'r', 's']) - b = DataFrame(np.random.randn(3, 4), index=['1', '2', '3'], - columns=['p', 'q', 'r', 's']).T - - result = a.dot(b) - expected = Series(np.dot(a.values, b.values), index=['1', '2', '3']) - assert_series_equal(result, expected) - - # Check index alignment - b2 = b.reindex(index=reversed(b.index)) - result = a.dot(b) - assert_series_equal(result, expected) - - # Check ndarray argument - result = a.dot(b.values) - self.assertTrue(np.all(result == expected.values)) - assert_almost_equal(a.dot(b['2'].values), expected['2']) - - # Check series argument - assert_almost_equal(a.dot(b['1']), expected['1']) - assert_almost_equal(a.dot(b2['1']), expected['1']) - - self.assertRaises(Exception, a.dot, a.values[:3]) - self.assertRaises(ValueError, a.dot, b.T) - - def test_value_counts_nunique(self): - - # basics.rst doc example - series = Series(np.random.randn(500)) - series[20:500] = np.nan - series[10:20] = 5000 - result = series.nunique() - self.assertEqual(result, 11) - - def test_unique(self): - - # 714 also, dtype=float - s = Series([1.2345] * 100) - s[::2] = np.nan - result = s.unique() - self.assertEqual(len(result), 2) - - s = Series([1.2345] * 100, dtype='f4') - s[::2] = np.nan - result = s.unique() - self.assertEqual(len(result), 2) - - # NAs in object arrays #714 - s = Series(['foo'] * 100, dtype='O') - s[::2] = np.nan - result = 
s.unique() - self.assertEqual(len(result), 2) - - # decision about None - s = Series([1, 2, 3, None, None, None], dtype=object) - result = s.unique() - expected = np.array([1, 2, 3, None], dtype=object) - self.assert_numpy_array_equal(result, expected) - - def test_dropna_empty(self): - s = Series([]) - self.assertEqual(len(s.dropna()), 0) - s.dropna(inplace=True) - self.assertEqual(len(s), 0) - - # invalid axis - self.assertRaises(ValueError, s.dropna, axis=1) - - def test_datetime64_tz_dropna(self): - # DatetimeBlock - s = Series([Timestamp('2011-01-01 10:00'), pd.NaT, Timestamp( - '2011-01-03 10:00'), pd.NaT]) - result = s.dropna() - expected = Series([Timestamp('2011-01-01 10:00'), - Timestamp('2011-01-03 10:00')], index=[0, 2]) - self.assert_series_equal(result, expected) - - # DatetimeBlockTZ - idx = pd.DatetimeIndex(['2011-01-01 10:00', pd.NaT, - '2011-01-03 10:00', pd.NaT], - tz='Asia/Tokyo') - s = pd.Series(idx) - self.assertEqual(s.dtype, 'datetime64[ns, Asia/Tokyo]') - result = s.dropna() - expected = Series([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - Timestamp('2011-01-03 10:00', tz='Asia/Tokyo')], - index=[0, 2]) - self.assertEqual(result.dtype, 'datetime64[ns, Asia/Tokyo]') - self.assert_series_equal(result, expected) - - def test_dropna_no_nan(self): - for s in [Series([1, 2, 3], name='x'), Series( - [False, True, False], name='x')]: - - result = s.dropna() - self.assert_series_equal(result, s) - self.assertFalse(result is s) - - s2 = s.copy() - s2.dropna(inplace=True) - self.assert_series_equal(s2, s) - - def test_axis_alias(self): - s = Series([1, 2, np.nan]) - assert_series_equal(s.dropna(axis='rows'), s.dropna(axis='index')) - self.assertEqual(s.dropna().sum('rows'), 3) - self.assertEqual(s._get_axis_number('rows'), 0) - self.assertEqual(s._get_axis_name('rows'), 'index') - - def test_drop_duplicates(self): - # check both int and object - for s in [Series([1, 2, 3, 3]), Series(['1', '2', '3', '3'])]: - expected = Series([False, False, 
False, True]) - assert_series_equal(s.duplicated(), expected) - assert_series_equal(s.drop_duplicates(), s[~expected]) - sc = s.copy() - sc.drop_duplicates(inplace=True) - assert_series_equal(sc, s[~expected]) - - expected = Series([False, False, True, False]) - assert_series_equal(s.duplicated(keep='last'), expected) - assert_series_equal(s.drop_duplicates(keep='last'), s[~expected]) - sc = s.copy() - sc.drop_duplicates(keep='last', inplace=True) - assert_series_equal(sc, s[~expected]) - - # deprecate take_last - with tm.assert_produces_warning(FutureWarning): - assert_series_equal(s.duplicated(take_last=True), expected) - with tm.assert_produces_warning(FutureWarning): - assert_series_equal( - s.drop_duplicates(take_last=True), s[~expected]) - sc = s.copy() - with tm.assert_produces_warning(FutureWarning): - sc.drop_duplicates(take_last=True, inplace=True) - assert_series_equal(sc, s[~expected]) - - expected = Series([False, False, True, True]) - assert_series_equal(s.duplicated(keep=False), expected) - assert_series_equal(s.drop_duplicates(keep=False), s[~expected]) - sc = s.copy() - sc.drop_duplicates(keep=False, inplace=True) - assert_series_equal(sc, s[~expected]) - - for s in [Series([1, 2, 3, 5, 3, 2, 4]), - Series(['1', '2', '3', '5', '3', '2', '4'])]: - expected = Series([False, False, False, False, True, True, False]) - assert_series_equal(s.duplicated(), expected) - assert_series_equal(s.drop_duplicates(), s[~expected]) - sc = s.copy() - sc.drop_duplicates(inplace=True) - assert_series_equal(sc, s[~expected]) - - expected = Series([False, True, True, False, False, False, False]) - assert_series_equal(s.duplicated(keep='last'), expected) - assert_series_equal(s.drop_duplicates(keep='last'), s[~expected]) - sc = s.copy() - sc.drop_duplicates(keep='last', inplace=True) - assert_series_equal(sc, s[~expected]) - - # deprecate take_last - with tm.assert_produces_warning(FutureWarning): - assert_series_equal(s.duplicated(take_last=True), expected) - with 
tm.assert_produces_warning(FutureWarning): - assert_series_equal( - s.drop_duplicates(take_last=True), s[~expected]) - sc = s.copy() - with tm.assert_produces_warning(FutureWarning): - sc.drop_duplicates(take_last=True, inplace=True) - assert_series_equal(sc, s[~expected]) - - expected = Series([False, True, True, False, True, True, False]) - assert_series_equal(s.duplicated(keep=False), expected) - assert_series_equal(s.drop_duplicates(keep=False), s[~expected]) - sc = s.copy() - sc.drop_duplicates(keep=False, inplace=True) - assert_series_equal(sc, s[~expected]) - - def test_sort_values(self): - - ts = self.ts.copy() - - # 9816 deprecated - with tm.assert_produces_warning(FutureWarning): - ts.sort() - - self.assert_numpy_array_equal(ts, self.ts.sort_values()) - self.assert_numpy_array_equal(ts.index, self.ts.sort_values().index) - - ts.sort_values(ascending=False, inplace=True) - self.assert_numpy_array_equal(ts, self.ts.sort_values(ascending=False)) - self.assert_numpy_array_equal(ts.index, self.ts.sort_values( - ascending=False).index) - - # GH 5856/5853 - # Series.sort_values operating on a view - df = DataFrame(np.random.randn(10, 4)) - s = df.iloc[:, 0] - - def f(): - s.sort_values(inplace=True) - - self.assertRaises(ValueError, f) - - # test order/sort inplace - # GH6859 - ts1 = self.ts.copy() - ts1.sort_values(ascending=False, inplace=True) - ts2 = self.ts.copy() - ts2.sort_values(ascending=False, inplace=True) - assert_series_equal(ts1, ts2) - - ts1 = self.ts.copy() - ts1 = ts1.sort_values(ascending=False, inplace=False) - ts2 = self.ts.copy() - ts2 = ts.sort_values(ascending=False) - assert_series_equal(ts1, ts2) - - def test_sort_index(self): - rindex = list(self.ts.index) - random.shuffle(rindex) - - random_order = self.ts.reindex(rindex) - sorted_series = random_order.sort_index() - assert_series_equal(sorted_series, self.ts) - - # descending - sorted_series = random_order.sort_index(ascending=False) - assert_series_equal(sorted_series, - 
self.ts.reindex(self.ts.index[::-1])) - - def test_sort_index_inplace(self): - - # For #11402 - rindex = list(self.ts.index) - random.shuffle(rindex) - - # descending - random_order = self.ts.reindex(rindex) - result = random_order.sort_index(ascending=False, inplace=True) - self.assertIs(result, None, - msg='sort_index() inplace should return None') - assert_series_equal(random_order, self.ts.reindex(self.ts.index[::-1])) - - # ascending - random_order = self.ts.reindex(rindex) - result = random_order.sort_index(ascending=True, inplace=True) - self.assertIs(result, None, - msg='sort_index() inplace should return None') - assert_series_equal(random_order, self.ts) - - def test_sort_API(self): - - # API for 9816 - - # sortlevel - mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) - s = Series([1, 2], mi) - backwards = s.iloc[[1, 0]] - - res = s.sort_index(level='A') - assert_series_equal(backwards, res) - - # sort_index - rindex = list(self.ts.index) - random.shuffle(rindex) - - random_order = self.ts.reindex(rindex) - sorted_series = random_order.sort_index(level=0) - assert_series_equal(sorted_series, self.ts) - - # compat on axis - sorted_series = random_order.sort_index(axis=0) - assert_series_equal(sorted_series, self.ts) - - self.assertRaises(ValueError, lambda: random_order.sort_values(axis=1)) - - sorted_series = random_order.sort_index(level=0, axis=0) - assert_series_equal(sorted_series, self.ts) - - self.assertRaises(ValueError, - lambda: random_order.sort_index(level=0, axis=1)) - - def test_order(self): - - # 9816 deprecated - with tm.assert_produces_warning(FutureWarning): - self.ts.order() - - ts = self.ts.copy() - ts[:5] = np.NaN - vals = ts.values - - result = ts.sort_values() - self.assertTrue(np.isnan(result[-5:]).all()) - self.assert_numpy_array_equal(result[:-5], np.sort(vals[5:])) - - result = ts.sort_values(na_position='first') - self.assertTrue(np.isnan(result[:5]).all()) - self.assert_numpy_array_equal(result[5:], 
np.sort(vals[5:])) - - # something object-type - ser = Series(['A', 'B'], [1, 2]) - # no failure - ser.sort_values() - - # ascending=False - ordered = ts.sort_values(ascending=False) - expected = np.sort(ts.valid().values)[::-1] - assert_almost_equal(expected, ordered.valid().values) - ordered = ts.sort_values(ascending=False, na_position='first') - assert_almost_equal(expected, ordered.valid().values) - - def test_nsmallest_nlargest(self): - # float, int, datetime64 (use i8), timedelts64 (same), - # object that are numbers, object that are strings - - base = [3, 2, 1, 2, 5] - - s_list = [ - Series(base, dtype='int8'), - Series(base, dtype='int16'), - Series(base, dtype='int32'), - Series(base, dtype='int64'), - Series(base, dtype='float32'), - Series(base, dtype='float64'), - Series(base, dtype='uint8'), - Series(base, dtype='uint16'), - Series(base, dtype='uint32'), - Series(base, dtype='uint64'), - Series(base).astype('timedelta64[ns]'), - Series(pd.to_datetime(['2003', '2002', '2001', '2002', '2005'])), - ] - - raising = [ - Series([3., 2, 1, 2, '5'], dtype='object'), - Series([3., 2, 1, 2, 5], dtype='object'), - # not supported on some archs - # Series([3., 2, 1, 2, 5], dtype='complex256'), - Series([3., 2, 1, 2, 5], dtype='complex128'), - ] - - for r in raising: - dt = r.dtype - msg = "Cannot use method 'n(larg|small)est' with dtype %s" % dt - args = 2, len(r), 0, -1 - methods = r.nlargest, r.nsmallest - for method, arg in product(methods, args): - with tm.assertRaisesRegexp(TypeError, msg): - method(arg) - - for s in s_list: - - assert_series_equal(s.nsmallest(2), s.iloc[[2, 1]]) - - assert_series_equal(s.nsmallest(2, keep='last'), s.iloc[[2, 3]]) - with tm.assert_produces_warning(FutureWarning): - assert_series_equal( - s.nsmallest(2, take_last=True), s.iloc[[2, 3]]) - - assert_series_equal(s.nlargest(3), s.iloc[[4, 0, 1]]) - - assert_series_equal(s.nlargest(3, keep='last'), s.iloc[[4, 0, 3]]) - with tm.assert_produces_warning(FutureWarning): - 
-        assert_series_equal(
-            s.nlargest(3, take_last=True), s.iloc[[4, 0, 3]])
-
-        empty = s.iloc[0:0]
-        assert_series_equal(s.nsmallest(0), empty)
-        assert_series_equal(s.nsmallest(-1), empty)
-        assert_series_equal(s.nlargest(0), empty)
-        assert_series_equal(s.nlargest(-1), empty)
-
-        assert_series_equal(s.nsmallest(len(s)), s.sort_values())
-        assert_series_equal(s.nsmallest(len(s) + 1), s.sort_values())
-        assert_series_equal(s.nlargest(len(s)), s.iloc[[4, 0, 1, 3, 2]])
-        assert_series_equal(s.nlargest(len(s) + 1),
-                            s.iloc[[4, 0, 1, 3, 2]])
-
-        s = Series([3., np.nan, 1, 2, 5])
-        assert_series_equal(s.nlargest(), s.iloc[[4, 0, 3, 2]])
-        assert_series_equal(s.nsmallest(), s.iloc[[2, 3, 0, 4]])
-
-        msg = 'keep must be either "first", "last"'
-        with tm.assertRaisesRegexp(ValueError, msg):
-            s.nsmallest(keep='invalid')
-        with tm.assertRaisesRegexp(ValueError, msg):
-            s.nlargest(keep='invalid')
-
-    def test_rank(self):
-        tm._skip_if_no_scipy()
-        from scipy.stats import rankdata
-
-        self.ts[::2] = np.nan
-        self.ts[:10][::3] = 4.
-
-        ranks = self.ts.rank()
-        oranks = self.ts.astype('O').rank()
-
-        assert_series_equal(ranks, oranks)
-
-        mask = np.isnan(self.ts)
-        filled = self.ts.fillna(np.inf)
-
-        # rankdata returns a ndarray
-        exp = Series(rankdata(filled), index=filled.index)
-        exp[mask] = np.nan
-
-        assert_almost_equal(ranks, exp)
-
-        iseries = Series(np.arange(5).repeat(2))
-
-        iranks = iseries.rank()
-        exp = iseries.astype(float).rank()
-        assert_series_equal(iranks, exp)
-        iseries = Series(np.arange(5)) + 1.0
-        exp = iseries / 5.0
-        iranks = iseries.rank(pct=True)
-
-        assert_series_equal(iranks, exp)
-
-        iseries = Series(np.repeat(1, 100))
-        exp = Series(np.repeat(0.505, 100))
-        iranks = iseries.rank(pct=True)
-        assert_series_equal(iranks, exp)
-
-        iseries[1] = np.nan
-        exp = Series(np.repeat(50.0 / 99.0, 100))
-        exp[1] = np.nan
-        iranks = iseries.rank(pct=True)
-        assert_series_equal(iranks, exp)
-
-        iseries = Series(np.arange(5)) + 1.0
-        iseries[4] = np.nan
-        exp = iseries / 4.0
-        iranks = iseries.rank(pct=True)
-        assert_series_equal(iranks, exp)
-
-        iseries = Series(np.repeat(np.nan, 100))
-        exp = iseries.copy()
-        iranks = iseries.rank(pct=True)
-        assert_series_equal(iranks, exp)
-
-        iseries = Series(np.arange(5)) + 1
-        iseries[4] = np.nan
-        exp = iseries / 4.0
-        iranks = iseries.rank(pct=True)
-        assert_series_equal(iranks, exp)
-
-        rng = date_range('1/1/1990', periods=5)
-        iseries = Series(np.arange(5), rng) + 1
-        iseries.ix[4] = np.nan
-        exp = iseries / 4.0
-        iranks = iseries.rank(pct=True)
-        assert_series_equal(iranks, exp)
-
-        iseries = Series([1e-50, 1e-100, 1e-20, 1e-2, 1e-20 + 1e-30, 1e-1])
-        exp = Series([2, 1, 3, 5, 4, 6.0])
-        iranks = iseries.rank()
-        assert_series_equal(iranks, exp)
-
-        values = np.array(
-            [-50, -1, -1e-20, -1e-25, -1e-50, 0, 1e-40, 1e-20, 1e-10, 2, 40
-             ], dtype='float64')
-        random_order = np.random.permutation(len(values))
-        iseries = Series(values[random_order])
-        exp = Series(random_order + 1.0, dtype='float64')
-        iranks = iseries.rank()
-        assert_series_equal(iranks, exp)
-
-    def test_rank_inf(self):
-        raise nose.SkipTest('DataFrame.rank does not currently rank '
-                            'np.inf and -np.inf properly')
-
-        values = np.array(
-            [-np.inf, -50, -1, -1e-20, -1e-25, -1e-50, 0, 1e-40, 1e-20, 1e-10,
-             2, 40, np.inf], dtype='float64')
-        random_order = np.random.permutation(len(values))
-        iseries = Series(values[random_order])
-        exp = Series(random_order + 1.0, dtype='float64')
-        iranks = iseries.rank()
-        assert_series_equal(iranks, exp)
-
-    def test_from_csv(self):
-
-        with ensure_clean() as path:
-            self.ts.to_csv(path)
-            ts = Series.from_csv(path)
-            assert_series_equal(self.ts, ts, check_names=False)
-            self.assertTrue(ts.name is None)
-            self.assertTrue(ts.index.name is None)
-
-            # GH10483
-            self.ts.to_csv(path, header=True)
-            ts_h = Series.from_csv(path, header=0)
-            self.assertTrue(ts_h.name == 'ts')
-
-            self.series.to_csv(path)
-            series = Series.from_csv(path)
-            self.assertIsNone(series.name)
-            self.assertIsNone(series.index.name)
-            assert_series_equal(self.series, series, check_names=False)
-            self.assertTrue(series.name is None)
-            self.assertTrue(series.index.name is None)
-
-            self.series.to_csv(path, header=True)
-            series_h = Series.from_csv(path, header=0)
-            self.assertTrue(series_h.name == 'series')
-
-            outfile = open(path, 'w')
-            outfile.write('1998-01-01|1.0\n1999-01-01|2.0')
-            outfile.close()
-            series = Series.from_csv(path, sep='|')
-            checkseries = Series({datetime(1998, 1, 1): 1.0,
-                                  datetime(1999, 1, 1): 2.0})
-            assert_series_equal(checkseries, series)
-
-            series = Series.from_csv(path, sep='|', parse_dates=False)
-            checkseries = Series({'1998-01-01': 1.0, '1999-01-01': 2.0})
-            assert_series_equal(checkseries, series)
-
-    def test_to_csv(self):
-        import io
-
-        with ensure_clean() as path:
-            self.ts.to_csv(path)
-
-            lines = io.open(path, newline=None).readlines()
-            assert (lines[1] != '\n')
-
-            self.ts.to_csv(path, index=False)
-            arr = np.loadtxt(path)
-            assert_almost_equal(arr, self.ts.values)
-
-    def test_to_csv_unicode_index(self):
-        buf = StringIO()
-        s = Series([u("\u05d0"), "d2"], index=[u("\u05d0"), u("\u05d1")])
-
-        s.to_csv(buf, encoding='UTF-8')
-        buf.seek(0)
-
-        s2 = Series.from_csv(buf, index_col=0, encoding='UTF-8')
-
-        assert_series_equal(s, s2)
-
-    def test_tolist(self):
-        rs = self.ts.tolist()
-        xp = self.ts.values.tolist()
-        assert_almost_equal(rs, xp)
-
-        # datetime64
-        s = Series(self.ts.index)
-        rs = s.tolist()
-        self.assertEqual(self.ts.index[0], rs[0])
-
-    def test_to_frame(self):
-        self.ts.name = None
-        rs = self.ts.to_frame()
-        xp = pd.DataFrame(self.ts.values, index=self.ts.index)
-        assert_frame_equal(rs, xp)
-
-        self.ts.name = 'testname'
-        rs = self.ts.to_frame()
-        xp = pd.DataFrame(dict(testname=self.ts.values), index=self.ts.index)
-        assert_frame_equal(rs, xp)
-
-        rs = self.ts.to_frame(name='testdifferent')
-        xp = pd.DataFrame(
-            dict(testdifferent=self.ts.values), index=self.ts.index)
-        assert_frame_equal(rs, xp)
-
-    def test_to_dict(self):
-        self.assert_numpy_array_equal(Series(self.ts.to_dict()), self.ts)
-
-    def test_to_csv_float_format(self):
-
-        with ensure_clean() as filename:
-            ser = Series([0.123456, 0.234567, 0.567567])
-            ser.to_csv(filename, float_format='%.2f')
-
-            rs = Series.from_csv(filename)
-            xp = Series([0.12, 0.23, 0.57])
-            assert_series_equal(rs, xp)
-
-    def test_to_csv_list_entries(self):
-        s = Series(['jack and jill', 'jesse and frank'])
-
-        split = s.str.split(r'\s+and\s+')
-
-        buf = StringIO()
-        split.to_csv(buf)
-
-    def test_to_csv_path_is_none(self):
-        # GH 8215
-        # Series.to_csv() was returning None, inconsistent with
-        # DataFrame.to_csv() which returned string
-        s = Series([1, 2, 3])
-        csv_str = s.to_csv(path=None)
-        self.assertIsInstance(csv_str, str)
-
-    def test_str_attribute(self):
-        # GH9068
-        methods = ['strip', 'rstrip', 'lstrip']
-        s = Series([' jack', 'jill ', ' jesse ', 'frank'])
-        for method in methods:
-            expected = Series([getattr(str, method)(x) for x in s.values])
-            assert_series_equal(getattr(Series.str, method)(s.str), expected)
-
-        # str accessor only valid with string values
-        s = Series(range(5))
-        with self.assertRaisesRegexp(AttributeError, 'only use .str accessor'):
-            s.str.repeat(2)
-
-    def test_clip(self):
-        val = self.ts.median()
-
-        self.assertEqual(self.ts.clip_lower(val).min(), val)
-        self.assertEqual(self.ts.clip_upper(val).max(), val)
-
-        self.assertEqual(self.ts.clip(lower=val).min(), val)
-        self.assertEqual(self.ts.clip(upper=val).max(), val)
-
-        result = self.ts.clip(-0.5, 0.5)
-        expected = np.clip(self.ts, -0.5, 0.5)
-        assert_series_equal(result, expected)
-        tm.assertIsInstance(expected, Series)
-
-    def test_clip_types_and_nulls(self):
-
-        sers = [Series([np.nan, 1.0, 2.0, 3.0]), Series([None, 'a', 'b', 'c']),
-                Series(pd.to_datetime(
-                    [np.nan, 1, 2, 3], unit='D'))]
-
-        for s in sers:
-            thresh = s[2]
-            l = s.clip_lower(thresh)
-            u = s.clip_upper(thresh)
-            self.assertEqual(l[notnull(l)].min(), thresh)
-            self.assertEqual(u[notnull(u)].max(), thresh)
-            self.assertEqual(list(isnull(s)), list(isnull(l)))
-            self.assertEqual(list(isnull(s)), list(isnull(u)))
-
-    def test_clip_against_series(self):
-        # GH #6966
-
-        s = Series([1.0, 1.0, 4.0])
-        threshold = Series([1.0, 2.0, 3.0])
-
-        assert_series_equal(s.clip_lower(threshold), Series([1.0, 2.0, 4.0]))
-        assert_series_equal(s.clip_upper(threshold), Series([1.0, 1.0, 3.0]))
-
-        lower = Series([1.0, 2.0, 3.0])
-        upper = Series([1.5, 2.5, 3.5])
-        assert_series_equal(s.clip(lower, upper), Series([1.0, 2.0, 3.5]))
-        assert_series_equal(s.clip(1.5, upper), Series([1.5, 1.5, 3.5]))
-
-    def test_clip_with_datetimes(self):
-
-        # GH 11838
-        # naive and tz-aware datetimes
-
-        t = Timestamp('2015-12-01 09:30:30')
-        s = Series([Timestamp('2015-12-01 09:30:00'), Timestamp(
-            '2015-12-01 09:31:00')])
-        result = s.clip(upper=t)
-        expected = Series([Timestamp('2015-12-01 09:30:00'), Timestamp(
-            '2015-12-01 09:30:30')])
-        assert_series_equal(result, expected)
-
-        t = Timestamp('2015-12-01 09:30:30', tz='US/Eastern')
-        s = Series([Timestamp('2015-12-01 09:30:00', tz='US/Eastern'),
-                    Timestamp('2015-12-01 09:31:00', tz='US/Eastern')])
-        result = s.clip(upper=t)
-        expected = Series([Timestamp('2015-12-01 09:30:00', tz='US/Eastern'),
-                           Timestamp('2015-12-01 09:30:30', tz='US/Eastern')])
-        assert_series_equal(result, expected)
-
-    def test_valid(self):
-        ts = self.ts.copy()
-        ts[::2] = np.NaN
-
-        result = ts.valid()
-        self.assertEqual(len(result), ts.count())
-
-        tm.assert_dict_equal(result, ts, compare_keys=False)
-
-    def test_isnull(self):
-        ser = Series([0, 5.4, 3, nan, -0.001])
-        np.array_equal(ser.isnull(),
-                       Series([False, False, False, True, False]).values)
-        ser = Series(["hi", "", nan])
-        np.array_equal(ser.isnull(), Series([False, False, True]).values)
-
-    def test_notnull(self):
-        ser = Series([0, 5.4, 3, nan, -0.001])
-        np.array_equal(ser.notnull(),
-                       Series([True, True, True, False, True]).values)
-        ser = Series(["hi", "", nan])
-        np.array_equal(ser.notnull(), Series([True, True, False]).values)
-
-    def test_shift(self):
-        shifted = self.ts.shift(1)
-        unshifted = shifted.shift(-1)
-
-        tm.assert_dict_equal(unshifted.valid(), self.ts, compare_keys=False)
-
-        offset = datetools.bday
-        shifted = self.ts.shift(1, freq=offset)
-        unshifted = shifted.shift(-1, freq=offset)
-
-        assert_series_equal(unshifted, self.ts)
-
-        unshifted = self.ts.shift(0, freq=offset)
-        assert_series_equal(unshifted, self.ts)
-
-        shifted = self.ts.shift(1, freq='B')
-        unshifted = shifted.shift(-1, freq='B')
-
-        assert_series_equal(unshifted, self.ts)
-
-        # corner case
-        unshifted = self.ts.shift(0)
-        assert_series_equal(unshifted, self.ts)
-
-        # Shifting with PeriodIndex
-        ps = tm.makePeriodSeries()
-        shifted = ps.shift(1)
-        unshifted = shifted.shift(-1)
-        tm.assert_dict_equal(unshifted.valid(), ps, compare_keys=False)
-
-        shifted2 = ps.shift(1, 'B')
-        shifted3 = ps.shift(1, datetools.bday)
-        assert_series_equal(shifted2, shifted3)
-        assert_series_equal(ps, shifted2.shift(-1, 'B'))
-
-        self.assertRaises(ValueError, ps.shift, freq='D')
-
-        # legacy support
-        shifted4 = ps.shift(1, freq='B')
-        assert_series_equal(shifted2, shifted4)
-
-        shifted5 = ps.shift(1, freq=datetools.bday)
-        assert_series_equal(shifted5, shifted4)
-
-        # 32-bit taking
-        # GH 8129
-        index = date_range('2000-01-01', periods=5)
-        for dtype in ['int32', 'int64']:
-            s1 = Series(np.arange(5, dtype=dtype), index=index)
-            p = s1.iloc[1]
-            result = s1.shift(periods=p)
-            expected = Series([np.nan, 0, 1, 2, 3], index=index)
-            assert_series_equal(result, expected)
-
-        # xref 8260
-        # with tz
-        s = Series(
-            date_range('2000-01-01 09:00:00', periods=5,
-                       tz='US/Eastern'), name='foo')
-        result = s - s.shift()
-        assert_series_equal(result, Series(
-            TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo'))
-
-        # incompat tz
-        s2 = Series(
-            date_range('2000-01-01 09:00:00', periods=5, tz='CET'), name='foo')
-        self.assertRaises(ValueError, lambda: s - s2)
-
-    def test_tshift(self):
-        # PeriodIndex
-        ps = tm.makePeriodSeries()
-        shifted = ps.tshift(1)
-        unshifted = shifted.tshift(-1)
-
-        assert_series_equal(unshifted, ps)
-
-        shifted2 = ps.tshift(freq='B')
-        assert_series_equal(shifted, shifted2)
-
-        shifted3 = ps.tshift(freq=datetools.bday)
-        assert_series_equal(shifted, shifted3)
-
-        self.assertRaises(ValueError, ps.tshift, freq='M')
-
-        # DatetimeIndex
-        shifted = self.ts.tshift(1)
-        unshifted = shifted.tshift(-1)
-
-        assert_series_equal(self.ts, unshifted)
-
-        shifted2 = self.ts.tshift(freq=self.ts.index.freq)
-        assert_series_equal(shifted, shifted2)
-
-        inferred_ts = Series(self.ts.values, Index(np.asarray(self.ts.index)),
-                             name='ts')
-        shifted = inferred_ts.tshift(1)
-        unshifted = shifted.tshift(-1)
-        assert_series_equal(shifted, self.ts.tshift(1))
-        assert_series_equal(unshifted, inferred_ts)
-
-        no_freq = self.ts[[0, 5, 7]]
-        self.assertRaises(ValueError, no_freq.tshift)
-
-    def test_shift_int(self):
-        ts = self.ts.astype(int)
-        shifted = ts.shift(1)
-        expected = ts.astype(float).shift(1)
-        assert_series_equal(shifted, expected)
-
-    def test_shift_categorical(self):
-        # GH 9416
-        s = pd.Series(['a', 'b', 'c', 'd'], dtype='category')
-
-        assert_series_equal(s.iloc[:-1], s.shift(1).shift(-1).valid())
-
-        sp1 = s.shift(1)
-        assert_index_equal(s.index, sp1.index)
-        self.assertTrue(np.all(sp1.values.codes[:1] == -1))
-        self.assertTrue(np.all(s.values.codes[:-1] == sp1.values.codes[1:]))
-
-        sn2 = s.shift(-2)
-        assert_index_equal(s.index, sn2.index)
-        self.assertTrue(np.all(sn2.values.codes[-2:] == -1))
-        self.assertTrue(np.all(s.values.codes[2:] == sn2.values.codes[:-2]))
-
-        assert_index_equal(s.values.categories, sp1.values.categories)
-        assert_index_equal(s.values.categories, sn2.values.categories)
-
-    def test_truncate(self):
-        offset = datetools.bday
-
-        ts = self.ts[::3]
-
-        start, end = self.ts.index[3], self.ts.index[6]
-        start_missing, end_missing = self.ts.index[2], self.ts.index[7]
-
-        # neither specified
-        truncated = ts.truncate()
-        assert_series_equal(truncated, ts)
-
-        # both specified
-        expected = ts[1:3]
-
-        truncated = ts.truncate(start, end)
-        assert_series_equal(truncated, expected)
-
-        truncated = ts.truncate(start_missing, end_missing)
-        assert_series_equal(truncated, expected)
-
-        # start specified
-        expected = ts[1:]
-
-        truncated = ts.truncate(before=start)
-        assert_series_equal(truncated, expected)
-
-        truncated = ts.truncate(before=start_missing)
-        assert_series_equal(truncated, expected)
-
-        # end specified
-        expected = ts[:3]
-
-        truncated = ts.truncate(after=end)
-        assert_series_equal(truncated, expected)
-
-        truncated = ts.truncate(after=end_missing)
-        assert_series_equal(truncated, expected)
-
-        # corner case, empty series returned
-        truncated = ts.truncate(after=self.ts.index[0] - offset)
-        assert (len(truncated) == 0)
-
-        truncated = ts.truncate(before=self.ts.index[-1] + offset)
-        assert (len(truncated) == 0)
-
-        self.assertRaises(ValueError, ts.truncate,
-                          before=self.ts.index[-1] + offset,
-                          after=self.ts.index[0] - offset)
-
-    def test_ptp(self):
-        N = 1000
-        arr = np.random.randn(N)
-        ser = Series(arr)
-        self.assertEqual(np.ptp(ser), np.ptp(arr))
-
-        # GH11163
-        s = Series([3, 5, np.nan, -3, 10])
-        self.assertEqual(s.ptp(), 13)
-        self.assertTrue(pd.isnull(s.ptp(skipna=False)))
-
-        mi = pd.MultiIndex.from_product([['a', 'b'], [1, 2, 3]])
-        s = pd.Series([1, np.nan, 7, 3, 5, np.nan], index=mi)
-
-        expected = pd.Series([6, 2], index=['a', 'b'], dtype=np.float64)
-        self.assert_series_equal(s.ptp(level=0), expected)
-
-        expected = pd.Series([np.nan, np.nan], index=['a', 'b'])
-        self.assert_series_equal(s.ptp(level=0, skipna=False), expected)
-
-        with self.assertRaises(ValueError):
-            s.ptp(axis=1)
-
-        s = pd.Series(['a', 'b', 'c', 'd', 'e'])
-        with self.assertRaises(TypeError):
-            s.ptp()
-
-        with self.assertRaises(NotImplementedError):
-            s.ptp(numeric_only=True)
-
-    def test_asof(self):
-        # array or list or dates
-        N = 50
-        rng = date_range('1/1/1990', periods=N, freq='53s')
-        ts = Series(np.random.randn(N), index=rng)
-        ts[15:30] = np.nan
-        dates = date_range('1/1/1990', periods=N * 3, freq='25s')
-
-        result = ts.asof(dates)
-        self.assertTrue(notnull(result).all())
-        lb = ts.index[14]
-        ub = ts.index[30]
-
-        result = ts.asof(list(dates))
-        self.assertTrue(notnull(result).all())
-        lb = ts.index[14]
-        ub = ts.index[30]
-
-        mask = (result.index >= lb) & (result.index < ub)
-        rs = result[mask]
-        self.assertTrue((rs == ts[lb]).all())
-
-        val = result[result.index[result.index >= ub][0]]
-        self.assertEqual(ts[ub], val)
-
-        self.ts[5:10] = np.NaN
-        self.ts[15:20] = np.NaN
-
-        val1 = self.ts.asof(self.ts.index[7])
-        val2 = self.ts.asof(self.ts.index[19])
-
-        self.assertEqual(val1, self.ts[4])
-        self.assertEqual(val2, self.ts[14])
-
-        # accepts strings
-        val1 = self.ts.asof(str(self.ts.index[7]))
-        self.assertEqual(val1, self.ts[4])
-
-        # in there
-        self.assertEqual(self.ts.asof(self.ts.index[3]), self.ts[3])
-
-        # no as of value
-        d = self.ts.index[0] - datetools.bday
-        self.assertTrue(np.isnan(self.ts.asof(d)))
-
-    def test_getitem_setitem_datetimeindex(self):
-        from pandas import date_range
-        N = 50
-        # testing with timezone, GH #2785
-        rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
-        ts = Series(np.random.randn(N), index=rng)
-
-        result = ts["1990-01-01 04:00:00"]
-        expected = ts[4]
-        self.assertEqual(result, expected)
-
-        result = ts.copy()
-        result["1990-01-01 04:00:00"] = 0
-        result["1990-01-01 04:00:00"] = ts[4]
-        assert_series_equal(result, ts)
-
-        result = ts["1990-01-01 04:00:00":"1990-01-01 07:00:00"]
-        expected = ts[4:8]
-        assert_series_equal(result, expected)
-
-        result = ts.copy()
-        result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = 0
-        result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = ts[4:8]
-        assert_series_equal(result, ts)
-
-        lb = "1990-01-01 04:00:00"
-        rb = "1990-01-01 07:00:00"
-        result = ts[(ts.index >= lb) & (ts.index <= rb)]
-        expected = ts[4:8]
-        assert_series_equal(result, expected)
-
-        # repeat all the above with naive datetimes
-        result = ts[datetime(1990, 1, 1, 4)]
-        expected = ts[4]
-        self.assertEqual(result, expected)
-
-        result = ts.copy()
-        result[datetime(1990, 1, 1, 4)] = 0
-        result[datetime(1990, 1, 1, 4)] = ts[4]
-        assert_series_equal(result, ts)
-
-        result = ts[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)]
-        expected = ts[4:8]
-        assert_series_equal(result, expected)
-
-        result = ts.copy()
-        result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = 0
-        result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = ts[4:8]
-        assert_series_equal(result, ts)
-
-        lb = datetime(1990, 1, 1, 4)
-        rb = datetime(1990, 1, 1, 7)
-        result = ts[(ts.index >= lb) & (ts.index <= rb)]
-        expected = ts[4:8]
-        assert_series_equal(result, expected)
-
-        result = ts[ts.index[4]]
-        expected = ts[4]
-        self.assertEqual(result, expected)
-
-        result = ts[ts.index[4:8]]
-        expected = ts[4:8]
-        assert_series_equal(result, expected)
-
-        result = ts.copy()
-        result[ts.index[4:8]] = 0
-        result[4:8] = ts[4:8]
-        assert_series_equal(result, ts)
-
-        # also test partial date slicing
-        result = ts["1990-01-02"]
-        expected = ts[24:48]
-        assert_series_equal(result, expected)
-
-        result = ts.copy()
-        result["1990-01-02"] = 0
-        result["1990-01-02"] = ts[24:48]
-        assert_series_equal(result, ts)
-
-    def test_getitem_setitem_datetime_tz_pytz(self):
-        tm._skip_if_no_pytz()
-        from pytz import timezone as tz
-
-        from pandas import date_range
-        N = 50
-        # testing with timezone, GH #2785
-        rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
-        ts = Series(np.random.randn(N), index=rng)
-
-        # also test Timestamp tz handling, GH #2789
-        result = ts.copy()
-        result["1990-01-01 09:00:00+00:00"] = 0
-        result["1990-01-01 09:00:00+00:00"] = ts[4]
-        assert_series_equal(result, ts)
-
-        result = ts.copy()
-        result["1990-01-01 03:00:00-06:00"] = 0
-        result["1990-01-01 03:00:00-06:00"] = ts[4]
-        assert_series_equal(result, ts)
-
-        # repeat with datetimes
-        result = ts.copy()
-        result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
-        result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
-        assert_series_equal(result, ts)
-
-        result = ts.copy()
-
-        # comparison dates with datetime MUST be localized!
-        date = tz('US/Central').localize(datetime(1990, 1, 1, 3))
-        result[date] = 0
-        result[date] = ts[4]
-        assert_series_equal(result, ts)
-
-    def test_getitem_setitem_datetime_tz_dateutil(self):
-        tm._skip_if_no_dateutil()
-        from dateutil.tz import tzutc
-        from pandas.tslib import _dateutil_gettz as gettz
-
-        tz = lambda x: tzutc() if x == 'UTC' else gettz(
-            x)  # handle special case for utc in dateutil
-
-        from pandas import date_range
-        N = 50
-        # testing with timezone, GH #2785
-        rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
-        ts = Series(np.random.randn(N), index=rng)
-
-        # also test Timestamp tz handling, GH #2789
-        result = ts.copy()
-        result["1990-01-01 09:00:00+00:00"] = 0
-        result["1990-01-01 09:00:00+00:00"] = ts[4]
-        assert_series_equal(result, ts)
-
-        result = ts.copy()
-        result["1990-01-01 03:00:00-06:00"] = 0
-        result["1990-01-01 03:00:00-06:00"] = ts[4]
-        assert_series_equal(result, ts)
-
-        # repeat with datetimes
-        result = ts.copy()
-        result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
-        result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
-        assert_series_equal(result, ts)
-
-        result = ts.copy()
-        result[datetime(1990, 1, 1, 3, tzinfo=tz('US/Central'))] = 0
-        result[datetime(1990, 1, 1, 3, tzinfo=tz('US/Central'))] = ts[4]
-        assert_series_equal(result, ts)
-
-    def test_getitem_setitem_periodindex(self):
-        from pandas import period_range
-        N = 50
-        rng = period_range('1/1/1990', periods=N, freq='H')
-        ts = Series(np.random.randn(N), index=rng)
-
-        result = ts["1990-01-01 04"]
-        expected = ts[4]
-        self.assertEqual(result, expected)
-
-        result = ts.copy()
-        result["1990-01-01 04"] = 0
-        result["1990-01-01 04"] = ts[4]
-        assert_series_equal(result, ts)
-
-        result = ts["1990-01-01 04":"1990-01-01 07"]
-        expected = ts[4:8]
-        assert_series_equal(result, expected)
-
-        result = ts.copy()
-        result["1990-01-01 04":"1990-01-01 07"] = 0
-        result["1990-01-01 04":"1990-01-01 07"] = ts[4:8]
-        assert_series_equal(result, ts)
-
-        lb = "1990-01-01 04"
-        rb = "1990-01-01 07"
-        result = ts[(ts.index >= lb) & (ts.index <= rb)]
-        expected = ts[4:8]
-        assert_series_equal(result, expected)
-
-        # GH 2782
-        result = ts[ts.index[4]]
-        expected = ts[4]
-        self.assertEqual(result, expected)
-
-        result = ts[ts.index[4:8]]
-        expected = ts[4:8]
-        assert_series_equal(result, expected)
-
-        result = ts.copy()
-        result[ts.index[4:8]] = 0
-        result[4:8] = ts[4:8]
-        assert_series_equal(result, ts)
-
-    def test_asof_periodindex(self):
-        from pandas import period_range, PeriodIndex
-        # array or list or dates
-        N = 50
-        rng = period_range('1/1/1990', periods=N, freq='H')
-        ts = Series(np.random.randn(N), index=rng)
-        ts[15:30] = np.nan
-        dates = date_range('1/1/1990', periods=N * 3, freq='37min')
-
-        result = ts.asof(dates)
-        self.assertTrue(notnull(result).all())
-        lb = ts.index[14]
-        ub = ts.index[30]
-
-        result = ts.asof(list(dates))
-        self.assertTrue(notnull(result).all())
-        lb = ts.index[14]
-        ub = ts.index[30]
-
-        pix = PeriodIndex(result.index.values, freq='H')
-        mask = (pix >= lb) & (pix < ub)
-        rs = result[mask]
-        self.assertTrue((rs == ts[lb]).all())
-
-        ts[5:10] = np.NaN
-        ts[15:20] = np.NaN
-
-        val1 = ts.asof(ts.index[7])
-        val2 = ts.asof(ts.index[19])
-
-        self.assertEqual(val1, ts[4])
-        self.assertEqual(val2, ts[14])
-
-        # accepts strings
-        val1 = ts.asof(str(ts.index[7]))
-        self.assertEqual(val1, ts[4])
-
-        # in there
-        self.assertEqual(ts.asof(ts.index[3]), ts[3])
-
-        # no as of value
-        d = ts.index[0].to_timestamp() - datetools.bday
-        self.assertTrue(np.isnan(ts.asof(d)))
-
-    def test_asof_more(self):
-        from pandas import date_range
-        s = Series([nan, nan, 1, 2, nan, nan, 3, 4, 5],
-                   index=date_range('1/1/2000', periods=9))
-
-        dates = s.index[[4, 5, 6, 2, 1]]
-
-        result = s.asof(dates)
-        expected = Series([2, 2, 3, 1, np.nan], index=dates)
-
-        assert_series_equal(result, expected)
-
-        s = Series([1.5, 2.5, 1, 2, nan, nan, 3, 4, 5],
-                   index=date_range('1/1/2000', periods=9))
-        result = s.asof(s.index[0])
-        self.assertEqual(result, s[0])
-
-    def test_cast_on_putmask(self):
-
-        # GH 2746
-
-        # need to upcast
-        s = Series([1, 2], index=[1, 2], dtype='int64')
-        s[[True, False]] = Series([0], index=[1], dtype='int64')
-        expected = Series([0, 2], index=[1, 2], dtype='int64')
-
-        assert_series_equal(s, expected)
-
-    def test_type_promote_putmask(self):
-
-        # GH8387: test that changing types does not break alignment
-        ts = Series(np.random.randn(100), index=np.arange(100, 0, -1)).round(5)
-        left, mask = ts.copy(), ts > 0
-        right = ts[mask].copy().map(str)
-        left[mask] = right
-        assert_series_equal(left, ts.map(lambda t: str(t) if t > 0 else t))
-
-        s = Series([0, 1, 2, 0])
-        mask = s > 0
-        s2 = s[mask].map(str)
-        s[mask] = s2
-        assert_series_equal(s, Series([0, '1', '2', 0]))
-
-        s = Series([0, 'foo', 'bar', 0])
-        mask = Series([False, True, True, False])
-        s2 = s[mask]
-        s[mask] = s2
-        assert_series_equal(s, Series([0, 'foo', 'bar', 0]))
-
-    def test_astype_cast_nan_int(self):
-        df = Series([1.0, 2.0, 3.0, np.nan])
-        self.assertRaises(ValueError, df.astype, np.int64)
-
-    def test_astype_cast_object_int(self):
-        arr = Series(["car", "house", "tree", "1"])
-
-        self.assertRaises(ValueError, arr.astype, int)
-        self.assertRaises(ValueError, arr.astype, np.int64)
-        self.assertRaises(ValueError, arr.astype, np.int8)
-
-        arr = Series(['1', '2', '3', '4'], dtype=object)
-        result = arr.astype(int)
-        self.assert_numpy_array_equal(result, np.arange(1, 5))
-
-    def test_astype_datetimes(self):
-        import pandas.tslib as tslib
-
-        s = Series(tslib.iNaT, dtype='M8[ns]', index=lrange(5))
-        s = s.astype('O')
-        self.assertEqual(s.dtype, np.object_)
-
-        s = Series([datetime(2001, 1, 2, 0, 0)])
-        s = s.astype('O')
-        self.assertEqual(s.dtype, np.object_)
-
-        s = Series([datetime(2001, 1, 2, 0, 0) for i in range(3)])
-        s[1] = np.nan
-        self.assertEqual(s.dtype, 'M8[ns]')
-        s = s.astype('O')
-        self.assertEqual(s.dtype, np.object_)
-
-    def test_astype_str(self):
-        # GH4405
-        digits = string.digits
-        s1 = Series([digits * 10, tm.rands(63), tm.rands(64), tm.rands(1000)])
-        s2 = Series([digits * 10, tm.rands(63), tm.rands(64), nan, 1.0])
-        types = (compat.text_type, np.str_)
-        for typ in types:
-            for s in (s1, s2):
-                res = s.astype(typ)
-                expec = s.map(compat.text_type)
-                assert_series_equal(res, expec)
-
-        # GH9757
-        # Test str and unicode on python 2.x and just str on python 3.x
-        for tt in set([str, compat.text_type]):
-            ts = Series([Timestamp('2010-01-04 00:00:00')])
-            s = ts.astype(tt)
-            expected = Series([tt('2010-01-04')])
-            assert_series_equal(s, expected)
-
-            ts = Series([Timestamp('2010-01-04 00:00:00', tz='US/Eastern')])
-            s = ts.astype(tt)
-            expected = Series([tt('2010-01-04 00:00:00-05:00')])
-            assert_series_equal(s, expected)
-
-            td = Series([Timedelta(1, unit='d')])
-            s = td.astype(tt)
-            expected = Series([tt('1 days 00:00:00.000000000')])
-            assert_series_equal(s, expected)
-
-    def test_astype_unicode(self):
-
-        # GH7758
-        # a bit of magic is required to set default encoding encoding to utf-8
-        digits = string.digits
-        test_series = [
-            Series([digits * 10, tm.rands(63), tm.rands(64), tm.rands(1000)]),
-            Series([u('データーサイエンス、お前はもう死んでいる')]),
-
-        ]
-
-        former_encoding = None
-        if not compat.PY3:
-            # in python we can force the default encoding for this test
-            former_encoding = sys.getdefaultencoding()
-            reload(sys)  # noqa
-            sys.setdefaultencoding("utf-8")
-        if sys.getdefaultencoding() == "utf-8":
-            test_series.append(Series([u('野菜食べないとやばい')
-                                       .encode("utf-8")]))
-        for s in test_series:
-            res = s.astype("unicode")
-            expec = s.map(compat.text_type)
-            assert_series_equal(res, expec)
-        # restore the former encoding
-        if former_encoding is not None and former_encoding != "utf-8":
-            reload(sys)  # noqa
-            sys.setdefaultencoding(former_encoding)
-
-    def test_map(self):
-        index, data = tm.getMixedTypeDict()
-
-        source = Series(data['B'], index=data['C'])
-        target = Series(data['C'][:4], index=data['D'][:4])
-
-        merged = target.map(source)
-
-        for k, v in compat.iteritems(merged):
-            self.assertEqual(v, source[target[k]])
-
-        # input could be a dict
-        merged = target.map(source.to_dict())
-
-        for k, v in compat.iteritems(merged):
-            self.assertEqual(v, source[target[k]])
-
-        # function
-        result = self.ts.map(lambda x: x * 2)
-        self.assert_numpy_array_equal(result, self.ts * 2)
-
-        # GH 10324
-        a = Series([1, 2, 3, 4])
-        b = Series(["even", "odd", "even", "odd"], dtype="category")
-        c = Series(["even", "odd", "even", "odd"])
-
-        exp = Series(["odd", "even", "odd", np.nan], dtype="category")
-        self.assert_series_equal(a.map(b), exp)
-        exp = Series(["odd", "even", "odd", np.nan])
-        self.assert_series_equal(a.map(c), exp)
-
-        a = Series(['a', 'b', 'c', 'd'])
-        b = Series([1, 2, 3, 4],
-                   index=pd.CategoricalIndex(['b', 'c', 'd', 'e']))
-        c = Series([1, 2, 3, 4], index=Index(['b', 'c', 'd', 'e']))
-
-        exp = Series([np.nan, 1, 2, 3])
-        self.assert_series_equal(a.map(b), exp)
-        exp = Series([np.nan, 1, 2, 3])
-        self.assert_series_equal(a.map(c), exp)
-
-        a = Series(['a', 'b', 'c', 'd'])
-        b = Series(['B', 'C', 'D', 'E'], dtype='category',
-                   index=pd.CategoricalIndex(['b', 'c', 'd', 'e']))
-        c = Series(['B', 'C', 'D', 'E'], index=Index(['b', 'c', 'd', 'e']))
-
-        exp = Series([np.nan, 'B', 'C', 'D'], dtype='category')
-        self.assert_series_equal(a.map(b), exp)
-        exp = Series([np.nan, 'B', 'C', 'D'])
-        self.assert_series_equal(a.map(c), exp)
-
-    def test_map_compat(self):
-        # related GH 8024
-        s = Series([True, True, False], index=[1, 2, 3])
-        result = s.map({True: 'foo', False: 'bar'})
-        expected = Series(['foo', 'foo', 'bar'], index=[1, 2, 3])
-        assert_series_equal(result, expected)
-
-    def test_map_int(self):
-        left = Series({'a': 1., 'b': 2., 'c': 3., 'd': 4})
-        right = Series({1: 11, 2: 22, 3: 33})
-
-        self.assertEqual(left.dtype, np.float_)
-        self.assertTrue(issubclass(right.dtype.type, np.integer))
-
-        merged = left.map(right)
-        self.assertEqual(merged.dtype, np.float_)
-        self.assertTrue(isnull(merged['d']))
-        self.assertTrue(not isnull(merged['c']))
-
-    def test_map_type_inference(self):
-        s = Series(lrange(3))
-        s2 = s.map(lambda x: np.where(x == 0, 0, 1))
-        self.assertTrue(issubclass(s2.dtype.type, np.integer))
-
-    def test_divide_decimal(self):
-        ''' resolves issue #9787 '''
-        from decimal import Decimal
-
-        expected = Series([Decimal(5)])
-
-        s = Series([Decimal(10)])
-        s = s / Decimal(2)
-
-        tm.assert_series_equal(expected, s)
-
-        s = Series([Decimal(10)])
-        s = s // Decimal(2)
-
-        tm.assert_series_equal(expected, s)
-
-    def test_map_decimal(self):
-        from decimal import Decimal
-
-        result = self.series.map(lambda x: Decimal(str(x)))
-        self.assertEqual(result.dtype, np.object_)
-        tm.assertIsInstance(result[0], Decimal)
-
-    def test_map_na_exclusion(self):
-        s = Series([1.5, np.nan, 3, np.nan, 5])
-
-        result = s.map(lambda x: x * 2, na_action='ignore')
-        exp = s * 2
-        assert_series_equal(result, exp)
-
-    def test_map_dict_with_tuple_keys(self):
-        '''
-        Due to new MultiIndex-ing behaviour in v0.14.0,
-        dicts with tuple keys passed to map were being
-        converted to a multi-index, preventing tuple values
-        from being mapped properly.
-        '''
-        df = pd.DataFrame({'a': [(1, ), (2, ), (3, 4), (5, 6)]})
-        label_mappings = {(1, ): 'A', (2, ): 'B', (3, 4): 'A', (5, 6): 'B'}
-        df['labels'] = df['a'].map(label_mappings)
-        df['expected_labels'] = pd.Series(['A', 'B', 'A', 'B'], index=df.index)
-        # All labels should be filled now
-        tm.assert_series_equal(df['labels'], df['expected_labels'],
-                               check_names=False)
-
-    def test_apply(self):
-        assert_series_equal(self.ts.apply(np.sqrt), np.sqrt(self.ts))
-
-        # elementwise-apply
-        import math
-        assert_series_equal(self.ts.apply(math.exp), np.exp(self.ts))
-
-        # how to handle Series result, #2316
-        result = self.ts.apply(lambda x: Series(
-            [x, x ** 2], index=['x', 'x^2']))
-        expected = DataFrame({'x': self.ts, 'x^2': self.ts ** 2})
-        tm.assert_frame_equal(result, expected)
-
-        # empty series
-        s = Series(dtype=object, name='foo', index=pd.Index([], name='bar'))
-        rs = s.apply(lambda x: x)
-        tm.assert_series_equal(s, rs)
-        # check all metadata (GH 9322)
-        self.assertIsNot(s, rs)
-        self.assertIs(s.index, rs.index)
-        self.assertEqual(s.dtype, rs.dtype)
-        self.assertEqual(s.name, rs.name)
-
-        # index but no data
-        s = Series(index=[1, 2, 3])
-        rs = s.apply(lambda x: x)
-        tm.assert_series_equal(s, rs)
-
-    def test_apply_same_length_inference_bug(self):
-        s = Series([1, 2])
-        f = lambda x: (x, x + 1)
-
-        result = s.apply(f)
-        expected = s.map(f)
-        assert_series_equal(result, expected)
-
-        s = Series([1, 2, 3])
-        result = s.apply(f)
-        expected = s.map(f)
-        assert_series_equal(result, expected)
-
-    def test_apply_dont_convert_dtype(self):
-        s = Series(np.random.randn(10))
-
-        f = lambda x: x if x > 0 else np.nan
-        result = s.apply(f, convert_dtype=False)
-        self.assertEqual(result.dtype, object)
-
-    def test_convert_objects(self):
-
-        s = Series([1., 2, 3], index=['a', 'b', 'c'])
-        with tm.assert_produces_warning(FutureWarning):
-            result = s.convert_objects(convert_dates=False,
-                                       convert_numeric=True)
-        assert_series_equal(result, s)
-
-        # force numeric conversion
-        r = s.copy().astype('O')
-        r['a'] = '1'
-        with tm.assert_produces_warning(FutureWarning):
-            result = r.convert_objects(convert_dates=False,
-                                       convert_numeric=True)
-        assert_series_equal(result, s)
-
-        r = s.copy().astype('O')
-        r['a'] = '1.'
-        with tm.assert_produces_warning(FutureWarning):
-            result = r.convert_objects(convert_dates=False,
-                                       convert_numeric=True)
-        assert_series_equal(result, s)
-
-        r = s.copy().astype('O')
-        r['a'] = 'garbled'
-        expected = s.copy()
-        expected['a'] = np.nan
-        with tm.assert_produces_warning(FutureWarning):
-            result = r.convert_objects(convert_dates=False,
-                                       convert_numeric=True)
-        assert_series_equal(result, expected)
-
-        # GH 4119, not converting a mixed type (e.g.floats and object)
-        s = Series([1, 'na', 3, 4])
-        with tm.assert_produces_warning(FutureWarning):
-            result = s.convert_objects(convert_numeric=True)
-        expected = Series([1, np.nan, 3, 4])
-        assert_series_equal(result, expected)
-
-        s = Series([1, '', 3, 4])
-        with tm.assert_produces_warning(FutureWarning):
-            result = s.convert_objects(convert_numeric=True)
-        expected = Series([1, np.nan, 3, 4])
-        assert_series_equal(result, expected)
-
-        # dates
-        s = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0),
-                    datetime(2001, 1, 3, 0, 0)])
-        s2 = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0),
-                     datetime(2001, 1, 3, 0, 0), 'foo', 1.0, 1,
-                     Timestamp('20010104'), '20010105'],
-                    dtype='O')
-        with tm.assert_produces_warning(FutureWarning):
-            result = s.convert_objects(convert_dates=True,
-                                       convert_numeric=False)
-        expected = Series([Timestamp('20010101'), Timestamp('20010102'),
-                           Timestamp('20010103')], dtype='M8[ns]')
-        assert_series_equal(result, expected)
-
-        with tm.assert_produces_warning(FutureWarning):
-            result = s.convert_objects(convert_dates='coerce',
-                                       convert_numeric=False)
-        with tm.assert_produces_warning(FutureWarning):
-            result = s.convert_objects(convert_dates='coerce',
-                                       convert_numeric=True)
-        assert_series_equal(result, expected)
-
-        expected = Series([Timestamp('20010101'), Timestamp('20010102'),
-                           Timestamp('20010103'),
-                           lib.NaT, lib.NaT, lib.NaT, Timestamp('20010104'),
-                           Timestamp('20010105')], dtype='M8[ns]')
-        with tm.assert_produces_warning(FutureWarning):
-            result = s2.convert_objects(convert_dates='coerce',
-                                        convert_numeric=False)
-        assert_series_equal(result, expected)
-        with tm.assert_produces_warning(FutureWarning):
-            result = s2.convert_objects(convert_dates='coerce',
-                                        convert_numeric=True)
-        assert_series_equal(result, expected)
-
-        # preserver all-nans (if convert_dates='coerce')
-        s = Series(['foo', 'bar', 1, 1.0], dtype='O')
-        with tm.assert_produces_warning(FutureWarning):
-            result = s.convert_objects(convert_dates='coerce',
-                                       convert_numeric=False)
-        assert_series_equal(result, s)
-
-        # preserver if non-object
-        s = Series([1], dtype='float32')
-        with tm.assert_produces_warning(FutureWarning):
-            result = s.convert_objects(convert_dates='coerce',
-                                       convert_numeric=False)
-        assert_series_equal(result, s)
-
-        # r = s.copy()
-        # r[0] = np.nan
-        # result = r.convert_objects(convert_dates=True,convert_numeric=False)
-        # self.assertEqual(result.dtype, 'M8[ns]')
-
-        # dateutil parses some single letters into today's value as a date
-        for x in 'abcdefghijklmnopqrstuvwxyz':
-            s = Series([x])
-            with tm.assert_produces_warning(FutureWarning):
-                result = s.convert_objects(convert_dates='coerce')
-            assert_series_equal(result, s)
-            s = Series([x.upper()])
-            with tm.assert_produces_warning(FutureWarning):
-                result = s.convert_objects(convert_dates='coerce')
-            assert_series_equal(result, s)
-
-    def test_convert_objects_preserve_bool(self):
-        s = Series([1, True, 3, 5], dtype=object)
-        with tm.assert_produces_warning(FutureWarning):
-            r = s.convert_objects(convert_numeric=True)
-        e = Series([1, 1, 3, 5], dtype='i8')
-        tm.assert_series_equal(r, e)
-
-    def test_convert_objects_preserve_all_bool(self):
-        s = Series([False, True, False, False], dtype=object)
-        with tm.assert_produces_warning(FutureWarning):
-            r = s.convert_objects(convert_numeric=True)
-        e = Series([False, True, False, False], dtype=bool)
-        tm.assert_series_equal(r, e)
-
-    # GH 10265
-    def test_convert(self):
-        # Tests: All to nans, coerce, true
-        # Test coercion returns correct type
-        s = Series(['a', 'b', 'c'])
-        results = s._convert(datetime=True, coerce=True)
-        expected = Series([lib.NaT] * 3)
-        assert_series_equal(results, expected)
-
-        results = s._convert(numeric=True, coerce=True)
-        expected = Series([np.nan] * 3)
-        assert_series_equal(results, expected)
-
-        expected = Series([lib.NaT] * 3, dtype=np.dtype('m8[ns]'))
-        results = s._convert(timedelta=True, coerce=True)
-        assert_series_equal(results, expected)
-
-        dt = datetime(2001, 1, 1, 0, 0)
-        td = dt - datetime(2000, 1, 1, 0, 0)
-
-        # Test coercion with mixed types
-        s = Series(['a', '3.1415', dt, td])
-        results = s._convert(datetime=True, coerce=True)
-        expected = Series([lib.NaT, lib.NaT, dt, lib.NaT])
-        assert_series_equal(results, expected)
-
-        results = s._convert(numeric=True, coerce=True)
-        expected = Series([nan, 3.1415, nan, nan])
-        assert_series_equal(results, expected)
-
-        results = s._convert(timedelta=True, coerce=True)
-        expected = Series([lib.NaT, lib.NaT, lib.NaT, td],
-                          dtype=np.dtype('m8[ns]'))
-        assert_series_equal(results, expected)
-
-        # Test standard conversion returns original
-        results = s._convert(datetime=True)
-        assert_series_equal(results, s)
-        results = s._convert(numeric=True)
-        expected = Series([nan, 3.1415, nan, nan])
-        assert_series_equal(results, expected)
-        results = s._convert(timedelta=True)
-        assert_series_equal(results, s)
-
-        # test pass-through and non-conversion when other types selected
-        s = Series(['1.0', '2.0', '3.0'])
-        results = s._convert(datetime=True, numeric=True, timedelta=True)
-        expected = Series([1.0, 2.0, 3.0])
-        assert_series_equal(results, expected)
-        results = s._convert(True, False, True)
-        assert_series_equal(results, s)
-
-        s =
Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)], - dtype='O') - results = s._convert(datetime=True, numeric=True, timedelta=True) - expected = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, - 0)]) - assert_series_equal(results, expected) - results = s._convert(datetime=False, numeric=True, timedelta=True) - assert_series_equal(results, s) - - td = datetime(2001, 1, 1, 0, 0) - datetime(2000, 1, 1, 0, 0) - s = Series([td, td], dtype='O') - results = s._convert(datetime=True, numeric=True, timedelta=True) - expected = Series([td, td]) - assert_series_equal(results, expected) - results = s._convert(True, True, False) - assert_series_equal(results, s) - - s = Series([1., 2, 3], index=['a', 'b', 'c']) - result = s._convert(numeric=True) - assert_series_equal(result, s) - - # force numeric conversion - r = s.copy().astype('O') - r['a'] = '1' - result = r._convert(numeric=True) - assert_series_equal(result, s) - - r = s.copy().astype('O') - r['a'] = '1.' - result = r._convert(numeric=True) - assert_series_equal(result, s) - - r = s.copy().astype('O') - r['a'] = 'garbled' - result = r._convert(numeric=True) - expected = s.copy() - expected['a'] = nan - assert_series_equal(result, expected) - - # GH 4119, not converting a mixed type (e.g.floats and object) - s = Series([1, 'na', 3, 4]) - result = s._convert(datetime=True, numeric=True) - expected = Series([1, nan, 3, 4]) - assert_series_equal(result, expected) - - s = Series([1, '', 3, 4]) - result = s._convert(datetime=True, numeric=True) - assert_series_equal(result, expected) - - # dates - s = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), - datetime(2001, 1, 3, 0, 0)]) - s2 = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), - datetime(2001, 1, 3, 0, 0), 'foo', 1.0, 1, - Timestamp('20010104'), '20010105'], dtype='O') - - result = s._convert(datetime=True) - expected = Series([Timestamp('20010101'), Timestamp('20010102'), - Timestamp('20010103')], dtype='M8[ns]') 
-        assert_series_equal(result, expected)
-
-        result = s._convert(datetime=True, coerce=True)
-        assert_series_equal(result, expected)
-
-        expected = Series([Timestamp('20010101'), Timestamp('20010102'),
-                           Timestamp('20010103'), lib.NaT, lib.NaT, lib.NaT,
-                           Timestamp('20010104'), Timestamp('20010105')],
-                          dtype='M8[ns]')
-        result = s2._convert(datetime=True, numeric=False, timedelta=False,
-                             coerce=True)
-        assert_series_equal(result, expected)
-        result = s2._convert(datetime=True, coerce=True)
-        assert_series_equal(result, expected)
-
-        s = Series(['foo', 'bar', 1, 1.0], dtype='O')
-        result = s._convert(datetime=True, coerce=True)
-        expected = Series([lib.NaT] * 4)
-        assert_series_equal(result, expected)
-
-        # preserver if non-object
-        s = Series([1], dtype='float32')
-        result = s._convert(datetime=True, coerce=True)
-        assert_series_equal(result, s)
-
-        # r = s.copy()
-        # r[0] = np.nan
-        # result = r._convert(convert_dates=True,convert_numeric=False)
-        # self.assertEqual(result.dtype, 'M8[ns]')
-
-        # dateutil parses some single letters into today's value as a date
-        expected = Series([lib.NaT])
-        for x in 'abcdefghijklmnopqrstuvwxyz':
-            s = Series([x])
-            result = s._convert(datetime=True, coerce=True)
-            assert_series_equal(result, expected)
-            s = Series([x.upper()])
-            result = s._convert(datetime=True, coerce=True)
-            assert_series_equal(result, expected)
-
-    def test_convert_no_arg_error(self):
-        s = Series(['1.0', '2'])
-        self.assertRaises(ValueError, s._convert)
-
-    def test_convert_preserve_bool(self):
-        s = Series([1, True, 3, 5], dtype=object)
-        r = s._convert(datetime=True, numeric=True)
-        e = Series([1, 1, 3, 5], dtype='i8')
-        tm.assert_series_equal(r, e)
-
-    def test_convert_preserve_all_bool(self):
-        s = Series([False, True, False, False], dtype=object)
-        r = s._convert(datetime=True, numeric=True)
-        e = Series([False, True, False, False], dtype=bool)
-        tm.assert_series_equal(r, e)
-
-    def test_apply_args(self):
-        s = Series(['foo,bar'])
-
-        result = s.apply(str.split, args=(',', ))
-        self.assertEqual(result[0], ['foo', 'bar'])
-        tm.assertIsInstance(result[0], list)
-
-    def test_align(self):
-        def _check_align(a, b, how='left', fill=None):
-            aa, ab = a.align(b, join=how, fill_value=fill)
-
-            join_index = a.index.join(b.index, how=how)
-            if fill is not None:
-                diff_a = aa.index.difference(join_index)
-                diff_b = ab.index.difference(join_index)
-                if len(diff_a) > 0:
-                    self.assertTrue((aa.reindex(diff_a) == fill).all())
-                if len(diff_b) > 0:
-                    self.assertTrue((ab.reindex(diff_b) == fill).all())
-
-            ea = a.reindex(join_index)
-            eb = b.reindex(join_index)
-
-            if fill is not None:
-                ea = ea.fillna(fill)
-                eb = eb.fillna(fill)
-
-            assert_series_equal(aa, ea)
-            assert_series_equal(ab, eb)
-            self.assertEqual(aa.name, 'ts')
-            self.assertEqual(ea.name, 'ts')
-            self.assertEqual(ab.name, 'ts')
-            self.assertEqual(eb.name, 'ts')
-
-        for kind in JOIN_TYPES:
-            _check_align(self.ts[2:], self.ts[:-5], how=kind)
-            _check_align(self.ts[2:], self.ts[:-5], how=kind, fill=-1)
-
-            # empty left
-            _check_align(self.ts[:0], self.ts[:-5], how=kind)
-            _check_align(self.ts[:0], self.ts[:-5], how=kind, fill=-1)
-
-            # empty right
-            _check_align(self.ts[:-5], self.ts[:0], how=kind)
-            _check_align(self.ts[:-5], self.ts[:0], how=kind, fill=-1)
-
-            # both empty
-            _check_align(self.ts[:0], self.ts[:0], how=kind)
-            _check_align(self.ts[:0], self.ts[:0], how=kind, fill=-1)
-
-    def test_align_fill_method(self):
-        def _check_align(a, b, how='left', method='pad', limit=None):
-            aa, ab = a.align(b, join=how, method=method, limit=limit)
-
-            join_index = a.index.join(b.index, how=how)
-            ea = a.reindex(join_index)
-            eb = b.reindex(join_index)
-
-            ea = ea.fillna(method=method, limit=limit)
-            eb = eb.fillna(method=method, limit=limit)
-
-            assert_series_equal(aa, ea)
-            assert_series_equal(ab, eb)
-
-        for kind in JOIN_TYPES:
-            for meth in ['pad', 'bfill']:
-                _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth)
-                _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth,
-                             limit=1)
-
-                # empty left
-                _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth)
-                _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth,
-                             limit=1)
-
-                # empty right
-                _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth)
-                _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth,
-                             limit=1)
-
-                # both empty
-                _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth)
-                _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth,
-                             limit=1)
-
-    def test_align_nocopy(self):
-        b = self.ts[:5].copy()
-
-        # do copy
-        a = self.ts.copy()
-        ra, _ = a.align(b, join='left')
-        ra[:5] = 5
-        self.assertFalse((a[:5] == 5).any())
-
-        # do not copy
-        a = self.ts.copy()
-        ra, _ = a.align(b, join='left', copy=False)
-        ra[:5] = 5
-        self.assertTrue((a[:5] == 5).all())
-
-        # do copy
-        a = self.ts.copy()
-        b = self.ts[:5].copy()
-        _, rb = a.align(b, join='right')
-        rb[:3] = 5
-        self.assertFalse((b[:3] == 5).any())
-
-        # do not copy
-        a = self.ts.copy()
-        b = self.ts[:5].copy()
-        _, rb = a.align(b, join='right', copy=False)
-        rb[:2] = 5
-        self.assertTrue((b[:2] == 5).all())
-
-    def test_align_sameindex(self):
-        a, b = self.ts.align(self.ts, copy=False)
-        self.assertIs(a.index, self.ts.index)
-        self.assertIs(b.index, self.ts.index)
-
-        # a, b = self.ts.align(self.ts, copy=True)
-        # self.assertIsNot(a.index, self.ts.index)
-        # self.assertIsNot(b.index, self.ts.index)
-
-    def test_align_multiindex(self):
-        # GH 10665
-
-        midx = pd.MultiIndex.from_product([range(2), range(3), range(2)],
-                                          names=('a', 'b', 'c'))
-        idx = pd.Index(range(2), name='b')
-        s1 = pd.Series(np.arange(12, dtype='int64'), index=midx)
-        s2 = pd.Series(np.arange(2, dtype='int64'), index=idx)
-
-        # these must be the same results (but flipped)
-        res1l, res1r = s1.align(s2, join='left')
-        res2l, res2r = s2.align(s1, join='right')
-
-        expl = s1
-        tm.assert_series_equal(expl, res1l)
-        tm.assert_series_equal(expl, res2r)
-        expr = pd.Series([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx)
-        tm.assert_series_equal(expr, res1r)
-        tm.assert_series_equal(expr, res2l)
-
-        res1l, res1r = s1.align(s2, join='right')
-        res2l, res2r = s2.align(s1, join='left')
-
-        exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)],
-                                             names=('a', 'b', 'c'))
-        expl = pd.Series([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx)
-        tm.assert_series_equal(expl, res1l)
-        tm.assert_series_equal(expl, res2r)
-        expr = pd.Series([0, 0, 1, 1] * 2, index=exp_idx)
-        tm.assert_series_equal(expr, res1r)
-        tm.assert_series_equal(expr, res2l)
-
-    def test_reindex(self):
-
-        identity = self.series.reindex(self.series.index)
-
-        # __array_interface__ is not defined for older numpies
-        # and on some pythons
-        try:
-            self.assertTrue(np.may_share_memory(self.series.index,
-                                                identity.index))
-        except (AttributeError):
-            pass
-
-        self.assertTrue(identity.index.is_(self.series.index))
-        self.assertTrue(identity.index.identical(self.series.index))
-
-        subIndex = self.series.index[10:20]
-        subSeries = self.series.reindex(subIndex)
-
-        for idx, val in compat.iteritems(subSeries):
-            self.assertEqual(val, self.series[idx])
-
-        subIndex2 = self.ts.index[10:20]
-        subTS = self.ts.reindex(subIndex2)
-
-        for idx, val in compat.iteritems(subTS):
-            self.assertEqual(val, self.ts[idx])
-        stuffSeries = self.ts.reindex(subIndex)
-
-        self.assertTrue(np.isnan(stuffSeries).all())
-
-        # This is extremely important for the Cython code to not screw up
-        nonContigIndex = self.ts.index[::2]
-        subNonContig = self.ts.reindex(nonContigIndex)
-        for idx, val in compat.iteritems(subNonContig):
-            self.assertEqual(val, self.ts[idx])
-
-        # return a copy the same index here
-        result = self.ts.reindex()
-        self.assertFalse((result is self.ts))
-
-    def test_reindex_nan(self):
-        ts = Series([2, 3, 5, 7], index=[1, 4, nan, 8])
-
-        i, j = [nan, 1, nan, 8, 4, nan], [2, 0, 2, 3, 1, 2]
-        assert_series_equal(ts.reindex(i), ts.iloc[j])
-
-        ts.index = ts.index.astype('object')
-
-        # reindex coerces index.dtype to float, loc/iloc doesn't
-        assert_series_equal(ts.reindex(i), ts.iloc[j], check_index_type=False)
-
-    def test_reindex_corner(self):
-        # (don't forget to fix this) I think it's fixed
-        self.empty.reindex(self.ts.index, method='pad')  # it works
-
-        # corner case: pad empty series
-        reindexed = self.empty.reindex(self.ts.index, method='pad')
-
-        # pass non-Index
-        reindexed = self.ts.reindex(list(self.ts.index))
-        assert_series_equal(self.ts, reindexed)
-
-        # bad fill method
-        ts = self.ts[::2]
-        self.assertRaises(Exception, ts.reindex, self.ts.index, method='foo')
-
-    def test_reindex_pad(self):
-
-        s = Series(np.arange(10), dtype='int64')
-        s2 = s[::2]
-
-        reindexed = s2.reindex(s.index, method='pad')
-        reindexed2 = s2.reindex(s.index, method='ffill')
-        assert_series_equal(reindexed, reindexed2)
-
-        expected = Series([0, 0, 2, 2, 4, 4, 6, 6, 8, 8], index=np.arange(10))
-        assert_series_equal(reindexed, expected)
-
-        # GH4604
-        s = Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
-        new_index = ['a', 'g', 'c', 'f']
-        expected = Series([1, 1, 3, 3], index=new_index)
-
-        # this changes dtype because the ffill happens after
-        result = s.reindex(new_index).ffill()
-        assert_series_equal(result, expected.astype('float64'))
-
-        result = s.reindex(new_index).ffill(downcast='infer')
-        assert_series_equal(result, expected)
-
-        expected = Series([1, 5, 3, 5], index=new_index)
-        result = s.reindex(new_index, method='ffill')
-        assert_series_equal(result, expected)
-
-        # inferrence of new dtype
-        s = Series([True, False, False, True], index=list('abcd'))
-        new_index = 'agc'
-        result = s.reindex(list(new_index)).ffill()
-        expected = Series([True, True, False], index=list(new_index))
-        assert_series_equal(result, expected)
-
-        # GH4618 shifted series downcasting
-        s = Series(False, index=lrange(0, 5))
-        result = s.shift(1).fillna(method='bfill')
-        expected = Series(False, index=lrange(0, 5))
-        assert_series_equal(result, expected)
-
-    def test_reindex_nearest(self):
-        s = Series(np.arange(10, dtype='int64'))
-        target = [0.1, 0.9, 1.5, 2.0]
-        actual = s.reindex(target, method='nearest')
-        expected = Series(np.around(target).astype('int64'), target)
-        assert_series_equal(expected, actual)
-
-        actual = s.reindex_like(actual, method='nearest')
-        assert_series_equal(expected, actual)
-
-        actual = s.reindex_like(actual, method='nearest', tolerance=1)
-        assert_series_equal(expected, actual)
-
-        actual = s.reindex(target, method='nearest', tolerance=0.2)
-        expected = Series([0, 1, np.nan, 2], target)
-        assert_series_equal(expected, actual)
-
-    def test_reindex_backfill(self):
-        pass
-
-    def test_reindex_int(self):
-        ts = self.ts[::2]
-        int_ts = Series(np.zeros(len(ts), dtype=int), index=ts.index)
-
-        # this should work fine
-        reindexed_int = int_ts.reindex(self.ts.index)
-
-        # if NaNs introduced
-        self.assertEqual(reindexed_int.dtype, np.float_)
-
-        # NO NaNs introduced
-        reindexed_int = int_ts.reindex(int_ts.index[::2])
-        self.assertEqual(reindexed_int.dtype, np.int_)
-
-    def test_reindex_bool(self):
-
-        # A series other than float, int, string, or object
-        ts = self.ts[::2]
-        bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
-
-        # this should work fine
-        reindexed_bool = bool_ts.reindex(self.ts.index)
-
-        # if NaNs introduced
-        self.assertEqual(reindexed_bool.dtype, np.object_)
-
-        # NO NaNs introduced
-        reindexed_bool = bool_ts.reindex(bool_ts.index[::2])
-        self.assertEqual(reindexed_bool.dtype, np.bool_)
-
-    def test_reindex_bool_pad(self):
-        # fail
-        ts = self.ts[5:]
-        bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
-        filled_bool = bool_ts.reindex(self.ts.index, method='pad')
-        self.assertTrue(isnull(filled_bool[:5]).all())
-
-    def test_reindex_like(self):
-        other = self.ts[::2]
-        assert_series_equal(self.ts.reindex(other.index),
-                            self.ts.reindex_like(other))
-
-        # GH 7179
-        day1 = datetime(2013, 3, 5)
-        day2 = datetime(2013, 5, 5)
-        day3 = datetime(2014, 3, 5)
-
-        series1 = Series([5, None, None], [day1, day2, day3])
-        series2 = Series([None, None], [day1, day3])
-
-        result = series1.reindex_like(series2, method='pad')
-        expected = Series([5, np.nan], index=[day1, day3])
-        assert_series_equal(result, expected)
-
-    def test_reindex_fill_value(self):
-        # -----------------------------------------------------------
-        # floats
-        floats = Series([1., 2., 3.])
-        result = floats.reindex([1, 2, 3])
-        expected = Series([2., 3., np.nan], index=[1, 2, 3])
-        assert_series_equal(result, expected)
-
-        result = floats.reindex([1, 2, 3], fill_value=0)
-        expected = Series([2., 3., 0], index=[1, 2, 3])
-        assert_series_equal(result, expected)
-
-        # -----------------------------------------------------------
-        # ints
-        ints = Series([1, 2, 3])
-
-        result = ints.reindex([1, 2, 3])
-        expected = Series([2., 3., np.nan], index=[1, 2, 3])
-        assert_series_equal(result, expected)
-
-        # don't upcast
-        result = ints.reindex([1, 2, 3], fill_value=0)
-        expected = Series([2, 3, 0], index=[1, 2, 3])
-        self.assertTrue(issubclass(result.dtype.type, np.integer))
-        assert_series_equal(result, expected)
-
-        # -----------------------------------------------------------
-        # objects
-        objects = Series([1, 2, 3], dtype=object)
-
-        result = objects.reindex([1, 2, 3])
-        expected = Series([2, 3, np.nan], index=[1, 2, 3], dtype=object)
-        assert_series_equal(result, expected)
-
-        result = objects.reindex([1, 2, 3], fill_value='foo')
-        expected = Series([2, 3, 'foo'], index=[1, 2, 3], dtype=object)
-        assert_series_equal(result, expected)
-
-        # ------------------------------------------------------------
-        # bools
-        bools = Series([True, False, True])
-
-        result = bools.reindex([1, 2, 3])
-        expected = Series([False, True, np.nan], index=[1, 2, 3], dtype=object)
-        assert_series_equal(result, expected)
-
-        result = bools.reindex([1, 2, 3], fill_value=False)
-        expected = Series([False, True, False], index=[1, 2, 3])
-        assert_series_equal(result, expected)
-
-    def test_rename(self):
-        renamer = lambda x: x.strftime('%Y%m%d')
-        renamed = self.ts.rename(renamer)
-        self.assertEqual(renamed.index[0], renamer(self.ts.index[0]))
-
-        # dict
-        rename_dict = dict(zip(self.ts.index, renamed.index))
-        renamed2 = self.ts.rename(rename_dict)
-        assert_series_equal(renamed, renamed2)
-
-        # partial dict
-        s = Series(np.arange(4), index=['a', 'b', 'c', 'd'], dtype='int64')
-        renamed = s.rename({'b': 'foo', 'd': 'bar'})
-        self.assert_numpy_array_equal(renamed.index, ['a', 'foo', 'c', 'bar'])
-
-        # index with name
-        renamer = Series(np.arange(4),
-                         index=Index(['a', 'b', 'c', 'd'], name='name'),
-                         dtype='int64')
-        renamed = renamer.rename({})
-        self.assertEqual(renamed.index.name, renamer.index.name)
-
-    def test_rename_inplace(self):
-        renamer = lambda x: x.strftime('%Y%m%d')
-        expected = renamer(self.ts.index[0])
-
-        self.ts.rename(renamer, inplace=True)
-        self.assertEqual(self.ts.index[0], expected)
-
-    def test_preserveRefs(self):
-        seq = self.ts[[5, 10, 15]]
-        seq[1] = np.NaN
-        self.assertFalse(np.isnan(self.ts[10]))
-
-    def test_ne(self):
-        ts = Series([3, 4, 5, 6, 7], [3, 4, 5, 6, 7], dtype=float)
-        expected = [True, True, False, True, True]
-        self.assertTrue(tm.equalContents(ts.index != 5, expected))
-        self.assertTrue(tm.equalContents(~(ts.index == 5), expected))
-
-    def test_pad_nan(self):
-        x = Series([np.nan, 1., np.nan, 3., np.nan], ['z', 'a', 'b', 'c', 'd'],
-                   dtype=float)
-
-        x.fillna(method='pad', inplace=True)
-
-        expected = Series([np.nan, 1.0, 1.0, 3.0, 3.0],
-                          ['z', 'a', 'b', 'c', 'd'], dtype=float)
-        assert_series_equal(x[1:], expected[1:])
-        self.assertTrue(np.isnan(x[0]), np.isnan(expected[0]))
-
-    def test_unstack(self):
-        from numpy import nan
-        from pandas.util.testing import assert_frame_equal
-
-        index = MultiIndex(levels=[['bar', 'foo'], ['one', 'three', 'two']],
-                           labels=[[1, 1, 0, 0], [0, 1, 0, 2]])
-
-        s = Series(np.arange(4.), index=index)
-        unstacked = s.unstack()
-
-        expected = DataFrame([[2., nan, 3.], [0., 1., nan]],
-                             index=['bar', 'foo'],
-                             columns=['one', 'three', 'two'])
-
-        assert_frame_equal(unstacked, expected)
-
-        unstacked = s.unstack(level=0)
-        assert_frame_equal(unstacked, expected.T)
-
-        index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
-                           labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2],
-                                   [0, 1, 0, 1, 0, 1]])
-        s = Series(np.random.randn(6), index=index)
-        exp_index = MultiIndex(levels=[['one', 'two', 'three'], [0, 1]],
-                               labels=[[0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]])
-        expected = DataFrame({'bar': s.values}, index=exp_index).sortlevel(0)
-        unstacked = s.unstack(0)
-        assert_frame_equal(unstacked, expected)
-
-        # GH5873
-        idx = pd.MultiIndex.from_arrays([[101, 102], [3.5, np.nan]])
-        ts = pd.Series([1, 2], index=idx)
-        left = ts.unstack()
-        right = DataFrame([[nan, 1], [2, nan]], index=[101, 102],
-                          columns=[nan, 3.5])
-        print(left)
-        print(right)
-        assert_frame_equal(left, right)
-
-        idx = pd.MultiIndex.from_arrays([['cat', 'cat', 'cat', 'dog', 'dog'
-                                          ], ['a', 'a', 'b', 'a', 'b'],
-                                         [1, 2, 1, 1, np.nan]])
-        ts = pd.Series([1.0, 1.1, 1.2, 1.3, 1.4], index=idx)
-        right = DataFrame([[1.0, 1.3], [1.1, nan], [nan, 1.4], [1.2, nan]],
-                          columns=['cat', 'dog'])
-        tpls = [('a', 1), ('a', 2), ('b', nan), ('b', 1)]
-        right.index = pd.MultiIndex.from_tuples(tpls)
-        assert_frame_equal(ts.unstack(level=0), right)
-
-    def test_sortlevel(self):
-        mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
-        s = Series([1, 2], mi)
-        backwards = s.iloc[[1, 0]]
-
-        res = s.sortlevel('A')
-        assert_series_equal(backwards, res)
-
-        res = s.sortlevel(['A', 'B'])
-        assert_series_equal(backwards, res)
-
-        res = s.sortlevel('A', sort_remaining=False)
-        assert_series_equal(s, res)
-
-        res = s.sortlevel(['A', 'B'], sort_remaining=False)
-        assert_series_equal(s, res)
-
-    def test_head_tail(self):
-        assert_series_equal(self.series.head(), self.series[:5])
-        assert_series_equal(self.series.head(0), self.series[0:0])
-        assert_series_equal(self.series.tail(), self.series[-5:])
-        assert_series_equal(self.series.tail(0), self.series[0:0])
-
-    def test_isin(self):
-        s = Series(['A', 'B', 'C', 'a', 'B', 'B', 'A', 'C'])
-
-        result = s.isin(['A', 'C'])
-        expected = Series([True, False, True, False, False, False, True, True])
-        assert_series_equal(result, expected)
-
-    def test_isin_with_string_scalar(self):
-        # GH4763
-        s = Series(['A', 'B', 'C', 'a', 'B', 'B', 'A', 'C'])
-        with tm.assertRaises(TypeError):
-            s.isin('a')
-
-        with tm.assertRaises(TypeError):
-            s = Series(['aaa', 'b', 'c'])
-            s.isin('aaa')
-
-    def test_isin_with_i8(self):
-        # GH 5021
-
-        expected = Series([True, True, False, False, False])
-        expected2 = Series([False, True, False, False, False])
-
-        # datetime64[ns]
-        s = Series(date_range('jan-01-2013', 'jan-05-2013'))
-
-        result = s.isin(s[0:2])
-        assert_series_equal(result, expected)
-
-        result = s.isin(s[0:2].values)
-        assert_series_equal(result, expected)
-
-        # fails on dtype conversion in the first place
-        result = s.isin(s[0:2].values.astype('datetime64[D]'))
-        assert_series_equal(result, expected)
-
-        result = s.isin([s[1]])
-        assert_series_equal(result, expected2)
-
-        result = s.isin([np.datetime64(s[1])])
-        assert_series_equal(result, expected2)
-
-        # timedelta64[ns]
-        s = Series(pd.to_timedelta(lrange(5), unit='d'))
-        result = s.isin(s[0:2])
-        assert_series_equal(result, expected)
-
-# -----------------------------------------------------------------------------
-# timeseries-specific
-
-    def test_cummethods_bool(self):
-        # GH 6270
-        # looks like a buggy np.maximum.accumulate for numpy 1.6.1, py 3.2
-        def cummin(x):
-            return np.minimum.accumulate(x)
-
-        def cummax(x):
-            return np.maximum.accumulate(x)
-
-        a = pd.Series([False, False, False, True, True, False, False])
-        b = ~a
-        c = pd.Series([False] * len(b))
-        d = ~c
-        methods = {'cumsum': np.cumsum,
-                   'cumprod': np.cumprod,
-                   'cummin': cummin,
-                   'cummax': cummax}
-        args = product((a, b, c, d), methods)
-        for s, method in args:
-            expected = Series(methods[method](s.values))
-            result = getattr(s, method)()
-            assert_series_equal(result, expected)
-
-        e = pd.Series([False, True, nan, False])
-        cse = pd.Series([0, 1, nan, 1], dtype=object)
-        cpe = pd.Series([False, 0, nan, 0])
-        cmin = pd.Series([False, False, nan, False])
-        cmax = pd.Series([False, True, nan, True])
-        expecteds = {'cumsum': cse,
-                     'cumprod': cpe,
-                     'cummin': cmin,
-                     'cummax': cmax}
-
-        for method in methods:
-            res = getattr(e, method)()
-            assert_series_equal(res, expecteds[method])
-
-    def test_replace(self):
-        N = 100
-        ser = Series(np.random.randn(N))
-        ser[0:4] = np.nan
-        ser[6:10] = 0
-
-        # replace list with a single value
-        ser.replace([np.nan], -1, inplace=True)
-
-        exp = ser.fillna(-1)
-        assert_series_equal(ser, exp)
-
-        rs = ser.replace(0., np.nan)
-        ser[ser == 0.] = np.nan
-        assert_series_equal(rs, ser)
-
-        ser = Series(np.fabs(np.random.randn(N)), tm.makeDateIndex(N),
-                     dtype=object)
-        ser[:5] = np.nan
-        ser[6:10] = 'foo'
-        ser[20:30] = 'bar'
-
-        # replace list with a single value
-        rs = ser.replace([np.nan, 'foo', 'bar'], -1)
-
-        self.assertTrue((rs[:5] == -1).all())
-        self.assertTrue((rs[6:10] == -1).all())
-        self.assertTrue((rs[20:30] == -1).all())
-        self.assertTrue((isnull(ser[:5])).all())
-
-        # replace with different values
-        rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3})
-
-        self.assertTrue((rs[:5] == -1).all())
-        self.assertTrue((rs[6:10] == -2).all())
-        self.assertTrue((rs[20:30] == -3).all())
-        self.assertTrue((isnull(ser[:5])).all())
-
-        # replace with different values with 2 lists
-        rs2 = ser.replace([np.nan, 'foo', 'bar'], [-1, -2, -3])
-        assert_series_equal(rs, rs2)
-
-        # replace inplace
-        ser.replace([np.nan, 'foo', 'bar'], -1, inplace=True)
-
-        self.assertTrue((ser[:5] == -1).all())
-        self.assertTrue((ser[6:10] == -1).all())
-        self.assertTrue((ser[20:30] == -1).all())
-
-        ser = Series([np.nan, 0, np.inf])
-        assert_series_equal(ser.replace(np.nan, 0), ser.fillna(0))
-
-        ser = Series([np.nan, 0, 'foo', 'bar', np.inf, None, lib.NaT])
-        assert_series_equal(ser.replace(np.nan, 0), ser.fillna(0))
-        filled = ser.copy()
-        filled[4] = 0
-        assert_series_equal(ser.replace(np.inf, 0), filled)
-
-        ser = Series(self.ts.index)
-        assert_series_equal(ser.replace(np.nan, 0), ser.fillna(0))
-
-        # malformed
-        self.assertRaises(ValueError, ser.replace, [1, 2, 3], [np.nan, 0])
-
-        # make sure that we aren't just masking a TypeError because bools don't
-        # implement indexing
-        with tm.assertRaisesRegexp(TypeError, 'Cannot compare types .+'):
-            ser.replace([1, 2], [np.nan, 0])
-
-        ser = Series([0, 1, 2, 3, 4])
-        result = ser.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
-        assert_series_equal(result, Series([4, 3, 2, 1, 0]))
-
-        # API change from 0.12?
-        # GH 5319
-        ser = Series([0, np.nan, 2, 3, 4])
-        expected = ser.ffill()
-        result = ser.replace([np.nan])
-        assert_series_equal(result, expected)
-
-        ser = Series([0, np.nan, 2, 3, 4])
-        expected = ser.ffill()
-        result = ser.replace(np.nan)
-        assert_series_equal(result, expected)
-        # GH 5797
-        ser = Series(date_range('20130101', periods=5))
-        expected = ser.copy()
-        expected.loc[2] = Timestamp('20120101')
-        result = ser.replace({Timestamp('20130103'): Timestamp('20120101')})
-        assert_series_equal(result, expected)
-        result = ser.replace(Timestamp('20130103'), Timestamp('20120101'))
-        assert_series_equal(result, expected)
-
-    def test_replace_with_single_list(self):
-        ser = Series([0, 1, 2, 3, 4])
-        result = ser.replace([1, 2, 3])
-        assert_series_equal(result, Series([0, 0, 0, 0, 4]))
-
-        s = ser.copy()
-        s.replace([1, 2, 3], inplace=True)
-        assert_series_equal(s, Series([0, 0, 0, 0, 4]))
-
-        # make sure things don't get corrupted when fillna call fails
-        s = ser.copy()
-        with tm.assertRaises(ValueError):
-            s.replace([1, 2, 3], inplace=True, method='crash_cymbal')
-        assert_series_equal(s, ser)
-
-    def test_replace_mixed_types(self):
-        s = Series(np.arange(5), dtype='int64')
-
-        def check_replace(to_rep, val, expected):
-            sc = s.copy()
-            r = s.replace(to_rep, val)
-            sc.replace(to_rep, val, inplace=True)
-            assert_series_equal(expected, r)
-            assert_series_equal(expected, sc)
-
-        # should NOT upcast to float
-        e = Series([0, 1, 2, 3, 4])
-        tr, v = [3], [3.0]
-        check_replace(tr, v, e)
-
-        # MUST upcast to float
-        e = Series([0, 1, 2, 3.5, 4])
-        tr, v = [3], [3.5]
-        check_replace(tr, v, e)
-
-        # casts to object
-        e = Series([0, 1, 2, 3.5, 'a'])
-        tr, v = [3, 4], [3.5, 'a']
-        check_replace(tr, v, e)
-
-        # again casts to object
-        e = Series([0, 1, 2, 3.5, Timestamp('20130101')])
-        tr, v = [3, 4], [3.5, Timestamp('20130101')]
-        check_replace(tr, v, e)
-
-        # casts to float
-        e = Series([0, 1, 2, 3.5, 1])
-        tr, v = [3, 4], [3.5, True]
-        check_replace(tr, v, e)
-
-        # test an object with dates + floats + integers + strings
-        dr = date_range('1/1/2001', '1/10/2001',
-                        freq='D').to_series().reset_index(drop=True)
-        result = dr.astype(object).replace(
-            [dr[0], dr[1], dr[2]], [1.0, 2, 'a'])
-        expected = Series([1.0, 2, 'a'] + dr[3:].tolist(), dtype=object)
-        assert_series_equal(result, expected)
-
-    def test_replace_bool_with_string_no_op(self):
-        s = Series([True, False, True])
-        result = s.replace('fun', 'in-the-sun')
-        tm.assert_series_equal(s, result)
-
-    def test_replace_bool_with_string(self):
-        # nonexistent elements
-        s = Series([True, False, True])
-        result = s.replace(True, '2u')
-        expected = Series(['2u', False, '2u'])
-        tm.assert_series_equal(expected, result)
-
-    def test_replace_bool_with_bool(self):
-        s = Series([True, False, True])
-        result = s.replace(True, False)
-        expected = Series([False] * len(s))
-        tm.assert_series_equal(expected, result)
-
-    def test_replace_with_dict_with_bool_keys(self):
-        s = Series([True, False, True])
-        with tm.assertRaisesRegexp(TypeError, 'Cannot compare types .+'):
-            s.replace({'asdf': 'asdb', True: 'yes'})
-
-    def test_asfreq(self):
-        ts = Series([0., 1., 2.], index=[datetime(2009, 10, 30), datetime(
-            2009, 11, 30), datetime(2009, 12, 31)])
-
-        daily_ts = ts.asfreq('B')
-        monthly_ts = daily_ts.asfreq('BM')
-        self.assert_numpy_array_equal(monthly_ts, ts)
-
-        daily_ts = ts.asfreq('B', method='pad')
-        monthly_ts = daily_ts.asfreq('BM')
-        self.assert_numpy_array_equal(monthly_ts, ts)
-
-        daily_ts = ts.asfreq(datetools.bday)
-        monthly_ts = daily_ts.asfreq(datetools.bmonthEnd)
-        self.assert_numpy_array_equal(monthly_ts, ts)
-
-        result = ts[:0].asfreq('M')
-        self.assertEqual(len(result), 0)
-        self.assertIsNot(result, ts)
-
-    def test_diff(self):
-        # Just run the function
-        self.ts.diff()
-
-        # int dtype
-        a = 10000000000000000
-        b = a + 1
-        s = Series([a, b])
-
-        rs = s.diff()
-        self.assertEqual(rs[1], 1)
-
-        # neg n
-        rs = self.ts.diff(-1)
-        xp = self.ts - self.ts.shift(-1)
-        assert_series_equal(rs, xp)
-
-        # 0
-        rs = self.ts.diff(0)
-        xp = self.ts - self.ts
-        assert_series_equal(rs, xp)
-
-        # datetime diff (GH3100)
-        s = Series(date_range('20130102', periods=5))
-        rs = s - s.shift(1)
-        xp = s.diff()
-        assert_series_equal(rs, xp)
-
-        # timedelta diff
-        nrs = rs - rs.shift(1)
-        nxp = xp.diff()
-        assert_series_equal(nrs, nxp)
-
-        # with tz
-        s = Series(
-            date_range('2000-01-01 09:00:00', periods=5,
-                       tz='US/Eastern'), name='foo')
-        result = s.diff()
-        assert_series_equal(result, Series(
-            TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo'))
-
-    def test_pct_change(self):
-        rs = self.ts.pct_change(fill_method=None)
-        assert_series_equal(rs, self.ts / self.ts.shift(1) - 1)
-
-        rs = self.ts.pct_change(2)
-        filled = self.ts.fillna(method='pad')
-        assert_series_equal(rs, filled / filled.shift(2) - 1)
-
-        rs = self.ts.pct_change(fill_method='bfill', limit=1)
-        filled = self.ts.fillna(method='bfill', limit=1)
-        assert_series_equal(rs, filled / filled.shift(1) - 1)
-
-        rs = self.ts.pct_change(freq='5D')
-        filled = self.ts.fillna(method='pad')
-        assert_series_equal(rs, filled / filled.shift(freq='5D') - 1)
-
-    def test_pct_change_shift_over_nas(self):
-        s = Series([1., 1.5, np.nan, 2.5, 3.])
-
-        chg = s.pct_change()
-        expected = Series([np.nan, 0.5, np.nan, 2.5 / 1.5 - 1, .2])
-        assert_series_equal(chg, expected)
-
-    def test_autocorr(self):
-        # Just run the function
-        corr1 = self.ts.autocorr()
-
-        # Now run it with the lag parameter
-        corr2 = self.ts.autocorr(lag=1)
-
-        # corr() with lag needs Series of at least length 2
-        if len(self.ts) <= 2:
-            self.assertTrue(np.isnan(corr1))
-            self.assertTrue(np.isnan(corr2))
-        else:
-            self.assertEqual(corr1, corr2)
-
-        # Choose a random lag between 1 and length of Series - 2
-        # and compare the result with the Series corr() function
-        n = 1 + np.random.randint(max(1, len(self.ts) - 2))
-        corr1 = self.ts.corr(self.ts.shift(n))
-        corr2 = self.ts.autocorr(lag=n)
-
-        # corr() with lag needs Series of at least length 2
-        if len(self.ts) <= 2:
-            self.assertTrue(np.isnan(corr1))
-            self.assertTrue(np.isnan(corr2))
-        else:
-            self.assertEqual(corr1, corr2)
-
-    def test_first_last_valid(self):
-        ts = self.ts.copy()
-        ts[:5] = np.NaN
-
-        index = ts.first_valid_index()
-        self.assertEqual(index, ts.index[5])
-
-        ts[-5:] = np.NaN
-        index = ts.last_valid_index()
-        self.assertEqual(index, ts.index[-6])
-
-        ts[:] = np.nan
-        self.assertIsNone(ts.last_valid_index())
-        self.assertIsNone(ts.first_valid_index())
-
-        ser = Series([], index=[])
-        self.assertIsNone(ser.last_valid_index())
-        self.assertIsNone(ser.first_valid_index())
-
-    def test_mpl_compat_hack(self):
-        result = self.ts[:, np.newaxis]
-        expected = self.ts.values[:, np.newaxis]
-        assert_almost_equal(result, expected)
-
-# -----------------------------------------------------------------------------
-# GroupBy
-
-    def test_select(self):
-        n = len(self.ts)
-        result = self.ts.select(lambda x: x >= self.ts.index[n // 2])
-        expected = self.ts.reindex(self.ts.index[n // 2:])
-        assert_series_equal(result, expected)
-
-        result = self.ts.select(lambda x: x.weekday() == 2)
-        expected = self.ts[self.ts.index.weekday == 2]
-        assert_series_equal(result, expected)
-
-# -----------------------------------------------------------------------------
-# Misc not safe for sparse
-
-    def test_dropna_preserve_name(self):
-        self.ts[:5] = np.nan
-        result = self.ts.dropna()
-        self.assertEqual(result.name, self.ts.name)
-        name = self.ts.name
-        ts = self.ts.copy()
-        ts.dropna(inplace=True)
-        self.assertEqual(ts.name, name)
-
-    def test_numpy_unique(self):
-        # it works!
-        np.unique(self.ts)
-
-    def test_concat_empty_series_dtypes_roundtrips(self):
-
-        # round-tripping with self & like self
-        dtypes = map(np.dtype, ['float64', 'int8', 'uint8', 'bool', 'm8[ns]',
-                                'M8[ns]'])
-
-        for dtype in dtypes:
-            self.assertEqual(pd.concat([Series(dtype=dtype)]).dtype, dtype)
-            self.assertEqual(pd.concat([Series(dtype=dtype),
-                                        Series(dtype=dtype)]).dtype, dtype)
-
-        def int_result_type(dtype, dtype2):
-            typs = set([dtype.kind, dtype2.kind])
-            if not len(typs - set(['i', 'u', 'b'])) and (dtype.kind == 'i' or
-                                                         dtype2.kind == 'i'):
-                return 'i'
-            elif not len(typs - set(['u', 'b'])) and (dtype.kind == 'u' or
-                                                      dtype2.kind == 'u'):
-                return 'u'
-            return None
-
-        def float_result_type(dtype, dtype2):
-            typs = set([dtype.kind, dtype2.kind])
-            if not len(typs - set(['f', 'i', 'u'])) and (dtype.kind == 'f' or
-                                                         dtype2.kind == 'f'):
-                return 'f'
-            return None
-
-        def get_result_type(dtype, dtype2):
-            result = float_result_type(dtype, dtype2)
-            if result is not None:
-                return result
-            result = int_result_type(dtype, dtype2)
-            if result is not None:
-                return result
-            return 'O'
-
-        for dtype in dtypes:
-            for dtype2 in dtypes:
-                if dtype == dtype2:
-                    continue
-
-                expected = get_result_type(dtype, dtype2)
-                result = pd.concat([Series(dtype=dtype), Series(dtype=dtype2)
-                                    ]).dtype
-                self.assertEqual(result.kind, expected)
-
-    def test_concat_empty_series_dtypes(self):
-
-        # bools
-        self.assertEqual(pd.concat([Series(dtype=np.bool_),
-                                    Series(dtype=np.int32)]).dtype, np.int32)
-        self.assertEqual(pd.concat([Series(dtype=np.bool_),
-                                    Series(dtype=np.float32)]).dtype,
np.object_) - - # datetimelike - self.assertEqual(pd.concat([Series(dtype='m8[ns]'), - Series(dtype=np.bool)]).dtype, np.object_) - self.assertEqual(pd.concat([Series(dtype='m8[ns]'), - Series(dtype=np.int64)]).dtype, np.object_) - self.assertEqual(pd.concat([Series(dtype='M8[ns]'), - Series(dtype=np.bool)]).dtype, np.object_) - self.assertEqual(pd.concat([Series(dtype='M8[ns]'), - Series(dtype=np.int64)]).dtype, np.object_) - self.assertEqual(pd.concat([Series(dtype='M8[ns]'), - Series(dtype=np.bool_), - Series(dtype=np.int64)]).dtype, np.object_) - - # categorical - self.assertEqual(pd.concat([Series(dtype='category'), - Series(dtype='category')]).dtype, - 'category') - self.assertEqual(pd.concat([Series(dtype='category'), - Series(dtype='float64')]).dtype, - np.object_) - self.assertEqual(pd.concat([Series(dtype='category'), - Series(dtype='object')]).dtype, 'category') - - # sparse - result = pd.concat([Series(dtype='float64').to_sparse(), Series( - dtype='float64').to_sparse()]) - self.assertEqual(result.dtype, np.float64) - self.assertEqual(result.ftype, 'float64:sparse') - - result = pd.concat([Series(dtype='float64').to_sparse(), Series( - dtype='float64')]) - self.assertEqual(result.dtype, np.float64) - self.assertEqual(result.ftype, 'float64:sparse') - - result = pd.concat([Series(dtype='float64').to_sparse(), Series( - dtype='object')]) - self.assertEqual(result.dtype, np.object_) - self.assertEqual(result.ftype, 'object:dense') - - def test_searchsorted_numeric_dtypes_scalar(self): - s = Series([1, 2, 90, 1000, 3e9]) - r = s.searchsorted(30) - e = 2 - tm.assert_equal(r, e) - - r = s.searchsorted([30]) - e = np.array([2]) - tm.assert_numpy_array_equal(r, e) - - def test_searchsorted_numeric_dtypes_vector(self): - s = Series([1, 2, 90, 1000, 3e9]) - r = s.searchsorted([91, 2e6]) - e = np.array([3, 4]) - tm.assert_numpy_array_equal(r, e) - - def test_search_sorted_datetime64_scalar(self): - s = Series(pd.date_range('20120101', periods=10, freq='2D')) - v = 
pd.Timestamp('20120102') - r = s.searchsorted(v) - e = 1 - tm.assert_equal(r, e) - - def test_search_sorted_datetime64_list(self): - s = Series(pd.date_range('20120101', periods=10, freq='2D')) - v = [pd.Timestamp('20120102'), pd.Timestamp('20120104')] - r = s.searchsorted(v) - e = np.array([1, 2]) - tm.assert_numpy_array_equal(r, e) - - def test_searchsorted_sorter(self): - # GH8490 - s = Series([3, 1, 2]) - r = s.searchsorted([0, 3], sorter=np.argsort(s)) - e = np.array([0, 2]) - tm.assert_numpy_array_equal(r, e) - - def test_to_frame_expanddim(self): - # GH 9762 - - class SubclassedSeries(Series): - - @property - def _constructor_expanddim(self): - return SubclassedFrame - - class SubclassedFrame(DataFrame): - pass - - s = SubclassedSeries([1, 2, 3], name='X') - result = s.to_frame() - self.assertTrue(isinstance(result, SubclassedFrame)) - expected = SubclassedFrame({'X': [1, 2, 3]}) - assert_frame_equal(result, expected) - - def test_is_unique(self): - # GH11946 - s = Series(np.random.randint(0, 10, size=1000)) - self.assertFalse(s.is_unique) - s = Series(np.arange(1000)) - self.assertTrue(s.is_unique) - - -class TestSeriesNonUnique(tm.TestCase): - - _multiprocess_can_split_ = True - - def setUp(self): - pass - - def test_basic_indexing(self): - s = Series(np.random.randn(5), index=['a', 'b', 'a', 'a', 'b']) - - self.assertRaises(IndexError, s.__getitem__, 5) - self.assertRaises(IndexError, s.__setitem__, 5, 0) - - self.assertRaises(KeyError, s.__getitem__, 'c') - - s = s.sort_index() - - self.assertRaises(IndexError, s.__getitem__, 5) - self.assertRaises(IndexError, s.__setitem__, 5, 0) - - def test_int_indexing(self): - s = Series(np.random.randn(6), index=[0, 0, 1, 1, 2, 2]) - - self.assertRaises(KeyError, s.__getitem__, 5) - - self.assertRaises(KeyError, s.__getitem__, 'c') - - # not monotonic - s = Series(np.random.randn(6), index=[2, 2, 0, 0, 1, 1]) - - self.assertRaises(KeyError, s.__getitem__, 5) - - self.assertRaises(KeyError, s.__getitem__, 'c') - - 
def test_datetime_indexing(self): - from pandas import date_range - - index = date_range('1/1/2000', '1/7/2000') - index = index.repeat(3) - - s = Series(len(index), index=index) - stamp = Timestamp('1/8/2000') - - self.assertRaises(KeyError, s.__getitem__, stamp) - s[stamp] = 0 - self.assertEqual(s[stamp], 0) - - # not monotonic - s = Series(len(index), index=index) - s = s[::-1] - - self.assertRaises(KeyError, s.__getitem__, stamp) - s[stamp] = 0 - self.assertEqual(s[stamp], 0) - - def test_reset_index(self): - df = tm.makeDataFrame()[:5] - ser = df.stack() - ser.index.names = ['hash', 'category'] - - ser.name = 'value' - df = ser.reset_index() - self.assertIn('value', df) - - df = ser.reset_index(name='value2') - self.assertIn('value2', df) - - # check inplace - s = ser.reset_index(drop=True) - s2 = ser - s2.reset_index(drop=True, inplace=True) - assert_series_equal(s, s2) - - # level - index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], - labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], - [0, 1, 0, 1, 0, 1]]) - s = Series(np.random.randn(6), index=index) - rs = s.reset_index(level=1) - self.assertEqual(len(rs.columns), 2) - - rs = s.reset_index(level=[0, 2], drop=True) - self.assertTrue(rs.index.equals(Index(index.get_level_values(1)))) - tm.assertIsInstance(rs, Series) - - def test_reset_index_range(self): - # GH 12071 - s = pd.Series(range(2), name='A', dtype='int64') - series_result = s.reset_index() - tm.assertIsInstance(series_result.index, RangeIndex) - series_expected = pd.DataFrame([[0, 0], [1, 1]], - columns=['index', 'A'], - index=RangeIndex(stop=2)) - assert_frame_equal(series_result, series_expected) - - def test_set_index_makes_timeseries(self): - idx = tm.makeDateIndex(10) - - s = Series(lrange(10)) - s.index = idx - - with tm.assert_produces_warning(FutureWarning): - self.assertTrue(s.is_time_series) - self.assertTrue(s.index.is_all_dates) - - def test_timeseries_coercion(self): - idx = tm.makeDateIndex(10000) - ser = 
Series(np.random.randn(len(idx)), idx.astype(object)) - with tm.assert_produces_warning(FutureWarning): - self.assertTrue(ser.is_time_series) - self.assertTrue(ser.index.is_all_dates) - self.assertIsInstance(ser.index, DatetimeIndex) - - def test_replace(self): - N = 100 - ser = Series(np.fabs(np.random.randn(N)), tm.makeDateIndex(N), - dtype=object) - ser[:5] = np.nan - ser[6:10] = 'foo' - ser[20:30] = 'bar' - - # replace list with a single value - rs = ser.replace([np.nan, 'foo', 'bar'], -1) - - self.assertTrue((rs[:5] == -1).all()) - self.assertTrue((rs[6:10] == -1).all()) - self.assertTrue((rs[20:30] == -1).all()) - self.assertTrue((isnull(ser[:5])).all()) - - # replace with different values - rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3}) - - self.assertTrue((rs[:5] == -1).all()) - self.assertTrue((rs[6:10] == -2).all()) - self.assertTrue((rs[20:30] == -3).all()) - self.assertTrue((isnull(ser[:5])).all()) - - # replace with different values with 2 lists - rs2 = ser.replace([np.nan, 'foo', 'bar'], [-1, -2, -3]) - assert_series_equal(rs, rs2) - - # replace inplace - ser.replace([np.nan, 'foo', 'bar'], -1, inplace=True) - self.assertTrue((ser[:5] == -1).all()) - self.assertTrue((ser[6:10] == -1).all()) - self.assertTrue((ser[20:30] == -1).all()) - - def test_repeat(self): - s = Series(np.random.randn(3), index=['a', 'b', 'c']) - - reps = s.repeat(5) - exp = Series(s.values.repeat(5), index=s.index.values.repeat(5)) - assert_series_equal(reps, exp) - - to_rep = [2, 3, 4] - reps = s.repeat(to_rep) - exp = Series(s.values.repeat(to_rep), - index=s.index.values.repeat(to_rep)) - assert_series_equal(reps, exp) - - def test_unique_data_ownership(self): - # it works! 
#1807 - Series(Series(["a", "c", "b"]).unique()).sort_values() - - def test_datetime_timedelta_quantiles(self): - # covers #9694 - self.assertTrue(pd.isnull(Series([], dtype='M8[ns]').quantile(.5))) - self.assertTrue(pd.isnull(Series([], dtype='m8[ns]').quantile(.5))) - - def test_empty_timeseries_redections_return_nat(self): - # covers #11245 - for dtype in ('m8[ns]', 'm8[ns]', 'M8[ns]', 'M8[ns, UTC]'): - self.assertIs(Series([], dtype=dtype).min(), pd.NaT) - self.assertIs(Series([], dtype=dtype).max(), pd.NaT) - - -if __name__ == '__main__': - nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], - exit=False) diff --git a/setup.py b/setup.py index 2fdae88fe4b99..984714eb53527 100755 --- a/setup.py +++ b/setup.py @@ -536,6 +536,7 @@ def pxd(name): 'pandas.tests', 'pandas.tests.frame', 'pandas.tests.indexes', + 'pandas.tests.series', 'pandas.tests.test_msgpack', 'pandas.tools', 'pandas.tools.tests',
Similar to test_frame.py, but only 8K lines to deal with this time.
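The relocated tests assert plain invariants of ``Series`` behavior. A minimal sketch, for illustration only, of two of them: the insertion-point contract of ``searchsorted`` (covered by ``test_searchsorted_numeric_dtypes_scalar``) and the equivalence of ``diff(n)`` with subtracting a shifted copy (covered by ``test_diff``):

```python
import pandas as pd

# searchsorted returns the index at which each value would be inserted
# to keep the Series sorted; 30 slots in before 90, i.e. position 2.
s = pd.Series([1, 2, 90, 1000, 3e9])
pos = s.searchsorted([30])
print(pos[0])

# diff(n) is defined as the element-wise difference with a copy shifted
# by n positions, so s.diff() and s - s.shift(1) agree (NaN-for-NaN).
ts = pd.Series([1.0, 2.0, 4.0, 7.0])
assert ts.diff().equals(ts - ts.shift(1))
```

The ``equals`` comparison is used rather than ``==`` because the first element of both results is ``NaN``, and ``equals`` treats ``NaN`` values in the same location as equal.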
https://api.github.com/repos/pandas-dev/pandas/pulls/12130
2016-01-25T06:17:06Z
2016-01-26T14:00:02Z
null
2016-01-26T17:55:45Z
DEPR: GH10623 remove items from msgpack.encode for blocks
diff --git a/doc/source/io.rst b/doc/source/io.rst index e2f2301beb078..459d79ec4d98c 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -2539,6 +2539,24 @@ both on the writing (serialization), and reading (deserialization). optimizations in the io of the ``msgpack`` data. Since this is marked as an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future release. + As a result of writing format changes and other issues: + +----------------------+------------------------+ | Packed with | Can be unpacked with | +======================+========================+ | pre-0.17 / Python 2 | any | +----------------------+------------------------+ | pre-0.17 / Python 3 | any | +----------------------+------------------------+ | 0.17 / Python 2 | - 0.17 / Python 2 | | | - >=0.18 / any Python | +----------------------+------------------------+ | 0.17 / Python 3 | >=0.18 / any Python | +----------------------+------------------------+ | 0.18 | >= 0.18 | +----------------------+------------------------+ + + Reading files packed by older versions is backward-compatible, except for files packed with 0.17 in Python 2, which can only be unpacked in Python 2. + .. ipython:: python df = DataFrame(np.random.rand(5,2),columns=list('AB')) diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 8429739902927..47e78cf558a16 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -513,6 +513,33 @@ Subtraction by ``Timedelta`` in a ``Series`` by a ``Timestamp`` works (:issue:`1 ``pd.Timestamp`` to rehydrate any timestamp like object from its isoformat (:issue:`12300`).
+Changes to msgpack +^^^^^^^^^^^^^^^^^^ + +Forward-incompatible changes to the ``msgpack`` writing format were made over 0.17.0 and 0.18.0; older versions of pandas cannot read files packed by newer versions (:issue:`12129`, :issue:`10527`) + +A bug in ``to_msgpack`` and ``read_msgpack``, introduced in 0.17.0 and fixed in 0.18.0, made files packed in Python 2 unreadable in Python 3 (:issue:`12142`) + +.. warning:: + + As a result of a number of issues: + + +----------------------+------------------------+ + | Packed with | Can be unpacked with | + +======================+========================+ + | pre-0.17 / Python 2 | any | + +----------------------+------------------------+ + | pre-0.17 / Python 3 | any | + +----------------------+------------------------+ + | 0.17 / Python 2 | - 0.17 / Python 2 | + | | - >=0.18 / any Python | + +----------------------+------------------------+ + | 0.17 / Python 3 | >=0.18 / any Python | + +----------------------+------------------------+ + | 0.18 | >= 0.18 | + +----------------------+------------------------+ + + 0.18.0 is backward-compatible for reading files packed by older versions, except for files packed with 0.17 in Python 2, which can only be unpacked in Python 2. Signature change for .rank ^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -806,7 +833,6 @@ assignments are valid for multi-line expressions. Other API Changes ^^^^^^^^^^^^^^^^^ - - ``DataFrame.between_time`` and ``Series.between_time`` now only parse a fixed set of time strings. Parsing of date strings is no longer supported and raises a ``ValueError``. (:issue:`11818`) ..
ipython:: python diff --git a/pandas/core/common.py b/pandas/core/common.py index 70c02c5632d80..4f3ec58910950 100644 --- a/pandas/core/common.py +++ b/pandas/core/common.py @@ -3039,3 +3039,29 @@ def _random_state(state=None): else: raise ValueError("random_state must be an integer, a numpy " "RandomState, or None") + + +def pandas_dtype(dtype): + """ + Converts input into a pandas only dtype object or a numpy dtype object. + + Parameters + ---------- + dtype : object to be converted + + Returns + ------- + np.dtype or a pandas dtype + """ + if isinstance(dtype, compat.string_types): + try: + return DatetimeTZDtype.construct_from_string(dtype) + except TypeError: + pass + + try: + return CategoricalDtype.construct_from_string(dtype) + except TypeError: + pass + + return np.dtype(dtype) diff --git a/pandas/core/internals.py b/pandas/core/internals.py index 8973ea025e611..c6b04757e201c 100644 --- a/pandas/core/internals.py +++ b/pandas/core/internals.py @@ -2098,6 +2098,14 @@ def __init__(self, values, placement, ndim=2, **kwargs): if not isinstance(values, self._holder): values = self._holder(values) + + dtype = kwargs.pop('dtype', None) + + if dtype is not None: + if isinstance(dtype, compat.string_types): + dtype = DatetimeTZDtype.construct_from_string(dtype) + values = values.tz_localize('UTC').tz_convert(dtype.tz) + if values.tz is None: raise ValueError("cannot create a DatetimeTZBlock without a tz") @@ -2428,6 +2436,10 @@ def make_block(values, placement, klass=None, ndim=None, dtype=None, else: klass = ObjectBlock + elif klass is DatetimeTZBlock and not is_datetimetz(values): + return klass(values, ndim=ndim, fastpath=fastpath, + placement=placement, dtype=dtype) + return klass(values, ndim=ndim, fastpath=fastpath, placement=placement) # TODO: flexible with index=None and/or items=None diff --git a/pandas/io/packers.py b/pandas/io/packers.py index 372c8d80e5a1a..701b78d2771fb 100644 --- a/pandas/io/packers.py +++ b/pandas/io/packers.py @@ -44,7 +44,7 @@ 
import numpy as np from pandas import compat -from pandas.compat import u +from pandas.compat import u, u_safe from pandas import (Timestamp, Period, Series, DataFrame, # noqa Index, MultiIndex, Float64Index, Int64Index, Panel, RangeIndex, PeriodIndex, DatetimeIndex, NaT) @@ -52,7 +52,7 @@ from pandas.sparse.api import SparseSeries, SparseDataFrame, SparsePanel from pandas.sparse.array import BlockIndex, IntIndex from pandas.core.generic import NDFrame -from pandas.core.common import needs_i8_conversion +from pandas.core.common import needs_i8_conversion, pandas_dtype from pandas.io.common import get_filepath_or_buffer from pandas.core.internals import BlockManager, make_block import pandas.core.internals as internals @@ -84,6 +84,8 @@ def to_msgpack(path_or_buf, *args, **kwargs): """ global compressor compressor = kwargs.pop('compress', None) + if compressor: + compressor = u(compressor) append = kwargs.pop('append', None) if append: mode = 'a+b' @@ -180,7 +182,7 @@ def dtype_for(t): """ return my dtype mapping, whether number or name """ if t in dtype_dict: return dtype_dict[t] - return np.typeDict[t] + return np.typeDict.get(t, t) c2f_dict = {'complex': np.float64, 'complex128': np.float64, @@ -248,15 +250,17 @@ def unconvert(values, dtype, compress=None): if dtype == np.object_: return np.array(values, dtype=object) + dtype = pandas_dtype(dtype).base + if not as_is_ext: values = values.encode('latin1') - if compress == 'zlib': + if compress == u'zlib': import zlib values = zlib.decompress(values) return np.frombuffer(values, dtype=dtype) - elif compress == 'blosc': + elif compress == u'blosc': import blosc values = blosc.decompress(values) return np.frombuffer(values, dtype=dtype) @@ -269,53 +273,52 @@ def encode(obj): """ Data encoder """ - tobj = type(obj) if isinstance(obj, Index): if isinstance(obj, RangeIndex): - return {'typ': 'range_index', - 'klass': obj.__class__.__name__, - 'name': getattr(obj, 'name', None), - 'start': getattr(obj, '_start', None), - 
'stop': getattr(obj, '_stop', None), - 'step': getattr(obj, '_step', None)} + return {u'typ': u'range_index', + u'klass': u(obj.__class__.__name__), + u'name': getattr(obj, 'name', None), + u'start': getattr(obj, '_start', None), + u'stop': getattr(obj, '_stop', None), + u'step': getattr(obj, '_step', None)} elif isinstance(obj, PeriodIndex): - return {'typ': 'period_index', - 'klass': obj.__class__.__name__, - 'name': getattr(obj, 'name', None), - 'freq': getattr(obj, 'freqstr', None), - 'dtype': obj.dtype.name, - 'data': convert(obj.asi8), - 'compress': compressor} + return {u'typ': u'period_index', + u'klass': u(obj.__class__.__name__), + u'name': getattr(obj, 'name', None), + u'freq': u_safe(getattr(obj, 'freqstr', None)), + u'dtype': u(obj.dtype.name), + u'data': convert(obj.asi8), + u'compress': compressor} elif isinstance(obj, DatetimeIndex): tz = getattr(obj, 'tz', None) # store tz info and data as UTC if tz is not None: - tz = tz.zone + tz = u(tz.zone) obj = obj.tz_convert('UTC') - return {'typ': 'datetime_index', - 'klass': obj.__class__.__name__, - 'name': getattr(obj, 'name', None), - 'dtype': obj.dtype.name, - 'data': convert(obj.asi8), - 'freq': getattr(obj, 'freqstr', None), - 'tz': tz, - 'compress': compressor} + return {u'typ': u'datetime_index', + u'klass': u(obj.__class__.__name__), + u'name': getattr(obj, 'name', None), + u'dtype': u(obj.dtype.name), + u'data': convert(obj.asi8), + u'freq': u_safe(getattr(obj, 'freqstr', None)), + u'tz': tz, + u'compress': compressor} elif isinstance(obj, MultiIndex): - return {'typ': 'multi_index', - 'klass': obj.__class__.__name__, - 'names': getattr(obj, 'names', None), - 'dtype': obj.dtype.name, - 'data': convert(obj.values), - 'compress': compressor} + return {u'typ': u'multi_index', + u'klass': u(obj.__class__.__name__), + u'names': getattr(obj, 'names', None), + u'dtype': u(obj.dtype.name), + u'data': convert(obj.values), + u'compress': compressor} else: - return {'typ': 'index', - 'klass': 
obj.__class__.__name__, - 'name': getattr(obj, 'name', None), - 'dtype': obj.dtype.name, - 'data': convert(obj.values), - 'compress': compressor} + return {u'typ': u'index', + u'klass': u(obj.__class__.__name__), + u'name': getattr(obj, 'name', None), + u'dtype': u(obj.dtype.name), + u'data': convert(obj.values), + u'compress': compressor} elif isinstance(obj, Series): if isinstance(obj, SparseSeries): raise NotImplementedError( @@ -332,13 +335,13 @@ def encode(obj): # d[f] = getattr(obj, f, None) # return d else: - return {'typ': 'series', - 'klass': obj.__class__.__name__, - 'name': getattr(obj, 'name', None), - 'index': obj.index, - 'dtype': obj.dtype.name, - 'data': convert(obj.values), - 'compress': compressor} + return {u'typ': u'series', + u'klass': u(obj.__class__.__name__), + u'name': getattr(obj, 'name', None), + u'index': obj.index, + u'dtype': u(obj.dtype.name), + u'data': convert(obj.values), + u'compress': compressor} elif issubclass(tobj, NDFrame): if isinstance(obj, SparseDataFrame): raise NotImplementedError( @@ -371,86 +374,85 @@ def encode(obj): data = data.consolidate() # the block manager - return {'typ': 'block_manager', - 'klass': obj.__class__.__name__, - 'axes': data.axes, - 'blocks': [{'items': data.items.take(b.mgr_locs), - 'locs': b.mgr_locs.as_array, - 'values': convert(b.values), - 'shape': b.values.shape, - 'dtype': b.dtype.name, - 'klass': b.__class__.__name__, - 'compress': compressor - } for b in data.blocks]} + return {u'typ': u'block_manager', + u'klass': u(obj.__class__.__name__), + u'axes': data.axes, + u'blocks': [{u'locs': b.mgr_locs.as_array, + u'values': convert(b.values), + u'shape': b.values.shape, + u'dtype': u(b.dtype.name), + u'klass': u(b.__class__.__name__), + u'compress': compressor} for b in data.blocks] + } elif isinstance(obj, (datetime, date, np.datetime64, timedelta, np.timedelta64, NaTType)): if isinstance(obj, Timestamp): tz = obj.tzinfo if tz is not None: - tz = tz.zone + tz = u(tz.zone) offset = obj.offset 
if offset is not None: - offset = offset.freqstr - return {'typ': 'timestamp', - 'value': obj.value, - 'offset': offset, - 'tz': tz} + offset = u(offset.freqstr) + return {u'typ': u'timestamp', + u'value': obj.value, + u'offset': offset, + u'tz': tz} if isinstance(obj, NaTType): - return {'typ': 'nat'} + return {u'typ': u'nat'} elif isinstance(obj, np.timedelta64): - return {'typ': 'timedelta64', - 'data': obj.view('i8')} + return {u'typ': u'timedelta64', + u'data': obj.view('i8')} elif isinstance(obj, timedelta): - return {'typ': 'timedelta', - 'data': (obj.days, obj.seconds, obj.microseconds)} + return {u'typ': u'timedelta', + u'data': (obj.days, obj.seconds, obj.microseconds)} elif isinstance(obj, np.datetime64): - return {'typ': 'datetime64', - 'data': str(obj)} + return {u'typ': u'datetime64', + u'data': u(str(obj))} elif isinstance(obj, datetime): - return {'typ': 'datetime', - 'data': obj.isoformat()} + return {u'typ': u'datetime', + u'data': u(obj.isoformat())} elif isinstance(obj, date): - return {'typ': 'date', - 'data': obj.isoformat()} + return {u'typ': u'date', + u'data': u(obj.isoformat())} raise Exception("cannot encode this datetimelike object: %s" % obj) elif isinstance(obj, Period): - return {'typ': 'period', - 'ordinal': obj.ordinal, - 'freq': obj.freq} + return {u'typ': u'period', + u'ordinal': obj.ordinal, + u'freq': u(obj.freq)} elif isinstance(obj, BlockIndex): - return {'typ': 'block_index', - 'klass': obj.__class__.__name__, - 'blocs': obj.blocs, - 'blengths': obj.blengths, - 'length': obj.length} + return {u'typ': u'block_index', + u'klass': u(obj.__class__.__name__), + u'blocs': obj.blocs, + u'blengths': obj.blengths, + u'length': obj.length} elif isinstance(obj, IntIndex): - return {'typ': 'int_index', - 'klass': obj.__class__.__name__, - 'indices': obj.indices, - 'length': obj.length} + return {u'typ': u'int_index', + u'klass': u(obj.__class__.__name__), + u'indices': obj.indices, + u'length': obj.length} elif isinstance(obj, 
np.ndarray): - return {'typ': 'ndarray', - 'shape': obj.shape, - 'ndim': obj.ndim, - 'dtype': obj.dtype.name, - 'data': convert(obj), - 'compress': compressor} + return {u'typ': u'ndarray', + u'shape': obj.shape, + u'ndim': obj.ndim, + u'dtype': u(obj.dtype.name), + u'data': convert(obj), + u'compress': compressor} elif isinstance(obj, np.number): if np.iscomplexobj(obj): - return {'typ': 'np_scalar', - 'sub_typ': 'np_complex', - 'dtype': obj.dtype.name, - 'real': obj.real.__repr__(), - 'imag': obj.imag.__repr__()} + return {u'typ': u'np_scalar', + u'sub_typ': u'np_complex', + u'dtype': u(obj.dtype.name), + u'real': u(obj.real.__repr__()), + u'imag': u(obj.imag.__repr__())} else: - return {'typ': 'np_scalar', - 'dtype': obj.dtype.name, - 'data': obj.__repr__()} + return {u'typ': u'np_scalar', + u'dtype': u(obj.dtype.name), + u'data': u(obj.__repr__())} elif isinstance(obj, complex): - return {'typ': 'np_complex', - 'real': obj.real.__repr__(), - 'imag': obj.imag.__repr__()} + return {u'typ': u'np_complex', + u'real': u(obj.real.__repr__()), + u'imag': u(obj.imag.__repr__())} return obj @@ -460,83 +462,91 @@ def decode(obj): Decoder for deserializing numpy data types. 
""" - typ = obj.get('typ') + typ = obj.get(u'typ') if typ is None: return obj - elif typ == 'timestamp': - return Timestamp(obj['value'], tz=obj['tz'], offset=obj['offset']) - elif typ == 'nat': + elif typ == u'timestamp': + return Timestamp(obj[u'value'], tz=obj[u'tz'], offset=obj[u'offset']) + elif typ == u'nat': return NaT - elif typ == 'period': - return Period(ordinal=obj['ordinal'], freq=obj['freq']) - elif typ == 'index': - dtype = dtype_for(obj['dtype']) - data = unconvert(obj['data'], dtype, - obj.get('compress')) - return globals()[obj['klass']](data, dtype=dtype, name=obj['name']) - elif typ == 'range_index': - return globals()[obj['klass']](obj['start'], - obj['stop'], - obj['step'], - name=obj['name']) - elif typ == 'multi_index': - dtype = dtype_for(obj['dtype']) - data = unconvert(obj['data'], dtype, - obj.get('compress')) + elif typ == u'period': + return Period(ordinal=obj[u'ordinal'], freq=obj[u'freq']) + elif typ == u'index': + dtype = dtype_for(obj[u'dtype']) + data = unconvert(obj[u'data'], dtype, + obj.get(u'compress')) + return globals()[obj[u'klass']](data, dtype=dtype, name=obj[u'name']) + elif typ == u'range_index': + return globals()[obj[u'klass']](obj[u'start'], + obj[u'stop'], + obj[u'step'], + name=obj[u'name']) + elif typ == u'multi_index': + dtype = dtype_for(obj[u'dtype']) + data = unconvert(obj[u'data'], dtype, + obj.get(u'compress')) data = [tuple(x) for x in data] - return globals()[obj['klass']].from_tuples(data, names=obj['names']) - elif typ == 'period_index': - data = unconvert(obj['data'], np.int64, obj.get('compress')) - d = dict(name=obj['name'], freq=obj['freq']) - return globals()[obj['klass']](data, **d) - elif typ == 'datetime_index': - data = unconvert(obj['data'], np.int64, obj.get('compress')) - d = dict(name=obj['name'], freq=obj['freq'], verify_integrity=False) - result = globals()[obj['klass']](data, **d) - tz = obj['tz'] + return globals()[obj[u'klass']].from_tuples(data, names=obj[u'names']) + elif typ == 
u'period_index': + data = unconvert(obj[u'data'], np.int64, obj.get(u'compress')) + d = dict(name=obj[u'name'], freq=obj[u'freq']) + return globals()[obj[u'klass']](data, **d) + elif typ == u'datetime_index': + data = unconvert(obj[u'data'], np.int64, obj.get(u'compress')) + d = dict(name=obj[u'name'], freq=obj[u'freq'], verify_integrity=False) + result = globals()[obj[u'klass']](data, **d) + tz = obj[u'tz'] # reverse tz conversion if tz is not None: result = result.tz_localize('UTC').tz_convert(tz) return result - elif typ == 'series': - dtype = dtype_for(obj['dtype']) - index = obj['index'] - return globals()[obj['klass']](unconvert(obj['data'], dtype, - obj['compress']), - index=index, - dtype=dtype, - name=obj['name']) - elif typ == 'block_manager': - axes = obj['axes'] + elif typ == u'series': + dtype = dtype_for(obj[u'dtype']) + pd_dtype = pandas_dtype(dtype) + np_dtype = pandas_dtype(dtype).base + index = obj[u'index'] + result = globals()[obj[u'klass']](unconvert(obj[u'data'], dtype, + obj[u'compress']), + index=index, + dtype=np_dtype, + name=obj[u'name']) + tz = getattr(pd_dtype, 'tz', None) + if tz: + result = result.dt.tz_localize('UTC').dt.tz_convert(tz) + return result + + elif typ == u'block_manager': + axes = obj[u'axes'] def create_block(b): - values = unconvert(b['values'], dtype_for(b['dtype']), - b['compress']).reshape(b['shape']) + values = unconvert(b[u'values'], dtype_for(b[u'dtype']), + b[u'compress']).reshape(b[u'shape']) # locs handles duplicate column names, and should be used instead # of items; see GH 9618 - if 'locs' in b: - placement = b['locs'] + if u'locs' in b: + placement = b[u'locs'] else: - placement = axes[0].get_indexer(b['items']) + placement = axes[0].get_indexer(b[u'items']) return make_block(values=values, - klass=getattr(internals, b['klass']), - placement=placement) - - blocks = [create_block(b) for b in obj['blocks']] - return globals()[obj['klass']](BlockManager(blocks, axes)) - elif typ == 'datetime': - return 
parse(obj['data']) - elif typ == 'datetime64': - return np.datetime64(parse(obj['data'])) - elif typ == 'date': - return parse(obj['data']).date() - elif typ == 'timedelta': - return timedelta(*obj['data']) - elif typ == 'timedelta64': - return np.timedelta64(int(obj['data'])) + klass=getattr(internals, b[u'klass']), + placement=placement, + dtype=b[u'dtype']) + + blocks = [create_block(b) for b in obj[u'blocks']] + return globals()[obj[u'klass']](BlockManager(blocks, axes)) + elif typ == u'datetime': + return parse(obj[u'data']) + elif typ == u'datetime64': + return np.datetime64(parse(obj[u'data'])) + elif typ == u'date': + return parse(obj[u'data']).date() + elif typ == u'timedelta': + return timedelta(*obj[u'data']) + elif typ == u'timedelta64': + return np.timedelta64(int(obj[u'data'])) # elif typ == 'sparse_series': # dtype = dtype_for(obj['dtype']) # return globals()[obj['klass']]( @@ -554,25 +564,25 @@ def create_block(b): # obj['data'], items=obj['items'], # default_fill_value=obj['default_fill_value'], # default_kind=obj['default_kind']) - elif typ == 'block_index': - return globals()[obj['klass']](obj['length'], obj['blocs'], - obj['blengths']) - elif typ == 'int_index': - return globals()[obj['klass']](obj['length'], obj['indices']) - elif typ == 'ndarray': - return unconvert(obj['data'], np.typeDict[obj['dtype']], - obj.get('compress')).reshape(obj['shape']) - elif typ == 'np_scalar': - if obj.get('sub_typ') == 'np_complex': - return c2f(obj['real'], obj['imag'], obj['dtype']) + elif typ == u'block_index': + return globals()[obj[u'klass']](obj[u'length'], obj[u'blocs'], + obj[u'blengths']) + elif typ == u'int_index': + return globals()[obj[u'klass']](obj[u'length'], obj[u'indices']) + elif typ == u'ndarray': + return unconvert(obj[u'data'], np.typeDict[obj[u'dtype']], + obj.get(u'compress')).reshape(obj[u'shape']) + elif typ == u'np_scalar': + if obj.get(u'sub_typ') == u'np_complex': + return c2f(obj[u'real'], obj[u'imag'], obj[u'dtype']) else: - dtype 
= dtype_for(obj['dtype']) + dtype = dtype_for(obj[u'dtype']) try: - return dtype(obj['data']) + return dtype(obj[u'data']) except: - return dtype.type(obj['data']) - elif typ == 'np_complex': - return complex(obj['real'] + '+' + obj['imag'] + 'j') + return dtype.type(obj[u'data']) + elif typ == u'np_complex': + return complex(obj[u'real'] + u'+' + obj[u'imag'] + u'j') elif isinstance(obj, (dict, list, set)): return obj else: diff --git a/pandas/io/tests/data/legacy_msgpack/0.17.1/0.17.1_x86_64_linux_2.7.11.msgpack b/pandas/io/tests/data/legacy_msgpack/0.17.1/0.17.1_x86_64_linux_2.7.11.msgpack new file mode 100644 index 0000000000000..e89b5dd99150e Binary files /dev/null and b/pandas/io/tests/data/legacy_msgpack/0.17.1/0.17.1_x86_64_linux_2.7.11.msgpack differ diff --git a/pandas/io/tests/data/legacy_msgpack/0.17.1/0.17.1_x86_64_linux_3.4.4.msgpack b/pandas/io/tests/data/legacy_msgpack/0.17.1/0.17.1_x86_64_linux_3.4.4.msgpack new file mode 100644 index 0000000000000..98efdabedea72 Binary files /dev/null and b/pandas/io/tests/data/legacy_msgpack/0.17.1/0.17.1_x86_64_linux_3.4.4.msgpack differ diff --git a/pandas/io/tests/generate_legacy_storage_files.py b/pandas/io/tests/generate_legacy_storage_files.py index f556c980bb80c..bfa8ff6d30a9c 100644 --- a/pandas/io/tests/generate_legacy_storage_files.py +++ b/pandas/io/tests/generate_legacy_storage_files.py @@ -6,6 +6,7 @@ Index, MultiIndex, bdate_range, to_msgpack, date_range, period_range, Timestamp, Categorical, Period) +from pandas.compat import u import os import sys import numpy as np @@ -13,6 +14,9 @@ import platform as pl +_loose_version = LooseVersion(pandas.__version__) + + def _create_sp_series(): nan = np.nan @@ -22,7 +26,7 @@ def _create_sp_series(): arr[-1:] = nan bseries = SparseSeries(arr, kind='block') - bseries.name = 'bseries' + bseries.name = u'bseries' return bseries @@ -36,17 +40,17 @@ def _create_sp_tsseries(): date_index = bdate_range('1/1/2011', periods=len(arr)) bseries = SparseSeries(arr, 
index=date_index, kind='block') - bseries.name = 'btsseries' + bseries.name = u'btsseries' return bseries def _create_sp_frame(): nan = np.nan - data = {'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6], - 'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6], - 'C': np.arange(10).astype(np.int64), - 'D': [0, 1, 2, 3, 4, 5, nan, nan, nan, nan]} + data = {u'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6], + u'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6], + u'C': np.arange(10).astype(np.int64), + u'D': [0, 1, 2, 3, 4, 5, nan, nan, nan, nan]} dates = bdate_range('1/1/2011', periods=10) return SparseDataFrame(data, index=dates) @@ -56,79 +60,79 @@ def create_data(): """ create the pickle/msgpack data """ data = { - 'A': [0., 1., 2., 3., np.nan], - 'B': [0, 1, 0, 1, 0], - 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], - 'D': date_range('1/1/2009', periods=5), - 'E': [0., 1, Timestamp('20100101'), 'foo', 2.] + u'A': [0., 1., 2., 3., np.nan], + u'B': [0, 1, 0, 1, 0], + u'C': [u'foo1', u'foo2', u'foo3', u'foo4', u'foo5'], + u'D': date_range('1/1/2009', periods=5), + u'E': [0., 1, Timestamp('20100101'), u'foo', 2.] 
} - scalars = dict(timestamp=Timestamp('20130101')) - if LooseVersion(pandas.__version__) >= '0.17.0': - scalars['period'] = Period('2012', 'M') + scalars = dict(timestamp=Timestamp('20130101'), + period=Period('2012', 'M')) index = dict(int=Index(np.arange(10)), date=date_range('20130101', periods=10), period=period_range('2013-01-01', freq='M', periods=10)) mi = dict(reg2=MultiIndex.from_tuples( - tuple(zip(*[['bar', 'bar', 'baz', 'baz', 'foo', - 'foo', 'qux', 'qux'], - ['one', 'two', 'one', 'two', 'one', - 'two', 'one', 'two']])), - names=['first', 'second'])) - series = dict(float=Series(data['A']), - int=Series(data['B']), - mixed=Series(data['E']), + tuple(zip(*[[u'bar', u'bar', u'baz', u'baz', u'foo', + u'foo', u'qux', u'qux'], + [u'one', u'two', u'one', u'two', u'one', + u'two', u'one', u'two']])), + names=[u'first', u'second'])) + series = dict(float=Series(data[u'A']), + int=Series(data[u'B']), + mixed=Series(data[u'E']), ts=Series(np.arange(10).astype(np.int64), index=date_range('20130101', periods=10)), mi=Series(np.arange(5).astype(np.float64), index=MultiIndex.from_tuples( tuple(zip(*[[1, 1, 2, 2, 2], [3, 4, 3, 4, 5]])), - names=['one', 'two'])), + names=[u'one', u'two'])), dup=Series(np.arange(5).astype(np.float64), - index=['A', 'B', 'C', 'D', 'A']), - cat=Series(Categorical(['foo', 'bar', 'baz'])), + index=[u'A', u'B', u'C', u'D', u'A']), + cat=Series(Categorical([u'foo', u'bar', u'baz'])), dt=Series(date_range('20130101', periods=5)), dt_tz=Series(date_range('20130101', periods=5, - tz='US/Eastern'))) - if LooseVersion(pandas.__version__) >= '0.17.0': - series['period'] = Series([Period('2000Q1')] * 5) + tz='US/Eastern')), + period=Series([Period('2000Q1')] * 5)) mixed_dup_df = DataFrame(data) - mixed_dup_df.columns = list("ABCDA") - frame = dict(float=DataFrame(dict(A=series['float'], - B=series['float'] + 1)), - int=DataFrame(dict(A=series['int'], B=series['int'] + 1)), - mixed=DataFrame(dict([(k, data[k]) - for k in ['A', 'B', 'C', 'D']])), - 
mi=DataFrame(dict(A=np.arange(5).astype(np.float64), - B=np.arange(5).astype(np.int64)), + mixed_dup_df.columns = list(u"ABCDA") + frame = dict(float=DataFrame({u'A': series[u'float'], + u'B': series[u'float'] + 1}), + int=DataFrame({u'A': series[u'int'], + u'B': series[u'int'] + 1}), + mixed=DataFrame({k: data[k] + for k in [u'A', u'B', u'C', u'D']}), + mi=DataFrame({u'A': np.arange(5).astype(np.float64), + u'B': np.arange(5).astype(np.int64)}, index=MultiIndex.from_tuples( - tuple(zip(*[['bar', 'bar', 'baz', - 'baz', 'baz'], - ['one', 'two', 'one', - 'two', 'three']])), - names=['first', 'second'])), + tuple(zip(*[[u'bar', u'bar', u'baz', + u'baz', u'baz'], + [u'one', u'two', u'one', + u'two', u'three']])), + names=[u'first', u'second'])), dup=DataFrame(np.arange(15).reshape(5, 3).astype(np.float64), - columns=['A', 'B', 'A']), - cat_onecol=DataFrame(dict(A=Categorical(['foo', 'bar']))), - cat_and_float=DataFrame(dict( - A=Categorical(['foo', 'bar', 'baz']), - B=np.arange(3).astype(np.int64))), + columns=[u'A', u'B', u'A']), + cat_onecol=DataFrame({u'A': Categorical([u'foo', u'bar'])}), + cat_and_float=DataFrame({ + u'A': Categorical([u'foo', u'bar', u'baz']), + u'B': np.arange(3).astype(np.int64)}), mixed_dup=mixed_dup_df, - dt_mixed_tzs=DataFrame(dict( - A=Timestamp('20130102', tz='US/Eastern'), - B=Timestamp('20130603', tz='CET')), index=range(5)), + dt_mixed_tzs=DataFrame({ + u'A': Timestamp('20130102', tz='US/Eastern'), + u'B': Timestamp('20130603', tz='CET')}, index=range(5)) ) - mixed_dup_panel = Panel(dict(ItemA=frame['float'], ItemB=frame['int'])) - mixed_dup_panel.items = ['ItemA', 'ItemA'] - panel = dict(float=Panel(dict(ItemA=frame['float'], - ItemB=frame['float'] + 1)), + mixed_dup_panel = Panel({u'ItemA': frame[u'float'], + u'ItemB': frame[u'int']}) + mixed_dup_panel.items = [u'ItemA', u'ItemA'] + panel = dict(float=Panel({u'ItemA': frame[u'float'], + u'ItemB': frame[u'float'] + 1}), dup=Panel(np.arange(30).reshape(3, 5, 2).astype(np.float64), - 
items=['A', 'B', 'A']), + items=[u'A', u'B', u'A']), mixed_dup=mixed_dup_panel) return dict(series=series, @@ -147,26 +151,38 @@ def create_pickle_data(): # Pre-0.14.1 versions generated non-unpicklable mixed-type frames and # panels if their columns/items were non-unique. - if LooseVersion(pandas.__version__) < '0.14.1': + if _loose_version < '0.14.1': del data['frame']['mixed_dup'] del data['panel']['mixed_dup'] + if _loose_version < '0.17.0': + del data['series']['period'] + del data['scalars']['period'] return data +def _u(x): + return {u(k): _u(x[k]) for k in x} if isinstance(x, dict) else x + + def create_msgpack_data(): data = create_data() - if LooseVersion(pandas.__version__) < '0.17.0': + if _loose_version < '0.17.0': del data['frame']['mixed_dup'] del data['panel']['mixed_dup'] del data['frame']['dup'] del data['panel']['dup'] + if _loose_version < '0.18.0': + del data['series']['dt_tz'] + del data['frame']['dt_mixed_tzs'] # Not supported del data['sp_series'] del data['sp_frame'] del data['series']['cat'] + del data['series']['period'] del data['frame']['cat_onecol'] del data['frame']['cat_and_float'] - return data + del data['scalars']['period'] + return _u(data) def platform_name(): @@ -199,7 +215,7 @@ def write_legacy_pickles(output_dir): print("created pickle file: %s" % pth) -def write_legacy_msgpack(output_dir): +def write_legacy_msgpack(output_dir, compress): version = pandas.__version__ @@ -208,9 +224,9 @@ def write_legacy_msgpack(output_dir): print(" pandas version: {0}".format(version)) print(" output dir : {0}".format(output_dir)) print(" storage format: msgpack") - pth = '{0}.msgpack'.format(platform_name()) - to_msgpack(os.path.join(output_dir, pth), create_msgpack_data()) + to_msgpack(os.path.join(output_dir, pth), create_msgpack_data(), + compress=compress) print("created msgpack file: %s" % pth) @@ -219,17 +235,22 @@ def write_legacy_file(): # force our cwd to be the first searched sys.path.insert(0, '.') - if len(sys.argv) != 3: + if 
not (3 <= len(sys.argv) <= 4): exit("Specify output directory and storage type: generate_legacy_" - "storage_files.py <output_dir> <storage_type>") + "storage_files.py <output_dir> <storage_type> " + "<msgpack_compress_type>") output_dir = str(sys.argv[1]) storage_type = str(sys.argv[2]) + try: + compress_type = str(sys.argv[3]) + except IndexError: + compress_type = None if storage_type == 'pickle': write_legacy_pickles(output_dir=output_dir) elif storage_type == 'msgpack': - write_legacy_msgpack(output_dir=output_dir) + write_legacy_msgpack(output_dir=output_dir, compress=compress_type) else: exit("storage_type must be one of {'pickle', 'msgpack'}") diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py index d1c05069b4172..d0e7d00d79cb0 100644 --- a/pandas/io/tests/test_packers.py +++ b/pandas/io/tests/test_packers.py @@ -331,11 +331,16 @@ def setUp(self): 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], 'D': date_range('1/1/2009', periods=5), 'E': [0., 1, Timestamp('20100101'), 'foo', 2.], + 'F': [Timestamp('20130102', tz='US/Eastern')] * 2 + + [Timestamp('20130603', tz='CET')] * 3, + 'G': [Timestamp('20130102', tz='US/Eastern')] * 5 } self.d['float'] = Series(data['A']) self.d['int'] = Series(data['B']) self.d['mixed'] = Series(data['E']) + self.d['dt_tz_mixed'] = Series(data['F']) + self.d['dt_tz'] = Series(data['G']) def test_basic(self): @@ -357,13 +362,14 @@ def setUp(self): 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], 'D': date_range('1/1/2009', periods=5), 'E': [0., 1, Timestamp('20100101'), 'foo', 2.], + 'F': [Timestamp('20130102', tz='US/Eastern')] * 5, + 'G': [Timestamp('20130603', tz='CET')] * 5 } self.frame = { 'float': DataFrame(dict(A=data['A'], B=Series(data['A']) + 1)), 'int': DataFrame(dict(A=data['B'], B=Series(data['B']) + 1)), - 'mixed': DataFrame(dict([(k, data[k]) - for k in ['A', 'B', 'C', 'D']]))} + 'mixed': DataFrame(data)} self.panel = { 'float': Panel(dict(ItemA=self.frame['float'], @@ -713,6 +719,11 @@ def 
read_msgpacks(self, version): pth = tm.get_data_path('legacy_msgpack/{0}'.format(str(version))) n = 0 for f in os.listdir(pth): + # GH12142 0.17 files packed in P2 can't be read in P3 + if (compat.PY3 and + version.startswith('0.17.') and + f.split('.')[-4][-1] == '2'): + continue vf = os.path.join(pth, f) self.compare(vf, version) n += 1
closes #10623
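For context, the `_u` helper this diff adds to `generate_legacy_storage_files.py` recursively normalizes every key of a nested dict to unicode before packing. A minimal sketch of the same recursion, using plain `str` in place of `pandas.compat.u` (which only mattered on Python 2):

```python
def _u(x):
    # Rebuild nested dicts so every key becomes a string; non-dict
    # values (lists, scalars) pass through unchanged.
    return {str(k): _u(x[k]) for k in x} if isinstance(x, dict) else x

data = {1: {2: 'leaf'}, 3: [4, 5]}
print(_u(data))  # {'1': {'2': 'leaf'}, '3': [4, 5]}
```

Note the recursion only descends into dict values; a list containing dicts is left alone, which is sufficient for the nested `dict`-of-`dict` layout that `create_msgpack_data()` produces.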
https://api.github.com/repos/pandas-dev/pandas/pulls/12129
2016-01-25T06:02:48Z
2016-02-16T14:50:50Z
null
2016-02-16T17:10:30Z
PERF: add support for NaT in hashtable factorizers, improving Categorical construction with NaT
diff --git a/asv_bench/benchmarks/categoricals.py b/asv_bench/benchmarks/categoricals.py index d32c19d6d0bb8..244af3a577fe2 100644 --- a/asv_bench/benchmarks/categoricals.py +++ b/asv_bench/benchmarks/categoricals.py @@ -46,6 +46,22 @@ def time_fastpath(self): Categorical(self.codes, self.cat_idx, fastpath=True) +class categorical_constructor_with_datetimes(object): + goal_time = 0.2 + + def setup(self): + self.datetimes = pd.Series(pd.date_range( + '1995-01-01 00:00:00', periods=10000, freq='s')) + + def time_datetimes(self): + Categorical(self.datetimes) + + def time_datetimes_with_nat(self): + t = self.datetimes + t.iloc[-1] = pd.NaT + Categorical(t) + + class categorical_rendering(object): goal_time = 3e-3 diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 75a38544fb8eb..115e286acdac1 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -458,7 +458,7 @@ Performance Improvements - Improved huge ``DatetimeIndex``, ``PeriodIndex`` and ``TimedeltaIndex``'s ops performance including ``NaT`` (:issue:`10277`) - Improved performance of ``pandas.concat`` (:issue:`11958`) - Improved performance of ``StataReader`` (:issue:`11591`) - +- Improved performance in construction of ``Categoricals`` with Series of datetimes containing ``NaT`` (:issue:`12077`) @@ -481,6 +481,7 @@ Bug Fixes - Bug in vectorized ``DateOffset`` when ``n`` parameter is ``0`` (:issue:`11370`) - Compat for numpy 1.11 w.r.t. 
``NaT`` comparison changes (:issue:`12049`) - Bug in ``read_csv`` when reading from a ``StringIO`` in threads (:issue:`11790`) +- Bug in not treating ``NaT`` as a missing value in datetimelikes when factorizing & with ``Categoricals`` (:issue:`12077`) diff --git a/pandas/algos.pyx b/pandas/algos.pyx index 62ee6ced84882..0f9ceba48e608 100644 --- a/pandas/algos.pyx +++ b/pandas/algos.pyx @@ -226,14 +226,27 @@ def rank_1d_int64(object in_arr, ties_method='average', ascending=True, ndarray[int64_t] sorted_data, values ndarray[float64_t] ranks ndarray[int64_t] argsorted - int64_t val + int64_t val, nan_value float64_t sum_ranks = 0 + bint keep_na int tiebreak = 0 float count = 0.0 tiebreak = tiebreakers[ties_method] + keep_na = na_option == 'keep' + values = np.asarray(in_arr) + if ascending ^ (na_option == 'top'): + nan_value = np.iinfo('int64').max + else: + nan_value = np.iinfo('int64').min + + # unlike floats, which have np.inf, -np.inf, and np.nan + # ints do not + mask = values == iNaT + np.putmask(values, mask, nan_value) + n = len(values) ranks = np.empty(n, dtype='f8') @@ -256,6 +269,9 @@ def rank_1d_int64(object in_arr, ties_method='average', ascending=True, sum_ranks += i + 1 dups += 1 val = sorted_data[i] + if (val == nan_value) and keep_na: + ranks[argsorted[i]] = nan + continue count += 1.0 if i == n - 1 or fabs(sorted_data[i + 1] - val) > 0: if tiebreak == TIEBREAK_AVERAGE: @@ -387,16 +403,30 @@ def rank_2d_int64(object in_arr, axis=0, ties_method='average', ndarray[float64_t, ndim=2] ranks ndarray[int64_t, ndim=2] argsorted ndarray[int64_t, ndim=2, cast=True] values - int64_t val + int64_t val, nan_value float64_t sum_ranks = 0 + bint keep_na = 0 int tiebreak = 0 float count = 0.0 tiebreak = tiebreakers[ties_method] + keep_na = na_option == 'keep' + + in_arr = np.asarray(in_arr) + if axis == 0: - values = np.asarray(in_arr).T + values = in_arr.T.copy() + else: + values = in_arr.copy() + + if ascending ^ (na_option == 'top'): + nan_value = 
np.iinfo('int64').max else: - values = np.asarray(in_arr) + nan_value = np.iinfo('int64').min + + # unlike floats, which have np.inf, -np.inf, and np.nan + # ints do not + np.putmask(values, values == iNaT, nan_value) n, k = (<object> values).shape ranks = np.empty((n, k), dtype='f8') @@ -423,6 +453,9 @@ def rank_2d_int64(object in_arr, axis=0, ties_method='average', sum_ranks += j + 1 dups += 1 val = values[i, j] + if val == nan_value and keep_na: + ranks[i, argsorted[i, j]] = nan + continue count += 1.0 if j == k - 1 or fabs(values[i, j + 1] - val) > FP_ERR: if tiebreak == TIEBREAK_AVERAGE: diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py index d1c983769ed2a..d516471ededb6 100644 --- a/pandas/core/algorithms.py +++ b/pandas/core/algorithms.py @@ -11,6 +11,7 @@ import pandas.algos as algos import pandas.hashtable as htable from pandas.compat import string_types +from pandas.tslib import iNaT def match(to_match, values, na_sentinel=-1): @@ -182,17 +183,23 @@ def factorize(values, sort=False, order=None, na_sentinel=-1, size_hint=None): "https://github.com/pydata/pandas/issues/6926" warn(msg, FutureWarning, stacklevel=2) - from pandas.core.index import Index - from pandas.core.series import Series + from pandas import Index, Series, DatetimeIndex + vals = np.asarray(values) + # localize to UTC + is_datetimetz = com.is_datetimetz(values) + if is_datetimetz: + values = DatetimeIndex(values) + vals = values.tz_localize(None) + is_datetime = com.is_datetime64_dtype(vals) is_timedelta = com.is_timedelta64_dtype(vals) (hash_klass, vec_klass), vals = _get_data_algo(vals, _hashtables) table = hash_klass(size_hint or len(vals)) uniques = vec_klass() - labels = table.get_labels(vals, uniques, 0, na_sentinel) + labels = table.get_labels(vals, uniques, 0, na_sentinel, True) labels = com._ensure_platform_int(labels) @@ -224,7 +231,12 @@ def factorize(values, sort=False, order=None, na_sentinel=-1, size_hint=None): uniques = uniques.take(sorter) - if 
is_datetime: + if is_datetimetz: + + # reset tz + uniques = DatetimeIndex(uniques.astype('M8[ns]')).tz_localize( + values.tz) + elif is_datetime: uniques = uniques.astype('M8[ns]') elif is_timedelta: uniques = uniques.astype('m8[ns]') @@ -296,7 +308,6 @@ def value_counts(values, sort=True, ascending=False, normalize=False, keys, counts = htable.value_count_scalar64(values, dropna) if dropna: - from pandas.tslib import iNaT msk = keys != iNaT keys, counts = keys[msk], counts[msk] @@ -478,22 +489,13 @@ def _interpolate(a, b, fraction): def _get_data_algo(values, func_map): - mask = None if com.is_float_dtype(values): f = func_map['float64'] values = com._ensure_float64(values) elif com.needs_i8_conversion(values): - - # if we have NaT, punt to object dtype - mask = com.isnull(values) - if mask.ravel().any(): - f = func_map['generic'] - values = com._ensure_object(values) - values[mask] = np.nan - else: - f = func_map['int64'] - values = values.view('i8') + f = func_map['int64'] + values = values.view('i8') elif com.is_integer_dtype(values): f = func_map['int64'] diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py index 8a6ea69058c7e..23740f1983b43 100644 --- a/pandas/core/categorical.py +++ b/pandas/core/categorical.py @@ -257,7 +257,7 @@ def __init__(self, values, categories=None, ordered=False, name=None, categories = values.categories values = values.__array__() - elif isinstance(values, ABCIndexClass): + elif isinstance(values, (ABCIndexClass, ABCSeries)): pass else: @@ -1177,7 +1177,7 @@ def get_values(self): """ # if we are a datetime and period index, return Index to keep metadata if com.is_datetimelike(self.categories): - return self.categories.take(self._codes) + return self.categories.take(self._codes, fill_value=np.nan) return np.array(self) def check_for_ordered(self, op): diff --git a/pandas/hashtable.pyx b/pandas/hashtable.pyx index a5fcbd3f2d0f1..f718c1ab0b8da 100644 --- a/pandas/hashtable.pyx +++ b/pandas/hashtable.pyx @@ -377,12 
+377,13 @@ cdef class Int64HashTable(HashTable): def factorize(self, ndarray[object] values): reverse = {} - labels = self.get_labels(values, reverse, 0) + labels = self.get_labels(values, reverse, 0, 0) return reverse, labels @cython.boundscheck(False) def get_labels(self, int64_t[:] values, Int64Vector uniques, - Py_ssize_t count_prior, Py_ssize_t na_sentinel): + Py_ssize_t count_prior, Py_ssize_t na_sentinel, + bint check_null=True): cdef: Py_ssize_t i, n = len(values) int64_t[:] labels @@ -399,6 +400,11 @@ cdef class Int64HashTable(HashTable): for i in range(n): val = values[i] k = kh_get_int64(self.table, val) + + if check_null and val == iNaT: + labels[i] = na_sentinel + continue + if k != self.table.n_buckets: idx = self.table.vals[k] labels[i] = idx @@ -525,13 +531,14 @@ cdef class Float64HashTable(HashTable): def factorize(self, float64_t[:] values): uniques = Float64Vector() - labels = self.get_labels(values, uniques, 0, -1) + labels = self.get_labels(values, uniques, 0, -1, 1) return uniques.to_array(), labels @cython.boundscheck(False) def get_labels(self, float64_t[:] values, - Float64Vector uniques, - Py_ssize_t count_prior, int64_t na_sentinel): + Float64Vector uniques, + Py_ssize_t count_prior, int64_t na_sentinel, + bint check_null=True): cdef: Py_ssize_t i, n = len(values) int64_t[:] labels @@ -548,7 +555,7 @@ cdef class Float64HashTable(HashTable): for i in range(n): val = values[i] - if val != val: + if check_null and val != val: labels[i] = na_sentinel continue @@ -762,7 +769,8 @@ cdef class PyObjectHashTable(HashTable): return uniques.to_array() def get_labels(self, ndarray[object] values, ObjectVector uniques, - Py_ssize_t count_prior, int64_t na_sentinel): + Py_ssize_t count_prior, int64_t na_sentinel, + bint check_null=True): cdef: Py_ssize_t i, n = len(values) int64_t[:] labels @@ -777,7 +785,7 @@ cdef class PyObjectHashTable(HashTable): val = values[i] hash(val) - if val != val or val is None: + if check_null and (val != val or val is
None): labels[i] = na_sentinel continue @@ -808,14 +816,15 @@ cdef class Factorizer: def get_count(self): return self.count - def factorize(self, ndarray[object] values, sort=False, na_sentinel=-1): + def factorize(self, ndarray[object] values, sort=False, na_sentinel=-1, + check_null=True): """ Factorize values with nans replaced by na_sentinel >>> factorize(np.array([1,2,np.nan], dtype='O'), na_sentinel=20) array([ 0, 1, 20]) """ labels = self.table.get_labels(values, self.uniques, - self.count, na_sentinel) + self.count, na_sentinel, check_null) mask = (labels == na_sentinel) # sort on if sort: @@ -848,9 +857,10 @@ cdef class Int64Factorizer: return self.count def factorize(self, int64_t[:] values, sort=False, - na_sentinel=-1): + na_sentinel=-1, check_null=True): labels = self.table.get_labels(values, self.uniques, - self.count, na_sentinel) + self.count, na_sentinel, + check_null) # sort on if sort: diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py index f68faf99d3143..e1ba981e93d2e 100644 --- a/pandas/tests/frame/test_analytics.py +++ b/pandas/tests/frame/test_analytics.py @@ -787,7 +787,12 @@ def test_rank2(self): # check the rank expected = DataFrame([[2., nan, 1.], [2., 3., 1.]]) - result = df.rank(1, numeric_only=False) + result = df.rank(1, numeric_only=False, ascending=True) + assert_frame_equal(result, expected) + + expected = DataFrame([[1., nan, 2.], + [2., 1., 3.]]) + result = df.rank(1, numeric_only=False, ascending=False) assert_frame_equal(result, expected) # mixed-type frames diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py index 8a9827b9d5533..733ed2fbcb971 100755 --- a/pandas/tests/test_categorical.py +++ b/pandas/tests/test_categorical.py @@ -308,6 +308,38 @@ def test_constructor_with_generator(self): cat = pd.Categorical([0, 1, 2], categories=xrange(3)) self.assertTrue(cat.equals(exp)) + def test_constructor_with_datetimelike(self): + + # 12077 + # constructor with a
datetimelike and NaT + + for dtl in [pd.date_range('1995-01-01 00:00:00', + periods=5, freq='s'), + pd.date_range('1995-01-01 00:00:00', + periods=5, freq='s', tz='US/Eastern'), + pd.timedelta_range('1 day', periods=5, freq='s')]: + + s = Series(dtl) + c = Categorical(s) + expected = type(dtl)(s) + expected.freq = None + tm.assert_index_equal(c.categories, expected) + self.assert_numpy_array_equal(c.codes, np.arange(5, dtype='int8')) + + # with NaT + s2 = s.copy() + s2.iloc[-1] = pd.NaT + c = Categorical(s2) + expected = type(dtl)(s2.dropna()) + expected.freq = None + tm.assert_index_equal(c.categories, expected) + self.assert_numpy_array_equal(c.codes, + np.concatenate([np.arange(4, dtype='int8'), + [-1]])) + + result = repr(c) + self.assertTrue('NaT' in result) + def test_from_codes(self): # too few categories diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py index 911277429ce86..05ca65d6946fb 100644 --- a/pandas/tseries/period.py +++ b/pandas/tseries/period.py @@ -832,7 +832,7 @@ def _format_native_types(self, na_rep=u('NaT'), date_format=None, values[imask] = np.array([formatter(dt) for dt in values[imask]]) return values - def take(self, indices, axis=0): + def take(self, indices, axis=0, allow_fill=True, fill_value=None): """ Analogous to ndarray.take """
closes #12077 ``` before after ratio [1330b9fe] [404911c6] 17.25ms 1.21ms 0.07 categoricals.categorical_constructor_with_datetimes.time_datetimes_with_nat ```
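The speedup comes from handling `NaT` inside the int64 hashtable instead of punting the whole array to the object path. The user-visible behavior, `NaT` treated as a missing value by `factorize` and the `Categorical` constructor, can be exercised with a quick sketch (assuming a pandas build that includes this change):

```python
import pandas as pd

s = pd.Series(pd.to_datetime(['2013-01-01', '2013-01-02', None]))

# factorize maps NaT to the na_sentinel (-1) and keeps it out of uniques
codes, uniques = pd.factorize(s)
print(codes.tolist())   # [0, 1, -1]
print(len(uniques))     # 2

# Categorical likewise assigns NaT the missing-value code -1
c = pd.Categorical(s)
print(c.codes.tolist())  # [0, 1, -1]
```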
https://api.github.com/repos/pandas-dev/pandas/pulls/12128
2016-01-25T01:10:25Z
2016-01-25T15:29:27Z
null
2016-01-25T15:29:27Z
COMPAT: numpy_compat for >= 1.11 for np.datetime64 tz changes, #12100
diff --git a/pandas/__init__.py b/pandas/__init__.py index ca304fa8f8631..4ba6dbe6a8063 100644 --- a/pandas/__init__.py +++ b/pandas/__init__.py @@ -4,6 +4,15 @@ __docformat__ = 'restructuredtext' +# use the closest tagged version if possible +from ._version import get_versions +v = get_versions() +__version__ = v.get('closest-tag',v['version']) +del get_versions, v + +# numpy compat +from pandas.compat.numpy_compat import * + try: from pandas import hashtable, tslib, lib except ImportError as e: # pragma: no cover @@ -16,29 +25,8 @@ from datetime import datetime import numpy as np - -# XXX: HACK for NumPy 1.5.1 to suppress warnings -try: - np.seterr(all='ignore') -except Exception: # pragma: no cover - pass - -# numpy versioning -from distutils.version import LooseVersion -_np_version = np.version.short_version -_np_version_under1p8 = LooseVersion(_np_version) < '1.8' -_np_version_under1p9 = LooseVersion(_np_version) < '1.9' - - from pandas.info import __doc__ - -if LooseVersion(_np_version) < '1.7.0': - raise ImportError('pandas {0} is incompatible with numpy < 1.7.0, ' - 'your numpy version is {1}. 
Please upgrade numpy to' - ' >= 1.7.0 to use pandas version {0}'.format(__version__, - _np_version)) - # let init-time option registration happen import pandas.core.config_init @@ -62,9 +50,3 @@ from pandas.util.nosetester import NoseTester test = NoseTester().test del NoseTester - -# use the closest tagged version if possible -from ._version import get_versions -v = get_versions() -__version__ = v.get('closest-tag',v['version']) -del get_versions, v diff --git a/pandas/compat/numpy_compat.py b/pandas/compat/numpy_compat.py new file mode 100644 index 0000000000000..726a20370f512 --- /dev/null +++ b/pandas/compat/numpy_compat.py @@ -0,0 +1,74 @@ +""" support numpy compatibility across versions """ + +from distutils.version import LooseVersion +from pandas.compat import string_types, string_and_binary_types +import numpy as np + +# TODO: HACK for NumPy 1.5.1 to suppress warnings +# is this necessary? +try: + np.seterr(all='ignore') +except Exception: # pragma: no cover + pass + +# numpy versioning +_np_version = np.version.short_version +_np_version_under1p8 = LooseVersion(_np_version) < '1.8' +_np_version_under1p9 = LooseVersion(_np_version) < '1.9' +_np_version_under1p10 = LooseVersion(_np_version) < '1.10' +_np_version_under1p11 = LooseVersion(_np_version) < '1.11' +
+if LooseVersion(_np_version) < '1.7.0': + from pandas import __version__ + raise ImportError('pandas {0} is incompatible with numpy < 1.7.0\n' + 'your numpy version is {1}.\n' + 'Please upgrade numpy to >= 1.7.0 to use ' + 'this pandas version'.format(__version__, _np_version)) + + +def tz_replacer(s): + if isinstance(s, string_types): + if s.endswith('Z'): + s = s[:-1] + elif s.endswith('-0000'): + s = s[:-5] + return s + + +def np_datetime64_compat(s, *args, **kwargs): + """ + provide compat for construction of strings to numpy datetime64's with + tz-changes in 1.11 that make '2015-01-01 09:00:00Z' show a deprecation + warning, when one needs to pass '2015-01-01 09:00:00' + """ + + if not
_np_version_under1p11: + s = tz_replacer(s) + return np.datetime64(s, *args, **kwargs) + + +def np_array_datetime64_compat(arr, *args, **kwargs): + """ + provide compat for construction of an array of strings to a + np.array(..., dtype=np.datetime64(..)) + tz-changes in 1.11 that make '2015-01-01 09:00:00Z' show a deprecation + warning, when one needs to pass '2015-01-01 09:00:00' + """ + + if not _np_version_under1p11: + + # is_list_like + if hasattr(arr, '__iter__') and not \ + isinstance(arr, string_and_binary_types): + arr = [tz_replacer(s) for s in arr] + else: + arr = tz_replacer(arr) + + return np.array(arr, *args, **kwargs) + +__all__ = ['_np_version', + '_np_version_under1p8', + '_np_version_under1p9', + '_np_version_under1p10', + '_np_version_under1p11', + ] diff --git a/pandas/io/tests/test_date_converters.py b/pandas/io/tests/test_date_converters.py index 3855dc485ed83..8dd6c93249221 100644 --- a/pandas/io/tests/test_date_converters.py +++ b/pandas/io/tests/test_date_converters.py @@ -10,6 +10,7 @@ from pandas.util.testing import assert_frame_equal import pandas.io.date_converters as conv import pandas.util.testing as tm +from pandas.compat.numpy_compat import np_array_datetime64_compat class TestConverters(tm.TestCase): @@ -119,15 +120,16 @@ def test_dateparser_resolution_if_not_ns(self): """ def date_parser(date, time): - datetime = np.array(date + 'T' + time + 'Z', dtype='datetime64[s]') + datetime = np_array_datetime64_compat( + date + 'T' + time + 'Z', dtype='datetime64[s]') return datetime df = read_csv(StringIO(data), date_parser=date_parser, parse_dates={'datetime': ['date', 'time']}, index_col=['datetime', 'prn']) - datetimes = np.array(['2013-11-03T19:00:00Z'] * 3, - dtype='datetime64[s]') + datetimes = np_array_datetime64_compat(['2013-11-03T19:00:00Z'] * 3, + dtype='datetime64[s]') df_correct = DataFrame(data={'rxstatus': ['00E80000'] * 3}, index=MultiIndex.from_tuples( [(datetimes[0], 126), diff --git a/pandas/tseries/tests/test_offsets.py 
b/pandas/tseries/tests/test_offsets.py index f1b5172a838cf..c46d21d2a8759 100644 --- a/pandas/tseries/tests/test_offsets.py +++ b/pandas/tseries/tests/test_offsets.py @@ -8,6 +8,7 @@ import numpy as np +from pandas.compat.numpy_compat import np_datetime64_compat from pandas.core.datetools import (bday, BDay, CDay, BQuarterEnd, BMonthEnd, BusinessHour, CBMonthEnd, CBMonthBegin, BYearEnd, MonthEnd, MonthBegin, BYearBegin, @@ -201,7 +202,7 @@ def setUp(self): 'Second': Timestamp('2011-01-01 09:00:01'), 'Milli': Timestamp('2011-01-01 09:00:00.001000'), 'Micro': Timestamp('2011-01-01 09:00:00.000001'), - 'Nano': Timestamp(np.datetime64( + 'Nano': Timestamp(np_datetime64_compat( '2011-01-01T09:00:00.000000001Z'))} def test_return_type(self): @@ -292,7 +293,7 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected, def test_apply(self): sdt = datetime(2011, 1, 1, 9, 0) - ndt = np.datetime64('2011-01-01 09:00Z') + ndt = np_datetime64_compat('2011-01-01 09:00Z') for offset in self.offset_types: for dt in [sdt, ndt]: @@ -333,7 +334,7 @@ def test_rollforward(self): norm_expected.update(normalized) sdt = datetime(2011, 1, 1, 9, 0) - ndt = np.datetime64('2011-01-01 09:00Z') + ndt = np_datetime64_compat('2011-01-01 09:00Z') for offset in self.offset_types: for dt in [sdt, ndt]: @@ -391,7 +392,7 @@ def test_rollback(self): norm_expected.update(normalized) sdt = datetime(2011, 1, 1, 9, 0) - ndt = np.datetime64('2011-01-01 09:00Z') + ndt = np_datetime64_compat('2011-01-01 09:00Z') for offset in self.offset_types: for dt in [sdt, ndt]: @@ -1394,7 +1395,7 @@ class TestCustomBusinessDay(Base): def setUp(self): self.d = datetime(2008, 1, 1) - self.nd = np.datetime64('2008-01-01 00:00:00Z') + self.nd = np_datetime64_compat('2008-01-01 00:00:00Z') tm._skip_if_no_cday() self.offset = CDay() diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py index e37ffa3974729..6912712cf90e2 100644 --- a/pandas/tseries/tests/test_period.py +++ 
b/pandas/tseries/tests/test_period.py @@ -22,14 +22,14 @@ import pandas as pd import numpy as np from numpy.random import randn -from pandas.compat import range, lrange, lmap, zip +from pandas.compat import range, lrange, lmap, zip, text_type, PY3 +from pandas.compat.numpy_compat import np_datetime64_compat from pandas import Series, DataFrame, _np_version_under1p9 from pandas import tslib from pandas.util.testing import (assert_series_equal, assert_almost_equal, assertRaisesRegexp) import pandas.util.testing as tm -from pandas import compat class TestPeriodProperties(tm.TestCase): @@ -329,8 +329,8 @@ def test_period_constructor(self): i1 = Period(date(2007, 1, 1), freq='M') i2 = Period(datetime(2007, 1, 1), freq='M') i3 = Period(np.datetime64('2007-01-01'), freq='M') - i4 = Period(np.datetime64('2007-01-01 00:00:00Z'), freq='M') - i5 = Period(np.datetime64('2007-01-01 00:00:00.000Z'), freq='M') + i4 = Period(np_datetime64_compat('2007-01-01 00:00:00Z'), freq='M') + i5 = Period(np_datetime64_compat('2007-01-01 00:00:00.000Z'), freq='M') self.assertEqual(i1, i2) self.assertEqual(i1, i3) self.assertEqual(i1, i4) @@ -340,14 +340,15 @@ def test_period_constructor(self): expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1000), freq='L') self.assertEqual(i1, expected) - expected = Period(np.datetime64('2007-01-01 09:00:00.001Z'), freq='L') + expected = Period(np_datetime64_compat( + '2007-01-01 09:00:00.001Z'), freq='L') self.assertEqual(i1, expected) i1 = Period('2007-01-01 09:00:00.00101') expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1010), freq='U') self.assertEqual(i1, expected) - expected = Period(np.datetime64('2007-01-01 09:00:00.00101Z'), + expected = Period(np_datetime64_compat('2007-01-01 09:00:00.00101Z'), freq='U') self.assertEqual(i1, expected) @@ -406,8 +407,8 @@ def test_period_constructor_offsets(self): i1 = Period(date(2007, 1, 1), freq='M') i2 = Period(datetime(2007, 1, 1), freq='M') i3 = Period(np.datetime64('2007-01-01'), freq='M') - i4 = 
Period(np.datetime64('2007-01-01 00:00:00Z'), freq='M') - i5 = Period(np.datetime64('2007-01-01 00:00:00.000Z'), freq='M') + i4 = Period(np_datetime64_compat('2007-01-01 00:00:00Z'), freq='M') + i5 = Period(np_datetime64_compat('2007-01-01 00:00:00.000Z'), freq='M') self.assertEqual(i1, i2) self.assertEqual(i1, i3) self.assertEqual(i1, i4) @@ -417,14 +418,15 @@ def test_period_constructor_offsets(self): expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1000), freq='L') self.assertEqual(i1, expected) - expected = Period(np.datetime64('2007-01-01 09:00:00.001Z'), freq='L') + expected = Period(np_datetime64_compat( + '2007-01-01 09:00:00.001Z'), freq='L') self.assertEqual(i1, expected) i1 = Period('2007-01-01 09:00:00.00101') expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1010), freq='U') self.assertEqual(i1, expected) - expected = Period(np.datetime64('2007-01-01 09:00:00.00101Z'), + expected = Period(np_datetime64_compat('2007-01-01 09:00:00.00101Z'), freq='U') self.assertEqual(i1, expected) @@ -462,7 +464,7 @@ def test_strftime(self): p = Period('2000-1-1 12:34:12', freq='S') res = p.strftime('%Y-%m-%d %H:%M:%S') self.assertEqual(res, '2000-01-01 12:34:12') - tm.assertIsInstance(res, compat.text_type) # GH3363 + tm.assertIsInstance(res, text_type) # GH3363 def test_sub_delta(self): left, right = Period('2011', freq='A'), Period('2007', freq='A') @@ -2957,9 +2959,9 @@ def test_map_with_string_constructor(self): index = PeriodIndex(raw, freq='A') types = str, - if compat.PY3: + if PY3: # unicode - types += compat.text_type, + types += text_type, for t in types: expected = np.array(lmap(t, raw), dtype=object) diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py index 84065c0340aad..038045fac99c0 100644 --- a/pandas/tseries/tests/test_timeseries.py +++ b/pandas/tseries/tests/test_timeseries.py @@ -15,6 +15,7 @@ Period, DatetimeIndex, Int64Index, to_datetime, bdate_range, Float64Index, NaT, timedelta_range, Timedelta) +from 
pandas.compat.numpy_compat import np_datetime64_compat import pandas.core.datetools as datetools import pandas.tseries.offsets as offsets import pandas.tseries.tools as tools @@ -2496,11 +2497,11 @@ def test_comparisons_nat(self): '2014-05-01', '2014-07-01']) didx2 = pd.DatetimeIndex(['2014-02-01', '2014-03-01', pd.NaT, pd.NaT, '2014-06-01', '2014-07-01']) - darr = np.array([np.datetime64('2014-02-01 00:00Z'), - np.datetime64('2014-03-01 00:00Z'), - np.datetime64('nat'), np.datetime64('nat'), - np.datetime64('2014-06-01 00:00Z'), - np.datetime64('2014-07-01 00:00Z')]) + darr = np.array([np_datetime64_compat('2014-02-01 00:00Z'), + np_datetime64_compat('2014-03-01 00:00Z'), + np_datetime64_compat('nat'), np.datetime64('nat'), + np_datetime64_compat('2014-06-01 00:00Z'), + np_datetime64_compat('2014-07-01 00:00Z')]) if _np_version_under1p8: # cannot test array because np.datetime('nat') returns today's date diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py index 123b91d8bbf82..2e22caded8d10 100644 --- a/pandas/tseries/tests/test_tslib.py +++ b/pandas/tseries/tests/test_tslib.py @@ -15,6 +15,9 @@ import pandas.tseries.offsets as offsets import pandas.util.testing as tm import pandas.compat as compat +from pandas.compat.numpy_compat import (np_datetime64_compat, + np_array_datetime64_compat) + from pandas.util.testing import assert_series_equal, _skip_if_has_locale @@ -694,7 +697,7 @@ def test_parsing_valid_dates(self): arr = np.array(['01-01-2013', '01-02-2013'], dtype=object) self.assert_numpy_array_equal( tslib.array_to_datetime(arr), - np.array( + np_array_datetime64_compat( [ '2013-01-01T00:00:00.000000000-0000', '2013-01-02T00:00:00.000000000-0000' @@ -706,7 +709,7 @@ def test_parsing_valid_dates(self): arr = np.array(['Mon Sep 16 2013', 'Tue Sep 17 2013'], dtype=object) self.assert_numpy_array_equal( tslib.array_to_datetime(arr), - np.array( + np_array_datetime64_compat( [ '2013-09-16T00:00:00.000000000-0000', 
'2013-09-17T00:00:00.000000000-0000' @@ -752,7 +755,7 @@ def test_coercing_dates_outside_of_datetime64_ns_bounds(self): arr = np.array(['1/1/1000', '1/1/2000'], dtype=object) self.assert_numpy_array_equal( tslib.array_to_datetime(arr, errors='coerce'), - np.array( + np_array_datetime64_compat( [ tslib.iNaT, '2000-01-01T00:00:00.000000000-0000' @@ -772,7 +775,7 @@ def test_coerce_of_invalid_datetimes(self): # With coercing, the invalid dates becomes iNaT self.assert_numpy_array_equal( tslib.array_to_datetime(arr, errors='coerce'), - np.array( + np_array_datetime64_compat( [ '2013-01-01T00:00:00.000000000-0000', tslib.iNaT, @@ -863,7 +866,7 @@ def test_nanosecond_timestamp(self): self.assertEqual(t.value, expected) self.assertEqual(t.nanosecond, 5) - t = Timestamp(np.datetime64('2011-01-01 00:00:00.000000005Z')) + t = Timestamp(np_datetime64_compat('2011-01-01 00:00:00.000000005Z')) self.assertEqual(repr(t), "Timestamp('2011-01-01 00:00:00.000000005')") self.assertEqual(t.value, expected) self.assertEqual(t.nanosecond, 5) @@ -879,7 +882,7 @@ def test_nanosecond_timestamp(self): self.assertEqual(t.value, expected) self.assertEqual(t.nanosecond, 10) - t = Timestamp(np.datetime64('2011-01-01 00:00:00.000000010Z')) + t = Timestamp(np_datetime64_compat('2011-01-01 00:00:00.000000010Z')) self.assertEqual(repr(t), "Timestamp('2011-01-01 00:00:00.000000010')") self.assertEqual(t.value, expected) self.assertEqual(t.nanosecond, 10)
closes #12100
https://api.github.com/repos/pandas-dev/pandas/pulls/12127
2016-01-24T19:00:21Z
2016-01-25T15:31:23Z
null
2016-01-25T15:31:23Z
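The diff above swaps `np.datetime64(...)` for a `np_datetime64_compat` helper throughout the test suite, but the helper itself is not shown in this chunk. A minimal sketch of what such a shim needs to do, assuming the only incompatibility being papered over is newer NumPy deprecating timezone suffixes (like the trailing `'Z'`) in datetime64 strings — the name and exact behaviour here are inferred, not taken from the pandas source, and the real helper also has an array variant (`np_array_datetime64_compat`):

```python
import re
import numpy as np

def np_datetime64_compat(s, *args, **kwargs):
    # Hypothetical shim: NumPy 1.11+ deprecates timezone-aware datetime64
    # strings, so strip a trailing 'Z' (after a digit) before construction.
    s = re.sub(r'(\d)Z$', r'\1', s)
    return np.datetime64(s, *args, **kwargs)

# Same string shapes as the test fixtures in the diff above.
assert np_datetime64_compat('2011-01-01 09:00Z') == np.datetime64('2011-01-01 09:00')
assert np.isnat(np_datetime64_compat('nat'))  # non-matching strings pass through
```

Routing every construction through one compat function keeps the version check in a single place instead of scattering `try`/`except` blocks across hundreds of test fixtures.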
Fix Styler._translate failing on numeric columns
diff --git a/pandas/core/style.py b/pandas/core/style.py index a5a42c2bb47a7..19aa36fa005d0 100644 --- a/pandas/core/style.py +++ b/pandas/core/style.py @@ -237,7 +237,7 @@ def _translate(self): cs = [DATA_CLASS, "row%s" % r, "col%s" % c] cs.extend(cell_context.get("data", {}).get(r, {}).get(c, [])) row_es.append({"type": "td", - "value": self.data.iloc[r][c], + "value": self.data.iloc[r, c], "class": " ".join(cs), "id": "_".join(cs[1:])}) props = [] diff --git a/pandas/tests/test_style.py b/pandas/tests/test_style.py index b9ca3f331711d..f8a6a467a8f0e 100644 --- a/pandas/tests/test_style.py +++ b/pandas/tests/test_style.py @@ -164,6 +164,12 @@ def test_multiindex_name(self): self.assertEqual(result['head'], expected) + def test_numeric_columns(self): + # https://github.com/pydata/pandas/issues/12125 + # smoke test for _translate + df = pd.DataFrame({0: [1, 2, 3]}) + df.style._translate() + def test_apply_axis(self): df = pd.DataFrame({'A': [0, 0], 'B': [1, 1]}) f = lambda x: ['val: %s' % x.max() for v in x]
Addresses https://github.com/pydata/pandas/issues/12125. The previous behaviour was a chained `__getitem__` lookup (`self.data.iloc[r][c]`), which breaks when the column labels are numeric but do not match their positions. The adjusted code uses a single positional `iloc[r, c]` lookup to resolve the issue.
https://api.github.com/repos/pandas-dev/pandas/pulls/12126
2016-01-24T06:05:36Z
2016-01-24T19:49:26Z
null
2016-01-24T19:49:26Z
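The one-line fix in the PR above replaces a chained lookup with a single positional one. A small sketch of why the chained form breaks on numeric column labels (the frame contents are made up for illustration):

```python
import pandas as pd

# Columns are numeric labels that do not match their positions.
df = pd.DataFrame({1: [10, 20], 2: [30, 40]})

# Chained lookup: df.iloc[0] returns a row Series indexed by the *labels*
# 1 and 2, so the positional column number 0 is treated as a missing label.
row = df.iloc[0]
try:
    row[0]
    raise AssertionError("expected a KeyError")
except KeyError:
    pass

# A single iloc call with (row, col) is purely positional, so it works
# regardless of what the column labels are.
assert df.iloc[0, 0] == 10
assert df.iloc[1, 1] == 40
```

With string column labels the chained form happened to work (integer keys on a string-indexed Series fall back to positional lookup), which is why the bug only surfaced for numeric columns.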
CLN: reorganize index.py, test_index.py
diff --git a/pandas/core/index.py b/pandas/core/index.py index ad5ed86236e50..05f98d59a1f56 100644 --- a/pandas/core/index.py +++ b/pandas/core/index.py @@ -1,7081 +1,3 @@ -# pylint: disable=E1101,E1103,W0232 -import datetime -import warnings -import operator -from functools import partial -from sys import getsizeof - -import numpy as np -import pandas.tslib as tslib -import pandas.lib as lib -import pandas.algos as _algos -import pandas.index as _index -from pandas.lib import Timestamp, Timedelta, is_datetime_array - -from pandas.compat import range, zip, lrange, lzip, u, map -from pandas import compat -from pandas.core import algorithms -from pandas.core.base import (PandasObject, FrozenList, FrozenNDArray, - IndexOpsMixin, PandasDelegate) -import pandas.core.base as base -from pandas.util.decorators import (Appender, Substitution, cache_readonly, - deprecate, deprecate_kwarg) -import pandas.core.common as com -from pandas.core.missing import _clean_reindex_fill_method -from pandas.core.common import (isnull, array_equivalent, is_dtype_equal, - is_object_dtype, is_datetimetz, ABCSeries, - ABCCategorical, ABCPeriodIndex, - _values_from_object, is_float, is_integer, - is_iterator, is_categorical_dtype, - _ensure_object, _ensure_int64, is_bool_indexer, - is_list_like, is_bool_dtype, is_null_slice, - is_integer_dtype, is_int64_dtype) -from pandas.core.strings import StringAccessorMixin - -from pandas.core.config import get_option -from pandas.io.common import PerformanceWarning - -# simplify -default_pprint = lambda x, max_seq_items=None: \ - com.pprint_thing(x, escape_chars=('\t', '\r', '\n'), quote_strings=True, - max_seq_items=max_seq_items) - -__all__ = ['Index'] - -_unsortable_types = frozenset(('mixed', 'mixed-integer')) - -_index_doc_kwargs = dict(klass='Index', inplace='', duplicated='np.array') -_index_shared_docs = dict() - - -def _try_get_item(x): - try: - return x.item() - except AttributeError: - return x - - -class InvalidIndexError(Exception): - pass - 
- -_o_dtype = np.dtype(object) -_Identity = object - - -def _new_Index(cls, d): - """ This is called upon unpickling, rather than the default which doesn't - have arguments and breaks __new__ - """ - return cls.__new__(cls, **d) - - -class Index(IndexOpsMixin, StringAccessorMixin, PandasObject): - """ - Immutable ndarray implementing an ordered, sliceable set. The basic object - storing axis labels for all pandas objects - - Parameters - ---------- - data : array-like (1-dimensional) - dtype : NumPy dtype (default: object) - copy : bool - Make a copy of input ndarray - name : object - Name to be stored in the index - tupleize_cols : bool (default: True) - When True, attempt to create a MultiIndex if possible - - Notes - ----- - An Index instance can **only** contain hashable objects - """ - # To hand over control to subclasses - _join_precedence = 1 - - # Cython methods - _groupby = _algos.groupby_object - _arrmap = _algos.arrmap_object - _left_indexer_unique = _algos.left_join_indexer_unique_object - _left_indexer = _algos.left_join_indexer_object - _inner_indexer = _algos.inner_join_indexer_object - _outer_indexer = _algos.outer_join_indexer_object - _box_scalars = False - - _typ = 'index' - _data = None - _id = None - name = None - asi8 = None - _comparables = ['name'] - _attributes = ['name'] - _allow_index_ops = True - _allow_datetime_index_ops = False - _allow_period_index_ops = False - _is_numeric_dtype = False - _can_hold_na = True - - # prioritize current class for _shallow_copy_with_infer, - # used to infer integers as datetime-likes - _infer_as_myclass = False - - _engine_type = _index.ObjectEngine - - def __new__(cls, data=None, dtype=None, copy=False, name=None, - fastpath=False, tupleize_cols=True, **kwargs): - - if name is None and hasattr(data, 'name'): - name = data.name - - if fastpath: - return cls._simple_new(data, name) - - # range - if isinstance(data, RangeIndex): - return RangeIndex(start=data, copy=copy, dtype=dtype, name=name) - elif 
isinstance(data, range): - return RangeIndex.from_range(data, copy=copy, dtype=dtype, - name=name) - - # categorical - if is_categorical_dtype(data) or is_categorical_dtype(dtype): - return CategoricalIndex(data, copy=copy, name=name, **kwargs) - - # index-like - elif isinstance(data, (np.ndarray, Index, ABCSeries)): - - if (issubclass(data.dtype.type, np.datetime64) or - is_datetimetz(data)): - from pandas.tseries.index import DatetimeIndex - result = DatetimeIndex(data, copy=copy, name=name, **kwargs) - if dtype is not None and _o_dtype == dtype: - return Index(result.to_pydatetime(), dtype=_o_dtype) - else: - return result - - elif issubclass(data.dtype.type, np.timedelta64): - from pandas.tseries.tdi import TimedeltaIndex - result = TimedeltaIndex(data, copy=copy, name=name, **kwargs) - if dtype is not None and _o_dtype == dtype: - return Index(result.to_pytimedelta(), dtype=_o_dtype) - else: - return result - - if dtype is not None: - try: - data = np.array(data, dtype=dtype, copy=copy) - except (TypeError, ValueError): - pass - - # maybe coerce to a sub-class - from pandas.tseries.period import PeriodIndex - if isinstance(data, PeriodIndex): - return PeriodIndex(data, copy=copy, name=name, **kwargs) - if issubclass(data.dtype.type, np.integer): - return Int64Index(data, copy=copy, dtype=dtype, name=name) - elif issubclass(data.dtype.type, np.floating): - return Float64Index(data, copy=copy, dtype=dtype, name=name) - elif issubclass(data.dtype.type, np.bool) or is_bool_dtype(data): - subarr = data.astype('object') - else: - subarr = com._asarray_tuplesafe(data, dtype=object) - - # _asarray_tuplesafe does not always copy underlying data, - # so need to make sure that this happens - if copy: - subarr = subarr.copy() - - if dtype is None: - inferred = lib.infer_dtype(subarr) - if inferred == 'integer': - return Int64Index(subarr.astype('i8'), copy=copy, - name=name) - elif inferred in ['floating', 'mixed-integer-float']: - return Float64Index(subarr, copy=copy, 
name=name) - elif inferred == 'boolean': - # don't support boolean explicity ATM - pass - elif inferred != 'string': - if (inferred.startswith('datetime') or - tslib.is_timestamp_array(subarr)): - - if (lib.is_datetime_with_singletz_array(subarr) or - 'tz' in kwargs): - # only when subarr has the same tz - from pandas.tseries.index import DatetimeIndex - return DatetimeIndex(subarr, copy=copy, name=name, - **kwargs) - - elif (inferred.startswith('timedelta') or - lib.is_timedelta_array(subarr)): - from pandas.tseries.tdi import TimedeltaIndex - return TimedeltaIndex(subarr, copy=copy, name=name, - **kwargs) - elif inferred == 'period': - return PeriodIndex(subarr, name=name, **kwargs) - return cls._simple_new(subarr, name) - - elif hasattr(data, '__array__'): - return Index(np.asarray(data), dtype=dtype, copy=copy, name=name, - **kwargs) - elif data is None or np.isscalar(data): - cls._scalar_data_error(data) - else: - if (tupleize_cols and isinstance(data, list) and data and - isinstance(data[0], tuple)): - - # we must be all tuples, otherwise don't construct - # 10697 - if all(isinstance(e, tuple) for e in data): - try: - # must be orderable in py3 - if compat.PY3: - sorted(data) - return MultiIndex.from_tuples(data, names=name or - kwargs.get('names')) - except (TypeError, KeyError): - # python2 - MultiIndex fails on mixed types - pass - # other iterable of some kind - subarr = com._asarray_tuplesafe(data, dtype=object) - return Index(subarr, dtype=dtype, copy=copy, name=name, **kwargs) - - """ - NOTE for new Index creation: - - - _simple_new: It returns new Index with the same type as the caller. - All metadata (such as name) must be provided by caller's responsibility. - Using _shallow_copy is recommended because it fills these metadata - otherwise specified. - - - _shallow_copy: It returns new Index with the same type (using - _simple_new), but fills caller's metadata otherwise specified. Passed - kwargs will overwrite corresponding metadata. 
- - - _shallow_copy_with_infer: It returns new Index inferring its type - from passed values. It fills caller's metadata otherwise specified as the - same as _shallow_copy. - - See each method's docstring. - """ - - @classmethod - def _simple_new(cls, values, name=None, dtype=None, **kwargs): - """ - we require the we have a dtype compat for the values - if we are passed a non-dtype compat, then coerce using the constructor - - Must be careful not to recurse. - """ - if not hasattr(values, 'dtype'): - if values is None and dtype is not None: - values = np.empty(0, dtype=dtype) - else: - values = np.array(values, copy=False) - if is_object_dtype(values): - values = cls(values, name=name, dtype=dtype, - **kwargs)._values - - result = object.__new__(cls) - result._data = values - result.name = name - for k, v in compat.iteritems(kwargs): - setattr(result, k, v) - result._reset_identity() - return result - - def _shallow_copy(self, values=None, **kwargs): - """ - create a new Index with the same class as the caller, don't copy the - data, use the same object attributes with passed in attributes taking - precedence - - *this is an internal non-public method* - - Parameters - ---------- - values : the values to create the new Index, optional - kwargs : updates the default attributes for this Index - """ - if values is None: - values = self.values - attributes = self._get_attributes_dict() - attributes.update(kwargs) - return self._simple_new(values, **attributes) - - def _shallow_copy_with_infer(self, values=None, **kwargs): - """ - create a new Index inferring the class with passed value, don't copy - the data, use the same object attributes with passed in attributes - taking precedence - - *this is an internal non-public method* - - Parameters - ---------- - values : the values to create the new Index, optional - kwargs : updates the default attributes for this Index - """ - if values is None: - values = self.values - attributes = self._get_attributes_dict() - 
attributes.update(kwargs) - attributes['copy'] = False - if self._infer_as_myclass: - try: - return self._constructor(values, **attributes) - except (TypeError, ValueError): - pass - return Index(values, **attributes) - - def _update_inplace(self, result, **kwargs): - # guard when called from IndexOpsMixin - raise TypeError("Index can't be updated inplace") - - def is_(self, other): - """ - More flexible, faster check like ``is`` but that works through views - - Note: this is *not* the same as ``Index.identical()``, which checks - that metadata is also the same. - - Parameters - ---------- - other : object - other object to compare against. - - Returns - ------- - True if both have same underlying data, False otherwise : bool - """ - # use something other than None to be clearer - return self._id is getattr( - other, '_id', Ellipsis) and self._id is not None - - def _reset_identity(self): - """Initializes or resets ``_id`` attribute with new object""" - self._id = _Identity() - - # ndarray compat - def __len__(self): - """ - return the length of the Index - """ - return len(self._data) - - def __array__(self, dtype=None): - """ the array interface, return my values """ - return self._data.view(np.ndarray) - - def __array_wrap__(self, result, context=None): - """ - Gets called after a ufunc - """ - if is_bool_dtype(result): - return result - - attrs = self._get_attributes_dict() - attrs = self._maybe_update_attributes(attrs) - return Index(result, **attrs) - - @cache_readonly - def dtype(self): - """ return the dtype object of the underlying data """ - return self._data.dtype - - @cache_readonly - def dtype_str(self): - """ return the dtype str of the underlying data """ - return str(self.dtype) - - @property - def values(self): - """ return the underlying data as an ndarray """ - return self._data.view(np.ndarray) - - def get_values(self): - """ return the underlying data as an ndarray """ - return self.values - - # ops compat - def tolist(self): - """ - return a 
list of the Index values - """ - return list(self.values) - - def repeat(self, n): - """ - return a new Index of the values repeated n times - - See also - -------- - numpy.ndarray.repeat - """ - return self._shallow_copy(self._values.repeat(n)) - - def ravel(self, order='C'): - """ - return an ndarray of the flattened values of the underlying data - - See also - -------- - numpy.ndarray.ravel - """ - return self._values.ravel(order=order) - - # construction helpers - @classmethod - def _scalar_data_error(cls, data): - raise TypeError('{0}(...) must be called with a collection of some ' - 'kind, {1} was passed'.format(cls.__name__, - repr(data))) - - @classmethod - def _string_data_error(cls, data): - raise TypeError('String dtype not supported, you may need ' - 'to explicitly cast to a numeric type') - - @classmethod - def _coerce_to_ndarray(cls, data): - """coerces data to ndarray, raises on scalar data. Converts other - iterables to list first and then to array. Does not touch ndarrays. 
- """ - - if not isinstance(data, (np.ndarray, Index)): - if data is None or np.isscalar(data): - cls._scalar_data_error(data) - - # other iterable of some kind - if not isinstance(data, (ABCSeries, list, tuple)): - data = list(data) - data = np.asarray(data) - return data - - def _get_attributes_dict(self): - """ return an attributes dict for my class """ - return dict([(k, getattr(self, k, None)) for k in self._attributes]) - - def view(self, cls=None): - - # we need to see if we are subclassing an - # index type here - if cls is not None and not hasattr(cls, '_typ'): - result = self._data.view(cls) - else: - result = self._shallow_copy() - if isinstance(result, Index): - result._id = self._id - return result - - def _coerce_scalar_to_index(self, item): - """ - we need to coerce a scalar to a compat for our index type - - Parameters - ---------- - item : scalar item to coerce - """ - return Index([item], dtype=self.dtype, **self._get_attributes_dict()) - - _index_shared_docs['copy'] = """ - Make a copy of this object. Name and dtype sets those attributes on - the new object. - - Parameters - ---------- - name : string, optional - deep : boolean, default False - dtype : numpy dtype or pandas type - - Returns - ------- - copy : Index - - Notes - ----- - In most cases, there should be no functional difference from using - ``deep``, but if ``deep`` is passed it will attempt to deepcopy. 
- """ - - @Appender(_index_shared_docs['copy']) - def copy(self, name=None, deep=False, dtype=None, **kwargs): - names = kwargs.get('names') - if names is not None and name is not None: - raise TypeError("Can only provide one of `names` and `name`") - if deep: - from copy import deepcopy - new_index = self._shallow_copy(self._data.copy()) - name = name or deepcopy(self.name) - else: - new_index = self._shallow_copy() - name = self.name - if name is not None: - names = [name] - if names: - new_index = new_index.set_names(names) - if dtype: - new_index = new_index.astype(dtype) - return new_index - - __copy__ = copy - - def __unicode__(self): - """ - Return a string representation for this object. - - Invoked by unicode(df) in py2 only. Yields a Unicode String in both - py2/py3. - """ - klass = self.__class__.__name__ - data = self._format_data() - attrs = self._format_attrs() - space = self._format_space() - - prepr = (u(",%s") % - space).join([u("%s=%s") % (k, v) for k, v in attrs]) - - # no data provided, just attributes - if data is None: - data = '' - - res = u("%s(%s%s)") % (klass, data, prepr) - - return res - - def _format_space(self): - - # using space here controls if the attributes - # are line separated or not (the default) - - # max_seq_items = get_option('display.max_seq_items') - # if len(self) > max_seq_items: - # space = "\n%s" % (' ' * (len(klass) + 1)) - return " " - - @property - def _formatter_func(self): - """ - Return the formatted data as a unicode string - """ - return default_pprint - - def _format_data(self): - """ - Return the formatted data as a unicode string - """ - from pandas.core.format import get_console_size, _get_adjustment - display_width, _ = get_console_size() - if display_width is None: - display_width = get_option('display.width') or 80 - - space1 = "\n%s" % (' ' * (len(self.__class__.__name__) + 1)) - space2 = "\n%s" % (' ' * (len(self.__class__.__name__) + 2)) - - n = len(self) - sep = ',' - max_seq_items = 
get_option('display.max_seq_items') or n - formatter = self._formatter_func - - # do we want to justify (only do so for non-objects) - is_justify = not (self.inferred_type in ('string', 'unicode') or - (self.inferred_type == 'categorical' and - is_object_dtype(self.categories))) - - # are we a truncated display - is_truncated = n > max_seq_items - - # adj can optionaly handle unicode eastern asian width - adj = _get_adjustment() - - def _extend_line(s, line, value, display_width, next_line_prefix): - - if (adj.len(line.rstrip()) + adj.len(value.rstrip()) >= - display_width): - s += line.rstrip() - line = next_line_prefix - line += value - return s, line - - def best_len(values): - if values: - return max([adj.len(x) for x in values]) - else: - return 0 - - if n == 0: - summary = '[], ' - elif n == 1: - first = formatter(self[0]) - summary = '[%s], ' % first - elif n == 2: - first = formatter(self[0]) - last = formatter(self[-1]) - summary = '[%s, %s], ' % (first, last) - else: - - if n > max_seq_items: - n = min(max_seq_items // 2, 10) - head = [formatter(x) for x in self[:n]] - tail = [formatter(x) for x in self[-n:]] - else: - head = [] - tail = [formatter(x) for x in self] - - # adjust all values to max length if needed - if is_justify: - - # however, if we are not truncated and we are only a single - # line, then don't justify - if (is_truncated or - not (len(', '.join(head)) < display_width and - len(', '.join(tail)) < display_width)): - max_len = max(best_len(head), best_len(tail)) - head = [x.rjust(max_len) for x in head] - tail = [x.rjust(max_len) for x in tail] - - summary = "" - line = space2 - - for i in range(len(head)): - word = head[i] + sep + ' ' - summary, line = _extend_line(summary, line, word, - display_width, space2) - - if is_truncated: - # remove trailing space of last line - summary += line.rstrip() + space2 + '...' 
- line = space2 - - for i in range(len(tail) - 1): - word = tail[i] + sep + ' ' - summary, line = _extend_line(summary, line, word, - display_width, space2) - - # last value: no sep added + 1 space of width used for trailing ',' - summary, line = _extend_line(summary, line, tail[-1], - display_width - 2, space2) - summary += line - summary += '],' - - if len(summary) > (display_width): - summary += space1 - else: # one row - summary += ' ' - - # remove initial space - summary = '[' + summary[len(space2):] - - return summary - - def _format_attrs(self): - """ - Return a list of tuples of the (attr,formatted_value) - """ - attrs = [] - attrs.append(('dtype', "'%s'" % self.dtype)) - if self.name is not None: - attrs.append(('name', default_pprint(self.name))) - max_seq_items = get_option('display.max_seq_items') or len(self) - if len(self) > max_seq_items: - attrs.append(('length', len(self))) - return attrs - - def to_series(self, **kwargs): - """ - Create a Series with both index and values equal to the index keys - useful with map for returning an indexer based on an index - - Returns - ------- - Series : dtype will be based on the type of the Index values. 
- """ - - from pandas import Series - return Series(self._to_embed(), index=self, name=self.name) - - def _to_embed(self, keep_tz=False): - """ - *this is an internal non-public method* - - return an array repr of this object, potentially casting to object - - """ - return self.values.copy() - - def astype(self, dtype): - return Index(self.values.astype(dtype), name=self.name, dtype=dtype) - - def _to_safe_for_reshape(self): - """ convert to object if we are a categorical """ - return self - - def to_datetime(self, dayfirst=False): - """ - For an Index containing strings or datetime.datetime objects, attempt - conversion to DatetimeIndex - """ - from pandas.tseries.index import DatetimeIndex - if self.inferred_type == 'string': - from dateutil.parser import parse - parser = lambda x: parse(x, dayfirst=dayfirst) - parsed = lib.try_parse_dates(self.values, parser=parser) - return DatetimeIndex(parsed) - else: - return DatetimeIndex(self.values) - - def _assert_can_do_setop(self, other): - if not com.is_list_like(other): - raise TypeError('Input must be Index or array-like') - return True - - def _convert_can_do_setop(self, other): - if not isinstance(other, Index): - other = Index(other, name=self.name) - result_name = self.name - else: - result_name = self.name if self.name == other.name else None - return other, result_name - - @property - def nlevels(self): - return 1 - - def _get_names(self): - return FrozenList((self.name, )) - - def _set_names(self, values, level=None): - if len(values) != 1: - raise ValueError('Length of new names must be 1, got %d' % - len(values)) - self.name = values[0] - - names = property(fset=_set_names, fget=_get_names) - - def set_names(self, names, level=None, inplace=False): - """ - Set new names on index. Defaults to returning new index. 
- - Parameters - ---------- - names : str or sequence - name(s) to set - level : int, level name, or sequence of int/level names (default None) - If the index is a MultiIndex (hierarchical), level(s) to set (None - for all levels). Otherwise level must be None - inplace : bool - if True, mutates in place - - Returns - ------- - new index (of same type and class...etc) [if inplace, returns None] - - Examples - -------- - >>> Index([1, 2, 3, 4]).set_names('foo') - Int64Index([1, 2, 3, 4], dtype='int64') - >>> Index([1, 2, 3, 4]).set_names(['foo']) - Int64Index([1, 2, 3, 4], dtype='int64') - >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'), - (2, u'one'), (2, u'two')], - names=['foo', 'bar']) - >>> idx.set_names(['baz', 'quz']) - MultiIndex(levels=[[1, 2], [u'one', u'two']], - labels=[[0, 0, 1, 1], [0, 1, 0, 1]], - names=[u'baz', u'quz']) - >>> idx.set_names('baz', level=0) - MultiIndex(levels=[[1, 2], [u'one', u'two']], - labels=[[0, 0, 1, 1], [0, 1, 0, 1]], - names=[u'baz', u'bar']) - """ - if level is not None and self.nlevels == 1: - raise ValueError('Level must be None for non-MultiIndex') - - if level is not None and not is_list_like(level) and is_list_like( - names): - raise TypeError("Names must be a string") - - if not is_list_like(names) and level is None and self.nlevels > 1: - raise TypeError("Must pass list-like as `names`.") - - if not is_list_like(names): - names = [names] - if level is not None and not is_list_like(level): - level = [level] - - if inplace: - idx = self - else: - idx = self._shallow_copy() - idx._set_names(names, level=level) - if not inplace: - return idx - - def rename(self, name, inplace=False): - """ - Set new names on index. Defaults to returning new index. 
- - Parameters - ---------- - name : str or list - name to set - inplace : bool - if True, mutates in place - - Returns - ------- - new index (of same type and class...etc) [if inplace, returns None] - """ - return self.set_names([name], inplace=inplace) - - @property - def _has_complex_internals(self): - # to disable groupby tricks in MultiIndex - return False - - def summary(self, name=None): - if len(self) > 0: - head = self[0] - if (hasattr(head, 'format') and - not isinstance(head, compat.string_types)): - head = head.format() - tail = self[-1] - if (hasattr(tail, 'format') and - not isinstance(tail, compat.string_types)): - tail = tail.format() - index_summary = ', %s to %s' % (com.pprint_thing(head), - com.pprint_thing(tail)) - else: - index_summary = '' - - if name is None: - name = type(self).__name__ - return '%s: %s entries%s' % (name, len(self), index_summary) - - def _mpl_repr(self): - # how to represent ourselves to matplotlib - return self.values - - _na_value = np.nan - """The expected NA value to use with this index.""" - - @property - def is_monotonic(self): - """ alias for is_monotonic_increasing (deprecated) """ - return self._engine.is_monotonic_increasing - - @property - def is_monotonic_increasing(self): - """ - return if the index is monotonic increasing (only equal or - increasing) values. - """ - return self._engine.is_monotonic_increasing - - @property - def is_monotonic_decreasing(self): - """ - return if the index is monotonic decreasing (only equal or - decreasing) values. 
- """ - return self._engine.is_monotonic_decreasing - - def is_lexsorted_for_tuple(self, tup): - return True - - @cache_readonly(allow_setting=True) - def is_unique(self): - """ return if the index has unique values """ - return self._engine.is_unique - - @property - def has_duplicates(self): - return not self.is_unique - - def is_boolean(self): - return self.inferred_type in ['boolean'] - - def is_integer(self): - return self.inferred_type in ['integer'] - - def is_floating(self): - return self.inferred_type in ['floating', 'mixed-integer-float'] - - def is_numeric(self): - return self.inferred_type in ['integer', 'floating'] - - def is_object(self): - return is_object_dtype(self.dtype) - - def is_categorical(self): - return self.inferred_type in ['categorical'] - - def is_mixed(self): - return 'mixed' in self.inferred_type - - def holds_integer(self): - return self.inferred_type in ['integer', 'mixed-integer'] - - def _convert_scalar_indexer(self, key, kind=None): - """ - convert a scalar indexer - - Parameters - ---------- - key : label of the slice bound - kind : optional, type of the indexing operation (loc/ix/iloc/None) - - right now we are converting - floats -> ints if the index supports it - """ - - def to_int(): - ikey = int(key) - if ikey != key: - return self._invalid_indexer('label', key) - return ikey - - if kind == 'iloc': - if is_integer(key): - return key - elif is_float(key): - key = to_int() - warnings.warn("scalar indexers for index type {0} should be " - "integers and not floating point".format( - type(self).__name__), - FutureWarning, stacklevel=5) - return key - return self._invalid_indexer('label', key) - - if is_float(key): - if isnull(key): - return self._invalid_indexer('label', key) - warnings.warn("scalar indexers for index type {0} should be " - "integers and not floating point".format( - type(self).__name__), - FutureWarning, stacklevel=3) - return to_int() - - return key - - def _convert_slice_indexer_getitem(self, key, 
is_index_slice=False): - """ called from the getitem slicers, determine how to treat the key - whether positional or not """ - if self.is_integer() or is_index_slice: - return key - return self._convert_slice_indexer(key) - - def _convert_slice_indexer(self, key, kind=None): - """ - convert a slice indexer. disallow floats in the start/stop/step - - Parameters - ---------- - key : label of the slice bound - kind : optional, type of the indexing operation (loc/ix/iloc/None) - """ - - # if we are not a slice, then we are done - if not isinstance(key, slice): - return key - - # validate iloc - if kind == 'iloc': - - # need to coerce to_int if needed - def f(c): - v = getattr(key, c) - if v is None or is_integer(v): - return v - - # warn if it's a convertible float - if v == int(v): - warnings.warn("slice indexers when using iloc should be " - "integers and not floating point", - FutureWarning, stacklevel=7) - return int(v) - - self._invalid_indexer('slice {0} value'.format(c), v) - - return slice(*[f(c) for c in ['start', 'stop', 'step']]) - - # validate slicers - def validate(v): - if v is None or is_integer(v): - return True - - # dissallow floats (except for .ix) - elif is_float(v): - if kind == 'ix': - return True - - return False - - return True - - for c in ['start', 'stop', 'step']: - v = getattr(key, c) - if not validate(v): - self._invalid_indexer('slice {0} value'.format(c), v) - - # figure out if this is a positional indexer - start, stop, step = key.start, key.stop, key.step - - def is_int(v): - return v is None or is_integer(v) - - is_null_slicer = start is None and stop is None - is_index_slice = is_int(start) and is_int(stop) - is_positional = is_index_slice and not self.is_integer() - - if kind == 'getitem': - return self._convert_slice_indexer_getitem( - key, is_index_slice=is_index_slice) - - # convert the slice to an indexer here - - # if we are mixed and have integers - try: - if is_positional and self.is_mixed(): - # TODO: i, j are not used 
anywhere - if start is not None: - i = self.get_loc(start) # noqa - if stop is not None: - j = self.get_loc(stop) # noqa - is_positional = False - except KeyError: - if self.inferred_type == 'mixed-integer-float': - raise - - if is_null_slicer: - indexer = key - elif is_positional: - indexer = key - else: - try: - indexer = self.slice_indexer(start, stop, step) - except Exception: - if is_index_slice: - if self.is_integer(): - raise - else: - indexer = key - else: - raise - - return indexer - - def _convert_list_indexer(self, keyarr, kind=None): - """ - passed a key that is tuplesafe that is integer based - and we have a mixed index (e.g. number/labels). figure out - the indexer. return None if we can't help - """ - if (kind in [None, 'iloc', 'ix'] and - is_integer_dtype(keyarr) and not self.is_floating() and - not isinstance(keyarr, ABCPeriodIndex)): - - if self.inferred_type == 'mixed-integer': - indexer = self.get_indexer(keyarr) - if (indexer >= 0).all(): - return indexer - # missing values are flagged as -1 by get_indexer and negative - # indices are already converted to positive indices in the - # above if-statement, so the negative flags are changed to - # values outside the range of indices so as to trigger an - # IndexError in maybe_convert_indices - indexer[indexer < 0] = len(self) - from pandas.core.indexing import maybe_convert_indices - return maybe_convert_indices(indexer, len(self)) - - elif not self.inferred_type == 'integer': - keyarr = np.where(keyarr < 0, len(self) + keyarr, keyarr) - return keyarr - - return None - - def _invalid_indexer(self, form, key): - """ consistent invalid indexer message """ - raise TypeError("cannot do {form} indexing on {klass} with these " - "indexers [{key}] of {kind}".format( - form=form, klass=type(self), key=key, - kind=type(key))) - - def get_duplicates(self): - from collections import defaultdict - counter = defaultdict(lambda: 0) - for k in self.values: - counter[k] += 1 - return sorted(k for k, v in 
compat.iteritems(counter) if v > 1)
-
- _get_duplicates = get_duplicates
-
- def _cleanup(self):
- self._engine.clear_mapping()
-
- @cache_readonly
- def _constructor(self):
- return type(self)
-
- @cache_readonly
- def _engine(self):
- # property, for now, slow to look up
- return self._engine_type(lambda: self.values, len(self))
-
- def _validate_index_level(self, level):
- """
- Validate index level.
-
- For single-level Index getting level number is a no-op, but some
- verification must be done like in MultiIndex.
-
- """
- if isinstance(level, int):
- if level < 0 and level != -1:
- raise IndexError("Too many levels: Index has only 1 level,"
- " %d is not a valid level number" % (level, ))
- elif level > 0:
- raise IndexError("Too many levels:"
- " Index has only 1 level, not %d" %
- (level + 1))
- elif level != self.name:
- raise KeyError('Level %s must be same as name (%s)' %
- (level, self.name))
-
- def _get_level_number(self, level):
- self._validate_index_level(level)
- return 0
-
- @cache_readonly
- def inferred_type(self):
- """ return a string of the type inferred from the values """
- return lib.infer_dtype(self)
-
- def is_type_compatible(self, kind):
- return kind == self.inferred_type
-
- @cache_readonly
- def is_all_dates(self):
- if self._data is None:
- return False
- return is_datetime_array(_ensure_object(self.values))
-
- def __iter__(self):
- return iter(self.values)
-
- def __reduce__(self):
- d = dict(data=self._data)
- d.update(self._get_attributes_dict())
- return _new_Index, (self.__class__, d), None
-
- def __setstate__(self, state):
- """Necessary for making this object picklable"""
-
- if isinstance(state, dict):
- self._data = state.pop('data')
- for k, v in compat.iteritems(state):
- setattr(self, k, v)
-
- elif isinstance(state, tuple):
-
- if len(state) == 2:
- nd_state, own_state = state
- data = np.empty(nd_state[1], dtype=nd_state[2])
- np.ndarray.__setstate__(data, nd_state)
- self.name = own_state[0]
-
- else: # pragma: no cover
- data = np.empty(state)
- np.ndarray.__setstate__(data, state)
-
- self._data = data
- self._reset_identity()
- else:
- raise Exception("invalid pickle state")
-
- _unpickle_compat = __setstate__
-
- def __deepcopy__(self, memo=None):
- if memo is None:
- memo = {}
- return self.copy(deep=True)
-
- def __nonzero__(self):
- raise ValueError("The truth value of a {0} is ambiguous. "
- "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
- .format(self.__class__.__name__))
-
- __bool__ = __nonzero__
-
- def __contains__(self, key):
- hash(key)
- # work around some kind of odd cython bug
- try:
- return key in self._engine
- except TypeError:
- return False
-
- def __hash__(self):
- raise TypeError("unhashable type: %r" % type(self).__name__)
-
- def __setitem__(self, key, value):
- raise TypeError("Index does not support mutable operations")
-
- def __getitem__(self, key):
- """
- Override numpy.ndarray's __getitem__ method to work as desired.
-
- This function adds lists and Series as valid boolean indexers
- (ndarrays only supports ndarray with dtype=bool).
-
- If resulting ndim != 1, plain ndarray is returned instead of
- corresponding `Index` subclass.
-
- """
- # There's no custom logic to be implemented in __getslice__, so it's
- # not overloaded intentionally.
- getitem = self._data.__getitem__
- promote = self._shallow_copy
-
- if np.isscalar(key):
- return getitem(key)
-
- if isinstance(key, slice):
- # This case is separated from the conditional above to avoid
- # pessimization of basic indexing.
- return promote(getitem(key)) - - if is_bool_indexer(key): - key = np.asarray(key) - - key = _values_from_object(key) - result = getitem(key) - if not np.isscalar(result): - return promote(result) - else: - return result - - def _ensure_compat_append(self, other): - """ - prepare the append - - Returns - ------- - list of to_concat, name of result Index - """ - name = self.name - to_concat = [self] - - if isinstance(other, (list, tuple)): - to_concat = to_concat + list(other) - else: - to_concat.append(other) - - for obj in to_concat: - if (isinstance(obj, Index) and obj.name != name and - obj.name is not None): - name = None - break - - to_concat = self._ensure_compat_concat(to_concat) - to_concat = [x._values if isinstance(x, Index) else x - for x in to_concat] - return to_concat, name - - def append(self, other): - """ - Append a collection of Index options together - - Parameters - ---------- - other : Index or list/tuple of indices - - Returns - ------- - appended : Index - """ - to_concat, name = self._ensure_compat_append(other) - attribs = self._get_attributes_dict() - attribs['name'] = name - return self._shallow_copy_with_infer( - np.concatenate(to_concat), **attribs) - - @staticmethod - def _ensure_compat_concat(indexes): - from pandas.tseries.api import (DatetimeIndex, PeriodIndex, - TimedeltaIndex) - klasses = DatetimeIndex, PeriodIndex, TimedeltaIndex - - is_ts = [isinstance(idx, klasses) for idx in indexes] - - if any(is_ts) and not all(is_ts): - return [_maybe_box(idx) for idx in indexes] - - return indexes - - def take(self, indices, axis=0, allow_fill=True, fill_value=None): - """ - return a new Index of the values selected by the indexer - - For internal compatibility with numpy arrays. 
- - # filling must always be None/nan here - # but is passed thru internally - - See also - -------- - numpy.ndarray.take - """ - - indices = com._ensure_platform_int(indices) - taken = self.values.take(indices) - return self._shallow_copy(taken) - - @cache_readonly - def _isnan(self): - """ return if each value is nan""" - if self._can_hold_na: - return isnull(self) - else: - # shouldn't reach to this condition by checking hasnans beforehand - values = np.empty(len(self), dtype=np.bool_) - values.fill(False) - return values - - @cache_readonly - def _nan_idxs(self): - if self._can_hold_na: - w, = self._isnan.nonzero() - return w - else: - return np.array([], dtype=np.int64) - - @cache_readonly - def hasnans(self): - """ return if I have any nans; enables various perf speedups """ - if self._can_hold_na: - return self._isnan.any() - else: - return False - - def _convert_for_op(self, value): - """ Convert value to be insertable to ndarray """ - return value - - def _assert_can_do_op(self, value): - """ Check value is valid for scalar op """ - if not lib.isscalar(value): - msg = "'value' must be a scalar, passed: {0}" - raise TypeError(msg.format(type(value).__name__)) - - def putmask(self, mask, value): - """ - return a new Index of the values set with the mask - - See also - -------- - numpy.ndarray.putmask - """ - values = self.values.copy() - try: - np.putmask(values, mask, self._convert_for_op(value)) - return self._shallow_copy(values) - except (ValueError, TypeError): - # coerces to object - return self.astype(object).putmask(mask, value) - - def format(self, name=False, formatter=None, **kwargs): - """ - Render a string representation of the Index - """ - header = [] - if name: - header.append(com.pprint_thing(self.name, - escape_chars=('\t', '\r', '\n')) if - self.name is not None else '') - - if formatter is not None: - return header + list(self.map(formatter)) - - return self._format_with_header(header, **kwargs) - - def _format_with_header(self, header, 
na_rep='NaN', **kwargs): - values = self.values - - from pandas.core.format import format_array - - if is_categorical_dtype(values.dtype): - values = np.array(values) - elif is_object_dtype(values.dtype): - values = lib.maybe_convert_objects(values, safe=1) - - if is_object_dtype(values.dtype): - result = [com.pprint_thing(x, escape_chars=('\t', '\r', '\n')) - for x in values] - - # could have nans - mask = isnull(values) - if mask.any(): - result = np.array(result) - result[mask] = na_rep - result = result.tolist() - - else: - result = _trim_front(format_array(values, None, justify='left')) - return header + result - - def to_native_types(self, slicer=None, **kwargs): - """ slice and dice then format """ - values = self - if slicer is not None: - values = values[slicer] - return values._format_native_types(**kwargs) - - def _format_native_types(self, na_rep='', quoting=None, **kwargs): - """ actually format my specific types """ - mask = isnull(self) - if not self.is_object() and not quoting: - values = np.asarray(self).astype(str) - else: - values = np.array(self, dtype=object, copy=True) - - values[mask] = na_rep - return values - - def equals(self, other): - """ - Determines if two Index objects contain the same elements. - """ - if self.is_(other): - return True - - if not isinstance(other, Index): - return False - - return array_equivalent(_values_from_object(self), - _values_from_object(other)) - - def identical(self, other): - """Similar to equals, but check that other comparable attributes are - also equal - """ - return (self.equals(other) and - all((getattr(self, c, None) == getattr(other, c, None) - for c in self._comparables)) and - type(self) == type(other)) - - def asof(self, label): - """ - For a sorted index, return the most recent label up to and including - the passed label. Return NaN if not found. 
- - See also - -------- - get_loc : asof is a thin wrapper around get_loc with method='pad' - """ - try: - loc = self.get_loc(label, method='pad') - except KeyError: - return _get_na_value(self.dtype) - else: - if isinstance(loc, slice): - loc = loc.indices(len(self))[-1] - return self[loc] - - def asof_locs(self, where, mask): - """ - where : array of timestamps - mask : array of booleans where data is not NA - - """ - locs = self.values[mask].searchsorted(where.values, side='right') - - locs = np.where(locs > 0, locs - 1, 0) - result = np.arange(len(self))[mask].take(locs) - - first = mask.argmax() - result[(locs == 0) & (where < self.values[first])] = -1 - - return result - - def sort_values(self, return_indexer=False, ascending=True): - """ - Return sorted copy of Index - """ - _as = self.argsort() - if not ascending: - _as = _as[::-1] - - sorted_index = self.take(_as) - - if return_indexer: - return sorted_index, _as - else: - return sorted_index - - def order(self, return_indexer=False, ascending=True): - """ - Return sorted copy of Index - - DEPRECATED: use :meth:`Index.sort_values` - """ - warnings.warn("order is deprecated, use sort_values(...)", - FutureWarning, stacklevel=2) - return self.sort_values(return_indexer=return_indexer, - ascending=ascending) - - def sort(self, *args, **kwargs): - raise TypeError("cannot sort an Index object in-place, use " - "sort_values instead") - - def sortlevel(self, level=None, ascending=True, sort_remaining=None): - """ - - For internal compatibility with with the Index API - - Sort the Index. 
This is for compat with MultiIndex - - Parameters - ---------- - ascending : boolean, default True - False to sort in descending order - - level, sort_remaining are compat parameters - - Returns - ------- - sorted_index : Index - """ - return self.sort_values(return_indexer=True, ascending=ascending) - - def shift(self, periods=1, freq=None): - """ - Shift Index containing datetime objects by input number of periods and - DateOffset - - Returns - ------- - shifted : Index - """ - raise NotImplementedError("Not supported for type %s" % - type(self).__name__) - - def argsort(self, *args, **kwargs): - """ - return an ndarray indexer of the underlying data - - See also - -------- - numpy.ndarray.argsort - """ - result = self.asi8 - if result is None: - result = np.array(self) - return result.argsort(*args, **kwargs) - - def __add__(self, other): - if com.is_list_like(other): - warnings.warn("using '+' to provide set union with Indexes is " - "deprecated, use '|' or .union()", FutureWarning, - stacklevel=2) - if isinstance(other, Index): - return self.union(other) - return Index(np.array(self) + other) - - def __radd__(self, other): - if is_list_like(other): - warnings.warn("using '+' to provide set union with Indexes is " - "deprecated, use '|' or .union()", FutureWarning, - stacklevel=2) - return Index(other + np.array(self)) - - __iadd__ = __add__ - - def __sub__(self, other): - warnings.warn("using '-' to provide set differences with Indexes is " - "deprecated, use .difference()", FutureWarning, - stacklevel=2) - return self.difference(other) - - def __and__(self, other): - return self.intersection(other) - - def __or__(self, other): - return self.union(other) - - def __xor__(self, other): - return self.sym_diff(other) - - def union(self, other): - """ - Form the union of two Index objects and sorts if possible. 
- - Parameters - ---------- - other : Index or array-like - - Returns - ------- - union : Index - - Examples - -------- - - >>> idx1 = pd.Index([1, 2, 3, 4]) - >>> idx2 = pd.Index([3, 4, 5, 6]) - >>> idx1.union(idx2) - Int64Index([1, 2, 3, 4, 5, 6], dtype='int64') - - """ - self._assert_can_do_setop(other) - other = _ensure_index(other) - - if len(other) == 0 or self.equals(other): - return self - - if len(self) == 0: - return other - - if not is_dtype_equal(self.dtype, other.dtype): - this = self.astype('O') - other = other.astype('O') - return this.union(other) - - if self.is_monotonic and other.is_monotonic: - try: - result = self._outer_indexer(self.values, other._values)[0] - except TypeError: - # incomparable objects - result = list(self.values) - - # worth making this faster? a very unusual case - value_set = set(self.values) - result.extend([x for x in other._values if x not in value_set]) - else: - indexer = self.get_indexer(other) - indexer, = (indexer == -1).nonzero() - - if len(indexer) > 0: - other_diff = com.take_nd(other._values, indexer, - allow_fill=False) - result = com._concat_compat((self.values, other_diff)) - - try: - self.values[0] < other_diff[0] - except TypeError as e: - warnings.warn("%s, sort order is undefined for " - "incomparable objects" % e, RuntimeWarning, - stacklevel=3) - else: - types = frozenset((self.inferred_type, - other.inferred_type)) - if not types & _unsortable_types: - result.sort() - - else: - result = self.values - - try: - result = np.sort(result) - except TypeError as e: - warnings.warn("%s, sort order is undefined for " - "incomparable objects" % e, RuntimeWarning, - stacklevel=3) - - # for subclasses - return self._wrap_union_result(other, result) - - def _wrap_union_result(self, other, result): - name = self.name if self.name == other.name else None - return self.__class__(result, name=name) - - def intersection(self, other): - """ - Form the intersection of two Index objects. 
- - This returns a new Index with elements common to the index and `other`. - Sortedness of the result is not guaranteed. - - Parameters - ---------- - other : Index or array-like - - Returns - ------- - intersection : Index - - Examples - -------- - - >>> idx1 = pd.Index([1, 2, 3, 4]) - >>> idx2 = pd.Index([3, 4, 5, 6]) - >>> idx1.intersection(idx2) - Int64Index([3, 4], dtype='int64') - - """ - self._assert_can_do_setop(other) - other = _ensure_index(other) - - if self.equals(other): - return self - - if not is_dtype_equal(self.dtype, other.dtype): - this = self.astype('O') - other = other.astype('O') - return this.intersection(other) - - if self.is_monotonic and other.is_monotonic: - try: - result = self._inner_indexer(self.values, other._values)[0] - return self._wrap_union_result(other, result) - except TypeError: - pass - - try: - indexer = Index(self.values).get_indexer(other._values) - indexer = indexer.take((indexer != -1).nonzero()[0]) - except: - # duplicates - indexer = Index(self.values).get_indexer_non_unique( - other._values)[0].unique() - indexer = indexer[indexer != -1] - - taken = self.take(indexer) - if self.name != other.name: - taken.name = None - return taken - - def difference(self, other): - """ - Return a new Index with elements from the index that are not in - `other`. - - This is the sorted set difference of two Index objects. 
- - Parameters - ---------- - other : Index or array-like - - Returns - ------- - difference : Index - - Examples - -------- - - >>> idx1 = pd.Index([1, 2, 3, 4]) - >>> idx2 = pd.Index([3, 4, 5, 6]) - >>> idx1.difference(idx2) - Int64Index([1, 2], dtype='int64') - - """ - self._assert_can_do_setop(other) - - if self.equals(other): - return Index([], name=self.name) - - other, result_name = self._convert_can_do_setop(other) - - theDiff = sorted(set(self) - set(other)) - return Index(theDiff, name=result_name) - - diff = deprecate('diff', difference) - - def sym_diff(self, other, result_name=None): - """ - Compute the sorted symmetric difference of two Index objects. - - Parameters - ---------- - other : Index or array-like - result_name : str - - Returns - ------- - sym_diff : Index - - Notes - ----- - ``sym_diff`` contains elements that appear in either ``idx1`` or - ``idx2`` but not both. Equivalent to the Index created by - ``(idx1 - idx2) + (idx2 - idx1)`` with duplicates dropped. - - The sorting of a result containing ``NaN`` values is not guaranteed - across Python versions. See GitHub issue #6444. - - Examples - -------- - >>> idx1 = Index([1, 2, 3, 4]) - >>> idx2 = Index([2, 3, 4, 5]) - >>> idx1.sym_diff(idx2) - Int64Index([1, 5], dtype='int64') - - You can also use the ``^`` operator: - - >>> idx1 ^ idx2 - Int64Index([1, 5], dtype='int64') - """ - self._assert_can_do_setop(other) - other, result_name_update = self._convert_can_do_setop(other) - if result_name is None: - result_name = result_name_update - - the_diff = sorted(set((self.difference(other)). 
- union(other.difference(self)))) - attribs = self._get_attributes_dict() - attribs['name'] = result_name - if 'freq' in attribs: - attribs['freq'] = None - return self._shallow_copy_with_infer(the_diff, **attribs) - - def get_loc(self, key, method=None, tolerance=None): - """ - Get integer location for requested label - - Parameters - ---------- - key : label - method : {None, 'pad'/'ffill', 'backfill'/'bfill', 'nearest'}, optional - * default: exact matches only. - * pad / ffill: find the PREVIOUS index value if no exact match. - * backfill / bfill: use NEXT index value if no exact match - * nearest: use the NEAREST index value if no exact match. Tied - distances are broken by preferring the larger index value. - tolerance : optional - Maximum distance from index value for inexact matches. The value of - the index at the matching location most satisfy the equation - ``abs(index[loc] - key) <= tolerance``. - - .. versionadded:: 0.17.0 - - Returns - ------- - loc : int if unique index, possibly slice or mask if not - """ - if method is None: - if tolerance is not None: - raise ValueError('tolerance argument only valid if using pad, ' - 'backfill or nearest lookups') - key = _values_from_object(key) - return self._engine.get_loc(key) - - indexer = self.get_indexer([key], method=method, tolerance=tolerance) - if indexer.ndim > 1 or indexer.size > 1: - raise TypeError('get_loc requires scalar valued input') - loc = indexer.item() - if loc == -1: - raise KeyError(key) - return loc - - def get_value(self, series, key): - """ - Fast lookup of value from 1-dimensional ndarray. Only use this if you - know what you're doing - """ - - # if we have something that is Index-like, then - # use this, e.g. 
DatetimeIndex - s = getattr(series, '_values', None) - if isinstance(s, Index) and lib.isscalar(key): - return s[key] - - s = _values_from_object(series) - k = _values_from_object(key) - - # prevent integer truncation bug in indexing - if is_float(k) and not self.is_floating(): - raise KeyError - - try: - return self._engine.get_value(s, k) - except KeyError as e1: - if len(self) > 0 and self.inferred_type in ['integer', 'boolean']: - raise - - try: - return tslib.get_value_box(s, key) - except IndexError: - raise - except TypeError: - # generator/iterator-like - if is_iterator(key): - raise InvalidIndexError(key) - else: - raise e1 - except Exception: # pragma: no cover - raise e1 - except TypeError: - # python 3 - if np.isscalar(key): # pragma: no cover - raise IndexError(key) - raise InvalidIndexError(key) - - def set_value(self, arr, key, value): - """ - Fast lookup of value from 1-dimensional ndarray. Only use this if you - know what you're doing - """ - self._engine.set_value(_values_from_object(arr), - _values_from_object(key), value) - - def get_level_values(self, level): - """ - Return vector of label values for requested level, equal to the length - of the index - - Parameters - ---------- - level : int - - Returns - ------- - values : ndarray - """ - # checks that level number is actually just 1 - self._validate_index_level(level) - return self - - def get_indexer(self, target, method=None, limit=None, tolerance=None): - """ - Compute indexer and mask for new index given the current index. The - indexer should be then used as an input to ndarray.take to align the - current data to the new index. - - Parameters - ---------- - target : Index - method : {None, 'pad'/'ffill', 'backfill'/'bfill', 'nearest'}, optional - * default: exact matches only. - * pad / ffill: find the PREVIOUS index value if no exact match. - * backfill / bfill: use NEXT index value if no exact match - * nearest: use the NEAREST index value if no exact match. 
Tied
- distances are broken by preferring the larger index value.
- limit : int, optional
- Maximum number of consecutive labels in ``target`` to match for
- inexact matches.
- tolerance : optional
- Maximum distance between original and new labels for inexact
- matches. The values of the index at the matching locations most
- satisfy the equation ``abs(index[indexer] - target) <= tolerance``.
-
- .. versionadded:: 0.17.0
-
- Examples
- --------
- >>> indexer = index.get_indexer(new_index)
- >>> new_values = cur_values.take(indexer)
-
- Returns
- -------
- indexer : ndarray of int
- Integers from 0 to n - 1 indicating that the index at these
- positions matches the corresponding target values. Missing values
- in the target are marked by -1.
- """
- method = _clean_reindex_fill_method(method)
- target = _ensure_index(target)
- if tolerance is not None:
- tolerance = self._convert_tolerance(tolerance)
-
- pself, ptarget = self._possibly_promote(target)
- if pself is not self or ptarget is not target:
- return pself.get_indexer(ptarget, method=method, limit=limit,
- tolerance=tolerance)
-
- if not is_dtype_equal(self.dtype, target.dtype):
- this = self.astype(object)
- target = target.astype(object)
- return this.get_indexer(target, method=method, limit=limit,
- tolerance=tolerance)
-
- if not self.is_unique:
- raise InvalidIndexError('Reindexing only valid with uniquely'
- ' valued Index objects')
-
- if method == 'pad' or method == 'backfill':
- indexer = self._get_fill_indexer(target, method, limit, tolerance)
- elif method == 'nearest':
- indexer = self._get_nearest_indexer(target, limit, tolerance)
- else:
- if tolerance is not None:
- raise ValueError('tolerance argument only valid if doing pad, '
- 'backfill or nearest reindexing')
- if limit is not None:
- raise ValueError('limit argument only valid if doing pad, '
- 'backfill or nearest reindexing')
-
- indexer = self._engine.get_indexer(target._values)
-
- return com._ensure_platform_int(indexer)
-
- def _convert_tolerance(self, tolerance):
- # override this method on subclasses
- return tolerance
-
- def _get_fill_indexer(self, target, method, limit=None, tolerance=None):
- if self.is_monotonic_increasing and target.is_monotonic_increasing:
- method = (self._engine.get_pad_indexer if method == 'pad' else
- self._engine.get_backfill_indexer)
- indexer = method(target._values, limit)
- else:
- indexer = self._get_fill_indexer_searchsorted(target, method,
- limit)
- if tolerance is not None:
- indexer = self._filter_indexer_tolerance(target._values, indexer,
- tolerance)
- return indexer
-
- def _get_fill_indexer_searchsorted(self, target, method, limit=None):
- """
- Fallback pad/backfill get_indexer that works for monotonic decreasing
- indexes and non-monotonic targets
- """
- if limit is not None:
- raise ValueError('limit argument for %r method only well-defined '
- 'if index and target are monotonic' % method)
-
- side = 'left' if method == 'pad' else 'right'
- target = np.asarray(target)
-
- # find exact matches first (this simplifies the algorithm)
- indexer = self.get_indexer(target)
- nonexact = (indexer == -1)
- indexer[nonexact] = self._searchsorted_monotonic(target[nonexact],
- side)
- if side == 'left':
- # searchsorted returns "indices into a sorted array such that,
- # if the corresponding elements in v were inserted before the
- # indices, the order of a would be preserved".
- # Thus, we need to subtract 1 to find values to the left.
- indexer[nonexact] -= 1
- # This also mapped not found values (values of 0 from
- # np.searchsorted) to -1, which conveniently is also our
- # sentinel for missing values
- else:
- # Mark indices to the right of the largest value as not found
- indexer[indexer == len(self)] = -1
- return indexer
-
- def _get_nearest_indexer(self, target, limit, tolerance):
- """
- Get the indexer for the nearest index labels; requires an index with
- values that can be subtracted from each other (e.g., not strings or
- tuples).
- """ - left_indexer = self.get_indexer(target, 'pad', limit=limit) - right_indexer = self.get_indexer(target, 'backfill', limit=limit) - - target = np.asarray(target) - left_distances = abs(self.values[left_indexer] - target) - right_distances = abs(self.values[right_indexer] - target) - - op = operator.lt if self.is_monotonic_increasing else operator.le - indexer = np.where(op(left_distances, right_distances) | - (right_indexer == -1), left_indexer, right_indexer) - if tolerance is not None: - indexer = self._filter_indexer_tolerance(target, indexer, - tolerance) - return indexer - - def _filter_indexer_tolerance(self, target, indexer, tolerance): - distance = abs(self.values[indexer] - target) - indexer = np.where(distance <= tolerance, indexer, -1) - return indexer - - def get_indexer_non_unique(self, target): - """ return an indexer suitable for taking from a non unique index - return the labels in the same order as the target, and - return a missing indexer into the target (missing are marked as -1 - in the indexer); target must be an iterable """ - target = _ensure_index(target) - pself, ptarget = self._possibly_promote(target) - if pself is not self or ptarget is not target: - return pself.get_indexer_non_unique(ptarget) - - if self.is_all_dates: - self = Index(self.asi8) - tgt_values = target.asi8 - else: - tgt_values = target._values - - indexer, missing = self._engine.get_indexer_non_unique(tgt_values) - return Index(indexer), missing - - def get_indexer_for(self, target, **kwargs): - """ guaranteed return of an indexer even when non-unique """ - if self.is_unique: - return self.get_indexer(target, **kwargs) - indexer, _ = self.get_indexer_non_unique(target, **kwargs) - return indexer - - def _possibly_promote(self, other): - # A hack, but it works - from pandas.tseries.index import DatetimeIndex - if self.inferred_type == 'date' and isinstance(other, DatetimeIndex): - return DatetimeIndex(self), other - elif self.inferred_type == 'boolean': - if not 
is_object_dtype(self.dtype): - return self.astype('object'), other.astype('object') - return self, other - - def groupby(self, to_groupby): - """ - Group the index labels by a given array of values. - - Parameters - ---------- - to_groupby : array - Values used to determine the groups. - - Returns - ------- - groups : dict - {group name -> group labels} - - """ - return self._groupby(self.values, _values_from_object(to_groupby)) - - def map(self, mapper): - return self._arrmap(self.values, mapper) - - def isin(self, values, level=None): - """ - Compute boolean array of whether each index value is found in the - passed set of values. - - Parameters - ---------- - values : set or sequence of values - Sought values. - level : str or int, optional - Name or position of the index level to use (if the index is a - MultiIndex). - - Notes - ----- - If `level` is specified: - - - if it is the name of one *and only one* index level, use that level; - - otherwise it should be a number indicating level position. 
- - Returns - ------- - is_contained : ndarray (boolean dtype) - - """ - if level is not None: - self._validate_index_level(level) - return algorithms.isin(np.array(self), values) - - def _can_reindex(self, indexer): - """ - *this is an internal non-public method* - - Check if we are allowing reindexing with this particular indexer - - Parameters - ---------- - indexer : an integer indexer - - Raises - ------ - ValueError if its a duplicate axis - """ - - # trying to reindex on an axis with duplicates - if not self.is_unique and len(indexer): - raise ValueError("cannot reindex from a duplicate axis") - - def reindex(self, target, method=None, level=None, limit=None, - tolerance=None): - """ - Create index with target's values (move/add/delete values as necessary) - - Parameters - ---------- - target : an iterable - - Returns - ------- - new_index : pd.Index - Resulting index - indexer : np.ndarray or None - Indices of output values in original index - - """ - # GH6552: preserve names when reindexing to non-named target - # (i.e. neither Index nor Series). - preserve_names = not hasattr(target, 'name') - - # GH7774: preserve dtype/tz if target is empty and not an Index. 
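The `isin` docstring above can be illustrated with a minimal sketch using only the public `pandas.Index.isin` API (the values and names here are illustrative, not from the patch):

```python
import pandas as pd

# Index.isin returns a boolean ndarray marking which index labels
# appear in the passed set of values; query values that are absent
# from the index (like 'z' here) simply match nothing.
idx = pd.Index(['a', 'b', 'c'])
mask = idx.isin(['b', 'c', 'z'])
print(mask.tolist())  # [False, True, True]
```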
- target = _ensure_has_len(target) # target may be an iterator - - if not isinstance(target, Index) and len(target) == 0: - attrs = self._get_attributes_dict() - attrs.pop('freq', None) # don't preserve freq - target = self._simple_new(None, dtype=self.dtype, **attrs) - else: - target = _ensure_index(target) - - if level is not None: - if method is not None: - raise TypeError('Fill method not supported if level passed') - _, indexer, _ = self._join_level(target, level, how='right', - return_indexers=True) - else: - if self.equals(target): - indexer = None - else: - if self.is_unique: - indexer = self.get_indexer(target, method=method, - limit=limit, - tolerance=tolerance) - else: - if method is not None or limit is not None: - raise ValueError("cannot reindex a non-unique index " - "with a method or limit") - indexer, missing = self.get_indexer_non_unique(target) - - if preserve_names and target.nlevels == 1 and target.name != self.name: - target = target.copy() - target.name = self.name - - return target, indexer - - def _reindex_non_unique(self, target): - """ - *this is an internal non-public method* - - Create a new index with target's values (move/add/delete values as - necessary) use with non-unique Index and a possibly non-unique target - - Parameters - ---------- - target : an iterable - - Returns - ------- - new_index : pd.Index - Resulting index - indexer : np.ndarray or None - Indices of output values in original index - - """ - - target = _ensure_index(target) - indexer, missing = self.get_indexer_non_unique(target) - check = indexer != -1 - new_labels = self.take(indexer[check]) - new_indexer = None - - if len(missing): - l = np.arange(len(indexer)) - - missing = com._ensure_platform_int(missing) - missing_labels = target.take(missing) - missing_indexer = _ensure_int64(l[~check]) - cur_labels = self.take(indexer[check])._values - cur_indexer = _ensure_int64(l[check]) - - new_labels = np.empty(tuple([len(indexer)]), dtype=object) - 
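As a rough usage sketch of the `reindex` contract documented above (new index plus an indexer of positions into the original, with -1 for missing labels), assuming only the public API:

```python
import pandas as pd

# reindex returns the target as an Index together with an indexer of
# integer positions into the original index; -1 marks target labels
# that do not exist in the original.
idx = pd.Index(['a', 'b', 'c'])
new_index, indexer = idx.reindex(['b', 'c', 'd'])
print(new_index.tolist())  # ['b', 'c', 'd']
print(indexer.tolist())    # [1, 2, -1]
```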
new_labels[cur_indexer] = cur_labels - new_labels[missing_indexer] = missing_labels - - # a unique indexer - if target.is_unique: - - # see GH5553, make sure we use the right indexer - new_indexer = np.arange(len(indexer)) - new_indexer[cur_indexer] = np.arange(len(cur_labels)) - new_indexer[missing_indexer] = -1 - - # we have a non_unique selector, need to use the original - # indexer here - else: - - # need to retake to have the same size as the indexer - indexer = indexer._values - indexer[~check] = 0 - - # reset the new indexer to account for the new size - new_indexer = np.arange(len(self.take(indexer))) - new_indexer[~check] = -1 - - new_index = self._shallow_copy_with_infer(new_labels, freq=None) - return new_index, indexer, new_indexer - - def join(self, other, how='left', level=None, return_indexers=False): - """ - *this is an internal non-public method* - - Compute join_index and indexers to conform data - structures to the new index. - - Parameters - ---------- - other : Index - how : {'left', 'right', 'inner', 'outer'} - level : int or level name, default None - return_indexers : boolean, default False - - Returns - ------- - join_index, (left_indexer, right_indexer) - """ - self_is_mi = isinstance(self, MultiIndex) - other_is_mi = isinstance(other, MultiIndex) - - # try to figure out the join level - # GH3662 - if level is None and (self_is_mi or other_is_mi): - - # have the same levels/names so a simple join - if self.names == other.names: - pass - else: - return self._join_multi(other, how=how, - return_indexers=return_indexers) - - # join on the level - if level is not None and (self_is_mi or other_is_mi): - return self._join_level(other, level, how=how, - return_indexers=return_indexers) - - other = _ensure_index(other) - - if len(other) == 0 and how in ('left', 'outer'): - join_index = self._shallow_copy() - if return_indexers: - rindexer = np.repeat(-1, len(join_index)) - return join_index, None, rindexer - else: - return join_index - - if 
len(self) == 0 and how in ('right', 'outer'): - join_index = other._shallow_copy() - if return_indexers: - lindexer = np.repeat(-1, len(join_index)) - return join_index, lindexer, None - else: - return join_index - - if self._join_precedence < other._join_precedence: - how = {'right': 'left', 'left': 'right'}.get(how, how) - result = other.join(self, how=how, level=level, - return_indexers=return_indexers) - if return_indexers: - x, y, z = result - result = x, z, y - return result - - if not is_dtype_equal(self.dtype, other.dtype): - this = self.astype('O') - other = other.astype('O') - return this.join(other, how=how, return_indexers=return_indexers) - - _validate_join_method(how) - - if not self.is_unique and not other.is_unique: - return self._join_non_unique(other, how=how, - return_indexers=return_indexers) - elif not self.is_unique or not other.is_unique: - if self.is_monotonic and other.is_monotonic: - return self._join_monotonic(other, how=how, - return_indexers=return_indexers) - else: - return self._join_non_unique(other, how=how, - return_indexers=return_indexers) - elif self.is_monotonic and other.is_monotonic: - try: - return self._join_monotonic(other, how=how, - return_indexers=return_indexers) - except TypeError: - pass - - if how == 'left': - join_index = self - elif how == 'right': - join_index = other - elif how == 'inner': - join_index = self.intersection(other) - elif how == 'outer': - join_index = self.union(other) - - if return_indexers: - if join_index is self: - lindexer = None - else: - lindexer = self.get_indexer(join_index) - if join_index is other: - rindexer = None - else: - rindexer = other.get_indexer(join_index) - return join_index, lindexer, rindexer - else: - return join_index - - def _join_multi(self, other, how, return_indexers=True): - - self_is_mi = isinstance(self, MultiIndex) - other_is_mi = isinstance(other, MultiIndex) - - # figure out join names - self_names = [n for n in self.names if n is not None] - other_names = [n 
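A hedged sketch of the `join` behavior implemented above, for the simple case of two unique, monotonic indexes (public API only; the monotonic fast path in the code handles exactly this case):

```python
import pandas as pd

# An 'inner' join of two unique monotonic indexes keeps the
# intersection; the returned indexers map the joined labels back into
# each original index (None would mean "already aligned").
left = pd.Index([1, 2, 3, 4])
right = pd.Index([3, 4, 5, 6])
joined, lidx, ridx = left.join(right, how='inner', return_indexers=True)
print(joined.tolist())  # [3, 4]
print(lidx.tolist())    # [2, 3]
print(ridx.tolist())    # [0, 1]
```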
for n in other.names if n is not None] - overlap = list(set(self_names) & set(other_names)) - - # need at least 1 in common, but not more than 1 - if not len(overlap): - raise ValueError("cannot join with no level specified and no " - "overlapping names") - if len(overlap) > 1: - raise NotImplementedError("merging with more than one level " - "overlap on a multi-index is not " - "implemented") - jl = overlap[0] - - # make the indices into mi's that match - if not (self_is_mi and other_is_mi): - - flip_order = False - if self_is_mi: - self, other = other, self - flip_order = True - # flip if join method is right or left - how = {'right': 'left', 'left': 'right'}.get(how, how) - - level = other.names.index(jl) - result = self._join_level(other, level, how=how, - return_indexers=return_indexers) - - if flip_order: - if isinstance(result, tuple): - return result[0], result[2], result[1] - return result - - # 2 multi-indexes - raise NotImplementedError("merging with both multi-indexes is not " - "implemented") - - def _join_non_unique(self, other, how='left', return_indexers=False): - from pandas.tools.merge import _get_join_indexers - - left_idx, right_idx = _get_join_indexers([self.values], - [other._values], how=how, - sort=True) - - left_idx = com._ensure_platform_int(left_idx) - right_idx = com._ensure_platform_int(right_idx) - - join_index = self.values.take(left_idx) - mask = left_idx == -1 - np.putmask(join_index, mask, other._values.take(right_idx)) - - join_index = self._wrap_joined_index(join_index, other) - - if return_indexers: - return join_index, left_idx, right_idx - else: - return join_index - - def _join_level(self, other, level, how='left', return_indexers=False, - keep_order=True): - """ - The join method *only* affects the level of the resulting - MultiIndex. Otherwise it just exactly aligns the Index data to the - labels of the level in the MultiIndex. 
If `keep_order` == True, the - order of the data indexed by the MultiIndex will not be changed; - otherwise, it will tie out with `other`. - """ - from pandas.algos import groupsort_indexer - - def _get_leaf_sorter(labels): - ''' - returns sorter for the innermost level while preserving the - order of higher levels - ''' - if labels[0].size == 0: - return np.empty(0, dtype='int64') - - if len(labels) == 1: - lab = _ensure_int64(labels[0]) - sorter, _ = groupsort_indexer(lab, 1 + lab.max()) - return sorter - - # find indexers of beginning of each set of - # same-key labels w.r.t all but last level - tic = labels[0][:-1] != labels[0][1:] - for lab in labels[1:-1]: - tic |= lab[:-1] != lab[1:] - - starts = np.hstack(([True], tic, [True])).nonzero()[0] - lab = _ensure_int64(labels[-1]) - return lib.get_level_sorter(lab, _ensure_int64(starts)) - - if isinstance(self, MultiIndex) and isinstance(other, MultiIndex): - raise TypeError('Join on level between two MultiIndex objects ' - 'is ambiguous') - - left, right = self, other - - flip_order = not isinstance(self, MultiIndex) - if flip_order: - left, right = right, left - how = {'right': 'left', 'left': 'right'}.get(how, how) - - level = left._get_level_number(level) - old_level = left.levels[level] - - if not right.is_unique: - raise NotImplementedError('Index._join_level on non-unique index ' - 'is not implemented') - - new_level, left_lev_indexer, right_lev_indexer = \ - old_level.join(right, how=how, return_indexers=True) - - if left_lev_indexer is None: - if keep_order or len(left) == 0: - left_indexer = None - join_index = left - else: # sort the leaves - left_indexer = _get_leaf_sorter(left.labels[:level + 1]) - join_index = left[left_indexer] - - else: - left_lev_indexer = _ensure_int64(left_lev_indexer) - rev_indexer = lib.get_reverse_indexer(left_lev_indexer, - len(old_level)) - - new_lev_labels = com.take_nd(rev_indexer, left.labels[level], - allow_fill=False) - - new_labels = list(left.labels) -
new_labels[level] = new_lev_labels - - new_levels = list(left.levels) - new_levels[level] = new_level - - if keep_order: # just drop missing values. o.w. keep order - left_indexer = np.arange(len(left)) - mask = new_lev_labels != -1 - if not mask.all(): - new_labels = [lab[mask] for lab in new_labels] - left_indexer = left_indexer[mask] - - else: # tie out the order with other - if level == 0: # outer most level, take the fast route - ngroups = 1 + new_lev_labels.max() - left_indexer, counts = groupsort_indexer(new_lev_labels, - ngroups) - # missing values are placed first; drop them! - left_indexer = left_indexer[counts[0]:] - new_labels = [lab[left_indexer] for lab in new_labels] - - else: # sort the leaves - mask = new_lev_labels != -1 - mask_all = mask.all() - if not mask_all: - new_labels = [lab[mask] for lab in new_labels] - - left_indexer = _get_leaf_sorter(new_labels[:level + 1]) - new_labels = [lab[left_indexer] for lab in new_labels] - - # left_indexers are w.r.t masked frame. - # reverse to original frame! 
- if not mask_all: - left_indexer = mask.nonzero()[0][left_indexer] - - join_index = MultiIndex(levels=new_levels, labels=new_labels, - names=left.names, verify_integrity=False) - - if right_lev_indexer is not None: - right_indexer = com.take_nd(right_lev_indexer, - join_index.labels[level], - allow_fill=False) - else: - right_indexer = join_index.labels[level] - - if flip_order: - left_indexer, right_indexer = right_indexer, left_indexer - - if return_indexers: - return join_index, left_indexer, right_indexer - else: - return join_index - - def _join_monotonic(self, other, how='left', return_indexers=False): - if self.equals(other): - ret_index = other if how == 'right' else self - if return_indexers: - return ret_index, None, None - else: - return ret_index - - sv = self.values - ov = other._values - - if self.is_unique and other.is_unique: - # We can perform much better than the general case - if how == 'left': - join_index = self - lidx = None - ridx = self._left_indexer_unique(sv, ov) - elif how == 'right': - join_index = other - lidx = self._left_indexer_unique(ov, sv) - ridx = None - elif how == 'inner': - join_index, lidx, ridx = self._inner_indexer(sv, ov) - join_index = self._wrap_joined_index(join_index, other) - elif how == 'outer': - join_index, lidx, ridx = self._outer_indexer(sv, ov) - join_index = self._wrap_joined_index(join_index, other) - else: - if how == 'left': - join_index, lidx, ridx = self._left_indexer(sv, ov) - elif how == 'right': - join_index, ridx, lidx = self._left_indexer(ov, sv) - elif how == 'inner': - join_index, lidx, ridx = self._inner_indexer(sv, ov) - elif how == 'outer': - join_index, lidx, ridx = self._outer_indexer(sv, ov) - join_index = self._wrap_joined_index(join_index, other) - - if return_indexers: - return join_index, lidx, ridx - else: - return join_index - - def _wrap_joined_index(self, joined, other): - name = self.name if self.name == other.name else None - return Index(joined, name=name) - - def 
slice_indexer(self, start=None, end=None, step=None, kind=None): - """ - For an ordered Index, compute the slice indexer for input labels and - step - - Parameters - ---------- - start : label, default None - If None, defaults to the beginning - end : label, default None - If None, defaults to the end - step : int, default None - kind : string, default None - - Returns - ------- - indexer : ndarray or slice - - Notes - ----- - This function assumes that the data is sorted, so use at your own peril - """ - start_slice, end_slice = self.slice_locs(start, end, step=step, - kind=kind) - - # return a slice - if not lib.isscalar(start_slice): - raise AssertionError("Start slice bound is non-scalar") - if not lib.isscalar(end_slice): - raise AssertionError("End slice bound is non-scalar") - - return slice(start_slice, end_slice, step) - - def _maybe_cast_slice_bound(self, label, side, kind): - """ - This function should be overloaded in subclasses that allow non-trivial - casting on label-slice bounds, e.g. datetime-like indices allowing - strings containing formatted datetimes. - - Parameters - ---------- - label : object - side : {'left', 'right'} - kind : string / None - - Returns - ------- - label : object - - Notes - ----- - Value of `side` parameter should be validated in caller. - - """ - - # We are a plain index here (sub-class override this method if they - # wish to have special treatment for floats/ints, e.g. 
Float64Index and - # datetimelike Indexes - # reject them - if is_float(label): - self._invalid_indexer('slice', label) - - # we are trying to find integer bounds on a non-integer based index - # this is rejected (generally .loc gets you here) - elif is_integer(label): - self._invalid_indexer('slice', label) - - return label - - def _searchsorted_monotonic(self, label, side='left'): - if self.is_monotonic_increasing: - return self.searchsorted(label, side=side) - elif self.is_monotonic_decreasing: - # np.searchsorted expects ascending sort order, have to reverse - # everything for it to work (element ordering, search side and - # resulting value). - pos = self[::-1].searchsorted(label, side='right' if side == 'left' - else 'left') - return len(self) - pos - - raise ValueError('index must be monotonic increasing or decreasing') - - def get_slice_bound(self, label, side, kind): - """ - Calculate slice bound that corresponds to given label. - - Returns leftmost (one-past-the-rightmost if ``side=='right'``) position - of given label. - - Parameters - ---------- - label : object - side : {'left', 'right'} - kind : string / None, the type of indexer - - """ - if side not in ('left', 'right'): - raise ValueError("Invalid value for side kwarg," - " must be either 'left' or 'right': %s" % - (side, )) - - original_label = label - - # For datetime indices label may be a string that has to be converted - # to datetime boundary according to its resolution. - label = self._maybe_cast_slice_bound(label, side, kind) - - # we need to look up the label - try: - slc = self.get_loc(label) - except KeyError as err: - try: - return self._searchsorted_monotonic(label, side) - except ValueError: - # raise the original KeyError - raise err - - if isinstance(slc, np.ndarray): - # get_loc may return a boolean array or an array of indices, which - # is OK as long as they are representable by a slice.
- if is_bool_dtype(slc): - slc = lib.maybe_booleans_to_slice(slc.view('u1')) - else: - slc = lib.maybe_indices_to_slice(slc.astype('i8'), len(self)) - if isinstance(slc, np.ndarray): - raise KeyError("Cannot get %s slice bound for non-unique " - "label: %r" % (side, original_label)) - - if isinstance(slc, slice): - if side == 'left': - return slc.start - else: - return slc.stop - else: - if side == 'right': - return slc + 1 - else: - return slc - - def slice_locs(self, start=None, end=None, step=None, kind=None): - """ - Compute slice locations for input labels. - - Parameters - ---------- - start : label, default None - If None, defaults to the beginning - end : label, default None - If None, defaults to the end - step : int, defaults None - If None, defaults to 1 - kind : string, defaults None - - Returns - ------- - start, end : int - - """ - inc = (step is None or step >= 0) - - if not inc: - # If it's a reverse slice, temporarily swap bounds. - start, end = end, start - - start_slice = None - if start is not None: - start_slice = self.get_slice_bound(start, 'left', kind) - if start_slice is None: - start_slice = 0 - - end_slice = None - if end is not None: - end_slice = self.get_slice_bound(end, 'right', kind) - if end_slice is None: - end_slice = len(self) - - if not inc: - # Bounds at this moment are swapped, swap them back and shift by 1. - # - # slice_locs('B', 'A', step=-1): s='B', e='A' - # - # s='A' e='B' - # AFTER SWAP: | | - # v ------------------> V - # ----------------------------------- - # | | |A|A|A|A| | | | | |B|B| | | | | - # ----------------------------------- - # ^ <------------------ ^ - # SHOULD BE: | | - # end=s-1 start=e-1 - # - end_slice, start_slice = start_slice - 1, end_slice - 1 - - # i == -1 triggers ``len(self) + i`` selection that points to the - # last element, not before-the-first one, subtracting len(self) - # compensates that. 
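A short sketch of the `slice_locs` contract documented above (public API only; labels are illustrative):

```python
import pandas as pd

# slice_locs maps label bounds to integer positions; the right bound
# is one past the last matching label, so the pair can feed a plain
# python slice directly.
idx = pd.Index(list('abcdef'))
start, end = idx.slice_locs('b', 'd')
print((start, end))             # (1, 4)
print(idx[start:end].tolist())  # ['b', 'c', 'd']
```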
- if end_slice == -1: - end_slice -= len(self) - if start_slice == -1: - start_slice -= len(self) - - return start_slice, end_slice - - def delete(self, loc): - """ - Make new Index with passed location(-s) deleted - - Returns - ------- - new_index : Index - """ - return self._shallow_copy(np.delete(self._data, loc)) - - def insert(self, loc, item): - """ - Make new Index inserting new item at location. Follows - Python list.append semantics for negative values - - Parameters - ---------- - loc : int - item : object - - Returns - ------- - new_index : Index - """ - _self = np.asarray(self) - item = self._coerce_scalar_to_index(item)._values - - idx = np.concatenate((_self[:loc], item, _self[loc:])) - return self._shallow_copy_with_infer(idx) - - def drop(self, labels, errors='raise'): - """ - Make new Index with passed list of labels deleted - - Parameters - ---------- - labels : array-like - errors : {'ignore', 'raise'}, default 'raise' - If 'ignore', suppress error and existing labels are dropped. - - Returns - ------- - dropped : Index - """ - labels = com._index_labels_to_array(labels) - indexer = self.get_indexer(labels) - mask = indexer == -1 - if mask.any(): - if errors != 'ignore': - raise ValueError('labels %s not contained in axis' % - labels[mask]) - indexer = indexer[~mask] - return self.delete(indexer) - - @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', - False: 'first'}) - @Appender(base._shared_docs['drop_duplicates'] % _index_doc_kwargs) - def drop_duplicates(self, keep='first'): - return super(Index, self).drop_duplicates(keep=keep) - - @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', - False: 'first'}) - @Appender(base._shared_docs['duplicated'] % _index_doc_kwargs) - def duplicated(self, keep='first'): - return super(Index, self).duplicated(keep=keep) - - _index_shared_docs['fillna'] = """ - Fill NA/NaN values with the specified value - - Parameters - ---------- - value : scalar - Scalar value to use to fill holes (e.g. 
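The `delete`, `insert`, and `drop` methods above all return new Index objects; a minimal sketch using the public API:

```python
import pandas as pd

# Each method returns a new Index; the original index is immutable.
idx = pd.Index(['a', 'b', 'c', 'd'])
print(idx.delete(0).tolist())         # ['b', 'c', 'd']
print(idx.insert(1, 'x').tolist())    # ['a', 'x', 'b', 'c', 'd']
print(idx.drop(['b', 'd']).tolist())  # ['a', 'c']
```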
0). - This value cannot be a list-like. - downcast : dict, default is None - a dict of item->dtype of what to downcast if possible, - or the string 'infer' which will try to downcast to an appropriate - equal type (e.g. float64 to int64 if possible) - - Returns - ------- - filled : Index - """ - - @Appender(_index_shared_docs['fillna']) - def fillna(self, value=None, downcast=None): - self._assert_can_do_op(value) - if self.hasnans: - result = self.putmask(self._isnan, value) - if downcast is None: - # no need to care metadata other than name - # because it can't have freq if it has NaTs - return Index(result, name=self.name) - return self._shallow_copy() - - def _evaluate_with_timedelta_like(self, other, op, opstr): - raise TypeError("can only perform ops with timedelta like values") - - def _evaluate_with_datetime_like(self, other, op, opstr): - raise TypeError("can only perform ops with datetime like values") - - @classmethod - def _add_comparison_methods(cls): - """ add in comparison methods """ - - def _make_compare(op): - def _evaluate_compare(self, other): - if isinstance(other, (np.ndarray, Index, ABCSeries)): - if other.ndim > 0 and len(self) != len(other): - raise ValueError('Lengths must match to compare') - func = getattr(self.values, op) - result = func(np.asarray(other)) - - # technically we could support bool dtyped Index - # for now just return the indexing array directly - if is_bool_dtype(result): - return result - try: - return Index(result) - except TypeError: - return result - - return _evaluate_compare - - cls.__eq__ = _make_compare('__eq__') - cls.__ne__ = _make_compare('__ne__') - cls.__lt__ = _make_compare('__lt__') - cls.__gt__ = _make_compare('__gt__') - cls.__le__ = _make_compare('__le__') - cls.__ge__ = _make_compare('__ge__') - - @classmethod - def _add_numericlike_set_methods_disabled(cls): - """ add in the numeric set-like methods to disable """ - - def _make_invalid_op(name): - def invalid_op(self, other=None): - raise TypeError("cannot
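The `fillna` and comparison machinery above can be sketched with the public API (values are illustrative; note that element-wise comparisons yield plain boolean ndarrays, as the `_make_compare` code short-circuits for boolean results):

```python
import numpy as np
import pandas as pd

# fillna replaces NA values and returns a new Index; comparisons
# return boolean ndarrays rather than an Index.
idx = pd.Index([1.0, np.nan, 3.0])
print(idx.fillna(0).tolist())  # [1.0, 0.0, 3.0]
print((idx > 2).tolist())      # [False, False, True]
```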
perform {name} with this index type: " - "{typ}".format(name=name, typ=type(self))) - - invalid_op.__name__ = name - return invalid_op - - cls.__add__ = cls.__radd__ = __iadd__ = _make_invalid_op('__add__') # noqa - cls.__sub__ = __isub__ = _make_invalid_op('__sub__') # noqa - - @classmethod - def _add_numeric_methods_disabled(cls): - """ add in numeric methods to disable """ - - def _make_invalid_op(name): - def invalid_op(self, other=None): - raise TypeError("cannot perform {name} with this index type: " - "{typ}".format(name=name, typ=type(self))) - - invalid_op.__name__ = name - return invalid_op - - cls.__pow__ = cls.__rpow__ = _make_invalid_op('__pow__') - cls.__mul__ = cls.__rmul__ = _make_invalid_op('__mul__') - cls.__floordiv__ = cls.__rfloordiv__ = _make_invalid_op('__floordiv__') - cls.__truediv__ = cls.__rtruediv__ = _make_invalid_op('__truediv__') - if not compat.PY3: - cls.__div__ = cls.__rdiv__ = _make_invalid_op('__div__') - cls.__neg__ = _make_invalid_op('__neg__') - cls.__pos__ = _make_invalid_op('__pos__') - cls.__abs__ = _make_invalid_op('__abs__') - cls.__inv__ = _make_invalid_op('__inv__') - - def _maybe_update_attributes(self, attrs): - """ Update Index attributes (e.g. freq) depending on op """ - return attrs - - def _validate_for_numeric_unaryop(self, op, opstr): - """ validate if we can perform a numeric unary operation """ - - if not self._is_numeric_dtype: - raise TypeError("cannot evaluate a numeric op " - "{opstr} for type: {typ}".format( - opstr=opstr, - typ=type(self)) - ) - - def _validate_for_numeric_binop(self, other, op, opstr): - """ - return valid other, evaluate or raise TypeError - if we are not of the appropriate type - - internal method called by ops - """ - from pandas.tseries.offsets import DateOffset - - # if we are an inheritor of numeric, - # but not actually numeric (e.g. 
DatetimeIndex/PeriodIndex) - if not self._is_numeric_dtype: - raise TypeError("cannot evaluate a numeric op {opstr} " - "for type: {typ}".format( - opstr=opstr, - typ=type(self)) - ) - - if isinstance(other, Index): - if not other._is_numeric_dtype: - raise TypeError("cannot evaluate a numeric op " - "{opstr} with type: {typ}".format( - opstr=opstr, - typ=type(other)) - ) - elif isinstance(other, np.ndarray) and not other.ndim: - other = other.item() - - if isinstance(other, (Index, ABCSeries, np.ndarray)): - if len(self) != len(other): - raise ValueError("cannot evaluate a numeric op with " - "unequal lengths") - other = _values_from_object(other) - if other.dtype.kind not in ['f', 'i']: - raise TypeError("cannot evaluate a numeric op " - "with a non-numeric dtype") - elif isinstance(other, (DateOffset, np.timedelta64, - Timedelta, datetime.timedelta)): - # higher up to handle - pass - elif isinstance(other, (Timestamp, np.datetime64)): - # higher up to handle - pass - else: - if not (is_float(other) or is_integer(other)): - raise TypeError("can only perform ops with scalar values") - - return other - - @classmethod - def _add_numeric_methods_binary(cls): - """ add in numeric methods """ - - def _make_evaluate_binop(op, opstr, reversed=False): - def _evaluate_numeric_binop(self, other): - - from pandas.tseries.offsets import DateOffset - other = self._validate_for_numeric_binop(other, op, opstr) - - # handle time-based others - if isinstance(other, (DateOffset, np.timedelta64, - Timedelta, datetime.timedelta)): - return self._evaluate_with_timedelta_like(other, op, opstr) - elif isinstance(other, (Timestamp, np.datetime64)): - return self._evaluate_with_datetime_like(other, op, opstr) - - # if we are a reversed non-commutative op - values = self.values - if reversed: - values, other = other, values - - attrs = self._get_attributes_dict() - attrs = self._maybe_update_attributes(attrs) - return Index(op(values, other), **attrs) - - return _evaluate_numeric_binop
- - cls.__add__ = cls.__radd__ = _make_evaluate_binop( - operator.add, '__add__') - cls.__sub__ = _make_evaluate_binop( - operator.sub, '__sub__') - cls.__rsub__ = _make_evaluate_binop( - operator.sub, '__sub__', reversed=True) - cls.__mul__ = cls.__rmul__ = _make_evaluate_binop( - operator.mul, '__mul__') - cls.__pow__ = cls.__rpow__ = _make_evaluate_binop( - operator.pow, '__pow__') - cls.__mod__ = _make_evaluate_binop( - operator.mod, '__mod__') - cls.__floordiv__ = _make_evaluate_binop( - operator.floordiv, '__floordiv__') - cls.__rfloordiv__ = _make_evaluate_binop( - operator.floordiv, '__floordiv__', reversed=True) - cls.__truediv__ = _make_evaluate_binop( - operator.truediv, '__truediv__') - cls.__rtruediv__ = _make_evaluate_binop( - operator.truediv, '__truediv__', reversed=True) - if not compat.PY3: - cls.__div__ = _make_evaluate_binop( - operator.div, '__div__') - cls.__rdiv__ = _make_evaluate_binop( - operator.div, '__div__', reversed=True) - - @classmethod - def _add_numeric_methods_unary(cls): - """ add in numeric unary methods """ - - def _make_evaluate_unary(op, opstr): - - def _evaluate_numeric_unary(self): - - self._validate_for_numeric_unaryop(op, opstr) - attrs = self._get_attributes_dict() - attrs = self._maybe_update_attributes(attrs) - return Index(op(self.values), **attrs) - - return _evaluate_numeric_unary - - cls.__neg__ = _make_evaluate_unary(lambda x: -x, '__neg__') - cls.__pos__ = _make_evaluate_unary(lambda x: x, '__pos__') - cls.__abs__ = _make_evaluate_unary(np.abs, '__abs__') - cls.__inv__ = _make_evaluate_unary(lambda x: -x, '__inv__') - - @classmethod - def _add_numeric_methods(cls): - cls._add_numeric_methods_unary() - cls._add_numeric_methods_binary() - - @classmethod - def _add_logical_methods(cls): - """ add in logical methods """ - - _doc = """ - - %(desc)s - - Parameters - ---------- - All arguments to numpy.%(outname)s are accepted. 
- - Returns - ------- - %(outname)s : bool or array_like (if axis is specified) - A single element array_like may be converted to bool.""" - - def _make_logical_function(name, desc, f): - @Substitution(outname=name, desc=desc) - @Appender(_doc) - def logical_func(self, *args, **kwargs): - result = f(self.values) - if (isinstance(result, (np.ndarray, ABCSeries, Index)) and - result.ndim == 0): - # return NumPy type - return result.dtype.type(result.item()) - else: # pragma: no cover - return result - - logical_func.__name__ = name - return logical_func - - cls.all = _make_logical_function('all', 'Return whether all elements ' - 'are True', - np.all) - cls.any = _make_logical_function('any', - 'Return whether any element is True', - np.any) - - @classmethod - def _add_logical_methods_disabled(cls): - """ add in logical methods to disable """ - - def _make_invalid_op(name): - def invalid_op(self, other=None): - raise TypeError("cannot perform {name} with this index type: " - "{typ}".format(name=name, typ=type(self))) - - invalid_op.__name__ = name - return invalid_op - - cls.all = _make_invalid_op('all') - cls.any = _make_invalid_op('any') - - -Index._add_numeric_methods_disabled() -Index._add_logical_methods() -Index._add_comparison_methods() - - -class CategoricalIndex(Index, PandasDelegate): - """ - - Immutable Index implementing an ordered, sliceable set. CategoricalIndex - represents a sparsely populated Index with an underlying Categorical. - - .. 
versionadded:: 0.16.1 - - Parameters - ---------- - data : array-like or Categorical, (1-dimensional) - categories : optional, array-like - categories for the CategoricalIndex - ordered : boolean, - designating if the categories are ordered - copy : bool - Make a copy of input ndarray - name : object - Name to be stored in the index - - """ - - _typ = 'categoricalindex' - _engine_type = _index.Int64Engine - _attributes = ['name'] - - def __new__(cls, data=None, categories=None, ordered=None, dtype=None, - copy=False, name=None, fastpath=False, **kwargs): - - if fastpath: - return cls._simple_new(data, name=name) - - if isinstance(data, ABCCategorical): - data = cls._create_categorical(cls, data, categories, ordered) - elif isinstance(data, CategoricalIndex): - data = data._data - data = cls._create_categorical(cls, data, categories, ordered) - else: - - # don't allow scalars - # if data is None, then categories must be provided - if lib.isscalar(data): - if data is not None or categories is None: - cls._scalar_data_error(data) - data = [] - data = cls._create_categorical(cls, data, categories, ordered) - - if copy: - data = data.copy() - - return cls._simple_new(data, name=name) - - def _create_from_codes(self, codes, categories=None, ordered=None, - name=None): - """ - *this is an internal non-public method* - - create the correct categorical from codes - - Parameters - ---------- - codes : new codes - categories : optional categories, defaults to existing - ordered : optional ordered attribute, defaults to existing - name : optional name attribute, defaults to existing - - Returns - ------- - CategoricalIndex - """ - - from pandas.core.categorical import Categorical - if categories is None: - categories = self.categories - if ordered is None: - ordered = self.ordered - if name is None: - name = self.name - cat = Categorical.from_codes(codes, categories=categories, - ordered=ordered) - return CategoricalIndex(cat, name=name) - - @staticmethod - def
_create_categorical(self, data, categories=None, ordered=None): - """ - *this is an internal non-public method* - - create the correct categorical from data and the properties - - Parameters - ---------- - data : data for new Categorical - categories : optional categories, defaults to existing - ordered : optional ordered attribute, defaults to existing - - Returns - ------- - Categorical - """ - - if not isinstance(data, ABCCategorical): - from pandas.core.categorical import Categorical - data = Categorical(data, categories=categories, ordered=ordered) - else: - if categories is not None: - data = data.set_categories(categories) - if ordered is not None: - data = data.set_ordered(ordered) - return data - - @classmethod - def _simple_new(cls, values, name=None, categories=None, ordered=None, - **kwargs): - result = object.__new__(cls) - - values = cls._create_categorical(cls, values, categories, ordered) - result._data = values - result.name = name - for k, v in compat.iteritems(kwargs): - setattr(result, k, v) - - result._reset_identity() - return result - - def _is_dtype_compat(self, other): - """ - *this is an internal non-public method* - - provide a comparison between the dtype of self and other (coercing if - needed) - - Raises - ------ - TypeError if the dtypes are not compatible - """ - - if is_categorical_dtype(other): - if isinstance(other, CategoricalIndex): - other = other._values - if not other.is_dtype_equal(self): - raise TypeError("categories must match existing categories " - "when appending") - else: - values = other - if not is_list_like(values): - values = [values] - other = CategoricalIndex(self._create_categorical( - self, other, categories=self.categories, ordered=self.ordered)) - if not other.isin(values).all(): - raise TypeError("cannot append a non-category item to a " - "CategoricalIndex") - - return other - - def equals(self, other): - """ - Determines if two CategorialIndex objects contain the same elements. 
- """ - if self.is_(other): - return True - - try: - other = self._is_dtype_compat(other) - return array_equivalent(self._data, other) - except (TypeError, ValueError): - pass - - return False - - @property - def _formatter_func(self): - return self.categories._formatter_func - - def _format_attrs(self): - """ - Return a list of tuples of the (attr,formatted_value) - """ - max_categories = (10 if get_option("display.max_categories") == 0 else - get_option("display.max_categories")) - attrs = [('categories', default_pprint(self.categories, - max_seq_items=max_categories)), - ('ordered', self.ordered)] - if self.name is not None: - attrs.append(('name', default_pprint(self.name))) - attrs.append(('dtype', "'%s'" % self.dtype)) - max_seq_items = get_option('display.max_seq_items') or len(self) - if len(self) > max_seq_items: - attrs.append(('length', len(self))) - return attrs - - @property - def inferred_type(self): - return 'categorical' - - @property - def values(self): - """ return the underlying data, which is a Categorical """ - return self._data - - def get_values(self): - """ return the underlying data as an ndarray """ - return self._data.get_values() - - @property - def codes(self): - return self._data.codes - - @property - def categories(self): - return self._data.categories - - @property - def ordered(self): - return self._data.ordered - - def __contains__(self, key): - hash(key) - return key in self.values - - def __array__(self, dtype=None): - """ the array interface, return my values """ - return np.array(self._data, dtype=dtype) - - @cache_readonly - def _isnan(self): - """ return if each value is nan""" - return self._data.codes == -1 - - @Appender(_index_shared_docs['fillna']) - def fillna(self, value, downcast=None): - self._assert_can_do_op(value) - return CategoricalIndex(self._data.fillna(value), name=self.name) - - def argsort(self, *args, **kwargs): - return self.values.argsort(*args, **kwargs) - - @cache_readonly - def _engine(self): - - # we 
are going to look things up with the codes themselves - return self._engine_type(lambda: self.codes.astype('i8'), len(self)) - - @cache_readonly - def is_unique(self): - return not self.duplicated().any() - - @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', - False: 'first'}) - @Appender(base._shared_docs['duplicated'] % _index_doc_kwargs) - def duplicated(self, keep='first'): - from pandas.hashtable import duplicated_int64 - return duplicated_int64(self.codes.astype('i8'), keep) - - def _to_safe_for_reshape(self): - """ convert to object if we are a categorical """ - return self.astype('object') - - def get_loc(self, key, method=None): - """ - Get integer location for requested label - - Parameters - ---------- - key : label - method : {None} - * default: exact matches only. - - Returns - ------- - loc : int if unique index, possibly slice or mask if not - """ - codes = self.categories.get_loc(key) - if (codes == -1): - raise KeyError(key) - indexer, _ = self._engine.get_indexer_non_unique(np.array([codes])) - if (indexer == -1).any(): - raise KeyError(key) - - return indexer - - def _can_reindex(self, indexer): - """ always allow reindexing """ - pass - - def reindex(self, target, method=None, level=None, limit=None, - tolerance=None): - """ - Create index with target's values (move/add/delete values as necessary) - - Returns - ------- - new_index : pd.Index - Resulting index - indexer : np.ndarray or None - Indices of output values in original index - - """ - - if method is not None: - raise NotImplementedError("argument method is not implemented for " - "CategoricalIndex.reindex") - if level is not None: - raise NotImplementedError("argument level is not implemented for " - "CategoricalIndex.reindex") - if limit is not None: - raise NotImplementedError("argument limit is not implemented for " - "CategoricalIndex.reindex") - - target = _ensure_index(target) - - if not is_categorical_dtype(target) and not target.is_unique: - raise ValueError("cannot 
reindex with a non-unique indexer") - - indexer, missing = self.get_indexer_non_unique(np.array(target)) - new_target = self.take(indexer) - - # filling in missing if needed - if len(missing): - cats = self.categories.get_indexer(target) - - if (cats == -1).any(): - # coerce to a regular index here! - result = Index(np.array(self), name=self.name) - new_target, indexer, _ = result._reindex_non_unique( - np.array(target)) - - else: - - codes = new_target.codes.copy() - codes[indexer == -1] = cats[missing] - new_target = self._create_from_codes(codes) - - # we always want to return an Index type here - # to be consistent with .reindex for other index types (e.g. they don't - # coerce based on the actual values, only on the dtype) - # unless we had an inital Categorical to begin with - # in which case we are going to conform to the passed Categorical - new_target = np.asarray(new_target) - if is_categorical_dtype(target): - new_target = target._shallow_copy(new_target, name=self.name) - else: - new_target = Index(new_target, name=self.name) - - return new_target, indexer - - def _reindex_non_unique(self, target): - """ reindex from a non-unique; which CategoricalIndex's are almost - always - """ - new_target, indexer = self.reindex(target) - new_indexer = None - - check = indexer == -1 - if check.any(): - new_indexer = np.arange(len(self.take(indexer))) - new_indexer[check] = -1 - - cats = self.categories.get_indexer(target) - if not (cats == -1).any(): - # .reindex returns normal Index. Revert to CategoricalIndex if - # all targets are included in my categories - new_target = self._shallow_copy(new_target) - - return new_target, indexer, new_indexer - - def get_indexer(self, target, method=None, limit=None, tolerance=None): - """ - Compute indexer and mask for new index given the current index. The - indexer should be then used as an input to ndarray.take to align the - current data to the new index. 
The mask determines whether labels are - found or not in the current index - - Parameters - ---------- - target : MultiIndex or Index (of tuples) - method : {'pad', 'ffill', 'backfill', 'bfill'} - pad / ffill: propagate LAST valid observation forward to next valid - backfill / bfill: use NEXT valid observation to fill gap - - Notes - ----- - This is a low-level method and probably should be used at your own risk - - Examples - -------- - >>> indexer, mask = index.get_indexer(new_index) - >>> new_values = cur_values.take(indexer) - >>> new_values[-mask] = np.nan - - Returns - ------- - (indexer, mask) : (ndarray, ndarray) - """ - method = _clean_reindex_fill_method(method) - target = _ensure_index(target) - - if isinstance(target, CategoricalIndex): - target = target.categories - - if method == 'pad' or method == 'backfill': - raise NotImplementedError("method='pad' and method='backfill' not " - "implemented yet for CategoricalIndex") - elif method == 'nearest': - raise NotImplementedError("method='nearest' not implemented yet " - 'for CategoricalIndex') - else: - - codes = self.categories.get_indexer(target) - indexer, _ = self._engine.get_indexer_non_unique(codes) - - return com._ensure_platform_int(indexer) - - def get_indexer_non_unique(self, target): - """ this is the same for a CategoricalIndex for get_indexer; the API - returns the missing values as well - """ - target = _ensure_index(target) - - if isinstance(target, CategoricalIndex): - target = target.categories - - codes = self.categories.get_indexer(target) - return self._engine.get_indexer_non_unique(codes) - - def _convert_list_indexer(self, keyarr, kind=None): - """ - we are passed a list indexer. 
- Return our indexer or raise if all of the values are not included in - the categories - """ - codes = self.categories.get_indexer(keyarr) - if (codes == -1).any(): - raise KeyError("a list-indexer must only include values that are " - "in the categories") - - return None - - def take(self, indexer, axis=0, allow_fill=True, fill_value=None): - """ - For internal compatibility with numpy arrays. - - # filling must always be None/nan here - # but is passed thru internally - assert isnull(fill_value) - - See also - -------- - numpy.ndarray.take - """ - - indexer = com._ensure_platform_int(indexer) - taken = self.codes.take(indexer) - return self._create_from_codes(taken) - - def delete(self, loc): - """ - Make new Index with passed location(-s) deleted - - Returns - ------- - new_index : Index - """ - return self._create_from_codes(np.delete(self.codes, loc)) - - def insert(self, loc, item): - """ - Make new Index inserting new item at location. Follows - Python list.append semantics for negative values - - Parameters - ---------- - loc : int - item : object - - Returns - ------- - new_index : Index - - Raises - ------ - ValueError if the item is not in the categories - - """ - code = self.categories.get_indexer([item]) - if (code == -1): - raise TypeError("cannot insert an item into a CategoricalIndex " - "that is not already an existing category") - - codes = self.codes - codes = np.concatenate((codes[:loc], code, codes[loc:])) - return self._create_from_codes(codes) - - def append(self, other): - """ - Append a collection of CategoricalIndex options together - - Parameters - ---------- - other : Index or list/tuple of indices - - Returns - ------- - appended : Index - - Raises - ------ - ValueError if other is not in the categories - """ - to_concat, name = self._ensure_compat_append(other) - to_concat = [self._is_dtype_compat(c) for c in to_concat] - codes = np.concatenate([c.codes for c in to_concat]) - return self._create_from_codes(codes, name=name) - - 
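The `take`, `insert`, and `append` methods removed above all operate on the integer codes rather than on the category values themselves: an element is stored as an index into the categories array, so inserting an existing category is just an integer splice, and an unknown value (code `-1`) is rejected. A minimal, self-contained sketch of that codes-based splice — illustrative only, not part of this diff; `insert_code` and the plain-list representation are assumptions standing in for the real `Categorical` machinery:

```python
# Hedged sketch of the codes-based insert shown in the diff above.
# A categorical index stores integer codes into a categories array;
# values not present in the categories have no code and are rejected.
categories = ["a", "b", "c"]

def insert_code(codes, loc, item):
    try:
        code = categories.index(item)
    except ValueError:
        # mirrors the TypeError raised by CategoricalIndex.insert above
        raise TypeError("cannot insert an item that is not an existing category")
    return codes[:loc] + [code] + codes[loc:]

codes = [0, 1, 0]                    # represents ['a', 'b', 'a']
print(insert_code(codes, 1, "c"))    # [0, 2, 1, 0]
```

Because only the small codes array is spliced, the categories themselves are never copied — the same reason `append` above can simply concatenate the `codes` of compatible indexes.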
@classmethod - def _add_comparison_methods(cls): - """ add in comparison methods """ - - def _make_compare(op): - def _evaluate_compare(self, other): - - # if we have a Categorical type, then must have the same - # categories - if isinstance(other, CategoricalIndex): - other = other._values - elif isinstance(other, Index): - other = self._create_categorical( - self, other._values, categories=self.categories, - ordered=self.ordered) - - if isinstance(other, (ABCCategorical, np.ndarray, ABCSeries)): - if len(self.values) != len(other): - raise ValueError("Lengths must match to compare") - - if isinstance(other, ABCCategorical): - if not self.values.is_dtype_equal(other): - raise TypeError("categorical index comparisions must " - "have the same categories and ordered " - "attributes") - - return getattr(self.values, op)(other) - - return _evaluate_compare - - cls.__eq__ = _make_compare('__eq__') - cls.__ne__ = _make_compare('__ne__') - cls.__lt__ = _make_compare('__lt__') - cls.__gt__ = _make_compare('__gt__') - cls.__le__ = _make_compare('__le__') - cls.__ge__ = _make_compare('__ge__') - - def _delegate_method(self, name, *args, **kwargs): - """ method delegation to the ._values """ - method = getattr(self._values, name) - if 'inplace' in kwargs: - raise ValueError("cannot use inplace with CategoricalIndex") - res = method(*args, **kwargs) - if lib.isscalar(res): - return res - return CategoricalIndex(res, name=self.name) - - @classmethod - def _add_accessors(cls): - """ add in Categorical accessor methods """ - - from pandas.core.categorical import Categorical - CategoricalIndex._add_delegate_accessors( - delegate=Categorical, accessors=["rename_categories", - "reorder_categories", - "add_categories", - "remove_categories", - "remove_unused_categories", - "set_categories", - "as_ordered", "as_unordered", - "min", "max"], - typ='method', overwrite=True) - - -CategoricalIndex._add_numericlike_set_methods_disabled() -CategoricalIndex._add_numeric_methods_disabled() 
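The `_delegate_method` helper removed just above forwards a call to the underlying `._values`, returns scalar results as-is, and re-wraps array-like results in a new index. A toy sketch of that pattern — the `Delegating` class is invented for this example and merely stands in for the real `CategoricalIndex`/`Categorical` pair:

```python
# Minimal sketch of the delegation pattern: forward the method call to the
# wrapped values, pass scalars through, and re-wrap array-like results so
# the caller keeps getting the index type back.
class Delegating:
    def __init__(self, values):
        self._values = values

    def _delegate(self, name, *args, **kwargs):
        res = getattr(self._values, name)(*args, **kwargs)
        if isinstance(res, list):
            return Delegating(res)   # array-like result: re-wrap
        return res                   # scalar result (e.g. min/max): pass through
```

This is why, in the real code, accessors like `rename_categories` return a new `CategoricalIndex` while `min`/`max` return plain scalars.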
-CategoricalIndex._add_logical_methods_disabled() -CategoricalIndex._add_comparison_methods() -CategoricalIndex._add_accessors() - - -class NumericIndex(Index): - """ - Provide numeric type operations - - This is an abstract class - - """ - _is_numeric_dtype = True - - def _maybe_cast_slice_bound(self, label, side, kind): - """ - This function should be overloaded in subclasses that allow non-trivial - casting on label-slice bounds, e.g. datetime-like indices allowing - strings containing formatted datetimes. - - Parameters - ---------- - label : object - side : {'left', 'right'} - kind : string / None - - Returns - ------- - label : object - - Notes - ----- - Value of `side` parameter should be validated in caller. - - """ - - # we are a numeric index, so we accept - # integer/floats directly - if not (is_integer(label) or is_float(label)): - self._invalid_indexer('slice', label) - - return label - - def _convert_tolerance(self, tolerance): - try: - return float(tolerance) - except ValueError: - raise ValueError('tolerance argument for %s must be numeric: %r' % - (type(self).__name__, tolerance)) - - -class Int64Index(NumericIndex): - """ - Immutable ndarray implementing an ordered, sliceable set. The basic object - storing axis labels for all pandas objects. Int64Index is a special case - of `Index` with purely integer labels. This is the default index type used - by the DataFrame and Series ctors when no explicit index is provided by the - user. 
- - Parameters - ---------- - data : array-like (1-dimensional) - dtype : NumPy dtype (default: int64) - copy : bool - Make a copy of input ndarray - name : object - Name to be stored in the index - - Notes - ----- - An Index instance can **only** contain hashable objects - """ - - _typ = 'int64index' - _groupby = _algos.groupby_int64 - _arrmap = _algos.arrmap_int64 - _left_indexer_unique = _algos.left_join_indexer_unique_int64 - _left_indexer = _algos.left_join_indexer_int64 - _inner_indexer = _algos.inner_join_indexer_int64 - _outer_indexer = _algos.outer_join_indexer_int64 - - _can_hold_na = False - - _engine_type = _index.Int64Engine - - def __new__(cls, data=None, dtype=None, copy=False, name=None, - fastpath=False, **kwargs): - - if fastpath: - return cls._simple_new(data, name=name) - - # isscalar, generators handled in coerce_to_ndarray - data = cls._coerce_to_ndarray(data) - - if issubclass(data.dtype.type, compat.string_types): - cls._string_data_error(data) - - elif issubclass(data.dtype.type, np.integer): - # don't force the upcast as we may be dealing - # with a platform int - if (dtype is None or - not issubclass(np.dtype(dtype).type, np.integer)): - dtype = np.int64 - - subarr = np.array(data, dtype=dtype, copy=copy) - else: - subarr = np.array(data, dtype=np.int64, copy=copy) - if len(data) > 0: - if (subarr != data).any(): - raise TypeError('Unsafe NumPy casting to integer, you must' - ' explicitly cast') - - return cls._simple_new(subarr, name=name) - - @property - def inferred_type(self): - return 'integer' - - @property - def asi8(self): - # do not cache or you'll create a memory leak - return self.values.view('i8') - - @property - def is_all_dates(self): - """ - Checks that all the labels are datetime objects - """ - return False - - def equals(self, other): - """ - Determines if two Index objects contain the same elements. 
- """ - if self.is_(other): - return True - - # if not isinstance(other, Int64Index): - # return False - - try: - return array_equivalent(_values_from_object(self), - _values_from_object(other)) - except TypeError: - # e.g. fails in numpy 1.6 with DatetimeIndex #1681 - return False - - def _wrap_joined_index(self, joined, other): - name = self.name if self.name == other.name else None - return Int64Index(joined, name=name) - - -Int64Index._add_numeric_methods() -Int64Index._add_logical_methods() - - -class RangeIndex(Int64Index): - - """ - Immutable Index implementing a monotonic range. RangeIndex is a - memory-saving special case of Int64Index limited to representing - monotonic ranges. - - Parameters - ---------- - start : int (default: 0) - stop : int (default: 0) - step : int (default: 1) - name : object, optional - Name to be stored in the index - copy : bool, default False - Make a copy of input if its a RangeIndex - - """ - - _typ = 'rangeindex' - _engine_type = _index.Int64Engine - - def __new__(cls, start=None, stop=None, step=None, name=None, dtype=None, - fastpath=False, copy=False, **kwargs): - - if fastpath: - return cls._simple_new(start, stop, step, name=name) - - cls._validate_dtype(dtype) - - # RangeIndex - if isinstance(start, RangeIndex): - if not copy: - return start - if name is None: - name = getattr(start, 'name', None) - start, stop, step = start._start, start._stop, start._step - - # validate the arguments - def _ensure_int(value, field): - try: - new_value = int(value) - except: - new_value = value - - if not is_integer(new_value) or new_value != value: - raise TypeError("RangeIndex(...) 
must be called with integers," - " {value} was passed for {field}".format( - value=type(value).__name__, - field=field) - ) - - return new_value - - if start is None: - start = 0 - else: - start = _ensure_int(start, 'start') - if stop is None: - stop = start - start = 0 - else: - stop = _ensure_int(stop, 'stop') - if step is None: - step = 1 - elif step == 0: - raise ValueError("Step must not be zero") - else: - step = _ensure_int(step, 'step') - - return cls._simple_new(start, stop, step, name) - - @classmethod - def from_range(cls, data, name=None, dtype=None, **kwargs): - """ create RangeIndex from a range (py3), or xrange (py2) object """ - if not isinstance(data, range): - raise TypeError( - '{0}(...) must be called with object coercible to a ' - 'range, {1} was passed'.format(cls.__name__, repr(data))) - - if compat.PY3: - step = data.step - stop = data.stop - start = data.start - else: - # seems we only have indexing ops to infer - # rather than direct accessors - if len(data) > 1: - step = data[1] - data[0] - stop = data[-1] + step - start = data[0] - elif len(data): - start = data[0] - stop = data[0] + 1 - step = 1 - else: - start = stop = 0 - step = 1 - return RangeIndex(start, stop, step, dtype=dtype, name=name, **kwargs) - - @classmethod - def _simple_new(cls, start, stop=None, step=None, name=None, - dtype=None, **kwargs): - result = object.__new__(cls) - - # handle passed None, non-integers - if start is None or not is_integer(start): - try: - return RangeIndex(start, stop, step, name=name, **kwargs) - except TypeError: - return Index(start, stop, step, name=name, **kwargs) - - result._start = start - result._stop = stop or 0 - result._step = step or 1 - result.name = name - for k, v in compat.iteritems(kwargs): - setattr(result, k, v) - - result._reset_identity() - return result - - @staticmethod - def _validate_dtype(dtype): - """ require dtype to be None or int64 """ - if not (dtype is None or is_int64_dtype(dtype)): - raise TypeError('Invalid to 
pass a non-int64 dtype to RangeIndex') - - @cache_readonly - def _constructor(self): - """ return the class to use for construction """ - return Int64Index - - @cache_readonly - def _data(self): - return np.arange(self._start, self._stop, self._step, dtype=np.int64) - - @cache_readonly - def _int64index(self): - return Int64Index(self._data, name=self.name, fastpath=True) - - def _get_data_as_items(self): - """ return a list of tuples of start, stop, step """ - return [('start', self._start), - ('stop', self._stop), - ('step', self._step)] - - def __reduce__(self): - d = self._get_attributes_dict() - d.update(dict(self._get_data_as_items())) - return _new_Index, (self.__class__, d), None - - def _format_attrs(self): - """ - Return a list of tuples of the (attr, formatted_value) - """ - attrs = self._get_data_as_items() - if self.name is not None: - attrs.append(('name', default_pprint(self.name))) - return attrs - - def _format_data(self): - # we are formatting thru the attributes - return None - - @cache_readonly - def nbytes(self): - """ return the number of bytes in the underlying data """ - return sum([getsizeof(getattr(self, v)) for v in - ['_start', '_stop', '_step']]) - - def memory_usage(self, deep=False): - """ - Memory usage of my values - - Parameters - ---------- - deep : bool - Introspect the data deeply, interrogate - `object` dtypes for system-level memory consumption - - Returns - ------- - bytes used - - Notes - ----- - Memory usage does not include memory consumed by elements that - are not components of the array if deep=False - - See Also - -------- - numpy.ndarray.nbytes - """ - return self.nbytes - - @property - def dtype(self): - return np.dtype(np.int64) - - @property - def is_unique(self): - """ return if the index has unique values """ - return True - - @property - def has_duplicates(self): - return False - - def tolist(self): - return lrange(self._start, self._stop, self._step) - - def _shallow_copy(self, values=None, **kwargs): - """ 
create a new Index, don't copy the data, use the same object attributes - with passed in attributes taking precedence """ - if values is None: - return RangeIndex(name=self.name, fastpath=True, - **dict(self._get_data_as_items())) - else: - kwargs.setdefault('name', self.name) - return self._int64index._shallow_copy(values, **kwargs) - - @Appender(_index_shared_docs['copy']) - def copy(self, name=None, deep=False, dtype=None, **kwargs): - self._validate_dtype(dtype) - if name is None: - name = self.name - return RangeIndex(name=name, fastpath=True, - **dict(self._get_data_as_items())) - - def argsort(self, *args, **kwargs): - """ - return an ndarray indexer of the underlying data - - See also - -------- - numpy.ndarray.argsort - """ - if self._step > 0: - return np.arange(len(self)) - else: - return np.arange(len(self) - 1, -1, -1) - - def equals(self, other): - """ - Determines if two Index objects contain the same elements. - """ - if isinstance(other, RangeIndex): - ls = len(self) - lo = len(other) - return (ls == lo == 0 or - ls == lo == 1 and - self._start == other._start or - ls == lo and - self._start == other._start and - self._step == other._step) - - return super(RangeIndex, self).equals(other) - - def intersection(self, other): - """ - Form the intersection of two Index objects. 
Sortedness of the result is - not guaranteed - - Parameters - ---------- - other : Index or array-like - - Returns - ------- - intersection : Index - """ - if not isinstance(other, RangeIndex): - return super(RangeIndex, self).intersection(other) - - # check whether intervals intersect - # deals with in- and decreasing ranges - int_low = max(min(self._start, self._stop + 1), - min(other._start, other._stop + 1)) - int_high = min(max(self._stop, self._start + 1), - max(other._stop, other._start + 1)) - if int_high <= int_low: - return RangeIndex() - - # Method hint: linear Diophantine equation - # solve intersection problem - # performance hint: for identical step sizes, could use - # cheaper alternative - gcd, s, t = self._extended_gcd(self._step, other._step) - - # check whether element sets intersect - if (self._start - other._start) % gcd: - return RangeIndex() - - # calculate parameters for the RangeIndex describing the - # intersection disregarding the lower bounds - tmp_start = self._start + (other._start - self._start) * \ - self._step // gcd * s - new_step = self._step * other._step // gcd - new_index = RangeIndex(tmp_start, int_high, new_step, fastpath=True) - - # adjust index to limiting interval - new_index._start = new_index._min_fitting_element(int_low) - return new_index - - def _min_fitting_element(self, lower_limit): - """Returns the smallest element greater than or equal to the limit""" - no_steps = -(-(lower_limit - self._start) // abs(self._step)) - return self._start + abs(self._step) * no_steps - - def _max_fitting_element(self, upper_limit): - """Returns the largest element smaller than or equal to the limit""" - no_steps = (upper_limit - self._start) // abs(self._step) - return self._start + abs(self._step) * no_steps - - def _extended_gcd(self, a, b): - """ - Extended Euclidean algorithms to solve Bezout's identity: - a*x + b*y = gcd(x, y) - Finds one particular solution for x, y: s, t - Returns: gcd, s, t - """ - s, old_s = 0, 1 - t, old_t 
= 1, 0 - r, old_r = b, a - while r: - quotient = old_r // r - old_r, r = r, old_r - quotient * r - old_s, s = s, old_s - quotient * s - old_t, t = t, old_t - quotient * t - return old_r, old_s, old_t - - def union(self, other): - """ - Form the union of two Index objects and sorts if possible - - Parameters - ---------- - other : Index or array-like - - Returns - ------- - union : Index - """ - self._assert_can_do_setop(other) - if len(other) == 0 or self.equals(other): - return self - if len(self) == 0: - return other - if isinstance(other, RangeIndex): - start_s, step_s = self._start, self._step - end_s = self._start + self._step * (len(self) - 1) - start_o, step_o = other._start, other._step - end_o = other._start + other._step * (len(other) - 1) - if self._step < 0: - start_s, step_s, end_s = end_s, -step_s, start_s - if other._step < 0: - start_o, step_o, end_o = end_o, -step_o, start_o - if len(self) == 1 and len(other) == 1: - step_s = step_o = abs(self._start - other._start) - elif len(self) == 1: - step_s = step_o - elif len(other) == 1: - step_o = step_s - start_r = min(start_s, start_o) - end_r = max(end_s, end_o) - if step_o == step_s: - if ((start_s - start_o) % step_s == 0 and - (start_s - end_o) <= step_s and - (start_o - end_s) <= step_s): - return RangeIndex(start_r, end_r + step_s, step_s) - if ((step_s % 2 == 0) and - (abs(start_s - start_o) <= step_s / 2) and - (abs(end_s - end_o) <= step_s / 2)): - return RangeIndex(start_r, end_r + step_s / 2, step_s / 2) - elif step_o % step_s == 0: - if ((start_o - start_s) % step_s == 0 and - (start_o + step_s >= start_s) and - (end_o - step_s <= end_s)): - return RangeIndex(start_r, end_r + step_s, step_s) - elif step_s % step_o == 0: - if ((start_s - start_o) % step_o == 0 and - (start_s + step_o >= start_o) and - (end_s - step_o <= end_o)): - return RangeIndex(start_r, end_r + step_o, step_o) - - return self._int64index.union(other) - - def join(self, other, how='left', level=None, 
return_indexers=False): - """ - *this is an internal non-public method* - - Compute join_index and indexers to conform data - structures to the new index. - - Parameters - ---------- - other : Index - how : {'left', 'right', 'inner', 'outer'} - level : int or level name, default None - return_indexers : boolean, default False - - Returns - ------- - join_index, (left_indexer, right_indexer) - """ - if how == 'outer' and self is not other: - # note: could return RangeIndex in more circumstances - return self._int64index.join(other, how, level, return_indexers) - - return super(RangeIndex, self).join(other, how, level, return_indexers) - - def __len__(self): - """ - return the length of the RangeIndex - """ - return max(0, -(-(self._stop - self._start) // self._step)) - - @property - def size(self): - return len(self) - - def __getitem__(self, key): - """ - Conserve RangeIndex type for scalar and slice keys. - """ - super_getitem = super(RangeIndex, self).__getitem__ - - if np.isscalar(key): - n = int(key) - if n != key: - return super_getitem(key) - if n < 0: - n = len(self) + key - if n < 0 or n > len(self) - 1: - raise IndexError("index {key} is out of bounds for axis 0 " - "with size {size}".format(key=key, - size=len(self))) - return self._start + n * self._step - - if isinstance(key, slice): - - # This is basically PySlice_GetIndicesEx, but delegation to our - # super routines if we don't have integers - - l = len(self) - - # complete missing slice information - step = 1 if key.step is None else key.step - if key.start is None: - start = l - 1 if step < 0 else 0 - else: - start = key.start - - if start < 0: - start += l - if start < 0: - start = -1 if step < 0 else 0 - if start >= l: - start = l - 1 if step < 0 else l - - if key.stop is None: - stop = -1 if step < 0 else l - else: - stop = key.stop - - if stop < 0: - stop += l - if stop < 0: - stop = -1 - if stop > l: - stop = l - - # delegate non-integer slices - if (start != int(start) and - stop != int(stop) 
and - step != int(step)): - return super_getitem(key) - - # convert indexes to values - start = self._start + self._step * start - stop = self._start + self._step * stop - step = self._step * step - - return RangeIndex(start, stop, step, self.name, fastpath=True) - - # fall back to Int64Index - return super_getitem(key) - - def __floordiv__(self, other): - if com.is_integer(other): - if (len(self) == 0 or - self._start % other == 0 and - self._step % other == 0): - start = self._start // other - step = self._step // other - stop = start + len(self) * step - return RangeIndex(start, stop, step, name=self.name, - fastpath=True) - if len(self) == 1: - start = self._start // other - return RangeIndex(start, start + 1, 1, name=self.name, - fastpath=True) - return self._int64index // other - - @classmethod - def _add_numeric_methods_binary(cls): - """ add in numeric methods, specialized to RangeIndex """ - - def _make_evaluate_binop(op, opstr, reversed=False, step=False): - """ - Parameters - ---------- - op : callable that accepts 2 parms - perform the binary op - opstr : string - string name of ops - reversed : boolean, default False - if this is a reversed op, e.g. 
radd - step : callable, optional, default to False - op to apply to the step parm if not None - if False, use the existing step - """ - - def _evaluate_numeric_binop(self, other): - - other = self._validate_for_numeric_binop(other, op, opstr) - attrs = self._get_attributes_dict() - attrs = self._maybe_update_attributes(attrs) - - if reversed: - self, other = other, self - - try: - # alppy if we have an override - if step: - rstep = step(self._step, other) - - # we don't have a representable op - # so return a base index - if not is_integer(rstep) or not rstep: - raise ValueError - - else: - rstep = self._step - - rstart = op(self._start, other) - rstop = op(self._stop, other) - - result = RangeIndex(rstart, - rstop, - rstep, - **attrs) - - # for compat with numpy / Int64Index - # even if we can represent as a RangeIndex, return - # as a Float64Index if we have float-like descriptors - if not all([is_integer(x) for x in - [rstart, rstop, rstep]]): - result = result.astype('float64') - - return result - - except (ValueError, TypeError, AttributeError): - pass - - # convert to Int64Index ops - if isinstance(self, RangeIndex): - self = self.values - if isinstance(other, RangeIndex): - other = other.values - - return Index(op(self, other), **attrs) - - return _evaluate_numeric_binop - - cls.__add__ = cls.__radd__ = _make_evaluate_binop( - operator.add, '__add__') - cls.__sub__ = _make_evaluate_binop(operator.sub, '__sub__') - cls.__rsub__ = _make_evaluate_binop( - operator.sub, '__sub__', reversed=True) - cls.__mul__ = cls.__rmul__ = _make_evaluate_binop( - operator.mul, - '__mul__', - step=operator.mul) - cls.__truediv__ = _make_evaluate_binop( - operator.truediv, - '__truediv__', - step=operator.truediv) - cls.__rtruediv__ = _make_evaluate_binop( - operator.truediv, - '__truediv__', - reversed=True, - step=operator.truediv) - if not compat.PY3: - cls.__div__ = _make_evaluate_binop( - operator.div, - '__div__', - step=operator.div) - cls.__rdiv__ = 
-                _make_evaluate_binop(
-                    operator.div,
-                    '__div__',
-                    reversed=True,
-                    step=operator.div)
-
-RangeIndex._add_numeric_methods()
-RangeIndex._add_logical_methods()
-
-
-class Float64Index(NumericIndex):
-    """
-    Immutable ndarray implementing an ordered, sliceable set. The basic object
-    storing axis labels for all pandas objects. Float64Index is a special case
-    of `Index` with purely floating point labels.
-
-    Parameters
-    ----------
-    data : array-like (1-dimensional)
-    dtype : NumPy dtype (default: object)
-    copy : bool
-        Make a copy of input ndarray
-    name : object
-        Name to be stored in the index
-
-    Notes
-    -----
-    A Float64Index instance can **only** contain hashable objects
-    """
-
-    _typ = 'float64index'
-    _engine_type = _index.Float64Engine
-    _groupby = _algos.groupby_float64
-    _arrmap = _algos.arrmap_float64
-    _left_indexer_unique = _algos.left_join_indexer_unique_float64
-    _left_indexer = _algos.left_join_indexer_float64
-    _inner_indexer = _algos.inner_join_indexer_float64
-    _outer_indexer = _algos.outer_join_indexer_float64
-
-    def __new__(cls, data=None, dtype=None, copy=False, name=None,
-                fastpath=False, **kwargs):
-
-        if fastpath:
-            return cls._simple_new(data, name)
-
-        data = cls._coerce_to_ndarray(data)
-
-        if issubclass(data.dtype.type, compat.string_types):
-            cls._string_data_error(data)
-
-        if dtype is None:
-            dtype = np.float64
-
-        try:
-            subarr = np.array(data, dtype=dtype, copy=copy)
-        except:
-            raise TypeError('Unsafe NumPy casting, you must explicitly cast')
-
-        # coerce to float64 for storage
-        if subarr.dtype != np.float64:
-            subarr = subarr.astype(np.float64)
-
-        return cls._simple_new(subarr, name)
-
-    @property
-    def inferred_type(self):
-        return 'floating'
-
-    def astype(self, dtype):
-        if np.dtype(dtype) not in (np.object, np.float64):
-            raise TypeError('Setting %s dtype to anything other than '
-                            'float64 or object is not supported' %
-                            self.__class__)
-        return Index(self._values, name=self.name, dtype=dtype)
-
-    def
_convert_scalar_indexer(self, key, kind=None): - """ - convert a scalar indexer - - Parameters - ---------- - key : label of the slice bound - kind : optional, type of the indexing operation (loc/ix/iloc/None) - - right now we are converting - floats -> ints if the index supports it - """ - - if kind == 'iloc': - if is_integer(key): - return key - return super(Float64Index, self)._convert_scalar_indexer(key, - kind=kind) - - return key - - def _convert_slice_indexer(self, key, kind=None): - """ - convert a slice indexer, by definition these are labels - unless we are iloc - - Parameters - ---------- - key : label of the slice bound - kind : optional, type of the indexing operation (loc/ix/iloc/None) - """ - - # if we are not a slice, then we are done - if not isinstance(key, slice): - return key - - if kind == 'iloc': - return super(Float64Index, self)._convert_slice_indexer(key, - kind=kind) - - # translate to locations - return self.slice_indexer(key.start, key.stop, key.step) - - def _format_native_types(self, na_rep='', float_format=None, decimal='.', - quoting=None, **kwargs): - from pandas.core.format import FloatArrayFormatter - formatter = FloatArrayFormatter(self.values, na_rep=na_rep, - float_format=float_format, - decimal=decimal, quoting=quoting) - return formatter.get_formatted_data() - - def get_value(self, series, key): - """ we always want to get an index value, never a value """ - if not np.isscalar(key): - raise InvalidIndexError - - from pandas.core.indexing import maybe_droplevels - from pandas.core.series import Series - - k = _values_from_object(key) - loc = self.get_loc(k) - new_values = _values_from_object(series)[loc] - - if np.isscalar(new_values) or new_values is None: - return new_values - - new_index = self[loc] - new_index = maybe_droplevels(new_index, k) - return Series(new_values, index=new_index, name=series.name) - - def equals(self, other): - """ - Determines if two Index objects contain the same elements. 
- """ - if self is other: - return True - - # need to compare nans locations and make sure that they are the same - # since nans don't compare equal this is a bit tricky - try: - if not isinstance(other, Float64Index): - other = self._constructor(other) - if (not is_dtype_equal(self.dtype, other.dtype) or - self.shape != other.shape): - return False - left, right = self._values, other._values - return ((left == right) | (self._isnan & other._isnan)).all() - except TypeError: - # e.g. fails in numpy 1.6 with DatetimeIndex #1681 - return False - - def __contains__(self, other): - if super(Float64Index, self).__contains__(other): - return True - - try: - # if other is a sequence this throws a ValueError - return np.isnan(other) and self.hasnans - except ValueError: - try: - return len(other) <= 1 and _try_get_item(other) in self - except TypeError: - return False - except: - return False - - def get_loc(self, key, method=None, tolerance=None): - try: - if np.all(np.isnan(key)): - nan_idxs = self._nan_idxs - try: - return nan_idxs.item() - except (ValueError, IndexError): - # should only need to catch ValueError here but on numpy - # 1.7 .item() can raise IndexError when NaNs are present - return nan_idxs - except (TypeError, NotImplementedError): - pass - return super(Float64Index, self).get_loc(key, method=method, - tolerance=tolerance) - - @property - def is_all_dates(self): - """ - Checks that all the labels are datetime objects - """ - return False - - @cache_readonly - def is_unique(self): - return super(Float64Index, self).is_unique and self._nan_idxs.size < 2 - - @Appender(Index.isin.__doc__) - def isin(self, values, level=None): - value_set = set(values) - if level is not None: - self._validate_index_level(level) - return lib.ismember_nans(np.array(self), value_set, - isnull(list(value_set)).any()) - - -Float64Index._add_numeric_methods() -Float64Index._add_logical_methods_disabled() - - -class MultiIndex(Index): - """ - A multi-level, or hierarchical, index 
object for pandas objects - - Parameters - ---------- - levels : sequence of arrays - The unique labels for each level - labels : sequence of arrays - Integers for each level designating which label at each location - sortorder : optional int - Level of sortedness (must be lexicographically sorted by that - level) - names : optional sequence of objects - Names for each of the index levels. (name is accepted for compat) - copy : boolean, default False - Copy the meta-data - verify_integrity : boolean, default True - Check that the levels/labels are consistent and valid - """ - - # initialize to zero-length tuples to make everything work - _typ = 'multiindex' - _names = FrozenList() - _levels = FrozenList() - _labels = FrozenList() - _comparables = ['names'] - rename = Index.set_names - - def __new__(cls, levels=None, labels=None, sortorder=None, names=None, - copy=False, verify_integrity=True, _set_identity=True, - name=None, **kwargs): - - # compat with Index - if name is not None: - names = name - if levels is None or labels is None: - raise TypeError("Must pass both levels and labels") - if len(levels) != len(labels): - raise ValueError('Length of levels and labels must be the same.') - if len(levels) == 0: - raise ValueError('Must pass non-zero number of levels/labels') - if len(levels) == 1: - if names: - name = names[0] - else: - name = None - return Index(levels[0], name=name, copy=True).take(labels[0]) - - result = object.__new__(MultiIndex) - - # we've already validated levels and labels, so shortcut here - result._set_levels(levels, copy=copy, validate=False) - result._set_labels(labels, copy=copy, validate=False) - - if names is not None: - # handles name validation - result._set_names(names) - - if sortorder is not None: - result.sortorder = int(sortorder) - else: - result.sortorder = sortorder - - if verify_integrity: - result._verify_integrity() - if _set_identity: - result._reset_identity() - - return result - - def _verify_integrity(self): - 
"""Raises ValueError if length of levels and labels don't match or any - label would exceed level bounds""" - # NOTE: Currently does not check, among other things, that cached - # nlevels matches nor that sortorder matches actually sortorder. - labels, levels = self.labels, self.levels - if len(levels) != len(labels): - raise ValueError("Length of levels and labels must match. NOTE:" - " this index is in an inconsistent state.") - label_length = len(self.labels[0]) - for i, (level, label) in enumerate(zip(levels, labels)): - if len(label) != label_length: - raise ValueError("Unequal label lengths: %s" % - ([len(lab) for lab in labels])) - if len(label) and label.max() >= len(level): - raise ValueError("On level %d, label max (%d) >= length of" - " level (%d). NOTE: this index is in an" - " inconsistent state" % (i, label.max(), - len(level))) - - def _get_levels(self): - return self._levels - - def _set_levels(self, levels, level=None, copy=False, validate=True, - verify_integrity=False): - # This is NOT part of the levels property because it should be - # externally not allowed to set levels. 
User beware if you change - # _levels directly - if validate and len(levels) == 0: - raise ValueError('Must set non-zero number of levels.') - if validate and level is None and len(levels) != self.nlevels: - raise ValueError('Length of levels must match number of levels.') - if validate and level is not None and len(levels) != len(level): - raise ValueError('Length of levels must match length of level.') - - if level is None: - new_levels = FrozenList( - _ensure_index(lev, copy=copy)._shallow_copy() - for lev in levels) - else: - level = [self._get_level_number(l) for l in level] - new_levels = list(self._levels) - for l, v in zip(level, levels): - new_levels[l] = _ensure_index(v, copy=copy)._shallow_copy() - new_levels = FrozenList(new_levels) - - names = self.names - self._levels = new_levels - if any(names): - self._set_names(names) - - self._tuples = None - self._reset_cache() - - if verify_integrity: - self._verify_integrity() - - def set_levels(self, levels, level=None, inplace=False, - verify_integrity=True): - """ - Set new levels on MultiIndex. Defaults to returning - new index. 
- - Parameters - ---------- - levels : sequence or list of sequence - new level(s) to apply - level : int, level name, or sequence of int/level names (default None) - level(s) to set (None for all levels) - inplace : bool - if True, mutates in place - verify_integrity : bool (default True) - if True, checks that levels and labels are compatible - - Returns - ------- - new index (of same type and class...etc) - - - Examples - -------- - >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'), - (2, u'one'), (2, u'two')], - names=['foo', 'bar']) - >>> idx.set_levels([['a','b'], [1,2]]) - MultiIndex(levels=[[u'a', u'b'], [1, 2]], - labels=[[0, 0, 1, 1], [0, 1, 0, 1]], - names=[u'foo', u'bar']) - >>> idx.set_levels(['a','b'], level=0) - MultiIndex(levels=[[u'a', u'b'], [u'one', u'two']], - labels=[[0, 0, 1, 1], [0, 1, 0, 1]], - names=[u'foo', u'bar']) - >>> idx.set_levels(['a','b'], level='bar') - MultiIndex(levels=[[1, 2], [u'a', u'b']], - labels=[[0, 0, 1, 1], [0, 1, 0, 1]], - names=[u'foo', u'bar']) - >>> idx.set_levels([['a','b'], [1,2]], level=[0,1]) - MultiIndex(levels=[[u'a', u'b'], [1, 2]], - labels=[[0, 0, 1, 1], [0, 1, 0, 1]], - names=[u'foo', u'bar']) - """ - if level is not None and not is_list_like(level): - if not is_list_like(levels): - raise TypeError("Levels must be list-like") - if is_list_like(levels[0]): - raise TypeError("Levels must be list-like") - level = [level] - levels = [levels] - elif level is None or is_list_like(level): - if not is_list_like(levels) or not is_list_like(levels[0]): - raise TypeError("Levels must be list of lists-like") - - if inplace: - idx = self - else: - idx = self._shallow_copy() - idx._reset_identity() - idx._set_levels(levels, level=level, validate=True, - verify_integrity=verify_integrity) - if not inplace: - return idx - - # remove me in 0.14 and change to read only property - __set_levels = deprecate("setting `levels` directly", - partial(set_levels, inplace=True, - verify_integrity=True), - 
alt_name="set_levels") - levels = property(fget=_get_levels, fset=__set_levels) - - def _get_labels(self): - return self._labels - - def _set_labels(self, labels, level=None, copy=False, validate=True, - verify_integrity=False): - - if validate and level is None and len(labels) != self.nlevels: - raise ValueError("Length of labels must match number of levels") - if validate and level is not None and len(labels) != len(level): - raise ValueError('Length of labels must match length of levels.') - - if level is None: - new_labels = FrozenList( - _ensure_frozen(lab, lev, copy=copy)._shallow_copy() - for lev, lab in zip(self.levels, labels)) - else: - level = [self._get_level_number(l) for l in level] - new_labels = list(self._labels) - for l, lev, lab in zip(level, self.levels, labels): - new_labels[l] = _ensure_frozen( - lab, lev, copy=copy)._shallow_copy() - new_labels = FrozenList(new_labels) - - self._labels = new_labels - self._tuples = None - self._reset_cache() - - if verify_integrity: - self._verify_integrity() - - def set_labels(self, labels, level=None, inplace=False, - verify_integrity=True): - """ - Set new labels on MultiIndex. Defaults to returning - new index. 
- - Parameters - ---------- - labels : sequence or list of sequence - new labels to apply - level : int, level name, or sequence of int/level names (default None) - level(s) to set (None for all levels) - inplace : bool - if True, mutates in place - verify_integrity : bool (default True) - if True, checks that levels and labels are compatible - - Returns - ------- - new index (of same type and class...etc) - - Examples - -------- - >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'), - (2, u'one'), (2, u'two')], - names=['foo', 'bar']) - >>> idx.set_labels([[1,0,1,0], [0,0,1,1]]) - MultiIndex(levels=[[1, 2], [u'one', u'two']], - labels=[[1, 0, 1, 0], [0, 0, 1, 1]], - names=[u'foo', u'bar']) - >>> idx.set_labels([1,0,1,0], level=0) - MultiIndex(levels=[[1, 2], [u'one', u'two']], - labels=[[1, 0, 1, 0], [0, 1, 0, 1]], - names=[u'foo', u'bar']) - >>> idx.set_labels([0,0,1,1], level='bar') - MultiIndex(levels=[[1, 2], [u'one', u'two']], - labels=[[0, 0, 1, 1], [0, 0, 1, 1]], - names=[u'foo', u'bar']) - >>> idx.set_labels([[1,0,1,0], [0,0,1,1]], level=[0,1]) - MultiIndex(levels=[[1, 2], [u'one', u'two']], - labels=[[1, 0, 1, 0], [0, 0, 1, 1]], - names=[u'foo', u'bar']) - """ - if level is not None and not is_list_like(level): - if not is_list_like(labels): - raise TypeError("Labels must be list-like") - if is_list_like(labels[0]): - raise TypeError("Labels must be list-like") - level = [level] - labels = [labels] - elif level is None or is_list_like(level): - if not is_list_like(labels) or not is_list_like(labels[0]): - raise TypeError("Labels must be list of lists-like") - - if inplace: - idx = self - else: - idx = self._shallow_copy() - idx._reset_identity() - idx._set_labels(labels, level=level, verify_integrity=verify_integrity) - if not inplace: - return idx - - # remove me in 0.14 and change to readonly property - __set_labels = deprecate("setting labels directly", - partial(set_labels, inplace=True, - verify_integrity=True), - alt_name="set_labels") - 
labels = property(fget=_get_labels, fset=__set_labels) - - def copy(self, names=None, dtype=None, levels=None, labels=None, - deep=False, _set_identity=False): - """ - Make a copy of this object. Names, dtype, levels and labels can be - passed and will be set on new copy. - - Parameters - ---------- - names : sequence, optional - dtype : numpy dtype or pandas type, optional - levels : sequence, optional - labels : sequence, optional - - Returns - ------- - copy : MultiIndex - - Notes - ----- - In most cases, there should be no functional difference from using - ``deep``, but if ``deep`` is passed it will attempt to deepcopy. - This could be potentially expensive on large MultiIndex objects. - """ - if deep: - from copy import deepcopy - levels = levels if levels is not None else deepcopy(self.levels) - labels = labels if labels is not None else deepcopy(self.labels) - names = names if names is not None else deepcopy(self.names) - else: - levels = self.levels - labels = self.labels - names = self.names - return MultiIndex(levels=levels, labels=labels, names=names, - sortorder=self.sortorder, verify_integrity=False, - _set_identity=_set_identity) - - def __array__(self, dtype=None): - """ the array interface, return my values """ - return self.values - - def view(self, cls=None): - """ this is defined as a copy with the same identity """ - result = self.copy() - result._id = self._id - return result - - def _shallow_copy_with_infer(self, values=None, **kwargs): - return self._shallow_copy(values, **kwargs) - - def _shallow_copy(self, values=None, **kwargs): - if values is not None: - if 'name' in kwargs: - kwargs['names'] = kwargs.pop('name', None) - # discards freq - kwargs.pop('freq', None) - return MultiIndex.from_tuples(values, **kwargs) - return self.view() - - @cache_readonly - def dtype(self): - return np.dtype('O') - - @cache_readonly - def nbytes(self): - """ return the number of bytes in the underlying data """ - level_nbytes = sum((i.nbytes for i in 
-                            self.levels))
-        label_nbytes = sum((i.nbytes for i in self.labels))
-        names_nbytes = sum((getsizeof(i) for i in self.names))
-        return level_nbytes + label_nbytes + names_nbytes
-
-    def _format_attrs(self):
-        """
-        Return a list of tuples of the (attr,formatted_value)
-        """
-        attrs = [('levels', default_pprint(self._levels, max_seq_items=False)),
-                 ('labels', default_pprint(self._labels, max_seq_items=False))]
-        if not all(name is None for name in self.names):
-            attrs.append(('names', default_pprint(self.names)))
-        if self.sortorder is not None:
-            attrs.append(('sortorder', default_pprint(self.sortorder)))
-        return attrs
-
-    def _format_space(self):
-        return "\n%s" % (' ' * (len(self.__class__.__name__) + 1))
-
-    def _format_data(self):
-        # we are formatting thru the attributes
-        return None
-
-    def __len__(self):
-        return len(self.labels[0])
-
-    def _get_names(self):
-        return FrozenList(level.name for level in self.levels)
-
-    def _set_names(self, names, level=None, validate=True):
-        """
-        sets names on levels. WARNING: mutates!
-
-        Note that you generally want to set this *after* changing levels, so
-        that it only acts on copies
-        """
-
-        names = list(names)
-
-        if validate and level is not None and len(names) != len(level):
-            raise ValueError('Length of names must match length of level.')
-        if validate and level is None and len(names) != self.nlevels:
-            raise ValueError('Length of names must match number of levels in '
-                             'MultiIndex.')
-
-        if level is None:
-            level = range(self.nlevels)
-        else:
-            level = [self._get_level_number(l) for l in level]
-
-        # set the name
-        for l, name in zip(level, names):
-            self.levels[l].rename(name, inplace=True)
-
-    names = property(fset=_set_names, fget=_get_names,
-                     doc="Names of levels in MultiIndex")
-
-    def _reference_duplicate_name(self, name):
-        """
-        Returns True if the name referred to in self.names is duplicated.
-        """
-        # count the times name equals an element in self.names.
- return sum(name == n for n in self.names) > 1 - - def _format_native_types(self, na_rep='nan', **kwargs): - new_levels = [] - new_labels = [] - - # go through the levels and format them - for level, label in zip(self.levels, self.labels): - level = level._format_native_types(na_rep=na_rep, **kwargs) - # add nan values, if there are any - mask = (label == -1) - if mask.any(): - nan_index = len(level) - level = np.append(level, na_rep) - label = label.values() - label[mask] = nan_index - new_levels.append(level) - new_labels.append(label) - - # reconstruct the multi-index - mi = MultiIndex(levels=new_levels, labels=new_labels, names=self.names, - sortorder=self.sortorder, verify_integrity=False) - - return mi.values - - @property - def _constructor(self): - return MultiIndex.from_tuples - - @cache_readonly - def inferred_type(self): - return 'mixed' - - @staticmethod - def _from_elements(values, labels=None, levels=None, names=None, - sortorder=None): - return MultiIndex(levels, labels, names, sortorder=sortorder) - - def _get_level_number(self, level): - try: - count = self.names.count(level) - if count > 1: - raise ValueError('The name %s occurs multiple times, use a ' - 'level number' % level) - level = self.names.index(level) - except ValueError: - if not isinstance(level, int): - raise KeyError('Level %s not found' % str(level)) - elif level < 0: - level += self.nlevels - if level < 0: - orig_level = level - self.nlevels - raise IndexError('Too many levels: Index has only %d ' - 'levels, %d is not a valid level number' % - (self.nlevels, orig_level)) - # Note: levels are zero-based - elif level >= self.nlevels: - raise IndexError('Too many levels: Index has only %d levels, ' - 'not %d' % (self.nlevels, level + 1)) - return level - - _tuples = None - - @property - def values(self): - if self._tuples is not None: - return self._tuples - - values = [] - for lev, lab in zip(self.levels, self.labels): - # Need to box timestamps, etc. 
- box = hasattr(lev, '_box_values') - # Try to minimize boxing. - if box and len(lev) > len(lab): - taken = lev._box_values(com.take_1d(lev._values, lab)) - elif box: - taken = com.take_1d(lev._box_values(lev._values), lab, - fill_value=_get_na_value(lev.dtype.type)) - else: - taken = com.take_1d(np.asarray(lev._values), lab) - values.append(taken) - - self._tuples = lib.fast_zip(values) - return self._tuples - - # fml - @property - def _is_v1(self): - return False - - @property - def _is_v2(self): - return False - - @property - def _has_complex_internals(self): - # to disable groupby tricks - return True - - @cache_readonly - def is_unique(self): - return not self.duplicated().any() - - @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', - False: 'first'}) - @Appender(base._shared_docs['duplicated'] % _index_doc_kwargs) - def duplicated(self, keep='first'): - from pandas.core.groupby import get_group_index - from pandas.hashtable import duplicated_int64 - - shape = map(len, self.levels) - ids = get_group_index(self.labels, shape, sort=False, xnull=False) - - return duplicated_int64(ids, keep) - - @Appender(_index_shared_docs['fillna']) - def fillna(self, value=None, downcast=None): - # isnull is not implemented for MultiIndex - raise NotImplementedError('isnull is not defined for MultiIndex') - - def get_value(self, series, key): - # somewhat broken encapsulation - from pandas.core.indexing import maybe_droplevels - from pandas.core.series import Series - - # Label-based - s = _values_from_object(series) - k = _values_from_object(key) - - def _try_mi(k): - # TODO: what if a level contains tuples?? 
- loc = self.get_loc(k) - new_values = series._values[loc] - new_index = self[loc] - new_index = maybe_droplevels(new_index, k) - return Series(new_values, index=new_index, name=series.name) - - try: - return self._engine.get_value(s, k) - except KeyError as e1: - try: - return _try_mi(key) - except KeyError: - pass - - try: - return _index.get_value_at(s, k) - except IndexError: - raise - except TypeError: - # generator/iterator-like - if is_iterator(key): - raise InvalidIndexError(key) - else: - raise e1 - except Exception: # pragma: no cover - raise e1 - except TypeError: - - # a Timestamp will raise a TypeError in a multi-index - # rather than a KeyError, try it here - # note that a string that 'looks' like a Timestamp will raise - # a KeyError! (GH5725) - if (isinstance(key, (datetime.datetime, np.datetime64)) or - (compat.PY3 and isinstance(key, compat.string_types))): - try: - return _try_mi(key) - except (KeyError): - raise - except: - pass - - try: - return _try_mi(Timestamp(key)) - except: - pass - - raise InvalidIndexError(key) - - def get_level_values(self, level): - """ - Return vector of label values for requested level, equal to the length - of the index - - Parameters - ---------- - level : int or level name - - Returns - ------- - values : ndarray - """ - num = self._get_level_number(level) - unique = self.levels[num] # .values - labels = self.labels[num] - filled = com.take_1d(unique.values, labels, - fill_value=unique._na_value) - _simple_new = unique._simple_new - values = _simple_new(filled, self.names[num], - freq=getattr(unique, 'freq', None), - tz=getattr(unique, 'tz', None)) - return values - - def format(self, space=2, sparsify=None, adjoin=True, names=False, - na_rep=None, formatter=None): - if len(self) == 0: - return [] - - stringified_levels = [] - for lev, lab in zip(self.levels, self.labels): - na = na_rep if na_rep is not None else _get_na_rep(lev.dtype.type) - - if len(lev) > 0: - - formatted = 
lev.take(lab).format(formatter=formatter) - - # we have some NA - mask = lab == -1 - if mask.any(): - formatted = np.array(formatted, dtype=object) - formatted[mask] = na - formatted = formatted.tolist() - - else: - # weird all NA case - formatted = [com.pprint_thing(na if isnull(x) else x, - escape_chars=('\t', '\r', '\n')) - for x in com.take_1d(lev._values, lab)] - stringified_levels.append(formatted) - - result_levels = [] - for lev, name in zip(stringified_levels, self.names): - level = [] - - if names: - level.append(com.pprint_thing(name, - escape_chars=('\t', '\r', '\n')) - if name is not None else '') - - level.extend(np.array(lev, dtype=object)) - result_levels.append(level) - - if sparsify is None: - sparsify = get_option("display.multi_sparse") - - if sparsify: - sentinel = '' - # GH3547 - # use value of sparsify as sentinel, unless it's an obvious - # "Truthey" value - if sparsify not in [True, 1]: - sentinel = sparsify - # little bit of a kludge job for #1217 - result_levels = _sparsify(result_levels, start=int(names), - sentinel=sentinel) - - if adjoin: - from pandas.core.format import _get_adjustment - adj = _get_adjustment() - return adj.adjoin(space, *result_levels).split('\n') - else: - return result_levels - - def _to_safe_for_reshape(self): - """ convert to object if we are a categorical """ - return self.set_levels([i._to_safe_for_reshape() for i in self.levels]) - - def to_hierarchical(self, n_repeat, n_shuffle=1): - """ - Return a MultiIndex reshaped to conform to the - shapes given by n_repeat and n_shuffle. - - Useful to replicate and rearrange a MultiIndex for combination - with another Index with n_repeat items. - - Parameters - ---------- - n_repeat : int - Number of times to repeat the labels on self - n_shuffle : int - Controls the reordering of the labels. If the result is going - to be an inner level in a MultiIndex, n_shuffle will need to be - greater than one. The size of each label must divisible by - n_shuffle. 
- - Returns - ------- - MultiIndex - - Examples - -------- - >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'), - (2, u'one'), (2, u'two')]) - >>> idx.to_hierarchical(3) - MultiIndex(levels=[[1, 2], [u'one', u'two']], - labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1], - [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]]) - """ - levels = self.levels - labels = [np.repeat(x, n_repeat) for x in self.labels] - # Assumes that each label is divisible by n_shuffle - labels = [x.reshape(n_shuffle, -1).ravel(1) for x in labels] - names = self.names - return MultiIndex(levels=levels, labels=labels, names=names) - - @property - def is_all_dates(self): - return False - - def is_lexsorted(self): - """ - Return True if the labels are lexicographically sorted - """ - return self.lexsort_depth == self.nlevels - - def is_lexsorted_for_tuple(self, tup): - """ - Return True if we are correctly lexsorted given the passed tuple - """ - return len(tup) <= self.lexsort_depth - - @cache_readonly - def lexsort_depth(self): - if self.sortorder is not None: - if self.sortorder == 0: - return self.nlevels - else: - return 0 - - int64_labels = [com._ensure_int64(lab) for lab in self.labels] - for k in range(self.nlevels, 0, -1): - if lib.is_lexsorted(int64_labels[:k]): - return k - - return 0 - - @classmethod - def from_arrays(cls, arrays, sortorder=None, names=None): - """ - Convert arrays to MultiIndex - - Parameters - ---------- - arrays : list / sequence of array-likes - Each array-like gives one level's value for each data point. - len(arrays) is the number of levels. 
- sortorder : int or None - Level of sortedness (must be lexicographically sorted by that - level) - - Returns - ------- - index : MultiIndex - - Examples - -------- - >>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']] - >>> MultiIndex.from_arrays(arrays, names=('number', 'color')) - - See Also - -------- - MultiIndex.from_tuples : Convert list of tuples to MultiIndex - MultiIndex.from_product : Make a MultiIndex from cartesian product - of iterables - """ - from pandas.core.categorical import Categorical - - if len(arrays) == 1: - name = None if names is None else names[0] - return Index(arrays[0], name=name) - - cats = [Categorical.from_array(arr, ordered=True) for arr in arrays] - levels = [c.categories for c in cats] - labels = [c.codes for c in cats] - if names is None: - names = [getattr(arr, "name", None) for arr in arrays] - - return MultiIndex(levels=levels, labels=labels, sortorder=sortorder, - names=names, verify_integrity=False) - - @classmethod - def from_tuples(cls, tuples, sortorder=None, names=None): - """ - Convert list of tuples to MultiIndex - - Parameters - ---------- - tuples : list / sequence of tuple-likes - Each tuple is the index of one row/column. - sortorder : int or None - Level of sortedness (must be lexicographically sorted by that - level) - - Returns - ------- - index : MultiIndex - - Examples - -------- - >>> tuples = [(1, u'red'), (1, u'blue'), - (2, u'red'), (2, u'blue')] - >>> MultiIndex.from_tuples(tuples, names=('number', 'color')) - - See Also - -------- - MultiIndex.from_arrays : Convert list of arrays to MultiIndex - MultiIndex.from_product : Make a MultiIndex from cartesian product - of iterables - """ - if len(tuples) == 0: - # I think this is right? Not quite sure... 
- raise TypeError('Cannot infer number of levels from empty list') - - if isinstance(tuples, (np.ndarray, Index)): - if isinstance(tuples, Index): - tuples = tuples._values - - arrays = list(lib.tuples_to_object_array(tuples).T) - elif isinstance(tuples, list): - arrays = list(lib.to_object_array_tuples(tuples).T) - else: - arrays = lzip(*tuples) - - return MultiIndex.from_arrays(arrays, sortorder=sortorder, names=names) - - @classmethod - def from_product(cls, iterables, sortorder=None, names=None): - """ - Make a MultiIndex from the cartesian product of multiple iterables - - Parameters - ---------- - iterables : list / sequence of iterables - Each iterable has unique labels for each level of the index. - sortorder : int or None - Level of sortedness (must be lexicographically sorted by that - level). - names : list / sequence of strings or None - Names for the levels in the index. - - Returns - ------- - index : MultiIndex - - Examples - -------- - >>> numbers = [0, 1, 2] - >>> colors = [u'green', u'purple'] - >>> MultiIndex.from_product([numbers, colors], - names=['number', 'color']) - MultiIndex(levels=[[0, 1, 2], [u'green', u'purple']], - labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]], - names=[u'number', u'color']) - - See Also - -------- - MultiIndex.from_arrays : Convert list of arrays to MultiIndex - MultiIndex.from_tuples : Convert list of tuples to MultiIndex - """ - from pandas.core.categorical import Categorical - from pandas.tools.util import cartesian_product - - categoricals = [Categorical.from_array(it, ordered=True) - for it in iterables] - labels = cartesian_product([c.codes for c in categoricals]) - - return MultiIndex(levels=[c.categories for c in categoricals], - labels=labels, sortorder=sortorder, names=names) - - @property - def nlevels(self): - return len(self.levels) - - @property - def levshape(self): - return tuple(len(x) for x in self.levels) - - def __contains__(self, key): - hash(key) - # work around some kind of odd cython bug - 
try: - self.get_loc(key) - return True - except LookupError: - return False - - def __reduce__(self): - """Necessary for making this object picklable""" - d = dict(levels=[lev for lev in self.levels], - labels=[label for label in self.labels], - sortorder=self.sortorder, names=list(self.names)) - return _new_Index, (self.__class__, d), None - - def __setstate__(self, state): - """Necessary for making this object picklable""" - - if isinstance(state, dict): - levels = state.get('levels') - labels = state.get('labels') - sortorder = state.get('sortorder') - names = state.get('names') - - elif isinstance(state, tuple): - - nd_state, own_state = state - levels, labels, sortorder, names = own_state - - self._set_levels([Index(x) for x in levels], validate=False) - self._set_labels(labels) - self._set_names(names) - self.sortorder = sortorder - self._verify_integrity() - self._reset_identity() - - def __getitem__(self, key): - if np.isscalar(key): - retval = [] - for lev, lab in zip(self.levels, self.labels): - if lab[key] == -1: - retval.append(np.nan) - else: - retval.append(lev[lab[key]]) - - return tuple(retval) - else: - if is_bool_indexer(key): - key = np.asarray(key) - sortorder = self.sortorder - else: - # cannot be sure whether the result will be sorted - sortorder = None - - new_labels = [lab[key] for lab in self.labels] - - return MultiIndex(levels=self.levels, labels=new_labels, - names=self.names, sortorder=sortorder, - verify_integrity=False) - - def take(self, indexer, axis=None): - indexer = com._ensure_platform_int(indexer) - new_labels = [lab.take(indexer) for lab in self.labels] - return MultiIndex(levels=self.levels, labels=new_labels, - names=self.names, verify_integrity=False) - - def append(self, other): - """ - Append a collection of Index options together - - Parameters - ---------- - other : Index or list/tuple of indices - - Returns - ------- - appended : Index - """ - if not isinstance(other, (list, tuple)): - other = [other] - - if 
all((isinstance(o, MultiIndex) and o.nlevels >= self.nlevels) - for o in other): - arrays = [] - for i in range(self.nlevels): - label = self.get_level_values(i) - appended = [o.get_level_values(i) for o in other] - arrays.append(label.append(appended)) - return MultiIndex.from_arrays(arrays, names=self.names) - - to_concat = (self.values, ) + tuple(k._values for k in other) - new_tuples = np.concatenate(to_concat) - - # if all(isinstance(x, MultiIndex) for x in other): - try: - return MultiIndex.from_tuples(new_tuples, names=self.names) - except: - return Index(new_tuples) - - def argsort(self, *args, **kwargs): - return self.values.argsort(*args, **kwargs) - - def repeat(self, n): - return MultiIndex(levels=self.levels, - labels=[label.view(np.ndarray).repeat(n) - for label in self.labels], names=self.names, - sortorder=self.sortorder, verify_integrity=False) - - def drop(self, labels, level=None, errors='raise'): - """ - Make new MultiIndex with passed list of labels deleted - - Parameters - ---------- - labels : array-like - Must be a list of tuples - level : int or level name, default None - - Returns - ------- - dropped : MultiIndex - """ - if level is not None: - return self._drop_from_level(labels, level) - - try: - if not isinstance(labels, (np.ndarray, Index)): - labels = com._index_labels_to_array(labels) - indexer = self.get_indexer(labels) - mask = indexer == -1 - if mask.any(): - if errors != 'ignore': - raise ValueError('labels %s not contained in axis' % - labels[mask]) - indexer = indexer[~mask] - except Exception: - pass - - inds = [] - for label in labels: - try: - loc = self.get_loc(label) - if isinstance(loc, int): - inds.append(loc) - else: - inds.extend(lrange(loc.start, loc.stop)) - except KeyError: - if errors != 'ignore': - raise - - return self.delete(inds) - - def _drop_from_level(self, labels, level): - labels = com._index_labels_to_array(labels) - i = self._get_level_number(level) - index = self.levels[i] - values = 
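A quick illustrative check of the `append`/`drop` behavior documented above (run against a released pandas, not this diff):

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([(1, 'red'), (1, 'blue'), (2, 'red')])

# append concatenates another (Multi)Index row-wise.
extra = pd.MultiIndex.from_tuples([(2, 'blue')])
combined = mi.append(extra)
print(len(combined))      # 4

# drop removes the passed tuples (labels) from the index.
trimmed = combined.drop([(1, 'blue')])
print(len(trimmed))       # 3
```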
index.get_indexer(labels) - - mask = ~lib.ismember(self.labels[i], set(values)) - - return self[mask] - - def droplevel(self, level=0): - """ - Return Index with requested level removed. If MultiIndex has only 2 - levels, the result will be of Index type not MultiIndex. - - Parameters - ---------- - level : int/level name or list thereof - - Notes - ----- - Does not check if result index is unique or not - - Returns - ------- - index : Index or MultiIndex - """ - levels = level - if not isinstance(levels, (tuple, list)): - levels = [level] - - new_levels = list(self.levels) - new_labels = list(self.labels) - new_names = list(self.names) - - levnums = sorted(self._get_level_number(lev) for lev in levels)[::-1] - - for i in levnums: - new_levels.pop(i) - new_labels.pop(i) - new_names.pop(i) - - if len(new_levels) == 1: - - # set nan if needed - mask = new_labels[0] == -1 - result = new_levels[0].take(new_labels[0]) - if mask.any(): - result = result.putmask(mask, np.nan) - - result.name = new_names[0] - return result - else: - return MultiIndex(levels=new_levels, labels=new_labels, - names=new_names, verify_integrity=False) - - def swaplevel(self, i, j): - """ - Swap level i with level j. Do not change the ordering of anything - - Parameters - ---------- - i, j : int, string (can be mixed) - Level of index to be swapped. Can pass level name as string. - - Returns - ------- - swapped : MultiIndex - """ - new_levels = list(self.levels) - new_labels = list(self.labels) - new_names = list(self.names) - - i = self._get_level_number(i) - j = self._get_level_number(j) - - new_levels[i], new_levels[j] = new_levels[j], new_levels[i] - new_labels[i], new_labels[j] = new_labels[j], new_labels[i] - new_names[i], new_names[j] = new_names[j], new_names[i] - - return MultiIndex(levels=new_levels, labels=new_labels, - names=new_names, verify_integrity=False) - - def reorder_levels(self, order): - """ - Rearrange levels using input order. 
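Illustrating the two level-manipulation methods above (`droplevel` collapsing to a plain `Index`, `swaplevel` preserving row order) on a released pandas:

```python
import pandas as pd

mi = pd.MultiIndex.from_product([[1, 2], ['red', 'blue']],
                                names=['number', 'color'])

# Dropping down to a single remaining level yields a plain Index.
flat = mi.droplevel('number')
print(list(flat))          # ['red', 'blue', 'red', 'blue']

# swaplevel exchanges two levels without reordering the rows.
swapped = mi.swaplevel(0, 1)
print(swapped[0])          # ('red', 1)
```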
May not drop or duplicate levels - - Parameters - ---------- - """ - order = [self._get_level_number(i) for i in order] - if len(order) != self.nlevels: - raise AssertionError('Length of order must be same as ' - 'number of levels (%d), got %d' % - (self.nlevels, len(order))) - new_levels = [self.levels[i] for i in order] - new_labels = [self.labels[i] for i in order] - new_names = [self.names[i] for i in order] - - return MultiIndex(levels=new_levels, labels=new_labels, - names=new_names, verify_integrity=False) - - def __getslice__(self, i, j): - return self.__getitem__(slice(i, j)) - - def sortlevel(self, level=0, ascending=True, sort_remaining=True): - """ - Sort MultiIndex at the requested level. The result will respect the - original ordering of the associated factor at that level. - - Parameters - ---------- - level : list-like, int or str, default 0 - If a string is given, must be a name of the level - If list-like must be names or ints of levels. - ascending : boolean, default True - False to sort in descending order - Can also be a list to specify a directed ordering - sort_remaining : sort by the remaining levels after level. 
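An illustrative sketch of `reorder_levels` (the order must name every level exactly once, per the length assertion above):

```python
import pandas as pd

mi = pd.MultiIndex.from_product([[1, 2], ['red', 'blue']],
                                names=['number', 'color'])

# Permute levels by name; rows keep their original order.
reordered = mi.reorder_levels(['color', 'number'])
print(reordered[0])        # ('red', 1)
```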
- - Returns - ------- - sorted_index : MultiIndex - """ - from pandas.core.groupby import _indexer_from_factorized - - if isinstance(level, (compat.string_types, int)): - level = [level] - level = [self._get_level_number(lev) for lev in level] - sortorder = None - - # we have a directed ordering via ascending - if isinstance(ascending, list): - if not len(level) == len(ascending): - raise ValueError("level must have same length as ascending") - - from pandas.core.groupby import _lexsort_indexer - indexer = _lexsort_indexer(self.labels, orders=ascending) - - # level ordering - else: - - labels = list(self.labels) - shape = list(self.levshape) - - # partition labels and shape - primary = tuple(labels.pop(lev - i) for i, lev in enumerate(level)) - primshp = tuple(shape.pop(lev - i) for i, lev in enumerate(level)) - - if sort_remaining: - primary += primary + tuple(labels) - primshp += primshp + tuple(shape) - else: - sortorder = level[0] - - indexer = _indexer_from_factorized(primary, primshp, - compress=False) - - if not ascending: - indexer = indexer[::-1] - - indexer = com._ensure_platform_int(indexer) - new_labels = [lab.take(indexer) for lab in self.labels] - - new_index = MultiIndex(labels=new_labels, levels=self.levels, - names=self.names, sortorder=sortorder, - verify_integrity=False) - - return new_index, indexer - - def get_indexer(self, target, method=None, limit=None, tolerance=None): - """ - Compute indexer and mask for new index given the current index. The - indexer should be then used as an input to ndarray.take to align the - current data to the new index. 
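For reference, `sortlevel` returns both the sorted index and the indexer that produced it (illustrative, run against a released pandas):

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([(2, 'b'), (1, 'a'), (2, 'a'), (1, 'b')])

# Default sort_remaining=True yields a fully lexsorted result.
sorted_mi, indexer = mi.sortlevel(0)
print(sorted_mi[0])        # (1, 'a')
print(sorted_mi[-1])       # (2, 'b')
```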
The mask determines whether labels are - found or not in the current index - - Parameters - ---------- - target : MultiIndex or Index (of tuples) - method : {'pad', 'ffill', 'backfill', 'bfill'} - pad / ffill: propagate LAST valid observation forward to next valid - backfill / bfill: use NEXT valid observation to fill gap - - Notes - ----- - This is a low-level method and probably should be used at your own risk - - Examples - -------- - >>> indexer, mask = index.get_indexer(new_index) - >>> new_values = cur_values.take(indexer) - >>> new_values[-mask] = np.nan - - Returns - ------- - (indexer, mask) : (ndarray, ndarray) - """ - method = _clean_reindex_fill_method(method) - - target = _ensure_index(target) - - target_index = target - if isinstance(target, MultiIndex): - target_index = target._tuple_index - - if not is_object_dtype(target_index.dtype): - return np.ones(len(target_index)) * -1 - - if not self.is_unique: - raise Exception('Reindexing only valid with uniquely valued Index ' - 'objects') - - self_index = self._tuple_index - - if method == 'pad' or method == 'backfill': - if tolerance is not None: - raise NotImplementedError("tolerance not implemented yet " - 'for MultiIndex') - indexer = self_index._get_fill_indexer(target, method, limit) - elif method == 'nearest': - raise NotImplementedError("method='nearest' not implemented yet " - 'for MultiIndex; see GitHub issue 9365') - else: - indexer = self_index._engine.get_indexer(target._values) - - return com._ensure_platform_int(indexer) - - def reindex(self, target, method=None, level=None, limit=None, - tolerance=None): - """ - Create index with target's values (move/add/delete values as necessary) - - Returns - ------- - new_index : pd.MultiIndex - Resulting index - indexer : np.ndarray or None - Indices of output values in original index - - """ - # GH6552: preserve names when reindexing to non-named target - # (i.e. neither Index nor Series). 
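A small illustrative check of `get_indexer`'s contract (positions of target labels in the current index, `-1` for labels not found):

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([(1, 'a'), (1, 'b'), (2, 'a')])
target = pd.MultiIndex.from_tuples([(2, 'a'), (1, 'a'), (9, 'z')])

# -1 marks target labels absent from mi.
print(list(mi.get_indexer(target)))   # [2, 0, -1]
```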
- preserve_names = not hasattr(target, 'names') - - if level is not None: - if method is not None: - raise TypeError('Fill method not supported if level passed') - - # GH7774: preserve dtype/tz if target is empty and not an Index. - target = _ensure_has_len(target) # target may be an iterator - if len(target) == 0 and not isinstance(target, Index): - idx = self.levels[level] - attrs = idx._get_attributes_dict() - attrs.pop('freq', None) # don't preserve freq - target = type(idx)._simple_new(np.empty(0, dtype=idx.dtype), - **attrs) - else: - target = _ensure_index(target) - target, indexer, _ = self._join_level(target, level, how='right', - return_indexers=True, - keep_order=False) - else: - if self.equals(target): - indexer = None - else: - if self.is_unique: - indexer = self.get_indexer(target, method=method, - limit=limit, - tolerance=tolerance) - else: - raise Exception("cannot handle a non-unique multi-index!") - - if not isinstance(target, MultiIndex): - if indexer is None: - target = self - elif (indexer >= 0).all(): - target = self.take(indexer) - else: - # hopefully? - target = MultiIndex.from_tuples(target) - - if (preserve_names and target.nlevels == self.nlevels and - target.names != self.names): - target = target.copy(deep=False) - target.names = self.names - - return target, indexer - - @cache_readonly - def _tuple_index(self): - """ - Convert MultiIndex to an Index of tuples - - Returns - ------- - index : Index - """ - return Index(self._values) - - def get_slice_bound(self, label, side, kind): - if not isinstance(label, tuple): - label = label, - return self._partial_tup_index(label, side=side) - - def slice_locs(self, start=None, end=None, step=None, kind=None): - """ - For an ordered MultiIndex, compute the slice locations for input - labels. They can be tuples representing partial levels, e.g. for a - MultiIndex with 3 levels, you can pass a single value (corresponding to - the first level), or a 1-, 2-, or 3-tuple. 
- - Parameters - ---------- - start : label or tuple, default None - If None, defaults to the beginning - end : label or tuple - If None, defaults to the end - step : int or None - Slice step - kind : string, optional, defaults None - - Returns - ------- - (start, end) : (int, int) - - Notes - ----- - This function assumes that the data is sorted by the first level - """ - # This function adds nothing to its parent implementation (the magic - # happens in get_slice_bound method), but it adds meaningful doc. - return super(MultiIndex, self).slice_locs(start, end, step, kind=kind) - - def _partial_tup_index(self, tup, side='left'): - if len(tup) > self.lexsort_depth: - raise KeyError('Key length (%d) was greater than MultiIndex' - ' lexsort depth (%d)' % - (len(tup), self.lexsort_depth)) - - n = len(tup) - start, end = 0, len(self) - zipped = zip(tup, self.levels, self.labels) - for k, (lab, lev, labs) in enumerate(zipped): - section = labs[start:end] - - if lab not in lev: - if not lev.is_type_compatible(lib.infer_dtype([lab])): - raise TypeError('Level type mismatch: %s' % lab) - - # short circuit - loc = lev.searchsorted(lab, side=side) - if side == 'right' and loc >= 0: - loc -= 1 - return start + section.searchsorted(loc, side=side) - - idx = lev.get_loc(lab) - if k < n - 1: - end = start + section.searchsorted(idx, side='right') - start = start + section.searchsorted(idx, side='left') - else: - return start + section.searchsorted(idx, side=side) - - def get_loc(self, key, method=None): - """ - Get integer location, slice or boolean mask for requested label or - tuple. If the key is past the lexsort depth, the return may be a - boolean mask array, otherwise it is always a slice or int. 
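Illustrating the partial-tuple semantics of `slice_locs` described above (a scalar keys the first level; longer tuples key deeper levels):

```python
import pandas as pd

mi = pd.MultiIndex.from_product([[1, 2, 3], ['a', 'b']])
# mi: (1,a) (1,b) (2,a) (2,b) (3,a) (3,b)

print(mi.slice_locs(1, 2))              # (0, 4)
print(mi.slice_locs((1, 'b'), (3, 'a')))  # (1, 5)
```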
- - Parameters - ---------- - key : label or tuple - method : None - - Returns - ------- - loc : int, slice object or boolean mask - """ - if method is not None: - raise NotImplementedError('only the default get_loc method is ' - 'currently supported for MultiIndex') - - def _maybe_to_slice(loc): - '''convert integer indexer to boolean mask or slice if possible''' - if not isinstance(loc, np.ndarray) or loc.dtype != 'int64': - return loc - - loc = lib.maybe_indices_to_slice(loc, len(self)) - if isinstance(loc, slice): - return loc - - mask = np.empty(len(self), dtype='bool') - mask.fill(False) - mask[loc] = True - return mask - - if not isinstance(key, tuple): - loc = self._get_level_indexer(key, level=0) - return _maybe_to_slice(loc) - - keylen = len(key) - if self.nlevels < keylen: - raise KeyError('Key length ({0}) exceeds index depth ({1})' - ''.format(keylen, self.nlevels)) - - if keylen == self.nlevels and self.is_unique: - - def _maybe_str_to_time_stamp(key, lev): - if lev.is_all_dates and not isinstance(key, Timestamp): - try: - return Timestamp(key, tz=getattr(lev, 'tz', None)) - except Exception: - pass - return key - - key = _values_from_object(key) - key = tuple(map(_maybe_str_to_time_stamp, key, self.levels)) - return self._engine.get_loc(key) - - # -- partial selection or non-unique index - # break the key into 2 parts based on the lexsort_depth of the index; - # the first part returns a continuous slice of the index; the 2nd part - # needs linear search within the slice - i = self.lexsort_depth - lead_key, follow_key = key[:i], key[i:] - start, stop = (self.slice_locs(lead_key, lead_key) - if lead_key else (0, len(self))) - - if start == stop: - raise KeyError(key) - - if not follow_key: - return slice(start, stop) - - warnings.warn('indexing past lexsort depth may impact performance.', - PerformanceWarning, stacklevel=10) - - loc = np.arange(start, stop, dtype='int64') - - for i, k in enumerate(follow_key, len(lead_key)): - mask = 
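The `get_loc` return-type behavior described above (an int for a full key on a unique index, a slice for a partial key within the lexsort depth), as an illustrative check:

```python
import pandas as pd

mi = pd.MultiIndex.from_product([[1, 2], ['red', 'blue']])

# A full tuple on a unique, lexsorted index resolves to a single integer.
print(mi.get_loc((2, 'red')))   # 2

# A scalar keys only the first level and returns a slice.
print(mi.get_loc(1))            # slice(0, 2, None)
```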
self.labels[i][loc] == self.levels[i].get_loc(k) - if not mask.all(): - loc = loc[mask] - if not len(loc): - raise KeyError(key) - - return (_maybe_to_slice(loc) if len(loc) != stop - start else - slice(start, stop)) - - def get_loc_level(self, key, level=0, drop_level=True): - """ - Get integer location slice for requested label or tuple - - Parameters - ---------- - key : label or tuple - level : int/level name or list thereof - - Returns - ------- - loc : int or slice object - """ - - def maybe_droplevels(indexer, levels, drop_level): - if not drop_level: - return self[indexer] - # kludgearound - orig_index = new_index = self[indexer] - levels = [self._get_level_number(i) for i in levels] - for i in sorted(levels, reverse=True): - try: - new_index = new_index.droplevel(i) - except: - - # no dropping here - return orig_index - return new_index - - if isinstance(level, (tuple, list)): - if len(key) != len(level): - raise AssertionError('Key for location must have same ' - 'length as number of levels') - result = None - for lev, k in zip(level, key): - loc, new_index = self.get_loc_level(k, level=lev) - if isinstance(loc, slice): - mask = np.zeros(len(self), dtype=bool) - mask[loc] = True - loc = mask - - result = loc if result is None else result & loc - - return result, maybe_droplevels(result, level, drop_level) - - level = self._get_level_number(level) - - # kludge for #1796 - if isinstance(key, list): - key = tuple(key) - - if isinstance(key, tuple) and level == 0: - - try: - if key in self.levels[0]: - indexer = self._get_level_indexer(key, level=level) - new_index = maybe_droplevels(indexer, [0], drop_level) - return indexer, new_index - except TypeError: - pass - - if not any(isinstance(k, slice) for k in key): - - # partial selection - # optionally get indexer to avoid re-calculation - def partial_selection(key, indexer=None): - if indexer is None: - indexer = self.get_loc(key) - ilevels = [i for i in range(len(key)) - if key[i] != slice(None, None)] - 
return indexer, maybe_droplevels(indexer, ilevels, - drop_level) - - if len(key) == self.nlevels: - - if self.is_unique: - - # here we have a completely specified key, but are - # using some partial string matching here - # GH4758 - all_dates = [(l.is_all_dates and - not isinstance(k, compat.string_types)) - for k, l in zip(key, self.levels)] - can_index_exactly = any(all_dates) - if (any([l.is_all_dates - for k, l in zip(key, self.levels)]) and - not can_index_exactly): - indexer = self.get_loc(key) - - # we have a multiple selection here - if (not isinstance(indexer, slice) or - indexer.stop - indexer.start != 1): - return partial_selection(key, indexer) - - key = tuple(self[indexer].tolist()[0]) - - return (self._engine.get_loc(_values_from_object(key)), - None) - else: - return partial_selection(key) - else: - return partial_selection(key) - else: - indexer = None - for i, k in enumerate(key): - if not isinstance(k, slice): - k = self._get_level_indexer(k, level=i) - if isinstance(k, slice): - # everything - if k.start == 0 and k.stop == len(self): - k = slice(None, None) - else: - k_index = k - - if isinstance(k, slice): - if k == slice(None, None): - continue - else: - raise TypeError(key) - - if indexer is None: - indexer = k_index - else: # pragma: no cover - indexer &= k_index - if indexer is None: - indexer = slice(None, None) - ilevels = [i for i in range(len(key)) - if key[i] != slice(None, None)] - return indexer, maybe_droplevels(indexer, ilevels, drop_level) - else: - indexer = self._get_level_indexer(key, level=level) - return indexer, maybe_droplevels(indexer, [level], drop_level) - - def _get_level_indexer(self, key, level=0, indexer=None): - # return an indexer, boolean array or a slice showing where the key is - # in the totality of values - # if the indexer is provided, then use this - - level_index = self.levels[level] - labels = self.labels[level] - - def convert_indexer(start, stop, step, indexer=indexer, labels=labels): - # given the inputs 
and the labels/indexer, compute an indexer set - # if we have a provided indexer, then this need not consider - # the entire labels set - - r = np.arange(start, stop, step) - if indexer is not None and len(indexer) != len(labels): - - # we have an indexer which maps the locations in the labels - # that we have already selected (and is not an indexer for the - # entire set) otherwise this is wasteful so we only need to - # examine locations that are in this set the only magic here is - # that the result are the mappings to the set that we have - # selected - from pandas import Series - mapper = Series(indexer) - indexer = labels.take(com._ensure_platform_int(indexer)) - result = Series(Index(indexer).isin(r).nonzero()[0]) - m = result.map(mapper)._values - - else: - m = np.zeros(len(labels), dtype=bool) - m[np.in1d(labels, r, assume_unique=True)] = True - - return m - - if isinstance(key, slice): - # handle a slice, returnig a slice if we can - # otherwise a boolean indexer - - try: - if key.start is not None: - start = level_index.get_loc(key.start) - else: - start = 0 - if key.stop is not None: - stop = level_index.get_loc(key.stop) - else: - stop = len(level_index) - 1 - step = key.step - except KeyError: - - # we have a partial slice (like looking up a partial date - # string) - start = stop = level_index.slice_indexer(key.start, key.stop, - key.step) - step = start.step - - if isinstance(start, slice) or isinstance(stop, slice): - # we have a slice for start and/or stop - # a partial date slicer on a DatetimeIndex generates a slice - # note that the stop ALREADY includes the stopped point (if - # it was a string sliced) - return convert_indexer(start.start, stop.stop, step) - - elif level > 0 or self.lexsort_depth == 0 or step is not None: - # need to have like semantics here to right - # searching as when we are using a slice - # so include the stop+1 (so we include stop) - return convert_indexer(start, stop + 1, step) - else: - # sorted, so can return slice 
object -> view - i = labels.searchsorted(start, side='left') - j = labels.searchsorted(stop, side='right') - return slice(i, j, step) - - else: - - loc = level_index.get_loc(key) - if level > 0 or self.lexsort_depth == 0: - return np.array(labels == loc, dtype=bool) - else: - # sorted, so can return slice object -> view - i = labels.searchsorted(loc, side='left') - j = labels.searchsorted(loc, side='right') - return slice(i, j) - - def get_locs(self, tup): - """ - Given a tuple of slices/lists/labels/boolean indexer to a level-wise - spec produce an indexer to extract those locations - - Parameters - ---------- - key : tuple of (slices/list/labels) - - Returns - ------- - locs : integer list of locations or boolean indexer suitable - for passing to iloc - """ - - # must be lexsorted to at least as many levels - if not self.is_lexsorted_for_tuple(tup): - raise KeyError('MultiIndex Slicing requires the index to be fully ' - 'lexsorted tuple len ({0}), lexsort depth ' - '({1})'.format(len(tup), self.lexsort_depth)) - - # indexer - # this is the list of all values that we want to select - n = len(self) - indexer = None - - def _convert_to_indexer(r): - # return an indexer - if isinstance(r, slice): - m = np.zeros(n, dtype=bool) - m[r] = True - r = m.nonzero()[0] - elif is_bool_indexer(r): - if len(r) != n: - raise ValueError("cannot index with a boolean indexer " - "that is not the same length as the " - "index") - r = r.nonzero()[0] - return Int64Index(r) - - def _update_indexer(idxr, indexer=indexer): - if indexer is None: - indexer = Index(np.arange(n)) - if idxr is None: - return indexer - return indexer & idxr - - for i, k in enumerate(tup): - - if is_bool_indexer(k): - # a boolean indexer, must be the same length! 
- k = np.asarray(k) - indexer = _update_indexer(_convert_to_indexer(k), - indexer=indexer) - - elif is_list_like(k): - # a collection of labels to include from this level (these - # are or'd) - indexers = None - for x in k: - try: - idxrs = _convert_to_indexer( - self._get_level_indexer(x, level=i, - indexer=indexer)) - indexers = (idxrs if indexers is None - else indexers | idxrs) - except KeyError: - - # ignore not founds - continue - - if indexers is not None: - indexer = _update_indexer(indexers, indexer=indexer) - else: - - # no matches we are done - return Int64Index([])._values - - elif is_null_slice(k): - # empty slice - indexer = _update_indexer(None, indexer=indexer) - - elif isinstance(k, slice): - - # a slice, include BOTH of the labels - indexer = _update_indexer(_convert_to_indexer( - self._get_level_indexer(k, level=i, indexer=indexer)), - indexer=indexer) - else: - # a single label - indexer = _update_indexer(_convert_to_indexer( - self.get_loc_level(k, level=i, drop_level=False)[0]), - indexer=indexer) - - # empty indexer - if indexer is None: - return Int64Index([])._values - return indexer._values - - def truncate(self, before=None, after=None): - """ - Slice index between two labels / tuples, return new MultiIndex - - Parameters - ---------- - before : label or tuple, can be partial. Default None - None defaults to start - after : label or tuple, can be partial. 
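An illustrative sketch of `get_locs` (one spec per level, resolved to integer positions suitable for `iloc`), assuming a released pandas where the method is public:

```python
import pandas as pd

mi = pd.MultiIndex.from_product([[1, 2], ['red', 'blue']])

# Level 0: the list [1, 2]; level 1: the single label 'red'.
locs = mi.get_locs([[1, 2], 'red'])
print(list(locs))    # [0, 2]
```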
Default None - None defaults to end - - Returns - ------- - truncated : MultiIndex - """ - if after and before and after < before: - raise ValueError('after < before') - - i, j = self.levels[0].slice_locs(before, after) - left, right = self.slice_locs(before, after) - - new_levels = list(self.levels) - new_levels[0] = new_levels[0][i:j] - - new_labels = [lab[left:right] for lab in self.labels] - new_labels[0] = new_labels[0] - i - - return MultiIndex(levels=new_levels, labels=new_labels, - verify_integrity=False) - - def equals(self, other): - """ - Determines if two MultiIndex objects have the same labeling information - (the levels themselves do not necessarily have to be the same) - - See also - -------- - equal_levels - """ - if self.is_(other): - return True - - if not isinstance(other, MultiIndex): - return array_equivalent(self._values, - _values_from_object(_ensure_index(other))) - - if self.nlevels != other.nlevels: - return False - - if len(self) != len(other): - return False - - for i in range(self.nlevels): - svalues = com.take_nd(np.asarray(self.levels[i]._values), - self.labels[i], allow_fill=False) - ovalues = com.take_nd(np.asarray(other.levels[i]._values), - other.labels[i], allow_fill=False) - if not array_equivalent(svalues, ovalues): - return False - - return True - - def equal_levels(self, other): - """ - Return True if the levels of both MultiIndex objects are the same - - """ - if self.nlevels != other.nlevels: - return False - - for i in range(self.nlevels): - if not self.levels[i].equals(other.levels[i]): - return False - return True - - def union(self, other): - """ - Form the union of two MultiIndex objects, sorting if possible - - Parameters - ---------- - other : MultiIndex or array / Index of tuples - - Returns - ------- - Index - - >>> index.union(index2) - """ - self._assert_can_do_setop(other) - other, result_names = self._convert_can_do_setop(other) - - if len(other) == 0 or self.equals(other): - return self - - uniq_tuples = 
lib.fast_unique_multiple([self._values, other._values]) - return MultiIndex.from_arrays(lzip(*uniq_tuples), sortorder=0, - names=result_names) - - def intersection(self, other): - """ - Form the intersection of two MultiIndex objects, sorting if possible - - Parameters - ---------- - other : MultiIndex or array / Index of tuples - - Returns - ------- - Index - """ - self._assert_can_do_setop(other) - other, result_names = self._convert_can_do_setop(other) - - if self.equals(other): - return self - - self_tuples = self._values - other_tuples = other._values - uniq_tuples = sorted(set(self_tuples) & set(other_tuples)) - if len(uniq_tuples) == 0: - return MultiIndex(levels=[[]] * self.nlevels, - labels=[[]] * self.nlevels, - names=result_names, verify_integrity=False) - else: - return MultiIndex.from_arrays(lzip(*uniq_tuples), sortorder=0, - names=result_names) - - def difference(self, other): - """ - Compute sorted set difference of two MultiIndex objects - - Returns - ------- - diff : MultiIndex - """ - self._assert_can_do_setop(other) - other, result_names = self._convert_can_do_setop(other) - - if len(other) == 0: - return self - - if self.equals(other): - return MultiIndex(levels=[[]] * self.nlevels, - labels=[[]] * self.nlevels, - names=result_names, verify_integrity=False) - - difference = sorted(set(self._values) - set(other._values)) - - if len(difference) == 0: - return MultiIndex(levels=[[]] * self.nlevels, - labels=[[]] * self.nlevels, - names=result_names, verify_integrity=False) - else: - return MultiIndex.from_tuples(difference, sortorder=0, - names=result_names) - - def astype(self, dtype): - if not is_object_dtype(np.dtype(dtype)): - raise TypeError('Setting %s dtype to anything other than object ' - 'is not supported' % self.__class__) - return self._shallow_copy() - - def _convert_can_do_setop(self, other): - result_names = self.names - - if not hasattr(other, 'names'): - if len(other) == 0: - other = MultiIndex(levels=[[]] * self.nlevels, - 
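The three set operations above, exercised illustratively on a released pandas:

```python
import pandas as pd

a = pd.MultiIndex.from_tuples([(1, 'red'), (1, 'blue'), (2, 'red')])
b = pd.MultiIndex.from_tuples([(1, 'red'), (2, 'blue')])

print(len(a.union(b)))           # 4
print(list(a.intersection(b)))   # [(1, 'red')]
print(list(a.difference(b)))     # [(1, 'blue'), (2, 'red')]
```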
labels=[[]] * self.nlevels, - verify_integrity=False) - else: - msg = 'other must be a MultiIndex or a list of tuples' - try: - other = MultiIndex.from_tuples(other) - except: - raise TypeError(msg) - else: - result_names = self.names if self.names == other.names else None - return other, result_names - - def insert(self, loc, item): - """ - Make new MultiIndex inserting new item at location - - Parameters - ---------- - loc : int - item : tuple - Must be same length as number of levels in the MultiIndex - - Returns - ------- - new_index : Index - """ - # Pad the key with empty strings if lower levels of the key - # aren't specified: - if not isinstance(item, tuple): - item = (item, ) + ('', ) * (self.nlevels - 1) - elif len(item) != self.nlevels: - raise ValueError('Item must have length equal to number of ' - 'levels.') - - new_levels = [] - new_labels = [] - for k, level, labels in zip(item, self.levels, self.labels): - if k not in level: - # have to insert into level - # must insert at end otherwise you have to recompute all the - # other labels - lev_loc = len(level) - level = level.insert(lev_loc, k) - else: - lev_loc = level.get_loc(k) - - new_levels.append(level) - new_labels.append(np.insert(_ensure_int64(labels), loc, lev_loc)) - - return MultiIndex(levels=new_levels, labels=new_labels, - names=self.names, verify_integrity=False) - - def delete(self, loc): - """ - Make new index with passed location deleted - - Returns - ------- - new_index : MultiIndex - """ - new_labels = [np.delete(lab, loc) for lab in self.labels] - return MultiIndex(levels=self.levels, labels=new_labels, - names=self.names, verify_integrity=False) - - get_major_bounds = slice_locs - - __bounds = None - - @property - def _bounds(self): - """ - Return or compute and return slice points for level 0, assuming - sortedness - """ - if self.__bounds is None: - inds = np.arange(len(self.levels[0])) - self.__bounds = self.labels[0].searchsorted(inds) - - return self.__bounds - - def 
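Illustrating `insert` and `delete` as documented above (a scalar key is padded with empty strings for the unspecified levels, per the padding branch in the code):

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([(1, 'red'), (2, 'blue')])

# Insert a full tuple at position 1.
grown = mi.insert(1, (1, 'blue'))
print(list(grown))   # [(1, 'red'), (1, 'blue'), (2, 'blue')]

# Delete by position.
shrunk = grown.delete(0)
print(len(shrunk))   # 2
```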
_wrap_joined_index(self, joined, other): - names = self.names if self.names == other.names else None - return MultiIndex.from_tuples(joined, names=names) - - @Appender(Index.isin.__doc__) - def isin(self, values, level=None): - if level is None: - return lib.ismember(np.array(self), set(values)) - else: - num = self._get_level_number(level) - levs = self.levels[num] - labs = self.labels[num] - - sought_labels = levs.isin(values).nonzero()[0] - if levs.size == 0: - return np.zeros(len(labs), dtype=np.bool_) - else: - return np.lib.arraysetops.in1d(labs, sought_labels) - - -MultiIndex._add_numeric_methods_disabled() -MultiIndex._add_logical_methods_disabled() - -# For utility purposes - - -def _sparsify(label_list, start=0, sentinel=''): - pivoted = lzip(*label_list) - k = len(label_list) - - result = pivoted[:start + 1] - prev = pivoted[start] - - for cur in pivoted[start + 1:]: - sparse_cur = [] - - for i, (p, t) in enumerate(zip(prev, cur)): - if i == k - 1: - sparse_cur.append(t) - result.append(sparse_cur) - break - - if p == t: - sparse_cur.append(sentinel) - else: - sparse_cur.extend(cur[i:]) - result.append(sparse_cur) - break - - prev = cur - - return lzip(*result) - - -def _ensure_index(index_like, copy=False): - if isinstance(index_like, Index): - if copy: - index_like = index_like.copy() - return index_like - if hasattr(index_like, 'name'): - return Index(index_like, name=index_like.name, copy=copy) - - # must check for exactly list here because of strict type - # check in clean_index_list - if isinstance(index_like, list): - if type(index_like) != list: - index_like = list(index_like) - # 2200 ? 
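The two `isin` modes implemented above (full-tuple membership without `level`, single-level label membership with it), as an illustrative check:

```python
import pandas as pd

mi = pd.MultiIndex.from_product([[1, 2], ['red', 'blue']],
                                names=['number', 'color'])

# Without a level, isin matches full tuples; with level=, single labels.
print(list(mi.isin([(1, 'red'), (2, 'blue')])))   # [True, False, False, True]
print(list(mi.isin(['red'], level='color')))      # [True, False, True, False]
```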
- converted, all_arrays = lib.clean_index_list(index_like) - - if len(converted) > 0 and all_arrays: - return MultiIndex.from_arrays(converted) - else: - index_like = converted - else: - # clean_index_list does the equivalent of copying - # so only need to do this if not list instance - if copy: - from copy import copy - index_like = copy(index_like) - - return Index(index_like) - - -def _ensure_frozen(array_like, categories, copy=False): - array_like = com._coerce_indexer_dtype(array_like, categories) - array_like = array_like.view(FrozenNDArray) - if copy: - array_like = array_like.copy() - return array_like - - -def _validate_join_method(method): - if method not in ['left', 'right', 'inner', 'outer']: - raise ValueError('do not recognize join method %s' % method) - - -# TODO: handle index names! -def _get_combined_index(indexes, intersect=False): - indexes = _get_distinct_indexes(indexes) - if len(indexes) == 0: - return Index([]) - if len(indexes) == 1: - return indexes[0] - if intersect: - index = indexes[0] - for other in indexes[1:]: - index = index.intersection(other) - return index - union = _union_indexes(indexes) - return _ensure_index(union) - - -def _get_distinct_indexes(indexes): - return list(dict((id(x), x) for x in indexes).values()) - - -def _union_indexes(indexes): - if len(indexes) == 0: - raise AssertionError('Must have at least 1 Index to union') - if len(indexes) == 1: - result = indexes[0] - if isinstance(result, list): - result = Index(sorted(result)) - return result - - indexes, kind = _sanitize_and_check(indexes) - - def _unique_indices(inds): - def conv(i): - if isinstance(i, Index): - i = i.tolist() - return i - - return Index(lib.fast_unique_multiple_list([conv(i) for i in inds])) - - if kind == 'special': - result = indexes[0] - - if hasattr(result, 'union_many'): - return result.union_many(indexes[1:]) - else: - for other in indexes[1:]: - result = result.union(other) - return result - elif kind == 'array': - index = indexes[0] - for 
other in indexes[1:]: - if not index.equals(other): - return _unique_indices(indexes) - - return index - else: - return _unique_indices(indexes) - - -def _trim_front(strings): - """ - Trims zeros and decimal points - """ - trimmed = strings - while len(strings) > 0 and all([x[0] == ' ' for x in trimmed]): - trimmed = [x[1:] for x in trimmed] - return trimmed - - -def _sanitize_and_check(indexes): - kinds = list(set([type(index) for index in indexes])) - - if list in kinds: - if len(kinds) > 1: - indexes = [Index(com._try_sort(x)) if not isinstance(x, Index) else - x for x in indexes] - kinds.remove(list) - else: - return indexes, 'list' - - if len(kinds) > 1 or Index not in kinds: - return indexes, 'special' - else: - return indexes, 'array' - - -def _get_consensus_names(indexes): - - # find the non-none names, need to tupleify to make - # the set hashable, then reverse on return - consensus_names = set([tuple(i.names) for i in indexes - if all(n is not None for n in i.names)]) - if len(consensus_names) == 1: - return list(list(consensus_names)[0]) - return [None] * indexes[0].nlevels - - -def _maybe_box(idx): - from pandas.tseries.api import DatetimeIndex, PeriodIndex, TimedeltaIndex - klasses = DatetimeIndex, PeriodIndex, TimedeltaIndex - - if isinstance(idx, klasses): - return idx.asobject - return idx - - -def _all_indexes_same(indexes): - first = indexes[0] - for index in indexes[1:]: - if not first.equals(index): - return False - return True - - -def _get_na_rep(dtype): - return {np.datetime64: 'NaT', np.timedelta64: 'NaT'}.get(dtype, 'NaN') - - -def _get_na_value(dtype): - return {np.datetime64: tslib.NaT, - np.timedelta64: tslib.NaT}.get(dtype, np.nan) - - -def _ensure_has_len(seq): - """If seq is an iterator, put its values into a list.""" - try: - len(seq) - except TypeError: - return list(seq) - else: - return seq +# flake8: noqa +from pandas.indexes.api import * +from pandas.indexes.multi import _sparsify diff --git a/pandas/indexes/__init__.py 
b/pandas/indexes/__init__.py new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/pandas/indexes/api.py b/pandas/indexes/api.py new file mode 100644 index 0000000000000..3f0ee40a6f93d --- /dev/null +++ b/pandas/indexes/api.py @@ -0,0 +1,118 @@ +from pandas.indexes.base import (Index, _new_Index, # noqa + _ensure_index, _get_na_value, + InvalidIndexError) +from pandas.indexes.category import CategoricalIndex # noqa +from pandas.indexes.multi import MultiIndex # noqa +from pandas.indexes.numeric import (NumericIndex, Float64Index, # noqa + Int64Index) +from pandas.indexes.range import RangeIndex # noqa + +import pandas.core.common as com +import pandas.lib as lib + +# TODO: there are many places that rely on these private methods existing in +# pandas.core.index +__all__ = ['Index', 'MultiIndex', 'NumericIndex', 'Float64Index', 'Int64Index', + 'CategoricalIndex', 'RangeIndex', + 'InvalidIndexError', + '_new_Index', + '_ensure_index', '_get_na_value', '_get_combined_index', + '_get_distinct_indexes', '_union_indexes', + '_get_consensus_names', + '_all_indexes_same'] + + +def _get_combined_index(indexes, intersect=False): + # TODO: handle index names! 
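The refactor moves `_get_combined_index` and its helpers into `pandas/indexes/api.py`. Stripped of the `Index` machinery, the combination logic is a fold of union/intersection over identity-de-duplicated inputs. A minimal sketch with plain Python sets standing in for indexes (illustrative names only, not the pandas API):

```python
def combine_indexes(indexes, intersect=False):
    # De-duplicate by object identity, mirroring _get_distinct_indexes
    distinct = list({id(ix): ix for ix in indexes}.values())
    if len(distinct) == 0:
        return set()
    if len(distinct) == 1:
        return distinct[0]
    # Fold union (default) or intersection over the remaining inputs
    result = distinct[0]
    for other in distinct[1:]:
        result = result & other if intersect else result | other
    return result

a = {1, 2, 3}
b = {2, 3, 4}
print(combine_indexes([a, b]))                  # union of both
print(combine_indexes([a, b], intersect=True))  # intersection
print(combine_indexes([a, a]) is a)             # identity de-dup short-circuits
```

The real function additionally routes the union through `_union_indexes`, which special-cases datetime-like indexes via `union_many`.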
+ indexes = _get_distinct_indexes(indexes) + if len(indexes) == 0: + return Index([]) + if len(indexes) == 1: + return indexes[0] + if intersect: + index = indexes[0] + for other in indexes[1:]: + index = index.intersection(other) + return index + union = _union_indexes(indexes) + return _ensure_index(union) + + +def _get_distinct_indexes(indexes): + return list(dict((id(x), x) for x in indexes).values()) + + +def _union_indexes(indexes): + if len(indexes) == 0: + raise AssertionError('Must have at least 1 Index to union') + if len(indexes) == 1: + result = indexes[0] + if isinstance(result, list): + result = Index(sorted(result)) + return result + + indexes, kind = _sanitize_and_check(indexes) + + def _unique_indices(inds): + def conv(i): + if isinstance(i, Index): + i = i.tolist() + return i + + return Index(lib.fast_unique_multiple_list([conv(i) for i in inds])) + + if kind == 'special': + result = indexes[0] + + if hasattr(result, 'union_many'): + return result.union_many(indexes[1:]) + else: + for other in indexes[1:]: + result = result.union(other) + return result + elif kind == 'array': + index = indexes[0] + for other in indexes[1:]: + if not index.equals(other): + return _unique_indices(indexes) + + return index + else: + return _unique_indices(indexes) + + +def _sanitize_and_check(indexes): + kinds = list(set([type(index) for index in indexes])) + + if list in kinds: + if len(kinds) > 1: + indexes = [Index(com._try_sort(x)) + if not isinstance(x, Index) else + x for x in indexes] + kinds.remove(list) + else: + return indexes, 'list' + + if len(kinds) > 1 or Index not in kinds: + return indexes, 'special' + else: + return indexes, 'array' + + +def _get_consensus_names(indexes): + + # find the non-none names, need to tupleify to make + # the set hashable, then reverse on return + consensus_names = set([tuple(i.names) for i in indexes + if all(n is not None for n in i.names)]) + if len(consensus_names) == 1: + return list(list(consensus_names)[0]) + return 
[None] * indexes[0].nlevels + + +def _all_indexes_same(indexes): + first = indexes[0] + for index in indexes[1:]: + if not first.equals(index): + return False + return True diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py new file mode 100644 index 0000000000000..0147000e4380c --- /dev/null +++ b/pandas/indexes/base.py @@ -0,0 +1,3309 @@ +import datetime +import warnings +import operator + +import numpy as np +import pandas.tslib as tslib +import pandas.lib as lib +import pandas.algos as _algos +import pandas.index as _index +from pandas.lib import Timestamp, Timedelta, is_datetime_array + +from pandas.compat import range, u +from pandas import compat +from pandas.core import algorithms +from pandas.core.base import (PandasObject, FrozenList, FrozenNDArray, + IndexOpsMixin) +import pandas.core.base as base +from pandas.util.decorators import (Appender, Substitution, cache_readonly, + deprecate, deprecate_kwarg) +import pandas.core.common as com +from pandas.core.missing import _clean_reindex_fill_method +from pandas.core.common import (isnull, array_equivalent, + is_object_dtype, is_datetimetz, ABCSeries, + ABCPeriodIndex, + _values_from_object, is_float, is_integer, + is_iterator, is_categorical_dtype, + _ensure_object, _ensure_int64, is_bool_indexer, + is_list_like, is_bool_dtype, + is_integer_dtype) +from pandas.core.strings import StringAccessorMixin + +from pandas.core.config import get_option + +# simplify +default_pprint = lambda x, max_seq_items=None: \ + com.pprint_thing(x, escape_chars=('\t', '\r', '\n'), quote_strings=True, + max_seq_items=max_seq_items) + +__all__ = ['Index'] + +_unsortable_types = frozenset(('mixed', 'mixed-integer')) + +_index_doc_kwargs = dict(klass='Index', inplace='', duplicated='np.array') +_index_shared_docs = dict() + + +def _try_get_item(x): + try: + return x.item() + except AttributeError: + return x + + +class InvalidIndexError(Exception): + pass + + +_o_dtype = np.dtype(object) +_Identity = object + + +def 
_new_Index(cls, d): + """ This is called upon unpickling, rather than the default which doesn't + have arguments and breaks __new__ + """ + return cls.__new__(cls, **d) + + +class Index(IndexOpsMixin, StringAccessorMixin, PandasObject): + """ + Immutable ndarray implementing an ordered, sliceable set. The basic object + storing axis labels for all pandas objects + + Parameters + ---------- + data : array-like (1-dimensional) + dtype : NumPy dtype (default: object) + copy : bool + Make a copy of input ndarray + name : object + Name to be stored in the index + tupleize_cols : bool (default: True) + When True, attempt to create a MultiIndex if possible + + Notes + ----- + An Index instance can **only** contain hashable objects + """ + # To hand over control to subclasses + _join_precedence = 1 + + # Cython methods + _groupby = _algos.groupby_object + _arrmap = _algos.arrmap_object + _left_indexer_unique = _algos.left_join_indexer_unique_object + _left_indexer = _algos.left_join_indexer_object + _inner_indexer = _algos.inner_join_indexer_object + _outer_indexer = _algos.outer_join_indexer_object + _box_scalars = False + + _typ = 'index' + _data = None + _id = None + name = None + asi8 = None + _comparables = ['name'] + _attributes = ['name'] + _allow_index_ops = True + _allow_datetime_index_ops = False + _allow_period_index_ops = False + _is_numeric_dtype = False + _can_hold_na = True + + # prioritize current class for _shallow_copy_with_infer, + # used to infer integers as datetime-likes + _infer_as_myclass = False + + _engine_type = _index.ObjectEngine + + def __new__(cls, data=None, dtype=None, copy=False, name=None, + fastpath=False, tupleize_cols=True, **kwargs): + + if name is None and hasattr(data, 'name'): + name = data.name + + if fastpath: + return cls._simple_new(data, name) + + from .range import RangeIndex + + # range + if isinstance(data, RangeIndex): + return RangeIndex(start=data, copy=copy, dtype=dtype, name=name) + elif isinstance(data, range): + 
return RangeIndex.from_range(data, copy=copy, dtype=dtype, + name=name) + + # categorical + if is_categorical_dtype(data) or is_categorical_dtype(dtype): + from .category import CategoricalIndex + return CategoricalIndex(data, copy=copy, name=name, **kwargs) + + # index-like + elif isinstance(data, (np.ndarray, Index, ABCSeries)): + + if (issubclass(data.dtype.type, np.datetime64) or + is_datetimetz(data)): + from pandas.tseries.index import DatetimeIndex + result = DatetimeIndex(data, copy=copy, name=name, **kwargs) + if dtype is not None and _o_dtype == dtype: + return Index(result.to_pydatetime(), dtype=_o_dtype) + else: + return result + + elif issubclass(data.dtype.type, np.timedelta64): + from pandas.tseries.tdi import TimedeltaIndex + result = TimedeltaIndex(data, copy=copy, name=name, **kwargs) + if dtype is not None and _o_dtype == dtype: + return Index(result.to_pytimedelta(), dtype=_o_dtype) + else: + return result + + if dtype is not None: + try: + data = np.array(data, dtype=dtype, copy=copy) + except (TypeError, ValueError): + pass + + # maybe coerce to a sub-class + from pandas.tseries.period import PeriodIndex + if isinstance(data, PeriodIndex): + return PeriodIndex(data, copy=copy, name=name, **kwargs) + if issubclass(data.dtype.type, np.integer): + from .numeric import Int64Index + return Int64Index(data, copy=copy, dtype=dtype, name=name) + elif issubclass(data.dtype.type, np.floating): + from .numeric import Float64Index + return Float64Index(data, copy=copy, dtype=dtype, name=name) + elif issubclass(data.dtype.type, np.bool) or is_bool_dtype(data): + subarr = data.astype('object') + else: + subarr = com._asarray_tuplesafe(data, dtype=object) + + # _asarray_tuplesafe does not always copy underlying data, + # so need to make sure that this happens + if copy: + subarr = subarr.copy() + + if dtype is None: + inferred = lib.infer_dtype(subarr) + if inferred == 'integer': + from .numeric import Int64Index + return Int64Index(subarr.astype('i8'), 
copy=copy, + name=name) + elif inferred in ['floating', 'mixed-integer-float']: + from .numeric import Float64Index + return Float64Index(subarr, copy=copy, name=name) + elif inferred == 'boolean': + # don't support boolean explicitly ATM + pass + elif inferred != 'string': + if (inferred.startswith('datetime') or + tslib.is_timestamp_array(subarr)): + + if (lib.is_datetime_with_singletz_array(subarr) or + 'tz' in kwargs): + # only when subarr has the same tz + from pandas.tseries.index import DatetimeIndex + return DatetimeIndex(subarr, copy=copy, name=name, + **kwargs) + + elif (inferred.startswith('timedelta') or + lib.is_timedelta_array(subarr)): + from pandas.tseries.tdi import TimedeltaIndex + return TimedeltaIndex(subarr, copy=copy, name=name, + **kwargs) + elif inferred == 'period': + return PeriodIndex(subarr, name=name, **kwargs) + return cls._simple_new(subarr, name) + + elif hasattr(data, '__array__'): + return Index(np.asarray(data), dtype=dtype, copy=copy, name=name, + **kwargs) + elif data is None or np.isscalar(data): + cls._scalar_data_error(data) + else: + if (tupleize_cols and isinstance(data, list) and data and + isinstance(data[0], tuple)): + + # we must be all tuples, otherwise don't construct + # 10697 + if all(isinstance(e, tuple) for e in data): + try: + # must be orderable in py3 + if compat.PY3: + sorted(data) + from .multi import MultiIndex + return MultiIndex.from_tuples( + data, names=name or kwargs.get('names')) + except (TypeError, KeyError): + # python2 - MultiIndex fails on mixed types + pass + # other iterable of some kind + subarr = com._asarray_tuplesafe(data, dtype=object) + return Index(subarr, dtype=dtype, copy=copy, name=name, **kwargs) + + """ + NOTE for new Index creation: + + - _simple_new: It returns new Index with the same type as the caller. + All metadata (such as name) must be provided by caller's responsibility. + Using _shallow_copy is recommended because it fills these metadata + otherwise specified.
+ + - _shallow_copy: It returns new Index with the same type (using + _simple_new), but fills caller's metadata otherwise specified. Passed + kwargs will overwrite corresponding metadata. + + - _shallow_copy_with_infer: It returns new Index inferring its type + from passed values. It fills caller's metadata otherwise specified as the + same as _shallow_copy. + + See each method's docstring. + """ + + @classmethod + def _simple_new(cls, values, name=None, dtype=None, **kwargs): + """ + we require that we have a dtype compat for the values + if we are passed a non-dtype compat, then coerce using the constructor + + Must be careful not to recurse. + """ + if not hasattr(values, 'dtype'): + if values is None and dtype is not None: + values = np.empty(0, dtype=dtype) + else: + values = np.array(values, copy=False) + if is_object_dtype(values): + values = cls(values, name=name, dtype=dtype, + **kwargs)._values + + result = object.__new__(cls) + result._data = values + result.name = name + for k, v in compat.iteritems(kwargs): + setattr(result, k, v) + result._reset_identity() + return result + + def _shallow_copy(self, values=None, **kwargs): + """ + create a new Index with the same class as the caller, don't copy the + data, use the same object attributes with passed in attributes taking + precedence + + *this is an internal non-public method* + + Parameters + ---------- + values : the values to create the new Index, optional + kwargs : updates the default attributes for this Index + """ + if values is None: + values = self.values + attributes = self._get_attributes_dict() + attributes.update(kwargs) + return self._simple_new(values, **attributes) + + def _shallow_copy_with_infer(self, values=None, **kwargs): + """ + create a new Index inferring the class with passed value, don't copy + the data, use the same object attributes with passed in attributes + taking precedence + + *this is an internal non-public method* + + Parameters + ---------- + values : the values to
create the new Index, optional + kwargs : updates the default attributes for this Index + """ + if values is None: + values = self.values + attributes = self._get_attributes_dict() + attributes.update(kwargs) + attributes['copy'] = False + if self._infer_as_myclass: + try: + return self._constructor(values, **attributes) + except (TypeError, ValueError): + pass + return Index(values, **attributes) + + def _update_inplace(self, result, **kwargs): + # guard when called from IndexOpsMixin + raise TypeError("Index can't be updated inplace") + + def is_(self, other): + """ + More flexible, faster check like ``is`` but that works through views + + Note: this is *not* the same as ``Index.identical()``, which checks + that metadata is also the same. + + Parameters + ---------- + other : object + other object to compare against. + + Returns + ------- + True if both have same underlying data, False otherwise : bool + """ + # use something other than None to be clearer + return self._id is getattr( + other, '_id', Ellipsis) and self._id is not None + + def _reset_identity(self): + """Initializes or resets ``_id`` attribute with new object""" + self._id = _Identity() + + # ndarray compat + def __len__(self): + """ + return the length of the Index + """ + return len(self._data) + + def __array__(self, dtype=None): + """ the array interface, return my values """ + return self._data.view(np.ndarray) + + def __array_wrap__(self, result, context=None): + """ + Gets called after a ufunc + """ + if is_bool_dtype(result): + return result + + attrs = self._get_attributes_dict() + attrs = self._maybe_update_attributes(attrs) + return Index(result, **attrs) + + @cache_readonly + def dtype(self): + """ return the dtype object of the underlying data """ + return self._data.dtype + + @cache_readonly + def dtype_str(self): + """ return the dtype str of the underlying data """ + return str(self.dtype) + + @property + def values(self): + """ return the underlying data as an ndarray """ + 
return self._data.view(np.ndarray) + + def get_values(self): + """ return the underlying data as an ndarray """ + return self.values + + # ops compat + def tolist(self): + """ + return a list of the Index values + """ + return list(self.values) + + def repeat(self, n): + """ + return a new Index of the values repeated n times + + See also + -------- + numpy.ndarray.repeat + """ + return self._shallow_copy(self._values.repeat(n)) + + def ravel(self, order='C'): + """ + return an ndarray of the flattened values of the underlying data + + See also + -------- + numpy.ndarray.ravel + """ + return self._values.ravel(order=order) + + # construction helpers + @classmethod + def _scalar_data_error(cls, data): + raise TypeError('{0}(...) must be called with a collection of some ' + 'kind, {1} was passed'.format(cls.__name__, + repr(data))) + + @classmethod + def _string_data_error(cls, data): + raise TypeError('String dtype not supported, you may need ' + 'to explicitly cast to a numeric type') + + @classmethod + def _coerce_to_ndarray(cls, data): + """coerces data to ndarray, raises on scalar data. Converts other + iterables to list first and then to array. Does not touch ndarrays. 
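`_coerce_to_ndarray`, defined just above, raises on scalar input and converts other iterables through a list before the array step. A rough pure-Python stand-in for that validation logic (note: unlike `np.isscalar`, this sketch treats strings as iterables, and it stops at a sequence rather than an `np.ndarray`):

```python
def coerce_to_sequence(data):
    """Mimic the scalar check: collections pass through, scalars raise."""
    if data is None or not hasattr(data, '__iter__'):
        # mirrors cls._scalar_data_error
        raise TypeError('Index(...) must be called with a collection '
                        'of some kind, %r was passed' % (data,))
    if not isinstance(data, (list, tuple)):
        data = list(data)  # other iterable of some kind
    return data

print(coerce_to_sequence(range(3)))  # [0, 1, 2]
```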
+ """ + + if not isinstance(data, (np.ndarray, Index)): + if data is None or np.isscalar(data): + cls._scalar_data_error(data) + + # other iterable of some kind + if not isinstance(data, (ABCSeries, list, tuple)): + data = list(data) + data = np.asarray(data) + return data + + def _get_attributes_dict(self): + """ return an attributes dict for my class """ + return dict([(k, getattr(self, k, None)) for k in self._attributes]) + + def view(self, cls=None): + + # we need to see if we are subclassing an + # index type here + if cls is not None and not hasattr(cls, '_typ'): + result = self._data.view(cls) + else: + result = self._shallow_copy() + if isinstance(result, Index): + result._id = self._id + return result + + def _coerce_scalar_to_index(self, item): + """ + we need to coerce a scalar to a compat for our index type + + Parameters + ---------- + item : scalar item to coerce + """ + return Index([item], dtype=self.dtype, **self._get_attributes_dict()) + + _index_shared_docs['copy'] = """ + Make a copy of this object. Name and dtype sets those attributes on + the new object. + + Parameters + ---------- + name : string, optional + deep : boolean, default False + dtype : numpy dtype or pandas type + + Returns + ------- + copy : Index + + Notes + ----- + In most cases, there should be no functional difference from using + ``deep``, but if ``deep`` is passed it will attempt to deepcopy. 
+ """ + + @Appender(_index_shared_docs['copy']) + def copy(self, name=None, deep=False, dtype=None, **kwargs): + names = kwargs.get('names') + if names is not None and name is not None: + raise TypeError("Can only provide one of `names` and `name`") + if deep: + from copy import deepcopy + new_index = self._shallow_copy(self._data.copy()) + name = name or deepcopy(self.name) + else: + new_index = self._shallow_copy() + name = self.name + if name is not None: + names = [name] + if names: + new_index = new_index.set_names(names) + if dtype: + new_index = new_index.astype(dtype) + return new_index + + __copy__ = copy + + def __unicode__(self): + """ + Return a string representation for this object. + + Invoked by unicode(df) in py2 only. Yields a Unicode String in both + py2/py3. + """ + klass = self.__class__.__name__ + data = self._format_data() + attrs = self._format_attrs() + space = self._format_space() + + prepr = (u(",%s") % + space).join([u("%s=%s") % (k, v) for k, v in attrs]) + + # no data provided, just attributes + if data is None: + data = '' + + res = u("%s(%s%s)") % (klass, data, prepr) + + return res + + def _format_space(self): + + # using space here controls if the attributes + # are line separated or not (the default) + + # max_seq_items = get_option('display.max_seq_items') + # if len(self) > max_seq_items: + # space = "\n%s" % (' ' * (len(klass) + 1)) + return " " + + @property + def _formatter_func(self): + """ + Return the formatted data as a unicode string + """ + return default_pprint + + def _format_data(self): + """ + Return the formatted data as a unicode string + """ + from pandas.core.format import get_console_size, _get_adjustment + display_width, _ = get_console_size() + if display_width is None: + display_width = get_option('display.width') or 80 + + space1 = "\n%s" % (' ' * (len(self.__class__.__name__) + 1)) + space2 = "\n%s" % (' ' * (len(self.__class__.__name__) + 2)) + + n = len(self) + sep = ',' + max_seq_items = 
get_option('display.max_seq_items') or n + formatter = self._formatter_func + + # do we want to justify (only do so for non-objects) + is_justify = not (self.inferred_type in ('string', 'unicode') or + (self.inferred_type == 'categorical' and + is_object_dtype(self.categories))) + + # are we a truncated display + is_truncated = n > max_seq_items + + # adj can optionally handle unicode East Asian width + adj = _get_adjustment() + + def _extend_line(s, line, value, display_width, next_line_prefix): + + if (adj.len(line.rstrip()) + adj.len(value.rstrip()) >= + display_width): + s += line.rstrip() + line = next_line_prefix + line += value + return s, line + + def best_len(values): + if values: + return max([adj.len(x) for x in values]) + else: + return 0 + + if n == 0: + summary = '[], ' + elif n == 1: + first = formatter(self[0]) + summary = '[%s], ' % first + elif n == 2: + first = formatter(self[0]) + last = formatter(self[-1]) + summary = '[%s, %s], ' % (first, last) + else: + + if n > max_seq_items: + n = min(max_seq_items // 2, 10) + head = [formatter(x) for x in self[:n]] + tail = [formatter(x) for x in self[-n:]] + else: + head = [] + tail = [formatter(x) for x in self] + + # adjust all values to max length if needed + if is_justify: + + # however, if we are not truncated and we are only a single + # line, then don't justify + if (is_truncated or + not (len(', '.join(head)) < display_width and + len(', '.join(tail)) < display_width)): + max_len = max(best_len(head), best_len(tail)) + head = [x.rjust(max_len) for x in head] + tail = [x.rjust(max_len) for x in tail] + + summary = "" + line = space2 + + for i in range(len(head)): + word = head[i] + sep + ' ' + summary, line = _extend_line(summary, line, word, + display_width, space2) + + if is_truncated: + # remove trailing space of last line + summary += line.rstrip() + space2 + '...'
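The truncation branch in `_format_data` above keeps at most `min(max_seq_items // 2, 10)` values from each end of the index; below that threshold everything is formatted as the "tail". A standalone sketch of just that selection rule (hypothetical helper, not part of pandas):

```python
def head_tail(values, max_seq_items=100):
    """Mirror Index._format_data's truncation rule: when there are more
    than max_seq_items values, keep min(max_seq_items // 2, 10) from each
    end; otherwise everything goes into the tail."""
    n = len(values)
    if n > max_seq_items:
        k = min(max_seq_items // 2, 10)
        return values[:k], values[-k:], True   # truncated display
    return [], list(values), False

head, tail, truncated = head_tail(list(range(100)), max_seq_items=6)
print(head, tail, truncated)  # [0, 1, 2] [97, 98, 99] True
```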
+ line = space2 + + for i in range(len(tail) - 1): + word = tail[i] + sep + ' ' + summary, line = _extend_line(summary, line, word, + display_width, space2) + + # last value: no sep added + 1 space of width used for trailing ',' + summary, line = _extend_line(summary, line, tail[-1], + display_width - 2, space2) + summary += line + summary += '],' + + if len(summary) > (display_width): + summary += space1 + else: # one row + summary += ' ' + + # remove initial space + summary = '[' + summary[len(space2):] + + return summary + + def _format_attrs(self): + """ + Return a list of tuples of the (attr,formatted_value) + """ + attrs = [] + attrs.append(('dtype', "'%s'" % self.dtype)) + if self.name is not None: + attrs.append(('name', default_pprint(self.name))) + max_seq_items = get_option('display.max_seq_items') or len(self) + if len(self) > max_seq_items: + attrs.append(('length', len(self))) + return attrs + + def to_series(self, **kwargs): + """ + Create a Series with both index and values equal to the index keys + useful with map for returning an indexer based on an index + + Returns + ------- + Series : dtype will be based on the type of the Index values. 
+ """ + + from pandas import Series + return Series(self._to_embed(), index=self, name=self.name) + + def _to_embed(self, keep_tz=False): + """ + *this is an internal non-public method* + + return an array repr of this object, potentially casting to object + + """ + return self.values.copy() + + def astype(self, dtype): + return Index(self.values.astype(dtype), name=self.name, dtype=dtype) + + def _to_safe_for_reshape(self): + """ convert to object if we are a categorical """ + return self + + def to_datetime(self, dayfirst=False): + """ + For an Index containing strings or datetime.datetime objects, attempt + conversion to DatetimeIndex + """ + from pandas.tseries.index import DatetimeIndex + if self.inferred_type == 'string': + from dateutil.parser import parse + parser = lambda x: parse(x, dayfirst=dayfirst) + parsed = lib.try_parse_dates(self.values, parser=parser) + return DatetimeIndex(parsed) + else: + return DatetimeIndex(self.values) + + def _assert_can_do_setop(self, other): + if not com.is_list_like(other): + raise TypeError('Input must be Index or array-like') + return True + + def _convert_can_do_setop(self, other): + if not isinstance(other, Index): + other = Index(other, name=self.name) + result_name = self.name + else: + result_name = self.name if self.name == other.name else None + return other, result_name + + @property + def nlevels(self): + return 1 + + def _get_names(self): + return FrozenList((self.name, )) + + def _set_names(self, values, level=None): + if len(values) != 1: + raise ValueError('Length of new names must be 1, got %d' % + len(values)) + self.name = values[0] + + names = property(fset=_set_names, fget=_get_names) + + def set_names(self, names, level=None, inplace=False): + """ + Set new names on index. Defaults to returning new index. 
+ + Parameters + ---------- + names : str or sequence + name(s) to set + level : int, level name, or sequence of int/level names (default None) + If the index is a MultiIndex (hierarchical), level(s) to set (None + for all levels). Otherwise level must be None + inplace : bool + if True, mutates in place + + Returns + ------- + new index (of same type and class...etc) [if inplace, returns None] + + Examples + -------- + >>> Index([1, 2, 3, 4]).set_names('foo') + Int64Index([1, 2, 3, 4], dtype='int64') + >>> Index([1, 2, 3, 4]).set_names(['foo']) + Int64Index([1, 2, 3, 4], dtype='int64') + >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'), + (2, u'one'), (2, u'two')], + names=['foo', 'bar']) + >>> idx.set_names(['baz', 'quz']) + MultiIndex(levels=[[1, 2], [u'one', u'two']], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], + names=[u'baz', u'quz']) + >>> idx.set_names('baz', level=0) + MultiIndex(levels=[[1, 2], [u'one', u'two']], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], + names=[u'baz', u'bar']) + """ + if level is not None and self.nlevels == 1: + raise ValueError('Level must be None for non-MultiIndex') + + if level is not None and not is_list_like(level) and is_list_like( + names): + raise TypeError("Names must be a string") + + if not is_list_like(names) and level is None and self.nlevels > 1: + raise TypeError("Must pass list-like as `names`.") + + if not is_list_like(names): + names = [names] + if level is not None and not is_list_like(level): + level = [level] + + if inplace: + idx = self + else: + idx = self._shallow_copy() + idx._set_names(names, level=level) + if not inplace: + return idx + + def rename(self, name, inplace=False): + """ + Set new names on index. Defaults to returning new index. 
+ + Parameters + ---------- + name : str or list + name to set + inplace : bool + if True, mutates in place + + Returns + ------- + new index (of same type and class...etc) [if inplace, returns None] + """ + return self.set_names([name], inplace=inplace) + + @property + def _has_complex_internals(self): + # to disable groupby tricks in MultiIndex + return False + + def summary(self, name=None): + if len(self) > 0: + head = self[0] + if (hasattr(head, 'format') and + not isinstance(head, compat.string_types)): + head = head.format() + tail = self[-1] + if (hasattr(tail, 'format') and + not isinstance(tail, compat.string_types)): + tail = tail.format() + index_summary = ', %s to %s' % (com.pprint_thing(head), + com.pprint_thing(tail)) + else: + index_summary = '' + + if name is None: + name = type(self).__name__ + return '%s: %s entries%s' % (name, len(self), index_summary) + + def _mpl_repr(self): + # how to represent ourselves to matplotlib + return self.values + + _na_value = np.nan + """The expected NA value to use with this index.""" + + @property + def is_monotonic(self): + """ alias for is_monotonic_increasing (deprecated) """ + return self._engine.is_monotonic_increasing + + @property + def is_monotonic_increasing(self): + """ + return if the index is monotonic increasing (only equal or + increasing) values. + """ + return self._engine.is_monotonic_increasing + + @property + def is_monotonic_decreasing(self): + """ + return if the index is monotonic decreasing (only equal or + decreasing) values. 
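`is_monotonic_increasing` and `is_monotonic_decreasing` above delegate to the Cython engine; what they report is non-strict ordering ("only equal or increasing/decreasing values"), which a plain-Python equivalent makes explicit (sketch only, not the pandas implementation):

```python
def is_monotonic_increasing(values):
    # non-strict: equal neighbours are allowed, matching the docstrings
    return all(a <= b for a, b in zip(values, values[1:]))

def is_monotonic_decreasing(values):
    return all(a >= b for a, b in zip(values, values[1:]))

print(is_monotonic_increasing([1, 2, 2, 3]))  # True
print(is_monotonic_decreasing([1, 2, 2, 3]))  # False
```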
+ """ + return self._engine.is_monotonic_decreasing + + def is_lexsorted_for_tuple(self, tup): + return True + + @cache_readonly(allow_setting=True) + def is_unique(self): + """ return if the index has unique values """ + return self._engine.is_unique + + @property + def has_duplicates(self): + return not self.is_unique + + def is_boolean(self): + return self.inferred_type in ['boolean'] + + def is_integer(self): + return self.inferred_type in ['integer'] + + def is_floating(self): + return self.inferred_type in ['floating', 'mixed-integer-float'] + + def is_numeric(self): + return self.inferred_type in ['integer', 'floating'] + + def is_object(self): + return is_object_dtype(self.dtype) + + def is_categorical(self): + return self.inferred_type in ['categorical'] + + def is_mixed(self): + return 'mixed' in self.inferred_type + + def holds_integer(self): + return self.inferred_type in ['integer', 'mixed-integer'] + + def _convert_scalar_indexer(self, key, kind=None): + """ + convert a scalar indexer + + Parameters + ---------- + key : label of the slice bound + kind : optional, type of the indexing operation (loc/ix/iloc/None) + + right now we are converting + floats -> ints if the index supports it + """ + + def to_int(): + ikey = int(key) + if ikey != key: + return self._invalid_indexer('label', key) + return ikey + + if kind == 'iloc': + if is_integer(key): + return key + elif is_float(key): + key = to_int() + warnings.warn("scalar indexers for index type {0} should be " + "integers and not floating point".format( + type(self).__name__), + FutureWarning, stacklevel=5) + return key + return self._invalid_indexer('label', key) + + if is_float(key): + if isnull(key): + return self._invalid_indexer('label', key) + warnings.warn("scalar indexers for index type {0} should be " + "integers and not floating point".format( + type(self).__name__), + FutureWarning, stacklevel=3) + return to_int() + + return key + + def _convert_slice_indexer_getitem(self, key, 
is_index_slice=False):
+        """ called from the getitem slicers, determine how to treat the key
+            whether positional or not """
+        if self.is_integer() or is_index_slice:
+            return key
+        return self._convert_slice_indexer(key)
+
+    def _convert_slice_indexer(self, key, kind=None):
+        """
+        convert a slice indexer. disallow floats in the start/stop/step
+
+        Parameters
+        ----------
+        key : label of the slice bound
+        kind : optional, type of the indexing operation (loc/ix/iloc/None)
+        """
+
+        # if we are not a slice, then we are done
+        if not isinstance(key, slice):
+            return key
+
+        # validate iloc
+        if kind == 'iloc':
+
+            # need to coerce to_int if needed
+            def f(c):
+                v = getattr(key, c)
+                if v is None or is_integer(v):
+                    return v
+
+                # warn if it's a convertible float
+                if v == int(v):
+                    warnings.warn("slice indexers when using iloc should be "
+                                  "integers and not floating point",
+                                  FutureWarning, stacklevel=7)
+                    return int(v)
+
+                self._invalid_indexer('slice {0} value'.format(c), v)
+
+            return slice(*[f(c) for c in ['start', 'stop', 'step']])
+
+        # validate slicers
+        def validate(v):
+            if v is None or is_integer(v):
+                return True
+
+            # disallow floats (except for .ix)
+            elif is_float(v):
+                if kind == 'ix':
+                    return True
+
+                return False
+
+            return True
+
+        for c in ['start', 'stop', 'step']:
+            v = getattr(key, c)
+            if not validate(v):
+                self._invalid_indexer('slice {0} value'.format(c), v)
+
+        # figure out if this is a positional indexer
+        start, stop, step = key.start, key.stop, key.step
+
+        def is_int(v):
+            return v is None or is_integer(v)
+
+        is_null_slicer = start is None and stop is None
+        is_index_slice = is_int(start) and is_int(stop)
+        is_positional = is_index_slice and not self.is_integer()
+
+        if kind == 'getitem':
+            return self._convert_slice_indexer_getitem(
+                key, is_index_slice=is_index_slice)
+
+        # convert the slice to an indexer here
+
+        # if we are mixed and have integers
+        try:
+            if is_positional and self.is_mixed():
+                # TODO: i, j are not used
anywhere + if start is not None: + i = self.get_loc(start) # noqa + if stop is not None: + j = self.get_loc(stop) # noqa + is_positional = False + except KeyError: + if self.inferred_type == 'mixed-integer-float': + raise + + if is_null_slicer: + indexer = key + elif is_positional: + indexer = key + else: + try: + indexer = self.slice_indexer(start, stop, step) + except Exception: + if is_index_slice: + if self.is_integer(): + raise + else: + indexer = key + else: + raise + + return indexer + + def _convert_list_indexer(self, keyarr, kind=None): + """ + passed a key that is tuplesafe that is integer based + and we have a mixed index (e.g. number/labels). figure out + the indexer. return None if we can't help + """ + if (kind in [None, 'iloc', 'ix'] and + is_integer_dtype(keyarr) and not self.is_floating() and + not isinstance(keyarr, ABCPeriodIndex)): + + if self.inferred_type == 'mixed-integer': + indexer = self.get_indexer(keyarr) + if (indexer >= 0).all(): + return indexer + # missing values are flagged as -1 by get_indexer and negative + # indices are already converted to positive indices in the + # above if-statement, so the negative flags are changed to + # values outside the range of indices so as to trigger an + # IndexError in maybe_convert_indices + indexer[indexer < 0] = len(self) + from pandas.core.indexing import maybe_convert_indices + return maybe_convert_indices(indexer, len(self)) + + elif not self.inferred_type == 'integer': + keyarr = np.where(keyarr < 0, len(self) + keyarr, keyarr) + return keyarr + + return None + + def _invalid_indexer(self, form, key): + """ consistent invalid indexer message """ + raise TypeError("cannot do {form} indexing on {klass} with these " + "indexers [{key}] of {kind}".format( + form=form, klass=type(self), key=key, + kind=type(key))) + + def get_duplicates(self): + from collections import defaultdict + counter = defaultdict(lambda: 0) + for k in self.values: + counter[k] += 1 + return sorted(k for k, v in 
compat.iteritems(counter) if v > 1) + + _get_duplicates = get_duplicates + + def _cleanup(self): + self._engine.clear_mapping() + + @cache_readonly + def _constructor(self): + return type(self) + + @cache_readonly + def _engine(self): + # property, for now, slow to look up + return self._engine_type(lambda: self.values, len(self)) + + def _validate_index_level(self, level): + """ + Validate index level. + + For single-level Index getting level number is a no-op, but some + verification must be done like in MultiIndex. + + """ + if isinstance(level, int): + if level < 0 and level != -1: + raise IndexError("Too many levels: Index has only 1 level," + " %d is not a valid level number" % (level, )) + elif level > 0: + raise IndexError("Too many levels:" + " Index has only 1 level, not %d" % + (level + 1)) + elif level != self.name: + raise KeyError('Level %s must be same as name (%s)' % + (level, self.name)) + + def _get_level_number(self, level): + self._validate_index_level(level) + return 0 + + @cache_readonly + def inferred_type(self): + """ return a string of the type inferred from the values """ + return lib.infer_dtype(self) + + def is_type_compatible(self, kind): + return kind == self.inferred_type + + @cache_readonly + def is_all_dates(self): + if self._data is None: + return False + return is_datetime_array(_ensure_object(self.values)) + + def __iter__(self): + return iter(self.values) + + def __reduce__(self): + d = dict(data=self._data) + d.update(self._get_attributes_dict()) + return _new_Index, (self.__class__, d), None + + def __setstate__(self, state): + """Necessary for making this object picklable""" + + if isinstance(state, dict): + self._data = state.pop('data') + for k, v in compat.iteritems(state): + setattr(self, k, v) + + elif isinstance(state, tuple): + + if len(state) == 2: + nd_state, own_state = state + data = np.empty(nd_state[1], dtype=nd_state[2]) + np.ndarray.__setstate__(data, nd_state) + self.name = own_state[0] + + else: # pragma: no 
cover + data = np.empty(state) + np.ndarray.__setstate__(data, state) + + self._data = data + self._reset_identity() + else: + raise Exception("invalid pickle state") + + _unpickle_compat = __setstate__ + + def __deepcopy__(self, memo=None): + if memo is None: + memo = {} + return self.copy(deep=True) + + def __nonzero__(self): + raise ValueError("The truth value of a {0} is ambiguous. " + "Use a.empty, a.bool(), a.item(), a.any() or a.all()." + .format(self.__class__.__name__)) + + __bool__ = __nonzero__ + + def __contains__(self, key): + hash(key) + # work around some kind of odd cython bug + try: + return key in self._engine + except TypeError: + return False + + def __hash__(self): + raise TypeError("unhashable type: %r" % type(self).__name__) + + def __setitem__(self, key, value): + raise TypeError("Index does not support mutable operations") + + def __getitem__(self, key): + """ + Override numpy.ndarray's __getitem__ method to work as desired. + + This function adds lists and Series as valid boolean indexers + (ndarrays only supports ndarray with dtype=bool). + + If resulting ndim != 1, plain ndarray is returned instead of + corresponding `Index` subclass. + + """ + # There's no custom logic to be implemented in __getslice__, so it's + # not overloaded intentionally. + getitem = self._data.__getitem__ + promote = self._shallow_copy + + if np.isscalar(key): + return getitem(key) + + if isinstance(key, slice): + # This case is separated from the conditional above to avoid + # pessimization of basic indexing. 
+ return promote(getitem(key)) + + if is_bool_indexer(key): + key = np.asarray(key) + + key = _values_from_object(key) + result = getitem(key) + if not np.isscalar(result): + return promote(result) + else: + return result + + def _ensure_compat_append(self, other): + """ + prepare the append + + Returns + ------- + list of to_concat, name of result Index + """ + name = self.name + to_concat = [self] + + if isinstance(other, (list, tuple)): + to_concat = to_concat + list(other) + else: + to_concat.append(other) + + for obj in to_concat: + if (isinstance(obj, Index) and obj.name != name and + obj.name is not None): + name = None + break + + to_concat = self._ensure_compat_concat(to_concat) + to_concat = [x._values if isinstance(x, Index) else x + for x in to_concat] + return to_concat, name + + def append(self, other): + """ + Append a collection of Index options together + + Parameters + ---------- + other : Index or list/tuple of indices + + Returns + ------- + appended : Index + """ + to_concat, name = self._ensure_compat_append(other) + attribs = self._get_attributes_dict() + attribs['name'] = name + return self._shallow_copy_with_infer( + np.concatenate(to_concat), **attribs) + + @staticmethod + def _ensure_compat_concat(indexes): + from pandas.tseries.api import (DatetimeIndex, PeriodIndex, + TimedeltaIndex) + klasses = DatetimeIndex, PeriodIndex, TimedeltaIndex + + is_ts = [isinstance(idx, klasses) for idx in indexes] + + if any(is_ts) and not all(is_ts): + return [_maybe_box(idx) for idx in indexes] + + return indexes + + def take(self, indices, axis=0, allow_fill=True, fill_value=None): + """ + return a new Index of the values selected by the indexer + + For internal compatibility with numpy arrays. 
+ + # filling must always be None/nan here + # but is passed thru internally + + See also + -------- + numpy.ndarray.take + """ + + indices = com._ensure_platform_int(indices) + taken = self.values.take(indices) + return self._shallow_copy(taken) + + @cache_readonly + def _isnan(self): + """ return if each value is nan""" + if self._can_hold_na: + return isnull(self) + else: + # shouldn't reach to this condition by checking hasnans beforehand + values = np.empty(len(self), dtype=np.bool_) + values.fill(False) + return values + + @cache_readonly + def _nan_idxs(self): + if self._can_hold_na: + w, = self._isnan.nonzero() + return w + else: + return np.array([], dtype=np.int64) + + @cache_readonly + def hasnans(self): + """ return if I have any nans; enables various perf speedups """ + if self._can_hold_na: + return self._isnan.any() + else: + return False + + def _convert_for_op(self, value): + """ Convert value to be insertable to ndarray """ + return value + + def _assert_can_do_op(self, value): + """ Check value is valid for scalar op """ + if not lib.isscalar(value): + msg = "'value' must be a scalar, passed: {0}" + raise TypeError(msg.format(type(value).__name__)) + + def putmask(self, mask, value): + """ + return a new Index of the values set with the mask + + See also + -------- + numpy.ndarray.putmask + """ + values = self.values.copy() + try: + np.putmask(values, mask, self._convert_for_op(value)) + return self._shallow_copy(values) + except (ValueError, TypeError): + # coerces to object + return self.astype(object).putmask(mask, value) + + def format(self, name=False, formatter=None, **kwargs): + """ + Render a string representation of the Index + """ + header = [] + if name: + header.append(com.pprint_thing(self.name, + escape_chars=('\t', '\r', '\n')) if + self.name is not None else '') + + if formatter is not None: + return header + list(self.map(formatter)) + + return self._format_with_header(header, **kwargs) + + def _format_with_header(self, header, 
na_rep='NaN', **kwargs): + values = self.values + + from pandas.core.format import format_array + + if is_categorical_dtype(values.dtype): + values = np.array(values) + elif is_object_dtype(values.dtype): + values = lib.maybe_convert_objects(values, safe=1) + + if is_object_dtype(values.dtype): + result = [com.pprint_thing(x, escape_chars=('\t', '\r', '\n')) + for x in values] + + # could have nans + mask = isnull(values) + if mask.any(): + result = np.array(result) + result[mask] = na_rep + result = result.tolist() + + else: + result = _trim_front(format_array(values, None, justify='left')) + return header + result + + def to_native_types(self, slicer=None, **kwargs): + """ slice and dice then format """ + values = self + if slicer is not None: + values = values[slicer] + return values._format_native_types(**kwargs) + + def _format_native_types(self, na_rep='', quoting=None, **kwargs): + """ actually format my specific types """ + mask = isnull(self) + if not self.is_object() and not quoting: + values = np.asarray(self).astype(str) + else: + values = np.array(self, dtype=object, copy=True) + + values[mask] = na_rep + return values + + def equals(self, other): + """ + Determines if two Index objects contain the same elements. + """ + if self.is_(other): + return True + + if not isinstance(other, Index): + return False + + return array_equivalent(_values_from_object(self), + _values_from_object(other)) + + def identical(self, other): + """Similar to equals, but check that other comparable attributes are + also equal + """ + return (self.equals(other) and + all((getattr(self, c, None) == getattr(other, c, None) + for c in self._comparables)) and + type(self) == type(other)) + + def asof(self, label): + """ + For a sorted index, return the most recent label up to and including + the passed label. Return NaN if not found. 
+
+        See also
+        --------
+        get_loc : asof is a thin wrapper around get_loc with method='pad'
+        """
+        try:
+            loc = self.get_loc(label, method='pad')
+        except KeyError:
+            return _get_na_value(self.dtype)
+        else:
+            if isinstance(loc, slice):
+                loc = loc.indices(len(self))[-1]
+            return self[loc]
+
+    def asof_locs(self, where, mask):
+        """
+        where : array of timestamps
+        mask : array of booleans where data is not NA
+
+        """
+        locs = self.values[mask].searchsorted(where.values, side='right')
+
+        locs = np.where(locs > 0, locs - 1, 0)
+        result = np.arange(len(self))[mask].take(locs)
+
+        first = mask.argmax()
+        result[(locs == 0) & (where < self.values[first])] = -1
+
+        return result
+
+    def sort_values(self, return_indexer=False, ascending=True):
+        """
+        Return sorted copy of Index
+        """
+        _as = self.argsort()
+        if not ascending:
+            _as = _as[::-1]
+
+        sorted_index = self.take(_as)
+
+        if return_indexer:
+            return sorted_index, _as
+        else:
+            return sorted_index
+
+    def order(self, return_indexer=False, ascending=True):
+        """
+        Return sorted copy of Index
+
+        DEPRECATED: use :meth:`Index.sort_values`
+        """
+        warnings.warn("order is deprecated, use sort_values(...)",
+                      FutureWarning, stacklevel=2)
+        return self.sort_values(return_indexer=return_indexer,
+                                ascending=ascending)
+
+    def sort(self, *args, **kwargs):
+        raise TypeError("cannot sort an Index object in-place, use "
+                        "sort_values instead")
+
+    def sortlevel(self, level=None, ascending=True, sort_remaining=None):
+        """
+
+        For internal compatibility with the Index API
+
+        Sort the Index.
This is for compat with MultiIndex + + Parameters + ---------- + ascending : boolean, default True + False to sort in descending order + + level, sort_remaining are compat parameters + + Returns + ------- + sorted_index : Index + """ + return self.sort_values(return_indexer=True, ascending=ascending) + + def shift(self, periods=1, freq=None): + """ + Shift Index containing datetime objects by input number of periods and + DateOffset + + Returns + ------- + shifted : Index + """ + raise NotImplementedError("Not supported for type %s" % + type(self).__name__) + + def argsort(self, *args, **kwargs): + """ + return an ndarray indexer of the underlying data + + See also + -------- + numpy.ndarray.argsort + """ + result = self.asi8 + if result is None: + result = np.array(self) + return result.argsort(*args, **kwargs) + + def __add__(self, other): + if com.is_list_like(other): + warnings.warn("using '+' to provide set union with Indexes is " + "deprecated, use '|' or .union()", FutureWarning, + stacklevel=2) + if isinstance(other, Index): + return self.union(other) + return Index(np.array(self) + other) + + def __radd__(self, other): + if is_list_like(other): + warnings.warn("using '+' to provide set union with Indexes is " + "deprecated, use '|' or .union()", FutureWarning, + stacklevel=2) + return Index(other + np.array(self)) + + __iadd__ = __add__ + + def __sub__(self, other): + warnings.warn("using '-' to provide set differences with Indexes is " + "deprecated, use .difference()", FutureWarning, + stacklevel=2) + return self.difference(other) + + def __and__(self, other): + return self.intersection(other) + + def __or__(self, other): + return self.union(other) + + def __xor__(self, other): + return self.sym_diff(other) + + def union(self, other): + """ + Form the union of two Index objects and sorts if possible. 
+ + Parameters + ---------- + other : Index or array-like + + Returns + ------- + union : Index + + Examples + -------- + + >>> idx1 = pd.Index([1, 2, 3, 4]) + >>> idx2 = pd.Index([3, 4, 5, 6]) + >>> idx1.union(idx2) + Int64Index([1, 2, 3, 4, 5, 6], dtype='int64') + + """ + self._assert_can_do_setop(other) + other = _ensure_index(other) + + if len(other) == 0 or self.equals(other): + return self + + if len(self) == 0: + return other + + if not com.is_dtype_equal(self.dtype, other.dtype): + this = self.astype('O') + other = other.astype('O') + return this.union(other) + + if self.is_monotonic and other.is_monotonic: + try: + result = self._outer_indexer(self.values, other._values)[0] + except TypeError: + # incomparable objects + result = list(self.values) + + # worth making this faster? a very unusual case + value_set = set(self.values) + result.extend([x for x in other._values if x not in value_set]) + else: + indexer = self.get_indexer(other) + indexer, = (indexer == -1).nonzero() + + if len(indexer) > 0: + other_diff = com.take_nd(other._values, indexer, + allow_fill=False) + result = com._concat_compat((self.values, other_diff)) + + try: + self.values[0] < other_diff[0] + except TypeError as e: + warnings.warn("%s, sort order is undefined for " + "incomparable objects" % e, RuntimeWarning, + stacklevel=3) + else: + types = frozenset((self.inferred_type, + other.inferred_type)) + if not types & _unsortable_types: + result.sort() + + else: + result = self.values + + try: + result = np.sort(result) + except TypeError as e: + warnings.warn("%s, sort order is undefined for " + "incomparable objects" % e, RuntimeWarning, + stacklevel=3) + + # for subclasses + return self._wrap_union_result(other, result) + + def _wrap_union_result(self, other, result): + name = self.name if self.name == other.name else None + return self.__class__(result, name=name) + + def intersection(self, other): + """ + Form the intersection of two Index objects. 
+ + This returns a new Index with elements common to the index and `other`. + Sortedness of the result is not guaranteed. + + Parameters + ---------- + other : Index or array-like + + Returns + ------- + intersection : Index + + Examples + -------- + + >>> idx1 = pd.Index([1, 2, 3, 4]) + >>> idx2 = pd.Index([3, 4, 5, 6]) + >>> idx1.intersection(idx2) + Int64Index([3, 4], dtype='int64') + + """ + self._assert_can_do_setop(other) + other = _ensure_index(other) + + if self.equals(other): + return self + + if not com.is_dtype_equal(self.dtype, other.dtype): + this = self.astype('O') + other = other.astype('O') + return this.intersection(other) + + if self.is_monotonic and other.is_monotonic: + try: + result = self._inner_indexer(self.values, other._values)[0] + return self._wrap_union_result(other, result) + except TypeError: + pass + + try: + indexer = Index(self.values).get_indexer(other._values) + indexer = indexer.take((indexer != -1).nonzero()[0]) + except: + # duplicates + indexer = Index(self.values).get_indexer_non_unique( + other._values)[0].unique() + indexer = indexer[indexer != -1] + + taken = self.take(indexer) + if self.name != other.name: + taken.name = None + return taken + + def difference(self, other): + """ + Return a new Index with elements from the index that are not in + `other`. + + This is the sorted set difference of two Index objects. 
+
+        Parameters
+        ----------
+        other : Index or array-like
+
+        Returns
+        -------
+        difference : Index
+
+        Examples
+        --------
+
+        >>> idx1 = pd.Index([1, 2, 3, 4])
+        >>> idx2 = pd.Index([3, 4, 5, 6])
+        >>> idx1.difference(idx2)
+        Int64Index([1, 2], dtype='int64')
+
+        """
+        self._assert_can_do_setop(other)
+
+        if self.equals(other):
+            return Index([], name=self.name)
+
+        other, result_name = self._convert_can_do_setop(other)
+
+        the_diff = sorted(set(self) - set(other))
+        return Index(the_diff, name=result_name)
+
+    diff = deprecate('diff', difference)
+
+    def sym_diff(self, other, result_name=None):
+        """
+        Compute the sorted symmetric difference of two Index objects.
+
+        Parameters
+        ----------
+        other : Index or array-like
+        result_name : str
+
+        Returns
+        -------
+        sym_diff : Index
+
+        Notes
+        -----
+        ``sym_diff`` contains elements that appear in either ``idx1`` or
+        ``idx2`` but not both. Equivalent to the Index created by
+        ``(idx1 - idx2) + (idx2 - idx1)`` with duplicates dropped.
+
+        The sorting of a result containing ``NaN`` values is not guaranteed
+        across Python versions. See GitHub issue #6444.
+
+        Examples
+        --------
+        >>> idx1 = Index([1, 2, 3, 4])
+        >>> idx2 = Index([2, 3, 4, 5])
+        >>> idx1.sym_diff(idx2)
+        Int64Index([1, 5], dtype='int64')
+
+        You can also use the ``^`` operator:
+
+        >>> idx1 ^ idx2
+        Int64Index([1, 5], dtype='int64')
+        """
+        self._assert_can_do_setop(other)
+        other, result_name_update = self._convert_can_do_setop(other)
+        if result_name is None:
+            result_name = result_name_update
+
+        the_diff = sorted(set((self.difference(other)).
union(other.difference(self))))
+        attribs = self._get_attributes_dict()
+        attribs['name'] = result_name
+        if 'freq' in attribs:
+            attribs['freq'] = None
+        return self._shallow_copy_with_infer(the_diff, **attribs)
+
+    def get_loc(self, key, method=None, tolerance=None):
+        """
+        Get integer location for requested label
+
+        Parameters
+        ----------
+        key : label
+        method : {None, 'pad'/'ffill', 'backfill'/'bfill', 'nearest'}, optional
+            * default: exact matches only.
+            * pad / ffill: find the PREVIOUS index value if no exact match.
+            * backfill / bfill: use NEXT index value if no exact match
+            * nearest: use the NEAREST index value if no exact match. Tied
+              distances are broken by preferring the larger index value.
+        tolerance : optional
+            Maximum distance from index value for inexact matches. The value of
+            the index at the matching location must satisfy the equation
+            ``abs(index[loc] - key) <= tolerance``.
+
+            .. versionadded:: 0.17.0
+
+        Returns
+        -------
+        loc : int if unique index, possibly slice or mask if not
+        """
+        if method is None:
+            if tolerance is not None:
+                raise ValueError('tolerance argument only valid if using pad, '
+                                 'backfill or nearest lookups')
+            key = _values_from_object(key)
+            return self._engine.get_loc(key)
+
+        indexer = self.get_indexer([key], method=method, tolerance=tolerance)
+        if indexer.ndim > 1 or indexer.size > 1:
+            raise TypeError('get_loc requires scalar valued input')
+        loc = indexer.item()
+        if loc == -1:
+            raise KeyError(key)
+        return loc
+
+    def get_value(self, series, key):
+        """
+        Fast lookup of value from 1-dimensional ndarray. Only use this if you
+        know what you're doing
+        """
+
+        # if we have something that is Index-like, then
+        # use this, e.g.
DatetimeIndex + s = getattr(series, '_values', None) + if isinstance(s, Index) and lib.isscalar(key): + return s[key] + + s = _values_from_object(series) + k = _values_from_object(key) + + # prevent integer truncation bug in indexing + if is_float(k) and not self.is_floating(): + raise KeyError + + try: + return self._engine.get_value(s, k) + except KeyError as e1: + if len(self) > 0 and self.inferred_type in ['integer', 'boolean']: + raise + + try: + return tslib.get_value_box(s, key) + except IndexError: + raise + except TypeError: + # generator/iterator-like + if is_iterator(key): + raise InvalidIndexError(key) + else: + raise e1 + except Exception: # pragma: no cover + raise e1 + except TypeError: + # python 3 + if np.isscalar(key): # pragma: no cover + raise IndexError(key) + raise InvalidIndexError(key) + + def set_value(self, arr, key, value): + """ + Fast lookup of value from 1-dimensional ndarray. Only use this if you + know what you're doing + """ + self._engine.set_value(_values_from_object(arr), + _values_from_object(key), value) + + def get_level_values(self, level): + """ + Return vector of label values for requested level, equal to the length + of the index + + Parameters + ---------- + level : int + + Returns + ------- + values : ndarray + """ + # checks that level number is actually just 1 + self._validate_index_level(level) + return self + + def get_indexer(self, target, method=None, limit=None, tolerance=None): + """ + Compute indexer and mask for new index given the current index. The + indexer should be then used as an input to ndarray.take to align the + current data to the new index. + + Parameters + ---------- + target : Index + method : {None, 'pad'/'ffill', 'backfill'/'bfill', 'nearest'}, optional + * default: exact matches only. + * pad / ffill: find the PREVIOUS index value if no exact match. + * backfill / bfill: use NEXT index value if no exact match + * nearest: use the NEAREST index value if no exact match. 
Tied
+              distances are broken by preferring the larger index value.
+        limit : int, optional
+            Maximum number of consecutive labels in ``target`` to match for
+            inexact matches.
+        tolerance : optional
+            Maximum distance between original and new labels for inexact
+            matches. The values of the index at the matching locations must
+            satisfy the equation ``abs(index[indexer] - target) <= tolerance``.
+
+            .. versionadded:: 0.17.0
+
+        Examples
+        --------
+        >>> indexer = index.get_indexer(new_index)
+        >>> new_values = cur_values.take(indexer)
+
+        Returns
+        -------
+        indexer : ndarray of int
+            Integers from 0 to n - 1 indicating that the index at these
+            positions matches the corresponding target values. Missing values
+            in the target are marked by -1.
+        """
+        method = _clean_reindex_fill_method(method)
+        target = _ensure_index(target)
+        if tolerance is not None:
+            tolerance = self._convert_tolerance(tolerance)
+
+        pself, ptarget = self._possibly_promote(target)
+        if pself is not self or ptarget is not target:
+            return pself.get_indexer(ptarget, method=method, limit=limit,
+                                     tolerance=tolerance)
+
+        if not com.is_dtype_equal(self.dtype, target.dtype):
+            this = self.astype(object)
+            target = target.astype(object)
+            return this.get_indexer(target, method=method, limit=limit,
+                                    tolerance=tolerance)
+
+        if not self.is_unique:
+            raise InvalidIndexError('Reindexing only valid with uniquely'
+                                    ' valued Index objects')
+
+        if method == 'pad' or method == 'backfill':
+            indexer = self._get_fill_indexer(target, method, limit, tolerance)
+        elif method == 'nearest':
+            indexer = self._get_nearest_indexer(target, limit, tolerance)
+        else:
+            if tolerance is not None:
+                raise ValueError('tolerance argument only valid if doing pad, '
+                                 'backfill or nearest reindexing')
+            if limit is not None:
+                raise ValueError('limit argument only valid if doing pad, '
+                                 'backfill or nearest reindexing')
+
+            indexer = self._engine.get_indexer(target._values)
+
+        return com._ensure_platform_int(indexer)
+
+
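The lookup semantics documented in `get_loc` and `get_indexer` above can be exercised from the public API. This is a usage sketch, not part of the patch; the index values are invented for illustration:

```python
import pandas as pd

# A small monotonic integer Index so that 'pad'/'nearest' lookups apply.
idx = pd.Index([10, 20, 30, 40])

# Exact lookup: integer position of a label.
loc = idx.get_loc(30)                                # 2

# get_indexer aligns target labels; misses are marked -1.
exact = idx.get_indexer([20, 25])                    # [1, -1]

# 'pad' falls back to the previous label; 'nearest' takes the closest,
# and tied distances prefer the larger index value.
padded = idx.get_indexer([25], method='pad')         # [1]
nearest = idx.get_indexer([25], method='nearest')    # [2]

# tolerance (new in 0.17.0) bounds how far an inexact match may be.
bounded = idx.get_indexer([26], method='nearest', tolerance=3)  # [-1]
```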
def _convert_tolerance(self, tolerance): + # override this method on subclasses + return tolerance + + def _get_fill_indexer(self, target, method, limit=None, tolerance=None): + if self.is_monotonic_increasing and target.is_monotonic_increasing: + method = (self._engine.get_pad_indexer if method == 'pad' else + self._engine.get_backfill_indexer) + indexer = method(target._values, limit) + else: + indexer = self._get_fill_indexer_searchsorted(target, method, + limit) + if tolerance is not None: + indexer = self._filter_indexer_tolerance(target._values, indexer, + tolerance) + return indexer + + def _get_fill_indexer_searchsorted(self, target, method, limit=None): + """ + Fallback pad/backfill get_indexer that works for monotonic decreasing + indexes and non-monotonic targets + """ + if limit is not None: + raise ValueError('limit argument for %r method only well-defined ' + 'if index and target are monotonic' % method) + + side = 'left' if method == 'pad' else 'right' + target = np.asarray(target) + + # find exact matches first (this simplifies the algorithm) + indexer = self.get_indexer(target) + nonexact = (indexer == -1) + indexer[nonexact] = self._searchsorted_monotonic(target[nonexact], + side) + if side == 'left': + # searchsorted returns "indices into a sorted array such that, + # if the corresponding elements in v were inserted before the + # indices, the order of a would be preserved". + # Thus, we need to subtract 1 to find values to the left. + indexer[nonexact] -= 1 + # This also mapped not found values (values of 0 from + # np.searchsorted) to -1, which conveniently is also our + # sentinel for missing values + else: + # Mark indices to the right of the largest value as not found + indexer[indexer == len(self)] = -1 + return indexer + + def _get_nearest_indexer(self, target, limit, tolerance): + """ + Get the indexer for the nearest index labels; requires an index with + values that can be subtracted from each other (e.g., not strings or + tuples). 
+ """ + left_indexer = self.get_indexer(target, 'pad', limit=limit) + right_indexer = self.get_indexer(target, 'backfill', limit=limit) + + target = np.asarray(target) + left_distances = abs(self.values[left_indexer] - target) + right_distances = abs(self.values[right_indexer] - target) + + op = operator.lt if self.is_monotonic_increasing else operator.le + indexer = np.where(op(left_distances, right_distances) | + (right_indexer == -1), left_indexer, right_indexer) + if tolerance is not None: + indexer = self._filter_indexer_tolerance(target, indexer, + tolerance) + return indexer + + def _filter_indexer_tolerance(self, target, indexer, tolerance): + distance = abs(self.values[indexer] - target) + indexer = np.where(distance <= tolerance, indexer, -1) + return indexer + + def get_indexer_non_unique(self, target): + """ return an indexer suitable for taking from a non unique index + return the labels in the same order as the target, and + return a missing indexer into the target (missing are marked as -1 + in the indexer); target must be an iterable """ + target = _ensure_index(target) + pself, ptarget = self._possibly_promote(target) + if pself is not self or ptarget is not target: + return pself.get_indexer_non_unique(ptarget) + + if self.is_all_dates: + self = Index(self.asi8) + tgt_values = target.asi8 + else: + tgt_values = target._values + + indexer, missing = self._engine.get_indexer_non_unique(tgt_values) + return Index(indexer), missing + + def get_indexer_for(self, target, **kwargs): + """ guaranteed return of an indexer even when non-unique """ + if self.is_unique: + return self.get_indexer(target, **kwargs) + indexer, _ = self.get_indexer_non_unique(target, **kwargs) + return indexer + + def _possibly_promote(self, other): + # A hack, but it works + from pandas.tseries.index import DatetimeIndex + if self.inferred_type == 'date' and isinstance(other, DatetimeIndex): + return DatetimeIndex(self), other + elif self.inferred_type == 'boolean': + if not 
is_object_dtype(self.dtype): + return self.astype('object'), other.astype('object') + return self, other + + def groupby(self, to_groupby): + """ + Group the index labels by a given array of values. + + Parameters + ---------- + to_groupby : array + Values used to determine the groups. + + Returns + ------- + groups : dict + {group name -> group labels} + + """ + return self._groupby(self.values, _values_from_object(to_groupby)) + + def map(self, mapper): + return self._arrmap(self.values, mapper) + + def isin(self, values, level=None): + """ + Compute boolean array of whether each index value is found in the + passed set of values. + + Parameters + ---------- + values : set or sequence of values + Sought values. + level : str or int, optional + Name or position of the index level to use (if the index is a + MultiIndex). + + Notes + ----- + If `level` is specified: + + - if it is the name of one *and only one* index level, use that level; + - otherwise it should be a number indicating level position. 
+ + Returns + ------- + is_contained : ndarray (boolean dtype) + + """ + if level is not None: + self._validate_index_level(level) + return algorithms.isin(np.array(self), values) + + def _can_reindex(self, indexer): + """ + *this is an internal non-public method* + + Check if we are allowing reindexing with this particular indexer + + Parameters + ---------- + indexer : an integer indexer + + Raises + ------ + ValueError if its a duplicate axis + """ + + # trying to reindex on an axis with duplicates + if not self.is_unique and len(indexer): + raise ValueError("cannot reindex from a duplicate axis") + + def reindex(self, target, method=None, level=None, limit=None, + tolerance=None): + """ + Create index with target's values (move/add/delete values as necessary) + + Parameters + ---------- + target : an iterable + + Returns + ------- + new_index : pd.Index + Resulting index + indexer : np.ndarray or None + Indices of output values in original index + + """ + # GH6552: preserve names when reindexing to non-named target + # (i.e. neither Index nor Series). + preserve_names = not hasattr(target, 'name') + + # GH7774: preserve dtype/tz if target is empty and not an Index. 
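Reviewer note: as a quick illustration of the `isin` contract documented above (per-position membership, returning a boolean ndarray), a minimal sketch against the public API; this example is not part of the diff itself.

```python
import pandas as pd

# isin tests each index value for membership in the passed set and
# returns a boolean ndarray aligned with the index positions.
idx = pd.Index(['a', 'b', 'c', 'd'])
mask = idx.isin({'b', 'd'})
print(mask.tolist())  # [False, True, False, True]

# The mask can then drive positional selection:
print(idx[mask].tolist())  # ['b', 'd']
```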
+ target = _ensure_has_len(target) # target may be an iterator + + if not isinstance(target, Index) and len(target) == 0: + attrs = self._get_attributes_dict() + attrs.pop('freq', None) # don't preserve freq + target = self._simple_new(None, dtype=self.dtype, **attrs) + else: + target = _ensure_index(target) + + if level is not None: + if method is not None: + raise TypeError('Fill method not supported if level passed') + _, indexer, _ = self._join_level(target, level, how='right', + return_indexers=True) + else: + if self.equals(target): + indexer = None + else: + if self.is_unique: + indexer = self.get_indexer(target, method=method, + limit=limit, + tolerance=tolerance) + else: + if method is not None or limit is not None: + raise ValueError("cannot reindex a non-unique index " + "with a method or limit") + indexer, missing = self.get_indexer_non_unique(target) + + if preserve_names and target.nlevels == 1 and target.name != self.name: + target = target.copy() + target.name = self.name + + return target, indexer + + def _reindex_non_unique(self, target): + """ + *this is an internal non-public method* + + Create a new index with target's values (move/add/delete values as + necessary) use with non-unique Index and a possibly non-unique target + + Parameters + ---------- + target : an iterable + + Returns + ------- + new_index : pd.Index + Resulting index + indexer : np.ndarray or None + Indices of output values in original index + + """ + + target = _ensure_index(target) + indexer, missing = self.get_indexer_non_unique(target) + check = indexer != -1 + new_labels = self.take(indexer[check]) + new_indexer = None + + if len(missing): + l = np.arange(len(indexer)) + + missing = com._ensure_platform_int(missing) + missing_labels = target.take(missing) + missing_indexer = _ensure_int64(l[~check]) + cur_labels = self.take(indexer[check])._values + cur_indexer = _ensure_int64(l[check]) + + new_labels = np.empty(tuple([len(indexer)]), dtype=object) + 
new_labels[cur_indexer] = cur_labels + new_labels[missing_indexer] = missing_labels + + # a unique indexer + if target.is_unique: + + # see GH5553, make sure we use the right indexer + new_indexer = np.arange(len(indexer)) + new_indexer[cur_indexer] = np.arange(len(cur_labels)) + new_indexer[missing_indexer] = -1 + + # we have a non_unique selector, need to use the original + # indexer here + else: + + # need to retake to have the same size as the indexer + indexer = indexer._values + indexer[~check] = 0 + + # reset the new indexer to account for the new size + new_indexer = np.arange(len(self.take(indexer))) + new_indexer[~check] = -1 + + new_index = self._shallow_copy_with_infer(new_labels, freq=None) + return new_index, indexer, new_indexer + + def join(self, other, how='left', level=None, return_indexers=False): + """ + *this is an internal non-public method* + + Compute join_index and indexers to conform data + structures to the new index. + + Parameters + ---------- + other : Index + how : {'left', 'right', 'inner', 'outer'} + level : int or level name, default None + return_indexers : boolean, default False + + Returns + ------- + join_index, (left_indexer, right_indexer) + """ + from .multi import MultiIndex + self_is_mi = isinstance(self, MultiIndex) + other_is_mi = isinstance(other, MultiIndex) + + # try to figure out the join level + # GH3662 + if level is None and (self_is_mi or other_is_mi): + + # have the same levels/names so a simple join + if self.names == other.names: + pass + else: + return self._join_multi(other, how=how, + return_indexers=return_indexers) + + # join on the level + if level is not None and (self_is_mi or other_is_mi): + return self._join_level(other, level, how=how, + return_indexers=return_indexers) + + other = _ensure_index(other) + + if len(other) == 0 and how in ('left', 'outer'): + join_index = self._shallow_copy() + if return_indexers: + rindexer = np.repeat(-1, len(join_index)) + return join_index, None, rindexer + else: + 
return join_index + + if len(self) == 0 and how in ('right', 'outer'): + join_index = other._shallow_copy() + if return_indexers: + lindexer = np.repeat(-1, len(join_index)) + return join_index, lindexer, None + else: + return join_index + + if self._join_precedence < other._join_precedence: + how = {'right': 'left', 'left': 'right'}.get(how, how) + result = other.join(self, how=how, level=level, + return_indexers=return_indexers) + if return_indexers: + x, y, z = result + result = x, z, y + return result + + if not com.is_dtype_equal(self.dtype, other.dtype): + this = self.astype('O') + other = other.astype('O') + return this.join(other, how=how, return_indexers=return_indexers) + + _validate_join_method(how) + + if not self.is_unique and not other.is_unique: + return self._join_non_unique(other, how=how, + return_indexers=return_indexers) + elif not self.is_unique or not other.is_unique: + if self.is_monotonic and other.is_monotonic: + return self._join_monotonic(other, how=how, + return_indexers=return_indexers) + else: + return self._join_non_unique(other, how=how, + return_indexers=return_indexers) + elif self.is_monotonic and other.is_monotonic: + try: + return self._join_monotonic(other, how=how, + return_indexers=return_indexers) + except TypeError: + pass + + if how == 'left': + join_index = self + elif how == 'right': + join_index = other + elif how == 'inner': + join_index = self.intersection(other) + elif how == 'outer': + join_index = self.union(other) + + if return_indexers: + if join_index is self: + lindexer = None + else: + lindexer = self.get_indexer(join_index) + if join_index is other: + rindexer = None + else: + rindexer = other.get_indexer(join_index) + return join_index, lindexer, rindexer + else: + return join_index + + def _join_multi(self, other, how, return_indexers=True): + from .multi import MultiIndex + self_is_mi = isinstance(self, MultiIndex) + other_is_mi = isinstance(other, MultiIndex) + + # figure out join names + self_names = [n 
for n in self.names if n is not None] + other_names = [n for n in other.names if n is not None] + overlap = list(set(self_names) & set(other_names)) + + # need at least 1 in common, but not more than 1 + if not len(overlap): + raise ValueError("cannot join with no level specified and no " + "overlapping names") + if len(overlap) > 1: + raise NotImplementedError("merging with more than one level " + "overlap on a multi-index is not " + "implemented") + jl = overlap[0] + + # make the indices into mi's that match + if not (self_is_mi and other_is_mi): + + flip_order = False + if self_is_mi: + self, other = other, self + flip_order = True + # flip if join method is right or left + how = {'right': 'left', 'left': 'right'}.get(how, how) + + level = other.names.index(jl) + result = self._join_level(other, level, how=how, + return_indexers=return_indexers) + + if flip_order: + if isinstance(result, tuple): + return result[0], result[2], result[1] + return result + + # 2 multi-indexes + raise NotImplementedError("merging with both multi-indexes is not " + "implemented") + + def _join_non_unique(self, other, how='left', return_indexers=False): + from pandas.tools.merge import _get_join_indexers + + left_idx, right_idx = _get_join_indexers([self.values], + [other._values], how=how, + sort=True) + + left_idx = com._ensure_platform_int(left_idx) + right_idx = com._ensure_platform_int(right_idx) + + join_index = self.values.take(left_idx) + mask = left_idx == -1 + np.putmask(join_index, mask, other._values.take(right_idx)) + + join_index = self._wrap_joined_index(join_index, other) + + if return_indexers: + return join_index, left_idx, right_idx + else: + return join_index + + def _join_level(self, other, level, how='left', return_indexers=False, + keep_order=True): + """ + The join method *only* affects the level of the resulting + MultiIndex. Otherwise it just exactly aligns the Index data to the + labels of the level in the MultiIndex. 
If `keep_order` == True, the
+        order of the data indexed by the MultiIndex will not be changed;
+        otherwise, it will tie out with `other`.
+        """
+        from pandas.algos import groupsort_indexer
+        from .multi import MultiIndex
+
+        def _get_leaf_sorter(labels):
+            '''
+            returns sorter for the innermost level while preserving the
+            order of higher levels
+            '''
+            if labels[0].size == 0:
+                return np.empty(0, dtype='int64')
+
+            if len(labels) == 1:
+                lab = _ensure_int64(labels[0])
+                sorter, _ = groupsort_indexer(lab, 1 + lab.max())
+                return sorter
+
+            # find indexers of beginning of each set of
+            # same-key labels w.r.t all but last level
+            tic = labels[0][:-1] != labels[0][1:]
+            for lab in labels[1:-1]:
+                tic |= lab[:-1] != lab[1:]
+
+            starts = np.hstack(([True], tic, [True])).nonzero()[0]
+            lab = _ensure_int64(labels[-1])
+            return lib.get_level_sorter(lab, _ensure_int64(starts))
+
+        if isinstance(self, MultiIndex) and isinstance(other, MultiIndex):
+            raise TypeError('Join on level between two MultiIndex objects '
+                            'is ambiguous')
+
+        left, right = self, other
+
+        flip_order = not isinstance(self, MultiIndex)
+        if flip_order:
+            left, right = right, left
+            how = {'right': 'left', 'left': 'right'}.get(how, how)
+
+        level = left._get_level_number(level)
+        old_level = left.levels[level]
+
+        if not right.is_unique:
+            raise NotImplementedError('Index._join_level on non-unique index '
+                                      'is not implemented')
+
+        new_level, left_lev_indexer, right_lev_indexer = \
+            old_level.join(right, how=how, return_indexers=True)
+
+        if left_lev_indexer is None:
+            if keep_order or len(left) == 0:
+                left_indexer = None
+                join_index = left
+            else:  # sort the leaves
+                left_indexer = _get_leaf_sorter(left.labels[:level + 1])
+                join_index = left[left_indexer]
+
+        else:
+            left_lev_indexer = _ensure_int64(left_lev_indexer)
+            rev_indexer = lib.get_reverse_indexer(left_lev_indexer,
+                                                  len(old_level))
+
+            new_lev_labels = com.take_nd(rev_indexer, left.labels[level],
+                                         allow_fill=False)
+
+            new_labels =
list(left.labels) + new_labels[level] = new_lev_labels + + new_levels = list(left.levels) + new_levels[level] = new_level + + if keep_order: # just drop missing values. o.w. keep order + left_indexer = np.arange(len(left)) + mask = new_lev_labels != -1 + if not mask.all(): + new_labels = [lab[mask] for lab in new_labels] + left_indexer = left_indexer[mask] + + else: # tie out the order with other + if level == 0: # outer most level, take the fast route + ngroups = 1 + new_lev_labels.max() + left_indexer, counts = groupsort_indexer(new_lev_labels, + ngroups) + # missing values are placed first; drop them! + left_indexer = left_indexer[counts[0]:] + new_labels = [lab[left_indexer] for lab in new_labels] + + else: # sort the leaves + mask = new_lev_labels != -1 + mask_all = mask.all() + if not mask_all: + new_labels = [lab[mask] for lab in new_labels] + + left_indexer = _get_leaf_sorter(new_labels[:level + 1]) + new_labels = [lab[left_indexer] for lab in new_labels] + + # left_indexers are w.r.t masked frame. + # reverse to original frame! 
+ if not mask_all: + left_indexer = mask.nonzero()[0][left_indexer] + + join_index = MultiIndex(levels=new_levels, labels=new_labels, + names=left.names, verify_integrity=False) + + if right_lev_indexer is not None: + right_indexer = com.take_nd(right_lev_indexer, + join_index.labels[level], + allow_fill=False) + else: + right_indexer = join_index.labels[level] + + if flip_order: + left_indexer, right_indexer = right_indexer, left_indexer + + if return_indexers: + return join_index, left_indexer, right_indexer + else: + return join_index + + def _join_monotonic(self, other, how='left', return_indexers=False): + if self.equals(other): + ret_index = other if how == 'right' else self + if return_indexers: + return ret_index, None, None + else: + return ret_index + + sv = self.values + ov = other._values + + if self.is_unique and other.is_unique: + # We can perform much better than the general case + if how == 'left': + join_index = self + lidx = None + ridx = self._left_indexer_unique(sv, ov) + elif how == 'right': + join_index = other + lidx = self._left_indexer_unique(ov, sv) + ridx = None + elif how == 'inner': + join_index, lidx, ridx = self._inner_indexer(sv, ov) + join_index = self._wrap_joined_index(join_index, other) + elif how == 'outer': + join_index, lidx, ridx = self._outer_indexer(sv, ov) + join_index = self._wrap_joined_index(join_index, other) + else: + if how == 'left': + join_index, lidx, ridx = self._left_indexer(sv, ov) + elif how == 'right': + join_index, ridx, lidx = self._left_indexer(ov, sv) + elif how == 'inner': + join_index, lidx, ridx = self._inner_indexer(sv, ov) + elif how == 'outer': + join_index, lidx, ridx = self._outer_indexer(sv, ov) + join_index = self._wrap_joined_index(join_index, other) + + if return_indexers: + return join_index, lidx, ridx + else: + return join_index + + def _wrap_joined_index(self, joined, other): + name = self.name if self.name == other.name else None + return Index(joined, name=name) + + def 
slice_indexer(self, start=None, end=None, step=None, kind=None): + """ + For an ordered Index, compute the slice indexer for input labels and + step + + Parameters + ---------- + start : label, default None + If None, defaults to the beginning + end : label, default None + If None, defaults to the end + step : int, default None + kind : string, default None + + Returns + ------- + indexer : ndarray or slice + + Notes + ----- + This function assumes that the data is sorted, so use at your own peril + """ + start_slice, end_slice = self.slice_locs(start, end, step=step, + kind=kind) + + # return a slice + if not lib.isscalar(start_slice): + raise AssertionError("Start slice bound is non-scalar") + if not lib.isscalar(end_slice): + raise AssertionError("End slice bound is non-scalar") + + return slice(start_slice, end_slice, step) + + def _maybe_cast_slice_bound(self, label, side, kind): + """ + This function should be overloaded in subclasses that allow non-trivial + casting on label-slice bounds, e.g. datetime-like indices allowing + strings containing formatted datetimes. + + Parameters + ---------- + label : object + side : {'left', 'right'} + kind : string / None + + Returns + ------- + label : object + + Notes + ----- + Value of `side` parameter should be validated in caller. + + """ + + # We are a plain index here (sub-class override this method if they + # wish to have special treatment for floats/ints, e.g. 
Float64Index and
+        # datetimelike Indexes
+        # reject them
+        if is_float(label):
+            self._invalid_indexer('slice', label)
+
+        # we are trying to find integer bounds on a non-integer based index
+        # this is rejected (generally .loc gets you here)
+        elif is_integer(label):
+            self._invalid_indexer('slice', label)
+
+        return label
+
+    def _searchsorted_monotonic(self, label, side='left'):
+        if self.is_monotonic_increasing:
+            return self.searchsorted(label, side=side)
+        elif self.is_monotonic_decreasing:
+            # np.searchsorted expects ascending sort order, have to reverse
+            # everything for it to work (element ordering, search side and
+            # resulting value).
+            pos = self[::-1].searchsorted(label, side='right' if side == 'left'
+                                          else 'left')
+            return len(self) - pos
+
+        raise ValueError('index must be monotonic increasing or decreasing')
+
+    def get_slice_bound(self, label, side, kind):
+        """
+        Calculate slice bound that corresponds to given label.
+
+        Returns leftmost (one-past-the-rightmost if ``side=='right'``) position
+        of given label.
+
+        Parameters
+        ----------
+        label : object
+        side : {'left', 'right'}
+        kind : string / None, the type of indexer
+
+        """
+        if side not in ('left', 'right'):
+            raise ValueError("Invalid value for side kwarg,"
+                             " must be either 'left' or 'right': %s" %
+                             (side, ))
+
+        original_label = label
+
+        # For datetime indices label may be a string that has to be converted
+        # to datetime boundary according to its resolution.
+        label = self._maybe_cast_slice_bound(label, side, kind)
+
+        # we need to look up the label
+        try:
+            slc = self.get_loc(label)
+        except KeyError as err:
+            try:
+                return self._searchsorted_monotonic(label, side)
+            except ValueError:
+                # raise the original KeyError
+                raise err
+
+        if isinstance(slc, np.ndarray):
+            # get_loc may return a boolean array or an array of indices, which
+            # is OK as long as they are representable by a slice.
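Reviewer note: a small sanity check of the slice-bound machinery (`slice_indexer`/`slice_locs`/`get_slice_bound`) on a monotonic index with duplicate labels, using only the public API; not part of the diff itself.

```python
import pandas as pd

# On a sorted index, label-based slice bounds are inclusive on both
# ends: slice_locs returns the leftmost position of the start label
# and one past the rightmost position of the end label.
idx = pd.Index(list('aabcc'))
print(idx.slice_locs('b', 'c'))  # (2, 5)
print(idx[idx.slice_indexer('b', 'c')].tolist())  # ['b', 'c', 'c']
```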
+ if is_bool_dtype(slc): + slc = lib.maybe_booleans_to_slice(slc.view('u1')) + else: + slc = lib.maybe_indices_to_slice(slc.astype('i8'), len(self)) + if isinstance(slc, np.ndarray): + raise KeyError("Cannot get %s slice bound for non-unique " + "label: %r" % (side, original_label)) + + if isinstance(slc, slice): + if side == 'left': + return slc.start + else: + return slc.stop + else: + if side == 'right': + return slc + 1 + else: + return slc + + def slice_locs(self, start=None, end=None, step=None, kind=None): + """ + Compute slice locations for input labels. + + Parameters + ---------- + start : label, default None + If None, defaults to the beginning + end : label, default None + If None, defaults to the end + step : int, defaults None + If None, defaults to 1 + kind : string, defaults None + + Returns + ------- + start, end : int + + """ + inc = (step is None or step >= 0) + + if not inc: + # If it's a reverse slice, temporarily swap bounds. + start, end = end, start + + start_slice = None + if start is not None: + start_slice = self.get_slice_bound(start, 'left', kind) + if start_slice is None: + start_slice = 0 + + end_slice = None + if end is not None: + end_slice = self.get_slice_bound(end, 'right', kind) + if end_slice is None: + end_slice = len(self) + + if not inc: + # Bounds at this moment are swapped, swap them back and shift by 1. + # + # slice_locs('B', 'A', step=-1): s='B', e='A' + # + # s='A' e='B' + # AFTER SWAP: | | + # v ------------------> V + # ----------------------------------- + # | | |A|A|A|A| | | | | |B|B| | | | | + # ----------------------------------- + # ^ <------------------ ^ + # SHOULD BE: | | + # end=s-1 start=e-1 + # + end_slice, start_slice = start_slice - 1, end_slice - 1 + + # i == -1 triggers ``len(self) + i`` selection that points to the + # last element, not before-the-first one, subtracting len(self) + # compensates that. 
+ if end_slice == -1: + end_slice -= len(self) + if start_slice == -1: + start_slice -= len(self) + + return start_slice, end_slice + + def delete(self, loc): + """ + Make new Index with passed location(-s) deleted + + Returns + ------- + new_index : Index + """ + return self._shallow_copy(np.delete(self._data, loc)) + + def insert(self, loc, item): + """ + Make new Index inserting new item at location. Follows + Python list.append semantics for negative values + + Parameters + ---------- + loc : int + item : object + + Returns + ------- + new_index : Index + """ + _self = np.asarray(self) + item = self._coerce_scalar_to_index(item)._values + + idx = np.concatenate((_self[:loc], item, _self[loc:])) + return self._shallow_copy_with_infer(idx) + + def drop(self, labels, errors='raise'): + """ + Make new Index with passed list of labels deleted + + Parameters + ---------- + labels : array-like + errors : {'ignore', 'raise'}, default 'raise' + If 'ignore', suppress error and existing labels are dropped. + + Returns + ------- + dropped : Index + """ + labels = com._index_labels_to_array(labels) + indexer = self.get_indexer(labels) + mask = indexer == -1 + if mask.any(): + if errors != 'ignore': + raise ValueError('labels %s not contained in axis' % + labels[mask]) + indexer = indexer[~mask] + return self.delete(indexer) + + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) + @Appender(base._shared_docs['drop_duplicates'] % _index_doc_kwargs) + def drop_duplicates(self, keep='first'): + return super(Index, self).drop_duplicates(keep=keep) + + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) + @Appender(base._shared_docs['duplicated'] % _index_doc_kwargs) + def duplicated(self, keep='first'): + return super(Index, self).duplicated(keep=keep) + + _index_shared_docs['fillna'] = """ + Fill NA/NaN values with the specified value + + Parameters + ---------- + value : scalar + Scalar value to use to fill holes (e.g. 
0).
+        This value cannot be a list-like.
+    downcast : dict, default is None
+        a dict of item->dtype of what to downcast if possible,
+        or the string 'infer' which will try to downcast to an appropriate
+        equal type (e.g. float64 to int64 if possible)
+
+    Returns
+    -------
+    filled : Index
+    """
+
+    @Appender(_index_shared_docs['fillna'])
+    def fillna(self, value=None, downcast=None):
+        self._assert_can_do_op(value)
+        if self.hasnans:
+            result = self.putmask(self._isnan, value)
+            if downcast is None:
+                # no need to care metadata other than name
+                # because it can't have freq if it has NaTs
+                return Index(result, name=self.name)
+        return self._shallow_copy()
+
+    def _evaluate_with_timedelta_like(self, other, op, opstr):
+        raise TypeError("can only perform ops with timedelta like values")
+
+    def _evaluate_with_datetime_like(self, other, op, opstr):
+        raise TypeError("can only perform ops with datetime like values")
+
+    @classmethod
+    def _add_comparison_methods(cls):
+        """ add in comparison methods """
+
+        def _make_compare(op):
+            def _evaluate_compare(self, other):
+                if isinstance(other, (np.ndarray, Index, ABCSeries)):
+                    if other.ndim > 0 and len(self) != len(other):
+                        raise ValueError('Lengths must match to compare')
+                func = getattr(self.values, op)
+                result = func(np.asarray(other))
+
+                # technically we could support bool dtyped Index
+                # for now just return the indexing array directly
+                if is_bool_dtype(result):
+                    return result
+                try:
+                    return Index(result)
+                except TypeError:
+                    return result
+
+            return _evaluate_compare
+
+        cls.__eq__ = _make_compare('__eq__')
+        cls.__ne__ = _make_compare('__ne__')
+        cls.__lt__ = _make_compare('__lt__')
+        cls.__gt__ = _make_compare('__gt__')
+        cls.__le__ = _make_compare('__le__')
+        cls.__ge__ = _make_compare('__ge__')
+
+    @classmethod
+    def _add_numericlike_set_methods_disabled(cls):
+        """ add in the numeric set-like methods to disable """
+
+        def _make_invalid_op(name):
+            def invalid_op(self, other=None):
+                raise TypeError("cannot
perform {name} with this index type: " + "{typ}".format(name=name, typ=type(self))) + + invalid_op.__name__ = name + return invalid_op + + cls.__add__ = cls.__radd__ = __iadd__ = _make_invalid_op('__add__') # noqa + cls.__sub__ = __isub__ = _make_invalid_op('__sub__') # noqa + + @classmethod + def _add_numeric_methods_disabled(cls): + """ add in numeric methods to disable """ + + def _make_invalid_op(name): + def invalid_op(self, other=None): + raise TypeError("cannot perform {name} with this index type: " + "{typ}".format(name=name, typ=type(self))) + + invalid_op.__name__ = name + return invalid_op + + cls.__pow__ = cls.__rpow__ = _make_invalid_op('__pow__') + cls.__mul__ = cls.__rmul__ = _make_invalid_op('__mul__') + cls.__floordiv__ = cls.__rfloordiv__ = _make_invalid_op('__floordiv__') + cls.__truediv__ = cls.__rtruediv__ = _make_invalid_op('__truediv__') + if not compat.PY3: + cls.__div__ = cls.__rdiv__ = _make_invalid_op('__div__') + cls.__neg__ = _make_invalid_op('__neg__') + cls.__pos__ = _make_invalid_op('__pos__') + cls.__abs__ = _make_invalid_op('__abs__') + cls.__inv__ = _make_invalid_op('__inv__') + + def _maybe_update_attributes(self, attrs): + """ Update Index attributes (e.g. freq) depending on op """ + return attrs + + def _validate_for_numeric_unaryop(self, op, opstr): + """ validate if we can perform a numeric unary operation """ + + if not self._is_numeric_dtype: + raise TypeError("cannot evaluate a numeric op " + "{opstr} for type: {typ}".format( + opstr=opstr, + typ=type(self)) + ) + + def _validate_for_numeric_binop(self, other, op, opstr): + """ + return valid other, evaluate or raise TypeError + if we are not of the appropriate type + + internal method called by ops + """ + from pandas.tseries.offsets import DateOffset + + # if we are an inheritor of numeric, + # but not actually numeric (e.g. 
DatetimeIndex/PeriodIndex)
+        if not self._is_numeric_dtype:
+            raise TypeError("cannot evaluate a numeric op {opstr} "
+                            "for type: {typ}".format(
+                                opstr=opstr,
+                                typ=type(self))
+                            )
+
+        if isinstance(other, Index):
+            if not other._is_numeric_dtype:
+                raise TypeError("cannot evaluate a numeric op "
+                                "{opstr} with type: {typ}".format(
+                                    opstr=opstr,
+                                    typ=type(other))
+                                )
+        elif isinstance(other, np.ndarray) and not other.ndim:
+            other = other.item()
+
+        if isinstance(other, (Index, ABCSeries, np.ndarray)):
+            if len(self) != len(other):
+                raise ValueError("cannot evaluate a numeric op with "
+                                 "unequal lengths")
+            other = _values_from_object(other)
+            if other.dtype.kind not in ['f', 'i']:
+                raise TypeError("cannot evaluate a numeric op "
+                                "with a non-numeric dtype")
+        elif isinstance(other, (DateOffset, np.timedelta64,
+                                Timedelta, datetime.timedelta)):
+            # higher up to handle
+            pass
+        elif isinstance(other, (Timestamp, np.datetime64)):
+            # higher up to handle
+            pass
+        else:
+            if not (is_float(other) or is_integer(other)):
+                raise TypeError("can only perform ops with scalar values")
+
+        return other
+
+    @classmethod
+    def _add_numeric_methods_binary(cls):
+        """ add in numeric methods """
+
+        def _make_evaluate_binop(op, opstr, reversed=False):
+            def _evaluate_numeric_binop(self, other):
+
+                from pandas.tseries.offsets import DateOffset
+                other = self._validate_for_numeric_binop(other, op, opstr)
+
+                # handle time-based others
+                if isinstance(other, (DateOffset, np.timedelta64,
+                                      Timedelta, datetime.timedelta)):
+                    return self._evaluate_with_timedelta_like(other, op, opstr)
+                elif isinstance(other, (Timestamp, np.datetime64)):
+                    return self._evaluate_with_datetime_like(other, op, opstr)
+
+                # if we are a reversed non-commutative op
+                values = self.values
+                if reversed:
+                    values, other = other, values
+
+                attrs = self._get_attributes_dict()
+                attrs = self._maybe_update_attributes(attrs)
+                return Index(op(values, other), **attrs)
+
+            return _evaluate_numeric_binop
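Reviewer note: the `reversed` flag above exists so that reflected ops such as `__rsub__` swap operands before evaluation; a quick check through the public arithmetic API, not part of the diff itself.

```python
import pandas as pd

idx = pd.Index([1, 2, 3])
# __sub__ and __rsub__ both produce a new Index; for the reflected
# (non-commutative) case the operands are swapped first.
print((idx - 1).tolist())   # [0, 1, 2]
print((10 - idx).tolist())  # [9, 8, 7]
```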
+ + cls.__add__ = cls.__radd__ = _make_evaluate_binop( + operator.add, '__add__') + cls.__sub__ = _make_evaluate_binop( + operator.sub, '__sub__') + cls.__rsub__ = _make_evaluate_binop( + operator.sub, '__sub__', reversed=True) + cls.__mul__ = cls.__rmul__ = _make_evaluate_binop( + operator.mul, '__mul__') + cls.__pow__ = cls.__rpow__ = _make_evaluate_binop( + operator.pow, '__pow__') + cls.__mod__ = _make_evaluate_binop( + operator.mod, '__mod__') + cls.__floordiv__ = _make_evaluate_binop( + operator.floordiv, '__floordiv__') + cls.__rfloordiv__ = _make_evaluate_binop( + operator.floordiv, '__floordiv__', reversed=True) + cls.__truediv__ = _make_evaluate_binop( + operator.truediv, '__truediv__') + cls.__rtruediv__ = _make_evaluate_binop( + operator.truediv, '__truediv__', reversed=True) + if not compat.PY3: + cls.__div__ = _make_evaluate_binop( + operator.div, '__div__') + cls.__rdiv__ = _make_evaluate_binop( + operator.div, '__div__', reversed=True) + + @classmethod + def _add_numeric_methods_unary(cls): + """ add in numeric unary methods """ + + def _make_evaluate_unary(op, opstr): + + def _evaluate_numeric_unary(self): + + self._validate_for_numeric_unaryop(op, opstr) + attrs = self._get_attributes_dict() + attrs = self._maybe_update_attributes(attrs) + return Index(op(self.values), **attrs) + + return _evaluate_numeric_unary + + cls.__neg__ = _make_evaluate_unary(lambda x: -x, '__neg__') + cls.__pos__ = _make_evaluate_unary(lambda x: x, '__pos__') + cls.__abs__ = _make_evaluate_unary(np.abs, '__abs__') + cls.__inv__ = _make_evaluate_unary(lambda x: -x, '__inv__') + + @classmethod + def _add_numeric_methods(cls): + cls._add_numeric_methods_unary() + cls._add_numeric_methods_binary() + + @classmethod + def _add_logical_methods(cls): + """ add in logical methods """ + + _doc = """ + + %(desc)s + + Parameters + ---------- + All arguments to numpy.%(outname)s are accepted. 
+ + Returns + ------- + %(outname)s : bool or array_like (if axis is specified) + A single element array_like may be converted to bool.""" + + def _make_logical_function(name, desc, f): + @Substitution(outname=name, desc=desc) + @Appender(_doc) + def logical_func(self, *args, **kwargs): + result = f(self.values) + if (isinstance(result, (np.ndarray, ABCSeries, Index)) and + result.ndim == 0): + # return NumPy type + return result.dtype.type(result.item()) + else: # pragma: no cover + return result + + logical_func.__name__ = name + return logical_func + + cls.all = _make_logical_function('all', 'Return whether all elements ' + 'are True', + np.all) + cls.any = _make_logical_function('any', + 'Return whether any element is True', + np.any) + + @classmethod + def _add_logical_methods_disabled(cls): + """ add in logical methods to disable """ + + def _make_invalid_op(name): + def invalid_op(self, other=None): + raise TypeError("cannot perform {name} with this index type: " + "{typ}".format(name=name, typ=type(self))) + + invalid_op.__name__ = name + return invalid_op + + cls.all = _make_invalid_op('all') + cls.any = _make_invalid_op('any') + + +Index._add_numeric_methods_disabled() +Index._add_logical_methods() +Index._add_comparison_methods() + + +def _ensure_index(index_like, copy=False): + if isinstance(index_like, Index): + if copy: + index_like = index_like.copy() + return index_like + if hasattr(index_like, 'name'): + return Index(index_like, name=index_like.name, copy=copy) + + # must check for exactly list here because of strict type + # check in clean_index_list + if isinstance(index_like, list): + if type(index_like) != list: + index_like = list(index_like) + # 2200 ? 
+        converted, all_arrays = lib.clean_index_list(index_like)
+
+        if len(converted) > 0 and all_arrays:
+            from .multi import MultiIndex
+            return MultiIndex.from_arrays(converted)
+        else:
+            index_like = converted
+    else:
+        # clean_index_list does the equivalent of copying
+        # so only need to do this if not list instance
+        if copy:
+            from copy import copy
+            index_like = copy(index_like)
+
+    return Index(index_like)
+
+
+def _get_na_value(dtype):
+    return {np.datetime64: tslib.NaT,
+            np.timedelta64: tslib.NaT}.get(dtype, np.nan)
+
+
+def _ensure_frozen(array_like, categories, copy=False):
+    array_like = com._coerce_indexer_dtype(array_like, categories)
+    array_like = array_like.view(FrozenNDArray)
+    if copy:
+        array_like = array_like.copy()
+    return array_like
+
+
+def _ensure_has_len(seq):
+    """If seq is an iterator, put its values into a list."""
+    try:
+        len(seq)
+    except TypeError:
+        return list(seq)
+    else:
+        return seq
+
+
+def _maybe_box(idx):
+    from pandas.tseries.api import DatetimeIndex, PeriodIndex, TimedeltaIndex
+    klasses = DatetimeIndex, PeriodIndex, TimedeltaIndex
+
+    if isinstance(idx, klasses):
+        return idx.asobject
+
+    return idx
+
+
+def _trim_front(strings):
+    """
+    Trims leading spaces shared by all strings in the passed list
+    """
+    trimmed = strings
+    while len(strings) > 0 and all([x[0] == ' ' for x in trimmed]):
+        trimmed = [x[1:] for x in trimmed]
+    return trimmed
+
+
+def _validate_join_method(method):
+    if method not in ['left', 'right', 'inner', 'outer']:
+        raise ValueError('do not recognize join method %s' % method)
diff --git a/pandas/indexes/category.py b/pandas/indexes/category.py
new file mode 100644
index 0000000000000..4ead02e5bd022
--- /dev/null
+++ b/pandas/indexes/category.py
@@ -0,0 +1,598 @@
+import numpy as np
+import pandas.lib as lib
+import pandas.index as _index
+
+from pandas import compat
+from pandas.util.decorators import (Appender, cache_readonly,
+                                    deprecate_kwarg)
+from pandas.core.missing import _clean_reindex_fill_method
+from pandas.core.config
import get_option +from pandas.indexes.base import Index +import pandas.core.base as base +import pandas.core.common as com +import pandas.indexes.base as ibase + + +class CategoricalIndex(Index, base.PandasDelegate): + """ + + Immutable Index implementing an ordered, sliceable set. CategoricalIndex + represents a sparsely populated Index with an underlying Categorical. + + .. versionadded:: 0.16.1 + + Parameters + ---------- + data : array-like or Categorical, (1-dimensional) + categories : optional, array-like + categories for the CategoricalIndex + ordered : boolean, + designating if the categories are ordered + copy : bool + Make a copy of input ndarray + name : object + Name to be stored in the index + + """ + + _typ = 'categoricalindex' + _engine_type = _index.Int64Engine + _attributes = ['name'] + + def __new__(cls, data=None, categories=None, ordered=None, dtype=None, + copy=False, name=None, fastpath=False, **kwargs): + + if fastpath: + return cls._simple_new(data, name=name) + + if isinstance(data, com.ABCCategorical): + data = cls._create_categorical(cls, data, categories, ordered) + elif isinstance(data, CategoricalIndex): + data = data._data + data = cls._create_categorical(cls, data, categories, ordered) + else: + + # don't allow scalars + # if data is None, then categories must be provided + if lib.isscalar(data): + if data is not None or categories is None: + cls._scalar_data_error(data) + data = [] + data = cls._create_categorical(cls, data, categories, ordered) + + if copy: + data = data.copy() + + return cls._simple_new(data, name=name) + + def _create_from_codes(self, codes, categories=None, ordered=None, + name=None): + """ + *this is an internal non-public method* + + create the correct categorical from codes + + Parameters + ---------- + codes : new codes + categories : optional categories, defaults to existing + ordered : optional ordered attribute, defaults to existing + name : optional name attribute, defaults to existing + + Returns + 
------- + CategoricalIndex + """ + + from pandas.core.categorical import Categorical + if categories is None: + categories = self.categories + if ordered is None: + ordered = self.ordered + if name is None: + name = self.name + cat = Categorical.from_codes(codes, categories=categories, + ordered=ordered) + return CategoricalIndex(cat, name=name) + + @staticmethod + def _create_categorical(self, data, categories=None, ordered=None): + """ + *this is an internal non-public method* + + create the correct categorical from data and the properties + + Parameters + ---------- + data : data for new Categorical + categories : optional categories, defaults to existing + ordered : optional ordered attribute, defaults to existing + + Returns + ------- + Categorical + """ + if not isinstance(data, com.ABCCategorical): + from pandas.core.categorical import Categorical + data = Categorical(data, categories=categories, ordered=ordered) + else: + if categories is not None: + data = data.set_categories(categories) + if ordered is not None: + data = data.set_ordered(ordered) + return data + + @classmethod + def _simple_new(cls, values, name=None, categories=None, ordered=None, + **kwargs): + result = object.__new__(cls) + + values = cls._create_categorical(cls, values, categories, ordered) + result._data = values + result.name = name + for k, v in compat.iteritems(kwargs): + setattr(result, k, v) + + result._reset_identity() + return result + + def _is_dtype_compat(self, other): + """ + *this is an internal non-public method* + + provide a comparison between the dtype of self and other (coercing if + needed) + + Raises + ------ + TypeError if the dtypes are not compatible + """ + if com.is_categorical_dtype(other): + if isinstance(other, CategoricalIndex): + other = other._values + if not other.is_dtype_equal(self): + raise TypeError("categories must match existing categories " + "when appending") + else: + values = other + if not com.is_list_like(values): + values = [values] + 
other = CategoricalIndex(self._create_categorical( + self, other, categories=self.categories, ordered=self.ordered)) + if not other.isin(values).all(): + raise TypeError("cannot append a non-category item to a " + "CategoricalIndex") + + return other + + def equals(self, other): + """ + Determines if two CategoricalIndex objects contain the same elements. + """ + if self.is_(other): + return True + + try: + other = self._is_dtype_compat(other) + return com.array_equivalent(self._data, other) + except (TypeError, ValueError): + pass + + return False + + @property + def _formatter_func(self): + return self.categories._formatter_func + + def _format_attrs(self): + """ + Return a list of tuples of the (attr,formatted_value) + """ + max_categories = (10 if get_option("display.max_categories") == 0 else + get_option("display.max_categories")) + attrs = [ + ('categories', + ibase.default_pprint(self.categories, + max_seq_items=max_categories)), + ('ordered', self.ordered)] + if self.name is not None: + attrs.append(('name', ibase.default_pprint(self.name))) + attrs.append(('dtype', "'%s'" % self.dtype)) + max_seq_items = get_option('display.max_seq_items') or len(self) + if len(self) > max_seq_items: + attrs.append(('length', len(self))) + return attrs + + @property + def inferred_type(self): + return 'categorical' + + @property + def values(self): + """ return the underlying data, which is a Categorical """ + return self._data + + def get_values(self): + """ return the underlying data as an ndarray """ + return self._data.get_values() + + @property + def codes(self): + return self._data.codes + + @property + def categories(self): + return self._data.categories + + @property + def ordered(self): + return self._data.ordered + + def __contains__(self, key): + hash(key) + return key in self.values + + def __array__(self, dtype=None): + """ the array interface, return my values """ + return np.array(self._data, dtype=dtype) + + @cache_readonly + def _isnan(self): + """ return 
if each value is nan""" + return self._data.codes == -1 + + @Appender(ibase._index_shared_docs['fillna']) + def fillna(self, value, downcast=None): + self._assert_can_do_op(value) + return CategoricalIndex(self._data.fillna(value), name=self.name) + + def argsort(self, *args, **kwargs): + return self.values.argsort(*args, **kwargs) + + @cache_readonly + def _engine(self): + + # we are going to look things up with the codes themselves + return self._engine_type(lambda: self.codes.astype('i8'), len(self)) + + @cache_readonly + def is_unique(self): + return not self.duplicated().any() + + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) + @Appender(base._shared_docs['duplicated'] % ibase._index_doc_kwargs) + def duplicated(self, keep='first'): + from pandas.hashtable import duplicated_int64 + return duplicated_int64(self.codes.astype('i8'), keep) + + def _to_safe_for_reshape(self): + """ convert to object if we are a categorical """ + return self.astype('object') + + def get_loc(self, key, method=None): + """ + Get integer location for requested label + + Parameters + ---------- + key : label + method : {None} + * default: exact matches only. 
+ + Returns + ------- + loc : int if unique index, possibly slice or mask if not + """ + codes = self.categories.get_loc(key) + if (codes == -1): + raise KeyError(key) + indexer, _ = self._engine.get_indexer_non_unique(np.array([codes])) + if (indexer == -1).any(): + raise KeyError(key) + + return indexer + + def _can_reindex(self, indexer): + """ always allow reindexing """ + pass + + def reindex(self, target, method=None, level=None, limit=None, + tolerance=None): + """ + Create index with target's values (move/add/delete values as necessary) + + Returns + ------- + new_index : pd.Index + Resulting index + indexer : np.ndarray or None + Indices of output values in original index + + """ + + if method is not None: + raise NotImplementedError("argument method is not implemented for " + "CategoricalIndex.reindex") + if level is not None: + raise NotImplementedError("argument level is not implemented for " + "CategoricalIndex.reindex") + if limit is not None: + raise NotImplementedError("argument limit is not implemented for " + "CategoricalIndex.reindex") + + target = ibase._ensure_index(target) + + if not com.is_categorical_dtype(target) and not target.is_unique: + raise ValueError("cannot reindex with a non-unique indexer") + + indexer, missing = self.get_indexer_non_unique(np.array(target)) + new_target = self.take(indexer) + + # filling in missing if needed + if len(missing): + cats = self.categories.get_indexer(target) + + if (cats == -1).any(): + # coerce to a regular index here! + result = Index(np.array(self), name=self.name) + new_target, indexer, _ = result._reindex_non_unique( + np.array(target)) + + else: + + codes = new_target.codes.copy() + codes[indexer == -1] = cats[missing] + new_target = self._create_from_codes(codes) + + # we always want to return an Index type here + # to be consistent with .reindex for other index types (e.g. 
they don't + # coerce based on the actual values, only on the dtype) + # unless we had an inital Categorical to begin with + # in which case we are going to conform to the passed Categorical + new_target = np.asarray(new_target) + if com.is_categorical_dtype(target): + new_target = target._shallow_copy(new_target, name=self.name) + else: + new_target = Index(new_target, name=self.name) + + return new_target, indexer + + def _reindex_non_unique(self, target): + """ reindex from a non-unique; which CategoricalIndex's are almost + always + """ + new_target, indexer = self.reindex(target) + new_indexer = None + + check = indexer == -1 + if check.any(): + new_indexer = np.arange(len(self.take(indexer))) + new_indexer[check] = -1 + + cats = self.categories.get_indexer(target) + if not (cats == -1).any(): + # .reindex returns normal Index. Revert to CategoricalIndex if + # all targets are included in my categories + new_target = self._shallow_copy(new_target) + + return new_target, indexer, new_indexer + + def get_indexer(self, target, method=None, limit=None, tolerance=None): + """ + Compute indexer and mask for new index given the current index. The + indexer should be then used as an input to ndarray.take to align the + current data to the new index. 
The mask determines whether labels are + found or not in the current index + + Parameters + ---------- + target : MultiIndex or Index (of tuples) + method : {'pad', 'ffill', 'backfill', 'bfill'} + pad / ffill: propagate LAST valid observation forward to next valid + backfill / bfill: use NEXT valid observation to fill gap + + Notes + ----- + This is a low-level method and probably should be used at your own risk + + Examples + -------- + >>> indexer, mask = index.get_indexer(new_index) + >>> new_values = cur_values.take(indexer) + >>> new_values[-mask] = np.nan + + Returns + ------- + (indexer, mask) : (ndarray, ndarray) + """ + method = _clean_reindex_fill_method(method) + target = ibase._ensure_index(target) + + if isinstance(target, CategoricalIndex): + target = target.categories + + if method == 'pad' or method == 'backfill': + raise NotImplementedError("method='pad' and method='backfill' not " + "implemented yet for CategoricalIndex") + elif method == 'nearest': + raise NotImplementedError("method='nearest' not implemented yet " + 'for CategoricalIndex') + else: + + codes = self.categories.get_indexer(target) + indexer, _ = self._engine.get_indexer_non_unique(codes) + + return com._ensure_platform_int(indexer) + + def get_indexer_non_unique(self, target): + """ this is the same for a CategoricalIndex for get_indexer; the API + returns the missing values as well + """ + target = ibase._ensure_index(target) + + if isinstance(target, CategoricalIndex): + target = target.categories + + codes = self.categories.get_indexer(target) + return self._engine.get_indexer_non_unique(codes) + + def _convert_list_indexer(self, keyarr, kind=None): + """ + we are passed a list indexer. 
+ Return our indexer or raise if all of the values are not included in + the categories + """ + codes = self.categories.get_indexer(keyarr) + if (codes == -1).any(): + raise KeyError("a list-indexer must only include values that are " + "in the categories") + + return None + + def take(self, indexer, axis=0, allow_fill=True, fill_value=None): + """ + For internal compatibility with numpy arrays. + + # filling must always be None/nan here + # but is passed thru internally + assert isnull(fill_value) + + See also + -------- + numpy.ndarray.take + """ + + indexer = com._ensure_platform_int(indexer) + taken = self.codes.take(indexer) + return self._create_from_codes(taken) + + def delete(self, loc): + """ + Make new Index with passed location(-s) deleted + + Returns + ------- + new_index : Index + """ + return self._create_from_codes(np.delete(self.codes, loc)) + + def insert(self, loc, item): + """ + Make new Index inserting new item at location. Follows + Python list.append semantics for negative values + + Parameters + ---------- + loc : int + item : object + + Returns + ------- + new_index : Index + + Raises + ------ + ValueError if the item is not in the categories + + """ + code = self.categories.get_indexer([item]) + if (code == -1): + raise TypeError("cannot insert an item into a CategoricalIndex " + "that is not already an existing category") + + codes = self.codes + codes = np.concatenate((codes[:loc], code, codes[loc:])) + return self._create_from_codes(codes) + + def append(self, other): + """ + Append a collection of CategoricalIndex options together + + Parameters + ---------- + other : Index or list/tuple of indices + + Returns + ------- + appended : Index + + Raises + ------ + ValueError if other is not in the categories + """ + to_concat, name = self._ensure_compat_append(other) + to_concat = [self._is_dtype_compat(c) for c in to_concat] + codes = np.concatenate([c.codes for c in to_concat]) + return self._create_from_codes(codes, name=name) + + 
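The `take`/`delete`/`insert`/`append` methods above all operate on the integer codes rather than on materialized values, which is what makes them cheap on a `CategoricalIndex`. The following is a hypothetical pure-Python toy (not the pandas implementation; `TinyCategoricalIndex` and its methods are invented for illustration) showing why an insert reduces to integer-array surgery plus a membership check against the categories:

```python
# Toy sketch of the codes/categories split used by CategoricalIndex.
class TinyCategoricalIndex:
    def __init__(self, values, categories):
        self.categories = list(categories)
        # -1 marks a missing value, mirroring the Categorical convention
        self.codes = [self.categories.index(v) if v in self.categories else -1
                      for v in values]

    def insert(self, loc, item):
        # Mirrors the rule in the diff: only existing categories may be inserted
        if item not in self.categories:
            raise TypeError("cannot insert an item that is not an "
                            "existing category")
        code = self.categories.index(item)
        new = TinyCategoricalIndex([], self.categories)
        new.codes = self.codes[:loc] + [code] + self.codes[loc:]
        return new

    def tolist(self):
        return [self.categories[c] if c != -1 else None for c in self.codes]


idx = TinyCategoricalIndex(['a', 'b', 'a'], categories=['a', 'b', 'c'])
print(idx.insert(1, 'c').tolist())   # ['a', 'c', 'b', 'a']
```

Note that no string comparison happens during the insert itself; the values are only decoded on demand, which is the same design choice the real `_create_from_codes` helper supports.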
@classmethod + def _add_comparison_methods(cls): + """ add in comparison methods """ + + def _make_compare(op): + def _evaluate_compare(self, other): + + # if we have a Categorical type, then must have the same + # categories + if isinstance(other, CategoricalIndex): + other = other._values + elif isinstance(other, Index): + other = self._create_categorical( + self, other._values, categories=self.categories, + ordered=self.ordered) + + if isinstance(other, (com.ABCCategorical, np.ndarray, + com.ABCSeries)): + if len(self.values) != len(other): + raise ValueError("Lengths must match to compare") + + if isinstance(other, com.ABCCategorical): + if not self.values.is_dtype_equal(other): + raise TypeError("categorical index comparisons must " + "have the same categories and ordered " + "attributes") + + return getattr(self.values, op)(other) + + return _evaluate_compare + + cls.__eq__ = _make_compare('__eq__') + cls.__ne__ = _make_compare('__ne__') + cls.__lt__ = _make_compare('__lt__') + cls.__gt__ = _make_compare('__gt__') + cls.__le__ = _make_compare('__le__') + cls.__ge__ = _make_compare('__ge__') + + def _delegate_method(self, name, *args, **kwargs): + """ method delegation to the ._values """ + method = getattr(self._values, name) + if 'inplace' in kwargs: + raise ValueError("cannot use inplace with CategoricalIndex") + res = method(*args, **kwargs) + if lib.isscalar(res): + return res + return CategoricalIndex(res, name=self.name) + + @classmethod + def _add_accessors(cls): + """ add in Categorical accessor methods """ + + from pandas.core.categorical import Categorical + CategoricalIndex._add_delegate_accessors( + delegate=Categorical, accessors=["rename_categories", + "reorder_categories", + "add_categories", + "remove_categories", + "remove_unused_categories", + "set_categories", + "as_ordered", "as_unordered", + "min", "max"], + typ='method', overwrite=True) + + +CategoricalIndex._add_numericlike_set_methods_disabled() 
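`_add_comparison_methods` above uses a closure factory (`_make_compare`) to generate all six rich-comparison dunders from one template and attach them to the class. A minimal stdlib-only sketch of that pattern (a toy, not the pandas code; `TinyIndex` is invented, and real `CategoricalIndex` comparisons additionally validate categories and orderedness):

```python
import operator

# Generate __eq__/__ne__ once from a template instead of writing each by hand.
def _add_comparison_methods(cls):
    def _make_compare(op):
        def _evaluate_compare(self, other):
            # Same guard as in the diff: "Lengths must match to compare"
            if len(self.values) != len(other):
                raise ValueError("Lengths must match to compare")
            return [op(a, b) for a, b in zip(self.values, other)]
        return _evaluate_compare

    cls.__eq__ = _make_compare(operator.eq)
    cls.__ne__ = _make_compare(operator.ne)
    return cls


@_add_comparison_methods
class TinyIndex:
    def __init__(self, values):
        self.values = list(values)


print(TinyIndex([1, 2, 3]) == [1, 0, 3])   # [True, False, True]
```

The factory matters because each generated function needs its own `op` bound in a closure; a plain loop assigning one shared lambda would leave every dunder comparing with the last operator.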
+CategoricalIndex._add_numeric_methods_disabled() +CategoricalIndex._add_logical_methods_disabled() +CategoricalIndex._add_comparison_methods() +CategoricalIndex._add_accessors() diff --git a/pandas/indexes/multi.py b/pandas/indexes/multi.py new file mode 100644 index 0000000000000..2d0ad1925daa0 --- /dev/null +++ b/pandas/indexes/multi.py @@ -0,0 +1,2166 @@ +# pylint: disable=E1101,E1103,W0232 +import datetime +import warnings +from functools import partial +from sys import getsizeof + +import numpy as np +import pandas.lib as lib +import pandas.index as _index +from pandas.lib import Timestamp + +from pandas.compat import range, zip, lrange, lzip, map +from pandas import compat +from pandas.core.base import FrozenList +import pandas.core.base as base +from pandas.util.decorators import (Appender, cache_readonly, + deprecate, deprecate_kwarg) +import pandas.core.common as com +from pandas.core.missing import _clean_reindex_fill_method +from pandas.core.common import (isnull, array_equivalent, + is_object_dtype, + _values_from_object, + is_iterator, + _ensure_int64, is_bool_indexer, + is_list_like, is_null_slice) + +from pandas.core.config import get_option +from pandas.io.common import PerformanceWarning + +from pandas.indexes.base import (Index, _ensure_index, _ensure_frozen, + _get_na_value, InvalidIndexError) +import pandas.indexes.base as ibase + + +class MultiIndex(Index): + """ + A multi-level, or hierarchical, index object for pandas objects + + Parameters + ---------- + levels : sequence of arrays + The unique labels for each level + labels : sequence of arrays + Integers for each level designating which label at each location + sortorder : optional int + Level of sortedness (must be lexicographically sorted by that + level) + names : optional sequence of objects + Names for each of the index levels. 
(name is accepted for compat) + copy : boolean, default False + Copy the meta-data + verify_integrity : boolean, default True + Check that the levels/labels are consistent and valid + """ + + # initialize to zero-length tuples to make everything work + _typ = 'multiindex' + _names = FrozenList() + _levels = FrozenList() + _labels = FrozenList() + _comparables = ['names'] + rename = Index.set_names + + def __new__(cls, levels=None, labels=None, sortorder=None, names=None, + copy=False, verify_integrity=True, _set_identity=True, + name=None, **kwargs): + + # compat with Index + if name is not None: + names = name + if levels is None or labels is None: + raise TypeError("Must pass both levels and labels") + if len(levels) != len(labels): + raise ValueError('Length of levels and labels must be the same.') + if len(levels) == 0: + raise ValueError('Must pass non-zero number of levels/labels') + if len(levels) == 1: + if names: + name = names[0] + else: + name = None + return Index(levels[0], name=name, copy=True).take(labels[0]) + + result = object.__new__(MultiIndex) + + # we've already validated levels and labels, so shortcut here + result._set_levels(levels, copy=copy, validate=False) + result._set_labels(labels, copy=copy, validate=False) + + if names is not None: + # handles name validation + result._set_names(names) + + if sortorder is not None: + result.sortorder = int(sortorder) + else: + result.sortorder = sortorder + + if verify_integrity: + result._verify_integrity() + if _set_identity: + result._reset_identity() + + return result + + def _verify_integrity(self): + """Raises ValueError if length of levels and labels don't match or any + label would exceed level bounds""" + # NOTE: Currently does not check, among other things, that cached + # nlevels matches nor that sortorder matches actually sortorder. + labels, levels = self.labels, self.levels + if len(levels) != len(labels): + raise ValueError("Length of levels and labels must match. 
NOTE:" + " this index is in an inconsistent state.") + label_length = len(self.labels[0]) + for i, (level, label) in enumerate(zip(levels, labels)): + if len(label) != label_length: + raise ValueError("Unequal label lengths: %s" % + ([len(lab) for lab in labels])) + if len(label) and label.max() >= len(level): + raise ValueError("On level %d, label max (%d) >= length of" + " level (%d). NOTE: this index is in an" + " inconsistent state" % (i, label.max(), + len(level))) + + def _get_levels(self): + return self._levels + + def _set_levels(self, levels, level=None, copy=False, validate=True, + verify_integrity=False): + # This is NOT part of the levels property because it should be + # externally not allowed to set levels. User beware if you change + # _levels directly + if validate and len(levels) == 0: + raise ValueError('Must set non-zero number of levels.') + if validate and level is None and len(levels) != self.nlevels: + raise ValueError('Length of levels must match number of levels.') + if validate and level is not None and len(levels) != len(level): + raise ValueError('Length of levels must match length of level.') + + if level is None: + new_levels = FrozenList( + _ensure_index(lev, copy=copy)._shallow_copy() + for lev in levels) + else: + level = [self._get_level_number(l) for l in level] + new_levels = list(self._levels) + for l, v in zip(level, levels): + new_levels[l] = _ensure_index(v, copy=copy)._shallow_copy() + new_levels = FrozenList(new_levels) + + names = self.names + self._levels = new_levels + if any(names): + self._set_names(names) + + self._tuples = None + self._reset_cache() + + if verify_integrity: + self._verify_integrity() + + def set_levels(self, levels, level=None, inplace=False, + verify_integrity=True): + """ + Set new levels on MultiIndex. Defaults to returning + new index. 
+ + Parameters + ---------- + levels : sequence or list of sequence + new level(s) to apply + level : int, level name, or sequence of int/level names (default None) + level(s) to set (None for all levels) + inplace : bool + if True, mutates in place + verify_integrity : bool (default True) + if True, checks that levels and labels are compatible + + Returns + ------- + new index (of same type and class...etc) + + + Examples + -------- + >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'), + (2, u'one'), (2, u'two')], + names=['foo', 'bar']) + >>> idx.set_levels([['a','b'], [1,2]]) + MultiIndex(levels=[[u'a', u'b'], [1, 2]], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], + names=[u'foo', u'bar']) + >>> idx.set_levels(['a','b'], level=0) + MultiIndex(levels=[[u'a', u'b'], [u'one', u'two']], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], + names=[u'foo', u'bar']) + >>> idx.set_levels(['a','b'], level='bar') + MultiIndex(levels=[[1, 2], [u'a', u'b']], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], + names=[u'foo', u'bar']) + >>> idx.set_levels([['a','b'], [1,2]], level=[0,1]) + MultiIndex(levels=[[u'a', u'b'], [1, 2]], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], + names=[u'foo', u'bar']) + """ + if level is not None and not is_list_like(level): + if not is_list_like(levels): + raise TypeError("Levels must be list-like") + if is_list_like(levels[0]): + raise TypeError("Levels must be list-like") + level = [level] + levels = [levels] + elif level is None or is_list_like(level): + if not is_list_like(levels) or not is_list_like(levels[0]): + raise TypeError("Levels must be list of lists-like") + + if inplace: + idx = self + else: + idx = self._shallow_copy() + idx._reset_identity() + idx._set_levels(levels, level=level, validate=True, + verify_integrity=verify_integrity) + if not inplace: + return idx + + # remove me in 0.14 and change to read only property + __set_levels = deprecate("setting `levels` directly", + partial(set_levels, inplace=True, + verify_integrity=True), + 
alt_name="set_levels") + levels = property(fget=_get_levels, fset=__set_levels) + + def _get_labels(self): + return self._labels + + def _set_labels(self, labels, level=None, copy=False, validate=True, + verify_integrity=False): + + if validate and level is None and len(labels) != self.nlevels: + raise ValueError("Length of labels must match number of levels") + if validate and level is not None and len(labels) != len(level): + raise ValueError('Length of labels must match length of levels.') + + if level is None: + new_labels = FrozenList( + _ensure_frozen(lab, lev, copy=copy)._shallow_copy() + for lev, lab in zip(self.levels, labels)) + else: + level = [self._get_level_number(l) for l in level] + new_labels = list(self._labels) + for l, lev, lab in zip(level, self.levels, labels): + new_labels[l] = _ensure_frozen( + lab, lev, copy=copy)._shallow_copy() + new_labels = FrozenList(new_labels) + + self._labels = new_labels + self._tuples = None + self._reset_cache() + + if verify_integrity: + self._verify_integrity() + + def set_labels(self, labels, level=None, inplace=False, + verify_integrity=True): + """ + Set new labels on MultiIndex. Defaults to returning + new index. 
+ + Parameters + ---------- + labels : sequence or list of sequence + new labels to apply + level : int, level name, or sequence of int/level names (default None) + level(s) to set (None for all levels) + inplace : bool + if True, mutates in place + verify_integrity : bool (default True) + if True, checks that levels and labels are compatible + + Returns + ------- + new index (of same type and class...etc) + + Examples + -------- + >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'), + (2, u'one'), (2, u'two')], + names=['foo', 'bar']) + >>> idx.set_labels([[1,0,1,0], [0,0,1,1]]) + MultiIndex(levels=[[1, 2], [u'one', u'two']], + labels=[[1, 0, 1, 0], [0, 0, 1, 1]], + names=[u'foo', u'bar']) + >>> idx.set_labels([1,0,1,0], level=0) + MultiIndex(levels=[[1, 2], [u'one', u'two']], + labels=[[1, 0, 1, 0], [0, 1, 0, 1]], + names=[u'foo', u'bar']) + >>> idx.set_labels([0,0,1,1], level='bar') + MultiIndex(levels=[[1, 2], [u'one', u'two']], + labels=[[0, 0, 1, 1], [0, 0, 1, 1]], + names=[u'foo', u'bar']) + >>> idx.set_labels([[1,0,1,0], [0,0,1,1]], level=[0,1]) + MultiIndex(levels=[[1, 2], [u'one', u'two']], + labels=[[1, 0, 1, 0], [0, 0, 1, 1]], + names=[u'foo', u'bar']) + """ + if level is not None and not is_list_like(level): + if not is_list_like(labels): + raise TypeError("Labels must be list-like") + if is_list_like(labels[0]): + raise TypeError("Labels must be list-like") + level = [level] + labels = [labels] + elif level is None or is_list_like(level): + if not is_list_like(labels) or not is_list_like(labels[0]): + raise TypeError("Labels must be list of lists-like") + + if inplace: + idx = self + else: + idx = self._shallow_copy() + idx._reset_identity() + idx._set_labels(labels, level=level, verify_integrity=verify_integrity) + if not inplace: + return idx + + # remove me in 0.14 and change to readonly property + __set_labels = deprecate("setting labels directly", + partial(set_labels, inplace=True, + verify_integrity=True), + alt_name="set_labels") + 
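Both `set_levels` and `set_labels` rewrite one half of the `(levels, labels)` pair that encodes a `MultiIndex`: levels hold the unique values per level, labels hold integer positions into them. A small illustrative sketch (the `decode_tuples` helper is invented here, not part of pandas) of how row tuples are recovered from that encoding, and why swapping labels alone re-maps rows without touching the level values:

```python
# Rebuild row tuples from per-level value arrays plus integer label arrays.
def decode_tuples(levels, labels):
    """Each row i is (levels[0][labels[0][i]], levels[1][labels[1][i]], ...)."""
    return [tuple(level[lab[i]] for level, lab in zip(levels, labels))
            for i in range(len(labels[0]))]


levels = [[1, 2], ['one', 'two']]
labels = [[0, 0, 1, 1], [0, 1, 0, 1]]
print(decode_tuples(levels, labels))
# [(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two')]

# set_labels-style change: new integer labels, same levels
new_labels = [[1, 0, 1, 0], labels[1]]
print(decode_tuples(levels, new_labels))
# [(2, 'one'), (1, 'two'), (2, 'one'), (1, 'two')]
```

This also explains the integrity checks in `_verify_integrity` above: every label array must have the same length, and no label may reach past the end of its level.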
labels = property(fget=_get_labels, fset=__set_labels) + + def copy(self, names=None, dtype=None, levels=None, labels=None, + deep=False, _set_identity=False): + """ + Make a copy of this object. Names, dtype, levels and labels can be + passed and will be set on new copy. + + Parameters + ---------- + names : sequence, optional + dtype : numpy dtype or pandas type, optional + levels : sequence, optional + labels : sequence, optional + + Returns + ------- + copy : MultiIndex + + Notes + ----- + In most cases, there should be no functional difference from using + ``deep``, but if ``deep`` is passed it will attempt to deepcopy. + This could be potentially expensive on large MultiIndex objects. + """ + if deep: + from copy import deepcopy + levels = levels if levels is not None else deepcopy(self.levels) + labels = labels if labels is not None else deepcopy(self.labels) + names = names if names is not None else deepcopy(self.names) + else: + levels = self.levels + labels = self.labels + names = self.names + return MultiIndex(levels=levels, labels=labels, names=names, + sortorder=self.sortorder, verify_integrity=False, + _set_identity=_set_identity) + + def __array__(self, dtype=None): + """ the array interface, return my values """ + return self.values + + def view(self, cls=None): + """ this is defined as a copy with the same identity """ + result = self.copy() + result._id = self._id + return result + + def _shallow_copy_with_infer(self, values=None, **kwargs): + return self._shallow_copy(values, **kwargs) + + def _shallow_copy(self, values=None, **kwargs): + if values is not None: + if 'name' in kwargs: + kwargs['names'] = kwargs.pop('name', None) + # discards freq + kwargs.pop('freq', None) + return MultiIndex.from_tuples(values, **kwargs) + return self.view() + + @cache_readonly + def dtype(self): + return np.dtype('O') + + @cache_readonly + def nbytes(self): + """ return the number of bytes in the underlying data """ + level_nbytes = sum((i.nbytes for i in 
self.levels)) + label_nbytes = sum((i.nbytes for i in self.labels)) + names_nbytes = sum((getsizeof(i) for i in self.names)) + return level_nbytes + label_nbytes + names_nbytes + + def _format_attrs(self): + """ + Return a list of tuples of the (attr,formatted_value) + """ + attrs = [ + ('levels', ibase.default_pprint(self._levels, + max_seq_items=False)), + ('labels', ibase.default_pprint(self._labels, + max_seq_items=False))] + if not all(name is None for name in self.names): + attrs.append(('names', ibase.default_pprint(self.names))) + if self.sortorder is not None: + attrs.append(('sortorder', ibase.default_pprint(self.sortorder))) + return attrs + + def _format_space(self): + return "\n%s" % (' ' * (len(self.__class__.__name__) + 1)) + + def _format_data(self): + # we are formatting thru the attributes + return None + + def __len__(self): + return len(self.labels[0]) + + def _get_names(self): + return FrozenList(level.name for level in self.levels) + + def _set_names(self, names, level=None, validate=True): + """ + sets names on levels. WARNING: mutates! + + Note that you generally want to set this *after* changing levels, so + that it only acts on copies + """ + + names = list(names) + + if validate and level is not None and len(names) != len(level): + raise ValueError('Length of names must match length of level.') + if validate and level is None and len(names) != self.nlevels: + raise ValueError('Length of names must match number of levels in ' + 'MultiIndex.') + + if level is None: + level = range(self.nlevels) + else: + level = [self._get_level_number(l) for l in level] + + # set the name + for l, name in zip(level, names): + self.levels[l].rename(name, inplace=True) + + names = property(fset=_set_names, fget=_get_names, + doc="Names of levels in MultiIndex") + + def _reference_duplicate_name(self, name): + """ + Returns True if the name refered to in self.names is duplicated. + """ + # count the times name equals an element in self.names. 
+ return sum(name == n for n in self.names) > 1 + + def _format_native_types(self, na_rep='nan', **kwargs): + new_levels = [] + new_labels = [] + + # go through the levels and format them + for level, label in zip(self.levels, self.labels): + level = level._format_native_types(na_rep=na_rep, **kwargs) + # add nan values, if there are any + mask = (label == -1) + if mask.any(): + nan_index = len(level) + level = np.append(level, na_rep) + label = label.values() + label[mask] = nan_index + new_levels.append(level) + new_labels.append(label) + + # reconstruct the multi-index + mi = MultiIndex(levels=new_levels, labels=new_labels, names=self.names, + sortorder=self.sortorder, verify_integrity=False) + + return mi.values + + @property + def _constructor(self): + return MultiIndex.from_tuples + + @cache_readonly + def inferred_type(self): + return 'mixed' + + @staticmethod + def _from_elements(values, labels=None, levels=None, names=None, + sortorder=None): + return MultiIndex(levels, labels, names, sortorder=sortorder) + + def _get_level_number(self, level): + try: + count = self.names.count(level) + if count > 1: + raise ValueError('The name %s occurs multiple times, use a ' + 'level number' % level) + level = self.names.index(level) + except ValueError: + if not isinstance(level, int): + raise KeyError('Level %s not found' % str(level)) + elif level < 0: + level += self.nlevels + if level < 0: + orig_level = level - self.nlevels + raise IndexError('Too many levels: Index has only %d ' + 'levels, %d is not a valid level number' % + (self.nlevels, orig_level)) + # Note: levels are zero-based + elif level >= self.nlevels: + raise IndexError('Too many levels: Index has only %d levels, ' + 'not %d' % (self.nlevels, level + 1)) + return level + + _tuples = None + + @property + def values(self): + if self._tuples is not None: + return self._tuples + + values = [] + for lev, lab in zip(self.levels, self.labels): + # Need to box timestamps, etc. 
+ box = hasattr(lev, '_box_values') + # Try to minimize boxing. + if box and len(lev) > len(lab): + taken = lev._box_values(com.take_1d(lev._values, lab)) + elif box: + taken = com.take_1d(lev._box_values(lev._values), lab, + fill_value=_get_na_value(lev.dtype.type)) + else: + taken = com.take_1d(np.asarray(lev._values), lab) + values.append(taken) + + self._tuples = lib.fast_zip(values) + return self._tuples + + # fml + @property + def _is_v1(self): + return False + + @property + def _is_v2(self): + return False + + @property + def _has_complex_internals(self): + # to disable groupby tricks + return True + + @cache_readonly + def is_unique(self): + return not self.duplicated().any() + + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) + @Appender(base._shared_docs['duplicated'] % ibase._index_doc_kwargs) + def duplicated(self, keep='first'): + from pandas.core.groupby import get_group_index + from pandas.hashtable import duplicated_int64 + + shape = map(len, self.levels) + ids = get_group_index(self.labels, shape, sort=False, xnull=False) + + return duplicated_int64(ids, keep) + + @Appender(ibase._index_shared_docs['fillna']) + def fillna(self, value=None, downcast=None): + # isnull is not implemented for MultiIndex + raise NotImplementedError('isnull is not defined for MultiIndex') + + def get_value(self, series, key): + # somewhat broken encapsulation + from pandas.core.indexing import maybe_droplevels + from pandas.core.series import Series + + # Label-based + s = _values_from_object(series) + k = _values_from_object(key) + + def _try_mi(k): + # TODO: what if a level contains tuples?? 
+ loc = self.get_loc(k) + new_values = series._values[loc] + new_index = self[loc] + new_index = maybe_droplevels(new_index, k) + return Series(new_values, index=new_index, name=series.name) + + try: + return self._engine.get_value(s, k) + except KeyError as e1: + try: + return _try_mi(key) + except KeyError: + pass + + try: + return _index.get_value_at(s, k) + except IndexError: + raise + except TypeError: + # generator/iterator-like + if is_iterator(key): + raise InvalidIndexError(key) + else: + raise e1 + except Exception: # pragma: no cover + raise e1 + except TypeError: + + # a Timestamp will raise a TypeError in a multi-index + # rather than a KeyError, try it here + # note that a string that 'looks' like a Timestamp will raise + # a KeyError! (GH5725) + if (isinstance(key, (datetime.datetime, np.datetime64)) or + (compat.PY3 and isinstance(key, compat.string_types))): + try: + return _try_mi(key) + except (KeyError): + raise + except: + pass + + try: + return _try_mi(Timestamp(key)) + except: + pass + + raise InvalidIndexError(key) + + def get_level_values(self, level): + """ + Return vector of label values for requested level, equal to the length + of the index + + Parameters + ---------- + level : int or level name + + Returns + ------- + values : ndarray + """ + num = self._get_level_number(level) + unique = self.levels[num] # .values + labels = self.labels[num] + filled = com.take_1d(unique.values, labels, + fill_value=unique._na_value) + _simple_new = unique._simple_new + values = _simple_new(filled, self.names[num], + freq=getattr(unique, 'freq', None), + tz=getattr(unique, 'tz', None)) + return values + + def format(self, space=2, sparsify=None, adjoin=True, names=False, + na_rep=None, formatter=None): + if len(self) == 0: + return [] + + stringified_levels = [] + for lev, lab in zip(self.levels, self.labels): + na = na_rep if na_rep is not None else _get_na_rep(lev.dtype.type) + + if len(lev) > 0: + + formatted = 
lev.take(lab).format(formatter=formatter)
+
+                # we have some NA
+                mask = lab == -1
+                if mask.any():
+                    formatted = np.array(formatted, dtype=object)
+                    formatted[mask] = na
+                    formatted = formatted.tolist()
+
+            else:
+                # weird all NA case
+                formatted = [com.pprint_thing(na if isnull(x) else x,
+                                              escape_chars=('\t', '\r', '\n'))
+                             for x in com.take_1d(lev._values, lab)]
+            stringified_levels.append(formatted)
+
+        result_levels = []
+        for lev, name in zip(stringified_levels, self.names):
+            level = []
+
+            if names:
+                level.append(com.pprint_thing(name,
+                                              escape_chars=('\t', '\r', '\n'))
+                             if name is not None else '')
+
+            level.extend(np.array(lev, dtype=object))
+            result_levels.append(level)
+
+        if sparsify is None:
+            sparsify = get_option("display.multi_sparse")
+
+        if sparsify:
+            sentinel = ''
+            # GH3547
+            # use value of sparsify as sentinel, unless it's an obvious
+            # "Truthy" value
+            if sparsify not in [True, 1]:
+                sentinel = sparsify
+            # little bit of a kludge job for #1217
+            result_levels = _sparsify(result_levels, start=int(names),
+                                      sentinel=sentinel)
+
+        if adjoin:
+            from pandas.core.format import _get_adjustment
+            adj = _get_adjustment()
+            return adj.adjoin(space, *result_levels).split('\n')
+        else:
+            return result_levels
+
+    def _to_safe_for_reshape(self):
+        """ convert to object if we are a categorical """
+        return self.set_levels([i._to_safe_for_reshape() for i in self.levels])
+
+    def to_hierarchical(self, n_repeat, n_shuffle=1):
+        """
+        Return a MultiIndex reshaped to conform to the
+        shapes given by n_repeat and n_shuffle.
+
+        Useful to replicate and rearrange a MultiIndex for combination
+        with another Index with n_repeat items.
+
+        Parameters
+        ----------
+        n_repeat : int
+            Number of times to repeat the labels on self
+        n_shuffle : int
+            Controls the reordering of the labels. If the result is going
+            to be an inner level in a MultiIndex, n_shuffle will need to be
+            greater than one. The size of each label must be divisible by
+            n_shuffle.
+ + Returns + ------- + MultiIndex + + Examples + -------- + >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'), + (2, u'one'), (2, u'two')]) + >>> idx.to_hierarchical(3) + MultiIndex(levels=[[1, 2], [u'one', u'two']], + labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1], + [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]]) + """ + levels = self.levels + labels = [np.repeat(x, n_repeat) for x in self.labels] + # Assumes that each label is divisible by n_shuffle + labels = [x.reshape(n_shuffle, -1).ravel(1) for x in labels] + names = self.names + return MultiIndex(levels=levels, labels=labels, names=names) + + @property + def is_all_dates(self): + return False + + def is_lexsorted(self): + """ + Return True if the labels are lexicographically sorted + """ + return self.lexsort_depth == self.nlevels + + def is_lexsorted_for_tuple(self, tup): + """ + Return True if we are correctly lexsorted given the passed tuple + """ + return len(tup) <= self.lexsort_depth + + @cache_readonly + def lexsort_depth(self): + if self.sortorder is not None: + if self.sortorder == 0: + return self.nlevels + else: + return 0 + + int64_labels = [com._ensure_int64(lab) for lab in self.labels] + for k in range(self.nlevels, 0, -1): + if lib.is_lexsorted(int64_labels[:k]): + return k + + return 0 + + @classmethod + def from_arrays(cls, arrays, sortorder=None, names=None): + """ + Convert arrays to MultiIndex + + Parameters + ---------- + arrays : list / sequence of array-likes + Each array-like gives one level's value for each data point. + len(arrays) is the number of levels. 
+ sortorder : int or None + Level of sortedness (must be lexicographically sorted by that + level) + + Returns + ------- + index : MultiIndex + + Examples + -------- + >>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']] + >>> MultiIndex.from_arrays(arrays, names=('number', 'color')) + + See Also + -------- + MultiIndex.from_tuples : Convert list of tuples to MultiIndex + MultiIndex.from_product : Make a MultiIndex from cartesian product + of iterables + """ + from pandas.core.categorical import Categorical + + if len(arrays) == 1: + name = None if names is None else names[0] + return Index(arrays[0], name=name) + + cats = [Categorical.from_array(arr, ordered=True) for arr in arrays] + levels = [c.categories for c in cats] + labels = [c.codes for c in cats] + if names is None: + names = [getattr(arr, "name", None) for arr in arrays] + + return MultiIndex(levels=levels, labels=labels, sortorder=sortorder, + names=names, verify_integrity=False) + + @classmethod + def from_tuples(cls, tuples, sortorder=None, names=None): + """ + Convert list of tuples to MultiIndex + + Parameters + ---------- + tuples : list / sequence of tuple-likes + Each tuple is the index of one row/column. + sortorder : int or None + Level of sortedness (must be lexicographically sorted by that + level) + + Returns + ------- + index : MultiIndex + + Examples + -------- + >>> tuples = [(1, u'red'), (1, u'blue'), + (2, u'red'), (2, u'blue')] + >>> MultiIndex.from_tuples(tuples, names=('number', 'color')) + + See Also + -------- + MultiIndex.from_arrays : Convert list of arrays to MultiIndex + MultiIndex.from_product : Make a MultiIndex from cartesian product + of iterables + """ + if len(tuples) == 0: + # I think this is right? Not quite sure... 
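+            # Illustrative failure mode (hypothetical session):
+            #   >>> MultiIndex.from_tuples([])
+            #   TypeError: Cannot infer number of levels from empty list
+            # an empty list carries no information about how many levels
+            # the resulting index should have.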
+ raise TypeError('Cannot infer number of levels from empty list') + + if isinstance(tuples, (np.ndarray, Index)): + if isinstance(tuples, Index): + tuples = tuples._values + + arrays = list(lib.tuples_to_object_array(tuples).T) + elif isinstance(tuples, list): + arrays = list(lib.to_object_array_tuples(tuples).T) + else: + arrays = lzip(*tuples) + + return MultiIndex.from_arrays(arrays, sortorder=sortorder, names=names) + + @classmethod + def from_product(cls, iterables, sortorder=None, names=None): + """ + Make a MultiIndex from the cartesian product of multiple iterables + + Parameters + ---------- + iterables : list / sequence of iterables + Each iterable has unique labels for each level of the index. + sortorder : int or None + Level of sortedness (must be lexicographically sorted by that + level). + names : list / sequence of strings or None + Names for the levels in the index. + + Returns + ------- + index : MultiIndex + + Examples + -------- + >>> numbers = [0, 1, 2] + >>> colors = [u'green', u'purple'] + >>> MultiIndex.from_product([numbers, colors], + names=['number', 'color']) + MultiIndex(levels=[[0, 1, 2], [u'green', u'purple']], + labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]], + names=[u'number', u'color']) + + See Also + -------- + MultiIndex.from_arrays : Convert list of arrays to MultiIndex + MultiIndex.from_tuples : Convert list of tuples to MultiIndex + """ + from pandas.core.categorical import Categorical + from pandas.tools.util import cartesian_product + + categoricals = [Categorical.from_array(it, ordered=True) + for it in iterables] + labels = cartesian_product([c.codes for c in categoricals]) + + return MultiIndex(levels=[c.categories for c in categoricals], + labels=labels, sortorder=sortorder, names=names) + + @property + def nlevels(self): + return len(self.levels) + + @property + def levshape(self): + return tuple(len(x) for x in self.levels) + + def __contains__(self, key): + hash(key) + # work around some kind of odd cython bug + 
try: + self.get_loc(key) + return True + except LookupError: + return False + + def __reduce__(self): + """Necessary for making this object picklable""" + d = dict(levels=[lev for lev in self.levels], + labels=[label for label in self.labels], + sortorder=self.sortorder, names=list(self.names)) + return ibase._new_Index, (self.__class__, d), None + + def __setstate__(self, state): + """Necessary for making this object picklable""" + + if isinstance(state, dict): + levels = state.get('levels') + labels = state.get('labels') + sortorder = state.get('sortorder') + names = state.get('names') + + elif isinstance(state, tuple): + + nd_state, own_state = state + levels, labels, sortorder, names = own_state + + self._set_levels([Index(x) for x in levels], validate=False) + self._set_labels(labels) + self._set_names(names) + self.sortorder = sortorder + self._verify_integrity() + self._reset_identity() + + def __getitem__(self, key): + if np.isscalar(key): + retval = [] + for lev, lab in zip(self.levels, self.labels): + if lab[key] == -1: + retval.append(np.nan) + else: + retval.append(lev[lab[key]]) + + return tuple(retval) + else: + if is_bool_indexer(key): + key = np.asarray(key) + sortorder = self.sortorder + else: + # cannot be sure whether the result will be sorted + sortorder = None + + new_labels = [lab[key] for lab in self.labels] + + return MultiIndex(levels=self.levels, labels=new_labels, + names=self.names, sortorder=sortorder, + verify_integrity=False) + + def take(self, indexer, axis=None): + indexer = com._ensure_platform_int(indexer) + new_labels = [lab.take(indexer) for lab in self.labels] + return MultiIndex(levels=self.levels, labels=new_labels, + names=self.names, verify_integrity=False) + + def append(self, other): + """ + Append a collection of Index options together + + Parameters + ---------- + other : Index or list/tuple of indices + + Returns + ------- + appended : Index + """ + if not isinstance(other, (list, tuple)): + other = [other] + + if 
all((isinstance(o, MultiIndex) and o.nlevels >= self.nlevels) + for o in other): + arrays = [] + for i in range(self.nlevels): + label = self.get_level_values(i) + appended = [o.get_level_values(i) for o in other] + arrays.append(label.append(appended)) + return MultiIndex.from_arrays(arrays, names=self.names) + + to_concat = (self.values, ) + tuple(k._values for k in other) + new_tuples = np.concatenate(to_concat) + + # if all(isinstance(x, MultiIndex) for x in other): + try: + return MultiIndex.from_tuples(new_tuples, names=self.names) + except: + return Index(new_tuples) + + def argsort(self, *args, **kwargs): + return self.values.argsort(*args, **kwargs) + + def repeat(self, n): + return MultiIndex(levels=self.levels, + labels=[label.view(np.ndarray).repeat(n) + for label in self.labels], names=self.names, + sortorder=self.sortorder, verify_integrity=False) + + def drop(self, labels, level=None, errors='raise'): + """ + Make new MultiIndex with passed list of labels deleted + + Parameters + ---------- + labels : array-like + Must be a list of tuples + level : int or level name, default None + + Returns + ------- + dropped : MultiIndex + """ + if level is not None: + return self._drop_from_level(labels, level) + + try: + if not isinstance(labels, (np.ndarray, Index)): + labels = com._index_labels_to_array(labels) + indexer = self.get_indexer(labels) + mask = indexer == -1 + if mask.any(): + if errors != 'ignore': + raise ValueError('labels %s not contained in axis' % + labels[mask]) + indexer = indexer[~mask] + except Exception: + pass + + inds = [] + for label in labels: + try: + loc = self.get_loc(label) + if isinstance(loc, int): + inds.append(loc) + else: + inds.extend(lrange(loc.start, loc.stop)) + except KeyError: + if errors != 'ignore': + raise + + return self.delete(inds) + + def _drop_from_level(self, labels, level): + labels = com._index_labels_to_array(labels) + i = self._get_level_number(level) + index = self.levels[i] + values = 
index.get_indexer(labels) + + mask = ~lib.ismember(self.labels[i], set(values)) + + return self[mask] + + def droplevel(self, level=0): + """ + Return Index with requested level removed. If MultiIndex has only 2 + levels, the result will be of Index type not MultiIndex. + + Parameters + ---------- + level : int/level name or list thereof + + Notes + ----- + Does not check if result index is unique or not + + Returns + ------- + index : Index or MultiIndex + """ + levels = level + if not isinstance(levels, (tuple, list)): + levels = [level] + + new_levels = list(self.levels) + new_labels = list(self.labels) + new_names = list(self.names) + + levnums = sorted(self._get_level_number(lev) for lev in levels)[::-1] + + for i in levnums: + new_levels.pop(i) + new_labels.pop(i) + new_names.pop(i) + + if len(new_levels) == 1: + + # set nan if needed + mask = new_labels[0] == -1 + result = new_levels[0].take(new_labels[0]) + if mask.any(): + result = result.putmask(mask, np.nan) + + result.name = new_names[0] + return result + else: + return MultiIndex(levels=new_levels, labels=new_labels, + names=new_names, verify_integrity=False) + + def swaplevel(self, i, j): + """ + Swap level i with level j. Do not change the ordering of anything + + Parameters + ---------- + i, j : int, string (can be mixed) + Level of index to be swapped. Can pass level name as string. + + Returns + ------- + swapped : MultiIndex + """ + new_levels = list(self.levels) + new_labels = list(self.labels) + new_names = list(self.names) + + i = self._get_level_number(i) + j = self._get_level_number(j) + + new_levels[i], new_levels[j] = new_levels[j], new_levels[i] + new_labels[i], new_labels[j] = new_labels[j], new_labels[i] + new_names[i], new_names[j] = new_names[j], new_names[i] + + return MultiIndex(levels=new_levels, labels=new_labels, + names=new_names, verify_integrity=False) + + def reorder_levels(self, order): + """ + Rearrange levels using input order. 
May not drop or duplicate levels
+
+        Parameters
+        ----------
+        order : list of int or list of str
+            New level order; each entry is a level number or level name.
+        """
+        order = [self._get_level_number(i) for i in order]
+        if len(order) != self.nlevels:
+            raise AssertionError('Length of order must be same as '
+                                 'number of levels (%d), got %d' %
+                                 (self.nlevels, len(order)))
+        new_levels = [self.levels[i] for i in order]
+        new_labels = [self.labels[i] for i in order]
+        new_names = [self.names[i] for i in order]
+
+        return MultiIndex(levels=new_levels, labels=new_labels,
+                          names=new_names, verify_integrity=False)
+
+    def __getslice__(self, i, j):
+        return self.__getitem__(slice(i, j))
+
+    def sortlevel(self, level=0, ascending=True, sort_remaining=True):
+        """
+        Sort MultiIndex at the requested level. The result will respect the
+        original ordering of the associated factor at that level.
+
+        Parameters
+        ----------
+        level : list-like, int or str, default 0
+            If a string is given, must be a name of the level.
+            If list-like, must be names or ints of levels.
+        ascending : boolean, default True
+            False to sort in descending order.
+            Can also be a list to specify a directed ordering.
+        sort_remaining : boolean, default True
+            Sort by the remaining levels after level.
+ + Returns + ------- + sorted_index : MultiIndex + """ + from pandas.core.groupby import _indexer_from_factorized + + if isinstance(level, (compat.string_types, int)): + level = [level] + level = [self._get_level_number(lev) for lev in level] + sortorder = None + + # we have a directed ordering via ascending + if isinstance(ascending, list): + if not len(level) == len(ascending): + raise ValueError("level must have same length as ascending") + + from pandas.core.groupby import _lexsort_indexer + indexer = _lexsort_indexer(self.labels, orders=ascending) + + # level ordering + else: + + labels = list(self.labels) + shape = list(self.levshape) + + # partition labels and shape + primary = tuple(labels.pop(lev - i) for i, lev in enumerate(level)) + primshp = tuple(shape.pop(lev - i) for i, lev in enumerate(level)) + + if sort_remaining: + primary += primary + tuple(labels) + primshp += primshp + tuple(shape) + else: + sortorder = level[0] + + indexer = _indexer_from_factorized(primary, primshp, + compress=False) + + if not ascending: + indexer = indexer[::-1] + + indexer = com._ensure_platform_int(indexer) + new_labels = [lab.take(indexer) for lab in self.labels] + + new_index = MultiIndex(labels=new_labels, levels=self.levels, + names=self.names, sortorder=sortorder, + verify_integrity=False) + + return new_index, indexer + + def get_indexer(self, target, method=None, limit=None, tolerance=None): + """ + Compute indexer and mask for new index given the current index. The + indexer should be then used as an input to ndarray.take to align the + current data to the new index. 
The mask determines whether labels are + found or not in the current index + + Parameters + ---------- + target : MultiIndex or Index (of tuples) + method : {'pad', 'ffill', 'backfill', 'bfill'} + pad / ffill: propagate LAST valid observation forward to next valid + backfill / bfill: use NEXT valid observation to fill gap + + Notes + ----- + This is a low-level method and probably should be used at your own risk + + Examples + -------- + >>> indexer, mask = index.get_indexer(new_index) + >>> new_values = cur_values.take(indexer) + >>> new_values[-mask] = np.nan + + Returns + ------- + (indexer, mask) : (ndarray, ndarray) + """ + method = _clean_reindex_fill_method(method) + + target = _ensure_index(target) + + target_index = target + if isinstance(target, MultiIndex): + target_index = target._tuple_index + + if not is_object_dtype(target_index.dtype): + return np.ones(len(target_index)) * -1 + + if not self.is_unique: + raise Exception('Reindexing only valid with uniquely valued Index ' + 'objects') + + self_index = self._tuple_index + + if method == 'pad' or method == 'backfill': + if tolerance is not None: + raise NotImplementedError("tolerance not implemented yet " + 'for MultiIndex') + indexer = self_index._get_fill_indexer(target, method, limit) + elif method == 'nearest': + raise NotImplementedError("method='nearest' not implemented yet " + 'for MultiIndex; see GitHub issue 9365') + else: + indexer = self_index._engine.get_indexer(target._values) + + return com._ensure_platform_int(indexer) + + def reindex(self, target, method=None, level=None, limit=None, + tolerance=None): + """ + Create index with target's values (move/add/delete values as necessary) + + Returns + ------- + new_index : pd.MultiIndex + Resulting index + indexer : np.ndarray or None + Indices of output values in original index + + """ + # GH6552: preserve names when reindexing to non-named target + # (i.e. neither Index nor Series). 
+ preserve_names = not hasattr(target, 'names') + + if level is not None: + if method is not None: + raise TypeError('Fill method not supported if level passed') + + # GH7774: preserve dtype/tz if target is empty and not an Index. + # target may be an iterator + target = ibase._ensure_has_len(target) + if len(target) == 0 and not isinstance(target, Index): + idx = self.levels[level] + attrs = idx._get_attributes_dict() + attrs.pop('freq', None) # don't preserve freq + target = type(idx)._simple_new(np.empty(0, dtype=idx.dtype), + **attrs) + else: + target = _ensure_index(target) + target, indexer, _ = self._join_level(target, level, how='right', + return_indexers=True, + keep_order=False) + else: + if self.equals(target): + indexer = None + else: + if self.is_unique: + indexer = self.get_indexer(target, method=method, + limit=limit, + tolerance=tolerance) + else: + raise Exception("cannot handle a non-unique multi-index!") + + if not isinstance(target, MultiIndex): + if indexer is None: + target = self + elif (indexer >= 0).all(): + target = self.take(indexer) + else: + # hopefully? + target = MultiIndex.from_tuples(target) + + if (preserve_names and target.nlevels == self.nlevels and + target.names != self.names): + target = target.copy(deep=False) + target.names = self.names + + return target, indexer + + @cache_readonly + def _tuple_index(self): + """ + Convert MultiIndex to an Index of tuples + + Returns + ------- + index : Index + """ + return Index(self._values) + + def get_slice_bound(self, label, side, kind): + if not isinstance(label, tuple): + label = label, + return self._partial_tup_index(label, side=side) + + def slice_locs(self, start=None, end=None, step=None, kind=None): + """ + For an ordered MultiIndex, compute the slice locations for input + labels. They can be tuples representing partial levels, e.g. for a + MultiIndex with 3 levels, you can pass a single value (corresponding to + the first level), or a 1-, 2-, or 3-tuple. 
+ + Parameters + ---------- + start : label or tuple, default None + If None, defaults to the beginning + end : label or tuple + If None, defaults to the end + step : int or None + Slice step + kind : string, optional, defaults None + + Returns + ------- + (start, end) : (int, int) + + Notes + ----- + This function assumes that the data is sorted by the first level + """ + # This function adds nothing to its parent implementation (the magic + # happens in get_slice_bound method), but it adds meaningful doc. + return super(MultiIndex, self).slice_locs(start, end, step, kind=kind) + + def _partial_tup_index(self, tup, side='left'): + if len(tup) > self.lexsort_depth: + raise KeyError('Key length (%d) was greater than MultiIndex' + ' lexsort depth (%d)' % + (len(tup), self.lexsort_depth)) + + n = len(tup) + start, end = 0, len(self) + zipped = zip(tup, self.levels, self.labels) + for k, (lab, lev, labs) in enumerate(zipped): + section = labs[start:end] + + if lab not in lev: + if not lev.is_type_compatible(lib.infer_dtype([lab])): + raise TypeError('Level type mismatch: %s' % lab) + + # short circuit + loc = lev.searchsorted(lab, side=side) + if side == 'right' and loc >= 0: + loc -= 1 + return start + section.searchsorted(loc, side=side) + + idx = lev.get_loc(lab) + if k < n - 1: + end = start + section.searchsorted(idx, side='right') + start = start + section.searchsorted(idx, side='left') + else: + return start + section.searchsorted(idx, side=side) + + def get_loc(self, key, method=None): + """ + Get integer location, slice or boolean mask for requested label or + tuple. If the key is past the lexsort depth, the return may be a + boolean mask array, otherwise it is always a slice or int. 
+ + Parameters + ---------- + key : label or tuple + method : None + + Returns + ------- + loc : int, slice object or boolean mask + """ + if method is not None: + raise NotImplementedError('only the default get_loc method is ' + 'currently supported for MultiIndex') + + def _maybe_to_slice(loc): + '''convert integer indexer to boolean mask or slice if possible''' + if not isinstance(loc, np.ndarray) or loc.dtype != 'int64': + return loc + + loc = lib.maybe_indices_to_slice(loc, len(self)) + if isinstance(loc, slice): + return loc + + mask = np.empty(len(self), dtype='bool') + mask.fill(False) + mask[loc] = True + return mask + + if not isinstance(key, tuple): + loc = self._get_level_indexer(key, level=0) + return _maybe_to_slice(loc) + + keylen = len(key) + if self.nlevels < keylen: + raise KeyError('Key length ({0}) exceeds index depth ({1})' + ''.format(keylen, self.nlevels)) + + if keylen == self.nlevels and self.is_unique: + + def _maybe_str_to_time_stamp(key, lev): + if lev.is_all_dates and not isinstance(key, Timestamp): + try: + return Timestamp(key, tz=getattr(lev, 'tz', None)) + except Exception: + pass + return key + + key = _values_from_object(key) + key = tuple(map(_maybe_str_to_time_stamp, key, self.levels)) + return self._engine.get_loc(key) + + # -- partial selection or non-unique index + # break the key into 2 parts based on the lexsort_depth of the index; + # the first part returns a continuous slice of the index; the 2nd part + # needs linear search within the slice + i = self.lexsort_depth + lead_key, follow_key = key[:i], key[i:] + start, stop = (self.slice_locs(lead_key, lead_key) + if lead_key else (0, len(self))) + + if start == stop: + raise KeyError(key) + + if not follow_key: + return slice(start, stop) + + warnings.warn('indexing past lexsort depth may impact performance.', + PerformanceWarning, stacklevel=10) + + loc = np.arange(start, stop, dtype='int64') + + for i, k in enumerate(follow_key, len(lead_key)): + mask = 
self.labels[i][loc] == self.levels[i].get_loc(k) + if not mask.all(): + loc = loc[mask] + if not len(loc): + raise KeyError(key) + + return (_maybe_to_slice(loc) if len(loc) != stop - start else + slice(start, stop)) + + def get_loc_level(self, key, level=0, drop_level=True): + """ + Get integer location slice for requested label or tuple + + Parameters + ---------- + key : label or tuple + level : int/level name or list thereof + + Returns + ------- + loc : int or slice object + """ + + def maybe_droplevels(indexer, levels, drop_level): + if not drop_level: + return self[indexer] + # kludgearound + orig_index = new_index = self[indexer] + levels = [self._get_level_number(i) for i in levels] + for i in sorted(levels, reverse=True): + try: + new_index = new_index.droplevel(i) + except: + + # no dropping here + return orig_index + return new_index + + if isinstance(level, (tuple, list)): + if len(key) != len(level): + raise AssertionError('Key for location must have same ' + 'length as number of levels') + result = None + for lev, k in zip(level, key): + loc, new_index = self.get_loc_level(k, level=lev) + if isinstance(loc, slice): + mask = np.zeros(len(self), dtype=bool) + mask[loc] = True + loc = mask + + result = loc if result is None else result & loc + + return result, maybe_droplevels(result, level, drop_level) + + level = self._get_level_number(level) + + # kludge for #1796 + if isinstance(key, list): + key = tuple(key) + + if isinstance(key, tuple) and level == 0: + + try: + if key in self.levels[0]: + indexer = self._get_level_indexer(key, level=level) + new_index = maybe_droplevels(indexer, [0], drop_level) + return indexer, new_index + except TypeError: + pass + + if not any(isinstance(k, slice) for k in key): + + # partial selection + # optionally get indexer to avoid re-calculation + def partial_selection(key, indexer=None): + if indexer is None: + indexer = self.get_loc(key) + ilevels = [i for i in range(len(key)) + if key[i] != slice(None, None)] + 
return indexer, maybe_droplevels(indexer, ilevels, + drop_level) + + if len(key) == self.nlevels: + + if self.is_unique: + + # here we have a completely specified key, but are + # using some partial string matching here + # GH4758 + all_dates = [(l.is_all_dates and + not isinstance(k, compat.string_types)) + for k, l in zip(key, self.levels)] + can_index_exactly = any(all_dates) + if (any([l.is_all_dates + for k, l in zip(key, self.levels)]) and + not can_index_exactly): + indexer = self.get_loc(key) + + # we have a multiple selection here + if (not isinstance(indexer, slice) or + indexer.stop - indexer.start != 1): + return partial_selection(key, indexer) + + key = tuple(self[indexer].tolist()[0]) + + return (self._engine.get_loc(_values_from_object(key)), + None) + else: + return partial_selection(key) + else: + return partial_selection(key) + else: + indexer = None + for i, k in enumerate(key): + if not isinstance(k, slice): + k = self._get_level_indexer(k, level=i) + if isinstance(k, slice): + # everything + if k.start == 0 and k.stop == len(self): + k = slice(None, None) + else: + k_index = k + + if isinstance(k, slice): + if k == slice(None, None): + continue + else: + raise TypeError(key) + + if indexer is None: + indexer = k_index + else: # pragma: no cover + indexer &= k_index + if indexer is None: + indexer = slice(None, None) + ilevels = [i for i in range(len(key)) + if key[i] != slice(None, None)] + return indexer, maybe_droplevels(indexer, ilevels, drop_level) + else: + indexer = self._get_level_indexer(key, level=level) + return indexer, maybe_droplevels(indexer, [level], drop_level) + + def _get_level_indexer(self, key, level=0, indexer=None): + # return an indexer, boolean array or a slice showing where the key is + # in the totality of values + # if the indexer is provided, then use this + + level_index = self.levels[level] + labels = self.labels[level] + + def convert_indexer(start, stop, step, indexer=indexer, labels=labels): + # given the inputs 
and the labels/indexer, compute an indexer set
+            # if we have a provided indexer, then this need not consider
+            # the entire labels set
+
+            r = np.arange(start, stop, step)
+            if indexer is not None and len(indexer) != len(labels):
+
+                # we have an indexer which maps the locations in the labels
+                # that we have already selected (and is not an indexer for the
+                # entire set) otherwise this is wasteful so we only need to
+                # examine locations that are in this set the only magic here is
+                # that the result are the mappings to the set that we have
+                # selected
+                from pandas import Series
+                mapper = Series(indexer)
+                indexer = labels.take(com._ensure_platform_int(indexer))
+                result = Series(Index(indexer).isin(r).nonzero()[0])
+                m = result.map(mapper)._values
+
+            else:
+                m = np.zeros(len(labels), dtype=bool)
+                m[np.in1d(labels, r, assume_unique=True)] = True
+
+            return m
+
+        if isinstance(key, slice):
+            # handle a slice, returning a slice if we can
+            # otherwise a boolean indexer
+
+            try:
+                if key.start is not None:
+                    start = level_index.get_loc(key.start)
+                else:
+                    start = 0
+                if key.stop is not None:
+                    stop = level_index.get_loc(key.stop)
+                else:
+                    stop = len(level_index) - 1
+                step = key.step
+            except KeyError:
+
+                # we have a partial slice (like looking up a partial date
+                # string)
+                start = stop = level_index.slice_indexer(key.start, key.stop,
+                                                         key.step)
+                step = start.step
+
+            if isinstance(start, slice) or isinstance(stop, slice):
+                # we have a slice for start and/or stop
+                # a partial date slicer on a DatetimeIndex generates a slice
+                # note that the stop ALREADY includes the stopped point (if
+                # it was a string sliced)
+                return convert_indexer(start.start, stop.stop, step)
+
+            elif level > 0 or self.lexsort_depth == 0 or step is not None:
+                # need the same semantics as right-searching
+                # when we are using a slice
+                # so include the stop+1 (so we include stop)
+                return convert_indexer(start, stop + 1, step)
+            else:
+                # sorted, so can return slice
object -> view + i = labels.searchsorted(start, side='left') + j = labels.searchsorted(stop, side='right') + return slice(i, j, step) + + else: + + loc = level_index.get_loc(key) + if level > 0 or self.lexsort_depth == 0: + return np.array(labels == loc, dtype=bool) + else: + # sorted, so can return slice object -> view + i = labels.searchsorted(loc, side='left') + j = labels.searchsorted(loc, side='right') + return slice(i, j) + + def get_locs(self, tup): + """ + Given a tuple of slices/lists/labels/boolean indexer to a level-wise + spec produce an indexer to extract those locations + + Parameters + ---------- + key : tuple of (slices/list/labels) + + Returns + ------- + locs : integer list of locations or boolean indexer suitable + for passing to iloc + """ + + # must be lexsorted to at least as many levels + if not self.is_lexsorted_for_tuple(tup): + raise KeyError('MultiIndex Slicing requires the index to be fully ' + 'lexsorted tuple len ({0}), lexsort depth ' + '({1})'.format(len(tup), self.lexsort_depth)) + + # indexer + # this is the list of all values that we want to select + n = len(self) + indexer = None + + def _convert_to_indexer(r): + # return an indexer + if isinstance(r, slice): + m = np.zeros(n, dtype=bool) + m[r] = True + r = m.nonzero()[0] + elif is_bool_indexer(r): + if len(r) != n: + raise ValueError("cannot index with a boolean indexer " + "that is not the same length as the " + "index") + r = r.nonzero()[0] + from .numeric import Int64Index + return Int64Index(r) + + def _update_indexer(idxr, indexer=indexer): + if indexer is None: + indexer = Index(np.arange(n)) + if idxr is None: + return indexer + return indexer & idxr + + for i, k in enumerate(tup): + + if is_bool_indexer(k): + # a boolean indexer, must be the same length! 
+ k = np.asarray(k) + indexer = _update_indexer(_convert_to_indexer(k), + indexer=indexer) + + elif is_list_like(k): + # a collection of labels to include from this level (these + # are or'd) + indexers = None + for x in k: + try: + idxrs = _convert_to_indexer( + self._get_level_indexer(x, level=i, + indexer=indexer)) + indexers = (idxrs if indexers is None + else indexers | idxrs) + except KeyError: + + # ignore not founds + continue + + if indexers is not None: + indexer = _update_indexer(indexers, indexer=indexer) + else: + from .numeric import Int64Index + # no matches we are done + return Int64Index([])._values + + elif is_null_slice(k): + # empty slice + indexer = _update_indexer(None, indexer=indexer) + + elif isinstance(k, slice): + + # a slice, include BOTH of the labels + indexer = _update_indexer(_convert_to_indexer( + self._get_level_indexer(k, level=i, indexer=indexer)), + indexer=indexer) + else: + # a single label + indexer = _update_indexer(_convert_to_indexer( + self.get_loc_level(k, level=i, drop_level=False)[0]), + indexer=indexer) + + # empty indexer + if indexer is None: + return Int64Index([])._values + return indexer._values + + def truncate(self, before=None, after=None): + """ + Slice index between two labels / tuples, return new MultiIndex + + Parameters + ---------- + before : label or tuple, can be partial. Default None + None defaults to start + after : label or tuple, can be partial. 
Default None + None defaults to end + + Returns + ------- + truncated : MultiIndex + """ + if after and before and after < before: + raise ValueError('after < before') + + i, j = self.levels[0].slice_locs(before, after) + left, right = self.slice_locs(before, after) + + new_levels = list(self.levels) + new_levels[0] = new_levels[0][i:j] + + new_labels = [lab[left:right] for lab in self.labels] + new_labels[0] = new_labels[0] - i + + return MultiIndex(levels=new_levels, labels=new_labels, + verify_integrity=False) + + def equals(self, other): + """ + Determines if two MultiIndex objects have the same labeling information + (the levels themselves do not necessarily have to be the same) + + See also + -------- + equal_levels + """ + if self.is_(other): + return True + + if not isinstance(other, MultiIndex): + return array_equivalent(self._values, + _values_from_object(_ensure_index(other))) + + if self.nlevels != other.nlevels: + return False + + if len(self) != len(other): + return False + + for i in range(self.nlevels): + svalues = com.take_nd(np.asarray(self.levels[i]._values), + self.labels[i], allow_fill=False) + ovalues = com.take_nd(np.asarray(other.levels[i]._values), + other.labels[i], allow_fill=False) + if not array_equivalent(svalues, ovalues): + return False + + return True + + def equal_levels(self, other): + """ + Return True if the levels of both MultiIndex objects are the same + + """ + if self.nlevels != other.nlevels: + return False + + for i in range(self.nlevels): + if not self.levels[i].equals(other.levels[i]): + return False + return True + + def union(self, other): + """ + Form the union of two MultiIndex objects, sorting if possible + + Parameters + ---------- + other : MultiIndex or array / Index of tuples + + Returns + ------- + Index + + >>> index.union(index2) + """ + self._assert_can_do_setop(other) + other, result_names = self._convert_can_do_setop(other) + + if len(other) == 0 or self.equals(other): + return self + + uniq_tuples = 
lib.fast_unique_multiple([self._values, other._values]) + return MultiIndex.from_arrays(lzip(*uniq_tuples), sortorder=0, + names=result_names) + + def intersection(self, other): + """ + Form the intersection of two MultiIndex objects, sorting if possible + + Parameters + ---------- + other : MultiIndex or array / Index of tuples + + Returns + ------- + Index + """ + self._assert_can_do_setop(other) + other, result_names = self._convert_can_do_setop(other) + + if self.equals(other): + return self + + self_tuples = self._values + other_tuples = other._values + uniq_tuples = sorted(set(self_tuples) & set(other_tuples)) + if len(uniq_tuples) == 0: + return MultiIndex(levels=[[]] * self.nlevels, + labels=[[]] * self.nlevels, + names=result_names, verify_integrity=False) + else: + return MultiIndex.from_arrays(lzip(*uniq_tuples), sortorder=0, + names=result_names) + + def difference(self, other): + """ + Compute sorted set difference of two MultiIndex objects + + Returns + ------- + diff : MultiIndex + """ + self._assert_can_do_setop(other) + other, result_names = self._convert_can_do_setop(other) + + if len(other) == 0: + return self + + if self.equals(other): + return MultiIndex(levels=[[]] * self.nlevels, + labels=[[]] * self.nlevels, + names=result_names, verify_integrity=False) + + difference = sorted(set(self._values) - set(other._values)) + + if len(difference) == 0: + return MultiIndex(levels=[[]] * self.nlevels, + labels=[[]] * self.nlevels, + names=result_names, verify_integrity=False) + else: + return MultiIndex.from_tuples(difference, sortorder=0, + names=result_names) + + def astype(self, dtype): + if not is_object_dtype(np.dtype(dtype)): + raise TypeError('Setting %s dtype to anything other than object ' + 'is not supported' % self.__class__) + return self._shallow_copy() + + def _convert_can_do_setop(self, other): + result_names = self.names + + if not hasattr(other, 'names'): + if len(other) == 0: + other = MultiIndex(levels=[[]] * self.nlevels, + 
labels=[[]] * self.nlevels, + verify_integrity=False) + else: + msg = 'other must be a MultiIndex or a list of tuples' + try: + other = MultiIndex.from_tuples(other) + except: + raise TypeError(msg) + else: + result_names = self.names if self.names == other.names else None + return other, result_names + + def insert(self, loc, item): + """ + Make new MultiIndex inserting new item at location + + Parameters + ---------- + loc : int + item : tuple + Must be same length as number of levels in the MultiIndex + + Returns + ------- + new_index : Index + """ + # Pad the key with empty strings if lower levels of the key + # aren't specified: + if not isinstance(item, tuple): + item = (item, ) + ('', ) * (self.nlevels - 1) + elif len(item) != self.nlevels: + raise ValueError('Item must have length equal to number of ' + 'levels.') + + new_levels = [] + new_labels = [] + for k, level, labels in zip(item, self.levels, self.labels): + if k not in level: + # have to insert into level + # must insert at end otherwise you have to recompute all the + # other labels + lev_loc = len(level) + level = level.insert(lev_loc, k) + else: + lev_loc = level.get_loc(k) + + new_levels.append(level) + new_labels.append(np.insert(_ensure_int64(labels), loc, lev_loc)) + + return MultiIndex(levels=new_levels, labels=new_labels, + names=self.names, verify_integrity=False) + + def delete(self, loc): + """ + Make new index with passed location deleted + + Returns + ------- + new_index : MultiIndex + """ + new_labels = [np.delete(lab, loc) for lab in self.labels] + return MultiIndex(levels=self.levels, labels=new_labels, + names=self.names, verify_integrity=False) + + get_major_bounds = slice_locs + + __bounds = None + + @property + def _bounds(self): + """ + Return or compute and return slice points for level 0, assuming + sortedness + """ + if self.__bounds is None: + inds = np.arange(len(self.levels[0])) + self.__bounds = self.labels[0].searchsorted(inds) + + return self.__bounds + + def 
_wrap_joined_index(self, joined, other): + names = self.names if self.names == other.names else None + return MultiIndex.from_tuples(joined, names=names) + + @Appender(Index.isin.__doc__) + def isin(self, values, level=None): + if level is None: + return lib.ismember(np.array(self), set(values)) + else: + num = self._get_level_number(level) + levs = self.levels[num] + labs = self.labels[num] + + sought_labels = levs.isin(values).nonzero()[0] + if levs.size == 0: + return np.zeros(len(labs), dtype=np.bool_) + else: + return np.lib.arraysetops.in1d(labs, sought_labels) + + +MultiIndex._add_numeric_methods_disabled() +MultiIndex._add_logical_methods_disabled() + + +def _sparsify(label_list, start=0, sentinel=''): + pivoted = lzip(*label_list) + k = len(label_list) + + result = pivoted[:start + 1] + prev = pivoted[start] + + for cur in pivoted[start + 1:]: + sparse_cur = [] + + for i, (p, t) in enumerate(zip(prev, cur)): + if i == k - 1: + sparse_cur.append(t) + result.append(sparse_cur) + break + + if p == t: + sparse_cur.append(sentinel) + else: + sparse_cur.extend(cur[i:]) + result.append(sparse_cur) + break + + prev = cur + + return lzip(*result) + + +def _get_na_rep(dtype): + return {np.datetime64: 'NaT', np.timedelta64: 'NaT'}.get(dtype, 'NaN') diff --git a/pandas/indexes/numeric.py b/pandas/indexes/numeric.py new file mode 100644 index 0000000000000..61d93284adbbb --- /dev/null +++ b/pandas/indexes/numeric.py @@ -0,0 +1,369 @@ +import numpy as np +import pandas.lib as lib +import pandas.algos as _algos +import pandas.index as _index + +from pandas import compat +from pandas.indexes.base import Index, InvalidIndexError +from pandas.util.decorators import Appender, cache_readonly +import pandas.core.common as com +import pandas.indexes.base as ibase + + +class NumericIndex(Index): + """ + Provide numeric type operations + + This is an abstract class + + """ + _is_numeric_dtype = True + + def _maybe_cast_slice_bound(self, label, side, kind): + """ + This function 
should be overloaded in subclasses that allow non-trivial + casting on label-slice bounds, e.g. datetime-like indices allowing + strings containing formatted datetimes. + + Parameters + ---------- + label : object + side : {'left', 'right'} + kind : string / None + + Returns + ------- + label : object + + Notes + ----- + Value of `side` parameter should be validated in caller. + + """ + + # we are a numeric index, so we accept + # integer/floats directly + if not (com.is_integer(label) or com.is_float(label)): + self._invalid_indexer('slice', label) + + return label + + def _convert_tolerance(self, tolerance): + try: + return float(tolerance) + except ValueError: + raise ValueError('tolerance argument for %s must be numeric: %r' % + (type(self).__name__, tolerance)) + + +class Int64Index(NumericIndex): + """ + Immutable ndarray implementing an ordered, sliceable set. The basic object + storing axis labels for all pandas objects. Int64Index is a special case + of `Index` with purely integer labels. This is the default index type used + by the DataFrame and Series ctors when no explicit index is provided by the + user. 
+ + Parameters + ---------- + data : array-like (1-dimensional) + dtype : NumPy dtype (default: int64) + copy : bool + Make a copy of input ndarray + name : object + Name to be stored in the index + + Notes + ----- + An Index instance can **only** contain hashable objects + """ + + _typ = 'int64index' + _groupby = _algos.groupby_int64 + _arrmap = _algos.arrmap_int64 + _left_indexer_unique = _algos.left_join_indexer_unique_int64 + _left_indexer = _algos.left_join_indexer_int64 + _inner_indexer = _algos.inner_join_indexer_int64 + _outer_indexer = _algos.outer_join_indexer_int64 + + _can_hold_na = False + + _engine_type = _index.Int64Engine + + def __new__(cls, data=None, dtype=None, copy=False, name=None, + fastpath=False, **kwargs): + + if fastpath: + return cls._simple_new(data, name=name) + + # isscalar, generators handled in coerce_to_ndarray + data = cls._coerce_to_ndarray(data) + + if issubclass(data.dtype.type, compat.string_types): + cls._string_data_error(data) + + elif issubclass(data.dtype.type, np.integer): + # don't force the upcast as we may be dealing + # with a platform int + if (dtype is None or + not issubclass(np.dtype(dtype).type, np.integer)): + dtype = np.int64 + + subarr = np.array(data, dtype=dtype, copy=copy) + else: + subarr = np.array(data, dtype=np.int64, copy=copy) + if len(data) > 0: + if (subarr != data).any(): + raise TypeError('Unsafe NumPy casting to integer, you must' + ' explicitly cast') + + return cls._simple_new(subarr, name=name) + + @property + def inferred_type(self): + return 'integer' + + @property + def asi8(self): + # do not cache or you'll create a memory leak + return self.values.view('i8') + + @property + def is_all_dates(self): + """ + Checks that all the labels are datetime objects + """ + return False + + def equals(self, other): + """ + Determines if two Index objects contain the same elements. 
+ """ + if self.is_(other): + return True + + try: + return com.array_equivalent(com._values_from_object(self), + com._values_from_object(other)) + except TypeError: + # e.g. fails in numpy 1.6 with DatetimeIndex #1681 + return False + + def _wrap_joined_index(self, joined, other): + name = self.name if self.name == other.name else None + return Int64Index(joined, name=name) + + +Int64Index._add_numeric_methods() +Int64Index._add_logical_methods() + + +class Float64Index(NumericIndex): + """ + Immutable ndarray implementing an ordered, sliceable set. The basic object + storing axis labels for all pandas objects. Float64Index is a special case + of `Index` with purely floating point labels. + + Parameters + ---------- + data : array-like (1-dimensional) + dtype : NumPy dtype (default: object) + copy : bool + Make a copy of input ndarray + name : object + Name to be stored in the index + + Notes + ----- + An Float64Index instance can **only** contain hashable objects + """ + + _typ = 'float64index' + _engine_type = _index.Float64Engine + _groupby = _algos.groupby_float64 + _arrmap = _algos.arrmap_float64 + _left_indexer_unique = _algos.left_join_indexer_unique_float64 + _left_indexer = _algos.left_join_indexer_float64 + _inner_indexer = _algos.inner_join_indexer_float64 + _outer_indexer = _algos.outer_join_indexer_float64 + + def __new__(cls, data=None, dtype=None, copy=False, name=None, + fastpath=False, **kwargs): + + if fastpath: + return cls._simple_new(data, name) + + data = cls._coerce_to_ndarray(data) + + if issubclass(data.dtype.type, compat.string_types): + cls._string_data_error(data) + + if dtype is None: + dtype = np.float64 + + try: + subarr = np.array(data, dtype=dtype, copy=copy) + except: + raise TypeError('Unsafe NumPy casting, you must explicitly cast') + + # coerce to float64 for storage + if subarr.dtype != np.float64: + subarr = subarr.astype(np.float64) + + return cls._simple_new(subarr, name) + + @property + def inferred_type(self): + return 
'floating' + + def astype(self, dtype): + if np.dtype(dtype) not in (np.object, np.float64): + raise TypeError('Setting %s dtype to anything other than ' + 'float64 or object is not supported' % + self.__class__) + return Index(self._values, name=self.name, dtype=dtype) + + def _convert_scalar_indexer(self, key, kind=None): + """ + convert a scalar indexer + + Parameters + ---------- + key : label of the slice bound + kind : optional, type of the indexing operation (loc/ix/iloc/None) + + right now we are converting + floats -> ints if the index supports it + """ + + if kind == 'iloc': + if com.is_integer(key): + return key + + return (super(Float64Index, self) + ._convert_scalar_indexer(key, kind=kind)) + + return key + + def _convert_slice_indexer(self, key, kind=None): + """ + convert a slice indexer, by definition these are labels + unless we are iloc + + Parameters + ---------- + key : label of the slice bound + kind : optional, type of the indexing operation (loc/ix/iloc/None) + """ + + # if we are not a slice, then we are done + if not isinstance(key, slice): + return key + + if kind == 'iloc': + return super(Float64Index, self)._convert_slice_indexer(key, + kind=kind) + + # translate to locations + return self.slice_indexer(key.start, key.stop, key.step) + + def _format_native_types(self, na_rep='', float_format=None, decimal='.', + quoting=None, **kwargs): + from pandas.core.format import FloatArrayFormatter + formatter = FloatArrayFormatter(self.values, na_rep=na_rep, + float_format=float_format, + decimal=decimal, quoting=quoting) + return formatter.get_formatted_data() + + def get_value(self, series, key): + """ we always want to get an index value, never a value """ + if not np.isscalar(key): + raise InvalidIndexError + + from pandas.core.indexing import maybe_droplevels + from pandas.core.series import Series + + k = com._values_from_object(key) + loc = self.get_loc(k) + new_values = com._values_from_object(series)[loc] + + if np.isscalar(new_values) 
or new_values is None: + return new_values + + new_index = self[loc] + new_index = maybe_droplevels(new_index, k) + return Series(new_values, index=new_index, name=series.name) + + def equals(self, other): + """ + Determines if two Index objects contain the same elements. + """ + if self is other: + return True + + # need to compare nans locations and make sure that they are the same + # since nans don't compare equal this is a bit tricky + try: + if not isinstance(other, Float64Index): + other = self._constructor(other) + if (not com.is_dtype_equal(self.dtype, other.dtype) or + self.shape != other.shape): + return False + left, right = self._values, other._values + return ((left == right) | (self._isnan & other._isnan)).all() + except TypeError: + # e.g. fails in numpy 1.6 with DatetimeIndex #1681 + return False + + def __contains__(self, other): + if super(Float64Index, self).__contains__(other): + return True + + try: + # if other is a sequence this throws a ValueError + return np.isnan(other) and self.hasnans + except ValueError: + try: + return len(other) <= 1 and ibase._try_get_item(other) in self + except TypeError: + return False + except: + return False + + def get_loc(self, key, method=None, tolerance=None): + try: + if np.all(np.isnan(key)): + nan_idxs = self._nan_idxs + try: + return nan_idxs.item() + except (ValueError, IndexError): + # should only need to catch ValueError here but on numpy + # 1.7 .item() can raise IndexError when NaNs are present + return nan_idxs + except (TypeError, NotImplementedError): + pass + return super(Float64Index, self).get_loc(key, method=method, + tolerance=tolerance) + + @property + def is_all_dates(self): + """ + Checks that all the labels are datetime objects + """ + return False + + @cache_readonly + def is_unique(self): + return super(Float64Index, self).is_unique and self._nan_idxs.size < 2 + + @Appender(Index.isin.__doc__) + def isin(self, values, level=None): + value_set = set(values) + if level is not None: + 
self._validate_index_level(level) + return lib.ismember_nans(np.array(self), value_set, + com.isnull(list(value_set)).any()) + + +Float64Index._add_numeric_methods() +Float64Index._add_logical_methods_disabled() diff --git a/pandas/indexes/range.py b/pandas/indexes/range.py new file mode 100644 index 0000000000000..f4f5745659002 --- /dev/null +++ b/pandas/indexes/range.py @@ -0,0 +1,623 @@ +from sys import getsizeof +import operator + +import numpy as np +import pandas.index as _index + +from pandas import compat +from pandas.compat import lrange, range +from pandas.indexes.base import Index +from pandas.util.decorators import Appender, cache_readonly +import pandas.core.common as com +import pandas.indexes.base as ibase + +from pandas.indexes.numeric import Int64Index + + +class RangeIndex(Int64Index): + + """ + Immutable Index implementing a monotonic range. RangeIndex is a + memory-saving special case of Int64Index limited to representing + monotonic ranges. + + Parameters + ---------- + start : int (default: 0) + stop : int (default: 0) + step : int (default: 1) + name : object, optional + Name to be stored in the index + copy : bool, default False + Make a copy of input if its a RangeIndex + + """ + + _typ = 'rangeindex' + _engine_type = _index.Int64Engine + + def __new__(cls, start=None, stop=None, step=None, name=None, dtype=None, + fastpath=False, copy=False, **kwargs): + + if fastpath: + return cls._simple_new(start, stop, step, name=name) + + cls._validate_dtype(dtype) + + # RangeIndex + if isinstance(start, RangeIndex): + if not copy: + return start + if name is None: + name = getattr(start, 'name', None) + start, stop, step = start._start, start._stop, start._step + + # validate the arguments + def _ensure_int(value, field): + try: + new_value = int(value) + except: + new_value = value + + if not com.is_integer(new_value) or new_value != value: + raise TypeError("RangeIndex(...) 
must be called with integers," + " {value} was passed for {field}".format( + value=type(value).__name__, + field=field) + ) + + return new_value + + if start is None: + start = 0 + else: + start = _ensure_int(start, 'start') + if stop is None: + stop = start + start = 0 + else: + stop = _ensure_int(stop, 'stop') + if step is None: + step = 1 + elif step == 0: + raise ValueError("Step must not be zero") + else: + step = _ensure_int(step, 'step') + + return cls._simple_new(start, stop, step, name) + + @classmethod + def from_range(cls, data, name=None, dtype=None, **kwargs): + """ create RangeIndex from a range (py3), or xrange (py2) object """ + if not isinstance(data, range): + raise TypeError( + '{0}(...) must be called with object coercible to a ' + 'range, {1} was passed'.format(cls.__name__, repr(data))) + + if compat.PY3: + step = data.step + stop = data.stop + start = data.start + else: + # seems we only have indexing ops to infer + # rather than direct accessors + if len(data) > 1: + step = data[1] - data[0] + stop = data[-1] + step + start = data[0] + elif len(data): + start = data[0] + stop = data[0] + 1 + step = 1 + else: + start = stop = 0 + step = 1 + return RangeIndex(start, stop, step, dtype=dtype, name=name, **kwargs) + + @classmethod + def _simple_new(cls, start, stop=None, step=None, name=None, + dtype=None, **kwargs): + result = object.__new__(cls) + + # handle passed None, non-integers + if start is None or not com.is_integer(start): + try: + return RangeIndex(start, stop, step, name=name, **kwargs) + except TypeError: + return Index(start, stop, step, name=name, **kwargs) + + result._start = start + result._stop = stop or 0 + result._step = step or 1 + result.name = name + for k, v in compat.iteritems(kwargs): + setattr(result, k, v) + + result._reset_identity() + return result + + @staticmethod + def _validate_dtype(dtype): + """ require dtype to be None or int64 """ + if not (dtype is None or com.is_int64_dtype(dtype)): + raise 
TypeError('Invalid to pass a non-int64 dtype to RangeIndex') + + @cache_readonly + def _constructor(self): + """ return the class to use for construction """ + return Int64Index + + @cache_readonly + def _data(self): + return np.arange(self._start, self._stop, self._step, dtype=np.int64) + + @cache_readonly + def _int64index(self): + return Int64Index(self._data, name=self.name, fastpath=True) + + def _get_data_as_items(self): + """ return a list of tuples of start, stop, step """ + return [('start', self._start), + ('stop', self._stop), + ('step', self._step)] + + def __reduce__(self): + d = self._get_attributes_dict() + d.update(dict(self._get_data_as_items())) + return ibase._new_Index, (self.__class__, d), None + + def _format_attrs(self): + """ + Return a list of tuples of the (attr, formatted_value) + """ + attrs = self._get_data_as_items() + if self.name is not None: + attrs.append(('name', ibase.default_pprint(self.name))) + return attrs + + def _format_data(self): + # we are formatting thru the attributes + return None + + @cache_readonly + def nbytes(self): + """ return the number of bytes in the underlying data """ + return sum([getsizeof(getattr(self, v)) for v in + ['_start', '_stop', '_step']]) + + def memory_usage(self, deep=False): + """ + Memory usage of my values + + Parameters + ---------- + deep : bool + Introspect the data deeply, interrogate + `object` dtypes for system-level memory consumption + + Returns + ------- + bytes used + + Notes + ----- + Memory usage does not include memory consumed by elements that + are not components of the array if deep=False + + See Also + -------- + numpy.ndarray.nbytes + """ + return self.nbytes + + @property + def dtype(self): + return np.dtype(np.int64) + + @property + def is_unique(self): + """ return if the index has unique values """ + return True + + @property + def has_duplicates(self): + return False + + def tolist(self): + return lrange(self._start, self._stop, self._step) + + def _shallow_copy(self, 
values=None, **kwargs): + """ create a new Index, don't copy the data, use the same object attributes + with passed in attributes taking precedence """ + if values is None: + return RangeIndex(name=self.name, fastpath=True, + **dict(self._get_data_as_items())) + else: + kwargs.setdefault('name', self.name) + return self._int64index._shallow_copy(values, **kwargs) + + @Appender(ibase._index_shared_docs['copy']) + def copy(self, name=None, deep=False, dtype=None, **kwargs): + self._validate_dtype(dtype) + if name is None: + name = self.name + return RangeIndex(name=name, fastpath=True, + **dict(self._get_data_as_items())) + + def argsort(self, *args, **kwargs): + """ + return an ndarray indexer of the underlying data + + See also + -------- + numpy.ndarray.argsort + """ + if self._step > 0: + return np.arange(len(self)) + else: + return np.arange(len(self) - 1, -1, -1) + + def equals(self, other): + """ + Determines if two Index objects contain the same elements. + """ + if isinstance(other, RangeIndex): + ls = len(self) + lo = len(other) + return (ls == lo == 0 or + ls == lo == 1 and + self._start == other._start or + ls == lo and + self._start == other._start and + self._step == other._step) + + return super(RangeIndex, self).equals(other) + + def intersection(self, other): + """ + Form the intersection of two Index objects. 
Sortedness of the result is + not guaranteed + + Parameters + ---------- + other : Index or array-like + + Returns + ------- + intersection : Index + """ + if not isinstance(other, RangeIndex): + return super(RangeIndex, self).intersection(other) + + # check whether intervals intersect + # deals with in- and decreasing ranges + int_low = max(min(self._start, self._stop + 1), + min(other._start, other._stop + 1)) + int_high = min(max(self._stop, self._start + 1), + max(other._stop, other._start + 1)) + if int_high <= int_low: + return RangeIndex() + + # Method hint: linear Diophantine equation + # solve intersection problem + # performance hint: for identical step sizes, could use + # cheaper alternative + gcd, s, t = self._extended_gcd(self._step, other._step) + + # check whether element sets intersect + if (self._start - other._start) % gcd: + return RangeIndex() + + # calculate parameters for the RangeIndex describing the + # intersection disregarding the lower bounds + tmp_start = self._start + (other._start - self._start) * \ + self._step // gcd * s + new_step = self._step * other._step // gcd + new_index = RangeIndex(tmp_start, int_high, new_step, fastpath=True) + + # adjust index to limiting interval + new_index._start = new_index._min_fitting_element(int_low) + return new_index + + def _min_fitting_element(self, lower_limit): + """Returns the smallest element greater than or equal to the limit""" + no_steps = -(-(lower_limit - self._start) // abs(self._step)) + return self._start + abs(self._step) * no_steps + + def _max_fitting_element(self, upper_limit): + """Returns the largest element smaller than or equal to the limit""" + no_steps = (upper_limit - self._start) // abs(self._step) + return self._start + abs(self._step) * no_steps + + def _extended_gcd(self, a, b): + """ + Extended Euclidean algorithms to solve Bezout's identity: + a*x + b*y = gcd(x, y) + Finds one particular solution for x, y: s, t + Returns: gcd, s, t + """ + s, old_s = 0, 1 + t, old_t 
= 1, 0 + r, old_r = b, a + while r: + quotient = old_r // r + old_r, r = r, old_r - quotient * r + old_s, s = s, old_s - quotient * s + old_t, t = t, old_t - quotient * t + return old_r, old_s, old_t + + def union(self, other): + """ + Form the union of two Index objects and sorts if possible + + Parameters + ---------- + other : Index or array-like + + Returns + ------- + union : Index + """ + self._assert_can_do_setop(other) + if len(other) == 0 or self.equals(other): + return self + if len(self) == 0: + return other + if isinstance(other, RangeIndex): + start_s, step_s = self._start, self._step + end_s = self._start + self._step * (len(self) - 1) + start_o, step_o = other._start, other._step + end_o = other._start + other._step * (len(other) - 1) + if self._step < 0: + start_s, step_s, end_s = end_s, -step_s, start_s + if other._step < 0: + start_o, step_o, end_o = end_o, -step_o, start_o + if len(self) == 1 and len(other) == 1: + step_s = step_o = abs(self._start - other._start) + elif len(self) == 1: + step_s = step_o + elif len(other) == 1: + step_o = step_s + start_r = min(start_s, start_o) + end_r = max(end_s, end_o) + if step_o == step_s: + if ((start_s - start_o) % step_s == 0 and + (start_s - end_o) <= step_s and + (start_o - end_s) <= step_s): + return RangeIndex(start_r, end_r + step_s, step_s) + if ((step_s % 2 == 0) and + (abs(start_s - start_o) <= step_s / 2) and + (abs(end_s - end_o) <= step_s / 2)): + return RangeIndex(start_r, end_r + step_s / 2, step_s / 2) + elif step_o % step_s == 0: + if ((start_o - start_s) % step_s == 0 and + (start_o + step_s >= start_s) and + (end_o - step_s <= end_s)): + return RangeIndex(start_r, end_r + step_s, step_s) + elif step_s % step_o == 0: + if ((start_s - start_o) % step_o == 0 and + (start_s + step_o >= start_o) and + (end_s - step_o <= end_o)): + return RangeIndex(start_r, end_r + step_o, step_o) + + return self._int64index.union(other) + + def join(self, other, how='left', level=None, 
return_indexers=False): + """ + *this is an internal non-public method* + + Compute join_index and indexers to conform data + structures to the new index. + + Parameters + ---------- + other : Index + how : {'left', 'right', 'inner', 'outer'} + level : int or level name, default None + return_indexers : boolean, default False + + Returns + ------- + join_index, (left_indexer, right_indexer) + """ + if how == 'outer' and self is not other: + # note: could return RangeIndex in more circumstances + return self._int64index.join(other, how, level, return_indexers) + + return super(RangeIndex, self).join(other, how, level, return_indexers) + + def __len__(self): + """ + return the length of the RangeIndex + """ + return max(0, -(-(self._stop - self._start) // self._step)) + + @property + def size(self): + return len(self) + + def __getitem__(self, key): + """ + Conserve RangeIndex type for scalar and slice keys. + """ + super_getitem = super(RangeIndex, self).__getitem__ + + if np.isscalar(key): + n = int(key) + if n != key: + return super_getitem(key) + if n < 0: + n = len(self) + key + if n < 0 or n > len(self) - 1: + raise IndexError("index {key} is out of bounds for axis 0 " + "with size {size}".format(key=key, + size=len(self))) + return self._start + n * self._step + + if isinstance(key, slice): + + # This is basically PySlice_GetIndicesEx, but delegates to our + # super routines if we don't have integers + + l = len(self) + + # complete missing slice information + step = 1 if key.step is None else key.step + if key.start is None: + start = l - 1 if step < 0 else 0 + else: + start = key.start + + if start < 0: + start += l + if start < 0: + start = -1 if step < 0 else 0 + if start >= l: + start = l - 1 if step < 0 else l + + if key.stop is None: + stop = -1 if step < 0 else l + else: + stop = key.stop + + if stop < 0: + stop += l + if stop < 0: + stop = -1 + if stop > l: + stop = l + + # delegate slices with any non-integer bound or step + if (start != int(start) or + stop != int(stop) or + step != int(step)): + return super_getitem(key) + + # convert indexes to values + start = self._start + self._step * start + stop = self._start + self._step * stop + step = self._step * step + + return RangeIndex(start, stop, step, self.name, fastpath=True) + + # fall back to Int64Index + return super_getitem(key) + + def __floordiv__(self, other): + if com.is_integer(other): + if (len(self) == 0 or + self._start % other == 0 and + self._step % other == 0): + start = self._start // other + step = self._step // other + stop = start + len(self) * step + return RangeIndex(start, stop, step, name=self.name, + fastpath=True) + if len(self) == 1: + start = self._start // other + return RangeIndex(start, start + 1, 1, name=self.name, + fastpath=True) + return self._int64index // other + + @classmethod + def _add_numeric_methods_binary(cls): + """ add in numeric methods, specialized to RangeIndex """ + + def _make_evaluate_binop(op, opstr, reversed=False, step=False): + """ + Parameters + ---------- + op : callable that accepts 2 params + perform the binary op + opstr : string + string name of the op + reversed : boolean, default False + if this is a reversed op, e.g.
radd + step : callable, optional, default False + op to apply to the step param if not None + if False, use the existing step + """ + + def _evaluate_numeric_binop(self, other): + + other = self._validate_for_numeric_binop(other, op, opstr) + attrs = self._get_attributes_dict() + attrs = self._maybe_update_attributes(attrs) + + if reversed: + self, other = other, self + + try: + # apply if we have an override + if step: + rstep = step(self._step, other) + + # we don't have a representable op + # so return a base index + if not com.is_integer(rstep) or not rstep: + raise ValueError + + else: + rstep = self._step + + rstart = op(self._start, other) + rstop = op(self._stop, other) + + result = RangeIndex(rstart, + rstop, + rstep, + **attrs) + + # for compat with numpy / Int64Index + # even if we can represent as a RangeIndex, return + # as a Float64Index if we have float-like descriptors + if not all([com.is_integer(x) for x in + [rstart, rstop, rstep]]): + result = result.astype('float64') + + return result + + except (ValueError, TypeError, AttributeError): + pass + + # convert to Int64Index ops + if isinstance(self, RangeIndex): + self = self.values + if isinstance(other, RangeIndex): + other = other.values + + return Index(op(self, other), **attrs) + + return _evaluate_numeric_binop + + cls.__add__ = cls.__radd__ = _make_evaluate_binop( + operator.add, '__add__') + cls.__sub__ = _make_evaluate_binop(operator.sub, '__sub__') + cls.__rsub__ = _make_evaluate_binop( + operator.sub, '__sub__', reversed=True) + cls.__mul__ = cls.__rmul__ = _make_evaluate_binop( + operator.mul, + '__mul__', + step=operator.mul) + cls.__truediv__ = _make_evaluate_binop( + operator.truediv, + '__truediv__', + step=operator.truediv) + cls.__rtruediv__ = _make_evaluate_binop( + operator.truediv, + '__truediv__', + reversed=True, + step=operator.truediv) + if not compat.PY3: + cls.__div__ = _make_evaluate_binop( + operator.div, + '__div__', + step=operator.div) + cls.__rdiv__ =
_make_evaluate_binop( + operator.div, + '__div__', + reversed=True, + step=operator.div) + +RangeIndex._add_numeric_methods() +RangeIndex._add_logical_methods() diff --git a/pandas/tests/indexes/__init__.py b/pandas/tests/indexes/__init__.py new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py new file mode 100644 index 0000000000000..f1824267d63d8 --- /dev/null +++ b/pandas/tests/indexes/common.py @@ -0,0 +1,656 @@ +# -*- coding: utf-8 -*- + +from pandas import compat +from pandas.compat import PY3 + +import numpy as np + +from pandas import (Series, Index, Float64Index, Int64Index, RangeIndex, + MultiIndex, CategoricalIndex, DatetimeIndex, + TimedeltaIndex, PeriodIndex) +from pandas.util.testing import assertRaisesRegexp + +import pandas.util.testing as tm + +import pandas as pd + + +class Base(object): + """ base class for index sub-class tests """ + _holder = None + _compat_props = ['shape', 'ndim', 'size', 'itemsize', 'nbytes'] + + def setup_indices(self): + for name, idx in self.indices.items(): + setattr(self, name, idx) + + def verify_pickle(self, index): + unpickled = self.round_trip_pickle(index) + self.assertTrue(index.equals(unpickled)) + + def test_pickle_compat_construction(self): + # this is testing for pickle compat + if self._holder is None: + return + + # need an object to create with + self.assertRaises(TypeError, self._holder) + + def test_shift(self): + + # GH8083 test the base class for shift + idx = self.create_index() + self.assertRaises(NotImplementedError, idx.shift, 1) + self.assertRaises(NotImplementedError, idx.shift, 1, 2) + + def test_create_index_existing_name(self): + + # GH11193, when an existing index is passed, and a new name is not + # specified, the new index should inherit the previous object name + expected = self.create_index() + if not isinstance(expected, MultiIndex): + expected.name = 'foo' + result = pd.Index(expected) + 
tm.assert_index_equal(result, expected) + + result = pd.Index(expected, name='bar') + expected.name = 'bar' + tm.assert_index_equal(result, expected) + else: + expected.names = ['foo', 'bar'] + result = pd.Index(expected) + tm.assert_index_equal( + result, Index(Index([('foo', 'one'), ('foo', 'two'), + ('bar', 'one'), ('baz', 'two'), + ('qux', 'one'), ('qux', 'two')], + dtype='object'), + names=['foo', 'bar'])) + + result = pd.Index(expected, names=['A', 'B']) + tm.assert_index_equal( + result, + Index(Index([('foo', 'one'), ('foo', 'two'), ('bar', 'one'), + ('baz', 'two'), ('qux', 'one'), ('qux', 'two')], + dtype='object'), names=['A', 'B'])) + + def test_numeric_compat(self): + + idx = self.create_index() + tm.assertRaisesRegexp(TypeError, "cannot perform __mul__", + lambda: idx * 1) + tm.assertRaisesRegexp(TypeError, "cannot perform __mul__", + lambda: 1 * idx) + + div_err = "cannot perform __truediv__" if PY3 \ + else "cannot perform __div__" + tm.assertRaisesRegexp(TypeError, div_err, lambda: idx / 1) + tm.assertRaisesRegexp(TypeError, div_err, lambda: 1 / idx) + tm.assertRaisesRegexp(TypeError, "cannot perform __floordiv__", + lambda: idx // 1) + tm.assertRaisesRegexp(TypeError, "cannot perform __floordiv__", + lambda: 1 // idx) + + def test_logical_compat(self): + idx = self.create_index() + tm.assertRaisesRegexp(TypeError, 'cannot perform all', + lambda: idx.all()) + tm.assertRaisesRegexp(TypeError, 'cannot perform any', + lambda: idx.any()) + + def test_boolean_context_compat(self): + + # boolean context compat + idx = self.create_index() + + def f(): + if idx: + pass + + tm.assertRaisesRegexp(ValueError, 'The truth value of a', f) + + def test_reindex_base(self): + idx = self.create_index() + expected = np.arange(idx.size) + + actual = idx.get_indexer(idx) + tm.assert_numpy_array_equal(expected, actual) + + with tm.assertRaisesRegexp(ValueError, 'Invalid fill method'): + idx.get_indexer(idx, method='invalid') + + def test_ndarray_compat_properties(self): 
+ + idx = self.create_index() + self.assertTrue(idx.T.equals(idx)) + self.assertTrue(idx.transpose().equals(idx)) + + values = idx.values + for prop in self._compat_props: + self.assertEqual(getattr(idx, prop), getattr(values, prop)) + + # test for validity + idx.nbytes + idx.values.nbytes + + def test_repr_roundtrip(self): + + idx = self.create_index() + tm.assert_index_equal(eval(repr(idx)), idx) + + def test_str(self): + + # test the string repr + idx = self.create_index() + idx.name = 'foo' + self.assertTrue("'foo'" in str(idx)) + self.assertTrue(idx.__class__.__name__ in str(idx)) + + def test_dtype_str(self): + for idx in self.indices.values(): + dtype = idx.dtype_str + self.assertIsInstance(dtype, compat.string_types) + if isinstance(idx, PeriodIndex): + self.assertEqual(dtype, 'period') + else: + self.assertEqual(dtype, str(idx.dtype)) + + def test_repr_max_seq_item_setting(self): + # GH10182 + idx = self.create_index() + idx = idx.repeat(50) + with pd.option_context("display.max_seq_items", None): + repr(idx) + self.assertFalse('...' 
in str(idx)) + + def test_wrong_number_names(self): + def testit(ind): + ind.names = ["apple", "banana", "carrot"] + + for ind in self.indices.values(): + assertRaisesRegexp(ValueError, "^Length", testit, ind) + + def test_set_name_methods(self): + new_name = "This is the new name for this index" + for ind in self.indices.values(): + + # don't test a MultiIndex here (as it is tested separately) + if isinstance(ind, MultiIndex): + continue + + original_name = ind.name + new_ind = ind.set_names([new_name]) + self.assertEqual(new_ind.name, new_name) + self.assertEqual(ind.name, original_name) + res = ind.rename(new_name, inplace=True) + + # should return None + self.assertIsNone(res) + self.assertEqual(ind.name, new_name) + self.assertEqual(ind.names, [new_name]) + # with assertRaisesRegexp(TypeError, "list-like"): + # # should still fail even if it would be the right length + # ind.set_names("a") + with assertRaisesRegexp(ValueError, "Level must be None"): + ind.set_names("a", level=0) + + # rename in place just leaves tuples and other containers alone + name = ('A', 'B') + ind.rename(name, inplace=True) + self.assertEqual(ind.name, name) + self.assertEqual(ind.names, [name]) + + def test_hash_error(self): + for ind in self.indices.values(): + with tm.assertRaisesRegexp(TypeError, "unhashable type: %r" % + type(ind).__name__): + hash(ind) + + def test_copy_and_deepcopy(self): + from copy import copy, deepcopy + + for ind in self.indices.values(): + + # don't test a MultiIndex here (as it is tested separately) + if isinstance(ind, MultiIndex): + continue + + for func in (copy, deepcopy): + idx_copy = func(ind) + self.assertIsNot(idx_copy, ind) + self.assertTrue(idx_copy.equals(ind)) + + new_copy = ind.copy(deep=True, name="banana") + self.assertEqual(new_copy.name, "banana") + + def test_duplicates(self): + for ind in self.indices.values(): + + if not len(ind): + continue + if isinstance(ind, MultiIndex): + continue + idx = self._holder([ind[0]] * 5) +
self.assertFalse(idx.is_unique) + self.assertTrue(idx.has_duplicates) + + # GH 10115 + # preserve names + idx.name = 'foo' + result = idx.drop_duplicates() + self.assertEqual(result.name, 'foo') + self.assert_index_equal(result, Index([ind[0]], name='foo')) + + def test_sort(self): + for ind in self.indices.values(): + self.assertRaises(TypeError, ind.sort) + + def test_order(self): + for ind in self.indices.values(): + # GH 9816 deprecated + with tm.assert_produces_warning(FutureWarning): + ind.order() + + def test_mutability(self): + for ind in self.indices.values(): + if not len(ind): + continue + self.assertRaises(TypeError, ind.__setitem__, 0, ind[0]) + + def test_view(self): + for ind in self.indices.values(): + i_view = ind.view() + self.assertEqual(i_view.name, ind.name) + + def test_compat(self): + for ind in self.indices.values(): + self.assertEqual(ind.tolist(), list(ind)) + + def test_argsort(self): + for k, ind in self.indices.items(): + + # tested separately + if k in ['catIndex']: + continue + + result = ind.argsort() + expected = np.array(ind).argsort() + tm.assert_numpy_array_equal(result, expected) + + def test_pickle(self): + for ind in self.indices.values(): + self.verify_pickle(ind) + ind.name = 'foo' + self.verify_pickle(ind) + + def test_take(self): + indexer = [4, 3, 0, 2] + for k, ind in self.indices.items(): + + # tested separately + if k in ['boolIndex', 'tuples', 'empty']: + continue + + result = ind.take(indexer) + expected = ind[indexer] + self.assertTrue(result.equals(expected)) + + if not isinstance(ind, + (DatetimeIndex, PeriodIndex, TimedeltaIndex)): + # GH 10791 + with tm.assertRaises(AttributeError): + ind.freq + + def test_setops_errorcases(self): + for name, idx in compat.iteritems(self.indices): + # non-iterable input + cases = [0.5, 'xxx'] + methods = [idx.intersection, idx.union, idx.difference, + idx.sym_diff] + + for method in methods: + for case in cases: + assertRaisesRegexp(TypeError, + "Input must be Index or array-like", + method,
case) + + def test_intersection_base(self): + for name, idx in compat.iteritems(self.indices): + first = idx[:5] + second = idx[:3] + intersect = first.intersection(second) + + if isinstance(idx, CategoricalIndex): + pass + else: + self.assertTrue(tm.equalContents(intersect, second)) + + # GH 10149 + cases = [klass(second.values) + for klass in [np.array, Series, list]] + for case in cases: + if isinstance(idx, PeriodIndex): + msg = "can only call with other PeriodIndex-ed objects" + with tm.assertRaisesRegexp(ValueError, msg): + result = first.intersection(case) + elif isinstance(idx, CategoricalIndex): + pass + else: + result = first.intersection(case) + self.assertTrue(tm.equalContents(result, second)) + + if isinstance(idx, MultiIndex): + msg = "other must be a MultiIndex or a list of tuples" + with tm.assertRaisesRegexp(TypeError, msg): + result = first.intersection([1, 2, 3]) + + def test_union_base(self): + for name, idx in compat.iteritems(self.indices): + first = idx[3:] + second = idx[:5] + everything = idx + union = first.union(second) + self.assertTrue(tm.equalContents(union, everything)) + + # GH 10149 + cases = [klass(second.values) + for klass in [np.array, Series, list]] + for case in cases: + if isinstance(idx, PeriodIndex): + msg = "can only call with other PeriodIndex-ed objects" + with tm.assertRaisesRegexp(ValueError, msg): + result = first.union(case) + elif isinstance(idx, CategoricalIndex): + pass + else: + result = first.union(case) + self.assertTrue(tm.equalContents(result, everything)) + + if isinstance(idx, MultiIndex): + msg = "other must be a MultiIndex or a list of tuples" + with tm.assertRaisesRegexp(TypeError, msg): + result = first.union([1, 2, 3]) + + def test_difference_base(self): + for name, idx in compat.iteritems(self.indices): + first = idx[2:] + second = idx[:4] + answer = idx[4:] + result = first.difference(second) + + if isinstance(idx, CategoricalIndex): + pass + else: + self.assertTrue(tm.equalContents(result, answer)) 
+ + # GH 10149 + cases = [klass(second.values) + for klass in [np.array, Series, list]] + for case in cases: + if isinstance(idx, PeriodIndex): + msg = "can only call with other PeriodIndex-ed objects" + with tm.assertRaisesRegexp(ValueError, msg): + result = first.difference(case) + elif isinstance(idx, CategoricalIndex): + pass + elif isinstance(idx, (DatetimeIndex, TimedeltaIndex)): + self.assertEqual(result.__class__, answer.__class__) + tm.assert_numpy_array_equal(result.asi8, answer.asi8) + else: + result = first.difference(case) + self.assertTrue(tm.equalContents(result, answer)) + + if isinstance(idx, MultiIndex): + msg = "other must be a MultiIndex or a list of tuples" + with tm.assertRaisesRegexp(TypeError, msg): + result = first.difference([1, 2, 3]) + + def test_symmetric_diff(self): + for name, idx in compat.iteritems(self.indices): + first = idx[1:] + second = idx[:-1] + if isinstance(idx, CategoricalIndex): + pass + else: + answer = idx[[0, -1]] + result = first.sym_diff(second) + self.assertTrue(tm.equalContents(result, answer)) + + # GH 10149 + cases = [klass(second.values) + for klass in [np.array, Series, list]] + for case in cases: + if isinstance(idx, PeriodIndex): + msg = "can only call with other PeriodIndex-ed objects" + with tm.assertRaisesRegexp(ValueError, msg): + result = first.sym_diff(case) + elif isinstance(idx, CategoricalIndex): + pass + else: + result = first.sym_diff(case) + self.assertTrue(tm.equalContents(result, answer)) + + if isinstance(idx, MultiIndex): + msg = "other must be a MultiIndex or a list of tuples" + with tm.assertRaisesRegexp(TypeError, msg): + result = first.sym_diff([1, 2, 3]) + + def test_insert_base(self): + + for name, idx in compat.iteritems(self.indices): + result = idx[1:4] + + if not len(idx): + continue + + # test 0th element + self.assertTrue(idx[0:4].equals(result.insert(0, idx[0]))) + + def test_delete_base(self): + + for name, idx in compat.iteritems(self.indices): + + if not len(idx): + continue + 
+ if isinstance(idx, RangeIndex): + # tested in class + continue + + expected = idx[1:] + result = idx.delete(0) + self.assertTrue(result.equals(expected)) + self.assertEqual(result.name, expected.name) + + expected = idx[:-1] + result = idx.delete(-1) + self.assertTrue(result.equals(expected)) + self.assertEqual(result.name, expected.name) + + with tm.assertRaises((IndexError, ValueError)): + # either depending on numpy version + result = idx.delete(len(idx)) + + def test_equals_op(self): + # GH9947, GH10637 + index_a = self.create_index() + if isinstance(index_a, PeriodIndex): + return + + n = len(index_a) + index_b = index_a[0:-1] + index_c = index_a[0:-1].append(index_a[-2:-1]) + index_d = index_a[0:1] + with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + index_a == index_b + expected1 = np.array([True] * n) + expected2 = np.array([True] * (n - 1) + [False]) + tm.assert_numpy_array_equal(index_a == index_a, expected1) + tm.assert_numpy_array_equal(index_a == index_c, expected2) + + # test comparisons with numpy arrays + array_a = np.array(index_a) + array_b = np.array(index_a[0:-1]) + array_c = np.array(index_a[0:-1].append(index_a[-2:-1])) + array_d = np.array(index_a[0:1]) + with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + index_a == array_b + tm.assert_numpy_array_equal(index_a == array_a, expected1) + tm.assert_numpy_array_equal(index_a == array_c, expected2) + + # test comparisons with Series + series_a = Series(array_a) + series_b = Series(array_b) + series_c = Series(array_c) + series_d = Series(array_d) + with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + index_a == series_b + tm.assert_numpy_array_equal(index_a == series_a, expected1) + tm.assert_numpy_array_equal(index_a == series_c, expected2) + + # cases where length is 1 for one of them + with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + index_a == index_d + with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + index_a == series_d + 
with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + index_a == array_d + with tm.assertRaisesRegexp(ValueError, "Series lengths must match"): + series_a == series_d + with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + series_a == array_d + + # comparing with a scalar should broadcast; note that we are excluding + # MultiIndex because in this case each item in the index is a tuple of + # length 2, and therefore is considered an array of length 2 in the + # comparison instead of a scalar + if not isinstance(index_a, MultiIndex): + expected3 = np.array([False] * (len(index_a) - 2) + [True, False]) + # assuming the 2nd to last item is unique in the data + item = index_a[-2] + tm.assert_numpy_array_equal(index_a == item, expected3) + tm.assert_numpy_array_equal(series_a == item, expected3) + + def test_numpy_ufuncs(self): + # test ufuncs of numpy 1.9.2. see: + # http://docs.scipy.org/doc/numpy/reference/ufuncs.html + + # some functions are skipped because it may return different result + # for unicode input depending on numpy version + + for name, idx in compat.iteritems(self.indices): + for func in [np.exp, np.exp2, np.expm1, np.log, np.log2, np.log10, + np.log1p, np.sqrt, np.sin, np.cos, np.tan, np.arcsin, + np.arccos, np.arctan, np.sinh, np.cosh, np.tanh, + np.arcsinh, np.arccosh, np.arctanh, np.deg2rad, + np.rad2deg]: + if isinstance(idx, pd.tseries.base.DatetimeIndexOpsMixin): + # raise TypeError or ValueError (PeriodIndex) + # PeriodIndex behavior should be changed in future version + with tm.assertRaises(Exception): + func(idx) + elif isinstance(idx, (Float64Index, Int64Index)): + # coerces to float (e.g. 
np.sin) + result = func(idx) + exp = Index(func(idx.values), name=idx.name) + self.assert_index_equal(result, exp) + self.assertIsInstance(result, pd.Float64Index) + else: + # raise AttributeError or TypeError + if len(idx) == 0: + continue + else: + with tm.assertRaises(Exception): + func(idx) + + for func in [np.isfinite, np.isinf, np.isnan, np.signbit]: + if isinstance(idx, pd.tseries.base.DatetimeIndexOpsMixin): + # raise TypeError or ValueError (PeriodIndex) + with tm.assertRaises(Exception): + func(idx) + elif isinstance(idx, (Float64Index, Int64Index)): + # results in bool array + result = func(idx) + exp = func(idx.values) + self.assertIsInstance(result, np.ndarray) + tm.assertNotIsInstance(result, Index) + else: + if len(idx) == 0: + continue + else: + with tm.assertRaises(Exception): + func(idx) + + def test_hasnans_isnans(self): + # GH 11343, added tests for hasnans / isnans + for name, index in self.indices.items(): + if isinstance(index, MultiIndex): + pass + else: + idx = index.copy() + + # cases in indices doesn't include NaN + expected = np.array([False] * len(idx), dtype=bool) + self.assert_numpy_array_equal(idx._isnan, expected) + self.assertFalse(idx.hasnans) + + idx = index.copy() + values = idx.values + + if len(index) == 0: + continue + elif isinstance(index, pd.tseries.base.DatetimeIndexOpsMixin): + values[1] = pd.tslib.iNaT + elif isinstance(index, Int64Index): + continue + else: + values[1] = np.nan + + if isinstance(index, PeriodIndex): + idx = index.__class__(values, freq=index.freq) + else: + idx = index.__class__(values) + + expected = np.array([False] * len(idx), dtype=bool) + expected[1] = True + self.assert_numpy_array_equal(idx._isnan, expected) + self.assertTrue(idx.hasnans) + + def test_fillna(self): + # GH 11343 + for name, index in self.indices.items(): + if len(index) == 0: + pass + elif isinstance(index, MultiIndex): + idx = index.copy() + msg = "isnull is not defined for MultiIndex" + with 
self.assertRaisesRegexp(NotImplementedError, msg): + idx.fillna(idx[0]) + else: + idx = index.copy() + result = idx.fillna(idx[0]) + self.assert_index_equal(result, idx) + self.assertFalse(result is idx) + + msg = "'value' must be a scalar, passed: " + with self.assertRaisesRegexp(TypeError, msg): + idx.fillna([idx[0]]) + + idx = index.copy() + values = idx.values + + if isinstance(index, pd.tseries.base.DatetimeIndexOpsMixin): + values[1] = pd.tslib.iNaT + elif isinstance(index, Int64Index): + continue + else: + values[1] = np.nan + + if isinstance(index, PeriodIndex): + idx = index.__class__(values, freq=index.freq) + else: + idx = index.__class__(values) + + expected = np.array([False] * len(idx), dtype=bool) + expected[1] = True + self.assert_numpy_array_equal(idx._isnan, expected) + self.assertTrue(idx.hasnans) diff --git a/pandas/tests/data/mindex_073.pickle b/pandas/tests/indexes/data/mindex_073.pickle similarity index 100% rename from pandas/tests/data/mindex_073.pickle rename to pandas/tests/indexes/data/mindex_073.pickle diff --git a/pandas/tests/data/multiindex_v1.pickle b/pandas/tests/indexes/data/multiindex_v1.pickle similarity index 100% rename from pandas/tests/data/multiindex_v1.pickle rename to pandas/tests/indexes/data/multiindex_v1.pickle diff --git a/pandas/tests/data/s1-0.12.0.pickle b/pandas/tests/indexes/data/s1-0.12.0.pickle similarity index 100% rename from pandas/tests/data/s1-0.12.0.pickle rename to pandas/tests/indexes/data/s1-0.12.0.pickle diff --git a/pandas/tests/data/s2-0.12.0.pickle b/pandas/tests/indexes/data/s2-0.12.0.pickle similarity index 100% rename from pandas/tests/data/s2-0.12.0.pickle rename to pandas/tests/indexes/data/s2-0.12.0.pickle diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py new file mode 100644 index 0000000000000..735025cfca42e --- /dev/null +++ b/pandas/tests/indexes/test_base.py @@ -0,0 +1,1495 @@ +# -*- coding: utf-8 -*- + +from datetime import datetime, timedelta + +# 
TODO(wesm): fix long line flake8 issues +# flake8: noqa + +import pandas.util.testing as tm +from pandas.indexes.api import Index, MultiIndex +from .common import Base + +from pandas.compat import (is_platform_windows, range, lrange, lzip, u, + zip, PY3) +import operator +import os + +import numpy as np + +from pandas import (period_range, date_range, Series, + Float64Index, Int64Index, + CategoricalIndex, DatetimeIndex, TimedeltaIndex, + PeriodIndex) +from pandas.util.testing import assert_almost_equal + +import pandas.core.config as cf + +from pandas.tseries.index import _to_m8 + +import pandas as pd +from pandas.lib import Timestamp + + +class TestIndex(Base, tm.TestCase): + _holder = Index + _multiprocess_can_split_ = True + + def setUp(self): + self.indices = dict(unicodeIndex=tm.makeUnicodeIndex(100), + strIndex=tm.makeStringIndex(100), + dateIndex=tm.makeDateIndex(100), + periodIndex=tm.makePeriodIndex(100), + tdIndex=tm.makeTimedeltaIndex(100), + intIndex=tm.makeIntIndex(100), + rangeIndex=tm.makeIntIndex(100), + floatIndex=tm.makeFloatIndex(100), + boolIndex=Index([True, False]), + catIndex=tm.makeCategoricalIndex(100), + empty=Index([]), + tuples=MultiIndex.from_tuples(lzip( + ['foo', 'bar', 'baz'], [1, 2, 3]))) + self.setup_indices() + + def create_index(self): + return Index(list('abcde')) + + def test_new_axis(self): + new_index = self.dateIndex[None, :] + self.assertEqual(new_index.ndim, 2) + tm.assertIsInstance(new_index, np.ndarray) + + def test_copy_and_deepcopy(self): + super(TestIndex, self).test_copy_and_deepcopy() + + new_copy2 = self.intIndex.copy(dtype=int) + self.assertEqual(new_copy2.dtype.kind, 'i') + + def test_constructor(self): + # regular instance creation + tm.assert_contains_all(self.strIndex, self.strIndex) + tm.assert_contains_all(self.dateIndex, self.dateIndex) + + # casting + arr = np.array(self.strIndex) + index = Index(arr) + tm.assert_contains_all(arr, index) + tm.assert_numpy_array_equal(self.strIndex, index) + + # copy + arr 
= np.array(self.strIndex) + index = Index(arr, copy=True, name='name') + tm.assertIsInstance(index, Index) + self.assertEqual(index.name, 'name') + tm.assert_numpy_array_equal(arr, index) + arr[0] = "SOMEBIGLONGSTRING" + self.assertNotEqual(index[0], "SOMEBIGLONGSTRING") + + # what to do here? + # arr = np.array(5.) + # self.assertRaises(Exception, arr.view, Index) + + def test_constructor_corner(self): + # corner case + self.assertRaises(TypeError, Index, 0) + + def test_construction_list_mixed_tuples(self): + # GH 10697 + # if we are constructing from a mixed list of tuples, make sure that we + # are independent of the sorting order + idx1 = Index([('A', 1), 'B']) + self.assertIsInstance(idx1, Index) + self.assertNotIsInstance(idx1, MultiIndex) + idx2 = Index(['B', ('A', 1)]) + self.assertIsInstance(idx2, Index) + self.assertNotIsInstance(idx2, MultiIndex) + + def test_constructor_from_series(self): + + expected = DatetimeIndex([Timestamp('20110101'), Timestamp('20120101'), + Timestamp('20130101')]) + s = Series([Timestamp('20110101'), Timestamp('20120101'), Timestamp( + '20130101')]) + result = Index(s) + self.assertTrue(result.equals(expected)) + result = DatetimeIndex(s) + self.assertTrue(result.equals(expected)) + + # GH 6273 + # create from a series, passing a freq + s = Series(pd.to_datetime(['1-1-1990', '2-1-1990', '3-1-1990', + '4-1-1990', '5-1-1990'])) + result = DatetimeIndex(s, freq='MS') + expected = DatetimeIndex( + ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990' + ], freq='MS') + self.assertTrue(result.equals(expected)) + + df = pd.DataFrame(np.random.rand(5, 3)) + df['date'] = ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', + '5-1-1990'] + result = DatetimeIndex(df['date'], freq='MS') + self.assertTrue(result.equals(expected)) + self.assertEqual(df['date'].dtype, object) + + exp = pd.Series( + ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990' + ], name='date') + self.assert_series_equal(df['date'], exp) + + # GH 6274 + #
infer freq of same + result = pd.infer_freq(df['date']) + self.assertEqual(result, 'MS') + + def test_constructor_ndarray_like(self): + # GH 5460#issuecomment-44474502 + # it should be possible to convert any object that satisfies the numpy + # ndarray interface directly into an Index + class ArrayLike(object): + + def __init__(self, array): + self.array = array + + def __array__(self, dtype=None): + return self.array + + for array in [np.arange(5), np.array(['a', 'b', 'c']), + date_range('2000-01-01', periods=3).values]: + expected = pd.Index(array) + result = pd.Index(ArrayLike(array)) + self.assertTrue(result.equals(expected)) + + def test_index_ctor_infer_periodindex(self): + xp = period_range('2012-1-1', freq='M', periods=3) + rs = Index(xp) + tm.assert_numpy_array_equal(rs, xp) + tm.assertIsInstance(rs, PeriodIndex) + + def test_constructor_simple_new(self): + idx = Index([1, 2, 3, 4, 5], name='int') + result = idx._simple_new(idx, 'int') + self.assertTrue(result.equals(idx)) + + idx = Index([1.1, np.nan, 2.2, 3.0], name='float') + result = idx._simple_new(idx, 'float') + self.assertTrue(result.equals(idx)) + + idx = Index(['A', 'B', 'C', np.nan], name='obj') + result = idx._simple_new(idx, 'obj') + self.assertTrue(result.equals(idx)) + + def test_constructor_dtypes(self): + + for idx in [Index(np.array([1, 2, 3], dtype=int)), Index( + np.array( + [1, 2, 3], dtype=int), dtype=int), Index( + np.array( + [1., 2., 3.], dtype=float), dtype=int), Index( + [1, 2, 3], dtype=int), Index( + [1., 2., 3.], dtype=int)]: + self.assertIsInstance(idx, Int64Index) + + for idx in [Index(np.array([1., 2., 3.], dtype=float)), Index( + np.array( + [1, 2, 3], dtype=int), dtype=float), Index( + np.array( + [1., 2., 3.], dtype=float), dtype=float), Index( + [1, 2, 3], dtype=float), Index( + [1., 2., 3.], dtype=float)]: + self.assertIsInstance(idx, Float64Index) + + for idx in [Index(np.array( + [True, False, True], dtype=bool)), Index([True, False, True]), + Index( + np.array( + 
[True, False, True], dtype=bool), dtype=bool), + Index( + [True, False, True], dtype=bool)]: + self.assertIsInstance(idx, Index) + self.assertEqual(idx.dtype, object) + + for idx in [Index( + np.array([1, 2, 3], dtype=int), dtype='category'), Index( + [1, 2, 3], dtype='category'), Index( + np.array([np.datetime64('2011-01-01'), np.datetime64( + '2011-01-02')]), dtype='category'), Index( + [datetime(2011, 1, 1), datetime(2011, 1, 2) + ], dtype='category')]: + self.assertIsInstance(idx, CategoricalIndex) + + for idx in [Index(np.array([np.datetime64('2011-01-01'), np.datetime64( + '2011-01-02')])), + Index([datetime(2011, 1, 1), datetime(2011, 1, 2)])]: + self.assertIsInstance(idx, DatetimeIndex) + + for idx in [Index( + np.array([np.datetime64('2011-01-01'), np.datetime64( + '2011-01-02')]), dtype=object), Index( + [datetime(2011, 1, 1), datetime(2011, 1, 2) + ], dtype=object)]: + self.assertNotIsInstance(idx, DatetimeIndex) + self.assertIsInstance(idx, Index) + self.assertEqual(idx.dtype, object) + + for idx in [Index(np.array([np.timedelta64(1, 'D'), np.timedelta64( + 1, 'D')])), Index([timedelta(1), timedelta(1)])]: + self.assertIsInstance(idx, TimedeltaIndex) + + for idx in [Index( + np.array([np.timedelta64(1, 'D'), np.timedelta64(1, 'D')]), + dtype=object), Index( + [timedelta(1), timedelta(1)], dtype=object)]: + self.assertNotIsInstance(idx, TimedeltaIndex) + self.assertIsInstance(idx, Index) + self.assertEqual(idx.dtype, object) + + def test_view_with_args(self): + + restricted = ['unicodeIndex', 'strIndex', 'catIndex', 'boolIndex', + 'empty'] + + for i in restricted: + ind = self.indices[i] + + # with arguments + self.assertRaises(TypeError, lambda: ind.view('i8')) + + # these are ok + for i in list(set(self.indices.keys()) - set(restricted)): + ind = self.indices[i] + + # with arguments + ind.view('i8') + + def test_legacy_pickle_identity(self): + + # GH 8431 + pth = tm.get_data_path() + s1 = pd.read_pickle(os.path.join(pth, 's1-0.12.0.pickle')) + s2 = 
pd.read_pickle(os.path.join(pth, 's2-0.12.0.pickle')) + self.assertFalse(s1.index.identical(s2.index)) + self.assertFalse(s1.index.equals(s2.index)) + + def test_astype(self): + casted = self.intIndex.astype('i8') + + # it works! + casted.get_loc(5) + + # pass on name + self.intIndex.name = 'foobar' + casted = self.intIndex.astype('i8') + self.assertEqual(casted.name, 'foobar') + + def test_equals(self): + # same + self.assertTrue(Index(['a', 'b', 'c']).equals(Index(['a', 'b', 'c']))) + + # different length + self.assertFalse(Index(['a', 'b', 'c']).equals(Index(['a', 'b']))) + + # same length, different values + self.assertFalse(Index(['a', 'b', 'c']).equals(Index(['a', 'b', 'd']))) + + # Must also be an Index + self.assertFalse(Index(['a', 'b', 'c']).equals(['a', 'b', 'c'])) + + def test_insert(self): + + # GH 7256 + # validate neg/pos inserts + result = Index(['b', 'c', 'd']) + + # test 0th element + self.assertTrue(Index(['a', 'b', 'c', 'd']).equals(result.insert(0, + 'a'))) + + # test Nth element that follows Python list behavior + self.assertTrue(Index(['b', 'c', 'e', 'd']).equals(result.insert(-1, + 'e'))) + + # test loc +/- neq (0, -1) + self.assertTrue(result.insert(1, 'z').equals(result.insert(-2, 'z'))) + + # test empty + null_index = Index([]) + self.assertTrue(Index(['a']).equals(null_index.insert(0, 'a'))) + + def test_delete(self): + idx = Index(['a', 'b', 'c', 'd'], name='idx') + + expected = Index(['b', 'c', 'd'], name='idx') + result = idx.delete(0) + self.assertTrue(result.equals(expected)) + self.assertEqual(result.name, expected.name) + + expected = Index(['a', 'b', 'c'], name='idx') + result = idx.delete(-1) + self.assertTrue(result.equals(expected)) + self.assertEqual(result.name, expected.name) + + with tm.assertRaises((IndexError, ValueError)): + # either depending on numpy version + result = idx.delete(5) + + def test_identical(self): + + # index + i1 = Index(['a', 'b', 'c']) + i2 = Index(['a', 'b', 'c']) +
self.assertTrue(i1.identical(i2)) + + i1 = i1.rename('foo') + self.assertTrue(i1.equals(i2)) + self.assertFalse(i1.identical(i2)) + + i2 = i2.rename('foo') + self.assertTrue(i1.identical(i2)) + + i3 = Index([('a', 'a'), ('a', 'b'), ('b', 'a')]) + i4 = Index([('a', 'a'), ('a', 'b'), ('b', 'a')], tupleize_cols=False) + self.assertFalse(i3.identical(i4)) + + def test_is_(self): + ind = Index(range(10)) + self.assertTrue(ind.is_(ind)) + self.assertTrue(ind.is_(ind.view().view().view().view())) + self.assertFalse(ind.is_(Index(range(10)))) + self.assertFalse(ind.is_(ind.copy())) + self.assertFalse(ind.is_(ind.copy(deep=False))) + self.assertFalse(ind.is_(ind[:])) + self.assertFalse(ind.is_(ind.view(np.ndarray).view(Index))) + self.assertFalse(ind.is_(np.array(range(10)))) + + # quasi-implementation dependent + self.assertTrue(ind.is_(ind.view())) + ind2 = ind.view() + ind2.name = 'bob' + self.assertTrue(ind.is_(ind2)) + self.assertTrue(ind2.is_(ind)) + # doesn't matter if Indices are *actually* views of underlying data, + self.assertFalse(ind.is_(Index(ind.values))) + arr = np.array(range(1, 11)) + ind1 = Index(arr, copy=False) + ind2 = Index(arr, copy=False) + self.assertFalse(ind1.is_(ind2)) + + def test_asof(self): + d = self.dateIndex[0] + self.assertEqual(self.dateIndex.asof(d), d) + self.assertTrue(np.isnan(self.dateIndex.asof(d - timedelta(1)))) + + d = self.dateIndex[-1] + self.assertEqual(self.dateIndex.asof(d + timedelta(1)), d) + + d = self.dateIndex[0].to_datetime() + tm.assertIsInstance(self.dateIndex.asof(d), Timestamp) + + def test_asof_datetime_partial(self): + idx = pd.date_range('2010-01-01', periods=2, freq='m') + expected = Timestamp('2010-02-28') + result = idx.asof('2010-02') + self.assertEqual(result, expected) + self.assertFalse(isinstance(result, Index)) + + def test_nanosecond_index_access(self): + s = Series([Timestamp('20130101')]).values.view('i8')[0] + r = DatetimeIndex([s + 50 + i for i in range(100)]) + x = Series(np.random.randn(100), 
index=r) + + first_value = x.asof(x.index[0]) + + # this does not yet work, as parsing strings is done via dateutil + # self.assertEqual(first_value, + # x['2013-01-01 00:00:00.000000050+0000']) + + self.assertEqual( + first_value, + x[Timestamp(np.datetime64('2013-01-01 00:00:00.000000050+0000', + 'ns'))]) + + def test_comparators(self): + index = self.dateIndex + element = index[len(index) // 2] + element = _to_m8(element) + + arr = np.array(index) + + def _check(op): + arr_result = op(arr, element) + index_result = op(index, element) + + self.assertIsInstance(index_result, np.ndarray) + tm.assert_numpy_array_equal(arr_result, index_result) + + _check(operator.eq) + _check(operator.ne) + _check(operator.gt) + _check(operator.lt) + _check(operator.ge) + _check(operator.le) + + def test_booleanindex(self): + boolIdx = np.repeat(True, len(self.strIndex)).astype(bool) + boolIdx[5:30:2] = False + + subIndex = self.strIndex[boolIdx] + + for i, val in enumerate(subIndex): + self.assertEqual(subIndex.get_loc(val), i) + + subIndex = self.strIndex[list(boolIdx)] + for i, val in enumerate(subIndex): + self.assertEqual(subIndex.get_loc(val), i) + + def test_fancy(self): + sl = self.strIndex[[1, 2, 3]] + for i in sl: + self.assertEqual(i, sl[sl.get_loc(i)]) + + def test_empty_fancy(self): + empty_farr = np.array([], dtype=np.float_) + empty_iarr = np.array([], dtype=np.int_) + empty_barr = np.array([], dtype=np.bool_) + + # pd.DatetimeIndex is excluded, because it overrides getitem and should + # be tested separately. + for idx in [self.strIndex, self.intIndex, self.floatIndex]: + empty_idx = idx.__class__([]) + + self.assertTrue(idx[[]].identical(empty_idx)) + self.assertTrue(idx[empty_iarr].identical(empty_idx)) + self.assertTrue(idx[empty_barr].identical(empty_idx)) + + # np.ndarray only accepts ndarray of int & bool dtypes, so should + # Index. 
+ self.assertRaises(IndexError, idx.__getitem__, empty_farr) + + def test_getitem(self): + arr = np.array(self.dateIndex) + exp = self.dateIndex[5] + exp = _to_m8(exp) + + self.assertEqual(exp, arr[5]) + + def test_intersection(self): + first = self.strIndex[:20] + second = self.strIndex[:10] + intersect = first.intersection(second) + self.assertTrue(tm.equalContents(intersect, second)) + + # Corner cases + inter = first.intersection(first) + self.assertIs(inter, first) + + idx1 = Index([1, 2, 3, 4, 5], name='idx') + # if target has the same name, it is preserved + idx2 = Index([3, 4, 5, 6, 7], name='idx') + expected2 = Index([3, 4, 5], name='idx') + result2 = idx1.intersection(idx2) + self.assertTrue(result2.equals(expected2)) + self.assertEqual(result2.name, expected2.name) + + # if target name is different, it will be reset + idx3 = Index([3, 4, 5, 6, 7], name='other') + expected3 = Index([3, 4, 5], name=None) + result3 = idx1.intersection(idx3) + self.assertTrue(result3.equals(expected3)) + self.assertEqual(result3.name, expected3.name) + + # non monotonic + idx1 = Index([5, 3, 2, 4, 1], name='idx') + idx2 = Index([4, 7, 6, 5, 3], name='idx') + result2 = idx1.intersection(idx2) + self.assertTrue(tm.equalContents(result2, expected2)) + self.assertEqual(result2.name, expected2.name) + + idx3 = Index([4, 7, 6, 5, 3], name='other') + result3 = idx1.intersection(idx3) + self.assertTrue(tm.equalContents(result3, expected3)) + self.assertEqual(result3.name, expected3.name) + + # non-monotonic non-unique + idx1 = Index(['A', 'B', 'A', 'C']) + idx2 = Index(['B', 'D']) + expected = Index(['B'], dtype='object') + result = idx1.intersection(idx2) + self.assertTrue(result.equals(expected)) + + def test_union(self): + first = self.strIndex[5:20] + second = self.strIndex[:10] + everything = self.strIndex[:20] + union = first.union(second) + self.assertTrue(tm.equalContents(union, everything)) + + # GH 10149 + cases = [klass(second.values) for klass in [np.array, Series, 
list]] + for case in cases: + result = first.union(case) + self.assertTrue(tm.equalContents(result, everything)) + + # Corner cases + union = first.union(first) + self.assertIs(union, first) + + union = first.union([]) + self.assertIs(union, first) + + union = Index([]).union(first) + self.assertIs(union, first) + + # preserve names + first.name = 'A' + second.name = 'A' + union = first.union(second) + self.assertEqual(union.name, 'A') + + second.name = 'B' + union = first.union(second) + self.assertIsNone(union.name) + + def test_add(self): + + # - API change GH 8226 + with tm.assert_produces_warning(): + self.strIndex + self.strIndex + with tm.assert_produces_warning(): + self.strIndex + self.strIndex.tolist() + with tm.assert_produces_warning(): + self.strIndex.tolist() + self.strIndex + + with tm.assert_produces_warning(RuntimeWarning): + firstCat = self.strIndex.union(self.dateIndex) + secondCat = self.strIndex.union(self.strIndex) + + if self.dateIndex.dtype == np.object_: + appended = np.append(self.strIndex, self.dateIndex) + else: + appended = np.append(self.strIndex, self.dateIndex.astype('O')) + + self.assertTrue(tm.equalContents(firstCat, appended)) + self.assertTrue(tm.equalContents(secondCat, self.strIndex)) + tm.assert_contains_all(self.strIndex, firstCat) + tm.assert_contains_all(self.strIndex, secondCat) + tm.assert_contains_all(self.dateIndex, firstCat) + + # test add and radd + idx = Index(list('abc')) + expected = Index(['a1', 'b1', 'c1']) + self.assert_index_equal(idx + '1', expected) + expected = Index(['1a', '1b', '1c']) + self.assert_index_equal('1' + idx, expected) + + def test_append_multiple(self): + index = Index(['a', 'b', 'c', 'd', 'e', 'f']) + + foos = [index[:2], index[2:4], index[4:]] + result = foos[0].append(foos[1:]) + self.assertTrue(result.equals(index)) + + # empty + result = index.append([]) + self.assertTrue(result.equals(index)) + + def test_append_empty_preserve_name(self): + left = Index([], name='foo') + right = 
Index([1, 2, 3], name='foo') + + result = left.append(right) + self.assertEqual(result.name, 'foo') + + left = Index([], name='foo') + right = Index([1, 2, 3], name='bar') + + result = left.append(right) + self.assertIsNone(result.name) + + def test_add_string(self): + # from bug report + index = Index(['a', 'b', 'c']) + index2 = index + 'foo' + + self.assertNotIn('a', index2) + self.assertIn('afoo', index2) + + def test_iadd_string(self): + index = pd.Index(['a', 'b', 'c']) + # doesn't fail test unless there is a check before `+=` + self.assertIn('a', index) + + index += '_x' + self.assertIn('a_x', index) + + def test_difference(self): + + first = self.strIndex[5:20] + second = self.strIndex[:10] + answer = self.strIndex[10:20] + first.name = 'name' + # different names + result = first.difference(second) + + self.assertTrue(tm.equalContents(result, answer)) + self.assertEqual(result.name, None) + + # same names + second.name = 'name' + result = first.difference(second) + self.assertEqual(result.name, 'name') + + # with empty + result = first.difference([]) + self.assertTrue(tm.equalContents(result, first)) + self.assertEqual(result.name, first.name) + + # with everything + result = first.difference(first) + self.assertEqual(len(result), 0) + self.assertEqual(result.name, first.name) + + def test_symmetric_diff(self): + # smoke + idx1 = Index([1, 2, 3, 4], name='idx1') + idx2 = Index([2, 3, 4, 5]) + result = idx1.sym_diff(idx2) + expected = Index([1, 5]) + self.assertTrue(tm.equalContents(result, expected)) + self.assertIsNone(result.name) + + # __xor__ syntax + expected = idx1 ^ idx2 + self.assertTrue(tm.equalContents(result, expected)) + self.assertIsNone(result.name) + + # multiIndex + idx1 = MultiIndex.from_tuples(self.tuples) + idx2 = MultiIndex.from_tuples([('foo', 1), ('bar', 3)]) + result = idx1.sym_diff(idx2) + expected = MultiIndex.from_tuples([('bar', 2), ('baz', 3), ('bar', 3)]) + self.assertTrue(tm.equalContents(result, expected)) + + # nans: + # GH 
#6444, sorting of nans. Make sure the number of nans is right + # and the correct non-nan values are there. punt on sorting. + idx1 = Index([1, 2, 3, np.nan]) + idx2 = Index([0, 1, np.nan]) + result = idx1.sym_diff(idx2) + # expected = Index([0.0, np.nan, 2.0, 3.0, np.nan]) + + nans = pd.isnull(result) + self.assertEqual(nans.sum(), 1) + self.assertEqual((~nans).sum(), 3) + [self.assertIn(x, result) for x in [0.0, 2.0, 3.0]] + + # other not an Index: + idx1 = Index([1, 2, 3, 4], name='idx1') + idx2 = np.array([2, 3, 4, 5]) + expected = Index([1, 5]) + result = idx1.sym_diff(idx2) + self.assertTrue(tm.equalContents(result, expected)) + self.assertEqual(result.name, 'idx1') + + result = idx1.sym_diff(idx2, result_name='new_name') + self.assertTrue(tm.equalContents(result, expected)) + self.assertEqual(result.name, 'new_name') + + def test_is_numeric(self): + self.assertFalse(self.dateIndex.is_numeric()) + self.assertFalse(self.strIndex.is_numeric()) + self.assertTrue(self.intIndex.is_numeric()) + self.assertTrue(self.floatIndex.is_numeric()) + self.assertFalse(self.catIndex.is_numeric()) + + def test_is_object(self): + self.assertTrue(self.strIndex.is_object()) + self.assertTrue(self.boolIndex.is_object()) + self.assertFalse(self.catIndex.is_object()) + self.assertFalse(self.intIndex.is_object()) + self.assertFalse(self.dateIndex.is_object()) + self.assertFalse(self.floatIndex.is_object()) + + def test_is_all_dates(self): + self.assertTrue(self.dateIndex.is_all_dates) + self.assertFalse(self.strIndex.is_all_dates) + self.assertFalse(self.intIndex.is_all_dates) + + def test_summary(self): + self._check_method_works(Index.summary) + # GH3869 + ind = Index(['{other}%s', "~:{range}:0"], name='A') + result = ind.summary() + # shouldn't be formatted accidentally. 
+ self.assertIn('~:{range}:0', result) + self.assertIn('{other}%s', result) + + def test_format(self): + self._check_method_works(Index.format) + + index = Index([datetime.now()]) + + # windows has different precision on datetime.datetime.now (it doesn't + # include us); the default for Timestamp shows these but Index + # formatting does not, so we skip + if not is_platform_windows(): + formatted = index.format() + expected = [str(index[0])] + self.assertEqual(formatted, expected) + + # 2845 + index = Index([1, 2.0 + 3.0j, np.nan]) + formatted = index.format() + expected = [str(index[0]), str(index[1]), u('NaN')] + self.assertEqual(formatted, expected) + + # is this really allowed? + index = Index([1, 2.0 + 3.0j, None]) + formatted = index.format() + expected = [str(index[0]), str(index[1]), u('NaN')] + self.assertEqual(formatted, expected) + + self.strIndex[:0].format() + + def test_format_with_name_time_info(self): + # bug I fixed 12/20/2011 + inc = timedelta(hours=4) + dates = Index([dt + inc for dt in self.dateIndex], name='something') + + formatted = dates.format(name=True) + self.assertEqual(formatted[0], 'something') + + def test_format_datetime_with_time(self): + t = Index([datetime(2012, 2, 7), datetime(2012, 2, 7, 23)]) + + result = t.format() + expected = ['2012-02-07 00:00:00', '2012-02-07 23:00:00'] + self.assertEqual(len(result), 2) + self.assertEqual(result, expected) + + def test_format_none(self): + values = ['a', 'b', 'c', None] + + idx = Index(values) + idx.format() + self.assertIsNone(idx[3]) + + def test_logical_compat(self): + idx = self.create_index() + self.assertEqual(idx.all(), idx.values.all()) + self.assertEqual(idx.any(), idx.values.any()) + + def _check_method_works(self, method): + method(self.empty) + method(self.dateIndex) + method(self.unicodeIndex) + method(self.strIndex) + method(self.intIndex) + method(self.tuples) + method(self.catIndex) + + def test_get_indexer(self): + idx1 = Index([1, 2, 3, 4, 5]) + idx2 = Index([2, 
4, 6]) + + r1 = idx1.get_indexer(idx2) + assert_almost_equal(r1, [1, 3, -1]) + + r1 = idx2.get_indexer(idx1, method='pad') + e1 = [-1, 0, 0, 1, 1] + assert_almost_equal(r1, e1) + + r2 = idx2.get_indexer(idx1[::-1], method='pad') + assert_almost_equal(r2, e1[::-1]) + + rffill1 = idx2.get_indexer(idx1, method='ffill') + assert_almost_equal(r1, rffill1) + + r1 = idx2.get_indexer(idx1, method='backfill') + e1 = [0, 0, 1, 1, 2] + assert_almost_equal(r1, e1) + + rbfill1 = idx2.get_indexer(idx1, method='bfill') + assert_almost_equal(r1, rbfill1) + + r2 = idx2.get_indexer(idx1[::-1], method='backfill') + assert_almost_equal(r2, e1[::-1]) + + def test_get_indexer_invalid(self): + # GH10411 + idx = Index(np.arange(10)) + + with tm.assertRaisesRegexp(ValueError, 'tolerance argument'): + idx.get_indexer([1, 0], tolerance=1) + + with tm.assertRaisesRegexp(ValueError, 'limit argument'): + idx.get_indexer([1, 0], limit=1) + + def test_get_indexer_nearest(self): + idx = Index(np.arange(10)) + + all_methods = ['pad', 'backfill', 'nearest'] + for method in all_methods: + actual = idx.get_indexer([0, 5, 9], method=method) + tm.assert_numpy_array_equal(actual, [0, 5, 9]) + + actual = idx.get_indexer([0, 5, 9], method=method, tolerance=0) + tm.assert_numpy_array_equal(actual, [0, 5, 9]) + + for method, expected in zip(all_methods, [[0, 1, 8], [1, 2, 9], [0, 2, + 9]]): + actual = idx.get_indexer([0.2, 1.8, 8.5], method=method) + tm.assert_numpy_array_equal(actual, expected) + + actual = idx.get_indexer([0.2, 1.8, 8.5], method=method, + tolerance=1) + tm.assert_numpy_array_equal(actual, expected) + + for method, expected in zip(all_methods, [[0, -1, -1], [-1, 2, -1], + [0, 2, -1]]): + actual = idx.get_indexer([0.2, 1.8, 8.5], method=method, + tolerance=0.2) + tm.assert_numpy_array_equal(actual, expected) + + with tm.assertRaisesRegexp(ValueError, 'limit argument'): + idx.get_indexer([1, 0], method='nearest', limit=1) + + def test_get_indexer_nearest_decreasing(self): + idx = 
Index(np.arange(10))[::-1] + + all_methods = ['pad', 'backfill', 'nearest'] + for method in all_methods: + actual = idx.get_indexer([0, 5, 9], method=method) + tm.assert_numpy_array_equal(actual, [9, 4, 0]) + + for method, expected in zip(all_methods, [[8, 7, 0], [9, 8, 1], [9, 7, + 0]]): + actual = idx.get_indexer([0.2, 1.8, 8.5], method=method) + tm.assert_numpy_array_equal(actual, expected) + + def test_get_indexer_strings(self): + idx = pd.Index(['b', 'c']) + + actual = idx.get_indexer(['a', 'b', 'c', 'd'], method='pad') + expected = [-1, 0, 1, 1] + tm.assert_numpy_array_equal(actual, expected) + + actual = idx.get_indexer(['a', 'b', 'c', 'd'], method='backfill') + expected = [0, 0, 1, -1] + tm.assert_numpy_array_equal(actual, expected) + + with tm.assertRaises(TypeError): + idx.get_indexer(['a', 'b', 'c', 'd'], method='nearest') + + with tm.assertRaises(TypeError): + idx.get_indexer(['a', 'b', 'c', 'd'], method='pad', tolerance=2) + + def test_get_loc(self): + idx = pd.Index([0, 1, 2]) + all_methods = [None, 'pad', 'backfill', 'nearest'] + for method in all_methods: + self.assertEqual(idx.get_loc(1, method=method), 1) + if method is not None: + self.assertEqual(idx.get_loc(1, method=method, tolerance=0), 1) + with tm.assertRaises(TypeError): + idx.get_loc([1, 2], method=method) + + for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]: + self.assertEqual(idx.get_loc(1.1, method), loc) + + for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]: + self.assertEqual(idx.get_loc(1.1, method, tolerance=1), loc) + + for method in ['pad', 'backfill', 'nearest']: + with tm.assertRaises(KeyError): + idx.get_loc(1.1, method, tolerance=0.05) + + with tm.assertRaisesRegexp(ValueError, 'must be numeric'): + idx.get_loc(1.1, 'nearest', tolerance='invalid') + with tm.assertRaisesRegexp(ValueError, 'tolerance .* valid if'): + idx.get_loc(1.1, tolerance=1) + + idx = pd.Index(['a', 'c']) + with tm.assertRaises(TypeError): + idx.get_loc('a', method='nearest') 
+ with tm.assertRaises(TypeError): + idx.get_loc('a', method='pad', tolerance='invalid') + + def test_slice_locs(self): + for dtype in [int, float]: + idx = Index(np.array([0, 1, 2, 5, 6, 7, 9, 10], dtype=dtype)) + n = len(idx) + + self.assertEqual(idx.slice_locs(start=2), (2, n)) + self.assertEqual(idx.slice_locs(start=3), (3, n)) + self.assertEqual(idx.slice_locs(3, 8), (3, 6)) + self.assertEqual(idx.slice_locs(5, 10), (3, n)) + self.assertEqual(idx.slice_locs(end=8), (0, 6)) + self.assertEqual(idx.slice_locs(end=9), (0, 7)) + + # reversed + idx2 = idx[::-1] + self.assertEqual(idx2.slice_locs(8, 2), (2, 6)) + self.assertEqual(idx2.slice_locs(7, 3), (2, 5)) + + # float slicing + idx = Index(np.array([0, 1, 2, 5, 6, 7, 9, 10], dtype=float)) + n = len(idx) + self.assertEqual(idx.slice_locs(5.0, 10.0), (3, n)) + self.assertEqual(idx.slice_locs(4.5, 10.5), (3, 8)) + idx2 = idx[::-1] + self.assertEqual(idx2.slice_locs(8.5, 1.5), (2, 6)) + self.assertEqual(idx2.slice_locs(10.5, -1), (0, n)) + + # int slicing with floats + idx = Index(np.array([0, 1, 2, 5, 6, 7, 9, 10], dtype=int)) + self.assertEqual(idx.slice_locs(5.0, 10.0), (3, n)) + self.assertEqual(idx.slice_locs(4.5, 10.5), (3, 8)) + idx2 = idx[::-1] + self.assertEqual(idx2.slice_locs(8.5, 1.5), (2, 6)) + self.assertEqual(idx2.slice_locs(10.5, -1), (0, n)) + + def test_slice_locs_dup(self): + idx = Index(['a', 'a', 'b', 'c', 'd', 'd']) + self.assertEqual(idx.slice_locs('a', 'd'), (0, 6)) + self.assertEqual(idx.slice_locs(end='d'), (0, 6)) + self.assertEqual(idx.slice_locs('a', 'c'), (0, 4)) + self.assertEqual(idx.slice_locs('b', 'd'), (2, 6)) + + idx2 = idx[::-1] + self.assertEqual(idx2.slice_locs('d', 'a'), (0, 6)) + self.assertEqual(idx2.slice_locs(end='a'), (0, 6)) + self.assertEqual(idx2.slice_locs('d', 'b'), (0, 4)) + self.assertEqual(idx2.slice_locs('c', 'a'), (2, 6)) + + for dtype in [int, float]: + idx = Index(np.array([10, 12, 12, 14], dtype=dtype)) + self.assertEqual(idx.slice_locs(12, 12), (1, 3)) + 
self.assertEqual(idx.slice_locs(11, 13), (1, 3)) + + idx2 = idx[::-1] + self.assertEqual(idx2.slice_locs(12, 12), (1, 3)) + self.assertEqual(idx2.slice_locs(13, 11), (1, 3)) + + def test_slice_locs_na(self): + idx = Index([np.nan, 1, 2]) + self.assertRaises(KeyError, idx.slice_locs, start=1.5) + self.assertRaises(KeyError, idx.slice_locs, end=1.5) + self.assertEqual(idx.slice_locs(1), (1, 3)) + self.assertEqual(idx.slice_locs(np.nan), (0, 3)) + + idx = Index([0, np.nan, np.nan, 1, 2]) + self.assertEqual(idx.slice_locs(np.nan), (1, 5)) + + def test_slice_locs_negative_step(self): + idx = Index(list('bcdxy')) + + SLC = pd.IndexSlice + + def check_slice(in_slice, expected): + s_start, s_stop = idx.slice_locs(in_slice.start, in_slice.stop, + in_slice.step) + result = idx[s_start:s_stop:in_slice.step] + expected = pd.Index(list(expected)) + self.assertTrue(result.equals(expected)) + + for in_slice, expected in [ + (SLC[::-1], 'yxdcb'), (SLC['b':'y':-1], ''), + (SLC['b'::-1], 'b'), (SLC[:'b':-1], 'yxdcb'), + (SLC[:'y':-1], 'y'), (SLC['y'::-1], 'yxdcb'), + (SLC['y'::-4], 'yb'), + # absent labels + (SLC[:'a':-1], 'yxdcb'), (SLC[:'a':-2], 'ydb'), + (SLC['z'::-1], 'yxdcb'), (SLC['z'::-3], 'yc'), + (SLC['m'::-1], 'dcb'), (SLC[:'m':-1], 'yx'), + (SLC['a':'a':-1], ''), (SLC['z':'z':-1], ''), + (SLC['m':'m':-1], '') + ]: + check_slice(in_slice, expected) + + def test_drop(self): + n = len(self.strIndex) + + drop = self.strIndex[lrange(5, 10)] + dropped = self.strIndex.drop(drop) + expected = self.strIndex[lrange(5) + lrange(10, n)] + self.assertTrue(dropped.equals(expected)) + + self.assertRaises(ValueError, self.strIndex.drop, ['foo', 'bar']) + self.assertRaises(ValueError, self.strIndex.drop, ['1', 'bar']) + + # errors='ignore' + mixed = drop.tolist() + ['foo'] + dropped = self.strIndex.drop(mixed, errors='ignore') + expected = self.strIndex[lrange(5) + lrange(10, n)] + self.assert_index_equal(dropped, expected) + + dropped = self.strIndex.drop(['foo', 'bar'], errors='ignore') 
+ expected = self.strIndex[lrange(n)] + self.assert_index_equal(dropped, expected) + + dropped = self.strIndex.drop(self.strIndex[0]) + expected = self.strIndex[1:] + self.assert_index_equal(dropped, expected) + + ser = Index([1, 2, 3]) + dropped = ser.drop(1) + expected = Index([2, 3]) + self.assert_index_equal(dropped, expected) + + # errors='ignore' + self.assertRaises(ValueError, ser.drop, [3, 4]) + + dropped = ser.drop(4, errors='ignore') + expected = Index([1, 2, 3]) + self.assert_index_equal(dropped, expected) + + dropped = ser.drop([3, 4, 5], errors='ignore') + expected = Index([1, 2]) + self.assert_index_equal(dropped, expected) + + def test_tuple_union_bug(self): + import pandas + import numpy as np + + aidx1 = np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')], + dtype=[('num', int), ('let', 'a1')]) + aidx2 = np.array([(1, 'A'), (2, 'A'), (1, 'B'), + (2, 'B'), (1, 'C'), (2, 'C')], + dtype=[('num', int), ('let', 'a1')]) + + idx1 = pandas.Index(aidx1) + idx2 = pandas.Index(aidx2) + + # intersection broken? 
+ int_idx = idx1.intersection(idx2) + # needs to be 1d like idx1 and idx2 + expected = idx1[:4] # pandas.Index(sorted(set(idx1) & set(idx2))) + self.assertEqual(int_idx.ndim, 1) + self.assertTrue(int_idx.equals(expected)) + + # union broken + union_idx = idx1.union(idx2) + expected = idx2 + self.assertEqual(union_idx.ndim, 1) + self.assertTrue(union_idx.equals(expected)) + + def test_is_monotonic_incomparable(self): + index = Index([5, datetime.now(), 7]) + self.assertFalse(index.is_monotonic) + self.assertFalse(index.is_monotonic_decreasing) + + def test_get_set_value(self): + values = np.random.randn(100) + date = self.dateIndex[67] + + assert_almost_equal(self.dateIndex.get_value(values, date), values[67]) + + self.dateIndex.set_value(values, date, 10) + self.assertEqual(values[67], 10) + + def test_isin(self): + values = ['foo', 'bar', 'quux'] + + idx = Index(['qux', 'baz', 'foo', 'bar']) + result = idx.isin(values) + expected = np.array([False, False, True, True]) + tm.assert_numpy_array_equal(result, expected) + + # empty, return dtype bool + idx = Index([]) + result = idx.isin(values) + self.assertEqual(len(result), 0) + self.assertEqual(result.dtype, np.bool_) + + def test_isin_nan(self): + tm.assert_numpy_array_equal( + Index(['a', np.nan]).isin([np.nan]), [False, True]) + tm.assert_numpy_array_equal( + Index(['a', pd.NaT]).isin([pd.NaT]), [False, True]) + tm.assert_numpy_array_equal( + Index(['a', np.nan]).isin([float('nan')]), [False, False]) + tm.assert_numpy_array_equal( + Index(['a', np.nan]).isin([pd.NaT]), [False, False]) + # Float64Index overrides isin, so must be checked separately + tm.assert_numpy_array_equal( + Float64Index([1.0, np.nan]).isin([np.nan]), [False, True]) + tm.assert_numpy_array_equal( + Float64Index([1.0, np.nan]).isin([float('nan')]), [False, True]) + tm.assert_numpy_array_equal( + Float64Index([1.0, np.nan]).isin([pd.NaT]), [False, True]) + + def test_isin_level_kwarg(self): + def check_idx(idx): + values = idx.tolist()[-2:] + 
['nonexisting'] + + expected = np.array([False, False, True, True]) + tm.assert_numpy_array_equal(expected, idx.isin(values, level=0)) + tm.assert_numpy_array_equal(expected, idx.isin(values, level=-1)) + + self.assertRaises(IndexError, idx.isin, values, level=1) + self.assertRaises(IndexError, idx.isin, values, level=10) + self.assertRaises(IndexError, idx.isin, values, level=-2) + + self.assertRaises(KeyError, idx.isin, values, level=1.0) + self.assertRaises(KeyError, idx.isin, values, level='foobar') + + idx.name = 'foobar' + tm.assert_numpy_array_equal(expected, + idx.isin(values, level='foobar')) + + self.assertRaises(KeyError, idx.isin, values, level='xyzzy') + self.assertRaises(KeyError, idx.isin, values, level=np.nan) + + check_idx(Index(['qux', 'baz', 'foo', 'bar'])) + # Float64Index overrides isin, so must be checked separately + check_idx(Float64Index([1.0, 2.0, 3.0, 4.0])) + + def test_boolean_cmp(self): + values = [1, 2, 3, 4] + + idx = Index(values) + res = (idx == values) + + tm.assert_numpy_array_equal(res, np.array( + [True, True, True, True], dtype=bool)) + + def test_get_level_values(self): + result = self.strIndex.get_level_values(0) + self.assertTrue(result.equals(self.strIndex)) + + def test_slice_keep_name(self): + idx = Index(['a', 'b'], name='asdf') + self.assertEqual(idx.name, idx[1:].name) + + def test_join_self(self): + # instance attributes of the form self.<name>Index + indices = 'unicode', 'str', 'date', 'int', 'float' + kinds = 'outer', 'inner', 'left', 'right' + for index_kind in indices: + res = getattr(self, '{0}Index'.format(index_kind)) + + for kind in kinds: + joined = res.join(res, how=kind) + self.assertIs(res, joined) + + def test_str_attribute(self): + # GH9068 + methods = ['strip', 'rstrip', 'lstrip'] + idx = Index([' jack', 'jill ', ' jesse ', 'frank']) + for method in methods: + expected = Index([getattr(str, method)(x) for x in idx.values]) + tm.assert_index_equal( + getattr(Index.str, method)(idx.str), expected) + + # 
create a few instances that are not able to use .str accessor + indices = [Index(range(5)), tm.makeDateIndex(10), + MultiIndex.from_tuples([('foo', '1'), ('bar', '3')]), + PeriodIndex(start='2000', end='2010', freq='A')] + for idx in indices: + with self.assertRaisesRegexp(AttributeError, + 'only use .str accessor'): + idx.str.repeat(2) + + idx = Index(['a b c', 'd e', 'f']) + expected = Index([['a', 'b', 'c'], ['d', 'e'], ['f']]) + tm.assert_index_equal(idx.str.split(), expected) + tm.assert_index_equal(idx.str.split(expand=False), expected) + + expected = MultiIndex.from_tuples([('a', 'b', 'c'), ('d', 'e', np.nan), + ('f', np.nan, np.nan)]) + tm.assert_index_equal(idx.str.split(expand=True), expected) + + # test boolean case, should return np.array instead of boolean Index + idx = Index(['a1', 'a2', 'b1', 'b2']) + expected = np.array([True, True, False, False]) + tm.assert_numpy_array_equal(idx.str.startswith('a'), expected) + self.assertIsInstance(idx.str.startswith('a'), np.ndarray) + s = Series(range(4), index=idx) + expected = Series(range(2), index=['a1', 'a2']) + tm.assert_series_equal(s[s.index.str.startswith('a')], expected) + + def test_tab_completion(self): + # GH 9910 + idx = Index(list('abcd')) + self.assertTrue('str' in dir(idx)) + + idx = Index(range(4)) + self.assertTrue('str' not in dir(idx)) + + def test_indexing_doesnt_change_class(self): + idx = Index([1, 2, 3, 'a', 'b', 'c']) + + self.assertTrue(idx[1:3].identical(pd.Index([2, 3], dtype=np.object_))) + self.assertTrue(idx[[0, 1]].identical(pd.Index( + [1, 2], dtype=np.object_))) + + def test_outer_join_sort(self): + left_idx = Index(np.random.permutation(15)) + right_idx = tm.makeDateIndex(10) + + with tm.assert_produces_warning(RuntimeWarning): + joined = left_idx.join(right_idx, how='outer') + + # right_idx in this case because DatetimeIndex has join precedence over + # Int64Index + with tm.assert_produces_warning(RuntimeWarning): + expected = 
right_idx.astype(object).union(left_idx.astype(object)) + tm.assert_index_equal(joined, expected) + + def test_nan_first_take_datetime(self): + idx = Index([pd.NaT, Timestamp('20130101'), Timestamp('20130102')]) + res = idx.take([-1, 0, 1]) + exp = Index([idx[-1], idx[0], idx[1]]) + tm.assert_index_equal(res, exp) + + def test_reindex_preserves_name_if_target_is_list_or_ndarray(self): + # GH6552 + idx = pd.Index([0, 1, 2]) + + dt_idx = pd.date_range('20130101', periods=3) + + idx.name = None + self.assertEqual(idx.reindex([])[0].name, None) + self.assertEqual(idx.reindex(np.array([]))[0].name, None) + self.assertEqual(idx.reindex(idx.tolist())[0].name, None) + self.assertEqual(idx.reindex(idx.tolist()[:-1])[0].name, None) + self.assertEqual(idx.reindex(idx.values)[0].name, None) + self.assertEqual(idx.reindex(idx.values[:-1])[0].name, None) + + # Must preserve name even if dtype changes. + self.assertEqual(idx.reindex(dt_idx.values)[0].name, None) + self.assertEqual(idx.reindex(dt_idx.tolist())[0].name, None) + + idx.name = 'foobar' + self.assertEqual(idx.reindex([])[0].name, 'foobar') + self.assertEqual(idx.reindex(np.array([]))[0].name, 'foobar') + self.assertEqual(idx.reindex(idx.tolist())[0].name, 'foobar') + self.assertEqual(idx.reindex(idx.tolist()[:-1])[0].name, 'foobar') + self.assertEqual(idx.reindex(idx.values)[0].name, 'foobar') + self.assertEqual(idx.reindex(idx.values[:-1])[0].name, 'foobar') + + # Must preserve name even if dtype changes. 
+ self.assertEqual(idx.reindex(dt_idx.values)[0].name, 'foobar') + self.assertEqual(idx.reindex(dt_idx.tolist())[0].name, 'foobar') + + def test_reindex_preserves_type_if_target_is_empty_list_or_array(self): + # GH7774 + idx = pd.Index(list('abc')) + + def get_reindex_type(target): + return idx.reindex(target)[0].dtype.type + + self.assertEqual(get_reindex_type([]), np.object_) + self.assertEqual(get_reindex_type(np.array([])), np.object_) + self.assertEqual(get_reindex_type(np.array([], dtype=np.int64)), + np.object_) + + def test_reindex_doesnt_preserve_type_if_target_is_empty_index(self): + # GH7774 + idx = pd.Index(list('abc')) + + def get_reindex_type(target): + return idx.reindex(target)[0].dtype.type + + self.assertEqual(get_reindex_type(pd.Int64Index([])), np.int64) + self.assertEqual(get_reindex_type(pd.Float64Index([])), np.float64) + self.assertEqual(get_reindex_type(pd.DatetimeIndex([])), np.datetime64) + + reindexed = idx.reindex(pd.MultiIndex( + [pd.Int64Index([]), pd.Float64Index([])], [[], []]))[0] + self.assertEqual(reindexed.levels[0].dtype.type, np.int64) + self.assertEqual(reindexed.levels[1].dtype.type, np.float64) + + def test_groupby(self): + idx = Index(range(5)) + groups = idx.groupby(np.array([1, 1, 2, 2, 2])) + exp = {1: [0, 1], 2: [2, 3, 4]} + tm.assert_dict_equal(groups, exp) + + def test_equals_op_multiindex(self): + # GH9785 + # test comparisons of multiindex + from pandas.compat import StringIO + df = pd.read_csv(StringIO('a,b,c\n1,2,3\n4,5,6'), index_col=[0, 1]) + tm.assert_numpy_array_equal(df.index == df.index, + np.array([True, True])) + + mi1 = MultiIndex.from_tuples([(1, 2), (4, 5)]) + tm.assert_numpy_array_equal(df.index == mi1, np.array([True, True])) + mi2 = MultiIndex.from_tuples([(1, 2), (4, 6)]) + tm.assert_numpy_array_equal(df.index == mi2, np.array([True, False])) + mi3 = MultiIndex.from_tuples([(1, 2), (4, 5), (8, 9)]) + with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + df.index == mi3 + + index_a = 
Index(['foo', 'bar', 'baz']) + with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + df.index == index_a + tm.assert_numpy_array_equal(index_a == mi3, + np.array([False, False, False])) + + def test_conversion_preserves_name(self): + # GH 10875 + i = pd.Index(['01:02:03', '01:02:04'], name='label') + self.assertEqual(i.name, pd.to_datetime(i).name) + self.assertEqual(i.name, pd.to_timedelta(i).name) + + def test_string_index_repr(self): + # py3/py2 repr can differ because of "u" prefix + # which also affects to displayed element size + + # suppress flake8 warnings + if PY3: + coerce = lambda x: x + else: + coerce = unicode + + # short + idx = pd.Index(['a', 'bb', 'ccc']) + if PY3: + expected = u"""Index(['a', 'bb', 'ccc'], dtype='object')""" + self.assertEqual(repr(idx), expected) + else: + expected = u"""Index([u'a', u'bb', u'ccc'], dtype='object')""" + self.assertEqual(coerce(idx), expected) + + # multiple lines + idx = pd.Index(['a', 'bb', 'ccc'] * 10) + if PY3: + expected = u"""\ +Index(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', + 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', + 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], + dtype='object')""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""\ +Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', + u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', + u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'], + dtype='object')""" + + self.assertEqual(coerce(idx), expected) + + # truncated + idx = pd.Index(['a', 'bb', 'ccc'] * 100) + if PY3: + expected = u"""\ +Index(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', + ... + 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], + dtype='object', length=300)""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""\ +Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', + ... 
+ u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'], + dtype='object', length=300)""" + + self.assertEqual(coerce(idx), expected) + + # short + idx = pd.Index([u'あ', u'いい', u'ううう']) + if PY3: + expected = u"""Index(['あ', 'いい', 'ううう'], dtype='object')""" + self.assertEqual(repr(idx), expected) + else: + expected = u"""\ +Index([u'あ', u'いい', u'ううう'], dtype='object')""" + self.assertEqual(coerce(idx), expected) + + # multiple lines + idx = pd.Index([u'あ', u'いい', u'ううう'] * 10) + if PY3: + expected = u"""Index(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', + 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', + 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], + dtype='object')""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', + u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', + u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], + dtype='object')""" + + self.assertEqual(coerce(idx), expected) + + # truncated + idx = pd.Index([u'あ', u'いい', u'ううう'] * 100) + if PY3: + expected = u"""Index(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', + ... + 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], + dtype='object', length=300)""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', + ... 
+ u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'],
+ dtype='object', length=300)"""
+
+ self.assertEqual(coerce(idx), expected)
+
+ # Enable Unicode option -----------------------------------------
+ with cf.option_context('display.unicode.east_asian_width', True):
+
+ # short
+ idx = pd.Index([u'あ', u'いい', u'ううう'])
+ if PY3:
+ expected = u"""Index(['あ', 'いい', 'ううう'], dtype='object')"""
+ self.assertEqual(repr(idx), expected)
+ else:
+ expected = u"""Index([u'あ', u'いい', u'ううう'], dtype='object')"""
+ self.assertEqual(coerce(idx), expected)
+
+ # multiple lines
+ idx = pd.Index([u'あ', u'いい', u'ううう'] * 10)
+ if PY3:
+ expected = u"""Index(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう',
+ 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう',
+ 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう',
+ 'あ', 'いい', 'ううう'],
+ dtype='object')"""
+
+ self.assertEqual(repr(idx), expected)
+ else:
+ expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい',
+ u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ',
+ u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい',
+ u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'],
+ dtype='object')"""
+
+ self.assertEqual(coerce(idx), expected)
+
+ # truncated
+ idx = pd.Index([u'あ', u'いい', u'ううう'] * 100)
+ if PY3:
+ expected = u"""Index(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう',
+ 'あ',
+ ...
+ 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい',
+ 'ううう'],
+ dtype='object', length=300)"""
+
+ self.assertEqual(repr(idx), expected)
+ else:
+ expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい',
+ u'ううう', u'あ',
+ ...
+ u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', + u'いい', u'ううう'], + dtype='object', length=300)""" + + self.assertEqual(coerce(idx), expected) + + +def test_get_combined_index(): + from pandas.core.index import _get_combined_index + result = _get_combined_index([]) + assert (result.equals(Index([]))) diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py new file mode 100644 index 0000000000000..78016c0f0b5f7 --- /dev/null +++ b/pandas/tests/indexes/test_category.py @@ -0,0 +1,664 @@ +# -*- coding: utf-8 -*- + +# TODO(wesm): fix long line flake8 issues +# flake8: noqa + +import pandas.util.testing as tm +from pandas.indexes.api import Index, CategoricalIndex +from .common import Base + +from pandas.compat import range, PY3 + +import numpy as np + +from pandas import Categorical, compat +from pandas.util.testing import assert_almost_equal +import pandas.core.config as cf +import pandas as pd + +if PY3: + unicode = lambda x: x + + +class TestCategoricalIndex(Base, tm.TestCase): + _holder = CategoricalIndex + + def setUp(self): + self.indices = dict(catIndex=tm.makeCategoricalIndex(100)) + self.setup_indices() + + def create_index(self, categories=None, ordered=False): + if categories is None: + categories = list('cab') + return CategoricalIndex( + list('aabbca'), categories=categories, ordered=ordered) + + def test_construction(self): + + ci = self.create_index(categories=list('abcd')) + categories = ci.categories + + result = Index(ci) + tm.assert_index_equal(result, ci, exact=True) + self.assertFalse(result.ordered) + + result = Index(ci.values) + tm.assert_index_equal(result, ci, exact=True) + self.assertFalse(result.ordered) + + # empty + result = CategoricalIndex(categories=categories) + self.assertTrue(result.categories.equals(Index(categories))) + tm.assert_numpy_array_equal(result.codes, np.array([], dtype='int8')) + self.assertFalse(result.ordered) + + # passing categories + result = 
CategoricalIndex(list('aabbca'), categories=categories) + self.assertTrue(result.categories.equals(Index(categories))) + tm.assert_numpy_array_equal(result.codes, np.array( + [0, 0, 1, 1, 2, 0], dtype='int8')) + + c = pd.Categorical(list('aabbca')) + result = CategoricalIndex(c) + self.assertTrue(result.categories.equals(Index(list('abc')))) + tm.assert_numpy_array_equal(result.codes, np.array( + [0, 0, 1, 1, 2, 0], dtype='int8')) + self.assertFalse(result.ordered) + + result = CategoricalIndex(c, categories=categories) + self.assertTrue(result.categories.equals(Index(categories))) + tm.assert_numpy_array_equal(result.codes, np.array( + [0, 0, 1, 1, 2, 0], dtype='int8')) + self.assertFalse(result.ordered) + + ci = CategoricalIndex(c, categories=list('abcd')) + result = CategoricalIndex(ci) + self.assertTrue(result.categories.equals(Index(categories))) + tm.assert_numpy_array_equal(result.codes, np.array( + [0, 0, 1, 1, 2, 0], dtype='int8')) + self.assertFalse(result.ordered) + + result = CategoricalIndex(ci, categories=list('ab')) + self.assertTrue(result.categories.equals(Index(list('ab')))) + tm.assert_numpy_array_equal(result.codes, np.array( + [0, 0, 1, 1, -1, 0], dtype='int8')) + self.assertFalse(result.ordered) + + result = CategoricalIndex(ci, categories=list('ab'), ordered=True) + self.assertTrue(result.categories.equals(Index(list('ab')))) + tm.assert_numpy_array_equal(result.codes, np.array( + [0, 0, 1, 1, -1, 0], dtype='int8')) + self.assertTrue(result.ordered) + + # turn me to an Index + result = Index(np.array(ci)) + self.assertIsInstance(result, Index) + self.assertNotIsInstance(result, CategoricalIndex) + + def test_construction_with_dtype(self): + + # specify dtype + ci = self.create_index(categories=list('abc')) + + result = Index(np.array(ci), dtype='category') + tm.assert_index_equal(result, ci, exact=True) + + result = Index(np.array(ci).tolist(), dtype='category') + tm.assert_index_equal(result, ci, exact=True) + + # these are generally only 
equal when the categories are reordered + ci = self.create_index() + + result = Index( + np.array(ci), dtype='category').reorder_categories(ci.categories) + tm.assert_index_equal(result, ci, exact=True) + + # make sure indexes are handled + expected = CategoricalIndex([0, 1, 2], categories=[0, 1, 2], + ordered=True) + idx = Index(range(3)) + result = CategoricalIndex(idx, categories=idx, ordered=True) + tm.assert_index_equal(result, expected, exact=True) + + def test_disallow_set_ops(self): + + # GH 10039 + # set ops (+/-) raise TypeError + idx = pd.Index(pd.Categorical(['a', 'b'])) + + self.assertRaises(TypeError, lambda: idx - idx) + self.assertRaises(TypeError, lambda: idx + idx) + self.assertRaises(TypeError, lambda: idx - ['a', 'b']) + self.assertRaises(TypeError, lambda: idx + ['a', 'b']) + self.assertRaises(TypeError, lambda: ['a', 'b'] - idx) + self.assertRaises(TypeError, lambda: ['a', 'b'] + idx) + + def test_method_delegation(self): + + ci = CategoricalIndex(list('aabbca'), categories=list('cabdef')) + result = ci.set_categories(list('cab')) + tm.assert_index_equal(result, CategoricalIndex( + list('aabbca'), categories=list('cab'))) + + ci = CategoricalIndex(list('aabbca'), categories=list('cab')) + result = ci.rename_categories(list('efg')) + tm.assert_index_equal(result, CategoricalIndex( + list('ffggef'), categories=list('efg'))) + + ci = CategoricalIndex(list('aabbca'), categories=list('cab')) + result = ci.add_categories(['d']) + tm.assert_index_equal(result, CategoricalIndex( + list('aabbca'), categories=list('cabd'))) + + ci = CategoricalIndex(list('aabbca'), categories=list('cab')) + result = ci.remove_categories(['c']) + tm.assert_index_equal(result, CategoricalIndex( + list('aabb') + [np.nan] + ['a'], categories=list('ab'))) + + ci = CategoricalIndex(list('aabbca'), categories=list('cabdef')) + result = ci.as_unordered() + tm.assert_index_equal(result, ci) + + ci = CategoricalIndex(list('aabbca'), categories=list('cabdef')) + result = 
ci.as_ordered()
+ tm.assert_index_equal(result, CategoricalIndex(
+ list('aabbca'), categories=list('cabdef'), ordered=True))
+
+ # invalid
+ self.assertRaises(ValueError, lambda: ci.set_categories(
+ list('cab'), inplace=True))
+
+ def test_contains(self):
+
+ ci = self.create_index(categories=list('cabdef'))
+
+ self.assertTrue('a' in ci)
+ self.assertTrue('z' not in ci)
+ self.assertTrue('e' not in ci)
+ self.assertTrue(np.nan not in ci)
+
+ # assert codes NOT in index
+ self.assertFalse(0 in ci)
+ self.assertFalse(1 in ci)
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ ci = CategoricalIndex(
+ list('aabbca'), categories=list('cabdef') + [np.nan])
+ self.assertFalse(np.nan in ci)
+
+ ci = CategoricalIndex(
+ list('aabbca') + [np.nan], categories=list('cabdef'))
+ self.assertTrue(np.nan in ci)
+
+ def test_min_max(self):
+
+ ci = self.create_index(ordered=False)
+ self.assertRaises(TypeError, lambda: ci.min())
+ self.assertRaises(TypeError, lambda: ci.max())
+
+ ci = self.create_index(ordered=True)
+
+ self.assertEqual(ci.min(), 'c')
+ self.assertEqual(ci.max(), 'b')
+
+ def test_append(self):
+
+ ci = self.create_index()
+ categories = ci.categories
+
+ # append cats with the same categories
+ result = ci[:3].append(ci[3:])
+ tm.assert_index_equal(result, ci, exact=True)
+
+ foos = [ci[:1], ci[1:3], ci[3:]]
+ result = foos[0].append(foos[1:])
+ tm.assert_index_equal(result, ci, exact=True)
+
+ # empty
+ result = ci.append([])
+ tm.assert_index_equal(result, ci, exact=True)
+
+ # appending with different categories or reordered is not ok
+ self.assertRaises(
+ TypeError,
+ lambda: ci.append(ci.values.set_categories(list('abcd'))))
+ self.assertRaises(
+ TypeError,
+ lambda: ci.append(ci.values.reorder_categories(list('abc'))))
+
+ # with objects
+ result = ci.append(['c', 'a'])
+ expected = CategoricalIndex(list('aabbcaca'), categories=categories)
+ tm.assert_index_equal(result, expected, exact=True)
+
+ # invalid objects
+ 
self.assertRaises(TypeError, lambda: ci.append(['a', 'd']))
+
+ def test_insert(self):
+
+ ci = self.create_index()
+ categories = ci.categories
+
+ # test 0th element
+ result = ci.insert(0, 'a')
+ expected = CategoricalIndex(list('aaabbca'), categories=categories)
+ tm.assert_index_equal(result, expected, exact=True)
+
+ # test Nth element that follows Python list behavior
+ result = ci.insert(-1, 'a')
+ expected = CategoricalIndex(list('aabbcaa'), categories=categories)
+ tm.assert_index_equal(result, expected, exact=True)
+
+ # test empty
+ result = CategoricalIndex(categories=categories).insert(0, 'a')
+ expected = CategoricalIndex(['a'], categories=categories)
+ tm.assert_index_equal(result, expected, exact=True)
+
+ # invalid
+ self.assertRaises(TypeError, lambda: ci.insert(0, 'd'))
+
+ def test_delete(self):
+
+ ci = self.create_index()
+ categories = ci.categories
+
+ result = ci.delete(0)
+ expected = CategoricalIndex(list('abbca'), categories=categories)
+ tm.assert_index_equal(result, expected, exact=True)
+
+ result = ci.delete(-1)
+ expected = CategoricalIndex(list('aabbc'), categories=categories)
+ tm.assert_index_equal(result, expected, exact=True)
+
+ with tm.assertRaises((IndexError, ValueError)):
+ # either depending on numpy version
+ result = ci.delete(10)
+
+ def test_astype(self):
+
+ ci = self.create_index()
+ result = ci.astype('category')
+ tm.assert_index_equal(result, ci, exact=True)
+
+ result = ci.astype(object)
+ self.assertTrue(result.equals(Index(np.array(ci))))
+
+ # this IS equal, but not the same class
+ self.assertTrue(result.equals(ci))
+ self.assertIsInstance(result, Index)
+ self.assertNotIsInstance(result, CategoricalIndex)
+
+ def test_reindex_base(self):
+
+ # determined by cat ordering
+ idx = self.create_index()
+ expected = np.array([4, 0, 1, 5, 2, 3])
+
+ actual = idx.get_indexer(idx)
+ tm.assert_numpy_array_equal(expected, actual)
+
+ with tm.assertRaisesRegexp(ValueError, 'Invalid fill method'):
+ 
idx.get_indexer(idx, method='invalid') + + def test_reindexing(self): + + ci = self.create_index() + oidx = Index(np.array(ci)) + + for n in [1, 2, 5, len(ci)]: + finder = oidx[np.random.randint(0, len(ci), size=n)] + expected = oidx.get_indexer_non_unique(finder)[0] + + actual = ci.get_indexer(finder) + tm.assert_numpy_array_equal(expected, actual) + + def test_reindex_dtype(self): + res, indexer = CategoricalIndex(['a', 'b', 'c', 'a']).reindex(['a', 'c' + ]) + tm.assert_index_equal(res, Index(['a', 'a', 'c']), exact=True) + tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2])) + + res, indexer = CategoricalIndex(['a', 'b', 'c', 'a']).reindex( + Categorical(['a', 'c'])) + tm.assert_index_equal(res, CategoricalIndex( + ['a', 'a', 'c'], categories=['a', 'c']), exact=True) + tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2])) + + res, indexer = CategoricalIndex( + ['a', 'b', 'c', 'a' + ], categories=['a', 'b', 'c', 'd']).reindex(['a', 'c']) + tm.assert_index_equal(res, Index( + ['a', 'a', 'c'], dtype='object'), exact=True) + tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2])) + + res, indexer = CategoricalIndex( + ['a', 'b', 'c', 'a'], + categories=['a', 'b', 'c', 'd']).reindex(Categorical(['a', 'c'])) + tm.assert_index_equal(res, CategoricalIndex( + ['a', 'a', 'c'], categories=['a', 'c']), exact=True) + tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2])) + + def test_duplicates(self): + + idx = CategoricalIndex([0, 0, 0], name='foo') + self.assertFalse(idx.is_unique) + self.assertTrue(idx.has_duplicates) + + expected = CategoricalIndex([0], name='foo') + self.assert_index_equal(idx.drop_duplicates(), expected) + + def test_get_indexer(self): + + idx1 = CategoricalIndex(list('aabcde'), categories=list('edabc')) + idx2 = CategoricalIndex(list('abf')) + + for indexer in [idx2, list('abf'), Index(list('abf'))]: + r1 = idx1.get_indexer(idx2) + assert_almost_equal(r1, [0, 1, 2, -1]) + + self.assertRaises(NotImplementedError, + lambda: 
idx2.get_indexer(idx1, method='pad')) + self.assertRaises(NotImplementedError, + lambda: idx2.get_indexer(idx1, method='backfill')) + self.assertRaises(NotImplementedError, + lambda: idx2.get_indexer(idx1, method='nearest')) + + def test_repr_roundtrip(self): + + ci = CategoricalIndex(['a', 'b'], categories=['a', 'b'], ordered=True) + str(ci) + tm.assert_index_equal(eval(repr(ci)), ci, exact=True) + + # formatting + if PY3: + str(ci) + else: + compat.text_type(ci) + + # long format + # this is not reprable + ci = CategoricalIndex(np.random.randint(0, 5, size=100)) + if PY3: + str(ci) + else: + compat.text_type(ci) + + def test_isin(self): + + ci = CategoricalIndex( + list('aabca') + [np.nan], categories=['c', 'a', 'b']) + tm.assert_numpy_array_equal( + ci.isin(['c']), + np.array([False, False, False, True, False, False])) + tm.assert_numpy_array_equal( + ci.isin(['c', 'a', 'b']), np.array([True] * 5 + [False])) + tm.assert_numpy_array_equal( + ci.isin(['c', 'a', 'b', np.nan]), np.array([True] * 6)) + + # mismatched categorical -> coerced to ndarray so doesn't matter + tm.assert_numpy_array_equal( + ci.isin(ci.set_categories(list('abcdefghi'))), np.array([True] * + 6)) + tm.assert_numpy_array_equal( + ci.isin(ci.set_categories(list('defghi'))), + np.array([False] * 5 + [True])) + + def test_identical(self): + + ci1 = CategoricalIndex(['a', 'b'], categories=['a', 'b'], ordered=True) + ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'], + ordered=True) + self.assertTrue(ci1.identical(ci1)) + self.assertTrue(ci1.identical(ci1.copy())) + self.assertFalse(ci1.identical(ci2)) + + def test_equals(self): + + ci1 = CategoricalIndex(['a', 'b'], categories=['a', 'b'], ordered=True) + ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'], + ordered=True) + + self.assertTrue(ci1.equals(ci1)) + self.assertFalse(ci1.equals(ci2)) + self.assertTrue(ci1.equals(ci1.astype(object))) + self.assertTrue(ci1.astype(object).equals(ci1)) + + self.assertTrue((ci1 == 
ci1).all()) + self.assertFalse((ci1 != ci1).all()) + self.assertFalse((ci1 > ci1).all()) + self.assertFalse((ci1 < ci1).all()) + self.assertTrue((ci1 <= ci1).all()) + self.assertTrue((ci1 >= ci1).all()) + + self.assertFalse((ci1 == 1).all()) + self.assertTrue((ci1 == Index(['a', 'b'])).all()) + self.assertTrue((ci1 == ci1.values).all()) + + # invalid comparisons + with tm.assertRaisesRegexp(ValueError, "Lengths must match"): + ci1 == Index(['a', 'b', 'c']) + self.assertRaises(TypeError, lambda: ci1 == ci2) + self.assertRaises( + TypeError, lambda: ci1 == Categorical(ci1.values, ordered=False)) + self.assertRaises( + TypeError, + lambda: ci1 == Categorical(ci1.values, categories=list('abc'))) + + # tests + # make sure that we are testing for category inclusion properly + self.assertTrue(CategoricalIndex( + list('aabca'), categories=['c', 'a', 'b']).equals(list('aabca'))) + with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + self.assertTrue(CategoricalIndex( + list('aabca'), categories=['c', 'a', 'b', np.nan]).equals(list( + 'aabca'))) + + self.assertFalse(CategoricalIndex( + list('aabca') + [np.nan], categories=['c', 'a', 'b']).equals(list( + 'aabca'))) + self.assertTrue(CategoricalIndex( + list('aabca') + [np.nan], categories=['c', 'a', 'b']).equals(list( + 'aabca') + [np.nan])) + + def test_string_categorical_index_repr(self): + # short + idx = pd.CategoricalIndex(['a', 'bb', 'ccc']) + if PY3: + expected = u"""CategoricalIndex(['a', 'bb', 'ccc'], categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')""" + self.assertEqual(repr(idx), expected) + else: + expected = u"""CategoricalIndex([u'a', u'bb', u'ccc'], categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category')""" + self.assertEqual(unicode(idx), expected) + + # multiple lines + idx = pd.CategoricalIndex(['a', 'bb', 'ccc'] * 10) + if PY3: + expected = u"""CategoricalIndex(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', + 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 
'bb', 'ccc', 'a', 'bb', + 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], + categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""CategoricalIndex([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', + u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', + u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', + u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'], + categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category')""" + + self.assertEqual(unicode(idx), expected) + + # truncated + idx = pd.CategoricalIndex(['a', 'bb', 'ccc'] * 100) + if PY3: + expected = u"""CategoricalIndex(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', + ... + 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], + categories=['a', 'bb', 'ccc'], ordered=False, dtype='category', length=300)""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""CategoricalIndex([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', + u'ccc', u'a', + ... 
+ u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', + u'bb', u'ccc'], + categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category', length=300)""" + + self.assertEqual(unicode(idx), expected) + + # larger categories + idx = pd.CategoricalIndex(list('abcdefghijklmmo')) + if PY3: + expected = u"""CategoricalIndex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', + 'm', 'm', 'o'], + categories=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', ...], ordered=False, dtype='category')""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""CategoricalIndex([u'a', u'b', u'c', u'd', u'e', u'f', u'g', u'h', u'i', u'j', + u'k', u'l', u'm', u'm', u'o'], + categories=[u'a', u'b', u'c', u'd', u'e', u'f', u'g', u'h', ...], ordered=False, dtype='category')""" + + self.assertEqual(unicode(idx), expected) + + # short + idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう']) + if PY3: + expected = u"""CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" + self.assertEqual(repr(idx), expected) + else: + expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう'], categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" + self.assertEqual(unicode(idx), expected) + + # multiple lines + idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 10) + if PY3: + expected = u"""CategoricalIndex(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', + 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', + 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], + categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', + u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', + u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', + u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], + categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" + + 
self.assertEqual(unicode(idx), expected)
+
+ # truncated
+ idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 100)
+ if PY3:
+ expected = u"""CategoricalIndex(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ',
+ ...
+ 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'],
+ categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)"""
+
+ self.assertEqual(repr(idx), expected)
+ else:
+ expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい',
+ u'ううう', u'あ',
+ ...
+ u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ',
+ u'いい', u'ううう'],
+ categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category', length=300)"""
+
+ self.assertEqual(unicode(idx), expected)
+
+ # larger categories
+ idx = pd.CategoricalIndex(list(u'あいうえおかきくけこさしすせそ'))
+ if PY3:
+ expected = u"""CategoricalIndex(['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', 'け', 'こ', 'さ', 'し',
+ 'す', 'せ', 'そ'],
+ categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')"""
+
+ self.assertEqual(repr(idx), expected)
+ else:
+ expected = u"""CategoricalIndex([u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', u'け', u'こ',
+ u'さ', u'し', u'す', u'せ', u'そ'],
+ categories=[u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', ...], ordered=False, dtype='category')"""
+
+ self.assertEqual(unicode(idx), expected)
+
+ # Enable Unicode option -----------------------------------------
+ with cf.option_context('display.unicode.east_asian_width', True):
+
+ # short
+ idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'])
+ if PY3:
+ expected = u"""CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')"""
+ self.assertEqual(repr(idx), expected)
+ else:
+ expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう'], categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')"""
+ self.assertEqual(unicode(idx), expected)
+
+ # multiple lines
+ idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 10)
+ if PY3:
+ 
expected = u"""CategoricalIndex(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', + 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', + 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', + 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], + categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', + u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', + u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', + u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', + u'いい', u'ううう', u'あ', u'いい', u'ううう'], + categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" + + self.assertEqual(unicode(idx), expected) + + # truncated + idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 100) + if PY3: + expected = u"""CategoricalIndex(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', + 'ううう', 'あ', + ... + 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', + 'あ', 'いい', 'ううう'], + categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', + u'いい', u'ううう', u'あ', + ... 
+ u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', + u'ううう', u'あ', u'いい', u'ううう'], + categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category', length=300)""" + + self.assertEqual(unicode(idx), expected) + + # larger categories + idx = pd.CategoricalIndex(list(u'あいうえおかきくけこさしすせそ')) + if PY3: + expected = u"""CategoricalIndex(['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', 'け', 'こ', + 'さ', 'し', 'す', 'せ', 'そ'], + categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" + + self.assertEqual(repr(idx), expected) + else: + expected = u"""CategoricalIndex([u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', + u'け', u'こ', u'さ', u'し', u'す', u'せ', u'そ'], + categories=[u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', ...], ordered=False, dtype='category')""" + + self.assertEqual(unicode(idx), expected) + + def test_fillna_categorical(self): + # GH 11343 + idx = CategoricalIndex([1.0, np.nan, 3.0, 1.0], name='x') + # fill by value in categories + exp = CategoricalIndex([1.0, 1.0, 3.0, 1.0], name='x') + self.assert_index_equal(idx.fillna(1.0), exp) + + # fill by value not in categories raises ValueError + with tm.assertRaisesRegexp(ValueError, + 'fill value must be in categories'): + idx.fillna(2.0) diff --git a/pandas/tests/indexes/test_datetimelike.py b/pandas/tests/indexes/test_datetimelike.py new file mode 100644 index 0000000000000..de505b93da241 --- /dev/null +++ b/pandas/tests/indexes/test_datetimelike.py @@ -0,0 +1,855 @@ +# -*- coding: utf-8 -*- + +from datetime import timedelta, time + +import numpy as np + +from pandas import (date_range, period_range, + Series, Index, DatetimeIndex, + TimedeltaIndex, PeriodIndex) + +import pandas.util.testing as tm + +import pandas as pd +from pandas.lib import Timestamp + +from .common import Base + + +class DatetimeLike(Base): + + def test_shift_identity(self): + + idx = self.create_index() + self.assert_index_equal(idx, idx.shift(0)) + + def test_str(self): + + # test the string repr + idx = 
self.create_index() + idx.name = 'foo' + self.assertFalse("length=%s" % len(idx) in str(idx)) + self.assertTrue("'foo'" in str(idx)) + self.assertTrue(idx.__class__.__name__ in str(idx)) + + if hasattr(idx, 'tz'): + if idx.tz is not None: + self.assertTrue(idx.tz in str(idx)) + if hasattr(idx, 'freq'): + self.assertTrue("freq='%s'" % idx.freqstr in str(idx)) + + def test_view(self): + super(DatetimeLike, self).test_view() + + i = self.create_index() + + i_view = i.view('i8') + result = self._holder(i) + tm.assert_index_equal(result, i) + + i_view = i.view(self._holder) + result = self._holder(i) + tm.assert_index_equal(result, i_view) + + +class TestDatetimeIndex(DatetimeLike, tm.TestCase): + _holder = DatetimeIndex + _multiprocess_can_split_ = True + + def setUp(self): + self.indices = dict(index=tm.makeDateIndex(10)) + self.setup_indices() + + def create_index(self): + return date_range('20130101', periods=5) + + def test_shift(self): + + # test shift for datetimeIndex and non datetimeIndex + # GH8083 + + drange = self.create_index() + result = drange.shift(1) + expected = DatetimeIndex(['2013-01-02', '2013-01-03', '2013-01-04', + '2013-01-05', + '2013-01-06'], freq='D') + self.assert_index_equal(result, expected) + + result = drange.shift(-1) + expected = DatetimeIndex(['2012-12-31', '2013-01-01', '2013-01-02', + '2013-01-03', '2013-01-04'], + freq='D') + self.assert_index_equal(result, expected) + + result = drange.shift(3, freq='2D') + expected = DatetimeIndex(['2013-01-07', '2013-01-08', '2013-01-09', + '2013-01-10', + '2013-01-11'], freq='D') + self.assert_index_equal(result, expected) + + def test_construction_with_alt(self): + + i = pd.date_range('20130101', periods=5, freq='H', tz='US/Eastern') + i2 = DatetimeIndex(i, dtype=i.dtype) + self.assert_index_equal(i, i2) + + i2 = DatetimeIndex(i.tz_localize(None).asi8, tz=i.dtype.tz) + self.assert_index_equal(i, i2) + + i2 = DatetimeIndex(i.tz_localize(None).asi8, dtype=i.dtype) + self.assert_index_equal(i, i2) 
+ + i2 = DatetimeIndex( + i.tz_localize(None).asi8, dtype=i.dtype, tz=i.dtype.tz) + self.assert_index_equal(i, i2) + + # localize into the provided tz + i2 = DatetimeIndex(i.tz_localize(None).asi8, tz='UTC') + expected = i.tz_localize(None).tz_localize('UTC') + self.assert_index_equal(i2, expected) + + i2 = DatetimeIndex(i, tz='UTC') + expected = i.tz_convert('UTC') + self.assert_index_equal(i2, expected) + + # incompat tz/dtype + self.assertRaises(ValueError, lambda: DatetimeIndex( + i.tz_localize(None).asi8, dtype=i.dtype, tz='US/Pacific')) + + def test_pickle_compat_construction(self): + pass + + def test_construction_index_with_mixed_timezones(self): + # GH 11488 + # no tz results in DatetimeIndex + result = Index( + [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx') + exp = DatetimeIndex( + [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNone(result.tz) + + # same tz results in DatetimeIndex + result = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), + Timestamp('2011-01-02 10:00', tz='Asia/Tokyo')], + name='idx') + exp = DatetimeIndex( + [Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00') + ], tz='Asia/Tokyo', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNotNone(result.tz) + self.assertEqual(result.tz, exp.tz) + + # same tz results in DatetimeIndex (DST) + result = Index([Timestamp('2011-01-01 10:00', tz='US/Eastern'), + Timestamp('2011-08-01 10:00', tz='US/Eastern')], + name='idx') + exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), + Timestamp('2011-08-01 10:00')], + tz='US/Eastern', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNotNone(result.tz) + self.assertEqual(result.tz, exp.tz) + + # different tz results in 
Index(dtype=object) + result = Index([Timestamp('2011-01-01 10:00'), + Timestamp('2011-01-02 10:00', tz='US/Eastern')], + name='idx') + exp = Index([Timestamp('2011-01-01 10:00'), + Timestamp('2011-01-02 10:00', tz='US/Eastern')], + dtype='object', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertFalse(isinstance(result, DatetimeIndex)) + + result = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), + Timestamp('2011-01-02 10:00', tz='US/Eastern')], + name='idx') + exp = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), + Timestamp('2011-01-02 10:00', tz='US/Eastern')], + dtype='object', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertFalse(isinstance(result, DatetimeIndex)) + + # passing tz results in DatetimeIndex + result = Index([Timestamp('2011-01-01 10:00'), + Timestamp('2011-01-02 10:00', tz='US/Eastern')], + tz='Asia/Tokyo', name='idx') + exp = DatetimeIndex([Timestamp('2011-01-01 19:00'), + Timestamp('2011-01-03 00:00')], + tz='Asia/Tokyo', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + + # length = 1 + result = Index([Timestamp('2011-01-01')], name='idx') + exp = DatetimeIndex([Timestamp('2011-01-01')], name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNone(result.tz) + + # length = 1 with tz + result = Index( + [Timestamp('2011-01-01 10:00', tz='Asia/Tokyo')], name='idx') + exp = DatetimeIndex([Timestamp('2011-01-01 10:00')], tz='Asia/Tokyo', + name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNotNone(result.tz) + self.assertEqual(result.tz, exp.tz) + + def test_construction_index_with_mixed_timezones_with_NaT(self): + # GH 11488 + result = Index([pd.NaT, Timestamp('2011-01-01'), + pd.NaT, Timestamp('2011-01-02')], name='idx') + exp = DatetimeIndex([pd.NaT, 
Timestamp('2011-01-01'), + pd.NaT, Timestamp('2011-01-02')], name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNone(result.tz) + + # same tz results in DatetimeIndex + result = Index([pd.NaT, Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), + pd.NaT, Timestamp('2011-01-02 10:00', + tz='Asia/Tokyo')], + name='idx') + exp = DatetimeIndex([pd.NaT, Timestamp('2011-01-01 10:00'), + pd.NaT, Timestamp('2011-01-02 10:00')], + tz='Asia/Tokyo', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNotNone(result.tz) + self.assertEqual(result.tz, exp.tz) + + # same tz results in DatetimeIndex (DST) + result = Index([Timestamp('2011-01-01 10:00', tz='US/Eastern'), + pd.NaT, + Timestamp('2011-08-01 10:00', tz='US/Eastern')], + name='idx') + exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), pd.NaT, + Timestamp('2011-08-01 10:00')], + tz='US/Eastern', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNotNone(result.tz) + self.assertEqual(result.tz, exp.tz) + + # different tz results in Index(dtype=object) + result = Index([pd.NaT, Timestamp('2011-01-01 10:00'), + pd.NaT, Timestamp('2011-01-02 10:00', + tz='US/Eastern')], + name='idx') + exp = Index([pd.NaT, Timestamp('2011-01-01 10:00'), + pd.NaT, Timestamp('2011-01-02 10:00', tz='US/Eastern')], + dtype='object', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertFalse(isinstance(result, DatetimeIndex)) + + result = Index([pd.NaT, Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), + pd.NaT, Timestamp('2011-01-02 10:00', + tz='US/Eastern')], name='idx') + exp = Index([pd.NaT, Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), + pd.NaT, Timestamp('2011-01-02 10:00', tz='US/Eastern')], + dtype='object', name='idx') + self.assert_index_equal(result, exp, exact=True) + 
self.assertFalse(isinstance(result, DatetimeIndex)) + + # passing tz results in DatetimeIndex + result = Index([pd.NaT, Timestamp('2011-01-01 10:00'), + pd.NaT, Timestamp('2011-01-02 10:00', + tz='US/Eastern')], + tz='Asia/Tokyo', name='idx') + exp = DatetimeIndex([pd.NaT, Timestamp('2011-01-01 19:00'), + pd.NaT, Timestamp('2011-01-03 00:00')], + tz='Asia/Tokyo', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + + # all NaT + result = Index([pd.NaT, pd.NaT], name='idx') + exp = DatetimeIndex([pd.NaT, pd.NaT], name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNone(result.tz) + + # all NaT with tz + result = Index([pd.NaT, pd.NaT], tz='Asia/Tokyo', name='idx') + exp = DatetimeIndex([pd.NaT, pd.NaT], tz='Asia/Tokyo', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + self.assertIsNotNone(result.tz) + self.assertEqual(result.tz, exp.tz) + + def test_construction_dti_with_mixed_timezones(self): + # GH 11488 (not changed, added explicit tests) + + # no tz results in DatetimeIndex + result = DatetimeIndex( + [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx') + exp = DatetimeIndex( + [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + + # same tz results in DatetimeIndex + result = DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), + Timestamp('2011-01-02 10:00', + tz='Asia/Tokyo')], + name='idx') + exp = DatetimeIndex( + [Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00') + ], tz='Asia/Tokyo', name='idx') + self.assert_index_equal(result, exp, exact=True) + self.assertTrue(isinstance(result, DatetimeIndex)) + + # same tz results in DatetimeIndex (DST) + result = DatetimeIndex([Timestamp('2011-01-01 10:00', 
+                                tz='US/Eastern'),
+                                Timestamp('2011-08-01 10:00',
+                                          tz='US/Eastern')],
+                               name='idx')
+        exp = DatetimeIndex([Timestamp('2011-01-01 10:00'),
+                             Timestamp('2011-08-01 10:00')],
+                            tz='US/Eastern', name='idx')
+        self.assert_index_equal(result, exp, exact=True)
+        self.assertTrue(isinstance(result, DatetimeIndex))
+
+        # different tz coerces tz-naive to tz-aware
+        result = DatetimeIndex([Timestamp('2011-01-01 10:00'),
+                                Timestamp('2011-01-02 10:00',
+                                          tz='US/Eastern')], name='idx')
+        exp = DatetimeIndex([Timestamp('2011-01-01 05:00'),
+                             Timestamp('2011-01-02 10:00')],
+                            tz='US/Eastern', name='idx')
+        self.assert_index_equal(result, exp, exact=True)
+        self.assertTrue(isinstance(result, DatetimeIndex))
+
+        # tz mismatch with tz-aware values raises TypeError/ValueError
+        with tm.assertRaises(ValueError):
+            DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
+                           Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+                          name='idx')
+
+        with tm.assertRaises(TypeError):
+            DatetimeIndex([Timestamp('2011-01-01 10:00'),
+                           Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+                          tz='Asia/Tokyo', name='idx')
+
+        with tm.assertRaises(ValueError):
+            DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
+                           Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+                          tz='US/Eastern', name='idx')
+
+    def test_get_loc(self):
+        idx = pd.date_range('2000-01-01', periods=3)
+
+        for method in [None, 'pad', 'backfill', 'nearest']:
+            self.assertEqual(idx.get_loc(idx[1], method), 1)
+            self.assertEqual(idx.get_loc(idx[1].to_pydatetime(), method), 1)
+            self.assertEqual(idx.get_loc(str(idx[1]), method), 1)
+            if method is not None:
+                self.assertEqual(idx.get_loc(idx[1], method,
+                                             tolerance=pd.Timedelta('0 days')),
+                                 1)
+
+        self.assertEqual(idx.get_loc('2000-01-01', method='nearest'), 0)
+        self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest'), 1)
+
+        self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest',
+                                     tolerance='1 day'), 1)
+
self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest', + tolerance=pd.Timedelta('1D')), 1) + self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest', + tolerance=np.timedelta64(1, 'D')), 1) + self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest', + tolerance=timedelta(1)), 1) + with tm.assertRaisesRegexp(ValueError, 'must be convertible'): + idx.get_loc('2000-01-01T12', method='nearest', tolerance='foo') + with tm.assertRaises(KeyError): + idx.get_loc('2000-01-01T03', method='nearest', tolerance='2 hours') + + self.assertEqual(idx.get_loc('2000', method='nearest'), slice(0, 3)) + self.assertEqual(idx.get_loc('2000-01', method='nearest'), slice(0, 3)) + + self.assertEqual(idx.get_loc('1999', method='nearest'), 0) + self.assertEqual(idx.get_loc('2001', method='nearest'), 2) + + with tm.assertRaises(KeyError): + idx.get_loc('1999', method='pad') + with tm.assertRaises(KeyError): + idx.get_loc('2001', method='backfill') + + with tm.assertRaises(KeyError): + idx.get_loc('foobar') + with tm.assertRaises(TypeError): + idx.get_loc(slice(2)) + + idx = pd.to_datetime(['2000-01-01', '2000-01-04']) + self.assertEqual(idx.get_loc('2000-01-02', method='nearest'), 0) + self.assertEqual(idx.get_loc('2000-01-03', method='nearest'), 1) + self.assertEqual(idx.get_loc('2000-01', method='nearest'), slice(0, 2)) + + # time indexing + idx = pd.date_range('2000-01-01', periods=24, freq='H') + tm.assert_numpy_array_equal(idx.get_loc(time(12)), [12]) + tm.assert_numpy_array_equal(idx.get_loc(time(12, 30)), []) + with tm.assertRaises(NotImplementedError): + idx.get_loc(time(12, 30), method='pad') + + def test_get_indexer(self): + idx = pd.date_range('2000-01-01', periods=3) + tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) + + target = idx[0] + pd.to_timedelta(['-1 hour', '12 hours', + '1 day 1 hour']) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'backfill'), [0, 1, 2]) 
+ tm.assert_numpy_array_equal( + idx.get_indexer(target, 'nearest'), [0, 1, 1]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'nearest', + tolerance=pd.Timedelta('1 hour')), + [0, -1, 1]) + with tm.assertRaises(ValueError): + idx.get_indexer(idx[[0]], method='nearest', tolerance='foo') + + def test_roundtrip_pickle_with_tz(self): + + # GH 8367 + # round-trip of timezone + index = date_range('20130101', periods=3, tz='US/Eastern', name='foo') + unpickled = self.round_trip_pickle(index) + self.assertTrue(index.equals(unpickled)) + + def test_reindex_preserves_tz_if_target_is_empty_list_or_array(self): + # GH7774 + index = date_range('20130101', periods=3, tz='US/Eastern') + self.assertEqual(str(index.reindex([])[0].tz), 'US/Eastern') + self.assertEqual(str(index.reindex(np.array([]))[0].tz), 'US/Eastern') + + def test_time_loc(self): # GH8667 + from datetime import time + from pandas.index import _SIZE_CUTOFF + + ns = _SIZE_CUTOFF + np.array([-100, 100], dtype=np.int64) + key = time(15, 11, 30) + start = key.hour * 3600 + key.minute * 60 + key.second + step = 24 * 3600 + + for n in ns: + idx = pd.date_range('2014-11-26', periods=n, freq='S') + ts = pd.Series(np.random.randn(n), index=idx) + i = np.arange(start, n, step) + + tm.assert_numpy_array_equal(ts.index.get_loc(key), i) + tm.assert_series_equal(ts[key], ts.iloc[i]) + + left, right = ts.copy(), ts.copy() + left[key] *= -10 + right.iloc[i] *= -10 + tm.assert_series_equal(left, right) + + def test_time_overflow_for_32bit_machines(self): + # GH8943. On some machines NumPy defaults to np.int32 (for example, + # 32-bit Linux machines). In the function _generate_regular_range + # found in tseries/index.py, `periods` gets multiplied by `strides` + # (which has value 1e9) and since the max value for np.int32 is ~2e9, + # and since those machines won't promote np.int32 to np.int64, we get + # overflow. 
+ periods = np.int_(1000) + + idx1 = pd.date_range(start='2000', periods=periods, freq='S') + self.assertEqual(len(idx1), periods) + + idx2 = pd.date_range(end='2000', periods=periods, freq='S') + self.assertEqual(len(idx2), periods) + + def test_intersection(self): + first = self.index + second = self.index[5:] + intersect = first.intersection(second) + self.assertTrue(tm.equalContents(intersect, second)) + + # GH 10149 + cases = [klass(second.values) for klass in [np.array, Series, list]] + for case in cases: + result = first.intersection(case) + self.assertTrue(tm.equalContents(result, second)) + + third = Index(['a', 'b', 'c']) + result = first.intersection(third) + expected = pd.Index([], dtype=object) + self.assert_index_equal(result, expected) + + def test_union(self): + first = self.index[:5] + second = self.index[5:] + everything = self.index + union = first.union(second) + self.assertTrue(tm.equalContents(union, everything)) + + # GH 10149 + cases = [klass(second.values) for klass in [np.array, Series, list]] + for case in cases: + result = first.union(case) + self.assertTrue(tm.equalContents(result, everything)) + + def test_nat(self): + self.assertIs(DatetimeIndex([np.nan])[0], pd.NaT) + + def test_ufunc_coercions(self): + idx = date_range('2011-01-01', periods=3, freq='2D', name='x') + + delta = np.timedelta64(1, 'D') + for result in [idx + delta, np.add(idx, delta)]: + tm.assertIsInstance(result, DatetimeIndex) + exp = date_range('2011-01-02', periods=3, freq='2D', name='x') + tm.assert_index_equal(result, exp) + self.assertEqual(result.freq, '2D') + + for result in [idx - delta, np.subtract(idx, delta)]: + tm.assertIsInstance(result, DatetimeIndex) + exp = date_range('2010-12-31', periods=3, freq='2D', name='x') + tm.assert_index_equal(result, exp) + self.assertEqual(result.freq, '2D') + + delta = np.array([np.timedelta64(1, 'D'), np.timedelta64(2, 'D'), + np.timedelta64(3, 'D')]) + for result in [idx + delta, np.add(idx, delta)]: + 
tm.assertIsInstance(result, DatetimeIndex) + exp = DatetimeIndex(['2011-01-02', '2011-01-05', '2011-01-08'], + freq='3D', name='x') + tm.assert_index_equal(result, exp) + self.assertEqual(result.freq, '3D') + + for result in [idx - delta, np.subtract(idx, delta)]: + tm.assertIsInstance(result, DatetimeIndex) + exp = DatetimeIndex(['2010-12-31', '2011-01-01', '2011-01-02'], + freq='D', name='x') + tm.assert_index_equal(result, exp) + self.assertEqual(result.freq, 'D') + + def test_fillna_datetime64(self): + # GH 11343 + for tz in ['US/Eastern', 'Asia/Tokyo']: + idx = pd.DatetimeIndex(['2011-01-01 09:00', pd.NaT, + '2011-01-01 11:00']) + + exp = pd.DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', + '2011-01-01 11:00']) + self.assert_index_equal( + idx.fillna(pd.Timestamp('2011-01-01 10:00')), exp) + + # tz mismatch + exp = pd.Index([pd.Timestamp('2011-01-01 09:00'), + pd.Timestamp('2011-01-01 10:00', tz=tz), + pd.Timestamp('2011-01-01 11:00')], dtype=object) + self.assert_index_equal( + idx.fillna(pd.Timestamp('2011-01-01 10:00', tz=tz)), exp) + + # object + exp = pd.Index([pd.Timestamp('2011-01-01 09:00'), 'x', + pd.Timestamp('2011-01-01 11:00')], dtype=object) + self.assert_index_equal(idx.fillna('x'), exp) + + idx = pd.DatetimeIndex( + ['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'], tz=tz) + + exp = pd.DatetimeIndex( + ['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00' + ], tz=tz) + self.assert_index_equal( + idx.fillna(pd.Timestamp('2011-01-01 10:00', tz=tz)), exp) + + exp = pd.Index([pd.Timestamp('2011-01-01 09:00', tz=tz), + pd.Timestamp('2011-01-01 10:00'), + pd.Timestamp('2011-01-01 11:00', tz=tz)], + dtype=object) + self.assert_index_equal( + idx.fillna(pd.Timestamp('2011-01-01 10:00')), exp) + + # object + exp = pd.Index([pd.Timestamp('2011-01-01 09:00', tz=tz), + 'x', + pd.Timestamp('2011-01-01 11:00', tz=tz)], + dtype=object) + self.assert_index_equal(idx.fillna('x'), exp) + + +class TestPeriodIndex(DatetimeLike, tm.TestCase): + _holder = 
PeriodIndex + _multiprocess_can_split_ = True + + def setUp(self): + self.indices = dict(index=tm.makePeriodIndex(10)) + self.setup_indices() + + def create_index(self): + return period_range('20130101', periods=5, freq='D') + + def test_shift(self): + + # test shift for PeriodIndex + # GH8083 + drange = self.create_index() + result = drange.shift(1) + expected = PeriodIndex(['2013-01-02', '2013-01-03', '2013-01-04', + '2013-01-05', '2013-01-06'], freq='D') + self.assert_index_equal(result, expected) + + def test_pickle_compat_construction(self): + pass + + def test_get_loc(self): + idx = pd.period_range('2000-01-01', periods=3) + + for method in [None, 'pad', 'backfill', 'nearest']: + self.assertEqual(idx.get_loc(idx[1], method), 1) + self.assertEqual( + idx.get_loc(idx[1].asfreq('H', how='start'), method), 1) + self.assertEqual(idx.get_loc(idx[1].to_timestamp(), method), 1) + self.assertEqual( + idx.get_loc(idx[1].to_timestamp().to_pydatetime(), method), 1) + self.assertEqual(idx.get_loc(str(idx[1]), method), 1) + + idx = pd.period_range('2000-01-01', periods=5)[::2] + self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest', + tolerance='1 day'), 1) + self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest', + tolerance=pd.Timedelta('1D')), 1) + self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest', + tolerance=np.timedelta64(1, 'D')), 1) + self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest', + tolerance=timedelta(1)), 1) + with tm.assertRaisesRegexp(ValueError, 'must be convertible'): + idx.get_loc('2000-01-10', method='nearest', tolerance='foo') + + msg = 'Input has different freq from PeriodIndex\\(freq=D\\)' + with tm.assertRaisesRegexp(ValueError, msg): + idx.get_loc('2000-01-10', method='nearest', tolerance='1 hour') + with tm.assertRaises(KeyError): + idx.get_loc('2000-01-10', method='nearest', tolerance='1 day') + + def test_get_indexer(self): + idx = pd.period_range('2000-01-01', periods=3).asfreq('H', how='start') + 
tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) + + target = pd.PeriodIndex(['1999-12-31T23', '2000-01-01T12', + '2000-01-02T01'], freq='H') + tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'backfill'), [0, 1, 2]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'nearest'), [0, 1, 1]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'nearest', tolerance='1 hour'), + [0, -1, 1]) + + msg = 'Input has different freq from PeriodIndex\\(freq=H\\)' + with self.assertRaisesRegexp(ValueError, msg): + idx.get_indexer(target, 'nearest', tolerance='1 minute') + + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'nearest', tolerance='1 day'), [0, 1, 1]) + + def test_repeat(self): + # GH10183 + idx = pd.period_range('2000-01-01', periods=3, freq='D') + res = idx.repeat(3) + exp = PeriodIndex(idx.values.repeat(3), freq='D') + self.assert_index_equal(res, exp) + self.assertEqual(res.freqstr, 'D') + + def test_period_index_indexer(self): + + # GH4125 + idx = pd.period_range('2002-01', '2003-12', freq='M') + df = pd.DataFrame(pd.np.random.randn(24, 10), index=idx) + self.assert_frame_equal(df, df.ix[idx]) + self.assert_frame_equal(df, df.ix[list(idx)]) + self.assert_frame_equal(df, df.loc[list(idx)]) + self.assert_frame_equal(df.iloc[0:5], df.loc[idx[0:5]]) + self.assert_frame_equal(df, df.loc[list(idx)]) + + def test_fillna_period(self): + # GH 11343 + idx = pd.PeriodIndex( + ['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'], freq='H') + + exp = pd.PeriodIndex( + ['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00' + ], freq='H') + self.assert_index_equal( + idx.fillna(pd.Period('2011-01-01 10:00', freq='H')), exp) + + exp = pd.Index([pd.Period('2011-01-01 09:00', freq='H'), 'x', + pd.Period('2011-01-01 11:00', freq='H')], dtype=object) + self.assert_index_equal(idx.fillna('x'), exp) + + with tm.assertRaisesRegexp( + ValueError, + 'Input has different freq=D 
from PeriodIndex\\(freq=H\\)'):
+            idx.fillna(pd.Period('2011-01-01', freq='D'))
+
+    def test_no_millisecond_field(self):
+        with self.assertRaises(AttributeError):
+            DatetimeIndex.millisecond
+
+        with self.assertRaises(AttributeError):
+            DatetimeIndex([]).millisecond
+
+
+class TestTimedeltaIndex(DatetimeLike, tm.TestCase):
+    _holder = TimedeltaIndex
+    _multiprocess_can_split_ = True
+
+    def setUp(self):
+        self.indices = dict(index=tm.makeTimedeltaIndex(10))
+        self.setup_indices()
+
+    def create_index(self):
+        return pd.to_timedelta(range(5), unit='d') + pd.offsets.Hour(1)
+
+    def test_shift(self):
+        # test shift for TimedeltaIndex
+        # GH8083
+
+        drange = self.create_index()
+        result = drange.shift(1)
+        expected = TimedeltaIndex(['1 days 01:00:00', '2 days 01:00:00',
+                                   '3 days 01:00:00',
+                                   '4 days 01:00:00', '5 days 01:00:00'],
+                                  freq='D')
+        self.assert_index_equal(result, expected)
+
+        result = drange.shift(3, freq='2D 1s')
+        expected = TimedeltaIndex(['6 days 01:00:03', '7 days 01:00:03',
+                                   '8 days 01:00:03', '9 days 01:00:03',
+                                   '10 days 01:00:03'], freq='D')
+        self.assert_index_equal(result, expected)
+
+    def test_get_loc(self):
+        idx = pd.to_timedelta(['0 days', '1 days', '2 days'])
+
+        for method in [None, 'pad', 'backfill', 'nearest']:
+            self.assertEqual(idx.get_loc(idx[1], method), 1)
+            self.assertEqual(idx.get_loc(idx[1].to_pytimedelta(), method), 1)
+            self.assertEqual(idx.get_loc(str(idx[1]), method), 1)
+
+        self.assertEqual(
+            idx.get_loc(idx[1], 'pad', tolerance=pd.Timedelta(0)), 1)
+        self.assertEqual(
+            idx.get_loc(idx[1], 'pad', tolerance=np.timedelta64(0, 's')), 1)
+        self.assertEqual(idx.get_loc(idx[1], 'pad', tolerance=timedelta(0)), 1)
+
+        with tm.assertRaisesRegexp(ValueError, 'must be convertible'):
+            idx.get_loc(idx[1], method='nearest', tolerance='foo')
+
+        for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]:
+            self.assertEqual(idx.get_loc('1 day 1 hour', method), loc)
+
+    def test_get_indexer(self):
+        idx = pd.to_timedelta(['0
days', '1 days', '2 days']) + tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) + + target = pd.to_timedelta(['-1 hour', '12 hours', '1 day 1 hour']) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'backfill'), [0, 1, 2]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'nearest'), [0, 1, 1]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'nearest', + tolerance=pd.Timedelta('1 hour')), + [0, -1, 1]) + + def test_numeric_compat(self): + + idx = self._holder(np.arange(5, dtype='int64')) + didx = self._holder(np.arange(5, dtype='int64') ** 2) + result = idx * 1 + tm.assert_index_equal(result, idx) + + result = 1 * idx + tm.assert_index_equal(result, idx) + + result = idx / 1 + tm.assert_index_equal(result, idx) + + result = idx // 1 + tm.assert_index_equal(result, idx) + + result = idx * np.array(5, dtype='int64') + tm.assert_index_equal(result, + self._holder(np.arange(5, dtype='int64') * 5)) + + result = idx * np.arange(5, dtype='int64') + tm.assert_index_equal(result, didx) + + result = idx * Series(np.arange(5, dtype='int64')) + tm.assert_index_equal(result, didx) + + result = idx * Series(np.arange(5, dtype='float64') + 0.1) + tm.assert_index_equal(result, self._holder(np.arange( + 5, dtype='float64') * (np.arange(5, dtype='float64') + 0.1))) + + # invalid + self.assertRaises(TypeError, lambda: idx * idx) + self.assertRaises(ValueError, lambda: idx * self._holder(np.arange(3))) + self.assertRaises(ValueError, lambda: idx * np.array([1, 2])) + + def test_pickle_compat_construction(self): + pass + + def test_ufunc_coercions(self): + # normal ops are also tested in tseries/test_timedeltas.py + idx = TimedeltaIndex(['2H', '4H', '6H', '8H', '10H'], + freq='2H', name='x') + + for result in [idx * 2, np.multiply(idx, 2)]: + tm.assertIsInstance(result, TimedeltaIndex) + exp = TimedeltaIndex(['4H', '8H', '12H', '16H', '20H'], + freq='4H', name='x') + 
+            tm.assert_index_equal(result, exp)
+            self.assertEqual(result.freq, '4H')
+
+        for result in [idx / 2, np.divide(idx, 2)]:
+            tm.assertIsInstance(result, TimedeltaIndex)
+            exp = TimedeltaIndex(['1H', '2H', '3H', '4H', '5H'],
+                                 freq='H', name='x')
+            tm.assert_index_equal(result, exp)
+            self.assertEqual(result.freq, 'H')
+
+        idx = TimedeltaIndex(['2H', '4H', '6H', '8H', '10H'],
+                             freq='2H', name='x')
+        for result in [-idx, np.negative(idx)]:
+            tm.assertIsInstance(result, TimedeltaIndex)
+            exp = TimedeltaIndex(['-2H', '-4H', '-6H', '-8H', '-10H'],
+                                 freq='-2H', name='x')
+            tm.assert_index_equal(result, exp)
+            self.assertEqual(result.freq, '-2H')
+
+        idx = TimedeltaIndex(['-2H', '-1H', '0H', '1H', '2H'],
+                             freq='H', name='x')
+        for result in [abs(idx), np.absolute(idx)]:
+            tm.assertIsInstance(result, TimedeltaIndex)
+            exp = TimedeltaIndex(['2H', '1H', '0H', '1H', '2H'],
+                                 freq=None, name='x')
+            tm.assert_index_equal(result, exp)
+            self.assertEqual(result.freq, None)
+
+    def test_fillna_timedelta(self):
+        # GH 11343
+        idx = pd.TimedeltaIndex(['1 day', pd.NaT, '3 day'])
+
+        exp = pd.TimedeltaIndex(['1 day', '2 day', '3 day'])
+        self.assert_index_equal(idx.fillna(pd.Timedelta('2 day')), exp)
+
+        exp = pd.TimedeltaIndex(['1 day', '3 hour', '3 day'])
+        self.assert_index_equal(idx.fillna(pd.Timedelta('3 hour')), exp)
+
+        exp = pd.Index(
+            [pd.Timedelta('1 day'), 'x', pd.Timedelta('3 day')], dtype=object)
+        self.assert_index_equal(idx.fillna('x'), exp)
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
new file mode 100644
index 0000000000000..6bc644d84b0d0
--- /dev/null
+++ b/pandas/tests/indexes/test_multi.py
@@ -0,0 +1,1949 @@
+# -*- coding: utf-8 -*-
+
+from datetime import timedelta
+from itertools import product
+import nose
+import re
+import warnings
+
+from pandas import (date_range, MultiIndex, Index, CategoricalIndex,
+                    compat)
+from pandas.indexes.base import InvalidIndexError
+from pandas.compat import range, lrange, u, PY3, long, lzip
+
+import numpy as np
+
+from pandas.util.testing import (assert_almost_equal, assertRaisesRegexp, + assert_copy) + +import pandas.util.testing as tm + +import pandas as pd +from pandas.lib import Timestamp + +from .common import Base + + +class TestMultiIndex(Base, tm.TestCase): + _holder = MultiIndex + _multiprocess_can_split_ = True + _compat_props = ['shape', 'ndim', 'size', 'itemsize'] + + def setUp(self): + major_axis = Index(['foo', 'bar', 'baz', 'qux']) + minor_axis = Index(['one', 'two']) + + major_labels = np.array([0, 0, 1, 2, 3, 3]) + minor_labels = np.array([0, 1, 0, 1, 0, 1]) + self.index_names = ['first', 'second'] + self.indices = dict(index=MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels + ], names=self.index_names, + verify_integrity=False)) + self.setup_indices() + + def create_index(self): + return self.index + + def test_boolean_context_compat2(self): + + # boolean context compat + # GH7897 + i1 = MultiIndex.from_tuples([('A', 1), ('A', 2)]) + i2 = MultiIndex.from_tuples([('A', 1), ('A', 3)]) + common = i1.intersection(i2) + + def f(): + if common: + pass + + tm.assertRaisesRegexp(ValueError, 'The truth value of a', f) + + def test_labels_dtypes(self): + + # GH 8456 + i = MultiIndex.from_tuples([('A', 1), ('A', 2)]) + self.assertTrue(i.labels[0].dtype == 'int8') + self.assertTrue(i.labels[1].dtype == 'int8') + + i = MultiIndex.from_product([['a'], range(40)]) + self.assertTrue(i.labels[1].dtype == 'int8') + i = MultiIndex.from_product([['a'], range(400)]) + self.assertTrue(i.labels[1].dtype == 'int16') + i = MultiIndex.from_product([['a'], range(40000)]) + self.assertTrue(i.labels[1].dtype == 'int32') + + i = pd.MultiIndex.from_product([['a'], range(1000)]) + self.assertTrue((i.labels[0] >= 0).all()) + self.assertTrue((i.labels[1] >= 0).all()) + + def test_set_name_methods(self): + # so long as these are synonyms, we don't need to test set_names + self.assertEqual(self.index.rename, self.index.set_names) + new_names = [name + "SUFFIX" 
for name in self.index_names] + ind = self.index.set_names(new_names) + self.assertEqual(self.index.names, self.index_names) + self.assertEqual(ind.names, new_names) + with assertRaisesRegexp(ValueError, "^Length"): + ind.set_names(new_names + new_names) + new_names2 = [name + "SUFFIX2" for name in new_names] + res = ind.set_names(new_names2, inplace=True) + self.assertIsNone(res) + self.assertEqual(ind.names, new_names2) + + # set names for specific level (# GH7792) + ind = self.index.set_names(new_names[0], level=0) + self.assertEqual(self.index.names, self.index_names) + self.assertEqual(ind.names, [new_names[0], self.index_names[1]]) + + res = ind.set_names(new_names2[0], level=0, inplace=True) + self.assertIsNone(res) + self.assertEqual(ind.names, [new_names2[0], self.index_names[1]]) + + # set names for multiple levels + ind = self.index.set_names(new_names, level=[0, 1]) + self.assertEqual(self.index.names, self.index_names) + self.assertEqual(ind.names, new_names) + + res = ind.set_names(new_names2, level=[0, 1], inplace=True) + self.assertIsNone(res) + self.assertEqual(ind.names, new_names2) + + def test_set_levels(self): + # side note - you probably wouldn't want to use levels and labels + # directly like this - but it is possible. 
+ levels = self.index.levels + new_levels = [[lev + 'a' for lev in level] for level in levels] + + def assert_matching(actual, expected): + # avoid specifying internal representation + # as much as possible + self.assertEqual(len(actual), len(expected)) + for act, exp in zip(actual, expected): + act = np.asarray(act) + exp = np.asarray(exp) + assert_almost_equal(act, exp) + + # level changing [w/o mutation] + ind2 = self.index.set_levels(new_levels) + assert_matching(ind2.levels, new_levels) + assert_matching(self.index.levels, levels) + + # level changing [w/ mutation] + ind2 = self.index.copy() + inplace_return = ind2.set_levels(new_levels, inplace=True) + self.assertIsNone(inplace_return) + assert_matching(ind2.levels, new_levels) + + # level changing specific level [w/o mutation] + ind2 = self.index.set_levels(new_levels[0], level=0) + assert_matching(ind2.levels, [new_levels[0], levels[1]]) + assert_matching(self.index.levels, levels) + + ind2 = self.index.set_levels(new_levels[1], level=1) + assert_matching(ind2.levels, [levels[0], new_levels[1]]) + assert_matching(self.index.levels, levels) + + # level changing multiple levels [w/o mutation] + ind2 = self.index.set_levels(new_levels, level=[0, 1]) + assert_matching(ind2.levels, new_levels) + assert_matching(self.index.levels, levels) + + # level changing specific level [w/ mutation] + ind2 = self.index.copy() + inplace_return = ind2.set_levels(new_levels[0], level=0, inplace=True) + self.assertIsNone(inplace_return) + assert_matching(ind2.levels, [new_levels[0], levels[1]]) + assert_matching(self.index.levels, levels) + + ind2 = self.index.copy() + inplace_return = ind2.set_levels(new_levels[1], level=1, inplace=True) + self.assertIsNone(inplace_return) + assert_matching(ind2.levels, [levels[0], new_levels[1]]) + assert_matching(self.index.levels, levels) + + # level changing multiple levels [w/ mutation] + ind2 = self.index.copy() + inplace_return = ind2.set_levels(new_levels, level=[0, 1], + inplace=True) 
+ self.assertIsNone(inplace_return) + assert_matching(ind2.levels, new_levels) + assert_matching(self.index.levels, levels) + + def test_set_labels(self): + # side note - you probably wouldn't want to use levels and labels + # directly like this - but it is possible. + labels = self.index.labels + major_labels, minor_labels = labels + major_labels = [(x + 1) % 3 for x in major_labels] + minor_labels = [(x + 1) % 1 for x in minor_labels] + new_labels = [major_labels, minor_labels] + + def assert_matching(actual, expected): + # avoid specifying internal representation + # as much as possible + self.assertEqual(len(actual), len(expected)) + for act, exp in zip(actual, expected): + act = np.asarray(act) + exp = np.asarray(exp) + assert_almost_equal(act, exp) + + # label changing [w/o mutation] + ind2 = self.index.set_labels(new_labels) + assert_matching(ind2.labels, new_labels) + assert_matching(self.index.labels, labels) + + # label changing [w/ mutation] + ind2 = self.index.copy() + inplace_return = ind2.set_labels(new_labels, inplace=True) + self.assertIsNone(inplace_return) + assert_matching(ind2.labels, new_labels) + + # label changing specific level [w/o mutation] + ind2 = self.index.set_labels(new_labels[0], level=0) + assert_matching(ind2.labels, [new_labels[0], labels[1]]) + assert_matching(self.index.labels, labels) + + ind2 = self.index.set_labels(new_labels[1], level=1) + assert_matching(ind2.labels, [labels[0], new_labels[1]]) + assert_matching(self.index.labels, labels) + + # label changing multiple levels [w/o mutation] + ind2 = self.index.set_labels(new_labels, level=[0, 1]) + assert_matching(ind2.labels, new_labels) + assert_matching(self.index.labels, labels) + + # label changing specific level [w/ mutation] + ind2 = self.index.copy() + inplace_return = ind2.set_labels(new_labels[0], level=0, inplace=True) + self.assertIsNone(inplace_return) + assert_matching(ind2.labels, [new_labels[0], labels[1]]) + assert_matching(self.index.labels, labels) + + 
ind2 = self.index.copy() + inplace_return = ind2.set_labels(new_labels[1], level=1, inplace=True) + self.assertIsNone(inplace_return) + assert_matching(ind2.labels, [labels[0], new_labels[1]]) + assert_matching(self.index.labels, labels) + + # label changing multiple levels [w/ mutation] + ind2 = self.index.copy() + inplace_return = ind2.set_labels(new_labels, level=[0, 1], + inplace=True) + self.assertIsNone(inplace_return) + assert_matching(ind2.labels, new_labels) + assert_matching(self.index.labels, labels) + + def test_set_levels_labels_names_bad_input(self): + levels, labels = self.index.levels, self.index.labels + names = self.index.names + + with tm.assertRaisesRegexp(ValueError, 'Length of levels'): + self.index.set_levels([levels[0]]) + + with tm.assertRaisesRegexp(ValueError, 'Length of labels'): + self.index.set_labels([labels[0]]) + + with tm.assertRaisesRegexp(ValueError, 'Length of names'): + self.index.set_names([names[0]]) + + # shouldn't scalar data error, instead should demand list-like + with tm.assertRaisesRegexp(TypeError, 'list of lists-like'): + self.index.set_levels(levels[0]) + + # shouldn't scalar data error, instead should demand list-like + with tm.assertRaisesRegexp(TypeError, 'list of lists-like'): + self.index.set_labels(labels[0]) + + # shouldn't scalar data error, instead should demand list-like + with tm.assertRaisesRegexp(TypeError, 'list-like'): + self.index.set_names(names[0]) + + # should have equal lengths + with tm.assertRaisesRegexp(TypeError, 'list of lists-like'): + self.index.set_levels(levels[0], level=[0, 1]) + + with tm.assertRaisesRegexp(TypeError, 'list-like'): + self.index.set_levels(levels, level=0) + + # should have equal lengths + with tm.assertRaisesRegexp(TypeError, 'list of lists-like'): + self.index.set_labels(labels[0], level=[0, 1]) + + with tm.assertRaisesRegexp(TypeError, 'list-like'): + self.index.set_labels(labels, level=0) + + # should have equal lengths + with tm.assertRaisesRegexp(ValueError, 
'Length of names'): + self.index.set_names(names[0], level=[0, 1]) + + with tm.assertRaisesRegexp(TypeError, 'string'): + self.index.set_names(names, level=0) + + def test_metadata_immutable(self): + levels, labels = self.index.levels, self.index.labels + # shouldn't be able to set at either the top level or base level + mutable_regex = re.compile('does not support mutable operations') + with assertRaisesRegexp(TypeError, mutable_regex): + levels[0] = levels[0] + with assertRaisesRegexp(TypeError, mutable_regex): + levels[0][0] = levels[0][0] + # ditto for labels + with assertRaisesRegexp(TypeError, mutable_regex): + labels[0] = labels[0] + with assertRaisesRegexp(TypeError, mutable_regex): + labels[0][0] = labels[0][0] + # and for names + names = self.index.names + with assertRaisesRegexp(TypeError, mutable_regex): + names[0] = names[0] + + def test_inplace_mutation_resets_values(self): + levels = [['a', 'b', 'c'], [4]] + levels2 = [[1, 2, 3], ['a']] + labels = [[0, 1, 0, 2, 2, 0], [0, 0, 0, 0, 0, 0]] + mi1 = MultiIndex(levels=levels, labels=labels) + mi2 = MultiIndex(levels=levels2, labels=labels) + vals = mi1.values.copy() + vals2 = mi2.values.copy() + self.assertIsNotNone(mi1._tuples) + + # make sure level setting works + new_vals = mi1.set_levels(levels2).values + assert_almost_equal(vals2, new_vals) + # non-inplace doesn't kill _tuples [implementation detail] + assert_almost_equal(mi1._tuples, vals) + # and values is still same too + assert_almost_equal(mi1.values, vals) + + # inplace should kill _tuples + mi1.set_levels(levels2, inplace=True) + assert_almost_equal(mi1.values, vals2) + + # make sure label setting works too + labels2 = [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]] + exp_values = np.empty((6, ), dtype=object) + exp_values[:] = [(long(1), 'a')] * 6 + # must be 1d array of tuples + self.assertEqual(exp_values.shape, (6, )) + new_values = mi2.set_labels(labels2).values + # not inplace shouldn't change + assert_almost_equal(mi2._tuples, vals2) + # 
should have correct values + assert_almost_equal(exp_values, new_values) + + # and again setting inplace should kill _tuples, etc + mi2.set_labels(labels2, inplace=True) + assert_almost_equal(mi2.values, new_values) + + def test_copy_in_constructor(self): + levels = np.array(["a", "b", "c"]) + labels = np.array([1, 1, 2, 0, 0, 1, 1]) + val = labels[0] + mi = MultiIndex(levels=[levels, levels], labels=[labels, labels], + copy=True) + self.assertEqual(mi.labels[0][0], val) + labels[0] = 15 + self.assertEqual(mi.labels[0][0], val) + val = levels[0] + levels[0] = "PANDA" + self.assertEqual(mi.levels[0][0], val) + + def test_set_value_keeps_names(self): + # motivating example from #3742 + lev1 = ['hans', 'hans', 'hans', 'grethe', 'grethe', 'grethe'] + lev2 = ['1', '2', '3'] * 2 + idx = pd.MultiIndex.from_arrays([lev1, lev2], names=['Name', 'Number']) + df = pd.DataFrame( + np.random.randn(6, 4), + columns=['one', 'two', 'three', 'four'], + index=idx) + df = df.sortlevel() + self.assertIsNone(df.is_copy) + self.assertEqual(df.index.names, ('Name', 'Number')) + df = df.set_value(('grethe', '4'), 'one', 99.34) + self.assertIsNone(df.is_copy) + self.assertEqual(df.index.names, ('Name', 'Number')) + + def test_names(self): + + # names are assigned in __init__ + names = self.index_names + level_names = [level.name for level in self.index.levels] + self.assertEqual(names, level_names) + + # setting bad names on existing + index = self.index + assertRaisesRegexp(ValueError, "^Length of names", setattr, index, + "names", list(index.names) + ["third"]) + assertRaisesRegexp(ValueError, "^Length of names", setattr, index, + "names", []) + + # initializing with bad names (should always be equivalent) + major_axis, minor_axis = self.index.levels + major_labels, minor_labels = self.index.labels + assertRaisesRegexp(ValueError, "^Length of names", MultiIndex, + levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels], + names=['first']) + assertRaisesRegexp(ValueError, 
"^Length of names", MultiIndex, + levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels], + names=['first', 'second', 'third']) + + # names are assigned + index.names = ["a", "b"] + ind_names = list(index.names) + level_names = [level.name for level in index.levels] + self.assertEqual(ind_names, level_names) + + def test_reference_duplicate_name(self): + idx = MultiIndex.from_tuples( + [('a', 'b'), ('c', 'd')], names=['x', 'x']) + self.assertTrue(idx._reference_duplicate_name('x')) + + idx = MultiIndex.from_tuples( + [('a', 'b'), ('c', 'd')], names=['x', 'y']) + self.assertFalse(idx._reference_duplicate_name('x')) + + def test_astype(self): + expected = self.index.copy() + actual = self.index.astype('O') + assert_copy(actual.levels, expected.levels) + assert_copy(actual.labels, expected.labels) + self.check_level_names(actual, expected.names) + + with assertRaisesRegexp(TypeError, "^Setting.*dtype.*object"): + self.index.astype(np.dtype(int)) + + def test_constructor_single_level(self): + single_level = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']], + labels=[[0, 1, 2, 3]], names=['first']) + tm.assertIsInstance(single_level, Index) + self.assertNotIsInstance(single_level, MultiIndex) + self.assertEqual(single_level.name, 'first') + + single_level = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']], + labels=[[0, 1, 2, 3]]) + self.assertIsNone(single_level.name) + + def test_constructor_no_levels(self): + assertRaisesRegexp(ValueError, "non-zero number of levels/labels", + MultiIndex, levels=[], labels=[]) + both_re = re.compile('Must pass both levels and labels') + with tm.assertRaisesRegexp(TypeError, both_re): + MultiIndex(levels=[]) + with tm.assertRaisesRegexp(TypeError, both_re): + MultiIndex(labels=[]) + + def test_constructor_mismatched_label_levels(self): + labels = [np.array([1]), np.array([2]), np.array([3])] + levels = ["a"] + assertRaisesRegexp(ValueError, "Length of levels and labels must be" + " the same", MultiIndex, levels=levels, 
+ labels=labels) + length_error = re.compile('>= length of level') + label_error = re.compile(r'Unequal label lengths: \[4, 2\]') + + # important to check that it's looking at the right thing. + with tm.assertRaisesRegexp(ValueError, length_error): + MultiIndex(levels=[['a'], ['b']], + labels=[[0, 1, 2, 3], [0, 3, 4, 1]]) + + with tm.assertRaisesRegexp(ValueError, label_error): + MultiIndex(levels=[['a'], ['b']], labels=[[0, 0, 0, 0], [0, 0]]) + + # external API + with tm.assertRaisesRegexp(ValueError, length_error): + self.index.copy().set_levels([['a'], ['b']]) + + with tm.assertRaisesRegexp(ValueError, label_error): + self.index.copy().set_labels([[0, 0, 0, 0], [0, 0]]) + + # deprecated properties + with warnings.catch_warnings(): + warnings.simplefilter('ignore') + + with tm.assertRaisesRegexp(ValueError, length_error): + self.index.copy().levels = [['a'], ['b']] + + with tm.assertRaisesRegexp(ValueError, label_error): + self.index.copy().labels = [[0, 0, 0, 0], [0, 0]] + + def assert_multiindex_copied(self, copy, original): + # levels shoudl be (at least, shallow copied) + assert_copy(copy.levels, original.levels) + + assert_almost_equal(copy.labels, original.labels) + + # labels doesn't matter which way copied + assert_almost_equal(copy.labels, original.labels) + self.assertIsNot(copy.labels, original.labels) + + # names doesn't matter which way copied + self.assertEqual(copy.names, original.names) + self.assertIsNot(copy.names, original.names) + + # sort order should be copied + self.assertEqual(copy.sortorder, original.sortorder) + + def test_copy(self): + i_copy = self.index.copy() + + self.assert_multiindex_copied(i_copy, self.index) + + def test_shallow_copy(self): + i_copy = self.index._shallow_copy() + + self.assert_multiindex_copied(i_copy, self.index) + + def test_view(self): + i_view = self.index.view() + + self.assert_multiindex_copied(i_view, self.index) + + def check_level_names(self, index, names): + self.assertEqual([level.name for level in 
index.levels], list(names)) + + def test_changing_names(self): + + # names should be applied to levels + level_names = [level.name for level in self.index.levels] + self.check_level_names(self.index, self.index.names) + + view = self.index.view() + copy = self.index.copy() + shallow_copy = self.index._shallow_copy() + + # changing names should change level names on object + new_names = [name + "a" for name in self.index.names] + self.index.names = new_names + self.check_level_names(self.index, new_names) + + # but not on copies + self.check_level_names(view, level_names) + self.check_level_names(copy, level_names) + self.check_level_names(shallow_copy, level_names) + + # and copies shouldn't change original + shallow_copy.names = [name + "c" for name in shallow_copy.names] + self.check_level_names(self.index, new_names) + + def test_duplicate_names(self): + self.index.names = ['foo', 'foo'] + assertRaisesRegexp(KeyError, 'Level foo not found', + self.index._get_level_number, 'foo') + + def test_get_level_number_integer(self): + self.index.names = [1, 0] + self.assertEqual(self.index._get_level_number(1), 0) + self.assertEqual(self.index._get_level_number(0), 1) + self.assertRaises(IndexError, self.index._get_level_number, 2) + assertRaisesRegexp(KeyError, 'Level fourth not found', + self.index._get_level_number, 'fourth') + + def test_from_arrays(self): + arrays = [] + for lev, lab in zip(self.index.levels, self.index.labels): + arrays.append(np.asarray(lev).take(lab)) + + result = MultiIndex.from_arrays(arrays) + self.assertEqual(list(result), list(self.index)) + + # infer correctly + result = MultiIndex.from_arrays([[pd.NaT, Timestamp('20130101')], + ['a', 'b']]) + self.assertTrue(result.levels[0].equals(Index([Timestamp('20130101') + ]))) + self.assertTrue(result.levels[1].equals(Index(['a', 'b']))) + + def test_from_product(self): + + first = ['foo', 'bar', 'buz'] + second = ['a', 'b', 'c'] + names = ['first', 'second'] + result = 
MultiIndex.from_product([first, second], names=names) + + tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), ('bar', 'a'), + ('bar', 'b'), ('bar', 'c'), ('buz', 'a'), ('buz', 'b'), + ('buz', 'c')] + expected = MultiIndex.from_tuples(tuples, names=names) + + tm.assert_numpy_array_equal(result, expected) + self.assertEqual(result.names, names) + + def test_from_product_datetimeindex(self): + dt_index = date_range('2000-01-01', periods=2) + mi = pd.MultiIndex.from_product([[1, 2], dt_index]) + etalon = pd.lib.list_to_object_array([(1, pd.Timestamp( + '2000-01-01')), (1, pd.Timestamp('2000-01-02')), (2, pd.Timestamp( + '2000-01-01')), (2, pd.Timestamp('2000-01-02'))]) + tm.assert_numpy_array_equal(mi.values, etalon) + + def test_values_boxed(self): + tuples = [(1, pd.Timestamp('2000-01-01')), (2, pd.NaT), + (3, pd.Timestamp('2000-01-03')), + (1, pd.Timestamp('2000-01-04')), + (2, pd.Timestamp('2000-01-02')), + (3, pd.Timestamp('2000-01-03'))] + mi = pd.MultiIndex.from_tuples(tuples) + tm.assert_numpy_array_equal(mi.values, + pd.lib.list_to_object_array(tuples)) + # Check that code branches for boxed values produce identical results + tm.assert_numpy_array_equal(mi.values[:4], mi[:4].values) + + def test_append(self): + result = self.index[:3].append(self.index[3:]) + self.assertTrue(result.equals(self.index)) + + foos = [self.index[:1], self.index[1:3], self.index[3:]] + result = foos[0].append(foos[1:]) + self.assertTrue(result.equals(self.index)) + + # empty + result = self.index.append([]) + self.assertTrue(result.equals(self.index)) + + def test_get_level_values(self): + result = self.index.get_level_values(0) + expected = ['foo', 'foo', 'bar', 'baz', 'qux', 'qux'] + tm.assert_numpy_array_equal(result, expected) + + self.assertEqual(result.name, 'first') + + result = self.index.get_level_values('first') + expected = self.index.get_level_values(0) + tm.assert_numpy_array_equal(result, expected) + + # GH 10460 + index = MultiIndex(levels=[CategoricalIndex( + ['A', 
'B']), CategoricalIndex([1, 2, 3])], labels=[np.array( + [0, 0, 0, 1, 1, 1]), np.array([0, 1, 2, 0, 1, 2])]) + exp = CategoricalIndex(['A', 'A', 'A', 'B', 'B', 'B']) + self.assert_index_equal(index.get_level_values(0), exp) + exp = CategoricalIndex([1, 2, 3, 1, 2, 3]) + self.assert_index_equal(index.get_level_values(1), exp) + + def test_get_level_values_na(self): + arrays = [['a', 'b', 'b'], [1, np.nan, 2]] + index = pd.MultiIndex.from_arrays(arrays) + values = index.get_level_values(1) + expected = [1, np.nan, 2] + tm.assert_numpy_array_equal(values.values.astype(float), expected) + + arrays = [['a', 'b', 'b'], [np.nan, np.nan, 2]] + index = pd.MultiIndex.from_arrays(arrays) + values = index.get_level_values(1) + expected = [np.nan, np.nan, 2] + tm.assert_numpy_array_equal(values.values.astype(float), expected) + + arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]] + index = pd.MultiIndex.from_arrays(arrays) + values = index.get_level_values(0) + expected = [np.nan, np.nan, np.nan] + tm.assert_numpy_array_equal(values.values.astype(float), expected) + values = index.get_level_values(1) + expected = np.array(['a', np.nan, 1], dtype=object) + tm.assert_numpy_array_equal(values.values, expected) + + arrays = [['a', 'b', 'b'], pd.DatetimeIndex([0, 1, pd.NaT])] + index = pd.MultiIndex.from_arrays(arrays) + values = index.get_level_values(1) + expected = pd.DatetimeIndex([0, 1, pd.NaT]) + tm.assert_numpy_array_equal(values.values, expected.values) + + arrays = [[], []] + index = pd.MultiIndex.from_arrays(arrays) + values = index.get_level_values(0) + self.assertEqual(values.shape, (0, )) + + def test_reorder_levels(self): + # this blows up + assertRaisesRegexp(IndexError, '^Too many levels', + self.index.reorder_levels, [2, 1, 0]) + + def test_nlevels(self): + self.assertEqual(self.index.nlevels, 2) + + def test_iter(self): + result = list(self.index) + expected = [('foo', 'one'), ('foo', 'two'), ('bar', 'one'), + ('baz', 'two'), ('qux', 'one'), ('qux', 'two')] + 
self.assertEqual(result, expected) + + def test_legacy_pickle(self): + if PY3: + raise nose.SkipTest("testing for legacy pickles not " + "support on py3") + + path = tm.get_data_path('multiindex_v1.pickle') + obj = pd.read_pickle(path) + + obj2 = MultiIndex.from_tuples(obj.values) + self.assertTrue(obj.equals(obj2)) + + res = obj.get_indexer(obj) + exp = np.arange(len(obj)) + assert_almost_equal(res, exp) + + res = obj.get_indexer(obj2[::-1]) + exp = obj.get_indexer(obj[::-1]) + exp2 = obj2.get_indexer(obj2[::-1]) + assert_almost_equal(res, exp) + assert_almost_equal(exp, exp2) + + def test_legacy_v2_unpickle(self): + + # 0.7.3 -> 0.8.0 format manage + path = tm.get_data_path('mindex_073.pickle') + obj = pd.read_pickle(path) + + obj2 = MultiIndex.from_tuples(obj.values) + self.assertTrue(obj.equals(obj2)) + + res = obj.get_indexer(obj) + exp = np.arange(len(obj)) + assert_almost_equal(res, exp) + + res = obj.get_indexer(obj2[::-1]) + exp = obj.get_indexer(obj[::-1]) + exp2 = obj2.get_indexer(obj2[::-1]) + assert_almost_equal(res, exp) + assert_almost_equal(exp, exp2) + + def test_roundtrip_pickle_with_tz(self): + + # GH 8367 + # round-trip of timezone + index = MultiIndex.from_product( + [[1, 2], ['a', 'b'], date_range('20130101', periods=3, + tz='US/Eastern') + ], names=['one', 'two', 'three']) + unpickled = self.round_trip_pickle(index) + self.assertTrue(index.equal_levels(unpickled)) + + def test_from_tuples_index_values(self): + result = MultiIndex.from_tuples(self.index) + self.assertTrue((result.values == self.index.values).all()) + + def test_contains(self): + self.assertIn(('foo', 'two'), self.index) + self.assertNotIn(('bar', 'two'), self.index) + self.assertNotIn(None, self.index) + + def test_is_all_dates(self): + self.assertFalse(self.index.is_all_dates) + + def test_is_numeric(self): + # MultiIndex is never numeric + self.assertFalse(self.index.is_numeric()) + + def test_getitem(self): + # scalar + self.assertEqual(self.index[2], ('bar', 'one')) + + # 
slice + result = self.index[2:5] + expected = self.index[[2, 3, 4]] + self.assertTrue(result.equals(expected)) + + # boolean + result = self.index[[True, False, True, False, True, True]] + result2 = self.index[np.array([True, False, True, False, True, True])] + expected = self.index[[0, 2, 4, 5]] + self.assertTrue(result.equals(expected)) + self.assertTrue(result2.equals(expected)) + + def test_getitem_group_select(self): + sorted_idx, _ = self.index.sortlevel(0) + self.assertEqual(sorted_idx.get_loc('baz'), slice(3, 4)) + self.assertEqual(sorted_idx.get_loc('foo'), slice(0, 2)) + + def test_get_loc(self): + self.assertEqual(self.index.get_loc(('foo', 'two')), 1) + self.assertEqual(self.index.get_loc(('baz', 'two')), 3) + self.assertRaises(KeyError, self.index.get_loc, ('bar', 'two')) + self.assertRaises(KeyError, self.index.get_loc, 'quux') + + self.assertRaises(NotImplementedError, self.index.get_loc, 'foo', + method='nearest') + + # 3 levels + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + self.assertRaises(KeyError, index.get_loc, (1, 1)) + self.assertEqual(index.get_loc((2, 0)), slice(3, 5)) + + def test_get_loc_duplicates(self): + index = Index([2, 2, 2, 2]) + result = index.get_loc(2) + expected = slice(0, 4) + self.assertEqual(result, expected) + # self.assertRaises(Exception, index.get_loc, 2) + + index = Index(['c', 'a', 'a', 'b', 'b']) + rs = index.get_loc('c') + xp = 0 + assert (rs == xp) + + def test_get_loc_level(self): + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + + loc, new_index = index.get_loc_level((0, 1)) + expected = slice(1, 2) + exp_index = index[expected].droplevel(0).droplevel(0) + self.assertEqual(loc, expected) + 
self.assertTrue(new_index.equals(exp_index)) + + loc, new_index = index.get_loc_level((0, 1, 0)) + expected = 1 + self.assertEqual(loc, expected) + self.assertIsNone(new_index) + + self.assertRaises(KeyError, index.get_loc_level, (2, 2)) + + index = MultiIndex(levels=[[2000], lrange(4)], labels=[np.array( + [0, 0, 0, 0]), np.array([0, 1, 2, 3])]) + result, new_index = index.get_loc_level((2000, slice(None, None))) + expected = slice(None, None) + self.assertEqual(result, expected) + self.assertTrue(new_index.equals(index.droplevel(0))) + + def test_slice_locs(self): + df = tm.makeTimeDataFrame() + stacked = df.stack() + idx = stacked.index + + slob = slice(*idx.slice_locs(df.index[5], df.index[15])) + sliced = stacked[slob] + expected = df[5:16].stack() + tm.assert_almost_equal(sliced.values, expected.values) + + slob = slice(*idx.slice_locs(df.index[5] + timedelta(seconds=30), + df.index[15] - timedelta(seconds=30))) + sliced = stacked[slob] + expected = df[6:15].stack() + tm.assert_almost_equal(sliced.values, expected.values) + + def test_slice_locs_with_type_mismatch(self): + df = tm.makeTimeDataFrame() + stacked = df.stack() + idx = stacked.index + assertRaisesRegexp(TypeError, '^Level type mismatch', idx.slice_locs, + (1, 3)) + assertRaisesRegexp(TypeError, '^Level type mismatch', idx.slice_locs, + df.index[5] + timedelta(seconds=30), (5, 2)) + df = tm.makeCustomDataframe(5, 5) + stacked = df.stack() + idx = stacked.index + with assertRaisesRegexp(TypeError, '^Level type mismatch'): + idx.slice_locs(timedelta(seconds=30)) + # TODO: Try creating a UnicodeDecodeError in exception message + with assertRaisesRegexp(TypeError, '^Level type mismatch'): + idx.slice_locs(df.index[1], (16, "a")) + + def test_slice_locs_not_sorted(self): + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + + 
assertRaisesRegexp(KeyError, "[Kk]ey length.*greater than MultiIndex" + " lexsort depth", index.slice_locs, (1, 0, 1), + (2, 1, 0)) + + # works + sorted_index, _ = index.sortlevel(0) + # should there be a test case here??? + sorted_index.slice_locs((1, 0, 1), (2, 1, 0)) + + def test_slice_locs_partial(self): + sorted_idx, _ = self.index.sortlevel(0) + + result = sorted_idx.slice_locs(('foo', 'two'), ('qux', 'one')) + self.assertEqual(result, (1, 5)) + + result = sorted_idx.slice_locs(None, ('qux', 'one')) + self.assertEqual(result, (0, 5)) + + result = sorted_idx.slice_locs(('foo', 'two'), None) + self.assertEqual(result, (1, len(sorted_idx))) + + result = sorted_idx.slice_locs('bar', 'baz') + self.assertEqual(result, (2, 4)) + + def test_slice_locs_not_contained(self): + # some searchsorted action + + index = MultiIndex(levels=[[0, 2, 4, 6], [0, 2, 4]], + labels=[[0, 0, 0, 1, 1, 2, 3, 3, 3], + [0, 1, 2, 1, 2, 2, 0, 1, 2]], sortorder=0) + + result = index.slice_locs((1, 0), (5, 2)) + self.assertEqual(result, (3, 6)) + + result = index.slice_locs(1, 5) + self.assertEqual(result, (3, 6)) + + result = index.slice_locs((2, 2), (5, 2)) + self.assertEqual(result, (3, 6)) + + result = index.slice_locs(2, 5) + self.assertEqual(result, (3, 6)) + + result = index.slice_locs((1, 0), (6, 3)) + self.assertEqual(result, (3, 8)) + + result = index.slice_locs(-1, 10) + self.assertEqual(result, (0, len(index))) + + def test_consistency(self): + # need to construct an overflow + major_axis = lrange(70000) + minor_axis = lrange(10) + + major_labels = np.arange(70000) + minor_labels = np.repeat(lrange(10), 7000) + + # the fact that is works means it's consistent + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + + # inconsistent + major_labels = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3]) + minor_labels = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1]) + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + + 
self.assertFalse(index.is_unique) + + def test_truncate(self): + major_axis = Index(lrange(4)) + minor_axis = Index(lrange(2)) + + major_labels = np.array([0, 0, 1, 2, 3, 3]) + minor_labels = np.array([0, 1, 0, 1, 0, 1]) + + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + + result = index.truncate(before=1) + self.assertNotIn('foo', result.levels[0]) + self.assertIn(1, result.levels[0]) + + result = index.truncate(after=1) + self.assertNotIn(2, result.levels[0]) + self.assertIn(1, result.levels[0]) + + result = index.truncate(before=1, after=2) + self.assertEqual(len(result.levels[0]), 2) + + # after < before + self.assertRaises(ValueError, index.truncate, 3, 1) + + def test_get_indexer(self): + major_axis = Index(lrange(4)) + minor_axis = Index(lrange(2)) + + major_labels = np.array([0, 0, 1, 2, 2, 3, 3]) + minor_labels = np.array([0, 1, 0, 0, 1, 0, 1]) + + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + idx1 = index[:5] + idx2 = index[[1, 3, 5]] + + r1 = idx1.get_indexer(idx2) + assert_almost_equal(r1, [1, 3, -1]) + + r1 = idx2.get_indexer(idx1, method='pad') + e1 = [-1, 0, 0, 1, 1] + assert_almost_equal(r1, e1) + + r2 = idx2.get_indexer(idx1[::-1], method='pad') + assert_almost_equal(r2, e1[::-1]) + + rffill1 = idx2.get_indexer(idx1, method='ffill') + assert_almost_equal(r1, rffill1) + + r1 = idx2.get_indexer(idx1, method='backfill') + e1 = [0, 0, 1, 1, 2] + assert_almost_equal(r1, e1) + + r2 = idx2.get_indexer(idx1[::-1], method='backfill') + assert_almost_equal(r2, e1[::-1]) + + rbfill1 = idx2.get_indexer(idx1, method='bfill') + assert_almost_equal(r1, rbfill1) + + # pass non-MultiIndex + r1 = idx1.get_indexer(idx2._tuple_index) + rexp1 = idx1.get_indexer(idx2) + assert_almost_equal(r1, rexp1) + + r1 = idx1.get_indexer([1, 2, 3]) + self.assertTrue((r1 == [-1, -1, -1]).all()) + + # create index with duplicates + idx1 = Index(lrange(10) + lrange(10)) + idx2 = 
Index(lrange(20)) + assertRaisesRegexp(InvalidIndexError, "Reindexing only valid with" + " uniquely valued Index objects", idx1.get_indexer, + idx2) + + def test_get_indexer_nearest(self): + midx = MultiIndex.from_tuples([('a', 1), ('b', 2)]) + with tm.assertRaises(NotImplementedError): + midx.get_indexer(['a'], method='nearest') + with tm.assertRaises(NotImplementedError): + midx.get_indexer(['a'], method='pad', tolerance=2) + + def test_format(self): + self.index.format() + self.index[:0].format() + + def test_format_integer_names(self): + index = MultiIndex(levels=[[0, 1], [0, 1]], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], names=[0, 1]) + index.format(names=True) + + def test_format_sparse_display(self): + index = MultiIndex(levels=[[0, 1], [0, 1], [0, 1], [0]], + labels=[[0, 0, 0, 1, 1, 1], [0, 0, 1, 0, 0, 1], + [0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0]]) + + result = index.format() + self.assertEqual(result[3], '1 0 0 0') + + def test_format_sparse_config(self): + warn_filters = warnings.filters + warnings.filterwarnings('ignore', category=FutureWarning, + module=".*format") + # GH1538 + pd.set_option('display.multi_sparse', False) + + result = self.index.format() + self.assertEqual(result[1], 'foo two') + + self.reset_display_options() + + warnings.filters = warn_filters + + def test_to_hierarchical(self): + index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), ( + 2, 'two')]) + result = index.to_hierarchical(3) + expected = MultiIndex(levels=[[1, 2], ['one', 'two']], + labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1], + [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]]) + tm.assert_index_equal(result, expected) + self.assertEqual(result.names, index.names) + + # K > 1 + result = index.to_hierarchical(3, 2) + expected = MultiIndex(levels=[[1, 2], ['one', 'two']], + labels=[[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1], + [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]]) + tm.assert_index_equal(result, expected) + self.assertEqual(result.names, index.names) + + # non-sorted + index = 
MultiIndex.from_tuples([(2, 'c'), (1, 'b'), + (2, 'a'), (2, 'b')], + names=['N1', 'N2']) + + result = index.to_hierarchical(2) + expected = MultiIndex.from_tuples([(2, 'c'), (2, 'c'), (1, 'b'), + (1, 'b'), + (2, 'a'), (2, 'a'), + (2, 'b'), (2, 'b')], + names=['N1', 'N2']) + tm.assert_index_equal(result, expected) + self.assertEqual(result.names, index.names) + + def test_bounds(self): + self.index._bounds + + def test_equals(self): + self.assertTrue(self.index.equals(self.index)) + self.assertTrue(self.index.equal_levels(self.index)) + + self.assertFalse(self.index.equals(self.index[:-1])) + + self.assertTrue(self.index.equals(self.index._tuple_index)) + + # different number of levels + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + + index2 = MultiIndex(levels=index.levels[:-1], labels=index.labels[:-1]) + self.assertFalse(index.equals(index2)) + self.assertFalse(index.equal_levels(index2)) + + # levels are different + major_axis = Index(lrange(4)) + minor_axis = Index(lrange(2)) + + major_labels = np.array([0, 0, 1, 2, 2, 3]) + minor_labels = np.array([0, 1, 0, 0, 1, 0]) + + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + self.assertFalse(self.index.equals(index)) + self.assertFalse(self.index.equal_levels(index)) + + # some of the labels are different + major_axis = Index(['foo', 'bar', 'baz', 'qux']) + minor_axis = Index(['one', 'two']) + + major_labels = np.array([0, 0, 2, 2, 3, 3]) + minor_labels = np.array([0, 1, 0, 1, 0, 1]) + + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + self.assertFalse(self.index.equals(index)) + + def test_identical(self): + mi = self.index.copy() + mi2 = self.index.copy() + self.assertTrue(mi.identical(mi2)) + + mi = mi.set_names(['new1', 'new2']) + self.assertTrue(mi.equals(mi2)) + 
self.assertFalse(mi.identical(mi2)) + + mi2 = mi2.set_names(['new1', 'new2']) + self.assertTrue(mi.identical(mi2)) + + mi3 = Index(mi.tolist(), names=mi.names) + mi4 = Index(mi.tolist(), names=mi.names, tupleize_cols=False) + self.assertTrue(mi.identical(mi3)) + self.assertFalse(mi.identical(mi4)) + self.assertTrue(mi.equals(mi4)) + + def test_is_(self): + mi = MultiIndex.from_tuples(lzip(range(10), range(10))) + self.assertTrue(mi.is_(mi)) + self.assertTrue(mi.is_(mi.view())) + self.assertTrue(mi.is_(mi.view().view().view().view())) + mi2 = mi.view() + # names are metadata, they don't change id + mi2.names = ["A", "B"] + self.assertTrue(mi2.is_(mi)) + self.assertTrue(mi.is_(mi2)) + + self.assertTrue(mi.is_(mi.set_names(["C", "D"]))) + mi2 = mi.view() + mi2.set_names(["E", "F"], inplace=True) + self.assertTrue(mi.is_(mi2)) + # levels are inherent properties, they change identity + mi3 = mi2.set_levels([lrange(10), lrange(10)]) + self.assertFalse(mi3.is_(mi2)) + # shouldn't change + self.assertTrue(mi2.is_(mi)) + mi4 = mi3.view() + mi4.set_levels([[1 for _ in range(10)], lrange(10)], inplace=True) + self.assertFalse(mi4.is_(mi3)) + mi5 = mi.view() + mi5.set_levels(mi5.levels, inplace=True) + self.assertFalse(mi5.is_(mi)) + + def test_union(self): + piece1 = self.index[:5][::-1] + piece2 = self.index[3:] + + the_union = piece1 | piece2 + + tups = sorted(self.index._tuple_index) + expected = MultiIndex.from_tuples(tups) + + self.assertTrue(the_union.equals(expected)) + + # corner case, pass self or empty thing: + the_union = self.index.union(self.index) + self.assertIs(the_union, self.index) + + the_union = self.index.union(self.index[:0]) + self.assertIs(the_union, self.index) + + # won't work in python 3 + # tuples = self.index._tuple_index + # result = self.index[:4] | tuples[4:] + # self.assertTrue(result.equals(tuples)) + + # not valid for python 3 + # def test_union_with_regular_index(self): + # other = Index(['A', 'B', 'C']) + + # result = 
other.union(self.index) + # self.assertIn(('foo', 'one'), result) + # self.assertIn('B', result) + + # result2 = self.index.union(other) + # self.assertTrue(result.equals(result2)) + + def test_intersection(self): + piece1 = self.index[:5][::-1] + piece2 = self.index[3:] + + the_int = piece1 & piece2 + tups = sorted(self.index[3:5]._tuple_index) + expected = MultiIndex.from_tuples(tups) + self.assertTrue(the_int.equals(expected)) + + # corner case, pass self + the_int = self.index.intersection(self.index) + self.assertIs(the_int, self.index) + + # empty intersection: disjoint + empty = self.index[:2] & self.index[2:] + expected = self.index[:0] + self.assertTrue(empty.equals(expected)) + + # can't do in python 3 + # tuples = self.index._tuple_index + # result = self.index & tuples + # self.assertTrue(result.equals(tuples)) + + def test_difference(self): + + first = self.index + result = first.difference(self.index[-3:]) + + # - API change GH 8226 + with tm.assert_produces_warning(): + first - self.index[-3:] + with tm.assert_produces_warning(): + self.index[-3:] - first + with tm.assert_produces_warning(): + self.index[-3:] - first.tolist() + + self.assertRaises(TypeError, lambda: first.tolist() - self.index[-3:]) + + expected = MultiIndex.from_tuples(sorted(self.index[:-3].values), + sortorder=0, + names=self.index.names) + + tm.assertIsInstance(result, MultiIndex) + self.assertTrue(result.equals(expected)) + self.assertEqual(result.names, self.index.names) + + # empty difference: reflexive + result = self.index.difference(self.index) + expected = self.index[:0] + self.assertTrue(result.equals(expected)) + self.assertEqual(result.names, self.index.names) + + # empty difference: superset + result = self.index[-3:].difference(self.index) + expected = self.index[:0] + self.assertTrue(result.equals(expected)) + self.assertEqual(result.names, self.index.names) + + # empty difference: degenerate + result = self.index[:0].difference(self.index) + expected = 
self.index[:0] + self.assertTrue(result.equals(expected)) + self.assertEqual(result.names, self.index.names) + + # names not the same + chunklet = self.index[-3:] + chunklet.names = ['foo', 'baz'] + result = first.difference(chunklet) + self.assertEqual(result.names, (None, None)) + + # empty, but non-equal + result = self.index.difference(self.index.sortlevel(1)[0]) + self.assertEqual(len(result), 0) + + # does not raise when called with a non-MultiIndex of tuples + result = first.difference(first._tuple_index) + self.assertTrue(result.equals(first[:0])) + + # name from empty array + result = first.difference([]) + self.assertTrue(first.equals(result)) + self.assertEqual(first.names, result.names) + + # name from non-empty array + result = first.difference([('foo', 'one')]) + expected = pd.MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'), ( + 'foo', 'two'), ('qux', 'one'), ('qux', 'two')]) + expected.names = first.names + self.assertTrue(result.equals(expected)) + self.assertEqual(first.names, result.names) + assertRaisesRegexp(TypeError, "other must be a MultiIndex or a list" + " of tuples", first.difference, [1, 2, 3, 4, 5]) + + def test_from_tuples(self): + assertRaisesRegexp(TypeError, 'Cannot infer number of levels from' + ' empty list', MultiIndex.from_tuples, []) + + idx = MultiIndex.from_tuples(((1, 2), (3, 4)), names=['a', 'b']) + self.assertEqual(len(idx), 2) + + def test_argsort(self): + result = self.index.argsort() + expected = self.index._tuple_index.argsort() + tm.assert_numpy_array_equal(result, expected) + + def test_sortlevel(self): + import random + + tuples = list(self.index) + random.shuffle(tuples) + + index = MultiIndex.from_tuples(tuples) + + sorted_idx, _ = index.sortlevel(0) + expected = MultiIndex.from_tuples(sorted(tuples)) + self.assertTrue(sorted_idx.equals(expected)) + + sorted_idx, _ = index.sortlevel(0, ascending=False) + self.assertTrue(sorted_idx.equals(expected[::-1])) + + sorted_idx, _ = index.sortlevel(1) + by1 = sorted(tuples, key=lambda x: (x[1], x[0])) + expected = 
MultiIndex.from_tuples(by1) + self.assertTrue(sorted_idx.equals(expected)) + + sorted_idx, _ = index.sortlevel(1, ascending=False) + self.assertTrue(sorted_idx.equals(expected[::-1])) + + def test_sortlevel_not_sort_remaining(self): + mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) + sorted_idx, _ = mi.sortlevel('A', sort_remaining=False) + self.assertTrue(sorted_idx.equals(mi)) + + def test_sortlevel_deterministic(self): + tuples = [('bar', 'one'), ('foo', 'two'), ('qux', 'two'), + ('foo', 'one'), ('baz', 'two'), ('qux', 'one')] + + index = MultiIndex.from_tuples(tuples) + + sorted_idx, _ = index.sortlevel(0) + expected = MultiIndex.from_tuples(sorted(tuples)) + self.assertTrue(sorted_idx.equals(expected)) + + sorted_idx, _ = index.sortlevel(0, ascending=False) + self.assertTrue(sorted_idx.equals(expected[::-1])) + + sorted_idx, _ = index.sortlevel(1) + by1 = sorted(tuples, key=lambda x: (x[1], x[0])) + expected = MultiIndex.from_tuples(by1) + self.assertTrue(sorted_idx.equals(expected)) + + sorted_idx, _ = index.sortlevel(1, ascending=False) + self.assertTrue(sorted_idx.equals(expected[::-1])) + + def test_dims(self): + pass + + def test_drop(self): + dropped = self.index.drop([('foo', 'two'), ('qux', 'one')]) + + index = MultiIndex.from_tuples([('foo', 'two'), ('qux', 'one')]) + dropped2 = self.index.drop(index) + + expected = self.index[[0, 2, 3, 5]] + self.assert_index_equal(dropped, expected) + self.assert_index_equal(dropped2, expected) + + dropped = self.index.drop(['bar']) + expected = self.index[[0, 1, 3, 4, 5]] + self.assert_index_equal(dropped, expected) + + dropped = self.index.drop('foo') + expected = self.index[[2, 3, 4, 5]] + self.assert_index_equal(dropped, expected) + + index = MultiIndex.from_tuples([('bar', 'two')]) + self.assertRaises(KeyError, self.index.drop, [('bar', 'two')]) + self.assertRaises(KeyError, self.index.drop, index) + self.assertRaises(KeyError, self.index.drop, ['foo', 'two']) + + # partially correct 
argument + mixed_index = MultiIndex.from_tuples([('qux', 'one'), ('bar', 'two')]) + self.assertRaises(KeyError, self.index.drop, mixed_index) + + # errors='ignore' + dropped = self.index.drop(index, errors='ignore') + expected = self.index[[0, 1, 2, 3, 4, 5]] + self.assert_index_equal(dropped, expected) + + dropped = self.index.drop(mixed_index, errors='ignore') + expected = self.index[[0, 1, 2, 3, 5]] + self.assert_index_equal(dropped, expected) + + dropped = self.index.drop(['foo', 'two'], errors='ignore') + expected = self.index[[2, 3, 4, 5]] + self.assert_index_equal(dropped, expected) + + # mixed partial / full drop + dropped = self.index.drop(['foo', ('qux', 'one')]) + expected = self.index[[2, 3, 5]] + self.assert_index_equal(dropped, expected) + + # mixed partial / full drop / errors='ignore' + mixed_index = ['foo', ('qux', 'one'), 'two'] + self.assertRaises(KeyError, self.index.drop, mixed_index) + dropped = self.index.drop(mixed_index, errors='ignore') + expected = self.index[[2, 3, 5]] + self.assert_index_equal(dropped, expected) + + def test_droplevel_with_names(self): + index = self.index[self.index.get_loc('foo')] + dropped = index.droplevel(0) + self.assertEqual(dropped.name, 'second') + + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])], + names=['one', 'two', 'three']) + dropped = index.droplevel(0) + self.assertEqual(dropped.names, ('two', 'three')) + + dropped = index.droplevel('two') + expected = index.droplevel(1) + self.assertTrue(dropped.equals(expected)) + + def test_droplevel_multiple(self): + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])], + names=['one', 'two', 'three']) + + dropped = index[:2].droplevel(['three', 'one']) + expected = 
index[:2].droplevel(2).droplevel(0) + self.assertTrue(dropped.equals(expected)) + + def test_insert(self): + # key contained in all levels + new_index = self.index.insert(0, ('bar', 'two')) + self.assertTrue(new_index.equal_levels(self.index)) + self.assertEqual(new_index[0], ('bar', 'two')) + + # key not contained in all levels + new_index = self.index.insert(0, ('abc', 'three')) + tm.assert_numpy_array_equal(new_index.levels[0], + list(self.index.levels[0]) + ['abc']) + tm.assert_numpy_array_equal(new_index.levels[1], + list(self.index.levels[1]) + ['three']) + self.assertEqual(new_index[0], ('abc', 'three')) + + # key wrong length + assertRaisesRegexp(ValueError, "Item must have length equal to number" + " of levels", self.index.insert, 0, ('foo2', )) + + left = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1]], + columns=['1st', '2nd', '3rd']) + left.set_index(['1st', '2nd'], inplace=True) + ts = left['3rd'].copy(deep=True) + + left.loc[('b', 'x'), '3rd'] = 2 + left.loc[('b', 'a'), '3rd'] = -1 + left.loc[('b', 'b'), '3rd'] = 3 + left.loc[('a', 'x'), '3rd'] = 4 + left.loc[('a', 'w'), '3rd'] = 5 + left.loc[('a', 'a'), '3rd'] = 6 + + ts.loc[('b', 'x')] = 2 + ts.loc['b', 'a'] = -1 + ts.loc[('b', 'b')] = 3 + ts.loc['a', 'x'] = 4 + ts.loc[('a', 'w')] = 5 + ts.loc['a', 'a'] = 6 + + right = pd.DataFrame([['a', 'b', 0], + ['b', 'd', 1], + ['b', 'x', 2], + ['b', 'a', -1], + ['b', 'b', 3], + ['a', 'x', 4], + ['a', 'w', 5], + ['a', 'a', 6]], + columns=['1st', '2nd', '3rd']) + right.set_index(['1st', '2nd'], inplace=True) + # FIXME data types change to float because + # of intermediate nan insertion; + tm.assert_frame_equal(left, right, check_dtype=False) + tm.assert_series_equal(ts, right['3rd']) + + # GH9250 + idx = [('test1', i) for i in range(5)] + \ + [('test2', i) for i in range(6)] + \ + [('test', 17), ('test', 18)] + + left = pd.Series(np.linspace(0, 10, 11), + pd.MultiIndex.from_tuples(idx[:-2])) + + left.loc[('test', 17)] = 11 + left.ix[('test', 18)] = 12 + + right = 
pd.Series(np.linspace(0, 12, 13), + pd.MultiIndex.from_tuples(idx)) + + tm.assert_series_equal(left, right) + + def test_take_preserve_name(self): + taken = self.index.take([3, 0, 1]) + self.assertEqual(taken.names, self.index.names) + + def test_join_level(self): + def _check_how(other, how): + join_index, lidx, ridx = other.join(self.index, how=how, + level='second', + return_indexers=True) + + exp_level = other.join(self.index.levels[1], how=how) + self.assertTrue(join_index.levels[0].equals(self.index.levels[0])) + self.assertTrue(join_index.levels[1].equals(exp_level)) + + # pare down levels + mask = np.array( + [x[1] in exp_level for x in self.index], dtype=bool) + exp_values = self.index.values[mask] + tm.assert_numpy_array_equal(join_index.values, exp_values) + + if how in ('outer', 'inner'): + join_index2, ridx2, lidx2 = \ + self.index.join(other, how=how, level='second', + return_indexers=True) + + self.assertTrue(join_index.equals(join_index2)) + tm.assert_numpy_array_equal(lidx, lidx2) + tm.assert_numpy_array_equal(ridx, ridx2) + tm.assert_numpy_array_equal(join_index2.values, exp_values) + + def _check_all(other): + _check_how(other, 'outer') + _check_how(other, 'inner') + _check_how(other, 'left') + _check_how(other, 'right') + + _check_all(Index(['three', 'one', 'two'])) + _check_all(Index(['one'])) + _check_all(Index(['one', 'three'])) + + # some corner cases + idx = Index(['three', 'one', 'two']) + result = idx.join(self.index, level='second') + tm.assertIsInstance(result, MultiIndex) + + assertRaisesRegexp(TypeError, "Join.*MultiIndex.*ambiguous", + self.index.join, self.index, level=1) + + def test_join_self(self): + kinds = 'outer', 'inner', 'left', 'right' + for kind in kinds: + res = self.index + joined = res.join(res, how=kind) + self.assertIs(res, joined) + + def test_join_multi(self): + # GH 10665 + midx = pd.MultiIndex.from_product( + [np.arange(4), np.arange(4)], names=['a', 'b']) + idx = pd.Index([1, 2, 5], name='b') + + # inner + jidx, 
lidx, ridx = midx.join(idx, how='inner', return_indexers=True) + exp_idx = pd.MultiIndex.from_product( + [np.arange(4), [1, 2]], names=['a', 'b']) + exp_lidx = np.array([1, 2, 5, 6, 9, 10, 13, 14]) + exp_ridx = np.array([0, 1, 0, 1, 0, 1, 0, 1]) + self.assert_index_equal(jidx, exp_idx) + self.assert_numpy_array_equal(lidx, exp_lidx) + self.assert_numpy_array_equal(ridx, exp_ridx) + # flip + jidx, ridx, lidx = idx.join(midx, how='inner', return_indexers=True) + self.assert_index_equal(jidx, exp_idx) + self.assert_numpy_array_equal(lidx, exp_lidx) + self.assert_numpy_array_equal(ridx, exp_ridx) + + # keep MultiIndex + jidx, lidx, ridx = midx.join(idx, how='left', return_indexers=True) + exp_ridx = np.array([-1, 0, 1, -1, -1, 0, 1, -1, -1, 0, 1, -1, -1, 0, + 1, -1]) + self.assert_index_equal(jidx, midx) + self.assertIsNone(lidx) + self.assert_numpy_array_equal(ridx, exp_ridx) + # flip + jidx, ridx, lidx = idx.join(midx, how='right', return_indexers=True) + self.assert_index_equal(jidx, midx) + self.assertIsNone(lidx) + self.assert_numpy_array_equal(ridx, exp_ridx) + + def test_reindex(self): + result, indexer = self.index.reindex(list(self.index[:4])) + tm.assertIsInstance(result, MultiIndex) + self.check_level_names(result, self.index[:4].names) + + result, indexer = self.index.reindex(list(self.index)) + tm.assertIsInstance(result, MultiIndex) + self.assertIsNone(indexer) + self.check_level_names(result, self.index.names) + + def test_reindex_level(self): + idx = Index(['one']) + + target, indexer = self.index.reindex(idx, level='second') + target2, indexer2 = idx.reindex(self.index, level='second') + + exp_index = self.index.join(idx, level='second', how='right') + exp_index2 = self.index.join(idx, level='second', how='left') + + self.assertTrue(target.equals(exp_index)) + exp_indexer = np.array([0, 2, 4]) + tm.assert_numpy_array_equal(indexer, exp_indexer) + + self.assertTrue(target2.equals(exp_index2)) + exp_indexer2 = np.array([0, -1, 0, -1, 0, -1]) + 
tm.assert_numpy_array_equal(indexer2, exp_indexer2) + + assertRaisesRegexp(TypeError, "Fill method not supported", + self.index.reindex, self.index, method='pad', + level='second') + + assertRaisesRegexp(TypeError, "Fill method not supported", idx.reindex, + idx, method='bfill', level='first') + + def test_duplicates(self): + self.assertFalse(self.index.has_duplicates) + self.assertTrue(self.index.append(self.index).has_duplicates) + + index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[ + [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]]) + self.assertTrue(index.has_duplicates) + + # GH 9075 + t = [(u('x'), u('out'), u('z'), 5, u('y'), u('in'), u('z'), 169), + (u('x'), u('out'), u('z'), 7, u('y'), u('in'), u('z'), 119), + (u('x'), u('out'), u('z'), 9, u('y'), u('in'), u('z'), 135), + (u('x'), u('out'), u('z'), 13, u('y'), u('in'), u('z'), 145), + (u('x'), u('out'), u('z'), 14, u('y'), u('in'), u('z'), 158), + (u('x'), u('out'), u('z'), 16, u('y'), u('in'), u('z'), 122), + (u('x'), u('out'), u('z'), 17, u('y'), u('in'), u('z'), 160), + (u('x'), u('out'), u('z'), 18, u('y'), u('in'), u('z'), 180), + (u('x'), u('out'), u('z'), 20, u('y'), u('in'), u('z'), 143), + (u('x'), u('out'), u('z'), 21, u('y'), u('in'), u('z'), 128), + (u('x'), u('out'), u('z'), 22, u('y'), u('in'), u('z'), 129), + (u('x'), u('out'), u('z'), 25, u('y'), u('in'), u('z'), 111), + (u('x'), u('out'), u('z'), 28, u('y'), u('in'), u('z'), 114), + (u('x'), u('out'), u('z'), 29, u('y'), u('in'), u('z'), 121), + (u('x'), u('out'), u('z'), 31, u('y'), u('in'), u('z'), 126), + (u('x'), u('out'), u('z'), 32, u('y'), u('in'), u('z'), 155), + (u('x'), u('out'), u('z'), 33, u('y'), u('in'), u('z'), 123), + (u('x'), u('out'), u('z'), 12, u('y'), u('in'), u('z'), 144)] + + index = pd.MultiIndex.from_tuples(t) + self.assertFalse(index.has_duplicates) + + # handle int64 overflow if possible + def check(nlevels, with_nulls): + labels = np.tile(np.arange(500), 2) + level = np.arange(500) + + if with_nulls: # inject 
some null values + labels[500] = -1 # common nan value + labels = list(labels.copy() for i in range(nlevels)) + for i in range(nlevels): + labels[i][500 + i - nlevels // 2] = -1 + + labels += [np.array([-1, 1]).repeat(500)] + else: + labels = [labels] * nlevels + [np.arange(2).repeat(500)] + + levels = [level] * nlevels + [[0, 1]] + + # no dups + index = MultiIndex(levels=levels, labels=labels) + self.assertFalse(index.has_duplicates) + + # with a dup + if with_nulls: + f = lambda a: np.insert(a, 1000, a[0]) + labels = list(map(f, labels)) + index = MultiIndex(levels=levels, labels=labels) + else: + values = index.values.tolist() + index = MultiIndex.from_tuples(values + [values[0]]) + + self.assertTrue(index.has_duplicates) + + # no overflow + check(4, False) + check(4, True) + + # overflow possible + check(8, False) + check(8, True) + + # GH 9125 + n, k = 200, 5000 + levels = [np.arange(n), tm.makeStringIndex(n), 1000 + np.arange(n)] + labels = [np.random.choice(n, k * n) for lev in levels] + mi = MultiIndex(levels=levels, labels=labels) + + for keep in ['first', 'last', False]: + left = mi.duplicated(keep=keep) + right = pd.lib.duplicated(mi.values, keep=keep) + tm.assert_numpy_array_equal(left, right) + + # GH5873 + for a in [101, 102]: + mi = MultiIndex.from_arrays([[101, a], [3.5, np.nan]]) + self.assertFalse(mi.has_duplicates) + self.assertEqual(mi.get_duplicates(), []) + tm.assert_numpy_array_equal(mi.duplicated(), np.zeros( + 2, dtype='bool')) + + for n in range(1, 6): # 1st level shape + for m in range(1, 5): # 2nd level shape + # all possible unique combinations, including nan + lab = product(range(-1, n), range(-1, m)) + mi = MultiIndex(levels=[list('abcde')[:n], list('WXYZ')[:m]], + labels=np.random.permutation(list(lab)).T) + self.assertEqual(len(mi), (n + 1) * (m + 1)) + self.assertFalse(mi.has_duplicates) + self.assertEqual(mi.get_duplicates(), []) + tm.assert_numpy_array_equal(mi.duplicated(), np.zeros( + len(mi), dtype='bool')) + + def 
test_duplicate_meta_data(self): + # GH 10115 + index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[ + [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]]) + for idx in [index, + index.set_names([None, None]), + index.set_names([None, 'Num']), + index.set_names(['Upper', 'Num']), ]: + self.assertTrue(idx.has_duplicates) + self.assertEqual(idx.drop_duplicates().names, idx.names) + + def test_tolist(self): + result = self.index.tolist() + exp = list(self.index.values) + self.assertEqual(result, exp) + + def test_repr_with_unicode_data(self): + with pd.core.config.option_context("display.encoding", 'UTF-8'): + d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} + index = pd.DataFrame(d).set_index(["a", "b"]).index + self.assertFalse("\\u" in repr(index) + ) # we don't want unicode-escaped + + def test_repr_roundtrip(self): + + mi = MultiIndex.from_product([list('ab'), range(3)], + names=['first', 'second']) + str(mi) + + if PY3: + tm.assert_index_equal(eval(repr(mi)), mi, exact=True) + else: + result = eval(repr(mi)) + # string coerces to unicode + tm.assert_index_equal(result, mi, exact=False) + self.assertEqual( + mi.get_level_values('first').inferred_type, 'string') + self.assertEqual( + result.get_level_values('first').inferred_type, 'unicode') + + mi_u = MultiIndex.from_product( + [list(u'ab'), range(3)], names=['first', 'second']) + result = eval(repr(mi_u)) + tm.assert_index_equal(result, mi_u, exact=True) + + # formatting + if PY3: + str(mi) + else: + compat.text_type(mi) + + # long format + mi = MultiIndex.from_product([list('abcdefg'), range(10)], + names=['first', 'second']) + result = str(mi) + + if PY3: + tm.assert_index_equal(eval(repr(mi)), mi, exact=True) + else: + result = eval(repr(mi)) + # string coerces to unicode + tm.assert_index_equal(result, mi, exact=False) + self.assertEqual( + mi.get_level_values('first').inferred_type, 'string') + self.assertEqual( + result.get_level_values('first').inferred_type, 'unicode') + + mi = 
MultiIndex.from_product( + [list(u'abcdefg'), range(10)], names=['first', 'second']) + result = eval(repr(mi)) + tm.assert_index_equal(result, mi, exact=True) + + def test_str(self): + # tested elsewhere + pass + + def test_unicode_string_with_unicode(self): + d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} + idx = pd.DataFrame(d).set_index(["a", "b"]).index + + if PY3: + str(idx) + else: + compat.text_type(idx) + + def test_bytestring_with_unicode(self): + d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} + idx = pd.DataFrame(d).set_index(["a", "b"]).index + + if PY3: + bytes(idx) + else: + str(idx) + + def test_slice_keep_name(self): + x = MultiIndex.from_tuples([('a', 'b'), (1, 2), ('c', 'd')], + names=['x', 'y']) + self.assertEqual(x[1:].names, x.names) + + def test_isnull_behavior(self): + # should not segfault GH5123 + # NOTE: if MI representation changes, may make sense to allow + # isnull(MI) + with tm.assertRaises(NotImplementedError): + pd.isnull(self.index) + + def test_level_setting_resets_attributes(self): + ind = MultiIndex.from_arrays([ + ['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3] + ]) + assert ind.is_monotonic + ind.set_levels([['A', 'B', 'A', 'A', 'B'], [2, 1, 3, -2, 5]], + inplace=True) + # if this fails, probably didn't reset the cache correctly. 
+ assert not ind.is_monotonic + + def test_isin(self): + values = [('foo', 2), ('bar', 3), ('quux', 4)] + + idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange( + 4)]) + result = idx.isin(values) + expected = np.array([False, False, True, True]) + tm.assert_numpy_array_equal(result, expected) + + # empty, return dtype bool + idx = MultiIndex.from_arrays([[], []]) + result = idx.isin(values) + self.assertEqual(len(result), 0) + self.assertEqual(result.dtype, np.bool_) + + def test_isin_nan(self): + idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]]) + tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]), + [False, False]) + tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]), + [False, False]) + + def test_isin_level_kwarg(self): + idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange( + 4)]) + + vals_0 = ['foo', 'bar', 'quux'] + vals_1 = [2, 3, 10] + + expected = np.array([False, False, True, True]) + tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=0)) + tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=-2)) + + tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=1)) + tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=-1)) + + self.assertRaises(IndexError, idx.isin, vals_0, level=5) + self.assertRaises(IndexError, idx.isin, vals_0, level=-5) + + self.assertRaises(KeyError, idx.isin, vals_0, level=1.0) + self.assertRaises(KeyError, idx.isin, vals_1, level=-1.0) + self.assertRaises(KeyError, idx.isin, vals_1, level='A') + + idx.names = ['A', 'B'] + tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level='A')) + tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level='B')) + + self.assertRaises(KeyError, idx.isin, vals_1, level='C') + + def test_reindex_preserves_names_when_target_is_list_or_ndarray(self): + # GH6552 + idx = self.index.copy() + target = idx.copy() + idx.names = target.names = [None, None] + + other_dtype = pd.MultiIndex.from_product([[1, 2], 
[3, 4]]) + + # list & ndarray cases + self.assertEqual(idx.reindex([])[0].names, [None, None]) + self.assertEqual(idx.reindex(np.array([]))[0].names, [None, None]) + self.assertEqual(idx.reindex(target.tolist())[0].names, [None, None]) + self.assertEqual(idx.reindex(target.values)[0].names, [None, None]) + self.assertEqual( + idx.reindex(other_dtype.tolist())[0].names, [None, None]) + self.assertEqual( + idx.reindex(other_dtype.values)[0].names, [None, None]) + + idx.names = ['foo', 'bar'] + self.assertEqual(idx.reindex([])[0].names, ['foo', 'bar']) + self.assertEqual(idx.reindex(np.array([]))[0].names, ['foo', 'bar']) + self.assertEqual(idx.reindex(target.tolist())[0].names, ['foo', 'bar']) + self.assertEqual(idx.reindex(target.values)[0].names, ['foo', 'bar']) + self.assertEqual( + idx.reindex(other_dtype.tolist())[0].names, ['foo', 'bar']) + self.assertEqual( + idx.reindex(other_dtype.values)[0].names, ['foo', 'bar']) + + def test_reindex_lvl_preserves_names_when_target_is_list_or_array(self): + # GH7774 + idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']], + names=['foo', 'bar']) + self.assertEqual(idx.reindex([], level=0)[0].names, ['foo', 'bar']) + self.assertEqual(idx.reindex([], level=1)[0].names, ['foo', 'bar']) + + def test_reindex_lvl_preserves_type_if_target_is_empty_list_or_array(self): + # GH7774 + idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']]) + self.assertEqual(idx.reindex([], level=0)[0].levels[0].dtype.type, + np.int64) + self.assertEqual(idx.reindex([], level=1)[0].levels[1].dtype.type, + np.object_) + + def test_groupby(self): + groups = self.index.groupby(np.array([1, 1, 1, 2, 2, 2])) + labels = self.index.get_values().tolist() + exp = {1: labels[:3], 2: labels[3:]} + tm.assert_dict_equal(groups, exp) + + # GH5620 + groups = self.index.groupby(self.index) + exp = dict((key, [key]) for key in self.index) + tm.assert_dict_equal(groups, exp) + + def test_index_name_retained(self): + # GH9857 + result = pd.DataFrame({'x': [1, 2, 6], + 
'y': [2, 2, 8], + 'z': [-5, 0, 5]}) + result = result.set_index('z') + result.loc[10] = [9, 10] + df_expected = pd.DataFrame({'x': [1, 2, 6, 9], + 'y': [2, 2, 8, 10], + 'z': [-5, 0, 5, 10]}) + df_expected = df_expected.set_index('z') + tm.assert_frame_equal(result, df_expected) + + def test_equals_operator(self): + # GH9785 + self.assertTrue((self.index == self.index).all()) diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py new file mode 100644 index 0000000000000..d14f7bbc680df --- /dev/null +++ b/pandas/tests/indexes/test_numeric.py @@ -0,0 +1,824 @@ +# -*- coding: utf-8 -*- + +from datetime import datetime +from pandas import compat +from pandas.compat import range, lrange, u, PY3 + +import numpy as np + +from pandas import (date_range, Series, DataFrame, + Index, Float64Index, Int64Index, RangeIndex) +from pandas.util.testing import assertRaisesRegexp + +import pandas.util.testing as tm +import pandas.core.config as cf + +import pandas as pd +from pandas.lib import Timestamp + +from .common import Base + + +class Numeric(Base): + + def test_numeric_compat(self): + + idx = self.create_index() + didx = idx * idx + + result = idx * 1 + tm.assert_index_equal(result, idx) + + result = 1 * idx + tm.assert_index_equal(result, idx) + + # in general not true for RangeIndex + if not isinstance(idx, RangeIndex): + result = idx * idx + tm.assert_index_equal(result, idx ** 2) + + # truediv under PY3 + result = idx / 1 + expected = idx + if PY3: + expected = expected.astype('float64') + tm.assert_index_equal(result, expected) + + result = idx / 2 + if PY3: + expected = expected.astype('float64') + expected = Index(idx.values / 2) + tm.assert_index_equal(result, expected) + + result = idx // 1 + tm.assert_index_equal(result, idx) + + result = idx * np.array(5, dtype='int64') + tm.assert_index_equal(result, idx * 5) + + result = idx * np.arange(5, dtype='int64') + tm.assert_index_equal(result, didx) + + result = idx * Series(np.arange(5, 
dtype='int64')) + tm.assert_index_equal(result, didx) + + result = idx * Series(np.arange(5, dtype='float64') + 0.1) + expected = Float64Index(np.arange(5, dtype='float64') * + (np.arange(5, dtype='float64') + 0.1)) + tm.assert_index_equal(result, expected) + + # invalid + self.assertRaises(TypeError, + lambda: idx * date_range('20130101', periods=5)) + self.assertRaises(ValueError, lambda: idx * idx[0:3]) + self.assertRaises(ValueError, lambda: idx * np.array([1, 2])) + + def test_explicit_conversions(self): + + # GH 8608 + # add/sub are overridden explicitly for Float/Int Index + idx = self._holder(np.arange(5, dtype='int64')) + + # float conversions + arr = np.arange(5, dtype='int64') * 3.2 + expected = Float64Index(arr) + fidx = idx * 3.2 + tm.assert_index_equal(fidx, expected) + fidx = 3.2 * idx + tm.assert_index_equal(fidx, expected) + + # interops with numpy arrays + expected = Float64Index(arr) + a = np.zeros(5, dtype='float64') + result = fidx - a + tm.assert_index_equal(result, expected) + + expected = Float64Index(-arr) + a = np.zeros(5, dtype='float64') + result = a - fidx + tm.assert_index_equal(result, expected) + + def test_ufunc_compat(self): + idx = self._holder(np.arange(5, dtype='int64')) + result = np.sin(idx) + expected = Float64Index(np.sin(np.arange(5, dtype='int64'))) + tm.assert_index_equal(result, expected) + + def test_index_groupby(self): + int_idx = Index(range(6)) + float_idx = Index(np.arange(0, 0.6, 0.1)) + obj_idx = Index('A B C D E F'.split()) + dt_idx = pd.date_range('2013-01-01', freq='M', periods=6) + + for idx in [int_idx, float_idx, obj_idx, dt_idx]: + to_groupby = np.array([1, 2, np.nan, np.nan, 2, 1]) + self.assertEqual(idx.groupby(to_groupby), + {1.0: [idx[0], idx[5]], 2.0: [idx[1], idx[4]]}) + + to_groupby = Index([datetime(2011, 11, 1), + datetime(2011, 12, 1), + pd.NaT, + pd.NaT, + datetime(2011, 12, 1), + datetime(2011, 11, 1)], + tz='UTC').values + + ex_keys = pd.tslib.datetime_to_datetime64(np.array([Timestamp(
'2011-11-01'), Timestamp('2011-12-01')])) + expected = {ex_keys[0][0]: [idx[0], idx[5]], + ex_keys[0][1]: [idx[1], idx[4]]} + self.assertEqual(idx.groupby(to_groupby), expected) + + def test_modulo(self): + # GH 9244 + index = self.create_index() + expected = Index(index.values % 2) + self.assert_index_equal(index % 2, expected) + + +class TestFloat64Index(Numeric, tm.TestCase): + _holder = Float64Index + _multiprocess_can_split_ = True + + def setUp(self): + self.indices = dict(mixed=Float64Index([1.5, 2, 3, 4, 5]), + float=Float64Index(np.arange(5) * 2.5)) + self.setup_indices() + + def create_index(self): + return Float64Index(np.arange(5, dtype='float64')) + + def test_repr_roundtrip(self): + for ind in (self.mixed, self.float): + tm.assert_index_equal(eval(repr(ind)), ind) + + def check_is_index(self, i): + self.assertIsInstance(i, Index) + self.assertNotIsInstance(i, Float64Index) + + def check_coerce(self, a, b, is_float_index=True): + self.assertTrue(a.equals(b)) + if is_float_index: + self.assertIsInstance(b, Float64Index) + else: + self.check_is_index(b) + + def test_constructor(self): + + # explicit construction + index = Float64Index([1, 2, 3, 4, 5]) + self.assertIsInstance(index, Float64Index) + self.assertTrue((index.values == np.array( + [1, 2, 3, 4, 5], dtype='float64')).all()) + index = Float64Index(np.array([1, 2, 3, 4, 5])) + self.assertIsInstance(index, Float64Index) + index = Float64Index([1., 2, 3, 4, 5]) + self.assertIsInstance(index, Float64Index) + index = Float64Index(np.array([1., 2, 3, 4, 5])) + self.assertIsInstance(index, Float64Index) + self.assertEqual(index.dtype, float) + + index = Float64Index(np.array([1., 2, 3, 4, 5]), dtype=np.float32) + self.assertIsInstance(index, Float64Index) + self.assertEqual(index.dtype, np.float64) + + index = Float64Index(np.array([1, 2, 3, 4, 5]), dtype=np.float32) + self.assertIsInstance(index, Float64Index) + self.assertEqual(index.dtype, np.float64) + + # nan handling + result = 
Float64Index([np.nan, np.nan]) + self.assertTrue(pd.isnull(result.values).all()) + result = Float64Index(np.array([np.nan])) + self.assertTrue(pd.isnull(result.values).all()) + result = Index(np.array([np.nan])) + self.assertTrue(pd.isnull(result.values).all()) + + def test_constructor_invalid(self): + + # invalid + self.assertRaises(TypeError, Float64Index, 0.) + self.assertRaises(TypeError, Float64Index, ['a', 'b', 0.]) + self.assertRaises(TypeError, Float64Index, [Timestamp('20130101')]) + + def test_constructor_coerce(self): + + self.check_coerce(self.mixed, Index([1.5, 2, 3, 4, 5])) + self.check_coerce(self.float, Index(np.arange(5) * 2.5)) + self.check_coerce(self.float, Index(np.array( + np.arange(5) * 2.5, dtype=object))) + + def test_constructor_explicit(self): + + # these don't auto convert + self.check_coerce(self.float, + Index((np.arange(5) * 2.5), dtype=object), + is_float_index=False) + self.check_coerce(self.mixed, Index( + [1.5, 2, 3, 4, 5], dtype=object), is_float_index=False) + + def test_astype(self): + + result = self.float.astype(object) + self.assertTrue(result.equals(self.float)) + self.assertTrue(self.float.equals(result)) + self.check_is_index(result) + + i = self.mixed.copy() + i.name = 'foo' + result = i.astype(object) + self.assertTrue(result.equals(i)) + self.assertTrue(i.equals(result)) + self.check_is_index(result) + + def test_equals(self): + + i = Float64Index([1.0, 2.0]) + self.assertTrue(i.equals(i)) + self.assertTrue(i.identical(i)) + + i2 = Float64Index([1.0, 2.0]) + self.assertTrue(i.equals(i2)) + + i = Float64Index([1.0, np.nan]) + self.assertTrue(i.equals(i)) + self.assertTrue(i.identical(i)) + + i2 = Float64Index([1.0, np.nan]) + self.assertTrue(i.equals(i2)) + + def test_get_indexer(self): + idx = Float64Index([0.0, 1.0, 2.0]) + tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) + + target = [-0.1, 0.5, 1.1] + tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) + tm.assert_numpy_array_equal( + 
idx.get_indexer(target, 'backfill'), [0, 1, 2]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'nearest'), [0, 1, 1]) + + def test_get_loc(self): + idx = Float64Index([0.0, 1.0, 2.0]) + for method in [None, 'pad', 'backfill', 'nearest']: + self.assertEqual(idx.get_loc(1, method), 1) + if method is not None: + self.assertEqual(idx.get_loc(1, method, tolerance=0), 1) + + for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]: + self.assertEqual(idx.get_loc(1.1, method), loc) + self.assertEqual(idx.get_loc(1.1, method, tolerance=0.9), loc) + + self.assertRaises(KeyError, idx.get_loc, 'foo') + self.assertRaises(KeyError, idx.get_loc, 1.5) + self.assertRaises(KeyError, idx.get_loc, 1.5, method='pad', + tolerance=0.1) + + with tm.assertRaisesRegexp(ValueError, 'must be numeric'): + idx.get_loc(1.4, method='nearest', tolerance='foo') + + def test_get_loc_na(self): + idx = Float64Index([np.nan, 1, 2]) + self.assertEqual(idx.get_loc(1), 1) + self.assertEqual(idx.get_loc(np.nan), 0) + + idx = Float64Index([np.nan, 1, np.nan]) + self.assertEqual(idx.get_loc(1), 1) + + # representable by slice [0:2:2] + # self.assertRaises(KeyError, idx.slice_locs, np.nan) + sliced = idx.slice_locs(np.nan) + self.assertTrue(isinstance(sliced, tuple)) + self.assertEqual(sliced, (0, 3)) + + # not representable by slice + idx = Float64Index([np.nan, 1, np.nan, np.nan]) + self.assertEqual(idx.get_loc(1), 1) + self.assertRaises(KeyError, idx.slice_locs, np.nan) + + def test_contains_nans(self): + i = Float64Index([1.0, 2.0, np.nan]) + self.assertTrue(np.nan in i) + + def test_contains_not_nans(self): + i = Float64Index([1.0, 2.0, np.nan]) + self.assertTrue(1.0 in i) + + def test_doesnt_contain_all_the_things(self): + i = Float64Index([np.nan]) + self.assertFalse(i.isin([0]).item()) + self.assertFalse(i.isin([1]).item()) + self.assertTrue(i.isin([np.nan]).item()) + + def test_nan_multiple_containment(self): + i = Float64Index([1.0, np.nan]) + 
tm.assert_numpy_array_equal(i.isin([1.0]), np.array([True, False])) + tm.assert_numpy_array_equal(i.isin([2.0, np.pi]), + np.array([False, False])) + tm.assert_numpy_array_equal(i.isin([np.nan]), np.array([False, True])) + tm.assert_numpy_array_equal(i.isin([1.0, np.nan]), + np.array([True, True])) + i = Float64Index([1.0, 2.0]) + tm.assert_numpy_array_equal(i.isin([np.nan]), np.array([False, False])) + + def test_astype_from_object(self): + index = Index([1.0, np.nan, 0.2], dtype='object') + result = index.astype(float) + expected = Float64Index([1.0, np.nan, 0.2]) + tm.assert_equal(result.dtype, expected.dtype) + tm.assert_index_equal(result, expected) + + def test_fillna_float64(self): + # GH 11343 + idx = Index([1.0, np.nan, 3.0], dtype=float, name='x') + # can't downcast + exp = Index([1.0, 0.1, 3.0], name='x') + self.assert_index_equal(idx.fillna(0.1), exp) + + # downcast + exp = Float64Index([1.0, 2.0, 3.0], name='x') + self.assert_index_equal(idx.fillna(2), exp) + + # object + exp = Index([1.0, 'obj', 3.0], name='x') + self.assert_index_equal(idx.fillna('obj'), exp) + + +class TestInt64Index(Numeric, tm.TestCase): + _holder = Int64Index + _multiprocess_can_split_ = True + + def setUp(self): + self.indices = dict(index=Int64Index(np.arange(0, 20, 2))) + self.setup_indices() + + def create_index(self): + return Int64Index(np.arange(5, dtype='int64')) + + def test_too_many_names(self): + def testit(): + self.index.names = ["roger", "harold"] + + assertRaisesRegexp(ValueError, "^Length", testit) + + def test_constructor(self): + # pass list, coerce fine + index = Int64Index([-5, 0, 1, 2]) + expected = np.array([-5, 0, 1, 2], dtype=np.int64) + tm.assert_numpy_array_equal(index, expected) + + # from iterable + index = Int64Index(iter([-5, 0, 1, 2])) + tm.assert_numpy_array_equal(index, expected) + + # scalar raise Exception + self.assertRaises(TypeError, Int64Index, 5) + + # copy + arr = self.index.values + new_index = Int64Index(arr, copy=True) + 
tm.assert_numpy_array_equal(new_index, self.index) + val = arr[0] + 3000 + # this should not change index + arr[0] = val + self.assertNotEqual(new_index[0], val) + + def test_constructor_corner(self): + arr = np.array([1, 2, 3, 4], dtype=object) + index = Int64Index(arr) + self.assertEqual(index.values.dtype, np.int64) + self.assertTrue(index.equals(arr)) + + # preventing casting + arr = np.array([1, '2', 3, '4'], dtype=object) + with tm.assertRaisesRegexp(TypeError, 'casting'): + Int64Index(arr) + + arr_with_floats = [0, 2, 3, 4, 5, 1.25, 3, -1] + with tm.assertRaisesRegexp(TypeError, 'casting'): + Int64Index(arr_with_floats) + + def test_copy(self): + i = Int64Index([], name='Foo') + i_copy = i.copy() + self.assertEqual(i_copy.name, 'Foo') + + def test_view(self): + super(TestInt64Index, self).test_view() + + i = Int64Index([], name='Foo') + i_view = i.view() + self.assertEqual(i_view.name, 'Foo') + + i_view = i.view('i8') + tm.assert_index_equal(i, Int64Index(i_view, name='Foo')) + + i_view = i.view(Int64Index) + tm.assert_index_equal(i, Int64Index(i_view, name='Foo')) + + def test_coerce_list(self): + # coerce things + arr = Index([1, 2, 3, 4]) + tm.assertIsInstance(arr, Int64Index) + + # but not if explicit dtype passed + arr = Index([1, 2, 3, 4], dtype=object) + tm.assertIsInstance(arr, Index) + + def test_dtype(self): + self.assertEqual(self.index.dtype, np.int64) + + def test_is_monotonic(self): + self.assertTrue(self.index.is_monotonic) + self.assertTrue(self.index.is_monotonic_increasing) + self.assertFalse(self.index.is_monotonic_decreasing) + + index = Int64Index([4, 3, 2, 1]) + self.assertFalse(index.is_monotonic) + self.assertTrue(index.is_monotonic_decreasing) + + index = Int64Index([1]) + self.assertTrue(index.is_monotonic) + self.assertTrue(index.is_monotonic_increasing) + self.assertTrue(index.is_monotonic_decreasing) + + def test_is_monotonic_na(self): + examples = [Index([np.nan]), + Index([np.nan, 1]), + Index([1, 2, np.nan]), + Index(['a', 
'b', np.nan]), + pd.to_datetime(['NaT']), + pd.to_datetime(['NaT', '2000-01-01']), + pd.to_datetime(['2000-01-01', 'NaT', '2000-01-02']), + pd.to_timedelta(['1 day', 'NaT']), ] + for index in examples: + self.assertFalse(index.is_monotonic_increasing) + self.assertFalse(index.is_monotonic_decreasing) + + def test_equals(self): + same_values = Index(self.index, dtype=object) + self.assertTrue(self.index.equals(same_values)) + self.assertTrue(same_values.equals(self.index)) + + def test_logical_compat(self): + idx = self.create_index() + self.assertEqual(idx.all(), idx.values.all()) + self.assertEqual(idx.any(), idx.values.any()) + + def test_identical(self): + i = Index(self.index.copy()) + self.assertTrue(i.identical(self.index)) + + same_values_different_type = Index(i, dtype=object) + self.assertFalse(i.identical(same_values_different_type)) + + i = self.index.copy(dtype=object) + i = i.rename('foo') + same_values = Index(i, dtype=object) + self.assertTrue(same_values.identical(i)) + + self.assertFalse(i.identical(self.index)) + self.assertTrue(Index(same_values, name='foo', dtype=object).identical( + i)) + + self.assertFalse(self.index.copy(dtype=object) + .identical(self.index.copy(dtype='int64'))) + + def test_get_indexer(self): + target = Int64Index(np.arange(10)) + indexer = self.index.get_indexer(target) + expected = np.array([0, -1, 1, -1, 2, -1, 3, -1, 4, -1]) + tm.assert_numpy_array_equal(indexer, expected) + + def test_get_indexer_pad(self): + target = Int64Index(np.arange(10)) + indexer = self.index.get_indexer(target, method='pad') + expected = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4]) + tm.assert_numpy_array_equal(indexer, expected) + + def test_get_indexer_backfill(self): + target = Int64Index(np.arange(10)) + indexer = self.index.get_indexer(target, method='backfill') + expected = np.array([0, 1, 1, 2, 2, 3, 3, 4, 4, 5]) + tm.assert_numpy_array_equal(indexer, expected) + + def test_join_outer(self): + other = Int64Index([7, 12, 25, 1, 2, 5]) + 
other_mono = Int64Index([1, 2, 5, 7, 12, 25]) + + # not monotonic + # guarantee of sortedness + res, lidx, ridx = self.index.join(other, how='outer', + return_indexers=True) + noidx_res = self.index.join(other, how='outer') + self.assertTrue(res.equals(noidx_res)) + + eres = Int64Index([0, 1, 2, 4, 5, 6, 7, 8, 10, 12, 14, 16, 18, 25]) + elidx = np.array([0, -1, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, 9, -1], + dtype=np.int64) + eridx = np.array([-1, 3, 4, -1, 5, -1, 0, -1, -1, 1, -1, -1, -1, 2], + dtype=np.int64) + + tm.assertIsInstance(res, Int64Index) + self.assertTrue(res.equals(eres)) + tm.assert_numpy_array_equal(lidx, elidx) + tm.assert_numpy_array_equal(ridx, eridx) + + # monotonic + res, lidx, ridx = self.index.join(other_mono, how='outer', + return_indexers=True) + noidx_res = self.index.join(other_mono, how='outer') + self.assertTrue(res.equals(noidx_res)) + + eridx = np.array([-1, 0, 1, -1, 2, -1, 3, -1, -1, 4, -1, -1, -1, 5], + dtype=np.int64) + tm.assertIsInstance(res, Int64Index) + self.assertTrue(res.equals(eres)) + tm.assert_numpy_array_equal(lidx, elidx) + tm.assert_numpy_array_equal(ridx, eridx) + + def test_join_inner(self): + other = Int64Index([7, 12, 25, 1, 2, 5]) + other_mono = Int64Index([1, 2, 5, 7, 12, 25]) + + # not monotonic + res, lidx, ridx = self.index.join(other, how='inner', + return_indexers=True) + + # no guarantee of sortedness, so sort for comparison purposes + ind = res.argsort() + res = res.take(ind) + lidx = lidx.take(ind) + ridx = ridx.take(ind) + + eres = Int64Index([2, 12]) + elidx = np.array([1, 6]) + eridx = np.array([4, 1]) + + tm.assertIsInstance(res, Int64Index) + self.assertTrue(res.equals(eres)) + tm.assert_numpy_array_equal(lidx, elidx) + tm.assert_numpy_array_equal(ridx, eridx) + + # monotonic + res, lidx, ridx = self.index.join(other_mono, how='inner', + return_indexers=True) + + res2 = self.index.intersection(other_mono) + self.assertTrue(res.equals(res2)) + + eridx = np.array([1, 4]) + tm.assertIsInstance(res, 
Int64Index) + self.assertTrue(res.equals(eres)) + tm.assert_numpy_array_equal(lidx, elidx) + tm.assert_numpy_array_equal(ridx, eridx) + + def test_join_left(self): + other = Int64Index([7, 12, 25, 1, 2, 5]) + other_mono = Int64Index([1, 2, 5, 7, 12, 25]) + + # not monotonic + res, lidx, ridx = self.index.join(other, how='left', + return_indexers=True) + eres = self.index + eridx = np.array([-1, 4, -1, -1, -1, -1, 1, -1, -1, -1], + dtype=np.int64) + + tm.assertIsInstance(res, Int64Index) + self.assertTrue(res.equals(eres)) + self.assertIsNone(lidx) + tm.assert_numpy_array_equal(ridx, eridx) + + # monotonic + res, lidx, ridx = self.index.join(other_mono, how='left', + return_indexers=True) + eridx = np.array([-1, 1, -1, -1, -1, -1, 4, -1, -1, -1], + dtype=np.int64) + tm.assertIsInstance(res, Int64Index) + self.assertTrue(res.equals(eres)) + self.assertIsNone(lidx) + tm.assert_numpy_array_equal(ridx, eridx) + + # non-unique + idx = Index([1, 1, 2, 5]) + idx2 = Index([1, 2, 5, 7, 9]) + res, lidx, ridx = idx2.join(idx, how='left', return_indexers=True) + eres = Index([1, 1, 2, 5, 7, 9]) # 1 is in idx2, so it should be x2 + eridx = np.array([0, 1, 2, 3, -1, -1]) + elidx = np.array([0, 0, 1, 2, 3, 4]) + self.assertTrue(res.equals(eres)) + tm.assert_numpy_array_equal(lidx, elidx) + tm.assert_numpy_array_equal(ridx, eridx) + + def test_join_right(self): + other = Int64Index([7, 12, 25, 1, 2, 5]) + other_mono = Int64Index([1, 2, 5, 7, 12, 25]) + + # not monotonic + res, lidx, ridx = self.index.join(other, how='right', + return_indexers=True) + eres = other + elidx = np.array([-1, 6, -1, -1, 1, -1], dtype=np.int64) + + tm.assertIsInstance(other, Int64Index) + self.assertTrue(res.equals(eres)) + tm.assert_numpy_array_equal(lidx, elidx) + self.assertIsNone(ridx) + + # monotonic + res, lidx, ridx = self.index.join(other_mono, how='right', + return_indexers=True) + eres = other_mono + elidx = np.array([-1, 1, -1, -1, 6, -1], dtype=np.int64) + tm.assertIsInstance(other, 
Int64Index) + self.assertTrue(res.equals(eres)) + tm.assert_numpy_array_equal(lidx, elidx) + self.assertIsNone(ridx) + + # non-unique + idx = Index([1, 1, 2, 5]) + idx2 = Index([1, 2, 5, 7, 9]) + res, lidx, ridx = idx.join(idx2, how='right', return_indexers=True) + eres = Index([1, 1, 2, 5, 7, 9]) # 1 is in idx2, so it should be x2 + elidx = np.array([0, 1, 2, 3, -1, -1]) + eridx = np.array([0, 0, 1, 2, 3, 4]) + self.assertTrue(res.equals(eres)) + tm.assert_numpy_array_equal(lidx, elidx) + tm.assert_numpy_array_equal(ridx, eridx) + + def test_join_non_int_index(self): + other = Index([3, 6, 7, 8, 10], dtype=object) + + outer = self.index.join(other, how='outer') + outer2 = other.join(self.index, how='outer') + expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, + 16, 18], dtype=object) + self.assertTrue(outer.equals(outer2)) + self.assertTrue(outer.equals(expected)) + + inner = self.index.join(other, how='inner') + inner2 = other.join(self.index, how='inner') + expected = Index([6, 8, 10], dtype=object) + self.assertTrue(inner.equals(inner2)) + self.assertTrue(inner.equals(expected)) + + left = self.index.join(other, how='left') + self.assertTrue(left.equals(self.index)) + + left2 = other.join(self.index, how='left') + self.assertTrue(left2.equals(other)) + + right = self.index.join(other, how='right') + self.assertTrue(right.equals(other)) + + right2 = other.join(self.index, how='right') + self.assertTrue(right2.equals(self.index)) + + def test_join_non_unique(self): + left = Index([4, 4, 3, 3]) + + joined, lidx, ridx = left.join(left, return_indexers=True) + + exp_joined = Index([3, 3, 3, 3, 4, 4, 4, 4]) + self.assertTrue(joined.equals(exp_joined)) + + exp_lidx = np.array([2, 2, 3, 3, 0, 0, 1, 1], dtype=np.int64) + tm.assert_numpy_array_equal(lidx, exp_lidx) + + exp_ridx = np.array([2, 3, 2, 3, 0, 1, 0, 1], dtype=np.int64) + tm.assert_numpy_array_equal(ridx, exp_ridx) + + def test_join_self(self): + kinds = 'outer', 'inner', 'left', 'right' + for kind in kinds: + 
joined = self.index.join(self.index, how=kind) + self.assertIs(self.index, joined) + + def test_intersection(self): + other = Index([1, 2, 3, 4, 5]) + result = self.index.intersection(other) + expected = np.sort(np.intersect1d(self.index.values, other.values)) + tm.assert_numpy_array_equal(result, expected) + + result = other.intersection(self.index) + expected = np.sort(np.asarray(np.intersect1d(self.index.values, + other.values))) + tm.assert_numpy_array_equal(result, expected) + + def test_intersect_str_dates(self): + dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] + + i1 = Index(dt_dates, dtype=object) + i2 = Index(['aa'], dtype=object) + res = i2.intersection(i1) + + self.assertEqual(len(res), 0) + + def test_union_noncomparable(self): + from datetime import datetime, timedelta + # corner case, non-Int64Index + now = datetime.now() + other = Index([now + timedelta(i) for i in range(4)], dtype=object) + result = self.index.union(other) + expected = np.concatenate((self.index, other)) + tm.assert_numpy_array_equal(result, expected) + + result = other.union(self.index) + expected = np.concatenate((other, self.index)) + tm.assert_numpy_array_equal(result, expected) + + def test_cant_or_shouldnt_cast(self): + # can't + data = ['foo', 'bar', 'baz'] + self.assertRaises(TypeError, Int64Index, data) + + # shouldn't + data = ['0', '1', '2'] + self.assertRaises(TypeError, Int64Index, data) + + def test_view_Index(self): + self.index.view(Index) + + def test_prevent_casting(self): + result = self.index.astype('O') + self.assertEqual(result.dtype, np.object_) + + def test_take_preserve_name(self): + index = Int64Index([1, 2, 3, 4], name='foo') + taken = index.take([3, 0, 1]) + self.assertEqual(index.name, taken.name) + + def test_int_name_format(self): + index = Index(['a', 'b', 'c'], name=0) + s = Series(lrange(3), index) + df = DataFrame(lrange(3), index=index) + repr(s) + repr(df) + + def test_print_unicode_columns(self): + df = pd.DataFrame({u("\u05d0"): [1, 
2, 3], + "\u05d1": [4, 5, 6], + "c": [7, 8, 9]}) + repr(df.columns) # should not raise UnicodeDecodeError + + def test_repr_summary(self): + with cf.option_context('display.max_seq_items', 10): + r = repr(pd.Index(np.arange(1000))) + self.assertTrue(len(r) < 200) + self.assertTrue("..." in r) + + def test_repr_roundtrip(self): + tm.assert_index_equal(eval(repr(self.index)), self.index) + + def test_unicode_string_with_unicode(self): + idx = Index(lrange(1000)) + + if PY3: + str(idx) + else: + compat.text_type(idx) + + def test_bytestring_with_unicode(self): + idx = Index(lrange(1000)) + if PY3: + bytes(idx) + else: + str(idx) + + def test_slice_keep_name(self): + idx = Int64Index([1, 2], name='asdf') + self.assertEqual(idx.name, idx[1:].name) + + def test_ufunc_coercions(self): + idx = Int64Index([1, 2, 3, 4, 5], name='x') + + result = np.sqrt(idx) + tm.assertIsInstance(result, Float64Index) + exp = Float64Index(np.sqrt(np.array([1, 2, 3, 4, 5])), name='x') + tm.assert_index_equal(result, exp) + + result = np.divide(idx, 2.) + tm.assertIsInstance(result, Float64Index) + exp = Float64Index([0.5, 1., 1.5, 2., 2.5], name='x') + tm.assert_index_equal(result, exp) + + # _evaluate_numeric_binop + result = idx + 2. + tm.assertIsInstance(result, Float64Index) + exp = Float64Index([3., 4., 5., 6., 7.], name='x') + tm.assert_index_equal(result, exp) + + result = idx - 2. + tm.assertIsInstance(result, Float64Index) + exp = Float64Index([-1., 0., 1., 2., 3.], name='x') + tm.assert_index_equal(result, exp) + + result = idx * 1. + tm.assertIsInstance(result, Float64Index) + exp = Float64Index([1., 2., 3., 4., 5.], name='x') + tm.assert_index_equal(result, exp) + + result = idx / 2. 
+ tm.assertIsInstance(result, Float64Index) + exp = Float64Index([0.5, 1., 1.5, 2., 2.5], name='x') + tm.assert_index_equal(result, exp) diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py new file mode 100644 index 0000000000000..cf7fe67be6401 --- /dev/null +++ b/pandas/tests/indexes/test_range.py @@ -0,0 +1,806 @@ +# -*- coding: utf-8 -*- + +from datetime import datetime +from itertools import combinations +import operator + +from pandas.compat import range, u, PY3 + +import numpy as np + +from pandas import (Series, Index, Float64Index, Int64Index, RangeIndex) +from pandas.util.testing import assertRaisesRegexp + +import pandas.util.testing as tm + +import pandas as pd + +from .test_numeric import Numeric + + +class TestRangeIndex(Numeric, tm.TestCase): + _holder = RangeIndex + _compat_props = ['shape', 'ndim', 'size', 'itemsize'] + + def setUp(self): + self.indices = dict(index=RangeIndex(0, 20, 2, name='foo')) + self.setup_indices() + + def create_index(self): + return RangeIndex(5) + + def test_binops(self): + ops = [operator.add, operator.sub, operator.mul, operator.floordiv, + operator.truediv, pow] + scalars = [-1, 1, 2] + idxs = [RangeIndex(0, 10, 1), RangeIndex(0, 20, 2), + RangeIndex(-10, 10, 2), RangeIndex(5, -5, -1)] + for op in ops: + for a, b in combinations(idxs, 2): + result = op(a, b) + expected = op(Int64Index(a), Int64Index(b)) + tm.assert_index_equal(result, expected) + for idx in idxs: + for scalar in scalars: + result = op(idx, scalar) + expected = op(Int64Index(idx), scalar) + tm.assert_index_equal(result, expected) + + def test_too_many_names(self): + def testit(): + self.index.names = ["roger", "harold"] + + assertRaisesRegexp(ValueError, "^Length", testit) + + def test_constructor(self): + index = RangeIndex(5) + expected = np.arange(5, dtype=np.int64) + self.assertIsInstance(index, RangeIndex) + self.assertEqual(index._start, 0) + self.assertEqual(index._stop, 5) + self.assertEqual(index._step, 1) + 
self.assertEqual(index.name, None) + tm.assert_index_equal(Index(expected), index) + + index = RangeIndex(1, 5) + expected = np.arange(1, 5, dtype=np.int64) + self.assertIsInstance(index, RangeIndex) + self.assertEqual(index._start, 1) + tm.assert_index_equal(Index(expected), index) + + index = RangeIndex(1, 5, 2) + expected = np.arange(1, 5, 2, dtype=np.int64) + self.assertIsInstance(index, RangeIndex) + self.assertEqual(index._step, 2) + tm.assert_index_equal(Index(expected), index) + + index = RangeIndex() + expected = np.empty(0, dtype=np.int64) + self.assertIsInstance(index, RangeIndex) + self.assertEqual(index._start, 0) + self.assertEqual(index._stop, 0) + self.assertEqual(index._step, 1) + tm.assert_index_equal(Index(expected), index) + + index = RangeIndex(name='Foo') + self.assertIsInstance(index, RangeIndex) + self.assertEqual(index.name, 'Foo') + + # we don't allow on a bare Index + self.assertRaises(TypeError, lambda: Index(0, 1000)) + + # invalid args + for i in [Index(['a', 'b']), Series(['a', 'b']), np.array(['a', 'b']), + [], 'foo', datetime(2000, 1, 1, 0, 0), np.arange(0, 10)]: + self.assertRaises(TypeError, lambda: RangeIndex(i)) + + def test_constructor_same(self): + + # pass thru w and w/o copy + index = RangeIndex(1, 5, 2) + result = RangeIndex(index, copy=False) + self.assertTrue(result.identical(index)) + + result = RangeIndex(index, copy=True) + self.assertTrue(result.equals(index)) + + result = RangeIndex(index) + self.assertTrue(result.equals(index)) + + self.assertRaises(TypeError, + lambda: RangeIndex(index, dtype='float64')) + + def test_constructor_range(self): + + self.assertRaises(TypeError, lambda: RangeIndex(range(1, 5, 2))) + + result = RangeIndex.from_range(range(1, 5, 2)) + expected = RangeIndex(1, 5, 2) + self.assertTrue(result.equals(expected)) + + result = RangeIndex.from_range(range(5, 6)) + expected = RangeIndex(5, 6, 1) + self.assertTrue(result.equals(expected)) + + # an invalid range + result = 
RangeIndex.from_range(range(5, 1)) + expected = RangeIndex(0, 0, 1) + self.assertTrue(result.equals(expected)) + + result = RangeIndex.from_range(range(5)) + expected = RangeIndex(0, 5, 1) + self.assertTrue(result.equals(expected)) + + result = Index(range(1, 5, 2)) + expected = RangeIndex(1, 5, 2) + self.assertTrue(result.equals(expected)) + + self.assertRaises(TypeError, + lambda: Index(range(1, 5, 2), dtype='float64')) + + def test_numeric_compat2(self): + # validate that we are handling the RangeIndex overrides to numeric ops + # and returning RangeIndex where possible + + idx = RangeIndex(0, 10, 2) + + result = idx * 2 + expected = RangeIndex(0, 20, 4) + self.assertTrue(result.equals(expected)) + + result = idx + 2 + expected = RangeIndex(2, 12, 2) + self.assertTrue(result.equals(expected)) + + result = idx - 2 + expected = RangeIndex(-2, 8, 2) + self.assertTrue(result.equals(expected)) + + # truediv under PY3 + result = idx / 2 + if PY3: + expected = RangeIndex(0, 5, 1) + else: + expected = RangeIndex(0, 5, 1).astype('float64') + self.assertTrue(result.equals(expected)) + + result = idx / 4 + expected = RangeIndex(0, 10, 2).values / 4 + self.assertTrue(result.equals(expected)) + + result = idx // 1 + expected = idx + tm.assert_index_equal(result, expected, exact=True) + + # __mul__ + result = idx * idx + expected = Index(idx.values * idx.values) + tm.assert_index_equal(result, expected, exact=True) + + # __pow__ + idx = RangeIndex(0, 1000, 2) + result = idx ** 2 + expected = idx._int64index ** 2 + tm.assert_index_equal(Index(result.values), expected, exact=True) + + # __floordiv__ + cases_exact = [(RangeIndex(0, 1000, 2), 2, RangeIndex(0, 500, 1)), + (RangeIndex(-99, -201, -3), -3, RangeIndex(33, 67, 1)), + (RangeIndex(0, 1000, 1), 2, + RangeIndex(0, 1000, 1)._int64index // 2), + (RangeIndex(0, 100, 1), 2.0, + RangeIndex(0, 100, 1)._int64index // 2.0), + (RangeIndex(), 50, RangeIndex()), + (RangeIndex(2, 4, 2), 3, RangeIndex(0, 1, 1)), + (RangeIndex(-5, -10, 
-6), 4, RangeIndex(-2, -1, 1)), + (RangeIndex(-100, -200, 3), 2, RangeIndex())] + for idx, div, expected in cases_exact: + tm.assert_index_equal(idx // div, expected, exact=True) + + def test_constructor_corner(self): + arr = np.array([1, 2, 3, 4], dtype=object) + index = RangeIndex(1, 5) + self.assertEqual(index.values.dtype, np.int64) + self.assertTrue(index.equals(arr)) + + # non-int raises Exception + self.assertRaises(TypeError, RangeIndex, '1', '10', '1') + self.assertRaises(TypeError, RangeIndex, 1.1, 10.2, 1.3) + + # invalid passed type + self.assertRaises(TypeError, lambda: RangeIndex(1, 5, dtype='float64')) + + def test_copy(self): + i = RangeIndex(5, name='Foo') + i_copy = i.copy() + self.assertTrue(i_copy is not i) + self.assertTrue(i_copy.identical(i)) + self.assertEqual(i_copy._start, 0) + self.assertEqual(i_copy._stop, 5) + self.assertEqual(i_copy._step, 1) + self.assertEqual(i_copy.name, 'Foo') + + def test_repr(self): + i = RangeIndex(5, name='Foo') + result = repr(i) + if PY3: + expected = "RangeIndex(start=0, stop=5, step=1, name='Foo')" + else: + expected = "RangeIndex(start=0, stop=5, step=1, name=u'Foo')" + self.assertEqual(result, expected) + + result = eval(result) + self.assertTrue(result.equals(i)) + + i = RangeIndex(5, 0, -1) + result = repr(i) + expected = "RangeIndex(start=5, stop=0, step=-1)" + self.assertEqual(result, expected) + + result = eval(result) + self.assertTrue(result.equals(i)) + + def test_insert(self): + + idx = RangeIndex(5, name='Foo') + result = idx[1:4] + + # test 0th element + self.assertTrue(idx[0:4].equals(result.insert(0, idx[0]))) + + def test_delete(self): + + idx = RangeIndex(5, name='Foo') + expected = idx[1:].astype(int) + result = idx.delete(0) + self.assertTrue(result.equals(expected)) + self.assertEqual(result.name, expected.name) + + expected = idx[:-1].astype(int) + result = idx.delete(-1) + self.assertTrue(result.equals(expected)) + self.assertEqual(result.name, expected.name) + + with
tm.assertRaises((IndexError, ValueError)): + # either depending on numpy version + result = idx.delete(len(idx)) + + def test_view(self): + super(TestRangeIndex, self).test_view() + + i = RangeIndex(name='Foo') + i_view = i.view() + self.assertEqual(i_view.name, 'Foo') + + i_view = i.view('i8') + tm.assert_numpy_array_equal(i, i_view) + + i_view = i.view(RangeIndex) + tm.assert_index_equal(i, i_view) + + def test_dtype(self): + self.assertEqual(self.index.dtype, np.int64) + + def test_is_monotonic(self): + self.assertTrue(self.index.is_monotonic) + self.assertTrue(self.index.is_monotonic_increasing) + self.assertFalse(self.index.is_monotonic_decreasing) + + index = RangeIndex(4, 0, -1) + self.assertFalse(index.is_monotonic) + self.assertTrue(index.is_monotonic_decreasing) + + index = RangeIndex(1, 2) + self.assertTrue(index.is_monotonic) + self.assertTrue(index.is_monotonic_increasing) + self.assertTrue(index.is_monotonic_decreasing) + + def test_equals(self): + equiv_pairs = [(RangeIndex(0, 9, 2), RangeIndex(0, 10, 2)), + (RangeIndex(0), RangeIndex(1, -1, 3)), + (RangeIndex(1, 2, 3), RangeIndex(1, 3, 4)), + (RangeIndex(0, -9, -2), RangeIndex(0, -10, -2))] + for left, right in equiv_pairs: + self.assertTrue(left.equals(right)) + self.assertTrue(right.equals(left)) + + def test_logical_compat(self): + idx = self.create_index() + self.assertEqual(idx.all(), idx.values.all()) + self.assertEqual(idx.any(), idx.values.any()) + + def test_identical(self): + i = Index(self.index.copy()) + self.assertTrue(i.identical(self.index)) + + # we don't allow object dtype for RangeIndex + if isinstance(self.index, RangeIndex): + return + + same_values_different_type = Index(i, dtype=object) + self.assertFalse(i.identical(same_values_different_type)) + + i = self.index.copy(dtype=object) + i = i.rename('foo') + same_values = Index(i, dtype=object) + self.assertTrue(same_values.identical(self.index.copy(dtype=object))) + + self.assertFalse(i.identical(self.index)) + 
self.assertTrue(Index(same_values, name='foo', dtype=object).identical( + i)) + + self.assertFalse(self.index.copy(dtype=object) + .identical(self.index.copy(dtype='int64'))) + + def test_get_indexer(self): + target = RangeIndex(10) + indexer = self.index.get_indexer(target) + expected = np.array([0, -1, 1, -1, 2, -1, 3, -1, 4, -1]) + self.assert_numpy_array_equal(indexer, expected) + + def test_get_indexer_pad(self): + target = RangeIndex(10) + indexer = self.index.get_indexer(target, method='pad') + expected = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4]) + self.assert_numpy_array_equal(indexer, expected) + + def test_get_indexer_backfill(self): + target = RangeIndex(10) + indexer = self.index.get_indexer(target, method='backfill') + expected = np.array([0, 1, 1, 2, 2, 3, 3, 4, 4, 5]) + self.assert_numpy_array_equal(indexer, expected) + + def test_join_outer(self): + # join with Int64Index + other = Int64Index(np.arange(25, 14, -1)) + + res, lidx, ridx = self.index.join(other, how='outer', + return_indexers=True) + noidx_res = self.index.join(other, how='outer') + self.assertTrue(res.equals(noidx_res)) + + eres = Int64Index([0, 2, 4, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, + 21, 22, 23, 24, 25]) + elidx = np.array([0, 1, 2, 3, 4, 5, 6, 7, -1, 8, -1, 9, + -1, -1, -1, -1, -1, -1, -1], dtype=np.int64) + eridx = np.array([-1, -1, -1, -1, -1, -1, -1, -1, 10, 9, 8, 7, 6, + 5, 4, 3, 2, 1, 0], dtype=np.int64) + + self.assertIsInstance(res, Int64Index) + self.assertFalse(isinstance(res, RangeIndex)) + self.assertTrue(res.equals(eres)) + self.assert_numpy_array_equal(lidx, elidx) + self.assert_numpy_array_equal(ridx, eridx) + + # join with RangeIndex + other = RangeIndex(25, 14, -1) + + res, lidx, ridx = self.index.join(other, how='outer', + return_indexers=True) + noidx_res = self.index.join(other, how='outer') + self.assertTrue(res.equals(noidx_res)) + + self.assertIsInstance(res, Int64Index) + self.assertFalse(isinstance(res, RangeIndex)) + 
self.assertTrue(res.equals(eres)) + self.assert_numpy_array_equal(lidx, elidx) + self.assert_numpy_array_equal(ridx, eridx) + + def test_join_inner(self): + # Join with non-RangeIndex + other = Int64Index(np.arange(25, 14, -1)) + + res, lidx, ridx = self.index.join(other, how='inner', + return_indexers=True) + + # no guarantee of sortedness, so sort for comparison purposes + ind = res.argsort() + res = res.take(ind) + lidx = lidx.take(ind) + ridx = ridx.take(ind) + + eres = Int64Index([16, 18]) + elidx = np.array([8, 9]) + eridx = np.array([9, 7]) + + self.assertIsInstance(res, Int64Index) + self.assertTrue(res.equals(eres)) + self.assert_numpy_array_equal(lidx, elidx) + self.assert_numpy_array_equal(ridx, eridx) + + # Join two RangeIndex + other = RangeIndex(25, 14, -1) + + res, lidx, ridx = self.index.join(other, how='inner', + return_indexers=True) + + self.assertIsInstance(res, RangeIndex) + self.assertTrue(res.equals(eres)) + self.assert_numpy_array_equal(lidx, elidx) + self.assert_numpy_array_equal(ridx, eridx) + + def test_join_left(self): + # Join with Int64Index + other = Int64Index(np.arange(25, 14, -1)) + + res, lidx, ridx = self.index.join(other, how='left', + return_indexers=True) + eres = self.index + eridx = np.array([-1, -1, -1, -1, -1, -1, -1, -1, 9, 7], + dtype=np.int64) + + self.assertIsInstance(res, RangeIndex) + self.assertTrue(res.equals(eres)) + self.assertIsNone(lidx) + self.assert_numpy_array_equal(ridx, eridx) + + # Join with RangeIndex + other = RangeIndex(25, 14, -1) + + res, lidx, ridx = self.index.join(other, how='left', + return_indexers=True) + + self.assertIsInstance(res, RangeIndex) + self.assertTrue(res.equals(eres)) + self.assertIsNone(lidx) + self.assert_numpy_array_equal(ridx, eridx) + + def test_join_right(self): + # Join with Int64Index + other = Int64Index(np.arange(25, 14, -1)) + + res, lidx, ridx = self.index.join(other, how='right', + return_indexers=True) + eres = other + elidx = np.array([-1, -1, -1, -1, -1,
-1, -1, 9, -1, 8, -1], + dtype=np.int64) + + self.assertIsInstance(other, Int64Index) + self.assertTrue(res.equals(eres)) + self.assert_numpy_array_equal(lidx, elidx) + self.assertIsNone(ridx) + + # Join with RangeIndex + other = RangeIndex(25, 14, -1) + + res, lidx, ridx = self.index.join(other, how='right', + return_indexers=True) + eres = other + + self.assertIsInstance(other, RangeIndex) + self.assertTrue(res.equals(eres)) + self.assert_numpy_array_equal(lidx, elidx) + self.assertIsNone(ridx) + + def test_join_non_int_index(self): + other = Index([3, 6, 7, 8, 10], dtype=object) + + outer = self.index.join(other, how='outer') + outer2 = other.join(self.index, how='outer') + expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, + 16, 18], dtype=object) + self.assertTrue(outer.equals(outer2)) + self.assertTrue(outer.equals(expected)) + + inner = self.index.join(other, how='inner') + inner2 = other.join(self.index, how='inner') + expected = Index([6, 8, 10], dtype=object) + self.assertTrue(inner.equals(inner2)) + self.assertTrue(inner.equals(expected)) + + left = self.index.join(other, how='left') + self.assertTrue(left.equals(self.index)) + + left2 = other.join(self.index, how='left') + self.assertTrue(left2.equals(other)) + + right = self.index.join(other, how='right') + self.assertTrue(right.equals(other)) + + right2 = other.join(self.index, how='right') + self.assertTrue(right2.equals(self.index)) + + def test_join_non_unique(self): + other = Index([4, 4, 3, 3]) + + res, lidx, ridx = self.index.join(other, return_indexers=True) + + eres = Int64Index([0, 2, 4, 4, 6, 8, 10, 12, 14, 16, 18]) + elidx = np.array([0, 1, 2, 2, 3, 4, 5, 6, 7, 8, 9], dtype=np.int64) + eridx = np.array([-1, -1, 0, 1, -1, -1, -1, -1, -1, -1, -1], + dtype=np.int64) + + self.assertTrue(res.equals(eres)) + self.assert_numpy_array_equal(lidx, elidx) + self.assert_numpy_array_equal(ridx, eridx) + + def test_join_self(self): + kinds = 'outer', 'inner', 'left', 'right' + for kind in kinds: + joined
= self.index.join(self.index, how=kind) + self.assertIs(self.index, joined) + + def test_intersection(self): + # intersect with Int64Index + other = Index(np.arange(1, 6)) + result = self.index.intersection(other) + expected = np.sort(np.intersect1d(self.index.values, other.values)) + self.assert_numpy_array_equal(result, expected) + + result = other.intersection(self.index) + expected = np.sort(np.asarray(np.intersect1d(self.index.values, + other.values))) + self.assert_numpy_array_equal(result, expected) + + # intersect with increasing RangeIndex + other = RangeIndex(1, 6) + result = self.index.intersection(other) + expected = np.sort(np.intersect1d(self.index.values, other.values)) + self.assert_numpy_array_equal(result, expected) + + # intersect with decreasing RangeIndex + other = RangeIndex(5, 0, -1) + result = self.index.intersection(other) + expected = np.sort(np.intersect1d(self.index.values, other.values)) + self.assert_numpy_array_equal(result, expected) + + def test_intersect_str_dates(self): + dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] + + i1 = Index(dt_dates, dtype=object) + i2 = Index(['aa'], dtype=object) + res = i2.intersection(i1) + + self.assertEqual(len(res), 0) + + def test_union_noncomparable(self): + from datetime import datetime, timedelta + # corner case, non-Int64Index + now = datetime.now() + other = Index([now + timedelta(i) for i in range(4)], dtype=object) + result = self.index.union(other) + expected = np.concatenate((self.index, other)) + self.assert_numpy_array_equal(result, expected) + + result = other.union(self.index) + expected = np.concatenate((other, self.index)) + self.assert_numpy_array_equal(result, expected) + + def test_union(self): + RI = RangeIndex + I64 = Int64Index + cases = [(RI(0, 10, 1), RI(0, 10, 1), RI(0, 10, 1)), + (RI(0, 10, 1), RI(5, 20, 1), RI(0, 20, 1)), + (RI(0, 10, 1), RI(10, 20, 1), RI(0, 20, 1)), + (RI(0, -10, -1), RI(0, -10, -1), RI(0, -10, -1)), + (RI(0, -10, -1), RI(-10, -20, -1), 
RI(-19, 1, 1)), + (RI(0, 10, 2), RI(1, 10, 2), RI(0, 10, 1)), + (RI(0, 11, 2), RI(1, 12, 2), RI(0, 12, 1)), + (RI(0, 21, 4), RI(-2, 24, 4), RI(-2, 24, 2)), + (RI(0, -20, -2), RI(-1, -21, -2), RI(-19, 1, 1)), + (RI(0, 100, 5), RI(0, 100, 20), RI(0, 100, 5)), + (RI(0, -100, -5), RI(5, -100, -20), RI(-95, 10, 5)), + (RI(0, -11, -1), RI(1, -12, -4), RI(-11, 2, 1)), + (RI(), RI(), RI()), + (RI(0, -10, -2), RI(), RI(0, -10, -2)), + (RI(0, 100, 2), RI(100, 150, 200), RI(0, 102, 2)), + (RI(0, -100, -2), RI(-100, 50, 102), RI(-100, 4, 2)), + (RI(0, -100, -1), RI(0, -50, -3), RI(-99, 1, 1)), + (RI(0, 1, 1), RI(5, 6, 10), RI(0, 6, 5)), + (RI(0, 10, 5), RI(-5, -6, -20), RI(-5, 10, 5)), + (RI(0, 3, 1), RI(4, 5, 1), I64([0, 1, 2, 4])), + (RI(0, 10, 1), I64([]), RI(0, 10, 1)), + (RI(), I64([1, 5, 6]), I64([1, 5, 6]))] + for idx1, idx2, expected in cases: + res1 = idx1.union(idx2) + res2 = idx2.union(idx1) + res3 = idx1._int64index.union(idx2) + tm.assert_index_equal(res1, expected, exact=True) + tm.assert_index_equal(res2, expected, exact=True) + tm.assert_index_equal(res3, expected) + + def test_nbytes(self): + + # memory savings vs int index + i = RangeIndex(0, 1000) + self.assertTrue(i.nbytes < i.astype(int).nbytes / 10) + + # constant memory usage + i2 = RangeIndex(0, 10) + self.assertEqual(i.nbytes, i2.nbytes) + + def test_cant_or_shouldnt_cast(self): + # can't + self.assertRaises(TypeError, RangeIndex, 'foo', 'bar', 'baz') + + # shouldn't + self.assertRaises(TypeError, RangeIndex, '0', '1', '2') + + def test_view_Index(self): + self.index.view(Index) + + def test_prevent_casting(self): + result = self.index.astype('O') + self.assertEqual(result.dtype, np.object_) + + def test_take_preserve_name(self): + index = RangeIndex(1, 5, name='foo') + taken = index.take([3, 0, 1]) + self.assertEqual(index.name, taken.name) + + def test_print_unicode_columns(self): + df = pd.DataFrame({u("\u05d0"): [1, 2, 3], + "\u05d1": [4, 5, 6], + "c": [7, 8, 9]}) + repr(df.columns) # should not 
raise UnicodeDecodeError + + def test_repr_roundtrip(self): + tm.assert_index_equal(eval(repr(self.index)), self.index) + + def test_slice_keep_name(self): + idx = RangeIndex(1, 2, name='asdf') + self.assertEqual(idx.name, idx[1:].name) + + def test_explicit_conversions(self): + + # GH 8608 + # add/sub are overridden explicitly for Float/Int Index + idx = RangeIndex(5) + + # float conversions + arr = np.arange(5, dtype='int64') * 3.2 + expected = Float64Index(arr) + fidx = idx * 3.2 + tm.assert_index_equal(fidx, expected) + fidx = 3.2 * idx + tm.assert_index_equal(fidx, expected) + + # interops with numpy arrays + expected = Float64Index(arr) + a = np.zeros(5, dtype='float64') + result = fidx - a + tm.assert_index_equal(result, expected) + + expected = Float64Index(-arr) + a = np.zeros(5, dtype='float64') + result = a - fidx + tm.assert_index_equal(result, expected) + + def test_duplicates(self): + for ind in self.indices: + if not len(ind): + continue + idx = self.indices[ind] + self.assertTrue(idx.is_unique) + self.assertFalse(idx.has_duplicates) + + def test_ufunc_compat(self): + idx = RangeIndex(5) + result = np.sin(idx) + expected = Float64Index(np.sin(np.arange(5, dtype='int64'))) + tm.assert_index_equal(result, expected) + + def test_extended_gcd(self): + result = self.index._extended_gcd(6, 10) + self.assertEqual(result[0], result[1] * 6 + result[2] * 10) + self.assertEqual(2, result[0]) + + result = self.index._extended_gcd(10, 6) + self.assertEqual(2, result[1] * 10 + result[2] * 6) + self.assertEqual(2, result[0]) + + def test_min_fitting_element(self): + result = RangeIndex(0, 20, 2)._min_fitting_element(1) + self.assertEqual(2, result) + + result = RangeIndex(1, 6)._min_fitting_element(1) + self.assertEqual(1, result) + + result = RangeIndex(18, -2, -2)._min_fitting_element(1) + self.assertEqual(2, result) + + result = RangeIndex(5, 0, -1)._min_fitting_element(1) + self.assertEqual(1, result) + + big_num = 500000000000000000000000 + + result =
RangeIndex(5, big_num * 2, 1)._min_fitting_element(big_num) + self.assertEqual(big_num, result) + + def test_max_fitting_element(self): + result = RangeIndex(0, 20, 2)._max_fitting_element(17) + self.assertEqual(16, result) + + result = RangeIndex(1, 6)._max_fitting_element(4) + self.assertEqual(4, result) + + result = RangeIndex(18, -2, -2)._max_fitting_element(17) + self.assertEqual(16, result) + + result = RangeIndex(5, 0, -1)._max_fitting_element(4) + self.assertEqual(4, result) + + big_num = 500000000000000000000000 + + result = RangeIndex(5, big_num * 2, 1)._max_fitting_element(big_num) + self.assertEqual(big_num, result) + + def test_pickle_compat_construction(self): + # RangeIndex() is a valid constructor + pass + + def test_slice_specialised(self): + + # scalar indexing + res = self.index[1] + expected = 2 + self.assertEqual(res, expected) + + res = self.index[-1] + expected = 18 + self.assertEqual(res, expected) + + # slicing + # slice value completion + index = self.index[:] + expected = self.index + self.assert_numpy_array_equal(index, expected) + + # positive slice values + index = self.index[7:10:2] + expected = np.array([14, 18]) + self.assert_numpy_array_equal(index, expected) + + # negative slice values + index = self.index[-1:-5:-2] + expected = np.array([18, 14]) + self.assert_numpy_array_equal(index, expected) + + # stop overshoot + index = self.index[2:100:4] + expected = np.array([4, 12]) + self.assert_numpy_array_equal(index, expected) + + # reverse + index = self.index[::-1] + expected = self.index.values[::-1] + self.assert_numpy_array_equal(index, expected) + + index = self.index[-8::-1] + expected = np.array([4, 2, 0]) + self.assert_numpy_array_equal(index, expected) + + index = self.index[-40::-1] + expected = np.array([]) + self.assert_numpy_array_equal(index, expected) + + index = self.index[40::-1] + expected = self.index.values[40::-1] + self.assert_numpy_array_equal(index, expected) + + index = self.index[10::-1] + expected = 
self.index.values[::-1] + self.assert_numpy_array_equal(index, expected) + + def test_len_specialised(self): + + # make sure that our len is the same as + # np.arange calc + + for step in np.arange(1, 6, 1): + + arr = np.arange(0, 5, step) + i = RangeIndex(0, 5, step) + self.assertEqual(len(i), len(arr)) + + i = RangeIndex(5, 0, step) + self.assertEqual(len(i), 0) + + for step in np.arange(-6, -1, 1): + + arr = np.arange(5, 0, step) + i = RangeIndex(5, 0, step) + self.assertEqual(len(i), len(arr)) + + i = RangeIndex(0, 5, step) + self.assertEqual(len(i), 0) diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py deleted file mode 100644 index af42c2751bf46..0000000000000 --- a/pandas/tests/test_index.py +++ /dev/null @@ -1,7146 +0,0 @@ -# -*- coding: utf-8 -*- -# pylint: disable=E1101,E1103,W0232 - -# TODO(wesm): fix long line flake8 issues -# flake8: noqa - -from datetime import datetime, timedelta, time -from pandas import compat -from pandas.compat import (long, is_platform_windows, range, lrange, lzip, u, - zip, PY3) -from itertools import combinations -import operator -import re -import nose -import warnings -import os - -import numpy as np - -from pandas import (period_range, date_range, Categorical, Series, DataFrame, - Index, Float64Index, Int64Index, RangeIndex, MultiIndex, - CategoricalIndex, DatetimeIndex, TimedeltaIndex, - PeriodIndex) -from pandas.core.index import InvalidIndexError -from pandas.util.testing import (assert_almost_equal, assertRaisesRegexp, - assert_copy) - -import pandas.util.testing as tm -import pandas.core.config as cf - -from pandas.tseries.index import _to_m8 - -import pandas as pd -from pandas.lib import Timestamp -from itertools import product - -if PY3: - unicode = lambda x: x - - -class Base(object): - """ base class for index sub-class tests """ - _holder = None - _compat_props = ['shape', 'ndim', 'size', 'itemsize', 'nbytes'] - - def setup_indices(self): - # setup the test indices in the self.indicies dict - for 
name, ind in self.indices.items(): - setattr(self, name, ind) - - def verify_pickle(self, index): - unpickled = self.round_trip_pickle(index) - self.assertTrue(index.equals(unpickled)) - - def test_pickle_compat_construction(self): - # this is testing for pickle compat - if self._holder is None: - return - - # need an object to create with - self.assertRaises(TypeError, self._holder) - - def test_shift(self): - - # GH8083 test the base class for shift - idx = self.create_index() - self.assertRaises(NotImplementedError, idx.shift, 1) - self.assertRaises(NotImplementedError, idx.shift, 1, 2) - - def test_create_index_existing_name(self): - - # GH11193, when an existing index is passed, and a new name is not - # specified, the new index should inherit the previous object name - expected = self.create_index() - if not isinstance(expected, MultiIndex): - expected.name = 'foo' - result = pd.Index(expected) - tm.assert_index_equal(result, expected) - - result = pd.Index(expected, name='bar') - expected.name = 'bar' - tm.assert_index_equal(result, expected) - else: - expected.names = ['foo', 'bar'] - result = pd.Index(expected) - tm.assert_index_equal( - result, Index(Index([('foo', 'one'), ('foo', 'two'), - ('bar', 'one'), ('baz', 'two'), - ('qux', 'one'), ('qux', 'two')], - dtype='object'), - names=['foo', 'bar'])) - - result = pd.Index(expected, names=['A', 'B']) - tm.assert_index_equal( - result, - Index(Index([('foo', 'one'), ('foo', 'two'), ('bar', 'one'), - ('baz', 'two'), ('qux', 'one'), ('qux', 'two')], - dtype='object'), names=['A', 'B'])) - - def test_numeric_compat(self): - - idx = self.create_index() - tm.assertRaisesRegexp(TypeError, "cannot perform __mul__", - lambda: idx * 1) - tm.assertRaisesRegexp(TypeError, "cannot perform __mul__", - lambda: 1 * idx) - - div_err = "cannot perform __truediv__" if PY3 \ - else "cannot perform __div__" - tm.assertRaisesRegexp(TypeError, div_err, lambda: idx / 1) - tm.assertRaisesRegexp(TypeError, div_err, lambda: 1 / idx) 
- tm.assertRaisesRegexp(TypeError, "cannot perform __floordiv__", - lambda: idx // 1) - tm.assertRaisesRegexp(TypeError, "cannot perform __floordiv__", - lambda: 1 // idx) - - def test_logical_compat(self): - idx = self.create_index() - tm.assertRaisesRegexp(TypeError, 'cannot perform all', - lambda: idx.all()) - tm.assertRaisesRegexp(TypeError, 'cannot perform any', - lambda: idx.any()) - - def test_boolean_context_compat(self): - - # boolean context compat - idx = self.create_index() - - def f(): - if idx: - pass - - tm.assertRaisesRegexp(ValueError, 'The truth value of a', f) - - def test_reindex_base(self): - idx = self.create_index() - expected = np.arange(idx.size) - - actual = idx.get_indexer(idx) - tm.assert_numpy_array_equal(expected, actual) - - with tm.assertRaisesRegexp(ValueError, 'Invalid fill method'): - idx.get_indexer(idx, method='invalid') - - def test_ndarray_compat_properties(self): - - idx = self.create_index() - self.assertTrue(idx.T.equals(idx)) - self.assertTrue(idx.transpose().equals(idx)) - - values = idx.values - for prop in self._compat_props: - self.assertEqual(getattr(idx, prop), getattr(values, prop)) - - # test for validity - idx.nbytes - idx.values.nbytes - - def test_repr_roundtrip(self): - - idx = self.create_index() - tm.assert_index_equal(eval(repr(idx)), idx) - - def test_str(self): - - # test the string repr - idx = self.create_index() - idx.name = 'foo' - self.assertTrue("'foo'" in str(idx)) - self.assertTrue(idx.__class__.__name__ in str(idx)) - - def test_dtype_str(self): - for idx in self.indices.values(): - dtype = idx.dtype_str - self.assertIsInstance(dtype, compat.string_types) - if isinstance(idx, PeriodIndex): - self.assertEqual(dtype, 'period') - else: - self.assertEqual(dtype, str(idx.dtype)) - - def test_repr_max_seq_item_setting(self): - # GH10182 - idx = self.create_index() - idx = idx.repeat(50) - with pd.option_context("display.max_seq_items", None): - repr(idx) - self.assertFalse('...' 
in str(idx)) - - def test_wrong_number_names(self): - def testit(ind): - ind.names = ["apple", "banana", "carrot"] - - for ind in self.indices.values(): - assertRaisesRegexp(ValueError, "^Length", testit, ind) - - def test_set_name_methods(self): - new_name = "This is the new name for this index" - for ind in self.indices.values(): - - # don't tests a MultiIndex here (as its tested separated) - if isinstance(ind, MultiIndex): - continue - - original_name = ind.name - new_ind = ind.set_names([new_name]) - self.assertEqual(new_ind.name, new_name) - self.assertEqual(ind.name, original_name) - res = ind.rename(new_name, inplace=True) - - # should return None - self.assertIsNone(res) - self.assertEqual(ind.name, new_name) - self.assertEqual(ind.names, [new_name]) - # with assertRaisesRegexp(TypeError, "list-like"): - # # should still fail even if it would be the right length - # ind.set_names("a") - with assertRaisesRegexp(ValueError, "Level must be None"): - ind.set_names("a", level=0) - - # rename in place just leaves tuples and other containers alone - name = ('A', 'B') - ind.rename(name, inplace=True) - self.assertEqual(ind.name, name) - self.assertEqual(ind.names, [name]) - - def test_hash_error(self): - for ind in self.indices.values(): - with tm.assertRaisesRegexp(TypeError, "unhashable type: %r" % - type(ind).__name__): - hash(ind) - - def test_copy_and_deepcopy(self): - from copy import copy, deepcopy - - for ind in self.indices.values(): - - # don't tests a MultiIndex here (as its tested separated) - if isinstance(ind, MultiIndex): - continue - - for func in (copy, deepcopy): - idx_copy = func(ind) - self.assertIsNot(idx_copy, ind) - self.assertTrue(idx_copy.equals(ind)) - - new_copy = ind.copy(deep=True, name="banana") - self.assertEqual(new_copy.name, "banana") - - def test_duplicates(self): - for ind in self.indices.values(): - - if not len(ind): - continue - if isinstance(ind, MultiIndex): - continue - idx = self._holder([ind[0]] * 5) - 
self.assertFalse(idx.is_unique) - self.assertTrue(idx.has_duplicates) - - # GH 10115 - # preserve names - idx.name = 'foo' - result = idx.drop_duplicates() - self.assertEqual(result.name, 'foo') - self.assert_index_equal(result, Index([ind[0]], name='foo')) - - def test_sort(self): - for ind in self.indices.values(): - self.assertRaises(TypeError, ind.sort) - - def test_order(self): - for ind in self.indices.values(): - # 9816 deprecated - with tm.assert_produces_warning(FutureWarning): - ind.order() - - def test_mutability(self): - for ind in self.indices.values(): - if not len(ind): - continue - self.assertRaises(TypeError, ind.__setitem__, 0, ind[0]) - - def test_view(self): - for ind in self.indices.values(): - i_view = ind.view() - self.assertEqual(i_view.name, ind.name) - - def test_compat(self): - for ind in self.indices.values(): - self.assertEqual(ind.tolist(), list(ind)) - - def test_argsort(self): - for k, ind in self.indices.items(): - - # sep teststed - if k in ['catIndex']: - continue - - result = ind.argsort() - expected = np.array(ind).argsort() - tm.assert_numpy_array_equal(result, expected) - - def test_pickle(self): - for ind in self.indices.values(): - self.verify_pickle(ind) - ind.name = 'foo' - self.verify_pickle(ind) - - def test_take(self): - indexer = [4, 3, 0, 2] - for k, ind in self.indices.items(): - - # separate - if k in ['boolIndex', 'tuples', 'empty']: - continue - - result = ind.take(indexer) - expected = ind[indexer] - self.assertTrue(result.equals(expected)) - - if not isinstance(ind, - (DatetimeIndex, PeriodIndex, TimedeltaIndex)): - # GH 10791 - with tm.assertRaises(AttributeError): - ind.freq - - def test_setops_errorcases(self): - for name, idx in compat.iteritems(self.indices): - # # non-iterable input - cases = [0.5, 'xxx'] - methods = [idx.intersection, idx.union, idx.difference, - idx.sym_diff] - - for method in methods: - for case in cases: - assertRaisesRegexp(TypeError, - "Input must be Index or array-like", - method, 
case) - - def test_intersection_base(self): - for name, idx in compat.iteritems(self.indices): - first = idx[:5] - second = idx[:3] - intersect = first.intersection(second) - - if isinstance(idx, CategoricalIndex): - pass - else: - self.assertTrue(tm.equalContents(intersect, second)) - - # GH 10149 - cases = [klass(second.values) - for klass in [np.array, Series, list]] - for case in cases: - if isinstance(idx, PeriodIndex): - msg = "can only call with other PeriodIndex-ed objects" - with tm.assertRaisesRegexp(ValueError, msg): - result = first.intersection(case) - elif isinstance(idx, CategoricalIndex): - pass - else: - result = first.intersection(case) - self.assertTrue(tm.equalContents(result, second)) - - if isinstance(idx, MultiIndex): - msg = "other must be a MultiIndex or a list of tuples" - with tm.assertRaisesRegexp(TypeError, msg): - result = first.intersection([1, 2, 3]) - - def test_union_base(self): - for name, idx in compat.iteritems(self.indices): - first = idx[3:] - second = idx[:5] - everything = idx - union = first.union(second) - self.assertTrue(tm.equalContents(union, everything)) - - # GH 10149 - cases = [klass(second.values) - for klass in [np.array, Series, list]] - for case in cases: - if isinstance(idx, PeriodIndex): - msg = "can only call with other PeriodIndex-ed objects" - with tm.assertRaisesRegexp(ValueError, msg): - result = first.union(case) - elif isinstance(idx, CategoricalIndex): - pass - else: - result = first.union(case) - self.assertTrue(tm.equalContents(result, everything)) - - if isinstance(idx, MultiIndex): - msg = "other must be a MultiIndex or a list of tuples" - with tm.assertRaisesRegexp(TypeError, msg): - result = first.union([1, 2, 3]) - - def test_difference_base(self): - for name, idx in compat.iteritems(self.indices): - first = idx[2:] - second = idx[:4] - answer = idx[4:] - result = first.difference(second) - - if isinstance(idx, CategoricalIndex): - pass - else: - self.assertTrue(tm.equalContents(result, answer)) 
- - # GH 10149 - cases = [klass(second.values) - for klass in [np.array, Series, list]] - for case in cases: - if isinstance(idx, PeriodIndex): - msg = "can only call with other PeriodIndex-ed objects" - with tm.assertRaisesRegexp(ValueError, msg): - result = first.difference(case) - elif isinstance(idx, CategoricalIndex): - pass - elif isinstance(idx, (DatetimeIndex, TimedeltaIndex)): - self.assertEqual(result.__class__, answer.__class__) - tm.assert_numpy_array_equal(result.asi8, answer.asi8) - else: - result = first.difference(case) - self.assertTrue(tm.equalContents(result, answer)) - - if isinstance(idx, MultiIndex): - msg = "other must be a MultiIndex or a list of tuples" - with tm.assertRaisesRegexp(TypeError, msg): - result = first.difference([1, 2, 3]) - - def test_symmetric_diff(self): - for name, idx in compat.iteritems(self.indices): - first = idx[1:] - second = idx[:-1] - if isinstance(idx, CategoricalIndex): - pass - else: - answer = idx[[0, -1]] - result = first.sym_diff(second) - self.assertTrue(tm.equalContents(result, answer)) - - # GH 10149 - cases = [klass(second.values) - for klass in [np.array, Series, list]] - for case in cases: - if isinstance(idx, PeriodIndex): - msg = "can only call with other PeriodIndex-ed objects" - with tm.assertRaisesRegexp(ValueError, msg): - result = first.sym_diff(case) - elif isinstance(idx, CategoricalIndex): - pass - else: - result = first.sym_diff(case) - self.assertTrue(tm.equalContents(result, answer)) - - if isinstance(idx, MultiIndex): - msg = "other must be a MultiIndex or a list of tuples" - with tm.assertRaisesRegexp(TypeError, msg): - result = first.sym_diff([1, 2, 3]) - - def test_insert_base(self): - - for name, idx in compat.iteritems(self.indices): - result = idx[1:4] - - if not len(idx): - continue - - # test 0th element - self.assertTrue(idx[0:4].equals(result.insert(0, idx[0]))) - - def test_delete_base(self): - - for name, idx in compat.iteritems(self.indices): - - if not len(idx): - continue - 
- if isinstance(idx, RangeIndex): - # tested in class - continue - - expected = idx[1:] - result = idx.delete(0) - self.assertTrue(result.equals(expected)) - self.assertEqual(result.name, expected.name) - - expected = idx[:-1] - result = idx.delete(-1) - self.assertTrue(result.equals(expected)) - self.assertEqual(result.name, expected.name) - - with tm.assertRaises((IndexError, ValueError)): - # either depending on numpy version - result = idx.delete(len(idx)) - - def test_equals_op(self): - # GH9947, GH10637 - index_a = self.create_index() - if isinstance(index_a, PeriodIndex): - return - - n = len(index_a) - index_b = index_a[0:-1] - index_c = index_a[0:-1].append(index_a[-2:-1]) - index_d = index_a[0:1] - with tm.assertRaisesRegexp(ValueError, "Lengths must match"): - index_a == index_b - expected1 = np.array([True] * n) - expected2 = np.array([True] * (n - 1) + [False]) - tm.assert_numpy_array_equal(index_a == index_a, expected1) - tm.assert_numpy_array_equal(index_a == index_c, expected2) - - # test comparisons with numpy arrays - array_a = np.array(index_a) - array_b = np.array(index_a[0:-1]) - array_c = np.array(index_a[0:-1].append(index_a[-2:-1])) - array_d = np.array(index_a[0:1]) - with tm.assertRaisesRegexp(ValueError, "Lengths must match"): - index_a == array_b - tm.assert_numpy_array_equal(index_a == array_a, expected1) - tm.assert_numpy_array_equal(index_a == array_c, expected2) - - # test comparisons with Series - series_a = Series(array_a) - series_b = Series(array_b) - series_c = Series(array_c) - series_d = Series(array_d) - with tm.assertRaisesRegexp(ValueError, "Lengths must match"): - index_a == series_b - tm.assert_numpy_array_equal(index_a == series_a, expected1) - tm.assert_numpy_array_equal(index_a == series_c, expected2) - - # cases where length is 1 for one of them - with tm.assertRaisesRegexp(ValueError, "Lengths must match"): - index_a == index_d - with tm.assertRaisesRegexp(ValueError, "Lengths must match"): - index_a == series_d - 
with tm.assertRaisesRegexp(ValueError, "Lengths must match"): - index_a == array_d - with tm.assertRaisesRegexp(ValueError, "Series lengths must match"): - series_a == series_d - with tm.assertRaisesRegexp(ValueError, "Lengths must match"): - series_a == array_d - - # comparing with a scalar should broadcast; note that we are excluding - # MultiIndex because in this case each item in the index is a tuple of - # length 2, and therefore is considered an array of length 2 in the - # comparison instead of a scalar - if not isinstance(index_a, MultiIndex): - expected3 = np.array([False] * (len(index_a) - 2) + [True, False]) - # assuming the 2nd to last item is unique in the data - item = index_a[-2] - tm.assert_numpy_array_equal(index_a == item, expected3) - tm.assert_numpy_array_equal(series_a == item, expected3) - - def test_numpy_ufuncs(self): - # test ufuncs of numpy 1.9.2. see: - # http://docs.scipy.org/doc/numpy/reference/ufuncs.html - - # some functions are skipped because it may return different result - # for unicode input depending on numpy version - - for name, idx in compat.iteritems(self.indices): - for func in [np.exp, np.exp2, np.expm1, np.log, np.log2, np.log10, - np.log1p, np.sqrt, np.sin, np.cos, np.tan, np.arcsin, - np.arccos, np.arctan, np.sinh, np.cosh, np.tanh, - np.arcsinh, np.arccosh, np.arctanh, np.deg2rad, - np.rad2deg]: - if isinstance(idx, pd.tseries.base.DatetimeIndexOpsMixin): - # raise TypeError or ValueError (PeriodIndex) - # PeriodIndex behavior should be changed in future version - with tm.assertRaises(Exception): - func(idx) - elif isinstance(idx, (Float64Index, Int64Index)): - # coerces to float (e.g. 
np.sin) - result = func(idx) - exp = Index(func(idx.values), name=idx.name) - self.assert_index_equal(result, exp) - self.assertIsInstance(result, pd.Float64Index) - else: - # raise AttributeError or TypeError - if len(idx) == 0: - continue - else: - with tm.assertRaises(Exception): - func(idx) - - for func in [np.isfinite, np.isinf, np.isnan, np.signbit]: - if isinstance(idx, pd.tseries.base.DatetimeIndexOpsMixin): - # raise TypeError or ValueError (PeriodIndex) - with tm.assertRaises(Exception): - func(idx) - elif isinstance(idx, (Float64Index, Int64Index)): - # results in bool array - result = func(idx) - exp = func(idx.values) - self.assertIsInstance(result, np.ndarray) - tm.assertNotIsInstance(result, Index) - else: - if len(idx) == 0: - continue - else: - with tm.assertRaises(Exception): - func(idx) - - def test_hasnans_isnans(self): - # GH 11343, added tests for hasnans / isnans - for name, index in self.indices.items(): - if isinstance(index, MultiIndex): - pass - else: - idx = index.copy() - - # cases in indices doesn't include NaN - expected = np.array([False] * len(idx), dtype=bool) - self.assert_numpy_array_equal(idx._isnan, expected) - self.assertFalse(idx.hasnans) - - idx = index.copy() - values = idx.values - - if len(index) == 0: - continue - elif isinstance(index, pd.tseries.base.DatetimeIndexOpsMixin): - values[1] = pd.tslib.iNaT - elif isinstance(index, Int64Index): - continue - else: - values[1] = np.nan - - if isinstance(index, PeriodIndex): - idx = index.__class__(values, freq=index.freq) - else: - idx = index.__class__(values) - - expected = np.array([False] * len(idx), dtype=bool) - expected[1] = True - self.assert_numpy_array_equal(idx._isnan, expected) - self.assertTrue(idx.hasnans) - - def test_fillna(self): - # GH 11343 - for name, index in self.indices.items(): - if len(index) == 0: - pass - elif isinstance(index, MultiIndex): - idx = index.copy() - msg = "isnull is not defined for MultiIndex" - with 
self.assertRaisesRegexp(NotImplementedError, msg): - idx.fillna(idx[0]) - else: - idx = index.copy() - result = idx.fillna(idx[0]) - self.assert_index_equal(result, idx) - self.assertFalse(result is idx) - - msg = "'value' must be a scalar, passed: " - with self.assertRaisesRegexp(TypeError, msg): - idx.fillna([idx[0]]) - - idx = index.copy() - values = idx.values - - if isinstance(index, pd.tseries.base.DatetimeIndexOpsMixin): - values[1] = pd.tslib.iNaT - elif isinstance(index, Int64Index): - continue - else: - values[1] = np.nan - - if isinstance(index, PeriodIndex): - idx = index.__class__(values, freq=index.freq) - else: - idx = index.__class__(values) - - expected = np.array([False] * len(idx), dtype=bool) - expected[1] = True - self.assert_numpy_array_equal(idx._isnan, expected) - self.assertTrue(idx.hasnans) - - -class TestIndex(Base, tm.TestCase): - _holder = Index - _multiprocess_can_split_ = True - - def setUp(self): - self.indices = dict(unicodeIndex=tm.makeUnicodeIndex(100), - strIndex=tm.makeStringIndex(100), - dateIndex=tm.makeDateIndex(100), - periodIndex=tm.makePeriodIndex(100), - tdIndex=tm.makeTimedeltaIndex(100), - intIndex=tm.makeIntIndex(100), - rangeIndex=tm.makeIntIndex(100), - floatIndex=tm.makeFloatIndex(100), - boolIndex=Index([True, False]), - catIndex=tm.makeCategoricalIndex(100), - empty=Index([]), - tuples=MultiIndex.from_tuples(lzip( - ['foo', 'bar', 'baz'], [1, 2, 3]))) - self.setup_indices() - - def create_index(self): - return Index(list('abcde')) - - def test_new_axis(self): - new_index = self.dateIndex[None, :] - self.assertEqual(new_index.ndim, 2) - tm.assertIsInstance(new_index, np.ndarray) - - def test_copy_and_deepcopy(self): - super(TestIndex, self).test_copy_and_deepcopy() - - new_copy2 = self.intIndex.copy(dtype=int) - self.assertEqual(new_copy2.dtype.kind, 'i') - - def test_constructor(self): - # regular instance creation - tm.assert_contains_all(self.strIndex, self.strIndex) - tm.assert_contains_all(self.dateIndex, 
self.dateIndex) - - # casting - arr = np.array(self.strIndex) - index = Index(arr) - tm.assert_contains_all(arr, index) - tm.assert_numpy_array_equal(self.strIndex, index) - - # copy - arr = np.array(self.strIndex) - index = Index(arr, copy=True, name='name') - tm.assertIsInstance(index, Index) - self.assertEqual(index.name, 'name') - tm.assert_numpy_array_equal(arr, index) - arr[0] = "SOMEBIGLONGSTRING" - self.assertNotEqual(index[0], "SOMEBIGLONGSTRING") - - # what to do here? - # arr = np.array(5.) - # self.assertRaises(Exception, arr.view, Index) - - def test_constructor_corner(self): - # corner case - self.assertRaises(TypeError, Index, 0) - - def test_construction_list_mixed_tuples(self): - # 10697 - # if we are constructing from a mixed list of tuples, make sure that we - # are independent of the sorting order - idx1 = Index([('A', 1), 'B']) - self.assertIsInstance(idx1, Index) and self.assertNotInstance( - idx1, MultiIndex) - idx2 = Index(['B', ('A', 1)]) - self.assertIsInstance(idx2, Index) and self.assertNotInstance( - idx2, MultiIndex) - - def test_constructor_from_series(self): - - expected = DatetimeIndex([Timestamp('20110101'), Timestamp('20120101'), - Timestamp('20130101')]) - s = Series([Timestamp('20110101'), Timestamp('20120101'), Timestamp( - '20130101')]) - result = Index(s) - self.assertTrue(result.equals(expected)) - result = DatetimeIndex(s) - self.assertTrue(result.equals(expected)) - - # GH 6273 - # create from a series, passing a freq - s = Series(pd.to_datetime(['1-1-1990', '2-1-1990', '3-1-1990', - '4-1-1990', '5-1-1990'])) - result = DatetimeIndex(s, freq='MS') - expected = DatetimeIndex( - ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990' - ], freq='MS') - self.assertTrue(result.equals(expected)) - - df = pd.DataFrame(np.random.rand(5, 3)) - df['date'] = ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', - '5-1-1990'] - result = DatetimeIndex(df['date'], freq='MS') - self.assertTrue(result.equals(expected)) - 
self.assertEqual(df['date'].dtype, object) - - exp = pd.Series( - ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990' - ], name='date') - self.assert_series_equal(df['date'], exp) - - # GH 6274 - # infer freq of same - result = pd.infer_freq(df['date']) - self.assertEqual(result, 'MS') - - def test_constructor_ndarray_like(self): - # GH 5460#issuecomment-44474502 - # it should be possible to convert any object that satisfies the numpy - # ndarray interface directly into an Index - class ArrayLike(object): - - def __init__(self, array): - self.array = array - - def __array__(self, dtype=None): - return self.array - - for array in [np.arange(5), np.array(['a', 'b', 'c']), - date_range('2000-01-01', periods=3).values]: - expected = pd.Index(array) - result = pd.Index(ArrayLike(array)) - self.assertTrue(result.equals(expected)) - - def test_index_ctor_infer_periodindex(self): - xp = period_range('2012-1-1', freq='M', periods=3) - rs = Index(xp) - tm.assert_numpy_array_equal(rs, xp) - tm.assertIsInstance(rs, PeriodIndex) - - def test_constructor_simple_new(self): - idx = Index([1, 2, 3, 4, 5], name='int') - result = idx._simple_new(idx, 'int') - self.assertTrue(result.equals(idx)) - - idx = Index([1.1, np.nan, 2.2, 3.0], name='float') - result = idx._simple_new(idx, 'float') - self.assertTrue(result.equals(idx)) - - idx = Index(['A', 'B', 'C', np.nan], name='obj') - result = idx._simple_new(idx, 'obj') - self.assertTrue(result.equals(idx)) - - def test_constructor_dtypes(self): - - for idx in [Index(np.array([1, 2, 3], dtype=int)), Index( - np.array( - [1, 2, 3], dtype=int), dtype=int), Index( - np.array( - [1., 2., 3.], dtype=float), dtype=int), Index( - [1, 2, 3], dtype=int), Index( - [1., 2., 3.], dtype=int)]: - self.assertIsInstance(idx, Int64Index) - - for idx in [Index(np.array([1., 2., 3.], dtype=float)), Index( - np.array( - [1, 2, 3], dtype=int), dtype=float), Index( - np.array( - [1., 2., 3.], dtype=float), dtype=float), Index( - [1, 2, 3], 
dtype=float), Index( - [1., 2., 3.], dtype=float)]: - self.assertIsInstance(idx, Float64Index) - - for idx in [Index(np.array( - [True, False, True], dtype=bool)), Index([True, False, True]), - Index( - np.array( - [True, False, True], dtype=bool), dtype=bool), - Index( - [True, False, True], dtype=bool)]: - self.assertIsInstance(idx, Index) - self.assertEqual(idx.dtype, object) - - for idx in [Index( - np.array([1, 2, 3], dtype=int), dtype='category'), Index( - [1, 2, 3], dtype='category'), Index( - np.array([np.datetime64('2011-01-01'), np.datetime64( - '2011-01-02')]), dtype='category'), Index( - [datetime(2011, 1, 1), datetime(2011, 1, 2) - ], dtype='category')]: - self.assertIsInstance(idx, CategoricalIndex) - - for idx in [Index(np.array([np.datetime64('2011-01-01'), np.datetime64( - '2011-01-02')])), - Index([datetime(2011, 1, 1), datetime(2011, 1, 2)])]: - self.assertIsInstance(idx, DatetimeIndex) - - for idx in [Index( - np.array([np.datetime64('2011-01-01'), np.datetime64( - '2011-01-02')]), dtype=object), Index( - [datetime(2011, 1, 1), datetime(2011, 1, 2) - ], dtype=object)]: - self.assertNotIsInstance(idx, DatetimeIndex) - self.assertIsInstance(idx, Index) - self.assertEqual(idx.dtype, object) - - for idx in [Index(np.array([np.timedelta64(1, 'D'), np.timedelta64( - 1, 'D')])), Index([timedelta(1), timedelta(1)])]: - self.assertIsInstance(idx, TimedeltaIndex) - - for idx in [Index( - np.array([np.timedelta64(1, 'D'), np.timedelta64(1, 'D')]), - dtype=object), Index( - [timedelta(1), timedelta(1)], dtype=object)]: - self.assertNotIsInstance(idx, TimedeltaIndex) - self.assertIsInstance(idx, Index) - self.assertEqual(idx.dtype, object) - - def test_view_with_args(self): - - restricted = ['unicodeIndex', 'strIndex', 'catIndex', 'boolIndex', - 'empty'] - - for i in restricted: - ind = self.indices[i] - - # with arguments - self.assertRaises(TypeError, lambda: ind.view('i8')) - - # these are ok - for i in list(set(self.indices.keys()) - set(restricted)): - 
ind = self.indices[i] - - # with arguments - ind.view('i8') - - def test_legacy_pickle_identity(self): - - # GH 8431 - pth = tm.get_data_path() - s1 = pd.read_pickle(os.path.join(pth, 's1-0.12.0.pickle')) - s2 = pd.read_pickle(os.path.join(pth, 's2-0.12.0.pickle')) - self.assertFalse(s1.index.identical(s2.index)) - self.assertFalse(s1.index.equals(s2.index)) - - def test_astype(self): - casted = self.intIndex.astype('i8') - - # it works! - casted.get_loc(5) - - # pass on name - self.intIndex.name = 'foobar' - casted = self.intIndex.astype('i8') - self.assertEqual(casted.name, 'foobar') - - def test_equals(self): - # same - self.assertTrue(Index(['a', 'b', 'c']).equals(Index(['a', 'b', 'c']))) - - # different length - self.assertFalse(Index(['a', 'b', 'c']).equals(Index(['a', 'b']))) - - # same length, different values - self.assertFalse(Index(['a', 'b', 'c']).equals(Index(['a', 'b', 'd']))) - - # Must also be an Index - self.assertFalse(Index(['a', 'b', 'c']).equals(['a', 'b', 'c'])) - - def test_insert(self): - - # GH 7256 - # validate neg/pos inserts - result = Index(['b', 'c', 'd']) - - # test 0th element - self.assertTrue(Index(['a', 'b', 'c', 'd']).equals(result.insert(0, - 'a'))) - - # test Nth element that follows Python list behavior - self.assertTrue(Index(['b', 'c', 'e', 'd']).equals(result.insert(-1, - 'e'))) - - # test loc +/- neq (0, -1) - self.assertTrue(result.insert(1, 'z').equals(result.insert(-2, 'z'))) - - # test empty - null_index = Index([]) - self.assertTrue(Index(['a']).equals(null_index.insert(0, 'a'))) - - def test_delete(self): - idx = Index(['a', 'b', 'c', 'd'], name='idx') - - expected = Index(['b', 'c', 'd'], name='idx') - result = idx.delete(0) - self.assertTrue(result.equals(expected)) - self.assertEqual(result.name, expected.name) - - expected = Index(['a', 'b', 'c'], name='idx') - result = idx.delete(-1) - self.assertTrue(result.equals(expected)) - self.assertEqual(result.name, expected.name) - - with tm.assertRaises((IndexError, 
ValueError)):
-            # either depending on numpy version
-            result = idx.delete(5)
-
-    def test_identical(self):
-
-        # index
-        i1 = Index(['a', 'b', 'c'])
-        i2 = Index(['a', 'b', 'c'])
-
-        self.assertTrue(i1.identical(i2))
-
-        i1 = i1.rename('foo')
-        self.assertTrue(i1.equals(i2))
-        self.assertFalse(i1.identical(i2))
-
-        i2 = i2.rename('foo')
-        self.assertTrue(i1.identical(i2))
-
-        i3 = Index([('a', 'a'), ('a', 'b'), ('b', 'a')])
-        i4 = Index([('a', 'a'), ('a', 'b'), ('b', 'a')], tupleize_cols=False)
-        self.assertFalse(i3.identical(i4))
-
-    def test_is_(self):
-        ind = Index(range(10))
-        self.assertTrue(ind.is_(ind))
-        self.assertTrue(ind.is_(ind.view().view().view().view()))
-        self.assertFalse(ind.is_(Index(range(10))))
-        self.assertFalse(ind.is_(ind.copy()))
-        self.assertFalse(ind.is_(ind.copy(deep=False)))
-        self.assertFalse(ind.is_(ind[:]))
-        self.assertFalse(ind.is_(ind.view(np.ndarray).view(Index)))
-        self.assertFalse(ind.is_(np.array(range(10))))
-
-        # quasi-implementation dependent
-        self.assertTrue(ind.is_(ind.view()))
-        ind2 = ind.view()
-        ind2.name = 'bob'
-        self.assertTrue(ind.is_(ind2))
-        self.assertTrue(ind2.is_(ind))
-        # doesn't matter if Indices are *actually* views of underlying data,
-        self.assertFalse(ind.is_(Index(ind.values)))
-        arr = np.array(range(1, 11))
-        ind1 = Index(arr, copy=False)
-        ind2 = Index(arr, copy=False)
-        self.assertFalse(ind1.is_(ind2))
-
-    def test_asof(self):
-        d = self.dateIndex[0]
-        self.assertEqual(self.dateIndex.asof(d), d)
-        self.assertTrue(np.isnan(self.dateIndex.asof(d - timedelta(1))))
-
-        d = self.dateIndex[-1]
-        self.assertEqual(self.dateIndex.asof(d + timedelta(1)), d)
-
-        d = self.dateIndex[0].to_datetime()
-        tm.assertIsInstance(self.dateIndex.asof(d), Timestamp)
-
-    def test_asof_datetime_partial(self):
-        idx = pd.date_range('2010-01-01', periods=2, freq='m')
-        expected = Timestamp('2010-02-28')
-        result = idx.asof('2010-02')
-        self.assertEqual(result, expected)
-        self.assertFalse(isinstance(result, Index))
-
-    def
test_nanosecond_index_access(self): - s = Series([Timestamp('20130101')]).values.view('i8')[0] - r = DatetimeIndex([s + 50 + i for i in range(100)]) - x = Series(np.random.randn(100), index=r) - - first_value = x.asof(x.index[0]) - - # this does not yet work, as parsing strings is done via dateutil - # self.assertEqual(first_value, - # x['2013-01-01 00:00:00.000000050+0000']) - - self.assertEqual( - first_value, - x[Timestamp(np.datetime64('2013-01-01 00:00:00.000000050+0000', - 'ns'))]) - - def test_comparators(self): - index = self.dateIndex - element = index[len(index) // 2] - element = _to_m8(element) - - arr = np.array(index) - - def _check(op): - arr_result = op(arr, element) - index_result = op(index, element) - - self.assertIsInstance(index_result, np.ndarray) - tm.assert_numpy_array_equal(arr_result, index_result) - - _check(operator.eq) - _check(operator.ne) - _check(operator.gt) - _check(operator.lt) - _check(operator.ge) - _check(operator.le) - - def test_booleanindex(self): - boolIdx = np.repeat(True, len(self.strIndex)).astype(bool) - boolIdx[5:30:2] = False - - subIndex = self.strIndex[boolIdx] - - for i, val in enumerate(subIndex): - self.assertEqual(subIndex.get_loc(val), i) - - subIndex = self.strIndex[list(boolIdx)] - for i, val in enumerate(subIndex): - self.assertEqual(subIndex.get_loc(val), i) - - def test_fancy(self): - sl = self.strIndex[[1, 2, 3]] - for i in sl: - self.assertEqual(i, sl[sl.get_loc(i)]) - - def test_empty_fancy(self): - empty_farr = np.array([], dtype=np.float_) - empty_iarr = np.array([], dtype=np.int_) - empty_barr = np.array([], dtype=np.bool_) - - # pd.DatetimeIndex is excluded, because it overrides getitem and should - # be tested separately. 
- for idx in [self.strIndex, self.intIndex, self.floatIndex]: - empty_idx = idx.__class__([]) - - self.assertTrue(idx[[]].identical(empty_idx)) - self.assertTrue(idx[empty_iarr].identical(empty_idx)) - self.assertTrue(idx[empty_barr].identical(empty_idx)) - - # np.ndarray only accepts ndarray of int & bool dtypes, so should - # Index. - self.assertRaises(IndexError, idx.__getitem__, empty_farr) - - def test_getitem(self): - arr = np.array(self.dateIndex) - exp = self.dateIndex[5] - exp = _to_m8(exp) - - self.assertEqual(exp, arr[5]) - - def test_intersection(self): - first = self.strIndex[:20] - second = self.strIndex[:10] - intersect = first.intersection(second) - self.assertTrue(tm.equalContents(intersect, second)) - - # Corner cases - inter = first.intersection(first) - self.assertIs(inter, first) - - idx1 = Index([1, 2, 3, 4, 5], name='idx') - # if target has the same name, it is preserved - idx2 = Index([3, 4, 5, 6, 7], name='idx') - expected2 = Index([3, 4, 5], name='idx') - result2 = idx1.intersection(idx2) - self.assertTrue(result2.equals(expected2)) - self.assertEqual(result2.name, expected2.name) - - # if target name is different, it will be reset - idx3 = Index([3, 4, 5, 6, 7], name='other') - expected3 = Index([3, 4, 5], name=None) - result3 = idx1.intersection(idx3) - self.assertTrue(result3.equals(expected3)) - self.assertEqual(result3.name, expected3.name) - - # non monotonic - idx1 = Index([5, 3, 2, 4, 1], name='idx') - idx2 = Index([4, 7, 6, 5, 3], name='idx') - result2 = idx1.intersection(idx2) - self.assertTrue(tm.equalContents(result2, expected2)) - self.assertEqual(result2.name, expected2.name) - - idx3 = Index([4, 7, 6, 5, 3], name='other') - result3 = idx1.intersection(idx3) - self.assertTrue(tm.equalContents(result3, expected3)) - self.assertEqual(result3.name, expected3.name) - - # non-monotonic non-unique - idx1 = Index(['A', 'B', 'A', 'C']) - idx2 = Index(['B', 'D']) - expected = Index(['B'], dtype='object') - result = 
idx1.intersection(idx2) - self.assertTrue(result.equals(expected)) - - def test_union(self): - first = self.strIndex[5:20] - second = self.strIndex[:10] - everything = self.strIndex[:20] - union = first.union(second) - self.assertTrue(tm.equalContents(union, everything)) - - # GH 10149 - cases = [klass(second.values) for klass in [np.array, Series, list]] - for case in cases: - result = first.union(case) - self.assertTrue(tm.equalContents(result, everything)) - - # Corner cases - union = first.union(first) - self.assertIs(union, first) - - union = first.union([]) - self.assertIs(union, first) - - union = Index([]).union(first) - self.assertIs(union, first) - - # preserve names - first.name = 'A' - second.name = 'A' - union = first.union(second) - self.assertEqual(union.name, 'A') - - second.name = 'B' - union = first.union(second) - self.assertIsNone(union.name) - - def test_add(self): - - # - API change GH 8226 - with tm.assert_produces_warning(): - self.strIndex + self.strIndex - with tm.assert_produces_warning(): - self.strIndex + self.strIndex.tolist() - with tm.assert_produces_warning(): - self.strIndex.tolist() + self.strIndex - - with tm.assert_produces_warning(RuntimeWarning): - firstCat = self.strIndex.union(self.dateIndex) - secondCat = self.strIndex.union(self.strIndex) - - if self.dateIndex.dtype == np.object_: - appended = np.append(self.strIndex, self.dateIndex) - else: - appended = np.append(self.strIndex, self.dateIndex.astype('O')) - - self.assertTrue(tm.equalContents(firstCat, appended)) - self.assertTrue(tm.equalContents(secondCat, self.strIndex)) - tm.assert_contains_all(self.strIndex, firstCat) - tm.assert_contains_all(self.strIndex, secondCat) - tm.assert_contains_all(self.dateIndex, firstCat) - - # test add and radd - idx = Index(list('abc')) - expected = Index(['a1', 'b1', 'c1']) - self.assert_index_equal(idx + '1', expected) - expected = Index(['1a', '1b', '1c']) - self.assert_index_equal('1' + idx, expected) - - def 
test_append_multiple(self):
-        index = Index(['a', 'b', 'c', 'd', 'e', 'f'])
-
-        foos = [index[:2], index[2:4], index[4:]]
-        result = foos[0].append(foos[1:])
-        self.assertTrue(result.equals(index))
-
-        # empty
-        result = index.append([])
-        self.assertTrue(result.equals(index))
-
-    def test_append_empty_preserve_name(self):
-        left = Index([], name='foo')
-        right = Index([1, 2, 3], name='foo')
-
-        result = left.append(right)
-        self.assertEqual(result.name, 'foo')
-
-        left = Index([], name='foo')
-        right = Index([1, 2, 3], name='bar')
-
-        result = left.append(right)
-        self.assertIsNone(result.name)
-
-    def test_add_string(self):
-        # from bug report
-        index = Index(['a', 'b', 'c'])
-        index2 = index + 'foo'
-
-        self.assertNotIn('a', index2)
-        self.assertIn('afoo', index2)
-
-    def test_iadd_string(self):
-        index = pd.Index(['a', 'b', 'c'])
-        # doesn't fail test unless there is a check before `+=`
-        self.assertIn('a', index)
-
-        index += '_x'
-        self.assertIn('a_x', index)
-
-    def test_difference(self):
-
-        first = self.strIndex[5:20]
-        second = self.strIndex[:10]
-        answer = self.strIndex[10:20]
-        first.name = 'name'
-        # different names
-        result = first.difference(second)
-
-        self.assertTrue(tm.equalContents(result, answer))
-        self.assertEqual(result.name, None)
-
-        # same names
-        second.name = 'name'
-        result = first.difference(second)
-        self.assertEqual(result.name, 'name')
-
-        # with empty
-        result = first.difference([])
-        self.assertTrue(tm.equalContents(result, first))
-        self.assertEqual(result.name, first.name)
-
-        # with everything
-        result = first.difference(first)
-        self.assertEqual(len(result), 0)
-        self.assertEqual(result.name, first.name)
-
-    def test_symmetric_diff(self):
-        # smoke
-        idx1 = Index([1, 2, 3, 4], name='idx1')
-        idx2 = Index([2, 3, 4, 5])
-        result = idx1.sym_diff(idx2)
-        expected = Index([1, 5])
-        self.assertTrue(tm.equalContents(result, expected))
-        self.assertIsNone(result.name)
-
-        # __xor__ syntax
-        expected = idx1 ^ idx2
-
self.assertTrue(tm.equalContents(result, expected)) - self.assertIsNone(result.name) - - # multiIndex - idx1 = MultiIndex.from_tuples(self.tuples) - idx2 = MultiIndex.from_tuples([('foo', 1), ('bar', 3)]) - result = idx1.sym_diff(idx2) - expected = MultiIndex.from_tuples([('bar', 2), ('baz', 3), ('bar', 3)]) - self.assertTrue(tm.equalContents(result, expected)) - - # nans: - # GH #6444, sorting of nans. Make sure the number of nans is right - # and the correct non-nan values are there. punt on sorting. - idx1 = Index([1, 2, 3, np.nan]) - idx2 = Index([0, 1, np.nan]) - result = idx1.sym_diff(idx2) - # expected = Index([0.0, np.nan, 2.0, 3.0, np.nan]) - - nans = pd.isnull(result) - self.assertEqual(nans.sum(), 1) - self.assertEqual((~nans).sum(), 3) - [self.assertIn(x, result) for x in [0.0, 2.0, 3.0]] - - # other not an Index: - idx1 = Index([1, 2, 3, 4], name='idx1') - idx2 = np.array([2, 3, 4, 5]) - expected = Index([1, 5]) - result = idx1.sym_diff(idx2) - self.assertTrue(tm.equalContents(result, expected)) - self.assertEqual(result.name, 'idx1') - - result = idx1.sym_diff(idx2, result_name='new_name') - self.assertTrue(tm.equalContents(result, expected)) - self.assertEqual(result.name, 'new_name') - - def test_is_numeric(self): - self.assertFalse(self.dateIndex.is_numeric()) - self.assertFalse(self.strIndex.is_numeric()) - self.assertTrue(self.intIndex.is_numeric()) - self.assertTrue(self.floatIndex.is_numeric()) - self.assertFalse(self.catIndex.is_numeric()) - - def test_is_object(self): - self.assertTrue(self.strIndex.is_object()) - self.assertTrue(self.boolIndex.is_object()) - self.assertFalse(self.catIndex.is_object()) - self.assertFalse(self.intIndex.is_object()) - self.assertFalse(self.dateIndex.is_object()) - self.assertFalse(self.floatIndex.is_object()) - - def test_is_all_dates(self): - self.assertTrue(self.dateIndex.is_all_dates) - self.assertFalse(self.strIndex.is_all_dates) - self.assertFalse(self.intIndex.is_all_dates) - - def test_summary(self): - 
self._check_method_works(Index.summary)
-        # GH3869
-        ind = Index(['{other}%s', "~:{range}:0"], name='A')
-        result = ind.summary()
-        # shouldn't be formatted accidentally.
-        self.assertIn('~:{range}:0', result)
-        self.assertIn('{other}%s', result)
-
-    def test_format(self):
-        self._check_method_works(Index.format)
-
-        index = Index([datetime.now()])
-
-        # windows has different precision on datetime.datetime.now (it doesn't
-        # include us); the default repr for Timestamp shows these but Index
-        # formatting does not, so we skip this check on windows
-        if not is_platform_windows():
-            formatted = index.format()
-            expected = [str(index[0])]
-            self.assertEqual(formatted, expected)
-
-        # GH 2845
-        index = Index([1, 2.0 + 3.0j, np.nan])
-        formatted = index.format()
-        expected = [str(index[0]), str(index[1]), u('NaN')]
-        self.assertEqual(formatted, expected)
-
-        # is this really allowed?
-        index = Index([1, 2.0 + 3.0j, None])
-        formatted = index.format()
-        expected = [str(index[0]), str(index[1]), u('NaN')]
-        self.assertEqual(formatted, expected)
-
-        self.strIndex[:0].format()
-
-    def test_format_with_name_time_info(self):
-        # bug I fixed 12/20/2011
-        inc = timedelta(hours=4)
-        dates = Index([dt + inc for dt in self.dateIndex], name='something')
-
-        formatted = dates.format(name=True)
-        self.assertEqual(formatted[0], 'something')
-
-    def test_format_datetime_with_time(self):
-        t = Index([datetime(2012, 2, 7), datetime(2012, 2, 7, 23)])
-
-        result = t.format()
-        expected = ['2012-02-07 00:00:00', '2012-02-07 23:00:00']
-        self.assertEqual(len(result), 2)
-        self.assertEqual(result, expected)
-
-    def test_format_none(self):
-        values = ['a', 'b', 'c', None]
-
-        idx = Index(values)
-        idx.format()
-        self.assertIsNone(idx[3])
-
-    def test_logical_compat(self):
-        idx = self.create_index()
-        self.assertEqual(idx.all(), idx.values.all())
-        self.assertEqual(idx.any(), idx.values.any())
-
-    def _check_method_works(self, method):
-        method(self.empty)
-        method(self.dateIndex)
-        method(self.unicodeIndex)
-
method(self.strIndex) - method(self.intIndex) - method(self.tuples) - method(self.catIndex) - - def test_get_indexer(self): - idx1 = Index([1, 2, 3, 4, 5]) - idx2 = Index([2, 4, 6]) - - r1 = idx1.get_indexer(idx2) - assert_almost_equal(r1, [1, 3, -1]) - - r1 = idx2.get_indexer(idx1, method='pad') - e1 = [-1, 0, 0, 1, 1] - assert_almost_equal(r1, e1) - - r2 = idx2.get_indexer(idx1[::-1], method='pad') - assert_almost_equal(r2, e1[::-1]) - - rffill1 = idx2.get_indexer(idx1, method='ffill') - assert_almost_equal(r1, rffill1) - - r1 = idx2.get_indexer(idx1, method='backfill') - e1 = [0, 0, 1, 1, 2] - assert_almost_equal(r1, e1) - - rbfill1 = idx2.get_indexer(idx1, method='bfill') - assert_almost_equal(r1, rbfill1) - - r2 = idx2.get_indexer(idx1[::-1], method='backfill') - assert_almost_equal(r2, e1[::-1]) - - def test_get_indexer_invalid(self): - # GH10411 - idx = Index(np.arange(10)) - - with tm.assertRaisesRegexp(ValueError, 'tolerance argument'): - idx.get_indexer([1, 0], tolerance=1) - - with tm.assertRaisesRegexp(ValueError, 'limit argument'): - idx.get_indexer([1, 0], limit=1) - - def test_get_indexer_nearest(self): - idx = Index(np.arange(10)) - - all_methods = ['pad', 'backfill', 'nearest'] - for method in all_methods: - actual = idx.get_indexer([0, 5, 9], method=method) - tm.assert_numpy_array_equal(actual, [0, 5, 9]) - - actual = idx.get_indexer([0, 5, 9], method=method, tolerance=0) - tm.assert_numpy_array_equal(actual, [0, 5, 9]) - - for method, expected in zip(all_methods, [[0, 1, 8], [1, 2, 9], [0, 2, - 9]]): - actual = idx.get_indexer([0.2, 1.8, 8.5], method=method) - tm.assert_numpy_array_equal(actual, expected) - - actual = idx.get_indexer([0.2, 1.8, 8.5], method=method, - tolerance=1) - tm.assert_numpy_array_equal(actual, expected) - - for method, expected in zip(all_methods, [[0, -1, -1], [-1, 2, -1], - [0, 2, -1]]): - actual = idx.get_indexer([0.2, 1.8, 8.5], method=method, - tolerance=0.2) - tm.assert_numpy_array_equal(actual, expected) - - with 
tm.assertRaisesRegexp(ValueError, 'limit argument'): - idx.get_indexer([1, 0], method='nearest', limit=1) - - def test_get_indexer_nearest_decreasing(self): - idx = Index(np.arange(10))[::-1] - - all_methods = ['pad', 'backfill', 'nearest'] - for method in all_methods: - actual = idx.get_indexer([0, 5, 9], method=method) - tm.assert_numpy_array_equal(actual, [9, 4, 0]) - - for method, expected in zip(all_methods, [[8, 7, 0], [9, 8, 1], [9, 7, - 0]]): - actual = idx.get_indexer([0.2, 1.8, 8.5], method=method) - tm.assert_numpy_array_equal(actual, expected) - - def test_get_indexer_strings(self): - idx = pd.Index(['b', 'c']) - - actual = idx.get_indexer(['a', 'b', 'c', 'd'], method='pad') - expected = [-1, 0, 1, 1] - tm.assert_numpy_array_equal(actual, expected) - - actual = idx.get_indexer(['a', 'b', 'c', 'd'], method='backfill') - expected = [0, 0, 1, -1] - tm.assert_numpy_array_equal(actual, expected) - - with tm.assertRaises(TypeError): - idx.get_indexer(['a', 'b', 'c', 'd'], method='nearest') - - with tm.assertRaises(TypeError): - idx.get_indexer(['a', 'b', 'c', 'd'], method='pad', tolerance=2) - - def test_get_loc(self): - idx = pd.Index([0, 1, 2]) - all_methods = [None, 'pad', 'backfill', 'nearest'] - for method in all_methods: - self.assertEqual(idx.get_loc(1, method=method), 1) - if method is not None: - self.assertEqual(idx.get_loc(1, method=method, tolerance=0), 1) - with tm.assertRaises(TypeError): - idx.get_loc([1, 2], method=method) - - for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]: - self.assertEqual(idx.get_loc(1.1, method), loc) - - for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]: - self.assertEqual(idx.get_loc(1.1, method, tolerance=1), loc) - - for method in ['pad', 'backfill', 'nearest']: - with tm.assertRaises(KeyError): - idx.get_loc(1.1, method, tolerance=0.05) - - with tm.assertRaisesRegexp(ValueError, 'must be numeric'): - idx.get_loc(1.1, 'nearest', tolerance='invalid') - with 
tm.assertRaisesRegexp(ValueError, 'tolerance .* valid if'): - idx.get_loc(1.1, tolerance=1) - - idx = pd.Index(['a', 'c']) - with tm.assertRaises(TypeError): - idx.get_loc('a', method='nearest') - with tm.assertRaises(TypeError): - idx.get_loc('a', method='pad', tolerance='invalid') - - def test_slice_locs(self): - for dtype in [int, float]: - idx = Index(np.array([0, 1, 2, 5, 6, 7, 9, 10], dtype=dtype)) - n = len(idx) - - self.assertEqual(idx.slice_locs(start=2), (2, n)) - self.assertEqual(idx.slice_locs(start=3), (3, n)) - self.assertEqual(idx.slice_locs(3, 8), (3, 6)) - self.assertEqual(idx.slice_locs(5, 10), (3, n)) - self.assertEqual(idx.slice_locs(end=8), (0, 6)) - self.assertEqual(idx.slice_locs(end=9), (0, 7)) - - # reversed - idx2 = idx[::-1] - self.assertEqual(idx2.slice_locs(8, 2), (2, 6)) - self.assertEqual(idx2.slice_locs(7, 3), (2, 5)) - - # float slicing - idx = Index(np.array([0, 1, 2, 5, 6, 7, 9, 10], dtype=float)) - n = len(idx) - self.assertEqual(idx.slice_locs(5.0, 10.0), (3, n)) - self.assertEqual(idx.slice_locs(4.5, 10.5), (3, 8)) - idx2 = idx[::-1] - self.assertEqual(idx2.slice_locs(8.5, 1.5), (2, 6)) - self.assertEqual(idx2.slice_locs(10.5, -1), (0, n)) - - # int slicing with floats - idx = Index(np.array([0, 1, 2, 5, 6, 7, 9, 10], dtype=int)) - self.assertEqual(idx.slice_locs(5.0, 10.0), (3, n)) - self.assertEqual(idx.slice_locs(4.5, 10.5), (3, 8)) - idx2 = idx[::-1] - self.assertEqual(idx2.slice_locs(8.5, 1.5), (2, 6)) - self.assertEqual(idx2.slice_locs(10.5, -1), (0, n)) - - def test_slice_locs_dup(self): - idx = Index(['a', 'a', 'b', 'c', 'd', 'd']) - self.assertEqual(idx.slice_locs('a', 'd'), (0, 6)) - self.assertEqual(idx.slice_locs(end='d'), (0, 6)) - self.assertEqual(idx.slice_locs('a', 'c'), (0, 4)) - self.assertEqual(idx.slice_locs('b', 'd'), (2, 6)) - - idx2 = idx[::-1] - self.assertEqual(idx2.slice_locs('d', 'a'), (0, 6)) - self.assertEqual(idx2.slice_locs(end='a'), (0, 6)) - self.assertEqual(idx2.slice_locs('d', 'b'), (0, 4)) - 
self.assertEqual(idx2.slice_locs('c', 'a'), (2, 6)) - - for dtype in [int, float]: - idx = Index(np.array([10, 12, 12, 14], dtype=dtype)) - self.assertEqual(idx.slice_locs(12, 12), (1, 3)) - self.assertEqual(idx.slice_locs(11, 13), (1, 3)) - - idx2 = idx[::-1] - self.assertEqual(idx2.slice_locs(12, 12), (1, 3)) - self.assertEqual(idx2.slice_locs(13, 11), (1, 3)) - - def test_slice_locs_na(self): - idx = Index([np.nan, 1, 2]) - self.assertRaises(KeyError, idx.slice_locs, start=1.5) - self.assertRaises(KeyError, idx.slice_locs, end=1.5) - self.assertEqual(idx.slice_locs(1), (1, 3)) - self.assertEqual(idx.slice_locs(np.nan), (0, 3)) - - idx = Index([0, np.nan, np.nan, 1, 2]) - self.assertEqual(idx.slice_locs(np.nan), (1, 5)) - - def test_slice_locs_negative_step(self): - idx = Index(list('bcdxy')) - - SLC = pd.IndexSlice - - def check_slice(in_slice, expected): - s_start, s_stop = idx.slice_locs(in_slice.start, in_slice.stop, - in_slice.step) - result = idx[s_start:s_stop:in_slice.step] - expected = pd.Index(list(expected)) - self.assertTrue(result.equals(expected)) - - for in_slice, expected in [ - (SLC[::-1], 'yxdcb'), (SLC['b':'y':-1], ''), - (SLC['b'::-1], 'b'), (SLC[:'b':-1], 'yxdcb'), - (SLC[:'y':-1], 'y'), (SLC['y'::-1], 'yxdcb'), - (SLC['y'::-4], 'yb'), - # absent labels - (SLC[:'a':-1], 'yxdcb'), (SLC[:'a':-2], 'ydb'), - (SLC['z'::-1], 'yxdcb'), (SLC['z'::-3], 'yc'), - (SLC['m'::-1], 'dcb'), (SLC[:'m':-1], 'yx'), - (SLC['a':'a':-1], ''), (SLC['z':'z':-1], ''), - (SLC['m':'m':-1], '') - ]: - check_slice(in_slice, expected) - - def test_drop(self): - n = len(self.strIndex) - - drop = self.strIndex[lrange(5, 10)] - dropped = self.strIndex.drop(drop) - expected = self.strIndex[lrange(5) + lrange(10, n)] - self.assertTrue(dropped.equals(expected)) - - self.assertRaises(ValueError, self.strIndex.drop, ['foo', 'bar']) - self.assertRaises(ValueError, self.strIndex.drop, ['1', 'bar']) - - # errors='ignore' - mixed = drop.tolist() + ['foo'] - dropped = 
self.strIndex.drop(mixed, errors='ignore') - expected = self.strIndex[lrange(5) + lrange(10, n)] - self.assert_index_equal(dropped, expected) - - dropped = self.strIndex.drop(['foo', 'bar'], errors='ignore') - expected = self.strIndex[lrange(n)] - self.assert_index_equal(dropped, expected) - - dropped = self.strIndex.drop(self.strIndex[0]) - expected = self.strIndex[1:] - self.assert_index_equal(dropped, expected) - - ser = Index([1, 2, 3]) - dropped = ser.drop(1) - expected = Index([2, 3]) - self.assert_index_equal(dropped, expected) - - # errors='ignore' - self.assertRaises(ValueError, ser.drop, [3, 4]) - - dropped = ser.drop(4, errors='ignore') - expected = Index([1, 2, 3]) - self.assert_index_equal(dropped, expected) - - dropped = ser.drop([3, 4, 5], errors='ignore') - expected = Index([1, 2]) - self.assert_index_equal(dropped, expected) - - def test_tuple_union_bug(self): - import pandas - import numpy as np - - aidx1 = np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')], - dtype=[('num', int), ('let', 'a1')]) - aidx2 = np.array([(1, 'A'), (2, 'A'), (1, 'B'), - (2, 'B'), (1, 'C'), (2, 'C')], - dtype=[('num', int), ('let', 'a1')]) - - idx1 = pandas.Index(aidx1) - idx2 = pandas.Index(aidx2) - - # intersection broken? 
- int_idx = idx1.intersection(idx2) - # needs to be 1d like idx1 and idx2 - expected = idx1[:4] # pandas.Index(sorted(set(idx1) & set(idx2))) - self.assertEqual(int_idx.ndim, 1) - self.assertTrue(int_idx.equals(expected)) - - # union broken - union_idx = idx1.union(idx2) - expected = idx2 - self.assertEqual(union_idx.ndim, 1) - self.assertTrue(union_idx.equals(expected)) - - def test_is_monotonic_incomparable(self): - index = Index([5, datetime.now(), 7]) - self.assertFalse(index.is_monotonic) - self.assertFalse(index.is_monotonic_decreasing) - - def test_get_set_value(self): - values = np.random.randn(100) - date = self.dateIndex[67] - - assert_almost_equal(self.dateIndex.get_value(values, date), values[67]) - - self.dateIndex.set_value(values, date, 10) - self.assertEqual(values[67], 10) - - def test_isin(self): - values = ['foo', 'bar', 'quux'] - - idx = Index(['qux', 'baz', 'foo', 'bar']) - result = idx.isin(values) - expected = np.array([False, False, True, True]) - tm.assert_numpy_array_equal(result, expected) - - # empty, return dtype bool - idx = Index([]) - result = idx.isin(values) - self.assertEqual(len(result), 0) - self.assertEqual(result.dtype, np.bool_) - - def test_isin_nan(self): - tm.assert_numpy_array_equal( - Index(['a', np.nan]).isin([np.nan]), [False, True]) - tm.assert_numpy_array_equal( - Index(['a', pd.NaT]).isin([pd.NaT]), [False, True]) - tm.assert_numpy_array_equal( - Index(['a', np.nan]).isin([float('nan')]), [False, False]) - tm.assert_numpy_array_equal( - Index(['a', np.nan]).isin([pd.NaT]), [False, False]) - # Float64Index overrides isin, so must be checked separately - tm.assert_numpy_array_equal( - Float64Index([1.0, np.nan]).isin([np.nan]), [False, True]) - tm.assert_numpy_array_equal( - Float64Index([1.0, np.nan]).isin([float('nan')]), [False, True]) - tm.assert_numpy_array_equal( - Float64Index([1.0, np.nan]).isin([pd.NaT]), [False, True]) - - def test_isin_level_kwarg(self): - def check_idx(idx): - values = idx.tolist()[-2:] + 
['nonexisting'] - - expected = np.array([False, False, True, True]) - tm.assert_numpy_array_equal(expected, idx.isin(values, level=0)) - tm.assert_numpy_array_equal(expected, idx.isin(values, level=-1)) - - self.assertRaises(IndexError, idx.isin, values, level=1) - self.assertRaises(IndexError, idx.isin, values, level=10) - self.assertRaises(IndexError, idx.isin, values, level=-2) - - self.assertRaises(KeyError, idx.isin, values, level=1.0) - self.assertRaises(KeyError, idx.isin, values, level='foobar') - - idx.name = 'foobar' - tm.assert_numpy_array_equal(expected, - idx.isin(values, level='foobar')) - - self.assertRaises(KeyError, idx.isin, values, level='xyzzy') - self.assertRaises(KeyError, idx.isin, values, level=np.nan) - - check_idx(Index(['qux', 'baz', 'foo', 'bar'])) - # Float64Index overrides isin, so must be checked separately - check_idx(Float64Index([1.0, 2.0, 3.0, 4.0])) - - def test_boolean_cmp(self): - values = [1, 2, 3, 4] - - idx = Index(values) - res = (idx == values) - - tm.assert_numpy_array_equal(res, np.array( - [True, True, True, True], dtype=bool)) - - def test_get_level_values(self): - result = self.strIndex.get_level_values(0) - self.assertTrue(result.equals(self.strIndex)) - - def test_slice_keep_name(self): - idx = Index(['a', 'b'], name='asdf') - self.assertEqual(idx.name, idx[1:].name) - - def test_join_self(self): - # instance attributes of the form self.<name>Index - indices = 'unicode', 'str', 'date', 'int', 'float' - kinds = 'outer', 'inner', 'left', 'right' - for index_kind in indices: - res = getattr(self, '{0}Index'.format(index_kind)) - - for kind in kinds: - joined = res.join(res, how=kind) - self.assertIs(res, joined) - - def test_str_attribute(self): - # GH9068 - methods = ['strip', 'rstrip', 'lstrip'] - idx = Index([' jack', 'jill ', ' jesse ', 'frank']) - for method in methods: - expected = Index([getattr(str, method)(x) for x in idx.values]) - tm.assert_index_equal( - getattr(Index.str, method)(idx.str), expected) - - # 
create a few instances that are not able to use .str accessor - indices = [Index(range(5)), tm.makeDateIndex(10), - MultiIndex.from_tuples([('foo', '1'), ('bar', '3')]), - PeriodIndex(start='2000', end='2010', freq='A')] - for idx in indices: - with self.assertRaisesRegexp(AttributeError, - 'only use .str accessor'): - idx.str.repeat(2) - - idx = Index(['a b c', 'd e', 'f']) - expected = Index([['a', 'b', 'c'], ['d', 'e'], ['f']]) - tm.assert_index_equal(idx.str.split(), expected) - tm.assert_index_equal(idx.str.split(expand=False), expected) - - expected = MultiIndex.from_tuples([('a', 'b', 'c'), ('d', 'e', np.nan), - ('f', np.nan, np.nan)]) - tm.assert_index_equal(idx.str.split(expand=True), expected) - - # test boolean case, should return np.array instead of boolean Index - idx = Index(['a1', 'a2', 'b1', 'b2']) - expected = np.array([True, True, False, False]) - tm.assert_numpy_array_equal(idx.str.startswith('a'), expected) - self.assertIsInstance(idx.str.startswith('a'), np.ndarray) - s = Series(range(4), index=idx) - expected = Series(range(2), index=['a1', 'a2']) - tm.assert_series_equal(s[s.index.str.startswith('a')], expected) - - def test_tab_completion(self): - # GH 9910 - idx = Index(list('abcd')) - self.assertTrue('str' in dir(idx)) - - idx = Index(range(4)) - self.assertTrue('str' not in dir(idx)) - - def test_indexing_doesnt_change_class(self): - idx = Index([1, 2, 3, 'a', 'b', 'c']) - - self.assertTrue(idx[1:3].identical(pd.Index([2, 3], dtype=np.object_))) - self.assertTrue(idx[[0, 1]].identical(pd.Index( - [1, 2], dtype=np.object_))) - - def test_outer_join_sort(self): - left_idx = Index(np.random.permutation(15)) - right_idx = tm.makeDateIndex(10) - - with tm.assert_produces_warning(RuntimeWarning): - joined = left_idx.join(right_idx, how='outer') - - # right_idx in this case because DatetimeIndex has join precedence over - # Int64Index - with tm.assert_produces_warning(RuntimeWarning): - expected = 
right_idx.astype(object).union(left_idx.astype(object)) - tm.assert_index_equal(joined, expected) - - def test_nan_first_take_datetime(self): - idx = Index([pd.NaT, Timestamp('20130101'), Timestamp('20130102')]) - res = idx.take([-1, 0, 1]) - exp = Index([idx[-1], idx[0], idx[1]]) - tm.assert_index_equal(res, exp) - - def test_reindex_preserves_name_if_target_is_list_or_ndarray(self): - # GH6552 - idx = pd.Index([0, 1, 2]) - - dt_idx = pd.date_range('20130101', periods=3) - - idx.name = None - self.assertEqual(idx.reindex([])[0].name, None) - self.assertEqual(idx.reindex(np.array([]))[0].name, None) - self.assertEqual(idx.reindex(idx.tolist())[0].name, None) - self.assertEqual(idx.reindex(idx.tolist()[:-1])[0].name, None) - self.assertEqual(idx.reindex(idx.values)[0].name, None) - self.assertEqual(idx.reindex(idx.values[:-1])[0].name, None) - - # Must preserve name even if dtype changes. - self.assertEqual(idx.reindex(dt_idx.values)[0].name, None) - self.assertEqual(idx.reindex(dt_idx.tolist())[0].name, None) - - idx.name = 'foobar' - self.assertEqual(idx.reindex([])[0].name, 'foobar') - self.assertEqual(idx.reindex(np.array([]))[0].name, 'foobar') - self.assertEqual(idx.reindex(idx.tolist())[0].name, 'foobar') - self.assertEqual(idx.reindex(idx.tolist()[:-1])[0].name, 'foobar') - self.assertEqual(idx.reindex(idx.values)[0].name, 'foobar') - self.assertEqual(idx.reindex(idx.values[:-1])[0].name, 'foobar') - - # Must preserve name even if dtype changes. 
- self.assertEqual(idx.reindex(dt_idx.values)[0].name, 'foobar') - self.assertEqual(idx.reindex(dt_idx.tolist())[0].name, 'foobar') - - def test_reindex_preserves_type_if_target_is_empty_list_or_array(self): - # GH7774 - idx = pd.Index(list('abc')) - - def get_reindex_type(target): - return idx.reindex(target)[0].dtype.type - - self.assertEqual(get_reindex_type([]), np.object_) - self.assertEqual(get_reindex_type(np.array([])), np.object_) - self.assertEqual(get_reindex_type(np.array([], dtype=np.int64)), - np.object_) - - def test_reindex_doesnt_preserve_type_if_target_is_empty_index(self): - # GH7774 - idx = pd.Index(list('abc')) - - def get_reindex_type(target): - return idx.reindex(target)[0].dtype.type - - self.assertEqual(get_reindex_type(pd.Int64Index([])), np.int64) - self.assertEqual(get_reindex_type(pd.Float64Index([])), np.float64) - self.assertEqual(get_reindex_type(pd.DatetimeIndex([])), np.datetime64) - - reindexed = idx.reindex(pd.MultiIndex( - [pd.Int64Index([]), pd.Float64Index([])], [[], []]))[0] - self.assertEqual(reindexed.levels[0].dtype.type, np.int64) - self.assertEqual(reindexed.levels[1].dtype.type, np.float64) - - def test_groupby(self): - idx = Index(range(5)) - groups = idx.groupby(np.array([1, 1, 2, 2, 2])) - exp = {1: [0, 1], 2: [2, 3, 4]} - tm.assert_dict_equal(groups, exp) - - def test_equals_op_multiindex(self): - # GH9785 - # test comparisons of multiindex - from pandas.compat import StringIO - df = pd.read_csv(StringIO('a,b,c\n1,2,3\n4,5,6'), index_col=[0, 1]) - tm.assert_numpy_array_equal(df.index == df.index, - np.array([True, True])) - - mi1 = MultiIndex.from_tuples([(1, 2), (4, 5)]) - tm.assert_numpy_array_equal(df.index == mi1, np.array([True, True])) - mi2 = MultiIndex.from_tuples([(1, 2), (4, 6)]) - tm.assert_numpy_array_equal(df.index == mi2, np.array([True, False])) - mi3 = MultiIndex.from_tuples([(1, 2), (4, 5), (8, 9)]) - with tm.assertRaisesRegexp(ValueError, "Lengths must match"): - df.index == mi3 - - index_a = 
Index(['foo', 'bar', 'baz'])
- with tm.assertRaisesRegexp(ValueError, "Lengths must match"):
- df.index == index_a
- tm.assert_numpy_array_equal(index_a == mi3,
- np.array([False, False, False]))
-
- def test_conversion_preserves_name(self):
- # GH 10875
- i = pd.Index(['01:02:03', '01:02:04'], name='label')
- self.assertEqual(i.name, pd.to_datetime(i).name)
- self.assertEqual(i.name, pd.to_timedelta(i).name)
-
- def test_string_index_repr(self):
- # py3/py2 repr can differ because of "u" prefix
- # which also affects the displayed element size
-
- # short
- idx = pd.Index(['a', 'bb', 'ccc'])
- if PY3:
- expected = u"""Index(['a', 'bb', 'ccc'], dtype='object')"""
- self.assertEqual(repr(idx), expected)
- else:
- expected = u"""Index([u'a', u'bb', u'ccc'], dtype='object')"""
- self.assertEqual(unicode(idx), expected)
-
- # multiple lines
- idx = pd.Index(['a', 'bb', 'ccc'] * 10)
- if PY3:
- expected = u"""\
-Index(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc',
- 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc',
- 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'],
- dtype='object')"""
-
- self.assertEqual(repr(idx), expected)
- else:
- expected = u"""\
-Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a',
- u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb',
- u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'],
- dtype='object')"""
-
- self.assertEqual(unicode(idx), expected)
-
- # truncated
- idx = pd.Index(['a', 'bb', 'ccc'] * 100)
- if PY3:
- expected = u"""\
-Index(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a',
- ...
- 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'],
- dtype='object', length=300)"""
-
- self.assertEqual(repr(idx), expected)
- else:
- expected = u"""\
-Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a',
- ... 
- u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'], - dtype='object', length=300)""" - - self.assertEqual(unicode(idx), expected) - - # short - idx = pd.Index([u'あ', u'いい', u'ううう']) - if PY3: - expected = u"""Index(['あ', 'いい', 'ううう'], dtype='object')""" - self.assertEqual(repr(idx), expected) - else: - expected = u"""\ -Index([u'あ', u'いい', u'ううう'], dtype='object')""" - self.assertEqual(unicode(idx), expected) - - # multiple lines - idx = pd.Index([u'あ', u'いい', u'ううう'] * 10) - if PY3: - expected = u"""Index(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', - 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', - 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], - dtype='object')""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', - u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], - dtype='object')""" - - self.assertEqual(unicode(idx), expected) - - # truncated - idx = pd.Index([u'あ', u'いい', u'ううう'] * 100) - if PY3: - expected = u"""Index(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', - ... - 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], - dtype='object', length=300)""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - ... 
- u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], - dtype='object', length=300)""" - - self.assertEqual(unicode(idx), expected) - - # Emable Unicode option ----------------------------------------- - with cf.option_context('display.unicode.east_asian_width', True): - - # short - idx = pd.Index([u'あ', u'いい', u'ううう']) - if PY3: - expected = u"""Index(['あ', 'いい', 'ううう'], dtype='object')""" - self.assertEqual(repr(idx), expected) - else: - expected = u"""Index([u'あ', u'いい', u'ううう'], dtype='object')""" - self.assertEqual(unicode(idx), expected) - - # multiple lines - idx = pd.Index([u'あ', u'いい', u'ううう'] * 10) - if PY3: - expected = u"""Index(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', - 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', - 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', - 'あ', 'いい', 'ううう'], - dtype='object')""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', - u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', - u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], - dtype='object')""" - - self.assertEqual(unicode(idx), expected) - - # truncated - idx = pd.Index([u'あ', u'いい', u'ううう'] * 100) - if PY3: - expected = u"""Index(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', - 'あ', - ... - 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', - 'ううう'], - dtype='object', length=300)""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', - u'ううう', u'あ', - ... 
- u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう'], - dtype='object', length=300)""" - - self.assertEqual(unicode(idx), expected) - - -class TestCategoricalIndex(Base, tm.TestCase): - _holder = CategoricalIndex - - def setUp(self): - self.indices = dict(catIndex=tm.makeCategoricalIndex(100)) - self.setup_indices() - - def create_index(self, categories=None, ordered=False): - if categories is None: - categories = list('cab') - return CategoricalIndex( - list('aabbca'), categories=categories, ordered=ordered) - - def test_construction(self): - - ci = self.create_index(categories=list('abcd')) - categories = ci.categories - - result = Index(ci) - tm.assert_index_equal(result, ci, exact=True) - self.assertFalse(result.ordered) - - result = Index(ci.values) - tm.assert_index_equal(result, ci, exact=True) - self.assertFalse(result.ordered) - - # empty - result = CategoricalIndex(categories=categories) - self.assertTrue(result.categories.equals(Index(categories))) - tm.assert_numpy_array_equal(result.codes, np.array([], dtype='int8')) - self.assertFalse(result.ordered) - - # passing categories - result = CategoricalIndex(list('aabbca'), categories=categories) - self.assertTrue(result.categories.equals(Index(categories))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, 2, 0], dtype='int8')) - - c = pd.Categorical(list('aabbca')) - result = CategoricalIndex(c) - self.assertTrue(result.categories.equals(Index(list('abc')))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, 2, 0], dtype='int8')) - self.assertFalse(result.ordered) - - result = CategoricalIndex(c, categories=categories) - self.assertTrue(result.categories.equals(Index(categories))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, 2, 0], dtype='int8')) - self.assertFalse(result.ordered) - - ci = CategoricalIndex(c, categories=list('abcd')) - result = CategoricalIndex(ci) - 
self.assertTrue(result.categories.equals(Index(categories))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, 2, 0], dtype='int8')) - self.assertFalse(result.ordered) - - result = CategoricalIndex(ci, categories=list('ab')) - self.assertTrue(result.categories.equals(Index(list('ab')))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, -1, 0], dtype='int8')) - self.assertFalse(result.ordered) - - result = CategoricalIndex(ci, categories=list('ab'), ordered=True) - self.assertTrue(result.categories.equals(Index(list('ab')))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, -1, 0], dtype='int8')) - self.assertTrue(result.ordered) - - # turn me to an Index - result = Index(np.array(ci)) - self.assertIsInstance(result, Index) - self.assertNotIsInstance(result, CategoricalIndex) - - def test_construction_with_dtype(self): - - # specify dtype - ci = self.create_index(categories=list('abc')) - - result = Index(np.array(ci), dtype='category') - tm.assert_index_equal(result, ci, exact=True) - - result = Index(np.array(ci).tolist(), dtype='category') - tm.assert_index_equal(result, ci, exact=True) - - # these are generally only equal when the categories are reordered - ci = self.create_index() - - result = Index( - np.array(ci), dtype='category').reorder_categories(ci.categories) - tm.assert_index_equal(result, ci, exact=True) - - # make sure indexes are handled - expected = CategoricalIndex([0, 1, 2], categories=[0, 1, 2], - ordered=True) - idx = Index(range(3)) - result = CategoricalIndex(idx, categories=idx, ordered=True) - tm.assert_index_equal(result, expected, exact=True) - - def test_disallow_set_ops(self): - - # GH 10039 - # set ops (+/-) raise TypeError - idx = pd.Index(pd.Categorical(['a', 'b'])) - - self.assertRaises(TypeError, lambda: idx - idx) - self.assertRaises(TypeError, lambda: idx + idx) - self.assertRaises(TypeError, lambda: idx - ['a', 'b']) - self.assertRaises(TypeError, lambda: idx + ['a', 'b']) 
- self.assertRaises(TypeError, lambda: ['a', 'b'] - idx) - self.assertRaises(TypeError, lambda: ['a', 'b'] + idx) - - def test_method_delegation(self): - - ci = CategoricalIndex(list('aabbca'), categories=list('cabdef')) - result = ci.set_categories(list('cab')) - tm.assert_index_equal(result, CategoricalIndex( - list('aabbca'), categories=list('cab'))) - - ci = CategoricalIndex(list('aabbca'), categories=list('cab')) - result = ci.rename_categories(list('efg')) - tm.assert_index_equal(result, CategoricalIndex( - list('ffggef'), categories=list('efg'))) - - ci = CategoricalIndex(list('aabbca'), categories=list('cab')) - result = ci.add_categories(['d']) - tm.assert_index_equal(result, CategoricalIndex( - list('aabbca'), categories=list('cabd'))) - - ci = CategoricalIndex(list('aabbca'), categories=list('cab')) - result = ci.remove_categories(['c']) - tm.assert_index_equal(result, CategoricalIndex( - list('aabb') + [np.nan] + ['a'], categories=list('ab'))) - - ci = CategoricalIndex(list('aabbca'), categories=list('cabdef')) - result = ci.as_unordered() - tm.assert_index_equal(result, ci) - - ci = CategoricalIndex(list('aabbca'), categories=list('cabdef')) - result = ci.as_ordered() - tm.assert_index_equal(result, CategoricalIndex( - list('aabbca'), categories=list('cabdef'), ordered=True)) - - # invalid - self.assertRaises(ValueError, lambda: ci.set_categories( - list('cab'), inplace=True)) - - def test_contains(self): - - ci = self.create_index(categories=list('cabdef')) - - self.assertTrue('a' in ci) - self.assertTrue('z' not in ci) - self.assertTrue('e' not in ci) - self.assertTrue(np.nan not in ci) - - # assert codes NOT in index - self.assertFalse(0 in ci) - self.assertFalse(1 in ci) - - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - ci = CategoricalIndex( - list('aabbca'), categories=list('cabdef') + [np.nan]) - self.assertFalse(np.nan in ci) - - ci = CategoricalIndex( - list('aabbca') + [np.nan], categories=list('cabdef')) - 
self.assertTrue(np.nan in ci)
-
- def test_min_max(self):
-
- ci = self.create_index(ordered=False)
- self.assertRaises(TypeError, lambda: ci.min())
- self.assertRaises(TypeError, lambda: ci.max())
-
- ci = self.create_index(ordered=True)
-
- self.assertEqual(ci.min(), 'c')
- self.assertEqual(ci.max(), 'b')
-
- def test_append(self):
-
- ci = self.create_index()
- categories = ci.categories
-
- # append cats with the same categories
- result = ci[:3].append(ci[3:])
- tm.assert_index_equal(result, ci, exact=True)
-
- foos = [ci[:1], ci[1:3], ci[3:]]
- result = foos[0].append(foos[1:])
- tm.assert_index_equal(result, ci, exact=True)
-
- # empty
- result = ci.append([])
- tm.assert_index_equal(result, ci, exact=True)
-
- # appending with different categories or reordered is not ok
- self.assertRaises(
- TypeError,
- lambda: ci.append(ci.values.set_categories(list('abcd'))))
- self.assertRaises(
- TypeError,
- lambda: ci.append(ci.values.reorder_categories(list('abc'))))
-
- # with objects
- result = ci.append(['c', 'a'])
- expected = CategoricalIndex(list('aabbcaca'), categories=categories)
- tm.assert_index_equal(result, expected, exact=True)
-
- # invalid objects
- self.assertRaises(TypeError, lambda: ci.append(['a', 'd']))
-
- def test_insert(self):
-
- ci = self.create_index()
- categories = ci.categories
-
- # test 0th element
- result = ci.insert(0, 'a')
- expected = CategoricalIndex(list('aaabbca'), categories=categories)
- tm.assert_index_equal(result, expected, exact=True)
-
- # test Nth element that follows Python list behavior
- result = ci.insert(-1, 'a')
- expected = CategoricalIndex(list('aabbcaa'), categories=categories)
- tm.assert_index_equal(result, expected, exact=True)
-
- # test empty
- result = CategoricalIndex(categories=categories).insert(0, 'a')
- expected = CategoricalIndex(['a'], categories=categories)
- tm.assert_index_equal(result, expected, exact=True)
-
- # invalid
- self.assertRaises(TypeError, lambda: ci.insert(0, 'd'))
-
- def 
test_delete(self):
-
- ci = self.create_index()
- categories = ci.categories
-
- result = ci.delete(0)
- expected = CategoricalIndex(list('abbca'), categories=categories)
- tm.assert_index_equal(result, expected, exact=True)
-
- result = ci.delete(-1)
- expected = CategoricalIndex(list('aabbc'), categories=categories)
- tm.assert_index_equal(result, expected, exact=True)
-
- with tm.assertRaises((IndexError, ValueError)):
- # either depending on numpy version
- result = ci.delete(10)
-
- def test_astype(self):
-
- ci = self.create_index()
- result = ci.astype('category')
- tm.assert_index_equal(result, ci, exact=True)
-
- result = ci.astype(object)
- self.assertTrue(result.equals(Index(np.array(ci))))
-
- # this IS equal, but not the same class
- self.assertTrue(result.equals(ci))
- self.assertIsInstance(result, Index)
- self.assertNotIsInstance(result, CategoricalIndex)
-
- def test_reindex_base(self):
-
- # determined by cat ordering
- idx = self.create_index()
- expected = np.array([4, 0, 1, 5, 2, 3])
-
- actual = idx.get_indexer(idx)
- tm.assert_numpy_array_equal(expected, actual)
-
- with tm.assertRaisesRegexp(ValueError, 'Invalid fill method'):
- idx.get_indexer(idx, method='invalid')
-
- def test_reindexing(self):
-
- ci = self.create_index()
- oidx = Index(np.array(ci))
-
- for n in [1, 2, 5, len(ci)]:
- finder = oidx[np.random.randint(0, len(ci), size=n)]
- expected = oidx.get_indexer_non_unique(finder)[0]
-
- actual = ci.get_indexer(finder)
- tm.assert_numpy_array_equal(expected, actual)
-
- def test_reindex_dtype(self):
- res, indexer = CategoricalIndex(['a', 'b', 'c', 'a']).reindex(['a', 'c'
- ])
- tm.assert_index_equal(res, Index(['a', 'a', 'c']), exact=True)
- tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2]))
-
- res, indexer = CategoricalIndex(['a', 'b', 'c', 'a']).reindex(
- Categorical(['a', 'c']))
- tm.assert_index_equal(res, CategoricalIndex(
- ['a', 'a', 'c'], categories=['a', 'c']), exact=True)
- tm.assert_numpy_array_equal(indexer, 
np.array([0, 3, 2])) - - res, indexer = CategoricalIndex( - ['a', 'b', 'c', 'a' - ], categories=['a', 'b', 'c', 'd']).reindex(['a', 'c']) - tm.assert_index_equal(res, Index( - ['a', 'a', 'c'], dtype='object'), exact=True) - tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2])) - - res, indexer = CategoricalIndex( - ['a', 'b', 'c', 'a'], - categories=['a', 'b', 'c', 'd']).reindex(Categorical(['a', 'c'])) - tm.assert_index_equal(res, CategoricalIndex( - ['a', 'a', 'c'], categories=['a', 'c']), exact=True) - tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2])) - - def test_duplicates(self): - - idx = CategoricalIndex([0, 0, 0], name='foo') - self.assertFalse(idx.is_unique) - self.assertTrue(idx.has_duplicates) - - expected = CategoricalIndex([0], name='foo') - self.assert_index_equal(idx.drop_duplicates(), expected) - - def test_get_indexer(self): - - idx1 = CategoricalIndex(list('aabcde'), categories=list('edabc')) - idx2 = CategoricalIndex(list('abf')) - - for indexer in [idx2, list('abf'), Index(list('abf'))]: - r1 = idx1.get_indexer(idx2) - assert_almost_equal(r1, [0, 1, 2, -1]) - - self.assertRaises(NotImplementedError, - lambda: idx2.get_indexer(idx1, method='pad')) - self.assertRaises(NotImplementedError, - lambda: idx2.get_indexer(idx1, method='backfill')) - self.assertRaises(NotImplementedError, - lambda: idx2.get_indexer(idx1, method='nearest')) - - def test_repr_roundtrip(self): - - ci = CategoricalIndex(['a', 'b'], categories=['a', 'b'], ordered=True) - str(ci) - tm.assert_index_equal(eval(repr(ci)), ci, exact=True) - - # formatting - if PY3: - str(ci) - else: - compat.text_type(ci) - - # long format - # this is not reprable - ci = CategoricalIndex(np.random.randint(0, 5, size=100)) - if PY3: - str(ci) - else: - compat.text_type(ci) - - def test_isin(self): - - ci = CategoricalIndex( - list('aabca') + [np.nan], categories=['c', 'a', 'b']) - tm.assert_numpy_array_equal( - ci.isin(['c']), - np.array([False, False, False, True, False, False])) - 
tm.assert_numpy_array_equal( - ci.isin(['c', 'a', 'b']), np.array([True] * 5 + [False])) - tm.assert_numpy_array_equal( - ci.isin(['c', 'a', 'b', np.nan]), np.array([True] * 6)) - - # mismatched categorical -> coerced to ndarray so doesn't matter - tm.assert_numpy_array_equal( - ci.isin(ci.set_categories(list('abcdefghi'))), np.array([True] * - 6)) - tm.assert_numpy_array_equal( - ci.isin(ci.set_categories(list('defghi'))), - np.array([False] * 5 + [True])) - - def test_identical(self): - - ci1 = CategoricalIndex(['a', 'b'], categories=['a', 'b'], ordered=True) - ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'], - ordered=True) - self.assertTrue(ci1.identical(ci1)) - self.assertTrue(ci1.identical(ci1.copy())) - self.assertFalse(ci1.identical(ci2)) - - def test_equals(self): - - ci1 = CategoricalIndex(['a', 'b'], categories=['a', 'b'], ordered=True) - ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'], - ordered=True) - - self.assertTrue(ci1.equals(ci1)) - self.assertFalse(ci1.equals(ci2)) - self.assertTrue(ci1.equals(ci1.astype(object))) - self.assertTrue(ci1.astype(object).equals(ci1)) - - self.assertTrue((ci1 == ci1).all()) - self.assertFalse((ci1 != ci1).all()) - self.assertFalse((ci1 > ci1).all()) - self.assertFalse((ci1 < ci1).all()) - self.assertTrue((ci1 <= ci1).all()) - self.assertTrue((ci1 >= ci1).all()) - - self.assertFalse((ci1 == 1).all()) - self.assertTrue((ci1 == Index(['a', 'b'])).all()) - self.assertTrue((ci1 == ci1.values).all()) - - # invalid comparisons - with tm.assertRaisesRegexp(ValueError, "Lengths must match"): - ci1 == Index(['a', 'b', 'c']) - self.assertRaises(TypeError, lambda: ci1 == ci2) - self.assertRaises( - TypeError, lambda: ci1 == Categorical(ci1.values, ordered=False)) - self.assertRaises( - TypeError, - lambda: ci1 == Categorical(ci1.values, categories=list('abc'))) - - # tests - # make sure that we are testing for category inclusion properly - self.assertTrue(CategoricalIndex( - list('aabca'), 
categories=['c', 'a', 'b']).equals(list('aabca'))) - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - self.assertTrue(CategoricalIndex( - list('aabca'), categories=['c', 'a', 'b', np.nan]).equals(list( - 'aabca'))) - - self.assertFalse(CategoricalIndex( - list('aabca') + [np.nan], categories=['c', 'a', 'b']).equals(list( - 'aabca'))) - self.assertTrue(CategoricalIndex( - list('aabca') + [np.nan], categories=['c', 'a', 'b']).equals(list( - 'aabca') + [np.nan])) - - def test_string_categorical_index_repr(self): - # short - idx = pd.CategoricalIndex(['a', 'bb', 'ccc']) - if PY3: - expected = u"""CategoricalIndex(['a', 'bb', 'ccc'], categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')""" - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'a', u'bb', u'ccc'], categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category')""" - self.assertEqual(unicode(idx), expected) - - # multiple lines - idx = pd.CategoricalIndex(['a', 'bb', 'ccc'] * 10) - if PY3: - expected = u"""CategoricalIndex(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', - 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', - 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], - categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', - u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', - u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', - u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'], - categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category')""" - - self.assertEqual(unicode(idx), expected) - - # truncated - idx = pd.CategoricalIndex(['a', 'bb', 'ccc'] * 100) - if PY3: - expected = u"""CategoricalIndex(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', - ... 
- 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], - categories=['a', 'bb', 'ccc'], ordered=False, dtype='category', length=300)""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', - u'ccc', u'a', - ... - u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', - u'bb', u'ccc'], - categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category', length=300)""" - - self.assertEqual(unicode(idx), expected) - - # larger categories - idx = pd.CategoricalIndex(list('abcdefghijklmmo')) - if PY3: - expected = u"""CategoricalIndex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', - 'm', 'm', 'o'], - categories=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', ...], ordered=False, dtype='category')""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'a', u'b', u'c', u'd', u'e', u'f', u'g', u'h', u'i', u'j', - u'k', u'l', u'm', u'm', u'o'], - categories=[u'a', u'b', u'c', u'd', u'e', u'f', u'g', u'h', ...], ordered=False, dtype='category')""" - - self.assertEqual(unicode(idx), expected) - - # short - idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう']) - if PY3: - expected = u"""CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう'], categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" - self.assertEqual(unicode(idx), expected) - - # multiple lines - idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 10) - if PY3: - expected = u"""CategoricalIndex(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', - 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', - 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], - categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" - - self.assertEqual(repr(idx), expected) - else: - expected = 
u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', - u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', - u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], - categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" - - self.assertEqual(unicode(idx), expected) - - # truncated - idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 100) - if PY3: - expected = u"""CategoricalIndex(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', - ... - 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], - categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', - u'ううう', u'あ', - ... - u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう'], - categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category', length=300)""" - - self.assertEqual(unicode(idx), expected) - - # larger categories - idx = pd.CategoricalIndex(list(u'あいうえおかきくけこさしすせそ')) - if PY3: - expected = u"""CategoricalIndex(['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', 'け', 'こ', 'さ', 'し', - 'す', 'せ', 'そ'], - categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', u'け', u'こ', - u'さ', u'し', u'す', u'せ', u'そ'], - categories=[u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', ...], ordered=False, dtype='category')""" - - self.assertEqual(unicode(idx), expected) - - # Emable Unicode option ----------------------------------------- - with cf.option_context('display.unicode.east_asian_width', True): - - # short - idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう']) - if PY3: - expected = u"""CategoricalIndex(['あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, 
dtype='category')""" - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう'], categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" - self.assertEqual(unicode(idx), expected) - - # multiple lines - idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 10) - if PY3: - expected = u"""CategoricalIndex(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', - 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', - 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', - 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], - categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう', u'あ', u'いい', u'ううう'], - categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" - - self.assertEqual(unicode(idx), expected) - - # truncated - idx = pd.CategoricalIndex([u'あ', u'いい', u'ううう'] * 100) - if PY3: - expected = u"""CategoricalIndex(['あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', - 'ううう', 'あ', - ... - 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', - 'あ', 'いい', 'ううう'], - categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', - u'いい', u'ううう', u'あ', - ... 
- u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', - u'ううう', u'あ', u'いい', u'ううう'], - categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category', length=300)""" - - self.assertEqual(unicode(idx), expected) - - # larger categories - idx = pd.CategoricalIndex(list(u'あいうえおかきくけこさしすせそ')) - if PY3: - expected = u"""CategoricalIndex(['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', 'け', 'こ', - 'さ', 'し', 'す', 'せ', 'そ'], - categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" - - self.assertEqual(repr(idx), expected) - else: - expected = u"""CategoricalIndex([u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', - u'け', u'こ', u'さ', u'し', u'す', u'せ', u'そ'], - categories=[u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', ...], ordered=False, dtype='category')""" - - self.assertEqual(unicode(idx), expected) - - def test_fillna_categorical(self): - # GH 11343 - idx = CategoricalIndex([1.0, np.nan, 3.0, 1.0], name='x') - # fill by value in categories - exp = CategoricalIndex([1.0, 1.0, 3.0, 1.0], name='x') - self.assert_index_equal(idx.fillna(1.0), exp) - - # fill by value not in categories raises ValueError - with tm.assertRaisesRegexp(ValueError, - 'fill value must be in categories'): - idx.fillna(2.0) - - -class Numeric(Base): - - def test_numeric_compat(self): - - idx = self.create_index() - didx = idx * idx - - result = idx * 1 - tm.assert_index_equal(result, idx) - - result = 1 * idx - tm.assert_index_equal(result, idx) - - # in general not true for RangeIndex - if not isinstance(idx, RangeIndex): - result = idx * idx - tm.assert_index_equal(result, idx ** 2) - - # truediv under PY3 - result = idx / 1 - expected = idx - if PY3: - expected = expected.astype('float64') - tm.assert_index_equal(result, expected) - - result = idx / 2 - if PY3: - expected = expected.astype('float64') - expected = Index(idx.values / 2) - tm.assert_index_equal(result, expected) - - result = idx // 1 - tm.assert_index_equal(result, idx) - - result = idx * np.array(5, 
dtype='int64')
- tm.assert_index_equal(result, idx * 5)
-
- result = idx * np.arange(5, dtype='int64')
- tm.assert_index_equal(result, didx)
-
- result = idx * Series(np.arange(5, dtype='int64'))
- tm.assert_index_equal(result, didx)
-
- result = idx * Series(np.arange(5, dtype='float64') + 0.1)
- expected = Float64Index(np.arange(5, dtype='float64') *
- (np.arange(5, dtype='float64') + 0.1))
- tm.assert_index_equal(result, expected)
-
- # invalid
- self.assertRaises(TypeError,
- lambda: idx * date_range('20130101', periods=5))
- self.assertRaises(ValueError, lambda: idx * idx[0:3])
- self.assertRaises(ValueError, lambda: idx * np.array([1, 2]))
-
- def test_explicit_conversions(self):
-
- # GH 8608
- # add/sub are overridden explicitly for Float/Int Index
- idx = self._holder(np.arange(5, dtype='int64'))
-
- # float conversions
- arr = np.arange(5, dtype='int64') * 3.2
- expected = Float64Index(arr)
- fidx = idx * 3.2
- tm.assert_index_equal(fidx, expected)
- fidx = 3.2 * idx
- tm.assert_index_equal(fidx, expected)
-
- # interops with numpy arrays
- expected = Float64Index(arr)
- a = np.zeros(5, dtype='float64')
- result = fidx - a
- tm.assert_index_equal(result, expected)
-
- expected = Float64Index(-arr)
- a = np.zeros(5, dtype='float64')
- result = a - fidx
- tm.assert_index_equal(result, expected)
-
- def test_ufunc_compat(self):
- idx = self._holder(np.arange(5, dtype='int64'))
- result = np.sin(idx)
- expected = Float64Index(np.sin(np.arange(5, dtype='int64')))
- tm.assert_index_equal(result, expected)
-
- def test_index_groupby(self):
- int_idx = Index(range(6))
- float_idx = Index(np.arange(0, 0.6, 0.1))
- obj_idx = Index('A B C D E F'.split())
- dt_idx = pd.date_range('2013-01-01', freq='M', periods=6)
-
- for idx in [int_idx, float_idx, obj_idx, dt_idx]:
- to_groupby = np.array([1, 2, np.nan, np.nan, 2, 1])
- self.assertEqual(idx.groupby(to_groupby),
- {1.0: [idx[0], idx[5]], 2.0: [idx[1], idx[4]]})
-
- to_groupby = Index([datetime(2011, 11, 1),
- 
datetime(2011, 12, 1), - pd.NaT, - pd.NaT, - datetime(2011, 12, 1), - datetime(2011, 11, 1)], - tz='UTC').values - - ex_keys = pd.tslib.datetime_to_datetime64(np.array([Timestamp( - '2011-11-01'), Timestamp('2011-12-01')])) - expected = {ex_keys[0][0]: [idx[0], idx[5]], - ex_keys[0][1]: [idx[1], idx[4]]} - self.assertEqual(idx.groupby(to_groupby), expected) - - def test_modulo(self): - # GH 9244 - index = self.create_index() - expected = Index(index.values % 2) - self.assert_index_equal(index % 2, expected) - - -class TestFloat64Index(Numeric, tm.TestCase): - _holder = Float64Index - _multiprocess_can_split_ = True - - def setUp(self): - self.indices = dict(mixed=Float64Index([1.5, 2, 3, 4, 5]), - float=Float64Index(np.arange(5) * 2.5)) - self.setup_indices() - - def create_index(self): - return Float64Index(np.arange(5, dtype='float64')) - - def test_repr_roundtrip(self): - for ind in (self.mixed, self.float): - tm.assert_index_equal(eval(repr(ind)), ind) - - def check_is_index(self, i): - self.assertIsInstance(i, Index) - self.assertNotIsInstance(i, Float64Index) - - def check_coerce(self, a, b, is_float_index=True): - self.assertTrue(a.equals(b)) - if is_float_index: - self.assertIsInstance(b, Float64Index) - else: - self.check_is_index(b) - - def test_constructor(self): - - # explicit construction - index = Float64Index([1, 2, 3, 4, 5]) - self.assertIsInstance(index, Float64Index) - self.assertTrue((index.values == np.array( - [1, 2, 3, 4, 5], dtype='float64')).all()) - index = Float64Index(np.array([1, 2, 3, 4, 5])) - self.assertIsInstance(index, Float64Index) - index = Float64Index([1., 2, 3, 4, 5]) - self.assertIsInstance(index, Float64Index) - index = Float64Index(np.array([1., 2, 3, 4, 5])) - self.assertIsInstance(index, Float64Index) - self.assertEqual(index.dtype, float) - - index = Float64Index(np.array([1., 2, 3, 4, 5]), dtype=np.float32) - self.assertIsInstance(index, Float64Index) - self.assertEqual(index.dtype, np.float64) - - index = 
Float64Index(np.array([1, 2, 3, 4, 5]), dtype=np.float32) - self.assertIsInstance(index, Float64Index) - self.assertEqual(index.dtype, np.float64) - - # nan handling - result = Float64Index([np.nan, np.nan]) - self.assertTrue(pd.isnull(result.values).all()) - result = Float64Index(np.array([np.nan])) - self.assertTrue(pd.isnull(result.values).all()) - result = Index(np.array([np.nan])) - self.assertTrue(pd.isnull(result.values).all()) - - def test_constructor_invalid(self): - - # invalid - self.assertRaises(TypeError, Float64Index, 0.) - self.assertRaises(TypeError, Float64Index, ['a', 'b', 0.]) - self.assertRaises(TypeError, Float64Index, [Timestamp('20130101')]) - - def test_constructor_coerce(self): - - self.check_coerce(self.mixed, Index([1.5, 2, 3, 4, 5])) - self.check_coerce(self.float, Index(np.arange(5) * 2.5)) - self.check_coerce(self.float, Index(np.array( - np.arange(5) * 2.5, dtype=object))) - - def test_constructor_explicit(self): - - # these don't auto convert - self.check_coerce(self.float, - Index((np.arange(5) * 2.5), dtype=object), - is_float_index=False) - self.check_coerce(self.mixed, Index( - [1.5, 2, 3, 4, 5], dtype=object), is_float_index=False) - - def test_astype(self): - - result = self.float.astype(object) - self.assertTrue(result.equals(self.float)) - self.assertTrue(self.float.equals(result)) - self.check_is_index(result) - - i = self.mixed.copy() - i.name = 'foo' - result = i.astype(object) - self.assertTrue(result.equals(i)) - self.assertTrue(i.equals(result)) - self.check_is_index(result) - - def test_equals(self): - - i = Float64Index([1.0, 2.0]) - self.assertTrue(i.equals(i)) - self.assertTrue(i.identical(i)) - - i2 = Float64Index([1.0, 2.0]) - self.assertTrue(i.equals(i2)) - - i = Float64Index([1.0, np.nan]) - self.assertTrue(i.equals(i)) - self.assertTrue(i.identical(i)) - - i2 = Float64Index([1.0, np.nan]) - self.assertTrue(i.equals(i2)) - - def test_get_indexer(self): - idx = Float64Index([0.0, 1.0, 2.0]) - 
tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) - - target = [-0.1, 0.5, 1.1] - tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'backfill'), [0, 1, 2]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest'), [0, 1, 1]) - - def test_get_loc(self): - idx = Float64Index([0.0, 1.0, 2.0]) - for method in [None, 'pad', 'backfill', 'nearest']: - self.assertEqual(idx.get_loc(1, method), 1) - if method is not None: - self.assertEqual(idx.get_loc(1, method, tolerance=0), 1) - - for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]: - self.assertEqual(idx.get_loc(1.1, method), loc) - self.assertEqual(idx.get_loc(1.1, method, tolerance=0.9), loc) - - self.assertRaises(KeyError, idx.get_loc, 'foo') - self.assertRaises(KeyError, idx.get_loc, 1.5) - self.assertRaises(KeyError, idx.get_loc, 1.5, method='pad', - tolerance=0.1) - - with tm.assertRaisesRegexp(ValueError, 'must be numeric'): - idx.get_loc(1.4, method='nearest', tolerance='foo') - - def test_get_loc_na(self): - idx = Float64Index([np.nan, 1, 2]) - self.assertEqual(idx.get_loc(1), 1) - self.assertEqual(idx.get_loc(np.nan), 0) - - idx = Float64Index([np.nan, 1, np.nan]) - self.assertEqual(idx.get_loc(1), 1) - - # representable by slice [0:2:2] - # self.assertRaises(KeyError, idx.slice_locs, np.nan) - sliced = idx.slice_locs(np.nan) - self.assertTrue(isinstance(sliced, tuple)) - self.assertEqual(sliced, (0, 3)) - - # not representable by slice - idx = Float64Index([np.nan, 1, np.nan, np.nan]) - self.assertEqual(idx.get_loc(1), 1) - self.assertRaises(KeyError, idx.slice_locs, np.nan) - - def test_contains_nans(self): - i = Float64Index([1.0, 2.0, np.nan]) - self.assertTrue(np.nan in i) - - def test_contains_not_nans(self): - i = Float64Index([1.0, 2.0, np.nan]) - self.assertTrue(1.0 in i) - - def test_doesnt_contain_all_the_things(self): - i = Float64Index([np.nan]) - self.assertFalse(i.isin([0]).item()) - 
self.assertFalse(i.isin([1]).item()) - self.assertTrue(i.isin([np.nan]).item()) - - def test_nan_multiple_containment(self): - i = Float64Index([1.0, np.nan]) - tm.assert_numpy_array_equal(i.isin([1.0]), np.array([True, False])) - tm.assert_numpy_array_equal(i.isin([2.0, np.pi]), - np.array([False, False])) - tm.assert_numpy_array_equal(i.isin([np.nan]), np.array([False, True])) - tm.assert_numpy_array_equal(i.isin([1.0, np.nan]), - np.array([True, True])) - i = Float64Index([1.0, 2.0]) - tm.assert_numpy_array_equal(i.isin([np.nan]), np.array([False, False])) - - def test_astype_from_object(self): - index = Index([1.0, np.nan, 0.2], dtype='object') - result = index.astype(float) - expected = Float64Index([1.0, np.nan, 0.2]) - tm.assert_equal(result.dtype, expected.dtype) - tm.assert_index_equal(result, expected) - - def test_fillna_float64(self): - # GH 11343 - idx = Index([1.0, np.nan, 3.0], dtype=float, name='x') - # can't downcast - exp = Index([1.0, 0.1, 3.0], name='x') - self.assert_index_equal(idx.fillna(0.1), exp) - - # downcast - exp = Float64Index([1.0, 2.0, 3.0], name='x') - self.assert_index_equal(idx.fillna(2), exp) - - # object - exp = Index([1.0, 'obj', 3.0], name='x') - self.assert_index_equal(idx.fillna('obj'), exp) - - -class TestInt64Index(Numeric, tm.TestCase): - _holder = Int64Index - _multiprocess_can_split_ = True - - def setUp(self): - self.indices = dict(index=Int64Index(np.arange(0, 20, 2))) - self.setup_indices() - - def create_index(self): - return Int64Index(np.arange(5, dtype='int64')) - - def test_too_many_names(self): - def testit(): - self.index.names = ["roger", "harold"] - - assertRaisesRegexp(ValueError, "^Length", testit) - - def test_constructor(self): - # pass list, coerce fine - index = Int64Index([-5, 0, 1, 2]) - expected = np.array([-5, 0, 1, 2], dtype=np.int64) - tm.assert_numpy_array_equal(index, expected) - - # from iterable - index = Int64Index(iter([-5, 0, 1, 2])) - tm.assert_numpy_array_equal(index, expected) - - # 
scalar raise Exception - self.assertRaises(TypeError, Int64Index, 5) - - # copy - arr = self.index.values - new_index = Int64Index(arr, copy=True) - tm.assert_numpy_array_equal(new_index, self.index) - val = arr[0] + 3000 - # this should not change index - arr[0] = val - self.assertNotEqual(new_index[0], val) - - def test_constructor_corner(self): - arr = np.array([1, 2, 3, 4], dtype=object) - index = Int64Index(arr) - self.assertEqual(index.values.dtype, np.int64) - self.assertTrue(index.equals(arr)) - - # preventing casting - arr = np.array([1, '2', 3, '4'], dtype=object) - with tm.assertRaisesRegexp(TypeError, 'casting'): - Int64Index(arr) - - arr_with_floats = [0, 2, 3, 4, 5, 1.25, 3, -1] - with tm.assertRaisesRegexp(TypeError, 'casting'): - Int64Index(arr_with_floats) - - def test_copy(self): - i = Int64Index([], name='Foo') - i_copy = i.copy() - self.assertEqual(i_copy.name, 'Foo') - - def test_view(self): - super(TestInt64Index, self).test_view() - - i = Int64Index([], name='Foo') - i_view = i.view() - self.assertEqual(i_view.name, 'Foo') - - i_view = i.view('i8') - tm.assert_index_equal(i, Int64Index(i_view, name='Foo')) - - i_view = i.view(Int64Index) - tm.assert_index_equal(i, Int64Index(i_view, name='Foo')) - - def test_coerce_list(self): - # coerce things - arr = Index([1, 2, 3, 4]) - tm.assertIsInstance(arr, Int64Index) - - # but not if explicit dtype passed - arr = Index([1, 2, 3, 4], dtype=object) - tm.assertIsInstance(arr, Index) - - def test_dtype(self): - self.assertEqual(self.index.dtype, np.int64) - - def test_is_monotonic(self): - self.assertTrue(self.index.is_monotonic) - self.assertTrue(self.index.is_monotonic_increasing) - self.assertFalse(self.index.is_monotonic_decreasing) - - index = Int64Index([4, 3, 2, 1]) - self.assertFalse(index.is_monotonic) - self.assertTrue(index.is_monotonic_decreasing) - - index = Int64Index([1]) - self.assertTrue(index.is_monotonic) - self.assertTrue(index.is_monotonic_increasing) - 
self.assertTrue(index.is_monotonic_decreasing) - - def test_is_monotonic_na(self): - examples = [Index([np.nan]), - Index([np.nan, 1]), - Index([1, 2, np.nan]), - Index(['a', 'b', np.nan]), - pd.to_datetime(['NaT']), - pd.to_datetime(['NaT', '2000-01-01']), - pd.to_datetime(['2000-01-01', 'NaT', '2000-01-02']), - pd.to_timedelta(['1 day', 'NaT']), ] - for index in examples: - self.assertFalse(index.is_monotonic_increasing) - self.assertFalse(index.is_monotonic_decreasing) - - def test_equals(self): - same_values = Index(self.index, dtype=object) - self.assertTrue(self.index.equals(same_values)) - self.assertTrue(same_values.equals(self.index)) - - def test_logical_compat(self): - idx = self.create_index() - self.assertEqual(idx.all(), idx.values.all()) - self.assertEqual(idx.any(), idx.values.any()) - - def test_identical(self): - i = Index(self.index.copy()) - self.assertTrue(i.identical(self.index)) - - same_values_different_type = Index(i, dtype=object) - self.assertFalse(i.identical(same_values_different_type)) - - i = self.index.copy(dtype=object) - i = i.rename('foo') - same_values = Index(i, dtype=object) - self.assertTrue(same_values.identical(i)) - - self.assertFalse(i.identical(self.index)) - self.assertTrue(Index(same_values, name='foo', dtype=object).identical( - i)) - - self.assertFalse(self.index.copy(dtype=object) - .identical(self.index.copy(dtype='int64'))) - - def test_get_indexer(self): - target = Int64Index(np.arange(10)) - indexer = self.index.get_indexer(target) - expected = np.array([0, -1, 1, -1, 2, -1, 3, -1, 4, -1]) - tm.assert_numpy_array_equal(indexer, expected) - - def test_get_indexer_pad(self): - target = Int64Index(np.arange(10)) - indexer = self.index.get_indexer(target, method='pad') - expected = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4]) - tm.assert_numpy_array_equal(indexer, expected) - - def test_get_indexer_backfill(self): - target = Int64Index(np.arange(10)) - indexer = self.index.get_indexer(target, method='backfill') - 
expected = np.array([0, 1, 1, 2, 2, 3, 3, 4, 4, 5]) - tm.assert_numpy_array_equal(indexer, expected) - - def test_join_outer(self): - other = Int64Index([7, 12, 25, 1, 2, 5]) - other_mono = Int64Index([1, 2, 5, 7, 12, 25]) - - # not monotonic - # guarantee of sortedness - res, lidx, ridx = self.index.join(other, how='outer', - return_indexers=True) - noidx_res = self.index.join(other, how='outer') - self.assertTrue(res.equals(noidx_res)) - - eres = Int64Index([0, 1, 2, 4, 5, 6, 7, 8, 10, 12, 14, 16, 18, 25]) - elidx = np.array([0, -1, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, 9, -1], - dtype=np.int64) - eridx = np.array([-1, 3, 4, -1, 5, -1, 0, -1, -1, 1, -1, -1, -1, 2], - dtype=np.int64) - - tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) - tm.assert_numpy_array_equal(lidx, elidx) - tm.assert_numpy_array_equal(ridx, eridx) - - # monotonic - res, lidx, ridx = self.index.join(other_mono, how='outer', - return_indexers=True) - noidx_res = self.index.join(other_mono, how='outer') - self.assertTrue(res.equals(noidx_res)) - - eridx = np.array([-1, 0, 1, -1, 2, -1, 3, -1, -1, 4, -1, -1, -1, 5], - dtype=np.int64) - tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) - tm.assert_numpy_array_equal(lidx, elidx) - tm.assert_numpy_array_equal(ridx, eridx) - - def test_join_inner(self): - other = Int64Index([7, 12, 25, 1, 2, 5]) - other_mono = Int64Index([1, 2, 5, 7, 12, 25]) - - # not monotonic - res, lidx, ridx = self.index.join(other, how='inner', - return_indexers=True) - - # no guarantee of sortedness, so sort for comparison purposes - ind = res.argsort() - res = res.take(ind) - lidx = lidx.take(ind) - ridx = ridx.take(ind) - - eres = Int64Index([2, 12]) - elidx = np.array([1, 6]) - eridx = np.array([4, 1]) - - tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) - tm.assert_numpy_array_equal(lidx, elidx) - tm.assert_numpy_array_equal(ridx, eridx) - - # monotonic - res, lidx, ridx = self.index.join(other_mono, 
how='inner', - return_indexers=True) - - res2 = self.index.intersection(other_mono) - self.assertTrue(res.equals(res2)) - - eridx = np.array([1, 4]) - tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) - tm.assert_numpy_array_equal(lidx, elidx) - tm.assert_numpy_array_equal(ridx, eridx) - - def test_join_left(self): - other = Int64Index([7, 12, 25, 1, 2, 5]) - other_mono = Int64Index([1, 2, 5, 7, 12, 25]) - - # not monotonic - res, lidx, ridx = self.index.join(other, how='left', - return_indexers=True) - eres = self.index - eridx = np.array([-1, 4, -1, -1, -1, -1, 1, -1, -1, -1], - dtype=np.int64) - - tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) - self.assertIsNone(lidx) - tm.assert_numpy_array_equal(ridx, eridx) - - # monotonic - res, lidx, ridx = self.index.join(other_mono, how='left', - return_indexers=True) - eridx = np.array([-1, 1, -1, -1, -1, -1, 4, -1, -1, -1], - dtype=np.int64) - tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) - self.assertIsNone(lidx) - tm.assert_numpy_array_equal(ridx, eridx) - - # non-unique - idx = Index([1, 1, 2, 5]) - idx2 = Index([1, 2, 5, 7, 9]) - res, lidx, ridx = idx2.join(idx, how='left', return_indexers=True) - eres = Index([1, 1, 2, 5, 7, 9]) # 1 is in idx2, so it should be x2 - eridx = np.array([0, 1, 2, 3, -1, -1]) - elidx = np.array([0, 0, 1, 2, 3, 4]) - self.assertTrue(res.equals(eres)) - tm.assert_numpy_array_equal(lidx, elidx) - tm.assert_numpy_array_equal(ridx, eridx) - - def test_join_right(self): - other = Int64Index([7, 12, 25, 1, 2, 5]) - other_mono = Int64Index([1, 2, 5, 7, 12, 25]) - - # not monotonic - res, lidx, ridx = self.index.join(other, how='right', - return_indexers=True) - eres = other - elidx = np.array([-1, 6, -1, -1, 1, -1], dtype=np.int64) - - tm.assertIsInstance(other, Int64Index) - self.assertTrue(res.equals(eres)) - tm.assert_numpy_array_equal(lidx, elidx) - self.assertIsNone(ridx) - - # monotonic - res, lidx, ridx = 
self.index.join(other_mono, how='right', - return_indexers=True) - eres = other_mono - elidx = np.array([-1, 1, -1, -1, 6, -1], dtype=np.int64) - tm.assertIsInstance(other, Int64Index) - self.assertTrue(res.equals(eres)) - tm.assert_numpy_array_equal(lidx, elidx) - self.assertIsNone(ridx) - - # non-unique - idx = Index([1, 1, 2, 5]) - idx2 = Index([1, 2, 5, 7, 9]) - res, lidx, ridx = idx.join(idx2, how='right', return_indexers=True) - eres = Index([1, 1, 2, 5, 7, 9]) # 1 is in idx2, so it should be x2 - elidx = np.array([0, 1, 2, 3, -1, -1]) - eridx = np.array([0, 0, 1, 2, 3, 4]) - self.assertTrue(res.equals(eres)) - tm.assert_numpy_array_equal(lidx, elidx) - tm.assert_numpy_array_equal(ridx, eridx) - - def test_join_non_int_index(self): - other = Index([3, 6, 7, 8, 10], dtype=object) - - outer = self.index.join(other, how='outer') - outer2 = other.join(self.index, how='outer') - expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, - 16, 18], dtype=object) - self.assertTrue(outer.equals(outer2)) - self.assertTrue(outer.equals(expected)) - - inner = self.index.join(other, how='inner') - inner2 = other.join(self.index, how='inner') - expected = Index([6, 8, 10], dtype=object) - self.assertTrue(inner.equals(inner2)) - self.assertTrue(inner.equals(expected)) - - left = self.index.join(other, how='left') - self.assertTrue(left.equals(self.index)) - - left2 = other.join(self.index, how='left') - self.assertTrue(left2.equals(other)) - - right = self.index.join(other, how='right') - self.assertTrue(right.equals(other)) - - right2 = other.join(self.index, how='right') - self.assertTrue(right2.equals(self.index)) - - def test_join_non_unique(self): - left = Index([4, 4, 3, 3]) - - joined, lidx, ridx = left.join(left, return_indexers=True) - - exp_joined = Index([3, 3, 3, 3, 4, 4, 4, 4]) - self.assertTrue(joined.equals(exp_joined)) - - exp_lidx = np.array([2, 2, 3, 3, 0, 0, 1, 1], dtype=np.int64) - tm.assert_numpy_array_equal(lidx, exp_lidx) - - exp_ridx = np.array([2, 3, 2, 3, 
0, 1, 0, 1], dtype=np.int64) - tm.assert_numpy_array_equal(ridx, exp_ridx) - - def test_join_self(self): - kinds = 'outer', 'inner', 'left', 'right' - for kind in kinds: - joined = self.index.join(self.index, how=kind) - self.assertIs(self.index, joined) - - def test_intersection(self): - other = Index([1, 2, 3, 4, 5]) - result = self.index.intersection(other) - expected = np.sort(np.intersect1d(self.index.values, other.values)) - tm.assert_numpy_array_equal(result, expected) - - result = other.intersection(self.index) - expected = np.sort(np.asarray(np.intersect1d(self.index.values, - other.values))) - tm.assert_numpy_array_equal(result, expected) - - def test_intersect_str_dates(self): - dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] - - i1 = Index(dt_dates, dtype=object) - i2 = Index(['aa'], dtype=object) - res = i2.intersection(i1) - - self.assertEqual(len(res), 0) - - def test_union_noncomparable(self): - from datetime import datetime, timedelta - # corner case, non-Int64Index - now = datetime.now() - other = Index([now + timedelta(i) for i in range(4)], dtype=object) - result = self.index.union(other) - expected = np.concatenate((self.index, other)) - tm.assert_numpy_array_equal(result, expected) - - result = other.union(self.index) - expected = np.concatenate((other, self.index)) - tm.assert_numpy_array_equal(result, expected) - - def test_cant_or_shouldnt_cast(self): - # can't - data = ['foo', 'bar', 'baz'] - self.assertRaises(TypeError, Int64Index, data) - - # shouldn't - data = ['0', '1', '2'] - self.assertRaises(TypeError, Int64Index, data) - - def test_view_Index(self): - self.index.view(Index) - - def test_prevent_casting(self): - result = self.index.astype('O') - self.assertEqual(result.dtype, np.object_) - - def test_take_preserve_name(self): - index = Int64Index([1, 2, 3, 4], name='foo') - taken = index.take([3, 0, 1]) - self.assertEqual(index.name, taken.name) - - def test_int_name_format(self): - index = Index(['a', 'b', 'c'], name=0) - 
s = Series(lrange(3), index) - df = DataFrame(lrange(3), index=index) - repr(s) - repr(df) - - def test_print_unicode_columns(self): - df = pd.DataFrame({u("\u05d0"): [1, 2, 3], - "\u05d1": [4, 5, 6], - "c": [7, 8, 9]}) - repr(df.columns) # should not raise UnicodeDecodeError - - def test_repr_summary(self): - with cf.option_context('display.max_seq_items', 10): - r = repr(pd.Index(np.arange(1000))) - self.assertTrue(len(r) < 200) - self.assertTrue("..." in r) - - def test_repr_roundtrip(self): - tm.assert_index_equal(eval(repr(self.index)), self.index) - - def test_unicode_string_with_unicode(self): - idx = Index(lrange(1000)) - - if PY3: - str(idx) - else: - compat.text_type(idx) - - def test_bytestring_with_unicode(self): - idx = Index(lrange(1000)) - if PY3: - bytes(idx) - else: - str(idx) - - def test_slice_keep_name(self): - idx = Int64Index([1, 2], name='asdf') - self.assertEqual(idx.name, idx[1:].name) - - def test_ufunc_coercions(self): - idx = Int64Index([1, 2, 3, 4, 5], name='x') - - result = np.sqrt(idx) - tm.assertIsInstance(result, Float64Index) - exp = Float64Index(np.sqrt(np.array([1, 2, 3, 4, 5])), name='x') - tm.assert_index_equal(result, exp) - - result = np.divide(idx, 2.) - tm.assertIsInstance(result, Float64Index) - exp = Float64Index([0.5, 1., 1.5, 2., 2.5], name='x') - tm.assert_index_equal(result, exp) - - # _evaluate_numeric_binop - result = idx + 2. - tm.assertIsInstance(result, Float64Index) - exp = Float64Index([3., 4., 5., 6., 7.], name='x') - tm.assert_index_equal(result, exp) - - result = idx - 2. - tm.assertIsInstance(result, Float64Index) - exp = Float64Index([-1., 0., 1., 2., 3.], name='x') - tm.assert_index_equal(result, exp) - - result = idx * 1. - tm.assertIsInstance(result, Float64Index) - exp = Float64Index([1., 2., 3., 4., 5.], name='x') - tm.assert_index_equal(result, exp) - - result = idx / 2. 
- tm.assertIsInstance(result, Float64Index) - exp = Float64Index([0.5, 1., 1.5, 2., 2.5], name='x') - tm.assert_index_equal(result, exp) - - -class TestRangeIndex(Numeric, tm.TestCase): - _holder = RangeIndex - _compat_props = ['shape', 'ndim', 'size', 'itemsize'] - - def setUp(self): - self.indices = dict(index=RangeIndex(0, 20, 2, name='foo')) - self.setup_indices() - - def create_index(self): - return RangeIndex(5) - - def test_binops(self): - ops = [operator.add, operator.sub, operator.mul, operator.floordiv, - operator.truediv, pow] - scalars = [-1, 1, 2] - idxs = [RangeIndex(0, 10, 1), RangeIndex(0, 20, 2), - RangeIndex(-10, 10, 2), RangeIndex(5, -5, -1)] - for op in ops: - for a, b in combinations(idxs, 2): - result = op(a, b) - expected = op(Int64Index(a), Int64Index(b)) - tm.assert_index_equal(result, expected) - for idx in idxs: - for scalar in scalars: - result = op(idx, scalar) - expected = op(Int64Index(idx), scalar) - tm.assert_index_equal(result, expected) - - def test_too_many_names(self): - def testit(): - self.index.names = ["roger", "harold"] - - assertRaisesRegexp(ValueError, "^Length", testit) - - def test_constructor(self): - index = RangeIndex(5) - expected = np.arange(5, dtype=np.int64) - self.assertIsInstance(index, RangeIndex) - self.assertEqual(index._start, 0) - self.assertEqual(index._stop, 5) - self.assertEqual(index._step, 1) - self.assertEqual(index.name, None) - tm.assert_index_equal(Index(expected), index) - - index = RangeIndex(1, 5) - expected = np.arange(1, 5, dtype=np.int64) - self.assertIsInstance(index, RangeIndex) - self.assertEqual(index._start, 1) - tm.assert_index_equal(Index(expected), index) - - index = RangeIndex(1, 5, 2) - expected = np.arange(1, 5, 2, dtype=np.int64) - self.assertIsInstance(index, RangeIndex) - self.assertEqual(index._step, 2) - tm.assert_index_equal(Index(expected), index) - - index = RangeIndex() - expected = np.empty(0, dtype=np.int64) - self.assertIsInstance(index, RangeIndex) - 
self.assertEqual(index._start, 0) - self.assertEqual(index._stop, 0) - self.assertEqual(index._step, 1) - tm.assert_index_equal(Index(expected), index) - - index = RangeIndex(name='Foo') - self.assertIsInstance(index, RangeIndex) - self.assertEqual(index.name, 'Foo') - - # we don't allow on a bare Index - self.assertRaises(TypeError, lambda: Index(0, 1000)) - - # invalid args - for i in [Index(['a', 'b']), Series(['a', 'b']), np.array(['a', 'b']), - [], 'foo', datetime(2000, 1, 1, 0, 0), np.arange(0, 10)]: - self.assertRaises(TypeError, lambda: RangeIndex(i)) - - def test_constructor_same(self): - - # pass thru w and w/o copy - index = RangeIndex(1, 5, 2) - result = RangeIndex(index, copy=False) - self.assertTrue(result.identical(index)) - - result = RangeIndex(index, copy=True) - self.assertTrue(result.equals(index)) - - result = RangeIndex(index) - self.assertTrue(result.equals(index)) - - self.assertRaises(TypeError, - lambda: RangeIndex(index, dtype='float64')) - - def test_constructor_range(self): - - self.assertRaises(TypeError, lambda: RangeIndex(range(1, 5, 2))) - - result = RangeIndex.from_range(range(1, 5, 2)) - expected = RangeIndex(1, 5, 2) - self.assertTrue(result.equals(expected)) - - result = RangeIndex.from_range(range(5, 6)) - expected = RangeIndex(5, 6, 1) - self.assertTrue(result.equals(expected)) - - # an invalid range - result = RangeIndex.from_range(range(5, 1)) - expected = RangeIndex(0, 0, 1) - self.assertTrue(result.equals(expected)) - - result = RangeIndex.from_range(range(5)) - expected = RangeIndex(0, 5, 1) - self.assertTrue(result.equals(expected)) - - result = Index(range(1, 5, 2)) - expected = RangeIndex(1, 5, 2) - self.assertTrue(result.equals(expected)) - - self.assertRaises(TypeError, - lambda: Index(range(1, 5, 2), dtype='float64')) - - def test_numeric_compat2(self): - # validate that we are handling the RangeIndex overrides to numeric ops - # and returning RangeIndex where possible - - idx = RangeIndex(0, 10, 2) - - result = idx 
* 2 - expected = RangeIndex(0, 20, 4) - self.assertTrue(result.equals(expected)) - - result = idx + 2 - expected = RangeIndex(2, 12, 2) - self.assertTrue(result.equals(expected)) - - result = idx - 2 - expected = RangeIndex(-2, 8, 2) - self.assertTrue(result.equals(expected)) - - # truediv under PY3 - result = idx / 2 - if PY3: - expected = RangeIndex(0, 5, 1) - else: - expected = RangeIndex(0, 5, 1).astype('float64') - self.assertTrue(result.equals(expected)) - - result = idx / 4 - expected = RangeIndex(0, 10, 2).values / 4 - self.assertTrue(result.equals(expected)) - - result = idx // 1 - expected = idx - tm.assert_index_equal(result, expected, exact=True) - - # __mul__ - result = idx * idx - expected = Index(idx.values * idx.values) - tm.assert_index_equal(result, expected, exact=True) - - # __pow__ - idx = RangeIndex(0, 1000, 2) - result = idx ** 2 - expected = idx._int64index ** 2 - tm.assert_index_equal(Index(result.values), expected, exact=True) - - # __floordiv__ - cases_exact = [(RangeIndex(0, 1000, 2), 2, RangeIndex(0, 500, 1)), - (RangeIndex(-99, -201, -3), -3, RangeIndex(33, 67, 1)), - (RangeIndex(0, 1000, 1), 2, - RangeIndex(0, 1000, 1)._int64index // 2), - (RangeIndex(0, 100, 1), 2.0, - RangeIndex(0, 100, 1)._int64index // 2.0), - (RangeIndex(), 50, RangeIndex()), - (RangeIndex(2, 4, 2), 3, RangeIndex(0, 1, 1)), - (RangeIndex(-5, -10, -6), 4, RangeIndex(-2, -1, 1)), - (RangeIndex(-100, -200, 3), 2, RangeIndex())] - for idx, div, expected in cases_exact: - tm.assert_index_equal(idx // div, expected, exact=True) - - def test_constructor_corner(self): - arr = np.array([1, 2, 3, 4], dtype=object) - index = RangeIndex(1, 5) - self.assertEqual(index.values.dtype, np.int64) - self.assertTrue(index.equals(arr)) - - # non-int raise Exception - self.assertRaises(TypeError, RangeIndex, '1', '10', '1') - self.assertRaises(TypeError, RangeIndex, 1.1, 10.2, 1.3) - - # invalid passed type - self.assertRaises(TypeError, lambda: RangeIndex(1, 5, dtype='float64')) - - 
def test_copy(self): - i = RangeIndex(5, name='Foo') - i_copy = i.copy() - self.assertTrue(i_copy is not i) - self.assertTrue(i_copy.identical(i)) - self.assertEqual(i_copy._start, 0) - self.assertEqual(i_copy._stop, 5) - self.assertEqual(i_copy._step, 1) - self.assertEqual(i_copy.name, 'Foo') - - def test_repr(self): - i = RangeIndex(5, name='Foo') - result = repr(i) - if PY3: - expected = "RangeIndex(start=0, stop=5, step=1, name='Foo')" - else: - expected = "RangeIndex(start=0, stop=5, step=1, name=u'Foo')" - self.assertTrue(result, expected) - - result = eval(result) - self.assertTrue(result.equals(i)) - - i = RangeIndex(5, 0, -1) - result = repr(i) - expected = "RangeIndex(start=5, stop=0, step=-1)" - self.assertEqual(result, expected) - - result = eval(result) - self.assertTrue(result.equals(i)) - - def test_insert(self): - - idx = RangeIndex(5, name='Foo') - result = idx[1:4] - - # test 0th element - self.assertTrue(idx[0:4].equals(result.insert(0, idx[0]))) - - def test_delete(self): - - idx = RangeIndex(5, name='Foo') - expected = idx[1:].astype(int) - result = idx.delete(0) - self.assertTrue(result.equals(expected)) - self.assertEqual(result.name, expected.name) - - expected = idx[:-1].astype(int) - result = idx.delete(-1) - self.assertTrue(result.equals(expected)) - self.assertEqual(result.name, expected.name) - - with tm.assertRaises((IndexError, ValueError)): - # either depending on numpy version - result = idx.delete(len(idx)) - - def test_view(self): - super(TestRangeIndex, self).test_view() - - i = RangeIndex(name='Foo') - i_view = i.view() - self.assertEqual(i_view.name, 'Foo') - - i_view = i.view('i8') - tm.assert_numpy_array_equal(i, i_view) - - i_view = i.view(RangeIndex) - tm.assert_index_equal(i, i_view) - - def test_dtype(self): - self.assertEqual(self.index.dtype, np.int64) - - def test_is_monotonic(self): - self.assertTrue(self.index.is_monotonic) - self.assertTrue(self.index.is_monotonic_increasing) - 
self.assertFalse(self.index.is_monotonic_decreasing) - - index = RangeIndex(4, 0, -1) - self.assertFalse(index.is_monotonic) - self.assertTrue(index.is_monotonic_decreasing) - - index = RangeIndex(1, 2) - self.assertTrue(index.is_monotonic) - self.assertTrue(index.is_monotonic_increasing) - self.assertTrue(index.is_monotonic_decreasing) - - def test_equals(self): - equiv_pairs = [(RangeIndex(0, 9, 2), RangeIndex(0, 10, 2)), - (RangeIndex(0), RangeIndex(1, -1, 3)), - (RangeIndex(1, 2, 3), RangeIndex(1, 3, 4)), - (RangeIndex(0, -9, -2), RangeIndex(0, -10, -2))] - for left, right in equiv_pairs: - self.assertTrue(left.equals(right)) - self.assertTrue(right.equals(left)) - - def test_logical_compat(self): - idx = self.create_index() - self.assertEqual(idx.all(), idx.values.all()) - self.assertEqual(idx.any(), idx.values.any()) - - def test_identical(self): - i = Index(self.index.copy()) - self.assertTrue(i.identical(self.index)) - - # we don't allow object dtype for RangeIndex - if isinstance(self.index, RangeIndex): - return - - same_values_different_type = Index(i, dtype=object) - self.assertFalse(i.identical(same_values_different_type)) - - i = self.index.copy(dtype=object) - i = i.rename('foo') - same_values = Index(i, dtype=object) - self.assertTrue(same_values.identical(self.index.copy(dtype=object))) - - self.assertFalse(i.identical(self.index)) - self.assertTrue(Index(same_values, name='foo', dtype=object).identical( - i)) - - self.assertFalse(self.index.copy(dtype=object) - .identical(self.index.copy(dtype='int64'))) - - def test_get_indexer(self): - target = RangeIndex(10) - indexer = self.index.get_indexer(target) - expected = np.array([0, -1, 1, -1, 2, -1, 3, -1, 4, -1]) - self.assert_numpy_array_equal(indexer, expected) - - def test_get_indexer_pad(self): - target = RangeIndex(10) - indexer = self.index.get_indexer(target, method='pad') - expected = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4]) - self.assert_numpy_array_equal(indexer, expected) - - def 
test_get_indexer_backfill(self): - target = RangeIndex(10) - indexer = self.index.get_indexer(target, method='backfill') - expected = np.array([0, 1, 1, 2, 2, 3, 3, 4, 4, 5]) - self.assert_numpy_array_equal(indexer, expected) - - def test_join_outer(self): - # join with Int64Index - other = Int64Index(np.arange(25, 14, -1)) - - res, lidx, ridx = self.index.join(other, how='outer', - return_indexers=True) - noidx_res = self.index.join(other, how='outer') - self.assertTrue(res.equals(noidx_res)) - - eres = Int64Index([0, 2, 4, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, - 21, 22, 23, 24, 25]) - elidx = np.array([0, 1, 2, 3, 4, 5, 6, 7, -1, 8, -1, 9, - -1, -1, -1, -1, -1, -1, -1], dtype=np.int64) - eridx = np.array([-1, -1, -1, -1, -1, -1, -1, -1, 10, 9, 8, 7, 6, - 5, 4, 3, 2, 1, 0], dtype=np.int64) - - self.assertIsInstance(res, Int64Index) - self.assertFalse(isinstance(res, RangeIndex)) - self.assertTrue(res.equals(eres)) - self.assert_numpy_array_equal(lidx, elidx) - self.assert_numpy_array_equal(ridx, eridx) - - # join with RangeIndex - other = RangeIndex(25, 14, -1) - - res, lidx, ridx = self.index.join(other, how='outer', - return_indexers=True) - noidx_res = self.index.join(other, how='outer') - self.assertTrue(res.equals(noidx_res)) - - self.assertIsInstance(res, Int64Index) - self.assertFalse(isinstance(res, RangeIndex)) - self.assertTrue(res.equals(eres)) - self.assert_numpy_array_equal(lidx, elidx) - self.assert_numpy_array_equal(ridx, eridx) - - def test_join_inner(self): - # Join with non-RangeIndex - other = Int64Index(np.arange(25, 14, -1)) - - res, lidx, ridx = self.index.join(other, how='inner', - return_indexers=True) - - # no guarantee of sortedness, so sort for comparison purposes - ind = res.argsort() - res = res.take(ind) - lidx = lidx.take(ind) - ridx = ridx.take(ind) - - eres = Int64Index([16, 18]) - elidx = np.array([8, 9]) - eridx = np.array([9, 7]) - - self.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) - 
self.assert_numpy_array_equal(lidx, elidx) - self.assert_numpy_array_equal(ridx, eridx) - - # Join two RangeIndex - other = RangeIndex(25, 14, -1) - - res, lidx, ridx = self.index.join(other, how='inner', - return_indexers=True) - - self.assertIsInstance(res, RangeIndex) - self.assertTrue(res.equals(eres)) - self.assert_numpy_array_equal(lidx, elidx) - self.assert_numpy_array_equal(ridx, eridx) - - def test_join_left(self): - # Join with Int64Index - other = Int64Index(np.arange(25, 14, -1)) - - res, lidx, ridx = self.index.join(other, how='left', - return_indexers=True) - eres = self.index - eridx = np.array([-1, -1, -1, -1, -1, -1, -1, -1, 9, 7], - dtype=np.int64) - - self.assertIsInstance(res, RangeIndex) - self.assertTrue(res.equals(eres)) - self.assertIsNone(lidx) - self.assert_numpy_array_equal(ridx, eridx) - - # Join with RangeIndex - other = RangeIndex(25, 14, -1) - - res, lidx, ridx = self.index.join(other, how='left', - return_indexers=True) - - self.assertIsInstance(res, RangeIndex) - self.assertTrue(res.equals(eres)) - self.assertIsNone(lidx) - self.assert_numpy_array_equal(ridx, eridx) - - def test_join_right(self): - # Join with Int64Index - other = Int64Index(np.arange(25, 14, -1)) - - res, lidx, ridx = self.index.join(other, how='right', - return_indexers=True) - eres = other - elidx = np.array([-1, -1, -1, -1, -1, -1, -1, 9, -1, 8, -1], - dtype=np.int64) - - self.assertIsInstance(other, Int64Index) - self.assertTrue(res.equals(eres)) - self.assert_numpy_array_equal(lidx, elidx) - self.assertIsNone(ridx) - - # Join with RangeIndex - other = RangeIndex(25, 14, -1) - - res, lidx, ridx = self.index.join(other, how='right', - return_indexers=True) - eres = other - - self.assertIsInstance(other, RangeIndex) - self.assertTrue(res.equals(eres)) - self.assert_numpy_array_equal(lidx, elidx) - self.assertIsNone(ridx) - - def test_join_non_int_index(self): - other = Index([3, 6, 7, 8, 10], dtype=object) - - outer = self.index.join(other, how='outer')
- outer2 = other.join(self.index, how='outer') - expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, - 16, 18], dtype=object) - self.assertTrue(outer.equals(outer2)) - self.assertTrue(outer.equals(expected)) - - inner = self.index.join(other, how='inner') - inner2 = other.join(self.index, how='inner') - expected = Index([6, 8, 10], dtype=object) - self.assertTrue(inner.equals(inner2)) - self.assertTrue(inner.equals(expected)) - - left = self.index.join(other, how='left') - self.assertTrue(left.equals(self.index)) - - left2 = other.join(self.index, how='left') - self.assertTrue(left2.equals(other)) - - right = self.index.join(other, how='right') - self.assertTrue(right.equals(other)) - - right2 = other.join(self.index, how='right') - self.assertTrue(right2.equals(self.index)) - - def test_join_non_unique(self): - other = Index([4, 4, 3, 3]) - - res, lidx, ridx = self.index.join(other, return_indexers=True) - - eres = Int64Index([0, 2, 4, 4, 6, 8, 10, 12, 14, 16, 18]) - elidx = np.array([0, 1, 2, 2, 3, 4, 5, 6, 7, 8, 9], dtype=np.int64) - eridx = np.array([-1, -1, 0, 1, -1, -1, -1, -1, -1, -1, -1], - dtype=np.int64) - - self.assertTrue(res.equals(eres)) - self.assert_numpy_array_equal(lidx, elidx) - self.assert_numpy_array_equal(ridx, eridx) - - def test_join_self(self): - kinds = 'outer', 'inner', 'left', 'right' - for kind in kinds: - joined = self.index.join(self.index, how=kind) - self.assertIs(self.index, joined) - - def test_intersection(self): - # intersect with Int64Index - other = Index(np.arange(1, 6)) - result = self.index.intersection(other) - expected = np.sort(np.intersect1d(self.index.values, other.values)) - self.assert_numpy_array_equal(result, expected) - - result = other.intersection(self.index) - expected = np.sort(np.asarray(np.intersect1d(self.index.values, - other.values))) - self.assert_numpy_array_equal(result, expected) - - # intersect with increasing RangeIndex - other = RangeIndex(1, 6) - result = self.index.intersection(other) - expected = 
np.sort(np.intersect1d(self.index.values, other.values)) - self.assert_numpy_array_equal(result, expected) - - # intersect with decreasing RangeIndex - other = RangeIndex(5, 0, -1) - result = self.index.intersection(other) - expected = np.sort(np.intersect1d(self.index.values, other.values)) - self.assert_numpy_array_equal(result, expected) - - def test_intersect_str_dates(self): - dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] - - i1 = Index(dt_dates, dtype=object) - i2 = Index(['aa'], dtype=object) - res = i2.intersection(i1) - - self.assertEqual(len(res), 0) - - def test_union_noncomparable(self): - from datetime import datetime, timedelta - # corner case, non-Int64Index - now = datetime.now() - other = Index([now + timedelta(i) for i in range(4)], dtype=object) - result = self.index.union(other) - expected = np.concatenate((self.index, other)) - self.assert_numpy_array_equal(result, expected) - - result = other.union(self.index) - expected = np.concatenate((other, self.index)) - self.assert_numpy_array_equal(result, expected) - - def test_union(self): - RI = RangeIndex - I64 = Int64Index - cases = [(RI(0, 10, 1), RI(0, 10, 1), RI(0, 10, 1)), - (RI(0, 10, 1), RI(5, 20, 1), RI(0, 20, 1)), - (RI(0, 10, 1), RI(10, 20, 1), RI(0, 20, 1)), - (RI(0, -10, -1), RI(0, -10, -1), RI(0, -10, -1)), - (RI(0, -10, -1), RI(-10, -20, -1), RI(-19, 1, 1)), - (RI(0, 10, 2), RI(1, 10, 2), RI(0, 10, 1)), - (RI(0, 11, 2), RI(1, 12, 2), RI(0, 12, 1)), - (RI(0, 21, 4), RI(-2, 24, 4), RI(-2, 24, 2)), - (RI(0, -20, -2), RI(-1, -21, -2), RI(-19, 1, 1)), - (RI(0, 100, 5), RI(0, 100, 20), RI(0, 100, 5)), - (RI(0, -100, -5), RI(5, -100, -20), RI(-95, 10, 5)), - (RI(0, -11, -1), RI(1, -12, -4), RI(-11, 2, 1)), - (RI(), RI(), RI()), - (RI(0, -10, -2), RI(), RI(0, -10, -2)), - (RI(0, 100, 2), RI(100, 150, 200), RI(0, 102, 2)), - (RI(0, -100, -2), RI(-100, 50, 102), RI(-100, 4, 2)), - (RI(0, -100, -1), RI(0, -50, -3), RI(-99, 1, 1)), - (RI(0, 1, 1), RI(5, 6, 10), RI(0, 6, 5)), - (RI(0, 
10, 5), RI(-5, -6, -20), RI(-5, 10, 5)), - (RI(0, 3, 1), RI(4, 5, 1), I64([0, 1, 2, 4])), - (RI(0, 10, 1), I64([]), RI(0, 10, 1)), - (RI(), I64([1, 5, 6]), I64([1, 5, 6]))] - for idx1, idx2, expected in cases: - res1 = idx1.union(idx2) - res2 = idx2.union(idx1) - res3 = idx1._int64index.union(idx2) - tm.assert_index_equal(res1, expected, exact=True) - tm.assert_index_equal(res2, expected, exact=True) - tm.assert_index_equal(res3, expected) - - def test_nbytes(self): - - # memory savings vs int index - i = RangeIndex(0, 1000) - self.assertTrue(i.nbytes < i.astype(int).nbytes / 10) - - # constant memory usage - i2 = RangeIndex(0, 10) - self.assertEqual(i.nbytes, i2.nbytes) - - def test_cant_or_shouldnt_cast(self): - # can't - self.assertRaises(TypeError, RangeIndex, 'foo', 'bar', 'baz') - - # shouldn't - self.assertRaises(TypeError, RangeIndex, '0', '1', '2') - - def test_view_Index(self): - self.index.view(Index) - - def test_prevent_casting(self): - result = self.index.astype('O') - self.assertEqual(result.dtype, np.object_) - - def test_take_preserve_name(self): - index = RangeIndex(1, 5, name='foo') - taken = index.take([3, 0, 1]) - self.assertEqual(index.name, taken.name) - - def test_print_unicode_columns(self): - df = pd.DataFrame({u("\u05d0"): [1, 2, 3], - "\u05d1": [4, 5, 6], - "c": [7, 8, 9]}) - repr(df.columns) # should not raise UnicodeDecodeError - - def test_repr_roundtrip(self): - tm.assert_index_equal(eval(repr(self.index)), self.index) - - def test_slice_keep_name(self): - idx = RangeIndex(1, 2, name='asdf') - self.assertEqual(idx.name, idx[1:].name) - - def test_explicit_conversions(self): - - # GH 8608 - # add/sub are overridden explicitly for Float/Int Index - idx = RangeIndex(5) - - # float conversions - arr = np.arange(5, dtype='int64') * 3.2 - expected = Float64Index(arr) - fidx = idx * 3.2 - tm.assert_index_equal(fidx, expected) - fidx = 3.2 * idx - tm.assert_index_equal(fidx, expected) - - # interops with numpy arrays - expected = 
Float64Index(arr) - a = np.zeros(5, dtype='float64') - result = fidx - a - tm.assert_index_equal(result, expected) - - expected = Float64Index(-arr) - a = np.zeros(5, dtype='float64') - result = a - fidx - tm.assert_index_equal(result, expected) - - def test_duplicates(self): - for ind in self.indices: - if not len(ind): - continue - idx = self.indices[ind] - self.assertTrue(idx.is_unique) - self.assertFalse(idx.has_duplicates) - - def test_ufunc_compat(self): - idx = RangeIndex(5) - result = np.sin(idx) - expected = Float64Index(np.sin(np.arange(5, dtype='int64'))) - tm.assert_index_equal(result, expected) - - def test_extended_gcd(self): - result = self.index._extended_gcd(6, 10) - self.assertEqual(result[0], result[1] * 6 + result[2] * 10) - self.assertEqual(2, result[0]) - - result = self.index._extended_gcd(10, 6) - self.assertEqual(2, result[1] * 10 + result[2] * 6) - self.assertEqual(2, result[0]) - - def test_min_fitting_element(self): - result = RangeIndex(0, 20, 2)._min_fitting_element(1) - self.assertEqual(2, result) - - result = RangeIndex(1, 6)._min_fitting_element(1) - self.assertEqual(1, result) - - result = RangeIndex(18, -2, -2)._min_fitting_element(1) - self.assertEqual(2, result) - - result = RangeIndex(5, 0, -1)._min_fitting_element(1) - self.assertEqual(1, result) - - big_num = 500000000000000000000000 - - result = RangeIndex(5, big_num * 2, 1)._min_fitting_element(big_num) - self.assertEqual(big_num, result) - - def test_max_fitting_element(self): - result = RangeIndex(0, 20, 2)._max_fitting_element(17) - self.assertEqual(16, result) - - result = RangeIndex(1, 6)._max_fitting_element(4) - self.assertEqual(4, result) - - result = RangeIndex(18, -2, -2)._max_fitting_element(17) - self.assertEqual(16, result) - - result = RangeIndex(5, 0, -1)._max_fitting_element(4) - self.assertEqual(4, result) - - big_num = 500000000000000000000000 - - result = RangeIndex(5, big_num * 2, 1)._max_fitting_element(big_num) - self.assertEqual(big_num, result) - - 
def test_pickle_compat_construction(self): - # RangeIndex() is a valid constructor - pass - - def test_slice_specialised(self): - - # scalar indexing - res = self.index[1] - expected = 2 - self.assertEqual(res, expected) - - res = self.index[-1] - expected = 18 - self.assertEqual(res, expected) - - # slicing - # slice value completion - index = self.index[:] - expected = self.index - self.assert_numpy_array_equal(index, expected) - - # positive slice values - index = self.index[7:10:2] - expected = np.array([14, 18]) - self.assert_numpy_array_equal(index, expected) - - # negative slice values - index = self.index[-1:-5:-2] - expected = np.array([18, 14]) - self.assert_numpy_array_equal(index, expected) - - # stop overshoot - index = self.index[2:100:4] - expected = np.array([4, 12]) - self.assert_numpy_array_equal(index, expected) - - # reverse - index = self.index[::-1] - expected = self.index.values[::-1] - self.assert_numpy_array_equal(index, expected) - - index = self.index[-8::-1] - expected = np.array([4, 2, 0]) - self.assert_numpy_array_equal(index, expected) - - index = self.index[-40::-1] - expected = np.array([]) - self.assert_numpy_array_equal(index, expected) - - index = self.index[40::-1] - expected = self.index.values[40::-1] - self.assert_numpy_array_equal(index, expected) - - index = self.index[10::-1] - expected = self.index.values[::-1] - self.assert_numpy_array_equal(index, expected) - - def test_len_specialised(self): - - # make sure that our len is the same as - # np.arange calc - - for step in np.arange(1, 6, 1): - - arr = np.arange(0, 5, step) - i = RangeIndex(0, 5, step) - self.assertEqual(len(i), len(arr)) - - i = RangeIndex(5, 0, step) - self.assertEqual(len(i), 0) - - for step in np.arange(-6, -1, 1): - - arr = np.arange(5, 0, step) - i = RangeIndex(5, 0, step) - self.assertEqual(len(i), len(arr)) - - i = RangeIndex(0, 5, step) - self.assertEqual(len(i), 0) - - -class DatetimeLike(Base): - - def test_shift_identity(self): - - idx = 
self.create_index() - self.assert_index_equal(idx, idx.shift(0)) - - def test_str(self): - - # test the string repr - idx = self.create_index() - idx.name = 'foo' - self.assertFalse("length=%s" % len(idx) in str(idx)) - self.assertTrue("'foo'" in str(idx)) - self.assertTrue(idx.__class__.__name__ in str(idx)) - - if hasattr(idx, 'tz'): - if idx.tz is not None: - self.assertTrue(idx.tz in str(idx)) - if hasattr(idx, 'freq'): - self.assertTrue("freq='%s'" % idx.freqstr in str(idx)) - - def test_view(self): - super(DatetimeLike, self).test_view() - - i = self.create_index() - - i_view = i.view('i8') - result = self._holder(i) - tm.assert_index_equal(result, i) - - i_view = i.view(self._holder) - result = self._holder(i) - tm.assert_index_equal(result, i_view) - - -class TestDatetimeIndex(DatetimeLike, tm.TestCase): - _holder = DatetimeIndex - _multiprocess_can_split_ = True - - def setUp(self): - self.indices = dict(index=tm.makeDateIndex(10)) - self.setup_indices() - - def create_index(self): - return date_range('20130101', periods=5) - - def test_shift(self): - - # test shift for datetimeIndex and non datetimeIndex - # GH8083 - - drange = self.create_index() - result = drange.shift(1) - expected = DatetimeIndex(['2013-01-02', '2013-01-03', '2013-01-04', - '2013-01-05', - '2013-01-06'], freq='D') - self.assert_index_equal(result, expected) - - result = drange.shift(-1) - expected = DatetimeIndex(['2012-12-31', '2013-01-01', '2013-01-02', - '2013-01-03', '2013-01-04'], - freq='D') - self.assert_index_equal(result, expected) - - result = drange.shift(3, freq='2D') - expected = DatetimeIndex(['2013-01-07', '2013-01-08', '2013-01-09', - '2013-01-10', - '2013-01-11'], freq='D') - self.assert_index_equal(result, expected) - - def test_construction_with_alt(self): - - i = pd.date_range('20130101', periods=5, freq='H', tz='US/Eastern') - i2 = DatetimeIndex(i, dtype=i.dtype) - self.assert_index_equal(i, i2) - - i2 = DatetimeIndex(i.tz_localize(None).asi8, tz=i.dtype.tz) - 
self.assert_index_equal(i, i2) - - i2 = DatetimeIndex(i.tz_localize(None).asi8, dtype=i.dtype) - self.assert_index_equal(i, i2) - - i2 = DatetimeIndex( - i.tz_localize(None).asi8, dtype=i.dtype, tz=i.dtype.tz) - self.assert_index_equal(i, i2) - - # localize into the provided tz - i2 = DatetimeIndex(i.tz_localize(None).asi8, tz='UTC') - expected = i.tz_localize(None).tz_localize('UTC') - self.assert_index_equal(i2, expected) - - i2 = DatetimeIndex(i, tz='UTC') - expected = i.tz_convert('UTC') - self.assert_index_equal(i2, expected) - - # incompat tz/dtype - self.assertRaises(ValueError, lambda: DatetimeIndex( - i.tz_localize(None).asi8, dtype=i.dtype, tz='US/Pacific')) - - def test_pickle_compat_construction(self): - pass - - def test_construction_index_with_mixed_timezones(self): - # GH 11488 - # no tz results in DatetimeIndex - result = Index( - [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx') - exp = DatetimeIndex( - [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - self.assertIsNone(result.tz) - - # same tz results in DatetimeIndex - result = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - Timestamp('2011-01-02 10:00', tz='Asia/Tokyo')], - name='idx') - exp = DatetimeIndex( - [Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00') - ], tz='Asia/Tokyo', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - self.assertIsNotNone(result.tz) - self.assertEqual(result.tz, exp.tz) - - # same tz results in DatetimeIndex (DST) - result = Index([Timestamp('2011-01-01 10:00', tz='US/Eastern'), - Timestamp('2011-08-01 10:00', tz='US/Eastern')], - name='idx') - exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), - Timestamp('2011-08-01 10:00')], - tz='US/Eastern', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, 
DatetimeIndex)) - self.assertIsNotNone(result.tz) - self.assertEqual(result.tz, exp.tz) - - # different tz results in Index(dtype=object) - result = Index([Timestamp('2011-01-01 10:00'), - Timestamp('2011-01-02 10:00', tz='US/Eastern')], - name='idx') - exp = Index([Timestamp('2011-01-01 10:00'), - Timestamp('2011-01-02 10:00', tz='US/Eastern')], - dtype='object', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertFalse(isinstance(result, DatetimeIndex)) - - result = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - Timestamp('2011-01-02 10:00', tz='US/Eastern')], - name='idx') - exp = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - Timestamp('2011-01-02 10:00', tz='US/Eastern')], - dtype='object', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertFalse(isinstance(result, DatetimeIndex)) - - # passing tz results in DatetimeIndex - result = Index([Timestamp('2011-01-01 10:00'), - Timestamp('2011-01-02 10:00', tz='US/Eastern')], - tz='Asia/Tokyo', name='idx') - exp = DatetimeIndex([Timestamp('2011-01-01 19:00'), - Timestamp('2011-01-03 00:00')], - tz='Asia/Tokyo', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - - # length = 1 - result = Index([Timestamp('2011-01-01')], name='idx') - exp = DatetimeIndex([Timestamp('2011-01-01')], name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - self.assertIsNone(result.tz) - - # length = 1 with tz - result = Index( - [Timestamp('2011-01-01 10:00', tz='Asia/Tokyo')], name='idx') - exp = DatetimeIndex([Timestamp('2011-01-01 10:00')], tz='Asia/Tokyo', - name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - self.assertIsNotNone(result.tz) - self.assertEqual(result.tz, exp.tz) - - def test_construction_index_with_mixed_timezones_with_NaT(self): - # GH 11488 - result = 
Index([pd.NaT, Timestamp('2011-01-01'), - pd.NaT, Timestamp('2011-01-02')], name='idx') - exp = DatetimeIndex([pd.NaT, Timestamp('2011-01-01'), - pd.NaT, Timestamp('2011-01-02')], name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - self.assertIsNone(result.tz) - - # same tz results in DatetimeIndex - result = Index([pd.NaT, Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - pd.NaT, Timestamp('2011-01-02 10:00', - tz='Asia/Tokyo')], - name='idx') - exp = DatetimeIndex([pd.NaT, Timestamp('2011-01-01 10:00'), - pd.NaT, Timestamp('2011-01-02 10:00')], - tz='Asia/Tokyo', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - self.assertIsNotNone(result.tz) - self.assertEqual(result.tz, exp.tz) - - # same tz results in DatetimeIndex (DST) - result = Index([Timestamp('2011-01-01 10:00', tz='US/Eastern'), - pd.NaT, - Timestamp('2011-08-01 10:00', tz='US/Eastern')], - name='idx') - exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), pd.NaT, - Timestamp('2011-08-01 10:00')], - tz='US/Eastern', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - self.assertIsNotNone(result.tz) - self.assertEqual(result.tz, exp.tz) - - # different tz results in Index(dtype=object) - result = Index([pd.NaT, Timestamp('2011-01-01 10:00'), - pd.NaT, Timestamp('2011-01-02 10:00', - tz='US/Eastern')], - name='idx') - exp = Index([pd.NaT, Timestamp('2011-01-01 10:00'), - pd.NaT, Timestamp('2011-01-02 10:00', tz='US/Eastern')], - dtype='object', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertFalse(isinstance(result, DatetimeIndex)) - - result = Index([pd.NaT, Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - pd.NaT, Timestamp('2011-01-02 10:00', - tz='US/Eastern')], name='idx') - exp = Index([pd.NaT, Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - pd.NaT, Timestamp('2011-01-02 
10:00', tz='US/Eastern')], - dtype='object', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertFalse(isinstance(result, DatetimeIndex)) - - # passing tz results in DatetimeIndex - result = Index([pd.NaT, Timestamp('2011-01-01 10:00'), - pd.NaT, Timestamp('2011-01-02 10:00', - tz='US/Eastern')], - tz='Asia/Tokyo', name='idx') - exp = DatetimeIndex([pd.NaT, Timestamp('2011-01-01 19:00'), - pd.NaT, Timestamp('2011-01-03 00:00')], - tz='Asia/Tokyo', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - - # all NaT - result = Index([pd.NaT, pd.NaT], name='idx') - exp = DatetimeIndex([pd.NaT, pd.NaT], name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - self.assertIsNone(result.tz) - - # all NaT with tz - result = Index([pd.NaT, pd.NaT], tz='Asia/Tokyo', name='idx') - exp = DatetimeIndex([pd.NaT, pd.NaT], tz='Asia/Tokyo', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - self.assertIsNotNone(result.tz) - self.assertEqual(result.tz, exp.tz) - - def test_construction_dti_with_mixed_timezones(self): - # GH 11488 (not changed, added explicit tests) - - # no tz results in DatetimeIndex - result = DatetimeIndex( - [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx') - exp = DatetimeIndex( - [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - - # same tz results in DatetimeIndex - result = DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - Timestamp('2011-01-02 10:00', - tz='Asia/Tokyo')], - name='idx') - exp = DatetimeIndex( - [Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00') - ], tz='Asia/Tokyo', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, 
DatetimeIndex)) - - # same tz results in DatetimeIndex (DST) - result = DatetimeIndex([Timestamp('2011-01-01 10:00', tz='US/Eastern'), - Timestamp('2011-08-01 10:00', - tz='US/Eastern')], - name='idx') - exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), - Timestamp('2011-08-01 10:00')], - tz='US/Eastern', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - - # different tz coerces tz-naive to tz-aware - result = DatetimeIndex([Timestamp('2011-01-01 10:00'), - Timestamp('2011-01-02 10:00', - tz='US/Eastern')], name='idx') - exp = DatetimeIndex([Timestamp('2011-01-01 05:00'), - Timestamp('2011-01-02 10:00')], - tz='US/Eastern', name='idx') - self.assert_index_equal(result, exp, exact=True) - self.assertTrue(isinstance(result, DatetimeIndex)) - - # tz mismatch with tz-aware timestamps raises TypeError/ValueError - with tm.assertRaises(ValueError): - DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - Timestamp('2011-01-02 10:00', tz='US/Eastern')], - name='idx') - - with tm.assertRaises(TypeError): - DatetimeIndex([Timestamp('2011-01-01 10:00'), - Timestamp('2011-01-02 10:00', tz='US/Eastern')], - tz='Asia/Tokyo', name='idx') - - with tm.assertRaises(ValueError): - DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'), - Timestamp('2011-01-02 10:00', tz='US/Eastern')], - tz='US/Eastern', name='idx') - - def test_get_loc(self): - idx = pd.date_range('2000-01-01', periods=3) - - for method in [None, 'pad', 'backfill', 'nearest']: - self.assertEqual(idx.get_loc(idx[1], method), 1) - self.assertEqual(idx.get_loc(idx[1].to_pydatetime(), method), 1) - self.assertEqual(idx.get_loc(str(idx[1]), method), 1) - if method is not None: - self.assertEqual(idx.get_loc(idx[1], method, - tolerance=pd.Timedelta('0 days')), - 1) - - self.assertEqual(idx.get_loc('2000-01-01', method='nearest'), 0) - self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest'), 1) - - 
self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest', - tolerance='1 day'), 1) - self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest', - tolerance=pd.Timedelta('1D')), 1) - self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest', - tolerance=np.timedelta64(1, 'D')), 1) - self.assertEqual(idx.get_loc('2000-01-01T12', method='nearest', - tolerance=timedelta(1)), 1) - with tm.assertRaisesRegexp(ValueError, 'must be convertible'): - idx.get_loc('2000-01-01T12', method='nearest', tolerance='foo') - with tm.assertRaises(KeyError): - idx.get_loc('2000-01-01T03', method='nearest', tolerance='2 hours') - - self.assertEqual(idx.get_loc('2000', method='nearest'), slice(0, 3)) - self.assertEqual(idx.get_loc('2000-01', method='nearest'), slice(0, 3)) - - self.assertEqual(idx.get_loc('1999', method='nearest'), 0) - self.assertEqual(idx.get_loc('2001', method='nearest'), 2) - - with tm.assertRaises(KeyError): - idx.get_loc('1999', method='pad') - with tm.assertRaises(KeyError): - idx.get_loc('2001', method='backfill') - - with tm.assertRaises(KeyError): - idx.get_loc('foobar') - with tm.assertRaises(TypeError): - idx.get_loc(slice(2)) - - idx = pd.to_datetime(['2000-01-01', '2000-01-04']) - self.assertEqual(idx.get_loc('2000-01-02', method='nearest'), 0) - self.assertEqual(idx.get_loc('2000-01-03', method='nearest'), 1) - self.assertEqual(idx.get_loc('2000-01', method='nearest'), slice(0, 2)) - - # time indexing - idx = pd.date_range('2000-01-01', periods=24, freq='H') - tm.assert_numpy_array_equal(idx.get_loc(time(12)), [12]) - tm.assert_numpy_array_equal(idx.get_loc(time(12, 30)), []) - with tm.assertRaises(NotImplementedError): - idx.get_loc(time(12, 30), method='pad') - - def test_get_indexer(self): - idx = pd.date_range('2000-01-01', periods=3) - tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) - - target = idx[0] + pd.to_timedelta(['-1 hour', '12 hours', - '1 day 1 hour']) - tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), 
[-1, 0, 1]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'backfill'), [0, 1, 2]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest'), [0, 1, 1]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest', - tolerance=pd.Timedelta('1 hour')), - [0, -1, 1]) - with tm.assertRaises(ValueError): - idx.get_indexer(idx[[0]], method='nearest', tolerance='foo') - - def test_roundtrip_pickle_with_tz(self): - - # GH 8367 - # round-trip of timezone - index = date_range('20130101', periods=3, tz='US/Eastern', name='foo') - unpickled = self.round_trip_pickle(index) - self.assertTrue(index.equals(unpickled)) - - def test_reindex_preserves_tz_if_target_is_empty_list_or_array(self): - # GH7774 - index = date_range('20130101', periods=3, tz='US/Eastern') - self.assertEqual(str(index.reindex([])[0].tz), 'US/Eastern') - self.assertEqual(str(index.reindex(np.array([]))[0].tz), 'US/Eastern') - - def test_time_loc(self): # GH8667 - from datetime import time - from pandas.index import _SIZE_CUTOFF - - ns = _SIZE_CUTOFF + np.array([-100, 100], dtype=np.int64) - key = time(15, 11, 30) - start = key.hour * 3600 + key.minute * 60 + key.second - step = 24 * 3600 - - for n in ns: - idx = pd.date_range('2014-11-26', periods=n, freq='S') - ts = pd.Series(np.random.randn(n), index=idx) - i = np.arange(start, n, step) - - tm.assert_numpy_array_equal(ts.index.get_loc(key), i) - tm.assert_series_equal(ts[key], ts.iloc[i]) - - left, right = ts.copy(), ts.copy() - left[key] *= -10 - right.iloc[i] *= -10 - tm.assert_series_equal(left, right) - - def test_time_overflow_for_32bit_machines(self): - # GH8943. On some machines NumPy defaults to np.int32 (for example, - # 32-bit Linux machines). In the function _generate_regular_range - # found in tseries/index.py, `periods` gets multiplied by `strides` - # (which has value 1e9) and since the max value for np.int32 is ~2e9, - # and since those machines won't promote np.int32 to np.int64, we get - # overflow. 
- periods = np.int_(1000) - - idx1 = pd.date_range(start='2000', periods=periods, freq='S') - self.assertEqual(len(idx1), periods) - - idx2 = pd.date_range(end='2000', periods=periods, freq='S') - self.assertEqual(len(idx2), periods) - - def test_intersection(self): - first = self.index - second = self.index[5:] - intersect = first.intersection(second) - self.assertTrue(tm.equalContents(intersect, second)) - - # GH 10149 - cases = [klass(second.values) for klass in [np.array, Series, list]] - for case in cases: - result = first.intersection(case) - self.assertTrue(tm.equalContents(result, second)) - - third = Index(['a', 'b', 'c']) - result = first.intersection(third) - expected = pd.Index([], dtype=object) - self.assert_index_equal(result, expected) - - def test_union(self): - first = self.index[:5] - second = self.index[5:] - everything = self.index - union = first.union(second) - self.assertTrue(tm.equalContents(union, everything)) - - # GH 10149 - cases = [klass(second.values) for klass in [np.array, Series, list]] - for case in cases: - result = first.union(case) - self.assertTrue(tm.equalContents(result, everything)) - - def test_nat(self): - self.assertIs(DatetimeIndex([np.nan])[0], pd.NaT) - - def test_ufunc_coercions(self): - idx = date_range('2011-01-01', periods=3, freq='2D', name='x') - - delta = np.timedelta64(1, 'D') - for result in [idx + delta, np.add(idx, delta)]: - tm.assertIsInstance(result, DatetimeIndex) - exp = date_range('2011-01-02', periods=3, freq='2D', name='x') - tm.assert_index_equal(result, exp) - self.assertEqual(result.freq, '2D') - - for result in [idx - delta, np.subtract(idx, delta)]: - tm.assertIsInstance(result, DatetimeIndex) - exp = date_range('2010-12-31', periods=3, freq='2D', name='x') - tm.assert_index_equal(result, exp) - self.assertEqual(result.freq, '2D') - - delta = np.array([np.timedelta64(1, 'D'), np.timedelta64(2, 'D'), - np.timedelta64(3, 'D')]) - for result in [idx + delta, np.add(idx, delta)]: - 
tm.assertIsInstance(result, DatetimeIndex) - exp = DatetimeIndex(['2011-01-02', '2011-01-05', '2011-01-08'], - freq='3D', name='x') - tm.assert_index_equal(result, exp) - self.assertEqual(result.freq, '3D') - - for result in [idx - delta, np.subtract(idx, delta)]: - tm.assertIsInstance(result, DatetimeIndex) - exp = DatetimeIndex(['2010-12-31', '2011-01-01', '2011-01-02'], - freq='D', name='x') - tm.assert_index_equal(result, exp) - self.assertEqual(result.freq, 'D') - - def test_fillna_datetime64(self): - # GH 11343 - for tz in ['US/Eastern', 'Asia/Tokyo']: - idx = pd.DatetimeIndex(['2011-01-01 09:00', pd.NaT, - '2011-01-01 11:00']) - - exp = pd.DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', - '2011-01-01 11:00']) - self.assert_index_equal( - idx.fillna(pd.Timestamp('2011-01-01 10:00')), exp) - - # tz mismatch - exp = pd.Index([pd.Timestamp('2011-01-01 09:00'), - pd.Timestamp('2011-01-01 10:00', tz=tz), - pd.Timestamp('2011-01-01 11:00')], dtype=object) - self.assert_index_equal( - idx.fillna(pd.Timestamp('2011-01-01 10:00', tz=tz)), exp) - - # object - exp = pd.Index([pd.Timestamp('2011-01-01 09:00'), 'x', - pd.Timestamp('2011-01-01 11:00')], dtype=object) - self.assert_index_equal(idx.fillna('x'), exp) - - idx = pd.DatetimeIndex( - ['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'], tz=tz) - - exp = pd.DatetimeIndex( - ['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00' - ], tz=tz) - self.assert_index_equal( - idx.fillna(pd.Timestamp('2011-01-01 10:00', tz=tz)), exp) - - exp = pd.Index([pd.Timestamp('2011-01-01 09:00', tz=tz), - pd.Timestamp('2011-01-01 10:00'), - pd.Timestamp('2011-01-01 11:00', tz=tz)], - dtype=object) - self.assert_index_equal( - idx.fillna(pd.Timestamp('2011-01-01 10:00')), exp) - - # object - exp = pd.Index([pd.Timestamp('2011-01-01 09:00', tz=tz), - 'x', - pd.Timestamp('2011-01-01 11:00', tz=tz)], - dtype=object) - self.assert_index_equal(idx.fillna('x'), exp) - - -class TestPeriodIndex(DatetimeLike, tm.TestCase): - _holder = 
PeriodIndex - _multiprocess_can_split_ = True - - def setUp(self): - self.indices = dict(index=tm.makePeriodIndex(10)) - self.setup_indices() - - def create_index(self): - return period_range('20130101', periods=5, freq='D') - - def test_shift(self): - - # test shift for PeriodIndex - # GH8083 - drange = self.create_index() - result = drange.shift(1) - expected = PeriodIndex(['2013-01-02', '2013-01-03', '2013-01-04', - '2013-01-05', '2013-01-06'], freq='D') - self.assert_index_equal(result, expected) - - def test_pickle_compat_construction(self): - pass - - def test_get_loc(self): - idx = pd.period_range('2000-01-01', periods=3) - - for method in [None, 'pad', 'backfill', 'nearest']: - self.assertEqual(idx.get_loc(idx[1], method), 1) - self.assertEqual( - idx.get_loc(idx[1].asfreq('H', how='start'), method), 1) - self.assertEqual(idx.get_loc(idx[1].to_timestamp(), method), 1) - self.assertEqual( - idx.get_loc(idx[1].to_timestamp().to_pydatetime(), method), 1) - self.assertEqual(idx.get_loc(str(idx[1]), method), 1) - - idx = pd.period_range('2000-01-01', periods=5)[::2] - self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest', - tolerance='1 day'), 1) - self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest', - tolerance=pd.Timedelta('1D')), 1) - self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest', - tolerance=np.timedelta64(1, 'D')), 1) - self.assertEqual(idx.get_loc('2000-01-02T12', method='nearest', - tolerance=timedelta(1)), 1) - with tm.assertRaisesRegexp(ValueError, 'must be convertible'): - idx.get_loc('2000-01-10', method='nearest', tolerance='foo') - - msg = 'Input has different freq from PeriodIndex\\(freq=D\\)' - with tm.assertRaisesRegexp(ValueError, msg): - idx.get_loc('2000-01-10', method='nearest', tolerance='1 hour') - with tm.assertRaises(KeyError): - idx.get_loc('2000-01-10', method='nearest', tolerance='1 day') - - def test_get_indexer(self): - idx = pd.period_range('2000-01-01', periods=3).asfreq('H', how='start') - 
-        tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2])
-
-        target = pd.PeriodIndex(['1999-12-31T23', '2000-01-01T12',
-                                 '2000-01-02T01'], freq='H')
-        tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1])
-        tm.assert_numpy_array_equal(
-            idx.get_indexer(target, 'backfill'), [0, 1, 2])
-        tm.assert_numpy_array_equal(
-            idx.get_indexer(target, 'nearest'), [0, 1, 1])
-        tm.assert_numpy_array_equal(
-            idx.get_indexer(target, 'nearest', tolerance='1 hour'),
-            [0, -1, 1])
-
-        msg = 'Input has different freq from PeriodIndex\\(freq=H\\)'
-        with self.assertRaisesRegexp(ValueError, msg):
-            idx.get_indexer(target, 'nearest', tolerance='1 minute')
-
-        tm.assert_numpy_array_equal(
-            idx.get_indexer(target, 'nearest', tolerance='1 day'), [0, 1, 1])
-
-    def test_repeat(self):
-        # GH10183
-        idx = pd.period_range('2000-01-01', periods=3, freq='D')
-        res = idx.repeat(3)
-        exp = PeriodIndex(idx.values.repeat(3), freq='D')
-        self.assert_index_equal(res, exp)
-        self.assertEqual(res.freqstr, 'D')
-
-    def test_period_index_indexer(self):
-
-        # GH4125
-        idx = pd.period_range('2002-01', '2003-12', freq='M')
-        df = pd.DataFrame(pd.np.random.randn(24, 10), index=idx)
-        self.assert_frame_equal(df, df.ix[idx])
-        self.assert_frame_equal(df, df.ix[list(idx)])
-        self.assert_frame_equal(df, df.loc[list(idx)])
-        self.assert_frame_equal(df.iloc[0:5], df.loc[idx[0:5]])
-        self.assert_frame_equal(df, df.loc[list(idx)])
-
-    def test_fillna_period(self):
-        # GH 11343
-        idx = pd.PeriodIndex(
-            ['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'], freq='H')
-
-        exp = pd.PeriodIndex(
-            ['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'
-             ], freq='H')
-        self.assert_index_equal(
-            idx.fillna(pd.Period('2011-01-01 10:00', freq='H')), exp)
-
-        exp = pd.Index([pd.Period('2011-01-01 09:00', freq='H'), 'x',
-                        pd.Period('2011-01-01 11:00', freq='H')], dtype=object)
-        self.assert_index_equal(idx.fillna('x'), exp)
-
-        with tm.assertRaisesRegexp(
-                ValueError,
-                'Input has different freq=D from PeriodIndex\\(freq=H\\)'):
-            idx.fillna(pd.Period('2011-01-01', freq='D'))
-
-    def test_no_millisecond_field(self):
-        with self.assertRaises(AttributeError):
-            DatetimeIndex.millisecond
-
-        with self.assertRaises(AttributeError):
-            DatetimeIndex([]).millisecond
-
-
-class TestTimedeltaIndex(DatetimeLike, tm.TestCase):
-    _holder = TimedeltaIndex
-    _multiprocess_can_split_ = True
-
-    def setUp(self):
-        self.indices = dict(index=tm.makeTimedeltaIndex(10))
-        self.setup_indices()
-
-    def create_index(self):
-        return pd.to_timedelta(range(5), unit='d') + pd.offsets.Hour(1)
-
-    def test_shift(self):
-        # test shift for TimedeltaIndex
-        # err8083
-
-        drange = self.create_index()
-        result = drange.shift(1)
-        expected = TimedeltaIndex(['1 days 01:00:00', '2 days 01:00:00',
-                                   '3 days 01:00:00',
-                                   '4 days 01:00:00', '5 days 01:00:00'],
-                                  freq='D')
-        self.assert_index_equal(result, expected)
-
-        result = drange.shift(3, freq='2D 1s')
-        expected = TimedeltaIndex(['6 days 01:00:03', '7 days 01:00:03',
-                                   '8 days 01:00:03', '9 days 01:00:03',
-                                   '10 days 01:00:03'], freq='D')
-        self.assert_index_equal(result, expected)
-
-    def test_get_loc(self):
-        idx = pd.to_timedelta(['0 days', '1 days', '2 days'])
-
-        for method in [None, 'pad', 'backfill', 'nearest']:
-            self.assertEqual(idx.get_loc(idx[1], method), 1)
-            self.assertEqual(idx.get_loc(idx[1].to_pytimedelta(), method), 1)
-            self.assertEqual(idx.get_loc(str(idx[1]), method), 1)
-
-        self.assertEqual(
-            idx.get_loc(idx[1], 'pad', tolerance=pd.Timedelta(0)), 1)
-        self.assertEqual(
-            idx.get_loc(idx[1], 'pad', tolerance=np.timedelta64(0, 's')), 1)
-        self.assertEqual(idx.get_loc(idx[1], 'pad', tolerance=timedelta(0)), 1)
-
-        with tm.assertRaisesRegexp(ValueError, 'must be convertible'):
-            idx.get_loc(idx[1], method='nearest', tolerance='foo')
-
-        for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]:
-            self.assertEqual(idx.get_loc('1 day 1 hour', method), loc)
-
-    def test_get_indexer(self):
-        idx = pd.to_timedelta(['0 days', '1 days', '2 days'])
-        tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2])
-
-        target = pd.to_timedelta(['-1 hour', '12 hours', '1 day 1 hour'])
-        tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1])
-        tm.assert_numpy_array_equal(
-            idx.get_indexer(target, 'backfill'), [0, 1, 2])
-        tm.assert_numpy_array_equal(
-            idx.get_indexer(target, 'nearest'), [0, 1, 1])
-        tm.assert_numpy_array_equal(
-            idx.get_indexer(target, 'nearest',
-                            tolerance=pd.Timedelta('1 hour')),
-            [0, -1, 1])
-
-    def test_numeric_compat(self):
-
-        idx = self._holder(np.arange(5, dtype='int64'))
-        didx = self._holder(np.arange(5, dtype='int64') ** 2)
-        result = idx * 1
-        tm.assert_index_equal(result, idx)
-
-        result = 1 * idx
-        tm.assert_index_equal(result, idx)
-
-        result = idx / 1
-        tm.assert_index_equal(result, idx)
-
-        result = idx // 1
-        tm.assert_index_equal(result, idx)
-
-        result = idx * np.array(5, dtype='int64')
-        tm.assert_index_equal(result,
-                              self._holder(np.arange(5, dtype='int64') * 5))
-
-        result = idx * np.arange(5, dtype='int64')
-        tm.assert_index_equal(result, didx)
-
-        result = idx * Series(np.arange(5, dtype='int64'))
-        tm.assert_index_equal(result, didx)
-
-        result = idx * Series(np.arange(5, dtype='float64') + 0.1)
-        tm.assert_index_equal(result, self._holder(np.arange(
-            5, dtype='float64') * (np.arange(5, dtype='float64') + 0.1)))
-
-        # invalid
-        self.assertRaises(TypeError, lambda: idx * idx)
-        self.assertRaises(ValueError, lambda: idx * self._holder(np.arange(3)))
-        self.assertRaises(ValueError, lambda: idx * np.array([1, 2]))
-
-    def test_pickle_compat_construction(self):
-        pass
-
-    def test_ufunc_coercions(self):
-        # normal ops are also tested in tseries/test_timedeltas.py
-        idx = TimedeltaIndex(['2H', '4H', '6H', '8H', '10H'],
-                             freq='2H', name='x')
-
-        for result in [idx * 2, np.multiply(idx, 2)]:
-            tm.assertIsInstance(result, TimedeltaIndex)
-            exp = TimedeltaIndex(['4H', '8H', '12H', '16H', '20H'],
-                                 freq='4H', name='x')
-            tm.assert_index_equal(result, exp)
-            self.assertEqual(result.freq, '4H')
-
-        for result in [idx / 2, np.divide(idx, 2)]:
-            tm.assertIsInstance(result, TimedeltaIndex)
-            exp = TimedeltaIndex(['1H', '2H', '3H', '4H', '5H'],
-                                 freq='H', name='x')
-            tm.assert_index_equal(result, exp)
-            self.assertEqual(result.freq, 'H')
-
-        idx = TimedeltaIndex(['2H', '4H', '6H', '8H', '10H'],
-                             freq='2H', name='x')
-        for result in [-idx, np.negative(idx)]:
-            tm.assertIsInstance(result, TimedeltaIndex)
-            exp = TimedeltaIndex(['-2H', '-4H', '-6H', '-8H', '-10H'],
-                                 freq='-2H', name='x')
-            tm.assert_index_equal(result, exp)
-            self.assertEqual(result.freq, '-2H')
-
-        idx = TimedeltaIndex(['-2H', '-1H', '0H', '1H', '2H'],
-                             freq='H', name='x')
-        for result in [abs(idx), np.absolute(idx)]:
-            tm.assertIsInstance(result, TimedeltaIndex)
-            exp = TimedeltaIndex(['2H', '1H', '0H', '1H', '2H'],
-                                 freq=None, name='x')
-            tm.assert_index_equal(result, exp)
-            self.assertEqual(result.freq, None)
-
-    def test_fillna_timedelta(self):
-        # GH 11343
-        idx = pd.TimedeltaIndex(['1 day', pd.NaT, '3 day'])
-
-        exp = pd.TimedeltaIndex(['1 day', '2 day', '3 day'])
-        self.assert_index_equal(idx.fillna(pd.Timedelta('2 day')), exp)
-
-        exp = pd.TimedeltaIndex(['1 day', '3 hour', '3 day'])
-        idx.fillna(pd.Timedelta('3 hour'))
-
-        exp = pd.Index(
-            [pd.Timedelta('1 day'), 'x', pd.Timedelta('3 day')], dtype=object)
-        self.assert_index_equal(idx.fillna('x'), exp)
-
-
-class TestMultiIndex(Base, tm.TestCase):
-    _holder = MultiIndex
-    _multiprocess_can_split_ = True
-    _compat_props = ['shape', 'ndim', 'size', 'itemsize']
-
-    def setUp(self):
-        major_axis = Index(['foo', 'bar', 'baz', 'qux'])
-        minor_axis = Index(['one', 'two'])
-
-        major_labels = np.array([0, 0, 1, 2, 3, 3])
-        minor_labels = np.array([0, 1, 0, 1, 0, 1])
-        self.index_names = ['first', 'second']
-        self.indices = dict(index=MultiIndex(levels=[major_axis, minor_axis],
-                                             labels=[major_labels, minor_labels
-                                                     ], names=self.index_names,
-                                             verify_integrity=False))
-        self.setup_indices()
-
-    def create_index(self):
-        return self.index
-
-    def test_boolean_context_compat2(self):
-
-        # boolean context compat
-        # GH7897
-        i1 = MultiIndex.from_tuples([('A', 1), ('A', 2)])
-        i2 = MultiIndex.from_tuples([('A', 1), ('A', 3)])
-        common = i1.intersection(i2)
-
-        def f():
-            if common:
-                pass
-
-        tm.assertRaisesRegexp(ValueError, 'The truth value of a', f)
-
-    def test_labels_dtypes(self):
-
-        # GH 8456
-        i = MultiIndex.from_tuples([('A', 1), ('A', 2)])
-        self.assertTrue(i.labels[0].dtype == 'int8')
-        self.assertTrue(i.labels[1].dtype == 'int8')
-
-        i = MultiIndex.from_product([['a'], range(40)])
-        self.assertTrue(i.labels[1].dtype == 'int8')
-        i = MultiIndex.from_product([['a'], range(400)])
-        self.assertTrue(i.labels[1].dtype == 'int16')
-        i = MultiIndex.from_product([['a'], range(40000)])
-        self.assertTrue(i.labels[1].dtype == 'int32')
-
-        i = pd.MultiIndex.from_product([['a'], range(1000)])
-        self.assertTrue((i.labels[0] >= 0).all())
-        self.assertTrue((i.labels[1] >= 0).all())
-
-    def test_set_name_methods(self):
-        # so long as these are synonyms, we don't need to test set_names
-        self.assertEqual(self.index.rename, self.index.set_names)
-        new_names = [name + "SUFFIX" for name in self.index_names]
-        ind = self.index.set_names(new_names)
-        self.assertEqual(self.index.names, self.index_names)
-        self.assertEqual(ind.names, new_names)
-        with assertRaisesRegexp(ValueError, "^Length"):
-            ind.set_names(new_names + new_names)
-        new_names2 = [name + "SUFFIX2" for name in new_names]
-        res = ind.set_names(new_names2, inplace=True)
-        self.assertIsNone(res)
-        self.assertEqual(ind.names, new_names2)
-
-        # set names for specific level (# GH7792)
-        ind = self.index.set_names(new_names[0], level=0)
-        self.assertEqual(self.index.names, self.index_names)
-        self.assertEqual(ind.names, [new_names[0], self.index_names[1]])
-
-        res = ind.set_names(new_names2[0], level=0, inplace=True)
-        self.assertIsNone(res)
-        self.assertEqual(ind.names, [new_names2[0], self.index_names[1]])
-
-        # set names for multiple levels
-        ind = self.index.set_names(new_names, level=[0, 1])
-        self.assertEqual(self.index.names, self.index_names)
-        self.assertEqual(ind.names, new_names)
-
-        res = ind.set_names(new_names2, level=[0, 1], inplace=True)
-        self.assertIsNone(res)
-        self.assertEqual(ind.names, new_names2)
-
-    def test_set_levels(self):
-        # side note - you probably wouldn't want to use levels and labels
-        # directly like this - but it is possible.
-        levels = self.index.levels
-        new_levels = [[lev + 'a' for lev in level] for level in levels]
-
-        def assert_matching(actual, expected):
-            # avoid specifying internal representation
-            # as much as possible
-            self.assertEqual(len(actual), len(expected))
-            for act, exp in zip(actual, expected):
-                act = np.asarray(act)
-                exp = np.asarray(exp)
-                assert_almost_equal(act, exp)
-
-        # level changing [w/o mutation]
-        ind2 = self.index.set_levels(new_levels)
-        assert_matching(ind2.levels, new_levels)
-        assert_matching(self.index.levels, levels)
-
-        # level changing [w/ mutation]
-        ind2 = self.index.copy()
-        inplace_return = ind2.set_levels(new_levels, inplace=True)
-        self.assertIsNone(inplace_return)
-        assert_matching(ind2.levels, new_levels)
-
-        # level changing specific level [w/o mutation]
-        ind2 = self.index.set_levels(new_levels[0], level=0)
-        assert_matching(ind2.levels, [new_levels[0], levels[1]])
-        assert_matching(self.index.levels, levels)
-
-        ind2 = self.index.set_levels(new_levels[1], level=1)
-        assert_matching(ind2.levels, [levels[0], new_levels[1]])
-        assert_matching(self.index.levels, levels)
-
-        # level changing multiple levels [w/o mutation]
-        ind2 = self.index.set_levels(new_levels, level=[0, 1])
-        assert_matching(ind2.levels, new_levels)
-        assert_matching(self.index.levels, levels)
-
-        # level changing specific level [w/ mutation]
-        ind2 = self.index.copy()
-        inplace_return = ind2.set_levels(new_levels[0], level=0, inplace=True)
-        self.assertIsNone(inplace_return)
-        assert_matching(ind2.levels, [new_levels[0], levels[1]])
-        assert_matching(self.index.levels, levels)
-
-        ind2 = self.index.copy()
-        inplace_return = ind2.set_levels(new_levels[1], level=1, inplace=True)
-        self.assertIsNone(inplace_return)
-        assert_matching(ind2.levels, [levels[0], new_levels[1]])
-        assert_matching(self.index.levels, levels)
-
-        # level changing multiple levels [w/ mutation]
-        ind2 = self.index.copy()
-        inplace_return = ind2.set_levels(new_levels, level=[0, 1],
-                                         inplace=True)
-        self.assertIsNone(inplace_return)
-        assert_matching(ind2.levels, new_levels)
-        assert_matching(self.index.levels, levels)
-
-    def test_set_labels(self):
-        # side note - you probably wouldn't want to use levels and labels
-        # directly like this - but it is possible.
-        labels = self.index.labels
-        major_labels, minor_labels = labels
-        major_labels = [(x + 1) % 3 for x in major_labels]
-        minor_labels = [(x + 1) % 1 for x in minor_labels]
-        new_labels = [major_labels, minor_labels]
-
-        def assert_matching(actual, expected):
-            # avoid specifying internal representation
-            # as much as possible
-            self.assertEqual(len(actual), len(expected))
-            for act, exp in zip(actual, expected):
-                act = np.asarray(act)
-                exp = np.asarray(exp)
-                assert_almost_equal(act, exp)
-
-        # label changing [w/o mutation]
-        ind2 = self.index.set_labels(new_labels)
-        assert_matching(ind2.labels, new_labels)
-        assert_matching(self.index.labels, labels)
-
-        # label changing [w/ mutation]
-        ind2 = self.index.copy()
-        inplace_return = ind2.set_labels(new_labels, inplace=True)
-        self.assertIsNone(inplace_return)
-        assert_matching(ind2.labels, new_labels)
-
-        # label changing specific level [w/o mutation]
-        ind2 = self.index.set_labels(new_labels[0], level=0)
-        assert_matching(ind2.labels, [new_labels[0], labels[1]])
-        assert_matching(self.index.labels, labels)
-
-        ind2 = self.index.set_labels(new_labels[1], level=1)
-        assert_matching(ind2.labels, [labels[0], new_labels[1]])
-        assert_matching(self.index.labels, labels)
-
-        # label changing multiple levels [w/o mutation]
-        ind2 = self.index.set_labels(new_labels, level=[0, 1])
-        assert_matching(ind2.labels, new_labels)
-        assert_matching(self.index.labels, labels)
-
-        # label changing specific level [w/ mutation]
-        ind2 = self.index.copy()
-        inplace_return = ind2.set_labels(new_labels[0], level=0, inplace=True)
-        self.assertIsNone(inplace_return)
-        assert_matching(ind2.labels, [new_labels[0], labels[1]])
-        assert_matching(self.index.labels, labels)
-
-        ind2 = self.index.copy()
-        inplace_return = ind2.set_labels(new_labels[1], level=1, inplace=True)
-        self.assertIsNone(inplace_return)
-        assert_matching(ind2.labels, [labels[0], new_labels[1]])
-        assert_matching(self.index.labels, labels)
-
-        # label changing multiple levels [w/ mutation]
-        ind2 = self.index.copy()
-        inplace_return = ind2.set_labels(new_labels, level=[0, 1],
-                                         inplace=True)
-        self.assertIsNone(inplace_return)
-        assert_matching(ind2.labels, new_labels)
-        assert_matching(self.index.labels, labels)
-
-    def test_set_levels_labels_names_bad_input(self):
-        levels, labels = self.index.levels, self.index.labels
-        names = self.index.names
-
-        with tm.assertRaisesRegexp(ValueError, 'Length of levels'):
-            self.index.set_levels([levels[0]])
-
-        with tm.assertRaisesRegexp(ValueError, 'Length of labels'):
-            self.index.set_labels([labels[0]])
-
-        with tm.assertRaisesRegexp(ValueError, 'Length of names'):
-            self.index.set_names([names[0]])
-
-        # shouldn't scalar data error, instead should demand list-like
-        with tm.assertRaisesRegexp(TypeError, 'list of lists-like'):
-            self.index.set_levels(levels[0])
-
-        # shouldn't scalar data error, instead should demand list-like
-        with tm.assertRaisesRegexp(TypeError, 'list of lists-like'):
-            self.index.set_labels(labels[0])
-
-        # shouldn't scalar data error, instead should demand list-like
-        with tm.assertRaisesRegexp(TypeError, 'list-like'):
-            self.index.set_names(names[0])
-
-        # should have equal lengths
-        with tm.assertRaisesRegexp(TypeError, 'list of lists-like'):
-            self.index.set_levels(levels[0], level=[0, 1])
-
-        with tm.assertRaisesRegexp(TypeError, 'list-like'):
-            self.index.set_levels(levels, level=0)
-
-        # should have equal lengths
-        with tm.assertRaisesRegexp(TypeError, 'list of lists-like'):
-            self.index.set_labels(labels[0], level=[0, 1])
-
-        with tm.assertRaisesRegexp(TypeError, 'list-like'):
-            self.index.set_labels(labels, level=0)
-
-        # should have equal lengths
-        with tm.assertRaisesRegexp(ValueError, 'Length of names'):
-            self.index.set_names(names[0], level=[0, 1])
-
-        with tm.assertRaisesRegexp(TypeError, 'string'):
-            self.index.set_names(names, level=0)
-
-    def test_metadata_immutable(self):
-        levels, labels = self.index.levels, self.index.labels
-        # shouldn't be able to set at either the top level or base level
-        mutable_regex = re.compile('does not support mutable operations')
-        with assertRaisesRegexp(TypeError, mutable_regex):
-            levels[0] = levels[0]
-        with assertRaisesRegexp(TypeError, mutable_regex):
-            levels[0][0] = levels[0][0]
-        # ditto for labels
-        with assertRaisesRegexp(TypeError, mutable_regex):
-            labels[0] = labels[0]
-        with assertRaisesRegexp(TypeError, mutable_regex):
-            labels[0][0] = labels[0][0]
-        # and for names
-        names = self.index.names
-        with assertRaisesRegexp(TypeError, mutable_regex):
-            names[0] = names[0]
-
-    def test_inplace_mutation_resets_values(self):
-        levels = [['a', 'b', 'c'], [4]]
-        levels2 = [[1, 2, 3], ['a']]
-        labels = [[0, 1, 0, 2, 2, 0], [0, 0, 0, 0, 0, 0]]
-        mi1 = MultiIndex(levels=levels, labels=labels)
-        mi2 = MultiIndex(levels=levels2, labels=labels)
-        vals = mi1.values.copy()
-        vals2 = mi2.values.copy()
-        self.assertIsNotNone(mi1._tuples)
-
-        # make sure level setting works
-        new_vals = mi1.set_levels(levels2).values
-        assert_almost_equal(vals2, new_vals)
-        # non-inplace doesn't kill _tuples [implementation detail]
-        assert_almost_equal(mi1._tuples, vals)
-        # and values is still same too
-        assert_almost_equal(mi1.values, vals)
-
-        # inplace should kill _tuples
-        mi1.set_levels(levels2, inplace=True)
-        assert_almost_equal(mi1.values, vals2)
-
-        # make sure label setting works too
-        labels2 = [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
-        exp_values = np.empty((6, ), dtype=object)
-        exp_values[:] = [(long(1), 'a')] * 6
-        # must be 1d array of tuples
-        self.assertEqual(exp_values.shape, (6, ))
-        new_values = mi2.set_labels(labels2).values
-        # not inplace shouldn't change
-        assert_almost_equal(mi2._tuples, vals2)
-        # should have correct values
-        assert_almost_equal(exp_values, new_values)
-
-        # and again setting inplace should kill _tuples, etc
-        mi2.set_labels(labels2, inplace=True)
-        assert_almost_equal(mi2.values, new_values)
-
-    def test_copy_in_constructor(self):
-        levels = np.array(["a", "b", "c"])
-        labels = np.array([1, 1, 2, 0, 0, 1, 1])
-        val = labels[0]
-        mi = MultiIndex(levels=[levels, levels], labels=[labels, labels],
-                        copy=True)
-        self.assertEqual(mi.labels[0][0], val)
-        labels[0] = 15
-        self.assertEqual(mi.labels[0][0], val)
-        val = levels[0]
-        levels[0] = "PANDA"
-        self.assertEqual(mi.levels[0][0], val)
-
-    def test_set_value_keeps_names(self):
-        # motivating example from #3742
-        lev1 = ['hans', 'hans', 'hans', 'grethe', 'grethe', 'grethe']
-        lev2 = ['1', '2', '3'] * 2
-        idx = pd.MultiIndex.from_arrays([lev1, lev2], names=['Name', 'Number'])
-        df = pd.DataFrame(
-            np.random.randn(6, 4),
-            columns=['one', 'two', 'three', 'four'],
-            index=idx)
-        df = df.sortlevel()
-        self.assertIsNone(df.is_copy)
-        self.assertEqual(df.index.names, ('Name', 'Number'))
-        df = df.set_value(('grethe', '4'), 'one', 99.34)
-        self.assertIsNone(df.is_copy)
-        self.assertEqual(df.index.names, ('Name', 'Number'))
-
-    def test_names(self):
-
-        # names are assigned in __init__
-        names = self.index_names
-        level_names = [level.name for level in self.index.levels]
-        self.assertEqual(names, level_names)
-
-        # setting bad names on existing
-        index = self.index
-        assertRaisesRegexp(ValueError, "^Length of names", setattr, index,
-                           "names", list(index.names) + ["third"])
-        assertRaisesRegexp(ValueError, "^Length of names", setattr, index,
-                           "names", [])
-
-        # initializing with bad names (should always be equivalent)
-        major_axis, minor_axis = self.index.levels
-        major_labels, minor_labels = self.index.labels
-        assertRaisesRegexp(ValueError, "^Length of names", MultiIndex,
-                           levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels],
-                           names=['first'])
-        assertRaisesRegexp(ValueError, "^Length of names", MultiIndex,
-                           levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels],
-                           names=['first', 'second', 'third'])
-
-        # names are assigned
-        index.names = ["a", "b"]
-        ind_names = list(index.names)
-        level_names = [level.name for level in index.levels]
-        self.assertEqual(ind_names, level_names)
-
-    def test_reference_duplicate_name(self):
-        idx = MultiIndex.from_tuples(
-            [('a', 'b'), ('c', 'd')], names=['x', 'x'])
-        self.assertTrue(idx._reference_duplicate_name('x'))
-
-        idx = MultiIndex.from_tuples(
-            [('a', 'b'), ('c', 'd')], names=['x', 'y'])
-        self.assertFalse(idx._reference_duplicate_name('x'))
-
-    def test_astype(self):
-        expected = self.index.copy()
-        actual = self.index.astype('O')
-        assert_copy(actual.levels, expected.levels)
-        assert_copy(actual.labels, expected.labels)
-        self.check_level_names(actual, expected.names)
-
-        with assertRaisesRegexp(TypeError, "^Setting.*dtype.*object"):
-            self.index.astype(np.dtype(int))
-
-    def test_constructor_single_level(self):
-        single_level = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']],
-                                  labels=[[0, 1, 2, 3]], names=['first'])
-        tm.assertIsInstance(single_level, Index)
-        self.assertNotIsInstance(single_level, MultiIndex)
-        self.assertEqual(single_level.name, 'first')
-
-        single_level = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']],
-                                  labels=[[0, 1, 2, 3]])
-        self.assertIsNone(single_level.name)
-
-    def test_constructor_no_levels(self):
-        assertRaisesRegexp(ValueError, "non-zero number of levels/labels",
-                           MultiIndex, levels=[], labels=[])
-        both_re = re.compile('Must pass both levels and labels')
-        with tm.assertRaisesRegexp(TypeError, both_re):
-            MultiIndex(levels=[])
-        with tm.assertRaisesRegexp(TypeError, both_re):
-            MultiIndex(labels=[])
-
-    def test_constructor_mismatched_label_levels(self):
-        labels = [np.array([1]), np.array([2]), np.array([3])]
-        levels = ["a"]
-        assertRaisesRegexp(ValueError, "Length of levels and labels must be"
-                           " the same", MultiIndex, levels=levels,
-                           labels=labels)
-        length_error = re.compile('>= length of level')
-        label_error = re.compile(r'Unequal label lengths: \[4, 2\]')
-
-        # important to check that it's looking at the right thing.
-        with tm.assertRaisesRegexp(ValueError, length_error):
-            MultiIndex(levels=[['a'], ['b']],
-                       labels=[[0, 1, 2, 3], [0, 3, 4, 1]])
-
-        with tm.assertRaisesRegexp(ValueError, label_error):
-            MultiIndex(levels=[['a'], ['b']], labels=[[0, 0, 0, 0], [0, 0]])
-
-        # external API
-        with tm.assertRaisesRegexp(ValueError, length_error):
-            self.index.copy().set_levels([['a'], ['b']])
-
-        with tm.assertRaisesRegexp(ValueError, label_error):
-            self.index.copy().set_labels([[0, 0, 0, 0], [0, 0]])
-
-        # deprecated properties
-        with warnings.catch_warnings():
-            warnings.simplefilter('ignore')
-
-            with tm.assertRaisesRegexp(ValueError, length_error):
-                self.index.copy().levels = [['a'], ['b']]
-
-            with tm.assertRaisesRegexp(ValueError, label_error):
-                self.index.copy().labels = [[0, 0, 0, 0], [0, 0]]
-
-    def assert_multiindex_copied(self, copy, original):
-        # levels shoudl be (at least, shallow copied)
-        assert_copy(copy.levels, original.levels)
-
-        assert_almost_equal(copy.labels, original.labels)
-
-        # labels doesn't matter which way copied
-        assert_almost_equal(copy.labels, original.labels)
-        self.assertIsNot(copy.labels, original.labels)
-
-        # names doesn't matter which way copied
-        self.assertEqual(copy.names, original.names)
-        self.assertIsNot(copy.names, original.names)
-
-        # sort order should be copied
-        self.assertEqual(copy.sortorder, original.sortorder)
-
-    def test_copy(self):
-        i_copy = self.index.copy()
-
-        self.assert_multiindex_copied(i_copy, self.index)
-
-    def test_shallow_copy(self):
-        i_copy = self.index._shallow_copy()
-
-        self.assert_multiindex_copied(i_copy, self.index)
-
-    def test_view(self):
-        i_view = self.index.view()
-
-        self.assert_multiindex_copied(i_view, self.index)
-
-    def check_level_names(self, index, names):
-        self.assertEqual([level.name for level in index.levels], list(names))
-
-    def test_changing_names(self):
-
-        # names should be applied to levels
-        level_names = [level.name for level in self.index.levels]
-        self.check_level_names(self.index, self.index.names)
-
-        view = self.index.view()
-        copy = self.index.copy()
-        shallow_copy = self.index._shallow_copy()
-
-        # changing names should change level names on object
-        new_names = [name + "a" for name in self.index.names]
-        self.index.names = new_names
-        self.check_level_names(self.index, new_names)
-
-        # but not on copies
-        self.check_level_names(view, level_names)
-        self.check_level_names(copy, level_names)
-        self.check_level_names(shallow_copy, level_names)
-
-        # and copies shouldn't change original
-        shallow_copy.names = [name + "c" for name in shallow_copy.names]
-        self.check_level_names(self.index, new_names)
-
-    def test_duplicate_names(self):
-        self.index.names = ['foo', 'foo']
-        assertRaisesRegexp(KeyError, 'Level foo not found',
-                           self.index._get_level_number, 'foo')
-
-    def test_get_level_number_integer(self):
-        self.index.names = [1, 0]
-        self.assertEqual(self.index._get_level_number(1), 0)
-        self.assertEqual(self.index._get_level_number(0), 1)
-        self.assertRaises(IndexError, self.index._get_level_number, 2)
-        assertRaisesRegexp(KeyError, 'Level fourth not found',
-                           self.index._get_level_number, 'fourth')
-
-    def test_from_arrays(self):
-        arrays = []
-        for lev, lab in zip(self.index.levels, self.index.labels):
-            arrays.append(np.asarray(lev).take(lab))
-
-        result = MultiIndex.from_arrays(arrays)
-        self.assertEqual(list(result), list(self.index))
-
-        # infer correctly
-        result = MultiIndex.from_arrays([[pd.NaT, Timestamp('20130101')],
-                                         ['a', 'b']])
-        self.assertTrue(result.levels[0].equals(Index([Timestamp('20130101')
-                                                       ])))
-        self.assertTrue(result.levels[1].equals(Index(['a', 'b'])))
-
-    def test_from_product(self):
-
-        first = ['foo', 'bar', 'buz']
-        second = ['a', 'b', 'c']
-        names = ['first', 'second']
-        result = MultiIndex.from_product([first, second], names=names)
-
-        tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), ('bar', 'a'),
-                  ('bar', 'b'), ('bar', 'c'), ('buz', 'a'), ('buz', 'b'),
-                  ('buz', 'c')]
-        expected = MultiIndex.from_tuples(tuples, names=names)
-
-        tm.assert_numpy_array_equal(result, expected)
-        self.assertEqual(result.names, names)
-
-    def test_from_product_datetimeindex(self):
-        dt_index = date_range('2000-01-01', periods=2)
-        mi = pd.MultiIndex.from_product([[1, 2], dt_index])
-        etalon = pd.lib.list_to_object_array([(1, pd.Timestamp(
-            '2000-01-01')), (1, pd.Timestamp('2000-01-02')), (2, pd.Timestamp(
-            '2000-01-01')), (2, pd.Timestamp('2000-01-02'))])
-        tm.assert_numpy_array_equal(mi.values, etalon)
-
-    def test_values_boxed(self):
-        tuples = [(1, pd.Timestamp('2000-01-01')), (2, pd.NaT),
-                  (3, pd.Timestamp('2000-01-03')),
-                  (1, pd.Timestamp('2000-01-04')),
-                  (2, pd.Timestamp('2000-01-02')),
-                  (3, pd.Timestamp('2000-01-03'))]
-        mi = pd.MultiIndex.from_tuples(tuples)
-        tm.assert_numpy_array_equal(mi.values,
-                                    pd.lib.list_to_object_array(tuples))
-        # Check that code branches for boxed values produce identical results
-        tm.assert_numpy_array_equal(mi.values[:4], mi[:4].values)
-
-    def test_append(self):
-        result = self.index[:3].append(self.index[3:])
-        self.assertTrue(result.equals(self.index))
-
-        foos = [self.index[:1], self.index[1:3], self.index[3:]]
-        result = foos[0].append(foos[1:])
-        self.assertTrue(result.equals(self.index))
-
-        # empty
-        result = self.index.append([])
-        self.assertTrue(result.equals(self.index))
-
-    def test_get_level_values(self):
-        result = self.index.get_level_values(0)
-        expected = ['foo', 'foo', 'bar', 'baz', 'qux', 'qux']
-        tm.assert_numpy_array_equal(result, expected)
-
-        self.assertEqual(result.name, 'first')
-
-        result = self.index.get_level_values('first')
-        expected = self.index.get_level_values(0)
-        tm.assert_numpy_array_equal(result, expected)
-
-        # GH 10460
-        index = MultiIndex(levels=[CategoricalIndex(
-            ['A', 'B']), CategoricalIndex([1, 2, 3])], labels=[np.array(
-            [0, 0, 0, 1, 1, 1]), np.array([0, 1, 2, 0, 1, 2])])
-        exp = CategoricalIndex(['A', 'A', 'A', 'B', 'B', 'B'])
-        self.assert_index_equal(index.get_level_values(0), exp)
-        exp = CategoricalIndex([1, 2, 3, 1, 2, 3])
-        self.assert_index_equal(index.get_level_values(1), exp)
-
-    def test_get_level_values_na(self):
-        arrays = [['a', 'b', 'b'], [1, np.nan, 2]]
-        index = pd.MultiIndex.from_arrays(arrays)
-        values = index.get_level_values(1)
-        expected = [1, np.nan, 2]
-        tm.assert_numpy_array_equal(values.values.astype(float), expected)
-
-        arrays = [['a', 'b', 'b'], [np.nan, np.nan, 2]]
-        index = pd.MultiIndex.from_arrays(arrays)
-        values = index.get_level_values(1)
-        expected = [np.nan, np.nan, 2]
-        tm.assert_numpy_array_equal(values.values.astype(float), expected)
-
-        arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]]
-        index = pd.MultiIndex.from_arrays(arrays)
-        values = index.get_level_values(0)
-        expected = [np.nan, np.nan, np.nan]
-        tm.assert_numpy_array_equal(values.values.astype(float), expected)
-        values = index.get_level_values(1)
-        expected = np.array(['a', np.nan, 1], dtype=object)
-        tm.assert_numpy_array_equal(values.values, expected)
-
-        arrays = [['a', 'b', 'b'], pd.DatetimeIndex([0, 1, pd.NaT])]
-        index = pd.MultiIndex.from_arrays(arrays)
-        values = index.get_level_values(1)
-        expected = pd.DatetimeIndex([0, 1, pd.NaT])
-        tm.assert_numpy_array_equal(values.values, expected.values)
-
-        arrays = [[], []]
-        index = pd.MultiIndex.from_arrays(arrays)
-        values = index.get_level_values(0)
-        self.assertEqual(values.shape, (0, ))
-
-    def test_reorder_levels(self):
-        # this blows up
-        assertRaisesRegexp(IndexError, '^Too many levels',
-                           self.index.reorder_levels, [2, 1, 0])
-
-    def test_nlevels(self):
-        self.assertEqual(self.index.nlevels, 2)
-
-    def test_iter(self):
-        result = list(self.index)
-        expected = [('foo', 'one'), ('foo', 'two'), ('bar', 'one'),
-                    ('baz', 'two'), ('qux', 'one'), ('qux', 'two')]
-        self.assertEqual(result, expected)
-
-    def test_legacy_pickle(self):
-        if PY3:
-            raise nose.SkipTest("testing for legacy pickles not "
-                                "support on py3")
-
-        path = tm.get_data_path('multiindex_v1.pickle')
-        obj = pd.read_pickle(path)
-
-        obj2 = MultiIndex.from_tuples(obj.values)
-        self.assertTrue(obj.equals(obj2))
-
-        res = obj.get_indexer(obj)
-        exp = np.arange(len(obj))
-        assert_almost_equal(res, exp)
-
-        res = obj.get_indexer(obj2[::-1])
-        exp = obj.get_indexer(obj[::-1])
-        exp2 = obj2.get_indexer(obj2[::-1])
-        assert_almost_equal(res, exp)
-        assert_almost_equal(exp, exp2)
-
-    def test_legacy_v2_unpickle(self):
-
-        # 0.7.3 -> 0.8.0 format manage
-        path = tm.get_data_path('mindex_073.pickle')
-        obj = pd.read_pickle(path)
-
-        obj2 = MultiIndex.from_tuples(obj.values)
-        self.assertTrue(obj.equals(obj2))
-
-        res = obj.get_indexer(obj)
-        exp = np.arange(len(obj))
-        assert_almost_equal(res, exp)
-
-        res = obj.get_indexer(obj2[::-1])
-        exp = obj.get_indexer(obj[::-1])
-        exp2 = obj2.get_indexer(obj2[::-1])
-        assert_almost_equal(res, exp)
-        assert_almost_equal(exp, exp2)
-
-    def test_roundtrip_pickle_with_tz(self):
-
-        # GH 8367
-        # round-trip of timezone
-        index = MultiIndex.from_product(
-            [[1, 2], ['a', 'b'], date_range('20130101', periods=3,
-                                            tz='US/Eastern')
-             ], names=['one', 'two', 'three'])
-        unpickled = self.round_trip_pickle(index)
-        self.assertTrue(index.equal_levels(unpickled))
-
def test_from_tuples_index_values(self): - result = MultiIndex.from_tuples(self.index) - self.assertTrue((result.values == self.index.values).all()) - - def test_contains(self): - self.assertIn(('foo', 'two'), self.index) - self.assertNotIn(('bar', 'two'), self.index) - self.assertNotIn(None, self.index) - - def test_is_all_dates(self): - self.assertFalse(self.index.is_all_dates) - - def test_is_numeric(self): - # MultiIndex is never numeric - self.assertFalse(self.index.is_numeric()) - - def test_getitem(self): - # scalar - self.assertEqual(self.index[2], ('bar', 'one')) - - # slice - result = self.index[2:5] - expected = self.index[[2, 3, 4]] - self.assertTrue(result.equals(expected)) - - # boolean - result = self.index[[True, False, True, False, True, True]] - result2 = self.index[np.array([True, False, True, False, True, True])] - expected = self.index[[0, 2, 4, 5]] - self.assertTrue(result.equals(expected)) - self.assertTrue(result2.equals(expected)) - - def test_getitem_group_select(self): - sorted_idx, _ = self.index.sortlevel(0) - self.assertEqual(sorted_idx.get_loc('baz'), slice(3, 4)) - self.assertEqual(sorted_idx.get_loc('foo'), slice(0, 2)) - - def test_get_loc(self): - self.assertEqual(self.index.get_loc(('foo', 'two')), 1) - self.assertEqual(self.index.get_loc(('baz', 'two')), 3) - self.assertRaises(KeyError, self.index.get_loc, ('bar', 'two')) - self.assertRaises(KeyError, self.index.get_loc, 'quux') - - self.assertRaises(NotImplementedError, self.index.get_loc, 'foo', - method='nearest') - - # 3 levels - index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( - lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( - [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) - self.assertRaises(KeyError, index.get_loc, (1, 1)) - self.assertEqual(index.get_loc((2, 0)), slice(3, 5)) - - def test_get_loc_duplicates(self): - index = Index([2, 2, 2, 2]) - result = index.get_loc(2) - expected = slice(0, 4) - 
-        self.assertEqual(result, expected)
-        # self.assertRaises(Exception, index.get_loc, 2)
-
-        index = Index(['c', 'a', 'a', 'b', 'b'])
-        rs = index.get_loc('c')
-        xp = 0
-        assert (rs == xp)
-
-    def test_get_loc_level(self):
-        index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
-            lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
-                [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
-
-        loc, new_index = index.get_loc_level((0, 1))
-        expected = slice(1, 2)
-        exp_index = index[expected].droplevel(0).droplevel(0)
-        self.assertEqual(loc, expected)
-        self.assertTrue(new_index.equals(exp_index))
-
-        loc, new_index = index.get_loc_level((0, 1, 0))
-        expected = 1
-        self.assertEqual(loc, expected)
-        self.assertIsNone(new_index)
-
-        self.assertRaises(KeyError, index.get_loc_level, (2, 2))
-
-        index = MultiIndex(levels=[[2000], lrange(4)], labels=[np.array(
-            [0, 0, 0, 0]), np.array([0, 1, 2, 3])])
-        result, new_index = index.get_loc_level((2000, slice(None, None)))
-        expected = slice(None, None)
-        self.assertEqual(result, expected)
-        self.assertTrue(new_index.equals(index.droplevel(0)))
-
-    def test_slice_locs(self):
-        df = tm.makeTimeDataFrame()
-        stacked = df.stack()
-        idx = stacked.index
-
-        slob = slice(*idx.slice_locs(df.index[5], df.index[15]))
-        sliced = stacked[slob]
-        expected = df[5:16].stack()
-        tm.assert_almost_equal(sliced.values, expected.values)
-
-        slob = slice(*idx.slice_locs(df.index[5] + timedelta(seconds=30),
-                                     df.index[15] - timedelta(seconds=30)))
-        sliced = stacked[slob]
-        expected = df[6:15].stack()
-        tm.assert_almost_equal(sliced.values, expected.values)
-
-    def test_slice_locs_with_type_mismatch(self):
-        df = tm.makeTimeDataFrame()
-        stacked = df.stack()
-        idx = stacked.index
-        assertRaisesRegexp(TypeError, '^Level type mismatch', idx.slice_locs,
-                           (1, 3))
-        assertRaisesRegexp(TypeError, '^Level type mismatch', idx.slice_locs,
-                           df.index[5] + timedelta(seconds=30), (5, 2))
-        df = tm.makeCustomDataframe(5, 5)
-        stacked = df.stack()
-        idx = stacked.index
-        with assertRaisesRegexp(TypeError, '^Level type mismatch'):
-            idx.slice_locs(timedelta(seconds=30))
-        # TODO: Try creating a UnicodeDecodeError in exception message
-        with assertRaisesRegexp(TypeError, '^Level type mismatch'):
-            idx.slice_locs(df.index[1], (16, "a"))
-
-    def test_slice_locs_not_sorted(self):
-        index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
-            lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
-                [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
-
-        assertRaisesRegexp(KeyError, "[Kk]ey length.*greater than MultiIndex"
-                           " lexsort depth", index.slice_locs, (1, 0, 1),
-                           (2, 1, 0))
-
-        # works
-        sorted_index, _ = index.sortlevel(0)
-        # should there be a test case here???
-        sorted_index.slice_locs((1, 0, 1), (2, 1, 0))
-
-    def test_slice_locs_partial(self):
-        sorted_idx, _ = self.index.sortlevel(0)
-
-        result = sorted_idx.slice_locs(('foo', 'two'), ('qux', 'one'))
-        self.assertEqual(result, (1, 5))
-
-        result = sorted_idx.slice_locs(None, ('qux', 'one'))
-        self.assertEqual(result, (0, 5))
-
-        result = sorted_idx.slice_locs(('foo', 'two'), None)
-        self.assertEqual(result, (1, len(sorted_idx)))
-
-        result = sorted_idx.slice_locs('bar', 'baz')
-        self.assertEqual(result, (2, 4))
-
-    def test_slice_locs_not_contained(self):
-        # some searchsorted action
-
-        index = MultiIndex(levels=[[0, 2, 4, 6], [0, 2, 4]],
-                           labels=[[0, 0, 0, 1, 1, 2, 3, 3, 3],
-                                   [0, 1, 2, 1, 2, 2, 0, 1, 2]], sortorder=0)
-
-        result = index.slice_locs((1, 0), (5, 2))
-        self.assertEqual(result, (3, 6))
-
-        result = index.slice_locs(1, 5)
-        self.assertEqual(result, (3, 6))
-
-        result = index.slice_locs((2, 2), (5, 2))
-        self.assertEqual(result, (3, 6))
-
-        result = index.slice_locs(2, 5)
-        self.assertEqual(result, (3, 6))
-
-        result = index.slice_locs((1, 0), (6, 3))
-        self.assertEqual(result, (3, 8))
-
-        result = index.slice_locs(-1, 10)
-        self.assertEqual(result, (0, len(index)))
-
-    def test_consistency(self):
-        # need to construct an overflow
-        major_axis = lrange(70000)
-        minor_axis = lrange(10)
-
-        major_labels = np.arange(70000)
-        minor_labels = np.repeat(lrange(10), 7000)
-
-        # the fact that is works means it's consistent
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-
-        # inconsistent
-        major_labels = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3])
-        minor_labels = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1])
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-
-        self.assertFalse(index.is_unique)
-
-    def test_truncate(self):
-        major_axis = Index(lrange(4))
-        minor_axis = Index(lrange(2))
-
-        major_labels = np.array([0, 0, 1, 2, 3, 3])
-        minor_labels = np.array([0, 1, 0, 1, 0, 1])
-
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-
-        result = index.truncate(before=1)
-        self.assertNotIn('foo', result.levels[0])
-        self.assertIn(1, result.levels[0])
-
-        result = index.truncate(after=1)
-        self.assertNotIn(2, result.levels[0])
-        self.assertIn(1, result.levels[0])
-
-        result = index.truncate(before=1, after=2)
-        self.assertEqual(len(result.levels[0]), 2)
-
-        # after < before
-        self.assertRaises(ValueError, index.truncate, 3, 1)
-
-    def test_get_indexer(self):
-        major_axis = Index(lrange(4))
-        minor_axis = Index(lrange(2))
-
-        major_labels = np.array([0, 0, 1, 2, 2, 3, 3])
-        minor_labels = np.array([0, 1, 0, 0, 1, 0, 1])
-
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-        idx1 = index[:5]
-        idx2 = index[[1, 3, 5]]
-
-        r1 = idx1.get_indexer(idx2)
-        assert_almost_equal(r1, [1, 3, -1])
-
-        r1 = idx2.get_indexer(idx1, method='pad')
-        e1 = [-1, 0, 0, 1, 1]
-        assert_almost_equal(r1, e1)
-
-        r2 = idx2.get_indexer(idx1[::-1], method='pad')
-        assert_almost_equal(r2, e1[::-1])
-
-        rffill1 = idx2.get_indexer(idx1, method='ffill')
-        assert_almost_equal(r1, rffill1)
-
-        r1 = idx2.get_indexer(idx1, method='backfill')
-        e1 = [0, 0, 1, 1, 2]
-        assert_almost_equal(r1, e1)
-
-        r2 = idx2.get_indexer(idx1[::-1], method='backfill')
-        assert_almost_equal(r2, e1[::-1])
-
-        rbfill1 = idx2.get_indexer(idx1, method='bfill')
-        assert_almost_equal(r1, rbfill1)
-
-        # pass non-MultiIndex
-        r1 = idx1.get_indexer(idx2._tuple_index)
-        rexp1 = idx1.get_indexer(idx2)
-        assert_almost_equal(r1, rexp1)
-
-        r1 = idx1.get_indexer([1, 2, 3])
-        self.assertTrue((r1 == [-1, -1, -1]).all())
-
-        # create index with duplicates
-        idx1 = Index(lrange(10) + lrange(10))
-        idx2 = Index(lrange(20))
-        assertRaisesRegexp(InvalidIndexError, "Reindexing only valid with"
-                           " uniquely valued Index objects", idx1.get_indexer,
-                           idx2)
-
-    def test_get_indexer_nearest(self):
-        midx = MultiIndex.from_tuples([('a', 1), ('b', 2)])
-        with tm.assertRaises(NotImplementedError):
-            midx.get_indexer(['a'], method='nearest')
-        with tm.assertRaises(NotImplementedError):
-            midx.get_indexer(['a'], method='pad', tolerance=2)
-
-    def test_format(self):
-        self.index.format()
-        self.index[:0].format()
-
-    def test_format_integer_names(self):
-        index = MultiIndex(levels=[[0, 1], [0, 1]],
-                           labels=[[0, 0, 1, 1], [0, 1, 0, 1]], names=[0, 1])
-        index.format(names=True)
-
-    def test_format_sparse_display(self):
-        index = MultiIndex(levels=[[0, 1], [0, 1], [0, 1], [0]],
-                           labels=[[0, 0, 0, 1, 1, 1], [0, 0, 1, 0, 0, 1],
-                                   [0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0]])
-
-        result = index.format()
-        self.assertEqual(result[3], '1 0 0 0')
-
-    def test_format_sparse_config(self):
-        warn_filters = warnings.filters
-        warnings.filterwarnings('ignore', category=FutureWarning,
-                                module=".*format")
-        # GH1538
-        pd.set_option('display.multi_sparse', False)
-
-        result = self.index.format()
-        self.assertEqual(result[1], 'foo two')
-
-        self.reset_display_options()
-
-        warnings.filters = warn_filters
-
-    def test_to_hierarchical(self):
-        index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), (
-            2, 'two')])
-        result = index.to_hierarchical(3)
-        expected = MultiIndex(levels=[[1, 2], ['one', 'two']],
-                              labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
-                                      [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]])
-        tm.assert_index_equal(result, expected)
-        self.assertEqual(result.names, index.names)
-
-        # K > 1
-        result = index.to_hierarchical(3, 2)
-        expected = MultiIndex(levels=[[1, 2], ['one', 'two']],
-                              labels=[[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
-                                      [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]])
-        tm.assert_index_equal(result, expected)
-        self.assertEqual(result.names, index.names)
-
-        # non-sorted
-        index = MultiIndex.from_tuples([(2, 'c'), (1, 'b'),
-                                        (2, 'a'), (2, 'b')],
-                                       names=['N1', 'N2'])
-
-        result = index.to_hierarchical(2)
-        expected = MultiIndex.from_tuples([(2, 'c'), (2, 'c'), (1, 'b'),
-                                           (1, 'b'),
-                                           (2, 'a'), (2, 'a'),
-                                           (2, 'b'), (2, 'b')],
-                                          names=['N1', 'N2'])
-        tm.assert_index_equal(result, expected)
-        self.assertEqual(result.names, index.names)
-
-    def test_bounds(self):
-        self.index._bounds
-
-    def test_equals(self):
-        self.assertTrue(self.index.equals(self.index))
-        self.assertTrue(self.index.equal_levels(self.index))
-
-        self.assertFalse(self.index.equals(self.index[:-1]))
-
-        self.assertTrue(self.index.equals(self.index._tuple_index))
-
-        # different number of levels
-        index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
-            lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
-                [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
-
-        index2 = MultiIndex(levels=index.levels[:-1], labels=index.labels[:-1])
-        self.assertFalse(index.equals(index2))
-        self.assertFalse(index.equal_levels(index2))
-
-        # levels are different
-        major_axis = Index(lrange(4))
-        minor_axis = Index(lrange(2))
-
-        major_labels = np.array([0, 0, 1, 2, 2, 3])
-        minor_labels = np.array([0, 1, 0, 0, 1, 0])
-
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-        self.assertFalse(self.index.equals(index))
-        self.assertFalse(self.index.equal_levels(index))
-
-        # some of the labels are different
-        major_axis = Index(['foo', 'bar', 'baz', 'qux'])
-        minor_axis = Index(['one', 'two'])
-
-        major_labels = np.array([0, 0, 2, 2, 3, 3])
-        minor_labels = np.array([0, 1, 0, 1, 0, 1])
-
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-        self.assertFalse(self.index.equals(index))
-
-    def test_identical(self):
-        mi = self.index.copy()
-        mi2 = self.index.copy()
-        self.assertTrue(mi.identical(mi2))
-
-        mi = mi.set_names(['new1', 'new2'])
-        self.assertTrue(mi.equals(mi2))
-        self.assertFalse(mi.identical(mi2))
-
-        mi2 = mi2.set_names(['new1', 'new2'])
-        self.assertTrue(mi.identical(mi2))
-
-        mi3 = Index(mi.tolist(), names=mi.names)
-        mi4 = Index(mi.tolist(), names=mi.names, tupleize_cols=False)
-        self.assertTrue(mi.identical(mi3))
-        self.assertFalse(mi.identical(mi4))
-        self.assertTrue(mi.equals(mi4))
-
-    def test_is_(self):
-
-        mi = MultiIndex.from_tuples(lzip(range(10), range(10)))
-        self.assertTrue(mi.is_(mi))
-        self.assertTrue(mi.is_(mi.view()))
-        self.assertTrue(mi.is_(mi.view().view().view().view()))
-        mi2 = mi.view()
-        # names are metadata, they don't change id
-        mi2.names = ["A", "B"]
-        self.assertTrue(mi2.is_(mi))
-        self.assertTrue(mi.is_(mi2))
-
-        self.assertTrue(mi.is_(mi.set_names(["C", "D"])))
-        mi2 = mi.view()
-        mi2.set_names(["E", "F"], inplace=True)
-        self.assertTrue(mi.is_(mi2))
-        # levels are inherent properties, they change identity
-        mi3 = mi2.set_levels([lrange(10), lrange(10)])
-        self.assertFalse(mi3.is_(mi2))
-        # shouldn't change
-        self.assertTrue(mi2.is_(mi))
-        mi4 = mi3.view()
-        mi4.set_levels([[1 for _ in range(10)], lrange(10)], inplace=True)
-        self.assertFalse(mi4.is_(mi3))
-        mi5 = mi.view()
-        mi5.set_levels(mi5.levels, inplace=True)
-        self.assertFalse(mi5.is_(mi))
-
-    def test_union(self):
-        piece1 = self.index[:5][::-1]
-        piece2 = self.index[3:]
-
-        the_union = piece1 | piece2
-
-        tups = sorted(self.index._tuple_index)
-        expected = MultiIndex.from_tuples(tups)
-
-        self.assertTrue(the_union.equals(expected))
-
-        # corner case, pass self or empty thing:
-        the_union = self.index.union(self.index)
-        self.assertIs(the_union, self.index)
-
-        the_union = self.index.union(self.index[:0])
-        self.assertIs(the_union, self.index)
-
-        # won't work in python 3
-        # tuples = self.index._tuple_index
-        # result = self.index[:4] | tuples[4:]
-        # self.assertTrue(result.equals(tuples))
-
-        # not valid for python 3
-        # def test_union_with_regular_index(self):
-        #     other = Index(['A', 'B', 'C'])
-
-        #     result = other.union(self.index)
-        #     self.assertIn(('foo', 'one'), result)
-        #     self.assertIn('B', result)
-
-        #     result2 = self.index.union(other)
-        #     self.assertTrue(result.equals(result2))
-
-    def test_intersection(self):
-        piece1 = self.index[:5][::-1]
-        piece2 = self.index[3:]
-
-        the_int = piece1 & piece2
-        tups = sorted(self.index[3:5]._tuple_index)
-        expected = MultiIndex.from_tuples(tups)
-        self.assertTrue(the_int.equals(expected))
-
-        # corner case, pass self
-        the_int = self.index.intersection(self.index)
-        self.assertIs(the_int, self.index)
-
-        # empty intersection: disjoint
-        empty = self.index[:2] & self.index[2:]
-        expected = self.index[:0]
-        self.assertTrue(empty.equals(expected))
-
-        # can't do in python 3
-        # tuples = self.index._tuple_index
-        # result = self.index & tuples
-        # self.assertTrue(result.equals(tuples))
-
-    def test_difference(self):
-
-        first = self.index
-        result = first.difference(self.index[-3:])
-
-        # - API change GH 8226
-        with tm.assert_produces_warning():
-            first - self.index[-3:]
-        with tm.assert_produces_warning():
-            self.index[-3:] - first
-        with tm.assert_produces_warning():
-            self.index[-3:] - first.tolist()
-
-        self.assertRaises(TypeError, lambda: first.tolist() - self.index[-3:])
-
-        expected = MultiIndex.from_tuples(sorted(self.index[:-3].values),
-                                          sortorder=0,
-                                          names=self.index.names)
-
-        tm.assertIsInstance(result, MultiIndex)
-        self.assertTrue(result.equals(expected))
-        self.assertEqual(result.names, self.index.names)
-
-        # empty difference: reflexive
-        result = self.index.difference(self.index)
-        expected = self.index[:0]
-        self.assertTrue(result.equals(expected))
-        self.assertEqual(result.names, self.index.names)
-
-        # empty difference: superset
-        result = self.index[-3:].difference(self.index)
-        expected = self.index[:0]
-        self.assertTrue(result.equals(expected))
-        self.assertEqual(result.names, self.index.names)
-
-        # empty difference: degenerate
-        result = self.index[:0].difference(self.index)
-        expected = self.index[:0]
-        self.assertTrue(result.equals(expected))
-        self.assertEqual(result.names, self.index.names)
-
-        # names not the same
-        chunklet = self.index[-3:]
-        chunklet.names = ['foo', 'baz']
-        result = first.difference(chunklet)
-        self.assertEqual(result.names, (None, None))
-
-        # empty, but non-equal
-        result = self.index.difference(self.index.sortlevel(1)[0])
-        self.assertEqual(len(result), 0)
-
-        # raise Exception called with non-MultiIndex
-        result = first.difference(first._tuple_index)
-        self.assertTrue(result.equals(first[:0]))
-
-        # name from empty array
-        result = first.difference([])
-        self.assertTrue(first.equals(result))
-        self.assertEqual(first.names, result.names)
-
-        # name from non-empty array
-        result = first.difference([('foo', 'one')])
-        expected = pd.MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'), (
-            'foo', 'two'), ('qux', 'one'), ('qux', 'two')])
-        expected.names = first.names
-        self.assertEqual(first.names, result.names)
-        assertRaisesRegexp(TypeError, "other must be a MultiIndex or a list"
-                           " of tuples", first.difference, [1, 2, 3, 4, 5])
-
-    def test_from_tuples(self):
-        assertRaisesRegexp(TypeError, 'Cannot infer number of levels from'
-                           ' empty list', MultiIndex.from_tuples, [])
-
-        idx = MultiIndex.from_tuples(((1, 2), (3, 4)), names=['a', 'b'])
-        self.assertEqual(len(idx), 2)
-
-    def test_argsort(self):
-        result = self.index.argsort()
-        expected = self.index._tuple_index.argsort()
-        tm.assert_numpy_array_equal(result, expected)
-
-    def test_sortlevel(self):
-        import random
-
-        tuples = list(self.index)
-        random.shuffle(tuples)
-
-        index = MultiIndex.from_tuples(tuples)
-
-        sorted_idx, _ = index.sortlevel(0)
-        expected = MultiIndex.from_tuples(sorted(tuples))
-        self.assertTrue(sorted_idx.equals(expected))
-
-        sorted_idx, _ = index.sortlevel(0, ascending=False)
-        self.assertTrue(sorted_idx.equals(expected[::-1]))
-
-        sorted_idx, _ = index.sortlevel(1)
-        by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
-        expected = MultiIndex.from_tuples(by1)
-        self.assertTrue(sorted_idx.equals(expected))
-
-        sorted_idx, _ = index.sortlevel(1, ascending=False)
-        self.assertTrue(sorted_idx.equals(expected[::-1]))
-
-    def test_sortlevel_not_sort_remaining(self):
-        mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
-        sorted_idx, _ = mi.sortlevel('A', sort_remaining=False)
-        self.assertTrue(sorted_idx.equals(mi))
-
-    def test_sortlevel_deterministic(self):
-        tuples = [('bar', 'one'), ('foo', 'two'), ('qux', 'two'),
-                  ('foo', 'one'), ('baz', 'two'), ('qux', 'one')]
-
-        index = MultiIndex.from_tuples(tuples)
-
-        sorted_idx, _ = index.sortlevel(0)
-        expected = MultiIndex.from_tuples(sorted(tuples))
-        self.assertTrue(sorted_idx.equals(expected))
-
-        sorted_idx, _ = index.sortlevel(0, ascending=False)
-        self.assertTrue(sorted_idx.equals(expected[::-1]))
-
-        sorted_idx, _ = index.sortlevel(1)
-        by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
-        expected = MultiIndex.from_tuples(by1)
-        self.assertTrue(sorted_idx.equals(expected))
-
-        sorted_idx, _ = index.sortlevel(1, ascending=False)
-        self.assertTrue(sorted_idx.equals(expected[::-1]))
-
-    def test_dims(self):
-        pass
-
-    def test_drop(self):
-        dropped = self.index.drop([('foo', 'two'), ('qux', 'one')])
-
-        index = MultiIndex.from_tuples([('foo', 'two'), ('qux', 'one')])
-        dropped2 = self.index.drop(index)
-
-        expected = self.index[[0, 2, 3, 5]]
-        self.assert_index_equal(dropped, expected)
-        self.assert_index_equal(dropped2, expected)
-
-        dropped = self.index.drop(['bar'])
-        expected = self.index[[0, 1, 3, 4, 5]]
-        self.assert_index_equal(dropped, expected)
-
-        dropped = self.index.drop('foo')
-        expected = self.index[[2, 3, 4, 5]]
-        self.assert_index_equal(dropped, expected)
-
-        index = MultiIndex.from_tuples([('bar', 'two')])
-        self.assertRaises(KeyError, self.index.drop, [('bar', 'two')])
-        self.assertRaises(KeyError, self.index.drop, index)
-        self.assertRaises(KeyError, self.index.drop, ['foo', 'two'])
-
-        # partially correct argument
-        mixed_index = MultiIndex.from_tuples([('qux', 'one'), ('bar', 'two')])
-        self.assertRaises(KeyError, self.index.drop, mixed_index)
-
-        # error='ignore'
-        dropped = self.index.drop(index, errors='ignore')
-        expected = self.index[[0, 1, 2, 3, 4, 5]]
-        self.assert_index_equal(dropped, expected)
-
-        dropped = self.index.drop(mixed_index, errors='ignore')
-        expected = self.index[[0, 1, 2, 3, 5]]
-        self.assert_index_equal(dropped, expected)
-
-        dropped = self.index.drop(['foo', 'two'], errors='ignore')
-        expected = self.index[[2, 3, 4, 5]]
-        self.assert_index_equal(dropped, expected)
-
-        # mixed partial / full drop
-        dropped = self.index.drop(['foo', ('qux', 'one')])
-        expected = self.index[[2, 3, 5]]
-        self.assert_index_equal(dropped, expected)
-
-        # mixed partial / full drop / error='ignore'
-        mixed_index = ['foo', ('qux', 'one'), 'two']
-        self.assertRaises(KeyError, self.index.drop, mixed_index)
-        dropped = self.index.drop(mixed_index, errors='ignore')
-        expected = self.index[[2, 3, 5]]
-        self.assert_index_equal(dropped, expected)
-
-    def test_droplevel_with_names(self):
-        index = self.index[self.index.get_loc('foo')]
-        dropped = index.droplevel(0)
-        self.assertEqual(dropped.name, 'second')
-
-        index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
-            lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
-                [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])],
-            names=['one', 'two', 'three'])
-        dropped = index.droplevel(0)
-        self.assertEqual(dropped.names, ('two', 'three'))
-
-        dropped = index.droplevel('two')
-        expected = index.droplevel(1)
-        self.assertTrue(dropped.equals(expected))
-
-    def test_droplevel_multiple(self):
-        index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
-            lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
-                [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])],
-            names=['one', 'two', 'three'])
-
-        dropped = index[:2].droplevel(['three', 'one'])
-        expected = index[:2].droplevel(2).droplevel(0)
-        self.assertTrue(dropped.equals(expected))
-
-    def test_insert(self):
-        # key contained in all levels
-        new_index = self.index.insert(0, ('bar', 'two'))
-        self.assertTrue(new_index.equal_levels(self.index))
-        self.assertEqual(new_index[0], ('bar', 'two'))
-
-        # key not contained in all levels
-        new_index = self.index.insert(0, ('abc', 'three'))
-        tm.assert_numpy_array_equal(new_index.levels[0],
-                                    list(self.index.levels[0]) + ['abc'])
-        tm.assert_numpy_array_equal(new_index.levels[1],
-                                    list(self.index.levels[1]) + ['three'])
-        self.assertEqual(new_index[0], ('abc', 'three'))
-
-        # key wrong length
-        assertRaisesRegexp(ValueError, "Item must have length equal to number"
-                           " of levels", self.index.insert, 0, ('foo2', ))
-
-        left = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1]],
-                            columns=['1st', '2nd', '3rd'])
-        left.set_index(['1st', '2nd'], inplace=True)
-        ts = left['3rd'].copy(deep=True)
-
-        left.loc[('b', 'x'), '3rd'] = 2
-        left.loc[('b', 'a'), '3rd'] = -1
-        left.loc[('b', 'b'), '3rd'] = 3
-        left.loc[('a', 'x'), '3rd'] = 4
-        left.loc[('a', 'w'), '3rd'] = 5
-        left.loc[('a', 'a'), '3rd'] = 6
-
-        ts.loc[('b', 'x')] = 2
-        ts.loc['b', 'a'] = -1
-        ts.loc[('b', 'b')] = 3
-        ts.loc['a', 'x'] = 4
-        ts.loc[('a', 'w')] = 5
-        ts.loc['a', 'a'] = 6
-
-        right = pd.DataFrame([['a', 'b', 0],
-                              ['b', 'd', 1],
-                              ['b', 'x', 2],
-                              ['b', 'a', -1],
-                              ['b', 'b', 3],
-                              ['a', 'x', 4],
-                              ['a', 'w', 5],
-                              ['a', 'a', 6]],
-                             columns=['1st', '2nd', '3rd'])
-        right.set_index(['1st', '2nd'], inplace=True)
-        # FIXME data types changes to float because
-        # of intermediate nan insertion;
-        tm.assert_frame_equal(left, right, check_dtype=False)
-        tm.assert_series_equal(ts, right['3rd'])
-
-        # GH9250
-        idx = [('test1', i) for i in range(5)] + \
-            [('test2', i) for i in range(6)] + \
-            [('test', 17), ('test', 18)]
-
-        left = pd.Series(np.linspace(0, 10, 11),
-                         pd.MultiIndex.from_tuples(idx[:-2]))
-
-        left.loc[('test', 17)] = 11
-        left.ix[('test', 18)] = 12
-
-        right = pd.Series(np.linspace(0, 12, 13),
-                          pd.MultiIndex.from_tuples(idx))
-
-        tm.assert_series_equal(left, right)
-
-    def test_take_preserve_name(self):
-        taken = self.index.take([3, 0, 1])
-        self.assertEqual(taken.names, self.index.names)
-
-    def test_join_level(self):
-        def _check_how(other, how):
-            join_index, lidx, ridx = other.join(self.index, how=how,
-                                                level='second',
-                                                return_indexers=True)
-
-            exp_level = other.join(self.index.levels[1], how=how)
-            self.assertTrue(join_index.levels[0].equals(self.index.levels[0]))
-            self.assertTrue(join_index.levels[1].equals(exp_level))
-
-            # pare down levels
-            mask = np.array(
-                [x[1] in exp_level for x in self.index], dtype=bool)
-            exp_values = self.index.values[mask]
-            tm.assert_numpy_array_equal(join_index.values, exp_values)
-
-            if how in ('outer', 'inner'):
-                join_index2, ridx2, lidx2 = \
-                    self.index.join(other, how=how, level='second',
-                                    return_indexers=True)
-
-                self.assertTrue(join_index.equals(join_index2))
-                tm.assert_numpy_array_equal(lidx, lidx2)
-                tm.assert_numpy_array_equal(ridx, ridx2)
-                tm.assert_numpy_array_equal(join_index2.values, exp_values)
-
-        def _check_all(other):
-            _check_how(other, 'outer')
-            _check_how(other, 'inner')
-            _check_how(other, 'left')
-            _check_how(other, 'right')
-
-        _check_all(Index(['three', 'one', 'two']))
-        _check_all(Index(['one']))
-        _check_all(Index(['one', 'three']))
-
-        # some corner cases
-        idx = Index(['three', 'one', 'two'])
-        result = idx.join(self.index, level='second')
-        tm.assertIsInstance(result, MultiIndex)
-
-        assertRaisesRegexp(TypeError, "Join.*MultiIndex.*ambiguous",
-                           self.index.join, self.index, level=1)
-
-    def test_join_self(self):
-        kinds = 'outer', 'inner', 'left', 'right'
-        for kind in kinds:
-            res = self.index
-            joined = res.join(res, how=kind)
-            self.assertIs(res, joined)
-
-    def test_join_multi(self):
-        # GH 10665
-        midx = pd.MultiIndex.from_product(
-            [np.arange(4), np.arange(4)], names=['a', 'b'])
-        idx = pd.Index([1, 2, 5], name='b')
-
-        # inner
-        jidx, lidx, ridx = midx.join(idx, how='inner', return_indexers=True)
-        exp_idx = pd.MultiIndex.from_product(
-            [np.arange(4), [1, 2]], names=['a', 'b'])
-        exp_lidx = np.array([1, 2, 5, 6, 9, 10, 13, 14])
-        exp_ridx = np.array([0, 1, 0, 1, 0, 1, 0, 1])
-        self.assert_index_equal(jidx, exp_idx)
-        self.assert_numpy_array_equal(lidx, exp_lidx)
-        self.assert_numpy_array_equal(ridx, exp_ridx)
-        # flip
-        jidx, ridx, lidx = idx.join(midx, how='inner', return_indexers=True)
-        self.assert_index_equal(jidx, exp_idx)
-        self.assert_numpy_array_equal(lidx, exp_lidx)
-        self.assert_numpy_array_equal(ridx, exp_ridx)
-
-        # keep MultiIndex
-        jidx, lidx, ridx = midx.join(idx, how='left', return_indexers=True)
-        exp_ridx = np.array([-1, 0, 1, -1, -1, 0, 1, -1, -1, 0, 1, -1, -1, 0,
-                             1, -1])
-        self.assert_index_equal(jidx, midx)
-        self.assertIsNone(lidx)
-        self.assert_numpy_array_equal(ridx, exp_ridx)
-        # flip
-        jidx, ridx, lidx = idx.join(midx, how='right', return_indexers=True)
-        self.assert_index_equal(jidx, midx)
-        self.assertIsNone(lidx)
-        self.assert_numpy_array_equal(ridx, exp_ridx)
-
-    def test_reindex(self):
-        result, indexer = self.index.reindex(list(self.index[:4]))
-        tm.assertIsInstance(result, MultiIndex)
-        self.check_level_names(result, self.index[:4].names)
-
-        result, indexer = self.index.reindex(list(self.index))
-        tm.assertIsInstance(result, MultiIndex)
-        self.assertIsNone(indexer)
-        self.check_level_names(result, self.index.names)
-
-    def test_reindex_level(self):
-        idx = Index(['one'])
-
-        target, indexer = self.index.reindex(idx, level='second')
-        target2, indexer2 = idx.reindex(self.index, level='second')
-
-        exp_index = self.index.join(idx, level='second', how='right')
-        exp_index2 = self.index.join(idx, level='second', how='left')
-
-        self.assertTrue(target.equals(exp_index))
-        exp_indexer = np.array([0, 2, 4])
-        tm.assert_numpy_array_equal(indexer, exp_indexer)
-
-        self.assertTrue(target2.equals(exp_index2))
-        exp_indexer2 = np.array([0, -1, 0, -1, 0, -1])
-        tm.assert_numpy_array_equal(indexer2, exp_indexer2)
-
-        assertRaisesRegexp(TypeError, "Fill method not supported",
-                           self.index.reindex, self.index, method='pad',
-                           level='second')
-
-        assertRaisesRegexp(TypeError, "Fill method not supported", idx.reindex,
-                           idx, method='bfill', level='first')
-
-    def test_duplicates(self):
-        self.assertFalse(self.index.has_duplicates)
-        self.assertTrue(self.index.append(self.index).has_duplicates)
-
-        index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[
-            [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]])
-        self.assertTrue(index.has_duplicates)
-
-        # GH 9075
-        t = [(u('x'), u('out'), u('z'), 5, u('y'), u('in'), u('z'), 169),
-             (u('x'), u('out'), u('z'), 7, u('y'), u('in'), u('z'), 119),
-             (u('x'), u('out'), u('z'), 9, u('y'), u('in'), u('z'), 135),
-             (u('x'), u('out'), u('z'), 13, u('y'), u('in'), u('z'), 145),
-             (u('x'), u('out'), u('z'), 14, u('y'), u('in'), u('z'), 158),
-             (u('x'), u('out'), u('z'), 16, u('y'), u('in'), u('z'), 122),
-             (u('x'), u('out'), u('z'), 17, u('y'), u('in'), u('z'), 160),
-             (u('x'), u('out'), u('z'), 18, u('y'), u('in'), u('z'), 180),
-             (u('x'), u('out'), u('z'), 20, u('y'), u('in'), u('z'), 143),
-             (u('x'), u('out'), u('z'), 21, u('y'), u('in'), u('z'), 128),
-             (u('x'), u('out'), u('z'), 22, u('y'), u('in'), u('z'), 129),
-             (u('x'), u('out'), u('z'), 25, u('y'), u('in'), u('z'), 111),
-             (u('x'), u('out'), u('z'), 28, u('y'), u('in'), u('z'), 114),
-             (u('x'), u('out'), u('z'), 29, u('y'), u('in'), u('z'), 121),
-             (u('x'), u('out'), u('z'), 31, u('y'), u('in'), u('z'), 126),
-             (u('x'), u('out'), u('z'), 32, u('y'), u('in'), u('z'), 155),
-             (u('x'), u('out'), u('z'), 33, u('y'), u('in'), u('z'), 123),
-             (u('x'), u('out'), u('z'), 12, u('y'), u('in'), u('z'), 144)]
-
-        index = pd.MultiIndex.from_tuples(t)
-        self.assertFalse(index.has_duplicates)
-
-        # handle int64 overflow if possible
-        def check(nlevels, with_nulls):
-            labels = np.tile(np.arange(500), 2)
-            level = np.arange(500)
-
-            if with_nulls:  # inject some null values
-                labels[500] = -1  # common nan value
-                labels = list(labels.copy() for i in range(nlevels))
-                for i in range(nlevels):
-                    labels[i][500 + i - nlevels // 2] = -1
-
-                labels += [np.array([-1, 1]).repeat(500)]
-            else:
-                labels = [labels] * nlevels + [np.arange(2).repeat(500)]
-
-            levels = [level] * nlevels + [[0, 1]]
-
-            # no dups
-            index = MultiIndex(levels=levels, labels=labels)
-            self.assertFalse(index.has_duplicates)
-
-            # with a dup
-            if with_nulls:
-                f = lambda a: np.insert(a, 1000, a[0])
-                labels = list(map(f, labels))
-                index = MultiIndex(levels=levels, labels=labels)
-            else:
-                values = index.values.tolist()
-                index = MultiIndex.from_tuples(values + [values[0]])
-
-            self.assertTrue(index.has_duplicates)
-
-        # no overflow
-        check(4, False)
-        check(4, True)
-
-        # overflow possible
-        check(8, False)
-        check(8, True)
-
-        # GH 9125
-        n, k = 200, 5000
-        levels = [np.arange(n), tm.makeStringIndex(n), 1000 + np.arange(n)]
-        labels = [np.random.choice(n, k * n) for lev in levels]
-        mi = MultiIndex(levels=levels, labels=labels)
-
-        for keep in ['first', 'last', False]:
-            left = mi.duplicated(keep=keep)
-            right = pd.lib.duplicated(mi.values, keep=keep)
-            tm.assert_numpy_array_equal(left, right)
-
-        # GH5873
-        for a in [101, 102]:
-            mi = MultiIndex.from_arrays([[101, a], [3.5, np.nan]])
-            self.assertFalse(mi.has_duplicates)
-            self.assertEqual(mi.get_duplicates(), [])
-            tm.assert_numpy_array_equal(mi.duplicated(), np.zeros(
-                2, dtype='bool'))
-
-        for n in range(1, 6):  # 1st level shape
-            for m in range(1, 5):  # 2nd level shape
-                # all possible unique combinations, including nan
-                lab = product(range(-1, n), range(-1, m))
-                mi = MultiIndex(levels=[list('abcde')[:n], list('WXYZ')[:m]],
-                                labels=np.random.permutation(list(lab)).T)
-                self.assertEqual(len(mi), (n + 1) * (m + 1))
-                self.assertFalse(mi.has_duplicates)
-                self.assertEqual(mi.get_duplicates(), [])
-                tm.assert_numpy_array_equal(mi.duplicated(), np.zeros(
-                    len(mi), dtype='bool'))
-
-    def test_duplicate_meta_data(self):
-        # GH 10115
-        index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[
-            [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]])
-        for idx in [index,
-                    index.set_names([None, None]),
-                    index.set_names([None, 'Num']),
-                    index.set_names(['Upper', 'Num']), ]:
-            self.assertTrue(idx.has_duplicates)
-            self.assertEqual(idx.drop_duplicates().names, idx.names)
-
-    def test_tolist(self):
-        result = self.index.tolist()
-        exp = list(self.index.values)
-        self.assertEqual(result, exp)
-
-    def test_repr_with_unicode_data(self):
-        with pd.core.config.option_context("display.encoding", 'UTF-8'):
-            d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
-            index = pd.DataFrame(d).set_index(["a", "b"]).index
-            self.assertFalse("\\u" in repr(index)
-                             )  # we don't want unicode-escaped
-
-    def test_repr_roundtrip(self):
-
-        mi = MultiIndex.from_product([list('ab'), range(3)],
-                                     names=['first', 'second'])
-        str(mi)
-
-        if PY3:
-            tm.assert_index_equal(eval(repr(mi)), mi, exact=True)
-        else:
-            result = eval(repr(mi))
-            # string coerces to unicode
-            tm.assert_index_equal(result, mi, exact=False)
-            self.assertEqual(
-                mi.get_level_values('first').inferred_type, 'string')
-            self.assertEqual(
-                result.get_level_values('first').inferred_type, 'unicode')
-
-        mi_u = MultiIndex.from_product(
-            [list(u'ab'), range(3)], names=['first', 'second'])
-        result = eval(repr(mi_u))
-        tm.assert_index_equal(result, mi_u, exact=True)
-
-        # formatting
-        if PY3:
-            str(mi)
-        else:
-            compat.text_type(mi)
-
-        # long format
-        mi = MultiIndex.from_product([list('abcdefg'), range(10)],
-                                     names=['first', 'second'])
-        result = str(mi)
-
-        if PY3:
-            tm.assert_index_equal(eval(repr(mi)), mi, exact=True)
-        else:
-            result = eval(repr(mi))
-            # string coerces to unicode
-            tm.assert_index_equal(result, mi, exact=False)
-            self.assertEqual(
-                mi.get_level_values('first').inferred_type, 'string')
-            self.assertEqual(
-                result.get_level_values('first').inferred_type, 'unicode')
-
-        mi = MultiIndex.from_product(
-            [list(u'abcdefg'), range(10)], names=['first', 'second'])
-        result = eval(repr(mi_u))
-        tm.assert_index_equal(result, mi_u, exact=True)
-
-    def test_str(self):
-        # tested elsewhere
-        pass
-
-    def test_unicode_string_with_unicode(self):
-        d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
-        idx = pd.DataFrame(d).set_index(["a", "b"]).index
-
-        if PY3:
-            str(idx)
-        else:
-            compat.text_type(idx)
-
-    def test_bytestring_with_unicode(self):
-        d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
-        idx = pd.DataFrame(d).set_index(["a", "b"]).index
-
-        if PY3:
-            bytes(idx)
-        else:
-            str(idx)
-
-    def test_slice_keep_name(self):
-        x = MultiIndex.from_tuples([('a', 'b'), (1, 2), ('c', 'd')],
-                                   names=['x', 'y'])
-        self.assertEqual(x[1:].names, x.names)
-
-    def test_isnull_behavior(self):
-        # should not segfault GH5123
-        # NOTE: if MI representation changes, may make sense to allow
-        # isnull(MI)
-        with tm.assertRaises(NotImplementedError):
-            pd.isnull(self.index)
-
-    def test_level_setting_resets_attributes(self):
-        ind = MultiIndex.from_arrays([
-            ['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3]
-        ])
-        assert ind.is_monotonic
-        ind.set_levels([['A', 'B', 'A', 'A', 'B'], [2, 1, 3, -2, 5]],
-                       inplace=True)
-        # if this fails, probably didn't reset the cache correctly.
- assert not ind.is_monotonic - - def test_isin(self): - values = [('foo', 2), ('bar', 3), ('quux', 4)] - - idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange( - 4)]) - result = idx.isin(values) - expected = np.array([False, False, True, True]) - tm.assert_numpy_array_equal(result, expected) - - # empty, return dtype bool - idx = MultiIndex.from_arrays([[], []]) - result = idx.isin(values) - self.assertEqual(len(result), 0) - self.assertEqual(result.dtype, np.bool_) - - def test_isin_nan(self): - idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]]) - tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]), - [False, False]) - tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]), - [False, False]) - - def test_isin_level_kwarg(self): - idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange( - 4)]) - - vals_0 = ['foo', 'bar', 'quux'] - vals_1 = [2, 3, 10] - - expected = np.array([False, False, True, True]) - tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=0)) - tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=-2)) - - tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=1)) - tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=-1)) - - self.assertRaises(IndexError, idx.isin, vals_0, level=5) - self.assertRaises(IndexError, idx.isin, vals_0, level=-5) - - self.assertRaises(KeyError, idx.isin, vals_0, level=1.0) - self.assertRaises(KeyError, idx.isin, vals_1, level=-1.0) - self.assertRaises(KeyError, idx.isin, vals_1, level='A') - - idx.names = ['A', 'B'] - tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level='A')) - tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level='B')) - - self.assertRaises(KeyError, idx.isin, vals_1, level='C') - - def test_reindex_preserves_names_when_target_is_list_or_ndarray(self): - # GH6552 - idx = self.index.copy() - target = idx.copy() - idx.names = target.names = [None, None] - - other_dtype = pd.MultiIndex.from_product([[1, 2], 
[3, 4]]) - - # list & ndarray cases - self.assertEqual(idx.reindex([])[0].names, [None, None]) - self.assertEqual(idx.reindex(np.array([]))[0].names, [None, None]) - self.assertEqual(idx.reindex(target.tolist())[0].names, [None, None]) - self.assertEqual(idx.reindex(target.values)[0].names, [None, None]) - self.assertEqual( - idx.reindex(other_dtype.tolist())[0].names, [None, None]) - self.assertEqual( - idx.reindex(other_dtype.values)[0].names, [None, None]) - - idx.names = ['foo', 'bar'] - self.assertEqual(idx.reindex([])[0].names, ['foo', 'bar']) - self.assertEqual(idx.reindex(np.array([]))[0].names, ['foo', 'bar']) - self.assertEqual(idx.reindex(target.tolist())[0].names, ['foo', 'bar']) - self.assertEqual(idx.reindex(target.values)[0].names, ['foo', 'bar']) - self.assertEqual( - idx.reindex(other_dtype.tolist())[0].names, ['foo', 'bar']) - self.assertEqual( - idx.reindex(other_dtype.values)[0].names, ['foo', 'bar']) - - def test_reindex_lvl_preserves_names_when_target_is_list_or_array(self): - # GH7774 - idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']], - names=['foo', 'bar']) - self.assertEqual(idx.reindex([], level=0)[0].names, ['foo', 'bar']) - self.assertEqual(idx.reindex([], level=1)[0].names, ['foo', 'bar']) - - def test_reindex_lvl_preserves_type_if_target_is_empty_list_or_array(self): - # GH7774 - idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']]) - self.assertEqual(idx.reindex([], level=0)[0].levels[0].dtype.type, - np.int64) - self.assertEqual(idx.reindex([], level=1)[0].levels[1].dtype.type, - np.object_) - - def test_groupby(self): - groups = self.index.groupby(np.array([1, 1, 1, 2, 2, 2])) - labels = self.index.get_values().tolist() - exp = {1: labels[:3], 2: labels[3:]} - tm.assert_dict_equal(groups, exp) - - # GH5620 - groups = self.index.groupby(self.index) - exp = dict((key, [key]) for key in self.index) - tm.assert_dict_equal(groups, exp) - - def test_index_name_retained(self): - # GH9857 - result = pd.DataFrame({'x': [1, 2, 6], - 
'y': [2, 2, 8], - 'z': [-5, 0, 5]}) - result = result.set_index('z') - result.loc[10] = [9, 10] - df_expected = pd.DataFrame({'x': [1, 2, 6, 9], - 'y': [2, 2, 8, 10], - 'z': [-5, 0, 5, 10]}) - df_expected = df_expected.set_index('z') - tm.assert_frame_equal(result, df_expected) - - def test_equals_operator(self): - # GH9785 - self.assertTrue((self.index == self.index).all()) - - -def test_get_combined_index(): - from pandas.core.index import _get_combined_index - result = _get_combined_index([]) - assert (result.equals(Index([]))) - - -if __name__ == '__main__': - nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], - exit=False) diff --git a/setup.py b/setup.py index 62d9062de1155..29c3f02013712 100755 --- a/setup.py +++ b/setup.py @@ -544,6 +544,7 @@ def pxd(name): 'pandas.computation', 'pandas.computation.tests', 'pandas.core', + 'pandas.indexes', 'pandas.io', 'pandas.rpy', 'pandas.sandbox', @@ -553,6 +554,7 @@ def pxd(name): 'pandas.util', 'pandas.tests', 'pandas.tests.frame', + 'pandas.tests.indexes', 'pandas.tests.test_msgpack', 'pandas.tools', 'pandas.tools.tests', @@ -580,6 +582,7 @@ def pxd(name): 'pandas.tools': ['tests/*.csv'], 'pandas.tests': ['data/*.pickle', 'data/*.csv'], + 'pandas.tests.indexes': ['data/*.pickle'], 'pandas.tseries.tests': ['data/*.pickle', 'data/*.csv'] },
Split apart these two very large modules and created a new `pandas.indexes` subpackage. It would be nice to move all the index class code from `pandas.tseries` there in a follow-up patch.
https://api.github.com/repos/pandas-dev/pandas/pulls/12124
2016-01-24T05:40:27Z
2016-01-24T22:53:31Z
null
2016-01-24T22:54:23Z
DEV: Add static modifier to inline declarations.
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 193d8f83ded79..b39a16775ef74 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -538,3 +538,5 @@ of columns didn't match the number of series provided (:issue:`12039`). - Bug in ``.loc`` setitem indexer preventing the use of a TZ-aware DatetimeIndex (:issue:`12050`) - Big in ``.style`` indexes and multi-indexes not appearing (:issue:`11655`) + +- Bug in building Pandas with debugging symbols (:issue:`12123`) diff --git a/pandas/src/helper.h b/pandas/src/helper.h index e97e45f4e87b3..b8c3cecbb2dc7 100644 --- a/pandas/src/helper.h +++ b/pandas/src/helper.h @@ -3,11 +3,11 @@ #ifndef PANDAS_INLINE #if defined(__GNUC__) - #define PANDAS_INLINE __inline__ + #define PANDAS_INLINE static __inline__ #elif defined(_MSC_VER) - #define PANDAS_INLINE __inline + #define PANDAS_INLINE static __inline #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define PANDAS_INLINE inline + #define PANDAS_INLINE static inline #else #define PANDAS_INLINE #endif diff --git a/pandas/src/klib/khash.h b/pandas/src/klib/khash.h index 4350ff06f37f0..10c437c22fe1d 100644 --- a/pandas/src/klib/khash.h +++ b/pandas/src/klib/khash.h @@ -132,11 +132,11 @@ typedef double khfloat64_t; #ifndef PANDAS_INLINE #if defined(__GNUC__) - #define PANDAS_INLINE __inline__ + #define PANDAS_INLINE static __inline__ #elif defined(_MSC_VER) - #define PANDAS_INLINE __inline + #define PANDAS_INLINE static __inline #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define PANDAS_INLINE inline + #define PANDAS_INLINE static inline #else #define PANDAS_INLINE #endif @@ -324,7 +324,7 @@ static const double __ac_HASH_UPPER = 0.77; } #define KHASH_INIT(name, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal) \ - KHASH_INIT2(name, static PANDAS_INLINE, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal) + KHASH_INIT2(name, PANDAS_INLINE, khkey_t, khval_t, kh_is_map, 
__hash_func, __hash_equal) /* --- BEGIN OF HASH FUNCTIONS --- */ @@ -354,7 +354,7 @@ static const double __ac_HASH_UPPER = 0.77; @param s Pointer to a null terminated string @return The hash value */ -static PANDAS_INLINE khint_t __ac_X31_hash_string(const char *s) +PANDAS_INLINE khint_t __ac_X31_hash_string(const char *s) { khint_t h = *s; if (h) for (++s ; *s; ++s) h = (h << 5) - h + *s; @@ -371,7 +371,7 @@ static PANDAS_INLINE khint_t __ac_X31_hash_string(const char *s) */ #define kh_str_hash_equal(a, b) (strcmp(a, b) == 0) -static PANDAS_INLINE khint_t __ac_Wang_hash(khint_t key) +PANDAS_INLINE khint_t __ac_Wang_hash(khint_t key) { key += ~(key << 15); key ^= (key >> 10); diff --git a/pandas/src/klib/kvec.h b/pandas/src/klib/kvec.h index 032962e5e17db..c5e6e6c407dfc 100644 --- a/pandas/src/klib/kvec.h +++ b/pandas/src/klib/kvec.h @@ -54,11 +54,11 @@ int main() { #ifndef PANDAS_INLINE #if defined(__GNUC__) - #define PANDAS_INLINE __inline__ + #define PANDAS_INLINE static __inline__ #elif defined(_MSC_VER) - #define PANDAS_INLINE __inline + #define PANDAS_INLINE static __inline #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define PANDAS_INLINE inline + #define PANDAS_INLINE static inline #else #define PANDAS_INLINE #endif diff --git a/pandas/src/parser/tokenizer.c b/pandas/src/parser/tokenizer.c index 9d81bc9c37b8d..2e4a804a577b5 100644 --- a/pandas/src/parser/tokenizer.c +++ b/pandas/src/parser/tokenizer.c @@ -408,7 +408,7 @@ static int push_char(parser_t *self, char c) { return 0; } -static int P_INLINE end_field(parser_t *self) { +int P_INLINE end_field(parser_t *self) { // XXX cruft // self->numeric_field = 0; if (self->words_len >= self->words_cap) { diff --git a/pandas/src/parser/tokenizer.h b/pandas/src/parser/tokenizer.h index eef94e0616769..6aac34ecce41e 100644 --- a/pandas/src/parser/tokenizer.h +++ b/pandas/src/parser/tokenizer.h @@ -41,11 +41,11 @@ See LICENSE for the license #ifndef P_INLINE #if defined(__GNUC__) - #define 
P_INLINE __inline__ + #define P_INLINE static __inline__ #elif defined(_MSC_VER) #define P_INLINE #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define P_INLINE inline + #define P_INLINE static inline #else #define P_INLINE #endif diff --git a/pandas/src/period_helper.c b/pandas/src/period_helper.c index e056b1fa9a522..86672e1a753ea 100644 --- a/pandas/src/period_helper.c +++ b/pandas/src/period_helper.c @@ -283,19 +283,19 @@ static int daytime_conversion_factors[][2] = { static npy_int64** daytime_conversion_factor_matrix = NULL; -PANDAS_INLINE static int max_value(int a, int b) { +PANDAS_INLINE int max_value(int a, int b) { return a > b ? a : b; } -PANDAS_INLINE static int min_value(int a, int b) { +PANDAS_INLINE int min_value(int a, int b) { return a < b ? a : b; } -PANDAS_INLINE static int get_freq_group(int freq) { +PANDAS_INLINE int get_freq_group(int freq) { return (freq/1000)*1000; } -PANDAS_INLINE static int get_freq_group_index(int freq) { +PANDAS_INLINE int get_freq_group_index(int freq) { return freq/1000; } @@ -399,7 +399,7 @@ PANDAS_INLINE npy_int64 downsample_daytime(npy_int64 ordinal, asfreq_info *af_in return ordinal / (af_info->intraday_conversion_factor); } -PANDAS_INLINE static npy_int64 transform_via_day(npy_int64 ordinal, char relation, asfreq_info *af_info, freq_conv_func first_func, freq_conv_func second_func) { +PANDAS_INLINE npy_int64 transform_via_day(npy_int64 ordinal, char relation, asfreq_info *af_info, freq_conv_func first_func, freq_conv_func second_func) { //printf("transform_via_day(%ld, %ld, %d)\n", ordinal, af_info->intraday_conversion_factor, af_info->intraday_conversion_upsample); npy_int64 result; diff --git a/pandas/src/skiplist.h b/pandas/src/skiplist.h index 2d70090302c94..3bf63aedce9cb 100644 --- a/pandas/src/skiplist.h +++ b/pandas/src/skiplist.h @@ -18,17 +18,17 @@ #ifndef PANDAS_INLINE #if defined(__GNUC__) - #define PANDAS_INLINE __inline__ + #define PANDAS_INLINE static __inline__ #elif 
defined(_MSC_VER) - #define PANDAS_INLINE __inline + #define PANDAS_INLINE static __inline #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define PANDAS_INLINE inline + #define PANDAS_INLINE static inline #else #define PANDAS_INLINE #endif #endif -PANDAS_INLINE static float __skiplist_nanf(void) +PANDAS_INLINE float __skiplist_nanf(void) { const union { int __i; float __f;} __bint = {0x7fc00000UL}; return __bint.__f; @@ -36,7 +36,7 @@ PANDAS_INLINE static float __skiplist_nanf(void) #define PANDAS_NAN ((double) __skiplist_nanf()) -static PANDAS_INLINE double Log2(double val) { +PANDAS_INLINE double Log2(double val) { return log(val) / log(2.); } @@ -59,15 +59,15 @@ typedef struct { int maxlevels; } skiplist_t; -static PANDAS_INLINE double urand(void) { +PANDAS_INLINE double urand(void) { return ((double) rand() + 1) / ((double) RAND_MAX + 2); } -static PANDAS_INLINE int int_min(int a, int b) { +PANDAS_INLINE int int_min(int a, int b) { return a < b ? a : b; } -static PANDAS_INLINE node_t *node_init(double value, int levels) { +PANDAS_INLINE node_t *node_init(double value, int levels) { node_t *result; result = (node_t*) malloc(sizeof(node_t)); if (result) { @@ -88,11 +88,11 @@ static PANDAS_INLINE node_t *node_init(double value, int levels) { } // do this ourselves -static PANDAS_INLINE void node_incref(node_t *node) { +PANDAS_INLINE void node_incref(node_t *node) { ++(node->ref_count); } -static PANDAS_INLINE void node_decref(node_t *node) { +PANDAS_INLINE void node_decref(node_t *node) { --(node->ref_count); } @@ -115,7 +115,7 @@ static void node_destroy(node_t *node) { } } -static PANDAS_INLINE void skiplist_destroy(skiplist_t *skp) { +PANDAS_INLINE void skiplist_destroy(skiplist_t *skp) { if (skp) { node_destroy(skp->head); free(skp->tmp_steps); @@ -124,7 +124,7 @@ static PANDAS_INLINE void skiplist_destroy(skiplist_t *skp) { } } -static PANDAS_INLINE skiplist_t *skiplist_init(int expected_size) { +PANDAS_INLINE skiplist_t *skiplist_init(int 
expected_size) { skiplist_t *result; node_t *NIL, *head; int maxlevels, i; @@ -163,7 +163,7 @@ static PANDAS_INLINE skiplist_t *skiplist_init(int expected_size) { } // 1 if left < right, 0 if left == right, -1 if left > right -static PANDAS_INLINE int _node_cmp(node_t* node, double value){ +PANDAS_INLINE int _node_cmp(node_t* node, double value){ if (node->is_nil || node->value > value) { return -1; } @@ -175,7 +175,7 @@ static PANDAS_INLINE int _node_cmp(node_t* node, double value){ } } -static PANDAS_INLINE double skiplist_get(skiplist_t *skp, int i, int *ret) { +PANDAS_INLINE double skiplist_get(skiplist_t *skp, int i, int *ret) { node_t *node; int level; @@ -199,7 +199,7 @@ static PANDAS_INLINE double skiplist_get(skiplist_t *skp, int i, int *ret) { return node->value; } -static PANDAS_INLINE int skiplist_insert(skiplist_t *skp, double value) { +PANDAS_INLINE int skiplist_insert(skiplist_t *skp, double value) { node_t *node, *prevnode, *newnode, *next_at_level; int *steps_at_level; int size, steps, level; @@ -253,7 +253,7 @@ static PANDAS_INLINE int skiplist_insert(skiplist_t *skp, double value) { return 1; } -static PANDAS_INLINE int skiplist_remove(skiplist_t *skp, double value) { +PANDAS_INLINE int skiplist_remove(skiplist_t *skp, double value) { int level, size; node_t *node, *prevnode, *tmpnode, *next_at_level; node_t **chain; diff --git a/pandas/src/ujson/lib/ultrajson.h b/pandas/src/ujson/lib/ultrajson.h index ba1958723fa94..f83f74a0fe0da 100644 --- a/pandas/src/ujson/lib/ultrajson.h +++ b/pandas/src/ujson/lib/ultrajson.h @@ -95,7 +95,7 @@ typedef __int64 JSLONG; #define FASTCALL_MSVC __fastcall #define FASTCALL_ATTR -#define INLINE_PREFIX __inline +#define INLINE_PREFIX static __inline #else @@ -114,7 +114,7 @@ typedef uint32_t JSUINT32; #define FASTCALL_ATTR #endif -#define INLINE_PREFIX inline +#define INLINE_PREFIX static inline typedef uint8_t JSUINT8; typedef uint16_t JSUTF16; diff --git a/setup.py b/setup.py index 62d9062de1155..971d14eb4a76f 
100755 --- a/setup.py +++ b/setup.py @@ -290,7 +290,7 @@ def run(self): class CheckingBuildExt(build_ext): """ Subclass build_ext to get clearer report if Cython is necessary. - Also, add some platform based compiler flags. + """ def check_cython_extensions(self, extensions): @@ -304,27 +304,8 @@ def check_cython_extensions(self, extensions): def build_extensions(self): self.check_cython_extensions(self.extensions) - self.add_gnu_inline_flag(self.extensions) build_ext.build_extensions(self) - def add_gnu_inline_flag(self, extensions): - ''' - Add CFLAGS `-fgnu89-inline` for clang on FreeBSD 10+ - ''' - if not platform.system() == 'FreeBSD': - return - - try: - bsd_release = float(platform.release().split('-')[0]) - except ValueError: # unknow freebsd version - return - - if bsd_release < 10: # 9 or earlier still using gcc42 - return - - for ext in extensions: - ext.extra_compile_args += ['-fgnu89-inline'] - class CythonCommand(build_ext): """Custom distutils command subclassed from Cython.Distutils.build_ext
It is currently impossible to build a debug version of Pandas on Mac OS, since some of the C function declarations are declared as `inline`, and compiling in debug mode turns off optimizations and hence inlining. While this could be solved by adding a compile argument to `setup.py` to use the C89 behavior for inline as in #10510, I believe an ultimately cleaner solution may consist of declaring inlined functions as `static`, so that the code is properly C99-compliant. For the implementation, I added the `static` keyword to the definitions of `PANDAS_INLINE` and `P_INLINE` (and removed duplicate occurrences of `static` from some function declarations), so that future implementations that use those defines are automatically compliant. This is slightly different from e.g. Cython, which defines `CYTHON_INLINE` as `inline` and declares everything as `static CYTHON_INLINE`.
https://api.github.com/repos/pandas-dev/pandas/pulls/12123
2016-01-24T03:42:02Z
2016-01-24T18:29:54Z
null
2016-01-24T18:30:01Z
DOC: typo in v0.10 whatsnew
diff --git a/doc/source/whatsnew/v0.10.0.txt b/doc/source/whatsnew/v0.10.0.txt index 04159186084f5..f4e7825032ce0 100644 --- a/doc/source/whatsnew/v0.10.0.txt +++ b/doc/source/whatsnew/v0.10.0.txt @@ -82,7 +82,7 @@ Note: series.resample('D', how='sum', closed='right', label='right') - Infinity and negative infinity are no longer treated as NA by ``isnull`` and - ``notnull``. That they every were was a relic of early pandas. This behavior + ``notnull``. That they ever were was a relic of early pandas. This behavior can be re-enabled globally by the ``mode.use_inf_as_null`` option: .. ipython:: python
https://api.github.com/repos/pandas-dev/pandas/pulls/12122
2016-01-24T02:02:34Z
2016-01-24T13:14:44Z
null
2016-01-24T13:14:44Z
BUG: Avoid cancellations in nanskew/nankurt.
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 75a38544fb8eb..d7016af6014fc 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -540,3 +540,5 @@ of columns didn't match the number of series provided (:issue:`12039`). - Bug in ``.loc`` setitem indexer preventing the use of a TZ-aware DatetimeIndex (:issue:`12050`) - Big in ``.style`` indexes and multi-indexes not appearing (:issue:`11655`) + +- Bug in ``.skew`` and ``.kurt`` due to roundoff error for highly similar values (:issue:`11974`) diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py index 7764d6e1e1fa9..b5e4e45991a49 100644 --- a/pandas/core/nanops.py +++ b/pandas/core/nanops.py @@ -474,6 +474,13 @@ def nanargmin(values, axis=None, skipna=True): @disallow('M8', 'm8') def nanskew(values, axis=None, skipna=True): + """ Compute the sample skewness. + + The statistic computed here is the adjusted Fisher-Pearson standardized + moment coefficient G1. The algorithm computes this coefficient directly + from the second and third central moment. 
+ + """ mask = isnull(values) if not is_float_dtype(values.dtype): @@ -486,24 +493,34 @@ def nanskew(values, axis=None, skipna=True): values = values.copy() np.putmask(values, mask, 0) - typ = values.dtype.type - A = values.sum(axis) / count - B = (values**2).sum(axis) / count - A**typ(2) - C = (values**3).sum(axis) / count - A**typ(3) - typ(3) * A * B + mean = values.sum(axis, dtype=np.float64) / count + if axis is not None: + mean = np.expand_dims(mean, axis) + + adjusted = values - mean + if skipna: + np.putmask(adjusted, mask, 0) + adjusted2 = adjusted ** 2 + adjusted3 = adjusted2 * adjusted + m2 = adjusted2.sum(axis, dtype=np.float64) + m3 = adjusted3.sum(axis, dtype=np.float64) # floating point error - B = _zero_out_fperr(B) - C = _zero_out_fperr(C) + m2 = _zero_out_fperr(m2) + m3 = _zero_out_fperr(m3) - result = ((np.sqrt(count * count - count) * C) / - ((count - typ(2)) * np.sqrt(B)**typ(3))) + result = (count * (count-1) ** 0.5 / (count-2)) * (m3 / m2 ** 1.5) + + dtype = values.dtype + if is_float_dtype(dtype): + result = result.astype(dtype) if isinstance(result, np.ndarray): - result = np.where(B == 0, 0, result) + result = np.where(m2 == 0, 0, result) result[count < 3] = np.nan return result else: - result = 0 if B == 0 else result + result = 0 if m2 == 0 else result if count < 3: return np.nan return result @@ -511,7 +528,13 @@ def nanskew(values, axis=None, skipna=True): @disallow('M8', 'm8') def nankurt(values, axis=None, skipna=True): + """ Compute the sample skewness. + + The statistic computed here is the adjusted Fisher-Pearson standardized + moment coefficient G2, computed directly from the second and fourth + central moment. 
+ """ mask = isnull(values) if not is_float_dtype(values.dtype): values = values.astype('f8') @@ -523,30 +546,43 @@ def nankurt(values, axis=None, skipna=True): values = values.copy() np.putmask(values, mask, 0) - typ = values.dtype.type - A = values.sum(axis) / count - B = (values**2).sum(axis) / count - A**typ(2) - C = (values**3).sum(axis) / count - A**typ(3) - typ(3) * A * B - D = ((values**4).sum(axis) / count - A**typ(4) - - typ(6) * B * A * A - typ(4) * C * A) + mean = values.sum(axis, dtype=np.float64) / count + if axis is not None: + mean = np.expand_dims(mean, axis) + + adjusted = values - mean + if skipna: + np.putmask(adjusted, mask, 0) + adjusted2 = adjusted ** 2 + adjusted4 = adjusted2 ** 2 + m2 = adjusted2.sum(axis, dtype=np.float64) + m4 = adjusted4.sum(axis, dtype=np.float64) + + adj = 3 * (count - 1) ** 2 / ((count - 2) * (count - 3)) + numer = count * (count + 1) * (count - 1) * m4 + denom = (count - 2) * (count - 3) * m2**2 + result = numer / denom - adj - B = _zero_out_fperr(B) - D = _zero_out_fperr(D) + # floating point error + numer = _zero_out_fperr(numer) + denom = _zero_out_fperr(denom) - if not isinstance(B, np.ndarray): - # if B is a scalar, check these corner cases first before doing - # division + if not isinstance(denom, np.ndarray): + # if ``denom`` is a scalar, check these corner cases first before + # doing division if count < 4: return np.nan - if B == 0: + if denom == 0: return 0 - result = (((count * count - typ(1)) * D / (B * B) - typ(3) * - ((count - typ(1))**typ(2))) / ((count - typ(2)) * - (count - typ(3)))) + result = numer / denom - adj + + dtype = values.dtype + if is_float_dtype(dtype): + result = result.astype(dtype) if isinstance(result, np.ndarray): - result = np.where(B == 0, 0, result) + result = np.where(denom == 0, 0, result) result[count < 4] = np.nan return result diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py index ecd3fa6ed53ee..ca5246ba98f89 100644 --- a/pandas/tests/test_nanops.py +++ 
b/pandas/tests/test_nanops.py @@ -888,6 +888,106 @@ def prng(self): return np.random.RandomState(1234) +class TestNanskewFixedValues(tm.TestCase): + + # xref GH 11974 + + def setUp(self): + # Test data + skewness value (computed with scipy.stats.skew) + self.samples = np.sin(np.linspace(0, 1, 200)) + self.actual_skew = -0.1875895205961754 + + def test_constant_series(self): + # xref GH 11974 + for val in [3075.2, 3075.3, 3075.5]: + data = val * np.ones(300) + skew = nanops.nanskew(data) + self.assertEqual(skew, 0.0) + + def test_all_finite(self): + alpha, beta = 0.3, 0.1 + left_tailed = self.prng.beta(alpha, beta, size=100) + self.assertLess(nanops.nanskew(left_tailed), 0) + + alpha, beta = 0.1, 0.3 + right_tailed = self.prng.beta(alpha, beta, size=100) + self.assertGreater(nanops.nanskew(right_tailed), 0) + + def test_ground_truth(self): + skew = nanops.nanskew(self.samples) + self.assertAlmostEqual(skew, self.actual_skew) + + def test_axis(self): + samples = np.vstack([self.samples, + np.nan * np.ones(len(self.samples))]) + skew = nanops.nanskew(samples, axis=1) + tm.assert_almost_equal(skew, [self.actual_skew, np.nan]) + + def test_nans(self): + samples = np.hstack([self.samples, np.nan]) + skew = nanops.nanskew(samples, skipna=False) + self.assertTrue(np.isnan(skew)) + + def test_nans_skipna(self): + samples = np.hstack([self.samples, np.nan]) + skew = nanops.nanskew(samples, skipna=True) + tm.assert_almost_equal(skew, self.actual_skew) + + @property + def prng(self): + return np.random.RandomState(1234) + + +class TestNankurtFixedValues(tm.TestCase): + + # xref GH 11974 + + def setUp(self): + # Test data + kurtosis value (computed with scipy.stats.kurtosis) + self.samples = np.sin(np.linspace(0, 1, 200)) + self.actual_kurt = -1.2058303433799713 + + def test_constant_series(self): + # xref GH 11974 + for val in [3075.2, 3075.3, 3075.5]: + data = val * np.ones(300) + kurt = nanops.nankurt(data) + self.assertEqual(kurt, 0.0) + + def test_all_finite(self): + 
alpha, beta = 0.3, 0.1 + left_tailed = self.prng.beta(alpha, beta, size=100) + self.assertLess(nanops.nankurt(left_tailed), 0) + + alpha, beta = 0.1, 0.3 + right_tailed = self.prng.beta(alpha, beta, size=100) + self.assertGreater(nanops.nankurt(right_tailed), 0) + + def test_ground_truth(self): + kurt = nanops.nankurt(self.samples) + self.assertAlmostEqual(kurt, self.actual_kurt) + + def test_axis(self): + samples = np.vstack([self.samples, + np.nan * np.ones(len(self.samples))]) + kurt = nanops.nankurt(samples, axis=1) + tm.assert_almost_equal(kurt, [self.actual_kurt, np.nan]) + + def test_nans(self): + samples = np.hstack([self.samples, np.nan]) + kurt = nanops.nankurt(samples, skipna=False) + self.assertTrue(np.isnan(kurt)) + + def test_nans_skipna(self): + samples = np.hstack([self.samples, np.nan]) + kurt = nanops.nankurt(samples, skipna=True) + tm.assert_almost_equal(kurt, self.actual_kurt) + + @property + def prng(self): + return np.random.RandomState(1234) + + if __name__ == '__main__': import nose nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure', '-s'
closes https://github.com/pydata/pandas/issues/11974 This replaces the implementation of `nanskew` and `nankurt` by an algorithm that does not produce round-off error due to subtracting very similar large values. The sample skew and kurtosis are computed directly from the moments, rather than expanding the moments, which leads to cancellation errors. The algorithm implemented here is about as efficient as the original implementation (in terms of time and memory); a more efficient one-pass algorithm is certainly possible but would require some Cython code.
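The cancellation described above is easy to reproduce in plain Python (no pandas needed). A minimal sketch with illustrative values — large relative to their spread, like the near-constant series in the linked issue: the expanded form `E[x**2] - mean**2` subtracts two nearly equal ~1e18 quantities and loses every significant digit, while the central-moment form used by this patch subtracts the mean *before* squaring and keeps full precision.

```python
# Hypothetical data: three values near 1e9 whose true population second
# central moment is exactly 2/3.
xs = [1e9 + 1.0, 1e9 + 2.0, 1e9 + 3.0]
n = len(xs)
mean = sum(xs) / n

# Expanded (old) form: each square is ~1e18, so the subtraction of two
# nearly equal doubles cancels catastrophically.
m2_naive = sum(v * v for v in xs) / n - mean * mean

# Central-moment (new) form: subtract the mean first, then square.
m2_central = sum((v - mean) ** 2 for v in xs) / n
```

Here `m2_central` recovers 2/3 to machine precision, while `m2_naive` is off by orders of magnitude more than the answer itself; the same effect drives the skew and kurtosis errors, since those divide the third and fourth moments by powers of `m2`.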
https://api.github.com/repos/pandas-dev/pandas/pulls/12121
2016-01-23T20:27:09Z
2016-01-25T15:40:17Z
null
2016-01-25T18:01:49Z
Allow from_tuples to accept and iterator
diff --git a/pandas/core/index.py b/pandas/core/index.py index 558da897b241e..ef3d15b9c1c51 100644 --- a/pandas/core/index.py +++ b/pandas/core/index.py @@ -5619,6 +5619,8 @@ def from_tuples(cls, tuples, sortorder=None, names=None): MultiIndex.from_product : Make a MultiIndex from cartesian product of iterables """ + tuples = list(tuples) + if len(tuples) == 0: # I think this is right? Not quite sure... raise TypeError('Cannot infer number of levels from empty list')
Since many ways of constructing a sequence of tuples in Python 3 (`zip`, generator expressions, `map`) return iterators rather than lists, this materializes them into a list before `from_tuples` checks the length. That said, using `from_arrays` often works better.
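A stdlib-only sketch of why the `tuples = list(tuples)` call is needed (the data is hypothetical): in Python 3, `zip` returns an iterator, which has no `len()`, so the existing `len(tuples) == 0` check would raise before the patch.

```python
# zip() yields an iterator in Python 3 -- it has no length.
pairs = zip(['a', 'a', 'b'], [1, 2, 1])
try:
    len(pairs)
    has_len = True
except TypeError:
    has_len = False  # this branch is taken

# The fix materializes the iterator first, as the patch does with
# ``tuples = list(tuples)``, after which len() and indexing both work.
tuples = list(zip(['a', 'a', 'b'], [1, 2, 1]))
```

Note the one-shot nature of iterators is the real hazard: even without the `len()` call, consuming `tuples` twice would silently yield nothing the second time.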
https://api.github.com/repos/pandas-dev/pandas/pulls/12119
2016-01-22T15:32:56Z
2016-01-27T16:00:17Z
null
2016-01-27T16:55:31Z
Update __init__.py
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py index 2da4427af4cb6..0d71ef5c97204 100644 --- a/pandas/compat/__init__.py +++ b/pandas/compat/__init__.py @@ -37,6 +37,10 @@ PY2 = sys.version_info[0] == 2 PY3 = (sys.version_info[0] >= 3) PY35 = (sys.version_info >= (3, 5)) +try: + from inspect import signature + except: + from inspect import getargspec try: import __builtin__ as builtins
https://api.github.com/repos/pandas-dev/pandas/pulls/12118
2016-01-22T12:33:40Z
2016-01-22T15:47:37Z
null
2016-01-22T19:23:08Z
Update __init__.py
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py index 2da4427af4cb6..4ffff8cff520f 100644 --- a/pandas/compat/__init__.py +++ b/pandas/compat/__init__.py @@ -47,7 +47,9 @@ BytesIO = StringIO import cPickle import httplib + from inspect import signature except ImportError: + from inspect import getargspec import builtins from io import StringIO, BytesIO cStringIO = StringIO
`getargspec` is deprecated in Python 3; `signature` is used instead.
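A minimal sketch of the replacement API (the function being inspected is a hypothetical stand-in, not a pandas function): `inspect.signature` covers what `getargspec` was used for — parameter names and defaults — and `getargspec` was later removed from the standard library entirely, so new code should prefer `signature` where available.

```python
import inspect

# Illustrative stand-in function.
def resample(rule, how=None, fill_method=None):
    pass

sig = inspect.signature(resample)
arg_names = list(sig.parameters)  # parameter names, in declaration order
defaults = {name: p.default for name, p in sig.parameters.items()
            if p.default is not inspect.Parameter.empty}
```

`arg_names` is `['rule', 'how', 'fill_method']` and `defaults` maps the two keyword arguments to `None`, mirroring the `(args, varargs, keywords, defaults)` tuple `getargspec` used to return.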
https://api.github.com/repos/pandas-dev/pandas/pulls/12117
2016-01-22T06:35:51Z
2016-01-22T15:47:53Z
null
2016-01-22T15:47:53Z
PEP: pandas/sparse cleanup
diff --git a/pandas/sparse/api.py b/pandas/sparse/api.py index 230ad15937c92..b4d874e6a1ab9 100644 --- a/pandas/sparse/api.py +++ b/pandas/sparse/api.py @@ -1,5 +1,5 @@ # pylint: disable=W0611 - +# flake8: noqa from pandas.sparse.array import SparseArray from pandas.sparse.list import SparseList from pandas.sparse.series import SparseSeries, SparseTimeSeries diff --git a/pandas/sparse/array.py b/pandas/sparse/array.py index b40a23fb4556a..a370bcf42fbaa 100644 --- a/pandas/sparse/array.py +++ b/pandas/sparse/array.py @@ -19,12 +19,13 @@ import pandas.core.ops as ops -def _arith_method(op, name, str_rep=None, default_axis=None, - fill_zeros=None, **eval_kwargs): +def _arith_method(op, name, str_rep=None, default_axis=None, fill_zeros=None, + **eval_kwargs): """ Wrapper function for Series arithmetic operations, to avoid code duplication. """ + def wrapper(self, other): if isinstance(other, np.ndarray): if len(self) != len(other): @@ -37,14 +38,14 @@ def wrapper(self, other): else: return _sparse_array_op(self, other, op, name) elif np.isscalar(other): - new_fill_value = op(np.float64(self.fill_value), - np.float64(other)) + new_fill_value = op(np.float64(self.fill_value), np.float64(other)) return SparseArray(op(self.sp_values, other), sparse_index=self.sp_index, fill_value=new_fill_value) else: # pragma: no cover raise TypeError('operation with %s not supported' % type(other)) + if name.startswith("__"): name = name[2:-2] wrapper.__name__ = name @@ -74,44 +75,38 @@ def _sparse_array_op(left, right, op, name): def _sparse_nanop(this, other, name): sparse_op = getattr(splib, 'sparse_nan%s' % name) - result, result_index = sparse_op(this.sp_values, - this.sp_index, - other.sp_values, - other.sp_index) + result, result_index = sparse_op(this.sp_values, this.sp_index, + other.sp_values, other.sp_index) return result, result_index def _sparse_fillop(this, other, name): sparse_op = getattr(splib, 'sparse_%s' % name) - result, result_index = sparse_op(this.sp_values, - 
this.sp_index, - this.fill_value, - other.sp_values, - other.sp_index, - other.fill_value) + result, result_index = sparse_op(this.sp_values, this.sp_index, + this.fill_value, other.sp_values, + other.sp_index, other.fill_value) return result, result_index class SparseArray(PandasObject, np.ndarray): - """Data structure for labeled, sparse floating point data -Parameters ----------- -data : {array-like, Series, SparseSeries, dict} -kind : {'block', 'integer'} -fill_value : float - Defaults to NaN (code for missing) -sparse_index : {BlockIndex, IntIndex}, optional - Only if you have one. Mainly used internally - -Notes ------ -SparseArray objects are immutable via the typical Python means. If you -must change values, convert to dense, make your changes, then convert back -to sparse + Parameters + ---------- + data : {array-like, Series, SparseSeries, dict} + kind : {'block', 'integer'} + fill_value : float + Defaults to NaN (code for missing) + sparse_index : {BlockIndex, IntIndex}, optional + Only if you have one. Mainly used internally + + Notes + ----- + SparseArray objects are immutable via the typical Python means. 
If you + must change values, convert to dense, make your changes, then convert back + to sparse """ __array_priority__ = 15 _typ = 'array' @@ -120,9 +115,8 @@ class SparseArray(PandasObject, np.ndarray): sp_index = None fill_value = None - def __new__( - cls, data, sparse_index=None, index=None, kind='integer', fill_value=None, - dtype=np.float64, copy=False): + def __new__(cls, data, sparse_index=None, index=None, kind='integer', + fill_value=None, dtype=np.float64, copy=False): if index is not None: if data is None: @@ -164,7 +158,8 @@ def __new__( subarr = np.asarray(values, dtype=dtype) # if we have a bool type, make sure that we have a bool fill_value - if (dtype is not None and issubclass(dtype.type, np.bool_)) or (data is not None and lib.is_bool_array(subarr)): + if ((dtype is not None and issubclass(dtype.type, np.bool_)) or + (data is not None and lib.is_bool_array(subarr))): if np.isnan(fill_value) or not fill_value: fill_value = False else: @@ -284,9 +279,9 @@ def __getitem__(self, key): else: if isinstance(key, SparseArray): key = np.asarray(key) - if hasattr(key,'__len__') and len(self) != len(key): + if hasattr(key, '__len__') and len(self) != len(key): indices = self.sp_index - if hasattr(indices,'to_int_index'): + if hasattr(indices, 'to_int_index'): indices = indices.to_int_index() data_slice = self.values.take(indices.indices)[key] else: @@ -355,7 +350,8 @@ def __setitem__(self, key, value): # if com.is_integer(key): # self.values[key] = value # else: - # raise Exception("SparseArray does not support seting non-scalars via setitem") + # raise Exception("SparseArray does not support seting non-scalars + # via setitem") raise TypeError( "SparseArray does not support item assignment via setitem") @@ -364,16 +360,17 @@ def __setslice__(self, i, j, value): i = 0 if j < 0: j = 0 - slobj = slice(i, j) + slobj = slice(i, j) # noqa # if not np.isscalar(value): - # raise Exception("SparseArray does not support seting non-scalars via slices") + # raise 
Exception("SparseArray does not support seting non-scalars + # via slices") - #x = self.values - #x[slobj] = value - #self.values = x - raise TypeError( - "SparseArray does not support item assignment via slices") + # x = self.values + # x[slobj] = value + # self.values = x + raise TypeError("SparseArray does not support item assignment via " + "slices") def astype(self, dtype=None): """ @@ -394,8 +391,7 @@ def copy(self, deep=True): else: values = self.sp_values return SparseArray(values, sparse_index=self.sp_index, - dtype=self.dtype, - fill_value=self.fill_value) + dtype=self.dtype, fill_value=self.fill_value) def count(self): """ @@ -453,8 +449,7 @@ def cumsum(self, axis=0, dtype=None, out=None): if com.notnull(self.fill_value): return self.to_dense().cumsum() # TODO: what if sp_values contains NaN?? - return SparseArray(self.sp_values.cumsum(), - sparse_index=self.sp_index, + return SparseArray(self.sp_values.cumsum(), sparse_index=self.sp_index, fill_value=self.fill_value) def mean(self, axis=None, dtype=None, out=None): @@ -485,8 +480,8 @@ def _maybe_to_dense(obj): def _maybe_to_sparse(array): if isinstance(array, com.ABCSparseSeries): - array = SparseArray( - array.values, sparse_index=array.sp_index, fill_value=array.fill_value, copy=True) + array = SparseArray(array.values, sparse_index=array.sp_index, + fill_value=array.fill_value, copy=True) if not isinstance(array, SparseArray): array = com._values_from_object(array) return array @@ -538,15 +533,15 @@ def make_sparse(arr, kind='block', fill_value=nan): sparsified_values = arr[mask] return sparsified_values, index -ops.add_special_arithmetic_methods(SparseArray, - arith_method=_arith_method, - use_numexpr=False) +ops.add_special_arithmetic_methods(SparseArray, arith_method=_arith_method, + use_numexpr=False) def _concat_compat(to_concat, axis=0): """ - provide concatenation of an sparse/dense array of arrays each of which is a single dtype + provide concatenation of an sparse/dense array of arrays each 
of which is a + single dtype Parameters ---------- @@ -570,10 +565,10 @@ def convert_sparse(x, axis): typs = com.get_dtype_kinds(to_concat) # we have more than one type here, so densify and regular concat - to_concat = [ convert_sparse(x, axis) for x in to_concat ] - result = np.concatenate(to_concat,axis=axis) + to_concat = [convert_sparse(x, axis) for x in to_concat] + result = np.concatenate(to_concat, axis=axis) - if not len(typs-set(['sparse','f','i'])): + if not len(typs - set(['sparse', 'f', 'i'])): # we can remain sparse result = SparseArray(result.ravel()) diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py index 5e3f59a24e5a1..25f1f16831317 100644 --- a/pandas/sparse/frame.py +++ b/pandas/sparse/frame.py @@ -6,21 +6,18 @@ # pylint: disable=E1101,E1103,W0231,E0202 from numpy import nan -from pandas.compat import range, lmap, map +from pandas.compat import lmap from pandas import compat import numpy as np -from pandas.core.common import (isnull, notnull, _pickle_array, - _unpickle_array, _try_sort) +from pandas.core.common import isnull, _unpickle_array, _try_sort from pandas.core.index import Index, MultiIndex, _ensure_index from pandas.core.series import Series from pandas.core.frame import (DataFrame, extract_index, _prep_ndarray, _default_index) -from pandas.util.decorators import cache_readonly import pandas.core.common as com -import pandas.core.datetools as datetools -from pandas.core.internals import BlockManager, create_block_manager_from_arrays - +from pandas.core.internals import (BlockManager, + create_block_manager_from_arrays) from pandas.core.generic import NDFrame from pandas.sparse.series import SparseSeries, SparseArray from pandas.util.decorators import Appender @@ -28,7 +25,6 @@ class SparseDataFrame(DataFrame): - """ DataFrame containing sparse floating point data in the form of SparseSeries objects @@ -48,9 +44,8 @@ class SparseDataFrame(DataFrame): _constructor_sliced = SparseSeries _subtyp = 'sparse_frame' - def 
__init__(self, data=None, index=None, columns=None, - default_kind=None, default_fill_value=None, - dtype=None, copy=False): + def __init__(self, data=None, index=None, columns=None, default_kind=None, + default_fill_value=None, dtype=None, copy=False): # pick up the defaults from the Sparse structures if isinstance(data, SparseDataFrame): @@ -90,15 +85,16 @@ def __init__(self, data=None, index=None, columns=None, if dtype is not None: mgr = mgr.astype(dtype) elif isinstance(data, SparseDataFrame): - mgr = self._init_mgr( - data._data, dict(index=index, columns=columns), dtype=dtype, copy=copy) + mgr = self._init_mgr(data._data, + dict(index=index, columns=columns), + dtype=dtype, copy=copy) elif isinstance(data, DataFrame): mgr = self._init_dict(data, data.index, data.columns) if dtype is not None: mgr = mgr.astype(dtype) elif isinstance(data, BlockManager): - mgr = self._init_mgr( - data, axes=dict(index=index, columns=columns), dtype=dtype, copy=copy) + mgr = self._init_mgr(data, axes=dict(index=index, columns=columns), + dtype=dtype, copy=copy) elif data is None: data = DataFrame() @@ -111,8 +107,7 @@ def __init__(self, data=None, index=None, columns=None, columns = Index([]) else: for c in columns: - data[c] = SparseArray(np.nan, - index=index, + data[c] = SparseArray(np.nan, index=index, kind=self._default_kind, fill_value=self._default_fill_value) mgr = to_manager(data, columns, index) @@ -123,11 +118,12 @@ def __init__(self, data=None, index=None, columns=None, @property def _constructor(self): - def wrapper(data=None, index=None, columns=None, default_fill_value=None, kind=None, fill_value=None, copy=False): + def wrapper(data=None, index=None, columns=None, + default_fill_value=None, kind=None, fill_value=None, + copy=False): result = SparseDataFrame(data, index=index, columns=columns, default_fill_value=fill_value, - default_kind=kind, - copy=copy) + default_kind=kind, copy=copy) # fill if requested if fill_value is not None and not isnull(fill_value): 
@@ -144,15 +140,15 @@ def _init_dict(self, data, index, columns, dtype=None): # pre-filter out columns if we passed it if columns is not None: columns = _ensure_index(columns) - data = dict((k, v) for k, v in compat.iteritems(data) if k in columns) + data = dict((k, v) for k, v in compat.iteritems(data) + if k in columns) else: columns = Index(_try_sort(list(data.keys()))) if index is None: index = extract_index(list(data.values())) - sp_maker = lambda x: SparseArray(x, - kind=self._default_kind, + sp_maker = lambda x: SparseArray(x, kind=self._default_kind, fill_value=self._default_fill_value, copy=True) sdict = DataFrame() @@ -193,24 +189,23 @@ def _init_matrix(self, data, index, columns, dtype=None): if len(columns) != K: raise ValueError('Column length mismatch: %d vs. %d' % - (len(columns), K)) + (len(columns), K)) if len(index) != N: raise ValueError('Index length mismatch: %d vs. %d' % - (len(index), N)) + (len(index), N)) data = dict([(idx, data[:, i]) for i, idx in enumerate(columns)]) return self._init_dict(data, index, columns, dtype) def __array_wrap__(self, result): - return SparseDataFrame(result, index=self.index, columns=self.columns, - default_kind=self._default_kind, - default_fill_value=self._default_fill_value).__finalize__(self) + return SparseDataFrame( + result, index=self.index, columns=self.columns, + default_kind=self._default_kind, + default_fill_value=self._default_fill_value).__finalize__(self) def __getstate__(self): # pickling - return dict(_typ=self._typ, - _subtyp=self._subtyp, - _data=self._data, + return dict(_typ=self._typ, _subtyp=self._subtyp, _data=self._data, _default_fill_value=self._default_fill_value, _default_kind=self._default_kind) @@ -246,7 +241,7 @@ def to_dense(self): df : DataFrame """ data = dict((k, v.to_dense()) for k, v in compat.iteritems(self)) - return DataFrame(data, index=self.index,columns=self.columns) + return DataFrame(data, index=self.index, columns=self.columns) def astype(self, dtype): raise 
NotImplementedError @@ -281,32 +276,32 @@ def density(self): def fillna(self, value=None, method=None, axis=0, inplace=False, limit=None, downcast=None): - new_self = super( - SparseDataFrame, self).fillna(value=value, method=method, axis=axis, - inplace=inplace, limit=limit, downcast=downcast) + new_self = super(SparseDataFrame, + self).fillna(value=value, method=method, axis=axis, + inplace=inplace, limit=limit, + downcast=downcast) if not inplace: self = new_self # set the fill value if we are filling as a scalar with nothing special # going on - if value is not None and value == value and method is None and limit is None: + if (value is not None and value == value and method is None and + limit is None): self._default_fill_value = value if not inplace: return self - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Support different internal representation of SparseDataFrame def _sanitize_column(self, key, value): - sp_maker = lambda x, index=None: SparseArray(x, - index=index, - fill_value=self._default_fill_value, - kind=self._default_kind) + sp_maker = lambda x, index=None: SparseArray( + x, index=index, fill_value=self._default_fill_value, + kind=self._default_kind) if isinstance(value, SparseSeries): - clean = value.reindex( - self.index).as_sparse_array(fill_value=self._default_fill_value, - kind=self._default_kind) + clean = value.reindex(self.index).as_sparse_array( + fill_value=self._default_fill_value, kind=self._default_kind) elif isinstance(value, SparseArray): if len(value) != len(self.index): @@ -409,12 +404,11 @@ def xs(self, key, axis=0, copy=False): data = self.take([i]).get_values()[0] return Series(data, index=self.columns) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Arithmetic-related methods def _combine_frame(self, other, func, 
fill_value=None, level=None): - this, other = self.align(other, join='outer', level=level, - copy=False) + this, other = self.align(other, join='outer', level=level, copy=False) new_index, new_columns = this.index, this.columns if level is not None: @@ -444,13 +438,14 @@ def _combine_frame(self, other, func, fill_value=None, level=None): other_fill_value = getattr(other, 'default_fill_value', np.nan) if self.default_fill_value == other_fill_value: new_fill_value = self.default_fill_value - elif np.isnan(self.default_fill_value) and not np.isnan(other_fill_value): + elif np.isnan(self.default_fill_value) and not np.isnan( + other_fill_value): new_fill_value = other_fill_value - elif not np.isnan(self.default_fill_value) and np.isnan(other_fill_value): + elif not np.isnan(self.default_fill_value) and np.isnan( + other_fill_value): new_fill_value = self.default_fill_value - return self._constructor(data=new_data, - index=new_index, + return self._constructor(data=new_data, index=new_index, columns=new_columns, default_fill_value=new_fill_value, fill_value=new_fill_value).__finalize__(self) @@ -481,11 +476,10 @@ def _combine_match_index(self, other, func, level=None, fill_value=None): fill_value = func(np.float64(self.default_fill_value), np.float64(other.fill_value)) - return self._constructor(new_data, - index=new_index, - columns=self.columns, - default_fill_value=fill_value, - fill_value=self.default_fill_value).__finalize__(self) + return self._constructor( + new_data, index=new_index, columns=self.columns, + default_fill_value=fill_value, + fill_value=self.default_fill_value).__finalize__(self) def _combine_match_columns(self, other, func, level=None, fill_value=None): # patched version of DataFrame._combine_match_columns to account for @@ -509,22 +503,20 @@ def _combine_match_columns(self, other, func, level=None, fill_value=None): for col in intersection: new_data[col] = func(self[col], float(other[col])) - return self._constructor(new_data, - index=self.index, 
- columns=union, - default_fill_value=self.default_fill_value, - fill_value=self.default_fill_value).__finalize__(self) + return self._constructor( + new_data, index=self.index, columns=union, + default_fill_value=self.default_fill_value, + fill_value=self.default_fill_value).__finalize__(self) def _combine_const(self, other, func): new_data = {} for col, series in compat.iteritems(self): new_data[col] = func(series, other) - return self._constructor(data=new_data, - index=self.index, - columns=self.columns, - default_fill_value=self.default_fill_value, - fill_value=self.default_fill_value).__finalize__(self) + return self._constructor( + data=new_data, index=self.index, columns=self.columns, + default_fill_value=self.default_fill_value, + fill_value=self.default_fill_value).__finalize__(self) def _reindex_index(self, index, method, copy, level, fill_value=np.nan, limit=None, takeable=False): @@ -577,16 +569,17 @@ def _reindex_columns(self, columns, copy, level, fill_value, limit=None, return SparseDataFrame(sdict, index=self.index, columns=columns, default_fill_value=self._default_fill_value) - def _reindex_with_indexers(self, reindexers, method=None, fill_value=None, limit=None, - copy=False, allow_dups=False): + def _reindex_with_indexers(self, reindexers, method=None, fill_value=None, + limit=None, copy=False, allow_dups=False): if method is not None or limit is not None: - raise NotImplementedError("cannot reindex with a method or limit with sparse") + raise NotImplementedError("cannot reindex with a method or limit " + "with sparse") if fill_value is None: fill_value = np.nan - index, row_indexer = reindexers.get(0, (None, None)) + index, row_indexer = reindexers.get(0, (None, None)) columns, col_indexer = reindexers.get(1, (None, None)) if columns is None: @@ -597,13 +590,14 @@ def _reindex_with_indexers(self, reindexers, method=None, fill_value=None, limit if col not in self: continue if row_indexer is not None: - new_arrays[col] = com.take_1d( - 
self[col].get_values(), row_indexer, - fill_value=fill_value) + new_arrays[col] = com.take_1d(self[col].get_values(), + row_indexer, + fill_value=fill_value) else: new_arrays[col] = self[col] - return SparseDataFrame(new_arrays, index=index, columns=columns).__finalize__(self) + return SparseDataFrame(new_arrays, index=index, + columns=columns).__finalize__(self) def _join_compat(self, other, on=None, how='left', lsuffix='', rsuffix='', sort=False): @@ -617,8 +611,9 @@ def _join_index(self, other, how, lsuffix, rsuffix): if other.name is None: raise ValueError('Other Series must have a name') - other = SparseDataFrame({other.name: other}, - default_fill_value=self._default_fill_value) + other = SparseDataFrame( + {other.name: other}, + default_fill_value=self._default_fill_value) join_index = self.index.join(other.index, how=how) @@ -658,10 +653,11 @@ def transpose(self): """ Returns a DataFrame with the rows/columns switched. """ - return SparseDataFrame(self.values.T, index=self.columns, - columns=self.index, - default_fill_value=self._default_fill_value, - default_kind=self._default_kind).__finalize__(self) + return SparseDataFrame( + self.values.T, index=self.columns, columns=self.index, + default_fill_value=self._default_fill_value, + default_kind=self._default_kind).__finalize__(self) + T = property(transpose) @Appender(DataFrame.count.__doc__) @@ -710,10 +706,10 @@ def apply(self, func, axis=0, broadcast=False, reduce=False): applied = func(v) applied.fill_value = func(applied.fill_value) new_series[k] = applied - return self._constructor(new_series, index=self.index, - columns=self.columns, - default_fill_value=self._default_fill_value, - kind=self._default_kind).__finalize__(self) + return self._constructor( + new_series, index=self.index, columns=self.columns, + default_fill_value=self._default_fill_value, + kind=self._default_kind).__finalize__(self) else: if not broadcast: return self._apply_standard(func, axis, reduce=reduce) @@ -737,13 +733,17 @@ def 
applymap(self, func): """ return self.apply(lambda x: lmap(func, x)) + def to_manager(sdf, columns, index): - """ create and return the block manager from a dataframe of series, columns, index """ + """ create and return the block manager from a dataframe of series, + columns, index + """ # from BlockManager perspective axes = [_ensure_index(columns), _ensure_index(index)] - return create_block_manager_from_arrays([sdf[c] for c in columns], columns, axes) + return create_block_manager_from_arrays( + [sdf[c] for c in columns], columns, axes) def stack_sparse_frame(frame): @@ -759,8 +759,8 @@ def stack_sparse_frame(frame): inds_to_concat = [] vals_to_concat = [] # TODO: Figure out whether this can be reached. - # I think this currently can't be reached because you can't build a SparseDataFrame - # with a non-np.NaN fill value (fails earlier). + # I think this currently can't be reached because you can't build a + # SparseDataFrame with a non-np.NaN fill value (fails earlier). for _, series in compat.iteritems(frame): if not np.isnan(series.fill_value): raise TypeError('This routine assumes NaN fill value') diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py index f57339fea0a7f..4bacaadd915d1 100644 --- a/pandas/sparse/panel.py +++ b/pandas/sparse/panel.py @@ -6,7 +6,7 @@ # pylint: disable=E1101,E1103,W0231 import warnings -from pandas.compat import range, lrange, zip +from pandas.compat import lrange, zip from pandas import compat import numpy as np @@ -21,7 +21,6 @@ class SparsePanelAxis(object): - def __init__(self, cache_field, frame_attr): self.cache_field = cache_field self.frame_attr = frame_attr @@ -42,7 +41,6 @@ def __set__(self, obj, value): class SparsePanel(Panel): - """ Sparse version of Panel @@ -66,13 +64,13 @@ class SparsePanel(Panel): _typ = 'panel' _subtyp = 'sparse_panel' - def __init__(self, frames=None, items=None, major_axis=None, minor_axis=None, - default_fill_value=np.nan, default_kind='block', - copy=False): + def __init__(self, 
frames=None, items=None, major_axis=None, + minor_axis=None, default_fill_value=np.nan, + default_kind='block', copy=False): # deprecation #11157 - warnings.warn("SparsePanel is deprecated and will be removed in a future version", - FutureWarning, stacklevel=2) + warnings.warn("SparsePanel is deprecated and will be removed in a " + "future version", FutureWarning, stacklevel=2) if frames is None: frames = {} @@ -80,11 +78,10 @@ def __init__(self, frames=None, items=None, major_axis=None, minor_axis=None, if isinstance(frames, np.ndarray): new_frames = {} for item, vals in zip(items, frames): - new_frames[item] = \ - SparseDataFrame(vals, index=major_axis, - columns=minor_axis, - default_fill_value=default_fill_value, - default_kind=default_kind) + new_frames[item] = SparseDataFrame( + vals, index=major_axis, columns=minor_axis, + default_fill_value=default_fill_value, + default_kind=default_kind) frames = new_frames if not isinstance(frames, dict): @@ -99,11 +96,9 @@ def __init__(self, frames=None, items=None, major_axis=None, minor_axis=None, items = Index(sorted(frames.keys())) items = _ensure_index(items) - (clean_frames, - major_axis, - minor_axis) = _convert_frames(frames, major_axis, - minor_axis, kind=kind, - fill_value=fill_value) + (clean_frames, major_axis, + minor_axis) = _convert_frames(frames, major_axis, minor_axis, + kind=kind, fill_value=fill_value) self._frames = clean_frames @@ -142,8 +137,7 @@ def to_dense(self): ------- dense : Panel """ - return Panel(self.values, self.items, self.major_axis, - self.minor_axis) + return Panel(self.values, self.items, self.major_axis, self.minor_axis) def as_matrix(self): return self.values @@ -151,8 +145,7 @@ def as_matrix(self): @property def values(self): # return dense values - return np.array([self._frames[item].values - for item in self.items]) + return np.array([self._frames[item].values for item in self.items]) # need a special property for items to make the field assignable @@ -173,6 +166,7 @@ def 
_set_items(self, new_items): self._frames = dict((new_k, old_frame_dict[old_k]) for new_k, old_k in zip(new_items, old_items)) self._items = new_items + items = property(fget=_get_items, fset=_set_items) # DataFrame's index @@ -257,8 +251,8 @@ def __getstate__(self): # pickling return (self._frames, com._pickle_array(self.items), com._pickle_array(self.major_axis), - com._pickle_array(self.minor_axis), - self.default_fill_value, self.default_kind) + com._pickle_array(self.minor_axis), self.default_fill_value, + self.default_kind) def __setstate__(self, state): frames, items, major, minor, fv, kind = state @@ -281,12 +275,13 @@ def copy(self, deep=True): d = self._construct_axes_dict() if deep: - new_data = dict((k, v.copy(deep=True)) for k, v in compat.iteritems(self._frames)) + new_data = dict((k, v.copy(deep=True)) + for k, v in compat.iteritems(self._frames)) d = dict((k, v.copy(deep=True)) for k, v in compat.iteritems(d)) else: new_data = self._frames.copy() - d['default_fill_value']=self.default_fill_value - d['default_kind']=self.default_kind + d['default_fill_value'] = self.default_fill_value + d['default_kind'] = self.default_kind return SparsePanel(new_data, **d) @@ -376,16 +371,16 @@ def reindex(self, major=None, items=None, minor=None, major_axis=None, if item in self._frames: new_frames[item] = self._frames[item] else: - raise NotImplementedError('Reindexing with new items not yet ' - 'supported') + raise NotImplementedError('Reindexing with new items not ' + 'yet supported') else: new_frames = self._frames if copy: - new_frames = dict((k, v.copy()) for k, v in compat.iteritems(new_frames)) + new_frames = dict((k, v.copy()) + for k, v in compat.iteritems(new_frames)) - return SparsePanel(new_frames, items=items, - major_axis=major, + return SparsePanel(new_frames, items=items, major_axis=major, minor_axis=minor, default_fill_value=self.default_fill_value, default_kind=self.default_kind) @@ -509,7 +504,8 @@ def mod(self, val, *args, **kwargs): # Sparse 
objects opt out of numexpr SparsePanel._add_aggregate_operations(use_numexpr=False) -ops.add_special_arithmetic_methods(SparsePanel, use_numexpr=False, **ops.panel_special_funcs) +ops.add_special_arithmetic_methods(SparsePanel, use_numexpr=False, ** + ops.panel_special_funcs) SparseWidePanel = SparsePanel diff --git a/pandas/sparse/scipy_sparse.py b/pandas/sparse/scipy_sparse.py index a815ca7545561..ea108e3e89935 100644 --- a/pandas/sparse/scipy_sparse.py +++ b/pandas/sparse/scipy_sparse.py @@ -3,13 +3,9 @@ Currently only includes SparseSeries.to_coo helpers. """ -from pandas.core.frame import DataFrame from pandas.core.index import MultiIndex, Index from pandas.core.series import Series -import itertools -import numpy as np from pandas.compat import OrderedDict, lmap -from pandas.tools.util import cartesian_product def _check_is_partition(parts, whole): @@ -19,10 +15,10 @@ def _check_is_partition(parts, whole): raise ValueError( 'Is not a partition because intersection is not null.') if set.union(*parts) != whole: - raise ValueError('Is not a partition becuase union is not the whole.') + raise ValueError('Is not a partition because union is not the whole.') -def _to_ijv(ss, row_levels=(0,), column_levels=(1,), sort_labels=False): +def _to_ijv(ss, row_levels=(0, ), column_levels=(1, ), sort_labels=False): """ For arbitrary (MultiIndexed) SparseSeries return (v, i, j, ilabels, jlabels) where (v, (i, j)) is suitable for passing to scipy.sparse.coo constructor. """ @@ -44,7 +40,6 @@ def get_indexers(levels): if len(levels) == 1: values_ilabels = [x[0] for x in values_ilabels] - ####################################################################### # # performance issues with groupby ################################### # TODO: these two lines can rejplace the code below but # groupby is too slow (in some cases at least) @@ -53,36 +48,37 @@ def get_indexers(levels): def _get_label_to_i_dict(labels, sort_labels=False): """ Return OrderedDict of unique labels to number. 
- Optionally sort by label. """ + Optionally sort by label. + """ labels = Index(lmap(tuple, labels)).unique().tolist() # squish if sort_labels: labels = sorted(list(labels)) d = OrderedDict((k, i) for i, k in enumerate(labels)) - return(d) + return (d) def _get_index_subset_to_coord_dict(index, subset, sort_labels=False): def robust_get_level_values(i): # if index has labels (that are not None) use those, # else use the level location try: - return(index.get_level_values(index.names[i])) + return index.get_level_values(index.names[i]) except KeyError: - return(index.get_level_values(i)) - ilabels = list( - zip(*[robust_get_level_values(i) for i in subset])) - labels_to_i = _get_label_to_i_dict( - ilabels, sort_labels=sort_labels) + return index.get_level_values(i) + + ilabels = list(zip(*[robust_get_level_values(i) for i in subset])) + labels_to_i = _get_label_to_i_dict(ilabels, + sort_labels=sort_labels) labels_to_i = Series(labels_to_i) if len(subset) > 1: labels_to_i.index = MultiIndex.from_tuples(labels_to_i.index) labels_to_i.index.names = [index.names[i] for i in subset] labels_to_i.name = 'value' - return(labels_to_i) + return (labels_to_i) - labels_to_i = _get_index_subset_to_coord_dict( - ss.index, levels, sort_labels=sort_labels) - ####################################################################### - ####################################################################### + labels_to_i = _get_index_subset_to_coord_dict(ss.index, levels, + sort_labels=sort_labels) + # ##################################################################### + # ##################################################################### i_coord = labels_to_i[values_ilabels].tolist() i_labels = labels_to_i.index.tolist() @@ -95,25 +91,28 @@ def robust_get_level_values(i): return values, i_coord, j_coord, i_labels, j_labels -def _sparse_series_to_coo(ss, row_levels=(0,), column_levels=(1,), sort_labels=False): +def _sparse_series_to_coo(ss, row_levels=(0, ), column_levels=(1, ), + 
sort_labels=False): """ Convert a SparseSeries to a scipy.sparse.coo_matrix using index levels row_levels, column_levels as the row and column - labels respectively. Returns the sparse_matrix, row and column labels. """ + labels respectively. Returns the sparse_matrix, row and column labels. + """ import scipy.sparse if ss.index.nlevels < 2: raise ValueError('to_coo requires MultiIndex with nlevels > 2') if not ss.index.is_unique: - raise ValueError( - 'Duplicate index entries are not allowed in to_coo transformation.') + raise ValueError('Duplicate index entries are not allowed in to_coo ' + 'transformation.') # to keep things simple, only rely on integer indexing (not labels) row_levels = [ss.index._get_level_number(x) for x in row_levels] column_levels = [ss.index._get_level_number(x) for x in column_levels] - v, i, j, rows, columns = _to_ijv( - ss, row_levels=row_levels, column_levels=column_levels, sort_labels=sort_labels) + v, i, j, rows, columns = _to_ijv(ss, row_levels=row_levels, + column_levels=column_levels, + sort_labels=sort_labels) sparse_matrix = scipy.sparse.coo_matrix( (v, (i, j)), shape=(len(rows), len(columns))) return sparse_matrix, rows, columns @@ -121,7 +120,8 @@ def _sparse_series_to_coo(ss, row_levels=(0,), column_levels=(1,), sort_labels=F def _coo_to_sparse_series(A, dense_index=False): """ Convert a scipy.sparse.coo_matrix to a SparseSeries. - Use the defaults given in the SparseSeries constructor. """ + Use the defaults given in the SparseSeries constructor. + """ s = Series(A.data, MultiIndex.from_arrays((A.row, A.col))) s = s.sort_index() s = s.to_sparse() # TODO: specify kind? 
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py index 96d509ed9b7c1..1a2fc9698da2f 100644 --- a/pandas/sparse/series.py +++ b/pandas/sparse/series.py @@ -18,20 +18,16 @@ from pandas.core import generic import pandas.core.common as com import pandas.core.ops as ops -import pandas.core.datetools as datetools import pandas.index as _index -from pandas import compat - from pandas.sparse.array import (make_sparse, _sparse_array_op, SparseArray) from pandas._sparse import BlockIndex, IntIndex import pandas._sparse as splib -from pandas.util.decorators import Appender - -from pandas.sparse.scipy_sparse import _sparse_series_to_coo, _coo_to_sparse_series +from pandas.sparse.scipy_sparse import (_sparse_series_to_coo, + _coo_to_sparse_series) -#------------------------------------------------------------------------------ +# ----------------------------------------------------------------------------- # Wrapper function for Series arithmetic methods @@ -41,8 +37,8 @@ def _arith_method(op, name, str_rep=None, default_axis=None, fill_zeros=None, Wrapper function for Series arithmetic operations, to avoid code duplication. - str_rep, default_axis, fill_zeros and eval_kwargs are not used, but are present - for compatibility. + str_rep, default_axis, fill_zeros and eval_kwargs are not used, but are + present for compatibility. """ def wrapper(self, other): @@ -69,8 +65,8 @@ def wrapper(self, other): wrapper.__name__ = name if name.startswith("__"): - # strip special method names, e.g. `__add__` needs to be `add` when passed - # to _sparse_series_op + # strip special method names, e.g. 
`__add__` needs to be `add` when + # passed to _sparse_series_op name = name[2:-2] return wrapper @@ -85,7 +81,6 @@ def _sparse_series_op(left, right, op, name): class SparseSeries(Series): - """Data structure for labeled, sparse floating point data Parameters @@ -135,7 +130,7 @@ def __init__(self, data=None, index=None, sparse_index=None, kind='block', if isinstance(data, SparseSeries) and index is None: index = data.index.view() elif index is not None: - assert(len(index) == len(data)) + assert (len(index) == len(data)) sparse_index = data.sp_index data = np.asarray(data) @@ -161,7 +156,7 @@ def __init__(self, data=None, index=None, sparse_index=None, kind='block', data, sparse_index = make_sparse(data, kind=kind, fill_value=fill_value) else: - assert(len(data) == sparse_index.npoints) + assert (len(data) == sparse_index.npoints) elif isinstance(data, SingleBlockManager): if dtype is not None: @@ -175,8 +170,7 @@ def __init__(self, data=None, index=None, sparse_index=None, kind='block', length = len(index) - if data == fill_value or (isnull(data) - and isnull(fill_value)): + if data == fill_value or (isnull(data) and isnull(fill_value)): if kind == 'block': sparse_index = BlockIndex(length, [], []) else: @@ -206,8 +200,9 @@ def __init__(self, data=None, index=None, sparse_index=None, kind='block', # create a sparse array if not isinstance(data, SparseArray): - data = SparseArray( - data, sparse_index=sparse_index, fill_value=fill_value, dtype=dtype, copy=copy) + data = SparseArray(data, sparse_index=sparse_index, + fill_value=fill_value, dtype=dtype, + copy=copy) data = SingleBlockManager(data, index) @@ -254,11 +249,13 @@ def npoints(self): return self.sp_index.npoints @classmethod - def from_array(cls, arr, index=None, name=None, copy=False, fill_value=None, fastpath=False): + def from_array(cls, arr, index=None, name=None, copy=False, + fill_value=None, fastpath=False): """ Simplified alternate constructor """ - return cls(arr, index=index, name=name, 
copy=copy, fill_value=fill_value, fastpath=fastpath) + return cls(arr, index=index, name=name, copy=copy, + fill_value=fill_value, fastpath=fastpath) @property def _constructor(self): @@ -278,11 +275,8 @@ def as_sparse_array(self, kind=None, fill_value=None, copy=False): fill_value = self.fill_value if kind is None: kind = self.kind - return SparseArray(self.values, - sparse_index=self.sp_index, - fill_value=fill_value, - kind=kind, - copy=copy) + return SparseArray(self.values, sparse_index=self.sp_index, + fill_value=fill_value, kind=kind, copy=copy) def __len__(self): return len(self.block) @@ -297,8 +291,7 @@ def __array_wrap__(self, result): """ Gets called prior to a ufunc (and after) """ - return self._constructor(result, - index=self.index, + return self._constructor(result, index=self.index, sparse_index=self.sp_index, fill_value=self.fill_value, copy=False).__finalize__(self) @@ -318,11 +311,8 @@ def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None, def __getstate__(self): # pickling - return dict(_typ=self._typ, - _subtyp=self._subtyp, - _data=self._data, - fill_value=self.fill_value, - name=self.name) + return dict(_typ=self._typ, _subtyp=self._subtyp, _data=self._data, + fill_value=self.fill_value, name=self.name) def _unpickle_series_compat(self, state): @@ -339,8 +329,8 @@ def _unpickle_series_compat(self, state): # create a sparse array if not isinstance(data, SparseArray): - data = SparseArray( - data, sparse_index=sp_index, fill_value=fill_value, copy=False) + data = SparseArray(data, sparse_index=sp_index, + fill_value=fill_value, copy=False) # recreate data = SingleBlockManager(data, index, fastpath=True) @@ -473,8 +463,8 @@ def set_value(self, label, value, takeable=False): if new_values is not None: values = new_values new_index = values.index - values = SparseArray( - values, fill_value=self.fill_value, kind=self.kind) + values = SparseArray(values, fill_value=self.fill_value, + kind=self.kind) self._data = 
SingleBlockManager(values, new_index) self._index = new_index @@ -489,8 +479,8 @@ def _set_values(self, key, value): values = self.values.to_dense() values[key] = _index.convert_scalar(values, value) - values = SparseArray( - values, fill_value=self.fill_value, kind=self.kind) + values = SparseArray(values, fill_value=self.fill_value, + kind=self.kind) self._data = SingleBlockManager(values, self.index) def to_dense(self, sparse_only=False): @@ -502,7 +492,8 @@ def to_dense(self, sparse_only=False): index = self.index.take(int_index.indices) return Series(self.sp_values, index=index, name=self.name) else: - return Series(self.values.to_dense(), index=self.index, name=self.name) + return Series(self.values.to_dense(), index=self.index, + name=self.name) @property def density(self): @@ -518,8 +509,7 @@ def copy(self, deep=True): if deep: new_data = self._data.copy() - return self._constructor(new_data, - sparse_index=self.sp_index, + return self._constructor(new_data, sparse_index=self.sp_index, fill_value=self.fill_value).__finalize__(self) def reindex(self, index=None, method=None, copy=True, limit=None): @@ -539,7 +529,8 @@ def reindex(self, index=None, method=None, copy=True, limit=None): return self.copy() else: return self - return self._constructor(self._data.reindex(new_index, method=method, limit=limit, copy=copy), + return self._constructor(self._data.reindex(new_index, method=method, + limit=limit, copy=copy), index=new_index).__finalize__(self) def sparse_reindex(self, new_index): @@ -573,7 +564,8 @@ def take(self, indices, axis=0, convert=True): """ new_values = SparseArray.take(self.values, indices) new_index = self.index.take(indices) - return self._constructor(new_values, index=new_index).__finalize__(self) + return self._constructor(new_values, + index=new_index).__finalize__(self) def cumsum(self, axis=0, dtype=None, out=None): """ @@ -585,7 +577,9 @@ def cumsum(self, axis=0, dtype=None, out=None): """ new_array = SparseArray.cumsum(self.values) if 
isinstance(new_array, SparseArray): - return self._constructor(new_array, index=self.index, sparse_index=new_array.sp_index).__finalize__(self) + return self._constructor( + new_array, index=self.index, + sparse_index=new_array.sp_index).__finalize__(self) return Series(new_array, index=self.index).__finalize__(self) def dropna(self, axis=0, inplace=False, **kwargs): @@ -611,8 +605,8 @@ def shift(self, periods, freq=None): # no special handling of fill values yet if not isnull(self.fill_value): - dense_shifted = self.to_dense().shift(periods, freq=freq, - **kwds) + # TODO: kwds is not defined...should this work? + dense_shifted = self.to_dense().shift(periods, freq=freq, **kwds) # noqa return dense_shifted.to_sparse(fill_value=self.fill_value, kind=self.kind) @@ -620,10 +614,10 @@ def shift(self, periods, freq=None): return self.copy() if freq is not None: - return self._constructor(self.sp_values, - sparse_index=self.sp_index, - index=self.index.shift(periods, freq), - fill_value=self.fill_value).__finalize__(self) + return self._constructor( + self.sp_values, sparse_index=self.sp_index, + index=self.index.shift(periods, freq), + fill_value=self.fill_value).__finalize__(self) int_index = self.sp_index.to_int_index() new_indices = int_index.indices + periods @@ -636,8 +630,7 @@ def shift(self, periods, freq=None): new_sp_index = new_sp_index.to_block_index() return self._constructor(self.sp_values[start:end].copy(), - index=self.index, - sparse_index=new_sp_index, + index=self.index, sparse_index=new_sp_index, fill_value=self.fill_value).__finalize__(self) def combine_first(self, other): @@ -659,13 +652,14 @@ def combine_first(self, other): dense_combined = self.to_dense().combine_first(other) return dense_combined.to_sparse(fill_value=self.fill_value) - def to_coo(self, row_levels=(0,), column_levels=(1,), sort_labels=False): + def to_coo(self, row_levels=(0, ), column_levels=(1, ), sort_labels=False): """ Create a scipy.sparse.coo_matrix from a SparseSeries with 
MultiIndex. - Use row_levels and column_levels to determine the row and column coordinates respectively. - row_levels and column_levels are the names (labels) or numbers of the levels. - {row_levels, column_levels} must be a partition of the MultiIndex level names (or numbers). + Use row_levels and column_levels to determine the row and column + coordinates respectively. row_levels and column_levels are the names + (labels) or numbers of the levels. {row_levels, column_levels} must be + a partition of the MultiIndex level names (or numbers). .. versionadded:: 0.16.0 @@ -709,8 +703,9 @@ def to_coo(self, row_levels=(0,), column_levels=(1,), sort_labels=False): >>> columns [('a', 0), ('a', 1), ('b', 0), ('b', 1)] """ - A, rows, columns = _sparse_series_to_coo( - self, row_levels, column_levels, sort_labels=sort_labels) + A, rows, columns = _sparse_series_to_coo(self, row_levels, + column_levels, + sort_labels=sort_labels) return A, rows, columns @classmethod @@ -724,8 +719,10 @@ def from_coo(cls, A, dense_index=False): ---------- A : scipy.sparse.coo_matrix dense_index : bool, default False - If False (default), the SparseSeries index consists of only the coords of the non-null entries of the original coo_matrix. - If True, the SparseSeries index consists of the full sorted (row, col) coordinates of the coo_matrix. + If False (default), the SparseSeries index consists of only the + coords of the non-null entries of the original coo_matrix. + If True, the SparseSeries index consists of the full sorted + (row, col) coordinates of the coo_matrix. Returns ------- @@ -764,14 +761,15 @@ def from_coo(cls, A, dense_index=False): # force methods to overwrite previous definitions. 
ops.add_special_arithmetic_methods(SparseSeries, _arith_method, radd_func=operator.add, comp_method=None, - bool_method=None, use_numexpr=False, force=True) + bool_method=None, use_numexpr=False, + force=True) + # backwards compatiblity class SparseTimeSeries(SparseSeries): - def __init__(self, *args, **kwargs): # deprecation TimeSeries, #10890 - warnings.warn("SparseTimeSeries is deprecated. Please use SparseSeries", - FutureWarning, stacklevel=2) + warnings.warn("SparseTimeSeries is deprecated. Please use " + "SparseSeries", FutureWarning, stacklevel=2) super(SparseTimeSeries, self).__init__(*args, **kwargs) diff --git a/pandas/sparse/tests/test_array.py b/pandas/sparse/tests/test_array.py index add680489548d..b1e731bd8e2e5 100644 --- a/pandas/sparse/tests/test_array.py +++ b/pandas/sparse/tests/test_array.py @@ -1,13 +1,11 @@ from pandas.compat import range import re -from numpy import nan, ndarray +from numpy import nan import numpy as np import operator import warnings -from pandas.core.series import Series -from pandas.core.common import notnull from pandas.sparse.api import SparseArray from pandas.util.testing import assert_almost_equal, assertRaisesRegexp import pandas.util.testing as tm @@ -15,11 +13,11 @@ def assert_sp_array_equal(left, right): assert_almost_equal(left.sp_values, right.sp_values) - assert(left.sp_index.equals(right.sp_index)) + assert (left.sp_index.equals(right.sp_index)) if np.isnan(left.fill_value): - assert(np.isnan(right.fill_value)) + assert (np.isnan(right.fill_value)) else: - assert(left.fill_value == right.fill_value) + assert (left.fill_value == right.fill_value) class TestSparseArray(tm.TestCase): @@ -46,6 +44,7 @@ def setitem(): def setslice(): self.arr[1:5] = 2 + assertRaisesRegexp(TypeError, "item assignment", setitem) assertRaisesRegexp(TypeError, "item assignment", setslice) @@ -79,7 +78,7 @@ def _get_base(values): base = base.base return base - assert(_get_base(arr2) is _get_base(self.arr)) + assert (_get_base(arr2) is 
_get_base(self.arr)) def test_values_asarray(self): assert_almost_equal(self.arr.values, self.arr_data) @@ -150,7 +149,7 @@ def _check_op(op, first, second): exp_fv = op(first.fill_value, 4) assert_almost_equal(res4.fill_value, exp_fv) assert_almost_equal(res4.values, exp) - except (ValueError) : + except ValueError: pass def _check_inplace_op(op): @@ -184,7 +183,7 @@ def test_generator_warnings(self): category=PendingDeprecationWarning) for _ in sp_arr: pass - assert len(w)==0 + assert len(w) == 0 if __name__ == '__main__': diff --git a/pandas/sparse/tests/test_libsparse.py b/pandas/sparse/tests/test_libsparse.py index 7f9e61571ebfc..57baae08725c0 100644 --- a/pandas/sparse/tests/test_libsparse.py +++ b/pandas/sparse/tests/test_libsparse.py @@ -1,51 +1,30 @@ from pandas import Series -import nose -from numpy import nan +import nose # noqa import numpy as np import operator -from numpy.testing import assert_almost_equal, assert_equal +from numpy.testing import assert_equal import pandas.util.testing as tm -from pandas.core.sparse import SparseSeries -from pandas import DataFrame, compat +from pandas import compat from pandas._sparse import IntIndex, BlockIndex import pandas._sparse as splib TEST_LENGTH = 20 -plain_case = dict(xloc=[0, 7, 15], - xlen=[3, 5, 5], - yloc=[2, 9, 14], - ylen=[2, 3, 5], - intersect_loc=[2, 9, 15], +plain_case = dict(xloc=[0, 7, 15], xlen=[3, 5, 5], yloc=[2, 9, 14], + ylen=[2, 3, 5], intersect_loc=[2, 9, 15], intersect_len=[1, 3, 4]) -delete_blocks = dict(xloc=[0, 5], - xlen=[4, 4], - yloc=[1], - ylen=[4], - intersect_loc=[1], - intersect_len=[3]) -split_blocks = dict(xloc=[0], - xlen=[10], - yloc=[0, 5], - ylen=[3, 7], - intersect_loc=[0, 5], - intersect_len=[3, 5]) -skip_block = dict(xloc=[10], - xlen=[5], - yloc=[0, 12], - ylen=[5, 3], - intersect_loc=[12], - intersect_len=[3]) - -no_intersect = dict(xloc=[0, 10], - xlen=[4, 6], - yloc=[5, 17], - ylen=[4, 2], - intersect_loc=[], - intersect_len=[]) +delete_blocks = dict(xloc=[0, 5], 
xlen=[4, 4], yloc=[1], ylen=[4], + intersect_loc=[1], intersect_len=[3]) +split_blocks = dict(xloc=[0], xlen=[10], yloc=[0, 5], ylen=[3, 7], + intersect_loc=[0, 5], intersect_len=[3, 5]) +skip_block = dict(xloc=[10], xlen=[5], yloc=[0, 12], ylen=[5, 3], + intersect_loc=[12], intersect_len=[3]) + +no_intersect = dict(xloc=[0, 10], xlen=[4, 6], yloc=[5, 17], ylen=[4, 2], + intersect_loc=[], intersect_len=[]) def check_cases(_check_case): @@ -69,14 +48,14 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): xindex = BlockIndex(TEST_LENGTH, xloc, xlen) yindex = BlockIndex(TEST_LENGTH, yloc, ylen) bresult = xindex.make_union(yindex) - assert(isinstance(bresult, BlockIndex)) + assert (isinstance(bresult, BlockIndex)) assert_equal(bresult.blocs, eloc) assert_equal(bresult.blengths, elen) ixindex = xindex.to_int_index() iyindex = yindex.to_int_index() iresult = ixindex.make_union(iyindex) - assert(isinstance(iresult, IntIndex)) + assert (isinstance(iresult, IntIndex)) assert_equal(iresult.indices, bresult.to_int_index().indices) """ @@ -91,7 +70,6 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): eloc = [0] elen = [9] _check_case(xloc, xlen, yloc, ylen, eloc, elen) - """ x: ----- ----- y: ----- -- @@ -103,7 +81,6 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): eloc = [0, 10, 17] elen = [7, 5, 2] _check_case(xloc, xlen, yloc, ylen, eloc, elen) - """ x: ------ y: ------- @@ -116,7 +93,6 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): eloc = [1] elen = [7] _check_case(xloc, xlen, yloc, ylen, eloc, elen) - """ x: ------ ----- y: ------- @@ -129,7 +105,6 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): eloc = [2] elen = [12] _check_case(xloc, xlen, yloc, ylen, eloc, elen) - """ x: --- ----- y: ------- @@ -142,7 +117,6 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): eloc = [0] elen = [10] _check_case(xloc, xlen, yloc, ylen, eloc, elen) - """ x: ------ ----- y: ------- --- @@ -155,7 +129,6 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, 
elen): eloc = [2] elen = [15] _check_case(xloc, xlen, yloc, ylen, eloc, elen) - """ x: ---------------------- y: ---- ---- --- @@ -168,7 +141,6 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): eloc = [2] elen = [15] _check_case(xloc, xlen, yloc, ylen, eloc, elen) - """ x: ---- --- y: --- --- @@ -185,18 +157,17 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): def test_lookup(): - def _check(index): - assert(index.lookup(0) == -1) - assert(index.lookup(5) == 0) - assert(index.lookup(7) == 2) - assert(index.lookup(8) == -1) - assert(index.lookup(9) == -1) - assert(index.lookup(10) == -1) - assert(index.lookup(11) == -1) - assert(index.lookup(12) == 3) - assert(index.lookup(17) == 8) - assert(index.lookup(18) == -1) + assert (index.lookup(0) == -1) + assert (index.lookup(5) == 0) + assert (index.lookup(7) == 2) + assert (index.lookup(8) == -1) + assert (index.lookup(9) == -1) + assert (index.lookup(10) == -1) + assert (index.lookup(11) == -1) + assert (index.lookup(12) == 3) + assert (index.lookup(17) == 8) + assert (index.lookup(18) == -1) bindex = BlockIndex(20, [5, 12], [3, 6]) iindex = bindex.to_int_index() @@ -210,7 +181,7 @@ def _check(index): def test_intersect(): def _check_correct(a, b, expected): result = a.intersect(b) - assert(result.equals(expected)) + assert (result.equals(expected)) def _check_length_exc(a, longer): nose.tools.assert_raises(Exception, a.intersect, longer) @@ -222,13 +193,11 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): longer_index = BlockIndex(TEST_LENGTH + 1, yloc, ylen) _check_correct(xindex, yindex, expected) - _check_correct(xindex.to_int_index(), - yindex.to_int_index(), + _check_correct(xindex.to_int_index(), yindex.to_int_index(), expected.to_int_index()) _check_length_exc(xindex, longer_index) - _check_length_exc(xindex.to_int_index(), - longer_index.to_int_index()) + _check_length_exc(xindex.to_int_index(), longer_index.to_int_index()) if compat.is_platform_windows(): raise nose.SkipTest("segfaults on 
win-64 when all tests are run") @@ -236,7 +205,6 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): class TestBlockIndex(tm.TestCase): - def test_equals(self): index = BlockIndex(10, [0, 4], [2, 5]) @@ -248,10 +216,11 @@ def test_check_integrity(self): lengths = [] # 0-length OK - index = BlockIndex(0, locs, lengths) + # TODO: index variables are not used...is that right? + index = BlockIndex(0, locs, lengths) # noqa # also OK even though empty - index = BlockIndex(1, locs, lengths) + index = BlockIndex(1, locs, lengths) # noqa # block extend beyond end self.assertRaises(Exception, BlockIndex, 10, [5], [10]) @@ -275,7 +244,6 @@ def test_to_block_index(self): class TestIntIndex(tm.TestCase): - def test_equals(self): index = IntIndex(10, [0, 1, 2, 3, 4]) self.assertTrue(index.equals(index)) @@ -292,6 +260,7 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): tm.assertIsInstance(xbindex, BlockIndex) self.assertTrue(xbindex.equals(xindex)) self.assertTrue(ybindex.equals(yindex)) + check_cases(_check_case) def test_to_int_index(self): @@ -300,7 +269,6 @@ def test_to_int_index(self): class TestSparseOperators(tm.TestCase): - def _nan_op_tests(self, sparse_op, python_op): def _check_case(xloc, xlen, yloc, ylen, eloc, elen): xindex = BlockIndex(TEST_LENGTH, xloc, xlen) @@ -341,10 +309,10 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen): xfill = 0 yfill = 2 - result_block_vals, rb_index = sparse_op( - x, xindex, xfill, y, yindex, yfill) - result_int_vals, ri_index = sparse_op(x, xdindex, xfill, - y, ydindex, yfill) + result_block_vals, rb_index = sparse_op(x, xindex, xfill, y, + yindex, yfill) + result_int_vals, ri_index = sparse_op(x, xdindex, xfill, y, + ydindex, yfill) self.assertTrue(rb_index.to_int_index().equals(ri_index)) assert_equal(result_block_vals, result_int_vals) @@ -374,6 +342,7 @@ def f(self): sparse_op = getattr(splib, 'sparse_nan%s' % op) python_op = getattr(operator, op) self._nan_op_tests(sparse_op, python_op) + f.__name__ = 'test_nan%s' % 
op return f @@ -383,9 +352,11 @@ def f(self): sparse_op = getattr(splib, 'sparse_%s' % op) python_op = getattr(operator, op) self._op_tests(sparse_op, python_op) + f.__name__ = 'test_%s' % op return f + for op in check_ops: f = make_nanoptestf(op) g = make_optestf(op) @@ -395,6 +366,6 @@ def f(self): del g if __name__ == '__main__': - import nose + import nose # noqa nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py index 64ffd7482ee34..6add74f778404 100644 --- a/pandas/sparse/tests/test_sparse.py +++ b/pandas/sparse/tests/test_sparse.py @@ -1,45 +1,43 @@ # pylint: disable-msg=E1101,W0612 import operator -from datetime import datetime -import functools - -import nose +import nose # noqa from numpy import nan import numpy as np import pandas as pd -dec = np.testing.dec -from pandas.util.testing import (assert_almost_equal, assert_series_equal, assert_index_equal, - assert_frame_equal, assert_panel_equal, assertRaisesRegexp, +from pandas.util.testing import (assert_almost_equal, assert_series_equal, + assert_index_equal, assert_frame_equal, + assert_panel_equal, assertRaisesRegexp, assert_numpy_array_equal, assert_attr_equal) from numpy.testing import assert_equal -from pandas import Series, DataFrame, bdate_range, Panel, MultiIndex +from pandas import Series, DataFrame, bdate_range, Panel from pandas.core.datetools import BDay from pandas.core.index import Index from pandas.tseries.index import DatetimeIndex import pandas.core.datetools as datetools from pandas.core.common import isnull import pandas.util.testing as tm -from pandas.compat import range, lrange, StringIO, lrange +from pandas.compat import range, StringIO, lrange from pandas import compat from pandas.tools.util import cartesian_product import pandas.sparse.frame as spf from pandas._sparse import BlockIndex, IntIndex -from pandas.sparse.api import (SparseSeries, - SparseDataFrame, 
SparsePanel, - SparseArray) -from pandas.tests.frame.test_misc_api import ( - SafeForSparse as SparseFrameTests) +from pandas.sparse.api import SparseSeries, SparseDataFrame, SparsePanel +from pandas.tests.frame.test_misc_api import (SafeForSparse as + SparseFrameTests) + +from pandas.sparse.tests.test_array import assert_sp_array_equal import pandas.tests.test_panel as test_panel import pandas.tests.test_series as test_series -from pandas.sparse.tests.test_array import assert_sp_array_equal +dec = np.testing.dec + def _test_data1(): # nan-based @@ -76,7 +74,7 @@ def _test_data2_zero(): def assert_sp_series_equal(a, b, exact_indices=True, check_names=True): - assert(a.index.equals(b.index)) + assert (a.index.equals(b.index)) assert_sp_array_equal(a, b) if check_names: assert_attr_equal('name', a, b) @@ -88,7 +86,7 @@ def assert_sp_frame_equal(left, right, exact_indices=True): compare dense representations """ for col, series in compat.iteritems(left): - assert(col in right) + assert (col in right) # trade-off? if exact_indices: @@ -96,32 +94,29 @@ def assert_sp_frame_equal(left, right, exact_indices=True): else: assert_series_equal(series.to_dense(), right[col].to_dense()) - assert_almost_equal(left.default_fill_value, - right.default_fill_value) + assert_almost_equal(left.default_fill_value, right.default_fill_value) # do I care? # assert(left.default_kind == right.default_kind) for col in right: - assert(col in left) + assert (col in left) def assert_sp_panel_equal(left, right, exact_indices=True): for item, frame in compat.iteritems(left): - assert(item in right) + assert (item in right) # trade-off? 
assert_sp_frame_equal(frame, right[item], exact_indices=exact_indices) - assert_almost_equal(left.default_fill_value, - right.default_fill_value) - assert(left.default_kind == right.default_kind) + assert_almost_equal(left.default_fill_value, right.default_fill_value) + assert (left.default_kind == right.default_kind) for item in right: - assert(item in left) + assert (item in left) -class TestSparseSeries(tm.TestCase, - test_series.CheckNameIntegration): +class TestSparseSeries(tm.TestCase, test_series.CheckNameIntegration): _multiprocess_can_split_ = True def setUp(self): @@ -162,7 +157,7 @@ def test_TimeSeries_deprecation(self): # deprecation TimeSeries, #10890 with tm.assert_produces_warning(FutureWarning): - pd.SparseTimeSeries(1,index=pd.date_range('20130101',periods=3)) + pd.SparseTimeSeries(1, index=pd.date_range('20130101', periods=3)) def test_construct_DataFrame_with_sp_series(self): # it works! @@ -227,7 +222,7 @@ def test_dense_to_sparse(self): self.assertEqual(ziseries.name, self.zbseries.name) def test_to_dense_preserve_name(self): - assert(self.bseries.name is not None) + assert (self.bseries.name is not None) result = self.bseries.to_dense() self.assertEqual(result.name, self.bseries.name) @@ -364,10 +359,10 @@ def _check_getitem(sp, dense): # j = np.float64(i) # assert_almost_equal(sp[j], dense[j]) - # API change 1/6/2012 - # negative getitem works - # for i in xrange(len(dense)): - # assert_almost_equal(sp[-i], dense[-i]) + # API change 1/6/2012 + # negative getitem works + # for i in xrange(len(dense)): + # assert_almost_equal(sp[-i], dense[-i]) _check_getitem(self.bseries, self.bseries.to_dense()) _check_getitem(self.btseries, self.btseries.to_dense()) @@ -453,8 +448,9 @@ def test_setitem(self): def test_setslice(self): self.bseries[5:10] = 7. 
- assert_series_equal(self.bseries[5:10].to_dense(), Series( - 7., index=range(5, 10), name=self.bseries.name)) + assert_series_equal(self.bseries[5:10].to_dense(), + Series(7., index=range(5, 10), + name=self.bseries.name)) def test_operators(self): def _check_op(a, b, op): @@ -515,8 +511,8 @@ def _check_inplace_op(iop, op): inplace_ops = ['add', 'sub', 'mul', 'truediv', 'floordiv', 'pow'] for op in inplace_ops: - _check_inplace_op( - getattr(operator, "i%s" % op), getattr(operator, op)) + _check_inplace_op(getattr(operator, "i%s" % op), + getattr(operator, op)) def test_abs(self): s = SparseSeries([1, 2, -3], name='x') @@ -560,7 +556,8 @@ def _compare_with_series(sps, new_index): # corner cases sp = SparseSeries([], index=[]) - sp_zero = SparseSeries([], index=[], fill_value=0) + # TODO: sp_zero is not used anywhere...remove? + sp_zero = SparseSeries([], index=[], fill_value=0) # noqa _compare_with_series(sp, np.arange(10)) # with copy=False @@ -589,7 +586,8 @@ def _check(values, index1, index2, fill_value): assert_almost_equal(expected.values, reindexed.sp_values) # make sure level argument asserts - expected = expected.reindex(int_indices2).fillna(fill_value) + # TODO: expected is not used anywhere...remove? 
+ expected = expected.reindex(int_indices2).fillna(fill_value) # noqa def _check_with_fill_value(values, first, second, fill_value=nan): i_index1 = IntIndex(length, first) @@ -614,16 +612,17 @@ def _check_all(values, first, second): _check_all(values1, index1, [0, 1, 7, 8, 9]) _check_all(values1, index1, []) - first_series = SparseSeries(values1, sparse_index=IntIndex(length, - index1), + first_series = SparseSeries(values1, + sparse_index=IntIndex(length, index1), fill_value=nan) with tm.assertRaisesRegexp(TypeError, 'new index must be a SparseIndex'): - reindexed = first_series.sparse_reindex(0) + reindexed = first_series.sparse_reindex(0) # noqa def test_repr(self): - bsrepr = repr(self.bseries) - isrepr = repr(self.iseries) + # TODO: These aren't used + bsrepr = repr(self.bseries) # noqa + isrepr = repr(self.iseries) # noqa def test_iter(self): pass @@ -670,8 +669,7 @@ def _compare_all(obj): _compare_all(nonna2) def test_dropna(self): - sp = SparseSeries([0, 0, 0, nan, nan, 5, 6], - fill_value=0) + sp = SparseSeries([0, 0, 0, nan, nan, 5, 6], fill_value=0) sp_valid = sp.valid() @@ -696,16 +694,14 @@ def _check_matches(indices, expected): homogenized = spf.homogenize(data) for k, v in compat.iteritems(homogenized): - assert(v.sp_index.equals(expected)) + assert (v.sp_index.equals(expected)) - indices1 = [BlockIndex(10, [2], [7]), - BlockIndex(10, [1, 6], [3, 4]), + indices1 = [BlockIndex(10, [2], [7]), BlockIndex(10, [1, 6], [3, 4]), BlockIndex(10, [0], [10])] expected1 = BlockIndex(10, [2, 6], [2, 3]) _check_matches(indices1, expected1) - indices2 = [BlockIndex(10, [2], [7]), - BlockIndex(10, [2], [7])] + indices2 = [BlockIndex(10, [2], [7]), BlockIndex(10, [2], [7])] expected2 = indices2[0] _check_matches(indices2, expected2) @@ -727,8 +723,7 @@ def test_fill_value_corner(self): self.assertTrue(np.isnan(result.fill_value)) def test_shift(self): - series = SparseSeries([nan, 1., 2., 3., nan, nan], - index=np.arange(6)) + series = SparseSeries([nan, 1., 2., 3., 
nan, nan], index=np.arange(6)) shifted = series.shift(0) self.assertIsNot(shifted, series) @@ -772,23 +767,29 @@ def test_combine_first(self): assert_sp_series_equal(result, result2) assert_sp_series_equal(result, expected) -class TestSparseHandlingMultiIndexes(tm.TestCase): +class TestSparseHandlingMultiIndexes(tm.TestCase): def setUp(self): - miindex = pd.MultiIndex.from_product([["x","y"], ["10","20"]],names=['row-foo', 'row-bar']) - micol = pd.MultiIndex.from_product([['a','b','c'], ["1","2"]],names=['col-foo', 'col-bar']) - dense_multiindex_frame = pd.DataFrame(index=miindex, columns=micol).sortlevel().sortlevel(axis=1) + miindex = pd.MultiIndex.from_product( + [["x", "y"], ["10", "20"]], names=['row-foo', 'row-bar']) + micol = pd.MultiIndex.from_product( + [['a', 'b', 'c'], ["1", "2"]], names=['col-foo', 'col-bar']) + dense_multiindex_frame = pd.DataFrame( + index=miindex, columns=micol).sortlevel().sortlevel(axis=1) self.dense_multiindex_frame = dense_multiindex_frame.fillna(value=3.14) def test_to_sparse_preserve_multiindex_names_columns(self): - sparse_multiindex_frame = self.dense_multiindex_frame.to_sparse().copy() - assert_index_equal(sparse_multiindex_frame.columns,self.dense_multiindex_frame.columns) + sparse_multiindex_frame = self.dense_multiindex_frame.to_sparse() + sparse_multiindex_frame = sparse_multiindex_frame.copy() + assert_index_equal(sparse_multiindex_frame.columns, + self.dense_multiindex_frame.columns) def test_round_trip_preserve_multiindex_names(self): sparse_multiindex_frame = self.dense_multiindex_frame.to_sparse() round_trip_multiindex_frame = sparse_multiindex_frame.to_dense() - assert_frame_equal(self.dense_multiindex_frame,round_trip_multiindex_frame, - check_column_type=True,check_names=True) + assert_frame_equal(self.dense_multiindex_frame, + round_trip_multiindex_frame, check_column_type=True, + check_names=True) class TestSparseSeriesScipyInteraction(tm.TestCase): @@ -813,8 +814,9 @@ def setUp(self): ss.index.names = [3, 0, 
1, 2] self.sparse_series.append(ss) - ss = pd.Series( - [nan] * 12, index=cartesian_product((range(3), range(4)))).to_sparse() + ss = pd.Series([ + nan + ] * 12, index=cartesian_product((range(3), range(4)))).to_sparse() for k, v in zip([(0, 0), (1, 2), (1, 3)], [3.0, 1.0, 2.0]): ss[k] = v self.sparse_series.append(ss) @@ -827,7 +829,8 @@ def setUp(self): ([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])), shape=(3, 4))) self.coo_matrices.append(scipy.sparse.coo_matrix( ([3.0, 1.0, 2.0], ([0, 1, 1], [0, 0, 1])), shape=(3, 2))) - self.ils = [[(1, 2), (1, 1), (2, 1)], [(1, 1), (1, 2), (2, 1)], [(1, 2, 'a'), (1, 1, 'b'), (2, 1, 'b')]] + self.ils = [[(1, 2), (1, 1), (2, 1)], [(1, 1), (1, 2), (2, 1)], + [(1, 2, 'a'), (1, 1, 'b'), (2, 1, 'b')]] self.jls = [[('a', 0), ('a', 1), ('b', 0), ('b', 1)], [0, 1]] def test_to_coo_text_names_integer_row_levels_nosort(self): @@ -839,14 +842,16 @@ def test_to_coo_text_names_integer_row_levels_nosort(self): def test_to_coo_text_names_integer_row_levels_sort(self): ss = self.sparse_series[0] kwargs = {'row_levels': [0, 1], - 'column_levels': [2, 3], 'sort_labels': True} + 'column_levels': [2, 3], + 'sort_labels': True} result = (self.coo_matrices[1], self.ils[1], self.jls[0]) self._run_test(ss, kwargs, result) def test_to_coo_text_names_text_row_levels_nosort_col_level_single(self): ss = self.sparse_series[0] kwargs = {'row_levels': ['A', 'B', 'C'], - 'column_levels': ['D'], 'sort_labels': False} + 'column_levels': ['D'], + 'sort_labels': False} result = (self.coo_matrices[2], self.ils[2], self.jls[1]) self._run_test(ss, kwargs, result) @@ -880,8 +885,8 @@ def test_to_coo_bad_ilevel(self): self.assertRaises(KeyError, ss.to_coo, ['A', 'B'], ['C', 'D', 'E']) def test_to_coo_duplicate_index_entries(self): - ss = pd.concat( - [self.sparse_series[0], self.sparse_series[0]]).to_sparse() + ss = pd.concat([self.sparse_series[0], + self.sparse_series[0]]).to_sparse() self.assertRaises(ValueError, ss.to_coo, ['A', 'B'], ['C', 'D']) def 
test_from_coo_dense_index(self): @@ -945,8 +950,7 @@ def setUp(self): values[np.isnan(values)] = 0 self.zframe = SparseDataFrame(values, columns=['A', 'B', 'C', 'D'], - default_fill_value=0, - index=self.dates) + default_fill_value=0, index=self.dates) values = self.frame.values.copy() values[np.isnan(values)] = 2 @@ -1005,11 +1009,10 @@ def test_constructor(self): # init dict with different index idx = self.frame.index[:5] - cons = SparseDataFrame(self.frame, index=idx, - columns=self.frame.columns, - default_fill_value=self.frame.default_fill_value, - default_kind=self.frame.default_kind, - copy=True) + cons = SparseDataFrame( + self.frame, index=idx, columns=self.frame.columns, + default_fill_value=self.frame.default_fill_value, + default_kind=self.frame.default_kind, copy=True) reindexed = self.frame.reindex(idx) assert_sp_frame_equal(cons, reindexed, exact_indices=False) @@ -1023,8 +1026,7 @@ def test_constructor_ndarray(self): sp = SparseDataFrame(self.frame.values) # 1d - sp = SparseDataFrame(self.data['A'], index=self.dates, - columns=['A']) + sp = SparseDataFrame(self.data['A'], index=self.dates, columns=['A']) assert_sp_frame_equal(sp, self.frame.reindex(columns=['A'])) # raise on level argument @@ -1032,12 +1034,10 @@ def test_constructor_ndarray(self): level=1) # wrong length index / columns - assertRaisesRegexp( - ValueError, "^Index length", SparseDataFrame, self.frame.values, - index=self.frame.index[:-1]) - assertRaisesRegexp( - ValueError, "^Column length", SparseDataFrame, self.frame.values, - columns=self.frame.columns[:-1]) + assertRaisesRegexp(ValueError, "^Index length", SparseDataFrame, + self.frame.values, index=self.frame.index[:-1]) + assertRaisesRegexp(ValueError, "^Column length", SparseDataFrame, + self.frame.values, columns=self.frame.columns[:-1]) # GH 9272 def test_constructor_empty(self): @@ -1068,13 +1068,15 @@ def test_constructor_from_series(self): y = Series(np.random.randn(10000), name='b') x2 = x.astype(float) x2.ix[:9998] = 
np.NaN - x_sparse = x2.to_sparse(fill_value=np.NaN) + # TODO: x_sparse is unused...fix + x_sparse = x2.to_sparse(fill_value=np.NaN) # noqa # Currently fails too with weird ufunc error # df1 = SparseDataFrame([x_sparse, y]) y.ix[:9998] = 0 - y_sparse = y.to_sparse(fill_value=0) + # TODO: y_sparse is unused...fix + y_sparse = y.to_sparse(fill_value=0) # noqa # without sparse value raises error # df2 = SparseDataFrame([x2_sparse, y]) @@ -1129,6 +1131,13 @@ def test_density(self): df = SparseSeries([nan, nan, nan, 0, 1, 2, 3, 4, 5, 6]) self.assertEqual(df.density, 0.7) + df = SparseDataFrame({'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6], + 'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6], + 'C': np.arange(10), + 'D': [0, 1, 2, 3, 4, 5, nan, nan, nan, nan]}) + + self.assertEqual(df.density, 0.75) + def test_sparse_to_dense(self): pass @@ -1196,11 +1205,10 @@ def _compare_to_dense(a, b, da, db, op): # time series operations - series = [frame['A'], frame['B'], - frame['C'], frame['D'], - frame['A'].reindex(fidx[:7]), - frame['A'].reindex(fidx[::2]), - SparseSeries([], index=[])] + series = [frame['A'], frame['B'], frame['C'], frame['D'], + frame['A'].reindex(fidx[:7]), frame['A'].reindex(fidx[::2]), + SparseSeries( + [], index=[])] for op in opnames: _compare_to_dense(frame, frame[::2], frame.to_dense(),
cross-sectional operations + series = [frame.xs(fidx[0]), frame.xs(fidx[3]), frame.xs(fidx[5]), + frame.xs(fidx[7]), frame.xs(fidx[5])[:2]] for op in ops: for s in series: - _compare_to_dense(frame, s, frame.to_dense(), - s, op) - _compare_to_dense(s, frame, s, - frame.to_dense(), op) + _compare_to_dense(frame, s, frame.to_dense(), s, op) + _compare_to_dense(s, frame, s, frame.to_dense(), op) # it works! - result = self.frame + self.frame.ix[:, ['A', 'B']] + result = self.frame + self.frame.ix[:, ['A', 'B']] # noqa def test_op_corners(self): empty = self.empty + self.empty @@ -1330,8 +1332,8 @@ def _check_frame(frame): # insert SparseSeries differently-indexed to_insert = frame['A'][::2] frame['E'] = to_insert - expected = to_insert.to_dense().reindex( - frame.index).fillna(to_insert.fill_value) + expected = to_insert.to_dense().reindex(frame.index).fillna( + to_insert.fill_value) result = frame['E'].to_dense() assert_series_equal(result, expected, check_names=False) self.assertEqual(result.name, 'E') @@ -1344,8 +1346,8 @@ def _check_frame(frame): # insert Series differently-indexed to_insert = frame['A'].to_dense()[::2] frame['G'] = to_insert - expected = to_insert.reindex( - frame.index).fillna(frame.default_fill_value) + expected = to_insert.reindex(frame.index).fillna( + frame.default_fill_value) expected.name = 'G' assert_series_equal(frame['G'].to_dense(), expected) @@ -1374,18 +1376,21 @@ def _check_frame(frame): def test_setitem_corner(self): self.frame['a'] = self.frame['B'] - assert_sp_series_equal(self.frame['a'], self.frame['B'], check_names=False) + assert_sp_series_equal(self.frame['a'], self.frame['B'], + check_names=False) def test_setitem_array(self): arr = self.frame['B'] self.frame['E'] = arr - assert_sp_series_equal(self.frame['E'], self.frame['B'], check_names=False) + assert_sp_series_equal(self.frame['E'], self.frame['B'], + check_names=False) self.frame['F'] = arr[:-1] index = self.frame.index[:-1] 
assert_sp_series_equal(self.frame['E'].reindex(index), - self.frame['F'].reindex(index), check_names=False) + self.frame['F'].reindex(index), + check_names=False) def test_delitem(self): A = self.frame['A'] @@ -1422,8 +1427,8 @@ def test_append(self): a = self.frame.ix[:5, :3] b = self.frame.ix[5:] appended = a.append(b) - assert_sp_frame_equal( - appended.ix[:, :3], self.frame.ix[:, :3], exact_indices=False) + assert_sp_frame_equal(appended.ix[:, :3], self.frame.ix[:, :3], + exact_indices=False) def test_apply(self): applied = self.frame.apply(np.sqrt) @@ -1456,8 +1461,8 @@ def test_apply_nonuq(self): # df.T breaks df = df_orig.T.to_sparse() - rs = df.apply(lambda s: s[0], axis=0) - # no non-unique columns supported in sparse yet + rs = df.apply(lambda s: s[0], axis=0) # noqa + # TODO: no non-unique columns supported in sparse yet # assert_series_equal(rs, xp) def test_applymap(self): @@ -1486,8 +1491,8 @@ def test_fillna(self): def test_rename(self): # just check this works - renamed = self.frame.rename(index=str) - renamed = self.frame.rename(columns=lambda x: '%s%d' % (x, len(x))) + renamed = self.frame.rename(index=str) # noqa + renamed = self.frame.rename(columns=lambda x: '%s%d' % (x, len(x))) # noqa def test_corr(self): res = self.frame.corr() @@ -1497,7 +1502,7 @@ def test_describe(self): self.frame['foo'] = np.nan self.frame.get_dtype_counts() str(self.frame) - desc = self.frame.describe() + desc = self.frame.describe() # noqa def test_join(self): left = self.frame.ix[:, ['A', 'B']] @@ -1508,16 +1513,16 @@ def test_join(self): right = self.frame.ix[:, ['B', 'D']] self.assertRaises(Exception, left.join, right) - with tm.assertRaisesRegexp(ValueError, 'Other Series must have a name'): - self.frame.join(Series(np.random.randn(len(self.frame)), - index=self.frame.index)) + with tm.assertRaisesRegexp(ValueError, + 'Other Series must have a name'): + self.frame.join(Series( + np.random.randn(len(self.frame)), index=self.frame.index)) def test_reindex(self): - 
def _check_frame(frame): index = frame.index sidx = index[::2] - sidx2 = index[:5] + sidx2 = index[:5] # noqa sparse_result = frame.reindex(sidx) dense_result = frame.to_dense().reindex(sidx) @@ -1527,8 +1532,8 @@ def _check_frame(frame): dense_result) sparse_result2 = sparse_result.reindex(index) - dense_result2 = dense_result.reindex( - index).fillna(frame.default_fill_value) + dense_result2 = dense_result.reindex(index).fillna( + frame.default_fill_value) assert_frame_equal(sparse_result2.to_dense(), dense_result2) # propagate CORRECT fill value @@ -1581,14 +1586,6 @@ def test_take(self): expected = self.frame.reindex(columns=['B', 'A', 'C']) assert_sp_frame_equal(result, expected) - def test_density(self): - df = SparseDataFrame({'A': [nan, nan, nan, 0, 1, 2, 3, 4, 5, 6], - 'B': [0, 1, 2, nan, nan, nan, 3, 4, 5, 6], - 'C': np.arange(10), - 'D': [0, 1, 2, 3, 4, 5, nan, nan, nan, nan]}) - - self.assertEqual(df.density, 0.75) - def test_to_dense(self): def _check(frame): dense_dm = frame.to_dense() @@ -1598,7 +1595,7 @@ def _check(frame): def test_stack_sparse_frame(self): def _check(frame): - dense_frame = frame.to_dense() + dense_frame = frame.to_dense() # noqa wp = Panel.from_dict({'foo': frame}) from_dense_lp = wp.to_frame() @@ -1620,6 +1617,7 @@ def _check(frame): transposed = frame.T untransposed = transposed.T assert_sp_frame_equal(frame, untransposed) + self._check_all(_check) def test_shift(self): @@ -1700,7 +1698,7 @@ def test_sparse_pow_issue(self): df = SparseDataFrame({'A': [nan, 0, 1]}) # note that 2 ** df works fine, also df ** 1 - result = 1 ** df + result = 1**df r1 = result.take([0], 1)['A'] r2 = result['A'] @@ -1717,21 +1715,21 @@ def test_as_blocks(self): def test_nan_columnname(self): # GH 8822 - nan_colname = DataFrame(Series(1.0,index=[0]),columns=[nan]) + nan_colname = DataFrame(Series(1.0, index=[0]), columns=[nan]) nan_colname_sparse = nan_colname.to_sparse() self.assertTrue(np.isnan(nan_colname_sparse.columns[0])) def 
_dense_series_compare(s, f): result = f(s) - assert(isinstance(result, SparseSeries)) + assert (isinstance(result, SparseSeries)) dense_result = f(s.to_dense()) assert_series_equal(result.to_dense(), dense_result) def _dense_frame_compare(frame, f): result = f(frame) - assert(isinstance(frame, SparseDataFrame)) + assert (isinstance(frame, SparseDataFrame)) dense_result = f(frame.to_dense()).fillna(frame.default_fill_value) assert_frame_equal(result.to_dense(), dense_result) @@ -1769,8 +1767,7 @@ def panel_data3(): }, index=index) -class TestSparsePanel(tm.TestCase, - test_panel.SafeForLongAndSparse, +class TestSparsePanel(tm.TestCase, test_panel.SafeForLongAndSparse, test_panel.SafeForSparse): _multiprocess_can_split_ = True @@ -1800,7 +1797,8 @@ def test_constructor(self): self.assertRaises(ValueError, SparsePanel, self.data_dict, items=['Item0', 'ItemA', 'ItemB']) with tm.assertRaisesRegexp(TypeError, - "input must be a dict, a 'list' was passed"): + "input must be a dict, a 'list' was " + "passed"): SparsePanel(['a', 'b', 'c']) # deprecation GH11157 @@ -1909,8 +1907,7 @@ def test_reindex(self): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): def _compare_with_dense(swp, items, major, minor): - swp_re = swp.reindex(items=items, major=major, - minor=minor) + swp_re = swp.reindex(items=items, major=major, minor=minor) dwp_re = swp.to_dense().reindex(items=items, major=major, minor=minor) assert_panel_equal(swp_re.to_dense(), dwp_re) @@ -1918,8 +1915,7 @@ def _compare_with_dense(swp, items, major, minor): _compare_with_dense(self.panel, self.panel.items[:2], self.panel.major_axis[::2], self.panel.minor_axis[::2]) - _compare_with_dense(self.panel, None, - self.panel.major_axis[::2], + _compare_with_dense(self.panel, None, self.panel.major_axis[::2], self.panel.minor_axis[::2]) self.assertRaises(ValueError, self.panel.reindex) @@ -1935,16 +1931,17 @@ def _compare_with_dense(swp, items, major, minor): def test_operators(self): def 
_check_ops(panel): - def _dense_comp(op): - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): dense = panel.to_dense() sparse_result = op(panel) dense_result = op(dense) assert_panel_equal(sparse_result.to_dense(), dense_result) def _mixed_comp(op): - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): result = op(panel, panel.to_dense()) expected = op(panel.to_dense(), panel.to_dense()) assert_panel_equal(result, expected) @@ -1992,8 +1989,9 @@ def _dense_comp(sparse): _dense_comp(self.panel) + if __name__ == '__main__': - import nose + import nose # noqa nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False)
https://api.github.com/repos/pandas-dev/pandas/pulls/12116
2016-01-22T01:55:48Z
2016-01-24T22:25:10Z
null
2016-01-24T22:25:19Z
CLN: grab bag of flake8 fixes
diff --git a/pandas/__init__.py b/pandas/__init__.py index c2ead16b6f821..ca304fa8f8631 100644 --- a/pandas/__init__.py +++ b/pandas/__init__.py @@ -1,5 +1,6 @@ # pylint: disable-msg=W0614,W0401,W0611,W0622 +# flake8: noqa __docformat__ = 'restructuredtext' diff --git a/pandas/_version.py b/pandas/_version.py index 61e9f3ff187ea..77b2fdca59576 100644 --- a/pandas/_version.py +++ b/pandas/_version.py @@ -8,6 +8,8 @@ # This file is released into the public domain. Generated by # versioneer-0.15 (https://github.com/warner/python-versioneer) +# flake8: noqa + import errno import os import re diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py index 2da4427af4cb6..f69cd4ef43f8b 100644 --- a/pandas/compat/__init__.py +++ b/pandas/compat/__init__.py @@ -25,6 +25,8 @@ * platform checker """ # pylint disable=W0611 +# flake8: noqa + import functools import itertools from distutils.version import LooseVersion diff --git a/pandas/compat/chainmap_impl.py b/pandas/compat/chainmap_impl.py index 92d2424057f83..c059ad08d4a7f 100644 --- a/pandas/compat/chainmap_impl.py +++ b/pandas/compat/chainmap_impl.py @@ -58,16 +58,19 @@ def __missing__(self, key): def __getitem__(self, key): for mapping in self.maps: try: - return mapping[key] # can't use 'key in mapping' with defaultdict + # can't use 'key in mapping' with defaultdict + return mapping[key] except KeyError: pass - return self.__missing__(key) # support subclasses that define __missing__ + # support subclasses that define __missing__ + return self.__missing__(key) def get(self, key, default=None): return self[key] if key in self else default def __len__(self): - return len(set().union(*self.maps)) # reuses stored hash values if possible + # reuses stored hash values if possible + return len(set().union(*self.maps)) def __iter__(self): return iter(set().union(*self.maps)) @@ -89,7 +92,10 @@ def fromkeys(cls, iterable, *args): return cls(dict.fromkeys(iterable, *args)) def copy(self): - 'New ChainMap or subclass 
with a new copy of maps[0] and refs to maps[1:]' + """ + New ChainMap or subclass with a new copy of maps[0] and refs to + maps[1:] + """ return self.__class__(self.maps[0].copy(), *self.maps[1:]) __copy__ = copy @@ -115,21 +121,29 @@ def __delitem__(self, key): try: del self.maps[0][key] except KeyError: - raise KeyError('Key not found in the first mapping: {!r}'.format(key)) + raise KeyError('Key not found in the first mapping: {!r}' + .format(key)) def popitem(self): - 'Remove and return an item pair from maps[0]. Raise KeyError is maps[0] is empty.' + """ + Remove and return an item pair from maps[0]. Raise KeyError if maps[0] + is empty. + """ try: return self.maps[0].popitem() except KeyError: raise KeyError('No keys found in the first mapping.') def pop(self, key, *args): - 'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].' + """ + Remove *key* from maps[0] and return its value. Raise KeyError if + *key* not in maps[0]. + """ try: return self.maps[0].pop(key, *args) except KeyError: - raise KeyError('Key not found in the first mapping: {!r}'.format(key)) + raise KeyError('Key not found in the first mapping: {!r}' + .format(key)) def clear(self): 'Clear maps[0], leaving maps[1:] intact.'
diff --git a/pandas/compat/openpyxl_compat.py b/pandas/compat/openpyxl_compat.py index 266aded2071b6..87cf52cf00fef 100644 --- a/pandas/compat/openpyxl_compat.py +++ b/pandas/compat/openpyxl_compat.py @@ -32,4 +32,4 @@ def is_compat(major_ver=1): return LooseVersion(stop_ver) <= ver else: raise ValueError('cannot test for openpyxl compatibility with ver {0}' - .format(major_ver)) + .format(major_ver)) diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py index e794725574119..3059c39c2cb82 100644 --- a/pandas/compat/pickle_compat.py +++ b/pandas/compat/pickle_compat.py @@ -1,5 +1,7 @@ """ support pre 0.12 series pickle compatibility """ +# flake8: noqa + import sys import numpy as np import pandas diff --git a/pandas/computation/align.py b/pandas/computation/align.py index b5f730378c3cf..ab7c72e7480f9 100644 --- a/pandas/computation/align.py +++ b/pandas/computation/align.py @@ -173,8 +173,8 @@ def _reconstruct_object(typ, obj, axes, dtype): ret_value = res_t.type(obj) else: ret_value = typ(obj).astype(res_t) - # The condition is to distinguish 0-dim array (returned in case of scalar) - # and 1 element array + # The condition is to distinguish 0-dim array (returned in case of + # scalar) and 1 element array # e.g. 
np.array(0) and np.array([0]) if len(obj.shape) == 1 and len(obj) == 1: if not isinstance(ret_value, np.ndarray): diff --git a/pandas/computation/api.py b/pandas/computation/api.py index db8269a497768..e5814e08c4bbe 100644 --- a/pandas/computation/api.py +++ b/pandas/computation/api.py @@ -1,2 +1,4 @@ +# flake8: noqa + from pandas.computation.eval import eval from pandas.computation.expr import Expr diff --git a/pandas/computation/engines.py b/pandas/computation/engines.py index 58b822af546c8..532921035c385 100644 --- a/pandas/computation/engines.py +++ b/pandas/computation/engines.py @@ -1,13 +1,16 @@ """Engine classes for :func:`~pandas.eval` """ +# flake8: noqa + import abc from pandas import compat from pandas.compat import DeepChainMap, map from pandas.core import common as com from pandas.computation.align import _align, _reconstruct_object -from pandas.computation.ops import UndefinedVariableError, _mathops, _reductions +from pandas.computation.ops import (UndefinedVariableError, + _mathops, _reductions) _ne_builtins = frozenset(_mathops + _reductions) @@ -30,8 +33,8 @@ def _check_ne_builtin_clash(expr): if overlap: s = ', '.join(map(repr, overlap)) - raise NumExprClobberingError('Variables in expression "%s" overlap with ' - 'numexpr builtins: (%s)' % (expr, s)) + raise NumExprClobberingError('Variables in expression "%s" ' + 'overlap with builtins: (%s)' % (expr, s)) class AbstractEngine(object): diff --git a/pandas/computation/expr.py b/pandas/computation/expr.py index 6da5cf4753a8e..61a3c9991160d 100644 --- a/pandas/computation/expr.py +++ b/pandas/computation/expr.py @@ -2,11 +2,7 @@ """ import ast -import operator -import sys -import inspect import tokenize -import datetime from functools import partial @@ -21,7 +17,7 @@ from pandas.computation.ops import _reductions, _mathops, _LOCAL_TAG from pandas.computation.ops import Op, BinOp, UnaryOp, Term, Constant, Div from pandas.computation.ops import UndefinedVariableError, FuncNode -from 
pandas.computation.scope import Scope, _ensure_scope +from pandas.computation.scope import Scope def tokenize_string(source): @@ -381,9 +377,9 @@ def _possibly_evaluate_binop(self, op, op_class, lhs, rhs, rhs.type)) if self.engine != 'pytables': - if (res.op in _cmp_ops_syms - and getattr(lhs, 'is_datetime', False) - or getattr(rhs, 'is_datetime', False)): + if (res.op in _cmp_ops_syms and + getattr(lhs, 'is_datetime', False) or + getattr(rhs, 'is_datetime', False)): # all date ops must be done in python bc numexpr doesn't work # well with NaT return self._possibly_eval(res, self.binary_ops) @@ -392,8 +388,8 @@ def _possibly_evaluate_binop(self, op, op_class, lhs, rhs, # "in"/"not in" ops are always evaluated in python return self._possibly_eval(res, eval_in_python) elif self.engine != 'pytables': - if (getattr(lhs, 'return_type', None) == object - or getattr(rhs, 'return_type', None) == object): + if (getattr(lhs, 'return_type', None) == object or + getattr(rhs, 'return_type', None) == object): # evaluate "==" and "!=" in python if either of our operands # has an object return type return self._possibly_eval(res, eval_in_python + @@ -517,7 +513,8 @@ def visit_Attribute(self, node, **kwargs): raise ValueError("Invalid Attribute context {0}".format(ctx.__name__)) def visit_Call_35(self, node, side=None, **kwargs): - """ in 3.5 the starargs attribute was changed to be more flexible, #11097 """ + """ in 3.5 the starargs attribute was changed to be more flexible, + #11097 """ if isinstance(node.func, ast.Attribute): res = self.visit_Attribute(node.func) @@ -541,7 +538,7 @@ def visit_Call_35(self, node, side=None, **kwargs): if isinstance(res, FuncNode): - new_args = [ self.visit(arg) for arg in node.args ] + new_args = [self.visit(arg) for arg in node.args] if node.keywords: raise TypeError("Function \"{0}\" does not support keyword " @@ -551,7 +548,7 @@ def visit_Call_35(self, node, side=None, **kwargs): else: - new_args = [ self.visit(arg).value for arg in node.args 
] + new_args = [self.visit(arg).value for arg in node.args] for key in node.keywords: if not isinstance(key, ast.keyword): @@ -559,7 +556,9 @@ def visit_Call_35(self, node, side=None, **kwargs): "'{0}'".format(node.func.id)) if key.arg: - kwargs.append(ast.keyword(keyword.arg, self.visit(keyword.value))) + # TODO: bug? + kwargs.append(ast.keyword( + keyword.arg, self.visit(keyword.value))) # noqa return self.const_type(res(*new_args, **kwargs), self.env) diff --git a/pandas/computation/expressions.py b/pandas/computation/expressions.py index 70541c94b4e8e..6e33250010c2b 100644 --- a/pandas/computation/expressions.py +++ b/pandas/computation/expressions.py @@ -16,9 +16,10 @@ ver = ne.__version__ _NUMEXPR_INSTALLED = ver >= LooseVersion('2.1') if not _NUMEXPR_INSTALLED: - warnings.warn("The installed version of numexpr {ver} is not supported " - "in pandas and will be not be used\nThe minimum supported " - "version is 2.1\n".format(ver=ver), UserWarning) + warnings.warn( + "The installed version of numexpr {ver} is not supported " + "in pandas and will be not be used\nThe minimum supported " + "version is 2.1\n".format(ver=ver), UserWarning) except ImportError: # pragma: no cover _NUMEXPR_INSTALLED = False @@ -96,8 +97,8 @@ def _can_use_numexpr(op, op_str, a, b, dtype_check): return False -def _evaluate_numexpr(op, op_str, a, b, raise_on_error=False, truediv=True, reversed=False, - **eval_kwargs): +def _evaluate_numexpr(op, op_str, a, b, raise_on_error=False, truediv=True, + reversed=False, **eval_kwargs): result = None if _can_use_numexpr(op, op_str, a, b, 'evaluate'): @@ -106,7 +107,7 @@ def _evaluate_numexpr(op, op_str, a, b, raise_on_error=False, truediv=True, reve # we were originally called by a reversed op # method if reversed: - a,b = b,a + a, b = b, a a_value = getattr(a, "values", a) b_value = getattr(b, "values", b) diff --git a/pandas/computation/ops.py b/pandas/computation/ops.py index f6d5f171036ea..0d528de9f55b6 100644 --- a/pandas/computation/ops.py 
+++ b/pandas/computation/ops.py @@ -498,12 +498,13 @@ def return_type(self): if operand.return_type == np.dtype('bool'): return np.dtype('bool') if (isinstance(operand, Op) and - (operand.op in _cmp_ops_dict or operand.op in _bool_ops_dict)): + (operand.op in _cmp_ops_dict or operand.op in _bool_ops_dict)): return np.dtype('bool') return np.dtype('int') class MathCall(Op): + def __init__(self, func, args): super(MathCall, self).__init__(func.name, args) self.func = func @@ -518,9 +519,11 @@ def __unicode__(self): class FuncNode(object): + def __init__(self, name): if name not in _mathops: - raise ValueError("\"{0}\" is not a supported function".format(name)) + raise ValueError( + "\"{0}\" is not a supported function".format(name)) self.name = name self.func = getattr(np, name) diff --git a/pandas/computation/pytables.py b/pandas/computation/pytables.py index 58359a815ed26..3b3a0a8ab8525 100644 --- a/pandas/computation/pytables.py +++ b/pandas/computation/pytables.py @@ -7,12 +7,11 @@ from datetime import datetime, timedelta import numpy as np import pandas as pd -from pandas.compat import u, string_types, PY3, DeepChainMap +from pandas.compat import u, string_types, DeepChainMap from pandas.core.base import StringMixin import pandas.core.common as com from pandas.computation import expr, ops from pandas.computation.ops import is_term, UndefinedVariableError -from pandas.computation.scope import _ensure_scope from pandas.computation.expr import BaseExprVisitor from pandas.computation.common import _ensure_decoded from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type @@ -147,17 +146,17 @@ def is_in_table(self): @property def kind(self): """ the kind of my field """ - return getattr(self.queryables.get(self.lhs),'kind',None) + return getattr(self.queryables.get(self.lhs), 'kind', None) @property def meta(self): """ the meta of my field """ - return getattr(self.queryables.get(self.lhs),'meta',None) + return getattr(self.queryables.get(self.lhs), 
'meta', None) @property def metadata(self): """ the metadata of my field """ - return getattr(self.queryables.get(self.lhs),'metadata',None) + return getattr(self.queryables.get(self.lhs), 'metadata', None) def generate(self, v): """ create and return the op string for this TermValue """ @@ -195,7 +194,7 @@ def stringify(value): return TermValue(int(v), v, kind) elif meta == u('category'): metadata = com._values_from_object(self.metadata) - result = metadata.searchsorted(v,side='left') + result = metadata.searchsorted(v, side='left') return TermValue(result, result, u('integer')) elif kind == u('integer'): v = int(float(v)) @@ -504,7 +503,7 @@ def __init__(self, where, op=None, value=None, queryables=None, else: w = self.parse_back_compat(w) where[idx] = w - where = ' & ' .join(["(%s)" % w for w in where]) + where = ' & ' .join(["(%s)" % w for w in where]) # noqa self.expr = where self.env = Scope(scope_level + 1, local_dict=local_dict) @@ -551,12 +550,14 @@ def parse_back_compat(self, w, op=None, value=None): # stringify with quotes these values def convert(v): - if isinstance(v, (datetime,np.datetime64,timedelta,np.timedelta64)) or hasattr(v, 'timetuple'): + if (isinstance(v, (datetime, np.datetime64, + timedelta, np.timedelta64)) or + hasattr(v, 'timetuple')): return "'{0}'".format(v) return v - if isinstance(value, (list,tuple)): - value = [ convert(v) for v in value ] + if isinstance(value, (list, tuple)): + value = [convert(v) for v in value] else: value = convert(value) diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py index b085232a1a8be..82da9cacd1460 100644 --- a/pandas/computation/tests/test_eval.py +++ b/pandas/computation/tests/test_eval.py @@ -1,5 +1,7 @@ #!/usr/bin/env python +# flake8: noqa + import warnings import operator from itertools import product @@ -82,6 +84,7 @@ def _is_py3_complex_incompat(result, expected): _good_arith_ops = com.difference(_arith_ops_syms, _special_case_arith_ops_syms) + class 
TestEvalNumexprPandas(tm.TestCase): @classmethod @@ -194,7 +197,7 @@ def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2): binop=binop, cmp2=cmp2) scalar_with_in_notin = (np.isscalar(rhs) and (cmp1 in skip_these or - cmp2 in skip_these)) + cmp2 in skip_these)) if scalar_with_in_notin: with tm.assertRaises(TypeError): pd.eval(ex, engine=self.engine, parser=self.parser) @@ -211,12 +214,12 @@ def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2): # hand side bool ops are fixed. # try: - # self.assertRaises(Exception, pd.eval, ex, - #local_dict={'lhs': lhs, 'rhs': rhs}, - # engine=self.engine, parser=self.parser) + # self.assertRaises(Exception, pd.eval, ex, + #local_dict={'lhs': lhs, 'rhs': rhs}, + # engine=self.engine, parser=self.parser) # except AssertionError: - #import ipdb; ipdb.set_trace() - # raise + #import ipdb; ipdb.set_trace() + # raise else: expected = _eval_single_bin( lhs_new, binop, rhs_new, self.engine) @@ -351,7 +354,7 @@ def check_single_invert_op(self, lhs, cmp1, rhs): for engine in self.current_engines: tm.skip_if_no_ne(engine) tm.assert_numpy_array_equal(result, pd.eval('~elb', engine=engine, - parser=self.parser)) + parser=self.parser)) def check_compound_invert_op(self, lhs, cmp1, rhs): skip_these = 'in', 'not in' @@ -616,8 +619,8 @@ def test_unary_in_array(self): '-False, False, ~False, +False,' '-37, 37, ~37, +37]'), np.array([-True, True, ~True, +True, - -False, False, ~False, +False, - -37, 37, ~37, +37])) + -False, False, ~False, +False, + -37, 37, ~37, +37])) def test_disallow_scalar_bool_ops(self): exprs = '1 or 2', '1 and 2' @@ -834,7 +837,8 @@ def check_medium_complex_frame_alignment(self, engine, parser): res = pd.eval('df + df2 + df3', engine=engine, parser=parser) else: - res = pd.eval('df + df2 + df3', engine=engine, parser=parser) + res = pd.eval('df + df2 + df3', + engine=engine, parser=parser) assert_frame_equal(res, df + df2 + df3) @slow @@ -1549,6 +1553,7 @@ def setUpClass(cls): class 
TestMathPythonPython(tm.TestCase): + @classmethod def setUpClass(cls): super(TestMathPythonPython, cls).setUpClass() @@ -1648,6 +1653,7 @@ def test_keyword_arg(self): class TestMathPythonPandas(TestMathPythonPython): + @classmethod def setUpClass(cls): super(TestMathPythonPandas, cls).setUpClass() @@ -1656,6 +1662,7 @@ def setUpClass(cls): class TestMathNumExprPandas(TestMathPythonPython): + @classmethod def setUpClass(cls): super(TestMathNumExprPandas, cls).setUpClass() @@ -1664,6 +1671,7 @@ def setUpClass(cls): class TestMathNumExprPython(TestMathPythonPython): + @classmethod def setUpClass(cls): super(TestMathNumExprPython, cls).setUpClass() @@ -1679,7 +1687,7 @@ class TestScope(object): def check_global_scope(self, e, engine, parser): tm.skip_if_no_ne(engine) tm.assert_numpy_array_equal(_var_s * 2, pd.eval(e, engine=engine, - parser=parser)) + parser=parser)) def test_global_scope(self): e = '_var_s * 2' @@ -1819,7 +1827,7 @@ def check_numexpr_builtin_raises(engine, parser): sin, dotted_line = 1, 2 if engine == 'numexpr': with tm.assertRaisesRegexp(NumExprClobberingError, - 'Variables in expression .+'): + 'Variables in expression .+'): pd.eval('sin + dotted_line', engine=engine, parser=parser) else: res = pd.eval('sin + dotted_line', engine=engine, parser=parser) @@ -1906,6 +1914,7 @@ def check_negate_lt_eq_le(engine, parser): result = df.query('not (cat > 0)', engine=engine, parser=parser) tm.assert_frame_equal(result, expected) + def test_negate_lt_eq_le(): for engine, parser in product(_engines, expr._parsers): yield check_negate_lt_eq_le, engine, parser diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index f06ad927bb61b..65d853f92b6cd 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -2211,7 +2211,7 @@ def _clean_na_values(na_values, keep_default_na=True): v = set(list(v)) | _NA_VALUES na_values[k] = v na_fvalues = dict([ - (k, _floatify_na_values(v)) for k, v in na_values.items() + (k, _floatify_na_values(v)) for k, v in 
na_values.items() # noqa ]) else: if not com.is_list_like(na_values): diff --git a/pandas/msgpack/__init__.py b/pandas/msgpack/__init__.py index bf0e2853ae131..0c2370df936a4 100644 --- a/pandas/msgpack/__init__.py +++ b/pandas/msgpack/__init__.py @@ -1,4 +1,6 @@ # coding: utf-8 +# flake8: noqa + from pandas.msgpack._version import version from pandas.msgpack.exceptions import * diff --git a/pandas/msgpack/exceptions.py b/pandas/msgpack/exceptions.py index f7678f135bd26..40f5a8af8f583 100644 --- a/pandas/msgpack/exceptions.py +++ b/pandas/msgpack/exceptions.py @@ -22,8 +22,10 @@ def __init__(self, unpacked, extra): def __str__(self): return "unpack(b) received extra data." + class PackException(Exception): pass + class PackValueError(PackException, ValueError): pass diff --git a/pandas/src/generate_code.py b/pandas/src/generate_code.py index cf42279c89508..053e69b7f5426 100644 --- a/pandas/src/generate_code.py +++ b/pandas/src/generate_code.py @@ -6,6 +6,8 @@ """ +# flake8: noqa + from __future__ import print_function import os from pandas.compat import StringIO diff --git a/pandas/util/decorators.py b/pandas/util/decorators.py index 5c3cb573766d7..c2d25b30c2b22 100644 --- a/pandas/util/decorators.py +++ b/pandas/util/decorators.py @@ -1,5 +1,5 @@ from pandas.compat import StringIO, callable -from pandas.lib import cache_readonly +from pandas.lib import cache_readonly # noqa import sys import warnings from textwrap import dedent @@ -60,6 +60,7 @@ def deprecate_kwarg(old_arg_name, new_arg_name, mapping=None, stacklevel=2): not callable(mapping): raise TypeError("mapping from old to new argument values " "must be dict or callable!") + def _deprecate_kwarg(func): @wraps(func) def wrapper(*args, **kwargs): @@ -82,8 +83,8 @@ def wrapper(*args, **kwargs): warnings.warn(msg, FutureWarning, stacklevel=stacklevel) if kwargs.get(new_arg_name, None) is not None: - msg = "Can only specify '%s' or '%s', not both" % \ - (old_arg_name, new_arg_name) + msg = ("Can only specify '%s' 
or '%s', not both" % + (old_arg_name, new_arg_name)) raise TypeError(msg) else: kwargs[new_arg_name] = new_arg_value @@ -126,7 +127,7 @@ def some_function(x): """ def __init__(self, *args, **kwargs): if (args and kwargs): - raise AssertionError( "Only positional or keyword args are allowed") + raise AssertionError("Only positional or keyword args are allowed") self.params = args or kwargs @@ -261,7 +262,8 @@ def knownfailer(*args, **kwargs): return knownfail_decorator -def make_signature(func) : + +def make_signature(func): """ Returns a string repr of the arg list of a func call, with any defaults @@ -275,15 +277,15 @@ def make_signature(func) : """ from inspect import getargspec spec = getargspec(func) - if spec.defaults is None : + if spec.defaults is None: n_wo_defaults = len(spec.args) defaults = ('',) * n_wo_defaults - else : + else: n_wo_defaults = len(spec.args) - len(spec.defaults) defaults = ('',) * n_wo_defaults + spec.defaults args = [] - for i, (var, default) in enumerate(zip(spec.args, defaults)) : - args.append(var if default=='' else var+'='+repr(default)) + for i, (var, default) in enumerate(zip(spec.args, defaults)): + args.append(var if default == '' else var + '=' + repr(default)) if spec.varargs: args.append('*' + spec.varargs) if spec.keywords: diff --git a/pandas/util/doctools.py b/pandas/util/doctools.py index 20a2a68ce6b03..62dcba1405581 100644 --- a/pandas/util/doctools.py +++ b/pandas/util/doctools.py @@ -23,11 +23,15 @@ def _get_cells(self, left, right, vertical): """Calcurate appropriate figure size based on left and right data""" if vertical: # calcurate required number of cells - vcells = max(sum([self._shape(l)[0] for l in left]), self._shape(right)[0]) - hcells = max([self._shape(l)[1] for l in left]) + self._shape(right)[1] + vcells = max(sum([self._shape(l)[0] for l in left]), + self._shape(right)[0]) + hcells = (max([self._shape(l)[1] for l in left]) + + self._shape(right)[1]) else: - vcells = max([self._shape(l)[0] for l in 
left] + [self._shape(right)[0]]) - hcells = sum([self._shape(l)[1] for l in left] + [self._shape(right)[1]]) + vcells = max([self._shape(l)[0] for l in left] + + [self._shape(right)[0]]) + hcells = sum([self._shape(l)[1] for l in left] + + [self._shape(right)[1]]) return hcells, vcells def plot(self, left, right, labels=None, vertical=True): @@ -66,10 +70,11 @@ def plot(self, left, right, labels=None, vertical=True): max_left_rows = max([self._shape(l)[0] for l in left]) for i, (l, label) in enumerate(zip(left, labels)): ax = fig.add_subplot(gs[i, 0:max_left_cols]) - self._make_table(ax, l, title=label, height=1.0/max_left_rows) + self._make_table(ax, l, title=label, + height=1.0 / max_left_rows) # right ax = plt.subplot(gs[:, max_left_cols:]) - self._make_table(ax, right, title='Result', height=1.05/vcells) + self._make_table(ax, right, title='Result', height=1.05 / vcells) fig.subplots_adjust(top=0.9, bottom=0.05, left=0.05, right=0.95) else: max_rows = max([self._shape(df)[0] for df in left + [right]]) @@ -79,7 +84,7 @@ def plot(self, left, right, labels=None, vertical=True): i = 0 for l, label in zip(left, labels): sp = self._shape(l) - ax = fig.add_subplot(gs[0, i:i+sp[1]]) + ax = fig.add_subplot(gs[0, i:i + sp[1]]) self._make_table(ax, l, title=label, height=height) i += sp[1] # right @@ -107,12 +112,14 @@ def _insert_index(self, data): data.insert(0, 'Index', data.index) else: for i in range(idx_nlevels): - data.insert(i, 'Index{0}'.format(i), data.index.get_level_values(i)) + data.insert(i, 'Index{0}'.format(i), + data.index.get_level_values(i)) col_nlevels = data.columns.nlevels if col_nlevels > 1: col = data.columns.get_level_values(0) - values = [data.columns.get_level_values(i).values for i in range(1, col_nlevels)] + values = [data.columns.get_level_values(i).values + for i in range(1, col_nlevels)] col_df = pd.DataFrame(values) data.columns = col_df.columns data = pd.concat([col_df, data]) @@ -151,7 +158,6 @@ def _make_table(self, ax, df, title, 
height=None): if __name__ == "__main__": - import pandas as pd import matplotlib.pyplot as plt p = TablePlotter() @@ -174,11 +180,11 @@ def _make_table(self, ax, df, title, height=None): plt.show() idx = pd.MultiIndex.from_tuples([(1, 'A'), (1, 'B'), (1, 'C'), - (2, 'A'), (2, 'B'), (2, 'C')]) + (2, 'A'), (2, 'B'), (2, 'C')]) col = pd.MultiIndex.from_tuples([(1, 'A'), (1, 'B')]) df3 = pd.DataFrame({'v1': [1, 2, 3, 4, 5, 6], 'v2': [5, 6, 7, 8, 9, 10]}, - index=idx) + index=idx) df3.columns = col p.plot(df3, df3, labels=['df3']) plt.show() diff --git a/pandas/util/misc.py b/pandas/util/misc.py index 15492cde5a9f7..2dd59043b5f63 100644 --- a/pandas/util/misc.py +++ b/pandas/util/misc.py @@ -1,10 +1,12 @@ """ various miscellaneous utilities """ + def is_little_endian(): """ am I little endian """ import sys return sys.byteorder == 'little' + def exclusive(*args): count = sum([arg is not None for arg in args]) return count == 1 diff --git a/pandas/util/print_versions.py b/pandas/util/print_versions.py index a4cb84d530336..5c09f877d863b 100644 --- a/pandas/util/print_versions.py +++ b/pandas/util/print_versions.py @@ -16,7 +16,8 @@ def get_sys_info(): if os.path.isdir(".git") and os.path.isdir("pandas"): try: pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "), - stdout=subprocess.PIPE, stderr=subprocess.PIPE) + stdout=subprocess.PIPE, + stderr=subprocess.PIPE) so, serr = pipe.communicate() except: pass @@ -32,8 +33,8 @@ def get_sys_info(): blob.append(('commit', commit)) try: - sysname, nodename, release, version, machine, processor = platform.uname( - ) + (sysname, nodename, release, + version, machine, processor) = platform.uname() blob.extend([ ("python", "%d.%d.%d.%s.%s" % sys.version_info[:]), ("python-bits", struct.calcsize("P") * 8), @@ -113,7 +114,7 @@ def show_versions(as_json=False): j = dict(system=dict(sys_info), dependencies=dict(deps_blob)) - if as_json == True: + if as_json is True: print(j) else: with codecs.open(as_json, "wb", 
encoding='utf8') as f: @@ -136,7 +137,8 @@ def main(): from optparse import OptionParser parser = OptionParser() parser.add_option("-j", "--json", metavar="FILE", nargs=1, - help="Save output as JSON into file, pass in '-' to output to stdout") + help="Save output as JSON into file, pass in " + "'-' to output to stdout") (options, args) = parser.parse_args() diff --git a/pandas/util/terminal.py b/pandas/util/terminal.py index fc985855d2682..6b8428ff75806 100644 --- a/pandas/util/terminal.py +++ b/pandas/util/terminal.py @@ -94,7 +94,6 @@ def ioctl_GWINSZ(fd): import fcntl import termios import struct - import os cr = struct.unpack( 'hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234')) except: diff --git a/pandas/util/testing.py b/pandas/util/testing.py index 685d89fee53b5..b78ba929463c9 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -1,6 +1,8 @@ from __future__ import division # pylint: disable-msg=W0402 +# flake8: noqa + import random import re import string
https://api.github.com/repos/pandas-dev/pandas/pulls/12115
2016-01-22T00:28:08Z
2016-01-22T15:57:24Z
null
2016-01-22T15:57:31Z
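The cleanup diffs above lean on two flavors of flake8 suppression: a file-level `# flake8: noqa` near the top of a module, which silences every check in that file (as added to `pandas/msgpack/__init__.py` and `pandas/src/generate_code.py`), and a per-line `# noqa` trailer, which mutes only one line (as on the `cache_readonly` import in `pandas/util/decorators.py`). A minimal standalone sketch of the per-line form; the imports and function here are illustrative, not taken from pandas:

```python
# A "# flake8: noqa" comment on its own line near the top of a file would
# suppress *all* flake8 checks for the whole module -- a blunt instrument,
# typically reserved for generated files or star-import re-export modules.

import sys
import json  # noqa: F401  -- kept for downstream re-export; mutes only F401 here


def python_version():
    # Per-line noqa has no runtime effect; it only silences the linter.
    return sys.version_info[:2]


print(python_version())
```

The per-line form is preferred in new code because a file-level `noqa` also hides genuine bugs (undefined names, shadowed builtins) that flake8 would otherwise catch.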
CLN: fix flake8 warnings in pandas/stats
diff --git a/pandas/stats/api.py b/pandas/stats/api.py index 3732f9ed39524..fd81b875faa91 100644 --- a/pandas/stats/api.py +++ b/pandas/stats/api.py @@ -4,6 +4,8 @@ # pylint: disable-msg=W0611,W0614,W0401 +# flake8: noqa + from pandas.stats.moments import * from pandas.stats.interface import ols from pandas.stats.fama_macbeth import fama_macbeth diff --git a/pandas/stats/common.py b/pandas/stats/common.py index c30b3e7a4bf61..be3b842e93cc8 100644 --- a/pandas/stats/common.py +++ b/pandas/stats/common.py @@ -5,9 +5,10 @@ 2: 'expanding' } # also allow 'rolling' as key -_WINDOW_TYPES.update((v, v) for k,v in list(_WINDOW_TYPES.items())) +_WINDOW_TYPES.update((v, v) for k, v in list(_WINDOW_TYPES.items())) _ADDITIONAL_CLUSTER_TYPES = set(("entity", "time")) + def _get_cluster_type(cluster_type): # this was previous behavior if cluster_type is None: @@ -20,15 +21,18 @@ def _get_cluster_type(cluster_type): return final_type raise ValueError('Unrecognized cluster type: %s' % cluster_type) + def _get_window_type(window_type): # e.g., 0, 1, 2 final_type = _WINDOW_TYPES.get(window_type) # e.g., 'full_sample' - final_type = final_type or _WINDOW_TYPES.get(str(window_type).lower().replace(" ", "_")) + final_type = final_type or _WINDOW_TYPES.get( + str(window_type).lower().replace(" ", "_")) if final_type is None: raise ValueError('Unrecognized window type: %s' % window_type) return final_type + def banner(text, width=80): """ diff --git a/pandas/stats/fama_macbeth.py b/pandas/stats/fama_macbeth.py index 01e68be273226..caad53df2c7fe 100644 --- a/pandas/stats/fama_macbeth.py +++ b/pandas/stats/fama_macbeth.py @@ -7,6 +7,7 @@ import pandas.stats.common as common from pandas.util.decorators import cache_readonly +# flake8: noqa def fama_macbeth(**kwargs): """Runs Fama-MacBeth regression. 
@@ -28,6 +29,7 @@ def fama_macbeth(**kwargs): class FamaMacBeth(StringMixin): + def __init__(self, y, x, intercept=True, nw_lags=None, nw_lags_beta=None, entity_effects=False, time_effects=False, x_effects=None, @@ -39,7 +41,7 @@ def __init__(self, y, x, intercept=True, nw_lags=None, FutureWarning, stacklevel=4) if dropped_dummies is None: - dropped_dummies = {} + dropped_dummies = {} self._nw_lags_beta = nw_lags_beta from pandas.stats.plm import MovingPanelOLS @@ -99,7 +101,7 @@ def _results(self): def _coef_table(self): buffer = StringIO() buffer.write('%13s %13s %13s %13s %13s %13s\n' % - ('Variable', 'Beta', 'Std Err', 't-stat', 'CI 2.5%', 'CI 97.5%')) + ('Variable', 'Beta', 'Std Err', 't-stat', 'CI 2.5%', 'CI 97.5%')) template = '%13s %13.4f %13.4f %13.2f %13.4f %13.4f\n' for i, name in enumerate(self._cols): @@ -148,12 +150,13 @@ def summary(self): class MovingFamaMacBeth(FamaMacBeth): + def __init__(self, y, x, window_type='rolling', window=10, intercept=True, nw_lags=None, nw_lags_beta=None, entity_effects=False, time_effects=False, x_effects=None, cluster=None, dropped_dummies=None, verbose=False): if dropped_dummies is None: - dropped_dummies = {} + dropped_dummies = {} self._window_type = common._get_window_type(window_type) self._window = window diff --git a/pandas/stats/interface.py b/pandas/stats/interface.py index 96b2b3e32be0d..caf468b4f85fe 100644 --- a/pandas/stats/interface.py +++ b/pandas/stats/interface.py @@ -76,7 +76,8 @@ def ols(**kwargs): result = ols(y=y, x=x) # Run expanding panel OLS with window 10 and entity clustering. 
- result = ols(y=y, x=x, cluster='entity', window_type='expanding', window=10) + result = ols(y=y, x=x, cluster='entity', window_type='expanding', + window=10) Returns ------- @@ -85,12 +86,11 @@ def ols(**kwargs): """ if (kwargs.get('cluster') is not None and - kwargs.get('nw_lags') is not None): + kwargs.get('nw_lags') is not None): raise ValueError( 'Pandas OLS does not work with Newey-West correction ' 'and clustering.') - pool = kwargs.get('pool') if 'pool' in kwargs: del kwargs['pool'] diff --git a/pandas/stats/misc.py b/pandas/stats/misc.py index ef663b25e9ca0..1a077dcb6f9a1 100644 --- a/pandas/stats/misc.py +++ b/pandas/stats/misc.py @@ -2,9 +2,10 @@ from pandas import compat import numpy as np -from pandas.core.api import Series, DataFrame, isnull, notnull +from pandas.core.api import Series, DataFrame from pandas.core.series import remove_na -from pandas.compat import zip +from pandas.compat import zip, lrange +import pandas.core.common as com def zscore(series): @@ -42,6 +43,7 @@ def correl_ts(frame1, frame2): def correl_xs(frame1, frame2): return correl_ts(frame1.T, frame2.T) + def percentileofscore(a, score, kind='rank'): """The percentile rank of a score relative to a list of scores. 
@@ -131,6 +133,7 @@ def percentileofscore(a, score, kind='rank'): else: raise ValueError("kind can only be 'rank', 'strict', 'weak' or 'mean'") + def percentileRank(frame, column=None, kind='mean'): """ Return score at percentile for each point in time (cross-section) diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py index 28f35cf26e582..c875a9d49039b 100644 --- a/pandas/stats/moments.py +++ b/pandas/stats/moments.py @@ -20,9 +20,9 @@ 'expanding_sum', 'expanding_mean', 'expanding_std', 'expanding_cov', 'expanding_corr', 'expanding_var', 'expanding_skew', 'expanding_kurt', 'expanding_quantile', - 'expanding_median', 'expanding_apply' ] + 'expanding_median', 'expanding_apply'] -#------------------------------------------------------------------------------ +# ----------------------------------------------------------------------------- # Docs # The order of arguments for the _doc_template is: @@ -72,7 +72,8 @@ span : float, optional Specify decay in terms of span, :math:`\alpha = 2 / (span + 1)` halflife : float, optional - Specify decay in terms of halflife, :math:`\alpha = 1 - exp(log(0.5) / halflife)` + Specify decay in terms of halflife, + :math:`\alpha = 1 - exp(log(0.5) / halflife)` min_periods : int, default 0 Minimum number of observations in window required to have a value (otherwise result is NA). 
@@ -173,6 +174,7 @@ Use a standard estimation bias correction """ + def ensure_compat(dispatch, name, arg, func_kw=None, *args, **kwargs): """ wrapper function to dispatch to the appropriate window functions @@ -189,8 +191,10 @@ def ensure_compat(dispatch, name, arg, func_kw=None, *args, **kwargs): else: raise AssertionError("cannot support ndim > 2 for ndarray compat") - warnings.warn("pd.{dispatch}_{name} is deprecated for ndarrays and will be removed " - "in a future version".format(dispatch=dispatch,name=name), + warnings.warn("pd.{dispatch}_{name} is deprecated for ndarrays and " + "will be removed " + "in a future version" + .format(dispatch=dispatch, name=name), FutureWarning, stacklevel=3) # get the functional keywords here @@ -198,46 +202,46 @@ def ensure_compat(dispatch, name, arg, func_kw=None, *args, **kwargs): func_kw = [] kwds = {} for k in func_kw: - value = kwargs.pop(k,None) + value = kwargs.pop(k, None) if value is not None: kwds[k] = value # how is a keyword that if not-None should be in kwds - how = kwargs.pop('how',None) + how = kwargs.pop('how', None) if how is not None: kwds['how'] = how - r = getattr(arg,dispatch)(**kwargs) + r = getattr(arg, dispatch)(**kwargs) if not is_ndarray: # give a helpful deprecation message # with copy-pastable arguments - pargs = ','.join([ "{a}={b}".format(a=a,b=b) for a,b in kwargs.items() if b is not None ]) + pargs = ','.join(["{a}={b}".format(a=a, b=b) + for a, b in kwargs.items() if b is not None]) aargs = ','.join(args) if len(aargs): aargs += ',' - def f(a,b): + def f(a, b): if lib.isscalar(b): - return "{a}={b}".format(a=a,b=b) - return "{a}=<{b}>".format(a=a,b=type(b).__name__) - aargs = ','.join([ f(a,b) for a,b in kwds.items() if b is not None ]) + return "{a}={b}".format(a=a, b=b) + return "{a}=<{b}>".format(a=a, b=type(b).__name__) + aargs = ','.join([f(a, b) for a, b in kwds.items() if b is not None]) warnings.warn("pd.{dispatch}_{name} is deprecated for {klass} " "and will be removed in a future 
version, replace with " - "\n\t{klass}.{dispatch}({pargs}).{name}({aargs})".format(klass=type(arg).__name__, - pargs=pargs, - aargs=aargs, - dispatch=dispatch, - name=name), + "\n\t{klass}.{dispatch}({pargs}).{name}({aargs})" + .format(klass=type(arg).__name__, pargs=pargs, + aargs=aargs, dispatch=dispatch, name=name), FutureWarning, stacklevel=3) - result = getattr(r,name)(*args, **kwds) + result = getattr(r, name)(*args, **kwds) if is_ndarray: result = result.values return result + def rolling_count(arg, window, **kwargs): """ Rolling count of number of non-NaN observations inside provided window. @@ -249,8 +253,8 @@ def rolling_count(arg, window, **kwargs): Size of the moving window. This is the number of observations used for calculating the statistic. freq : string or DateOffset object, optional (default None) - Frequency to conform the data to before computing the statistic. Specified - as a frequency string or DateOffset object. + Frequency to conform the data to before computing the + statistic. Specified as a frequency string or DateOffset object. 
center : boolean, default False Whether the label should correspond with center of window how : string, default 'mean' @@ -268,8 +272,10 @@ def rolling_count(arg, window, **kwargs): """ return ensure_compat('rolling', 'count', arg, window=window, **kwargs) + @Substitution("Unbiased moving covariance.", _binary_arg_flex, - _roll_kw%'None'+_pairwise_kw+_ddof_kw, _flex_retval, _roll_notes) + _roll_kw % 'None' + _pairwise_kw + _ddof_kw, _flex_retval, + _roll_notes) @Appender(_doc_template) def rolling_cov(arg1, arg2=None, window=None, pairwise=None, **kwargs): if window is None and isinstance(arg2, (int, float)): @@ -285,11 +291,12 @@ def rolling_cov(arg1, arg2=None, window=None, pairwise=None, **kwargs): other=arg2, window=window, pairwise=pairwise, - func_kw=['other','pairwise','ddof'], + func_kw=['other', 'pairwise', 'ddof'], **kwargs) + @Substitution("Moving sample correlation.", _binary_arg_flex, - _roll_kw%'None'+_pairwise_kw, _flex_retval, _roll_notes) + _roll_kw % 'None' + _pairwise_kw, _flex_retval, _roll_notes) @Appender(_doc_template) def rolling_corr(arg1, arg2=None, window=None, pairwise=None, **kwargs): if window is None and isinstance(arg2, (int, float)): @@ -305,11 +312,11 @@ def rolling_corr(arg1, arg2=None, window=None, pairwise=None, **kwargs): other=arg2, window=window, pairwise=pairwise, - func_kw=['other','pairwise'], + func_kw=['other', 'pairwise'], **kwargs) -#------------------------------------------------------------------------------ +# ----------------------------------------------------------------------------- # Exponential moving moments @@ -330,8 +337,9 @@ def ewma(arg, com=None, span=None, halflife=None, min_periods=0, freq=None, how=how, ignore_na=ignore_na) + @Substitution("Exponentially-weighted moving variance", _unary_arg, - _ewm_kw+_bias_kw, _type_of_input_retval, _ewm_notes) + _ewm_kw + _bias_kw, _type_of_input_retval, _ewm_notes) @Appender(_doc_template) def ewmvar(arg, com=None, span=None, halflife=None, min_periods=0, 
bias=False, freq=None, how=None, ignore_na=False, adjust=True): @@ -349,8 +357,9 @@ def ewmvar(arg, com=None, span=None, halflife=None, min_periods=0, bias=False, bias=bias, func_kw=['bias']) + @Substitution("Exponentially-weighted moving std", _unary_arg, - _ewm_kw+_bias_kw, _type_of_input_retval, _ewm_notes) + _ewm_kw + _bias_kw, _type_of_input_retval, _ewm_notes) @Appender(_doc_template) def ewmstd(arg, com=None, span=None, halflife=None, min_periods=0, bias=False, freq=None, how=None, ignore_na=False, adjust=True): @@ -372,10 +381,11 @@ def ewmstd(arg, com=None, span=None, halflife=None, min_periods=0, bias=False, @Substitution("Exponentially-weighted moving covariance", _binary_arg_flex, - _ewm_kw+_pairwise_kw, _type_of_input_retval, _ewm_notes) + _ewm_kw + _pairwise_kw, _type_of_input_retval, _ewm_notes) @Appender(_doc_template) def ewmcov(arg1, arg2=None, com=None, span=None, halflife=None, min_periods=0, - bias=False, freq=None, pairwise=None, how=None, ignore_na=False, adjust=True): + bias=False, freq=None, pairwise=None, how=None, ignore_na=False, + adjust=True): if arg2 is None: arg2 = arg1 pairwise = True if pairwise is None else pairwise @@ -398,10 +408,11 @@ def ewmcov(arg1, arg2=None, com=None, span=None, halflife=None, min_periods=0, ignore_na=ignore_na, adjust=adjust, pairwise=pairwise, - func_kw=['other','pairwise','bias']) + func_kw=['other', 'pairwise', 'bias']) + @Substitution("Exponentially-weighted moving correlation", _binary_arg_flex, - _ewm_kw+_pairwise_kw, _type_of_input_retval, _ewm_notes) + _ewm_kw + _pairwise_kw, _type_of_input_retval, _ewm_notes) @Appender(_doc_template) def ewmcorr(arg1, arg2=None, com=None, span=None, halflife=None, min_periods=0, freq=None, pairwise=None, how=None, ignore_na=False, adjust=True): @@ -425,9 +436,9 @@ def ewmcorr(arg1, arg2=None, com=None, span=None, halflife=None, min_periods=0, ignore_na=ignore_na, adjust=adjust, pairwise=pairwise, - func_kw=['other','pairwise']) + func_kw=['other', 'pairwise']) 
-#---------------------------------------------------------------------- +# --------------------------------------------------------------------- # Python interface to Cython functions @@ -435,9 +446,9 @@ def _rolling_func(name, desc, how=None, func_kw=None, additional_kw=''): if how is None: how_arg_str = 'None' else: - how_arg_str = "'%s"%how + how_arg_str = "'%s" % how - @Substitution(desc, _unary_arg, _roll_kw%how_arg_str + additional_kw, + @Substitution(desc, _unary_arg, _roll_kw % how_arg_str + additional_kw, _type_of_input_retval, _roll_notes) @Appender(_doc_template) def f(arg, window, min_periods=None, freq=None, center=False, @@ -468,6 +479,7 @@ def f(arg, window, min_periods=None, freq=None, center=False, rolling_skew = _rolling_func('skew', 'Unbiased moving skewness.') rolling_kurt = _rolling_func('kurt', 'Unbiased moving kurtosis.') + def rolling_quantile(arg, window, quantile, min_periods=None, freq=None, center=False): """Moving quantile. @@ -484,8 +496,8 @@ def rolling_quantile(arg, window, quantile, min_periods=None, freq=None, Minimum number of observations in window required to have a value (otherwise result is NA). freq : string or DateOffset object, optional (default None) - Frequency to conform the data to before computing the statistic. Specified - as a frequency string or DateOffset object. + Frequency to conform the data to before computing the + statistic. Specified as a frequency string or DateOffset object. center : boolean, default False Whether the label should correspond with center of window @@ -529,8 +541,8 @@ def rolling_apply(arg, window, func, min_periods=None, freq=None, Minimum number of observations in window required to have a value (otherwise result is NA). freq : string or DateOffset object, optional (default None) - Frequency to conform the data to before computing the statistic. Specified - as a frequency string or DateOffset object. + Frequency to conform the data to before computing the + statistic. 
Specified as a frequency string or DateOffset object. center : boolean, default False Whether the label should correspond with center of window args : tuple @@ -558,7 +570,7 @@ def rolling_apply(arg, window, func, min_periods=None, freq=None, freq=freq, center=center, min_periods=min_periods, - func_kw=['func','args','kwargs'], + func_kw=['func', 'args', 'kwargs'], func=func, args=args, kwargs=kwargs) @@ -583,8 +595,8 @@ def rolling_window(arg, window=None, win_type=None, min_periods=None, Minimum number of observations in window required to have a value (otherwise result is NA). freq : string or DateOffset object, optional (default None) - Frequency to conform the data to before computing the statistic. Specified - as a frequency string or DateOffset object. + Frequency to conform the data to before computing the + statistic. Specified as a frequency string or DateOffset object. center : boolean, default False Whether the label should correspond with center of window mean : boolean, default True @@ -636,6 +648,7 @@ def rolling_window(arg, window=None, win_type=None, min_periods=None, func_kw=kwargs.keys(), **kwargs) + def _expanding_func(name, desc, func_kw=None, additional_kw=''): @Substitution(desc, _unary_arg, _expanding_kw + additional_kw, _type_of_input_retval, "") @@ -674,8 +687,8 @@ def expanding_count(arg, freq=None): ---------- arg : DataFrame or numpy ndarray-like freq : string or DateOffset object, optional (default None) - Frequency to conform the data to before computing the statistic. Specified - as a frequency string or DateOffset object. + Frequency to conform the data to before computing the + statistic. Specified as a frequency string or DateOffset object. Returns ------- @@ -702,8 +715,8 @@ def expanding_quantile(arg, quantile, min_periods=1, freq=None): Minimum number of observations in window required to have a value (otherwise result is NA). 
freq : string or DateOffset object, optional (default None) - Frequency to conform the data to before computing the statistic. Specified - as a frequency string or DateOffset object. + Frequency to conform the data to before computing the + statistic. Specified as a frequency string or DateOffset object. Returns ------- @@ -723,10 +736,12 @@ def expanding_quantile(arg, quantile, min_periods=1, freq=None): func_kw=['quantile'], quantile=quantile) + @Substitution("Unbiased expanding covariance.", _binary_arg_flex, - _expanding_kw+_pairwise_kw+_ddof_kw, _flex_retval, "") + _expanding_kw + _pairwise_kw + _ddof_kw, _flex_retval, "") @Appender(_doc_template) -def expanding_cov(arg1, arg2=None, min_periods=1, freq=None, pairwise=None, ddof=1): +def expanding_cov(arg1, arg2=None, min_periods=1, freq=None, + pairwise=None, ddof=1): if arg2 is None: arg2 = arg1 pairwise = True if pairwise is None else pairwise @@ -742,11 +757,11 @@ def expanding_cov(arg1, arg2=None, min_periods=1, freq=None, pairwise=None, ddof pairwise=pairwise, freq=freq, ddof=ddof, - func_kw=['other','pairwise','ddof']) + func_kw=['other', 'pairwise', 'ddof']) @Substitution("Expanding sample correlation.", _binary_arg_flex, - _expanding_kw+_pairwise_kw, _flex_retval, "") + _expanding_kw + _pairwise_kw, _flex_retval, "") @Appender(_doc_template) def expanding_corr(arg1, arg2=None, min_periods=1, freq=None, pairwise=None): if arg2 is None: @@ -763,7 +778,8 @@ def expanding_corr(arg1, arg2=None, min_periods=1, freq=None, pairwise=None): min_periods=min_periods, pairwise=pairwise, freq=freq, - func_kw=['other','pairwise','ddof']) + func_kw=['other', 'pairwise', 'ddof']) + def expanding_apply(arg, func, min_periods=1, freq=None, args=(), kwargs={}): @@ -778,8 +794,8 @@ def expanding_apply(arg, func, min_periods=1, freq=None, Minimum number of observations in window required to have a value (otherwise result is NA). 
freq : string or DateOffset object, optional (default None) - Frequency to conform the data to before computing the statistic. Specified - as a frequency string or DateOffset object. + Frequency to conform the data to before computing the + statistic. Specified as a frequency string or DateOffset object. args : tuple Passed on to func kwargs : dict @@ -800,7 +816,7 @@ def expanding_apply(arg, func, min_periods=1, freq=None, arg, freq=freq, min_periods=min_periods, - func_kw=['func','args','kwargs'], + func_kw=['func', 'args', 'kwargs'], func=func, args=args, kwargs=kwargs) diff --git a/pandas/stats/ols.py b/pandas/stats/ols.py index 7031d55c0f682..e2375ea180ed2 100644 --- a/pandas/stats/ols.py +++ b/pandas/stats/ols.py @@ -4,6 +4,8 @@ # pylint: disable-msg=W0201 +# flake8: noqa + from pandas.compat import zip, range, StringIO from itertools import starmap from pandas import compat @@ -22,7 +24,6 @@ _FP_ERR = 1e-8 - class OLS(StringMixin): """ Runs a full sample ordinary least squares regression. @@ -103,7 +104,7 @@ def _prepare_data(self): filt_rhs['intercept'] = 1. pre_filt_rhs['intercept'] = 1. 
- if hasattr(filt_weights,'to_dense'): + if hasattr(filt_weights, 'to_dense'): filt_weights = filt_weights.to_dense() return (filt_lhs, filt_rhs, filt_weights, @@ -630,6 +631,7 @@ class MovingOLS(OLS): Assume data is overlapping when computing Newey-West estimator """ + def __init__(self, y, x, weights=None, window_type='expanding', window=None, min_periods=None, intercept=True, nw_lags=None, nw_overlap=False): @@ -989,7 +991,7 @@ def _p_value_raw(self): result = [2 * t.sf(a, b) for a, b in zip(np.fabs(self._t_stat_raw), - self._df_resid_raw)] + self._df_resid_raw)] return np.array(result) @@ -1220,7 +1222,8 @@ def _nobs_raw(self): # expanding case window = len(self._index) - result = Series(self._time_obs_count).rolling(window, min_periods=1).sum().values + result = Series(self._time_obs_count).rolling( + window, min_periods=1).sum().values return result.astype(int) @@ -1314,7 +1317,7 @@ def _filter_data(lhs, rhs, weights=None): filt_lhs = combined.pop('__y__') filt_rhs = combined - if hasattr(filt_weights,'to_dense'): + if hasattr(filt_weights, 'to_dense'): filt_weights = filt_weights.to_dense() return (filt_lhs.to_dense(), filt_rhs.to_dense(), filt_weights, diff --git a/pandas/stats/plm.py b/pandas/stats/plm.py index 177452476b875..dca1977fb19bd 100644 --- a/pandas/stats/plm.py +++ b/pandas/stats/plm.py @@ -5,6 +5,8 @@ # pylint: disable-msg=W0231 # pylint: disable-msg=E1101,E1103 +# flake8: noqa + from __future__ import division from pandas.compat import range from pandas import compat @@ -291,7 +293,8 @@ def _add_categorical_dummies(self, panel, cat_mappings): self.log( '-- Excluding dummy for %s: %s' % (effect, to_exclude)) - dummies = dummies.filter(dummies.columns.difference([mapped_name])) + dummies = dummies.filter( + dummies.columns.difference([mapped_name])) dropped_dummy = True dummies = _convertDummies(dummies, cat_mappings.get(effect)) @@ -793,7 +796,7 @@ def _var_beta_panel(y, x, beta, xx, rmse, cluster_axis, resid = resid.swaplevel(0, 
1).sortlevel(0) m = _group_agg(x.values * resid.values, x.index._bounds, - lambda x: np.sum(x, axis=0)) + lambda x: np.sum(x, axis=0)) if nw_lags is None: nw_lags = 0 @@ -805,6 +808,7 @@ def _var_beta_panel(y, x, beta, xx, rmse, cluster_axis, return np.dot(xx_inv, np.dot(xox, xx_inv)) + def _group_agg(values, bounds, f): """ R-style aggregator @@ -840,6 +844,7 @@ def _group_agg(values, bounds, f): return result + def _xx_time_effects(x, y): """ Returns X'X - (X'T) (T'T)^-1 (T'X) diff --git a/pandas/stats/tests/common.py b/pandas/stats/tests/common.py index 717eb51292796..0ce4b20a4b719 100644 --- a/pandas/stats/tests/common.py +++ b/pandas/stats/tests/common.py @@ -1,4 +1,5 @@ # pylint: disable-msg=W0611,W0402 +# flake8: noqa from datetime import datetime import string @@ -54,6 +55,7 @@ def check_for_statsmodels(): class BaseTest(tm.TestCase): + def setUp(self): check_for_scipy() check_for_statsmodels() diff --git a/pandas/stats/tests/test_fama_macbeth.py b/pandas/stats/tests/test_fama_macbeth.py index 05849bd80c7a8..deff392d6a16c 100644 --- a/pandas/stats/tests/test_fama_macbeth.py +++ b/pandas/stats/tests/test_fama_macbeth.py @@ -1,3 +1,5 @@ +# flake8: noqa + from pandas import DataFrame, Panel from pandas.stats.api import fama_macbeth from .common import assert_almost_equal, BaseTest @@ -9,6 +11,7 @@ class TestFamaMacBeth(BaseTest): + def testFamaMacBethRolling(self): # self.checkFamaMacBethExtended('rolling', self.panel_x, self.panel_y, # nw_lags_beta=2) diff --git a/pandas/stats/tests/test_math.py b/pandas/stats/tests/test_math.py index 628a37006cfeb..bc09f33d2f467 100644 --- a/pandas/stats/tests/test_math.py +++ b/pandas/stats/tests/test_math.py @@ -5,12 +5,8 @@ import numpy as np from pandas.core.api import Series, DataFrame, date_range -from pandas.util.testing import assert_almost_equal -import pandas.core.datetools as datetools -import pandas.stats.moments as mom import pandas.util.testing as tm import pandas.stats.math as pmath -import 
pandas.tests.test_series as ts from pandas import ols N, K = 100, 10 @@ -20,7 +16,7 @@ import statsmodels.api as sm except ImportError: try: - import scikits.statsmodels.api as sm + import scikits.statsmodels.api as sm # noqa except ImportError: _have_statsmodels = False @@ -63,6 +59,5 @@ def test_inv_illformed(self): self.assertTrue(np.allclose(rs, expected)) if __name__ == '__main__': - import nose nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/stats/tests/test_ols.py b/pandas/stats/tests/test_ols.py index 01095ab2336ce..175ad9dc33dc2 100644 --- a/pandas/stats/tests/test_ols.py +++ b/pandas/stats/tests/test_ols.py @@ -4,6 +4,8 @@ # pylint: disable-msg=W0212 +# flake8: noqa + from __future__ import division from datetime import datetime @@ -425,6 +427,7 @@ def test_catch_regressor_overlap(self): y = tm.makeTimeSeries() data = {'foo': df1, 'bar': df2} + def f(): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): ols(y=y, x=data) @@ -655,7 +658,8 @@ def testWithXEffectsAndDroppedDummies(self): def testWithXEffectsAndConversion(self): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - result = ols(y=self.panel_y3, x=self.panel_x3, x_effects=['x1', 'x2']) + result = ols(y=self.panel_y3, x=self.panel_x3, + x_effects=['x1', 'x2']) assert_almost_equal(result._y.values.flat, [1, 2, 3, 4]) exp_x = [[0, 0, 0, 1, 1], [1, 0, 0, 0, 1], [0, 1, 1, 0, 1], @@ -713,10 +717,12 @@ def testRollingWithNeweyWest(self): def testRollingWithEntityCluster(self): self.checkMovingOLS(self.panel_x, self.panel_y, cluster='entity') + def testUnknownClusterRaisesValueError(self): assertRaisesRegexp(ValueError, "Unrecognized cluster.*ridiculous", self.checkMovingOLS, self.panel_x, self.panel_y, - cluster='ridiculous') + cluster='ridiculous') + def testRollingWithTimeEffectsAndEntityCluster(self): self.checkMovingOLS(self.panel_x, self.panel_y, time_effects=True, cluster='entity') @@ -744,6 +750,7 
@@ def testNonPooled(self): self.checkNonPooled(y=self.panel_y, x=self.panel_x) self.checkNonPooled(y=self.panel_y, x=self.panel_x, window_type='rolling', window=25, min_periods=10) + def testUnknownWindowType(self): assertRaisesRegexp(ValueError, "window.*ridiculous", self.checkNonPooled, y=self.panel_y, x=self.panel_x, @@ -856,6 +863,7 @@ def test_group_agg(self): f2 = lambda x: np.zeros((2, 2)) self.assertRaises(Exception, _group_agg, values, bounds, f2) + def _check_non_raw_results(model): _check_repr(model) _check_repr(model.resid) diff --git a/pandas/stats/tests/test_var.py b/pandas/stats/tests/test_var.py index c6eca4041a61b..9bcd070dc1d33 100644 --- a/pandas/stats/tests/test_var.py +++ b/pandas/stats/tests/test_var.py @@ -1,3 +1,5 @@ +# flake8: noqa + from __future__ import print_function from numpy.testing import run_module_suite, assert_equal, TestCase @@ -29,6 +31,7 @@ class CheckVAR(object): + def test_params(self): assert_almost_equal(self.res1.params, self.res2.params, DECIMAL_3) @@ -80,6 +83,7 @@ def test_bse(self): class Foo(object): + def __init__(self): data = sm.datasets.macrodata.load() data = data.data[['realinv', 'realgdp', 'realcons']].view((float, 3)) diff --git a/pandas/stats/var.py b/pandas/stats/var.py index b06e2f3181496..cc78ca2886fb3 100644 --- a/pandas/stats/var.py +++ b/pandas/stats/var.py @@ -1,3 +1,5 @@ +# flake8: noqa + from __future__ import division from pandas.compat import range, lrange, zip, reduce @@ -517,6 +519,7 @@ class PanelVAR(VAR): data: Panel or dict of DataFrame lags: int """ + def __init__(self, data, lags, intercept=True): self._data = _prep_panel_data(data) self._p = lags
https://api.github.com/repos/pandas-dev/pandas/pulls/12114
2016-01-22T00:14:07Z
2016-01-22T15:56:18Z
null
2016-01-22T15:56:25Z
CLN: cleaned RangeIndex._min_fitting_element
diff --git a/pandas/core/index.py b/pandas/core/index.py index 558da897b241e..ad5ed86236e50 100644 --- a/pandas/core/index.py +++ b/pandas/core/index.py @@ -3,8 +3,6 @@ import warnings import operator from functools import partial -from math import ceil, floor - from sys import getsizeof import numpy as np @@ -4267,16 +4265,14 @@ def intersection(self, other): return new_index def _min_fitting_element(self, lower_limit): - """Returns the value of the smallest element greater than the limit""" - round = ceil if self._step > 0 else floor - no_steps = round((float(lower_limit) - self._start) / self._step) - return self._start + self._step * no_steps + """Returns the smallest element greater than or equal to the limit""" + no_steps = -(-(lower_limit - self._start) // abs(self._step)) + return self._start + abs(self._step) * no_steps def _max_fitting_element(self, upper_limit): - """Returns the value of the largest element smaller than the limit""" - round = floor if self._step > 0 else ceil - no_steps = round((float(upper_limit) - self._start) / self._step) - return self._start + self._step * no_steps + """Returns the largest element smaller than or equal to the limit""" + no_steps = (upper_limit - self._start) // abs(self._step) + return self._start + abs(self._step) * no_steps def _extended_gcd(self, a, b): """ diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py index 68150bfbca3f9..af42c2751bf46 100644 --- a/pandas/tests/test_index.py +++ b/pandas/tests/test_index.py @@ -4266,6 +4266,11 @@ def test_min_fitting_element(self): result = RangeIndex(5, 0, -1)._min_fitting_element(1) self.assertEqual(1, result) + big_num = 500000000000000000000000 + + result = RangeIndex(5, big_num * 2, 1)._min_fitting_element(big_num) + self.assertEqual(big_num, result) + def test_max_fitting_element(self): result = RangeIndex(0, 20, 2)._max_fitting_element(17) self.assertEqual(16, result) @@ -4279,6 +4284,11 @@ def test_max_fitting_element(self): result = RangeIndex(5, 
0, -1)._max_fitting_element(4) self.assertEqual(4, result) + big_num = 500000000000000000000000 + + result = RangeIndex(5, big_num * 2, 1)._max_fitting_element(big_num) + self.assertEqual(big_num, result) + def test_pickle_compat_construction(self): # RangeIndex() is a valid constructor pass
Added test cases for `_min_fitting_element` and `_max_fitting_element` (currently unused) that would fail in master because of floating-point precision loss
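The cleanup in this PR swaps the float-based `ceil`/`floor` rounding for pure integer ceiling division. A minimal sketch of why, using the large integer from the PR's own test case (step of 1 for simplicity; variable names are illustrative):

```python
from math import ceil

start, step = 5, 1
big_num = 500000000000000000000000  # value from the PR's test case

# Old approach: round-trip through float. A double only carries ~53 bits
# of mantissa, so big_num cannot be represented exactly and the result
# silently drifts by millions.
float_steps = ceil((float(big_num) - start) / step)
old_result = start + step * float_steps

# New approach: ceiling division in pure integer arithmetic.
# For positive b, -(-a // b) == ceil(a / b) with no floats involved.
int_steps = -(-(big_num - start) // step)
new_result = start + step * int_steps

print(new_result == big_num)  # True: integer math hits the target exactly
print(old_result == big_num)  # False: the float round-trip misses
```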
https://api.github.com/repos/pandas-dev/pandas/pulls/12113
2016-01-22T00:02:47Z
2016-01-22T15:53:33Z
null
2016-01-22T15:53:38Z
ENH: add error message for merge with Series GH12081
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 193d8f83ded79..292444a2bbacc 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -203,6 +203,8 @@ In addition, ``.round()``, ``.floor()`` and ``.ceil()`` will be available thru t .. _whatsnew_0180.api: +- ``pandas.merge()`` and ``DataFrame.merge()`` will show a specific error message when trying to merge with an object that is not of type ``DataFrame`` or a subclass (:issue:`12081`) + .. _whatsnew_0180.api_breaking: Backwards incompatible API changes diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py index 6ea217c4a72a7..60dda9183e7d5 100644 --- a/pandas/tools/merge.py +++ b/pandas/tools/merge.py @@ -184,6 +184,13 @@ def __init__(self, left, right, how='inner', on=None, raise ValueError( 'indicator option can only accept boolean or string arguments') + if not isinstance(left, DataFrame): + raise ValueError( + 'can not merge DataFrame with instance of type {0}'.format(type(left))) + if not isinstance(right, DataFrame): + raise ValueError( + 'can not merge DataFrame with instance of type {0}'.format(type(right))) + # note this function has side effects (self.left_join_keys, self.right_join_keys, diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py index 9e64e0eeb2792..8be15b212085a 100644 --- a/pandas/tools/tests/test_merge.py +++ b/pandas/tools/tests/test_merge.py @@ -261,6 +261,18 @@ def test_join_on_fails_with_different_column_counts(self): index=tm.makeCustomIndex(10, 2)) merge(df, df2, right_on='a', left_on=['a', 'b']) + + def test_join_on_fails_with_wrong_object_type(self): + # GH12081 + wrongly_typed = [Series([0, 1]), 2, 'str', None, np.ndarray([0, 1])] + df = DataFrame({'a': [1, 1]}) + + for obj in wrongly_typed: + with tm.assertRaisesRegexp(ValueError, str(type(obj))): + merge(obj, df, left_on='a', right_on='a') + with tm.assertRaisesRegexp(ValueError, str(type(obj))): + merge(df, obj, left_on='a', 
right_on='a') + def test_join_on_pass_vector(self): expected = self.target.join(self.source, on='C') del expected['C']
closes #12081
https://api.github.com/repos/pandas-dev/pandas/pulls/12112
2016-01-21T22:26:47Z
2016-01-26T15:55:45Z
null
2016-01-26T15:55:59Z
CLN: remove core/matrix.py, not imported into the pandas namespace
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 2be438dd7890e..193d8f83ded79 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -445,7 +445,7 @@ Removal of prior version deprecations/changes - Removal of ``rolling_corr_pairwise`` in favor of ``.rolling().corr(pairwise=True)`` (:issue:`4950`) - Removal of ``expanding_corr_pairwise`` in favor of ``.expanding().corr(pairwise=True)`` (:issue:`4950`) - +- Removal of ``DataMatrix`` module. This was not imported into the pandas namespace in any event (:issue:`12111`) diff --git a/pandas/core/matrix.py b/pandas/core/matrix.py deleted file mode 100644 index 15842464cfda8..0000000000000 --- a/pandas/core/matrix.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa - -from pandas.core.frame import DataFrame as DataMatrix
https://api.github.com/repos/pandas-dev/pandas/pulls/12111
2016-01-21T12:38:14Z
2016-01-21T15:37:16Z
2016-01-21T15:37:15Z
2016-01-22T15:40:43Z
ERR: read_csv with dtype on empty data, #12048.
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 193d8f83ded79..f9c82dcfde544 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -537,4 +537,5 @@ of columns didn't match the number of series provided (:issue:`12039`). - Bug in ``Series`` constructor with read-only data (:issue:`11502`) - Bug in ``.loc`` setitem indexer preventing the use of a TZ-aware DatetimeIndex (:issue:`12050`) -- Big in ``.style`` indexes and multi-indexes not appearing (:issue:`11655`) +- Bug in ``.style`` indexes and multi-indexes not appearing (:issue:`11655`) +- Bug in ``.read_csv`` with dtype specified on empty data producing an error (:issue:`12048`) \ No newline at end of file diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index f06ad927bb61b..910da736b4a8e 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -4,6 +4,7 @@ from __future__ import print_function from pandas.compat import range, lrange, StringIO, lzip, zip, string_types, map from pandas import compat +from collections import defaultdict import re import csv import warnings @@ -2264,6 +2265,8 @@ def _get_empty_meta(columns, index_col, index_names, dtype=None): if dtype is None: dtype = {} else: + if not isinstance(dtype, dict): + dtype = defaultdict(lambda: dtype) # Convert column indexes to column names. 
dtype = dict((columns[k] if com.is_integer(k) else k, v) for k, v in compat.iteritems(dtype)) diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py index a5b86b35d330e..b28f23f3dff1c 100644 --- a/pandas/tests/frame/test_to_csv.py +++ b/pandas/tests/frame/test_to_csv.py @@ -1108,3 +1108,9 @@ def test_to_csv_with_dst_transitions(self): df.to_pickle(path) result = pd.read_pickle(path) assert_frame_equal(result, df) + + def test_to_csv_empty_frame(self): + # GH12048 + actual = read_csv(StringIO('A,B'), dtype=str) + expected = pd.DataFrame({'A': [], 'B': []}, index=[], dtype=str) + assert_frame_equal(actual, expected)
ERR: `read_csv` with `dtype` specified on empty data raised an error; closes #12048.
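The heart of the fix is expanding a scalar `dtype` into a per-column mapping with `defaultdict`. A standalone sketch of that idea (the helper name is illustrative, not the actual pandas internal; note the scalar is captured in a local so the lambda does not close over a later-rebound name):

```python
from collections import defaultdict

def per_column_dtypes(dtype):
    """If the user passed a per-column dict, keep it; otherwise wrap the
    scalar dtype so any column name looks up to that same dtype."""
    if isinstance(dtype, dict):
        return dtype
    scalar = dtype  # capture now, in case the caller rebinds `dtype`
    return defaultdict(lambda: scalar)

dtypes = per_column_dtypes(str)
print(dtypes['A'] is str)  # True: every column maps to the scalar dtype
```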
https://api.github.com/repos/pandas-dev/pandas/pulls/12110
2016-01-21T12:25:26Z
2016-01-24T22:40:27Z
null
2016-01-24T22:40:27Z
ENH: GH12034 RangeIndex.union with RangeIndex returns RangeIndex if possible
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 2706cb200dd54..2be438dd7890e 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -110,7 +110,7 @@ Range Index A ``RangeIndex`` has been added to the ``Int64Index`` sub-classes to support a memory saving alternative for common use cases. This has a similar implementation to the python ``range`` object (``xrange`` in python 2), in that it only stores the start, stop, and step values for the index. It will transparently interact with the user API, converting to ``Int64Index`` if needed. -This will now be the default constructed index for ``NDFrame`` objects, rather than previous an ``Int64Index``. (:issue:`939`, :issue:`12070`, :issue:`12071`) +This will now be the default constructed index for ``NDFrame`` objects, rather than previous an ``Int64Index``. (:issue:`939`, :issue:`12070`, :issue:`12071`, :issue:`12109`) Previous Behavior: diff --git a/pandas/core/index.py b/pandas/core/index.py index 1fbb717bf76d8..558da897b241e 100644 --- a/pandas/core/index.py +++ b/pandas/core/index.py @@ -4307,7 +4307,48 @@ def union(self, other): ------- union : Index """ - # note: could return a RangeIndex in some circumstances + self._assert_can_do_setop(other) + if len(other) == 0 or self.equals(other): + return self + if len(self) == 0: + return other + if isinstance(other, RangeIndex): + start_s, step_s = self._start, self._step + end_s = self._start + self._step * (len(self) - 1) + start_o, step_o = other._start, other._step + end_o = other._start + other._step * (len(other) - 1) + if self._step < 0: + start_s, step_s, end_s = end_s, -step_s, start_s + if other._step < 0: + start_o, step_o, end_o = end_o, -step_o, start_o + if len(self) == 1 and len(other) == 1: + step_s = step_o = abs(self._start - other._start) + elif len(self) == 1: + step_s = step_o + elif len(other) == 1: + step_o = step_s + start_r = min(start_s, start_o) + end_r = max(end_s, end_o) + if 
step_o == step_s: + if ((start_s - start_o) % step_s == 0 and + (start_s - end_o) <= step_s and + (start_o - end_s) <= step_s): + return RangeIndex(start_r, end_r + step_s, step_s) + if ((step_s % 2 == 0) and + (abs(start_s - start_o) <= step_s / 2) and + (abs(end_s - end_o) <= step_s / 2)): + return RangeIndex(start_r, end_r + step_s / 2, step_s / 2) + elif step_o % step_s == 0: + if ((start_o - start_s) % step_s == 0 and + (start_o + step_s >= start_s) and + (end_o - step_s <= end_s)): + return RangeIndex(start_r, end_r + step_s, step_s) + elif step_s % step_o == 0: + if ((start_s - start_o) % step_o == 0 and + (start_s + step_o >= start_o) and + (end_s - step_o <= end_o)): + return RangeIndex(start_r, end_r + step_o, step_o) + return self._int64index.union(other) def join(self, other, how='left', level=None, return_indexers=False): diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py index b0210c9fde2e9..68150bfbca3f9 100644 --- a/pandas/tests/test_index.py +++ b/pandas/tests/test_index.py @@ -4130,6 +4130,39 @@ def test_union_noncomparable(self): expected = np.concatenate((other, self.index)) self.assert_numpy_array_equal(result, expected) + def test_union(self): + RI = RangeIndex + I64 = Int64Index + cases = [(RI(0, 10, 1), RI(0, 10, 1), RI(0, 10, 1)), + (RI(0, 10, 1), RI(5, 20, 1), RI(0, 20, 1)), + (RI(0, 10, 1), RI(10, 20, 1), RI(0, 20, 1)), + (RI(0, -10, -1), RI(0, -10, -1), RI(0, -10, -1)), + (RI(0, -10, -1), RI(-10, -20, -1), RI(-19, 1, 1)), + (RI(0, 10, 2), RI(1, 10, 2), RI(0, 10, 1)), + (RI(0, 11, 2), RI(1, 12, 2), RI(0, 12, 1)), + (RI(0, 21, 4), RI(-2, 24, 4), RI(-2, 24, 2)), + (RI(0, -20, -2), RI(-1, -21, -2), RI(-19, 1, 1)), + (RI(0, 100, 5), RI(0, 100, 20), RI(0, 100, 5)), + (RI(0, -100, -5), RI(5, -100, -20), RI(-95, 10, 5)), + (RI(0, -11, -1), RI(1, -12, -4), RI(-11, 2, 1)), + (RI(), RI(), RI()), + (RI(0, -10, -2), RI(), RI(0, -10, -2)), + (RI(0, 100, 2), RI(100, 150, 200), RI(0, 102, 2)), + (RI(0, -100, -2), RI(-100, 50, 102), 
RI(-100, 4, 2)), + (RI(0, -100, -1), RI(0, -50, -3), RI(-99, 1, 1)), + (RI(0, 1, 1), RI(5, 6, 10), RI(0, 6, 5)), + (RI(0, 10, 5), RI(-5, -6, -20), RI(-5, 10, 5)), + (RI(0, 3, 1), RI(4, 5, 1), I64([0, 1, 2, 4])), + (RI(0, 10, 1), I64([]), RI(0, 10, 1)), + (RI(), I64([1, 5, 6]), I64([1, 5, 6]))] + for idx1, idx2, expected in cases: + res1 = idx1.union(idx2) + res2 = idx2.union(idx1) + res3 = idx1._int64index.union(idx2) + tm.assert_index_equal(res1, expected, exact=True) + tm.assert_index_equal(res2, expected, exact=True) + tm.assert_index_equal(res3, expected) + def test_nbytes(self): # memory savings vs int index
xref #12034
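For the equal-step case, the condition in the diff can be sketched with plain `range` objects (ascending, positive-step, non-empty ranges assumed; the helper name is illustrative): two ranges collapse into a single `RangeIndex`-like progression iff their starts are congruent modulo the step and the gap between them is at most one step.

```python
def union_same_step(a: range, b: range):
    """Return the merged range if the union of a and b is itself an
    arithmetic progression with the same step, else None."""
    step = a.step
    if b.step != step:
        return None
    if ((a.start - b.start) % step == 0      # same residue class
            and a.start - b[-1] <= step      # no gap wider than one step
            and b.start - a[-1] <= step):
        start = min(a.start, b.start)
        end = max(a[-1], b[-1])
        return range(start, end + step, step)
    return None

print(union_same_step(range(0, 10, 2), range(10, 20, 2)))  # range(0, 20, 2)
print(union_same_step(range(0, 4, 2), range(8, 12, 2)))    # None (gap too wide)
```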
https://api.github.com/repos/pandas-dev/pandas/pulls/12109
2016-01-21T12:02:01Z
2016-01-21T12:26:50Z
2016-01-21T12:26:50Z
2016-01-21T12:26:59Z
PEP: pandas/core final round (api, matrix, sparse, cleanup of common,…
diff --git a/pandas/core/api.py b/pandas/core/api.py index 0c463d1a201b9..1d9a07eca5f03 100644 --- a/pandas/core/api.py +++ b/pandas/core/api.py @@ -1,5 +1,6 @@ # pylint: disable=W0614,W0401,W0611 +# flake8: noqa import numpy as np diff --git a/pandas/core/common.py b/pandas/core/common.py index 510c4cd414532..059b0c5c65cab 100644 --- a/pandas/core/common.py +++ b/pandas/core/common.py @@ -1312,7 +1312,7 @@ def trans(x): dtype = 'int64' if issubclass(result.dtype.type, np.number): - def trans(x): + def trans(x): # noqa return x.round() else: dtype = 'object' @@ -2731,7 +2731,7 @@ def check_main(): get_option('mode.sim_interactive')) try: - return __IPYTHON__ or check_main() + return __IPYTHON__ or check_main() # noqa except: return check_main() @@ -2743,7 +2743,7 @@ def in_qtconsole(): DEPRECATED: This is no longer needed, or working, in IPython 3 and above. """ try: - ip = get_ipython() + ip = get_ipython() # noqa front_end = ( ip.config.get('KernelApp', {}).get('parent_appname', "") or ip.config.get('IPKernelApp', {}).get('parent_appname', "")) @@ -2762,7 +2762,7 @@ def in_ipnb(): and above. 
""" try: - ip = get_ipython() + ip = get_ipython() # noqa front_end = ( ip.config.get('KernelApp', {}).get('parent_appname', "") or ip.config.get('IPKernelApp', {}).get('parent_appname', "")) @@ -2778,7 +2778,7 @@ def in_ipython_frontend(): check if we're inside an an IPython zmq frontend """ try: - ip = get_ipython() + ip = get_ipython() # noqa return 'zmq' in str(type(ip)).lower() except: pass diff --git a/pandas/core/generic.py b/pandas/core/generic.py index b970923ff0fe3..2b659ee355e51 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -2731,9 +2731,9 @@ def _convert(self, datetime=False, numeric=False, timedelta=False, converted : same as input object """ return self._constructor( - self._data.convert(datetime=datetime, numeric=numeric, - timedelta=timedelta, coerce=coerce, - copy=copy)).__finalize__(self) + self._data.convert(datetime=datetime, numeric=numeric, + timedelta=timedelta, coerce=coerce, + copy=copy)).__finalize__(self) # TODO: Remove in 0.18 or 2017, which ever is sooner def convert_objects(self, convert_dates=True, convert_numeric=False, diff --git a/pandas/core/matrix.py b/pandas/core/matrix.py index 3d42fd93d969b..15842464cfda8 100644 --- a/pandas/core/matrix.py +++ b/pandas/core/matrix.py @@ -1 +1,3 @@ +# flake8: noqa + from pandas.core.frame import DataFrame as DataMatrix diff --git a/pandas/core/sparse.py b/pandas/core/sparse.py index 84149e5598f82..701e6b1102b05 100644 --- a/pandas/core/sparse.py +++ b/pandas/core/sparse.py @@ -4,6 +4,7 @@ """ # pylint: disable=W0611 +# flake8: noqa from pandas.sparse.series import SparseSeries from pandas.sparse.frame import SparseDataFrame
… generic). Besides groupby we're clean.
https://api.github.com/repos/pandas-dev/pandas/pulls/12107
2016-01-21T01:18:50Z
2016-01-21T06:47:54Z
null
2016-01-21T06:47:55Z
Add Zip file functionality. Fixes #11413
diff --git a/pandas/io/common.py b/pandas/io/common.py index 811d42b7b4b9e..11cf45aa47f8e 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -345,6 +345,16 @@ def _get_handle(path, mode, encoding=None, compression=None): elif compression == 'bz2': import bz2 f = bz2.BZ2File(path, mode) + elif compression == 'zip': + import zipfile + zip_file = zipfile.ZipFile(path) + zip_names = zip_file.namelist() + + if len(zip_names) == 1: + file_name = zip_names.pop() + f = zip_file.open(file_name) + else: + raise ValueError('ZIP file contains multiple files {}', zip_file.filename) else: raise ValueError('Unrecognized compression type: %s' % compression) diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index dc6923b752ac7..7eb544ceddad3 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -61,11 +61,11 @@ class ParserWarning(Warning): dtype : Type name or dict of column -> type, default None Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32} (Unsupported with engine='python') -compression : {'gzip', 'bz2', 'infer', None}, default 'infer' - For on-the-fly decompression of on-disk data. If 'infer', then use gzip or - bz2 if filepath_or_buffer is a string ending in '.gz' or '.bz2', - respectively, and no decompression otherwise. Set to None for no - decompression. +compression : {'gzip', 'bz2', 'zip', 'infer', None}, default 'infer' + For on-the-fly decompression of on-disk data. If 'infer', then use gzip, + bz2 or zip if filepath_or_buffer is a string ending in '.gz', '.bz2' or '.zip', + respectively, and no decompression otherwise. If using 'zip', the ZIP file must + contain only one data file to be read in. Set to None for no decompression. dialect : string or csv.Dialect instance, default None If None defaults to Excel dialect. 
Ignored if sep longer than 1 char See csv.Dialect documentation for more details @@ -258,6 +258,8 @@ def _read(filepath_or_buffer, kwds): inferred_compression = 'gzip' elif filepath_or_buffer.endswith('.bz2'): inferred_compression = 'bz2' + elif filepath_or_buffer.endswith('.zip'): + inferred_compression = 'zip' else: inferred_compression = None else: @@ -738,10 +740,10 @@ def _make_engine(self, engine='c'): if engine == 'c': self._engine = CParserWrapper(self.f, **self.options) else: - if engine == 'python': - klass = PythonParser - elif engine == 'python-fwf': + if engine == 'python-fwf': klass = FixedWidthFieldParser + else: #default to engine == 'python': + klass = PythonParser self._engine = klass(self.f, **self.options) def _failover_to_python(self): @@ -1387,6 +1389,20 @@ def _wrap_compressed(f, compression, encoding=None): data = bz2.decompress(f.read()) f = StringIO(data) return f + elif compression == 'zip': + import zipfile + zip_file = zipfile.ZipFile(f) + zip_names = zip_file.namelist() + + if len(zip_names) == 1: + file_name = zip_names.pop() + f = zip_file.open(file_name) + return f + + else: + raise ValueError('Multiple files found in compressed ' + 'zip file %s', str(zip_names)) + else: raise ValueError('do not recognize compression method %s' % compression) diff --git a/pandas/io/tests/data/salary.table.zip b/pandas/io/tests/data/salary.table.zip new file mode 100644 index 0000000000000..97a74a9983082 Binary files /dev/null and b/pandas/io/tests/data/salary.table.zip differ diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py index 7c68a44874631..a17a7f2e6df6c 100755 --- a/pandas/io/tests/test_parsers.py +++ b/pandas/io/tests/test_parsers.py @@ -6,39 +6,35 @@ from datetime import datetime import csv import os -import sys -import re -import nose import platform - +import re +import sys +from datetime import datetime from multiprocessing.pool import ThreadPool -from numpy import nan +import nose import numpy as np -from 
pandas.io.common import DtypeWarning +import pandas.lib as lib +import pandas.parser +from numpy import nan +from numpy.testing.decorators import slow +from pandas.lib import Timestamp +import pandas as pd +import pandas.io.parsers as parsers +import pandas.tseries.tools as tools +import pandas.util.testing as tm from pandas import DataFrame, Series, Index, MultiIndex, DatetimeIndex +from pandas import compat from pandas.compat import( StringIO, BytesIO, PY3, range, long, lrange, lmap, u ) +from pandas.compat import parse_date +from pandas.io.common import DtypeWarning from pandas.io.common import URLError -import pandas.io.parsers as parsers from pandas.io.parsers import (read_csv, read_table, read_fwf, TextFileReader, TextParser) - -import pandas.util.testing as tm -import pandas as pd - -from pandas.compat import parse_date -import pandas.lib as lib -from pandas import compat -from pandas.lib import Timestamp from pandas.tseries.index import date_range -import pandas.tseries.tools as tools - -from numpy.testing.decorators import slow - -import pandas.parser class ParserTests(object): @@ -3753,6 +3749,150 @@ def test_single_char_leading_whitespace(self): tm.assert_frame_equal(result, expected) +class TestCompression(ParserTests, tm.TestCase): + + def read_csv(self, *args, **kwargs): + return read_csv(*args, **kwargs) + + def read_table(self, *args, **kwargs): + return read_csv(*args, **kwargs) + + def test_zip(self): + try: + import zipfile + except ImportError: + raise nose.SkipTest('need zipfile to run') + + data = open(self.csv1, 'rb').read() + expected = self.read_csv(self.csv1) + + with tm.ensure_clean() as path: + file_name = 'test_file' + tmp = zipfile.ZipFile(path, mode='w') + tmp.writestr(file_name, data) + tmp.close() + + result = self.read_csv(path, compression='zip', engine='c') + tm.assert_frame_equal(result, expected) + + result = self.read_csv(path, compression='zip', engine='python') + tm.assert_frame_equal(result, expected) + + result = 
self.read_csv(open(path, 'rb'), compression='zip', engine='c') + tm.assert_frame_equal(result, expected) + + result = self.read_csv(open(path, 'rb'), compression='zip', engine='python') + tm.assert_frame_equal(result, expected) + + with tm.ensure_clean() as path: + file_names = ['test_file', 'second_file'] + tmp = zipfile.ZipFile(path, mode='w') + for file_name in file_names: + tmp.writestr(file_name, data) + tmp.close() + + self.assertRaises(ValueError, self.read_csv, + path, compression='zip', engine='c') + + self.assertRaises(ValueError, self.read_csv, + path, compression='zip', engine='python') + + def test_gzip(self): + try: + import gzip + except ImportError: + raise nose.SkipTest('need gzip to run') + + data = open(self.csv1, 'rb').read() + expected = self.read_csv(self.csv1) + with tm.ensure_clean() as path: + tmp = gzip.GzipFile(path, mode='wb') + tmp.write(data) + tmp.close() + + result = self.read_csv(path, compression='gzip', engine='c') + tm.assert_frame_equal(result, expected) + + result = self.read_csv(path, compression='gzip', engine='python') + tm.assert_frame_equal(result, expected) + + result = self.read_csv(open(path, 'rb'), compression='gzip', engine='c') + tm.assert_frame_equal(result, expected) + + result = self.read_csv(open(path, 'rb'), compression='gzip', engine='python') + tm.assert_frame_equal(result, expected) + + def test_bz2(self): + try: + import bz2 + except ImportError: + raise nose.SkipTest('need bz2 to run') + + data = open(self.csv1, 'rb').read() + expected = self.read_csv(self.csv1) + with tm.ensure_clean() as path: + tmp = bz2.BZ2File(path, mode='wb') + tmp.write(data) + tmp.close() + + result = self.read_csv(path, compression='bz2', engine='c') + tm.assert_frame_equal(result, expected) + + result = self.read_csv(path, compression='bz2', engine='python') + tm.assert_frame_equal(result, expected) + + self.assertRaises(ValueError, self.read_csv, + path, compression='bz3') + + with open(path, 'rb') as fin: + if compat.PY3: + 
result = self.read_csv(fin, compression='bz2', engine='c') + tm.assert_frame_equal(result, expected) + result = self.read_csv(fin, compression='bz2', engine='python') + tm.assert_frame_equal(result, expected) + else: + self.assertRaises(ValueError, self.read_csv, + fin, compression='bz2', engine='c') + + def test_decompression_regex_sep(self): + try: + import gzip + import bz2 + except ImportError: + raise nose.SkipTest('need gzip and bz2 to run') + + data = open(self.csv1, 'rb').read() + data = data.replace(b',', b'::') + expected = self.read_csv(self.csv1) + + with tm.ensure_clean() as path: + tmp = gzip.GzipFile(path, mode='wb') + tmp.write(data) + tmp.close() + + # GH 6607 + # Test currently only valid with the python engine because of + # regex sep. Temporarily copied to TestPythonParser. + # Here test for ValueError when passing regex sep: + + with tm.assertRaisesRegexp(ValueError, 'regex sep'): #XXX + result = self.read_csv(path, sep='::', compression='gzip', engine='c') + tm.assert_frame_equal(result, expected) + + with tm.ensure_clean() as path: + tmp = bz2.BZ2File(path, mode='wb') + tmp.write(data) + tmp.close() + + # GH 6607 + with tm.assertRaisesRegexp(ValueError, 'regex sep'): #XXX + result = self.read_csv(path, sep='::', compression='bz2', engine='c') + tm.assert_frame_equal(result, expected) + + self.assertRaises(ValueError, self.read_csv, + path, compression='bz3') + + class TestCParserLowMemory(CParserTests, tm.TestCase): def read_csv(self, *args, **kwds): @@ -3981,86 +4121,6 @@ def test_pure_python_failover(self): expected = DataFrame({'a': [1, 4], 'b': [2, 5], 'c': [3, 6]}) tm.assert_frame_equal(result, expected) - def test_decompression(self): - try: - import gzip - import bz2 - except ImportError: - raise nose.SkipTest('need gzip and bz2 to run') - - data = open(self.csv1, 'rb').read() - expected = self.read_csv(self.csv1) - - with tm.ensure_clean() as path: - tmp = gzip.GzipFile(path, mode='wb') - tmp.write(data) - tmp.close() - - result = 
self.read_csv(path, compression='gzip') - tm.assert_frame_equal(result, expected) - - result = self.read_csv(open(path, 'rb'), compression='gzip') - tm.assert_frame_equal(result, expected) - - with tm.ensure_clean() as path: - tmp = bz2.BZ2File(path, mode='wb') - tmp.write(data) - tmp.close() - - result = self.read_csv(path, compression='bz2') - tm.assert_frame_equal(result, expected) - - # result = self.read_csv(open(path, 'rb'), compression='bz2') - # tm.assert_frame_equal(result, expected) - - self.assertRaises(ValueError, self.read_csv, - path, compression='bz3') - - with open(path, 'rb') as fin: - if compat.PY3: - result = self.read_csv(fin, compression='bz2') - tm.assert_frame_equal(result, expected) - else: - self.assertRaises(ValueError, self.read_csv, - fin, compression='bz2') - - def test_decompression_regex_sep(self): - try: - import gzip - import bz2 - except ImportError: - raise nose.SkipTest('need gzip and bz2 to run') - - data = open(self.csv1, 'rb').read() - data = data.replace(b',', b'::') - expected = self.read_csv(self.csv1) - - with tm.ensure_clean() as path: - tmp = gzip.GzipFile(path, mode='wb') - tmp.write(data) - tmp.close() - - # GH 6607 - # Test currently only valid with the python engine because of - # regex sep. Temporarily copied to TestPythonParser. - # Here test for ValueError when passing regex sep: - - with tm.assertRaisesRegexp(ValueError, 'regex sep'): # XXX - result = self.read_csv(path, sep='::', compression='gzip') - tm.assert_frame_equal(result, expected) - - with tm.ensure_clean() as path: - tmp = bz2.BZ2File(path, mode='wb') - tmp.write(data) - tmp.close() - - # GH 6607 - with tm.assertRaisesRegexp(ValueError, 'regex sep'): # XXX - result = self.read_csv(path, sep='::', compression='bz2') - tm.assert_frame_equal(result, expected) - - self.assertRaises(ValueError, self.read_csv, - path, compression='bz3') def test_memory_map(self): # it works! 
diff --git a/pandas/parser.pyx b/pandas/parser.pyx index f9b8d921f02d1..cad389f9e2a09 100644 --- a/pandas/parser.pyx +++ b/pandas/parser.pyx @@ -563,6 +563,18 @@ cdef class TextReader: else: raise ValueError('Python 2 cannot read bz2 from open file ' 'handle') + elif self.compression == 'zip': + import zipfile + zip_file = zipfile.ZipFile(source) + zip_names = zip_file.namelist() + + if len(zip_names) == 1: + file_name = zip_names.pop() + source = zip_file.open(file_name) + + else: + raise ValueError('Multiple files found in compressed ' + 'zip file %s', str(zip_names)) else: raise ValueError('Unrecognized compression type: %s' % self.compression)
closes #11413. This PR leverages Python's `ZipFile` functionality to automatically unzip files read into DataFrames using `read_csv()`.
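The single-member rule the PR enforces can be sketched with the stdlib alone (helper name and CSV contents are illustrative): a ZIP archive is only read transparently when it contains exactly one data file.

```python
import csv
import io
import zipfile

def read_single_member(zip_bytes):
    """Open a ZIP archive and parse its sole member as CSV, raising the
    same kind of ValueError the PR adds for multi-file archives."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        names = zf.namelist()
        if len(names) != 1:
            raise ValueError('ZIP file contains multiple files: %s' % names)
        with zf.open(names[0]) as member:
            return list(csv.reader(io.TextIOWrapper(member, encoding='utf-8')))

# Build a one-member archive in memory and round-trip it.
buf = io.BytesIO()
with zipfile.ZipFile(buf, mode='w') as zf:
    zf.writestr('data.csv', 'a,b\n1,2\n')

rows = read_single_member(buf.getvalue())
print(rows)  # [['a', 'b'], ['1', '2']]
```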
https://api.github.com/repos/pandas-dev/pandas/pulls/12103
2016-01-20T21:01:30Z
2016-01-29T15:14:46Z
null
2016-05-05T22:39:21Z
Update test_panel.py
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py index 2da4427af4cb6..1ca0b50c06ab3 100644 --- a/pandas/compat/__init__.py +++ b/pandas/compat/__init__.py @@ -33,6 +33,7 @@ import types from unicodedata import east_asian_width import struct +import inspect PY2 = sys.version_info[0] == 2 PY3 = (sys.version_info[0] >= 3) @@ -66,6 +67,9 @@ def str_to_bytes(s, encoding=None): def bytes_to_str(b, encoding=None): return b.decode(encoding or 'utf-8') + + def signature(f): + return list(inspect.signature(f).parameters.keys()) # have to explicitly put builtins into the namespace range = range @@ -102,6 +106,9 @@ def str_to_bytes(s, encoding='ascii'): def bytes_to_str(b, encoding='ascii'): return b + + def signature(f): + return inspect.getargspec(f).args # import iterator versions of these functions range = xrange diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py index a1f2b3edf892f..5f58c26f6bfa2 100644 --- a/pandas/tests/test_panel.py +++ b/pandas/tests/test_panel.py @@ -2,7 +2,7 @@ # pylint: disable=W0612,E1101 from datetime import datetime -from inspect import getargspec + import operator import nose from functools import wraps @@ -17,7 +17,7 @@ from pandas.core.series import remove_na import pandas.core.common as com from pandas import compat -from pandas.compat import range, lrange, StringIO, OrderedDict +from pandas.compat import range, lrange, StringIO, OrderedDict, signature from pandas import SparsePanel from pandas.util.testing import (assert_panel_equal, assert_frame_equal, @@ -198,7 +198,7 @@ def wrapper(x): self.assertRaises(Exception, f, axis=obj.ndim) # Unimplemented numeric_only parameter. - if 'numeric_only' in getargspec(f).args: + if 'numeric_only' in signature(f): self.assertRaisesRegexp(NotImplementedError, name, f, numeric_only=True)
closes #12101. Changed deprecated `getargspec` to `signature` for Python 3 while keeping `getargspec` for Python 2.
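The Python 3 branch of the compat shim boils down to `inspect.signature`; a minimal sketch with a hypothetical reducer function standing in for the panel methods being probed:

```python
import inspect

def signature(f):
    # Python 3 version of the shim: parameter names via inspect.signature,
    # replacing the deprecated inspect.getargspec(f).args.
    return list(inspect.signature(f).parameters.keys())

def var(values, numeric_only=False, skipna=True):
    """Hypothetical reduction function used only to exercise the shim."""
    pass

print('numeric_only' in signature(var))  # True
```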
https://api.github.com/repos/pandas-dev/pandas/pulls/12102
2016-01-20T19:55:18Z
2016-01-23T16:05:21Z
null
2016-01-23T16:05:21Z
PEP: pandas/core round 7 (window, reshape, series, format)
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index abc9e58d7c435..8a6ea69058c7e 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -14,11 +14,11 @@ from pandas.util.decorators import cache_readonly, deprecate_kwarg
 from pandas.core.common import (
-    ABCSeries, ABCIndexClass, ABCPeriodIndex, ABCCategoricalIndex, isnull,
-    notnull, is_dtype_equal, is_categorical_dtype, is_integer_dtype,
-    is_object_dtype, _possibly_infer_to_datetimelike, get_dtype_kinds,
-    is_list_like, is_sequence, is_null_slice, is_bool, _ensure_platform_int,
-    _ensure_object, _ensure_int64, _coerce_indexer_dtype, take_1d)
+    ABCSeries, ABCIndexClass, ABCCategoricalIndex, isnull, notnull,
+    is_dtype_equal, is_categorical_dtype, is_integer_dtype,
+    _possibly_infer_to_datetimelike, get_dtype_kinds, is_list_like,
+    is_sequence, is_null_slice, is_bool, _ensure_object, _ensure_int64,
+    _coerce_indexer_dtype, take_1d)
 from pandas.core.dtypes import CategoricalDtype
 from pandas.util.terminal import get_terminal_size
 from pandas.core.config import get_option
diff --git a/pandas/core/format.py b/pandas/core/format.py
index a50edd9462431..10b67d6229234 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -6,11 +6,11 @@ import sys
 
 from pandas.core.base import PandasObject
-from pandas.core.common import adjoin, isnull, notnull
+from pandas.core.common import isnull, notnull
 from pandas.core.index import Index, MultiIndex, _ensure_index
 from pandas import compat
-from pandas.compat import(StringIO, lzip, range, map, zip, reduce, u,
-                          OrderedDict)
+from pandas.compat import (StringIO, lzip, range, map, zip, reduce, u,
+                           OrderedDict)
 from pandas.util.terminal import get_terminal_size
 from pandas.core.config import get_option, set_option
 from pandas.io.common import _get_handle, UnicodeWriter, _expand_user
@@ -54,7 +54,7 @@ index_names : bool, optional
     Prints the names of the indexes, default True"""
 
-justify_docstring = """
+justify_docstring = """
 justify : {'left', 'right'}, default None
     Left or right-justify the column labels. If None uses the option from
     the print configuration (controlled by set_option), 'right' out
@@ -68,10 +68,10 @@ docstring_to_string = common_docstring + justify_docstring + return_docstring
 
-class CategoricalFormatter(object):
-    def __init__(self, categorical, buf=None, length=True,
-                 na_rep='NaN', footer=True):
+class CategoricalFormatter(object):
+    def __init__(self, categorical, buf=None, length=True, na_rep='NaN',
+                 footer=True):
         self.categorical = categorical
         self.buf = buf if buf is not None else StringIO(u(""))
         self.na_rep = na_rep
@@ -97,8 +97,7 @@ def _get_footer(self):
     def _get_formatted_values(self):
         return format_array(self.categorical.get_values(), None,
-                            float_format=None,
-                            na_rep=self.na_rep)
+                            float_format=None, na_rep=self.na_rep)
 
     def to_string(self):
         categorical = self.categorical
@@ -114,7 +113,7 @@ def to_string(self):
         result = ['%s' % i for i in fmt_values]
         result = [i.strip() for i in result]
         result = u(', ').join(result)
-        result = [u('[')+result+u(']')]
+        result = [u('[') + result + u(']')]
         if self.footer:
             footer = self._get_footer()
             if footer:
@@ -124,10 +123,9 @@ class SeriesFormatter(object):
-
-    def __init__(self, series, buf=None, length=True, header=True,
-                 index=True, na_rep='NaN', name=False, float_format=None,
-                 dtype=True, max_rows=None):
+    def __init__(self, series, buf=None, length=True, header=True, index=True,
+                 na_rep='NaN', name=False, float_format=None, dtype=True,
+                 max_rows=None):
         self.series = series
         self.buf = buf if buf is not None else StringIO()
         self.name = name
@@ -156,7 +154,8 @@ def _chk_truncate(self):
                 series = series.iloc[:max_rows]
             else:
                 row_num = max_rows // 2
-                series = concat((series.iloc[:row_num], series.iloc[-row_num:]))
+                series = concat((series.iloc[:row_num],
+                                 series.iloc[-row_num:]))
             self.tr_row_num = row_num
         self.tr_series = series
         self.truncate_v = truncate_v
@@ -174,8 +173,7 @@ def _get_footer(self):
             series_name = com.pprint_thing(name,
                                            escape_chars=('\t', '\r', '\n'))
-            footer += ("Name: %s" %
-                       series_name) if name is not None else ""
+            footer += ("Name: %s" % series_name) if name is not None else ""
 
         if self.length:
             if footer:
@@ -189,7 +187,8 @@ def _get_footer(self):
                 footer += ', '
             footer += 'dtype: %s' % com.pprint_thing(name)
 
-        # level infos are added to the end and in a new line, like it is done for Categoricals
+        # level infos are added to the end and in a new line, like it is done
+        # for Categoricals
         if com.is_categorical_dtype(self.tr_series.dtype):
             level_info = self.tr_series._values._repr_categories_info()
             if footer:
@@ -212,8 +211,7 @@ def _get_formatted_index(self):
     def _get_formatted_values(self):
         return format_array(self.tr_series._values, None,
-                            float_format=self.float_format,
-                            na_rep=self.na_rep)
+                            float_format=self.float_format, na_rep=self.na_rep)
 
     def to_string(self):
         series = self.tr_series
@@ -225,13 +223,10 @@ def to_string(self):
         fmt_index, have_header = self._get_formatted_index()
         fmt_values = self._get_formatted_values()
 
-        maxlen = max(self.adj.len(x) for x in fmt_index)  # max index len
-        pad_space = min(maxlen, 60)
-
         if self.truncate_v:
             n_header_rows = 0
             row_num = self.tr_row_num
-            width = self.adj.len(fmt_values[row_num-1])
+            width = self.adj.len(fmt_values[row_num - 1])
             if width > 3:
                 dot_str = '...'
             else:
@@ -258,7 +253,6 @@ class TextAdjustment(object):
-
     def __init__(self):
         self.encoding = get_option("display.encoding")
@@ -274,7 +268,6 @@ def adjoin(self, space, *lists, **kwargs):
 class EastAsianTextAdjustment(TextAdjustment):
-
     def __init__(self):
         super(EastAsianTextAdjustment, self).__init__()
         if get_option("display.unicode.ambiguous_as_wide"):
@@ -313,8 +306,8 @@ class TableFormatter(object):
     @property
     def should_show_dimensions(self):
-        return self.show_dimensions is True or (self.show_dimensions == 'truncate' and
-                                                self.is_truncated)
+        return (self.show_dimensions is True or
+                (self.show_dimensions == 'truncate' and self.is_truncated))
 
     def _get_formatter(self, i):
         if isinstance(self.formatters, (list, tuple)):
@@ -329,7 +322,6 @@ def _get_formatter(self, i):
 class DataFrameFormatter(TableFormatter):
-
     """
     Render a DataFrame
@@ -386,10 +378,10 @@ def __init__(self, frame, buf=None, columns=None, col_space=None,
         self.adj = _get_adjustment()
 
     def _chk_truncate(self):
-        '''
+        """
        Checks whether the frame should be truncated. If so, slices
        the frame up.
-        '''
+        """
         from pandas.tools.merge import concat
 
         # Column of which first element is used to determine width of a dot col
@@ -399,7 +391,8 @@ def _chk_truncate(self):
         max_cols = self.max_cols
         max_rows = self.max_rows
 
-        if max_cols == 0 or max_rows == 0:  # assume we are in the terminal (why else = 0)
+        if max_cols == 0 or max_rows == 0:  # assume we are in the terminal
+            # (why else = 0)
             (w, h) = get_terminal_size()
             self.w = w
             self.h = h
@@ -408,11 +401,14 @@ def _chk_truncate(self):
             prompt_row = 1
             if self.show_dimensions:
                 show_dimension_rows = 3
-            n_add_rows = self.header + dot_row + show_dimension_rows + prompt_row
-            max_rows_adj = self.h - n_add_rows  # rows available to fill with actual data
+            n_add_rows = (self.header + dot_row + show_dimension_rows +
+                          prompt_row)
+            # rows available to fill with actual data
+            max_rows_adj = self.h - n_add_rows
             self.max_rows_adj = max_rows_adj
 
-            # Format only rows and columns that could potentially fit the screen
+            # Format only rows and columns that could potentially fit the
+            # screen
             if max_cols == 0 and len(self.frame.columns) > w:
                 max_cols = w
             if max_rows == 0 and len(self.frame) > h:
@@ -438,7 +434,8 @@ def _chk_truncate(self):
                 col_num = max_cols
             else:
                 col_num = (max_cols_adj // 2)
-            frame = concat((frame.iloc[:, :col_num], frame.iloc[:, -col_num:]), axis=1)
+            frame = concat((frame.iloc[:, :col_num],
+                            frame.iloc[:, -col_num:]), axis=1)
             self.tr_col_num = col_num
         if truncate_v:
             if max_rows_adj == 0:
@@ -448,7 +445,8 @@ def _chk_truncate(self):
                 frame = frame.iloc[:max_rows, :]
             else:
                 row_num = max_rows_adj // 2
-                frame = concat((frame.iloc[:row_num, :], frame.iloc[-row_num:, :]))
+                frame = concat((frame.iloc[:row_num, :],
+                                frame.iloc[-row_num:, :]))
             self.tr_row_num = row_num
 
         self.tr_frame = frame
@@ -471,8 +469,8 @@ def _to_str_columns(self):
             stringified = []
             for i, c in enumerate(frame):
                 cheader = str_columns[i]
-                max_colwidth = max(self.col_space or 0,
-                                   *(self.adj.len(x) for x in cheader))
+                max_colwidth = max(self.col_space or 0, *(self.adj.len(x)
+                                   for x in cheader))
                 fmt_values = self._format_col(i)
                 fmt_values = _make_fixed_width(fmt_values, self.justify,
                                                minimum=max_colwidth,
@@ -502,13 +500,16 @@ def _to_str_columns(self):
         if truncate_h:
             col_num = self.tr_col_num
-            col_width = self.adj.len(strcols[self.tr_size_col][0])  # infer from column header
-            strcols.insert(self.tr_col_num + 1, ['...'.center(col_width)] * (len(str_index)))
+            # infer from column header
+            col_width = self.adj.len(strcols[self.tr_size_col][0])
+            strcols.insert(self.tr_col_num + 1, ['...'.center(col_width)] *
+                           (len(str_index)))
         if truncate_v:
             n_header_rows = len(str_index) - len(frame)
             row_num = self.tr_row_num
             for ix, col in enumerate(strcols):
-                cwidth = self.adj.len(strcols[ix][row_num])  # infer from above row
+                # infer from above row
+                cwidth = self.adj.len(strcols[ix][row_num])
                 is_dot_col = False
                 if truncate_h:
                     is_dot_col = ix == col_num + 1
@@ -537,16 +538,18 @@ def to_string(self):
         frame = self.frame
 
         if len(frame.columns) == 0 or len(frame.index) == 0:
-            info_line = (u('Empty %s\nColumns: %s\nIndex: %s')
-                         % (type(self.frame).__name__,
-                            com.pprint_thing(frame.columns),
-                            com.pprint_thing(frame.index)))
+            info_line = (u('Empty %s\nColumns: %s\nIndex: %s') %
+                         (type(self.frame).__name__,
+                          com.pprint_thing(frame.columns),
+                          com.pprint_thing(frame.index)))
             text = info_line
         else:
             strcols = self._to_str_columns()
-            if self.line_width is None:  # no need to wrap around just print the whole frame
+            if self.line_width is None:  # no need to wrap around just print
+                # the whole frame
                 text = self.adj.adjoin(1, *strcols)
-            elif not isinstance(self.max_cols, int) or self.max_cols > 0:  # need to wrap around
+            elif (not isinstance(self.max_cols, int) or
+                  self.max_cols > 0):  # need to wrap around
                 text = self._join_multiline(*strcols)
             else:  # max_cols == 0. Try to fit frame to terminal
                 text = self.adj.adjoin(1, *strcols).split('\n')
@@ -554,12 +557,15 @@ def to_string(self):
                 max_len_col_ix = np.argmax(row_lens)
                 max_len = row_lens[max_len_col_ix]
                 headers = [ele[0] for ele in strcols]
-                # Size of last col determines dot col size. See `self._to_str_columns
+                # Size of last col determines dot col size. See
+                # `self._to_str_columns
                 size_tr_col = len(headers[self.tr_size_col])
-                max_len += size_tr_col  # Need to make space for largest row plus truncate dot col
+                max_len += size_tr_col  # Need to make space for largest row
+                # plus truncate dot col
                 dif = max_len - self.w
                 adj_dif = dif
-                col_lens = Series([Series(ele).apply(len).max() for ele in strcols])
+                col_lens = Series([Series(ele).apply(len).max()
+                                   for ele in strcols])
                 n_cols = len(col_lens)
                 counter = 0
                 while adj_dif > 0 and n_cols > 1:
@@ -583,8 +589,8 @@ def to_string(self):
         self.buf.writelines(text)
 
         if self.should_show_dimensions:
-            self.buf.write("\n\n[%d rows x %d columns]"
-                           % (len(frame), len(frame.columns)))
+            self.buf.write("\n\n[%d rows x %d columns]" %
+                           (len(frame), len(frame.columns)))
 
     def _join_multiline(self, *strcols):
         lwidth = self.line_width
@@ -592,11 +598,11 @@ def _join_multiline(self, *strcols):
         strcols = list(strcols)
         if self.index:
             idx = strcols.pop(0)
-            lwidth -= np.array([self.adj.len(x) for x in idx]).max() + adjoin_width
+            lwidth -= np.array([self.adj.len(x)
+                                for x in idx]).max() + adjoin_width
 
-        col_widths = [np.array([self.adj.len(x) for x in col]).max()
-                      if len(col) > 0 else 0
-                      for col in strcols]
+        col_widths = [np.array([self.adj.len(x) for x in col]).max() if
+                      len(col) > 0 else 0 for col in strcols]
         col_bins = _binify(col_widths, lwidth)
         nbins = len(col_bins)
@@ -640,11 +646,9 @@ def to_latex(self, column_format=None, longtable=False, encoding=None):
     def _format_col(self, i):
         frame = self.tr_frame
         formatter = self._get_formatter(i)
-        return format_array(
-            frame.iloc[:, i]._values,
-            formatter, float_format=self.float_format, na_rep=self.na_rep,
-            space=self.col_space
-        )
+        return format_array(frame.iloc[:, i]._values, formatter,
+                            float_format=self.float_format, na_rep=self.na_rep,
+                            space=self.col_space)
 
     def to_html(self, classes=None, notebook=False):
         """
@@ -687,11 +691,13 @@ def is_numeric_dtype(dtype):
             need_leadsp = dict(zip(fmt_columns, map(is_numeric_dtype, dtypes)))
 
             def space_format(x, y):
-                if y not in self.formatters and need_leadsp[x] and not restrict_formatting:
+                if (y not in self.formatters and
+                        need_leadsp[x] and not restrict_formatting):
                     return ' ' + y
                 return y
-            str_columns = list(zip(*[[space_format(x, y) for y in x] for x in fmt_columns]))
+            str_columns = list(zip(*[[space_format(x, y) for y in x]
+                                     for x in fmt_columns]))
             if self.sparsify:
                 str_columns = _sparsify(str_columns)
@@ -700,11 +706,10 @@ def space_format(x, y):
             fmt_columns = columns.format()
             dtypes = self.frame.dtypes
             need_leadsp = dict(zip(fmt_columns, map(is_numeric_dtype, dtypes)))
-            str_columns = [[' ' + x
-                            if not self._get_formatter(i) and need_leadsp[x]
-                            else x]
-                           for i, (col, x) in
-                           enumerate(zip(columns, fmt_columns))]
+            str_columns = [[' ' + x if not self._get_formatter(i) and
+                            need_leadsp[x] else x]
+                           for i, (col, x) in enumerate(zip(columns,
+                                                            fmt_columns))]
 
         if self.show_index_names and self.has_index_names:
             for x in str_columns:
@@ -722,7 +727,8 @@ def has_column_names(self):
         return _has_names(self.frame.columns)
 
     def _get_formatted_index(self, frame):
-        # Note: this is only used by to_string() and to_latex(), not by to_html().
+        # Note: this is only used by to_string() and to_latex(), not by
+        # to_html().
         index = frame.index
         columns = frame.columns
@@ -733,14 +739,12 @@ def _get_formatted_index(self, frame):
         if isinstance(index, MultiIndex):
             fmt_index = index.format(sparsify=self.sparsify, adjoin=False,
-                                     names=show_index_names,
-                                     formatter=fmt)
+                                     names=show_index_names, formatter=fmt)
         else:
             fmt_index = [index.format(name=show_index_names, formatter=fmt)]
         fmt_index = [tuple(_make_fixed_width(list(x), justify='left',
                                              minimum=(self.col_space or 0),
-                                             adj=self.adj))
-                     for x in fmt_index]
+                                             adj=self.adj)) for x in fmt_index]
 
         adjoined = self.adj.adjoin(1, *fmt_index).split('\n')
@@ -797,9 +801,9 @@ def write_result(self, buf):
         # string representation of the columns
         if len(self.frame.columns) == 0 or len(self.frame.index) == 0:
-            info_line = (u('Empty %s\nColumns: %s\nIndex: %s')
-                         % (type(self.frame).__name__,
-                            self.frame.columns, self.frame.index))
+            info_line = (u('Empty %s\nColumns: %s\nIndex: %s') %
+                         (type(self.frame).__name__, self.frame.columns,
+                          self.frame.index))
             strcols = [[info_line]]
         else:
             strcols = self.fmt._to_str_columns()
@@ -862,16 +866,12 @@ def get_col_type(dtype):
                 buf.write('\\endlastfoot\n')
             if self.fmt.kwds.get('escape', True):
                 # escape backslashes first
-                crow = [(x.replace('\\', '\\textbackslash')
-                         .replace('_', '\\_')
-                         .replace('%', '\\%')
-                         .replace('$', '\\$')
-                         .replace('#', '\\#')
-                         .replace('{', '\\{')
-                         .replace('}', '\\}')
-                         .replace('~', '\\textasciitilde')
-                         .replace('^', '\\textasciicircum')
-                         .replace('&', '\\&') if x else '{}') for x in row]
+                crow = [(x.replace('\\', '\\textbackslash').replace('_', '\\_')
+                         .replace('%', '\\%').replace('$', '\\$')
+                         .replace('#', '\\#').replace('{', '\\{')
+                         .replace('}', '\\}').replace('~', '\\textasciitilde')
+                         .replace('^', '\\textasciicircum').replace('&', '\\&')
+                         if x else '{}') for x in row]
             else:
                 crow = [x if x else '{}' for x in row]
             buf.write(' & '.join(crow))
@@ -911,8 +911,7 @@ def write(self, s, indent=0):
         self.elements.append(' ' * indent + rs)
 
     def write_th(self, s, indent=0, tags=None):
-        if (self.fmt.col_space is not None
-                and self.fmt.col_space > 0):
+        if self.fmt.col_space is not None and self.fmt.col_space > 0:
             tags = (tags or "")
             tags += 'style="min-width: %s;"' % self.fmt.col_space
@@ -929,14 +928,12 @@ def _write_cell(self, s, kind='td', indent=0, tags=None):
         if self.escape:
             # escape & first to prevent double escaping of &
-            esc = OrderedDict(
-                [('&', r'&amp;'), ('<', r'&lt;'), ('>', r'&gt;')]
-            )
+            esc = OrderedDict([('&', r'&amp;'), ('<', r'&lt;'),
+                               ('>', r'&gt;')])
         else:
             esc = {}
         rs = com.pprint_thing(s, escape_chars=esc).strip()
-        self.write(
-            '%s%s</%s>' % (start_tag, rs, kind), indent)
+        self.write('%s%s</%s>' % (start_tag, rs, kind), indent)
 
     def write_tr(self, line, indent=0, indent_delta=4, header=False,
                  align=None, tags=None, nindex_levels=0):
@@ -968,8 +965,8 @@ def write_result(self, buf):
         if isinstance(self.classes, str):
             self.classes = self.classes.split()
         if not isinstance(self.classes, (list, tuple)):
-            raise AssertionError(('classes must be list or tuple, '
-                                  'not %s') % type(self.classes))
+            raise AssertionError('classes must be list or tuple, '
+                                 'not %s' % type(self.classes))
         _classes.extend(self.classes)
 
         if self.notebook:
@@ -1020,8 +1017,8 @@ def _column_header():
                 else:
                     row.append('')
                 style = "text-align: %s;" % self.fmt.justify
-                row.extend([single_column_table(c, self.fmt.justify, style) for
-                            c in self.columns])
+                row.extend([single_column_table(c, self.fmt.justify, style)
+                            for c in self.columns])
             else:
                 if self.fmt.index:
                     row.append(self.columns.name or '')
@@ -1041,8 +1038,8 @@ def _column_header():
                 sentinel = com.sentinel_factory()
             else:
                 sentinel = None
-            levels = self.columns.format(sparsify=sentinel,
-                                         adjoin=False, names=False)
+            levels = self.columns.format(sparsify=sentinel, adjoin=False,
+                                         names=False)
             level_lengths = _get_level_lengths(levels, sentinel)
             inner_lvl = len(level_lengths) - 1
             for lnum, (records, values) in enumerate(zip(level_lengths,
@@ -1059,18 +1056,21 @@
                         elif tag + span > ins_col:
                             recs_new[tag] = span + 1
                             if lnum == inner_lvl:
-                                values = values[:ins_col] + (u('...'),) + \
-                                    values[ins_col:]
-                            else:  # sparse col headers do not receive a ...
-                                values = (values[:ins_col] + (values[ins_col - 1],) +
+                                values = (values[:ins_col] + (u('...'),) +
+                                          values[ins_col:])
+                            else:
+                                # sparse col headers do not receive a ...
+                                values = (values[:ins_col] +
+                                          (values[ins_col - 1], ) +
                                           values[ins_col:])
                         else:
                             recs_new[tag] = span
-                        # if ins_col lies between tags, all col headers get ...
+                        # if ins_col lies between tags, all col headers
+                        # get ...
                         if tag + span == ins_col:
                             recs_new[ins_col] = 1
-                            values = values[:ins_col] + (u('...'),) + \
-                                values[ins_col:]
+                            values = (values[:ins_col] + (u('...'),) +
+                                      values[ins_col:])
                     records = recs_new
                     inner_lvl = len(level_lengths) - 1
                     if lnum == inner_lvl:
@@ -1084,11 +1084,12 @@ def _column_header():
                         recs_new[tag] = span
                     recs_new[ins_col] = 1
                     records = recs_new
-                    values = values[:ins_col] + [u('...')] + values[ins_col:]
+                    values = (values[:ins_col] + [u('...')] +
+                              values[ins_col:])
 
                 name = self.columns.names[lnum]
-                row = [''] * (row_levels - 1) + ['' if name is None
-                                                 else com.pprint_thing(name)]
+                row = [''] * (row_levels - 1) + ['' if name is None else
+                                                 com.pprint_thing(name)]
 
                 if row == [""] and self.fmt.index is False:
                     row = []
@@ -1117,9 +1118,9 @@ def _column_header():
                           align=align)
 
         if self.fmt.has_index_names and self.fmt.index:
-            row = [
-                x if x is not None else '' for x in self.frame.index.names
-            ] + [''] * min(len(self.columns), self.max_cols)
+            row = ([x if x is not None else ''
+                    for x in self.frame.index.names] +
+                   [''] * min(len(self.columns), self.max_cols))
             if truncate_h:
                 ins_col = row_levels + self.fmt.tr_col_num
                 row.insert(ins_col, '')
@@ -1172,8 +1173,8 @@ def _write_regular_rows(self, fmt_values, indent):
             if truncate_v and i == (self.fmt.tr_row_num):
                 str_sep_row = ['...' for ele in row]
-                self.write_tr(str_sep_row, indent, self.indent_delta, tags=None,
-                              nindex_levels=1)
+                self.write_tr(str_sep_row, indent, self.indent_delta,
+                              tags=None, nindex_levels=1)
 
             row = []
             row.append(index_values[i])
@@ -1195,13 +1196,15 @@ def _write_hierarchical_rows(self, fmt_values, indent):
         nrows = len(frame)
         row_levels = self.frame.index.nlevels
 
-        idx_values = frame.index.format(sparsify=False, adjoin=False, names=False)
+        idx_values = frame.index.format(sparsify=False, adjoin=False,
+                                        names=False)
         idx_values = lzip(*idx_values)
 
         if self.fmt.sparsify:
             # GH3547
             sentinel = com.sentinel_factory()
-            levels = frame.index.format(sparsify=sentinel, adjoin=False, names=False)
+            levels = frame.index.format(sparsify=sentinel, adjoin=False,
+                                        names=False)
 
             level_lengths = _get_level_lengths(levels, sentinel)
             inner_lvl = len(level_lengths) - 1
@@ -1221,11 +1224,13 @@ def _write_hierarchical_rows(self, fmt_values, indent):
                             idx_values.insert(ins_row, tuple(dot_row))
                         else:
                             rec_new[tag] = span
-                        # If ins_row lies between tags, all cols idx cols receive ...
+                        # If ins_row lies between tags, all cols idx cols
+                        # receive ...
                         if tag + span == ins_row:
                             rec_new[ins_row] = 1
                             if lnum == 0:
-                                idx_values.insert(ins_row, tuple([u('...')]*len(level_lengths)))
+                                idx_values.insert(ins_row, tuple(
+                                    [u('...')] * len(level_lengths)))
 
                     level_lengths[lnum] = rec_new
                 level_lengths[inner_lvl][ins_row] = 1
@@ -1252,14 +1257,14 @@ def _write_hierarchical_rows(self, fmt_values, indent):
                 row.extend(fmt_values[j][i] for j in range(ncols))
 
                 if truncate_h:
-                    row.insert(row_levels - sparse_offset + self.fmt.tr_col_num, '...')
+                    row.insert(row_levels - sparse_offset +
+                               self.fmt.tr_col_num, '...')
                 self.write_tr(row, indent, self.indent_delta, tags=tags,
                               nindex_levels=len(levels) - sparse_offset)
         else:
             for i in range(len(frame)):
-                idx_values = list(zip(*frame.index.format(sparsify=False,
-                                                          adjoin=False,
-                                                          names=False)))
+                idx_values = list(zip(*frame.index.format(
+                    sparsify=False, adjoin=False, names=False)))
                 row = []
                 row.extend(idx_values[i])
                 row.extend(fmt_values[j][i] for j in range(ncols))
@@ -1279,6 +1284,7 @@ def grouper(x):
         if x != sentinel:
             record['count'] += 1
         return record['count']
+
     return grouper
 
     result = []
@@ -1297,18 +1303,18 @@ def grouper(x):
 class CSVFormatter(object):
-
-    def __init__(self, obj, path_or_buf=None, sep=",", na_rep='', float_format=None,
-                 cols=None, header=True, index=True, index_label=None,
-                 mode='w', nanRep=None, encoding=None, compression=None, quoting=None,
-                 line_terminator='\n', chunksize=None, engine=None,
-                 tupleize_cols=False, quotechar='"', date_format=None,
-                 doublequote=True, escapechar=None, decimal='.'):
+    def __init__(self, obj, path_or_buf=None, sep=",", na_rep='',
+                 float_format=None, cols=None, header=True, index=True,
+                 index_label=None, mode='w', nanRep=None, encoding=None,
+                 compression=None, quoting=None, line_terminator='\n',
+                 chunksize=None, engine=None, tupleize_cols=False,
+                 quotechar='"', date_format=None, doublequote=True,
+                 escapechar=None, decimal='.'):
 
         if engine is not None:
-            warnings.warn("'engine' keyword is deprecated and "
-                          "will be removed in a future version",
-                          FutureWarning, stacklevel=3)
+            warnings.warn("'engine' keyword is deprecated and will be "
+                          "removed in a future version", FutureWarning,
+                          stacklevel=3)
         self.engine = engine  # remove for 0.18
         self.obj = obj
@@ -1350,8 +1356,8 @@ def __init__(self, obj, path_or_buf=None, sep=",", na_rep='', float_format=None,
                                  "supported with engine='python'")
 
         self.tupleize_cols = tupleize_cols
-        self.has_mi_columns = isinstance(obj.columns, MultiIndex
-                                         ) and not self.tupleize_cols
+        self.has_mi_columns = (isinstance(obj.columns, MultiIndex) and
+                               not self.tupleize_cols)
 
         # validate mi options
         if self.has_mi_columns:
@@ -1398,9 +1404,8 @@ def __init__(self, obj, path_or_buf=None, sep=",", na_rep='', float_format=None,
         if (isinstance(self.data_index, DatetimeIndex) and
                 date_format is not None):
-            self.data_index = Index([x.strftime(date_format)
-                                     if notnull(x) else ''
-                                     for x in self.data_index])
+            self.data_index = Index([x.strftime(date_format) if notnull(x) else
+                                     '' for x in self.data_index])
 
         self.nlevels = getattr(self.data_index, 'nlevels', 1)
         if not index:
@@ -1408,9 +1413,9 @@ def __init__(self, obj, path_or_buf=None, sep=",", na_rep='', float_format=None,
 
     # original python implem. of df.to_csv
     # invoked by df.to_csv(engine=python)
-    def _helper_csv(self, writer, na_rep=None, cols=None,
-                    header=True, index=True,
-                    index_label=None, float_format=None, date_format=None):
+    def _helper_csv(self, writer, na_rep=None, cols=None, header=True,
+                    index=True, index_label=None, float_format=None,
+                    date_format=None):
         if cols is None:
             cols = self.columns
@@ -1459,6 +1464,7 @@ def _helper_csv(self, writer, na_rep=None, cols=None,
             if date_format is None:
                 date_formatter = lambda x: Timestamp(x)._repr_base
             else:
+
                 def strftime_with_nulls(x):
                     x = Timestamp(x)
                     if notnull(x):
@@ -1477,9 +1483,7 @@ def strftime_with_nulls(x):
             values = self.obj.copy()
             values.index = data_index
             values.columns = values.columns.to_native_types(
-                na_rep=na_rep,
-                float_format=float_format,
-                date_format=date_format,
+                na_rep=na_rep, float_format=float_format, date_format=date_format,
                 quoting=self.quoting)
             values = values[cols]
@@ -1516,8 +1520,8 @@ def save(self):
             close = False
         else:
             f = _get_handle(self.path_or_buf, self.mode,
-                                encoding=self.encoding,
-                                compression=self.compression)
+                            encoding=self.encoding,
+                            compression=self.compression)
             close = True
 
         try:
@@ -1586,7 +1590,8 @@ def _save_header(self):
                     index_label = ['']
                 else:
                     index_label = [index_label]
-            elif not isinstance(index_label, (list, tuple, np.ndarray, Index)):
+            elif not isinstance(index_label,
+                                (list, tuple, np.ndarray, Index)):
                 # given a string for a DF with Index
                 index_label = [index_label]
@@ -1652,8 +1657,7 @@ def _save_chunk(self, start_i, end_i):
         slicer = slice(start_i, end_i)
         for i in range(len(self.blocks)):
             b = self.blocks[i]
-            d = b.to_native_types(slicer=slicer,
-                                  na_rep=self.na_rep,
+            d = b.to_native_types(slicer=slicer, na_rep=self.na_rep,
                                   float_format=self.float_format,
                                   decimal=self.decimal,
                                   date_format=self.date_format,
@@ -1663,8 +1667,7 @@ def _save_chunk(self, start_i, end_i):
                 # self.data is a preallocated list
                 self.data[col_loc] = col
 
-        ix = data_index.to_native_types(slicer=slicer,
-                                        na_rep=self.na_rep,
+        ix = data_index.to_native_types(slicer=slicer, na_rep=self.na_rep,
                                         float_format=self.float_format,
                                         decimal=self.decimal,
                                         date_format=self.date_format,
@@ -1681,8 +1684,8 @@ class ExcelCell(object):
     __fields__ = ('row', 'col', 'val', 'style', 'mergestart', 'mergeend')
     __slots__ = __fields__
 
-    def __init__(self, row, col, val,
-                 style=None, mergestart=None, mergeend=None):
+    def __init__(self, row, col, val, style=None, mergestart=None,
+                 mergeend=None):
         self.row = row
         self.col = col
         self.val = val
@@ -1696,11 +1699,11 @@ def __init__(self, row, col, val,
                               "right": "thin",
                               "bottom": "thin",
                               "left": "thin"},
-                  "alignment": {"horizontal": "center", "vertical": "top"}}
+                  "alignment": {"horizontal": "center",
+                                "vertical": "top"}}
 
 
 class ExcelFormatter(object):
-
     """
     Class for formatting a DataFrame to a list of ExcelCells,
@@ -1761,15 +1764,17 @@ def _format_header_mi(self):
         if self.columns.nlevels > 1:
             if not self.index:
                 raise NotImplementedError("Writing to Excel with MultiIndex"
-                                          " columns and no index ('index'=False) "
-                                          "is not yet implemented.")
+                                          " columns and no index "
+                                          "('index'=False) is not yet "
+                                          "implemented.")
 
         has_aliases = isinstance(self.header, (tuple, list, np.ndarray, Index))
-        if not(has_aliases or self.header):
+        if not (has_aliases or self.header):
             return
 
         columns = self.columns
-        level_strs = columns.format(sparsify=self.merge_cells, adjoin=False, names=False)
+        level_strs = columns.format(sparsify=self.merge_cells, adjoin=False,
+                                    names=False)
         level_lengths = _get_level_lengths(level_strs)
         coloffset = 0
         lnum = 0
@@ -1783,23 +1788,16 @@ def _format_header_mi(self):
                 name = columns.names[lnum]
                 yield ExcelCell(lnum, coloffset, name, header_style)
 
-            for lnum, (spans, levels, labels) in enumerate(zip(level_lengths,
-                                                               columns.levels,
-                                                               columns.labels)
-                                                           ):
+            for lnum, (spans, levels, labels) in enumerate(zip(
+                    level_lengths, columns.levels, columns.labels)):
                 values = levels.take(labels)
                 for i in spans:
                     if spans[i] > 1:
-                        yield ExcelCell(lnum,
-                                        coloffset + i + 1,
-                                        values[i],
-                                        header_style,
-                                        lnum,
+                        yield ExcelCell(lnum, coloffset + i + 1, values[i],
+                                        header_style, lnum,
                                         coloffset + i + spans[i])
                     else:
-                        yield ExcelCell(lnum,
-                                        coloffset + i + 1,
-                                        values[i],
+                        yield ExcelCell(lnum, coloffset + i + 1, values[i],
                                         header_style)
         else:
             # Format in legacy format with dots to indicate levels.
@@ -1822,8 +1820,8 @@ def _format_header_regular(self):
         colnames = self.columns
         if has_aliases:
             if len(self.header) != len(self.columns):
-                raise ValueError(('Writing %d cols but got %d aliases'
-                                  % (len(self.columns), len(self.header))))
+                raise ValueError('Writing %d cols but got %d aliases' %
+                                 (len(self.columns), len(self.header)))
             else:
                 colnames = self.header
@@ -1864,8 +1862,9 @@ def _format_regular_rows(self):
         if self.index:
             # chek aliases
             # if list only take first as this is not a MultiIndex
-            if self.index_label and isinstance(self.index_label,
-                                               (list, tuple, np.ndarray, Index)):
+            if (self.index_label and
+                    isinstance(self.index_label, (list, tuple, np.ndarray,
+                                                  Index))):
                 index_label = self.index_label[0]
             # if string good to go
             elif self.index_label and isinstance(self.index_label, str):
@@ -1877,9 +1876,7 @@ def _format_regular_rows(self):
                 self.rowcounter += 1
 
             if index_label and self.header is not False:
-                yield ExcelCell(self.rowcounter - 1,
-                                0,
-                                index_label,
+                yield ExcelCell(self.rowcounter - 1, 0, index_label,
                                 header_style)
 
             # write index_values
@@ -1907,8 +1904,9 @@ def _format_hierarchical_rows(self):
         if self.index:
             index_labels = self.df.index.names
             # check for aliases
-            if self.index_label and isinstance(self.index_label,
-                                               (list, tuple, np.ndarray, Index)):
+            if (self.index_label and
+                    isinstance(self.index_label, (list, tuple, np.ndarray,
+                                                  Index))):
                 index_labels = self.index_label
 
             # MultiIndex columns require an extra row
@@ -1919,13 +1917,11 @@ def _format_hierarchical_rows(self):
                 self.rowcounter += 1
 
             # if index labels are not empty go ahead and dump
-            if (any(x is not None for x in index_labels)
-                    and self.header is not False):
+            if (any(x is not None for x in index_labels) and
+                    self.header is not False):
 
                 for cidx, name in enumerate(index_labels):
-                    yield ExcelCell(self.rowcounter - 1,
-                                    cidx,
-                                    name,
+                    yield ExcelCell(self.rowcounter - 1, cidx, name,
                                     header_style)
 
             if self.merge_cells:
@@ -1940,27 +1936,21 @@ def _format_hierarchical_rows(self):
                     values = levels.take(labels)
                     for i in spans:
                         if spans[i] > 1:
-                            yield ExcelCell(self.rowcounter + i,
-                                            gcolidx,
-                                            values[i],
-                                            header_style,
+                            yield ExcelCell(self.rowcounter + i, gcolidx,
+                                            values[i], header_style,
                                             self.rowcounter + i + spans[i] - 1,
                                             gcolidx)
                         else:
-                            yield ExcelCell(self.rowcounter + i,
-                                            gcolidx,
-                                            values[i],
-                                            header_style)
+                            yield ExcelCell(self.rowcounter + i, gcolidx,
+                                            values[i], header_style)
                     gcolidx += 1
             else:
                 # Format hierarchical rows with non-merged values.
                 for indexcolvals in zip(*self.df.index):
                     for idx, indexcolval in enumerate(indexcolvals):
-                        yield ExcelCell(self.rowcounter + idx,
-                                        gcolidx,
-                                        indexcolval,
-                                        header_style)
+                        yield ExcelCell(self.rowcounter + idx, gcolidx,
+                                        indexcolval, header_style)
                     gcolidx += 1
 
         # Write the body of the frame data series by series.
@@ -2009,18 +1999,16 @@ def format_array(values, formatter, float_format=None, na_rep='NaN',
         digits = get_option("display.precision")
 
     fmt_obj = fmt_klass(values, digits=digits, na_rep=na_rep,
-                        float_format=float_format,
-                        formatter=formatter, space=space,
-                        justify=justify)
+                        float_format=float_format, formatter=formatter,
+                        space=space, justify=justify)
 
     return fmt_obj.get_result()
 
 
 class GenericArrayFormatter(object):
-
     def __init__(self, values, digits=7, formatter=None, na_rep='NaN',
-                 space=12, float_format=None, justify='right',
-                 decimal='.', quoting=None):
+                 space=12, float_format=None, justify='right', decimal='.',
+                 quoting=None):
         self.values = values
         self.digits = digits
         self.na_rep = na_rep
@@ -2044,8 +2032,9 @@ def _format_strings(self):
         else:
             float_format = self.float_format
 
-        formatter = self.formatter if self.formatter is not None else \
-            (lambda x: com.pprint_thing(x, escape_chars=('\t', '\r', '\n')))
+        formatter = (
+            self.formatter if self.formatter is not None else
+            (lambda x: com.pprint_thing(x, escape_chars=('\t', '\r', '\n'))))
 
         def _format(x):
             if self.na_rep is not None and lib.checknull(x):
@@ -2080,7 +2069,6 @@ def _format(x):
 class FloatArrayFormatter(GenericArrayFormatter):
-
     """
 
     """
@@ -2128,7 +2116,7 @@ def _format_strings(self):
         # this is pretty arbitrary for now
         has_large_values = (abs_vals > 1e8).any()
-        has_small_values = ((abs_vals < 10 ** (-self.digits)) &
+        has_small_values = ((abs_vals < 10**(-self.digits)) &
                             (abs_vals > 0)).any()
 
         if too_long and has_large_values:
@@ -2152,8 +2140,9 @@ def get_formatted_data(self):
         mask = isnull(values)
 
         # the following variable is to be applied on each value to format it
-        # according to the string containing the float format, self.float_format
-        # and the character to use as decimal separator, self.decimal
+        # according to the string containing the float format,
+        # self.float_format and the character to use as decimal separator,
+        # self.decimal
        formatter = None
        if self.float_format and self.decimal != '.':
            formatter = lambda v: (
@@ -2171,14 +2160,13 @@ def get_formatted_data(self):
             values[mask] = self.na_rep
             if formatter:
                 imask = (~mask).ravel()
-                values.flat[imask] = np.array(
-                    [formatter(val) for val in values.ravel()[imask]])
+                values.flat[imask] = np.array([formatter(val)
+                                               for val in values.ravel()[imask]])
 
         return values
 
 
 class IntArrayFormatter(GenericArrayFormatter):
-
     def _format_strings(self):
         formatter = self.formatter or (lambda x: '% d' % x)
         fmt_values = [formatter(x) for x in self.values]
@@ -2198,14 +2186,15 @@ def _format_strings(self):
         if not isinstance(values, DatetimeIndex):
             values = DatetimeIndex(values)
 
-        fmt_values = format_array_from_datetime(values.asi8.ravel(),
-                                                format=_get_format_datetime64_from_values(values, self.date_format),
-                                                na_rep=self.nat_rep).reshape(values.shape)
+        fmt_values = format_array_from_datetime(
+            values.asi8.ravel(),
+            format=_get_format_datetime64_from_values(values,
+                                                      self.date_format),
+            na_rep=self.nat_rep).reshape(values.shape)
         return fmt_values.tolist()
 
 
 class PeriodArrayFormatter(IntArrayFormatter):
-
     def _format_strings(self):
         values = PeriodIndex(self.values).to_native_types()
         formatter = self.formatter or (lambda x: '%s' % x)
@@ -2214,7 +2203,6 @@ def _format_strings(self):
 class CategoricalArrayFormatter(GenericArrayFormatter):
-
     def __init__(self, values, *args, **kwargs):
         GenericArrayFormatter.__init__(self, values, *args, **kwargs)
@@ -2235,11 +2223,13 @@ def _is_dates_only(values):
     values_int = values.asi8
     consider_values = values_int != iNaT
     one_day_nanos = (86400 * 1e9)
-    even_days = np.logical_and(consider_values, values_int % one_day_nanos != 0).sum() == 0
+    even_days = np.logical_and(consider_values,
+                               values_int % one_day_nanos != 0).sum() == 0
     if even_days:
         return True
     return False
 
+
 def _format_datetime64(x, tz=None, nat_rep='NaT'):
     if x is None or lib.checknull(x):
         return nat_rep
@@ -2262,12 +2252,12 @@ def _format_datetime64_dateonly(x, nat_rep='NaT', date_format=None):
     else:
         return x._date_repr
 
+
 def _get_format_datetime64(is_dates_only, nat_rep='NaT', date_format=None):
     if is_dates_only:
-        return lambda x, tz=None: _format_datetime64_dateonly(x,
-                                                              nat_rep=nat_rep,
-                                                              date_format=date_format)
+        return lambda x, tz=None: _format_datetime64_dateonly(
+            x, nat_rep=nat_rep, date_format=date_format)
     else:
         return lambda x, tz=None: _format_datetime64(x, tz=tz, nat_rep=nat_rep)
@@ -2281,27 +2271,29 @@ def _get_format_datetime64_from_values(values, date_format):
 class Datetime64TZFormatter(Datetime64Formatter):
-
     def _format_strings(self):
         """ we by definition have a TZ """
 
         values = self.values.asobject
         is_dates_only = _is_dates_only(values)
-        formatter = (self.formatter or _get_format_datetime64(is_dates_only, date_format=self.date_format))
-        fmt_values = [ formatter(x) for x in values ]
+        formatter = (self.formatter or
+                     _get_format_datetime64(is_dates_only,
+                                            date_format=self.date_format))
+        fmt_values = [formatter(x) for x in values]
 
         return fmt_values
 
 
-class Timedelta64Formatter(GenericArrayFormatter):
+class Timedelta64Formatter(GenericArrayFormatter):
     def __init__(self, values, nat_rep='NaT', box=False, **kwargs):
         super(Timedelta64Formatter, self).__init__(values, **kwargs)
         self.nat_rep = nat_rep
         self.box = box
 
     def _format_strings(self):
-        formatter = self.formatter or _get_format_timedelta64(self.values, nat_rep=self.nat_rep,
-                                                              box=self.box)
+        formatter = (self.formatter or
+                     _get_format_timedelta64(self.values, nat_rep=self.nat_rep,
+                                             box=self.box))
         fmt_values = np.array([formatter(x) for x in self.values])
         return fmt_values
@@ -2319,8 +2311,10 @@ def _get_format_timedelta64(values, nat_rep='NaT', box=False):
     consider_values = values_int != iNaT
     one_day_nanos = (86400 * 1e9)
-    even_days = np.logical_and(consider_values, values_int % one_day_nanos != 0).sum() == 0
-    all_sub_day = np.logical_and(consider_values, np.abs(values_int) >= one_day_nanos).sum() == 0
+    even_days = np.logical_and(consider_values,
+                               values_int % one_day_nanos != 0).sum() == 0
+    all_sub_day = np.logical_and(
+        consider_values, np.abs(values_int) >= one_day_nanos).sum() == 0
 
     if even_days:
         format = 'even_day'
@@ -2343,8 +2337,7 @@ def _formatter(x):
     return _formatter
 
 
-def _make_fixed_width(strings, justify='right', minimum=None,
-                      adj=None):
+def _make_fixed_width(strings, justify='right', minimum=None, adj=None):
     if len(strings) == 0 or justify == 'all':
         return strings
@@ -2381,7 +2374,7 @@ def _trim_zeros(str_floats, na_rep='NaN'):
     def _cond(values):
         non_na = [x for x in values if x != na_rep]
         return (len(non_na) > 0 and all([x.endswith('0') for x in non_na]) and
-                not(any([('e' in x) or ('E' in x) for x in non_na])))
+                not (any([('e' in x) or ('E' in x) for x in non_na])))
 
     while _cond(trimmed):
         trimmed = [x[:-1] if x != na_rep else x for x in trimmed]
@@ -2417,8 +2410,7 @@ def _has_names(index):
     else:
         return index.name is not None
 
-
-# ------------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
# Global formatting options

 _initial_defencoding = None
@@ -2496,7 +2488,6 @@ def get_console_size():
 class EngFormatter(object):
-
     """
     Formats float values according to engineering format.
@@ -2576,7 +2567,7 @@ def __call__(self, num): else: prefix = 'E+%02d' % int_pow10 - mant = sign * dnum / (10 ** pow10) + mant = sign * dnum / (10**pow10) if self.accuracy is None: # pragma: no cover format_str = u("% g%s") @@ -2626,16 +2617,16 @@ def _binify(cols, line_width): bins.append(len(cols)) return bins + if __name__ == '__main__': arr = np.array([746.03, 0.00, 5620.00, 1592.36]) # arr = np.array([11111111.1, 1.55]) # arr = [314200.0034, 1.4125678] - arr = np.array([327763.3119, 345040.9076, 364460.9915, 398226.8688, - 383800.5172, 433442.9262, 539415.0568, 568590.4108, - 599502.4276, 620921.8593, 620898.5294, 552427.1093, - 555221.2193, 519639.7059, 388175.7, 379199.5854, - 614898.25, 504833.3333, 560600., 941214.2857, - 1134250., 1219550., 855736.85, 1042615.4286, - 722621.3043, 698167.1818, 803750.]) + arr = np.array( + [327763.3119, 345040.9076, 364460.9915, 398226.8688, 383800.5172, + 433442.9262, 539415.0568, 568590.4108, 599502.4276, 620921.8593, + 620898.5294, 552427.1093, 555221.2193, 519639.7059, 388175.7, + 379199.5854, 614898.25, 504833.3333, 560600., 941214.2857, 1134250., + 1219550., 855736.85, 1042615.4286, 722621.3043, 698167.1818, 803750.]) fmt = FloatArrayFormatter(arr, digits=7) print(fmt.get_result()) diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py index 719f35dd90ce2..4dffaa0b0c416 100644 --- a/pandas/core/reshape.py +++ b/pandas/core/reshape.py @@ -24,7 +24,6 @@ class _Unstacker(object): - """ Helper class to unstack data / pivot with multi-level index @@ -159,10 +158,9 @@ def get_result(self): # may need to coerce categoricals here if self.is_categorical is not None: - values = [ Categorical.from_array(values[:,i], - categories=self.is_categorical.categories, - ordered=True) - for i in range(values.shape[-1]) ] + values = [Categorical.from_array( + values[:, i], categories=self.is_categorical.categories, + ordered=True) for i in range(values.shape[-1])] return DataFrame(values, index=index, columns=columns) @@ -188,8 
+186,8 @@ def get_new_values(self): # is there a simpler / faster way of doing this? for i in range(values.shape[1]): - chunk = new_values[:, i * width: (i + 1) * width] - mask_chunk = new_mask[:, i * width: (i + 1) * width] + chunk = new_values[:, i * width:(i + 1) * width] + mask_chunk = new_mask[:, i * width:(i + 1) * width] chunk.flat[self.mask] = self.sorted_values[:, i] mask_chunk.flat[self.mask] = True @@ -233,10 +231,8 @@ def get_new_index(self): lev = lev.insert(len(lev), _get_na_value(lev.dtype.type)) return lev.take(lab) - return MultiIndex(levels=self.new_index_levels, - labels=result_labels, - names=self.new_index_names, - verify_integrity=False) + return MultiIndex(levels=self.new_index_levels, labels=result_labels, + names=self.new_index_names, verify_integrity=False) def _unstack_multiple(data, clocs): @@ -264,8 +260,8 @@ def _unstack_multiple(data, clocs): group_index = get_group_index(clabels, shape, sort=False, xnull=False) comp_ids, obs_ids = _compress_group_index(group_index, sort=False) - recons_labels = decons_obs_group_ids(comp_ids, - obs_ids, shape, clabels, xnull=False) + recons_labels = decons_obs_group_ids(comp_ids, obs_ids, shape, clabels, + xnull=False) dummy_index = MultiIndex(levels=rlevels + [obs_ids], labels=rlabels + [comp_ids], @@ -288,8 +284,7 @@ def _unstack_multiple(data, clocs): return result - dummy = DataFrame(data.values, index=dummy_index, - columns=data.columns) + dummy = DataFrame(data.values, index=dummy_index, columns=data.columns) unstacked = dummy.unstack('__placeholder__') if isinstance(unstacked, Series): @@ -329,8 +324,7 @@ def pivot(self, index=None, columns=None, values=None): else: index = self[index] indexed = Series(self[values].values, - index=MultiIndex.from_arrays([index, - self[columns]])) + index=MultiIndex.from_arrays([index, self[columns]])) return indexed.unstack(columns) @@ -461,6 +455,7 @@ def stack(frame, level=-1, dropna=True): ------- stacked : Series """ + def factorize(index): if 
index.is_unique: return index, np.arange(len(index)) @@ -492,11 +487,10 @@ def factorize(index): new_index = MultiIndex(levels=new_levels, labels=new_labels, names=new_names, verify_integrity=False) else: - levels, (ilab, clab) = \ - zip(*map(factorize, (frame.index, frame.columns))) + levels, (ilab, clab) = zip(*map(factorize, (frame.index, + frame.columns))) labels = ilab.repeat(K), np.tile(clab, N).ravel() - new_index = MultiIndex(levels=levels, - labels=labels, + new_index = MultiIndex(levels=levels, labels=labels, names=[frame.index.name, frame.columns.name], verify_integrity=False) @@ -541,8 +535,8 @@ def stack_multiple(frame, level, dropna=True): level = updated_level else: - raise ValueError("level should contain all level names or all level numbers, " - "not a mixture of the two.") + raise ValueError("level should contain all level names or all level " + "numbers, not a mixture of the two.") return result @@ -550,12 +544,12 @@ def stack_multiple(frame, level, dropna=True): def _stack_multi_columns(frame, level_num=-1, dropna=True): def _convert_level_number(level_num, columns): """ - Logic for converting the level number to something - we can safely pass to swaplevel: + Logic for converting the level number to something we can safely pass + to swaplevel: - We generally want to convert the level number into - a level name, except when columns do not have names, - in which case we must leave as a level number + We generally want to convert the level number into a level name, except + when columns do not have names, in which case we must leave as a level + number """ if level_num in columns.names: return columns.names[level_num] @@ -587,10 +581,9 @@ def _convert_level_number(level_num, columns): # tuple list excluding level for grouping columns if len(frame.columns.levels) > 2: - tuples = list(zip(*[ - lev.take(lab) for lev, lab in - zip(this.columns.levels[:-1], this.columns.labels[:-1]) - ])) + tuples = list(zip(*[lev.take(lab) + for lev, lab in 
zip(this.columns.levels[:-1], + this.columns.labels[:-1])])) unique_groups = [key for key, _ in itertools.groupby(tuples)] new_names = this.columns.names[:-1] new_columns = MultiIndex.from_tuples(unique_groups, names=new_names) @@ -655,8 +648,8 @@ def _convert_level_number(level_num, columns): return result -def melt(frame, id_vars=None, value_vars=None, - var_name=None, value_name='value', col_level=None): +def melt(frame, id_vars=None, value_vars=None, var_name=None, + value_name='value', col_level=None): """ "Unpivots" a DataFrame from wide format to long format, optionally leaving identifier variables set. @@ -772,8 +765,8 @@ def melt(frame, id_vars=None, value_vars=None, if len(frame.columns.names) == len(set(frame.columns.names)): var_name = frame.columns.names else: - var_name = ['variable_%s' % i for i in - range(len(frame.columns.names))] + var_name = ['variable_%s' % i + for i in range(len(frame.columns.names))] else: var_name = [frame.columns.name if frame.columns.name is not None else 'variable'] @@ -922,6 +915,7 @@ def wide_to_long(df, stubnames, i, j): `pandas.melt` under the hood, but is hard-coded to "do the right thing" in a typical case. 
""" + def get_var_names(df, regex): return df.filter(regex=regex).columns.tolist() @@ -948,6 +942,7 @@ def melt_stub(df, stub, i, j): newdf = newdf.merge(new, how="outer", on=id_vars + [j], copy=False) return newdf.set_index([i, j]) + def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, columns=None, sparse=False): """ @@ -1026,21 +1021,20 @@ def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, # determine columns being encoded if columns is None: - columns_to_encode = data.select_dtypes(include=['object', - 'category']).columns + columns_to_encode = data.select_dtypes( + include=['object', 'category']).columns else: columns_to_encode = columns # validate prefixes and separator to avoid silently dropping cols def check_len(item, name): - length_msg = ("Length of '{0}' ({1}) did " - "not match the length of the columns " - "being encoded ({2}).") + length_msg = ("Length of '{0}' ({1}) did not match the length of " + "the columns being encoded ({2}).") if com.is_list_like(item): if not len(item) == len(columns_to_encode): raise ValueError(length_msg.format(name, len(item), - len(columns_to_encode))) + len(columns_to_encode))) check_len(prefix, 'prefix') check_len(prefix_sep, 'prefix_sep') @@ -1075,7 +1069,8 @@ def check_len(item, name): return result -def _get_dummies_1d(data, prefix, prefix_sep='_', dummy_na=False, sparse=False): +def _get_dummies_1d(data, prefix, prefix_sep='_', dummy_na=False, + sparse=False): # Series avoids inconsistent NaN handling cat = Categorical.from_array(Series(data), ordered=True) levels = cat.categories @@ -1099,8 +1094,7 @@ def _get_dummies_1d(data, prefix, prefix_sep='_', dummy_na=False, sparse=False): number_of_cols = len(levels) if prefix is not None: - dummy_cols = ['%s%s%s' % (prefix, prefix_sep, v) - for v in levels] + dummy_cols = ['%s%s%s' % (prefix, prefix_sep, v) for v in levels] else: dummy_cols = levels @@ -1112,7 +1106,7 @@ def _get_dummies_1d(data, prefix, prefix_sep='_', dummy_na=False, 
sparse=False): if sparse: sparse_series = {} N = len(data) - sp_indices = [ [] for _ in range(len(dummy_cols)) ] + sp_indices = [[] for _ in range(len(dummy_cols))] for ndx, code in enumerate(codes): if code == -1: # Blank entries if not dummy_na and code == -1, #GH4446 @@ -1120,8 +1114,8 @@ def _get_dummies_1d(data, prefix, prefix_sep='_', dummy_na=False, sparse=False): sp_indices[code].append(ndx) for col, ixs in zip(dummy_cols, sp_indices): - sarr = SparseArray(np.ones(len(ixs)), sparse_index=IntIndex(N, ixs), - fill_value=0) + sarr = SparseArray(np.ones(len(ixs)), + sparse_index=IntIndex(N, ixs), fill_value=0) sparse_series[col] = SparseSeries(data=sarr, index=index) return SparseDataFrame(sparse_series, index=index, columns=dummy_cols) @@ -1157,10 +1151,7 @@ def make_axis_dummies(frame, axis='minor', transform=None): dummies : DataFrame Column names taken from chosen axis """ - numbers = { - 'major': 0, - 'minor': 1 - } + numbers = {'major': 0, 'minor': 1} num = numbers.get(axis, axis) items = frame.index.levels[num] diff --git a/pandas/core/series.py b/pandas/core/series.py index 73e645039506f..73cca93a498c5 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -17,11 +17,10 @@ _default_index, _maybe_upcast, _asarray_tuplesafe, _infer_dtype_from_scalar, is_list_like, _values_from_object, - is_categorical_dtype, is_datetime64tz_dtype, - needs_i8_conversion, i8_boxer, - _possibly_cast_to_datetime, _possibly_castable, - _possibly_convert_platform, _try_sort, - is_int64_dtype, is_internal_type, is_datetimetz, + is_categorical_dtype, needs_i8_conversion, + i8_boxer, _possibly_cast_to_datetime, + _possibly_castable, _possibly_convert_platform, + _try_sort, is_internal_type, is_datetimetz, _maybe_match_name, ABCSparseArray, _coerce_to_dtype, SettingWithCopyError, _maybe_box_datetimelike, ABCDataFrame, @@ -42,6 +41,7 @@ from pandas.util.terminal import get_terminal_size from pandas.compat import zip, u, OrderedDict, StringIO + import pandas.core.ops as 
ops from pandas.core import algorithms @@ -49,7 +49,7 @@ import pandas.core.datetools as datetools import pandas.core.format as fmt import pandas.core.nanops as nanops -from pandas.util.decorators import Appender, cache_readonly, deprecate_kwarg +from pandas.util.decorators import Appender, deprecate_kwarg import pandas.lib as lib import pandas.tslib as tslib @@ -62,15 +62,11 @@ __all__ = ['Series'] - _shared_doc_kwargs = dict( - axes='index', - klass='Series', - axes_single_arg="{0, 'index'}", + axes='index', klass='Series', axes_single_arg="{0, 'index'}", inplace="""inplace : boolean, default False - If True, performs operation inplace and returns None.""", - duplicated='Series' -) + If True, performs operation inplace and returns None.""", + duplicated='Series') def _coerce_method(converter): @@ -79,17 +75,17 @@ def _coerce_method(converter): def wrapper(self): if len(self) == 1: return converter(self.iloc[0]) - raise TypeError( - "cannot convert the series to {0}".format(str(converter))) - return wrapper + raise TypeError("cannot convert the series to " + "{0}".format(str(converter))) + return wrapper -#---------------------------------------------------------------------- +# ---------------------------------------------------------------------- # Series class -class Series(base.IndexOpsMixin, strings.StringAccessorMixin, generic.NDFrame,): - +class Series(base.IndexOpsMixin, strings.StringAccessorMixin, + generic.NDFrame,): """ One-dimensional ndarray with axis labels (including time series). 
@@ -182,14 +178,14 @@ def __init__(self, data=None, index=None, dtype=None, name=None, else: data = np.nan elif isinstance(index, PeriodIndex): - data = [data.get(i, nan) - for i in index] if data else np.nan + data = ([data.get(i, nan) for i in index] + if data else np.nan) else: data = lib.fast_multiget(data, index.values, default=np.nan) except TypeError: - data = [data.get(i, nan) - for i in index] if data else np.nan + data = ([data.get(i, nan) for i in index] + if data else np.nan) elif isinstance(data, SingleBlockManager): if index is None: @@ -198,7 +194,8 @@ def __init__(self, data=None, index=None, dtype=None, name=None, data = data.reindex(index, copy=copy) elif isinstance(data, Categorical): if dtype is not None: - raise ValueError("cannot specify a dtype with a Categorical") + raise ValueError("cannot specify a dtype with a " + "Categorical") elif (isinstance(data, types.GeneratorType) or (compat.PY3 and isinstance(data, map))): data = list(data) @@ -241,7 +238,8 @@ def from_array(cls, arr, index=None, name=None, dtype=None, copy=False, from pandas.sparse.series import SparseSeries cls = SparseSeries - return cls(arr, index=index, name=name, dtype=dtype, copy=copy, fastpath=fastpath) + return cls(arr, index=index, name=name, dtype=dtype, copy=copy, + fastpath=fastpath) @property def _constructor(self): @@ -259,12 +257,11 @@ def _can_hold_na(self): @property def is_time_series(self): - msg = "is_time_series is deprecated. Please use Series.index.is_all_dates" - warnings.warn(msg, FutureWarning, stacklevel=2) + warnings.warn("is_time_series is deprecated. 
Please use " + "Series.index.is_all_dates", FutureWarning, stacklevel=2) # return self._subtyp in ['time_series', 'sparse_time_series'] return self.index.is_all_dates - _index = None def _set_axis(self, axis, labels, fastpath=False): @@ -276,7 +273,8 @@ def _set_axis(self, axis, labels, fastpath=False): is_all_dates = labels.is_all_dates if is_all_dates: - if not isinstance(labels, (DatetimeIndex, PeriodIndex, TimedeltaIndex)): + if not isinstance(labels, + (DatetimeIndex, PeriodIndex, TimedeltaIndex)): labels = DatetimeIndex(labels) # need to set here becuase we changed the index @@ -343,7 +341,8 @@ def values(self): Timezone aware datetime data is converted to UTC: - >>> pd.Series(pd.date_range('20130101',periods=3,tz='US/Eastern')).values + >>> pd.Series(pd.date_range('20130101', periods=3, + tz='US/Eastern')).values array(['2013-01-01T00:00:00.000000000-0500', '2013-01-02T00:00:00.000000000-0500', '2013-01-03T00:00:00.000000000-0500'], dtype='datetime64[ns]') @@ -449,9 +448,10 @@ def __array_prepare__(self, result, context=None): if context is not None and not isinstance(self._values, np.ndarray): obj = context[1][0] raise TypeError("{obj} with dtype {dtype} cannot perform " - "the numpy op {op}".format(obj=type(obj).__name__, - dtype=getattr(obj,'dtype',None), - op=context[0].__name__)) + "the numpy op {op}".format( + obj=type(obj).__name__, + dtype=getattr(obj, 'dtype', None), + op=context[0].__name__)) return result # complex @@ -508,9 +508,7 @@ def _unpickle_series_compat(self, state): # indexers @property def axes(self): - """ - Return a list of the row axis labels - """ + """Return a list of the row axis labels""" return [self.index] def _ixs(self, i, axis=0): @@ -551,7 +549,8 @@ def _is_mixed_type(self): return False def _slice(self, slobj, axis=0, kind=None): - slobj = self.index._convert_slice_indexer(slobj, kind=kind or 'getitem') + slobj = self.index._convert_slice_indexer(slobj, + kind=kind or 'getitem') return self._get_values(slobj) def 
__getitem__(self, key): @@ -564,9 +563,9 @@ def __getitem__(self, key): # we need to box if we have a non-unique index here # otherwise have inline ndarray/lists if not self.index.is_unique: - result = self._constructor(result, - index=[key]*len(result) - ,dtype=self.dtype).__finalize__(self) + result = self._constructor( + result, index=[key] * len(result), + dtype=self.dtype).__finalize__(self) return result except InvalidIndexError: @@ -582,7 +581,8 @@ def __getitem__(self, key): else: # we can try to coerce the indexer (or this will raise) - new_key = self.index._convert_scalar_indexer(key,kind='getitem') + new_key = self.index._convert_scalar_indexer(key, + kind='getitem') if type(new_key) != type(key): return self.__getitem__(new_key) raise @@ -604,8 +604,8 @@ def _get_with(self, key): indexer = self.index._convert_slice_indexer(key, kind='getitem') return self._get_values(indexer) elif isinstance(key, ABCDataFrame): - raise TypeError('Indexing a Series with DataFrame is not supported, '\ - 'use the appropriate DataFrame column') + raise TypeError('Indexing a Series with DataFrame is not ' + 'supported, use the appropriate DataFrame column') else: if isinstance(key, tuple): try: @@ -669,7 +669,6 @@ def _get_values(self, indexer): return self._values[indexer] def __setitem__(self, key, value): - def setitem(key, value): try: self._set_with_engine(key, value) @@ -678,8 +677,8 @@ def setitem(key, value): raise except (KeyError, ValueError): values = self._values - if (com.is_integer(key) - and not self.index.inferred_type == 'integer'): + if (com.is_integer(key) and + not self.index.inferred_type == 'integer'): values[key] = value return @@ -694,17 +693,18 @@ def setitem(key, value): value = tslib.iNaT try: - self.index._engine.set_value(self._values, key, value) + self.index._engine.set_value(self._values, key, + value) return - except (TypeError): + except TypeError: pass self.loc[key] = value return except TypeError as e: - if isinstance(key, tuple) and not 
isinstance(self.index, - MultiIndex): + if (isinstance(key, tuple) and + not isinstance(self.index, MultiIndex)): raise ValueError("Can only tuple-index with a MultiIndex") # python 3 type errors should be raised @@ -716,7 +716,7 @@ def setitem(key, value): try: self.where(~key, value, inplace=True) return - except (InvalidIndexError): + except InvalidIndexError: pass self._set_with(key, value) @@ -752,7 +752,7 @@ def _set_with(self, key, value): try: key = list(key) except: - key = [ key ] + key = [key] if isinstance(key, Index): key_type = key.inferred_type @@ -777,8 +777,7 @@ def _set_labels(self, key, value): indexer = self.index.get_indexer(key) mask = indexer == -1 if mask.any(): - raise ValueError('%s not contained in the index' - % str(key[mask])) + raise ValueError('%s not contained in the index' % str(key[mask])) self._set_values(indexer, value) def _set_values(self, key, value): @@ -828,8 +827,8 @@ def iget_value(self, i, axis=0): """ DEPRECATED. Use ``.iloc[i]`` or ``.iat[i]`` instead """ - warnings.warn("iget_value(i) is deprecated. Please use .iloc[i] or .iat[i]", - FutureWarning, stacklevel=2) + warnings.warn("iget_value(i) is deprecated. 
Please use .iloc[i] or " + ".iat[i]", FutureWarning, stacklevel=2) return self._ixs(i) def iget(self, i, axis=0): @@ -951,8 +950,8 @@ def __unicode__(self): """ buf = StringIO(u("")) width, height = get_terminal_size() - max_rows = (height if get_option("display.max_rows") == 0 - else get_option("display.max_rows")) + max_rows = (height if get_option("display.max_rows") == 0 else + get_option("display.max_rows")) self.to_string(buf=buf, name=self.name, dtype=self.dtype, max_rows=max_rows) @@ -961,7 +960,8 @@ def __unicode__(self): return result def to_string(self, buf=None, na_rep='NaN', float_format=None, header=True, - index=True, length=False, dtype=False, name=False, max_rows=None): + index=True, length=False, dtype=False, name=False, + max_rows=None): """ Render a string representation of the Series @@ -1012,17 +1012,15 @@ def to_string(self, buf=None, na_rep='NaN', float_format=None, header=True, with open(buf, 'w') as f: f.write(the_repr) - def _get_repr( - self, name=False, header=True, index=True, length=True, dtype=True, - na_rep='NaN', float_format=None, max_rows=None): + def _get_repr(self, name=False, header=True, index=True, length=True, + dtype=True, na_rep='NaN', float_format=None, max_rows=None): """ Internal function, should always return unicode string """ - formatter = fmt.SeriesFormatter(self, name=name, - length=length, header=header, - index=index, dtype=dtype, - na_rep=na_rep, + formatter = fmt.SeriesFormatter(self, name=name, length=length, + header=header, index=index, + dtype=dtype, na_rep=na_rep, float_format=float_format, max_rows=max_rows) result = formatter.to_string() @@ -1052,11 +1050,11 @@ def iteritems(self): if compat.PY3: # pragma: no cover items = iteritems - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Misc public methods def keys(self): - "Alias for index" + """Alias for index""" return self.index def tolist(self): @@ 
-1111,7 +1109,7 @@ def to_sparse(self, kind='block', fill_value=None): return SparseSeries(self, kind=kind, fill_value=fill_value).__finalize__(self) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Statistics, overridden ndarray methods # TODO: integrate bottleneck @@ -1148,7 +1146,8 @@ def count(self, level=None): obs = lab[notnull(self.values)] out = np.bincount(obs, minlength=len(lev) or None) - return self._constructor(out, index=lev, dtype='int64').__finalize__(self) + return self._constructor(out, index=lev, + dtype='int64').__finalize__(self) def mode(self): """Returns the mode(s) of the dataset. @@ -1169,12 +1168,14 @@ def mode(self): # TODO: Add option for bins like value_counts() return algorithms.mode(self) - @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'}) + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) @Appender(base._shared_docs['drop_duplicates'] % _shared_doc_kwargs) def drop_duplicates(self, keep='first', inplace=False): return super(Series, self).drop_duplicates(keep=keep, inplace=inplace) - @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'}) + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) @Appender(base._shared_docs['duplicated'] % _shared_doc_kwargs) def duplicated(self, keep='first'): return super(Series, self).duplicated(keep=keep) @@ -1258,8 +1259,7 @@ def round(self, decimals=0): """ result = _values_from_object(self).round(decimals) - result = self._constructor(result, - index=self.index).__finalize__(self) + result = self._constructor(result, index=self.index).__finalize__(self) return result @@ -1324,8 +1324,7 @@ def multi(values, qs, **kwargs): return self._maybe_box(lambda values: multi(values, q, **kwargs), dropna=True) - def corr(self, other, method='pearson', - min_periods=None): + def corr(self, other, 
method='pearson', min_periods=None): """ Compute correlation with `other` Series, excluding missing values @@ -1503,7 +1502,7 @@ def searchsorted(self, v, side='left', sorter=None): return self._values.searchsorted(Series(v)._values, side=side, sorter=sorter) - #------------------------------------------------------------------------------ + # ------------------------------------------------------------------- # Combination def append(self, to_append, verify_integrity=False): @@ -1585,7 +1584,8 @@ def _binop(self, other, func, level=None, fill_value=None): this = self if not self.index.equals(other.index): - this, other = self.align(other, level=level, join='outer', copy=False) + this, other = self.align(other, level=level, join='outer', + copy=False) new_index = this.index this_vals = this.values @@ -1657,7 +1657,8 @@ def combine_first(self, other): new_index = self.index.union(other.index) this = self.reindex(new_index, copy=False) other = other.reindex(new_index, copy=False) - name = _maybe_match_name(self, other) + # TODO: do we need name? 
+ name = _maybe_match_name(self, other) # noqa rs_vals = com._where_compat(isnull(this), other._values, this._values) return self._constructor(rs_vals, index=new_index).__finalize__(self) @@ -1676,7 +1677,7 @@ def update(self, other): self._data = self._data.putmask(mask=mask, new=other, inplace=True) self._maybe_update_cacher() - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Reindexing, sorting @Appender(generic._shared_docs['sort_values'] % _shared_doc_kwargs) @@ -1750,19 +1751,21 @@ def sort_index(self, axis=0, level=None, ascending=True, inplace=False, ascending=ascending) new_values = self._values.take(indexer) - result = self._constructor(new_values, index=new_index) + result = self._constructor(new_values, index=new_index) if inplace: self._update_inplace(result) else: return result.__finalize__(self) - def sort(self, axis=0, ascending=True, kind='quicksort', na_position='last', inplace=True): + def sort(self, axis=0, ascending=True, kind='quicksort', + na_position='last', inplace=True): """ - DEPRECATED: use :meth:`Series.sort_values(inplace=True)` for INPLACE sorting + DEPRECATED: use :meth:`Series.sort_values(inplace=True)` for INPLACE + sorting - Sort values and index labels by value. This is an inplace sort by default. - Series.order is the equivalent but returns a new Series. + Sort values and index labels by value. This is an inplace sort by + default. Series.order is the equivalent but returns a new Series. 
        Parameters
        ----------
@@ -1782,24 +1785,24 @@ def sort(self, axis=0, ascending=True, kind='quicksort', na_position='last', inp
        --------
        Series.sort_values
        """
-        warnings.warn("sort is deprecated, use sort_values(inplace=True) for for INPLACE sorting",
-                      FutureWarning, stacklevel=2)
+        warnings.warn("sort is deprecated, use sort_values(inplace=True) for "
+                      "INPLACE sorting", FutureWarning, stacklevel=2)
 
-        return self.sort_values(ascending=ascending,
-                                kind=kind,
-                                na_position=na_position,
-                                inplace=inplace)
+        return self.sort_values(ascending=ascending, kind=kind,
+                                na_position=na_position, inplace=inplace)
 
-    def order(self, na_last=None, ascending=True, kind='quicksort', na_position='last', inplace=False):
+    def order(self, na_last=None, ascending=True, kind='quicksort',
+              na_position='last', inplace=False):
        """
        DEPRECATED: use :meth:`Series.sort_values`
 
        Sorts Series object, by value, maintaining index-value link.
-        This will return a new Series by default. Series.sort is the equivalent but as an inplace method.
+        This will return a new Series by default. Series.sort is the equivalent
+        but as an inplace method.
 
        Parameters
        ----------
-        na_last : boolean (optional, default=True) (DEPRECATED; use na_position)
+        na_last : boolean (optional, default=True)--DEPRECATED; use na_position
            Put NaN's at beginning or end
        ascending : boolean, default True
            Sort ascending. Passing False sorts descending
@@ -1823,10 +1826,8 @@ def order(self, na_last=None, ascending=True, kind='quicksort', na_position='las
        warnings.warn("order is deprecated, use sort_values(...)",
                      FutureWarning, stacklevel=2)
 
-        return self.sort_values(ascending=ascending,
-                                kind=kind,
-                                na_position=na_position,
-                                inplace=inplace)
+        return self.sort_values(ascending=ascending, kind=kind,
+                                na_position=na_position, inplace=inplace)
 
    def argsort(self, axis=0, kind='quicksort', order=None):
        """
@@ -1853,8 +1854,8 @@ def argsort(self, axis=0, kind='quicksort', order=None):
        mask = isnull(values)
 
        if mask.any():
-            result = Series(
-                -1, index=self.index, name=self.name, dtype='int64')
+            result = Series(-1, index=self.index, name=self.name,
+                            dtype='int64')
            notmask = ~mask
            result[notmask] = np.argsort(values[notmask], kind=kind)
            return self._constructor(result,
@@ -1889,11 +1890,13 @@ def rank(self, method='average', na_option='keep', ascending=True,
        -------
        ranks : Series
        """
-        ranks = algorithms.rank(self._values, method=method, na_option=na_option,
-                                ascending=ascending, pct=pct)
+        ranks = algorithms.rank(self._values, method=method,
+                                na_option=na_option, ascending=ascending,
+                                pct=pct)
        return self._constructor(ranks, index=self.index).__finalize__(self)
 
-    @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'})
+    @deprecate_kwarg('take_last', 'keep', mapping={True: 'last',
+                                                   False: 'first'})
    def nlargest(self, n=5, keep='first'):
        """Return the largest `n` elements.
 
@@ -1914,8 +1917,8 @@ def nlargest(self, n=5, keep='first'):
 
        Notes
        -----
-        Faster than ``.sort_values(ascending=False).head(n)`` for small `n` relative
-        to the size of the ``Series`` object.
+        Faster than ``.sort_values(ascending=False).head(n)`` for small `n`
+        relative to the size of the ``Series`` object.
 
        See Also
        --------
@@ -1930,7 +1933,8 @@ def nlargest(self, n=5, keep='first'):
        """
        return algorithms.select_n(self, n=n, keep=keep, method='nlargest')
 
-    @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'})
+    @deprecate_kwarg('take_last', 'keep', mapping={True: 'last',
+                                                   False: 'first'})
    def nsmallest(self, n=5, keep='first'):
        """Return the smallest `n` elements.
 
@@ -1987,7 +1991,8 @@ def sortlevel(self, level=0, ascending=True, sort_remaining=True):
        Series.sort_index(level=...)
 
        """
-        return self.sort_index(level=level, ascending=ascending, sort_remaining=sort_remaining)
+        return self.sort_index(level=level, ascending=ascending,
+                               sort_remaining=sort_remaining)
 
    def swaplevel(self, i, j, copy=True):
        """
@@ -2063,7 +2068,7 @@ def unstack(self, level=-1):
        from pandas.core.reshape import unstack
        return unstack(self, level)
 
-    #----------------------------------------------------------------------
+    # ----------------------------------------------------------------------
    # function application
 
    def map(self, arg, na_action=None):
@@ -2259,8 +2264,8 @@ def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None,
            # Validate that 'axis' is consistent with Series's single axis.
            self._get_axis_number(axis)
            if numeric_only:
-                raise NotImplementedError(
-                    'Series.{0} does not implement numeric_only.'.format(name))
+                raise NotImplementedError('Series.{0} does not implement '
+                                          'numeric_only.'.format(name))
            return op(delegate, skipna=skipna, **kwds)
 
        return delegate._reduce(op=op, name=name, axis=axis, skipna=skipna,
@@ -2326,9 +2331,11 @@ def _needs_reindex_multi(self, axes, method, level):
    def align(self, other, join='outer', axis=None, level=None, copy=True,
              fill_value=None, method=None, limit=None, fill_axis=0,
              broadcast_axis=None):
-        return super(Series, self).align(other, join=join, axis=axis, level=level, copy=copy,
-                                         fill_value=fill_value, method=method, limit=limit,
-                                         fill_axis=fill_axis, broadcast_axis=broadcast_axis)
+        return super(Series, self).align(other, join=join, axis=axis,
+                                         level=level, copy=copy,
+                                         fill_value=fill_value, method=method,
+                                         limit=limit, fill_axis=fill_axis,
+                                         broadcast_axis=broadcast_axis)
 
    @Appender(generic._shared_docs['rename'] % _shared_doc_kwargs)
    def rename(self, index=None, **kwargs):
@@ -2348,8 +2355,7 @@ def fillna(self, value=None, method=None, axis=None, inplace=False,
 
    @Appender(generic._shared_docs['shift'] % _shared_doc_kwargs)
    def shift(self, periods=1, freq=None, axis=0):
-        return super(Series, self).shift(periods=periods, freq=freq,
-                                         axis=axis)
+        return super(Series, self).shift(periods=periods, freq=freq, axis=axis)
 
    def reindex_axis(self, labels, axis=0, **kwargs):
        """ for compatibility with higher dims """
@@ -2405,8 +2411,7 @@ def take(self, indices, axis=0, convert=True, is_copy=False):
        """
        # check/convert indicies here
        if convert:
-            indices = maybe_convert_indices(
-                indices, len(self._get_axis(axis)))
+            indices = maybe_convert_indices(indices, len(self._get_axis(axis)))
 
        indices = com._ensure_platform_int(indices)
        new_index = self.index.take(indices)
@@ -2492,7 +2497,8 @@ def between(self, left, right, inclusive=True):
    def from_csv(cls, path, sep=',', parse_dates=True, header=None,
                 index_col=0, encoding=None, infer_datetime_format=False):
        """
-        Read CSV file (DISCOURAGED, please use :func:`pandas.read_csv` instead).
+        Read CSV file (DISCOURAGED, please use :func:`pandas.read_csv`
+        instead).
 
        It is preferable to use the more powerful :func:`pandas.read_csv`
        for most general purposes, but ``from_csv`` makes for an easy
@@ -2544,16 +2550,15 @@ def from_csv(cls, path, sep=',', parse_dates=True, header=None,
                              sep=sep, parse_dates=parse_dates,
                              encoding=encoding,
                              infer_datetime_format=infer_datetime_format)
-        result = df.iloc[:,0]
+        result = df.iloc[:, 0]
        if header is None:
            result.index.name = result.name = None
        return result
 
-    def to_csv(self, path, index=True, sep=",", na_rep='',
-               float_format=None, header=False,
-               index_label=None, mode='w', nanRep=None, encoding=None,
-               date_format=None, decimal='.'):
+    def to_csv(self, path, index=True, sep=",", na_rep='', float_format=None,
+               header=False, index_label=None, mode='w', nanRep=None,
+               encoding=None, date_format=None, decimal='.'):
        """
        Write Series to a comma-separated values (csv) file
 
@@ -2582,15 +2587,17 @@ def to_csv(self, path, index=True, sep=",", na_rep='',
        date_format: string, default None
            Format string for datetime objects.
        decimal: string, default '.'
-            Character recognized as decimal separator. E.g. use ',' for European data
+            Character recognized as decimal separator. E.g. use ',' for
+            European data
        """
        from pandas.core.frame import DataFrame
        df = DataFrame(self)
        # result is only a string if no path provided, otherwise None
        result = df.to_csv(path, index=index, sep=sep, na_rep=na_rep,
-                           float_format=float_format, header=header,
-                           index_label=index_label, mode=mode, nanRep=nanRep,
-                           encoding=encoding, date_format=date_format, decimal=decimal)
+                           float_format=float_format, header=header,
+                           index_label=index_label, mode=mode, nanRep=nanRep,
+                           encoding=encoding, date_format=date_format,
+                           decimal=decimal)
        if path is None:
            return result
 
@@ -2607,7 +2614,7 @@ def dropna(self, axis=0, inplace=False, **kwargs):
        kwargs.pop('how', None)
        if kwargs:
            raise TypeError('dropna() got an unexpected keyword '
-                    'argument "{0}"'.format(list(kwargs.keys())[0]))
+                            'argument "{0}"'.format(list(kwargs.keys())[0]))
 
        axis = self._get_axis_number(axis or 0)
 
@@ -2655,7 +2662,7 @@ def last_valid_index(self):
        else:
            return self.index[len(self) - i - 1]
 
-    #----------------------------------------------------------------------
+    # ----------------------------------------------------------------------
    # Time series-oriented methods
 
    def asof(self, where):
@@ -2749,7 +2756,7 @@ def to_period(self, freq=None, copy=True):
        return self._constructor(new_values,
                                 index=new_index).__finalize__(self)
 
-    #------------------------------------------------------------------------------
+    # -------------------------------------------------------------------------
    # Datetimelike delegation methods
 
    def _make_dt_accessor(self):
@@ -2759,9 +2766,10 @@ def _make_dt_accessor(self):
            raise AttributeError("Can only use .dt accessor with datetimelike "
                                 "values")
 
-    dt = base.AccessorProperty(CombinedDatetimelikeProperties, _make_dt_accessor)
+    dt = base.AccessorProperty(CombinedDatetimelikeProperties,
+                               _make_dt_accessor)
 
-    #------------------------------------------------------------------------------
+    # -------------------------------------------------------------------------
    # Categorical methods
 
    def _make_cat_accessor(self):
@@ -2785,14 +2793,14 @@ def _dir_additions(self):
                pass
        return rv
 
-Series._setup_axes(['index'], info_axis=0, stat_axis=0,
-                   aliases={'rows': 0})
+
+Series._setup_axes(['index'], info_axis=0, stat_axis=0, aliases={'rows': 0})
 Series._add_numeric_operations()
 Series._add_series_only_operations()
 Series._add_series_or_dataframe_operations()
 _INDEX_TYPES = ndarray, Index, list, tuple
 
-#------------------------------------------------------------------------------
+# -----------------------------------------------------------------------------
 # Supplementary functions
 
 
@@ -2804,14 +2812,15 @@ def remove_na(series):
 
 
 def _sanitize_index(data, index, copy=False):
-    """ sanitize an index type to return an ndarray of the underlying, pass thru a non-Index """
+    """ sanitize an index type to return an ndarray of the underlying, pass
+    thru a non-Index
+    """
 
    if index is None:
        return data
 
    if len(data) != len(index):
-        raise ValueError('Length of values does not match length of '
-                         'index')
+        raise ValueError('Length of values does not match length of ' 'index')
 
    if isinstance(data, PeriodIndex):
        data = data.asobject
@@ -2822,14 +2831,17 @@ def _sanitize_index(data, index, copy=False):
    elif isinstance(data, np.ndarray):
 
        # coerce datetimelike types
-        if data.dtype.kind in ['M','m']:
+        if data.dtype.kind in ['M', 'm']:
            data = _sanitize_array(data, index, copy=copy)
 
    return data
 
+
 def _sanitize_array(data, index, dtype=None, copy=False,
                    raise_cast_failure=False):
-    """ sanitize input data to an ndarray, copy if specified, coerce to the dtype if specified """
+    """ sanitize input data to an ndarray, copy if specified, coerce to the
+    dtype if specified
+    """
 
    if dtype is not None:
        dtype = _coerce_to_dtype(dtype)
@@ -2878,7 +2890,8 @@ def _try_cast(arr, take_fast_path):
        subarr = _try_cast(data, True)
    elif isinstance(data, Index):
        # don't coerce Index types
-        # e.g. indexes can have different conversions (so don't fast path them)
+        # e.g. indexes can have different conversions (so don't fast path
+        # them)
        # GH 6140
        subarr = _sanitize_index(data, index, copy=True)
    else:
@@ -2916,7 +2929,7 @@ def create_from_value(value, index, dtype):
 
        # return a new empty value suitable for the dtype
        if is_datetimetz(dtype):
-            subarr = DatetimeIndex([value]*len(index))
+            subarr = DatetimeIndex([value] * len(index))
        else:
            if not isinstance(dtype, (np.dtype, type(np.dtype))):
                dtype = dtype.dtype
@@ -2965,9 +2978,9 @@ def create_from_value(value, index, dtype):
 
    return subarr
 
+
 # backwards compatiblity
 class TimeSeries(Series):
-
    def __init__(self, *args, **kwargs):
        # deprecation TimeSeries, #10890
        warnings.warn("TimeSeries is deprecated. Please use Series",
@@ -2975,12 +2988,13 @@ def __init__(self, *args, **kwargs):
 
        super(TimeSeries, self).__init__(*args, **kwargs)
 
-#----------------------------------------------------------------------
+
+# ----------------------------------------------------------------------
 # Add plotting methods to Series
 
-import pandas.tools.plotting as _gfx
+import pandas.tools.plotting as _gfx  # noqa
 
-Series.plot = base.AccessorProperty(_gfx.SeriesPlotMethods, _gfx.SeriesPlotMethods)
+Series.plot = base.AccessorProperty(_gfx.SeriesPlotMethods,
+                                    _gfx.SeriesPlotMethods)
 Series.hist = _gfx.hist_series
 
 # Add arithmetic!
diff --git a/pandas/core/window.py b/pandas/core/window.py
index ce8fda9e932bc..04103893a5e55 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -9,12 +9,11 @@
 import warnings
 import numpy as np
-from functools import wraps
 from collections import defaultdict
 
 import pandas as pd
 from pandas.lib import isscalar
-from pandas.core.base import PandasObject, SelectionMixin, AbstractMethodError
+from pandas.core.base import PandasObject, SelectionMixin
 import pandas.core.common as com
 import pandas.algos as algos
 from pandas import compat
@@ -34,17 +33,19 @@
 pandas.DataFrame.%(name)s
 """
 
+
 class _Window(PandasObject, SelectionMixin):
-    _attributes = ['window','min_periods','freq','center','win_type','axis']
+    _attributes = ['window', 'min_periods', 'freq', 'center', 'win_type',
+                   'axis']
    exclusions = set()
 
-    def __init__(self, obj, window=None, min_periods=None, freq=None, center=False,
-                 win_type=None, axis=0):
+    def __init__(self, obj, window=None, min_periods=None, freq=None,
+                 center=False, win_type=None, axis=0):
 
        if freq is not None:
-            warnings.warn("The freq kw is deprecated and will be removed in a future version. You can resample prior "
-                          "to passing to a window function",
-                          FutureWarning, stacklevel=3)
+            warnings.warn("The freq kw is deprecated and will be removed in a "
+                          "future version. You can resample prior to passing "
+                          "to a window function", FutureWarning, stacklevel=3)
 
        self.blocks = []
        self.obj = obj
@@ -67,11 +68,13 @@ def _convert_freq(self, how=None):
        """ resample according to the how, return a new object """
 
        obj = self._selected_obj
-        if self.freq is not None and isinstance(obj, (com.ABCSeries, com.ABCDataFrame)):
+        if (self.freq is not None and
+                isinstance(obj, (com.ABCSeries, com.ABCDataFrame))):
            if how is not None:
-                warnings.warn("The how kw argument is deprecated and removed in a future version. You can resample prior "
-                              "to passing to a window function",
-                              FutureWarning, stacklevel=6)
+                warnings.warn("The how kw argument is deprecated and removed "
+                              "in a future version. You can resample prior "
+                              "to passing to a window function", FutureWarning,
+                              stacklevel=6)
 
            obj = obj.resample(self.freq, how=how)
        return obj
@@ -101,7 +104,7 @@ def _gotitem(self, key, ndim, subset=None):
            subset = self.obj
        self = self._shallow_copy(subset)
        self._reset_cache()
-        if subset.ndim==2:
+        if subset.ndim == 2:
            if isscalar(key) and key in subset or com.is_list_like(key):
                self._selection = key
        return self
@@ -124,8 +127,9 @@ def _get_window(self, other=None):
 
    def __unicode__(self):
        """ provide a nice str repr of our rolling object """
-        attrs = [ "{k}={v}".format(k=k,v=getattr(self,k)) \
-                  for k in self._attributes if getattr(self,k,None) is not None ]
+        attrs = ["{k}={v}".format(k=k, v=getattr(self, k))
+                 for k in self._attributes
+                 if getattr(self, k, None) is not None]
        return "{klass} [{attrs}]".format(klass=self.__class__.__name__,
                                          attrs=','.join(attrs))
 
@@ -137,13 +141,13 @@ def _shallow_copy(self, obj=None, **kwargs):
            obj = obj.obj
        for attr in self._attributes:
            if attr not in kwargs:
-                kwargs[attr] = getattr(self,attr)
+                kwargs[attr] = getattr(self, attr)
        return self._constructor(obj, **kwargs)
 
    def _prep_values(self, values=None, kill_inf=True, how=None):
 
        if values is None:
-            values = getattr(self._selected_obj,'values',self._selected_obj)
+            values = getattr(self._selected_obj, 'values', self._selected_obj)
 
        # coerce dtypes as appropriate
        if com.is_float_dtype(values.dtype):
@@ -156,7 +160,8 @@ def _prep_values(self, values=None, kill_inf=True, how=None):
            try:
                values = values.astype(float)
            except (ValueError, TypeError):
-                raise TypeError("cannot handle this type -> {0}".format(values.dtype))
+                raise TypeError("cannot handle this type -> {0}"
+                                "".format(values.dtype))
 
        if kill_inf:
            values = values.copy()
@@ -174,15 +179,14 @@ def _wrap_result(self, result, block=None, obj=None):
            # coerce if necessary
            if block is not None:
                if com.is_timedelta64_dtype(block.values.dtype):
-                    result = pd.to_timedelta(result.ravel(),unit='ns').values.reshape(result.shape)
+                    result = pd.to_timedelta(
+                        result.ravel(), unit='ns').values.reshape(result.shape)
 
            if result.ndim == 1:
                from pandas import Series
                return Series(result, obj.index, name=obj.name)
 
-            return type(obj)(result,
-                             index=obj.index,
-                             columns=block.columns)
+            return type(obj)(result, index=obj.index, columns=block.columns)
        return result
 
    def _wrap_results(self, results, blocks, obj):
@@ -206,11 +210,11 @@ def _wrap_results(self, results, blocks, obj):
        if not len(final):
            return obj.astype('float64')
-        return pd.concat(final,axis=1).reindex(columns=obj.columns)
+        return pd.concat(final, axis=1).reindex(columns=obj.columns)
 
    def _center_window(self, result, window):
        """ center the result in the window """
-        if self.axis > result.ndim-1:
+        if self.axis > result.ndim - 1:
            raise ValueError("Requested axis is larger then no. of argument "
                             "dimensions")
 
@@ -249,6 +253,7 @@ def aggregate(self, arg, *args, **kwargs):
 how : string, default None (DEPRECATED)
    Method for down- or re-sampling""")
 
+
 class Window(_Window):
    """
    Provides rolling transformations.
@@ -264,8 +269,8 @@ class Window(_Window):
        Minimum number of observations in window required to have a value
        (otherwise result is NA).
    freq : string or DateOffset object, optional (default None) (DEPRECATED)
-        Frequency to conform the data to before computing the statistic. Specified
-        as a frequency string or DateOffset object.
+        Frequency to conform the data to before computing the statistic.
+        Specified as a frequency string or DateOffset object.
    center : boolean, default False
        Set the labels at the center of the window.
    win_type : string, default None
@@ -313,8 +318,10 @@ def _prep_window(self, **kwargs):
            try:
                import scipy.signal as sig
            except ImportError:
-                raise ImportError('Please install scipy to generate window weight')
-            win_type = _validate_win_type(self.win_type, kwargs)  # may pop from kwargs
+                raise ImportError('Please install scipy to generate window '
+                                  'weight')
+            # the below may pop from kwargs
+            win_type = _validate_win_type(self.win_type, kwargs)
            return sig.get_window(win_type, window).astype(float)
 
        raise ValueError('Invalid window %s' % str(window))
@@ -353,10 +360,12 @@ def _apply_window(self, mean=True, how=None, **kwargs):
 
        offset = _offset(window, center)
        additional_nans = np.array([np.NaN] * offset)
+
        def f(arg, *args, **kwargs):
            minp = _use_window(self.min_periods, len(window))
-            return algos.roll_window(np.concatenate((arg, additional_nans)) if center else arg,
-                                     window, minp, avg=mean)
+            return algos.roll_window(np.concatenate((arg, additional_nans))
+                                     if center else arg, window, minp,
+                                     avg=mean)
 
        result = np.apply_along_axis(f, self.axis, values)
 
@@ -392,13 +401,14 @@ def sum(self, **kwargs):
    def mean(self, **kwargs):
        return self._apply_window(mean=True, **kwargs)
 
-class _Rolling(_Window):
 
+class _Rolling(_Window):
    @property
    def _constructor(self):
        return Rolling
 
-    def _apply(self, func, window=None, center=None, check_minp=None, how=None, **kwargs):
+    def _apply(self, func, window=None, center=None, check_minp=None, how=None,
+               **kwargs):
        """
        Rolling statistical measure using supplied function. Designed to be
        used with passed-in Cython array-based functions.
@@ -440,9 +450,11 @@ def _apply(self, func, window=None, center=None, check_minp=None, how=None, **kw
        # if we have a string function name, wrap it
        if isinstance(func, compat.string_types):
            if not hasattr(algos, func):
-                raise ValueError("we do not support this function algos.{0}".format(func))
+                raise ValueError("we do not support this function "
+                                 "algos.{0}".format(func))
 
            cfunc = getattr(algos, func)
+
            def func(arg, window, min_periods=None):
                minp = check_minp(min_periods, window)
                return cfunc(arg, window, minp, **kwargs)
@@ -451,12 +463,14 @@ def func(arg, window, min_periods=None):
        if center:
            offset = _offset(window, center)
            additional_nans = np.array([np.NaN] * offset)
+
            def calc(x):
                return func(np.concatenate((x, additional_nans)), window,
                            min_periods=self.min_periods)
        else:
+
            def calc(x):
-                return func(x,window, min_periods=self.min_periods)
+                return func(x, window, min_periods=self.min_periods)
 
        if values.ndim > 1:
            result = np.apply_along_axis(calc, self.axis, values)
@@ -470,9 +484,12 @@ def calc(x):
 
        return self._wrap_results(results, blocks, obj)
 
+
 class _Rolling_and_Expanding(_Rolling):
 
-    _shared_docs['count'] = """%(name)s count of number of non-NaN observations inside provided window."""
+    _shared_docs['count'] = """%(name)s count of number of non-NaN
+    observations inside provided window."""
+
    def count(self):
        obj = self._convert_freq()
        window = self._get_window()
@@ -481,9 +498,7 @@ def count(self):
            converted = np.isfinite(obj).astype(float)
        except TypeError:
            converted = np.isfinite(obj.astype(float)).astype(float)
-        result = self._constructor(converted,
-                                   window=window,
-                                   min_periods=0,
+        result = self._constructor(converted, window=window, min_periods=0,
                                   center=self.center).sum()
 
        result[result.isnull()] = 0
@@ -499,12 +514,15 @@ def count(self):
    *args and **kwargs are passed to the function""")
 
    def apply(self, func, args=(), kwargs={}):
-        _level = kwargs.pop('_level',None)
+        # TODO: _level is unused?
+        _level = kwargs.pop('_level', None)  # noqa
        window = self._get_window()
        offset = _offset(window, self.center)
+
        def f(arg, window, min_periods):
            minp = _use_window(min_periods, window)
-            return algos.roll_generic(arg, window, minp, offset, func, args, kwargs)
+            return algos.roll_generic(arg, window, minp, offset, func, args,
+                                      kwargs)
 
        return self._apply(f, center=False)
@@ -518,6 +536,7 @@ def sum(self, **kwargs):
    ----------
    how : string, default 'max' (DEPRECATED)
        Method for down- or re-sampling""")
+
    def max(self, how=None, **kwargs):
        if self.freq is not None and how is None:
            how = 'max'
@@ -530,6 +549,7 @@ def max(self, how=None, **kwargs):
    ----------
    how : string, default 'min' (DEPRECATED)
        Method for down- or re-sampling""")
+
    def min(self, how=None, **kwargs):
        if self.freq is not None and how is None:
            how = 'min'
@@ -545,6 +565,7 @@ def mean(self, **kwargs):
    ----------
    how : string, default 'median' (DEPRECATED)
        Method for down- or re-sampling""")
+
    def median(self, how=None, **kwargs):
        if self.freq is not None and how is None:
            how = 'median'
@@ -561,6 +582,7 @@ def median(self, how=None, **kwargs):
 
    def std(self, ddof=1, **kwargs):
        window = self._get_window()
+
        def f(arg, *args, **kwargs):
            minp = _require_min_periods(1)(self.min_periods, window)
            return _zsqrt(algos.roll_var(arg, window, minp, ddof))
@@ -577,21 +599,19 @@ def f(arg, *args, **kwargs):
    is ``N - ddof``, where ``N`` represents the number of elements.""")
 
    def var(self, ddof=1, **kwargs):
-        return self._apply('roll_var',
-                           check_minp=_require_min_periods(1),
-                           ddof=ddof,
-                           **kwargs)
+        return self._apply('roll_var', check_minp=_require_min_periods(1),
+                           ddof=ddof, **kwargs)
 
    _shared_docs['skew'] = """Unbiased %(name)s skewness"""
+
    def skew(self, **kwargs):
-        return self._apply('roll_skew',
-                           check_minp=_require_min_periods(3),
+        return self._apply('roll_skew', check_minp=_require_min_periods(3),
                           **kwargs)
 
    _shared_docs['kurt'] = """Unbiased %(name)s kurtosis"""
+
    def kurt(self, **kwargs):
-        return self._apply('roll_kurt',
-                           check_minp=_require_min_periods(4),
+        return self._apply('roll_kurt', check_minp=_require_min_periods(4),
                           **kwargs)
 
    _shared_docs['quantile'] = dedent("""
@@ -604,6 +624,7 @@ def kurt(self, **kwargs):
    def quantile(self, quantile, **kwargs):
        window = self._get_window()
+
        def f(arg, *args, **kwargs):
            minp = _use_window(self.min_periods, window)
            return algos.roll_quantile(arg, window, minp, quantile)
@@ -618,11 +639,11 @@ def f(arg, *args, **kwargs):
    other : Series, DataFrame, or ndarray, optional
        if not supplied then will default to self and produce pairwise output
    pairwise : bool, default None
-        If False then only matching columns between self and other will be used and
-        the output will be a DataFrame.
-        If True then all pairwise combinations will be calculated and the output
-        will be a Panel in the case of DataFrame inputs. In the case of missing
-        elements, only complete pairwise observations will be used.
+        If False then only matching columns between self and other will be used
+        and the output will be a DataFrame.
+        If True then all pairwise combinations will be calculated and the
+        output will be a Panel in the case of DataFrame inputs. In the case of
+        missing elements, only complete pairwise observations will be used.
    ddof : int, default 1
        Delta Degrees of Freedom. The divisor used in calculations
        is ``N - ddof``, where ``N`` represents the number of elements.""")
 
    def cov(self, other=None, pairwise=None, ddof=1, **kwargs):
        if other is None:
            other = self._selected_obj
-            pairwise = True if pairwise is None else pairwise  # only default unset
+            # only default unset
+            pairwise = True if pairwise is None else pairwise
        other = self._shallow_copy(other)
        window = self._get_window(other)
 
        def _get_cov(X, Y):
-            mean = lambda x: x.rolling(window, self.min_periods, center=self.center).mean(**kwargs)
-            count = (X+Y).rolling(window=window, center=self.center).count(**kwargs)
+            mean = lambda x: x.rolling(window, self.min_periods,
+                                       center=self.center).mean(**kwargs)
+            count = (X + Y).rolling(window=window,
+                                    center=self.center).count(**kwargs)
            bias_adj = count / (count - ddof)
            return (mean(X * Y) - mean(X) * mean(Y)) * bias_adj
-        return _flex_binary_moment(self._selected_obj, other._selected_obj, _get_cov, pairwise=bool(pairwise))
+
+        return _flex_binary_moment(self._selected_obj, other._selected_obj,
+                                   _get_cov, pairwise=bool(pairwise))
 
    _shared_docs['corr'] = dedent("""
    %(name)s sample correlation
@@ -649,31 +675,31 @@ def _get_cov(X, Y):
    other : Series, DataFrame, or ndarray, optional
        if not supplied then will default to self and produce pairwise output
    pairwise : bool, default None
-        If False then only matching columns between self and other will be used and
-        the output will be a DataFrame.
-        If True then all pairwise combinations will be calculated and the output
-        will be a Panel in the case of DataFrame inputs. In the case of missing
-        elements, only complete pairwise observations will be used.""")
+        If False then only matching columns between self and other will be used
+        and the output will be a DataFrame.
+        If True then all pairwise combinations will be calculated and the
+        output will be a Panel in the case of DataFrame inputs. In the case of
+        missing elements, only complete pairwise observations will be used.""")
 
    def corr(self, other=None, pairwise=None, **kwargs):
        if other is None:
            other = self._selected_obj
-            pairwise = True if pairwise is None else pairwise  # only default unset
+            # only default unset
+            pairwise = True if pairwise is None else pairwise
        other = self._shallow_copy(other)
        window = self._get_window(other)
 
        def _get_corr(a, b):
-            a = a.rolling(window=window,
-                          min_periods=self.min_periods,
-                          freq=self.freq,
-                          center=self.center)
-            b = b.rolling(window=window,
-                          min_periods=self.min_periods,
-                          freq=self.freq,
-                          center=self.center)
+            a = a.rolling(window=window, min_periods=self.min_periods,
+                          freq=self.freq, center=self.center)
+            b = b.rolling(window=window, min_periods=self.min_periods,
+                          freq=self.freq, center=self.center)
 
            return a.cov(b, **kwargs) / (a.std(**kwargs) * b.std(**kwargs))
-        return _flex_binary_moment(self._selected_obj, other._selected_obj, _get_corr, pairwise=bool(pairwise))
+
+        return _flex_binary_moment(self._selected_obj, other._selected_obj,
+                                   _get_corr, pairwise=bool(pairwise))
+
 
 class Rolling(_Rolling_and_Expanding):
    """
@@ -690,8 +716,8 @@ class Rolling(_Rolling_and_Expanding):
        Minimum number of observations in window required to have a value
        (otherwise result is NA).
    freq : string or DateOffset object, optional (default None) (DEPRECATED)
-        Frequency to conform the data to before computing the statistic. Specified
-        as a frequency string or DateOffset object.
+        Frequency to conform the data to before computing the statistic.
+        Specified as a frequency string or DateOffset object.
    center : boolean, default False
        Set the labels at the center of the window.
    axis : int, default 0
@@ -794,13 +820,16 @@ def quantile(self, quantile, **kwargs):
    @Appender(_doc_template)
    @Appender(_shared_docs['cov'])
    def cov(self, other=None, pairwise=None, ddof=1, **kwargs):
-        return super(Rolling, self).cov(other=other, pairwise=pairwise, ddof=ddof, **kwargs)
+        return super(Rolling, self).cov(other=other, pairwise=pairwise,
+                                        ddof=ddof, **kwargs)
 
    @Substitution(name='rolling')
    @Appender(_doc_template)
    @Appender(_shared_docs['corr'])
    def corr(self, other=None, pairwise=None, **kwargs):
-        return super(Rolling, self).corr(other=other, pairwise=pairwise, **kwargs)
+        return super(Rolling, self).corr(other=other, pairwise=pairwise,
+                                         **kwargs)
+
 
 class Expanding(_Rolling_and_Expanding):
    """
@@ -814,8 +843,8 @@ class Expanding(_Rolling_and_Expanding):
        Minimum number of observations in window required to have a value
        (otherwise result is NA).
    freq : string or DateOffset object, optional (default None) (DEPRECATED)
-        Frequency to conform the data to before computing the statistic. Specified
-        as a frequency string or DateOffset object.
+        Frequency to conform the data to before computing the statistic.
+        Specified as a frequency string or DateOffset object.
    center : boolean, default False
        Set the labels at the center of the window.
    axis : int, default 0
@@ -834,10 +863,14 @@ class Expanding(_Rolling_and_Expanding):
    of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
    """
 
-    _attributes = ['min_periods','freq','center','axis']
+    _attributes = ['min_periods', 'freq', 'center', 'axis']
 
-    def __init__(self, obj, min_periods=1, freq=None, center=False, axis=0, **kwargs):
-        return super(Expanding, self).__init__(obj=obj, min_periods=min_periods, freq=freq, center=center, axis=axis)
+    def __init__(self, obj, min_periods=1, freq=None, center=False, axis=0,
+                 **kwargs):
+        return super(Expanding, self).__init__(obj=obj,
+                                               min_periods=min_periods,
+                                               freq=freq, center=center,
+                                               axis=axis)
 
    @property
    def _constructor(self):
@@ -846,8 +879,10 @@ def _constructor(self):
    def _get_window(self, other=None):
        obj = self._selected_obj
        if other is None:
-            return max(len(obj), self.min_periods) if self.min_periods else len(obj)
-        return max((len(obj) + len(obj)), self.min_periods) if self.min_periods else (len(obj) + len(obj))
+            return (max(len(obj), self.min_periods) if self.min_periods
+                    else len(obj))
+        return (max((len(obj) + len(obj)), self.min_periods)
+                if self.min_periods else (len(obj) + len(obj)))
 
    @Substitution(name='expanding')
    @Appender(SelectionMixin._see_also_template)
@@ -933,13 +968,16 @@ def quantile(self, quantile, **kwargs):
    @Appender(_doc_template)
    @Appender(_shared_docs['cov'])
    def cov(self, other=None, pairwise=None, ddof=1, **kwargs):
-        return super(Expanding, self).cov(other=other, pairwise=pairwise, ddof=ddof, **kwargs)
+        return super(Expanding, self).cov(other=other, pairwise=pairwise,
+                                          ddof=ddof, **kwargs)
 
    @Substitution(name='expanding')
    @Appender(_doc_template)
    @Appender(_shared_docs['corr'])
    def corr(self, other=None, pairwise=None, **kwargs):
-        return super(Expanding, self).corr(other=other, pairwise=pairwise, **kwargs)
+        return super(Expanding, self).corr(other=other, pairwise=pairwise,
+                                           **kwargs)
+
 
 _bias_template = """
@@ -979,15 +1017,16 @@ class EWM(_Rolling):
    span : float, optional
        Specify decay in terms of span, :math:`\alpha = 2 / (span + 1)`
    halflife : float, optional
-        Specify decay in terms of halflife, :math:`\alpha = 1 - exp(log(0.5) / halflife)`
+        Specify decay in terms of halflife,
+        :math:`\alpha = 1 - exp(log(0.5) / halflife)`
    min_periods : int, default 0
        Minimum number of observations in window required to have a value
        (otherwise result is NA).
    freq : None or string alias / date offset object, default=None (DEPRECATED)
        Frequency to conform to before computing statistic
    adjust : boolean, default True
-        Divide by decaying adjustment factor in beginning periods to account for
-        imbalance in relative weightings (viewing EWMA as a moving average)
+        Divide by decaying adjustment factor in beginning periods to account
+        for imbalance in relative weightings (viewing EWMA as a moving average)
    ignore_na : boolean, default False
        Ignore missing values when calculating weights;
        specify True to reproduce pre-0.15.0 behavior
@@ -1004,8 +1043,8 @@ class EWM(_Rolling):
    decay parameter :math:`\alpha` is related to the span as
    :math:`\alpha = 2 / (s + 1) = 1 / (1 + c)`
 
-    where `c` is the center of mass. Given a span, the associated center of mass is
-    :math:`c = (s - 1) / 2`
+    where `c` is the center of mass. Given a span, the associated center of
+    mass is :math:`c = (s - 1) / 2`
 
    So a "20-day EWMA" would have center 9.5.
 
@@ -1013,8 +1052,8 @@ class EWM(_Rolling):
    frequency by resampling the data. This is done with the default parameters
    of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
 
-    When adjust is True (default), weighted averages are calculated using weights
-    (1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1.
+    When adjust is True (default), weighted averages are calculated using
+    weights (1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1.
 
    When adjust is False, weighted averages are calculated recursively as:
        weighted_average[0] = arg[0];
@@ -1025,18 +1064,18 @@ class EWM(_Rolling):
    average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is True), and
    (1-alpha)**2 and alpha (if adjust is False).
 
-    When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on
-    relative positions. For example, the weights of x and y used in calculating
-    the final weighted average of [x, None, y] are 1-alpha and 1 (if adjust is
-    True), and 1-alpha and alpha (if adjust is False).
+    When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based
+    on relative positions. For example, the weights of x and y used in
+    calculating the final weighted average of [x, None, y] are 1-alpha and 1
+    (if adjust is True), and 1-alpha and alpha (if adjust is False).
 
    More details can be found at
 http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-moment-functions
    """
-    _attributes = ['com','min_periods','freq','adjust','ignore_na','axis']
+    _attributes = ['com', 'min_periods', 'freq', 'adjust', 'ignore_na', 'axis']
 
-    def __init__(self, obj, com=None, span=None, halflife=None, min_periods=0, freq=None,
-                 adjust=True, ignore_na=False, axis=0):
+    def __init__(self, obj, com=None, span=None, halflife=None, min_periods=0,
+                 freq=None, adjust=True, ignore_na=False, axis=0):
        self.obj = obj
        self.com = _get_center_of_mass(com, span, halflife)
        self.min_periods = min_periods
@@ -1088,11 +1127,14 @@ def _apply(self, func, how=None, **kwargs):
            # if we have a string function name, wrap it
            if isinstance(func, compat.string_types):
                if not hasattr(algos, func):
-                    raise ValueError("we do not support this function algos.{0}".format(func))
+                    raise ValueError("we do not support this function "
+                                     "algos.{0}".format(func))
 
                cfunc = getattr(algos, func)
+
                def func(arg):
-                    return cfunc(arg, self.com, int(self.adjust), int(self.ignore_na), int(self.min_periods))
+                    return cfunc(arg, self.com, int(self.adjust),
+                                 int(self.ignore_na), int(self.min_periods))
 
            results.append(np.apply_along_axis(func, self.axis, values))
 
@@ -1110,20 +1152,18 @@ def mean(self, **kwargs):
    def std(self, bias=False, **kwargs):
        """exponential weighted moving stddev"""
        return _zsqrt(self.var(bias=bias, **kwargs))
-    vol=std
+
+    vol = std
 
    @Substitution(name='ewm')
    @Appender(_doc_template)
    @Appender(_bias_template)
    def var(self, bias=False, **kwargs):
        """exponential weighted moving variance"""
+
        def f(arg):
-            return algos.ewmcov(arg,
-                                arg,
-                                self.com,
-                                int(self.adjust),
-                                int(self.ignore_na),
-                                int(self.min_periods),
+            return algos.ewmcov(arg, arg, self.com, int(self.adjust),
+                                int(self.ignore_na), int(self.min_periods),
                                int(bias))
 
        return self._apply(f, **kwargs)
@@ -1135,22 +1175,20 @@ def cov(self, other=None, pairwise=None, bias=False, **kwargs):
        """exponential weighted sample covariance"""
        if other is None:
            other = self._selected_obj
-            pairwise = True if pairwise is None else pairwise  # only default unset
+            # only default unset
+            pairwise = True if pairwise is None else pairwise
        other = self._shallow_copy(other)
 
        def _get_cov(X, Y):
            X = self._shallow_copy(X)
            Y = self._shallow_copy(Y)
-            cov = algos.ewmcov(X._prep_values(),
-                               Y._prep_values(),
-                               self.com,
-                               int(self.adjust),
-                               int(self.ignore_na),
-                               int(self.min_periods),
-                               int(bias))
+            cov = algos.ewmcov(X._prep_values(), Y._prep_values(), self.com,
+                               int(self.adjust), int(self.ignore_na),
+                               int(self.min_periods), int(bias))
            return X._wrap_result(cov)
 
-        return _flex_binary_moment(self._selected_obj, other._selected_obj, _get_cov, pairwise=bool(pairwise))
+        return _flex_binary_moment(self._selected_obj, other._selected_obj,
+                                   _get_cov, pairwise=bool(pairwise))
 
    @Substitution(name='ewm')
    @Appender(_doc_template)
@@ -1159,14 +1197,18 @@ def corr(self, other=None, pairwise=None, **kwargs):
        """exponential weighted sample correlation"""
        if other is None:
            other = self._selected_obj
-            pairwise = True if pairwise is None else pairwise  # only default unset
+            # only default unset
+            pairwise = True if pairwise is None else pairwise
        other = self._shallow_copy(other)
 
        def _get_corr(X, Y):
            X = self._shallow_copy(X)
            Y = self._shallow_copy(Y)
+
            def _cov(x, y):
-                return algos.ewmcov(x, y, self.com, int(self.adjust), int(self.ignore_na), int(self.min_periods), 1)
+                return algos.ewmcov(x, y, self.com, int(self.adjust),
+                                    int(self.ignore_na), int(self.min_periods),
+                                    1)
 
            x_values = X._prep_values()
            y_values = Y._prep_values()
@@ -1176,25 +1218,26 @@ def _cov(x, y):
            corr = cov / _zsqrt(x_var * y_var)
            return X._wrap_result(corr)
 
-        return _flex_binary_moment(self._selected_obj, other._selected_obj, _get_corr, pairwise=bool(pairwise))
+        return _flex_binary_moment(self._selected_obj, other._selected_obj,
+                                   _get_corr, pairwise=bool(pairwise))
+
+# Helper Funcs
 
-########################
-##### Helper Funcs #####
-########################
 
 def _flex_binary_moment(arg1, arg2, f, pairwise=False):
    from pandas import Series, DataFrame, Panel
-    if not (isinstance(arg1,(np.ndarray, Series, DataFrame)) and
-            isinstance(arg2,(np.ndarray, Series, DataFrame))):
+    if not (isinstance(arg1, (np.ndarray, Series, DataFrame)) and
+            isinstance(arg2, (np.ndarray, Series, DataFrame))):
        raise TypeError("arguments to moment function must be of type "
-                            "np.ndarray/Series/DataFrame")
+                        "np.ndarray/Series/DataFrame")
 
-    if isinstance(arg1, (np.ndarray, Series)) and \
-            isinstance(arg2, (np.ndarray,Series)):
+    if (isinstance(arg1, (np.ndarray, Series)) and
+            isinstance(arg2, (np.ndarray, Series))):
        X, Y = _prep_binary(arg1, arg2)
        return f(X, Y)
 
    elif isinstance(arg1, DataFrame):
+
        def dataframe_from_int_dict(data, frame_template):
            result = DataFrame(data, index=frame_template.index)
            if len(result.columns) > 0:
@@ -1221,16 +1264,18 @@ def dataframe_from_int_dict(data, frame_template):
                for col in res_columns:
                    if col in X and col in Y:
                        results[col] = f(X[col], Y[col])
-                return DataFrame(results, index=X.index, columns=res_columns)
+                return DataFrame(results, index=X.index,
+                                 columns=res_columns)
            elif pairwise is True:
                results = defaultdict(dict)
                for i, k1 in enumerate(arg1.columns):
                    for j, k2 in enumerate(arg2.columns):
-                        if j<i and arg2 is arg1:
+                        if j < i and arg2 is arg1:
                            # Symmetric case
results[i][j] = results[j][i] else: - results[i][j] = f(*_prep_binary(arg1.iloc[:, i], arg2.iloc[:, j])) + results[i][j] = f(*_prep_binary(arg1.iloc[:, i], + arg2.iloc[:, j])) p = Panel.from_dict(results).swapaxes('items', 'major') if len(p.major_axis) > 0: p.major_axis = arg1.columns[p.major_axis] @@ -1248,6 +1293,7 @@ def dataframe_from_int_dict(data, frame_template): else: return _flex_binary_moment(arg2, arg1, f) + def _get_center_of_mass(com, span, halflife): valid_count = len([x for x in [com, span, halflife] if x is not None]) if valid_count > 1: @@ -1265,6 +1311,7 @@ def _get_center_of_mass(com, span, halflife): return float(com) + def _offset(window, center): if not com.is_integer(window): window = len(window) @@ -1274,20 +1321,24 @@ def _offset(window, center): except: return offset.astype(int) + def _require_min_periods(p): def _check_func(minp, window): if minp is None: return window else: return max(p, minp) + return _check_func + def _use_window(minp, window): if minp is None: return window else: return minp + def _zsqrt(x): result = np.sqrt(x) mask = x < 0 @@ -1302,6 +1353,7 @@ def _zsqrt(x): return result + def _prep_binary(arg1, arg2): if not isinstance(arg2, type(arg1)): raise Exception('Input arrays must be of the same type!') @@ -1312,6 +1364,7 @@ def _prep_binary(arg1, arg2): return X, Y + def _validate_win_type(win_type, kwargs): # may pop from kwargs arg_map = {'kaiser': ['beta'], @@ -1319,8 +1372,8 @@ def _validate_win_type(win_type, kwargs): 'general_gaussian': ['power', 'width'], 'slepian': ['width']} if win_type in arg_map: - return tuple([win_type] + - _pop_args(win_type, arg_map[win_type], kwargs)) + return tuple([win_type] + _pop_args(win_type, arg_map[win_type], + kwargs)) return win_type @@ -1333,9 +1386,9 @@ def _pop_args(win_type, arg_names, kwargs): all_args.append(kwargs.pop(n)) return all_args -############################# -##### top-level exports ##### -############################# + +# Top-level exports + def rolling(obj, 
win_type=None, **kwds): from pandas import Series, DataFrame @@ -1346,20 +1399,28 @@ def rolling(obj, win_type=None, **kwds): return Window(obj, win_type=win_type, **kwds) return Rolling(obj, **kwds) + + rolling.__doc__ = Window.__doc__ + def expanding(obj, **kwds): from pandas import Series, DataFrame if not isinstance(obj, (Series, DataFrame)): raise TypeError('invalid type: %s' % type(obj)) return Expanding(obj, **kwds) + + expanding.__doc__ = Expanding.__doc__ + def ewm(obj, **kwds): from pandas import Series, DataFrame if not isinstance(obj, (Series, DataFrame)): raise TypeError('invalid type: %s' % type(obj)) return EWM(obj, **kwds) + + ewm.__doc__ = EWM.__doc__
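The EWM docstrings being reflowed in this diff quote several parameter relations (halflife to alpha, span to center of mass, and the adjust=True weighting). As a sanity check on those formulas, here is a small stdlib-only sketch, independent of pandas itself:

```python
import math

def alpha_from_halflife(halflife):
    # alpha = 1 - exp(log(0.5) / halflife); a halflife of 1 gives alpha = 0.5
    return 1 - math.exp(math.log(0.5) / halflife)

def com_from_span(span):
    # c = (s - 1) / 2, so a "20-day EWMA" has center of mass 9.5
    return (span - 1) / 2.0

def ewma_adjusted(values, alpha):
    # adjust=True: weights (1-alpha)**(n-1), ..., (1-alpha), 1
    weights = [(1 - alpha) ** i for i in range(len(values) - 1, -1, -1)]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

These mirror the formulas quoted in the EWM docstring; pandas evaluates the same math in its Cython kernels rather than in Python.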
And residual warnings in categorical. What should we do with api/sparse/matrix, as those are all unused imports? A `# noqa` for each line?
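For reference, the two suppression styles at issue here (both appear in the pandas/io diff below) look like this:

```python
# File-level suppression, as added to pandas/io/api.py and pandas/io/ga.py:
# a bare "flake8: noqa" comment makes flake8 skip the entire module,
# which suits modules whose imports exist only for re-export.
# flake8: noqa

# Per-line suppression, as used for the compat imports in
# pandas/io/common.py: only this one line's warnings are silenced.
from functools import wraps  # noqa
```

Either form answers the question above: modules like `api` that are pure re-export surfaces can take the file-level marker, while individual compat imports get a per-line `# noqa`.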
https://api.github.com/repos/pandas-dev/pandas/pulls/12099
2016-01-20T12:59:05Z
2016-01-21T06:46:19Z
null
2016-01-21T06:46:19Z
CLN: fix all flake8 warnings in pandas/io
diff --git a/pandas/io/api.py b/pandas/io/api.py index fedde462c74b7..3ac4c670c8466 100644 --- a/pandas/io/api.py +++ b/pandas/io/api.py @@ -2,6 +2,8 @@ Data IO api """ +# flake8: noqa + from pandas.io.parsers import read_csv, read_table, read_fwf from pandas.io.clipboard import read_clipboard from pandas.io.excel import ExcelFile, ExcelWriter, read_excel diff --git a/pandas/io/clipboard.py b/pandas/io/clipboard.py index dfa46156aaead..2109e1c5d6d4c 100644 --- a/pandas/io/clipboard.py +++ b/pandas/io/clipboard.py @@ -42,7 +42,7 @@ def read_clipboard(**kwargs): # pragma: no cover # 1 3 4 counts = set([x.lstrip().count('\t') for x in lines]) - if len(lines)>1 and len(counts) == 1 and counts.pop() != 0: + if len(lines) > 1 and len(counts) == 1 and counts.pop() != 0: kwargs['sep'] = '\t' if kwargs.get('sep') is None and kwargs.get('delim_whitespace') is None: diff --git a/pandas/io/common.py b/pandas/io/common.py index e46f609077810..811d42b7b4b9e 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -30,20 +30,19 @@ from urllib.request import urlopen, pathname2url _urlopen = urlopen from urllib.parse import urlparse as parse_url - import urllib.parse as compat_parse from urllib.parse import (uses_relative, uses_netloc, uses_params, urlencode, urljoin) from urllib.error import URLError - from http.client import HTTPException + from http.client import HTTPException # noqa else: from urllib2 import urlopen as _urlopen - from urllib import urlencode, pathname2url + from urllib import urlencode, pathname2url # noqa from urlparse import urlparse as parse_url from urlparse import uses_relative, uses_netloc, uses_params, urljoin - from urllib2 import URLError - from httplib import HTTPException - from contextlib import contextmanager, closing - from functools import wraps + from urllib2 import URLError # noqa + from httplib import HTTPException # noqa + from contextlib import contextmanager, closing # noqa + from functools import wraps # noqa # @wraps(_urlopen) 
@contextmanager @@ -66,6 +65,7 @@ class DtypeWarning(Warning): try: from boto.s3 import key + class BotoFileLikeReader(key.Key): """boto Key modified to be more file-like @@ -78,10 +78,12 @@ class BotoFileLikeReader(key.Key): Also adds a `readline` function which will split the returned values by the `\n` character. """ + def __init__(self, *args, **kwargs): encoding = kwargs.pop("encoding", None) # Python 2 compat super(BotoFileLikeReader, self).__init__(*args, **kwargs) - self.finished_read = False # Add a flag to mark the end of the read. + # Add a flag to mark the end of the read. + self.finished_read = False self.buffer = "" self.lines = [] if encoding is None and compat.PY3: @@ -121,7 +123,8 @@ def readline(self): raise StopIteration if self.encoding: - self.buffer = "{}{}".format(self.buffer, self.read(8192).decode(self.encoding)) + self.buffer = "{}{}".format( + self.buffer, self.read(8192).decode(self.encoding)) else: self.buffer = "{}{}".format(self.buffer, self.read(8192)) @@ -211,6 +214,7 @@ def _expand_user(filepath_or_buffer): return os.path.expanduser(filepath_or_buffer) return filepath_or_buffer + def _validate_header_arg(header): if isinstance(header, bool): raise TypeError("Passing a bool to header is invalid. 
" @@ -218,6 +222,7 @@ def _validate_header_arg(header): "header=int or list-like of ints to specify " "the row(s) making up the column names") + def _stringify_path(filepath_or_buffer): """Return the argument coerced to a string if it was a pathlib.Path or a py.path.local @@ -263,8 +268,9 @@ def get_filepath_or_buffer(filepath_or_buffer, encoding=None, else: compression = None # cat on the compression to the tuple returned by the function - to_return = list(maybe_read_encoded_stream(req, encoding, compression)) + \ - [compression] + to_return = (list(maybe_read_encoded_stream(req, encoding, + compression)) + + [compression]) return tuple(to_return) if _is_s3_url(filepath_or_buffer): @@ -467,4 +473,4 @@ def _check_as_is(x): # write to the target stream self.stream.write(data) # empty queue - self.queue.truncate(0) \ No newline at end of file + self.queue.truncate(0) diff --git a/pandas/io/data.py b/pandas/io/data.py index ac6f14e846bec..5fa440e7bb1ff 100644 --- a/pandas/io/data.py +++ b/pandas/io/data.py @@ -3,6 +3,8 @@ """ +# flake8: noqa + import warnings import tempfile import datetime as dt diff --git a/pandas/io/excel.py b/pandas/io/excel.py index 106d263f56093..0642079cc5b34 100644 --- a/pandas/io/excel.py +++ b/pandas/io/excel.py @@ -2,23 +2,24 @@ Module parse to/from Excel """ -#---------------------------------------------------------------------- +# --------------------------------------------------------------------- # ExcelFile class +from datetime import datetime, date, time, MINYEAR + import os -import datetime import abc import numpy as np from pandas.core.frame import DataFrame from pandas.io.parsers import TextParser -from pandas.io.common import _is_url, _urlopen, _validate_header_arg, get_filepath_or_buffer, _is_s3_url +from pandas.io.common import (_is_url, _urlopen, _validate_header_arg, + get_filepath_or_buffer, _is_s3_url) from pandas.tseries.period import Period from pandas import json from pandas.compat import (map, zip, reduce, range, 
lrange, u, add_metaclass, - BytesIO, string_types) + string_types) from pandas.core import config from pandas.core.common import pprint_thing -from pandas.util.decorators import Appender import pandas.compat as compat import pandas.compat.openpyxl_compat as openpyxl_compat import pandas.core.common as com @@ -56,11 +57,11 @@ def get_writer(engine_name): # with version-less openpyxl engine # make sure we make the intelligent choice for the user if LooseVersion(openpyxl.__version__) < '2.0.0': - return _writers['openpyxl1'] + return _writers['openpyxl1'] elif LooseVersion(openpyxl.__version__) < '2.2.0': - return _writers['openpyxl20'] + return _writers['openpyxl20'] else: - return _writers['openpyxl22'] + return _writers['openpyxl22'] except ImportError: # fall through to normal exception handling below pass @@ -70,6 +71,7 @@ def get_writer(engine_name): except KeyError: raise ValueError("No Excel writer '%s'" % engine_name) + def read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, index_col=None, names=None, parse_cols=None, parse_dates=False, date_parser=None, na_values=None, thousands=None, @@ -86,15 +88,16 @@ def read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, file could be file://localhost/path/to/workbook.xlsx sheetname : string, int, mixed list of strings/ints, or None, default 0 - Strings are used for sheet names, Integers are used in zero-indexed sheet - positions. + Strings are used for sheet names, Integers are used in zero-indexed + sheet positions. Lists of strings/integers are used to request multiple sheets. Specify None to get all sheets. str|int -> DataFrame is returned. - list|None -> Dict of DataFrames is returned, with keys representing sheets. + list|None -> Dict of DataFrames is returned, with keys representing + sheets. 
Available Cases @@ -150,18 +153,16 @@ def read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, data will be read in as floats: Excel stores all numbers as floats internally has_index_names : boolean, default None - DEPRECATED: for version 0.17+ index names will be automatically inferred - based on index_col. To read Excel output from 0.16.2 and prior that - had saved index names, use True. + DEPRECATED: for version 0.17+ index names will be automatically + inferred based on index_col. To read Excel output from 0.16.2 and + prior that had saved index names, use True. Returns ------- parsed : DataFrame or Dict of DataFrames - DataFrame from the passed in Excel file. See notes in sheetname argument - for more information on when a Dict of Dataframes is returned. - + DataFrame from the passed in Excel file. See notes in sheetname + argument for more information on when a Dict of Dataframes is returned. """ - if not isinstance(io, ExcelFile): io = ExcelFile(io, engine=engine) @@ -172,6 +173,7 @@ def read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, convert_float=convert_float, has_index_names=has_index_names, skip_footer=skip_footer, converters=converters, **kwds) + class ExcelFile(object): """ Class for parsing tabular excel sheets into DataFrame objects. @@ -185,6 +187,7 @@ class ExcelFile(object): If io is not a buffer or path, this must be set to identify io. 
Acceptable values are None or xlrd """ + def __init__(self, io, **kwds): import xlrd # throw an ImportError if we need to @@ -223,7 +226,8 @@ def __init__(self, io, **kwds): def parse(self, sheetname=0, header=0, skiprows=None, skip_footer=0, index_col=None, parse_cols=None, parse_dates=False, date_parser=None, na_values=None, thousands=None, - convert_float=True, has_index_names=None, converters=None, **kwds): + convert_float=True, has_index_names=None, + converters=None, **kwds): """ Parse specified sheet(s) into a DataFrame @@ -313,7 +317,7 @@ def _parse_excel(self, sheetname=0, header=0, skiprows=None, skip_footer=0, epoch1904 = self.book.datemode - def _parse_cell(cell_contents,cell_typ): + def _parse_cell(cell_contents, cell_typ): """converts the contents of the cell into a pandas appropriate object""" @@ -327,20 +331,20 @@ def _parse_cell(cell_contents,cell_typ): # so we treat dates on the epoch as times only. # Also, Excel supports 1900 and 1904 epochs. year = (cell_contents.timetuple())[0:3] - if ((not epoch1904 and year == (1899, 12, 31)) - or (epoch1904 and year == (1904, 1, 1))): - cell_contents = datetime.time(cell_contents.hour, - cell_contents.minute, - cell_contents.second, - cell_contents.microsecond) + if ((not epoch1904 and year == (1899, 12, 31)) or + (epoch1904 and year == (1904, 1, 1))): + cell_contents = time(cell_contents.hour, + cell_contents.minute, + cell_contents.second, + cell_contents.microsecond) else: # Use the xlrd <= 0.9.2 date handling. dt = xldate.xldate_as_tuple(cell_contents, epoch1904) - if dt[0] < datetime.MINYEAR: - cell_contents = datetime.time(*dt[3:]) + if dt[0] < MINYEAR: + cell_contents = time(*dt[3:]) else: - cell_contents = datetime.datetime(*dt) + cell_contents = datetime(*dt) elif cell_typ == XL_CELL_ERROR: cell_contents = np.nan @@ -362,7 +366,7 @@ def _parse_cell(cell_contents,cell_typ): ret_dict = False - #Keep sheetname to maintain backwards compatibility. + # Keep sheetname to maintain backwards compatibility. 
if isinstance(sheetname, list): sheets = sheetname ret_dict = True @@ -372,7 +376,7 @@ def _parse_cell(cell_contents,cell_typ): else: sheets = [sheetname] - #handle same-type duplicates. + # handle same-type duplicates. sheets = list(set(sheets)) output = {} @@ -397,7 +401,7 @@ def _parse_cell(cell_contents,cell_typ): should_parse[j] = self._should_parse(j, parse_cols) if parse_cols is None or should_parse[j]: - row.append(_parse_cell(value,typ)) + row.append(_parse_cell(value, typ)) data.append(row) if sheet.nrows == 0: @@ -416,7 +420,8 @@ def _parse_cell(cell_contents,cell_typ): if com.is_integer(skiprows): row += skiprows data[row] = _fill_mi_header(data[row]) - header_name, data[row] = _pop_header_name(data[row], index_col) + header_name, data[row] = _pop_header_name( + data[row], index_col) header_names.append(header_name) else: data[header] = _trim_excel_header(data[header]) @@ -450,14 +455,14 @@ def _parse_cell(cell_contents,cell_typ): **kwds) output[asheetname] = parser.read() - output[asheetname].columns = output[asheetname].columns.set_names(header_names) + output[asheetname].columns = output[ + asheetname].columns.set_names(header_names) if ret_dict: return output else: return output[asheetname] - @property def sheet_names(self): return self.book.sheet_names() @@ -481,6 +486,7 @@ def _trim_excel_header(row): row = row[1:] return row + def _fill_mi_header(row): # forward fill blanks entries # from headers if parsing as MultiIndex @@ -493,6 +499,8 @@ def _fill_mi_header(row): return row # fill blank if index_col not None + + def _pop_header_name(row, index_col): """ (header, new_data) for header rows in MultiIndex parsing""" none_fill = lambda x: None if x == '' else x @@ -503,7 +511,8 @@ def _pop_header_name(row, index_col): else: # pop out header name and fill w/ blank i = index_col if not com.is_list_like(index_col) else max(index_col) - return none_fill(row[i]), row[:i] + [''] + row[i+1:] + return none_fill(row[i]), row[:i] + [''] + row[i + 1:] + def 
_conv_value(val): # Convert numpy types to Python types for the Excel writers. @@ -722,9 +731,8 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): for cell in cells: colletter = get_column_letter(startcol + cell.col + 1) xcell = wks.cell("%s%s" % (colletter, startrow + cell.row + 1)) - if (isinstance(cell.val, compat.string_types) - and xcell.data_type_for_value(cell.val) - != xcell.TYPE_STRING): + if (isinstance(cell.val, compat.string_types) and + xcell.data_type_for_value(cell.val) != xcell.TYPE_STRING): xcell.set_value_explicit(cell.val) else: xcell.value = _conv_value(cell.val) @@ -735,9 +743,9 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): xcell.style.__setattr__(field, style.__getattribute__(field)) - if isinstance(cell.val, datetime.datetime): + if isinstance(cell.val, datetime): xcell.style.number_format.format_code = self.datetime_format - elif isinstance(cell.val, datetime.date): + elif isinstance(cell.val, date): xcell.style.number_format.format_code = self.date_format if cell.mergestart is not None and cell.mergeend is not None: @@ -825,12 +833,12 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): style_kwargs = {} # Apply format codes before cell.style to allow override - if isinstance(cell.val, datetime.datetime): + if isinstance(cell.val, datetime): style_kwargs.update(self._convert_to_style_kwargs({ - 'number_format':{'format_code': self.datetime_format}})) - elif isinstance(cell.val, datetime.date): + 'number_format': {'format_code': self.datetime_format}})) + elif isinstance(cell.val, date): style_kwargs.update(self._convert_to_style_kwargs({ - 'number_format':{'format_code': self.date_format}})) + 'number_format': {'format_code': self.date_format}})) if cell.style: style_kwargs.update(self._convert_to_style_kwargs(cell.style)) @@ -896,14 +904,13 @@ def _convert_to_style_kwargs(cls, style_dict): if k in _style_key_map: k = _style_key_map[k] _conv_to_x = getattr(cls, 
'_convert_to_{0}'.format(k), - lambda x: None) + lambda x: None) new_v = _conv_to_x(v) if new_v: style_kwargs[k] = new_v return style_kwargs - @classmethod def _convert_to_color(cls, color_spec): """ @@ -932,7 +939,6 @@ def _convert_to_color(cls, color_spec): else: return Color(**color_spec) - @classmethod def _convert_to_font(cls, font_dict): """ @@ -981,7 +987,6 @@ def _convert_to_font(cls, font_dict): return Font(**font_kwargs) - @classmethod def _convert_to_stop(cls, stop_seq): """ @@ -999,7 +1004,6 @@ def _convert_to_stop(cls, stop_seq): return map(cls._convert_to_color, stop_seq) - @classmethod def _convert_to_fill(cls, fill_dict): """ @@ -1064,7 +1068,6 @@ def _convert_to_fill(cls, fill_dict): except TypeError: return GradientFill(**gfill_kwargs) - @classmethod def _convert_to_side(cls, side_spec): """ @@ -1100,7 +1103,6 @@ def _convert_to_side(cls, side_spec): return Side(**side_kwargs) - @classmethod def _convert_to_border(cls, border_dict): """ @@ -1144,7 +1146,6 @@ def _convert_to_border(cls, border_dict): return Border(**border_kwargs) - @classmethod def _convert_to_alignment(cls, alignment_dict): """ @@ -1168,7 +1169,6 @@ def _convert_to_alignment(cls, alignment_dict): return Alignment(**alignment_dict) - @classmethod def _convert_to_number_format(cls, number_format_dict): """ @@ -1212,6 +1212,7 @@ def _convert_to_protection(cls, protection_dict): register_writer(_Openpyxl20Writer) + class _Openpyxl22Writer(_Openpyxl20Writer): """ Note: Support for OpenPyxl v2.2 is currently EXPERIMENTAL (GH7565). @@ -1221,8 +1222,6 @@ class _Openpyxl22Writer(_Openpyxl20Writer): def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): # Write the frame cells using openpyxl. 
- from openpyxl import styles - sheet_name = self._get_sheet_name(sheet_name) _style_cache = {} @@ -1236,9 +1235,9 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): for cell in cells: xcell = wks.cell( - row=startrow + cell.row + 1, - column=startcol + cell.col + 1 - ) + row=startrow + cell.row + 1, + column=startcol + cell.col + 1 + ) xcell.value = _conv_value(cell.val) style_kwargs = {} @@ -1256,14 +1255,15 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): if cell.mergestart is not None and cell.mergeend is not None: wks.merge_cells( - start_row=startrow + cell.row + 1, - start_column=startcol + cell.col + 1, - end_column=startcol + cell.mergeend + 1, - end_row=startrow + cell.mergestart + 1 - ) + start_row=startrow + cell.row + 1, + start_column=startcol + cell.col + 1, + end_column=startcol + cell.mergeend + 1, + end_row=startrow + cell.mergestart + 1 + ) # When cells are merged only the top-left cell is preserved - # The behaviour of the other cells in a merged range is undefined + # The behaviour of the other cells in a merged range is + # undefined if style_kwargs: first_row = startrow + cell.row + 1 last_row = startrow + cell.mergestart + 1 @@ -1281,6 +1281,7 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): register_writer(_Openpyxl22Writer) + class _XlwtWriter(ExcelWriter): engine = 'xlwt' supported_extensions = ('.xls',) @@ -1320,9 +1321,9 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): val = _conv_value(cell.val) num_format_str = None - if isinstance(cell.val, datetime.datetime): + if isinstance(cell.val, datetime): num_format_str = self.datetime_format - elif isinstance(cell.val, datetime.date): + elif isinstance(cell.val, date): num_format_str = self.date_format stylekey = json.dumps(cell.style) @@ -1443,9 +1444,9 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): val = _conv_value(cell.val) num_format_str = None - if 
isinstance(cell.val, datetime.datetime): + if isinstance(cell.val, datetime): num_format_str = self.datetime_format - elif isinstance(cell.val, datetime.date): + elif isinstance(cell.val, date): num_format_str = self.date_format stylekey = json.dumps(cell.style) @@ -1500,11 +1501,11 @@ def _convert_to_style(self, style_dict, num_format_str=None): # Map the alignment to XlsxWriter alignment properties. alignment = style_dict.get('alignment') if alignment: - if (alignment.get('horizontal') - and alignment['horizontal'] == 'center'): + if (alignment.get('horizontal') and + alignment['horizontal'] == 'center'): xl_format.set_align('center') - if (alignment.get('vertical') - and alignment['vertical'] == 'top'): + if (alignment.get('vertical') and + alignment['vertical'] == 'top'): xl_format.set_align('top') # Map the cell borders to XlsxWriter border properties. diff --git a/pandas/io/ga.py b/pandas/io/ga.py index a6f9c9ed9467f..6dd0bb7472c37 100644 --- a/pandas/io/ga.py +++ b/pandas/io/ga.py @@ -4,6 +4,8 @@ 3. Goto APIs and register for OAuth2.0 for installed applications 4. 
Download JSON secret file and move into same directory as this file """ +# flake8: noqa + from datetime import datetime import re from pandas import compat diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py index fff36a82529e3..4bf46f199c34a 100644 --- a/pandas/io/gbq.py +++ b/pandas/io/gbq.py @@ -12,9 +12,9 @@ from pandas.core.api import DataFrame from pandas.tools.merge import concat from pandas.core.common import PandasError -from pandas.util.decorators import deprecate from pandas.compat import lzip, bytes_to_str + def _check_google_client_version(): try: @@ -28,11 +28,16 @@ def _check_google_client_version(): else: google_api_minimum_version = '1.2.0' - _GOOGLE_API_CLIENT_VERSION = pkg_resources.get_distribution('google-api-python-client').version + _GOOGLE_API_CLIENT_VERSION = pkg_resources.get_distribution( + 'google-api-python-client').version - if StrictVersion(_GOOGLE_API_CLIENT_VERSION) < StrictVersion(google_api_minimum_version): - raise ImportError("pandas requires google-api-python-client >= {0} for Google BigQuery support, " - "current version {1}".format(google_api_minimum_version, _GOOGLE_API_CLIENT_VERSION)) + if (StrictVersion(_GOOGLE_API_CLIENT_VERSION) < + StrictVersion(google_api_minimum_version)): + raise ImportError("pandas requires google-api-python-client >= {0} " + "for Google BigQuery support, " + "current version {1}" + .format(google_api_minimum_version, + _GOOGLE_API_CLIENT_VERSION)) logger = logging.getLogger('pandas.io.gbq') logger.setLevel(logging.ERROR) @@ -87,7 +92,8 @@ class InvalidSchema(PandasError, ValueError): class NotFoundException(PandasError, ValueError): """ - Raised when the project_id, table or dataset provided in the query could not be found. + Raised when the project_id, table or dataset provided in the query could + not be found. 
""" pass @@ -118,15 +124,16 @@ def __init__(self, project_id, reauth=False): def test_google_api_imports(self): try: - import httplib2 - from apiclient.discovery import build - from apiclient.errors import HttpError - from oauth2client.client import AccessTokenRefreshError - from oauth2client.client import OAuth2WebServerFlow - from oauth2client.file import Storage - from oauth2client.tools import run_flow, argparser + import httplib2 # noqa + from apiclient.discovery import build # noqa + from apiclient.errors import HttpError # noqa + from oauth2client.client import AccessTokenRefreshError # noqa + from oauth2client.client import OAuth2WebServerFlow # noqa + from oauth2client.file import Storage # noqa + from oauth2client.tools import run_flow, argparser # noqa except ImportError as e: - raise ImportError("Missing module required for Google BigQuery support: {0}".format(str(e))) + raise ImportError("Missing module required for Google BigQuery " + "support: {0}".format(str(e))) def get_credentials(self): from oauth2client.client import OAuth2WebServerFlow @@ -135,10 +142,12 @@ def get_credentials(self): _check_google_client_version() - flow = OAuth2WebServerFlow(client_id='495642085510-k0tmvj2m941jhre2nbqka17vqpjfddtd.apps.googleusercontent.com', - client_secret='kOc9wMptUtxkcIFbtZCcrEAc', - scope='https://www.googleapis.com/auth/bigquery', - redirect_uri='urn:ietf:wg:oauth:2.0:oob') + flow = OAuth2WebServerFlow( + client_id=('495642085510-k0tmvj2m941jhre2nbqka17vqpjfddtd' + '.apps.googleusercontent.com'), + client_secret='kOc9wMptUtxkcIFbtZCcrEAc', + scope='https://www.googleapis.com/auth/bigquery', + redirect_uri='urn:ietf:wg:oauth:2.0:oob') storage = Storage('bigquery_credentials.dat') credentials = storage.get() @@ -163,7 +172,8 @@ def get_service(credentials): @staticmethod def process_http_error(ex): - # See `BigQuery Troubleshooting Errors <https://cloud.google.com/bigquery/troubleshooting-errors>`__ + # See `BigQuery Troubleshooting Errors + # 
<https://cloud.google.com/bigquery/troubleshooting-errors>`__ status = json.loads(bytes_to_str(ex.content))['error'] errors = status.get('errors', None) @@ -173,7 +183,8 @@ def process_http_error(ex): reason = error['reason'] message = error['message'] - raise GenericGBQException("Reason: {0}, Message: {1}".format(reason, message)) + raise GenericGBQException( + "Reason: {0}, Message: {1}".format(reason, message)) raise GenericGBQException(errors) @@ -186,13 +197,17 @@ def process_insert_errors(insert_errors, verbose): reason = error['reason'] message = error['message'] location = error['location'] - error_message = 'Error at Row: {0}, Reason: {1}, Location: {2}, Message: {3}'.format(row, reason, location, message) + error_message = ('Error at Row: {0}, Reason: {1}, ' + 'Location: {2}, Message: {3}' + .format(row, reason, location, message)) # Report all error messages if verbose is set if verbose: print(error_message) else: - raise StreamingInsertError(error_message + '\nEnable verbose logging to see all errors') + raise StreamingInsertError(error_message + + '\nEnable verbose logging to ' + 'see all errors') raise StreamingInsertError @@ -207,15 +222,18 @@ def run_query(self, query, verbose=True): 'configuration': { 'query': { 'query': query - # 'allowLargeResults', 'createDisposition', 'preserveNulls', destinationTable, useQueryCache + # 'allowLargeResults', 'createDisposition', + # 'preserveNulls', destinationTable, useQueryCache } } } try: - query_reply = job_collection.insert(projectId=self.project_id, body=job_data).execute() + query_reply = job_collection.insert( + projectId=self.project_id, body=job_data).execute() except AccessTokenRefreshError: - raise AccessDenied("The credentials have been revoked or expired, please re-run the application " + raise AccessDenied("The credentials have been revoked or expired, " + "please re-run the application " "to re-authorize") except HttpError as ex: self.process_http_error(ex) @@ -226,8 +244,9 @@ def run_query(self, 
query, verbose=True): if verbose: print('Waiting for job to complete...') try: - query_reply = job_collection.getQueryResults(projectId=job_reference['projectId'], - jobId=job_reference['jobId']).execute() + query_reply = job_collection.getQueryResults( + projectId=job_reference['projectId'], + jobId=job_reference['jobId']).execute() except HttpError as ex: self.process_http_error(ex) @@ -246,9 +265,9 @@ def run_query(self, query, verbose=True): page_token = query_reply.get('pageToken', None) if not page_token and current_row < total_rows: - raise InvalidPageToken( - "Required pageToken was missing. Received {0} of {1} rows".format(current_row, - total_rows)) + raise InvalidPageToken("Required pageToken was missing. " + "Received {0} of {1} rows" + .format(current_row, total_rows)) elif page_token in seen_page_tokens: raise InvalidPageToken("A duplicate pageToken was returned") @@ -257,9 +276,9 @@ def run_query(self, query, verbose=True): try: query_reply = job_collection.getQueryResults( - projectId=job_reference['projectId'], - jobId=job_reference['jobId'], - pageToken=page_token).execute() + projectId=job_reference['projectId'], + jobId=job_reference['jobId'], + pageToken=page_token).execute() except HttpError as ex: self.process_http_error(ex) @@ -290,23 +309,28 @@ def load_data(self, dataframe, dataset_id, table_id, chunksize, verbose): if (len(rows) % chunksize == 0) or (remaining_rows == 0): if verbose: - print("\rStreaming Insert is {0}% Complete".format(((total_rows - remaining_rows) * 100) / total_rows)) + print("\rStreaming Insert is {0}% Complete".format( + ((total_rows - remaining_rows) * 100) / total_rows)) body = {'rows': rows} try: response = self.service.tabledata().insertAll( - projectId = self.project_id, - datasetId = dataset_id, - tableId = table_id, - body = body).execute() + projectId=self.project_id, + datasetId=dataset_id, + tableId=table_id, + body=body).execute() except HttpError as ex: self.process_http_error(ex) - # For streaming 
inserts, even if you receive a success HTTP response code, you'll need to check the - insertErrors property of the response to determine if the row insertions were successful, because - it's possible that BigQuery was only partially successful at inserting the rows. - # See the `Success HTTP Response Codes <https://cloud.google.com/bigquery/streaming-data-into-bigquery#troubleshooting>`__ + # For streaming inserts, even if you receive a success HTTP + # response code, you'll need to check the insertErrors property + # of the response to determine if the row insertions were + # successful, because it's possible that BigQuery was only + # partially successful at inserting the rows. See the `Success + # HTTP Response Codes + # <https://cloud.google.com/bigquery/ + # streaming-data-into-bigquery#troubleshooting>`__ # section insert_errors = response.get('insertErrors', None) @@ -332,16 +356,20 @@ def verify_schema(self, dataset_id, table_id, schema): except HttpError as ex: self.process_http_error(ex) - def delete_and_recreate_table(self, dataset_id, table_id, table_schema, verbose): + def delete_and_recreate_table(self, dataset_id, table_id, + table_schema, verbose): delay = 0 - # Changes to table schema may take up to 2 minutes as of May 2015 - # See `Issue 191 <https://code.google.com/p/google-bigquery/issues/detail?id=191>`__ - # Compare previous schema with new schema to determine if there should be a 120 second delay + # Changes to table schema may take up to 2 minutes as of May 2015. See + # `Issue 191 + # <https://code.google.com/p/google-bigquery/issues/detail?id=191>`__ + # Compare previous schema with new schema to determine if there should + # be a 120 second delay if not self.verify_schema(dataset_id, table_id, table_schema): if verbose: - print('The existing table has a different schema. Please wait 2 minutes. See Google BigQuery issue #191') + print('The existing table has a different schema. ' + 'Please wait 2 minutes. 
See Google BigQuery issue #191') delay = 120 table = _Table(self.project_id, dataset_id) @@ -351,10 +379,13 @@ def delete_and_recreate_table(self, dataset_id, table_id, table_schema, verbose) def _parse_data(schema, rows): - # see: http://pandas.pydata.org/pandas-docs/dev/missing_data.html#missing-data-casting-rules-and-indexing + # see: + # http://pandas.pydata.org/pandas-docs/dev/missing_data.html + # #missing-data-casting-rules-and-indexing dtype_map = {'INTEGER': np.dtype(float), 'FLOAT': np.dtype(float), - 'TIMESTAMP': 'M8[ns]'} # This seems to be buggy without nanosecond indicator + # This seems to be buggy without nanosecond indicator + 'TIMESTAMP': 'M8[ns]'} fields = schema['fields'] col_types = [field['type'] for field in fields] @@ -386,15 +417,17 @@ def _parse_entry(field_value, field_type): return field_value -def read_gbq(query, project_id=None, index_col=None, col_order=None, reauth=False, verbose=True): +def read_gbq(query, project_id=None, index_col=None, col_order=None, + reauth=False, verbose=True): """Load data from Google BigQuery. THIS IS AN EXPERIMENTAL LIBRARY - The main method a user calls to execute a Query in Google BigQuery and read results - into a pandas DataFrame using the v2 Google API client for Python. Documentation for - the API is available at https://developers.google.com/api-client-library/python/. - Authentication to the Google BigQuery service is via OAuth 2.0 using the product name + The main method a user calls to execute a Query in Google BigQuery and read + results into a pandas DataFrame using the v2 Google API client for Python. + Documentation for the API is available at + https://developers.google.com/api-client-library/python/. Authentication + to the Google BigQuery service is via OAuth 2.0 using the product name 'pandas GBQ'. Parameters @@ -493,7 +526,8 @@ def to_gbq(dataframe, destination_table, project_id, chunksize=10000, raise ValueError("'{0}' is not valid for if_exists".format(if_exists)) if '.' 
not in destination_table: - raise NotFoundException("Invalid Table Name. Should be of the form 'datasetId.tableId' ") + raise NotFoundException( + "Invalid Table Name. Should be of the form 'datasetId.tableId' ") connector = GbqConnector(project_id, reauth=reauth) dataset_id, table_id = destination_table.rsplit('.', 1) @@ -505,14 +539,19 @@ def to_gbq(dataframe, destination_table, project_id, chunksize=10000, # If table exists, check if_exists parameter if table.exists(table_id): if if_exists == 'fail': - raise TableCreationError("Could not create the table because it already exists. " - "Change the if_exists parameter to append or replace data.") + raise TableCreationError("Could not create the table because it " + "already exists. " + "Change the if_exists parameter to " + "append or replace data.") elif if_exists == 'replace': - connector.delete_and_recreate_table(dataset_id, table_id, table_schema, verbose) + connector.delete_and_recreate_table( + dataset_id, table_id, table_schema, verbose) elif if_exists == 'append': if not connector.verify_schema(dataset_id, table_id, table_schema): - raise InvalidSchema("Please verify that the column order, structure and data types in the DataFrame " - "match the schema of the destination table.") + raise InvalidSchema("Please verify that the column order, " + "structure and data types in the " + "DataFrame match the schema of the " + "destination table.") else: table.create(table_id, table_schema) @@ -520,13 +559,13 @@ def to_gbq(dataframe, destination_table, project_id, chunksize=10000, def generate_bq_schema(df, default_type='STRING'): - # deprecation TimeSeries, #11121 - warnings.warn("generate_bq_schema is deprecated and will be removed in a future version", - FutureWarning, stacklevel=2) + warnings.warn("generate_bq_schema is deprecated and will be removed in " + "a future version", FutureWarning, stacklevel=2) return _generate_bq_schema(df, default_type=default_type) + def _generate_bq_schema(df, 
default_type='STRING'): """ Given a passed df, generate the associated Google BigQuery schema. @@ -555,6 +594,7 @@ def _generate_bq_schema(df, default_type='STRING'): return {'fields': fields} + class _Table(GbqConnector): def __init__(self, project_id, dataset_id, reauth=False): @@ -585,9 +625,9 @@ def exists(self, table_id): try: self.service.tables().get( - projectId=self.project_id, - datasetId=self.dataset_id, - tableId=table_id).execute() + projectId=self.project_id, + datasetId=self.dataset_id, + tableId=table_id).execute() return True except self.http_error as ex: if ex.resp.status == 404: @@ -605,11 +645,13 @@ def create(self, table_id, schema): table : str Name of table to be written schema : str - Use the generate_bq_schema to generate your table schema from a dataframe. + Use the generate_bq_schema to generate your table schema from a + dataframe. """ if self.exists(table_id): - raise TableCreationError("The table could not be created because it already exists") + raise TableCreationError( + "The table could not be created because it already exists") if not _Dataset(self.project_id).exists(self.dataset_id): _Dataset(self.project_id).create(self.dataset_id) @@ -625,9 +667,9 @@ def create(self, table_id, schema): try: self.service.tables().insert( - projectId=self.project_id, - datasetId=self.dataset_id, - body=body).execute() + projectId=self.project_id, + datasetId=self.dataset_id, + body=body).execute() except self.http_error as ex: self.process_http_error(ex) @@ -647,9 +689,9 @@ def delete(self, table_id): try: self.service.tables().delete( - datasetId=self.dataset_id, - projectId=self.project_id, - tableId=table_id).execute() + datasetId=self.dataset_id, + projectId=self.project_id, + tableId=table_id).execute() except self.http_error as ex: self.process_http_error(ex) @@ -683,8 +725,8 @@ def exists(self, dataset_id): try: self.service.datasets().get( - projectId=self.project_id, - datasetId=dataset_id).execute() + projectId=self.project_id, + 
datasetId=dataset_id).execute() return True except self.http_error as ex: if ex.resp.status == 404: @@ -709,7 +751,7 @@ def datasets(self): try: list_dataset_response = self.service.datasets().list( - projectId=self.project_id).execute().get('datasets', None) + projectId=self.project_id).execute().get('datasets', None) if not list_dataset_response: return [] @@ -735,7 +777,8 @@ def create(self, dataset_id): """ if self.exists(dataset_id): - raise DatasetCreationError("The dataset could not be created because it already exists") + raise DatasetCreationError( + "The dataset could not be created because it already exists") body = { 'datasetReference': { @@ -746,8 +789,8 @@ def create(self, dataset_id): try: self.service.datasets().insert( - projectId=self.project_id, - body=body).execute() + projectId=self.project_id, + body=body).execute() except self.http_error as ex: self.process_http_error(ex) @@ -763,12 +806,13 @@ def delete(self, dataset_id): """ if not self.exists(dataset_id): - raise NotFoundException("Dataset {0} does not exist".format(dataset_id)) + raise NotFoundException( + "Dataset {0} does not exist".format(dataset_id)) try: self.service.datasets().delete( - datasetId=dataset_id, - projectId=self.project_id).execute() + datasetId=dataset_id, + projectId=self.project_id).execute() except self.http_error as ex: self.process_http_error(ex) @@ -791,8 +835,8 @@ def tables(self, dataset_id): try: list_table_response = self.service.tables().list( - projectId=self.project_id, - datasetId=dataset_id).execute().get('tables', None) + projectId=self.project_id, + datasetId=dataset_id).execute().get('tables', None) if not list_table_response: return [] diff --git a/pandas/io/html.py b/pandas/io/html.py index f175702dedabc..b21f1ef7f160c 100644 --- a/pandas/io/html.py +++ b/pandas/io/html.py @@ -7,7 +7,6 @@ import re import numbers import collections -import warnings from distutils.version import LooseVersion @@ -26,6 +25,7 @@ _HAS_LXML = False _HAS_HTML5LIB = False + 
def _importers(): # import things we need # but make this done on a first use basis @@ -39,19 +39,19 @@ def _importers(): global _HAS_BS4, _HAS_LXML, _HAS_HTML5LIB try: - import bs4 + import bs4 # noqa _HAS_BS4 = True except ImportError: pass try: - import lxml + import lxml # noqa _HAS_LXML = True except ImportError: pass try: - import html5lib + import html5lib # noqa _HAS_HTML5LIB = True except ImportError: pass @@ -183,6 +183,7 @@ class _HtmlFrameParser(object): See each method's respective documentation for details on their functionality. """ + def __init__(self, io, match, attrs, encoding): self.io = io self.match = match @@ -385,6 +386,7 @@ class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser): Documentation strings for this class are in the base class :class:`pandas.io.html._HtmlFrameParser`. """ + def __init__(self, *args, **kwargs): super(_BeautifulSoupHtml5LibFrameParser, self).__init__(*args, **kwargs) @@ -488,6 +490,7 @@ class _LxmlFrameParser(_HtmlFrameParser): Documentation strings for this class are in the base class :class:`_HtmlFrameParser`. 
""" + def __init__(self, *args, **kwargs): super(_LxmlFrameParser, self).__init__(*args, **kwargs) @@ -662,7 +665,8 @@ def _parser_dispatch(flavor): if not _HAS_HTML5LIB: raise ImportError("html5lib not found, please install it") if not _HAS_BS4: - raise ImportError("BeautifulSoup4 (bs4) not found, please install it") + raise ImportError( + "BeautifulSoup4 (bs4) not found, please install it") import bs4 if bs4.__version__ == LooseVersion('4.2.0'): raise ValueError("You're using a version" @@ -737,7 +741,7 @@ def _parse(flavor, io, match, header, index_col, skiprows, parse_dates=parse_dates, tupleize_cols=tupleize_cols, thousands=thousands)) - except StopIteration: # empty table + except StopIteration: # empty table continue return ret diff --git a/pandas/io/json.py b/pandas/io/json.py index f368f0e6cf28e..76cda87043a37 100644 --- a/pandas/io/json.py +++ b/pandas/io/json.py @@ -16,7 +16,8 @@ loads = _json.loads dumps = _json.dumps -### interface to/from ### + +# interface to/from def to_json(path_or_buf, obj, orient=None, date_format='epoch', @@ -115,7 +116,7 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True, file. For file URLs, a host is expected. 
For instance, a local file could be ``file://localhost/path/to/table.json`` - orient + orient * `Series` @@ -151,15 +152,15 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True, convert_dates : boolean, default True List of columns to parse for dates; If True, then try to parse datelike columns default is True; a column label is datelike if - + * it ends with ``'_at'``, - + * it ends with ``'_time'``, - + * it begins with ``'timestamp'``, - + * it is ``'modified'``, or - + * it is ``'date'`` keep_default_dates : boolean, default True @@ -190,7 +191,7 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True, # if the filepath is too long will raise here # 5874 - except (TypeError,ValueError): + except (TypeError, ValueError): exists = False if exists: @@ -566,13 +567,13 @@ def is_ok(col): self._process_converter( lambda col, c: self._try_convert_to_date(c), - lambda col, c: ((self.keep_default_dates and is_ok(col)) - or col in convert_dates)) + lambda col, c: ((self.keep_default_dates and is_ok(col)) or + col in convert_dates)) - -#---------------------------------------------------------------------- +# --------------------------------------------------------------------- # JSON normalization routines + def nested_to_record(ds, prefix="", level=0): """a simplified json_normalize @@ -627,7 +628,7 @@ def nested_to_record(ds, prefix="", level=0): continue else: v = new_d.pop(k) - new_d.update(nested_to_record(v, newkey, level+1)) + new_d.update(nested_to_record(v, newkey, level + 1)) new_ds.append(new_d) if singleton: @@ -741,7 +742,7 @@ def _recursive_extract(data, path, seen_meta, level=0): seen_meta[key] = _pull_field(obj, val[-1]) _recursive_extract(obj[path[0]], path[1:], - seen_meta, level=level+1) + seen_meta, level=level + 1) else: for obj in data: recs = _pull_field(obj, path[0]) diff --git a/pandas/io/packers.py b/pandas/io/packers.py index 0ba1254659540..a16f3600736b8 100644 --- a/pandas/io/packers.py +++ 
b/pandas/io/packers.py @@ -1,12 +1,10 @@ """ Msgpack serializer support for reading and writing pandas data structures to disk -""" -# portions of msgpack_numpy package, by Lev Givon were incorporated -# into this module (and tests_packers.py) +portions of msgpack_numpy package, by Lev Givon were incorporated +into this module (and tests_packers.py) -""" License ======= @@ -46,12 +44,10 @@ import numpy as np from pandas import compat -from pandas.compat import u, PY3 -from pandas import ( - Timestamp, Period, Series, DataFrame, Panel, Panel4D, - Index, MultiIndex, Int64Index, RangeIndex, PeriodIndex, - DatetimeIndex, Float64Index, NaT -) +from pandas.compat import u +from pandas import (Timestamp, Period, Series, DataFrame, # noqa + Index, MultiIndex, Float64Index, Int64Index, + Panel, RangeIndex, PeriodIndex, DatetimeIndex) from pandas.sparse.api import SparseSeries, SparseDataFrame, SparsePanel from pandas.sparse.array import BlockIndex, IntIndex from pandas.core.generic import NDFrame @@ -174,7 +170,7 @@ def read(fh): # this is platform int, which we need to remap to np.int64 # for compat on windows platforms 7: np.dtype('int64'), -} + } def dtype_for(t): @@ -183,9 +179,9 @@ def dtype_for(t): return dtype_dict[t] return np.typeDict[t] -c2f_dict = {'complex': np.float64, +c2f_dict = {'complex': np.float64, 'complex128': np.float64, - 'complex64': np.float32} + 'complex64': np.float32} # numpy 1.6.1 compat if hasattr(np, 'float128'): @@ -322,16 +318,16 @@ def encode(obj): raise NotImplementedError( 'msgpack sparse series is not implemented' ) - #d = {'typ': 'sparse_series', + # d = {'typ': 'sparse_series', # 'klass': obj.__class__.__name__, # 'dtype': obj.dtype.name, # 'index': obj.index, # 'sp_index': obj.sp_index, # 'sp_values': convert(obj.sp_values), # 'compress': compressor} - #for f in ['name', 'fill_value', 'kind']: + # for f in ['name', 'fill_value', 'kind']: # d[f] = getattr(obj, f, None) - #return d + # return d else: return {'typ': 'series', 'klass': 
obj.__class__.__name__, @@ -345,33 +341,33 @@ def encode(obj): raise NotImplementedError( 'msgpack sparse frame is not implemented' ) - #d = {'typ': 'sparse_dataframe', + # d = {'typ': 'sparse_dataframe', # 'klass': obj.__class__.__name__, # 'columns': obj.columns} - #for f in ['default_fill_value', 'default_kind']: + # for f in ['default_fill_value', 'default_kind']: # d[f] = getattr(obj, f, None) - #d['data'] = dict([(name, ss) + # d['data'] = dict([(name, ss) # for name, ss in compat.iteritems(obj)]) - #return d + # return d elif isinstance(obj, SparsePanel): raise NotImplementedError( 'msgpack sparse frame is not implemented' ) - #d = {'typ': 'sparse_panel', + # d = {'typ': 'sparse_panel', # 'klass': obj.__class__.__name__, # 'items': obj.items} - #for f in ['default_fill_value', 'default_kind']: + # for f in ['default_fill_value', 'default_kind']: # d[f] = getattr(obj, f, None) - #d['data'] = dict([(name, df) + # d['data'] = dict([(name, df) # for name, df in compat.iteritems(obj)]) - #return d + # return d else: data = obj._data if not data.is_consolidated(): data = data.consolidate() - # the block manager + # the block manager return {'typ': 'block_manager', 'klass': obj.__class__.__name__, 'axes': data.axes, @@ -512,7 +508,8 @@ def create_block(b): values = unconvert(b['values'], dtype_for(b['dtype']), b['compress']).reshape(b['shape']) - # locs handles duplicate column names, and should be used instead of items; see GH 9618 + # locs handles duplicate column names, and should be used instead + # of items; see GH 9618 if 'locs' in b: placement = b['locs'] else: @@ -533,19 +530,19 @@ def create_block(b): return timedelta(*obj['data']) elif typ == 'timedelta64': return np.timedelta64(int(obj['data'])) - #elif typ == 'sparse_series': + # elif typ == 'sparse_series': # dtype = dtype_for(obj['dtype']) # return globals()[obj['klass']]( # unconvert(obj['sp_values'], dtype, obj['compress']), # sparse_index=obj['sp_index'], index=obj['index'], # 
fill_value=obj['fill_value'], kind=obj['kind'], name=obj['name']) - #elif typ == 'sparse_dataframe': + # elif typ == 'sparse_dataframe': # return globals()[obj['klass']]( # obj['data'], columns=obj['columns'], # default_fill_value=obj['default_fill_value'], # default_kind=obj['default_kind'] # ) - #elif typ == 'sparse_panel': + # elif typ == 'sparse_panel': # return globals()[obj['klass']]( # obj['data'], items=obj['items'], # default_fill_value=obj['default_fill_value'], diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 9d25eaecc6620..f06ad927bb61b 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -24,7 +24,6 @@ from pandas.util.decorators import Appender import pandas.lib as lib -import pandas.tslib as tslib import pandas.parser as _parser @@ -70,13 +69,13 @@ class ParserWarning(Warning): If None defaults to Excel dialect. Ignored if sep longer than 1 char See csv.Dialect documentation for more details header : int, list of ints, default 'infer' - Row number(s) to use as the column names, and the start of the - data. Defaults to 0 if no ``names`` passed, otherwise ``None``. Explicitly - pass ``header=0`` to be able to replace existing names. The header can be - a list of integers that specify row locations for a multi-index on the - columns E.g. [0,1,3]. Intervening rows that are not specified will be - skipped (e.g. 2 in this example are skipped). Note that this parameter - ignores commented lines and empty lines if ``skip_blank_lines=True``, so header=0 + Row number(s) to use as the column names, and the start of the data. + Defaults to 0 if no ``names`` passed, otherwise ``None``. Explicitly pass + ``header=0`` to be able to replace existing names. The header can be a list + of integers that specify row locations for a multi-index on the columns + E.g. [0,1,3]. Intervening rows that are not specified will be skipped + (e.g. 2 in this example are skipped). 
Note that this parameter ignores + commented lines and empty lines if ``skip_blank_lines=True``, so header=0 denotes the first line of data rather than the first line of the file. skiprows : list-like or integer, default None Line numbers to skip (0-indexed) or number of lines to skip (int) @@ -101,42 +100,47 @@ class ParserWarning(Warning): keep_default_na : bool, default True If na_values are specified and keep_default_na is False the default NaN values are overridden, otherwise they're appended to -parse_dates : boolean, list of ints or names, list of lists, or dict, default False - If True -> try parsing the index. - If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column. - If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column. - {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result 'foo' - A fast-path exists for iso8601-formatted dates. +parse_dates : various, default False + + * boolean. If True -> try parsing the index. + * list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 + each as a separate date column. + * list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as + a single date column. + * dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result + 'foo' + Note: A fast-path exists for iso8601-formatted dates. keep_date_col : boolean, default False If True and parse_dates specifies combining multiple columns then keep the original columns. date_parser : function, default None - Function to use for converting a sequence of string columns to an - array of datetime instances. The default uses dateutil.parser.parser - to do the conversion. 
Pandas will try to call date_parser in three different - ways, advancing to the next if an exception occurs: 1) Pass one or more arrays - (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string - values from the columns defined by parse_dates into a single array and pass - that; and 3) call date_parser once for each row using one or more strings - (corresponding to the columns defined by parse_dates) as arguments. + Function to use for converting a sequence of string columns to an array of + datetime instances. The default uses dateutil.parser.parser to do the + conversion. Pandas will try to call date_parser in three different ways, + advancing to the next if an exception occurs: 1) Pass one or more arrays + (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the + string values from the columns defined by parse_dates into a single array + and pass that; and 3) call date_parser once for each row using one or more + strings (corresponding to the columns defined by parse_dates) as arguments. dayfirst : boolean, default False DD/MM format dates, international and European format thousands : str, default None Thousands separator comment : str, default None - Indicates remainder of line should not be parsed. If found at the - beginning of a line, the line will be ignored altogether. This parameter - must be a single character. Like empty lines (as long as ``skip_blank_lines=True``), - fully commented lines are ignored by the parameter `header` - but not by `skiprows`. For example, if comment='#', parsing - '#empty\\na,b,c\\n1,2,3' with `header=0` will result in 'a,b,c' being + Indicates remainder of line should not be parsed. If found at the beginning + of a line, the line will be ignored altogether. This parameter must be a + single character. Like empty lines (as long as ``skip_blank_lines=True``), + fully commented lines are ignored by the parameter `header` but not by + `skiprows`. 
For example, if comment='#', parsing '#empty\\na,b,c\\n1,2,3' + with `header=0` will result in 'a,b,c' being treated as the header. decimal : str, default '.' Character to recognize as decimal point. E.g. use ',' for European data nrows : int, default None Number of rows of file to read. Useful for reading pieces of large files iterator : boolean, default False - Return TextFileReader object for iteration or getting chunks with ``get_chunk()``. + Return TextFileReader object for iteration or getting chunks with + ``get_chunk()``. chunksize : int, default None Return TextFileReader object for iteration. `See IO Tools docs for more information @@ -242,9 +246,10 @@ def _read(filepath_or_buffer, kwds): if skipfooter is not None: kwds['skip_footer'] = skipfooter - # If the input could be a filename, check for a recognizable compression extension. - # If we're reading from a URL, the `get_filepath_or_buffer` will use header info - # to determine compression, so use what it finds in that case. + # If the input could be a filename, check for a recognizable compression + # extension. If we're reading from a URL, the `get_filepath_or_buffer` + # will use header info to determine compression, so use what it finds in + # that case. 
inferred_compression = kwds.get('compression') if inferred_compression == 'infer': if isinstance(filepath_or_buffer, compat.string_types): @@ -257,10 +262,11 @@ def _read(filepath_or_buffer, kwds): else: inferred_compression = None - filepath_or_buffer, _, compression = get_filepath_or_buffer(filepath_or_buffer, - encoding, - compression=kwds.get('compression', None)) - kwds['compression'] = inferred_compression if compression == 'infer' else compression + filepath_or_buffer, _, compression = get_filepath_or_buffer( + filepath_or_buffer, encoding, + compression=kwds.get('compression', None)) + kwds['compression'] = (inferred_compression if compression == 'infer' + else compression) if kwds.get('date_parser', None) is not None: if isinstance(kwds['parse_dates'], bool): @@ -533,8 +539,8 @@ def read_fwf(filepath_or_buffer, colspecs='infer', widths=None, **kwds): # no longer excluding inf representations # '1.#INF','-1.#INF', '1.#INF000000', _NA_VALUES = set([ - '-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A', 'NA', '#NA', - 'NULL', 'NaN', '-NaN', 'nan', '-nan', '' + '-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', + 'N/A', 'NA', '#NA', 'NULL', 'NaN', '-NaN', 'nan', '-nan', '' ]) @@ -658,7 +664,8 @@ def _clean_options(self, options, engine): msg = ("Falling back to the 'python' engine because" " {reason}, but this causes {option!r} to be" " ignored as it is not supported by the 'python'" - " engine.").format(reason=fallback_reason, option=arg) + " engine.").format(reason=fallback_reason, + option=arg) if arg == 'dtype': msg += " (Note the 'converters' option provides"\ " similar functionality.)" @@ -1431,7 +1438,7 @@ def __init__(self, f, **kwds): if isinstance(f, compat.string_types): f = _get_handle(f, 'r', encoding=self.encoding, - compression=self.compression) + compression=self.compression) elif self.compression: f = _wrap_compressed(f, self.compression, self.encoding) # in Python 3, convert BytesIO or fileobjects passed with 
an encoding @@ -1472,8 +1479,8 @@ def __init__(self, f, **kwds): # multiple date column thing turning into a real spaghetti factory if not self._has_complex_date_col: - (index_names, - self.orig_names, self.columns) = self._get_index_name(self.columns) + (index_names, self.orig_names, self.columns) = ( + self._get_index_name(self.columns)) self._name_processed = True if self.index_names is None: self.index_names = index_names @@ -1697,7 +1704,7 @@ def _infer_columns(self): lc = len(this_columns) ic = (len(self.index_col) if self.index_col is not None else 0) - if lc != unnamed_count and lc-ic > unnamed_count: + if lc != unnamed_count and lc - ic > unnamed_count: clear_buffer = False this_columns = [None] * lc self.buf = [self.buf[-1]] @@ -1710,10 +1717,10 @@ def _infer_columns(self): self._clear_buffer() if names is not None: - if ((self.usecols is not None - and len(names) != len(self.usecols)) - or (self.usecols is None - and len(names) != len(columns[0]))): + if ((self.usecols is not None and + len(names) != len(self.usecols)) or + (self.usecols is None and + len(names) != len(columns[0]))): raise ValueError('Number of passed names did not match ' 'number of header fields in the file') if len(columns) > 1: @@ -1737,7 +1744,8 @@ def _infer_columns(self): num_original_columns = ncols if not names: if self.prefix: - columns = [['%s%d' % (self.prefix, i) for i in range(ncols)]] + columns = [['%s%d' % (self.prefix, i) + for i in range(ncols)]] else: columns = [lrange(ncols)] columns = self._handle_usecols(columns, columns[0]) @@ -1824,7 +1832,8 @@ def _next_line(self): orig_line = next(self.data) line = self._check_comments([orig_line])[0] self.pos += 1 - if not self.skip_blank_lines and (self._empty(orig_line) or line): + if (not self.skip_blank_lines and + (self._empty(orig_line) or line)): break elif self.skip_blank_lines: ret = self._check_empty([line]) @@ -1858,8 +1867,9 @@ def _check_empty(self, lines): ret = [] for l in lines: # Remove empty lines and lines 
with only one whitespace value - if len(l) > 1 or len(l) == 1 and (not isinstance(l[0], - compat.string_types) or l[0].strip()): + if (len(l) > 1 or len(l) == 1 and + (not isinstance(l[0], compat.string_types) or + l[0].strip())): ret.append(l) return ret @@ -1873,9 +1883,9 @@ def _check_thousands(self, lines): for i, x in enumerate(l): if (not isinstance(x, compat.string_types) or self.thousands not in x or - (self._no_thousands_columns - and i in self._no_thousands_columns) - or nonnum.search(x.strip())): + (self._no_thousands_columns and + i in self._no_thousands_columns) or + nonnum.search(x.strip())): rl.append(x) else: rl.append(x.replace(self.thousands, '')) @@ -1983,9 +1993,8 @@ def _rows_to_cols(self, content): if self._implicit_index: zipped_content = [ a for i, a in enumerate(zipped_content) - if (i < len(self.index_col) - or i - len(self.index_col) in self._col_indices) - ] + if (i < len(self.index_col) or + i - len(self.index_col) in self._col_indices)] else: zipped_content = [a for i, a in enumerate(zipped_content) if i in self._col_indices] @@ -2087,7 +2096,8 @@ def converter(*date_cols): lib.try_parse_dates(strs, dayfirst=dayfirst)) else: try: - result = tools.to_datetime(date_parser(*date_cols), errors='ignore') + result = tools.to_datetime( + date_parser(*date_cols), errors='ignore') if isinstance(result, datetime.datetime): raise Exception('scalar parser') return result @@ -2109,9 +2119,9 @@ def _process_date_conversion(data_dict, converter, parse_spec, keep_date_col=False): def _isindex(colspec): return ((isinstance(index_col, list) and - colspec in index_col) - or (isinstance(index_names, list) and - colspec in index_names)) + colspec in index_col) or + (isinstance(index_names, list) and + colspec in index_names)) new_cols = [] new_data = {} @@ -2262,13 +2272,14 @@ def _get_empty_meta(columns, index_col, index_names, dtype=None): index = Index([]) else: index = [np.empty(0, dtype=dtype.get(index_name, np.object)) - for index_name in 
index_names] + for index_name in index_names] index = MultiIndex.from_arrays(index, names=index_names) index_col.sort() for i, n in enumerate(index_col): - columns.pop(n-i) + columns.pop(n - i) - col_dict = dict((col_name, np.empty(0, dtype=dtype.get(col_name, np.object))) + col_dict = dict((col_name, + np.empty(0, dtype=dtype.get(col_name, np.object))) for col_name in columns) return index, columns, col_dict @@ -2315,8 +2326,6 @@ def _stringify_na_values(na_values): def _get_na_values(col, na_values, na_fvalues): if isinstance(na_values, dict): if col in na_values: - values = na_values[col] - fvalues = na_fvalues[col] return na_values[col], na_fvalues[col] else: return _NA_VALUES, set() @@ -2355,6 +2364,7 @@ class FixedWidthReader(object): """ A reader of fixed-width lines. """ + def __init__(self, f, colspecs, delimiter, comment): self.f = f self.buffer = None @@ -2426,6 +2436,7 @@ class FixedWidthFieldParser(PythonParser): Specialization that Converts fixed-width fields into DataFrames. See PythonParser for details. """ + def __init__(self, f, **kwds): # Support iterators, convert to a list. 
self.colspecs = kwds.pop('colspecs') diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py index 52a9ef0370e9e..3b1338df525b2 100644 --- a/pandas/io/pickle.py +++ b/pandas/io/pickle.py @@ -1,5 +1,6 @@ from pandas.compat import cPickle as pkl, pickle_compat as pc, PY3 + def to_pickle(obj, path): """ Pickle (serialize) object to input file path @@ -44,8 +45,7 @@ def try_read(path, encoding=None): try: with open(path, 'rb') as fh: return pkl.load(fh) - except (Exception) as e: - + except Exception: # reg/patched pickle try: with open(path, 'rb') as fh: diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index fe063f5b4bc4d..9b59007d4268f 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -15,7 +15,8 @@ import numpy as np import pandas as pd from pandas import (Series, DataFrame, Panel, Panel4D, Index, - MultiIndex, Int64Index, Timestamp) + MultiIndex, Int64Index) +from pandas.core import config from pandas.sparse.api import SparseSeries, SparseDataFrame, SparsePanel from pandas.sparse.array import BlockIndex, IntIndex from pandas.tseries.api import PeriodIndex, DatetimeIndex @@ -25,10 +26,10 @@ from pandas.core.algorithms import match, unique from pandas.core.categorical import Categorical from pandas.core.common import _asarray_tuplesafe -from pandas.core.internals import (BlockManager, make_block, _block2d_to_blocknd, +from pandas.core.internals import (BlockManager, make_block, + _block2d_to_blocknd, _factor_indexer, _block_shape) from pandas.core.index import _ensure_index -from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type import pandas.core.common as com from pandas.tools.merge import concat from pandas import compat @@ -41,16 +42,16 @@ import pandas.algos as algos import pandas.tslib as tslib -from contextlib import contextmanager from distutils.version import LooseVersion # versioning attribute _version = '0.15.2' -### encoding ### +# encoding # PY3 encoding if we don't specify _default_encoding = 'UTF-8' + def 
_ensure_decoded(s): """ if we have bytes, decode them to unicode """ if isinstance(s, np.bytes_): @@ -67,6 +68,7 @@ def _ensure_encoding(encoding): Term = Expr + def _ensure_term(where, scope_level): """ ensure that the where is a Term or a list of Term @@ -196,7 +198,6 @@ class DuplicateWarning(Warning): } # register our configuration options -from pandas.core import config dropna_doc = """ : boolean drop ALL nan rows when appending to a table @@ -219,6 +220,7 @@ class DuplicateWarning(Warning): _table_mod = None _table_file_open_policy_is_strict = False + def _tables(): global _table_mod global _table_file_open_policy_is_strict @@ -234,7 +236,8 @@ def _tables(): # return the file open policy; this changes as of pytables 3.1 # depending on the HDF5 version try: - _table_file_open_policy_is_strict = tables.file._FILE_OPEN_POLICY == 'strict' + _table_file_open_policy_is_strict = ( + tables.file._FILE_OPEN_POLICY == 'strict') except: pass @@ -242,6 +245,7 @@ def _tables(): # interface to/from ### + def to_hdf(path_or_buf, key, value, mode=None, complevel=None, complib=None, append=None, **kwargs): """ store this object, close it if we opened it """ @@ -253,7 +257,7 @@ def to_hdf(path_or_buf, key, value, mode=None, complevel=None, complib=None, if isinstance(path_or_buf, string_types): with HDFStore(path_or_buf, mode=mode, complevel=complevel, - complib=complib) as store: + complib=complib) as store: f(store) else: f(path_or_buf) @@ -295,8 +299,8 @@ def read_hdf(path_or_buf, key=None, **kwargs): try: exists = os.path.exists(path_or_buf) - #if filepath is too long - except (TypeError,ValueError): + # if filepath is too long + except (TypeError, ValueError): exists = False if not exists: @@ -380,9 +384,10 @@ class HDFStore(StringMixin): def __init__(self, path, mode=None, complevel=None, complib=None, fletcher32=False, **kwargs): try: - import tables + import tables # noqa except ImportError as ex: # pragma: no cover - raise ImportError('HDFStore requires PyTables, 
"{ex}" problem importing'.format(ex=str(ex))) + raise ImportError('HDFStore requires PyTables, "{ex}" problem ' + 'importing'.format(ex=str(ex))) if complib not in (None, 'blosc', 'bzip2', 'lzo', 'zlib'): raise ValueError("complib only supports 'blosc', 'bzip2', lzo' " @@ -546,13 +551,17 @@ def open(self, mode='a', **kwargs): # trap PyTables >= 3.1 FILE_OPEN_POLICY exception # to provide an updated message if 'FILE_OPEN_POLICY' in str(e): - - e = ValueError("PyTables [{version}] no longer supports opening multiple files\n" - "even in read-only mode on this HDF5 version [{hdf_version}]. You can accept this\n" - "and not open the same file multiple times at once,\n" - "upgrade the HDF5 version, or downgrade to PyTables 3.0.0 which allows\n" - "files to be opened multiple times at once\n".format(version=tables.__version__, - hdf_version=tables.get_hdf5_version())) + e = ValueError( + "PyTables [{version}] no longer supports opening multiple " + "files\n" + "even in read-only mode on this HDF5 version " + "[{hdf_version}]. 
You can accept this\n" + "and not open the same file multiple times at once,\n" + "upgrade the HDF5 version, or downgrade to PyTables 3.0.0 " + "which allows\n" + "files to be opened multiple times at once\n" + .format(version=tables.__version__, + hdf_version=tables.get_hdf5_version())) raise e @@ -662,9 +671,9 @@ def func(_start, _stop, _where): columns=columns, **kwargs) # create the iterator - it = TableIterator(self, s, func, where=where, nrows=s.nrows, start=start, - stop=stop, iterator=iterator, chunksize=chunksize, - auto_close=auto_close) + it = TableIterator(self, s, func, where=where, nrows=s.nrows, + start=start, stop=stop, iterator=iterator, + chunksize=chunksize, auto_close=auto_close) return it.get_result() @@ -751,7 +760,7 @@ def select_as_multiple(self, keys, where=None, selector=None, columns=None, # validate rows nrows = None - for t, k in itertools.chain([(s,selector)], zip(tbls, keys)): + for t, k in itertools.chain([(s, selector)], zip(tbls, keys)): if t is None: raise KeyError("Invalid table [%s]" % k) if not t.is_table: @@ -771,21 +780,22 @@ def select_as_multiple(self, keys, where=None, selector=None, columns=None, def func(_start, _stop, _where): - # retrieve the objs, _where is always passed as a set of coordinates here - objs = [t.read(where=_where, columns=columns, **kwargs) for t in tbls] + # retrieve the objs, _where is always passed as a set of + # coordinates here + objs = [t.read(where=_where, columns=columns, **kwargs) + for t in tbls] # concat and return return concat(objs, axis=axis, verify_integrity=False).consolidate() # create the iterator - it = TableIterator(self, s, func, where=where, nrows=nrows, start=start, - stop=stop, iterator=iterator, chunksize=chunksize, - auto_close=auto_close) + it = TableIterator(self, s, func, where=where, nrows=nrows, + start=start, stop=stop, iterator=iterator, + chunksize=chunksize, auto_close=auto_close) return it.get_result(coordinates=True) - def put(self, key, value, format=None, 
append=False, **kwargs): """ Store object in HDFStore @@ -1290,8 +1300,8 @@ class TableIterator(object): def __init__(self, store, s, func, where, nrows, start=None, stop=None, iterator=False, chunksize=None, auto_close=False): self.store = store - self.s = s - self.func = func + self.s = s + self.func = func self.where = where self.nrows = nrows or 0 self.start = start or 0 @@ -1353,6 +1363,7 @@ def get_result(self, coordinates=False): self.close() return results + class IndexCol(StringMixin): """ an index column description class @@ -1624,14 +1635,15 @@ def validate_metadata(self, handler): new_metadata = self.metadata cur_metadata = handler.read_metadata(self.cname) if new_metadata is not None and cur_metadata is not None \ - and not com.array_equivalent(new_metadata, cur_metadata): - raise ValueError("cannot append a categorical with different categories" - " to the existing") + and not com.array_equivalent(new_metadata, cur_metadata): + raise ValueError("cannot append a categorical with " + "different categories to the existing") def write_metadata(self, handler): """ set the meta data """ if self.metadata is not None: - handler.write_metadata(self.cname,self.metadata) + handler.write_metadata(self.cname, self.metadata) + class GenericIndexCol(IndexCol): @@ -1669,7 +1681,7 @@ class DataCol(IndexCol): """ is_an_indexable = False is_data_indexable = False - _info_fields = ['tz','ordered'] + _info_fields = ['tz', 'ordered'] @classmethod def create_for_block( @@ -1694,9 +1706,10 @@ def create_for_block( return cls(name=name, cname=cname, **kwargs) def __init__(self, values=None, kind=None, typ=None, - cname=None, data=None, meta=None, metadata=None, block=None, **kwargs): - super(DataCol, self).__init__( - values=values, kind=kind, typ=typ, cname=cname, **kwargs) + cname=None, data=None, meta=None, metadata=None, + block=None, **kwargs): + super(DataCol, self).__init__(values=values, kind=kind, typ=typ, + cname=cname, **kwargs) self.dtype = None self.dtype_attr = 
u("%s_dtype" % self.name) self.meta = meta @@ -1737,7 +1750,7 @@ def take_data(self): def set_metadata(self, metadata): """ record the metadata """ if metadata is not None: - metadata = np.array(metadata,copy=False).ravel() + metadata = np.array(metadata, copy=False).ravel() self.metadata = metadata def set_kind(self): @@ -1776,7 +1789,8 @@ def set_atom(self, block, block_items, existing_col, min_itemsize, # short-cut certain block types if block.is_categorical: - return self.set_atom_categorical(block, items=block_items, info=info) + return self.set_atom_categorical(block, items=block_items, + info=info) elif block.is_datetimetz: return self.set_atom_datetime64tz(block, info=info) elif block.is_datetime: @@ -1800,7 +1814,7 @@ def set_atom(self, block, block_items, existing_col, min_itemsize, raise TypeError( "too many timezones in this block, create separate " "data columns" - ) + ) elif inferred_type == 'unicode': raise TypeError( "[unicode] is not implemented as a table column") @@ -1886,7 +1900,8 @@ def get_atom_data(self, block, kind=None): def set_atom_complex(self, block): self.kind = block.dtype.name itemsize = int(self.kind.split('complex')[-1]) // 8 - self.typ = _tables().ComplexCol(itemsize=itemsize, shape=block.shape[0]) + self.typ = _tables().ComplexCol( + itemsize=itemsize, shape=block.shape[0]) self.set_data(block.values.astype(self.typ.type, copy=False)) def set_atom_data(self, block): @@ -2239,7 +2254,10 @@ def write(self, **kwargs): "cannot write on an abstract storer: sublcasses should implement") def delete(self, where=None, start=None, stop=None, **kwargs): - """ support fully deleting the node in its entirety (only) - where specification must be None """ + """ + support fully deleting the node in its entirety (only) - where + specification must be None + """ if where is None and start is None and stop is None: self._handle.remove_node(self.group, recursive=True) return None @@ -2252,7 +2270,7 @@ class GenericFixed(Fixed): """ a generified 
fixed version """ _index_type_map = {DatetimeIndex: 'datetime', PeriodIndex: 'period'} _reverse_index_map = dict([(v, k) - for k, v in compat.iteritems(_index_type_map)]) + for k, v in compat.iteritems(_index_type_map)]) attributes = [] # indexer helpders @@ -2515,8 +2533,8 @@ def write_array(self, key, value, items=None): # create an empty chunked array and fill it from value if not empty_array: ca = self._handle.create_carray(self.group, key, atom, - value.shape, - filters=self._filters) + value.shape, + filters=self._filters) ca[:] = value getattr(self.group, key)._v_attrs.transposed = transposed @@ -2543,14 +2561,15 @@ def write_array(self, key, value, items=None): warnings.warn(ws, PerformanceWarning, stacklevel=7) vlarr = self._handle.create_vlarray(self.group, key, - _tables().ObjectAtom()) + _tables().ObjectAtom()) vlarr.append(value) else: if empty_array: self.write_array_empty(key, value) else: if com.is_datetime64_dtype(value.dtype): - self._handle.create_array(self.group, key, value.view('i8')) + self._handle.create_array( + self.group, key, value.view('i8')) getattr( self.group, key)._v_attrs.value_type = 'datetime64' elif com.is_datetime64tz_dtype(value.dtype): @@ -2563,7 +2582,8 @@ def write_array(self, key, value, items=None): node._v_attrs.tz = _get_tz(value.tz) node._v_attrs.value_type = 'datetime64' elif com.is_timedelta64_dtype(value.dtype): - self._handle.create_array(self.group, key, value.view('i8')) + self._handle.create_array( + self.group, key, value.view('i8')) getattr( self.group, key)._v_attrs.value_type = 'timedelta64' else: @@ -2778,7 +2798,8 @@ def write(self, obj, **kwargs): for i, ax in enumerate(data.axes): if i == 0: if not ax.is_unique: - raise ValueError("Columns index has to be unique for fixed format") + raise ValueError( + "Columns index has to be unique for fixed format") self.write_index('axis%d' % i, ax) # Supporting mixed-type DataFrame objects...nontrivial @@ -2911,7 +2932,8 @@ def is_multi_index(self): def 
validate_metadata(self, existing): """ create / validate metadata """ - self.metadata = [ c.name for c in self.values_axes if c.metadata is not None ] + self.metadata = [ + c.name for c in self.values_axes if c.metadata is not None] def validate_multiindex(self, obj): """validate that we can store the multi-index; reset and return the @@ -3008,11 +3030,11 @@ def write_metadata(self, key, values): """ values = Series(values) self.parent.put(self._get_metadata_path(key), values, format='table', - encoding=self.encoding, nan_rep=self.nan_rep) + encoding=self.encoding, nan_rep=self.nan_rep) def read_metadata(self, key): """ return the meta data array for this key """ - if getattr(getattr(self.group,'meta',None),key,None) is not None: + if getattr(getattr(self.group, 'meta', None), key, None) is not None: return self.parent.select(self._get_metadata_path(key)) return None @@ -3173,11 +3195,13 @@ def create_index(self, columns=None, optlevel=None, kind=None): # create the index if not v.is_indexed: if v.type.startswith('complex'): - raise TypeError('Columns containing complex values can be stored but cannot' - ' be indexed when using table format. Either use fixed ' - 'format, set index=False, or do not include the columns ' - 'containing complex values to data_columns when ' - 'initializing the table.') + raise TypeError( + 'Columns containing complex values can be stored ' + 'but cannot' + ' be indexed when using table format. 
Either use ' + 'fixed format, set index=False, or do not include ' + 'the columns containing complex values to ' + 'data_columns when initializing the table.') v.create_index(**kw) def read_axes(self, where, **kwargs): @@ -3553,8 +3577,10 @@ def read_coordinates(self, where=None, start=None, stop=None, **kwargs): coords = self.selection.select_coords() if self.selection.filter is not None: for field, op, filt in self.selection.filter.format(): - data = self.read_column(field, start=coords.min(), stop=coords.max()+1) - coords = coords[op(data.iloc[coords-coords.min()], filt).values] + data = self.read_column( + field, start=coords.min(), stop=coords.max() + 1) + coords = coords[ + op(data.iloc[coords - coords.min()], filt).values] return Index(coords) @@ -3643,7 +3669,8 @@ def read(self, where=None, columns=None, **kwargs): if not self.read_axes(where=where, **kwargs): return None - factors = [Categorical.from_array(a.values, ordered=True) for a in self.index_axes] + factors = [Categorical.from_array( + a.values, ordered=True) for a in self.index_axes] levels = [f.categories for f in factors] N = [len(f.categories) for f in factors] labels = [f.codes for f in factors] @@ -3664,7 +3691,8 @@ def read(self, where=None, columns=None, **kwargs): # the data need to be sorted sorted_values = c.take_data().take(sorter, axis=0) if sorted_values.ndim == 1: - sorted_values = sorted_values.reshape((sorted_values.shape[0],1)) + sorted_values = sorted_values.reshape( + (sorted_values.shape[0], 1)) take_labels = [l.take(sorter) for l in labels] items = Index(c.values) @@ -3771,10 +3799,10 @@ def write(self, obj, axes=None, append=False, complib=None, self.set_attrs() # create the table - table = self._handle.create_table(self.group, **options) - + self._handle.create_table(self.group, **options) else: - table = self.table + pass + # table = self.table # update my info self.set_info() @@ -3828,7 +3856,7 @@ def write_data(self, chunksize, dropna=False): if i < nindexes - 1: repeater 
= np.prod([indexes[bi].shape[0] - for bi in range(i + 1, nindexes)]) + for bi in range(i + 1, nindexes)]) idx = np.repeat(idx, repeater) bindexes.append(idx) @@ -3847,7 +3875,7 @@ def write_data(self, chunksize, dropna=False): if chunksize is None: chunksize = 100000 - rows = np.empty(min(chunksize,nrows), dtype=self.dtype) + rows = np.empty(min(chunksize, nrows), dtype=self.dtype) chunks = int(nrows / chunksize) + 1 for i in range(chunks): start_i = i * chunksize @@ -3928,7 +3956,8 @@ def delete(self, where=None, start=None, stop=None, **kwargs): # create the selection table = self.table - self.selection = Selection(self, where, start=start, stop=stop, **kwargs) + self.selection = Selection( + self, where, start=start, stop=stop, **kwargs) values = self.selection.select_coords() # delete the rows in reverse order @@ -3958,7 +3987,7 @@ def delete(self, where=None, start=None, stop=None, **kwargs): for g in reversed(groups): rows = l.take(lrange(g, pg)) table.remove_rows(start=rows[rows.index[0] - ], stop=rows[rows.index[-1]] + 1) + ], stop=rows[rows.index[-1]] + 1) pg = g self.table.flush() @@ -4177,6 +4206,7 @@ def read(self, **kwargs): return df + class AppendablePanelTable(AppendableTable): """ suppor the new appendable table formats """ @@ -4232,7 +4262,9 @@ def _get_info(info, name): idx = info[name] = dict() return idx -### tz to/from coercion ### +# tz to/from coercion + + def _get_tz(tz): """ for a tz-aware type, return an encoded zone """ zone = tslib.get_timezone(tz) @@ -4240,6 +4272,7 @@ def _get_tz(tz): zone = tslib.tot_seconds(tz.utcoffset()) return zone + def _set_tz(values, tz, preserve_UTC=False, coerce=False): """ coerce the values to a DatetimeIndex if tz is set @@ -4267,6 +4300,7 @@ def _set_tz(values, tz, preserve_UTC=False, coerce=False): return values + def _convert_index(index, encoding=None, format_type=None): index_name = getattr(index, 'name', None) @@ -4393,7 +4427,8 @@ def _unconvert_index_legacy(data, kind, legacy=False, encoding=None): 
def _convert_string_array(data, encoding, itemsize=None): """ - we take a string-like that is object dtype and coerce to a fixed size string type + we take a string-like that is object dtype and coerce to a fixed size + string type Parameters ---------- @@ -4408,7 +4443,8 @@ def _convert_string_array(data, encoding, itemsize=None): # encode if needed if encoding is not None and len(data): - data = Series(data.ravel()).str.encode(encoding).values.reshape(data.shape) + data = Series(data.ravel()).str.encode( + encoding).values.reshape(data.shape) # create the sized dtype if itemsize is None: @@ -4417,6 +4453,7 @@ def _convert_string_array(data, encoding, itemsize=None): data = np.asarray(data, dtype="S%d" % itemsize) return data + def _unconvert_string_array(data, nan_rep=None, encoding=None): """ inverse of _convert_string_array @@ -4552,7 +4589,7 @@ def generate(self, where): q = self.table.queryables() try: return Expr(where, queryables=q, encoding=self.table.encoding) - except NameError as detail: + except NameError: # raise a nice message, suggesting that the user should use # data_columns raise ValueError( @@ -4572,7 +4609,8 @@ def select(self): """ if self.condition is not None: return self.table.table.read_where(self.condition.format(), - start=self.start, stop=self.stop) + start=self.start, + stop=self.stop) elif self.coordinates is not None: return self.table.table.read_coordinates(self.coordinates) return self.table.table.read(start=self.start, stop=self.stop) @@ -4594,8 +4632,8 @@ def select_coords(self): if self.condition is not None: return self.table.table.get_where_list(self.condition.format(), - start=start, stop=stop, - sort=True) + start=start, stop=stop, + sort=True) elif self.coordinates is not None: return self.coordinates @@ -4603,6 +4641,7 @@ def select_coords(self): # utilities ### + def timeit(key, df, fn=None, remove=True, **kwargs): if fn is None: fn = 'timeit.h5' diff --git a/pandas/io/sas.py b/pandas/io/sas.py index 
006c2aaf55ca8..39e83b7715cda 100644 --- a/pandas/io/sas.py +++ b/pandas/io/sas.py @@ -16,15 +16,18 @@ import numpy as np from pandas.util.decorators import Appender -_correct_line1 = "HEADER RECORD*******LIBRARY HEADER RECORD!!!!!!!000000000000000000000000000000 " -_correct_header1 = "HEADER RECORD*******MEMBER HEADER RECORD!!!!!!!000000000000000001600000000" -_correct_header2 = "HEADER RECORD*******DSCRPTR HEADER RECORD!!!!!!!000000000000000000000000000000 " -_correct_obs_header = "HEADER RECORD*******OBS HEADER RECORD!!!!!!!000000000000000000000000000000 " +_correct_line1 = ("HEADER RECORD*******LIBRARY HEADER RECORD" + "!!!!!!!000000000000000000000000000000 ") +_correct_header1 = ("HEADER RECORD*******MEMBER HEADER RECORD!!!!!!!" + "000000000000000001600000000") +_correct_header2 = ("HEADER RECORD*******DSCRPTR HEADER RECORD!!!!!!!" + "000000000000000000000000000000 ") +_correct_obs_header = ("HEADER RECORD*******OBS HEADER RECORD!!!!!!!" + "000000000000000000000000000000 ") _fieldkeys = ['ntype', 'nhfun', 'field_length', 'nvar0', 'name', 'label', 'nform', 'nfl', 'num_decimals', 'nfj', 'nfill', 'niform', 'nifl', 'nifd', 'npos', '_'] - _base_params_doc = """\ Parameters ---------- @@ -110,13 +113,14 @@ @Appender(_read_sas_doc) -def read_sas(filepath_or_buffer, format='xport', index=None, encoding='ISO-8859-1', - chunksize=None, iterator=False): +def read_sas(filepath_or_buffer, format='xport', index=None, + encoding='ISO-8859-1', chunksize=None, iterator=False): format = format.lower() if format == 'xport': - reader = XportReader(filepath_or_buffer, index=index, encoding=encoding, + reader = XportReader(filepath_or_buffer, index=index, + encoding=encoding, chunksize=chunksize) else: raise ValueError('only xport format is supported') @@ -130,7 +134,8 @@ def read_sas(filepath_or_buffer, format='xport', index=None, encoding='ISO-8859- def _parse_date(datestr): """ Given a date in xport format, return Python date. 
""" try: - return datetime.strptime(datestr, "%d%b%y:%H:%M:%S") # e.g. "16FEB11:10:07:55" + # e.g. "16FEB11:10:07:55" + return datetime.strptime(datestr, "%d%b%y:%H:%M:%S") except ValueError: return pd.NaT @@ -151,7 +156,7 @@ def _split_line(s, parts): out = {} start = 0 for name, length in parts: - out[name] = s[start:start+length].strip() + out[name] = s[start:start + length].strip() start += length del out['_'] return out @@ -225,7 +230,8 @@ def _parse_float_vec(vec): # incremented by 1 and the fraction bits left 4 positions to the # right of the radix point. (had to add >> 24 because C treats & # 0x7f as 0x7f000000 and Python doesn't) - ieee1 |= ((((((xport1 >> 24) & 0x7f) - 65) << 2) + shift + 1023) << 20) | (xport1 & 0x80000000) + ieee1 |= ((((((xport1 >> 24) & 0x7f) - 65) << 2) + + shift + 1023) << 20) | (xport1 & 0x80000000) ieee = np.empty((len(ieee1),), dtype='>u4,>u4') ieee['f0'] = ieee1 @@ -236,11 +242,9 @@ def _parse_float_vec(vec): return ieee - class XportReader(object): __doc__ = _xport_reader_doc - def __init__(self, filepath_or_buffer, index=None, encoding='ISO-8859-1', chunksize=None): @@ -266,11 +270,9 @@ def __init__(self, filepath_or_buffer, index=None, encoding='ISO-8859-1', self._read_header() - def _get_row(self): return self.filepath_or_buffer.read(80).decode() - def _read_header(self): self.filepath_or_buffer.seek(0) @@ -280,8 +282,8 @@ def _read_header(self): raise ValueError("Header record is not an XPORT file.") line2 = self._get_row() - file_info = _split_line(line2, [['prefix', 24], ['version', 8], ['OS', 8], - ['_', 24], ['created', 16]]) + file_info = _split_line(line2, [['prefix', 24], ['version', 8], + ['OS', 8], ['_', 24], ['created', 16]]) if file_info['prefix'] != "SAS SAS SASLIB": raise ValueError("Header record has invalid prefix.") file_info['created'] = _parse_date(file_info['created']) @@ -293,16 +295,22 @@ def _read_header(self): # read member header header1 = self._get_row() header2 = self._get_row() - if not 
header1.startswith(_correct_header1) or not header2 == _correct_header2: + if (not header1.startswith(_correct_header1) or + not header2 == _correct_header2): raise ValueError("Member header not found.") - fieldnamelength = int(header1[-5:-2]) # usually 140, could be 135 + fieldnamelength = int(header1[-5:-2]) # usually 140, could be 135 # member info - member_info = _split_line(self._get_row(), [['prefix', 8], ['set_name', 8], - ['sasdata', 8],['version', 8], - ['OS', 8],['_', 24],['created', 16]]) - member_info.update( _split_line(self._get_row(), [['modified', 16], ['_', 16], - ['label', 40],['type', 8]])) + member_info = _split_line(self._get_row(), [['prefix', 8], + ['set_name', 8], + ['sasdata', 8], + ['version', 8], + ['OS', 8], ['_', 24], + ['created', 16]]) + member_info.update(_split_line(self._get_row(), + [['modified', 16], + ['_', 16], + ['label', 40], ['type', 8]])) member_info['modified'] = _parse_date(member_info['modified']) member_info['created'] = _parse_date(member_info['created']) self.member_info = member_info @@ -310,15 +318,16 @@ def _read_header(self): # read field names types = {1: 'numeric', 2: 'char'} fieldcount = int(self._get_row()[54:58]) - datalength = fieldnamelength*fieldcount - if datalength % 80: # round up to nearest 80 - datalength += 80 - datalength%80 + datalength = fieldnamelength * fieldcount + if datalength % 80: # round up to nearest 80 + datalength += 80 - datalength % 80 fielddata = self.filepath_or_buffer.read(datalength) fields = [] obs_length = 0 while len(fielddata) >= fieldnamelength: # pull data for one field - field, fielddata = (fielddata[:fieldnamelength], fielddata[fieldnamelength:]) + field, fielddata = ( + fielddata[:fieldnamelength], fielddata[fieldnamelength:]) # rest at end gets ignored, so if field is short, pad out # to match struct pattern below @@ -330,7 +339,8 @@ def _read_header(self): field['ntype'] = types[field['ntype']] fl = field['field_length'] if field['ntype'] == 'numeric' and ((fl < 2) or 
(fl > 8)): - raise TypeError("Floating point field width %d is not between 2 and 8." % fw) + raise TypeError("Floating point field width %d is not between " + "2 and 8." % fl) for k, v in field.items(): try: @@ -354,12 +364,11 @@ def _read_header(self): # Setup the dtype. dtypel = [] - for i,field in enumerate(self.fields): + for i, field in enumerate(self.fields): dtypel.append(('s' + str(i), "S" + str(field['field_length']))) dtype = np.dtype(dtypel) self._dtype = dtype - def __iter__(self): try: if self._chunksize: @@ -370,20 +379,22 @@ def __iter__(self): except StopIteration: pass - def _record_count(self): """ Get number of records in file. - This is maybe suboptimal because we have to seek to the end of the file. + This is maybe suboptimal because we have to seek to the end of the + file. Side effect: returns file position to record_start. """ self.filepath_or_buffer.seek(0, 2) - total_records_length = self.filepath_or_buffer.tell() - self.record_start + total_records_length = (self.filepath_or_buffer.tell() - + self.record_start) if total_records_length % 80 != 0: + import warnings warnings.warn("xport file may be corrupted") if self.record_length > 80: @@ -406,7 +417,6 @@ def _record_count(self): return (total_records_length - tail_pad) // self.record_length - def get_chunk(self, size=None): """ Reads lines from Xport file and returns as dataframe @@ -424,7 +434,6 @@ def get_chunk(self, size=None): size = self._chunksize return self.read(nrows=size) - def _missing_double(self, vec): v = vec.view(dtype='u1,u1,u2,u4') miss = (v['f1'] == 0) & (v['f2'] == 0) & (v['f3'] == 0) @@ -433,7 +442,6 @@ def _missing_double(self, vec): miss &= miss1 return miss - @Appender(_read_method_doc) def read(self, nrows=None): @@ -448,11 +456,12 @@ def read(self, nrows=None): data = np.frombuffer(raw, dtype=self._dtype, count=read_lines) df = pd.DataFrame(index=range(read_lines)) - for j,x in enumerate(self.columns): + for j, x in enumerate(self.columns): vec = data['s%d' % j] 
ntype = self.fields[j]['ntype'] if ntype == "numeric": - vec = _handle_truncated_float_vec(vec, self.fields[j]['field_length']) + vec = _handle_truncated_float_vec( + vec, self.fields[j]['field_length']) miss = self._missing_double(vec) v = _parse_float_vec(vec) v[miss] = np.nan diff --git a/pandas/io/sql.py b/pandas/io/sql.py index 95a6d02b1ccb6..63725988c8065 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -14,7 +14,8 @@ import pandas.lib as lib import pandas.core.common as com -from pandas.compat import lzip, map, zip, raise_with_traceback, string_types, text_type +from pandas.compat import (lzip, map, zip, raise_with_traceback, + string_types, text_type) from pandas.core.api import DataFrame, Series from pandas.core.common import isnull from pandas.core.base import PandasObject @@ -33,8 +34,8 @@ class DatabaseError(IOError): pass -#------------------------------------------------------------------------------ -#--- Helper functions +# ----------------------------------------------------------------------------- +# -- Helper functions _SQLALCHEMY_INSTALLED = None @@ -85,15 +86,16 @@ def _handle_date_column(col, format=None): else: if format in ['D', 's', 'ms', 'us', 'ns']: return to_datetime(col, errors='coerce', unit=format, utc=True) - elif (issubclass(col.dtype.type, np.floating) - or issubclass(col.dtype.type, np.integer)): + elif (issubclass(col.dtype.type, np.floating) or + issubclass(col.dtype.type, np.integer)): # parse dates as timestamp format = 's' if format is None else format return to_datetime(col, errors='coerce', unit=format, utc=True) elif com.is_datetime64tz_dtype(col): # coerce to UTC timezone # GH11216 - return to_datetime(col,errors='coerce').astype('datetime64[ns, UTC]') + return (to_datetime(col, errors='coerce') + .astype('datetime64[ns, UTC]')) else: return to_datetime(col, errors='coerce', format=format, utc=True) @@ -118,7 +120,6 @@ def _parse_date_columns(data_frame, parse_dates): fmt = None data_frame[col_name] = 
_handle_date_column(df_col, format=fmt) - # we want to coerce datetime64_tz dtypes for now # we could in theory do a 'nice' conversion from a FixedOffset tz # GH11216 @@ -152,7 +153,7 @@ def execute(sql, con, cur=None, params=None): ---------- sql : string Query to be executed - con : SQLAlchemy connectable(engine/connection) or sqlite3 DBAPI2 connection + con : SQLAlchemy connectable(engine/connection) or sqlite3 connection Using SQLAlchemy makes it possible to use any DB supported by that library. If a DBAPI2 object, only sqlite3 is supported. @@ -172,8 +173,8 @@ def execute(sql, con, cur=None, params=None): return pandas_sql.execute(*args) -#------------------------------------------------------------------------------ -#--- Deprecated tquery and uquery +# ----------------------------------------------------------------------------- +# -- Deprecated tquery and uquery def _safe_fetch(cur): try: @@ -204,7 +205,8 @@ def tquery(sql, con=None, cur=None, retry=True): SQL query to be executed con: DBAPI2 connection, default: None cur: deprecated, cursor is obtained from connection, default: None - retry: boolean value to specify whether to retry after failure, default: True + retry: boolean value to specify whether to retry after failure + default: True Returns ------- @@ -258,7 +260,8 @@ def uquery(sql, con=None, cur=None, retry=True, params=None): SQL query to be executed con: DBAPI2 connection, default: None cur: deprecated, cursor is obtained from connection, default: None - retry: boolean value to specify whether to retry after failure, default: True + retry: boolean value to specify whether to retry after failure + default: True params: list or tuple, optional, default: None List of parameters to pass to execute method. 
@@ -289,8 +292,8 @@ def uquery(sql, con=None, cur=None, retry=True, params=None):
     return result
 
 
-#------------------------------------------------------------------------------
-#--- Read and write to DataFrames
+# -----------------------------------------------------------------------------
+# -- Read and write to DataFrames
 
 def read_sql_table(table_name, con, schema=None, index_col=None,
                    coerce_float=True, parse_dates=None, columns=None,
@@ -601,7 +604,8 @@ def has_table(table_name, con, flavor='sqlite', schema=None):
 
 _MYSQL_WARNING = ("The 'mysql' flavor with DBAPI connection is deprecated "
                   "and will be removed in future versions. "
-                  "MySQL will be further supported with SQLAlchemy connectables.")
+                  "MySQL will be further supported with SQLAlchemy "
+                  "connectables.")
 
 
 def _engine_builder(con):
@@ -609,17 +613,18 @@ def _engine_builder(con):
     Returns a SQLAlchemy engine from a URI (if con is a string)
     else it just return con without modifying it
     """
+    global _SQLALCHEMY_INSTALLED
     if isinstance(con, string_types):
         try:
             import sqlalchemy
             con = sqlalchemy.create_engine(con)
             return con
-
         except ImportError:
             _SQLALCHEMY_INSTALLED = False
 
     return con
 
 
+
 def pandasSQL_builder(con, flavor=None, schema=None, meta=None,
                       is_cursor=False):
     """
@@ -646,6 +651,7 @@ class SQLTable(PandasObject):
     pass them between functions all the time.
     """
     # TODO: support for multiIndex
+
     def __init__(self, name, pandas_sql_engine, frame=None, index=True,
                  if_exists='fail', prefix='pandas', index_label=None,
                  schema=None, keys=None, dtype=None):
@@ -829,8 +835,8 @@ def _index_name(self, index, index_label):
             else:
                 return index_label
         # return the used column labels for the index columns
-        if (nlevels == 1 and 'index' not in self.frame.columns
-                and self.frame.index.name is None):
+        if (nlevels == 1 and 'index' not in self.frame.columns and
+                self.frame.index.name is None):
             return ['index']
         else:
             return [l if l is not None else "level_{0}".format(i)
@@ -857,7 +863,7 @@ def _get_column_names_and_types(self, dtype_mapper):
                 dtype_mapper(self.frame.iloc[:, i]),
                 False)
                for i in range(len(self.frame.columns))
-              ]
+               ]
 
         return column_names_and_types
 
@@ -913,7 +919,8 @@ def _harmonize_columns(self, parse_dates=None):
                 # the type the dataframe column should have
                 col_type = self._get_dtype(sql_col.type)
 
-                if col_type is datetime or col_type is date or col_type is DatetimeTZDtype:
+                if (col_type is datetime or col_type is date or
+                        col_type is DatetimeTZDtype):
                     self.frame[col_name] = _handle_date_column(df_col)
 
                 elif col_type is float:
@@ -923,7 +930,8 @@ def _harmonize_columns(self, parse_dates=None):
                 elif len(df_col) == df_col.count():
                     # No NA values, can convert ints and bools
                     if col_type is np.dtype('int64') or col_type is bool:
-                        self.frame[col_name] = df_col.astype(col_type, copy=False)
+                        self.frame[col_name] = df_col.astype(
+                            col_type, copy=False)
 
                 # Handle date parsing
                 if col_name in parse_dates:
@@ -959,12 +967,13 @@ def _sqlalchemy_type(self, col):
 
         col_type = self._get_notnull_col_dtype(col)
 
-        from sqlalchemy.types import (BigInteger, Integer, Float, Text, Boolean,
-                                      DateTime, Date, Time)
+        from sqlalchemy.types import (BigInteger, Integer, Float,
+                                      Text, Boolean,
+                                      DateTime, Date, Time)
 
         if col_type == 'datetime64' or col_type == 'datetime':
             try:
-                tz = col.tzinfo
+                tz = col.tzinfo  # noqa
                 return DateTime(timezone=True)
             except:
                 return DateTime
@@ -995,7 +1004,8 @@ def _sqlalchemy_type(self, col):
         return Text
 
     def _get_dtype(self, sqltype):
-        from sqlalchemy.types import Integer, Float, Boolean, DateTime, Date, TIMESTAMP
+        from sqlalchemy.types import (Integer, Float, Boolean, DateTime,
+                                      Date, TIMESTAMP)
 
         if isinstance(sqltype, Float):
             return float
@@ -1023,12 +1033,12 @@ class PandasSQL(PandasObject):
     """
 
     def read_sql(self, *args, **kwargs):
-        raise ValueError("PandasSQL must be created with an SQLAlchemy connectable"
-                         " or connection+sql flavor")
+        raise ValueError("PandasSQL must be created with an SQLAlchemy "
+                         "connectable or connection+sql flavor")
 
     def to_sql(self, *args, **kwargs):
-        raise ValueError("PandasSQL must be created with an SQLAlchemy connectable"
-                         " or connection+sql flavor")
+        raise ValueError("PandasSQL must be created with an SQLAlchemy "
+                         "connectable or connection+sql flavor")
 
 
 class SQLDatabase(PandasSQL):
@@ -1158,10 +1168,10 @@ def read_query(self, sql, index_col=None, coerce_float=True,
             - Dict of ``{column_name: format string}`` where format string is
               strftime compatible in case of parsing string times or is one of
              (D, s, ns, ms, us) in case of parsing integer timestamps
-            - Dict of ``{column_name: arg dict}``, where the arg dict corresponds
-              to the keyword arguments of :func:`pandas.to_datetime`
-              Especially useful with databases without native Datetime support,
-              such as SQLite
+            - Dict of ``{column_name: arg dict}``, where the arg dict
+              corresponds to the keyword arguments of
+              :func:`pandas.to_datetime` Especially useful with databases
+              without native Datetime support, such as SQLite
         chunksize : int, default None
             If specified, return an iterator where `chunksize` is the number
             of rows to include in each chunk.
@@ -1250,7 +1260,8 @@ def to_sql(self, frame, name, if_exists='fail', index=True,
             warnings.warn("The provided table name '{0}' is not found exactly "
                           "as such in the database after writing the table, "
                           "possibly due to case sensitivity issues. Consider "
-                          "using lower case table names.".format(name), UserWarning)
+                          "using lower case table names.".format(name),
+                          UserWarning)
 
     @property
     def tables(self):
@@ -1334,6 +1345,7 @@ def _get_unicode_name(name):
         raise ValueError("Cannot convert identifier to UTF-8: '%s'" % name)
     return uname
 
+
 def _get_valid_mysql_name(name):
     # Filter for unquoted identifiers
     # See http://dev.mysql.com/doc/refman/5.0/en/identifiers.html
@@ -1351,7 +1363,8 @@ def _get_valid_mysql_name(name):
 
 
 def _get_valid_sqlite_name(name):
-    # See http://stackoverflow.com/questions/6514274/how-do-you-escape-strings-for-sqlite-table-column-names-in-python
+    # See http://stackoverflow.com/questions/6514274/how-do-you-escape-strings\
+    # -for-sqlite-table-column-names-in-python
     # Ensure the string can be encoded as UTF-8.
     # Ensure the string does not include any NUL characters.
     # Replace all " with "".
@@ -1447,7 +1460,7 @@ def _create_table_setup(self):
             cnames_br = ", ".join([escape(c) for c in keys])
             create_tbl_stmts.append(
                 "CONSTRAINT {tbl}_pk PRIMARY KEY ({cnames_br})".format(
-                tbl=self.name, cnames_br=cnames_br))
+                    tbl=self.name, cnames_br=cnames_br))
 
         create_stmts = ["CREATE TABLE " + escape(self.name) + " (\n" +
                        ',\n '.join(create_tbl_stmts) + "\n)"]
@@ -1458,7 +1471,7 @@ def _create_table_setup(self):
             cnames = "_".join(ix_cols)
             cnames_br = ",".join([escape(c) for c in ix_cols])
             create_stmts.append(
-                "CREATE INDEX " + escape("ix_"+self.name+"_"+cnames) +
+                "CREATE INDEX " + escape("ix_" + self.name + "_" + cnames) +
                 "ON " + escape(self.name) + " (" + cnames_br + ")")
 
         return create_stmts
@@ -1546,7 +1559,8 @@ def execute(self, *args, **kwargs):
                                " to rollback" % (args[0], exc))
                 raise_with_traceback(ex)
 
-            ex = DatabaseError("Execution failed on sql '%s': %s" % (args[0], exc))
+            ex = DatabaseError(
+                "Execution failed on sql '%s': %s" % (args[0], exc))
             raise_with_traceback(ex)
 
     @staticmethod
@@ -1557,7 +1571,7 @@ def _query_iterator(cursor, chunksize, columns, index_col=None,
         while True:
             data = cursor.fetchmany(chunksize)
             if type(data) == tuple:
-                data = list(data)
+                data = list(data)
             if not data:
                 cursor.close()
                 break
@@ -1636,8 +1650,10 @@ def to_sql(self, frame, name, if_exists='fail', index=True,
             table.insert(chunksize)
 
     def has_table(self, name, schema=None):
-        escape = _SQL_GET_IDENTIFIER[self.flavor]
-        esc_name = escape(name)
+        # TODO(wesm): unused?
+        # escape = _SQL_GET_IDENTIFIER[self.flavor]
+        # esc_name = escape(name)
+
         wld = _SQL_WILDCARD[self.flavor]
         flavor_map = {
             'sqlite': ("SELECT name FROM sqlite_master "
@@ -1645,7 +1661,7 @@ def has_table(self, name, schema=None):
             'mysql': "SHOW TABLES LIKE %s" % wld}
         query = flavor_map.get(self.flavor)
 
-        return len(self.execute(query, [name,]).fetchall()) > 0
+        return len(self.execute(query, [name, ]).fetchall()) > 0
 
     def get_table(self, table_name, schema=None):
         return None  # not supported in fallback mode
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 806bd3df83843..8181e69abc60b 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -29,7 +29,9 @@
 from pandas.lib import max_len_string_array, infer_dtype
 from pandas.tslib import NaT, Timestamp
 
-_version_error = "Version of given Stata file is not 104, 105, 108, 113 (Stata 8/9), 114 (Stata 10/11), 115 (Stata 12), 117 (Stata 13), or 118 (Stata 14)"
+_version_error = ("Version of given Stata file is not 104, 105, 108, "
+                  "113 (Stata 8/9), 114 (Stata 10/11), 115 (Stata 12), "
+                  "117 (Stata 13), or 118 (Stata 14)")
 
 _statafile_processing_params1 = """\
 convert_dates : boolean, defaults to True
@@ -245,11 +247,12 @@ def convert_year_days_safe(year, days):
         datetime or datetime64 Series
         """
         if year.max() < (MAX_YEAR - 1) and year.min() > MIN_YEAR:
-            return to_datetime(year, format='%Y') + to_timedelta(days, unit='d')
+            return (to_datetime(year, format='%Y') +
+                    to_timedelta(days, unit='d'))
         else:
             index = getattr(year, 'index', None)
-            value = [datetime.datetime(y, 1, 1) + relativedelta(days=int(d)) for
-                     y, d in zip(year, days)]
+            value = [datetime.datetime(y, 1, 1) + relativedelta(days=int(d))
+                     for y, d in zip(year, days)]
             return Series(value, index=index)
 
     def convert_delta_safe(base, deltas, unit):
@@ -265,8 +268,8 @@ def convert_delta_safe(base, deltas, unit):
             return Series(values, index=index)
         elif unit == 'ms':
             if deltas.max() > MAX_MS_DELTA or deltas.min() < MIN_MS_DELTA:
-                values = [base + relativedelta(microseconds=(int(d) * 1000)) for
-                          d in deltas]
+                values = [base + relativedelta(microseconds=(int(d) * 1000))
+                          for d in deltas]
                 return Series(values, index=index)
         else:
             raise ValueError('format not understood')
@@ -274,7 +277,8 @@ def convert_delta_safe(base, deltas, unit):
         deltas = to_timedelta(deltas, unit=unit)
         return base + deltas
 
-    # TODO: If/when pandas supports more than datetime64[ns], this should be improved to use correct range, e.g. datetime[Y] for yearly
+    # TODO: If/when pandas supports more than datetime64[ns], this should be
+    # improved to use correct range, e.g. datetime[Y] for yearly
     bad_locs = np.isnan(dates)
     has_bad_values = False
     if bad_locs.any():
@@ -426,8 +430,8 @@ def parse_dates_safe(dates, delta=False, year=False, days=False):
 
 
 excessive_string_length_error = """
-Fixed width strings in Stata .dta files are limited to 244 (or fewer) characters.
-Column '%s' does not satisfy this restriction.
+Fixed width strings in Stata .dta files are limited to 244 (or fewer)
+characters. Column '%s' does not satisfy this restriction.
 """
 
@@ -462,8 +466,8 @@ class InvalidColumnName(Warning):
 {0}
 
 If this is not what you expect, please make sure you have Stata-compliant
-column names in your DataFrame (strings only, max 32 characters, only alphanumerics and
-underscores, no Stata reserved words)
+column names in your DataFrame (strings only, max 32 characters, only
+alphanumerics and underscores, no Stata reserved words)
 """
 
@@ -481,17 +485,16 @@ def _cast_to_stata_types(data):
     -----
     Numeric columns in Stata must be one of int8, int16, int32, float32 or
     float64, with some additional value restrictions. int8 and int16 columns
-    are checked for violations of the value restrictions and
-    upcast if needed. int64 data is not usable in Stata, and so it is
-    downcast to int32 whenever the value are in the int32 range, and
-    sidecast to float64 when larger than this range. If the int64 values
-    are outside of the range of those perfectly representable as float64 values,
-    a warning is raised.
-
-    bool columns are cast to int8. uint colums are converted to int of the same
-    size if there is no loss in precision, other wise are upcast to a larger
-    type. uint64 is currently not supported since it is concerted to object in
-    a DataFrame.
+    are checked for violations of the value restrictions and upcast if needed.
+    int64 data is not usable in Stata, and so it is downcast to int32 whenever
+    the value are in the int32 range, and sidecast to float64 when larger than
+    this range. If the int64 values are outside of the range of those
+    perfectly representable as float64 values, a warning is raised.
+
+    bool columns are cast to int8. uint colums are converted to int of the
+    same size if there is no loss in precision, other wise are upcast to a
+    larger type. uint64 is currently not supported since it is concerted to
+    object in a DataFrame.
     """
     ws = ''
     # original, if small, if large
@@ -510,8 +513,8 @@ def _cast_to_stata_types(data):
             else:
                 dtype = c_data[2]
             if c_data[2] == np.float64:  # Warn if necessary
-                if data[col].max() >= 2 ** 53:
-                    ws = precision_loss_doc % ('uint64', 'float64')
+                if data[col].max() >= 2 ** 53:
+                    ws = precision_loss_doc % ('uint64', 'float64')
 
             data[col] = data[col].astype(dtype)
@@ -523,7 +526,8 @@ def _cast_to_stata_types(data):
             if data[col].max() > 32740 or data[col].min() < -32767:
                 data[col] = data[col].astype(np.int32)
         elif dtype == np.int64:
-            if data[col].max() <= 2147483620 and data[col].min() >= -2147483647:
+            if (data[col].max() <= 2147483620 and
+                    data[col].min() >= -2147483647):
                 data[col] = data[col].astype(np.int32)
             else:
                 data[col] = data[col].astype(np.float64)
@@ -723,7 +727,8 @@ class StataMissingValue(StringMixin):
         MISSING_VALUES[value] = '.'
         if i > 0:
             MISSING_VALUES[value] += chr(96 + i)
 
-    int_value = struct.unpack('<i', struct.pack('<f', value))[0] + increment
+    int_value = struct.unpack('<i', struct.pack('<f', value))[
+        0] + increment
     float32_base = struct.pack('<i', int_value)
 
     float64_base = b'\x00\x00\x00\x00\x00\x00\xe0\x7f'
@@ -762,8 +767,8 @@ def __repr__(self):
         return "%s(%s)" % (self.__class__, self)
 
     def __eq__(self, other):
-        return (isinstance(other, self.__class__)
-                and self.string == other.string and self.value == other.value)
+        return (isinstance(other, self.__class__) and
+                self.string == other.string and self.value == other.value)
 
     @classmethod
     def get_base_missing_value(cls, dtype):
@@ -829,7 +834,9 @@ def __init__(self, encoding):
         self.TYPE_MAP_XML = \
             dict(
                 [
-                    (32768, 'Q'),  # Not really a Q, unclear how to handle byteswap
+                    # Not really a Q, unclear how to handle byteswap
+                    (32768, 'Q'),
+
                     (65526, 'd'),
                     (65527, 'f'),
                     (65528, 'l'),
@@ -863,23 +870,23 @@ def __init__(self, encoding):
         }
 
         # These missing values are the generic '.' in Stata, and are used
         # to replace nans
-        self.MISSING_VALUES = \
-            {
-                'b': 101,
-                'h': 32741,
-                'l': 2147483621,
-                'f': np.float32(struct.unpack('<f', b'\x00\x00\x00\x7f')[0]),
-                'd': np.float64(struct.unpack('<d', b'\x00\x00\x00\x00\x00\x00\xe0\x7f')[0])
-            }
+        self.MISSING_VALUES = {
+            'b': 101,
+            'h': 32741,
+            'l': 2147483621,
+            'f': np.float32(struct.unpack('<f', b'\x00\x00\x00\x7f')[0]),
+            'd': np.float64(
+                struct.unpack('<d', b'\x00\x00\x00\x00\x00\x00\xe0\x7f')[0])
+        }
         self.NUMPY_TYPE_MAP = \
-            {
+        {
             'b': 'i1',
             'h': 'i2',
             'l': 'i4',
             'f': 'f4',
             'd': 'f8',
             'Q': 'u8'
-            }
+        }
 
         # Reserved words cannot be used as variable names
         self.RESERVED_WORDS = ('aggregate', 'array', 'boolean', 'break',
@@ -1176,7 +1183,8 @@ def _read_old_header(self, first_char):
         self.format_version = struct.unpack('b', first_char)[0]
         if self.format_version not in [104, 105, 108, 113, 114, 115]:
             raise ValueError(_version_error)
-        self.byteorder = struct.unpack('b', self.path_or_buf.read(1))[0] == 0x1 and '>' or '<'
+        self.byteorder = struct.unpack('b', self.path_or_buf.read(1))[
+            0] == 0x1 and '>' or '<'
         self.filetype = struct.unpack('b', self.path_or_buf.read(1))[0]
         self.path_or_buf.read(1)  # unused
@@ -1250,8 +1258,8 @@ def _read_old_header(self, first_char):
         self.data_location = self.path_or_buf.tell()
 
     def _calcsize(self, fmt):
-        return (type(fmt) is int and fmt
-                or struct.calcsize(self.byteorder + fmt))
+        return (type(fmt) is int and fmt or
+                struct.calcsize(self.byteorder + fmt))
 
     def _decode(self, s):
         s = s.partition(b"\0")[0]
@@ -1335,7 +1343,8 @@ def _read_strls(self):
                 break
 
             if self.format_version == 117:
-                v_o = struct.unpack(self.byteorder + 'Q', self.path_or_buf.read(8))[0]
+                v_o = struct.unpack(self.byteorder + 'Q',
+                                    self.path_or_buf.read(8))[0]
             else:
                 buf = self.path_or_buf.read(12)
                 # Only tested on little endian file on little endian machine.
@@ -1435,7 +1444,8 @@ def read(self, nrows=None, convert_dates=None,
         dtype = []  # Convert struct data types to numpy data type
         for i, typ in enumerate(self.typlist):
             if typ in self.NUMPY_TYPE_MAP:
-                dtype.append(('s' + str(i), self.byteorder + self.NUMPY_TYPE_MAP[typ]))
+                dtype.append(('s' + str(i), self.byteorder +
+                              self.NUMPY_TYPE_MAP[typ]))
             else:
                 dtype.append(('s' + str(i), 'S' + str(typ)))
         dtype = np.dtype(dtype)
@@ -1487,7 +1497,8 @@ def read(self, nrows=None, convert_dates=None,
         # Decode strings
         for col, typ in zip(data, self.typlist):
             if type(typ) is int:
-                data[col] = data[col].apply(self._null_terminate, convert_dtype=True)
+                data[col] = data[col].apply(
+                    self._null_terminate, convert_dtype=True)
 
         data = self._insert_strls(data)
 
@@ -1503,7 +1514,8 @@ def read(self, nrows=None, convert_dates=None,
             dtype = data[col].dtype
             if (dtype != np.dtype(object)) and (dtype != self.dtyplist[i]):
                 requires_type_conversion = True
-                data_formatted.append((col, Series(data[col], index, self.dtyplist[i])))
+                data_formatted.append(
+                    (col, Series(data[col], index, self.dtyplist[i])))
             else:
                 data_formatted.append((col, data[col]))
         if requires_type_conversion:
@@ -1771,8 +1783,8 @@ def _dtype_to_default_stata_fmt(dtype, column):
     # TODO: expand this to handle a default datetime format?
     if dtype.type == np.object_:
         inferred_dtype = infer_dtype(column.dropna())
-        if not (inferred_dtype in ('string', 'unicode')
-                or len(column) == 0):
+        if not (inferred_dtype in ('string', 'unicode') or
+                len(column) == 0):
             raise ValueError('Writing general object arrays is not supported')
         itemsize = max_len_string_array(com._ensure_object(column.values))
         if itemsize > 244:
@@ -1836,6 +1848,7 @@ class StataWriter(StataParser):
     >>> writer = StataWriter('./date_data_file.dta', data, {'date' : 'tw'})
     >>> writer.write_file()
     """
+
     def __init__(self, fname, data, convert_dates=None, write_index=True,
                  encoding="latin-1", byteorder=None, time_stamp=None,
                  data_label=None):
@@ -1883,8 +1896,8 @@ def _prepare_categoricals(self, data):
                 self._value_labels.append(StataValueLabel(data[col]))
                 dtype = data[col].cat.codes.dtype
                 if dtype == np.int64:
-                    raise ValueError('It is not possible to export int64-based '
-                                     'categorical data to Stata.')
+                    raise ValueError('It is not possible to export '
+                                     'int64-based categorical data to Stata.')
                 values = data[col].cat.codes.values.copy()
 
                 # Upcast if needed so that correct missing values can be set
@@ -1921,16 +1934,17 @@ def _replace_nans(self, data):
         return data
 
     def _check_column_names(self, data):
-        """Checks column names to ensure that they are valid Stata column names.
+        """
+        Checks column names to ensure that they are valid Stata column names.
 
         This includes checks for:
             * Non-string names
             * Stata keywords
             * Variables that start with numbers
             * Variables with names that are too long
 
-        When an illegal variable name is detected, it is converted, and if dates
-        are exported, the variable name is propogated to the date conversion
-        dictionary
+        When an illegal variable name is detected, it is converted, and if
+        dates are exported, the variable name is propogated to the date
+        conversion dictionary
         """
         converted_names = []
         columns = list(data.columns)
@@ -1970,7 +1984,8 @@ def _check_column_names(self, data):
                     orig_name = orig_name.encode('utf-8')
                 except:
                     pass
-                converted_names.append('{0} -> {1}'.format(orig_name, name))
+                converted_names.append(
+                    '{0} -> {1}'.format(orig_name, name))
                 columns[j] = name
 
diff --git a/pandas/io/tests/generate_legacy_storage_files.py b/pandas/io/tests/generate_legacy_storage_files.py
index 91d0333b3407f..f556c980bb80c 100644
--- a/pandas/io/tests/generate_legacy_storage_files.py
+++ b/pandas/io/tests/generate_legacy_storage_files.py
@@ -2,15 +2,14 @@
 from __future__ import print_function
 from distutils.version import LooseVersion
 from pandas import (Series, DataFrame, Panel,
-                    SparseSeries, SparseDataFrame, SparsePanel,
-                    Index, MultiIndex, PeriodIndex, bdate_range, to_msgpack,
-                    date_range, period_range, bdate_range, Timestamp, Categorical,
-                    Period)
+                    SparseSeries, SparseDataFrame,
+                    Index, MultiIndex, bdate_range, to_msgpack,
+                    date_range, period_range,
+                    Timestamp, Categorical, Period)
 import os
 import sys
 import numpy as np
 import pandas
-import pandas.util.testing as tm
 import platform as pl
 
 
@@ -66,50 +65,68 @@ def create_data():
     scalars = dict(timestamp=Timestamp('20130101'))
     if LooseVersion(pandas.__version__) >= '0.17.0':
-        scalars['period'] = Period('2012','M')
+        scalars['period'] = Period('2012', 'M')
 
     index = dict(int=Index(np.arange(10)),
                  date=date_range('20130101', periods=10),
                  period=period_range('2013-01-01', freq='M', periods=10))
 
-    mi = dict(reg2=MultiIndex.from_tuples(tuple(zip(*[['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
-                                                      ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']])),
-                                          names=['first', 'second']))
+    mi = dict(reg2=MultiIndex.from_tuples(
+        tuple(zip(*[['bar', 'bar', 'baz', 'baz', 'foo',
+                     'foo', 'qux', 'qux'],
+                    ['one', 'two', 'one', 'two', 'one',
+                     'two', 'one', 'two']])),
+        names=['first', 'second']))
 
     series = dict(float=Series(data['A']),
                   int=Series(data['B']),
                   mixed=Series(data['E']),
-                  ts=Series(np.arange(10).astype(np.int64), index=date_range('20130101',periods=10)),
+                  ts=Series(np.arange(10).astype(np.int64),
+                            index=date_range('20130101', periods=10)),
                   mi=Series(np.arange(5).astype(np.float64),
-                            index=MultiIndex.from_tuples(tuple(zip(*[[1, 1, 2, 2, 2], [3, 4, 3, 4, 5]])),
-                                                         names=['one', 'two'])),
-                  dup=Series(np.arange(5).astype(np.float64), index=['A', 'B', 'C', 'D', 'A']),
+                            index=MultiIndex.from_tuples(
+                                tuple(zip(*[[1, 1, 2, 2, 2],
+                                            [3, 4, 3, 4, 5]])),
+                                names=['one', 'two'])),
+                  dup=Series(np.arange(5).astype(np.float64),
+                             index=['A', 'B', 'C', 'D', 'A']),
                   cat=Series(Categorical(['foo', 'bar', 'baz'])),
-                  dt=Series(date_range('20130101',periods=5)),
-                  dt_tz=Series(date_range('20130101',periods=5,tz='US/Eastern')))
+                  dt=Series(date_range('20130101', periods=5)),
+                  dt_tz=Series(date_range('20130101', periods=5,
+                                          tz='US/Eastern')))
     if LooseVersion(pandas.__version__) >= '0.17.0':
         series['period'] = Series([Period('2000Q1')] * 5)
 
     mixed_dup_df = DataFrame(data)
     mixed_dup_df.columns = list("ABCDA")
-    frame = dict(float=DataFrame(dict(A=series['float'], B=series['float'] + 1)),
+    frame = dict(float=DataFrame(dict(A=series['float'],
+                                      B=series['float'] + 1)),
                 int=DataFrame(dict(A=series['int'], B=series['int'] + 1)),
-                 mixed=DataFrame(dict([(k, data[k]) for k in ['A', 'B', 'C', 'D']])),
-                 mi=DataFrame(dict(A=np.arange(5).astype(np.float64), B=np.arange(5).astype(np.int64)),
-                              index=MultiIndex.from_tuples(tuple(zip(*[['bar', 'bar', 'baz', 'baz', 'baz'],
-                                                                       ['one', 'two', 'one', 'two', 'three']])),
-                                                           names=['first', 'second'])),
+                 mixed=DataFrame(dict([(k, data[k])
+                                       for k in ['A', 'B', 'C', 'D']])),
+                 mi=DataFrame(dict(A=np.arange(5).astype(np.float64),
+                                   B=np.arange(5).astype(np.int64)),
+                              index=MultiIndex.from_tuples(
+                                  tuple(zip(*[['bar', 'bar', 'baz',
+                                               'baz', 'baz'],
+                                              ['one', 'two', 'one',
+                                               'two', 'three']])),
+                                  names=['first', 'second'])),
                  dup=DataFrame(np.arange(15).reshape(5, 3).astype(np.float64),
                                columns=['A', 'B', 'A']),
                  cat_onecol=DataFrame(dict(A=Categorical(['foo', 'bar']))),
-                 cat_and_float=DataFrame(dict(A=Categorical(['foo', 'bar', 'baz']),
-                                              B=np.arange(3).astype(np.int64))),
+                 cat_and_float=DataFrame(dict(
+                     A=Categorical(['foo', 'bar', 'baz']),
+                     B=np.arange(3).astype(np.int64))),
                  mixed_dup=mixed_dup_df,
-                 dt_mixed_tzs=DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'), B=Timestamp('20130603', tz='CET')), index=range(5)),
+                 dt_mixed_tzs=DataFrame(dict(
+                     A=Timestamp('20130102', tz='US/Eastern'),
+                     B=Timestamp('20130603', tz='CET')), index=range(5)),
                  )
 
     mixed_dup_panel = Panel(dict(ItemA=frame['float'], ItemB=frame['int']))
     mixed_dup_panel.items = ['ItemA', 'ItemA']
-    panel = dict(float=Panel(dict(ItemA=frame['float'], ItemB=frame['float'] + 1)),
+    panel = dict(float=Panel(dict(ItemA=frame['float'],
+                                  ItemB=frame['float'] + 1)),
                  dup=Panel(np.arange(30).reshape(3, 5, 2).astype(np.float64),
                            items=['A', 'B', 'A']),
                  mixed_dup=mixed_dup_panel)
 
@@ -153,20 +170,22 @@ def create_msgpack_data():
 
 
 def platform_name():
-    return '_'.join([str(pandas.__version__), str(pl.machine()), str(pl.system().lower()), str(pl.python_version())])
+    return '_'.join([str(pandas.__version__), str(pl.machine()),
+                     str(pl.system().lower()), str(pl.python_version())])
 
 
 def write_legacy_pickles(output_dir):
 
     # make sure we are < 0.13 compat (in py3)
     try:
-        from pandas.compat import zip, cPickle as pickle
+        from pandas.compat import zip, cPickle as pickle  # noqa
     except:
         import pickle
 
     version = pandas.__version__
 
-    print("This script generates a storage file for the current arch, system, and python version")
+    print("This script generates a storage file for the current arch, system, "
+          "and python version")
     print("  pandas version: {0}".format(version))
     print("  output dir    : {0}".format(output_dir))
     print("  storage format: pickle")
@@ -184,7 +203,8 @@ def write_legacy_msgpack(output_dir):
 
     version = pandas.__version__
 
-    print("This script generates a storage file for the current arch, system, and python version")
+    print("This script generates a storage file for the current arch, "
+          "system, and python version")
     print("  pandas version: {0}".format(version))
     print("  output dir    : {0}".format(output_dir))
     print("  storage format: msgpack")
@@ -200,7 +220,8 @@ def write_legacy_file():
     sys.path.insert(0, '.')
 
     if len(sys.argv) != 3:
-        exit("Specify output directory and storage type: generate_legacy_storage_files.py <output_dir> <storage_type>")
+        exit("Specify output directory and storage type: generate_legacy_"
+             "storage_files.py <output_dir> <storage_type>")
 
     output_dir = str(sys.argv[1])
     storage_type = str(sys.argv[2])
diff --git a/pandas/io/tests/test_clipboard.py b/pandas/io/tests/test_clipboard.py
index a056bac293cfa..a7da27a2f75dd 100644
--- a/pandas/io/tests/test_clipboard.py
+++ b/pandas/io/tests/test_clipboard.py
@@ -13,13 +13,14 @@
 
 
 try:
-    import pandas.util.clipboard
+    import pandas.util.clipboard  # noqa
 except OSError:
     raise nose.SkipTest("no clipboard found")
 
 
 @disabled
 class TestClipboard(tm.TestCase):
+
     @classmethod
     def setUpClass(cls):
         super(TestClipboard, cls).setUpClass()
@@ -40,11 +41,12 @@ def setUpClass(cls):
         # Test columns exceeding "max_colwidth" (GH8305)
         _cw = get_option('display.max_colwidth') + 1
         cls.data['colwidth'] = mkdf(5, 3, data_gen_f=lambda *args: 'x' * _cw,
-                                     c_idx_type='s', r_idx_type='i',
-                                     c_idx_names=[None], r_idx_names=[None])
+                                    c_idx_type='s', r_idx_type='i',
+                                    c_idx_names=[None], r_idx_names=[None])
         # Test GH-5346
         max_rows = get_option('display.max_rows')
-        cls.data['longdf'] = mkdf(max_rows+1, 3, data_gen_f=lambda *args: randint(2),
+        cls.data['longdf'] = mkdf(max_rows + 1, 3,
+                                  data_gen_f=lambda *args: randint(2),
                                   c_idx_type='s', r_idx_type='i',
                                   c_idx_names=[None], r_idx_names=[None])
         # Test for non-ascii text: GH9263
@@ -61,18 +63,18 @@ def check_round_trip_frame(self, data_type, excel=None, sep=None):
         data = self.data[data_type]
         data.to_clipboard(excel=excel, sep=sep)
         if sep is not None:
-            result = read_clipboard(sep=sep,index_col=0)
+            result = read_clipboard(sep=sep, index_col=0)
         else:
             result = read_clipboard()
         tm.assert_frame_equal(data, result, check_dtype=False)
 
     def test_round_trip_frame_sep(self):
         for dt in self.data_types:
-            self.check_round_trip_frame(dt,sep=',')
+            self.check_round_trip_frame(dt, sep=',')
 
     def test_round_trip_frame_string(self):
         for dt in self.data_types:
-            self.check_round_trip_frame(dt,excel=False)
+            self.check_round_trip_frame(dt, excel=False)
 
     def test_round_trip_frame(self):
         for dt in self.data_types:
diff --git a/pandas/io/tests/test_common.py b/pandas/io/tests/test_common.py
index 73cae1130c740..55fe3f3357c05 100644
--- a/pandas/io/tests/test_common.py
+++ b/pandas/io/tests/test_common.py
@@ -5,7 +5,6 @@
 import os
 from os.path import isabs
 
-import nose
 import pandas.util.testing as tm
 
 from pandas.io import common
@@ -20,6 +19,7 @@
 except ImportError:
     pass
 
+
 class TestCommonIOCapabilities(tm.TestCase):
 
     def test_expand_user(self):
diff --git a/pandas/io/tests/test_cparser.py b/pandas/io/tests/test_cparser.py
index ceb845073e2c3..52cb56bea1122 100644
--- a/pandas/io/tests/test_cparser.py
+++ b/pandas/io/tests/test_cparser.py
@@ -3,27 +3,18 @@
 """
 
 from pandas.compat import StringIO, BytesIO, map
-from datetime import datetime
 from pandas import compat
-import csv
 import os
 import sys
-import re
 
 import nose
 
 from numpy import nan
 import numpy as np
 
-from pandas import DataFrame, Series, Index, isnull, MultiIndex
-import pandas.io.parsers as parsers
-from pandas.io.parsers import (read_csv, read_table, read_fwf,
-                               TextParser, TextFileReader)
-from pandas.util.testing import (assert_almost_equal, assert_frame_equal,
-                                 assert_series_equal, network)
-import pandas.lib as lib
-from pandas import compat
-from pandas.lib import Timestamp
+from pandas import DataFrame
+from pandas.io.parsers import (read_csv, TextFileReader)
+from pandas.util.testing import assert_frame_equal
 
 import pandas.util.testing as tm
 
@@ -43,19 +34,19 @@ def test_file_handle(self):
         try:
             f = open(self.csv1, 'rb')
             reader = TextReader(f)
-            result = reader.read()
+            result = reader.read()  # noqa
         finally:
             f.close()
 
     def test_string_filename(self):
         reader = TextReader(self.csv1, header=None)
-        result = reader.read()
+        reader.read()
 
     def test_file_handle_mmap(self):
         try:
             f = open(self.csv1, 'rb')
             reader = TextReader(f, memory_map=True, header=None)
-            result = reader.read()
+            reader.read()
         finally:
             f.close()
 
@@ -63,7 +54,7 @@ def test_StringIO(self):
         text = open(self.csv1, 'rb').read()
         src = BytesIO(text)
         reader = TextReader(src, header=None)
-        result = reader.read()
+        reader.read()
 
     def test_string_factorize(self):
         # should this be optional?
@@ -136,7 +127,7 @@ def test_integer_thousands_alt(self):
         data = '123.456\n12.500'
 
         reader = TextFileReader(StringIO(data), delimiter=':',
-                               thousands='.', header=None)
+                                thousands='.', header=None)
         result = reader.read()
 
         expected = [123456, 12500]
@@ -192,7 +183,7 @@ def test_header_not_enough_lines(self):
         self.assertEqual(header, expected)
 
         recs = reader.read()
-        expected = {0 : [1, 4], 1 : [2, 5], 2 : [3, 6]}
+        expected = {0: [1, 4], 1: [2, 5], 2: [3, 6]}
         assert_array_dicts_equal(expected, recs)
 
         # not enough rows
@@ -202,7 +193,8 @@ def test_header_not_enough_lines(self):
 
     def test_header_not_enough_lines_as_recarray(self):
         if compat.is_platform_windows():
-            raise nose.SkipTest("segfaults on win-64, only when all tests are run")
+            raise nose.SkipTest(
+                "segfaults on win-64, only when all tests are run")
 
         data = ('skip this\n'
                 'skip this\n'
@@ -279,7 +271,8 @@ def test_numpy_string_dtype_as_recarray(self):
 aaaaa,5"""
 
         if compat.is_platform_windows():
-            raise nose.SkipTest("segfaults on win-64, only when all tests are run")
+            raise nose.SkipTest(
+                "segfaults on win-64, only when all tests are run")
 
         def _make_reader(**kwds):
             return TextReader(StringIO(data), delimiter=',', header=None,
@@ -382,15 +375,15 @@ def test_empty_field_eof(self):
                       index=[1, 1])
 
         c = DataFrame([[1, 2, 3, 4], [6, nan, nan, nan],
                        [8, 9, 10, 11], [13, 14, nan, nan]],
-                       columns=list('abcd'),
-                       index=[0, 5, 7, 12])
+                      columns=list('abcd'),
+                      index=[0, 5, 7, 12])
 
         for _ in range(100):
             df = read_csv(StringIO('a,b\nc\n'), skiprows=0,
                           names=['a'], engine='c')
             assert_frame_equal(df, a)
 
-            df = read_csv(StringIO('1,1,1,1,0\n'*2 + '\n'*2),
+            df = read_csv(StringIO('1,1,1,1,0\n' * 2 + '\n' * 2),
                           names=list("abcd"), engine='c')
             assert_frame_equal(df, b)
 
@@ -398,6 +391,7 @@ def test_empty_field_eof(self):
                           names=list('abcd'), engine='c')
             assert_frame_equal(df, c)
 
+
 def assert_array_dicts_equal(left, right):
     for k, v in compat.iteritems(left):
         assert(np.array_equal(v, right[k]))
diff --git a/pandas/io/tests/test_data.py b/pandas/io/tests/test_data.py
index afc61dc42f569..ee4dd079ccb0a 100644
--- a/pandas/io/tests/test_data.py
+++ b/pandas/io/tests/test_data.py
@@ -1,3 +1,5 @@
+# flake8: noqa
+
 from __future__ import print_function
 from pandas import compat
 import warnings
diff --git a/pandas/io/tests/test_date_converters.py b/pandas/io/tests/test_date_converters.py
index 2b23556706f0c..3855dc485ed83 100644
--- a/pandas/io/tests/test_date_converters.py
+++ b/pandas/io/tests/test_date_converters.py
@@ -1,28 +1,17 @@
-from pandas.compat import StringIO, BytesIO
+from pandas.compat import StringIO
 from datetime import date, datetime
-import csv
-import os
-import sys
-import re
 
 import nose
 
-from numpy import nan
 import numpy as np
-from numpy.testing.decorators import slow
-
-from pandas import DataFrame, Series, Index, MultiIndex, isnull
-import pandas.io.parsers as parsers
-from pandas.io.parsers import (read_csv, read_table, read_fwf,
-                               TextParser)
-from pandas.util.testing import (assert_almost_equal, assert_frame_equal,
-                                 assert_series_equal, network)
-import pandas.lib as lib
-from pandas import compat
-from pandas.lib import Timestamp
+
+from pandas import DataFrame, MultiIndex
+from pandas.io.parsers import (read_csv, read_table)
+from pandas.util.testing import assert_frame_equal
 
 import pandas.io.date_converters as conv
 import pandas.util.testing as tm
 
 
+
 class TestConverters(tm.TestCase):
 
     def setUp(self):
@@ -68,7 +57,8 @@ def test_parse_date_fields(self):
         expected = np.array([datetime(2007, 1, 3), datetime(2008, 2, 4)])
         self.assertTrue((result == expected).all())
 
-        data = "year, month, day, a\n 2001 , 01 , 10 , 10.\n 2001 , 02 , 1 , 11."
+        data = ("year, month, day, a\n 2001 , 01 , 10 , 10.\n"
+                "2001 , 02 , 1 , 11.")
         datecols = {'ymd': [0, 1, 2]}
         df = read_table(StringIO(data), sep=',', header=0,
                         parse_dates=datecols,
@@ -136,8 +126,9 @@ def date_parser(date, time):
                       parse_dates={'datetime': ['date', 'time']},
                       index_col=['datetime', 'prn'])
 
-        datetimes = np.array(['2013-11-03T19:00:00Z']*3, dtype='datetime64[s]')
-        df_correct = DataFrame(data={'rxstatus': ['00E80000']*3},
+        datetimes = np.array(['2013-11-03T19:00:00Z'] * 3,
+                             dtype='datetime64[s]')
+        df_correct = DataFrame(data={'rxstatus': ['00E80000'] * 3},
                                index=MultiIndex.from_tuples(
                                    [(datetimes[0], 126),
                                     (datetimes[1], 23),
@@ -146,6 +137,5 @@ def date_parser(date, time):
         assert_frame_equal(df, df_correct)
 
 if __name__ == '__main__':
-    import nose
     nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
                    exit=False)
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 8023c25cdd660..082a26df681a4 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -82,6 +82,7 @@ def _skip_if_no_boto():
 
 
 class SharedItems(object):
+
     def setUp(self):
         self.dirpath = tm.get_data_path()
         self.frame = _frame.copy()
@@ -233,13 +234,13 @@ def test_excel_passes_na(self):
         excel = self.get_excelfile('test4')
 
         parsed = read_excel(excel, 'Sheet1', keep_default_na=False,
-                           na_values=['apple'])
+                            na_values=['apple'])
         expected = DataFrame([['NA'], [1], ['NA'], [np.nan], ['rabbit']],
                              columns=['Test'])
         tm.assert_frame_equal(parsed, expected)
 
         parsed = read_excel(excel, 'Sheet1', keep_default_na=True,
-                           na_values=['apple'])
+                            na_values=['apple'])
         expected = DataFrame([[np.nan], [1], [np.nan], [np.nan], ['rabbit']],
                              columns=['Test'])
         tm.assert_frame_equal(parsed, expected)
@@ -325,7 +326,8 @@ def test_reader_special_dtypes(self):
 
         # convert_float and converters should be different but both accepted
         expected["StrCol"] = expected["StrCol"].apply(str)
-        actual = self.get_exceldf(basename, 'Sheet1',
converters={"StrCol": str}) + actual = self.get_exceldf( + basename, 'Sheet1', converters={"StrCol": str}) tm.assert_frame_equal(actual, expected) no_convert_float = float_expected.copy() @@ -352,7 +354,8 @@ def test_reader_converters(self): 3: lambda x: str(x) if x else '', } - # should read in correctly and set types of single cells (not array dtypes) + # should read in correctly and set types of single cells (not array + # dtypes) actual = self.get_exceldf(basename, 'Sheet1', converters=converters) tm.assert_frame_equal(actual, expected) @@ -490,21 +493,21 @@ def test_creating_and_reading_multiple_sheets(self): _skip_if_no_openpyxl() def tdf(sheetname): - d, i = [11,22,33], [1,2,3] - return DataFrame(d,i,columns=[sheetname]) + d, i = [11, 22, 33], [1, 2, 3] + return DataFrame(d, i, columns=[sheetname]) - sheets = ['AAA','BBB','CCC'] + sheets = ['AAA', 'BBB', 'CCC'] dfs = [tdf(s) for s in sheets] - dfs = dict(zip(sheets,dfs)) + dfs = dict(zip(sheets, dfs)) with ensure_clean(self.ext) as pth: with ExcelWriter(pth) as ew: for sheetname, df in iteritems(dfs): - df.to_excel(ew,sheetname) - dfs_returned = read_excel(pth,sheetname=sheets) + df.to_excel(ew, sheetname) + dfs_returned = read_excel(pth, sheetname=sheets) for s in sheets: - tm.assert_frame_equal(dfs[s],dfs_returned[s]) + tm.assert_frame_equal(dfs[s], dfs_returned[s]) def test_reader_seconds(self): # Test reading times with and without milliseconds. GH5945. 
@@ -546,133 +549,152 @@ def test_reader_seconds(self): tm.assert_frame_equal(actual, expected) def test_read_excel_multiindex(self): - #GH 4679 - mi = MultiIndex.from_product([['foo','bar'],['a','b']]) + # GH 4679 + mi = MultiIndex.from_product([['foo', 'bar'], ['a', 'b']]) mi_file = os.path.join(self.dirpath, 'testmultiindex' + self.ext) expected = DataFrame([[1, 2.5, pd.Timestamp('2015-01-01'), True], - [2, 3.5, pd.Timestamp('2015-01-02'), False], - [3, 4.5, pd.Timestamp('2015-01-03'), False], - [4, 5.5, pd.Timestamp('2015-01-04'), True]], - columns = mi) + [2, 3.5, pd.Timestamp('2015-01-02'), False], + [3, 4.5, pd.Timestamp('2015-01-03'), False], + [4, 5.5, pd.Timestamp('2015-01-04'), True]], + columns=mi) - actual = read_excel(mi_file, 'mi_column', header=[0,1]) + actual = read_excel(mi_file, 'mi_column', header=[0, 1]) tm.assert_frame_equal(actual, expected) - actual = read_excel(mi_file, 'mi_column', header=[0,1], index_col=0) + actual = read_excel(mi_file, 'mi_column', header=[0, 1], index_col=0) tm.assert_frame_equal(actual, expected) expected.columns = ['a', 'b', 'c', 'd'] expected.index = mi - actual = read_excel(mi_file, 'mi_index', index_col=[0,1]) + actual = read_excel(mi_file, 'mi_index', index_col=[0, 1]) tm.assert_frame_equal(actual, expected, check_names=False) expected.columns = mi - actual = read_excel(mi_file, 'both', index_col=[0,1], header=[0,1]) + actual = read_excel(mi_file, 'both', index_col=[0, 1], header=[0, 1]) tm.assert_frame_equal(actual, expected, check_names=False) expected.index = mi.set_names(['ilvl1', 'ilvl2']) expected.columns = ['a', 'b', 'c', 'd'] - actual = read_excel(mi_file, 'mi_index_name', index_col=[0,1]) + actual = read_excel(mi_file, 'mi_index_name', index_col=[0, 1]) tm.assert_frame_equal(actual, expected) expected.index = list(range(4)) expected.columns = mi.set_names(['c1', 'c2']) - actual = read_excel(mi_file, 'mi_column_name', header=[0,1], index_col=0) + actual = read_excel(mi_file, 'mi_column_name', + header=[0, 
1], index_col=0) tm.assert_frame_equal(actual, expected) # Issue #11317 - expected.columns = mi.set_levels([1,2],level=1).set_names(['c1', 'c2']) - actual = read_excel(mi_file, 'name_with_int', index_col=0, header=[0,1]) + expected.columns = mi.set_levels( + [1, 2], level=1).set_names(['c1', 'c2']) + actual = read_excel(mi_file, 'name_with_int', + index_col=0, header=[0, 1]) tm.assert_frame_equal(actual, expected) expected.columns = mi.set_names(['c1', 'c2']) expected.index = mi.set_names(['ilvl1', 'ilvl2']) - actual = read_excel(mi_file, 'both_name', index_col=[0,1], header=[0,1]) + actual = read_excel(mi_file, 'both_name', + index_col=[0, 1], header=[0, 1]) tm.assert_frame_equal(actual, expected) - actual = read_excel(mi_file, 'both_name', index_col=[0,1], header=[0,1]) + actual = read_excel(mi_file, 'both_name', + index_col=[0, 1], header=[0, 1]) tm.assert_frame_equal(actual, expected) - actual = read_excel(mi_file, 'both_name_skiprows', index_col=[0,1], - header=[0,1], skiprows=2) + actual = read_excel(mi_file, 'both_name_skiprows', index_col=[0, 1], + header=[0, 1], skiprows=2) tm.assert_frame_equal(actual, expected) - def test_excel_multindex_roundtrip(self): - #GH 4679 + # GH 4679 _skip_if_no_xlsxwriter() with ensure_clean('.xlsx') as pth: for c_idx_names in [True, False]: for r_idx_names in [True, False]: for c_idx_levels in [1, 3]: for r_idx_levels in [1, 3]: - # column index name can't be serialized unless MultiIndex + # column index name can't be serialized unless + # MultiIndex if (c_idx_levels == 1 and c_idx_names): continue - # empty name case current read in as unamed levels, not Nones + # empty name case current read in as unamed levels, + # not Nones check_names = True if not r_idx_names and r_idx_levels > 1: check_names = False df = mkdf(5, 5, c_idx_names, - r_idx_names, c_idx_levels, - r_idx_levels) + r_idx_names, c_idx_levels, + r_idx_levels) df.to_excel(pth) - act = pd.read_excel(pth, index_col=list(range(r_idx_levels)), - 
header=list(range(c_idx_levels))) - tm.assert_frame_equal(df, act, check_names=check_names) + act = pd.read_excel( + pth, index_col=list(range(r_idx_levels)), + header=list(range(c_idx_levels))) + tm.assert_frame_equal( + df, act, check_names=check_names) df.iloc[0, :] = np.nan df.to_excel(pth) - act = pd.read_excel(pth, index_col=list(range(r_idx_levels)), - header=list(range(c_idx_levels))) - tm.assert_frame_equal(df, act, check_names=check_names) + act = pd.read_excel( + pth, index_col=list(range(r_idx_levels)), + header=list(range(c_idx_levels))) + tm.assert_frame_equal( + df, act, check_names=check_names) df.iloc[-1, :] = np.nan df.to_excel(pth) - act = pd.read_excel(pth, index_col=list(range(r_idx_levels)), - header=list(range(c_idx_levels))) - tm.assert_frame_equal(df, act, check_names=check_names) + act = pd.read_excel( + pth, index_col=list(range(r_idx_levels)), + header=list(range(c_idx_levels))) + tm.assert_frame_equal( + df, act, check_names=check_names) def test_excel_oldindex_format(self): - #GH 4679 + # GH 4679 data = np.array([['R0C0', 'R0C1', 'R0C2', 'R0C3', 'R0C4'], ['R1C0', 'R1C1', 'R1C2', 'R1C3', 'R1C4'], ['R2C0', 'R2C1', 'R2C2', 'R2C3', 'R2C4'], ['R3C0', 'R3C1', 'R3C2', 'R3C3', 'R3C4'], ['R4C0', 'R4C1', 'R4C2', 'R4C3', 'R4C4']]) columns = ['C_l0_g0', 'C_l0_g1', 'C_l0_g2', 'C_l0_g3', 'C_l0_g4'] - mi = MultiIndex(levels=[['R_l0_g0', 'R_l0_g1', 'R_l0_g2', 'R_l0_g3', 'R_l0_g4'], - ['R_l1_g0', 'R_l1_g1', 'R_l1_g2', 'R_l1_g3', 'R_l1_g4']], + mi = MultiIndex(levels=[['R_l0_g0', 'R_l0_g1', 'R_l0_g2', + 'R_l0_g3', 'R_l0_g4'], + ['R_l1_g0', 'R_l1_g1', 'R_l1_g2', + 'R_l1_g3', 'R_l1_g4']], labels=[[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]], names=['R0', 'R1']) - si = Index(['R_l0_g0', 'R_l0_g1', 'R_l0_g2', 'R_l0_g3', 'R_l0_g4'], name='R0') + si = Index(['R_l0_g0', 'R_l0_g1', 'R_l0_g2', + 'R_l0_g3', 'R_l0_g4'], name='R0') - in_file = os.path.join(self.dirpath, 'test_index_name_pre17' + self.ext) + in_file = os.path.join( + self.dirpath, 'test_index_name_pre17' + 
self.ext) expected = pd.DataFrame(data, index=si, columns=columns) with tm.assert_produces_warning(FutureWarning): - actual = pd.read_excel(in_file, 'single_names', has_index_names=True) + actual = pd.read_excel( + in_file, 'single_names', has_index_names=True) tm.assert_frame_equal(actual, expected) expected.index.name = None actual = pd.read_excel(in_file, 'single_no_names') tm.assert_frame_equal(actual, expected) with tm.assert_produces_warning(FutureWarning): - actual = pd.read_excel(in_file, 'single_no_names', has_index_names=False) + actual = pd.read_excel( + in_file, 'single_no_names', has_index_names=False) tm.assert_frame_equal(actual, expected) expected.index = mi with tm.assert_produces_warning(FutureWarning): - actual = pd.read_excel(in_file, 'multi_names', has_index_names=True) + actual = pd.read_excel( + in_file, 'multi_names', has_index_names=True) tm.assert_frame_equal(actual, expected) expected.index.names = [None, None] - actual = pd.read_excel(in_file, 'multi_no_names', index_col=[0,1]) + actual = pd.read_excel(in_file, 'multi_no_names', index_col=[0, 1]) tm.assert_frame_equal(actual, expected, check_names=False) with tm.assert_produces_warning(FutureWarning): - actual = pd.read_excel(in_file, 'multi_no_names', index_col=[0,1], + actual = pd.read_excel(in_file, 'multi_no_names', index_col=[0, 1], has_index_names=False) tm.assert_frame_equal(actual, expected, check_names=False) @@ -684,7 +706,7 @@ def test_read_excel_bool_header_arg(self): header=arg) def test_read_excel_chunksize(self): - #GH 8011 + # GH 8011 with tm.assertRaises(NotImplementedError): pd.read_excel(os.path.join(self.dirpath, 'test1' + self.ext), chunksize=100) @@ -703,20 +725,23 @@ def test_read_excel_date_parser(self): date_parser=dateparse) def test_read_excel_skiprows_list(self): - #GH 4903 - actual = pd.read_excel(os.path.join(self.dirpath, 'testskiprows' + self.ext), - 'skiprows_list', skiprows=[0,2]) + # GH 4903 + actual = pd.read_excel(os.path.join(self.dirpath, + 
'testskiprows' + self.ext), + 'skiprows_list', skiprows=[0, 2]) expected = DataFrame([[1, 2.5, pd.Timestamp('2015-01-01'), True], [2, 3.5, pd.Timestamp('2015-01-02'), False], [3, 4.5, pd.Timestamp('2015-01-03'), False], [4, 5.5, pd.Timestamp('2015-01-04'), True]], - columns = ['a','b','c','d']) + columns=['a', 'b', 'c', 'd']) tm.assert_frame_equal(actual, expected) - actual = pd.read_excel(os.path.join(self.dirpath, 'testskiprows' + self.ext), - 'skiprows_list', skiprows=np.array([0,2])) + actual = pd.read_excel(os.path.join(self.dirpath, + 'testskiprows' + self.ext), + 'skiprows_list', skiprows=np.array([0, 2])) tm.assert_frame_equal(actual, expected) + class XlsReaderTests(XlrdTests, tm.TestCase): ext = '.xls' engine_name = 'xlrd' @@ -735,8 +760,6 @@ class XlsmReaderTests(XlrdTests, tm.TestCase): check_skip = staticmethod(_skip_if_no_xlrd) - - class ExcelWriterBase(SharedItems): # Base class for test cases to run with different Excel writers. # To add a writer test, define the following: @@ -882,7 +905,8 @@ def test_int_types(self): float_frame = frame.astype(float) recons = read_excel(path, 'test1', convert_float=False) tm.assert_frame_equal(recons, float_frame, - check_index_type=False, check_column_type=False) + check_index_type=False, + check_column_type=False) def test_float_types(self): _skip_if_no_xlrd() @@ -982,8 +1006,8 @@ def test_roundtrip_indexlabels(self): merge_cells=self.merge_cells) reader = ExcelFile(path) recons = read_excel(reader, 'test1', - index_col=0, - ).astype(np.int64) + index_col=0, + ).astype(np.int64) frame.index.names = ['test'] self.assertEqual(frame.index.names, recons.index.names) @@ -994,8 +1018,8 @@ def test_roundtrip_indexlabels(self): merge_cells=self.merge_cells) reader = ExcelFile(path) recons = read_excel(reader, 'test1', - index_col=0, - ).astype(np.int64) + index_col=0, + ).astype(np.int64) frame.index.names = ['test'] self.assertEqual(frame.index.names, recons.index.names) @@ -1006,8 +1030,8 @@ def 
test_roundtrip_indexlabels(self): merge_cells=self.merge_cells) reader = ExcelFile(path) recons = read_excel(reader, 'test1', - index_col=0, - ).astype(np.int64) + index_col=0, + ).astype(np.int64) frame.index.names = ['test'] tm.assert_frame_equal(frame, recons.astype(bool)) @@ -1036,7 +1060,7 @@ def test_excel_roundtrip_indexname(self): xf = ExcelFile(path) result = read_excel(xf, xf.sheet_names[0], - index_col=0) + index_col=0) tm.assert_frame_equal(result, df) self.assertEqual(result.index.name, 'foo') @@ -1072,8 +1096,8 @@ def test_excel_date_datetime_format(self): with ensure_clean(self.ext) as filename2: writer1 = ExcelWriter(filename1) writer2 = ExcelWriter(filename2, - date_format='DD.MM.YYYY', - datetime_format='DD.MM.YYYY HH-MM-SS') + date_format='DD.MM.YYYY', + datetime_format='DD.MM.YYYY HH-MM-SS') df.to_excel(writer1, 'test1') df.to_excel(writer2, 'test1') @@ -1123,7 +1147,7 @@ def test_to_excel_multiindex(self): frame.to_excel(path, 'test1', merge_cells=self.merge_cells) reader = ExcelFile(path) df = read_excel(reader, 'test1', index_col=[0, 1], - parse_dates=False) + parse_dates=False) tm.assert_frame_equal(frame, df) # Test for Issue 11328. 
If column indices are integers, make @@ -1146,7 +1170,7 @@ def test_to_excel_multiindex_cols(self): header = 0 with ensure_clean(self.ext) as path: - # round trip + # round trip frame.to_excel(path, 'test1', merge_cells=self.merge_cells) reader = ExcelFile(path) df = read_excel(reader, 'test1', header=header, @@ -1155,7 +1179,7 @@ def test_to_excel_multiindex_cols(self): if not self.merge_cells: fm = frame.columns.format(sparsify=False, adjoin=False, names=False) - frame.columns = [ ".".join(map(str, q)) for q in zip(*fm) ] + frame.columns = [".".join(map(str, q)) for q in zip(*fm)] tm.assert_frame_equal(frame, df) def test_to_excel_multiindex_dates(self): @@ -1171,7 +1195,7 @@ def test_to_excel_multiindex_dates(self): tsframe.to_excel(path, 'test1', merge_cells=self.merge_cells) reader = ExcelFile(path) recons = read_excel(reader, 'test1', - index_col=[0, 1]) + index_col=[0, 1]) tm.assert_frame_equal(tsframe, recons) self.assertEqual(recons.index.names, ('time', 'foo')) @@ -1206,7 +1230,7 @@ def test_to_excel_float_format(self): df = DataFrame([[0.123456, 0.234567, 0.567567], [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) + index=['A', 'B'], columns=['X', 'Y', 'Z']) with ensure_clean(self.ext) as filename: df.to_excel(filename, 'test1', float_format='%.2f') @@ -1215,7 +1239,7 @@ def test_to_excel_float_format(self): rs = read_excel(reader, 'test1', index_col=None) xp = DataFrame([[0.12, 0.23, 0.57], [12.32, 123123.20, 321321.20]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) + index=['A', 'B'], columns=['X', 'Y', 'Z']) tm.assert_frame_equal(rs, xp) def test_to_excel_output_encoding(self): @@ -1226,7 +1250,8 @@ def test_to_excel_output_encoding(self): # avoid mixed inferred_type df = DataFrame([[u'\u0192', u'\u0193', u'\u0194'], [u'\u0195', u'\u0196', u'\u0197']], - index=[u'A\u0192', u'B'], columns=[u'X\u0193', u'Y', u'Z']) + index=[u'A\u0192', u'B'], + columns=[u'X\u0193', u'Y', u'Z']) with ensure_clean(filename) as filename: 
df.to_excel(filename, sheet_name='TestSheet', encoding='utf8') @@ -1245,7 +1270,7 @@ def test_to_excel_unicode_filename(self): df = DataFrame([[0.123456, 0.234567, 0.567567], [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) + index=['A', 'B'], columns=['X', 'Y', 'Z']) df.to_excel(filename, 'test1', float_format='%.2f') @@ -1253,7 +1278,7 @@ def test_to_excel_unicode_filename(self): rs = read_excel(reader, 'test1', index_col=None) xp = DataFrame([[0.12, 0.23, 0.57], [12.32, 123123.20, 321321.20]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) + index=['A', 'B'], columns=['X', 'Y', 'Z']) tm.assert_frame_equal(rs, xp) # def test_to_excel_header_styling_xls(self): @@ -1370,7 +1395,8 @@ def test_excel_010_hemstring(self): def roundtrip(df, header=True, parser_hdr=0, index=True): with ensure_clean(self.ext) as path: - df.to_excel(path, header=header, merge_cells=self.merge_cells, index=index) + df.to_excel(path, header=header, + merge_cells=self.merge_cells, index=index) xf = ExcelFile(path) res = read_excel(xf, xf.sheet_names[0], header=parser_hdr) return res @@ -1382,9 +1408,9 @@ def roundtrip(df, header=True, parser_hdr=0, index=True): for j in range(1, 4): # col "" df = mkdf(nrows, ncols, r_idx_nlevels=i, c_idx_nlevels=j) - #this if will be removed once multi column excel writing - #is implemented for now fixing #9794 - if j>1: + # this if will be removed once multi column excel writing + # is implemented for now fixing #9794 + if j > 1: with tm.assertRaises(NotImplementedError): res = roundtrip(df, use_headers, index=False) else: @@ -1424,17 +1450,19 @@ def test_excel_010_hemstring_raises_NotImplementedError(self): def roundtrip2(df, header=True, parser_hdr=0, index=True): with ensure_clean(self.ext) as path: - df.to_excel(path, header=header, merge_cells=self.merge_cells, index=index) + df.to_excel(path, header=header, + merge_cells=self.merge_cells, index=index) xf = ExcelFile(path) res = read_excel(xf, xf.sheet_names[0], 
header=parser_hdr) return res - nrows = 5; ncols = 3 - j = 2; i = 1 + nrows = 5 + ncols = 3 + j = 2 + i = 1 df = mkdf(nrows, ncols, r_idx_nlevels=i, c_idx_nlevels=j) with tm.assertRaises(NotImplementedError): - res = roundtrip2(df, header=False, index=False) - + roundtrip2(df, header=False, index=False) def test_duplicated_columns(self): # Test for issue #5235 @@ -1452,11 +1480,11 @@ def test_duplicated_columns(self): tm.assert_frame_equal(write_frame, read_frame) # 11007 / #10970 - write_frame = DataFrame([[1,2,3,4],[5,6,7,8]], - columns=['A','B','A','B']) + write_frame = DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], + columns=['A', 'B', 'A', 'B']) write_frame.to_excel(path, 'test1') read_frame = read_excel(path, 'test1') - read_frame.columns = ['A','B','A','B'] + read_frame.columns = ['A', 'B', 'A', 'B'] tm.assert_frame_equal(write_frame, read_frame) # 10982 @@ -1488,14 +1516,13 @@ def test_invalid_columns(self): 'B': [2, 2, 2]}) write_frame.to_excel(path, 'test1', columns=['B', 'C']) - expected = write_frame.loc[:, ['B','C']] + expected = write_frame.loc[:, ['B', 'C']] read_frame = read_excel(path, 'test1') tm.assert_frame_equal(expected, read_frame) with tm.assertRaises(KeyError): write_frame.to_excel(path, 'test1', columns=['C', 'D']) - def test_datetimes(self): # Test writing and reading datetimes. For issue #9139. (xref #9185) @@ -1557,7 +1584,8 @@ def wrapped(self, *args, **kwargs): if openpyxl_compat.is_compat(major_ver=major_ver): orig_method(self, *args, **kwargs) else: - msg = 'Installed openpyxl is not supported at this time\. Use.+' + msg = ('Installed openpyxl is not supported at this ' + 'time\. 
Use.+') with tm.assertRaisesRegexp(ValueError, msg): orig_method(self, *args, **kwargs) return wrapped @@ -1566,9 +1594,11 @@ def wrapped(self, *args, **kwargs): def raise_on_incompat_version(major_ver): def versioned_raise_on_incompat_version(cls): - methods = filter(operator.methodcaller('startswith', 'test_'), dir(cls)) + methods = filter(operator.methodcaller( + 'startswith', 'test_'), dir(cls)) for method in methods: - setattr(cls, method, raise_wrapper(major_ver)(getattr(cls, method))) + setattr(cls, method, raise_wrapper( + major_ver)(getattr(cls, method))) return cls return versioned_raise_on_incompat_version @@ -1617,12 +1647,14 @@ def setUpClass(cls): _skip_if_no_openpyxl() import openpyxl ver = openpyxl.__version__ - if not (LooseVersion(ver) >= LooseVersion('2.0.0') and LooseVersion(ver) < LooseVersion('2.2.0')): + if (not (LooseVersion(ver) >= LooseVersion('2.0.0') and + LooseVersion(ver) < LooseVersion('2.2.0'))): raise nose.SkipTest("openpyxl %s >= 2.2" % str(ver)) cls.setUpClass = setUpClass return cls + @raise_on_incompat_version(2) @skip_openpyxl_gt21 class Openpyxl20Tests(ExcelWriterBase, tm.TestCase): @@ -1678,7 +1710,7 @@ def test_to_excel_styleconverter(self): if ver >= LooseVersion('2.0.0') and ver < LooseVersion('2.1.0'): number_format = styles.NumberFormat(format_code='0.00') else: - number_format = '0.00' # XXX: Only works with openpyxl-2.1.0 + number_format = '0.00' # XXX: Only works with openpyxl-2.1.0 protection = styles.Protection(locked=True, hidden=False) @@ -1690,12 +1722,11 @@ def test_to_excel_styleconverter(self): self.assertEqual(kw['number_format'], number_format) self.assertEqual(kw['protection'], protection) - def test_write_cells_merge_styled(self): from pandas.core.format import ExcelCell from openpyxl import styles - sheet_name='merge_styled' + sheet_name = 'merge_styled' sty_b1 = {'font': {'color': '00FF0000'}} sty_a2 = {'font': {'color': '0000FF00'}} @@ -1705,12 +1736,12 @@ def test_write_cells_merge_styled(self): 
ExcelCell(col=0, row=1, val=99, style=sty_a2), ] - sty_merged = {'font': { 'color': '000000FF', 'bold': True }} + sty_merged = {'font': {'color': '000000FF', 'bold': True}} sty_kwargs = _Openpyxl20Writer._convert_to_style_kwargs(sty_merged) openpyxl_sty_merged = styles.Style(**sty_kwargs) merge_cells = [ ExcelCell(col=0, row=0, val='pandas', - mergestart=1, mergeend=1, style=sty_merged), + mergestart=1, mergeend=1, style=sty_merged), ] with ensure_clean('.xlsx') as path: @@ -1724,6 +1755,7 @@ def test_write_cells_merge_styled(self): self.assertEqual(xcell_b1.style, openpyxl_sty_merged) self.assertEqual(xcell_a2.style, openpyxl_sty_merged) + def skip_openpyxl_lt22(cls): """Skip a TestCase instance if openpyxl < 2.2""" @@ -1738,6 +1770,7 @@ def setUpClass(cls): cls.setUpClass = setUpClass return cls + @raise_on_incompat_version(2) @skip_openpyxl_lt22 class Openpyxl22Tests(ExcelWriterBase, tm.TestCase): @@ -1746,7 +1779,6 @@ class Openpyxl22Tests(ExcelWriterBase, tm.TestCase): check_skip = staticmethod(lambda *args, **kwargs: None) def test_to_excel_styleconverter(self): - import openpyxl from openpyxl import styles hstyle = { @@ -1800,15 +1832,13 @@ def test_to_excel_styleconverter(self): self.assertEqual(kw['number_format'], number_format) self.assertEqual(kw['protection'], protection) - def test_write_cells_merge_styled(self): if not openpyxl_compat.is_compat(major_ver=2): raise nose.SkipTest('incompatiable openpyxl version') from pandas.core.format import ExcelCell - from openpyxl import styles - sheet_name='merge_styled' + sheet_name = 'merge_styled' sty_b1 = {'font': {'color': '00FF0000'}} sty_a2 = {'font': {'color': '0000FF00'}} @@ -1818,12 +1848,12 @@ def test_write_cells_merge_styled(self): ExcelCell(col=0, row=1, val=99, style=sty_a2), ] - sty_merged = {'font': { 'color': '000000FF', 'bold': True }} + sty_merged = {'font': {'color': '000000FF', 'bold': True}} sty_kwargs = _Openpyxl22Writer._convert_to_style_kwargs(sty_merged) openpyxl_sty_merged = 
sty_kwargs['font'] merge_cells = [ ExcelCell(col=0, row=0, val='pandas', - mergestart=1, mergeend=1, style=sty_merged), + mergestart=1, mergeend=1, style=sty_merged), ] with ensure_clean('.xlsx') as path: @@ -1847,8 +1877,8 @@ def test_excel_raise_error_on_multiindex_columns_and_no_index(self): _skip_if_no_xlwt() # MultiIndex as columns is not yet implemented 9794 cols = MultiIndex.from_tuples([('site', ''), - ('2014', 'height'), - ('2014', 'weight')]) + ('2014', 'height'), + ('2014', 'weight')]) df = DataFrame(np.random.randn(10, 3), columns=cols) with tm.assertRaises(NotImplementedError): with ensure_clean(self.ext) as path: @@ -1857,8 +1887,8 @@ def test_excel_raise_error_on_multiindex_columns_and_no_index(self): def test_excel_multiindex_columns_and_index_true(self): _skip_if_no_xlwt() cols = MultiIndex.from_tuples([('site', ''), - ('2014', 'height'), - ('2014', 'weight')]) + ('2014', 'height'), + ('2014', 'weight')]) df = pd.DataFrame(np.random.randn(10, 3), columns=cols) with ensure_clean(self.ext) as path: df.to_excel(path, index=True) @@ -1867,8 +1897,8 @@ def test_excel_multiindex_index(self): _skip_if_no_xlwt() # MultiIndex as index works so assert no error #9794 cols = MultiIndex.from_tuples([('site', ''), - ('2014', 'height'), - ('2014', 'weight')]) + ('2014', 'height'), + ('2014', 'weight')]) df = DataFrame(np.random.randn(3, 10), index=cols) with ensure_clean(self.ext) as path: df.to_excel(path, index=False) @@ -1975,7 +2005,7 @@ def test_ExcelWriter_dispatch(self): ExcelWriter('nothing') try: - import xlsxwriter + import xlsxwriter # noqa writer_klass = _XlsxWriter except ImportError: _skip_if_no_openpyxl() diff --git a/pandas/io/tests/test_ga.py b/pandas/io/tests/test_ga.py index 965b3441d7405..b8b698691a9f5 100644 --- a/pandas/io/tests/test_ga.py +++ b/pandas/io/tests/test_ga.py @@ -1,3 +1,5 @@ +# flake8: noqa + import os from datetime import datetime diff --git a/pandas/io/tests/test_gbq.py b/pandas/io/tests/test_gbq.py index 
cc1e901d8f119..88a1e3e0a5cc3 100644 --- a/pandas/io/tests/test_gbq.py +++ b/pandas/io/tests/test_gbq.py @@ -31,7 +31,7 @@ def _test_imports(): global _GOOGLE_API_CLIENT_INSTALLED, _GOOGLE_API_CLIENT_VALID_VERSION, \ - _HTTPLIB2_INSTALLED, _SETUPTOOLS_INSTALLED + _HTTPLIB2_INSTALLED, _SETUPTOOLS_INSTALLED try: import pkg_resources @@ -46,25 +46,27 @@ def _test_imports(): if _SETUPTOOLS_INSTALLED: try: - from apiclient.discovery import build - from apiclient.errors import HttpError + from apiclient.discovery import build # noqa + from apiclient.errors import HttpError # noqa - from oauth2client.client import OAuth2WebServerFlow - from oauth2client.client import AccessTokenRefreshError + from oauth2client.client import OAuth2WebServerFlow # noqa + from oauth2client.client import AccessTokenRefreshError # noqa - from oauth2client.file import Storage - from oauth2client.tools import run_flow - _GOOGLE_API_CLIENT_INSTALLED=True - _GOOGLE_API_CLIENT_VERSION = pkg_resources.get_distribution('google-api-python-client').version + from oauth2client.file import Storage # noqa + from oauth2client.tools import run_flow # noqa + _GOOGLE_API_CLIENT_INSTALLED = True + _GOOGLE_API_CLIENT_VERSION = pkg_resources.get_distribution( + 'google-api-python-client').version - if StrictVersion(_GOOGLE_API_CLIENT_VERSION) >= StrictVersion(google_api_minimum_version): + if (StrictVersion(_GOOGLE_API_CLIENT_VERSION) >= + StrictVersion(google_api_minimum_version)): _GOOGLE_API_CLIENT_VALID_VERSION = True except ImportError: _GOOGLE_API_CLIENT_INSTALLED = False try: - import httplib2 + import httplib2 # noqa _HTTPLIB2_INSTALLED = True except ImportError: _HTTPLIB2_INSTALLED = False @@ -76,11 +78,15 @@ def _test_imports(): raise ImportError('Could not import Google API Client.') if not _GOOGLE_API_CLIENT_VALID_VERSION: - raise ImportError("pandas requires google-api-python-client >= {0} for Google BigQuery support, " - "current version {1}".format(google_api_minimum_version, 
_GOOGLE_API_CLIENT_VERSION))
+        raise ImportError("pandas requires google-api-python-client >= {0} "
+                          "for Google BigQuery support, "
+                          "current version {1}"
+                          .format(google_api_minimum_version,
+                                  _GOOGLE_API_CLIENT_VERSION))

     if not _HTTPLIB2_INSTALLED:
-        raise ImportError("pandas requires httplib2 for Google BigQuery support")
+        raise ImportError(
+            "pandas requires httplib2 for Google BigQuery support")


 def test_requirements():
@@ -110,13 +116,14 @@ def make_mixed_dataframe_v2(test_size):
     flts = np.random.randn(1, test_size)
     ints = np.random.randint(1, 10, size=(1, test_size))
     strs = np.random.randint(1, 10, size=(1, test_size)).astype(str)
-    times = [datetime.now(pytz.timezone('US/Arizona')) for t in range(test_size)]
+    times = [datetime.now(pytz.timezone('US/Arizona'))
+             for t in range(test_size)]

     return DataFrame({'bools': bools[0],
                       'flts': flts[0],
                       'ints': ints[0],
                       'strs': strs[0],
                       'times': times[0]},
-                      index=range(test_size))
+                     index=range(test_size))


 def test_generate_bq_schema_deprecated():
@@ -125,17 +132,21 @@ def test_generate_bq_schema_deprecated():
         df = make_mixed_dataframe_v2(10)
         gbq.generate_bq_schema(df)

+
 class TestGBQConnectorIntegration(tm.TestCase):
+
     def setUp(self):
         test_requirements()

         if not PROJECT_ID:
-            raise nose.SkipTest("Cannot run integration tests without a project id")
+            raise nose.SkipTest(
+                "Cannot run integration tests without a project id")

         self.sut = gbq.GbqConnector(PROJECT_ID)

     def test_should_be_able_to_make_a_connector(self):
-        self.assertTrue(self.sut is not None, 'Could not create a GbqConnector')
+        self.assertTrue(self.sut is not None,
+                        'Could not create a GbqConnector')

     def test_should_be_able_to_get_valid_credentials(self):
         credentials = self.sut.get_credentials()
@@ -156,6 +167,7 @@ def test_should_be_able_to_get_results_from_query(self):


 class TestReadGBQUnitTests(tm.TestCase):
+
     def setUp(self):
         test_requirements()
@@ -192,7 +204,8 @@ def test_read_gbq_with_no_project_id_given_should_fail(self):
             gbq.read_gbq('SELECT "1" as NUMBER_1')

     def test_that_parse_data_works_properly(self):
-        test_schema = {'fields': [{'mode': 'NULLABLE', 'name': 'VALID_STRING', 'type': 'STRING'}]}
+        test_schema = {'fields': [
+            {'mode': 'NULLABLE', 'name': 'VALID_STRING', 'type': 'STRING'}]}
         test_page = [{'f': [{'v': 'PI'}]}]

         test_output = gbq._parse_data(test_schema, test_page)
@@ -201,31 +214,36 @@ def test_that_parse_data_works_properly(self):


 class TestReadGBQIntegration(tm.TestCase):
+
     @classmethod
     def setUpClass(cls):
         # - GLOBAL CLASS FIXTURES -
-        # put here any instruction you want to execute only *ONCE* *BEFORE* executing *ALL* tests
-        # described below.
+        # put here any instruction you want to execute only *ONCE* *BEFORE*
+        # executing *ALL* tests described below.

         if not PROJECT_ID:
-            raise nose.SkipTest("Cannot run integration tests without a project id")
+            raise nose.SkipTest(
+                "Cannot run integration tests without a project id")

         test_requirements()

     def setUp(self):
         # - PER-TEST FIXTURES -
-        # put here any instruction you want to be run *BEFORE* *EVERY* test is executed.
+        # put here any instruction you want to be run *BEFORE* *EVERY* test is
+        # executed.
         pass

     @classmethod
     def tearDownClass(cls):
         # - GLOBAL CLASS FIXTURES -
-        # put here any instruction you want to execute only *ONCE* *AFTER* executing all tests.
+        # put here any instruction you want to execute only *ONCE* *AFTER*
+        # executing all tests.
         pass

     def tearDown(self):
         # - PER-TEST FIXTURES -
-        # put here any instructions you want to be run *AFTER* *EVERY* test is executed.
+        # put here any instructions you want to be run *AFTER* *EVERY* test is
+        # executed.
         pass

     def test_should_properly_handle_valid_strings(self):
@@ -256,7 +274,8 @@ def test_should_properly_handle_null_integers(self):
     def test_should_properly_handle_valid_floats(self):
         query = 'SELECT PI() as VALID_FLOAT'
         df = gbq.read_gbq(query, project_id=PROJECT_ID)
-        tm.assert_frame_equal(df, DataFrame({'VALID_FLOAT': [3.141592653589793]}))
+        tm.assert_frame_equal(df, DataFrame(
+            {'VALID_FLOAT': [3.141592653589793]}))

     def test_should_properly_handle_null_floats(self):
         query = 'SELECT FLOAT(NULL) as NULL_FLOAT'
@@ -266,12 +285,15 @@ def test_should_properly_handle_null_floats(self):
     def test_should_properly_handle_timestamp_unix_epoch(self):
         query = 'SELECT TIMESTAMP("1970-01-01 00:00:00") as UNIX_EPOCH'
         df = gbq.read_gbq(query, project_id=PROJECT_ID)
-        tm.assert_frame_equal(df, DataFrame({'UNIX_EPOCH': [np.datetime64('1970-01-01T00:00:00.000000Z')]}))
+        tm.assert_frame_equal(df, DataFrame(
+            {'UNIX_EPOCH': [np.datetime64('1970-01-01T00:00:00.000000Z')]}))

     def test_should_properly_handle_arbitrary_timestamp(self):
         query = 'SELECT TIMESTAMP("2004-09-15 05:00:00") as VALID_TIMESTAMP'
         df = gbq.read_gbq(query, project_id=PROJECT_ID)
-        tm.assert_frame_equal(df, DataFrame({'VALID_TIMESTAMP': [np.datetime64('2004-09-15T05:00:00.000000Z')]}))
+        tm.assert_frame_equal(df, DataFrame({
+            'VALID_TIMESTAMP': [np.datetime64('2004-09-15T05:00:00.000000Z')]
+        }))

     def test_should_properly_handle_null_timestamp(self):
         query = 'SELECT TIMESTAMP(NULL) as NULL_TIMESTAMP'
@@ -310,29 +332,36 @@ def test_unicode_string_conversion_and_normalization(self):

     def test_index_column(self):
         query = "SELECT 'a' as STRING_1, 'b' as STRING_2"
-        result_frame = gbq.read_gbq(query, project_id=PROJECT_ID, index_col="STRING_1")
-        correct_frame = DataFrame({'STRING_1': ['a'], 'STRING_2': ['b']}).set_index("STRING_1")
+        result_frame = gbq.read_gbq(
+            query, project_id=PROJECT_ID, index_col="STRING_1")
+        correct_frame = DataFrame(
+            {'STRING_1': ['a'], 'STRING_2': ['b']}).set_index("STRING_1")
         tm.assert_equal(result_frame.index.name, correct_frame.index.name)

     def test_column_order(self):
         query = "SELECT 'a' as STRING_1, 'b' as STRING_2, 'c' as STRING_3"
         col_order = ['STRING_3', 'STRING_1', 'STRING_2']
-        result_frame = gbq.read_gbq(query, project_id=PROJECT_ID, col_order=col_order)
-        correct_frame = DataFrame({'STRING_1': ['a'], 'STRING_2': ['b'], 'STRING_3': ['c']})[col_order]
+        result_frame = gbq.read_gbq(
+            query, project_id=PROJECT_ID, col_order=col_order)
+        correct_frame = DataFrame({'STRING_1': ['a'], 'STRING_2': [
+            'b'], 'STRING_3': ['c']})[col_order]
         tm.assert_frame_equal(result_frame, correct_frame)

     def test_column_order_plus_index(self):
         query = "SELECT 'a' as STRING_1, 'b' as STRING_2, 'c' as STRING_3"
         col_order = ['STRING_3', 'STRING_2']
-        result_frame = gbq.read_gbq(query, project_id=PROJECT_ID, index_col='STRING_1', col_order=col_order)
-        correct_frame = DataFrame({'STRING_1': ['a'], 'STRING_2': ['b'], 'STRING_3': ['c']})
+        result_frame = gbq.read_gbq(query, project_id=PROJECT_ID,
+                                    index_col='STRING_1', col_order=col_order)
+        correct_frame = DataFrame(
+            {'STRING_1': ['a'], 'STRING_2': ['b'], 'STRING_3': ['c']})
         correct_frame.set_index('STRING_1', inplace=True)
         correct_frame = correct_frame[col_order]
         tm.assert_frame_equal(result_frame, correct_frame)

     def test_malformed_query(self):
         with tm.assertRaises(gbq.GenericGBQException):
-            gbq.read_gbq("SELCET * FORM [publicdata:samples.shakespeare]", project_id=PROJECT_ID)
+            gbq.read_gbq("SELCET * FORM [publicdata:samples.shakespeare]",
+                         project_id=PROJECT_ID)

     def test_bad_project_id(self):
         with tm.assertRaises(gbq.GenericGBQException):
@@ -340,19 +369,24 @@ def test_bad_project_id(self):

     def test_bad_table_name(self):
         with tm.assertRaises(gbq.GenericGBQException):
-            gbq.read_gbq("SELECT * FROM [publicdata:samples.nope]", project_id=PROJECT_ID)
+            gbq.read_gbq("SELECT * FROM [publicdata:samples.nope]",
+                         project_id=PROJECT_ID)

     def test_download_dataset_larger_than_200k_rows(self):
         test_size = 200005
         # Test for known BigQuery bug in datasets larger than 100k rows
         # http://stackoverflow.com/questions/19145587/bq-py-not-paging-results
-        df = gbq.read_gbq("SELECT id FROM [publicdata:samples.wikipedia] GROUP EACH BY id ORDER BY id ASC LIMIT {0}".format(test_size),
+        df = gbq.read_gbq("SELECT id FROM [publicdata:samples.wikipedia] "
+                          "GROUP EACH BY id ORDER BY id ASC LIMIT {0}"
+                          .format(test_size),
                           project_id=PROJECT_ID)
         self.assertEqual(len(df.drop_duplicates()), test_size)

     def test_zero_rows(self):
         # Bug fix for https://github.com/pydata/pandas/issues/10273
-        df = gbq.read_gbq("SELECT title, language FROM [publicdata:samples.wikipedia] where timestamp=-9999999",
+        df = gbq.read_gbq("SELECT title, language FROM "
+                          "[publicdata:samples.wikipedia] where "
+                          "timestamp=-9999999",
                           project_id=PROJECT_ID)
         expected_result = DataFrame(columns=['title', 'language'])
         self.assert_frame_equal(df, expected_result)
@@ -361,17 +395,19 @@ def test_zero_rows(self):
 class TestToGBQIntegration(tm.TestCase):
     # Changes to BigQuery table schema may take up to 2 minutes as of May 2015
     # As a workaround to this issue, each test should use a unique table name.
-    # Make sure to modify the for loop range in the tearDownClass when a new test is added
-    # See `Issue 191 <https://code.google.com/p/google-bigquery/issues/detail?id=191>`__
+    # Make sure to modify the for loop range in the tearDownClass when a new
+    # test is added See `Issue 191
+    # <https://code.google.com/p/google-bigquery/issues/detail?id=191>`__

     @classmethod
     def setUpClass(cls):
         # - GLOBAL CLASS FIXTURES -
-        # put here any instruction you want to execute only *ONCE* *BEFORE* executing *ALL* tests
-        # described below.
+        # put here any instruction you want to execute only *ONCE* *BEFORE*
+        # executing *ALL* tests described below.

         if not PROJECT_ID:
-            raise nose.SkipTest("Cannot run integration tests without a project id")
+            raise nose.SkipTest(
+                "Cannot run integration tests without a project id")

         test_requirements()
         clean_gbq_environment()
@@ -380,7 +416,8 @@ def setUpClass(cls):

     def setUp(self):
         # - PER-TEST FIXTURES -
-        # put here any instruction you want to be run *BEFORE* *EVERY* test is executed.
+        # put here any instruction you want to be run *BEFORE* *EVERY* test is
+        # executed.

         self.dataset = gbq._Dataset(PROJECT_ID)
         self.table = gbq._Table(PROJECT_ID, DATASET_ID + "1")
@@ -388,13 +425,15 @@ def setUp(self):
     @classmethod
     def tearDownClass(cls):
         # - GLOBAL CLASS FIXTURES -
-        # put here any instruction you want to execute only *ONCE* *AFTER* executing all tests.
+        # put here any instruction you want to execute only *ONCE* *AFTER*
+        # executing all tests.

         clean_gbq_environment()

     def tearDown(self):
         # - PER-TEST FIXTURES -
-        # put here any instructions you want to be run *AFTER* *EVERY* test is executed.
+        # put here any instructions you want to be run *AFTER* *EVERY* test is
+        # executed.
         pass

     def test_upload_data(self):
@@ -407,7 +446,8 @@ def test_upload_data(self):

         sleep(60)  # <- Curses Google!!!

-        result = gbq.read_gbq("SELECT COUNT(*) as NUM_ROWS FROM {0}".format(destination_table),
+        result = gbq.read_gbq("SELECT COUNT(*) as NUM_ROWS FROM {0}"
+                              .format(destination_table),
                               project_id=PROJECT_ID)
         self.assertEqual(result['NUM_ROWS'][0], test_size)

@@ -441,12 +481,15 @@ def test_upload_data_if_table_exists_append(self):

         sleep(60)  # <- Curses Google!!!

-        result = gbq.read_gbq("SELECT COUNT(*) as NUM_ROWS FROM {0}".format(destination_table), project_id=PROJECT_ID)
+        result = gbq.read_gbq("SELECT COUNT(*) as NUM_ROWS FROM {0}"
+                              .format(destination_table),
+                              project_id=PROJECT_ID)
         self.assertEqual(result['NUM_ROWS'][0], test_size * 2)

         # Try inserting with a different schema, confirm failure
         with tm.assertRaises(gbq.InvalidSchema):
-            gbq.to_gbq(df_different_schema, destination_table, PROJECT_ID, if_exists='append')
+            gbq.to_gbq(df_different_schema, destination_table,
+                       PROJECT_ID, if_exists='append')

     def test_upload_data_if_table_exists_replace(self):
         destination_table = DESTINATION_TABLE + "4"
@@ -459,19 +502,24 @@ def test_upload_data_if_table_exists_replace(self):
         gbq.to_gbq(df, destination_table, PROJECT_ID, chunksize=10000)

         # Test the if_exists parameter with the value 'replace'.
-        gbq.to_gbq(df_different_schema, destination_table, PROJECT_ID, if_exists='replace')
+        gbq.to_gbq(df_different_schema, destination_table,
+                   PROJECT_ID, if_exists='replace')

         sleep(60)  # <- Curses Google!!!

-        result = gbq.read_gbq("SELECT COUNT(*) as NUM_ROWS FROM {0}".format(destination_table), project_id=PROJECT_ID)
+        result = gbq.read_gbq("SELECT COUNT(*) as NUM_ROWS FROM {0}"
+                              .format(destination_table),
+                              project_id=PROJECT_ID)
         self.assertEqual(result['NUM_ROWS'][0], 5)

     def test_google_upload_errors_should_raise_exception(self):
         destination_table = DESTINATION_TABLE + "5"

         test_timestamp = datetime.now(pytz.timezone('US/Arizona'))
-        bad_df = DataFrame({'bools': [False, False], 'flts': [0.0, 1.0], 'ints': [0, '1'], 'strs': ['a', 1],
-                            'times': [test_timestamp, test_timestamp]}, index=range(2))
+        bad_df = DataFrame({'bools': [False, False], 'flts': [0.0, 1.0],
+                            'ints': [0, '1'], 'strs': ['a', 1],
+                            'times': [test_timestamp, test_timestamp]},
+                           index=range(2))

         with tm.assertRaises(gbq.StreamingInsertError):
             gbq.to_gbq(bad_df, destination_table, PROJECT_ID, verbose=True)
@@ -489,56 +537,72 @@ def test_generate_schema(self):
     def test_create_table(self):
         destination_table = TABLE_ID + "6"
-        test_schema = {'fields': [{'name': 'A', 'type': 'FLOAT'}, {'name': 'B', 'type': 'FLOAT'},
-                                  {'name': 'C', 'type': 'STRING'}, {'name': 'D', 'type': 'TIMESTAMP'}]}
+        test_schema = {'fields': [{'name': 'A', 'type': 'FLOAT'},
+                                  {'name': 'B', 'type': 'FLOAT'},
+                                  {'name': 'C', 'type': 'STRING'},
+                                  {'name': 'D', 'type': 'TIMESTAMP'}]}
         self.table.create(destination_table, test_schema)
-        self.assertTrue(self.table.exists(destination_table), 'Expected table to exist')
+        self.assertTrue(self.table.exists(destination_table),
+                        'Expected table to exist')

     def test_table_does_not_exist(self):
-        self.assertTrue(not self.table.exists(TABLE_ID + "7"), 'Expected table not to exist')
+        self.assertTrue(not self.table.exists(TABLE_ID + "7"),
+                        'Expected table not to exist')

     def test_delete_table(self):
         destination_table = TABLE_ID + "8"
-        test_schema = {'fields': [{'name': 'A', 'type': 'FLOAT'}, {'name': 'B', 'type': 'FLOAT'},
-                                  {'name': 'C', 'type': 'STRING'}, {'name': 'D', 'type': 'TIMESTAMP'}]}
+        test_schema = {'fields': [{'name': 'A', 'type': 'FLOAT'},
+                                  {'name': 'B', 'type': 'FLOAT'},
+                                  {'name': 'C', 'type': 'STRING'},
+                                  {'name': 'D', 'type': 'TIMESTAMP'}]}
         self.table.create(destination_table, test_schema)
         self.table.delete(destination_table)
-        self.assertTrue(not self.table.exists(destination_table), 'Expected table not to exist')
+        self.assertTrue(not self.table.exists(
+            destination_table), 'Expected table not to exist')

     def test_list_table(self):
         destination_table = TABLE_ID + "9"
-        test_schema = {'fields': [{'name': 'A', 'type': 'FLOAT'}, {'name': 'B', 'type': 'FLOAT'},
-                                  {'name': 'C', 'type': 'STRING'}, {'name': 'D', 'type': 'TIMESTAMP'}]}
+        test_schema = {'fields': [{'name': 'A', 'type': 'FLOAT'},
+                                  {'name': 'B', 'type': 'FLOAT'},
+                                  {'name': 'C', 'type': 'STRING'},
+                                  {'name': 'D', 'type': 'TIMESTAMP'}]}
         self.table.create(destination_table, test_schema)
-        self.assertTrue(destination_table in self.dataset.tables(DATASET_ID + "1"),
-                        'Expected table list to contain table {0}'.format(destination_table))
+        self.assertTrue(
+            destination_table in self.dataset.tables(DATASET_ID + "1"),
+            'Expected table list to contain table {0}'
+            .format(destination_table))

     def test_list_dataset(self):
         dataset_id = DATASET_ID + "1"
         self.assertTrue(dataset_id in self.dataset.datasets(),
-                        'Expected dataset list to contain dataset {0}'.format(dataset_id))
+                        'Expected dataset list to contain dataset {0}'
+                        .format(dataset_id))

     def test_list_table_zero_results(self):
         dataset_id = DATASET_ID + "2"
         self.dataset.create(dataset_id)
         table_list = gbq._Dataset(PROJECT_ID).tables(dataset_id)
-        self.assertEqual(len(table_list), 0, 'Expected gbq.list_table() to return 0')
+        self.assertEqual(len(table_list), 0,
+                         'Expected gbq.list_table() to return 0')

     def test_create_dataset(self):
         dataset_id = DATASET_ID + "3"
         self.dataset.create(dataset_id)
-        self.assertTrue(dataset_id in self.dataset.datasets(), 'Expected dataset to exist')
+        self.assertTrue(dataset_id in self.dataset.datasets(),
+                        'Expected dataset to exist')

     def test_delete_dataset(self):
         dataset_id = DATASET_ID + "4"
         self.dataset.create(dataset_id)
         self.dataset.delete(dataset_id)
-        self.assertTrue(dataset_id not in self.dataset.datasets(), 'Expected dataset not to exist')
+        self.assertTrue(dataset_id not in self.dataset.datasets(),
+                        'Expected dataset not to exist')

     def test_dataset_exists(self):
         dataset_id = DATASET_ID + "5"
         self.dataset.create(dataset_id)
-        self.assertTrue(self.dataset.exists(dataset_id), 'Expected dataset to exist')
+        self.assertTrue(self.dataset.exists(dataset_id),
+                        'Expected dataset to exist')

     def create_table_data_dataset_does_not_exist(self):
         dataset_id = DATASET_ID + "6"
@@ -546,11 +610,14 @@ def create_table_data_dataset_does_not_exist(self):
         table_with_new_dataset = gbq._Table(PROJECT_ID, dataset_id)
         df = make_mixed_dataframe_v2(10)
         table_with_new_dataset.create(table_id, gbq._generate_bq_schema(df))
-        self.assertTrue(self.dataset.exists(dataset_id), 'Expected dataset to exist')
-        self.assertTrue(table_with_new_dataset.exists(table_id), 'Expected dataset to exist')
+        self.assertTrue(self.dataset.exists(dataset_id),
+                        'Expected dataset to exist')
+        self.assertTrue(table_with_new_dataset.exists(
+            table_id), 'Expected dataset to exist')

     def test_dataset_does_not_exist(self):
-        self.assertTrue(not self.dataset.exists(DATASET_ID + "_not_found"), 'Expected dataset not to exist')
+        self.assertTrue(not self.dataset.exists(
+            DATASET_ID + "_not_found"), 'Expected dataset not to exist')


 if __name__ == '__main__':
     nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index 141533a131e42..9a18da7d57648 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -20,7 +20,8 @@

 from pandas import (DataFrame, MultiIndex, read_csv, Timestamp, Index,
                     date_range, Series)
-from pandas.compat import map, zip, StringIO, string_types, BytesIO, is_platform_windows
+from pandas.compat import (map, zip, StringIO, string_types, BytesIO,
+                           is_platform_windows)

 from pandas.io.common import URLError, urlopen, file_path_to_url
 from pandas.io.html import read_html
 from pandas.parser import CParserError
@@ -87,6 +88,7 @@ def test_bs4_version_fails():


 class ReadHtmlMixin(object):
+
     def read_html(self, *args, **kwargs):
         kwargs.setdefault('flavor', self.flavor)
         return read_html(*args, **kwargs)
@@ -437,8 +439,9 @@ def test_tfoot_read(self):
             </tfoot>
         </table>'''

-        data1 = data_template.format(footer = "")
-        data2 = data_template.format(footer ="<tr><td>footA</td><th>footB</th></tr>")
+        data1 = data_template.format(footer="")
+        data2 = data_template.format(
+            footer="<tr><td>footA</td><th>footB</th></tr>")

         d1 = {'A': ['bodyA'], 'B': ['bodyB']}
         d2 = {'A': ['bodyA', 'footA'], 'B': ['bodyB', 'footB']}
@@ -528,10 +531,10 @@ def try_remove_ws(x):
         dfnew = df.applymap(try_remove_ws).replace(old, new)
         gtnew = ground_truth.applymap(try_remove_ws)
         converted = dfnew._convert(datetime=True, numeric=True)
-        date_cols = ['Closing Date','Updated Date']
+        date_cols = ['Closing Date', 'Updated Date']
         converted[date_cols] = converted[date_cols]._convert(datetime=True,
                                                              coerce=True)
-        tm.assert_frame_equal(converted,gtnew)
+        tm.assert_frame_equal(converted, gtnew)

     @slow
     def test_gold_canyon(self):
@@ -638,11 +641,12 @@ def test_wikipedia_states_table(self):
         nose.tools.assert_equal(result['sq mi'].dtype, np.dtype('float64'))

     def test_bool_header_arg(self):
-        #GH 6114
+        # GH 6114
         for arg in [True, False]:
             with tm.assertRaises(TypeError):
                 read_html(self.spam_data, header=arg)

+
 def _lang_enc(filename):
     return os.path.splitext(os.path.basename(filename))[0].split('_')

@@ -682,14 +686,14 @@ def test_encode(self):
                 from_filename = self.read_filename(f, encoding).pop()
                 tm.assert_frame_equal(from_string, from_file_like)
                 tm.assert_frame_equal(from_string, from_filename)
-            except Exception as e:
-
+            except Exception:
                 # seems utf-16/32 fail on windows
                 if is_platform_windows():
                     if '16' in encoding or '32' in encoding:
                         continue
                     raise

+
 class TestReadHtmlEncodingLxml(TestReadHtmlEncoding):
     flavor = 'lxml'
diff --git a/pandas/io/tests/test_json/test_pandas.py b/pandas/io/tests/test_json/test_pandas.py
index 1690667ef743b..2889acef8180d 100644
--- a/pandas/io/tests/test_json/test_pandas.py
+++ b/pandas/io/tests/test_json/test_pandas.py
@@ -3,7 +3,7 @@
 import os

 import numpy as np
-from pandas import (Series, DataFrame, DatetimeIndex, Timestamp, CategoricalIndex,
+from pandas import (Series, DataFrame, DatetimeIndex, Timestamp,
                     read_json, compat)
 from datetime import timedelta
 import pandas as pd
@@ -23,13 +23,15 @@
 _tsframe = DataFrame(_tsd)

 _cat_frame = _frame.copy()
-cat = ['bah']*5 + ['bar']*5 + ['baz']*5 + ['foo']*(len(_cat_frame)-15)
-_cat_frame.index = pd.CategoricalIndex(cat,name='E')
+cat = ['bah'] * 5 + ['bar'] * 5 + ['baz'] * \
+    5 + ['foo'] * (len(_cat_frame) - 15)
+_cat_frame.index = pd.CategoricalIndex(cat, name='E')
 _cat_frame['E'] = list(reversed(cat))
-_cat_frame['sort'] = np.arange(len(_cat_frame),dtype='int64')
+_cat_frame['sort'] = np.arange(len(_cat_frame), dtype='int64')

 _mixed_frame = _frame.copy()

+
 class TestPandasContainer(tm.TestCase):

     def setUp(self):
@@ -116,7 +118,8 @@ def test_frame_non_unique_columns(self):
         np.testing.assert_equal(df.values, unser.values)

         # GH4377; duplicate columns not processing correctly
-        df = DataFrame([['a','b'],['c','d']], index=[1,2], columns=['x','y'])
+        df = DataFrame([['a', 'b'], ['c', 'd']], index=[
+                       1, 2], columns=['x', 'y'])
         result = read_json(df.to_json(orient='split'), orient='split')
         assert_frame_equal(result, df)

@@ -125,11 +128,12 @@ def _check(df):
                                convert_dates=['x'])
             assert_frame_equal(result, df)

-        for o in [[['a','b'],['c','d']],
-                  [[1.5,2.5],[3.5,4.5]],
-                  [[1,2.5],[3,4.5]],
-                  [[Timestamp('20130101'),3.5],[Timestamp('20130102'),4.5]]]:
-            _check(DataFrame(o, index=[1,2], columns=['x','x']))
+        for o in [[['a', 'b'], ['c', 'd']],
+                  [[1.5, 2.5], [3.5, 4.5]],
+                  [[1, 2.5], [3, 4.5]],
+                  [[Timestamp('20130101'), 3.5],
+                   [Timestamp('20130102'), 4.5]]]:
+            _check(DataFrame(o, index=[1, 2], columns=['x', 'x']))

     def test_frame_from_json_to_json(self):
         def _check_orient(df, orient, dtype=None, numpy=False,
@@ -143,11 +147,14 @@ def _check_orient(df, orient, dtype=None, numpy=False,

             # if we are not unique, then check that we are raising ValueError
             # for the appropriate orients
-            if not df.index.is_unique and orient in ['index','columns']:
-                self.assertRaises(ValueError, lambda : df.to_json(orient=orient))
+            if not df.index.is_unique and orient in ['index', 'columns']:
+                self.assertRaises(
+                    ValueError, lambda: df.to_json(orient=orient))
                 return
-            if not df.columns.is_unique and orient in ['index','columns','records']:
-                self.assertRaises(ValueError, lambda : df.to_json(orient=orient))
+            if (not df.columns.is_unique and
+                    orient in ['index', 'columns', 'records']):
+                self.assertRaises(
+                    ValueError, lambda: df.to_json(orient=orient))
                 return

             dfjson = df.to_json(orient=orient)
@@ -167,7 +174,7 @@ def _check_orient(df, orient, dtype=None, numpy=False,
                 unser = unser.sort_index()

             if dtype is False:
-                check_dtype=False
+                check_dtype = False

             if not convert_axes and df.index.dtype.type == np.datetime64:
                 unser.index = DatetimeIndex(
@@ -199,8 +206,8 @@ def _check_orient(df, orient, dtype=None, numpy=False,
                     assert_frame_equal(df, unser, check_less_precise=False,
                                        check_dtype=check_dtype)

-        def _check_all_orients(df, dtype=None, convert_axes=True, raise_ok=None,
-                               sort=None, check_index_type=True,
+        def _check_all_orients(df, dtype=None, convert_axes=True,
+                               raise_ok=None, sort=None, check_index_type=True,
                                check_column_type=True):

             # numpy=False
@@ -216,11 +223,16 @@ def _check_all_orients(df, dtype=None, convert_axes=True, raise_ok=None,
             _check_orient(df, "values", dtype=dtype, sort=sort,
                           check_index_type=False, check_column_type=False)

-            _check_orient(df, "columns", dtype=dtype, convert_axes=False, sort=sort)
-            _check_orient(df, "records", dtype=dtype, convert_axes=False, sort=sort)
-            _check_orient(df, "split", dtype=dtype, convert_axes=False, sort=sort)
-            _check_orient(df, "index", dtype=dtype, convert_axes=False, sort=sort)
-            _check_orient(df, "values", dtype=dtype ,convert_axes=False, sort=sort)
+            _check_orient(df, "columns", dtype=dtype,
+                          convert_axes=False, sort=sort)
+            _check_orient(df, "records", dtype=dtype,
+                          convert_axes=False, sort=sort)
+            _check_orient(df, "split", dtype=dtype,
+                          convert_axes=False, sort=sort)
+            _check_orient(df, "index", dtype=dtype,
+                          convert_axes=False, sort=sort)
+            _check_orient(df, "values", dtype=dtype,
+                          convert_axes=False, sort=sort)

             # numpy=True and raise_ok might be not None, so ignore the error
             if convert_axes:
@@ -265,7 +277,7 @@ def _check_all_orients(df, dtype=None, convert_axes=True, raise_ok=None,
         biggie = DataFrame(np.zeros((200, 4)),
                            columns=[str(i) for i in range(4)],
                            index=[str(i) for i in range(200)])
-        _check_all_orients(biggie,dtype=False, convert_axes=False)
+        _check_all_orients(biggie, dtype=False, convert_axes=False)

         # dtypes
         _check_all_orients(DataFrame(biggie, dtype=np.float64),
@@ -336,31 +348,32 @@ def test_frame_from_json_nones(self):
         df = DataFrame([['1', '2'], ['4', '5', '6']])
         unser = read_json(df.to_json())
         self.assertTrue(np.isnan(unser[2][0]))
-        unser = read_json(df.to_json(),dtype=False)
+        unser = read_json(df.to_json(), dtype=False)
         self.assertTrue(unser[2][0] is None)
-        unser = read_json(df.to_json(),convert_axes=False,dtype=False)
+        unser = read_json(df.to_json(), convert_axes=False, dtype=False)
         self.assertTrue(unser['2']['0'] is None)

         unser = read_json(df.to_json(), numpy=False)
         self.assertTrue(np.isnan(unser[2][0]))
         unser = read_json(df.to_json(), numpy=False, dtype=False)
         self.assertTrue(unser[2][0] is None)
-        unser = read_json(df.to_json(), numpy=False, convert_axes=False, dtype=False)
+        unser = read_json(df.to_json(), numpy=False,
+                          convert_axes=False, dtype=False)
         self.assertTrue(unser['2']['0'] is None)

         # infinities get mapped to nulls which get mapped to NaNs during
         # deserialisation
         df = DataFrame([[1, 2], [4, 5, 6]])
-        df.loc[0,2] = np.inf
+        df.loc[0, 2] = np.inf
         unser = read_json(df.to_json())
         self.assertTrue(np.isnan(unser[2][0]))
         unser = read_json(df.to_json(), dtype=False)
         self.assertTrue(np.isnan(unser[2][0]))

-        df.loc[0,2] = np.NINF
+        df.loc[0, 2] = np.NINF
         unser = read_json(df.to_json())
         self.assertTrue(np.isnan(unser[2][0]))
-        unser = read_json(df.to_json(),dtype=False)
+        unser = read_json(df.to_json(), dtype=False)
         self.assertTrue(np.isnan(unser[2][0]))

     def test_frame_to_json_except(self):
@@ -410,11 +423,11 @@ def test_frame_mixedtype_orient(self):  # GH10289

     def test_v12_compat(self):
         df = DataFrame(
-            [[1.56808523,  0.65727391,  1.81021139, -0.17251653],
+            [[1.56808523, 0.65727391, 1.81021139, -0.17251653],
             [-0.2550111, -0.08072427, -0.03202878, -0.17581665],
-            [1.51493992,  0.11805825,  1.629455, -1.31506612],
-            [-0.02765498,  0.44679743,  0.33192641, -0.27885413],
-            [0.05951614, -2.69652057,  1.28163262,  0.34703478]],
+             [1.51493992, 0.11805825, 1.629455, -1.31506612],
+             [-0.02765498, 0.44679743, 0.33192641, -0.27885413],
+             [0.05951614, -2.69652057, 1.28163262, 0.34703478]],
             columns=['A', 'B', 'C', 'D'],
             index=pd.date_range('2000-01-03', '2000-01-07'))
         df['date'] = pd.Timestamp('19920106 18:21:32.12')
@@ -438,10 +451,10 @@ def test_blocks_compat_GH9037(self):
                      -0.60316077, 0.24653374, 0.28668979, -2.51969012,
                      0.95748401, -1.02970536],
             int_1=[19680418, 75337055, 99973684, 65103179, 79373900,
-                   40314334, 21290235,  4991321, 41903419, 16008365],
+                   40314334, 21290235, 4991321, 41903419, 16008365],
             str_1=['78c608f1', '64a99743', '13d2ff52', 'ca7f4af2', '97236474',
                    'bde7e214', '1a6bde47', 'b1190be5', '7a669144', '8d64d068'],
-            float_2=[-0.0428278, -1.80872357,  3.36042349, -0.7573685,
+            float_2=[-0.0428278, -1.80872357, 3.36042349, -0.7573685,
                      -0.48217572, 0.86229683, 1.08935819, 0.93898739,
                      -0.03030452, 1.43366348],
             str_2=['14f04af9', 'd085da90', '4bcfac83', '81504caf', '2ffef4a9',
@@ -468,7 +481,7 @@ def test_series_non_unique_index(self):
         self.assertRaises(ValueError, s.to_json, orient='index')

         assert_series_equal(s, read_json(s.to_json(orient='split'),
-                            orient='split', typ='series'))
+                                         orient='split', typ='series'))
         unser = read_json(s.to_json(orient='records'),
                           orient='records', typ='series')
         np.testing.assert_equal(s.values, unser.values)
@@ -532,7 +545,7 @@ def _check_all_orients(series, dtype=None, check_index_type=True):
         _check_all_orients(self.ts)

         # dtype
-        s = Series(lrange(6), index=['a','b','c','d','e','f'])
+        s = Series(lrange(6), index=['a', 'b', 'c', 'd', 'e', 'f'])
         _check_all_orients(Series(s, dtype=np.float64), dtype=np.float64)
         _check_all_orients(Series(s, dtype=np.int), dtype=np.int)
@@ -548,12 +561,14 @@ def test_series_from_json_precise_float(self):

     def test_frame_from_json_precise_float(self):
         df = DataFrame([[4.56, 4.56, 4.56], [4.56, 4.56, 4.56]])
         result = read_json(df.to_json(), precise_float=True)
-        assert_frame_equal(result, df, check_index_type=False, check_column_type=False)
+        assert_frame_equal(result, df, check_index_type=False,
+                           check_column_type=False)

     def test_typ(self):

-        s = Series(lrange(6), index=['a','b','c','d','e','f'], dtype='int64')
-        result = read_json(s.to_json(),typ=None)
+        s = Series(lrange(6), index=['a', 'b', 'c',
+                                     'd', 'e', 'f'], dtype='int64')
+        result = read_json(s.to_json(), typ=None)
         assert_series_equal(result, s)

     def test_reconstruction_index(self):
@@ -563,7 +578,8 @@ def test_reconstruction_index(self):
         self.assertEqual(result.index.dtype, np.float64)
         self.assertEqual(result.columns.dtype, np.float64)
-        assert_frame_equal(result, df, check_index_type=False, check_column_type=False)
+        assert_frame_equal(result, df, check_index_type=False,
+                           check_column_type=False)

         df = DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]}, index=['A', 'B', 'C'])
         result = read_json(df.to_json())
@@ -614,12 +630,13 @@ def test_convert_dates(self):
         assert_series_equal(result, ts)

     def test_convert_dates_infer(self):
-        #GH10747
+        # GH10747
         infer_words = ['trade_time', 'date', 'datetime', 'sold_at',
                        'modified', 'timestamp', 'timestamps']
         for infer_word in infer_words:
             data = [{'id': 1, infer_word: 1036713600000}, {'id': 2}]
-            expected = DataFrame([[1, Timestamp('2002-11-08')], [2, pd.NaT]], columns=['id', infer_word])
+            expected = DataFrame([[1, Timestamp('2002-11-08')], [2, pd.NaT]],
+                                 columns=['id', infer_word])

             result = read_json(pd.json.dumps(data))[['id', infer_word]]
             assert_frame_equal(result, expected)
@@ -713,17 +730,17 @@ def test_doc_example(self):
         dfj2['date'] = Timestamp('20130101')
         dfj2['ints'] = lrange(5)
         dfj2['bools'] = True
-        dfj2.index = pd.date_range('20130101',periods=5)
+        dfj2.index = pd.date_range('20130101', periods=5)

         json = dfj2.to_json()
-        result = read_json(json,dtype={'ints' : np.int64, 'bools' : np.bool_})
-        assert_frame_equal(result,result)
+        result = read_json(json, dtype={'ints': np.int64, 'bools': np.bool_})
+        assert_frame_equal(result, result)

     def test_misc_example(self):

         # parsing unordered input fails
         result = read_json('[{"a": 1, "b": 2}, {"b":2, "a" :1}]', numpy=True)
-        expected = DataFrame([[1,2], [1,2]], columns=['a', 'b'])
+        expected = DataFrame([[1, 2], [1, 2]], columns=['a', 'b'])

         error_msg = """DataFrame\\.index are different
@@ -734,7 +751,7 @@ def test_misc_example(self):
             assert_frame_equal(result, expected, check_index_type=False)

         result = read_json('[{"a": 1, "b": 2}, {"b":2, "a" :1}]')
-        expected = DataFrame([[1,2], [1,2]], columns=['a','b'])
+        expected = DataFrame([[1, 2], [1, 2]], columns=['a', 'b'])
         assert_frame_equal(result, expected)

     @network
@@ -744,7 +761,8 @@ def test_round_trip_exception_(self):
         df = pd.read_csv(csv)
         s = df.to_json()
         result = pd.read_json(s)
-        assert_frame_equal(result.reindex(index=df.index,columns=df.columns),df)
+        assert_frame_equal(result.reindex(
+            index=df.index, columns=df.columns), df)

     @network
     def test_url(self):
             self.assertEqual(result[c].dtype, 'datetime64[ns]')

     def test_timedelta(self):
-        converter = lambda x: pd.to_timedelta(x,unit='ms')
+        converter = lambda x: pd.to_timedelta(x, unit='ms')

         s = Series([timedelta(23), timedelta(seconds=5)])
         self.assertEqual(s.dtype, 'timedelta64[ns]')

         # index will be float dtype
-        assert_series_equal(s, pd.read_json(s.to_json(),typ='series').apply(converter),
+        assert_series_equal(s, pd.read_json(s.to_json(), typ='series')
+                            .apply(converter),
                             check_index_type=False)

-        s = Series([timedelta(23), timedelta(seconds=5)], index=pd.Index([0, 1], dtype=float))
+        s = Series([timedelta(23), timedelta(seconds=5)],
+                   index=pd.Index([0, 1], dtype=float))
         self.assertEqual(s.dtype, 'timedelta64[ns]')
-        assert_series_equal(s, pd.read_json(s.to_json(), typ='series').apply(converter))
+        assert_series_equal(s, pd.read_json(
+            s.to_json(), typ='series').apply(converter))

         frame = DataFrame([timedelta(23), timedelta(seconds=5)])
-        self.assertEqual(frame[0].dtype,'timedelta64[ns]')
-        assert_frame_equal(frame, pd.read_json(frame.to_json()).apply(converter),
-                           check_index_type=False, check_column_type=False)
+        self.assertEqual(frame[0].dtype, 'timedelta64[ns]')
+        assert_frame_equal(frame, pd.read_json(frame.to_json())
+                           .apply(converter),
+                           check_index_type=False,
+                           check_column_type=False)

         frame = DataFrame({'a': [timedelta(days=23), timedelta(seconds=5)],
                            'b': [1, 2],
@@ -800,7 +823,8 @@ def test_default_handler(self):
     def test_default_handler_raises(self):
         def my_handler_raises(obj):
             raise TypeError("raisin")
-        self.assertRaises(TypeError, DataFrame({'a': [1, 2, object()]}).to_json,
+        self.assertRaises(TypeError,
+                          DataFrame({'a': [1, 2, object()]}).to_json,
                           default_handler=my_handler_raises)

diff --git a/pandas/io/tests/test_json/test_ujson.py b/pandas/io/tests/test_json/test_ujson.py
index 9590dbb90e5c6..f5efb54099ddd 100644
--- a/pandas/io/tests/test_json/test_ujson.py
+++ b/pandas/io/tests/test_json/test_ujson.py
@@ -24,7 +24,6 @@
 from numpy.testing import (assert_array_almost_equal_nulp,
                            assert_approx_equal)
 import pytz
-import dateutil
 from pandas import DataFrame, Series, Index, NaT, DatetimeIndex
 import pandas.util.testing as tm
@@ -38,6 +37,7 @@ def _skip_if_python_ver(skip_major, skip_minor=None):
 json_unicode = (json.dumps if compat.PY3
                 else partial(json.dumps, encoding="utf-8"))

+
 class UltraJSONTests(TestCase):

     def test_encodeDecimal(self):
@@ -48,8 +48,10 @@ def test_encodeDecimal(self):

     def test_encodeStringConversion(self):
         input = "A string \\ / \b \f \n \r \t </script> &"
-        not_html_encoded = '"A string \\\\ \\/ \\b \\f \\n \\r \\t <\\/script> &"'
-        html_encoded = '"A string \\\\ \\/ \\b \\f \\n \\r \\t \\u003c\\/script\\u003e \\u0026"'
+        not_html_encoded = ('"A string \\\\ \\/ \\b \\f \\n '
+                            '\\r \\t <\\/script> &"')
+        html_encoded = ('"A string \\\\ \\/ \\b \\f \\n \\r \\t '
+                        '\\u003c\\/script\\u003e \\u0026"')

         def helper(expected_output, **encode_kwargs):
             output = ujson.encode(input, **encode_kwargs)
@@ -127,18 +129,16 @@ def test_encodeDoubleTinyExponential(self):

     def test_encodeDictWithUnicodeKeys(self):
         input = {u("key1"): u("value1"), u("key1"):
-                     u("value1"), u("key1"): u("value1"),
-                 u("key1"): u("value1"), u("key1"):
-                     u("value1"), u("key1"): u("value1")}
+                 u("value1"), u("key1"): u("value1"),
+                 u("key1"): u("value1"), u("key1"):
+                 u("value1"), u("key1"): u("value1")}
         output = ujson.encode(input)

         input = {u("بن"): u("value1"), u("بن"): u("value1"),
-                 u("بن"): u("value1"), u("بن"): u("value1"),
-                 u("بن"): u("value1"), u("بن"): u("value1"),
-                 u("بن"): u("value1")}
-        output = ujson.encode(input)
-
-        pass
+                 u("بن"): u("value1"), u("بن"): u("value1"),
+                 u("بن"): u("value1"), u("بن"): u("value1"),
+                 u("بن"): u("value1")}
+        output = ujson.encode(input)  # noqa

     def test_encodeDoubleConversion(self):
         input = math.pi
@@ -162,45 +162,48 @@ def test_encodeArrayOfNestedArrays(self):
         input = [[[[]]]] * 20
         output = ujson.encode(input)
         self.assertEqual(input, json.loads(output))
-        #self.assertEqual(output, json.dumps(input))
+        # self.assertEqual(output, json.dumps(input))
         self.assertEqual(input, ujson.decode(output))

         input = np.array(input)
-        tm.assert_numpy_array_equal(input, ujson.decode(output, numpy=True, dtype=input.dtype))
+        tm.assert_numpy_array_equal(input, ujson.decode(
+            output, numpy=True, dtype=input.dtype))

     def test_encodeArrayOfDoubles(self):
-        input = [ 31337.31337, 31337.31337, 31337.31337, 31337.31337] * 10
+        input = [31337.31337, 31337.31337, 31337.31337, 31337.31337] * 10
         output = ujson.encode(input)
         self.assertEqual(input, json.loads(output))
-        #self.assertEqual(output, json.dumps(input))
+        # self.assertEqual(output, json.dumps(input))
         self.assertEqual(input, ujson.decode(output))
-        tm.assert_numpy_array_equal(np.array(input), ujson.decode(output, numpy=True))
+        tm.assert_numpy_array_equal(
+            np.array(input), ujson.decode(output, numpy=True))

     def test_doublePrecisionTest(self):
         input = 30.012345678901234
-        output = ujson.encode(input, double_precision = 15)
+        output = ujson.encode(input, double_precision=15)
         self.assertEqual(input, json.loads(output))
         self.assertEqual(input, ujson.decode(output))

-        output = ujson.encode(input, double_precision = 9)
+        output = ujson.encode(input, double_precision=9)
         self.assertEqual(round(input, 9), json.loads(output))
         self.assertEqual(round(input, 9), ujson.decode(output))

-        output = ujson.encode(input, double_precision = 3)
+        output = ujson.encode(input, double_precision=3)
         self.assertEqual(round(input, 3), json.loads(output))
         self.assertEqual(round(input, 3), ujson.decode(output))

     def test_invalidDoublePrecision(self):
         input = 30.12345678901234567890
-        self.assertRaises(ValueError, ujson.encode, input, double_precision = 20)
-        self.assertRaises(ValueError, ujson.encode, input, double_precision = -1)
+        self.assertRaises(ValueError, ujson.encode, input, double_precision=20)
+        self.assertRaises(ValueError, ujson.encode, input, double_precision=-1)

         # will throw typeError
-        self.assertRaises(TypeError, ujson.encode, input, double_precision = '9')
+        self.assertRaises(TypeError, ujson.encode, input, double_precision='9')
         # will throw typeError
-        self.assertRaises(TypeError, ujson.encode, input, double_precision = None)
+        self.assertRaises(TypeError, ujson.encode,
+                          input, double_precision=None)

-    def test_encodeStringConversion(self):
+    def test_encodeStringConversion2(self):
         input = "A string \\ / \b \f \n \r \t"
         output = ujson.encode(input)
         self.assertEqual(input, json.loads(output))
@@ -270,7 +273,8 @@ def test_encodeArrayInArray(self):
         self.assertEqual(input, json.loads(output))
         self.assertEqual(output, json.dumps(input))
         self.assertEqual(input, ujson.decode(output))
-        tm.assert_numpy_array_equal(np.array(input), ujson.decode(output, numpy=True))
+        tm.assert_numpy_array_equal(
+            np.array(input), ujson.decode(output, numpy=True))
         pass

     def test_encodeIntConversion(self):
@@ -293,25 +297,22 @@ def test_encodeLongNegConversion(self):
         input = -9223372036854775808
         output = ujson.encode(input)

-        outputjson = json.loads(output)
-        outputujson = ujson.decode(output)
-
         self.assertEqual(input, json.loads(output))
         self.assertEqual(output, json.dumps(input))
         self.assertEqual(input, ujson.decode(output))
-        pass

     def test_encodeListConversion(self):
-        input = [ 1, 2, 3, 4 ]
+        input = [1, 2, 3, 4]
         output = ujson.encode(input)
         self.assertEqual(input, json.loads(output))
         self.assertEqual(input, ujson.decode(output))
-        tm.assert_numpy_array_equal(np.array(input), ujson.decode(output, numpy=True))
+        tm.assert_numpy_array_equal(
+            np.array(input), ujson.decode(output, numpy=True))
         pass

     def test_encodeDictConversion(self):
-        input = { "k1": 1, "k2": 2, "k3": 3, "k4": 4 }
-        output = ujson.encode(input)
+        input = {"k1": 1, "k2": 2, "k3": 3, "k4": 4}
+        output = ujson.encode(input)  # noqa
         self.assertEqual(input, json.loads(output))
         self.assertEqual(input, ujson.decode(output))
         self.assertEqual(input, ujson.decode(output))
@@ -365,8 +366,9 @@ def
test_encodeTimeConversion(self): datetime.time(1, 2, 3), datetime.time(10, 12, 15, 343243), datetime.time(10, 12, 15, 343243, pytz.utc), -# datetime.time(10, 12, 15, 343243, dateutil.tz.gettz('UTC')), # this segfaults! No idea why. - ] + # datetime.time(10, 12, 15, 343243, dateutil.tz.gettz('UTC')), # + # this segfaults! No idea why. + ] for test in tests: output = ujson.encode(test) expected = '"%s"' % test.isoformat() @@ -435,7 +437,7 @@ class O1: input.member.member = input try: - output = ujson.encode(input) + output = ujson.encode(input) # noqa assert False, "Expected overflow exception" except(OverflowError): pass @@ -575,7 +577,7 @@ def test_decodeBrokenDictKeyTypeLeakTest(self): try: ujson.decode(input) assert False, "Expected exception!" - except ValueError as e: + except ValueError: continue assert False, "Wrong exception" @@ -644,7 +646,7 @@ def test_encodeUnicode4BytesUTF8Fail(self): _skip_if_python_ver(3) input = "\xfd\xbf\xbf\xbf\xbf\xbf" try: - enc = ujson.encode(input) + enc = ujson.encode(input) # noqa assert False, "Expected exception" except OverflowError: pass @@ -671,12 +673,13 @@ def test_decodeNullCharacter(self): def test_encodeListLongConversion(self): input = [9223372036854775807, 9223372036854775807, 9223372036854775807, - 9223372036854775807, 9223372036854775807, 9223372036854775807 ] + 9223372036854775807, 9223372036854775807, 9223372036854775807] output = ujson.encode(input) self.assertEqual(input, json.loads(output)) self.assertEqual(input, ujson.decode(output)) - tm.assert_numpy_array_equal(np.array(input), ujson.decode(output, numpy=True, - dtype=np.int64)) + tm.assert_numpy_array_equal(np.array(input), + ujson.decode(output, numpy=True, + dtype=np.int64)) pass def test_encodeLongConversion(self): @@ -734,8 +737,10 @@ def test_dumpToFile(self): def test_dumpToFileLikeObject(self): class filelike: + def __init__(self): self.bytes = '' + def write(self, bytes): self.bytes += bytes f = filelike() @@ -754,10 +759,12 @@ def 
test_loadFile(self): f = StringIO("[1,2,3,4]") self.assertEqual([1, 2, 3, 4], ujson.load(f)) f = StringIO("[1,2,3,4]") - tm.assert_numpy_array_equal(np.array([1, 2, 3, 4]), ujson.load(f, numpy=True)) + tm.assert_numpy_array_equal( + np.array([1, 2, 3, 4]), ujson.load(f, numpy=True)) def test_loadFileLikeObject(self): class filelike: + def read(self): try: self.end @@ -767,7 +774,8 @@ def read(self): f = filelike() self.assertEqual([1, 2, 3, 4], ujson.load(f)) f = filelike() - tm.assert_numpy_array_equal(np.array([1, 2, 3, 4]), ujson.load(f, numpy=True)) + tm.assert_numpy_array_equal( + np.array([1, 2, 3, 4]), ujson.load(f, numpy=True)) def test_loadFileArgsError(self): try: @@ -779,7 +787,7 @@ def test_loadFileArgsError(self): def test_version(self): assert re.match(r'^\d+\.\d+(\.\d+)?$', ujson.__version__), \ - "ujson.__version__ must be a string like '1.4.0'" + "ujson.__version__ must be a string like '1.4.0'" def test_encodeNumericOverflow(self): try: @@ -804,18 +812,18 @@ class Nested: assert False, "expected OverflowError" def test_decodeNumberWith32bitSignBit(self): - #Test that numbers that fit within 32 bits but would have the + # Test that numbers that fit within 32 bits but would have the # sign bit set (2**31 <= x < 2**32) are decoded properly. 
- boundary1 = 2**31 - boundary2 = 2**32 + boundary1 = 2**31 # noqa + boundary2 = 2**32 # noqa docs = ( '{"id": 3590016419}', '{"id": %s}' % 2**31, '{"id": %s}' % 2**32, - '{"id": %s}' % ((2**32)-1), + '{"id": %s}' % ((2**32) - 1), ) - results = (3590016419, 2**31, 2**32, 2**32-1) - for doc,result in zip(docs, results): + results = (3590016419, 2**31, 2**32, 2**32 - 1) + for doc, result in zip(docs, results): self.assertEqual(ujson.decode(doc)['id'], result) def test_encodeBigEscape(self): @@ -825,7 +833,7 @@ def test_encodeBigEscape(self): else: base = "\xc3\xa5" input = base * 1024 * 1024 * 2 - output = ujson.encode(input) + output = ujson.encode(input) # noqa def test_decodeBigEscape(self): for x in range(10): @@ -835,12 +843,13 @@ def test_decodeBigEscape(self): base = "\xc3\xa5" quote = compat.str_to_bytes("\"") input = quote + (base * 1024 * 1024 * 2) + quote - output = ujson.decode(input) + output = ujson.decode(input) # noqa def test_toDict(self): d = {u("key"): 31337} class DictTest: + def toDict(self): return d @@ -865,12 +874,12 @@ def __str__(self): self.assertRaises(OverflowError, ujson.encode, _TestObject("foo")) self.assertEqual('"foo"', ujson.encode(_TestObject("foo"), - default_handler=str)) + default_handler=str)) def my_handler(obj): return "foobar" self.assertEqual('"foobar"', ujson.encode(_TestObject("foo"), - default_handler=my_handler)) + default_handler=my_handler)) def my_handler_raises(obj): raise TypeError("I raise for anything") @@ -892,7 +901,7 @@ def my_obj_handler(obj): l = [_TestObject("foo"), _TestObject("bar")] self.assertEqual(json.loads(json.dumps(l, default=str)), - ujson.decode(ujson.encode(l, default_handler=str))) + ujson.decode(ujson.encode(l, default_handler=str))) class NumpyJSONTests(TestCase): @@ -902,8 +911,8 @@ def testBool(self): self.assertEqual(ujson.decode(ujson.encode(b)), b) def testBoolArray(self): - inpt = np.array([True, False, True, True, False, True, False , False], - dtype=np.bool) + inpt = np.array([True, 
False, True, True, False, True, False, False], + dtype=np.bool) outp = np.array(ujson.decode(ujson.encode(inpt)), dtype=np.bool) tm.assert_numpy_array_equal(inpt, outp) @@ -990,43 +999,56 @@ def testFloatArray(self): for dtype in dtypes: inpt = arr.astype(dtype) - outp = np.array(ujson.decode(ujson.encode(inpt, double_precision=15)), dtype=dtype) + outp = np.array(ujson.decode(ujson.encode( + inpt, double_precision=15)), dtype=dtype) assert_array_almost_equal_nulp(inpt, outp) def testFloatMax(self): - num = np.float(np.finfo(np.float).max/10) - assert_approx_equal(np.float(ujson.decode(ujson.encode(num, double_precision=15))), num, 15) + num = np.float(np.finfo(np.float).max / 10) + assert_approx_equal(np.float(ujson.decode( + ujson.encode(num, double_precision=15))), num, 15) - num = np.float32(np.finfo(np.float32).max/10) - assert_approx_equal(np.float32(ujson.decode(ujson.encode(num, double_precision=15))), num, 15) + num = np.float32(np.finfo(np.float32).max / 10) + assert_approx_equal(np.float32(ujson.decode( + ujson.encode(num, double_precision=15))), num, 15) - num = np.float64(np.finfo(np.float64).max/10) - assert_approx_equal(np.float64(ujson.decode(ujson.encode(num, double_precision=15))), num, 15) + num = np.float64(np.finfo(np.float64).max / 10) + assert_approx_equal(np.float64(ujson.decode( + ujson.encode(num, double_precision=15))), num, 15) def testArrays(self): arr = np.arange(100) arr = arr.reshape((10, 10)) - tm.assert_numpy_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr) - tm.assert_numpy_array_equal(ujson.decode(ujson.encode(arr), numpy=True), arr) + tm.assert_numpy_array_equal( + np.array(ujson.decode(ujson.encode(arr))), arr) + tm.assert_numpy_array_equal(ujson.decode( + ujson.encode(arr), numpy=True), arr) arr = arr.reshape((5, 5, 4)) - tm.assert_numpy_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr) - tm.assert_numpy_array_equal(ujson.decode(ujson.encode(arr), numpy=True), arr) + tm.assert_numpy_array_equal( + 
np.array(ujson.decode(ujson.encode(arr))), arr) + tm.assert_numpy_array_equal(ujson.decode( + ujson.encode(arr), numpy=True), arr) arr = arr.reshape((100, 1)) - tm.assert_numpy_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr) - tm.assert_numpy_array_equal(ujson.decode(ujson.encode(arr), numpy=True), arr) + tm.assert_numpy_array_equal( + np.array(ujson.decode(ujson.encode(arr))), arr) + tm.assert_numpy_array_equal(ujson.decode( + ujson.encode(arr), numpy=True), arr) arr = np.arange(96) arr = arr.reshape((2, 2, 2, 2, 3, 2)) - tm.assert_numpy_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr) - tm.assert_numpy_array_equal(ujson.decode(ujson.encode(arr), numpy=True), arr) + tm.assert_numpy_array_equal( + np.array(ujson.decode(ujson.encode(arr))), arr) + tm.assert_numpy_array_equal(ujson.decode( + ujson.encode(arr), numpy=True), arr) l = ['a', list(), dict(), dict(), list(), 42, 97.8, ['a', 'b'], {'key': 'val'}] arr = np.array(l) - tm.assert_numpy_array_equal(np.array(ujson.decode(ujson.encode(arr))), arr) + tm.assert_numpy_array_equal( + np.array(ujson.decode(ujson.encode(arr))), arr) arr = np.arange(100.202, 200.202, 1, dtype=np.float32) arr = arr.reshape((5, 5, 4)) @@ -1137,17 +1159,22 @@ def testArrayNumpyLabelled(self): self.assertTrue(output[1] is None) self.assertTrue((np.array([u('a')]) == output[2]).all()) - # Write out the dump explicitly so there is no dependency on iteration order GH10837 - input_dumps = '[{"a": 42, "b":31}, {"a": 24, "c": 99}, {"a": 2.4, "b": 78}]' + # Write out the dump explicitly so there is no dependency on iteration + # order GH10837 + input_dumps = ('[{"a": 42, "b":31}, {"a": 24, "c": 99}, ' + '{"a": 2.4, "b": 78}]') output = ujson.loads(input_dumps, numpy=True, labelled=True) - expectedvals = np.array([42, 31, 24, 99, 2.4, 78], dtype=int).reshape((3, 2)) + expectedvals = np.array( + [42, 31, 24, 99, 2.4, 78], dtype=int).reshape((3, 2)) self.assertTrue((expectedvals == output[0]).all()) self.assertTrue(output[1] is 
None) self.assertTrue((np.array([u('a'), 'b']) == output[2]).all()) - input_dumps = '{"1": {"a": 42, "b":31}, "2": {"a": 24, "c": 99}, "3": {"a": 2.4, "b": 78}}' + input_dumps = ('{"1": {"a": 42, "b":31}, "2": {"a": 24, "c": 99}, ' + '"3": {"a": 2.4, "b": 78}}') output = ujson.loads(input_dumps, numpy=True, labelled=True) - expectedvals = np.array([42, 31, 24, 99, 2.4, 78], dtype=int).reshape((3, 2)) + expectedvals = np.array( + [42, 31, 24, 99, 2.4, 78], dtype=int).reshape((3, 2)) self.assertTrue((expectedvals == output[0]).all()) self.assertTrue((np.array(['1', '2', '3']) == output[1]).all()) self.assertTrue((np.array(['a', 'b']) == output[2]).all()) @@ -1156,7 +1183,8 @@ def testArrayNumpyLabelled(self): class PandasJSONTests(TestCase): def testDataFrame(self): - df = DataFrame([[1,2,3], [4,5,6]], index=['a', 'b'], columns=['x', 'y', 'z']) + df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[ + 'a', 'b'], columns=['x', 'y', 'z']) # column indexed outp = DataFrame(ujson.decode(ujson.encode(df))) @@ -1185,7 +1213,8 @@ def testDataFrame(self): tm.assert_numpy_array_equal(df.transpose().index, outp.index) def testDataFrameNumpy(self): - df = DataFrame([[1,2,3], [4,5,6]], index=['a', 'b'], columns=['x', 'y', 'z']) + df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[ + 'a', 'b'], columns=['x', 'y', 'z']) # column indexed outp = DataFrame(ujson.decode(ujson.encode(df), numpy=True)) @@ -1194,19 +1223,21 @@ def testDataFrameNumpy(self): tm.assert_numpy_array_equal(df.index, outp.index) dec = _clean_dict(ujson.decode(ujson.encode(df, orient="split"), - numpy=True)) + numpy=True)) outp = DataFrame(**dec) self.assertTrue((df == outp).values.all()) tm.assert_numpy_array_equal(df.columns, outp.columns) tm.assert_numpy_array_equal(df.index, outp.index) - outp = DataFrame(ujson.decode(ujson.encode(df, orient="index"), numpy=True)) + outp = DataFrame(ujson.decode( + ujson.encode(df, orient="index"), numpy=True)) self.assertTrue((df.transpose() == outp).values.all()) 
tm.assert_numpy_array_equal(df.transpose().columns, outp.columns) tm.assert_numpy_array_equal(df.transpose().index, outp.index) def testDataFrameNested(self): - df = DataFrame([[1,2,3], [4,5,6]], index=['a', 'b'], columns=['x', 'y', 'z']) + df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[ + 'a', 'b'], columns=['x', 'y', 'z']) nested = {'df1': df, 'df2': df.copy()} @@ -1216,41 +1247,50 @@ def testDataFrameNested(self): exp = {'df1': ujson.decode(ujson.encode(df, orient="index")), 'df2': ujson.decode(ujson.encode(df, orient="index"))} - self.assertTrue(ujson.decode(ujson.encode(nested, orient="index")) == exp) + self.assertTrue(ujson.decode( + ujson.encode(nested, orient="index")) == exp) exp = {'df1': ujson.decode(ujson.encode(df, orient="records")), 'df2': ujson.decode(ujson.encode(df, orient="records"))} - self.assertTrue(ujson.decode(ujson.encode(nested, orient="records")) == exp) + self.assertTrue(ujson.decode( + ujson.encode(nested, orient="records")) == exp) exp = {'df1': ujson.decode(ujson.encode(df, orient="values")), 'df2': ujson.decode(ujson.encode(df, orient="values"))} - self.assertTrue(ujson.decode(ujson.encode(nested, orient="values")) == exp) + self.assertTrue(ujson.decode( + ujson.encode(nested, orient="values")) == exp) exp = {'df1': ujson.decode(ujson.encode(df, orient="split")), 'df2': ujson.decode(ujson.encode(df, orient="split"))} - self.assertTrue(ujson.decode(ujson.encode(nested, orient="split")) == exp) + self.assertTrue(ujson.decode( + ujson.encode(nested, orient="split")) == exp) def testDataFrameNumpyLabelled(self): - df = DataFrame([[1,2,3], [4,5,6]], index=['a', 'b'], columns=['x', 'y', 'z']) + df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[ + 'a', 'b'], columns=['x', 'y', 'z']) # column indexed - outp = DataFrame(*ujson.decode(ujson.encode(df), numpy=True, labelled=True)) + outp = DataFrame(*ujson.decode(ujson.encode(df), + numpy=True, labelled=True)) self.assertTrue((df.T == outp).values.all()) tm.assert_numpy_array_equal(df.T.columns, 
outp.columns) tm.assert_numpy_array_equal(df.T.index, outp.index) - outp = DataFrame(*ujson.decode(ujson.encode(df, orient="records"), numpy=True, labelled=True)) + outp = DataFrame(*ujson.decode(ujson.encode(df, orient="records"), + numpy=True, labelled=True)) outp.index = df.index self.assertTrue((df == outp).values.all()) tm.assert_numpy_array_equal(df.columns, outp.columns) - outp = DataFrame(*ujson.decode(ujson.encode(df, orient="index"), numpy=True, labelled=True)) + outp = DataFrame(*ujson.decode(ujson.encode(df, orient="index"), + numpy=True, labelled=True)) self.assertTrue((df == outp).values.all()) tm.assert_numpy_array_equal(df.columns, outp.columns) tm.assert_numpy_array_equal(df.index, outp.index) def testSeries(self): - s = Series([10, 20, 30, 40, 50, 60], name="series", index=[6,7,8,9,10,15]).sort_values() + s = Series([10, 20, 30, 40, 50, 60], name="series", + index=[6, 7, 8, 9, 10, 15]).sort_values() # column indexed outp = Series(ujson.decode(ujson.encode(s))).sort_values() @@ -1265,31 +1305,36 @@ def testSeries(self): self.assertTrue(s.name == outp.name) dec = _clean_dict(ujson.decode(ujson.encode(s, orient="split"), - numpy=True)) + numpy=True)) outp = Series(**dec) self.assertTrue((s == outp).values.all()) self.assertTrue(s.name == outp.name) - outp = Series(ujson.decode(ujson.encode(s, orient="records"), numpy=True)) + outp = Series(ujson.decode(ujson.encode( + s, orient="records"), numpy=True)) self.assertTrue((s == outp).values.all()) outp = Series(ujson.decode(ujson.encode(s, orient="records"))) self.assertTrue((s == outp).values.all()) - outp = Series(ujson.decode(ujson.encode(s, orient="values"), numpy=True)) + outp = Series(ujson.decode( + ujson.encode(s, orient="values"), numpy=True)) self.assertTrue((s == outp).values.all()) outp = Series(ujson.decode(ujson.encode(s, orient="values"))) self.assertTrue((s == outp).values.all()) - outp = Series(ujson.decode(ujson.encode(s, orient="index"))).sort_values() + outp = 
Series(ujson.decode(ujson.encode( + s, orient="index"))).sort_values() self.assertTrue((s == outp).values.all()) - outp = Series(ujson.decode(ujson.encode(s, orient="index"), numpy=True)).sort_values() + outp = Series(ujson.decode(ujson.encode( + s, orient="index"), numpy=True)).sort_values() self.assertTrue((s == outp).values.all()) def testSeriesNested(self): - s = Series([10, 20, 30, 40, 50, 60], name="series", index=[6,7,8,9,10,15]).sort_values() + s = Series([10, 20, 30, 40, 50, 60], name="series", + index=[6, 7, 8, 9, 10, 15]).sort_values() nested = {'s1': s, 's2': s.copy()} @@ -1299,19 +1344,23 @@ def testSeriesNested(self): exp = {'s1': ujson.decode(ujson.encode(s, orient="split")), 's2': ujson.decode(ujson.encode(s, orient="split"))} - self.assertTrue(ujson.decode(ujson.encode(nested, orient="split")) == exp) + self.assertTrue(ujson.decode( + ujson.encode(nested, orient="split")) == exp) exp = {'s1': ujson.decode(ujson.encode(s, orient="records")), 's2': ujson.decode(ujson.encode(s, orient="records"))} - self.assertTrue(ujson.decode(ujson.encode(nested, orient="records")) == exp) + self.assertTrue(ujson.decode( + ujson.encode(nested, orient="records")) == exp) exp = {'s1': ujson.decode(ujson.encode(s, orient="values")), 's2': ujson.decode(ujson.encode(s, orient="values"))} - self.assertTrue(ujson.decode(ujson.encode(nested, orient="values")) == exp) + self.assertTrue(ujson.decode( + ujson.encode(nested, orient="values")) == exp) exp = {'s1': ujson.decode(ujson.encode(s, orient="index")), 's2': ujson.decode(ujson.encode(s, orient="index"))} - self.assertTrue(ujson.decode(ujson.encode(nested, orient="index")) == exp) + self.assertTrue(ujson.decode( + ujson.encode(nested, orient="index")) == exp) def testIndex(self): i = Index([23, 45, 18, 98, 43, 11], name="index") @@ -1329,7 +1378,7 @@ def testIndex(self): self.assertTrue(i.name == outp.name) dec = _clean_dict(ujson.decode(ujson.encode(i, orient="split"), - numpy=True)) + numpy=True)) outp = Index(**dec) 
self.assertTrue(i.equals(outp)) self.assertTrue(i.name == outp.name) @@ -1337,13 +1386,15 @@ def testIndex(self): outp = Index(ujson.decode(ujson.encode(i, orient="values"))) self.assertTrue(i.equals(outp)) - outp = Index(ujson.decode(ujson.encode(i, orient="values"), numpy=True)) + outp = Index(ujson.decode(ujson.encode( + i, orient="values"), numpy=True)) self.assertTrue(i.equals(outp)) outp = Index(ujson.decode(ujson.encode(i, orient="records"))) self.assertTrue(i.equals(outp)) - outp = Index(ujson.decode(ujson.encode(i, orient="records"), numpy=True)) + outp = Index(ujson.decode(ujson.encode( + i, orient="records"), numpy=True)) self.assertTrue(i.equals(outp)) outp = Index(ujson.decode(ujson.encode(i, orient="index"))) @@ -1424,7 +1475,7 @@ def test_decodeTooBigValue(self): try: input = "9223372036854775808" ujson.decode(input) - except ValueError as e: + except ValueError: pass else: assert False, "expected ValueError" @@ -1433,7 +1484,7 @@ def test_decodeTooSmallValue(self): try: input = "-90223372036854775809" ujson.decode(input) - except ValueError as e: + except ValueError: pass else: assert False, "expected ValueError" @@ -1488,21 +1539,32 @@ def test_decodeArrayFaultyUnicode(self): def test_decodeFloatingPointAdditionalTests(self): places = 15 - self.assertAlmostEqual(-1.1234567893, ujson.loads("-1.1234567893"), places=places) - self.assertAlmostEqual(-1.234567893, ujson.loads("-1.234567893"), places=places) - self.assertAlmostEqual(-1.34567893, ujson.loads("-1.34567893"), places=places) - self.assertAlmostEqual(-1.4567893, ujson.loads("-1.4567893"), places=places) - self.assertAlmostEqual(-1.567893, ujson.loads("-1.567893"), places=places) - self.assertAlmostEqual(-1.67893, ujson.loads("-1.67893"), places=places) + self.assertAlmostEqual(-1.1234567893, + ujson.loads("-1.1234567893"), places=places) + self.assertAlmostEqual(-1.234567893, + ujson.loads("-1.234567893"), places=places) + self.assertAlmostEqual(-1.34567893, + ujson.loads("-1.34567893"), 
places=places) + self.assertAlmostEqual(-1.4567893, + ujson.loads("-1.4567893"), places=places) + self.assertAlmostEqual(-1.567893, + ujson.loads("-1.567893"), places=places) + self.assertAlmostEqual(-1.67893, + ujson.loads("-1.67893"), places=places) self.assertAlmostEqual(-1.7893, ujson.loads("-1.7893"), places=places) self.assertAlmostEqual(-1.893, ujson.loads("-1.893"), places=places) self.assertAlmostEqual(-1.3, ujson.loads("-1.3"), places=places) - self.assertAlmostEqual(1.1234567893, ujson.loads("1.1234567893"), places=places) - self.assertAlmostEqual(1.234567893, ujson.loads("1.234567893"), places=places) - self.assertAlmostEqual(1.34567893, ujson.loads("1.34567893"), places=places) - self.assertAlmostEqual(1.4567893, ujson.loads("1.4567893"), places=places) - self.assertAlmostEqual(1.567893, ujson.loads("1.567893"), places=places) + self.assertAlmostEqual(1.1234567893, ujson.loads( + "1.1234567893"), places=places) + self.assertAlmostEqual(1.234567893, ujson.loads( + "1.234567893"), places=places) + self.assertAlmostEqual( + 1.34567893, ujson.loads("1.34567893"), places=places) + self.assertAlmostEqual( + 1.4567893, ujson.loads("1.4567893"), places=places) + self.assertAlmostEqual( + 1.567893, ujson.loads("1.567893"), places=places) self.assertAlmostEqual(1.67893, ujson.loads("1.67893"), places=places) self.assertAlmostEqual(1.7893, ujson.loads("1.7893"), places=places) self.assertAlmostEqual(1.893, ujson.loads("1.893"), places=places) @@ -1519,7 +1581,7 @@ def test_encodeEmptySet(self): self.assertEqual("[]", ujson.encode(s)) def test_encodeSet(self): - s = set([1,2,3,4,5,6,7,8,9]) + s = set([1, 2, 3, 4, 5, 6, 7, 8, 9]) enc = ujson.encode(s) dec = ujson.decode(enc) @@ -1532,5 +1594,5 @@ def _clean_dict(d): if __name__ == '__main__': - nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'], + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/io/tests/test_json_norm.py 
b/pandas/io/tests/test_json_norm.py index 8084446d2d246..81a1fecbdebac 100644 --- a/pandas/io/tests/test_json_norm.py +++ b/pandas/io/tests/test_json_norm.py @@ -7,6 +7,7 @@ from pandas.io.json import json_normalize, nested_to_record + def _assert_equal_data(left, right): if not left.columns.equals(right.columns): left = left.reindex(columns=right.columns) @@ -18,17 +19,17 @@ class TestJSONNormalize(tm.TestCase): def setUp(self): self.state_data = [ - {'counties': [{'name': 'Dade', 'population': 12345}, - {'name': 'Broward', 'population': 40000}, - {'name': 'Palm Beach', 'population': 60000}], - 'info': {'governor': 'Rick Scott'}, - 'shortname': 'FL', - 'state': 'Florida'}, - {'counties': [{'name': 'Summit', 'population': 1234}, - {'name': 'Cuyahoga', 'population': 1337}], - 'info': {'governor': 'John Kasich'}, - 'shortname': 'OH', - 'state': 'Ohio'}] + {'counties': [{'name': 'Dade', 'population': 12345}, + {'name': 'Broward', 'population': 40000}, + {'name': 'Palm Beach', 'population': 60000}], + 'info': {'governor': 'Rick Scott'}, + 'shortname': 'FL', + 'state': 'Florida'}, + {'counties': [{'name': 'Summit', 'population': 1234}, + {'name': 'Cuyahoga', 'population': 1337}], + 'info': {'governor': 'John Kasich'}, + 'shortname': 'OH', + 'state': 'Ohio'}] def test_simple_records(self): recs = [{'a': 1, 'b': 2, 'c': 3}, @@ -67,28 +68,28 @@ def test_more_deeply_nested(self): 'pop': 12345}, {'name': 'Los Angeles', 'pop': 12346}] - }, + }, {'name': 'Ohio', 'cities': [{'name': 'Columbus', 'pop': 1234}, {'name': 'Cleveland', 'pop': 1236}]} - ] + ] }, {'country': 'Germany', 'states': [{'name': 'Bayern', 'cities': [{'name': 'Munich', 'pop': 12347}] - }, + }, {'name': 'Nordrhein-Westfalen', 'cities': [{'name': 'Duesseldorf', 'pop': 1238}, {'name': 'Koeln', 'pop': 1239}]} - ] + ] } ] result = json_normalize(data, ['states', 'cities'], meta=['country', ['states', 'name']]) - # meta_prefix={'states': 'state_'}) + # meta_prefix={'states': 'state_'}) ex_data = {'country': ['USA'] 
* 4 + ['Germany'] * 3, 'states.name': ['California', 'California', 'Ohio', 'Ohio', @@ -105,15 +106,15 @@ def test_shallow_nested(self): data = [{'state': 'Florida', 'shortname': 'FL', 'info': { - 'governor': 'Rick Scott' + 'governor': 'Rick Scott' }, 'counties': [{'name': 'Dade', 'population': 12345}, - {'name': 'Broward', 'population': 40000}, - {'name': 'Palm Beach', 'population': 60000}]}, + {'name': 'Broward', 'population': 40000}, + {'name': 'Palm Beach', 'population': 60000}]}, {'state': 'Ohio', 'shortname': 'OH', 'info': { - 'governor': 'John Kasich' + 'governor': 'John Kasich' }, 'counties': [{'name': 'Summit', 'population': 1234}, {'name': 'Cuyahoga', 'population': 1337}]}] @@ -167,8 +168,8 @@ def test_record_prefix(self): class TestNestedToRecord(tm.TestCase): def test_flat_stays_flat(self): - recs = [dict(flat1=1,flat2=2), - dict(flat1=3,flat2=4), + recs = [dict(flat1=1, flat2=2), + dict(flat1=3, flat2=4), ] result = nested_to_record(recs) @@ -177,30 +178,30 @@ def test_flat_stays_flat(self): def test_one_level_deep_flattens(self): data = dict(flat1=1, - dict1=dict(c=1,d=2)) + dict1=dict(c=1, d=2)) result = nested_to_record(data) - expected = {'dict1.c': 1, - 'dict1.d': 2, - 'flat1': 1} + expected = {'dict1.c': 1, + 'dict1.d': 2, + 'flat1': 1} - self.assertEqual(result,expected) + self.assertEqual(result, expected) def test_nested_flattens(self): data = dict(flat1=1, - dict1=dict(c=1,d=2), - nested=dict(e=dict(c=1,d=2), + dict1=dict(c=1, d=2), + nested=dict(e=dict(c=1, d=2), d=2)) result = nested_to_record(data) - expected = {'dict1.c': 1, - 'dict1.d': 2, - 'flat1': 1, - 'nested.d': 2, - 'nested.e.c': 1, - 'nested.e.d': 2} - - self.assertEqual(result,expected) + expected = {'dict1.c': 1, + 'dict1.d': 2, + 'flat1': 1, + 'nested.d': 2, + 'nested.e.c': 1, + 'nested.e.d': 2} + + self.assertEqual(result, expected) if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', diff --git a/pandas/io/tests/test_packers.py 
b/pandas/io/tests/test_packers.py
index bdbcb9c0d0d3e..6905225600ae6 100644
--- a/pandas/io/tests/test_packers.py
+++ b/pandas/io/tests/test_packers.py
@@ -18,7 +18,6 @@
 from pandas.tests.test_panel import assert_panel_equal
 
 import pandas
-from pandas.sparse.tests.test_sparse import assert_sp_series_equal, assert_sp_frame_equal
 from pandas import Timestamp, tslib
 
 nan = np.nan
@@ -57,6 +56,7 @@ def check_arbitrary(a, b):
     else:
         assert(a == b)
 
+
 class TestPackers(tm.TestCase):
 
     def setUp(self):
@@ -70,31 +70,32 @@ def encode_decode(self, x, compress=None, **kwargs):
             to_msgpack(p, x, compress=compress, **kwargs)
             return read_msgpack(p, **kwargs)
 
+
 class TestAPI(TestPackers):
 
     def test_string_io(self):
 
-        df = DataFrame(np.random.randn(10,2))
+        df = DataFrame(np.random.randn(10, 2))
         s = df.to_msgpack(None)
         result = read_msgpack(s)
-        tm.assert_frame_equal(result,df)
+        tm.assert_frame_equal(result, df)
 
         s = df.to_msgpack()
         result = read_msgpack(s)
-        tm.assert_frame_equal(result,df)
+        tm.assert_frame_equal(result, df)
 
         s = df.to_msgpack()
         result = read_msgpack(compat.BytesIO(s))
-        tm.assert_frame_equal(result,df)
+        tm.assert_frame_equal(result, df)
 
-        s = to_msgpack(None,df)
+        s = to_msgpack(None, df)
         result = read_msgpack(s)
         tm.assert_frame_equal(result, df)
 
         with ensure_clean(self.path) as p:
 
             s = df.to_msgpack()
-            fh = open(p,'wb')
+            fh = open(p, 'wb')
             fh.write(s)
             fh.close()
             result = read_msgpack(p)
@@ -102,14 +103,15 @@ def test_string_io(self):
 
     def test_iterator_with_string_io(self):
 
-        dfs = [ DataFrame(np.random.randn(10,2)) for i in range(5) ]
-        s = to_msgpack(None,*dfs)
-        for i, result in enumerate(read_msgpack(s,iterator=True)):
-            tm.assert_frame_equal(result,dfs[i])
+        dfs = [DataFrame(np.random.randn(10, 2)) for i in range(5)]
+        s = to_msgpack(None, *dfs)
+        for i, result in enumerate(read_msgpack(s, iterator=True)):
+            tm.assert_frame_equal(result, dfs[i])
 
     def test_invalid_arg(self):
-        #GH10369
+        # GH10369
         class A(object):
+
             def __init__(self):
                 self.read = 0
 
@@ -123,7 +125,7 @@ class TestNumpy(TestPackers):
     def test_numpy_scalar_float(self):
         x = np.float32(np.random.rand())
         x_rec = self.encode_decode(x)
-        tm.assert_almost_equal(x,x_rec)
+        tm.assert_almost_equal(x, x_rec)
 
     def test_numpy_scalar_complex(self):
         x = np.complex64(np.random.rand() + 1j * np.random.rand())
@@ -133,7 +135,7 @@ def test_numpy_scalar_complex(self):
     def test_scalar_float(self):
         x = np.random.rand()
         x_rec = self.encode_decode(x)
-        tm.assert_almost_equal(x,x_rec)
+        tm.assert_almost_equal(x, x_rec)
 
     def test_scalar_complex(self):
         x = np.random.rand() + 1j * np.random.rand()
@@ -143,7 +145,7 @@ def test_scalar_complex(self):
     def test_list_numpy_float(self):
         x = [np.float32(np.random.rand()) for i in range(5)]
         x_rec = self.encode_decode(x)
-        tm.assert_almost_equal(x,x_rec)
+        tm.assert_almost_equal(x, x_rec)
 
     def test_list_numpy_float_complex(self):
         if not hasattr(np, 'complex128'):
@@ -158,7 +160,7 @@ def test_list_numpy_float_complex(self):
     def test_list_float(self):
         x = [np.random.rand() for i in range(5)]
         x_rec = self.encode_decode(x)
-        tm.assert_almost_equal(x,x_rec)
+        tm.assert_almost_equal(x, x_rec)
 
     def test_list_float_complex(self):
         x = [np.random.rand() for i in range(5)] + \
@@ -169,7 +171,7 @@ def test_list_float_complex(self):
     def test_dict_float(self):
         x = {'foo': 1.0, 'bar': 2.0}
         x_rec = self.encode_decode(x)
-        tm.assert_almost_equal(x,x_rec)
+        tm.assert_almost_equal(x, x_rec)
 
     def test_dict_complex(self):
         x = {'foo': 1.0 + 1.0j, 'bar': 2.0 + 2.0j}
@@ -181,7 +183,7 @@ def test_dict_complex(self):
     def test_dict_numpy_float(self):
         x = {'foo': np.float32(1.0), 'bar': np.float32(2.0)}
         x_rec = self.encode_decode(x)
-        tm.assert_almost_equal(x,x_rec)
+        tm.assert_almost_equal(x, x_rec)
 
     def test_dict_numpy_complex(self):
         x = {'foo': np.complex128(1.0 + 1.0j),
@@ -196,10 +198,10 @@ def test_numpy_array_float(self):
 
         # run multiple times
         for n in range(10):
             x = np.random.rand(10)
-            for dtype in ['float32','float64']:
+            for dtype in ['float32', 'float64']:
                 x = x.astype(dtype)
                 x_rec = self.encode_decode(x)
-                tm.assert_almost_equal(x,x_rec)
+                tm.assert_almost_equal(x, x_rec)
 
     def test_numpy_array_complex(self):
         x = (np.random.rand(5) + 1j * np.random.rand(5)).astype(np.complex128)
@@ -210,7 +212,8 @@ def test_numpy_array_complex(self):
     def test_list_mixed(self):
         x = [1.0, np.float32(3.5), np.complex128(4.25), u('foo')]
         x_rec = self.encode_decode(x)
-        tm.assert_almost_equal(x,x_rec)
+        tm.assert_almost_equal(x, x_rec)
+
 
 class TestBasic(TestPackers):
 
@@ -229,9 +232,10 @@ def test_datetimes(self):
 
         if LooseVersion(sys.version) < '2.7':
             raise nose.SkipTest('2.6 with np.datetime64 is broken')
 
-        for i in [datetime.datetime(
-                2013, 1, 1), datetime.datetime(2013, 1, 1, 5, 1),
-                datetime.date(2013, 1, 1), np.datetime64(datetime.datetime(2013, 1, 5, 2, 15))]:
+        for i in [datetime.datetime(2013, 1, 1),
+                  datetime.datetime(2013, 1, 1, 5, 1),
+                  datetime.date(2013, 1, 1),
+                  np.datetime64(datetime.datetime(2013, 1, 5, 2, 15))]:
             i_rec = self.encode_decode(i)
             self.assertEqual(i, i_rec)
 
@@ -263,8 +267,10 @@ def setUp(self):
         }
 
         self.mi = {
-            'reg': MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'), ('foo', 'two'),
-                                           ('qux', 'one'), ('qux', 'two')], names=['first', 'second']),
+            'reg': MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'),
+                                           ('foo', 'two'),
+                                           ('qux', 'one'), ('qux', 'two')],
+                                          names=['first', 'second']),
         }
 
     def test_basic_index(self):
@@ -274,12 +280,13 @@ def test_basic_index(self):
             self.assertTrue(i.equals(i_rec))
 
         # datetime with no freq (GH5506)
-        i = Index([Timestamp('20130101'),Timestamp('20130103')])
+        i = Index([Timestamp('20130101'), Timestamp('20130103')])
         i_rec = self.encode_decode(i)
         self.assertTrue(i.equals(i_rec))
 
         # datetime with timezone
-        i = Index([Timestamp('20130101 9:00:00'),Timestamp('20130103 11:00:00')]).tz_localize('US/Eastern')
+        i = Index([Timestamp('20130101 9:00:00'), Timestamp(
+            '20130103 11:00:00')]).tz_localize('US/Eastern')
         i_rec = self.encode_decode(i)
         self.assertTrue(i.equals(i_rec))
 
@@ -295,8 +302,8 @@ def test_unicode(self):
         # this currently fails
         self.assertRaises(UnicodeEncodeError, self.encode_decode, i)
 
-        #i_rec = self.encode_decode(i)
-        #self.assertTrue(i.equals(i_rec))
+        # i_rec = self.encode_decode(i)
+        # self.assertTrue(i.equals(i_rec))
 
 
 class TestSeries(TestPackers):
@@ -354,10 +361,12 @@ def setUp(self):
         self.frame = {
             'float': DataFrame(dict(A=data['A'], B=Series(data['A']) + 1)),
             'int': DataFrame(dict(A=data['B'], B=Series(data['B']) + 1)),
-            'mixed': DataFrame(dict([(k, data[k]) for k in ['A', 'B', 'C', 'D']]))}
+            'mixed': DataFrame(dict([(k, data[k])
+                                     for k in ['A', 'B', 'C', 'D']]))}
 
         self.panel = {
-            'float': Panel(dict(ItemA=self.frame['float'], ItemB=self.frame['float'] + 1))}
+            'float': Panel(dict(ItemA=self.frame['float'],
+                                ItemB=self.frame['float'] + 1))}
 
     def test_basic_frame(self):
 
@@ -377,8 +386,8 @@ def test_multi(self):
         for k in self.frame.keys():
             assert_frame_equal(self.frame[k], i_rec[k])
 
-        l = tuple(
-            [self.frame['float'], self.frame['float'].A, self.frame['float'].B, None])
+        l = tuple([self.frame['float'], self.frame['float'].A,
+                   self.frame['float'].B, None])
         l_rec = self.encode_decode(l)
         check_arbitrary(l, l_rec)
 
@@ -415,7 +424,7 @@ def test_dataframe_duplicate_column_names(self):
 
         # GH 9618
         expected_1 = DataFrame(columns=['a', 'a'])
-        expected_2 = DataFrame(columns=[1]*100)
+        expected_2 = DataFrame(columns=[1] * 100)
         expected_2.loc[0] = np.random.randn(100)
         expected_3 = DataFrame(columns=[1, 1])
         expected_3.loc[0] = ['abc', np.nan]
@@ -434,8 +443,8 @@ class TestSparse(TestPackers):
     def _check_roundtrip(self, obj, comparator, **kwargs):
 
         # currently these are not implemetned
-        #i_rec = self.encode_decode(obj)
-        #comparator(obj, i_rec, **kwargs)
+        # i_rec = self.encode_decode(obj)
+        # comparator(obj, i_rec, **kwargs)
         self.assertRaises(NotImplementedError, self.encode_decode, obj)
 
     def test_sparse_series(self):
@@ -581,29 +590,30 @@ def test_readonly_axis_zlib_to_sql(self):
 
 
 class TestEncoding(TestPackers):
-    def setUp(self):
-        super(TestEncoding, self).setUp()
-        data = {
-            'A': [compat.u('\u2019')] * 1000,
-            'B': np.arange(1000, dtype=np.int32),
-            'C': list(100 * 'abcdefghij'),
-            'D': date_range(datetime.datetime(2015, 4, 1), periods=1000),
-            'E': [datetime.timedelta(days=x) for x in range(1000)],
-            'G': [400] * 1000
-        }
-        self.frame = {
-            'float': DataFrame(dict((k, data[k]) for k in ['A', 'A'])),
-            'int': DataFrame(dict((k, data[k]) for k in ['B', 'B'])),
-            'mixed': DataFrame(data),
-        }
-        self.utf_encodings = ['utf8', 'utf16', 'utf32']
-
-    def test_utf(self):
-        # GH10581
-        for encoding in self.utf_encodings:
-            for frame in compat.itervalues(self.frame):
-                result = self.encode_decode(frame, encoding=encoding)
-                assert_frame_equal(result, frame)
+
+    def setUp(self):
+        super(TestEncoding, self).setUp()
+        data = {
+            'A': [compat.u('\u2019')] * 1000,
+            'B': np.arange(1000, dtype=np.int32),
+            'C': list(100 * 'abcdefghij'),
+            'D': date_range(datetime.datetime(2015, 4, 1), periods=1000),
+            'E': [datetime.timedelta(days=x) for x in range(1000)],
+            'G': [400] * 1000
+        }
+        self.frame = {
+            'float': DataFrame(dict((k, data[k]) for k in ['A', 'A'])),
+            'int': DataFrame(dict((k, data[k]) for k in ['B', 'B'])),
+            'mixed': DataFrame(data),
+        }
+        self.utf_encodings = ['utf8', 'utf16', 'utf32']
+
+    def test_utf(self):
+        # GH10581
+        for encoding in self.utf_encodings:
+            for frame in compat.itervalues(self.frame):
+                result = self.encode_decode(frame, encoding=encoding)
+                assert_frame_equal(result, frame)
 
 
 class TestMsgpack():
@@ -620,13 +630,15 @@ class TestMsgpack():
     NOTE: TestMsgpack can't be a subclass of tm.Testcase to use test generator.
    http://stackoverflow.com/questions/6689537/nose-test-generators-inside-class
    """
+
    def setUp(self):
        from pandas.io.tests.generate_legacy_storage_files import (
            create_msgpack_data, create_data)
        self.data = create_msgpack_data()
        self.all_data = create_data()
        self.path = u('__%s__.msgpack' % tm.rands(10))
-        self.minimum_structure = {'series': ['float', 'int', 'mixed', 'ts', 'mi', 'dup'],
+        self.minimum_structure = {'series': ['float', 'int', 'mixed',
+                                             'ts', 'mi', 'dup'],
                                  'frame': ['float', 'int', 'mixed', 'mi'],
                                  'panel': ['float'],
                                  'index': ['int', 'date', 'period'],
@@ -636,15 +648,19 @@ def check_min_structure(self, data):
        for typ, v in self.minimum_structure.items():
            assert typ in data, '"{0}" not found in unpacked data'.format(typ)
            for kind in v:
-                assert kind in data[typ], '"{0}" not found in data["{1}"]'.format(kind, typ)
+                assert kind in data[
+                    typ], '"{0}" not found in data["{1}"]'.format(kind, typ)

    def compare(self, vf, version):
        data = read_msgpack(vf)
        self.check_min_structure(data)
        for typ, dv in data.items():
-            assert typ in self.all_data, 'unpacked data contains extra key "{0}"'.format(typ)
+            assert typ in self.all_data, ('unpacked data contains '
+                                          'extra key "{0}"'
+                                          .format(typ))
            for dt, result in dv.items():
-                assert dt in self.all_data[typ], 'data["{0}"] contains extra key "{1}"'.format(typ, dt)
+                assert dt in self.all_data[typ], ('data["{0}"] contains extra '
+                                                  'key "{1}"'.format(typ, dt))
                try:
                    expected = self.data[typ][dt]
                except KeyError:
@@ -652,7 +668,8 @@ def compare(self, vf, version):

                # use a specific comparator
                # if available
-                comparator = getattr(self,"compare_{typ}_{dt}".format(typ=typ,dt=dt), None)
+                comparator = getattr(
+                    self, "compare_{typ}_{dt}".format(typ=typ, dt=dt), None)
                if comparator is not None:
                    comparator(result, expected, typ, version)
                else:
@@ -697,9 +714,3 @@ def test_msgpack(self):
            yield self.read_msgpacks, v
            n += 1
        assert n > 0, 'Msgpack files are not tested'
-
-
-if __name__ == '__main__':
-    import nose
-    nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
-                   exit=False)
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index e34f2cb87a2df..06afd20071349 100755
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -1,6 +1,8 @@
 # -*- coding: utf-8 -*-
 # pylint: disable=E1101
 
+# flake8: noqa
+
 from datetime import datetime
 import csv
 import os
@@ -71,13 +73,6 @@ def test_converters_type_must_be_dict(self):
         with tm.assertRaisesRegexp(TypeError, 'Type converters.+'):
             self.read_csv(StringIO(self.data1), converters=0)
 
-    def test_multi_character_decimal_marker(self):
-        data = """A|B|C
-1|2,334|5
-10|13|10.
-"""
-        self.assertRaises(ValueError, read_csv, StringIO(data), decimal=',,')
-
     def test_empty_decimal_marker(self):
         data = """A|B|C
 1|2,334|5
@@ -92,7 +87,6 @@ def test_empty_thousands_marker(self):
 """
         self.assertRaises(ValueError, read_csv, StringIO(data), thousands='')
 
-
     def test_multi_character_decimal_marker(self):
         data = """A|B|C
 1|2,334|5
@@ -141,8 +135,8 @@ def test_empty_string(self):
                                      np.nan, 'seven']})
         tm.assert_frame_equal(xp.reindex(columns=df.columns), df)
 
-
-        # GH4318, passing na_values=None and keep_default_na=False yields 'None' as a na_value
+        # GH4318, passing na_values=None and keep_default_na=False yields
+        # 'None' as a na_value
         data = """\
 One,Two,Three
 a,1,None
@@ -161,7 +155,6 @@ def test_empty_string(self):
                                      'seven']})
         tm.assert_frame_equal(xp.reindex(columns=df.columns), df)
 
-
     def test_read_csv(self):
         if not compat.PY3:
             if compat.is_platform_windows():
@@ -170,7 +163,7 @@ def test_read_csv(self):
                 prefix = u("file://")
         fname = prefix + compat.text_type(self.csv1)
         # it works!
-        df1 = read_csv(fname, index_col=0, parse_dates=True)
+        read_csv(fname, index_col=0, parse_dates=True)
 
     def test_dialect(self):
         data = """\
@@ -202,7 +195,7 @@ def test_dialect_str(self):
             'fruit': ['apple', 'pear'],
             'vegetable': ['brocolli', 'tomato']
         })
-        dia = csv.register_dialect('mydialect', delimiter=':')
+        dia = csv.register_dialect('mydialect', delimiter=':')  # noqa
         df = self.read_csv(StringIO(data), dialect='mydialect')
         tm.assert_frame_equal(df, exp)
         csv.unregister_dialect('mydialect')
@@ -242,17 +235,20 @@ def test_1000_sep_with_decimal(self):
         df = self.read_csv(StringIO(data), sep='|', thousands=',', decimal='.')
         tm.assert_frame_equal(df, expected)
 
-        df = self.read_table(StringIO(data), sep='|', thousands=',', decimal='.')
+        df = self.read_table(StringIO(data), sep='|',
+                             thousands=',', decimal='.')
         tm.assert_frame_equal(df, expected)
 
         data_with_odd_sep = """A|B|C
 1|2.334,01|5
 10|13|10,
 """
-        df = self.read_csv(StringIO(data_with_odd_sep), sep='|', thousands='.', decimal=',')
+        df = self.read_csv(StringIO(data_with_odd_sep),
+                           sep='|', thousands='.', decimal=',')
         tm.assert_frame_equal(df, expected)
 
-        df = self.read_table(StringIO(data_with_odd_sep), sep='|', thousands='.', decimal=',')
+        df = self.read_table(StringIO(data_with_odd_sep),
+                             sep='|', thousands='.', decimal=',')
         tm.assert_frame_equal(df, expected)
 
     def test_separator_date_conflict(self):
@@ -264,7 +260,8 @@ def test_separator_date_conflict(self):
             columns=['Date', 2]
         )
 
-        df = self.read_csv(StringIO(data), sep=';', thousands='-', parse_dates={'Date': [0, 1]}, header=None)
+        df = self.read_csv(StringIO(data), sep=';', thousands='-',
+                           parse_dates={'Date': [0, 1]}, header=None)
         tm.assert_frame_equal(df, expected)
 
     def test_squeeze(self):
@@ -410,7 +407,7 @@ def test_multiple_date_col_timestamp_parse(self):
         data = """05/31/2012,15:30:00.029,1306.25,1,E,0,,1306.25
05/31/2012,15:30:00.029,1306.25,8,E,0,,1306.25"""
         result = self.read_csv(StringIO(data), sep=',', header=None,
-                               parse_dates=[[0,1]], date_parser=Timestamp)
+                               parse_dates=[[0, 1]], date_parser=Timestamp)
 
         ex_val = Timestamp('05/31/2012 15:30:00.029')
         self.assertEqual(result['0_1'][0], ex_val)
@@ -471,7 +468,7 @@ def test_multiple_date_col_name_collision(self):
 KORD3,19990127, 21:00:00, 20:56:00, -0.5900, 2.2100, 5.7000, 0.0000, 280.0000
 KORD4,19990127, 21:00:00, 21:18:00, -0.9900, 2.0100, 3.6000, 0.0000, 270.0000
 KORD5,19990127, 22:00:00, 21:56:00, -0.5900, 1.7100, 5.1000, 0.0000, 290.0000
-KORD6,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000"""
+KORD6,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000"""  # noqa
 
         self.assertRaises(ValueError, self.read_csv, StringIO(data),
                           parse_dates=[[1, 2]])
@@ -515,11 +512,12 @@ def test_usecols_index_col_False(self):
         # Issue 9082
         s = "a,b,c,d\n1,2,3,4\n5,6,7,8"
         s_malformed = "a,b,c,d\n1,2,3,4,\n5,6,7,8,"
-        cols = ['a','c','d']
-        expected = DataFrame({'a':[1,5], 'c':[3,7], 'd':[4,8]})
+        cols = ['a', 'c', 'd']
+        expected = DataFrame({'a': [1, 5], 'c': [3, 7], 'd': [4, 8]})
         df = self.read_csv(StringIO(s), usecols=cols, index_col=False)
         tm.assert_frame_equal(expected, df)
-        df = self.read_csv(StringIO(s_malformed), usecols=cols, index_col=False)
+        df = self.read_csv(StringIO(s_malformed),
+                           usecols=cols, index_col=False)
         tm.assert_frame_equal(expected, df)
 
     def test_index_col_is_True(self):
@@ -542,9 +540,9 @@ def test_date_parser_int_bug(self):
         # #3071
         log_file = StringIO(
             'posix_timestamp,elapsed,sys,user,queries,query_time,rows,'
-            'accountid,userid,contactid,level,silo,method\n'
+            'accountid,userid,contactid,level,silo,method\n'
             '1343103150,0.062353,0,4,6,0.01690,3,'
-            '12345,1,-1,3,invoice_InvoiceResource,search\n'
+            '12345,1,-1,3,invoice_InvoiceResource,search\n'
         )
 
         def f(posix_string):
@@ -588,7 +586,7 @@ def test_malformed(self):
 
         # Test for ValueError with other engines:
         try:
-            with tm.assertRaisesRegexp(ValueError, 'skip_footer'): #XXX
+            with tm.assertRaisesRegexp(ValueError, 'skip_footer'):  # XXX
                 df = self.read_table(
                     StringIO(data), sep=',', header=1, comment='#',
                     skip_footer=1)
@@ -607,7 +605,8 @@ def test_malformed(self):
 """
         try:
             it = self.read_table(StringIO(data), sep=',',
-                                 header=1, comment='#', iterator=True, chunksize=1,
+                                 header=1, comment='#',
+                                 iterator=True, chunksize=1,
                                  skiprows=[2])
             df = it.read(5)
             self.assertTrue(False)
@@ -644,8 +643,8 @@ def test_malformed(self):
 """
         try:
             it = self.read_table(StringIO(data), sep=',',
-                                 header=1, comment='#', iterator=True, chunksize=1,
-                                 skiprows=[2])
+                                 header=1, comment='#',
+                                 iterator=True, chunksize=1, skiprows=[2])
             df = it.read(1)
             it.read()
             self.assertTrue(False)
@@ -659,9 +658,10 @@ def test_passing_dtype(self):
 
         # Test for ValueError with other engines:
         with tm.assertRaisesRegexp(ValueError,
-                "The 'dtype' option is not supported"):
+                                   "The 'dtype' option is not supported"):
 
-            df = DataFrame(np.random.rand(5,2),columns=list('AB'),index=['1A','1B','1C','1D','1E'])
+            df = DataFrame(np.random.rand(5, 2), columns=list(
+                'AB'), index=['1A', '1B', '1C', '1D', '1E'])
 
             with tm.ensure_clean('__passing_str_as_dtype__.csv') as path:
                 df.to_csv(path)
@@ -669,24 +669,30 @@ def test_passing_dtype(self):
 
                 # GH 3795
                 # passing 'str' as the dtype
                 result = self.read_csv(path, dtype=str, index_col=0)
-                tm.assert_series_equal(result.dtypes,Series({ 'A' : 'object', 'B' : 'object' }))
+                tm.assert_series_equal(result.dtypes, Series(
+                    {'A': 'object', 'B': 'object'}))
 
-                # we expect all object columns, so need to convert to test for equivalence
+                # we expect all object columns, so need to convert to test for
+                # equivalence
                 result = result.astype(float)
-                tm.assert_frame_equal(result,df)
+                tm.assert_frame_equal(result, df)
 
                 # invalid dtype
-                self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'foo', 'B' : 'float64' },
+                self.assertRaises(TypeError, self.read_csv, path,
+                                  dtype={'A': 'foo', 'B': 'float64'},
                                   index_col=0)
 
                 # valid but we don't support it (date)
-                self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'datetime64', 'B' : 'float64' },
+                self.assertRaises(TypeError, self.read_csv, path,
+                                  dtype={'A': 'datetime64', 'B': 'float64'},
                                   index_col=0)
-                self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'datetime64', 'B' : 'float64' },
+                self.assertRaises(TypeError, self.read_csv, path,
+                                  dtype={'A': 'datetime64', 'B': 'float64'},
                                   index_col=0, parse_dates=['B'])
 
                 # valid but we don't support it
-                self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'timedelta64', 'B' : 'float64' },
+                self.assertRaises(TypeError, self.read_csv, path,
+                                  dtype={'A': 'timedelta64', 'B': 'float64'},
                                   index_col=0)
 
     def test_quoting(self):
@@ -706,65 +712,72 @@ def test_quoting(self):
     def test_non_string_na_values(self):
         # GH3611, na_values that are not a string are an issue
         with tm.ensure_clean('__non_string_na_values__.csv') as path:
-            df = DataFrame({'A' : [-999, 2, 3], 'B' : [1.2, -999, 4.5]})
+            df = DataFrame({'A': [-999, 2, 3], 'B': [1.2, -999, 4.5]})
             df.to_csv(path, sep=' ', index=False)
-            result1 = read_csv(path, sep= ' ', header=0, na_values=['-999.0','-999'])
-            result2 = read_csv(path, sep= ' ', header=0, na_values=[-999,-999.0])
-            result3 = read_csv(path, sep= ' ', header=0, na_values=[-999.0,-999])
-            tm.assert_frame_equal(result1,result2)
-            tm.assert_frame_equal(result2,result3)
-
-            result4 = read_csv(path, sep= ' ', header=0, na_values=['-999.0'])
-            result5 = read_csv(path, sep= ' ', header=0, na_values=['-999'])
-            result6 = read_csv(path, sep= ' ', header=0, na_values=[-999.0])
-            result7 = read_csv(path, sep= ' ', header=0, na_values=[-999])
-            tm.assert_frame_equal(result4,result3)
-            tm.assert_frame_equal(result5,result3)
-            tm.assert_frame_equal(result6,result3)
-            tm.assert_frame_equal(result7,result3)
+            result1 = read_csv(path, sep=' ', header=0,
+                               na_values=['-999.0', '-999'])
+            result2 = read_csv(path, sep=' ', header=0,
+                               na_values=[-999, -999.0])
+            result3 = read_csv(path, sep=' ', header=0,
+                               na_values=[-999.0, -999])
+            tm.assert_frame_equal(result1, result2)
+            tm.assert_frame_equal(result2, result3)
+
+            result4 = read_csv(path, sep=' ', header=0, na_values=['-999.0'])
+            result5 = read_csv(path, sep=' ', header=0, na_values=['-999'])
+            result6 = read_csv(path, sep=' ', header=0, na_values=[-999.0])
+            result7 = read_csv(path, sep=' ', header=0, na_values=[-999])
+            tm.assert_frame_equal(result4, result3)
+            tm.assert_frame_equal(result5, result3)
+            tm.assert_frame_equal(result6, result3)
+            tm.assert_frame_equal(result7, result3)
             good_compare = result3
 
-            # with an odd float format, so we can't match the string 999.0 exactly,
-            # but need float matching
-            df.to_csv(path, sep=' ', index=False, float_format = '%.3f')
-            result1 = read_csv(path, sep= ' ', header=0, na_values=['-999.0','-999'])
-            result2 = read_csv(path, sep= ' ', header=0, na_values=[-999,-999.0])
-            result3 = read_csv(path, sep= ' ', header=0, na_values=[-999.0,-999])
-            tm.assert_frame_equal(result1,good_compare)
-            tm.assert_frame_equal(result2,good_compare)
-            tm.assert_frame_equal(result3,good_compare)
-
-            result4 = read_csv(path, sep= ' ', header=0, na_values=['-999.0'])
-            result5 = read_csv(path, sep= ' ', header=0, na_values=['-999'])
-            result6 = read_csv(path, sep= ' ', header=0, na_values=[-999.0])
-            result7 = read_csv(path, sep= ' ', header=0, na_values=[-999])
-            tm.assert_frame_equal(result4,good_compare)
-            tm.assert_frame_equal(result5,good_compare)
-            tm.assert_frame_equal(result6,good_compare)
-            tm.assert_frame_equal(result7,good_compare)
+            # with an odd float format, so we can't match the string 999.0
+            # exactly, but need float matching
+            df.to_csv(path, sep=' ', index=False, float_format='%.3f')
+            result1 = read_csv(path, sep=' ', header=0,
+                               na_values=['-999.0', '-999'])
+            result2 = read_csv(path, sep=' ', header=0,
+                               na_values=[-999, -999.0])
+            result3 = read_csv(path, sep=' ', header=0,
+                               na_values=[-999.0, -999])
+            tm.assert_frame_equal(result1, good_compare)
+            tm.assert_frame_equal(result2, good_compare)
+            tm.assert_frame_equal(result3, good_compare)
+
+            result4 = read_csv(path, sep=' ', header=0, na_values=['-999.0'])
+            result5 = read_csv(path, sep=' ', header=0, na_values=['-999'])
+            result6 = read_csv(path, sep=' ', header=0, na_values=[-999.0])
+            result7 = read_csv(path, sep=' ', header=0, na_values=[-999])
+            tm.assert_frame_equal(result4, good_compare)
+            tm.assert_frame_equal(result5, good_compare)
+            tm.assert_frame_equal(result6, good_compare)
+            tm.assert_frame_equal(result7, good_compare)
 
     def test_default_na_values(self):
         _NA_VALUES = set(['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN',
-                          '#N/A','N/A', 'NA', '#NA', 'NULL', 'NaN',
-                          'nan', '-NaN', '-nan', '#N/A N/A',''])
+                          '#N/A', 'N/A', 'NA', '#NA', 'NULL', 'NaN',
+                          'nan', '-NaN', '-nan', '#N/A N/A', ''])
         self.assertEqual(_NA_VALUES, parsers._NA_VALUES)
         nv = len(_NA_VALUES)
+
         def f(i, v):
             if i == 0:
                 buf = ''
             elif i > 0:
                 buf = ''.join([','] * i)
 
-            buf = "{0}{1}".format(buf,v)
+            buf = "{0}{1}".format(buf, v)
 
-            if i < nv-1:
-                buf = "{0}{1}".format(buf,''.join([','] * (nv-i-1)))
+            if i < nv - 1:
+                buf = "{0}{1}".format(buf, ''.join([','] * (nv - i - 1)))
 
             return buf
 
-        data = StringIO('\n'.join([ f(i, v) for i, v in enumerate(_NA_VALUES) ]))
-        expected = DataFrame(np.nan,columns=range(nv),index=range(nv))
+        data = StringIO('\n'.join([f(i, v) for i, v in enumerate(_NA_VALUES)]))
+        expected = DataFrame(np.nan, columns=range(nv), index=range(nv))
         df = self.read_csv(data, header=None)
         tm.assert_frame_equal(df, expected)
 
@@ -794,23 +807,24 @@ def test_nat_parse(self):
 
         # GH 3062
         df = DataFrame(dict({
-            'A' : np.asarray(lrange(10),dtype='float64'),
-            'B' : pd.Timestamp('20010101') }))
-        df.iloc[3:6,:] = np.nan
+            'A': np.asarray(lrange(10), dtype='float64'),
+            'B': pd.Timestamp('20010101')}))
+        df.iloc[3:6, :] = np.nan
 
         with tm.ensure_clean('__nat_parse_.csv') as path:
             df.to_csv(path)
-            result = read_csv(path,index_col=0,parse_dates=['B'])
-            tm.assert_frame_equal(result,df)
+            result = read_csv(path, index_col=0, parse_dates=['B'])
+            tm.assert_frame_equal(result, df)
 
-            expected = Series(dict( A = 'float64',B = 'datetime64[ns]'))
-            tm.assert_series_equal(expected,result.dtypes)
+            expected = Series(dict(A='float64', B='datetime64[ns]'))
+            tm.assert_series_equal(expected, result.dtypes)
 
             # test with NaT for the nan_rep
-            # we don't have a method to specif the Datetime na_rep (it defaults to '')
+            # we don't have a method to specif the Datetime na_rep (it defaults
+            # to '')
             df.to_csv(path)
-            result = read_csv(path,index_col=0,parse_dates=['B'])
-            tm.assert_frame_equal(result,df)
+            result = read_csv(path, index_col=0, parse_dates=['B'])
+            tm.assert_frame_equal(result, df)
 
     def test_skiprows_bug(self):
         # GH #505
@@ -840,8 +854,12 @@ def test_skiprows_bug(self):
 
     def test_deep_skiprows(self):
         # GH #4382
-        text = "a,b,c\n" + "\n".join([",".join([str(i), str(i+1), str(i+2)]) for i in range(10)])
-        condensed_text = "a,b,c\n" + "\n".join([",".join([str(i), str(i+1), str(i+2)]) for i in [0, 1, 2, 3, 4, 6, 8, 9]])
+        text = "a,b,c\n" + \
+            "\n".join([",".join([str(i), str(i + 1), str(i + 2)])
+                       for i in range(10)])
+        condensed_text = "a,b,c\n" + \
+            "\n".join([",".join([str(i), str(i + 1), str(i + 2)])
+                       for i in [0, 1, 2, 3, 4, 6, 8, 9]])
         data = self.read_csv(StringIO(text), skiprows=[6, 8])
         condensed_data = self.read_csv(StringIO(condensed_text))
         tm.assert_frame_equal(data, condensed_data)
@@ -859,7 +877,7 @@ def test_skiprows_blank(self):
 1/3/2000,7,8,9
 """
         data = self.read_csv(StringIO(text), skiprows=6, header=None,
-                           index_col=0, parse_dates=True)
+                             index_col=0, parse_dates=True)
 
         expected = DataFrame(np.arange(1., 10.).reshape((3, 3)),
                              columns=[1, 2, 3],
@@ -918,13 +936,15 @@ def test_duplicate_columns(self):
 11,12,13,14,15
 """
             # check default beahviour
-            df = self.read_table(StringIO(data), sep=',',engine=engine)
+            df = self.read_table(StringIO(data), sep=',', engine=engine)
             self.assertEqual(list(df.columns), ['A', 'A.1', 'B', 'B.1', 'B.2'])
 
-            df = self.read_table(StringIO(data), sep=',',engine=engine,mangle_dupe_cols=False)
+            df = self.read_table(StringIO(data), sep=',',
+                                 engine=engine, mangle_dupe_cols=False)
             self.assertEqual(list(df.columns), ['A', 'A', 'B', 'B', 'B'])
 
-            df = self.read_table(StringIO(data), sep=',',engine=engine,mangle_dupe_cols=True)
+            df = self.read_table(StringIO(data), sep=',',
+                                 engine=engine, mangle_dupe_cols=True)
             self.assertEqual(list(df.columns), ['A', 'A.1', 'B', 'B.1', 'B.2'])
 
     def test_csv_mixed_type(self):
@@ -955,7 +975,8 @@ def test_parse_dates_implicit_first_col(self):
 """
         df = self.read_csv(StringIO(data), parse_dates=True)
         expected = self.read_csv(StringIO(data), index_col=0, parse_dates=True)
-        self.assertIsInstance(df.index[0], (datetime, np.datetime64, Timestamp))
+        self.assertIsInstance(
+            df.index[0], (datetime, np.datetime64, Timestamp))
         tm.assert_frame_equal(df, expected)
 
     def test_parse_dates_string(self):
@@ -1087,7 +1108,8 @@ def test_read_csv_dataframe(self):
                               parse_dates=True)
         self.assert_numpy_array_equal(df.columns, ['A', 'B', 'C', 'D'])
         self.assertEqual(df.index.name, 'index')
-        self.assertIsInstance(df.index[0], (datetime, np.datetime64, Timestamp))
+        self.assertIsInstance(
+            df.index[0], (datetime, np.datetime64, Timestamp))
         self.assertEqual(df.values.dtype, np.float64)
         tm.assert_frame_equal(df, df2)
 
@@ -1096,8 +1118,10 @@ def test_read_csv_no_index_name(self):
         df2 = self.read_table(self.csv2, sep=',', index_col=0,
                               parse_dates=True)
         self.assert_numpy_array_equal(df.columns, ['A', 'B', 'C', 'D', 'E'])
-        self.assertIsInstance(df.index[0], (datetime, np.datetime64, Timestamp))
-        self.assertEqual(df.ix[:, ['A', 'B', 'C', 'D']].values.dtype, np.float64)
+        self.assertIsInstance(
+            df.index[0], (datetime, np.datetime64, Timestamp))
+        self.assertEqual(df.ix[:, ['A', 'B', 'C', 'D']
+                               ].values.dtype, np.float64)
         tm.assert_frame_equal(df, df2)
 
     def test_read_csv_infer_compression(self):
@@ -1109,7 +1133,7 @@ def test_read_csv_infer_compression(self):
 
         for f in inputs:
             df = self.read_csv(f, index_col=0, parse_dates=True,
-                compression='infer')
+                               compression='infer')
 
             tm.assert_frame_equal(expected, df)
 
@@ -1311,13 +1335,15 @@ def test_iterator(self):
 """
         reader = self.read_csv(StringIO(data), iterator=True)
         result = list(reader)
-        expected = DataFrame(dict(A = [1,4,7], B = [2,5,8], C = [3,6,9]), index=['foo','bar','baz'])
+        expected = DataFrame(dict(A=[1, 4, 7], B=[2, 5, 8], C=[
+                             3, 6, 9]), index=['foo', 'bar', 'baz'])
         tm.assert_frame_equal(result[0], expected)
 
         # chunksize = 1
         reader = self.read_csv(StringIO(data), chunksize=1)
         result = list(reader)
-        expected = DataFrame(dict(A = [1,4,7], B = [2,5,8], C = [3,6,9]), index=['foo','bar','baz'])
+        expected = DataFrame(dict(A=[1, 4, 7], B=[2, 5, 8], C=[
+                             3, 6, 9]), index=['foo', 'bar', 'baz'])
         self.assertEqual(len(result), 3)
         tm.assert_frame_equal(pd.concat(result), expected)
 
@@ -1340,7 +1366,8 @@ def test_header_not_first_line(self):
         tm.assert_frame_equal(df, expected)
 
     def test_header_multi_index(self):
-        expected = tm.makeCustomDataframe(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
+        expected = tm.makeCustomDataframe(
+            5, 3, r_idx_nlevels=2, c_idx_nlevels=4)
 
         data = """\
 C0,,C_l0_g0,C_l0_g1,C_l0_g2
@@ -1356,35 +1383,37 @@ def test_header_multi_index(self):
 R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
 """
 
-        df = self.read_csv(StringIO(data), header=[0, 1, 2, 3], index_col=[0, 1], tupleize_cols=False)
+        df = self.read_csv(StringIO(data), header=[0, 1, 2, 3], index_col=[
+                           0, 1], tupleize_cols=False)
         tm.assert_frame_equal(df, expected)
 
         # skipping lines in the header
-        df = self.read_csv(StringIO(data), header=[0, 1, 2, 3], index_col=[0, 1], tupleize_cols=False)
+        df = self.read_csv(StringIO(data), header=[0, 1, 2, 3], index_col=[
+                           0, 1], tupleize_cols=False)
         tm.assert_frame_equal(df, expected)
 
         #### invalid options ####
 
         # no as_recarray
-        self.assertRaises(ValueError, self.read_csv, StringIO(data), header=[0,1,2,3],
-                          index_col=[0,1], as_recarray=True, tupleize_cols=False)
+        self.assertRaises(ValueError, self.read_csv, StringIO(data), header=[0, 1, 2, 3],
+                          index_col=[0, 1], as_recarray=True, tupleize_cols=False)
 
         # names
-        self.assertRaises(ValueError, self.read_csv, StringIO(data), header=[0,1,2,3],
-                          index_col=[0,1], names=['foo','bar'], tupleize_cols=False)
+        self.assertRaises(ValueError, self.read_csv, StringIO(data), header=[0, 1, 2, 3],
+                          index_col=[0, 1], names=['foo', 'bar'], tupleize_cols=False)
 
         # usecols
-        self.assertRaises(ValueError, self.read_csv, StringIO(data), header=[0,1,2,3],
-                          index_col=[0,1], usecols=['foo','bar'], tupleize_cols=False)
+        self.assertRaises(ValueError, self.read_csv, StringIO(data), header=[0, 1, 2, 3],
+                          index_col=[0, 1], usecols=['foo', 'bar'], tupleize_cols=False)
 
         # non-numeric index_col
-        self.assertRaises(ValueError, self.read_csv, StringIO(data), header=[0,1,2,3],
-                          index_col=['foo','bar'], tupleize_cols=False)
+        self.assertRaises(ValueError, self.read_csv, StringIO(data), header=[0, 1, 2, 3],
+                          index_col=['foo', 'bar'], tupleize_cols=False)
 
     def test_header_multiindex_common_format(self):
 
-        df = DataFrame([[1,2,3,4,5,6],[7,8,9,10,11,12]],
-                       index=['one','two'],
-                       columns=MultiIndex.from_tuples([('a','q'),('a','r'),('a','s'),
-                                                       ('b','t'),('c','u'),('c','v')]))
+        df = DataFrame([[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12]],
+                       index=['one', 'two'],
+                       columns=MultiIndex.from_tuples([('a', 'q'), ('a', 'r'), ('a', 's'),
                                                        ('b', 't'), ('c', 'u'), ('c', 'v')]))
 
         # to_csv
         data = """,a,a,a,b,c,c
@@ -1393,8 +1422,8 @@ def test_header_multiindex_common_format(self):
 one,1,2,3,4,5,6
 two,7,8,9,10,11,12"""
 
-        result = self.read_csv(StringIO(data),header=[0,1],index_col=0)
-        tm.assert_frame_equal(df,result)
+        result = self.read_csv(StringIO(data), header=[0, 1], index_col=0)
+        tm.assert_frame_equal(df, result)
 
         # common
         data = """,a,a,a,b,c,c
@@ -1402,8 +1431,8 @@ def test_header_multiindex_common_format(self):
 one,1,2,3,4,5,6
 two,7,8,9,10,11,12"""
 
-        result = self.read_csv(StringIO(data),header=[0,1],index_col=0)
-        tm.assert_frame_equal(df,result)
+        result = self.read_csv(StringIO(data), header=[0, 1], index_col=0)
+        tm.assert_frame_equal(df, result)
 
         # common, no index_col
         data = """a,a,a,b,c,c
@@ -1411,15 +1440,16 @@ def test_header_multiindex_common_format(self):
 1,2,3,4,5,6
 7,8,9,10,11,12"""
 
-        result = self.read_csv(StringIO(data),header=[0,1],index_col=None)
-        tm.assert_frame_equal(df.reset_index(drop=True),result)
+        result = self.read_csv(StringIO(data), header=[0, 1], index_col=None)
+        tm.assert_frame_equal(df.reset_index(drop=True), result)
 
         # malformed case 1
         expected = DataFrame(np.array([[2, 3, 4, 5, 6], [8, 9, 10, 11, 12]],
                                       dtype='int64'), index=Index([1, 7]),
                              columns=MultiIndex(levels=[[u('a'), u('b'), u('c')],
                                                         [u('r'), u('s'), u('t'), u('u'), u('v')]],
-                                                labels=[[0, 0, 1, 2, 2], [0, 1, 2, 3, 4]],
+                                                labels=[[0, 0, 1, 2, 2], [
+                                                    0, 1, 2, 3, 4]],
                                                 names=[u('a'), u('q')]))
 
         data = """a,a,a,b,c,c
@@ -1427,15 +1457,16 @@ def test_header_multiindex_common_format(self):
 1,2,3,4,5,6
 7,8,9,10,11,12"""
 
-        result = self.read_csv(StringIO(data),header=[0,1],index_col=0)
-        tm.assert_frame_equal(expected,result)
+        result = self.read_csv(StringIO(data), header=[0, 1], index_col=0)
+        tm.assert_frame_equal(expected, result)
 
         # malformed case 2
         expected = DataFrame(np.array([[2, 3, 4, 5, 6], [8, 9, 10, 11, 12]],
                                       dtype='int64'), index=Index([1, 7]),
                              columns=MultiIndex(levels=[[u('a'), u('b'), u('c')],
                                                         [u('r'), u('s'), u('t'), u('u'), u('v')]],
-                                                labels=[[0, 0, 1, 2, 2], [0, 1, 2, 3, 4]],
+                                                labels=[[0, 0, 1, 2, 2], [
+                                                    0, 1, 2, 3, 4]],
                                                 names=[None, u('q')]))
 
         data = """,a,a,b,c,c
@@ -1443,16 +1474,17 @@ def test_header_multiindex_common_format(self):
 1,2,3,4,5,6
 7,8,9,10,11,12"""
 
-        result = self.read_csv(StringIO(data),header=[0,1],index_col=0)
-        tm.assert_frame_equal(expected,result)
+        result = self.read_csv(StringIO(data), header=[0, 1], index_col=0)
+        tm.assert_frame_equal(expected, result)
 
         # mi on columns and index (malformed)
-        expected = DataFrame(np.array([[ 3, 4, 5, 6],
-                                       [ 9, 10, 11, 12]], dtype='int64'),
+        expected = DataFrame(np.array([[3, 4, 5, 6],
+                                       [9, 10, 11, 12]], dtype='int64'),
                              index=MultiIndex(levels=[[1, 7], [2, 8]],
                                               labels=[[0, 1], [0, 1]]),
                              columns=MultiIndex(levels=[[u('a'), u('b'), u('c')],
                                                         [u('s'), u('t'), u('u'), u('v')]],
-                                                labels=[[0, 1, 2, 2], [0, 1, 2, 3]],
+                                                labels=[[0, 1, 2, 2],
+                                                        [0, 1, 2, 3]],
                                                 names=[None, u('q')]))
 
         data = """,a,a,b,c,c
@@ -1460,8 +1492,8 @@ def test_header_multiindex_common_format(self):
 1,2,3,4,5,6
 7,8,9,10,11,12"""
 
-        result = self.read_csv(StringIO(data),header=[0,1],index_col=[0, 1])
-        tm.assert_frame_equal(expected,result)
+        result = self.read_csv(StringIO(data), header=[0, 1], index_col=[0, 1])
+        tm.assert_frame_equal(expected, result)
 
     def test_pass_names_with_index(self):
         lines = self.data1.split('\n')
@@ -1795,7 +1827,7 @@ def test_na_value_dict(self):
     def test_url(self):
         # HTTP(S)
         url = ('https://raw.github.com/pydata/pandas/master/'
-                'pandas/io/tests/data/salary.table')
+               'pandas/io/tests/data/salary.table')
         url_table = self.read_table(url)
         dirpath = tm.get_data_path()
         localtable = os.path.join(dirpath, 'salary.table')
@@ -1865,7 +1897,7 @@ def test_multiple_date_cols_chunked(self):
         df = self.read_csv(StringIO(self.ts_data), parse_dates={
                            'nominal': [1, 2]}, index_col='nominal')
         reader = self.read_csv(StringIO(self.ts_data), parse_dates={'nominal':
-                               [1, 2]}, index_col='nominal', chunksize=2)
+                                                                    [1, 2]}, index_col='nominal', chunksize=2)
 
         chunks = list(reader)
 
@@ -2151,24 +2183,31 @@ def test_usecols_index_col_conflict(self):
 10000,2013-5-11,100,10,1
 500,2013-5-12,101,11,1
 """
-        expected = DataFrame({'Price': [100, 101]}, index=[datetime(2013, 5, 11), datetime(2013, 5, 12)])
+        expected = DataFrame({'Price': [100, 101]}, index=[
+                             datetime(2013, 5, 11), datetime(2013, 5, 12)])
         expected.index.name = 'Time'
 
-        df = self.read_csv(StringIO(data), usecols=['Time', 'Price'], parse_dates=True, index_col=0)
+        df = self.read_csv(StringIO(data), usecols=[
+                           'Time', 'Price'], parse_dates=True, index_col=0)
         tm.assert_frame_equal(expected, df)
 
-        df = self.read_csv(StringIO(data), usecols=['Time', 'Price'], parse_dates=True, index_col='Time')
+        df = self.read_csv(StringIO(data), usecols=[
+                           'Time', 'Price'], parse_dates=True, index_col='Time')
        tm.assert_frame_equal(expected, df)
 
-        df = self.read_csv(StringIO(data), usecols=[1, 2], parse_dates=True, index_col='Time')
+        df = self.read_csv(StringIO(data), usecols=[
+                           1, 2], parse_dates=True, index_col='Time')
         tm.assert_frame_equal(expected, df)
 
-        df = self.read_csv(StringIO(data), usecols=[1, 2], parse_dates=True, index_col=0)
+        df = self.read_csv(StringIO(data), usecols=[
+                           1, 2], parse_dates=True, index_col=0)
         tm.assert_frame_equal(expected, df)
 
-        expected = DataFrame({'P3': [1, 1], 'Price': (100, 101), 'P2': (10, 11)})
+        expected = DataFrame(
+            {'P3': [1, 1], 'Price': (100, 101), 'P2': (10, 11)})
         expected = expected.set_index(['Price', 'P2'])
-        df = self.read_csv(StringIO(data), usecols=['Price', 'P2', 'P3'], parse_dates=True, index_col=['Price', 'P2'])
+        df = self.read_csv(StringIO(data), usecols=[
+                           'Price', 'P2', 'P3'], parse_dates=True, index_col=['Price', 'P2'])
         tm.assert_frame_equal(expected, df)
 
    def test_chunks_have_consistent_numerical_type(self):
@@ -2177,7 +2216,8 @@ def test_chunks_have_consistent_numerical_type(self):
 
         with tm.assert_produces_warning(False):
             df = self.read_csv(StringIO(data))
-        self.assertTrue(type(df.a[0]) is np.float64)  # Assert that types were coerced.
+        # Assert that types were coerced.
+        self.assertTrue(type(df.a[0]) is np.float64)
         self.assertEqual(df.a.dtype, np.float)
 
     def test_warn_if_chunks_have_mismatched_type(self):
@@ -2230,7 +2270,6 @@ def test_usecols(self):
                                 header=None, usecols=['b', 'c'])
         tm.assert_frame_equal(result2, result)
 
-
         # 5766
         result = self.read_csv(StringIO(data), names=['a', 'b'],
                                header=None, usecols=[0, 1])
@@ -2261,19 +2300,20 @@ def test_catch_too_many_names(self):
 4,,6
 7,8,9
 10,11,12\n"""
-        tm.assertRaises(Exception, read_csv, StringIO(data), header=0, names=['a', 'b', 'c', 'd'])
+        tm.assertRaises(Exception, read_csv, StringIO(data),
+                        header=0, names=['a', 'b', 'c', 'd'])
 
     def test_ignore_leading_whitespace(self):
         # GH 6607, GH 3374
         data = ' a b c\n 1 2 3\n 4 5 6\n 7 8 9'
         result = self.read_table(StringIO(data), sep='\s+')
-        expected = DataFrame({'a':[1,4,7], 'b':[2,5,8], 'c': [3,6,9]})
+        expected = DataFrame({'a': [1, 4, 7], 'b': [2, 5, 8], 'c': [3, 6, 9]})
         tm.assert_frame_equal(result, expected)
 
     def test_nrows_and_chunksize_raises_notimplemented(self):
         data = 'a b c'
         self.assertRaises(NotImplementedError, self.read_csv, StringIO(data),
-                         nrows=10, chunksize=5)
+                          nrows=10, chunksize=5)
 
     def test_single_char_leading_whitespace(self):
         # GH 9710
@@ -2284,7 +2324,7 @@ def test_single_char_leading_whitespace(self):
 a
 b\n"""
 
-        expected = DataFrame({'MyColumn' : list('abab')})
+        expected = DataFrame({'MyColumn': list('abab')})
         result = self.read_csv(StringIO(data), skipinitialspace=True)
         tm.assert_frame_equal(result, expected)
 
@@ -2329,27 +2369,37 @@ def test_empty_index_col_scenarios(self):
 
         # None, no index
         index_col, expected = None, DataFrame([], columns=list('xyz')),
-        tm.assert_frame_equal(self.read_csv(StringIO(data), index_col=index_col), expected)
+        tm.assert_frame_equal(self.read_csv(
+            StringIO(data), index_col=index_col), expected)
 
         # False, no index
         index_col, expected = False, DataFrame([], columns=list('xyz')),
-        tm.assert_frame_equal(self.read_csv(StringIO(data), index_col=index_col), expected)
+
tm.assert_frame_equal(self.read_csv( + StringIO(data), index_col=index_col), expected) # int, first column - index_col, expected = 0, DataFrame([], columns=['y', 'z'], index=Index([], name='x')) - tm.assert_frame_equal(self.read_csv(StringIO(data), index_col=index_col), expected) + index_col, expected = 0, DataFrame( + [], columns=['y', 'z'], index=Index([], name='x')) + tm.assert_frame_equal(self.read_csv( + StringIO(data), index_col=index_col), expected) # int, not first column - index_col, expected = 1, DataFrame([], columns=['x', 'z'], index=Index([], name='y')) - tm.assert_frame_equal(self.read_csv(StringIO(data), index_col=index_col), expected) + index_col, expected = 1, DataFrame( + [], columns=['x', 'z'], index=Index([], name='y')) + tm.assert_frame_equal(self.read_csv( + StringIO(data), index_col=index_col), expected) # str, first column - index_col, expected = 'x', DataFrame([], columns=['y', 'z'], index=Index([], name='x')) - tm.assert_frame_equal(self.read_csv(StringIO(data), index_col=index_col), expected) + index_col, expected = 'x', DataFrame( + [], columns=['y', 'z'], index=Index([], name='x')) + tm.assert_frame_equal(self.read_csv( + StringIO(data), index_col=index_col), expected) # str, not the first column - index_col, expected = 'y', DataFrame([], columns=['x', 'z'], index=Index([], name='y')) - tm.assert_frame_equal(self.read_csv(StringIO(data), index_col=index_col), expected) + index_col, expected = 'y', DataFrame( + [], columns=['x', 'z'], index=Index([], name='y')) + tm.assert_frame_equal(self.read_csv( + StringIO(data), index_col=index_col), expected) # list of int index_col, expected = [0, 1], DataFrame([], columns=['z'], @@ -2359,19 +2409,22 @@ def test_empty_index_col_scenarios(self): # list of str index_col = ['x', 'y'] - expected = DataFrame([], columns=['z'], index=MultiIndex.from_arrays([[]] * 2, names=['x', 'y'])) + expected = DataFrame([], columns=['z'], index=MultiIndex.from_arrays( + [[]] * 2, names=['x', 'y'])) 
tm.assert_frame_equal(self.read_csv(StringIO(data), index_col=index_col), expected, check_index_type=False) # list of int, reversed sequence index_col = [1, 0] - expected = DataFrame([], columns=['z'], index=MultiIndex.from_arrays([[]] * 2, names=['y', 'x'])) + expected = DataFrame([], columns=['z'], index=MultiIndex.from_arrays( + [[]] * 2, names=['y', 'x'])) tm.assert_frame_equal(self.read_csv(StringIO(data), index_col=index_col), expected, check_index_type=False) # list of str, reversed sequence index_col = ['y', 'x'] - expected = DataFrame([], columns=['z'], index=MultiIndex.from_arrays([[]] * 2, names=['y', 'x'])) + expected = DataFrame([], columns=['z'], index=MultiIndex.from_arrays( + [[]] * 2, names=['y', 'x'])) tm.assert_frame_equal(self.read_csv(StringIO(data), index_col=index_col), expected, check_index_type=False) @@ -2403,8 +2456,8 @@ def test_int64_overflow(self): self.assertTrue(result['ID'].dtype == object) self.assertRaises((OverflowError, pandas.parser.OverflowError), - self.read_csv, StringIO(data), - converters={'ID' : np.int64}) + self.read_csv, StringIO(data), + converters={'ID': np.int64}) # Just inside int64 range: parse as integer i_max = np.iinfo(np.int64).max @@ -2434,16 +2487,19 @@ def test_empty_with_nrows_chunksize(self): result = pd.read_csv(StringIO('foo,bar\n'), nrows=10, as_recarray=True) result = pd.DataFrame(result[2], columns=result[1], index=result[0]) - tm.assert_frame_equal(pd.DataFrame.from_records(result), expected, check_index_type=False) + tm.assert_frame_equal(pd.DataFrame.from_records( + result), expected, check_index_type=False) - result = next(iter(pd.read_csv(StringIO('foo,bar\n'), chunksize=10, as_recarray=True))) + result = next( + iter(pd.read_csv(StringIO('foo,bar\n'), chunksize=10, as_recarray=True))) result = pd.DataFrame(result[2], columns=result[1], index=result[0]) - tm.assert_frame_equal(pd.DataFrame.from_records(result), expected, check_index_type=False) + tm.assert_frame_equal(pd.DataFrame.from_records( + 
result), expected, check_index_type=False) def test_eof_states(self): # GH 10728 and 10548 - ## With skip_blank_lines = True + # With skip_blank_lines = True expected = pd.DataFrame([[4, 5, 6]], columns=['a', 'b', 'c']) # GH 10728 @@ -2473,43 +2529,49 @@ def test_eof_states(self): result = self.read_csv(StringIO(data), skiprows=[2]) tm.assert_frame_equal(result, expected) - ## With skip_blank_lines = False + # With skip_blank_lines = False # EAT_LINE_COMMENT data = 'a,b,c\n4,5,6\n#comment' - result = self.read_csv(StringIO(data), comment='#', skip_blank_lines=False) + result = self.read_csv( + StringIO(data), comment='#', skip_blank_lines=False) expected = pd.DataFrame([[4, 5, 6]], columns=['a', 'b', 'c']) tm.assert_frame_equal(result, expected) # IN_FIELD data = 'a,b,c\n4,5,6\n ' result = self.read_csv(StringIO(data), skip_blank_lines=False) - expected = pd.DataFrame([['4', 5, 6], [' ', None, None]], columns=['a', 'b', 'c']) + expected = pd.DataFrame( + [['4', 5, 6], [' ', None, None]], columns=['a', 'b', 'c']) tm.assert_frame_equal(result, expected) # EAT_CRNL data = 'a,b,c\n4,5,6\n\r' result = self.read_csv(StringIO(data), skip_blank_lines=False) - expected = pd.DataFrame([[4, 5, 6], [None, None, None]], columns=['a', 'b', 'c']) + expected = pd.DataFrame( + [[4, 5, 6], [None, None, None]], columns=['a', 'b', 'c']) tm.assert_frame_equal(result, expected) - ## Should produce exceptions + # Should produce exceptions # ESCAPED_CHAR data = "a,b,c\n4,5,6\n\\" - self.assertRaises(Exception, self.read_csv, StringIO(data), escapechar='\\') + self.assertRaises(Exception, self.read_csv, + StringIO(data), escapechar='\\') # ESCAPE_IN_QUOTED_FIELD data = 'a,b,c\n4,5,6\n"\\' - self.assertRaises(Exception, self.read_csv, StringIO(data), escapechar='\\') + self.assertRaises(Exception, self.read_csv, + StringIO(data), escapechar='\\') # IN_QUOTED_FIELD data = 'a,b,c\n4,5,6\n"' - self.assertRaises(Exception, self.read_csv, StringIO(data), escapechar='\\') - + 
self.assertRaises(Exception, self.read_csv, + StringIO(data), escapechar='\\') class TestPythonParser(ParserTests, tm.TestCase): + def test_negative_skipfooter_raises(self): text = """#foo,a,b,c #foo,a,b,c @@ -2718,14 +2780,13 @@ def test_fwf_colspecs_None(self): expected = DataFrame([[123456, 456], [456789, 789]]) tm.assert_frame_equal(result, expected) - def test_fwf_regression(self): # GH 3594 - #### turns out 'T060' is parsable as a datetime slice! + # turns out 'T060' is parsable as a datetime slice! - tzlist = [1,10,20,30,60,80,100] + tzlist = [1, 10, 20, 30, 60, 80, 100] ntz = len(tzlist) - tcolspecs = [16]+[8]*ntz + tcolspecs = [16] + [8] * ntz tcolnames = ['SST'] + ["T%03d" % z for z in tzlist[1:]] data = """ 2009164202000 9.5403 9.4105 8.6571 7.8372 6.0612 5.8843 5.5192 2009164203000 9.5435 9.2010 8.6167 7.8176 6.0804 5.8728 5.4869 @@ -2740,27 +2801,28 @@ def test_fwf_regression(self): names=tcolnames, widths=tcolspecs, parse_dates=True, - date_parser=lambda s: datetime.strptime(s,'%Y%j%H%M%S')) + date_parser=lambda s: datetime.strptime(s, '%Y%j%H%M%S')) for c in df.columns: - res = df.loc[:,c] + res = df.loc[:, c] self.assertTrue(len(res)) def test_fwf_for_uint8(self): data = """1421302965.213420 PRI=3 PGN=0xef00 DST=0x17 SRC=0x28 04 154 00 00 00 00 00 127 1421302964.226776 PRI=6 PGN=0xf002 SRC=0x47 243 00 00 255 247 00 00 71""" df = read_fwf(StringIO(data), - colspecs=[(0,17),(25,26),(33,37),(49,51),(58,62),(63,1000)], - names=['time','pri','pgn','dst','src','data'], - converters={ - 'pgn':lambda x: int(x,16), - 'src':lambda x: int(x,16), - 'dst':lambda x: int(x,16), - 'data':lambda x: len(x.split(' '))}) - - expected = DataFrame([[1421302965.213420,3,61184,23,40,8], - [1421302964.226776,6,61442,None, 71,8]], - columns = ["time", "pri", "pgn", "dst", "src","data"]) + colspecs=[(0, 17), (25, 26), (33, 37), + (49, 51), (58, 62), (63, 1000)], + names=['time', 'pri', 'pgn', 'dst', 'src', 'data'], + converters={ + 'pgn': lambda x: int(x, 16), + 'src': lambda 
x: int(x, 16), + 'dst': lambda x: int(x, 16), + 'data': lambda x: len(x.split(' '))}) + + expected = DataFrame([[1421302965.213420, 3, 61184, 23, 40, 8], + [1421302964.226776, 6, 61442, None, 71, 8]], + columns=["time", "pri", "pgn", "dst", "src", "data"]) expected["dst"] = expected["dst"].astype(object) tm.assert_frame_equal(df, expected) @@ -2792,14 +2854,16 @@ def test_fwf_compression(self): def test_BytesIO_input(self): if not compat.PY3: - raise nose.SkipTest("Bytes-related test - only needs to work on Python 3") - result = pd.read_fwf(BytesIO("שלום\nשלום".encode('utf8')), widths=[2,2], encoding='utf8') + raise nose.SkipTest( + "Bytes-related test - only needs to work on Python 3") + result = pd.read_fwf(BytesIO("שלום\nשלום".encode('utf8')), widths=[ + 2, 2], encoding='utf8') expected = pd.DataFrame([["של", "ום"]], columns=["של", "ום"]) tm.assert_frame_equal(result, expected) data = BytesIO("שלום::1234\n562::123".encode('cp1255')) result = pd.read_table(data, sep="::", engine='python', encoding='cp1255') - expected = pd.DataFrame([[562, 123]], columns=["שלום","1234"]) + expected = pd.DataFrame([[562, 123]], columns=["שלום", "1234"]) tm.assert_frame_equal(result, expected) def test_verbose_import(self): @@ -2819,7 +2883,8 @@ def test_verbose_import(self): try: # it works! df = self.read_csv(StringIO(text), verbose=True) - self.assertEqual(buf.getvalue(), 'Filled 3 NA values in column a\n') + self.assertEqual( + buf.getvalue(), 'Filled 3 NA values in column a\n') finally: sys.stdout = sys.__stdout__ @@ -2839,19 +2904,22 @@ def test_verbose_import(self): try: # it works! 
df = self.read_csv(StringIO(text), verbose=True, index_col=0) - self.assertEqual(buf.getvalue(), 'Filled 1 NA values in column a\n') + self.assertEqual( + buf.getvalue(), 'Filled 1 NA values in column a\n') finally: sys.stdout = sys.__stdout__ def test_float_precision_specified(self): - # Should raise an error if float_precision (C parser option) is specified + # Should raise an error if float_precision (C parser option) is + # specified with tm.assertRaisesRegexp(ValueError, "The 'float_precision' option " "is not supported with the 'python' engine"): self.read_csv(StringIO('a,b,c\n1,2,3'), float_precision='high') def test_iteration_open_handle(self): if PY3: - raise nose.SkipTest("won't work in Python 3 {0}".format(sys.version_info)) + raise nose.SkipTest( + "won't work in Python 3 {0}".format(sys.version_info)) with tm.ensure_clean() as path: with open(path, 'wb') as f: @@ -2923,13 +2991,15 @@ def test_iterator(self): """ reader = self.read_csv(StringIO(data), iterator=True) result = list(reader) - expected = DataFrame(dict(A = [1,4,7], B = [2,5,8], C = [3,6,9]), index=['foo','bar','baz']) + expected = DataFrame(dict(A=[1, 4, 7], B=[2, 5, 8], C=[ + 3, 6, 9]), index=['foo', 'bar', 'baz']) tm.assert_frame_equal(result[0], expected) # chunksize = 1 reader = self.read_csv(StringIO(data), chunksize=1) result = list(reader) - expected = DataFrame(dict(A = [1,4,7], B = [2,5,8], C = [3,6,9]), index=['foo','bar','baz']) + expected = DataFrame(dict(A=[1, 4, 7], B=[2, 5, 8], C=[ + 3, 6, 9]), index=['foo', 'bar', 'baz']) self.assertEqual(len(result), 3) tm.assert_frame_equal(pd.concat(result), expected) @@ -3124,8 +3194,8 @@ def test_read_table_buglet_4x_multiindex(self): # GH 6893 data = ' A B C\na b c\n1 3 7 0 3 6\n3 1 4 1 5 9' - expected = DataFrame.from_records([(1,3,7,0,3,6), (3,1,4,1,5,9)], - columns=list('abcABC'), index=list('abc')) + expected = DataFrame.from_records([(1, 3, 7, 0, 3, 6), (3, 1, 4, 1, 5, 9)], + columns=list('abcABC'), index=list('abc')) actual = 
self.read_table(StringIO(data), sep='\s+') tm.assert_frame_equal(actual, expected) @@ -3177,12 +3247,13 @@ def test_whitespace_lines(self): 5.,NaN,10.0 """ expected = [[1, 2., 4.], - [5., np.nan, 10.]] + [5., np.nan, 10.]] df = self.read_csv(StringIO(data)) tm.assert_almost_equal(df.values, expected) class TestFwfColspaceSniffing(tm.TestCase): + def test_full_file(self): # File with all values test = '''index A B C @@ -3270,7 +3341,8 @@ def test_multiple_delimiters(self): def test_variable_width_unicode(self): if not compat.PY3: - raise nose.SkipTest('Bytes-related test - only needs to work on Python 3') + raise nose.SkipTest( + 'Bytes-related test - only needs to work on Python 3') test = ''' שלום שלום ום שלל @@ -3298,7 +3370,8 @@ def read_table(self, *args, **kwds): def test_compact_ints(self): if compat.is_platform_windows(): - raise nose.SkipTest("segfaults on win-64, only when all tests are run") + raise nose.SkipTest( + "segfaults on win-64, only when all tests are run") data = ('0,1,0,0\n' '1,1,0,0\n' @@ -3322,7 +3395,8 @@ def test_parse_dates_empty_string(self): self.assertTrue(result['Date'].isnull()[1]) def test_usecols(self): - raise nose.SkipTest("Usecols is not supported in C High Memory engine.") + raise nose.SkipTest( + "Usecols is not supported in C High Memory engine.") def test_line_comment(self): data = """# empty @@ -3337,11 +3411,11 @@ def test_line_comment(self): tm.assert_almost_equal(df.values, expected) # check with delim_whitespace=True df = self.read_csv(StringIO(data.replace(',', ' ')), comment='#', - delim_whitespace=True) + delim_whitespace=True) tm.assert_almost_equal(df.values, expected) # check with custom line terminator df = self.read_csv(StringIO(data.replace('\n', '*')), comment='#', - lineterminator='*') + lineterminator='*') tm.assert_almost_equal(df.values, expected) def test_comment_skiprows(self): @@ -3360,7 +3434,7 @@ def test_comment_skiprows(self): tm.assert_almost_equal(df.values, expected) def 
test_skiprows_lineterminator(self): - #GH #9079 + # GH #9079 data = '\n'.join(['SMOSMANIA ThetaProbe-ML2X ', '2007/01/01 01:00 0.2140 U M ', '2007/01/01 02:00 0.2141 M O ', @@ -3386,23 +3460,24 @@ def test_skiprows_lineterminator(self): def test_trailing_spaces(self): data = "A B C \nrandom line with trailing spaces \nskip\n1,2,3\n1,2.,4.\nrandom line with trailing tabs\t\t\t\n \n5.1,NaN,10.0\n" expected = pd.DataFrame([[1., 2., 4.], - [5.1, np.nan, 10.]]) + [5.1, np.nan, 10.]]) # this should ignore six lines including lines with trailing # whitespace and blank lines. issues 8661, 8679 df = self.read_csv(StringIO(data.replace(',', ' ')), header=None, delim_whitespace=True, - skiprows=[0,1,2,3,5,6], skip_blank_lines=True) + skiprows=[0, 1, 2, 3, 5, 6], skip_blank_lines=True) tm.assert_frame_equal(df, expected) df = self.read_table(StringIO(data.replace(',', ' ')), header=None, delim_whitespace=True, - skiprows=[0,1,2,3,5,6], skip_blank_lines=True) + skiprows=[0, 1, 2, 3, 5, 6], skip_blank_lines=True) tm.assert_frame_equal(df, expected) - # test skipping set of rows after a row with trailing spaces, issue #8983 - expected = pd.DataFrame({"A":[1., 5.1], "B":[2., np.nan], - "C":[4., 10]}) + # test skipping set of rows after a row with trailing spaces, issue + # #8983 + expected = pd.DataFrame({"A": [1., 5.1], "B": [2., np.nan], + "C": [4., 10]}) df = self.read_table(StringIO(data.replace(',', ' ')), delim_whitespace=True, - skiprows=[1,2,3,5,6], skip_blank_lines=True) + skiprows=[1, 2, 3, 5, 6], skip_blank_lines=True) tm.assert_frame_equal(df, expected) def test_comment_header(self): @@ -3473,7 +3548,7 @@ def test_whitespace_lines(self): 5.,NaN,10.0 """ expected = [[1, 2., 4.], - [5., np.nan, 10.]] + [5., np.nan, 10.]] df = self.read_csv(StringIO(data)) tm.assert_almost_equal(df.values, expected) @@ -3482,7 +3557,8 @@ def test_passing_dtype(self): # This is a copy which should eventually be merged into ParserTests # when the dtype argument is supported by all engines. 
- df = DataFrame(np.random.rand(5,2),columns=list('AB'),index=['1A','1B','1C','1D','1E']) + df = DataFrame(np.random.rand(5, 2), columns=list( + 'AB'), index=['1A', '1B', '1C', '1D', '1E']) with tm.ensure_clean('__passing_str_as_dtype__.csv') as path: df.to_csv(path) @@ -3490,24 +3566,26 @@ def test_passing_dtype(self): # GH 3795 # passing 'str' as the dtype result = self.read_csv(path, dtype=str, index_col=0) - tm.assert_series_equal(result.dtypes,Series({ 'A' : 'object', 'B' : 'object' })) + tm.assert_series_equal(result.dtypes, Series( + {'A': 'object', 'B': 'object'})) - # we expect all object columns, so need to convert to test for equivalence + # we expect all object columns, so need to convert to test for + # equivalence result = result.astype(float) - tm.assert_frame_equal(result,df) + tm.assert_frame_equal(result, df) # invalid dtype - self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'foo', 'B' : 'float64' }, + self.assertRaises(TypeError, self.read_csv, path, dtype={'A': 'foo', 'B': 'float64'}, index_col=0) # valid but we don't support it (date) - self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'datetime64', 'B' : 'float64' }, + self.assertRaises(TypeError, self.read_csv, path, dtype={'A': 'datetime64', 'B': 'float64'}, index_col=0) - self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'datetime64', 'B' : 'float64' }, + self.assertRaises(TypeError, self.read_csv, path, dtype={'A': 'datetime64', 'B': 'float64'}, index_col=0, parse_dates=['B']) # valid but we don't support it - self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'timedelta64', 'B' : 'float64' }, + self.assertRaises(TypeError, self.read_csv, path, dtype={'A': 'timedelta64', 'B': 'float64'}, index_col=0) def test_dtype_and_names_error(self): @@ -3521,17 +3599,20 @@ def test_dtype_and_names_error(self): 3.0 3 """ # base cases - result = self.read_csv(StringIO(data),sep='\s+',header=None) - expected = DataFrame([[1.0,1],[2.0,2],[3.0,3]]) + 
result = self.read_csv(StringIO(data), sep='\s+', header=None) + expected = DataFrame([[1.0, 1], [2.0, 2], [3.0, 3]]) tm.assert_frame_equal(result, expected) - result = self.read_csv(StringIO(data),sep='\s+',header=None,names=['a','b']) - expected = DataFrame([[1.0,1],[2.0,2],[3.0,3]],columns=['a','b']) + result = self.read_csv(StringIO(data), sep='\s+', + header=None, names=['a', 'b']) + expected = DataFrame( + [[1.0, 1], [2.0, 2], [3.0, 3]], columns=['a', 'b']) tm.assert_frame_equal(result, expected) # fallback casting - result = self.read_csv(StringIO(data),sep='\s+',header=None,names=['a','b'],dtype={'a' : np.int32}) - expected = DataFrame([[1,1],[2,2],[3,3]],columns=['a','b']) + result = self.read_csv(StringIO( + data), sep='\s+', header=None, names=['a', 'b'], dtype={'a': np.int32}) + expected = DataFrame([[1, 1], [2, 2], [3, 3]], columns=['a', 'b']) expected['a'] = expected['a'].astype(np.int32) tm.assert_frame_equal(result, expected) @@ -3542,7 +3623,8 @@ def test_dtype_and_names_error(self): """ # fallback casting, but not castable with tm.assertRaisesRegexp(ValueError, 'cannot safely convert'): - self.read_csv(StringIO(data),sep='\s+',header=None,names=['a','b'],dtype={'a' : np.int32}) + self.read_csv(StringIO(data), sep='\s+', header=None, + names=['a', 'b'], dtype={'a': np.int32}) def test_fallback_to_python(self): # GH 6607 @@ -3551,13 +3633,12 @@ def test_fallback_to_python(self): # specify C engine with unsupported options (raise) with tm.assertRaisesRegexp(ValueError, 'does not support'): self.read_table(StringIO(data), engine='c', sep=None, - delim_whitespace=False) + delim_whitespace=False) with tm.assertRaisesRegexp(ValueError, 'does not support'): self.read_table(StringIO(data), engine='c', sep='\s') with tm.assertRaisesRegexp(ValueError, 'does not support'): self.read_table(StringIO(data), engine='c', skip_footer=1) - def test_buffer_overflow(self): # GH9205 # test certain malformed input files that cause buffer overflows in @@ -3569,7 +3650,8 
@@ def test_buffer_overflow(self): try: df = self.read_table(StringIO(malf)) except Exception as cperr: - self.assertIn('Buffer overflow caught - possible malformed input file.', str(cperr)) + self.assertIn( + 'Buffer overflow caught - possible malformed input file.', str(cperr)) def test_single_char_leading_whitespace(self): # GH 9710 @@ -3580,7 +3662,7 @@ def test_single_char_leading_whitespace(self): a b\n""" - expected = DataFrame({'MyColumn' : list('abab')}) + expected = DataFrame({'MyColumn': list('abab')}) result = self.read_csv(StringIO(data), delim_whitespace=True, skipinitialspace=True) @@ -3590,6 +3672,7 @@ def test_single_char_leading_whitespace(self): skipinitialspace=True) tm.assert_frame_equal(result, expected) + class TestCParserLowMemory(ParserTests, tm.TestCase): def read_csv(self, *args, **kwds): @@ -3617,14 +3700,15 @@ def test_compact_ints(self): self.assertEqual(result.to_records(index=False).dtype, ex_dtype) result = read_csv(StringIO(data), delimiter=',', header=None, - compact_ints=True, + compact_ints=True, use_unsigned=True) ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)]) self.assertEqual(result.to_records(index=False).dtype, ex_dtype) def test_compact_ints_as_recarray(self): if compat.is_platform_windows(): - raise nose.SkipTest("segfaults on win-64, only when all tests are run") + raise nose.SkipTest( + "segfaults on win-64, only when all tests are run") data = ('0,1,0,0\n' '1,1,0,0\n' @@ -3647,17 +3731,21 @@ def test_precise_conversion(self): from decimal import Decimal normal_errors = [] precise_errors = [] - for num in np.linspace(1., 2., num=500): # test numbers between 1 and 2 - text = 'a\n{0:.25}'.format(num) # 25 decimal digits of precision + for num in np.linspace(1., 2., num=500): # test numbers between 1 and 2 + text = 'a\n{0:.25}'.format(num) # 25 decimal digits of precision normal_val = float(self.read_csv(StringIO(text))['a'][0]) - precise_val = float(self.read_csv(StringIO(text), float_precision='high')['a'][0]) - 
roundtrip_val = float(self.read_csv(StringIO(text), float_precision='round_trip')['a'][0]) + precise_val = float(self.read_csv( + StringIO(text), float_precision='high')['a'][0]) + roundtrip_val = float(self.read_csv( + StringIO(text), float_precision='round_trip')['a'][0]) actual_val = Decimal(text[2:]) + def error(val): return abs(Decimal('{0:.100}'.format(val)) - actual_val) normal_errors.append(error(normal_val)) precise_errors.append(error(precise_val)) - self.assertEqual(roundtrip_val, float(text[2:])) # round-trip should match float() + # round-trip should match float() + self.assertEqual(roundtrip_val, float(text[2:])) self.assertTrue(sum(precise_errors) <= sum(normal_errors)) self.assertTrue(max(precise_errors) <= max(normal_errors)) @@ -3682,7 +3770,8 @@ def test_pass_dtype_as_recarray(self): 4,5.5""" if compat.is_platform_windows(): - raise nose.SkipTest("segfaults on win-64, only when all tests are run") + raise nose.SkipTest( + "segfaults on win-64, only when all tests are run") result = self.read_csv(StringIO(data), dtype={'one': 'u1', 1: 'S1'}, as_recarray=True) @@ -3713,35 +3802,42 @@ def test_empty_with_multiindex_pass_dtype(self): exp_idx = MultiIndex.from_arrays([np.empty(0, dtype='u1'), np.empty(0, dtype='O')], names=['one', 'two']) - expected = DataFrame({'three': np.empty(0, dtype=np.object)}, index=exp_idx) + expected = DataFrame( + {'three': np.empty(0, dtype=np.object)}, index=exp_idx) tm.assert_frame_equal(result, expected, check_index_type=False) def test_empty_with_mangled_column_pass_dtype_by_names(self): data = 'one,one' - result = self.read_csv(StringIO(data), dtype={'one': 'u1', 'one.1': 'f'}) + result = self.read_csv(StringIO(data), dtype={ + 'one': 'u1', 'one.1': 'f'}) - expected = DataFrame({'one': np.empty(0, dtype='u1'), 'one.1': np.empty(0, dtype='f')}) + expected = DataFrame( + {'one': np.empty(0, dtype='u1'), 'one.1': np.empty(0, dtype='f')}) tm.assert_frame_equal(result, expected, check_index_type=False) def 
test_empty_with_mangled_column_pass_dtype_by_indexes(self): data = 'one,one' result = self.read_csv(StringIO(data), dtype={0: 'u1', 1: 'f'}) - expected = DataFrame({'one': np.empty(0, dtype='u1'), 'one.1': np.empty(0, dtype='f')}) + expected = DataFrame( + {'one': np.empty(0, dtype='u1'), 'one.1': np.empty(0, dtype='f')}) tm.assert_frame_equal(result, expected, check_index_type=False) def test_empty_with_dup_column_pass_dtype_by_names(self): data = 'one,one' - result = self.read_csv(StringIO(data), mangle_dupe_cols=False, dtype={'one': 'u1'}) + result = self.read_csv( + StringIO(data), mangle_dupe_cols=False, dtype={'one': 'u1'}) expected = pd.concat([Series([], name='one', dtype='u1')] * 2, axis=1) tm.assert_frame_equal(result, expected, check_index_type=False) def test_empty_with_dup_column_pass_dtype_by_indexes(self): ### FIXME in GH9424 - raise nose.SkipTest("GH 9424; known failure read_csv with duplicate columns") + raise nose.SkipTest( + "GH 9424; known failure read_csv with duplicate columns") data = 'one,one' - result = self.read_csv(StringIO(data), mangle_dupe_cols=False, dtype={0: 'u1', 1: 'f'}) + result = self.read_csv( + StringIO(data), mangle_dupe_cols=False, dtype={0: 'u1', 1: 'f'}) expected = pd.concat([Series([], name='one', dtype='u1'), Series([], name='one', dtype='f')], axis=1) tm.assert_frame_equal(result, expected, check_index_type=False) @@ -3758,13 +3854,13 @@ def test_usecols_dtypes(self): header=None, converters={'a': str}, dtype={'b': int, 'c': float}, - ) + ) result2 = self.read_csv(StringIO(data), usecols=(0, 2), - names=('a', 'b', 'c'), - header=None, - converters={'a': str}, - dtype={'b': int, 'c': float}, - ) + names=('a', 'b', 'c'), + header=None, + converters={'a': str}, + dtype={'b': int, 'c': float}, + ) self.assertTrue((result.dtypes == [object, np.int, np.float]).all()) self.assertTrue((result2.dtypes == [object, np.float]).all()) @@ -3869,7 +3965,7 @@ def test_decompression_regex_sep(self): # regex sep. 
Temporarily copied to TestPythonParser. # Here test for ValueError when passing regex sep: - with tm.assertRaisesRegexp(ValueError, 'regex sep'): #XXX + with tm.assertRaisesRegexp(ValueError, 'regex sep'): # XXX result = self.read_csv(path, sep='::', compression='gzip') tm.assert_frame_equal(result, expected) @@ -3879,7 +3975,7 @@ def test_decompression_regex_sep(self): tmp.close() # GH 6607 - with tm.assertRaisesRegexp(ValueError, 'regex sep'): #XXX + with tm.assertRaisesRegexp(ValueError, 'regex sep'): # XXX result = self.read_csv(path, sep='::', compression='bz2') tm.assert_frame_equal(result, expected) @@ -4034,7 +4130,8 @@ def test_passing_dtype(self): # This is a copy which should eventually be merged into ParserTests # when the dtype argument is supported by all engines. - df = DataFrame(np.random.rand(5,2),columns=list('AB'),index=['1A','1B','1C','1D','1E']) + df = DataFrame(np.random.rand(5, 2), columns=list( + 'AB'), index=['1A', '1B', '1C', '1D', '1E']) with tm.ensure_clean('__passing_str_as_dtype__.csv') as path: df.to_csv(path) @@ -4042,24 +4139,26 @@ def test_passing_dtype(self): # GH 3795 # passing 'str' as the dtype result = self.read_csv(path, dtype=str, index_col=0) - tm.assert_series_equal(result.dtypes,Series({ 'A' : 'object', 'B' : 'object' })) + tm.assert_series_equal(result.dtypes, Series( + {'A': 'object', 'B': 'object'})) - # we expect all object columns, so need to convert to test for equivalence + # we expect all object columns, so need to convert to test for + # equivalence result = result.astype(float) - tm.assert_frame_equal(result,df) + tm.assert_frame_equal(result, df) # invalid dtype - self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'foo', 'B' : 'float64' }, + self.assertRaises(TypeError, self.read_csv, path, dtype={'A': 'foo', 'B': 'float64'}, index_col=0) # valid but we don't support it (date) - self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'datetime64', 'B' : 'float64' }, + 
self.assertRaises(TypeError, self.read_csv, path, dtype={'A': 'datetime64', 'B': 'float64'}, index_col=0) - self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'datetime64', 'B' : 'float64' }, + self.assertRaises(TypeError, self.read_csv, path, dtype={'A': 'datetime64', 'B': 'float64'}, index_col=0, parse_dates=['B']) # valid but we don't support it - self.assertRaises(TypeError, self.read_csv, path, dtype={'A' : 'timedelta64', 'B' : 'float64' }, + self.assertRaises(TypeError, self.read_csv, path, dtype={'A': 'timedelta64', 'B': 'float64'}, index_col=0) def test_fallback_to_python(self): @@ -4069,7 +4168,7 @@ def test_fallback_to_python(self): # specify C engine with C-unsupported options (raise) with tm.assertRaisesRegexp(ValueError, 'does not support'): self.read_table(StringIO(data), engine='c', sep=None, - delim_whitespace=False) + delim_whitespace=False) with tm.assertRaisesRegexp(ValueError, 'does not support'): self.read_table(StringIO(data), engine='c', sep='\s') with tm.assertRaisesRegexp(ValueError, 'does not support'): @@ -4081,7 +4180,6 @@ def test_raise_on_sep_with_delim_whitespace(self): with tm.assertRaisesRegexp(ValueError, 'you can only specify one'): self.read_table(StringIO(data), sep='\s', delim_whitespace=True) - def test_buffer_overflow(self): # GH9205 # test certain malformed input files that cause buffer overflows in @@ -4093,7 +4191,8 @@ def test_buffer_overflow(self): try: df = self.read_table(StringIO(malf)) except Exception as cperr: - self.assertIn('Buffer overflow caught - possible malformed input file.', str(cperr)) + self.assertIn( + 'Buffer overflow caught - possible malformed input file.', str(cperr)) def test_single_char_leading_whitespace(self): # GH 9710 @@ -4104,7 +4203,7 @@ def test_single_char_leading_whitespace(self): a b\n""" - expected = DataFrame({'MyColumn' : list('abab')}) + expected = DataFrame({'MyColumn': list('abab')}) result = self.read_csv(StringIO(data), delim_whitespace=True, skipinitialspace=True) @@ 
-4233,7 +4332,7 @@ def test_fallback_to_python(self): # (options will be ignored on fallback, raise) with tm.assertRaisesRegexp(ValueError, 'Falling back'): pd.read_table(StringIO(data), sep=None, - delim_whitespace=False, dtype={'a': float}) + delim_whitespace=False, dtype={'a': float}) with tm.assertRaisesRegexp(ValueError, 'Falling back'): pd.read_table(StringIO(data), sep='\s', dtype={'a': float}) with tm.assertRaisesRegexp(ValueError, 'Falling back'): @@ -4315,6 +4414,7 @@ def test_convert_sql_column_decimals(self): class TestUrlGz(tm.TestCase): + def setUp(self): dirpath = tm.get_data_path() localtable = os.path.join(dirpath, 'salary.table') @@ -4334,6 +4434,7 @@ def test_url_gz_infer(self): class TestS3(tm.TestCase): + def setUp(self): try: import boto @@ -4349,10 +4450,12 @@ def test_parse_public_s3_bucket(self): 's3://pandas-test/tips.csv' + ext, compression=comp) else: - df = pd.read_csv('s3://pandas-test/tips.csv' + ext, compression=comp) + df = pd.read_csv('s3://pandas-test/tips.csv' + + ext, compression=comp) self.assertTrue(isinstance(df, pd.DataFrame)) self.assertFalse(df.empty) - tm.assert_frame_equal(pd.read_csv(tm.get_data_path('tips.csv')), df) + tm.assert_frame_equal(pd.read_csv( + tm.get_data_path('tips.csv')), df) # Read public file from bucket with not-public contents df = pd.read_csv('s3://cant_get_it/tips.csv') @@ -4366,7 +4469,8 @@ def test_parse_public_s3n_bucket(self): df = pd.read_csv('s3n://pandas-test/tips.csv', nrows=10) self.assertTrue(isinstance(df, pd.DataFrame)) self.assertFalse(df.empty) - tm.assert_frame_equal(pd.read_csv(tm.get_data_path('tips.csv')).iloc[:10], df) + tm.assert_frame_equal(pd.read_csv( + tm.get_data_path('tips.csv')).iloc[:10], df) @tm.network def test_parse_public_s3a_bucket(self): @@ -4374,7 +4478,8 @@ def test_parse_public_s3a_bucket(self): df = pd.read_csv('s3a://pandas-test/tips.csv', nrows=10) self.assertTrue(isinstance(df, pd.DataFrame)) self.assertFalse(df.empty) - 
tm.assert_frame_equal(pd.read_csv(tm.get_data_path('tips.csv')).iloc[:10], df) + tm.assert_frame_equal(pd.read_csv( + tm.get_data_path('tips.csv')).iloc[:10], df) @tm.network def test_parse_public_s3_bucket_nrows(self): @@ -4385,10 +4490,12 @@ def test_parse_public_s3_bucket_nrows(self): 's3://pandas-test/tips.csv' + ext, compression=comp) else: - df = pd.read_csv('s3://pandas-test/tips.csv' + ext, nrows=10, compression=comp) + df = pd.read_csv('s3://pandas-test/tips.csv' + + ext, nrows=10, compression=comp) self.assertTrue(isinstance(df, pd.DataFrame)) self.assertFalse(df.empty) - tm.assert_frame_equal(pd.read_csv(tm.get_data_path('tips.csv')).iloc[:10], df) + tm.assert_frame_equal(pd.read_csv( + tm.get_data_path('tips.csv')).iloc[:10], df) @tm.network def test_parse_public_s3_bucket_chunked(self): @@ -4406,12 +4513,15 @@ def test_parse_public_s3_bucket_chunked(self): chunksize=chunksize, compression=comp) self.assertEqual(df_reader.chunksize, chunksize) for i_chunk in [0, 1, 2]: - # Read a couple of chunks and make sure we see them properly. + # Read a couple of chunks and make sure we see them + # properly. 
df = df_reader.get_chunk() self.assertTrue(isinstance(df, pd.DataFrame)) self.assertFalse(df.empty) - true_df = local_tips.iloc[chunksize * i_chunk: chunksize * (i_chunk + 1)] - true_df = true_df.reset_index().drop('index', axis=1) # Chunking doesn't preserve row numbering + true_df = local_tips.iloc[ + chunksize * i_chunk: chunksize * (i_chunk + 1)] + # Chunking doesn't preserve row numbering + true_df = true_df.reset_index().drop('index', axis=1) tm.assert_frame_equal(true_df, df) @tm.network @@ -4429,8 +4539,10 @@ def test_parse_public_s3_bucket_chunked_python(self): df = df_reader.get_chunk() self.assertTrue(isinstance(df, pd.DataFrame)) self.assertFalse(df.empty) - true_df = local_tips.iloc[chunksize * i_chunk: chunksize * (i_chunk + 1)] - true_df = true_df.reset_index().drop('index', axis=1) # Chunking doesn't preserve row numbering + true_df = local_tips.iloc[ + chunksize * i_chunk: chunksize * (i_chunk + 1)] + # Chunking doesn't preserve row numbering + true_df = true_df.reset_index().drop('index', axis=1) tm.assert_frame_equal(true_df, df) @tm.network @@ -4440,7 +4552,8 @@ def test_parse_public_s3_bucket_python(self): compression=comp) self.assertTrue(isinstance(df, pd.DataFrame)) self.assertFalse(df.empty) - tm.assert_frame_equal(pd.read_csv(tm.get_data_path('tips.csv')), df) + tm.assert_frame_equal(pd.read_csv( + tm.get_data_path('tips.csv')), df) @tm.network def test_infer_s3_compression(self): @@ -4449,7 +4562,8 @@ def test_infer_s3_compression(self): engine='python', compression='infer') self.assertTrue(isinstance(df, pd.DataFrame)) self.assertFalse(df.empty) - tm.assert_frame_equal(pd.read_csv(tm.get_data_path('tips.csv')), df) + tm.assert_frame_equal(pd.read_csv( + tm.get_data_path('tips.csv')), df) @tm.network def test_parse_public_s3_bucket_nrows_python(self): @@ -4458,13 +4572,14 @@ def test_parse_public_s3_bucket_nrows_python(self): nrows=10, compression=comp) self.assertTrue(isinstance(df, pd.DataFrame)) self.assertFalse(df.empty) - 
tm.assert_frame_equal(pd.read_csv(tm.get_data_path('tips.csv')).iloc[:10], df) + tm.assert_frame_equal(pd.read_csv( + tm.get_data_path('tips.csv')).iloc[:10], df) @tm.network def test_s3_fails(self): import boto with tm.assertRaisesRegexp(boto.exception.S3ResponseError, - 'S3ResponseError: 404 Not Found'): + 'S3ResponseError: 404 Not Found'): pd.read_csv('s3://nyqpug/asdf.csv') # Receive a permission error when trying to read a private bucket. diff --git a/pandas/io/tests/test_pickle.py b/pandas/io/tests/test_pickle.py index 2a4e429e28580..61f78b2b619fc 100644 --- a/pandas/io/tests/test_pickle.py +++ b/pandas/io/tests/test_pickle.py @@ -2,23 +2,18 @@ """ manage legacy pickle tests """ -from datetime import datetime, timedelta -import operator -import pickle as pkl import nose import os from distutils.version import LooseVersion -import numpy as np -import pandas.util.testing as tm import pandas as pd from pandas import Index -from pandas.sparse.tests import test_sparse -from pandas import compat from pandas.compat import u +from pandas.sparse.tests import test_sparse from pandas.util.misc import is_little_endian import pandas +import pandas.util.testing as tm from pandas.tseries.offsets import Day, MonthEnd @@ -34,26 +29,29 @@ class TestPickle(): 3. Move the created pickle to "data/legacy_pickle/<version>" directory. NOTE: TestPickle can't be a subclass of tm.Testcase to use test generator. 
- http://stackoverflow.com/questions/6689537/nose-test-generators-inside-class + http://stackoverflow.com/questions/6689537/ + nose-test-generators-inside-class """ _multiprocess_can_split_ = True def setUp(self): - from pandas.io.tests.generate_legacy_storage_files import create_pickle_data + from pandas.io.tests.generate_legacy_storage_files import ( + create_pickle_data) self.data = create_pickle_data() self.path = u('__%s__.pickle' % tm.rands(10)) def compare_element(self, result, expected, typ, version=None): - if isinstance(expected,Index): + if isinstance(expected, Index): tm.assert_index_equal(expected, result) return if typ.startswith('sp_'): - comparator = getattr(test_sparse,"assert_%s_equal" % typ) - comparator(result,expected,exact_indices=False) + comparator = getattr(test_sparse, "assert_%s_equal" % typ) + comparator(result, expected, exact_indices=False) else: - comparator = getattr(tm,"assert_%s_equal" % typ,tm.assert_almost_equal) - comparator(result,expected) + comparator = getattr(tm, "assert_%s_equal" % + typ, tm.assert_almost_equal) + comparator(result, expected) def compare(self, vf, version): @@ -76,7 +74,8 @@ def compare(self, vf, version): # use a specific comparator # if available - comparator = getattr(self,"compare_{typ}_{dt}".format(typ=typ,dt=dt), self.compare_element) + comparator = getattr(self, "compare_{typ}_{dt}".format( + typ=typ, dt=dt), self.compare_element) comparator(result, expected, typ, version) return data @@ -113,7 +112,8 @@ def read_pickles(self, version): if 'series' in data: if 'ts' in data['series']: - self._validate_timeseries(data['series']['ts'], self.data['series']['ts']) + self._validate_timeseries( + data['series']['ts'], self.data['series']['ts']) self._validate_frequency(data['series']['ts']) if 'index' in data: if 'period' in data['index']: @@ -136,12 +136,13 @@ def test_round_trip_current(self): try: import cPickle as c_pickle - def c_pickler(obj,path): - with open(path,'wb') as fh: - 
c_pickle.dump(obj,fh,protocol=-1) + + def c_pickler(obj, path): + with open(path, 'wb') as fh: + c_pickle.dump(obj, fh, protocol=-1) def c_unpickler(path): - with open(path,'rb') as fh: + with open(path, 'rb') as fh: fh.seek(0) return c_pickle.load(fh) except: @@ -150,26 +151,26 @@ def c_unpickler(path): import pickle as python_pickle - def python_pickler(obj,path): - with open(path,'wb') as fh: - python_pickle.dump(obj,fh,protocol=-1) + def python_pickler(obj, path): + with open(path, 'wb') as fh: + python_pickle.dump(obj, fh, protocol=-1) def python_unpickler(path): - with open(path,'rb') as fh: + with open(path, 'rb') as fh: fh.seek(0) return python_pickle.load(fh) for typ, dv in self.data.items(): for dt, expected in dv.items(): - for writer in [pd.to_pickle, c_pickler, python_pickler ]: + for writer in [pd.to_pickle, c_pickler, python_pickler]: if writer is None: continue with tm.ensure_clean(self.path) as path: # test writing with each pickler - writer(expected,path) + writer(expected, path) # test reading with each unpickler result = pd.read_pickle(path) @@ -212,7 +213,6 @@ def _validate_periodindex(self, pickled, current): if __name__ == '__main__': - import nose nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], # '--with-coverage', '--cover-package=pandas.core'], exit=False) diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py index 38f5150516551..b08d24747bcd3 100644 --- a/pandas/io/tests/test_pytables.py +++ b/pandas/io/tests/test_pytables.py @@ -28,6 +28,7 @@ AttributeConflictWarning, DuplicateWarning, PossibleDataLossError, ClosedFileError) from pandas.io import pytables as pytables +import pandas.core.common as com import pandas.util.testing as tm from pandas.util.testing import (assert_panel4d_equal, assert_panel_equal, @@ -37,8 +38,6 @@ from pandas import concat, Timestamp from pandas import compat from pandas.compat import range, lrange, u -from pandas.util.testing import assert_produces_warning 
-from numpy.testing.decorators import slow try: import tables @@ -57,6 +56,8 @@ skip_compression = PY3 and is_platform_windows() # contextmanager to ensure the file cleanup + + def safe_remove(path): if path is not None: try: @@ -75,12 +76,12 @@ def safe_close(store): def create_tempfile(path): """ create an unopened named temporary file """ - return os.path.join(tempfile.gettempdir(),path) + return os.path.join(tempfile.gettempdir(), path) @contextmanager def ensure_clean_store(path, mode='a', complevel=None, complib=None, - fletcher32=False): + fletcher32=False): try: @@ -106,10 +107,10 @@ def ensure_clean_path(path): """ try: if isinstance(path, list): - filenames = [ create_tempfile(p) for p in path ] + filenames = [create_tempfile(p) for p in path] yield filenames else: - filenames = [ create_tempfile(path) ] + filenames = [create_tempfile(path)] yield filenames[0] finally: for f in filenames: @@ -118,8 +119,9 @@ def ensure_clean_path(path): # set these parameters so we don't have file sharing tables.parameters.MAX_NUMEXPR_THREADS = 1 -tables.parameters.MAX_BLOSC_THREADS = 1 -tables.parameters.MAX_THREADS = 1 +tables.parameters.MAX_BLOSC_THREADS = 1 +tables.parameters.MAX_THREADS = 1 + def _maybe_remove(store, key): """For tests using tables, try removing the table to be sure there is @@ -209,27 +211,27 @@ def test_context(self): def test_conv_read_write(self): path = create_tempfile(self.path) try: - def roundtrip(key, obj,**kwargs): - obj.to_hdf(path, key,**kwargs) + def roundtrip(key, obj, **kwargs): + obj.to_hdf(path, key, **kwargs) return read_hdf(path, key) o = tm.makeTimeSeries() - assert_series_equal(o, roundtrip('series',o)) + assert_series_equal(o, roundtrip('series', o)) o = tm.makeStringSeries() - assert_series_equal(o, roundtrip('string_series',o)) + assert_series_equal(o, roundtrip('string_series', o)) o = tm.makeDataFrame() - assert_frame_equal(o, roundtrip('frame',o)) + assert_frame_equal(o, roundtrip('frame', o)) o = tm.makePanel() - 
assert_panel_equal(o, roundtrip('panel',o)) + assert_panel_equal(o, roundtrip('panel', o)) # table df = DataFrame(dict(A=lrange(5), B=lrange(5))) - df.to_hdf(path,'table',append=True) - result = read_hdf(path, 'table', where = ['index>2']) - assert_frame_equal(df[df.index>2],result) + df.to_hdf(path, 'table', append=True) + result = read_hdf(path, 'table', where=['index>2']) + assert_frame_equal(df[df.index > 2], result) finally: safe_remove(path) @@ -248,7 +250,6 @@ def test_long_strings(self): result = store.select('df') assert_frame_equal(df, result) - def test_api(self): # GH4584 @@ -256,80 +257,84 @@ def test_api(self): with ensure_clean_path(self.path) as path: df = tm.makeDataFrame() - df.iloc[:10].to_hdf(path,'df',append=True,format='table') - df.iloc[10:].to_hdf(path,'df',append=True,format='table') - assert_frame_equal(read_hdf(path,'df'),df) + df.iloc[:10].to_hdf(path, 'df', append=True, format='table') + df.iloc[10:].to_hdf(path, 'df', append=True, format='table') + assert_frame_equal(read_hdf(path, 'df'), df) # append to False - df.iloc[:10].to_hdf(path,'df',append=False,format='table') - df.iloc[10:].to_hdf(path,'df',append=True,format='table') - assert_frame_equal(read_hdf(path,'df'),df) + df.iloc[:10].to_hdf(path, 'df', append=False, format='table') + df.iloc[10:].to_hdf(path, 'df', append=True, format='table') + assert_frame_equal(read_hdf(path, 'df'), df) with ensure_clean_path(self.path) as path: df = tm.makeDataFrame() - df.iloc[:10].to_hdf(path,'df',append=True) - df.iloc[10:].to_hdf(path,'df',append=True,format='table') - assert_frame_equal(read_hdf(path,'df'),df) + df.iloc[:10].to_hdf(path, 'df', append=True) + df.iloc[10:].to_hdf(path, 'df', append=True, format='table') + assert_frame_equal(read_hdf(path, 'df'), df) # append to False - df.iloc[:10].to_hdf(path,'df',append=False,format='table') - df.iloc[10:].to_hdf(path,'df',append=True) - assert_frame_equal(read_hdf(path,'df'),df) + df.iloc[:10].to_hdf(path, 'df', append=False, 
format='table') + df.iloc[10:].to_hdf(path, 'df', append=True) + assert_frame_equal(read_hdf(path, 'df'), df) with ensure_clean_path(self.path) as path: df = tm.makeDataFrame() - df.to_hdf(path,'df',append=False,format='fixed') - assert_frame_equal(read_hdf(path,'df'),df) + df.to_hdf(path, 'df', append=False, format='fixed') + assert_frame_equal(read_hdf(path, 'df'), df) - df.to_hdf(path,'df',append=False,format='f') - assert_frame_equal(read_hdf(path,'df'),df) + df.to_hdf(path, 'df', append=False, format='f') + assert_frame_equal(read_hdf(path, 'df'), df) - df.to_hdf(path,'df',append=False) - assert_frame_equal(read_hdf(path,'df'),df) + df.to_hdf(path, 'df', append=False) + assert_frame_equal(read_hdf(path, 'df'), df) - df.to_hdf(path,'df') - assert_frame_equal(read_hdf(path,'df'),df) + df.to_hdf(path, 'df') + assert_frame_equal(read_hdf(path, 'df'), df) with ensure_clean_store(self.path) as store: path = store._path df = tm.makeDataFrame() - _maybe_remove(store,'df') - store.append('df',df.iloc[:10],append=True,format='table') - store.append('df',df.iloc[10:],append=True,format='table') - assert_frame_equal(store.select('df'),df) + _maybe_remove(store, 'df') + store.append('df', df.iloc[:10], append=True, format='table') + store.append('df', df.iloc[10:], append=True, format='table') + assert_frame_equal(store.select('df'), df) # append to False - _maybe_remove(store,'df') - store.append('df',df.iloc[:10],append=False,format='table') - store.append('df',df.iloc[10:],append=True,format='table') - assert_frame_equal(store.select('df'),df) + _maybe_remove(store, 'df') + store.append('df', df.iloc[:10], append=False, format='table') + store.append('df', df.iloc[10:], append=True, format='table') + assert_frame_equal(store.select('df'), df) # formats - _maybe_remove(store,'df') - store.append('df',df.iloc[:10],append=False,format='table') - store.append('df',df.iloc[10:],append=True,format='table') - assert_frame_equal(store.select('df'),df) + _maybe_remove(store, 
'df') + store.append('df', df.iloc[:10], append=False, format='table') + store.append('df', df.iloc[10:], append=True, format='table') + assert_frame_equal(store.select('df'), df) - _maybe_remove(store,'df') - store.append('df',df.iloc[:10],append=False,format='table') - store.append('df',df.iloc[10:],append=True,format=None) - assert_frame_equal(store.select('df'),df) + _maybe_remove(store, 'df') + store.append('df', df.iloc[:10], append=False, format='table') + store.append('df', df.iloc[10:], append=True, format=None) + assert_frame_equal(store.select('df'), df) with ensure_clean_path(self.path) as path: # invalid df = tm.makeDataFrame() - self.assertRaises(ValueError, df.to_hdf, path,'df',append=True,format='f') - self.assertRaises(ValueError, df.to_hdf, path,'df',append=True,format='fixed') + self.assertRaises(ValueError, df.to_hdf, path, + 'df', append=True, format='f') + self.assertRaises(ValueError, df.to_hdf, path, + 'df', append=True, format='fixed') - self.assertRaises(TypeError, df.to_hdf, path,'df',append=True,format='foo') - self.assertRaises(TypeError, df.to_hdf, path,'df',append=False,format='bar') + self.assertRaises(TypeError, df.to_hdf, path, + 'df', append=True, format='foo') + self.assertRaises(TypeError, df.to_hdf, path, + 'df', append=False, format='bar') - #File path doesn't exist + # File path doesn't exist path = "" self.assertRaises(IOError, read_hdf, path, 'df') @@ -339,41 +344,41 @@ def test_api_default_format(self): with ensure_clean_store(self.path) as store: df = tm.makeDataFrame() - pandas.set_option('io.hdf.default_format','fixed') - _maybe_remove(store,'df') - store.put('df',df) + pandas.set_option('io.hdf.default_format', 'fixed') + _maybe_remove(store, 'df') + store.put('df', df) self.assertFalse(store.get_storer('df').is_table) - self.assertRaises(ValueError, store.append, 'df2',df) + self.assertRaises(ValueError, store.append, 'df2', df) - pandas.set_option('io.hdf.default_format','table') - _maybe_remove(store,'df') - 
store.put('df',df) + pandas.set_option('io.hdf.default_format', 'table') + _maybe_remove(store, 'df') + store.put('df', df) self.assertTrue(store.get_storer('df').is_table) - _maybe_remove(store,'df2') - store.append('df2',df) + _maybe_remove(store, 'df2') + store.append('df2', df) self.assertTrue(store.get_storer('df').is_table) - pandas.set_option('io.hdf.default_format',None) + pandas.set_option('io.hdf.default_format', None) with ensure_clean_path(self.path) as path: df = tm.makeDataFrame() - pandas.set_option('io.hdf.default_format','fixed') - df.to_hdf(path,'df') + pandas.set_option('io.hdf.default_format', 'fixed') + df.to_hdf(path, 'df') with get_store(path) as store: self.assertFalse(store.get_storer('df').is_table) - self.assertRaises(ValueError, df.to_hdf, path,'df2', append=True) + self.assertRaises(ValueError, df.to_hdf, path, 'df2', append=True) - pandas.set_option('io.hdf.default_format','table') - df.to_hdf(path,'df3') + pandas.set_option('io.hdf.default_format', 'table') + df.to_hdf(path, 'df3') with HDFStore(path) as store: self.assertTrue(store.get_storer('df3').is_table) - df.to_hdf(path,'df4',append=True) + df.to_hdf(path, 'df4', append=True) with HDFStore(path) as store: self.assertTrue(store.get_storer('df4').is_table) - pandas.set_option('io.hdf.default_format',None) + pandas.set_option('io.hdf.default_format', None) def test_keys(self): @@ -408,9 +413,9 @@ def test_repr(self): df['int2'] = 2 df['timestamp1'] = Timestamp('20010102') df['timestamp2'] = Timestamp('20010103') - df['datetime1'] = datetime.datetime(2001,1,2,0,0) - df['datetime2'] = datetime.datetime(2001,1,3,0,0) - df.ix[3:6,['obj1']] = np.nan + df['datetime1'] = datetime.datetime(2001, 1, 2, 0, 0) + df['datetime2'] = datetime.datetime(2001, 1, 3, 0, 0) + df.ix[3:6, ['obj1']] = np.nan df = df.consolidate()._convert(datetime=True) warnings.filterwarnings('ignore', category=PerformanceWarning) @@ -418,7 +423,7 @@ def test_repr(self): warnings.filterwarnings('always', 
category=PerformanceWarning) # make a random group in hdf space - store._handle.create_group(store._handle.root,'bah') + store._handle.create_group(store._handle.root, 'bah') repr(store) str(store) @@ -427,7 +432,7 @@ def test_repr(self): with ensure_clean_store(self.path) as store: df = tm.makeDataFrame() - store.append('df',df) + store.append('df', df) s = store.get_storer('df') repr(s) @@ -448,7 +453,8 @@ def test_contains(self): self.assertNotIn('bar', store) # GH 2694 - warnings.filterwarnings('ignore', category=tables.NaturalNameWarning) + warnings.filterwarnings( + 'ignore', category=tables.NaturalNameWarning) store['node())'] = tm.makeDataFrame() self.assertIn('node())', store) @@ -469,8 +475,8 @@ def test_versioning(self): _maybe_remove(store, 'df2') store.append('df2', df) - # this is an error because its table_type is appendable, but no version - # info + # this is an error because its table_type is appendable, but no + # version info store.get_node('df2')._v_attrs.pandas_version = None self.assertRaises(Exception, store.select, 'df2') @@ -483,41 +489,43 @@ def check(mode): with ensure_clean_path(self.path) as path: # constructor - if mode in ['r','r+']: + if mode in ['r', 'r+']: self.assertRaises(IOError, HDFStore, path, mode=mode) else: - store = HDFStore(path,mode=mode) + store = HDFStore(path, mode=mode) self.assertEqual(store._handle.mode, mode) store.close() with ensure_clean_path(self.path) as path: # context - if mode in ['r','r+']: + if mode in ['r', 'r+']: def f(): - with HDFStore(path,mode=mode) as store: + with HDFStore(path, mode=mode) as store: # noqa pass self.assertRaises(IOError, f) else: - with HDFStore(path,mode=mode) as store: + with HDFStore(path, mode=mode) as store: self.assertEqual(store._handle.mode, mode) with ensure_clean_path(self.path) as path: # conv write - if mode in ['r','r+']: - self.assertRaises(IOError, df.to_hdf, path, 'df', mode=mode) - df.to_hdf(path,'df',mode='w') + if mode in ['r', 'r+']: + 
self.assertRaises(IOError, df.to_hdf, + path, 'df', mode=mode) + df.to_hdf(path, 'df', mode='w') else: - df.to_hdf(path,'df',mode=mode) + df.to_hdf(path, 'df', mode=mode) # conv read if mode in ['w']: - self.assertRaises(KeyError, read_hdf, path, 'df', mode=mode) + self.assertRaises(KeyError, read_hdf, + path, 'df', mode=mode) else: - result = read_hdf(path,'df',mode=mode) - assert_frame_equal(result,df) + result = read_hdf(path, 'df', mode=mode) + assert_frame_equal(result, df) check('r') check('r+') @@ -528,7 +536,7 @@ def test_reopen_handle(self): with ensure_clean_path(self.path) as path: - store = HDFStore(path,mode='a') + store = HDFStore(path, mode='a') store['a'] = tm.makeTimeSeries() # invalid mode change @@ -543,7 +551,7 @@ def test_reopen_handle(self): store.close() self.assertFalse(store.is_open) - store = HDFStore(path,mode='a') + store = HDFStore(path, mode='a') store['a'] = tm.makeTimeSeries() # reopen as read @@ -577,12 +585,13 @@ def test_open_args(self): df = tm.makeDataFrame() # create an in memory store - store = HDFStore(path,mode='a',driver='H5FD_CORE',driver_core_backing_store=0) + store = HDFStore(path, mode='a', driver='H5FD_CORE', + driver_core_backing_store=0) store['df'] = df - store.append('df2',df) + store.append('df2', df) - tm.assert_frame_equal(store['df'],df) - tm.assert_frame_equal(store['df2'],df) + tm.assert_frame_equal(store['df'], df) + tm.assert_frame_equal(store['df2'], df) store.close() @@ -620,7 +629,7 @@ def test_getattr(self): # test attribute access result = store.a tm.assert_series_equal(result, s) - result = getattr(store,'a') + result = getattr(store, 'a') tm.assert_series_equal(result, s) df = tm.makeTimeDataFrame() @@ -631,12 +640,12 @@ def test_getattr(self): # errors self.assertRaises(AttributeError, getattr, store, 'd') - for x in ['mode','path','handle','complib']: + for x in ['mode', 'path', 'handle', 'complib']: self.assertRaises(AttributeError, getattr, store, x) # not stores - for x in 
['mode','path','handle','complib']: - getattr(store,"_%s" % x) + for x in ['mode', 'path', 'handle', 'complib']: + getattr(store, "_%s" % x) def test_put(self): @@ -655,10 +664,11 @@ def test_put(self): self.assertRaises( ValueError, store.put, 'b', df[10:], append=True) - # node does not currently exist, test _is_table_type returns False in - # this case + # node does not currently exist, test _is_table_type returns False + # in this case # _maybe_remove(store, 'f') - # self.assertRaises(ValueError, store.put, 'f', df[10:], append=True) + # self.assertRaises(ValueError, store.put, 'f', df[10:], + # append=True) # can't put to a table (use append instead) self.assertRaises(ValueError, store.put, 'c', df[10:], append=True) @@ -683,7 +693,9 @@ def test_put_string_index(self): tm.assert_frame_equal(store['b'], df) # mixed length - index = Index(['abcdefghijklmnopqrstuvwxyz1234567890'] + ["I am a very long string index: %s" % i for i in range(20)]) + index = Index(['abcdefghijklmnopqrstuvwxyz1234567890'] + + ["I am a very long string index: %s" % i + for i in range(20)]) s = Series(np.arange(21), index=index) df = DataFrame({'A': s, 'B': s}) store['a'] = s @@ -747,11 +759,11 @@ def test_put_mixed_type(self): # cannot use assert_produces_warning here for some reason # a PendingDeprecationWarning is also raised? 
warnings.filterwarnings('ignore', category=PerformanceWarning) - store.put('df',df) + store.put('df', df) warnings.filterwarnings('always', category=PerformanceWarning) expected = store.get('df') - tm.assert_frame_equal(expected,df) + tm.assert_frame_equal(expected, df) def test_append(self): @@ -773,7 +785,8 @@ def test_append(self): tm.assert_frame_equal(store['df3'], df) # this is allowed by almost always don't want to do it - with tm.assert_produces_warning(expected_warning=tables.NaturalNameWarning): + with tm.assert_produces_warning( + expected_warning=tables.NaturalNameWarning): _maybe_remove(store, '/df3 foo') store.append('/df3 foo', df[:10]) store.append('/df3 foo', df[10:]) @@ -796,9 +809,9 @@ def test_append(self): # test using axis labels _maybe_remove(store, 'p4d') store.append('p4d', p4d.ix[:, :, :10, :], axes=[ - 'items', 'major_axis', 'minor_axis']) + 'items', 'major_axis', 'minor_axis']) store.append('p4d', p4d.ix[:, :, 10:, :], axes=[ - 'items', 'major_axis', 'minor_axis']) + 'items', 'major_axis', 'minor_axis']) assert_panel4d_equal(store['p4d'], p4d) # test using differnt number of items on each axis @@ -827,18 +840,24 @@ def test_append(self): tm.assert_frame_equal(store['df'], df) # uints - test storage of uints - uint_data = DataFrame({'u08' : Series(np.random.random_integers(0, high=255, size=5), dtype=np.uint8), - 'u16' : Series(np.random.random_integers(0, high=65535, size=5), dtype=np.uint16), - 'u32' : Series(np.random.random_integers(0, high=2**30, size=5), dtype=np.uint32), - 'u64' : Series([2**58, 2**59, 2**60, 2**61, 2**62], dtype=np.uint64)}, - index=np.arange(5)) + uint_data = DataFrame({ + 'u08': Series(np.random.random_integers(0, high=255, size=5), + dtype=np.uint8), + 'u16': Series(np.random.random_integers(0, high=65535, size=5), + dtype=np.uint16), + 'u32': Series(np.random.random_integers(0, high=2**30, size=5), + dtype=np.uint32), + 'u64': Series([2**58, 2**59, 2**60, 2**61, 2**62], + dtype=np.uint64)}, index=np.arange(5)) 
_maybe_remove(store, 'uints') store.append('uints', uint_data) tm.assert_frame_equal(store['uints'], uint_data) # uints - test storage of uints in indexable columns _maybe_remove(store, 'uints') - store.append('uints', uint_data, data_columns=['u08','u16','u32']) # 64-bit indices not yet supported + # 64-bit indices not yet supported + store.append('uints', uint_data, data_columns=[ + 'u08', 'u16', 'u32']) tm.assert_frame_equal(store['uints'], uint_data) def test_append_series(self): @@ -867,21 +886,21 @@ def test_append_series(self): self.assertEqual(result.name, ns.name) # select on the values - expected = ns[ns>60] - result = store.select('ns',Term('foo>60')) - tm.assert_series_equal(result,expected) + expected = ns[ns > 60] + result = store.select('ns', Term('foo>60')) + tm.assert_series_equal(result, expected) # select on the index and values - expected = ns[(ns>70) & (ns.index<90)] - result = store.select('ns',[Term('foo>70'), Term('index<90')]) - tm.assert_series_equal(result,expected) + expected = ns[(ns > 70) & (ns.index < 90)] + result = store.select('ns', [Term('foo>70'), Term('index<90')]) + tm.assert_series_equal(result, expected) # multi-index - mi = DataFrame(np.random.randn(5,1),columns=['A']) + mi = DataFrame(np.random.randn(5, 1), columns=['A']) mi['B'] = np.arange(len(mi)) mi['C'] = 'foo' - mi.loc[3:5,'C'] = 'bar' - mi.set_index(['C','B'],inplace=True) + mi.loc[3:5, 'C'] = 'bar' + mi.set_index(['C', 'B'], inplace=True) s = mi.stack() s.index = s.index.droplevel(2) store.append('mi', s) @@ -893,36 +912,37 @@ def test_store_index_types(self): with ensure_clean_store(self.path) as store: - def check(format,index): - df = DataFrame(np.random.randn(10,2),columns=list('AB')) + def check(format, index): + df = DataFrame(np.random.randn(10, 2), columns=list('AB')) df.index = index(len(df)) _maybe_remove(store, 'df') - store.put('df',df,format=format) - assert_frame_equal(df,store['df']) + store.put('df', df, format=format) + assert_frame_equal(df, 
store['df']) - for index in [ tm.makeFloatIndex, tm.makeStringIndex, tm.makeIntIndex, - tm.makeDateIndex ]: + for index in [tm.makeFloatIndex, tm.makeStringIndex, + tm.makeIntIndex, tm.makeDateIndex]: - check('table',index) - check('fixed',index) + check('table', index) + check('fixed', index) # period index currently broken for table # seee GH7796 FIXME - check('fixed',tm.makePeriodIndex) - #check('table',tm.makePeriodIndex) + check('fixed', tm.makePeriodIndex) + # check('table',tm.makePeriodIndex) # unicode index = tm.makeUnicodeIndex if compat.PY3: - check('table',index) - check('fixed',index) + check('table', index) + check('fixed', index) else: # only support for fixed types (and they have a perf warning) self.assertRaises(TypeError, check, 'table', index) - with tm.assert_produces_warning(expected_warning=PerformanceWarning): - check('fixed',index) + with tm.assert_produces_warning( + expected_warning=PerformanceWarning): + check('fixed', index) def test_encoding(self): @@ -930,21 +950,22 @@ def test_encoding(self): raise nose.SkipTest('system byteorder is not little') with ensure_clean_store(self.path) as store: - df = DataFrame(dict(A='foo',B='bar'),index=range(5)) - df.loc[2,'A'] = np.nan - df.loc[3,'B'] = np.nan + df = DataFrame(dict(A='foo', B='bar'), index=range(5)) + df.loc[2, 'A'] = np.nan + df.loc[3, 'B'] = np.nan _maybe_remove(store, 'df') store.append('df', df, encoding='ascii') tm.assert_frame_equal(store['df'], df) expected = df.reindex(columns=['A']) - result = store.select('df',Term('columns=A',encoding='ascii')) - tm.assert_frame_equal(result,expected) + result = store.select('df', Term('columns=A', encoding='ascii')) + tm.assert_frame_equal(result, expected) def test_latin_encoding(self): if compat.PY2: - self.assertRaisesRegexp(TypeError, '\[unicode\] is not implemented as a table column') + self.assertRaisesRegexp( + TypeError, '\[unicode\] is not implemented as a table column') return values = [[b'E\xc9, 17', b'', b'a', b'b', b'c'], @@ 
-973,7 +994,7 @@ def _try_decode(x, encoding='latin-1'): def roundtrip(s, key='data', encoding='latin-1', nan_rep=''): with ensure_clean_path(self.path) as store: s.to_hdf(store, key, format='table', encoding=encoding, - nan_rep=nan_rep) + nan_rep=nan_rep) retr = read_hdf(store, key) s_nan = s.replace(nan_rep, np.nan) assert_series_equal(s_nan, retr) @@ -985,25 +1006,26 @@ def roundtrip(s, key='data', encoding='latin-1', nan_rep=''): # for x in examples: # roundtrip(s, nan_rep=b'\xf8\xfc') - def test_append_some_nans(self): with ensure_clean_store(self.path) as store: - df = DataFrame({'A' : Series(np.random.randn(20)).astype('int32'), - 'A1' : np.random.randn(20), - 'A2' : np.random.randn(20), - 'B' : 'foo', 'C' : 'bar', 'D' : Timestamp("20010101"), 'E' : datetime.datetime(2001,1,2,0,0) }, + df = DataFrame({'A': Series(np.random.randn(20)).astype('int32'), + 'A1': np.random.randn(20), + 'A2': np.random.randn(20), + 'B': 'foo', 'C': 'bar', + 'D': Timestamp("20010101"), + 'E': datetime.datetime(2001, 1, 2, 0, 0)}, index=np.arange(20)) # some nans _maybe_remove(store, 'df1') - df.ix[0:15,['A1','B','D','E']] = np.nan + df.ix[0:15, ['A1', 'B', 'D', 'E']] = np.nan store.append('df1', df[:10]) store.append('df1', df[10:]) tm.assert_frame_equal(store['df1'], df) # first column df1 = df.copy() - df1.ix[:,'A1'] = np.nan + df1.ix[:, 'A1'] = np.nan _maybe_remove(store, 'df1') store.append('df1', df1[:10]) store.append('df1', df1[10:]) @@ -1011,7 +1033,7 @@ def test_append_some_nans(self): # 2nd column df2 = df.copy() - df2.ix[:,'A2'] = np.nan + df2.ix[:, 'A2'] = np.nan _maybe_remove(store, 'df2') store.append('df2', df2[:10]) store.append('df2', df2[10:]) @@ -1019,7 +1041,7 @@ def test_append_some_nans(self): # datetimes df3 = df.copy() - df3.ix[:,'E'] = np.nan + df3.ix[:, 'E'] = np.nan _maybe_remove(store, 'df3') store.append('df3', df3[:10]) store.append('df3', df3[10:]) @@ -1029,11 +1051,10 @@ def test_append_all_nans(self): with ensure_clean_store(self.path) as store: - 
df = DataFrame({'A1' : np.random.randn(20), - 'A2' : np.random.randn(20)}, + df = DataFrame({'A1': np.random.randn(20), + 'A2': np.random.randn(20)}, index=np.arange(20)) - df.ix[0:15,:] = np.nan - + df.ix[0:15, :] = np.nan # nan some entire rows (dropna=True) _maybe_remove(store, 'df') @@ -1048,25 +1069,25 @@ def test_append_all_nans(self): tm.assert_frame_equal(store['df2'], df) # tests the option io.hdf.dropna_table - pandas.set_option('io.hdf.dropna_table',False) + pandas.set_option('io.hdf.dropna_table', False) _maybe_remove(store, 'df3') store.append('df3', df[:10]) store.append('df3', df[10:]) tm.assert_frame_equal(store['df3'], df) - pandas.set_option('io.hdf.dropna_table',True) + pandas.set_option('io.hdf.dropna_table', True) _maybe_remove(store, 'df4') store.append('df4', df[:10]) store.append('df4', df[10:]) tm.assert_frame_equal(store['df4'], df[-4:]) # nan some entire rows (string are still written!) - df = DataFrame({'A1' : np.random.randn(20), - 'A2' : np.random.randn(20), - 'B' : 'foo', 'C' : 'bar'}, + df = DataFrame({'A1': np.random.randn(20), + 'A2': np.random.randn(20), + 'B': 'foo', 'C': 'bar'}, index=np.arange(20)) - df.ix[0:15,:] = np.nan + df.ix[0:15, :] = np.nan _maybe_remove(store, 'df') store.append('df', df[:10], dropna=True) @@ -1078,13 +1099,16 @@ def test_append_all_nans(self): store.append('df2', df[10:], dropna=False) tm.assert_frame_equal(store['df2'], df) - # nan some entire rows (but since we have dates they are still written!) - df = DataFrame({'A1' : np.random.randn(20), - 'A2' : np.random.randn(20), - 'B' : 'foo', 'C' : 'bar', 'D' : Timestamp("20010101"), 'E' : datetime.datetime(2001,1,2,0,0) }, + # nan some entire rows (but since we have dates they are still + # written!) 
+ df = DataFrame({'A1': np.random.randn(20), + 'A2': np.random.randn(20), + 'B': 'foo', 'C': 'bar', + 'D': Timestamp("20010101"), + 'E': datetime.datetime(2001, 1, 2, 0, 0)}, index=np.arange(20)) - df.ix[0:15,:] = np.nan + df.ix[0:15, :] = np.nan _maybe_remove(store, 'df') store.append('df', df[:10], dropna=True) @@ -1098,25 +1122,27 @@ def test_append_all_nans(self): # Test to make sure defaults are to not drop. # Corresponding to Issue 9382 - df_with_missing = DataFrame({'col1':[0, np.nan, 2], 'col2':[1, np.nan, np.nan]}) + df_with_missing = DataFrame( + {'col1': [0, np.nan, 2], 'col2': [1, np.nan, np.nan]}) with ensure_clean_path(self.path) as path: - df_with_missing.to_hdf(path, 'df_with_missing', format = 'table') + df_with_missing.to_hdf(path, 'df_with_missing', format='table') reloaded = read_hdf(path, 'df_with_missing') tm.assert_frame_equal(df_with_missing, reloaded) - matrix = [[[np.nan, np.nan, np.nan],[1,np.nan,np.nan]], - [[np.nan, np.nan, np.nan], [np.nan,5,6]], - [[np.nan, np.nan, np.nan],[np.nan,3,np.nan]]] + matrix = [[[np.nan, np.nan, np.nan], [1, np.nan, np.nan]], + [[np.nan, np.nan, np.nan], [np.nan, 5, 6]], + [[np.nan, np.nan, np.nan], [np.nan, 3, np.nan]]] - panel_with_missing = Panel(matrix, items=['Item1', 'Item2','Item3'], - major_axis=[1,2], - minor_axis=['A', 'B', 'C']) + panel_with_missing = Panel(matrix, items=['Item1', 'Item2', 'Item3'], + major_axis=[1, 2], + minor_axis=['A', 'B', 'C']) with ensure_clean_path(self.path) as path: - panel_with_missing.to_hdf(path, 'panel_with_missing', format='table') - reloaded_panel = read_hdf(path, 'panel_with_missing') - tm.assert_panel_equal(panel_with_missing, reloaded_panel) + panel_with_missing.to_hdf( + path, 'panel_with_missing', format='table') + reloaded_panel = read_hdf(path, 'panel_with_missing') + tm.assert_panel_equal(panel_with_missing, reloaded_panel) def test_append_frame_column_oriented(self): @@ -1141,46 +1167,48 @@ def test_append_frame_column_oriented(self): # this isn't supported 
self.assertRaises(TypeError, store.select, 'df1', ( - 'columns=A', Term('index>df.index[4]'))) + 'columns=A', Term('index>df.index[4]'))) def test_append_with_different_block_ordering(self): - #GH 4096; using same frames, but different block orderings + # GH 4096; using same frames, but different block orderings with ensure_clean_store(self.path) as store: for i in range(10): - df = DataFrame(np.random.randn(10,2),columns=list('AB')) + df = DataFrame(np.random.randn(10, 2), columns=list('AB')) df['index'] = range(10) - df['index'] += i*10 - df['int64'] = Series([1]*len(df),dtype='int64') - df['int16'] = Series([1]*len(df),dtype='int16') + df['index'] += i * 10 + df['int64'] = Series([1] * len(df), dtype='int64') + df['int16'] = Series([1] * len(df), dtype='int16') if i % 2 == 0: del df['int64'] - df['int64'] = Series([1]*len(df),dtype='int64') + df['int64'] = Series([1] * len(df), dtype='int64') if i % 3 == 0: a = df.pop('A') df['A'] = a - df.set_index('index',inplace=True) + df.set_index('index', inplace=True) - store.append('df',df) + store.append('df', df) - # test a different ordering but with more fields (like invalid combinate) + # test a different ordering but with more fields (like invalid + # combinate) with ensure_clean_store(self.path) as store: - df = DataFrame(np.random.randn(10,2),columns=list('AB'), dtype='float64') - df['int64'] = Series([1]*len(df),dtype='int64') - df['int16'] = Series([1]*len(df),dtype='int16') - store.append('df',df) + df = DataFrame(np.random.randn(10, 2), + columns=list('AB'), dtype='float64') + df['int64'] = Series([1] * len(df), dtype='int64') + df['int16'] = Series([1] * len(df), dtype='int16') + store.append('df', df) # store additonal fields in different blocks - df['int16_2'] = Series([1]*len(df),dtype='int16') + df['int16_2'] = Series([1] * len(df), dtype='int16') self.assertRaises(ValueError, store.append, 'df', df) # store multile additonal fields in different blocks - df['float_3'] = 
Series([1.]*len(df),dtype='float64') + df['float_3'] = Series([1.] * len(df), dtype='float64') self.assertRaises(ValueError, store.append, 'df', df) def test_ndim_indexables(self): @@ -1208,14 +1236,14 @@ def check_indexers(key, indexers): _maybe_remove(store, 'p4d') store.append('p4d', p4d.ix[:, :, :10, :], axes=indexers) store.append('p4d', p4d.ix[:, :, 10:, :], axes=[ - 'labels', 'items', 'major_axis']) + 'labels', 'items', 'major_axis']) assert_panel4d_equal(store.select('p4d'), p4d) check_indexers('p4d', indexers) # pass incorrect number of axes _maybe_remove(store, 'p4d') self.assertRaises(ValueError, store.append, 'p4d', p4d.ix[ - :, :, :10, :], axes=['major_axis', 'minor_axis']) + :, :, :10, :], axes=['major_axis', 'minor_axis']) # different than default indexables #1 indexers = ['labels', 'major_axis', 'minor_axis'] @@ -1240,14 +1268,14 @@ def check_indexers(key, indexers): # partial selection2 result = store.select('p4d', [Term( - 'labels=l1'), Term('items=ItemA'), Term('minor_axis=B')]) + 'labels=l1'), Term('items=ItemA'), Term('minor_axis=B')]) expected = p4d.reindex( labels=['l1'], items=['ItemA'], minor_axis=['B']) assert_panel4d_equal(result, expected) # non-existant partial selection result = store.select('p4d', [Term( - 'labels=l1'), Term('items=Item1'), Term('minor_axis=B')]) + 'labels=l1'), Term('items=Item1'), Term('minor_axis=B')]) expected = p4d.reindex(labels=['l1'], items=[], minor_axis=['B']) assert_panel4d_equal(result, expected) @@ -1258,8 +1286,9 @@ def test_append_with_strings(self): wp2 = wp.rename_axis( dict([(x, "%s_extra" % x) for x in wp.minor_axis]), axis=2) - def check_col(key,name,size): - self.assertEqual(getattr(store.get_storer(key).table.description,name).itemsize, size) + def check_col(key, name, size): + self.assertEqual(getattr(store.get_storer( + key).table.description, name).itemsize, size) store.append('s1', wp, min_itemsize=20) store.append('s1', wp2) @@ -1324,26 +1353,28 @@ def check_col(key,name,size): with 
ensure_clean_store(self.path) as store: - def check_col(key,name,size): - self.assertEqual(getattr(store.get_storer(key).table.description,name).itemsize, size) + def check_col(key, name, size): + self.assertEqual(getattr(store.get_storer( + key).table.description, name).itemsize, size) - df = DataFrame(dict(A = 'foo', B = 'bar'),index=range(10)) + df = DataFrame(dict(A='foo', B='bar'), index=range(10)) # a min_itemsize that creates a data_column _maybe_remove(store, 'df') - store.append('df', df, min_itemsize={'A' : 200 }) + store.append('df', df, min_itemsize={'A': 200}) check_col('df', 'A', 200) self.assertEqual(store.get_storer('df').data_columns, ['A']) # a min_itemsize that creates a data_column2 _maybe_remove(store, 'df') - store.append('df', df, data_columns = ['B'], min_itemsize={'A' : 200 }) + store.append('df', df, data_columns=['B'], min_itemsize={'A': 200}) check_col('df', 'A', 200) - self.assertEqual(store.get_storer('df').data_columns, ['B','A']) + self.assertEqual(store.get_storer('df').data_columns, ['B', 'A']) # a min_itemsize that creates a data_column2 _maybe_remove(store, 'df') - store.append('df', df, data_columns = ['B'], min_itemsize={'values' : 200 }) + store.append('df', df, data_columns=[ + 'B'], min_itemsize={'values': 200}) check_col('df', 'B', 200) check_col('df', 'values_block_0', 200) self.assertEqual(store.get_storer('df').data_columns, ['B']) @@ -1355,15 +1386,17 @@ def check_col(key,name,size): tm.assert_frame_equal(store['df'], df) # invalid min_itemsize keys - df = DataFrame(['foo','foo','foo','barh','barh','barh'],columns=['A']) + df = DataFrame(['foo', 'foo', 'foo', 'barh', + 'barh', 'barh'], columns=['A']) _maybe_remove(store, 'df') - self.assertRaises(ValueError, store.append, 'df', df, min_itemsize={'foo' : 20, 'foobar' : 20}) + self.assertRaises(ValueError, store.append, 'df', + df, min_itemsize={'foo': 20, 'foobar': 20}) def test_append_with_data_columns(self): with ensure_clean_store(self.path) as store: df = 
tm.makeTimeDataFrame() - df.loc[:,'B'].iloc[0] = 1. + df.loc[:, 'B'].iloc[0] = 1. _maybe_remove(store, 'df') store.append('df', df[:2], data_columns=['B']) store.append('df', df[2:]) @@ -1388,8 +1421,8 @@ def test_append_with_data_columns(self): # data column selection with a string data_column df_new = df.copy() df_new['string'] = 'foo' - df_new.loc[1:4,'string'] = np.nan - df_new.loc[5:6,'string'] = 'bar' + df_new.loc[1:4, 'string'] = np.nan + df_new.loc[5:6, 'string'] = 'bar' _maybe_remove(store, 'df') store.append('df', df_new, data_columns=['string']) result = store.select('df', [Term('string=foo')]) @@ -1397,8 +1430,9 @@ def test_append_with_data_columns(self): tm.assert_frame_equal(result, expected) # using min_itemsize and a data column - def check_col(key,name,size): - self.assertEqual(getattr(store.get_storer(key).table.description,name).itemsize, size) + def check_col(key, name, size): + self.assertEqual(getattr(store.get_storer( + key).table.description, name).itemsize, size) with ensure_clean_store(self.path) as store: _maybe_remove(store, 'df') @@ -1419,7 +1453,9 @@ def check_col(key,name,size): df_new['string_block1'] = 'foobarbah1' df_new['string_block2'] = 'foobarbah2' _maybe_remove(store, 'df') - store.append('df', df_new, data_columns=['string', 'string2'], min_itemsize={'string': 30, 'string2': 40, 'values': 50}) + store.append('df', df_new, data_columns=['string', 'string2'], + min_itemsize={'string': 30, 'string2': 40, + 'values': 50}) check_col('df', 'string', 30) check_col('df', 'string2', 40) check_col('df', 'values_block_1', 50) @@ -1427,28 +1463,28 @@ def check_col(key,name,size): with ensure_clean_store(self.path) as store: # multiple data columns df_new = df.copy() - df_new.ix[0,'A'] = 1. - df_new.ix[0,'B'] = -1. + df_new.ix[0, 'A'] = 1. + df_new.ix[0, 'B'] = -1. 
df_new['string'] = 'foo' - df_new.loc[1:4,'string'] = np.nan - df_new.loc[5:6,'string'] = 'bar' + df_new.loc[1:4, 'string'] = np.nan + df_new.loc[5:6, 'string'] = 'bar' df_new['string2'] = 'foo' - df_new.loc[2:5,'string2'] = np.nan - df_new.loc[7:8,'string2'] = 'bar' + df_new.loc[2:5, 'string2'] = np.nan + df_new.loc[7:8, 'string2'] = 'bar' _maybe_remove(store, 'df') store.append( 'df', df_new, data_columns=['A', 'B', 'string', 'string2']) result = store.select('df', [Term('string=foo'), Term( - 'string2=foo'), Term('A>0'), Term('B<0')]) + 'string2=foo'), Term('A>0'), Term('B<0')]) expected = df_new[(df_new.string == 'foo') & ( - df_new.string2 == 'foo') & (df_new.A > 0) & (df_new.B < 0)] + df_new.string2 == 'foo') & (df_new.A > 0) & (df_new.B < 0)] tm.assert_frame_equal(result, expected, check_index_type=False) # yield an empty frame result = store.select('df', [Term('string=foo'), Term( - 'string2=cool')]) + 'string2=cool')]) expected = df_new[(df_new.string == 'foo') & ( - df_new.string2 == 'cool')] + df_new.string2 == 'cool')] tm.assert_frame_equal(result, expected, check_index_type=False) with ensure_clean_store(self.path) as store: @@ -1463,8 +1499,9 @@ def check_col(key,name,size): df_dc.ix[3:5, ['A', 'B', 'datetime']] = np.nan _maybe_remove(store, 'df_dc') - store.append('df_dc', df_dc, data_columns=['B', 'C', - 'string', 'string2', 'datetime']) + store.append('df_dc', df_dc, + data_columns=['B', 'C', 'string', + 'string2', 'datetime']) result = store.select('df_dc', [Term('B>0')]) expected = df_dc[df_dc.B > 0] @@ -1473,7 +1510,7 @@ def check_col(key,name,size): result = store.select( 'df_dc', ['B > 0', 'C > 0', 'string == foo']) expected = df_dc[(df_dc.B > 0) & (df_dc.C > 0) & ( - df_dc.string == 'foo')] + df_dc.string == 'foo')] tm.assert_frame_equal(result, expected, check_index_type=False) with ensure_clean_store(self.path) as store: @@ -1483,21 +1520,24 @@ def check_col(key,name,size): df_dc = DataFrame(np.random.randn(8, 3), index=index, columns=['A', 
'B', 'C']) df_dc['string'] = 'foo' - df_dc.ix[4:6,'string'] = np.nan - df_dc.ix[7:9,'string'] = 'bar' - df_dc.ix[:,['B','C']] = df_dc.ix[:,['B','C']].abs() + df_dc.ix[4:6, 'string'] = np.nan + df_dc.ix[7:9, 'string'] = 'bar' + df_dc.ix[:, ['B', 'C']] = df_dc.ix[:, ['B', 'C']].abs() df_dc['string2'] = 'cool' # on-disk operations - store.append('df_dc', df_dc, data_columns = ['B', 'C', 'string', 'string2']) + store.append('df_dc', df_dc, data_columns=[ + 'B', 'C', 'string', 'string2']) - result = store.select('df_dc', [ Term('B>0') ]) - expected = df_dc[df_dc.B>0] - tm.assert_frame_equal(result,expected) + result = store.select('df_dc', [Term('B>0')]) + expected = df_dc[df_dc.B > 0] + tm.assert_frame_equal(result, expected) - result = store.select('df_dc', ['B > 0', 'C > 0', 'string == "foo"']) - expected = df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == 'foo')] - tm.assert_frame_equal(result,expected) + result = store.select( + 'df_dc', ['B > 0', 'C > 0', 'string == "foo"']) + expected = df_dc[(df_dc.B > 0) & (df_dc.C > 0) & + (df_dc.string == 'foo')] + tm.assert_frame_equal(result, expected) with ensure_clean_store(self.path) as store: # panel @@ -1505,29 +1545,30 @@ def check_col(key,name,size): np.random.seed(1234) p = tm.makePanel() - store.append('p1',p) - tm.assert_panel_equal(store.select('p1'),p) + store.append('p1', p) + tm.assert_panel_equal(store.select('p1'), p) - store.append('p2',p,data_columns=True) - tm.assert_panel_equal(store.select('p2'),p) + store.append('p2', p, data_columns=True) + tm.assert_panel_equal(store.select('p2'), p) - result = store.select('p2',where='ItemA>0') + result = store.select('p2', where='ItemA>0') expected = p.to_frame() - expected = expected[expected['ItemA']>0] - tm.assert_frame_equal(result.to_frame(),expected) + expected = expected[expected['ItemA'] > 0] + tm.assert_frame_equal(result.to_frame(), expected) - result = store.select('p2',where='ItemA>0 & minor_axis=["A","B"]') + result = store.select('p2', 
where='ItemA>0 & minor_axis=["A","B"]') expected = p.to_frame() - expected = expected[expected['ItemA']>0] - expected = expected[expected.reset_index(level=['major']).index.isin(['A','B'])] - tm.assert_frame_equal(result.to_frame(),expected) + expected = expected[expected['ItemA'] > 0] + expected = expected[expected.reset_index( + level=['major']).index.isin(['A', 'B'])] + tm.assert_frame_equal(result.to_frame(), expected) def test_create_table_index(self): with ensure_clean_store(self.path) as store: - def col(t,column): - return getattr(store.get_storer(t).table.cols,column) + def col(t, column): + return getattr(store.get_storer(t).table.cols, column) # index=False wp = tm.makePanel() @@ -1607,15 +1648,15 @@ def test_append_hierarchical(self): tm.assert_frame_equal(result, df) # GH 3748 - result = store.select('mi',columns=['A','B']) - expected = df.reindex(columns=['A','B']) - tm.assert_frame_equal(result,expected) + result = store.select('mi', columns=['A', 'B']) + expected = df.reindex(columns=['A', 'B']) + tm.assert_frame_equal(result, expected) with ensure_clean_path('test.hdf') as path: - df.to_hdf(path,'df',format='table') - result = read_hdf(path,'df',columns=['A','B']) - expected = df.reindex(columns=['A','B']) - tm.assert_frame_equal(result,expected) + df.to_hdf(path, 'df', format='table') + result = read_hdf(path, 'df', columns=['A', 'B']) + expected = df.reindex(columns=['A', 'B']) + tm.assert_frame_equal(result, expected) def test_column_multiindex(self): # GH 4710 @@ -1658,7 +1699,7 @@ def test_column_multiindex(self): columns=Index(list('ABCD'), name='foo')) expected = df.copy() if isinstance(expected.index, RangeIndex): - expected.index = Int64Index(expected.index) + expected.index = Int64Index(expected.index) with ensure_clean_store(self.path) as store: @@ -1674,44 +1715,53 @@ def test_store_multiindex(self): with ensure_clean_store(self.path) as store: def make_index(names=None): - return MultiIndex.from_tuples([( datetime.datetime(2013,12,d), 
s, t) for d in range(1,3) for s in range(2) for t in range(3)], + return MultiIndex.from_tuples([(datetime.datetime(2013, 12, d), + s, t) + for d in range(1, 3) + for s in range(2) + for t in range(3)], names=names) - # no names _maybe_remove(store, 'df') - df = DataFrame(np.zeros((12,2)), columns=['a','b'], index=make_index()) - store.append('df',df) - tm.assert_frame_equal(store.select('df'),df) + df = DataFrame(np.zeros((12, 2)), columns=[ + 'a', 'b'], index=make_index()) + store.append('df', df) + tm.assert_frame_equal(store.select('df'), df) # partial names _maybe_remove(store, 'df') - df = DataFrame(np.zeros((12,2)), columns=['a','b'], index=make_index(['date',None,None])) - store.append('df',df) - tm.assert_frame_equal(store.select('df'),df) + df = DataFrame(np.zeros((12, 2)), columns=[ + 'a', 'b'], index=make_index(['date', None, None])) + store.append('df', df) + tm.assert_frame_equal(store.select('df'), df) # series _maybe_remove(store, 's') s = Series(np.zeros(12), index=make_index(['date', None, None])) - store.append('s',s) - xp = Series(np.zeros(12), index=make_index(['date', 'level_1', 'level_2'])) + store.append('s', s) + xp = Series(np.zeros(12), index=make_index( + ['date', 'level_1', 'level_2'])) tm.assert_series_equal(store.select('s'), xp) # dup with column _maybe_remove(store, 'df') - df = DataFrame(np.zeros((12,2)), columns=['a','b'], index=make_index(['date','a','t'])) - self.assertRaises(ValueError, store.append, 'df',df) + df = DataFrame(np.zeros((12, 2)), columns=[ + 'a', 'b'], index=make_index(['date', 'a', 't'])) + self.assertRaises(ValueError, store.append, 'df', df) # dup within level _maybe_remove(store, 'df') - df = DataFrame(np.zeros((12,2)), columns=['a','b'], index=make_index(['date','date','date'])) - self.assertRaises(ValueError, store.append, 'df',df) + df = DataFrame(np.zeros((12, 2)), columns=['a', 'b'], + index=make_index(['date', 'date', 'date'])) + self.assertRaises(ValueError, store.append, 'df', df) # fully names 
_maybe_remove(store, 'df') - df = DataFrame(np.zeros((12,2)), columns=['a','b'], index=make_index(['date','s','t'])) - store.append('df',df) - tm.assert_frame_equal(store.select('df'),df) + df = DataFrame(np.zeros((12, 2)), columns=[ + 'a', 'b'], index=make_index(['date', 's', 't'])) + store.append('df', df) + tm.assert_frame_equal(store.select('df'), df) def test_select_columns_in_where(self): @@ -1734,23 +1784,25 @@ def test_select_columns_in_where(self): tm.assert_frame_equal(store.select('df', columns=['A']), expected) - tm.assert_frame_equal(store.select('df', where="columns=['A']"), expected) + tm.assert_frame_equal(store.select( + 'df', where="columns=['A']"), expected) # With a Series s = Series(np.random.randn(10), index=index, name='A') with ensure_clean_store(self.path) as store: store.put('s', s, format='table') - tm.assert_series_equal(store.select('s', where="columns=['A']"),s) + tm.assert_series_equal(store.select('s', where="columns=['A']"), s) def test_pass_spec_to_storer(self): df = tm.makeDataFrame() with ensure_clean_store(self.path) as store: - store.put('df',df) + store.put('df', df) self.assertRaises(TypeError, store.select, 'df', columns=['A']) - self.assertRaises(TypeError, store.select, 'df',where=[('columns=A')]) + self.assertRaises(TypeError, store.select, + 'df', where=[('columns=A')]) def test_append_misc(self): @@ -1758,13 +1810,13 @@ def test_append_misc(self): # unsuported data types for non-tables p4d = tm.makePanel4D() - self.assertRaises(TypeError, store.put,'p4d',p4d) + self.assertRaises(TypeError, store.put, 'p4d', p4d) # unsuported data types - self.assertRaises(TypeError, store.put,'abc',None) - self.assertRaises(TypeError, store.put,'abc','123') - self.assertRaises(TypeError, store.put,'abc',123) - self.assertRaises(TypeError, store.put,'abc',np.arange(5)) + self.assertRaises(TypeError, store.put, 'abc', None) + self.assertRaises(TypeError, store.put, 'abc', '123') + self.assertRaises(TypeError, store.put, 'abc', 123) + 
self.assertRaises(TypeError, store.put, 'abc', np.arange(5)) df = tm.makeDataFrame() store.append('df', df, chunksize=1) @@ -1778,18 +1830,18 @@ def test_append_misc(self): # more chunksize in append tests def check(obj, comparator): for c in [10, 200, 1000]: - with ensure_clean_store(self.path,mode='w') as store: + with ensure_clean_store(self.path, mode='w') as store: store.append('obj', obj, chunksize=c) result = store.select('obj') - comparator(result,obj) + comparator(result, obj) df = tm.makeDataFrame() df['string'] = 'foo' df['float322'] = 1. df['float322'] = df['float322'].astype('float32') - df['bool'] = df['float322'] > 0 - df['time1'] = Timestamp('20130101') - df['time2'] = Timestamp('20130102') + df['bool'] = df['float322'] > 0 + df['time1'] = Timestamp('20130101') + df['time2'] = Timestamp('20130102') check(df, tm.assert_frame_equal) p = tm.makePanel() @@ -1803,36 +1855,36 @@ def check(obj, comparator): # 0 len df_empty = DataFrame(columns=list('ABC')) - store.append('df',df_empty) - self.assertRaises(KeyError,store.select, 'df') + store.append('df', df_empty) + self.assertRaises(KeyError, store.select, 'df') # repeated append of 0/non-zero frames - df = DataFrame(np.random.rand(10,3),columns=list('ABC')) - store.append('df',df) - assert_frame_equal(store.select('df'),df) - store.append('df',df_empty) - assert_frame_equal(store.select('df'),df) + df = DataFrame(np.random.rand(10, 3), columns=list('ABC')) + store.append('df', df) + assert_frame_equal(store.select('df'), df) + store.append('df', df_empty) + assert_frame_equal(store.select('df'), df) # store df = DataFrame(columns=list('ABC')) - store.put('df2',df) - assert_frame_equal(store.select('df2'),df) + store.put('df2', df) + assert_frame_equal(store.select('df2'), df) # 0 len p_empty = Panel(items=list('ABC')) - store.append('p',p_empty) - self.assertRaises(KeyError,store.select, 'p') + store.append('p', p_empty) + self.assertRaises(KeyError, store.select, 'p') # repeated append of 0/non-zero 
frames - p = Panel(np.random.randn(3,4,5),items=list('ABC')) - store.append('p',p) - assert_panel_equal(store.select('p'),p) - store.append('p',p_empty) - assert_panel_equal(store.select('p'),p) + p = Panel(np.random.randn(3, 4, 5), items=list('ABC')) + store.append('p', p) + assert_panel_equal(store.select('p'), p) + store.append('p', p_empty) + assert_panel_equal(store.select('p'), p) # store - store.put('p2',p_empty) - assert_panel_equal(store.select('p2'),p_empty) + store.put('p2', p_empty) + assert_panel_equal(store.select('p2'), p_empty) def test_append_raise(self): @@ -1844,34 +1896,35 @@ def test_append_raise(self): df = tm.makeDataFrame() df['invalid'] = [['a']] * len(df) self.assertEqual(df.dtypes['invalid'], np.object_) - self.assertRaises(TypeError, store.append,'df',df) + self.assertRaises(TypeError, store.append, 'df', df) # multiple invalid columns df['invalid2'] = [['a']] * len(df) df['invalid3'] = [['a']] * len(df) - self.assertRaises(TypeError, store.append,'df',df) + self.assertRaises(TypeError, store.append, 'df', df) # datetime with embedded nans as object df = tm.makeDataFrame() - s = Series(datetime.datetime(2001,1,2),index=df.index) + s = Series(datetime.datetime(2001, 1, 2), index=df.index) s = s.astype(object) s[0:5] = np.nan df['invalid'] = s self.assertEqual(df.dtypes['invalid'], np.object_) - self.assertRaises(TypeError, store.append,'df', df) + self.assertRaises(TypeError, store.append, 'df', df) # directy ndarray - self.assertRaises(TypeError, store.append,'df',np.arange(10)) + self.assertRaises(TypeError, store.append, 'df', np.arange(10)) # series directly - self.assertRaises(TypeError, store.append,'df',Series(np.arange(10))) + self.assertRaises(TypeError, store.append, + 'df', Series(np.arange(10))) # appending an incompatbile table df = tm.makeDataFrame() - store.append('df',df) + store.append('df', df) df['foo'] = 'foo' - self.assertRaises(ValueError, store.append,'df',df) + self.assertRaises(ValueError, store.append, 'df', df) 
def test_table_index_incompatible_dtypes(self): df1 = DataFrame({'a': [1, 2, 3]}) @@ -1888,39 +1941,42 @@ def test_table_values_dtypes_roundtrip(self): with ensure_clean_store(self.path) as store: df1 = DataFrame({'a': [1, 2, 3]}, dtype='f8') store.append('df_f8', df1) - assert_series_equal(df1.dtypes,store['df_f8'].dtypes) + assert_series_equal(df1.dtypes, store['df_f8'].dtypes) df2 = DataFrame({'a': [1, 2, 3]}, dtype='i8') store.append('df_i8', df2) - assert_series_equal(df2.dtypes,store['df_i8'].dtypes) + assert_series_equal(df2.dtypes, store['df_i8'].dtypes) # incompatible dtype self.assertRaises(ValueError, store.append, 'df_i8', df1) - # check creation/storage/retrieval of float32 (a bit hacky to actually create them thought) - df1 = DataFrame(np.array([[1],[2],[3]],dtype='f4'),columns = ['A']) + # check creation/storage/retrieval of float32 (a bit hacky to + # actually create them thought) + df1 = DataFrame( + np.array([[1], [2], [3]], dtype='f4'), columns=['A']) store.append('df_f4', df1) - assert_series_equal(df1.dtypes,store['df_f4'].dtypes) + assert_series_equal(df1.dtypes, store['df_f4'].dtypes) assert df1.dtypes[0] == 'float32' # check with mixed dtypes - df1 = DataFrame(dict([ (c,Series(np.random.randn(5),dtype=c)) for c in - ['float32','float64','int32','int64','int16','int8'] ])) + df1 = DataFrame(dict([(c, Series(np.random.randn(5), dtype=c)) + for c in ['float32', 'float64', 'int32', + 'int64', 'int16', 'int8']])) df1['string'] = 'foo' df1['float322'] = 1. 
df1['float322'] = df1['float322'].astype('float32') - df1['bool'] = df1['float32'] > 0 - df1['time1'] = Timestamp('20130101') - df1['time2'] = Timestamp('20130102') + df1['bool'] = df1['float32'] > 0 + df1['time1'] = Timestamp('20130101') + df1['time2'] = Timestamp('20130102') store.append('df_mixed_dtypes1', df1) result = store.select('df_mixed_dtypes1').get_dtype_counts() - expected = Series({ 'float32' : 2, 'float64' : 1,'int32' : 1, 'bool' : 1, - 'int16' : 1, 'int8' : 1, 'int64' : 1, 'object' : 1, - 'datetime64[ns]' : 2}) + expected = Series({'float32': 2, 'float64': 1, 'int32': 1, + 'bool': 1, 'int16': 1, 'int8': 1, + 'int64': 1, 'object': 1, 'datetime64[ns]': 2}) result.sort() expected.sort() - tm.assert_series_equal(result,expected) + tm.assert_series_equal(result, expected) def test_table_mixed_dtypes(self): @@ -1982,7 +2038,7 @@ def test_unimplemented_dtypes_table_columns(self): if not compat.PY3: l.append(('unicode', u('\\u03c3'))) - ### currently not supported dtypes #### + # currently not supported dtypes #### for n, f in l: df = tm.makeDataFrame() df[n] = f @@ -2005,20 +2061,23 @@ def test_calendar_roundtrip_issue(self): # 8591 # doc example from tseries holiday section weekmask_egypt = 'Sun Mon Tue Wed Thu' - holidays = ['2012-05-01', datetime.datetime(2013, 5, 1), np.datetime64('2014-05-01')] - bday_egypt = pandas.offsets.CustomBusinessDay(holidays=holidays, weekmask=weekmask_egypt) + holidays = ['2012-05-01', + datetime.datetime(2013, 5, 1), np.datetime64('2014-05-01')] + bday_egypt = pandas.offsets.CustomBusinessDay( + holidays=holidays, weekmask=weekmask_egypt) dt = datetime.datetime(2013, 4, 30) dts = date_range(dt, periods=5, freq=bday_egypt) - s = (Series(dts.weekday, dts).map(Series('Mon Tue Wed Thu Fri Sat Sun'.split()))) + s = (Series(dts.weekday, dts).map( + Series('Mon Tue Wed Thu Fri Sat Sun'.split()))) with ensure_clean_store(self.path) as store: - store.put('fixed',s) + store.put('fixed', s) result = store.select('fixed') 
assert_series_equal(result, s) - store.append('table',s) + store.append('table', s) result = store.select('table') assert_series_equal(result, s) @@ -2027,42 +2086,43 @@ def test_append_with_timedelta(self): # append timedelta from datetime import timedelta - df = DataFrame(dict(A = Timestamp('20130101'), B = [ Timestamp('20130101') + timedelta(days=i,seconds=10) for i in range(10) ])) - df['C'] = df['A']-df['B'] - df.ix[3:5,'C'] = np.nan + df = DataFrame(dict(A=Timestamp('20130101'), B=[Timestamp( + '20130101') + timedelta(days=i, seconds=10) for i in range(10)])) + df['C'] = df['A'] - df['B'] + df.ix[3:5, 'C'] = np.nan with ensure_clean_store(self.path) as store: # table _maybe_remove(store, 'df') - store.append('df',df,data_columns=True) + store.append('df', df, data_columns=True) result = store.select('df') - assert_frame_equal(result,df) + assert_frame_equal(result, df) - result = store.select('df',Term("C<100000")) - assert_frame_equal(result,df) + result = store.select('df', Term("C<100000")) + assert_frame_equal(result, df) - result = store.select('df',Term("C","<",-3*86400)) - assert_frame_equal(result,df.iloc[3:]) + result = store.select('df', Term("C", "<", -3 * 86400)) + assert_frame_equal(result, df.iloc[3:]) - result = store.select('df',"C<'-3D'") - assert_frame_equal(result,df.iloc[3:]) + result = store.select('df', "C<'-3D'") + assert_frame_equal(result, df.iloc[3:]) # a bit hacky here as we don't really deal with the NaT properly - result = store.select('df',"C<'-500000s'") + result = store.select('df', "C<'-500000s'") result = result.dropna(subset=['C']) - assert_frame_equal(result,df.iloc[6:]) + assert_frame_equal(result, df.iloc[6:]) - result = store.select('df',"C<'-3.5D'") + result = store.select('df', "C<'-3.5D'") result = result.iloc[1:] - assert_frame_equal(result,df.iloc[4:]) + assert_frame_equal(result, df.iloc[4:]) # fixed _maybe_remove(store, 'df2') - store.put('df2',df) + store.put('df2', df) result = store.select('df2') - 
assert_frame_equal(result,df) + assert_frame_equal(result, df) def test_remove(self): @@ -2148,9 +2208,9 @@ def test_remove_startstop(self): _maybe_remove(store, 'wp1') store.put('wp1', wp, format='t') n = store.remove('wp1', start=32) - self.assertTrue(n == 120-32) + self.assertTrue(n == 120 - 32) result = store.select('wp1') - expected = wp.reindex(major_axis=wp.major_axis[:32//4]) + expected = wp.reindex(major_axis=wp.major_axis[:32 // 4]) assert_panel_equal(result, expected) _maybe_remove(store, 'wp2') @@ -2158,7 +2218,7 @@ def test_remove_startstop(self): n = store.remove('wp2', start=-32) self.assertTrue(n == 32) result = store.select('wp2') - expected = wp.reindex(major_axis=wp.major_axis[:-32//4]) + expected = wp.reindex(major_axis=wp.major_axis[:-32 // 4]) assert_panel_equal(result, expected) # stop @@ -2167,24 +2227,25 @@ def test_remove_startstop(self): n = store.remove('wp3', stop=32) self.assertTrue(n == 32) result = store.select('wp3') - expected = wp.reindex(major_axis=wp.major_axis[32//4:]) + expected = wp.reindex(major_axis=wp.major_axis[32 // 4:]) assert_panel_equal(result, expected) _maybe_remove(store, 'wp4') store.put('wp4', wp, format='t') n = store.remove('wp4', stop=-32) - self.assertTrue(n == 120-32) + self.assertTrue(n == 120 - 32) result = store.select('wp4') - expected = wp.reindex(major_axis=wp.major_axis[-32//4:]) + expected = wp.reindex(major_axis=wp.major_axis[-32 // 4:]) assert_panel_equal(result, expected) # start n stop _maybe_remove(store, 'wp5') store.put('wp5', wp, format='t') n = store.remove('wp5', start=16, stop=-16) - self.assertTrue(n == 120-32) + self.assertTrue(n == 120 - 32) result = store.select('wp5') - expected = wp.reindex(major_axis=wp.major_axis[:16//4].union(wp.major_axis[-16//4:])) + expected = wp.reindex(major_axis=wp.major_axis[ + :16 // 4].union(wp.major_axis[-16 // 4:])) assert_panel_equal(result, expected) _maybe_remove(store, 'wp6') @@ -2197,13 +2258,17 @@ def test_remove_startstop(self): # with where 
_maybe_remove(store, 'wp7') - date = wp.major_axis.take(np.arange(0,30,3)) + + # TODO: unused? + date = wp.major_axis.take(np.arange(0, 30, 3)) # noqa + crit = Term('major_axis=date') store.put('wp7', wp, format='t') n = store.remove('wp7', where=[crit], stop=80) self.assertTrue(n == 28) result = store.select('wp7') - expected = wp.reindex(major_axis=wp.major_axis.difference(wp.major_axis[np.arange(0,20,3)])) + expected = wp.reindex(major_axis=wp.major_axis.difference( + wp.major_axis[np.arange(0, 20, 3)])) assert_panel_equal(result, expected) def test_remove_crit(self): @@ -2256,16 +2321,18 @@ def test_remove_crit(self): crit2 = Term('major_axis=date2') store.remove('wp2', where=[crit2]) result = store['wp2'] - expected = wp.reindex( - major_axis=wp.major_axis.difference(date1).difference(Index([date2]))) + expected = wp.reindex(major_axis=wp.major_axis.difference(date1) + .difference(Index([date2]))) assert_panel_equal(result, expected) date3 = [wp.major_axis[7], wp.major_axis[9]] crit3 = Term('major_axis=date3') store.remove('wp2', where=[crit3]) result = store['wp2'] - expected = wp.reindex( - major_axis=wp.major_axis.difference(date1).difference(Index([date2])).difference(Index(date3))) + expected = wp.reindex(major_axis=wp.major_axis + .difference(date1) + .difference(Index([date2])) + .difference(Index(date3))) assert_panel_equal(result, expected) # corners @@ -2282,7 +2349,7 @@ def test_invalid_terms(self): df = tm.makeTimeDataFrame() df['string'] = 'foo' - df.ix[0:4,'string'] = 'bar' + df.ix[0:4, 'string'] = 'bar' wp = tm.makePanel() p4d = tm.makePanel4D() store.put('df', df, format='table') @@ -2290,31 +2357,39 @@ def test_invalid_terms(self): store.put('p4d', p4d, format='table') # some invalid terms - self.assertRaises(ValueError, store.select, 'wp', "minor=['A', 'B']") - self.assertRaises(ValueError, store.select, 'wp', ["index=['20121114']"]) - self.assertRaises(ValueError, store.select, 'wp', ["index=['20121114', '20121114']"]) + 
self.assertRaises(ValueError, store.select, + 'wp', "minor=['A', 'B']") + self.assertRaises(ValueError, store.select, + 'wp', ["index=['20121114']"]) + self.assertRaises(ValueError, store.select, 'wp', [ + "index=['20121114', '20121114']"]) self.assertRaises(TypeError, Term) # more invalid - self.assertRaises(ValueError, store.select, 'df','df.index[3]') - self.assertRaises(SyntaxError, store.select, 'df','index>') - self.assertRaises(ValueError, store.select, 'wp', "major_axis<'20000108' & minor_axis['A', 'B']") + self.assertRaises(ValueError, store.select, 'df', 'df.index[3]') + self.assertRaises(SyntaxError, store.select, 'df', 'index>') + self.assertRaises(ValueError, store.select, 'wp', + "major_axis<'20000108' & minor_axis['A', 'B']") # from the docs with ensure_clean_path(self.path) as path: - dfq = DataFrame(np.random.randn(10,4),columns=list('ABCD'),index=date_range('20130101',periods=10)) - dfq.to_hdf(path,'dfq',format='table',data_columns=True) + dfq = DataFrame(np.random.randn(10, 4), columns=list( + 'ABCD'), index=date_range('20130101', periods=10)) + dfq.to_hdf(path, 'dfq', format='table', data_columns=True) # check ok - read_hdf(path,'dfq',where="index>Timestamp('20130104') & columns=['A', 'B']") - read_hdf(path,'dfq',where="A>0 or C>0") + read_hdf(path, 'dfq', + where="index>Timestamp('20130104') & columns=['A', 'B']") + read_hdf(path, 'dfq', where="A>0 or C>0") # catch the invalid reference with ensure_clean_path(self.path) as path: - dfq = DataFrame(np.random.randn(10,4),columns=list('ABCD'),index=date_range('20130101',periods=10)) - dfq.to_hdf(path,'dfq',format='table') + dfq = DataFrame(np.random.randn(10, 4), columns=list( + 'ABCD'), index=date_range('20130101', periods=10)) + dfq.to_hdf(path, 'dfq', format='table') - self.assertRaises(ValueError, read_hdf, path,'dfq',where="A>0 or C>0") + self.assertRaises(ValueError, read_hdf, path, + 'dfq', where="A>0 or C>0") def test_terms(self): @@ -2322,7 +2397,8 @@ def test_terms(self): wp = 
tm.makePanel() p4d = tm.makePanel4D() - wpneg = Panel.fromDict({-1: tm.makeDataFrame(), 0: tm.makeDataFrame(), + wpneg = Panel.fromDict({-1: tm.makeDataFrame(), + 0: tm.makeDataFrame(), 1: tm.makeDataFrame()}) store.put('wp', wp, format='table') store.put('p4d', p4d, format='table') @@ -2330,13 +2406,13 @@ def test_terms(self): # panel result = store.select('wp', [Term( - 'major_axis<"20000108"'), Term("minor_axis=['A', 'B']")]) + 'major_axis<"20000108"'), Term("minor_axis=['A', 'B']")]) expected = wp.truncate(after='20000108').reindex(minor=['A', 'B']) assert_panel_equal(result, expected) # with deprecation result = store.select('wp', [Term( - 'major_axis','<',"20000108"), Term("minor_axis=['A', 'B']")]) + 'major_axis', '<', "20000108"), Term("minor_axis=['A', 'B']")]) expected = wp.truncate(after='20000108').reindex(minor=['A', 'B']) tm.assert_panel_equal(result, expected) @@ -2372,7 +2448,7 @@ def test_terms(self): ((("minor_axis==['A', 'B']"),),), (("items=['ItemA', 'ItemB']"),), ('items=ItemA'), - ] + ] for t in terms: store.select('wp', t) @@ -2382,13 +2458,15 @@ def test_terms(self): terms = [ (("labels=['l1', 'l2']"),), Term("labels=['l1', 'l2']"), - ] + ] for t in terms: store.select('p4d', t) - with tm.assertRaisesRegexp(TypeError, 'Only named functions are supported'): - store.select('wp', Term('major_axis == (lambda x: x)("20130101")')) + with tm.assertRaisesRegexp(TypeError, + 'Only named functions are supported'): + store.select('wp', Term( + 'major_axis == (lambda x: x)("20130101")')) # check USub node parsing res = store.select('wpneg', Term('items == -1')) @@ -2405,16 +2483,17 @@ def test_term_compat(self): wp = Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'], major_axis=date_range('1/1/2000', periods=5), minor_axis=['A', 'B', 'C', 'D']) - store.append('wp',wp) + store.append('wp', wp) result = store.select('wp', [Term('major_axis>20000102'), - Term('minor_axis', '=', ['A','B']) ]) - expected = 
wp.loc[:,wp.major_axis>Timestamp('20000102'),['A','B']] + Term('minor_axis', '=', ['A', 'B'])]) + expected = wp.loc[:, wp.major_axis > + Timestamp('20000102'), ['A', 'B']] assert_panel_equal(result, expected) store.remove('wp', Term('major_axis>20000103')) result = store.select('wp') - expected = wp.loc[:,wp.major_axis<=Timestamp('20000103'),:] + expected = wp.loc[:, wp.major_axis <= Timestamp('20000103'), :] assert_panel_equal(result, expected) with ensure_clean_store(self.path) as store: @@ -2422,23 +2501,30 @@ def test_term_compat(self): wp = Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'], major_axis=date_range('1/1/2000', periods=5), minor_axis=['A', 'B', 'C', 'D']) - store.append('wp',wp) + store.append('wp', wp) # stringified datetimes - result = store.select('wp', [Term('major_axis','>',datetime.datetime(2000,1,2))]) - expected = wp.loc[:,wp.major_axis>Timestamp('20000102')] + result = store.select( + 'wp', [Term('major_axis', '>', datetime.datetime(2000, 1, 2))]) + expected = wp.loc[:, wp.major_axis > Timestamp('20000102')] assert_panel_equal(result, expected) - result = store.select('wp', [Term('major_axis','>',datetime.datetime(2000,1,2,0,0))]) - expected = wp.loc[:,wp.major_axis>Timestamp('20000102')] + result = store.select( + 'wp', [Term('major_axis', '>', + datetime.datetime(2000, 1, 2, 0, 0))]) + expected = wp.loc[:, wp.major_axis > Timestamp('20000102')] assert_panel_equal(result, expected) - result = store.select('wp', [Term('major_axis','=',[datetime.datetime(2000,1,2,0,0),datetime.datetime(2000,1,3,0,0)])]) - expected = wp.loc[:,[Timestamp('20000102'),Timestamp('20000103')]] + result = store.select( + 'wp', [Term('major_axis', '=', + [datetime.datetime(2000, 1, 2, 0, 0), + datetime.datetime(2000, 1, 3, 0, 0)])]) + expected = wp.loc[:, [Timestamp('20000102'), + Timestamp('20000103')]] assert_panel_equal(result, expected) - result = store.select('wp', [Term('minor_axis','=',['A','B'])]) - expected = wp.loc[:,:,['A','B']] + result = 
store.select('wp', [Term('minor_axis', '=', ['A', 'B'])]) + expected = wp.loc[:, :, ['A', 'B']] assert_panel_equal(result, expected) def test_backwards_compat_without_term_object(self): @@ -2459,7 +2545,7 @@ def test_backwards_compat_without_term_object(self): store.remove('wp', ('major_axis>20000103')) result = store.select('wp') - expected = wp.loc[:,wp.major_axis<=Timestamp('20000103'),:] + expected = wp.loc[:, wp.major_axis <= Timestamp('20000103'), :] assert_panel_equal(result, expected) with ensure_clean_store(self.path) as store: @@ -2503,22 +2589,23 @@ def test_same_name_scoping(self): with ensure_clean_store(self.path) as store: import pandas as pd - df = DataFrame(np.random.randn(20, 2),index=pd.date_range('20130101',periods=20)) + df = DataFrame(np.random.randn(20, 2), + index=pd.date_range('20130101', periods=20)) store.put('df', df, format='table') - expected = df[df.index>pd.Timestamp('20130105')] + expected = df[df.index > pd.Timestamp('20130105')] - import datetime - result = store.select('df','index>datetime.datetime(2013,1,5)') - assert_frame_equal(result,expected) + import datetime # noqa + result = store.select('df', 'index>datetime.datetime(2013,1,5)') + assert_frame_equal(result, expected) - from datetime import datetime + from datetime import datetime # noqa # technically an error, but allow it - result = store.select('df','index>datetime.datetime(2013,1,5)') - assert_frame_equal(result,expected) + result = store.select('df', 'index>datetime.datetime(2013,1,5)') + assert_frame_equal(result, expected) - result = store.select('df','index>datetime(2013,1,5)') - assert_frame_equal(result,expected) + result = store.select('df', 'index>datetime(2013,1,5)') + assert_frame_equal(result, expected) def test_series(self): @@ -2533,7 +2620,8 @@ def test_series(self): ts3 = Series(ts.values, Index(np.asarray(ts.index, dtype=object), dtype=object)) - self._check_roundtrip(ts3, tm.assert_series_equal, check_index_type=False) + self._check_roundtrip(ts3, 
tm.assert_series_equal, + check_index_type=False) def test_sparse_series(self): @@ -2602,7 +2690,8 @@ def test_tuple_index(self): DF = DataFrame(data, index=idx, columns=col) expected_warning = Warning if PY35 else PerformanceWarning - with tm.assert_produces_warning(expected_warning=expected_warning, check_stacklevel=False): + with tm.assert_produces_warning(expected_warning=expected_warning, + check_stacklevel=False): self._check_roundtrip(DF, tm.assert_frame_equal) def test_index_types(self): @@ -2616,23 +2705,28 @@ def test_index_types(self): # nose has a deprecation warning in 3.5 expected_warning = Warning if PY35 else PerformanceWarning - with tm.assert_produces_warning(expected_warning=expected_warning, check_stacklevel=False): + with tm.assert_produces_warning(expected_warning=expected_warning, + check_stacklevel=False): ser = Series(values, [0, 'y']) self._check_roundtrip(ser, func) - with tm.assert_produces_warning(expected_warning=expected_warning, check_stacklevel=False): + with tm.assert_produces_warning(expected_warning=expected_warning, + check_stacklevel=False): ser = Series(values, [datetime.datetime.today(), 0]) self._check_roundtrip(ser, func) - with tm.assert_produces_warning(expected_warning=expected_warning, check_stacklevel=False): + with tm.assert_produces_warning(expected_warning=expected_warning, + check_stacklevel=False): ser = Series(values, ['y', 0]) self._check_roundtrip(ser, func) - with tm.assert_produces_warning(expected_warning=expected_warning, check_stacklevel=False): + with tm.assert_produces_warning(expected_warning=expected_warning, + check_stacklevel=False): ser = Series(values, [datetime.date.today(), 'a']) self._check_roundtrip(ser, func) - with tm.assert_produces_warning(expected_warning=expected_warning, check_stacklevel=False): + with tm.assert_produces_warning(expected_warning=expected_warning, + check_stacklevel=False): ser = Series(values, [1.23, 'b']) self._check_roundtrip(ser, func) @@ -2806,58 +2900,61 @@ def 
test_wide_table(self): def test_select_with_dups(self): # single dtypes - df = DataFrame(np.random.randn(10,4),columns=['A','A','B','B']) - df.index = date_range('20130101 9:30',periods=10,freq='T') + df = DataFrame(np.random.randn(10, 4), columns=['A', 'A', 'B', 'B']) + df.index = date_range('20130101 9:30', periods=10, freq='T') with ensure_clean_store(self.path) as store: - store.append('df',df) + store.append('df', df) result = store.select('df') expected = df - assert_frame_equal(result,expected,by_blocks=True) + assert_frame_equal(result, expected, by_blocks=True) - result = store.select('df',columns=df.columns) + result = store.select('df', columns=df.columns) expected = df - assert_frame_equal(result,expected,by_blocks=True) + assert_frame_equal(result, expected, by_blocks=True) - result = store.select('df',columns=['A']) - expected = df.loc[:,['A']] - assert_frame_equal(result,expected) + result = store.select('df', columns=['A']) + expected = df.loc[:, ['A']] + assert_frame_equal(result, expected) # dups accross dtypes - df = concat([DataFrame(np.random.randn(10,4),columns=['A','A','B','B']), - DataFrame(np.random.randint(0,10,size=20).reshape(10,2),columns=['A','C'])], + df = concat([DataFrame(np.random.randn(10, 4), + columns=['A', 'A', 'B', 'B']), + DataFrame(np.random.randint(0, 10, size=20) + .reshape(10, 2), + columns=['A', 'C'])], axis=1) - df.index = date_range('20130101 9:30',periods=10,freq='T') + df.index = date_range('20130101 9:30', periods=10, freq='T') with ensure_clean_store(self.path) as store: - store.append('df',df) + store.append('df', df) result = store.select('df') expected = df - assert_frame_equal(result,expected,by_blocks=True) + assert_frame_equal(result, expected, by_blocks=True) - result = store.select('df',columns=df.columns) + result = store.select('df', columns=df.columns) expected = df - assert_frame_equal(result,expected,by_blocks=True) + assert_frame_equal(result, expected, by_blocks=True) - expected = df.loc[:,['A']] - 
result = store.select('df',columns=['A']) - assert_frame_equal(result,expected,by_blocks=True) + expected = df.loc[:, ['A']] + result = store.select('df', columns=['A']) + assert_frame_equal(result, expected, by_blocks=True) - expected = df.loc[:,['B','A']] - result = store.select('df',columns=['B','A']) - assert_frame_equal(result,expected,by_blocks=True) + expected = df.loc[:, ['B', 'A']] + result = store.select('df', columns=['B', 'A']) + assert_frame_equal(result, expected, by_blocks=True) # duplicates on both index and columns with ensure_clean_store(self.path) as store: - store.append('df',df) - store.append('df',df) + store.append('df', df) + store.append('df', df) - expected = df.loc[:,['B','A']] + expected = df.loc[:, ['B', 'A']] expected = concat([expected, expected]) - result = store.select('df',columns=['B','A']) - assert_frame_equal(result,expected,by_blocks=True) + result = store.select('df', columns=['B', 'A']) + assert_frame_equal(result, expected, by_blocks=True) def test_wide_table_dups(self): wp = tm.makePanel() @@ -2897,16 +2994,17 @@ def test_sparse_with_compression(self): # GH 2931 # make sparse dataframe - df = DataFrame(np.random.binomial(n=1, p=.01, size=(1e3, 10))).to_sparse(fill_value=0) + df = DataFrame(np.random.binomial( + n=1, p=.01, size=(1e3, 10))).to_sparse(fill_value=0) # case 1: store uncompressed self._check_double_roundtrip(df, tm.assert_frame_equal, - compression = False, + compression=False, check_frame_type=True) # case 2: store compressed (works) self._check_double_roundtrip(df, tm.assert_frame_equal, - compression = 'zlib', + compression='zlib', check_frame_type=True) # set one series to be completely sparse @@ -2914,12 +3012,13 @@ def test_sparse_with_compression(self): # case 3: store df with completely sparse series uncompressed self._check_double_roundtrip(df, tm.assert_frame_equal, - compression = False, + compression=False, check_frame_type=True) - # case 4: try storing df with completely sparse series compressed 
(fails) + # case 4: try storing df with completely sparse series compressed + # (fails) self._check_double_roundtrip(df, tm.assert_frame_equal, - compression = 'zlib', + compression='zlib', check_frame_type=True) def test_select(self): @@ -2938,9 +3037,10 @@ def test_select(self): store.select('wp2') # selection on the non-indexable with a large number of columns - wp = Panel( - np.random.randn(100, 100, 100), items=['Item%03d' % i for i in range(100)], - major_axis=date_range('1/1/2000', periods=100), minor_axis=['E%03d' % i for i in range(100)]) + wp = Panel(np.random.randn(100, 100, 100), + items=['Item%03d' % i for i in range(100)], + major_axis=date_range('1/1/2000', periods=100), + minor_axis=['E%03d' % i for i in range(100)]) _maybe_remove(store, 'wp') store.append('wp', wp) @@ -2990,9 +3090,10 @@ def test_select(self): def test_select_dtypes(self): with ensure_clean_store(self.path) as store: - # with a Timestamp data column (GH #2637) - df = DataFrame(dict(ts=bdate_range('2012-01-01', periods=300), A=np.random.randn(300))) + df = DataFrame(dict( + ts=bdate_range('2012-01-01', periods=300), + A=np.random.randn(300))) _maybe_remove(store, 'df') store.append('df', df, data_columns=['ts', 'A']) @@ -3001,21 +3102,25 @@ def test_select_dtypes(self): tm.assert_frame_equal(expected, result) # bool columns (GH #2849) - df = DataFrame(np.random.randn(5,2), columns =['A','B']) + df = DataFrame(np.random.randn(5, 2), columns=['A', 'B']) df['object'] = 'foo' - df.ix[4:5,'object'] = 'bar' + df.ix[4:5, 'object'] = 'bar' df['boolv'] = df['A'] > 0 _maybe_remove(store, 'df') - store.append('df', df, data_columns = True) + store.append('df', df, data_columns=True) - expected = df[df.boolv == True].reindex(columns=['A','boolv']) - for v in [True,'true',1]: - result = store.select('df', Term('boolv == %s' % str(v)), columns = ['A','boolv']) + expected = (df[df.boolv == True] # noqa + .reindex(columns=['A', 'boolv'])) + for v in [True, 'true', 1]: + result = store.select('df', 
Term( + 'boolv == %s' % str(v)), columns=['A', 'boolv']) tm.assert_frame_equal(expected, result) - expected = df[df.boolv == False ].reindex(columns=['A','boolv']) - for v in [False,'false',0]: - result = store.select('df', Term('boolv == %s' % str(v)), columns = ['A','boolv']) + expected = (df[df.boolv == False] # noqa + .reindex(columns=['A', 'boolv'])) + for v in [False, 'false', 0]: + result = store.select('df', Term( + 'boolv == %s' % str(v)), columns=['A', 'boolv']) tm.assert_frame_equal(expected, result) # integer index @@ -3024,55 +3129,57 @@ def test_select_dtypes(self): store.append('df_int', df) result = store.select( 'df_int', [Term("index<10"), Term("columns=['A']")]) - expected = df.reindex(index=list(df.index)[0:10],columns=['A']) + expected = df.reindex(index=list(df.index)[0:10], columns=['A']) tm.assert_frame_equal(expected, result) # float index df = DataFrame(dict(A=np.random.rand( - 20), B=np.random.rand(20), index=np.arange(20, dtype='f8'))) + 20), B=np.random.rand(20), index=np.arange(20, dtype='f8'))) _maybe_remove(store, 'df_float') store.append('df_float', df) result = store.select( 'df_float', [Term("index<10.0"), Term("columns=['A']")]) - expected = df.reindex(index=list(df.index)[0:10],columns=['A']) + expected = df.reindex(index=list(df.index)[0:10], columns=['A']) tm.assert_frame_equal(expected, result) with ensure_clean_store(self.path) as store: # floats w/o NaN - df = DataFrame(dict(cols = range(11), values = range(11)),dtype='float64') - df['cols'] = (df['cols']+10).apply(str) + df = DataFrame( + dict(cols=range(11), values=range(11)), dtype='float64') + df['cols'] = (df['cols'] + 10).apply(str) - store.append('df1',df,data_columns=True) + store.append('df1', df, data_columns=True) result = store.select( 'df1', where='values>2.0') - expected = df[df['values']>2.0] + expected = df[df['values'] > 2.0] tm.assert_frame_equal(expected, result) # floats with NaN df.iloc[0] = np.nan - expected = df[df['values']>2.0] + expected = 
df[df['values'] > 2.0] - store.append('df2',df,data_columns=True,index=False) + store.append('df2', df, data_columns=True, index=False) result = store.select( 'df2', where='values>2.0') tm.assert_frame_equal(expected, result) # https://github.com/PyTables/PyTables/issues/282 # bug in selection when 0th row has a np.nan and an index - #store.append('df3',df,data_columns=True) - #result = store.select( + # store.append('df3',df,data_columns=True) + # result = store.select( # 'df3', where='values>2.0') - #tm.assert_frame_equal(expected, result) + # tm.assert_frame_equal(expected, result) # not in first position float with NaN ok too - df = DataFrame(dict(cols = range(11), values = range(11)),dtype='float64') - df['cols'] = (df['cols']+10).apply(str) + df = DataFrame( + dict(cols=range(11), values=range(11)), dtype='float64') + df['cols'] = (df['cols'] + 10).apply(str) df.iloc[1] = np.nan - expected = df[df['values']>2.0] + expected = df[df['values'] > 2.0] - store.append('df4',df,data_columns=True) + store.append('df4', df, data_columns=True) result = store.select( 'df4', where='values>2.0') tm.assert_frame_equal(expected, result) @@ -3082,10 +3189,10 @@ def test_select_dtypes(self): with ensure_clean_store(self.path) as store: df = tm.makeDataFrame() - expected = df[df['A']>0] + expected = df[df['A'] > 0] store.append('df', df, data_columns=True) - np_zero = np.float64(0) + np_zero = np.float64(0) # noqa result = store.select('df', where=["A>np_zero"]) tm.assert_frame_equal(expected, result) @@ -3096,7 +3203,8 @@ def test_select_with_many_inputs(self): df = DataFrame(dict(ts=bdate_range('2012-01-01', periods=300), A=np.random.randn(300), B=range(300), - users = ['a']*50 + ['b']*50 + ['c']*100 + ['a%03d' % i for i in range(100)])) + users=['a'] * 50 + ['b'] * 50 + ['c'] * 100 + + ['a%03d' % i for i in range(100)])) _maybe_remove(store, 'df') store.append('df', df, data_columns=['ts', 'A', 'B', 'users']) @@ -3106,26 +3214,32 @@ def test_select_with_many_inputs(self): 
tm.assert_frame_equal(expected, result) # small selector - result = store.select('df', [Term("ts>=Timestamp('2012-02-01') & users=['a','b','c']")]) - expected = df[ (df.ts >= Timestamp('2012-02-01')) & df.users.isin(['a','b','c']) ] + result = store.select( + 'df', [Term("ts>=Timestamp('2012-02-01') & " + "users=['a','b','c']")]) + expected = df[(df.ts >= Timestamp('2012-02-01')) & + df.users.isin(['a', 'b', 'c'])] tm.assert_frame_equal(expected, result) # big selector along the columns - selector = [ 'a','b','c' ] + [ 'a%03d' % i for i in range(60) ] - result = store.select('df', [Term("ts>=Timestamp('2012-02-01')"),Term('users=selector')]) - expected = df[ (df.ts >= Timestamp('2012-02-01')) & df.users.isin(selector) ] + selector = ['a', 'b', 'c'] + ['a%03d' % i for i in range(60)] + result = store.select( + 'df', [Term("ts>=Timestamp('2012-02-01')"), + Term('users=selector')]) + expected = df[(df.ts >= Timestamp('2012-02-01')) & + df.users.isin(selector)] tm.assert_frame_equal(expected, result) - selector = range(100,200) + selector = range(100, 200) result = store.select('df', [Term('B=selector')]) - expected = df[ df.B.isin(selector) ] + expected = df[df.B.isin(selector)] tm.assert_frame_equal(expected, result) self.assertEqual(len(result), 100) # big selector along the index selector = Index(df.ts[0:100].values) - result = store.select('df', [Term('ts=selector')]) - expected = df[ df.ts.isin(selector.values) ] + result = store.select('df', [Term('ts=selector')]) + expected = df[df.ts.isin(selector.values)] tm.assert_frame_equal(expected, result) self.assertEqual(len(result), 100) @@ -3140,80 +3254,84 @@ def test_select_iterator(self): expected = store.select('df') - results = [ s for s in store.select('df',iterator=True) ] + results = [s for s in store.select('df', iterator=True)] result = concat(results) tm.assert_frame_equal(expected, result) - results = [ s for s in store.select('df',chunksize=100) ] + results = [s for s in store.select('df', 
chunksize=100)] self.assertEqual(len(results), 5) result = concat(results) tm.assert_frame_equal(expected, result) - results = [ s for s in store.select('df',chunksize=150) ] + results = [s for s in store.select('df', chunksize=150)] result = concat(results) tm.assert_frame_equal(result, expected) with ensure_clean_path(self.path) as path: df = tm.makeTimeDataFrame(500) - df.to_hdf(path,'df_non_table') - self.assertRaises(TypeError, read_hdf, path,'df_non_table',chunksize=100) - self.assertRaises(TypeError, read_hdf, path,'df_non_table',iterator=True) + df.to_hdf(path, 'df_non_table') + self.assertRaises(TypeError, read_hdf, path, + 'df_non_table', chunksize=100) + self.assertRaises(TypeError, read_hdf, path, + 'df_non_table', iterator=True) with ensure_clean_path(self.path) as path: df = tm.makeTimeDataFrame(500) - df.to_hdf(path,'df',format='table') + df.to_hdf(path, 'df', format='table') - results = [ s for s in read_hdf(path,'df',chunksize=100) ] + results = [s for s in read_hdf(path, 'df', chunksize=100)] result = concat(results) self.assertEqual(len(results), 5) tm.assert_frame_equal(result, df) - tm.assert_frame_equal(result, read_hdf(path,'df')) + tm.assert_frame_equal(result, read_hdf(path, 'df')) # multiple with ensure_clean_store(self.path) as store: df1 = tm.makeTimeDataFrame(500) - store.append('df1',df1,data_columns=True) - df2 = tm.makeTimeDataFrame(500).rename(columns=lambda x: "%s_2" % x) + store.append('df1', df1, data_columns=True) + df2 = tm.makeTimeDataFrame(500).rename( + columns=lambda x: "%s_2" % x) df2['foo'] = 'bar' - store.append('df2',df2) + store.append('df2', df2) df = concat([df1, df2], axis=1) # full selection expected = store.select_as_multiple( ['df1', 'df2'], selector='df1') - results = [ s for s in store.select_as_multiple( - ['df1', 'df2'], selector='df1', chunksize=150) ] + results = [s for s in store.select_as_multiple( + ['df1', 'df2'], selector='df1', chunksize=150)] result = concat(results) tm.assert_frame_equal(expected, 
result) # where selection - #expected = store.select_as_multiple( + # expected = store.select_as_multiple( # ['df1', 'df2'], where= Term('A>0'), selector='df1') - #results = [] - #for s in store.select_as_multiple( - # ['df1', 'df2'], where= Term('A>0'), selector='df1', chunksize=25): + # results = [] + # for s in store.select_as_multiple( + # ['df1', 'df2'], where= Term('A>0'), selector='df1', + # chunksize=25): # results.append(s) - #result = concat(results) - #tm.assert_frame_equal(expected, result) + # result = concat(results) + # tm.assert_frame_equal(expected, result) def test_select_iterator_complete_8014(self): # GH 8014 # using iterator and where clause - chunksize=1e4 + chunksize = 1e4 # no iterator with ensure_clean_store(self.path) as store: expected = tm.makeTimeDataFrame(100064, 'S') _maybe_remove(store, 'df') - store.append('df',expected) + store.append('df', expected) beg_dt = expected.index[0] end_dt = expected.index[-1] @@ -3225,19 +3343,19 @@ def test_select_iterator_complete_8014(self): # select w/o iterator and where clause, single term, begin # of range, works where = "index >= '%s'" % beg_dt - result = store.select('df',where=where) + result = store.select('df', where=where) tm.assert_frame_equal(expected, result) # select w/o iterator and where clause, single term, end # of range, works where = "index <= '%s'" % end_dt - result = store.select('df',where=where) + result = store.select('df', where=where) tm.assert_frame_equal(expected, result) # select w/o iterator and where clause, inclusive range, # works where = "index >= '%s' & index <= '%s'" % (beg_dt, end_dt) - result = store.select('df',where=where) + result = store.select('df', where=where) tm.assert_frame_equal(expected, result) # with iterator, full range @@ -3245,31 +3363,34 @@ def test_select_iterator_complete_8014(self): expected = tm.makeTimeDataFrame(100064, 'S') _maybe_remove(store, 'df') - store.append('df',expected) + store.append('df', expected) beg_dt = expected.index[0] 
end_dt = expected.index[-1] # select w/iterator and no where clause works - results = [ s for s in store.select('df',chunksize=chunksize) ] + results = [s for s in store.select('df', chunksize=chunksize)] result = concat(results) tm.assert_frame_equal(expected, result) # select w/iterator and where clause, single term, begin of range where = "index >= '%s'" % beg_dt - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] result = concat(results) tm.assert_frame_equal(expected, result) # select w/iterator and where clause, single term, end of range where = "index <= '%s'" % end_dt - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] result = concat(results) tm.assert_frame_equal(expected, result) # select w/iterator and where clause, inclusive range where = "index >= '%s' & index <= '%s'" % (beg_dt, end_dt) - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] result = concat(results) tm.assert_frame_equal(expected, result) @@ -3277,37 +3398,41 @@ def test_select_iterator_non_complete_8014(self): # GH 8014 # using iterator and where clause - chunksize=1e4 + chunksize = 1e4 # with iterator, non complete range with ensure_clean_store(self.path) as store: expected = tm.makeTimeDataFrame(100064, 'S') _maybe_remove(store, 'df') - store.append('df',expected) + store.append('df', expected) beg_dt = expected.index[1] end_dt = expected.index[-2] # select w/iterator and where clause, single term, begin of range where = "index >= '%s'" % beg_dt - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] result = concat(results) rexpected = expected[expected.index >= 
beg_dt] tm.assert_frame_equal(rexpected, result) # select w/iterator and where clause, single term, end of range where = "index <= '%s'" % end_dt - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] result = concat(results) rexpected = expected[expected.index <= end_dt] tm.assert_frame_equal(rexpected, result) # select w/iterator and where clause, inclusive range where = "index >= '%s' & index <= '%s'" % (beg_dt, end_dt) - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] result = concat(results) - rexpected = expected[(expected.index >= beg_dt) & (expected.index <= end_dt)] + rexpected = expected[(expected.index >= beg_dt) & + (expected.index <= end_dt)] tm.assert_frame_equal(rexpected, result) # with iterator, empty where @@ -3315,13 +3440,14 @@ def test_select_iterator_non_complete_8014(self): expected = tm.makeTimeDataFrame(100064, 'S') _maybe_remove(store, 'df') - store.append('df',expected) + store.append('df', expected) end_dt = expected.index[-1] # select w/iterator and where clause, single term, begin of range where = "index > '%s'" % end_dt - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] self.assertEqual(0, len(results)) def test_select_iterator_many_empty_frames(self): @@ -3329,28 +3455,30 @@ def test_select_iterator_many_empty_frames(self): # GH 8014 # using iterator and where clause can return many empty # frames. 
- chunksize=int(1e4) + chunksize = int(1e4) # with iterator, range limited to the first chunk with ensure_clean_store(self.path) as store: expected = tm.makeTimeDataFrame(100000, 'S') _maybe_remove(store, 'df') - store.append('df',expected) + store.append('df', expected) beg_dt = expected.index[0] - end_dt = expected.index[chunksize-1] + end_dt = expected.index[chunksize - 1] # select w/iterator and where clause, single term, begin of range where = "index >= '%s'" % beg_dt - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] result = concat(results) rexpected = expected[expected.index >= beg_dt] tm.assert_frame_equal(rexpected, result) # select w/iterator and where clause, single term, end of range where = "index <= '%s'" % end_dt - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] tm.assert_equal(1, len(results)) result = concat(results) @@ -3359,12 +3487,14 @@ def test_select_iterator_many_empty_frames(self): # select w/iterator and where clause, inclusive range where = "index >= '%s' & index <= '%s'" % (beg_dt, end_dt) - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] # should be 1, is 10 tm.assert_equal(1, len(results)) result = concat(results) - rexpected = expected[(expected.index >= beg_dt) & (expected.index <= end_dt)] + rexpected = expected[(expected.index >= beg_dt) & + (expected.index <= end_dt)] tm.assert_frame_equal(rexpected, result) # select w/iterator and where clause which selects @@ -3375,74 +3505,88 @@ def test_select_iterator_many_empty_frames(self): # True. 
where = "index <= '%s' & index >= '%s'" % (beg_dt, end_dt) - results = [ s for s in store.select('df',where=where,chunksize=chunksize) ] + results = [s for s in store.select( + 'df', where=where, chunksize=chunksize)] # should be [] tm.assert_equal(0, len(results)) - def test_retain_index_attributes(self): # GH 3499, losing frequency info on index recreation - df = DataFrame(dict(A = Series(lrange(3), - index=date_range('2000-1-1',periods=3,freq='H')))) + df = DataFrame(dict( + A=Series(lrange(3), + index=date_range('2000-1-1', periods=3, freq='H')))) with ensure_clean_store(self.path) as store: - _maybe_remove(store,'data') + _maybe_remove(store, 'data') store.put('data', df, format='table') result = store.get('data') - tm.assert_frame_equal(df,result) - - for attr in ['freq','tz','name']: - for idx in ['index','columns']: - self.assertEqual(getattr(getattr(df,idx),attr,None), - getattr(getattr(result,idx),attr,None)) + tm.assert_frame_equal(df, result) + for attr in ['freq', 'tz', 'name']: + for idx in ['index', 'columns']: + self.assertEqual(getattr(getattr(df, idx), attr, None), + getattr(getattr(result, idx), attr, None)) # try to append a table with a different frequency - with tm.assert_produces_warning(expected_warning=AttributeConflictWarning): - df2 = DataFrame(dict(A = Series(lrange(3), - index=date_range('2002-1-1',periods=3,freq='D')))) - store.append('data',df2) + with tm.assert_produces_warning( + expected_warning=AttributeConflictWarning): + df2 = DataFrame(dict( + A=Series(lrange(3), + index=date_range('2002-1-1', + periods=3, freq='D')))) + store.append('data', df2) self.assertIsNone(store.get_storer('data').info['index']['freq']) # this is ok - _maybe_remove(store,'df2') - df2 = DataFrame(dict(A = Series(lrange(3), - index=[Timestamp('20010101'),Timestamp('20010102'),Timestamp('20020101')]))) - store.append('df2',df2) - df3 = DataFrame(dict(A = Series(lrange(3),index=date_range('2002-1-1',periods=3,freq='D')))) - store.append('df2',df3) + 
_maybe_remove(store, 'df2') + df2 = DataFrame(dict( + A=Series(lrange(3), + index=[Timestamp('20010101'), Timestamp('20010102'), + Timestamp('20020101')]))) + store.append('df2', df2) + df3 = DataFrame(dict( + A=Series(lrange(3), + index=date_range('2002-1-1', periods=3, + freq='D')))) + store.append('df2', df3) def test_retain_index_attributes2(self): - with ensure_clean_path(self.path) as path: - expected_warning = Warning if PY35 else AttributeConflictWarning - with tm.assert_produces_warning(expected_warning=expected_warning, check_stacklevel=False): - - df = DataFrame(dict(A = Series(lrange(3), index=date_range('2000-1-1',periods=3,freq='H')))) - df.to_hdf(path,'data',mode='w',append=True) - df2 = DataFrame(dict(A = Series(lrange(3), index=date_range('2002-1-1',periods=3,freq='D')))) - df2.to_hdf(path,'data',append=True) - - idx = date_range('2000-1-1',periods=3,freq='H') + with tm.assert_produces_warning(expected_warning=expected_warning, + check_stacklevel=False): + + df = DataFrame(dict( + A=Series(lrange(3), + index=date_range('2000-1-1', + periods=3, freq='H')))) + df.to_hdf(path, 'data', mode='w', append=True) + df2 = DataFrame(dict( + A=Series(lrange(3), + index=date_range('2002-1-1', periods=3, + freq='D')))) + df2.to_hdf(path, 'data', append=True) + + idx = date_range('2000-1-1', periods=3, freq='H') idx.name = 'foo' - df = DataFrame(dict(A = Series(lrange(3), index=idx))) - df.to_hdf(path,'data',mode='w',append=True) + df = DataFrame(dict(A=Series(lrange(3), index=idx))) + df.to_hdf(path, 'data', mode='w', append=True) - self.assertEqual(read_hdf(path,'data').index.name, 'foo') + self.assertEqual(read_hdf(path, 'data').index.name, 'foo') - with tm.assert_produces_warning(expected_warning=expected_warning, check_stacklevel=False): + with tm.assert_produces_warning(expected_warning=expected_warning, + check_stacklevel=False): - idx2 = date_range('2001-1-1',periods=3,freq='H') + idx2 = date_range('2001-1-1', periods=3, freq='H') idx2.name = 'bar' - df2 
= DataFrame(dict(A = Series(lrange(3), index=idx2))) - df2.to_hdf(path,'data',append=True) + df2 = DataFrame(dict(A=Series(lrange(3), index=idx2))) + df2.to_hdf(path, 'data', append=True) - self.assertIsNone(read_hdf(path,'data').index.name) + self.assertIsNone(read_hdf(path, 'data').index.name) def test_panel_select(self): @@ -3469,7 +3613,7 @@ def test_frame_select(self): df = tm.makeTimeDataFrame() with ensure_clean_store(self.path) as store: - store.put('frame', df,format='table') + store.put('frame', df, format='table') date = df.index[len(df) // 2] crit1 = Term('index>=date') @@ -3502,107 +3646,117 @@ def test_frame_select_complex(self): df = tm.makeTimeDataFrame() df['string'] = 'foo' - df.loc[df.index[0:4],'string'] = 'bar' + df.loc[df.index[0:4], 'string'] = 'bar' with ensure_clean_store(self.path) as store: store.put('df', df, format='table', data_columns=['string']) # empty result = store.select('df', 'index>df.index[3] & string="bar"') - expected = df.loc[(df.index>df.index[3]) & (df.string=='bar')] + expected = df.loc[(df.index > df.index[3]) & (df.string == 'bar')] tm.assert_frame_equal(result, expected) result = store.select('df', 'index>df.index[3] & string="foo"') - expected = df.loc[(df.index>df.index[3]) & (df.string=='foo')] + expected = df.loc[(df.index > df.index[3]) & (df.string == 'foo')] tm.assert_frame_equal(result, expected) # or result = store.select('df', 'index>df.index[3] | string="bar"') - expected = df.loc[(df.index>df.index[3]) | (df.string=='bar')] + expected = df.loc[(df.index > df.index[3]) | (df.string == 'bar')] tm.assert_frame_equal(result, expected) - result = store.select('df', '(index>df.index[3] & index<=df.index[6]) | string="bar"') - expected = df.loc[((df.index>df.index[3]) & (df.index<=df.index[6])) | (df.string=='bar')] + result = store.select('df', '(index>df.index[3] & ' + 'index<=df.index[6]) | string="bar"') + expected = df.loc[((df.index > df.index[3]) & ( + df.index <= df.index[6])) | (df.string == 'bar')] 
tm.assert_frame_equal(result, expected) # invert result = store.select('df', 'string!="bar"') - expected = df.loc[df.string!='bar'] + expected = df.loc[df.string != 'bar'] tm.assert_frame_equal(result, expected) # invert not implemented in numexpr :( - self.assertRaises(NotImplementedError, store.select, 'df', '~(string="bar")') + self.assertRaises(NotImplementedError, + store.select, 'df', '~(string="bar")') # invert ok for filters result = store.select('df', "~(columns=['A','B'])") - expected = df.loc[:,df.columns.difference(['A','B'])] + expected = df.loc[:, df.columns.difference(['A', 'B'])] tm.assert_frame_equal(result, expected) # in - result = store.select('df', "index>df.index[3] & columns in ['A','B']") - expected = df.loc[df.index>df.index[3]].reindex(columns=['A','B']) + result = store.select( + 'df', "index>df.index[3] & columns in ['A','B']") + expected = df.loc[df.index > df.index[3]].reindex(columns=[ + 'A', 'B']) tm.assert_frame_equal(result, expected) def test_frame_select_complex2(self): - with ensure_clean_path(['parms.hdf','hist.hdf']) as paths: + with ensure_clean_path(['parms.hdf', 'hist.hdf']) as paths: pp, hh = paths # use non-trivial selection criteria - parms = DataFrame({ 'A' : [1,1,2,2,3] }) - parms.to_hdf(pp,'df',mode='w',format='table',data_columns=['A']) + parms = DataFrame({'A': [1, 1, 2, 2, 3]}) + parms.to_hdf(pp, 'df', mode='w', + format='table', data_columns=['A']) - selection = read_hdf(pp,'df',where='A=[2,3]') - hist = DataFrame(np.random.randn(25,1),columns=['data'], - index=MultiIndex.from_tuples([ (i,j) for i in range(5) for j in range(5) ], - names=['l1','l2'])) + selection = read_hdf(pp, 'df', where='A=[2,3]') + hist = DataFrame(np.random.randn(25, 1), + columns=['data'], + index=MultiIndex.from_tuples( + [(i, j) for i in range(5) + for j in range(5)], + names=['l1', 'l2'])) - hist.to_hdf(hh,'df',mode='w',format='table') + hist.to_hdf(hh, 'df', mode='w', format='table') - expected = 
read_hdf(hh,'df',where=Term('l1','=',[2,3,4])) + expected = read_hdf(hh, 'df', where=Term('l1', '=', [2, 3, 4])) # list like - result = read_hdf(hh,'df',where=Term('l1','=',selection.index.tolist())) + result = read_hdf(hh, 'df', where=Term( + 'l1', '=', selection.index.tolist())) assert_frame_equal(result, expected) - l = selection.index.tolist() + l = selection.index.tolist() # noqa # sccope with list like store = HDFStore(hh) - result = store.select('df',where='l1=l') + result = store.select('df', where='l1=l') assert_frame_equal(result, expected) store.close() - result = read_hdf(hh,'df',where='l1=l') + result = read_hdf(hh, 'df', where='l1=l') assert_frame_equal(result, expected) # index - index = selection.index - result = read_hdf(hh,'df',where='l1=index') + index = selection.index # noqa + result = read_hdf(hh, 'df', where='l1=index') assert_frame_equal(result, expected) - result = read_hdf(hh,'df',where='l1=selection.index') + result = read_hdf(hh, 'df', where='l1=selection.index') assert_frame_equal(result, expected) - result = read_hdf(hh,'df',where='l1=selection.index.tolist()') + result = read_hdf(hh, 'df', where='l1=selection.index.tolist()') assert_frame_equal(result, expected) - result = read_hdf(hh,'df',where='l1=list(selection.index)') + result = read_hdf(hh, 'df', where='l1=list(selection.index)') assert_frame_equal(result, expected) # sccope with index store = HDFStore(hh) - result = store.select('df',where='l1=index') + result = store.select('df', where='l1=index') assert_frame_equal(result, expected) - result = store.select('df',where='l1=selection.index') + result = store.select('df', where='l1=selection.index') assert_frame_equal(result, expected) - result = store.select('df',where='l1=selection.index.tolist()') + result = store.select('df', where='l1=selection.index.tolist()') assert_frame_equal(result, expected) - result = store.select('df',where='l1=list(selection.index)') + result = store.select('df', where='l1=list(selection.index)') 
assert_frame_equal(result, expected) store.close() @@ -3617,10 +3771,12 @@ def test_invalid_filtering(self): store.put('df', df, format='table') # not implemented - self.assertRaises(NotImplementedError, store.select, 'df', "columns=['A'] | columns=['B']") + self.assertRaises(NotImplementedError, store.select, + 'df', "columns=['A'] | columns=['B']") # in theory we could deal with this - self.assertRaises(NotImplementedError, store.select, 'df', "columns=['A','B'] & columns=['C']") + self.assertRaises(NotImplementedError, store.select, + 'df', "columns=['A','B'] & columns=['C']") def test_string_select(self): # GH 2973 @@ -3630,44 +3786,44 @@ def test_string_select(self): # test string ==/!= df['x'] = 'none' - df.ix[2:7,'x'] = '' + df.ix[2:7, 'x'] = '' - store.append('df',df,data_columns=['x']) + store.append('df', df, data_columns=['x']) - result = store.select('df',Term('x=none')) + result = store.select('df', Term('x=none')) expected = df[df.x == 'none'] - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) try: - result = store.select('df',Term('x!=none')) + result = store.select('df', Term('x!=none')) expected = df[df.x != 'none'] - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) except Exception as detail: com.pprint_thing("[{0}]".format(detail)) com.pprint_thing(store) com.pprint_thing(expected) df2 = df.copy() - df2.loc[df2.x=='','x'] = np.nan + df2.loc[df2.x == '', 'x'] = np.nan - store.append('df2',df2,data_columns=['x']) - result = store.select('df2',Term('x!=none')) + store.append('df2', df2, data_columns=['x']) + result = store.select('df2', Term('x!=none')) expected = df2[isnull(df2.x)] - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # int ==/!= df['int'] = 1 - df.ix[2:7,'int'] = 2 + df.ix[2:7, 'int'] = 2 - store.append('df3',df,data_columns=['int']) + store.append('df3', df, data_columns=['int']) - result = store.select('df3',Term('int=2')) - expected = df[df.int==2] - 
assert_frame_equal(result,expected) + result = store.select('df3', Term('int=2')) + expected = df[df.int == 2] + assert_frame_equal(result, expected) - result = store.select('df3',Term('int!=2')) - expected = df[df.int!=2] - assert_frame_equal(result,expected) + result = store.select('df3', Term('int!=2')) + expected = df[df.int != 2] + assert_frame_equal(result, expected) def test_read_column(self): @@ -3681,7 +3837,7 @@ def test_read_column(self): self.assertRaises(KeyError, store.select_column, 'df', 'foo') def f(): - store.select_column('df', 'index', where = ['index>5']) + store.select_column('df', 'index', where=['index>5']) self.assertRaises(Exception, f) # valid @@ -3734,7 +3890,6 @@ def f(): result = store.select_column('df4', 'B') tm.assert_series_equal(result, expected) - def test_coordinates(self): df = tm.makeTimeDataFrame() @@ -3745,7 +3900,7 @@ def test_coordinates(self): # all c = store.select_as_coordinates('df') - assert((c.values == np.arange(len(df.index))).all() == True) + assert((c.values == np.arange(len(df.index))).all()) # get coordinates back & test vs frame _maybe_remove(store, 'df') @@ -3753,13 +3908,13 @@ def test_coordinates(self): df = DataFrame(dict(A=lrange(5), B=lrange(5))) store.append('df', df) c = store.select_as_coordinates('df', ['index<3']) - assert((c.values == np.arange(3)).all() == True) + assert((c.values == np.arange(3)).all()) result = store.select('df', where=c) expected = df.ix[0:2, :] tm.assert_frame_equal(result, expected) c = store.select_as_coordinates('df', ['index>=3', 'index<=4']) - assert((c.values == np.arange(2) + 3).all() == True) + assert((c.values == np.arange(2) + 3).all()) result = store.select('df', where=c) expected = df.ix[3:4, :] tm.assert_frame_equal(result, expected) @@ -3785,50 +3940,55 @@ def test_coordinates(self): # pass array/mask as the coordinates with ensure_clean_store(self.path) as store: - df = DataFrame(np.random.randn(1000,2),index=date_range('20000101',periods=1000)) - 
store.append('df',df) - c = store.select_column('df','index') - where = c[DatetimeIndex(c).month==5].index + df = DataFrame(np.random.randn(1000, 2), + index=date_range('20000101', periods=1000)) + store.append('df', df) + c = store.select_column('df', 'index') + where = c[DatetimeIndex(c).month == 5].index expected = df.iloc[where] # locations - result = store.select('df',where=where) - tm.assert_frame_equal(result,expected) + result = store.select('df', where=where) + tm.assert_frame_equal(result, expected) # boolean - result = store.select('df',where=where) - tm.assert_frame_equal(result,expected) + result = store.select('df', where=where) + tm.assert_frame_equal(result, expected) # invalid - self.assertRaises(ValueError, store.select, 'df',where=np.arange(len(df),dtype='float64')) - self.assertRaises(ValueError, store.select, 'df',where=np.arange(len(df)+1)) - self.assertRaises(ValueError, store.select, 'df',where=np.arange(len(df)),start=5) - self.assertRaises(ValueError, store.select, 'df',where=np.arange(len(df)),start=5,stop=10) + self.assertRaises(ValueError, store.select, 'df', + where=np.arange(len(df), dtype='float64')) + self.assertRaises(ValueError, store.select, 'df', + where=np.arange(len(df) + 1)) + self.assertRaises(ValueError, store.select, 'df', + where=np.arange(len(df)), start=5) + self.assertRaises(ValueError, store.select, 'df', + where=np.arange(len(df)), start=5, stop=10) # selection with filter - selection = date_range('20000101',periods=500) + selection = date_range('20000101', periods=500) result = store.select('df', where='index in selection') expected = df[df.index.isin(selection)] - tm.assert_frame_equal(result,expected) + tm.assert_frame_equal(result, expected) # list - df = DataFrame(np.random.randn(10,2)) - store.append('df2',df) - result = store.select('df2',where=[0,3,5]) - expected = df.iloc[[0,3,5]] - tm.assert_frame_equal(result,expected) + df = DataFrame(np.random.randn(10, 2)) + store.append('df2', df) + result = 
store.select('df2', where=[0, 3, 5]) + expected = df.iloc[[0, 3, 5]] + tm.assert_frame_equal(result, expected) # boolean where = [True] * 10 where[-2] = False - result = store.select('df2',where=where) + result = store.select('df2', where=where) expected = df.loc[where] - tm.assert_frame_equal(result,expected) + tm.assert_frame_equal(result, expected) # start/stop result = store.select('df2', start=5, stop=10) expected = df[5:10] - tm.assert_frame_equal(result,expected) + tm.assert_frame_equal(result, expected) def test_append_to_multiple(self): df1 = tm.makeTimeDataFrame() @@ -3840,7 +4000,8 @@ def test_append_to_multiple(self): # exceptions self.assertRaises(ValueError, store.append_to_multiple, - {'df1': ['A', 'B'], 'df2': None}, df, selector='df3') + {'df1': ['A', 'B'], 'df2': None}, df, + selector='df3') self.assertRaises(ValueError, store.append_to_multiple, {'df1': None, 'df2': None}, df, selector='df3') self.assertRaises( @@ -3901,11 +4062,13 @@ def test_select_as_multiple(self): self.assertRaises(Exception, store.select_as_multiple, [None], where=['A>0', 'B>0'], selector='df1') self.assertRaises(KeyError, store.select_as_multiple, - ['df1','df3'], where=['A>0', 'B>0'], selector='df1') + ['df1', 'df3'], where=['A>0', 'B>0'], + selector='df1') self.assertRaises(KeyError, store.select_as_multiple, ['df3'], where=['A>0', 'B>0'], selector='df1') self.assertRaises(KeyError, store.select_as_multiple, - ['df1','df2'], where=['A>0', 'B>0'], selector='df4') + ['df1', 'df2'], where=['A>0', 'B>0'], + selector='df4') # default select result = store.select('df1', ['A>0', 'B>0']) @@ -3933,26 +4096,30 @@ def test_select_as_multiple(self): # test excpection for diff rows store.append('df3', tm.makeTimeDataFrame(nper=50)) self.assertRaises(ValueError, store.select_as_multiple, - ['df1','df3'], where=['A>0', 'B>0'], selector='df1') + ['df1', 'df3'], where=['A>0', 'B>0'], + selector='df1') def test_nan_selection_bug_4858(self): # GH 4858; nan selection bug, only works for 
pytables >= 3.1 if LooseVersion(tables.__version__) < '3.1.0': - raise nose.SkipTest('tables version does not support fix for nan selection bug: GH 4858') + raise nose.SkipTest('tables version does not support fix for nan ' + 'selection bug: GH 4858') with ensure_clean_store(self.path) as store: - df = DataFrame(dict(cols = range(6), values = range(6)), dtype='float64') - df['cols'] = (df['cols']+10).apply(str) + df = DataFrame(dict(cols=range(6), values=range(6)), + dtype='float64') + df['cols'] = (df['cols'] + 10).apply(str) df.iloc[0] = np.nan - expected = DataFrame(dict(cols = ['13.0','14.0','15.0'], values = [3.,4.,5.]), index=[3,4,5]) + expected = DataFrame(dict(cols=['13.0', '14.0', '15.0'], values=[ + 3., 4., 5.]), index=[3, 4, 5]) # write w/o the index on that particular column - store.append('df',df, data_columns=True,index=['cols']) - result = store.select('df',where='values>2.0') - assert_frame_equal(result,expected) + store.append('df', df, data_columns=True, index=['cols']) + result = store.select('df', where='values>2.0') + assert_frame_equal(result, expected) def test_start_stop(self): @@ -4031,7 +4198,7 @@ def test_multiple_open_close(self): with ensure_clean_path(self.path) as path: df = tm.makeDataFrame() - df.to_hdf(path,'df',mode='w',format='table') + df.to_hdf(path, 'df', mode='w', format='table') # single store = HDFStore(path) @@ -4047,6 +4214,7 @@ def test_multiple_open_close(self): # multiples store1 = HDFStore(path) + def f(): HDFStore(path) self.assertRaises(ValueError, f) @@ -4076,11 +4244,11 @@ def f(): self.assertFalse(store2.is_open) # nested close - store = HDFStore(path,mode='w') - store.append('df',df) + store = HDFStore(path, mode='w') + store.append('df', df) store2 = HDFStore(path) - store2.append('df2',df) + store2.append('df2', df) store2.close() self.assertIn('CLOSED', str(store2)) self.assertFalse(store2.is_open) @@ -4090,7 +4258,7 @@ def f(): self.assertFalse(store.is_open) # double closing - store = 
HDFStore(path,mode='w') + store = HDFStore(path, mode='w') store.append('df', df) store2 = HDFStore(path) @@ -4106,16 +4274,16 @@ def f(): with ensure_clean_path(self.path) as path: df = tm.makeDataFrame() - df.to_hdf(path,'df',mode='w',format='table') + df.to_hdf(path, 'df', mode='w', format='table') store = HDFStore(path) store.close() self.assertRaises(ClosedFileError, store.keys) - self.assertRaises(ClosedFileError, lambda : 'df' in store) - self.assertRaises(ClosedFileError, lambda : len(store)) - self.assertRaises(ClosedFileError, lambda : store['df']) - self.assertRaises(ClosedFileError, lambda : store.df) + self.assertRaises(ClosedFileError, lambda: 'df' in store) + self.assertRaises(ClosedFileError, lambda: len(store)) + self.assertRaises(ClosedFileError, lambda: store['df']) + self.assertRaises(ClosedFileError, lambda: store.df) self.assertRaises(ClosedFileError, store.select, 'df') self.assertRaises(ClosedFileError, store.get, 'df') self.assertRaises(ClosedFileError, store.append, 'df2', df) @@ -4129,7 +4297,9 @@ def f(): def test_pytables_native_read(self): - with ensure_clean_store(tm.get_data_path('legacy_hdf/pytables_native.h5'), mode='r') as store: + with ensure_clean_store( + tm.get_data_path('legacy_hdf/pytables_native.h5'), + mode='r') as store: d2 = store['detector/readout'] self.assertIsInstance(d2, DataFrame) @@ -4138,13 +4308,17 @@ def test_pytables_native2_read(self): if PY35 and is_platform_windows(): raise nose.SkipTest("native2 read fails oddly on windows / 3.5") - with ensure_clean_store(tm.get_data_path('legacy_hdf/pytables_native2.h5'), mode='r') as store: + with ensure_clean_store( + tm.get_data_path('legacy_hdf/pytables_native2.h5'), + mode='r') as store: str(store) d1 = store['detector'] self.assertIsInstance(d1, DataFrame) def test_legacy_read(self): - with ensure_clean_store(tm.get_data_path('legacy_hdf/legacy.h5'), mode='r') as store: + with ensure_clean_store( + tm.get_data_path('legacy_hdf/legacy.h5'), + mode='r') as store: 
store['a'] store['b'] store['c'] @@ -4152,7 +4326,9 @@ def test_legacy_read(self): def test_legacy_table_read(self): # legacy table types - with ensure_clean_store(tm.get_data_path('legacy_hdf/legacy_table.h5'), mode='r') as store: + with ensure_clean_store( + tm.get_data_path('legacy_hdf/legacy_table.h5'), + mode='r') as store: store.select('df1') store.select('df2') store.select('wp1') @@ -4161,7 +4337,8 @@ def test_legacy_table_read(self): store.select('df2', typ='legacy_frame') # old version warning - with tm.assert_produces_warning(expected_warning=IncompatibilityWarning): + with tm.assert_produces_warning( + expected_warning=IncompatibilityWarning): self.assertRaises( Exception, store.select, 'wp1', Term('minor_axis=B')) @@ -4172,7 +4349,9 @@ def test_legacy_table_read(self): def test_legacy_0_10_read(self): # legacy from 0.10 - with ensure_clean_store(tm.get_data_path('legacy_hdf/legacy_0.10.h5'), mode='r') as store: + with ensure_clean_store( + tm.get_data_path('legacy_hdf/legacy_0.10.h5'), + mode='r') as store: str(store) for k in store.keys(): store.select(k) @@ -4194,20 +4373,20 @@ def test_legacy_0_11_read(self): def test_copy(self): - def do_copy(f = None, new_f = None, keys = None, propindexes = True, **kwargs): + def do_copy(f=None, new_f=None, keys=None, propindexes=True, **kwargs): try: if f is None: f = tm.get_data_path(os.path.join('legacy_hdf', 'legacy_0.10.h5')) - store = HDFStore(f, 'r') if new_f is None: import tempfile fd, new_f = tempfile.mkstemp() - tstore = store.copy(new_f, keys = keys, propindexes = propindexes, **kwargs) + tstore = store.copy( + new_f, keys=keys, propindexes=propindexes, **kwargs) # check keys if keys is None: @@ -4238,8 +4417,8 @@ def do_copy(f = None, new_f = None, keys = None, propindexes = True, **kwargs): safe_remove(new_f) do_copy() - do_copy(keys = ['/a','/b','/df1_mixed']) - do_copy(propindexes = False) + do_copy(keys=['/a', '/b', '/df1_mixed']) + do_copy(propindexes=False) # new table df = tm.makeDataFrame() 
@@ -4247,17 +4426,18 @@ def do_copy(f = None, new_f = None, keys = None, propindexes = True, **kwargs): try: path = create_tempfile(self.path) st = HDFStore(path) - st.append('df', df, data_columns = ['A']) + st.append('df', df, data_columns=['A']) st.close() - do_copy(f = path) - do_copy(f = path, propindexes = False) + do_copy(f=path) + do_copy(f=path, propindexes=False) finally: safe_remove(path) def test_legacy_table_write(self): raise nose.SkipTest("cannot write legacy tables") - store = HDFStore(tm.get_data_path('legacy_hdf/legacy_table_%s.h5' % pandas.__version__), 'a') + store = HDFStore(tm.get_data_path( + 'legacy_hdf/legacy_table_%s.h5' % pandas.__version__), 'a') df = tm.makeDataFrame() wp = tm.makePanel() @@ -4271,8 +4451,8 @@ def test_legacy_table_write(self): columns=['A', 'B', 'C']) store.append('mi', df) - df = DataFrame(dict(A = 'foo', B = 'bar'),index=lrange(10)) - store.append('df', df, data_columns = ['B'], min_itemsize={'A' : 200 }) + df = DataFrame(dict(A='foo', B='bar'), index=lrange(10)) + store.append('df', df, data_columns=['B'], min_itemsize={'A': 200}) store.append('wp', wp) store.close() @@ -4330,13 +4510,13 @@ def test_tseries_indices_frame(self): def test_unicode_index(self): unicode_values = [u('\u03c3'), u('\u03c3\u03c3')] + def f(): s = Series(np.random.randn(len(unicode_values)), unicode_values) self._check_roundtrip(s, tm.assert_series_equal) compat_assert_produces_warning(PerformanceWarning, f) - def test_unicode_longer_encoded(self): # GH 11234 char = '\u0394' @@ -4384,7 +4564,8 @@ def test_append_with_diff_col_name_types_raises_value_error(self): store.append(name, d) def test_query_with_nested_special_character(self): - df = DataFrame({'a': ['a', 'a', 'c', 'b', 'test & test', 'c' , 'b', 'e'], + df = DataFrame({'a': ['a', 'a', 'c', 'b', + 'test & test', 'c', 'b', 'e'], 'b': [1, 2, 3, 4, 5, 6, 7, 8]}) expected = df[df.a == 'test & test'] with ensure_clean_store(self.path) as store: @@ -4398,38 +4579,40 @@ def 
test_categorical(self): # basic _maybe_remove(store, 's') - s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=['a','b','c','d'], ordered=False)) + s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=[ + 'a', 'b', 'c', 'd'], ordered=False)) store.append('s', s, format='table') result = store.select('s') tm.assert_series_equal(s, result) _maybe_remove(store, 's_ordered') - s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=['a','b','c','d'], ordered=True)) + s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=[ + 'a', 'b', 'c', 'd'], ordered=True)) store.append('s_ordered', s, format='table') result = store.select('s_ordered') tm.assert_series_equal(s, result) _maybe_remove(store, 'df') - df = DataFrame({"s":s, "vals":[1,2,3,4,5,6]}) + df = DataFrame({"s": s, "vals": [1, 2, 3, 4, 5, 6]}) store.append('df', df, format='table') result = store.select('df') tm.assert_frame_equal(result, df) # dtypes - s = Series([1,1,2,2,3,4,5]).astype('category') - store.append('si',s) + s = Series([1, 1, 2, 2, 3, 4, 5]).astype('category') + store.append('si', s) result = store.select('si') tm.assert_series_equal(result, s) - s = Series([1,1,np.nan,2,3,4,5]).astype('category') - store.append('si2',s) + s = Series([1, 1, np.nan, 2, 3, 4, 5]).astype('category') + store.append('si2', s) result = store.select('si2') tm.assert_series_equal(result, s) # multiple df2 = df.copy() df2['s2'] = Series(list('abcdefg')).astype('category') - store.append('df2',df2) + store.append('df2', df2) result = store.select('df2') tm.assert_frame_equal(result, df2) @@ -4439,55 +4622,59 @@ def test_categorical(self): self.assertTrue('/df2/meta/values_block_1/meta' in str(store)) # unordered - s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=['a','b','c','d'],ordered=False)) + s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c'], categories=[ + 'a', 'b', 'c', 'd'], ordered=False)) store.append('s2', s, format='table') result = 
store.select('s2') tm.assert_series_equal(result, s) # query store.append('df3', df, data_columns=['s']) - expected = df[df.s.isin(['b','c'])] - result = store.select('df3', where = ['s in ["b","c"]']) + expected = df[df.s.isin(['b', 'c'])] + result = store.select('df3', where=['s in ["b","c"]']) tm.assert_frame_equal(result, expected) - expected = df[df.s.isin(['b','c'])] - result = store.select('df3', where = ['s = ["b","c"]']) + expected = df[df.s.isin(['b', 'c'])] + result = store.select('df3', where=['s = ["b","c"]']) tm.assert_frame_equal(result, expected) expected = df[df.s.isin(['d'])] - result = store.select('df3', where = ['s in ["d"]']) + result = store.select('df3', where=['s in ["d"]']) tm.assert_frame_equal(result, expected) expected = df[df.s.isin(['f'])] - result = store.select('df3', where = ['s in ["f"]']) + result = store.select('df3', where=['s in ["f"]']) tm.assert_frame_equal(result, expected) # appending with same categories is ok store.append('df3', df) - df = concat([df,df]) - expected = df[df.s.isin(['b','c'])] - result = store.select('df3', where = ['s in ["b","c"]']) + df = concat([df, df]) + expected = df[df.s.isin(['b', 'c'])] + result = store.select('df3', where=['s in ["b","c"]']) tm.assert_frame_equal(result, expected) # appending must have the same categories df3 = df.copy() df3['s'].cat.remove_unused_categories(inplace=True) - self.assertRaises(ValueError, lambda : store.append('df3', df3)) + self.assertRaises(ValueError, lambda: store.append('df3', df3)) # remove - # make sure meta data is removed (its a recursive removal so should be) + # make sure meta data is removed (its a recursive removal so should + # be) result = store.select('df3/meta/s/meta') self.assertIsNotNone(result) store.remove('df3') - self.assertRaises(KeyError, lambda : store.select('df3/meta/s/meta')) + self.assertRaises( + KeyError, lambda: store.select('df3/meta/s/meta')) def test_duplicate_column_name(self): df = DataFrame(columns=["a", "a"], data=[[0, 0]]) 
with ensure_clean_path(self.path) as path: - self.assertRaises(ValueError, df.to_hdf, path, 'df', format='fixed') + self.assertRaises(ValueError, df.to_hdf, + path, 'df', format='fixed') df.to_hdf(path, 'df', format='table') other = read_hdf(path, 'df') @@ -4498,7 +4685,7 @@ def test_duplicate_column_name(self): def test_round_trip_equals(self): # GH 9330 - df = DataFrame({"B": [1,2], "A": ["x","y"]}) + df = DataFrame({"B": [1, 2], "A": ["x", "y"]}) with ensure_clean_path(self.path) as path: df.to_hdf(path, 'df', format='table') @@ -4511,8 +4698,9 @@ def test_preserve_timedeltaindex_type(self): # GH9635 # Storing TimedeltaIndexed DataFrames in fixed stores did not preserve # the type of the index. - df = DataFrame(np.random.normal(size=(10,5))) - df.index = timedelta_range(start='0s',periods=10,freq='1s',name='example') + df = DataFrame(np.random.normal(size=(10, 5))) + df.index = timedelta_range( + start='0s', periods=10, freq='1s', name='example') with ensure_clean_store(self.path) as store: @@ -4530,7 +4718,7 @@ def test_colums_multiindex_modified(self): df.index.name = 'letters' df = df.set_index(keys='E', append=True) - data_columns = df.index.names+df.columns.tolist() + data_columns = df.index.names + df.columns.tolist() with ensure_clean_path(self.path) as path: df.to_hdf(path, 'df', mode='a', @@ -4539,7 +4727,7 @@ def test_colums_multiindex_modified(self): index=False) cols2load = list('BCD') cols2load_original = list(cols2load) - df_loaded = read_hdf(path, 'df', columns=cols2load) + df_loaded = read_hdf(path, 'df', columns=cols2load) # noqa self.assertTrue(cols2load_original == cols2load) def test_to_hdf_with_object_column_names(self): @@ -4547,10 +4735,10 @@ def test_to_hdf_with_object_column_names(self): # Writing HDF5 table format should only work for string-like # column types - types_should_fail = [ tm.makeIntIndex, tm.makeFloatIndex, - tm.makeDateIndex, tm.makeTimedeltaIndex, - tm.makePeriodIndex ] - types_should_run = [ tm.makeStringIndex, 
tm.makeCategoricalIndex ]
+        types_should_fail = [tm.makeIntIndex, tm.makeFloatIndex,
+                             tm.makeDateIndex, tm.makeTimedeltaIndex,
+                             tm.makePeriodIndex]
+        types_should_run = [tm.makeStringIndex, tm.makeCategoricalIndex]
         if compat.PY3:
             types_should_run.append(tm.makeUnicodeIndex)
@@ -4560,18 +4748,19 @@ def test_to_hdf_with_object_column_names(self):
         for index in types_should_fail:
             df = DataFrame(np.random.randn(10, 2), columns=index(2))
             with ensure_clean_path(self.path) as path:
-                with self.assertRaises(ValueError,
-                                       msg="cannot have non-object label DataIndexableCol"):
+                with self.assertRaises(
+                        ValueError, msg=("cannot have non-object label "
+                                         "DataIndexableCol")):
                     df.to_hdf(path, 'df', format='table', data_columns=True)
         for index in types_should_run:
             df = DataFrame(np.random.randn(10, 2), columns=index(2))
             with ensure_clean_path(self.path) as path:
                 df.to_hdf(path, 'df', format='table', data_columns=True)
-                result = pd.read_hdf(path, 'df', where="index = [{0}]".format(df.index[0]))
+                result = pd.read_hdf(
+                    path, 'df', where="index = [{0}]".format(df.index[0]))
                 assert(len(result))
-
     def test_read_hdf_open_store(self):
         # GH10330
         # No check for non-string path_or-buf, and no test of open store
@@ -4625,8 +4814,10 @@ def test_invalid_complib(self):
                        index=list('abcd'),
                        columns=list('ABCDE'))
         with ensure_clean_path(self.path) as path:
-            self.assertRaises(ValueError, df.to_hdf, path, 'df', complib='blosc:zlib')
+            self.assertRaises(ValueError, df.to_hdf, path,
+                              'df', complib='blosc:zlib')
     # GH10443
+
     def test_read_nokey(self):
         df = DataFrame(np.random.rand(4, 5),
                        index=list('abcd'),
@@ -4641,6 +4832,7 @@ def test_read_nokey(self):
 class TestHDFComplexValues(Base):
     # GH10447
+
     def test_complex_fixed(self):
         df = DataFrame(np.random.rand(4, 5).astype(np.complex64),
                        index=list('abcd'),
@@ -4679,7 +4871,8 @@ def test_complex_table(self):
         assert_frame_equal(df, reread)
     def test_complex_mixed_fixed(self):
-        complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
+        complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
+                              1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
         complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
                               dtype=np.complex128)
         df = DataFrame({'A': [1, 2, 3, 4],
@@ -4694,7 +4887,8 @@ def test_complex_mixed_fixed(self):
         assert_frame_equal(df, reread)
     def test_complex_mixed_table(self):
-        complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
+        complex64 = np.array([1.0 + 1.0j, 1.0 + 1.0j,
+                              1.0 + 1.0j, 1.0 + 1.0j], dtype=np.complex64)
         complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j],
                               dtype=np.complex128)
         df = DataFrame({'A': [1, 2, 3, 4],
@@ -4753,7 +4947,8 @@ def test_complex_indexing_error(self):
                         'C': complex128},
                        index=list('abcd'))
         with ensure_clean_store(self.path) as store:
-            self.assertRaises(TypeError, store.append, 'df', df, data_columns=['C'])
+            self.assertRaises(TypeError, store.append,
+                              'df', df, data_columns=['C'])
     def test_complex_series_error(self):
         complex128 = np.array([1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j, 1.0 + 1.0j])
@@ -4777,8 +4972,8 @@ def test_complex_append(self):
             result = store.select('df')
             assert_frame_equal(pd.concat([df, df], 0), result)
-class TestTimezones(Base, tm.TestCase):
+class TestTimezones(Base, tm.TestCase):
     def _compare_with_tz(self, a, b):
         tm.assert_frame_equal(a, b)
@@ -4786,17 +4981,19 @@ def _compare_with_tz(self, a, b):
         # compare the zones on each element
         for c in a.columns:
             for i in a.index:
-                a_e = a.loc[i,c]
-                b_e = b.loc[i,c]
+                a_e = a.loc[i, c]
+                b_e = b.loc[i, c]
                 if not (a_e == b_e and a_e.tz == b_e.tz):
-                    raise AssertionError("invalid tz comparsion [%s] [%s]" % (a_e, b_e))
+                    raise AssertionError(
+                        "invalid tz comparsion [%s] [%s]" % (a_e, b_e))
     def test_append_with_timezones_dateutil(self):
         from datetime import timedelta
         tm._skip_if_no_dateutil()
-        # use maybe_get_tz instead of dateutil.tz.gettz to handle the windows filename issues.
+        # use maybe_get_tz instead of dateutil.tz.gettz to handle the windows
+        # filename issues.
         from pandas.tslib import maybe_get_tz
         gettz = lambda x: maybe_get_tz('dateutil/' + x)
@@ -4804,7 +5001,8 @@ def test_append_with_timezones_dateutil(self):
         with ensure_clean_store(self.path) as store:
             _maybe_remove(store, 'df_tz')
-            df = DataFrame(dict(A=[ Timestamp('20130102 2:00:00', tz=gettz('US/Eastern')) + timedelta(hours=1) * i for i in range(5) ]))
+            df = DataFrame(dict(A=[Timestamp('20130102 2:00:00', tz=gettz(
+                'US/Eastern')) + timedelta(hours=1) * i for i in range(5)]))
             store.append('df_tz', df, data_columns=['A'])
             result = store['df_tz']
@@ -4818,13 +5016,20 @@ def test_append_with_timezones_dateutil(self):
             # ensure we include dates in DST and STD time here.
             _maybe_remove(store, 'df_tz')
-            df = DataFrame(dict(A=Timestamp('20130102', tz=gettz('US/Eastern')), B=Timestamp('20130603', tz=gettz('US/Eastern'))), index=range(5))
+            df = DataFrame(dict(A=Timestamp('20130102',
+                                            tz=gettz('US/Eastern')),
+                                B=Timestamp('20130603',
+                                            tz=gettz('US/Eastern'))),
+                           index=range(5))
             store.append('df_tz', df)
             result = store['df_tz']
             self._compare_with_tz(result, df)
             assert_frame_equal(result, df)
-            df = DataFrame(dict(A=Timestamp('20130102', tz=gettz('US/Eastern')), B=Timestamp('20130102', tz=gettz('EET'))), index=range(5))
+            df = DataFrame(dict(A=Timestamp('20130102',
+                                            tz=gettz('US/Eastern')),
+                                B=Timestamp('20130102', tz=gettz('EET'))),
+                           index=range(5))
             self.assertRaises(ValueError, store.append, 'df_tz', df)
             # this is ok
@@ -4835,14 +5040,18 @@ def test_append_with_timezones_dateutil(self):
             assert_frame_equal(result, df)
             # can't append with diff timezone
-            df = DataFrame(dict(A=Timestamp('20130102', tz=gettz('US/Eastern')), B=Timestamp('20130102', tz=gettz('CET'))), index=range(5))
+            df = DataFrame(dict(A=Timestamp('20130102',
+                                            tz=gettz('US/Eastern')),
+                                B=Timestamp('20130102', tz=gettz('CET'))),
+                           index=range(5))
             self.assertRaises(ValueError, store.append, 'df_tz', df)
         # as index
         with ensure_clean_store(self.path) as store:
             # GH 4098 example
-            df = DataFrame(dict(A=Series(lrange(3), index=date_range('2000-1-1', periods=3, freq='H', tz=gettz('US/Eastern')))))
+            df = DataFrame(dict(A=Series(lrange(3), index=date_range(
+                '2000-1-1', periods=3, freq='H', tz=gettz('US/Eastern')))))
             _maybe_remove(store, 'df')
             store.put('df', df)
@@ -4862,52 +5071,63 @@ def test_append_with_timezones_pytz(self):
         with ensure_clean_store(self.path) as store:
             _maybe_remove(store, 'df_tz')
-            df = DataFrame(dict(A = [ Timestamp('20130102 2:00:00',tz='US/Eastern') + timedelta(hours=1)*i for i in range(5) ]))
-            store.append('df_tz',df,data_columns=['A'])
+            df = DataFrame(dict(A=[Timestamp('20130102 2:00:00',
+                                             tz='US/Eastern') +
+                                   timedelta(hours=1) * i
+                                   for i in range(5)]))
+            store.append('df_tz', df, data_columns=['A'])
             result = store['df_tz']
-            self._compare_with_tz(result,df)
-            assert_frame_equal(result,df)
+            self._compare_with_tz(result, df)
+            assert_frame_equal(result, df)
             # select with tz aware
-            self._compare_with_tz(store.select('df_tz',where=Term('A>=df.A[3]')),df[df.A>=df.A[3]])
+            self._compare_with_tz(store.select(
+                'df_tz', where=Term('A>=df.A[3]')), df[df.A >= df.A[3]])
             _maybe_remove(store, 'df_tz')
             # ensure we include dates in DST and STD time here.
-            df = DataFrame(dict(A = Timestamp('20130102',tz='US/Eastern'), B = Timestamp('20130603',tz='US/Eastern')),index=range(5))
-            store.append('df_tz',df)
+            df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+                                B=Timestamp('20130603', tz='US/Eastern')),
+                           index=range(5))
+            store.append('df_tz', df)
             result = store['df_tz']
-            self._compare_with_tz(result,df)
-            assert_frame_equal(result,df)
+            self._compare_with_tz(result, df)
+            assert_frame_equal(result, df)
-            df = DataFrame(dict(A = Timestamp('20130102',tz='US/Eastern'), B = Timestamp('20130102',tz='EET')),index=range(5))
+            df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+                                B=Timestamp('20130102', tz='EET')),
+                           index=range(5))
             self.assertRaises(ValueError, store.append, 'df_tz', df)
             # this is ok
             _maybe_remove(store, 'df_tz')
-            store.append('df_tz',df,data_columns=['A','B'])
+            store.append('df_tz', df, data_columns=['A', 'B'])
             result = store['df_tz']
-            self._compare_with_tz(result,df)
-            assert_frame_equal(result,df)
+            self._compare_with_tz(result, df)
+            assert_frame_equal(result, df)
             # can't append with diff timezone
-            df = DataFrame(dict(A = Timestamp('20130102',tz='US/Eastern'), B = Timestamp('20130102',tz='CET')),index=range(5))
+            df = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+                                B=Timestamp('20130102', tz='CET')),
+                           index=range(5))
             self.assertRaises(ValueError, store.append, 'df_tz', df)
         # as index
         with ensure_clean_store(self.path) as store:
             # GH 4098 example
-            df = DataFrame(dict(A = Series(lrange(3), index=date_range('2000-1-1',periods=3,freq='H', tz='US/Eastern'))))
+            df = DataFrame(dict(A=Series(lrange(3), index=date_range(
+                '2000-1-1', periods=3, freq='H', tz='US/Eastern'))))
             _maybe_remove(store, 'df')
-            store.put('df',df)
+            store.put('df', df)
             result = store.select('df')
-            assert_frame_equal(result,df)
+            assert_frame_equal(result, df)
             _maybe_remove(store, 'df')
-            store.append('df',df)
+            store.append('df', df)
             result = store.select('df')
-            assert_frame_equal(result,df)
+            assert_frame_equal(result, df)
     def test_tseries_select_index_column(self):
         # GH7777
@@ -4954,10 +5174,10 @@ def test_timezones_fixed(self):
             # as data
             # GH11411
             _maybe_remove(store, 'df')
-            df = DataFrame({'A' : rng,
-                            'B' : rng.tz_convert('UTC').tz_localize(None),
-                            'C' : rng.tz_convert('CET'),
-                            'D' : range(len(rng))}, index=rng)
+            df = DataFrame({'A': rng,
+                            'B': rng.tz_convert('UTC').tz_localize(None),
+                            'C': rng.tz_convert('CET'),
+                            'D': range(len(rng))}, index=rng)
             store['df'] = df
             result = store['df']
             assert_frame_equal(result, df)
@@ -4974,7 +5194,8 @@ def test_fixed_offset_tz(self):
     def test_store_timezone(self):
         # GH2852
-        # issue storing datetime.date with a timezone as it resets when read back in a new timezone
+        # issue storing datetime.date with a timezone as it resets when read
+        # back in a new timezone
         import platform
         if platform.system() == "Windows":
@@ -4987,8 +5208,8 @@ def test_store_timezone(self):
         # original method
         with ensure_clean_store(self.path) as store:
-            today = datetime.date(2013,9,10)
-            df = DataFrame([1,2,3], index = [today, today, today])
+            today = datetime.date(2013, 9, 10)
+            df = DataFrame([1, 2, 3], index=[today, today, today])
             store['obj1'] = df
             result = store['obj1']
             assert_frame_equal(result, df)
@@ -5003,7 +5224,7 @@ def setTZ(tz):
                 except:
                     pass
             else:
-                os.environ['TZ']=tz
+                os.environ['TZ'] = tz
                 time.tzset()
         try:
@@ -5011,8 +5232,8 @@ def setTZ(tz):
             with ensure_clean_store(self.path) as store:
                 setTZ('EST5EDT')
-                today = datetime.date(2013,9,10)
-                df = DataFrame([1,2,3], index = [today, today, today])
+                today = datetime.date(2013, 9, 10)
+                df = DataFrame([1, 2, 3], index=[today, today, today])
                 store['obj1'] = df
                 setTZ('CST6CDT')
@@ -5026,8 +5247,12 @@ def setTZ(tz):
     def test_legacy_datetimetz_object(self):
         # legacy from < 0.17.0
         # 8260
-        expected = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'), B=Timestamp('20130603', tz='CET')), index=range(5))
-        with ensure_clean_store(tm.get_data_path('legacy_hdf/datetimetz_object.h5'), mode='r') as store:
+        expected = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'),
+                                  B=Timestamp('20130603', tz='CET')),
+                             index=range(5))
+        with ensure_clean_store(
+                tm.get_data_path('legacy_hdf/datetimetz_object.h5'),
+                mode='r') as store:
             result = store['df']
             assert_frame_equal(result, expected)
@@ -5039,13 +5264,14 @@ def test_dst_transitions(self):
                                    freq="H",
                                    ambiguous='infer')
-            for i in [times, times+pd.Timedelta('10min')]:
+            for i in [times, times + pd.Timedelta('10min')]:
                 _maybe_remove(store, 'df')
-                df = DataFrame({'A' : range(len(i)), 'B' : i }, index=i)
-                store.append('df',df)
+                df = DataFrame({'A': range(len(i)), 'B': i}, index=i)
+                store.append('df', df)
                 result = store.select('df')
                 assert_frame_equal(result, df)
+
 def _test_sort(obj):
     if isinstance(obj, DataFrame):
         return obj.reindex(sorted(obj.index))
diff --git a/pandas/io/tests/test_sas.py b/pandas/io/tests/test_sas.py
index 08737bfb60086..bca3594f4b47c 100644
--- a/pandas/io/tests/test_sas.py
+++ b/pandas/io/tests/test_sas.py
@@ -1,6 +1,5 @@
 import pandas as pd
 import pandas.util.testing as tm
-from pandas import compat
 from pandas.io.sas import XportReader, read_sas
 import numpy as np
 import os
@@ -9,6 +8,8 @@
 # Numbers in a SAS xport file are always float64, so need to convert
 # before making comparisons.
+
+
 def numeric_as_float(data):
     for v in data.columns:
         if data[v].dtype is np.dtype('int64'):
@@ -24,7 +25,6 @@ def setUp(self):
         self.file03 = os.path.join(self.dirpath, "DRXFCD_G.XPT")
         self.file04 = os.path.join(self.dirpath, "paxraw_d_short.xpt")
-
     def test1_basic(self):
         # Tests with DEMO_G.XPT (all numeric file)
@@ -50,7 +50,6 @@ def test1_basic(self):
         data = read_sas(self.file01)
         tm.assert_frame_equal(data, data_csv)
-
     def test1_index(self):
         # Tests with DEMO_G.XPT using index (all numeric file)
@@ -66,13 +65,14 @@ def test1_index(self):
         # Test incremental read with `read` method.
         reader = XportReader(self.file01, index="SEQN")
         data = reader.read(10)
-        tm.assert_frame_equal(data, data_csv.iloc[0:10, :], check_index_type=False)
+        tm.assert_frame_equal(data, data_csv.iloc[
+                              0:10, :], check_index_type=False)
         # Test incremental read with `get_chunk` method.
         reader = XportReader(self.file01, index="SEQN", chunksize=10)
         data = reader.get_chunk()
-        tm.assert_frame_equal(data, data_csv.iloc[0:10, :], check_index_type=False)
-
+        tm.assert_frame_equal(data, data_csv.iloc[
+                              0:10, :], check_index_type=False)
     def test1_incremental(self):
         # Test with DEMO_G.XPT, reading full file incrementally
@@ -88,7 +88,6 @@ def test1_incremental(self):
         tm.assert_frame_equal(data, data_csv, check_index_type=False)
-
     def test2(self):
         # Test with SSHSV1_A.XPT
@@ -99,7 +98,6 @@ def test2(self):
         data = XportReader(self.file02).read()
         tm.assert_frame_equal(data, data_csv)
-
     def test_multiple_types(self):
         # Test with DRXFCD_G.XPT (contains text and numeric variables)
@@ -112,7 +110,6 @@ def test_multiple_types(self):
         data = read_sas(self.file03)
         tm.assert_frame_equal(data, data_csv)
-
     def test_truncated_float_support(self):
         # Test with paxraw_d_short.xpt, a shortened version of:
         # http://wwwn.cdc.gov/Nchs/Nhanes/2005-2006/PAXRAW_D.ZIP
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index bfd1ac3f08ee8..455e27b70055d 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -6,12 +6,13 @@
     - Tests for the public API (only tests with sqlite3)
         - `_TestSQLApi` base class
         - `TestSQLApi`: test the public API with sqlalchemy engine
-        - `TestSQLiteFallbackApi`: test the public API with a sqlite DBAPI connection
+        - `TestSQLiteFallbackApi`: test the public API with a sqlite DBAPI
+          connection
     - Tests for the different SQL flavors (flavor specific type conversions)
     - Tests for the sqlalchemy mode: `_TestSQLAlchemy` is the base class with
       common methods, `_TestSQLAlchemyConn` tests the API with a SQLAlchemy
-      Connection object. The different tested flavors (sqlite3, MySQL, PostgreSQL)
-      derive from the base class
+      Connection object. The different tested flavors (sqlite3, MySQL,
+      PostgreSQL) derive from the base class
     - Tests for the fallback mode (`TestSQLiteFallback` and `TestMySQLLegacy`)
 """
@@ -141,7 +142,8 @@
                 VALUES(%s, %s, %s, %s, %s, %s, %s, %s, %s)
                 """,
             'fields': (
-                'TextCol', 'DateCol', 'DateColWithTz', 'IntDateCol', 'FloatCol',
+                'TextCol', 'DateCol', 'DateColWithTz',
+                'IntDateCol', 'FloatCol',
                 'IntCol', 'BoolCol', 'IntColWithNull', 'BoolColWithNull'
             )
         },
@@ -174,6 +176,7 @@
 class MixInBase(object):
+
     def tearDown(self):
         for tbl in self._get_all_tables():
             self.drop_table(tbl)
@@ -181,9 +184,11 @@ def tearDown(self):
 class MySQLMixIn(MixInBase):
+
     def drop_table(self, table_name):
         cur = self.conn.cursor()
-        cur.execute("DROP TABLE IF EXISTS %s" % sql._get_valid_mysql_name(table_name))
+        cur.execute("DROP TABLE IF EXISTS %s" %
+                    sql._get_valid_mysql_name(table_name))
         self.conn.commit()
     def _get_all_tables(self):
@@ -200,12 +205,15 @@ def _close_conn(self):
 class SQLiteMixIn(MixInBase):
+
     def drop_table(self, table_name):
-        self.conn.execute("DROP TABLE IF EXISTS %s" % sql._get_valid_sqlite_name(table_name))
+        self.conn.execute("DROP TABLE IF EXISTS %s" %
+                          sql._get_valid_sqlite_name(table_name))
         self.conn.commit()
     def _get_all_tables(self):
-        c = self.conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
+        c = self.conn.execute(
+            "SELECT name FROM sqlite_master WHERE type='table'")
         return [table[0] for table in c.fetchall()]
     def _close_conn(self):
@@ -213,6 +221,7 @@ def _close_conn(self):
 class SQLAlchemyMixIn(MixInBase):
+
     def drop_table(self, table_name):
         sql.SQLDatabase(self.conn).drop_table(table_name)
@@ -225,6 +234,7 @@ def _get_all_tables(self):
     def _close_conn(self):
         pass
+
 class PandasSQLTest(unittest.TestCase):
     """
     Base class with common private methods for SQLAlchemy and fallback cases.
@@ -267,12 +277,14 @@ def _check_iris_loaded_frame(self, iris_frame):
     def _load_test1_data(self):
         columns = ['index', 'A', 'B', 'C', 'D']
         data = [(
-            '2000-01-03 00:00:00', 0.980268513777, 3.68573087906, -0.364216805298, -1.15973806169),
+            '2000-01-03 00:00:00', 0.980268513777, 3.68573087906,
+            -0.364216805298, -1.15973806169),
             ('2000-01-04 00:00:00', 1.04791624281, -
              0.0412318367011, -0.16181208307, 0.212549316967),
             ('2000-01-05 00:00:00', 0.498580885705,
              0.731167677815, -0.537677223318, 1.34627041952),
-            ('2000-01-06 00:00:00', 1.12020151869, 1.56762092543, 0.00364077397681, 0.67525259227)]
+            ('2000-01-06 00:00:00', 1.12020151869, 1.56762092543,
+             0.00364077397681, 0.67525259227)]
         self.test_frame1 = DataFrame(data, columns=columns)
@@ -281,7 +293,8 @@ def _load_test2_data(self):
                             B=['asd', 'gsq', 'ylt', 'jkl'],
                             C=[1.1, 3.1, 6.9, 5.3],
                             D=[False, True, True, False],
-                            E=['1990-11-22', '1991-10-26', '1993-11-26', '1995-12-12']))
+                            E=['1990-11-22', '1991-10-26',
+                               '1993-11-26', '1995-12-12']))
         df['E'] = to_datetime(df['E'])
         self.test_frame2 = df
@@ -423,7 +436,8 @@ def _to_sql_append(self):
     def _roundtrip(self):
         self.drop_table('test_frame_roundtrip')
         self.pandasSQL.to_sql(self.test_frame1, 'test_frame_roundtrip')
-        result = self.pandasSQL.read_query('SELECT * FROM test_frame_roundtrip')
+        result = self.pandasSQL.read_query(
+            'SELECT * FROM test_frame_roundtrip')
         result.set_index('level_0', inplace=True)
         # result.index.astype(int)
@@ -439,11 +453,11 @@ def _execute_sql(self):
         tm.equalContents(row, [5.1, 3.5, 1.4, 0.2, 'Iris-setosa'])
     def _to_sql_save_index(self):
-        df = DataFrame.from_records([(1,2.1,'line1'), (2,1.5,'line2')],
-                                    columns=['A','B','C'], index=['A'])
+        df = DataFrame.from_records([(1, 2.1, 'line1'), (2, 1.5, 'line2')],
+                                    columns=['A', 'B', 'C'], index=['A'])
         self.pandasSQL.to_sql(df, 'test_to_sql_saves_index')
         ix_cols = self._get_index_columns('test_to_sql_saves_index')
-        self.assertEqual(ix_cols, [['A',],])
+        self.assertEqual(ix_cols, [['A', ], ])
     def _transaction_test(self):
         self.pandasSQL.execute("CREATE TABLE test_trans (A INT, B TEXT)")
@@ -468,8 +482,8 @@ def _transaction_test(self):
         self.assertEqual(len(res2), 1)
-#------------------------------------------------------------------------------
-#--- Testing the public API
+# -----------------------------------------------------------------------------
+# -- Testing the public API
 class _TestSQLApi(PandasSQLTest):
@@ -477,9 +491,9 @@ class _TestSQLApi(PandasSQLTest):
     Base class to test the public API.
     From this two classes are derived to run these tests for both the
-    sqlalchemy mode (`TestSQLApi`) and the fallback mode (`TestSQLiteFallbackApi`).
-    These tests are run with sqlite3. Specific tests for the different
-    sql flavours are included in `_TestSQLAlchemy`.
+    sqlalchemy mode (`TestSQLApi`) and the fallback mode
+    (`TestSQLiteFallbackApi`). These tests are run with sqlite3. Specific
+    tests for the different sql flavours are included in `_TestSQLAlchemy`.
     Notes:
     flavor can always be passed even in SQLAlchemy mode,
@@ -519,16 +533,19 @@ def test_legacy_read_frame(self):
     def test_to_sql(self):
         sql.to_sql(self.test_frame1, 'test_frame1', self.conn, flavor='sqlite')
         self.assertTrue(
-            sql.has_table('test_frame1', self.conn, flavor='sqlite'), 'Table not written to DB')
+            sql.has_table('test_frame1', self.conn, flavor='sqlite'),
+            'Table not written to DB')
     def test_to_sql_fail(self):
         sql.to_sql(self.test_frame1, 'test_frame2',
                    self.conn, flavor='sqlite', if_exists='fail')
         self.assertTrue(
-            sql.has_table('test_frame2', self.conn, flavor='sqlite'), 'Table not written to DB')
+            sql.has_table('test_frame2', self.conn, flavor='sqlite'),
+            'Table not written to DB')
         self.assertRaises(ValueError, sql.to_sql, self.test_frame1,
-                          'test_frame2', self.conn, flavor='sqlite', if_exists='fail')
+                          'test_frame2', self.conn, flavor='sqlite',
+                          if_exists='fail')
     def test_to_sql_replace(self):
         sql.to_sql(self.test_frame1, 'test_frame3',
@@ -608,7 +625,7 @@ def test_roundtrip(self):
     def test_roundtrip_chunksize(self):
         sql.to_sql(self.test_frame1, 'test_frame_roundtrip', con=self.conn,
-                   index=False, flavor='sqlite', chunksize=2)
+                   index=False, flavor='sqlite', chunksize=2)
         result = sql.read_sql_query(
             'SELECT * FROM test_frame_roundtrip',
             con=self.conn)
@@ -668,14 +685,15 @@ def test_date_and_index(self):
     def test_timedelta(self):
         # see #6921
-        df = to_timedelta(Series(['00:00:01', '00:00:03'], name='foo')).to_frame()
+        df = to_timedelta(
+            Series(['00:00:01', '00:00:03'], name='foo')).to_frame()
         with tm.assert_produces_warning(UserWarning):
             df.to_sql('test_timedelta', self.conn)
         result = sql.read_sql_query('SELECT * FROM test_timedelta', self.conn)
         tm.assert_series_equal(result['foo'], df['foo'].astype('int64'))
     def test_complex(self):
-        df = DataFrame({'a':[1+1j, 2j]})
+        df = DataFrame({'a': [1 + 1j, 2j]})
         # Complex data type should raise error
         self.assertRaises(ValueError, df.to_sql, 'test_complex', self.conn)
@@ -711,7 +729,8 @@ def test_to_sql_index_label(self):
     def test_to_sql_index_label_multiindex(self):
         temp_frame = DataFrame({'col1': range(4)},
-                               index=MultiIndex.from_product([('A0', 'A1'), ('B0', 'B1')]))
+                               index=MultiIndex.from_product(
+                                   [('A0', 'A1'), ('B0', 'B1')]))
         # no index name, defaults to 'level_0' and 'level_1'
         sql.to_sql(temp_frame, 'test_index_label', self.conn)
@@ -747,12 +766,12 @@ def test_to_sql_index_label_multiindex(self):
                           index_label='C')
     def test_multiindex_roundtrip(self):
-        df = DataFrame.from_records([(1,2.1,'line1'), (2,1.5,'line2')],
-                                    columns=['A','B','C'], index=['A','B'])
+        df = DataFrame.from_records([(1, 2.1, 'line1'), (2, 1.5, 'line2')],
+                                    columns=['A', 'B', 'C'], index=['A', 'B'])
         df.to_sql('test_multiindex_roundtrip', self.conn)
         result = sql.read_sql_query('SELECT * FROM test_multiindex_roundtrip',
-                                    self.conn, index_col=['A','B'])
+                                    self.conn, index_col=['A', 'B'])
         tm.assert_frame_equal(df, result, check_index_type=True)
     def test_integer_col_names(self):
@@ -766,15 +785,15 @@ def test_get_schema(self):
         self.assertTrue('CREATE' in create_sql)
     def test_get_schema_dtypes(self):
-        float_frame = DataFrame({'a':[1.1,1.2], 'b':[2.1,2.2]})
+        float_frame = DataFrame({'a': [1.1, 1.2], 'b': [2.1, 2.2]})
         dtype = sqlalchemy.Integer if self.mode == 'sqlalchemy' else 'INTEGER'
         create_sql = sql.get_schema(float_frame, 'test', 'sqlite',
-                                    con=self.conn, dtype={'b':dtype})
+                                    con=self.conn, dtype={'b': dtype})
         self.assertTrue('CREATE' in create_sql)
         self.assertTrue('INTEGER' in create_sql)
     def test_get_schema_keys(self):
-        frame = DataFrame({'Col1':[1.1,1.2], 'Col2':[2.1,2.2]})
+        frame = DataFrame({'Col1': [1.1, 1.2], 'Col2': [2.1, 2.2]})
         create_sql = sql.get_schema(frame, 'test', 'sqlite',
                                     con=self.conn, keys='Col1')
         constraint_sentence = 'CONSTRAINT test_pk PRIMARY KEY ("Col1")'
@@ -836,7 +855,7 @@ def test_categorical(self):
     def test_unicode_column_name(self):
         # GH 11431
-        df = DataFrame([[1,2],[3,4]], columns = [u'\xe9',u'b'])
+        df = DataFrame([[1, 2], [3, 4]], columns=[u'\xe9', u'b'])
         df.to_sql('test_unicode', self.conn, index=False)
@@ -874,12 +893,14 @@ def test_read_table_index_col(self):
         self.assertEqual(result.index.names, ["index"],
                          "index_col not correctly set")
-        result = sql.read_sql_table('test_frame', self.conn, index_col=["A", "B"])
+        result = sql.read_sql_table(
+            'test_frame', self.conn, index_col=["A", "B"])
         self.assertEqual(result.index.names, ["A", "B"],
                          "index_col not correctly set")
-        result = sql.read_sql_table('test_frame', self.conn, index_col=["A", "B"],
-                                    columns=["C", "D"])
+        result = sql.read_sql_table('test_frame', self.conn,
+                                    index_col=["A", "B"],
+                                    columns=["C", "D"])
         self.assertEqual(result.index.names, ["A", "B"],
                          "index_col not correctly set")
         self.assertEqual(result.columns.tolist(), ["C", "D"],
@@ -923,7 +944,8 @@ def test_warning_case_insensitive_table_name(self):
             # This should not trigger a Warning
             self.test_frame1.to_sql('CaseSensitive', self.conn)
             # Verify some things
-            self.assertEqual(len(w), 0, "Warning triggered for writing a table")
+            self.assertEqual(
+                len(w), 0, "Warning triggered for writing a table")
     def _get_index_columns(self, tbl_name):
         from sqlalchemy.engine import reflection
@@ -939,13 +961,16 @@ def test_sqlalchemy_type_mapping(self):
                                     utc=True)})
         db = sql.SQLDatabase(self.conn)
         table = sql.SQLTable("test_type", db, frame=df)
-        self.assertTrue(isinstance(table.table.c['time'].type, sqltypes.DateTime))
+        self.assertTrue(isinstance(
+            table.table.c['time'].type, sqltypes.DateTime))
     def test_to_sql_read_sql_with_database_uri(self):
         # Test read_sql and .to_sql method with a database URI (GH10654)
         test_frame1 = self.test_frame1
-        #db_uri = 'sqlite:///:memory:' # raises sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "iris": syntax error [SQL: 'iris']
+        # db_uri = 'sqlite:///:memory:' # raises
+        # sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near
+        # "iris": syntax error [SQL: 'iris']
         with tm.ensure_clean() as name:
             db_uri = 'sqlite:///' + name
             table = 'iris'
@@ -962,19 +987,20 @@ def _make_iris_table_metadata(self):
         sa = sqlalchemy
         metadata = sa.MetaData()
         iris = sa.Table('iris', metadata,
-            sa.Column('SepalLength', sa.REAL),
-            sa.Column('SepalWidth', sa.REAL),
-            sa.Column('PetalLength', sa.REAL),
-            sa.Column('PetalWidth', sa.REAL),
-            sa.Column('Name', sa.TEXT)
-        )
+                        sa.Column('SepalLength', sa.REAL),
+                        sa.Column('SepalWidth', sa.REAL),
+                        sa.Column('PetalLength', sa.REAL),
+                        sa.Column('PetalWidth', sa.REAL),
+                        sa.Column('Name', sa.TEXT)
+                        )
         return iris
     def test_query_by_text_obj(self):
         # WIP : GH10846
         name_text = sqlalchemy.text('select * from iris where name=:name')
-        iris_df = sql.read_sql(name_text, self.conn, params={'name': 'Iris-versicolor'})
+        iris_df = sql.read_sql(name_text, self.conn, params={
+                               'name': 'Iris-versicolor'})
         all_names = set(iris_df['Name'])
         self.assertEqual(all_names, set(['Iris-versicolor']))
@@ -982,8 +1008,10 @@ def test_query_by_select_obj(self):
         # WIP : GH10846
         iris = self._make_iris_table_metadata()
-        name_select = sqlalchemy.select([iris]).where(iris.c.Name == sqlalchemy.bindparam('name'))
-        iris_df = sql.read_sql(name_select, self.conn, params={'name': 'Iris-setosa'})
+        name_select = sqlalchemy.select([iris]).where(
+            iris.c.Name == sqlalchemy.bindparam('name'))
+        iris_df = sql.read_sql(name_select, self.conn,
+                               params={'name': 'Iris-setosa'})
         all_names = set(iris_df['Name'])
         self.assertEqual(all_names, set(['Iris-setosa']))
@@ -1093,8 +1121,8 @@ def test_sqlite_type_mapping(self):
                          "TIMESTAMP")
-#------------------------------------------------------------------------------
-#--- Database flavor specific tests
+# -----------------------------------------------------------------------------
+# -- Database flavor specific tests
 class _TestSQLAlchemy(SQLAlchemyMixIn, PandasSQLTest):
@@ -1148,7 +1176,8 @@ def setup_connect(self):
             # to test if connection can be made:
             self.conn.connect()
         except sqlalchemy.exc.OperationalError:
-            raise nose.SkipTest("Can't connect to {0} server".format(self.flavor))
+            raise nose.SkipTest(
+                "Can't connect to {0} server".format(self.flavor))
     def test_aread_sql(self):
         self._read_sql_iris()
@@ -1241,7 +1270,7 @@ def test_default_type_conversion(self):
     def test_bigint(self):
         # int64 should be converted to BigInteger, GH7433
-        df = DataFrame(data={'i64':[2**62]})
+        df = DataFrame(data={'i64': [2**62]})
         df.to_sql('test_bigint', self.conn, index=False)
         result = sql.read_sql_table('test_bigint', self.conn)
@@ -1265,50 +1294,64 @@ def check(col):
             # or datetime64[ns, UTC]
             if com.is_datetime64_dtype(col.dtype):
-                # "2000-01-01 00:00:00-08:00" should convert to "2000-01-01 08:00:00"
+                # "2000-01-01 00:00:00-08:00" should convert to
+                # "2000-01-01 08:00:00"
                 self.assertEqual(col[0], Timestamp('2000-01-01 08:00:00'))
-                # "2000-06-01 00:00:00-07:00" should convert to "2000-06-01 07:00:00"
+                # "2000-06-01 00:00:00-07:00" should convert to
+                # "2000-06-01 07:00:00"
                 self.assertEqual(col[1], Timestamp('2000-06-01 07:00:00'))
             elif com.is_datetime64tz_dtype(col.dtype):
                 self.assertTrue(str(col.dt.tz) == 'UTC')
-                # "2000-01-01 00:00:00-08:00" should convert to "2000-01-01 08:00:00"
-                self.assertEqual(col[0], Timestamp('2000-01-01 08:00:00', tz='UTC'))
+                # "2000-01-01 00:00:00-08:00" should convert to
+                # "2000-01-01 08:00:00"
+                self.assertEqual(col[0], Timestamp(
+                    '2000-01-01 08:00:00', tz='UTC'))
-                # "2000-06-01 00:00:00-07:00" should convert to "2000-06-01 07:00:00"
-                self.assertEqual(col[1], Timestamp('2000-06-01 07:00:00', tz='UTC'))
+                # "2000-06-01 00:00:00-07:00" should convert to
+                # "2000-06-01 07:00:00"
+                self.assertEqual(col[1], Timestamp(
+                    '2000-06-01 07:00:00', tz='UTC'))
             else:
-                raise AssertionError("DateCol loaded with incorrect type -> {0}".format(col.dtype))
+                raise AssertionError("DateCol loaded with incorrect type "
+                                     "-> {0}".format(col.dtype))
         # GH11216
         df = pd.read_sql_query("select * from types_test_data", self.conn)
-        if not hasattr(df,'DateColWithTz'):
+        if not hasattr(df, 'DateColWithTz'):
             raise nose.SkipTest("no column with datetime with time zone")
         # this is parsed on Travis (linux), but not on macosx for some reason
-        # even with the same versions of psycopg2 & sqlalchemy, possibly a Postgrsql server
-        # version difference
+        # even with the same versions of psycopg2 & sqlalchemy, possibly a
+        # Postgrsql server version difference
         col = df.DateColWithTz
-        self.assertTrue(com.is_object_dtype(col.dtype) or com.is_datetime64_dtype(col.dtype) \
-                        or com.is_datetime64tz_dtype(col.dtype),
-                        "DateCol loaded with incorrect type -> {0}".format(col.dtype))
-
-        df = pd.read_sql_query("select * from types_test_data", self.conn, parse_dates=['DateColWithTz'])
-        if not hasattr(df,'DateColWithTz'):
+        self.assertTrue(com.is_object_dtype(col.dtype) or
+                        com.is_datetime64_dtype(col.dtype) or
+                        com.is_datetime64tz_dtype(col.dtype),
+                        "DateCol loaded with incorrect type -> {0}"
+                        .format(col.dtype))
+
+        df = pd.read_sql_query("select * from types_test_data",
+                               self.conn, parse_dates=['DateColWithTz'])
+        if not hasattr(df, 'DateColWithTz'):
             raise nose.SkipTest("no column with datetime with time zone")
         check(df.DateColWithTz)
         df = pd.concat(list(pd.read_sql_query("select * from types_test_data",
-                                              self.conn,chunksize=1)),ignore_index=True)
+                                              self.conn, chunksize=1)),
+                       ignore_index=True)
         col = df.DateColWithTz
         self.assertTrue(com.is_datetime64tz_dtype(col.dtype),
-                        "DateCol loaded with incorrect type -> {0}".format(col.dtype))
+                        "DateCol loaded with incorrect type -> {0}"
+                        .format(col.dtype))
         self.assertTrue(str(col.dt.tz) == 'UTC')
         expected = sql.read_sql_table("types_test_data", self.conn)
-        tm.assert_series_equal(df.DateColWithTz, expected.DateColWithTz.astype('datetime64[ns, UTC]'))
+        tm.assert_series_equal(df.DateColWithTz,
+                               expected.DateColWithTz
+                               .astype('datetime64[ns, UTC]'))
         # xref #7139
         # this might or might not be converted depending on the postgres driver
@@ -1330,7 +1373,7 @@ def test_date_parsing(self):
                         "DateCol loaded with incorrect type")
         df = sql.read_sql_table("types_test_data", self.conn, parse_dates={
-                                'DateCol': {'format': '%Y-%m-%d %H:%M:%S'}})
+            'DateCol': {'format': '%Y-%m-%d %H:%M:%S'}})
         self.assertTrue(issubclass(df.DateCol.dtype.type, np.datetime64),
                         "IntDateCol loaded with incorrect type")
@@ -1344,8 +1387,8 @@ def test_date_parsing(self):
         self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
                         "IntDateCol loaded with incorrect type")
-        df = sql.read_sql_table(
-            "types_test_data", self.conn, parse_dates={'IntDateCol': {'unit': 's'}})
+        df = sql.read_sql_table("types_test_data", self.conn,
+                                parse_dates={'IntDateCol': {'unit': 's'}})
         self.assertTrue(issubclass(df.IntDateCol.dtype.type, np.datetime64),
                         "IntDateCol loaded with incorrect type")
@@ -1405,8 +1448,8 @@ def test_datetime_time(self):
     def test_mixed_dtype_insert(self):
         # see GH6509
-        s1 = Series(2**25 + 1,dtype=np.int32)
-        s2 = Series(0.0,dtype=np.float32)
+        s1 = Series(2**25 + 1, dtype=np.int32)
+        s2 = Series(0.0, dtype=np.float32)
         df = DataFrame({'s1': s1, 's2': s2})
         # write and read again
@@ -1417,7 +1460,7 @@ def test_mixed_dtype_insert(self):
     def test_nan_numeric(self):
         # NaNs in numeric float column
-        df = DataFrame({'A':[0, 1, 2], 'B':[0.2, np.nan, 5.6]})
+        df = DataFrame({'A': [0, 1, 2], 'B': [0.2, np.nan, 5.6]})
         df.to_sql('test_nan', self.conn, index=False)
         # with read_table
@@ -1430,7 +1473,7 @@ def test_nan_numeric(self):
     def test_nan_fullcolumn(self):
         # full NaN column (numeric float column)
-        df = DataFrame({'A':[0, 1, 2], 'B':[np.nan, np.nan, np.nan]})
+        df = DataFrame({'A': [0, 1, 2], 'B': [np.nan, np.nan, np.nan]})
         df.to_sql('test_nan', self.conn, index=False)
         # with read_table
@@ -1445,7 +1488,7 @@ def test_nan_fullcolumn(self):
     def test_nan_string(self):
         # NaNs in string column
-        df = DataFrame({'A':[0, 1, 2], 'B':['a', 'b', np.nan]})
+        df = DataFrame({'A': [0, 1, 2], 'B': ['a', 'b', np.nan]})
         df.to_sql('test_nan', self.conn, index=False)
         # NaNs are coming back as None
@@ -1485,7 +1528,8 @@ def test_get_schema_create_table(self):
             self.drop_table(tbl)
             self.conn.execute(create_sql)
             returned_df = sql.read_sql_table(tbl, self.conn)
-            tm.assert_frame_equal(returned_df, blank_test_df, check_index_type=False)
+            tm.assert_frame_equal(returned_df, blank_test_df,
+                                  check_index_type=False)
             self.drop_table(tbl)
     def test_dtype(self):
@@ -1510,16 +1554,16 @@ def test_dtype(self):
         self.assertEqual(sqltype.length, 10)
     def test_notnull_dtype(self):
-        cols = {'Bool': Series([True,None]),
+        cols = {'Bool': Series([True, None]),
                 'Date': Series([datetime(2012, 5, 1), None]),
-                'Int' : Series([1, None], dtype='object'),
+                'Int': Series([1, None], dtype='object'),
                 'Float': Series([1.1, None])
-               }
+                }
         df = DataFrame(cols)
         tbl = 'notnull_dtype_test'
         df.to_sql(tbl, self.conn)
-        returned_df = sql.read_sql_table(tbl, self.conn)
+        returned_df = sql.read_sql_table(tbl, self.conn)  # noqa
         meta = sqlalchemy.schema.MetaData(bind=self.conn)
         meta.reflect()
         if self.flavor == 'mysql':
@@ -1537,20 +1581,20 @@ def test_notnull_dtype(self):
     def test_double_precision(self):
         V = 1.23456789101112131415
-        df = DataFrame({'f32':Series([V,], dtype='float32'),
-                        'f64':Series([V,], dtype='float64'),
-                        'f64_as_f32':Series([V,], dtype='float64'),
-                        'i32':Series([5,], dtype='int32'),
-                        'i64':Series([5,], dtype='int64'),
+        df = DataFrame({'f32': Series([V, ], dtype='float32'),
+                        'f64': Series([V, ], dtype='float64'),
+                        'f64_as_f32': Series([V, ], dtype='float64'),
+                        'i32': Series([5, ], dtype='int32'),
+                        'i64': Series([5, ], dtype='int64'),
                         })
         df.to_sql('test_dtypes', self.conn, index=False, if_exists='replace',
-                  dtype={'f64_as_f32':sqlalchemy.Float(precision=23)})
+                  dtype={'f64_as_f32': sqlalchemy.Float(precision=23)})
         res = sql.read_sql_table('test_dtypes', self.conn)
         # check precision of float64
-        self.assertEqual(np.round(df['f64'].iloc[0],14),
-                         np.round(res['f64'].iloc[0],14))
+        self.assertEqual(np.round(df['f64'].iloc[0], 14),
+                         np.round(res['f64'].iloc[0], 14))
         # check sql types
         meta = sqlalchemy.schema.MetaData(bind=self.conn)
@@ -1572,7 +1616,8 @@ def foo(connection):
             return sql.read_sql_query(query, con=connection)
         def bar(connection, data):
-            data.to_sql(name='test_foo_data', con=connection, if_exists='append')
+            data.to_sql(name='test_foo_data',
+                        con=connection, if_exists='append')
         def main(connectable):
             with connectable.connect() as conn:
@@ -1580,7 +1625,8 @@ def main(connectable):
                     foo_data = conn.run_callable(foo)
                     conn.run_callable(bar, foo_data)
-        DataFrame({'test_foo_data': [0, 1, 2]}).to_sql('test_foo_data', self.conn)
+        DataFrame({'test_foo_data': [0, 1, 2]}).to_sql(
+            'test_foo_data', self.conn)
         main(self.conn)
     def test_temporary_table(self):
@@ -1610,8 +1656,10 @@ class Temporary(Base):
 class _TestSQLAlchemyConn(_EngineToConnMixin, _TestSQLAlchemy):
+
     def test_transactions(self):
-        raise nose.SkipTest("Nested transactions rollbacks don't work with Pandas")
+        raise nose.SkipTest(
+            "Nested transactions rollbacks don't work with Pandas")
class _TestSQLiteAlchemy(object): @@ -1657,7 +1705,7 @@ def test_default_date_load(self): def test_bigint_warning(self): # test no warning for BIGINT (to support int64) is raised (GH7433) - df = DataFrame({'a':[1,2]}, dtype='int64') + df = DataFrame({'a': [1, 2]}, dtype='int64') df.to_sql('test_bigintwarning', self.conn, index=False) with warnings.catch_warnings(record=True) as w: @@ -1681,7 +1729,7 @@ def connect(cls): @classmethod def setup_driver(cls): try: - import pymysql + import pymysql # noqa cls.driver = 'pymysql' except ImportError: raise nose.SkipTest('pymysql not installed') @@ -1707,7 +1755,7 @@ def test_default_type_conversion(self): def test_read_procedure(self): # see GH7324. Although it is more an api test, it is added to the # mysql tests as sqlite does not have stored procedures - df = DataFrame({'a': [1, 2, 3], 'b':[0.1, 0.2, 0.3]}) + df = DataFrame({'a': [1, 2, 3], 'b': [0.1, 0.2, 0.3]}) df.to_sql('test_procedure', self.conn, index=False) proc = """DROP PROCEDURE IF EXISTS get_testdb; @@ -1721,7 +1769,7 @@ def test_read_procedure(self): connection = self.conn.connect() trans = connection.begin() try: - r1 = connection.execute(proc) + r1 = connection.execute(proc) # noqa trans.commit() except: trans.rollback() @@ -1750,14 +1798,16 @@ def connect(cls): @classmethod def setup_driver(cls): try: - import psycopg2 + import psycopg2 # noqa cls.driver = 'psycopg2' except ImportError: raise nose.SkipTest('psycopg2 not installed') def test_schema_support(self): - # only test this for postgresql (schema's not supported in mysql/sqlite) - df = DataFrame({'col1':[1, 2], 'col2':[0.1, 0.2], 'col3':['a', 'n']}) + # only test this for postgresql (schema's not supported in + # mysql/sqlite) + df = DataFrame({'col1': [1, 2], 'col2': [ + 0.1, 0.2], 'col3': ['a', 'n']}) # create a schema self.conn.execute("DROP SCHEMA IF EXISTS other CASCADE;") @@ -1783,7 +1833,7 @@ def test_schema_support(self): self.assertRaises(ValueError, sql.read_sql_table, 
'test_schema_other', self.conn, schema='public') - ## different if_exists options + # different if_exists options # create a schema self.conn.execute("DROP SCHEMA IF EXISTS other CASCADE;") @@ -1795,10 +1845,11 @@ def test_schema_support(self): if_exists='replace') df.to_sql('test_schema_other', self.conn, schema='other', index=False, if_exists='append') - res = sql.read_sql_table('test_schema_other', self.conn, schema='other') + res = sql.read_sql_table( + 'test_schema_other', self.conn, schema='other') tm.assert_frame_equal(concat([df, df], ignore_index=True), res) - ## specifying schema in user-provided meta + # specifying schema in user-provided meta # The schema won't be applied on another Connection # because of transactional schemas @@ -1807,12 +1858,16 @@ def test_schema_support(self): meta = sqlalchemy.MetaData(engine2, schema='other') pdsql = sql.SQLDatabase(engine2, meta=meta) pdsql.to_sql(df, 'test_schema_other2', index=False) - pdsql.to_sql(df, 'test_schema_other2', index=False, if_exists='replace') - pdsql.to_sql(df, 'test_schema_other2', index=False, if_exists='append') - res1 = sql.read_sql_table('test_schema_other2', self.conn, schema='other') + pdsql.to_sql(df, 'test_schema_other2', + index=False, if_exists='replace') + pdsql.to_sql(df, 'test_schema_other2', + index=False, if_exists='append') + res1 = sql.read_sql_table( + 'test_schema_other2', self.conn, schema='other') res2 = pdsql.read_table('test_schema_other2') tm.assert_frame_equal(res1, res2) + class TestMySQLAlchemy(_TestMySQLAlchemy, _TestSQLAlchemy): pass @@ -1837,8 +1892,8 @@ class TestSQLiteAlchemyConn(_TestSQLiteAlchemy, _TestSQLAlchemyConn): pass -#------------------------------------------------------------------------------ -#--- Test Sqlite / MySQL fallback +# ----------------------------------------------------------------------------- +# -- Test Sqlite / MySQL fallback class TestSQLiteFallback(SQLiteMixIn, PandasSQLTest): """ @@ -1961,9 +2016,11 @@ def test_dtype(self): 
df.to_sql('dtype_test2', self.conn, dtype={'B': 'STRING'}) # sqlite stores Boolean values as INTEGER - self.assertEqual(self._get_sqlite_column_type('dtype_test', 'B'), 'INTEGER') + self.assertEqual(self._get_sqlite_column_type( + 'dtype_test', 'B'), 'INTEGER') - self.assertEqual(self._get_sqlite_column_type('dtype_test2', 'B'), 'STRING') + self.assertEqual(self._get_sqlite_column_type( + 'dtype_test2', 'B'), 'STRING') self.assertRaises(ValueError, df.to_sql, 'error', self.conn, dtype={'B': bool}) @@ -1971,18 +2028,19 @@ def test_notnull_dtype(self): if self.flavor == 'mysql': raise nose.SkipTest('Not applicable to MySQL legacy') - cols = {'Bool': Series([True,None]), + cols = {'Bool': Series([True, None]), 'Date': Series([datetime(2012, 5, 1), None]), - 'Int' : Series([1, None], dtype='object'), + 'Int': Series([1, None], dtype='object'), 'Float': Series([1.1, None]) - } + } df = DataFrame(cols) tbl = 'notnull_dtype_test' df.to_sql(tbl, self.conn) self.assertEqual(self._get_sqlite_column_type(tbl, 'Bool'), 'INTEGER') - self.assertEqual(self._get_sqlite_column_type(tbl, 'Date'), 'TIMESTAMP') + self.assertEqual(self._get_sqlite_column_type( + tbl, 'Date'), 'TIMESTAMP') self.assertEqual(self._get_sqlite_column_type(tbl, 'Int'), 'INTEGER') self.assertEqual(self._get_sqlite_column_type(tbl, 'Float'), 'REAL') @@ -1992,17 +2050,18 @@ def test_illegal_names(self): # Raise error on blank self.assertRaises(ValueError, df.to_sql, "", self.conn, - flavor=self.flavor) + flavor=self.flavor) - for ndx, weird_name in enumerate(['test_weird_name]','test_weird_name[', - 'test_weird_name`','test_weird_name"', 'test_weird_name\'', - '_b.test_weird_name_01-30', '"_b.test_weird_name_01-30"', - '99beginswithnumber', '12345', u'\xe9']): + for ndx, weird_name in enumerate( + ['test_weird_name]', 'test_weird_name[', + 'test_weird_name`', 'test_weird_name"', 'test_weird_name\'', + '_b.test_weird_name_01-30', '"_b.test_weird_name_01-30"', + '99beginswithnumber', '12345', u'\xe9']): 
df.to_sql(weird_name, self.conn, flavor=self.flavor) sql.table_exists(weird_name, self.conn) df2 = DataFrame([[1, 2], [3, 4]], columns=['a', weird_name]) - c_tbl = 'test_weird_col_name%d'%ndx + c_tbl = 'test_weird_col_name%d' % ndx df2.to_sql(c_tbl, self.conn, flavor=self.flavor) sql.table_exists(c_tbl, self.conn) @@ -2022,7 +2081,8 @@ def setUpClass(cls): try: cls.connect() except cls.driver.err.OperationalError: - raise nose.SkipTest("{0} - can't connect to MySQL server".format(cls)) + raise nose.SkipTest( + "{0} - can't connect to MySQL server".format(cls)) @classmethod def setup_driver(cls): @@ -2034,7 +2094,8 @@ def setup_driver(cls): @classmethod def connect(cls): - return cls.driver.connect(host='127.0.0.1', user='root', passwd='', db='pandas_nosetest') + return cls.driver.connect(host='127.0.0.1', user='root', passwd='', + db='pandas_nosetest') def _count_rows(self, table_name): cur = self._get_exec() @@ -2072,14 +2133,15 @@ def _get_index_columns(self, tbl_name): ix_cols[ix_name].append(ix_col) return list(ix_cols.values()) - def test_to_sql_save_index(self): - self._to_sql_save_index() + # TODO: cruft? 
+    # def test_to_sql_save_index(self):
+    #     self._to_sql_save_index()

-        for ix_name, ix_col in zip(ixs.Key_name, ixs.Column_name):
-            if ix_name not in ix_cols:
-                ix_cols[ix_name] = []
-            ix_cols[ix_name].append(ix_col)
-        return ix_cols.values()
+    # for ix_name, ix_col in zip(ixs.Key_name, ixs.Column_name):
+    #     if ix_name not in ix_cols:
+    #         ix_cols[ix_name] = []
+    #     ix_cols[ix_name].append(ix_col)
+    # return ix_cols.values()

     def test_to_sql_save_index(self):
         self._to_sql_save_index()

@@ -2088,27 +2150,31 @@ def test_illegal_names(self):
         df = DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])

         # These tables and columns should be ok
-        for ndx, ok_name in enumerate(['99beginswithnumber','12345']):
+        for ndx, ok_name in enumerate(['99beginswithnumber', '12345']):
             df.to_sql(ok_name, self.conn, flavor=self.flavor, index=False,
                       if_exists='replace')

             df2 = DataFrame([[1, 2], [3, 4]], columns=['a', ok_name])
-            df2.to_sql('test_ok_col_name', self.conn, flavor=self.flavor, index=False,
-                       if_exists='replace')
+            df2.to_sql('test_ok_col_name', self.conn,
+                       flavor=self.flavor, index=False,
+                       if_exists='replace')

         # For MySQL, these should raise ValueError
-        for ndx, illegal_name in enumerate(['test_illegal_name]','test_illegal_name[',
-            'test_illegal_name`','test_illegal_name"', 'test_illegal_name\'', '']):
+        for ndx, illegal_name in enumerate(
+            ['test_illegal_name]', 'test_illegal_name[',
+             'test_illegal_name`', 'test_illegal_name"',
+             'test_illegal_name\'', '']):
             self.assertRaises(ValueError, df.to_sql, illegal_name, self.conn,
-                flavor=self.flavor, index=False)
+                              flavor=self.flavor, index=False)

             df2 = DataFrame([[1, 2], [3, 4]], columns=['a', illegal_name])
-            self.assertRaises(ValueError, df2.to_sql, 'test_illegal_col_name%d'%ndx,
-                self.conn, flavor=self.flavor, index=False)
+            self.assertRaises(ValueError, df2.to_sql,
+                              'test_illegal_col_name%d' % ndx,
+                              self.conn, flavor=self.flavor, index=False)

-#------------------------------------------------------------------------------
-#---
Old tests from 0.13.1 (before refactor using sqlalchemy) +# ----------------------------------------------------------------------------- +# -- Old tests from 0.13.1 (before refactor using sqlalchemy) _formatters = { @@ -2124,6 +2190,7 @@ def test_illegal_names(self): bool: lambda x: "'%s'" % x, } + def format_query(sql, *args): """ @@ -2138,9 +2205,10 @@ def format_query(sql, *args): return sql % tuple(processed_args) + def _skip_if_no_pymysql(): try: - import pymysql + import pymysql # noqa except ImportError: raise nose.SkipTest('pymysql not installed, skipping') @@ -2283,7 +2351,7 @@ def test_tquery(self): frame = tm.makeTimeDataFrame() sql.write_frame(frame, name='test_table', con=self.conn) result = sql.tquery("select A from test_table", self.conn) - expected = Series(frame.A.values, frame.index) # not to have name + expected = Series(frame.A.values, frame.index) # not to have name result = Series(result, frame.index) tm.assert_series_equal(result, expected) @@ -2318,27 +2386,29 @@ def test_uquery(self): def test_keyword_as_column_names(self): ''' ''' - df = DataFrame({'From':np.ones(5)}) - sql.write_frame(df, con = self.conn, name = 'testkeywords') + df = DataFrame({'From': np.ones(5)}) + sql.write_frame(df, con=self.conn, name='testkeywords') def test_onecolumn_of_integer(self): # GH 3628 # a column_of_integers dataframe should transfer well to sql - mono_df=DataFrame([1 , 2], columns=['c0']) - sql.write_frame(mono_df, con = self.conn, name = 'mono_df') + mono_df = DataFrame([1, 2], columns=['c0']) + sql.write_frame(mono_df, con=self.conn, name='mono_df') # computing the sum via sql - con_x=self.conn - the_sum=sum([my_c0[0] for my_c0 in con_x.execute("select * from mono_df")]) + con_x = self.conn + the_sum = sum([my_c0[0] + for my_c0 in con_x.execute("select * from mono_df")]) # it should not fail, and gives 3 ( Issue #3628 ) - self.assertEqual(the_sum , 3) + self.assertEqual(the_sum, 3) - result = sql.read_frame("select * from mono_df",con_x) - 
tm.assert_frame_equal(result,mono_df) + result = sql.read_frame("select * from mono_df", con_x) + tm.assert_frame_equal(result, mono_df) def test_if_exists(self): df_if_exists_1 = DataFrame({'col1': [1, 2], 'col2': ['A', 'B']}) - df_if_exists_2 = DataFrame({'col1': [3, 4, 5], 'col2': ['C', 'D', 'E']}) + df_if_exists_2 = DataFrame( + {'col1': [3, 4, 5], 'col2': ['C', 'D', 'E']}) table_name = 'table_if_exists' sql_select = "SELECT * FROM %s" % table_name @@ -2412,12 +2482,12 @@ def setUpClass(cls): return try: pymysql.connect(read_default_group='pandas') - except pymysql.ProgrammingError as e: + except pymysql.ProgrammingError: raise nose.SkipTest( "Create a group of connection parameters under the heading " "[pandas] in your system's mysql default file, " "typically located at ~/.my.cnf or /etc/.my.cnf. ") - except pymysql.Error as e: + except pymysql.Error: raise nose.SkipTest( "Cannot connect to database. " "Create a group of connection parameters under the heading " @@ -2430,27 +2500,26 @@ def setUp(self): try: # Try Travis defaults. # No real user should allow root access with a blank password. - self.conn = pymysql.connect(host='localhost', user='root', passwd='', - db='pandas_nosetest') + self.conn = pymysql.connect(host='localhost', user='root', + passwd='', db='pandas_nosetest') except: pass else: return try: self.conn = pymysql.connect(read_default_group='pandas') - except pymysql.ProgrammingError as e: + except pymysql.ProgrammingError: raise nose.SkipTest( "Create a group of connection parameters under the heading " "[pandas] in your system's mysql default file, " "typically located at ~/.my.cnf or /etc/.my.cnf. ") - except pymysql.Error as e: + except pymysql.Error: raise nose.SkipTest( "Cannot connect to database. " "Create a group of connection parameters under the heading " "[pandas] in your system's mysql default file, " "typically located at ~/.my.cnf or /etc/.my.cnf. 
") - def test_basic(self): _skip_if_no_pymysql() frame = tm.makeTimeDataFrame() @@ -2586,7 +2655,6 @@ def test_execute_closed_connection(self): # Initialize connection again (needed for tearDown) self.setUp() - def test_na_roundtrip(self): _skip_if_no_pymysql() pass @@ -2598,7 +2666,8 @@ def _check_roundtrip(self, frame): with warnings.catch_warnings(): warnings.filterwarnings("ignore", "Unknown table.*") cur.execute(drop_sql) - sql.write_frame(frame, name='test_table', con=self.conn, flavor='mysql') + sql.write_frame(frame, name='test_table', + con=self.conn, flavor='mysql') result = sql.read_frame("select * from test_table", self.conn) # HACK! Change this once indexes are handled properly. @@ -2617,7 +2686,8 @@ def _check_roundtrip(self, frame): with warnings.catch_warnings(): warnings.filterwarnings("ignore", "Unknown table.*") cur.execute(drop_sql) - sql.write_frame(frame2, name='test_table2', con=self.conn, flavor='mysql') + sql.write_frame(frame2, name='test_table2', + con=self.conn, flavor='mysql') result = sql.read_frame("select * from test_table2", self.conn, index_col='Idx') expected = frame.copy() @@ -2629,16 +2699,17 @@ def _check_roundtrip(self, frame): def test_tquery(self): try: - import pymysql + import pymysql # noqa except ImportError: raise nose.SkipTest("no pymysql") frame = tm.makeTimeDataFrame() drop_sql = "DROP TABLE IF EXISTS test_table" cur = self.conn.cursor() cur.execute(drop_sql) - sql.write_frame(frame, name='test_table', con=self.conn, flavor='mysql') + sql.write_frame(frame, name='test_table', + con=self.conn, flavor='mysql') result = sql.tquery("select A from test_table", self.conn) - expected = Series(frame.A.values, frame.index) # not to have name + expected = Series(frame.A.values, frame.index) # not to have name result = Series(result, frame.index) tm.assert_series_equal(result, expected) @@ -2654,14 +2725,15 @@ def test_tquery(self): def test_uquery(self): try: - import pymysql + import pymysql # noqa except ImportError: raise 
nose.SkipTest("no pymysql") frame = tm.makeTimeDataFrame() drop_sql = "DROP TABLE IF EXISTS test_table" cur = self.conn.cursor() cur.execute(drop_sql) - sql.write_frame(frame, name='test_table', con=self.conn, flavor='mysql') + sql.write_frame(frame, name='test_table', + con=self.conn, flavor='mysql') stmt = 'INSERT INTO test_table VALUES(2.314, -123.1, 1.234, 2.3)' self.assertEqual(sql.uquery(stmt, con=self.conn), 1) @@ -2681,14 +2753,15 @@ def test_keyword_as_column_names(self): ''' ''' _skip_if_no_pymysql() - df = DataFrame({'From':np.ones(5)}) - sql.write_frame(df, con = self.conn, name = 'testkeywords', + df = DataFrame({'From': np.ones(5)}) + sql.write_frame(df, con=self.conn, name='testkeywords', if_exists='replace', flavor='mysql') def test_if_exists(self): _skip_if_no_pymysql() df_if_exists_1 = DataFrame({'col1': [1, 2], 'col2': ['A', 'B']}) - df_if_exists_2 = DataFrame({'col1': [3, 4, 5], 'col2': ['C', 'D', 'E']}) + df_if_exists_2 = DataFrame( + {'col1': [3, 4, 5], 'col2': ['C', 'D', 'E']}) table_name = 'table_if_exists' sql_select = "SELECT * FROM %s" % table_name diff --git a/pandas/io/tests/test_stata.py b/pandas/io/tests/test_stata.py index 86dfbc8f76a9b..e1e12e47457f9 100644 --- a/pandas/io/tests/test_stata.py +++ b/pandas/io/tests/test_stata.py @@ -18,7 +18,7 @@ from pandas.core.common import is_categorical_dtype from pandas.io.parsers import read_csv from pandas.io.stata import (read_stata, StataReader, InvalidColumnName, - PossiblePrecisionLoss, StataMissingValue) + PossiblePrecisionLoss, StataMissingValue) import pandas.util.testing as tm from pandas.tslib import NaT from pandas import compat @@ -92,14 +92,14 @@ def test_read_empty_dta(self): empty_ds = DataFrame(columns=['unit']) # GH 7369, make sure can read a 0-obs dta file with tm.ensure_clean() as path: - empty_ds.to_stata(path,write_index=False) + empty_ds.to_stata(path, write_index=False) empty_ds2 = read_stata(path) tm.assert_frame_equal(empty_ds, empty_ds2) def test_data_method(self): # 
Minimal testing of legacy data method with StataReader(self.dta1_114) as rdr: - with warnings.catch_warnings(record=True) as w: + with warnings.catch_warnings(record=True) as w: # noqa parsed_114_data = rdr.data() with StataReader(self.dta1_114) as rdr: @@ -184,9 +184,12 @@ def test_read_dta2(self): # buggy test because of the NaT comparison on certain platforms # Format 113 test fails since it does not support tc and tC formats # tm.assert_frame_equal(parsed_113, expected) - tm.assert_frame_equal(parsed_114, expected, check_datetimelike_compat=True) - tm.assert_frame_equal(parsed_115, expected, check_datetimelike_compat=True) - tm.assert_frame_equal(parsed_117, expected, check_datetimelike_compat=True) + tm.assert_frame_equal(parsed_114, expected, + check_datetimelike_compat=True) + tm.assert_frame_equal(parsed_115, expected, + check_datetimelike_compat=True) + tm.assert_frame_equal(parsed_117, expected, + check_datetimelike_compat=True) def test_read_dta3(self): parsed_113 = self.read_dta(self.dta3_113) @@ -228,7 +231,8 @@ def test_read_dta4(self): 'labeled_with_missings', 'float_labelled']) # these are all categoricals - expected = pd.concat([expected[col].astype('category') for col in expected], axis=1) + expected = pd.concat([expected[col].astype('category') + for col in expected], axis=1) tm.assert_frame_equal(parsed_113, expected) tm.assert_frame_equal(parsed_114, expected) @@ -248,7 +252,6 @@ def test_read_dta12(self): tm.assert_frame_equal(parsed_117, expected, check_dtype=False) - def test_read_dta18(self): parsed_118 = self.read_dta(self.dta22_118) parsed_118["Bytes"] = parsed_118["Bytes"].astype('O') @@ -257,16 +260,18 @@ def test_read_dta18(self): ['Dog', 'Boston', u'Uzunköprü', np.nan, np.nan, np.nan, np.nan], ['Plane', 'Rome', u'Tromsø', 0, 0.0, 'option a', 0.0], ['Potato', 'Tokyo', u'Elâzığ', -4, 4.0, 4, 4], - ['', '', '', 0, 0.3332999, 'option a', 1/3.] + ['', '', '', 0, 0.3332999, 'option a', 1 / 3.] 
], - columns=['Things', 'Cities', 'Unicode_Cities_Strl', 'Ints', 'Floats', 'Bytes', 'Longs']) + columns=['Things', 'Cities', 'Unicode_Cities_Strl', + 'Ints', 'Floats', 'Bytes', 'Longs']) expected["Floats"] = expected["Floats"].astype(np.float32) for col in parsed_118.columns: tm.assert_almost_equal(parsed_118[col], expected[col]) with StataReader(self.dta22_118) as rdr: vl = rdr.variable_labels() - vl_expected = {u'Unicode_Cities_Strl': u'Here are some strls with Ünicode chars', + vl_expected = {u'Unicode_Cities_Strl': + u'Here are some strls with Ünicode chars', u'Longs': u'long data', u'Things': u'Here are some things', u'Bytes': u'byte data', @@ -305,8 +310,8 @@ def test_write_dta6(self): def test_read_write_dta10(self): original = DataFrame(data=[["string", "object", 1, 1.1, np.datetime64('2003-12-25')]], - columns=['string', 'object', 'integer', 'floating', - 'datetime']) + columns=['string', 'object', 'integer', + 'floating', 'datetime']) original["object"] = Series(original["object"], dtype=object) original.index.name = 'index' original.index = original.index.astype(np.int32) @@ -327,7 +332,7 @@ def test_stata_doc_examples(self): def test_write_preserves_original(self): # 9795 np.random.seed(423) - df = pd.DataFrame(np.random.randn(5,4), columns=list('abcd')) + df = pd.DataFrame(np.random.randn(5, 4), columns=list('abcd')) df.ix[2, 'a':'c'] = np.nan df_copy = df.copy() with tm.ensure_clean() as path: @@ -348,18 +353,20 @@ def test_encoding(self): else: expected = raw.kreis1849.str.decode("latin-1")[0] self.assertEqual(result, expected) - self.assertIsInstance(result, unicode) + self.assertIsInstance(result, unicode) # noqa with tm.ensure_clean() as path: - encoded.to_stata(path,encoding='latin-1', write_index=False) + encoded.to_stata(path, encoding='latin-1', write_index=False) reread_encoded = read_stata(path, encoding='latin-1') tm.assert_frame_equal(encoded, reread_encoded) def test_read_write_dta11(self): original = DataFrame([(1, 2, 3, 4)], - 
columns=['good', compat.u('b\u00E4d'), '8number', 'astringwithmorethan32characters______']) + columns=['good', compat.u('b\u00E4d'), '8number', + 'astringwithmorethan32characters______']) formatted = DataFrame([(1, 2, 3, 4)], - columns=['good', 'b_d', '_8number', 'astringwithmorethan32characters_']) + columns=['good', 'b_d', '_8number', + 'astringwithmorethan32characters_']) formatted.index.name = 'index' formatted = formatted.astype(np.int32) @@ -370,7 +377,8 @@ def test_read_write_dta11(self): tm.assert_equal(len(w), 1) written_and_read_again = self.read_dta(path) - tm.assert_frame_equal(written_and_read_again.set_index('index'), formatted) + tm.assert_frame_equal( + written_and_read_again.set_index('index'), formatted) def test_read_write_dta12(self): original = DataFrame([(1, 2, 3, 4, 5, 6)], @@ -393,10 +401,12 @@ def test_read_write_dta12(self): with tm.ensure_clean() as path: with warnings.catch_warnings(record=True) as w: original.to_stata(path, None) - tm.assert_equal(len(w), 1) # should get a warning for that format. + # should get a warning for that format. 
+ tm.assert_equal(len(w), 1) written_and_read_again = self.read_dta(path) - tm.assert_frame_equal(written_and_read_again.set_index('index'), formatted) + tm.assert_frame_equal( + written_and_read_again.set_index('index'), formatted) def test_read_write_dta13(self): s1 = Series(2**9, dtype=np.int16) @@ -420,7 +430,8 @@ def test_read_write_reread_dta14(self): for col in cols: expected[col] = expected[col]._convert(datetime=True, numeric=True) expected['float_'] = expected['float_'].astype(np.float32) - expected['date_td'] = pd.to_datetime(expected['date_td'], errors='coerce') + expected['date_td'] = pd.to_datetime( + expected['date_td'], errors='coerce') parsed_113 = self.read_dta(self.dta14_113) parsed_113.index.name = 'index' @@ -438,7 +449,8 @@ def test_read_write_reread_dta14(self): with tm.ensure_clean() as path: parsed_114.to_stata(path, {'date_td': 'td'}) written_and_read_again = self.read_dta(path) - tm.assert_frame_equal(written_and_read_again.set_index('index'), parsed_114) + tm.assert_frame_equal( + written_and_read_again.set_index('index'), parsed_114) def test_read_write_reread_dta15(self): expected = self.read_csv(self.csv15) @@ -447,7 +459,8 @@ def test_read_write_reread_dta15(self): expected['long_'] = expected['long_'].astype(np.int32) expected['float_'] = expected['float_'].astype(np.float32) expected['double_'] = expected['double_'].astype(np.float64) - expected['date_td'] = expected['date_td'].apply(datetime.strptime, args=('%Y-%m-%d',)) + expected['date_td'] = expected['date_td'].apply( + datetime.strptime, args=('%Y-%m-%d',)) parsed_113 = self.read_dta(self.dta15_113) parsed_114 = self.read_dta(self.dta15_114) @@ -464,10 +477,12 @@ def test_timestamp_and_label(self): time_stamp = datetime(2000, 2, 29, 14, 21) data_label = 'This is a data file.' 
with tm.ensure_clean() as path: - original.to_stata(path, time_stamp=time_stamp, data_label=data_label) + original.to_stata(path, time_stamp=time_stamp, + data_label=data_label) with StataReader(path) as reader: - parsed_time_stamp = dt.datetime.strptime(reader.time_stamp, ('%d %b %Y %H:%M')) + parsed_time_stamp = dt.datetime.strptime( + reader.time_stamp, ('%d %b %Y %H:%M')) assert parsed_time_stamp == time_stamp assert reader.data_label == data_label @@ -507,8 +522,8 @@ def test_no_index(self): with tm.ensure_clean() as path: original.to_stata(path, write_index=False) written_and_read_again = self.read_dta(path) - tm.assertRaises(KeyError, - lambda: written_and_read_again['index_not_written']) + tm.assertRaises( + KeyError, lambda: written_and_read_again['index_not_written']) def test_string_no_dates(self): s1 = Series(['a', 'A longer string']) @@ -616,7 +631,7 @@ def test_variable_labels(self): sr_117 = rdr.variable_labels() keys = ('var1', 'var2', 'var3') labels = ('label1', 'label2', 'label3') - for k,v in compat.iteritems(sr_115): + for k, v in compat.iteritems(sr_115): self.assertTrue(k in sr_117) self.assertTrue(v == sr_117[k]) self.assertTrue(k in keys) @@ -626,7 +641,8 @@ def test_minimal_size_col(self): str_lens = (1, 100, 244) s = {} for str_len in str_lens: - s['s' + str(str_len)] = Series(['a' * str_len, 'b' * str_len, 'c' * str_len]) + s['s' + str(str_len)] = Series(['a' * str_len, + 'b' * str_len, 'c' * str_len]) original = DataFrame(s) with tm.ensure_clean() as path: original.to_stata(path, write_index=False) @@ -643,15 +659,16 @@ def test_excessively_long_string(self): str_lens = (1, 244, 500) s = {} for str_len in str_lens: - s['s' + str(str_len)] = Series(['a' * str_len, 'b' * str_len, 'c' * str_len]) + s['s' + str(str_len)] = Series(['a' * str_len, + 'b' * str_len, 'c' * str_len]) original = DataFrame(s) with tm.assertRaises(ValueError): with tm.ensure_clean() as path: original.to_stata(path) def test_missing_value_generator(self): - types = 
('b','h','l') - df = DataFrame([[0.0]],columns=['float_']) + types = ('b', 'h', 'l') + df = DataFrame([[0.0]], columns=['float_']) with tm.ensure_clean() as path: df.to_stata(path) with StataReader(path) as rdr: @@ -660,20 +677,22 @@ def test_missing_value_generator(self): expected_values.insert(0, '.') for t in types: offset = valid_range[t][1] - for i in range(0,27): - val = StataMissingValue(offset+1+i) + for i in range(0, 27): + val = StataMissingValue(offset + 1 + i) self.assertTrue(val.string == expected_values[i]) # Test extremes for floats - val = StataMissingValue(struct.unpack('<f',b'\x00\x00\x00\x7f')[0]) + val = StataMissingValue(struct.unpack('<f', b'\x00\x00\x00\x7f')[0]) self.assertTrue(val.string == '.') - val = StataMissingValue(struct.unpack('<f',b'\x00\xd0\x00\x7f')[0]) + val = StataMissingValue(struct.unpack('<f', b'\x00\xd0\x00\x7f')[0]) self.assertTrue(val.string == '.z') # Test extremes for floats - val = StataMissingValue(struct.unpack('<d',b'\x00\x00\x00\x00\x00\x00\xe0\x7f')[0]) + val = StataMissingValue(struct.unpack( + '<d', b'\x00\x00\x00\x00\x00\x00\xe0\x7f')[0]) self.assertTrue(val.string == '.') - val = StataMissingValue(struct.unpack('<d',b'\x00\x00\x00\x00\x00\x1a\xe0\x7f')[0]) + val = StataMissingValue(struct.unpack( + '<d', b'\x00\x00\x00\x00\x00\x1a\xe0\x7f')[0]) self.assertTrue(val.string == '.z') def test_missing_value_conversion(self): @@ -683,9 +702,9 @@ def test_missing_value_conversion(self): keys.sort() data = [] for i in range(27): - row = [StataMissingValue(keys[i+(j*27)]) for j in range(5)] + row = [StataMissingValue(keys[i + (j * 27)]) for j in range(5)] data.append(row) - expected = DataFrame(data,columns=columns) + expected = DataFrame(data, columns=columns) parsed_113 = read_stata(self.dta17_113, convert_missing=True) parsed_115 = read_stata(self.dta17_115, convert_missing=True) @@ -719,24 +738,27 @@ def test_big_dates(self): 'date_th', 'date_ty'] # Fixes for weekly, quarterly,half,year - expected[2][2] = 
datetime(9999,12,24) - expected[2][3] = datetime(9999,12,1) - expected[2][4] = datetime(9999,10,1) - expected[2][5] = datetime(9999,7,1) - expected[4][2] = datetime(2262,4,16) - expected[4][3] = expected[4][4] = datetime(2262,4,1) - expected[4][5] = expected[4][6] = datetime(2262,1,1) - expected[5][2] = expected[5][3] = expected[5][4] = datetime(1677,10,1) - expected[5][5] = expected[5][6] = datetime(1678,1,1) + expected[2][2] = datetime(9999, 12, 24) + expected[2][3] = datetime(9999, 12, 1) + expected[2][4] = datetime(9999, 10, 1) + expected[2][5] = datetime(9999, 7, 1) + expected[4][2] = datetime(2262, 4, 16) + expected[4][3] = expected[4][4] = datetime(2262, 4, 1) + expected[4][5] = expected[4][6] = datetime(2262, 1, 1) + expected[5][2] = expected[5][3] = expected[ + 5][4] = datetime(1677, 10, 1) + expected[5][5] = expected[5][6] = datetime(1678, 1, 1) expected = DataFrame(expected, columns=columns, dtype=np.object) parsed_115 = read_stata(self.dta18_115) parsed_117 = read_stata(self.dta18_117) - tm.assert_frame_equal(expected, parsed_115, check_datetimelike_compat=True) - tm.assert_frame_equal(expected, parsed_117, check_datetimelike_compat=True) + tm.assert_frame_equal(expected, parsed_115, + check_datetimelike_compat=True) + tm.assert_frame_equal(expected, parsed_117, + check_datetimelike_compat=True) - date_conversion = dict((c, c[-2:]) for c in columns) - #{c : c[-2:] for c in columns} + date_conversion = dict((c, c[-2:]) for c in columns) + # {c : c[-2:] for c in columns} with tm.ensure_clean() as path: expected.index.name = 'index' expected.to_stata(path, date_conversion) @@ -821,20 +843,23 @@ def test_categorical_writing(self): expected = original.copy() # these are all categoricals - original = pd.concat([original[col].astype('category') for col in original], axis=1) + original = pd.concat([original[col].astype('category') + for col in original], axis=1) - expected['incompletely_labeled'] = expected['incompletely_labeled'].apply(str) + 
expected['incompletely_labeled'] = expected[ + 'incompletely_labeled'].apply(str) expected['unlabeled'] = expected['unlabeled'].apply(str) - expected = pd.concat([expected[col].astype('category') for col in expected], axis=1) + expected = pd.concat([expected[col].astype('category') + for col in expected], axis=1) expected.index.name = 'index' with tm.ensure_clean() as path: - with warnings.catch_warnings(record=True) as w: + with warnings.catch_warnings(record=True) as w: # noqa # Silence warnings original.to_stata(path) written_and_read_again = self.read_dta(path) - tm.assert_frame_equal(written_and_read_again.set_index('index'), expected) - + tm.assert_frame_equal( + written_and_read_again.set_index('index'), expected) def test_categorical_warnings_and_errors(self): # Warning for non-string labels @@ -846,7 +871,8 @@ def test_categorical_warnings_and_errors(self): ['d' * 10000]], columns=['Too_long']) - original = pd.concat([original[col].astype('category') for col in original], axis=1) + original = pd.concat([original[col].astype('category') + for col in original], axis=1) with tm.ensure_clean() as path: tm.assertRaises(ValueError, original.to_stata, path) @@ -857,33 +883,43 @@ def test_categorical_warnings_and_errors(self): ['d'], [1]], columns=['Too_long']) - original = pd.concat([original[col].astype('category') for col in original], axis=1) + original = pd.concat([original[col].astype('category') + for col in original], axis=1) with warnings.catch_warnings(record=True) as w: original.to_stata(path) - tm.assert_equal(len(w), 1) # should get a warning for mixed content + # should get a warning for mixed content + tm.assert_equal(len(w), 1) def test_categorical_with_stata_missing_values(self): values = [['a' + str(i)] for i in range(120)] values.append([np.nan]) original = pd.DataFrame.from_records(values, columns=['many_labels']) - original = pd.concat([original[col].astype('category') for col in original], axis=1) + original = 
pd.concat([original[col].astype('category') + for col in original], axis=1) original.index.name = 'index' with tm.ensure_clean() as path: original.to_stata(path) written_and_read_again = self.read_dta(path) - tm.assert_frame_equal(written_and_read_again.set_index('index'), original) + tm.assert_frame_equal( + written_and_read_again.set_index('index'), original) def test_categorical_order(self): # Directly construct using expected codes # Format is is_cat, col_name, labels (in order), underlying data expected = [(True, 'ordered', ['a', 'b', 'c', 'd', 'e'], np.arange(5)), - (True, 'reverse', ['a', 'b', 'c', 'd', 'e'], np.arange(5)[::-1]), - (True, 'noorder', ['a', 'b', 'c', 'd', 'e'], np.array([2, 1, 4, 0, 3])), - (True, 'floating', ['a', 'b', 'c', 'd', 'e'], np.arange(0, 5)), - (True, 'float_missing', ['a', 'd', 'e'], np.array([0, 1, 2, -1, -1])), - (False, 'nolabel', [1.0, 2.0, 3.0, 4.0, 5.0], np.arange(5)), - (True, 'int32_mixed', ['d', 2, 'e', 'b', 'a'], np.arange(5))] + (True, 'reverse', ['a', 'b', 'c', + 'd', 'e'], np.arange(5)[::-1]), + (True, 'noorder', ['a', 'b', 'c', 'd', + 'e'], np.array([2, 1, 4, 0, 3])), + (True, 'floating', [ + 'a', 'b', 'c', 'd', 'e'], np.arange(0, 5)), + (True, 'float_missing', [ + 'a', 'd', 'e'], np.array([0, 1, 2, -1, -1])), + (False, 'nolabel', [ + 1.0, 2.0, 3.0, 4.0, 5.0], np.arange(5)), + (True, 'int32_mixed', ['d', 2, 'e', 'b', 'a'], + np.arange(5))] cols = [] for is_cat, col, labels, codes in expected: if is_cat: @@ -938,7 +974,6 @@ def test_categorical_ordering(self): tm.assert_equal(False, parsed_115_unordered[col].cat.ordered) tm.assert_equal(False, parsed_117_unordered[col].cat.ordered) - def test_read_chunks_117(self): files_117 = [self.dta1_117, self.dta2_117, self.dta3_117, self.dta4_117, self.dta14_117, self.dta15_117, @@ -946,30 +981,33 @@ def test_read_chunks_117(self): self.dta19_117, self.dta20_117] for fname in files_117: - for chunksize in 1,2: + for chunksize in 1, 2: for convert_categoricals in False, True: for 
convert_dates in False, True: with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") - parsed = read_stata(fname, convert_categoricals=convert_categoricals, - convert_dates=convert_dates) - itr = read_stata(fname, iterator=True, convert_categoricals=convert_categoricals, - convert_dates=convert_dates) + parsed = read_stata( + fname, + convert_categoricals=convert_categoricals, + convert_dates=convert_dates) + itr = read_stata( + fname, iterator=True, + convert_categoricals=convert_categoricals, + convert_dates=convert_dates) pos = 0 for j in range(5): - with warnings.catch_warnings(record=True) as w: + with warnings.catch_warnings(record=True) as w: # noqa warnings.simplefilter("always") try: chunk = itr.read(chunksize) except StopIteration: break - from_frame = parsed.iloc[pos:pos+chunksize, :] - tm.assert_frame_equal(from_frame, - chunk, - check_dtype=False, - check_datetimelike_compat=True) + from_frame = parsed.iloc[pos:pos + chunksize, :] + tm.assert_frame_equal( + from_frame, chunk, check_dtype=False, + check_datetimelike_compat=True) pos += chunksize @@ -995,7 +1033,6 @@ def test_iterator(self): chunk = itr.get_chunk() tm.assert_frame_equal(parsed.iloc[0:5, :], chunk) - def test_read_chunks_115(self): files_115 = [self.dta2_115, self.dta3_115, self.dta4_115, self.dta14_115, self.dta15_115, self.dta16_115, @@ -1003,32 +1040,35 @@ def test_read_chunks_115(self): self.dta20_115] for fname in files_115: - for chunksize in 1,2: + for chunksize in 1, 2: for convert_categoricals in False, True: for convert_dates in False, True: # Read the whole file with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") - parsed = read_stata(fname, convert_categoricals=convert_categoricals, - convert_dates=convert_dates) + parsed = read_stata( + fname, + convert_categoricals=convert_categoricals, + convert_dates=convert_dates) # Compare to what we get when reading by chunk - itr = read_stata(fname, iterator=True, 
convert_dates=convert_dates, - convert_categoricals=convert_categoricals) + itr = read_stata( + fname, iterator=True, + convert_dates=convert_dates, + convert_categoricals=convert_categoricals) pos = 0 for j in range(5): - with warnings.catch_warnings(record=True) as w: + with warnings.catch_warnings(record=True) as w: # noqa warnings.simplefilter("always") try: chunk = itr.read(chunksize) except StopIteration: break - from_frame = parsed.iloc[pos:pos+chunksize, :] - tm.assert_frame_equal(from_frame, - chunk, - check_dtype=False, - check_datetimelike_compat=True) + from_frame = parsed.iloc[pos:pos + chunksize, :] + tm.assert_frame_equal( + from_frame, chunk, check_dtype=False, + check_datetimelike_compat=True) pos += chunksize @@ -1044,7 +1084,7 @@ def test_read_chunks_columns(self): chunk = itr.read(chunksize, columns=columns) if chunk is None: break - from_frame = parsed.iloc[pos:pos+chunksize, :] + from_frame = parsed.iloc[pos:pos + chunksize, :] tm.assert_frame_equal(from_frame, chunk, check_dtype=False) pos += chunksize diff --git a/pandas/io/tests/test_wb.py b/pandas/io/tests/test_wb.py index ef72ad4964ff2..58386c3f1c145 100644 --- a/pandas/io/tests/test_wb.py +++ b/pandas/io/tests/test_wb.py @@ -1,3 +1,5 @@ +# flake8: noqa + import nose import pandas diff --git a/pandas/io/wb.py b/pandas/io/wb.py index 3b47fb0c1b2bb..50ffae4970998 100644 --- a/pandas/io/wb.py +++ b/pandas/io/wb.py @@ -1,5 +1,7 @@ # -*- coding: utf-8 -*- +# flake8: noqa + from __future__ import print_function from pandas.compat import map, reduce, range, lrange
I suppressed warnings in test_parsers.py because of the number of long literal strings in the file.
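The diff uses flake8's two suppression styles: a file-level `# flake8: noqa` comment (added to `test_wb.py` and `wb.py`) that skips all checks in that module, and a per-line `# noqa` (used on `catch_warnings(record=True) as w:` lines) that silences only the flagged line. A minimal sketch of both, assuming flake8's standard directive syntax:

```python
# flake8: noqa
# File-level directive: anywhere in a module, it tells flake8 to skip
# ALL checks for the whole file (as done above for test_wb.py and wb.py).

import warnings

# Per-line directive: silences flake8 only on this line, e.g. for a
# variable bound by catch_warnings(record=True) but never read, which
# would otherwise trip the "assigned but unused" check (F841).
with warnings.catch_warnings(record=True) as w:  # noqa
    warnings.simplefilter("always")
    warnings.warn("example", UserWarning)
```

Per-line `# noqa` is generally preferred, since the file-level form also hides genuine errors introduced later; the PRs here use the file-level form only for modules with star imports or many long literals.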
https://api.github.com/repos/pandas-dev/pandas/pulls/12096
2016-01-20T06:20:53Z
2016-01-21T08:02:17Z
null
2016-01-21T08:02:27Z
PEP: pandas/core round 6 (config*, convert, datetools, strings, style)
diff --git a/pandas/core/config.py b/pandas/core/config.py index e974689470c46..b6f00034429b2 100644 --- a/pandas/core/config.py +++ b/pandas/core/config.py @@ -57,8 +57,8 @@ import pandas.compat as compat DeprecatedOption = namedtuple('DeprecatedOption', 'key msg rkey removal_ver') -RegisteredOption = namedtuple( - 'RegisteredOption', 'key defval doc validator cb') +RegisteredOption = namedtuple('RegisteredOption', + 'key defval doc validator cb') _deprecated_options = {} # holds deprecated option metdata _registered_options = {} # holds registered option metdata @@ -67,14 +67,14 @@ class OptionError(AttributeError, KeyError): - """Exception for pandas.options, backwards compatible with KeyError - checks""" - + checks + """ # # User API + def _get_single_key(pat, silent): keys = _select_options(pat) if len(keys) == 0: @@ -106,14 +106,14 @@ def _set_option(*args, **kwargs): nargs = len(args) if not nargs or nargs % 2 != 0: raise ValueError("Must provide an even number of non-keyword " - "arguments") + "arguments") # default to false silent = kwargs.pop('silent', False) if kwargs: raise TypeError('_set_option() got an unexpected keyword ' - 'argument "{0}"'.format(list(kwargs.keys())[0])) + 'argument "{0}"'.format(list(kwargs.keys())[0])) for k, v in zip(args[::2], args[1::2]): key = _get_single_key(k, silent) @@ -129,6 +129,7 @@ def _set_option(*args, **kwargs): if o.cb: o.cb(key) + def _describe_option(pat='', _print_desc=True): keys = _select_options(pat) @@ -168,9 +169,7 @@ def get_default_val(pat): class DictWrapper(object): - - """ provide attribute-style access to a nested dict - """ + """ provide attribute-style access to a nested dict""" def __init__(self, d, prefix=""): object.__setattr__(self, "d", d) @@ -202,7 +201,6 @@ def __getattr__(self, key): def __dir__(self): return list(self.d.keys()) - # For user convenience, we'd like to have the available options described # in the docstring. 
For dev convenience we'd like to generate the docstrings # dynamically instead of maintaining them by hand. To this, we use the @@ -213,7 +211,6 @@ def __dir__(self): class CallableDynamicDoc(object): - def __init__(self, func, doc_tmpl): self.__doc_tmpl__ = doc_tmpl self.__func__ = func @@ -228,6 +225,7 @@ def __doc__(self): return self.__doc_tmpl__.format(opts_desc=opts_desc, opts_list=opts_list) + _get_option_tmpl = """ get_option(pat) @@ -384,10 +382,8 @@ class option_context(object): def __init__(self, *args): if not (len(args) % 2 == 0 and len(args) >= 2): - raise ValueError( - 'Need to invoke as' - 'option_context(pat, val, [(pat, val), ...)).' - ) + raise ValueError('Need to invoke as' + 'option_context(pat, val, [(pat, val), ...)).') self.ops = list(zip(args[::2], args[1::2])) @@ -462,8 +458,8 @@ def register_option(key, defval, doc='', validator=None, cb=None): cursor = cursor[p] if not isinstance(cursor, dict): - raise OptionError("Path prefix to option '%s' is already an option" - % '.'.join(path[:-1])) + raise OptionError("Path prefix to option '%s' is already an option" % + '.'.join(path[:-1])) cursor[path[-1]] = defval # initialize @@ -520,10 +516,10 @@ def deprecate_option(key, msg=None, rkey=None, removal_ver=None): _deprecated_options[key] = DeprecatedOption(key, msg, rkey, removal_ver) - # # functions internal to the module + def _select_options(pat): """returns a list of keys matching `pat` @@ -681,7 +677,6 @@ def pp(name, ks): else: return s - # # helpers @@ -717,7 +712,6 @@ def config_prefix(prefix): global register_option, get_option, set_option, reset_option def wrap(func): - def inner(key, *args, **kwds): pkey = '%s.%s' % (prefix, key) return func(pkey, *args, **kwds) @@ -735,10 +729,10 @@ def inner(key, *args, **kwds): get_option = _get_option register_option = _register_option - # These factories and methods are handy for use as the validator # arg in register_option + def is_type_factory(_type): """ @@ -790,10 +784,10 @@ def inner(x): 
def is_one_of_factory(legal_values): def inner(x): from pandas.core.common import pprint_thing as pp - if not x in legal_values: + if x not in legal_values: pp_values = lmap(pp, legal_values) - raise ValueError("Value must be one of %s" - % pp("|".join(pp_values))) + raise ValueError("Value must be one of %s" % + pp("|".join(pp_values))) return inner diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py index df24ef6f3743e..01a39583001c1 100644 --- a/pandas/core/config_init.py +++ b/pandas/core/config_init.py @@ -11,12 +11,10 @@ """ import pandas.core.config as cf -from pandas.core.config import (is_int, is_bool, is_text, is_float, - is_instance_factory, is_one_of_factory, - get_default_val) +from pandas.core.config import (is_int, is_bool, is_text, is_instance_factory, + is_one_of_factory, get_default_val) from pandas.core.format import detect_console_encoding - # # options from the "display" namespace @@ -61,8 +59,8 @@ pc_max_categories_doc = """ : int - This sets the maximum number of categories pandas should output when printing - out a `Categorical` or a Series of dtype "category". + This sets the maximum number of categories pandas should output when + printing out a `Categorical` or a Series of dtype "category". """ pc_max_info_cols_doc = """ @@ -146,9 +144,11 @@ pc_east_asian_width_doc = """ : boolean - Whether to use the Unicode East Asian Width to calculate the display text width + Whether to use the Unicode East Asian Width to calculate the display text + width. Enabling this may affect to the performance (default: False) """ + pc_ambiguous_as_wide_doc = """ : boolean Whether to handle Unicode characters belong to Ambiguous as Wide (width=2) @@ -197,7 +197,8 @@ : int or None df.info() will usually show null-counts for each column. For large frames this can be quite slow. max_info_rows and max_info_cols - limit this null check only to frames with smaller dimensions then specified. 
+ limit this null check only to frames with smaller dimensions than + specified. """ pc_large_repr_doc = """ @@ -222,15 +223,16 @@ pc_latex_escape = """ : bool - This specifies if the to_latex method of a Dataframe uses escapes special + This specifies if the to_latex method of a Dataframe uses escapes special characters. - method. Valid values: False,True + method. Valid values: False,True """ pc_latex_longtable = """ :bool - This specifies if the to_latex method of a Dataframe uses the longtable format. - method. Valid values: False,True + This specifies if the to_latex method of a Dataframe uses the longtable + format. + method. Valid values: False,True """ style_backup = dict() @@ -244,7 +246,7 @@ def mpl_style_cb(key): val = cf.get_option(key) if 'matplotlib' not in sys.modules.keys(): - if not(val): # starting up, we get reset to None + if not val: # starting up, we get reset to None return val raise Exception("matplotlib has not been imported. aborting") @@ -267,7 +269,8 @@ def mpl_style_cb(key): validator=is_instance_factory((int, type(None)))) cf.register_option('max_rows', 60, pc_max_rows_doc, validator=is_instance_factory([type(None), int])) - cf.register_option('max_categories', 8, pc_max_categories_doc, validator=is_int) + cf.register_option('max_categories', 8, pc_max_categories_doc, + validator=is_int) cf.register_option('max_colwidth', 50, max_colwidth_doc, validator=is_int) cf.register_option('max_columns', 20, pc_max_cols_doc, validator=is_instance_factory([type(None), int])) @@ -305,28 +308,29 @@ def mpl_style_cb(key): cf.register_option('line_width', get_default_val('display.width'), pc_line_width_doc) cf.register_option('memory_usage', True, pc_memory_usage_doc, - validator=is_one_of_factory([None, True, False, 'deep'])) + validator=is_one_of_factory([None, True, + False, 'deep'])) cf.register_option('unicode.east_asian_width', False, pc_east_asian_width_doc, validator=is_bool) cf.register_option('unicode.ambiguous_as_wide', False, 
pc_east_asian_width_doc, validator=is_bool) - cf.register_option('latex.escape',True, pc_latex_escape, - validator=is_bool) - cf.register_option('latex.longtable',False,pc_latex_longtable, - validator=is_bool) + cf.register_option('latex.escape', True, pc_latex_escape, + validator=is_bool) + cf.register_option('latex.longtable', False, pc_latex_longtable, + validator=is_bool) cf.deprecate_option('display.line_width', msg=pc_line_width_deprecation_warning, rkey='display.width') -cf.deprecate_option('display.height', - msg=pc_height_deprecation_warning, +cf.deprecate_option('display.height', msg=pc_height_deprecation_warning, rkey='display.max_rows') tc_sim_interactive_doc = """ : boolean Whether to simulate interactive mode for purposes of testing """ + with cf.config_prefix('mode'): cf.register_option('sim_interactive', False, tc_sim_interactive_doc) @@ -349,7 +353,6 @@ def use_inf_as_null_cb(key): cf.register_option('use_inf_as_null', False, use_inf_as_null_doc, cb=use_inf_as_null_cb) - # user warnings chained_assignment = """ : string @@ -361,7 +364,6 @@ def use_inf_as_null_cb(key): cf.register_option('chained_assignment', 'warn', chained_assignment, validator=is_one_of_factory([None, 'warn', 'raise'])) - # Set up the io.excel specific configuration. 
writer_engine_doc = """ : string @@ -371,8 +373,7 @@ def use_inf_as_null_cb(key): with cf.config_prefix('io.excel'): # going forward, will be additional writers - for ext, options in [('xls', ['xlwt']), - ('xlsm', ['openpyxl'])]: + for ext, options in [('xls', ['xlwt']), ('xlsm', ['openpyxl'])]: default = options.pop(0) if options: options = " " + ", ".join(options) @@ -384,14 +385,13 @@ def use_inf_as_null_cb(key): def _register_xlsx(engine, other): cf.register_option('xlsx.writer', engine, - writer_engine_doc.format(ext='xlsx', - default=engine, + writer_engine_doc.format(ext='xlsx', default=engine, others=", '%s'" % other), validator=str) try: # better memory footprint - import xlsxwriter + import xlsxwriter # noqa _register_xlsx('xlsxwriter', 'openpyxl') except ImportError: # fallback diff --git a/pandas/core/convert.py b/pandas/core/convert.py index 3745d4f5f6914..7f4fe73c688f8 100644 --- a/pandas/core/convert.py +++ b/pandas/core/convert.py @@ -9,11 +9,10 @@ isnull) import pandas.lib as lib + # TODO: Remove in 0.18 or 2017, which ever is sooner -def _possibly_convert_objects(values, convert_dates=True, - convert_numeric=True, - convert_timedeltas=True, - copy=True): +def _possibly_convert_objects(values, convert_dates=True, convert_numeric=True, + convert_timedeltas=True, copy=True): """ if we have an object dtype, try to coerce dates and/or numbers """ # if we have passed in a list or scalar @@ -27,16 +26,16 @@ def _possibly_convert_objects(values, convert_dates=True, # we take an aggressive stance and convert to datetime64[ns] if convert_dates == 'coerce': - new_values = _possibly_cast_to_datetime( - values, 'M8[ns]', errors='coerce') + new_values = _possibly_cast_to_datetime(values, 'M8[ns]', + errors='coerce') # if we are all nans then leave me alone if not isnull(new_values).all(): values = new_values else: - values = lib.maybe_convert_objects( - values, convert_datetime=convert_dates) + values = lib.maybe_convert_objects(values, + 
convert_datetime=convert_dates) # convert timedeltas if convert_timedeltas and values.dtype == np.object_: @@ -57,8 +56,8 @@ def _possibly_convert_objects(values, convert_dates=True, if values.dtype == np.object_: if convert_numeric: try: - new_values = lib.maybe_convert_numeric( - values, set(), coerce_numeric=True) + new_values = lib.maybe_convert_numeric(values, set(), + coerce_numeric=True) # if we are all nans then leave me alone if not isnull(new_values).all(): @@ -84,9 +83,8 @@ def _soft_convert_objects(values, datetime=True, numeric=True, timedelta=True, raise ValueError('At least one of datetime, numeric or timedelta must ' 'be True.') elif conversion_count > 1 and coerce: - raise ValueError("Only one of 'datetime', 'numeric' or " - "'timedelta' can be True when when coerce=True.") - + raise ValueError("Only one of 'datetime', 'numeric' or " + "'timedelta' can be True when when coerce=True.") if isinstance(values, (list, tuple)): # List or scalar @@ -110,19 +108,16 @@ def _soft_convert_objects(values, datetime=True, numeric=True, timedelta=True, # Soft conversions if datetime: - values = lib.maybe_convert_objects(values, - convert_datetime=datetime) + values = lib.maybe_convert_objects(values, convert_datetime=datetime) if timedelta and is_object_dtype(values.dtype): # Object check to ensure only run if previous did not convert - values = lib.maybe_convert_objects(values, - convert_timedelta=timedelta) + values = lib.maybe_convert_objects(values, convert_timedelta=timedelta) if numeric and is_object_dtype(values.dtype): try: - converted = lib.maybe_convert_numeric(values, - set(), - coerce_numeric=True) + converted = lib.maybe_convert_numeric(values, set(), + coerce_numeric=True) # If all NaNs, then do not-alter values = converted if not isnull(converted).all() else values values = values.copy() if copy else values diff --git a/pandas/core/datetools.py b/pandas/core/datetools.py index 28cd97f437f29..91b33d30004b6 100644 --- a/pandas/core/datetools.py +++ 
b/pandas/core/datetools.py @@ -1,8 +1,8 @@ """A collection of random tools for dealing with dates in Python""" -from pandas.tseries.tools import * -from pandas.tseries.offsets import * -from pandas.tseries.frequencies import * +from pandas.tseries.tools import * # noqa +from pandas.tseries.offsets import * # noqa +from pandas.tseries.frequencies import * # noqa day = DateOffset() bday = BDay() diff --git a/pandas/core/strings.py b/pandas/core/strings.py index 37c8e8b1d8829..1ffa836a75a1b 100644 --- a/pandas/core/strings.py +++ b/pandas/core/strings.py @@ -1,8 +1,9 @@ import numpy as np from pandas.compat import zip -from pandas.core.common import (isnull, _values_from_object, is_bool_dtype, is_list_like, - is_categorical_dtype, is_object_dtype, take_1d) +from pandas.core.common import (isnull, _values_from_object, is_bool_dtype, + is_list_like, is_categorical_dtype, + is_object_dtype, take_1d) import pandas.compat as compat from pandas.core.base import AccessorProperty, NoNewAttributesMixin from pandas.util.decorators import Appender, deprecate_kwarg @@ -11,7 +12,6 @@ import warnings import textwrap - _shared_docs = dict() @@ -138,11 +138,13 @@ def _map(f, arr, na_mask=False, na_value=np.nan, dtype=object): try: result = lib.map_infer_mask(arr, f, mask.view(np.uint8)) except (TypeError, AttributeError): + def g(x): try: return f(x) except (TypeError, AttributeError): return na_value + return _map(g, arr, dtype=dtype) if na_value is not np.nan: np.putmask(result, mask, na_value) @@ -206,7 +208,8 @@ def str_contains(arr, pat, case=True, flags=0, na=np.nan, regex=True): if regex.groups > 0: warnings.warn("This pattern has match groups. 
To actually get the" - " groups, use str.extract.", UserWarning, stacklevel=3) + " groups, use str.extract.", UserWarning, + stacklevel=3) f = lambda x: bool(regex.search(x)) else: @@ -314,6 +317,7 @@ def str_repeat(arr, repeats): repeated : Series/Index of objects """ if np.isscalar(repeats): + def rep(x): try: return compat.binary_type.__mul__(x, repeats) @@ -322,6 +326,7 @@ def rep(x): return _na_map(rep, arr) else: + def rep(x, r): try: return compat.binary_type.__mul__(x, r) @@ -360,7 +365,7 @@ def str_match(arr, pat, case=True, flags=0, na=np.nan, as_indexer=False): See Also -------- - contains : analagous, but less strict, relying on re.search instead of + contains : analogous, but less strict, relying on re.search instead of re.match extract : now preferred to the deprecated usage of match (as_indexer=False) @@ -467,7 +472,6 @@ def str_extract(arr, pat, flags=0): 2 NaN NaN """ - from pandas.core.series import Series from pandas.core.frame import DataFrame from pandas.core.index import Index @@ -475,7 +479,7 @@ def str_extract(arr, pat, flags=0): # just to be safe, check this if regex.groups == 0: raise ValueError("This pattern contains no groups to capture.") - empty_row = [np.nan]*regex.groups + empty_row = [np.nan] * regex.groups def f(x): if not isinstance(x, compat.string_types): @@ -498,10 +502,8 @@ def f(x): if arr.empty: result = DataFrame(columns=columns, dtype=object) else: - result = DataFrame([f(val) for val in arr], - columns=columns, - index=arr.index, - dtype=object) + result = DataFrame([f(val) for val in arr], columns=columns, + index=arr.index, dtype=object) return result, name @@ -542,7 +544,8 @@ def str_get_dummies(arr, sep='|'): # GH9980, Index.str does not support get_dummies() as it returns a frame if isinstance(arr, Index): - raise TypeError("get_dummies is not supported for string methods on Index") + raise TypeError("get_dummies is not supported for string methods on " + "Index") # TODO remove this hack? 
arr = arr.fillna('') @@ -818,6 +821,7 @@ def f(x): if stop is not None: y += x[local_stop:] return y + return _na_map(f, arr) @@ -919,10 +923,10 @@ def str_translate(arr, table, deletechars=None): Parameters ---------- table : dict (python 3), str or None (python 2) - In python 3, table is a mapping of Unicode ordinals to Unicode ordinals, - strings, or None. Unmapped characters are left untouched. Characters - mapped to None are deleted. :meth:`str.maketrans` is a helper function - for making translation tables. + In python 3, table is a mapping of Unicode ordinals to Unicode + ordinals, strings, or None. Unmapped characters are left untouched. + Characters mapped to None are deleted. :meth:`str.maketrans` is a + helper function for making translation tables. In python 2, table is either a string of length 256 or None. If the table argument is None, no translation is applied and the operation simply removes the characters in deletechars. :func:`string.maketrans` @@ -942,7 +946,8 @@ def str_translate(arr, table, deletechars=None): if compat.PY3: raise ValueError("deletechars is not a valid argument for " "str.translate in python 3. You should simply " - "specify character deletions in the table argument") + "specify character deletions in the table " + "argument") f = lambda x: x.translate(table, deletechars) return _na_map(f, arr) @@ -1040,15 +1045,16 @@ def wrapper3(self, pat, na=np.nan): def copy(source): "Copy a docstring from another source function (if present)" + def do_copy(target): if source.__doc__: target.__doc__ = source.__doc__ return target + return do_copy class StringMethods(NoNewAttributesMixin): - """ Vectorized string functions for Series and Index. NAs stay NA unless handled otherwise by a particular method. 
Patterned after Python's string @@ -1069,8 +1075,7 @@ def __init__(self, data): def __getitem__(self, key): if isinstance(key, slice): - return self.slice(start=key.start, stop=key.stop, - step=key.step) + return self.slice(start=key.start, stop=key.stop, step=key.step) else: return self.get(key) @@ -1087,8 +1092,8 @@ def _wrap_result(self, result, use_codes=True, name=None): # for category, we do the stuff on the categories, so blow it up # to the full series again # But for some operations, we have to do the stuff on the full values, - # so make it possible to skip this step as the method already did this before - # the transformation... + # so make it possible to skip this step as the method already did this + # before the transformation... if use_codes and self._is_categorical: result = take_1d(result, self._orig.cat.codes) @@ -1142,11 +1147,13 @@ def _wrap_result_expand(self, result, expand=False): else: index = self._orig.index if expand: + def cons_row(x): if is_list_like(x): return x else: - return [ x ] + return [x] + cons = self._orig._constructor_expanddim data = [cons_row(x) for x in result] return cons(data, index=index) @@ -1161,9 +1168,8 @@ def cat(self, others=None, sep=None, na_rep=None): result = str_cat(data, others=others, sep=sep, na_rep=na_rep) return self._wrap_result(result, use_codes=(not self._is_categorical)) - - @deprecate_kwarg('return_type', 'expand', - mapping={'series': False, 'frame': True}) + @deprecate_kwarg('return_type', 'expand', mapping={'series': False, + 'frame': True}) @copy(str_split) def split(self, pat=None, n=-1, expand=False): result = str_split(self._data, pat, n=n) @@ -1217,17 +1223,24 @@ def rsplit(self, pat=None, n=-1, expand=False): 1 D_E _ F 2 X """) - @Appender(_shared_docs['str_partition'] % {'side': 'first', - 'return': '3 elements containing the string itself, followed by two empty strings', - 'also': 'rpartition : Split the string at the last occurrence of `sep`'}) + + @Appender(_shared_docs['str_partition'] 
% { + 'side': 'first', + 'return': '3 elements containing the string itself, followed by two ' + 'empty strings', + 'also': 'rpartition : Split the string at the last occurrence of `sep`' + }) def partition(self, pat=' ', expand=True): f = lambda x: x.partition(pat) result = _na_map(f, self._data) return self._wrap_result_expand(result, expand=expand) - @Appender(_shared_docs['str_partition'] % {'side': 'last', - 'return': '3 elements containing two empty strings, followed by the string itself', - 'also': 'partition : Split the string at the first occurrence of `sep`'}) + @Appender(_shared_docs['str_partition'] % { + 'side': 'last', + 'return': '3 elements containing two empty strings, followed by the ' + 'string itself', + 'also': 'partition : Split the string at the first occurrence of `sep`' + }) def rpartition(self, pat=' ', expand=True): f = lambda x: x.rpartition(pat) result = _na_map(f, self._data) @@ -1245,14 +1258,14 @@ def join(self, sep): @copy(str_contains) def contains(self, pat, case=True, flags=0, na=np.nan, regex=True): - result = str_contains(self._data, pat, case=case, flags=flags, - na=na, regex=regex) + result = str_contains(self._data, pat, case=case, flags=flags, na=na, + regex=regex) return self._wrap_result(result) @copy(str_match) def match(self, pat, case=True, flags=0, na=np.nan, as_indexer=False): - result = str_match(self._data, pat, case=case, flags=flags, - na=na, as_indexer=as_indexer) + result = str_match(self._data, pat, case=case, flags=flags, na=na, + as_indexer=as_indexer) return self._wrap_result(result) @copy(str_replace) @@ -1289,7 +1302,7 @@ def pad(self, width, side='left', fillchar=' '): """) @Appender(_shared_docs['str_pad'] % dict(side='left and right', - method='center')) + method='center')) def center(self, width, fillchar=' '): return self.pad(width, side='both', fillchar=fillchar) @@ -1349,19 +1362,19 @@ def encode(self, encoding, errors="strict"): """) @Appender(_shared_docs['str_strip'] % dict(side='left and right 
sides', - method='strip')) + method='strip')) def strip(self, to_strip=None): result = str_strip(self._data, to_strip, side='both') return self._wrap_result(result) @Appender(_shared_docs['str_strip'] % dict(side='left side', - method='lstrip')) + method='lstrip')) def lstrip(self, to_strip=None): result = str_strip(self._data, to_strip, side='left') return self._wrap_result(result) @Appender(_shared_docs['str_strip'] % dict(side='right side', - method='rstrip')) + method='rstrip')) def rstrip(self, to_strip=None): result = str_strip(self._data, to_strip, side='right') return self._wrap_result(result) @@ -1417,14 +1430,16 @@ def extract(self, pat, flags=0): %(also)s """) - @Appender(_shared_docs['find'] % dict(side='lowest', method='find', - also='rfind : Return highest indexes in each strings')) + @Appender(_shared_docs['find'] % + dict(side='lowest', method='find', + also='rfind : Return highest indexes in each strings')) def find(self, sub, start=0, end=None): result = str_find(self._data, sub, start=start, end=end, side='left') return self._wrap_result(result) - @Appender(_shared_docs['find'] % dict(side='highest', method='rfind', - also='find : Return lowest indexes in each strings')) + @Appender(_shared_docs['find'] % + dict(side='highest', method='rfind', + also='find : Return lowest indexes in each strings')) def rfind(self, sub, start=0, end=None): result = str_find(self._data, sub, start=start, end=end, side='right') return self._wrap_result(result) @@ -1450,9 +1465,9 @@ def normalize(self, form): _shared_docs['index'] = (""" Return %(side)s indexes in each strings where the substring is - fully contained between [start:end]. This is the same as ``str.%(similar)s`` - except instead of returning -1, it raises a ValueError when the substring - is not found. Equivalent to standard ``str.%(method)s``. + fully contained between [start:end]. 
This is the same as + ``str.%(similar)s`` except instead of returning -1, it raises a ValueError + when the substring is not found. Equivalent to standard ``str.%(method)s``. Parameters ---------- @@ -1472,14 +1487,16 @@ def normalize(self, form): %(also)s """) - @Appender(_shared_docs['index'] % dict(side='lowest', similar='find', method='index', - also='rindex : Return highest indexes in each strings')) + @Appender(_shared_docs['index'] % + dict(side='lowest', similar='find', method='index', + also='rindex : Return highest indexes in each strings')) def index(self, sub, start=0, end=None): result = str_index(self._data, sub, start=start, end=end, side='left') return self._wrap_result(result) - @Appender(_shared_docs['index'] % dict(side='highest', similar='rfind', method='rindex', - also='index : Return lowest indexes in each strings')) + @Appender(_shared_docs['index'] % + dict(side='highest', similar='rfind', method='rindex', + also='index : Return lowest indexes in each strings')) def rindex(self, sub, start=0, end=None): result = str_index(self._data, sub, start=start, end=end, side='right') return self._wrap_result(result) @@ -1568,6 +1585,7 @@ def rindex(self, sub, start=0, end=None): docstring=_shared_docs['ismethods'] % _shared_docs['isdecimal']) + class StringAccessorMixin(object): """ Mixin to add a `.str` acessor to the class.""" @@ -1575,15 +1593,15 @@ class StringAccessorMixin(object): def _make_str_accessor(self): from pandas.core.series import Series from pandas.core.index import Index - if isinstance(self, Series) and not( - (is_categorical_dtype(self.dtype) and - is_object_dtype(self.values.categories)) or - (is_object_dtype(self.dtype))): - # it's neither a string series not a categorical series with strings - # inside the categories. 
- # this really should exclude all series with any non-string values (instead of test - # for object dtype), but that isn't practical for performance reasons until we have a - # str dtype (GH 9343) + if (isinstance(self, Series) and + not ((is_categorical_dtype(self.dtype) and + is_object_dtype(self.values.categories)) or + (is_object_dtype(self.dtype)))): + # it's neither a string series not a categorical series with + # strings inside the categories. + # this really should exclude all series with any non-string values + # (instead of test for object dtype), but that isn't practical for + # performance reasons until we have a str dtype (GH 9343) raise AttributeError("Can only use .str accessor with string " "values, which use np.object_ dtype in " "pandas") @@ -1592,10 +1610,12 @@ def _make_str_accessor(self): allowed_types = ('string', 'unicode', 'mixed', 'mixed-integer') if self.inferred_type not in allowed_types: message = ("Can only use .str accessor with string values " - "(i.e. inferred_type is 'string', 'unicode' or 'mixed')") + "(i.e. inferred_type is 'string', 'unicode' or " + "'mixed')") raise AttributeError(message) if self.nlevels > 1: - message = "Can only use .str accessor with Index, not MultiIndex" + message = ("Can only use .str accessor with Index, not " + "MultiIndex") raise AttributeError(message) return StringMethods(self) diff --git a/pandas/core/style.py b/pandas/core/style.py index d8cb53e04ea03..b2302f311e01e 100644 --- a/pandas/core/style.py +++ b/pandas/core/style.py @@ -154,9 +154,7 @@ def __init__(self, data, precision=None, table_styles=None, uuid=None, self.table_attributes = table_attributes def _repr_html_(self): - ''' - Hooks into Jupyter notebook rich display system. 
- ''' + """Hooks into Jupyter notebook rich display system.""" return self.render() def _translate(self): @@ -196,31 +194,34 @@ def _translate(self): head = [] for r in range(n_clvls): - row_es = [{"type": "th", "value": BLANK_VALUE, + row_es = [{"type": "th", + "value": BLANK_VALUE, "class": " ".join([BLANK_CLASS])}] * n_rlvls for c in range(len(clabels[0])): cs = [COL_HEADING_CLASS, "level%s" % r, "col%s" % c] - cs.extend(cell_context.get( - "col_headings", {}).get(r, {}).get(c, [])) - row_es.append({"type": "th", "value": clabels[r][c], + cs.extend( + cell_context.get("col_headings", {}).get(r, {}).get(c, [])) + row_es.append({"type": "th", + "value": clabels[r][c], "class": " ".join(cs)}) head.append(row_es) body = [] for r, idx in enumerate(self.data.index): cs = [ROW_HEADING_CLASS, "level%s" % c, "row%s" % r] - cs.extend(cell_context.get( - "row_headings", {}).get(r, {}).get(c, [])) + cs.extend( + cell_context.get("row_headings", {}).get(r, {}).get(c, [])) row_es = [{"type": "th", "value": rlabels[r][c], - "class": " ".join(cs)} - for c in range(len(rlabels[r]))] + "class": " ".join(cs)} for c in range(len(rlabels[r]))] for c, col in enumerate(self.data.columns): cs = [DATA_CLASS, "row%s" % r, "col%s" % c] cs.extend(cell_context.get("data", {}).get(r, {}).get(c, [])) - row_es.append({"type": "td", "value": self.data.iloc[r][c], - "class": " ".join(cs), "id": "_".join(cs[1:])}) + row_es.append({"type": "td", + "value": self.data.iloc[r][c], + "class": " ".join(cs), + "id": "_".join(cs[1:])}) props = [] for x in ctx[r, c]: # have to handle empty styles like [''] @@ -228,10 +229,8 @@ def _translate(self): props.append(x.split(":")) else: props.append(['', '']) - cellstyle.append( - {'props': props, - 'selector': "row%s_col%s" % (r, c)} - ) + cellstyle.append({'props': props, + 'selector': "row%s_col%s" % (r, c)}) body.append(row_es) return dict(head=head, cellstyle=cellstyle, body=body, uuid=uuid, @@ -262,8 +261,8 @@ def render(self): # filter out empty styles, 
every cell will have a class # but the list of props may just be [['', '']]. # so we have the neested anys below - trimmed = [x for x in d['cellstyle'] if - any(any(y) for y in x['props'])] + trimmed = [x for x in d['cellstyle'] + if any(any(y) for y in x['props'])] d['cellstyle'] = trimmed return self.template.render(**d) @@ -306,22 +305,21 @@ def __deepcopy__(self, memo): return self._copy(deepcopy=True) def clear(self): - ''' - "Reset" the styler, removing any previously applied styles. + """"Reset" the styler, removing any previously applied styles. Returns None. - ''' + """ self.ctx.clear() self._todo = [] def _compute(self): - ''' + """ Execute the style functions built up in `self._todo`. Relies on the conventions that all style functions go through .apply or .applymap. The append styles to apply as tuples of (application method, *args, **kwargs) - ''' + """ r = self for func, args, kwargs in self._todo: r = func(self)(*args, **kwargs) @@ -369,8 +367,7 @@ def apply(self, func, axis=0, subset=None, **kwargs): rather than column-wise or row-wise. """ self._todo.append((lambda instance: getattr(instance, '_apply'), - (func, axis, subset), - kwargs)) + (func, axis, subset), kwargs)) return self def _applymap(self, func, subset=None, **kwargs): @@ -404,8 +401,7 @@ def applymap(self, func, subset=None, **kwargs): """ self._todo.append((lambda instance: getattr(instance, '_applymap'), - (func, subset), - kwargs)) + (func, subset), kwargs)) return self def set_precision(self, precision): @@ -574,8 +570,8 @@ def highlight_null(self, null_color='red'): self.applymap(self._highlight_null, null_color=null_color) return self - def background_gradient(self, cmap='PuBu', low=0, high=0, - axis=0, subset=None): + def background_gradient(self, cmap='PuBu', low=0, high=0, axis=0, + subset=None): """ Color the background in a gradient according to the data in each column (optionally row). 
@@ -648,8 +644,8 @@ def set_properties(self, subset=None, **kwargs): >>> df = pd.DataFrame(np.random.randn(10, 4)) >>> df.style.set_properties(color="white", align="right") """ - values = ';'.join('{p}: {v}'.format(p=p, v=v) for p, v in - kwargs.items()) + values = ';'.join('{p}: {v}'.format(p=p, v=v) + for p, v in kwargs.items()) f = lambda x: values return self.applymap(f, subset=subset) @@ -658,7 +654,8 @@ def _bar(s, color, width): normed = width * (s - s.min()) / (s.max() - s.min()) base = 'width: 10em; height: 80%;' - attrs = base + 'background: linear-gradient(90deg,{c} {w}%, transparent 0%)' + attrs = (base + 'background: linear-gradient(90deg,{c} {w}%, ' + 'transparent 0%)') return [attrs.format(c=color, w=x) if x != 0 else base for x in normed] def bar(self, subset=None, axis=0, color='#d65f5f', width=100): @@ -741,9 +738,7 @@ def _highlight_handler(self, subset=None, color='yellow', axis=None, @staticmethod def _highlight_extrema(data, color='yellow', max_=True): - ''' - highlight the min or max in a Series or DataFrame - ''' + """Highlight the min or max in a Series or DataFrame""" attr = 'background-color: {0}'.format(color) if data.ndim == 1: # Series from .apply if max_:
https://api.github.com/repos/pandas-dev/pandas/pulls/12095
2016-01-20T02:09:07Z
2016-01-20T04:23:16Z
null
2016-01-20T04:23:27Z
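The PEP8 cleanup above re-wraps the `Appender` calls for `Series.str.find`/`rfind` without changing behavior; a short sketch (plain public pandas API, no assumptions about the internal `str_find` helper) shows the lowest/highest-index contract those docstrings describe:

```python
import pandas as pd

s = pd.Series(['abcab'])
# find returns the lowest index of the substring, rfind the highest,
# mirroring Python's str.find / str.rfind
lowest = s.str.find('b').iloc[0]
highest = s.str.rfind('b').iloc[0]
print(lowest, highest)  # 1 4
```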
ENH: GH12042 Add parameter `drop_first` to get_dummies to get n-1 variables out of n levels.
diff --git a/doc/source/reshaping.rst b/doc/source/reshaping.rst index dbf3b838593a9..190b30af5acf6 100644 --- a/doc/source/reshaping.rst +++ b/doc/source/reshaping.rst @@ -518,6 +518,32 @@ the prefix separator. You can specify ``prefix`` and ``prefix_sep`` in 3 ways from_dict = pd.get_dummies(df, prefix={'B': 'from_B', 'A': 'from_A'}) from_dict +.. versionadded:: 0.18.0 + +Sometimes it will be useful to only keep k-1 levels of a categorical +variable to avoid collinearity when feeding the result to statistical models. +You can switch to this mode by turn on ``drop_first``. + +.. ipython:: python + + s = pd.Series(list('abcaa')) + + pd.get_dummies(s) + + pd.get_dummies(s, drop_first=True) + +When a column contains only one level, it will be omitted in the result. + +.. ipython:: python + + df = pd.DataFrame({'A':list('aaaaa'),'B':list('ababc')}) + + pd.get_dummies(df) + + pd.get_dummies(df, drop_first=True) + + + Factorizing values ------------------ diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py index 4dffaa0b0c416..c4b7005775536 100644 --- a/pandas/core/reshape.py +++ b/pandas/core/reshape.py @@ -944,7 +944,7 @@ def melt_stub(df, stub, i, j): def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, - columns=None, sparse=False): + columns=None, sparse=False, drop_first=False): """ Convert categorical variable into dummy/indicator variables @@ -971,7 +971,11 @@ def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, Otherwise returns a DataFrame with some SparseBlocks. .. versionadded:: 0.16.1 + drop_first : bool, default False + Whether to get k-1 dummies out of n categorical levels by removing the + first level. + .. 
versionadded:: 0.18.0 Returns ------- dummies : DataFrame or SparseDataFrame @@ -1011,6 +1015,21 @@ def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, 1 2 0 1 1 0 0 2 3 1 0 0 0 1 + >>> pd.get_dummies(pd.Series(list('abcaa'))) + a b c + 0 1 0 0 + 1 0 1 0 + 2 0 0 1 + 3 1 0 0 + 4 1 0 0 + + >>> pd.get_dummies(pd.Series(list('abcaa')), drop_first=True)) + b c + 0 0 0 + 1 1 0 + 2 0 1 + 3 0 0 + 4 0 0 See also ``Series.str.get_dummies``. """ @@ -1060,23 +1079,23 @@ def check_len(item, name): for (col, pre, sep) in zip(columns_to_encode, prefix, prefix_sep): dummy = _get_dummies_1d(data[col], prefix=pre, prefix_sep=sep, - dummy_na=dummy_na, sparse=sparse) + dummy_na=dummy_na, sparse=sparse, + drop_first=drop_first) with_dummies.append(dummy) result = concat(with_dummies, axis=1) else: result = _get_dummies_1d(data, prefix, prefix_sep, dummy_na, - sparse=sparse) + sparse=sparse, drop_first=drop_first) return result def _get_dummies_1d(data, prefix, prefix_sep='_', dummy_na=False, - sparse=False): + sparse=False, drop_first=False): # Series avoids inconsistent NaN handling cat = Categorical.from_array(Series(data), ordered=True) levels = cat.categories - # if all NaN - if not dummy_na and len(levels) == 0: + def get_empty_Frame(data, sparse): if isinstance(data, Series): index = data.index else: @@ -1086,11 +1105,19 @@ def _get_dummies_1d(data, prefix, prefix_sep='_', dummy_na=False, else: return SparseDataFrame(index=index) + # if all NaN + if not dummy_na and len(levels) == 0: + return get_empty_Frame(data, sparse) + codes = cat.codes.copy() if dummy_na: codes[codes == -1] = len(cat.categories) levels = np.append(cat.categories, np.nan) + # if dummy_na, we just fake a nan level. 
drop_first will drop it again + if drop_first and len(levels) == 1: + return get_empty_Frame(data, sparse) + number_of_cols = len(levels) if prefix is not None: @@ -1113,6 +1140,11 @@ def _get_dummies_1d(data, prefix, prefix_sep='_', dummy_na=False, continue sp_indices[code].append(ndx) + if drop_first: + # remove first categorical level to avoid perfect collinearity + # GH12042 + sp_indices = sp_indices[1:] + dummy_cols = dummy_cols[1:] for col, ixs in zip(dummy_cols, sp_indices): sarr = SparseArray(np.ones(len(ixs)), sparse_index=IntIndex(N, ixs), fill_value=0) @@ -1127,6 +1159,10 @@ def _get_dummies_1d(data, prefix, prefix_sep='_', dummy_na=False, # reset NaN GH4446 dummy_mat[codes == -1] = 0 + if drop_first: + # remove first GH12042 + dummy_mat = dummy_mat[:, 1:] + dummy_cols = dummy_cols[1:] return DataFrame(dummy_mat, index=index, columns=dummy_cols) diff --git a/pandas/tests/test_reshape.py b/pandas/tests/test_reshape.py index 6de589f87cfd8..671c345898ec2 100644 --- a/pandas/tests/test_reshape.py +++ b/pandas/tests/test_reshape.py @@ -411,6 +411,111 @@ def test_dataframe_dummies_with_categorical(self): ]] assert_frame_equal(result, expected) + # GH12402 Add a new parameter `drop_first` to avoid collinearity + def test_basic_drop_first(self): + # Basic case + s_list = list('abc') + s_series = Series(s_list) + s_series_index = Series(s_list, list('ABC')) + + expected = DataFrame({'b': {0: 0.0, + 1: 1.0, + 2: 0.0}, + 'c': {0: 0.0, + 1: 0.0, + 2: 1.0}}) + + result = get_dummies(s_list, sparse=self.sparse, drop_first=True) + assert_frame_equal(result, expected) + + result = get_dummies(s_series, sparse=self.sparse, drop_first=True) + assert_frame_equal(result, expected) + + expected.index = list('ABC') + result = get_dummies(s_series_index, sparse=self.sparse, + drop_first=True) + assert_frame_equal(result, expected) + + def test_basic_drop_first_one_level(self): + # Test the case that categorical variable only has one level. 
+ s_list = list('aaa') + s_series = Series(s_list) + s_series_index = Series(s_list, list('ABC')) + + expected = DataFrame(index=np.arange(3)) + + result = get_dummies(s_list, sparse=self.sparse, drop_first=True) + assert_frame_equal(result, expected) + + result = get_dummies(s_series, sparse=self.sparse, drop_first=True) + assert_frame_equal(result, expected) + + expected = DataFrame(index=list('ABC')) + result = get_dummies(s_series_index, sparse=self.sparse, + drop_first=True) + assert_frame_equal(result, expected) + + def test_basic_drop_first_NA(self): + # Test NA hadling together with drop_first + s_NA = ['a', 'b', np.nan] + res = get_dummies(s_NA, sparse=self.sparse, drop_first=True) + exp = DataFrame({'b': {0: 0.0, + 1: 1.0, + 2: 0.0}}) + assert_frame_equal(res, exp) + + res_na = get_dummies(s_NA, dummy_na=True, sparse=self.sparse, + drop_first=True) + exp_na = DataFrame({'b': {0: 0.0, + 1: 1.0, + 2: 0.0}, + nan: {0: 0.0, + 1: 0.0, + 2: 1.0}}).reindex_axis( + ['b', nan], 1) + assert_frame_equal(res_na, exp_na) + + res_just_na = get_dummies([nan], dummy_na=True, sparse=self.sparse, + drop_first=True) + exp_just_na = DataFrame(index=np.arange(1)) + assert_frame_equal(res_just_na, exp_just_na) + + def test_dataframe_dummies_drop_first(self): + df = self.df[['A', 'B']] + result = get_dummies(df, sparse=self.sparse, drop_first=True) + expected = DataFrame({'A_b': [0., 1, 0], + 'B_c': [0., 0, 1]}) + assert_frame_equal(result, expected) + + def test_dataframe_dummies_drop_first_with_categorical(self): + df = self.df + df['cat'] = pd.Categorical(['x', 'y', 'y']) + result = get_dummies(df, sparse=self.sparse, drop_first=True) + expected = DataFrame({'C': [1, 2, 3], + 'A_b': [0., 1, 0], + 'B_c': [0., 0, 1], + 'cat_y': [0., 1, 1]}) + expected = expected[['C', 'A_b', 'B_c', 'cat_y']] + assert_frame_equal(result, expected) + + def test_dataframe_dummies_drop_first_with_na(self): + df = self.df + df.loc[3, :] = [np.nan, np.nan, np.nan] + result = get_dummies(df, 
dummy_na=True, sparse=self.sparse, + drop_first=True) + expected = DataFrame({'C': [1, 2, 3, np.nan], + 'A_b': [0., 1, 0, 0], + 'A_nan': [0., 0, 0, 1], + 'B_c': [0., 0, 1, 0], + 'B_nan': [0., 0, 0, 1]}) + expected = expected[['C', 'A_b', 'A_nan', 'B_c', 'B_nan']] + assert_frame_equal(result, expected) + + result = get_dummies(df, dummy_na=False, sparse=self.sparse, + drop_first=True) + expected = expected[['C', 'A_b', 'B_c']] + assert_frame_equal(result, expected) + class TestGetDummiesSparse(TestGetDummies): sparse = True
closes #12042 Sometimes it's useful to keep only n-1 variables out of n categorical levels.
https://api.github.com/repos/pandas-dev/pandas/pulls/12092
2016-01-19T18:47:48Z
2016-02-08T15:28:58Z
null
2016-02-08T15:28:59Z
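The `drop_first` behavior added by the PR above (including the single-level edge case its tests cover) can be sketched with the public `get_dummies` API as it shipped:

```python
import pandas as pd

s = pd.Series(list('abcaa'))
dummies = pd.get_dummies(s, drop_first=True)
# the first level 'a' is dropped, leaving k-1 = 2 indicator columns
print(list(dummies.columns))  # ['b', 'c']

# a column with a single level collapses to an empty frame:
# after dropping the first level there is no information left
empty = pd.get_dummies(pd.Series(list('aaa')), drop_first=True)
print(empty.shape)  # (3, 0)
```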
Add SAS's way to read a CSV file online
diff --git a/doc/source/comparison_with_sas.rst b/doc/source/comparison_with_sas.rst index 85d432b546f21..606f22b99e36a 100644 --- a/doc/source/comparison_with_sas.rst +++ b/doc/source/comparison_with_sas.rst @@ -123,7 +123,8 @@ SAS provides ``PROC IMPORT`` to read csv data into a data set. .. code-block:: none - proc import datafile='tips.csv' dbms=csv out=tips replace; + filename tips url "https://raw.githubusercontent.com/pydata/pandas/master/pandas/tests/data/tips.csv"; + proc import datafile=tips dbms=csv out=tips replace; getnames=yes; run; @@ -266,7 +267,7 @@ date/datetime columns. data tips; set tips; - format date1 date2 date1_plusmonth mmddyy10.; + format date1 date2 date1_next mmddyy10.; date1 = mdy(1, 15, 2013); date2 = mdy(2, 15, 2015); date1_year = year(date1);
Uses SAS's `filename ... url` statement to read the CSV directly from a URL.
https://api.github.com/repos/pandas-dev/pandas/pulls/12091
2016-01-19T16:27:03Z
2016-01-30T15:22:58Z
null
2016-01-30T15:22:58Z
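The pandas counterpart to the SAS `filename ... url` + `PROC IMPORT` pattern shown above is simply `pd.read_csv(url)`, since `read_csv` accepts http/ftp/s3 URLs directly. In this sketch a `StringIO` stands in for the remote `tips.csv` so it runs offline, and the columns shown are an illustrative subset, not the full file:

```python
import pandas as pd
from io import StringIO

# stand-in for the remote tips.csv used in the comparison doc
csv_text = "total_bill,tip,sex\n16.99,1.01,Female\n10.34,1.66,Male\n"
tips = pd.read_csv(StringIO(csv_text))
print(tips.shape)  # (2, 3)
```

With network access, `pd.read_csv("https://raw.githubusercontent.com/pydata/pandas/master/pandas/tests/data/tips.csv")` performs the same download-and-parse step the SAS snippet does.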
Show Index Headers on DataFrames with Style
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index db9cf5ae86d39..3fce0939445d7 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -538,3 +538,4 @@ of columns didn't match the number of series provided (:issue:`12039`). - Bug in ``Series`` constructor with read-only data (:issue:`11502`) - Bug in ``.loc`` setitem indexer preventing the use of a TZ-aware DatetimeIndex (:issue:`12050`) +- Big in ``.style`` indexes and multi-indexes not appearing (:issue:`11655`) diff --git a/pandas/core/style.py b/pandas/core/style.py index d8cb53e04ea03..203eda4fdf338 100644 --- a/pandas/core/style.py +++ b/pandas/core/style.py @@ -206,6 +206,24 @@ def _translate(self): "class": " ".join(cs)}) head.append(row_es) + if self.data.index.names: + index_header_row = [] + + for c, name in enumerate(self.data.index.names): + cs = [COL_HEADING_CLASS, + "level%s" % (n_clvls + 1), + "col%s" % c] + index_header_row.append({"type": "th", "value": name, + "class": " ".join(cs)}) + + index_header_row.extend( + [{"type": "th", + "value": BLANK_VALUE, + "class": " ".join([BLANK_CLASS]) + }] * len(clabels[0])) + + head.append(index_header_row) + body = [] for r, idx in enumerate(self.data.index): cs = [ROW_HEADING_CLASS, "level%s" % c, "row%s" % r] diff --git a/pandas/tests/test_style.py b/pandas/tests/test_style.py index fd8540fdf9c0a..b9ca3f331711d 100644 --- a/pandas/tests/test_style.py +++ b/pandas/tests/test_style.py @@ -130,6 +130,40 @@ def test_set_properties_subset(self): expected = {(0, 0): ['color: white']} self.assertEqual(result, expected) + def test_index_name(self): + # https://github.com/pydata/pandas/issues/11655 + df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]}) + result = df.set_index('A').style._translate() + + expected = [[{'class': 'blank', 'type': 'th', 'value': ''}, + {'class': 'col_heading level0 col0', 'type': 'th', + 'value': 'B'}, + {'class': 'col_heading level0 col1', 'type': 'th', + 'value': 
'C'}], + [{'class': 'col_heading level2 col0', 'type': 'th', + 'value': 'A'}, + {'class': 'blank', 'type': 'th', 'value': ''}, + {'class': 'blank', 'type': 'th', 'value': ''}]] + + self.assertEqual(result['head'], expected) + + def test_multiindex_name(self): + # https://github.com/pydata/pandas/issues/11655 + df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]}) + result = df.set_index(['A', 'B']).style._translate() + + expected = [[{'class': 'blank', 'type': 'th', 'value': ''}, + {'class': 'blank', 'type': 'th', 'value': ''}, + {'class': 'col_heading level0 col0', 'type': 'th', + 'value': 'C'}], + [{'class': 'col_heading level2 col0', 'type': 'th', + 'value': 'A'}, + {'class': 'col_heading level2 col1', 'type': 'th', + 'value': 'B'}, + {'class': 'blank', 'type': 'th', 'value': ''}]] + + self.assertEqual(result['head'], expected) + def test_apply_axis(self): df = pd.DataFrame({'A': [0, 0], 'B': [1, 1]}) f = lambda x: ['val: %s' % x.max() for v in x]
Partial solution for https://github.com/pydata/pandas/issues/11655 (**Conditional HTML styling hides MultiIndex structure**) and https://github.com/pydata/pandas/issues/11610 (**Followup to Conditional HTML Styling**). The Style API is inconsistent with `DataFrame.to_html()` when printing index names. Currently the style API doesn't print any index names but `to_html` and the standard notebook repr functions do. This PR adds a row for index headings in the `Styler._translate` method. This PR does not cause multi-index rownames to span multiple rows, but I can work on adding that if people want. ## Reproducing Output: ``` python import pandas as pd import numpy as np np.random.seed(24) df = pd.DataFrame({'A': np.linspace(1, 10, 10)}) df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))], axis=1) ``` ``` python df.set_index(['A', 'B']).style ``` ##### Current Output <img width="506" alt="screen shot 2016-01-19 at 10 19 08 am" src="https://cloud.githubusercontent.com/assets/3064019/12422922/32254fb2-be97-11e5-8096-fcadbd902daa.png"> ##### Patched Output: <img width="441" alt="screen shot 2016-01-19 at 10 19 19 am" src="https://cloud.githubusercontent.com/assets/3064019/12422947/46affb6c-be97-11e5-83c7-acd276bdb88f.png"> ## TODO: - [X] Add Tests
https://api.github.com/repos/pandas-dev/pandas/pulls/12090
2016-01-19T15:57:13Z
2016-01-20T23:55:54Z
2016-01-20T23:55:54Z
2021-03-31T20:21:14Z
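The patch above makes `Styler._translate()` emit an extra header row for the index names. `Styler` internals require jinja2 and have changed shape since this PR, so this sketch only shows the metadata the patch surfaces — the index names that were previously dropped from the rendered head:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]})
indexed = df.set_index(['A', 'B'])
# these are the names the patched _translate() now emits as an
# additional <th> row instead of silently omitting
print(list(indexed.index.names))  # ['A', 'B']
print(list(indexed.columns))      # ['C']
```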
ENH: accept dict of column:dtype as dtype argument in DataFrame.astype
diff --git a/.travis.yml b/.travis.yml index 1f2940404eed0..5a16c1a6c25e7 100644 --- a/.travis.yml +++ b/.travis.yml @@ -14,7 +14,7 @@ env: git: # for cloning - depth: 300 + depth: 500 matrix: fast_finish: true diff --git a/asv_bench/benchmarks/frame_methods.py b/asv_bench/benchmarks/frame_methods.py index 9367c42f8d39a..5c5a1df4ea1f8 100644 --- a/asv_bench/benchmarks/frame_methods.py +++ b/asv_bench/benchmarks/frame_methods.py @@ -423,7 +423,7 @@ class frame_get_dtype_counts(object): goal_time = 0.2 def setup(self): - self.df = pandas.DataFrame(np.random.randn(10, 10000)) + self.df = DataFrame(np.random.randn(10, 10000)) def time_frame_get_dtype_counts(self): self.df.get_dtype_counts() @@ -985,3 +985,14 @@ def setup(self): def time_series_string_vector_slice(self): self.s.str[:5] + + +class frame_quantile_axis1(object): + goal_time = 0.2 + + def setup(self): + self.df = DataFrame(np.random.randn(1000, 3), + columns=list('ABC')) + + def time_frame_quantile_axis1(self): + self.df.quantile([0.1, 0.5], axis=1) diff --git a/asv_bench/benchmarks/groupby.py b/asv_bench/benchmarks/groupby.py index 7279d73eb0d97..586bd00b091fe 100644 --- a/asv_bench/benchmarks/groupby.py +++ b/asv_bench/benchmarks/groupby.py @@ -773,6 +773,21 @@ def setup(self): def time_groupby_transform_series2(self): self.df.groupby('id')['val'].transform(np.mean) + +class groupby_transform_dataframe(object): + # GH 12737 + goal_time = 0.2 + + def setup(self): + self.df = pd.DataFrame({'group': np.repeat(np.arange(1000), 10), + 'B': np.nan, + 'C': np.nan}) + self.df.ix[4::10, 'B':'C'] = 5 + + def time_groupby_transform_dataframe(self): + self.df.groupby('group').transform('first') + + class groupby_transform_cythonized(object): goal_time = 0.2 diff --git a/asv_bench/benchmarks/parser_vb.py b/asv_bench/benchmarks/parser_vb.py index 18cd4de6cc9c5..04f25034638cd 100644 --- a/asv_bench/benchmarks/parser_vb.py +++ b/asv_bench/benchmarks/parser_vb.py @@ -23,18 +23,42 @@ class 
read_csv_default_converter(object): goal_time = 0.2 def setup(self): - self.data = '0.1213700904466425978256438611,0.0525708283766902484401839501,0.4174092731488769913994474336\n 0.4096341697147408700274695547,0.1587830198973579909349496119,0.1292545832485494372576795285\n 0.8323255650024565799327547210,0.9694902427379478160318626578,0.6295047811546814475747169126\n 0.4679375305798131323697930383,0.2963942381834381301075609371,0.5268936082160610157032465394\n 0.6685382761849776311890991564,0.6721207066140679753374342908,0.6519975277021627935170045020\n ' + self.data = """0.1213700904466425978256438611,0.0525708283766902484401839501,0.4174092731488769913994474336\n +0.4096341697147408700274695547,0.1587830198973579909349496119,0.1292545832485494372576795285\n +0.8323255650024565799327547210,0.9694902427379478160318626578,0.6295047811546814475747169126\n +0.4679375305798131323697930383,0.2963942381834381301075609371,0.5268936082160610157032465394\n +0.6685382761849776311890991564,0.6721207066140679753374342908,0.6519975277021627935170045020\n""" self.data = (self.data * 200) def time_read_csv_default_converter(self): read_csv(StringIO(self.data), sep=',', header=None, float_precision=None) +class read_csv_default_converter_with_decimal(object): + goal_time = 0.2 + + def setup(self): + self.data = """0,1213700904466425978256438611;0,0525708283766902484401839501;0,4174092731488769913994474336\n +0,4096341697147408700274695547;0,1587830198973579909349496119;0,1292545832485494372576795285\n +0,8323255650024565799327547210;0,9694902427379478160318626578;0,6295047811546814475747169126\n +0,4679375305798131323697930383;0,2963942381834381301075609371;0,5268936082160610157032465394\n +0,6685382761849776311890991564;0,6721207066140679753374342908;0,6519975277021627935170045020\n""" + self.data = (self.data * 200) + + def time_read_csv_default_converter_with_decimal(self): + read_csv(StringIO(self.data), sep=';', header=None, + float_precision=None, decimal=',') + + class 
read_csv_precise_converter(object): goal_time = 0.2 def setup(self): - self.data = '0.1213700904466425978256438611,0.0525708283766902484401839501,0.4174092731488769913994474336\n 0.4096341697147408700274695547,0.1587830198973579909349496119,0.1292545832485494372576795285\n 0.8323255650024565799327547210,0.9694902427379478160318626578,0.6295047811546814475747169126\n 0.4679375305798131323697930383,0.2963942381834381301075609371,0.5268936082160610157032465394\n 0.6685382761849776311890991564,0.6721207066140679753374342908,0.6519975277021627935170045020\n ' + self.data = """0.1213700904466425978256438611,0.0525708283766902484401839501,0.4174092731488769913994474336\n +0.4096341697147408700274695547,0.1587830198973579909349496119,0.1292545832485494372576795285\n +0.8323255650024565799327547210,0.9694902427379478160318626578,0.6295047811546814475747169126\n +0.4679375305798131323697930383,0.2963942381834381301075609371,0.5268936082160610157032465394\n +0.6685382761849776311890991564,0.6721207066140679753374342908,0.6519975277021627935170045020\n""" self.data = (self.data * 200) def time_read_csv_precise_converter(self): @@ -45,7 +69,11 @@ class read_csv_roundtrip_converter(object): goal_time = 0.2 def setup(self): - self.data = '0.1213700904466425978256438611,0.0525708283766902484401839501,0.4174092731488769913994474336\n 0.4096341697147408700274695547,0.1587830198973579909349496119,0.1292545832485494372576795285\n 0.8323255650024565799327547210,0.9694902427379478160318626578,0.6295047811546814475747169126\n 0.4679375305798131323697930383,0.2963942381834381301075609371,0.5268936082160610157032465394\n 0.6685382761849776311890991564,0.6721207066140679753374342908,0.6519975277021627935170045020\n ' + self.data = """0.1213700904466425978256438611,0.0525708283766902484401839501,0.4174092731488769913994474336\n +0.4096341697147408700274695547,0.1587830198973579909349496119,0.1292545832485494372576795285\n 
+0.8323255650024565799327547210,0.9694902427379478160318626578,0.6295047811546814475747169126\n +0.4679375305798131323697930383,0.2963942381834381301075609371,0.5268936082160610157032465394\n +0.6685382761849776311890991564,0.6721207066140679753374342908,0.6519975277021627935170045020\n""" self.data = (self.data * 200) def time_read_csv_roundtrip_converter(self): @@ -109,4 +137,28 @@ def setup(self): self.data = (self.data * 200) def time_read_table_multiple_date_baseline(self): - read_table(StringIO(self.data), sep=',', header=None, parse_dates=[1]) \ No newline at end of file + read_table(StringIO(self.data), sep=',', header=None, parse_dates=[1]) + + +class read_csv_default_converter_python_engine(object): + goal_time = 0.2 + + def setup(self): + self.data = '0.1213700904466425978256438611,0.0525708283766902484401839501,0.4174092731488769913994474336\n 0.4096341697147408700274695547,0.1587830198973579909349496119,0.1292545832485494372576795285\n 0.8323255650024565799327547210,0.9694902427379478160318626578,0.6295047811546814475747169126\n 0.4679375305798131323697930383,0.2963942381834381301075609371,0.5268936082160610157032465394\n 0.6685382761849776311890991564,0.6721207066140679753374342908,0.6519975277021627935170045020\n ' + self.data = (self.data * 200) + + def time_read_csv_default_converter(self): + read_csv(StringIO(self.data), sep=',', header=None, + float_precision=None, engine='python') + + +class read_csv_default_converter_with_decimal_python_engine(object): + goal_time = 0.2 + + def setup(self): + self.data = '0,1213700904466425978256438611;0,0525708283766902484401839501;0,4174092731488769913994474336\n 0,4096341697147408700274695547;0,1587830198973579909349496119;0,1292545832485494372576795285\n 0,8323255650024565799327547210;0,9694902427379478160318626578;0,6295047811546814475747169126\n 0,4679375305798131323697930383;0,2963942381834381301075609371;0,5268936082160610157032465394\n 
0,6685382761849776311890991564;0,6721207066140679753374342908;0,6519975277021627935170045020\n ' + self.data = (self.data * 200) + + def time_read_csv_default_converter_with_decimal(self): + read_csv(StringIO(self.data), sep=';', header=None, + float_precision=None, decimal=',', engine='python') diff --git a/ci/cron/go_doc.sh b/ci/cron/go_doc.sh deleted file mode 100755 index 89659577d0e7f..0000000000000 --- a/ci/cron/go_doc.sh +++ /dev/null @@ -1,99 +0,0 @@ -#!/bin/bash - -# This is a one-command cron job for setting up -# a virtualenv-based, linux-based, py2-based environment -# for building the Pandas documentation. -# -# The first run will install all required deps from pypi -# into the venv including monsters like scipy. -# You may want to set it up yourself to speed up the -# process. -# -# This is meant to be run as a cron job under a dedicated -# user account whose HOME directory contains this script. -# a CI directory will be created under it and all files -# stored within it. -# -# The hardcoded dep versions will gradually become obsolete -# You may need to tweak them -# -# @y-p, Jan/2014 - -# disto latex is sometimes finicky. Optionall use -# a local texlive install -export PATH=/mnt/debian/texlive/2013/bin/x86_64-linux:$PATH - -# Having ccache will speed things up -export PATH=/usr/lib64/ccache/:$PATH - -# limit disk usage -ccache -M 200M - -BASEDIR="$HOME/CI" -REPO_URL="https://github.com/pydata/pandas" -REPO_LOC="$BASEDIR/pandas" - -if [ ! 
-d $BASEDIR ]; then - mkdir -p $BASEDIR - virtualenv $BASEDIR/venv -fi - -source $BASEDIR/venv/bin/activate - -pip install numpy==1.7.2 -pip install cython==0.20.0 -pip install python-dateutil==2.2 -pip install --pre pytz==2013.9 -pip install sphinx==1.1.3 -pip install numexpr==2.2.2 - -pip install matplotlib==1.3.0 -pip install lxml==3.2.5 -pip install beautifulsoup4==4.3.2 -pip install html5lib==0.99 - -# You'll need R as well -pip install rpy2==2.3.9 - -pip install tables==3.0.0 -pip install bottleneck==0.7.0 -pip install ipython==0.13.2 - -# only if you have too -pip install scipy==0.13.2 - -pip install openpyxl==1.6.2 -pip install xlrd==0.9.2 -pip install xlwt==0.7.5 -pip install xlsxwriter==0.5.1 -pip install sqlalchemy==0.8.3 - -if [ ! -d "$REPO_LOC" ]; then - git clone "$REPO_URL" "$REPO_LOC" -fi - -cd "$REPO_LOC" -git reset --hard -git clean -df -git checkout master -git pull origin -make - -source $BASEDIR/venv/bin/activate -export PATH="/usr/lib64/ccache/:$PATH" -pip uninstall pandas -yq -pip install "$REPO_LOC" - -cd "$REPO_LOC"/doc - -python make.py clean -python make.py html -if [ ! $? == 0 ]; then - exit 1 -fi -python make.py zip_html -# usually requires manual intervention -# python make.py latex - -# If you have access: -# python make.py upload_dev diff --git a/ci/lint.sh b/ci/lint.sh index 6b8f160fc90db..a4c960084040f 100755 --- a/ci/lint.sh +++ b/ci/lint.sh @@ -15,7 +15,17 @@ if [ "$LINT" ]; then if [ $? -ne "0" ]; then RET=1 fi + done + echo "Linting DONE" + + echo "Check for invalid testing" + grep -r -E --include '*.py' --exclude nosetester.py --exclude testing.py '(numpy|np)\.testing' pandas + if [ $? 
= "0" ]; then + RET=1 + fi + echo "Check for invalid testing DONE" + else echo "NOT Linting" fi diff --git a/ci/requirements-3.4.run b/ci/requirements-3.4.run index 7d4cdcd21595a..3e12adae7dd9f 100644 --- a/ci/requirements-3.4.run +++ b/ci/requirements-3.4.run @@ -1,4 +1,4 @@ -pytz +pytz=2015.7 numpy=1.8.1 openpyxl xlsxwriter diff --git a/codecov.yml b/codecov.yml index edf2d821e07e5..45a6040c6a50d 100644 --- a/codecov.yml +++ b/codecov.yml @@ -7,6 +7,3 @@ coverage: default: target: '50' branches: null - changes: - default: - branches: null diff --git a/doc/README.rst b/doc/README.rst index 06d95e6b9c44d..a93ad32a4c8f8 100644 --- a/doc/README.rst +++ b/doc/README.rst @@ -160,7 +160,7 @@ and `Good as first PR <https://github.com/pydata/pandas/issues?labels=Good+as+first+PR&sort=updated&state=open>`_ where you could start out. -Or maybe you have an idea of you own, by using pandas, looking for something +Or maybe you have an idea of your own, by using pandas, looking for something in the documentation and thinking 'this can be improved', let's do something about that! diff --git a/doc/source/10min.rst b/doc/source/10min.rst index d51290b2a983b..54bcd76855f32 100644 --- a/doc/source/10min.rst +++ b/doc/source/10min.rst @@ -483,6 +483,17 @@ SQL style merges. See the :ref:`Database style joining <merging.join>` right pd.merge(left, right, on='key') +Another example that can be given is: + +.. ipython:: python + + left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]}) + right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]}) + left + right + pd.merge(left, right, on='key') + + Append ~~~~~~ diff --git a/doc/source/advanced.rst b/doc/source/advanced.rst index 7c7895a95310d..e50e792201d26 100644 --- a/doc/source/advanced.rst +++ b/doc/source/advanced.rst @@ -528,6 +528,13 @@ return a copy of the data rather than a view: jim joe 1 z 0.64094 +Furthermore if you try to index something that is not fully lexsorted, this can raise: + +.. 
code-block:: ipython + + In [5]: dfm.loc[(0,'y'):(1, 'z')] + KeyError: 'Key length (2) was greater than MultiIndex lexsort depth (1)' + The ``is_lexsorted()`` method on an ``Index`` show if the index is sorted, and the ``lexsort_depth`` property returns the sort depth: .. ipython:: python @@ -542,6 +549,12 @@ The ``is_lexsorted()`` method on an ``Index`` show if the index is sorted, and t dfm.index.is_lexsorted() dfm.index.lexsort_depth +And now selection works as expected. + +.. ipython:: python + + dfm.loc[(0,'y'):(1, 'z')] + Take Methods ------------ diff --git a/doc/source/api.rst b/doc/source/api.rst index 9557867c252ed..0e893308dd935 100644 --- a/doc/source/api.rst +++ b/doc/source/api.rst @@ -354,6 +354,9 @@ Computations / Descriptive Stats Series.unique Series.nunique Series.is_unique + Series.is_monotonic + Series.is_monotonic_increasing + Series.is_monotonic_decreasing Series.value_counts Reindexing / Selection / Label manipulation @@ -1333,6 +1336,7 @@ Modifying and Computations Index.max Index.reindex Index.repeat + Index.where Index.take Index.putmask Index.set_names diff --git a/doc/source/basics.rst b/doc/source/basics.rst index e3b0915cd571d..917d2f2bb8b04 100644 --- a/doc/source/basics.rst +++ b/doc/source/basics.rst @@ -1726,6 +1726,28 @@ then the more *general* one will be used as the result of the operation. # conversion of dtypes df3.astype('float32').dtypes +Convert a subset of columns to a specified type using :meth:`~DataFrame.astype` + +.. ipython:: python + + dft = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [7, 8, 9]}) + dft[['a','b']] = dft[['a','b']].astype(np.uint8) + dft + dft.dtypes + +.. note:: + + When trying to convert a subset of columns to a specified type using :meth:`~DataFrame.astype` and :meth:`~DataFrame.loc`, upcasting occurs. + + :meth:`~DataFrame.loc` tries to fit in what we are assigning to the current dtypes, while ``[]`` will overwrite them taking the dtype from the right hand side. 
Therefore the following piece of code produces the unintended result. + + .. ipython:: python + + dft = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [7, 8, 9]}) + dft.loc[:, ['a', 'b']].astype(np.uint8).dtypes + dft.loc[:, ['a', 'b']] = dft.loc[:, ['a', 'b']].astype(np.uint8) + dft.dtypes + object conversion ~~~~~~~~~~~~~~~~~ diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst index e64ff4c155132..a9b86925666b7 100644 --- a/doc/source/contributing.rst +++ b/doc/source/contributing.rst @@ -21,7 +21,7 @@ and `Difficulty Novice <https://github.com/pydata/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22>`_ where you could start out. -Or maybe through using *pandas* you have an idea of you own or are looking for something +Or maybe through using *pandas* you have an idea of your own or are looking for something in the documentation and thinking 'this can be improved'...you can do something about it! diff --git a/doc/source/enhancingperf.rst b/doc/source/enhancingperf.rst index a4db4b7c0d953..685a8690a53d5 100644 --- a/doc/source/enhancingperf.rst +++ b/doc/source/enhancingperf.rst @@ -95,7 +95,7 @@ Plain cython ~~~~~~~~~~~~ First we're going to need to import the cython magic function to ipython (for -cython versions >=0.21 you can use ``%load_ext Cython``): +cython versions < 0.21 you can use ``%load_ext cythonmagic``): .. ipython:: python :okwarning: diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst index 4cde1fed344a8..02309fe5d6509 100644 --- a/doc/source/groupby.rst +++ b/doc/source/groupby.rst @@ -52,7 +52,7 @@ following: step and try to return a sensibly combined result if it doesn't fit into either of the above two categories -Since the set of object instance method on pandas data structures are generally +Since the set of object instance methods on pandas data structures are generally rich and expressive, we often simply want to invoke, say, a DataFrame function on each group. 
The name GroupBy should be quite familiar to those who have used a SQL-based tool (or ``itertools``), in which you can write code like: @@ -129,7 +129,7 @@ columns: In [5]: grouped = df.groupby(get_letter_type, axis=1) -Starting with 0.8, pandas Index objects now supports duplicate values. If a +Starting with 0.8, pandas Index objects now support duplicate values. If a non-unique index is used as the group key in a groupby operation, all values for the same index value will be considered to be in one group and thus the output of aggregation functions will only contain unique index values: @@ -171,7 +171,8 @@ By default the group keys are sorted during the ``groupby`` operation. You may h df2.groupby(['X'], sort=False).sum() -Note that ``groupby`` will preserve the order in which *observations* are sorted *within* each group. For example, the groups created by ``groupby()`` below are in the order the appeared in the original ``DataFrame``: +Note that ``groupby`` will preserve the order in which *observations* are sorted *within* each group. +For example, the groups created by ``groupby()`` below are in the order they appeared in the original ``DataFrame``: .. ipython:: python @@ -254,7 +255,7 @@ GroupBy with MultiIndex With :ref:`hierarchically-indexed data <advanced.hierarchical>`, it's quite natural to group by one of the levels of the hierarchy. -Let's create a series with a two-level ``MultiIndex``. +Let's create a Series with a two-level ``MultiIndex``. .. ipython:: python @@ -636,7 +637,7 @@ with NaNs. dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False) -For dataframes with multiple columns, filters should explicitly specify a column as the filter criterion. +For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion. .. ipython:: python @@ -755,7 +756,7 @@ The dimension of the returned result can also change: .. 
note:: - ``apply`` can act as a reducer, transformer, *or* filter function, depending on exactly what is passed to apply. + ``apply`` can act as a reducer, transformer, *or* filter function, depending on exactly what is passed to it. So depending on the path taken, and exactly what you are grouping. Thus the grouped columns(s) may be included in the output as well as set the indices. @@ -789,7 +790,7 @@ Again consider the example DataFrame we've been looking at: df -Supposed we wished to compute the standard deviation grouped by the ``A`` +Suppose we wish to compute the standard deviation grouped by the ``A`` column. There is a slight problem, namely that we don't care about the data in column ``B``. We refer to this as a "nuisance" column. If the passed aggregation function can't be applied to some columns, the troublesome columns @@ -1019,7 +1020,7 @@ Returning a Series to propagate names ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Group DataFrame columns, compute a set of metrics and return a named Series. -The Series name is used as the name for the column index. This is especially +The Series name is used as the name for the column index. This is especially useful in conjunction with reshaping operations such as stacking in which the column index name will be used as the name of the inserted column: diff --git a/doc/source/io.rst b/doc/source/io.rst index cc51fbd1e30ab..f559c3cb3ebaf 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -99,7 +99,7 @@ delimiter : str, default ``None`` Alternative argument name for sep. delim_whitespace : boolean, default False Specifies whether or not whitespace (e.g. ``' '`` or ``'\t'``) - will be used as the delimiter. Equivalent to setting ``sep='\+s'``. + will be used as the delimiter. Equivalent to setting ``sep='\s+'``. If this option is set to True, nothing should be passed in for the ``delimiter`` parameter. @@ -120,7 +120,8 @@ header : int or list of ints, default ``'infer'`` rather than the first line of the file. 
names : array-like, default ``None`` List of column names to use. If file contains no header row, then you should - explicitly pass ``header=None``. + explicitly pass ``header=None``. Duplicates in this list are not allowed unless + ``mangle_dupe_cols=True``, which is the default. index_col : int or sequence or ``False``, default ``None`` Column to use as the row labels of the DataFrame. If a sequence is given, a MultiIndex is used. If you have a malformed file with delimiters at the end of @@ -139,6 +140,8 @@ prefix : str, default ``None`` Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ... mangle_dupe_cols : boolean, default ``True`` Duplicate columns will be specified as 'X.0'...'X.N', rather than 'X'...'X'. + Passing in False will cause data to be overwritten if there are duplicate + names in the columns. General Parsing Configuration +++++++++++++++++++++++++++++ @@ -166,6 +169,30 @@ skipfooter : int, default ``0`` Number of lines at bottom of file to skip (unsupported with engine='c'). nrows : int, default ``None`` Number of rows of file to read. Useful for reading pieces of large files. +low_memory : boolean, default ``True`` + Internally process the file in chunks, resulting in lower memory use + while parsing, but possibly mixed type inference. To ensure no mixed + types either set ``False``, or specify the type with the ``dtype`` parameter. + Note that the entire file is read into a single DataFrame regardless, + use the ``chunksize`` or ``iterator`` parameter to return the data in chunks. + (Only valid with C parser) +buffer_lines : int, default None + DEPRECATED: this argument will be removed in a future version because its + value is not respected by the parser + + If ``low_memory`` is ``True``, specify the number of rows to be read for + each chunk. 
(Only valid with C parser) +compact_ints : boolean, default False + DEPRECATED: this argument will be removed in a future version + + If ``compact_ints`` is ``True``, then for any column that is of integer dtype, the + parser will attempt to cast it as the smallest integer ``dtype`` possible, either + signed or unsigned depending on the specification from the ``use_unsigned`` parameter. +use_unsigned : boolean, default False + DEPRECATED: this argument will be removed in a future version + + If integer columns are being compacted (i.e. ``compact_ints=True``), specify whether + the column should be compacted to the smallest signed or unsigned integer dtype. NA and Missing Data Handling ++++++++++++++++++++++++++++ @@ -252,6 +279,10 @@ quoting : int or ``csv.QUOTE_*`` instance, default ``None`` ``QUOTE_MINIMAL`` (0), ``QUOTE_ALL`` (1), ``QUOTE_NONNUMERIC`` (2) or ``QUOTE_NONE`` (3). Default (``None``) results in ``QUOTE_MINIMAL`` behavior. +doublequote : boolean, default ``True`` + When ``quotechar`` is specified and ``quoting`` is not ``QUOTE_NONE``, + indicate whether or not to interpret two consecutive ``quotechar`` elements + **inside** a field as a single ``quotechar`` element. escapechar : str (length 1), default ``None`` One-character string used to escape delimiter when quoting is ``QUOTE_NONE``. comment : str, default ``None`` @@ -432,6 +463,42 @@ If the header is in a row other than the first, pass the row number to data = 'skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9' pd.read_csv(StringIO(data), header=1) +.. _io.dupe_names: + +Duplicate names parsing +''''''''''''''''''''''' + +If the file or header contains duplicate names, pandas by default will deduplicate +these names so as to prevent data overwrite: + +.. ipython :: python + + data = 'a,b,a\n0,1,2\n3,4,5' + pd.read_csv(StringIO(data)) + +There is no more duplicate data because ``mangle_dupe_cols=True`` by default, which modifies +a series of duplicate columns 'X'...'X' to become 'X.0'...'X.N'. 
If ``mangle_dupe_cols +=False``, duplicate data can arise: + +.. code-block :: python + + In [2]: data = 'a,b,a\n0,1,2\n3,4,5' + In [3]: pd.read_csv(StringIO(data), mangle_dupe_cols=False) + Out[3]: + a b a + 0 2 1 2 + 1 5 4 5 + +To prevent users from encountering this problem with duplicate data, a ``ValueError`` +exception is raised if ``mangle_dupe_cols != True``: + +.. code-block :: python + + In [2]: data = 'a,b,a\n0,1,2\n3,4,5' + In [3]: pd.read_csv(StringIO(data), mangle_dupe_cols=False) + ... + ValueError: Setting mangle_dupe_cols=False is not supported yet + .. _io.usecols: Filtering columns (``usecols``) diff --git a/doc/source/merging.rst b/doc/source/merging.rst index 7908428135308..ba675d9aac830 100644 --- a/doc/source/merging.rst +++ b/doc/source/merging.rst @@ -562,10 +562,8 @@ DataFrame instance method, with the calling DataFrame being implicitly considered the left object in the join. The related ``DataFrame.join`` method, uses ``merge`` internally for the -index-on-index and index-on-column(s) joins, but *joins on indexes* by default -rather than trying to join on common columns (the default behavior for -``merge``). If you are joining on index, you may wish to use ``DataFrame.join`` -to save yourself some typing. +index-on-index (by default) and column(s)-on-index join. If you are joining on +index only, you may wish to use ``DataFrame.join`` to save yourself some typing. Brief primer on merge methods (relational algebra) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/source/reshaping.rst b/doc/source/reshaping.rst index 21765b3f621ce..9ed2c42610b69 100644 --- a/doc/source/reshaping.rst +++ b/doc/source/reshaping.rst @@ -445,6 +445,16 @@ If ``crosstab`` receives only two Series, it will provide a frequency table. 
pd.crosstab(df.A, df.B) +Any input passed containing ``Categorical`` data will have **all** of its +categories included in the cross-tabulation, even if the actual data does +not contain any instances of a particular category. + +.. ipython:: python + + foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c']) + bar = pd.Categorical(['d', 'e'], categories=['d', 'e', 'f']) + pd.crosstab(foo, bar) + Normalization ~~~~~~~~~~~~~ diff --git a/doc/source/text.rst b/doc/source/text.rst index 16b16a320f75b..3822c713d7f85 100644 --- a/doc/source/text.rst +++ b/doc/source/text.rst @@ -281,7 +281,7 @@ Unlike ``extract`` (which returns only the first match), .. ipython:: python - s = pd.Series(["a1a2", "b1", "c1"], ["A", "B", "C"]) + s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"]) s two_groups = '(?P<letter>[a-z])(?P<digit>[0-9])' s.str.extract(two_groups, expand=True) @@ -313,6 +313,17 @@ then ``extractall(pat).xs(0, level='match')`` gives the same result as extractall_result extractall_result.xs(0, level="match") +``Index`` also supports ``.str.extractall``. It returns a ``DataFrame`` which has the +same result as a ``Series.str.extractall`` with a default index (starts from 0). + +.. versionadded:: 0.18.2 + +.. ipython:: python + + pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups) + + pd.Series(["a1a2", "b1", "c1"]).str.extractall(two_groups) + Testing for Strings that Match or Contain a Pattern --------------------------------------------------- diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst index 114607f117756..62601821488d3 100644 --- a/doc/source/timeseries.rst +++ b/doc/source/timeseries.rst @@ -98,6 +98,7 @@ time. pd.Timestamp(datetime(2012, 5, 1)) pd.Timestamp('2012-05-01') + pd.Timestamp(2012, 5, 1) However, in many cases it is more natural to associate things like change variables with a time span instead. 
The span represented by ``Period`` can be
diff --git a/doc/source/whatsnew/v0.18.1.txt b/doc/source/whatsnew/v0.18.1.txt
index 7f837bef5251c..51982c42499ff 100644
--- a/doc/source/whatsnew/v0.18.1.txt
+++ b/doc/source/whatsnew/v0.18.1.txt
@@ -563,7 +563,6 @@ Performance Improvements
- Improved speed of SAS reader (:issue:`12656`, :issue:`12961`)
- Performance improvements in ``.groupby(..).cumcount()`` (:issue:`11039`)
- Improved memory usage in ``pd.read_csv()`` when using ``skiprows=an_integer`` (:issue:`13005`)
-- Improved performance of ``DataFrame.to_sql`` when checking case sensitivity for tables. Now only checks if table has been created correctly when table name is not lower case. (:issue:`12876`)
- Improved performance of ``Period`` construction and time series plotting (:issue:`12903`, :issue:`11831`).
- Improved performance of ``.str.encode()`` and ``.str.decode()`` methods (:issue:`13008`)
diff --git a/doc/source/whatsnew/v0.18.2.txt b/doc/source/whatsnew/v0.18.2.txt
index fa426aa30bc65..6829afa2b36b8 100644
--- a/doc/source/whatsnew/v0.18.2.txt
+++ b/doc/source/whatsnew/v0.18.2.txt
@@ -19,10 +19,37 @@ Highlights include:
New features
~~~~~~~~~~~~
+.. _whatsnew_0182.enhancements.read_csv_dupe_col_names_support:
+
+``pd.read_csv`` has improved support for duplicate column names
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:ref:`Duplicate column names <io.dupe_names>` are now supported in ``pd.read_csv()`` whether
+they are in the file or passed in as the ``names`` parameter (:issue:`7160`, :issue:`9424`)
+
+.. ipython:: python
+
+   data = '0,1,2\n3,4,5'
+   names = ['a', 'b', 'a']
+
+Previous behaviour:
+
+.. code-block:: ipython
+
+   In [2]: pd.read_csv(StringIO(data), names=names)
+   Out[2]:
+      a  b  a
+   0  2  1  2
+   1  5  4  5
+
+The first 'a' column contains the same data as the second 'a' column, when it should have
+contained the array ``[0, 3]``.
+
+New behaviour:
+
+.. ipython:: python
+
+   In [2]: pd.read_csv(StringIO(data), names=names)

.. _whatsnew_0182.enhancements.other:
@@ -30,9 +57,38 @@ Other enhancements
^^^^^^^^^^^^^^^^^^
- The ``.tz_localize()`` method of ``DatetimeIndex`` and ``Timestamp`` has gained the ``errors`` keyword, so you can potentially coerce nonexistent timestamps to ``NaT``. The default behaviour remains to raise a ``NonExistentTimeError`` (:issue:`13057`)
+- ``astype()`` will now accept a dict of column name to data types mapping as the ``dtype`` argument. (:issue:`12086`)
+- ``.to_hdf/read_hdf()`` now accept path objects (e.g. ``pathlib.Path``, ``py.path.local``) for the file path (:issue:`11773`)
+
+  .. ipython:: python
+
+     idx = pd.Index(["a1a2", "b1", "c1"])
+     idx.str.extractall("[ab](?P<digit>\d)")
+
+- ``Timestamp`` objects can now accept positional and keyword parameters like :func:`datetime.datetime` (:issue:`10758`, :issue:`11630`)
+
+  .. ipython:: python
+
+     pd.Timestamp(2012, 1, 1)
+     pd.Timestamp(year=2012, month=1, day=1, hour=8, minute=30)
+
+- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``decimal`` option (:issue:`12933`)
+- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``na_filter`` option (:issue:`13321`)
+
+- ``Index.astype()`` now accepts an optional boolean argument ``copy``, which allows optional copying if the requirements on dtype are satisfied (:issue:`13209`)
+- ``Index`` now supports the ``.where()`` function for same shape indexing (:issue:`13170`)
+
+  .. ipython:: python
+
+     idx = pd.Index(['a', 'b', 'c'])
+     idx.where([True, False, True])
+
+- ``Categorical.astype()`` now accepts an optional boolean argument ``copy``, effective when dtype is categorical (:issue:`13209`)
+- Consistent with the Python API, ``pd.read_csv()`` will now interpret ``+inf`` as positive infinity (:issue:`13274`)
+- The ``DataFrame`` constructor will now respect key ordering if a list of ``OrderedDict`` objects are passed in (:issue:`13304`)
+- ``pd.read_html()`` has gained support for the ``decimal`` option (:issue:`12907`)
+
+- ``eval``'s upcasting rules for ``float32`` types have been updated to be more consistent with NumPy's rules. New behavior will not upcast to ``float64`` if you multiply a pandas ``float32`` object by a scalar float64. (:issue:`12388`)
+- ``Series`` has gained the properties ``.is_monotonic``, ``.is_monotonic_increasing``, ``.is_monotonic_decreasing``, similar to ``Index`` (:issue:`13336`)

.. _whatsnew_0182.api:
@@ -41,7 +97,8 @@ API changes
- Non-convertible dates in an excel date column will be returned without conversion and the column will be ``object`` dtype, rather than raising an exception (:issue:`10001`)
-
+- An ``UnsupportedFunctionCall`` error is now raised if NumPy ufuncs like ``np.mean`` are called on groupby or resample objects (:issue:`12811`)
+- Calls to ``.sample()`` will respect the random seed set via ``numpy.random.seed(n)`` (:issue:`13161`)

.. _whatsnew_0182.api.tolist:
@@ -70,25 +127,170 @@ New Behavior:
type(s.tolist()[0])
+
+.. _whatsnew_0182.api.promote:
+
+``Series`` type promotion on assignment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A ``Series`` will now correctly promote its dtype for assignment with incompatible values to the current dtype (:issue:`13234`)
+
+
+.. ipython:: python
+
+   s = pd.Series()
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+   In [2]: s["a"] = pd.Timestamp("2016-01-01")
+
+   In [3]: s["b"] = 3.0
+   TypeError: invalid type promotion
+
+New Behavior:
+
+.. ipython:: python
+
+   s["a"] = pd.Timestamp("2016-01-01")
+   s["b"] = 3.0
+   s
+   s.dtype
+
+.. _whatsnew_0182.api.to_datetime_coerce:
+
+``.to_datetime()`` when coercing
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A bug is fixed in ``.to_datetime()`` when passing integers or floats, and no ``unit`` and ``errors='coerce'`` (:issue:`13180`).
+Previously if ``.to_datetime()`` encountered mixed integers/floats and strings, but no datetimes with ``errors='coerce'`` it would convert all to ``NaT``.
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+   In [2]: pd.to_datetime([1, 'foo'], errors='coerce')
+   Out[2]: DatetimeIndex(['NaT', 'NaT'], dtype='datetime64[ns]', freq=None)
+
+This will now convert integers/floats with the default unit of ``ns``.
+
+.. ipython:: python
+
+   pd.to_datetime([1, 'foo'], errors='coerce')
+
+.. _whatsnew_0182.api.merging:
+
+Merging changes
+^^^^^^^^^^^^^^^
+
+Merging will now preserve the dtype of the join keys (:issue:`8596`)
+
+.. ipython:: python
+
+   df1 = pd.DataFrame({'key': [1], 'v1': [10]})
+   df1
+   df2 = pd.DataFrame({'key': [1, 2], 'v1': [20, 30]})
+   df2
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+   In [5]: pd.merge(df1, df2, how='outer')
+   Out[5]:
+      key    v1
+   0  1.0  10.0
+   1  1.0  20.0
+   2  2.0  30.0
+
+   In [6]: pd.merge(df1, df2, how='outer').dtypes
+   Out[6]:
+   key    float64
+   v1     float64
+   dtype: object
+
+New Behavior:
+
+We are able to preserve the join keys
+
+.. ipython:: python
+
+   pd.merge(df1, df2, how='outer')
+   pd.merge(df1, df2, how='outer').dtypes
+
+Of course if you have missing values that are introduced, then the
+resulting dtype will be upcast (unchanged from previous).
+
+.. ipython:: python
+
+   pd.merge(df1, df2, how='outer', on='key')
+   pd.merge(df1, df2, how='outer', on='key').dtypes
+
+.. _whatsnew_0182.describe:
+
+``.describe()`` changes
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Percentile identifiers in the index of a ``.describe()`` output will now be rounded to the least precision that keeps them distinct (:issue:`13104`)
+
+.. ipython:: python
+
+   s = pd.Series([0, 1, 2, 3, 4])
+   df = pd.DataFrame([0, 1, 2, 3, 4])
+
+Previous Behavior:
+
+The percentiles were rounded to at most one decimal place, which could raise ``ValueError`` for a data frame if the percentiles were duplicated.
+
+.. code-block:: ipython
+
+   In [3]: s.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
+   Out[3]:
+   count     5.000000
+   mean      2.000000
+   std       1.581139
+   min       0.000000
+   0.0%      0.000400
+   0.1%      0.002000
+   0.1%      0.004000
+   50%       2.000000
+   99.9%     3.996000
+   100.0%    3.998000
+   100.0%    3.999600
+   max       4.000000
+   dtype: float64
+
+   In [4]: df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
+   Out[4]:
+   ...
+   ValueError: cannot reindex from a duplicate axis
+
+New Behavior:
+
+.. ipython:: python
+
+   s.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
+   df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
+
+- Passing duplicated ``percentiles`` will now raise a ``ValueError``.
+- Bug in ``.describe()`` on a DataFrame with a mixed-dtype column index, which would previously raise a ``TypeError`` (:issue:`13288`)

.. _whatsnew_0182.api.other:
Other API changes
^^^^^^^^^^^^^^^^^
+- ``Float64Index.astype(int)`` will now raise ``ValueError`` if ``Float64Index`` contains ``NaN`` values (:issue:`13149`)
+- ``TimedeltaIndex.astype(int)`` and ``DatetimeIndex.astype(int)`` will now return ``Int64Index`` instead of ``np.array`` (:issue:`13209`)
+- ``.filter()`` enforces mutual exclusion of the keyword arguments. (:issue:`12399`)
+
.. _whatsnew_0182.deprecations:
Deprecations
^^^^^^^^^^^^
+- ``compact_ints`` and ``use_unsigned`` have been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13320`)
+- ``buffer_lines`` has been deprecated in ``pd.read_csv()`` and will be removed in a future version (:issue:`13360`)

.. _whatsnew_0182.performance:
@@ -97,38 +299,79 @@ Performance Improvements
- Improved performance of sparse ``IntIndex.intersect`` (:issue:`13082`)
- Improved performance of sparse arithmetic with ``BlockIndex`` when the number of blocks are large, though recommended to use ``IntIndex`` in such cases (:issue:`13082`)
+- Improved performance of ``DataFrame.quantile()`` as it now operates per-block (:issue:`11623`)
+
+- Improved performance of ``DataFrameGroupBy.transform`` (:issue:`12737`)

.. _whatsnew_0182.bug_fixes:
Bug Fixes
~~~~~~~~~
+
+- Bug in ``io.json.json_normalize()``, where non-ascii keys raised an exception (:issue:`13213`)
+- Bug in ``SparseSeries`` with ``MultiIndex`` ``[]`` indexing may raise ``IndexError`` (:issue:`13144`)
+- Bug in ``SparseSeries`` with ``MultiIndex`` ``[]`` indexing result may have normal ``Index`` (:issue:`13144`)
- Bug in ``SparseDataFrame`` in which ``axis=None`` did not default to ``axis=0`` (:issue:`13048`)
+- Bug in ``SparseSeries`` and ``SparseDataFrame`` creation with ``object`` dtype may raise ``TypeError`` (:issue:`11633`)
+- Bug when passing a not-default-indexed ``Series`` as ``xerr`` or ``yerr`` in ``.plot()`` (:issue:`11858`)
+- Bug in matplotlib ``AutoDataFormatter``; this restores the second scaled formatting and re-adds micro-second scaled formatting (:issue:`13131`)
+- Bug in selection from a ``HDFStore`` with a fixed format and ``start`` and/or ``stop`` specified will now return the selected range (:issue:`8287`)
+- Bug in ``.groupby(..).resample(..)`` when the same object is called multiple times (:issue:`13174`)
+- Bug in ``.to_records()`` when index name is a unicode string (:issue:`13172`)
+- Bug in calling ``.memory_usage()`` on object which doesn't implement (:issue:`12924`)
+- Regression in ``Series.quantile`` with nans (also shows up in ``.median()`` and ``.describe()``); furthermore now names the ``Series`` with the quantile (:issue:`13098`, :issue:`13146`)
+- Bug in ``SeriesGroupBy.transform`` with datetime values and missing groups (:issue:`13191`)
+- Bug in ``Series.str.extractall()`` with ``str`` index raises ``ValueError`` (:issue:`13156`)
+- Bug in ``PeriodIndex`` and ``Period`` subtraction raises ``AttributeError`` (:issue:`13071`)
+- Bug in ``PeriodIndex`` construction returning a ``float64`` index in some circumstances (:issue:`13067`)
+- Bug in ``.resample(..)`` with a ``PeriodIndex`` not changing its ``freq`` appropriately when empty (:issue:`13067`)
+- Bug in ``.resample(..)`` with a ``PeriodIndex`` not retaining its type or name with an empty ``DataFrame`` appropriately when empty (:issue:`13212`)
+- Bug in ``groupby(..).resample(..)`` where passing some keywords would raise an exception (:issue:`13235`)
+- Bug in ``.tz_convert`` on a tz-aware ``DateTimeIndex`` that relied on index being sorted for correct results (:issue:`13306`)
+- Bug in ``pd.read_hdf()`` where attempting to load an HDF file with a single dataset, that had one or more categorical columns, failed unless the key argument was set to the name of the dataset.
(:issue:`13231`) -- Bug in ``PeriodIndex`` and ``Period`` subtraction raises ``AttributeError`` (:issue:`13071`) +- Bug in ``MultiIndex`` slicing where extra elements were returned when level is non-unique (:issue:`12896`) +- Bug in ``pd.read_csv()`` with ``engine='python'`` in which ``NaN`` values weren't being detected after data was converted to numeric values (:issue:`13314`) +- Bug in ``pd.read_csv()`` in which the ``nrows`` argument was not properly validated for both engines (:issue:`10476`) +- Bug in ``pd.read_csv()`` with ``engine='python'`` in which infinities of mixed-case forms were not being interpreted properly (:issue:`13274`) +- Bug in ``pd.read_csv()`` with ``engine='python'`` in which trailing ``NaN`` values were not being parsed (:issue:`13320`) +- Bug in ``pd.read_csv()`` that prevents ``usecols`` kwarg from accepting single-byte unicode strings (:issue:`13219`) +- Bug in ``Series`` arithmetic raises ``TypeError`` if it contains datetime-like as ``object`` dtype (:issue:`13043`) +- Bug in ``pd.to_datetime()`` when passing invalid datatypes (e.g. 
bool); will now respect the ``errors`` keyword (:issue:`13176`)
+- Bug in extension dtype creation where the created types were not is/identical (:issue:`13285`)
+- Bug in ``NaT`` - ``Period`` raises ``AttributeError`` (:issue:`13071`)
+- Bug in ``Period`` addition raises ``TypeError`` if ``Period`` is on right hand side (:issue:`13069`)
+- Bug in ``Period`` and ``Series`` or ``Index`` comparison raises ``TypeError`` (:issue:`13200`)
+- Bug in ``pd.set_eng_float_format()`` that would prevent NaNs from formatting (:issue:`11981`)
+- Bug in ``.unstack`` with ``Categorical`` dtype resets ``.ordered`` to ``True`` (:issue:`13249`)
+- Bug in ``Series`` comparison operators when dealing with zero dim NumPy arrays (:issue:`13006`)
+- Bug in ``groupby`` where ``apply`` returns different result depending on whether first result is ``None`` or not (:issue:`12824`)
-- Bug in ``NaT`` - ``Period`` raises ``AttributeError`` (:issue:`13071`)
-- Bug in ``Period`` addition raises ``TypeError`` if ``Period`` is on right hand side (:issue:`13069`)
+- Bug in ``pd.to_numeric`` when ``errors='coerce'`` and input contains non-hashable objects (:issue:`13324`)
+
+
+- Bug in ``Categorical.remove_unused_categories()`` changes ``.codes`` dtype to platform int (:issue:`13261`)
diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
new file mode 100644
index 0000000000000..42db0388ca5d9
--- /dev/null
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -0,0 +1,83 @@
+.. _whatsnew_0190:
+
+v0.19.0 (????, 2016)
+--------------------
+
+This is a major release from 0.18.2 and includes a small number of API changes, several new features,
+enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
+users upgrade to this version.
+
+Highlights include:
+
+
+Check the :ref:`API Changes <whatsnew_0190.api_breaking>` and :ref:`deprecations <whatsnew_0190.deprecations>` before updating.
+
+..
contents:: What's new in v0.19.0 + :local: + :backlinks: none + +.. _whatsnew_0190.enhancements: + +New features +~~~~~~~~~~~~ + + + + + +.. _whatsnew_0190.enhancements.other: + +Other enhancements +^^^^^^^^^^^^^^^^^^ + + + + + + +.. _whatsnew_0190.api_breaking: + +Backwards incompatible API changes +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. _whatsnew_0190.api: + + + + + + +Other API Changes +^^^^^^^^^^^^^^^^^ + +.. _whatsnew_0190.deprecations: + +Deprecations +^^^^^^^^^^^^ + + + + + +.. _whatsnew_0190.prior_deprecations: + +Removal of prior version deprecations/changes +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + + + + + +.. _whatsnew_0190.performance: + +Performance Improvements +~~~~~~~~~~~~~~~~~~~~~~~~ + + + + + +.. _whatsnew_0190.bug_fixes: + +Bug Fixes +~~~~~~~~~ diff --git a/pandas/algos.pyx b/pandas/algos.pyx index a31b35ba4afc6..7884d9c41845c 100644 --- a/pandas/algos.pyx +++ b/pandas/algos.pyx @@ -1505,52 +1505,8 @@ def roll_kurt(ndarray[double_t] input, #------------------------------------------------------------------------------- # Rolling median, min, max -ctypedef double_t (* skiplist_f)(object sl, int n, int p) - -cdef _roll_skiplist_op(ndarray arg, int win, int minp, skiplist_f op): - cdef ndarray[double_t] input = arg - cdef double val, prev, midpoint - cdef IndexableSkiplist skiplist - cdef Py_ssize_t nobs = 0, i - - cdef Py_ssize_t N = len(input) - cdef ndarray[double_t] output = np.empty(N, dtype=float) - - skiplist = IndexableSkiplist(win) - - minp = _check_minp(win, minp, N) - - for i from 0 <= i < minp - 1: - val = input[i] - - # Not NaN - if val == val: - nobs += 1 - skiplist.insert(val) - - output[i] = NaN - - for i from minp - 1 <= i < N: - val = input[i] - - if i > win - 1: - prev = input[i - win] - - if prev == prev: - skiplist.remove(prev) - nobs -= 1 - - if val == val: - nobs += 1 - skiplist.insert(val) - - output[i] = op(skiplist, nobs, minp) - - return output - from skiplist cimport * - @cython.boundscheck(False) 
@cython.wraparound(False) def roll_median_c(ndarray[float64_t] arg, int win, int minp): diff --git a/pandas/compat/numpy/function.py b/pandas/compat/numpy/function.py index 069cb3638fe75..274761f5d0b9c 100644 --- a/pandas/compat/numpy/function.py +++ b/pandas/compat/numpy/function.py @@ -21,7 +21,7 @@ from numpy import ndarray from pandas.util.validators import (validate_args, validate_kwargs, validate_args_and_kwargs) -from pandas.core.common import is_integer +from pandas.core.common import is_integer, UnsupportedFunctionCall from pandas.compat import OrderedDict @@ -245,3 +245,77 @@ def validate_transpose_for_generic(inst, kwargs): msg += " for {klass} instances".format(klass=klass) raise ValueError(msg) + + +def validate_window_func(name, args, kwargs): + numpy_args = ('axis', 'dtype', 'out') + msg = ("numpy operations are not " + "valid with window objects. " + "Use .{func}() directly instead ".format(func=name)) + + if len(args) > 0: + raise UnsupportedFunctionCall(msg) + + for arg in numpy_args: + if arg in kwargs: + raise UnsupportedFunctionCall(msg) + + +def validate_rolling_func(name, args, kwargs): + numpy_args = ('axis', 'dtype', 'out') + msg = ("numpy operations are not " + "valid with window objects. " + "Use .rolling(...).{func}() instead ".format(func=name)) + + if len(args) > 0: + raise UnsupportedFunctionCall(msg) + + for arg in numpy_args: + if arg in kwargs: + raise UnsupportedFunctionCall(msg) + + +def validate_expanding_func(name, args, kwargs): + numpy_args = ('axis', 'dtype', 'out') + msg = ("numpy operations are not " + "valid with window objects. 
" + "Use .expanding(...).{func}() instead ".format(func=name)) + + if len(args) > 0: + raise UnsupportedFunctionCall(msg) + + for arg in numpy_args: + if arg in kwargs: + raise UnsupportedFunctionCall(msg) + + +def validate_groupby_func(name, args, kwargs): + """ + 'args' and 'kwargs' should be empty because all of + their necessary parameters are explicitly listed in + the function signature + """ + if len(args) + len(kwargs) > 0: + raise UnsupportedFunctionCall(( + "numpy operations are not valid " + "with groupby. Use .groupby(...)." + "{func}() instead".format(func=name))) + +RESAMPLER_NUMPY_OPS = ('min', 'max', 'sum', 'prod', + 'mean', 'std', 'var') + + +def validate_resampler_func(method, args, kwargs): + """ + 'args' and 'kwargs' should be empty because all of + their necessary parameters are explicitly listed in + the function signature + """ + if len(args) + len(kwargs) > 0: + if method in RESAMPLER_NUMPY_OPS: + raise UnsupportedFunctionCall(( + "numpy operations are not valid " + "with resample. Use .resample(...)." 
+ "{func}() instead".format(func=method))) + else: + raise TypeError("too many arguments passed in") diff --git a/pandas/computation/expr.py b/pandas/computation/expr.py index 01d0fa664ac41..f1cf210754d12 100644 --- a/pandas/computation/expr.py +++ b/pandas/computation/expr.py @@ -5,6 +5,7 @@ import tokenize from functools import partial +import numpy as np import pandas as pd from pandas import compat @@ -356,6 +357,19 @@ def _possibly_transform_eq_ne(self, node, left=None, right=None): right) return op, op_class, left, right + def _possibly_downcast_constants(self, left, right): + f32 = np.dtype(np.float32) + if left.isscalar and not right.isscalar and right.return_type == f32: + # right is a float32 array, left is a scalar + name = self.env.add_tmp(np.float32(left.value)) + left = self.term_type(name, self.env) + if right.isscalar and not left.isscalar and left.return_type == f32: + # left is a float32 array, right is a scalar + name = self.env.add_tmp(np.float32(right.value)) + right = self.term_type(name, self.env) + + return left, right + def _possibly_eval(self, binop, eval_in_python): # eval `in` and `not in` (for now) in "partial" python space # things that can be evaluated in "eval" space will be turned into @@ -399,6 +413,7 @@ def _possibly_evaluate_binop(self, op, op_class, lhs, rhs, def visit_BinOp(self, node, **kwargs): op, op_class, left, right = self._possibly_transform_eq_ne(node) + left, right = self._possibly_downcast_constants(left, right) return self._possibly_evaluate_binop(op, op_class, left, right) def visit_Div(self, node, **kwargs): diff --git a/pandas/computation/ops.py b/pandas/computation/ops.py index 603c030dcaa6e..bf6fa35cf255f 100644 --- a/pandas/computation/ops.py +++ b/pandas/computation/ops.py @@ -276,18 +276,26 @@ def _not_in(x, y): _binary_ops_dict.update(d) -def _cast_inplace(terms, dtype): +def _cast_inplace(terms, acceptable_dtypes, dtype): """Cast an expression inplace. 
Parameters ---------- terms : Op The expression that should cast. + acceptable_dtypes : list of acceptable numpy.dtype + Will not cast if term's dtype in this list. + + .. versionadded:: 0.18.2 + dtype : str or numpy.dtype The dtype to cast to. """ dt = np.dtype(dtype) for term in terms: + if term.type in acceptable_dtypes: + continue + try: new_value = term.value.astype(dt) except AttributeError: @@ -452,7 +460,9 @@ def __init__(self, lhs, rhs, truediv, *args, **kwargs): rhs.return_type)) if truediv or PY3: - _cast_inplace(com.flatten(self), np.float_) + # do not upcast float32s to float64 un-necessarily + acceptable_dtypes = [np.float32, np.float_] + _cast_inplace(com.flatten(self), acceptable_dtypes, np.float_) _unary_ops_syms = '+', '-', '~', 'not' diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py index 143e6017b462a..5019dd392a567 100644 --- a/pandas/computation/tests/test_eval.py +++ b/pandas/computation/tests/test_eval.py @@ -12,8 +12,6 @@ from numpy.random import randn, rand, randint import numpy as np -from numpy.testing import assert_allclose -from numpy.testing.decorators import slow import pandas as pd from pandas.core import common as com @@ -33,7 +31,8 @@ import pandas.lib as lib from pandas.util.testing import (assert_frame_equal, randbool, assertRaisesRegexp, assert_numpy_array_equal, - assert_produces_warning, assert_series_equal) + assert_produces_warning, assert_series_equal, + slow) from pandas.compat import PY3, u, reduce _series_frame_incompatible = _bool_ops_syms @@ -186,6 +185,16 @@ def test_chained_cmp_op(self): mids, cmp_ops, self.rhses): self.check_chained_cmp_op(lhs, cmp1, mid, cmp2, rhs) + def check_equal(self, result, expected): + if isinstance(result, DataFrame): + tm.assert_frame_equal(result, expected) + elif isinstance(result, Series): + tm.assert_series_equal(result, expected) + elif isinstance(result, np.ndarray): + tm.assert_numpy_array_equal(result, expected) + else: + 
self.assertEqual(result, expected) + def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2): skip_these = _scalar_skip ex = '(lhs {cmp1} rhs) {binop} (lhs {cmp2} rhs)'.format(cmp1=cmp1, @@ -219,7 +228,7 @@ def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2): expected = _eval_single_bin( lhs_new, binop, rhs_new, self.engine) result = pd.eval(ex, engine=self.engine, parser=self.parser) - tm.assert_numpy_array_equal(result, expected) + self.check_equal(result, expected) def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs): skip_these = _scalar_skip @@ -239,7 +248,8 @@ def check_operands(left, right, cmp_op): for ex in (ex1, ex2, ex3): result = pd.eval(ex, engine=self.engine, parser=self.parser) - tm.assert_numpy_array_equal(result, expected) + + tm.assert_almost_equal(result, expected) def check_simple_cmp_op(self, lhs, cmp1, rhs): ex = 'lhs {0} rhs'.format(cmp1) @@ -250,13 +260,14 @@ def check_simple_cmp_op(self, lhs, cmp1, rhs): else: expected = _eval_single_bin(lhs, cmp1, rhs, self.engine) result = pd.eval(ex, engine=self.engine, parser=self.parser) - tm.assert_numpy_array_equal(result, expected) + self.check_equal(result, expected) def check_binary_arith_op(self, lhs, arith1, rhs): ex = 'lhs {0} rhs'.format(arith1) result = pd.eval(ex, engine=self.engine, parser=self.parser) expected = _eval_single_bin(lhs, arith1, rhs, self.engine) - tm.assert_numpy_array_equal(result, expected) + + tm.assert_almost_equal(result, expected) ex = 'lhs {0} rhs {0} rhs'.format(arith1) result = pd.eval(ex, engine=self.engine, parser=self.parser) nlhs = _eval_single_bin(lhs, arith1, rhs, @@ -271,8 +282,10 @@ def check_alignment(self, result, nlhs, ghs, op): # TypeError, AttributeError: series or frame with scalar align pass else: + + # direct numpy comparison expected = self.ne.evaluate('nlhs {0} ghs'.format(op)) - tm.assert_numpy_array_equal(result, expected) + tm.assert_numpy_array_equal(result.values, expected) # modulus, pow, and floor division require special 
casing @@ -280,9 +293,13 @@ def check_modulus(self, lhs, arith1, rhs): ex = 'lhs {0} rhs'.format(arith1) result = pd.eval(ex, engine=self.engine, parser=self.parser) expected = lhs % rhs - assert_allclose(result, expected) + + tm.assert_almost_equal(result, expected) expected = self.ne.evaluate('expected {0} rhs'.format(arith1)) - assert_allclose(result, expected) + if isinstance(result, (DataFrame, Series)): + tm.assert_almost_equal(result.values, expected) + else: + tm.assert_almost_equal(result, expected.item()) def check_floor_division(self, lhs, arith1, rhs): ex = 'lhs {0} rhs'.format(arith1) @@ -290,7 +307,7 @@ def check_floor_division(self, lhs, arith1, rhs): if self.engine == 'python': res = pd.eval(ex, engine=self.engine, parser=self.parser) expected = lhs // rhs - tm.assert_numpy_array_equal(res, expected) + self.check_equal(res, expected) else: self.assertRaises(TypeError, pd.eval, ex, local_dict={'lhs': lhs, 'rhs': rhs}, @@ -319,13 +336,13 @@ def check_pow(self, lhs, arith1, rhs): self.assertRaises(AssertionError, tm.assert_numpy_array_equal, result, expected) else: - assert_allclose(result, expected) + tm.assert_almost_equal(result, expected) ex = '(lhs {0} rhs) {0} rhs'.format(arith1) result = pd.eval(ex, engine=self.engine, parser=self.parser) expected = self.get_expected_pow_result( self.get_expected_pow_result(lhs, rhs), rhs) - assert_allclose(result, expected) + tm.assert_almost_equal(result, expected) def check_single_invert_op(self, lhs, cmp1, rhs): # simple @@ -336,12 +353,12 @@ def check_single_invert_op(self, lhs, cmp1, rhs): elb = np.array([bool(el)]) expected = ~elb result = pd.eval('~elb', engine=self.engine, parser=self.parser) - tm.assert_numpy_array_equal(expected, result) + tm.assert_almost_equal(expected, result) for engine in self.current_engines: tm.skip_if_no_ne(engine) - tm.assert_numpy_array_equal(result, pd.eval('~elb', engine=engine, - parser=self.parser)) + tm.assert_almost_equal(result, pd.eval('~elb', engine=engine, + 
parser=self.parser)) def check_compound_invert_op(self, lhs, cmp1, rhs): skip_these = 'in', 'not in' @@ -361,13 +378,13 @@ def check_compound_invert_op(self, lhs, cmp1, rhs): else: expected = ~expected result = pd.eval(ex, engine=self.engine, parser=self.parser) - tm.assert_numpy_array_equal(expected, result) + tm.assert_almost_equal(expected, result) # make sure the other engines work the same as this one for engine in self.current_engines: tm.skip_if_no_ne(engine) ev = pd.eval(ex, engine=self.engine, parser=self.parser) - tm.assert_numpy_array_equal(ev, result) + tm.assert_almost_equal(ev, result) def ex(self, op, var_name='lhs'): return '{0}{1}'.format(op, var_name) @@ -701,10 +718,10 @@ def check_modulus(self, lhs, arith1, rhs): result = pd.eval(ex, engine=self.engine, parser=self.parser) expected = lhs % rhs - assert_allclose(result, expected) + tm.assert_almost_equal(result, expected) expected = _eval_single_bin(expected, arith1, rhs, self.engine) - assert_allclose(result, expected) + tm.assert_almost_equal(result, expected) def check_alignment(self, result, nlhs, ghs, op): try: @@ -715,7 +732,7 @@ def check_alignment(self, result, nlhs, ghs, op): pass else: expected = eval('nlhs {0} ghs'.format(op)) - tm.assert_numpy_array_equal(result, expected) + tm.assert_almost_equal(result, expected) class TestEvalPythonPandas(TestEvalPythonPython): @@ -736,6 +753,35 @@ def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs): ENGINES_PARSERS = list(product(_engines, expr._parsers)) +#------------------------------------- +# typecasting rules consistency with python +# issue #12388 + +class TestTypeCasting(tm.TestCase): + + def check_binop_typecasting(self, engine, parser, op, dt): + tm.skip_if_no_ne(engine) + df = mkdf(5, 3, data_gen_f=f, dtype=dt) + s = 'df {} 3'.format(op) + res = pd.eval(s, engine=engine, parser=parser) + self.assertTrue(df.values.dtype == dt) + self.assertTrue(res.values.dtype == dt) + assert_frame_equal(res, eval(s)) + + s = '3 {} df'.format(op) 
+ res = pd.eval(s, engine=engine, parser=parser) + self.assertTrue(df.values.dtype == dt) + self.assertTrue(res.values.dtype == dt) + assert_frame_equal(res, eval(s)) + + def test_binop_typecasting(self): + for engine, parser in ENGINES_PARSERS: + for op in ['+', '-', '*', '**', '/']: + # maybe someday... numexpr has too many upcasting rules now + #for dt in chain(*(np.sctypes[x] for x in ['uint', 'int', 'float'])): + for dt in [np.float32, np.float64]: + yield self.check_binop_typecasting, engine, parser, op, dt + #------------------------------------- # basic and complex alignment @@ -1578,7 +1624,7 @@ def test_binary_functions(self): expr = "{0}(a, b)".format(fn) got = self.eval(expr) expect = getattr(np, fn)(a, b) - np.testing.assert_allclose(got, expect) + tm.assert_almost_equal(got, expect, check_names=False) def test_df_use_case(self): df = DataFrame({'a': np.random.randn(10), diff --git a/pandas/core/base.py b/pandas/core/base.py index 1a812ba2e4878..96732a7140f9e 100644 --- a/pandas/core/base.py +++ b/pandas/core/base.py @@ -127,7 +127,7 @@ def __sizeof__(self): # no memory_usage attribute, so fall back to # object's 'sizeof' - return super(self, PandasObject).__sizeof__() + return super(PandasObject, self).__sizeof__() class NoNewAttributesMixin(object): @@ -995,6 +995,37 @@ def is_unique(self): """ return self.nunique() == len(self) + @property + def is_monotonic(self): + """ + Return boolean if values in the object are + monotonic_increasing + + .. versionadded:: 0.18.2 + + Returns + ------- + is_monotonic : boolean + """ + from pandas import Index + return Index(self).is_monotonic + is_monotonic_increasing = is_monotonic + + @property + def is_monotonic_decreasing(self): + """ + Return boolean if values in the object are + monotonic_decreasing + + .. 
versionadded:: 0.18.2 + + Returns + ------- + is_monotonic_decreasing : boolean + """ + from pandas import Index + return Index(self).is_monotonic_decreasing + def memory_usage(self, deep=False): """ Memory usage of my values diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py index 4f80c610c1126..fa3d13c174245 100644 --- a/pandas/core/categorical.py +++ b/pandas/core/categorical.py @@ -336,11 +336,26 @@ def copy(self): categories=self.categories, ordered=self.ordered, fastpath=True) - def astype(self, dtype): - """ coerce this type to another dtype """ + def astype(self, dtype, copy=True): + """ + Coerce this type to another dtype + + Parameters + ---------- + dtype : numpy dtype or pandas type + copy : bool, default True + By default, astype always returns a newly allocated object. + If copy is set to False and dtype is categorical, the original + object is returned. + + .. versionadded:: 0.18.2 + + """ if is_categorical_dtype(dtype): + if copy is True: + return self.copy() return self - return np.array(self, dtype=dtype) + return np.array(self, dtype=dtype, copy=copy) @cache_readonly def ndim(self): @@ -883,8 +898,8 @@ def remove_unused_categories(self, inplace=False): if idx.size != 0 and idx[0] == -1: # na sentinel idx, inv = idx[1:], inv - 1 - cat._codes = inv cat._categories = cat.categories.take(idx) + cat._codes = _coerce_indexer_dtype(inv, self._categories) if not inplace: return cat @@ -985,7 +1000,7 @@ def __setstate__(self, state): # Provide compatibility with pre-0.15.0 Categoricals. 
if '_codes' not in state and 'labels' in state: - state['_codes'] = state.pop('labels') + state['_codes'] = state.pop('labels').astype(np.int8) if '_categories' not in state and '_levels' in state: state['_categories'] = self._validate_categories(state.pop( '_levels')) diff --git a/pandas/core/common.py b/pandas/core/common.py index c64cfa77b9e62..d26c59e62de30 100644 --- a/pandas/core/common.py +++ b/pandas/core/common.py @@ -41,6 +41,10 @@ class AmbiguousIndexError(PandasError, KeyError): pass +class UnsupportedFunctionCall(ValueError): + pass + + class AbstractMethodError(NotImplementedError): """Raise this error instead of NotImplementedError for abstract methods while keeping compatibility with Python 2 and Python 3. @@ -138,7 +142,7 @@ def _isnull_old(obj): def _use_inf_as_null(key): """Option change callback for null/inf behaviour - Choose which replacement for numpy.isnan / -numpy.isfinite is used. + Choose which replacement for numpy.isnan / ~numpy.isfinite is used. Parameters ---------- @@ -229,7 +233,7 @@ def _isnull_ndarraylike_old(obj): def notnull(obj): - """Replacement for numpy.isfinite / -numpy.isnan which is suitable for use + """Replacement for numpy.isfinite / ~numpy.isnan which is suitable for use on object arrays. Parameters @@ -312,8 +316,8 @@ def array_equivalent(left, right, strict_nan=False): if not strict_nan: # pd.isnull considers NaN and None to be equivalent. 
- return lib.array_equivalent_object( - _ensure_object(left.ravel()), _ensure_object(right.ravel())) + return lib.array_equivalent_object(_ensure_object(left.ravel()), + _ensure_object(right.ravel())) for left_value, right_value in zip(left, right): if left_value is tslib.NaT and right_value is not tslib.NaT: @@ -1111,7 +1115,7 @@ def _possibly_cast_to_datetime(value, dtype, errors='raise'): def _possibly_infer_to_datetimelike(value, convert_dates=False): """ - we might have a array (or single object) that is datetime like, + we might have an array (or single object) that is datetime like, and no dtype is passed don't change the value unless we find a datetime/timedelta set @@ -1596,7 +1600,7 @@ def is_timedelta64_dtype(arr_or_dtype): def is_timedelta64_ns_dtype(arr_or_dtype): - tipo = _get_dtype_type(arr_or_dtype) + tipo = _get_dtype(arr_or_dtype) return tipo == _TD_DTYPE @@ -2058,7 +2062,7 @@ def _random_state(state=None): state : int, np.random.RandomState, None. If receives an int, passes to np.random.RandomState() as seed. If receives an np.random.RandomState object, just returns object. - If receives `None`, returns an np.random.RandomState object. + If receives `None`, returns np.random. If receives anything else, raises an informative ValueError. Default None. 
@@ -2072,7 +2076,7 @@ def _random_state(state=None): elif isinstance(state, np.random.RandomState): return state elif state is None: - return np.random.RandomState() + return np.random else: raise ValueError("random_state must be an integer, a numpy " "RandomState, or None") diff --git a/pandas/core/frame.py b/pandas/core/frame.py index b209b6d6ec543..69def7502a6f7 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1062,7 +1062,7 @@ def to_records(self, index=True, convert_datetime64=True): count += 1 elif index_names[0] is None: index_names = ['index'] - names = index_names + lmap(str, self.columns) + names = lmap(str, index_names) + lmap(str, self.columns) else: arrays = [self[c].get_values() for c in self.columns] names = lmap(str, self.columns) @@ -4351,18 +4351,20 @@ def join(self, other, on=None, how='left', lsuffix='', rsuffix='', Series is passed, its name attribute must be set, and that will be used as the column name in the resulting joined DataFrame on : column name, tuple/list of column names, or array-like - Column(s) to use for joining, otherwise join on index. If multiples + Column(s) in the caller to join on the index in other, + otherwise joins index-on-index. If multiples columns given, the passed DataFrame must have a MultiIndex. Can pass an array as the join key if not already contained in the calling DataFrame. Like an Excel VLOOKUP operation - how : {'left', 'right', 'outer', 'inner'} - How to handle indexes of the two objects. Default: 'left' - for joining on index, None otherwise - - * left: use calling frame's index - * right: use input frame's index - * outer: form union of indexes - * inner: use intersection of indexes + how : {'left', 'right', 'outer', 'inner'}, default: 'left' + How to handle the operation of the two objects. 
+ + * left: use calling frame's index (or column if on is specified) + * right: use other frame's index + * outer: form union of calling frame's index (or column if on is + specified) with other frame's index + * inner: form intersection of calling frame's index (or column if + on is specified) with other frame's index lsuffix : string Suffix to use from left frame's overlapping columns rsuffix : string @@ -4376,6 +4378,77 @@ on, lsuffix, and rsuffix options are not supported when passing a list of DataFrame objects + Examples + -------- + >>> caller = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'], + ... 'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']}) + + >>> caller + A key + 0 A0 K0 + 1 A1 K1 + 2 A2 K2 + 3 A3 K3 + 4 A4 K4 + 5 A5 K5 + + >>> other = pd.DataFrame({'key': ['K0', 'K1', 'K2'], + ... 'B': ['B0', 'B1', 'B2']}) + + >>> other + B key + 0 B0 K0 + 1 B1 K1 + 2 B2 K2 + + Join DataFrames using their indexes. + + >>> caller.join(other, lsuffix='_caller', rsuffix='_other') + A key_caller B key_other + 0 A0 K0 B0 K0 + 1 A1 K1 B1 K1 + 2 A2 K2 B2 K2 + 3 A3 K3 NaN NaN + 4 A4 K4 NaN NaN + 5 A5 K5 NaN NaN + + + If we want to join using the key columns, we need to set key to be + the index in both caller and other. The joined DataFrame will have + key as its index. + + >>> caller.set_index('key').join(other.set_index('key')) + A B + key + K0 A0 B0 + K1 A1 B1 + K2 A2 B2 + K3 A3 NaN + K4 A4 NaN + K5 A5 NaN + + Another option to join using the key columns is to use the on + parameter. DataFrame.join always uses other's index but we can use any + column in the caller. This method preserves the original caller's + index in the result.
+ + >>> caller.join(other.set_index('key'), on='key') + A key B + 0 A0 K0 B0 + 1 A1 K1 B1 + 2 A2 K2 B2 + 3 A3 K3 NaN + 4 A4 K4 NaN + 5 A5 K5 NaN + + + See also + -------- + DataFrame.merge : For column(s)-on-column(s) operations + Returns ------- joined : DataFrame @@ -4989,31 +5062,27 @@ def quantile(self, q=0.5, axis=0, numeric_only=True, 0.5 2.5 55.0 """ self._check_percentile(q) - if not com.is_list_like(q): - q = [q] - squeeze = True - else: - squeeze = False data = self._get_numeric_data() if numeric_only else self axis = self._get_axis_number(axis) + is_transposed = axis == 1 - def _quantile(series): - res = series.quantile(q, interpolation=interpolation) - return series.name, res - - if axis == 1: + if is_transposed: data = data.T - # unable to use DataFrame.apply, becasuse data may be empty - result = dict(_quantile(s) for (_, s) in data.iteritems()) - result = self._constructor(result, columns=data.columns) - if squeeze: - if result.shape == (1, 1): - result = result.T.iloc[:, 0] # don't want scalar - else: - result = result.T.squeeze() - result.name = None # For groupby, so it can set an index name + result = data._data.quantile(qs=q, + axis=1, + interpolation=interpolation, + transposed=is_transposed) + + if result.ndim == 2: + result = self._constructor(result) + else: + result = self._constructor_sliced(result, name=q) + + if is_transposed: + result = result.T + return result def to_timestamp(self, freq=None, how='start', axis=0, copy=True): @@ -5468,7 +5537,8 @@ def _list_of_series_to_arrays(data, columns, coerce_float=False, dtype=None): def _list_of_dict_to_arrays(data, columns, coerce_float=False, dtype=None): if columns is None: gen = (list(x.keys()) for x in data) - columns = lib.fast_unique_multiple_list_gen(gen) + sort = not any(isinstance(d, OrderedDict) for d in data) + columns = lib.fast_unique_multiple_list_gen(gen, sort=sort) # assure that they are of the base dict class and not of derived # classes diff --git
a/pandas/core/generic.py b/pandas/core/generic.py index 6c80ab9d87e33..6f062a28b8dc7 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -1,4 +1,5 @@ # pylint: disable=W0231,E1101 +import collections import warnings import operator import weakref @@ -20,6 +21,7 @@ import pandas.core.missing as missing import pandas.core.datetools as datetools from pandas.formats.printing import pprint_thing +from pandas.formats.format import format_percentiles from pandas import compat from pandas.compat.numpy import function as nv from pandas.compat import (map, zip, lrange, string_types, @@ -143,7 +145,7 @@ def _init_mgr(self, mgr, axes=None, dtype=None, copy=False): @property def _constructor(self): - """Used when a manipulation result has the same dimesions as the + """Used when a manipulation result has the same dimensions as the original. """ raise AbstractMethodError(self) @@ -2356,7 +2358,11 @@ def _reindex_axis(self, new_index, fill_method, axis, copy): def filter(self, items=None, like=None, regex=None, axis=None): """ - Restrict the info axis to set of items or wildcard + Subset rows or columns of dataframe according to labels in + the specified index. + + Note that this routine does not filter a dataframe on its + contents. The filter is applied to the labels of the index. Parameters ---------- @@ -2366,19 +2372,57 @@ def filter(self, items=None, like=None, regex=None, axis=None): Keep info axis where "arg in col == True" regex : string (regular expression) Keep info axis with re.search(regex, col) == True - axis : int or None - The axis to filter on. By default this is the info axis. The "info - axis" is the axis that is used when indexing with ``[]``. For - example, ``df = DataFrame({'a': [1, 2, 3, 4]]}); df['a']``. So, - the ``DataFrame`` columns are the info axis. + axis : int or string axis name + The axis to filter on. 
By default this is the info axis, + 'index' for Series, 'columns' for DataFrame + + Returns + ------- + same type as input object + + Examples + -------- + >>> df + one two three + mouse 1 2 3 + rabbit 4 5 6 + + >>> # select columns by name + >>> df.filter(items=['one', 'three']) + one three + mouse 1 3 + rabbit 4 6 + + >>> # select columns by regular expression + >>> df.filter(regex='e$', axis=1) + one three + mouse 1 3 + rabbit 4 6 + + >>> # select rows containing 'bbi' + >>> df.filter(like='bbi', axis=0) + one two three + rabbit 4 5 6 + + See Also + -------- + pandas.DataFrame.select Notes ----- - Arguments are mutually exclusive, but this is not checked for + The ``items``, ``like``, and ``regex`` parameters are + enforced to be mutually exclusive. + ``axis`` defaults to the info axis that is used when indexing + with ``[]``. """ import re + nkw = sum([x is not None for x in [items, like, regex]]) + if nkw > 1: + raise TypeError('Keyword arguments `items`, `like`, or `regex` ' + 'are mutually exclusive') + if axis is None: axis = self._info_axis_name axis_name = self._get_axis_name(axis) @@ -2937,7 +2981,11 @@ def astype(self, dtype, copy=True, raise_on_error=True, **kwargs): Parameters ---------- - dtype : numpy.dtype or Python type + dtype : numpy.dtype, Python type, or dict + Use a numpy.dtype or Python type to cast entire pandas object to the + same type. Alternatively, use {col: dtype, ...}, where col is a + column label and dtype is a numpy.dtype or Python type to cast one + or more of the DataFrame's columns to column-specific types. 
raise_on_error : raise on invalid input kwargs : keyword arguments to pass on to the constructor @@ -2945,10 +2993,27 @@ def astype(self, dtype, copy=True, raise_on_error=True, **kwargs): ------- casted : type of caller """ - - mgr = self._data.astype(dtype=dtype, copy=copy, - raise_on_error=raise_on_error, **kwargs) - return self._constructor(mgr).__finalize__(self) + if isinstance(dtype, collections.Mapping): + if self.ndim == 1: # i.e. Series + if len(dtype) > 1 or list(dtype.keys())[0] != self.name: + raise KeyError('Only the Series name can be used for ' + 'the key in Series dtype mappings.') + typ = list(dtype.values())[0] + return self.astype(typ, copy, raise_on_error, **kwargs) + + from pandas.tools.merge import concat + casted_cols = [self[col].astype(typ, copy=copy) + for col, typ in dtype.items()] + other_col_labels = self.columns.difference(dtype.keys()) + other_cols = [self[col].copy() if copy else self[col] + for col in other_col_labels] + new_df = concat(casted_cols + other_cols, axis=1) + return new_df.reindex(columns=self.columns, copy=False) + + # else, only a single dtype is given + new_data = self._data.astype(dtype=dtype, copy=copy, + raise_on_error=raise_on_error, **kwargs) + return self._constructor(new_data).__finalize__(self) def copy(self, deep=True): """ @@ -4868,32 +4933,33 @@ def abs(self): @Appender(_shared_docs['describe'] % _shared_doc_kwargs) def describe(self, percentiles=None, include=None, exclude=None): if self.ndim >= 3: - msg = "describe is not implemented on on Panel or PanelND objects." + msg = "describe is not implemented on Panel or PanelND objects." 
raise NotImplementedError(msg) + elif self.ndim == 2 and self.columns.size == 0: + raise ValueError("Cannot describe a DataFrame without columns") if percentiles is not None: # get them all to be in [0, 1] self._check_percentile(percentiles) + + # median should always be included + if 0.5 not in percentiles: + percentiles.append(0.5) percentiles = np.asarray(percentiles) else: percentiles = np.array([0.25, 0.5, 0.75]) - # median should always be included - if (percentiles != 0.5).all(): # median isn't included - lh = percentiles[percentiles < .5] - uh = percentiles[percentiles > .5] - percentiles = np.hstack([lh, 0.5, uh]) + # sort and check for duplicates + unique_pcts = np.unique(percentiles) + if len(unique_pcts) < len(percentiles): + raise ValueError("percentiles cannot contain duplicates") + percentiles = unique_pcts - def pretty_name(x): - x *= 100 - if x == int(x): - return '%.0f%%' % x - else: - return '%.1f%%' % x + formatted_percentiles = format_percentiles(percentiles) - def describe_numeric_1d(series, percentiles): + def describe_numeric_1d(series): stat_index = (['count', 'mean', 'std', 'min'] + - [pretty_name(x) for x in percentiles] + ['max']) + formatted_percentiles + ['max']) d = ([series.count(), series.mean(), series.std(), series.min()] + [series.quantile(x) for x in percentiles] + [series.max()]) return pd.Series(d, index=stat_index, name=series.name) @@ -4918,18 +4984,18 @@ def describe_categorical_1d(data): return pd.Series(result, index=names, name=data.name) - def describe_1d(data, percentiles): + def describe_1d(data): if com.is_bool_dtype(data): return describe_categorical_1d(data) elif com.is_numeric_dtype(data): - return describe_numeric_1d(data, percentiles) + return describe_numeric_1d(data) elif com.is_timedelta64_dtype(data): - return describe_numeric_1d(data, percentiles) + return describe_numeric_1d(data) else: return describe_categorical_1d(data) if self.ndim == 1: - return describe_1d(self, percentiles) + return 
describe_1d(self) elif (include is None) and (exclude is None): if len(self._get_numeric_data()._info_axis) > 0: # when some numerics are found, keep only numerics @@ -4944,7 +5010,7 @@ def describe_1d(data, percentiles): else: data = self.select_dtypes(include=include, exclude=exclude) - ldesc = [describe_1d(s, percentiles) for _, s in data.iteritems()] + ldesc = [describe_1d(s) for _, s in data.iteritems()] # set a convenient order for rows names = [] ldesc_indexes = sorted([x.index for x in ldesc], key=len) @@ -4954,8 +5020,7 @@ def describe_1d(data, percentiles): names.append(name) d = pd.concat(ldesc, join_axes=pd.Index([names]), axis=1) - d.columns = self.columns._shallow_copy(values=d.columns.values) - d.columns.names = data.columns.names + d.columns = data.columns.copy() return d def _check_percentile(self, q): @@ -5299,7 +5364,7 @@ def _make_stat_function(cls, name, name1, name2, axis_descr, desc, f): @Appender(_num_doc) def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs): - nv.validate_stat_func(tuple(), kwargs) + nv.validate_stat_func(tuple(), kwargs, fname=name) if skipna is None: skipna = True if axis is None: @@ -5319,7 +5384,7 @@ def _make_stat_function_ddof(cls, name, name1, name2, axis_descr, desc, f): @Appender(_num_ddof_doc) def stat_func(self, axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs): - nv.validate_stat_ddof_func(tuple(), kwargs) + nv.validate_stat_ddof_func(tuple(), kwargs, fname=name) if skipna is None: skipna = True if axis is None: @@ -5340,7 +5405,7 @@ def _make_cum_function(cls, name, name1, name2, axis_descr, desc, accum_func, @Appender("Return cumulative {0} over requested axis.".format(name) + _cnum_doc) def cum_func(self, axis=None, dtype=None, out=None, skipna=True, **kwargs): - nv.validate_cum_func(tuple(), kwargs) + nv.validate_cum_func(tuple(), kwargs, fname=name) if axis is None: axis = self._stat_axis_number else: @@ -5374,7 +5439,7 @@ def 
_make_logical_function(cls, name, name1, name2, axis_descr, desc, f): @Appender(_bool_doc) def logical_func(self, axis=None, bool_only=None, skipna=None, level=None, **kwargs): - nv.validate_logical_func(tuple(), kwargs) + nv.validate_logical_func(tuple(), kwargs, fname=name) if skipna is None: skipna = True if axis is None: diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py index 7a4791189726e..bea62e98e4a2a 100644 --- a/pandas/core/groupby.py +++ b/pandas/core/groupby.py @@ -11,6 +11,7 @@ callable, map ) from pandas import compat +from pandas.compat.numpy import function as nv from pandas.compat.numpy import _np_version_under1p8 from pandas.core.base import (PandasObject, SelectionMixin, GroupByError, DataError, SpecificationError) @@ -36,7 +37,7 @@ is_datetime_or_timedelta_dtype, is_bool, is_bool_dtype, AbstractMethodError, _maybe_fill) -from pandas.core.config import option_context +from pandas.core.config import option_context, is_callable import pandas.lib as lib from pandas.lib import Timestamp import pandas.tslib as tslib @@ -642,9 +643,20 @@ def apply(self, func, *args, **kwargs): func = self._is_builtin_func(func) - @wraps(func) - def f(g): - return func(g, *args, **kwargs) + # this is needed so we don't try and wrap strings. 
If we could + # resolve functions to their callable functions prior, this + # wouldn't be needed + if args or kwargs: + if is_callable(func): + + @wraps(func) + def f(g): + return func(g, *args, **kwargs) + else: + raise ValueError('func must be a callable if args or ' + 'kwargs are supplied') + else: + f = func # ignore SettingWithCopy here in case the user mutates with option_context('mode.chained_assignment', None): @@ -806,8 +818,9 @@ def reset_identity(values): # reset the identities of the components # of the values to prevent aliasing for v in values: - ax = v._get_axis(self.axis) - ax._reset_identity() + if v is not None: + ax = v._get_axis(self.axis) + ax._reset_identity() return values if not not_indexed_same: @@ -954,12 +967,13 @@ def count(self): @Substitution(name='groupby') @Appender(_doc_template) - def mean(self): + def mean(self, *args, **kwargs): """ Compute mean of groups, excluding missing values For multiple groupings, the result index will be a MultiIndex """ + nv.validate_groupby_func('mean', args, kwargs) try: return self._cython_agg_general('mean') except GroupByError: @@ -993,7 +1007,7 @@ def f(x): @Substitution(name='groupby') @Appender(_doc_template) - def std(self, ddof=1): + def std(self, ddof=1, *args, **kwargs): """ Compute standard deviation of groups, excluding missing values @@ -1005,12 +1019,13 @@ def std(self, ddof=1): degrees of freedom """ - # todo, implement at cython level? + # TODO: implement at Cython level? 
+ nv.validate_groupby_func('std', args, kwargs) return np.sqrt(self.var(ddof=ddof)) @Substitution(name='groupby') @Appender(_doc_template) - def var(self, ddof=1): + def var(self, ddof=1, *args, **kwargs): """ Compute variance of groups, excluding missing values @@ -1021,7 +1036,7 @@ def std(self, ddof=1): ddof : integer, default 1 degrees of freedom """ - + nv.validate_groupby_func('var', args, kwargs) if ddof == 1: return self._cython_agg_general('var') else: @@ -1317,8 +1332,9 @@ def cumcount(self, ascending=True): @Substitution(name='groupby') @Appender(_doc_template) - def cumprod(self, axis=0): + def cumprod(self, axis=0, *args, **kwargs): """Cumulative product for each group""" + nv.validate_groupby_func('cumprod', args, kwargs) if axis != 0: return self.apply(lambda x: x.cumprod(axis=axis)) @@ -1326,8 +1342,9 @@ def cumprod(self, axis=0): @Substitution(name='groupby') @Appender(_doc_template) - def cumsum(self, axis=0): + def cumsum(self, axis=0, *args, **kwargs): """Cumulative sum for each group""" + nv.validate_groupby_func('cumsum', args, kwargs) if axis != 0: return self.apply(lambda x: x.cumsum(axis=axis)) @@ -2669,7 +2686,7 @@ def _wrap_transformed_output(self, output, names=None): def _wrap_applied_output(self, keys, values, not_indexed_same=False): if len(keys) == 0: # GH #6265 - return Series([], name=self.name) + return Series([], name=self.name, index=keys) def _get_index(): if self.grouper.nkeys > 1: @@ -2776,18 +2793,11 @@ def _transform_fast(self, func): func = getattr(self, func) ids, _, ngroup = self.grouper.group_info - mask = ids != -1 - - out = func().values[ids] - if not mask.all(): - out = np.where(mask, out, np.nan) - - obs = np.zeros(ngroup, dtype='bool') - obs[ids[mask]] = True - if not obs.all(): - out = self._try_cast(out, self._selected_obj) - - return Series(out, index=self.obj.index) + cast = (self.size().fillna(0) > 0).any() + out = algos.take_1d(func().values, ids) + if cast: + out = self._try_cast(out, self.obj) + return
Series(out, index=self.obj.index, name=self.obj.name) def filter(self, func, dropna=True, *args, **kwargs): # noqa """ @@ -3223,12 +3233,25 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False): from pandas.core.index import _all_indexes_same if len(keys) == 0: - # XXX - return DataFrame({}) + return DataFrame(index=keys) key_names = self.grouper.names - if isinstance(values[0], DataFrame): + # GH12824. + def first_non_None_value(values): + try: + v = next(v for v in values if v is not None) + except StopIteration: + return None + return v + + v = first_non_None_value(values) + + if v is None: + # GH9684. If all values are None, then this will throw an error. + # We'd prefer it return an empty dataframe. + return DataFrame() + elif isinstance(v, DataFrame): return self._concat_objects(keys, values, not_indexed_same=not_indexed_same) elif self.grouper.groupings is not None: @@ -3255,21 +3278,15 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False): key_index = None # make Nones an empty object - if com._count_not_none(*values) != len(values): - try: - v = next(v for v in values if v is not None) - except StopIteration: - # If all values are None, then this will throw an error. - # We'd prefer it return an empty dataframe. 
- return DataFrame() - if v is None: - return DataFrame() - elif isinstance(v, NDFrame): - values = [ - x if x is not None else - v._constructor(**v._construct_axes_dict()) - for x in values - ] + v = first_non_None_value(values) + if v is None: + return DataFrame() + elif isinstance(v, NDFrame): + values = [ + x if x is not None else + v._constructor(**v._construct_axes_dict()) + for x in values + ] v = values[0] @@ -3465,19 +3482,28 @@ def transform(self, func, *args, **kwargs): if not result.columns.equals(obj.columns): return self._transform_general(func, *args, **kwargs) - results = np.empty_like(obj.values, result.values.dtype) - for (name, group), (i, row) in zip(self, result.iterrows()): - indexer = self._get_index(name) - if len(indexer) > 0: - results[indexer] = np.tile(row.values, len( - indexer)).reshape(len(indexer), -1) + return self._transform_fast(result, obj) - counts = self.size().fillna(0).values - if any(counts == 0): - results = self._try_cast(results, obj[result.columns]) + def _transform_fast(self, result, obj): + """ + Fast transform path for aggregations + """ + # if there were groups with no observations (Categorical only?) 
+ # try casting data to original dtype + cast = (self.size().fillna(0) > 0).any() + + # for each col, reshape to the size of the original frame + # by take operation + ids, _, ngroup = self.grouper.group_info + output = [] + for i, _ in enumerate(result.columns): + res = algos.take_1d(result.iloc[:, i].values, ids) + if cast: + res = self._try_cast(res, obj.iloc[:, i]) + output.append(res) - return (DataFrame(results, columns=result.columns, index=obj.index) - ._convert(datetime=True)) + return DataFrame._from_arrays(output, columns=result.columns, + index=obj.index) def _define_paths(self, func, *args, **kwargs): if isinstance(func, compat.string_types): @@ -3630,17 +3656,12 @@ def _gotitem(self, key, ndim, subset=None): def _wrap_generic_output(self, result, obj): result_index = self.grouper.levels[0] - if result: - if self.axis == 0: - result = DataFrame(result, index=obj.columns, - columns=result_index).T - else: - result = DataFrame(result, index=obj.index, - columns=result_index) + if self.axis == 0: + return DataFrame(result, index=obj.columns, + columns=result_index).T else: - result = DataFrame(result) - - return result + return DataFrame(result, index=obj.index, + columns=result_index) def _get_data_to_aggregate(self): obj = self._obj_with_exclusions diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index acb0675247a78..9485f50ed07f1 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -336,9 +336,12 @@ def _setitem_with_indexer(self, indexer, value): # this preserves dtype of the value new_values = Series([value])._values if len(self.obj._values): - new_values = np.concatenate([self.obj._values, - new_values]) - + try: + new_values = np.concatenate([self.obj._values, + new_values]) + except TypeError: + new_values = np.concatenate([self.obj.asobject, + new_values]) self.obj._data = self.obj._constructor( new_values, index=new_index, name=self.obj.name)._data self.obj._maybe_update_cacher(clear=True) diff --git
a/pandas/core/internals.py b/pandas/core/internals.py index abfc5c989056e..97df81ad6be48 100644 --- a/pandas/core/internals.py +++ b/pandas/core/internals.py @@ -40,7 +40,7 @@ from pandas.util.decorators import cache_readonly from pandas.tslib import Timedelta -from pandas import compat +from pandas import compat, _np_version_under1p9 from pandas.compat import range, map, zip, u from pandas.lib import BlockPlacement @@ -84,7 +84,7 @@ def __init__(self, values, placement, ndim=None, fastpath=False): self.mgr_locs = placement self.values = values - if len(self.mgr_locs) != len(self.values): + if ndim and len(self.mgr_locs) != len(self.values): raise ValueError('Wrong number of items passed %d, placement ' 'implies %d' % (len(self.values), len(self.mgr_locs))) @@ -180,6 +180,12 @@ def make_block(self, values, placement=None, ndim=None, **kwargs): return make_block(values, placement=placement, ndim=ndim, **kwargs) + def make_block_scalar(self, values, **kwargs): + """ + Create a ScalarBlock + """ + return ScalarBlock(values) + def make_block_same_class(self, values, placement=None, fastpath=True, **kwargs): """ Wrap given values in a block of same type as self. 
""" @@ -324,7 +330,8 @@ def apply(self, func, mgr=None, **kwargs): """ result = func(self.values, **kwargs) if not isinstance(result, Block): - result = self.make_block(values=_block_shape(result)) + result = self.make_block(values=_block_shape(result, + ndim=self.ndim)) return result @@ -1260,32 +1267,117 @@ def equals(self, other): return False return array_equivalent(self.values, other.values) - def quantile(self, qs, mgr=None, **kwargs): + def quantile(self, qs, interpolation='linear', axis=0, mgr=None): """ compute the quantiles of the Parameters ---------- - qs : a scalar or list of the quantiles to be computed + qs: a scalar or list of the quantiles to be computed + interpolation: type of interpolation, default 'linear' + axis: axis to compute, default 0 + + Returns + ------- + tuple of (axis, block) + """ + if _np_version_under1p9: + if interpolation != 'linear': + raise ValueError("Interpolation methods other than linear " + "are not supported in numpy < 1.9.") + + kw = {} + if not _np_version_under1p9: + kw.update({'interpolation': interpolation}) values = self.get_values() - values, mask, _, _ = self._try_coerce_args(values, values) + values, _, _, _ = self._try_coerce_args(values, values) + mask = isnull(self.values) if not lib.isscalar(mask) and mask.any(): - values = values[~mask] - if len(values) == 0: - if com.is_list_like(qs): - result = np.array([self.fill_value]) + # even though this could be a 2-d mask it appears + # as a 1-d result + mask = mask.reshape(values.shape) + result_shape = tuple([values.shape[0]] + [-1] * (self.ndim - 1)) + values = _block_shape(values[~mask], ndim=self.ndim) + if self.ndim > 1: + values = values.reshape(result_shape) + + from pandas import Float64Index + is_empty = values.shape[axis] == 0 + if com.is_list_like(qs): + ax = Float64Index(qs) + + if is_empty: + if self.ndim == 1: + result = self._na_value + else: + # create the array of na_values + # 2d len(values) * len(qs) + result = 
np.repeat(np.array([self._na_value] * len(qs)), + len(values)).reshape(len(values), + len(qs)) else: - result = self._na_value - elif com.is_list_like(qs): - values = [_quantile(values, x * 100, **kwargs) for x in qs] - result = np.array(values) + + try: + result = _quantile(values, np.array(qs) * 100, + axis=axis, **kw) + except ValueError: + + # older numpies don't handle an array for q + result = [_quantile(values, q * 100, + axis=axis, **kw) for q in qs] + + result = np.array(result, copy=False) + if self.ndim > 1: + result = result.T + else: - result = _quantile(values, qs * 100, **kwargs) - return self._try_coerce_result(result) + if self.ndim == 1: + ax = Float64Index([qs]) + else: + ax = mgr.axes[0] + + if is_empty: + if self.ndim == 1: + result = self._na_value + else: + result = np.array([self._na_value] * len(self)) + else: + result = _quantile(values, qs * 100, axis=axis, **kw) + + ndim = getattr(result, 'ndim', None) or 0 + result = self._try_coerce_result(result) + if lib.isscalar(result): + return ax, self.make_block_scalar(result) + return ax, make_block(result, + placement=np.arange(len(result)), + ndim=ndim) + + +class ScalarBlock(Block): + """ + a scalar compat Block + """ + __slots__ = ['_mgr_locs', 'values', 'ndim'] + + def __init__(self, values): + self.ndim = 0 + self.mgr_locs = [0] + self.values = values + + @property + def dtype(self): + return type(self.values) + + @property + def shape(self): + return tuple([0]) + + def __len__(self): + return 0 class NonConsolidatableMixIn(object): @@ -1378,6 +1470,8 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0, if isinstance(new, np.ndarray) and len(new) == len(mask): new = new[mask] + + mask = mask.reshape(new_values.shape) new_values[mask] = new new_values = self._try_coerce_result(new_values) return [self.make_block(values=new_values)] @@ -1676,6 +1770,7 @@ def convert(self, *args, **kwargs): can return multiple blocks! 
""" + if args: raise NotImplementedError by_item = True if 'by_item' not in kwargs else kwargs['by_item'] @@ -1706,8 +1801,13 @@ def convert(self, *args, **kwargs): for i, rl in enumerate(self.mgr_locs): values = self.iget(i) - values = fn(values.ravel(), **fn_kwargs).reshape(values.shape) - values = _block_shape(values, ndim=self.ndim) + shape = values.shape + values = fn(values.ravel(), **fn_kwargs) + try: + values = values.reshape(shape) + values = _block_shape(values, ndim=self.ndim) + except AttributeError: + pass newb = make_block(values, ndim=self.ndim, placement=[rl]) blocks.append(newb) @@ -2115,7 +2215,10 @@ def _try_coerce_result(self, result): """ reverse of try_coerce_args """ if isinstance(result, np.ndarray): if result.dtype.kind in ['i', 'f', 'O']: - result = result.astype('M8[ns]') + try: + result = result.astype('M8[ns]') + except ValueError: + pass elif isinstance(result, (np.integer, np.float, np.datetime64)): result = self._box_func(result) return result @@ -2219,11 +2322,6 @@ def to_object_block(self, mgr): kwargs['placement'] = [0] return self.make_block(values, klass=ObjectBlock, **kwargs) - def replace(self, *args, **kwargs): - # if we are forced to ObjectBlock, then don't coerce (to UTC) - kwargs['convert'] = False - return super(DatetimeTZBlock, self).replace(*args, **kwargs) - def _slice(self, slicer): """ return a slice of my values """ if isinstance(slicer, tuple): @@ -2246,8 +2344,8 @@ def _try_coerce_args(self, values, other): ------- base-type values, values mask, base-type other, other mask """ - values_mask = isnull(values) - values = values.tz_localize(None).asi8 + values_mask = _block_shape(isnull(values), ndim=self.ndim) + values = _block_shape(values.tz_localize(None).asi8, ndim=self.ndim) other_mask = False if isinstance(other, ABCSeries): @@ -2283,6 +2381,9 @@ def _try_coerce_result(self, result): elif isinstance(result, (np.integer, np.float, np.datetime64)): result = lib.Timestamp(result).tz_localize(self.values.tz) if 
isinstance(result, np.ndarray): + # allow passing of > 1dim if its trivial + if result.ndim > 1: + result = result.reshape(len(result)) result = self._holder(result).tz_localize(self.values.tz) return result @@ -2809,7 +2910,7 @@ def _verify_integrity(self): len(self.items), tot_items)) def apply(self, f, axes=None, filter=None, do_integrity_check=False, - consolidate=True, raw=False, **kwargs): + consolidate=True, **kwargs): """ iterate over the blocks, collect and create a new block manager @@ -2823,7 +2924,6 @@ def apply(self, f, axes=None, filter=None, do_integrity_check=False, integrity check consolidate: boolean, default True. Join together blocks having same dtype - raw: boolean, default False. Return the raw returned results Returns ------- @@ -2890,17 +2990,102 @@ def apply(self, f, axes=None, filter=None, do_integrity_check=False, applied = getattr(b, f)(**kwargs) result_blocks = _extend_blocks(applied, result_blocks) - if raw: - if self._is_single_block: - return result_blocks[0] - return result_blocks - elif len(result_blocks) == 0: + if len(result_blocks) == 0: return self.make_empty(axes or self.axes) bm = self.__class__(result_blocks, axes or self.axes, do_integrity_check=do_integrity_check) bm._consolidate_inplace() return bm + def reduction(self, f, axis=0, consolidate=True, transposed=False, + **kwargs): + """ + iterate over the blocks, collect and create a new block manager. + This routine is intended for reduction type operations and + will do inference on the generated blocks. + + Parameters + ---------- + f: the callable or function name to operate on at the block level + axis: reduction axis, default 0 + consolidate: boolean, default True. 
Join together blocks having same + dtype + transposed: boolean, default False + we are holding transposed data + + Returns + ------- + Block Manager (new object) + + """ + + if consolidate: + self._consolidate_inplace() + + axes, blocks = [], [] + for b in self.blocks: + kwargs['mgr'] = self + axe, block = getattr(b, f)(axis=axis, **kwargs) + + axes.append(axe) + blocks.append(block) + + # note that some DatetimeTZ, Categorical are always ndim==1 + ndim = set([b.ndim for b in blocks]) + + if 2 in ndim: + + new_axes = list(self.axes) + + # multiple blocks that are reduced + if len(blocks) > 1: + new_axes[1] = axes[0] + + # reset the placement to the original + for b, sb in zip(blocks, self.blocks): + b.mgr_locs = sb.mgr_locs + + else: + new_axes[axis] = Index(np.concatenate( + [ax.values for ax in axes])) + + if transposed: + new_axes = new_axes[::-1] + blocks = [b.make_block(b.values.T, + placement=np.arange(b.shape[1]) + ) for b in blocks] + + return self.__class__(blocks, new_axes) + + # 0 ndim + if 0 in ndim and 1 not in ndim: + values = np.array([b.values for b in blocks]) + if len(values) == 1: + return values.item() + blocks = [make_block(values, ndim=1)] + axes = Index([ax[0] for ax in axes]) + + # single block + values = _concat._concat_compat([b.values for b in blocks]) + + # compute the orderings of our original data + if len(self.blocks) > 1: + + indexer = np.empty(len(self.axes[0]), dtype='int64') + i = 0 + for b in self.blocks: + for j in b.mgr_locs: + indexer[j] = i + i = i + 1 + + values = values.take(indexer) + + return SingleBlockManager( + [make_block(values, + ndim=1, + placement=np.arange(len(values)))], + axes[0]) + def isnull(self, **kwargs): return self.apply('apply', **kwargs) @@ -2911,7 +3096,7 @@ def eval(self, **kwargs): return self.apply('eval', **kwargs) def quantile(self, **kwargs): - return self.apply('quantile', raw=True, **kwargs) + return self.reduction('quantile', **kwargs) def setitem(self, **kwargs): return self.apply('setitem', 
**kwargs) @@ -3068,7 +3253,6 @@ def combine(self, blocks, copy=True): indexer = np.sort(np.concatenate([b.mgr_locs.as_array for b in blocks])) inv_indexer = lib.get_reverse_indexer(indexer, self.shape[0]) - new_items = self.items.take(indexer) new_blocks = [] for b in blocks: @@ -3077,9 +3261,10 @@ def combine(self, blocks, copy=True): axis=0, allow_fill=False) new_blocks.append(b) - new_axes = list(self.axes) - new_axes[0] = new_items - return self.__class__(new_blocks, new_axes, do_integrity_check=False) + axes = list(self.axes) + axes[0] = self.items.take(indexer) + + return self.__class__(new_blocks, axes, do_integrity_check=False) def get_slice(self, slobj, axis=0): if axis >= self.ndim: @@ -3829,6 +4014,16 @@ def _block(self): def _values(self): return self._block.values + @property + def _blknos(self): + """ compat with BlockManager """ + return None + + @property + def _blklocs(self): + """ compat with BlockManager """ + return None + def reindex(self, new_axis, indexer=None, method=None, fill_value=None, limit=None, copy=True): # if we are the same and don't copy, just return @@ -4317,7 +4512,7 @@ def _extend_blocks(result, blocks=None): def _block_shape(values, ndim=1, shape=None): """ guarantee the shape of the values to be at least 1 d """ - if values.ndim <= ndim: + if values.ndim < ndim: if shape is None: shape = values.shape values = values.reshape(tuple((1, ) + shape)) diff --git a/pandas/core/ops.py b/pandas/core/ops.py index 63fea71895da2..f27a83f50e115 100644 --- a/pandas/core/ops.py +++ b/pandas/core/ops.py @@ -19,6 +19,7 @@ from pandas.tslib import iNaT from pandas.compat import bind_method import pandas.core.missing as missing +import pandas.algos as _algos import pandas.core.algorithms as algos from pandas.core.common import (is_list_like, notnull, isnull, _values_from_object, _maybe_match_name, @@ -421,7 +422,7 @@ def _convert_to_array(self, values, name=None, other=None): values = tslib.array_to_datetime(values) elif inferred_type in 
('timedelta', 'timedelta64'): # have a timedelta, convert to ns here - values = to_timedelta(values, errors='coerce') + values = to_timedelta(values, errors='coerce', box=False) elif inferred_type == 'integer': # py3 compat where dtype is 'm' but is an integer if values.dtype.kind == 'm': @@ -503,9 +504,9 @@ def _offset(lvalues, rvalues): # convert Tick DateOffset to underlying delta if self.is_offset_lhs: - lvalues = to_timedelta(lvalues) + lvalues = to_timedelta(lvalues, box=False) if self.is_offset_rhs: - rvalues = to_timedelta(rvalues) + rvalues = to_timedelta(rvalues, box=False) lvalues = lvalues.astype(np.int64) if not self.is_floating_rhs: @@ -600,6 +601,21 @@ def na_op(x, y): result = missing.fill_zeros(result, x, y, name, fill_zeros) return result + def safe_na_op(lvalues, rvalues): + try: + return na_op(lvalues, rvalues) + except Exception: + if isinstance(rvalues, ABCSeries): + if is_object_dtype(rvalues): + # if dtype is object, try elementwise op + return _algos.arrmap_object(rvalues, + lambda x: op(lvalues, x)) + else: + if is_object_dtype(lvalues): + return _algos.arrmap_object(lvalues, + lambda x: op(x, rvalues)) + raise + def wrapper(left, right, name=name, na_op=na_op): if isinstance(right, pd.DataFrame): @@ -638,9 +654,8 @@ def wrapper(left, right, name=name, na_op=na_op): if ridx is not None: rvalues = algos.take_1d(rvalues, ridx) - arr = na_op(lvalues, rvalues) - - return left._constructor(wrap_results(arr), index=index, + result = wrap_results(safe_na_op(lvalues, rvalues)) + return left._constructor(result, index=index, name=name, dtype=dtype) else: # scalars @@ -648,7 +663,8 @@ def wrapper(left, right, name=name, na_op=na_op): not isinstance(lvalues, pd.DatetimeIndex)): lvalues = lvalues.values - return left._constructor(wrap_results(na_op(lvalues, rvalues)), + result = wrap_results(safe_na_op(lvalues, rvalues)) + return left._constructor(result, index=left.index, name=left.name, dtype=dtype) @@ -738,7 +754,10 @@ def wrapper(self, other,
axis=None): elif isinstance(other, pd.DataFrame): # pragma: no cover return NotImplemented elif isinstance(other, (np.ndarray, pd.Index)): - if len(self) != len(other): + # do not check length of zerodim array + # as it will broadcast + if (not lib.isscalar(lib.item_from_zerodim(other)) and + len(self) != len(other)): raise ValueError('Lengths must match to compare') return self._constructor(na_op(self.values, np.asarray(other)), index=self.index).__finalize__(self) diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py index 7e0c094aec4c2..8d237016d1b33 100644 --- a/pandas/core/reshape.py +++ b/pandas/core/reshape.py @@ -162,9 +162,12 @@ def get_result(self): # may need to coerce categoricals here if self.is_categorical is not None: - values = [Categorical.from_array( - values[:, i], categories=self.is_categorical.categories, - ordered=True) for i in range(values.shape[-1])] + categories = self.is_categorical.categories + ordered = self.is_categorical.ordered + values = [Categorical.from_array(values[:, i], + categories=categories, + ordered=ordered) + for i in range(values.shape[-1])] return DataFrame(values, index=index, columns=columns) diff --git a/pandas/core/series.py b/pandas/core/series.py index 58e983ad904ba..43b4ba3a51212 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -57,8 +57,6 @@ from pandas.core.config import get_option -from pandas import _np_version_under1p9 - __all__ = ['Series'] _shared_doc_kwargs = dict( @@ -1349,21 +1347,12 @@ def quantile(self, q=0.5, interpolation='linear'): self._check_percentile(q) - if _np_version_under1p9: - if interpolation != 'linear': - raise ValueError("Interpolation methods other than linear " - "are not supported in numpy < 1.9.") - - kwargs = dict() - if not _np_version_under1p9: - kwargs.update({'interpolation': interpolation}) + result = self._data.quantile(qs=q, interpolation=interpolation) - result = self._data.quantile(qs=q, **kwargs) - - if com.is_list_like(result): - # explicitly use 
Float64Index to coerce empty result to float dtype - index = Float64Index(q) - return self._constructor(result, index=index, name=self.name) + if com.is_list_like(q): + return self._constructor(result, + index=Float64Index(q), + name=self.name) else: # scalar return result diff --git a/pandas/core/strings.py b/pandas/core/strings.py index 524c0205d7f73..5b1b8bd05af42 100644 --- a/pandas/core/strings.py +++ b/pandas/core/strings.py @@ -8,6 +8,7 @@ from pandas.core.algorithms import take_1d import pandas.compat as compat from pandas.core.base import AccessorProperty, NoNewAttributesMixin +from pandas.types import api as gt from pandas.util.decorators import Appender, deprecate_kwarg import re import pandas.lib as lib @@ -148,12 +149,10 @@ def _na_map(f, arr, na_result=np.nan, dtype=object): def _map(f, arr, na_mask=False, na_value=np.nan, dtype=object): - from pandas.core.series import Series - if not len(arr): return np.ndarray(0, dtype=dtype) - if isinstance(arr, Series): + if isinstance(arr, gt.ABCSeries): arr = arr.values if not isinstance(arr, np.ndarray): arr = np.asarray(arr, dtype=object) @@ -687,33 +686,42 @@ def str_extractall(arr, pat, flags=0): C 0 NaN 1 """ - from pandas import DataFrame, MultiIndex + regex = re.compile(pat, flags=flags) # the regex must contain capture groups. 
if regex.groups == 0: raise ValueError("pattern contains no capture groups") + + if isinstance(arr, gt.ABCIndex): + arr = arr.to_series().reset_index(drop=True) + names = dict(zip(regex.groupindex.values(), regex.groupindex.keys())) columns = [names.get(1 + i, i) for i in range(regex.groups)] match_list = [] index_list = [] + is_mi = arr.index.nlevels > 1 + for subject_key, subject in arr.iteritems(): if isinstance(subject, compat.string_types): - try: - key_list = list(subject_key) - except TypeError: - key_list = [subject_key] + + if not is_mi: + subject_key = (subject_key, ) + for match_i, match_tuple in enumerate(regex.findall(subject)): - na_tuple = [ - np.NaN if group == "" else group for group in match_tuple] + na_tuple = [np.NaN if group == "" else group + for group in match_tuple] match_list.append(na_tuple) - result_key = tuple(key_list + [match_i]) + result_key = tuple(subject_key + (match_i, )) index_list.append(result_key) + if 0 < len(index_list): + from pandas import MultiIndex index = MultiIndex.from_tuples( index_list, names=arr.index.names + ["match"]) else: index = None - result = DataFrame(match_list, index, columns) + result = arr._constructor_expanddim(match_list, index=index, + columns=columns) return result @@ -1804,9 +1812,9 @@ class StringAccessorMixin(object): # string methods def _make_str_accessor(self): - from pandas.core.series import Series from pandas.core.index import Index - if (isinstance(self, Series) and + + if (isinstance(self, gt.ABCSeries) and not ((is_categorical_dtype(self.dtype) and is_object_dtype(self.values.categories)) or (is_object_dtype(self.dtype)))): @@ -1819,6 +1827,8 @@ def _make_str_accessor(self): "values, which use np.object_ dtype in " "pandas") elif isinstance(self, Index): + # can't use ABCIndex to exclude non-str + # see src/inference.pyx which can contain string values allowed_types = ('string', 'unicode', 'mixed', 'mixed-integer') if self.inferred_type not in allowed_types: diff --git
a/pandas/core/window.py b/pandas/core/window.py index b1be66bee9bc8..cd66d4e30c351 100644 --- a/pandas/core/window.py +++ b/pandas/core/window.py @@ -18,6 +18,7 @@ import pandas.core.common as com import pandas.algos as algos from pandas import compat +from pandas.compat.numpy import function as nv from pandas.util.decorators import Substitution, Appender from textwrap import dedent @@ -435,13 +436,15 @@ def aggregate(self, arg, *args, **kwargs): @Substitution(name='window') @Appender(_doc_template) @Appender(_shared_docs['sum']) - def sum(self, **kwargs): + def sum(self, *args, **kwargs): + nv.validate_window_func('sum', args, kwargs) return self._apply_window(mean=False, **kwargs) @Substitution(name='window') @Appender(_doc_template) @Appender(_shared_docs['mean']) - def mean(self, **kwargs): + def mean(self, *args, **kwargs): + nv.validate_window_func('mean', args, kwargs) return self._apply_window(mean=True, **kwargs) @@ -620,7 +623,8 @@ def f(arg, window, min_periods): return self._apply(f, func, args=args, kwargs=kwargs, center=False) - def sum(self, **kwargs): + def sum(self, *args, **kwargs): + nv.validate_window_func('sum', args, kwargs) return self._apply('roll_sum', 'sum', **kwargs) _shared_docs['max'] = dedent(""" @@ -631,7 +635,8 @@ def sum(self, **kwargs): how : string, default 'max' (DEPRECATED) Method for down- or re-sampling""") - def max(self, how=None, **kwargs): + def max(self, how=None, *args, **kwargs): + nv.validate_window_func('max', args, kwargs) if self.freq is not None and how is None: how = 'max' return self._apply('roll_max', 'max', how=how, **kwargs) @@ -644,12 +649,14 @@ def max(self, how=None, **kwargs): how : string, default 'min' (DEPRECATED) Method for down- or re-sampling""") - def min(self, how=None, **kwargs): + def min(self, how=None, *args, **kwargs): + nv.validate_window_func('min', args, kwargs) if self.freq is not None and how is None: how = 'min' return self._apply('roll_min', 'min', how=how, **kwargs) - def mean(self, 
**kwargs): + def mean(self, *args, **kwargs): + nv.validate_window_func('mean', args, kwargs) return self._apply('roll_mean', 'mean', **kwargs) _shared_docs['median'] = dedent(""" @@ -674,7 +681,8 @@ def median(self, how=None, **kwargs): Delta Degrees of Freedom. The divisor used in calculations is ``N - ddof``, where ``N`` represents the number of elements.""") - def std(self, ddof=1, **kwargs): + def std(self, ddof=1, *args, **kwargs): + nv.validate_window_func('std', args, kwargs) window = self._get_window() def f(arg, *args, **kwargs): @@ -693,7 +701,8 @@ def f(arg, *args, **kwargs): Delta Degrees of Freedom. The divisor used in calculations is ``N - ddof``, where ``N`` represents the number of elements.""") - def var(self, ddof=1, **kwargs): + def var(self, ddof=1, *args, **kwargs): + nv.validate_window_func('var', args, kwargs) return self._apply('roll_var', 'var', check_minp=_require_min_periods(1), ddof=ddof, **kwargs) @@ -865,26 +874,30 @@ def apply(self, func, args=(), kwargs={}): @Substitution(name='rolling') @Appender(_doc_template) @Appender(_shared_docs['sum']) - def sum(self, **kwargs): - return super(Rolling, self).sum(**kwargs) + def sum(self, *args, **kwargs): + nv.validate_rolling_func('sum', args, kwargs) + return super(Rolling, self).sum(*args, **kwargs) @Substitution(name='rolling') @Appender(_doc_template) @Appender(_shared_docs['max']) - def max(self, **kwargs): - return super(Rolling, self).max(**kwargs) + def max(self, *args, **kwargs): + nv.validate_rolling_func('max', args, kwargs) + return super(Rolling, self).max(*args, **kwargs) @Substitution(name='rolling') @Appender(_doc_template) @Appender(_shared_docs['min']) - def min(self, **kwargs): - return super(Rolling, self).min(**kwargs) + def min(self, *args, **kwargs): + nv.validate_rolling_func('min', args, kwargs) + return super(Rolling, self).min(*args, **kwargs) @Substitution(name='rolling') @Appender(_doc_template) @Appender(_shared_docs['mean']) - def mean(self, **kwargs): - return 
super(Rolling, self).mean(**kwargs) + def mean(self, *args, **kwargs): + nv.validate_rolling_func('mean', args, kwargs) + return super(Rolling, self).mean(*args, **kwargs) @Substitution(name='rolling') @Appender(_doc_template) @@ -895,13 +908,15 @@ def median(self, **kwargs): @Substitution(name='rolling') @Appender(_doc_template) @Appender(_shared_docs['std']) - def std(self, ddof=1, **kwargs): + def std(self, ddof=1, *args, **kwargs): + nv.validate_rolling_func('std', args, kwargs) return super(Rolling, self).std(ddof=ddof, **kwargs) @Substitution(name='rolling') @Appender(_doc_template) @Appender(_shared_docs['var']) - def var(self, ddof=1, **kwargs): + def var(self, ddof=1, *args, **kwargs): + nv.validate_rolling_func('var', args, kwargs) return super(Rolling, self).var(ddof=ddof, **kwargs) @Substitution(name='rolling') @@ -985,10 +1000,8 @@ class Expanding(_Rolling_and_Expanding): def __init__(self, obj, min_periods=1, freq=None, center=False, axis=0, **kwargs): - return super(Expanding, self).__init__(obj=obj, - min_periods=min_periods, - freq=freq, center=center, - axis=axis) + super(Expanding, self).__init__(obj=obj, min_periods=min_periods, + freq=freq, center=center, axis=axis) @property def _constructor(self): @@ -1025,26 +1038,30 @@ def apply(self, func, args=(), kwargs={}): @Substitution(name='expanding') @Appender(_doc_template) @Appender(_shared_docs['sum']) - def sum(self, **kwargs): - return super(Expanding, self).sum(**kwargs) + def sum(self, *args, **kwargs): + nv.validate_expanding_func('sum', args, kwargs) + return super(Expanding, self).sum(*args, **kwargs) @Substitution(name='expanding') @Appender(_doc_template) @Appender(_shared_docs['max']) - def max(self, **kwargs): - return super(Expanding, self).max(**kwargs) + def max(self, *args, **kwargs): + nv.validate_expanding_func('max', args, kwargs) + return super(Expanding, self).max(*args, **kwargs) @Substitution(name='expanding') @Appender(_doc_template) @Appender(_shared_docs['min']) - def 
min(self, **kwargs): - return super(Expanding, self).min(**kwargs) + def min(self, *args, **kwargs): + nv.validate_expanding_func('min', args, kwargs) + return super(Expanding, self).min(*args, **kwargs) @Substitution(name='expanding') @Appender(_doc_template) @Appender(_shared_docs['mean']) - def mean(self, **kwargs): - return super(Expanding, self).mean(**kwargs) + def mean(self, *args, **kwargs): + nv.validate_expanding_func('mean', args, kwargs) + return super(Expanding, self).mean(*args, **kwargs) @Substitution(name='expanding') @Appender(_doc_template) @@ -1055,13 +1072,15 @@ def median(self, **kwargs): @Substitution(name='expanding') @Appender(_doc_template) @Appender(_shared_docs['std']) - def std(self, ddof=1, **kwargs): + def std(self, ddof=1, *args, **kwargs): + nv.validate_expanding_func('std', args, kwargs) return super(Expanding, self).std(ddof=ddof, **kwargs) @Substitution(name='expanding') @Appender(_doc_template) @Appender(_shared_docs['var']) - def var(self, ddof=1, **kwargs): + def var(self, ddof=1, *args, **kwargs): + nv.validate_expanding_func('var', args, kwargs) return super(Expanding, self).var(ddof=ddof, **kwargs) @Substitution(name='expanding') @@ -1275,15 +1294,17 @@ def func(arg): @Substitution(name='ewm') @Appender(_doc_template) - def mean(self, **kwargs): + def mean(self, *args, **kwargs): """exponential weighted moving average""" + nv.validate_window_func('mean', args, kwargs) return self._apply('ewma', **kwargs) @Substitution(name='ewm') @Appender(_doc_template) @Appender(_bias_template) - def std(self, bias=False, **kwargs): + def std(self, bias=False, *args, **kwargs): """exponential weighted moving stddev""" + nv.validate_window_func('std', args, kwargs) return _zsqrt(self.var(bias=bias, **kwargs)) vol = std @@ -1291,8 +1312,9 @@ def std(self, bias=False, **kwargs): @Substitution(name='ewm') @Appender(_doc_template) @Appender(_bias_template) - def var(self, bias=False, **kwargs): + def var(self, bias=False, *args, **kwargs): 
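The `nv.validate_*_func` calls threaded through the hunks above all follow one pattern: the window methods now accept `*args`/`**kwargs` purely so numpy can dispatch to them (e.g. `np.mean(rolling_obj)`), and the validator rejects anything other than default values rather than silently ignoring it. The sketch below illustrates that idea only; it is not the actual `pandas.compat.numpy` (`nv`) implementation, and the `defaults` dict is an assumption for illustration.

```python
class UnsupportedFunctionCall(ValueError):
    """Raised when numpy-compat arguments carry non-default values."""


def validate_window_func(fname, args, kwargs, defaults=None):
    # Window ops take *args/**kwargs only for numpy compatibility; any
    # real value passed through them is an error, not a silent no-op.
    defaults = defaults if defaults is not None else {'dtype': None, 'out': None}
    msg = ("numpy operations are not valid with window objects. "
           "Use .%s() directly instead" % fname)
    if len(args) > 0:
        raise UnsupportedFunctionCall(msg)
    for key in kwargs:
        if key not in defaults or kwargs[key] != defaults[key]:
            raise UnsupportedFunctionCall(msg)
```

So `validate_window_func('mean', (), {'dtype': None})` passes (numpy supplied only defaults), while any positional argument or a non-default `dtype` raises.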
"""exponential weighted moving variance""" + nv.validate_window_func('var', args, kwargs) def f(arg): return algos.ewmcov(arg, arg, self.com, int(self.adjust), diff --git a/pandas/formats/format.py b/pandas/formats/format.py index c3ffc018d1031..27d8b553013b9 100644 --- a/pandas/formats/format.py +++ b/pandas/formats/format.py @@ -6,7 +6,7 @@ import sys from pandas.core.base import PandasObject -from pandas.core.common import isnull, notnull +from pandas.core.common import isnull, notnull, is_numeric_dtype from pandas.core.index import Index, MultiIndex, _ensure_index from pandas import compat from pandas.compat import (StringIO, lzip, range, map, zip, reduce, u, @@ -2260,6 +2260,68 @@ def _format_strings(self): return fmt_values +def format_percentiles(percentiles): + """ + Outputs rounded and formatted percentiles. + + Parameters + ---------- + percentiles : list-like, containing floats from interval [0,1] + + Returns + ------- + formatted : list of strings + + Notes + ----- + Rounding precision is chosen so that: (1) if any two elements of + ``percentiles`` differ, they remain different after rounding + (2) no entry is *rounded* to 0% or 100%. + Any non-integer is always rounded to at least 1 decimal place. + + Examples + -------- + Keeps all entries different after rounding: + + >>> format_percentiles([0.01999, 0.02001, 0.5, 0.666666, 0.9999]) + ['1.999%', '2.001%', '50%', '66.667%', '99.99%'] + + No element is rounded to 0% or 100% (unless already equal to it). 
+ Duplicates are allowed: + + >>> format_percentiles([0, 0.5, 0.02001, 0.5, 0.666666, 0.9999]) + ['0%', '50%', '2.0%', '50%', '66.67%', '99.99%'] + """ + + percentiles = np.asarray(percentiles) + + # It checks for np.NaN as well + if not is_numeric_dtype(percentiles) or not np.all(percentiles >= 0) \ + or not np.all(percentiles <= 1): + raise ValueError("percentiles should all be in the interval [0,1]") + + percentiles = 100 * percentiles + int_idx = (percentiles.astype(int) == percentiles) + + if np.all(int_idx): + out = percentiles.astype(int).astype(str) + return [i + '%' for i in out] + + unique_pcts = np.unique(percentiles) + to_begin = unique_pcts[0] if unique_pcts[0] > 0 else None + to_end = 100 - unique_pcts[-1] if unique_pcts[-1] < 100 else None + + # Least precision that keeps percentiles unique after rounding + prec = -np.floor(np.log10(np.min( + np.ediff1d(unique_pcts, to_begin=to_begin, to_end=to_end) + ))).astype(int) + prec = max(1, prec) + out = np.empty_like(percentiles, dtype=object) + out[int_idx] = percentiles[int_idx].astype(int).astype(str) + out[~int_idx] = percentiles[~int_idx].round(prec).astype(str) + return [i + '%' for i in out] + + def _is_dates_only(values): # return a boolean if we are only dates (and don't have a timezone) values = DatetimeIndex(values) @@ -2590,6 +2652,9 @@ def __call__(self, num): import math dnum = decimal.Decimal(str(num)) + if decimal.Decimal.is_nan(dnum): + return 'NaN' + sign = 1 if dnum < 0: # pragma: no cover diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py index dc178c1178c74..82f16becbd511 100644 --- a/pandas/indexes/base.py +++ b/pandas/indexes/base.py @@ -465,6 +465,24 @@ def repeat(self, n, *args, **kwargs): nv.validate_repeat(args, kwargs) return self._shallow_copy(self._values.repeat(n)) + def where(self, cond, other=None): + """ + .. 
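The `format_percentiles` helper added above can be exercised as a standalone sketch; the version below mirrors the diff's logic (precision chosen from the smallest gap between distinct percentiles, so rounded entries stay distinct and nothing rounds to 0% or 100%) but is not the real `pandas.formats.format` code and omits its `is_numeric_dtype` check in favor of a plain float cast.

```python
import numpy as np


def format_percentiles(percentiles):
    # Standalone sketch of the format_percentiles helper from the hunk above.
    percentiles = np.asarray(percentiles, dtype=float)
    if not np.all((percentiles >= 0) & (percentiles <= 1)):
        raise ValueError("percentiles should all be in the interval [0,1]")

    percentiles = 100 * percentiles
    int_idx = percentiles.astype(int) == percentiles
    if np.all(int_idx):
        # All whole percentages: no decimal places needed.
        return [s + '%' for s in percentiles.astype(int).astype(str)]

    unique_pcts = np.unique(percentiles)
    to_begin = unique_pcts[0] if unique_pcts[0] > 0 else None
    to_end = 100 - unique_pcts[-1] if unique_pcts[-1] < 100 else None
    # Smallest gap, including the distance to 0% and 100%, fixes precision.
    gaps = np.ediff1d(unique_pcts, to_begin=to_begin, to_end=to_end)
    prec = max(1, -int(np.floor(np.log10(np.min(gaps)))))

    out = np.empty_like(percentiles, dtype=object)
    out[int_idx] = percentiles[int_idx].astype(int).astype(str)
    out[~int_idx] = percentiles[~int_idx].round(prec).astype(str)
    return [s + '%' for s in out]
```

With this sketch, `format_percentiles([0.25, 0.5, 0.75])` gives `['25%', '50%', '75%']`, and the close pair in the docstring example (`0.01999` vs `0.02001`) forces three decimal places so the two stay distinguishable.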
versionadded:: 0.18.2 + + Return an Index of same shape as self and whose corresponding + entries are from self where cond is True and otherwise are from + other. + + Parameters + ---------- + cond : boolean same length as self + other : scalar, or array-like + """ + if other is None: + other = self._na_value + values = np.where(cond, self.values, other) + return self._shallow_copy_with_infer(values, dtype=self.dtype) + def ravel(self, order='C'): """ return an ndarray of the flattened values of the underlying data @@ -754,8 +772,28 @@ def _to_embed(self, keep_tz=False): """ return self.values.copy() - def astype(self, dtype): - return Index(self.values.astype(dtype), name=self.name, dtype=dtype) + _index_shared_docs['astype'] = """ + Create an Index with values cast to dtypes. The class of a new Index + is determined by dtype. When conversion is impossible, a ValueError + exception is raised. + + Parameters + ---------- + dtype : numpy dtype or pandas type + copy : bool, default True + By default, astype always returns a newly allocated object. + If copy is set to False and internal requirements on dtype are + satisfied, the original data is used to create a new Index + or the original Index is returned. + + .. versionadded:: 0.18.2 + + """ + + @Appender(_index_shared_docs['astype']) + def astype(self, dtype, copy=True): + return Index(self.values.astype(dtype, copy=copy), name=self.name, + dtype=dtype) def _to_safe_for_reshape(self): """ convert to object if we are a categorical """ diff --git a/pandas/indexes/category.py b/pandas/indexes/category.py index 8f343c5de5fb6..e877e43bcc603 100644 --- a/pandas/indexes/category.py +++ b/pandas/indexes/category.py @@ -307,6 +307,29 @@ def _can_reindex(self, indexer): """ always allow reindexing """ pass + def where(self, cond, other=None): + """ + .. versionadded:: 0.18.2 + + Return an Index of same shape as self and whose corresponding + entries are from self where cond is True and otherwise are from + other. 
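The new `Index.where` in the hunks above follows `np.where` semantics: keep the index's own value where `cond` is True, and fall back to `other` (defaulting to the index's NA value) elsewhere. A minimal numpy-only sketch of that behavior, with `np.nan` standing in for `self._na_value`:

```python
import numpy as np

values = np.array([10.0, 20.0, 30.0])   # stand-in for self.values
cond = np.array([True, False, True])
other = np.nan                          # stand-in for self._na_value

# Core of Index.where: elementwise select between self and other.
result = np.where(cond, values, other)
# result -> array([10., nan, 30.])
```

The real method then wraps `result` back into an Index via `_shallow_copy_with_infer`, and `MultiIndex.where` (later in the diff) raises `NotImplementedError` instead.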
+ + Parameters + ---------- + cond : boolean same length as self + other : scalar, or array-like + """ + if other is None: + other = self._na_value + values = np.where(cond, self.values, other) + + from pandas.core.categorical import Categorical + cat = Categorical(values, + categories=self.categories, + ordered=self.ordered) + return self._shallow_copy(cat, **self._get_attributes_dict()) + def reindex(self, target, method=None, level=None, limit=None, tolerance=None): """ diff --git a/pandas/indexes/multi.py b/pandas/indexes/multi.py index 3effc9b1315e6..05b2045a4850f 100644 --- a/pandas/indexes/multi.py +++ b/pandas/indexes/multi.py @@ -592,7 +592,6 @@ def fillna(self, value=None, downcast=None): def get_value(self, series, key): # somewhat broken encapsulation from pandas.core.indexing import maybe_droplevels - from pandas.core.series import Series # Label-based s = _values_from_object(series) @@ -604,7 +603,8 @@ def _try_mi(k): new_values = series._values[loc] new_index = self[loc] new_index = maybe_droplevels(new_index, k) - return Series(new_values, index=new_index, name=series.name) + return series._constructor(new_values, index=new_index, + name=series.name).__finalize__(self) try: return self._engine.get_value(s, k) @@ -1084,6 +1084,10 @@ def repeat(self, n, *args, **kwargs): for label in self.labels], names=self.names, sortorder=self.sortorder, verify_integrity=False) + def where(self, cond, other=None): + raise NotImplementedError(".where is not supported for " + "MultiIndex operations") + def drop(self, labels, level=None, errors='raise'): """ Make new MultiIndex with passed list of labels deleted @@ -1761,7 +1765,8 @@ def convert_indexer(start, stop, step, indexer=indexer, labels=labels): else: m = np.zeros(len(labels), dtype=bool) - m[np.in1d(labels, r, assume_unique=True)] = True + m[np.in1d(labels, r, + assume_unique=Index(labels).is_unique)] = True return m @@ -2073,11 +2078,14 @@ def difference(self, other): return 
MultiIndex.from_tuples(difference, sortorder=0, names=result_names) - def astype(self, dtype): + @Appender(_index_shared_docs['astype']) + def astype(self, dtype, copy=True): if not is_object_dtype(np.dtype(dtype)): raise TypeError('Setting %s dtype to anything other than object ' 'is not supported' % self.__class__) - return self._shallow_copy() + elif copy is True: + return self._shallow_copy() + return self def _convert_can_do_setop(self, other): result_names = self.names diff --git a/pandas/indexes/numeric.py b/pandas/indexes/numeric.py index 983ea731b11ac..0deaf4da9b2bb 100644 --- a/pandas/indexes/numeric.py +++ b/pandas/indexes/numeric.py @@ -4,7 +4,7 @@ import pandas.index as _index from pandas import compat -from pandas.indexes.base import Index, InvalidIndexError +from pandas.indexes.base import Index, InvalidIndexError, _index_shared_docs from pandas.util.decorators import Appender, cache_readonly import pandas.core.common as com from pandas.core.common import (is_dtype_equal, isnull, pandas_dtype, @@ -238,12 +238,17 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, def inferred_type(self): return 'floating' - def astype(self, dtype): + @Appender(_index_shared_docs['astype']) + def astype(self, dtype, copy=True): dtype = pandas_dtype(dtype) - if is_float_dtype(dtype) or is_integer_dtype(dtype): - values = self._values.astype(dtype) + if is_float_dtype(dtype): + values = self._values.astype(dtype, copy=copy) + elif is_integer_dtype(dtype): + if self.hasnans: + raise ValueError('cannot convert float NaN to integer') + values = self._values.astype(dtype, copy=copy) elif is_object_dtype(dtype): - values = self._values + values = self._values.astype('object', copy=copy) else: raise TypeError('Setting %s dtype to anything other than ' 'float64 or object is not supported' % diff --git a/pandas/io/common.py b/pandas/io/common.py index dc7c483c1fb68..cf4bba6e97afb 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -104,85 +104,6 @@ def 
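The explicit `hasnans` check added to `Float64Index.astype` above exists because numpy's `astype` does not raise on a NaN-to-integer cast; it produces an arbitrary integer instead. A minimal sketch of the guard (a hypothetical helper, not the pandas method itself):

```python
import numpy as np


def float_to_int(values, dtype):
    # Guard sketch: plain ndarray.astype would silently turn NaN into
    # garbage integers, so check for NaN up front and raise instead.
    values = np.asarray(values, dtype=float)
    if np.isnan(values).any():
        raise ValueError('cannot convert float NaN to integer')
    return values.astype(dtype)
```

`float_to_int([1.0, 2.0], 'int64')` casts cleanly, while any NaN in the input raises the same `ValueError` message the diff introduces.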
__next__(self): BaseIterator.next = lambda self: self.__next__() -try: - from boto.s3 import key - - class BotoFileLikeReader(key.Key): - """boto Key modified to be more file-like - - This modification of the boto Key will read through a supplied - S3 key once, then stop. The unmodified boto Key object will repeatedly - cycle through a file in S3: after reaching the end of the file, - boto will close the file. Then the next call to `read` or `next` will - re-open the file and start reading from the beginning. - - Also adds a `readline` function which will split the returned - values by the `\n` character. - """ - - def __init__(self, *args, **kwargs): - encoding = kwargs.pop("encoding", None) # Python 2 compat - super(BotoFileLikeReader, self).__init__(*args, **kwargs) - # Add a flag to mark the end of the read. - self.finished_read = False - self.buffer = "" - self.lines = [] - if encoding is None and compat.PY3: - encoding = "utf-8" - self.encoding = encoding - self.lines = [] - - def next(self): - return self.readline() - - __next__ = next - - def read(self, *args, **kwargs): - if self.finished_read: - return b'' if compat.PY3 else '' - return super(BotoFileLikeReader, self).read(*args, **kwargs) - - def close(self, *args, **kwargs): - self.finished_read = True - return super(BotoFileLikeReader, self).close(*args, **kwargs) - - def seekable(self): - """Needed for reading by bz2""" - return False - - def readline(self): - """Split the contents of the Key by '\n' characters.""" - if self.lines: - retval = self.lines[0] - self.lines = self.lines[1:] - return retval - if self.finished_read: - if self.buffer: - retval, self.buffer = self.buffer, "" - return retval - else: - raise StopIteration - - if self.encoding: - self.buffer = "{}{}".format( - self.buffer, self.read(8192).decode(self.encoding)) - else: - self.buffer = "{}{}".format(self.buffer, self.read(8192)) - - split_buffer = self.buffer.split("\n") - self.lines.extend(split_buffer[:-1]) - self.buffer = 
split_buffer[-1] - - return self.readline() -except ImportError: - # boto is only needed for reading from S3. - pass -except TypeError: - # boto/boto3 issues - # GH11915 - pass - - def _is_url(url): """Check to see if a URL has a valid protocol. @@ -319,32 +240,10 @@ def get_filepath_or_buffer(filepath_or_buffer, encoding=None, return tuple(to_return) if _is_s3_url(filepath_or_buffer): - try: - import boto - except: - raise ImportError("boto is required to handle s3 files") - # Assuming AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_S3_HOST - # are environment variables - parsed_url = parse_url(filepath_or_buffer) - s3_host = os.environ.get('AWS_S3_HOST', 's3.amazonaws.com') - - try: - conn = boto.connect_s3(host=s3_host) - except boto.exception.NoAuthHandlerFound: - conn = boto.connect_s3(host=s3_host, anon=True) - - b = conn.get_bucket(parsed_url.netloc, validate=False) - if compat.PY2 and (compression == 'gzip' or - (compression == 'infer' and - filepath_or_buffer.endswith(".gz"))): - k = boto.s3.key.Key(b, parsed_url.path) - filepath_or_buffer = BytesIO(k.get_contents_as_string( - encoding=encoding)) - else: - k = BotoFileLikeReader(b, parsed_url.path, encoding=encoding) - k.open('r') # Expose read errors immediately - filepath_or_buffer = k - return filepath_or_buffer, None, compression + from pandas.io.s3 import get_filepath_or_buffer + return get_filepath_or_buffer(filepath_or_buffer, + encoding=encoding, + compression=compression) # It is a pathlib.Path/py.path.local or string filepath_or_buffer = _stringify_path(filepath_or_buffer) diff --git a/pandas/io/html.py b/pandas/io/html.py index e350a40bfa805..48caaa39dd711 100644 --- a/pandas/io/html.py +++ b/pandas/io/html.py @@ -612,7 +612,8 @@ def _expand_elements(body): def _data_to_frame(data, header, index_col, skiprows, - parse_dates, tupleize_cols, thousands): + parse_dates, tupleize_cols, thousands, + decimal): head, body, foot = data if head: @@ -630,7 +631,7 @@ def _data_to_frame(data, header, 
index_col, skiprows, tp = TextParser(body, header=header, index_col=index_col, skiprows=_get_skiprows(skiprows), parse_dates=parse_dates, tupleize_cols=tupleize_cols, - thousands=thousands) + thousands=thousands, decimal=decimal) df = tp.read() return df @@ -716,7 +717,8 @@ def _validate_flavor(flavor): def _parse(flavor, io, match, header, index_col, skiprows, - parse_dates, tupleize_cols, thousands, attrs, encoding): + parse_dates, tupleize_cols, thousands, attrs, encoding, + decimal): flavor = _validate_flavor(flavor) compiled_match = re.compile(match) # you can pass a compiled regex here @@ -744,7 +746,9 @@ def _parse(flavor, io, match, header, index_col, skiprows, skiprows=skiprows, parse_dates=parse_dates, tupleize_cols=tupleize_cols, - thousands=thousands)) + thousands=thousands, + decimal=decimal + )) except EmptyDataError: # empty table continue return ret @@ -752,7 +756,8 @@ def _parse(flavor, io, match, header, index_col, skiprows, def read_html(io, match='.+', flavor=None, header=None, index_col=None, skiprows=None, attrs=None, parse_dates=False, - tupleize_cols=False, thousands=',', encoding=None): + tupleize_cols=False, thousands=',', encoding=None, + decimal='.'): r"""Read HTML tables into a ``list`` of ``DataFrame`` objects. Parameters @@ -828,6 +833,12 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None, underlying parser library (e.g., the parser library will try to use the encoding provided by the document). + decimal : str, default '.' + Character to recognize as decimal point (e.g. use ',' for European + data). + + .. 
versionadded:: 0.18.2 + Returns ------- dfs : list of DataFrames @@ -871,4 +882,5 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None, 'data (you passed a negative value)') _validate_header_arg(header) return _parse(flavor, io, match, header, index_col, skiprows, - parse_dates, tupleize_cols, thousands, attrs, encoding) + parse_dates, tupleize_cols, thousands, attrs, encoding, + decimal) diff --git a/pandas/io/json.py b/pandas/io/json.py index 08bfd8d7796a0..fd97e51208f7e 100644 --- a/pandas/io/json.py +++ b/pandas/io/json.py @@ -614,10 +614,12 @@ def nested_to_record(ds, prefix="", level=0): new_d = copy.deepcopy(d) for k, v in d.items(): # each key gets renamed with prefix + if not isinstance(k, compat.string_types): + k = str(k) if level == 0: - newkey = str(k) + newkey = k else: - newkey = prefix + '.' + str(k) + newkey = prefix + '.' + k # only dicts gets recurse-flattend # only at level>1 do we rename the rest of the keys diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index f4527df56db88..a851a5f48f5e6 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -55,7 +55,7 @@ Alternative argument name for sep. delim_whitespace : boolean, default False Specifies whether or not whitespace (e.g. ``' '`` or ``'\t'``) will be - used as the sep. Equivalent to setting ``sep='\+s'``. If this option + used as the sep. Equivalent to setting ``sep='\s+'``. If this option is set to True, nothing should be passed in for the ``delimiter`` parameter. @@ -73,7 +73,8 @@ rather than the first line of the file. names : array-like, default None List of column names to use. If file contains no header row, then you - should explicitly pass header=None + should explicitly pass header=None. Duplicates in this list are not + allowed unless mangle_dupe_cols=True, which is the default. index_col : int or sequence or False, default None Column to use as the row labels of the DataFrame. If a sequence is given, a MultiIndex is used. 
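The `nested_to_record` change in the `pandas/io/json.py` hunk above stringifies every key once before building the dotted prefix, so non-string keys (e.g. ints) concatenate cleanly. The simplified recursive flattener below illustrates the effect; it is not the real pandas implementation (the `level`/`copy` handling is omitted) and `flatten` is a hypothetical name.

```python
def flatten(d, prefix=''):
    # Simplified sketch of nested_to_record after the fix above.
    out = {}
    for k, v in d.items():
        k = str(k)  # the fix: stringify before building the dotted key
        key = k if not prefix else prefix + '.' + k
        if isinstance(v, dict):
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out
```

For example, `flatten({'a': {'b': 1, 2: 3}, 'd': 4})` yields `{'a.b': 1, 'a.2': 3, 'd': 4}`; before the fix, concatenating `prefix + '.' + k` with an int key would raise a `TypeError`.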
If you have a malformed file with delimiters at the end @@ -91,7 +92,9 @@ prefix : str, default None Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ... mangle_dupe_cols : boolean, default True - Duplicate columns will be specified as 'X.0'...'X.N', rather than 'X'...'X' + Duplicate columns will be specified as 'X.0'...'X.N', rather than + 'X'...'X'. Passing in False will cause data to be overwritten if there + are duplicate names in the columns. dtype : Type name or dict of column -> type, default None Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32} (Unsupported with engine='python'). Use `str` or `object` to preserve and @@ -189,6 +192,10 @@ Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3). Default (None) results in QUOTE_MINIMAL behavior. +doublequote : boolean, default ``True`` + When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate + whether or not to interpret two consecutive quotechar elements INSIDE a + field as a single ``quotechar`` element. escapechar : str (length 1), default None One-character string used to escape delimiter when quoting is QUOTE_NONE. comment : str, default None @@ -217,6 +224,32 @@ warn_bad_lines : boolean, default True If error_bad_lines is False, and warn_bad_lines is True, a warning for each "bad line" will be output. (Only valid with C parser). +low_memory : boolean, default True + Internally process the file in chunks, resulting in lower memory use + while parsing, but possibly mixed type inference. To ensure no mixed + types either set False, or specify the type with the `dtype` parameter. + Note that the entire file is read into a single DataFrame regardless, + use the `chunksize` or `iterator` parameter to return the data in chunks. 
+ (Only valid with C parser) +buffer_lines : int, default None + DEPRECATED: this argument will be removed in a future version because its + value is not respected by the parser + + If low_memory is True, specify the number of rows to be read for each + chunk. (Only valid with C parser) +compact_ints : boolean, default False + DEPRECATED: this argument will be removed in a future version + + If compact_ints is True, then for any column that is of integer dtype, + the parser will attempt to cast it as the smallest integer dtype possible, + either signed or unsigned depending on the specification from the + `use_unsigned` parameter. +use_unsigned : boolean, default False + DEPRECATED: this argument will be removed in a future version + + If integer columns are being compacted (i.e. `compact_ints=True`), specify + whether the column should be compacted to the smallest signed or unsigned + integer dtype. Returns ------- @@ -269,6 +302,26 @@ """ % (_parser_params % (_fwf_widths, '')) +def _validate_nrows(nrows): + """ + Checks whether the 'nrows' parameter for parsing is either + an integer OR float that can SAFELY be cast to an integer + without losing accuracy. Raises a ValueError if that is + not the case. + """ + msg = "'nrows' must be an integer" + + if nrows is not None: + if com.is_float(nrows): + if int(nrows) != nrows: + raise ValueError(msg) + nrows = int(nrows) + elif not com.is_integer(nrows): + raise ValueError(msg) + + return nrows + + def _read(filepath_or_buffer, kwds): "Generic reader of line files." encoding = kwds.get('encoding', None) @@ -308,14 +361,14 @@ def _read(filepath_or_buffer, kwds): # Extract some of the arguments (pass chunksize on). iterator = kwds.get('iterator', False) - nrows = kwds.pop('nrows', None) chunksize = kwds.get('chunksize', None) + nrows = _validate_nrows(kwds.pop('nrows', None)) # Create the parser. 
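The new `_validate_nrows` helper above accepts ints and floats that are exactly integral, and rejects anything lossy. A standalone version of the same logic (using `isinstance` in place of the `pandas.core.common` type checks):

```python
def validate_nrows(nrows):
    # Standalone sketch of _validate_nrows: ints pass through, floats are
    # accepted only when casting to int loses no information.
    msg = "'nrows' must be an integer"
    if nrows is not None:
        if isinstance(nrows, float):
            if int(nrows) != nrows:
                raise ValueError(msg)
            nrows = int(nrows)
        elif not isinstance(nrows, int):
            raise ValueError(msg)
    return nrows
```

So `nrows=5.0` is quietly normalized to `5`, while `nrows=5.2` or `nrows='5'` raises up front instead of failing deep inside the parser.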
parser = TextFileReader(filepath_or_buffer, **kwds) if (nrows is not None) and (chunksize is not None): - raise NotImplementedError("'nrows' and 'chunksize' can not be used" + raise NotImplementedError("'nrows' and 'chunksize' cannot be used" " together yet.") elif nrows is not None: return parser.read(nrows) @@ -348,6 +401,7 @@ def _read(filepath_or_buffer, kwds): 'keep_default_na': True, 'thousands': None, 'comment': None, + 'decimal': b'.', # 'engine': 'c', 'parse_dates': False, @@ -383,7 +437,6 @@ def _read(filepath_or_buffer, kwds): 'error_bad_lines': True, 'warn_bad_lines': True, 'dtype': None, - 'decimal': b'.', 'float_precision': None } @@ -395,18 +448,19 @@ def _read(filepath_or_buffer, kwds): _c_unsupported = set(['skip_footer']) _python_unsupported = set([ 'as_recarray', - 'na_filter', - 'compact_ints', - 'use_unsigned', 'low_memory', 'memory_map', 'buffer_lines', 'error_bad_lines', 'warn_bad_lines', 'dtype', - 'decimal', 'float_precision', ]) +_deprecated_args = set([ + 'buffer_lines', + 'compact_ints', + 'use_unsigned', +]) def _make_parser_function(name, sep=','): @@ -656,7 +710,14 @@ def _get_options_with_defaults(self, engine): options = {} for argname, default in compat.iteritems(_parser_defaults): - options[argname] = kwds.get(argname, default) + value = kwds.get(argname, default) + + # see gh-12935 + if argname == 'mangle_dupe_cols' and not value: + raise ValueError('Setting mangle_dupe_cols=False is ' + 'not supported yet') + else: + options[argname] = value for argname, default in compat.iteritems(_c_parser_defaults): if argname in kwds: @@ -754,6 +815,13 @@ def _clean_options(self, options, engine): _validate_header_arg(options['header']) + for arg in _deprecated_args: + parser_default = _c_parser_defaults[arg] + if result.get(arg, parser_default) != parser_default: + warnings.warn("The '{arg}' argument has been deprecated " + "and will be removed in a future version" + .format(arg=arg), FutureWarning, stacklevel=2) + if index_col is True: 
raise ValueError("The value of index_col couldn't be 'True'") if _is_index_col(index_col): @@ -847,12 +915,13 @@ def _validate_usecols_arg(usecols): or strings (column by name). Raises a ValueError if that is not the case. """ + msg = ("The elements of 'usecols' must " + "either be all strings, all unicode, or all integers") + if usecols is not None: usecols_dtype = lib.infer_dtype(usecols) - if usecols_dtype not in ('integer', 'string'): - raise ValueError(("The elements of 'usecols' " - "must either be all strings " - "or all integers")) + if usecols_dtype not in ('integer', 'string', 'unicode'): + raise ValueError(msg) return usecols @@ -900,6 +969,7 @@ def __init__(self, kwds): self.true_values = kwds.get('true_values') self.false_values = kwds.get('false_values') self.tupleize_cols = kwds.get('tupleize_cols', False) + self.mangle_dupe_cols = kwds.get('mangle_dupe_cols', True) self.infer_datetime_format = kwds.pop('infer_datetime_format', False) self._date_conv = _make_date_converter( @@ -1013,6 +1083,26 @@ def tostr(x): return names, index_names, col_names, passed_names + def _maybe_dedup_names(self, names): + # see gh-7160 and gh-9424: this helps to provide + # immediate alleviation of the duplicate names + # issue and appears to be satisfactory to users, + # but ultimately, not needing to butcher the names + # would be nice! 
+ if self.mangle_dupe_cols: + names = list(names) # so we can index + counts = {} + + for i, col in enumerate(names): + cur_count = counts.get(col, 0) + + if cur_count > 0: + names[i] = '%s.%d' % (col, cur_count) + + counts[col] = cur_count + 1 + + return names + def _maybe_make_multi_index_columns(self, columns, col_names=None): # possibly create a column mi here if (not self.tupleize_cols and len(columns) and @@ -1131,8 +1221,13 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False, result = {} for c, values in compat.iteritems(dct): conv_f = None if converters is None else converters.get(c, None) - col_na_values, col_na_fvalues = _get_na_values(c, na_values, - na_fvalues) + + if self.na_filter: + col_na_values, col_na_fvalues = _get_na_values( + c, na_values, na_fvalues) + else: + col_na_values, col_na_fvalues = set(), set() + coerce_type = True if conv_f is not None: try: @@ -1144,6 +1239,12 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False, cvals, na_count = self._convert_types( values, set(col_na_values) | col_na_fvalues, coerce_type) + + if issubclass(cvals.dtype.type, np.integer) and self.compact_ints: + cvals = lib.downcast_int64( + cvals, _parser.na_values, + self.use_unsigned) + result[c] = cvals if verbose and na_count: print('Filled %d NA values in column %s' % (na_count, str(c))) @@ -1315,10 +1416,11 @@ def read(self, nrows=None): except StopIteration: if self._first_chunk: self._first_chunk = False + names = self._maybe_dedup_names(self.orig_names) index, columns, col_dict = _get_empty_meta( - self.orig_names, self.index_col, - self.index_names, dtype=self.kwds.get('dtype')) + names, self.index_col, self.index_names, + dtype=self.kwds.get('dtype')) if self.usecols is not None: columns = self._filter_usecols(columns) @@ -1362,6 +1464,8 @@ def read(self, nrows=None): if self.usecols is not None: names = self._filter_usecols(names) + names = self._maybe_dedup_names(names) + # rename dict keys data = 
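The mangling scheme in `_maybe_dedup_names` above keeps the first occurrence of a column name and renames later duplicates by occurrence count. Extracted as a standalone sketch (a hypothetical `dedup_names` helper, same logic as the diff):

```python
def dedup_names(names):
    # First occurrence keeps its name; later duplicates become
    # 'name.1', 'name.2', ... in order of appearance.
    names = list(names)  # so we can index/assign
    counts = {}
    for i, col in enumerate(names):
        cur_count = counts.get(col, 0)
        if cur_count > 0:
            names[i] = '%s.%d' % (col, cur_count)
        counts[col] = cur_count + 1
    return names
```

For example, `dedup_names(['a', 'a', 'b', 'a'])` gives `['a', 'a.1', 'b', 'a.2']`. Note the scheme does not check whether a generated name like `'a.1'` already exists in the input; as the diff's comment says, not needing to butcher the names at all "would be nice".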
sorted(data.items()) data = dict((k, v) for k, (i, v) in zip(names, data)) @@ -1374,6 +1478,7 @@ def read(self, nrows=None): # ugh, mutation names = list(self.orig_names) + names = self._maybe_dedup_names(names) if self.usecols is not None: names = self._filter_usecols(names) @@ -1568,12 +1673,13 @@ def __init__(self, f, **kwds): self.skipinitialspace = kwds['skipinitialspace'] self.lineterminator = kwds['lineterminator'] self.quoting = kwds['quoting'] - self.mangle_dupe_cols = kwds.get('mangle_dupe_cols', True) self.usecols = _validate_usecols_arg(kwds['usecols']) self.skip_blank_lines = kwds['skip_blank_lines'] self.names_passed = kwds['names'] or None + self.na_filter = kwds['na_filter'] + self.has_index_names = False if 'has_index_names' in kwds: self.has_index_names = kwds['has_index_names'] @@ -1581,7 +1687,11 @@ def __init__(self, f, **kwds): self.verbose = kwds['verbose'] self.converters = kwds['converters'] + self.compact_ints = kwds['compact_ints'] + self.use_unsigned = kwds['use_unsigned'] self.thousands = kwds['thousands'] + self.decimal = kwds['decimal'] + self.comment = kwds['comment'] self._comment_lines = [] @@ -1639,6 +1749,15 @@ def __init__(self, f, **kwds): else: self._no_thousands_columns = None + if len(self.decimal) != 1: + raise ValueError('Only length-1 decimal markers supported') + + if self.thousands is None: + self.nonnum = re.compile('[^-^0-9^%s]+' % self.decimal) + else: + self.nonnum = re.compile('[^-^0-9^%s^%s]+' % (self.thousands, + self.decimal)) + def _set_no_thousands_columns(self): # Create a set of column ids that are not to be stripped of thousands # operators. 
@@ -1747,8 +1866,8 @@ def read(self, rows=None): columns = list(self.orig_names) if not len(content): # pragma: no cover # DataFrame with the right metadata, even though it's length 0 - return _get_empty_meta(self.orig_names, - self.index_col, + names = self._maybe_dedup_names(self.orig_names) + return _get_empty_meta(names, self.index_col, self.index_names) # handle new style for names in index @@ -1761,7 +1880,8 @@ def read(self, rows=None): alldata = self._rows_to_cols(content) data = self._exclude_implicit_index(alldata) - columns, data = self._do_date_conversions(self.columns, data) + columns = self._maybe_dedup_names(self.columns) + columns, data = self._do_date_conversions(columns, data) data = self._convert_data(data) index, columns = self._make_index(data, alldata, columns, indexnamerow) @@ -1769,18 +1889,19 @@ def read(self, rows=None): return index, columns, data def _exclude_implicit_index(self, alldata): + names = self._maybe_dedup_names(self.orig_names) if self._implicit_index: excl_indices = self.index_col data = {} offset = 0 - for i, col in enumerate(self.orig_names): + for i, col in enumerate(names): while i + offset in excl_indices: offset += 1 data[col] = alldata[i + offset] else: - data = dict((k, v) for k, v in zip(self.orig_names, alldata)) + data = dict((k, v) for k, v in zip(names, alldata)) return data @@ -2050,22 +2171,35 @@ def _check_empty(self, lines): def _check_thousands(self, lines): if self.thousands is None: return lines - nonnum = re.compile('[^-^0-9^%s^.]+' % self.thousands) + + return self._search_replace_num_columns(lines=lines, + search=self.thousands, + replace='') + + def _search_replace_num_columns(self, lines, search, replace): ret = [] for l in lines: rl = [] for i, x in enumerate(l): if (not isinstance(x, compat.string_types) or - self.thousands not in x or + search not in x or (self._no_thousands_columns and i in self._no_thousands_columns) or - nonnum.search(x.strip())): + self.nonnum.search(x.strip())): rl.append(x) 
else: - rl.append(x.replace(self.thousands, '')) + rl.append(x.replace(search, replace)) ret.append(rl) return ret + def _check_decimal(self, lines): + if self.decimal == _parser_defaults['decimal']: + return lines + + return self._search_replace_num_columns(lines=lines, + search=self.decimal, + replace='.') + def _clear_buffer(self): self.buf = [] @@ -2135,14 +2269,16 @@ def _get_index_name(self, columns): return index_name, orig_names, columns def _rows_to_cols(self, content): - zipped_content = list(lib.to_object_array(content).T) - col_len = self.num_original_columns - zip_len = len(zipped_content) if self._implicit_index: col_len += len(self.index_col) + # see gh-13320 + zipped_content = list(lib.to_object_array( + content, min_width=col_len).T) + zip_len = len(zipped_content) + if self.skip_footer < 0: raise ValueError('skip footer cannot be negative') @@ -2249,7 +2385,8 @@ def _get_lines(self, rows=None): lines = self._check_comments(lines) if self.skip_blank_lines: lines = self._check_empty(lines) - return self._check_thousands(lines) + lines = self._check_thousands(lines) + return self._check_decimal(lines) def _make_date_converter(date_parser=None, dayfirst=False, diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index dff2c6f0df7b1..cbe04349b5105 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -13,10 +13,12 @@ import os import numpy as np + import pandas as pd from pandas import (Series, DataFrame, Panel, Panel4D, Index, MultiIndex, Int64Index) from pandas.core import config +from pandas.io.common import _stringify_path from pandas.sparse.api import SparseSeries, SparseDataFrame, SparsePanel from pandas.sparse.array import BlockIndex, IntIndex from pandas.tseries.api import PeriodIndex, DatetimeIndex @@ -254,6 +256,7 @@ def to_hdf(path_or_buf, key, value, mode=None, complevel=None, complib=None, else: f = lambda store: store.put(key, value, **kwargs) + path_or_buf = _stringify_path(path_or_buf) if isinstance(path_or_buf, 
string_types): with HDFStore(path_or_buf, mode=mode, complevel=complevel, complib=complib) as store: @@ -270,7 +273,11 @@ def read_hdf(path_or_buf, key=None, **kwargs): Parameters ---------- - path_or_buf : path (string), or buffer to read from + path_or_buf : path (string), buffer, or path object (pathlib.Path or + py._path.local.LocalPath) to read from + + .. versionadded:: 0.18.2 support for pathlib, py.path. + key : group identifier in the store. Can be omitted if the HDF file contains a single pandas object. where : list of Term (or convertible) objects, optional @@ -293,6 +300,7 @@ def read_hdf(path_or_buf, key=None, **kwargs): if 'where' in kwargs: kwargs['where'] = _ensure_term(kwargs['where'], scope_level=1) + path_or_buf = _stringify_path(path_or_buf) if isinstance(path_or_buf, string_types): try: @@ -316,17 +324,27 @@ def read_hdf(path_or_buf, key=None, **kwargs): store = path_or_buf auto_close = False + else: raise NotImplementedError('Support for generic buffers has not been ' 'implemented.') try: if key is None: - keys = store.keys() - if len(keys) != 1: - raise ValueError('key must be provided when HDF file contains ' - 'multiple datasets.') - key = keys[0] + groups = store.groups() + if len(groups) == 0: + raise ValueError('No dataset in HDF5 file.') + candidate_only_group = groups[0] + + # For the HDF file to have only one dataset, all other groups + # should then be metadata groups for that candidate group. (This + # assumes that the groups() method enumerates parent groups + # before their children.)
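As an aside on the key-inference change above: the parent-walk can be exercised in isolation. The sketch below is a toy model, not pandas code; `Group` is a hypothetical stand-in for a PyTables node, exposing only the `_v_name`/`_v_parent`/`_v_depth` attributes the new `_is_metadata_of` helper touches.

```python
# Toy model of the _is_metadata_of check: a group counts as metadata of a
# candidate if walking up its parents reaches a node named 'meta' whose
# parent is the candidate. `Group` is a hypothetical stand-in for a
# PyTables node, with only the attributes used by the walk.
class Group:
    def __init__(self, name, parent=None):
        self._v_name = name
        self._v_parent = parent
        self._v_depth = 0 if parent is None else parent._v_depth + 1

def is_metadata_of(group, parent_group):
    if group._v_depth <= parent_group._v_depth:
        return False
    current = group
    while current._v_depth > 1:
        parent = current._v_parent
        if parent is parent_group and current._v_name == 'meta':
            return True
        current = current._v_parent
    return False

root = Group('/')
df_group = Group('df', root)                # the candidate-only dataset
meta = Group('meta', df_group)              # its metadata subtree
values = Group('values_block_0', meta)
other = Group('other_df', root)             # a second, unrelated dataset

assert is_metadata_of(values, df_group)     # metadata: key can be inferred
assert not is_metadata_of(other, df_group)  # real second dataset: key required
```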
+ for group_to_check in groups[1:]: + if not _is_metadata_of(group_to_check, candidate_only_group): + raise ValueError('key must be provided when HDF5 file ' + 'contains multiple datasets.') + key = candidate_only_group._v_pathname return store.select(key, auto_close=auto_close, **kwargs) except: # if there is an error, close the store @@ -338,6 +356,20 @@ def read_hdf(path_or_buf, key=None, **kwargs): raise +def _is_metadata_of(group, parent_group): + """Check if a given group is a metadata group for a given parent_group.""" + if group._v_depth <= parent_group._v_depth: + return False + + current = group + while current._v_depth > 1: + parent = current._v_parent + if parent == parent_group and current._v_name == 'meta': + return True + current = current._v_parent + return False + + class HDFStore(StringMixin): """ @@ -1305,12 +1337,20 @@ def __init__(self, store, s, func, where, nrows, start=None, stop=None, self.s = s self.func = func self.where = where - self.nrows = nrows or 0 - self.start = start or 0 - if stop is None: - stop = self.nrows - self.stop = min(self.nrows, stop) + # set start/stop if they are not set if we are a table + if self.s.is_table: + if nrows is None: + nrows = 0 + if start is None: + start = 0 + if stop is None: + stop = nrows + stop = min(nrows, stop) + + self.nrows = nrows + self.start = start + self.stop = stop self.coordinates = None if iterator or chunksize is not None: @@ -2294,14 +2334,23 @@ def f(values, freq=None, tz=None): return klass def validate_read(self, kwargs): - if kwargs.get('columns') is not None: + """ + remove table keywords from kwargs and return + raise if any keywords are passed which are not-None + """ + kwargs = copy.copy(kwargs) + + columns = kwargs.pop('columns', None) + if columns is not None: raise TypeError("cannot pass a column specification when reading " "a Fixed format store. 
this store must be " "selected in its entirety") - if kwargs.get('where') is not None: + where = kwargs.pop('where', None) + if where is not None: raise TypeError("cannot pass a where specification when reading " "from a Fixed format store. this store must be " "selected in its entirety") + return kwargs @property def is_exists(self): @@ -2320,11 +2369,11 @@ def get_attrs(self): def write(self, obj, **kwargs): self.set_attrs() - def read_array(self, key): + def read_array(self, key, start=None, stop=None): """ read an array for the specified node (off of group """ import tables node = getattr(self.group, key) - data = node[:] + data = node[start:stop] attrs = node._v_attrs transposed = getattr(attrs, 'transposed', False) @@ -2354,17 +2403,17 @@ def read_array(self, key): else: return ret - def read_index(self, key): + def read_index(self, key, **kwargs): variety = _ensure_decoded(getattr(self.attrs, '%s_variety' % key)) if variety == u('multi'): - return self.read_multi_index(key) + return self.read_multi_index(key, **kwargs) elif variety == u('block'): - return self.read_block_index(key) + return self.read_block_index(key, **kwargs) elif variety == u('sparseint'): - return self.read_sparse_intindex(key) + return self.read_sparse_intindex(key, **kwargs) elif variety == u('regular'): - _, index = self.read_index_node(getattr(self.group, key)) + _, index = self.read_index_node(getattr(self.group, key), **kwargs) return index else: # pragma: no cover raise TypeError('unrecognized index variety: %s' % variety) @@ -2402,19 +2451,19 @@ def write_block_index(self, key, index): self.write_array('%s_blengths' % key, index.blengths) setattr(self.attrs, '%s_length' % key, index.length) - def read_block_index(self, key): + def read_block_index(self, key, **kwargs): length = getattr(self.attrs, '%s_length' % key) - blocs = self.read_array('%s_blocs' % key) - blengths = self.read_array('%s_blengths' % key) + blocs = self.read_array('%s_blocs' % key, **kwargs) + blengths = 
self.read_array('%s_blengths' % key, **kwargs) return BlockIndex(length, blocs, blengths) def write_sparse_intindex(self, key, index): self.write_array('%s_indices' % key, index.indices) setattr(self.attrs, '%s_length' % key, index.length) - def read_sparse_intindex(self, key): + def read_sparse_intindex(self, key, **kwargs): length = getattr(self.attrs, '%s_length' % key) - indices = self.read_array('%s_indices' % key) + indices = self.read_array('%s_indices' % key, **kwargs) return IntIndex(length, indices) def write_multi_index(self, key, index): @@ -2439,7 +2488,7 @@ def write_multi_index(self, key, index): label_key = '%s_label%d' % (key, i) self.write_array(label_key, lab) - def read_multi_index(self, key): + def read_multi_index(self, key, **kwargs): nlevels = getattr(self.attrs, '%s_nlevels' % key) levels = [] @@ -2447,19 +2496,20 @@ def read_multi_index(self, key): names = [] for i in range(nlevels): level_key = '%s_level%d' % (key, i) - name, lev = self.read_index_node(getattr(self.group, level_key)) + name, lev = self.read_index_node(getattr(self.group, level_key), + **kwargs) levels.append(lev) names.append(name) label_key = '%s_label%d' % (key, i) - lab = self.read_array(label_key) + lab = self.read_array(label_key, **kwargs) labels.append(lab) return MultiIndex(levels=levels, labels=labels, names=names, verify_integrity=True) - def read_index_node(self, node): - data = node[:] + def read_index_node(self, node, start=None, stop=None): + data = node[start:stop] # If the index was an empty array write_array_empty() will # have written a sentinel. Here we replace it with the original.
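A note on why threading `start`/`stop` through these readers is backward compatible: slicing with `None` bounds (`node[None:None]`) is identical to the old full read `node[:]`. A minimal sketch, with a plain list standing in for an HDF5 node:

```python
# Sketch of the patched read_array slicing: None bounds reproduce the old
# full read (node[:]), while explicit bounds give a partial read. A plain
# Python list stands in for the PyTables node here.
def read_slice(node, start=None, stop=None):
    return node[start:stop]

node = [10, 20, 30, 40, 50]
assert read_slice(node) == node              # old behavior: read everything
assert read_slice(node, 1, 3) == [20, 30]    # new: partial read
assert read_slice(node, stop=2) == [10, 20]  # open-ended bounds also work
```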
if ('shape' in node._v_attrs and @@ -2598,9 +2648,9 @@ def write_array(self, key, value, items=None): class LegacyFixed(GenericFixed): - def read_index_legacy(self, key): + def read_index_legacy(self, key, start=None, stop=None): node = getattr(self.group, key) - data = node[:] + data = node[start:stop] kind = node._v_attrs.kind return _unconvert_index_legacy(data, kind, encoding=self.encoding) @@ -2608,7 +2658,7 @@ def read_index_legacy(self, key): class LegacySeriesFixed(LegacyFixed): def read(self, **kwargs): - self.validate_read(kwargs) + kwargs = self.validate_read(kwargs) index = self.read_index_legacy('index') values = self.read_array('values') return Series(values, index=index) @@ -2617,7 +2667,7 @@ def read(self, **kwargs): class LegacyFrameFixed(LegacyFixed): def read(self, **kwargs): - self.validate_read(kwargs) + kwargs = self.validate_read(kwargs) index = self.read_index_legacy('index') columns = self.read_index_legacy('columns') values = self.read_array('values') @@ -2636,9 +2686,9 @@ def shape(self): return None def read(self, **kwargs): - self.validate_read(kwargs) - index = self.read_index('index') - values = self.read_array('values') + kwargs = self.validate_read(kwargs) + index = self.read_index('index', **kwargs) + values = self.read_array('values', **kwargs) return Series(values, index=index, name=self.name) def write(self, obj, **kwargs): @@ -2648,12 +2698,25 @@ def write(self, obj, **kwargs): self.attrs.name = obj.name -class SparseSeriesFixed(GenericFixed): +class SparseFixed(GenericFixed): + + def validate_read(self, kwargs): + """ + we don't support start, stop kwds in Sparse + """ + kwargs = super(SparseFixed, self).validate_read(kwargs) + if 'start' in kwargs or 'stop' in kwargs: + raise NotImplementedError("start and/or stop are not supported " + "in fixed Sparse reading") + return kwargs + + +class SparseSeriesFixed(SparseFixed): pandas_kind = u('sparse_series') attributes = ['name', 'fill_value', 'kind'] def read(self, **kwargs): - 
self.validate_read(kwargs) + kwargs = self.validate_read(kwargs) index = self.read_index('index') sp_values = self.read_array('sp_values') sp_index = self.read_index('sp_index') @@ -2672,12 +2735,12 @@ def write(self, obj, **kwargs): self.attrs.kind = obj.kind -class SparseFrameFixed(GenericFixed): +class SparseFrameFixed(SparseFixed): pandas_kind = u('sparse_frame') attributes = ['default_kind', 'default_fill_value'] def read(self, **kwargs): - self.validate_read(kwargs) + kwargs = self.validate_read(kwargs) columns = self.read_index('columns') sdict = {} for c in columns: @@ -2705,12 +2768,12 @@ def write(self, obj, **kwargs): self.write_index('columns', obj.columns) -class SparsePanelFixed(GenericFixed): +class SparsePanelFixed(SparseFixed): pandas_kind = u('sparse_panel') attributes = ['default_kind', 'default_fill_value'] def read(self, **kwargs): - self.validate_read(kwargs) + kwargs = self.validate_read(kwargs) items = self.read_index('items') sdict = {} @@ -2773,19 +2836,26 @@ def shape(self): except: return None - def read(self, **kwargs): - self.validate_read(kwargs) + def read(self, start=None, stop=None, **kwargs): + # start, stop applied to rows, so 0th axis only + + kwargs = self.validate_read(kwargs) + select_axis = self.obj_type()._get_block_manager_axis(0) axes = [] for i in range(self.ndim): - ax = self.read_index('axis%d' % i) + + _start, _stop = (start, stop) if i == select_axis else (None, None) + ax = self.read_index('axis%d' % i, start=_start, stop=_stop) axes.append(ax) items = axes[0] blocks = [] for i in range(self.nblocks): + blk_items = self.read_index('block%d_items' % i) - values = self.read_array('block%d_values' % i) + values = self.read_array('block%d_values' % i, + start=_start, stop=_stop) blk = make_block(values, placement=items.get_indexer(blk_items)) blocks.append(blk) @@ -3826,24 +3896,24 @@ def write_data(self, chunksize, dropna=False): nrows = self.nrows_expected # if dropna==True, then drop ALL nan rows + masks = [] if 
dropna: - masks = [] for a in self.values_axes: # figure the mask: only do if we can successfully process this # column, otherwise ignore the mask mask = com.isnull(a.data).all(axis=0) - masks.append(mask.astype('u1', copy=False)) + if isinstance(mask, np.ndarray): + masks.append(mask.astype('u1', copy=False)) - # consolidate masks + # consolidate masks + if len(masks): mask = masks[0] for m in masks[1:]: mask = mask & m mask = mask.ravel() - else: - mask = None # broadcast the indexes if needed diff --git a/pandas/io/s3.py b/pandas/io/s3.py new file mode 100644 index 0000000000000..df8f1d9187031 --- /dev/null +++ b/pandas/io/s3.py @@ -0,0 +1,112 @@ +""" s3 support for remote file interactivity """ + +import os +from pandas import compat +from pandas.compat import BytesIO + +try: + import boto + from boto.s3 import key +except: + raise ImportError("boto is required to handle s3 files") + +if compat.PY3: + from urllib.parse import urlparse as parse_url +else: + from urlparse import urlparse as parse_url + + +class BotoFileLikeReader(key.Key): + """boto Key modified to be more file-like + + This modification of the boto Key will read through a supplied + S3 key once, then stop. The unmodified boto Key object will repeatedly + cycle through a file in S3: after reaching the end of the file, + boto will close the file. Then the next call to `read` or `next` will + re-open the file and start reading from the beginning. + + Also adds a `readline` function which will split the returned + values by the `\n` character. + """ + + def __init__(self, *args, **kwargs): + encoding = kwargs.pop("encoding", None) # Python 2 compat + super(BotoFileLikeReader, self).__init__(*args, **kwargs) + # Add a flag to mark the end of the read. 
+ self.finished_read = False + self.buffer = "" + self.lines = [] + if encoding is None and compat.PY3: + encoding = "utf-8" + self.encoding = encoding + self.lines = [] + + def next(self): + return self.readline() + + __next__ = next + + def read(self, *args, **kwargs): + if self.finished_read: + return b'' if compat.PY3 else '' + return super(BotoFileLikeReader, self).read(*args, **kwargs) + + def close(self, *args, **kwargs): + self.finished_read = True + return super(BotoFileLikeReader, self).close(*args, **kwargs) + + def seekable(self): + """Needed for reading by bz2""" + return False + + def readline(self): + """Split the contents of the Key by '\n' characters.""" + if self.lines: + retval = self.lines[0] + self.lines = self.lines[1:] + return retval + if self.finished_read: + if self.buffer: + retval, self.buffer = self.buffer, "" + return retval + else: + raise StopIteration + + if self.encoding: + self.buffer = "{}{}".format( + self.buffer, self.read(8192).decode(self.encoding)) + else: + self.buffer = "{}{}".format(self.buffer, self.read(8192)) + + split_buffer = self.buffer.split("\n") + self.lines.extend(split_buffer[:-1]) + self.buffer = split_buffer[-1] + + return self.readline() + + +def get_filepath_or_buffer(filepath_or_buffer, encoding=None, + compression=None): + + # Assuming AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_S3_HOST + # are environment variables + parsed_url = parse_url(filepath_or_buffer) + s3_host = os.environ.get('AWS_S3_HOST', 's3.amazonaws.com') + + try: + conn = boto.connect_s3(host=s3_host) + except boto.exception.NoAuthHandlerFound: + conn = boto.connect_s3(host=s3_host, anon=True) + + b = conn.get_bucket(parsed_url.netloc, validate=False) + if compat.PY2 and (compression == 'gzip' or + (compression == 'infer' and + filepath_or_buffer.endswith(".gz"))): + k = boto.s3.key.Key(b, parsed_url.path) + filepath_or_buffer = BytesIO(k.get_contents_as_string( + encoding=encoding)) + else: + k = BotoFileLikeReader(b, 
parsed_url.path, encoding=encoding) + k.open('r') # Expose read errors immediately + filepath_or_buffer = k + return filepath_or_buffer, None, compression diff --git a/pandas/io/stata.py b/pandas/io/stata.py index 6c6e11a53d2d3..ae7200cf6fb2e 100644 --- a/pandas/io/stata.py +++ b/pandas/io/stata.py @@ -89,12 +89,14 @@ Examples -------- Read a Stata dta file: ->> df = pandas.read_stata('filename.dta') + +>>> df = pandas.read_stata('filename.dta') Read a Stata dta file in 10,000 line chunks: ->> itr = pandas.read_stata('filename.dta', chunksize=10000) ->> for chunk in itr: ->> do_something(chunk) + +>>> itr = pandas.read_stata('filename.dta', chunksize=10000) +>>> for chunk in itr: +>>> do_something(chunk) """ % (_statafile_processing_params1, _encoding_params, _statafile_processing_params2, _chunksize_params, _iterator_params) diff --git a/pandas/io/tests/json/test_json_norm.py b/pandas/io/tests/json/test_json_norm.py index 81a1fecbdebac..4848db97194d9 100644 --- a/pandas/io/tests/json/test_json_norm.py +++ b/pandas/io/tests/json/test_json_norm.py @@ -2,8 +2,10 @@ from pandas import DataFrame import numpy as np +import json import pandas.util.testing as tm +from pandas import compat from pandas.io.json import json_normalize, nested_to_record @@ -164,6 +166,26 @@ def test_record_prefix(self): tm.assert_frame_equal(result, expected) + def test_non_ascii_key(self): + if compat.PY3: + testjson = ( + b'[{"\xc3\x9cnic\xc3\xb8de":0,"sub":{"A":1, "B":2}},' + + b'{"\xc3\x9cnic\xc3\xb8de":1,"sub":{"A":3, "B":4}}]' + ).decode('utf8') + else: + testjson = ('[{"\xc3\x9cnic\xc3\xb8de":0,"sub":{"A":1, "B":2}},' + '{"\xc3\x9cnic\xc3\xb8de":1,"sub":{"A":3, "B":4}}]') + + testdata = { + u'sub.A': [1, 3], + u'sub.B': [2, 4], + b"\xc3\x9cnic\xc3\xb8de".decode('utf8'): [0, 1] + } + expected = DataFrame(testdata) + + result = json_normalize(json.loads(testjson)) + tm.assert_frame_equal(result, expected) + class TestNestedToRecord(tm.TestCase): diff --git 
a/pandas/io/tests/json/test_pandas.py b/pandas/io/tests/json/test_pandas.py index 6fe559e5cacd8..9f8aedc2e399e 100644 --- a/pandas/io/tests/json/test_pandas.py +++ b/pandas/io/tests/json/test_pandas.py @@ -87,7 +87,7 @@ def test_frame_double_encoded_labels(self): orient='index')) df_unser = read_json(df.to_json(orient='records'), orient='records') assert_index_equal(df.columns, df_unser.columns) - np.testing.assert_equal(df.values, df_unser.values) + tm.assert_numpy_array_equal(df.values, df_unser.values) def test_frame_non_unique_index(self): df = DataFrame([['a', 'b'], ['c', 'd']], index=[1, 1], @@ -99,10 +99,10 @@ def test_frame_non_unique_index(self): assert_frame_equal(df, read_json(df.to_json(orient='split'), orient='split')) unser = read_json(df.to_json(orient='records'), orient='records') - self.assertTrue(df.columns.equals(unser.columns)) - np.testing.assert_equal(df.values, unser.values) + self.assert_index_equal(df.columns, unser.columns) + tm.assert_almost_equal(df.values, unser.values) unser = read_json(df.to_json(orient='values'), orient='values') - np.testing.assert_equal(df.values, unser.values) + tm.assert_numpy_array_equal(df.values, unser.values) def test_frame_non_unique_columns(self): df = DataFrame([['a', 'b'], ['c', 'd']], index=[1, 2], @@ -115,7 +115,7 @@ def test_frame_non_unique_columns(self): assert_frame_equal(df, read_json(df.to_json(orient='split'), orient='split', dtype=False)) unser = read_json(df.to_json(orient='values'), orient='values') - np.testing.assert_equal(df.values, unser.values) + tm.assert_numpy_array_equal(df.values, unser.values) # GH4377; duplicate columns not processing correctly df = DataFrame([['a', 'b'], ['c', 'd']], index=[ @@ -183,7 +183,8 @@ def _check_orient(df, orient, dtype=None, numpy=False, # index is not captured in this orientation assert_almost_equal(df.values, unser.values, check_dtype=check_numpy_dtype) - self.assertTrue(df.columns.equals(unser.columns)) + self.assert_index_equal(df.columns, 
unser.columns, + exact=check_column_type) elif orient == "values": # index and cols are not captured in this orientation if numpy is True and df.shape == (0, 0): @@ -302,12 +303,10 @@ def _check_all_orients(df, dtype=None, convert_axes=True, # mixed data index = pd.Index(['a', 'b', 'c', 'd', 'e']) - data = { - 'A': [0., 1., 2., 3., 4.], - 'B': [0., 1., 0., 1., 0.], - 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], - 'D': [True, False, True, False, True] - } + data = {'A': [0., 1., 2., 3., 4.], + 'B': [0., 1., 0., 1., 0.], + 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], + 'D': [True, False, True, False, True]} df = DataFrame(data=data, index=index) _check_orient(df, "split", check_dtype=False) _check_orient(df, "records", check_dtype=False) @@ -487,7 +486,7 @@ def test_series_non_unique_index(self): orient='split', typ='series')) unser = read_json(s.to_json(orient='records'), orient='records', typ='series') - np.testing.assert_equal(s.values, unser.values) + tm.assert_numpy_array_equal(s.values, unser.values) def test_series_from_json_to_json(self): diff --git a/pandas/io/tests/json/test_ujson.py b/pandas/io/tests/json/test_ujson.py index babcd910a2edd..13b2dafec9c89 100644 --- a/pandas/io/tests/json/test_ujson.py +++ b/pandas/io/tests/json/test_ujson.py @@ -21,8 +21,6 @@ import pandas.compat as compat import numpy as np -from numpy.testing import (assert_array_almost_equal_nulp, - assert_approx_equal) from pandas import DataFrame, Series, Index, NaT, DatetimeIndex import pandas.util.testing as tm @@ -1015,19 +1013,19 @@ def testFloatArray(self): inpt = arr.astype(dtype) outp = np.array(ujson.decode(ujson.encode( inpt, double_precision=15)), dtype=dtype) - assert_array_almost_equal_nulp(inpt, outp) + tm.assert_almost_equal(inpt, outp) def testFloatMax(self): num = np.float(np.finfo(np.float).max / 10) - assert_approx_equal(np.float(ujson.decode( + tm.assert_almost_equal(np.float(ujson.decode( ujson.encode(num, double_precision=15))), num, 15) num = 
np.float32(np.finfo(np.float32).max / 10) - assert_approx_equal(np.float32(ujson.decode( + tm.assert_almost_equal(np.float32(ujson.decode( ujson.encode(num, double_precision=15))), num, 15) num = np.float64(np.finfo(np.float64).max / 10) - assert_approx_equal(np.float64(ujson.decode( + tm.assert_almost_equal(np.float64(ujson.decode( ujson.encode(num, double_precision=15))), num, 15) def testArrays(self): @@ -1067,9 +1065,9 @@ def testArrays(self): arr = np.arange(100.202, 200.202, 1, dtype=np.float32) arr = arr.reshape((5, 5, 4)) outp = np.array(ujson.decode(ujson.encode(arr)), dtype=np.float32) - assert_array_almost_equal_nulp(arr, outp) + tm.assert_almost_equal(arr, outp) outp = ujson.decode(ujson.encode(arr), numpy=True, dtype=np.float32) - assert_array_almost_equal_nulp(arr, outp) + tm.assert_almost_equal(arr, outp) def testOdArray(self): def will_raise(): @@ -1203,19 +1201,19 @@ def testDataFrame(self): # column indexed outp = DataFrame(ujson.decode(ujson.encode(df))) self.assertTrue((df == outp).values.all()) - tm.assert_numpy_array_equal(df.columns, outp.columns) - tm.assert_numpy_array_equal(df.index, outp.index) + tm.assert_index_equal(df.columns, outp.columns) + tm.assert_index_equal(df.index, outp.index) dec = _clean_dict(ujson.decode(ujson.encode(df, orient="split"))) outp = DataFrame(**dec) self.assertTrue((df == outp).values.all()) - tm.assert_numpy_array_equal(df.columns, outp.columns) - tm.assert_numpy_array_equal(df.index, outp.index) + tm.assert_index_equal(df.columns, outp.columns) + tm.assert_index_equal(df.index, outp.index) outp = DataFrame(ujson.decode(ujson.encode(df, orient="records"))) outp.index = df.index self.assertTrue((df == outp).values.all()) - tm.assert_numpy_array_equal(df.columns, outp.columns) + tm.assert_index_equal(df.columns, outp.columns) outp = DataFrame(ujson.decode(ujson.encode(df, orient="values"))) outp.index = df.index @@ -1223,8 +1221,8 @@ def testDataFrame(self): outp = DataFrame(ujson.decode(ujson.encode(df, 
orient="index"))) self.assertTrue((df.transpose() == outp).values.all()) - tm.assert_numpy_array_equal(df.transpose().columns, outp.columns) - tm.assert_numpy_array_equal(df.transpose().index, outp.index) + tm.assert_index_equal(df.transpose().columns, outp.columns) + tm.assert_index_equal(df.transpose().index, outp.index) def testDataFrameNumpy(self): df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[ @@ -1233,21 +1231,21 @@ def testDataFrameNumpy(self): # column indexed outp = DataFrame(ujson.decode(ujson.encode(df), numpy=True)) self.assertTrue((df == outp).values.all()) - tm.assert_numpy_array_equal(df.columns, outp.columns) - tm.assert_numpy_array_equal(df.index, outp.index) + tm.assert_index_equal(df.columns, outp.columns) + tm.assert_index_equal(df.index, outp.index) dec = _clean_dict(ujson.decode(ujson.encode(df, orient="split"), numpy=True)) outp = DataFrame(**dec) self.assertTrue((df == outp).values.all()) - tm.assert_numpy_array_equal(df.columns, outp.columns) - tm.assert_numpy_array_equal(df.index, outp.index) + tm.assert_index_equal(df.columns, outp.columns) + tm.assert_index_equal(df.index, outp.index) - outp = DataFrame(ujson.decode( - ujson.encode(df, orient="index"), numpy=True)) + outp = DataFrame(ujson.decode(ujson.encode(df, orient="index"), + numpy=True)) self.assertTrue((df.transpose() == outp).values.all()) - tm.assert_numpy_array_equal(df.transpose().columns, outp.columns) - tm.assert_numpy_array_equal(df.transpose().index, outp.index) + tm.assert_index_equal(df.transpose().columns, outp.columns) + tm.assert_index_equal(df.transpose().index, outp.index) def testDataFrameNested(self): df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[ @@ -1287,20 +1285,20 @@ def testDataFrameNumpyLabelled(self): outp = DataFrame(*ujson.decode(ujson.encode(df), numpy=True, labelled=True)) self.assertTrue((df.T == outp).values.all()) - tm.assert_numpy_array_equal(df.T.columns, outp.columns) - tm.assert_numpy_array_equal(df.T.index, outp.index) + 
tm.assert_index_equal(df.T.columns, outp.columns) + tm.assert_index_equal(df.T.index, outp.index) outp = DataFrame(*ujson.decode(ujson.encode(df, orient="records"), numpy=True, labelled=True)) outp.index = df.index self.assertTrue((df == outp).values.all()) - tm.assert_numpy_array_equal(df.columns, outp.columns) + tm.assert_index_equal(df.columns, outp.columns) outp = DataFrame(*ujson.decode(ujson.encode(df, orient="index"), numpy=True, labelled=True)) self.assertTrue((df == outp).values.all()) - tm.assert_numpy_array_equal(df.columns, outp.columns) - tm.assert_numpy_array_equal(df.index, outp.index) + tm.assert_index_equal(df.columns, outp.columns) + tm.assert_index_equal(df.index, outp.index) def testSeries(self): s = Series([10, 20, 30, 40, 50, 60], name="series", @@ -1380,42 +1378,46 @@ def testIndex(self): i = Index([23, 45, 18, 98, 43, 11], name="index") # column indexed - outp = Index(ujson.decode(ujson.encode(i))) - self.assertTrue(i.equals(outp)) + outp = Index(ujson.decode(ujson.encode(i)), name='index') + tm.assert_index_equal(i, outp) - outp = Index(ujson.decode(ujson.encode(i), numpy=True)) - self.assertTrue(i.equals(outp)) + outp = Index(ujson.decode(ujson.encode(i), numpy=True), name='index') + tm.assert_index_equal(i, outp) dec = _clean_dict(ujson.decode(ujson.encode(i, orient="split"))) outp = Index(**dec) - self.assertTrue(i.equals(outp)) + tm.assert_index_equal(i, outp) self.assertTrue(i.name == outp.name) dec = _clean_dict(ujson.decode(ujson.encode(i, orient="split"), numpy=True)) outp = Index(**dec) - self.assertTrue(i.equals(outp)) + tm.assert_index_equal(i, outp) self.assertTrue(i.name == outp.name) - outp = Index(ujson.decode(ujson.encode(i, orient="values"))) - self.assertTrue(i.equals(outp)) + outp = Index(ujson.decode(ujson.encode(i, orient="values")), + name='index') + tm.assert_index_equal(i, outp) - outp = Index(ujson.decode(ujson.encode( - i, orient="values"), numpy=True)) - self.assertTrue(i.equals(outp)) + outp = 
Index(ujson.decode(ujson.encode(i, orient="values"), + numpy=True), name='index') + tm.assert_index_equal(i, outp) - outp = Index(ujson.decode(ujson.encode(i, orient="records"))) - self.assertTrue(i.equals(outp)) + outp = Index(ujson.decode(ujson.encode(i, orient="records")), + name='index') + tm.assert_index_equal(i, outp) - outp = Index(ujson.decode(ujson.encode( - i, orient="records"), numpy=True)) - self.assertTrue(i.equals(outp)) + outp = Index(ujson.decode(ujson.encode(i, orient="records"), + numpy=True), name='index') + tm.assert_index_equal(i, outp) - outp = Index(ujson.decode(ujson.encode(i, orient="index"))) - self.assertTrue(i.equals(outp)) + outp = Index(ujson.decode(ujson.encode(i, orient="index")), + name='index') + tm.assert_index_equal(i, outp) - outp = Index(ujson.decode(ujson.encode(i, orient="index"), numpy=True)) - self.assertTrue(i.equals(outp)) + outp = Index(ujson.decode(ujson.encode(i, orient="index"), + numpy=True), name='index') + tm.assert_index_equal(i, outp) def test_datetimeindex(self): from pandas.tseries.index import date_range @@ -1425,7 +1427,7 @@ def test_datetimeindex(self): encoded = ujson.encode(rng, date_unit='ns') decoded = DatetimeIndex(np.array(ujson.decode(encoded))) - self.assertTrue(rng.equals(decoded)) + tm.assert_index_equal(rng, decoded) ts = Series(np.random.randn(len(rng)), index=rng) decoded = Series(ujson.decode(ujson.encode(ts, date_unit='ns'))) diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py index 24c670abe8158..b7ef754004e18 100644 --- a/pandas/io/tests/parser/c_parser_only.py +++ b/pandas/io/tests/parser/c_parser_only.py @@ -61,12 +61,6 @@ def test_delim_whitespace_custom_terminator(self): columns=['a', 'b', 'c']) tm.assert_frame_equal(df, expected) - def test_parse_dates_empty_string(self): - # see gh-2263 - s = StringIO("Date, test\n2012-01-01, 1\n,2") - result = self.read_csv(s, parse_dates=["Date"], na_filter=False) - 
self.assertTrue(result['Date'].isnull()[1]) - def test_dtype_and_names_error(self): # see gh-8833: passing both dtype and names # resulting in an error reporting issue @@ -178,28 +172,8 @@ def error(val): self.assertTrue(sum(precise_errors) <= sum(normal_errors)) self.assertTrue(max(precise_errors) <= max(normal_errors)) - def test_compact_ints(self): - if compat.is_platform_windows() and not self.low_memory: - raise nose.SkipTest( - "segfaults on win-64, only when all tests are run") - - data = ('0,1,0,0\n' - '1,1,0,0\n' - '0,1,0,1') - - result = self.read_csv(StringIO(data), delimiter=',', header=None, - compact_ints=True, as_recarray=True) - ex_dtype = np.dtype([(str(i), 'i1') for i in range(4)]) - self.assertEqual(result.dtype, ex_dtype) - - result = self.read_csv(StringIO(data), delimiter=',', header=None, - as_recarray=True, compact_ints=True, - use_unsigned=True) - ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)]) - self.assertEqual(result.dtype, ex_dtype) - def test_compact_ints_as_recarray(self): - if compat.is_platform_windows() and self.low_memory: + if compat.is_platform_windows(): raise nose.SkipTest( "segfaults on win-64, only when all tests are run") @@ -207,16 +181,20 @@ def test_compact_ints_as_recarray(self): '1,1,0,0\n' '0,1,0,1') - result = self.read_csv(StringIO(data), delimiter=',', header=None, - compact_ints=True, as_recarray=True) - ex_dtype = np.dtype([(str(i), 'i1') for i in range(4)]) - self.assertEqual(result.dtype, ex_dtype) - - result = self.read_csv(StringIO(data), delimiter=',', header=None, - as_recarray=True, compact_ints=True, - use_unsigned=True) - ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)]) - self.assertEqual(result.dtype, ex_dtype) + with tm.assert_produces_warning( + FutureWarning, check_stacklevel=False): + result = self.read_csv(StringIO(data), delimiter=',', header=None, + compact_ints=True, as_recarray=True) + ex_dtype = np.dtype([(str(i), 'i1') for i in range(4)]) + self.assertEqual(result.dtype, 
ex_dtype) + + with tm.assert_produces_warning( + FutureWarning, check_stacklevel=False): + result = self.read_csv(StringIO(data), delimiter=',', header=None, + as_recarray=True, compact_ints=True, + use_unsigned=True) + ex_dtype = np.dtype([(str(i), 'u1') for i in range(4)]) + self.assertEqual(result.dtype, ex_dtype) def test_pass_dtype(self): data = """\ @@ -293,23 +271,18 @@ def test_empty_with_mangled_column_pass_dtype_by_indexes(self): {'one': np.empty(0, dtype='u1'), 'one.1': np.empty(0, dtype='f')}) tm.assert_frame_equal(result, expected, check_index_type=False) - def test_empty_with_dup_column_pass_dtype_by_names(self): - data = 'one,one' - result = self.read_csv( - StringIO(data), mangle_dupe_cols=False, dtype={'one': 'u1'}) - expected = pd.concat([Series([], name='one', dtype='u1')] * 2, axis=1) - tm.assert_frame_equal(result, expected, check_index_type=False) - def test_empty_with_dup_column_pass_dtype_by_indexes(self): - # FIXME in gh-9424 - raise nose.SkipTest( - "gh-9424; known failure read_csv with duplicate columns") + # see gh-9424 + expected = pd.concat([Series([], name='one', dtype='u1'), + Series([], name='one.1', dtype='f')], axis=1) data = 'one,one' - result = self.read_csv( - StringIO(data), mangle_dupe_cols=False, dtype={0: 'u1', 1: 'f'}) - expected = pd.concat([Series([], name='one', dtype='u1'), - Series([], name='one', dtype='f')], axis=1) + result = self.read_csv(StringIO(data), dtype={0: 'u1', 1: 'f'}) + tm.assert_frame_equal(result, expected, check_index_type=False) + + data = '' + result = self.read_csv(StringIO(data), names=['one', 'one'], + dtype={0: 'u1', 1: 'f'}) tm.assert_frame_equal(result, expected, check_index_type=False) def test_usecols_dtypes(self): @@ -353,17 +326,6 @@ def test_disable_bool_parsing(self): result = self.read_csv(StringIO(data), dtype=object, na_filter=False) self.assertEqual(result['B'][2], '') - def test_euro_decimal_format(self): - data = """Id;Number1;Number2;Text1;Text2;Number3 
-1;1521,1541;187101,9543;ABC;poi;4,738797819 -2;121,12;14897,76;DEF;uyt;0,377320872 -3;878,158;108013,434;GHI;rez;2,735694704""" - - df2 = self.read_csv(StringIO(data), sep=';', decimal=',') - self.assertEqual(df2['Number1'].dtype, float) - self.assertEqual(df2['Number2'].dtype, float) - self.assertEqual(df2['Number3'].dtype, float) - def test_custom_lineterminator(self): data = 'a,b,c~1,2,3~4,5,6' @@ -382,15 +344,6 @@ def test_raise_on_passed_int_dtype_with_nas(self): sep=",", skipinitialspace=True, dtype={'DOY': np.int64}) - def test_na_trailing_columns(self): - data = """Date,Currenncy,Symbol,Type,Units,UnitPrice,Cost,Tax -2012-03-14,USD,AAPL,BUY,1000 -2012-05-12,USD,SBUX,SELL,500""" - - result = self.read_csv(StringIO(data)) - self.assertEqual(result['Date'][1], '2012-05-12') - self.assertTrue(result['UnitPrice'].isnull().all()) - def test_parse_ragged_csv(self): data = """1,2,3 1,2,3,4 @@ -435,49 +388,6 @@ def test_tokenize_CR_with_quoting(self): expected = self.read_csv(StringIO(data.replace('\r', '\n'))) tm.assert_frame_equal(result, expected) - def test_raise_on_no_columns(self): - # single newline - data = "\n" - self.assertRaises(ValueError, self.read_csv, StringIO(data)) - - # test with more than a single newline - data = "\n\n\n" - self.assertRaises(ValueError, self.read_csv, StringIO(data)) - - def test_1000_sep_with_decimal(self): - data = """A|B|C -1|2,334.01|5 -10|13|10. -""" - expected = DataFrame({ - 'A': [1, 10], - 'B': [2334.01, 13], - 'C': [5, 10.] 
- }) - - tm.assert_equal(expected.A.dtype, 'int64') - tm.assert_equal(expected.B.dtype, 'float') - tm.assert_equal(expected.C.dtype, 'float') - - df = self.read_csv(StringIO(data), sep='|', thousands=',', decimal='.') - tm.assert_frame_equal(df, expected) - - df = self.read_table(StringIO(data), sep='|', - thousands=',', decimal='.') - tm.assert_frame_equal(df, expected) - - data_with_odd_sep = """A|B|C -1|2.334,01|5 -10|13|10, -""" - df = self.read_csv(StringIO(data_with_odd_sep), - sep='|', thousands='.', decimal=',') - tm.assert_frame_equal(df, expected) - - df = self.read_table(StringIO(data_with_odd_sep), - sep='|', thousands='.', decimal=',') - tm.assert_frame_equal(df, expected) - def test_grow_boundary_at_cap(self): # See gh-12494 # @@ -497,25 +407,3 @@ def test_empty_header_read(count): for count in range(1, 101): test_empty_header_read(count) - - def test_inf_parsing(self): - data = """\ -,A -a,inf -b,-inf -c,Inf -d,-Inf -e,INF -f,-INF -g,INf -h,-INf -i,inF -j,-inF""" - inf = float('inf') - expected = Series([inf, -inf] * 5) - - df = self.read_csv(StringIO(data), index_col=0) - tm.assert_almost_equal(df['A'].values, expected.values) - - df = self.read_csv(StringIO(data), index_col=0, na_filter=False) - tm.assert_almost_equal(df['A'].values, expected.values) diff --git a/pandas/io/tests/parser/comment.py b/pandas/io/tests/parser/comment.py index 07fc6a167a6c0..f7cd1e190ec16 100644 --- a/pandas/io/tests/parser/comment.py +++ b/pandas/io/tests/parser/comment.py @@ -19,14 +19,14 @@ def test_comment(self): 1,2.,4.#hello world 5.,NaN,10.0 """ - expected = [[1., 2., 4.], - [5., np.nan, 10.]] + expected = np.array([[1., 2., 4.], + [5., np.nan, 10.]]) df = self.read_csv(StringIO(data), comment='#') - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) df = self.read_table(StringIO(data), sep=',', comment='#', na_values=['NaN']) - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, 
expected) def test_line_comment(self): data = """# empty @@ -35,10 +35,10 @@ def test_line_comment(self): #ignore this line 5.,NaN,10.0 """ - expected = [[1., 2., 4.], - [5., np.nan, 10.]] + expected = np.array([[1., 2., 4.], + [5., np.nan, 10.]]) df = self.read_csv(StringIO(data), comment='#') - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) # check with delim_whitespace=True df = self.read_csv(StringIO(data.replace(',', ' ')), comment='#', @@ -48,11 +48,11 @@ def test_line_comment(self): # custom line terminator is not supported # with the Python parser yet if self.engine == 'c': - expected = [[1., 2., 4.], - [5., np.nan, 10.]] + expected = np.array([[1., 2., 4.], + [5., np.nan, 10.]]) df = self.read_csv(StringIO(data.replace('\n', '*')), comment='#', lineterminator='*') - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) def test_comment_skiprows(self): data = """# empty @@ -64,9 +64,9 @@ def test_comment_skiprows(self): 5.,NaN,10.0 """ # this should ignore the first four lines (including comments) - expected = [[1., 2., 4.], [5., np.nan, 10.]] + expected = np.array([[1., 2., 4.], [5., np.nan, 10.]]) df = self.read_csv(StringIO(data), comment='#', skiprows=4) - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) def test_comment_header(self): data = """# empty @@ -77,9 +77,9 @@ def test_comment_header(self): 5.,NaN,10.0 """ # header should begin at the second non-comment line - expected = [[1., 2., 4.], [5., np.nan, 10.]] + expected = np.array([[1., 2., 4.], [5., np.nan, 10.]]) df = self.read_csv(StringIO(data), comment='#', header=1) - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) def test_comment_skiprows_header(self): data = """# empty @@ -94,9 +94,9 @@ def test_comment_skiprows_header(self): # skiprows should skip the first 4 lines (including comments), while # header should 
start from the second non-commented line starting # with line 5 - expected = [[1., 2., 4.], [5., np.nan, 10.]] + expected = np.array([[1., 2., 4.], [5., np.nan, 10.]]) df = self.read_csv(StringIO(data), comment='#', skiprows=4, header=1) - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) def test_custom_comment_char(self): data = "a,b,c\n1,2,3#ignore this!\n4,5,6#ignorethistoo" diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py index 4d9ce922184d9..f8c7241fdf88a 100644 --- a/pandas/io/tests/parser/common.py +++ b/pandas/io/tests/parser/common.py @@ -10,7 +10,6 @@ import nose import numpy as np -from numpy.testing.decorators import slow from pandas.lib import Timestamp import pandas as pd @@ -41,10 +40,10 @@ def test_empty_decimal_marker(self): 1|2,334|5 10|13|10. """ - # C parser: supports only length-1 decimals - # Python parser: 'decimal' not supported yet - self.assertRaises(ValueError, self.read_csv, - StringIO(data), decimal='') + # Parsers support only length-1 decimals + msg = 'Only length-1 decimal markers supported' + with tm.assertRaisesRegexp(ValueError, msg): + self.read_csv(StringIO(data), decimal='') def test_read_csv(self): if not compat.PY3: @@ -233,16 +232,18 @@ def test_unnamed_columns(self): 6,7,8,9,10 11,12,13,14,15 """ - expected = [[1, 2, 3, 4, 5.], - [6, 7, 8, 9, 10], - [11, 12, 13, 14, 15]] + expected = np.array([[1, 2, 3, 4, 5], + [6, 7, 8, 9, 10], + [11, 12, 13, 14, 15]], dtype=np.int64) df = self.read_table(StringIO(data), sep=',') tm.assert_almost_equal(df.values, expected) - self.assert_numpy_array_equal(df.columns, - ['A', 'B', 'C', 'Unnamed: 3', - 'Unnamed: 4']) + self.assert_index_equal(df.columns, + Index(['A', 'B', 'C', 'Unnamed: 3', + 'Unnamed: 4'])) def test_duplicate_columns(self): + # TODO: add test for condition 'mangle_dupe_cols=False' + # once it is actually supported (gh-12935) data = """A,A,B,B,B 1,2,3,4,5 6,7,8,9,10 @@ -256,11 +257,6 @@ def 
test_duplicate_columns(self): self.assertEqual(list(df.columns), ['A', 'A.1', 'B', 'B.1', 'B.2']) - df = getattr(self, method)(StringIO(data), sep=',', - mangle_dupe_cols=False) - self.assertEqual(list(df.columns), - ['A', 'A', 'B', 'B', 'B']) - df = getattr(self, method)(StringIO(data), sep=',', mangle_dupe_cols=True) self.assertEqual(list(df.columns), @@ -279,7 +275,7 @@ def test_read_csv_dataframe(self): df = self.read_csv(self.csv1, index_col=0, parse_dates=True) df2 = self.read_table(self.csv1, sep=',', index_col=0, parse_dates=True) - self.assert_numpy_array_equal(df.columns, ['A', 'B', 'C', 'D']) + self.assert_index_equal(df.columns, pd.Index(['A', 'B', 'C', 'D'])) self.assertEqual(df.index.name, 'index') self.assertIsInstance( df.index[0], (datetime, np.datetime64, Timestamp)) @@ -290,12 +286,12 @@ def test_read_csv_no_index_name(self): df = self.read_csv(self.csv2, index_col=0, parse_dates=True) df2 = self.read_table(self.csv2, sep=',', index_col=0, parse_dates=True) - self.assert_numpy_array_equal(df.columns, ['A', 'B', 'C', 'D', 'E']) - self.assertIsInstance( - df.index[0], (datetime, np.datetime64, Timestamp)) - self.assertEqual(df.ix[ - :, ['A', 'B', 'C', 'D'] - ].values.dtype, np.float64) + self.assert_index_equal(df.columns, + pd.Index(['A', 'B', 'C', 'D', 'E'])) + self.assertIsInstance(df.index[0], + (datetime, np.datetime64, Timestamp)) + self.assertEqual(df.ix[:, ['A', 'B', 'C', 'D']].values.dtype, + np.float64) tm.assert_frame_equal(df, df2) def test_read_table_unicode(self): @@ -394,10 +390,23 @@ def test_int_conversion(self): self.assertEqual(data['B'].dtype, np.int64) def test_read_nrows(self): - df = self.read_csv(StringIO(self.data1), nrows=3) expected = self.read_csv(StringIO(self.data1))[:3] + + df = self.read_csv(StringIO(self.data1), nrows=3) tm.assert_frame_equal(df, expected) + # see gh-10476 + df = self.read_csv(StringIO(self.data1), nrows=3.0) + tm.assert_frame_equal(df, expected) + + msg = "must be an integer" + + with 
tm.assertRaisesRegexp(ValueError, msg): + self.read_csv(StringIO(self.data1), nrows=1.2) + + with tm.assertRaisesRegexp(ValueError, msg): + self.read_csv(StringIO(self.data1), nrows='foo') + def test_read_chunksize(self): reader = self.read_csv(StringIO(self.data1), index_col=0, chunksize=2) df = self.read_csv(StringIO(self.data1), index_col=0) @@ -597,7 +606,7 @@ def test_url(self): tm.assert_frame_equal(url_table, local_table) # TODO: ftp testing - @slow + @tm.slow def test_file(self): # FILE @@ -818,11 +827,6 @@ def test_ignore_leading_whitespace(self): expected = DataFrame({'a': [1, 4, 7], 'b': [2, 5, 8], 'c': [3, 6, 9]}) tm.assert_frame_equal(result, expected) - def test_nrows_and_chunksize_raises_notimplemented(self): - data = 'a b c' - self.assertRaises(NotImplementedError, self.read_csv, StringIO(data), - nrows=10, chunksize=5) - def test_chunk_begins_with_newline_whitespace(self): # see gh-10022 data = '\n hello\nworld\n' @@ -1117,21 +1121,21 @@ def test_empty_lines(self): -70,.4,1 """ - expected = [[1., 2., 4.], - [5., np.nan, 10.], - [-70., .4, 1.]] + expected = np.array([[1., 2., 4.], + [5., np.nan, 10.], + [-70., .4, 1.]]) df = self.read_csv(StringIO(data)) - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) df = self.read_csv(StringIO(data.replace(',', ' ')), sep='\s+') - tm.assert_almost_equal(df.values, expected) - expected = [[1., 2., 4.], - [np.nan, np.nan, np.nan], - [np.nan, np.nan, np.nan], - [5., np.nan, 10.], - [np.nan, np.nan, np.nan], - [-70., .4, 1.]] + tm.assert_numpy_array_equal(df.values, expected) + expected = np.array([[1., 2., 4.], + [np.nan, np.nan, np.nan], + [np.nan, np.nan, np.nan], + [5., np.nan, 10.], + [np.nan, np.nan, np.nan], + [-70., .4, 1.]]) df = self.read_csv(StringIO(data), skip_blank_lines=False) - tm.assert_almost_equal(list(df.values), list(expected)) + tm.assert_numpy_array_equal(df.values, expected) def test_whitespace_lines(self): data = """ @@ -1142,10 +1146,10 @@ def 
test_whitespace_lines(self): \t 1,2.,4. 5.,NaN,10.0 """ - expected = [[1, 2., 4.], - [5., np.nan, 10.]] + expected = np.array([[1, 2., 4.], + [5., np.nan, 10.]]) df = self.read_csv(StringIO(data)) - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) def test_regex_separator(self): # see gh-6607 @@ -1236,3 +1240,136 @@ def test_iteration_open_handle(self): result = self.read_table(f, squeeze=True, header=None) expected = Series(['DDD', 'EEE', 'FFF', 'GGG'], name=0) tm.assert_series_equal(result, expected) + + def test_1000_sep_with_decimal(self): + data = """A|B|C +1|2,334.01|5 +10|13|10. +""" + expected = DataFrame({ + 'A': [1, 10], + 'B': [2334.01, 13], + 'C': [5, 10.] + }) + + tm.assert_equal(expected.A.dtype, 'int64') + tm.assert_equal(expected.B.dtype, 'float') + tm.assert_equal(expected.C.dtype, 'float') + + df = self.read_csv(StringIO(data), sep='|', thousands=',', decimal='.') + tm.assert_frame_equal(df, expected) + + df = self.read_table(StringIO(data), sep='|', + thousands=',', decimal='.') + tm.assert_frame_equal(df, expected) + + data_with_odd_sep = """A|B|C +1|2.334,01|5 +10|13|10, +""" + df = self.read_csv(StringIO(data_with_odd_sep), + sep='|', thousands='.', decimal=',') + tm.assert_frame_equal(df, expected) + + df = self.read_table(StringIO(data_with_odd_sep), + sep='|', thousands='.', decimal=',') + tm.assert_frame_equal(df, expected) + + def test_euro_decimal_format(self): + data = """Id;Number1;Number2;Text1;Text2;Number3 +1;1521,1541;187101,9543;ABC;poi;4,738797819 +2;121,12;14897,76;DEF;uyt;0,377320872 +3;878,158;108013,434;GHI;rez;2,735694704""" + + df2 = self.read_csv(StringIO(data), sep=';', decimal=',') + self.assertEqual(df2['Number1'].dtype, float) + self.assertEqual(df2['Number2'].dtype, float) + self.assertEqual(df2['Number3'].dtype, float) + + def test_read_duplicate_names(self): + # See gh-7160 + data = "a,b,a\n0,1,2\n3,4,5" + df = self.read_csv(StringIO(data)) + expected = DataFrame([[0, 1, 
2], [3, 4, 5]], + columns=['a', 'b', 'a.1']) + tm.assert_frame_equal(df, expected) + + data = "0,1,2\n3,4,5" + df = self.read_csv(StringIO(data), names=["a", "b", "a"]) + expected = DataFrame([[0, 1, 2], [3, 4, 5]], + columns=['a', 'b', 'a.1']) + tm.assert_frame_equal(df, expected) + + def test_inf_parsing(self): + data = """\ +,A +a,inf +b,-inf +c,+Inf +d,-Inf +e,INF +f,-INF +g,+INf +h,-INf +i,inF +j,-inF""" + inf = float('inf') + expected = Series([inf, -inf] * 5) + + df = self.read_csv(StringIO(data), index_col=0) + tm.assert_almost_equal(df['A'].values, expected.values) + + df = self.read_csv(StringIO(data), index_col=0, na_filter=False) + tm.assert_almost_equal(df['A'].values, expected.values) + + def test_raise_on_no_columns(self): + # single newline + data = "\n" + self.assertRaises(EmptyDataError, self.read_csv, StringIO(data)) + + # test with more than a single newline + data = "\n\n\n" + self.assertRaises(EmptyDataError, self.read_csv, StringIO(data)) + + def test_compact_ints_use_unsigned(self): + # see gh-13323 + data = 'a,b,c\n1,9,258' + + # sanity check + expected = DataFrame({ + 'a': np.array([1], dtype=np.int64), + 'b': np.array([9], dtype=np.int64), + 'c': np.array([258], dtype=np.int64), + }) + out = self.read_csv(StringIO(data)) + tm.assert_frame_equal(out, expected) + + expected = DataFrame({ + 'a': np.array([1], dtype=np.int8), + 'b': np.array([9], dtype=np.int8), + 'c': np.array([258], dtype=np.int16), + }) + + # default behaviour for 'use_unsigned' + with tm.assert_produces_warning( + FutureWarning, check_stacklevel=False): + out = self.read_csv(StringIO(data), compact_ints=True) + tm.assert_frame_equal(out, expected) + + with tm.assert_produces_warning( + FutureWarning, check_stacklevel=False): + out = self.read_csv(StringIO(data), compact_ints=True, + use_unsigned=False) + tm.assert_frame_equal(out, expected) + + expected = DataFrame({ + 'a': np.array([1], dtype=np.uint8), + 'b': np.array([9], dtype=np.uint8), + 'c': np.array([258], 
dtype=np.uint16), + }) + + with tm.assert_produces_warning( + FutureWarning, check_stacklevel=False): + out = self.read_csv(StringIO(data), compact_ints=True, + use_unsigned=True) + tm.assert_frame_equal(out, expected) diff --git a/pandas/io/tests/parser/header.py b/pandas/io/tests/parser/header.py index e3c408f0af907..ca148b373d659 100644 --- a/pandas/io/tests/parser/header.py +++ b/pandas/io/tests/parser/header.py @@ -43,14 +43,14 @@ def test_no_header_prefix(self): df_pref = self.read_table(StringIO(data), sep=',', prefix='Field', header=None) - expected = [[1, 2, 3, 4, 5.], - [6, 7, 8, 9, 10], - [11, 12, 13, 14, 15]] + expected = np.array([[1, 2, 3, 4, 5], + [6, 7, 8, 9, 10], + [11, 12, 13, 14, 15]], dtype=np.int64) tm.assert_almost_equal(df_pref.values, expected) - self.assert_numpy_array_equal( - df_pref.columns, ['Field0', 'Field1', 'Field2', - 'Field3', 'Field4']) + self.assert_index_equal(df_pref.columns, + Index(['Field0', 'Field1', 'Field2', + 'Field3', 'Field4'])) def test_header_with_index_col(self): data = """foo,1,2,3 @@ -262,14 +262,14 @@ def test_no_header(self): names = ['foo', 'bar', 'baz', 'quux', 'panda'] df2 = self.read_table(StringIO(data), sep=',', names=names) - expected = [[1, 2, 3, 4, 5.], - [6, 7, 8, 9, 10], - [11, 12, 13, 14, 15]] + expected = np.array([[1, 2, 3, 4, 5], + [6, 7, 8, 9, 10], + [11, 12, 13, 14, 15]], dtype=np.int64) tm.assert_almost_equal(df.values, expected) tm.assert_almost_equal(df.values, df2.values) - self.assert_numpy_array_equal(df_pref.columns, - ['X0', 'X1', 'X2', 'X3', 'X4']) - self.assert_numpy_array_equal(df.columns, lrange(5)) + self.assert_index_equal(df_pref.columns, + Index(['X0', 'X1', 'X2', 'X3', 'X4'])) + self.assert_index_equal(df.columns, Index(lrange(5))) - self.assert_numpy_array_equal(df2.columns, names) + self.assert_index_equal(df2.columns, Index(names)) diff --git a/pandas/io/tests/parser/na_values.py b/pandas/io/tests/parser/na_values.py index 853e6242751c9..2a8c934abce61 100644 --- 
a/pandas/io/tests/parser/na_values.py +++ b/pandas/io/tests/parser/na_values.py @@ -11,7 +11,7 @@ import pandas.io.parsers as parsers import pandas.util.testing as tm -from pandas import DataFrame, MultiIndex, read_csv +from pandas import DataFrame, MultiIndex from pandas.compat import StringIO, range @@ -37,62 +37,36 @@ def test_detect_string_na(self): NA,baz NaN,nan """ - expected = [['foo', 'bar'], [nan, 'baz'], [nan, nan]] + expected = np.array([['foo', 'bar'], [nan, 'baz'], [nan, nan]], + dtype=np.object_) df = self.read_csv(StringIO(data)) - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) def test_non_string_na_values(self): - # see gh-3611, na_values that are not a string are an issue - with tm.ensure_clean('__non_string_na_values__.csv') as path: - df = DataFrame({'A': [-999, 2, 3], 'B': [1.2, -999, 4.5]}) - df.to_csv(path, sep=' ', index=False) - result1 = self.read_csv(path, sep=' ', header=0, - na_values=['-999.0', '-999']) - result2 = self.read_csv(path, sep=' ', header=0, - na_values=[-999, -999.0]) - result3 = self.read_csv(path, sep=' ', header=0, - na_values=[-999.0, -999]) - tm.assert_frame_equal(result1, result2) - tm.assert_frame_equal(result2, result3) - - result4 = self.read_csv( - path, sep=' ', header=0, na_values=['-999.0']) - result5 = self.read_csv( - path, sep=' ', header=0, na_values=['-999']) - result6 = self.read_csv( - path, sep=' ', header=0, na_values=[-999.0]) - result7 = self.read_csv( - path, sep=' ', header=0, na_values=[-999]) - tm.assert_frame_equal(result4, result3) - tm.assert_frame_equal(result5, result3) - tm.assert_frame_equal(result6, result3) - tm.assert_frame_equal(result7, result3) - - good_compare = result3 - - # with an odd float format, so we can't match the string 999.0 - # exactly, but need float matching - # TODO: change these to self.read_csv when Python bug is squashed - df.to_csv(path, sep=' ', index=False, float_format='%.3f') - result1 = read_csv(path, sep=' ', 
header=0, - na_values=['-999.0', '-999']) - result2 = read_csv(path, sep=' ', header=0, - na_values=[-999.0, -999]) - tm.assert_frame_equal(result1, good_compare) - tm.assert_frame_equal(result2, good_compare) - - result3 = read_csv(path, sep=' ', - header=0, na_values=['-999.0']) - result4 = read_csv(path, sep=' ', - header=0, na_values=['-999']) - result5 = read_csv(path, sep=' ', - header=0, na_values=[-999.0]) - result6 = read_csv(path, sep=' ', - header=0, na_values=[-999]) - tm.assert_frame_equal(result3, good_compare) - tm.assert_frame_equal(result4, good_compare) - tm.assert_frame_equal(result5, good_compare) - tm.assert_frame_equal(result6, good_compare) + # see gh-3611: with an odd float format, we can't match + # the string '999.0' exactly but still need float matching + nice = """A,B +-999,1.2 +2,-999 +3,4.5 +""" + ugly = """A,B +-999,1.200 +2,-999.000 +3,4.500 +""" + na_values_param = [['-999.0', '-999'], + [-999, -999.0], + [-999.0, -999], + ['-999.0'], ['-999'], + [-999.0], [-999]] + expected = DataFrame([[np.nan, 1.2], [2.0, np.nan], + [3.0, 4.5]], columns=['A', 'B']) + + for data in (nice, ugly): + for na_values in na_values_param: + out = self.read_csv(StringIO(data), na_values=na_values) + tm.assert_frame_equal(out, expected) def test_default_na_values(self): _NA_VALUES = set(['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', @@ -126,20 +100,20 @@ def test_custom_na_values(self): -1.#IND,5,baz 7,8,NaN """ - expected = [[1., nan, 3], - [nan, 5, nan], - [7, 8, nan]] + expected = np.array([[1., nan, 3], + [nan, 5, nan], + [7, 8, nan]]) df = self.read_csv(StringIO(data), na_values=['baz'], skiprows=[1]) - tm.assert_almost_equal(df.values, expected) + tm.assert_numpy_array_equal(df.values, expected) df2 = self.read_table(StringIO(data), sep=',', na_values=['baz'], skiprows=[1]) - tm.assert_almost_equal(df2.values, expected) + tm.assert_numpy_array_equal(df2.values, expected) df3 = self.read_table(StringIO(data), sep=',', na_values='baz', skiprows=[1]) - 
tm.assert_almost_equal(df3.values, expected) + tm.assert_numpy_array_equal(df3.values, expected) def test_bool_na_values(self): data = """A,B,C @@ -250,116 +224,29 @@ def test_na_values_keep_default(self): 'seven']}) tm.assert_frame_equal(xp.reindex(columns=df.columns), df) - def test_skiprow_with_newline(self): - # see gh-12775 and gh-10911 - data = """id,text,num_lines -1,"line 11 -line 12",2 -2,"line 21 -line 22",2 -3,"line 31",1""" - expected = [[2, 'line 21\nline 22', 2], - [3, 'line 31', 1]] - expected = DataFrame(expected, columns=[ - 'id', 'text', 'num_lines']) - df = self.read_csv(StringIO(data), skiprows=[1]) - tm.assert_frame_equal(df, expected) - - data = ('a,b,c\n~a\n b~,~e\n d~,' - '~f\n f~\n1,2,~12\n 13\n 14~') - expected = [['a\n b', 'e\n d', 'f\n f']] - expected = DataFrame(expected, columns=[ - 'a', 'b', 'c']) - df = self.read_csv(StringIO(data), - quotechar="~", - skiprows=[2]) - tm.assert_frame_equal(df, expected) - - data = ('Text,url\n~example\n ' - 'sentence\n one~,url1\n~' - 'example\n sentence\n two~,url2\n~' - 'example\n sentence\n three~,url3') - expected = [['example\n sentence\n two', 'url2']] - expected = DataFrame(expected, columns=[ - 'Text', 'url']) - df = self.read_csv(StringIO(data), - quotechar="~", - skiprows=[1, 3]) - tm.assert_frame_equal(df, expected) - - def test_skiprow_with_quote(self): - # see gh-12775 and gh-10911 - data = """id,text,num_lines -1,"line '11' line 12",2 -2,"line '21' line 22",2 -3,"line '31' line 32",1""" - expected = [[2, "line '21' line 22", 2], - [3, "line '31' line 32", 1]] - expected = DataFrame(expected, columns=[ - 'id', 'text', 'num_lines']) - df = self.read_csv(StringIO(data), skiprows=[1]) - tm.assert_frame_equal(df, expected) - - def test_skiprow_with_newline_and_quote(self): - # see gh-12775 and gh-10911 - data = """id,text,num_lines -1,"line \n'11' line 12",2 -2,"line \n'21' line 22",2 -3,"line \n'31' line 32",1""" - expected = [[2, "line \n'21' line 22", 2], - [3, "line \n'31' line 32", 1]] - 
expected = DataFrame(expected, columns=[ - 'id', 'text', 'num_lines']) - df = self.read_csv(StringIO(data), skiprows=[1]) - tm.assert_frame_equal(df, expected) - - data = """id,text,num_lines -1,"line '11\n' line 12",2 -2,"line '21\n' line 22",2 -3,"line '31\n' line 32",1""" - expected = [[2, "line '21\n' line 22", 2], - [3, "line '31\n' line 32", 1]] - expected = DataFrame(expected, columns=[ - 'id', 'text', 'num_lines']) - df = self.read_csv(StringIO(data), skiprows=[1]) - tm.assert_frame_equal(df, expected) + def test_na_values_na_filter_override(self): + data = """\ +A,B +1,A +nan,B +3,C +""" - data = """id,text,num_lines -1,"line '11\n' \r\tline 12",2 -2,"line '21\n' \r\tline 22",2 -3,"line '31\n' \r\tline 32",1""" - expected = [[2, "line '21\n' \r\tline 22", 2], - [3, "line '31\n' \r\tline 32", 1]] - expected = DataFrame(expected, columns=[ - 'id', 'text', 'num_lines']) - df = self.read_csv(StringIO(data), skiprows=[1]) - tm.assert_frame_equal(df, expected) + expected = DataFrame([[1, 'A'], [np.nan, np.nan], [3, 'C']], + columns=['A', 'B']) + out = self.read_csv(StringIO(data), na_values=['B'], na_filter=True) + tm.assert_frame_equal(out, expected) - def test_skiprows_lineterminator(self): - # see gh-9079 - data = '\n'.join(['SMOSMANIA ThetaProbe-ML2X ', - '2007/01/01 01:00 0.2140 U M ', - '2007/01/01 02:00 0.2141 M O ', - '2007/01/01 04:00 0.2142 D M ']) - expected = DataFrame([['2007/01/01', '01:00', 0.2140, 'U', 'M'], - ['2007/01/01', '02:00', 0.2141, 'M', 'O'], - ['2007/01/01', '04:00', 0.2142, 'D', 'M']], - columns=['date', 'time', 'var', 'flag', - 'oflag']) - - # test with default line terminators "LF" and "CRLF" - df = self.read_csv(StringIO(data), skiprows=1, delim_whitespace=True, - names=['date', 'time', 'var', 'flag', 'oflag']) - tm.assert_frame_equal(df, expected) + expected = DataFrame([['1', 'A'], ['nan', 'B'], ['3', 'C']], + columns=['A', 'B']) + out = self.read_csv(StringIO(data), na_values=['B'], na_filter=False) + tm.assert_frame_equal(out, 
expected) - df = self.read_csv(StringIO(data.replace('\n', '\r\n')), - skiprows=1, delim_whitespace=True, - names=['date', 'time', 'var', 'flag', 'oflag']) - tm.assert_frame_equal(df, expected) + def test_na_trailing_columns(self): + data = """Date,Currenncy,Symbol,Type,Units,UnitPrice,Cost,Tax +2012-03-14,USD,AAPL,BUY,1000 +2012-05-12,USD,SBUX,SELL,500""" - # "CR" is not respected with the Python parser yet - if self.engine == 'c': - df = self.read_csv(StringIO(data.replace('\n', '\r')), - skiprows=1, delim_whitespace=True, - names=['date', 'time', 'var', 'flag', 'oflag']) - tm.assert_frame_equal(df, expected) + result = self.read_csv(StringIO(data)) + self.assertEqual(result['Date'][1], '2012-05-12') + self.assertTrue(result['UnitPrice'].isnull().all()) diff --git a/pandas/io/tests/parser/parse_dates.py b/pandas/io/tests/parser/parse_dates.py index ec368bb358ad5..01816bde66120 100644 --- a/pandas/io/tests/parser/parse_dates.py +++ b/pandas/io/tests/parser/parse_dates.py @@ -467,3 +467,10 @@ def test_read_with_parse_dates_invalid_type(self): StringIO(data), parse_dates=np.array([4, 5])) tm.assertRaisesRegexp(TypeError, errmsg, self.read_csv, StringIO(data), parse_dates=set([1, 3, 3])) + + def test_parse_dates_empty_string(self): + # see gh-2263 + data = "Date, test\n2012-01-01, 1\n,2" + result = self.read_csv(StringIO(data), parse_dates=["Date"], + na_filter=False) + self.assertTrue(result['Date'].isnull()[1]) diff --git a/pandas/io/tests/parser/python_parser_only.py b/pandas/io/tests/parser/python_parser_only.py index 7d1793c429f4e..a08cb36c13f80 100644 --- a/pandas/io/tests/parser/python_parser_only.py +++ b/pandas/io/tests/parser/python_parser_only.py @@ -40,7 +40,8 @@ def test_sniff_delimiter(self): baz|7|8|9 """ data = self.read_csv(StringIO(text), index_col=0, sep=None) - self.assertTrue(data.index.equals(Index(['foo', 'bar', 'baz']))) + self.assert_index_equal(data.index, + Index(['foo', 'bar', 'baz'], name='index')) data2 = self.read_csv(StringIO(text), 
index_col=0, delimiter='|') tm.assert_frame_equal(data, data2) diff --git a/pandas/io/tests/parser/skiprows.py b/pandas/io/tests/parser/skiprows.py index 3e585a9a623c9..c9f50dec6c01e 100644 --- a/pandas/io/tests/parser/skiprows.py +++ b/pandas/io/tests/parser/skiprows.py @@ -76,3 +76,117 @@ def test_skiprows_blank(self): datetime(2000, 1, 3)]) expected.index.name = 0 tm.assert_frame_equal(data, expected) + + def test_skiprow_with_newline(self): + # see gh-12775 and gh-10911 + data = """id,text,num_lines +1,"line 11 +line 12",2 +2,"line 21 +line 22",2 +3,"line 31",1""" + expected = [[2, 'line 21\nline 22', 2], + [3, 'line 31', 1]] + expected = DataFrame(expected, columns=[ + 'id', 'text', 'num_lines']) + df = self.read_csv(StringIO(data), skiprows=[1]) + tm.assert_frame_equal(df, expected) + + data = ('a,b,c\n~a\n b~,~e\n d~,' + '~f\n f~\n1,2,~12\n 13\n 14~') + expected = [['a\n b', 'e\n d', 'f\n f']] + expected = DataFrame(expected, columns=[ + 'a', 'b', 'c']) + df = self.read_csv(StringIO(data), + quotechar="~", + skiprows=[2]) + tm.assert_frame_equal(df, expected) + + data = ('Text,url\n~example\n ' + 'sentence\n one~,url1\n~' + 'example\n sentence\n two~,url2\n~' + 'example\n sentence\n three~,url3') + expected = [['example\n sentence\n two', 'url2']] + expected = DataFrame(expected, columns=[ + 'Text', 'url']) + df = self.read_csv(StringIO(data), + quotechar="~", + skiprows=[1, 3]) + tm.assert_frame_equal(df, expected) + + def test_skiprow_with_quote(self): + # see gh-12775 and gh-10911 + data = """id,text,num_lines +1,"line '11' line 12",2 +2,"line '21' line 22",2 +3,"line '31' line 32",1""" + expected = [[2, "line '21' line 22", 2], + [3, "line '31' line 32", 1]] + expected = DataFrame(expected, columns=[ + 'id', 'text', 'num_lines']) + df = self.read_csv(StringIO(data), skiprows=[1]) + tm.assert_frame_equal(df, expected) + + def test_skiprow_with_newline_and_quote(self): + # see gh-12775 and gh-10911 + data = """id,text,num_lines +1,"line \n'11' line 12",2 
+2,"line \n'21' line 22",2 +3,"line \n'31' line 32",1""" + expected = [[2, "line \n'21' line 22", 2], + [3, "line \n'31' line 32", 1]] + expected = DataFrame(expected, columns=[ + 'id', 'text', 'num_lines']) + df = self.read_csv(StringIO(data), skiprows=[1]) + tm.assert_frame_equal(df, expected) + + data = """id,text,num_lines +1,"line '11\n' line 12",2 +2,"line '21\n' line 22",2 +3,"line '31\n' line 32",1""" + expected = [[2, "line '21\n' line 22", 2], + [3, "line '31\n' line 32", 1]] + expected = DataFrame(expected, columns=[ + 'id', 'text', 'num_lines']) + df = self.read_csv(StringIO(data), skiprows=[1]) + tm.assert_frame_equal(df, expected) + + data = """id,text,num_lines +1,"line '11\n' \r\tline 12",2 +2,"line '21\n' \r\tline 22",2 +3,"line '31\n' \r\tline 32",1""" + expected = [[2, "line '21\n' \r\tline 22", 2], + [3, "line '31\n' \r\tline 32", 1]] + expected = DataFrame(expected, columns=[ + 'id', 'text', 'num_lines']) + df = self.read_csv(StringIO(data), skiprows=[1]) + tm.assert_frame_equal(df, expected) + + def test_skiprows_lineterminator(self): + # see gh-9079 + data = '\n'.join(['SMOSMANIA ThetaProbe-ML2X ', + '2007/01/01 01:00 0.2140 U M ', + '2007/01/01 02:00 0.2141 M O ', + '2007/01/01 04:00 0.2142 D M ']) + expected = DataFrame([['2007/01/01', '01:00', 0.2140, 'U', 'M'], + ['2007/01/01', '02:00', 0.2141, 'M', 'O'], + ['2007/01/01', '04:00', 0.2142, 'D', 'M']], + columns=['date', 'time', 'var', 'flag', + 'oflag']) + + # test with default line terminators "LF" and "CRLF" + df = self.read_csv(StringIO(data), skiprows=1, delim_whitespace=True, + names=['date', 'time', 'var', 'flag', 'oflag']) + tm.assert_frame_equal(df, expected) + + df = self.read_csv(StringIO(data.replace('\n', '\r\n')), + skiprows=1, delim_whitespace=True, + names=['date', 'time', 'var', 'flag', 'oflag']) + tm.assert_frame_equal(df, expected) + + # "CR" is not respected with the Python parser yet + if self.engine == 'c': + df = self.read_csv(StringIO(data.replace('\n', '\r')), + 
skiprows=1, delim_whitespace=True, + names=['date', 'time', 'var', 'flag', 'oflag']) + tm.assert_frame_equal(df, expected) diff --git a/pandas/io/tests/parser/test_parsers.py b/pandas/io/tests/parser/test_parsers.py index 374485b5ddaad..fda7b28769647 100644 --- a/pandas/io/tests/parser/test_parsers.py +++ b/pandas/io/tests/parser/test_parsers.py @@ -72,25 +72,16 @@ def read_csv(self, *args, **kwds): kwds = kwds.copy() kwds['engine'] = self.engine kwds['low_memory'] = self.low_memory - kwds['buffer_lines'] = 2 return read_csv(*args, **kwds) def read_table(self, *args, **kwds): kwds = kwds.copy() kwds['engine'] = self.engine kwds['low_memory'] = True - kwds['buffer_lines'] = 2 return read_table(*args, **kwds) class TestPythonParser(BaseParser, PythonParserTests, tm.TestCase): - """ - Class for Python parser testing. Unless specifically stated - as a PythonParser-specific issue, the goal is to eventually move - as many of these tests into ParserTests as soon as the C parser - can accept further specific arguments when parsing. - """ - engine = 'python' float_precision_choices = [None] diff --git a/pandas/io/tests/parser/test_read_fwf.py b/pandas/io/tests/parser/test_read_fwf.py index 5599188400368..11b10211650d6 100644 --- a/pandas/io/tests/parser/test_read_fwf.py +++ b/pandas/io/tests/parser/test_read_fwf.py @@ -217,8 +217,8 @@ def test_comment_fwf(self): 1 2. 4 #hello world 5 NaN 10.0 """ - expected = [[1, 2., 4], - [5, np.nan, 10.]] + expected = np.array([[1, 2., 4], + [5, np.nan, 10.]]) df = read_fwf(StringIO(data), colspecs=[(0, 3), (4, 9), (9, 25)], comment='#') tm.assert_almost_equal(df.values, expected) @@ -228,8 +228,8 @@ def test_1000_fwf(self): 1 2,334.0 5 10 13 10. 
""" - expected = [[1, 2334., 5], - [10, 13, 10]] + expected = np.array([[1, 2334., 5], + [10, 13, 10]]) df = read_fwf(StringIO(data), colspecs=[(0, 3), (3, 11), (12, 16)], thousands=',') tm.assert_almost_equal(df.values, expected) diff --git a/pandas/io/tests/parser/test_textreader.py b/pandas/io/tests/parser/test_textreader.py index f3de604f1ec48..c35cfca7012d3 100644 --- a/pandas/io/tests/parser/test_textreader.py +++ b/pandas/io/tests/parser/test_textreader.py @@ -76,8 +76,12 @@ def test_skipinitialspace(self): header=None) result = reader.read() - self.assert_numpy_array_equal(result[0], ['a', 'a', 'a', 'a']) - self.assert_numpy_array_equal(result[1], ['b', 'b', 'b', 'b']) + self.assert_numpy_array_equal(result[0], + np.array(['a', 'a', 'a', 'a'], + dtype=np.object_)) + self.assert_numpy_array_equal(result[1], + np.array(['b', 'b', 'b', 'b'], + dtype=np.object_)) def test_parse_booleans(self): data = 'True\nFalse\nTrue\nTrue' @@ -94,8 +98,10 @@ def test_delimit_whitespace(self): header=None) result = reader.read() - self.assert_numpy_array_equal(result[0], ['a', 'a', 'a']) - self.assert_numpy_array_equal(result[1], ['b', 'b', 'b']) + self.assert_numpy_array_equal(result[0], np.array(['a', 'a', 'a'], + dtype=np.object_)) + self.assert_numpy_array_equal(result[1], np.array(['b', 'b', 'b'], + dtype=np.object_)) def test_embedded_newline(self): data = 'a\n"hello\nthere"\nthis' @@ -103,7 +109,7 @@ def test_embedded_newline(self): reader = TextReader(StringIO(data), header=None) result = reader.read() - expected = ['a', 'hello\nthere', 'this'] + expected = np.array(['a', 'hello\nthere', 'this'], dtype=np.object_) self.assert_numpy_array_equal(result[0], expected) def test_euro_decimal(self): @@ -113,7 +119,7 @@ def test_euro_decimal(self): decimal=',', header=None) result = reader.read() - expected = [12345.67, 345.678] + expected = np.array([12345.67, 345.678]) tm.assert_almost_equal(result[0], expected) def test_integer_thousands(self): @@ -123,7 +129,7 @@ def 
test_integer_thousands(self): thousands=',', header=None) result = reader.read() - expected = [123456, 12500] + expected = np.array([123456, 12500], dtype=np.int64) tm.assert_almost_equal(result[0], expected) def test_integer_thousands_alt(self): diff --git a/pandas/io/tests/parser/test_unsupported.py b/pandas/io/tests/parser/test_unsupported.py index 1813a95d7a306..97862ffa90cef 100644 --- a/pandas/io/tests/parser/test_unsupported.py +++ b/pandas/io/tests/parser/test_unsupported.py @@ -20,6 +20,25 @@ class TestUnsupportedFeatures(tm.TestCase): + def test_mangle_dupe_cols_false(self): + # see gh-12935 + data = 'a b c\n1 2 3' + msg = 'is not supported' + + for engine in ('c', 'python'): + with tm.assertRaisesRegexp(ValueError, msg): + read_csv(StringIO(data), engine=engine, + mangle_dupe_cols=False) + + def test_nrows_and_chunksize(self): + data = 'a b c' + msg = "cannot be used together yet" + + for engine in ('c', 'python'): + with tm.assertRaisesRegexp(NotImplementedError, msg): + read_csv(StringIO(data), engine=engine, + nrows=10, chunksize=5) + def test_c_engine(self): # see gh-6607 data = 'a b c\n1 2 3' @@ -98,6 +117,32 @@ def test_python_engine(self): with tm.assertRaisesRegexp(ValueError, msg): read_csv(StringIO(data), engine=engine, **kwargs) + +class TestDeprecatedFeatures(tm.TestCase): + def test_deprecated_args(self): + data = '1,2,3' + + # deprecated arguments with non-default values + deprecated = { + 'buffer_lines': True, + 'compact_ints': True, + 'use_unsigned': True, + } + + engines = 'c', 'python' + + for engine in engines: + for arg, non_default_val in deprecated.items(): + if engine == 'python' and arg == 'buffer_lines': + # unsupported --> exception is raised first + continue + + with tm.assert_produces_warning( + FutureWarning, check_stacklevel=False): + kwargs = {arg: non_default_val} + read_csv(StringIO(data), engine=engine, + **kwargs) + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], 
exit=False) diff --git a/pandas/io/tests/parser/usecols.py b/pandas/io/tests/parser/usecols.py index 06275c168becd..0d3ae95f0d1d4 100644 --- a/pandas/io/tests/parser/usecols.py +++ b/pandas/io/tests/parser/usecols.py @@ -6,6 +6,7 @@ """ from datetime import datetime +import nose import pandas.util.testing as tm @@ -22,9 +23,8 @@ def test_raise_on_mixed_dtype_usecols(self): 1000,2000,3000 4000,5000,6000 """ - msg = ("The elements of \'usecols\' " - "must either be all strings " - "or all integers") + msg = ("The elements of 'usecols' must " + "either be all strings, all unicode, or all integers") usecols = [0, 'b', 2] with tm.assertRaisesRegexp(ValueError, msg): @@ -254,3 +254,103 @@ def test_usecols_with_parse_dates_and_usecol_names(self): usecols=[3, 0, 2], parse_dates=parse_dates) tm.assert_frame_equal(df, expected) + + def test_usecols_with_unicode_strings(self): + # see gh-13219 + + s = '''AAA,BBB,CCC,DDD + 0.056674973,8,True,a + 2.613230982,2,False,b + 3.568935038,7,False,a + ''' + + data = { + 'AAA': { + 0: 0.056674972999999997, + 1: 2.6132309819999997, + 2: 3.5689350380000002 + }, + 'BBB': {0: 8, 1: 2, 2: 7} + } + expected = DataFrame(data) + + df = self.read_csv(StringIO(s), usecols=[u'AAA', u'BBB']) + tm.assert_frame_equal(df, expected) + + def test_usecols_with_single_byte_unicode_strings(self): + # see gh-13219 + + s = '''A,B,C,D + 0.056674973,8,True,a + 2.613230982,2,False,b + 3.568935038,7,False,a + ''' + + data = { + 'A': { + 0: 0.056674972999999997, + 1: 2.6132309819999997, + 2: 3.5689350380000002 + }, + 'B': {0: 8, 1: 2, 2: 7} + } + expected = DataFrame(data) + + df = self.read_csv(StringIO(s), usecols=[u'A', u'B']) + tm.assert_frame_equal(df, expected) + + def test_usecols_with_mixed_encoding_strings(self): + s = '''AAA,BBB,CCC,DDD + 0.056674973,8,True,a + 2.613230982,2,False,b + 3.568935038,7,False,a + ''' + + msg = ("The elements of 'usecols' must " + "either be all strings, all unicode, or all integers") + + with 
tm.assertRaisesRegexp(ValueError, msg): + self.read_csv(StringIO(s), usecols=[u'AAA', b'BBB']) + + with tm.assertRaisesRegexp(ValueError, msg): + self.read_csv(StringIO(s), usecols=[b'AAA', u'BBB']) + + def test_usecols_with_multibyte_characters(self): + s = '''あああ,いい,ううう,ええええ + 0.056674973,8,True,a + 2.613230982,2,False,b + 3.568935038,7,False,a + ''' + data = { + 'あああ': { + 0: 0.056674972999999997, + 1: 2.6132309819999997, + 2: 3.5689350380000002 + }, + 'いい': {0: 8, 1: 2, 2: 7} + } + expected = DataFrame(data) + + df = self.read_csv(StringIO(s), usecols=['あああ', 'いい']) + tm.assert_frame_equal(df, expected) + + def test_usecols_with_multibyte_unicode_characters(self): + raise nose.SkipTest('TODO: see gh-13253') + + s = '''あああ,いい,ううう,ええええ + 0.056674973,8,True,a + 2.613230982,2,False,b + 3.568935038,7,False,a + ''' + data = { + 'あああ': { + 0: 0.056674972999999997, + 1: 2.6132309819999997, + 2: 3.5689350380000002 + }, + 'いい': {0: 8, 1: 2, 2: 7} + } + expected = DataFrame(data) + + df = self.read_csv(StringIO(s), usecols=[u'あああ', u'いい']) + tm.assert_frame_equal(df, expected) diff --git a/pandas/io/tests/test_data.py b/pandas/io/tests/test_data.py index d9c09fa788332..1efa8b13598a7 100644 --- a/pandas/io/tests/test_data.py +++ b/pandas/io/tests/test_data.py @@ -302,6 +302,8 @@ class TestYahooOptions(tm.TestCase): @classmethod def setUpClass(cls): super(TestYahooOptions, cls).setUpClass() + raise nose.SkipTest('disable Yahoo Options tests') + _skip_if_no_lxml() _skip_if_no_bs() raise nose.SkipTest('unreliable test') @@ -472,9 +474,6 @@ def test_options_source_warning(self): class TestDataReader(tm.TestCase): - def test_is_s3_url(self): - from pandas.io.common import _is_s3_url - self.assertTrue(_is_s3_url("s3://pandas/somethingelse.com")) @network def test_read_yahoo(self): @@ -503,6 +502,12 @@ def test_read_famafrench(self): class TestFred(tm.TestCase): + + @classmethod + def setUpClass(cls): + super(TestFred, cls).setUpClass() + raise nose.SkipTest('disable Fred tests') 
+ @network def test_fred(self): raise nose.SkipTest('buggy as of 2/14/16; maybe a data revision?') diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py index af053450d78c4..b7e5360a6f3db 100644 --- a/pandas/io/tests/test_excel.py +++ b/pandas/io/tests/test_excel.py @@ -13,7 +13,6 @@ from numpy import nan import numpy as np -from numpy.testing.decorators import slow import pandas as pd from pandas import DataFrame, Index, MultiIndex @@ -544,7 +543,7 @@ def test_read_from_s3_url(self): local_table = self.get_exceldf('test1') tm.assert_frame_equal(url_table, local_table) - @slow + @tm.slow def test_read_from_file_url(self): # FILE @@ -1102,9 +1101,9 @@ def test_sheets(self): tm.assert_frame_equal(self.frame, recons) recons = read_excel(reader, 'test2', index_col=0) tm.assert_frame_equal(self.tsframe, recons) - np.testing.assert_equal(2, len(reader.sheet_names)) - np.testing.assert_equal('test1', reader.sheet_names[0]) - np.testing.assert_equal('test2', reader.sheet_names[1]) + self.assertEqual(2, len(reader.sheet_names)) + self.assertEqual('test1', reader.sheet_names[0]) + self.assertEqual('test2', reader.sheet_names[1]) def test_colaliases(self): _skip_if_no_xlrd() diff --git a/pandas/io/tests/test_ga.py b/pandas/io/tests/test_ga.py index b8b698691a9f5..469e121f633d7 100644 --- a/pandas/io/tests/test_ga.py +++ b/pandas/io/tests/test_ga.py @@ -7,8 +7,8 @@ import nose import pandas as pd from pandas import compat -from pandas.util.testing import network, assert_frame_equal, with_connectivity_check -from numpy.testing.decorators import slow +from pandas.util.testing import (network, assert_frame_equal, + with_connectivity_check, slow) import pandas.util.testing as tm if compat.PY3: diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py index 21d0748fb6aba..5a95fe7727df0 100644 --- a/pandas/io/tests/test_html.py +++ b/pandas/io/tests/test_html.py @@ -16,7 +16,6 @@ import numpy as np from numpy.random import rand -from 
numpy.testing.decorators import slow from pandas import (DataFrame, MultiIndex, read_csv, Timestamp, Index, date_range, Series) @@ -129,7 +128,7 @@ def test_spam_url(self): assert_framelist_equal(df1, df2) - @slow + @tm.slow def test_banklist(self): df1 = self.read_html(self.banklist_data, '.*Florida.*', attrs={'id': 'table'}) @@ -289,9 +288,9 @@ def test_invalid_url(self): self.read_html('http://www.a23950sdfa908sd.com', match='.*Water.*') except ValueError as e: - tm.assert_equal(str(e), 'No tables found') + self.assertEqual(str(e), 'No tables found') - @slow + @tm.slow def test_file_url(self): url = self.banklist_data dfs = self.read_html(file_path_to_url(url), 'First', @@ -300,7 +299,7 @@ def test_file_url(self): for df in dfs: tm.assertIsInstance(df, DataFrame) - @slow + @tm.slow def test_invalid_table_attrs(self): url = self.banklist_data with tm.assertRaisesRegexp(ValueError, 'No tables found'): @@ -311,39 +310,39 @@ def _bank_data(self, *args, **kwargs): return self.read_html(self.banklist_data, 'Metcalf', attrs={'id': 'table'}, *args, **kwargs) - @slow + @tm.slow def test_multiindex_header(self): df = self._bank_data(header=[0, 1])[0] tm.assertIsInstance(df.columns, MultiIndex) - @slow + @tm.slow def test_multiindex_index(self): df = self._bank_data(index_col=[0, 1])[0] tm.assertIsInstance(df.index, MultiIndex) - @slow + @tm.slow def test_multiindex_header_index(self): df = self._bank_data(header=[0, 1], index_col=[0, 1])[0] tm.assertIsInstance(df.columns, MultiIndex) tm.assertIsInstance(df.index, MultiIndex) - @slow + @tm.slow def test_multiindex_header_skiprows_tuples(self): df = self._bank_data(header=[0, 1], skiprows=1, tupleize_cols=True)[0] tm.assertIsInstance(df.columns, Index) - @slow + @tm.slow def test_multiindex_header_skiprows(self): df = self._bank_data(header=[0, 1], skiprows=1)[0] tm.assertIsInstance(df.columns, MultiIndex) - @slow + @tm.slow def test_multiindex_header_index_skiprows(self): df = self._bank_data(header=[0, 1], index_col=[0, 
1], skiprows=1)[0] tm.assertIsInstance(df.index, MultiIndex) tm.assertIsInstance(df.columns, MultiIndex) - @slow + @tm.slow def test_regex_idempotency(self): url = self.banklist_data dfs = self.read_html(file_path_to_url(url), @@ -371,7 +370,7 @@ def test_python_docs_table(self): zz = [df.iloc[0, 0][0:4] for df in dfs] self.assertEqual(sorted(zz), sorted(['Repo', 'What'])) - @slow + @tm.slow def test_thousands_macau_stats(self): all_non_nan_table_index = -2 macau_data = os.path.join(DATA_PATH, 'macau.html') @@ -381,7 +380,7 @@ def test_thousands_macau_stats(self): self.assertFalse(any(s.isnull().any() for _, s in df.iteritems())) - @slow + @tm.slow def test_thousands_macau_index_col(self): all_non_nan_table_index = -2 macau_data = os.path.join(DATA_PATH, 'macau.html') @@ -520,9 +519,9 @@ def test_nyse_wsj_commas_table(self): 'Volume', 'Price', 'Chg', '% Chg']) nrows = 100 self.assertEqual(df.shape[0], nrows) - self.assertTrue(df.columns.equals(columns)) + self.assert_index_equal(df.columns, columns) - @slow + @tm.slow def test_banklist_header(self): from pandas.io.html import _remove_whitespace @@ -561,7 +560,7 @@ def try_remove_ws(x): coerce=True) tm.assert_frame_equal(converted, gtnew) - @slow + @tm.slow def test_gold_canyon(self): gc = 'Gold Canyon' with open(self.banklist_data, 'r') as f: @@ -663,7 +662,31 @@ def test_wikipedia_states_table(self): assert os.path.isfile(data), '%r is not a file' % data assert os.path.getsize(data), '%r is an empty file' % data result = self.read_html(data, 'Arizona', header=1)[0] - nose.tools.assert_equal(result['sq mi'].dtype, np.dtype('float64')) + self.assertEqual(result['sq mi'].dtype, np.dtype('float64')) + + def test_decimal_rows(self): + + # GH 12907 + data = StringIO('''<html> + <body> + <table> + <thead> + <tr> + <th>Header</th> + </tr> + </thead> + <tbody> + <tr> + <td>1100#101</td> + </tr> + </tbody> + </table> + </body> + </html>''') + expected = DataFrame(data={'Header': 1100.101}, index=[0]) + result = 
self.read_html(data, decimal='#')[0] + nose.tools.assert_equal(result['Header'].dtype, np.dtype('float64')) + tm.assert_frame_equal(result, expected) def test_bool_header_arg(self): # GH 6114 @@ -753,7 +776,7 @@ def test_works_on_valid_markup(self): tm.assertIsInstance(dfs, list) tm.assertIsInstance(dfs[0], DataFrame) - @slow + @tm.slow def test_fallback_success(self): _skip_if_none_of(('bs4', 'html5lib')) banklist_data = os.path.join(DATA_PATH, 'banklist.html') @@ -796,7 +819,7 @@ def get_elements_from_file(url, element='table'): return soup.find_all(element) -@slow +@tm.slow def test_bs4_finds_tables(): filepath = os.path.join(DATA_PATH, "spam.html") with warnings.catch_warnings(): @@ -811,13 +834,13 @@ def get_lxml_elements(url, element): return doc.xpath('.//{0}'.format(element)) -@slow +@tm.slow def test_lxml_finds_tables(): filepath = os.path.join(DATA_PATH, "spam.html") assert get_lxml_elements(filepath, 'table') -@slow +@tm.slow def test_lxml_finds_tbody(): filepath = os.path.join(DATA_PATH, "spam.html") assert get_lxml_elements(filepath, 'tbody') diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py index 7c61a6942e8e7..ad7d6c3c9f94f 100644 --- a/pandas/io/tests/test_packers.py +++ b/pandas/io/tests/test_packers.py @@ -150,7 +150,11 @@ def test_scalar_complex(self): def test_list_numpy_float(self): x = [np.float32(np.random.rand()) for i in range(5)] x_rec = self.encode_decode(x) - tm.assert_almost_equal(x, x_rec) + # current msgpack cannot distinguish list/tuple + tm.assert_almost_equal(tuple(x), x_rec) + + x_rec = self.encode_decode(tuple(x)) + tm.assert_almost_equal(tuple(x), x_rec) def test_list_numpy_float_complex(self): if not hasattr(np, 'complex128'): @@ -165,7 +169,11 @@ def test_list_numpy_float_complex(self): def test_list_float(self): x = [np.random.rand() for i in range(5)] x_rec = self.encode_decode(x) - tm.assert_almost_equal(x, x_rec) + # current msgpack cannot distinguish list/tuple + 
tm.assert_almost_equal(tuple(x), x_rec) + + x_rec = self.encode_decode(tuple(x)) + tm.assert_almost_equal(tuple(x), x_rec) def test_list_float_complex(self): x = [np.random.rand() for i in range(5)] + \ @@ -217,7 +225,11 @@ def test_numpy_array_complex(self): def test_list_mixed(self): x = [1.0, np.float32(3.5), np.complex128(4.25), u('foo')] x_rec = self.encode_decode(x) - tm.assert_almost_equal(x, x_rec) + # current msgpack cannot distinguish list/tuple + tm.assert_almost_equal(tuple(x), x_rec) + + x_rec = self.encode_decode(tuple(x)) + tm.assert_almost_equal(tuple(x), x_rec) class TestBasic(TestPackers): @@ -286,30 +298,30 @@ def test_basic_index(self): for s, i in self.d.items(): i_rec = self.encode_decode(i) - self.assertTrue(i.equals(i_rec)) + self.assert_index_equal(i, i_rec) # datetime with no freq (GH5506) i = Index([Timestamp('20130101'), Timestamp('20130103')]) i_rec = self.encode_decode(i) - self.assertTrue(i.equals(i_rec)) + self.assert_index_equal(i, i_rec) # datetime with timezone i = Index([Timestamp('20130101 9:00:00'), Timestamp( '20130103 11:00:00')]).tz_localize('US/Eastern') i_rec = self.encode_decode(i) - self.assertTrue(i.equals(i_rec)) + self.assert_index_equal(i, i_rec) def test_multi_index(self): for s, i in self.mi.items(): i_rec = self.encode_decode(i) - self.assertTrue(i.equals(i_rec)) + self.assert_index_equal(i, i_rec) def test_unicode(self): i = tm.makeUnicodeIndex(100) i_rec = self.encode_decode(i) - self.assertTrue(i.equals(i_rec)) + self.assert_index_equal(i, i_rec) class TestSeries(TestPackers): @@ -659,14 +671,14 @@ def _test_small_strings_no_warn(self, compress): with tm.assert_produces_warning(None): empty_unpacked = self.encode_decode(empty, compress=compress) - np.testing.assert_array_equal(empty_unpacked, empty) + tm.assert_numpy_array_equal(empty_unpacked, empty) self.assertTrue(empty_unpacked.flags.writeable) char = np.array([ord(b'a')], dtype='uint8') with tm.assert_produces_warning(None): char_unpacked = 
self.encode_decode(char, compress=compress) - np.testing.assert_array_equal(char_unpacked, char) + tm.assert_numpy_array_equal(char_unpacked, char) self.assertTrue(char_unpacked.flags.writeable) # if this test fails I am sorry because the interpreter is now in a # bad state where b'a' points to 98 == ord(b'b'). @@ -676,7 +688,7 @@ def _test_small_strings_no_warn(self, compress): # always be the same (unless we were able to mutate the shared # character singleton in which case ord(b'a') == ord(b'b'). self.assertEqual(ord(b'a'), ord(u'a')) - np.testing.assert_array_equal( + tm.assert_numpy_array_equal( char_unpacked, np.array([ord(b'b')], dtype='uint8'), ) diff --git a/pandas/io/tests/test_pickle.py b/pandas/io/tests/test_pickle.py index 4ff0363d07df6..c12d6e02e3a2e 100644 --- a/pandas/io/tests/test_pickle.py +++ b/pandas/io/tests/test_pickle.py @@ -85,7 +85,7 @@ def compare_series_ts(self, result, expected, typ, version): tm.assert_series_equal(result, expected) tm.assert_equal(result.index.freq, expected.index.freq) tm.assert_equal(result.index.freq.normalize, False) - tm.assert_numpy_array_equal(result > 0, expected > 0) + tm.assert_series_equal(result > 0, expected > 0) # GH 9291 freq = result.index.freq @@ -108,6 +108,13 @@ def compare_series_dt_tz(self, result, expected, typ, version): else: tm.assert_series_equal(result, expected) + def compare_series_cat(self, result, expected, typ, version): + # Categorical.ordered is changed in < 0.16.0 + if LooseVersion(version) < '0.16.0': + tm.assert_series_equal(result, expected, check_categorical=False) + else: + tm.assert_series_equal(result, expected) + def compare_frame_dt_mixed_tzs(self, result, expected, typ, version): # 8260 # dtype is object < 0.17.0 @@ -117,6 +124,16 @@ def compare_frame_dt_mixed_tzs(self, result, expected, typ, version): else: tm.assert_frame_equal(result, expected) + def compare_frame_cat_onecol(self, result, expected, typ, version): + # Categorical.ordered is changed in < 0.16.0 + if 
LooseVersion(version) < '0.16.0': + tm.assert_frame_equal(result, expected, check_categorical=False) + else: + tm.assert_frame_equal(result, expected) + + def compare_frame_cat_and_float(self, result, expected, typ, version): + self.compare_frame_cat_onecol(result, expected, typ, version) + def compare_index_period(self, result, expected, typ, version): tm.assert_index_equal(result, expected) tm.assertIsInstance(result.freq, MonthEnd) diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py index d21189fe91a2a..9c13162bd774c 100644 --- a/pandas/io/tests/test_pytables.py +++ b/pandas/io/tests/test_pytables.py @@ -46,8 +46,8 @@ from distutils.version import LooseVersion -_default_compressor = LooseVersion(tables.__version__) >= '2.2' \ - and 'blosc' or 'zlib' +_default_compressor = ('blosc' if LooseVersion(tables.__version__) >= '2.2' + else 'zlib') _multiprocess_can_split_ = False @@ -1004,7 +1004,7 @@ def roundtrip(s, key='data', encoding='latin-1', nan_rep=''): nan_rep=nan_rep) retr = read_hdf(store, key) s_nan = s.replace(nan_rep, np.nan) - assert_series_equal(s_nan, retr) + assert_series_equal(s_nan, retr, check_categorical=False) for s in examples: roundtrip(s) @@ -4128,10 +4128,11 @@ def test_nan_selection_bug_4858(self): result = store.select('df', where='values>2.0') assert_frame_equal(result, expected) - def test_start_stop(self): + def test_start_stop_table(self): with ensure_clean_store(self.path) as store: + # table df = DataFrame(dict(A=np.random.rand(20), B=np.random.rand(20))) store.append('df', df) @@ -4143,8 +4144,55 @@ def test_start_stop(self): # out of range result = store.select( 'df', [Term("columns=['A']")], start=30, stop=40) - assert(len(result) == 0) - assert(type(result) == DataFrame) + self.assertTrue(len(result) == 0) + expected = df.ix[30:40, ['A']] + tm.assert_frame_equal(result, expected) + + def test_start_stop_fixed(self): + + with ensure_clean_store(self.path) as store: + + # fixed, GH 8287 + df = 
DataFrame(dict(A=np.random.rand(20), + B=np.random.rand(20)), + index=pd.date_range('20130101', periods=20)) + store.put('df', df) + + result = store.select( + 'df', start=0, stop=5) + expected = df.iloc[0:5, :] + tm.assert_frame_equal(result, expected) + + result = store.select( + 'df', start=5, stop=10) + expected = df.iloc[5:10, :] + tm.assert_frame_equal(result, expected) + + # out of range + result = store.select( + 'df', start=30, stop=40) + expected = df.iloc[30:40, :] + tm.assert_frame_equal(result, expected) + + # series + s = df.A + store.put('s', s) + result = store.select('s', start=0, stop=5) + expected = s.iloc[0:5] + tm.assert_series_equal(result, expected) + + result = store.select('s', start=5, stop=10) + expected = s.iloc[5:10] + tm.assert_series_equal(result, expected) + + # sparse; not implemented + df = tm.makeDataFrame() + df.ix[3:5, 1:3] = np.nan + df.ix[8:10, -2] = np.nan + dfs = df.to_sparse() + store.put('dfs', dfs) + with self.assertRaises(NotImplementedError): + store.select('dfs', start=0, stop=5) def test_select_filter_corner(self): @@ -4829,6 +4877,9 @@ def test_read_nokey(self): df = DataFrame(np.random.rand(4, 5), index=list('abcd'), columns=list('ABCDE')) + + # Categorical dtype not supported for "fixed" format. So no need + # to test with that dtype in the dataframe here. 
with ensure_clean_path(self.path) as path: df.to_hdf(path, 'df', mode='a') reread = read_hdf(path) @@ -4836,6 +4887,60 @@ def test_read_nokey(self): df.to_hdf(path, 'df2', mode='a') self.assertRaises(ValueError, read_hdf, path) + def test_read_nokey_table(self): + # GH13231 + df = DataFrame({'i': range(5), + 'c': Series(list('abacd'), dtype='category')}) + + with ensure_clean_path(self.path) as path: + df.to_hdf(path, 'df', mode='a', format='table') + reread = read_hdf(path) + assert_frame_equal(df, reread) + df.to_hdf(path, 'df2', mode='a', format='table') + self.assertRaises(ValueError, read_hdf, path) + + def test_read_nokey_empty(self): + with ensure_clean_path(self.path) as path: + store = HDFStore(path) + store.close() + self.assertRaises(ValueError, read_hdf, path) + + def test_read_from_pathlib_path(self): + + # GH11773 + tm._skip_if_no_pathlib() + + from pathlib import Path + + expected = DataFrame(np.random.rand(4, 5), + index=list('abcd'), + columns=list('ABCDE')) + with ensure_clean_path(self.path) as filename: + path_obj = Path(filename) + + expected.to_hdf(path_obj, 'df', mode='a') + actual = read_hdf(path_obj, 'df') + + tm.assert_frame_equal(expected, actual) + + def test_read_from_py_localpath(self): + + # GH11773 + tm._skip_if_no_localpath() + + from py.path import local as LocalPath + + expected = DataFrame(np.random.rand(4, 5), + index=list('abcd'), + columns=list('ABCDE')) + with ensure_clean_path(self.path) as filename: + path_obj = LocalPath(filename) + + expected.to_hdf(path_obj, 'df', mode='a') + actual = read_hdf(path_obj, 'df') + + tm.assert_frame_equal(expected, actual) + class TestHDFComplexValues(Base): # GH10447 @@ -5196,7 +5301,7 @@ def test_fixed_offset_tz(self): with ensure_clean_store(self.path) as store: store['frame'] = frame recons = store['frame'] - self.assertTrue(recons.index.equals(rng)) + self.assert_index_equal(recons.index, rng) self.assertEqual(rng.tz, recons.index.tz) def test_store_timezone(self): diff --git 
a/pandas/io/tests/test_s3.py b/pandas/io/tests/test_s3.py new file mode 100644 index 0000000000000..8058698a906ea --- /dev/null +++ b/pandas/io/tests/test_s3.py @@ -0,0 +1,14 @@ +import nose +from pandas.util import testing as tm + +from pandas.io.common import _is_s3_url + + +class TestS3URL(tm.TestCase): + def test_is_s3_url(self): + self.assertTrue(_is_s3_url("s3://pandas/somethingelse.com")) + self.assertFalse(_is_s3_url("s4://pandas/somethingelse.com")) + +if __name__ == '__main__': + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], + exit=False) diff --git a/pandas/io/tests/test_stata.py b/pandas/io/tests/test_stata.py index fe782bb86d1be..830c68d62efad 100644 --- a/pandas/io/tests/test_stata.py +++ b/pandas/io/tests/test_stata.py @@ -179,7 +179,7 @@ def test_read_dta2(self): w = [x for x in w if x.category is UserWarning] # should get warning for each call to read_dta - tm.assert_equal(len(w), 3) + self.assertEqual(len(w), 3) # buggy test because of the NaT comparison on certain platforms # Format 113 test fails since it does not support tc and tC formats @@ -234,10 +234,11 @@ def test_read_dta4(self): expected = pd.concat([expected[col].astype('category') for col in expected], axis=1) - tm.assert_frame_equal(parsed_113, expected) - tm.assert_frame_equal(parsed_114, expected) - tm.assert_frame_equal(parsed_115, expected) - tm.assert_frame_equal(parsed_117, expected) + # stata doesn't save .category metadata + tm.assert_frame_equal(parsed_113, expected, check_categorical=False) + tm.assert_frame_equal(parsed_114, expected, check_categorical=False) + tm.assert_frame_equal(parsed_115, expected, check_categorical=False) + tm.assert_frame_equal(parsed_117, expected, check_categorical=False) # File containing strls def test_read_dta12(self): @@ -374,7 +375,7 @@ def test_read_write_dta11(self): with warnings.catch_warnings(record=True) as w: original.to_stata(path, None) # should get a warning for that format. 
-            tm.assert_equal(len(w), 1)
+            self.assertEqual(len(w), 1)

         written_and_read_again = self.read_dta(path)
         tm.assert_frame_equal(
@@ -402,7 +403,7 @@ def test_read_write_dta12(self):
             with warnings.catch_warnings(record=True) as w:
                 original.to_stata(path, None)
                 # should get a warning for that format.
-                tm.assert_equal(len(w), 1)
+                self.assertEqual(len(w), 1)

             written_and_read_again = self.read_dta(path)
             tm.assert_frame_equal(
@@ -872,8 +873,8 @@ def test_categorical_writing(self):
                 # Silence warnings
                 original.to_stata(path)
                 written_and_read_again = self.read_dta(path)
-                tm.assert_frame_equal(
-                    written_and_read_again.set_index('index'), expected)
+                res = written_and_read_again.set_index('index')
+                tm.assert_frame_equal(res, expected, check_categorical=False)

     def test_categorical_warnings_and_errors(self):
         # Warning for non-string labels
@@ -903,7 +904,7 @@ def test_categorical_warnings_and_errors(self):
         with warnings.catch_warnings(record=True) as w:
             original.to_stata(path)
             # should get a warning for mixed content
-            tm.assert_equal(len(w), 1)
+            self.assertEqual(len(w), 1)

     def test_categorical_with_stata_missing_values(self):
         values = [['a' + str(i)] for i in range(120)]
@@ -915,8 +916,8 @@ def test_categorical_with_stata_missing_values(self):
         with tm.ensure_clean() as path:
             original.to_stata(path)
             written_and_read_again = self.read_dta(path)
-            tm.assert_frame_equal(
-                written_and_read_again.set_index('index'), original)
+            res = written_and_read_again.set_index('index')
+            tm.assert_frame_equal(res, original, check_categorical=False)

     def test_categorical_order(self):
         # Directly construct using expected codes
@@ -945,8 +946,8 @@ def test_categorical_order(self):
         # Read with and with out categoricals, ensure order is identical
         parsed_115 = read_stata(self.dta19_115)
         parsed_117 = read_stata(self.dta19_117)
-        tm.assert_frame_equal(expected, parsed_115)
-        tm.assert_frame_equal(expected, parsed_117)
+        tm.assert_frame_equal(expected, parsed_115, check_categorical=False)
+        tm.assert_frame_equal(expected, parsed_117, check_categorical=False)

         # Check identity of codes
         for col in expected:
@@ -969,8 +970,10 @@ def test_categorical_sorting(self):
         categories = ["Poor", "Fair", "Good", "Very good", "Excellent"]
         cat = pd.Categorical.from_codes(codes=codes, categories=categories)
         expected = pd.Series(cat, name='srh')
-        tm.assert_series_equal(expected, parsed_115["srh"])
-        tm.assert_series_equal(expected, parsed_117["srh"])
+        tm.assert_series_equal(expected, parsed_115["srh"],
+                               check_categorical=False)
+        tm.assert_series_equal(expected, parsed_117["srh"],
+                               check_categorical=False)

     def test_categorical_ordering(self):
         parsed_115 = read_stata(self.dta19_115)
@@ -983,10 +986,10 @@ def test_categorical_ordering(self):
         for col in parsed_115:
             if not is_categorical_dtype(parsed_115[col]):
                 continue
-            tm.assert_equal(True, parsed_115[col].cat.ordered)
-            tm.assert_equal(True, parsed_117[col].cat.ordered)
-            tm.assert_equal(False, parsed_115_unordered[col].cat.ordered)
-            tm.assert_equal(False, parsed_117_unordered[col].cat.ordered)
+            self.assertEqual(True, parsed_115[col].cat.ordered)
+            self.assertEqual(True, parsed_117[col].cat.ordered)
+            self.assertEqual(False, parsed_115_unordered[col].cat.ordered)
+            self.assertEqual(False, parsed_117_unordered[col].cat.ordered)

     def test_read_chunks_117(self):
         files_117 = [self.dta1_117, self.dta2_117, self.dta3_117,
@@ -1021,7 +1024,8 @@ def test_read_chunks_117(self):
                 from_frame = parsed.iloc[pos:pos + chunksize, :]
                 tm.assert_frame_equal(
                     from_frame, chunk, check_dtype=False,
-                    check_datetimelike_compat=True)
+                    check_datetimelike_compat=True,
+                    check_categorical=False)
                 pos += chunksize
             itr.close()
@@ -1087,7 +1091,8 @@ def test_read_chunks_115(self):
                 from_frame = parsed.iloc[pos:pos + chunksize, :]
                 tm.assert_frame_equal(
                     from_frame, chunk, check_dtype=False,
-                    check_datetimelike_compat=True)
+                    check_datetimelike_compat=True,
+                    check_categorical=False)
                 pos += chunksize
             itr.close()
diff --git a/pandas/io/tests/test_wb.py b/pandas/io/tests/test_wb.py
index 58386c3f1c145..42884b19de03a 100644
--- a/pandas/io/tests/test_wb.py
+++ b/pandas/io/tests/test_wb.py
@@ -6,7 +6,6 @@
 from pandas.compat import u
 from pandas.util.testing import network
 from pandas.util.testing import assert_frame_equal
-from numpy.testing.decorators import slow
 import pandas.util.testing as tm

 # deprecated
@@ -15,7 +14,7 @@
 class TestWB(tm.TestCase):

-    @slow
+    @tm.slow
     @network
     def test_wdi_search(self):

@@ -26,7 +25,7 @@ def test_wdi_search(self):
         result = search('gdp.*capita.*constant')
         self.assertTrue(result.name.str.contains('GDP').any())

-    @slow
+    @tm.slow
     @network
     def test_wdi_download(self):

@@ -55,7 +54,7 @@ def test_wdi_download(self):
         expected.index = result.index
         assert_frame_equal(result, pandas.DataFrame(expected))

-    @slow
+    @tm.slow
     @network
     def test_wdi_download_w_retired_indicator(self):

@@ -85,7 +84,7 @@ def test_wdi_download_w_retired_indicator(self):
         if len(result) > 0:
             raise nose.SkipTest("Invalid results")

-    @slow
+    @tm.slow
     @network
     def test_wdi_download_w_crash_inducing_countrycode(self):

@@ -103,7 +102,7 @@ def test_wdi_download_w_crash_inducing_countrycode(self):
         if len(result) > 0:
             raise nose.SkipTest("Invalid results")

-    @slow
+    @tm.slow
     @network
     def test_wdi_get_countries(self):
         result = get_countries()
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index 328166168a3fc..a9c7f93097f1b 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -493,7 +493,21 @@ def fast_unique_multiple_list(list lists):

 @cython.wraparound(False)
 @cython.boundscheck(False)
-def fast_unique_multiple_list_gen(object gen):
+def fast_unique_multiple_list_gen(object gen, bint sort=True):
+    """
+    Generate a list of unique values from a generator of lists.
+
+    Parameters
+    ----------
+    gen : generator object
+        A generator of lists from which the unique list is created
+    sort : boolean
+        Whether or not to sort the resulting unique list
+
+    Returns
+    -------
+    unique_list : list of unique values
+    """
     cdef:
         list buf
         Py_ssize_t j, n
@@ -508,11 +522,11 @@ def fast_unique_multiple_list_gen(object gen):
             if val not in table:
                 table[val] = stub
                 uniques.append(val)
-
-    try:
-        uniques.sort()
-    except Exception:
-        pass
+    if sort:
+        try:
+            uniques.sort()
+        except Exception:
+            pass

     return uniques
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index 94d7f36f4f205..d7ddaee658fe7 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -1018,7 +1018,7 @@ cdef class TextReader:
             col_res = _maybe_upcast(col_res)

         if issubclass(col_res.dtype.type, np.integer) and self.compact_ints:
-            col_res = downcast_int64(col_res, self.use_unsigned)
+            col_res = lib.downcast_int64(col_res, na_values, self.use_unsigned)

         if col_res is None:
             raise CParserError('Unable to parse column %d' % i)
@@ -1501,6 +1501,7 @@ cdef inline void _to_fw_string_nogil(parser_t *parser, int col, int line_start,
         data += width

 cdef char* cinf = b'inf'
+cdef char* cposinf = b'+inf'
 cdef char* cneginf = b'-inf'

 cdef _try_double(parser_t *parser, int col, int line_start, int line_end,
@@ -1562,7 +1563,7 @@ cdef inline int _try_double_nogil(parser_t *parser, int col, int line_start, int
                 data[0] = parser.converter(word, &p_end, parser.decimal,
                                            parser.sci, parser.thousands, 1)
                 if errno != 0 or p_end[0] or p_end == word:
-                    if strcasecmp(word, cinf) == 0:
+                    if strcasecmp(word, cinf) == 0 or strcasecmp(word, cposinf) == 0:
                         data[0] = INF
                     elif strcasecmp(word, cneginf) == 0:
                         data[0] = NEGINF
@@ -1581,7 +1582,7 @@ cdef inline int _try_double_nogil(parser_t *parser, int col, int line_start, int
                 data[0] = parser.converter(word, &p_end, parser.decimal,
                                            parser.sci, parser.thousands, 1)
                 if errno != 0 or p_end[0] or p_end == word:
-                    if strcasecmp(word, cinf) == 0:
+                    if strcasecmp(word, cinf) == 0 or strcasecmp(word, cposinf) == 0:
                         data[0] = INF
                     elif strcasecmp(word, cneginf) == 0:
                         data[0] = NEGINF
@@ -1865,76 +1866,6 @@ cdef raise_parser_error(object base, parser_t *parser):
     raise CParserError(message)

-def downcast_int64(ndarray[int64_t] arr, bint use_unsigned=0):
-    cdef:
-        Py_ssize_t i, n = len(arr)
-        int64_t mx = INT64_MIN + 1, mn = INT64_MAX
-        int64_t NA = na_values[np.int64]
-        int64_t val
-        ndarray[uint8_t] mask
-        int na_count = 0
-
-    _mask = np.empty(n, dtype=bool)
-    mask = _mask.view(np.uint8)
-
-    for i in range(n):
-        val = arr[i]
-
-        if val == NA:
-            mask[i] = 1
-            na_count += 1
-            continue
-
-        # not NA
-        mask[i] = 0
-
-        if val > mx:
-            mx = val
-
-        if val < mn:
-            mn = val
-
-    if mn >= 0 and use_unsigned:
-        if mx <= UINT8_MAX - 1:
-            result = arr.astype(np.uint8)
-            if na_count:
-                np.putmask(result, _mask, na_values[np.uint8])
-            return result
-
-        if mx <= UINT16_MAX - 1:
-            result = arr.astype(np.uint16)
-            if na_count:
-                np.putmask(result, _mask, na_values[np.uint16])
-            return result
-
-        if mx <= UINT32_MAX - 1:
-            result = arr.astype(np.uint32)
-            if na_count:
-                np.putmask(result, _mask, na_values[np.uint32])
-            return result
-
-    else:
-        if mn >= INT8_MIN + 1 and mx <= INT8_MAX:
-            result = arr.astype(np.int8)
-            if na_count:
-                np.putmask(result, _mask, na_values[np.int8])
-            return result
-
-        if mn >= INT16_MIN + 1 and mx <= INT16_MAX:
-            result = arr.astype(np.int16)
-            if na_count:
-                np.putmask(result, _mask, na_values[np.int16])
-            return result
-
-        if mn >= INT32_MIN + 1 and mx <= INT32_MAX:
-            result = arr.astype(np.int32)
-            if na_count:
-                np.putmask(result, _mask, na_values[np.int32])
-            return result
-
-    return arr
-
-
 def _concatenate_chunks(list chunks):
     cdef:
         list names = list(chunks[0].keys())
diff --git a/pandas/sparse/array.py b/pandas/sparse/array.py
index e114bee87ca27..0312fb023f7fd 100644
--- a/pandas/sparse/array.py
+++ b/pandas/sparse/array.py
@@ -152,9 +152,17 @@ def __new__(cls, data, sparse_index=None, index=None, kind='integer',

         # Create array, do *not* copy data by default
         if copy:
-            subarr = np.array(values, dtype=dtype, copy=True)
+            try:
+                # ToDo: Can remove this error handling when we actually
+                # support other dtypes
+                subarr = np.array(values, dtype=dtype, copy=True)
+            except ValueError:
+                subarr = np.array(values, copy=True)
         else:
-            subarr = np.asarray(values, dtype=dtype)
+            try:
+                subarr = np.asarray(values, dtype=dtype)
+            except ValueError:
+                subarr = np.asarray(values)

         # if we have a bool type, make sure that we have a bool fill_value
         if ((dtype is not None and issubclass(dtype.type, np.bool_)) or
@@ -437,12 +445,12 @@ def count(self):

     @property
     def _null_fill_value(self):
-        return np.isnan(self.fill_value)
+        return com.isnull(self.fill_value)

     @property
     def _valid_sp_values(self):
         sp_vals = self.sp_values
-        mask = np.isfinite(sp_vals)
+        mask = com.notnull(sp_vals)
         return sp_vals[mask]

     @Appender(_index_shared_docs['fillna'] % _sparray_doc_kwargs)
@@ -616,8 +624,8 @@ def make_sparse(arr, kind='block', fill_value=nan):
     if arr.ndim > 1:
         raise TypeError("expected dimension <= 1 data")

-    if np.isnan(fill_value):
-        mask = ~np.isnan(arr)
+    if com.isnull(fill_value):
+        mask = com.notnull(arr)
     else:
         mask = arr != fill_value
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index a783a7c596955..519068b97a010 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -5,14 +5,13 @@

 # pylint: disable=E1101,E1103,W0231

-from numpy import nan, ndarray
 import numpy as np
 import warnings
 import operator

 from pandas.compat.numpy import function as nv
 from pandas.core.common import isnull, _values_from_object, _maybe_match_name
-from pandas.core.index import Index, _ensure_index
+from pandas.core.index import Index, _ensure_index, InvalidIndexError
 from pandas.core.series import Series
 from pandas.core.frame import DataFrame
 from pandas.core.internals import SingleBlockManager
@@ -135,7 +134,7 @@ def __init__(self, data=None, index=None, sparse_index=None, kind='block',
                 if is_sparse_array:
                     fill_value = data.fill_value
                 else:
-                    fill_value = nan
+                    fill_value = np.nan

         if is_sparse_array:
             if isinstance(data, SparseSeries) and index is None:
@@ -393,8 +392,10 @@ def _get_val_at(self, loc):

     def __getitem__(self, key):
         try:
-            return self._get_val_at(self.index.get_loc(key))
+            return self.index.get_value(self, key)
+
+        except InvalidIndexError:
+            pass
         except KeyError:
             if isinstance(key, (int, np.integer)):
                 return self._get_val_at(key)
@@ -406,13 +407,12 @@ def __getitem__(self, key):
             # Could not hash item, must be array-like?
             pass

-        # is there a case where this would NOT be an ndarray?
-        # need to find an example, I took out the case for now
-        key = _values_from_object(key)
-        dataSlice = self.values[key]
-        new_index = Index(self.index.view(ndarray)[key])
-        return self._constructor(dataSlice, index=new_index).__finalize__(self)
+        if self.index.nlevels > 1 and isinstance(key, tuple):
+            # to handle MultiIndex labels
+            key = self.index.get_loc(key)
+        return self._constructor(self.values[key],
+                                 index=self.index[key]).__finalize__(self)

     def _get_values(self, indexer):
         try:
diff --git a/pandas/sparse/tests/test_array.py b/pandas/sparse/tests/test_array.py
index 26d018c56a8a8..dd2126d0f52d2 100644
--- a/pandas/sparse/tests/test_array.py
+++ b/pandas/sparse/tests/test_array.py
@@ -46,6 +46,17 @@ def test_constructor_dtype(self):
         self.assertEqual(arr.dtype, np.int64)
         self.assertEqual(arr.fill_value, 0)

+    def test_constructor_object_dtype(self):
+        # GH 11856
+        arr = SparseArray(['A', 'A', np.nan, 'B'], dtype=np.object)
+        self.assertEqual(arr.dtype, np.object)
+        self.assertTrue(np.isnan(arr.fill_value))
+
+        arr = SparseArray(['A', 'A', np.nan, 'B'], dtype=np.object,
+                          fill_value='A')
+        self.assertEqual(arr.dtype, np.object)
+        self.assertEqual(arr.fill_value, 'A')
+
     def test_constructor_spindex_dtype(self):
         arr = SparseArray(data=[1, 2], sparse_index=IntIndex(4, [1, 2]))
         tm.assert_sp_array_equal(arr, SparseArray([np.nan, 1, 2, np.nan]))
diff --git a/pandas/sparse/tests/test_format.py b/pandas/sparse/tests/test_format.py
new file mode 100644
index 0000000000000..9bdc1fdd101ea
--- /dev/null
+++ b/pandas/sparse/tests/test_format.py
@@ -0,0 +1,64 @@
+# -*- coding: utf-8 -*-
+from __future__ import print_function
+
+import numpy as np
+import pandas as pd
+
+import pandas.util.testing as tm
+from pandas.compat import (is_platform_windows,
+                           is_platform_32bit)
+from pandas.core.config import option_context
+
+
+use_32bit_repr = is_platform_windows() or is_platform_32bit()
+
+
+class TestSeriesFormatting(tm.TestCase):
+
+    _multiprocess_can_split_ = True
+
+    @property
+    def dtype_format_for_platform(self):
+        return '' if use_32bit_repr else ', dtype=int32'
+
+    def test_sparse_max_row(self):
+        s = pd.Series([1, np.nan, np.nan, 3, np.nan]).to_sparse()
+        result = repr(s)
+        dfm = self.dtype_format_for_platform
+        exp = ("0 1.0\n1 NaN\n2 NaN\n3 3.0\n"
+               "4 NaN\ndtype: float64\nBlockIndex\n"
+               "Block locations: array([0, 3]{0})\n"
+               "Block lengths: array([1, 1]{0})".format(dfm))
+        self.assertEqual(result, exp)
+
+        with option_context("display.max_rows", 3):
+            # GH 10560
+            result = repr(s)
+            exp = ("0 1.0\n ... \n4 NaN\n"
+                   "dtype: float64\nBlockIndex\n"
+                   "Block locations: array([0, 3]{0})\n"
+                   "Block lengths: array([1, 1]{0})".format(dfm))
+            self.assertEqual(result, exp)
+
+    def test_sparse_mi_max_row(self):
+        idx = pd.MultiIndex.from_tuples([('A', 0), ('A', 1), ('B', 0),
+                                         ('C', 0), ('C', 1), ('C', 2)])
+        s = pd.Series([1, np.nan, np.nan, 3, np.nan, np.nan],
+                      index=idx).to_sparse()
+        result = repr(s)
+        dfm = self.dtype_format_for_platform
+        exp = ("A 0 1.0\n 1 NaN\nB 0 NaN\n"
+               "C 0 3.0\n 1 NaN\n 2 NaN\n"
+               "dtype: float64\nBlockIndex\n"
+               "Block locations: array([0, 3]{0})\n"
+               "Block lengths: array([1, 1]{0})".format(dfm))
+        self.assertEqual(result, exp)
+
+        with option_context("display.max_rows", 3):
+            # GH 13144
+            result = repr(s)
+            exp = ("A 0 1.0\n ... \nC 2 NaN\n"
+                   "dtype: float64\nBlockIndex\n"
+                   "Block locations: array([0, 3]{0})\n"
+                   "Block lengths: array([1, 1]{0})".format(dfm))
+            self.assertEqual(result, exp)
diff --git a/pandas/sparse/tests/test_frame.py b/pandas/sparse/tests/test_frame.py
index fde4ad15e1185..43d35a4e7f72e 100644
--- a/pandas/sparse/tests/test_frame.py
+++ b/pandas/sparse/tests/test_frame.py
@@ -97,8 +97,11 @@ def test_constructor(self):
         # constructed zframe from matrix above
         self.assertEqual(self.zframe['A'].fill_value, 0)
-        tm.assert_almost_equal([0, 0, 0, 0, 1, 2, 3, 4, 5, 6],
-                               self.zframe['A'].values)
+        tm.assert_numpy_array_equal(pd.SparseArray([1., 2., 3., 4., 5., 6.]),
+                                    self.zframe['A'].values)
+        tm.assert_numpy_array_equal(np.array([0., 0., 0., 0., 1., 2.,
+                                              3., 4., 5., 6.]),
+                                    self.zframe['A'].to_dense().values)

         # construct no data
         sdf = SparseDataFrame(columns=np.arange(10), index=np.arange(10))
@@ -380,8 +383,8 @@ def test_set_value(self):
         res2 = res.set_value('foobar', 'qux', 1.5)

         self.assertIsNot(res2, res)
-        self.assert_numpy_array_equal(res2.columns,
-                                      list(self.frame.columns) + ['qux'])
+        self.assert_index_equal(res2.columns,
+                                pd.Index(list(self.frame.columns) + ['qux']))
         self.assertEqual(res2.get_value('foobar', 'qux'), 1.5)

     def test_fancy_index_misc(self):
@@ -407,7 +410,7 @@ def test_getitem_overload(self):
         subindex = self.frame.index[indexer]
         subframe = self.frame[indexer]

-        self.assert_numpy_array_equal(subindex, subframe.index)
+        self.assert_index_equal(subindex, subframe.index)
         self.assertRaises(Exception, self.frame.__getitem__, indexer[:-1])

     def test_setitem(self):
diff --git a/pandas/sparse/tests/test_groupby.py b/pandas/sparse/tests/test_groupby.py
new file mode 100644
index 0000000000000..0cb33f4ea0a56
--- /dev/null
+++ b/pandas/sparse/tests/test_groupby.py
@@ -0,0 +1,46 @@
+# -*- coding: utf-8 -*-
+import numpy as np
+import pandas as pd
+import pandas.util.testing as tm
+
+
+class TestSparseGroupBy(tm.TestCase):
+
+    _multiprocess_can_split_ = True
+
+    def setUp(self):
+        self.dense = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+                                         'foo', 'bar', 'foo', 'foo'],
+                                   'B': ['one', 'one', 'two', 'three',
+                                         'two', 'two', 'one', 'three'],
+                                   'C': np.random.randn(8),
+                                   'D': np.random.randn(8),
+                                   'E': [np.nan, np.nan, 1, 2,
+                                         np.nan, 1, np.nan, np.nan]})
+        self.sparse = self.dense.to_sparse()
+
+    def test_first_last_nth(self):
+        # tests for first / last / nth
+        sparse_grouped = self.sparse.groupby('A')
+        dense_grouped = self.dense.groupby('A')
+
+        tm.assert_frame_equal(sparse_grouped.first(),
+                              dense_grouped.first())
+        tm.assert_frame_equal(sparse_grouped.last(),
+                              dense_grouped.last())
+        tm.assert_frame_equal(sparse_grouped.nth(1),
+                              dense_grouped.nth(1))
+
+    def test_aggfuncs(self):
+        sparse_grouped = self.sparse.groupby('A')
+        dense_grouped = self.dense.groupby('A')
+
+        tm.assert_frame_equal(sparse_grouped.mean(),
+                              dense_grouped.mean())
+
+        # ToDo: sparse sum includes str column
+        # tm.assert_frame_equal(sparse_grouped.sum(),
+        #                       dense_grouped.sum())
+
+        tm.assert_frame_equal(sparse_grouped.count(),
+                              dense_grouped.count())
diff --git a/pandas/sparse/tests/test_indexing.py b/pandas/sparse/tests/test_indexing.py
index ca2996941aef7..1f88d22bd8f93 100644
--- a/pandas/sparse/tests/test_indexing.py
+++ b/pandas/sparse/tests/test_indexing.py
@@ -10,9 +10,13 @@ class TestSparseSeriesIndexing(tm.TestCase):

     _multiprocess_can_split_ = True

+    def setUp(self):
+        self.orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
+        self.sparse = self.orig.to_sparse()
+
     def test_getitem(self):
-        orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
-        sparse = orig.to_sparse()
+        orig = self.orig
+        sparse = self.sparse

         self.assertEqual(sparse[0], 1)
         self.assertTrue(np.isnan(sparse[1]))
@@ -33,8 +37,9 @@ def test_getitem(self):
         tm.assert_sp_series_equal(result, exp)

     def test_getitem_slice(self):
-        orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
-        sparse = orig.to_sparse()
+        orig = self.orig
+        sparse = self.sparse
+
         tm.assert_sp_series_equal(sparse[:2], orig[:2].to_sparse())
         tm.assert_sp_series_equal(sparse[4:2], orig[4:2].to_sparse())
         tm.assert_sp_series_equal(sparse[::2], orig[::2].to_sparse())
@@ -84,8 +89,8 @@ def test_getitem_slice_fill_value(self):
                                   orig[-5:].to_sparse(fill_value=0))

     def test_loc(self):
-        orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
-        sparse = orig.to_sparse()
+        orig = self.orig
+        sparse = self.sparse

         self.assertEqual(sparse.loc[0], 1)
         self.assertTrue(np.isnan(sparse.loc[1]))
@@ -154,10 +159,17 @@ def test_loc_index_fill_value(self):
         tm.assert_sp_series_equal(result, exp)

     def test_loc_slice(self):
-        orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
-        sparse = orig.to_sparse()
+        orig = self.orig
+        sparse = self.sparse
         tm.assert_sp_series_equal(sparse.loc[2:], orig.loc[2:].to_sparse())

+    def test_loc_slice_index_fill_value(self):
+        orig = pd.Series([1, np.nan, 0, 3, 0], index=list('ABCDE'))
+        sparse = orig.to_sparse(fill_value=0)
+
+        tm.assert_sp_series_equal(sparse.loc['C':],
+                                  orig.loc['C':].to_sparse(fill_value=0))
+
     def test_loc_slice_fill_value(self):
         orig = pd.Series([1, np.nan, 0, 3, 0])
         sparse = orig.to_sparse(fill_value=0)
@@ -165,8 +177,8 @@ def test_loc_slice_fill_value(self):
                                   orig.loc[2:].to_sparse(fill_value=0))

     def test_iloc(self):
-        orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
-        sparse = orig.to_sparse()
+        orig = self.orig
+        sparse = self.sparse

         self.assertEqual(sparse.iloc[3], 3)
         self.assertTrue(np.isnan(sparse.iloc[2]))
@@ -234,8 +246,9 @@ def test_at_fill_value(self):
         self.assertEqual(sparse.at['e'], orig.at['e'])

     def test_iat(self):
-        orig = pd.Series([1, np.nan, np.nan, 3, np.nan])
-        sparse = orig.to_sparse()
+        orig = self.orig
+        sparse = self.sparse
+
         self.assertEqual(sparse.iat[0], orig.iat[0])
         self.assertTrue(np.isnan(sparse.iat[1]))
         self.assertTrue(np.isnan(sparse.iat[2]))
@@ -356,6 +369,111 @@ def test_reindex_fill_value(self):
         tm.assert_sp_series_equal(res, exp)


+class TestSparseSeriesMultiIndexing(TestSparseSeriesIndexing):
+
+    _multiprocess_can_split_ = True
+
+    def setUp(self):
+        # Mi with duplicated values
+        idx = pd.MultiIndex.from_tuples([('A', 0), ('A', 1), ('B', 0),
+                                         ('C', 0), ('C', 1)])
+        self.orig = pd.Series([1, np.nan, np.nan, 3, np.nan], index=idx)
+        self.sparse = self.orig.to_sparse()
+
+    def test_getitem_multi(self):
+        orig = self.orig
+        sparse = self.sparse
+
+        self.assertEqual(sparse[0], orig[0])
+        self.assertTrue(np.isnan(sparse[1]))
+        self.assertEqual(sparse[3], orig[3])
+
+        tm.assert_sp_series_equal(sparse['A'], orig['A'].to_sparse())
+        tm.assert_sp_series_equal(sparse['B'], orig['B'].to_sparse())
+
+        result = sparse[[1, 3, 4]]
+        exp = orig[[1, 3, 4]].to_sparse()
+        tm.assert_sp_series_equal(result, exp)
+
+        # dense array
+        result = sparse[orig % 2 == 1]
+        exp = orig[orig % 2 == 1].to_sparse()
+        tm.assert_sp_series_equal(result, exp)
+
+        # sparse array (actuary it coerces to normal Series)
+        result = sparse[sparse % 2 == 1]
+        exp = orig[orig % 2 == 1].to_sparse()
+        tm.assert_sp_series_equal(result, exp)
+
+    def test_getitem_multi_tuple(self):
+        orig = self.orig
+        sparse = self.sparse
+
+        self.assertEqual(sparse['C', 0], orig['C', 0])
+        self.assertTrue(np.isnan(sparse['A', 1]))
+        self.assertTrue(np.isnan(sparse['B', 0]))
+
+    def test_getitems_slice_multi(self):
+        orig = self.orig
+        sparse = self.sparse
+
+        tm.assert_sp_series_equal(sparse[2:], orig[2:].to_sparse())
+        tm.assert_sp_series_equal(sparse.loc['B':], orig.loc['B':].to_sparse())
+        tm.assert_sp_series_equal(sparse.loc['C':], orig.loc['C':].to_sparse())
+
+        tm.assert_sp_series_equal(sparse.loc['A':'B'],
+                                  orig.loc['A':'B'].to_sparse())
+        tm.assert_sp_series_equal(sparse.loc[:'B'], orig.loc[:'B'].to_sparse())
+
+    def test_loc(self):
+        # need to be override to use different label
+        orig = self.orig
+        sparse = self.sparse
+
+        tm.assert_sp_series_equal(sparse.loc['A'],
+                                  orig.loc['A'].to_sparse())
+        tm.assert_sp_series_equal(sparse.loc['B'],
+                                  orig.loc['B'].to_sparse())
+
+        result = sparse.loc[[1, 3, 4]]
+        exp = orig.loc[[1, 3, 4]].to_sparse()
+        tm.assert_sp_series_equal(result, exp)
+
+        # exceeds the bounds
+        result = sparse.loc[[1, 3, 4, 5]]
+        exp = orig.loc[[1, 3, 4, 5]].to_sparse()
+        tm.assert_sp_series_equal(result, exp)
+
+        # dense array
+        result = sparse.loc[orig % 2 == 1]
+        exp = orig.loc[orig % 2 == 1].to_sparse()
+        tm.assert_sp_series_equal(result, exp)
+
+        # sparse array (actuary it coerces to normal Series)
+        result = sparse.loc[sparse % 2 == 1]
+        exp = orig.loc[orig % 2 == 1].to_sparse()
+        tm.assert_sp_series_equal(result, exp)
+
+    def test_loc_multi_tuple(self):
+        orig = self.orig
+        sparse = self.sparse
+
+        self.assertEqual(sparse.loc['C', 0], orig.loc['C', 0])
+        self.assertTrue(np.isnan(sparse.loc['A', 1]))
+        self.assertTrue(np.isnan(sparse.loc['B', 0]))
+
+    def test_loc_slice(self):
+        orig = self.orig
+        sparse = self.sparse
+        tm.assert_sp_series_equal(sparse.loc['A':], orig.loc['A':].to_sparse())
+        tm.assert_sp_series_equal(sparse.loc['B':], orig.loc['B':].to_sparse())
+        tm.assert_sp_series_equal(sparse.loc['C':], orig.loc['C':].to_sparse())
+
+        tm.assert_sp_series_equal(sparse.loc['A':'B'],
+                                  orig.loc['A':'B'].to_sparse())
+        tm.assert_sp_series_equal(sparse.loc[:'B'], orig.loc[:'B'].to_sparse())
+
+
 class TestSparseDataFrameIndexing(tm.TestCase):

     _multiprocess_can_split_ = True
diff --git a/pandas/sparse/tests/test_libsparse.py b/pandas/sparse/tests/test_libsparse.py
index 352355fd55c23..11bf980a99fec 100644
--- a/pandas/sparse/tests/test_libsparse.py
+++ b/pandas/sparse/tests/test_libsparse.py
@@ -3,7 +3,6 @@
 import nose # noqa
 import numpy as np
 import operator
-from numpy.testing import assert_equal
 import pandas.util.testing as tm

 from pandas import compat
@@ -51,14 +50,17 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
             yindex = BlockIndex(TEST_LENGTH, yloc, ylen)
             bresult = xindex.make_union(yindex)
             assert (isinstance(bresult, BlockIndex))
-            assert_equal(bresult.blocs, eloc)
-            assert_equal(bresult.blengths, elen)
+            tm.assert_numpy_array_equal(bresult.blocs,
+                                        np.array(eloc, dtype=np.int32))
+            tm.assert_numpy_array_equal(bresult.blengths,
+                                        np.array(elen, dtype=np.int32))

             ixindex = xindex.to_int_index()
             iyindex = yindex.to_int_index()
             iresult = ixindex.make_union(iyindex)
             assert (isinstance(iresult, IntIndex))
-            assert_equal(iresult.indices, bresult.to_int_index().indices)
+            tm.assert_numpy_array_equal(iresult.indices,
+                                        bresult.to_int_index().indices)

         """
         x: ----
@@ -411,7 +413,8 @@ def test_to_int_index(self):
         block = BlockIndex(20, locs, lengths)
         dense = block.to_int_index()

-        assert_equal(dense.indices, exp_inds)
+        tm.assert_numpy_array_equal(dense.indices,
+                                    np.array(exp_inds, dtype=np.int32))

     def test_to_block_index(self):
         index = BlockIndex(10, [0, 5], [4, 5])
@@ -489,7 +492,7 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
                                               ydindex, yfill)

             self.assertTrue(rb_index.to_int_index().equals(ri_index))
-            assert_equal(result_block_vals, result_int_vals)
+            tm.assert_numpy_array_equal(result_block_vals, result_int_vals)

             # check versus Series...
             xseries = Series(x, xdindex.indices)
@@ -501,8 +504,9 @@ def _check_case(xloc, xlen, yloc, ylen, eloc, elen):
             series_result = python_op(xseries, yseries)
             series_result = series_result.reindex(ri_index.indices)

-            assert_equal(result_block_vals, series_result.values)
-            assert_equal(result_int_vals, series_result.values)
+            tm.assert_numpy_array_equal(result_block_vals,
+                                        series_result.values)
+            tm.assert_numpy_array_equal(result_int_vals, series_result.values)

         check_cases(_check_case)
diff --git a/pandas/sparse/tests/test_panel.py b/pandas/sparse/tests/test_panel.py
index 89a90f5be40e6..e988ddebd92f0 100644
--- a/pandas/sparse/tests/test_panel.py
+++ b/pandas/sparse/tests/test_panel.py
@@ -121,7 +121,8 @@ def _compare_with_dense(panel):
             dlp = panel.to_dense().to_frame()

             self.assert_numpy_array_equal(slp.values, dlp.values)
-            self.assertTrue(slp.index.equals(dlp.index))
+            self.assert_index_equal(slp.index, dlp.index,
+                                    check_names=False)

         _compare_with_dense(self.panel)
         _compare_with_dense(self.panel.reindex(items=['ItemA']))
diff --git a/pandas/sparse/tests/test_pivot.py b/pandas/sparse/tests/test_pivot.py
new file mode 100644
index 0000000000000..482a99a96194f
--- /dev/null
+++ b/pandas/sparse/tests/test_pivot.py
@@ -0,0 +1,52 @@
+import numpy as np
+import pandas as pd
+import pandas.util.testing as tm
+
+
+class TestPivotTable(tm.TestCase):
+
+    _multiprocess_can_split_ = True
+
+    def setUp(self):
+        self.dense = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+                                         'foo', 'bar', 'foo', 'foo'],
+                                   'B': ['one', 'one', 'two', 'three',
+                                         'two', 'two', 'one', 'three'],
+                                   'C': np.random.randn(8),
+                                   'D': np.random.randn(8),
+                                   'E': [np.nan, np.nan, 1, 2,
+                                         np.nan, 1, np.nan, np.nan]})
+        self.sparse = self.dense.to_sparse()
+
+    def test_pivot_table(self):
+        res_sparse = pd.pivot_table(self.sparse, index='A', columns='B',
+                                    values='C')
+        res_dense = pd.pivot_table(self.dense, index='A', columns='B',
+                                   values='C')
+        tm.assert_frame_equal(res_sparse, res_dense)
+
+        res_sparse = pd.pivot_table(self.sparse, index='A', columns='B',
+                                    values='E')
+        res_dense = pd.pivot_table(self.dense, index='A', columns='B',
+                                   values='E')
+        tm.assert_frame_equal(res_sparse, res_dense)
+
+        res_sparse = pd.pivot_table(self.sparse, index='A', columns='B',
+                                    values='E', aggfunc='mean')
+        res_dense = pd.pivot_table(self.dense, index='A', columns='B',
+                                   values='E', aggfunc='mean')
+        tm.assert_frame_equal(res_sparse, res_dense)
+
+        # ToDo: sum doesn't handle nan properly
+        # res_sparse = pd.pivot_table(self.sparse, index='A', columns='B',
+        #                             values='E', aggfunc='sum')
+        # res_dense = pd.pivot_table(self.dense, index='A', columns='B',
+        #                            values='E', aggfunc='sum')
+        # tm.assert_frame_equal(res_sparse, res_dense)
+
+    def test_pivot_table_multi(self):
+        res_sparse = pd.pivot_table(self.sparse, index='A', columns='B',
+                                    values=['D', 'E'])
+        res_dense = pd.pivot_table(self.dense, index='A', columns='B',
+                                   values=['D', 'E'])
+        tm.assert_frame_equal(res_sparse, res_dense)
diff --git a/pandas/sparse/tests/test_series.py b/pandas/sparse/tests/test_series.py
index 44bc51077ef3e..27112319ea915 100644
--- a/pandas/sparse/tests/test_series.py
+++ b/pandas/sparse/tests/test_series.py
@@ -5,7 +5,6 @@
 from numpy import nan
 import numpy as np
 import pandas as pd
-from numpy.testing import assert_equal

 from pandas import Series, DataFrame, bdate_range
 from pandas.core.datetools import BDay
@@ -148,20 +147,23 @@ def test_series_density(self):
     def test_sparse_to_dense(self):
         arr, index = _test_data1()
         series = self.bseries.to_dense()
-        assert_equal(series, arr)
+        tm.assert_series_equal(series, Series(arr, name='bseries'))

         series = self.bseries.to_dense(sparse_only=True)
-        assert_equal(series, arr[np.isfinite(arr)])
+
+        indexer = np.isfinite(arr)
+        exp = Series(arr[indexer], index=index[indexer], name='bseries')
+        tm.assert_series_equal(series, exp)

         series = self.iseries.to_dense()
-        assert_equal(series, arr)
+        tm.assert_series_equal(series, Series(arr, name='iseries'))

         arr, index = _test_data1_zero()
         series = self.zbseries.to_dense()
-        assert_equal(series, arr)
+        tm.assert_series_equal(series, Series(arr, name='zbseries'))

         series = self.ziseries.to_dense()
-        assert_equal(series, arr)
+        tm.assert_series_equal(series, Series(arr))

     def test_to_dense_fill_value(self):
         s = pd.Series([1, np.nan, np.nan, 3, np.nan])
@@ -225,8 +227,8 @@ def test_constructor(self):
         tm.assertIsInstance(self.iseries.sp_index, IntIndex)

         self.assertEqual(self.zbseries.fill_value, 0)
-        assert_equal(self.zbseries.values.values,
-                     self.bseries.to_dense().fillna(0).values)
+        tm.assert_numpy_array_equal(self.zbseries.values.values,
+                                    self.bseries.to_dense().fillna(0).values)

         # pass SparseSeries
         def _check_const(sparse, name):
@@ -252,7 +254,7 @@ def _check_const(sparse, name):

         # pass Series
         bseries2 = SparseSeries(self.bseries.to_dense())
-        assert_equal(self.bseries.sp_values, bseries2.sp_values)
+        tm.assert_numpy_array_equal(self.bseries.sp_values, bseries2.sp_values)

         # pass dict?
@@ -292,7 +294,7 @@ def test_constructor_ndarray(self):
     def test_constructor_nonnan(self):
         arr = [0, 0, 0, nan, nan]
         sp_series = SparseSeries(arr, fill_value=0)
-        assert_equal(sp_series.values.values, arr)
+        tm.assert_numpy_array_equal(sp_series.values.values, np.array(arr))
         self.assertEqual(len(sp_series), 5)
         self.assertEqual(sp_series.shape, (5, ))
@@ -724,9 +726,9 @@ def test_dropna(self):

         expected = sp.to_dense().valid()
         expected = expected[expected != 0]
-
-        tm.assert_almost_equal(sp_valid.values, expected.values)
-        self.assertTrue(sp_valid.index.equals(expected.index))
+        exp_arr = pd.SparseArray(expected.values, fill_value=0, kind='block')
+        tm.assert_sp_array_equal(sp_valid.values, exp_arr)
+        self.assert_index_equal(sp_valid.index, expected.index)
         self.assertEqual(len(sp_valid.sp_values), 2)

         result = self.bseries.dropna()
@@ -1019,6 +1021,15 @@ def test_from_coo_nodense_index(self):
         check = check.dropna().to_sparse()
         tm.assert_sp_series_equal(ss, check)

+    def test_from_coo_long_repr(self):
+        # GH 13114
+        # test it doesn't raise error. Formatting is tested in test_format
+        tm._skip_if_no_scipy()
+        import scipy.sparse
+
+        sparse = SparseSeries.from_coo(scipy.sparse.rand(350, 18))
+        repr(sparse)
+
     def _run_test(self, ss, kwargs, check):
         results = ss.to_coo(**kwargs)
         self._check_results_to_coo(results, check)
@@ -1031,8 +1042,7 @@ def _run_test(self, ss, kwargs, check):
         results = (results[0].T, results[2], results[1])
         self._check_results_to_coo(results, check)

-    @staticmethod
-    def _check_results_to_coo(results, check):
+    def _check_results_to_coo(self, results, check):
         (A, il, jl) = results
         (A_result, il_result, jl_result) = check
         # convert to dense and compare
@@ -1040,8 +1050,8 @@ def _check_results_to_coo(results, check):
         # or compare directly as difference of sparse
         # assert(abs(A - A_result).max() < 1e-12) # max is failing in python
         # 2.6
-        assert_equal(il, il_result)
-        assert_equal(jl, jl_result)
+        self.assertEqual(il, il_result)
+        self.assertEqual(jl, jl_result)

     def test_concat(self):
         val1 = np.array([1, 2, np.nan, np.nan, 0, np.nan])
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 843031fafa1a9..262e036ff44f1 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -6,6 +6,20 @@ iNaT = util.get_nat()

 cdef bint PY2 = sys.version_info[0] == 2

+cdef extern from "headers/stdint.h":
+    enum: UINT8_MAX
+    enum: UINT16_MAX
+    enum: UINT32_MAX
+    enum: UINT64_MAX
+    enum: INT8_MIN
+    enum: INT8_MAX
+    enum: INT16_MIN
+    enum: INT16_MAX
+    enum: INT32_MAX
+    enum: INT32_MIN
+    enum: INT64_MAX
+    enum: INT64_MIN
+
 # core.common import for fast inference checks
 def is_float(object obj):
     return util.is_float_object(obj)
@@ -569,7 +583,7 @@ def maybe_convert_numeric(object[:] values, set na_values,
     for i in range(n):
         val = values[i]

-        if val in na_values:
+        if val.__hash__ is not None and val in na_values:
             floats[i] = complexes[i] = nan
             seen_float = True
         elif util.is_float_object(val):
@@ -596,7 +610,13 @@ def maybe_convert_numeric(object[:] values, set na_values,
         else:
             try:
                 status = floatify(val, &fval, &maybe_int)
-                floats[i] = fval
+
+                if fval in na_values:
+                    floats[i] = complexes[i] = nan
+                    seen_float = True
+                else:
+                    floats[i] = fval
+
                 if not seen_float:
                     if maybe_int:
                         as_int = int(val)
@@ -642,6 +662,7 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
         bint seen_float = 0
         bint seen_complex = 0
         bint seen_datetime = 0
+        bint seen_datetimetz = 0
        bint seen_timedelta = 0
         bint seen_int = 0
         bint seen_bool = 0
@@ -675,6 +696,15 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
         if val is None:
             seen_null = 1
             floats[i] = complexes[i] = fnan
+        elif val is NaT:
+            if convert_datetime:
+                idatetimes[i] = iNaT
+                seen_datetime = 1
+            if convert_timedelta:
+                itimedeltas[i] = iNaT
+                seen_timedelta = 1
+            if not (convert_datetime or convert_timedelta):
+                seen_object = 1
         elif util.is_bool_object(val):
             seen_bool = 1
             bools[i] = val
@@ -710,9 +740,15 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
             complexes[i] = val
             seen_complex = 1
         elif PyDateTime_Check(val) or util.is_datetime64_object(val):
+
+            # if we have an tz's attached then return the objects
             if convert_datetime:
-                seen_datetime = 1
-                idatetimes[i] = convert_to_tsobject(val, None, None, 0, 0).value
+                if getattr(val, 'tzinfo', None) is not None:
+                    seen_datetimetz = 1
+                    break
+                else:
+                    seen_datetime = 1
+                    idatetimes[i] = convert_to_tsobject(val, None, None, 0, 0).value
             else:
                 seen_object = 1
                 break
@@ -731,6 +767,13 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,

     seen_numeric = seen_complex or seen_float or seen_int

+    # we try to coerce datetime w/tz but must all have the same tz
+    if seen_datetimetz:
+        if len(set([ getattr(val, 'tz', None) for val in objects ])) == 1:
+            from pandas import DatetimeIndex
+            return DatetimeIndex(objects)
+        seen_object = 1
+
     if not seen_object:

         if not safe:
@@ -1103,7 +1146,24 @@ def map_infer(ndarray arr, object f, bint convert=1):
     return result

-def to_object_array(list rows):
+def to_object_array(list rows, int min_width=0):
+    """
+    Convert a list of lists into an object array.
+
+    Parameters
+    ----------
+    rows : 2-d array (N, K)
+        A list of lists to be converted into an array
+    min_width : int
+        The minimum width of the object array. If a list
+        in `rows` contains fewer than `width` elements,
+        the remaining elements in the corresponding row
+        will all be `NaN`.
+
+    Returns
+    -------
+    obj_array : numpy array of the object dtype
+    """
     cdef:
         Py_ssize_t i, j, n, k, tmp
         ndarray[object, ndim=2] result
@@ -1111,7 +1171,7 @@ def to_object_array(list rows):

     n = len(rows)

-    k = 0
+    k = min_width
     for i from 0 <= i < n:
         tmp = len(rows[i])
         if tmp > k:
@@ -1194,3 +1254,74 @@ def fast_multiget(dict mapping, ndarray keys, default=np.nan):
             output[i] = default

     return maybe_convert_objects(output)
+
+
+def downcast_int64(ndarray[int64_t] arr, object na_values,
+                   bint use_unsigned=0):
+    cdef:
+        Py_ssize_t i, n = len(arr)
+        int64_t mx = INT64_MIN + 1, mn = INT64_MAX
+        int64_t NA = na_values[np.int64]
+        int64_t val
+        ndarray[uint8_t] mask
+        int na_count = 0
+
+    _mask = np.empty(n, dtype=bool)
+    mask = _mask.view(np.uint8)
+
+    for i in range(n):
+        val = arr[i]
+
+        if val == NA:
+            mask[i] = 1
+            na_count += 1
+            continue
+
+        # not NA
+        mask[i] = 0
+
+        if val > mx:
+            mx = val
+
+        if val < mn:
+            mn = val
+
+    if mn >= 0 and use_unsigned:
+        if mx <= UINT8_MAX - 1:
+            result = arr.astype(np.uint8)
+            if na_count:
+                np.putmask(result, _mask, na_values[np.uint8])
+            return result
+
+        if mx <= UINT16_MAX - 1:
+            result = arr.astype(np.uint16)
+            if na_count:
+                np.putmask(result, _mask, na_values[np.uint16])
+            return result
+
+        if mx <= UINT32_MAX - 1:
+            result = arr.astype(np.uint32)
+            if na_count:
+                np.putmask(result, _mask, na_values[np.uint32])
+            return result
+
+    else:
+        if mn >= INT8_MIN + 1 and mx <= INT8_MAX:
+            result = arr.astype(np.int8)
+            if na_count:
+                np.putmask(result, _mask, na_values[np.int8])
+            return result
+
+        if mn >= INT16_MIN + 1 and mx <= INT16_MAX:
+            result = arr.astype(np.int16)
+            if na_count:
+                np.putmask(result, _mask, na_values[np.int16])
+            return result
+
+        if mn >= INT32_MIN + 1 and mx <= INT32_MAX:
+            result = arr.astype(np.int32)
+            if na_count:
+                np.putmask(result, _mask, na_values[np.int32])
+            return result
+
+    return arr
diff --git a/pandas/src/parse_helper.h b/pandas/src/parse_helper.h
index d47e448700029..fd5089dd8963d 100644
--- a/pandas/src/parse_helper.h
+++ b/pandas/src/parse_helper.h
@@ -1,5 +1,6 @@
 #include <errno.h>
 #include <float.h>
+#include "headers/portable.h"

 static double xstrtod(const char *p, char **q, char decimal, char sci,
                       int skip_trailing, int *maybe_int);
@@ -39,22 +40,36 @@ int floatify(PyObject* str, double *result, int *maybe_int) {

     if (!status) {
         /* handle inf/-inf */
-        if (0 == strcmp(data, "-inf")) {
-            *result = -HUGE_VAL;
-            *maybe_int = 0;
-        } else if (0 == strcmp(data, "inf")) {
-            *result = HUGE_VAL;
-            *maybe_int = 0;
+        if (strlen(data) == 3) {
+            if (0 == strcasecmp(data, "inf")) {
+                *result = HUGE_VAL;
+                *maybe_int = 0;
+            } else {
+                goto parsingerror;
+            }
+        } else if (strlen(data) == 4) {
+            if (0 == strcasecmp(data, "-inf")) {
+                *result = -HUGE_VAL;
+                *maybe_int = 0;
+            } else if (0 == strcasecmp(data, "+inf")) {
+                *result = HUGE_VAL;
+                *maybe_int = 0;
+            } else {
+                goto parsingerror;
+            }
         } else {
-            PyErr_SetString(PyExc_ValueError, "Unable to parse string");
-            Py_XDECREF(tmp);
-            return -1;
+            goto parsingerror;
         }
     }

     Py_XDECREF(tmp);
     return 0;

+parsingerror:
+    PyErr_SetString(PyExc_ValueError, "Unable to parse string");
+    Py_XDECREF(tmp);
+    return -1;
+
+    /*
 #if PY_VERSION_HEX >= 0x03000000
     return PyFloat_FromString(str);
diff --git a/pandas/src/period.pyx b/pandas/src/period.pyx
index 0cb0b575b25dc..858aa58df8d7d 100644
--- a/pandas/src/period.pyx
+++ b/pandas/src/period.pyx
@@ -772,6 +772,9 @@ cdef class Period(object):
             if self.ordinal == tslib.iNaT or other.ordinal == tslib.iNaT:
                 return _nat_scalar_rules[op]
             return PyObject_RichCompareBool(self.ordinal, other.ordinal, op)
+ # index/series like + elif hasattr(other, '_typ'): + return NotImplemented else: if op == Py_EQ: return NotImplemented @@ -796,8 +799,8 @@ cdef class Period(object): else: ordinal = self.ordinal + (nanos // offset_nanos) return Period(ordinal=ordinal, freq=self.freq) - msg = 'Input cannnot be converted to Period(freq={0})' - raise ValueError(msg) + msg = 'Input cannot be converted to Period(freq={0})' + raise IncompatibleFrequency(msg.format(self.freqstr)) elif isinstance(other, offsets.DateOffset): freqstr = frequencies.get_standard_freq(other) base = frequencies.get_base_alias(freqstr) @@ -846,8 +849,8 @@ cdef class Period(object): return Period(ordinal=ordinal, freq=self.freq) elif isinstance(other, Period): if other.freq != self.freq: - raise ValueError("Cannot do arithmetic with " - "non-conforming periods") + msg = _DIFFERENT_FREQ.format(self.freqstr, other.freqstr) + raise IncompatibleFrequency(msg) if self.ordinal == tslib.iNaT or other.ordinal == tslib.iNaT: return Period(ordinal=tslib.iNaT, freq=self.freq) return self.ordinal - other.ordinal @@ -862,7 +865,6 @@ cdef class Period(object): else: return NotImplemented - def asfreq(self, freq, how='E'): """ Convert Period to desired frequency, either at the start or end of the diff --git a/pandas/src/testing.pyx b/pandas/src/testing.pyx index 9f102ded597fd..6780cf311c244 100644 --- a/pandas/src/testing.pyx +++ b/pandas/src/testing.pyx @@ -55,7 +55,9 @@ cpdef assert_dict_equal(a, b, bint compare_keys=True): return True -cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True, +cpdef assert_almost_equal(a, b, + check_less_precise=False, + bint check_dtype=True, obj=None, lobj=None, robj=None): """Check that left and right objects are almost equal. 
@@ -63,9 +65,10 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True, ---------- a : object b : object - check_less_precise : bool, default False + check_less_precise : bool or int, default False Specify comparison precision. 5 digits (False) or 3 digits (True) after decimal points are compared. + If an integer, then this will be the number of decimal points to compare check_dtype: bool, default True check dtype if both a and b are np.ndarray obj : str, default None @@ -91,6 +94,8 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True, if robj is None: robj = b + assert isinstance(check_less_precise, (int, bool)) + if isinstance(a, dict) or isinstance(b, dict): return assert_dict_equal(a, b) @@ -145,7 +150,7 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True, for i in xrange(len(a)): try: - assert_almost_equal(a[i], b[i], check_less_precise) + assert_almost_equal(a[i], b[i], check_less_precise=check_less_precise) except AssertionError: is_unequal = True diff += 1 @@ -173,11 +178,12 @@ cpdef assert_almost_equal(a, b, bint check_less_precise=False, check_dtype=True, # inf comparison return True - decimal = 5 - - # deal with differing dtypes - if check_less_precise: + if check_less_precise is True: decimal = 3 + elif check_less_precise is False: + decimal = 5 + else: + decimal = check_less_precise fa, fb = a, b diff --git a/pandas/stats/tests/test_fama_macbeth.py b/pandas/stats/tests/test_fama_macbeth.py index 2c69eb64fd61d..706becfa730c4 100644 --- a/pandas/stats/tests/test_fama_macbeth.py +++ b/pandas/stats/tests/test_fama_macbeth.py @@ -50,7 +50,9 @@ def checkFamaMacBethExtended(self, window_type, x, y, **kwds): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): reference = fama_macbeth(y=y2, x=x2, **kwds) - assert_almost_equal(reference._stats, result._stats[:, i]) + # reference._stats is tuple + assert_almost_equal(reference._stats, result._stats[:, i], 
+ check_dtype=False) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): static = fama_macbeth(y=y2, x=x2, **kwds) diff --git a/pandas/stats/tests/test_ols.py b/pandas/stats/tests/test_ols.py index 725a4e8296dd2..bac824f0b4840 100644 --- a/pandas/stats/tests/test_ols.py +++ b/pandas/stats/tests/test_ols.py @@ -13,7 +13,6 @@ from distutils.version import LooseVersion import nose import numpy as np -from numpy.testing.decorators import slow from pandas import date_range, bdate_range from pandas.core.panel import Panel @@ -22,7 +21,7 @@ from pandas.stats.ols import _filter_data from pandas.stats.plm import NonPooledPanelOLS, PanelOLS from pandas.util.testing import (assert_almost_equal, assert_series_equal, - assert_frame_equal, assertRaisesRegexp) + assert_frame_equal, assertRaisesRegexp, slow) import pandas.util.testing as tm import pandas.compat as compat from .common import BaseTest @@ -379,7 +378,7 @@ def test_predict_longer_exog(self): model = ols(y=endog, x=exog) pred = model.y_predict - self.assertTrue(pred.index.equals(exog.index)) + self.assert_index_equal(pred.index, exog.index) def test_longpanel_series_combo(self): wp = tm.makePanel() @@ -528,13 +527,12 @@ def testFiltering(self): index = x.index.get_level_values(0) index = Index(sorted(set(index))) exp_index = Index([datetime(2000, 1, 1), datetime(2000, 1, 3)]) - self.assertTrue - (exp_index.equals(index)) + self.assert_index_equal(exp_index, index) index = x.index.get_level_values(1) index = Index(sorted(set(index))) exp_index = Index(['A', 'B']) - self.assertTrue(exp_index.equals(index)) + self.assert_index_equal(exp_index, index) x = result._x_filtered index = x.index.get_level_values(0) @@ -542,24 +540,22 @@ def testFiltering(self): exp_index = Index([datetime(2000, 1, 1), datetime(2000, 1, 3), datetime(2000, 1, 4)]) - self.assertTrue(exp_index.equals(index)) + self.assert_index_equal(exp_index, index) - assert_almost_equal(result._y.values.flat, [1, 4, 5]) + # .flat is flatiter 
instance + assert_almost_equal(result._y.values.flat, [1, 4, 5], + check_dtype=False) - exp_x = [[6, 14, 1], - [9, 17, 1], - [30, 48, 1]] + exp_x = np.array([[6, 14, 1], [9, 17, 1], + [30, 48, 1]], dtype=np.float64) assert_almost_equal(exp_x, result._x.values) - exp_x_filtered = [[6, 14, 1], - [9, 17, 1], - [30, 48, 1], - [11, 20, 1], - [12, 21, 1]] + exp_x_filtered = np.array([[6, 14, 1], [9, 17, 1], [30, 48, 1], + [11, 20, 1], [12, 21, 1]], dtype=np.float64) assert_almost_equal(exp_x_filtered, result._x_filtered.values) - self.assertTrue(result._x_filtered.index.levels[0].equals( - result.y_fitted.index)) + self.assert_index_equal(result._x_filtered.index.levels[0], + result.y_fitted.index) def test_wls_panel(self): y = tm.makeTimeDataFrame() @@ -598,9 +594,11 @@ def testWithTimeEffects(self): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): result = ols(y=self.panel_y2, x=self.panel_x2, time_effects=True) - assert_almost_equal(result._y_trans.values.flat, [0, -0.5, 0.5]) + # .flat is flatiter instance + assert_almost_equal(result._y_trans.values.flat, [0, -0.5, 0.5], + check_dtype=False) - exp_x = [[0, 0], [-10.5, -15.5], [10.5, 15.5]] + exp_x = np.array([[0, 0], [-10.5, -15.5], [10.5, 15.5]]) assert_almost_equal(result._x_trans.values, exp_x) # _check_non_raw_results(result) @@ -609,7 +607,9 @@ def testWithEntityEffects(self): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): result = ols(y=self.panel_y2, x=self.panel_x2, entity_effects=True) - assert_almost_equal(result._y.values.flat, [1, 4, 5]) + # .flat is flatiter instance + assert_almost_equal(result._y.values.flat, [1, 4, 5], + check_dtype=False) exp_x = DataFrame([[0., 6., 14., 1.], [0, 9, 17, 1], [1, 30, 48, 1]], index=result._x.index, columns=['FE_B', 'x1', 'x2', @@ -623,7 +623,9 @@ def testWithEntityEffectsAndDroppedDummies(self): result = ols(y=self.panel_y2, x=self.panel_x2, entity_effects=True, dropped_dummies={'entity': 'B'}) - 
assert_almost_equal(result._y.values.flat, [1, 4, 5]) + # .flat is flatiter instance + assert_almost_equal(result._y.values.flat, [1, 4, 5], + check_dtype=False) exp_x = DataFrame([[1., 6., 14., 1.], [1, 9, 17, 1], [0, 30, 48, 1]], index=result._x.index, columns=['FE_A', 'x1', 'x2', 'intercept'], @@ -635,7 +637,9 @@ def testWithXEffects(self): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): result = ols(y=self.panel_y2, x=self.panel_x2, x_effects=['x1']) - assert_almost_equal(result._y.values.flat, [1, 4, 5]) + # .flat is flatiter instance + assert_almost_equal(result._y.values.flat, [1, 4, 5], + check_dtype=False) res = result._x exp_x = DataFrame([[0., 0., 14., 1.], [0, 1, 17, 1], [1, 0, 48, 1]], @@ -649,7 +653,9 @@ def testWithXEffectsAndDroppedDummies(self): dropped_dummies={'x1': 30}) res = result._x - assert_almost_equal(result._y.values.flat, [1, 4, 5]) + # .flat is flatiter instance + assert_almost_equal(result._y.values.flat, [1, 4, 5], + check_dtype=False) exp_x = DataFrame([[1., 0., 14., 1.], [0, 1, 17, 1], [0, 0, 48, 1]], columns=['x1_6', 'x1_9', 'x2', 'intercept'], index=res.index, dtype=float) @@ -661,13 +667,15 @@ def testWithXEffectsAndConversion(self): result = ols(y=self.panel_y3, x=self.panel_x3, x_effects=['x1', 'x2']) - assert_almost_equal(result._y.values.flat, [1, 2, 3, 4]) - exp_x = [[0, 0, 0, 1, 1], [1, 0, 0, 0, 1], [0, 1, 1, 0, 1], - [0, 0, 0, 1, 1]] + # .flat is flatiter instance + assert_almost_equal(result._y.values.flat, [1, 2, 3, 4], + check_dtype=False) + exp_x = np.array([[0, 0, 0, 1, 1], [1, 0, 0, 0, 1], [0, 1, 1, 0, 1], + [0, 0, 0, 1, 1]], dtype=np.float64) assert_almost_equal(result._x.values, exp_x) exp_index = Index(['x1_B', 'x1_C', 'x2_baz', 'x2_foo', 'intercept']) - self.assertTrue(exp_index.equals(result._x.columns)) + self.assert_index_equal(exp_index, result._x.columns) # _check_non_raw_results(result) @@ -675,14 +683,15 @@ def testWithXEffectsAndConversionAndDroppedDummies(self): with 
tm.assert_produces_warning(FutureWarning, check_stacklevel=False): result = ols(y=self.panel_y3, x=self.panel_x3, x_effects=['x1', 'x2'], dropped_dummies={'x2': 'foo'}) - - assert_almost_equal(result._y.values.flat, [1, 2, 3, 4]) - exp_x = [[0, 0, 0, 0, 1], [1, 0, 1, 0, 1], [0, 1, 0, 1, 1], - [0, 0, 0, 0, 1]] + # .flat is flatiter instance + assert_almost_equal(result._y.values.flat, [1, 2, 3, 4], + check_dtype=False) + exp_x = np.array([[0, 0, 0, 0, 1], [1, 0, 1, 0, 1], [0, 1, 0, 1, 1], + [0, 0, 0, 0, 1]], dtype=np.float64) assert_almost_equal(result._x.values, exp_x) exp_index = Index(['x1_B', 'x1_C', 'x2_bar', 'x2_baz', 'intercept']) - self.assertTrue(exp_index.equals(result._x.columns)) + self.assert_index_equal(exp_index, result._x.columns) # _check_non_raw_results(result) @@ -915,16 +924,21 @@ def setUp(self): def testFilterWithSeriesRHS(self): (lhs, rhs, weights, rhs_pre, index, valid) = _filter_data(self.TS1, {'x1': self.TS2}, None) - self.tsAssertEqual(self.TS1, lhs) - self.tsAssertEqual(self.TS2[:3], rhs['x1']) - self.tsAssertEqual(self.TS2, rhs_pre['x1']) + self.tsAssertEqual(self.TS1.astype(np.float64), lhs, check_names=False) + self.tsAssertEqual(self.TS2[:3].astype(np.float64), rhs['x1'], + check_names=False) + self.tsAssertEqual(self.TS2.astype(np.float64), rhs_pre['x1'], + check_names=False) def testFilterWithSeriesRHS2(self): (lhs, rhs, weights, rhs_pre, index, valid) = _filter_data(self.TS2, {'x1': self.TS1}, None) - self.tsAssertEqual(self.TS2[:3], lhs) - self.tsAssertEqual(self.TS1, rhs['x1']) - self.tsAssertEqual(self.TS1, rhs_pre['x1']) + self.tsAssertEqual(self.TS2[:3].astype(np.float64), lhs, + check_names=False) + self.tsAssertEqual(self.TS1.astype(np.float64), rhs['x1'], + check_names=False) + self.tsAssertEqual(self.TS1.astype(np.float64), rhs_pre['x1'], + check_names=False) def testFilterWithSeriesRHS3(self): (lhs, rhs, weights, rhs_pre, @@ -932,32 +946,32 @@ def testFilterWithSeriesRHS3(self): exp_lhs = self.TS3[2:3] exp_rhs = 
self.TS4[2:3] exp_rhs_pre = self.TS4[1:] - self.tsAssertEqual(exp_lhs, lhs) - self.tsAssertEqual(exp_rhs, rhs['x1']) - self.tsAssertEqual(exp_rhs_pre, rhs_pre['x1']) + self.tsAssertEqual(exp_lhs, lhs, check_names=False) + self.tsAssertEqual(exp_rhs, rhs['x1'], check_names=False) + self.tsAssertEqual(exp_rhs_pre, rhs_pre['x1'], check_names=False) def testFilterWithDataFrameRHS(self): (lhs, rhs, weights, rhs_pre, index, valid) = _filter_data(self.TS1, self.DF1, None) - exp_lhs = self.TS1[1:] + exp_lhs = self.TS1[1:].astype(np.float64) exp_rhs1 = self.TS2[1:3] - exp_rhs2 = self.TS4[1:3] - self.tsAssertEqual(exp_lhs, lhs) - self.tsAssertEqual(exp_rhs1, rhs['x1']) - self.tsAssertEqual(exp_rhs2, rhs['x2']) + exp_rhs2 = self.TS4[1:3].astype(np.float64) + self.tsAssertEqual(exp_lhs, lhs, check_names=False) + self.tsAssertEqual(exp_rhs1, rhs['x1'], check_names=False) + self.tsAssertEqual(exp_rhs2, rhs['x2'], check_names=False) def testFilterWithDictRHS(self): (lhs, rhs, weights, rhs_pre, index, valid) = _filter_data(self.TS1, self.DICT1, None) - exp_lhs = self.TS1[1:] - exp_rhs1 = self.TS2[1:3] - exp_rhs2 = self.TS4[1:3] - self.tsAssertEqual(exp_lhs, lhs) - self.tsAssertEqual(exp_rhs1, rhs['x1']) - self.tsAssertEqual(exp_rhs2, rhs['x2']) - - def tsAssertEqual(self, ts1, ts2): - self.assert_numpy_array_equal(ts1, ts2) + exp_lhs = self.TS1[1:].astype(np.float64) + exp_rhs1 = self.TS2[1:3].astype(np.float64) + exp_rhs2 = self.TS4[1:3].astype(np.float64) + self.tsAssertEqual(exp_lhs, lhs, check_names=False) + self.tsAssertEqual(exp_rhs1, rhs['x1'], check_names=False) + self.tsAssertEqual(exp_rhs2, rhs['x2'], check_names=False) + + def tsAssertEqual(self, ts1, ts2, **kwargs): + self.assert_series_equal(ts1, ts2, **kwargs) if __name__ == '__main__': diff --git a/pandas/stats/tests/test_var.py b/pandas/stats/tests/test_var.py index 9bcd070dc1d33..9f2c95a2d3d5c 100644 --- a/pandas/stats/tests/test_var.py +++ b/pandas/stats/tests/test_var.py @@ -1,9 +1,8 @@ # flake8: noqa from 
__future__ import print_function -from numpy.testing import run_module_suite, assert_equal, TestCase -from pandas.util.testing import assert_almost_equal +import pandas.util.testing as tm from pandas.compat import range import nose @@ -33,53 +32,56 @@ class CheckVAR(object): def test_params(self): - assert_almost_equal(self.res1.params, self.res2.params, DECIMAL_3) + tm.assert_almost_equal(self.res1.params, self.res2.params, DECIMAL_3) def test_neqs(self): - assert_equal(self.res1.neqs, self.res2.neqs) + tm.assert_numpy_array_equal(self.res1.neqs, self.res2.neqs) def test_nobs(self): - assert_equal(self.res1.avobs, self.res2.nobs) + tm.assert_numpy_array_equal(self.res1.avobs, self.res2.nobs) def test_df_eq(self): - assert_equal(self.res1.df_eq, self.res2.df_eq) + tm.assert_numpy_array_equal(self.res1.df_eq, self.res2.df_eq) def test_rmse(self): results = self.res1.results for i in range(len(results)): - assert_almost_equal(results[i].mse_resid ** .5, - eval('self.res2.rmse_' + str(i + 1)), DECIMAL_6) + tm.assert_almost_equal(results[i].mse_resid ** .5, + eval('self.res2.rmse_' + str(i + 1)), + DECIMAL_6) def test_rsquared(self): results = self.res1.results for i in range(len(results)): - assert_almost_equal(results[i].rsquared, - eval('self.res2.rsquared_' + str(i + 1)), DECIMAL_3) + tm.assert_almost_equal(results[i].rsquared, + eval('self.res2.rsquared_' + str(i + 1)), + DECIMAL_3) def test_llf(self): results = self.res1.results - assert_almost_equal(self.res1.llf, self.res2.llf, DECIMAL_2) + tm.assert_almost_equal(self.res1.llf, self.res2.llf, DECIMAL_2) for i in range(len(results)): - assert_almost_equal(results[i].llf, - eval('self.res2.llf_' + str(i + 1)), DECIMAL_2) + tm.assert_almost_equal(results[i].llf, + eval('self.res2.llf_' + str(i + 1)), + DECIMAL_2) def test_aic(self): - assert_almost_equal(self.res1.aic, self.res2.aic) + tm.assert_almost_equal(self.res1.aic, self.res2.aic) def test_bic(self): - assert_almost_equal(self.res1.bic, self.res2.bic) + 
tm.assert_almost_equal(self.res1.bic, self.res2.bic) def test_hqic(self): - assert_almost_equal(self.res1.hqic, self.res2.hqic) + tm.assert_almost_equal(self.res1.hqic, self.res2.hqic) def test_fpe(self): - assert_almost_equal(self.res1.fpe, self.res2.fpe) + tm.assert_almost_equal(self.res1.fpe, self.res2.fpe) def test_detsig(self): - assert_almost_equal(self.res1.detomega, self.res2.detsig) + tm.assert_almost_equal(self.res1.detomega, self.res2.detsig) def test_bse(self): - assert_almost_equal(self.res1.bse, self.res2.bse, DECIMAL_4) + tm.assert_almost_equal(self.res1.bse, self.res2.bse, DECIMAL_4) class Foo(object): diff --git a/pandas/tests/formats/test_format.py b/pandas/tests/formats/test_format.py index 4fcee32c46067..e67fe2cddde77 100644 --- a/pandas/tests/formats/test_format.py +++ b/pandas/tests/formats/test_format.py @@ -3087,11 +3087,11 @@ def test_to_csv_doublequote(self): def test_to_csv_escapechar(self): df = DataFrame({'col': ['a"a', '"bb"']}) - expected = """\ + expected = '''\ "","col" "0","a\\"a" "1","\\"bb\\"" -""" +''' with tm.ensure_clean('test.csv') as path: # QUOTE_ALL df.to_csv(path, quoting=1, doublequote=False, escapechar='\\') @@ -3758,25 +3758,6 @@ def test_to_string_header(self): exp = '0 0\n ..\n9 9' self.assertEqual(res, exp) - def test_sparse_max_row(self): - s = pd.Series([1, np.nan, np.nan, 3, np.nan]).to_sparse() - result = repr(s) - dtype = '' if use_32bit_repr else ', dtype=int32' - exp = ("0 1.0\n1 NaN\n2 NaN\n3 3.0\n" - "4 NaN\ndtype: float64\nBlockIndex\n" - "Block locations: array([0, 3]{0})\n" - "Block lengths: array([1, 1]{0})".format(dtype)) - self.assertEqual(result, exp) - - with option_context("display.max_rows", 3): - # GH 10560 - result = repr(s) - exp = ("0 1.0\n ... 
\n4 NaN\n" - "dtype: float64\nBlockIndex\n" - "Block locations: array([0, 3]{0})\n" - "Block lengths: array([1, 1]{0})".format(dtype)) - self.assertEqual(result, exp) - class TestEngFormatter(tm.TestCase): _multiprocess_can_split_ = True @@ -3925,6 +3906,21 @@ def test_rounding(self): result = formatter(0) self.assertEqual(result, u(' 0.000')) + def test_nan(self): + # Issue #11981 + + formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True) + result = formatter(np.nan) + self.assertEqual(result, u('NaN')) + + df = pd.DataFrame({'a':[1.5, 10.3, 20.5], + 'b':[50.3, 60.67, 70.12], + 'c':[100.2, 101.33, 120.33]}) + pt = df.pivot_table(values='a', index='b', columns='c') + fmt.set_eng_float_format(accuracy=1) + result = pt.to_string() + self.assertTrue('NaN' in result) + self.reset_display_options() def _three_digit_exp(): return '%.4g' % 1.7e8 == '1.7e+008' @@ -4268,6 +4264,21 @@ def test_nat_representations(self): self.assertEqual(f(pd.NaT), 'NaT') +def test_format_percentiles(): + result = fmt.format_percentiles([0.01999, 0.02001, 0.5, 0.666666, 0.9999]) + expected = ['1.999%', '2.001%', '50%', '66.667%', '99.99%'] + tm.assert_equal(result, expected) + + result = fmt.format_percentiles([0, 0.5, 0.02001, 0.5, 0.666666, 0.9999]) + expected = ['0%', '50%', '2.0%', '50%', '66.67%', '99.99%'] + tm.assert_equal(result, expected) + + tm.assertRaises(ValueError, fmt.format_percentiles, [0.1, np.nan, 0.5]) + tm.assertRaises(ValueError, fmt.format_percentiles, [-0.001, 0.1, 0.5]) + tm.assertRaises(ValueError, fmt.format_percentiles, [2, 0.1, 0.5]) + tm.assertRaises(ValueError, fmt.format_percentiles, [0.1, 0.5, 'a']) + + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py index 1da5487aefc01..3b50dd2c1d49f 100644 --- a/pandas/tests/frame/test_alter_axes.py +++ b/pandas/tests/frame/test_alter_axes.py @@ -330,28 +330,30 @@ 
def test_rename(self): # gets sorted alphabetical df = DataFrame(data) renamed = df.rename(index={'foo': 'bar', 'bar': 'foo'}) - self.assert_numpy_array_equal(renamed.index, ['foo', 'bar']) + tm.assert_index_equal(renamed.index, pd.Index(['foo', 'bar'])) renamed = df.rename(index=str.upper) - self.assert_numpy_array_equal(renamed.index, ['BAR', 'FOO']) + tm.assert_index_equal(renamed.index, pd.Index(['BAR', 'FOO'])) # have to pass something self.assertRaises(TypeError, self.frame.rename) # partial columns renamed = self.frame.rename(columns={'C': 'foo', 'D': 'bar'}) - self.assert_numpy_array_equal( - renamed.columns, ['A', 'B', 'foo', 'bar']) + tm.assert_index_equal(renamed.columns, + pd.Index(['A', 'B', 'foo', 'bar'])) # other axis renamed = self.frame.T.rename(index={'C': 'foo', 'D': 'bar'}) - self.assert_numpy_array_equal(renamed.index, ['A', 'B', 'foo', 'bar']) + tm.assert_index_equal(renamed.index, + pd.Index(['A', 'B', 'foo', 'bar'])) # index with name index = Index(['foo', 'bar'], name='name') renamer = DataFrame(data, index=index) renamed = renamer.rename(index={'foo': 'bar', 'bar': 'foo'}) - self.assert_numpy_array_equal(renamed.index, ['bar', 'foo']) + tm.assert_index_equal(renamed.index, + pd.Index(['bar', 'foo'], name='name')) self.assertEqual(renamed.index.name, renamer.index.name) # MultiIndex @@ -363,12 +365,14 @@ def test_rename(self): renamer = DataFrame([(0, 0), (1, 1)], index=index, columns=columns) renamed = renamer.rename(index={'foo1': 'foo3', 'bar2': 'bar3'}, columns={'fizz1': 'fizz3', 'buzz2': 'buzz3'}) - new_index = MultiIndex.from_tuples( - [('foo3', 'bar1'), ('foo2', 'bar3')]) - new_columns = MultiIndex.from_tuples( - [('fizz3', 'buzz1'), ('fizz2', 'buzz3')]) - self.assert_numpy_array_equal(renamed.index, new_index) - self.assert_numpy_array_equal(renamed.columns, new_columns) + new_index = MultiIndex.from_tuples([('foo3', 'bar1'), + ('foo2', 'bar3')], + names=['foo', 'bar']) + new_columns = MultiIndex.from_tuples([('fizz3', 'buzz1'), + 
('fizz2', 'buzz3')], + names=['fizz', 'buzz']) + self.assert_index_equal(renamed.index, new_index) + self.assert_index_equal(renamed.columns, new_columns) self.assertEqual(renamed.index.names, renamer.index.names) self.assertEqual(renamed.columns.names, renamer.columns.names) @@ -460,28 +464,30 @@ def test_reset_index(self): stacked.index.names = [None, None] deleveled2 = stacked.reset_index() - self.assert_numpy_array_equal(deleveled['first'], - deleveled2['level_0']) - self.assert_numpy_array_equal(deleveled['second'], - deleveled2['level_1']) + tm.assert_series_equal(deleveled['first'], deleveled2['level_0'], + check_names=False) + tm.assert_series_equal(deleveled['second'], deleveled2['level_1'], + check_names=False) # default name assigned rdf = self.frame.reset_index() - self.assert_numpy_array_equal(rdf['index'], self.frame.index.values) + exp = pd.Series(self.frame.index.values, name='index') + self.assert_series_equal(rdf['index'], exp) # default name assigned, corner case df = self.frame.copy() df['index'] = 'foo' rdf = df.reset_index() - self.assert_numpy_array_equal(rdf['level_0'], self.frame.index.values) + exp = pd.Series(self.frame.index.values, name='level_0') + self.assert_series_equal(rdf['level_0'], exp) # but this is ok self.frame.index.name = 'index' deleveled = self.frame.reset_index() - self.assert_numpy_array_equal(deleveled['index'], - self.frame.index.values) - self.assert_numpy_array_equal(deleveled.index, - np.arange(len(deleveled))) + self.assert_series_equal(deleveled['index'], + pd.Series(self.frame.index)) + self.assert_index_equal(deleveled.index, + pd.Index(np.arange(len(deleveled)))) # preserve column names self.frame.columns.name = 'columns' diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py index 20aaae586f14f..b71235a8f6576 100644 --- a/pandas/tests/frame/test_analytics.py +++ b/pandas/tests/frame/test_analytics.py @@ -18,12 +18,6 @@ import pandas.core.nanops as nanops import 
pandas.formats.printing as printing -from pandas.util.testing import (assert_almost_equal, - assert_equal, - assert_series_equal, - assert_frame_equal, - assertRaisesRegexp) - import pandas.util.testing as tm from pandas.tests.frame.common import TestData @@ -60,12 +54,12 @@ def _check_method(self, method='pearson', check_minp=False): if not check_minp: correls = self.frame.corr(method=method) exp = self.frame['A'].corr(self.frame['C'], method=method) - assert_almost_equal(correls['A']['C'], exp) + tm.assert_almost_equal(correls['A']['C'], exp) else: result = self.frame.corr(min_periods=len(self.frame) - 8) expected = self.frame.corr() expected.ix['A', 'B'] = expected.ix['B', 'A'] = nan - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_corr_non_numeric(self): tm._skip_if_no_scipy() @@ -75,7 +69,7 @@ def test_corr_non_numeric(self): # exclude non-numeric types result = self.mixed_frame.corr() expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].corr() - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_corr_nooverlap(self): tm._skip_if_no_scipy() @@ -123,14 +117,14 @@ def test_corr_int_and_boolean(self): expected = DataFrame(np.ones((2, 2)), index=[ 'a', 'b'], columns=['a', 'b']) for meth in ['pearson', 'kendall', 'spearman']: - assert_frame_equal(df.corr(meth), expected) + tm.assert_frame_equal(df.corr(meth), expected) def test_cov(self): # min_periods no NAs (corner case) expected = self.frame.cov() result = self.frame.cov(min_periods=len(self.frame)) - assert_frame_equal(expected, result) + tm.assert_frame_equal(expected, result) result = self.frame.cov(min_periods=len(self.frame) + 1) self.assertTrue(isnull(result.values).all()) @@ -149,25 +143,25 @@ def test_cov(self): self.frame['B'][:10] = nan cov = self.frame.cov() - assert_almost_equal(cov['A']['C'], - self.frame['A'].cov(self.frame['C'])) + tm.assert_almost_equal(cov['A']['C'], + self.frame['A'].cov(self.frame['C'])) # exclude 
non-numeric types
         result = self.mixed_frame.cov()
         expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].cov()
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # Single column frame
         df = DataFrame(np.linspace(0.0, 1.0, 10))
         result = df.cov()
         expected = DataFrame(np.cov(df.values.T).reshape((1, 1)),
                              index=df.columns, columns=df.columns)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
         df.ix[0] = np.nan
         result = df.cov()
         expected = DataFrame(np.cov(df.values[1:].T).reshape((1, 1)),
                              index=df.columns, columns=df.columns)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_corrwith(self):
         a = self.tsframe
@@ -180,13 +174,13 @@ def test_corrwith(self):
         del b['B']

         colcorr = a.corrwith(b, axis=0)
-        assert_almost_equal(colcorr['A'], a['A'].corr(b['A']))
+        tm.assert_almost_equal(colcorr['A'], a['A'].corr(b['A']))

         rowcorr = a.corrwith(b, axis=1)
-        assert_series_equal(rowcorr, a.T.corrwith(b.T, axis=0))
+        tm.assert_series_equal(rowcorr, a.T.corrwith(b.T, axis=0))

         dropped = a.corrwith(b, axis=0, drop=True)
-        assert_almost_equal(dropped['A'], a['A'].corr(b['A']))
+        tm.assert_almost_equal(dropped['A'], a['A'].corr(b['A']))
         self.assertNotIn('B', dropped)

         dropped = a.corrwith(b, axis=1, drop=True)
@@ -199,7 +193,7 @@ def test_corrwith(self):
         df2 = DataFrame(randn(4, 4), index=index[:4], columns=columns)
         correls = df1.corrwith(df2, axis=1)
         for row in index[:4]:
-            assert_almost_equal(correls[row], df1.ix[row].corr(df2.ix[row]))
+            tm.assert_almost_equal(correls[row], df1.ix[row].corr(df2.ix[row]))

     def test_corrwith_with_objects(self):
         df1 = tm.makeTimeDataFrame()
@@ -211,17 +205,17 @@ def test_corrwith_with_objects(self):

         result = df1.corrwith(df2)
         expected = df1.ix[:, cols].corrwith(df2.ix[:, cols])
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         result = df1.corrwith(df2, axis=1)
         expected = df1.ix[:, cols].corrwith(df2.ix[:, cols], axis=1)
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

     def test_corrwith_series(self):
         result = self.tsframe.corrwith(self.tsframe['A'])
         expected = self.tsframe.apply(self.tsframe['A'].corr)
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

     def test_corrwith_matches_corrcoef(self):
         df1 = DataFrame(np.arange(10000), columns=['a'])
@@ -229,7 +223,7 @@ def test_corrwith_matches_corrcoef(self):
         c1 = df1.corrwith(df2)['a']
         c2 = np.corrcoef(df1['a'], df2['a'])[0][1]

-        assert_almost_equal(c1, c2)
+        tm.assert_almost_equal(c1, c2)
         self.assertTrue(c1 < 1)

     def test_bool_describe_in_mixed_frame(self):
@@ -246,14 +240,14 @@ def test_bool_describe_in_mixed_frame(self):
                                           10, 20, 30, 40, 50]},
                              index=['count', 'mean', 'std', 'min', '25%',
                                     '50%', '75%', 'max'])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # Top value is a boolean value that is False
         result = df.describe(include=['bool'])

         expected = DataFrame({'bool_data': [5, 2, False, 3]},
                              index=['count', 'unique', 'top', 'freq'])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_describe_categorical_columns(self):
         # GH 11558
@@ -310,8 +304,9 @@ def test_reduce_mixed_frame(self):
         })
         df.reindex(columns=['bool_data', 'int_data', 'string_data'])
         test = df.sum(axis=0)
-        assert_almost_equal(test.values, [2, 150, 'abcde'])
-        assert_series_equal(test, df.T.sum(axis=1))
+        tm.assert_numpy_array_equal(test.values,
+                                    np.array([2, 150, 'abcde'], dtype=object))
+        tm.assert_series_equal(test, df.T.sum(axis=1))

     def test_count(self):
         f = lambda s: notnull(s).sum()
@@ -333,17 +328,17 @@ def test_count(self):
         df = DataFrame(index=lrange(10))
         result = df.count(1)
         expected = Series(0, index=df.index)
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         df = DataFrame(columns=lrange(10))
         result = df.count(0)
         expected = Series(0, index=df.columns)
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         df = DataFrame()
         result = df.count()
         expected = Series(0, index=[])
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

     def test_sum(self):
         self._check_stat_op('sum', np.sum, has_numeric_only=True)
@@ -377,7 +372,7 @@ def test_stat_operators_attempt_obj_array(self):
                 expected = getattr(df.astype('f8'), meth)(1)

                 if not tm._incompat_bottleneck_version(meth):
-                    assert_series_equal(result, expected)
+                    tm.assert_series_equal(result, expected)

     def test_mean(self):
         self._check_stat_op('mean', np.mean, check_dates=True)
@@ -405,12 +400,12 @@ def test_cummin(self):
         # axis = 0
         cummin = self.tsframe.cummin()
         expected = self.tsframe.apply(Series.cummin)
-        assert_frame_equal(cummin, expected)
+        tm.assert_frame_equal(cummin, expected)

         # axis = 1
         cummin = self.tsframe.cummin(axis=1)
         expected = self.tsframe.apply(Series.cummin, axis=1)
-        assert_frame_equal(cummin, expected)
+        tm.assert_frame_equal(cummin, expected)

         # it works
         df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
@@ -428,12 +423,12 @@ def test_cummax(self):
         # axis = 0
         cummax = self.tsframe.cummax()
         expected = self.tsframe.apply(Series.cummax)
-        assert_frame_equal(cummax, expected)
+        tm.assert_frame_equal(cummax, expected)

         # axis = 1
         cummax = self.tsframe.cummax(axis=1)
         expected = self.tsframe.apply(Series.cummax, axis=1)
-        assert_frame_equal(cummax, expected)
+        tm.assert_frame_equal(cummax, expected)

         # it works
         df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
@@ -460,11 +455,11 @@ def test_var_std(self):

         result = self.tsframe.std(ddof=4)
         expected = self.tsframe.apply(lambda x: x.std(ddof=4))
-        assert_almost_equal(result, expected)
+        tm.assert_almost_equal(result, expected)

         result = self.tsframe.var(ddof=4)
         expected = self.tsframe.apply(lambda x: x.var(ddof=4))
-        assert_almost_equal(result, expected)
+        tm.assert_almost_equal(result, expected)

         arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
         result = nanops.nanvar(arr, axis=0)
@@ -489,11 +484,11 @@ def test_numeric_only_flag(self):
         for meth in methods:
             result = getattr(df1, meth)(axis=1, numeric_only=True)
             expected = getattr(df1[['bar', 'baz']], meth)(axis=1)
-            assert_series_equal(expected, result)
+            tm.assert_series_equal(expected, result)

             result = getattr(df2, meth)(axis=1, numeric_only=True)
             expected = getattr(df2[['bar', 'baz']], meth)(axis=1)
-            assert_series_equal(expected, result)
+            tm.assert_series_equal(expected, result)

         # df1 has all numbers, df2 has a letter inside
         self.assertRaises(TypeError, lambda: getattr(df1, meth)
@@ -509,12 +504,12 @@ def test_cumsum(self):
         # axis = 0
         cumsum = self.tsframe.cumsum()
         expected = self.tsframe.apply(Series.cumsum)
-        assert_frame_equal(cumsum, expected)
+        tm.assert_frame_equal(cumsum, expected)

         # axis = 1
         cumsum = self.tsframe.cumsum(axis=1)
         expected = self.tsframe.apply(Series.cumsum, axis=1)
-        assert_frame_equal(cumsum, expected)
+        tm.assert_frame_equal(cumsum, expected)

         # works
         df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
@@ -532,12 +527,12 @@ def test_cumprod(self):
         # axis = 0
         cumprod = self.tsframe.cumprod()
         expected = self.tsframe.apply(Series.cumprod)
-        assert_frame_equal(cumprod, expected)
+        tm.assert_frame_equal(cumprod, expected)

         # axis = 1
         cumprod = self.tsframe.cumprod(axis=1)
         expected = self.tsframe.apply(Series.cumprod, axis=1)
-        assert_frame_equal(cumprod, expected)
+        tm.assert_frame_equal(cumprod, expected)

         # fix issue
         cumprod_xs = self.tsframe.cumprod(axis=1)
@@ -574,48 +569,48 @@ def test_rank(self):
         exp1 = np.apply_along_axis(rankdata, 1, fvals)
         exp1[mask] = np.nan

-        assert_almost_equal(ranks0.values, exp0)
-        assert_almost_equal(ranks1.values, exp1)
+        tm.assert_almost_equal(ranks0.values, exp0)
+        tm.assert_almost_equal(ranks1.values, exp1)

         # integers
         df = DataFrame(np.random.randint(0, 5, size=40).reshape((10, 4)))

         result = df.rank()
         exp = df.astype(float).rank()
-        assert_frame_equal(result, exp)
+        tm.assert_frame_equal(result, exp)

         result = df.rank(1)
         exp = df.astype(float).rank(1)
-        assert_frame_equal(result, exp)
+        tm.assert_frame_equal(result, exp)

     def test_rank2(self):
         df = DataFrame([[1, 3, 2], [1, 2, 3]])
         expected = DataFrame([[1.0, 3.0, 2.0], [1, 2, 3]]) / 3.0
         result = df.rank(1, pct=True)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df = DataFrame([[1, 3, 2], [1, 2, 3]])
         expected = df.rank(0) / 2.0
         result = df.rank(0, pct=True)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df = DataFrame([['b', 'c', 'a'], ['a', 'c', 'b']])
         expected = DataFrame([[2.0, 3.0, 1.0], [1, 3, 2]])
         result = df.rank(1, numeric_only=False)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         expected = DataFrame([[2.0, 1.5, 1.0], [1, 1.5, 2]])
         result = df.rank(0, numeric_only=False)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df = DataFrame([['b', np.nan, 'a'], ['a', 'c', 'b']])
         expected = DataFrame([[2.0, nan, 1.0], [1.0, 3.0, 2.0]])
         result = df.rank(1, numeric_only=False)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         expected = DataFrame([[2.0, nan, 1.0], [1.0, 1.0, 2.0]])
         result = df.rank(0, numeric_only=False)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # f7u12, this does not work without extensive workaround
         data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)],
@@ -627,12 +622,12 @@ def test_rank2(self):

         expected = DataFrame([[2., nan, 1.],
                               [2., 3., 1.]])
         result = df.rank(1, numeric_only=False, ascending=True)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         expected = DataFrame([[1., nan, 2.],
                               [2., 1., 3.]])
         result = df.rank(1, numeric_only=False, ascending=False)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # mixed-type frames
         self.mixed_frame['datetime'] = datetime.now()
@@ -640,12 +635,12 @@ def test_rank2(self):

         result = self.mixed_frame.rank(1)
         expected = self.mixed_frame.rank(1, numeric_only=True)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df = DataFrame({"a": [1e-20, -5, 1e-20 + 1e-40, 10,
                               1e60, 1e80, 1e-30]})
         exp = DataFrame({"a": [3.5, 1., 3.5, 5., 6., 7., 2.]})
-        assert_frame_equal(df.rank(), exp)
+        tm.assert_frame_equal(df.rank(), exp)

     def test_rank_na_option(self):
         tm._skip_if_no_scipy()
@@ -665,8 +660,8 @@ def test_rank_na_option(self):
         exp0 = np.apply_along_axis(rankdata, 0, fvals)
         exp1 = np.apply_along_axis(rankdata, 1, fvals)

-        assert_almost_equal(ranks0.values, exp0)
-        assert_almost_equal(ranks1.values, exp1)
+        tm.assert_almost_equal(ranks0.values, exp0)
+        tm.assert_almost_equal(ranks1.values, exp1)

         # top
         ranks0 = self.frame.rank(na_option='top')
@@ -680,8 +675,8 @@ def test_rank_na_option(self):
         exp0 = np.apply_along_axis(rankdata, 0, fval0)
         exp1 = np.apply_along_axis(rankdata, 1, fval1)

-        assert_almost_equal(ranks0.values, exp0)
-        assert_almost_equal(ranks1.values, exp1)
+        tm.assert_almost_equal(ranks0.values, exp0)
+        tm.assert_almost_equal(ranks1.values, exp1)

         # descending

@@ -694,8 +689,8 @@ def test_rank_na_option(self):
         exp0 = np.apply_along_axis(rankdata, 0, -fvals)
         exp1 = np.apply_along_axis(rankdata, 1, -fvals)

-        assert_almost_equal(ranks0.values, exp0)
-        assert_almost_equal(ranks1.values, exp1)
+        tm.assert_almost_equal(ranks0.values, exp0)
+        tm.assert_almost_equal(ranks1.values, exp1)

         # descending

@@ -711,14 +706,14 @@ def test_rank_na_option(self):
         exp0 = np.apply_along_axis(rankdata, 0, -fval0)
         exp1 = np.apply_along_axis(rankdata, 1, -fval1)

-        assert_almost_equal(ranks0.values, exp0)
-        assert_almost_equal(ranks1.values, exp1)
+        tm.assert_numpy_array_equal(ranks0.values, exp0)
+        tm.assert_numpy_array_equal(ranks1.values, exp1)

     def test_rank_axis(self):
         # check if using axes' names gives the same result
         df = pd.DataFrame([[2, 1], [4, 3]])
-        assert_frame_equal(df.rank(axis=0), df.rank(axis='index'))
-        assert_frame_equal(df.rank(axis=1), df.rank(axis='columns'))
+        tm.assert_frame_equal(df.rank(axis=0), df.rank(axis='index'))
+        tm.assert_frame_equal(df.rank(axis=1), df.rank(axis='columns'))

     def test_sem(self):
         alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
@@ -727,7 +722,7 @@ def test_sem(self):
         result = self.tsframe.sem(ddof=4)
         expected = self.tsframe.apply(
             lambda x: x.std(ddof=4) / np.sqrt(len(x)))
-        assert_almost_equal(result, expected)
+        tm.assert_almost_equal(result, expected)

         arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
         result = nanops.nansem(arr, axis=0)
@@ -789,7 +784,7 @@ def alt(x):

         kurt = df.kurt()
         kurt2 = df.kurt(level=0).xs('bar')
-        assert_series_equal(kurt, kurt2, check_names=False)
+        tm.assert_series_equal(kurt, kurt2, check_names=False)
         self.assertTrue(kurt.name is None)
         self.assertEqual(kurt2.name, 'bar')

@@ -827,26 +822,26 @@ def wrapper(x):

             result0 = f(axis=0, skipna=False)
             result1 = f(axis=1, skipna=False)
-            assert_series_equal(result0, frame.apply(wrapper),
-                                check_dtype=check_dtype,
-                                check_less_precise=check_less_precise)
+            tm.assert_series_equal(result0, frame.apply(wrapper),
+                                   check_dtype=check_dtype,
+                                   check_less_precise=check_less_precise)
             # HACK: win32
-            assert_series_equal(result1, frame.apply(wrapper, axis=1),
-                                check_dtype=False,
-                                check_less_precise=check_less_precise)
+            tm.assert_series_equal(result1, frame.apply(wrapper, axis=1),
+                                   check_dtype=False,
+                                   check_less_precise=check_less_precise)
         else:
            skipna_wrapper = alternative
            wrapper = alternative

         result0 = f(axis=0)
         result1 = f(axis=1)
-        assert_series_equal(result0, frame.apply(skipna_wrapper),
-                            check_dtype=check_dtype,
-                            check_less_precise=check_less_precise)
+        tm.assert_series_equal(result0, frame.apply(skipna_wrapper),
+                               check_dtype=check_dtype,
+                               check_less_precise=check_less_precise)
         if not tm._incompat_bottleneck_version(name):
-            assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
-                                check_dtype=False,
-                                check_less_precise=check_less_precise)
+            exp = frame.apply(skipna_wrapper, axis=1)
+            tm.assert_series_equal(result1, exp, check_dtype=False,
+                                   check_less_precise=check_less_precise)

         # check dtypes
         if check_dtype:
@@ -859,7 +854,7 @@ def wrapper(x):
         # assert_series_equal(result, comp)

         # bad axis
-        assertRaisesRegexp(ValueError, 'No axis named 2', f, axis=2)
+        tm.assertRaisesRegexp(ValueError, 'No axis named 2', f, axis=2)
         # make sure works on mixed-type frame
         getattr(self.mixed_frame, name)(axis=0)
         getattr(self.mixed_frame, name)(axis=1)
@@ -885,20 +880,20 @@ def test_mode(self):
                         "C": [8, 8, 8, 9, 9, 9],
                         "D": np.arange(6, dtype='int64'),
                         "E": [8, 8, 1, 1, 3, 3]})
-        assert_frame_equal(df[["A"]].mode(),
-                           pd.DataFrame({"A": [12]}))
+        tm.assert_frame_equal(df[["A"]].mode(),
+                              pd.DataFrame({"A": [12]}))
         expected = pd.Series([], dtype='int64', name='D').to_frame()
-        assert_frame_equal(df[["D"]].mode(), expected)
+        tm.assert_frame_equal(df[["D"]].mode(), expected)
         expected = pd.Series([1, 3, 8], dtype='int64', name='E').to_frame()
-        assert_frame_equal(df[["E"]].mode(), expected)
-        assert_frame_equal(df[["A", "B"]].mode(),
-                           pd.DataFrame({"A": [12], "B": [10.]}))
-        assert_frame_equal(df.mode(),
-                           pd.DataFrame({"A": [12, np.nan, np.nan],
-                                         "B": [10, np.nan, np.nan],
-                                         "C": [8, 9, np.nan],
-                                         "D": [np.nan, np.nan, np.nan],
-                                         "E": [1, 3, 8]}))
+        tm.assert_frame_equal(df[["E"]].mode(), expected)
+        tm.assert_frame_equal(df[["A", "B"]].mode(),
+                              pd.DataFrame({"A": [12], "B": [10.]}))
+        tm.assert_frame_equal(df.mode(),
+                              pd.DataFrame({"A": [12, np.nan, np.nan],
+                                            "B": [10, np.nan, np.nan],
+                                            "C": [8, 9, np.nan],
+                                            "D": [np.nan, np.nan, np.nan],
+                                            "E": [1, 3, 8]}))

         # outputs in sorted order
         df["C"] = list(reversed(df["C"]))
@@ -910,7 +905,7 @@ def test_mode(self):
                                       "C": [8, 9]}))
         printing.pprint_thing(a)
         printing.pprint_thing(b)
-        assert_frame_equal(a, b)
+        tm.assert_frame_equal(a, b)
         # should work with heterogeneous types
         df = pd.DataFrame({"A": np.arange(6, dtype='int64'),
                            "B": pd.date_range('2011', periods=6),
         exp = pd.DataFrame({"A": pd.Series([], dtype=df["A"].dtype),
                             "B": pd.Series([], dtype=df["B"].dtype),
                             "C": pd.Series([], dtype=df["C"].dtype)})
-        assert_frame_equal(df.mode(), exp)
+        tm.assert_frame_equal(df.mode(), exp)

         # and also when not empty
         df.loc[1, "A"] = 0
@@ -929,7 +924,7 @@ def test_mode(self):
                                           dtype=df["B"].dtype),
                             "C": pd.Series(['e'],
                                            dtype=df["C"].dtype)})
-        assert_frame_equal(df.mode(), exp)
+        tm.assert_frame_equal(df.mode(), exp)

     def test_operators_timedelta64(self):
         from datetime import timedelta
@@ -962,8 +957,8 @@ def test_operators_timedelta64(self):
         result2 = abs(diffs)
         expected = DataFrame(dict(A=df['A'] - df['C'],
                                   B=df['B'] - df['A']))
-        assert_frame_equal(result, expected)
-        assert_frame_equal(result2, expected)
+        tm.assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result2, expected)

         # mixed frame
         mixed = diffs.copy()
@@ -982,22 +977,22 @@ def test_operators_timedelta64(self):
                            'foo', 1, 1.0,
                            Timestamp('20130101')],
                           index=mixed.columns)
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         # excludes numeric
         result = mixed.min(axis=1)
         expected = Series([1, 1, 1.], index=[0, 1, 2])
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         # works when only those columns are selected
         result = mixed[['A', 'B']].min(1)
         expected = Series([timedelta(days=-1)] * 3)
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         result = mixed[['A', 'B']].min()
         expected = Series([timedelta(seconds=5 * 60 + 5),
                            timedelta(days=-1)], index=['A', 'B'])
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         # GH 3106
         df = DataFrame({'time': date_range('20130102', periods=5),
@@ -1035,13 +1030,13 @@ def test_mean_corner(self):
         # unit test when have object data
         the_mean = self.mixed_frame.mean(axis=0)
         the_sum = self.mixed_frame.sum(axis=0, numeric_only=True)
-        self.assertTrue(the_sum.index.equals(the_mean.index))
+        self.assert_index_equal(the_sum.index, the_mean.index)
         self.assertTrue(len(the_mean.index) < len(self.mixed_frame.columns))

         # xs sum mixed type, just want to know it works...
         the_mean = self.mixed_frame.mean(axis=1)
         the_sum = self.mixed_frame.sum(axis=1, numeric_only=True)
-        self.assertTrue(the_sum.index.equals(the_mean.index))
+        self.assert_index_equal(the_sum.index, the_mean.index)

         # take mean of boolean column
         self.frame['bool'] = self.frame['A'] > 0
@@ -1070,8 +1065,8 @@ def test_count_objects(self):
         dm = DataFrame(self.mixed_frame._series)
         df = DataFrame(self.mixed_frame._series)

-        assert_series_equal(dm.count(), df.count())
-        assert_series_equal(dm.count(1), df.count(1))
+        tm.assert_series_equal(dm.count(), df.count())
+        tm.assert_series_equal(dm.count(1), df.count(1))

     def test_cumsum_corner(self):
         dm = DataFrame(np.arange(20).reshape(4, 5),
@@ -1094,9 +1089,9 @@ def test_idxmin(self):
             for axis in [0, 1]:
                 for df in [frame, self.intframe]:
                     result = df.idxmin(axis=axis, skipna=skipna)
-                    expected = df.apply(
-                        Series.idxmin, axis=axis, skipna=skipna)
-                    assert_series_equal(result, expected)
+                    expected = df.apply(Series.idxmin, axis=axis,
+                                        skipna=skipna)
+                    tm.assert_series_equal(result, expected)

         self.assertRaises(ValueError, frame.idxmin, axis=2)

@@ -1108,9 +1103,9 @@ def test_idxmax(self):
             for axis in [0, 1]:
                 for df in [frame, self.intframe]:
                     result = df.idxmax(axis=axis, skipna=skipna)
-                    expected = df.apply(
-                        Series.idxmax, axis=axis, skipna=skipna)
-                    assert_series_equal(result, expected)
+                    expected = df.apply(Series.idxmax, axis=axis,
+                                        skipna=skipna)
+                    tm.assert_series_equal(result, expected)

         self.assertRaises(ValueError, frame.idxmax, axis=2)

@@ -1169,18 +1164,18 @@ def wrapper(x):

             result0 = f(axis=0, skipna=False)
             result1 = f(axis=1, skipna=False)
-            assert_series_equal(result0, frame.apply(wrapper))
-            assert_series_equal(result1, frame.apply(wrapper, axis=1),
-                                check_dtype=False)  # HACK: win32
+            tm.assert_series_equal(result0, frame.apply(wrapper))
+            tm.assert_series_equal(result1, frame.apply(wrapper, axis=1),
+                                   check_dtype=False)  # HACK: win32
         else:
             skipna_wrapper = alternative
             wrapper = alternative

         result0 = f(axis=0)
         result1 = f(axis=1)
-        assert_series_equal(result0, frame.apply(skipna_wrapper))
-        assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
-                            check_dtype=False)
+        tm.assert_series_equal(result0, frame.apply(skipna_wrapper))
+        tm.assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
+                               check_dtype=False)

         # result = f(axis=1)
         # comp = frame.apply(alternative, axis=1).reindex(result.index)
@@ -1230,7 +1225,7 @@ def test_nlargest(self):
                        'b': list(ascii_lowercase[:10])})
         result = df.nlargest(5, 'a')
         expected = df.sort_values('a', ascending=False).head(5)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_nlargest_multiple_columns(self):
         from string import ascii_lowercase
@@ -1239,7 +1234,7 @@ def test_nlargest_multiple_columns(self):
                         'c': np.random.permutation(10).astype('float64')})
         result = df.nlargest(5, ['a', 'b'])
         expected = df.sort_values(['a', 'b'], ascending=False).head(5)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_nsmallest(self):
         from string import ascii_lowercase
@@ -1247,7 +1242,7 @@ def test_nsmallest(self):
                        'b': list(ascii_lowercase[:10])})
         result = df.nsmallest(5, 'a')
         expected = df.sort_values('a').head(5)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_nsmallest_multiple_columns(self):
         from string import ascii_lowercase
@@ -1256,7 +1251,7 @@ def test_nsmallest_multiple_columns(self):
                         'c': np.random.permutation(10).astype('float64')})
         result = df.nsmallest(5, ['a', 'c'])
         expected = df.sort_values(['a', 'c']).head(5)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     # ----------------------------------------------------------------------
     # Isin

@@ -1270,13 +1265,13 @@ def test_isin(self):
         result = df.isin(other)
         expected = DataFrame([df.loc[s].isin(other) for s in df.index])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_isin_empty(self):
         df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
         result = df.isin([])
         expected = pd.DataFrame(False, df.index, df.columns)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_isin_dict(self):
         df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
@@ -1286,7 +1281,7 @@ def test_isin_dict(self):
         expected.loc[0, 'A'] = True

         result = df.isin(d)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # non unique columns
         df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
@@ -1294,7 +1289,7 @@ def test_isin_dict(self):
         expected = DataFrame(False, df.index, df.columns)
         expected.loc[0, 'A'] = True
         result = df.isin(d)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_isin_with_string_scalar(self):
         # GH4763
@@ -1314,13 +1309,13 @@ def test_isin_df(self):
         result = df1.isin(df2)
         expected['A'].loc[[1, 3]] = True
         expected['B'].loc[[0, 2]] = True
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # partial overlapping columns
         df2.columns = ['A', 'C']
         result = df1.isin(df2)
         expected['B'] = False
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_isin_df_dupe_values(self):
         df1 = DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]})
@@ -1348,7 +1343,7 @@ def test_isin_dupe_self(self):
         expected = DataFrame(False, index=df.index, columns=df.columns)
         expected.loc[0] = True
         expected.iloc[1, 1] = True
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_isin_against_series(self):
         df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]},
@@ -1358,7 +1353,7 @@ def test_isin_against_series(self):
         expected['A'].loc['a'] = True
         expected.loc['d'] = True
         result = df.isin(s)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_isin_multiIndex(self):
         idx = MultiIndex.from_tuples([(0, 'a', 'foo'), (0, 'a', 'bar'),
@@ -1374,7 +1369,7 @@ def test_isin_multiIndex(self):
         # against regular index
         expected = DataFrame(False, index=df1.index, columns=df1.columns)
         result = df1.isin(df2)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df2.index = idx
         expected = df2.values.astype(np.bool)
@@ -1382,7 +1377,7 @@ def test_isin_multiIndex(self):
         expected = DataFrame(expected, columns=['A', 'B'], index=idx)
         result = df1.isin(df2)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     # ----------------------------------------------------------------------
     # Row deduplication

@@ -1398,43 +1393,43 @@ def test_drop_duplicates(self):
         # single column
         result = df.drop_duplicates('AAA')
         expected = df[:2]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('AAA', keep='last')
         expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('AAA', keep=False)
         expected = df.ix[[]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
         self.assertEqual(len(result), 0)

         # deprecate take_last
         with tm.assert_produces_warning(FutureWarning):
             result = df.drop_duplicates('AAA', take_last=True)
         expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # multi column
         expected = df.ix[[0, 1, 2, 3]]
         result = df.drop_duplicates(np.array(['AAA', 'B']))
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
         result = df.drop_duplicates(['AAA', 'B'])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(('AAA', 'B'), keep='last')
         expected = df.ix[[0, 5, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(('AAA', 'B'), keep=False)
         expected = df.ix[[0]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # deprecate take_last
         with tm.assert_produces_warning(FutureWarning):
             result = df.drop_duplicates(('AAA', 'B'), take_last=True)
         expected = df.ix[[0, 5, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # consider everything
         df2 = df.ix[:, ['AAA', 'B', 'C']]
@@ -1442,64 +1437,64 @@ def test_drop_duplicates(self):
         result = df2.drop_duplicates()
         # in this case only
         expected = df2.drop_duplicates(['AAA', 'B'])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df2.drop_duplicates(keep='last')
         expected = df2.drop_duplicates(['AAA', 'B'], keep='last')
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df2.drop_duplicates(keep=False)
         expected = df2.drop_duplicates(['AAA', 'B'], keep=False)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # deprecate take_last
         with tm.assert_produces_warning(FutureWarning):
             result = df2.drop_duplicates(take_last=True)
         with tm.assert_produces_warning(FutureWarning):
             expected = df2.drop_duplicates(['AAA', 'B'], take_last=True)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # integers
         result = df.drop_duplicates('C')
         expected = df.iloc[[0, 2]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
         result = df.drop_duplicates('C', keep='last')
         expected = df.iloc[[-2, -1]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df['E'] = df['C'].astype('int8')
         result = df.drop_duplicates('E')
         expected = df.iloc[[0, 2]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
         result = df.drop_duplicates('E', keep='last')
         expected = df.iloc[[-2, -1]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # GH 11376
         df = pd.DataFrame({'x': [7, 6, 3, 3, 4, 8, 0],
                            'y': [0, 6, 5, 5, 9, 1, 2]})
         expected = df.loc[df.index != 3]
-        assert_frame_equal(df.drop_duplicates(), expected)
+        tm.assert_frame_equal(df.drop_duplicates(), expected)

         df = pd.DataFrame([[1, 0], [0, 2]])
-        assert_frame_equal(df.drop_duplicates(), df)
+        tm.assert_frame_equal(df.drop_duplicates(), df)

         df = pd.DataFrame([[-2, 0], [0, -4]])
-        assert_frame_equal(df.drop_duplicates(), df)
+        tm.assert_frame_equal(df.drop_duplicates(), df)

         x = np.iinfo(np.int64).max / 3 * 2
         df = pd.DataFrame([[-x, x], [0, x + 4]])
-        assert_frame_equal(df.drop_duplicates(), df)
+        tm.assert_frame_equal(df.drop_duplicates(), df)

         df = pd.DataFrame([[-x, x], [x, x + 4]])
-        assert_frame_equal(df.drop_duplicates(), df)
+        tm.assert_frame_equal(df.drop_duplicates(), df)

         # GH 11864
         df = pd.DataFrame([i] * 9 for i in range(16))
         df = df.append([[1] + [0] * 8], ignore_index=True)

         for keep in ['first', 'last', False]:
-            assert_equal(df.duplicated(keep=keep).sum(), 0)
+            self.assertEqual(df.duplicated(keep=keep).sum(), 0)

     def test_drop_duplicates_for_take_all(self):
         df = DataFrame({'AAA': ['foo', 'bar', 'baz', 'bar',
@@ -1512,28 +1507,28 @@ def test_drop_duplicates_for_take_all(self):
         # single column
         result = df.drop_duplicates('AAA')
         expected = df.iloc[[0, 1, 2, 6]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('AAA', keep='last')
         expected = df.iloc[[2, 5, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('AAA', keep=False)
         expected = df.iloc[[2, 6]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # multiple columns
         result = df.drop_duplicates(['AAA', 'B'])
         expected = df.iloc[[0, 1, 2, 3, 4, 6]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(['AAA', 'B'], keep='last')
         expected = df.iloc[[0, 1, 2, 5, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(['AAA', 'B'], keep=False)
         expected = df.iloc[[0, 1, 2, 6]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_drop_duplicates_tuple(self):
         df = DataFrame({('AA', 'AB'): ['foo', 'bar', 'foo', 'bar',
@@ -1546,27 +1541,27 @@ def test_drop_duplicates_tuple(self):
         # single column
         result = df.drop_duplicates(('AA', 'AB'))
         expected = df[:2]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(('AA', 'AB'), keep='last')
         expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(('AA', 'AB'), keep=False)
         expected = df.ix[[]]  # empty df
         self.assertEqual(len(result), 0)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # deprecate take_last
         with tm.assert_produces_warning(FutureWarning):
             result = df.drop_duplicates(('AA', 'AB'), take_last=True)
         expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # multi column
         expected = df.ix[[0, 1, 2, 3]]
         result = df.drop_duplicates((('AA', 'AB'), 'B'))
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_drop_duplicates_NA(self):
         # none
@@ -1580,41 +1575,41 @@ def test_drop_duplicates_NA(self):
         # single column
         result = df.drop_duplicates('A')
         expected = df.ix[[0, 2, 3]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('A', keep='last')
         expected = df.ix[[1, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('A', keep=False)
         expected = df.ix[[]]  # empty df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
         self.assertEqual(len(result), 0)

         # deprecate take_last
         with tm.assert_produces_warning(FutureWarning):
             result = df.drop_duplicates('A', take_last=True)
         expected = df.ix[[1, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # multi column
         result = df.drop_duplicates(['A', 'B'])
         expected = df.ix[[0, 2, 3, 6]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(['A', 'B'], keep='last')
         expected = df.ix[[1, 5, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(['A', 'B'], keep=False)
         expected = df.ix[[6]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # deprecate take_last
         with tm.assert_produces_warning(FutureWarning):
             result = df.drop_duplicates(['A', 'B'], take_last=True)
         expected = df.ix[[1, 5, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # nan
         df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
@@ -1627,41 +1622,41 @@ def test_drop_duplicates_NA(self):
         # single column
         result = df.drop_duplicates('C')
         expected = df[:2]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('C', keep='last')
         expected = df.ix[[3, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('C', keep=False)
         expected = df.ix[[]]  # empty df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
         self.assertEqual(len(result), 0)

         # deprecate take_last
         with tm.assert_produces_warning(FutureWarning):
             result = df.drop_duplicates('C', take_last=True)
         expected = df.ix[[3, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # multi column
         result = df.drop_duplicates(['C', 'B'])
         expected = df.ix[[0, 1, 2, 4]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(['C', 'B'], keep='last')
         expected = df.ix[[1, 3, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates(['C', 'B'], keep=False)
         expected = df.ix[[1]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # deprecate take_last
         with tm.assert_produces_warning(FutureWarning):
             result = df.drop_duplicates(['C', 'B'], take_last=True)
         expected = df.ix[[1, 3, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_drop_duplicates_NA_for_take_all(self):
         # none
@@ -1672,30 +1667,30 @@ def test_drop_duplicates_NA_for_take_all(self):
         # single column
         result = df.drop_duplicates('A')
         expected = df.iloc[[0, 2, 3, 5, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('A', keep='last')
         expected = df.iloc[[1, 4, 5, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('A', keep=False)
         expected = df.iloc[[5, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # nan

         # single column
         result = df.drop_duplicates('C')
         expected = df.iloc[[0, 1, 5, 6]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('C', keep='last')
         expected = df.iloc[[3, 5, 6, 7]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = df.drop_duplicates('C', keep=False)
         expected = df.iloc[[5, 6]]
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_drop_duplicates_inplace(self):
         orig = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
@@ -1710,19 +1705,19 @@ def test_drop_duplicates_inplace(self):
         df.drop_duplicates('A', inplace=True)
         expected = orig[:2]
         result = df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df = orig.copy()
         df.drop_duplicates('A', keep='last', inplace=True)
         expected = orig.ix[[6, 7]]
         result = df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df = orig.copy()
         df.drop_duplicates('A', keep=False, inplace=True)
         expected = orig.ix[[]]
         result = df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
         self.assertEqual(len(df), 0)

         # deprecate take_last
@@ -1731,26 +1726,26 @@ def test_drop_duplicates_inplace(self):
             df.drop_duplicates('A', take_last=True, inplace=True)
         expected = orig.ix[[6, 7]]
         result = df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # multi column
         df = orig.copy()
         df.drop_duplicates(['A', 'B'], inplace=True)
         expected = orig.ix[[0, 1, 2, 3]]
         result = df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df = orig.copy()
         df.drop_duplicates(['A', 'B'], keep='last', inplace=True)
         expected = orig.ix[[0, 5, 6, 7]]
         result = df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df = orig.copy()
         df.drop_duplicates(['A', 'B'], keep=False, inplace=True)
         expected = orig.ix[[0]]
         result = df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # deprecate take_last
         df = orig.copy()
@@ -1758,7 +1753,7 @@ def test_drop_duplicates_inplace(self):
             df.drop_duplicates(['A', 'B'], take_last=True, inplace=True)
         expected = orig.ix[[0, 5, 6, 7]]
         result = df
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # consider everything
         orig2 = orig.ix[:, ['A', 'B', 'C']].copy()
@@ -1768,19 +1763,19 @@ def test_drop_duplicates_inplace(self):
         # in this case only
         expected = orig2.drop_duplicates(['A', 'B'])
         result = df2
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df2 = orig2.copy()
         df2.drop_duplicates(keep='last', inplace=True)
         expected = orig2.drop_duplicates(['A', 'B'], keep='last')
         result = df2
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         df2 = orig2.copy()
         df2.drop_duplicates(keep=False, inplace=True)
         expected = orig2.drop_duplicates(['A', 'B'], keep=False)
         result = df2
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # deprecate take_last
         df2 = orig2.copy()
@@ -1789,7 +1784,7 @@ def test_drop_duplicates_inplace(self):
         with tm.assert_produces_warning(FutureWarning):
             expected = orig2.drop_duplicates(['A', 'B'], take_last=True)
         result = df2
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     # Rounding

@@ -1798,26 +1793,26 @@ def test_round(self):

         # Test that rounding an empty DataFrame does nothing
         df = DataFrame()
-        assert_frame_equal(df, df.round())
+        tm.assert_frame_equal(df, df.round())

         # Here's the test frame we'll be working with
-        df = DataFrame(
-            {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]})
+        df = DataFrame({'col1': [1.123, 2.123, 3.123],
+                        'col2': [1.234, 2.234, 3.234]})

         # Default round to integer (i.e. decimals=0)
         expected_rounded = DataFrame(
             {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]})
-        assert_frame_equal(df.round(), expected_rounded)
+        tm.assert_frame_equal(df.round(), expected_rounded)

         # Round with an integer
         decimals = 2
-        expected_rounded = DataFrame(
-            {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]})
-        assert_frame_equal(df.round(decimals), expected_rounded)
+        expected_rounded = DataFrame({'col1': [1.12, 2.12, 3.12],
+                                      'col2': [1.23, 2.23, 3.23]})
+        tm.assert_frame_equal(df.round(decimals), expected_rounded)

         # This should also work with np.round (since np.round dispatches to
         # df.round)
-        assert_frame_equal(np.round(df, decimals), expected_rounded)
+        tm.assert_frame_equal(np.round(df, decimals), expected_rounded)

         # Round with a list
         round_list = [1, 2]
@@ -1828,19 +1823,19 @@ def test_round(self):
         expected_rounded = DataFrame(
             {'col1': [1.1, 2.1, 3.1], 'col2': [1.23, 2.23, 3.23]})
         round_dict = {'col1': 1, 'col2': 2}
-        assert_frame_equal(df.round(round_dict), expected_rounded)
+        tm.assert_frame_equal(df.round(round_dict), expected_rounded)

         # Incomplete dict
         expected_partially_rounded = DataFrame(
             {'col1': [1.123, 2.123, 3.123], 'col2': [1.2, 2.2, 3.2]})
         partial_round_dict = {'col2': 1}
-        assert_frame_equal(
-            df.round(partial_round_dict), expected_partially_rounded)
+        tm.assert_frame_equal(df.round(partial_round_dict),
+                              expected_partially_rounded)

         # Dict with unknown elements
         wrong_round_dict = {'col3': 2, 'col2': 1}
-        assert_frame_equal(
-            df.round(wrong_round_dict), expected_partially_rounded)
+        tm.assert_frame_equal(df.round(wrong_round_dict),
+                              expected_partially_rounded)

         # float input to `decimals`
         non_int_round_dict = {'col1': 1, 'col2': 0.5}
@@ -1879,8 +1874,8 @@ def test_round(self):
         big_df = df * 100
         expected_neg_rounded = DataFrame(
             {'col1': [110., 210, 310], 'col2': [100., 200, 300]})
-        assert_frame_equal(
-            big_df.round(negative_round_dict), expected_neg_rounded)
+        tm.assert_frame_equal(big_df.round(negative_round_dict),
+                              expected_neg_rounded)

         # nan in Series round
         nan_round_Series = Series({'col1': nan, 'col2': 1})
@@ -1899,7 +1894,7 @@ def test_round(self):
             df.round(nan_round_Series)

         # Make sure this doesn't break existing Series.round
-        assert_series_equal(df['col1'].round(1), expected_rounded['col1'])
+        tm.assert_series_equal(df['col1'].round(1), expected_rounded['col1'])

         # named columns
         # GH 11986
@@ -1908,20 +1903,20 @@ def test_round(self):
             {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]})
         df.columns.name = "cols"
         expected_rounded.columns.name = "cols"
-        assert_frame_equal(df.round(decimals), expected_rounded)
+        tm.assert_frame_equal(df.round(decimals), expected_rounded)

         # interaction of named columns & series
-        assert_series_equal(df['col1'].round(decimals),
-                            expected_rounded['col1'])
-        assert_series_equal(df.round(decimals)['col1'],
-                            expected_rounded['col1'])
+        tm.assert_series_equal(df['col1'].round(decimals),
+                               expected_rounded['col1'])
+        tm.assert_series_equal(df.round(decimals)['col1'],
+                               expected_rounded['col1'])

     def test_numpy_round(self):
         # See gh-12600
         df =
DataFrame([[1.53, 1.36], [0.06, 7.01]]) out = np.round(df, decimals=0) expected = DataFrame([[2., 1.], [0., 7.]]) - assert_frame_equal(out, expected) + tm.assert_frame_equal(out, expected) msg = "the 'out' parameter is not supported" with tm.assertRaisesRegexp(ValueError, msg): @@ -1935,12 +1930,12 @@ def test_round_mixed_type(self): round_0 = DataFrame({'col1': [1., 2., 3., 4.], 'col2': ['1', 'a', 'c', 'f'], 'col3': date_range('20111111', periods=4)}) - assert_frame_equal(df.round(), round_0) - assert_frame_equal(df.round(1), df) - assert_frame_equal(df.round({'col1': 1}), df) - assert_frame_equal(df.round({'col1': 0}), round_0) - assert_frame_equal(df.round({'col1': 0, 'col2': 1}), round_0) - assert_frame_equal(df.round({'col3': 1}), df) + tm.assert_frame_equal(df.round(), round_0) + tm.assert_frame_equal(df.round(1), df) + tm.assert_frame_equal(df.round({'col1': 1}), df) + tm.assert_frame_equal(df.round({'col1': 0}), round_0) + tm.assert_frame_equal(df.round({'col1': 0, 'col2': 1}), round_0) + tm.assert_frame_equal(df.round({'col3': 1}), df) def test_round_issue(self): # GH11611 @@ -1950,7 +1945,7 @@ def test_round_issue(self): dfs = pd.concat((df, df), axis=1) rounded = dfs.round() - self.assertTrue(rounded.index.equals(dfs.index)) + self.assert_index_equal(rounded.index, dfs.index) decimals = pd.Series([1, 0, 2], index=['A', 'B', 'A']) self.assertRaises(ValueError, df.round, decimals) @@ -1968,7 +1963,7 @@ def test_built_in_round(self): # Default round to integer (i.e. 
decimals=0) expected_rounded = DataFrame( {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]}) - assert_frame_equal(round(df), expected_rounded) + tm.assert_frame_equal(round(df), expected_rounded) # Clip @@ -2015,14 +2010,14 @@ def test_clip_against_series(self): mask = ~lb_mask & ~ub_mask result = clipped_df.loc[lb_mask, i] - assert_series_equal(result, lb[lb_mask], check_names=False) + tm.assert_series_equal(result, lb[lb_mask], check_names=False) self.assertEqual(result.name, i) result = clipped_df.loc[ub_mask, i] - assert_series_equal(result, ub[ub_mask], check_names=False) + tm.assert_series_equal(result, ub[ub_mask], check_names=False) self.assertEqual(result.name, i) - assert_series_equal(clipped_df.loc[mask, i], df.loc[mask, i]) + tm.assert_series_equal(clipped_df.loc[mask, i], df.loc[mask, i]) def test_clip_against_frame(self): df = DataFrame(np.random.randn(1000, 2)) @@ -2035,9 +2030,9 @@ def test_clip_against_frame(self): ub_mask = df >= ub mask = ~lb_mask & ~ub_mask - assert_frame_equal(clipped_df[lb_mask], lb[lb_mask]) - assert_frame_equal(clipped_df[ub_mask], ub[ub_mask]) - assert_frame_equal(clipped_df[mask], df[mask]) + tm.assert_frame_equal(clipped_df[lb_mask], lb[lb_mask]) + tm.assert_frame_equal(clipped_df[ub_mask], ub[ub_mask]) + tm.assert_frame_equal(clipped_df[mask], df[mask]) # Matrix-like @@ -2054,15 +2049,15 @@ def test_dot(self): # Check alignment b1 = b.reindex(index=reversed(b.index)) result = a.dot(b) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) # Check series argument result = a.dot(b['one']) - assert_series_equal(result, expected['one'], check_names=False) + tm.assert_series_equal(result, expected['one'], check_names=False) self.assertTrue(result.name is None) result = a.dot(b1['one']) - assert_series_equal(result, expected['one'], check_names=False) + tm.assert_series_equal(result, expected['one'], check_names=False) self.assertTrue(result.name is None) # can pass correct-length arrays @@ -2070,9 +2065,9 
@@ def test_dot(self): result = a.dot(row) exp = a.dot(a.ix[0]) - assert_series_equal(result, exp) + tm.assert_series_equal(result, exp) - with assertRaisesRegexp(ValueError, 'Dot product shape mismatch'): + with tm.assertRaisesRegexp(ValueError, 'Dot product shape mismatch'): a.dot(row[:-1]) a = np.random.rand(1, 5) @@ -2089,7 +2084,8 @@ def test_dot(self): df = DataFrame(randn(3, 4), index=[1, 2, 3], columns=lrange(4)) df2 = DataFrame(randn(5, 3), index=lrange(5), columns=[1, 2, 3]) - assertRaisesRegexp(ValueError, 'aligned', df.dot, df2) + with tm.assertRaisesRegexp(ValueError, 'aligned'): + df.dot(df2) if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py index 09dd0f3b14812..9da1b31d259c5 100644 --- a/pandas/tests/frame/test_axis_select_reindex.py +++ b/pandas/tests/frame/test_axis_select_reindex.py @@ -221,7 +221,7 @@ def test_reindex(self): # pass non-Index newFrame = self.frame.reindex(list(self.ts1.index)) - self.assertTrue(newFrame.index.equals(self.ts1.index)) + self.assert_index_equal(newFrame.index, self.ts1.index) # copy with no axes result = self.frame.reindex() @@ -381,7 +381,7 @@ def test_align(self): # axis = 0 other = self.frame.ix[:-5, :3] af, bf = self.frame.align(other, axis=0, fill_value=-1) - self.assertTrue(bf.columns.equals(other.columns)) + self.assert_index_equal(bf.columns, other.columns) # test fill value join_idx = self.frame.index.join(other.index) diff_a = self.frame.index.difference(join_idx) @@ -391,15 +391,15 @@ def test_align(self): self.assertTrue((diff_a_vals == -1).all()) af, bf = self.frame.align(other, join='right', axis=0) - self.assertTrue(bf.columns.equals(other.columns)) - self.assertTrue(bf.index.equals(other.index)) - self.assertTrue(af.index.equals(other.index)) + self.assert_index_equal(bf.columns, other.columns) + self.assert_index_equal(bf.index, other.index) + 
self.assert_index_equal(af.index, other.index) # axis = 1 other = self.frame.ix[:-5, :3].copy() af, bf = self.frame.align(other, axis=1) - self.assertTrue(bf.columns.equals(self.frame.columns)) - self.assertTrue(bf.index.equals(other.index)) + self.assert_index_equal(bf.columns, self.frame.columns) + self.assert_index_equal(bf.index, other.index) # test fill value join_idx = self.frame.index.join(other.index) @@ -413,35 +413,35 @@ def test_align(self): self.assertTrue((diff_a_vals == -1).all()) af, bf = self.frame.align(other, join='inner', axis=1) - self.assertTrue(bf.columns.equals(other.columns)) + self.assert_index_equal(bf.columns, other.columns) af, bf = self.frame.align(other, join='inner', axis=1, method='pad') - self.assertTrue(bf.columns.equals(other.columns)) + self.assert_index_equal(bf.columns, other.columns) # test other non-float types af, bf = self.intframe.align(other, join='inner', axis=1, method='pad') - self.assertTrue(bf.columns.equals(other.columns)) + self.assert_index_equal(bf.columns, other.columns) af, bf = self.mixed_frame.align(self.mixed_frame, join='inner', axis=1, method='pad') - self.assertTrue(bf.columns.equals(self.mixed_frame.columns)) + self.assert_index_equal(bf.columns, self.mixed_frame.columns) af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1, method=None, fill_value=None) - self.assertTrue(bf.index.equals(Index([]))) + self.assert_index_equal(bf.index, Index([])) af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1, method=None, fill_value=0) - self.assertTrue(bf.index.equals(Index([]))) + self.assert_index_equal(bf.index, Index([])) # mixed floats/ints af, bf = self.mixed_float.align(other.ix[:, 0], join='inner', axis=1, method=None, fill_value=0) - self.assertTrue(bf.index.equals(Index([]))) + self.assert_index_equal(bf.index, Index([])) af, bf = self.mixed_int.align(other.ix[:, 0], join='inner', axis=1, method=None, fill_value=0) - self.assertTrue(bf.index.equals(Index([]))) + 
self.assert_index_equal(bf.index, Index([])) # try to align dataframe to series along bad axis self.assertRaises(ValueError, self.frame.align, af.ix[0, :3], @@ -661,8 +661,24 @@ def test_filter(self): assert_frame_equal(filtered, expected) # pass in None + with assertRaisesRegexp(TypeError, 'Must pass'): + self.frame.filter() with assertRaisesRegexp(TypeError, 'Must pass'): self.frame.filter(items=None) + with assertRaisesRegexp(TypeError, 'Must pass'): + self.frame.filter(axis=1) + + # test mutually exclusive arguments + with assertRaisesRegexp(TypeError, 'mutually exclusive'): + self.frame.filter(items=['one', 'three'], regex='e$', like='bbi') + with assertRaisesRegexp(TypeError, 'mutually exclusive'): + self.frame.filter(items=['one', 'three'], regex='e$', axis=1) + with assertRaisesRegexp(TypeError, 'mutually exclusive'): + self.frame.filter(items=['one', 'three'], regex='e$') + with assertRaisesRegexp(TypeError, 'mutually exclusive'): + self.frame.filter(items=['one', 'three'], like='bbi', axis=0) + with assertRaisesRegexp(TypeError, 'mutually exclusive'): + self.frame.filter(items=['one', 'three'], like='bbi') # objects filtered = self.mixed_frame.filter(like='foo') @@ -810,10 +826,9 @@ def test_reindex_corner(self): index = Index(['a', 'b', 'c']) dm = self.empty.reindex(index=[1, 2, 3]) reindexed = dm.reindex(columns=index) - self.assertTrue(reindexed.columns.equals(index)) + self.assert_index_equal(reindexed.columns, index) # ints are weird - smaller = self.intframe.reindex(columns=['A', 'B', 'E']) self.assertEqual(smaller['E'].dtype, np.float64) diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py index f337bf48c05ee..0421cf2ba42d2 100644 --- a/pandas/tests/frame/test_block_internals.py +++ b/pandas/tests/frame/test_block_internals.py @@ -505,8 +505,8 @@ def test_get_X_columns(self): 'd': [None, None, None], 'e': [3.14, 0.577, 2.773]}) - self.assert_numpy_array_equal(df._get_numeric_data().columns, - ['a', 
'b', 'e']) + self.assert_index_equal(df._get_numeric_data().columns, + pd.Index(['a', 'b', 'e'])) def test_strange_column_corruption_issue(self): # (wesm) Unclear how exactly this is related to internal matters diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 083da2a040ed5..b42aef9447373 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -17,21 +17,13 @@ from pandas.compat import (lmap, long, zip, range, lrange, lzip, OrderedDict, is_platform_little_endian) from pandas import compat -from pandas import (DataFrame, Index, Series, notnull, isnull, +from pandas import (DataFrame, Index, Series, isnull, MultiIndex, Timedelta, Timestamp, date_range) from pandas.core.common import PandasError import pandas as pd import pandas.core.common as com import pandas.lib as lib - -from pandas.types.api import DatetimeTZDtype - -from pandas.util.testing import (assert_numpy_array_equal, - assert_series_equal, - assert_frame_equal, - assertRaisesRegexp) - import pandas.util.testing as tm from pandas.tests.frame.common import TestData @@ -173,16 +165,16 @@ def test_constructor_rec(self): index = self.frame.index df = DataFrame(rec) - self.assert_numpy_array_equal(df.columns, rec.dtype.names) + self.assert_index_equal(df.columns, pd.Index(rec.dtype.names)) df2 = DataFrame(rec, index=index) - self.assert_numpy_array_equal(df2.columns, rec.dtype.names) - self.assertTrue(df2.index.equals(index)) + self.assert_index_equal(df2.columns, pd.Index(rec.dtype.names)) + self.assert_index_equal(df2.index, index) rng = np.arange(len(rec))[::-1] df3 = DataFrame(rec, index=rng, columns=['C', 'B']) expected = DataFrame(rec, index=rng).reindex(columns=['C', 'B']) - assert_frame_equal(df3, expected) + tm.assert_frame_equal(df3, expected) def test_constructor_bool(self): df = DataFrame({0: np.ones(10, dtype=bool), @@ -220,8 +212,15 @@ def test_constructor_dict(self): frame = DataFrame({'col1': 
self.ts1, 'col2': self.ts2}) - tm.assert_dict_equal(self.ts1, frame['col1'], compare_keys=False) - tm.assert_dict_equal(self.ts2, frame['col2'], compare_keys=False) + # col2 is padded with NaN + self.assertEqual(len(self.ts1), 30) + self.assertEqual(len(self.ts2), 25) + + tm.assert_series_equal(self.ts1, frame['col1'], check_names=False) + + exp = pd.Series(np.concatenate([[np.nan] * 5, self.ts2.values]), + index=self.ts1.index, name='col2') + tm.assert_series_equal(exp, frame['col2']) frame = DataFrame({'col1': self.ts1, 'col2': self.ts2}, @@ -241,7 +240,7 @@ def test_constructor_dict(self): # Length-one dict micro-optimization frame = DataFrame({'A': {'1': 1, '2': 2}}) - self.assert_numpy_array_equal(frame.index, ['1', '2']) + self.assert_index_equal(frame.index, pd.Index(['1', '2'])) # empty dict plus index idx = Index([0, 1, 2]) @@ -257,7 +256,7 @@ def test_constructor_dict(self): # with dict of empty list and Series frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B']) - self.assertTrue(frame.index.equals(Index([]))) + self.assert_index_equal(frame.index, Index([], dtype=np.int64)) # GH10856 # dict with scalar values should raise error, even if columns passed @@ -286,37 +285,37 @@ def test_constructor_multi_index(self): def test_constructor_error_msgs(self): msg = "Empty data passed with indices specified." # passing an empty array with columns specified. - with assertRaisesRegexp(ValueError, msg): + with tm.assertRaisesRegexp(ValueError, msg): DataFrame(np.empty(0), columns=list('abc')) msg = "Mixing dicts with non-Series may lead to ambiguous ordering." 
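The rewritten `test_constructor_dict` assertions above replace `tm.assert_dict_equal` with an explicit check that the shorter Series comes back NaN-padded: the DataFrame constructor aligns dict values on the union of their indexes. A standalone sketch of that alignment behavior, using made-up data rather than the fixture's `ts1`/`ts2`:

```python
import numpy as np
import pandas as pd

# Two Series of different lengths: the DataFrame constructor aligns them
# on the union of their indexes and fills the missing slots with NaN.
s1 = pd.Series([1.0, 2.0, 3.0], index=['a', 'b', 'c'])
s2 = pd.Series([10.0, 20.0], index=['b', 'c'])

frame = pd.DataFrame({'col1': s1, 'col2': s2})

# 'col2' has no value for label 'a', so it is padded with NaN there,
# while the overlapping labels keep their original values.
padded_cell = frame.loc['a', 'col2']
```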
         # mix dict and array, wrong size
-        with assertRaisesRegexp(ValueError, msg):
+        with tm.assertRaisesRegexp(ValueError, msg):
             DataFrame({'A': {'a': 'a', 'b': 'b'},
                        'B': ['a', 'b', 'c']})

         # wrong size ndarray, GH 3105
         msg = "Shape of passed values is \(3, 4\), indices imply \(3, 3\)"
-        with assertRaisesRegexp(ValueError, msg):
+        with tm.assertRaisesRegexp(ValueError, msg):
             DataFrame(np.arange(12).reshape((4, 3)),
                       columns=['foo', 'bar', 'baz'],
                       index=pd.date_range('2000-01-01', periods=3))

         # higher dim raise exception
-        with assertRaisesRegexp(ValueError, 'Must pass 2-d input'):
+        with tm.assertRaisesRegexp(ValueError, 'Must pass 2-d input'):
             DataFrame(np.zeros((3, 3, 3)), columns=['A', 'B', 'C'],
                       index=[1])

         # wrong size axis labels
-        with assertRaisesRegexp(ValueError, "Shape of passed values is "
-                                "\(3, 2\), indices imply \(3, 1\)"):
+        with tm.assertRaisesRegexp(ValueError, "Shape of passed values is "
+                                   "\(3, 2\), indices imply \(3, 1\)"):
             DataFrame(np.random.rand(2, 3), columns=['A', 'B', 'C'],
                       index=[1])

-        with assertRaisesRegexp(ValueError, "Shape of passed values is "
-                                "\(3, 2\), indices imply \(2, 2\)"):
+        with tm.assertRaisesRegexp(ValueError, "Shape of passed values is "
+                                   "\(3, 2\), indices imply \(2, 2\)"):
             DataFrame(np.random.rand(2, 3), columns=['A', 'B'], index=[1, 2])

-        with assertRaisesRegexp(ValueError, 'If using all scalar values, you '
-                                'must pass an index'):
+        with tm.assertRaisesRegexp(ValueError, 'If using all scalar values, '
+                                   'you must pass an index'):
             DataFrame({'a': False, 'b': True})

     def test_constructor_with_embedded_frames(self):
@@ -329,10 +328,10 @@ def test_constructor_with_embedded_frames(self):
         str(df2)

         result = df2.loc[0, 0]
-        assert_frame_equal(result, df1)
+        tm.assert_frame_equal(result, df1)

         result = df2.loc[1, 0]
-        assert_frame_equal(result, df1 + 10)
+        tm.assert_frame_equal(result, df1 + 10)

     def test_constructor_subclass_dict(self):
         # Test for passing dict subclass to constructor
@@ -341,11 +340,11 @@ def test_constructor_subclass_dict(self):
         df = DataFrame(data)
         refdf = DataFrame(dict((col, dict(compat.iteritems(val)))
                                for col, val in compat.iteritems(data)))
-        assert_frame_equal(refdf, df)
+        tm.assert_frame_equal(refdf, df)

         data = tm.TestSubDict(compat.iteritems(data))
         df = DataFrame(data)
-        assert_frame_equal(refdf, df)
+        tm.assert_frame_equal(refdf, df)

         # try with defaultdict
         from collections import defaultdict
@@ -356,10 +355,10 @@ def test_constructor_subclass_dict(self):
             dct.update(v.to_dict())
             data[k] = dct
         frame = DataFrame(data)
-        assert_frame_equal(self.frame.sort_index(), frame)
+        tm.assert_frame_equal(self.frame.sort_index(), frame)

     def test_constructor_dict_block(self):
-        expected = [[4., 3., 2., 1.]]
+        expected = np.array([[4., 3., 2., 1.]])
         df = DataFrame({'d': [4.], 'c': [3.], 'b': [2.], 'a': [1.]},
                        columns=['d', 'c', 'b', 'a'])
         tm.assert_numpy_array_equal(df.values, expected)
@@ -405,10 +404,10 @@ def test_constructor_dict_of_tuples(self):
         result = DataFrame(data)
         expected = DataFrame(dict((k, list(v))
                                   for k, v in compat.iteritems(data)))
-        assert_frame_equal(result, expected, check_dtype=False)
+        tm.assert_frame_equal(result, expected, check_dtype=False)

     def test_constructor_dict_multiindex(self):
-        check = lambda result, expected: assert_frame_equal(
+        check = lambda result, expected: tm.assert_frame_equal(
             result, expected, check_dtype=True, check_index_type=True,
             check_column_type=True, check_names=True)
         d = {('a', 'a'): {('i', 'i'): 0, ('i', 'j'): 1, ('j', 'i'): 2},
@@ -453,9 +452,9 @@ def create_data(constructor):
         result_datetime64 = DataFrame(data_datetime64)
         result_datetime = DataFrame(data_datetime)
         result_Timestamp = DataFrame(data_Timestamp)
-        assert_frame_equal(result_datetime64, expected)
-        assert_frame_equal(result_datetime, expected)
-        assert_frame_equal(result_Timestamp, expected)
+        tm.assert_frame_equal(result_datetime64, expected)
+        tm.assert_frame_equal(result_datetime, expected)
+        tm.assert_frame_equal(result_Timestamp, expected)

     def test_constructor_dict_timedelta64_index(self):
         # GH 10160
@@ -478,9 +477,9 @@ def create_data(constructor):
         result_timedelta64 = DataFrame(data_timedelta64)
         result_timedelta = DataFrame(data_timedelta)
         result_Timedelta = DataFrame(data_Timedelta)
-        assert_frame_equal(result_timedelta64, expected)
-        assert_frame_equal(result_timedelta, expected)
-        assert_frame_equal(result_Timedelta, expected)
+        tm.assert_frame_equal(result_timedelta64, expected)
+        tm.assert_frame_equal(result_timedelta, expected)
+        tm.assert_frame_equal(result_Timedelta, expected)

     def test_constructor_period(self):
         # PeriodIndex
@@ -506,7 +505,7 @@ def test_nested_dict_frame_constructor(self):
                 data.setdefault(col, {})[row] = df.get_value(row, col)

         result = DataFrame(data, columns=rng)
-        assert_frame_equal(result, df)
+        tm.assert_frame_equal(result, df)

         data = {}
         for col in df.columns:
@@ -514,7 +513,7 @@ def test_nested_dict_frame_constructor(self):
                 data.setdefault(row, {})[col] = df.get_value(row, col)

         result = DataFrame(data, index=rng).T
-        assert_frame_equal(result, df)
+        tm.assert_frame_equal(result, df)

     def _check_basic_constructor(self, empty):
         # mat: 2d matrix with shpae (3, 2) to input. empty - makes sized
@@ -538,27 +537,27 @@ def _check_basic_constructor(self, empty):

         # wrong size axis labels
         msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)'
-        with assertRaisesRegexp(ValueError, msg):
+        with tm.assertRaisesRegexp(ValueError, msg):
             DataFrame(mat, columns=['A', 'B', 'C'], index=[1])
         msg = r'Shape of passed values is \(3, 2\), indices imply \(2, 2\)'
-        with assertRaisesRegexp(ValueError, msg):
+        with tm.assertRaisesRegexp(ValueError, msg):
             DataFrame(mat, columns=['A', 'B'], index=[1, 2])

         # higher dim raise exception
-        with assertRaisesRegexp(ValueError, 'Must pass 2-d input'):
+        with tm.assertRaisesRegexp(ValueError, 'Must pass 2-d input'):
             DataFrame(empty((3, 3, 3)), columns=['A', 'B', 'C'],
                       index=[1])

         # automatic labeling
         frame = DataFrame(mat)
-        self.assert_numpy_array_equal(frame.index, lrange(2))
-        self.assert_numpy_array_equal(frame.columns, lrange(3))
+        self.assert_index_equal(frame.index, pd.Index(lrange(2)))
+        self.assert_index_equal(frame.columns, pd.Index(lrange(3)))

         frame = DataFrame(mat, index=[1, 2])
-        self.assert_numpy_array_equal(frame.columns, lrange(3))
+        self.assert_index_equal(frame.columns, pd.Index(lrange(3)))

         frame = DataFrame(mat, columns=['A', 'B', 'C'])
-        self.assert_numpy_array_equal(frame.index, lrange(2))
+        self.assert_index_equal(frame.index, pd.Index(lrange(2)))

         # 0-length axis
         frame = DataFrame(empty((0, 3)))
@@ -660,7 +659,7 @@ def test_constructor_mrecarray(self):

         # Ensure mrecarray produces frame identical to dict of masked arrays
         # from GH3479
-        assert_fr_equal = functools.partial(assert_frame_equal,
+        assert_fr_equal = functools.partial(tm.assert_frame_equal,
                                             check_index_type=True,
                                             check_column_type=True,
                                             check_frame_type=True)
@@ -734,13 +733,13 @@ def test_constructor_arrays_and_scalars(self):
         df = DataFrame({'a': randn(10), 'b': True})
         exp = DataFrame({'a': df['a'].values, 'b': [True] * 10})
-        assert_frame_equal(df, exp)
+        tm.assert_frame_equal(df, exp)

         with tm.assertRaisesRegexp(ValueError, 'must pass an index'):
             DataFrame({'a': False, 'b': True})

     def test_constructor_DataFrame(self):
         df = DataFrame(self.frame)
-        assert_frame_equal(df, self.frame)
+        tm.assert_frame_equal(df, self.frame)

         df_casted = DataFrame(self.frame, dtype=np.int64)
         self.assertEqual(df_casted.values.dtype, np.int64)
@@ -768,17 +767,17 @@ def test_constructor_more(self):

         # corner, silly
         # TODO: Fix this Exception to be better...
-        with assertRaisesRegexp(PandasError, 'constructor not '
-                                'properly called'):
+        with tm.assertRaisesRegexp(PandasError, 'constructor not '
+                                   'properly called'):
             DataFrame((1, 2, 3))

         # can't cast
         mat = np.array(['foo', 'bar'], dtype=object).reshape(2, 1)
-        with assertRaisesRegexp(ValueError, 'cast'):
+        with tm.assertRaisesRegexp(ValueError, 'cast'):
             DataFrame(mat, index=[0, 1], columns=[0], dtype=float)

         dm = DataFrame(DataFrame(self.frame._series))
-        assert_frame_equal(dm, self.frame)
+        tm.assert_frame_equal(dm, self.frame)

         # int cast
         dm = DataFrame({'A': np.ones(10, dtype=int),
@@ -791,12 +790,12 @@ def test_constructor_more(self):
     def test_constructor_empty_list(self):
         df = DataFrame([], index=[])
         expected = DataFrame(index=[])
-        assert_frame_equal(df, expected)
+        tm.assert_frame_equal(df, expected)

         # GH 9939
         df = DataFrame([], columns=['A', 'B'])
         expected = DataFrame({}, columns=['A', 'B'])
-        assert_frame_equal(df, expected)
+        tm.assert_frame_equal(df, expected)

         # Empty generator: list(empty_gen()) == []
         def empty_gen():
@@ -804,7 +803,7 @@ def empty_gen():
             yield

         df = DataFrame(empty_gen(), columns=['A', 'B'])
-        assert_frame_equal(df, expected)
+        tm.assert_frame_equal(df, expected)

     def test_constructor_list_of_lists(self):
         # GH #484
@@ -818,7 +817,7 @@ def test_constructor_list_of_lists(self):
         expected = DataFrame({0: range(10)})
         data = [np.array(x) for x in range(10)]
         result = DataFrame(data)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_constructor_sequence_like(self):
         # GH 3783
@@ -840,25 +839,25 @@ def __len__(self, n):
         columns = ["num", "str"]
         result = DataFrame(l, columns=columns)
         expected = DataFrame([[1, 'a'], [2, 'b']], columns=columns)
-        assert_frame_equal(result, expected, check_dtype=False)
+        tm.assert_frame_equal(result, expected, check_dtype=False)

         # GH 4297
         # support Array
         import array
         result = DataFrame.from_items([('A', array.array('i', range(10)))])
         expected = DataFrame({'A': list(range(10))})
-        assert_frame_equal(result, expected, check_dtype=False)
+        tm.assert_frame_equal(result, expected, check_dtype=False)

         expected = DataFrame([list(range(10)), list(range(10))])
         result = DataFrame([array.array('i', range(10)),
                             array.array('i', range(10))])
-        assert_frame_equal(result, expected, check_dtype=False)
+        tm.assert_frame_equal(result, expected, check_dtype=False)

     def test_constructor_iterator(self):

         expected = DataFrame([list(range(10)), list(range(10))])
         result = DataFrame([range(10), range(10)])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_constructor_generator(self):
         # related #2305
@@ -868,12 +867,12 @@ def test_constructor_generator(self):

         expected = DataFrame([list(range(10)), list(range(10))])
         result = DataFrame([gen1, gen2])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         gen = ([i, 'a'] for i in range(10))
         result = DataFrame(gen)
         expected = DataFrame({0: range(10), 1: 'a'})
-        assert_frame_equal(result, expected, check_dtype=False)
+        tm.assert_frame_equal(result, expected, check_dtype=False)

     def test_constructor_list_of_dicts(self):
         data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]),
@@ -886,11 +885,50 @@ def test_constructor_list_of_dicts(self):
         result = DataFrame(data)
         expected = DataFrame.from_dict(dict(zip(range(len(data)), data)),
                                        orient='index')
-        assert_frame_equal(result, expected.reindex(result.index))
+        tm.assert_frame_equal(result, expected.reindex(result.index))

         result = DataFrame([{}])
         expected = DataFrame(index=[0])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
+
+    def test_constructor_ordered_dict_preserve_order(self):
+        # see gh-13304
+        expected = DataFrame([[2, 1]], columns=['b', 'a'])
+
+        data = OrderedDict()
+        data['b'] = [2]
+        data['a'] = [1]
+
+        result = DataFrame(data)
+        tm.assert_frame_equal(result, expected)
+
+        data = OrderedDict()
+        data['b'] = 2
+        data['a'] = 1
+
+        result = DataFrame([data])
+        tm.assert_frame_equal(result, expected)
+
+    def test_constructor_ordered_dict_conflicting_orders(self):
+        # the first dict element sets the ordering for the DataFrame,
+        # even if there are conflicting orders from subsequent ones
+        row_one = OrderedDict()
+        row_one['b'] = 2
+        row_one['a'] = 1
+
+        row_two = OrderedDict()
+        row_two['a'] = 1
+        row_two['b'] = 2
+
+        row_three = {'b': 2, 'a': 1}
+
+        expected = DataFrame([[2, 1], [2, 1]], columns=['b', 'a'])
+        result = DataFrame([row_one, row_two])
+        tm.assert_frame_equal(result, expected)
+
+        expected = DataFrame([[2, 1], [2, 1], [2, 1]], columns=['b', 'a'])
+        result = DataFrame([row_one, row_two, row_three])
+        tm.assert_frame_equal(result, expected)

     def test_constructor_list_of_series(self):
         data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]),
@@ -903,7 +941,7 @@ def test_constructor_list_of_series(self):
                  Series([1.5, 3, 6], idx, name='y')]
         result = DataFrame(data2)
         expected = DataFrame.from_dict(sdict, orient='index')
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # some unnamed
         data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'),
@@ -912,7 +950,7 @@ def test_constructor_list_of_series(self):

         sdict = OrderedDict(zip(['x', 'Unnamed 0'], data))
         expected = DataFrame.from_dict(sdict, orient='index')
-        assert_frame_equal(result.sort_index(), expected)
+        tm.assert_frame_equal(result.sort_index(), expected)

         # none named
         data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]),
@@ -926,14 +964,14 @@ def test_constructor_list_of_series(self):
         result = DataFrame(data)
         sdict = OrderedDict(zip(range(len(data)), data))
         expected = DataFrame.from_dict(sdict, orient='index')
-        assert_frame_equal(result, expected.reindex(result.index))
+        tm.assert_frame_equal(result, expected.reindex(result.index))

         result2 = DataFrame(data, index=np.arange(6))
-        assert_frame_equal(result, result2)
+        tm.assert_frame_equal(result, result2)

         result = DataFrame([Series({})])
         expected = DataFrame(index=[0])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]),
                 OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])]
@@ -944,7 +982,7 @@ def test_constructor_list_of_series(self):
                  Series([1.5, 3, 6], idx)]
         result = DataFrame(data2)
         expected = DataFrame.from_dict(sdict, orient='index')
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_constructor_list_of_derived_dicts(self):
         class CustomDict(dict):
@@ -956,19 +994,20 @@ class CustomDict(dict):

         result_custom = DataFrame(data_custom)
         result = DataFrame(data)
-        assert_frame_equal(result, result_custom)
+        tm.assert_frame_equal(result, result_custom)

     def test_constructor_ragged(self):
         data = {'A': randn(10),
                 'B': randn(8)}
-        with assertRaisesRegexp(ValueError, 'arrays must all be same length'):
+        with tm.assertRaisesRegexp(ValueError,
+                                   'arrays must all be same length'):
             DataFrame(data)

     def test_constructor_scalar(self):
         idx = Index(lrange(3))
         df = DataFrame({"a": 0}, index=idx)
         expected = DataFrame({"a": [0, 0, 0]}, index=idx)
-        assert_frame_equal(df, expected, check_dtype=False)
+        tm.assert_frame_equal(df, expected, check_dtype=False)

     def test_constructor_Series_copy_bug(self):
         df = DataFrame(self.frame['A'], index=self.frame.index, columns=['A'])
@@ -983,7 +1022,7 @@ def test_constructor_mixed_dict_and_Series(self):
         self.assertTrue(result.index.is_monotonic)

         # ordering ambiguous, raise exception
-        with assertRaisesRegexp(ValueError, 'ambiguous ordering'):
+        with tm.assertRaisesRegexp(ValueError, 'ambiguous ordering'):
             DataFrame({'A': ['a', 'b'],
                        'B': {'a': 'a', 'b': 'b'}})

         # this is OK though
@@ -991,12 +1030,12 @@ def test_constructor_mixed_dict_and_Series(self):
                             'B': Series(['a', 'b'], index=['a', 'b'])})
         expected = DataFrame({'A': ['a', 'b'], 'B': ['a', 'b']},
                              index=['a', 'b'])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_constructor_tuples(self):
         result = DataFrame({'A': [(1, 2), (3, 4)]})
         expected = DataFrame({'A': Series([(1, 2), (3, 4)])})
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_constructor_namedtuples(self):
         # GH11181
@@ -1005,43 +1044,43 @@ def test_constructor_namedtuples(self):
         tuples = [named_tuple(1, 3), named_tuple(2, 4)]
         expected = DataFrame({'a': [1, 2], 'b': [3, 4]})
         result = DataFrame(tuples)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         # with columns
         expected = DataFrame({'y': [1, 2], 'z': [3, 4]})
         result = DataFrame(tuples, columns=['y', 'z'])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_constructor_orient(self):
         data_dict = self.mixed_frame.T._series
         recons = DataFrame.from_dict(data_dict, orient='index')
         expected = self.mixed_frame.sort_index()
-        assert_frame_equal(recons, expected)
+        tm.assert_frame_equal(recons, expected)

         # dict of sequence
         a = {'hi': [32, 3, 3],
              'there': [3, 5, 3]}
         rs = DataFrame.from_dict(a, orient='index')
         xp = DataFrame.from_dict(a).T.reindex(list(a.keys()))
-        assert_frame_equal(rs, xp)
+        tm.assert_frame_equal(rs, xp)

     def test_constructor_Series_named(self):
         a = Series([1, 2, 3], index=['a', 'b', 'c'], name='x')
         df = DataFrame(a)
         self.assertEqual(df.columns[0], 'x')
-        self.assertTrue(df.index.equals(a.index))
+        self.assert_index_equal(df.index, a.index)

         # ndarray like
         arr = np.random.randn(10)
         s = Series(arr, name='x')
         df = DataFrame(s)
         expected = DataFrame(dict(x=s))
-        assert_frame_equal(df, expected)
+        tm.assert_frame_equal(df, expected)

         s = Series(arr, index=range(3, 13))
         df = DataFrame(s)
         expected = DataFrame({0: s})
-        assert_frame_equal(df, expected)
+        tm.assert_frame_equal(df, expected)

         self.assertRaises(ValueError, DataFrame, s, columns=[1, 2])
@@ -1055,12 +1094,12 @@ def test_constructor_Series_named(self):
         df = DataFrame([s1, arr]).T
         expected = DataFrame({'x': s1, 'Unnamed 0': arr},
                              columns=['x', 'Unnamed 0'])
-        assert_frame_equal(df, expected)
+        tm.assert_frame_equal(df, expected)

         # this is a bit non-intuitive here; the series collapse down to arrays
         df = DataFrame([arr, s1]).T
         expected = DataFrame({1: s1, 0: arr}, columns=[0, 1])
-        assert_frame_equal(df, expected)
+        tm.assert_frame_equal(df, expected)

     def test_constructor_Series_differently_indexed(self):
         # name
@@ -1074,13 +1113,13 @@ def test_constructor_Series_differently_indexed(self):
         df1 = DataFrame(s1, index=other_index)
         exp1 = DataFrame(s1.reindex(other_index))
         self.assertEqual(df1.columns[0], 'x')
-        assert_frame_equal(df1, exp1)
+        tm.assert_frame_equal(df1, exp1)

         df2 = DataFrame(s2, index=other_index)
         exp2 = DataFrame(s2.reindex(other_index))
         self.assertEqual(df2.columns[0], 0)
-        self.assertTrue(df2.index.equals(other_index))
-        assert_frame_equal(df2, exp2)
+        self.assert_index_equal(df2.index, other_index)
+        tm.assert_frame_equal(df2, exp2)

     def test_constructor_manager_resize(self):
         index = list(self.frame.index[:5])
@@ -1088,17 +1127,17 @@ def test_constructor_manager_resize(self):

         result = DataFrame(self.frame._data, index=index,
                            columns=columns)
-        self.assert_numpy_array_equal(result.index, index)
-        self.assert_numpy_array_equal(result.columns, columns)
+        self.assert_index_equal(result.index, Index(index))
+        self.assert_index_equal(result.columns, Index(columns))

     def test_constructor_from_items(self):
         items = [(c, self.frame[c]) for c in self.frame.columns]
         recons = DataFrame.from_items(items)
-        assert_frame_equal(recons, self.frame)
+        tm.assert_frame_equal(recons, self.frame)

         # pass some
columns recons = DataFrame.from_items(items, columns=['C', 'B', 'A']) - assert_frame_equal(recons, self.frame.ix[:, ['C', 'B', 'A']]) + tm.assert_frame_equal(recons, self.frame.ix[:, ['C', 'B', 'A']]) # orient='index' @@ -1108,7 +1147,7 @@ def test_constructor_from_items(self): recons = DataFrame.from_items(row_items, columns=self.mixed_frame.columns, orient='index') - assert_frame_equal(recons, self.mixed_frame) + tm.assert_frame_equal(recons, self.mixed_frame) self.assertEqual(recons['A'].dtype, np.float64) with tm.assertRaisesRegexp(TypeError, @@ -1124,7 +1163,7 @@ def test_constructor_from_items(self): recons = DataFrame.from_items(row_items, columns=self.mixed_frame.columns, orient='index') - assert_frame_equal(recons, self.mixed_frame) + tm.assert_frame_equal(recons, self.mixed_frame) tm.assertIsInstance(recons['foo'][0], tuple) rs = DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])], @@ -1132,12 +1171,12 @@ def test_constructor_from_items(self): columns=['one', 'two', 'three']) xp = DataFrame([[1, 2, 3], [4, 5, 6]], index=['A', 'B'], columns=['one', 'two', 'three']) - assert_frame_equal(rs, xp) + tm.assert_frame_equal(rs, xp) def test_constructor_mix_series_nonseries(self): df = DataFrame({'A': self.frame['A'], 'B': list(self.frame['B'])}, columns=['A', 'B']) - assert_frame_equal(df, self.frame.ix[:, ['A', 'B']]) + tm.assert_frame_equal(df, self.frame.ix[:, ['A', 'B']]) with tm.assertRaisesRegexp(ValueError, 'does not match index length'): DataFrame({'A': self.frame['A'], 'B': list(self.frame['B'])[:-2]}) @@ -1145,10 +1184,10 @@ def test_constructor_mix_series_nonseries(self): def test_constructor_miscast_na_int_dtype(self): df = DataFrame([[np.nan, 1], [1, 0]], dtype=np.int64) expected = DataFrame([[np.nan, 1], [1, 0]]) - assert_frame_equal(df, expected) + tm.assert_frame_equal(df, expected) def test_constructor_iterator_failure(self): - with assertRaisesRegexp(TypeError, 'iterator'): + with tm.assertRaisesRegexp(TypeError, 'iterator'): df = 
DataFrame(iter([1, 2, 3])) # noqa def test_constructor_column_duplicates(self): @@ -1157,11 +1196,11 @@ def test_constructor_column_duplicates(self): edf = DataFrame([[8, 5]]) edf.columns = ['a', 'a'] - assert_frame_equal(df, edf) + tm.assert_frame_equal(df, edf) idf = DataFrame.from_items( [('a', [8]), ('a', [5])], columns=['a', 'a']) - assert_frame_equal(idf, edf) + tm.assert_frame_equal(idf, edf) self.assertRaises(ValueError, DataFrame.from_items, [('a', [8]), ('a', [5]), ('b', [6])], @@ -1172,30 +1211,29 @@ def test_constructor_empty_with_string_dtype(self): expected = DataFrame(index=[0, 1], columns=[0, 1], dtype=object) df = DataFrame(index=[0, 1], columns=[0, 1], dtype=str) - assert_frame_equal(df, expected) + tm.assert_frame_equal(df, expected) df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.str_) - assert_frame_equal(df, expected) + tm.assert_frame_equal(df, expected) df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.unicode_) - assert_frame_equal(df, expected) + tm.assert_frame_equal(df, expected) df = DataFrame(index=[0, 1], columns=[0, 1], dtype='U5') - assert_frame_equal(df, expected) + tm.assert_frame_equal(df, expected) def test_constructor_single_value(self): # expecting single value upcasting here df = DataFrame(0., index=[1, 2, 3], columns=['a', 'b', 'c']) - assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('float64'), - df.index, df.columns)) + tm.assert_frame_equal(df, + DataFrame(np.zeros(df.shape).astype('float64'), + df.index, df.columns)) df = DataFrame(0, index=[1, 2, 3], columns=['a', 'b', 'c']) - assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('int64'), - df.index, df.columns)) + tm.assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('int64'), + df.index, df.columns)) df = DataFrame('a', index=[1, 2], columns=['a', 'c']) - assert_frame_equal(df, DataFrame(np.array([['a', 'a'], - ['a', 'a']], - dtype=object), - index=[1, 2], - columns=['a', 'c'])) + tm.assert_frame_equal(df, DataFrame(np.array([['a', 
'a'], ['a', 'a']], + dtype=object), + index=[1, 2], columns=['a', 'c'])) self.assertRaises(com.PandasError, DataFrame, 'a', [1, 2]) self.assertRaises(com.PandasError, DataFrame, 'a', columns=['a', 'c']) @@ -1217,7 +1255,7 @@ def test_constructor_with_datetimes(self): expected = Series({'int64': 1, datetime64name: 2, objectname: 2}) result.sort_index() expected.sort_index() - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) # check with ndarray construction ndim==0 (e.g. we are passing a ndim 0 # ndarray with a dtype specified) @@ -1241,7 +1279,7 @@ def test_constructor_with_datetimes(self): result.sort_index() expected = Series(expected) expected.sort_index() - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) # check with ndarray construction ndim>0 df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', @@ -1250,7 +1288,7 @@ def test_constructor_with_datetimes(self): index=np.arange(10)) result = df.get_dtype_counts() result.sort_index() - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) # GH 2809 ind = date_range(start="2000-01-01", freq="D", periods=10) @@ -1262,7 +1300,7 @@ def test_constructor_with_datetimes(self): expected = Series({datetime64name: 1}) result.sort_index() expected.sort_index() - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) # GH 2810 ind = date_range(start="2000-01-01", freq="D", periods=10) @@ -1273,7 +1311,7 @@ def test_constructor_with_datetimes(self): expected = Series({datetime64name: 1, objectname: 1}) result.sort_index() expected.sort_index() - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) # GH 7594 # don't coerce tz-aware @@ -1283,12 +1321,12 @@ def test_constructor_with_datetimes(self): df = DataFrame({'End Date': dt}, index=[0]) self.assertEqual(df.iat[0, 0], dt) - assert_series_equal(df.dtypes, Series( + tm.assert_series_equal(df.dtypes, Series( {'End Date': 'datetime64[ns, 
US/Eastern]'})) df = DataFrame([{'End Date': dt}]) self.assertEqual(df.iat[0, 0], dt) - assert_series_equal(df.dtypes, Series( + tm.assert_series_equal(df.dtypes, Series( {'End Date': 'datetime64[ns, US/Eastern]'})) # tz-aware (UTC and other tz's) @@ -1311,196 +1349,17 @@ def test_constructor_with_datetimes(self): {'a': i.to_series(keep_tz=True).reset_index(drop=True)}) df = DataFrame() df['a'] = i - assert_frame_equal(df, expected) + tm.assert_frame_equal(df, expected) df = DataFrame({'a': i}) - assert_frame_equal(df, expected) + tm.assert_frame_equal(df, expected) # multiples i_no_tz = date_range('1/1/2011', periods=5, freq='10s') df = DataFrame({'a': i, 'b': i_no_tz}) expected = DataFrame({'a': i.to_series(keep_tz=True) .reset_index(drop=True), 'b': i_no_tz}) - assert_frame_equal(df, expected) - - def test_constructor_with_datetime_tz(self): - - # 8260 - # support datetime64 with tz - - idx = Index(date_range('20130101', periods=3, tz='US/Eastern'), - name='foo') - dr = date_range('20130110', periods=3) - - # construction - df = DataFrame({'A': idx, 'B': dr}) - self.assertTrue(df['A'].dtype, 'M8[ns, US/Eastern') - self.assertTrue(df['A'].name == 'A') - assert_series_equal(df['A'], Series(idx, name='A')) - assert_series_equal(df['B'], Series(dr, name='B')) - - # construction from dict - df2 = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'), - B=Timestamp('20130603', tz='CET')), - index=range(5)) - assert_series_equal(df2.dtypes, Series(['datetime64[ns, US/Eastern]', - 'datetime64[ns, CET]'], - index=['A', 'B'])) - - # dtypes - tzframe = DataFrame({'A': date_range('20130101', periods=3), - 'B': date_range('20130101', periods=3, - tz='US/Eastern'), - 'C': date_range('20130101', periods=3, tz='CET')}) - tzframe.iloc[1, 1] = pd.NaT - tzframe.iloc[1, 2] = pd.NaT - result = tzframe.dtypes.sort_index() - expected = Series([np.dtype('datetime64[ns]'), - DatetimeTZDtype('datetime64[ns, US/Eastern]'), - DatetimeTZDtype('datetime64[ns, CET]')], - ['A', 'B', 'C']) - 
- # concat - df3 = pd.concat([df2.A.to_frame(), df2.B.to_frame()], axis=1) - assert_frame_equal(df2, df3) - - # select_dtypes - result = df3.select_dtypes(include=['datetime64[ns]']) - expected = df3.reindex(columns=[]) - assert_frame_equal(result, expected) - - # this will select based on issubclass, and these are the same class - result = df3.select_dtypes(include=['datetime64[ns, CET]']) - expected = df3 - assert_frame_equal(result, expected) - - # from index - idx2 = date_range('20130101', periods=3, tz='US/Eastern', name='foo') - df2 = DataFrame(idx2) - assert_series_equal(df2['foo'], Series(idx2, name='foo')) - df2 = DataFrame(Series(idx2)) - assert_series_equal(df2['foo'], Series(idx2, name='foo')) - - idx2 = date_range('20130101', periods=3, tz='US/Eastern') - df2 = DataFrame(idx2) - assert_series_equal(df2[0], Series(idx2, name=0)) - df2 = DataFrame(Series(idx2)) - assert_series_equal(df2[0], Series(idx2, name=0)) - - # interleave with object - result = self.tzframe.assign(D='foo').values - expected = np.array([[Timestamp('2013-01-01 00:00:00'), - Timestamp('2013-01-02 00:00:00'), - Timestamp('2013-01-03 00:00:00')], - [Timestamp('2013-01-01 00:00:00-0500', - tz='US/Eastern'), - pd.NaT, - Timestamp('2013-01-03 00:00:00-0500', - tz='US/Eastern')], - [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), - pd.NaT, - Timestamp('2013-01-03 00:00:00+0100', tz='CET')], - ['foo', 'foo', 'foo']], dtype=object).T - self.assert_numpy_array_equal(result, expected) - - # interleave with only datetime64[ns] - result = self.tzframe.values - expected = np.array([[Timestamp('2013-01-01 00:00:00'), - Timestamp('2013-01-02 00:00:00'), - Timestamp('2013-01-03 00:00:00')], - [Timestamp('2013-01-01 00:00:00-0500', - tz='US/Eastern'), - pd.NaT, - Timestamp('2013-01-03 00:00:00-0500', - tz='US/Eastern')], - [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), - pd.NaT, - Timestamp('2013-01-03 00:00:00+0100', - tz='CET')]], dtype=object).T - self.assert_numpy_array_equal(result, 
expected) - - # astype - expected = np.array([[Timestamp('2013-01-01 00:00:00'), - Timestamp('2013-01-02 00:00:00'), - Timestamp('2013-01-03 00:00:00')], - [Timestamp('2013-01-01 00:00:00-0500', - tz='US/Eastern'), - pd.NaT, - Timestamp('2013-01-03 00:00:00-0500', - tz='US/Eastern')], - [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), - pd.NaT, - Timestamp('2013-01-03 00:00:00+0100', - tz='CET')]], - dtype=object).T - result = self.tzframe.astype(object) - assert_frame_equal(result, DataFrame( - expected, index=self.tzframe.index, columns=self.tzframe.columns)) - - result = self.tzframe.astype('datetime64[ns]') - expected = DataFrame({'A': date_range('20130101', periods=3), - 'B': (date_range('20130101', periods=3, - tz='US/Eastern') - .tz_convert('UTC') - .tz_localize(None)), - 'C': (date_range('20130101', periods=3, - tz='CET') - .tz_convert('UTC') - .tz_localize(None))}) - expected.iloc[1, 1] = pd.NaT - expected.iloc[1, 2] = pd.NaT - assert_frame_equal(result, expected) - - # str formatting - result = self.tzframe.astype(str) - expected = np.array([['2013-01-01', '2013-01-01 00:00:00-05:00', - '2013-01-01 00:00:00+01:00'], - ['2013-01-02', 'NaT', 'NaT'], - ['2013-01-03', '2013-01-03 00:00:00-05:00', - '2013-01-03 00:00:00+01:00']], dtype=object) - self.assert_numpy_array_equal(result, expected) - - result = str(self.tzframe) - self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 ' - '2013-01-01 00:00:00+01:00' in result) - self.assertTrue('1 2013-01-02 ' - 'NaT NaT' in result) - self.assertTrue('2 2013-01-03 2013-01-03 00:00:00-05:00 ' - '2013-01-03 00:00:00+01:00' in result) - - # setitem - df['C'] = idx - assert_series_equal(df['C'], Series(idx, name='C')) - - df['D'] = 'foo' - df['D'] = idx - assert_series_equal(df['D'], Series(idx, name='D')) - del df['D'] - - # assert that A & C are not sharing the same base (e.g. 
they - # are copies) - b1 = df._data.blocks[1] - b2 = df._data.blocks[2] - self.assertTrue(b1.values.equals(b2.values)) - self.assertFalse(id(b1.values.values.base) == - id(b2.values.values.base)) - - # with nan - df2 = df.copy() - df2.iloc[1, 1] = pd.NaT - df2.iloc[1, 2] = pd.NaT - result = df2['B'] - assert_series_equal(notnull(result), Series( - [True, False, True], name='B')) - assert_series_equal(df2.dtypes, df.dtypes) - - # set/reset - df = DataFrame({'A': [0, 1, 2]}, index=idx) - result = df.reset_index() - self.assertTrue(result['foo'].dtype, 'M8[ns, US/Eastern') - - result = result.set_index('foo') - tm.assert_index_equal(df.index, idx) + tm.assert_frame_equal(df, expected) def test_constructor_for_list_with_dtypes(self): # TODO(wesm): unused @@ -1523,39 +1382,39 @@ def test_constructor_for_list_with_dtypes(self): df = DataFrame({'a': [2 ** 31, 2 ** 31 + 1]}) result = df.get_dtype_counts() expected = Series({'int64': 1}) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) # GH #2751 (construction with no index specified), make sure we cast to # platform values df = DataFrame([1, 2]) result = df.get_dtype_counts() expected = Series({'int64': 1}) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) df = DataFrame([1., 2.]) result = df.get_dtype_counts() expected = Series({'float64': 1}) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) df = DataFrame({'a': [1, 2]}) result = df.get_dtype_counts() expected = Series({'int64': 1}) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) df = DataFrame({'a': [1., 2.]}) result = df.get_dtype_counts() expected = Series({'float64': 1}) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) df = DataFrame({'a': 1}, index=lrange(3)) result = df.get_dtype_counts() expected = Series({'int64': 1}) - assert_series_equal(result, expected) + tm.assert_series_equal(result, 
expected) df = DataFrame({'a': 1.}, index=lrange(3)) result = df.get_dtype_counts() expected = Series({'float64': 1}) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) # with object list df = DataFrame({'a': [1, 2, 4, 7], 'b': [1.2, 2.3, 5.1, 6.3], @@ -1567,7 +1426,7 @@ def test_constructor_for_list_with_dtypes(self): {'int64': 1, 'float64': 2, datetime64name: 1, objectname: 1}) result.sort_index() expected.sort_index() - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) def test_constructor_frame_copy(self): cop = DataFrame(self.frame, copy=True) @@ -1605,7 +1464,8 @@ def check(df): indexer = np.arange(len(df.columns))[isnull(df.columns)] if len(indexer) == 1: - assert_series_equal(df.iloc[:, indexer[0]], df.loc[:, np.nan]) + tm.assert_series_equal(df.iloc[:, indexer[0]], + df.loc[:, np.nan]) # multiple nans should fail else: @@ -1642,17 +1502,17 @@ def test_from_records_to_records(self): # TODO(wesm): unused frame = DataFrame.from_records(arr) # noqa - index = np.arange(len(arr))[::-1] + index = pd.Index(np.arange(len(arr))[::-1]) indexed_frame = DataFrame.from_records(arr, index=index) - self.assert_numpy_array_equal(indexed_frame.index, index) + self.assert_index_equal(indexed_frame.index, index) # without names, it should go to last ditch arr2 = np.zeros((2, 3)) - assert_frame_equal(DataFrame.from_records(arr2), DataFrame(arr2)) + tm.assert_frame_equal(DataFrame.from_records(arr2), DataFrame(arr2)) # wrong length msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)' - with assertRaisesRegexp(ValueError, msg): + with tm.assertRaisesRegexp(ValueError, msg): DataFrame.from_records(arr, index=index[:-1]) indexed_frame = DataFrame.from_records(arr, index='f1') @@ -1683,14 +1543,14 @@ def test_from_records_iterator(self): 'u': np.array([1.0, 3.0], dtype=np.float32), 'y': np.array([2, 4], dtype=np.int64), 'z': np.array([2, 4], dtype=np.int32)}) - assert_frame_equal(df.reindex_like(xp), xp) 
+ tm.assert_frame_equal(df.reindex_like(xp), xp) # no dtypes specified here, so just compare with the default arr = [(1.0, 2), (3.0, 4), (5., 6), (7., 8)] df = DataFrame.from_records(iter(arr), columns=['x', 'y'], nrows=2) - assert_frame_equal(df, xp.reindex( - columns=['x', 'y']), check_dtype=False) + tm.assert_frame_equal(df, xp.reindex(columns=['x', 'y']), + check_dtype=False) def test_from_records_tuples_generator(self): def tuple_generator(length): @@ -1707,7 +1567,7 @@ def tuple_generator(length): generator = tuple_generator(10) result = DataFrame.from_records(generator, columns=columns_names) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_from_records_lists_generator(self): def list_generator(length): @@ -1724,7 +1584,7 @@ def list_generator(length): generator = list_generator(10) result = DataFrame.from_records(generator, columns=columns_names) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_from_records_columns_not_modified(self): tuples = [(1, 2, 3), @@ -1757,7 +1617,7 @@ def test_from_records_duplicates(self): expected = DataFrame([(1, 2, 3), (4, 5, 6)], columns=['a', 'b', 'a']) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_from_records_set_index_name(self): def create_dict(order_id): @@ -1782,7 +1642,7 @@ def test_from_records_misc_brokenness(self): result = DataFrame.from_records(data, columns=['a', 'b']) exp = DataFrame(data, columns=['a', 'b']) - assert_frame_equal(result, exp) + tm.assert_frame_equal(result, exp) # overlap in index/index_names @@ -1790,7 +1650,7 @@ def test_from_records_misc_brokenness(self): result = DataFrame.from_records(data, index=['a', 'b', 'c']) exp = DataFrame(data, index=['a', 'b', 'c']) - assert_frame_equal(result, exp) + tm.assert_frame_equal(result, exp) # GH 2623 rows = [] @@ -1806,28 +1666,28 @@ def test_from_records_misc_brokenness(self): df2_obj = DataFrame.from_records(rows, 
columns=['date', 'test']) results = df2_obj.get_dtype_counts() expected = Series({'datetime64[ns]': 1, 'int64': 1}) - assert_series_equal(results, expected) + tm.assert_series_equal(results, expected) def test_from_records_empty(self): # 3562 result = DataFrame.from_records([], columns=['a', 'b', 'c']) expected = DataFrame(columns=['a', 'b', 'c']) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) result = DataFrame.from_records([], columns=['a', 'b', 'b']) expected = DataFrame(columns=['a', 'b', 'b']) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_from_records_empty_with_nonempty_fields_gh3682(self): a = np.array([(1, 2)], dtype=[('id', np.int64), ('value', np.int64)]) df = DataFrame.from_records(a, index='id') - assert_numpy_array_equal(df.index, Index([1], name='id')) + tm.assert_index_equal(df.index, Index([1], name='id')) self.assertEqual(df.index.name, 'id') - assert_numpy_array_equal(df.columns, Index(['value'])) + tm.assert_index_equal(df.columns, Index(['value'])) b = np.array([], dtype=[('id', np.int64), ('value', np.int64)]) df = DataFrame.from_records(b, index='id') - assert_numpy_array_equal(df.index, Index([], name='id')) + tm.assert_index_equal(df.index, Index([], name='id')) self.assertEqual(df.index.name, 'id') def test_from_records_with_datetimes(self): @@ -1850,14 +1710,14 @@ def test_from_records_with_datetimes(self): raise nose.SkipTest("known failure of numpy rec array creation") result = DataFrame.from_records(recarray) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) # coercion should work too arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])] dtypes = [('EXPIRY', '<M8[m]')] recarray = np.core.records.fromarrays(arrdata, dtype=dtypes) result = DataFrame.from_records(recarray) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_from_records_sequencelike(self): df = DataFrame({'A': 
np.array(np.random.randn(6), dtype=np.float64), @@ -1903,14 +1763,14 @@ def test_from_records_sequencelike(self): result4 = (DataFrame.from_records(lists, columns=columns) .reindex(columns=df.columns)) - assert_frame_equal(result, df, check_dtype=False) - assert_frame_equal(result2, df) - assert_frame_equal(result3, df) - assert_frame_equal(result4, df, check_dtype=False) + tm.assert_frame_equal(result, df, check_dtype=False) + tm.assert_frame_equal(result2, df) + tm.assert_frame_equal(result3, df) + tm.assert_frame_equal(result4, df, check_dtype=False) # tuples is in the order of the columns result = DataFrame.from_records(tuples) - self.assert_numpy_array_equal(result.columns, lrange(8)) + tm.assert_index_equal(result.columns, pd.Index(lrange(8))) # test exclude parameter & we are casting the results here (as we don't # have dtype info to recover) @@ -1919,13 +1779,14 @@ def test_from_records_sequencelike(self): exclude = list(set(range(8)) - set(columns_to_test)) result = DataFrame.from_records(tuples, exclude=exclude) result.columns = [columns[i] for i in sorted(columns_to_test)] - assert_series_equal(result['C'], df['C']) - assert_series_equal(result['E1'], df['E1'].astype('float64')) + tm.assert_series_equal(result['C'], df['C']) + tm.assert_series_equal(result['E1'], df['E1'].astype('float64')) # empty case result = DataFrame.from_records([], columns=['foo', 'bar', 'baz']) self.assertEqual(len(result), 0) - self.assert_numpy_array_equal(result.columns, ['foo', 'bar', 'baz']) + self.assert_index_equal(result.columns, + pd.Index(['foo', 'bar', 'baz'])) result = DataFrame.from_records([]) self.assertEqual(len(result), 0) @@ -1962,24 +1823,24 @@ def test_from_records_dictlike(self): .reindex(columns=df.columns)) for r in results: - assert_frame_equal(r, df) + tm.assert_frame_equal(r, df) def test_from_records_with_index_data(self): df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) data = np.random.randn(10) df1 = DataFrame.from_records(df, 
index=data)
-        assert(df1.index.equals(Index(data)))
+        tm.assert_index_equal(df1.index, Index(data))
 
     def test_from_records_bad_index_column(self):
         df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'])
 
         # should pass
         df1 = DataFrame.from_records(df, index=['C'])
-        assert(df1.index.equals(Index(df.C)))
+        tm.assert_index_equal(df1.index, Index(df.C))
 
         df1 = DataFrame.from_records(df, index='C')
-        assert(df1.index.equals(Index(df.C)))
+        tm.assert_index_equal(df1.index, Index(df.C))
 
         # should fail
         self.assertRaises(ValueError, DataFrame.from_records, df, index=[2])
@@ -2002,7 +1863,7 @@ def __iter__(self):
 
         result = DataFrame.from_records(recs)
         expected = DataFrame.from_records(tups)
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)
 
     def test_from_records_len0_with_columns(self):
         # #2633
@@ -2012,3 +1873,45 @@
         self.assertTrue(np.array_equal(result.columns, ['bar']))
         self.assertEqual(len(result), 0)
         self.assertEqual(result.index.name, 'foo')
+
+
+class TestDataFrameConstructorWithDatetimeTZ(tm.TestCase, TestData):
+
+    _multiprocess_can_split_ = True
+
+    def test_from_dict(self):
+
+        # 8260
+        # support datetime64 with tz
+
+        idx = Index(date_range('20130101', periods=3, tz='US/Eastern'),
+                    name='foo')
+        dr = date_range('20130110', periods=3)
+
+        # construction
+        df = DataFrame({'A': idx, 'B': dr})
+        self.assertTrue(df['A'].dtype, 'M8[ns, US/Eastern')
+        self.assertTrue(df['A'].name == 'A')
+        tm.assert_series_equal(df['A'], Series(idx, name='A'))
+        tm.assert_series_equal(df['B'], Series(dr, name='B'))
+
+    def test_from_index(self):
+
+        # from index
+        idx2 = date_range('20130101', periods=3, tz='US/Eastern', name='foo')
+        df2 = DataFrame(idx2)
+        tm.assert_series_equal(df2['foo'], Series(idx2, name='foo'))
+        df2 = DataFrame(Series(idx2))
+        tm.assert_series_equal(df2['foo'], Series(idx2, name='foo'))
+
+        idx2 = date_range('20130101', periods=3, tz='US/Eastern')
+        df2 = DataFrame(idx2)
+        tm.assert_series_equal(df2[0], Series(idx2, name=0))
+        df2 = DataFrame(Series(idx2))
+        tm.assert_series_equal(df2[0], Series(idx2, name=0))
+
+if __name__ == '__main__':
+    import nose  # noqa
+
+    nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+                   exit=False)
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index 8bb253e17fd06..53083a602e183 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -42,19 +42,18 @@ def test_to_dict(self):
                 self.assertEqual(v2, recons_data[k][k2])
 
         recons_data = DataFrame(test_data).to_dict("sp")
-
         expected_split = {'columns': ['A', 'B'], 'index': ['1', '2', '3'],
                           'data': [[1.0, '1'], [2.0, '2'], [nan, '3']]}
-
-        tm.assert_almost_equal(recons_data, expected_split)
+        tm.assert_dict_equal(recons_data, expected_split)
 
         recons_data = DataFrame(test_data).to_dict("r")
-
         expected_records = [{'A': 1.0, 'B': '1'},
                             {'A': 2.0, 'B': '2'},
                             {'A': nan, 'B': '3'}]
-
-        tm.assert_almost_equal(recons_data, expected_records)
+        tm.assertIsInstance(recons_data, list)
+        self.assertEqual(len(recons_data), 3)
+        for l, r in zip(recons_data, expected_records):
+            tm.assert_dict_equal(l, r)
 
         # GH10844
         recons_data = DataFrame(test_data).to_dict("i")
@@ -78,24 +77,24 @@ def test_to_dict_timestamp(self):
         expected_records_mixed = [{'A': tsmp, 'B': 1},
                                   {'A': tsmp, 'B': 2}]
 
-        tm.assert_almost_equal(test_data.to_dict(
-            orient='records'), expected_records)
-        tm.assert_almost_equal(test_data_mixed.to_dict(
-            orient='records'), expected_records_mixed)
+        self.assertEqual(test_data.to_dict(orient='records'),
+                         expected_records)
+        self.assertEqual(test_data_mixed.to_dict(orient='records'),
+                         expected_records_mixed)
 
         expected_series = {
-            'A': Series([tsmp, tsmp]),
-            'B': Series([tsmp, tsmp]),
+            'A': Series([tsmp, tsmp], name='A'),
+            'B': Series([tsmp, tsmp],
name='A'), + 'B': Series([1, 2], name='B'), } - tm.assert_almost_equal(test_data.to_dict( - orient='series'), expected_series) - tm.assert_almost_equal(test_data_mixed.to_dict( - orient='series'), expected_series_mixed) + tm.assert_dict_equal(test_data.to_dict(orient='series'), + expected_series) + tm.assert_dict_equal(test_data_mixed.to_dict(orient='series'), + expected_series_mixed) expected_split = { 'index': [0, 1], @@ -110,10 +109,10 @@ def test_to_dict_timestamp(self): 'columns': ['A', 'B'] } - tm.assert_almost_equal(test_data.to_dict( - orient='split'), expected_split) - tm.assert_almost_equal(test_data_mixed.to_dict( - orient='split'), expected_split_mixed) + tm.assert_dict_equal(test_data.to_dict(orient='split'), + expected_split) + tm.assert_dict_equal(test_data_mixed.to_dict(orient='split'), + expected_split_mixed) def test_to_dict_invalid_orient(self): df = DataFrame({'A': [0, 1]}) @@ -172,3 +171,11 @@ def test_to_records_index_name(self): df.index.names = ['A', None] rs = df.to_records() self.assertIn('level_0', rs.dtype.fields) + + def test_to_records_with_unicode_index(self): + # GH13172 + # unicode_literals conflict with to_records + result = DataFrame([{u'a': u'x', u'b': 'y'}]).set_index(u'a')\ + .to_records() + expected = np.rec.array([('x', 'y')], dtype=[('a', 'O'), ('b', 'O')]) + tm.assert_almost_equal(result, expected) diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py index 97ca8238b78f9..ab0cf04308bac 100644 --- a/pandas/tests/frame/test_dtypes.py +++ b/pandas/tests/frame/test_dtypes.py @@ -9,6 +9,7 @@ from pandas import (DataFrame, Series, date_range, Timedelta, Timestamp, compat, option_context) from pandas.compat import u +from pandas.core import common as com from pandas.tests.frame.common import TestData from pandas.util.testing import (assert_series_equal, assert_frame_equal, @@ -74,6 +75,21 @@ def test_empty_frame_dtypes_ftypes(self): assert_series_equal(df[:0].dtypes, ex_dtypes) 
assert_series_equal(df[:0].ftypes, ex_ftypes) + def test_datetime_with_tz_dtypes(self): + tzframe = DataFrame({'A': date_range('20130101', periods=3), + 'B': date_range('20130101', periods=3, + tz='US/Eastern'), + 'C': date_range('20130101', periods=3, tz='CET')}) + tzframe.iloc[1, 1] = pd.NaT + tzframe.iloc[1, 2] = pd.NaT + result = tzframe.dtypes.sort_index() + expected = Series([np.dtype('datetime64[ns]'), + com.DatetimeTZDtype('datetime64[ns, US/Eastern]'), + com.DatetimeTZDtype('datetime64[ns, CET]')], + ['A', 'B', 'C']) + + assert_series_equal(result, expected) + def test_dtypes_are_correct_after_column_slice(self): # GH6525 df = pd.DataFrame(index=range(5), columns=list("abc"), dtype=np.float_) @@ -178,6 +194,16 @@ def test_select_dtypes_bad_datetime64(self): with tm.assertRaisesRegexp(ValueError, '.+ is too specific'): df.select_dtypes(exclude=['datetime64[as]']) + def test_select_dtypes_datetime_with_tz(self): + + df2 = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'), + B=Timestamp('20130603', tz='CET')), + index=range(5)) + df3 = pd.concat([df2.A.to_frame(), df2.B.to_frame()], axis=1) + result = df3.select_dtypes(include=['datetime64[ns]']) + expected = df3.reindex(columns=[]) + assert_frame_equal(result, expected) + def test_select_dtypes_str_raises(self): df = DataFrame({'a': list('abc'), 'g': list(u('abc')), @@ -372,6 +398,51 @@ def test_astype_str(self): expected = DataFrame(['1.12345678901']) assert_frame_equal(result, expected) + def test_astype_dict(self): + # GH7271 + a = Series(date_range('2010-01-04', periods=5)) + b = Series(range(5)) + c = Series([0.0, 0.2, 0.4, 0.6, 0.8]) + d = Series(['1.0', '2', '3.14', '4', '5.4']) + df = DataFrame({'a': a, 'b': b, 'c': c, 'd': d}) + original = df.copy(deep=True) + + # change type of a subset of columns + result = df.astype({'b': 'str', 'd': 'float32'}) + expected = DataFrame({ + 'a': a, + 'b': Series(['0', '1', '2', '3', '4']), + 'c': c, + 'd': Series([1.0, 2.0, 3.14, 4.0, 5.4], dtype='float32')}) 
+ assert_frame_equal(result, expected) + assert_frame_equal(df, original) + + result = df.astype({'b': np.float32, 'c': 'float32', 'd': np.float64}) + expected = DataFrame({ + 'a': a, + 'b': Series([0.0, 1.0, 2.0, 3.0, 4.0], dtype='float32'), + 'c': Series([0.0, 0.2, 0.4, 0.6, 0.8], dtype='float32'), + 'd': Series([1.0, 2.0, 3.14, 4.0, 5.4], dtype='float64')}) + assert_frame_equal(result, expected) + assert_frame_equal(df, original) + + # change all columns + assert_frame_equal(df.astype({'a': str, 'b': str, 'c': str, 'd': str}), + df.astype(str)) + assert_frame_equal(df, original) + + # error should be raised when using something other than column labels + # in the keys of the dtype dict + self.assertRaises(KeyError, df.astype, {'b': str, 2: str}) + self.assertRaises(KeyError, df.astype, {'e': str}) + assert_frame_equal(df, original) + + # if the dtypes provided are the same as the original dtypes, the + # resulting DataFrame should be the same as the original DataFrame + equiv = df.astype({col: df[col].dtype for col in df.columns}) + assert_frame_equal(df, equiv) + assert_frame_equal(df, original) + def test_timedeltas(self): df = DataFrame(dict(A=Series(date_range('2012-1-1', periods=3, freq='D')), @@ -394,3 +465,94 @@ def test_timedeltas(self): 'int64': 1}).sort_values() result = df.get_dtype_counts().sort_values() assert_series_equal(result, expected) + + +class TestDataFrameDatetimeWithTZ(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_interleave(self): + + # interleave with object + result = self.tzframe.assign(D='foo').values + expected = np.array([[Timestamp('2013-01-01 00:00:00'), + Timestamp('2013-01-02 00:00:00'), + Timestamp('2013-01-03 00:00:00')], + [Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern'), + pd.NaT, + Timestamp('2013-01-03 00:00:00-0500', + tz='US/Eastern')], + [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), + pd.NaT, + Timestamp('2013-01-03 00:00:00+0100', tz='CET')], + ['foo', 'foo', 'foo']], 
dtype=object).T + self.assert_numpy_array_equal(result, expected) + + # interleave with only datetime64[ns] + result = self.tzframe.values + expected = np.array([[Timestamp('2013-01-01 00:00:00'), + Timestamp('2013-01-02 00:00:00'), + Timestamp('2013-01-03 00:00:00')], + [Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern'), + pd.NaT, + Timestamp('2013-01-03 00:00:00-0500', + tz='US/Eastern')], + [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), + pd.NaT, + Timestamp('2013-01-03 00:00:00+0100', + tz='CET')]], dtype=object).T + self.assert_numpy_array_equal(result, expected) + + def test_astype(self): + # astype + expected = np.array([[Timestamp('2013-01-01 00:00:00'), + Timestamp('2013-01-02 00:00:00'), + Timestamp('2013-01-03 00:00:00')], + [Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern'), + pd.NaT, + Timestamp('2013-01-03 00:00:00-0500', + tz='US/Eastern')], + [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), + pd.NaT, + Timestamp('2013-01-03 00:00:00+0100', + tz='CET')]], + dtype=object).T + result = self.tzframe.astype(object) + assert_frame_equal(result, DataFrame( + expected, index=self.tzframe.index, columns=self.tzframe.columns)) + + result = self.tzframe.astype('datetime64[ns]') + expected = DataFrame({'A': date_range('20130101', periods=3), + 'B': (date_range('20130101', periods=3, + tz='US/Eastern') + .tz_convert('UTC') + .tz_localize(None)), + 'C': (date_range('20130101', periods=3, + tz='CET') + .tz_convert('UTC') + .tz_localize(None))}) + expected.iloc[1, 1] = pd.NaT + expected.iloc[1, 2] = pd.NaT + assert_frame_equal(result, expected) + + def test_astype_str(self): + # str formatting + result = self.tzframe.astype(str) + expected = DataFrame([['2013-01-01', '2013-01-01 00:00:00-05:00', + '2013-01-01 00:00:00+01:00'], + ['2013-01-02', 'NaT', 'NaT'], + ['2013-01-03', '2013-01-03 00:00:00-05:00', + '2013-01-03 00:00:00+01:00']], + columns=self.tzframe.columns) + self.assert_frame_equal(result, expected) + + result = str(self.tzframe) + 
self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 ' + '2013-01-01 00:00:00+01:00' in result) + self.assertTrue('1 2013-01-02 ' + 'NaT NaT' in result) + self.assertTrue('2 2013-01-03 2013-01-03 00:00:00-05:00 ' + '2013-01-03 00:00:00+01:00' in result) diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py index 1e3940dc8f038..78354f32acbda 100644 --- a/pandas/tests/frame/test_indexing.py +++ b/pandas/tests/frame/test_indexing.py @@ -216,7 +216,7 @@ def test_getitem_boolean(self): subindex = self.tsframe.index[indexer] subframe = self.tsframe[indexer] - self.assert_numpy_array_equal(subindex, subframe.index) + self.assert_index_equal(subindex, subframe.index) with assertRaisesRegexp(ValueError, 'Item wrong length'): self.tsframe[indexer[:-1]] @@ -393,13 +393,17 @@ def test_setitem(self): series = self.frame['A'][::2] self.frame['col5'] = series self.assertIn('col5', self.frame) - tm.assert_dict_equal(series, self.frame['col5'], - compare_keys=False) + + self.assertEqual(len(series), 15) + self.assertEqual(len(self.frame), 30) + + exp = np.ravel(np.column_stack((series.values, [np.nan] * 15))) + exp = Series(exp, index=self.frame.index, name='col5') + tm.assert_series_equal(self.frame['col5'], exp) series = self.frame['A'] self.frame['col6'] = series - tm.assert_dict_equal(series, self.frame['col6'], - compare_keys=False) + tm.assert_series_equal(series, self.frame['col6'], check_names=False) with tm.assertRaises(KeyError): self.frame[randn(len(self.frame) + 1)] = 1 @@ -1196,7 +1200,7 @@ def test_getitem_fancy_scalar(self): for col in f.columns: ts = f[col] for idx in f.index[::5]: - assert_almost_equal(ix[idx, col], ts[idx]) + self.assertEqual(ix[idx, col], ts[idx]) def test_setitem_fancy_scalar(self): f = self.frame @@ -1388,7 +1392,7 @@ def test_setitem_single_column_mixed(self): columns=['foo', 'bar', 'baz']) df['str'] = 'qux' df.ix[::2, 'str'] = nan - expected = [nan, 'qux', nan, 'qux', nan] + expected = np.array([nan, 'qux', 
nan, 'qux', nan], dtype=object) assert_almost_equal(df['str'].values, expected) def test_setitem_single_column_mixed_datetime(self): @@ -1542,21 +1546,21 @@ def test_get_value(self): for col in self.frame.columns: result = self.frame.get_value(idx, col) expected = self.frame[col][idx] - assert_almost_equal(result, expected) + self.assertEqual(result, expected) def test_lookup(self): - def alt(df, rows, cols): + def alt(df, rows, cols, dtype): result = [] for r, c in zip(rows, cols): result.append(df.get_value(r, c)) - return result + return np.array(result, dtype=dtype) def testit(df): rows = list(df.index) * len(df.columns) cols = list(df.columns) * len(df.index) result = df.lookup(rows, cols) - expected = alt(df, rows, cols) - assert_almost_equal(result, expected) + expected = alt(df, rows, cols, dtype=np.object_) + tm.assert_almost_equal(result, expected, check_dtype=False) testit(self.mixed_frame) testit(self.frame) @@ -1566,7 +1570,7 @@ def testit(df): 'mask_b': [True, False, False, False], 'mask_c': [False, True, False, True]}) df['mask'] = df.lookup(df.index, 'mask_' + df['label']) - exp_mask = alt(df, df.index, 'mask_' + df['label']) + exp_mask = alt(df, df.index, 'mask_' + df['label'], dtype=np.bool_) tm.assert_series_equal(df['mask'], pd.Series(exp_mask, name='mask')) self.assertEqual(df['mask'].dtype, np.bool_) @@ -1583,7 +1587,7 @@ def test_set_value(self): for idx in self.frame.index: for col in self.frame.columns: self.frame.set_value(idx, col, 1) - assert_almost_equal(self.frame[col][idx], 1) + self.assertEqual(self.frame[col][idx], 1) def test_set_value_resize(self): @@ -1773,7 +1777,7 @@ def test_iget_value(self): for j, col in enumerate(self.frame.columns): result = self.frame.iat[i, j] expected = self.frame.at[row, col] - assert_almost_equal(result, expected) + self.assertEqual(result, expected) def test_nested_exception(self): # Ignore the strange way of triggering the problem @@ -2695,3 +2699,64 @@ def test_type_error_multiindex(self): result = 
dg['x', 0] assert_series_equal(result, expected) + + +class TestDataFrameIndexingDatetimeWithTZ(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def setUp(self): + self.idx = Index(date_range('20130101', periods=3, tz='US/Eastern'), + name='foo') + self.dr = date_range('20130110', periods=3) + self.df = DataFrame({'A': self.idx, 'B': self.dr}) + + def test_setitem(self): + + df = self.df + idx = self.idx + + # setitem + df['C'] = idx + assert_series_equal(df['C'], Series(idx, name='C')) + + df['D'] = 'foo' + df['D'] = idx + assert_series_equal(df['D'], Series(idx, name='D')) + del df['D'] + + # assert that A & C are not sharing the same base (e.g. they + # are copies) + b1 = df._data.blocks[1] + b2 = df._data.blocks[2] + self.assertTrue(b1.values.equals(b2.values)) + self.assertFalse(id(b1.values.values.base) == + id(b2.values.values.base)) + + # with nan + df2 = df.copy() + df2.iloc[1, 1] = pd.NaT + df2.iloc[1, 2] = pd.NaT + result = df2['B'] + assert_series_equal(notnull(result), Series( + [True, False, True], name='B')) + assert_series_equal(df2.dtypes, df.dtypes) + + def test_set_reset(self): + + idx = self.idx + + # set/reset + df = DataFrame({'A': [0, 1, 2]}, index=idx) + result = df.reset_index() + self.assertEqual(result['foo'].dtype, 'M8[ns, US/Eastern]') + + result = result.set_index('foo') + tm.assert_index_equal(result.index, idx) + + def test_transpose(self): + + result = self.df.T + expected = DataFrame(self.df.values.T) + expected.index = ['A', 'B'] + assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_misc_api.py b/pandas/tests/frame/test_misc_api.py index 0857d23dc1176..03b3c0a5e65d0 100644 --- a/pandas/tests/frame/test_misc_api.py +++ b/pandas/tests/frame/test_misc_api.py @@ -58,7 +58,7 @@ def test_get_value(self): for col in self.frame.columns: result = self.frame.get_value(idx, col) expected = self.frame[col][idx] - assert_almost_equal(result, expected) + tm.assert_almost_equal(result, expected) def 
test_join_index(self): # left / right @@ -67,15 +67,15 @@ def test_join_index(self): f2 = self.frame.reindex(columns=['C', 'D']) joined = f.join(f2) - self.assertTrue(f.index.equals(joined.index)) + self.assert_index_equal(f.index, joined.index) self.assertEqual(len(joined.columns), 4) joined = f.join(f2, how='left') - self.assertTrue(joined.index.equals(f.index)) + self.assert_index_equal(joined.index, f.index) self.assertEqual(len(joined.columns), 4) joined = f.join(f2, how='right') - self.assertTrue(joined.index.equals(f2.index)) + self.assert_index_equal(joined.index, f2.index) self.assertEqual(len(joined.columns), 4) # inner @@ -84,7 +84,7 @@ def test_join_index(self): f2 = self.frame.reindex(columns=['C', 'D']) joined = f.join(f2, how='inner') - self.assertTrue(joined.index.equals(f.index.intersection(f2.index))) + self.assert_index_equal(joined.index, f.index.intersection(f2.index)) self.assertEqual(len(joined.columns), 4) # outer @@ -148,12 +148,12 @@ def test_join_overlap(self): def test_add_prefix_suffix(self): with_prefix = self.frame.add_prefix('foo#') - expected = ['foo#%s' % c for c in self.frame.columns] - self.assert_numpy_array_equal(with_prefix.columns, expected) + expected = pd.Index(['foo#%s' % c for c in self.frame.columns]) + self.assert_index_equal(with_prefix.columns, expected) with_suffix = self.frame.add_suffix('#foo') - expected = ['%s#foo' % c for c in self.frame.columns] - self.assert_numpy_array_equal(with_suffix.columns, expected) + expected = pd.Index(['%s#foo' % c for c in self.frame.columns]) + self.assert_index_equal(with_suffix.columns, expected) class TestDataFrameMisc(tm.TestCase, SharedWithSparse, TestData): @@ -391,7 +391,7 @@ def test_repr_with_mi_nat(self): index=[[pd.NaT, pd.Timestamp('20130101')], ['a', 'b']]) res = repr(df) exp = ' X\nNaT a 1\n2013-01-01 b 2' - nose.tools.assert_equal(res, exp) + self.assertEqual(res, exp) def test_iterkv_deprecation(self): with tm.assert_produces_warning(FutureWarning): diff --git 
a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py index 681ff8cf95dc9..8a6cbe44465c1 100644 --- a/pandas/tests/frame/test_missing.py +++ b/pandas/tests/frame/test_missing.py @@ -66,15 +66,17 @@ def test_dropIncompleteRows(self): smaller_frame = frame.dropna() assert_series_equal(frame['foo'], original) inp_frame1.dropna(inplace=True) - self.assert_numpy_array_equal(smaller_frame['foo'], mat[5:]) - self.assert_numpy_array_equal(inp_frame1['foo'], mat[5:]) + + exp = Series(mat[5:], index=self.frame.index[5:], name='foo') + tm.assert_series_equal(smaller_frame['foo'], exp) + tm.assert_series_equal(inp_frame1['foo'], exp) samesize_frame = frame.dropna(subset=['bar']) assert_series_equal(frame['foo'], original) self.assertTrue((frame['bar'] == 5).all()) inp_frame2.dropna(subset=['bar'], inplace=True) - self.assertTrue(samesize_frame.index.equals(self.frame.index)) - self.assertTrue(inp_frame2.index.equals(self.frame.index)) + self.assert_index_equal(samesize_frame.index, self.frame.index) + self.assert_index_equal(inp_frame2.index, self.frame.index) def test_dropna(self): df = DataFrame(np.random.randn(6, 4)) diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py index 0d58dd5402aff..2bdd6657eaf18 100644 --- a/pandas/tests/frame/test_mutate_columns.py +++ b/pandas/tests/frame/test_mutate_columns.py @@ -5,7 +5,7 @@ from pandas.compat import range, lrange import numpy as np -from pandas import DataFrame, Series +from pandas import DataFrame, Series, Index from pandas.util.testing import (assert_series_equal, assert_frame_equal, @@ -123,12 +123,12 @@ def test_insert(self): columns=['c', 'b', 'a']) df.insert(0, 'foo', df['a']) - self.assert_numpy_array_equal(df.columns, ['foo', 'c', 'b', 'a']) + self.assert_index_equal(df.columns, Index(['foo', 'c', 'b', 'a'])) tm.assert_series_equal(df['a'], df['foo'], check_names=False) df.insert(2, 'bar', df['c']) - self.assert_numpy_array_equal(df.columns, - ['foo', 
'c', 'bar', 'b', 'a']) + self.assert_index_equal(df.columns, + Index(['foo', 'c', 'bar', 'b', 'a'])) tm.assert_almost_equal(df['c'], df['bar'], check_names=False) # diff dtype diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py index cd2a0fbeefae3..ee7c296f563f0 100644 --- a/pandas/tests/frame/test_operators.py +++ b/pandas/tests/frame/test_operators.py @@ -724,9 +724,14 @@ def test_combineFrame(self): frame_copy['C'][:5] = nan added = self.frame + frame_copy - tm.assert_dict_equal(added['A'].valid(), - self.frame['A'] * 2, - compare_keys=False) + + indexer = added['A'].valid().index + exp = (self.frame['A'] * 2).copy() + + tm.assert_series_equal(added['A'].valid(), exp.loc[indexer]) + + exp.loc[~exp.index.isin(indexer)] = np.nan + tm.assert_series_equal(added['A'], exp.loc[added['A'].index]) self.assertTrue( np.isnan(added['C'].reindex(frame_copy.index)[:5]).all()) @@ -736,7 +741,7 @@ def test_combineFrame(self): self.assertTrue(np.isnan(added['D']).all()) self_added = self.frame + self.frame - self.assertTrue(self_added.index.equals(self.frame.index)) + self.assert_index_equal(self_added.index, self.frame.index) added_rev = frame_copy + self.frame self.assertTrue(np.isnan(added['D']).all()) @@ -833,7 +838,7 @@ def test_combineSeries(self): smaller_frame = self.tsframe[:-5] smaller_added = smaller_frame.add(ts, axis='index') - self.assertTrue(smaller_added.index.equals(self.tsframe.index)) + self.assert_index_equal(smaller_added.index, self.tsframe.index) smaller_ts = ts[:-5] smaller_added2 = self.tsframe.add(smaller_ts, axis='index') diff --git a/pandas/tests/frame/test_quantile.py b/pandas/tests/frame/test_quantile.py index d883363812ddb..52e8697abe850 100644 --- a/pandas/tests/frame/test_quantile.py +++ b/pandas/tests/frame/test_quantile.py @@ -28,9 +28,12 @@ def test_quantile(self): q = self.tsframe.quantile(0.1, axis=0) self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) + tm.assert_index_equal(q.index, 
self.tsframe.columns) + q = self.tsframe.quantile(0.9, axis=1) - q = self.intframe.quantile(0.1) - self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) + self.assertEqual(q['2000-01-17'], + percentile(self.tsframe.loc['2000-01-17'], 90)) + tm.assert_index_equal(q.index, self.tsframe.index) # test degenerate case q = DataFrame({'x': [], 'y': []}).quantile(0.1, axis=0) @@ -39,13 +42,13 @@ def test_quantile(self): # non-numeric exclusion df = DataFrame({'col1': ['A', 'A', 'B', 'B'], 'col2': [1, 2, 3, 4]}) rs = df.quantile(0.5) - xp = df.median() + xp = df.median().rename(0.5) assert_series_equal(rs, xp) # axis df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) result = df.quantile(.5, axis=1) - expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3]) + expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3], name=0.5) assert_series_equal(result, expected) result = df.quantile([.5, .75], axis=1) @@ -59,9 +62,25 @@ def test_quantile(self): df = DataFrame([[1, 2, 3], ['a', 'b', 4]]) result = df.quantile(.5, axis=1) - expected = Series([3., 4.], index=[0, 1]) + expected = Series([3., 4.], index=[0, 1], name=0.5) assert_series_equal(result, expected) + def test_quantile_axis_mixed(self): + + # mixed on axis=1 + df = DataFrame({"A": [1, 2, 3], + "B": [2., 3., 4.], + "C": pd.date_range('20130101', periods=3), + "D": ['foo', 'bar', 'baz']}) + result = df.quantile(.5, axis=1) + expected = Series([1.5, 2.5, 3.5], name=0.5) + assert_series_equal(result, expected) + + # must raise + def f(): + df.quantile(.5, axis=1, numeric_only=False) + self.assertRaises(TypeError, f) + def test_quantile_axis_parameter(self): # GH 9543/9544 @@ -69,7 +88,7 @@ def test_quantile_axis_parameter(self): result = df.quantile(.5, axis=0) - expected = Series([2., 3.], index=["A", "B"]) + expected = Series([2., 3.], index=["A", "B"], name=0.5) assert_series_equal(result, expected) expected = df.quantile(.5, axis="index") @@ -77,7 +96,7 @@ def test_quantile_axis_parameter(self): result = 
df.quantile(.5, axis=1) - expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3]) + expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3], name=0.5) assert_series_equal(result, expected) result = df.quantile(.5, axis="columns") @@ -107,22 +126,23 @@ def test_quantile_interpolation(self): # interpolation method other than default linear df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) result = df.quantile(.5, axis=1, interpolation='nearest') - expected = Series([1, 2, 3], index=[1, 2, 3]) + expected = Series([1, 2, 3], index=[1, 2, 3], name=0.5) assert_series_equal(result, expected) + # cross-check interpolation=nearest results in original dtype exp = np.percentile(np.array([[1, 2, 3], [2, 3, 4]]), .5, axis=0, interpolation='nearest') - expected = Series(exp, index=[1, 2, 3], dtype='int64') + expected = Series(exp, index=[1, 2, 3], name=0.5, dtype='int64') assert_series_equal(result, expected) # float df = DataFrame({"A": [1., 2., 3.], "B": [2., 3., 4.]}, index=[1, 2, 3]) result = df.quantile(.5, axis=1, interpolation='nearest') - expected = Series([1., 2., 3.], index=[1, 2, 3]) + expected = Series([1., 2., 3.], index=[1, 2, 3], name=0.5) assert_series_equal(result, expected) exp = np.percentile(np.array([[1., 2., 3.], [2., 3., 4.]]), .5, axis=0, interpolation='nearest') - expected = Series(exp, index=[1, 2, 3], dtype='float64') + expected = Series(exp, index=[1, 2, 3], name=0.5, dtype='float64') assert_series_equal(result, expected) # axis @@ -217,7 +237,8 @@ def test_quantile_datetime(self): # datetime result = df.quantile(.5, numeric_only=False) expected = Series([Timestamp('2010-07-02 12:00:00'), 2.5], - index=['a', 'b']) + index=['a', 'b'], + name=0.5) assert_series_equal(result, expected) # datetime w/ multi @@ -231,7 +252,8 @@ def test_quantile_datetime(self): result = df[['a', 'c']].quantile(.5, axis=1, numeric_only=False) expected = Series([Timestamp('2010-07-02 12:00:00'), Timestamp('2011-07-02 12:00:00')], - index=[0, 1]) + index=[0, 1], + 
name=0.5) assert_series_equal(result, expected) result = df[['a', 'c']].quantile([.5], axis=1, numeric_only=False) @@ -256,12 +278,13 @@ def test_quantile_box(self): 'C': [pd.Timedelta('1 days'), pd.Timedelta('2 days'), pd.Timedelta('3 days')]}) + res = df.quantile(0.5, numeric_only=False) - # when squeezed, result.name is explicitly reset + exp = pd.Series([pd.Timestamp('2011-01-02'), pd.Timestamp('2011-01-02', tz='US/Eastern'), pd.Timedelta('2 days')], - name=None, index=['A', 'B', 'C']) + name=0.5, index=['A', 'B', 'C']) tm.assert_series_equal(res, exp) res = df.quantile([0.5], numeric_only=False) @@ -305,7 +328,7 @@ def test_quantile_box(self): pd.Timestamp('2011-01-02', tz='US/Eastern'), pd.Timedelta('2 days'), pd.Timedelta('2 days')], - name=None, index=list('AaBbCc')) + name=0.5, index=list('AaBbCc')) tm.assert_series_equal(res, exp) res = df.quantile([0.5], numeric_only=False) diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py index 3d4be319092c3..66e592c013fb1 100644 --- a/pandas/tests/frame/test_repr_info.py +++ b/pandas/tests/frame/test_repr_info.py @@ -14,7 +14,6 @@ import pandas.formats.format as fmt import pandas as pd -from numpy.testing.decorators import slow import pandas.util.testing as tm from pandas.tests.frame.common import TestData @@ -43,7 +42,7 @@ def test_repr_mixed(self): foo = repr(self.mixed_frame) # noqa self.mixed_frame.info(verbose=False, buf=buf) - @slow + @tm.slow def test_repr_mixed_big(self): # big mixed biggie = DataFrame({'A': np.random.randn(200), @@ -90,7 +89,7 @@ def test_repr_dimensions(self): with option_context('display.show_dimensions', 'truncate'): self.assertFalse("2 rows x 2 columns" in repr(df)) - @slow + @tm.slow def test_repr_big(self): # big one biggie = DataFrame(np.zeros((200, 4)), columns=lrange(4), diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py index e7d64324e6590..066485e966a42 100644 --- a/pandas/tests/frame/test_reshape.py +++ 
b/pandas/tests/frame/test_reshape.py @@ -79,7 +79,7 @@ def test_pivot_integer_bug(self): result = df.pivot(index=1, columns=0, values=2) repr(result) - self.assert_numpy_array_equal(result.columns, ['A', 'B']) + self.assert_index_equal(result.columns, Index(['A', 'B'], name=0)) def test_pivot_index_none(self): # gh-3962 @@ -158,6 +158,8 @@ def test_unstack_fill(self): index=['x', 'y', 'z'], dtype=np.float) assert_frame_equal(result, expected) + def test_unstack_fill_frame(self): + # From a dataframe rows = [[1, 2], [3, 4], [5, 6], [7, 8]] df = DataFrame(rows, columns=list('AB'), dtype=np.int32) @@ -190,6 +192,8 @@ def test_unstack_fill(self): [('A', 'a'), ('A', 'b'), ('B', 'a'), ('B', 'b')]) assert_frame_equal(result, expected) + def test_unstack_fill_frame_datetime(self): + # Test unstacking with date times dv = pd.date_range('2012-01-01', periods=4).values data = Series(dv) @@ -208,6 +212,8 @@ def test_unstack_fill(self): index=['x', 'y', 'z']) assert_frame_equal(result, expected) + def test_unstack_fill_frame_timedelta(self): + # Test unstacking with time deltas td = [Timedelta(days=i) for i in range(4)] data = Series(td) @@ -226,6 +232,8 @@ def test_unstack_fill(self): index=['x', 'y', 'z']) assert_frame_equal(result, expected) + def test_unstack_fill_frame_period(self): + # Test unstacking with period periods = [Period('2012-01'), Period('2012-02'), Period('2012-03'), Period('2012-04')] @@ -245,6 +253,8 @@ def test_unstack_fill(self): index=['x', 'y', 'z']) assert_frame_equal(result, expected) + def test_unstack_fill_frame_categorical(self): + # Test unstacking with categorical data = pd.Series(['a', 'b', 'c', 'a'], dtype='category') data.index = pd.MultiIndex.from_tuples( @@ -273,27 +283,20 @@ def test_unstack_fill(self): assert_frame_equal(result, expected) def test_stack_ints(self): - df = DataFrame( - np.random.randn(30, 27), - columns=MultiIndex.from_tuples( - list(itertools.product(range(3), repeat=3)) - ) - ) - assert_frame_equal( - df.stack(level=[1, 
2]), - df.stack(level=1).stack(level=1) - ) - assert_frame_equal( - df.stack(level=[-2, -1]), - df.stack(level=1).stack(level=1) - ) + columns = MultiIndex.from_tuples(list(itertools.product(range(3), + repeat=3))) + df = DataFrame(np.random.randn(30, 27), columns=columns) + + assert_frame_equal(df.stack(level=[1, 2]), + df.stack(level=1).stack(level=1)) + assert_frame_equal(df.stack(level=[-2, -1]), + df.stack(level=1).stack(level=1)) df_named = df.copy() df_named.columns.set_names(range(3), inplace=True) - assert_frame_equal( - df_named.stack(level=[1, 2]), - df_named.stack(level=1).stack(level=1) - ) + + assert_frame_equal(df_named.stack(level=[1, 2]), + df_named.stack(level=1).stack(level=1)) def test_stack_mixed_levels(self): columns = MultiIndex.from_tuples( diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py index 820076e2c6fd5..b9baae6cbeda7 100644 --- a/pandas/tests/frame/test_timeseries.py +++ b/pandas/tests/frame/test_timeseries.py @@ -120,13 +120,13 @@ def test_pct_change_shift_over_nas(self): def test_shift(self): # naive shift shiftedFrame = self.tsframe.shift(5) - self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) + self.assert_index_equal(shiftedFrame.index, self.tsframe.index) shiftedSeries = self.tsframe['A'].shift(5) assert_series_equal(shiftedFrame['A'], shiftedSeries) shiftedFrame = self.tsframe.shift(-5) - self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) + self.assert_index_equal(shiftedFrame.index, self.tsframe.index) shiftedSeries = self.tsframe['A'].shift(-5) assert_series_equal(shiftedFrame['A'], shiftedSeries) @@ -154,10 +154,10 @@ def test_shift(self): ps = tm.makePeriodFrame() shifted = ps.shift(1) unshifted = shifted.shift(-1) - self.assertTrue(shifted.index.equals(ps.index)) - - tm.assert_dict_equal(unshifted.ix[:, 0].valid(), ps.ix[:, 0], - compare_keys=False) + self.assert_index_equal(shifted.index, ps.index) + self.assert_index_equal(unshifted.index, ps.index) + 
tm.assert_numpy_array_equal(unshifted.ix[:, 0].valid().values, + ps.ix[:-1, 0].values) shifted2 = ps.shift(1, 'B') shifted3 = ps.shift(1, datetools.bday) diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py index 718f47eea3a0f..bacf604c491b1 100644 --- a/pandas/tests/frame/test_to_csv.py +++ b/pandas/tests/frame/test_to_csv.py @@ -14,14 +14,11 @@ import pandas as pd from pandas.util.testing import (assert_almost_equal, - assert_equal, assert_series_equal, assert_frame_equal, ensure_clean, makeCustomDataframe as mkdf, - assertRaisesRegexp) - -from numpy.testing.decorators import slow + assertRaisesRegexp, slow) import pandas.util.testing as tm from pandas.tests.frame.common import TestData @@ -453,7 +450,7 @@ def test_to_csv_with_mix_columns(self): df = DataFrame({0: ['a', 'b', 'c'], 1: ['aa', 'bb', 'cc']}) df['test'] = 'txt' - assert_equal(df.to_csv(), df.to_csv(columns=[0, 1, 'test'])) + self.assertEqual(df.to_csv(), df.to_csv(columns=[0, 1, 'test'])) def test_to_csv_headers(self): # GH6186, the presence or absence of `index` incorrectly @@ -508,8 +505,7 @@ def test_to_csv_multiindex(self): # do not load index tsframe.to_csv(path) recons = DataFrame.from_csv(path, index_col=None) - np.testing.assert_equal( - len(recons.columns), len(tsframe.columns) + 2) + self.assertEqual(len(recons.columns), len(tsframe.columns) + 2) # no index tsframe.to_csv(path, index=False) @@ -630,7 +626,7 @@ def _make_frame(names=None): exp = tsframe[:0] exp.index = [] - self.assertTrue(recons.columns.equals(exp.columns)) + self.assert_index_equal(recons.columns, exp.columns) self.assertEqual(len(recons), 0) def test_to_csv_float32_nanrep(self): diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py index 8ea87e9d69c92..e342eee2aabbb 100644 --- a/pandas/tests/indexes/common.py +++ b/pandas/tests/indexes/common.py @@ -7,7 +7,7 @@ from pandas import (Series, Index, Float64Index, Int64Index, RangeIndex, MultiIndex, CategoricalIndex, 
DatetimeIndex, - TimedeltaIndex, PeriodIndex) + TimedeltaIndex, PeriodIndex, notnull) from pandas.util.testing import assertRaisesRegexp import pandas.util.testing as tm @@ -363,6 +363,18 @@ def test_numpy_repeat(self): tm.assertRaisesRegexp(ValueError, msg, np.repeat, i, rep, axis=0) + def test_where(self): + i = self.create_index() + result = i.where(notnull(i)) + expected = i + tm.assert_index_equal(result, expected) + + i2 = i.copy() + i2 = pd.Index([np.nan, np.nan] + i[2:].tolist()) + result = i.where(notnull(i2)) + expected = i2 + tm.assert_index_equal(result, expected) + def test_setops_errorcases(self): for name, idx in compat.iteritems(self.indices): # # non-iterable input @@ -595,7 +607,7 @@ def test_equals_op(self): # assuming the 2nd to last item is unique in the data item = index_a[-2] tm.assert_numpy_array_equal(index_a == item, expected3) - tm.assert_numpy_array_equal(series_a == item, expected3) + tm.assert_series_equal(series_a == item, Series(expected3)) def test_numpy_ufuncs(self): # test ufuncs of numpy 1.9.2. 
see: diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py index 1591df5f1af2a..aa007c039f8ee 100644 --- a/pandas/tests/indexes/test_base.py +++ b/pandas/tests/indexes/test_base.py @@ -74,14 +74,14 @@ def test_constructor(self): arr = np.array(self.strIndex) index = Index(arr) tm.assert_contains_all(arr, index) - tm.assert_numpy_array_equal(self.strIndex, index) + tm.assert_index_equal(self.strIndex, index) # copy arr = np.array(self.strIndex) index = Index(arr, copy=True, name='name') tm.assertIsInstance(index, Index) self.assertEqual(index.name, 'name') - tm.assert_numpy_array_equal(arr, index) + tm.assert_numpy_array_equal(arr, index.values) arr[0] = "SOMEBIGLONGSTRING" self.assertNotEqual(index[0], "SOMEBIGLONGSTRING") @@ -155,30 +155,28 @@ def test_constructor_from_series(self): s = Series([Timestamp('20110101'), Timestamp('20120101'), Timestamp( '20130101')]) result = Index(s) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) result = DatetimeIndex(s) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) # GH 6273 # create from a series, passing a freq s = Series(pd.to_datetime(['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990'])) result = DatetimeIndex(s, freq='MS') - expected = DatetimeIndex( - ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990' - ], freq='MS') - self.assertTrue(result.equals(expected)) + expected = DatetimeIndex(['1-1-1990', '2-1-1990', '3-1-1990', + '4-1-1990', '5-1-1990'], freq='MS') + self.assert_index_equal(result, expected) df = pd.DataFrame(np.random.rand(5, 3)) df['date'] = ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990'] result = DatetimeIndex(df['date'], freq='MS') - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) self.assertEqual(df['date'].dtype, object) - exp = pd.Series( - ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990' - ], name='date') + exp = 
pd.Series(['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', + '5-1-1990'], name='date') self.assert_series_equal(df['date'], exp) # GH 6274 @@ -202,26 +200,26 @@ def __array__(self, dtype=None): date_range('2000-01-01', periods=3).values]: expected = pd.Index(array) result = pd.Index(ArrayLike(array)) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) def test_index_ctor_infer_periodindex(self): xp = period_range('2012-1-1', freq='M', periods=3) rs = Index(xp) - tm.assert_numpy_array_equal(rs, xp) + tm.assert_index_equal(rs, xp) tm.assertIsInstance(rs, PeriodIndex) def test_constructor_simple_new(self): idx = Index([1, 2, 3, 4, 5], name='int') result = idx._simple_new(idx, 'int') - self.assertTrue(result.equals(idx)) + self.assert_index_equal(result, idx) idx = Index([1.1, np.nan, 2.2, 3.0], name='float') result = idx._simple_new(idx, 'float') - self.assertTrue(result.equals(idx)) + self.assert_index_equal(result, idx) idx = Index(['A', 'B', 'C', np.nan], name='obj') result = idx._simple_new(idx, 'obj') - self.assertTrue(result.equals(idx)) + self.assert_index_equal(result, idx) def test_constructor_dtypes(self): @@ -338,31 +336,31 @@ def test_insert(self): result = Index(['b', 'c', 'd']) # test 0th element - self.assertTrue(Index(['a', 'b', 'c', 'd']).equals(result.insert(0, - 'a'))) + self.assert_index_equal(Index(['a', 'b', 'c', 'd']), + result.insert(0, 'a')) # test Nth element that follows Python list behavior - self.assertTrue(Index(['b', 'c', 'e', 'd']).equals(result.insert(-1, - 'e'))) + self.assert_index_equal(Index(['b', 'c', 'e', 'd']), + result.insert(-1, 'e')) # test loc +/- neq (0, -1) - self.assertTrue(result.insert(1, 'z').equals(result.insert(-2, 'z'))) + self.assert_index_equal(result.insert(1, 'z'), result.insert(-2, 'z')) # test empty null_index = Index([]) - self.assertTrue(Index(['a']).equals(null_index.insert(0, 'a'))) + self.assert_index_equal(Index(['a']), null_index.insert(0, 'a')) def test_delete(self): 
        idx = Index(['a', 'b', 'c', 'd'], name='idx')

         expected = Index(['b', 'c', 'd'], name='idx')
         result = idx.delete(0)
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)
         self.assertEqual(result.name, expected.name)

         expected = Index(['a', 'b', 'c'], name='idx')
         result = idx.delete(-1)
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)
         self.assertEqual(result.name, expected.name)

         with tm.assertRaises((IndexError, ValueError)):
@@ -525,14 +523,14 @@ def test_intersection(self):
         idx2 = Index([3, 4, 5, 6, 7], name='idx')
         expected2 = Index([3, 4, 5], name='idx')
         result2 = idx1.intersection(idx2)
-        self.assertTrue(result2.equals(expected2))
+        self.assert_index_equal(result2, expected2)
         self.assertEqual(result2.name, expected2.name)

         # if target name is different, it will be reset
         idx3 = Index([3, 4, 5, 6, 7], name='other')
         expected3 = Index([3, 4, 5], name=None)
         result3 = idx1.intersection(idx3)
-        self.assertTrue(result3.equals(expected3))
+        self.assert_index_equal(result3, expected3)
         self.assertEqual(result3.name, expected3.name)

         # non monotonic
@@ -552,7 +550,7 @@ def test_intersection(self):
         idx2 = Index(['B', 'D'])
         expected = Index(['B'], dtype='object')
         result = idx1.intersection(idx2)
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)

         # preserve names
         first = self.strIndex[5:20]
@@ -677,11 +675,11 @@ def test_append_multiple(self):
         foos = [index[:2], index[2:4], index[4:]]

         result = foos[0].append(foos[1:])
-        self.assertTrue(result.equals(index))
+        self.assert_index_equal(result, index)

         # empty
         result = index.append([])
-        self.assertTrue(result.equals(index))
+        self.assert_index_equal(result, index)

     def test_append_empty_preserve_name(self):
         left = Index([], name='foo')
@@ -883,10 +881,10 @@ def test_get_indexer(self):
         idx2 = Index([2, 4, 6])

         r1 = idx1.get_indexer(idx2)
-        assert_almost_equal(r1, [1, 3, -1])
+        assert_almost_equal(r1, np.array([1, 3, -1]))

         r1 =
idx2.get_indexer(idx1, method='pad') - e1 = [-1, 0, 0, 1, 1] + e1 = np.array([-1, 0, 0, 1, 1]) assert_almost_equal(r1, e1) r2 = idx2.get_indexer(idx1[::-1], method='pad') @@ -896,7 +894,7 @@ def test_get_indexer(self): assert_almost_equal(r1, rffill1) r1 = idx2.get_indexer(idx1, method='backfill') - e1 = [0, 0, 1, 1, 2] + e1 = np.array([0, 0, 1, 1, 2]) assert_almost_equal(r1, e1) rbfill1 = idx2.get_indexer(idx1, method='bfill') @@ -921,25 +919,25 @@ def test_get_indexer_nearest(self): all_methods = ['pad', 'backfill', 'nearest'] for method in all_methods: actual = idx.get_indexer([0, 5, 9], method=method) - tm.assert_numpy_array_equal(actual, [0, 5, 9]) + tm.assert_numpy_array_equal(actual, np.array([0, 5, 9])) actual = idx.get_indexer([0, 5, 9], method=method, tolerance=0) - tm.assert_numpy_array_equal(actual, [0, 5, 9]) + tm.assert_numpy_array_equal(actual, np.array([0, 5, 9])) - for method, expected in zip(all_methods, [[0, 1, 8], [1, 2, 9], [0, 2, - 9]]): + for method, expected in zip(all_methods, [[0, 1, 8], [1, 2, 9], + [0, 2, 9]]): actual = idx.get_indexer([0.2, 1.8, 8.5], method=method) - tm.assert_numpy_array_equal(actual, expected) + tm.assert_numpy_array_equal(actual, np.array(expected)) actual = idx.get_indexer([0.2, 1.8, 8.5], method=method, tolerance=1) - tm.assert_numpy_array_equal(actual, expected) + tm.assert_numpy_array_equal(actual, np.array(expected)) for method, expected in zip(all_methods, [[0, -1, -1], [-1, 2, -1], [0, 2, -1]]): actual = idx.get_indexer([0.2, 1.8, 8.5], method=method, tolerance=0.2) - tm.assert_numpy_array_equal(actual, expected) + tm.assert_numpy_array_equal(actual, np.array(expected)) with tm.assertRaisesRegexp(ValueError, 'limit argument'): idx.get_indexer([1, 0], method='nearest', limit=1) @@ -950,22 +948,22 @@ def test_get_indexer_nearest_decreasing(self): all_methods = ['pad', 'backfill', 'nearest'] for method in all_methods: actual = idx.get_indexer([0, 5, 9], method=method) - tm.assert_numpy_array_equal(actual, [9, 4, 
0]) + tm.assert_numpy_array_equal(actual, np.array([9, 4, 0])) - for method, expected in zip(all_methods, [[8, 7, 0], [9, 8, 1], [9, 7, - 0]]): + for method, expected in zip(all_methods, [[8, 7, 0], [9, 8, 1], + [9, 7, 0]]): actual = idx.get_indexer([0.2, 1.8, 8.5], method=method) - tm.assert_numpy_array_equal(actual, expected) + tm.assert_numpy_array_equal(actual, np.array(expected)) def test_get_indexer_strings(self): idx = pd.Index(['b', 'c']) actual = idx.get_indexer(['a', 'b', 'c', 'd'], method='pad') - expected = [-1, 0, 1, 1] + expected = np.array([-1, 0, 1, 1]) tm.assert_numpy_array_equal(actual, expected) actual = idx.get_indexer(['a', 'b', 'c', 'd'], method='backfill') - expected = [0, 0, 1, -1] + expected = np.array([0, 0, 1, -1]) tm.assert_numpy_array_equal(actual, expected) with tm.assertRaises(TypeError): @@ -1086,7 +1084,7 @@ def check_slice(in_slice, expected): in_slice.step) result = idx[s_start:s_stop:in_slice.step] expected = pd.Index(list(expected)) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) for in_slice, expected in [ (SLC[::-1], 'yxdcb'), (SLC['b':'y':-1], ''), @@ -1108,7 +1106,7 @@ def test_drop(self): drop = self.strIndex[lrange(5, 10)] dropped = self.strIndex.drop(drop) expected = self.strIndex[lrange(5) + lrange(10, n)] - self.assertTrue(dropped.equals(expected)) + self.assert_index_equal(dropped, expected) self.assertRaises(ValueError, self.strIndex.drop, ['foo', 'bar']) self.assertRaises(ValueError, self.strIndex.drop, ['1', 'bar']) @@ -1161,13 +1159,13 @@ def test_tuple_union_bug(self): # needs to be 1d like idx1 and idx2 expected = idx1[:4] # pandas.Index(sorted(set(idx1) & set(idx2))) self.assertEqual(int_idx.ndim, 1) - self.assertTrue(int_idx.equals(expected)) + self.assert_index_equal(int_idx, expected) # union broken union_idx = idx1.union(idx2) expected = idx2 self.assertEqual(union_idx.ndim, 1) - self.assertTrue(union_idx.equals(expected)) + self.assert_index_equal(union_idx, expected) 
def test_is_monotonic_incomparable(self): index = Index([5, datetime.now(), 7]) @@ -1202,21 +1200,22 @@ def test_isin(self): self.assertEqual(result.dtype, np.bool_) def test_isin_nan(self): - tm.assert_numpy_array_equal( - Index(['a', np.nan]).isin([np.nan]), [False, True]) - tm.assert_numpy_array_equal( - Index(['a', pd.NaT]).isin([pd.NaT]), [False, True]) - tm.assert_numpy_array_equal( - Index(['a', np.nan]).isin([float('nan')]), [False, False]) - tm.assert_numpy_array_equal( - Index(['a', np.nan]).isin([pd.NaT]), [False, False]) + tm.assert_numpy_array_equal(Index(['a', np.nan]).isin([np.nan]), + np.array([False, True])) + tm.assert_numpy_array_equal(Index(['a', pd.NaT]).isin([pd.NaT]), + np.array([False, True])) + tm.assert_numpy_array_equal(Index(['a', np.nan]).isin([float('nan')]), + np.array([False, False])) + tm.assert_numpy_array_equal(Index(['a', np.nan]).isin([pd.NaT]), + np.array([False, False])) # Float64Index overrides isin, so must be checked separately + tm.assert_numpy_array_equal(Float64Index([1.0, np.nan]).isin([np.nan]), + np.array([False, True])) tm.assert_numpy_array_equal( - Float64Index([1.0, np.nan]).isin([np.nan]), [False, True]) - tm.assert_numpy_array_equal( - Float64Index([1.0, np.nan]).isin([float('nan')]), [False, True]) - tm.assert_numpy_array_equal( - Float64Index([1.0, np.nan]).isin([pd.NaT]), [False, True]) + Float64Index([1.0, np.nan]).isin([float('nan')]), + np.array([False, True])) + tm.assert_numpy_array_equal(Float64Index([1.0, np.nan]).isin([pd.NaT]), + np.array([False, True])) def test_isin_level_kwarg(self): def check_idx(idx): @@ -1255,7 +1254,7 @@ def test_boolean_cmp(self): def test_get_level_values(self): result = self.strIndex.get_level_values(0) - self.assertTrue(result.equals(self.strIndex)) + self.assert_index_equal(result, self.strIndex) def test_slice_keep_name(self): idx = Index(['a', 'b'], name='asdf') @@ -1619,4 +1618,4 @@ def test_string_index_repr(self): def test_get_combined_index(): from pandas.core.index 
import _get_combined_index result = _get_combined_index([]) - assert (result.equals(Index([]))) + tm.assert_index_equal(result, Index([])) diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py index 66ddcdebff83b..c64b1e9fc4af8 100644 --- a/pandas/tests/indexes/test_category.py +++ b/pandas/tests/indexes/test_category.py @@ -11,7 +11,7 @@ import numpy as np -from pandas import Categorical, compat +from pandas import Categorical, compat, notnull from pandas.util.testing import assert_almost_equal import pandas.core.config as cf import pandas as pd @@ -48,46 +48,48 @@ def test_construction(self): # empty result = CategoricalIndex(categories=categories) - self.assertTrue(result.categories.equals(Index(categories))) + self.assert_index_equal(result.categories, Index(categories)) tm.assert_numpy_array_equal(result.codes, np.array([], dtype='int8')) self.assertFalse(result.ordered) # passing categories result = CategoricalIndex(list('aabbca'), categories=categories) - self.assertTrue(result.categories.equals(Index(categories))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, 2, 0], dtype='int8')) + self.assert_index_equal(result.categories, Index(categories)) + tm.assert_numpy_array_equal(result.codes, + np.array([0, 0, 1, 1, 2, 0], dtype='int8')) c = pd.Categorical(list('aabbca')) result = CategoricalIndex(c) - self.assertTrue(result.categories.equals(Index(list('abc')))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, 2, 0], dtype='int8')) + self.assert_index_equal(result.categories, Index(list('abc'))) + tm.assert_numpy_array_equal(result.codes, + np.array([0, 0, 1, 1, 2, 0], dtype='int8')) self.assertFalse(result.ordered) result = CategoricalIndex(c, categories=categories) - self.assertTrue(result.categories.equals(Index(categories))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, 2, 0], dtype='int8')) + self.assert_index_equal(result.categories, Index(categories)) + 
tm.assert_numpy_array_equal(result.codes, + np.array([0, 0, 1, 1, 2, 0], dtype='int8')) self.assertFalse(result.ordered) ci = CategoricalIndex(c, categories=list('abcd')) result = CategoricalIndex(ci) - self.assertTrue(result.categories.equals(Index(categories))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, 2, 0], dtype='int8')) + self.assert_index_equal(result.categories, Index(categories)) + tm.assert_numpy_array_equal(result.codes, + np.array([0, 0, 1, 1, 2, 0], dtype='int8')) self.assertFalse(result.ordered) result = CategoricalIndex(ci, categories=list('ab')) - self.assertTrue(result.categories.equals(Index(list('ab')))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, -1, 0], dtype='int8')) + self.assert_index_equal(result.categories, Index(list('ab'))) + tm.assert_numpy_array_equal(result.codes, + np.array([0, 0, 1, 1, -1, 0], + dtype='int8')) self.assertFalse(result.ordered) result = CategoricalIndex(ci, categories=list('ab'), ordered=True) - self.assertTrue(result.categories.equals(Index(list('ab')))) - tm.assert_numpy_array_equal(result.codes, np.array( - [0, 0, 1, 1, -1, 0], dtype='int8')) + self.assert_index_equal(result.categories, Index(list('ab'))) + tm.assert_numpy_array_equal(result.codes, + np.array([0, 0, 1, 1, -1, 0], + dtype='int8')) self.assertTrue(result.ordered) # turn me to an Index @@ -230,6 +232,19 @@ def f(x): ordered=False) tm.assert_categorical_equal(result, exp) + def test_where(self): + i = self.create_index() + result = i.where(notnull(i)) + expected = i + tm.assert_index_equal(result, expected) + + i2 = i.copy() + i2 = pd.CategoricalIndex([np.nan, np.nan] + i[2:].tolist(), + categories=i.categories) + result = i.where(notnull(i2)) + expected = i2 + tm.assert_index_equal(result, expected) + def test_append(self): ci = self.create_index() @@ -310,7 +325,7 @@ def test_astype(self): tm.assert_index_equal(result, ci, exact=True) result = ci.astype(object) - 
self.assertTrue(result.equals(Index(np.array(ci)))) + self.assert_index_equal(result, Index(np.array(ci))) # this IS equal, but not the same class self.assertTrue(result.equals(ci)) @@ -339,7 +354,7 @@ def test_reindexing(self): expected = oidx.get_indexer_non_unique(finder)[0] actual = ci.get_indexer(finder) - tm.assert_numpy_array_equal(expected, actual) + tm.assert_numpy_array_equal(expected.values, actual, check_dtype=False) def test_reindex_dtype(self): c = CategoricalIndex(['a', 'b', 'c', 'a']) @@ -388,7 +403,7 @@ def test_get_indexer(self): for indexer in [idx2, list('abf'), Index(list('abf'))]: r1 = idx1.get_indexer(idx2) - assert_almost_equal(r1, [0, 1, 2, -1]) + assert_almost_equal(r1, np.array([0, 1, 2, -1])) self.assertRaises(NotImplementedError, lambda: idx2.get_indexer(idx1, method='pad')) diff --git a/pandas/tests/indexes/test_datetimelike.py b/pandas/tests/indexes/test_datetimelike.py index b0ca07e84f7ce..4a664ed3542d7 100644 --- a/pandas/tests/indexes/test_datetimelike.py +++ b/pandas/tests/indexes/test_datetimelike.py @@ -4,9 +4,10 @@ import numpy as np -from pandas import (date_range, period_range, - Series, Index, DatetimeIndex, - TimedeltaIndex, PeriodIndex) +from pandas import (DatetimeIndex, Float64Index, Index, Int64Index, + NaT, Period, PeriodIndex, Series, Timedelta, + TimedeltaIndex, date_range, period_range, + timedelta_range, notnull) import pandas.util.testing as tm @@ -337,6 +338,150 @@ def test_construction_dti_with_mixed_timezones(self): Timestamp('2011-01-02 10:00', tz='US/Eastern')], tz='US/Eastern', name='idx') + def test_astype(self): + # GH 13149, GH 13209 + idx = DatetimeIndex(['2016-05-16', 'NaT', NaT, np.NaN]) + + result = idx.astype(object) + expected = Index([Timestamp('2016-05-16')] + [NaT] * 3, dtype=object) + tm.assert_index_equal(result, expected) + + result = idx.astype(int) + expected = Int64Index([1463356800000000000] + + [-9223372036854775808] * 3, dtype=np.int64) + tm.assert_index_equal(result, expected) + + rng = 
date_range('1/1/2000', periods=10) + result = rng.astype('i8') + self.assert_index_equal(result, Index(rng.asi8)) + self.assert_numpy_array_equal(result.values, rng.asi8) + + def test_astype_with_tz(self): + + # with tz + rng = date_range('1/1/2000', periods=10, tz='US/Eastern') + result = rng.astype('datetime64[ns]') + expected = (date_range('1/1/2000', periods=10, + tz='US/Eastern') + .tz_convert('UTC').tz_localize(None)) + tm.assert_index_equal(result, expected) + + # BUG#10442 : testing astype(str) is correct for Series/DatetimeIndex + result = pd.Series(pd.date_range('2012-01-01', periods=3)).astype(str) + expected = pd.Series( + ['2012-01-01', '2012-01-02', '2012-01-03'], dtype=object) + tm.assert_series_equal(result, expected) + + result = Series(pd.date_range('2012-01-01', periods=3, + tz='US/Eastern')).astype(str) + expected = Series(['2012-01-01 00:00:00-05:00', + '2012-01-02 00:00:00-05:00', + '2012-01-03 00:00:00-05:00'], + dtype=object) + tm.assert_series_equal(result, expected) + + def test_astype_str_compat(self): + # GH 13149, GH 13209 + # verify that we are returing NaT as a string (and not unicode) + + idx = DatetimeIndex(['2016-05-16', 'NaT', NaT, np.NaN]) + result = idx.astype(str) + expected = Index(['2016-05-16', 'NaT', 'NaT', 'NaT'], dtype=object) + tm.assert_index_equal(result, expected) + + def test_astype_str(self): + # test astype string - #10442 + result = date_range('2012-01-01', periods=4, + name='test_name').astype(str) + expected = Index(['2012-01-01', '2012-01-02', '2012-01-03', + '2012-01-04'], name='test_name', dtype=object) + tm.assert_index_equal(result, expected) + + # test astype string with tz and name + result = date_range('2012-01-01', periods=3, name='test_name', + tz='US/Eastern').astype(str) + expected = Index(['2012-01-01 00:00:00-05:00', + '2012-01-02 00:00:00-05:00', + '2012-01-03 00:00:00-05:00'], + name='test_name', dtype=object) + tm.assert_index_equal(result, expected) + + # test astype string with freqH and name 
+ result = date_range('1/1/2011', periods=3, freq='H', + name='test_name').astype(str) + expected = Index(['2011-01-01 00:00:00', '2011-01-01 01:00:00', + '2011-01-01 02:00:00'], + name='test_name', dtype=object) + tm.assert_index_equal(result, expected) + + # test astype string with freqH and timezone + result = date_range('3/6/2012 00:00', periods=2, freq='H', + tz='Europe/London', name='test_name').astype(str) + expected = Index(['2012-03-06 00:00:00+00:00', + '2012-03-06 01:00:00+00:00'], + dtype=object, name='test_name') + tm.assert_index_equal(result, expected) + + def test_astype_datetime64(self): + # GH 13149, GH 13209 + idx = DatetimeIndex(['2016-05-16', 'NaT', NaT, np.NaN]) + + result = idx.astype('datetime64[ns]') + tm.assert_index_equal(result, idx) + self.assertFalse(result is idx) + + result = idx.astype('datetime64[ns]', copy=False) + tm.assert_index_equal(result, idx) + self.assertTrue(result is idx) + + idx_tz = DatetimeIndex(['2016-05-16', 'NaT', NaT, np.NaN], tz='EST') + result = idx_tz.astype('datetime64[ns]') + expected = DatetimeIndex(['2016-05-16 05:00:00', 'NaT', 'NaT', 'NaT'], + dtype='datetime64[ns]') + tm.assert_index_equal(result, expected) + + def test_astype_raises(self): + # GH 13149, GH 13209 + idx = DatetimeIndex(['2016-05-16', 'NaT', NaT, np.NaN]) + + self.assertRaises(ValueError, idx.astype, float) + self.assertRaises(ValueError, idx.astype, 'timedelta64') + self.assertRaises(ValueError, idx.astype, 'timedelta64[ns]') + self.assertRaises(ValueError, idx.astype, 'datetime64') + self.assertRaises(ValueError, idx.astype, 'datetime64[D]') + + def test_where_other(self): + + # other is ndarray or Index + i = pd.date_range('20130101', periods=3, tz='US/Eastern') + + for arr in [np.nan, pd.NaT]: + result = i.where(notnull(i), other=np.nan) + expected = i + tm.assert_index_equal(result, expected) + + i2 = i.copy() + i2 = Index([pd.NaT, pd.NaT] + i[2:].tolist()) + result = i.where(notnull(i2), i2) + tm.assert_index_equal(result, i2) + + i2 
= i.copy() + i2 = Index([pd.NaT, pd.NaT] + i[2:].tolist()) + result = i.where(notnull(i2), i2.values) + tm.assert_index_equal(result, i2) + + def test_where_tz(self): + i = pd.date_range('20130101', periods=3, tz='US/Eastern') + result = i.where(notnull(i)) + expected = i + tm.assert_index_equal(result, expected) + + i2 = i.copy() + i2 = Index([pd.NaT, pd.NaT] + i[2:].tolist()) + result = i.where(notnull(i2)) + expected = i2 + tm.assert_index_equal(result, expected) + def test_get_loc(self): idx = pd.date_range('2000-01-01', periods=3) @@ -388,26 +533,29 @@ def test_get_loc(self): # time indexing idx = pd.date_range('2000-01-01', periods=24, freq='H') - tm.assert_numpy_array_equal(idx.get_loc(time(12)), [12]) - tm.assert_numpy_array_equal(idx.get_loc(time(12, 30)), []) + tm.assert_numpy_array_equal(idx.get_loc(time(12)), + np.array([12], dtype=np.int64)) + tm.assert_numpy_array_equal(idx.get_loc(time(12, 30)), + np.array([], dtype=np.int64)) with tm.assertRaises(NotImplementedError): idx.get_loc(time(12, 30), method='pad') def test_get_indexer(self): idx = pd.date_range('2000-01-01', periods=3) - tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) + tm.assert_numpy_array_equal(idx.get_indexer(idx), np.array([0, 1, 2])) target = idx[0] + pd.to_timedelta(['-1 hour', '12 hours', '1 day 1 hour']) - tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'backfill'), [0, 1, 2]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest'), [0, 1, 1]) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), + np.array([-1, 0, 1])) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'), + np.array([0, 1, 2])) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'), + np.array([0, 1, 1])) tm.assert_numpy_array_equal( idx.get_indexer(target, 'nearest', tolerance=pd.Timedelta('1 hour')), - [0, -1, 1]) + np.array([0, -1, 1])) with tm.assertRaises(ValueError): 
idx.get_indexer(idx[[0]], method='nearest', tolerance='foo') @@ -417,7 +565,7 @@ def test_roundtrip_pickle_with_tz(self): # round-trip of timezone index = date_range('20130101', periods=3, tz='US/Eastern', name='foo') unpickled = self.round_trip_pickle(index) - self.assertTrue(index.equals(unpickled)) + self.assert_index_equal(index, unpickled) def test_reindex_preserves_tz_if_target_is_empty_list_or_array(self): # GH7774 @@ -585,6 +733,43 @@ def setUp(self): def create_index(self): return period_range('20130101', periods=5, freq='D') + def test_astype(self): + # GH 13149, GH 13209 + idx = PeriodIndex(['2016-05-16', 'NaT', NaT, np.NaN], freq='D') + + result = idx.astype(object) + expected = Index([Period('2016-05-16', freq='D')] + + [Period(NaT, freq='D')] * 3, dtype='object') + # Hack because of lack of support for Period null checking (GH12759) + tm.assert_index_equal(result[:1], expected[:1]) + result_arr = np.asarray([p.ordinal for p in result], dtype=np.int64) + expected_arr = np.asarray([p.ordinal for p in expected], + dtype=np.int64) + tm.assert_numpy_array_equal(result_arr, expected_arr) + # TODO: When GH12759 is resolved, change the above hack to: + # tm.assert_index_equal(result, expected) # now, it raises. 
+ + result = idx.astype(int) + expected = Int64Index([16937] + [-9223372036854775808] * 3, + dtype=np.int64) + tm.assert_index_equal(result, expected) + + idx = period_range('1990', '2009', freq='A') + result = idx.astype('i8') + self.assert_index_equal(result, Index(idx.asi8)) + self.assert_numpy_array_equal(result.values, idx.values) + + def test_astype_raises(self): + # GH 13149, GH 13209 + idx = PeriodIndex(['2016-05-16', 'NaT', NaT, np.NaN], freq='D') + + self.assertRaises(ValueError, idx.astype, str) + self.assertRaises(ValueError, idx.astype, float) + self.assertRaises(ValueError, idx.astype, 'timedelta64') + self.assertRaises(ValueError, idx.astype, 'timedelta64[ns]') + self.assertRaises(ValueError, idx.astype, 'datetime64') + self.assertRaises(ValueError, idx.astype, 'datetime64[ns]') + def test_shift(self): # test shift for PeriodIndex @@ -628,27 +813,63 @@ def test_get_loc(self): with tm.assertRaises(KeyError): idx.get_loc('2000-01-10', method='nearest', tolerance='1 day') + def test_where(self): + i = self.create_index() + result = i.where(notnull(i)) + expected = i + tm.assert_index_equal(result, expected) + + i2 = i.copy() + i2 = pd.PeriodIndex([pd.NaT, pd.NaT] + i[2:].tolist(), + freq='D') + result = i.where(notnull(i2)) + expected = i2 + tm.assert_index_equal(result, expected) + + def test_where_other(self): + + i = self.create_index() + for arr in [np.nan, pd.NaT]: + result = i.where(notnull(i), other=np.nan) + expected = i + tm.assert_index_equal(result, expected) + + i2 = i.copy() + i2 = pd.PeriodIndex([pd.NaT, pd.NaT] + i[2:].tolist(), + freq='D') + result = i.where(notnull(i2), i2) + tm.assert_index_equal(result, i2) + + i2 = i.copy() + i2 = pd.PeriodIndex([pd.NaT, pd.NaT] + i[2:].tolist(), + freq='D') + result = i.where(notnull(i2), i2.values) + tm.assert_index_equal(result, i2) + def test_get_indexer(self): idx = pd.period_range('2000-01-01', periods=3).asfreq('H', how='start') - tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) + 
tm.assert_numpy_array_equal(idx.get_indexer(idx), + np.array([0, 1, 2], dtype=np.int_)) target = pd.PeriodIndex(['1999-12-31T23', '2000-01-01T12', '2000-01-02T01'], freq='H') - tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'backfill'), [0, 1, 2]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest'), [0, 1, 1]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest', tolerance='1 hour'), - [0, -1, 1]) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), + np.array([-1, 0, 1], dtype=np.int_)) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'), + np.array([0, 1, 2], dtype=np.int_)) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'), + np.array([0, 1, 1], dtype=np.int_)) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest', + tolerance='1 hour'), + np.array([0, -1, 1], dtype=np.int_)) msg = 'Input has different freq from PeriodIndex\\(freq=H\\)' with self.assertRaisesRegexp(ValueError, msg): idx.get_indexer(target, 'nearest', tolerance='1 minute') - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest', tolerance='1 day'), [0, 1, 1]) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest', + tolerance='1 day'), + np.array([0, 1, 1], dtype=np.int_)) def test_repeat(self): # GH10183 @@ -726,6 +947,51 @@ def test_shift(self): '10 days 01:00:03'], freq='D') self.assert_index_equal(result, expected) + def test_astype(self): + # GH 13149, GH 13209 + idx = TimedeltaIndex([1e14, 'NaT', pd.NaT, np.NaN]) + + result = idx.astype(object) + expected = Index([Timedelta('1 days 03:46:40')] + [pd.NaT] * 3, + dtype=object) + tm.assert_index_equal(result, expected) + + result = idx.astype(int) + expected = Int64Index([100000000000000] + [-9223372036854775808] * 3, + dtype=np.int64) + tm.assert_index_equal(result, expected) + + rng = timedelta_range('1 days', periods=10) + + result = rng.astype('i8') + 
self.assert_index_equal(result, Index(rng.asi8)) + self.assert_numpy_array_equal(rng.asi8, result.values) + + def test_astype_timedelta64(self): + # GH 13149, GH 13209 + idx = TimedeltaIndex([1e14, 'NaT', pd.NaT, np.NaN]) + + result = idx.astype('timedelta64') + expected = Float64Index([1e+14] + [np.NaN] * 3, dtype='float64') + tm.assert_index_equal(result, expected) + + result = idx.astype('timedelta64[ns]') + tm.assert_index_equal(result, idx) + self.assertFalse(result is idx) + + result = idx.astype('timedelta64[ns]', copy=False) + tm.assert_index_equal(result, idx) + self.assertTrue(result is idx) + + def test_astype_raises(self): + # GH 13149, GH 13209 + idx = TimedeltaIndex([1e14, 'NaT', pd.NaT, np.NaN]) + + self.assertRaises(ValueError, idx.astype, float) + self.assertRaises(ValueError, idx.astype, str) + self.assertRaises(ValueError, idx.astype, 'datetime64') + self.assertRaises(ValueError, idx.astype, 'datetime64[ns]') + def test_get_loc(self): idx = pd.to_timedelta(['0 days', '1 days', '2 days']) @@ -748,18 +1014,20 @@ def test_get_loc(self): def test_get_indexer(self): idx = pd.to_timedelta(['0 days', '1 days', '2 days']) - tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) + tm.assert_numpy_array_equal(idx.get_indexer(idx), + np.array([0, 1, 2], dtype=np.int_)) target = pd.to_timedelta(['-1 hour', '12 hours', '1 day 1 hour']) - tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'backfill'), [0, 1, 2]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest'), [0, 1, 1]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest', - tolerance=pd.Timedelta('1 hour')), - [0, -1, 1]) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), + np.array([-1, 0, 1], dtype=np.int_)) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'), + np.array([0, 1, 2], dtype=np.int_)) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'), + 
np.array([0, 1, 1], dtype=np.int_)) + + res = idx.get_indexer(target, 'nearest', + tolerance=pd.Timedelta('1 hour')) + tm.assert_numpy_array_equal(res, np.array([0, -1, 1], dtype=np.int_)) def test_numeric_compat(self): diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py index b8804daa6cf19..bec52f5f47b09 100644 --- a/pandas/tests/indexes/test_multi.py +++ b/pandas/tests/indexes/test_multi.py @@ -78,6 +78,14 @@ def test_labels_dtypes(self): self.assertTrue((i.labels[0] >= 0).all()) self.assertTrue((i.labels[1] >= 0).all()) + def test_where(self): + i = MultiIndex.from_tuples([('A', 1), ('A', 2)]) + + def f(): + i.where(True) + + self.assertRaises(NotImplementedError, f) + def test_repeat(self): reps = 2 numbers = [1, 2, 3] @@ -636,7 +644,7 @@ def test_from_product(self): ('buz', 'c')] expected = MultiIndex.from_tuples(tuples, names=names) - tm.assert_numpy_array_equal(result, expected) + tm.assert_index_equal(result, expected) self.assertEqual(result.names, names) def test_from_product_datetimeindex(self): @@ -673,14 +681,14 @@ def test_append(self): def test_get_level_values(self): result = self.index.get_level_values(0) - expected = ['foo', 'foo', 'bar', 'baz', 'qux', 'qux'] - tm.assert_numpy_array_equal(result, expected) - + expected = Index(['foo', 'foo', 'bar', 'baz', 'qux', 'qux'], + name='first') + tm.assert_index_equal(result, expected) self.assertEqual(result.name, 'first') result = self.index.get_level_values('first') expected = self.index.get_level_values(0) - tm.assert_numpy_array_equal(result, expected) + tm.assert_index_equal(result, expected) # GH 10460 index = MultiIndex(levels=[CategoricalIndex( @@ -695,19 +703,19 @@ def test_get_level_values_na(self): arrays = [['a', 'b', 'b'], [1, np.nan, 2]] index = pd.MultiIndex.from_arrays(arrays) values = index.get_level_values(1) - expected = [1, np.nan, 2] + expected = np.array([1, np.nan, 2]) tm.assert_numpy_array_equal(values.values.astype(float), expected) arrays = [['a', 
'b', 'b'], [np.nan, np.nan, 2]] index = pd.MultiIndex.from_arrays(arrays) values = index.get_level_values(1) - expected = [np.nan, np.nan, 2] + expected = np.array([np.nan, np.nan, 2]) tm.assert_numpy_array_equal(values.values.astype(float), expected) arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]] index = pd.MultiIndex.from_arrays(arrays) values = index.get_level_values(0) - expected = [np.nan, np.nan, np.nan] + expected = np.array([np.nan, np.nan, np.nan]) tm.assert_numpy_array_equal(values.values.astype(float), expected) values = index.get_level_values(1) expected = np.array(['a', np.nan, 1], dtype=object) @@ -1023,10 +1031,10 @@ def test_get_indexer(self): idx2 = index[[1, 3, 5]] r1 = idx1.get_indexer(idx2) - assert_almost_equal(r1, [1, 3, -1]) + assert_almost_equal(r1, np.array([1, 3, -1])) r1 = idx2.get_indexer(idx1, method='pad') - e1 = [-1, 0, 0, 1, 1] + e1 = np.array([-1, 0, 0, 1, 1]) assert_almost_equal(r1, e1) r2 = idx2.get_indexer(idx1[::-1], method='pad') @@ -1036,7 +1044,7 @@ def test_get_indexer(self): assert_almost_equal(r1, rffill1) r1 = idx2.get_indexer(idx1, method='backfill') - e1 = [0, 0, 1, 1, 2] + e1 = np.array([0, 0, 1, 1, 2]) assert_almost_equal(r1, e1) r2 = idx2.get_indexer(idx1[::-1], method='backfill') @@ -1056,9 +1064,10 @@ def test_get_indexer(self): # create index with duplicates idx1 = Index(lrange(10) + lrange(10)) idx2 = Index(lrange(20)) - assertRaisesRegexp(InvalidIndexError, "Reindexing only valid with" - " uniquely valued Index objects", idx1.get_indexer, - idx2) + + msg = "Reindexing only valid with uniquely valued Index objects" + with assertRaisesRegexp(InvalidIndexError, msg): + idx1.get_indexer(idx2) def test_get_indexer_nearest(self): midx = MultiIndex.from_tuples([('a', 1), ('b', 2)]) @@ -1516,15 +1525,18 @@ def test_insert(self): # key not contained in all levels new_index = self.index.insert(0, ('abc', 'three')) - tm.assert_numpy_array_equal(new_index.levels[0], - list(self.index.levels[0]) + ['abc']) - 
tm.assert_numpy_array_equal(new_index.levels[1], - list(self.index.levels[1]) + ['three']) + + exp0 = Index(list(self.index.levels[0]) + ['abc'], name='first') + tm.assert_index_equal(new_index.levels[0], exp0) + + exp1 = Index(list(self.index.levels[1]) + ['three'], name='second') + tm.assert_index_equal(new_index.levels[1], exp1) self.assertEqual(new_index[0], ('abc', 'three')) # key wrong length - assertRaisesRegexp(ValueError, "Item must have length equal to number" - " of levels", self.index.insert, 0, ('foo2', )) + msg = "Item must have length equal to number of levels" + with assertRaisesRegexp(ValueError, msg): + self.index.insert(0, ('foo2', )) left = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1]], columns=['1st', '2nd', '3rd']) @@ -1545,14 +1557,9 @@ def test_insert(self): ts.loc[('a', 'w')] = 5 ts.loc['a', 'a'] = 6 - right = pd.DataFrame([['a', 'b', 0], - ['b', 'd', 1], - ['b', 'x', 2], - ['b', 'a', -1], - ['b', 'b', 3], - ['a', 'x', 4], - ['a', 'w', 5], - ['a', 'a', 6]], + right = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1], ['b', 'x', 2], + ['b', 'a', -1], ['b', 'b', 3], ['a', 'x', 4], + ['a', 'w', 5], ['a', 'a', 6]], columns=['1st', '2nd', '3rd']) right.set_index(['1st', '2nd'], inplace=True) # FIXME data types changes to float because @@ -1993,9 +2000,9 @@ def test_isin(self): def test_isin_nan(self): idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]]) tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]), - [False, False]) + np.array([False, False])) tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]), - [False, False]) + np.array([False, False])) def test_isin_level_kwarg(self): idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange( diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py index 8592ae1741a4e..5eac0bc870756 100644 --- a/pandas/tests/indexes/test_numeric.py +++ b/pandas/tests/indexes/test_numeric.py @@ -158,6 +158,7 @@ def check_is_index(self, i): def 
check_coerce(self, a, b, is_float_index=True): self.assertTrue(a.equals(b)) + self.assert_index_equal(a, b, exact=False) if is_float_index: self.assertIsInstance(b, Float64Index) else: @@ -259,6 +260,11 @@ def test_astype(self): for dtype in ['M8[ns]', 'm8[ns]']: self.assertRaises(TypeError, lambda: i.astype(dtype)) + # GH 13149 + for dtype in ['int16', 'int32', 'int64']: + i = Float64Index([0, 1.1, np.NAN]) + self.assertRaises(ValueError, lambda: i.astype(dtype)) + def test_equals(self): i = Float64Index([1.0, 2.0]) @@ -277,14 +283,16 @@ def test_equals(self): def test_get_indexer(self): idx = Float64Index([0.0, 1.0, 2.0]) - tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2]) + tm.assert_numpy_array_equal(idx.get_indexer(idx), + np.array([0, 1, 2], dtype=np.int_)) target = [-0.1, 0.5, 1.1] - tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'backfill'), [0, 1, 2]) - tm.assert_numpy_array_equal( - idx.get_indexer(target, 'nearest'), [0, 1, 1]) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), + np.array([-1, 0, 1], dtype=np.int_)) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'), + np.array([0, 1, 2], dtype=np.int_)) + tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'), + np.array([0, 1, 1], dtype=np.int_)) def test_get_loc(self): idx = Float64Index([0.0, 1.0, 2.0]) @@ -353,7 +361,7 @@ def test_astype_from_object(self): index = Index([1.0, np.nan, 0.2], dtype='object') result = index.astype(float) expected = Float64Index([1.0, np.nan, 0.2]) - tm.assert_equal(result.dtype, expected.dtype) + self.assertEqual(result.dtype, expected.dtype) tm.assert_index_equal(result, expected) def test_fillna_float64(self): @@ -420,12 +428,12 @@ def testit(): def test_constructor(self): # pass list, coerce fine index = Int64Index([-5, 0, 1, 2]) - expected = np.array([-5, 0, 1, 2], dtype=np.int64) - tm.assert_numpy_array_equal(index, expected) + expected = 
Index([-5, 0, 1, 2], dtype=np.int64) + tm.assert_index_equal(index, expected) # from iterable index = Int64Index(iter([-5, 0, 1, 2])) - tm.assert_numpy_array_equal(index, expected) + tm.assert_index_equal(index, expected) # scalar raise Exception self.assertRaises(TypeError, Int64Index, 5) @@ -433,7 +441,7 @@ def test_constructor(self): # copy arr = self.index.values new_index = Int64Index(arr, copy=True) - tm.assert_numpy_array_equal(new_index, self.index) + tm.assert_index_equal(new_index, self.index) val = arr[0] + 3000 # this should not change index @@ -452,7 +460,7 @@ def test_constructor_corner(self): arr = np.array([1, 2, 3, 4], dtype=object) index = Int64Index(arr) self.assertEqual(index.values.dtype, np.int64) - self.assertTrue(index.equals(arr)) + self.assert_index_equal(index, Index(arr)) # preventing casting arr = np.array([1, '2', 3, '4'], dtype=object) @@ -576,7 +584,7 @@ def test_join_outer(self): res, lidx, ridx = self.index.join(other, how='outer', return_indexers=True) noidx_res = self.index.join(other, how='outer') - self.assertTrue(res.equals(noidx_res)) + self.assert_index_equal(res, noidx_res) eres = Int64Index([0, 1, 2, 4, 5, 6, 7, 8, 10, 12, 14, 16, 18, 25]) elidx = np.array([0, -1, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, 9, -1], @@ -585,7 +593,7 @@ def test_join_outer(self): dtype=np.int_) tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) tm.assert_numpy_array_equal(lidx, elidx) tm.assert_numpy_array_equal(ridx, eridx) @@ -593,14 +601,14 @@ def test_join_outer(self): res, lidx, ridx = self.index.join(other_mono, how='outer', return_indexers=True) noidx_res = self.index.join(other_mono, how='outer') - self.assertTrue(res.equals(noidx_res)) + self.assert_index_equal(res, noidx_res) elidx = np.array([0, -1, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, 9, -1], dtype=np.int64) eridx = np.array([-1, 0, 1, -1, 2, -1, 3, -1, -1, 4, -1, -1, -1, 5], dtype=np.int64) tm.assertIsInstance(res, Int64Index) - 
self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) tm.assert_numpy_array_equal(lidx, elidx) tm.assert_numpy_array_equal(ridx, eridx) @@ -623,7 +631,7 @@ def test_join_inner(self): eridx = np.array([4, 1], dtype=np.int_) tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) tm.assert_numpy_array_equal(lidx, elidx) tm.assert_numpy_array_equal(ridx, eridx) @@ -632,12 +640,12 @@ def test_join_inner(self): return_indexers=True) res2 = self.index.intersection(other_mono) - self.assertTrue(res.equals(res2)) + self.assert_index_equal(res, res2) elidx = np.array([1, 6], dtype=np.int64) eridx = np.array([1, 4], dtype=np.int64) tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) tm.assert_numpy_array_equal(lidx, elidx) tm.assert_numpy_array_equal(ridx, eridx) @@ -653,7 +661,7 @@ def test_join_left(self): dtype=np.int_) tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assertIsNone(lidx) tm.assert_numpy_array_equal(ridx, eridx) @@ -663,7 +671,7 @@ def test_join_left(self): eridx = np.array([-1, 1, -1, -1, -1, -1, 4, -1, -1, -1], dtype=np.int64) tm.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assertIsNone(lidx) tm.assert_numpy_array_equal(ridx, eridx) @@ -674,7 +682,7 @@ def test_join_left(self): eres = Index([1, 1, 2, 5, 7, 9]) # 1 is in idx2, so it should be x2 eridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64) elidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) tm.assert_numpy_array_equal(lidx, elidx) tm.assert_numpy_array_equal(ridx, eridx) @@ -689,7 +697,7 @@ def test_join_right(self): elidx = np.array([-1, 6, -1, -1, 1, -1], dtype=np.int_) tm.assertIsInstance(other, Int64Index) - self.assertTrue(res.equals(eres)) + 
self.assert_index_equal(res, eres) tm.assert_numpy_array_equal(lidx, elidx) self.assertIsNone(ridx) @@ -699,7 +707,7 @@ def test_join_right(self): eres = other_mono elidx = np.array([-1, 1, -1, -1, 6, -1], dtype=np.int64) tm.assertIsInstance(other, Int64Index) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) tm.assert_numpy_array_equal(lidx, elidx) self.assertIsNone(ridx) @@ -710,7 +718,7 @@ def test_join_right(self): eres = Index([1, 1, 2, 5, 7, 9]) # 1 is in idx2, so it should be x2 elidx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64) eridx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) tm.assert_numpy_array_equal(lidx, elidx) tm.assert_numpy_array_equal(ridx, eridx) @@ -719,28 +727,27 @@ def test_join_non_int_index(self): outer = self.index.join(other, how='outer') outer2 = other.join(self.index, how='outer') - expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, - 16, 18], dtype=object) - self.assertTrue(outer.equals(outer2)) - self.assertTrue(outer.equals(expected)) + expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, 16, 18]) + self.assert_index_equal(outer, outer2) + self.assert_index_equal(outer, expected) inner = self.index.join(other, how='inner') inner2 = other.join(self.index, how='inner') - expected = Index([6, 8, 10], dtype=object) - self.assertTrue(inner.equals(inner2)) - self.assertTrue(inner.equals(expected)) + expected = Index([6, 8, 10]) + self.assert_index_equal(inner, inner2) + self.assert_index_equal(inner, expected) left = self.index.join(other, how='left') - self.assertTrue(left.equals(self.index)) + self.assert_index_equal(left, self.index.astype(object)) left2 = other.join(self.index, how='left') - self.assertTrue(left2.equals(other)) + self.assert_index_equal(left2, other) right = self.index.join(other, how='right') - self.assertTrue(right.equals(other)) + self.assert_index_equal(right, other) right2 = other.join(self.index, how='right') - 
self.assertTrue(right2.equals(self.index)) + self.assert_index_equal(right2, self.index.astype(object)) def test_join_non_unique(self): left = Index([4, 4, 3, 3]) @@ -748,7 +755,7 @@ def test_join_non_unique(self): joined, lidx, ridx = left.join(left, return_indexers=True) exp_joined = Index([3, 3, 3, 3, 4, 4, 4, 4]) - self.assertTrue(joined.equals(exp_joined)) + self.assert_index_equal(joined, exp_joined) exp_lidx = np.array([2, 2, 3, 3, 0, 0, 1, 1], dtype=np.int_) tm.assert_numpy_array_equal(lidx, exp_lidx) @@ -765,13 +772,14 @@ def test_join_self(self): def test_intersection(self): other = Index([1, 2, 3, 4, 5]) result = self.index.intersection(other) - expected = np.sort(np.intersect1d(self.index.values, other.values)) - tm.assert_numpy_array_equal(result, expected) + expected = Index(np.sort(np.intersect1d(self.index.values, + other.values))) + tm.assert_index_equal(result, expected) result = other.intersection(self.index) - expected = np.sort(np.asarray(np.intersect1d(self.index.values, - other.values))) - tm.assert_numpy_array_equal(result, expected) + expected = Index(np.sort(np.asarray(np.intersect1d(self.index.values, + other.values)))) + tm.assert_index_equal(result, expected) def test_intersect_str_dates(self): dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] @@ -788,12 +796,12 @@ def test_union_noncomparable(self): now = datetime.now() other = Index([now + timedelta(i) for i in range(4)], dtype=object) result = self.index.union(other) - expected = np.concatenate((self.index, other)) - tm.assert_numpy_array_equal(result, expected) + expected = Index(np.concatenate((self.index, other))) + tm.assert_index_equal(result, expected) result = other.union(self.index) - expected = np.concatenate((other, self.index)) - tm.assert_numpy_array_equal(result, expected) + expected = Index(np.concatenate((other, self.index))) + tm.assert_index_equal(result, expected) def test_cant_or_shouldnt_cast(self): # can't diff --git a/pandas/tests/indexes/test_range.py 
b/pandas/tests/indexes/test_range.py index 8b04b510146d2..99e4b72bcee37 100644 --- a/pandas/tests/indexes/test_range.py +++ b/pandas/tests/indexes/test_range.py @@ -102,10 +102,10 @@ def test_constructor_same(self): self.assertTrue(result.identical(index)) result = RangeIndex(index, copy=True) - self.assertTrue(result.equals(index)) + self.assert_index_equal(result, index, exact=True) result = RangeIndex(index) - self.assertTrue(result.equals(index)) + self.assert_index_equal(result, index, exact=True) self.assertRaises(TypeError, lambda: RangeIndex(index, dtype='float64')) @@ -116,24 +116,24 @@ def test_constructor_range(self): result = RangeIndex.from_range(range(1, 5, 2)) expected = RangeIndex(1, 5, 2) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected, exact=True) result = RangeIndex.from_range(range(5, 6)) expected = RangeIndex(5, 6, 1) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected, exact=True) # an invalid range result = RangeIndex.from_range(range(5, 1)) expected = RangeIndex(0, 0, 1) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected, exact=True) result = RangeIndex.from_range(range(5)) expected = RangeIndex(0, 5, 1) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected, exact=True) result = Index(range(1, 5, 2)) expected = RangeIndex(1, 5, 2) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected, exact=True) self.assertRaises(TypeError, lambda: Index(range(1, 5, 2), dtype='float64')) @@ -165,27 +165,28 @@ def test_numeric_compat2(self): result = idx * 2 expected = RangeIndex(0, 20, 4) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected, exact=True) result = idx + 2 expected = RangeIndex(2, 12, 2) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected, exact=True) result = idx - 2 expected = RangeIndex(-2, 8, 2) - 
self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected, exact=True) # truediv under PY3 result = idx / 2 + if PY3: - expected = RangeIndex(0, 5, 1) - else: expected = RangeIndex(0, 5, 1).astype('float64') - self.assertTrue(result.equals(expected)) + else: + expected = RangeIndex(0, 5, 1) + self.assert_index_equal(result, expected, exact=True) result = idx / 4 - expected = RangeIndex(0, 10, 2).values / 4 - self.assertTrue(result.equals(expected)) + expected = RangeIndex(0, 10, 2) / 4 + self.assert_index_equal(result, expected, exact=True) result = idx // 1 expected = idx @@ -220,7 +221,7 @@ def test_constructor_corner(self): arr = np.array([1, 2, 3, 4], dtype=object) index = RangeIndex(1, 5) self.assertEqual(index.values.dtype, np.int64) - self.assertTrue(index.equals(arr)) + self.assert_index_equal(index, Index(arr)) # non-int raise Exception self.assertRaises(TypeError, RangeIndex, '1', '10', '1') @@ -249,7 +250,7 @@ def test_repr(self): self.assertTrue(result, expected) result = eval(result) - self.assertTrue(result.equals(i)) + self.assert_index_equal(result, i, exact=True) i = RangeIndex(5, 0, -1) result = repr(i) @@ -257,7 +258,7 @@ def test_repr(self): self.assertEqual(result, expected) result = eval(result) - self.assertTrue(result.equals(i)) + self.assert_index_equal(result, i, exact=True) def test_insert(self): @@ -265,19 +266,19 @@ def test_insert(self): result = idx[1:4] # test 0th element - self.assertTrue(idx[0:4].equals(result.insert(0, idx[0]))) + self.assert_index_equal(idx[0:4], result.insert(0, idx[0])) def test_delete(self): idx = RangeIndex(5, name='Foo') expected = idx[1:].astype(int) result = idx.delete(0) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) expected = idx[:-1].astype(int) result = idx.delete(-1) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) self.assertEqual(result.name, 
expected.name) with tm.assertRaises((IndexError, ValueError)): @@ -292,7 +293,7 @@ def test_view(self): self.assertEqual(i_view.name, 'Foo') i_view = i.view('i8') - tm.assert_numpy_array_equal(i, i_view) + tm.assert_numpy_array_equal(i.values, i_view) i_view = i.view(RangeIndex) tm.assert_index_equal(i, i_view) @@ -376,7 +377,7 @@ def test_join_outer(self): res, lidx, ridx = self.index.join(other, how='outer', return_indexers=True) noidx_res = self.index.join(other, how='outer') - self.assertTrue(res.equals(noidx_res)) + self.assert_index_equal(res, noidx_res) eres = Int64Index([0, 2, 4, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]) @@ -387,7 +388,7 @@ def test_join_outer(self): self.assertIsInstance(res, Int64Index) self.assertFalse(isinstance(res, RangeIndex)) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assert_numpy_array_equal(lidx, elidx) self.assert_numpy_array_equal(ridx, eridx) @@ -397,11 +398,11 @@ def test_join_outer(self): res, lidx, ridx = self.index.join(other, how='outer', return_indexers=True) noidx_res = self.index.join(other, how='outer') - self.assertTrue(res.equals(noidx_res)) + self.assert_index_equal(res, noidx_res) self.assertIsInstance(res, Int64Index) self.assertFalse(isinstance(res, RangeIndex)) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assert_numpy_array_equal(lidx, elidx) self.assert_numpy_array_equal(ridx, eridx) @@ -423,7 +424,7 @@ def test_join_inner(self): eridx = np.array([9, 7]) self.assertIsInstance(res, Int64Index) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assert_numpy_array_equal(lidx, elidx) self.assert_numpy_array_equal(ridx, eridx) @@ -434,7 +435,7 @@ def test_join_inner(self): return_indexers=True) self.assertIsInstance(res, RangeIndex) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assert_numpy_array_equal(lidx, elidx) self.assert_numpy_array_equal(ridx, eridx) @@ -448,7 
+449,7 @@ def test_join_left(self): eridx = np.array([-1, -1, -1, -1, -1, -1, -1, -1, 9, 7], dtype=np.int_) self.assertIsInstance(res, RangeIndex) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assertIsNone(lidx) self.assert_numpy_array_equal(ridx, eridx) @@ -459,7 +460,7 @@ def test_join_left(self): return_indexers=True) self.assertIsInstance(res, RangeIndex) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assertIsNone(lidx) self.assert_numpy_array_equal(ridx, eridx) @@ -474,7 +475,7 @@ def test_join_right(self): dtype=np.int_) self.assertIsInstance(other, Int64Index) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assert_numpy_array_equal(lidx, elidx) self.assertIsNone(ridx) @@ -486,7 +487,7 @@ def test_join_right(self): eres = other self.assertIsInstance(other, RangeIndex) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assert_numpy_array_equal(lidx, elidx) self.assertIsNone(ridx) @@ -495,28 +496,27 @@ def test_join_non_int_index(self): outer = self.index.join(other, how='outer') outer2 = other.join(self.index, how='outer') - expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, - 16, 18], dtype=object) - self.assertTrue(outer.equals(outer2)) - self.assertTrue(outer.equals(expected)) + expected = Index([0, 2, 3, 4, 6, 7, 8, 10, 12, 14, 16, 18]) + self.assert_index_equal(outer, outer2) + self.assert_index_equal(outer, expected) inner = self.index.join(other, how='inner') inner2 = other.join(self.index, how='inner') - expected = Index([6, 8, 10], dtype=object) - self.assertTrue(inner.equals(inner2)) - self.assertTrue(inner.equals(expected)) + expected = Index([6, 8, 10]) + self.assert_index_equal(inner, inner2) + self.assert_index_equal(inner, expected) left = self.index.join(other, how='left') - self.assertTrue(left.equals(self.index)) + self.assert_index_equal(left, self.index.astype(object)) left2 = other.join(self.index, how='left') - 
self.assertTrue(left2.equals(other)) + self.assert_index_equal(left2, other) right = self.index.join(other, how='right') - self.assertTrue(right.equals(other)) + self.assert_index_equal(right, other) right2 = other.join(self.index, how='right') - self.assertTrue(right2.equals(self.index)) + self.assert_index_equal(right2, self.index.astype(object)) def test_join_non_unique(self): other = Index([4, 4, 3, 3]) @@ -528,7 +528,7 @@ def test_join_non_unique(self): eridx = np.array([-1, -1, 0, 1, -1, -1, -1, -1, -1, -1, -1], dtype=np.int_) - self.assertTrue(res.equals(eres)) + self.assert_index_equal(res, eres) self.assert_numpy_array_equal(lidx, elidx) self.assert_numpy_array_equal(ridx, eridx) @@ -542,25 +542,28 @@ def test_intersection(self): # intersect with Int64Index other = Index(np.arange(1, 6)) result = self.index.intersection(other) - expected = np.sort(np.intersect1d(self.index.values, other.values)) - self.assert_numpy_array_equal(result, expected) + expected = Index(np.sort(np.intersect1d(self.index.values, + other.values))) + self.assert_index_equal(result, expected) result = other.intersection(self.index) - expected = np.sort(np.asarray(np.intersect1d(self.index.values, - other.values))) - self.assert_numpy_array_equal(result, expected) + expected = Index(np.sort(np.asarray(np.intersect1d(self.index.values, + other.values)))) + self.assert_index_equal(result, expected) # intersect with increasing RangeIndex other = RangeIndex(1, 6) result = self.index.intersection(other) - expected = np.sort(np.intersect1d(self.index.values, other.values)) - self.assert_numpy_array_equal(result, expected) + expected = Index(np.sort(np.intersect1d(self.index.values, + other.values))) + self.assert_index_equal(result, expected) # intersect with decreasing RangeIndex other = RangeIndex(5, 0, -1) result = self.index.intersection(other) - expected = np.sort(np.intersect1d(self.index.values, other.values)) - self.assert_numpy_array_equal(result, expected) + expected = 
Index(np.sort(np.intersect1d(self.index.values, + other.values))) + self.assert_index_equal(result, expected) def test_intersect_str_dates(self): dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] @@ -577,12 +580,12 @@ def test_union_noncomparable(self): now = datetime.now() other = Index([now + timedelta(i) for i in range(4)], dtype=object) result = self.index.union(other) - expected = np.concatenate((self.index, other)) - self.assert_numpy_array_equal(result, expected) + expected = Index(np.concatenate((self.index, other))) + self.assert_index_equal(result, expected) result = other.union(self.index) - expected = np.concatenate((other, self.index)) - self.assert_numpy_array_equal(result, expected) + expected = Index(np.concatenate((other, self.index))) + self.assert_index_equal(result, expected) def test_union(self): RI = RangeIndex @@ -789,43 +792,43 @@ def test_slice_specialised(self): # slice value completion index = self.index[:] expected = self.index - self.assert_numpy_array_equal(index, expected) + self.assert_index_equal(index, expected) # positive slice values index = self.index[7:10:2] - expected = np.array([14, 18]) - self.assert_numpy_array_equal(index, expected) + expected = Index(np.array([14, 18]), name='foo') + self.assert_index_equal(index, expected) # negative slice values index = self.index[-1:-5:-2] - expected = np.array([18, 14]) - self.assert_numpy_array_equal(index, expected) + expected = Index(np.array([18, 14]), name='foo') + self.assert_index_equal(index, expected) # stop overshoot index = self.index[2:100:4] - expected = np.array([4, 12]) - self.assert_numpy_array_equal(index, expected) + expected = Index(np.array([4, 12]), name='foo') + self.assert_index_equal(index, expected) # reverse index = self.index[::-1] - expected = self.index.values[::-1] - self.assert_numpy_array_equal(index, expected) + expected = Index(self.index.values[::-1], name='foo') + self.assert_index_equal(index, expected) index = self.index[-8::-1] - expected 
= np.array([4, 2, 0]) - self.assert_numpy_array_equal(index, expected) + expected = Index(np.array([4, 2, 0]), name='foo') + self.assert_index_equal(index, expected) index = self.index[-40::-1] - expected = np.array([]) - self.assert_numpy_array_equal(index, expected) + expected = Index(np.array([], dtype=np.int64), name='foo') + self.assert_index_equal(index, expected) index = self.index[40::-1] - expected = self.index.values[40::-1] - self.assert_numpy_array_equal(index, expected) + expected = Index(self.index.values[40::-1], name='foo') + self.assert_index_equal(index, expected) index = self.index[10::-1] - expected = self.index.values[::-1] - self.assert_numpy_array_equal(index, expected) + expected = Index(self.index.values[::-1], name='foo') + self.assert_index_equal(index, expected) def test_len_specialised(self): diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py index 53ab9aca03f6c..2cb62a60f885b 100644 --- a/pandas/tests/indexing/test_categorical.py +++ b/pandas/tests/indexing/test_categorical.py @@ -108,15 +108,17 @@ def test_loc_listlike_dtypes(self): # unique slice res = df.loc[['a', 'b']] - exp = DataFrame({'A': [1, 2], - 'B': [4, 5]}, index=pd.CategoricalIndex(['a', 'b'])) + exp_index = pd.CategoricalIndex(['a', 'b'], + categories=index.categories) + exp = DataFrame({'A': [1, 2], 'B': [4, 5]}, index=exp_index) tm.assert_frame_equal(res, exp, check_index_type=True) # duplicated slice res = df.loc[['a', 'a', 'b']] - exp = DataFrame({'A': [1, 1, 2], - 'B': [4, 4, 5]}, - index=pd.CategoricalIndex(['a', 'a', 'b'])) + + exp_index = pd.CategoricalIndex(['a', 'a', 'b'], + categories=index.categories) + exp = DataFrame({'A': [1, 1, 2], 'B': [4, 4, 5]}, index=exp_index) tm.assert_frame_equal(res, exp, check_index_type=True) with tm.assertRaisesRegexp( @@ -194,12 +196,15 @@ def test_ix_categorical_index(self): expect = pd.Series(df.ix[:, 'X'], index=cdf.index, name='X') assert_series_equal(cdf.ix[:, 'X'], expect) 
+ exp_index = pd.CategoricalIndex(list('AB'), categories=['A', 'B', 'C']) expect = pd.DataFrame(df.ix[['A', 'B'], :], columns=cdf.columns, - index=pd.CategoricalIndex(list('AB'))) + index=exp_index) assert_frame_equal(cdf.ix[['A', 'B'], :], expect) + exp_columns = pd.CategoricalIndex(list('XY'), + categories=['X', 'Y', 'Z']) expect = pd.DataFrame(df.ix[:, ['X', 'Y']], index=cdf.index, - columns=pd.CategoricalIndex(list('XY'))) + columns=exp_columns) assert_frame_equal(cdf.ix[:, ['X', 'Y']], expect) # non-unique @@ -209,12 +214,14 @@ def test_ix_categorical_index(self): cdf.index = pd.CategoricalIndex(df.index) cdf.columns = pd.CategoricalIndex(df.columns) + exp_index = pd.CategoricalIndex(list('AA'), categories=['A', 'B']) expect = pd.DataFrame(df.ix['A', :], columns=cdf.columns, - index=pd.CategoricalIndex(list('AA'))) + index=exp_index) assert_frame_equal(cdf.ix['A', :], expect) + exp_columns = pd.CategoricalIndex(list('XX'), categories=['X', 'Y']) expect = pd.DataFrame(df.ix[:, 'X'], index=cdf.index, - columns=pd.CategoricalIndex(list('XX'))) + columns=exp_columns) assert_frame_equal(cdf.ix[:, 'X'], expect) expect = pd.DataFrame(df.ix[['A', 'B'], :], columns=cdf.columns, diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py index 2a2f8678694de..29f3889d20bd0 100644 --- a/pandas/tests/indexing/test_floats.py +++ b/pandas/tests/indexing/test_floats.py @@ -538,8 +538,10 @@ def test_slice_float(self): # getitem result = idxr(s)[l] - self.assertTrue(result.equals(expected)) - + if isinstance(s, Series): + self.assert_series_equal(result, expected) + else: + self.assert_frame_equal(result, expected) # setitem s2 = s.copy() idxr(s2)[l] = 0 diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py index 4b8b5ae2571d0..b86b248ead290 100644 --- a/pandas/tests/indexing/test_indexing.py +++ b/pandas/tests/indexing/test_indexing.py @@ -20,14 +20,14 @@ MultiIndex, Timestamp, Timedelta) from 
pandas.util.testing import (assert_almost_equal, assert_series_equal, assert_frame_equal, assert_panel_equal, - assert_attr_equal) + assert_attr_equal, slow) from pandas.formats.printing import pprint_thing from pandas import concat, lib from pandas.core.common import PerformanceWarning import pandas.util.testing as tm from pandas import date_range -from numpy.testing.decorators import slow + _verbose = False @@ -2334,6 +2334,18 @@ def test_multiindex_slicers_non_unique(self): self.assertFalse(result.index.is_unique) assert_frame_equal(result, expected) + # GH12896 + # numpy-implementation dependent bug + ints = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 12, 13, 14, 14, 16, + 17, 18, 19, 200000, 200000] + n = len(ints) + idx = MultiIndex.from_arrays([['a'] * n, ints]) + result = Series([1] * n, index=idx) + result = result.sort_index() + result = result.loc[(slice(None), slice(100000))] + expected = Series([1] * (n - 2), index=idx[:-2]).sort_index() + assert_series_equal(result, expected) + def test_multiindex_slicers_datetimelike(self): # GH 7429 @@ -2913,7 +2925,7 @@ def test_dups_fancy_indexing(self): df.columns = ['a', 'a', 'b'] result = df[['b', 'a']].columns expected = Index(['b', 'a', 'a']) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) # across dtypes df = DataFrame([[1, 2, 1., 2., 3., 'foo', 'bar']], @@ -3817,7 +3829,7 @@ def test_astype_assignment_with_dups(self): index = df.index.copy() df['A'] = df['A'].astype(np.float64) - self.assertTrue(df.index.equals(index)) + self.assert_index_equal(df.index, index) # TODO(wesm): unused variables # result = df.get_dtype_counts().sort_index() @@ -4238,7 +4250,8 @@ def test_series_partial_set_period(self): pd.Period('2011-01-03', freq='D')] exp = Series([np.nan, 0.2, np.nan], index=pd.PeriodIndex(keys, name='idx'), name='s') - assert_series_equal(ser.loc[keys], exp, check_index_type=True) + result = ser.loc[keys] + assert_series_equal(result, exp) def 
test_partial_set_invalid(self): diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py index 574dcd54933ae..2ddfa27eea377 100644 --- a/pandas/tests/series/test_alter_axes.py +++ b/pandas/tests/series/test_alter_axes.py @@ -48,7 +48,7 @@ def test_rename(self): # partial dict s = Series(np.arange(4), index=['a', 'b', 'c', 'd'], dtype='int64') renamed = s.rename({'b': 'foo', 'd': 'bar'}) - self.assert_numpy_array_equal(renamed.index, ['a', 'foo', 'c', 'bar']) + self.assert_index_equal(renamed.index, Index(['a', 'foo', 'c', 'bar'])) # index with name renamer = Series(np.arange(4), @@ -141,7 +141,7 @@ def test_reset_index(self): self.assertEqual(len(rs.columns), 2) rs = s.reset_index(level=[0, 2], drop=True) - self.assertTrue(rs.index.equals(Index(index.get_level_values(1)))) + self.assert_index_equal(rs.index, Index(index.get_level_values(1))) tm.assertIsInstance(rs, Series) def test_reset_index_range(self): diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py index 002b7fa3aa8df..433f0f4bc67f5 100644 --- a/pandas/tests/series/test_analytics.py +++ b/pandas/tests/series/test_analytics.py @@ -289,8 +289,8 @@ def test_argsort_stable(self): mexpected = np.argsort(s.values, kind='mergesort') qexpected = np.argsort(s.values, kind='quicksort') - self.assert_numpy_array_equal(mindexer, mexpected) - self.assert_numpy_array_equal(qindexer, qexpected) + self.assert_series_equal(mindexer, Series(mexpected)) + self.assert_series_equal(qindexer, Series(qexpected)) self.assertFalse(np.array_equal(qindexer, mindexer)) def test_cumsum(self): @@ -300,24 +300,24 @@ def test_cumprod(self): self._check_accum_op('cumprod') def test_cummin(self): - self.assert_numpy_array_equal(self.ts.cummin(), + self.assert_numpy_array_equal(self.ts.cummin().values, np.minimum.accumulate(np.array(self.ts))) ts = self.ts.copy() ts[::2] = np.NaN result = ts.cummin()[1::2] expected = np.minimum.accumulate(ts.valid()) - 
self.assert_numpy_array_equal(result, expected) + self.assert_series_equal(result, expected) def test_cummax(self): - self.assert_numpy_array_equal(self.ts.cummax(), + self.assert_numpy_array_equal(self.ts.cummax().values, np.maximum.accumulate(np.array(self.ts))) ts = self.ts.copy() ts[::2] = np.NaN result = ts.cummax()[1::2] expected = np.maximum.accumulate(ts.valid()) - self.assert_numpy_array_equal(result, expected) + self.assert_series_equal(result, expected) def test_cummin_datetime64(self): s = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', '2000-1-1', @@ -489,7 +489,8 @@ def testit(): def _check_accum_op(self, name): func = getattr(np, name) - self.assert_numpy_array_equal(func(self.ts), func(np.array(self.ts))) + self.assert_numpy_array_equal(func(self.ts).values, + func(np.array(self.ts))) # with missing values ts = self.ts.copy() @@ -498,7 +499,7 @@ def _check_accum_op(self, name): result = func(ts)[1::2] expected = func(np.array(ts.valid())) - self.assert_numpy_array_equal(result, expected) + self.assert_numpy_array_equal(result.values, expected) def test_compress(self): cond = [True, False, True, False, False] @@ -1279,6 +1280,7 @@ def test_idxmax(self): self.assertEqual(result, 1.1) def test_numpy_argmax(self): + # argmax is aliased to idxmax data = np.random.randint(0, 11, size=10) result = np.argmax(Series(data)) @@ -1355,7 +1357,7 @@ def test_searchsorted_numeric_dtypes_scalar(self): s = Series([1, 2, 90, 1000, 3e9]) r = s.searchsorted(30) e = 2 - tm.assert_equal(r, e) + self.assertEqual(r, e) r = s.searchsorted([30]) e = np.array([2], dtype=np.int64) @@ -1372,7 +1374,7 @@ def test_search_sorted_datetime64_scalar(self): v = pd.Timestamp('20120102') r = s.searchsorted(v) e = 1 - tm.assert_equal(r, e) + self.assertEqual(r, e) def test_search_sorted_datetime64_list(self): s = Series(pd.date_range('20120101', periods=10, freq='2D')) @@ -1395,6 +1397,23 @@ def test_is_unique(self): s = Series(np.arange(1000)) self.assertTrue(s.is_unique) + def 
test_is_monotonic(self): + + s = Series(np.random.randint(0, 10, size=1000)) + self.assertFalse(s.is_monotonic) + s = Series(np.arange(1000)) + self.assertTrue(s.is_monotonic) + self.assertTrue(s.is_monotonic_increasing) + s = Series(np.arange(1000, 0, -1)) + self.assertTrue(s.is_monotonic_decreasing) + + s = Series(pd.date_range('20130101', periods=10)) + self.assertTrue(s.is_monotonic) + self.assertTrue(s.is_monotonic_increasing) + s = Series(list(reversed(s.tolist()))) + self.assertFalse(s.is_monotonic) + self.assertTrue(s.is_monotonic_decreasing) + def test_sort_values(self): ts = self.ts.copy() @@ -1403,13 +1422,13 @@ def test_sort_values(self): with tm.assert_produces_warning(FutureWarning): ts.sort() - self.assert_numpy_array_equal(ts, self.ts.sort_values()) - self.assert_numpy_array_equal(ts.index, self.ts.sort_values().index) + self.assert_series_equal(ts, self.ts.sort_values()) + self.assert_index_equal(ts.index, self.ts.sort_values().index) ts.sort_values(ascending=False, inplace=True) - self.assert_numpy_array_equal(ts, self.ts.sort_values(ascending=False)) - self.assert_numpy_array_equal(ts.index, self.ts.sort_values( - ascending=False).index) + self.assert_series_equal(ts, self.ts.sort_values(ascending=False)) + self.assert_index_equal(ts.index, + self.ts.sort_values(ascending=False).index) # GH 5856/5853 # Series.sort_values operating on a view @@ -1512,11 +1531,11 @@ def test_order(self): result = ts.sort_values() self.assertTrue(np.isnan(result[-5:]).all()) - self.assert_numpy_array_equal(result[:-5], np.sort(vals[5:])) + self.assert_numpy_array_equal(result[:-5].values, np.sort(vals[5:])) result = ts.sort_values(na_position='first') self.assertTrue(np.isnan(result[:5]).all()) - self.assert_numpy_array_equal(result[5:], np.sort(vals[5:])) + self.assert_numpy_array_equal(result[5:].values, np.sort(vals[5:])) # something object-type ser = Series(['A', 'B'], [1, 2]) diff --git a/pandas/tests/series/test_apply.py b/pandas/tests/series/test_apply.py 
index 6e0a0175b403f..26fc80c3ef988 100644
--- a/pandas/tests/series/test_apply.py
+++ b/pandas/tests/series/test_apply.py
@@ -160,7 +160,7 @@ def test_map(self):

         # function
         result = self.ts.map(lambda x: x * 2)
-        self.assert_numpy_array_equal(result, self.ts * 2)
+        self.assert_series_equal(result, self.ts * 2)

         # GH 10324
         a = Series([1, 2, 3, 4])
@@ -187,7 +187,8 @@ def test_map(self):
                    index=pd.CategoricalIndex(['b', 'c', 'd', 'e']))
         c = Series(['B', 'C', 'D', 'E'], index=Index(['b', 'c', 'd', 'e']))

-        exp = Series([np.nan, 'B', 'C', 'D'], dtype='category')
+        exp = Series(pd.Categorical([np.nan, 'B', 'C', 'D'],
+                                    categories=['B', 'C', 'D', 'E']))
         self.assert_series_equal(a.map(b), exp)
         exp = Series([np.nan, 'B', 'C', 'D'])
         self.assert_series_equal(a.map(c), exp)
diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index 72f1cac219998..eb560d4a17055 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -49,14 +49,14 @@ def test_combine_first(self):

         # nothing used from the input
         combined = series.combine_first(series_copy)

-        self.assert_numpy_array_equal(combined, series)
+        self.assert_series_equal(combined, series)

         # Holes filled from input
         combined = series_copy.combine_first(series)
         self.assertTrue(np.isfinite(combined).all())

-        self.assert_numpy_array_equal(combined[::2], series[::2])
-        self.assert_numpy_array_equal(combined[1::2], series_copy[1::2])
+        self.assert_series_equal(combined[::2], series[::2])
+        self.assert_series_equal(combined[1::2], series_copy[1::2])

         # mixed types
         index = tm.makeStringIndex(20)
@@ -65,8 +65,9 @@ def test_combine_first(self):

         combined = strings.combine_first(floats)

-        tm.assert_dict_equal(strings, combined, compare_keys=False)
-        tm.assert_dict_equal(floats[1::2], combined, compare_keys=False)
+        tm.assert_series_equal(strings, combined.loc[index[::2]])
+        tm.assert_series_equal(floats[1::2].astype(object),
+                               combined.loc[index[1::2]])
         # corner case
         s = Series([1., 2, 3], index=[0, 1, 2])
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 68733700e1483..a80a3af56b18f 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -137,7 +137,7 @@ def test_constructor_categorical(self):
         cat = pd.Categorical([0, 1, 2, 0, 1, 2], ['a', 'b', 'c'],
                              fastpath=True)
         res = Series(cat)
-        self.assertTrue(res.values.equals(cat))
+        tm.assert_categorical_equal(res.values, cat)

         # GH12574
         self.assertRaises(
@@ -418,8 +418,10 @@ def test_constructor_with_datetime_tz(self):
         result = s.values
         self.assertIsInstance(result, np.ndarray)
         self.assertTrue(result.dtype == 'datetime64[ns]')
-        self.assertTrue(dr.equals(pd.DatetimeIndex(result).tz_localize(
-            'UTC').tz_convert(tz=s.dt.tz)))
+
+        exp = pd.DatetimeIndex(result)
+        exp = exp.tz_localize('UTC').tz_convert(tz=s.dt.tz)
+        self.assert_index_equal(dr, exp)

         # indexing
         result = s.iloc[0]
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index 5b12baf6c6fc5..6e82f81f901a9 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -320,8 +320,6 @@ def test_strftime(self):
         expected = np.array(['2015/03/01', '2015/03/02', '2015/03/03',
                              '2015/03/04', '2015/03/05'], dtype=np.object_)
         # dtype may be S10 or U10 depending on python version
-        print(result)
-        print(expected)
         self.assert_numpy_array_equal(result, expected, check_dtype=False)

         period_index = period_range('20150301', periods=5)
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index fc963d4597246..5194a29bc8b42 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -55,7 +55,7 @@ def test_astype_cast_object_int(self):
         arr = Series(['1', '2', '3', '4'], dtype=object)
         result = arr.astype(int)

-        self.assert_numpy_array_equal(result, np.arange(1, 5))
+        self.assert_series_equal(result, Series(np.arange(1, 5)))

     def test_astype_datetimes(self):
         import pandas.tslib as tslib
@@ -133,6 +133,21 @@ def test_astype_unicode(self):
             reload(sys)  # noqa
             sys.setdefaultencoding(former_encoding)

+    def test_astype_dict(self):
+        s = Series(range(0, 10, 2), name='abc')
+
+        result = s.astype({'abc': str})
+        expected = Series(['0', '2', '4', '6', '8'], name='abc')
+        assert_series_equal(result, expected)
+
+        result = s.astype({'abc': 'float64'})
+        expected = Series([0.0, 2.0, 4.0, 6.0, 8.0], dtype='float64',
+                          name='abc')
+        assert_series_equal(result, expected)
+
+        self.assertRaises(KeyError, s.astype, {'abc': str, 'def': str})
+        self.assertRaises(KeyError, s.astype, {0: str})
+
     def test_complexx(self):
         # GH4819
         # complex access for ndarray compat
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 5ed3fda7d0b8f..d01ac3e1aef42 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -246,7 +246,7 @@ def test_getitem_boolean(self):
         result = s[list(mask)]
         expected = s[mask]
         assert_series_equal(result, expected)
-        self.assert_numpy_array_equal(result.index, s.index[mask])
+        self.assert_index_equal(result.index, s.index[mask])

     def test_getitem_boolean_empty(self):
         s = Series([], dtype=np.int64)
@@ -287,6 +287,16 @@ def test_getitem_generator(self):
         assert_series_equal(result, expected)
         assert_series_equal(result2, expected)

+    def test_type_promotion(self):
+        # GH12599
+        s = pd.Series()
+        s["a"] = pd.Timestamp("2016-01-01")
+        s["b"] = 3.0
+        s["c"] = "foo"
+        expected = Series([pd.Timestamp("2016-01-01"), 3.0, "foo"],
+                          index=["a", "b", "c"])
+        assert_series_equal(s, expected)
+
     def test_getitem_boolean_object(self):
         # using column from DataFrame
diff --git a/pandas/tests/series/test_internals.py b/pandas/tests/series/test_internals.py
index 93bd7f0eec7c5..e3a0e056f4da1 100644
--- a/pandas/tests/series/test_internals.py
+++ b/pandas/tests/series/test_internals.py
@@ -103,7 +103,8 @@ def test_convert_objects(self):
         with tm.assert_produces_warning(FutureWarning):
             result = s.convert_objects(convert_dates='coerce',
                                        convert_numeric=False)
-        assert_series_equal(result, s)
+        expected = Series([lib.NaT] * 2 + [Timestamp(1)] * 2)
+        assert_series_equal(result, expected)

         # preserver if non-object
         s = Series([1], dtype='float32')
@@ -270,7 +271,7 @@ def test_convert(self):
         s = Series(['foo', 'bar', 1, 1.0], dtype='O')
         result = s._convert(datetime=True, coerce=True)

-        expected = Series([lib.NaT] * 4)
+        expected = Series([lib.NaT] * 2 + [Timestamp(1)] * 2)
         assert_series_equal(result, expected)

         # preserver if non-object
diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py
index 4fda1152abd96..f89501d39f014 100644
--- a/pandas/tests/series/test_io.py
+++ b/pandas/tests/series/test_io.py
@@ -130,7 +130,7 @@ def test_to_frame(self):
         assert_frame_equal(rs, xp)

     def test_to_dict(self):
-        self.assert_numpy_array_equal(Series(self.ts.to_dict()), self.ts)
+        self.assert_series_equal(Series(self.ts.to_dict(), name='ts'), self.ts)

     def test_timeseries_periodindex(self):
         # GH2891
diff --git a/pandas/tests/series/test_misc_api.py b/pandas/tests/series/test_misc_api.py
index 9f5433782b062..d74966738909d 100644
--- a/pandas/tests/series/test_misc_api.py
+++ b/pandas/tests/series/test_misc_api.py
@@ -206,7 +206,7 @@ def test_keys(self):
         self.assertIs(getkeys(), self.ts.index)

     def test_values(self):
-        self.assert_numpy_array_equal(self.ts, self.ts.values)
+        self.assert_almost_equal(self.ts.values, self.ts, check_dtype=False)

     def test_iteritems(self):
         for idx, val in compat.iteritems(self.series):
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index dec4f878d7d56..ed10f5b0a7af3 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -247,16 +247,18 @@ def test_isnull_for_inf(self):
     def test_fillna(self):
         ts = Series([0., 1., 2., 3., 4.], index=tm.makeDateIndex(5))

-        self.assert_numpy_array_equal(ts, ts.fillna(method='ffill'))
+        self.assert_series_equal(ts, ts.fillna(method='ffill'))

         ts[2] = np.NaN

-        self.assert_numpy_array_equal(ts.fillna(method='ffill'),
-                                      [0., 1., 1., 3., 4.])
-        self.assert_numpy_array_equal(ts.fillna(method='backfill'),
-                                      [0., 1., 3., 3., 4.])
+        exp = Series([0., 1., 1., 3., 4.], index=ts.index)
+        self.assert_series_equal(ts.fillna(method='ffill'), exp)

-        self.assert_numpy_array_equal(ts.fillna(value=5), [0., 1., 5., 3., 4.])
+        exp = Series([0., 1., 3., 3., 4.], index=ts.index)
+        self.assert_series_equal(ts.fillna(method='backfill'), exp)
+
+        exp = Series([0., 1., 5., 3., 4.], index=ts.index)
+        self.assert_series_equal(ts.fillna(value=5), exp)

         self.assertRaises(ValueError, ts.fillna)
         self.assertRaises(ValueError, self.ts.fillna, value=0, method='ffill')
@@ -433,8 +435,8 @@ def test_valid(self):

         result = ts.valid()
         self.assertEqual(len(result), ts.count())
-
-        tm.assert_dict_equal(result, ts, compare_keys=False)
+        tm.assert_series_equal(result, ts[1::2])
+        tm.assert_series_equal(result, ts[pd.notnull(ts)])

     def test_isnull(self):
         ser = Series([0, 5.4, 3, nan, -0.001])
@@ -488,7 +490,7 @@ def test_interpolate(self):
         ts_copy[5:10] = np.NaN

         linear_interp = ts_copy.interpolate(method='linear')
-        self.assert_numpy_array_equal(linear_interp, ts)
+        self.assert_series_equal(linear_interp, ts)

         ord_ts = Series([d.toordinal() for d in self.ts.index],
                         index=self.ts.index).astype(float)
@@ -497,7 +499,7 @@ def test_interpolate(self):
         ord_ts_copy[5:10] = np.NaN

         time_interp = ord_ts_copy.interpolate(method='time')
-        self.assert_numpy_array_equal(time_interp, ord_ts)
+        self.assert_series_equal(time_interp, ord_ts)

         # try time interpolation on a non-TimeSeries
         # Only raises ValueError if there are NaNs.
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index c5ef969d3b39d..1e23c87fdb4ca 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -264,6 +264,18 @@ def test_operators_timedelta64(self):
         rs[2] += np.timedelta64(timedelta(minutes=5, seconds=1))
         self.assertEqual(rs[2], value)

+    def test_operator_series_comparison_zerorank(self):
+        # GH 13006
+        result = np.float64(0) > pd.Series([1, 2, 3])
+        expected = 0.0 > pd.Series([1, 2, 3])
+        self.assert_series_equal(result, expected)
+        result = pd.Series([1, 2, 3]) < np.float64(0)
+        expected = pd.Series([1, 2, 3]) < 0.0
+        self.assert_series_equal(result, expected)
+        result = np.array([0, 1, 2])[0] > pd.Series([0, 1, 2])
+        expected = 0.0 > pd.Series([1, 2, 3])
+        self.assert_series_equal(result, expected)
+
     def test_timedeltas_with_DateOffset(self):

         # GH 4532
@@ -1227,8 +1239,9 @@ def test_operators_corner(self):
         # float + int
         int_ts = self.ts.astype(int)[:-5]
         added = self.ts + int_ts
-        expected = self.ts.values[:-5] + int_ts.values
-        self.assert_numpy_array_equal(added[:-5], expected)
+        expected = Series(self.ts.values[:-5] + int_ts.values,
+                          index=self.ts.index[:-5], name='ts')
+        self.assert_series_equal(added[:-5], expected)

     def test_operators_reverse_object(self):
         # GH 56
diff --git a/pandas/tests/series/test_quantile.py b/pandas/tests/series/test_quantile.py
index f538fa4e90401..e0bff7fbd39e4 100644
--- a/pandas/tests/series/test_quantile.py
+++ b/pandas/tests/series/test_quantile.py
@@ -126,6 +126,14 @@ def test_quantile_interpolation_np_lt_1p9(self):
                        interpolation='higher')

     def test_quantile_nan(self):
+
+        # GH 13098
+        s = pd.Series([1, 2, 3, 4, np.nan])
+        result = s.quantile(0.5)
+        expected = 2.5
+        self.assertEqual(result, expected)
+
+        # all nan/empty
         cases = [Series([]), Series([np.nan, np.nan])]

         for s in cases:
diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py
index ee06bc2c3dd4e..13b95ea97eedf 100644
--- a/pandas/tests/series/test_timeseries.py
+++ b/pandas/tests/series/test_timeseries.py
@@ -25,7 +25,10 @@ def test_shift(self):
         shifted = self.ts.shift(1)
         unshifted = shifted.shift(-1)

-        tm.assert_dict_equal(unshifted.valid(), self.ts, compare_keys=False)
+        tm.assert_index_equal(shifted.index, self.ts.index)
+        tm.assert_index_equal(unshifted.index, self.ts.index)
+        tm.assert_numpy_array_equal(unshifted.valid().values,
+                                    self.ts.values[:-1])

         offset = datetools.bday
         shifted = self.ts.shift(1, freq=offset)
@@ -49,7 +52,9 @@ def test_shift(self):
         ps = tm.makePeriodSeries()
         shifted = ps.shift(1)
         unshifted = shifted.shift(-1)
-        tm.assert_dict_equal(unshifted.valid(), ps, compare_keys=False)
+        tm.assert_index_equal(shifted.index, ps.index)
+        tm.assert_index_equal(unshifted.index, ps.index)
+        tm.assert_numpy_array_equal(unshifted.valid().values, ps.values[:-1])

         shifted2 = ps.shift(1, 'B')
         shifted3 = ps.shift(1, datetools.bday)
@@ -77,16 +82,16 @@ def test_shift(self):

         # xref 8260
         # with tz
-        s = Series(
-            date_range('2000-01-01 09:00:00', periods=5,
-                       tz='US/Eastern'), name='foo')
+        s = Series(date_range('2000-01-01 09:00:00', periods=5,
+                              tz='US/Eastern'), name='foo')
         result = s - s.shift()
-        assert_series_equal(result, Series(
-            TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo'))
+
+        exp = Series(TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo')
+        assert_series_equal(result, exp)

         # incompat tz
-        s2 = Series(
-            date_range('2000-01-01 09:00:00', periods=5, tz='CET'), name='foo')
+        s2 = Series(date_range('2000-01-01 09:00:00', periods=5,
+                               tz='CET'), name='foo')
         self.assertRaises(ValueError, lambda: s - s2)

     def test_tshift(self):
@@ -346,8 +351,10 @@ def test_getitem_setitem_datetime_tz_dateutil(self):
         from pandas import date_range

         N = 50
+
         # testing with timezone, GH #2785
-        rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
+        rng = date_range('1/1/1990', periods=N, freq='H',
+                         tz='America/New_York')
         ts = Series(np.random.randn(N), index=rng)

         # also test Timestamp tz handling, GH #2789
@@ -368,8 +375,8 @@ def test_getitem_setitem_datetime_tz_dateutil(self):
         assert_series_equal(result, ts)

         result = ts.copy()
-        result[datetime(1990, 1, 1, 3, tzinfo=tz('US/Central'))] = 0
-        result[datetime(1990, 1, 1, 3, tzinfo=tz('US/Central'))] = ts[4]
+        result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = 0
+        result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = ts[4]
         assert_series_equal(result, ts)

     def test_getitem_setitem_periodindex(self):
@@ -485,15 +492,15 @@ def test_asfreq(self):

         daily_ts = ts.asfreq('B')
         monthly_ts = daily_ts.asfreq('BM')
-        self.assert_numpy_array_equal(monthly_ts, ts)
+        self.assert_series_equal(monthly_ts, ts)

         daily_ts = ts.asfreq('B', method='pad')
         monthly_ts = daily_ts.asfreq('BM')
-        self.assert_numpy_array_equal(monthly_ts, ts)
+        self.assert_series_equal(monthly_ts, ts)

         daily_ts = ts.asfreq(datetools.bday)
         monthly_ts = daily_ts.asfreq(datetools.bmonthEnd)
-        self.assert_numpy_array_equal(monthly_ts, ts)
+        self.assert_series_equal(monthly_ts, ts)

         result = ts[:0].asfreq('M')
         self.assertEqual(len(result), 0)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 917f108711d09..8af93ad0ecb2e 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -3,15 +3,20 @@

 import numpy as np
 from numpy.random import RandomState
+from numpy import nan
+import datetime

-from pandas.core.api import Series, Categorical, CategoricalIndex
+from pandas import Series, Categorical, CategoricalIndex, Index
 import pandas as pd

 from pandas import compat
+import pandas.algos as _algos
+from pandas.compat import lrange
 import pandas.core.algorithms as algos
 import pandas.util.testing as tm
 import pandas.hashtable as hashtable
 from pandas.compat.numpy import np_array_datetime64_compat
+from pandas.util.testing import assert_almost_equal


 class TestMatch(tm.TestCase):
@@ -102,14 +107,14 @@ def test_mixed(self):
         exp = np.array([0, 0, -1, 1, 2, 3], dtype=np.int_)
         self.assert_numpy_array_equal(labels, exp)
-        exp = np.array(['A', 'B', 3.14, np.inf], dtype=object)
-        self.assert_numpy_array_equal(uniques, exp)
+        exp = pd.Index(['A', 'B', 3.14, np.inf])
+        tm.assert_index_equal(uniques, exp)

         labels, uniques = algos.factorize(x, sort=True)
         exp = np.array([2, 2, -1, 3, 0, 1], dtype=np.int_)
         self.assert_numpy_array_equal(labels, exp)
-        exp = np.array([3.14, np.inf, 'A', 'B'], dtype=object)
-        self.assert_numpy_array_equal(uniques, exp)
+        exp = pd.Index([3.14, np.inf, 'A', 'B'])
+        tm.assert_index_equal(uniques, exp)

     def test_datelike(self):
@@ -121,14 +126,14 @@ def test_datelike(self):
         exp = np.array([0, 0, 0, 1, 1, 0], dtype=np.int_)
         self.assert_numpy_array_equal(labels, exp)
-        exp = np.array([v1.value, v2.value], dtype='M8[ns]')
-        self.assert_numpy_array_equal(uniques, exp)
+        exp = pd.DatetimeIndex([v1, v2])
+        self.assert_index_equal(uniques, exp)

         labels, uniques = algos.factorize(x, sort=True)
         exp = np.array([1, 1, 1, 0, 0, 1], dtype=np.int_)
         self.assert_numpy_array_equal(labels, exp)
-        exp = np.array([v2.value, v1.value], dtype='M8[ns]')
-        self.assert_numpy_array_equal(uniques, exp)
+        exp = pd.DatetimeIndex([v2, v1])
+        self.assert_index_equal(uniques, exp)

         # period
         v1 = pd.Period('201302', freq='M')
@@ -139,12 +144,12 @@ def test_datelike(self):
         labels, uniques = algos.factorize(x)
         exp = np.array([0, 0, 0, 1, 1, 0], dtype=np.int_)
         self.assert_numpy_array_equal(labels, exp)
-        self.assert_numpy_array_equal(uniques, pd.PeriodIndex([v1, v2]))
+        self.assert_index_equal(uniques, pd.PeriodIndex([v1, v2]))

         labels, uniques = algos.factorize(x, sort=True)
         exp = np.array([0, 0, 0, 1, 1, 0], dtype=np.int_)
         self.assert_numpy_array_equal(labels, exp)
-        self.assert_numpy_array_equal(uniques, pd.PeriodIndex([v1, v2]))
+        self.assert_index_equal(uniques, pd.PeriodIndex([v1, v2]))

         # GH 5986
         v1 = pd.to_timedelta('1 day 1 min')
@@ -153,12 +158,12 @@ def test_datelike(self):
         labels, uniques = algos.factorize(x)
         exp = np.array([0, 1, 0, 0, 1, 1, 0], dtype=np.int_)
         self.assert_numpy_array_equal(labels, exp)
-        self.assert_numpy_array_equal(uniques, pd.to_timedelta([v1, v2]))
+        self.assert_index_equal(uniques, pd.to_timedelta([v1, v2]))

         labels, uniques = algos.factorize(x, sort=True)
         exp = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.int_)
         self.assert_numpy_array_equal(labels, exp)
-        self.assert_numpy_array_equal(uniques, pd.to_timedelta([v2, v1]))
+        self.assert_index_equal(uniques, pd.to_timedelta([v2, v1]))

     def test_factorize_nan(self):
         # nan should map to na_sentinel, not reverse_indexer[na_sentinel]
@@ -580,7 +585,7 @@ def test_group_var_generic_1d(self):
         expected_counts = counts + 3

         self.algo(out, counts, values, labels)
-        np.testing.assert_allclose(out, expected_out, self.rtol)
+        self.assertTrue(np.allclose(out, expected_out, self.rtol))
         tm.assert_numpy_array_equal(counts, expected_counts)

     def test_group_var_generic_1d_flat_labels(self):
@@ -596,7 +601,7 @@ def test_group_var_generic_1d_flat_labels(self):

         self.algo(out, counts, values, labels)

-        np.testing.assert_allclose(out, expected_out, self.rtol)
+        self.assertTrue(np.allclose(out, expected_out, self.rtol))
         tm.assert_numpy_array_equal(counts, expected_counts)

     def test_group_var_generic_2d_all_finite(self):
@@ -611,7 +616,7 @@ def test_group_var_generic_2d_all_finite(self):
         expected_counts = counts + 2

         self.algo(out, counts, values, labels)
-        np.testing.assert_allclose(out, expected_out, self.rtol)
+        self.assertTrue(np.allclose(out, expected_out, self.rtol))
         tm.assert_numpy_array_equal(counts, expected_counts)

     def test_group_var_generic_2d_some_nan(self):
@@ -626,11 +631,11 @@ def test_group_var_generic_2d_some_nan(self):
         expected_out = np.vstack([values[:, 0]
                                   .reshape(5, 2, order='F')
                                   .std(ddof=1, axis=1) ** 2,
-                                  np.nan * np.ones(5)]).T
+                                  np.nan * np.ones(5)]).T.astype(self.dtype)
         expected_counts = counts + 2

         self.algo(out, counts, values, labels)
-        np.testing.assert_allclose(out, expected_out, self.rtol)
+        tm.assert_almost_equal(out, expected_out, check_less_precise=6)
         tm.assert_numpy_array_equal(counts, expected_counts)

     def test_group_var_constant(self):
@@ -705,6 +710,315 @@ def test_unique_label_indices():
         tm.assert_numpy_array_equal(left, right)


+def test_rank():
+    tm._skip_if_no_scipy()
+    from scipy.stats import rankdata
+
+    def _check(arr):
+        mask = ~np.isfinite(arr)
+        arr = arr.copy()
+        result = _algos.rank_1d_float64(arr)
+        arr[mask] = np.inf
+        exp = rankdata(arr)
+        exp[mask] = nan
+        assert_almost_equal(result, exp)
+
+    _check(np.array([nan, nan, 5., 5., 5., nan, 1, 2, 3, nan]))
+    _check(np.array([4., nan, 5., 5., 5., nan, 1, 2, 4., nan]))
+
+
+def test_pad_backfill_object_segfault():
+
+    old = np.array([], dtype='O')
+    new = np.array([datetime.datetime(2010, 12, 31)], dtype='O')
+
+    result = _algos.pad_object(old, new)
+    expected = np.array([-1], dtype=np.int64)
+    assert (np.array_equal(result, expected))
+
+    result = _algos.pad_object(new, old)
+    expected = np.array([], dtype=np.int64)
+    assert (np.array_equal(result, expected))
+
+    result = _algos.backfill_object(old, new)
+    expected = np.array([-1], dtype=np.int64)
+    assert (np.array_equal(result, expected))
+
+    result = _algos.backfill_object(new, old)
+    expected = np.array([], dtype=np.int64)
+    assert (np.array_equal(result, expected))
+
+
+def test_arrmap():
+    values = np.array(['foo', 'foo', 'bar', 'bar', 'baz', 'qux'], dtype='O')
+    result = _algos.arrmap_object(values, lambda x: x in ['foo', 'bar'])
+    assert (result.dtype == np.bool_)
+
+
+class TestTseriesUtil(tm.TestCase):
+    _multiprocess_can_split_ = True
+
+    def test_combineFunc(self):
+        pass
+
+    def test_reindex(self):
+        pass
+
+    def test_isnull(self):
+        pass
+
+    def test_groupby(self):
+        pass
+
+    def test_groupby_withnull(self):
+        pass
+
+    def test_backfill(self):
+        old = Index([1, 5, 10])
+        new = Index(lrange(12))
+
+        filler = _algos.backfill_int64(old.values, new.values)
+
+        expect_filler = np.array([0, 0, 1, 1, 1, 1,
+                                  2, 2, 2, 2, 2, -1], dtype=np.int64)
+        self.assert_numpy_array_equal(filler, expect_filler)
+
+        # corner case
+        old = Index([1, 4])
+        new = Index(lrange(5, 10))
+        filler = _algos.backfill_int64(old.values, new.values)
+
+        expect_filler = np.array([-1, -1, -1, -1, -1], dtype=np.int64)
+        self.assert_numpy_array_equal(filler, expect_filler)
+
+    def test_pad(self):
+        old = Index([1, 5, 10])
+        new = Index(lrange(12))
+
+        filler = _algos.pad_int64(old.values, new.values)
+
+        expect_filler = np.array([-1, 0, 0, 0, 0, 1,
+                                  1, 1, 1, 1, 2, 2], dtype=np.int64)
+        self.assert_numpy_array_equal(filler, expect_filler)
+
+        # corner case
+        old = Index([5, 10])
+        new = Index(lrange(5))
+        filler = _algos.pad_int64(old.values, new.values)
+        expect_filler = np.array([-1, -1, -1, -1, -1], dtype=np.int64)
+        self.assert_numpy_array_equal(filler, expect_filler)
+
+
+def test_left_join_indexer_unique():
+    a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
+    b = np.array([2, 2, 3, 4, 4], dtype=np.int64)
+
+    result = _algos.left_join_indexer_unique_int64(b, a)
+    expected = np.array([1, 1, 2, 3, 3], dtype=np.int64)
+    assert (np.array_equal(result, expected))
+
+
+def test_left_outer_join_bug():
+    left = np.array([0, 1, 0, 1, 1, 2, 3, 1, 0, 2, 1, 2, 0, 1, 1, 2, 3, 2, 3,
+                     2, 1, 1, 3, 0, 3, 2, 3, 0, 0, 2, 3, 2, 0, 3, 1, 3, 0, 1,
+                     3, 0, 0, 1, 0, 3, 1, 0, 1, 0, 1, 1, 0, 2, 2, 2, 2, 2, 0,
+                     3, 1, 2, 0, 0, 3, 1, 3, 2, 2, 0, 1, 3, 0, 2, 3, 2, 3, 3,
+                     2, 3, 3, 1, 3, 2, 0, 0, 3, 1, 1, 1, 0, 2, 3, 3, 1, 2, 0,
+                     3, 1, 2, 0, 2], dtype=np.int64)
+
+    right = np.array([3, 1], dtype=np.int64)
+    max_groups = 4
+
+    lidx, ridx = _algos.left_outer_join(left, right, max_groups, sort=False)
+
+    exp_lidx = np.arange(len(left))
+    exp_ridx = -np.ones(len(left))
+    exp_ridx[left == 1] = 1
+    exp_ridx[left == 3] = 0
+
+    assert (np.array_equal(lidx, exp_lidx))
+    assert (np.array_equal(ridx, exp_ridx))
+
+
+def test_inner_join_indexer():
+    a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
+    b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
+
+    index, ares, bres = _algos.inner_join_indexer_int64(a, b)
+
+    index_exp = np.array([3, 5], dtype=np.int64)
+    assert_almost_equal(index, index_exp)
+
+    aexp = np.array([2, 4], dtype=np.int64)
+    bexp = np.array([1, 2], dtype=np.int64)
+    assert_almost_equal(ares, aexp)
+    assert_almost_equal(bres, bexp)
+
+    a = np.array([5], dtype=np.int64)
+    b = np.array([5], dtype=np.int64)
+
+    index, ares, bres = _algos.inner_join_indexer_int64(a, b)
+    tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
+    tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
+    tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
+
+
+def test_outer_join_indexer():
+    a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
+    b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
+
+    index, ares, bres = _algos.outer_join_indexer_int64(a, b)
+
+    index_exp = np.array([0, 1, 2, 3, 4, 5, 7, 9], dtype=np.int64)
+    assert_almost_equal(index, index_exp)
+
+    aexp = np.array([-1, 0, 1, 2, 3, 4, -1, -1], dtype=np.int64)
+    bexp = np.array([0, -1, -1, 1, -1, 2, 3, 4], dtype=np.int64)
+    assert_almost_equal(ares, aexp)
+    assert_almost_equal(bres, bexp)
+
+    a = np.array([5], dtype=np.int64)
+    b = np.array([5], dtype=np.int64)
+
+    index, ares, bres = _algos.outer_join_indexer_int64(a, b)
+    tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
+    tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
+    tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
+
+
+def test_left_join_indexer():
+    a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
+    b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
+
+    index, ares, bres = _algos.left_join_indexer_int64(a, b)
+
+    assert_almost_equal(index, a)
+
+    aexp = np.array([0, 1, 2, 3, 4], dtype=np.int64)
+    bexp = np.array([-1, -1, 1, -1, 2], dtype=np.int64)
+    assert_almost_equal(ares, aexp)
+    assert_almost_equal(bres, bexp)
+
+    a = np.array([5], dtype=np.int64)
+    b = np.array([5], dtype=np.int64)
+
+    index, ares, bres = _algos.left_join_indexer_int64(a, b)
+    tm.assert_numpy_array_equal(index, np.array([5], dtype=np.int64))
+    tm.assert_numpy_array_equal(ares, np.array([0], dtype=np.int64))
+    tm.assert_numpy_array_equal(bres, np.array([0], dtype=np.int64))
+
+
+def test_left_join_indexer2():
+    idx = Index([1, 1, 2, 5])
+    idx2 = Index([1, 2, 5, 7, 9])
+
+    res, lidx, ridx = _algos.left_join_indexer_int64(idx2.values, idx.values)
+
+    exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64)
+    assert_almost_equal(res, exp_res)
+
+    exp_lidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64)
+    assert_almost_equal(lidx, exp_lidx)
+
+    exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
+    assert_almost_equal(ridx, exp_ridx)
+
+
+def test_outer_join_indexer2():
+    idx = Index([1, 1, 2, 5])
+    idx2 = Index([1, 2, 5, 7, 9])
+
+    res, lidx, ridx = _algos.outer_join_indexer_int64(idx2.values, idx.values)
+
+    exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64)
+    assert_almost_equal(res, exp_res)
+
+    exp_lidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64)
+    assert_almost_equal(lidx, exp_lidx)
+
+    exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64)
+    assert_almost_equal(ridx, exp_ridx)
+
+
+def test_inner_join_indexer2():
+    idx = Index([1, 1, 2, 5])
+    idx2 = Index([1, 2, 5, 7, 9])
+
+    res, lidx, ridx = _algos.inner_join_indexer_int64(idx2.values, idx.values)
+
+    exp_res = np.array([1, 1, 2, 5], dtype=np.int64)
+    assert_almost_equal(res, exp_res)
+
+    exp_lidx = np.array([0, 0, 1, 2], dtype=np.int64)
+    assert_almost_equal(lidx, exp_lidx)
+
+    exp_ridx = np.array([0, 1, 2, 3], dtype=np.int64)
+    assert_almost_equal(ridx, exp_ridx)
+
+
+def test_is_lexsorted():
+    failure = [
+        np.array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
+                  3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
+                  2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
+                  2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
+                  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
+                  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
+                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
+        np.array([30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17,
+                  16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0,
+                  30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17,
+                  16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0,
+                  30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17,
+                  16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0,
+                  30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17,
+                  16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1,
+                  0])]
+
+    assert (not _algos.is_lexsorted(failure))
+
+# def test_get_group_index():
+#     a = np.array([0, 1, 2, 0, 2, 1, 0, 0], dtype=np.int64)
+#     b = np.array([1, 0, 3, 2, 0, 2, 3, 0], dtype=np.int64)
+#     expected = np.array([1, 4, 11, 2, 8, 6, 3, 0], dtype=np.int64)

+#     result = lib.get_group_index([a, b], (3, 4))

+#     assert(np.array_equal(result, expected))
+
+
+def test_groupsort_indexer():
+    a = np.random.randint(0, 1000, 100).astype(np.int64)
+    b = np.random.randint(0, 1000, 100).astype(np.int64)
+
+    result = _algos.groupsort_indexer(a, 1000)[0]
+
+    # need to use a stable sort
+    expected = np.argsort(a, kind='mergesort')
+    assert (np.array_equal(result, expected))
+
+    # compare with lexsort
+    key = a * 1000 + b
+    result = _algos.groupsort_indexer(key, 1000000)[0]
+    expected = np.lexsort((b, a))
+    assert (np.array_equal(result, expected))
+
+
+def test_ensure_platform_int():
+    arr = np.arange(100)
+
+    result = _algos.ensure_platform_int(arr)
+    assert (result is arr)
+
+
 if __name__ == '__main__':
     import nose
     nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 2fec7c591a2b7..77ae3ca20d123 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -18,7 +18,6 @@
 from pandas.core.base import (FrozenList, FrozenNDArray, PandasDelegate,
                               NoNewAttributesMixin)
 from pandas.tseries.base import DatetimeIndexOpsMixin
-from pandas.util.testing import (assertRaisesRegexp, assertIsInstance)


 class CheckStringMixin(object):
@@ -46,7 +45,7 @@ class CheckImmutable(object):
     def check_mutable_error(self, *args, **kwargs):
         # pass whatever functions you normally would to assertRaises (after the
         # Exception kind)
-        assertRaisesRegexp(TypeError, self.mutable_regex, *args, **kwargs)
+        tm.assertRaisesRegexp(TypeError, self.mutable_regex, *args, **kwargs)

     def test_no_mutable_funcs(self):
         def setitem():
@@ -79,7 +78,7 @@ def test_slicing_maintains_type(self):

     def check_result(self, result, expected, klass=None):
         klass = klass or self.klass
-        assertIsInstance(result, klass)
+        self.assertIsInstance(result, klass)
         self.assertEqual(result, expected)

@@ -120,13 +119,13 @@ def setUp(self):

     def test_shallow_copying(self):
         original = self.container.copy()
-        assertIsInstance(self.container.view(), FrozenNDArray)
+        self.assertIsInstance(self.container.view(), FrozenNDArray)
         self.assertFalse(isinstance(
             self.container.view(np.ndarray), FrozenNDArray))
         self.assertIsNot(self.container.view(), self.container)
         self.assert_numpy_array_equal(self.container, original)

         # shallow copy should be the same too
-        assertIsInstance(self.container._shallow_copy(), FrozenNDArray)
+        self.assertIsInstance(self.container._shallow_copy(), FrozenNDArray)

         # setting should not be allowed
         def testit(container):
@@ -141,48 +140,53 @@ def test_values(self):
         self.assert_numpy_array_equal(original, vals)
         self.assertIsNot(original, vals)
         vals[0] = n
-        self.assert_numpy_array_equal(self.container, original)
+        self.assertIsInstance(self.container, pd.core.base.FrozenNDArray)
+        self.assert_numpy_array_equal(self.container.values(), original)
         self.assertEqual(vals[0], n)


 class TestPandasDelegate(tm.TestCase):

-    def setUp(self):
-        pass
+    class Delegator(object):
+        _properties = ['foo']
+        _methods = ['bar']

-    def test_invalida_delgation(self):
-        # these show that in order for the delegation to work
-        # the _delegate_* methods need to be overriden to not raise a TypeError
+        def _set_foo(self, value):
+            self.foo = value

-        class Delegator(object):
-            _properties = ['foo']
-            _methods = ['bar']
+        def _get_foo(self):
+            return self.foo

-            def _set_foo(self, value):
-                self.foo = value
+        foo = property(_get_foo, _set_foo, doc="foo property")

-            def _get_foo(self):
-                return self.foo
+        def bar(self, *args, **kwargs):
+            """ a test bar method """
+            pass

-            foo = property(_get_foo, _set_foo, doc="foo property")
+    class Delegate(PandasDelegate):

-            def bar(self, *args, **kwargs):
-                """ a test bar method """
-                pass
+        def __init__(self, obj):
+            self.obj = obj

-        class Delegate(PandasDelegate):
+    def setUp(self):
+        pass

-            def __init__(self, obj):
-                self.obj = obj
+    def test_invalida_delgation(self):
+        # these show that in order for the delegation to work
+        # the _delegate_* methods need to be overriden to not raise a TypeError

-        Delegate._add_delegate_accessors(delegate=Delegator,
-                                         accessors=Delegator._properties,
-                                         typ='property')
-        Delegate._add_delegate_accessors(delegate=Delegator,
-                                         accessors=Delegator._methods,
-                                         typ='method')
+        self.Delegate._add_delegate_accessors(
+            delegate=self.Delegator,
+            accessors=self.Delegator._properties,
+            typ='property'
+        )
+        self.Delegate._add_delegate_accessors(
+            delegate=self.Delegator,
+            accessors=self.Delegator._methods,
+            typ='method'
+        )

-        delegate = Delegate(Delegator())
+        delegate = self.Delegate(self.Delegator())

         def f():
             delegate.foo
@@ -199,6 +203,13 @@ def f():

         self.assertRaises(TypeError, f)

+    def test_memory_usage(self):
+        # Delegate does not implement memory_usage.
+        # Check that we fall back to in-built `__sizeof__`
+        # GH 12924
+        delegate = self.Delegate(self.Delegator())
+        sys.getsizeof(delegate)
+

 class Ops(tm.TestCase):
@@ -437,7 +448,9 @@ def test_nanops(self):
         self.assertEqual(obj.argmax(), -1)

     def test_value_counts_unique_nunique(self):
-        for o in self.objs:
+        for orig in self.objs:
+
+            o = orig.copy()
             klass = type(o)
             values = o.values
@@ -474,13 +487,11 @@ def test_value_counts_unique_nunique(self):
                 else:
                     expected_index = pd.Index(values[::-1])
                 idx = o.index.repeat(range(1, len(o) + 1))
-                o = klass(
-                    np.repeat(values, range(1,
-                                            len(o) + 1)), index=idx, name='a')
+                o = klass(np.repeat(values, range(1, len(o) + 1)),
+                          index=idx, name='a')

-                expected_s = Series(
-                    range(10, 0, -
-                          1), index=expected_index, dtype='int64', name='a')
+                expected_s = Series(range(10, 0, -1), index=expected_index,
+                                    dtype='int64', name='a')

                 result = o.value_counts()
                 tm.assert_series_equal(result, expected_s)
@@ -490,10 +501,10 @@ def test_value_counts_unique_nunique(self):
             result = o.unique()
             if isinstance(o, (DatetimeIndex, PeriodIndex)):
                 self.assertTrue(isinstance(result, o.__class__))
-                self.assertEqual(result.name, o.name)
                 self.assertEqual(result.freq, o.freq)
-
-            self.assert_numpy_array_equal(result, values)
+                self.assert_index_equal(result, orig)
+            else:
+                self.assert_numpy_array_equal(result, values)

             self.assertEqual(o.nunique(), len(np.unique(o.values)))
@@ -530,9 +541,8 @@ def test_value_counts_unique_nunique(self):
                     # resets name from Index
                     expected_index = pd.Index(o, name=None)
                     # attach name to klass
-                    o = klass(
-                        np.repeat(values, range(
-                            1, len(o) + 1)), freq=o.freq, name='a')
+                    o = klass(np.repeat(values, range(1, len(o) + 1)),
+                              freq=o.freq, name='a')
                 elif isinstance(o, Index):
                     expected_index = pd.Index(values, name=None)
                     o = klass(
@@ -599,6 +609,12 @@ def test_value_counts_inferred(self):
             expected = Series([.4, .3, .2, .1], index=['b', 'a', 'd', 'c'])
             tm.assert_series_equal(hist, expected)

+    def test_value_counts_bins(self):
+        klasses = [Index, Series]
+        for klass in klasses:
+            s_values = ['a', 'b', 'b', 'b', 'b', 'c', 'd', 'd', 'a', 'a']
+            s = klass(s_values)
+
             # bins
             self.assertRaises(TypeError,
                               lambda bins: s.value_counts(bins=bins), 1)
@@ -649,6 +665,9 @@ def test_value_counts_inferred(self):
                                     check_dtype=False)
             self.assertEqual(s.nunique(), 0)

+    def test_value_counts_datetime64(self):
+        klasses = [Index, Series]
+        for klass in klasses:
             # GH 3002, datetime64[ns]
             # don't test names though
             txt = "\n".join(['xxyyzz20100101PIE', 'xxyyzz20100101GUM',
@@ -662,9 +681,9 @@ def test_value_counts_inferred(self):
             s = klass(df['dt'].copy())
             s.name = None

-            idx = pd.to_datetime(
-                ['2010-01-01 00:00:00Z', '2008-09-09 00:00:00Z',
-                 '2009-01-01 00:00:00X'])
+            idx = pd.to_datetime(['2010-01-01 00:00:00Z',
+                                  '2008-09-09 00:00:00Z',
+                                  '2009-01-01 00:00:00X'])
             expected_s = Series([3, 2, 1], index=idx)
             tm.assert_series_equal(s.value_counts(), expected_s)
@@ -673,8 +692,7 @@ def test_value_counts_inferred(self):
                                  '2008-09-09 00:00:00Z'],
                                 dtype='datetime64[ns]')
             if isinstance(s, DatetimeIndex):
-                expected = DatetimeIndex(expected)
-                self.assertTrue(s.unique().equals(expected))
+                self.assert_index_equal(s.unique(), DatetimeIndex(expected))
             else:
                 self.assert_numpy_array_equal(s.unique(), expected)
@@ -696,9 +714,12 @@ def test_value_counts_inferred(self):
             self.assertEqual(unique.dtype, 'datetime64[ns]')

             # numpy_array_equal cannot compare pd.NaT
-            self.assert_numpy_array_equal(unique[:3], expected)
-            self.assertTrue(unique[3] is pd.NaT or unique[3].astype('int64') ==
-                            pd.tslib.iNaT)
+            if isinstance(s, DatetimeIndex):
+                self.assert_index_equal(unique[:3], DatetimeIndex(expected))
+            else:
+                self.assert_numpy_array_equal(unique[:3], expected)
+            self.assertTrue(unique[3] is pd.NaT or
+                            unique[3].astype('int64') == pd.tslib.iNaT)

             self.assertEqual(s.nunique(), 3)
             self.assertEqual(s.nunique(dropna=False), 4)
@@ -711,9 +732,9 @@ def test_value_counts_inferred(self):
             expected_s = Series([6], index=[Timedelta('1day')], name='dt')
             tm.assert_series_equal(result, expected_s)

-            expected = TimedeltaIndex(['1 days'])
+            expected = TimedeltaIndex(['1 days'], name='dt')
             if isinstance(td, TimedeltaIndex):
-                self.assertTrue(td.unique().equals(expected))
+                self.assert_index_equal(td.unique(), expected)
             else:
                 self.assert_numpy_array_equal(td.unique(), expected.values)
@@ -723,7 +744,8 @@ def test_value_counts_inferred(self):
             tm.assert_series_equal(result2, expected_s)

     def test_factorize(self):
-        for o in self.objs:
+        for orig in self.objs:
+            o = orig.copy()

             if isinstance(o, Index) and o.is_boolean():
                 exp_arr = np.array([0, 1] + [0] * 8)
@@ -736,12 +758,16 @@ def test_factorize(self):

             self.assert_numpy_array_equal(labels, exp_arr)
             if isinstance(o, Series):
-                expected = Index(o.values)
-                self.assert_numpy_array_equal(uniques, expected)
+                self.assert_index_equal(uniques, Index(orig),
+                                        check_names=False)
             else:
-                self.assertTrue(uniques.equals(exp_uniques))
+                # factorize explicitly resets name
+                self.assert_index_equal(uniques, exp_uniques,
+                                        check_names=False)

-        for o in self.objs:
+    def test_factorize_repeated(self):
+        for orig in self.objs:
+            o = orig.copy()

             # don't test boolean
             if isinstance(o, Index) and o.is_boolean():
@@ -761,27 +787,25 @@ def test_factorize(self):

             self.assert_numpy_array_equal(labels, exp_arr)
             if isinstance(o, Series):
-                expected = Index(o.values)
-                self.assert_numpy_array_equal(uniques, expected)
+                self.assert_index_equal(uniques, Index(orig).sort_values(),
+                                        check_names=False)
             else:
-                self.assertTrue(uniques.equals(o))
+                self.assert_index_equal(uniques, o, check_names=False)

             exp_arr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4])
             labels, uniques = n.factorize(sort=False)
             self.assert_numpy_array_equal(labels, exp_arr)

             if isinstance(o, Series):
-                expected = Index(np.concatenate([o.values[5:10], o.values[:5]
-                                                 ]))
-                self.assert_numpy_array_equal(uniques, expected)
+                expected = Index(o.iloc[5:10].append(o.iloc[:5]))
+                self.assert_index_equal(uniques, expected, check_names=False)
             else:
-                expected = o[5:].append(o[:5])
-                self.assertTrue(uniques.equals(expected))
+                expected = o[5:10].append(o[:5])
+                self.assert_index_equal(uniques, expected, check_names=False)

-    def test_duplicated_drop_duplicates(self):
+    def test_duplicated_drop_duplicates_index(self):
         # GH 4060
         for original in self.objs:
-
             if isinstance(original, Index):

                 # special case
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 55df64264d6f9..cff5bbe14f1eb 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -34,10 +34,12 @@ def test_getitem(self):
         self.assertEqual(self.factor[-1], 'c')

         subf = self.factor[[0, 1, 2]]
-        tm.assert_almost_equal(subf._codes, [0, 1, 1])
+        tm.assert_numpy_array_equal(subf._codes,
+                                    np.array([0, 1, 1], dtype=np.int8))

         subf = self.factor[np.asarray(self.factor) == 'c']
-        tm.assert_almost_equal(subf._codes, [2, 2, 2])
+        tm.assert_numpy_array_equal(subf._codes,
+                                    np.array([2, 2, 2], dtype=np.int8))

     def test_getitem_listlike(self):

@@ -157,39 +159,39 @@ def f():
         # Categorical as input
         c1 = Categorical(["a", "b", "c", "a"])
         c2 = Categorical(c1)
-        self.assertTrue(c1.equals(c2))
+        tm.assert_categorical_equal(c1, c2)

         c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"])
         c2 = Categorical(c1)
-        self.assertTrue(c1.equals(c2))
+        tm.assert_categorical_equal(c1, c2)

         c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"])
         c2 = Categorical(c1)
-        self.assertTrue(c1.equals(c2))
+        tm.assert_categorical_equal(c1, c2)

         c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"])
         c2 = Categorical(c1, categories=["a", "b", "c"])
         self.assert_numpy_array_equal(c1.__array__(), c2.__array__())
-        self.assert_numpy_array_equal(c2.categories, np.array(["a", "b", "c"]))
+        self.assert_index_equal(c2.categories, Index(["a", "b", "c"]))

         # Series of dtype category
         c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"])
         c2 = Categorical(Series(c1))
-        self.assertTrue(c1.equals(c2))
+        tm.assert_categorical_equal(c1, c2)

         c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"])
         c2 = Categorical(Series(c1))
-        self.assertTrue(c1.equals(c2))
+        tm.assert_categorical_equal(c1, c2)

         # Series
         c1 = Categorical(["a", "b", "c", "a"])
         c2 = Categorical(Series(["a", "b", "c", "a"]))
-        self.assertTrue(c1.equals(c2))
+        tm.assert_categorical_equal(c1, c2)

         c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"])
-        c2 = Categorical(
-            Series(["a", "b", "c", "a"]), categories=["a", "b", "c", "d"])
-        self.assertTrue(c1.equals(c2))
+        c2 = Categorical(Series(["a", "b", "c", "a"]),
+                         categories=["a", "b", "c", "d"])
+        tm.assert_categorical_equal(c1, c2)

         # This should result in integer categories, not float!
         cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
@@ -281,11 +283,12 @@ def f():

     def test_constructor_with_index(self):
         ci = CategoricalIndex(list('aabbca'), categories=list('cab'))
-        self.assertTrue(ci.values.equals(Categorical(ci)))
+        tm.assert_categorical_equal(ci.values, Categorical(ci))

         ci = CategoricalIndex(list('aabbca'), categories=list('cab'))
-        self.assertTrue(ci.values.equals(Categorical(
-            ci.astype(object), categories=ci.categories)))
+        tm.assert_categorical_equal(ci.values,
+                                    Categorical(ci.astype(object),
+                                                categories=ci.categories))

     def test_constructor_with_generator(self):
         # This was raising an Error in isnull(single_val).any() because isnull
@@ -294,9 +297,9 @@ def test_constructor_with_generator(self):

         exp = Categorical([0, 1, 2])
         cat = Categorical((x for x in [0, 1, 2]))
-        self.assertTrue(cat.equals(exp))
+        tm.assert_categorical_equal(cat, exp)
         cat = Categorical(xrange(3))
-        self.assertTrue(cat.equals(exp))
+        tm.assert_categorical_equal(cat, exp)

         # This uses xrange internally
         from pandas.core.index import MultiIndex
@@ -304,9 +307,9 @@ def test_constructor_with_generator(self):

         # check that categories accept generators and sequences
         cat = pd.Categorical([0, 1, 2], categories=(x for x in [0, 1, 2]))
-        self.assertTrue(cat.equals(exp))
+        tm.assert_categorical_equal(cat, exp)
         cat = pd.Categorical([0, 1, 2], categories=xrange(3))
-        self.assertTrue(cat.equals(exp))
+        tm.assert_categorical_equal(cat, exp)

     def test_constructor_with_datetimelike(self):
@@ -393,7 +396,7 @@ def f():

         exp = Categorical(["a", "b", "c"], ordered=False)
         res = Categorical.from_codes([0, 1, 2], ["a", "b", "c"])
-        self.assertTrue(exp.equals(res))
+        tm.assert_categorical_equal(exp, res)

         # Not available in earlier numpy versions
         if hasattr(np.random, "choice"):
@@ -404,27 +407,27 @@ def test_comparisons(self):

         result = self.factor[self.factor == 'a']
         expected = self.factor[np.asarray(self.factor) == 'a']
-        self.assertTrue(result.equals(expected))
+        tm.assert_categorical_equal(result, expected)

         result = self.factor[self.factor != 'a']
         expected = self.factor[np.asarray(self.factor) != 'a']
-        self.assertTrue(result.equals(expected))
+        tm.assert_categorical_equal(result, expected)

         result = self.factor[self.factor < 'c']
         expected = self.factor[np.asarray(self.factor) < 'c']
-        self.assertTrue(result.equals(expected))
+        tm.assert_categorical_equal(result, expected)

         result = self.factor[self.factor > 'a']
         expected = self.factor[np.asarray(self.factor) > 'a']
-        self.assertTrue(result.equals(expected))
+        tm.assert_categorical_equal(result, expected)

         result = self.factor[self.factor >= 'b']
         expected = self.factor[np.asarray(self.factor) >= 'b']
-        self.assertTrue(result.equals(expected))
+        tm.assert_categorical_equal(result, expected)

         result = self.factor[self.factor <= 'b']
         expected = self.factor[np.asarray(self.factor) <= 'b']
-        self.assertTrue(result.equals(expected))
+        tm.assert_categorical_equal(result, expected)

         n = len(self.factor)
@@ -551,33 +554,40 @@ def test_na_flags_int_categories(self):
     def test_categories_none(self):
         factor = Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'],
                              ordered=True)
-        self.assertTrue(factor.equals(self.factor))
+        tm.assert_categorical_equal(factor, self.factor)

     def test_describe(self):
         # string type
         desc = self.factor.describe()
+        self.assertTrue(self.factor.ordered)
+        exp_index = pd.CategoricalIndex(['a', 'b', 'c'], name='categories',
+                                        ordered=self.factor.ordered)
         expected = DataFrame({'counts': [3, 2, 3],
                               'freqs': [3 / 8., 2 / 8., 3 / 8.]},
-                             index=pd.CategoricalIndex(['a', 'b', 'c'],
-                                                       name='categories'))
+                             index=exp_index)
         tm.assert_frame_equal(desc, expected)

         # check unused categories
         cat = self.factor.copy()
         cat.set_categories(["a", "b", "c", "d"], inplace=True)
         desc = cat.describe()
+
+        exp_index = pd.CategoricalIndex(['a', 'b', 'c', 'd'],
+                                        ordered=self.factor.ordered,
+                                        name='categories')
         expected = DataFrame({'counts': [3, 2, 3, 0],
                               'freqs': [3 / 8., 2 / 8., 3 / 8., 0]},
-                             index=pd.CategoricalIndex(['a', 'b', 'c', 'd'],
-                                                       name='categories'))
+                             index=exp_index)
         tm.assert_frame_equal(desc, expected)

         # check an integer one
-        desc = Categorical([1, 2, 3, 1, 2, 3, 3, 2, 1, 1, 1]).describe()
+        cat = Categorical([1, 2, 3, 1, 2, 3, 3, 2, 1, 1, 1])
+        desc = cat.describe()
+        exp_index = pd.CategoricalIndex([1, 2, 3], ordered=cat.ordered,
+                                        name='categories')
         expected = DataFrame({'counts': [5, 3, 3],
                               'freqs': [5 / 11., 3 / 11., 3 / 11.]},
-                             index=pd.CategoricalIndex([1, 2, 3],
-                                                       name='categories'))
+                             index=exp_index)
         tm.assert_frame_equal(desc, expected)

         # https://github.com/pydata/pandas/issues/3678
@@ -601,7 +611,7 @@ def test_describe(self):
                                 columns=['counts', 'freqs'],
                                 index=pd.CategoricalIndex(['b', 'a', 'c', np.nan],
                                                           name='categories'))
-        tm.assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected, check_categorical=False)

         # NA as an unused category
         with tm.assert_produces_warning(FutureWarning):
@@ -613,7 +623,7 @@ def test_describe(self):
             ['b', 'a', 'c', np.nan], name='categories')
         expected = DataFrame([[0, 0], [1, 1 / 3.], [2, 2 / 3.], [0, 0]],
                              columns=['counts', 'freqs'], index=exp_idx)
-        tm.assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected, check_categorical=False)

     def test_print(self):
         expected = ["[a, b, b, a, a, c, c, c]",
@@ -703,7 +713,7 @@ def test_periodindex(self):
         exp_arr = np.array([0, 0, 1, 1, 2, 2], dtype=np.int8)
         exp_idx = PeriodIndex(['2014-01', '2014-02', '2014-03'], freq='M')
         self.assert_numpy_array_equal(cat1._codes, exp_arr)
-        self.assertTrue(cat1.categories.equals(exp_idx))
+        self.assert_index_equal(cat1.categories, exp_idx)

         idx2 = PeriodIndex(['2014-03', '2014-03', '2014-02', '2014-01',
                             '2014-03', '2014-01'], freq='M')
@@ -712,7 +722,7 @@ def test_periodindex(self):
         exp_arr = np.array([2, 2, 1, 0, 2, 0], dtype=np.int8)
         exp_idx2 = PeriodIndex(['2014-01', '2014-02', '2014-03'], freq='M')
         self.assert_numpy_array_equal(cat2._codes, exp_arr)
-        self.assertTrue(cat2.categories.equals(exp_idx2))
+        self.assert_index_equal(cat2.categories, exp_idx2)

         idx3 = PeriodIndex(['2013-12', '2013-11', '2013-10', '2013-09',
                             '2013-08', '2013-07', '2013-05'], freq='M')
@@ -721,15 +731,14 @@ def test_periodindex(self):
         exp_idx = PeriodIndex(['2013-05', '2013-07', '2013-08', '2013-09',
                                '2013-10', '2013-11', '2013-12'], freq='M')
         self.assert_numpy_array_equal(cat3._codes, exp_arr)
-        self.assertTrue(cat3.categories.equals(exp_idx))
+        self.assert_index_equal(cat3.categories, exp_idx)

     def test_categories_assigments(self):
         s = pd.Categorical(["a", "b", "c", "a"])
         exp = np.array([1, 2, 3, 1], dtype=np.int64)
         s.categories = [1, 2, 3]
         self.assert_numpy_array_equal(s.__array__(), exp)
-        self.assert_numpy_array_equal(s.categories,
-                                      np.array([1, 2, 3], dtype=np.int64))
+        self.assert_index_equal(s.categories, Index([1, 2, 3]))

         # lengthen
         def f():
@@ -755,21 +764,21 @@ def test_construction_with_ordered(self):
     def test_ordered_api(self):
         # GH 9347
         cat1 = pd.Categorical(["a", "c", "b"], ordered=False)
-        self.assertTrue(cat1.categories.equals(Index(['a', 'b', 'c'])))
+        self.assert_index_equal(cat1.categories, Index(['a', 'b', 'c']))
         self.assertFalse(cat1.ordered)

         cat2 = pd.Categorical(["a", "c", "b"], categories=['b', 'c', 'a'],
                               ordered=False)
-        self.assertTrue(cat2.categories.equals(Index(['b', 'c', 'a'])))
+        self.assert_index_equal(cat2.categories, Index(['b', 'c', 'a']))
         self.assertFalse(cat2.ordered)

         cat3 = pd.Categorical(["a", "c", "b"], ordered=True)
-        self.assertTrue(cat3.categories.equals(Index(['a', 'b', 'c'])))
+        self.assert_index_equal(cat3.categories, Index(['a', 'b', 'c']))
        self.assertTrue(cat3.ordered)

         cat4 = pd.Categorical(["a", "c", "b"], categories=['b', 'c', 'a'],
                               ordered=True)
-        self.assertTrue(cat4.categories.equals(Index(['b', 'c', 'a'])))
+        self.assert_index_equal(cat4.categories, Index(['b', 'c', 'a']))
         self.assertTrue(cat4.ordered)

     def test_set_ordered(self):
@@ -801,21 +810,21 @@ def test_set_ordered(self):

     def test_set_categories(self):
         cat = Categorical(["a", "b", "c", "a"], ordered=True)
-        exp_categories = np.array(["c", "b", "a"], dtype=np.object_)
+        exp_categories = Index(["c", "b", "a"])
         exp_values = np.array(["a", "b", "c", "a"], dtype=np.object_)

         res = cat.set_categories(["c", "b", "a"], inplace=True)
-        self.assert_numpy_array_equal(cat.categories, exp_categories)
+        self.assert_index_equal(cat.categories, exp_categories)
         self.assert_numpy_array_equal(cat.__array__(), exp_values)
         self.assertIsNone(res)

         res = cat.set_categories(["a", "b", "c"])
         # cat must be the same as before
-        self.assert_numpy_array_equal(cat.categories, exp_categories)
+        self.assert_index_equal(cat.categories, exp_categories)
         self.assert_numpy_array_equal(cat.__array__(), exp_values)
         # only res is changed
-        exp_categories_back = np.array(["a", "b", "c"])
-        self.assert_numpy_array_equal(res.categories, exp_categories_back)
+        exp_categories_back = Index(["a", "b", "c"])
+        self.assert_index_equal(res.categories, exp_categories_back)
         self.assert_numpy_array_equal(res.__array__(), exp_values)

         # not all "old" included in "new" -> all not included ones are now
@@ -829,19 +838,18 @@ def test_set_categories(self):
         res = cat.set_categories(["a", "b", "d"])
         self.assert_numpy_array_equal(res.codes,
                                       np.array([0, 1, -1, 0], dtype=np.int8))
-        self.assert_numpy_array_equal(res.categories,
-                                      np.array(["a", "b", "d"]))
+        self.assert_index_equal(res.categories, Index(["a", "b", "d"]))

         # all "old" included in "new"
         cat = cat.set_categories(["a", "b", "c", "d"])
-        exp_categories = np.array(["a", "b", "c", "d"], dtype=np.object_)
-        self.assert_numpy_array_equal(cat.categories, exp_categories)
+        exp_categories = Index(["a", "b", "c", "d"])
+        self.assert_index_equal(cat.categories, exp_categories)

         # internals...
         c = Categorical([1, 2, 3, 4, 1], categories=[1, 2, 3, 4], ordered=True)
         self.assert_numpy_array_equal(c._codes,
                                       np.array([0, 1, 2, 3, 0], dtype=np.int8))
-        self.assert_numpy_array_equal(c.categories, np.array([1, 2, 3, 4]))
+        self.assert_index_equal(c.categories, Index([1, 2, 3, 4]))

         exp = np.array([1, 2, 3, 4, 1], dtype=np.int64)
         self.assert_numpy_array_equal(c.get_values(), exp)
@@ -854,7 +862,7 @@ def test_set_categories(self):
                                       np.array([3, 2, 1, 0, 3], dtype=np.int8))

         # categories are now in new order
-        self.assert_numpy_array_equal(c.categories, np.array([4, 3, 2, 1]))
+        self.assert_index_equal(c.categories, Index([4, 3, 2, 1]))

         # output is the same
         exp = np.array([1, 2, 3, 4, 1], dtype=np.int64)
@@ -879,22 +887,20 @@ def test_rename_categories(self):
         res = cat.rename_categories([1, 2, 3])
         self.assert_numpy_array_equal(res.__array__(),
                                       np.array([1, 2, 3, 1], dtype=np.int64))
-        self.assert_numpy_array_equal(res.categories,
-                                      np.array([1, 2, 3], dtype=np.int64))
+        self.assert_index_equal(res.categories, Index([1, 2, 3]))

         exp_cat = np.array(["a", "b", "c", "a"], dtype=np.object_)
         self.assert_numpy_array_equal(cat.__array__(), exp_cat)

-        exp_cat = np.array(["a", "b", "c"], dtype=np.object_)
-        self.assert_numpy_array_equal(cat.categories, exp_cat)
+        exp_cat = Index(["a", "b", "c"])
+        self.assert_index_equal(cat.categories, exp_cat)

         res = cat.rename_categories([1, 2, 3], inplace=True)

         # and now inplace
         self.assertIsNone(res)
         self.assert_numpy_array_equal(cat.__array__(),
                                       np.array([1, 2, 3, 1], dtype=np.int64))
-        self.assert_numpy_array_equal(cat.categories,
-                                      np.array([1, 2, 3], dtype=np.int64))
+        self.assert_index_equal(cat.categories, Index([1, 2, 3]))

         # lengthen
         def f():
@@ -1015,32 +1021,35 @@ def f():
     def test_remove_unused_categories(self):
         c = Categorical(["a", "b", "c", "d", "a"],
                         categories=["a", "b", "c", "d", "e"])
-        exp_categories_all = np.array(["a", "b", "c", "d", "e"])
-        exp_categories_dropped = np.array(["a", "b", "c", "d"])
+        exp_categories_all = Index(["a", "b", "c", "d", "e"])
+        exp_categories_dropped = Index(["a", "b", "c", "d"])

-        self.assert_numpy_array_equal(c.categories, exp_categories_all)
+        self.assert_index_equal(c.categories, exp_categories_all)

         res = c.remove_unused_categories()
-        self.assert_numpy_array_equal(res.categories, exp_categories_dropped)
-        self.assert_numpy_array_equal(c.categories, exp_categories_all)
+        self.assert_index_equal(res.categories, exp_categories_dropped)
+        self.assert_index_equal(c.categories, exp_categories_all)

         res = c.remove_unused_categories(inplace=True)
-        self.assert_numpy_array_equal(c.categories, exp_categories_dropped)
+        self.assert_index_equal(c.categories, exp_categories_dropped)
         self.assertIsNone(res)

         # with NaN values (GH11599)
         c = Categorical(["a", "b", "c", np.nan],
                         categories=["a", "b", "c", "d", "e"])
         res = c.remove_unused_categories()
-        self.assert_numpy_array_equal(res.categories,
-                                      np.array(["a", "b", "c"]))
-        self.assert_numpy_array_equal(c.categories, exp_categories_all)
+        self.assert_index_equal(res.categories,
+                                Index(np.array(["a", "b", "c"])))
+        exp_codes = np.array([0, 1, 2, -1], dtype=np.int8)
+        self.assert_numpy_array_equal(res.codes, exp_codes)
+        self.assert_index_equal(c.categories, exp_categories_all)

         val = ['F', np.nan, 'D', 'B', 'D', 'F', np.nan]
         cat = pd.Categorical(values=val, categories=list('ABCDEFG'))
         out = cat.remove_unused_categories()
-        self.assert_numpy_array_equal(out.categories, ['B', 'D', 'F'])
-        self.assert_numpy_array_equal(out.codes, [2, -1, 1, 0, 1, 2, -1])
+        self.assert_index_equal(out.categories, Index(['B', 'D', 'F']))
+        exp_codes = np.array([2, -1, 1, 0, 1, 2, -1], dtype=np.int8)
+        self.assert_numpy_array_equal(out.codes, exp_codes)
         self.assertEqual(out.get_values().tolist(), val)

         alpha = list('abcdefghijklmnopqrstuvwxyz')
@@ -1055,11 +1064,11 @@ def test_nan_handling(self):

         # Nans are represented as -1 in codes
         c = Categorical(["a", "b", np.nan, "a"])
-        self.assert_numpy_array_equal(c.categories, np.array(["a", "b"]))
+        self.assert_index_equal(c.categories, Index(["a", "b"]))
         self.assert_numpy_array_equal(c._codes,
                                       np.array([0, 1, -1, 0], dtype=np.int8))
         c[1] = np.nan
-        self.assert_numpy_array_equal(c.categories, np.array(["a", "b"]))
+        self.assert_index_equal(c.categories, Index(["a", "b"]))
         self.assert_numpy_array_equal(c._codes,
                                       np.array([0, -1, -1, 0], dtype=np.int8))

@@ -1068,15 +1077,11 @@ def test_nan_handling(self):
         with tm.assert_produces_warning(FutureWarning):
             c = Categorical(["a", "b", np.nan, "a"],
                             categories=["a", "b", np.nan])
-        self.assert_numpy_array_equal(c.categories,
-                                      np.array(["a", "b", np.nan],
-                                               dtype=np.object_))
+        self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
         self.assert_numpy_array_equal(c._codes,
                                       np.array([0, 1, 2, 0], dtype=np.int8))
         c[1] = np.nan
-        self.assert_numpy_array_equal(c.categories,
-                                      np.array(["a", "b", np.nan],
-                                               dtype=np.object_))
+        self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
         self.assert_numpy_array_equal(c._codes,
                                       np.array([0, 2, 2, 0], dtype=np.int8))

@@ -1085,30 +1090,24 @@ def test_nan_handling(self):
         with tm.assert_produces_warning(FutureWarning):
             c.categories = ["a", "b", np.nan]  # noqa
-        self.assert_numpy_array_equal(c.categories,
-                                      np.array(["a", "b", np.nan],
-                                               dtype=np.object_))
+        self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
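The `test_remove_unused_categories` hunk above asserts that unused categories are dropped while NaN stays encoded as code `-1`. A minimal sketch of that behavior, using the modern public pandas API (`pandas.util.testing` in the diff is the pre-1.0 internal home of these helpers; this snippet only uses `Categorical` itself):

```python
import numpy as np
import pandas as pd

# 'd' and 'e' are declared but never used; NaN is not a category at all,
# so it is stored as the sentinel code -1 both before and after.
c = pd.Categorical(["a", "b", "c", np.nan],
                   categories=["a", "b", "c", "d", "e"])
res = c.remove_unused_categories()

print(list(res.categories))  # ['a', 'b', 'c']
print(res.codes.tolist())    # [0, 1, 2, -1]
```

This is exactly the pair of assertions the PR adds (`res.categories` as an `Index`, `res.codes` as an `int8` array) rather than the old lossy comparison against a plain list.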
         self.assert_numpy_array_equal(c._codes,
                                       np.array([0, 1, 2, 0], dtype=np.int8))

         # Adding nan to categories should make assigned nan point to the
         # category!
         c = Categorical(["a", "b", np.nan, "a"])
-        self.assert_numpy_array_equal(c.categories, np.array(["a", "b"]))
+        self.assert_index_equal(c.categories, Index(["a", "b"]))
         self.assert_numpy_array_equal(c._codes,
                                       np.array([0, 1, -1, 0], dtype=np.int8))
         with tm.assert_produces_warning(FutureWarning):
             c.set_categories(["a", "b", np.nan], rename=True, inplace=True)
-        self.assert_numpy_array_equal(c.categories,
-                                      np.array(["a", "b", np.nan],
-                                               dtype=np.object_))
+        self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
         self.assert_numpy_array_equal(c._codes,
                                       np.array([0, 1, -1, 0], dtype=np.int8))
         c[1] = np.nan
-        self.assert_numpy_array_equal(c.categories,
-                                      np.array(["a", "b", np.nan],
-                                               dtype=np.object_))
+        self.assert_index_equal(c.categories, Index(["a", "b", np.nan]))
         self.assert_numpy_array_equal(c._codes,
                                       np.array([0, 2, -1, 0], dtype=np.int8))

@@ -1234,63 +1233,58 @@ def test_min_max(self):
     def test_unique(self):
         # categories are reordered based on value when ordered=False
         cat = Categorical(["a", "b"])
-        exp = np.asarray(["a", "b"])
+        exp = Index(["a", "b"])
         res = cat.unique()
-        self.assert_numpy_array_equal(res, exp)
+        self.assert_index_equal(res.categories, exp)
+        self.assert_categorical_equal(res, cat)

         cat = Categorical(["a", "b", "a", "a"], categories=["a", "b", "c"])
         res = cat.unique()
-        self.assert_numpy_array_equal(res, exp)
+        self.assert_index_equal(res.categories, exp)
         tm.assert_categorical_equal(res, Categorical(exp))

         cat = Categorical(["c", "a", "b", "a", "a"],
                           categories=["a", "b", "c"])
-        exp = np.asarray(["c", "a", "b"])
+        exp = Index(["c", "a", "b"])
         res = cat.unique()
-        self.assert_numpy_array_equal(res, exp)
-        tm.assert_categorical_equal(res, Categorical(
-            exp, categories=['c', 'a', 'b']))
+        self.assert_index_equal(res.categories, exp)
+        exp_cat = Categorical(exp, categories=['c', 'a', 'b'])
+        tm.assert_categorical_equal(res, exp_cat)

         # nan must be removed
         cat = Categorical(["b", np.nan, "b", np.nan, "a"],
                           categories=["a", "b", "c"])
         res = cat.unique()
-        exp = np.asarray(["b", np.nan, "a"], dtype=object)
-        self.assert_numpy_array_equal(res, exp)
-        tm.assert_categorical_equal(res, Categorical(
-            ["b", np.nan, "a"], categories=["b", "a"]))
+        exp = Index(["b", "a"])
+        self.assert_index_equal(res.categories, exp)
+        exp_cat = Categorical(["b", np.nan, "a"], categories=["b", "a"])
+        tm.assert_categorical_equal(res, exp_cat)

     def test_unique_ordered(self):
         # keep categories order when ordered=True
         cat = Categorical(['b', 'a', 'b'], categories=['a', 'b'], ordered=True)
         res = cat.unique()
-        exp = np.asarray(['b', 'a'])
-        exp_cat = Categorical(exp, categories=['a', 'b'], ordered=True)
-        self.assert_numpy_array_equal(res, exp)
+        exp_cat = Categorical(['b', 'a'], categories=['a', 'b'], ordered=True)
         tm.assert_categorical_equal(res, exp_cat)

         cat = Categorical(['c', 'b', 'a', 'a'], categories=['a', 'b', 'c'],
                           ordered=True)
         res = cat.unique()
-        exp = np.asarray(['c', 'b', 'a'])
-        exp_cat = Categorical(exp, categories=['a', 'b', 'c'], ordered=True)
-        self.assert_numpy_array_equal(res, exp)
+        exp_cat = Categorical(['c', 'b', 'a'], categories=['a', 'b', 'c'],
+                              ordered=True)
         tm.assert_categorical_equal(res, exp_cat)

         cat = Categorical(['b', 'a', 'a'], categories=['a', 'b', 'c'],
                           ordered=True)
         res = cat.unique()
-        exp = np.asarray(['b', 'a'])
-        exp_cat = Categorical(exp, categories=['a', 'b'], ordered=True)
-        self.assert_numpy_array_equal(res, exp)
+        exp_cat = Categorical(['b', 'a'], categories=['a', 'b'], ordered=True)
         tm.assert_categorical_equal(res, exp_cat)

         cat = Categorical(['b', 'b', np.nan, 'a'], categories=['a', 'b', 'c'],
                           ordered=True)
         res = cat.unique()
-        exp = np.asarray(['b', np.nan, 'a'], dtype=object)
-        exp_cat = Categorical(exp, categories=['a', 'b'], ordered=True)
-        self.assert_numpy_array_equal(res, exp)
+        exp_cat = Categorical(['b', np.nan, 'a'], categories=['a', 'b'],
+                              ordered=True)
         tm.assert_categorical_equal(res, exp_cat)

     def test_mode(self):
@@ -1298,33 +1292,33 @@ def test_mode(self):
                         ordered=True)
         res = s.mode()
         exp = Categorical([5], categories=[5, 4, 3, 2, 1], ordered=True)
-        self.assertTrue(res.equals(exp))
+        tm.assert_categorical_equal(res, exp)
         s = Categorical([1, 1, 1, 4, 5, 5, 5], categories=[5, 4, 3, 2, 1],
                         ordered=True)
         res = s.mode()
         exp = Categorical([5, 1], categories=[5, 4, 3, 2, 1], ordered=True)
-        self.assertTrue(res.equals(exp))
+        tm.assert_categorical_equal(res, exp)
         s = Categorical([1, 2, 3, 4, 5], categories=[5, 4, 3, 2, 1],
                         ordered=True)
         res = s.mode()
         exp = Categorical([], categories=[5, 4, 3, 2, 1], ordered=True)
-        self.assertTrue(res.equals(exp))
+        tm.assert_categorical_equal(res, exp)
         # NaN should not become the mode!
         s = Categorical([np.nan, np.nan, np.nan, 4, 5],
                         categories=[5, 4, 3, 2, 1], ordered=True)
         res = s.mode()
         exp = Categorical([], categories=[5, 4, 3, 2, 1], ordered=True)
-        self.assertTrue(res.equals(exp))
+        tm.assert_categorical_equal(res, exp)
         s = Categorical([np.nan, np.nan, np.nan, 4, 5, 4],
                         categories=[5, 4, 3, 2, 1], ordered=True)
         res = s.mode()
         exp = Categorical([4], categories=[5, 4, 3, 2, 1], ordered=True)
-        self.assertTrue(res.equals(exp))
+        tm.assert_categorical_equal(res, exp)
         s = Categorical([np.nan, np.nan, 4, 5, 4], categories=[5, 4, 3, 2, 1],
                         ordered=True)
         res = s.mode()
         exp = Categorical([4], categories=[5, 4, 3, 2, 1], ordered=True)
-        self.assertTrue(res.equals(exp))
+        tm.assert_categorical_equal(res, exp)

     def test_sort_values(self):
@@ -1338,79 +1332,83 @@ def test_sort_values(self):
         res = cat.sort_values()
         exp = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp)
+        self.assert_index_equal(res.categories, cat.categories)

         cat = Categorical(["a", "c", "b", "d"],
                           categories=["a", "b", "c", "d"], ordered=True)
         res = cat.sort_values()
         exp = np.array(["a", "b", "c", "d"], dtype=object)
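The `test_unique_ordered` hunk above encodes the contract that, for an ordered `Categorical`, `unique()` returns the values in order of first appearance while leaving the declared category order untouched. A minimal sketch of that contract against current pandas (where `unique()` returns a `Categorical`, as these tests assume):

```python
import pandas as pd

# Ordered categorical: 'a' < 'b' < 'c' is the declared order,
# but the data happens to start with 'c'.
cat = pd.Categorical(['c', 'b', 'a', 'a'],
                     categories=['a', 'b', 'c'], ordered=True)
res = cat.unique()

print(list(res))             # ['c', 'b', 'a']  -- appearance order
print(list(res.categories))  # ['a', 'b', 'c']  -- declared order kept
print(res.ordered)           # True
```

This is why the PR replaces `assert_numpy_array_equal(res, exp)` with `assert_categorical_equal(res, exp_cat)`: the numpy comparison only saw the values and silently ignored the categories and the `ordered` flag.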
         self.assert_numpy_array_equal(res.__array__(), exp)
+        self.assert_index_equal(res.categories, cat.categories)

         res = cat.sort_values(ascending=False)
         exp = np.array(["d", "c", "b", "a"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp)
+        self.assert_index_equal(res.categories, cat.categories)

         # sort (inplace order)
         cat1 = cat.copy()
         cat1.sort_values(inplace=True)
         exp = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(cat1.__array__(), exp)
+        self.assert_index_equal(res.categories, cat.categories)

         # reverse
         cat = Categorical(["a", "c", "c", "b", "d"], ordered=True)
         res = cat.sort_values(ascending=False)
         exp_val = np.array(["d", "c", "c", "b", "a"], dtype=object)
-        exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
+        exp_categories = Index(["a", "b", "c", "d"])
         self.assert_numpy_array_equal(res.__array__(), exp_val)
-        self.assert_numpy_array_equal(res.categories, exp_categories)
+        self.assert_index_equal(res.categories, exp_categories)

     def test_sort_values_na_position(self):
         # see gh-12882
         cat = Categorical([5, 2, np.nan, 2, np.nan], ordered=True)
-        exp_categories = np.array([2, 5])
+        exp_categories = Index([2, 5])

         exp = np.array([2.0, 2.0, 5.0, np.nan, np.nan])
         res = cat.sort_values()  # default arguments
         self.assert_numpy_array_equal(res.__array__(), exp)
-        self.assert_numpy_array_equal(res.categories, exp_categories)
+        self.assert_index_equal(res.categories, exp_categories)

         exp = np.array([np.nan, np.nan, 2.0, 2.0, 5.0])
         res = cat.sort_values(ascending=True, na_position='first')
         self.assert_numpy_array_equal(res.__array__(), exp)
-        self.assert_numpy_array_equal(res.categories, exp_categories)
+        self.assert_index_equal(res.categories, exp_categories)

         exp = np.array([np.nan, np.nan, 5.0, 2.0, 2.0])
         res = cat.sort_values(ascending=False, na_position='first')
         self.assert_numpy_array_equal(res.__array__(), exp)
-        self.assert_numpy_array_equal(res.categories, exp_categories)
+        self.assert_index_equal(res.categories, exp_categories)

         exp = np.array([2.0, 2.0, 5.0, np.nan, np.nan])
         res = cat.sort_values(ascending=True, na_position='last')
         self.assert_numpy_array_equal(res.__array__(), exp)
-        self.assert_numpy_array_equal(res.categories, exp_categories)
+        self.assert_index_equal(res.categories, exp_categories)

         exp = np.array([5.0, 2.0, 2.0, np.nan, np.nan])
         res = cat.sort_values(ascending=False, na_position='last')
         self.assert_numpy_array_equal(res.__array__(), exp)
-        self.assert_numpy_array_equal(res.categories, exp_categories)
+        self.assert_index_equal(res.categories, exp_categories)

         cat = Categorical(["a", "c", "b", "d", np.nan], ordered=True)
         res = cat.sort_values(ascending=False, na_position='last')
         exp_val = np.array(["d", "c", "b", "a", np.nan], dtype=object)
-        exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
+        exp_categories = Index(["a", "b", "c", "d"])
         self.assert_numpy_array_equal(res.__array__(), exp_val)
-        self.assert_numpy_array_equal(res.categories, exp_categories)
+        self.assert_index_equal(res.categories, exp_categories)

         cat = Categorical(["a", "c", "b", "d", np.nan], ordered=True)
         res = cat.sort_values(ascending=False, na_position='first')
         exp_val = np.array([np.nan, "d", "c", "b", "a"], dtype=object)
-        exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
+        exp_categories = Index(["a", "b", "c", "d"])
         self.assert_numpy_array_equal(res.__array__(), exp_val)
-        self.assert_numpy_array_equal(res.categories, exp_categories)
+        self.assert_index_equal(res.categories, exp_categories)

     def test_slicing_directly(self):
         cat = Categorical(["a", "b", "c", "d", "a", "b", "c"])
         sliced = cat[3]
-        tm.assert_equal(sliced, "d")
+        self.assertEqual(sliced, "d")
         sliced = cat[3:5]
         expected = Categorical(["d", "a"], categories=['a', 'b', 'c', 'd'])
         self.assert_numpy_array_equal(sliced._codes, expected._codes)
@@ -1420,7 +1418,7 @@ def test_set_item_nan(self):
         cat = pd.Categorical([1, 2, 3])
         exp = pd.Categorical([1, np.nan, 3], categories=[1, 2, 3])
         cat[1] = np.nan
-        self.assertTrue(cat.equals(exp))
+        tm.assert_categorical_equal(cat, exp)

         # if nan in categories, the proper code should be set!
         cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
@@ -1560,10 +1558,10 @@ def test_deprecated_levels(self):
         exp = cat.categories
         with tm.assert_produces_warning(FutureWarning):
             res = cat.levels
-        self.assert_numpy_array_equal(res, exp)
+        self.assert_index_equal(res, exp)
         with tm.assert_produces_warning(FutureWarning):
             res = pd.Categorical([1, 2, 3, np.nan], levels=[1, 2, 3])
-        self.assert_numpy_array_equal(res.categories, exp)
+        self.assert_index_equal(res.categories, exp)

     def test_removed_names_produces_warning(self):
@@ -1577,14 +1575,18 @@ def test_removed_names_produces_warning(self):
     def test_datetime_categorical_comparison(self):
         dt_cat = pd.Categorical(
             pd.date_range('2014-01-01', periods=3), ordered=True)
-        self.assert_numpy_array_equal(dt_cat > dt_cat[0], [False, True, True])
-        self.assert_numpy_array_equal(dt_cat[0] < dt_cat, [False, True, True])
+        self.assert_numpy_array_equal(dt_cat > dt_cat[0],
+                                      np.array([False, True, True]))
+        self.assert_numpy_array_equal(dt_cat[0] < dt_cat,
+                                      np.array([False, True, True]))

     def test_reflected_comparison_with_scalars(self):
         # GH8658
         cat = pd.Categorical([1, 2, 3], ordered=True)
-        self.assert_numpy_array_equal(cat > cat[0], [False, True, True])
-        self.assert_numpy_array_equal(cat[0] < cat, [False, True, True])
+        self.assert_numpy_array_equal(cat > cat[0],
+                                      np.array([False, True, True]))
+        self.assert_numpy_array_equal(cat[0] < cat,
+                                      np.array([False, True, True]))

     def test_comparison_with_unknown_scalars(self):
         # https://github.com/pydata/pandas/issues/9836#issuecomment-92123057
@@ -1597,8 +1599,10 @@ def test_comparison_with_unknown_scalars(self):
         self.assertRaises(TypeError, lambda: 4 < cat)
         self.assertRaises(TypeError, lambda: 4 > cat)

-        self.assert_numpy_array_equal(cat == 4, [False, False, False])
-        self.assert_numpy_array_equal(cat != 4, [True, True, True])
+        self.assert_numpy_array_equal(cat == 4,
+                                      np.array([False, False, False]))
+        self.assert_numpy_array_equal(cat != 4,
+                                      np.array([True, True, True]))

     def test_map(self):
         c = pd.Categorical(list('ABABC'), categories=list('CBA'),
@@ -1925,8 +1929,7 @@ def test_nan_handling(self):

         # Nans are represented as -1 in labels
         s = Series(Categorical(["a", "b", np.nan, "a"]))
-        self.assert_numpy_array_equal(s.cat.categories,
-                                      np.array(["a", "b"], dtype=np.object_))
+        self.assert_index_equal(s.cat.categories, Index(["a", "b"]))
         self.assert_numpy_array_equal(s.values.codes,
                                       np.array([0, 1, -1, 0], dtype=np.int8))

@@ -1936,8 +1939,8 @@ def test_nan_handling(self):
         s2 = Series(Categorical(["a", "b", np.nan, "a"],
                                 categories=["a", "b", np.nan]))
-        exp_cat = np.array(["a", "b", np.nan], dtype=np.object_)
-        self.assert_numpy_array_equal(s2.cat.categories, exp_cat)
+        exp_cat = Index(["a", "b", np.nan])
+        self.assert_index_equal(s2.cat.categories, exp_cat)
         self.assert_numpy_array_equal(s2.values.codes,
                                       np.array([0, 1, 2, 0], dtype=np.int8))

@@ -1946,24 +1949,26 @@ def test_nan_handling(self):
         with tm.assert_produces_warning(FutureWarning,
                                         check_stacklevel=False):
             s3.cat.categories = ["a", "b", np.nan]
-        exp_cat = np.array(["a", "b", np.nan], dtype=np.object_)
-        self.assert_numpy_array_equal(s3.cat.categories, exp_cat)
+        exp_cat = Index(["a", "b", np.nan])
+        self.assert_index_equal(s3.cat.categories, exp_cat)
         self.assert_numpy_array_equal(s3.values.codes,
                                       np.array([0, 1, 2, 0], dtype=np.int8))

     def test_cat_accessor(self):
         s = Series(Categorical(["a", "b", np.nan, "a"]))
-        self.assert_numpy_array_equal(s.cat.categories, np.array(["a", "b"]))
+        self.assert_index_equal(s.cat.categories, Index(["a", "b"]))
         self.assertEqual(s.cat.ordered, False)
         exp = Categorical(["a", "b", np.nan, "a"], categories=["b", "a"])
         s.cat.set_categories(["b", "a"], inplace=True)
-        self.assertTrue(s.values.equals(exp))
+        tm.assert_categorical_equal(s.values, exp)
+
         res = s.cat.set_categories(["b", "a"])
-
self.assertTrue(res.values.equals(exp)) + tm.assert_categorical_equal(res.values, exp) + exp = Categorical(["a", "b", np.nan, "a"], categories=["b", "a"]) s[:] = "a" s = s.cat.remove_unused_categories() - self.assert_numpy_array_equal(s.cat.categories, np.array(["a"])) + self.assert_index_equal(s.cat.categories, Index(["a"])) def test_sequence_like(self): @@ -2005,11 +2010,11 @@ def test_series_delegations(self): # and the methods '.set_categories()' 'drop_unused_categories()' to the # categorical s = Series(Categorical(["a", "b", "c", "a"], ordered=True)) - exp_categories = np.array(["a", "b", "c"]) - self.assert_numpy_array_equal(s.cat.categories, exp_categories) + exp_categories = Index(["a", "b", "c"]) + tm.assert_index_equal(s.cat.categories, exp_categories) s.cat.categories = [1, 2, 3] - exp_categories = np.array([1, 2, 3]) - self.assert_numpy_array_equal(s.cat.categories, exp_categories) + exp_categories = Index([1, 2, 3]) + self.assert_index_equal(s.cat.categories, exp_categories) exp_codes = Series([0, 1, 2, 0], dtype='int8') tm.assert_series_equal(s.cat.codes, exp_codes) @@ -2022,20 +2027,20 @@ def test_series_delegations(self): # reorder s = Series(Categorical(["a", "b", "c", "a"], ordered=True)) - exp_categories = np.array(["c", "b", "a"]) + exp_categories = Index(["c", "b", "a"]) exp_values = np.array(["a", "b", "c", "a"], dtype=np.object_) s = s.cat.set_categories(["c", "b", "a"]) - self.assert_numpy_array_equal(s.cat.categories, exp_categories) + tm.assert_index_equal(s.cat.categories, exp_categories) self.assert_numpy_array_equal(s.values.__array__(), exp_values) self.assert_numpy_array_equal(s.__array__(), exp_values) # remove unused categories s = Series(Categorical(["a", "b", "b", "a"], categories=["a", "b", "c" ])) - exp_categories = np.array(["a", "b"], dtype=object) + exp_categories = Index(["a", "b"]) exp_values = np.array(["a", "b", "b", "a"], dtype=np.object_) s = s.cat.remove_unused_categories() - 
self.assert_numpy_array_equal(s.cat.categories, exp_categories) + self.assert_index_equal(s.cat.categories, exp_categories) self.assert_numpy_array_equal(s.values.__array__(), exp_values) self.assert_numpy_array_equal(s.__array__(), exp_values) @@ -2082,11 +2087,11 @@ def test_assignment_to_dataframe(self): result1 = df['D'] result2 = df['E'] - self.assertTrue(result1._data._block.values.equals(d)) + self.assert_categorical_equal(result1._data._block.values, d) # sorting s.name = 'E' - self.assertTrue(result2.sort_index().equals(s.sort_index())) + self.assert_series_equal(result2.sort_index(), s.sort_index()) cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10]) df = pd.DataFrame(pd.Series(cat)) @@ -2885,13 +2890,17 @@ def test_value_counts(self): categories=["c", "a", "b", "d"]) s = pd.Series(cats, name='xxx') res = s.value_counts(sort=False) - exp = Series([3, 1, 2, 0], name='xxx', - index=pd.CategoricalIndex(["c", "a", "b", "d"])) + + exp_index = pd.CategoricalIndex(["c", "a", "b", "d"], + categories=cats.categories) + exp = Series([3, 1, 2, 0], name='xxx', index=exp_index) tm.assert_series_equal(res, exp) res = s.value_counts(sort=True) - exp = Series([3, 2, 1, 0], name='xxx', - index=pd.CategoricalIndex(["c", "b", "a", "d"])) + + exp_index = pd.CategoricalIndex(["c", "b", "a", "d"], + categories=cats.categories) + exp = Series([3, 2, 1, 0], name='xxx', index=exp_index) tm.assert_series_equal(res, exp) # check object dtype handles the Series.name as the same @@ -2927,38 +2936,39 @@ def test_value_counts_with_nan(self): index=pd.CategoricalIndex(["a", "b", np.nan]))) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - s = pd.Series(pd.Categorical( - ["a", "b", "a"], categories=["a", "b", np.nan])) - tm.assert_series_equal( - s.value_counts(dropna=True), - pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"]))) - tm.assert_series_equal( - s.value_counts(dropna=False), - pd.Series([2, 1, 0], - index=pd.CategoricalIndex(["a", "b", 
np.nan]))) + s = pd.Series(pd.Categorical(["a", "b", "a"], + categories=["a", "b", np.nan])) + + # internal categories are different because of NaN + exp = pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"])) + tm.assert_series_equal(s.value_counts(dropna=True), exp, + check_categorical=False) + exp = pd.Series([2, 1, 0], + index=pd.CategoricalIndex(["a", "b", np.nan])) + tm.assert_series_equal(s.value_counts(dropna=False), exp, + check_categorical=False) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - s = pd.Series(pd.Categorical( - ["a", "b", None, "a", None, None], categories=["a", "b", np.nan - ])) - tm.assert_series_equal( - s.value_counts(dropna=True), - pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"]))) - tm.assert_series_equal( - s.value_counts(dropna=False), - pd.Series([3, 2, 1], - index=pd.CategoricalIndex([np.nan, "a", "b"]))) + s = pd.Series(pd.Categorical(["a", "b", None, "a", None, None], + categories=["a", "b", np.nan])) + + exp = pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"])) + tm.assert_series_equal(s.value_counts(dropna=True), exp, + check_categorical=False) + exp = pd.Series([3, 2, 1], + index=pd.CategoricalIndex([np.nan, "a", "b"])) + tm.assert_series_equal(s.value_counts(dropna=False), exp, + check_categorical=False) def test_groupby(self): - cats = Categorical( - ["a", "a", "a", "b", "b", "b", "c", "c", "c" - ], categories=["a", "b", "c", "d"], ordered=True) + cats = Categorical(["a", "a", "a", "b", "b", "b", "c", "c", "c"], + categories=["a", "b", "c", "d"], ordered=True) data = DataFrame({"a": [1, 1, 1, 2, 2, 2, 3, 4, 5], "b": cats}) - expected = DataFrame({'a': Series( - [1, 2, 4, np.nan], index=pd.CategoricalIndex( - ['a', 'b', 'c', 'd'], name='b'))}) + exp_index = pd.CategoricalIndex(['a', 'b', 'c', 'd'], name='b', + ordered=True) + expected = DataFrame({'a': [1, 2, 4, np.nan]}, index=exp_index) result = data.groupby("b").mean() tm.assert_frame_equal(result, expected) @@ -2970,17 +2980,19 @@ 
def test_groupby(self): # single grouper gb = df.groupby("A") - exp_idx = pd.CategoricalIndex(['a', 'b', 'z'], name='A') + exp_idx = pd.CategoricalIndex(['a', 'b', 'z'], name='A', ordered=True) expected = DataFrame({'values': Series([3, 7, np.nan], index=exp_idx)}) result = gb.sum() tm.assert_frame_equal(result, expected) # multiple groupers gb = df.groupby(['A', 'B']) - expected = DataFrame({'values': Series( - [1, 2, np.nan, 3, 4, np.nan, np.nan, np.nan, np.nan - ], index=pd.MultiIndex.from_product( - [['a', 'b', 'z'], ['c', 'd', 'y']], names=['A', 'B']))}) + exp_index = pd.MultiIndex.from_product([['a', 'b', 'z'], + ['c', 'd', 'y']], + names=['A', 'B']) + expected = DataFrame({'values': [1, 2, np.nan, 3, 4, np.nan, + np.nan, np.nan, np.nan]}, + index=exp_index) result = gb.sum() tm.assert_frame_equal(result, expected) @@ -3025,8 +3037,7 @@ def f(x): c = pd.cut(df.a, bins=[0, 10, 20, 30, 40]) result = df.a.groupby(c).transform(sum) - tm.assert_series_equal(result, df['a'], check_names=False) - self.assertTrue(result.name is None) + tm.assert_series_equal(result, df['a']) tm.assert_series_equal( df.a.groupby(c).transform(lambda xs: np.sum(xs)), df['a']) @@ -3043,8 +3054,7 @@ def f(x): c = pd.cut(df.a, bins=[-10, 0, 10, 20, 30, 40]) result = df.a.groupby(c).transform(sum) - tm.assert_series_equal(result, df['a'], check_names=False) - self.assertTrue(result.name is None) + tm.assert_series_equal(result, df['a']) tm.assert_series_equal( df.a.groupby(c).transform(lambda xs: np.sum(xs)), df['a']) @@ -3056,8 +3066,10 @@ def f(x): df = pd.DataFrame({'a': [1, 0, 0, 0]}) c = pd.cut(df.a, [0, 1, 2, 3, 4]) result = df.groupby(c).apply(len) - expected = pd.Series([1, 0, 0, 0], - index=pd.CategoricalIndex(c.values.categories)) + + exp_index = pd.CategoricalIndex(c.values.categories, + ordered=c.values.ordered) + expected = pd.Series([1, 0, 0, 0], index=exp_index) expected.index.name = 'a' tm.assert_series_equal(result, expected) @@ -3135,7 +3147,7 @@ def 
test_sort_values(self): res = df.sort_values(by=["sort"], ascending=False) exp = df.sort_values(by=["string"], ascending=True) - self.assert_numpy_array_equal(res["values"], exp["values"]) + self.assert_series_equal(res["values"], exp["values"]) self.assertEqual(res["sort"].dtype, "category") self.assertEqual(res["unsort"].dtype, "category") @@ -3371,30 +3383,28 @@ def test_assigning_ops(self): # assign a part of a column with dtype != categorical -> # exp_parts_cats_col - cats = pd.Categorical( - ["a", "a", "a", "a", "a", "a", "a"], categories=["a", "b"]) + cats = pd.Categorical(["a", "a", "a", "a", "a", "a", "a"], + categories=["a", "b"]) idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) values = [1, 1, 1, 1, 1, 1, 1] orig = pd.DataFrame({"cats": cats, "values": values}, index=idx) # the expected values # changed single row - cats1 = pd.Categorical( - ["a", "a", "b", "a", "a", "a", "a"], categories=["a", "b"]) + cats1 = pd.Categorical(["a", "a", "b", "a", "a", "a", "a"], + categories=["a", "b"]) idx1 = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) values1 = [1, 1, 2, 1, 1, 1, 1] - exp_single_row = pd.DataFrame( - {"cats": cats1, - "values": values1}, index=idx1) + exp_single_row = pd.DataFrame({"cats": cats1, + "values": values1}, index=idx1) # changed multiple rows - cats2 = pd.Categorical( - ["a", "a", "b", "b", "a", "a", "a"], categories=["a", "b"]) + cats2 = pd.Categorical(["a", "a", "b", "b", "a", "a", "a"], + categories=["a", "b"]) idx2 = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) values2 = [1, 1, 2, 2, 1, 1, 1] - exp_multi_row = pd.DataFrame( - {"cats": cats2, - "values": values2}, index=idx2) + exp_multi_row = pd.DataFrame({"cats": cats2, + "values": values2}, index=idx2) # changed part of the cats column cats3 = pd.Categorical( @@ -3655,7 +3665,8 @@ def f(): exp_fancy["cats"].cat.set_categories(["a", "b", "c"], inplace=True) df[df["cats"] == "c"] = ["b", 2] - tm.assert_frame_equal(df, exp_multi_row) + # category c is kept in .categories + 
tm.assert_frame_equal(df, exp_fancy) # set_value df = orig.copy() @@ -3710,7 +3721,7 @@ def f(): # ensure that one can set something to np.nan s = Series(Categorical([1, 2, 3])) - exp = Series(Categorical([1, np.nan, 3])) + exp = Series(Categorical([1, np.nan, 3], categories=[1, 2, 3])) s[1] = np.nan tm.assert_series_equal(s, exp) @@ -3890,15 +3901,15 @@ def f(): df1 = df[0:3] df2 = df[3:] - self.assert_numpy_array_equal(df['grade'].cat.categories, - df1['grade'].cat.categories) - self.assert_numpy_array_equal(df['grade'].cat.categories, - df2['grade'].cat.categories) + self.assert_index_equal(df['grade'].cat.categories, + df1['grade'].cat.categories) + self.assert_index_equal(df['grade'].cat.categories, + df2['grade'].cat.categories) dfx = pd.concat([df1, df2]) dfx['grade'].cat.categories - self.assert_numpy_array_equal(df['grade'].cat.categories, - dfx['grade'].cat.categories) + self.assert_index_equal(df['grade'].cat.categories, + dfx['grade'].cat.categories) def test_concat_preserve(self): @@ -4085,10 +4096,12 @@ def f(): c = Categorical(["a", "b", np.nan]) with tm.assert_produces_warning(FutureWarning): c.set_categories(["a", "b", np.nan], rename=True, inplace=True) + c[0] = np.nan df = pd.DataFrame({"cats": c, "vals": [1, 2, 3]}) - df_exp = pd.DataFrame({"cats": Categorical(["a", "b", "a"]), - "vals": [1, 2, 3]}) + + cat_exp = Categorical(["a", "b", "a"], categories=["a", "b", np.nan]) + df_exp = pd.DataFrame({"cats": cat_exp, "vals": [1, 2, 3]}) res = df.fillna("a") tm.assert_frame_equal(res, df_exp) @@ -4130,7 +4143,9 @@ def cmp(a, b): ]: result = valid(s) - tm.assert_series_equal(result, s) + # compare series values + # internal .categories can't be compared because it is sorted + tm.assert_series_equal(result, s, check_categorical=False) # invalid conversion (these are NOT a dtype) for invalid in [lambda x: x.astype(pd.Categorical), diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index 090669681fb4f..ad43dc1c09ef1 100644 --- 
a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -695,7 +695,7 @@ def test_random_state(): com._random_state(state2).uniform(), npr.RandomState(10).uniform()) # check with no arg random state - assert isinstance(com._random_state(), npr.RandomState) + assert com._random_state() is np.random # Error for floats or strings with tm.assertRaises(ValueError): @@ -817,6 +817,21 @@ def test_dict_compat(): assert (com._dict_compat(data_unchanged) == data_unchanged) +def test_is_timedelta(): + assert (com.is_timedelta64_dtype('timedelta64')) + assert (com.is_timedelta64_dtype('timedelta64[ns]')) + assert (not com.is_timedelta64_ns_dtype('timedelta64')) + assert (com.is_timedelta64_ns_dtype('timedelta64[ns]')) + + tdi = TimedeltaIndex([1e14, 2e14], dtype='timedelta64') + assert (com.is_timedelta64_dtype(tdi)) + assert (com.is_timedelta64_ns_dtype(tdi)) + assert (com.is_timedelta64_ns_dtype(tdi.astype('timedelta64[ns]'))) + # Conversion to Int64Index: + assert (not com.is_timedelta64_ns_dtype(tdi.astype('timedelta64'))) + assert (not com.is_timedelta64_ns_dtype(tdi.astype('timedelta64[h]'))) + + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py index 044272f24a21f..cc0972937b8a2 100644 --- a/pandas/tests/test_expressions.py +++ b/pandas/tests/test_expressions.py @@ -15,10 +15,10 @@ from pandas import compat from pandas.util.testing import (assert_almost_equal, assert_series_equal, assert_frame_equal, assert_panel_equal, - assert_panel4d_equal) + assert_panel4d_equal, slow) from pandas.formats.printing import pprint_thing import pandas.util.testing as tm -from numpy.testing.decorators import slow + if not expr._USE_NUMEXPR: try: @@ -287,7 +287,12 @@ def testit(): use_numexpr=True) expected = expr.evaluate(op, op_str, f, f, use_numexpr=False) - tm.assert_numpy_array_equal(result, expected.values) + + if isinstance(result, 
DataFrame): + tm.assert_frame_equal(result, expected) + else: + tm.assert_numpy_array_equal(result, + expected.values) result = expr._can_use_numexpr(op, op_str, f2, f2, 'evaluate') @@ -325,7 +330,10 @@ def testit(): use_numexpr=True) expected = expr.evaluate(op, op_str, f11, f12, use_numexpr=False) - tm.assert_numpy_array_equal(result, expected.values) + if isinstance(result, DataFrame): + tm.assert_frame_equal(result, expected) + else: + tm.assert_numpy_array_equal(result, expected.values) result = expr._can_use_numexpr(op, op_str, f21, f22, 'evaluate') diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py index ba282f0107d71..2f4c2b414cc30 100644 --- a/pandas/tests/test_generic.py +++ b/pandas/tests/test_generic.py @@ -21,8 +21,7 @@ assert_frame_equal, assert_panel_equal, assert_panel4d_equal, - assert_almost_equal, - assert_equal) + assert_almost_equal) import pandas.util.testing as tm @@ -415,6 +414,14 @@ def test_sample(self): o.sample(frac=0.7, random_state=np.random.RandomState(test)), o.sample(frac=0.7, random_state=np.random.RandomState(test))) + os1, os2 = [], [] + for _ in range(2): + np.random.seed(test) + os1.append(o.sample(n=4)) + os2.append(o.sample(frac=0.7)) + self._compare(*os1) + self._compare(*os2) + # Check for error when random_state argument invalid. 
with tm.assertRaises(ValueError): o.sample(random_state='astring!') @@ -839,7 +846,7 @@ def test_to_xarray(self): assert_almost_equal(list(result.coords.keys()), ['foo']) self.assertIsInstance(result, DataArray) - def testit(index, check_index_type=True): + def testit(index, check_index_type=True, check_categorical=True): s = Series(range(6), index=index(6)) s.index.name = 'foo' result = s.to_xarray() @@ -851,7 +858,8 @@ def testit(index, check_index_type=True): # idempotency assert_series_equal(result.to_series(), s, - check_index_type=check_index_type) + check_index_type=check_index_type, + check_categorical=check_categorical) for index in [tm.makeFloatIndex, tm.makeIntIndex, tm.makeStringIndex, tm.makeUnicodeIndex, @@ -860,7 +868,8 @@ def testit(index, check_index_type=True): testit(index) # not idempotent - testit(tm.makeCategoricalIndex, check_index_type=False) + testit(tm.makeCategoricalIndex, check_index_type=False, + check_categorical=False) s = Series(range(6)) s.index.name = 'foo' @@ -987,6 +996,59 @@ def test_describe_percentiles_insert_median(self): self.assertTrue('0%' in d1.index) self.assertTrue('100%' in d2.index) + def test_describe_percentiles_unique(self): + # GH13104 + df = tm.makeDataFrame() + with self.assertRaises(ValueError): + df.describe(percentiles=[0.1, 0.2, 0.4, 0.5, 0.2, 0.6]) + with self.assertRaises(ValueError): + df.describe(percentiles=[0.1, 0.2, 0.4, 0.2, 0.6]) + + def test_describe_percentiles_formatting(self): + # GH13104 + df = tm.makeDataFrame() + + # default + result = df.describe().index + expected = Index(['count', 'mean', 'std', 'min', '25%', '50%', '75%', + 'max'], + dtype='object') + tm.assert_index_equal(result, expected) + + result = df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, + 0.9995, 0.9999]).index + expected = Index(['count', 'mean', 'std', 'min', '0.01%', '0.05%', + '0.1%', '50%', '99.9%', '99.95%', '99.99%', 'max'], + dtype='object') + tm.assert_index_equal(result, expected) + + result = 
df.describe(percentiles=[0.00499, 0.005, 0.25, 0.50, + 0.75]).index + expected = Index(['count', 'mean', 'std', 'min', '0.499%', '0.5%', + '25%', '50%', '75%', 'max'], + dtype='object') + tm.assert_index_equal(result, expected) + + result = df.describe(percentiles=[0.00499, 0.01001, 0.25, 0.50, + 0.75]).index + expected = Index(['count', 'mean', 'std', 'min', '0.5%', '1.0%', + '25%', '50%', '75%', 'max'], + dtype='object') + tm.assert_index_equal(result, expected) + + def test_describe_column_index_type(self): + # GH13288 + df = pd.DataFrame([1, 2, 3, 4]) + df.columns = pd.Index([0], dtype=object) + result = df.describe().columns + expected = Index([0], dtype=object) + tm.assert_index_equal(result, expected) + + df = pd.DataFrame({'A': list("BCDE"), 0: [1, 2, 3, 4]}) + result = df.describe().columns + expected = Index([0], dtype=object) + tm.assert_index_equal(result, expected) + def test_describe_no_numeric(self): df = DataFrame({'A': ['foo', 'foo', 'bar'] * 8, 'B': ['a', 'b', 'c', 'd'] * 6}) @@ -1001,6 +1063,16 @@ def test_describe_no_numeric(self): desc = df.describe() self.assertEqual(desc.time['first'], min(ts.index)) + def test_describe_empty(self): + df = DataFrame() + tm.assertRaisesRegexp(ValueError, 'DataFrame without columns', + df.describe) + + df = DataFrame(columns=['A', 'B']) + result = df.describe() + expected = DataFrame(0, columns=['A', 'B'], index=['count', 'unique']) + tm.assert_frame_equal(result, expected) + def test_describe_empty_int_columns(self): df = DataFrame([[0, 1], [1, 2]]) desc = df[df[0] < 0].describe() # works @@ -1280,7 +1352,7 @@ def test_tz_convert_and_localize(self): df1 = DataFrame(np.ones(5), index=l0) df1 = getattr(df1, fn)('US/Pacific') - self.assertTrue(df1.index.equals(l0_expected)) + self.assert_index_equal(df1.index, l0_expected) # MultiIndex # GH7846 @@ -1288,14 +1360,14 @@ def test_tz_convert_and_localize(self): df3 = getattr(df2, fn)('US/Pacific', level=0) self.assertFalse(df3.index.levels[0].equals(l0)) - 
self.assertTrue(df3.index.levels[0].equals(l0_expected)) - self.assertTrue(df3.index.levels[1].equals(l1)) + self.assert_index_equal(df3.index.levels[0], l0_expected) + self.assert_index_equal(df3.index.levels[1], l1) self.assertFalse(df3.index.levels[1].equals(l1_expected)) df3 = getattr(df2, fn)('US/Pacific', level=1) - self.assertTrue(df3.index.levels[0].equals(l0)) + self.assert_index_equal(df3.index.levels[0], l0) self.assertFalse(df3.index.levels[0].equals(l0_expected)) - self.assertTrue(df3.index.levels[1].equals(l1_expected)) + self.assert_index_equal(df3.index.levels[1], l1_expected) self.assertFalse(df3.index.levels[1].equals(l1)) df4 = DataFrame(np.ones(5), @@ -1304,9 +1376,9 @@ def test_tz_convert_and_localize(self): # TODO: untested df5 = getattr(df4, fn)('US/Pacific', level=1) # noqa - self.assertTrue(df3.index.levels[0].equals(l0)) + self.assert_index_equal(df3.index.levels[0], l0) self.assertFalse(df3.index.levels[0].equals(l0_expected)) - self.assertTrue(df3.index.levels[1].equals(l1_expected)) + self.assert_index_equal(df3.index.levels[1], l1_expected) self.assertFalse(df3.index.levels[1].equals(l1)) # Bad Inputs @@ -1336,7 +1408,7 @@ def test_set_attribute(self): df['y'] = [2, 4, 6] df.y = 5 - assert_equal(df.y, 5) + self.assertEqual(df.y, 5) assert_series_equal(df['y'], Series([2, 4, 6], name='y')) def test_pct_change(self): @@ -1401,9 +1473,8 @@ def test_to_xarray(self): expected['f'] = expected['f'].astype(object) expected['h'] = expected['h'].astype('datetime64[ns]') expected.columns.name = None - assert_frame_equal(result.to_dataframe(), - expected, - check_index_type=False) + assert_frame_equal(result.to_dataframe(), expected, + check_index_type=False, check_categorical=False) # available in 0.7.1 # MultiIndex diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py index 3820a9d5f6476..b09185c19bffb 100644 --- a/pandas/tests/test_graphics.py +++ b/pandas/tests/test_graphics.py @@ -19,7 +19,7 @@ import pandas.core.common 
as com import pandas.util.testing as tm from pandas.util.testing import (ensure_clean, - assert_is_valid_plot_return_object) + assert_is_valid_plot_return_object, slow) from pandas.core.config import set_option @@ -27,8 +27,6 @@ from numpy import random from numpy.random import rand, randn -from numpy.testing import assert_allclose -from numpy.testing.decorators import slow import pandas.tools.plotting as plotting """ These tests are for ``Dataframe.plot`` and ``Series.plot``. @@ -140,7 +138,7 @@ def _check_data(self, xp, rs): def check_line(xpl, rsl): xpdata = xpl.get_xydata() rsdata = rsl.get_xydata() - assert_allclose(xpdata, rsdata) + tm.assert_almost_equal(xpdata, rsdata) self.assertEqual(len(xp_lines), len(rs_lines)) [check_line(xpl, rsl) for xpl, rsl in zip(xp_lines, rs_lines)] @@ -708,14 +706,12 @@ def test_bar_log(self): expected = np.hstack((1.0e-04, expected, 1.0e+01)) ax = Series([0.1, 0.01, 0.001]).plot(log=True, kind='bar') - tm.assert_numpy_array_equal(ax.get_ylim(), - (0.001, 0.10000000000000001)) + self.assertEqual(ax.get_ylim(), (0.001, 0.10000000000000001)) tm.assert_numpy_array_equal(ax.yaxis.get_ticklocs(), expected) tm.close() ax = Series([0.1, 0.01, 0.001]).plot(log=True, kind='barh') - tm.assert_numpy_array_equal(ax.get_xlim(), - (0.001, 0.10000000000000001)) + self.assertEqual(ax.get_xlim(), (0.001, 0.10000000000000001)) tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(), expected) @slow @@ -2207,11 +2203,11 @@ def test_scatter_colors(self): ax = df.plot.scatter(x='a', y='b', c='c') tm.assert_numpy_array_equal(ax.collections[0].get_facecolor()[0], - (0, 0, 1, 1)) + np.array([0, 0, 1, 1], dtype=np.float64)) ax = df.plot.scatter(x='a', y='b', color='white') tm.assert_numpy_array_equal(ax.collections[0].get_facecolor()[0], - (1, 1, 1, 1)) + np.array([1, 1, 1, 1], dtype=np.float64)) @slow def test_plot_bar(self): diff --git a/pandas/tests/test_graphics_others.py b/pandas/tests/test_graphics_others.py index b032ce196c113..7285d84865542 100644 
--- a/pandas/tests/test_graphics_others.py +++ b/pandas/tests/test_graphics_others.py @@ -11,12 +11,12 @@ from pandas import Series, DataFrame, MultiIndex from pandas.compat import range, lmap, lzip import pandas.util.testing as tm +from pandas.util.testing import slow import numpy as np from numpy import random from numpy.random import randn -from numpy.testing.decorators import slow import pandas.tools.plotting as plotting from pandas.tests.test_graphics import (TestPlotBase, _check_plot_works, diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py index 5bd5c80f18386..6659e6b106a67 100644 --- a/pandas/tests/test_groupby.py +++ b/pandas/tests/test_groupby.py @@ -8,6 +8,7 @@ from pandas import date_range, bdate_range, Timestamp from pandas.core.index import Index, MultiIndex, CategoricalIndex from pandas.core.api import Categorical, DataFrame +from pandas.core.common import UnsupportedFunctionCall from pandas.core.groupby import (SpecificationError, DataError, _nargsort, _lexsort_indexer) from pandas.core.series import Series @@ -30,7 +31,6 @@ import pandas.util.testing as tm import pandas as pd -from numpy.testing import assert_equal class TestGroupBy(tm.TestCase): @@ -774,11 +774,11 @@ def test_agg_apply_corner(self): # DataFrame grouped = self.tsframe.groupby(self.tsframe['A'] * np.nan) exp_df = DataFrame(columns=self.tsframe.columns, dtype=float, - index=pd.Index( - [], dtype=np.float64)) + index=pd.Index([], dtype=np.float64)) assert_frame_equal(grouped.sum(), exp_df, check_names=False) assert_frame_equal(grouped.agg(np.sum), exp_df, check_names=False) - assert_frame_equal(grouped.apply(np.sum), DataFrame({}, dtype=float)) + assert_frame_equal(grouped.apply(np.sum), exp_df.iloc[:, :0], + check_names=False) def test_agg_grouping_is_list_tuple(self): from pandas.core.groupby import Grouping @@ -1051,24 +1051,50 @@ def test_transform_fast(self): values = np.repeat(grp.mean().values, com._ensure_platform_int(grp.count().values)) - expected = 
pd.Series(values, index=df.index)
+        expected = pd.Series(values, index=df.index, name='val')

         result = grp.transform(np.mean)
         assert_series_equal(result, expected)

         result = grp.transform('mean')
         assert_series_equal(result, expected)

+        # GH 12737
+        df = pd.DataFrame({'grouping': [0, 1, 1, 3], 'f': [1.1, 2.1, 3.1, 4.5],
+                           'd': pd.date_range('2014-1-1', '2014-1-4'),
+                           'i': [1, 2, 3, 4]},
+                          columns=['grouping', 'f', 'i', 'd'])
+        result = df.groupby('grouping').transform('first')
+
+        dates = [pd.Timestamp('2014-1-1'), pd.Timestamp('2014-1-2'),
+                 pd.Timestamp('2014-1-2'), pd.Timestamp('2014-1-4')]
+        expected = pd.DataFrame({'f': [1.1, 2.1, 2.1, 4.5],
+                                 'd': dates,
+                                 'i': [1, 2, 2, 4]},
+                                columns=['f', 'i', 'd'])
+        assert_frame_equal(result, expected)
+
+        # selection
+        result = df.groupby('grouping')[['f', 'i']].transform('first')
+        expected = expected[['f', 'i']]
+        assert_frame_equal(result, expected)
+
+        # dup columns
+        df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['g', 'a', 'a'])
+        result = df.groupby('g').transform('first')
+        expected = df.drop('g', axis=1)
+        assert_frame_equal(result, expected)
+
     def test_transform_broadcast(self):
         grouped = self.ts.groupby(lambda x: x.month)
         result = grouped.transform(np.mean)

-        self.assertTrue(result.index.equals(self.ts.index))
+        self.assert_index_equal(result.index, self.ts.index)
         for _, gp in grouped:
             assert_fp_equal(result.reindex(gp.index), gp.mean())

         grouped = self.tsframe.groupby(lambda x: x.month)
         result = grouped.transform(np.mean)
-        self.assertTrue(result.index.equals(self.tsframe.index))
+        self.assert_index_equal(result.index, self.tsframe.index)
         for _, gp in grouped:
             agged = gp.mean()
             res = result.reindex(gp.index)
@@ -1079,8 +1105,8 @@ def test_transform_broadcast(self):
         grouped = self.tsframe.groupby({'A': 0, 'B': 0, 'C': 1, 'D': 1},
                                        axis=1)
         result = grouped.transform(np.mean)
-        self.assertTrue(result.index.equals(self.tsframe.index))
-        self.assertTrue(result.columns.equals(self.tsframe.columns))
+        self.assert_index_equal(result.index, self.tsframe.index)
+        self.assert_index_equal(result.columns, self.tsframe.columns)
         for _, gp in grouped:
             agged = gp.mean(1)
             res = result.reindex(columns=gp.columns)
@@ -1191,6 +1217,16 @@ def test_transform_function_aliases(self):
         expected = self.df.groupby('A')['C'].transform(np.mean)
         assert_series_equal(result, expected)

+    def test_series_fast_transform_date(self):
+        # GH 13191
+        df = pd.DataFrame({'grouping': [np.nan, 1, 1, 3],
+                           'd': pd.date_range('2014-1-1', '2014-1-4')})
+        result = df.groupby('grouping')['d'].transform('first')
+        dates = [pd.NaT, pd.Timestamp('2014-1-2'), pd.Timestamp('2014-1-2'),
+                 pd.Timestamp('2014-1-4')]
+        expected = pd.Series(dates, name='d')
+        assert_series_equal(result, expected)
+
     def test_transform_length(self):
         # GH 9697
         df = pd.DataFrame({'col1': [1, 1, 2, 2], 'col2': [1, 2, 3, np.nan]})
@@ -2101,7 +2137,7 @@ def test_groupby_multiple_key(self):
                                          lambda x: x.day], axis=1)

         agged = grouped.agg(lambda x: x.sum())
-        self.assertTrue(agged.index.equals(df.columns))
+        self.assert_index_equal(agged.index, df.columns)
         assert_almost_equal(df.T.values, agged.values)

         agged = grouped.agg(lambda x: x.sum())
@@ -2513,7 +2549,7 @@ def f(piece):
         result = grouped.apply(f)

         tm.assertIsInstance(result, DataFrame)
-        self.assertTrue(result.index.equals(ts.index))
+        self.assert_index_equal(result.index, ts.index)

     def test_apply_series_yield_constant(self):
         result = self.df.groupby(['A', 'B'])['C'].apply(len)
@@ -2523,7 +2559,7 @@ def test_apply_frame_to_series(self):
         grouped = self.df.groupby(['A', 'B'])
         result = grouped.apply(len)
         expected = grouped.count()['C']
-        self.assertTrue(result.index.equals(expected.index))
+        self.assert_index_equal(result.index, expected.index)
         self.assert_numpy_array_equal(result.values, expected.values)

     def test_apply_frame_concat_series(self):
@@ -2637,26 +2673,26 @@ def test_groupby_with_hier_columns(self):
         df = DataFrame(np.random.randn(8, 4), index=index, columns=columns)

         result = df.groupby(level=0).mean()
-        self.assertTrue(result.columns.equals(columns))
+        self.assert_index_equal(result.columns, columns)

         result = df.groupby(level=0, axis=1).mean()
-        self.assertTrue(result.index.equals(df.index))
+        self.assert_index_equal(result.index, df.index)

         result = df.groupby(level=0).agg(np.mean)
-        self.assertTrue(result.columns.equals(columns))
+        self.assert_index_equal(result.columns, columns)

         result = df.groupby(level=0).apply(lambda x: x.mean())
-        self.assertTrue(result.columns.equals(columns))
+        self.assert_index_equal(result.columns, columns)

         result = df.groupby(level=0, axis=1).agg(lambda x: x.mean(1))
-        self.assertTrue(result.columns.equals(Index(['A', 'B'])))
-        self.assertTrue(result.index.equals(df.index))
+        self.assert_index_equal(result.columns, Index(['A', 'B']))
+        self.assert_index_equal(result.index, df.index)

         # add a nuisance column
         sorted_columns, _ = columns.sortlevel(0)
         df['A', 'foo'] = 'bar'
         result = df.groupby(level=0).mean()
-        self.assertTrue(result.columns.equals(df.columns[:-1]))
+        self.assert_index_equal(result.columns, df.columns[:-1])

     def test_pass_args_kwargs(self):
         from numpy import percentile
@@ -2676,7 +2712,7 @@ def f(x, q=None, axis=0):
         trans_expected = ts_grouped.transform(g)

         assert_series_equal(apply_result, agg_expected)
-        assert_series_equal(agg_result, agg_expected)
+        assert_series_equal(agg_result, agg_expected, check_names=False)
         assert_series_equal(trans_result, trans_expected)

         agg_result = ts_grouped.agg(f, q=80)
@@ -2692,11 +2728,11 @@ def f(x, q=None, axis=0):
         apply_result = df_grouped.apply(DataFrame.quantile, .8)
         expected = df_grouped.quantile(.8)
         assert_frame_equal(apply_result, expected)
-        assert_frame_equal(agg_result, expected)
+        assert_frame_equal(agg_result, expected, check_names=False)

         agg_result = df_grouped.agg(f, q=80)
         apply_result = df_grouped.apply(DataFrame.quantile, q=.8)
-        assert_frame_equal(agg_result, expected)
+        assert_frame_equal(agg_result, expected, check_names=False)
         assert_frame_equal(apply_result, expected)

     def test_size(self):
@@ -3377,18 +3413,18 @@ def test_panel_groupby(self):

         tm.assert_panel_equal(agged, agged2)

-        self.assert_numpy_array_equal(agged.items, [0, 1])
+        self.assert_index_equal(agged.items, Index([0, 1]))

         grouped = self.panel.groupby(lambda x: x.month, axis='major')
         agged = grouped.mean()

-        self.assert_numpy_array_equal(agged.major_axis, sorted(list(set(
-            self.panel.major_axis.month))))
+        exp = Index(sorted(list(set(self.panel.major_axis.month))))
+        self.assert_index_equal(agged.major_axis, exp)

         grouped = self.panel.groupby({'A': 0, 'B': 0, 'C': 1, 'D': 1},
                                      axis='minor')
         agged = grouped.mean()
-        self.assert_numpy_array_equal(agged.minor_axis, [0, 1])
+        self.assert_index_equal(agged.minor_axis, Index([0, 1]))

     def test_numpy_groupby(self):
         from pandas.core.groupby import numpy_groupby
@@ -3414,7 +3450,7 @@ def test_groupby_2d_malformed(self):
         d['label'] = ['l1', 'l2']
         tmp = d.groupby(['group']).mean()
         res_values = np.array([[0, 1], [0, 1]], dtype=np.int64)
-        self.assert_numpy_array_equal(tmp.columns, ['zeros', 'ones'])
+        self.assert_index_equal(tmp.columns, Index(['zeros', 'ones']))
         self.assert_numpy_array_equal(tmp.values, res_values)

     def test_int32_overflow(self):
@@ -3453,10 +3489,10 @@ def test_int64_overflow(self):
         right = rg.sum()['values']

         exp_index, _ = left.index.sortlevel(0)
-        self.assertTrue(left.index.equals(exp_index))
+        self.assert_index_equal(left.index, exp_index)

         exp_index, _ = right.index.sortlevel(0)
-        self.assertTrue(right.index.equals(exp_index))
+        self.assert_index_equal(right.index, exp_index)

         tups = list(map(tuple, df[['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'
                                    ]].values))
@@ -3684,9 +3720,9 @@ def test_agg_multiple_functions_maintain_order(self):
         # GH #610
         funcs = [('mean', np.mean), ('max', np.max), ('min', np.min)]
         result = self.df.groupby('A')['C'].agg(funcs)
-        exp_cols = ['mean', 'max', 'min']
+        exp_cols = Index(['mean', 'max', 'min'])

-        self.assert_numpy_array_equal(result.columns, exp_cols)
+        self.assert_index_equal(result.columns, exp_cols)

     def test_multiple_functions_tuples_and_non_tuples(self):
         # #1359
@@ -3831,8 +3867,8 @@ def test_groupby_sort_categorical(self):
                         ['(0, 2.5]', 1, 60], ['(5, 7.5]', 7, 70]],
                        columns=['range', 'foo', 'bar'])
         df['range'] = Categorical(df['range'], ordered=True)
-        index = CategoricalIndex(
-            ['(0, 2.5]', '(2.5, 5]', '(5, 7.5]', '(7.5, 10]'], name='range')
+        index = CategoricalIndex(['(0, 2.5]', '(2.5, 5]', '(5, 7.5]',
+                                  '(7.5, 10]'], name='range', ordered=True)
         result_sort = DataFrame([[1, 60], [5, 30], [6, 40], [10, 10]],
                                 columns=['foo', 'bar'], index=index)

@@ -3842,13 +3878,15 @@ def test_groupby_sort_categorical(self):
             assert_frame_equal(result_sort, df.groupby(col, sort=False).first())

         df['range'] = Categorical(df['range'], ordered=False)
-        index = CategoricalIndex(
-            ['(0, 2.5]', '(2.5, 5]', '(5, 7.5]', '(7.5, 10]'], name='range')
+        index = CategoricalIndex(['(0, 2.5]', '(2.5, 5]', '(5, 7.5]',
+                                  '(7.5, 10]'], name='range')
         result_sort = DataFrame([[1, 60], [5, 30], [6, 40], [10, 10]],
                                 columns=['foo', 'bar'], index=index)

-        index = CategoricalIndex(['(7.5, 10]', '(2.5, 5]',
-                                  '(5, 7.5]', '(0, 2.5]'],
+        index = CategoricalIndex(['(7.5, 10]', '(2.5, 5]', '(5, 7.5]',
+                                  '(0, 2.5]'],
+                                 categories=['(7.5, 10]', '(2.5, 5]',
+                                             '(5, 7.5]', '(0, 2.5]'],
                                  name='range')
         result_nosort = DataFrame([[10, 10], [5, 30], [6, 40], [1, 60]],
                                   index=index, columns=['foo', 'bar'])
@@ -3938,7 +3976,8 @@ def test_groupby_categorical(self):
         result = data.groupby(cats).mean()

         expected = data.groupby(np.asarray(cats)).mean()
-        exp_idx = CategoricalIndex(levels, ordered=True)
+        exp_idx = CategoricalIndex(levels, categories=cats.categories,
+                                   ordered=True)
         expected = expected.reindex(exp_idx)

         assert_frame_equal(result, expected)
@@ -3949,14 +3988,16 @@ def test_groupby_categorical(self):
         idx = cats.codes.argsort()
         ord_labels = np.asarray(cats).take(idx)
         ord_data = data.take(idx)
-        expected = ord_data.groupby(
-            Categorical(ord_labels), sort=False).describe()
+
+        exp_cats = Categorical(ord_labels, ordered=True,
+                               categories=['foo', 'bar', 'baz', 'qux'])
+        expected = ord_data.groupby(exp_cats, sort=False).describe()
         expected.index.names = [None, None]
         assert_frame_equal(desc_result, expected)

         # GH 10460
-        expc = Categorical.from_codes(
-            np.arange(4).repeat(8), levels, ordered=True)
+        expc = Categorical.from_codes(np.arange(4).repeat(8),
+                                      levels, ordered=True)
         exp = CategoricalIndex(expc)
         self.assert_index_equal(desc_result.index.get_level_values(0), exp)
         exp = Index(['count', 'mean', 'std', 'min', '25%', '50%',
@@ -4234,10 +4275,10 @@ def test_multiindex_columns_empty_level(self):
         df = DataFrame([[long(1), 'A']], columns=midx)

         grouped = df.groupby('to filter').groups
-        self.assert_numpy_array_equal(grouped['A'], [0])
+        self.assertEqual(grouped['A'], [0])

         grouped = df.groupby([('to filter', '')]).groups
-        self.assert_numpy_array_equal(grouped['A'], [0])
+        self.assertEqual(grouped['A'], [0])

         df = DataFrame([[long(1), 'A'], [long(2), 'B']], columns=midx)

@@ -4406,7 +4447,7 @@ def test_groupby_datetime64_32_bit(self):
         df = DataFrame({"A": range(2), "B": [pd.Timestamp('2000-01-1')] * 2})

         result = df.groupby("A")["B"].transform(min)
-        expected = Series([pd.Timestamp('2000-01-1')] * 2)
+        expected = Series([pd.Timestamp('2000-01-1')] * 2, name='B')
         assert_series_equal(result, expected)

     def test_groupby_categorical_unequal_len(self):
@@ -4508,7 +4549,7 @@ def test_groupby_with_empty(self):
         grouped = series.groupby(grouper)
         assert next(iter(grouped), None) is None

-    def test_aaa_groupby_with_small_elem(self):
+    def test_groupby_with_small_elem(self):
         # GH 8542
         # length=2
         df = pd.DataFrame({'event': ['start', 'start'],
@@ -4579,10 +4620,10 @@ def test_timezone_info(self):
         import pytz

         df = pd.DataFrame({'a': [1], 'b': [datetime.now(pytz.utc)]})
-        tm.assert_equal(df['b'][0].tzinfo, pytz.utc)
+        self.assertEqual(df['b'][0].tzinfo, pytz.utc)
         df = pd.DataFrame({'a': [1, 2, 3]})
         df['b'] = datetime.now(pytz.utc)
-        tm.assert_equal(df['b'][0].tzinfo, pytz.utc)
+        self.assertEqual(df['b'][0].tzinfo, pytz.utc)

     def test_groupby_with_timegrouper(self):
         # GH 4161
@@ -5812,25 +5853,23 @@ def test_lexsort_indexer(self):
         keys = [[nan] * 5 + list(range(100)) + [nan] * 5]

         # orders=True, na_position='last'
         result = _lexsort_indexer(keys, orders=True, na_position='last')
-        expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
-        assert_equal(result, expected)
+        exp = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
+        tm.assert_numpy_array_equal(result, np.array(exp))

         # orders=True, na_position='first'
         result = _lexsort_indexer(keys, orders=True, na_position='first')
-        expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
-        assert_equal(result, expected)
+        exp = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
+        tm.assert_numpy_array_equal(result, np.array(exp))

         # orders=False, na_position='last'
         result = _lexsort_indexer(keys, orders=False, na_position='last')
-        expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
-                                                                         110))
-        assert_equal(result, expected)
+        exp = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
+        tm.assert_numpy_array_equal(result, np.array(exp))

         # orders=False, na_position='first'
         result = _lexsort_indexer(keys, orders=False, na_position='first')
-        expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
-                                                                       -1))
-        assert_equal(result, expected)
+        exp = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
+        tm.assert_numpy_array_equal(result, np.array(exp))

     def test_nargsort(self):
         # np.argsort(items) places NaNs last
@@ -5856,54 +5895,50 @@ def test_nargsort(self):
         # mergesort, ascending=True, na_position='last'
         result = _nargsort(items, kind='mergesort', ascending=True,
                            na_position='last')
-        expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
-        assert_equal(result, expected)
+        exp = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
+        tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))

         # mergesort, ascending=True, na_position='first'
         result = _nargsort(items, kind='mergesort', ascending=True,
                            na_position='first')
-        expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
-        assert_equal(result, expected)
+        exp = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
+        tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))

         # mergesort, ascending=False, na_position='last'
         result = _nargsort(items, kind='mergesort', ascending=False,
                            na_position='last')
-        expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
-                                                                         110))
-        assert_equal(result, expected)
+        exp = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
+        tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))

         # mergesort, ascending=False, na_position='first'
         result = _nargsort(items, kind='mergesort', ascending=False,
                            na_position='first')
-        expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
-                                                                       -1))
-        assert_equal(result, expected)
+        exp = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
+        tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))

         # mergesort, ascending=True, na_position='last'
         result = _nargsort(items2, kind='mergesort', ascending=True,
                            na_position='last')
-        expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
-        assert_equal(result, expected)
+        exp = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
+        tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))

         # mergesort, ascending=True, na_position='first'
         result = _nargsort(items2, kind='mergesort', ascending=True,
                            na_position='first')
-        expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
-        assert_equal(result, expected)
+        exp = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
+        tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))

         # mergesort, ascending=False, na_position='last'
         result = _nargsort(items2, kind='mergesort', ascending=False,
                            na_position='last')
-        expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
-                                                                         110))
-        assert_equal(result, expected)
+        exp = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
+        tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))

         # mergesort, ascending=False, na_position='first'
         result = _nargsort(items2, kind='mergesort', ascending=False,
                            na_position='first')
-        expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
-                                                                       -1))
-        assert_equal(result, expected)
+        exp = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
+        tm.assert_numpy_array_equal(result, np.array(exp, dtype=np.int64))

     def test_datetime_count(self):
         df = DataFrame({'a': [1, 2, 3] * 2,
@@ -5972,7 +6007,7 @@ def test__cython_agg_general(self):
                 exc.args += ('operation: %s' % op, )
                 raise

-    def test_aa_cython_group_transform_algos(self):
+    def test_cython_group_transform_algos(self):
         # GH 4095
         dtypes = [np.int8, np.int16, np.int32, np.int64, np.uint8,
                   np.uint32, np.uint64, np.float32, np.float64]
@@ -6229,8 +6264,11 @@ def test_groupby_categorical_two_columns(self):
         # Grouping on a single column
         groups_single_key = test.groupby("cat")
         res = groups_single_key.agg('mean')
+
+        exp_index = pd.CategoricalIndex(["a", "b", "c"], name="cat",
+                                        ordered=True)
         exp = DataFrame({"ints": [1.5, 1.5, np.nan], "val": [20, 30, np.nan]},
-                        index=pd.CategoricalIndex(["a", "b", "c"], name="cat"))
+                        index=exp_index)
         tm.assert_frame_equal(res, exp)

         # Grouping on two columns
@@ -6279,6 +6317,29 @@ def test_func(x):
         expected = DataFrame()
         tm.assert_frame_equal(result, expected)

+    def test_groupby_apply_none_first(self):
+        # GH 12824. Tests if apply returns None first.
+        test_df1 = DataFrame({'groups': [1, 1, 1, 2], 'vars': [0, 1, 2, 3]})
+        test_df2 = DataFrame({'groups': [1, 2, 2, 2], 'vars': [0, 1, 2, 3]})
+
+        def test_func(x):
+            if x.shape[0] < 2:
+                return None
+            return x.iloc[[0, -1]]
+
+        result1 = test_df1.groupby('groups').apply(test_func)
+        result2 = test_df2.groupby('groups').apply(test_func)
+        index1 = MultiIndex.from_arrays([[1, 1], [0, 2]],
+                                        names=['groups', None])
+        index2 = MultiIndex.from_arrays([[2, 2], [1, 3]],
+                                        names=['groups', None])
+        expected1 = DataFrame({'groups': [1, 1], 'vars': [0, 2]},
+                              index=index1)
+        expected2 = DataFrame({'groups': [2, 2], 'vars': [1, 3]},
+                              index=index2)
+        tm.assert_frame_equal(result1, expected1)
+        tm.assert_frame_equal(result2, expected2)
+
     def test_first_last_max_min_on_time_data(self):
         # GH 10295
         # Verify that NaT is not in the result of max, min, first and last on
@@ -6357,6 +6418,19 @@ def test_transform_with_non_scalar_group(self):
                               (axis=1, level=1).transform,
                               lambda z: z.div(z.sum(axis=1), axis=0))

+    def test_numpy_compat(self):
+        # see gh-12811
+        df = pd.DataFrame({'A': [1, 2, 1], 'B': [1, 2, 3]})
+        g = df.groupby('A')
+
+        msg = "numpy operations are not valid with groupby"
+
+        for func in ('mean', 'var', 'std', 'cumprod', 'cumsum'):
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(g, func), 1, 2, 3)
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(g, func), foo=1)
+

 def assert_fp_equal(a, b):
     assert (np.abs(a - b) < 1e-12).all()
diff --git a/pandas/tests/test_infer_and_convert.py b/pandas/tests/test_infer_and_convert.py
new file mode 100644
index 0000000000000..a6941369b35be
--- /dev/null
+++ b/pandas/tests/test_infer_and_convert.py
@@ -0,0 +1,444 @@
+# -*- coding: utf-8 -*-
+
+from datetime import datetime, timedelta, date, time
+
+import numpy as np
+import pandas as pd
+import pandas.lib as lib
+import pandas.util.testing as tm
+from pandas import Index
+
+from pandas.compat import long, u, PY2
+
+
+class TestInference(tm.TestCase):
+
+    def test_infer_dtype_bytes(self):
+        compare = 'string' if PY2 else 'bytes'
+
+        # string array of bytes
+        arr = np.array(list('abc'), dtype='S1')
+        self.assertEqual(pd.lib.infer_dtype(arr), compare)
+
+        # object array of bytes
+        arr = arr.astype(object)
+        self.assertEqual(pd.lib.infer_dtype(arr), compare)
+
+    def test_isinf_scalar(self):
+        # GH 11352
+        self.assertTrue(lib.isposinf_scalar(float('inf')))
+        self.assertTrue(lib.isposinf_scalar(np.inf))
+        self.assertFalse(lib.isposinf_scalar(-np.inf))
+        self.assertFalse(lib.isposinf_scalar(1))
+        self.assertFalse(lib.isposinf_scalar('a'))
+
+        self.assertTrue(lib.isneginf_scalar(float('-inf')))
+        self.assertTrue(lib.isneginf_scalar(-np.inf))
+        self.assertFalse(lib.isneginf_scalar(np.inf))
+        self.assertFalse(lib.isneginf_scalar(1))
+        self.assertFalse(lib.isneginf_scalar('a'))
+
+    def test_maybe_convert_numeric_infinities(self):
+        # see gh-13274
+        infinities = ['inf', 'inF', 'iNf', 'Inf',
+                      'iNF', 'InF', 'INf', 'INF']
+        na_values = set(['', 'NULL', 'nan'])
+
+        pos = np.array(['inf'], dtype=np.float64)
+        neg = np.array(['-inf'], dtype=np.float64)
+
+        msg = "Unable to parse string"
+
+        for infinity in infinities:
+            for maybe_int in (True, False):
+                out = lib.maybe_convert_numeric(
+                    np.array([infinity], dtype=object),
+                    na_values, maybe_int)
+                tm.assert_numpy_array_equal(out, pos)
+
+                out = lib.maybe_convert_numeric(
+                    np.array(['-' + infinity], dtype=object),
+                    na_values, maybe_int)
+                tm.assert_numpy_array_equal(out, neg)
+
+                out = lib.maybe_convert_numeric(
+                    np.array([u(infinity)], dtype=object),
+                    na_values, maybe_int)
+                tm.assert_numpy_array_equal(out, pos)
+
+                out = lib.maybe_convert_numeric(
+                    np.array(['+' + infinity], dtype=object),
+                    na_values, maybe_int)
+                tm.assert_numpy_array_equal(out, pos)
+
+                # too many characters
+                with tm.assertRaisesRegexp(ValueError, msg):
+                    lib.maybe_convert_numeric(
+                        np.array(['foo_' + infinity], dtype=object),
+                        na_values, maybe_int)
+
+    def test_maybe_convert_numeric_post_floatify_nan(self):
+        # see gh-13314
+        data = np.array(['1.200', '-999.000', '4.500'], dtype=object)
+        expected = np.array([1.2, np.nan, 4.5], dtype=np.float64)
+        nan_values = set([-999, -999.0])
+
+        for coerce_type in (True, False):
+            out = lib.maybe_convert_numeric(data, nan_values, coerce_type)
+            tm.assert_numpy_array_equal(out, expected)
+
+    def test_convert_infs(self):
+        arr = np.array(['inf', 'inf', 'inf'], dtype='O')
+        result = lib.maybe_convert_numeric(arr, set(), False)
+        self.assertTrue(result.dtype == np.float64)
+
+        arr = np.array(['-inf', '-inf', '-inf'], dtype='O')
+        result = lib.maybe_convert_numeric(arr, set(), False)
+        self.assertTrue(result.dtype == np.float64)
+
+    def test_scientific_no_exponent(self):
+        # See PR 12215
+        arr = np.array(['42E', '2E', '99e', '6e'], dtype='O')
+        result = lib.maybe_convert_numeric(arr, set(), False, True)
+        self.assertTrue(np.all(np.isnan(result)))
+
+    def test_convert_non_hashable(self):
+        # GH13324
+        # make sure that we are handing non-hashables
+        arr = np.array([[10.0, 2], 1.0, 'apple'])
+        result = lib.maybe_convert_numeric(arr, set(), False, True)
+        tm.assert_numpy_array_equal(result, np.array([np.nan, 1.0, np.nan]))
+
+
+class TestTypeInference(tm.TestCase):
+    _multiprocess_can_split_ = True
+
+    def test_length_zero(self):
+        result = lib.infer_dtype(np.array([], dtype='i4'))
+        self.assertEqual(result, 'integer')
+
+        result = lib.infer_dtype([])
+        self.assertEqual(result, 'empty')
+
+    def test_integers(self):
+        arr = np.array([1, 2, 3, np.int64(4), np.int32(5)], dtype='O')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'integer')
+
+        arr = np.array([1, 2, 3, np.int64(4), np.int32(5), 'foo'], dtype='O')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'mixed-integer')
+
+        arr = np.array([1, 2, 3, 4, 5], dtype='i4')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'integer')
+
+    def test_bools(self):
+        arr = np.array([True, False, True, True, True], dtype='O')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'boolean')
+
+        arr = np.array([np.bool_(True), np.bool_(False)], dtype='O')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'boolean')
+
+        arr = np.array([True, False, True, 'foo'], dtype='O')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'mixed')
+
+        arr = np.array([True, False, True], dtype=bool)
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'boolean')
+
+    def test_floats(self):
+        arr = np.array([1., 2., 3., np.float64(4), np.float32(5)], dtype='O')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'floating')
+
+        arr = np.array([1, 2, 3, np.float64(4), np.float32(5), 'foo'],
+                       dtype='O')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'mixed-integer')
+
+        arr = np.array([1, 2, 3, 4, 5], dtype='f4')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'floating')
+
+        arr = np.array([1, 2, 3, 4, 5], dtype='f8')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'floating')
+
+    def test_string(self):
+        pass
+
+    def test_unicode(self):
+        pass
+
+    def test_datetime(self):
+
+        dates = [datetime(2012, 1, x) for x in range(1, 20)]
+        index = Index(dates)
+        self.assertEqual(index.inferred_type, 'datetime64')
+
+    def test_date(self):
+
+        dates = [date(2012, 1, x) for x in range(1, 20)]
+        index = Index(dates)
+        self.assertEqual(index.inferred_type, 'date')
+
+    def test_to_object_array_tuples(self):
+        r = (5, 6)
+        values = [r]
+        result = lib.to_object_array_tuples(values)
+
+        try:
+            # make sure record array works
+            from collections import namedtuple
+            record = namedtuple('record', 'x y')
+            r = record(5, 6)
+            values = [r]
+            result = lib.to_object_array_tuples(values)  # noqa
+        except ImportError:
+            pass
+
+    def test_to_object_array_width(self):
+        # see gh-13320
+        rows = [[1, 2, 3], [4, 5, 6]]
+
+        expected = np.array(rows, dtype=object)
+        out = lib.to_object_array(rows)
+        tm.assert_numpy_array_equal(out, expected)
+
+        expected = np.array(rows, dtype=object)
+        out = lib.to_object_array(rows, min_width=1)
+        tm.assert_numpy_array_equal(out, expected)
+
+        expected = np.array([[1, 2, 3, None, None],
+                             [4, 5, 6, None, None]], dtype=object)
+        out = lib.to_object_array(rows, min_width=5)
+        tm.assert_numpy_array_equal(out, expected)
+
+    def test_object(self):
+
+        # GH 7431
+        # cannot infer more than this as only a single element
+        arr = np.array([None], dtype='O')
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'mixed')
+
+    def test_categorical(self):
+
+        # GH 8974
+        from pandas import Categorical, Series
+        arr = Categorical(list('abc'))
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'categorical')
+
+        result = lib.infer_dtype(Series(arr))
+        self.assertEqual(result, 'categorical')
+
+        arr = Categorical(list('abc'), categories=['cegfab'], ordered=True)
+        result = lib.infer_dtype(arr)
+        self.assertEqual(result, 'categorical')
+
+        result = lib.infer_dtype(Series(arr))
+        self.assertEqual(result, 'categorical')
+
+
+class TestConvert(tm.TestCase):
+
+    def test_convert_objects(self):
+        arr = np.array(['a', 'b', np.nan, np.nan, 'd', 'e', 'f'], dtype='O')
+        result = lib.maybe_convert_objects(arr)
+        self.assertTrue(result.dtype == np.object_)
+
+    def test_convert_objects_ints(self):
+        # test that we can detect many kinds of integers
+        dtypes = ['i1', 'i2', 'i4', 'i8', 'u1', 'u2', 'u4', 'u8']
+
+        for dtype_str in dtypes:
+            arr = np.array(list(np.arange(20, dtype=dtype_str)), dtype='O')
+            self.assertTrue(arr[0].dtype == np.dtype(dtype_str))
+            result = lib.maybe_convert_objects(arr)
+            self.assertTrue(issubclass(result.dtype.type, np.integer))
+
+    def test_convert_objects_complex_number(self):
+        for dtype in np.sctypes['complex']:
+            arr = np.array(list(1j * np.arange(20, dtype=dtype)), dtype='O')
+            self.assertTrue(arr[0].dtype == np.dtype(dtype))
+            result = lib.maybe_convert_objects(arr)
+            self.assertTrue(issubclass(result.dtype.type, np.complexfloating))
+
+
+class Testisscalar(tm.TestCase):
+
+    def test_isscalar_builtin_scalars(self):
+        self.assertTrue(lib.isscalar(None))
+        self.assertTrue(lib.isscalar(True))
+        self.assertTrue(lib.isscalar(False))
+        self.assertTrue(lib.isscalar(0.))
+        self.assertTrue(lib.isscalar(np.nan))
+        self.assertTrue(lib.isscalar('foobar'))
+        self.assertTrue(lib.isscalar(b'foobar'))
+        self.assertTrue(lib.isscalar(u('efoobar')))
+        self.assertTrue(lib.isscalar(datetime(2014, 1, 1)))
+        self.assertTrue(lib.isscalar(date(2014, 1, 1)))
+        self.assertTrue(lib.isscalar(time(12, 0)))
+        self.assertTrue(lib.isscalar(timedelta(hours=1)))
+        self.assertTrue(lib.isscalar(pd.NaT))
+
+    def test_isscalar_builtin_nonscalars(self):
+        self.assertFalse(lib.isscalar({}))
+        self.assertFalse(lib.isscalar([]))
+        self.assertFalse(lib.isscalar([1]))
+        self.assertFalse(lib.isscalar(()))
+        self.assertFalse(lib.isscalar((1, )))
+        self.assertFalse(lib.isscalar(slice(None)))
+        self.assertFalse(lib.isscalar(Ellipsis))
+
+    def test_isscalar_numpy_array_scalars(self):
+        self.assertTrue(lib.isscalar(np.int64(1)))
+        self.assertTrue(lib.isscalar(np.float64(1.)))
+        self.assertTrue(lib.isscalar(np.int32(1)))
+        self.assertTrue(lib.isscalar(np.object_('foobar')))
+        self.assertTrue(lib.isscalar(np.str_('foobar')))
+        self.assertTrue(lib.isscalar(np.unicode_(u('foobar'))))
+        self.assertTrue(lib.isscalar(np.bytes_(b'foobar')))
+        self.assertTrue(lib.isscalar(np.datetime64('2014-01-01')))
+        self.assertTrue(lib.isscalar(np.timedelta64(1, 'h')))
+
+    def test_isscalar_numpy_zerodim_arrays(self):
+        for zerodim in [np.array(1), np.array('foobar'),
+                        np.array(np.datetime64('2014-01-01')),
+                        np.array(np.timedelta64(1, 'h')),
+                        np.array(np.datetime64('NaT'))]:
+            self.assertFalse(lib.isscalar(zerodim))
+            self.assertTrue(lib.isscalar(lib.item_from_zerodim(zerodim)))
+
+    def test_isscalar_numpy_arrays(self):
+        self.assertFalse(lib.isscalar(np.array([])))
+        self.assertFalse(lib.isscalar(np.array([[]])))
+        self.assertFalse(lib.isscalar(np.matrix('1; 2')))
+
+    def test_isscalar_pandas_scalars(self):
+        self.assertTrue(lib.isscalar(pd.Timestamp('2014-01-01')))
+        self.assertTrue(lib.isscalar(pd.Timedelta(hours=1)))
+        self.assertTrue(lib.isscalar(pd.Period('2014-01-01')))
+
+    def test_lisscalar_pandas_containers(self):
+        self.assertFalse(lib.isscalar(pd.Series()))
+        self.assertFalse(lib.isscalar(pd.Series([1])))
+        self.assertFalse(lib.isscalar(pd.DataFrame()))
+        self.assertFalse(lib.isscalar(pd.DataFrame([[1]])))
+        self.assertFalse(lib.isscalar(pd.Panel()))
+        self.assertFalse(lib.isscalar(pd.Panel([[[1]]])))
+        self.assertFalse(lib.isscalar(pd.Index([])))
+        self.assertFalse(lib.isscalar(pd.Index([1])))
+
+
+class TestParseSQL(tm.TestCase):
+
+    def test_convert_sql_column_floats(self):
+        arr = np.array([1.5, None, 3, 4.2], dtype=object)
+        result = lib.convert_sql_column(arr)
+        expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8')
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_convert_sql_column_strings(self):
+        arr = np.array(['1.5', None, '3', '4.2'], dtype=object)
+        result = lib.convert_sql_column(arr)
+        expected = np.array(['1.5', np.nan, '3', '4.2'], dtype=object)
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_convert_sql_column_unicode(self):
+        arr = np.array([u('1.5'), None, u('3'), u('4.2')],
+                       dtype=object)
+        result = lib.convert_sql_column(arr)
+        expected = np.array([u('1.5'), np.nan, u('3'), u('4.2')],
+                            dtype=object)
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_convert_sql_column_ints(self):
+        arr = np.array([1, 2, 3, 4], dtype='O')
+        arr2 = np.array([1, 2, 3, 4], dtype='i4').astype('O')
+        result = lib.convert_sql_column(arr)
+        result2 = lib.convert_sql_column(arr2)
+        expected = np.array([1, 2, 3, 4], dtype='i8')
+        self.assert_numpy_array_equal(result, expected)
+        self.assert_numpy_array_equal(result2, expected)
+
+        arr = np.array([1, 2, 3, None, 4], dtype='O')
+        result = lib.convert_sql_column(arr)
+        expected = np.array([1, 2, 3, np.nan, 4], dtype='f8')
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_convert_sql_column_longs(self):
+        arr = np.array([long(1), long(2), long(3), long(4)], dtype='O')
+        result = lib.convert_sql_column(arr)
+        expected = np.array([1, 2, 3, 4], dtype='i8')
+        self.assert_numpy_array_equal(result, expected)
+
+        arr = np.array([long(1), long(2), long(3), None, long(4)], dtype='O')
+        result = lib.convert_sql_column(arr)
+        expected = np.array([1, 2, 3, np.nan, 4], dtype='f8')
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_convert_sql_column_bools(self):
+        arr = np.array([True, False, True, False], dtype='O')
+        result = lib.convert_sql_column(arr)
+        expected = np.array([True, False, True, False], dtype=bool)
+        self.assert_numpy_array_equal(result, expected)
+
+        arr = np.array([True, False, None, False], dtype='O')
+        result = lib.convert_sql_column(arr)
+        expected = np.array([True, False, np.nan, False], dtype=object)
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_convert_sql_column_decimals(self):
+        from decimal import Decimal
+        arr = np.array([Decimal('1.5'), None, Decimal('3'), Decimal('4.2')])
+        result = lib.convert_sql_column(arr)
+        expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8')
+        self.assert_numpy_array_equal(result, expected)
+
+    def test_convert_downcast_int64(self):
+        from pandas.parser import na_values
+
+        arr = np.array([1, 2, 7, 8, 10], dtype=np.int64)
+        expected = np.array([1, 2, 7, 8, 10], dtype=np.int8)
+
+        # default argument
+        result = lib.downcast_int64(arr, na_values)
+        self.assert_numpy_array_equal(result, expected)
+
+        result = lib.downcast_int64(arr, na_values, use_unsigned=False)
+        self.assert_numpy_array_equal(result, expected)
+
+        expected = np.array([1, 2, 7, 8, 10], dtype=np.uint8)
+        result = lib.downcast_int64(arr, na_values, use_unsigned=True)
+        self.assert_numpy_array_equal(result, expected)
+
+        # still cast to int8 despite use_unsigned=True
+        # because of the negative number as an element
+        arr = np.array([1, 2, -7, 8, 10], dtype=np.int64)
+        expected = np.array([1, 2, -7, 8, 10], dtype=np.int8)
+        result = lib.downcast_int64(arr, na_values, use_unsigned=True)
+        self.assert_numpy_array_equal(result, expected)
+
+        arr = np.array([1, 2, 7, 8, 300], dtype=np.int64)
+        expected = np.array([1, 2, 7, 8, 300], dtype=np.int16)
+        result = lib.downcast_int64(arr, na_values)
+        self.assert_numpy_array_equal(result, expected)
+
+        int8_na = na_values[np.int8]
+        int64_na = na_values[np.int64]
+        arr = np.array([int64_na, 2, 3, 10, 15], dtype=np.int64)
+        expected = np.array([int8_na, 2, 3, 10, 15], dtype=np.int8)
+        result = lib.downcast_int64(arr, na_values)
+        self.assert_numpy_array_equal(result, expected)
+
+if __name__ == '__main__':
+    import nose
+
+    nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+                   exit=False)
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index bf9574f48913a..6a97f195abba7 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -17,15 +17,19 @@
 import pandas.core.algorithms as algos
 import pandas.util.testing as tm
 import pandas as pd
+from pandas import lib
 from pandas.util.testing import (assert_almost_equal, assert_frame_equal,
                                  randn, assert_series_equal)
 from pandas.compat import zip, u


 def assert_block_equal(left, right):
-    assert_almost_equal(left.values, right.values)
+    tm.assert_numpy_array_equal(left.values, right.values)
     assert (left.dtype == right.dtype)
-    assert_almost_equal(left.mgr_locs, right.mgr_locs)
+    tm.assertIsInstance(left.mgr_locs, lib.BlockPlacement)
+    tm.assertIsInstance(right.mgr_locs, lib.BlockPlacement)
+    tm.assert_numpy_array_equal(left.mgr_locs.as_array,
+                                right.mgr_locs.as_array)


 def get_numeric_mat(shape):
@@ -207,7 +211,9 @@ def _check(blk):
         _check(self.bool_block)

     def test_mgr_locs(self):
-        assert_almost_equal(self.fblock.mgr_locs, [0, 2, 4])
+        tm.assertIsInstance(self.fblock.mgr_locs, lib.BlockPlacement)
+        tm.assert_numpy_array_equal(self.fblock.mgr_locs.as_array,
+                                    np.array([0, 2, 4], dtype=np.int64))

     def test_attrs(self):
         self.assertEqual(self.fblock.shape, self.fblock.values.shape)
@@ -223,9 +229,10 @@ def test_merge(self):
         ablock = make_block(avals, ref_cols.get_indexer(['e', 'b']))
         bblock = make_block(bvals, ref_cols.get_indexer(['a', 'd']))
         merged = ablock.merge(bblock)
-        assert_almost_equal(merged.mgr_locs, [0, 1, 2, 3])
-        assert_almost_equal(merged.values[[0, 2]], avals)
-        assert_almost_equal(merged.values[[1, 3]], bvals)
+        tm.assert_numpy_array_equal(merged.mgr_locs.as_array,
+                                    np.array([0, 1, 2, 3], dtype=np.int64))
+        tm.assert_numpy_array_equal(merged.values[[0, 2]], np.array(avals))
+        tm.assert_numpy_array_equal(merged.values[[1, 3]], np.array(bvals))

         # TODO: merge with mixed type?

@@ -246,17 +253,22 @@ def test_insert(self):
     def test_delete(self):
         newb = self.fblock.copy()
         newb.delete(0)
-        assert_almost_equal(newb.mgr_locs, [2, 4])
+        tm.assertIsInstance(newb.mgr_locs, lib.BlockPlacement)
+        tm.assert_numpy_array_equal(newb.mgr_locs.as_array,
+                                    np.array([2, 4], dtype=np.int64))
        self.assertTrue((newb.values[0] == 1).all())

         newb = self.fblock.copy()
         newb.delete(1)
-        assert_almost_equal(newb.mgr_locs, [0, 4])
+        tm.assertIsInstance(newb.mgr_locs, lib.BlockPlacement)
+        tm.assert_numpy_array_equal(newb.mgr_locs.as_array,
+                                    np.array([0, 4], dtype=np.int64))
         self.assertTrue((newb.values[1] == 2).all())

         newb = self.fblock.copy()
         newb.delete(2)
-        assert_almost_equal(newb.mgr_locs, [0, 2])
+        tm.assert_numpy_array_equal(newb.mgr_locs.as_array,
+                                    np.array([0, 2], dtype=np.int64))
         self.assertTrue((newb.values[1] == 1).all())

         newb = self.fblock.copy()
@@ -399,9 +411,9 @@ def test_get_scalar(self):
             for i, index in enumerate(self.mgr.axes[1]):
                 res = self.mgr.get_scalar((item, index))
                 exp = self.mgr.get(item, fastpath=False)[i]
-                assert_almost_equal(res, exp)
+                self.assertEqual(res, exp)
                 exp = self.mgr.get(item).internal_values()[i]
-                assert_almost_equal(res, exp)
+                self.assertEqual(res, exp)

     def test_get(self):
         cols = Index(list('abc'))
@@ -421,10 +433,14 @@ def test_set(self):
         mgr.set('d', np.array(['foo'] * 3))
         mgr.set('b', np.array(['bar'] * 3))

-        assert_almost_equal(mgr.get('a').internal_values(), [0] * 3)
-        assert_almost_equal(mgr.get('b').internal_values(), ['bar'] * 3)
-        assert_almost_equal(mgr.get('c').internal_values(), [2] * 3)
-        assert_almost_equal(mgr.get('d').internal_values(), ['foo'] * 3)
+        tm.assert_numpy_array_equal(mgr.get('a').internal_values(),
+                                    np.array([0] * 3))
+        tm.assert_numpy_array_equal(mgr.get('b').internal_values(),
+                                    np.array(['bar'] * 3, dtype=np.object_))
+        tm.assert_numpy_array_equal(mgr.get('c').internal_values(),
+                                    np.array([2] * 3))
+        tm.assert_numpy_array_equal(mgr.get('d').internal_values(),
+                                    np.array(['foo'] * 3, dtype=np.object_))

     def test_insert(self):
         self.mgr.insert(0, 'inserted', np.arange(N))
@@ -689,8 +705,9 @@ def test_consolidate_ordering_issues(self):
         self.assertEqual(cons.nblocks, 4)
         cons = self.mgr.consolidate().get_numeric_data()
         self.assertEqual(cons.nblocks, 1)
-        assert_almost_equal(cons.blocks[0].mgr_locs,
-                            np.arange(len(cons.items)))
+        tm.assertIsInstance(cons.blocks[0].mgr_locs, lib.BlockPlacement)
+        tm.assert_numpy_array_equal(cons.blocks[0].mgr_locs.as_array,
+                                    np.arange(len(cons.items), dtype=np.int64))

     def test_reindex_index(self):
         pass
@@ -786,18 +803,18 @@ def test_get_bool_data(self):
                             bools.get('bool').internal_values())

         bools.set('bool', np.array([True, False, True]))
-        assert_almost_equal(
-            mgr.get('bool', fastpath=False), [True, False, True])
-        assert_almost_equal(
-            mgr.get('bool').internal_values(), [True, False, True])
+        tm.assert_numpy_array_equal(mgr.get('bool', fastpath=False),
+                                    np.array([True, False, True]))
+        tm.assert_numpy_array_equal(mgr.get('bool').internal_values(),
+                                    np.array([True, False, True]))

         # Check sharing
         bools2 = mgr.get_bool_data(copy=True)
         bools2.set('bool', np.array([False, True, False]))
-        assert_almost_equal(
-            mgr.get('bool',
fastpath=False), [True, False, True]) - assert_almost_equal( - mgr.get('bool').internal_values(), [True, False, True]) + tm.assert_numpy_array_equal(mgr.get('bool', fastpath=False), + np.array([True, False, True])) + tm.assert_numpy_array_equal(mgr.get('bool').internal_values(), + np.array([True, False, True])) def test_unicode_repr_doesnt_raise(self): repr(create_mgr(u('b,\u05d0: object'))) @@ -892,8 +909,7 @@ def assert_slice_ok(mgr, axis, slobj): mat_slobj = (slice(None), ) * axis + (slobj, ) tm.assert_numpy_array_equal(mat[mat_slobj], sliced.as_matrix(), check_dtype=False) - tm.assert_numpy_array_equal(mgr.axes[axis][slobj], - sliced.axes[axis]) + tm.assert_index_equal(mgr.axes[axis][slobj], sliced.axes[axis]) for mgr in self.MANAGERS: for ax in range(mgr.ndim): @@ -931,8 +947,8 @@ def assert_take_ok(mgr, axis, indexer): taken = mgr.take(indexer, axis) tm.assert_numpy_array_equal(np.take(mat, indexer, axis), taken.as_matrix(), check_dtype=False) - tm.assert_numpy_array_equal(mgr.axes[axis].take(indexer), - taken.axes[axis]) + tm.assert_index_equal(mgr.axes[axis].take(indexer), + taken.axes[axis]) for mgr in self.MANAGERS: for ax in range(mgr.ndim): diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py index 6912e3a7ff68c..10a6bb5c75b01 100644 --- a/pandas/tests/test_lib.py +++ b/pandas/tests/test_lib.py @@ -1,19 +1,9 @@ # -*- coding: utf-8 -*- -from datetime import datetime, timedelta, date, time - import numpy as np -import pandas as pd import pandas.lib as lib import pandas.util.testing as tm -from pandas.compat import long, u, PY2 - - -def _assert_same_values_and_dtype(res, exp): - tm.assert_equal(res.dtype, exp.dtype) - tm.assert_almost_equal(res, exp) - class TestMisc(tm.TestCase): @@ -34,16 +24,21 @@ def test_max_len_string_array(self): tm.assertRaises(TypeError, lambda: lib.max_len_string_array(arr.astype('U'))) - def test_infer_dtype_bytes(self): - compare = 'string' if PY2 else 'bytes' + def test_fast_unique_multiple_list_gen_sort(self): + 
keys = [['p', 'a'], ['n', 'd'], ['a', 's']] + + gen = (key for key in keys) + expected = np.array(['a', 'd', 'n', 'p', 's']) + out = lib.fast_unique_multiple_list_gen(gen, sort=True) + tm.assert_numpy_array_equal(np.array(out), expected) - # string array of bytes - arr = np.array(list('abc'), dtype='S1') - self.assertEqual(pd.lib.infer_dtype(arr), compare) + gen = (key for key in keys) + expected = np.array(['p', 'a', 'n', 'd', 's']) + out = lib.fast_unique_multiple_list_gen(gen, sort=False) + tm.assert_numpy_array_equal(np.array(out), expected) - # object array of bytes - arr = arr.astype(object) - self.assertEqual(pd.lib.infer_dtype(arr), compare) + +class TestIndexing(tm.TestCase): def test_maybe_indices_to_slice_left_edge(self): target = np.arange(100) @@ -174,151 +169,58 @@ def test_maybe_indices_to_slice_middle(self): self.assert_numpy_array_equal(maybe_slice, indices) self.assert_numpy_array_equal(target[indices], target[maybe_slice]) - def test_isinf_scalar(self): - # GH 11352 - self.assertTrue(lib.isposinf_scalar(float('inf'))) - self.assertTrue(lib.isposinf_scalar(np.inf)) - self.assertFalse(lib.isposinf_scalar(-np.inf)) - self.assertFalse(lib.isposinf_scalar(1)) - self.assertFalse(lib.isposinf_scalar('a')) - - self.assertTrue(lib.isneginf_scalar(float('-inf'))) - self.assertTrue(lib.isneginf_scalar(-np.inf)) - self.assertFalse(lib.isneginf_scalar(np.inf)) - self.assertFalse(lib.isneginf_scalar(1)) - self.assertFalse(lib.isneginf_scalar('a')) - - -class Testisscalar(tm.TestCase): - - def test_isscalar_builtin_scalars(self): - self.assertTrue(lib.isscalar(None)) - self.assertTrue(lib.isscalar(True)) - self.assertTrue(lib.isscalar(False)) - self.assertTrue(lib.isscalar(0.)) - self.assertTrue(lib.isscalar(np.nan)) - self.assertTrue(lib.isscalar('foobar')) - self.assertTrue(lib.isscalar(b'foobar')) - self.assertTrue(lib.isscalar(u('efoobar'))) - self.assertTrue(lib.isscalar(datetime(2014, 1, 1))) - self.assertTrue(lib.isscalar(date(2014, 1, 1))) - 
self.assertTrue(lib.isscalar(time(12, 0))) - self.assertTrue(lib.isscalar(timedelta(hours=1))) - self.assertTrue(lib.isscalar(pd.NaT)) - - def test_isscalar_builtin_nonscalars(self): - self.assertFalse(lib.isscalar({})) - self.assertFalse(lib.isscalar([])) - self.assertFalse(lib.isscalar([1])) - self.assertFalse(lib.isscalar(())) - self.assertFalse(lib.isscalar((1, ))) - self.assertFalse(lib.isscalar(slice(None))) - self.assertFalse(lib.isscalar(Ellipsis)) - - def test_isscalar_numpy_array_scalars(self): - self.assertTrue(lib.isscalar(np.int64(1))) - self.assertTrue(lib.isscalar(np.float64(1.))) - self.assertTrue(lib.isscalar(np.int32(1))) - self.assertTrue(lib.isscalar(np.object_('foobar'))) - self.assertTrue(lib.isscalar(np.str_('foobar'))) - self.assertTrue(lib.isscalar(np.unicode_(u('foobar')))) - self.assertTrue(lib.isscalar(np.bytes_(b'foobar'))) - self.assertTrue(lib.isscalar(np.datetime64('2014-01-01'))) - self.assertTrue(lib.isscalar(np.timedelta64(1, 'h'))) - - def test_isscalar_numpy_zerodim_arrays(self): - for zerodim in [np.array(1), np.array('foobar'), - np.array(np.datetime64('2014-01-01')), - np.array(np.timedelta64(1, 'h')), - np.array(np.datetime64('NaT'))]: - self.assertFalse(lib.isscalar(zerodim)) - self.assertTrue(lib.isscalar(lib.item_from_zerodim(zerodim))) - - def test_isscalar_numpy_arrays(self): - self.assertFalse(lib.isscalar(np.array([]))) - self.assertFalse(lib.isscalar(np.array([[]]))) - self.assertFalse(lib.isscalar(np.matrix('1; 2'))) - - def test_isscalar_pandas_scalars(self): - self.assertTrue(lib.isscalar(pd.Timestamp('2014-01-01'))) - self.assertTrue(lib.isscalar(pd.Timedelta(hours=1))) - self.assertTrue(lib.isscalar(pd.Period('2014-01-01'))) - - def test_lisscalar_pandas_containers(self): - self.assertFalse(lib.isscalar(pd.Series())) - self.assertFalse(lib.isscalar(pd.Series([1]))) - self.assertFalse(lib.isscalar(pd.DataFrame())) - self.assertFalse(lib.isscalar(pd.DataFrame([[1]]))) - self.assertFalse(lib.isscalar(pd.Panel())) - 
self.assertFalse(lib.isscalar(pd.Panel([[[1]]]))) - self.assertFalse(lib.isscalar(pd.Index([]))) - self.assertFalse(lib.isscalar(pd.Index([1]))) - - -class TestParseSQL(tm.TestCase): - - def test_convert_sql_column_floats(self): - arr = np.array([1.5, None, 3, 4.2], dtype=object) - result = lib.convert_sql_column(arr) - expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8') - _assert_same_values_and_dtype(result, expected) - - def test_convert_sql_column_strings(self): - arr = np.array(['1.5', None, '3', '4.2'], dtype=object) - result = lib.convert_sql_column(arr) - expected = np.array(['1.5', np.nan, '3', '4.2'], dtype=object) - _assert_same_values_and_dtype(result, expected) - - def test_convert_sql_column_unicode(self): - arr = np.array([u('1.5'), None, u('3'), u('4.2')], - dtype=object) - result = lib.convert_sql_column(arr) - expected = np.array([u('1.5'), np.nan, u('3'), u('4.2')], - dtype=object) - _assert_same_values_and_dtype(result, expected) - - def test_convert_sql_column_ints(self): - arr = np.array([1, 2, 3, 4], dtype='O') - arr2 = np.array([1, 2, 3, 4], dtype='i4').astype('O') - result = lib.convert_sql_column(arr) - result2 = lib.convert_sql_column(arr2) - expected = np.array([1, 2, 3, 4], dtype='i8') - _assert_same_values_and_dtype(result, expected) - _assert_same_values_and_dtype(result2, expected) - - arr = np.array([1, 2, 3, None, 4], dtype='O') - result = lib.convert_sql_column(arr) - expected = np.array([1, 2, 3, np.nan, 4], dtype='f8') - _assert_same_values_and_dtype(result, expected) - - def test_convert_sql_column_longs(self): - arr = np.array([long(1), long(2), long(3), long(4)], dtype='O') - result = lib.convert_sql_column(arr) - expected = np.array([1, 2, 3, 4], dtype='i8') - _assert_same_values_and_dtype(result, expected) - - arr = np.array([long(1), long(2), long(3), None, long(4)], dtype='O') - result = lib.convert_sql_column(arr) - expected = np.array([1, 2, 3, np.nan, 4], dtype='f8') - _assert_same_values_and_dtype(result, expected) 
- - def test_convert_sql_column_bools(self): - arr = np.array([True, False, True, False], dtype='O') - result = lib.convert_sql_column(arr) - expected = np.array([True, False, True, False], dtype=bool) - _assert_same_values_and_dtype(result, expected) - - arr = np.array([True, False, None, False], dtype='O') - result = lib.convert_sql_column(arr) - expected = np.array([True, False, np.nan, False], dtype=object) - _assert_same_values_and_dtype(result, expected) - - def test_convert_sql_column_decimals(self): - from decimal import Decimal - arr = np.array([Decimal('1.5'), None, Decimal('3'), Decimal('4.2')]) - result = lib.convert_sql_column(arr) - expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8') - _assert_same_values_and_dtype(result, expected) + def test_maybe_booleans_to_slice(self): + arr = np.array([0, 0, 1, 1, 1, 0, 1], dtype=np.uint8) + result = lib.maybe_booleans_to_slice(arr) + self.assertTrue(result.dtype == np.bool_) + + result = lib.maybe_booleans_to_slice(arr[:0]) + self.assertTrue(result == slice(0, 0)) + + def test_get_reverse_indexer(self): + indexer = np.array([-1, -1, 1, 2, 0, -1, 3, 4], dtype=np.int64) + result = lib.get_reverse_indexer(indexer, 5) + expected = np.array([4, 2, 3, 6, 7], dtype=np.int64) + self.assertTrue(np.array_equal(result, expected)) + + +def test_duplicated_with_nas(): + keys = np.array([0, 1, np.nan, 0, 2, np.nan], dtype=object) + + result = lib.duplicated(keys) + expected = [False, False, False, True, False, True] + assert (np.array_equal(result, expected)) + + result = lib.duplicated(keys, keep='first') + expected = [False, False, False, True, False, True] + assert (np.array_equal(result, expected)) + + result = lib.duplicated(keys, keep='last') + expected = [True, False, True, False, False, False] + assert (np.array_equal(result, expected)) + + result = lib.duplicated(keys, keep=False) + expected = [True, False, True, True, False, True] + assert (np.array_equal(result, expected)) + + keys = np.empty(8, dtype=object) + 
for i, t in enumerate(zip([0, 0, np.nan, np.nan] * 2, + [0, np.nan, 0, np.nan] * 2)): + keys[i] = t + + result = lib.duplicated(keys) + falses = [False] * 4 + trues = [True] * 4 + expected = falses + trues + assert (np.array_equal(result, expected)) + + result = lib.duplicated(keys, keep='last') + expected = trues + falses + assert (np.array_equal(result, expected)) + + result = lib.duplicated(keys, keep=False) + expected = trues + trues + assert (np.array_equal(result, expected)) if __name__ == '__main__': import nose diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py index 63a8b49ab4b00..c4ccef13f2844 100644 --- a/pandas/tests/test_multilevel.py +++ b/pandas/tests/test_multilevel.py @@ -87,19 +87,19 @@ def test_append_index(self): (1.2, datetime.datetime(2011, 1, 2, tzinfo=tz)), (1.3, datetime.datetime(2011, 1, 3, tzinfo=tz))] expected = Index([1.1, 1.2, 1.3] + expected_tuples) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) result = midx_lv2.append(idx1) expected = Index(expected_tuples + [1.1, 1.2, 1.3]) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) result = midx_lv2.append(midx_lv2) - expected = MultiIndex.from_arrays([idx1.append(idx1), idx2.append(idx2) - ]) - self.assertTrue(result.equals(expected)) + expected = MultiIndex.from_arrays([idx1.append(idx1), + idx2.append(idx2)]) + self.assert_index_equal(result, expected) result = midx_lv2.append(midx_lv3) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) result = midx_lv3.append(midx_lv2) expected = Index._simple_new( @@ -107,7 +107,7 @@ def test_append_index(self): (1.2, datetime.datetime(2011, 1, 2, tzinfo=tz), 'B'), (1.3, datetime.datetime(2011, 1, 3, tzinfo=tz), 'C')] + expected_tuples), None) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) def test_dataframe_constructor(self): multi = DataFrame(np.random.randn(4, 4), @@ 
-966,9 +966,7 @@ def check(left, right): assert_series_equal(left, right) self.assertFalse(left.index.is_unique) li, ri = left.index, right.index - for i in range(ri.nlevels): - tm.assert_numpy_array_equal(li.levels[i], ri.levels[i]) - tm.assert_numpy_array_equal(li.labels[i], ri.labels[i]) + tm.assert_index_equal(li, ri) df = DataFrame(np.arange(12).reshape(4, 3), index=list('abab'), @@ -1542,8 +1540,8 @@ def aggf(x): # for good measure, groupby detail level_index = frame._get_axis(axis).levels[level] - self.assertTrue(leftside._get_axis(axis).equals(level_index)) - self.assertTrue(rightside._get_axis(axis).equals(level_index)) + self.assert_index_equal(leftside._get_axis(axis), level_index) + self.assert_index_equal(rightside._get_axis(axis), level_index) assert_frame_equal(leftside, rightside) @@ -2211,12 +2209,11 @@ def test_datetimeindex(self): tz='US/Eastern') idx = MultiIndex.from_arrays([idx1, idx2]) - expected1 = pd.DatetimeIndex( - ['2013-04-01 9:00', '2013-04-02 9:00', '2013-04-03 9:00' - ], tz='Asia/Tokyo') + expected1 = pd.DatetimeIndex(['2013-04-01 9:00', '2013-04-02 9:00', + '2013-04-03 9:00'], tz='Asia/Tokyo') - self.assertTrue(idx.levels[0].equals(expected1)) - self.assertTrue(idx.levels[1].equals(idx2)) + self.assert_index_equal(idx.levels[0], expected1) + self.assert_index_equal(idx.levels[1], idx2) # from datetime combos # GH 7888 @@ -2256,18 +2253,20 @@ def test_set_index_datetime(self): df.index = pd.to_datetime(df.pop('datetime'), utc=True) df.index = df.index.tz_localize('UTC').tz_convert('US/Pacific') - expected = pd.DatetimeIndex( - ['2011-07-19 07:00:00', '2011-07-19 08:00:00', - '2011-07-19 09:00:00']) + expected = pd.DatetimeIndex(['2011-07-19 07:00:00', + '2011-07-19 08:00:00', + '2011-07-19 09:00:00'], name='datetime') expected = expected.tz_localize('UTC').tz_convert('US/Pacific') df = df.set_index('label', append=True) - self.assertTrue(df.index.levels[0].equals(expected)) - self.assertTrue(df.index.levels[1].equals(pd.Index(['a', 
'b']))) + self.assert_index_equal(df.index.levels[0], expected) + self.assert_index_equal(df.index.levels[1], + pd.Index(['a', 'b'], name='label')) df = df.swaplevel(0, 1) - self.assertTrue(df.index.levels[0].equals(pd.Index(['a', 'b']))) - self.assertTrue(df.index.levels[1].equals(expected)) + self.assert_index_equal(df.index.levels[0], + pd.Index(['a', 'b'], name='label')) + self.assert_index_equal(df.index.levels[1], expected) df = DataFrame(np.random.random(6)) idx1 = pd.DatetimeIndex(['2011-07-19 07:00:00', '2011-07-19 08:00:00', @@ -2287,17 +2286,17 @@ def test_set_index_datetime(self): expected1 = pd.DatetimeIndex(['2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00'], tz='US/Eastern') - expected2 = pd.DatetimeIndex( - ['2012-04-01 09:00', '2012-04-02 09:00'], tz='US/Eastern') + expected2 = pd.DatetimeIndex(['2012-04-01 09:00', '2012-04-02 09:00'], + tz='US/Eastern') - self.assertTrue(df.index.levels[0].equals(expected1)) - self.assertTrue(df.index.levels[1].equals(expected2)) - self.assertTrue(df.index.levels[2].equals(idx3)) + self.assert_index_equal(df.index.levels[0], expected1) + self.assert_index_equal(df.index.levels[1], expected2) + self.assert_index_equal(df.index.levels[2], idx3) # GH 7092 - self.assertTrue(df.index.get_level_values(0).equals(idx1)) - self.assertTrue(df.index.get_level_values(1).equals(idx2)) - self.assertTrue(df.index.get_level_values(2).equals(idx3)) + self.assert_index_equal(df.index.get_level_values(0), idx1) + self.assert_index_equal(df.index.get_level_values(1), idx2) + self.assert_index_equal(df.index.get_level_values(2), idx3) def test_reset_index_datetime(self): # GH 3950 @@ -2404,13 +2403,13 @@ def test_set_index_period(self): expected1 = pd.period_range('2011-01-01', periods=3, freq='M') expected2 = pd.period_range('2013-01-01 09:00', periods=2, freq='H') - self.assertTrue(df.index.levels[0].equals(expected1)) - self.assertTrue(df.index.levels[1].equals(expected2)) - 
self.assertTrue(df.index.levels[2].equals(idx3)) + self.assert_index_equal(df.index.levels[0], expected1) + self.assert_index_equal(df.index.levels[1], expected2) + self.assert_index_equal(df.index.levels[2], idx3) - self.assertTrue(df.index.get_level_values(0).equals(idx1)) - self.assertTrue(df.index.get_level_values(1).equals(idx2)) - self.assertTrue(df.index.get_level_values(2).equals(idx3)) + self.assert_index_equal(df.index.get_level_values(0), idx1) + self.assert_index_equal(df.index.get_level_values(1), idx2) + self.assert_index_equal(df.index.get_level_values(2), idx3) def test_repeat(self): # GH 9361 diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py index d33a64002c3b1..904bedde03312 100644 --- a/pandas/tests/test_nanops.py +++ b/pandas/tests/test_nanops.py @@ -799,30 +799,31 @@ def setUp(self): def test_nanvar_all_finite(self): samples = self.samples actual_variance = nanops.nanvar(samples) - np.testing.assert_almost_equal(actual_variance, self.variance, - decimal=2) + tm.assert_almost_equal(actual_variance, self.variance, + check_less_precise=2) def test_nanvar_nans(self): samples = np.nan * np.ones(2 * self.samples.shape[0]) samples[::2] = self.samples actual_variance = nanops.nanvar(samples, skipna=True) - np.testing.assert_almost_equal(actual_variance, self.variance, - decimal=2) + tm.assert_almost_equal(actual_variance, self.variance, + check_less_precise=2) actual_variance = nanops.nanvar(samples, skipna=False) - np.testing.assert_almost_equal(actual_variance, np.nan, decimal=2) + tm.assert_almost_equal(actual_variance, np.nan, check_less_precise=2) def test_nanstd_nans(self): samples = np.nan * np.ones(2 * self.samples.shape[0]) samples[::2] = self.samples actual_std = nanops.nanstd(samples, skipna=True) - np.testing.assert_almost_equal(actual_std, self.variance ** 0.5, - decimal=2) + tm.assert_almost_equal(actual_std, self.variance ** 0.5, + check_less_precise=2) actual_std = nanops.nanvar(samples, skipna=False) - 
np.testing.assert_almost_equal(actual_std, np.nan, decimal=2) + tm.assert_almost_equal(actual_std, np.nan, + check_less_precise=2) def test_nanvar_axis(self): # Generate some sample data. @@ -831,8 +832,8 @@ def test_nanvar_axis(self): samples = np.vstack([samples_norm, samples_unif]) actual_variance = nanops.nanvar(samples, axis=1) - np.testing.assert_array_almost_equal(actual_variance, np.array( - [self.variance, 1.0 / 12]), decimal=2) + tm.assert_almost_equal(actual_variance, np.array( + [self.variance, 1.0 / 12]), check_less_precise=2) def test_nanvar_ddof(self): n = 5 @@ -845,13 +846,16 @@ def test_nanvar_ddof(self): # The unbiased estimate. var = 1.0 / 12 - np.testing.assert_almost_equal(variance_1, var, decimal=2) + tm.assert_almost_equal(variance_1, var, + check_less_precise=2) + # The underestimated variance. - np.testing.assert_almost_equal(variance_0, (n - 1.0) / n * var, - decimal=2) + tm.assert_almost_equal(variance_0, (n - 1.0) / n * var, + check_less_precise=2) + # The overestimated variance. - np.testing.assert_almost_equal(variance_2, (n - 1.0) / (n - 2.0) * var, - decimal=2) + tm.assert_almost_equal(variance_2, (n - 1.0) / (n - 2.0) * var, + check_less_precise=2) def test_ground_truth(self): # Test against values that were precomputed with Numpy. @@ -873,17 +877,15 @@ def test_ground_truth(self): for axis in range(2): for ddof in range(3): var = nanops.nanvar(samples, skipna=True, axis=axis, ddof=ddof) - np.testing.assert_array_almost_equal(var[:3], - variance[axis, ddof]) - np.testing.assert_equal(var[3], np.nan) + tm.assert_almost_equal(var[:3], variance[axis, ddof]) + self.assertTrue(np.isnan(var[3])) # Test nanstd. 
for axis in range(2): for ddof in range(3): std = nanops.nanstd(samples, skipna=True, axis=axis, ddof=ddof) - np.testing.assert_array_almost_equal( - std[:3], variance[axis, ddof] ** 0.5) - np.testing.assert_equal(std[3], np.nan) + tm.assert_almost_equal(std[:3], variance[axis, ddof] ** 0.5) + self.assertTrue(np.isnan(std[3])) def test_nanstd_roundoff(self): # Regression test for GH 10242 (test data taken from GH 10489). Ensure @@ -931,7 +933,7 @@ def test_axis(self): samples = np.vstack([self.samples, np.nan * np.ones(len(self.samples))]) skew = nanops.nanskew(samples, axis=1) - tm.assert_almost_equal(skew, [self.actual_skew, np.nan]) + tm.assert_almost_equal(skew, np.array([self.actual_skew, np.nan])) def test_nans(self): samples = np.hstack([self.samples, np.nan]) @@ -981,7 +983,7 @@ def test_axis(self): samples = np.vstack([self.samples, np.nan * np.ones(len(self.samples))]) kurt = nanops.nankurt(samples, axis=1) - tm.assert_almost_equal(kurt, [self.actual_kurt, np.nan]) + tm.assert_almost_equal(kurt, np.array([self.actual_kurt, np.nan])) def test_nans(self): samples = np.hstack([self.samples, np.nan]) diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py index 87401f272adbd..b1f09ad2685e3 100644 --- a/pandas/tests/test_panel.py +++ b/pandas/tests/test_panel.py @@ -1086,12 +1086,12 @@ def test_ctor_dict(self): # TODO: unused? 
wp3 = Panel.from_dict(d3) # noqa - self.assertTrue(wp.major_axis.equals(self.panel.major_axis)) + self.assert_index_equal(wp.major_axis, self.panel.major_axis) assert_panel_equal(wp, wp2) # intersect wp = Panel.from_dict(d, intersect=True) - self.assertTrue(wp.major_axis.equals(itemb.index[5:])) + self.assert_index_equal(wp.major_axis, itemb.index[5:]) # use constructor assert_panel_equal(Panel(d), Panel.from_dict(d)) @@ -1123,7 +1123,7 @@ def test_constructor_dict_mixed(self): data = dict((k, v.values) for k, v in self.panel.iteritems()) result = Panel(data) exp_major = Index(np.arange(len(self.panel.major_axis))) - self.assertTrue(result.major_axis.equals(exp_major)) + self.assert_index_equal(result.major_axis, exp_major) result = Panel(data, items=self.panel.items, major_axis=self.panel.major_axis, @@ -1213,8 +1213,8 @@ def test_conform(self): df = self.panel['ItemA'][:-5].filter(items=['A', 'B']) conformed = self.panel.conform(df) - assert (conformed.index.equals(self.panel.major_axis)) - assert (conformed.columns.equals(self.panel.minor_axis)) + tm.assert_index_equal(conformed.index, self.panel.major_axis) + tm.assert_index_equal(conformed.columns, self.panel.minor_axis) def test_convert_objects(self): @@ -2078,11 +2078,11 @@ def test_rename(self): renamed = self.panel.rename_axis(mapper, axis=0) exp = Index(['foo', 'bar', 'baz']) - self.assertTrue(renamed.items.equals(exp)) + self.assert_index_equal(renamed.items, exp) renamed = self.panel.rename_axis(str.lower, axis=2) exp = Index(['a', 'b', 'c', 'd']) - self.assertTrue(renamed.minor_axis.equals(exp)) + self.assert_index_equal(renamed.minor_axis, exp) # don't copy renamed_nocopy = self.panel.rename_axis(mapper, axis=0, copy=False) @@ -2301,8 +2301,8 @@ def test_update_raise(self): [[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.]]]) - np.testing.assert_raises(Exception, pan.update, *(pan, ), - **{'raise_conflict': True}) + self.assertRaises(Exception, pan.update, *(pan, ), + 
**{'raise_conflict': True}) def test_all_any(self): self.assertTrue((self.panel.all(axis=0).values == nanall( @@ -2485,7 +2485,7 @@ def test_axis_dummies(self): transformed = make_axis_dummies(self.panel, 'minor', transform=mapping.get) self.assertEqual(len(transformed.columns), 2) - self.assert_numpy_array_equal(transformed.columns, ['one', 'two']) + self.assert_index_equal(transformed.columns, Index(['one', 'two'])) # TODO: test correctness @@ -2578,10 +2578,10 @@ def _monotonic(arr): def test_panel_index(): index = panelm.panel_index([1, 2, 3, 4], [1, 2, 3]) - expected = MultiIndex.from_arrays([np.tile( - [1, 2, 3, 4], 3), np.repeat( - [1, 2, 3], 4)]) - assert (index.equals(expected)) + expected = MultiIndex.from_arrays([np.tile([1, 2, 3, 4], 3), + np.repeat([1, 2, 3], 4)], + names=['time', 'panel']) + tm.assert_index_equal(index, expected) def test_import_warnings(): diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py index e3e906d48ae98..607048df29faa 100644 --- a/pandas/tests/test_panel4d.py +++ b/pandas/tests/test_panel4d.py @@ -733,7 +733,7 @@ def test_constructor_dict_mixed(self): data = dict((k, v.values) for k, v in self.panel4d.iteritems()) result = Panel4D(data) exp_major = Index(np.arange(len(self.panel4d.major_axis))) - self.assertTrue(result.major_axis.equals(exp_major)) + self.assert_index_equal(result.major_axis, exp_major) result = Panel4D(data, labels=self.panel4d.labels, @@ -799,9 +799,9 @@ def test_conform(self): p = self.panel4d['l1'].filter(items=['ItemA', 'ItemB']) conformed = self.panel4d.conform(p) - assert(conformed.items.equals(self.panel4d.labels)) - assert(conformed.major_axis.equals(self.panel4d.major_axis)) - assert(conformed.minor_axis.equals(self.panel4d.minor_axis)) + tm.assert_index_equal(conformed.items, self.panel4d.labels) + tm.assert_index_equal(conformed.major_axis, self.panel4d.major_axis) + tm.assert_index_equal(conformed.minor_axis, self.panel4d.minor_axis) def test_reindex(self): ref = 
self.panel4d['l2'] @@ -1085,11 +1085,11 @@ def test_rename(self): renamed = self.panel4d.rename_axis(mapper, axis=0) exp = Index(['foo', 'bar', 'baz']) - self.assertTrue(renamed.labels.equals(exp)) + self.assert_index_equal(renamed.labels, exp) renamed = self.panel4d.rename_axis(str.lower, axis=3) exp = Index(['a', 'b', 'c', 'd']) - self.assertTrue(renamed.minor_axis.equals(exp)) + self.assert_index_equal(renamed.minor_axis, exp) # don't copy renamed_nocopy = self.panel4d.rename_axis(mapper, axis=0, copy=False) diff --git a/pandas/tests/test_reshape.py b/pandas/tests/test_reshape.py index 862e2282bae2f..7136d7effc1fc 100644 --- a/pandas/tests/test_reshape.py +++ b/pandas/tests/test_reshape.py @@ -239,26 +239,16 @@ def test_just_na(self): def test_include_na(self): s = ['a', 'b', np.nan] res = get_dummies(s, sparse=self.sparse) - exp = DataFrame({'a': {0: 1.0, - 1: 0.0, - 2: 0.0}, - 'b': {0: 0.0, - 1: 1.0, - 2: 0.0}}) + exp = DataFrame({'a': {0: 1.0, 1: 0.0, 2: 0.0}, + 'b': {0: 0.0, 1: 1.0, 2: 0.0}}) assert_frame_equal(res, exp) # Sparse dataframes do not allow nan labelled columns, see #GH8822 res_na = get_dummies(s, dummy_na=True, sparse=self.sparse) - exp_na = DataFrame({nan: {0: 0.0, - 1: 0.0, - 2: 1.0}, - 'a': {0: 1.0, - 1: 0.0, - 2: 0.0}, - 'b': {0: 0.0, - 1: 1.0, - 2: 0.0}}).reindex_axis( - ['a', 'b', nan], 1) + exp_na = DataFrame({nan: {0: 0.0, 1: 0.0, 2: 1.0}, + 'a': {0: 1.0, 1: 0.0, 2: 0.0}, + 'b': {0: 0.0, 1: 1.0, 2: 0.0}}) + exp_na = exp_na.reindex_axis(['a', 'b', nan], 1) # hack (NaN handling in assert_index_equal) exp_na.columns = res_na.columns assert_frame_equal(res_na, exp_na) diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py index 4179949bc49a6..3d1851966afd0 100644 --- a/pandas/tests/test_strings.py +++ b/pandas/tests/test_strings.py @@ -48,12 +48,12 @@ def test_iter(self): # indices of each yielded Series should be equal to the index of # the original Series - tm.assert_numpy_array_equal(s.index, ds.index) + 
tm.assert_index_equal(s.index, ds.index) for el in s: # each element of the series is either a basestring/str or nan - self.assertTrue(isinstance(el, compat.string_types) or isnull( - el)) + self.assertTrue(isinstance(el, compat.string_types) or + isnull(el)) # desired behavior is to iterate until everything would be nan on the # next iter so make sure the last element of the iterator was 'l' in @@ -95,8 +95,8 @@ def test_iter_object_try_string(self): self.assertEqual(s, 'h') def test_cat(self): - one = ['a', 'a', 'b', 'b', 'c', NA] - two = ['a', NA, 'b', 'd', 'foo', NA] + one = np.array(['a', 'a', 'b', 'b', 'c', NA], dtype=np.object_) + two = np.array(['a', NA, 'b', 'd', 'foo', NA], dtype=np.object_) # single array result = strings.str_cat(one) @@ -121,21 +121,24 @@ def test_cat(self): # Multiple arrays result = strings.str_cat(one, [two], na_rep='NA') - exp = ['aa', 'aNA', 'bb', 'bd', 'cfoo', 'NANA'] + exp = np.array(['aa', 'aNA', 'bb', 'bd', 'cfoo', 'NANA'], + dtype=np.object_) self.assert_numpy_array_equal(result, exp) result = strings.str_cat(one, two) - exp = ['aa', NA, 'bb', 'bd', 'cfoo', NA] + exp = np.array(['aa', NA, 'bb', 'bd', 'cfoo', NA], dtype=np.object_) tm.assert_almost_equal(result, exp) def test_count(self): - values = ['foo', 'foofoo', NA, 'foooofooofommmfoo'] + values = np.array(['foo', 'foofoo', NA, 'foooofooofommmfoo'], + dtype=np.object_) result = strings.str_count(values, 'f[o]+') - exp = Series([1, 2, NA, 4]) - tm.assert_almost_equal(result, exp) + exp = np.array([1, 2, NA, 4]) + tm.assert_numpy_array_equal(result, exp) result = Series(values).str.count('f[o]+') + exp = Series([1, 2, NA, 4]) tm.assertIsInstance(result, Series) tm.assert_series_equal(result, exp) @@ -163,61 +166,66 @@ def test_count(self): tm.assert_series_equal(result, exp) def test_contains(self): - values = ['foo', NA, 'fooommm__foo', 'mmm_', 'foommm[_]+bar'] + values = np.array(['foo', NA, 'fooommm__foo', + 'mmm_', 'foommm[_]+bar'], dtype=np.object_) pat = 'mmm[_]+' 
result = strings.str_contains(values, pat) - expected = [False, NA, True, True, False] - tm.assert_almost_equal(result, expected) + expected = np.array([False, NA, True, True, False], dtype=np.object_) + tm.assert_numpy_array_equal(result, expected) result = strings.str_contains(values, pat, regex=False) - expected = [False, NA, False, False, True] - tm.assert_almost_equal(result, expected) + expected = np.array([False, NA, False, False, True], dtype=np.object_) + tm.assert_numpy_array_equal(result, expected) values = ['foo', 'xyz', 'fooommm__foo', 'mmm_'] result = strings.str_contains(values, pat) - expected = [False, False, True, True] + expected = np.array([False, False, True, True]) self.assertEqual(result.dtype, np.bool_) - tm.assert_almost_equal(result, expected) + tm.assert_numpy_array_equal(result, expected) # case insensitive using regex values = ['Foo', 'xYz', 'fOOomMm__fOo', 'MMM_'] result = strings.str_contains(values, 'FOO|mmm', case=False) - expected = [True, False, True, True] - tm.assert_almost_equal(result, expected) + expected = np.array([True, False, True, True]) + tm.assert_numpy_array_equal(result, expected) # case insensitive without regex result = strings.str_contains(values, 'foo', regex=False, case=False) - expected = [True, False, True, False] - tm.assert_almost_equal(result, expected) + expected = np.array([True, False, True, False]) + tm.assert_numpy_array_equal(result, expected) # mixed mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.] 
rs = strings.str_contains(mixed, 'o') - xp = Series([False, NA, False, NA, NA, True, NA, NA, NA]) - tm.assert_almost_equal(rs, xp) + xp = np.array([False, NA, False, NA, NA, True, NA, NA, NA], + dtype=np.object_) + tm.assert_numpy_array_equal(rs, xp) rs = Series(mixed).str.contains('o') + xp = Series([False, NA, False, NA, NA, True, NA, NA, NA]) tm.assertIsInstance(rs, Series) tm.assert_series_equal(rs, xp) # unicode - values = [u('foo'), NA, u('fooommm__foo'), u('mmm_')] + values = np.array([u'foo', NA, u'fooommm__foo', u'mmm_'], + dtype=np.object_) pat = 'mmm[_]+' result = strings.str_contains(values, pat) - expected = [False, np.nan, True, True] - tm.assert_almost_equal(result, expected) + expected = np.array([False, np.nan, True, True], dtype=np.object_) + tm.assert_numpy_array_equal(result, expected) result = strings.str_contains(values, pat, na=False) - expected = [False, False, True, True] - tm.assert_almost_equal(result, expected) + expected = np.array([False, False, True, True]) + tm.assert_numpy_array_equal(result, expected) - values = ['foo', 'xyz', 'fooommm__foo', 'mmm_'] + values = np.array(['foo', 'xyz', 'fooommm__foo', 'mmm_'], + dtype=np.object_) result = strings.str_contains(values, pat) - expected = [False, False, True, True] + expected = np.array([False, False, True, True]) self.assertEqual(result.dtype, np.bool_) - tm.assert_almost_equal(result, expected) + tm.assert_numpy_array_equal(result, expected) # na values = Series(['om', 'foo', np.nan]) @@ -232,13 +240,16 @@ def test_startswith(self): tm.assert_series_equal(result, exp) # mixed - mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.] 
+ mixed = np.array(['a', NA, 'b', True, datetime.today(), + 'foo', None, 1, 2.], dtype=np.object_) rs = strings.str_startswith(mixed, 'f') - xp = Series([False, NA, False, NA, NA, True, NA, NA, NA]) - tm.assert_almost_equal(rs, xp) + xp = np.array([False, NA, False, NA, NA, True, NA, NA, NA], + dtype=np.object_) + tm.assert_numpy_array_equal(rs, xp) rs = Series(mixed).str.startswith('f') tm.assertIsInstance(rs, Series) + xp = Series([False, NA, False, NA, NA, True, NA, NA, NA]) tm.assert_series_equal(rs, xp) # unicode @@ -262,10 +273,12 @@ def test_endswith(self): # mixed mixed = ['a', NA, 'b', True, datetime.today(), 'foo', None, 1, 2.] rs = strings.str_endswith(mixed, 'f') - xp = Series([False, NA, False, NA, NA, False, NA, NA, NA]) - tm.assert_almost_equal(rs, xp) + xp = np.array([False, NA, False, NA, NA, False, NA, NA, NA], + dtype=np.object_) + tm.assert_numpy_array_equal(rs, xp) rs = Series(mixed).str.endswith('f') + xp = Series([False, NA, False, NA, NA, False, NA, NA, NA]) tm.assertIsInstance(rs, Series) tm.assert_series_equal(rs, xp) @@ -573,8 +586,13 @@ def test_extract_expand_False(self): # single group renames series/index properly s_or_idx = klass(['A1', 'A2']) result = s_or_idx.str.extract(r'(?P<uno>A)\d', expand=False) - tm.assert_equal(result.name, 'uno') - tm.assert_numpy_array_equal(result, klass(['A', 'A'])) + self.assertEqual(result.name, 'uno') + + exp = klass(['A', 'A'], name='uno') + if klass == Series: + tm.assert_series_equal(result, exp) + else: + tm.assert_index_equal(result, exp) s = Series(['A1', 'B2', 'C3']) # one group, no matches @@ -713,8 +731,9 @@ def test_extract_expand_True(self): # single group renames series/index properly s_or_idx = klass(['A1', 'A2']) result_df = s_or_idx.str.extract(r'(?P<uno>A)\d', expand=True) + tm.assertIsInstance(result_df, DataFrame) result_series = result_df['uno'] - tm.assert_numpy_array_equal(result_series, klass(['A', 'A'])) + assert_series_equal(result_series, Series(['A', 'A'], name='uno')) def 
test_extract_series(self): # extract should give the same result whether or not the @@ -982,6 +1001,30 @@ def test_extractall_no_matches(self): "second"]) tm.assert_frame_equal(r, e) + def test_extractall_stringindex(self): + s = Series(["a1a2", "b1", "c1"], name='xxx') + res = s.str.extractall("[ab](?P<digit>\d)") + exp_idx = MultiIndex.from_tuples([(0, 0), (0, 1), (1, 0)], + names=[None, 'match']) + exp = DataFrame({'digit': ["1", "2", "1"]}, index=exp_idx) + tm.assert_frame_equal(res, exp) + + # index should return the same result as the default index without name + # thus index.name doesn't affect the result + for idx in [Index(["a1a2", "b1", "c1"]), + Index(["a1a2", "b1", "c1"], name='xxx')]: + + res = idx.str.extractall("[ab](?P<digit>\d)") + tm.assert_frame_equal(res, exp) + + s = Series(["a1a2", "b1", "c1"], name='s_name', + index=Index(["XX", "yy", "zz"], name='idx_name')) + res = s.str.extractall("[ab](?P<digit>\d)") + exp_idx = MultiIndex.from_tuples([("XX", 0), ("XX", 1), ("yy", 0)], + names=["idx_name", 'match']) + exp = DataFrame({'digit': ["1", "2", "1"]}, index=exp_idx) + tm.assert_frame_equal(res, exp) + def test_extractall_errors(self): # Does not make sense to use extractall with a regex that has # no capture groups.
(it returns DataFrame with one column for @@ -991,8 +1034,8 @@ def test_extractall_errors(self): s.str.extractall(r'[a-z]') def test_extract_index_one_two_groups(self): - s = Series( - ['a3', 'b3', 'd4c2'], ["A3", "B3", "D4"], name='series_name') + s = Series(['a3', 'b3', 'd4c2'], index=["A3", "B3", "D4"], + name='series_name') r = s.index.str.extract(r'([A-Z])', expand=True) e = DataFrame(['A', "B", "D"]) tm.assert_frame_equal(r, e) @@ -1081,7 +1124,7 @@ def test_empty_str_methods(self): # (extract) on empty series tm.assert_series_equal(empty_str, empty.str.cat(empty)) - tm.assert_equal('', empty.str.cat()) + self.assertEqual('', empty.str.cat()) tm.assert_series_equal(empty_str, empty.str.title()) tm.assert_series_equal(empty_int, empty.str.count('a')) tm.assert_series_equal(empty_bool, empty.str.contains('a')) @@ -1398,41 +1441,48 @@ def test_find_nan(self): tm.assert_series_equal(result, Series([4, np.nan, -1, np.nan, -1])) def test_index(self): + + def _check(result, expected): + if isinstance(result, Series): + tm.assert_series_equal(result, expected) + else: + tm.assert_index_equal(result, expected) + for klass in [Series, Index]: s = klass(['ABCDEFG', 'BCDEFEF', 'DEFGHIJEF', 'EFGHEF']) result = s.str.index('EF') - tm.assert_numpy_array_equal(result, klass([4, 3, 1, 0])) + _check(result, klass([4, 3, 1, 0])) expected = np.array([v.index('EF') for v in s.values], dtype=np.int64) tm.assert_numpy_array_equal(result.values, expected) result = s.str.rindex('EF') - tm.assert_numpy_array_equal(result, klass([4, 5, 7, 4])) + _check(result, klass([4, 5, 7, 4])) expected = np.array([v.rindex('EF') for v in s.values], dtype=np.int64) tm.assert_numpy_array_equal(result.values, expected) result = s.str.index('EF', 3) - tm.assert_numpy_array_equal(result, klass([4, 3, 7, 4])) + _check(result, klass([4, 3, 7, 4])) expected = np.array([v.index('EF', 3) for v in s.values], dtype=np.int64) tm.assert_numpy_array_equal(result.values, expected) result = s.str.rindex('EF', 3) - 
tm.assert_numpy_array_equal(result, klass([4, 5, 7, 4])) + _check(result, klass([4, 5, 7, 4])) expected = np.array([v.rindex('EF', 3) for v in s.values], dtype=np.int64) tm.assert_numpy_array_equal(result.values, expected) result = s.str.index('E', 4, 8) - tm.assert_numpy_array_equal(result, klass([4, 5, 7, 4])) + _check(result, klass([4, 5, 7, 4])) expected = np.array([v.index('E', 4, 8) for v in s.values], dtype=np.int64) tm.assert_numpy_array_equal(result.values, expected) result = s.str.rindex('E', 0, 5) - tm.assert_numpy_array_equal(result, klass([4, 3, 1, 4])) + _check(result, klass([4, 3, 1, 4])) expected = np.array([v.rindex('E', 0, 5) for v in s.values], dtype=np.int64) tm.assert_numpy_array_equal(result.values, expected) @@ -1447,9 +1497,9 @@ def test_index(self): # test with nan s = Series(['abcb', 'ab', 'bcbe', np.nan]) result = s.str.index('b') - tm.assert_numpy_array_equal(result, Series([1, 1, 0, np.nan])) + tm.assert_series_equal(result, Series([1, 1, 0, np.nan])) result = s.str.rindex('b') - tm.assert_numpy_array_equal(result, Series([3, 1, 2, np.nan])) + tm.assert_series_equal(result, Series([3, 1, 2, np.nan])) def test_pad(self): values = Series(['a', 'b', NA, 'c', NA, 'eeeeee']) @@ -1534,6 +1584,13 @@ def test_pad_fillchar(self): result = values.str.pad(5, fillchar=5) def test_translate(self): + + def _check(result, expected): + if isinstance(result, Series): + tm.assert_series_equal(result, expected) + else: + tm.assert_index_equal(result, expected) + for klass in [Series, Index]: s = klass(['abcdefg', 'abcc', 'cdddfg', 'cdefggg']) if not compat.PY3: @@ -1543,17 +1600,17 @@ def test_translate(self): table = str.maketrans('abc', 'cde') result = s.str.translate(table) expected = klass(['cdedefg', 'cdee', 'edddfg', 'edefggg']) - tm.assert_numpy_array_equal(result, expected) + _check(result, expected) # use of deletechars is python 2 only if not compat.PY3: result = s.str.translate(table, deletechars='fg') expected = klass(['cdede', 'cdee', 'eddd', 
'ede']) - tm.assert_numpy_array_equal(result, expected) + _check(result, expected) result = s.str.translate(None, deletechars='fg') expected = klass(['abcde', 'abcc', 'cddd', 'cde']) - tm.assert_numpy_array_equal(result, expected) + _check(result, expected) else: with tm.assertRaisesRegexp( ValueError, "deletechars is not a valid argument"): @@ -1563,7 +1620,7 @@ def test_translate(self): s = Series(['a', 'b', 'c', 1.2]) expected = Series(['c', 'd', 'e', np.nan]) result = s.str.translate(table) - tm.assert_numpy_array_equal(result, expected) + tm.assert_series_equal(result, expected) def test_center_ljust_rjust(self): values = Series(['a', 'b', NA, 'c', NA, 'eeeeee']) @@ -1961,8 +2018,8 @@ def test_rsplit_to_multiindex_expand(self): idx = Index(['some_equal_splits', 'with_no_nans']) result = idx.str.rsplit('_', expand=True, n=1) - exp = MultiIndex.from_tuples([('some_equal', 'splits'), ('with_no', - 'nans')]) + exp = MultiIndex.from_tuples([('some_equal', 'splits'), + ('with_no', 'nans')]) tm.assert_index_equal(result, exp) self.assertEqual(result.nlevels, 2) @@ -1972,7 +2029,7 @@ def test_split_with_name(self): # should preserve name s = Series(['a,b', 'c,d'], name='xxx') res = s.str.split(',') - exp = Series([('a', 'b'), ('c', 'd')], name='xxx') + exp = Series([['a', 'b'], ['c', 'd']], name='xxx') tm.assert_series_equal(res, exp) res = s.str.split(',', expand=True) @@ -1994,60 +2051,60 @@ def test_partition_series(self): values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h']) result = values.str.partition('_', expand=False) - exp = Series([['a', '_', 'b_c'], ['c', '_', 'd_e'], NA, ['f', '_', - 'g_h']]) + exp = Series([('a', '_', 'b_c'), ('c', '_', 'd_e'), NA, + ('f', '_', 'g_h')]) tm.assert_series_equal(result, exp) result = values.str.rpartition('_', expand=False) - exp = Series([['a_b', '_', 'c'], ['c_d', '_', 'e'], NA, ['f_g', '_', - 'h']]) + exp = Series([('a_b', '_', 'c'), ('c_d', '_', 'e'), NA, + ('f_g', '_', 'h')]) tm.assert_series_equal(result, exp) # more than 
one char values = Series(['a__b__c', 'c__d__e', NA, 'f__g__h']) result = values.str.partition('__', expand=False) - exp = Series([['a', '__', 'b__c'], ['c', '__', 'd__e'], NA, ['f', '__', - 'g__h']]) + exp = Series([('a', '__', 'b__c'), ('c', '__', 'd__e'), NA, + ('f', '__', 'g__h')]) tm.assert_series_equal(result, exp) result = values.str.rpartition('__', expand=False) - exp = Series([['a__b', '__', 'c'], ['c__d', '__', 'e'], NA, - ['f__g', '__', 'h']]) + exp = Series([('a__b', '__', 'c'), ('c__d', '__', 'e'), NA, + ('f__g', '__', 'h')]) tm.assert_series_equal(result, exp) # None values = Series(['a b c', 'c d e', NA, 'f g h']) result = values.str.partition(expand=False) - exp = Series([['a', ' ', 'b c'], ['c', ' ', 'd e'], NA, ['f', ' ', - 'g h']]) + exp = Series([('a', ' ', 'b c'), ('c', ' ', 'd e'), NA, + ('f', ' ', 'g h')]) tm.assert_series_equal(result, exp) result = values.str.rpartition(expand=False) - exp = Series([['a b', ' ', 'c'], ['c d', ' ', 'e'], NA, ['f g', ' ', - 'h']]) + exp = Series([('a b', ' ', 'c'), ('c d', ' ', 'e'), NA, + ('f g', ' ', 'h')]) tm.assert_series_equal(result, exp) # Not splited values = Series(['abc', 'cde', NA, 'fgh']) result = values.str.partition('_', expand=False) - exp = Series([['abc', '', ''], ['cde', '', ''], NA, ['fgh', '', '']]) + exp = Series([('abc', '', ''), ('cde', '', ''), NA, ('fgh', '', '')]) tm.assert_series_equal(result, exp) result = values.str.rpartition('_', expand=False) - exp = Series([['', '', 'abc'], ['', '', 'cde'], NA, ['', '', 'fgh']]) + exp = Series([('', '', 'abc'), ('', '', 'cde'), NA, ('', '', 'fgh')]) tm.assert_series_equal(result, exp) # unicode - values = Series([u('a_b_c'), u('c_d_e'), NA, u('f_g_h')]) + values = Series([u'a_b_c', u'c_d_e', NA, u'f_g_h']) result = values.str.partition('_', expand=False) - exp = Series([[u('a'), u('_'), u('b_c')], [u('c'), u('_'), u('d_e')], - NA, [u('f'), u('_'), u('g_h')]]) + exp = Series([(u'a', u'_', u'b_c'), (u'c', u'_', u'd_e'), + NA, (u'f', u'_', 
u'g_h')]) tm.assert_series_equal(result, exp) result = values.str.rpartition('_', expand=False) - exp = Series([[u('a_b'), u('_'), u('c')], [u('c_d'), u('_'), u('e')], - NA, [u('f_g'), u('_'), u('h')]]) + exp = Series([(u'a_b', u'_', u'c'), (u'c_d', u'_', u'e'), + NA, (u'f_g', u'_', u'h')]) tm.assert_series_equal(result, exp) # compare to standard lib diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py index 9294bccce013f..c4e864a909c03 100644 --- a/pandas/tests/test_testing.py +++ b/pandas/tests/test_testing.py @@ -43,6 +43,8 @@ def test_assert_almost_equal_numbers(self): def test_assert_almost_equal_numbers_with_zeros(self): self._assert_almost_equal_both(0, 0) + self._assert_almost_equal_both(0, 0.0) + self._assert_almost_equal_both(0, np.float64(0)) self._assert_almost_equal_both(0.000001, 0) self._assert_not_almost_equal_both(0.001, 0) @@ -65,9 +67,8 @@ def test_assert_almost_equal_dicts(self): self._assert_almost_equal_both({'a': 1, 'b': 2}, {'a': 1, 'b': 2}) self._assert_not_almost_equal_both({'a': 1, 'b': 2}, {'a': 1, 'b': 3}) - self._assert_not_almost_equal_both( - {'a': 1, 'b': 2}, {'a': 1, 'b': 2, 'c': 3} - ) + self._assert_not_almost_equal_both({'a': 1, 'b': 2}, + {'a': 1, 'b': 2, 'c': 3}) self._assert_not_almost_equal_both({'a': 1}, 1) self._assert_not_almost_equal_both({'a': 1}, 'abc') self._assert_not_almost_equal_both({'a': 1}, [1, ]) @@ -82,9 +83,11 @@ def __getitem__(self, item): if item == 'a': return 1 - self._assert_almost_equal_both({'a': 1}, DictLikeObj()) + self._assert_almost_equal_both({'a': 1}, DictLikeObj(), + check_dtype=False) - self._assert_not_almost_equal_both({'a': 2}, DictLikeObj()) + self._assert_not_almost_equal_both({'a': 2}, DictLikeObj(), + check_dtype=False) def test_assert_almost_equal_strings(self): self._assert_almost_equal_both('abc', 'abc') @@ -96,7 +99,13 @@ def test_assert_almost_equal_strings(self): def test_assert_almost_equal_iterables(self): self._assert_almost_equal_both([1, 2, 3], [1, 2, 3]) 
- self._assert_almost_equal_both(np.array([1, 2, 3]), [1, 2, 3]) + self._assert_almost_equal_both(np.array([1, 2, 3]), + np.array([1, 2, 3])) + + # class / dtype are different + self._assert_not_almost_equal_both(np.array([1, 2, 3]), [1, 2, 3]) + self._assert_not_almost_equal_both(np.array([1, 2, 3]), + np.array([1., 2., 3.])) # Can't compare generators self._assert_not_almost_equal_both(iter([1, 2, 3]), [1, 2, 3]) @@ -107,8 +116,8 @@ def test_assert_almost_equal_iterables(self): def test_assert_almost_equal_null(self): self._assert_almost_equal_both(None, None) - self._assert_almost_equal_both(None, np.NaN) + self._assert_not_almost_equal_both(None, np.NaN) self._assert_not_almost_equal_both(None, 0) self._assert_not_almost_equal_both(np.NaN, 0) @@ -177,7 +186,7 @@ def test_numpy_array_equal_message(self): assert_almost_equal(np.array([1, 2]), np.array([3, 4, 5])) # scalar comparison - expected = """: 1 != 2""" + expected = """Expected type """ with assertRaisesRegexp(AssertionError, expected): assert_numpy_array_equal(1, 2) expected = """expected 2\\.00000 but got 1\\.00000, with decimal 5""" @@ -192,6 +201,7 @@ def test_numpy_array_equal_message(self): \\[right\\]: int""" with assertRaisesRegexp(AssertionError, expected): + # numpy_array_equal only accepts np.ndarray assert_numpy_array_equal(np.array([1]), 1) with assertRaisesRegexp(AssertionError, expected): assert_almost_equal(np.array([1]), 1) @@ -215,11 +225,11 @@ def test_numpy_array_equal_message(self): \\[right\\]: \\[1\\.0, nan, 3\\.0\\]""" with assertRaisesRegexp(AssertionError, expected): - assert_numpy_array_equal( - np.array([np.nan, 2, 3]), np.array([1, np.nan, 3])) + assert_numpy_array_equal(np.array([np.nan, 2, 3]), + np.array([1, np.nan, 3])) with assertRaisesRegexp(AssertionError, expected): - assert_almost_equal( - np.array([np.nan, 2, 3]), np.array([1, np.nan, 3])) + assert_almost_equal(np.array([np.nan, 2, 3]), + np.array([1, np.nan, 3])) expected = """numpy array are different @@ -339,8 
+349,8 @@ def test_index_equal_message(self): labels=\\[\\[0, 0, 1, 1\\], \\[0, 1, 2, 3\\]\\]\\)""" idx1 = pd.Index([1, 2, 3]) - idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), ('B', 3), ('B', 4 - )]) + idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), + ('B', 3), ('B', 4)]) with assertRaisesRegexp(AssertionError, expected): assert_index_equal(idx1, idx2, exact=False) @@ -350,10 +360,10 @@ def test_index_equal_message(self): \\[left\\]: Int64Index\\(\\[2, 2, 3, 4\\], dtype='int64'\\) \\[right\\]: Int64Index\\(\\[1, 2, 3, 4\\], dtype='int64'\\)""" - idx1 = pd.MultiIndex.from_tuples([('A', 2), ('A', 2), ('B', 3), ('B', 4 - )]) - idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), ('B', 3), ('B', 4 - )]) + idx1 = pd.MultiIndex.from_tuples([('A', 2), ('A', 2), + ('B', 3), ('B', 4)]) + idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), + ('B', 3), ('B', 4)]) with assertRaisesRegexp(AssertionError, expected): assert_index_equal(idx1, idx2) with assertRaisesRegexp(AssertionError, expected): @@ -434,10 +444,10 @@ def test_index_equal_message(self): \\[left\\]: Int64Index\\(\\[2, 2, 3, 4\\], dtype='int64'\\) \\[right\\]: Int64Index\\(\\[1, 2, 3, 4\\], dtype='int64'\\)""" - idx1 = pd.MultiIndex.from_tuples([('A', 2), ('A', 2), ('B', 3), ('B', 4 - )]) - idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), ('B', 3), ('B', 4 - )]) + idx1 = pd.MultiIndex.from_tuples([('A', 2), ('A', 2), + ('B', 3), ('B', 4)]) + idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), + ('B', 3), ('B', 4)]) with assertRaisesRegexp(AssertionError, expected): assert_index_equal(idx1, idx2) with assertRaisesRegexp(AssertionError, expected): @@ -509,12 +519,18 @@ def test_less_precise(self): self.assertRaises(AssertionError, assert_series_equal, s1, s2) self._assert_equal(s1, s2, check_less_precise=True) + for i in range(4): + self._assert_equal(s1, s2, check_less_precise=i) + self.assertRaises(AssertionError, assert_series_equal, s1, s2, 10) s1 = Series([0.12345], dtype='float32') s2 = 
Series([0.12346], dtype='float32') self.assertRaises(AssertionError, assert_series_equal, s1, s2) self._assert_equal(s1, s2, check_less_precise=True) + for i in range(4): + self._assert_equal(s1, s2, check_less_precise=i) + self.assertRaises(AssertionError, assert_series_equal, s1, s2, 10) # even less than less precise s1 = Series([0.1235], dtype='float32') @@ -674,6 +690,45 @@ def test_notisinstance(self): tm.assertNotIsInstance(pd.Series([1]), pd.Series) +class TestAssertCategoricalEqual(unittest.TestCase): + _multiprocess_can_split_ = True + + def test_categorical_equal_message(self): + + expected = """Categorical\\.categories are different + +Categorical\\.categories values are different \\(25\\.0 %\\) +\\[left\\]: Int64Index\\(\\[1, 2, 3, 4\\], dtype='int64'\\) +\\[right\\]: Int64Index\\(\\[1, 2, 3, 5\\], dtype='int64'\\)""" + + a = pd.Categorical([1, 2, 3, 4]) + b = pd.Categorical([1, 2, 3, 5]) + with assertRaisesRegexp(AssertionError, expected): + tm.assert_categorical_equal(a, b) + + expected = """Categorical\\.codes are different + +Categorical\\.codes values are different \\(50\\.0 %\\) +\\[left\\]: \\[0, 1, 3, 2\\] +\\[right\\]: \\[0, 1, 2, 3\\]""" + + a = pd.Categorical([1, 2, 4, 3], categories=[1, 2, 3, 4]) + b = pd.Categorical([1, 2, 3, 4], categories=[1, 2, 3, 4]) + with assertRaisesRegexp(AssertionError, expected): + tm.assert_categorical_equal(a, b) + + expected = """Categorical are different + +Attribute "ordered" are different +\\[left\\]: False +\\[right\\]: True""" + + a = pd.Categorical([1, 2, 3, 4], ordered=False) + b = pd.Categorical([1, 2, 3, 4], ordered=True) + with assertRaisesRegexp(AssertionError, expected): + tm.assert_categorical_equal(a, b) + + class TestRNGContext(unittest.TestCase): def test_RNGContext(self): diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py deleted file mode 100644 index 854b7295aece4..0000000000000 --- a/pandas/tests/test_tseries.py +++ /dev/null @@ -1,711 +0,0 @@ -# -*- coding: utf-8 -*- 
-from numpy import nan -import numpy as np -from pandas import Index, isnull, Timestamp -from pandas.util.testing import assert_almost_equal -import pandas.util.testing as tm -from pandas.compat import range, lrange, zip -import pandas.lib as lib -import pandas._period as period -import pandas.algos as algos -from pandas.core import common as com -import datetime - - -class TestTseriesUtil(tm.TestCase): - _multiprocess_can_split_ = True - - def test_combineFunc(self): - pass - - def test_reindex(self): - pass - - def test_isnull(self): - pass - - def test_groupby(self): - pass - - def test_groupby_withnull(self): - pass - - def test_backfill(self): - old = Index([1, 5, 10]) - new = Index(lrange(12)) - - filler = algos.backfill_int64(old.values, new.values) - - expect_filler = [0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, -1] - self.assert_numpy_array_equal(filler, expect_filler) - - # corner case - old = Index([1, 4]) - new = Index(lrange(5, 10)) - filler = algos.backfill_int64(old.values, new.values) - - expect_filler = [-1, -1, -1, -1, -1] - self.assert_numpy_array_equal(filler, expect_filler) - - def test_pad(self): - old = Index([1, 5, 10]) - new = Index(lrange(12)) - - filler = algos.pad_int64(old.values, new.values) - - expect_filler = [-1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2] - self.assert_numpy_array_equal(filler, expect_filler) - - # corner case - old = Index([5, 10]) - new = Index(lrange(5)) - filler = algos.pad_int64(old.values, new.values) - expect_filler = [-1, -1, -1, -1, -1] - self.assert_numpy_array_equal(filler, expect_filler) - - -def test_left_join_indexer_unique(): - a = np.array([1, 2, 3, 4, 5], dtype=np.int64) - b = np.array([2, 2, 3, 4, 4], dtype=np.int64) - - result = algos.left_join_indexer_unique_int64(b, a) - expected = np.array([1, 1, 2, 3, 3], dtype=np.int64) - assert (np.array_equal(result, expected)) - - -def test_left_outer_join_bug(): - left = np.array([0, 1, 0, 1, 1, 2, 3, 1, 0, 2, 1, 2, 0, 1, 1, 2, 3, 2, 3, - 2, 1, 1, 3, 0, 3, 2, 3, 0, 0, 2, 3, 
2, 0, 3, 1, 3, 0, 1, - 3, 0, 0, 1, 0, 3, 1, 0, 1, 0, 1, 1, 0, 2, 2, 2, 2, 2, 0, - 3, 1, 2, 0, 0, 3, 1, 3, 2, 2, 0, 1, 3, 0, 2, 3, 2, 3, 3, - 2, 3, 3, 1, 3, 2, 0, 0, 3, 1, 1, 1, 0, 2, 3, 3, 1, 2, 0, - 3, 1, 2, 0, 2], dtype=np.int64) - - right = np.array([3, 1], dtype=np.int64) - max_groups = 4 - - lidx, ridx = algos.left_outer_join(left, right, max_groups, sort=False) - - exp_lidx = np.arange(len(left)) - exp_ridx = -np.ones(len(left)) - exp_ridx[left == 1] = 1 - exp_ridx[left == 3] = 0 - - assert (np.array_equal(lidx, exp_lidx)) - assert (np.array_equal(ridx, exp_ridx)) - - -def test_inner_join_indexer(): - a = np.array([1, 2, 3, 4, 5], dtype=np.int64) - b = np.array([0, 3, 5, 7, 9], dtype=np.int64) - - index, ares, bres = algos.inner_join_indexer_int64(a, b) - - index_exp = np.array([3, 5], dtype=np.int64) - assert_almost_equal(index, index_exp) - - aexp = np.array([2, 4], dtype=np.int64) - bexp = np.array([1, 2], dtype=np.int64) - assert_almost_equal(ares, aexp) - assert_almost_equal(bres, bexp) - - a = np.array([5], dtype=np.int64) - b = np.array([5], dtype=np.int64) - - index, ares, bres = algos.inner_join_indexer_int64(a, b) - assert_almost_equal(index, [5]) - assert_almost_equal(ares, [0]) - assert_almost_equal(bres, [0]) - - -def test_outer_join_indexer(): - a = np.array([1, 2, 3, 4, 5], dtype=np.int64) - b = np.array([0, 3, 5, 7, 9], dtype=np.int64) - - index, ares, bres = algos.outer_join_indexer_int64(a, b) - - index_exp = np.array([0, 1, 2, 3, 4, 5, 7, 9], dtype=np.int64) - assert_almost_equal(index, index_exp) - - aexp = np.array([-1, 0, 1, 2, 3, 4, -1, -1], dtype=np.int64) - bexp = np.array([0, -1, -1, 1, -1, 2, 3, 4], dtype=np.int64) - assert_almost_equal(ares, aexp) - assert_almost_equal(bres, bexp) - - a = np.array([5], dtype=np.int64) - b = np.array([5], dtype=np.int64) - - index, ares, bres = algos.outer_join_indexer_int64(a, b) - assert_almost_equal(index, [5]) - assert_almost_equal(ares, [0]) - assert_almost_equal(bres, [0]) - - -def 
test_left_join_indexer(): - a = np.array([1, 2, 3, 4, 5], dtype=np.int64) - b = np.array([0, 3, 5, 7, 9], dtype=np.int64) - - index, ares, bres = algos.left_join_indexer_int64(a, b) - - assert_almost_equal(index, a) - - aexp = np.array([0, 1, 2, 3, 4], dtype=np.int64) - bexp = np.array([-1, -1, 1, -1, 2], dtype=np.int64) - assert_almost_equal(ares, aexp) - assert_almost_equal(bres, bexp) - - a = np.array([5], dtype=np.int64) - b = np.array([5], dtype=np.int64) - - index, ares, bres = algos.left_join_indexer_int64(a, b) - assert_almost_equal(index, [5]) - assert_almost_equal(ares, [0]) - assert_almost_equal(bres, [0]) - - -def test_left_join_indexer2(): - idx = Index([1, 1, 2, 5]) - idx2 = Index([1, 2, 5, 7, 9]) - - res, lidx, ridx = algos.left_join_indexer_int64(idx2.values, idx.values) - - exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64) - assert_almost_equal(res, exp_res) - - exp_lidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64) - assert_almost_equal(lidx, exp_lidx) - - exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64) - assert_almost_equal(ridx, exp_ridx) - - -def test_outer_join_indexer2(): - idx = Index([1, 1, 2, 5]) - idx2 = Index([1, 2, 5, 7, 9]) - - res, lidx, ridx = algos.outer_join_indexer_int64(idx2.values, idx.values) - - exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64) - assert_almost_equal(res, exp_res) - - exp_lidx = np.array([0, 0, 1, 2, 3, 4], dtype=np.int64) - assert_almost_equal(lidx, exp_lidx) - - exp_ridx = np.array([0, 1, 2, 3, -1, -1], dtype=np.int64) - assert_almost_equal(ridx, exp_ridx) - - -def test_inner_join_indexer2(): - idx = Index([1, 1, 2, 5]) - idx2 = Index([1, 2, 5, 7, 9]) - - res, lidx, ridx = algos.inner_join_indexer_int64(idx2.values, idx.values) - - exp_res = np.array([1, 1, 2, 5], dtype=np.int64) - assert_almost_equal(res, exp_res) - - exp_lidx = np.array([0, 0, 1, 2], dtype=np.int64) - assert_almost_equal(lidx, exp_lidx) - - exp_ridx = np.array([0, 1, 2, 3], dtype=np.int64) - assert_almost_equal(ridx, 
exp_ridx) - - -def test_is_lexsorted(): - failure = [ - np.array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, - 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, - 2, 2, 2, 2, 2, 2, 2, - 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, - 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, - 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0]), - np.array([30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, - 15, 14, - 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, - 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, - 12, 11, - 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25, - 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, - 9, 8, - 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25, 24, 23, 22, - 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, - 6, 5, - 4, 3, 2, 1, 0])] - - assert (not algos.is_lexsorted(failure)) - -# def test_get_group_index(): -# a = np.array([0, 1, 2, 0, 2, 1, 0, 0], dtype=np.int64) -# b = np.array([1, 0, 3, 2, 0, 2, 3, 0], dtype=np.int64) -# expected = np.array([1, 4, 11, 2, 8, 6, 3, 0], dtype=np.int64) - -# result = lib.get_group_index([a, b], (3, 4)) - -# assert(np.array_equal(result, expected)) - - -def test_groupsort_indexer(): - a = np.random.randint(0, 1000, 100).astype(np.int64) - b = np.random.randint(0, 1000, 100).astype(np.int64) - - result = algos.groupsort_indexer(a, 1000)[0] - - # need to use a stable sort - expected = np.argsort(a, kind='mergesort') - assert (np.array_equal(result, expected)) - - # compare with lexsort - key = a * 1000 + b - result = algos.groupsort_indexer(key, 1000000)[0] - expected = np.lexsort((b, a)) - assert (np.array_equal(result, expected)) - - -def test_ensure_platform_int(): - arr = np.arange(100) - - result = algos.ensure_platform_int(arr) - assert (result is arr) - - -def test_duplicated_with_nas(): - keys = np.array([0, 1, nan, 0, 
-                         2, nan], dtype=object)
-
-    result = lib.duplicated(keys)
-    expected = [False, False, False, True, False, True]
-    assert (np.array_equal(result, expected))
-
-    result = lib.duplicated(keys, keep='first')
-    expected = [False, False, False, True, False, True]
-    assert (np.array_equal(result, expected))
-
-    result = lib.duplicated(keys, keep='last')
-    expected = [True, False, True, False, False, False]
-    assert (np.array_equal(result, expected))
-
-    result = lib.duplicated(keys, keep=False)
-    expected = [True, False, True, True, False, True]
-    assert (np.array_equal(result, expected))
-
-    keys = np.empty(8, dtype=object)
-    for i, t in enumerate(zip([0, 0, nan, nan] * 2,
-                              [0, nan, 0, nan] * 2)):
-        keys[i] = t
-
-    result = lib.duplicated(keys)
-    falses = [False] * 4
-    trues = [True] * 4
-    expected = falses + trues
-    assert (np.array_equal(result, expected))
-
-    result = lib.duplicated(keys, keep='last')
-    expected = trues + falses
-    assert (np.array_equal(result, expected))
-
-    result = lib.duplicated(keys, keep=False)
-    expected = trues + trues
-    assert (np.array_equal(result, expected))
-
-
-def test_maybe_booleans_to_slice():
-    arr = np.array([0, 0, 1, 1, 1, 0, 1], dtype=np.uint8)
-    result = lib.maybe_booleans_to_slice(arr)
-    assert (result.dtype == np.bool_)
-
-    result = lib.maybe_booleans_to_slice(arr[:0])
-    assert (result == slice(0, 0))
-
-
-def test_convert_objects():
-    arr = np.array(['a', 'b', nan, nan, 'd', 'e', 'f'], dtype='O')
-    result = lib.maybe_convert_objects(arr)
-    assert (result.dtype == np.object_)
-
-
-def test_convert_infs():
-    arr = np.array(['inf', 'inf', 'inf'], dtype='O')
-    result = lib.maybe_convert_numeric(arr, set(), False)
-    assert (result.dtype == np.float64)
-
-    arr = np.array(['-inf', '-inf', '-inf'], dtype='O')
-    result = lib.maybe_convert_numeric(arr, set(), False)
-    assert (result.dtype == np.float64)
-
-
-def test_scientific_no_exponent():
-    # See PR 12215
-    arr = np.array(['42E', '2E', '99e', '6e'], dtype='O')
-    result = lib.maybe_convert_numeric(arr, set(), False, True)
-    assert np.all(np.isnan(result))
-
-
-def test_convert_objects_ints():
-    # test that we can detect many kinds of integers
-    dtypes = ['i1', 'i2', 'i4', 'i8', 'u1', 'u2', 'u4', 'u8']
-
-    for dtype_str in dtypes:
-        arr = np.array(list(np.arange(20, dtype=dtype_str)), dtype='O')
-        assert (arr[0].dtype == np.dtype(dtype_str))
-        result = lib.maybe_convert_objects(arr)
-        assert (issubclass(result.dtype.type, np.integer))
-
-
-def test_convert_objects_complex_number():
-    for dtype in np.sctypes['complex']:
-        arr = np.array(list(1j * np.arange(20, dtype=dtype)), dtype='O')
-        assert (arr[0].dtype == np.dtype(dtype))
-        result = lib.maybe_convert_objects(arr)
-        assert (issubclass(result.dtype.type, np.complexfloating))
-
-
-def test_rank():
-    tm._skip_if_no_scipy()
-    from scipy.stats import rankdata
-
-    def _check(arr):
-        mask = ~np.isfinite(arr)
-        arr = arr.copy()
-        result = algos.rank_1d_float64(arr)
-        arr[mask] = np.inf
-        exp = rankdata(arr)
-        exp[mask] = nan
-        assert_almost_equal(result, exp)
-
-    _check(np.array([nan, nan, 5., 5., 5., nan, 1, 2, 3, nan]))
-    _check(np.array([4., nan, 5., 5., 5., nan, 1, 2, 4., nan]))
-
-
-def test_get_reverse_indexer():
-    indexer = np.array([-1, -1, 1, 2, 0, -1, 3, 4], dtype=np.int64)
-    result = lib.get_reverse_indexer(indexer, 5)
-    expected = np.array([4, 2, 3, 6, 7], dtype=np.int64)
-    assert (np.array_equal(result, expected))
-
-
-def test_pad_backfill_object_segfault():
-
-    old = np.array([], dtype='O')
-    new = np.array([datetime.datetime(2010, 12, 31)], dtype='O')
-
-    result = algos.pad_object(old, new)
-    expected = np.array([-1], dtype=np.int64)
-    assert (np.array_equal(result, expected))
-
-    result = algos.pad_object(new, old)
-    expected = np.array([], dtype=np.int64)
-    assert (np.array_equal(result, expected))
-
-    result = algos.backfill_object(old, new)
-    expected = np.array([-1], dtype=np.int64)
-    assert (np.array_equal(result, expected))
-
-    result = algos.backfill_object(new, old)
-    expected = np.array([], dtype=np.int64)
-    assert (np.array_equal(result, expected))
-
-
-def test_arrmap():
-    values = np.array(['foo', 'foo', 'bar', 'bar', 'baz', 'qux'], dtype='O')
-    result = algos.arrmap_object(values, lambda x: x in ['foo', 'bar'])
-    assert (result.dtype == np.bool_)
-
-
-def test_series_grouper():
-    from pandas import Series
-    obj = Series(np.random.randn(10))
-    dummy = obj[:0]
-
-    labels = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1, 1], dtype=np.int64)
-
-    grouper = lib.SeriesGrouper(obj, np.mean, labels, 2, dummy)
-    result, counts = grouper.get_result()
-
-    expected = np.array([obj[3:6].mean(), obj[6:].mean()])
-    assert_almost_equal(result, expected)
-
-    exp_counts = np.array([3, 4], dtype=np.int64)
-    assert_almost_equal(counts, exp_counts)
-
-
-def test_series_bin_grouper():
-    from pandas import Series
-    obj = Series(np.random.randn(10))
-    dummy = obj[:0]
-
-    bins = np.array([3, 6])
-
-    grouper = lib.SeriesBinGrouper(obj, np.mean, bins, dummy)
-    result, counts = grouper.get_result()
-
-    expected = np.array([obj[:3].mean(), obj[3:6].mean(), obj[6:].mean()])
-    assert_almost_equal(result, expected)
-
-    exp_counts = np.array([3, 3, 4], dtype=np.int64)
-    assert_almost_equal(counts, exp_counts)
-
-
-class TestBinGroupers(tm.TestCase):
-    _multiprocess_can_split_ = True
-
-    def setUp(self):
-        self.obj = np.random.randn(10, 1)
-        self.labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2], dtype=np.int64)
-        self.bins = np.array([3, 6], dtype=np.int64)
-
-    def test_generate_bins(self):
-        from pandas.core.groupby import generate_bins_generic
-        values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
-        binner = np.array([0, 3, 6, 9], dtype=np.int64)
-
-        for func in [lib.generate_bins_dt64, generate_bins_generic]:
-            bins = func(values, binner, closed='left')
-            assert ((bins == np.array([2, 5, 6])).all())
-
-            bins = func(values, binner, closed='right')
-            assert ((bins == np.array([3, 6, 6])).all())
-
-        for func in [lib.generate_bins_dt64, generate_bins_generic]:
-            values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
-            binner = np.array([0, 3, 6], dtype=np.int64)
-
-            bins = func(values, binner, closed='right')
-            assert ((bins == np.array([3, 6])).all())
-
-        self.assertRaises(ValueError, generate_bins_generic, values, [],
-                          'right')
-        self.assertRaises(ValueError, generate_bins_generic, values[:0],
-                          binner, 'right')
-
-        self.assertRaises(ValueError, generate_bins_generic, values, [4],
-                          'right')
-        self.assertRaises(ValueError, generate_bins_generic, values, [-3, -1],
-                          'right')
-
-
-def test_group_ohlc():
-    def _check(dtype):
-        obj = np.array(np.random.randn(20), dtype=dtype)
-
-        bins = np.array([6, 12, 20])
-        out = np.zeros((3, 4), dtype)
-        counts = np.zeros(len(out), dtype=np.int64)
-        labels = com._ensure_int64(np.repeat(
-            np.arange(3), np.diff(np.r_[0, bins])))
-
-        func = getattr(algos, 'group_ohlc_%s' % dtype)
-        func(out, counts, obj[:, None], labels)
-
-        def _ohlc(group):
-            if isnull(group).all():
-                return np.repeat(nan, 4)
-            return [group[0], group.max(), group.min(), group[-1]]
-
-        expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]),
-                             _ohlc(obj[12:])])
-
-        assert_almost_equal(out, expected)
-        assert_almost_equal(counts, [6, 6, 8])
-
-        obj[:6] = nan
-        func(out, counts, obj[:, None], labels)
-        expected[0] = nan
-        assert_almost_equal(out, expected)
-
-    _check('float32')
-    _check('float64')
-
-
-def test_try_parse_dates():
-    from dateutil.parser import parse
-
-    arr = np.array(['5/1/2000', '6/1/2000', '7/1/2000'], dtype=object)
-
-    result = lib.try_parse_dates(arr, dayfirst=True)
-    expected = [parse(d, dayfirst=True) for d in arr]
-    assert (np.array_equal(result, expected))
-
-
-class TestTypeInference(tm.TestCase):
-    _multiprocess_can_split_ = True
-
-    def test_length_zero(self):
-        result = lib.infer_dtype(np.array([], dtype='i4'))
-        self.assertEqual(result, 'integer')
-
-        result = lib.infer_dtype([])
-        self.assertEqual(result, 'empty')
-
-    def test_integers(self):
-        arr = np.array([1, 2, 3, np.int64(4), np.int32(5)], dtype='O')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'integer')
-
-        arr = np.array([1, 2, 3, np.int64(4), np.int32(5), 'foo'], dtype='O')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'mixed-integer')
-
-        arr = np.array([1, 2, 3, 4, 5], dtype='i4')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'integer')
-
-    def test_bools(self):
-        arr = np.array([True, False, True, True, True], dtype='O')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'boolean')
-
-        arr = np.array([np.bool_(True), np.bool_(False)], dtype='O')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'boolean')
-
-        arr = np.array([True, False, True, 'foo'], dtype='O')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'mixed')
-
-        arr = np.array([True, False, True], dtype=bool)
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'boolean')
-
-    def test_floats(self):
-        arr = np.array([1., 2., 3., np.float64(4), np.float32(5)], dtype='O')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'floating')
-
-        arr = np.array([1, 2, 3, np.float64(4), np.float32(5), 'foo'],
-                       dtype='O')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'mixed-integer')
-
-        arr = np.array([1, 2, 3, 4, 5], dtype='f4')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'floating')
-
-        arr = np.array([1, 2, 3, 4, 5], dtype='f8')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'floating')
-
-    def test_string(self):
-        pass
-
-    def test_unicode(self):
-        pass
-
-    def test_datetime(self):
-
-        dates = [datetime.datetime(2012, 1, x) for x in range(1, 20)]
-        index = Index(dates)
-        self.assertEqual(index.inferred_type, 'datetime64')
-
-    def test_date(self):
-
-        dates = [datetime.date(2012, 1, x) for x in range(1, 20)]
-        index = Index(dates)
-        self.assertEqual(index.inferred_type, 'date')
-
-    def test_to_object_array_tuples(self):
-        r = (5, 6)
-        values = [r]
-        result = lib.to_object_array_tuples(values)
-
-        try:
-            # make sure record array works
-            from collections import namedtuple
-            record = namedtuple('record', 'x y')
-            r = record(5, 6)
-            values = [r]
-            result = lib.to_object_array_tuples(values)  # noqa
-        except ImportError:
-            pass
-
-    def test_object(self):
-
-        # GH 7431
-        # cannot infer more than this as only a single element
-        arr = np.array([None], dtype='O')
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'mixed')
-
-    def test_categorical(self):
-
-        # GH 8974
-        from pandas import Categorical, Series
-        arr = Categorical(list('abc'))
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'categorical')
-
-        result = lib.infer_dtype(Series(arr))
-        self.assertEqual(result, 'categorical')
-
-        arr = Categorical(list('abc'), categories=['cegfab'], ordered=True)
-        result = lib.infer_dtype(arr)
-        self.assertEqual(result, 'categorical')
-
-        result = lib.infer_dtype(Series(arr))
-        self.assertEqual(result, 'categorical')
-
-
-class TestMoments(tm.TestCase):
-    pass
-
-
-class TestReducer(tm.TestCase):
-    def test_int_index(self):
-        from pandas.core.series import Series
-
-        arr = np.random.randn(100, 4)
-        result = lib.reduce(arr, np.sum, labels=Index(np.arange(4)))
-        expected = arr.sum(0)
-        assert_almost_equal(result, expected)
-
-        result = lib.reduce(arr, np.sum, axis=1, labels=Index(np.arange(100)))
-        expected = arr.sum(1)
-        assert_almost_equal(result, expected)
-
-        dummy = Series(0., index=np.arange(100))
-        result = lib.reduce(arr, np.sum, dummy=dummy,
-                            labels=Index(np.arange(4)))
-        expected = arr.sum(0)
-        assert_almost_equal(result, expected)
-
-        dummy = Series(0., index=np.arange(4))
-        result = lib.reduce(arr, np.sum, axis=1, dummy=dummy,
-                            labels=Index(np.arange(100)))
-        expected = arr.sum(1)
-        assert_almost_equal(result, expected)
-
-        result = lib.reduce(arr, np.sum, axis=1, dummy=dummy,
-                            labels=Index(np.arange(100)))
-        assert_almost_equal(result, expected)
-
-
-class TestTsUtil(tm.TestCase):
-    def test_min_valid(self):
-        # Ensure that Timestamp.min is a valid Timestamp
-        Timestamp(Timestamp.min)
-
-    def test_max_valid(self):
-        # Ensure that Timestamp.max is a valid Timestamp
-        Timestamp(Timestamp.max)
-
-    def test_to_datetime_bijective(self):
-        # Ensure that converting to datetime and back only loses precision
-        # by going from nanoseconds to microseconds.
-        self.assertEqual(
-            Timestamp(Timestamp.max.to_pydatetime()).value / 1000,
-            Timestamp.max.value / 1000)
-        self.assertEqual(
-            Timestamp(Timestamp.min.to_pydatetime()).value / 1000,
-            Timestamp.min.value / 1000)
-
-
-class TestPeriodField(tm.TestCase):
-    def test_get_period_field_raises_on_out_of_range(self):
-        self.assertRaises(ValueError, period.get_period_field, -1, 0, 0)
-
-    def test_get_period_field_array_raises_on_out_of_range(self):
-        self.assertRaises(ValueError, period.get_period_field_arr, -1,
-                          np.empty(1), 0)
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 22ac583a3b808..2ec419221c6d8 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -6,26 +6,30 @@
 from nose.tools import assert_raises
 from datetime import datetime
 from numpy.random import randn
-from numpy.testing.decorators import slow
 import numpy as np
 from distutils.version import LooseVersion

 import pandas as pd
 from pandas import (Series, DataFrame, Panel, bdate_range, isnull,
                     notnull, concat)
-from pandas.util.testing import (assert_almost_equal, assert_series_equal,
-                                 assert_frame_equal, assert_panel_equal,
-                                 assert_index_equal, assert_numpy_array_equal)
 import pandas.core.datetools as datetools
 import pandas.stats.moments as mom
 import pandas.core.window as rwindow
 from pandas.core.base import SpecificationError
+from pandas.core.common import UnsupportedFunctionCall
 import pandas.util.testing as tm
 from pandas.compat import range, zip, PY3

 N, K = 100, 10


+def assert_equal(left, right):
+    if isinstance(left, Series):
+        tm.assert_series_equal(left, right)
+    else:
+        tm.assert_frame_equal(left, right)
+
+
 class Base(tm.TestCase):
     _multiprocess_can_split_ = True

@@ -93,11 +97,11 @@ def tests_skip_nuisance(self):
         expected = DataFrame({'A': [np.nan, np.nan, 3, 6, 9],
                               'B': [np.nan, np.nan, 18, 21, 24]},
                              columns=list('AB'))
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         expected = pd.concat([r[['A', 'B']].sum(), df[['C']]], axis=1)
         result = r.sum()
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_agg(self):
         df = DataFrame({'A': range(5), 'B': range(0, 10, 2)})
@@ -114,50 +118,51 @@ def test_agg(self):

         expected = pd.concat([a_mean, a_std, b_mean, b_std], axis=1)
         expected.columns = pd.MultiIndex.from_product([['A', 'B'],
                                                        ['mean', 'std']])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = r.aggregate({'A': np.mean, 'B': np.std})

         expected = pd.concat([a_mean, b_std], axis=1)
-        assert_frame_equal(result, expected, check_like=True)
+        tm.assert_frame_equal(result, expected, check_like=True)

         result = r.aggregate({'A': ['mean', 'std']})
         expected = pd.concat([a_mean, a_std], axis=1)
         expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'),
                                                       ('A', 'std')])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = r['A'].aggregate(['mean', 'sum'])
         expected = pd.concat([a_mean, a_sum], axis=1)
         expected.columns = ['mean', 'sum']
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         result = r.aggregate({'A': {'mean': 'mean', 'sum': 'sum'}})
         expected = pd.concat([a_mean, a_sum], axis=1)
-        expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), ('A',
-                                                                      'sum')])
-        assert_frame_equal(result, expected, check_like=True)
+        expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'),
+                                                      ('A', 'sum')])
+        tm.assert_frame_equal(result, expected, check_like=True)

         result = r.aggregate({'A': {'mean': 'mean', 'sum': 'sum'},
                               'B': {'mean2': 'mean', 'sum2': 'sum'}})
         expected = pd.concat([a_mean, a_sum, b_mean, b_sum], axis=1)
-        expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), (
-            'A', 'sum'), ('B', 'mean2'), ('B', 'sum2')])
-        assert_frame_equal(result, expected, check_like=True)
+        exp_cols = [('A', 'mean'), ('A', 'sum'), ('B', 'mean2'), ('B', 'sum2')]
+        expected.columns = pd.MultiIndex.from_tuples(exp_cols)
+        tm.assert_frame_equal(result, expected, check_like=True)

         result = r.aggregate({'A': ['mean', 'std'], 'B': ['mean', 'std']})
         expected = pd.concat([a_mean, a_std, b_mean, b_std], axis=1)
-        expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), (
-            'A', 'std'), ('B', 'mean'), ('B', 'std')])
-        assert_frame_equal(result, expected, check_like=True)
+
+        exp_cols = [('A', 'mean'), ('A', 'std'), ('B', 'mean'), ('B', 'std')]
+        expected.columns = pd.MultiIndex.from_tuples(exp_cols)
+        tm.assert_frame_equal(result, expected, check_like=True)

         # passed lambda
         result = r.agg({'A': np.sum, 'B': lambda x: np.std(x, ddof=1)})
         rcustom = r['B'].apply(lambda x: np.std(x, ddof=1))
         expected = pd.concat([a_sum, rcustom], axis=1)
-        assert_frame_equal(result, expected, check_like=True)
+        tm.assert_frame_equal(result, expected, check_like=True)

     def test_agg_consistency(self):

@@ -194,13 +199,13 @@ def f():
             'ra', 'std'), ('rb', 'mean'), ('rb', 'std')])
         result = r[['A', 'B']].agg({'A': {'ra': ['mean', 'std']},
                                     'B': {'rb': ['mean', 'std']}})
-        assert_frame_equal(result, expected, check_like=True)
+        tm.assert_frame_equal(result, expected, check_like=True)

         result = r.agg({'A': {'ra': ['mean', 'std']},
                         'B': {'rb': ['mean', 'std']}})
         expected.columns = pd.MultiIndex.from_tuples([('A', 'ra', 'mean'), (
             'A', 'ra', 'std'), ('B', 'rb', 'mean'), ('B', 'rb', 'std')])
-        assert_frame_equal(result, expected, check_like=True)
+        tm.assert_frame_equal(result, expected, check_like=True)

     def test_window_with_args(self):
         tm._skip_if_no_scipy()
@@ -212,7 +217,7 @@ def test_window_with_args(self):
         expected.columns = ['<lambda>', '<lambda>']
         result = r.aggregate([lambda x: x.mean(std=10),
                               lambda x: x.mean(std=.01)])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

         def a(x):
             return x.mean(std=10)
@@ -223,7 +228,7 @@ def b(x):
         expected = pd.concat([r.mean(std=10), r.mean(std=.01)], axis=1)
         expected.columns = ['a', 'b']
         result = r.aggregate([a, b])
-        assert_frame_equal(result, expected)
+        tm.assert_frame_equal(result, expected)

     def test_preserve_metadata(self):
         # GH 10565
@@ -261,7 +266,7 @@ def test_how_compat(self):
                         expected = getattr(
                             getattr(s, t)(freq='D', **kwargs), op)(how=how)
-                        assert_series_equal(result, expected)
+                        tm.assert_series_equal(result, expected)


 class TestWindow(Base):
@@ -296,6 +301,18 @@ def test_constructor(self):
                 with self.assertRaises(ValueError):
                     c(win_type=wt, window=2)

+    def test_numpy_compat(self):
+        # see gh-12811
+        w = rwindow.Window(Series([2, 4, 6]), window=[0, 2])
+
+        msg = "numpy operations are not valid with window objects"
+
+        for func in ('sum', 'mean'):
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(w, func), 1, 2, 3)
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(w, func), dtype=np.float64)
+

 class TestRolling(Base):
@@ -323,6 +340,18 @@ def test_constructor(self):
             with self.assertRaises(ValueError):
                 c(window=2, min_periods=1, center=w)

+    def test_numpy_compat(self):
+        # see gh-12811
+        r = rwindow.Rolling(Series([2, 4, 6]), window=2)
+
+        msg = "numpy operations are not valid with window objects"
+
+        for func in ('std', 'mean', 'sum', 'max', 'min', 'var'):
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(r, func), 1, 2, 3)
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(r, func), dtype=np.float64)
+

 class TestExpanding(Base):
@@ -347,6 +376,74 @@ def test_constructor(self):
             with self.assertRaises(ValueError):
                 c(min_periods=1, center=w)

+    def test_numpy_compat(self):
+        # see gh-12811
+        e = rwindow.Expanding(Series([2, 4, 6]), window=2)
+
+        msg = "numpy operations are not valid with window objects"
+
+        for func in ('std', 'mean', 'sum', 'max', 'min', 'var'):
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(e, func), 1, 2, 3)
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(e, func), dtype=np.float64)
+
+
+class TestEWM(Base):
+
+    def setUp(self):
+        self._create_data()
+
+    def test_constructor(self):
+        for o in [self.series, self.frame]:
+            c = o.ewm
+
+            # valid
+            c(com=0.5)
+            c(span=1.5)
+            c(alpha=0.5)
+            c(halflife=0.75)
+            c(com=0.5, span=None)
+            c(alpha=0.5, com=None)
+            c(halflife=0.75, alpha=None)
+
+            # not valid: mutually exclusive
+            with self.assertRaises(ValueError):
+                c(com=0.5, alpha=0.5)
+            with self.assertRaises(ValueError):
+                c(span=1.5, halflife=0.75)
+            with self.assertRaises(ValueError):
+                c(alpha=0.5, span=1.5)
+
+            # not valid: com < 0
+            with self.assertRaises(ValueError):
+                c(com=-0.5)
+
+            # not valid: span < 1
+            with self.assertRaises(ValueError):
+                c(span=0.5)
+
+            # not valid: halflife <= 0
+            with self.assertRaises(ValueError):
+                c(halflife=0)
+
+            # not valid: alpha <= 0 or alpha > 1
+            for alpha in (-0.5, 1.5):
+                with self.assertRaises(ValueError):
+                    c(alpha=alpha)
+
+    def test_numpy_compat(self):
+        # see gh-12811
+        e = rwindow.EWM(Series([2, 4, 6]), alpha=0.5)
+
+        msg = "numpy operations are not valid with window objects"
+
+        for func in ('std', 'mean', 'var'):
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(e, func), 1, 2, 3)
+            tm.assertRaisesRegexp(UnsupportedFunctionCall, msg,
+                                  getattr(e, func), dtype=np.float64)
+

 class TestDeprecations(Base):
     """ test that we are catching deprecation warnings """
@@ -462,7 +559,7 @@ def test_dtypes(self):
     def check_dtypes(self, f, f_name, d, d_name, exp):
         roll = d.rolling(window=self.window)
         result = f(roll)
-        assert_almost_equal(result, exp)
+        tm.assert_almost_equal(result, exp)


 class TestDtype_object(Dtype):
@@ -549,7 +646,7 @@ def check_dtypes(self, f, f_name, d, d_name, exp):

         if f_name == 'count':
             result = f(roll)
-            assert_almost_equal(result, exp)
+            tm.assert_almost_equal(result, exp)

         else:

@@ -621,11 +718,11 @@ def test_cmov_mean(self):
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
             rs = mom.rolling_mean(vals, 5, center=True)
-            assert_almost_equal(xp, rs)
+            tm.assert_almost_equal(xp, rs)

         xp = Series(rs)
         rs = Series(vals).rolling(5, center=True).mean()
-        assert_series_equal(xp, rs)
+        tm.assert_series_equal(xp, rs)

     def test_cmov_window(self):
         # GH 8238
@@ -638,11 +735,11 @@ def test_cmov_window(self):
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
             rs = mom.rolling_window(vals, 5, 'boxcar', center=True)
-            assert_almost_equal(xp, rs)
+            tm.assert_almost_equal(xp, rs)

         xp = Series(rs)
         rs = Series(vals).rolling(5, win_type='boxcar', center=True).mean()
-        assert_series_equal(xp, rs)
+        tm.assert_series_equal(xp, rs)

     def test_cmov_window_corner(self):
         # GH 8238
@@ -684,7 +781,7 @@ def test_cmov_window_frame(self):
         # DataFrame
         rs = DataFrame(vals).rolling(5, win_type='boxcar', center=True).mean()
-        assert_frame_equal(DataFrame(xp), rs)
+        tm.assert_frame_equal(DataFrame(xp), rs)

         # invalid method
         with self.assertRaises(AttributeError):
@@ -698,7 +795,7 @@ def test_cmov_window_frame(self):
         ], [np.nan, np.nan]])

         rs = DataFrame(vals).rolling(5, win_type='boxcar', center=True).sum()
-        assert_frame_equal(DataFrame(xp), rs)
+        tm.assert_frame_equal(DataFrame(xp), rs)

     def test_cmov_window_na_min_periods(self):
         tm._skip_if_no_scipy()
@@ -711,7 +808,7 @@ def test_cmov_window_na_min_periods(self):
         xp = vals.rolling(5, min_periods=4, center=True).mean()
         rs = vals.rolling(5, win_type='boxcar', min_periods=4,
                           center=True).mean()
-        assert_series_equal(xp, rs)
+        tm.assert_series_equal(xp, rs)

     def test_cmov_window_regular(self):
         # GH 8238
@@ -744,7 +841,7 @@ def test_cmov_window_regular(self):
         for wt in win_types:
             xp = Series(xps[wt])
             rs = Series(vals).rolling(5, win_type=wt, center=True).mean()
-            assert_series_equal(xp, rs)
+            tm.assert_series_equal(xp, rs)

     def test_cmov_window_regular_linear_range(self):
         # GH 8238
@@ -761,7 +858,7 @@ def test_cmov_window_regular_linear_range(self):

         for wt in win_types:
             rs = Series(vals).rolling(5, win_type=wt, center=True).mean()
-            assert_series_equal(xp, rs)
+            tm.assert_series_equal(xp, rs)

     def test_cmov_window_regular_missing_data(self):
         # GH 8238
@@ -794,7 +891,7 @@ def test_cmov_window_regular_missing_data(self):
         for wt in win_types:
             xp = Series(xps[wt])
             rs = Series(vals).rolling(5, win_type=wt, min_periods=3).mean()
-            assert_series_equal(xp, rs)
+            tm.assert_series_equal(xp, rs)

     def test_cmov_window_special(self):
         # GH 8238
@@ -821,7 +918,7 @@ def test_cmov_window_special(self):
         for wt, k in zip(win_types, kwds):
             xp = Series(xps[wt])
             rs = Series(vals).rolling(5, win_type=wt, center=True).mean(**k)
-            assert_series_equal(xp, rs)
+            tm.assert_series_equal(xp, rs)

     def test_cmov_window_special_linear_range(self):
         # GH 8238
@@ -839,7 +936,7 @@ def test_cmov_window_special_linear_range(self):

         for wt, k in zip(win_types, kwds):
             rs = Series(vals).rolling(5, win_type=wt, center=True).mean(**k)
-            assert_series_equal(xp, rs)
+            tm.assert_series_equal(xp, rs)

     def test_rolling_median(self):
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -853,7 +950,7 @@ def test_rolling_min(self):
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
             a = np.array([1, 2, 3, 4, 5])
             b = mom.rolling_min(a, window=100, min_periods=1)
-            assert_almost_equal(b, np.ones(len(a)))
+            tm.assert_almost_equal(b, np.ones(len(a)))

             self.assertRaises(ValueError, mom.rolling_min, np.array([1, 2, 3]),
                               window=3, min_periods=5)
@@ -865,7 +962,7 @@ def test_rolling_max(self):
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
             a = np.array([1, 2, 3, 4, 5], dtype=np.float64)
             b = mom.rolling_max(a, window=100, min_periods=1)
-            assert_almost_equal(a, b)
+            tm.assert_almost_equal(a, b)

             self.assertRaises(ValueError, mom.rolling_max, np.array([1, 2, 3]),
                               window=3, min_periods=5)
@@ -901,7 +998,8 @@ def test_rolling_apply(self):
                                     category=RuntimeWarning)
             ser = Series([])
-            assert_series_equal(ser, ser.rolling(10).apply(lambda x: x.mean()))
+            tm.assert_series_equal(ser,
+                                   ser.rolling(10).apply(lambda x: x.mean()))

             f = lambda x: x[np.isfinite(x)].mean()
@@ -917,10 +1015,10 @@ def roll_mean(x, window, min_periods=None, freq=None, center=False,
         s = Series([None, None, None])
         result = s.rolling(2, min_periods=0).apply(lambda x: len(x))
         expected = Series([1., 2., 2.])
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         result = s.rolling(2, min_periods=0).apply(len)
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

     def test_rolling_apply_out_of_bounds(self):
         # #1850
@@ -933,7 +1031,7 @@ def test_rolling_apply_out_of_bounds(self):
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
             result = mom.rolling_apply(arr, 10, np.sum, min_periods=1)
-            assert_almost_equal(result, result)
+            tm.assert_almost_equal(result, result)

     def test_rolling_std(self):
         self._check_moment_func(mom.rolling_std, lambda x: np.std(x, ddof=1),
@@ -946,13 +1044,13 @@ def test_rolling_std_1obs(self):
             result = mom.rolling_std(np.array([1., 2., 3., 4., 5.]),
                                      1, min_periods=1)
             expected = np.array([np.nan] * 5)
-            assert_almost_equal(result, expected)
+            tm.assert_almost_equal(result, expected)

         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
             result = mom.rolling_std(np.array([1., 2., 3., 4., 5.]),
                                      1, min_periods=1, ddof=0)
             expected = np.zeros(5)
-            assert_almost_equal(result, expected)
+            tm.assert_almost_equal(result, expected)

         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
             result = mom.rolling_std(np.array([np.nan, np.nan, 3., 4., 5.]),
@@ -1066,7 +1164,7 @@ def get_result(arr, window, min_periods=None, center=False):
                                       kwargs)

         result = get_result(self.arr, window)
-        assert_almost_equal(result[-1], static_comp(self.arr[-50:]))
+        tm.assert_almost_equal(result[-1], static_comp(self.arr[-50:]))

         if preserve_nan:
             assert (np.isnan(result[self._nan_locs]).all())
@@ -1078,7 +1176,7 @@ def get_result(arr, window, min_periods=None, center=False):

         if has_min_periods:
             result = get_result(arr, 50, min_periods=30)
-            assert_almost_equal(result[-1], static_comp(arr[10:-10]))
+            tm.assert_almost_equal(result[-1], static_comp(arr[10:-10]))

             # min_periods is working correctly
             result = get_result(arr, 20, min_periods=15)
@@ -1096,10 +1194,10 @@ def get_result(arr, window, min_periods=None, center=False):
             # min_periods=0
             result0 = get_result(arr, 20, min_periods=0)
             result1 = get_result(arr, 20, min_periods=1)
-            assert_almost_equal(result0, result1)
+            tm.assert_almost_equal(result0, result1)
         else:
             result = get_result(arr, 50)
-            assert_almost_equal(result[-1], static_comp(arr[10:-10]))
+            tm.assert_almost_equal(result[-1], static_comp(arr[10:-10]))

         # GH 7925
         if has_center:
@@ -1117,7 +1215,8 @@ def get_result(arr, window, min_periods=None, center=False):

         if test_stable:
             result = get_result(self.arr + 1e9, window)
-            assert_almost_equal(result[-1], static_comp(self.arr[-50:] + 1e9))
+            tm.assert_almost_equal(result[-1],
+                                   static_comp(self.arr[-50:] + 1e9))

         # Test window larger than array, #7297
         if test_window:
@@ -1131,14 +1230,15 @@ def get_result(arr, window, min_periods=None, center=False):
                     self.assertTrue(np.array_equal(nan_mask, np.isnan(
                         expected)))
                     nan_mask = ~nan_mask
-                    assert_almost_equal(result[nan_mask], expected[nan_mask])
+                    tm.assert_almost_equal(result[nan_mask],
+                                           expected[nan_mask])
             else:
                 result = get_result(self.arr, len(self.arr) + 1)
                 expected = get_result(self.arr, len(self.arr))
                 nan_mask = np.isnan(result)
                 self.assertTrue(np.array_equal(nan_mask, np.isnan(expected)))
                 nan_mask = ~nan_mask
-                assert_almost_equal(result[nan_mask], expected[nan_mask])
+                tm.assert_almost_equal(result[nan_mask], expected[nan_mask])

     def _check_structures(self, f, static_comp,
                           name=None, has_min_periods=True, has_time_rule=True,
@@ -1190,11 +1290,12 @@ def get_result(obj, window, min_periods=None, freq=None, center=False):
             trunc_series = self.series[::2].truncate(prev_date, last_date)
             trunc_frame = self.frame[::2].truncate(prev_date, last_date)

-            assert_almost_equal(series_result[-1], static_comp(trunc_series))
+            self.assertAlmostEqual(series_result[-1],
+                                   static_comp(trunc_series))

-            assert_series_equal(frame_result.xs(last_date),
-                                trunc_frame.apply(static_comp),
-                                check_names=False)
+            tm.assert_series_equal(frame_result.xs(last_date),
+                                   trunc_frame.apply(static_comp),
+                                   check_names=False)

         # GH 7925
         if has_center:
@@ -1233,8 +1334,8 @@ def get_result(obj, window, min_periods=None, freq=None, center=False):
             if fill_value is not None:
                 series_xp = series_xp.fillna(fill_value)
                 frame_xp = frame_xp.fillna(fill_value)
-            assert_series_equal(series_xp, series_rs)
-            assert_frame_equal(frame_xp, frame_rs)
+            tm.assert_series_equal(series_xp, series_rs)
+            tm.assert_frame_equal(frame_xp, frame_rs)

     def test_ewma(self):
         self._check_ew(mom.ewma, name='mean')
@@ -1254,7 +1355,7 @@ def test_ewma(self):
                   lambda s: s.ewm(com=2.0, adjust=True,
                                   ignore_na=True).mean(), ]:
             result = f(s)
-            assert_series_equal(result, expected)
+            tm.assert_series_equal(result, expected)

         expected = Series([1.0, 1.333333, 2.222222, 4.148148])
         for f in [lambda s: s.ewm(com=2.0, adjust=False).mean(),
@@ -1264,7 +1365,7 @@ def test_ewma(self):
                                   ignore_na=True).mean(), ]:
             result = f(s)
-            assert_series_equal(result, expected)
+            tm.assert_series_equal(result, expected)

     def test_ewma_nan_handling(self):
         s = Series([1.] + [np.nan] * 5 + [1.])
@@ -1315,11 +1416,11 @@ def simple_wma(s, w):

                 expected = simple_wma(s, Series(w))
                 result = s.ewm(com=com, adjust=adjust,
                                ignore_na=ignore_na).mean()
-                assert_series_equal(result, expected)
+                tm.assert_series_equal(result, expected)
                 if ignore_na is False:
                     # check that ignore_na defaults to False
                     result = s.ewm(com=com, adjust=adjust).mean()
-                    assert_series_equal(result, expected)
+                    tm.assert_series_equal(result, expected)

     def test_ewmvar(self):
         self._check_ew(mom.ewmvar, name='var')
@@ -1331,7 +1432,7 @@ def test_ewma_span_com_args(self):
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
             A = mom.ewma(self.arr, com=9.5)
             B = mom.ewma(self.arr, span=20)
-            assert_almost_equal(A, B)
+            tm.assert_almost_equal(A, B)

             self.assertRaises(ValueError, mom.ewma, self.arr, com=9.5, span=20)
             self.assertRaises(ValueError, mom.ewma, self.arr)
@@ -1340,7 +1441,7 @@ def test_ewma_halflife_arg(self):
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
             A = mom.ewma(self.arr, com=13.932726172912965)
             B = mom.ewma(self.arr, halflife=10.0)
-            assert_almost_equal(A, B)
+            tm.assert_almost_equal(A, B)

             self.assertRaises(ValueError, mom.ewma, self.arr, span=20,
                               halflife=50)
@@ -1357,9 +1458,9 @@ def test_ewma_alpha_old_api(self):
            b = mom.ewma(self.arr, com=0.62014947789973052)
            c = mom.ewma(self.arr, span=2.240298955799461)
            d = mom.ewma(self.arr, halflife=0.721792864318)
-           assert_numpy_array_equal(a, b)
-           assert_numpy_array_equal(a, c)
-           assert_numpy_array_equal(a, d)
+           tm.assert_numpy_array_equal(a, b)
+           tm.assert_numpy_array_equal(a, c)
+           tm.assert_numpy_array_equal(a, d)

     def test_ewma_alpha_arg_old_api(self):
         # GH 10789
@@ -1379,9 +1480,9 @@ def test_ewm_alpha(self):
         b = s.ewm(com=0.62014947789973052).mean()
         c = s.ewm(span=2.240298955799461).mean()
         d = s.ewm(halflife=0.721792864318).mean()
-        assert_series_equal(a, b)
-        assert_series_equal(a, c)
-        assert_series_equal(a, d)
+        tm.assert_series_equal(a, b)
+        tm.assert_series_equal(a, c)
+        tm.assert_series_equal(a, d)

     def test_ewm_alpha_arg(self):
         # GH 10789
@@ -1423,7 +1524,7 @@ def test_ew_empty_arrays(self):
             with tm.assert_produces_warning(FutureWarning,
                                             check_stacklevel=False):
                 result = f(arr, 3)
-                assert_almost_equal(result, arr)
+                tm.assert_almost_equal(result, arr)

     def _check_ew(self, func, name=None):
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
@@ -1460,16 +1561,16 @@ def _check_ew_ndarray(self, func, preserve_nan=False, name=None):

         # check series of length 0
         result = func(Series([]), 50, min_periods=min_periods)
-        assert_series_equal(result, Series([]))
+        tm.assert_series_equal(result, Series([]))

         # check series of length 1
         result = func(Series([1.]), 50, min_periods=min_periods)
         if func == mom.ewma:
-            assert_series_equal(result, Series([1.]))
+            tm.assert_series_equal(result, Series([1.]))
         else:
             # ewmstd, ewmvol, ewmvar with bias=False require at least two
             # values
-            assert_series_equal(result, Series([np.NaN]))
+            tm.assert_series_equal(result, Series([np.NaN]))

         # pass in ints
         result2 = func(np.arange(50), span=10)
@@ -1601,8 +1702,6 @@ def _non_null_values(x):
             return set(values[notnull(values)].tolist())

         for (x, is_constant, no_nans) in self.data:
-            assert_equal = assert_series_equal if isinstance(
-                x, Series) else assert_frame_equal

             count_x = count(x)
             mean_x = mean(x)
@@ -1707,7 +1806,7 @@ def _non_null_values(x):
                         assert_equal(cov_x_y, mean_x_times_y -
                                      (mean_x * mean_y))

-    @slow
+    @tm.slow
     def test_ewm_consistency(self):
         def _weights(s, com, adjust, ignore_na):
             if isinstance(s, DataFrame):
@@ -1806,7 +1905,7 @@ def _ewma(s, com, min_periods, adjust, ignore_na):
                 _variance_debiasing_factors(x, com=com, adjust=adjust,
                                             ignore_na=ignore_na)))

-    @slow
+    @tm.slow
    def test_expanding_consistency(self):

         # suppress warnings about empty slices, as we are deliberately testing
@@ -1849,8 +1948,6 @@ def test_expanding_consistency(self):
             # expanding_apply of Series.xyz(), or (b) expanding_apply of
             # np.nanxyz()
             for (x, is_constant, no_nans) in self.data:
-                assert_equal = assert_series_equal if isinstance(
-                    x, Series) else assert_frame_equal
                 functions = self.base_functions

                 # GH 8269
@@ -1895,9 +1992,9 @@ def test_expanding_consistency(self):
                                 x.iloc[:, i].expanding(
                                     min_periods=min_periods),
                                 name)(x.iloc[:, j])
-                    assert_panel_equal(expanding_f_result, expected)
+                    tm.assert_panel_equal(expanding_f_result, expected)

-    @slow
+    @tm.slow
     def test_rolling_consistency(self):

         # suppress warnings about empty slices, as we are deliberately testing
@@ -1969,10 +2066,6 @@ def cases():
                 # rolling_apply of Series.xyz(), or (b) rolling_apply of
                 # np.nanxyz()
                 for (x, is_constant, no_nans) in self.data:
-
-                    assert_equal = (assert_series_equal
-                                    if isinstance(x, Series) else
-                                    assert_frame_equal)
                     functions = self.base_functions

                     # GH 8269
@@ -2023,7 +2116,7 @@ def cases():
                                         min_periods=min_periods,
                                         center=center), name)(x.iloc[:, j]))
-                        assert_panel_equal(rolling_f_result, expected)
+                        tm.assert_panel_equal(rolling_f_result, expected)

     # binary moments
     def test_rolling_cov(self):
@@ -2031,7 +2124,7 @@ def test_rolling_cov(self):
         B = A + randn(len(A))

         result = A.rolling(window=50, min_periods=25).cov(B)
-        assert_almost_equal(result[-1], np.cov(A[-50:], B[-50:])[0, 1])
+        tm.assert_almost_equal(result[-1], np.cov(A[-50:], B[-50:])[0, 1])

     def test_rolling_cov_pairwise(self):
         self._check_pairwise_moment('rolling', 'cov', window=10, min_periods=5)
@@ -2041,7 +2134,7 @@ def test_rolling_corr(self):
         B = A + randn(len(A))

         result = A.rolling(window=50, min_periods=25).corr(B)
-        assert_almost_equal(result[-1], np.corrcoef(A[-50:], B[-50:])[0, 1])
+        tm.assert_almost_equal(result[-1], np.corrcoef(A[-50:], B[-50:])[0, 1])

         # test for correct bias correction
         a = tm.makeTimeSeries()
@@ -2050,7 +2143,7 @@ def test_rolling_corr(self):
         b[:10] = np.nan

         result = a.rolling(window=len(a), min_periods=1).corr(b)
-        assert_almost_equal(result[-1], a.corr(b))
+        tm.assert_almost_equal(result[-1], a.corr(b))

     def test_rolling_corr_pairwise(self):
         self._check_pairwise_moment('rolling', 'corr', window=10,
@@ -2151,18 +2244,18 @@ def func(A, B, com, **kwargs):

         # check series of length 0
         result = func(Series([]), Series([]), 50, min_periods=min_periods)
-        assert_series_equal(result, Series([]))
+        tm.assert_series_equal(result, Series([]))

         # check series of length 1
         result = func(
             Series([1.]), Series([1.]), 50, min_periods=min_periods)
-        assert_series_equal(result, Series([np.NaN]))
+        tm.assert_series_equal(result, Series([np.NaN]))

         self.assertRaises(Exception, func, A, randn(50), 20, min_periods=5)

     def test_expanding_apply(self):
         ser = Series([])
-        assert_series_equal(ser, ser.expanding().apply(lambda x: x.mean()))
+        tm.assert_series_equal(ser, ser.expanding().apply(lambda x: x.mean()))

         def expanding_mean(x, min_periods=1, freq=None):
             return mom.expanding_apply(x, lambda x: x.mean(),
@@ -2174,7 +2267,7 @@ def expanding_mean(x, min_periods=1, freq=None):
         s = Series([None, None, None])
         result = s.expanding(min_periods=0).apply(lambda x: len(x))
         expected = Series([1., 2., 3.])
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

     def test_expanding_apply_args_kwargs(self):
         def mean_w_arg(x, const):
@@ -2184,11 +2277,11 @@ def mean_w_arg(x, const):

         expected = df.expanding().apply(np.mean) + 20.

-        assert_frame_equal(df.expanding().apply(mean_w_arg, args=(20, )),
-                           expected)
-        assert_frame_equal(df.expanding().apply(mean_w_arg,
-                                                kwargs={'const': 20}),
-                           expected)
+        tm.assert_frame_equal(df.expanding().apply(mean_w_arg, args=(20, )),
+                              expected)
+        tm.assert_frame_equal(df.expanding().apply(mean_w_arg,
+                                                   kwargs={'const': 20}),
+                              expected)

     def test_expanding_corr(self):
         A = self.series.dropna()
@@ -2198,11 +2291,11 @@ def test_expanding_corr(self):

         rolling_result = A.rolling(window=len(A), min_periods=1).corr(B)

-        assert_almost_equal(rolling_result, result)
+        tm.assert_almost_equal(rolling_result, result)

     def test_expanding_count(self):
         result = self.series.expanding().count()
-        assert_almost_equal(result, self.series.rolling(
+        tm.assert_almost_equal(result, self.series.rolling(
             window=len(self.series)).count())

     def test_expanding_quantile(self):
@@ -2211,7 +2304,7 @@ def test_expanding_quantile(self):
         rolling_result = self.series.rolling(window=len(self.series),
                                              min_periods=1).quantile(0.5)

-        assert_almost_equal(result, rolling_result)
+        tm.assert_almost_equal(result, rolling_result)

     def test_expanding_cov(self):
         A = self.series
@@ -2221,7 +2314,7 @@ def test_expanding_cov(self):

         rolling_result = A.rolling(window=len(A), min_periods=1).cov(B)

-        assert_almost_equal(rolling_result, result)
+        tm.assert_almost_equal(rolling_result, result)

     def test_expanding_max(self):
         self._check_expanding(mom.expanding_max, np.max, preserve_nan=False)
@@ -2233,7 +2326,7 @@ def test_expanding_cov_pairwise(self):
                                             min_periods=1).corr()

         for i in result.items:
-            assert_almost_equal(result[i], rolling_result[i])
+            tm.assert_almost_equal(result[i], rolling_result[i])

     def test_expanding_corr_pairwise(self):
         result = self.frame.expanding().corr()
@@ -2242,7 +2335,7 @@ def test_expanding_corr_pairwise(self):
                                             min_periods=1).corr()

         for i in result.items:
-            assert_almost_equal(result[i], rolling_result[i])
+            tm.assert_almost_equal(result[i], rolling_result[i])

     def
test_expanding_cov_diff_index(self): # GH 7512 @@ -2250,17 +2343,17 @@ def test_expanding_cov_diff_index(self): s2 = Series([1, 3], index=[0, 2]) result = s1.expanding().cov(s2) expected = Series([None, None, 2.0]) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) s2a = Series([1, None, 3], index=[0, 1, 2]) result = s1.expanding().cov(s2a) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) s1 = Series([7, 8, 10], index=[0, 1, 3]) s2 = Series([7, 9, 10], index=[0, 2, 3]) result = s1.expanding().cov(s2) expected = Series([None, None, None, 4.5]) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) def test_expanding_corr_diff_index(self): # GH 7512 @@ -2268,17 +2361,17 @@ def test_expanding_corr_diff_index(self): s2 = Series([1, 3], index=[0, 2]) result = s1.expanding().corr(s2) expected = Series([None, None, 1.0]) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) s2a = Series([1, None, 3], index=[0, 1, 2]) result = s1.expanding().corr(s2a) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) s1 = Series([7, 8, 10], index=[0, 1, 3]) s2 = Series([7, 9, 10], index=[0, 2, 3]) result = s1.expanding().corr(s2) expected = Series([None, None, None, 1.]) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) def test_rolling_cov_diff_length(self): # GH 7512 @@ -2286,11 +2379,11 @@ def test_rolling_cov_diff_length(self): s2 = Series([1, 3], index=[0, 2]) result = s1.rolling(window=3, min_periods=2).cov(s2) expected = Series([None, None, 2.0]) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) s2a = Series([1, None, 3], index=[0, 1, 2]) result = s1.rolling(window=3, min_periods=2).cov(s2a) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) def test_rolling_corr_diff_length(self): # GH 7512 @@ -2298,11 +2391,11 @@ def 
test_rolling_corr_diff_length(self): s2 = Series([1, 3], index=[0, 2]) result = s1.rolling(window=3, min_periods=2).corr(s2) expected = Series([None, None, 1.0]) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) s2a = Series([1, None, 3], index=[0, 1, 2]) result = s1.rolling(window=3, min_periods=2).corr(s2a) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) def test_rolling_functions_window_non_shrinkage(self): # GH 7764 @@ -2334,10 +2427,10 @@ def test_rolling_functions_window_non_shrinkage(self): for f in functions: try: s_result = f(s) - assert_series_equal(s_result, s_expected) + tm.assert_series_equal(s_result, s_expected) df_result = f(df) - assert_frame_equal(df_result, df_expected) + tm.assert_frame_equal(df_result, df_expected) except (ImportError): # scipy needed for rolling_window @@ -2349,7 +2442,7 @@ def test_rolling_functions_window_non_shrinkage(self): .corr(x, pairwise=True))] for f in functions: df_result_panel = f(df) - assert_panel_equal(df_result_panel, df_expected_panel) + tm.assert_panel_equal(df_result_panel, df_expected_panel) def test_moment_functions_zero_length(self): # GH 8056 @@ -2404,13 +2497,13 @@ def test_moment_functions_zero_length(self): for f in functions: try: s_result = f(s) - assert_series_equal(s_result, s_expected) + tm.assert_series_equal(s_result, s_expected) df1_result = f(df1) - assert_frame_equal(df1_result, df1_expected) + tm.assert_frame_equal(df1_result, df1_expected) df2_result = f(df2) - assert_frame_equal(df2_result, df2_expected) + tm.assert_frame_equal(df2_result, df2_expected) except (ImportError): # scipy needed for rolling_window @@ -2427,10 +2520,10 @@ def test_moment_functions_zero_length(self): ] for f in functions: df1_result_panel = f(df1) - assert_panel_equal(df1_result_panel, df1_expected_panel) + tm.assert_panel_equal(df1_result_panel, df1_expected_panel) df2_result_panel = f(df2) - assert_panel_equal(df2_result_panel, 
df2_expected_panel) + tm.assert_panel_equal(df2_result_panel, df2_expected_panel) def test_expanding_cov_pairwise_diff_length(self): # GH 7512 @@ -2444,10 +2537,10 @@ def test_expanding_cov_pairwise_diff_length(self): result4 = df1a.expanding().cov(df2a, pairwise=True)[2] expected = DataFrame([[-3., -5.], [-6., -10.]], index=['A', 'B'], columns=['X', 'Y']) - assert_frame_equal(result1, expected) - assert_frame_equal(result2, expected) - assert_frame_equal(result3, expected) - assert_frame_equal(result4, expected) + tm.assert_frame_equal(result1, expected) + tm.assert_frame_equal(result2, expected) + tm.assert_frame_equal(result3, expected) + tm.assert_frame_equal(result4, expected) def test_expanding_corr_pairwise_diff_length(self): # GH 7512 @@ -2461,35 +2554,29 @@ def test_expanding_corr_pairwise_diff_length(self): result4 = df1a.expanding().corr(df2a, pairwise=True)[2] expected = DataFrame([[-1.0, -1.0], [-1.0, -1.0]], index=['A', 'B'], columns=['X', 'Y']) - assert_frame_equal(result1, expected) - assert_frame_equal(result2, expected) - assert_frame_equal(result3, expected) - assert_frame_equal(result4, expected) + tm.assert_frame_equal(result1, expected) + tm.assert_frame_equal(result2, expected) + tm.assert_frame_equal(result3, expected) + tm.assert_frame_equal(result4, expected) def test_pairwise_stats_column_names_order(self): # GH 7738 df1s = [DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[0, 1]), - DataFrame( - [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 0]), - DataFrame( - [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 1]), - DataFrame( - [[2, 4], [1, 2], [5, 2], [8, 1]], columns=['C', 'C']), - DataFrame( - [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1., 0]), - DataFrame( - [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[0., 1]), - DataFrame( - [[2, 4], [1, 2], [5, 2], [8, 1]], columns=['C', 1]), - DataFrame( - [[2., 4.], [1., 2.], [5., 2.], [8., 1.]], columns=[1, 0.]), - DataFrame( - [[2, 4.], [1, 2.], [5, 2.], [8, 1.]], columns=[0, 1.]), - DataFrame( - 
[[2, 4], [1, 2], [5, 2], [8, 1.]], columns=[1., 'X']), ] - df2 = DataFrame( - [[None, 1, 1], [None, 1, 2], [None, 3, 2], [None, 8, 1] - ], columns=['Y', 'Z', 'X']) + DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 0]), + DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 1]), + DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], + columns=['C', 'C']), + DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1., 0]), + DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[0., 1]), + DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=['C', 1]), + DataFrame([[2., 4.], [1., 2.], [5., 2.], [8., 1.]], + columns=[1, 0.]), + DataFrame([[2, 4.], [1, 2.], [5, 2.], [8, 1.]], + columns=[0, 1.]), + DataFrame([[2, 4], [1, 2], [5, 2], [8, 1.]], + columns=[1., 'X']), ] + df2 = DataFrame([[None, 1, 1], [None, 1, 2], + [None, 3, 2], [None, 8, 1]], columns=['Y', 'Z', 'X']) s = Series([1, 1, 3, 8]) # suppress warnings about incomparable objects, as we are deliberately @@ -2503,11 +2590,13 @@ def test_pairwise_stats_column_names_order(self): for f in [lambda x: x.cov(), lambda x: x.corr(), ]: results = [f(df) for df in df1s] for (df, result) in zip(df1s, results): - assert_index_equal(result.index, df.columns) - assert_index_equal(result.columns, df.columns) + tm.assert_index_equal(result.index, df.columns) + tm.assert_index_equal(result.columns, df.columns) for i, result in enumerate(results): if i > 0: - self.assert_numpy_array_equal(result, results[0]) + # compare internal values, as columns can be different + self.assert_numpy_array_equal(result.values, + results[0].values) # DataFrame with itself, pairwise=True for f in [lambda x: x.expanding().cov(pairwise=True), @@ -2518,12 +2607,13 @@ def test_pairwise_stats_column_names_order(self): lambda x: x.ewm(com=3).corr(pairwise=True), ]: results = [f(df) for df in df1s] for (df, result) in zip(df1s, results): - assert_index_equal(result.items, df.index) - assert_index_equal(result.major_axis, df.columns) - 
assert_index_equal(result.minor_axis, df.columns) + tm.assert_index_equal(result.items, df.index) + tm.assert_index_equal(result.major_axis, df.columns) + tm.assert_index_equal(result.minor_axis, df.columns) for i, result in enumerate(results): if i > 0: - self.assert_numpy_array_equal(result, results[0]) + self.assert_numpy_array_equal(result.values, + results[0].values) # DataFrame with itself, pairwise=False for f in [lambda x: x.expanding().cov(pairwise=False), @@ -2534,11 +2624,12 @@ def test_pairwise_stats_column_names_order(self): lambda x: x.ewm(com=3).corr(pairwise=False), ]: results = [f(df) for df in df1s] for (df, result) in zip(df1s, results): - assert_index_equal(result.index, df.index) - assert_index_equal(result.columns, df.columns) + tm.assert_index_equal(result.index, df.index) + tm.assert_index_equal(result.columns, df.columns) for i, result in enumerate(results): if i > 0: - self.assert_numpy_array_equal(result, results[0]) + self.assert_numpy_array_equal(result.values, + results[0].values) # DataFrame with another DataFrame, pairwise=True for f in [lambda x, y: x.expanding().cov(y, pairwise=True), @@ -2549,12 +2640,13 @@ def test_pairwise_stats_column_names_order(self): lambda x, y: x.ewm(com=3).corr(y, pairwise=True), ]: results = [f(df, df2) for df in df1s] for (df, result) in zip(df1s, results): - assert_index_equal(result.items, df.index) - assert_index_equal(result.major_axis, df.columns) - assert_index_equal(result.minor_axis, df2.columns) + tm.assert_index_equal(result.items, df.index) + tm.assert_index_equal(result.major_axis, df.columns) + tm.assert_index_equal(result.minor_axis, df2.columns) for i, result in enumerate(results): if i > 0: - self.assert_numpy_array_equal(result, results[0]) + self.assert_numpy_array_equal(result.values, + results[0].values) # DataFrame with another DataFrame, pairwise=False for f in [lambda x, y: x.expanding().cov(y, pairwise=False), @@ -2569,8 +2661,8 @@ def 
test_pairwise_stats_column_names_order(self): if result is not None: expected_index = df.index.union(df2.index) expected_columns = df.columns.union(df2.columns) - assert_index_equal(result.index, expected_index) - assert_index_equal(result.columns, expected_columns) + tm.assert_index_equal(result.index, expected_index) + tm.assert_index_equal(result.columns, expected_columns) else: tm.assertRaisesRegexp( ValueError, "'arg1' columns are not unique", f, df, @@ -2588,11 +2680,12 @@ def test_pairwise_stats_column_names_order(self): lambda x, y: x.ewm(com=3).corr(y), ]: results = [f(df, s) for df in df1s] + [f(s, df) for df in df1s] for (df, result) in zip(df1s, results): - assert_index_equal(result.index, df.index) - assert_index_equal(result.columns, df.columns) + tm.assert_index_equal(result.index, df.index) + tm.assert_index_equal(result.columns, df.columns) for i, result in enumerate(results): if i > 0: - self.assert_numpy_array_equal(result, results[0]) + self.assert_numpy_array_equal(result.values, + results[0].values) def test_rolling_skew_edge_cases(self): @@ -2601,19 +2694,19 @@ def test_rolling_skew_edge_cases(self): # yields all NaN (0 variance) d = Series([1] * 5) x = d.rolling(window=5).skew() - assert_series_equal(all_nan, x) + tm.assert_series_equal(all_nan, x) # yields all NaN (window too small) d = Series(np.random.randn(5)) x = d.rolling(window=2).skew() - assert_series_equal(all_nan, x) + tm.assert_series_equal(all_nan, x) # yields [NaN, NaN, NaN, 0.177994, 1.548824] d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401 ]) expected = Series([np.NaN, np.NaN, np.NaN, 0.177994, 1.548824]) x = d.rolling(window=4).skew() - assert_series_equal(expected, x) + tm.assert_series_equal(expected, x) def test_rolling_kurt_edge_cases(self): @@ -2622,25 +2715,25 @@ def test_rolling_kurt_edge_cases(self): # yields all NaN (0 variance) d = Series([1] * 5) x = d.rolling(window=5).kurt() - assert_series_equal(all_nan, x) + 
tm.assert_series_equal(all_nan, x) # yields all NaN (window too small) d = Series(np.random.randn(5)) x = d.rolling(window=3).kurt() - assert_series_equal(all_nan, x) + tm.assert_series_equal(all_nan, x) # yields [NaN, NaN, NaN, 1.224307, 2.671499] d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401 ]) expected = Series([np.NaN, np.NaN, np.NaN, 1.224307, 2.671499]) x = d.rolling(window=4).kurt() - assert_series_equal(expected, x) + tm.assert_series_equal(expected, x) def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True, has_time_rule=True, preserve_nan=True): result = func(self.arr) - assert_almost_equal(result[10], static_comp(self.arr[:11])) + tm.assert_almost_equal(result[10], static_comp(self.arr[:11])) if preserve_nan: assert (np.isnan(result[self._nan_locs]).all()) @@ -2650,7 +2743,7 @@ def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True, if has_min_periods: result = func(arr, min_periods=30) assert (np.isnan(result[:29]).all()) - assert_almost_equal(result[-1], static_comp(arr[:50])) + tm.assert_almost_equal(result[-1], static_comp(arr[:50])) # min_periods is working correctly result = func(arr, min_periods=15) @@ -2665,10 +2758,10 @@ def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True, # min_periods=0 result0 = func(arr, min_periods=0) result1 = func(arr, min_periods=1) - assert_almost_equal(result0, result1) + tm.assert_almost_equal(result0, result1) else: result = func(arr) - assert_almost_equal(result[-1], static_comp(arr[:50])) + tm.assert_almost_equal(result[-1], static_comp(arr[:50])) def _check_expanding_structures(self, func): series_result = func(self.series) @@ -2702,7 +2795,7 @@ def test_rolling_max_gh6297(self): index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').max() - assert_series_equal(expected, x) + tm.assert_series_equal(expected, x) def 
test_rolling_max_how_resample(self): @@ -2721,14 +2814,14 @@ def test_rolling_max_how_resample(self): index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').max() - assert_series_equal(expected, x) + tm.assert_series_equal(expected, x) # Now specify median (10.0) expected = Series([0.0, 1.0, 2.0, 3.0, 10.0], index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').max(how='median') - assert_series_equal(expected, x) + tm.assert_series_equal(expected, x) # Now specify mean (4+10+20)/3 v = (4.0 + 10.0 + 20.0) / 3.0 @@ -2736,7 +2829,7 @@ def test_rolling_max_how_resample(self): index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').max(how='mean') - assert_series_equal(expected, x) + tm.assert_series_equal(expected, x) def test_rolling_min_how_resample(self): @@ -2755,7 +2848,7 @@ def test_rolling_min_how_resample(self): index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): r = series.rolling(window=1, freq='D') - assert_series_equal(expected, r.min()) + tm.assert_series_equal(expected, r.min()) def test_rolling_median_how_resample(self): @@ -2774,7 +2867,7 @@ def test_rolling_median_how_resample(self): index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').median() - assert_series_equal(expected, x) + tm.assert_series_equal(expected, x) def test_rolling_median_memory_error(self): # GH11722 @@ -2824,16 +2917,30 @@ def test_getitem(self): expected = g_mutated.B.apply(lambda x: x.rolling(2).mean()) result = g.rolling(2).mean().B - assert_series_equal(result, expected) + 
tm.assert_series_equal(result, expected) result = g.rolling(2).B.mean() - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) result = g.B.rolling(2).mean() - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) result = self.frame.B.groupby(self.frame.A).rolling(2).mean() - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) + + def test_getitem_multiple(self): + + # GH 13174 + g = self.frame.groupby('A') + r = g.rolling(2) + g_mutated = self.frame.groupby('A', mutated=True) + expected = g_mutated.B.apply(lambda x: x.rolling(2).count()) + + result = r.B.count() + tm.assert_series_equal(result, expected) + + result = r.B.count() + tm.assert_series_equal(result, expected) def test_rolling(self): g = self.frame.groupby('A') @@ -2842,16 +2949,16 @@ def test_rolling(self): for f in ['sum', 'mean', 'min', 'max', 'count', 'kurt', 'skew']: result = getattr(r, f)() expected = g.apply(lambda x: getattr(x.rolling(4), f)()) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) for f in ['std', 'var']: result = getattr(r, f)(ddof=1) expected = g.apply(lambda x: getattr(x.rolling(4), f)(ddof=1)) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) result = r.quantile(0.5) expected = g.apply(lambda x: x.rolling(4).quantile(0.5)) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_rolling_corr_cov(self): g = self.frame.groupby('A') @@ -2863,14 +2970,14 @@ def test_rolling_corr_cov(self): def func(x): return getattr(x.rolling(4), f)(self.frame) expected = g.apply(func) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) result = getattr(r.B, f)(pairwise=True) def func(x): return getattr(x.B.rolling(4), f)(pairwise=True) expected = g.apply(func) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) def test_rolling_apply(self): g = 
self.frame.groupby('A') @@ -2879,7 +2986,7 @@ def test_rolling_apply(self): # reduction result = r.apply(lambda x: x.sum()) expected = g.apply(lambda x: x.rolling(4).apply(lambda y: y.sum())) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_expanding(self): g = self.frame.groupby('A') @@ -2888,16 +2995,16 @@ def test_expanding(self): for f in ['sum', 'mean', 'min', 'max', 'count', 'kurt', 'skew']: result = getattr(r, f)() expected = g.apply(lambda x: getattr(x.expanding(), f)()) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) for f in ['std', 'var']: result = getattr(r, f)(ddof=0) expected = g.apply(lambda x: getattr(x.expanding(), f)(ddof=0)) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) result = r.quantile(0.5) expected = g.apply(lambda x: x.expanding().quantile(0.5)) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) def test_expanding_corr_cov(self): g = self.frame.groupby('A') @@ -2909,14 +3016,14 @@ def test_expanding_corr_cov(self): def func(x): return getattr(x.expanding(), f)(self.frame) expected = g.apply(func) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) result = getattr(r.B, f)(pairwise=True) def func(x): return getattr(x.B.expanding(), f)(pairwise=True) expected = g.apply(func) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) def test_expanding_apply(self): g = self.frame.groupby('A') @@ -2925,4 +3032,4 @@ def test_expanding_apply(self): # reduction result = r.apply(lambda x: x.sum()) expected = g.apply(lambda x: x.expanding().apply(lambda y: y.sum())) - assert_frame_equal(result, expected) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/types/test_dtypes.py b/pandas/tests/types/test_dtypes.py index 2a9ad30a07805..d48b9baf64777 100644 --- a/pandas/tests/types/test_dtypes.py +++ b/pandas/tests/types/test_dtypes.py @@ -45,6 
+45,16 @@ class TestCategoricalDtype(Base, tm.TestCase): def setUp(self): self.dtype = CategoricalDtype() + def test_hash_vs_equality(self): + # make sure that we satisfy is semantics + dtype = self.dtype + dtype2 = CategoricalDtype() + self.assertTrue(dtype == dtype2) + self.assertTrue(dtype2 == dtype) + self.assertTrue(dtype is dtype2) + self.assertTrue(dtype2 is dtype) + self.assertTrue(hash(dtype) == hash(dtype2)) + def test_equality(self): self.assertTrue(is_dtype_equal(self.dtype, 'category')) self.assertTrue(is_dtype_equal(self.dtype, CategoricalDtype())) @@ -88,6 +98,20 @@ class TestDatetimeTZDtype(Base, tm.TestCase): def setUp(self): self.dtype = DatetimeTZDtype('ns', 'US/Eastern') + def test_hash_vs_equality(self): + # make sure that we satisfy is semantics + dtype = self.dtype + dtype2 = DatetimeTZDtype('ns', 'US/Eastern') + dtype3 = DatetimeTZDtype(dtype2) + self.assertTrue(dtype == dtype2) + self.assertTrue(dtype2 == dtype) + self.assertTrue(dtype3 == dtype) + self.assertTrue(dtype is dtype2) + self.assertTrue(dtype2 is dtype) + self.assertTrue(dtype3 is dtype) + self.assertTrue(hash(dtype) == hash(dtype2)) + self.assertTrue(hash(dtype) == hash(dtype3)) + def test_construction(self): self.assertRaises(ValueError, lambda: DatetimeTZDtype('ms', 'US/Eastern')) diff --git a/pandas/tests/types/test_types.py b/pandas/tests/types/test_types.py new file mode 100644 index 0000000000000..b9f6006cab731 --- /dev/null +++ b/pandas/tests/types/test_types.py @@ -0,0 +1,40 @@ +# -*- coding: utf-8 -*- +import nose +import numpy as np + +from pandas import NaT +from pandas.types.api import (DatetimeTZDtype, CategoricalDtype, + na_value_for_dtype, pandas_dtype) + + +def test_pandas_dtype(): + + assert pandas_dtype('datetime64[ns, US/Eastern]') == DatetimeTZDtype( + 'datetime64[ns, US/Eastern]') + assert pandas_dtype('category') == CategoricalDtype() + for dtype in ['M8[ns]', 'm8[ns]', 'object', 'float64', 'int64']: + assert pandas_dtype(dtype) == np.dtype(dtype) + + +def 
test_na_value_for_dtype(): + for dtype in [np.dtype('M8[ns]'), np.dtype('m8[ns]'), + DatetimeTZDtype('datetime64[ns, US/Eastern]')]: + assert na_value_for_dtype(dtype) is NaT + + for dtype in ['u1', 'u2', 'u4', 'u8', + 'i1', 'i2', 'i4', 'i8']: + assert na_value_for_dtype(np.dtype(dtype)) == 0 + + for dtype in ['bool']: + assert na_value_for_dtype(np.dtype(dtype)) is False + + for dtype in ['f2', 'f4', 'f8']: + assert np.isnan(na_value_for_dtype(np.dtype(dtype))) + + for dtype in ['O']: + assert np.isnan(na_value_for_dtype(np.dtype(dtype))) + + +if __name__ == '__main__': + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], + exit=False) diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py index 3371f63db1e1c..182c0637ae29c 100644 --- a/pandas/tools/merge.py +++ b/pandas/tools/merge.py @@ -7,6 +7,7 @@ import numpy as np from pandas.compat import range, lrange, lzip, zip, map, filter import pandas.compat as compat + from pandas.core.categorical import Categorical from pandas.core.frame import DataFrame, _merge_doc from pandas.core.generic import NDFrame @@ -22,6 +23,7 @@ import pandas.core.algorithms as algos import pandas.core.common as com import pandas.types.concat as _concat +from pandas.types.api import na_value_for_dtype import pandas.algos as _algos import pandas.hashtable as _hash @@ -280,55 +282,78 @@ def _indicator_post_merge(self, result): return result def _maybe_add_join_keys(self, result, left_indexer, right_indexer): - # insert group keys + + left_has_missing = None + right_has_missing = None keys = zip(self.join_names, self.left_on, self.right_on) for i, (name, lname, rname) in enumerate(keys): if not _should_fill(lname, rname): continue + take_left, take_right = None, None + if name in result: - key_indexer = result.columns.get_loc(name) if left_indexer is not None and right_indexer is not None: - if name in self.left: - if len(self.left) == 0: - continue - na_indexer = (left_indexer == -1).nonzero()[0] - if len(na_indexer) 
== 0: - continue + if left_has_missing is None: + left_has_missing = any(left_indexer == -1) + + if left_has_missing: + take_right = self.right_join_keys[i] + + if not com.is_dtype_equal(result[name].dtype, + self.left[name].dtype): + take_left = self.left[name]._values - right_na_indexer = right_indexer.take(na_indexer) - result.iloc[na_indexer, key_indexer] = ( - algos.take_1d(self.right_join_keys[i], - right_na_indexer)) elif name in self.right: - if len(self.right) == 0: - continue - na_indexer = (right_indexer == -1).nonzero()[0] - if len(na_indexer) == 0: - continue + if right_has_missing is None: + right_has_missing = any(right_indexer == -1) + + if right_has_missing: + take_left = self.left_join_keys[i] + + if not com.is_dtype_equal(result[name].dtype, + self.right[name].dtype): + take_right = self.right[name]._values - left_na_indexer = left_indexer.take(na_indexer) - result.iloc[na_indexer, key_indexer] = ( - algos.take_1d(self.left_join_keys[i], - left_na_indexer)) elif left_indexer is not None \ and isinstance(self.left_join_keys[i], np.ndarray): - if name is None: - name = 'key_%d' % i + take_left = self.left_join_keys[i] + take_right = self.right_join_keys[i] - # a faster way? 
- key_col = algos.take_1d(self.left_join_keys[i], left_indexer) - na_indexer = (left_indexer == -1).nonzero()[0] - right_na_indexer = right_indexer.take(na_indexer) - key_col.put(na_indexer, algos.take_1d(self.right_join_keys[i], - right_na_indexer)) - result.insert(i, name, key_col) + if take_left is not None or take_right is not None: + + if take_left is None: + lvals = result[name]._values + else: + lfill = na_value_for_dtype(take_left.dtype) + lvals = algos.take_1d(take_left, left_indexer, + fill_value=lfill) + + if take_right is None: + rvals = result[name]._values + else: + rfill = na_value_for_dtype(take_right.dtype) + rvals = algos.take_1d(take_right, right_indexer, + fill_value=rfill) + + # if we have an all missing left_indexer + # make sure to just use the right values + mask = left_indexer == -1 + if mask.all(): + key_col = rvals + else: + key_col = Index(lvals).where(~mask, rvals) + + if name in result: + result[name] = key_col + else: + result.insert(i, name or 'key_%d' % i, key_col) def _get_join_info(self): left_ax = self.left._data.axes[self.axis] diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py index de79e54e22270..a4e6cc404a457 100644 --- a/pandas/tools/pivot.py +++ b/pandas/tools/pivot.py @@ -410,7 +410,11 @@ def crosstab(index, columns, values=None, rownames=None, colnames=None, Notes ----- Any Series passed will have their name attributes used unless row or column - names for the cross-tabulation are specified + names for the cross-tabulation are specified. + + Any input passed containing Categorical data will have **all** of its + categories included in the cross-tabulation, even if the actual data does + not contain any instances of a particular category. In the event that there aren't overlapping indexes an empty DataFrame will be returned. 
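The `pivot.py` hunk above documents that Categorical inputs contribute **all** of their declared categories to the cross-tabulation, even categories with no observations. A minimal standalone sketch of that behavior (illustrative only, not part of the patch; `dropna=False` is passed explicitly because later pandas versions drop empty category combinations by default, whereas at the time of this change they were kept):

```python
import pandas as pd

# Categoricals whose declared categories include values absent from the data
foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c'])
bar = pd.Categorical(['d', 'e'], categories=['d', 'e', 'f'])

# dropna=False keeps the empty 'c' row and 'f' column in the result,
# matching the behavior the docstring addition describes
table = pd.crosstab(foo, bar, dropna=False)
print(table)
```

The resulting 3x3 table has all-zero entries for the unobserved `'c'` row and `'f'` column, which is exactly the case the new docstring example covers.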
@@ -434,6 +438,16 @@ def crosstab(index, columns, values=None, rownames=None, colnames=None, bar 1 2 1 0 foo 2 2 1 2 + >>> foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c']) + >>> bar = pd.Categorical(['d', 'e'], categories=['d', 'e', 'f']) + >>> crosstab(foo, bar) # 'c' and 'f' are not represented in the data, + # but they still will be counted in the output + col_0 d e f + row_0 + a 1 0 0 + b 0 1 0 + c 0 0 0 + Returns ------- crosstab : DataFrame diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 808c9d22c53c8..baca8045f0cc1 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -1331,6 +1331,10 @@ def _plot(cls, ax, x, y, style=None, is_errorbar=False, **kwds): x = x._mpl_repr() if is_errorbar: + if 'xerr' in kwds: + kwds['xerr'] = np.array(kwds.get('xerr')) + if 'yerr' in kwds: + kwds['yerr'] = np.array(kwds.get('yerr')) return ax.errorbar(x, y, **kwds) else: # prevent style kwarg from going to errorbar, where it is diff --git a/pandas/tools/tests/test_concat.py b/pandas/tools/tests/test_concat.py new file mode 100644 index 0000000000000..9d9b0635e0f35 --- /dev/null +++ b/pandas/tools/tests/test_concat.py @@ -0,0 +1,1037 @@ +import nose + +import numpy as np +from numpy.random import randn + +from datetime import datetime +from pandas.compat import StringIO +import pandas as pd +from pandas import (DataFrame, concat, + read_csv, isnull, Series, date_range, + Index, Panel, MultiIndex, Timestamp, + DatetimeIndex) +from pandas.util import testing as tm +from pandas.util.testing import (assert_frame_equal, + makeCustomDataframe as mkdf, + assert_almost_equal) + + +class TestConcatenate(tm.TestCase): + + _multiprocess_can_split_ = True + + def setUp(self): + self.frame = DataFrame(tm.getSeriesData()) + self.mixed_frame = self.frame.copy() + self.mixed_frame['foo'] = 'bar' + + def test_append(self): + begin_index = self.frame.index[:5] + end_index = self.frame.index[5:] + + begin_frame = self.frame.reindex(begin_index) + 
end_frame = self.frame.reindex(end_index) + + appended = begin_frame.append(end_frame) + assert_almost_equal(appended['A'], self.frame['A']) + + del end_frame['A'] + partial_appended = begin_frame.append(end_frame) + self.assertIn('A', partial_appended) + + partial_appended = end_frame.append(begin_frame) + self.assertIn('A', partial_appended) + + # mixed type handling + appended = self.mixed_frame[:5].append(self.mixed_frame[5:]) + assert_frame_equal(appended, self.mixed_frame) + + # what to test here + mixed_appended = self.mixed_frame[:5].append(self.frame[5:]) + mixed_appended2 = self.frame[:5].append(self.mixed_frame[5:]) + + # all equal except 'foo' column + assert_frame_equal( + mixed_appended.reindex(columns=['A', 'B', 'C', 'D']), + mixed_appended2.reindex(columns=['A', 'B', 'C', 'D'])) + + # append empty + empty = DataFrame({}) + + appended = self.frame.append(empty) + assert_frame_equal(self.frame, appended) + self.assertIsNot(appended, self.frame) + + appended = empty.append(self.frame) + assert_frame_equal(self.frame, appended) + self.assertIsNot(appended, self.frame) + + # overlap + self.assertRaises(ValueError, self.frame.append, self.frame, + verify_integrity=True) + + # new columns + # GH 6129 + df = DataFrame({'a': {'x': 1, 'y': 2}, 'b': {'x': 3, 'y': 4}}) + row = Series([5, 6, 7], index=['a', 'b', 'c'], name='z') + expected = DataFrame({'a': {'x': 1, 'y': 2, 'z': 5}, 'b': { + 'x': 3, 'y': 4, 'z': 6}, 'c': {'z': 7}}) + result = df.append(row) + assert_frame_equal(result, expected) + + def test_append_length0_frame(self): + df = DataFrame(columns=['A', 'B', 'C']) + df3 = DataFrame(index=[0, 1], columns=['A', 'B']) + df5 = df.append(df3) + + expected = DataFrame(index=[0, 1], columns=['A', 'B', 'C']) + assert_frame_equal(df5, expected) + + def test_append_records(self): + arr1 = np.zeros((2,), dtype=('i4,f4,a10')) + arr1[:] = [(1, 2., 'Hello'), (2, 3., "World")] + + arr2 = np.zeros((3,), dtype=('i4,f4,a10')) + arr2[:] = [(3, 4., 'foo'), + (5, 6., 
"bar"), + (7., 8., 'baz')] + + df1 = DataFrame(arr1) + df2 = DataFrame(arr2) + + result = df1.append(df2, ignore_index=True) + expected = DataFrame(np.concatenate((arr1, arr2))) + assert_frame_equal(result, expected) + + def test_append_different_columns(self): + df = DataFrame({'bools': np.random.randn(10) > 0, + 'ints': np.random.randint(0, 10, 10), + 'floats': np.random.randn(10), + 'strings': ['foo', 'bar'] * 5}) + + a = df[:5].ix[:, ['bools', 'ints', 'floats']] + b = df[5:].ix[:, ['strings', 'ints', 'floats']] + + appended = a.append(b) + self.assertTrue(isnull(appended['strings'][0:4]).all()) + self.assertTrue(isnull(appended['bools'][5:]).all()) + + def test_append_many(self): + chunks = [self.frame[:5], self.frame[5:10], + self.frame[10:15], self.frame[15:]] + + result = chunks[0].append(chunks[1:]) + tm.assert_frame_equal(result, self.frame) + + chunks[-1] = chunks[-1].copy() + chunks[-1]['foo'] = 'bar' + result = chunks[0].append(chunks[1:]) + tm.assert_frame_equal(result.ix[:, self.frame.columns], self.frame) + self.assertTrue((result['foo'][15:] == 'bar').all()) + self.assertTrue(result['foo'][:15].isnull().all()) + + def test_append_preserve_index_name(self): + # #980 + df1 = DataFrame(data=None, columns=['A', 'B', 'C']) + df1 = df1.set_index(['A']) + df2 = DataFrame(data=[[1, 4, 7], [2, 5, 8], [3, 6, 9]], + columns=['A', 'B', 'C']) + df2 = df2.set_index(['A']) + + result = df1.append(df2) + self.assertEqual(result.index.name, 'A') + + def test_join_many(self): + df = DataFrame(np.random.randn(10, 6), columns=list('abcdef')) + df_list = [df[['a', 'b']], df[['c', 'd']], df[['e', 'f']]] + + joined = df_list[0].join(df_list[1:]) + tm.assert_frame_equal(joined, df) + + df_list = [df[['a', 'b']][:-2], + df[['c', 'd']][2:], df[['e', 'f']][1:9]] + + def _check_diff_index(df_list, result, exp_index): + reindexed = [x.reindex(exp_index) for x in df_list] + expected = reindexed[0].join(reindexed[1:]) + tm.assert_frame_equal(result, expected) + + # different join 
types + joined = df_list[0].join(df_list[1:], how='outer') + _check_diff_index(df_list, joined, df.index) + + joined = df_list[0].join(df_list[1:]) + _check_diff_index(df_list, joined, df_list[0].index) + + joined = df_list[0].join(df_list[1:], how='inner') + _check_diff_index(df_list, joined, df.index[2:8]) + + self.assertRaises(ValueError, df_list[0].join, df_list[1:], on='a') + + def test_join_many_mixed(self): + df = DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D']) + df['key'] = ['foo', 'bar'] * 4 + df1 = df.ix[:, ['A', 'B']] + df2 = df.ix[:, ['C', 'D']] + df3 = df.ix[:, ['key']] + + result = df1.join([df2, df3]) + assert_frame_equal(result, df) + + def test_append_missing_column_proper_upcast(self): + df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8')}) + df2 = DataFrame({'B': np.array([True, False, True, False], + dtype=bool)}) + + appended = df1.append(df2, ignore_index=True) + self.assertEqual(appended['A'].dtype, 'f8') + self.assertEqual(appended['B'].dtype, 'O') + + def test_concat_copy(self): + + df = DataFrame(np.random.randn(4, 3)) + df2 = DataFrame(np.random.randint(0, 10, size=4).reshape(4, 1)) + df3 = DataFrame({5: 'foo'}, index=range(4)) + + # these are actual copies + result = concat([df, df2, df3], axis=1, copy=True) + for b in result._data.blocks: + self.assertIsNone(b.values.base) + + # these are the same + result = concat([df, df2, df3], axis=1, copy=False) + for b in result._data.blocks: + if b.is_float: + self.assertTrue( + b.values.base is df._data.blocks[0].values.base) + elif b.is_integer: + self.assertTrue( + b.values.base is df2._data.blocks[0].values.base) + elif b.is_object: + self.assertIsNotNone(b.values.base) + + # float block was consolidated + df4 = DataFrame(np.random.randn(4, 1)) + result = concat([df, df2, df3, df4], axis=1, copy=False) + for b in result._data.blocks: + if b.is_float: + self.assertIsNone(b.values.base) + elif b.is_integer: + self.assertTrue( + b.values.base is 
df2._data.blocks[0].values.base) + elif b.is_object: + self.assertIsNotNone(b.values.base) + + def test_concat_with_group_keys(self): + df = DataFrame(np.random.randn(4, 3)) + df2 = DataFrame(np.random.randn(4, 4)) + + # axis=0 + df = DataFrame(np.random.randn(3, 4)) + df2 = DataFrame(np.random.randn(4, 4)) + + result = concat([df, df2], keys=[0, 1]) + exp_index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1, 1], + [0, 1, 2, 0, 1, 2, 3]]) + expected = DataFrame(np.r_[df.values, df2.values], + index=exp_index) + tm.assert_frame_equal(result, expected) + + result = concat([df, df], keys=[0, 1]) + exp_index2 = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1], + [0, 1, 2, 0, 1, 2]]) + expected = DataFrame(np.r_[df.values, df.values], + index=exp_index2) + tm.assert_frame_equal(result, expected) + + # axis=1 + df = DataFrame(np.random.randn(4, 3)) + df2 = DataFrame(np.random.randn(4, 4)) + + result = concat([df, df2], keys=[0, 1], axis=1) + expected = DataFrame(np.c_[df.values, df2.values], + columns=exp_index) + tm.assert_frame_equal(result, expected) + + result = concat([df, df], keys=[0, 1], axis=1) + expected = DataFrame(np.c_[df.values, df.values], + columns=exp_index2) + tm.assert_frame_equal(result, expected) + + def test_concat_keys_specific_levels(self): + df = DataFrame(np.random.randn(10, 4)) + pieces = [df.ix[:, [0, 1]], df.ix[:, [2]], df.ix[:, [3]]] + level = ['three', 'two', 'one', 'zero'] + result = concat(pieces, axis=1, keys=['one', 'two', 'three'], + levels=[level], + names=['group_key']) + + self.assert_index_equal(result.columns.levels[0], + Index(level, name='group_key')) + self.assertEqual(result.columns.names[0], 'group_key') + + def test_concat_dataframe_keys_bug(self): + t1 = DataFrame({ + 'value': Series([1, 2, 3], index=Index(['a', 'b', 'c'], + name='id'))}) + t2 = DataFrame({ + 'value': Series([7, 8], index=Index(['a', 'b'], name='id'))}) + + # it works + result = concat([t1, t2], axis=1, keys=['t1', 't2']) + self.assertEqual(list(result.columns), 
[('t1', 'value'), + ('t2', 'value')]) + + def test_concat_series_partial_columns_names(self): + # GH10698 + foo = Series([1, 2], name='foo') + bar = Series([1, 2]) + baz = Series([4, 5]) + + result = concat([foo, bar, baz], axis=1) + expected = DataFrame({'foo': [1, 2], 0: [1, 2], 1: [ + 4, 5]}, columns=['foo', 0, 1]) + tm.assert_frame_equal(result, expected) + + result = concat([foo, bar, baz], axis=1, keys=[ + 'red', 'blue', 'yellow']) + expected = DataFrame({'red': [1, 2], 'blue': [1, 2], 'yellow': [ + 4, 5]}, columns=['red', 'blue', 'yellow']) + tm.assert_frame_equal(result, expected) + + result = concat([foo, bar, baz], axis=1, ignore_index=True) + expected = DataFrame({0: [1, 2], 1: [1, 2], 2: [4, 5]}) + tm.assert_frame_equal(result, expected) + + def test_concat_dict(self): + frames = {'foo': DataFrame(np.random.randn(4, 3)), + 'bar': DataFrame(np.random.randn(4, 3)), + 'baz': DataFrame(np.random.randn(4, 3)), + 'qux': DataFrame(np.random.randn(4, 3))} + + sorted_keys = sorted(frames) + + result = concat(frames) + expected = concat([frames[k] for k in sorted_keys], keys=sorted_keys) + tm.assert_frame_equal(result, expected) + + result = concat(frames, axis=1) + expected = concat([frames[k] for k in sorted_keys], keys=sorted_keys, + axis=1) + tm.assert_frame_equal(result, expected) + + keys = ['baz', 'foo', 'bar'] + result = concat(frames, keys=keys) + expected = concat([frames[k] for k in keys], keys=keys) + tm.assert_frame_equal(result, expected) + + def test_concat_ignore_index(self): + frame1 = DataFrame({"test1": ["a", "b", "c"], + "test2": [1, 2, 3], + "test3": [4.5, 3.2, 1.2]}) + frame2 = DataFrame({"test3": [5.2, 2.2, 4.3]}) + frame1.index = Index(["x", "y", "z"]) + frame2.index = Index(["x", "y", "q"]) + + v1 = concat([frame1, frame2], axis=1, ignore_index=True) + + nan = np.nan + expected = DataFrame([[nan, nan, nan, 4.3], + ['a', 1, 4.5, 5.2], + ['b', 2, 3.2, 2.2], + ['c', 3, 1.2, nan]], + index=Index(["q", "x", "y", "z"])) + + 
tm.assert_frame_equal(v1, expected) + + def test_concat_multiindex_with_keys(self): + index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], + ['one', 'two', 'three']], + labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], + [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], + names=['first', 'second']) + frame = DataFrame(np.random.randn(10, 3), index=index, + columns=Index(['A', 'B', 'C'], name='exp')) + result = concat([frame, frame], keys=[0, 1], names=['iteration']) + + self.assertEqual(result.index.names, ('iteration',) + index.names) + tm.assert_frame_equal(result.ix[0], frame) + tm.assert_frame_equal(result.ix[1], frame) + self.assertEqual(result.index.nlevels, 3) + + def test_concat_multiindex_with_tz(self): + # GH 6606 + df = DataFrame({'dt': [datetime(2014, 1, 1), + datetime(2014, 1, 2), + datetime(2014, 1, 3)], + 'b': ['A', 'B', 'C'], + 'c': [1, 2, 3], 'd': [4, 5, 6]}) + df['dt'] = df['dt'].apply(lambda d: Timestamp(d, tz='US/Pacific')) + df = df.set_index(['dt', 'b']) + + exp_idx1 = DatetimeIndex(['2014-01-01', '2014-01-02', + '2014-01-03'] * 2, + tz='US/Pacific', name='dt') + exp_idx2 = Index(['A', 'B', 'C'] * 2, name='b') + exp_idx = MultiIndex.from_arrays([exp_idx1, exp_idx2]) + expected = DataFrame({'c': [1, 2, 3] * 2, 'd': [4, 5, 6] * 2}, + index=exp_idx, columns=['c', 'd']) + + result = concat([df, df]) + tm.assert_frame_equal(result, expected) + + def test_concat_keys_and_levels(self): + df = DataFrame(np.random.randn(1, 3)) + df2 = DataFrame(np.random.randn(1, 4)) + + levels = [['foo', 'baz'], ['one', 'two']] + names = ['first', 'second'] + result = concat([df, df2, df, df2], + keys=[('foo', 'one'), ('foo', 'two'), + ('baz', 'one'), ('baz', 'two')], + levels=levels, + names=names) + expected = concat([df, df2, df, df2]) + exp_index = MultiIndex(levels=levels + [[0]], + labels=[[0, 0, 1, 1], [0, 1, 0, 1], + [0, 0, 0, 0]], + names=names + [None]) + expected.index = exp_index + + assert_frame_equal(result, expected) + + # no names + + result = concat([df, df2, df, df2], + 
keys=[('foo', 'one'), ('foo', 'two'), + ('baz', 'one'), ('baz', 'two')], + levels=levels) + self.assertEqual(result.index.names, (None,) * 3) + + # no levels + result = concat([df, df2, df, df2], + keys=[('foo', 'one'), ('foo', 'two'), + ('baz', 'one'), ('baz', 'two')], + names=['first', 'second']) + self.assertEqual(result.index.names, ('first', 'second') + (None,)) + self.assert_index_equal(result.index.levels[0], + Index(['baz', 'foo'], name='first')) + + def test_concat_keys_levels_no_overlap(self): + # GH #1406 + df = DataFrame(np.random.randn(1, 3), index=['a']) + df2 = DataFrame(np.random.randn(1, 4), index=['b']) + + self.assertRaises(ValueError, concat, [df, df], + keys=['one', 'two'], levels=[['foo', 'bar', 'baz']]) + + self.assertRaises(ValueError, concat, [df, df2], + keys=['one', 'two'], levels=[['foo', 'bar', 'baz']]) + + def test_concat_rename_index(self): + a = DataFrame(np.random.rand(3, 3), + columns=list('ABC'), + index=Index(list('abc'), name='index_a')) + b = DataFrame(np.random.rand(3, 3), + columns=list('ABC'), + index=Index(list('abc'), name='index_b')) + + result = concat([a, b], keys=['key0', 'key1'], + names=['lvl0', 'lvl1']) + + exp = concat([a, b], keys=['key0', 'key1'], names=['lvl0']) + names = list(exp.index.names) + names[1] = 'lvl1' + exp.index.set_names(names, inplace=True) + + tm.assert_frame_equal(result, exp) + self.assertEqual(result.index.names, exp.index.names) + + def test_crossed_dtypes_weird_corner(self): + columns = ['A', 'B', 'C', 'D'] + df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='f8'), + 'B': np.array([1, 2, 3, 4], dtype='i8'), + 'C': np.array([1, 2, 3, 4], dtype='f8'), + 'D': np.array([1, 2, 3, 4], dtype='i8')}, + columns=columns) + + df2 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8'), + 'B': np.array([1, 2, 3, 4], dtype='f8'), + 'C': np.array([1, 2, 3, 4], dtype='i8'), + 'D': np.array([1, 2, 3, 4], dtype='f8')}, + columns=columns) + + appended = df1.append(df2, ignore_index=True) + expected = 
DataFrame(np.concatenate([df1.values, df2.values], axis=0), + columns=columns) + tm.assert_frame_equal(appended, expected) + + df = DataFrame(np.random.randn(1, 3), index=['a']) + df2 = DataFrame(np.random.randn(1, 4), index=['b']) + result = concat( + [df, df2], keys=['one', 'two'], names=['first', 'second']) + self.assertEqual(result.index.names, ('first', 'second')) + + def test_dups_index(self): + # GH 4771 + + # single dtypes + df = DataFrame(np.random.randint(0, 10, size=40).reshape( + 10, 4), columns=['A', 'A', 'C', 'C']) + + result = concat([df, df], axis=1) + assert_frame_equal(result.iloc[:, :4], df) + assert_frame_equal(result.iloc[:, 4:], df) + + result = concat([df, df], axis=0) + assert_frame_equal(result.iloc[:10], df) + assert_frame_equal(result.iloc[10:], df) + + # multi dtypes + df = concat([DataFrame(np.random.randn(10, 4), + columns=['A', 'A', 'B', 'B']), + DataFrame(np.random.randint(0, 10, size=20) + .reshape(10, 2), + columns=['A', 'C'])], + axis=1) + + result = concat([df, df], axis=1) + assert_frame_equal(result.iloc[:, :6], df) + assert_frame_equal(result.iloc[:, 6:], df) + + result = concat([df, df], axis=0) + assert_frame_equal(result.iloc[:10], df) + assert_frame_equal(result.iloc[10:], df) + + # append + result = df.iloc[0:8, :].append(df.iloc[8:]) + assert_frame_equal(result, df) + + result = df.iloc[0:8, :].append(df.iloc[8:9]).append(df.iloc[9:10]) + assert_frame_equal(result, df) + + expected = concat([df, df], axis=0) + result = df.append(df) + assert_frame_equal(result, expected) + + def test_with_mixed_tuples(self): + # 10697 + # columns have mixed tuples, so handle properly + df1 = DataFrame({u'A': 'foo', (u'B', 1): 'bar'}, index=range(2)) + df2 = DataFrame({u'B': 'foo', (u'B', 1): 'bar'}, index=range(2)) + + # it works + concat([df1, df2]) + + def test_join_dups(self): + + # joining dups + df = concat([DataFrame(np.random.randn(10, 4), + columns=['A', 'A', 'B', 'B']), + DataFrame(np.random.randint(0, 10, size=20) + 
.reshape(10, 2), + columns=['A', 'C'])], + axis=1) + + expected = concat([df, df], axis=1) + result = df.join(df, rsuffix='_2') + result.columns = expected.columns + assert_frame_equal(result, expected) + + # GH 4975, invalid join on dups + w = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) + x = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) + y = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) + z = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) + + dta = x.merge(y, left_index=True, right_index=True).merge( + z, left_index=True, right_index=True, how="outer") + dta = dta.merge(w, left_index=True, right_index=True) + expected = concat([x, y, z, w], axis=1) + expected.columns = ['x_x', 'y_x', 'x_y', + 'y_y', 'x_x', 'y_x', 'x_y', 'y_y'] + assert_frame_equal(dta, expected) + + def test_handle_empty_objects(self): + df = DataFrame(np.random.randn(10, 4), columns=list('abcd')) + + baz = df[:5].copy() + baz['foo'] = 'bar' + empty = df[5:5] + + frames = [baz, empty, empty, df[5:]] + concatted = concat(frames, axis=0) + + expected = df.ix[:, ['a', 'b', 'c', 'd', 'foo']] + expected['foo'] = expected['foo'].astype('O') + expected.loc[0:4, 'foo'] = 'bar' + + tm.assert_frame_equal(concatted, expected) + + # empty as first element with time series + # GH3259 + df = DataFrame(dict(A=range(10000)), index=date_range( + '20130101', periods=10000, freq='s')) + empty = DataFrame() + result = concat([df, empty], axis=1) + assert_frame_equal(result, df) + result = concat([empty, df], axis=1) + assert_frame_equal(result, df) + + result = concat([df, empty]) + assert_frame_equal(result, df) + result = concat([empty, df]) + assert_frame_equal(result, df) + + def test_concat_mixed_objs(self): + + # concat mixed series/frames + # G2385 + + # axis 1 + index = date_range('01-Jan-2013', periods=10, freq='H') + arr = np.arange(10, dtype='int64') + s1 = Series(arr, index=index) + s2 = Series(arr, index=index) + df = DataFrame(arr.reshape(-1, 1), index=index) + + expected 
= DataFrame(np.repeat(arr, 2).reshape(-1, 2), + index=index, columns=[0, 0]) + result = concat([df, df], axis=1) + assert_frame_equal(result, expected) + + expected = DataFrame(np.repeat(arr, 2).reshape(-1, 2), + index=index, columns=[0, 1]) + result = concat([s1, s2], axis=1) + assert_frame_equal(result, expected) + + expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), + index=index, columns=[0, 1, 2]) + result = concat([s1, s2, s1], axis=1) + assert_frame_equal(result, expected) + + expected = DataFrame(np.repeat(arr, 5).reshape(-1, 5), + index=index, columns=[0, 0, 1, 2, 3]) + result = concat([s1, df, s2, s2, s1], axis=1) + assert_frame_equal(result, expected) + + # with names + s1.name = 'foo' + expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), + index=index, columns=['foo', 0, 0]) + result = concat([s1, df, s2], axis=1) + assert_frame_equal(result, expected) + + s2.name = 'bar' + expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), + index=index, columns=['foo', 0, 'bar']) + result = concat([s1, df, s2], axis=1) + assert_frame_equal(result, expected) + + # ignore index + expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), + index=index, columns=[0, 1, 2]) + result = concat([s1, df, s2], axis=1, ignore_index=True) + assert_frame_equal(result, expected) + + # axis 0 + expected = DataFrame(np.tile(arr, 3).reshape(-1, 1), + index=index.tolist() * 3, columns=[0]) + result = concat([s1, df, s2]) + assert_frame_equal(result, expected) + + expected = DataFrame(np.tile(arr, 3).reshape(-1, 1), columns=[0]) + result = concat([s1, df, s2], ignore_index=True) + assert_frame_equal(result, expected) + + # invalid concatenate of mixed dims + panel = tm.makePanel() + self.assertRaises(ValueError, lambda: concat([panel, s1], axis=1)) + + def test_panel_join(self): + panel = tm.makePanel() + tm.add_nans(panel) + + p1 = panel.ix[:2, :10, :3] + p2 = panel.ix[2:, 5:, 2:] + + # left join + result = p1.join(p2) + expected = p1.copy() + expected['ItemC'] = p2['ItemC'] +
tm.assert_panel_equal(result, expected) + + # right join + result = p1.join(p2, how='right') + expected = p2.copy() + expected['ItemA'] = p1['ItemA'] + expected['ItemB'] = p1['ItemB'] + expected = expected.reindex(items=['ItemA', 'ItemB', 'ItemC']) + tm.assert_panel_equal(result, expected) + + # inner join + result = p1.join(p2, how='inner') + expected = panel.ix[:, 5:10, 2:3] + tm.assert_panel_equal(result, expected) + + # outer join + result = p1.join(p2, how='outer') + expected = p1.reindex(major=panel.major_axis, + minor=panel.minor_axis) + expected = expected.join(p2.reindex(major=panel.major_axis, + minor=panel.minor_axis)) + tm.assert_panel_equal(result, expected) + + def test_panel_join_overlap(self): + panel = tm.makePanel() + tm.add_nans(panel) + + p1 = panel.ix[['ItemA', 'ItemB', 'ItemC']] + p2 = panel.ix[['ItemB', 'ItemC']] + + # Expected index is + # + # ItemA, ItemB_p1, ItemC_p1, ItemB_p2, ItemC_p2 + joined = p1.join(p2, lsuffix='_p1', rsuffix='_p2') + p1_suf = p1.ix[['ItemB', 'ItemC']].add_suffix('_p1') + p2_suf = p2.ix[['ItemB', 'ItemC']].add_suffix('_p2') + no_overlap = panel.ix[['ItemA']] + expected = no_overlap.join(p1_suf.join(p2_suf)) + tm.assert_panel_equal(joined, expected) + + def test_panel_join_many(self): + tm.K = 10 + panel = tm.makePanel() + tm.K = 4 + + panels = [panel.ix[:2], panel.ix[2:6], panel.ix[6:]] + + joined = panels[0].join(panels[1:]) + tm.assert_panel_equal(joined, panel) + + panels = [panel.ix[:2, :-5], panel.ix[2:6, 2:], panel.ix[6:, 5:-7]] + + data_dict = {} + for p in panels: + data_dict.update(p.iteritems()) + + joined = panels[0].join(panels[1:], how='inner') + expected = Panel.from_dict(data_dict, intersect=True) + tm.assert_panel_equal(joined, expected) + + joined = panels[0].join(panels[1:], how='outer') + expected = Panel.from_dict(data_dict, intersect=False) + tm.assert_panel_equal(joined, expected) + + # edge cases + self.assertRaises(ValueError, panels[0].join, panels[1:], + how='outer', lsuffix='foo', 
rsuffix='bar') + self.assertRaises(ValueError, panels[0].join, panels[1:], + how='right') + + def test_panel_concat_other_axes(self): + panel = tm.makePanel() + + p1 = panel.ix[:, :5, :] + p2 = panel.ix[:, 5:, :] + + result = concat([p1, p2], axis=1) + tm.assert_panel_equal(result, panel) + + p1 = panel.ix[:, :, :2] + p2 = panel.ix[:, :, 2:] + + result = concat([p1, p2], axis=2) + tm.assert_panel_equal(result, panel) + + # if things are a bit misbehaved + p1 = panel.ix[:2, :, :2] + p2 = panel.ix[:, :, 2:] + p1['ItemC'] = 'baz' + + result = concat([p1, p2], axis=2) + + expected = panel.copy() + expected['ItemC'] = expected['ItemC'].astype('O') + expected.ix['ItemC', :, :2] = 'baz' + tm.assert_panel_equal(result, expected) + + def test_panel_concat_buglet(self): + # #2257 + def make_panel(): + index = 5 + cols = 3 + + def df(): + return DataFrame(np.random.randn(index, cols), + index=["I%s" % i for i in range(index)], + columns=["C%s" % i for i in range(cols)]) + return Panel(dict([("Item%s" % x, df()) for x in ['A', 'B', 'C']])) + + panel1 = make_panel() + panel2 = make_panel() + + panel2 = panel2.rename_axis(dict([(x, "%s_1" % x) + for x in panel2.major_axis]), + axis=1) + + panel3 = panel2.rename_axis(lambda x: '%s_1' % x, axis=1) + panel3 = panel3.rename_axis(lambda x: '%s_1' % x, axis=2) + + # it works! 
+ concat([panel1, panel3], axis=1, verify_integrity=True) + + def test_panel4d_concat(self): + p4d = tm.makePanel4D() + + p1 = p4d.ix[:, :, :5, :] + p2 = p4d.ix[:, :, 5:, :] + + result = concat([p1, p2], axis=2) + tm.assert_panel4d_equal(result, p4d) + + p1 = p4d.ix[:, :, :, :2] + p2 = p4d.ix[:, :, :, 2:] + + result = concat([p1, p2], axis=3) + tm.assert_panel4d_equal(result, p4d) + + def test_panel4d_concat_mixed_type(self): + p4d = tm.makePanel4D() + + # if things are a bit misbehaved + p1 = p4d.ix[:, :2, :, :2] + p2 = p4d.ix[:, :, :, 2:] + p1['L5'] = 'baz' + + result = concat([p1, p2], axis=3) + + p2['L5'] = np.nan + expected = concat([p1, p2], axis=3) + expected = expected.ix[result.labels] + + tm.assert_panel4d_equal(result, expected) + + def test_concat_series(self): + + ts = tm.makeTimeSeries() + ts.name = 'foo' + + pieces = [ts[:5], ts[5:15], ts[15:]] + + result = concat(pieces) + tm.assert_series_equal(result, ts) + self.assertEqual(result.name, ts.name) + + result = concat(pieces, keys=[0, 1, 2]) + expected = ts.copy() + + ts.index = DatetimeIndex(np.array(ts.index.values, dtype='M8[ns]')) + + exp_labels = [np.repeat([0, 1, 2], [len(x) for x in pieces]), + np.arange(len(ts))] + exp_index = MultiIndex(levels=[[0, 1, 2], ts.index], + labels=exp_labels) + expected.index = exp_index + tm.assert_series_equal(result, expected) + + def test_concat_series_axis1(self): + ts = tm.makeTimeSeries() + + pieces = [ts[:-2], ts[2:], ts[2:-2]] + + result = concat(pieces, axis=1) + expected = DataFrame(pieces).T + assert_frame_equal(result, expected) + + result = concat(pieces, keys=['A', 'B', 'C'], axis=1) + expected = DataFrame(pieces, index=['A', 'B', 'C']).T + assert_frame_equal(result, expected) + + # preserve series names, #2489 + s = Series(randn(5), name='A') + s2 = Series(randn(5), name='B') + + result = concat([s, s2], axis=1) + expected = DataFrame({'A': s, 'B': s2}) + assert_frame_equal(result, expected) + + s2.name = None + result = concat([s, s2], axis=1) + 
self.assertTrue(np.array_equal( + result.columns, Index(['A', 0], dtype='object'))) + + # must reindex, #2603 + s = Series(randn(3), index=['c', 'a', 'b'], name='A') + s2 = Series(randn(4), index=['d', 'a', 'b', 'c'], name='B') + result = concat([s, s2], axis=1) + expected = DataFrame({'A': s, 'B': s2}) + assert_frame_equal(result, expected) + + def test_concat_single_with_key(self): + df = DataFrame(np.random.randn(10, 4)) + + result = concat([df], keys=['foo']) + expected = concat([df, df], keys=['foo', 'bar']) + tm.assert_frame_equal(result, expected[:10]) + + def test_concat_exclude_none(self): + df = DataFrame(np.random.randn(10, 4)) + + pieces = [df[:5], None, None, df[5:]] + result = concat(pieces) + tm.assert_frame_equal(result, df) + self.assertRaises(ValueError, concat, [None, None]) + + def test_concat_datetime64_block(self): + from pandas.tseries.index import date_range + + rng = date_range('1/1/2000', periods=10) + + df = DataFrame({'time': rng}) + + result = concat([df, df]) + self.assertTrue((result.iloc[:10]['time'] == rng).all()) + self.assertTrue((result.iloc[10:]['time'] == rng).all()) + + def test_concat_timedelta64_block(self): + from pandas import to_timedelta + + rng = to_timedelta(np.arange(10), unit='s') + + df = DataFrame({'time': rng}) + + result = concat([df, df]) + self.assertTrue((result.iloc[:10]['time'] == rng).all()) + self.assertTrue((result.iloc[10:]['time'] == rng).all()) + + def test_concat_keys_with_none(self): + # #1649 + df0 = DataFrame([[10, 20, 30], [10, 20, 30], [10, 20, 30]]) + + result = concat(dict(a=None, b=df0, c=df0[:2], d=df0[:1], e=df0)) + expected = concat(dict(b=df0, c=df0[:2], d=df0[:1], e=df0)) + tm.assert_frame_equal(result, expected) + + result = concat([None, df0, df0[:2], df0[:1], df0], + keys=['a', 'b', 'c', 'd', 'e']) + expected = concat([df0, df0[:2], df0[:1], df0], + keys=['b', 'c', 'd', 'e']) + tm.assert_frame_equal(result, expected) + + def test_concat_bug_1719(self): + ts1 = tm.makeTimeSeries() + ts2 
= tm.makeTimeSeries()[::2] + + # to join with union + # these two are of different length! + left = concat([ts1, ts2], join='outer', axis=1) + right = concat([ts2, ts1], join='outer', axis=1) + + self.assertEqual(len(left), len(right)) + + def test_concat_bug_2972(self): + ts0 = Series(np.zeros(5)) + ts1 = Series(np.ones(5)) + ts0.name = ts1.name = 'same name' + result = concat([ts0, ts1], axis=1) + + expected = DataFrame({0: ts0, 1: ts1}) + expected.columns = ['same name', 'same name'] + assert_frame_equal(result, expected) + + def test_concat_bug_3602(self): + + # GH 3602, duplicate columns + df1 = DataFrame({'firmNo': [0, 0, 0, 0], 'stringvar': [ + 'rrr', 'rrr', 'rrr', 'rrr'], 'prc': [6, 6, 6, 6]}) + df2 = DataFrame({'misc': [1, 2, 3, 4], 'prc': [ + 6, 6, 6, 6], 'C': [9, 10, 11, 12]}) + expected = DataFrame([[0, 6, 'rrr', 9, 1, 6], + [0, 6, 'rrr', 10, 2, 6], + [0, 6, 'rrr', 11, 3, 6], + [0, 6, 'rrr', 12, 4, 6]]) + expected.columns = ['firmNo', 'prc', 'stringvar', 'C', 'misc', 'prc'] + + result = concat([df1, df2], axis=1) + assert_frame_equal(result, expected) + + def test_concat_series_axis1_same_names_ignore_index(self): + dates = date_range('01-Jan-2013', '01-Jan-2014', freq='MS')[0:-1] + s1 = Series(randn(len(dates)), index=dates, name='value') + s2 = Series(randn(len(dates)), index=dates, name='value') + + result = concat([s1, s2], axis=1, ignore_index=True) + self.assertTrue(np.array_equal(result.columns, [0, 1])) + + def test_concat_iterables(self): + from collections import deque, Iterable + + # GH8645 check concat works with tuples, list, generators, and weird + # stuff like deque and custom iterables + df1 = DataFrame([1, 2, 3]) + df2 = DataFrame([4, 5, 6]) + expected = DataFrame([1, 2, 3, 4, 5, 6]) + assert_frame_equal(concat((df1, df2), ignore_index=True), expected) + assert_frame_equal(concat([df1, df2], ignore_index=True), expected) + assert_frame_equal(concat((df for df in (df1, df2)), + ignore_index=True), expected) + assert_frame_equal( + 
concat(deque((df1, df2)), ignore_index=True), expected) + + class CustomIterator1(object): + + def __len__(self): + return 2 + + def __getitem__(self, index): + try: + return {0: df1, 1: df2}[index] + except KeyError: + raise IndexError + assert_frame_equal(pd.concat(CustomIterator1(), + ignore_index=True), expected) + + class CustomIterator2(Iterable): + + def __iter__(self): + yield df1 + yield df2 + assert_frame_equal(pd.concat(CustomIterator2(), + ignore_index=True), expected) + + def test_concat_invalid(self): + + # trying to concat an ndframe with a non-ndframe + df1 = mkdf(10, 2) + for obj in [1, dict(), [1, 2], (1, 2)]: + self.assertRaises(TypeError, lambda: concat([df1, obj])) + + def test_concat_invalid_first_argument(self): + df1 = mkdf(10, 2) + df2 = mkdf(10, 2) + self.assertRaises(TypeError, concat, df1, df2) + + # generator ok though + concat(DataFrame(np.random.rand(5, 5)) for _ in range(3)) + + # text reader ok + # GH6583 + data = """index,A,B,C,D +foo,2,3,4,5 +bar,7,8,9,10 +baz,12,13,14,15 +qux,12,13,14,15 +foo2,12,13,14,15 +bar2,12,13,14,15 +""" + + reader = read_csv(StringIO(data), chunksize=1) + result = concat(reader, ignore_index=True) + expected = read_csv(StringIO(data)) + assert_frame_equal(result, expected) + + +if __name__ == '__main__': + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], + exit=False) diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py index 13f00afb5a489..2505309768997 100644 --- a/pandas/tools/tests/test_merge.py +++ b/pandas/tools/tests/test_merge.py @@ -9,20 +9,17 @@ import random import pandas as pd -from pandas.compat import range, lrange, lzip, StringIO -from pandas import compat -from pandas.tseries.index import DatetimeIndex -from pandas.tools.merge import merge, concat, ordered_merge, MergeError -from pandas import Categorical, Timestamp -from pandas.util.testing import (assert_frame_equal, assert_series_equal, - assert_almost_equal, - makeCustomDataframe as 
mkdf, - assertRaisesRegexp) -from pandas import (isnull, DataFrame, Index, MultiIndex, Panel, - Series, date_range, read_csv) +from pandas.compat import range, lrange, lzip +from pandas.tools.merge import merge, concat, MergeError +from pandas.util.testing import (assert_frame_equal, + assert_series_equal, + slow) +from pandas import (DataFrame, Index, MultiIndex, + Series, date_range, Categorical, + compat) import pandas.algos as algos import pandas.util.testing as tm -from numpy.testing.decorators import slow + a_ = np.array @@ -203,8 +200,10 @@ def test_join_on(self): source = self.source merged = target.join(source, on='C') - self.assert_numpy_array_equal(merged['MergedA'], target['A']) - self.assert_numpy_array_equal(merged['MergedD'], target['D']) + self.assert_series_equal(merged['MergedA'], target['A'], + check_names=False) + self.assert_series_equal(merged['MergedD'], target['D'], + check_names=False) # join with duplicates (fix regression from DataFrame/Matrix merge) df = DataFrame({'key': ['a', 'a', 'b', 'b', 'c']}) @@ -289,7 +288,7 @@ def test_join_with_len0(self): merged2 = self.target.join(self.source.reindex([]), on='C', how='inner') - self.assertTrue(merged2.columns.equals(merged.columns)) + self.assert_index_equal(merged2.columns, merged.columns) self.assertEqual(len(merged2), 0) def test_join_on_inner(self): @@ -300,9 +299,11 @@ def test_join_on_inner(self): expected = df.join(df2, on='key') expected = expected[expected['value'].notnull()] - self.assert_numpy_array_equal(joined['key'], expected['key']) - self.assert_numpy_array_equal(joined['value'], expected['value']) - self.assertTrue(joined.index.equals(expected.index)) + self.assert_series_equal(joined['key'], expected['key'], + check_dtype=False) + self.assert_series_equal(joined['value'], expected['value'], + check_dtype=False) + self.assert_index_equal(joined.index, expected.index) def test_join_on_singlekey_list(self): df = DataFrame({'key': ['a', 'a', 'b', 'b', 'c']}) @@ -509,11 +510,10 
@@ def test_join_many_non_unique_index(self): expected = merge(df_partially_merged, df3, on=['a', 'b'], how='outer') result = result.reset_index() - - result['a'] = result['a'].astype(np.float64) - result['b'] = result['b'].astype(np.float64) - - assert_frame_equal(result, expected.ix[:, result.columns]) + expected = expected[result.columns] + expected['a'] = expected.a.astype('int64') + expected['b'] = expected.b.astype('int64') + assert_frame_equal(result, expected) df1 = DataFrame({"a": [1, 1, 1], "b": [1, 1, 1], "c": [10, 20, 30]}) df2 = DataFrame({"a": [1, 1, 1], "b": [1, 1, 2], "d": [100, 200, 300]}) @@ -666,7 +666,7 @@ def test_join_sort(self): # smoke test joined = left.join(right, on='key', sort=False) - self.assert_numpy_array_equal(joined.index, lrange(4)) + self.assert_index_equal(joined.index, pd.Index(lrange(4))) def test_intelligently_handle_join_key(self): # #733, be a bit more 1337 about not returning unconsolidated DataFrame @@ -677,14 +677,35 @@ def test_intelligently_handle_join_key(self): 'rvalue': lrange(6)}) joined = merge(left, right, on='key', how='outer') - expected = DataFrame({'key': [1, 1, 1, 1, 2, 2, 3, 4, 5.], + expected = DataFrame({'key': [1, 1, 1, 1, 2, 2, 3, 4, 5], 'value': np.array([0, 0, 1, 1, 2, 3, 4, np.nan, np.nan]), - 'rvalue': np.array([0, 1, 0, 1, 2, 2, 3, 4, 5])}, + 'rvalue': [0, 1, 0, 1, 2, 2, 3, 4, 5]}, columns=['value', 'key', 'rvalue']) - assert_frame_equal(joined, expected, check_dtype=False) + assert_frame_equal(joined, expected) + + def test_merge_join_key_dtype_cast(self): + # #8596 - self.assertTrue(joined._data.is_consolidated()) + df1 = DataFrame({'key': [1], 'v1': [10]}) + df2 = DataFrame({'key': [2], 'v1': [20]}) + df = merge(df1, df2, how='outer') + self.assertEqual(df['key'].dtype, 'int64') + + df1 = DataFrame({'key': [True], 'v1': [1]}) + df2 = DataFrame({'key': [False], 'v1': [0]}) + df = merge(df1, df2, how='outer') + + # GH13169 + # this really should be bool + self.assertEqual(df['key'].dtype, 
'object') + + df1 = DataFrame({'val': [1]}) + df2 = DataFrame({'val': [2]}) + lkey = np.array([1]) + rkey = np.array([2]) + df = merge(df1, df2, left_on=lkey, right_on=rkey, how='outer') + self.assertEqual(df['key_0'].dtype, 'int64') def test_handle_join_key_pass_array(self): left = DataFrame({'key': [1, 1, 2, 2, 3], @@ -705,15 +726,16 @@ def test_handle_join_key_pass_array(self): rkey = np.array([1, 1, 2, 3, 4, 5]) merged = merge(left, right, left_on=lkey, right_on=rkey, how='outer') - self.assert_numpy_array_equal(merged['key_0'], - np.array([1, 1, 1, 1, 2, 2, 3, 4, 5])) + self.assert_series_equal(merged['key_0'], + Series([1, 1, 1, 1, 2, 2, 3, 4, 5], + name='key_0')) left = DataFrame({'value': lrange(3)}) right = DataFrame({'rvalue': lrange(6)}) - key = np.array([0, 1, 1, 2, 2, 3]) + key = np.array([0, 1, 1, 2, 2, 3], dtype=np.int64) merged = merge(left, right, left_index=True, right_on=key, how='outer') - self.assert_numpy_array_equal(merged['key_0'], key) + self.assert_series_equal(merged['key_0'], Series(key, name='key_0')) def test_mixed_type_join_with_suffix(self): # GH #916 @@ -817,20 +839,32 @@ def test_merge_left_empty_right_notempty(self): # result will have object dtype exp_in.index = exp_in.index.astype(object) - for kwarg in [dict(left_index=True, right_index=True), - dict(left_index=True, right_on='x'), - dict(left_on='a', right_index=True), - dict(left_on='a', right_on='x')]: - + def check1(exp, kwarg): result = pd.merge(left, right, how='inner', **kwarg) - tm.assert_frame_equal(result, exp_in) + tm.assert_frame_equal(result, exp) result = pd.merge(left, right, how='left', **kwarg) - tm.assert_frame_equal(result, exp_in) + tm.assert_frame_equal(result, exp) + def check2(exp, kwarg): result = pd.merge(left, right, how='right', **kwarg) - tm.assert_frame_equal(result, exp_out) + tm.assert_frame_equal(result, exp) result = pd.merge(left, right, how='outer', **kwarg) - tm.assert_frame_equal(result, exp_out) + tm.assert_frame_equal(result, exp) + + for 
kwarg in [dict(left_index=True, right_index=True), + dict(left_index=True, right_on='x')]: + check1(exp_in, kwarg) + check2(exp_out, kwarg) + + kwarg = dict(left_on='a', right_index=True) + check1(exp_in, kwarg) + exp_out['a'] = [0, 1, 2] + check2(exp_out, kwarg) + + kwarg = dict(left_on='a', right_on='x') + check1(exp_in, kwarg) + exp_out['a'] = np.array([np.nan] * 3, dtype=object) + check2(exp_out, kwarg) def test_merge_left_notempty_right_empty(self): # GH 10824 @@ -849,20 +883,24 @@ def test_merge_left_notempty_right_empty(self): # result will have object dtype exp_in.index = exp_in.index.astype(object) - for kwarg in [dict(left_index=True, right_index=True), - dict(left_index=True, right_on='x'), - dict(left_on='a', right_index=True), - dict(left_on='a', right_on='x')]: - + def check1(exp, kwarg): result = pd.merge(left, right, how='inner', **kwarg) - tm.assert_frame_equal(result, exp_in) + tm.assert_frame_equal(result, exp) result = pd.merge(left, right, how='right', **kwarg) - tm.assert_frame_equal(result, exp_in) + tm.assert_frame_equal(result, exp) + def check2(exp, kwarg): result = pd.merge(left, right, how='left', **kwarg) - tm.assert_frame_equal(result, exp_out) + tm.assert_frame_equal(result, exp) result = pd.merge(left, right, how='outer', **kwarg) - tm.assert_frame_equal(result, exp_out) + tm.assert_frame_equal(result, exp) + + for kwarg in [dict(left_index=True, right_index=True), + dict(left_index=True, right_on='x'), + dict(left_on='a', right_index=True), + dict(left_on='a', right_on='x')]: + check1(exp_in, kwarg) + check2(exp_out, kwarg) def test_merge_nosort(self): # #2098, anything to do? 
@@ -1064,7 +1102,7 @@ def test_merge_on_datetime64tz(self): tz='US/Eastern')) + [pd.NaT], 'value_y': [pd.NaT] + list(pd.date_range('20151011', periods=2, tz='US/Eastern')), - 'key': [1., 2, 3]}) + 'key': [1, 2, 3]}) result = pd.merge(left, right, on='key', how='outer') assert_frame_equal(result, expected) self.assertEqual(result['value_x'].dtype, 'datetime64[ns, US/Eastern]') @@ -1096,7 +1134,7 @@ def test_merge_on_periods(self): exp_y = pd.period_range('20151011', periods=2, freq='D') expected = DataFrame({'value_x': list(exp_x) + [pd.NaT], 'value_y': [pd.NaT] + list(exp_y), - 'key': [1., 2, 3]}) + 'key': [1, 2, 3]}) result = pd.merge(left, right, on='key', how='outer') assert_frame_equal(result, expected) self.assertEqual(result['value_x'].dtype, 'object') @@ -1136,6 +1174,15 @@ def test_concat_NaT_series(self): result = pd.concat([x, y], ignore_index=True) tm.assert_series_equal(result, expected) + def test_concat_tz_frame(self): + df2 = DataFrame(dict(A=pd.Timestamp('20130102', tz='US/Eastern'), + B=pd.Timestamp('20130603', tz='CET')), + index=range(5)) + + # concat + df3 = pd.concat([df2.A.to_frame(), df2.B.to_frame()], axis=1) + assert_frame_equal(df2, df3) + def test_concat_tz_series(self): # GH 11755 # tz and no tz @@ -1329,7 +1376,7 @@ def test_indicator(self): 'col_conflict_x': [1, 2, np.nan, np.nan, np.nan, np.nan], 'col_left': ['a', 'b', np.nan, np.nan, np.nan, np.nan], 'col_conflict_y': [np.nan, 1, 2, 3, 4, 5], - 'col_right': [np.nan, 2, 2, 2, 2, 2]}, dtype='float64') + 'col_right': [np.nan, 2, 2, 2, 2, 2]}) df_result['_merge'] = Categorical( ['left_only', 'both', 'right_only', 'right_only', 'right_only', 'right_only'], @@ -1408,7 +1455,7 @@ def test_indicator(self): df4 = DataFrame({'col1': [1, 1, 3], 'col2': ['b', 'x', 'y']}) - hand_coded_result = DataFrame({'col1': [0, 1, 1, 3.0], + hand_coded_result = DataFrame({'col1': [0, 1, 1, 3], 'col2': ['a', 'b', 'x', 'y']}) hand_coded_result['_merge'] = Categorical( ['left_only', 'both', 'right_only', 
'right_only'], @@ -2159,1100 +2206,6 @@ def _join_by_hand(a, b, how='left'): return a_re.reindex(columns=result_columns) -class TestConcatenate(tm.TestCase): - - _multiprocess_can_split_ = True - - def setUp(self): - self.frame = DataFrame(tm.getSeriesData()) - self.mixed_frame = self.frame.copy() - self.mixed_frame['foo'] = 'bar' - - def test_append(self): - begin_index = self.frame.index[:5] - end_index = self.frame.index[5:] - - begin_frame = self.frame.reindex(begin_index) - end_frame = self.frame.reindex(end_index) - - appended = begin_frame.append(end_frame) - assert_almost_equal(appended['A'], self.frame['A']) - - del end_frame['A'] - partial_appended = begin_frame.append(end_frame) - self.assertIn('A', partial_appended) - - partial_appended = end_frame.append(begin_frame) - self.assertIn('A', partial_appended) - - # mixed type handling - appended = self.mixed_frame[:5].append(self.mixed_frame[5:]) - assert_frame_equal(appended, self.mixed_frame) - - # what to test here - mixed_appended = self.mixed_frame[:5].append(self.frame[5:]) - mixed_appended2 = self.frame[:5].append(self.mixed_frame[5:]) - - # all equal except 'foo' column - assert_frame_equal( - mixed_appended.reindex(columns=['A', 'B', 'C', 'D']), - mixed_appended2.reindex(columns=['A', 'B', 'C', 'D'])) - - # append empty - empty = DataFrame({}) - - appended = self.frame.append(empty) - assert_frame_equal(self.frame, appended) - self.assertIsNot(appended, self.frame) - - appended = empty.append(self.frame) - assert_frame_equal(self.frame, appended) - self.assertIsNot(appended, self.frame) - - # overlap - self.assertRaises(ValueError, self.frame.append, self.frame, - verify_integrity=True) - - # new columns - # GH 6129 - df = DataFrame({'a': {'x': 1, 'y': 2}, 'b': {'x': 3, 'y': 4}}) - row = Series([5, 6, 7], index=['a', 'b', 'c'], name='z') - expected = DataFrame({'a': {'x': 1, 'y': 2, 'z': 5}, 'b': { - 'x': 3, 'y': 4, 'z': 6}, 'c': {'z': 7}}) - result = df.append(row) - assert_frame_equal(result, 
expected) - - def test_append_length0_frame(self): - df = DataFrame(columns=['A', 'B', 'C']) - df3 = DataFrame(index=[0, 1], columns=['A', 'B']) - df5 = df.append(df3) - - expected = DataFrame(index=[0, 1], columns=['A', 'B', 'C']) - assert_frame_equal(df5, expected) - - def test_append_records(self): - arr1 = np.zeros((2,), dtype=('i4,f4,a10')) - arr1[:] = [(1, 2., 'Hello'), (2, 3., "World")] - - arr2 = np.zeros((3,), dtype=('i4,f4,a10')) - arr2[:] = [(3, 4., 'foo'), - (5, 6., "bar"), - (7., 8., 'baz')] - - df1 = DataFrame(arr1) - df2 = DataFrame(arr2) - - result = df1.append(df2, ignore_index=True) - expected = DataFrame(np.concatenate((arr1, arr2))) - assert_frame_equal(result, expected) - - def test_append_different_columns(self): - df = DataFrame({'bools': np.random.randn(10) > 0, - 'ints': np.random.randint(0, 10, 10), - 'floats': np.random.randn(10), - 'strings': ['foo', 'bar'] * 5}) - - a = df[:5].ix[:, ['bools', 'ints', 'floats']] - b = df[5:].ix[:, ['strings', 'ints', 'floats']] - - appended = a.append(b) - self.assertTrue(isnull(appended['strings'][0:4]).all()) - self.assertTrue(isnull(appended['bools'][5:]).all()) - - def test_append_many(self): - chunks = [self.frame[:5], self.frame[5:10], - self.frame[10:15], self.frame[15:]] - - result = chunks[0].append(chunks[1:]) - tm.assert_frame_equal(result, self.frame) - - chunks[-1] = chunks[-1].copy() - chunks[-1]['foo'] = 'bar' - result = chunks[0].append(chunks[1:]) - tm.assert_frame_equal(result.ix[:, self.frame.columns], self.frame) - self.assertTrue((result['foo'][15:] == 'bar').all()) - self.assertTrue(result['foo'][:15].isnull().all()) - - def test_append_preserve_index_name(self): - # #980 - df1 = DataFrame(data=None, columns=['A', 'B', 'C']) - df1 = df1.set_index(['A']) - df2 = DataFrame(data=[[1, 4, 7], [2, 5, 8], [3, 6, 9]], - columns=['A', 'B', 'C']) - df2 = df2.set_index(['A']) - - result = df1.append(df2) - self.assertEqual(result.index.name, 'A') - - def test_join_many(self): - df = 
DataFrame(np.random.randn(10, 6), columns=list('abcdef')) - df_list = [df[['a', 'b']], df[['c', 'd']], df[['e', 'f']]] - - joined = df_list[0].join(df_list[1:]) - tm.assert_frame_equal(joined, df) - - df_list = [df[['a', 'b']][:-2], - df[['c', 'd']][2:], df[['e', 'f']][1:9]] - - def _check_diff_index(df_list, result, exp_index): - reindexed = [x.reindex(exp_index) for x in df_list] - expected = reindexed[0].join(reindexed[1:]) - tm.assert_frame_equal(result, expected) - - # different join types - joined = df_list[0].join(df_list[1:], how='outer') - _check_diff_index(df_list, joined, df.index) - - joined = df_list[0].join(df_list[1:]) - _check_diff_index(df_list, joined, df_list[0].index) - - joined = df_list[0].join(df_list[1:], how='inner') - _check_diff_index(df_list, joined, df.index[2:8]) - - self.assertRaises(ValueError, df_list[0].join, df_list[1:], on='a') - - def test_join_many_mixed(self): - df = DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D']) - df['key'] = ['foo', 'bar'] * 4 - df1 = df.ix[:, ['A', 'B']] - df2 = df.ix[:, ['C', 'D']] - df3 = df.ix[:, ['key']] - - result = df1.join([df2, df3]) - assert_frame_equal(result, df) - - def test_append_missing_column_proper_upcast(self): - df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8')}) - df2 = DataFrame({'B': np.array([True, False, True, False], - dtype=bool)}) - - appended = df1.append(df2, ignore_index=True) - self.assertEqual(appended['A'].dtype, 'f8') - self.assertEqual(appended['B'].dtype, 'O') - - def test_concat_copy(self): - - df = DataFrame(np.random.randn(4, 3)) - df2 = DataFrame(np.random.randint(0, 10, size=4).reshape(4, 1)) - df3 = DataFrame({5: 'foo'}, index=range(4)) - - # these are actual copies - result = concat([df, df2, df3], axis=1, copy=True) - for b in result._data.blocks: - self.assertIsNone(b.values.base) - - # these are the same - result = concat([df, df2, df3], axis=1, copy=False) - for b in result._data.blocks: - if b.is_float: - self.assertTrue( - b.values.base 
is df._data.blocks[0].values.base) - elif b.is_integer: - self.assertTrue( - b.values.base is df2._data.blocks[0].values.base) - elif b.is_object: - self.assertIsNotNone(b.values.base) - - # float block was consolidated - df4 = DataFrame(np.random.randn(4, 1)) - result = concat([df, df2, df3, df4], axis=1, copy=False) - for b in result._data.blocks: - if b.is_float: - self.assertIsNone(b.values.base) - elif b.is_integer: - self.assertTrue( - b.values.base is df2._data.blocks[0].values.base) - elif b.is_object: - self.assertIsNotNone(b.values.base) - - def test_concat_with_group_keys(self): - df = DataFrame(np.random.randn(4, 3)) - df2 = DataFrame(np.random.randn(4, 4)) - - # axis=0 - df = DataFrame(np.random.randn(3, 4)) - df2 = DataFrame(np.random.randn(4, 4)) - - result = concat([df, df2], keys=[0, 1]) - exp_index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1, 1], - [0, 1, 2, 0, 1, 2, 3]]) - expected = DataFrame(np.r_[df.values, df2.values], - index=exp_index) - tm.assert_frame_equal(result, expected) - - result = concat([df, df], keys=[0, 1]) - exp_index2 = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1], - [0, 1, 2, 0, 1, 2]]) - expected = DataFrame(np.r_[df.values, df.values], - index=exp_index2) - tm.assert_frame_equal(result, expected) - - # axis=1 - df = DataFrame(np.random.randn(4, 3)) - df2 = DataFrame(np.random.randn(4, 4)) - - result = concat([df, df2], keys=[0, 1], axis=1) - expected = DataFrame(np.c_[df.values, df2.values], - columns=exp_index) - tm.assert_frame_equal(result, expected) - - result = concat([df, df], keys=[0, 1], axis=1) - expected = DataFrame(np.c_[df.values, df.values], - columns=exp_index2) - tm.assert_frame_equal(result, expected) - - def test_concat_keys_specific_levels(self): - df = DataFrame(np.random.randn(10, 4)) - pieces = [df.ix[:, [0, 1]], df.ix[:, [2]], df.ix[:, [3]]] - level = ['three', 'two', 'one', 'zero'] - result = concat(pieces, axis=1, keys=['one', 'two', 'three'], - levels=[level], - names=['group_key']) - - 
self.assert_numpy_array_equal(result.columns.levels[0], level) - self.assertEqual(result.columns.names[0], 'group_key') - - def test_concat_dataframe_keys_bug(self): - t1 = DataFrame({ - 'value': Series([1, 2, 3], index=Index(['a', 'b', 'c'], - name='id'))}) - t2 = DataFrame({ - 'value': Series([7, 8], index=Index(['a', 'b'], name='id'))}) - - # it works - result = concat([t1, t2], axis=1, keys=['t1', 't2']) - self.assertEqual(list(result.columns), [('t1', 'value'), - ('t2', 'value')]) - - def test_concat_series_partial_columns_names(self): - # GH10698 - foo = Series([1, 2], name='foo') - bar = Series([1, 2]) - baz = Series([4, 5]) - - result = concat([foo, bar, baz], axis=1) - expected = DataFrame({'foo': [1, 2], 0: [1, 2], 1: [ - 4, 5]}, columns=['foo', 0, 1]) - tm.assert_frame_equal(result, expected) - - result = concat([foo, bar, baz], axis=1, keys=[ - 'red', 'blue', 'yellow']) - expected = DataFrame({'red': [1, 2], 'blue': [1, 2], 'yellow': [ - 4, 5]}, columns=['red', 'blue', 'yellow']) - tm.assert_frame_equal(result, expected) - - result = concat([foo, bar, baz], axis=1, ignore_index=True) - expected = DataFrame({0: [1, 2], 1: [1, 2], 2: [4, 5]}) - tm.assert_frame_equal(result, expected) - - def test_concat_dict(self): - frames = {'foo': DataFrame(np.random.randn(4, 3)), - 'bar': DataFrame(np.random.randn(4, 3)), - 'baz': DataFrame(np.random.randn(4, 3)), - 'qux': DataFrame(np.random.randn(4, 3))} - - sorted_keys = sorted(frames) - - result = concat(frames) - expected = concat([frames[k] for k in sorted_keys], keys=sorted_keys) - tm.assert_frame_equal(result, expected) - - result = concat(frames, axis=1) - expected = concat([frames[k] for k in sorted_keys], keys=sorted_keys, - axis=1) - tm.assert_frame_equal(result, expected) - - keys = ['baz', 'foo', 'bar'] - result = concat(frames, keys=keys) - expected = concat([frames[k] for k in keys], keys=keys) - tm.assert_frame_equal(result, expected) - - def test_concat_ignore_index(self): - frame1 = 
DataFrame({"test1": ["a", "b", "c"], - "test2": [1, 2, 3], - "test3": [4.5, 3.2, 1.2]}) - frame2 = DataFrame({"test3": [5.2, 2.2, 4.3]}) - frame1.index = Index(["x", "y", "z"]) - frame2.index = Index(["x", "y", "q"]) - - v1 = concat([frame1, frame2], axis=1, ignore_index=True) - - nan = np.nan - expected = DataFrame([[nan, nan, nan, 4.3], - ['a', 1, 4.5, 5.2], - ['b', 2, 3.2, 2.2], - ['c', 3, 1.2, nan]], - index=Index(["q", "x", "y", "z"])) - - tm.assert_frame_equal(v1, expected) - - def test_concat_multiindex_with_keys(self): - index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], - ['one', 'two', 'three']], - labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], - [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], - names=['first', 'second']) - frame = DataFrame(np.random.randn(10, 3), index=index, - columns=Index(['A', 'B', 'C'], name='exp')) - result = concat([frame, frame], keys=[0, 1], names=['iteration']) - - self.assertEqual(result.index.names, ('iteration',) + index.names) - tm.assert_frame_equal(result.ix[0], frame) - tm.assert_frame_equal(result.ix[1], frame) - self.assertEqual(result.index.nlevels, 3) - - def test_concat_multiindex_with_tz(self): - # GH 6606 - df = DataFrame({'dt': [datetime(2014, 1, 1), - datetime(2014, 1, 2), - datetime(2014, 1, 3)], - 'b': ['A', 'B', 'C'], - 'c': [1, 2, 3], 'd': [4, 5, 6]}) - df['dt'] = df['dt'].apply(lambda d: Timestamp(d, tz='US/Pacific')) - df = df.set_index(['dt', 'b']) - - exp_idx1 = DatetimeIndex(['2014-01-01', '2014-01-02', - '2014-01-03'] * 2, - tz='US/Pacific', name='dt') - exp_idx2 = Index(['A', 'B', 'C'] * 2, name='b') - exp_idx = MultiIndex.from_arrays([exp_idx1, exp_idx2]) - expected = DataFrame({'c': [1, 2, 3] * 2, 'd': [4, 5, 6] * 2}, - index=exp_idx, columns=['c', 'd']) - - result = concat([df, df]) - tm.assert_frame_equal(result, expected) - - def test_concat_keys_and_levels(self): - df = DataFrame(np.random.randn(1, 3)) - df2 = DataFrame(np.random.randn(1, 4)) - - levels = [['foo', 'baz'], ['one', 'two']] - names = ['first', 
'second'] - result = concat([df, df2, df, df2], - keys=[('foo', 'one'), ('foo', 'two'), - ('baz', 'one'), ('baz', 'two')], - levels=levels, - names=names) - expected = concat([df, df2, df, df2]) - exp_index = MultiIndex(levels=levels + [[0]], - labels=[[0, 0, 1, 1], [0, 1, 0, 1], - [0, 0, 0, 0]], - names=names + [None]) - expected.index = exp_index - - assert_frame_equal(result, expected) - - # no names - - result = concat([df, df2, df, df2], - keys=[('foo', 'one'), ('foo', 'two'), - ('baz', 'one'), ('baz', 'two')], - levels=levels) - self.assertEqual(result.index.names, (None,) * 3) - - # no levels - result = concat([df, df2, df, df2], - keys=[('foo', 'one'), ('foo', 'two'), - ('baz', 'one'), ('baz', 'two')], - names=['first', 'second']) - self.assertEqual(result.index.names, ('first', 'second') + (None,)) - self.assert_numpy_array_equal(result.index.levels[0], ['baz', 'foo']) - - def test_concat_keys_levels_no_overlap(self): - # GH #1406 - df = DataFrame(np.random.randn(1, 3), index=['a']) - df2 = DataFrame(np.random.randn(1, 4), index=['b']) - - self.assertRaises(ValueError, concat, [df, df], - keys=['one', 'two'], levels=[['foo', 'bar', 'baz']]) - - self.assertRaises(ValueError, concat, [df, df2], - keys=['one', 'two'], levels=[['foo', 'bar', 'baz']]) - - def test_concat_rename_index(self): - a = DataFrame(np.random.rand(3, 3), - columns=list('ABC'), - index=Index(list('abc'), name='index_a')) - b = DataFrame(np.random.rand(3, 3), - columns=list('ABC'), - index=Index(list('abc'), name='index_b')) - - result = concat([a, b], keys=['key0', 'key1'], - names=['lvl0', 'lvl1']) - - exp = concat([a, b], keys=['key0', 'key1'], names=['lvl0']) - names = list(exp.index.names) - names[1] = 'lvl1' - exp.index.set_names(names, inplace=True) - - tm.assert_frame_equal(result, exp) - self.assertEqual(result.index.names, exp.index.names) - - def test_crossed_dtypes_weird_corner(self): - columns = ['A', 'B', 'C', 'D'] - df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='f8'), - 
'B': np.array([1, 2, 3, 4], dtype='i8'), - 'C': np.array([1, 2, 3, 4], dtype='f8'), - 'D': np.array([1, 2, 3, 4], dtype='i8')}, - columns=columns) - - df2 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='i8'), - 'B': np.array([1, 2, 3, 4], dtype='f8'), - 'C': np.array([1, 2, 3, 4], dtype='i8'), - 'D': np.array([1, 2, 3, 4], dtype='f8')}, - columns=columns) - - appended = df1.append(df2, ignore_index=True) - expected = DataFrame(np.concatenate([df1.values, df2.values], axis=0), - columns=columns) - tm.assert_frame_equal(appended, expected) - - df = DataFrame(np.random.randn(1, 3), index=['a']) - df2 = DataFrame(np.random.randn(1, 4), index=['b']) - result = concat( - [df, df2], keys=['one', 'two'], names=['first', 'second']) - self.assertEqual(result.index.names, ('first', 'second')) - - def test_dups_index(self): - # GH 4771 - - # single dtypes - df = DataFrame(np.random.randint(0, 10, size=40).reshape( - 10, 4), columns=['A', 'A', 'C', 'C']) - - result = concat([df, df], axis=1) - assert_frame_equal(result.iloc[:, :4], df) - assert_frame_equal(result.iloc[:, 4:], df) - - result = concat([df, df], axis=0) - assert_frame_equal(result.iloc[:10], df) - assert_frame_equal(result.iloc[10:], df) - - # multi dtypes - df = concat([DataFrame(np.random.randn(10, 4), - columns=['A', 'A', 'B', 'B']), - DataFrame(np.random.randint(0, 10, size=20) - .reshape(10, 2), - columns=['A', 'C'])], - axis=1) - - result = concat([df, df], axis=1) - assert_frame_equal(result.iloc[:, :6], df) - assert_frame_equal(result.iloc[:, 6:], df) - - result = concat([df, df], axis=0) - assert_frame_equal(result.iloc[:10], df) - assert_frame_equal(result.iloc[10:], df) - - # append - result = df.iloc[0:8, :].append(df.iloc[8:]) - assert_frame_equal(result, df) - - result = df.iloc[0:8, :].append(df.iloc[8:9]).append(df.iloc[9:10]) - assert_frame_equal(result, df) - - expected = concat([df, df], axis=0) - result = df.append(df) - assert_frame_equal(result, expected) - - def test_with_mixed_tuples(self): 
- # 10697 - # columns have mixed tuples, so handle properly - df1 = DataFrame({u'A': 'foo', (u'B', 1): 'bar'}, index=range(2)) - df2 = DataFrame({u'B': 'foo', (u'B', 1): 'bar'}, index=range(2)) - - # it works - concat([df1, df2]) - - def test_join_dups(self): - - # joining dups - df = concat([DataFrame(np.random.randn(10, 4), - columns=['A', 'A', 'B', 'B']), - DataFrame(np.random.randint(0, 10, size=20) - .reshape(10, 2), - columns=['A', 'C'])], - axis=1) - - expected = concat([df, df], axis=1) - result = df.join(df, rsuffix='_2') - result.columns = expected.columns - assert_frame_equal(result, expected) - - # GH 4975, invalid join on dups - w = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) - x = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) - y = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) - z = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) - - dta = x.merge(y, left_index=True, right_index=True).merge( - z, left_index=True, right_index=True, how="outer") - dta = dta.merge(w, left_index=True, right_index=True) - expected = concat([x, y, z, w], axis=1) - expected.columns = ['x_x', 'y_x', 'x_y', - 'y_y', 'x_x', 'y_x', 'x_y', 'y_y'] - assert_frame_equal(dta, expected) - - def test_handle_empty_objects(self): - df = DataFrame(np.random.randn(10, 4), columns=list('abcd')) - - baz = df[:5].copy() - baz['foo'] = 'bar' - empty = df[5:5] - - frames = [baz, empty, empty, df[5:]] - concatted = concat(frames, axis=0) - - expected = df.ix[:, ['a', 'b', 'c', 'd', 'foo']] - expected['foo'] = expected['foo'].astype('O') - expected.loc[0:4, 'foo'] = 'bar' - - tm.assert_frame_equal(concatted, expected) - - # empty as first element with time series - # GH3259 - df = DataFrame(dict(A=range(10000)), index=date_range( - '20130101', periods=10000, freq='s')) - empty = DataFrame() - result = concat([df, empty], axis=1) - assert_frame_equal(result, df) - result = concat([empty, df], axis=1) - assert_frame_equal(result, df) - - result = concat([df, empty]) - 
assert_frame_equal(result, df) - result = concat([empty, df]) - assert_frame_equal(result, df) - - def test_concat_mixed_objs(self): - - # concat mixed series/frames - # G2385 - - # axis 1 - index = date_range('01-Jan-2013', periods=10, freq='H') - arr = np.arange(10, dtype='int64') - s1 = Series(arr, index=index) - s2 = Series(arr, index=index) - df = DataFrame(arr.reshape(-1, 1), index=index) - - expected = DataFrame(np.repeat(arr, 2).reshape(-1, 2), - index=index, columns=[0, 0]) - result = concat([df, df], axis=1) - assert_frame_equal(result, expected) - - expected = DataFrame(np.repeat(arr, 2).reshape(-1, 2), - index=index, columns=[0, 1]) - result = concat([s1, s2], axis=1) - assert_frame_equal(result, expected) - - expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), - index=index, columns=[0, 1, 2]) - result = concat([s1, s2, s1], axis=1) - assert_frame_equal(result, expected) - - expected = DataFrame(np.repeat(arr, 5).reshape(-1, 5), - index=index, columns=[0, 0, 1, 2, 3]) - result = concat([s1, df, s2, s2, s1], axis=1) - assert_frame_equal(result, expected) - - # with names - s1.name = 'foo' - expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), - index=index, columns=['foo', 0, 0]) - result = concat([s1, df, s2], axis=1) - assert_frame_equal(result, expected) - - s2.name = 'bar' - expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), - index=index, columns=['foo', 0, 'bar']) - result = concat([s1, df, s2], axis=1) - assert_frame_equal(result, expected) - - # ignore index - expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), - index=index, columns=[0, 1, 2]) - result = concat([s1, df, s2], axis=1, ignore_index=True) - assert_frame_equal(result, expected) - - # axis 0 - expected = DataFrame(np.tile(arr, 3).reshape(-1, 1), - index=index.tolist() * 3, columns=[0]) - result = concat([s1, df, s2]) - assert_frame_equal(result, expected) - - expected = DataFrame(np.tile(arr, 3).reshape(-1, 1), columns=[0]) - result = concat([s1, df, s2], 
ignore_index=True) - assert_frame_equal(result, expected) - - # invalid concatente of mixed dims - panel = tm.makePanel() - self.assertRaises(ValueError, lambda: concat([panel, s1], axis=1)) - - def test_panel_join(self): - panel = tm.makePanel() - tm.add_nans(panel) - - p1 = panel.ix[:2, :10, :3] - p2 = panel.ix[2:, 5:, 2:] - - # left join - result = p1.join(p2) - expected = p1.copy() - expected['ItemC'] = p2['ItemC'] - tm.assert_panel_equal(result, expected) - - # right join - result = p1.join(p2, how='right') - expected = p2.copy() - expected['ItemA'] = p1['ItemA'] - expected['ItemB'] = p1['ItemB'] - expected = expected.reindex(items=['ItemA', 'ItemB', 'ItemC']) - tm.assert_panel_equal(result, expected) - - # inner join - result = p1.join(p2, how='inner') - expected = panel.ix[:, 5:10, 2:3] - tm.assert_panel_equal(result, expected) - - # outer join - result = p1.join(p2, how='outer') - expected = p1.reindex(major=panel.major_axis, - minor=panel.minor_axis) - expected = expected.join(p2.reindex(major=panel.major_axis, - minor=panel.minor_axis)) - tm.assert_panel_equal(result, expected) - - def test_panel_join_overlap(self): - panel = tm.makePanel() - tm.add_nans(panel) - - p1 = panel.ix[['ItemA', 'ItemB', 'ItemC']] - p2 = panel.ix[['ItemB', 'ItemC']] - - # Expected index is - # - # ItemA, ItemB_p1, ItemC_p1, ItemB_p2, ItemC_p2 - joined = p1.join(p2, lsuffix='_p1', rsuffix='_p2') - p1_suf = p1.ix[['ItemB', 'ItemC']].add_suffix('_p1') - p2_suf = p2.ix[['ItemB', 'ItemC']].add_suffix('_p2') - no_overlap = panel.ix[['ItemA']] - expected = no_overlap.join(p1_suf.join(p2_suf)) - tm.assert_panel_equal(joined, expected) - - def test_panel_join_many(self): - tm.K = 10 - panel = tm.makePanel() - tm.K = 4 - - panels = [panel.ix[:2], panel.ix[2:6], panel.ix[6:]] - - joined = panels[0].join(panels[1:]) - tm.assert_panel_equal(joined, panel) - - panels = [panel.ix[:2, :-5], panel.ix[2:6, 2:], panel.ix[6:, 5:-7]] - - data_dict = {} - for p in panels: - 
data_dict.update(p.iteritems()) - - joined = panels[0].join(panels[1:], how='inner') - expected = Panel.from_dict(data_dict, intersect=True) - tm.assert_panel_equal(joined, expected) - - joined = panels[0].join(panels[1:], how='outer') - expected = Panel.from_dict(data_dict, intersect=False) - tm.assert_panel_equal(joined, expected) - - # edge cases - self.assertRaises(ValueError, panels[0].join, panels[1:], - how='outer', lsuffix='foo', rsuffix='bar') - self.assertRaises(ValueError, panels[0].join, panels[1:], - how='right') - - def test_panel_concat_other_axes(self): - panel = tm.makePanel() - - p1 = panel.ix[:, :5, :] - p2 = panel.ix[:, 5:, :] - - result = concat([p1, p2], axis=1) - tm.assert_panel_equal(result, panel) - - p1 = panel.ix[:, :, :2] - p2 = panel.ix[:, :, 2:] - - result = concat([p1, p2], axis=2) - tm.assert_panel_equal(result, panel) - - # if things are a bit misbehaved - p1 = panel.ix[:2, :, :2] - p2 = panel.ix[:, :, 2:] - p1['ItemC'] = 'baz' - - result = concat([p1, p2], axis=2) - - expected = panel.copy() - expected['ItemC'] = expected['ItemC'].astype('O') - expected.ix['ItemC', :, :2] = 'baz' - tm.assert_panel_equal(result, expected) - - def test_panel_concat_buglet(self): - # #2257 - def make_panel(): - index = 5 - cols = 3 - - def df(): - return DataFrame(np.random.randn(index, cols), - index=["I%s" % i for i in range(index)], - columns=["C%s" % i for i in range(cols)]) - return Panel(dict([("Item%s" % x, df()) for x in ['A', 'B', 'C']])) - - panel1 = make_panel() - panel2 = make_panel() - - panel2 = panel2.rename_axis(dict([(x, "%s_1" % x) - for x in panel2.major_axis]), - axis=1) - - panel3 = panel2.rename_axis(lambda x: '%s_1' % x, axis=1) - panel3 = panel3.rename_axis(lambda x: '%s_1' % x, axis=2) - - # it works! 
- concat([panel1, panel3], axis=1, verify_integrity=True) - - def test_panel4d_concat(self): - p4d = tm.makePanel4D() - - p1 = p4d.ix[:, :, :5, :] - p2 = p4d.ix[:, :, 5:, :] - - result = concat([p1, p2], axis=2) - tm.assert_panel4d_equal(result, p4d) - - p1 = p4d.ix[:, :, :, :2] - p2 = p4d.ix[:, :, :, 2:] - - result = concat([p1, p2], axis=3) - tm.assert_panel4d_equal(result, p4d) - - def test_panel4d_concat_mixed_type(self): - p4d = tm.makePanel4D() - - # if things are a bit misbehaved - p1 = p4d.ix[:, :2, :, :2] - p2 = p4d.ix[:, :, :, 2:] - p1['L5'] = 'baz' - - result = concat([p1, p2], axis=3) - - p2['L5'] = np.nan - expected = concat([p1, p2], axis=3) - expected = expected.ix[result.labels] - - tm.assert_panel4d_equal(result, expected) - - def test_concat_series(self): - - ts = tm.makeTimeSeries() - ts.name = 'foo' - - pieces = [ts[:5], ts[5:15], ts[15:]] - - result = concat(pieces) - tm.assert_series_equal(result, ts) - self.assertEqual(result.name, ts.name) - - result = concat(pieces, keys=[0, 1, 2]) - expected = ts.copy() - - ts.index = DatetimeIndex(np.array(ts.index.values, dtype='M8[ns]')) - - exp_labels = [np.repeat([0, 1, 2], [len(x) for x in pieces]), - np.arange(len(ts))] - exp_index = MultiIndex(levels=[[0, 1, 2], ts.index], - labels=exp_labels) - expected.index = exp_index - tm.assert_series_equal(result, expected) - - def test_concat_series_axis1(self): - ts = tm.makeTimeSeries() - - pieces = [ts[:-2], ts[2:], ts[2:-2]] - - result = concat(pieces, axis=1) - expected = DataFrame(pieces).T - assert_frame_equal(result, expected) - - result = concat(pieces, keys=['A', 'B', 'C'], axis=1) - expected = DataFrame(pieces, index=['A', 'B', 'C']).T - assert_frame_equal(result, expected) - - # preserve series names, #2489 - s = Series(randn(5), name='A') - s2 = Series(randn(5), name='B') - - result = concat([s, s2], axis=1) - expected = DataFrame({'A': s, 'B': s2}) - assert_frame_equal(result, expected) - - s2.name = None - result = concat([s, s2], axis=1) - 
-        self.assertTrue(np.array_equal(
-            result.columns, Index(['A', 0], dtype='object')))
-
-        # must reindex, #2603
-        s = Series(randn(3), index=['c', 'a', 'b'], name='A')
-        s2 = Series(randn(4), index=['d', 'a', 'b', 'c'], name='B')
-        result = concat([s, s2], axis=1)
-        expected = DataFrame({'A': s, 'B': s2})
-        assert_frame_equal(result, expected)
-
-    def test_concat_single_with_key(self):
-        df = DataFrame(np.random.randn(10, 4))
-
-        result = concat([df], keys=['foo'])
-        expected = concat([df, df], keys=['foo', 'bar'])
-        tm.assert_frame_equal(result, expected[:10])
-
-    def test_concat_exclude_none(self):
-        df = DataFrame(np.random.randn(10, 4))
-
-        pieces = [df[:5], None, None, df[5:]]
-        result = concat(pieces)
-        tm.assert_frame_equal(result, df)
-        self.assertRaises(ValueError, concat, [None, None])
-
-    def test_concat_datetime64_block(self):
-        from pandas.tseries.index import date_range
-
-        rng = date_range('1/1/2000', periods=10)
-
-        df = DataFrame({'time': rng})
-
-        result = concat([df, df])
-        self.assertTrue((result.iloc[:10]['time'] == rng).all())
-        self.assertTrue((result.iloc[10:]['time'] == rng).all())
-
-    def test_concat_timedelta64_block(self):
-        from pandas import to_timedelta
-
-        rng = to_timedelta(np.arange(10), unit='s')
-
-        df = DataFrame({'time': rng})
-
-        result = concat([df, df])
-        self.assertTrue((result.iloc[:10]['time'] == rng).all())
-        self.assertTrue((result.iloc[10:]['time'] == rng).all())
-
-    def test_concat_keys_with_none(self):
-        # #1649
-        df0 = DataFrame([[10, 20, 30], [10, 20, 30], [10, 20, 30]])
-
-        result = concat(dict(a=None, b=df0, c=df0[:2], d=df0[:1], e=df0))
-        expected = concat(dict(b=df0, c=df0[:2], d=df0[:1], e=df0))
-        tm.assert_frame_equal(result, expected)
-
-        result = concat([None, df0, df0[:2], df0[:1], df0],
-                        keys=['a', 'b', 'c', 'd', 'e'])
-        expected = concat([df0, df0[:2], df0[:1], df0],
-                          keys=['b', 'c', 'd', 'e'])
-        tm.assert_frame_equal(result, expected)
-
-    def test_concat_bug_1719(self):
-        ts1 = tm.makeTimeSeries()
-        ts2 = tm.makeTimeSeries()[::2]
-
-        # to join with union
-        # these two are of different length!
-        left = concat([ts1, ts2], join='outer', axis=1)
-        right = concat([ts2, ts1], join='outer', axis=1)
-
-        self.assertEqual(len(left), len(right))
-
-    def test_concat_bug_2972(self):
-        ts0 = Series(np.zeros(5))
-        ts1 = Series(np.ones(5))
-        ts0.name = ts1.name = 'same name'
-        result = concat([ts0, ts1], axis=1)
-
-        expected = DataFrame({0: ts0, 1: ts1})
-        expected.columns = ['same name', 'same name']
-        assert_frame_equal(result, expected)
-
-    def test_concat_bug_3602(self):
-
-        # GH 3602, duplicate columns
-        df1 = DataFrame({'firmNo': [0, 0, 0, 0], 'stringvar': [
-            'rrr', 'rrr', 'rrr', 'rrr'], 'prc': [6, 6, 6, 6]})
-        df2 = DataFrame({'misc': [1, 2, 3, 4], 'prc': [
-            6, 6, 6, 6], 'C': [9, 10, 11, 12]})
-        expected = DataFrame([[0, 6, 'rrr', 9, 1, 6],
-                              [0, 6, 'rrr', 10, 2, 6],
-                              [0, 6, 'rrr', 11, 3, 6],
-                              [0, 6, 'rrr', 12, 4, 6]])
-        expected.columns = ['firmNo', 'prc', 'stringvar', 'C', 'misc', 'prc']
-
-        result = concat([df1, df2], axis=1)
-        assert_frame_equal(result, expected)
-
-    def test_concat_series_axis1_same_names_ignore_index(self):
-        dates = date_range('01-Jan-2013', '01-Jan-2014', freq='MS')[0:-1]
-        s1 = Series(randn(len(dates)), index=dates, name='value')
-        s2 = Series(randn(len(dates)), index=dates, name='value')
-
-        result = concat([s1, s2], axis=1, ignore_index=True)
-        self.assertTrue(np.array_equal(result.columns, [0, 1]))
-
-    def test_concat_iterables(self):
-        from collections import deque, Iterable
-
-        # GH8645 check concat works with tuples, list, generators, and weird
-        # stuff like deque and custom iterables
-        df1 = DataFrame([1, 2, 3])
-        df2 = DataFrame([4, 5, 6])
-        expected = DataFrame([1, 2, 3, 4, 5, 6])
-        assert_frame_equal(concat((df1, df2), ignore_index=True), expected)
-        assert_frame_equal(concat([df1, df2], ignore_index=True), expected)
-        assert_frame_equal(concat((df for df in (df1, df2)),
-                                  ignore_index=True), expected)
-        assert_frame_equal(
-            concat(deque((df1, df2)), ignore_index=True), expected)
-
-        class CustomIterator1(object):
-
-            def __len__(self):
-                return 2
-
-            def __getitem__(self, index):
-                try:
-                    return {0: df1, 1: df2}[index]
-                except KeyError:
-                    raise IndexError
-        assert_frame_equal(pd.concat(CustomIterator1(),
-                                     ignore_index=True), expected)
-
-        class CustomIterator2(Iterable):
-
-            def __iter__(self):
-                yield df1
-                yield df2
-        assert_frame_equal(pd.concat(CustomIterator2(),
-                                     ignore_index=True), expected)
-
-    def test_concat_invalid(self):
-
-        # trying to concat a ndframe with a non-ndframe
-        df1 = mkdf(10, 2)
-        for obj in [1, dict(), [1, 2], (1, 2)]:
-            self.assertRaises(TypeError, lambda x: concat([df1, obj]))
-
-    def test_concat_invalid_first_argument(self):
-        df1 = mkdf(10, 2)
-        df2 = mkdf(10, 2)
-        self.assertRaises(TypeError, concat, df1, df2)
-
-        # generator ok though
-        concat(DataFrame(np.random.rand(5, 5)) for _ in range(3))
-
-        # text reader ok
-        # GH6583
-        data = """index,A,B,C,D
-foo,2,3,4,5
-bar,7,8,9,10
-baz,12,13,14,15
-qux,12,13,14,15
-foo2,12,13,14,15
-bar2,12,13,14,15
-"""
-
-        reader = read_csv(StringIO(data), chunksize=1)
-        result = concat(reader, ignore_index=True)
-        expected = read_csv(StringIO(data))
-        assert_frame_equal(result, expected)
-
-
-class TestOrderedMerge(tm.TestCase):
-
-    def setUp(self):
-        self.left = DataFrame({'key': ['a', 'c', 'e'],
-                               'lvalue': [1, 2., 3]})
-
-        self.right = DataFrame({'key': ['b', 'c', 'd', 'f'],
-                                'rvalue': [1, 2, 3., 4]})
-
-    # GH #813
-
-    def test_basic(self):
-        result = ordered_merge(self.left, self.right, on='key')
-        expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
-                              'lvalue': [1, nan, 2, nan, 3, nan],
-                              'rvalue': [nan, 1, 2, 3, nan, 4]})
-
-        assert_frame_equal(result, expected)
-
-    def test_ffill(self):
-        result = ordered_merge(
-            self.left, self.right, on='key', fill_method='ffill')
-        expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
-                              'lvalue': [1., 1, 2, 2, 3, 3.],
-                              'rvalue': [nan, 1, 2, 3, 3, 4]})
-        assert_frame_equal(result, expected)
-
-    def test_multigroup(self):
-        left = concat([self.left, self.left], ignore_index=True)
-        # right = concat([self.right, self.right], ignore_index=True)
-
-        left['group'] = ['a'] * 3 + ['b'] * 3
-        # right['group'] = ['a'] * 4 + ['b'] * 4
-
-        result = ordered_merge(left, self.right, on='key', left_by='group',
-                               fill_method='ffill')
-        expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'] * 2,
-                              'lvalue': [1., 1, 2, 2, 3, 3.] * 2,
-                              'rvalue': [nan, 1, 2, 3, 3, 4] * 2})
-        expected['group'] = ['a'] * 6 + ['b'] * 6
-
-        assert_frame_equal(result, expected.ix[:, result.columns])
-
-        result2 = ordered_merge(self.right, left, on='key', right_by='group',
-                                fill_method='ffill')
-        assert_frame_equal(result, result2.ix[:, result.columns])
-
-        result = ordered_merge(left, self.right, on='key', left_by='group')
-        self.assertTrue(result['group'].notnull().all())
-
-    def test_merge_type(self):
-        class NotADataFrame(DataFrame):
-
-            @property
-            def _constructor(self):
-                return NotADataFrame
-
-        nad = NotADataFrame(self.left)
-        result = nad.merge(self.right, on='key')
-
-        tm.assertIsInstance(result, NotADataFrame)
-
-    def test_empty_sequence_concat(self):
-        # GH 9157
-        empty_pat = "[Nn]o objects"
-        none_pat = "objects.*None"
-        test_cases = [
-            ((), empty_pat),
-            ([], empty_pat),
-            ({}, empty_pat),
-            ([None], none_pat),
-            ([None, None], none_pat)
-        ]
-        for df_seq, pattern in test_cases:
-            assertRaisesRegexp(ValueError, pattern, pd.concat, df_seq)
-
-        pd.concat([pd.DataFrame()])
-        pd.concat([None, pd.DataFrame()])
-        pd.concat([pd.DataFrame(), None])
-
 if __name__ == '__main__':
     nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
                    exit=False)
diff --git a/pandas/tools/tests/test_ordered_merge.py b/pandas/tools/tests/test_ordered_merge.py
new file mode 100644
index 0000000000000..53f00d9761f32
--- /dev/null
+++ b/pandas/tools/tests/test_ordered_merge.py
@@ -0,0 +1,93 @@
+import nose
+
+import pandas as pd
+from pandas import DataFrame, ordered_merge
+from pandas.util import testing as tm
+from pandas.util.testing import assert_frame_equal
+
+from numpy import nan
+
+
+class TestOrderedMerge(tm.TestCase):
+
+    def setUp(self):
+        self.left = DataFrame({'key': ['a', 'c', 'e'],
+                               'lvalue': [1, 2., 3]})
+
+        self.right = DataFrame({'key': ['b', 'c', 'd', 'f'],
+                                'rvalue': [1, 2, 3., 4]})
+
+    # GH #813
+
+    def test_basic(self):
+        result = ordered_merge(self.left, self.right, on='key')
+        expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
+                              'lvalue': [1, nan, 2, nan, 3, nan],
+                              'rvalue': [nan, 1, 2, 3, nan, 4]})
+
+        assert_frame_equal(result, expected)
+
+    def test_ffill(self):
+        result = ordered_merge(
+            self.left, self.right, on='key', fill_method='ffill')
+        expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'],
+                              'lvalue': [1., 1, 2, 2, 3, 3.],
+                              'rvalue': [nan, 1, 2, 3, 3, 4]})
+        assert_frame_equal(result, expected)
+
+    def test_multigroup(self):
+        left = pd.concat([self.left, self.left], ignore_index=True)
+        # right = concat([self.right, self.right], ignore_index=True)
+
+        left['group'] = ['a'] * 3 + ['b'] * 3
+        # right['group'] = ['a'] * 4 + ['b'] * 4
+
+        result = ordered_merge(left, self.right, on='key', left_by='group',
+                               fill_method='ffill')
+        expected = DataFrame({'key': ['a', 'b', 'c', 'd', 'e', 'f'] * 2,
+                              'lvalue': [1., 1, 2, 2, 3, 3.] * 2,
+                              'rvalue': [nan, 1, 2, 3, 3, 4] * 2})
+        expected['group'] = ['a'] * 6 + ['b'] * 6
+
+        assert_frame_equal(result, expected.ix[:, result.columns])
+
+        result2 = ordered_merge(self.right, left, on='key', right_by='group',
+                                fill_method='ffill')
+        assert_frame_equal(result, result2.ix[:, result.columns])
+
+        result = ordered_merge(left, self.right, on='key', left_by='group')
+        self.assertTrue(result['group'].notnull().all())
+
+    def test_merge_type(self):
+        class NotADataFrame(DataFrame):
+
+            @property
+            def _constructor(self):
+                return NotADataFrame
+
+        nad = NotADataFrame(self.left)
+        result = nad.merge(self.right, on='key')
+
+        tm.assertIsInstance(result, NotADataFrame)
+
+    def test_empty_sequence_concat(self):
+        # GH 9157
+        empty_pat = "[Nn]o objects"
+        none_pat = "objects.*None"
+        test_cases = [
+            ((), empty_pat),
+            ([], empty_pat),
+            ({}, empty_pat),
+            ([None], none_pat),
+            ([None, None], none_pat)
+        ]
+        for df_seq, pattern in test_cases:
+            tm.assertRaisesRegexp(ValueError, pattern, pd.concat, df_seq)
+
+        pd.concat([pd.DataFrame()])
+        pd.concat([None, pd.DataFrame()])
+        pd.concat([pd.DataFrame(), None])
+
+if __name__ == '__main__':
+    nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+                   exit=False)
diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py
index 5ebd2e4f693cf..82feaae13f771 100644
--- a/pandas/tools/tests/test_pivot.py
+++ b/pandas/tools/tests/test_pivot.py
@@ -1,13 +1,12 @@
 from datetime import datetime, date, timedelta
 
 import numpy as np
-from numpy.testing import assert_equal
 
 import pandas as pd
 from pandas import DataFrame, Series, Index, MultiIndex, Grouper
 from pandas.tools.merge import concat
 from pandas.tools.pivot import pivot_table, crosstab
-from pandas.compat import range, u, product
+from pandas.compat import range, product
 import pandas.util.testing as tm
@@ -80,21 +79,13 @@ def test_pivot_table_dropna(self):
         pv_ind = df.pivot_table(
             'quantity', ['customer', 'product'], 'month', dropna=False)
-        m = MultiIndex.from_tuples([(u('A'), u('a')),
-                                    (u('A'), u('b')),
-                                    (u('A'), u('c')),
-                                    (u('A'), u('d')),
-                                    (u('B'), u('a')),
-                                    (u('B'), u('b')),
-                                    (u('B'), u('c')),
-                                    (u('B'), u('d')),
-                                    (u('C'), u('a')),
-                                    (u('C'), u('b')),
-                                    (u('C'), u('c')),
-                                    (u('C'), u('d'))])
-
-        assert_equal(pv_col.columns.values, m.values)
-        assert_equal(pv_ind.index.values, m.values)
+        m = MultiIndex.from_tuples([('A', 'a'), ('A', 'b'), ('A', 'c'),
+                                    ('A', 'd'), ('B', 'a'), ('B', 'b'),
+                                    ('B', 'c'), ('B', 'd'), ('C', 'a'),
+                                    ('C', 'b'), ('C', 'c'), ('C', 'd')],
+                                   names=['customer', 'product'])
+        tm.assert_index_equal(pv_col.columns, m)
+        tm.assert_index_equal(pv_ind.index, m)
 
     def test_pass_array(self):
         result = self.data.pivot_table(
@@ -902,8 +893,9 @@ def test_crosstab_dropna(self):
         res = pd.crosstab(a, [b, c], rownames=['a'],
                          colnames=['b', 'c'], dropna=False)
         m = MultiIndex.from_tuples([('one', 'dull'), ('one', 'shiny'),
-                                    ('two', 'dull'), ('two', 'shiny')])
-        assert_equal(res.columns.values, m.values)
+                                    ('two', 'dull'), ('two', 'shiny')],
+                                   names=['b', 'c'])
+        tm.assert_index_equal(res.columns, m)
 
     def test_categorical_margins(self):
         # GH 10989
diff --git a/pandas/tools/tests/test_tile.py b/pandas/tools/tests/test_tile.py
index 55f27e1466a92..0b91fd1ef1c02 100644
--- a/pandas/tools/tests/test_tile.py
+++ b/pandas/tools/tests/test_tile.py
@@ -4,7 +4,7 @@
 import numpy as np
 
 from pandas.compat import zip
-from pandas import Series
+from pandas import Series, Index
 import pandas.util.testing as tm
 from pandas.util.testing import assertRaisesRegexp
 import pandas.core.common as com
@@ -19,32 +19,41 @@ class TestCut(tm.TestCase):
 
     def test_simple(self):
         data = np.ones(5)
         result = cut(data, 4, labels=False)
-        desired = [1, 1, 1, 1, 1]
+        desired = np.array([1, 1, 1, 1, 1], dtype=np.int64)
         tm.assert_numpy_array_equal(result, desired)
 
     def test_bins(self):
         data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1])
         result, bins = cut(data, 3, retbins=True)
-        tm.assert_numpy_array_equal(result.codes, [0, 0, 0, 1, 2, 0])
-        tm.assert_almost_equal(bins, [0.1905, 3.36666667, 6.53333333, 9.7])
+
+        exp_codes = np.array([0, 0, 0, 1, 2, 0], dtype=np.int8)
+        tm.assert_numpy_array_equal(result.codes, exp_codes)
+        exp = np.array([0.1905, 3.36666667, 6.53333333, 9.7])
+        tm.assert_almost_equal(bins, exp)
 
     def test_right(self):
         data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
         result, bins = cut(data, 4, right=True, retbins=True)
-        tm.assert_numpy_array_equal(result.codes, [0, 0, 0, 2, 3, 0, 0])
-        tm.assert_almost_equal(bins, [0.1905, 2.575, 4.95, 7.325, 9.7])
+        exp_codes = np.array([0, 0, 0, 2, 3, 0, 0], dtype=np.int8)
+        tm.assert_numpy_array_equal(result.codes, exp_codes)
+        exp = np.array([0.1905, 2.575, 4.95, 7.325, 9.7])
+        tm.assert_numpy_array_equal(bins, exp)
 
     def test_noright(self):
         data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
         result, bins = cut(data, 4, right=False, retbins=True)
-        tm.assert_numpy_array_equal(result.codes, [0, 0, 0, 2, 3, 0, 1])
-        tm.assert_almost_equal(bins, [0.2, 2.575, 4.95, 7.325, 9.7095])
+        exp_codes = np.array([0, 0, 0, 2, 3, 0, 1], dtype=np.int8)
+        tm.assert_numpy_array_equal(result.codes, exp_codes)
+        exp = np.array([0.2, 2.575, 4.95, 7.325, 9.7095])
+        tm.assert_almost_equal(bins, exp)
 
     def test_arraylike(self):
         data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
         result, bins = cut(data, 3, retbins=True)
-        tm.assert_numpy_array_equal(result.codes, [0, 0, 0, 1, 2, 0])
-        tm.assert_almost_equal(bins, [0.1905, 3.36666667, 6.53333333, 9.7])
+        exp_codes = np.array([0, 0, 0, 1, 2, 0], dtype=np.int8)
+        tm.assert_numpy_array_equal(result.codes, exp_codes)
+        exp = np.array([0.1905, 3.36666667, 6.53333333, 9.7])
+        tm.assert_almost_equal(bins, exp)
 
     def test_bins_not_monotonic(self):
         data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
@@ -72,14 +81,14 @@ def test_labels(self):
         arr = np.tile(np.arange(0, 1.01, 0.1), 4)
 
         result, bins = cut(arr, 4, retbins=True)
-        ex_levels = ['(-0.001, 0.25]', '(0.25, 0.5]', '(0.5, 0.75]',
-                     '(0.75, 1]']
-        self.assert_numpy_array_equal(result.categories, ex_levels)
+        ex_levels = Index(['(-0.001, 0.25]', '(0.25, 0.5]', '(0.5, 0.75]',
+                           '(0.75, 1]'])
+        self.assert_index_equal(result.categories, ex_levels)
 
         result, bins = cut(arr, 4, retbins=True, right=False)
-        ex_levels = ['[0, 0.25)', '[0.25, 0.5)', '[0.5, 0.75)',
-                     '[0.75, 1.001)']
-        self.assert_numpy_array_equal(result.categories, ex_levels)
+        ex_levels = Index(['[0, 0.25)', '[0.25, 0.5)', '[0.5, 0.75)',
+                           '[0.75, 1.001)'])
+        self.assert_index_equal(result.categories, ex_levels)
 
     def test_cut_pass_series_name_to_factor(self):
         s = Series(np.random.randn(100), name='foo')
@@ -91,9 +100,9 @@ def test_label_precision(self):
         arr = np.arange(0, 0.73, 0.01)
 
         result = cut(arr, 4, precision=2)
-        ex_levels = ['(-0.00072, 0.18]', '(0.18, 0.36]', '(0.36, 0.54]',
-                     '(0.54, 0.72]']
-        self.assert_numpy_array_equal(result.categories, ex_levels)
+        ex_levels = Index(['(-0.00072, 0.18]', '(0.18, 0.36]',
+                           '(0.36, 0.54]', '(0.54, 0.72]'])
+        self.assert_index_equal(result.categories, ex_levels)
 
     def test_na_handling(self):
         arr = np.arange(0, 0.75, 0.01)
@@ -118,10 +127,10 @@ def test_inf_handling(self):
         result = cut(data, [-np.inf, 2, 4, np.inf])
         result_ser = cut(data_ser, [-np.inf, 2, 4, np.inf])
 
-        ex_categories = ['(-inf, 2]', '(2, 4]', '(4, inf]']
+        ex_categories = Index(['(-inf, 2]', '(2, 4]', '(4, inf]'])
 
-        tm.assert_numpy_array_equal(result.categories, ex_categories)
-        tm.assert_numpy_array_equal(result_ser.cat.categories, ex_categories)
+        tm.assert_index_equal(result.categories, ex_categories)
+        tm.assert_index_equal(result_ser.cat.categories, ex_categories)
         self.assertEqual(result[5], '(4, inf]')
         self.assertEqual(result[0], '(-inf, 2]')
         self.assertEqual(result_ser[5], '(4, inf]')
@@ -135,7 +144,7 @@ def test_qcut(self):
         tm.assert_almost_equal(bins, ex_bins)
 
         ex_levels = cut(arr, ex_bins, include_lowest=True)
-        self.assert_numpy_array_equal(labels, ex_levels)
+        self.assert_categorical_equal(labels, ex_levels)
 
     def test_qcut_bounds(self):
         arr = np.random.randn(1000)
@@ -148,7 +157,7 @@ def test_qcut_specify_quantiles(self):
         factor = qcut(arr, [0, .25, .5, .75, 1.])
 
         expected = qcut(arr, 4)
-        self.assertTrue(factor.equals(expected))
+        tm.assert_categorical_equal(factor, expected)
 
     def test_qcut_all_bins_same(self):
         assertRaisesRegexp(ValueError, "edges.*unique", qcut,
@@ -173,7 +182,7 @@ def test_cut_pass_labels(self):
 
         exp = cut(arr, bins)
         exp.categories = labels
-        self.assertTrue(result.equals(exp))
+        tm.assert_categorical_equal(result, exp)
 
     def test_qcut_include_lowest(self):
         values = np.arange(10)
@@ -253,12 +262,14 @@ def test_series_retbins(self):
         # GH 8589
         s = Series(np.arange(4))
         result, bins = cut(s, 2, retbins=True)
-        tm.assert_numpy_array_equal(result.cat.codes.values, [0, 0, 1, 1])
-        tm.assert_almost_equal(bins, [-0.003, 1.5, 3])
+        tm.assert_numpy_array_equal(result.cat.codes.values,
+                                    np.array([0, 0, 1, 1], dtype=np.int8))
+        tm.assert_numpy_array_equal(bins, np.array([-0.003, 1.5, 3]))
 
         result, bins = qcut(s, 2, retbins=True)
-        tm.assert_numpy_array_equal(result.cat.codes.values, [0, 0, 1, 1])
-        tm.assert_almost_equal(bins, [0, 1.5, 3])
+        tm.assert_numpy_array_equal(result.cat.codes.values,
+                                    np.array([0, 0, 1, 1], dtype=np.int8))
+        tm.assert_numpy_array_equal(bins, np.array([0, 1.5, 3]))
 
 
 def curpath():
diff --git a/pandas/tools/tests/test_util.py b/pandas/tools/tests/test_util.py
index 1c4f55b2defa4..c592b33bdab9a 100644
--- a/pandas/tools/tests/test_util.py
+++ b/pandas/tools/tests/test_util.py
@@ -4,7 +4,6 @@
 import nose
 
 import numpy as np
-from numpy.testing import assert_equal
 
 import pandas as pd
 from pandas import date_range, Index
@@ -19,18 +18,21 @@ class TestCartesianProduct(tm.TestCase):
 
     def test_simple(self):
         x, y = list('ABC'), [1, 22]
-        result = cartesian_product([x, y])
-        expected = [np.array(['A', 'A', 'B', 'B', 'C', 'C']),
-                    np.array([1, 22, 1, 22, 1, 22])]
-        assert_equal(result, expected)
+        result1, result2 = cartesian_product([x, y])
+        expected1 = np.array(['A', 'A', 'B', 'B', 'C', 'C'])
+        expected2 = np.array([1, 22, 1, 22, 1, 22])
+        tm.assert_numpy_array_equal(result1, expected1)
+        tm.assert_numpy_array_equal(result2, expected2)
 
     def test_datetimeindex(self):
         # regression test for GitHub issue #6439
         # make sure that the ordering on datetimeindex is consistent
         x = date_range('2000-01-01', periods=2)
-        result = [Index(y).day for y in cartesian_product([x, x])]
-        expected = [np.array([1, 1, 2, 2]), np.array([1, 2, 1, 2])]
-        assert_equal(result, expected)
+        result1, result2 = [Index(y).day for y in cartesian_product([x, x])]
+        expected1 = np.array([1, 1, 2, 2], dtype=np.int32)
+        expected2 = np.array([1, 2, 1, 2], dtype=np.int32)
+        tm.assert_numpy_array_equal(result1, expected1)
+        tm.assert_numpy_array_equal(result2, expected2)
 
 
 class TestLocaleUtils(tm.TestCase):
@@ -277,6 +279,18 @@ def test_period(self):
         #     res = pd.to_numeric(pd.Series(idx, name='xxx'))
         #     tm.assert_series_equal(res, pd.Series(idx.asi8, name='xxx'))
 
+    def test_non_hashable(self):
+        # Test for Bug #13324
+        s = pd.Series([[10.0, 2], 1.0, 'apple'])
+        res = pd.to_numeric(s, errors='coerce')
+        tm.assert_series_equal(res, pd.Series([np.nan, 1.0, np.nan]))
+
+        res = pd.to_numeric(s, errors='ignore')
+        tm.assert_series_equal(res, pd.Series([[10.0, 2], 1.0, 'apple']))
+
+        with self.assertRaisesRegexp(TypeError, "Invalid object type"):
+            pd.to_numeric(s)
+
 
 if __name__ == '__main__':
     nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tseries/base.py b/pandas/tseries/base.py
index 0f58d17f0ade4..42631d442a990 100644
--- a/pandas/tseries/base.py
+++ b/pandas/tseries/base.py
@@ -9,6 +9,7 @@
 from pandas.compat.numpy import function as nv
 
 import numpy as np
+
 from pandas.core import common as com, algorithms
 from pandas.core.common import (is_integer, is_float, is_bool_dtype,
                                 AbstractMethodError)
@@ -74,22 +75,16 @@ def _round(self, freq, rounder):
         unit = to_offset(freq).nanos
 
         # round the local times
-        if getattr(self, 'tz', None) is not None:
-            values = self.tz_localize(None).asi8
-        else:
-            values = self.asi8
+        values = _ensure_datetimelike_to_i8(self)
+
         result = (unit * rounder(values / float(unit))).astype('i8')
         attribs = self._get_attributes_dict()
         if 'freq' in attribs:
             attribs['freq'] = None
         if 'tz' in attribs:
             attribs['tz'] = None
-        result = self._shallow_copy(result, **attribs)
-
-        # reconvert to local tz
-        if getattr(self, 'tz', None) is not None:
-            result = result.tz_localize(self.tz)
-        return result
+        return self._ensure_localized(
+            self._shallow_copy(result, **attribs))
 
     @Appender(_round_doc % "round")
     def round(self, freq, *args, **kwargs):
@@ -161,6 +156,29 @@ def _evaluate_compare(self, other, op):
             except TypeError:
                 return result
 
+    def _ensure_localized(self, result):
+        """
+        ensure that we are re-localized
+
+        This is for compat as we can then call this on all datetimelike
+        indexes generally (ignored for Period/Timedelta)
+
+        Parameters
+        ----------
+        result : DatetimeIndex / i8 ndarray
+
+        Returns
+        -------
+        localized DTI
+        """
+
+        # reconvert to local tz
+        if getattr(self, 'tz', None) is not None:
+            if not isinstance(result, com.ABCIndexClass):
+                result = self._simple_new(result)
+            result = result.tz_localize(self.tz)
+        return result
+
     @property
     def _box_func(self):
         """
@@ -189,8 +207,17 @@ def __contains__(self, key):
             return False
 
     def __getitem__(self, key):
+        """
+        This getitem defers to the underlying array, which by-definition can
+        only handle list-likes, slices, and integer scalars
+        """
+
+        is_int = is_integer(key)
+        if lib.isscalar(key) and not is_int:
+            raise ValueError
+
         getitem = self._data.__getitem__
-        if lib.isscalar(key):
+        if is_int:
             val = getitem(key)
             return self._box_func(val)
         else:
@@ -718,6 +745,27 @@ def repeat(self, repeats, *args, **kwargs):
         nv.validate_repeat(args, kwargs)
         return self._shallow_copy(self.values.repeat(repeats), freq=None)
 
+    def where(self, cond, other=None):
+        """
+        .. versionadded:: 0.18.2
+
+        Return an Index of same shape as self and whose corresponding
+        entries are from self where cond is True and otherwise are from
+        other.
+
+        Parameters
+        ----------
+        cond : boolean same length as self
+        other : scalar, or array-like
+        """
+        other = _ensure_datetimelike_to_i8(other)
+        values = _ensure_datetimelike_to_i8(self)
+        result = np.where(cond, values, other).astype('i8')
+
+        result = self._ensure_localized(result)
+        return self._shallow_copy(result,
+                                  **self._get_attributes_dict())
+
     def summary(self, name=None):
         """
         return a summarized representation
@@ -739,3 +787,19 @@ def summary(self, name=None):
         # display as values, not quoted
         result = result.replace("'", "")
         return result
+
+
+def _ensure_datetimelike_to_i8(other):
+    """ helper for coercing an input scalar or array to i8 """
+    if lib.isscalar(other) and com.isnull(other):
+        other = tslib.iNaT
+    elif isinstance(other, com.ABCIndexClass):
+
+        # convert tz if needed
+        if getattr(other, 'tz', None) is not None:
+            other = other.tz_localize(None).asi8
+        else:
+            other = other.asi8
+    else:
+        other = np.array(other, copy=False).view('i8')
+    return other
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index 8ccfdfa05e9b5..78b185ae8cf31 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -23,6 +23,24 @@
 from pandas.tseries.frequencies import FreqGroup
 from pandas.tseries.period import Period, PeriodIndex
 
+# constants
+HOURS_PER_DAY = 24.
+MIN_PER_HOUR = 60.
+SEC_PER_MIN = 60.
+
+SEC_PER_HOUR = SEC_PER_MIN * MIN_PER_HOUR
+SEC_PER_DAY = SEC_PER_HOUR * HOURS_PER_DAY
+
+MUSEC_PER_DAY = 1e6 * SEC_PER_DAY
+
+
+def _mpl_le_2_0_0():
+    try:
+        import matplotlib
+        return matplotlib.compare_versions('2.0.0', matplotlib.__version__)
+    except ImportError:
+        return False
+
 
 def register():
     units.registry[lib.Timestamp] = DatetimeConverter()
@@ -221,6 +239,13 @@ def __init__(self, locator, tz=None, defaultfmt='%Y-%m-%d'):
         if self._tz is dates.UTC:
             self._tz._utcoffset = self._tz.utcoffset(None)
 
+        # For mpl > 2.0 the format strings are controlled via rcparams
+        # so do not mess with them.  For mpl < 2.0 change the second
+        # break point and add a musec break point
+        if _mpl_le_2_0_0():
+            self.scaled[1. / SEC_PER_DAY] = '%H:%M:%S'
+            self.scaled[1. / MUSEC_PER_DAY] = '%H:%M:%S.%f'
+
 
 class PandasAutoDateLocator(dates.AutoDateLocator):
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 25d3490873542..83ab5d2a2bce4 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -6,16 +6,17 @@
 from datetime import timedelta
 import numpy as np
 
 from pandas.core.base import _shared_docs
-from pandas.core.common import (_NS_DTYPE, _INT64_DTYPE,
-                                _values_from_object, _maybe_box,
-                                is_object_dtype, is_datetime64_dtype,
-                                is_datetimetz, is_dtype_equal,
-                                ABCSeries, is_integer, is_float,
-                                DatetimeTZDtype, PerformanceWarning)
+from pandas.core.common import (_INT64_DTYPE, _NS_DTYPE, _maybe_box,
+                                _values_from_object, ABCSeries,
+                                DatetimeTZDtype, PerformanceWarning,
+                                is_datetimetz, is_datetime64_dtype,
+                                is_datetime64_ns_dtype, is_dtype_equal,
+                                is_float, is_integer, is_integer_dtype,
+                                is_object_dtype, is_string_dtype)
 from pandas.core.index import Index, Int64Index, Float64Index
+from pandas.indexes.base import _index_shared_docs
 import pandas.compat as compat
-from pandas.compat import u
 from pandas.tseries.frequencies import (
     to_offset, get_period_alias,
     Resolution)
@@ -814,8 +815,7 @@ def _add_offset(self, offset):
                           "or DatetimeIndex", PerformanceWarning)
             return self.astype('O') + offset
 
-    def _format_native_types(self, na_rep=u('NaT'),
-                             date_format=None, **kwargs):
+    def _format_native_types(self, na_rep='NaT', date_format=None, **kwargs):
         from pandas.formats.format import _get_format_datetime64_from_values
         format = _get_format_datetime64_from_values(self, date_format)
 
@@ -827,19 +827,24 @@ def _format_native_types(self, na_rep=u('NaT'),
     def to_datetime(self, dayfirst=False):
         return self.copy()
 
-    def astype(self, dtype):
+    @Appender(_index_shared_docs['astype'])
+    def astype(self, dtype, copy=True):
         dtype = np.dtype(dtype)
 
-        if dtype == np.object_:
+        if is_object_dtype(dtype):
             return self.asobject
-        elif dtype == _INT64_DTYPE:
-            return self.asi8.copy()
-        elif dtype == _NS_DTYPE and self.tz is not None:
-            return self.tz_convert('UTC').tz_localize(None)
-        elif dtype == str:
+        elif is_integer_dtype(dtype):
+            return Index(self.values.astype('i8', copy=copy), name=self.name,
+                         dtype='i8')
+        elif is_datetime64_ns_dtype(dtype):
+            if self.tz is not None:
+                return self.tz_convert('UTC').tz_localize(None)
+            elif copy is True:
+                return self.copy()
+            return self
+        elif is_string_dtype(dtype):
             return Index(self.format(), name=self.name, dtype=object)
-        else:  # pragma: no cover
-            raise ValueError('Cannot cast DatetimeIndex to dtype %s' % dtype)
+        raise ValueError('Cannot cast DatetimeIndex to dtype %s' % dtype)
 
     def _get_time_micros(self):
         utc = _utc()
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index fb91185746181..c3deee5f6dab2 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -15,11 +15,12 @@
     _quarter_to_myear)
 
 from pandas.core.base import _shared_docs
+from pandas.indexes.base import _index_shared_docs
 
 import pandas.core.common as com
-from pandas.core.common import (isnull, _INT64_DTYPE, _maybe_box,
-                                _values_from_object, ABCSeries,
-                                is_integer, is_float, is_object_dtype)
+from pandas.core.common import (
+    _maybe_box, _values_from_object, ABCSeries, is_float, is_integer,
+    is_integer_dtype, is_object_dtype, isnull)
 from pandas import compat
 from pandas.compat.numpy import function as nv
 from pandas.util.decorators import Appender, cache_readonly, Substitution
@@ -271,10 +272,15 @@ def _from_arraylike(cls, data, freq, tz):
 
     @classmethod
     def _simple_new(cls, values, name=None, freq=None, **kwargs):
-        if not getattr(values, 'dtype', None):
+
+        if not com.is_integer_dtype(values):
             values = np.array(values, copy=False)
-        if is_object_dtype(values):
-            return PeriodIndex(values, name=name, freq=freq, **kwargs)
+            if (len(values) > 0 and com.is_float_dtype(values)):
+                raise TypeError("PeriodIndex can't take floats")
+            else:
+                return PeriodIndex(values, name=name, freq=freq, **kwargs)
+
+        values = np.array(values, dtype='int64', copy=False)
 
         result = object.__new__(cls)
         result._data = values
@@ -381,12 +387,14 @@ def asof_locs(self, where, mask):
     def _array_values(self):
         return self.asobject
 
-    def astype(self, dtype):
+    @Appender(_index_shared_docs['astype'])
+    def astype(self, dtype, copy=True):
         dtype = np.dtype(dtype)
-        if dtype == np.object_:
-            return Index(np.array(list(self), dtype), dtype)
-        elif dtype == _INT64_DTYPE:
-            return Index(self.values, dtype)
+        if is_object_dtype(dtype):
+            return self.asobject
+        elif is_integer_dtype(dtype):
+            return Index(self.values.astype('i8', copy=copy), name=self.name,
+                         dtype='i8')
         raise ValueError('Cannot cast PeriodIndex to dtype %s' % dtype)
 
     @Substitution(klass='PeriodIndex', value='key')
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index a0f08a93a07d9..8d6955ab43711 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -1,6 +1,7 @@
 from datetime import timedelta
 import numpy as np
 import warnings
+import copy
 
 import pandas as pd
 from pandas.core.base import AbstractMethodError, GroupByMixin
@@ -15,9 +16,12 @@
 from pandas.tseries.period import PeriodIndex, period_range
 import pandas.core.common as com
 import pandas.core.algorithms as algos
+
 import pandas.compat as compat
+from pandas.compat.numpy import function as nv

 from pandas.lib import Timestamp
+from pandas._period import IncompatibleFrequency
 import pandas.lib as lib
 import pandas.tslib as tslib

@@ -479,7 +483,7 @@ def asfreq(self):
         """
         return self._upsample('asfreq')

-    def std(self, ddof=1):
+    def std(self, ddof=1, *args, **kwargs):
         """
         Compute standard deviation of groups, excluding missing values

@@ -488,9 +492,10 @@ def std(self, ddof=1):
         ddof : integer, default 1
         degrees of freedom
         """
+        nv.validate_resampler_func('std', args, kwargs)
         return self._downsample('std', ddof=ddof)

-    def var(self, ddof=1):
+    def var(self, ddof=1, *args, **kwargs):
         """
         Compute variance of groups, excluding missing values

@@ -499,6 +504,7 @@ def var(self, ddof=1):
         ddof : integer, default 1
         degrees of freedom
         """
+        nv.validate_resampler_func('var', args, kwargs)
         return self._downsample('var', ddof=ddof)

 Resampler._deprecated_valids += dir(Resampler)
@@ -506,7 +512,8 @@ def var(self, ddof=1):
 for method in ['min', 'max', 'first', 'last', 'sum', 'mean', 'sem',
                'median', 'prod', 'ohlc']:

-    def f(self, _method=method):
+    def f(self, _method=method, *args, **kwargs):
+        nv.validate_resampler_func(_method, args, kwargs)
         return self._downsample(_method)
     f.__doc__ = getattr(GroupBy, method).__doc__
     setattr(Resampler, method, f)
@@ -592,7 +599,7 @@ def __init__(self, obj, *args, **kwargs):
         self._groupby = groupby
         self._groupby.mutated = True
         self._groupby.grouper.mutated = True
-        self.groupby = parent.groupby
+        self.groupby = copy.copy(parent.groupby)

     def _apply(self, f, **kwargs):
         """
@@ -789,16 +796,17 @@ def _downsample(self, how, **kwargs):
         ax = self.ax
         new_index = self._get_new_index()
-        if len(new_index) == 0:
-            return self._wrap_result(self._selected_obj.reindex(new_index))

         # Start vs. end of period
         memb = ax.asfreq(self.freq, how=self.convention)

         if is_subperiod(ax.freq, self.freq):
             # Downsampling
-            rng = np.arange(memb.values[0], memb.values[-1] + 1)
-            bins = memb.searchsorted(rng, side='right')
+            if len(new_index) == 0:
+                bins = []
+            else:
+                rng = np.arange(memb.values[0], memb.values[-1] + 1)
+                bins = memb.searchsorted(rng, side='right')
             grouper = BinGrouper(bins, new_index)
             return self._groupby_and_aggregate(how, grouper=grouper)
         elif is_superperiod(ax.freq, self.freq):
@@ -806,10 +814,9 @@ def _downsample(self, how, **kwargs):
         elif ax.freq == self.freq:
             return self.asfreq()

-        raise ValueError('Frequency {axfreq} cannot be '
-                         'resampled to {freq}'.format(
-                             axfreq=ax.freq,
-                             freq=self.freq))
+        raise IncompatibleFrequency(
+            'Frequency {} cannot be resampled to {}, as they are not '
+            'sub or super periods'.format(ax.freq, self.freq))

     def _upsample(self, method, limit=None):
         """
@@ -832,9 +839,6 @@ def _upsample(self, method, limit=None):
         obj = self.obj
         new_index = self._get_new_index()

-        if len(new_index) == 0:
-            return self._wrap_result(self._selected_obj.reindex(new_index))
-
         # Start vs. end of period
         memb = ax.asfreq(self.freq, how=self.convention)

@@ -908,8 +912,7 @@ def get_resampler_for_grouping(groupby, rule, how=None, fill_method=None,
     return _maybe_process_deprecations(r,
                                        how=how,
                                        fill_method=fill_method,
-                                       limit=limit,
-                                       **kwargs)
+                                       limit=limit)


 class TimeGrouper(Grouper):
diff --git a/pandas/tseries/tdi.py b/pandas/tseries/tdi.py
index 7d731c28c0f88..3e12cf14e7485 100644
--- a/pandas/tseries/tdi.py
+++ b/pandas/tseries/tdi.py
@@ -2,15 +2,17 @@
 from datetime import timedelta
 import numpy as np
-from pandas.core.common import (ABCSeries, _TD_DTYPE, _INT64_DTYPE,
-                                _maybe_box,
+from pandas.core.common import (ABCSeries, _TD_DTYPE, _maybe_box,
                                 _values_from_object, isnull,
-                                is_integer, is_float)
+                                is_integer, is_float, is_integer_dtype,
+                                is_object_dtype, is_timedelta64_dtype,
+                                is_timedelta64_ns_dtype)
 from pandas.core.index import Index, Int64Index
 import pandas.compat as compat
 from pandas.compat import u
 from pandas.tseries.frequencies import to_offset
 from pandas.core.base import _shared_docs
+from pandas.indexes.base import _index_shared_docs
 import pandas.core.common as com
 import pandas.types.concat as _concat
 from pandas.util.decorators import Appender, Substitution
@@ -435,28 +437,28 @@ def to_pytimedelta(self):
         """
         return tslib.ints_to_pytimedelta(self.asi8)

-    def astype(self, dtype):
+    @Appender(_index_shared_docs['astype'])
+    def astype(self, dtype, copy=True):
         dtype = np.dtype(dtype)

-        if dtype == np.object_:
+        if is_object_dtype(dtype):
             return self.asobject
-        elif dtype == _INT64_DTYPE:
-            return self.asi8.copy()
-        elif dtype == _TD_DTYPE:
+        elif is_timedelta64_ns_dtype(dtype):
+            if copy is True:
+                return self.copy()
             return self
-        elif dtype.kind == 'm':
-
+        elif is_timedelta64_dtype(dtype):
             # return an index (essentially this is division)
-            result = self.values.astype(dtype)
+            result = self.values.astype(dtype, copy=copy)
             if self.hasnans:
                 return Index(self._maybe_mask_results(result,
                                                       convert='float64'),
                              name=self.name)
-
             return Index(result.astype('i8'), name=self.name)
-
-        else:  # pragma: no cover
-            raise ValueError('Cannot cast TimedeltaIndex to dtype %s' % dtype)
+        elif is_integer_dtype(dtype):
+            return Index(self.values.astype('i8', copy=copy), dtype='i8',
+                         name=self.name)
+        raise ValueError('Cannot cast TimedeltaIndex to dtype %s' % dtype)

     def union(self, other):
         """
diff --git a/pandas/tseries/tests/test_base.py b/pandas/tseries/tests/test_base.py
index 2077409f4afec..7077a23d5abcb 100644
--- a/pandas/tseries/tests/test_base.py
+++ b/pandas/tseries/tests/test_base.py
@@ -50,39 +50,6 @@ def test_ops_properties_basic(self):
             self.assertEqual(s.day, 10)
             self.assertRaises(AttributeError, lambda: s.weekday)

-    def test_astype_str(self):
-        # test astype string
-        #10442
-        result = date_range('2012-01-01', periods=4,
-                            name='test_name').astype(str)
-        expected = Index(['2012-01-01', '2012-01-02', '2012-01-03',
-                          '2012-01-04'], name='test_name', dtype=object)
-        tm.assert_index_equal(result, expected)
-
-        # test astype string with tz and name
-        result = date_range('2012-01-01', periods=3, name='test_name',
-                            tz='US/Eastern').astype(str)
-        expected = Index(['2012-01-01 00:00:00-05:00',
-                          '2012-01-02 00:00:00-05:00',
-                          '2012-01-03 00:00:00-05:00'],
-                         name='test_name', dtype=object)
-        tm.assert_index_equal(result, expected)
-
-        # test astype string with freqH and name
-        result = date_range('1/1/2011', periods=3, freq='H',
-                            name='test_name').astype(str)
-        expected = Index(['2011-01-01 00:00:00', '2011-01-01 01:00:00',
-                          '2011-01-01 02:00:00'],
-                         name='test_name', dtype=object)
-        tm.assert_index_equal(result, expected)
-
-        # test astype string with freqH and timezone
-        result = date_range('3/6/2012 00:00', periods=2, freq='H',
-                            tz='Europe/London', name='test_name').astype(str)
-        expected = Index(['2012-03-06 00:00:00+00:00',
-                          '2012-03-06 01:00:00+00:00'],
-                         dtype=object, name='test_name')
-        tm.assert_index_equal(result, expected)
-
     def test_asobject_tolist(self):
         idx = pd.date_range(start='2013-01-01', periods=4, freq='M',
                             name='idx')
@@ -95,7 +62,7 @@ def test_asobject_tolist(self):

         self.assertTrue(isinstance(result, Index))
         self.assertEqual(result.dtype, object)
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)
         self.assertEqual(result.name, expected.name)
         self.assertEqual(idx.tolist(), expected_list)

@@ -109,7 +76,7 @@ def test_asobject_tolist(self):
         result = idx.asobject
         self.assertTrue(isinstance(result, Index))
         self.assertEqual(result.dtype, object)
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)
         self.assertEqual(result.name, expected.name)
         self.assertEqual(idx.tolist(), expected_list)

@@ -122,7 +89,7 @@ def test_asobject_tolist(self):
         result = idx.asobject
         self.assertTrue(isinstance(result, Index))
         self.assertEqual(result.dtype, object)
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)
         self.assertEqual(result.name, expected.name)
         self.assertEqual(idx.tolist(), expected_list)

@@ -759,7 +726,7 @@ def test_asobject_tolist(self):

         self.assertTrue(isinstance(result, Index))
         self.assertEqual(result.dtype, object)
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)
         self.assertEqual(result.name, expected.name)
         self.assertEqual(idx.tolist(), expected_list)

@@ -771,7 +738,7 @@ def test_asobject_tolist(self):
         result = idx.asobject
         self.assertTrue(isinstance(result, Index))
         self.assertEqual(result.dtype, object)
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)
         self.assertEqual(result.name, expected.name)
         self.assertEqual(idx.tolist(), expected_list)

@@ -1522,7 +1489,7 @@ def test_asobject_tolist(self):
         result = idx.asobject
         self.assertTrue(isinstance(result, Index))
         self.assertEqual(result.dtype, object)
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)
         self.assertEqual(result.name, expected.name)
         self.assertEqual(idx.tolist(), expected_list)
diff --git a/pandas/tseries/tests/test_bin_groupby.py b/pandas/tseries/tests/test_bin_groupby.py
new file mode 100644
index 0000000000000..6b6c468b7c391
--- /dev/null
+++ b/pandas/tseries/tests/test_bin_groupby.py
@@ -0,0 +1,151 @@
+# -*- coding: utf-8 -*-
+
+from numpy import nan
+import numpy as np
+
+from pandas import Index, isnull
+from pandas.util.testing import assert_almost_equal
+import pandas.util.testing as tm
+import pandas.lib as lib
+import pandas.algos as algos
+from pandas.core import common as com
+
+
+def test_series_grouper():
+    from pandas import Series
+    obj = Series(np.random.randn(10))
+    dummy = obj[:0]
+
+    labels = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1, 1], dtype=np.int64)
+
+    grouper = lib.SeriesGrouper(obj, np.mean, labels, 2, dummy)
+    result, counts = grouper.get_result()
+
+    expected = np.array([obj[3:6].mean(), obj[6:].mean()])
+    assert_almost_equal(result, expected)
+
+    exp_counts = np.array([3, 4], dtype=np.int64)
+    assert_almost_equal(counts, exp_counts)
+
+
+def test_series_bin_grouper():
+    from pandas import Series
+    obj = Series(np.random.randn(10))
+    dummy = obj[:0]
+
+    bins = np.array([3, 6])
+
+    grouper = lib.SeriesBinGrouper(obj, np.mean, bins, dummy)
+    result, counts = grouper.get_result()
+
+    expected = np.array([obj[:3].mean(), obj[3:6].mean(), obj[6:].mean()])
+    assert_almost_equal(result, expected)
+
+    exp_counts = np.array([3, 3, 4], dtype=np.int64)
+    assert_almost_equal(counts, exp_counts)
+
+
+class TestBinGroupers(tm.TestCase):
+    _multiprocess_can_split_ = True
+
+    def setUp(self):
+        self.obj = np.random.randn(10, 1)
+        self.labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2], dtype=np.int64)
+        self.bins = np.array([3, 6], dtype=np.int64)
+
+    def test_generate_bins(self):
+        from pandas.core.groupby import generate_bins_generic
+        values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
+        binner = np.array([0, 3, 6, 9], dtype=np.int64)
+
+        for func in [lib.generate_bins_dt64, generate_bins_generic]:
+            bins = func(values, binner, closed='left')
+            assert ((bins == np.array([2, 5, 6])).all())
+
+            bins = func(values, binner, closed='right')
+            assert ((bins == np.array([3, 6, 6])).all())
+
+        for func in [lib.generate_bins_dt64, generate_bins_generic]:
+            values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
+            binner = np.array([0, 3, 6], dtype=np.int64)
+
+            bins = func(values, binner, closed='right')
+            assert ((bins == np.array([3, 6])).all())
+
+        self.assertRaises(ValueError, generate_bins_generic, values, [],
+                          'right')
+        self.assertRaises(ValueError, generate_bins_generic, values[:0],
+                          binner, 'right')
+
+        self.assertRaises(ValueError, generate_bins_generic, values, [4],
+                          'right')
+        self.assertRaises(ValueError, generate_bins_generic, values, [-3, -1],
+                          'right')
+
+
+def test_group_ohlc():
+    def _check(dtype):
+        obj = np.array(np.random.randn(20), dtype=dtype)
+
+        bins = np.array([6, 12, 20])
+        out = np.zeros((3, 4), dtype)
+        counts = np.zeros(len(out), dtype=np.int64)
+        labels = com._ensure_int64(np.repeat(np.arange(3),
+                                             np.diff(np.r_[0, bins])))
+
+        func = getattr(algos, 'group_ohlc_%s' % dtype)
+        func(out, counts, obj[:, None], labels)
+
+        def _ohlc(group):
+            if isnull(group).all():
+                return np.repeat(nan, 4)
+            return [group[0], group.max(), group.min(), group[-1]]
+
+        expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]),
+                             _ohlc(obj[12:])])
+
+        assert_almost_equal(out, expected)
+        tm.assert_numpy_array_equal(counts,
+                                    np.array([6, 6, 8], dtype=np.int64))
+
+        obj[:6] = nan
+        func(out, counts, obj[:, None], labels)
+        expected[0] = nan
+        assert_almost_equal(out, expected)
+
+    _check('float32')
+    _check('float64')
+
+
+class TestMoments(tm.TestCase):
+    pass
+
+
+class TestReducer(tm.TestCase):
+    def test_int_index(self):
+        from pandas.core.series import Series
+
+        arr = np.random.randn(100, 4)
+        result = lib.reduce(arr, np.sum, labels=Index(np.arange(4)))
+        expected = arr.sum(0)
+        assert_almost_equal(result, expected)
+
+        result = lib.reduce(arr, np.sum, axis=1, labels=Index(np.arange(100)))
+        expected = arr.sum(1)
+        assert_almost_equal(result, expected)
+
+        dummy = Series(0., index=np.arange(100))
+        result = lib.reduce(arr, np.sum, dummy=dummy,
+                            labels=Index(np.arange(4)))
+        expected = arr.sum(0)
+        assert_almost_equal(result, expected)
+
+        dummy = Series(0., index=np.arange(4))
+        result = lib.reduce(arr, np.sum, axis=1, dummy=dummy,
+                            labels=Index(np.arange(100)))
+        expected = arr.sum(1)
+        assert_almost_equal(result, expected)
+
+        result = lib.reduce(arr, np.sum, axis=1, dummy=dummy,
+                            labels=Index(np.arange(100)))
+        assert_almost_equal(result, expected)
diff --git a/pandas/tseries/tests/test_converter.py b/pandas/tseries/tests/test_converter.py
index f2c20f7d3111d..ceb8660efb9cd 100644
--- a/pandas/tseries/tests/test_converter.py
+++ b/pandas/tseries/tests/test_converter.py
@@ -3,7 +3,6 @@

 import nose
 import numpy as np
-from numpy.testing import assert_almost_equal as np_assert_almost_equal
 from pandas import Timestamp, Period
 from pandas.compat import u
 import pandas.util.testing as tm
@@ -69,14 +68,14 @@ def test_conversion_float(self):
         rs = self.dtc.convert(
             Timestamp('2012-1-1 01:02:03', tz='UTC'), None, None)
         xp = converter.dates.date2num(Timestamp('2012-1-1 01:02:03', tz='UTC'))
-        np_assert_almost_equal(rs, xp, decimals)
+        tm.assert_almost_equal(rs, xp, decimals)

         rs = self.dtc.convert(
             Timestamp('2012-1-1 09:02:03', tz='Asia/Hong_Kong'), None, None)
-        np_assert_almost_equal(rs, xp, decimals)
+        tm.assert_almost_equal(rs, xp, decimals)

         rs = self.dtc.convert(datetime(2012, 1, 1, 1, 2, 3), None, None)
-        np_assert_almost_equal(rs, xp, decimals)
+        tm.assert_almost_equal(rs, xp, decimals)

     def test_time_formatter(self):
         self.tc(90000)
@@ -88,7 +87,7 @@ def test_dateindex_conversion(self):
             dateindex = tm.makeDateIndex(k=10, freq=freq)
             rs = self.dtc.convert(dateindex, None, None)
             xp = converter.dates.date2num(dateindex._mpl_repr())
-            np_assert_almost_equal(rs, xp, decimals)
+            tm.assert_almost_equal(rs, xp, decimals)

     def test_resolution(self):
         def _assert_less(ts1, ts2):
diff --git a/pandas/tseries/tests/test_daterange.py b/pandas/tseries/tests/test_daterange.py
index 6e572289a3cae..6ad33b6b973de 100644
--- a/pandas/tseries/tests/test_daterange.py
+++ b/pandas/tseries/tests/test_daterange.py
@@ -25,15 +25,16 @@ def eq_gen_range(kwargs, expected):


 class TestGenRangeGeneration(tm.TestCase):
+
     def test_generate(self):
         rng1 = list(generate_range(START, END, offset=datetools.bday))
         rng2 = list(generate_range(START, END, time_rule='B'))
-        self.assert_numpy_array_equal(rng1, rng2)
+        self.assertEqual(rng1, rng2)

     def test_generate_cday(self):
         rng1 = list(generate_range(START, END, offset=datetools.cday))
         rng2 = list(generate_range(START, END, time_rule='C'))
-        self.assert_numpy_array_equal(rng1, rng2)
+        self.assertEqual(rng1, rng2)

     def test_1(self):
         eq_gen_range(dict(start=datetime(2009, 3, 25), periods=2),
@@ -68,8 +69,8 @@ def test_precision_finer_than_offset(self):
                                   freq='Q-DEC', tz=None)
         expected2 = DatetimeIndex(expected2_list, dtype='datetime64[ns]',
                                   freq='W-SUN', tz=None)
-        self.assertTrue(result1.equals(expected1))
-        self.assertTrue(result2.equals(expected2))
+        self.assert_index_equal(result1, expected1)
+        self.assert_index_equal(result2, expected2)


 class TestDateRange(tm.TestCase):
@@ -140,7 +141,7 @@ def test_comparison(self):
     def test_copy(self):
         cp = self.rng.copy()
         repr(cp)
-        self.assertTrue(cp.equals(self.rng))
+        self.assert_index_equal(cp, self.rng)

     def test_repr(self):
         # only really care that it works
@@ -148,7 +149,9 @@ def test_repr(self):

     def test_getitem(self):
         smaller = self.rng[:5]
-        self.assert_numpy_array_equal(smaller, self.rng.view(np.ndarray)[:5])
+        exp = DatetimeIndex(self.rng.view(np.ndarray)[:5])
+        self.assert_index_equal(smaller, exp)
+
         self.assertEqual(smaller.offset, self.rng.offset)

         sliced = self.rng[::5]
@@ -211,7 +214,7 @@ def test_union(self):
         tm.assertIsInstance(the_union, DatetimeIndex)

         # order does not matter
-        self.assert_numpy_array_equal(right.union(left), the_union)
+        tm.assert_index_equal(right.union(left), the_union)

         # overlapping, but different offset
         rng = date_range(START, END, freq=datetools.bmonthEnd)
@@ -256,13 +259,13 @@ def test_union_not_cacheable(self):
         rng1 = rng[10:]
         rng2 = rng[:25]
         the_union = rng1.union(rng2)
-        self.assertTrue(the_union.equals(rng))
+        self.assert_index_equal(the_union, rng)

         rng1 = rng[10:]
         rng2 = rng[15:35]
         the_union = rng1.union(rng2)
         expected = rng[10:]
-        self.assertTrue(the_union.equals(expected))
+        self.assert_index_equal(the_union, expected)

     def test_intersection(self):
         rng = date_range('1/1/2000', periods=50, freq=datetools.Minute())
@@ -270,24 +273,24 @@ def test_intersection(self):
         rng2 = rng[:25]
         the_int = rng1.intersection(rng2)
         expected = rng[10:25]
-        self.assertTrue(the_int.equals(expected))
+        self.assert_index_equal(the_int, expected)
         tm.assertIsInstance(the_int, DatetimeIndex)
         self.assertEqual(the_int.offset, rng.offset)

         the_int = rng1.intersection(rng2.view(DatetimeIndex))
-        self.assertTrue(the_int.equals(expected))
+        self.assert_index_equal(the_int, expected)

         # non-overlapping
         the_int = rng[:10].intersection(rng[10:])
         expected = DatetimeIndex([])
-        self.assertTrue(the_int.equals(expected))
+        self.assert_index_equal(the_int, expected)

     def test_intersection_bug(self):
         # GH #771
         a = bdate_range('11/30/2011', '12/31/2011')
         b = bdate_range('12/10/2011', '12/20/2011')
         result = a.intersection(b)
-        self.assertTrue(result.equals(b))
+        self.assert_index_equal(result, b)

     def test_summary(self):
         self.rng.summary()
@@ -364,7 +367,7 @@ def test_range_bug(self):
         start = datetime(2011, 1, 1)
         exp_values = [start + i * offset for i in range(5)]
-        self.assert_numpy_array_equal(result, DatetimeIndex(exp_values))
+        tm.assert_index_equal(result, DatetimeIndex(exp_values))

     def test_range_tz_pytz(self):
         # GH 2906
@@ -494,8 +497,8 @@ def test_range_closed(self):
             if begin == closed[0]:
                 expected_right = closed[1:]

-            self.assertTrue(expected_left.equals(left))
-            self.assertTrue(expected_right.equals(right))
+            self.assert_index_equal(expected_left, left)
+            self.assert_index_equal(expected_right, right)

     def test_range_closed_with_tz_aware_start_end(self):
         # GH12409
@@ -514,8 +517,8 @@ def test_range_closed_with_tz_aware_start_end(self):
             if begin == closed[0]:
                 expected_right = closed[1:]

-            self.assertTrue(expected_left.equals(left))
-            self.assertTrue(expected_right.equals(right))
+            self.assert_index_equal(expected_left, left)
+            self.assert_index_equal(expected_right, right)

         # test with default frequency, UTC
         begin = Timestamp('2011/1/1', tz='UTC')
@@ -546,9 +549,9 @@ def test_range_closed_boundary(self):
             expected_right = both_boundary[1:]
             expected_left = both_boundary[:-1]

-            self.assertTrue(right_boundary.equals(expected_right))
-            self.assertTrue(left_boundary.equals(expected_left))
-            self.assertTrue(both_boundary.equals(expected_both))
+            self.assert_index_equal(right_boundary, expected_right)
+            self.assert_index_equal(left_boundary, expected_left)
+            self.assert_index_equal(both_boundary, expected_both)

     def test_years_only(self):
         # GH 6961
@@ -570,8 +573,8 @@ def test_freq_divides_end_in_nanos(self):
                                       '2005-01-13 15:45:00'],
                                      dtype='datetime64[ns]', freq='345T',
                                      tz=None)
-        self.assertTrue(result_1.equals(expected_1))
-        self.assertTrue(result_2.equals(expected_2))
+        self.assert_index_equal(result_1, expected_1)
+        self.assert_index_equal(result_2, expected_2)


 class TestCustomDateRange(tm.TestCase):
@@ -613,7 +616,7 @@ def test_comparison(self):
     def test_copy(self):
         cp = self.rng.copy()
         repr(cp)
-        self.assertTrue(cp.equals(self.rng))
+        self.assert_index_equal(cp, self.rng)

     def test_repr(self):
         # only really care that it works
@@ -621,7 +624,8 @@ def test_repr(self):

     def test_getitem(self):
         smaller = self.rng[:5]
-        self.assert_numpy_array_equal(smaller, self.rng.view(np.ndarray)[:5])
+        exp = DatetimeIndex(self.rng.view(np.ndarray)[:5])
+        self.assert_index_equal(smaller, exp)
         self.assertEqual(smaller.offset, self.rng.offset)

         sliced = self.rng[::5]
@@ -686,7 +690,7 @@ def test_union(self):
         tm.assertIsInstance(the_union, DatetimeIndex)

         # order does not matter
-        self.assert_numpy_array_equal(right.union(left), the_union)
+        self.assert_index_equal(right.union(left), the_union)

         # overlapping, but different offset
         rng = date_range(START, END, freq=datetools.bmonthEnd)
@@ -731,7 +735,7 @@ def test_intersection_bug(self):
         a = cdate_range('11/30/2011', '12/31/2011')
         b = cdate_range('12/10/2011', '12/20/2011')
         result = a.intersection(b)
-        self.assertTrue(result.equals(b))
+        self.assert_index_equal(result, b)

     def test_summary(self):
         self.rng.summary()
@@ -783,25 +787,25 @@ def test_daterange_bug_456(self):

     def test_cdaterange(self):
         rng = cdate_range('2013-05-01', periods=3)
         xp = DatetimeIndex(['2013-05-01', '2013-05-02', '2013-05-03'])
-        self.assertTrue(xp.equals(rng))
+        self.assert_index_equal(xp, rng)

     def test_cdaterange_weekmask(self):
         rng = cdate_range('2013-05-01', periods=3,
                           weekmask='Sun Mon Tue Wed Thu')
         xp = DatetimeIndex(['2013-05-01', '2013-05-02', '2013-05-05'])
-        self.assertTrue(xp.equals(rng))
+        self.assert_index_equal(xp, rng)

     def test_cdaterange_holidays(self):
         rng = cdate_range('2013-05-01', periods=3, holidays=['2013-05-01'])
         xp = DatetimeIndex(['2013-05-02', '2013-05-03', '2013-05-06'])
-        self.assertTrue(xp.equals(rng))
+        self.assert_index_equal(xp, rng)

     def test_cdaterange_weekmask_and_holidays(self):
         rng = cdate_range('2013-05-01', periods=3,
                           weekmask='Sun Mon Tue Wed Thu',
                           holidays=['2013-05-01'])
         xp = DatetimeIndex(['2013-05-02', '2013-05-05', '2013-05-06'])
-        self.assertTrue(xp.equals(rng))
+        self.assert_index_equal(xp, rng)


 if __name__ == '__main__':
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index 0e91e396965fa..ec88acc421cdb 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -4551,7 +4551,7 @@ def test_all_offset_classes(self):
         for offset, test_values in iteritems(tests):
             first = Timestamp(test_values[0], tz='US/Eastern') + offset()
             second = Timestamp(test_values[1], tz='US/Eastern')
-            self.assertEqual(first, second, str(offset))
+            self.assertEqual(first, second, msg=str(offset))


 if __name__ == '__main__':
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 740a158c52f87..de23306c80b71 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -8,9 +8,7 @@

 from datetime import datetime, date, timedelta

-from numpy.ma.testutils import assert_equal
-
-from pandas import Timestamp
+from pandas import Timestamp, _period
 from pandas.tseries.frequencies import MONTHS, DAYS, _period_code_map
 from pandas.tseries.period import Period, PeriodIndex, period_range
 from pandas.tseries.index import DatetimeIndex, date_range, Index
@@ -28,8 +26,6 @@
 from pandas import (Series, DataFrame,
                     _np_version_under1p9, _np_version_under1p12)
 from pandas import tslib
-from pandas.util.testing import (assert_index_equal, assert_series_equal,
-                                 assert_almost_equal, assertRaisesRegexp)
 import pandas.util.testing as tm

@@ -492,8 +488,8 @@ def test_sub_delta(self):
         result = left - right
         self.assertEqual(result, 4)

-        self.assertRaises(ValueError, left.__sub__,
-                          Period('2007-01', freq='M'))
+        with self.assertRaises(period.IncompatibleFrequency):
+            left - Period('2007-01', freq='M')

     def test_to_timestamp(self):
         p = Period('1982', freq='A')
@@ -625,7 +621,7 @@ def _ex(*args):
     def test_properties_annually(self):
         # Test properties on Periods with annually frequency.
         a_date = Period(freq='A', year=2007)
-        assert_equal(a_date.year, 2007)
+        self.assertEqual(a_date.year, 2007)

     def test_properties_quarterly(self):
         # Test properties on Periods with daily frequency.
@@ -635,78 +631,78 @@ def test_properties_quarterly(self):
         #
         for x in range(3):
             for qd in (qedec_date, qejan_date, qejun_date):
-                assert_equal((qd + x).qyear, 2007)
-                assert_equal((qd + x).quarter, x + 1)
+                self.assertEqual((qd + x).qyear, 2007)
+                self.assertEqual((qd + x).quarter, x + 1)

     def test_properties_monthly(self):
         # Test properties on Periods with daily frequency.
         m_date = Period(freq='M', year=2007, month=1)
         for x in range(11):
             m_ival_x = m_date + x
-            assert_equal(m_ival_x.year, 2007)
+            self.assertEqual(m_ival_x.year, 2007)
             if 1 <= x + 1 <= 3:
-                assert_equal(m_ival_x.quarter, 1)
+                self.assertEqual(m_ival_x.quarter, 1)
             elif 4 <= x + 1 <= 6:
-                assert_equal(m_ival_x.quarter, 2)
+                self.assertEqual(m_ival_x.quarter, 2)
             elif 7 <= x + 1 <= 9:
-                assert_equal(m_ival_x.quarter, 3)
+                self.assertEqual(m_ival_x.quarter, 3)
             elif 10 <= x + 1 <= 12:
-                assert_equal(m_ival_x.quarter, 4)
-            assert_equal(m_ival_x.month, x + 1)
+                self.assertEqual(m_ival_x.quarter, 4)
+            self.assertEqual(m_ival_x.month, x + 1)

     def test_properties_weekly(self):
         # Test properties on Periods with daily frequency.
         w_date = Period(freq='W', year=2007, month=1, day=7)
         #
-        assert_equal(w_date.year, 2007)
-        assert_equal(w_date.quarter, 1)
-        assert_equal(w_date.month, 1)
-        assert_equal(w_date.week, 1)
-        assert_equal((w_date - 1).week, 52)
-        assert_equal(w_date.days_in_month, 31)
-        assert_equal(Period(freq='W', year=2012,
-                            month=2, day=1).days_in_month, 29)
+        self.assertEqual(w_date.year, 2007)
+        self.assertEqual(w_date.quarter, 1)
+        self.assertEqual(w_date.month, 1)
+        self.assertEqual(w_date.week, 1)
+        self.assertEqual((w_date - 1).week, 52)
+        self.assertEqual(w_date.days_in_month, 31)
+        self.assertEqual(Period(freq='W', year=2012,
+                                month=2, day=1).days_in_month, 29)

     def test_properties_weekly_legacy(self):
         # Test properties on Periods with daily frequency.
         with tm.assert_produces_warning(FutureWarning):
             w_date = Period(freq='WK', year=2007, month=1, day=7)
         #
-        assert_equal(w_date.year, 2007)
-        assert_equal(w_date.quarter, 1)
-        assert_equal(w_date.month, 1)
-        assert_equal(w_date.week, 1)
-        assert_equal((w_date - 1).week, 52)
-        assert_equal(w_date.days_in_month, 31)
+        self.assertEqual(w_date.year, 2007)
+        self.assertEqual(w_date.quarter, 1)
+        self.assertEqual(w_date.month, 1)
+        self.assertEqual(w_date.week, 1)
+        self.assertEqual((w_date - 1).week, 52)
+        self.assertEqual(w_date.days_in_month, 31)

         with tm.assert_produces_warning(FutureWarning):
             exp = Period(freq='WK', year=2012, month=2, day=1)
-        assert_equal(exp.days_in_month, 29)
+        self.assertEqual(exp.days_in_month, 29)

     def test_properties_daily(self):
         # Test properties on Periods with daily frequency.
         b_date = Period(freq='B', year=2007, month=1, day=1)
         #
-        assert_equal(b_date.year, 2007)
-        assert_equal(b_date.quarter, 1)
-        assert_equal(b_date.month, 1)
-        assert_equal(b_date.day, 1)
-        assert_equal(b_date.weekday, 0)
-        assert_equal(b_date.dayofyear, 1)
-        assert_equal(b_date.days_in_month, 31)
-        assert_equal(Period(freq='B', year=2012,
-                            month=2, day=1).days_in_month, 29)
+        self.assertEqual(b_date.year, 2007)
+        self.assertEqual(b_date.quarter, 1)
+        self.assertEqual(b_date.month, 1)
+        self.assertEqual(b_date.day, 1)
+        self.assertEqual(b_date.weekday, 0)
+        self.assertEqual(b_date.dayofyear, 1)
+        self.assertEqual(b_date.days_in_month, 31)
+        self.assertEqual(Period(freq='B', year=2012,
+                                month=2, day=1).days_in_month, 29)
         #
         d_date = Period(freq='D', year=2007, month=1, day=1)
         #
-        assert_equal(d_date.year, 2007)
-        assert_equal(d_date.quarter, 1)
-        assert_equal(d_date.month, 1)
-        assert_equal(d_date.day, 1)
-        assert_equal(d_date.weekday, 0)
-        assert_equal(d_date.dayofyear, 1)
-        assert_equal(d_date.days_in_month, 31)
-        assert_equal(Period(freq='D', year=2012, month=2,
-                            day=1).days_in_month, 29)
+        self.assertEqual(d_date.year, 2007)
+        self.assertEqual(d_date.quarter, 1)
+        self.assertEqual(d_date.month, 1)
+        self.assertEqual(d_date.day, 1)
+        self.assertEqual(d_date.weekday, 0)
+        self.assertEqual(d_date.dayofyear, 1)
+        self.assertEqual(d_date.days_in_month, 31)
+        self.assertEqual(Period(freq='D', year=2012, month=2,
+                                day=1).days_in_month, 29)

     def test_properties_hourly(self):
         # Test properties on Periods with hourly frequency.
@@ -714,50 +710,50 @@ def test_properties_hourly(self):
         h_date2 = Period(freq='2H', year=2007, month=1, day=1, hour=0)

         for h_date in [h_date1, h_date2]:
-            assert_equal(h_date.year, 2007)
-            assert_equal(h_date.quarter, 1)
-            assert_equal(h_date.month, 1)
-            assert_equal(h_date.day, 1)
-            assert_equal(h_date.weekday, 0)
-            assert_equal(h_date.dayofyear, 1)
-            assert_equal(h_date.hour, 0)
-            assert_equal(h_date.days_in_month, 31)
-            assert_equal(Period(freq='H', year=2012, month=2, day=1,
-                                hour=0).days_in_month, 29)
+            self.assertEqual(h_date.year, 2007)
+            self.assertEqual(h_date.quarter, 1)
+            self.assertEqual(h_date.month, 1)
+            self.assertEqual(h_date.day, 1)
+            self.assertEqual(h_date.weekday, 0)
+            self.assertEqual(h_date.dayofyear, 1)
+            self.assertEqual(h_date.hour, 0)
+            self.assertEqual(h_date.days_in_month, 31)
+            self.assertEqual(Period(freq='H', year=2012, month=2, day=1,
+                                    hour=0).days_in_month, 29)

     def test_properties_minutely(self):
         # Test properties on Periods with minutely frequency.
         t_date = Period(freq='Min', year=2007, month=1, day=1, hour=0,
                         minute=0)
         #
-        assert_equal(t_date.quarter, 1)
-        assert_equal(t_date.month, 1)
-        assert_equal(t_date.day, 1)
-        assert_equal(t_date.weekday, 0)
-        assert_equal(t_date.dayofyear, 1)
-        assert_equal(t_date.hour, 0)
-        assert_equal(t_date.minute, 0)
-        assert_equal(t_date.days_in_month, 31)
-        assert_equal(Period(freq='D', year=2012, month=2, day=1, hour=0,
-                            minute=0).days_in_month, 29)
+        self.assertEqual(t_date.quarter, 1)
+        self.assertEqual(t_date.month, 1)
+        self.assertEqual(t_date.day, 1)
+        self.assertEqual(t_date.weekday, 0)
+        self.assertEqual(t_date.dayofyear, 1)
+        self.assertEqual(t_date.hour, 0)
+        self.assertEqual(t_date.minute, 0)
+        self.assertEqual(t_date.days_in_month, 31)
+        self.assertEqual(Period(freq='D', year=2012, month=2, day=1, hour=0,
+                                minute=0).days_in_month, 29)

     def test_properties_secondly(self):
         # Test properties on Periods with secondly frequency.
         s_date = Period(freq='Min', year=2007, month=1, day=1, hour=0,
                         minute=0, second=0)
         #
-        assert_equal(s_date.year, 2007)
-        assert_equal(s_date.quarter, 1)
-        assert_equal(s_date.month, 1)
-        assert_equal(s_date.day, 1)
-        assert_equal(s_date.weekday, 0)
-        assert_equal(s_date.dayofyear, 1)
-        assert_equal(s_date.hour, 0)
-        assert_equal(s_date.minute, 0)
-        assert_equal(s_date.second, 0)
-        assert_equal(s_date.days_in_month, 31)
-        assert_equal(Period(freq='Min', year=2012, month=2, day=1, hour=0,
-                            minute=0, second=0).days_in_month, 29)
+        self.assertEqual(s_date.year, 2007)
+        self.assertEqual(s_date.quarter, 1)
+        self.assertEqual(s_date.month, 1)
+        self.assertEqual(s_date.day, 1)
+        self.assertEqual(s_date.weekday, 0)
+        self.assertEqual(s_date.dayofyear, 1)
+        self.assertEqual(s_date.hour, 0)
+        self.assertEqual(s_date.minute, 0)
+        self.assertEqual(s_date.second, 0)
+        self.assertEqual(s_date.days_in_month, 31)
+        self.assertEqual(Period(freq='Min', year=2012, month=2, day=1, hour=0,
+                                minute=0, second=0).days_in_month, 29)

     def test_properties_nat(self):
         p_nat = Period('NaT', freq='M')
@@ -829,9 +825,13 @@ def test_asfreq_MS(self):
         self.assertEqual(initial.asfreq(freq="M", how="S"),
                          Period('2013-01', 'M'))

-        self.assertRaises(ValueError, initial.asfreq, freq="MS", how="S")
-        tm.assertRaisesRegexp(ValueError, "Unknown freqstr: MS", pd.Period,
-                              '2013-01', 'MS')
+
+        with self.assertRaisesRegexp(ValueError, "Unknown freqstr"):
+            initial.asfreq(freq="MS", how="S")
+
+        with tm.assertRaisesRegexp(ValueError, "Unknown freqstr: MS"):
+            pd.Period('2013-01', 'MS')
+
         self.assertTrue(_period_code_map.get("MS") is None)

@@ -890,35 +890,35 @@ def test_conv_annual(self):
         ival_ANOV_to_D_end = Period(freq='D', year=2007, month=11, day=30)
         ival_ANOV_to_D_start = Period(freq='D', year=2006, month=12, day=1)

-        assert_equal(ival_A.asfreq('Q', 'S'), ival_A_to_Q_start)
-        assert_equal(ival_A.asfreq('Q', 'e'), ival_A_to_Q_end)
-        assert_equal(ival_A.asfreq('M', 's'), ival_A_to_M_start)
-        assert_equal(ival_A.asfreq('M', 'E'), ival_A_to_M_end)
-        assert_equal(ival_A.asfreq('W', 'S'), ival_A_to_W_start)
-        assert_equal(ival_A.asfreq('W', 'E'), ival_A_to_W_end)
-        assert_equal(ival_A.asfreq('B', 'S'), ival_A_to_B_start)
-        assert_equal(ival_A.asfreq('B', 'E'), ival_A_to_B_end)
-        assert_equal(ival_A.asfreq('D', 'S'), ival_A_to_D_start)
-        assert_equal(ival_A.asfreq('D', 'E'), ival_A_to_D_end)
-        assert_equal(ival_A.asfreq('H', 'S'), ival_A_to_H_start)
-        assert_equal(ival_A.asfreq('H', 'E'), ival_A_to_H_end)
-        assert_equal(ival_A.asfreq('min', 'S'), ival_A_to_T_start)
-        assert_equal(ival_A.asfreq('min', 'E'), ival_A_to_T_end)
-        assert_equal(ival_A.asfreq('T', 'S'), ival_A_to_T_start)
-        assert_equal(ival_A.asfreq('T', 'E'), ival_A_to_T_end)
-        assert_equal(ival_A.asfreq('S', 'S'), ival_A_to_S_start)
-        assert_equal(ival_A.asfreq('S', 'E'), ival_A_to_S_end)
-
-        assert_equal(ival_AJAN.asfreq('D', 'S'), ival_AJAN_to_D_start)
-        assert_equal(ival_AJAN.asfreq('D', 'E'), ival_AJAN_to_D_end)
-
-        assert_equal(ival_AJUN.asfreq('D', 'S'), ival_AJUN_to_D_start)
-        assert_equal(ival_AJUN.asfreq('D', 'E'), ival_AJUN_to_D_end)
-
-        assert_equal(ival_ANOV.asfreq('D', 'S'), ival_ANOV_to_D_start)
-        assert_equal(ival_ANOV.asfreq('D', 'E'), ival_ANOV_to_D_end)
-
-        assert_equal(ival_A.asfreq('A'), ival_A)
+        self.assertEqual(ival_A.asfreq('Q', 'S'), ival_A_to_Q_start)
+        self.assertEqual(ival_A.asfreq('Q', 'e'), ival_A_to_Q_end)
+        self.assertEqual(ival_A.asfreq('M', 's'), ival_A_to_M_start)
+        self.assertEqual(ival_A.asfreq('M', 'E'), ival_A_to_M_end)
+        self.assertEqual(ival_A.asfreq('W', 'S'), ival_A_to_W_start)
+        self.assertEqual(ival_A.asfreq('W', 'E'), ival_A_to_W_end)
+        self.assertEqual(ival_A.asfreq('B', 'S'), ival_A_to_B_start)
+        self.assertEqual(ival_A.asfreq('B', 'E'), ival_A_to_B_end)
+        self.assertEqual(ival_A.asfreq('D', 'S'), ival_A_to_D_start)
+        self.assertEqual(ival_A.asfreq('D', 'E'), ival_A_to_D_end)
+        self.assertEqual(ival_A.asfreq('H', 'S'), ival_A_to_H_start)
+        self.assertEqual(ival_A.asfreq('H', 'E'), ival_A_to_H_end)
+        self.assertEqual(ival_A.asfreq('min', 'S'), ival_A_to_T_start)
+        self.assertEqual(ival_A.asfreq('min', 'E'), ival_A_to_T_end)
+        self.assertEqual(ival_A.asfreq('T', 'S'), ival_A_to_T_start)
+        self.assertEqual(ival_A.asfreq('T', 'E'), ival_A_to_T_end)
+        self.assertEqual(ival_A.asfreq('S', 'S'), ival_A_to_S_start)
+        self.assertEqual(ival_A.asfreq('S', 'E'), ival_A_to_S_end)
+
+        self.assertEqual(ival_AJAN.asfreq('D', 'S'), ival_AJAN_to_D_start)
+        self.assertEqual(ival_AJAN.asfreq('D', 'E'), ival_AJAN_to_D_end)
+
+        self.assertEqual(ival_AJUN.asfreq('D', 'S'), ival_AJUN_to_D_start)
+        self.assertEqual(ival_AJUN.asfreq('D', 'E'), ival_AJUN_to_D_end)
+
+        self.assertEqual(ival_ANOV.asfreq('D', 'S'), ival_ANOV_to_D_start)
+        self.assertEqual(ival_ANOV.asfreq('D', 'E'), ival_ANOV_to_D_end)
+
+        self.assertEqual(ival_A.asfreq('A'), ival_A)

     def test_conv_quarterly(self):
         # frequency conversion tests: from Quarterly Frequency
@@ -955,30 +955,30 @@ def test_conv_quarterly(self):
         ival_QEJUN_to_D_start = Period(freq='D',
year=2006, month=7, day=1) ival_QEJUN_to_D_end = Period(freq='D', year=2006, month=9, day=30) - assert_equal(ival_Q.asfreq('A'), ival_Q_to_A) - assert_equal(ival_Q_end_of_year.asfreq('A'), ival_Q_to_A) - - assert_equal(ival_Q.asfreq('M', 'S'), ival_Q_to_M_start) - assert_equal(ival_Q.asfreq('M', 'E'), ival_Q_to_M_end) - assert_equal(ival_Q.asfreq('W', 'S'), ival_Q_to_W_start) - assert_equal(ival_Q.asfreq('W', 'E'), ival_Q_to_W_end) - assert_equal(ival_Q.asfreq('B', 'S'), ival_Q_to_B_start) - assert_equal(ival_Q.asfreq('B', 'E'), ival_Q_to_B_end) - assert_equal(ival_Q.asfreq('D', 'S'), ival_Q_to_D_start) - assert_equal(ival_Q.asfreq('D', 'E'), ival_Q_to_D_end) - assert_equal(ival_Q.asfreq('H', 'S'), ival_Q_to_H_start) - assert_equal(ival_Q.asfreq('H', 'E'), ival_Q_to_H_end) - assert_equal(ival_Q.asfreq('Min', 'S'), ival_Q_to_T_start) - assert_equal(ival_Q.asfreq('Min', 'E'), ival_Q_to_T_end) - assert_equal(ival_Q.asfreq('S', 'S'), ival_Q_to_S_start) - assert_equal(ival_Q.asfreq('S', 'E'), ival_Q_to_S_end) - - assert_equal(ival_QEJAN.asfreq('D', 'S'), ival_QEJAN_to_D_start) - assert_equal(ival_QEJAN.asfreq('D', 'E'), ival_QEJAN_to_D_end) - assert_equal(ival_QEJUN.asfreq('D', 'S'), ival_QEJUN_to_D_start) - assert_equal(ival_QEJUN.asfreq('D', 'E'), ival_QEJUN_to_D_end) - - assert_equal(ival_Q.asfreq('Q'), ival_Q) + self.assertEqual(ival_Q.asfreq('A'), ival_Q_to_A) + self.assertEqual(ival_Q_end_of_year.asfreq('A'), ival_Q_to_A) + + self.assertEqual(ival_Q.asfreq('M', 'S'), ival_Q_to_M_start) + self.assertEqual(ival_Q.asfreq('M', 'E'), ival_Q_to_M_end) + self.assertEqual(ival_Q.asfreq('W', 'S'), ival_Q_to_W_start) + self.assertEqual(ival_Q.asfreq('W', 'E'), ival_Q_to_W_end) + self.assertEqual(ival_Q.asfreq('B', 'S'), ival_Q_to_B_start) + self.assertEqual(ival_Q.asfreq('B', 'E'), ival_Q_to_B_end) + self.assertEqual(ival_Q.asfreq('D', 'S'), ival_Q_to_D_start) + self.assertEqual(ival_Q.asfreq('D', 'E'), ival_Q_to_D_end) + self.assertEqual(ival_Q.asfreq('H', 'S'), 
ival_Q_to_H_start) + self.assertEqual(ival_Q.asfreq('H', 'E'), ival_Q_to_H_end) + self.assertEqual(ival_Q.asfreq('Min', 'S'), ival_Q_to_T_start) + self.assertEqual(ival_Q.asfreq('Min', 'E'), ival_Q_to_T_end) + self.assertEqual(ival_Q.asfreq('S', 'S'), ival_Q_to_S_start) + self.assertEqual(ival_Q.asfreq('S', 'E'), ival_Q_to_S_end) + + self.assertEqual(ival_QEJAN.asfreq('D', 'S'), ival_QEJAN_to_D_start) + self.assertEqual(ival_QEJAN.asfreq('D', 'E'), ival_QEJAN_to_D_end) + self.assertEqual(ival_QEJUN.asfreq('D', 'S'), ival_QEJUN_to_D_start) + self.assertEqual(ival_QEJUN.asfreq('D', 'E'), ival_QEJUN_to_D_end) + + self.assertEqual(ival_Q.asfreq('Q'), ival_Q) def test_conv_monthly(self): # frequency conversion tests: from Monthly Frequency @@ -1005,25 +1005,25 @@ def test_conv_monthly(self): ival_M_to_S_end = Period(freq='S', year=2007, month=1, day=31, hour=23, minute=59, second=59) - assert_equal(ival_M.asfreq('A'), ival_M_to_A) - assert_equal(ival_M_end_of_year.asfreq('A'), ival_M_to_A) - assert_equal(ival_M.asfreq('Q'), ival_M_to_Q) - assert_equal(ival_M_end_of_quarter.asfreq('Q'), ival_M_to_Q) - - assert_equal(ival_M.asfreq('W', 'S'), ival_M_to_W_start) - assert_equal(ival_M.asfreq('W', 'E'), ival_M_to_W_end) - assert_equal(ival_M.asfreq('B', 'S'), ival_M_to_B_start) - assert_equal(ival_M.asfreq('B', 'E'), ival_M_to_B_end) - assert_equal(ival_M.asfreq('D', 'S'), ival_M_to_D_start) - assert_equal(ival_M.asfreq('D', 'E'), ival_M_to_D_end) - assert_equal(ival_M.asfreq('H', 'S'), ival_M_to_H_start) - assert_equal(ival_M.asfreq('H', 'E'), ival_M_to_H_end) - assert_equal(ival_M.asfreq('Min', 'S'), ival_M_to_T_start) - assert_equal(ival_M.asfreq('Min', 'E'), ival_M_to_T_end) - assert_equal(ival_M.asfreq('S', 'S'), ival_M_to_S_start) - assert_equal(ival_M.asfreq('S', 'E'), ival_M_to_S_end) - - assert_equal(ival_M.asfreq('M'), ival_M) + self.assertEqual(ival_M.asfreq('A'), ival_M_to_A) + self.assertEqual(ival_M_end_of_year.asfreq('A'), ival_M_to_A) + 
self.assertEqual(ival_M.asfreq('Q'), ival_M_to_Q) + self.assertEqual(ival_M_end_of_quarter.asfreq('Q'), ival_M_to_Q) + + self.assertEqual(ival_M.asfreq('W', 'S'), ival_M_to_W_start) + self.assertEqual(ival_M.asfreq('W', 'E'), ival_M_to_W_end) + self.assertEqual(ival_M.asfreq('B', 'S'), ival_M_to_B_start) + self.assertEqual(ival_M.asfreq('B', 'E'), ival_M_to_B_end) + self.assertEqual(ival_M.asfreq('D', 'S'), ival_M_to_D_start) + self.assertEqual(ival_M.asfreq('D', 'E'), ival_M_to_D_end) + self.assertEqual(ival_M.asfreq('H', 'S'), ival_M_to_H_start) + self.assertEqual(ival_M.asfreq('H', 'E'), ival_M_to_H_end) + self.assertEqual(ival_M.asfreq('Min', 'S'), ival_M_to_T_start) + self.assertEqual(ival_M.asfreq('Min', 'E'), ival_M_to_T_end) + self.assertEqual(ival_M.asfreq('S', 'S'), ival_M_to_S_start) + self.assertEqual(ival_M.asfreq('S', 'E'), ival_M_to_S_end) + + self.assertEqual(ival_M.asfreq('M'), ival_M) def test_conv_weekly(self): # frequency conversion tests: from Weekly Frequency @@ -1089,43 +1089,45 @@ def test_conv_weekly(self): ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7, hour=23, minute=59, second=59) - assert_equal(ival_W.asfreq('A'), ival_W_to_A) - assert_equal(ival_W_end_of_year.asfreq('A'), ival_W_to_A_end_of_year) - assert_equal(ival_W.asfreq('Q'), ival_W_to_Q) - assert_equal(ival_W_end_of_quarter.asfreq('Q'), - ival_W_to_Q_end_of_quarter) - assert_equal(ival_W.asfreq('M'), ival_W_to_M) - assert_equal(ival_W_end_of_month.asfreq('M'), ival_W_to_M_end_of_month) - - assert_equal(ival_W.asfreq('B', 'S'), ival_W_to_B_start) - assert_equal(ival_W.asfreq('B', 'E'), ival_W_to_B_end) - - assert_equal(ival_W.asfreq('D', 'S'), ival_W_to_D_start) - assert_equal(ival_W.asfreq('D', 'E'), ival_W_to_D_end) - - assert_equal(ival_WSUN.asfreq('D', 'S'), ival_WSUN_to_D_start) - assert_equal(ival_WSUN.asfreq('D', 'E'), ival_WSUN_to_D_end) - assert_equal(ival_WSAT.asfreq('D', 'S'), ival_WSAT_to_D_start) - assert_equal(ival_WSAT.asfreq('D', 'E'), 
ival_WSAT_to_D_end) - assert_equal(ival_WFRI.asfreq('D', 'S'), ival_WFRI_to_D_start) - assert_equal(ival_WFRI.asfreq('D', 'E'), ival_WFRI_to_D_end) - assert_equal(ival_WTHU.asfreq('D', 'S'), ival_WTHU_to_D_start) - assert_equal(ival_WTHU.asfreq('D', 'E'), ival_WTHU_to_D_end) - assert_equal(ival_WWED.asfreq('D', 'S'), ival_WWED_to_D_start) - assert_equal(ival_WWED.asfreq('D', 'E'), ival_WWED_to_D_end) - assert_equal(ival_WTUE.asfreq('D', 'S'), ival_WTUE_to_D_start) - assert_equal(ival_WTUE.asfreq('D', 'E'), ival_WTUE_to_D_end) - assert_equal(ival_WMON.asfreq('D', 'S'), ival_WMON_to_D_start) - assert_equal(ival_WMON.asfreq('D', 'E'), ival_WMON_to_D_end) - - assert_equal(ival_W.asfreq('H', 'S'), ival_W_to_H_start) - assert_equal(ival_W.asfreq('H', 'E'), ival_W_to_H_end) - assert_equal(ival_W.asfreq('Min', 'S'), ival_W_to_T_start) - assert_equal(ival_W.asfreq('Min', 'E'), ival_W_to_T_end) - assert_equal(ival_W.asfreq('S', 'S'), ival_W_to_S_start) - assert_equal(ival_W.asfreq('S', 'E'), ival_W_to_S_end) - - assert_equal(ival_W.asfreq('W'), ival_W) + self.assertEqual(ival_W.asfreq('A'), ival_W_to_A) + self.assertEqual(ival_W_end_of_year.asfreq('A'), + ival_W_to_A_end_of_year) + self.assertEqual(ival_W.asfreq('Q'), ival_W_to_Q) + self.assertEqual(ival_W_end_of_quarter.asfreq('Q'), + ival_W_to_Q_end_of_quarter) + self.assertEqual(ival_W.asfreq('M'), ival_W_to_M) + self.assertEqual(ival_W_end_of_month.asfreq('M'), + ival_W_to_M_end_of_month) + + self.assertEqual(ival_W.asfreq('B', 'S'), ival_W_to_B_start) + self.assertEqual(ival_W.asfreq('B', 'E'), ival_W_to_B_end) + + self.assertEqual(ival_W.asfreq('D', 'S'), ival_W_to_D_start) + self.assertEqual(ival_W.asfreq('D', 'E'), ival_W_to_D_end) + + self.assertEqual(ival_WSUN.asfreq('D', 'S'), ival_WSUN_to_D_start) + self.assertEqual(ival_WSUN.asfreq('D', 'E'), ival_WSUN_to_D_end) + self.assertEqual(ival_WSAT.asfreq('D', 'S'), ival_WSAT_to_D_start) + self.assertEqual(ival_WSAT.asfreq('D', 'E'), ival_WSAT_to_D_end) + 
self.assertEqual(ival_WFRI.asfreq('D', 'S'), ival_WFRI_to_D_start) + self.assertEqual(ival_WFRI.asfreq('D', 'E'), ival_WFRI_to_D_end) + self.assertEqual(ival_WTHU.asfreq('D', 'S'), ival_WTHU_to_D_start) + self.assertEqual(ival_WTHU.asfreq('D', 'E'), ival_WTHU_to_D_end) + self.assertEqual(ival_WWED.asfreq('D', 'S'), ival_WWED_to_D_start) + self.assertEqual(ival_WWED.asfreq('D', 'E'), ival_WWED_to_D_end) + self.assertEqual(ival_WTUE.asfreq('D', 'S'), ival_WTUE_to_D_start) + self.assertEqual(ival_WTUE.asfreq('D', 'E'), ival_WTUE_to_D_end) + self.assertEqual(ival_WMON.asfreq('D', 'S'), ival_WMON_to_D_start) + self.assertEqual(ival_WMON.asfreq('D', 'E'), ival_WMON_to_D_end) + + self.assertEqual(ival_W.asfreq('H', 'S'), ival_W_to_H_start) + self.assertEqual(ival_W.asfreq('H', 'E'), ival_W_to_H_end) + self.assertEqual(ival_W.asfreq('Min', 'S'), ival_W_to_T_start) + self.assertEqual(ival_W.asfreq('Min', 'E'), ival_W_to_T_end) + self.assertEqual(ival_W.asfreq('S', 'S'), ival_W_to_S_start) + self.assertEqual(ival_W.asfreq('S', 'E'), ival_W_to_S_end) + + self.assertEqual(ival_W.asfreq('W'), ival_W) def test_conv_weekly_legacy(self): # frequency conversion tests: from Weekly Frequency @@ -1204,44 +1206,46 @@ def test_conv_weekly_legacy(self): ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7, hour=23, minute=59, second=59) - assert_equal(ival_W.asfreq('A'), ival_W_to_A) - assert_equal(ival_W_end_of_year.asfreq('A'), ival_W_to_A_end_of_year) - assert_equal(ival_W.asfreq('Q'), ival_W_to_Q) - assert_equal(ival_W_end_of_quarter.asfreq('Q'), - ival_W_to_Q_end_of_quarter) - assert_equal(ival_W.asfreq('M'), ival_W_to_M) - assert_equal(ival_W_end_of_month.asfreq('M'), ival_W_to_M_end_of_month) - - assert_equal(ival_W.asfreq('B', 'S'), ival_W_to_B_start) - assert_equal(ival_W.asfreq('B', 'E'), ival_W_to_B_end) - - assert_equal(ival_W.asfreq('D', 'S'), ival_W_to_D_start) - assert_equal(ival_W.asfreq('D', 'E'), ival_W_to_D_end) - - assert_equal(ival_WSUN.asfreq('D', 'S'), 
ival_WSUN_to_D_start) - assert_equal(ival_WSUN.asfreq('D', 'E'), ival_WSUN_to_D_end) - assert_equal(ival_WSAT.asfreq('D', 'S'), ival_WSAT_to_D_start) - assert_equal(ival_WSAT.asfreq('D', 'E'), ival_WSAT_to_D_end) - assert_equal(ival_WFRI.asfreq('D', 'S'), ival_WFRI_to_D_start) - assert_equal(ival_WFRI.asfreq('D', 'E'), ival_WFRI_to_D_end) - assert_equal(ival_WTHU.asfreq('D', 'S'), ival_WTHU_to_D_start) - assert_equal(ival_WTHU.asfreq('D', 'E'), ival_WTHU_to_D_end) - assert_equal(ival_WWED.asfreq('D', 'S'), ival_WWED_to_D_start) - assert_equal(ival_WWED.asfreq('D', 'E'), ival_WWED_to_D_end) - assert_equal(ival_WTUE.asfreq('D', 'S'), ival_WTUE_to_D_start) - assert_equal(ival_WTUE.asfreq('D', 'E'), ival_WTUE_to_D_end) - assert_equal(ival_WMON.asfreq('D', 'S'), ival_WMON_to_D_start) - assert_equal(ival_WMON.asfreq('D', 'E'), ival_WMON_to_D_end) - - assert_equal(ival_W.asfreq('H', 'S'), ival_W_to_H_start) - assert_equal(ival_W.asfreq('H', 'E'), ival_W_to_H_end) - assert_equal(ival_W.asfreq('Min', 'S'), ival_W_to_T_start) - assert_equal(ival_W.asfreq('Min', 'E'), ival_W_to_T_end) - assert_equal(ival_W.asfreq('S', 'S'), ival_W_to_S_start) - assert_equal(ival_W.asfreq('S', 'E'), ival_W_to_S_end) + self.assertEqual(ival_W.asfreq('A'), ival_W_to_A) + self.assertEqual(ival_W_end_of_year.asfreq('A'), + ival_W_to_A_end_of_year) + self.assertEqual(ival_W.asfreq('Q'), ival_W_to_Q) + self.assertEqual(ival_W_end_of_quarter.asfreq('Q'), + ival_W_to_Q_end_of_quarter) + self.assertEqual(ival_W.asfreq('M'), ival_W_to_M) + self.assertEqual(ival_W_end_of_month.asfreq('M'), + ival_W_to_M_end_of_month) + + self.assertEqual(ival_W.asfreq('B', 'S'), ival_W_to_B_start) + self.assertEqual(ival_W.asfreq('B', 'E'), ival_W_to_B_end) + + self.assertEqual(ival_W.asfreq('D', 'S'), ival_W_to_D_start) + self.assertEqual(ival_W.asfreq('D', 'E'), ival_W_to_D_end) + + self.assertEqual(ival_WSUN.asfreq('D', 'S'), ival_WSUN_to_D_start) + self.assertEqual(ival_WSUN.asfreq('D', 'E'), ival_WSUN_to_D_end) + 
self.assertEqual(ival_WSAT.asfreq('D', 'S'), ival_WSAT_to_D_start) + self.assertEqual(ival_WSAT.asfreq('D', 'E'), ival_WSAT_to_D_end) + self.assertEqual(ival_WFRI.asfreq('D', 'S'), ival_WFRI_to_D_start) + self.assertEqual(ival_WFRI.asfreq('D', 'E'), ival_WFRI_to_D_end) + self.assertEqual(ival_WTHU.asfreq('D', 'S'), ival_WTHU_to_D_start) + self.assertEqual(ival_WTHU.asfreq('D', 'E'), ival_WTHU_to_D_end) + self.assertEqual(ival_WWED.asfreq('D', 'S'), ival_WWED_to_D_start) + self.assertEqual(ival_WWED.asfreq('D', 'E'), ival_WWED_to_D_end) + self.assertEqual(ival_WTUE.asfreq('D', 'S'), ival_WTUE_to_D_start) + self.assertEqual(ival_WTUE.asfreq('D', 'E'), ival_WTUE_to_D_end) + self.assertEqual(ival_WMON.asfreq('D', 'S'), ival_WMON_to_D_start) + self.assertEqual(ival_WMON.asfreq('D', 'E'), ival_WMON_to_D_end) + + self.assertEqual(ival_W.asfreq('H', 'S'), ival_W_to_H_start) + self.assertEqual(ival_W.asfreq('H', 'E'), ival_W_to_H_end) + self.assertEqual(ival_W.asfreq('Min', 'S'), ival_W_to_T_start) + self.assertEqual(ival_W.asfreq('Min', 'E'), ival_W_to_T_end) + self.assertEqual(ival_W.asfreq('S', 'S'), ival_W_to_S_start) + self.assertEqual(ival_W.asfreq('S', 'E'), ival_W_to_S_end) with tm.assert_produces_warning(FutureWarning): - assert_equal(ival_W.asfreq('WK'), ival_W) + self.assertEqual(ival_W.asfreq('WK'), ival_W) def test_conv_business(self): # frequency conversion tests: from Business Frequency" @@ -1268,25 +1272,25 @@ def test_conv_business(self): ival_B_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=23, minute=59, second=59) - assert_equal(ival_B.asfreq('A'), ival_B_to_A) - assert_equal(ival_B_end_of_year.asfreq('A'), ival_B_to_A) - assert_equal(ival_B.asfreq('Q'), ival_B_to_Q) - assert_equal(ival_B_end_of_quarter.asfreq('Q'), ival_B_to_Q) - assert_equal(ival_B.asfreq('M'), ival_B_to_M) - assert_equal(ival_B_end_of_month.asfreq('M'), ival_B_to_M) - assert_equal(ival_B.asfreq('W'), ival_B_to_W) - assert_equal(ival_B_end_of_week.asfreq('W'), ival_B_to_W) 
+ self.assertEqual(ival_B.asfreq('A'), ival_B_to_A) + self.assertEqual(ival_B_end_of_year.asfreq('A'), ival_B_to_A) + self.assertEqual(ival_B.asfreq('Q'), ival_B_to_Q) + self.assertEqual(ival_B_end_of_quarter.asfreq('Q'), ival_B_to_Q) + self.assertEqual(ival_B.asfreq('M'), ival_B_to_M) + self.assertEqual(ival_B_end_of_month.asfreq('M'), ival_B_to_M) + self.assertEqual(ival_B.asfreq('W'), ival_B_to_W) + self.assertEqual(ival_B_end_of_week.asfreq('W'), ival_B_to_W) - assert_equal(ival_B.asfreq('D'), ival_B_to_D) + self.assertEqual(ival_B.asfreq('D'), ival_B_to_D) - assert_equal(ival_B.asfreq('H', 'S'), ival_B_to_H_start) - assert_equal(ival_B.asfreq('H', 'E'), ival_B_to_H_end) - assert_equal(ival_B.asfreq('Min', 'S'), ival_B_to_T_start) - assert_equal(ival_B.asfreq('Min', 'E'), ival_B_to_T_end) - assert_equal(ival_B.asfreq('S', 'S'), ival_B_to_S_start) - assert_equal(ival_B.asfreq('S', 'E'), ival_B_to_S_end) + self.assertEqual(ival_B.asfreq('H', 'S'), ival_B_to_H_start) + self.assertEqual(ival_B.asfreq('H', 'E'), ival_B_to_H_end) + self.assertEqual(ival_B.asfreq('Min', 'S'), ival_B_to_T_start) + self.assertEqual(ival_B.asfreq('Min', 'E'), ival_B_to_T_end) + self.assertEqual(ival_B.asfreq('S', 'S'), ival_B_to_S_start) + self.assertEqual(ival_B.asfreq('S', 'E'), ival_B_to_S_end) - assert_equal(ival_B.asfreq('B'), ival_B) + self.assertEqual(ival_B.asfreq('B'), ival_B) def test_conv_daily(self): # frequency conversion tests: from Business Frequency" @@ -1331,36 +1335,39 @@ def test_conv_daily(self): ival_D_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=23, minute=59, second=59) - assert_equal(ival_D.asfreq('A'), ival_D_to_A) - - assert_equal(ival_D_end_of_quarter.asfreq('A-JAN'), ival_Deoq_to_AJAN) - assert_equal(ival_D_end_of_quarter.asfreq('A-JUN'), ival_Deoq_to_AJUN) - assert_equal(ival_D_end_of_quarter.asfreq('A-DEC'), ival_Deoq_to_ADEC) - - assert_equal(ival_D_end_of_year.asfreq('A'), ival_D_to_A) - assert_equal(ival_D_end_of_quarter.asfreq('Q'), 
ival_D_to_QEDEC) - assert_equal(ival_D.asfreq("Q-JAN"), ival_D_to_QEJAN) - assert_equal(ival_D.asfreq("Q-JUN"), ival_D_to_QEJUN) - assert_equal(ival_D.asfreq("Q-DEC"), ival_D_to_QEDEC) - assert_equal(ival_D.asfreq('M'), ival_D_to_M) - assert_equal(ival_D_end_of_month.asfreq('M'), ival_D_to_M) - assert_equal(ival_D.asfreq('W'), ival_D_to_W) - assert_equal(ival_D_end_of_week.asfreq('W'), ival_D_to_W) - - assert_equal(ival_D_friday.asfreq('B'), ival_B_friday) - assert_equal(ival_D_saturday.asfreq('B', 'S'), ival_B_friday) - assert_equal(ival_D_saturday.asfreq('B', 'E'), ival_B_monday) - assert_equal(ival_D_sunday.asfreq('B', 'S'), ival_B_friday) - assert_equal(ival_D_sunday.asfreq('B', 'E'), ival_B_monday) - - assert_equal(ival_D.asfreq('H', 'S'), ival_D_to_H_start) - assert_equal(ival_D.asfreq('H', 'E'), ival_D_to_H_end) - assert_equal(ival_D.asfreq('Min', 'S'), ival_D_to_T_start) - assert_equal(ival_D.asfreq('Min', 'E'), ival_D_to_T_end) - assert_equal(ival_D.asfreq('S', 'S'), ival_D_to_S_start) - assert_equal(ival_D.asfreq('S', 'E'), ival_D_to_S_end) - - assert_equal(ival_D.asfreq('D'), ival_D) + self.assertEqual(ival_D.asfreq('A'), ival_D_to_A) + + self.assertEqual(ival_D_end_of_quarter.asfreq('A-JAN'), + ival_Deoq_to_AJAN) + self.assertEqual(ival_D_end_of_quarter.asfreq('A-JUN'), + ival_Deoq_to_AJUN) + self.assertEqual(ival_D_end_of_quarter.asfreq('A-DEC'), + ival_Deoq_to_ADEC) + + self.assertEqual(ival_D_end_of_year.asfreq('A'), ival_D_to_A) + self.assertEqual(ival_D_end_of_quarter.asfreq('Q'), ival_D_to_QEDEC) + self.assertEqual(ival_D.asfreq("Q-JAN"), ival_D_to_QEJAN) + self.assertEqual(ival_D.asfreq("Q-JUN"), ival_D_to_QEJUN) + self.assertEqual(ival_D.asfreq("Q-DEC"), ival_D_to_QEDEC) + self.assertEqual(ival_D.asfreq('M'), ival_D_to_M) + self.assertEqual(ival_D_end_of_month.asfreq('M'), ival_D_to_M) + self.assertEqual(ival_D.asfreq('W'), ival_D_to_W) + self.assertEqual(ival_D_end_of_week.asfreq('W'), ival_D_to_W) + + 
self.assertEqual(ival_D_friday.asfreq('B'), ival_B_friday) + self.assertEqual(ival_D_saturday.asfreq('B', 'S'), ival_B_friday) + self.assertEqual(ival_D_saturday.asfreq('B', 'E'), ival_B_monday) + self.assertEqual(ival_D_sunday.asfreq('B', 'S'), ival_B_friday) + self.assertEqual(ival_D_sunday.asfreq('B', 'E'), ival_B_monday) + + self.assertEqual(ival_D.asfreq('H', 'S'), ival_D_to_H_start) + self.assertEqual(ival_D.asfreq('H', 'E'), ival_D_to_H_end) + self.assertEqual(ival_D.asfreq('Min', 'S'), ival_D_to_T_start) + self.assertEqual(ival_D.asfreq('Min', 'E'), ival_D_to_T_end) + self.assertEqual(ival_D.asfreq('S', 'S'), ival_D_to_S_start) + self.assertEqual(ival_D.asfreq('S', 'E'), ival_D_to_S_end) + + self.assertEqual(ival_D.asfreq('D'), ival_D) def test_conv_hourly(self): # frequency conversion tests: from Hourly Frequency" @@ -1395,25 +1402,25 @@ def test_conv_hourly(self): ival_H_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=0, minute=59, second=59) - assert_equal(ival_H.asfreq('A'), ival_H_to_A) - assert_equal(ival_H_end_of_year.asfreq('A'), ival_H_to_A) - assert_equal(ival_H.asfreq('Q'), ival_H_to_Q) - assert_equal(ival_H_end_of_quarter.asfreq('Q'), ival_H_to_Q) - assert_equal(ival_H.asfreq('M'), ival_H_to_M) - assert_equal(ival_H_end_of_month.asfreq('M'), ival_H_to_M) - assert_equal(ival_H.asfreq('W'), ival_H_to_W) - assert_equal(ival_H_end_of_week.asfreq('W'), ival_H_to_W) - assert_equal(ival_H.asfreq('D'), ival_H_to_D) - assert_equal(ival_H_end_of_day.asfreq('D'), ival_H_to_D) - assert_equal(ival_H.asfreq('B'), ival_H_to_B) - assert_equal(ival_H_end_of_bus.asfreq('B'), ival_H_to_B) - - assert_equal(ival_H.asfreq('Min', 'S'), ival_H_to_T_start) - assert_equal(ival_H.asfreq('Min', 'E'), ival_H_to_T_end) - assert_equal(ival_H.asfreq('S', 'S'), ival_H_to_S_start) - assert_equal(ival_H.asfreq('S', 'E'), ival_H_to_S_end) - - assert_equal(ival_H.asfreq('H'), ival_H) + self.assertEqual(ival_H.asfreq('A'), ival_H_to_A) + 
self.assertEqual(ival_H_end_of_year.asfreq('A'), ival_H_to_A) + self.assertEqual(ival_H.asfreq('Q'), ival_H_to_Q) + self.assertEqual(ival_H_end_of_quarter.asfreq('Q'), ival_H_to_Q) + self.assertEqual(ival_H.asfreq('M'), ival_H_to_M) + self.assertEqual(ival_H_end_of_month.asfreq('M'), ival_H_to_M) + self.assertEqual(ival_H.asfreq('W'), ival_H_to_W) + self.assertEqual(ival_H_end_of_week.asfreq('W'), ival_H_to_W) + self.assertEqual(ival_H.asfreq('D'), ival_H_to_D) + self.assertEqual(ival_H_end_of_day.asfreq('D'), ival_H_to_D) + self.assertEqual(ival_H.asfreq('B'), ival_H_to_B) + self.assertEqual(ival_H_end_of_bus.asfreq('B'), ival_H_to_B) + + self.assertEqual(ival_H.asfreq('Min', 'S'), ival_H_to_T_start) + self.assertEqual(ival_H.asfreq('Min', 'E'), ival_H_to_T_end) + self.assertEqual(ival_H.asfreq('S', 'S'), ival_H_to_S_start) + self.assertEqual(ival_H.asfreq('S', 'E'), ival_H_to_S_end) + + self.assertEqual(ival_H.asfreq('H'), ival_H) def test_conv_minutely(self): # frequency conversion tests: from Minutely Frequency" @@ -1448,25 +1455,25 @@ def test_conv_minutely(self): ival_T_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=0, minute=0, second=59) - assert_equal(ival_T.asfreq('A'), ival_T_to_A) - assert_equal(ival_T_end_of_year.asfreq('A'), ival_T_to_A) - assert_equal(ival_T.asfreq('Q'), ival_T_to_Q) - assert_equal(ival_T_end_of_quarter.asfreq('Q'), ival_T_to_Q) - assert_equal(ival_T.asfreq('M'), ival_T_to_M) - assert_equal(ival_T_end_of_month.asfreq('M'), ival_T_to_M) - assert_equal(ival_T.asfreq('W'), ival_T_to_W) - assert_equal(ival_T_end_of_week.asfreq('W'), ival_T_to_W) - assert_equal(ival_T.asfreq('D'), ival_T_to_D) - assert_equal(ival_T_end_of_day.asfreq('D'), ival_T_to_D) - assert_equal(ival_T.asfreq('B'), ival_T_to_B) - assert_equal(ival_T_end_of_bus.asfreq('B'), ival_T_to_B) - assert_equal(ival_T.asfreq('H'), ival_T_to_H) - assert_equal(ival_T_end_of_hour.asfreq('H'), ival_T_to_H) - - assert_equal(ival_T.asfreq('S', 'S'), ival_T_to_S_start) - 
assert_equal(ival_T.asfreq('S', 'E'), ival_T_to_S_end) - - assert_equal(ival_T.asfreq('Min'), ival_T) + self.assertEqual(ival_T.asfreq('A'), ival_T_to_A) + self.assertEqual(ival_T_end_of_year.asfreq('A'), ival_T_to_A) + self.assertEqual(ival_T.asfreq('Q'), ival_T_to_Q) + self.assertEqual(ival_T_end_of_quarter.asfreq('Q'), ival_T_to_Q) + self.assertEqual(ival_T.asfreq('M'), ival_T_to_M) + self.assertEqual(ival_T_end_of_month.asfreq('M'), ival_T_to_M) + self.assertEqual(ival_T.asfreq('W'), ival_T_to_W) + self.assertEqual(ival_T_end_of_week.asfreq('W'), ival_T_to_W) + self.assertEqual(ival_T.asfreq('D'), ival_T_to_D) + self.assertEqual(ival_T_end_of_day.asfreq('D'), ival_T_to_D) + self.assertEqual(ival_T.asfreq('B'), ival_T_to_B) + self.assertEqual(ival_T_end_of_bus.asfreq('B'), ival_T_to_B) + self.assertEqual(ival_T.asfreq('H'), ival_T_to_H) + self.assertEqual(ival_T_end_of_hour.asfreq('H'), ival_T_to_H) + + self.assertEqual(ival_T.asfreq('S', 'S'), ival_T_to_S_start) + self.assertEqual(ival_T.asfreq('S', 'E'), ival_T_to_S_end) + + self.assertEqual(ival_T.asfreq('Min'), ival_T) def test_conv_secondly(self): # frequency conversion tests: from Secondly Frequency" @@ -1500,24 +1507,24 @@ def test_conv_secondly(self): ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1, hour=0, minute=0) - assert_equal(ival_S.asfreq('A'), ival_S_to_A) - assert_equal(ival_S_end_of_year.asfreq('A'), ival_S_to_A) - assert_equal(ival_S.asfreq('Q'), ival_S_to_Q) - assert_equal(ival_S_end_of_quarter.asfreq('Q'), ival_S_to_Q) - assert_equal(ival_S.asfreq('M'), ival_S_to_M) - assert_equal(ival_S_end_of_month.asfreq('M'), ival_S_to_M) - assert_equal(ival_S.asfreq('W'), ival_S_to_W) - assert_equal(ival_S_end_of_week.asfreq('W'), ival_S_to_W) - assert_equal(ival_S.asfreq('D'), ival_S_to_D) - assert_equal(ival_S_end_of_day.asfreq('D'), ival_S_to_D) - assert_equal(ival_S.asfreq('B'), ival_S_to_B) - assert_equal(ival_S_end_of_bus.asfreq('B'), ival_S_to_B) - assert_equal(ival_S.asfreq('H'), 
ival_S_to_H) - assert_equal(ival_S_end_of_hour.asfreq('H'), ival_S_to_H) - assert_equal(ival_S.asfreq('Min'), ival_S_to_T) - assert_equal(ival_S_end_of_minute.asfreq('Min'), ival_S_to_T) - - assert_equal(ival_S.asfreq('S'), ival_S) + self.assertEqual(ival_S.asfreq('A'), ival_S_to_A) + self.assertEqual(ival_S_end_of_year.asfreq('A'), ival_S_to_A) + self.assertEqual(ival_S.asfreq('Q'), ival_S_to_Q) + self.assertEqual(ival_S_end_of_quarter.asfreq('Q'), ival_S_to_Q) + self.assertEqual(ival_S.asfreq('M'), ival_S_to_M) + self.assertEqual(ival_S_end_of_month.asfreq('M'), ival_S_to_M) + self.assertEqual(ival_S.asfreq('W'), ival_S_to_W) + self.assertEqual(ival_S_end_of_week.asfreq('W'), ival_S_to_W) + self.assertEqual(ival_S.asfreq('D'), ival_S_to_D) + self.assertEqual(ival_S_end_of_day.asfreq('D'), ival_S_to_D) + self.assertEqual(ival_S.asfreq('B'), ival_S_to_B) + self.assertEqual(ival_S_end_of_bus.asfreq('B'), ival_S_to_B) + self.assertEqual(ival_S.asfreq('H'), ival_S_to_H) + self.assertEqual(ival_S_end_of_hour.asfreq('H'), ival_S_to_H) + self.assertEqual(ival_S.asfreq('Min'), ival_S_to_T) + self.assertEqual(ival_S_end_of_minute.asfreq('Min'), ival_S_to_T) + + self.assertEqual(ival_S.asfreq('S'), ival_S) def test_asfreq_nat(self): p = Period('NaT', freq='A') @@ -1627,18 +1634,12 @@ def test_make_time_series(self): series = Series(1, index=index) tm.assertIsInstance(series, Series) - def test_astype(self): - idx = period_range('1990', '2009', freq='A') - - result = idx.astype('i8') - self.assert_numpy_array_equal(result, idx.values) - def test_constructor_use_start_freq(self): # GH #1118 p = Period('4/2/2012', freq='B') index = PeriodIndex(start=p, periods=10) expected = PeriodIndex(start='4/2/2012', periods=10, freq='B') - self.assertTrue(index.equals(expected)) + tm.assert_index_equal(index, expected) def test_constructor_field_arrays(self): # GH #1264 @@ -1648,13 +1649,13 @@ def test_constructor_field_arrays(self): index = PeriodIndex(year=years, quarter=quarters, 
freq='Q-DEC') expected = period_range('1990Q3', '2009Q2', freq='Q-DEC') - self.assertTrue(index.equals(expected)) + tm.assert_index_equal(index, expected) index2 = PeriodIndex(year=years, quarter=quarters, freq='2Q-DEC') tm.assert_numpy_array_equal(index.asi8, index2.asi8) index = PeriodIndex(year=years, quarter=quarters) - self.assertTrue(index.equals(expected)) + tm.assert_index_equal(index, expected) years = [2007, 2007, 2007] months = [1, 2] @@ -1669,7 +1670,7 @@ def test_constructor_field_arrays(self): months = [1, 2, 3] idx = PeriodIndex(year=years, month=months, freq='M') exp = period_range('2007-01', periods=3, freq='M') - self.assertTrue(idx.equals(exp)) + tm.assert_index_equal(idx, exp) def test_constructor_U(self): # U was used as undefined period @@ -1700,7 +1701,7 @@ def test_constructor_corner(self): result = period_range('2007-01', periods=10.5, freq='M') exp = period_range('2007-01', periods=10, freq='M') - self.assertTrue(result.equals(exp)) + tm.assert_index_equal(result, exp) def test_constructor_fromarraylike(self): idx = period_range('2007-01', periods=20, freq='M') @@ -1711,29 +1712,29 @@ def test_constructor_fromarraylike(self): data=Period('2007', freq='A')) result = PeriodIndex(iter(idx)) - self.assertTrue(result.equals(idx)) + tm.assert_index_equal(result, idx) result = PeriodIndex(idx) - self.assertTrue(result.equals(idx)) + tm.assert_index_equal(result, idx) result = PeriodIndex(idx, freq='M') - self.assertTrue(result.equals(idx)) + tm.assert_index_equal(result, idx) result = PeriodIndex(idx, freq=offsets.MonthEnd()) - self.assertTrue(result.equals(idx)) + tm.assert_index_equal(result, idx) self.assertTrue(result.freq, 'M') result = PeriodIndex(idx, freq='2M') - self.assertTrue(result.equals(idx)) + tm.assert_index_equal(result, idx.asfreq('2M')) self.assertTrue(result.freq, '2M') result = PeriodIndex(idx, freq=offsets.MonthEnd(2)) - self.assertTrue(result.equals(idx)) + tm.assert_index_equal(result, idx.asfreq('2M')) 
         self.assertTrue(result.freq, '2M')

         result = PeriodIndex(idx, freq='D')
         exp = idx.asfreq('D', 'e')
-        self.assertTrue(result.equals(exp))
+        tm.assert_index_equal(result, exp)

     def test_constructor_datetime64arr(self):
         vals = np.arange(100000, 100000 + 10000, 100, dtype=np.int64)
@@ -1742,12 +1743,43 @@ def test_constructor_datetime64arr(self):
         self.assertRaises(ValueError, PeriodIndex, vals, freq='D')

     def test_constructor_simple_new(self):
-        idx = period_range('2007-01', name='p', periods=20, freq='M')
+        idx = period_range('2007-01', name='p', periods=2, freq='M')
         result = idx._simple_new(idx, 'p', freq=idx.freq)
-        self.assertTrue(result.equals(idx))
+        tm.assert_index_equal(result, idx)

         result = idx._simple_new(idx.astype('i8'), 'p', freq=idx.freq)
-        self.assertTrue(result.equals(idx))
+        tm.assert_index_equal(result, idx)
+
+        result = idx._simple_new([pd.Period('2007-01', freq='M'),
+                                  pd.Period('2007-02', freq='M')],
+                                 'p', freq=idx.freq)
+        self.assert_index_equal(result, idx)
+
+        result = idx._simple_new(np.array([pd.Period('2007-01', freq='M'),
+                                           pd.Period('2007-02', freq='M')]),
+                                 'p', freq=idx.freq)
+        self.assert_index_equal(result, idx)
+
+    def test_constructor_simple_new_empty(self):
+        # GH13079
+        idx = PeriodIndex([], freq='M', name='p')
+        result = idx._simple_new(idx, name='p', freq='M')
+        tm.assert_index_equal(result, idx)
+
+    def test_constructor_simple_new_floats(self):
+        # GH13079
+        for floats in [[1.1], np.array([1.1])]:
+            with self.assertRaises(TypeError):
+                pd.PeriodIndex._simple_new(floats, freq='M')
+
+    def test_shallow_copy_empty(self):
+
+        # GH13067
+        idx = PeriodIndex([], freq='M')
+        result = idx._shallow_copy()
+        expected = idx
+
+        tm.assert_index_equal(result, expected)

     def test_constructor_nat(self):
         self.assertRaises(ValueError, period_range, start='NaT',
@@ -1769,14 +1801,14 @@ def test_constructor_freq_mult(self):
         for func in [PeriodIndex, period_range]:
             # must be the same, but for sure...
             pidx = func(start='2014-01', freq='2M', periods=4)
-            expected = PeriodIndex(
-                ['2014-01', '2014-03', '2014-05', '2014-07'], freq='M')
+            expected = PeriodIndex(['2014-01', '2014-03',
+                                    '2014-05', '2014-07'], freq='2M')
             tm.assert_index_equal(pidx, expected)

             pidx = func(start='2014-01-02', end='2014-01-15', freq='3D')
             expected = PeriodIndex(['2014-01-02', '2014-01-05',
                                     '2014-01-08', '2014-01-11',
-                                    '2014-01-14'], freq='D')
+                                    '2014-01-14'], freq='3D')
             tm.assert_index_equal(pidx, expected)

             pidx = func(end='2014-01-01 17:00', freq='4H', periods=3)
@@ -1805,7 +1837,7 @@ def test_constructor_freq_mult_dti_compat(self):
             freqstr = str(mult) + freq
             pidx = PeriodIndex(start='2014-04-01', freq=freqstr, periods=10)
             expected = date_range(start='2014-04-01', freq=freqstr,
-                                  periods=10).to_period(freq)
+                                  periods=10).to_period(freqstr)
             tm.assert_index_equal(pidx, expected)

     def test_is_(self):
@@ -1867,7 +1899,7 @@ def test_getitem_partial(self):
         exp = result
         result = ts[24:]
-        assert_series_equal(exp, result)
+        tm.assert_series_equal(exp, result)

         ts = ts[10:].append(ts[10:])
         self.assertRaisesRegexp(KeyError,
@@ -1883,7 +1915,7 @@ def test_getitem_datetime(self):
         dt4 = datetime(2012, 4, 20)

         rs = ts[dt1:dt4]
-        assert_series_equal(rs, ts)
+        tm.assert_series_equal(rs, ts)

     def test_slice_with_negative_step(self):
         ts = Series(np.arange(20),
@@ -1891,9 +1923,9 @@ def test_slice_with_negative_step(self):
         SLC = pd.IndexSlice

         def assert_slices_equivalent(l_slc, i_slc):
-            assert_series_equal(ts[l_slc], ts.iloc[i_slc])
-            assert_series_equal(ts.loc[l_slc], ts.iloc[i_slc])
-            assert_series_equal(ts.ix[l_slc], ts.iloc[i_slc])
+            tm.assert_series_equal(ts[l_slc], ts.iloc[i_slc])
+            tm.assert_series_equal(ts.loc[l_slc], ts.iloc[i_slc])
+            tm.assert_series_equal(ts.ix[l_slc], ts.iloc[i_slc])

         assert_slices_equivalent(SLC[Period('2014-10')::-1], SLC[9::-1])
         assert_slices_equivalent(SLC['2014-10'::-1], SLC[9::-1])
@@ -1933,11 +1965,11 @@ def test_sub(self):
         result = rng - 5
         exp = rng + (-5)
-        self.assertTrue(result.equals(exp))
+        tm.assert_index_equal(result, exp)

     def test_periods_number_check(self):
-        self.assertRaises(ValueError, period_range, '2011-1-1', '2012-1-1',
-                          'B')
+        with tm.assertRaises(ValueError):
+            period_range('2011-1-1', '2012-1-1', 'B')

     def test_tolist(self):
         index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
@@ -1945,7 +1977,7 @@ def test_tolist(self):
         [tm.assertIsInstance(x, Period) for x in rs]

         recon = PeriodIndex(rs)
-        self.assertTrue(index.equals(recon))
+        tm.assert_index_equal(index, recon)

     def test_to_timestamp(self):
         index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
@@ -1953,12 +1985,12 @@ def test_to_timestamp(self):
         exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
         result = series.to_timestamp(how='end')
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)
         self.assertEqual(result.name, 'foo')

         exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
         result = series.to_timestamp(how='start')
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)

         def _get_with_delta(delta, freq='A-DEC'):
             return date_range(to_datetime('1/1/2001') + delta,
@@ -1967,17 +1999,17 @@ def _get_with_delta(delta, freq='A-DEC'):
         delta = timedelta(hours=23)
         result = series.to_timestamp('H', 'end')
         exp_index = _get_with_delta(delta)
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)

         delta = timedelta(hours=23, minutes=59)
         result = series.to_timestamp('T', 'end')
         exp_index = _get_with_delta(delta)
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)

         result = series.to_timestamp('S', 'end')
         delta = timedelta(hours=23, minutes=59, seconds=59)
         exp_index = _get_with_delta(delta)
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)

         index = PeriodIndex(freq='H', start='1/1/2001',
                             end='1/2/2001')
         series = Series(1, index=index, name='foo')
@@ -1985,7 +2017,7 @@ def _get_with_delta(delta, freq='A-DEC'):
         exp_index = date_range('1/1/2001 00:59:59', end='1/2/2001 00:59:59',
                                freq='H')
         result = series.to_timestamp(how='end')
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)
         self.assertEqual(result.name, 'foo')

     def test_to_timestamp_quarterly_bug(self):
@@ -1996,7 +2028,7 @@ def test_to_timestamp_quarterly_bug(self):
         stamps = pindex.to_timestamp('D', 'end')
         expected = DatetimeIndex([x.to_timestamp('D', 'end') for x in pindex])
-        self.assertTrue(stamps.equals(expected))
+        tm.assert_index_equal(stamps, expected)

     def test_to_timestamp_preserve_name(self):
         index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009',
@@ -2022,11 +2054,11 @@ def test_to_timestamp_pi_nat(self):
         result = index.to_timestamp('D')
         expected = DatetimeIndex([pd.NaT, datetime(2011, 1, 1),
                                   datetime(2011, 2, 1)], name='idx')
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)
         self.assertEqual(result.name, 'idx')

         result2 = result.to_period(freq='M')
-        self.assertTrue(result2.equals(index))
+        tm.assert_index_equal(result2, index)
         self.assertEqual(result2.name, 'idx')

         result3 = result.to_period(freq='3M')
@@ -2053,25 +2085,25 @@ def test_to_timestamp_pi_mult(self):
     def test_start_time(self):
         index = PeriodIndex(freq='M', start='2016-01-01', end='2016-05-31')
         expected_index = date_range('2016-01-01', end='2016-05-31', freq='MS')
-        self.assertTrue(index.start_time.equals(expected_index))
+        tm.assert_index_equal(index.start_time, expected_index)

     def test_end_time(self):
         index = PeriodIndex(freq='M', start='2016-01-01', end='2016-05-31')
         expected_index = date_range('2016-01-01', end='2016-05-31', freq='M')
-        self.assertTrue(index.end_time.equals(expected_index))
+        tm.assert_index_equal(index.end_time, expected_index)

     def test_as_frame_columns(self):
         rng = period_range('1/1/2000', periods=5)
         df = DataFrame(randn(10, 5), columns=rng)

         ts = df[rng[0]]
-        assert_series_equal(ts, df.ix[:, 0])
+        tm.assert_series_equal(ts, df.ix[:, 0])

         # GH # 1211
         repr(df)

         ts = df['1/1/2000']
-        assert_series_equal(ts, df.ix[:, 0])
+        tm.assert_series_equal(ts, df.ix[:, 0])

     def test_indexing(self):
@@ -2083,17 +2115,18 @@ def test_indexing(self):
         self.assertEqual(expected, result)

     def test_frame_setitem(self):
-        rng = period_range('1/1/2000', periods=5)
-        rng.name = 'index'
+        rng = period_range('1/1/2000', periods=5, name='index')
         df = DataFrame(randn(5, 3), index=rng)
         df['Index'] = rng
         rs = Index(df['Index'])
-        self.assertTrue(rs.equals(rng))
+        tm.assert_index_equal(rs, rng, check_names=False)
+        self.assertEqual(rs.name, 'Index')
+        self.assertEqual(rng.name, 'index')

         rs = df.reset_index().set_index('index')
         tm.assertIsInstance(rs.index, PeriodIndex)
-        self.assertTrue(rs.index.equals(rng))
+        tm.assert_index_equal(rs.index, rng)

     def test_period_set_index_reindex(self):
         # GH 6631
@@ -2102,9 +2135,9 @@ def test_period_set_index_reindex(self):
         idx2 = period_range('2013', periods=6, freq='A')

         df = df.set_index(idx1)
-        self.assertTrue(df.index.equals(idx1))
+        tm.assert_index_equal(df.index, idx1)
         df = df.set_index(idx2)
-        self.assertTrue(df.index.equals(idx2))
+        tm.assert_index_equal(df.index, idx2)

     def test_frame_to_time_stamp(self):
         K = 5
@@ -2114,12 +2147,12 @@ def test_frame_to_time_stamp(self):

         exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
         result = df.to_timestamp('D', 'end')
-        self.assertTrue(result.index.equals(exp_index))
-        assert_almost_equal(result.values, df.values)
+        tm.assert_index_equal(result.index, exp_index)
+        tm.assert_numpy_array_equal(result.values, df.values)

         exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
         result = df.to_timestamp('D', 'start')
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)

         def _get_with_delta(delta, freq='A-DEC'):
             return date_range(to_datetime('1/1/2001') + delta,
@@ -2128,47 +2161,47 @@ def _get_with_delta(delta, freq='A-DEC'):
         delta = timedelta(hours=23)
         result = df.to_timestamp('H', 'end')
         exp_index = _get_with_delta(delta)
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)

         delta = timedelta(hours=23, minutes=59)
         result = df.to_timestamp('T', 'end')
         exp_index = _get_with_delta(delta)
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)

         result = df.to_timestamp('S', 'end')
         delta = timedelta(hours=23, minutes=59, seconds=59)
         exp_index = _get_with_delta(delta)
-        self.assertTrue(result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)

         # columns
         df = df.T

         exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
         result = df.to_timestamp('D', 'end', axis=1)
-        self.assertTrue(result.columns.equals(exp_index))
-        assert_almost_equal(result.values, df.values)
+        tm.assert_index_equal(result.columns, exp_index)
+        tm.assert_numpy_array_equal(result.values, df.values)

         exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
         result = df.to_timestamp('D', 'start', axis=1)
-        self.assertTrue(result.columns.equals(exp_index))
+        tm.assert_index_equal(result.columns, exp_index)

         delta = timedelta(hours=23)
         result = df.to_timestamp('H', 'end', axis=1)
         exp_index = _get_with_delta(delta)
-        self.assertTrue(result.columns.equals(exp_index))
+        tm.assert_index_equal(result.columns, exp_index)

         delta = timedelta(hours=23, minutes=59)
         result = df.to_timestamp('T', 'end', axis=1)
         exp_index = _get_with_delta(delta)
-        self.assertTrue(result.columns.equals(exp_index))
+        tm.assert_index_equal(result.columns, exp_index)

         result = df.to_timestamp('S', 'end', axis=1)
         delta = timedelta(hours=23, minutes=59, seconds=59)
         exp_index = _get_with_delta(delta)
-        self.assertTrue(result.columns.equals(exp_index))
+        tm.assert_index_equal(result.columns, exp_index)

         # invalid axis
-        assertRaisesRegexp(ValueError, 'axis', df.to_timestamp, axis=2)
+        tm.assertRaisesRegexp(ValueError, 'axis', df.to_timestamp, axis=2)

         result1 = df.to_timestamp('5t', axis=1)
         result2 = df.to_timestamp('t', axis=1)
@@ -2188,7 +2221,7 @@ def test_index_duplicate_periods(self):

         result = ts[2007]
         expected = ts[1:3]
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)
         result[:] = 1
         self.assertTrue((ts[1:3] == 1).all())

@@ -2198,69 +2231,69 @@ def test_index_duplicate_periods(self):
         result = ts[2007]
         expected = ts[idx == 2007]
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

     def test_index_unique(self):
         idx = PeriodIndex([2000, 2007, 2007, 2009, 2009], freq='A-JUN')
         expected = PeriodIndex([2000, 2007, 2009], freq='A-JUN')
-        self.assert_numpy_array_equal(idx.unique(), expected.values)
+        self.assert_index_equal(idx.unique(), expected)
         self.assertEqual(idx.nunique(), 3)

         idx = PeriodIndex([2000, 2007, 2007, 2009, 2007], freq='A-JUN',
                           tz='US/Eastern')
         expected = PeriodIndex([2000, 2007, 2009], freq='A-JUN',
                                tz='US/Eastern')
-        self.assert_numpy_array_equal(idx.unique(), expected.values)
+        self.assert_index_equal(idx.unique(), expected)
         self.assertEqual(idx.nunique(), 3)

     def test_constructor(self):
         pi = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
-        assert_equal(len(pi), 9)
+        self.assertEqual(len(pi), 9)

         pi = PeriodIndex(freq='Q', start='1/1/2001', end='12/1/2009')
-        assert_equal(len(pi), 4 * 9)
+        self.assertEqual(len(pi), 4 * 9)

         pi = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
-        assert_equal(len(pi), 12 * 9)
+        self.assertEqual(len(pi), 12 * 9)

         pi = PeriodIndex(freq='D', start='1/1/2001', end='12/31/2009')
-        assert_equal(len(pi), 365 * 9 + 2)
+        self.assertEqual(len(pi), 365 * 9 + 2)

         pi = PeriodIndex(freq='B', start='1/1/2001', end='12/31/2009')
-        assert_equal(len(pi), 261 * 9)
+        self.assertEqual(len(pi), 261 * 9)

         pi = PeriodIndex(freq='H', start='1/1/2001', end='12/31/2001 23:00')
-        assert_equal(len(pi), 365 * 24)
+        self.assertEqual(len(pi), 365 * 24)

         pi = PeriodIndex(freq='Min', start='1/1/2001', end='1/1/2001 23:59')
-        assert_equal(len(pi), 24 * 60)
+        self.assertEqual(len(pi), 24 * 60)

         pi = PeriodIndex(freq='S', start='1/1/2001', end='1/1/2001 23:59:59')
-        assert_equal(len(pi), 24 * 60 * 60)
+        self.assertEqual(len(pi), 24 * 60 * 60)

         start = Period('02-Apr-2005', 'B')
         i1 = PeriodIndex(start=start, periods=20)
-        assert_equal(len(i1), 20)
-        assert_equal(i1.freq, start.freq)
-        assert_equal(i1[0], start)
+        self.assertEqual(len(i1), 20)
+        self.assertEqual(i1.freq, start.freq)
+        self.assertEqual(i1[0], start)

         end_intv = Period('2006-12-31', 'W')
         i1 = PeriodIndex(end=end_intv, periods=10)
-        assert_equal(len(i1), 10)
-        assert_equal(i1.freq, end_intv.freq)
-        assert_equal(i1[-1], end_intv)
+        self.assertEqual(len(i1), 10)
+        self.assertEqual(i1.freq, end_intv.freq)
+        self.assertEqual(i1[-1], end_intv)

         end_intv = Period('2006-12-31', '1w')
         i2 = PeriodIndex(end=end_intv, periods=10)
-        assert_equal(len(i1), len(i2))
+        self.assertEqual(len(i1), len(i2))
         self.assertTrue((i1 == i2).all())
-        assert_equal(i1.freq, i2.freq)
+        self.assertEqual(i1.freq, i2.freq)

         end_intv = Period('2006-12-31', ('w', 1))
         i2 = PeriodIndex(end=end_intv, periods=10)
-        assert_equal(len(i1), len(i2))
+        self.assertEqual(len(i1), len(i2))
         self.assertTrue((i1 == i2).all())
-        assert_equal(i1.freq, i2.freq)
+        self.assertEqual(i1.freq, i2.freq)

         try:
             PeriodIndex(start=start, end=end_intv)
@@ -2280,12 +2313,12 @@ def test_constructor(self):

         # infer freq from first element
         i2 = PeriodIndex([end_intv, Period('2005-05-05', 'B')])
-        assert_equal(len(i2), 2)
-        assert_equal(i2[0], end_intv)
+        self.assertEqual(len(i2), 2)
+        self.assertEqual(i2[0], end_intv)

         i2 = PeriodIndex(np.array([end_intv, Period('2005-05-05', 'B')]))
-        assert_equal(len(i2), 2)
-        assert_equal(i2[0], end_intv)
+        self.assertEqual(len(i2), 2)
+        self.assertEqual(i2[0], end_intv)

         # Mixed freq should fail
         vals = [end_intv, Period('2006-12-31', 'w')]
@@ -2300,78 +2333,75 @@ def test_repeat(self):
             Period('2001-01-02'), Period('2001-01-02'),
         ])
-        assert_index_equal(index.repeat(2), expected)
+        tm.assert_index_equal(index.repeat(2), expected)

     def test_numpy_repeat(self):
         index = period_range('20010101', periods=2)
-        expected = PeriodIndex([
-            Period('2001-01-01'), Period('2001-01-01'),
-            Period('2001-01-02'), Period('2001-01-02'),
-        ])
+        expected = PeriodIndex([Period('2001-01-01'), Period('2001-01-01'),
+                                Period('2001-01-02'), Period('2001-01-02')])

-        assert_index_equal(np.repeat(index, 2), expected)
+        tm.assert_index_equal(np.repeat(index, 2), expected)

         msg = "the 'axis' parameter is not supported"
-        assertRaisesRegexp(ValueError, msg, np.repeat,
-                           index, 2, axis=1)
+        tm.assertRaisesRegexp(ValueError, msg, np.repeat, index, 2, axis=1)

     def test_shift(self):
         pi1 = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
         pi2 = PeriodIndex(freq='A', start='1/1/2002', end='12/1/2010')

-        self.assertTrue(pi1.shift(0).equals(pi1))
+        tm.assert_index_equal(pi1.shift(0), pi1)

-        assert_equal(len(pi1), len(pi2))
-        assert_equal(pi1.shift(1).values, pi2.values)
+        self.assertEqual(len(pi1), len(pi2))
+        self.assert_index_equal(pi1.shift(1), pi2)

         pi1 = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
         pi2 = PeriodIndex(freq='A', start='1/1/2000', end='12/1/2008')
-        assert_equal(len(pi1), len(pi2))
-        assert_equal(pi1.shift(-1).values, pi2.values)
+        self.assertEqual(len(pi1), len(pi2))
+        self.assert_index_equal(pi1.shift(-1), pi2)

         pi1 = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
         pi2 = PeriodIndex(freq='M', start='2/1/2001', end='1/1/2010')
-        assert_equal(len(pi1), len(pi2))
-        assert_equal(pi1.shift(1).values, pi2.values)
+        self.assertEqual(len(pi1), len(pi2))
+        self.assert_index_equal(pi1.shift(1), pi2)

         pi1 = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
         pi2 = PeriodIndex(freq='M', start='12/1/2000', end='11/1/2009')
-        assert_equal(len(pi1), len(pi2))
-        assert_equal(pi1.shift(-1).values, pi2.values)
+        self.assertEqual(len(pi1), len(pi2))
+        self.assert_index_equal(pi1.shift(-1), pi2)

         pi1 = PeriodIndex(freq='D', start='1/1/2001', end='12/1/2009')
         pi2 = PeriodIndex(freq='D', start='1/2/2001', end='12/2/2009')
-        assert_equal(len(pi1), len(pi2))
-        assert_equal(pi1.shift(1).values, pi2.values)
+        self.assertEqual(len(pi1), len(pi2))
+        self.assert_index_equal(pi1.shift(1), pi2)

         pi1 = PeriodIndex(freq='D', start='1/1/2001', end='12/1/2009')
         pi2 = PeriodIndex(freq='D', start='12/31/2000', end='11/30/2009')
-        assert_equal(len(pi1), len(pi2))
-        assert_equal(pi1.shift(-1).values, pi2.values)
+        self.assertEqual(len(pi1), len(pi2))
+        self.assert_index_equal(pi1.shift(-1), pi2)

     def test_shift_nat(self):
         idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
                            '2011-04'], freq='M', name='idx')
         result = idx.shift(1)
-        expected = PeriodIndex(
-            ['2011-02', '2011-03', 'NaT', '2011-05'], freq='M', name='idx')
-        self.assertTrue(result.equals(expected))
+        expected = PeriodIndex(['2011-02', '2011-03', 'NaT',
+                                '2011-05'], freq='M', name='idx')
+        tm.assert_index_equal(result, expected)
         self.assertEqual(result.name, expected.name)

     def test_shift_ndarray(self):
         idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
                            '2011-04'], freq='M', name='idx')
         result = idx.shift(np.array([1, 2, 3, 4]))
-        expected = PeriodIndex(
-            ['2011-02', '2011-04', 'NaT', '2011-08'], freq='M', name='idx')
-        self.assertTrue(result.equals(expected))
+        expected = PeriodIndex(['2011-02', '2011-04', 'NaT',
+                                '2011-08'], freq='M', name='idx')
+        tm.assert_index_equal(result, expected)

         idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
                            '2011-04'], freq='M', name='idx')
         result = idx.shift(np.array([1, -2, 3, -4]))
-        expected = PeriodIndex(
-            ['2011-02', '2010-12', 'NaT', '2010-12'], freq='M', name='idx')
-        self.assertTrue(result.equals(expected))
+        expected = PeriodIndex(['2011-02', '2010-12', 'NaT',
+                                '2010-12'], freq='M', name='idx')
+        tm.assert_index_equal(result, expected)

     def test_asfreq(self):
         pi1 = PeriodIndex(freq='A', start='1/1/2001', end='1/1/2001')
@@ -2445,7 +2475,7 @@ def test_asfreq_nat(self):
         idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M')
         result = idx.asfreq(freq='Q')
         expected = PeriodIndex(['2011Q1', '2011Q1', 'NaT', '2011Q2'], freq='Q')
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

     def test_asfreq_mult_pi(self):
         pi = PeriodIndex(['2001-01', '2001-02', 'NaT', '2001-03'], freq='2M')
@@ -2465,37 +2495,37 @@ def test_asfreq_mult_pi(self):
     def test_period_index_length(self):
         pi = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
-        assert_equal(len(pi), 9)
+        self.assertEqual(len(pi), 9)

         pi = PeriodIndex(freq='Q', start='1/1/2001', end='12/1/2009')
-        assert_equal(len(pi), 4 * 9)
+        self.assertEqual(len(pi), 4 * 9)

         pi = PeriodIndex(freq='M', start='1/1/2001', end='12/1/2009')
-        assert_equal(len(pi), 12 * 9)
+        self.assertEqual(len(pi), 12 * 9)

         start = Period('02-Apr-2005', 'B')
         i1 = PeriodIndex(start=start, periods=20)
-        assert_equal(len(i1), 20)
-        assert_equal(i1.freq, start.freq)
-        assert_equal(i1[0], start)
+        self.assertEqual(len(i1), 20)
+        self.assertEqual(i1.freq, start.freq)
+        self.assertEqual(i1[0], start)

         end_intv = Period('2006-12-31', 'W')
         i1 = PeriodIndex(end=end_intv, periods=10)
-        assert_equal(len(i1), 10)
-        assert_equal(i1.freq, end_intv.freq)
-        assert_equal(i1[-1], end_intv)
+        self.assertEqual(len(i1), 10)
+        self.assertEqual(i1.freq, end_intv.freq)
+        self.assertEqual(i1[-1], end_intv)

         end_intv = Period('2006-12-31', '1w')
         i2 = PeriodIndex(end=end_intv, periods=10)
-        assert_equal(len(i1), len(i2))
+        self.assertEqual(len(i1), len(i2))
         self.assertTrue((i1 == i2).all())
-        assert_equal(i1.freq, i2.freq)
+        self.assertEqual(i1.freq, i2.freq)

         end_intv = Period('2006-12-31', ('w', 1))
         i2 = PeriodIndex(end=end_intv, periods=10)
-        assert_equal(len(i1), len(i2))
+        self.assertEqual(len(i1), len(i2))
         self.assertTrue((i1 == i2).all())
-        assert_equal(i1.freq, i2.freq)
+        self.assertEqual(i1.freq, i2.freq)

         try:
             PeriodIndex(start=start, end=end_intv)
@@ -2515,12 +2545,12 @@ def test_period_index_length(self):

         # infer freq from first element
         i2 = PeriodIndex([end_intv, Period('2005-05-05', 'B')])
-        assert_equal(len(i2), 2)
-        assert_equal(i2[0], end_intv)
+        self.assertEqual(len(i2), 2)
+        self.assertEqual(i2[0], end_intv)

         i2 = PeriodIndex(np.array([end_intv, Period('2005-05-05', 'B')]))
-        assert_equal(len(i2), 2)
-        assert_equal(i2[0], end_intv)
+        self.assertEqual(len(i2), 2)
+        self.assertEqual(i2[0], end_intv)

         # Mixed freq should fail
         vals = [end_intv, Period('2006-12-31', 'w')]
@@ -2544,12 +2574,12 @@ def test_asfreq_ts(self):
         df_result = df.asfreq('D', how='end')
         exp_index = index.asfreq('D', how='end')
         self.assertEqual(len(result), len(ts))
-        self.assertTrue(result.index.equals(exp_index))
-        self.assertTrue(df_result.index.equals(exp_index))
+        tm.assert_index_equal(result.index, exp_index)
+        tm.assert_index_equal(df_result.index, exp_index)

         result = ts.asfreq('D', how='start')
         self.assertEqual(len(result), len(ts))
-        self.assertTrue(result.index.equals(index.asfreq('D', how='start')))
+        tm.assert_index_equal(result.index, index.asfreq('D', how='start'))

     def test_badinput(self):
         self.assertRaises(datetools.DateParseError, Period, '1/1/-2000', 'A')
@@ -2562,7 +2592,7 @@ def test_negative_ordinals(self):

         idx1 = PeriodIndex(ordinal=[-1, 0, 1], freq='A')
         idx2 = PeriodIndex(ordinal=np.array([-1, 0, 1]), freq='A')
-        tm.assert_numpy_array_equal(idx1, idx2)
+        tm.assert_index_equal(idx1, idx2)

     def test_dti_to_period(self):
         dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
@@ -2590,10 +2620,10 @@ def test_pindex_slice_index(self):
         s = Series(np.random.rand(len(pi)), index=pi)
         res = s['2010']
         exp = s[0:12]
-        assert_series_equal(res, exp)
+        tm.assert_series_equal(res, exp)
         res = s['2011']
         exp = s[12:24]
-        assert_series_equal(res, exp)
+        tm.assert_series_equal(res, exp)

     def test_getitem_day(self):
         # GH 6716
@@ -2619,9 +2649,9 @@ def test_getitem_day(self):
                 continue

             s = Series(np.random.rand(len(idx)), index=idx)
-            assert_series_equal(s['2013/01'], s[0:31])
-            assert_series_equal(s['2013/02'], s[31:59])
-            assert_series_equal(s['2014'], s[365:])
+            tm.assert_series_equal(s['2013/01'], s[0:31])
+            tm.assert_series_equal(s['2013/02'], s[31:59])
+            tm.assert_series_equal(s['2014'], s[365:])

             invalid = ['2013/02/01 9H', '2013/02/01 09:00']
             for v in invalid:
@@ -2647,10 +2677,10 @@ def test_range_slice_day(self):

             s = Series(np.random.rand(len(idx)), index=idx)

-            assert_series_equal(s['2013/01/02':], s[1:])
-            assert_series_equal(s['2013/01/02':'2013/01/05'], s[1:5])
-            assert_series_equal(s['2013/02':], s[31:])
-            assert_series_equal(s['2014':], s[365:])
+            tm.assert_series_equal(s['2013/01/02':], s[1:])
+            tm.assert_series_equal(s['2013/01/02':'2013/01/05'], s[1:5])
+            tm.assert_series_equal(s['2013/02':], s[31:])
+            tm.assert_series_equal(s['2014':], s[365:])

             invalid = ['2013/02/01 9H', '2013/02/01 09:00']
             for v in invalid:
@@ -2680,10 +2710,10 @@ def test_getitem_seconds(self):
                 continue

             s = Series(np.random.rand(len(idx)), index=idx)
-            assert_series_equal(s['2013/01/01 10:00'], s[3600:3660])
-            assert_series_equal(s['2013/01/01 9H'], s[:3600])
+            tm.assert_series_equal(s['2013/01/01 10:00'], s[3600:3660])
+            tm.assert_series_equal(s['2013/01/01 9H'], s[:3600])
             for d in ['2013/01/01', '2013/01', '2013']:
-                assert_series_equal(s[d], s)
+                tm.assert_series_equal(s[d], s)

     def test_range_slice_seconds(self):
         # GH 6716
@@ -2705,14 +2735,14 @@ def test_range_slice_seconds(self):

             s = Series(np.random.rand(len(idx)), index=idx)

-            assert_series_equal(s['2013/01/01 09:05':'2013/01/01 09:10'],
-                                s[300:660])
-            assert_series_equal(s['2013/01/01 10:00':'2013/01/01 10:05'],
-                                s[3600:3960])
-            assert_series_equal(s['2013/01/01 10H':], s[3600:])
-            assert_series_equal(s[:'2013/01/01 09:30'], s[:1860])
+            tm.assert_series_equal(s['2013/01/01 09:05':'2013/01/01 09:10'],
+                                   s[300:660])
+            tm.assert_series_equal(s['2013/01/01 10:00':'2013/01/01 10:05'],
+                                   s[3600:3960])
+            tm.assert_series_equal(s['2013/01/01 10H':], s[3600:])
+            tm.assert_series_equal(s[:'2013/01/01 09:30'], s[:1860])
             for d in ['2013/01/01', '2013/01', '2013']:
-                assert_series_equal(s[d:], s)
+                tm.assert_series_equal(s[d:], s)

     def test_range_slice_outofbounds(self):
         # GH 5407
@@ -2721,8 +2751,8 @@ def test_range_slice_outofbounds(self):

         for idx in [didx, pidx]:
             df = DataFrame(dict(units=[100 + i for i in range(10)]),
                            index=idx)
-            empty = DataFrame(index=idx.__class__(
-                [], freq='D'), columns=['units'])
+            empty = DataFrame(index=idx.__class__([], freq='D'),
+                              columns=['units'])
             empty['units'] = empty['units'].astype('int64')

             tm.assert_frame_equal(df['2013/09/01':'2013/09/30'], empty)
@@ -2751,11 +2781,11 @@ def test_pindex_qaccess(self):
     def test_period_dt64_round_trip(self):
         dti = date_range('1/1/2000', '1/7/2002', freq='B')
         pi = dti.to_period()
-        self.assertTrue(pi.to_timestamp().equals(dti))
+        tm.assert_index_equal(pi.to_timestamp(), dti)

         dti = date_range('1/1/2000', '1/7/2002', freq='B')
         pi = dti.to_period(freq='H')
-        self.assertTrue(pi.to_timestamp().equals(dti))
+        tm.assert_index_equal(pi.to_timestamp(), dti)

     def test_to_period_quarterly(self):
         # make sure we can make the round trip
@@ -2764,7 +2794,7 @@ def test_to_period_quarterly(self):
             rng = period_range('1989Q3', '1991Q3', freq=freq)
             stamps = rng.to_timestamp()
             result = stamps.to_period(freq)
-            self.assertTrue(rng.equals(result))
+            tm.assert_index_equal(rng, result)

     def test_to_period_quarterlyish(self):
         offsets = ['BQ', 'QS', 'BQS']
@@ -2809,7 +2839,7 @@ def test_multiples(self):
     def test_pindex_multiples(self):
         pi = PeriodIndex(start='1/1/11', end='12/31/11', freq='2M')
         expected = PeriodIndex(['2011-01', '2011-03', '2011-05', '2011-07',
-                                '2011-09', '2011-11'], freq='M')
+                                '2011-09', '2011-11'], freq='2M')
         tm.assert_index_equal(pi, expected)
         self.assertEqual(pi.freq, offsets.MonthEnd(2))
         self.assertEqual(pi.freqstr, '2M')
@@ -2842,7 +2872,7 @@ def test_take(self):
         taken2 = index[[5, 6, 8, 12]]

         for taken in [taken1, taken2]:
-            self.assertTrue(taken.equals(expected))
+            tm.assert_index_equal(taken, expected)
             tm.assertIsInstance(taken, PeriodIndex)
             self.assertEqual(taken.freq, index.freq)
             self.assertEqual(taken.name, expected.name)
@@ -2913,16 +2943,16 @@ def test_align_series(self):
         result = ts + ts[::2]
         expected = ts + ts
         expected[1::2] = np.nan
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         result = ts + _permute(ts[::2])
-        assert_series_equal(result, expected)
+        tm.assert_series_equal(result, expected)

         # it works!
         for kind in ['inner', 'outer', 'left', 'right']:
             ts.align(ts[::2], join=kind)
         msg = "Input has different freq=D from PeriodIndex\\(freq=A-DEC\\)"
-        with assertRaisesRegexp(ValueError, msg):
+        with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
            ts + ts.asfreq('D', how="end")

     def test_align_frame(self):
@@ -2941,11 +2971,11 @@ def test_union(self):
         index = period_range('1/1/2000', '1/20/2000', freq='D')

         result = index[:-5].union(index[10:])
-        self.assertTrue(result.equals(index))
+        tm.assert_index_equal(result, index)

         # not in order
         result = _permute(index[:-5]).union(_permute(index[10:]))
-        self.assertTrue(result.equals(index))
+        tm.assert_index_equal(result, index)

         # raise if different frequencies
         index = period_range('1/1/2000', '1/20/2000', freq='D')
@@ -2976,13 +3006,13 @@ def test_intersection(self):
         index = period_range('1/1/2000', '1/20/2000', freq='D')

         result = index[:-5].intersection(index[10:])
-        self.assertTrue(result.equals(index[10:-5]))
+        tm.assert_index_equal(result, index[10:-5])

         # not in order
         left = _permute(index[:-5])
         right = _permute(index[10:])
         result = left.intersection(right).sort_values()
-        self.assertTrue(result.equals(index[10:-5]))
+        tm.assert_index_equal(result, index[10:-5])

         # raise if different frequencies
         index = period_range('1/1/2000', '1/20/2000', freq='D')
@@ -3013,7 +3043,7 @@ def test_intersection_cases(self):
         for (rng, expected) in [(rng2, expected2), (rng3, expected3),
                                 (rng4, expected4)]:
             result = base.intersection(rng)
-            self.assertTrue(result.equals(expected))
+            tm.assert_index_equal(result, expected)
             self.assertEqual(result.name, expected.name)
             self.assertEqual(result.freq, expected.freq)
@@ -3039,7 +3069,7 @@ def test_intersection_cases(self):
         for (rng, expected) in [(rng2, expected2), (rng3, expected3),
                                 (rng4, expected4)]:
             result = base.intersection(rng)
-            self.assertTrue(result.equals(expected))
+            tm.assert_index_equal(result, expected)
             self.assertEqual(result.name, expected.name)
             self.assertEqual(result.freq, 'D')
@@ -3093,9 +3123,9 @@ def _check_all_fields(self, periodindex):

         for field in fields:
             field_idx = getattr(periodindex, field)
-            assert_equal(len(periodindex), len(field_idx))
+            self.assertEqual(len(periodindex), len(field_idx))
             for x, val in zip(periods, field_idx):
-                assert_equal(getattr(x, field), val)
+                self.assertEqual(getattr(x, field), val)

     def test_is_full(self):
         index = PeriodIndex([2005, 2007, 2009], freq='A')
@@ -3119,10 +3149,10 @@ def test_map(self):
         index = PeriodIndex([2005, 2007, 2009], freq='A')
         result = index.map(lambda x: x + 1)
         expected = index + 1
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

         result = index.map(lambda x: x.ordinal)
-        exp = [x.ordinal for x in index]
+        exp = np.array([x.ordinal for x in index], dtype=np.int64)
         tm.assert_numpy_array_equal(result, exp)

     def test_map_with_string_constructor(self):
@@ -3220,11 +3250,11 @@ def test_factorize(self):

         arr, idx = idx1.factorize()
         self.assert_numpy_array_equal(arr, exp_arr)
-        self.assertTrue(idx.equals(exp_idx))
+        tm.assert_index_equal(idx, exp_idx)

         arr, idx = idx1.factorize(sort=True)
         self.assert_numpy_array_equal(arr, exp_arr)
-        self.assertTrue(idx.equals(exp_idx))
+        tm.assert_index_equal(idx, exp_idx)

         idx2 = pd.PeriodIndex(['2014-03', '2014-03', '2014-02', '2014-01',
                                '2014-03', '2014-01'], freq='M')
@@ -3232,19 +3262,19 @@ def test_factorize(self):
         exp_arr = np.array([2, 2, 1, 0, 2, 0])
         arr, idx = idx2.factorize(sort=True)
         self.assert_numpy_array_equal(arr, exp_arr)
-        self.assertTrue(idx.equals(exp_idx))
+        tm.assert_index_equal(idx, exp_idx)

         exp_arr = np.array([0, 0, 1, 2, 0, 2])
         exp_idx = PeriodIndex(['2014-03', '2014-02', '2014-01'], freq='M')
         arr, idx = idx2.factorize()
         self.assert_numpy_array_equal(arr, exp_arr)
-        self.assertTrue(idx.equals(exp_idx))
+        tm.assert_index_equal(idx, exp_idx)

     def test_recreate_from_data(self):
         for o in ['M', 'Q', 'A', 'D', 'B', 'T', 'S', 'L', 'U', 'N', 'H']:
             org = PeriodIndex(start='2001/04/01', freq=o, periods=1)
             idx = PeriodIndex(org.values, freq=o)
-            self.assertTrue(idx.equals(org))
+            tm.assert_index_equal(idx, org)

     def test_combine_first(self):
         # GH 3367
@@ -3292,13 +3322,12 @@ def _permute(obj):

 class TestMethods(tm.TestCase):
-    "Base test class for MaskedArrays."

     def test_add(self):
         dt1 = Period(freq='D', year=2008, month=1, day=1)
         dt2 = Period(freq='D', year=2008, month=1, day=2)
-        assert_equal(dt1 + 1, dt2)
-        assert_equal(1 + dt1, dt2)
+        self.assertEqual(dt1 + 1, dt2)
+        self.assertEqual(1 + dt1, dt2)

     def test_add_pdnat(self):
         p = pd.Period('2011-01', freq='M')
@@ -3324,6 +3353,17 @@ def test_add_raises(self):
         with tm.assertRaisesRegexp(TypeError, msg):
             dt1 + dt2

+    def test_sub(self):
+        dt1 = Period('2011-01-01', freq='D')
+        dt2 = Period('2011-01-15', freq='D')
+
+        self.assertEqual(dt1 - dt2, -14)
+        self.assertEqual(dt2 - dt1, 14)
+
+        msg = "Input has different freq=M from Period\(freq=D\)"
+        with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+            dt1 - pd.Period('2011-02', freq='M')
+
     def test_add_offset(self):
         # freq is DateOffset
         for freq in ['A', '2A', '3A']:
@@ -3335,14 +3375,14 @@ def test_add_offset(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(365, 'D'),
                       timedelta(365)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p + o

                 if isinstance(o, np.timedelta64):
                     with tm.assertRaises(TypeError):
                         o + p
                 else:
-                    with tm.assertRaises(ValueError):
+                    with tm.assertRaises(period.IncompatibleFrequency):
                         o + p

         for freq in ['M', '2M', '3M']:
@@ -3358,14 +3398,14 @@ def test_add_offset(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(365, 'D'),
                       timedelta(365)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p + o

                 if isinstance(o, np.timedelta64):
                     with tm.assertRaises(TypeError):
                         o + p
                 else:
-                    with tm.assertRaises(ValueError):
+                    with tm.assertRaises(period.IncompatibleFrequency):
                         o + p

         # freq is Tick
@@ -3401,14 +3441,14 @@ def test_add_offset(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(4, 'h'),
                       timedelta(hours=23)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p + o

                 if isinstance(o, np.timedelta64):
                     with tm.assertRaises(TypeError):
                         o + p
                 else:
-                    with tm.assertRaises(ValueError):
+                    with tm.assertRaises(period.IncompatibleFrequency):
                         o + p

         for freq in ['H', '2H', '3H']:
@@ -3443,14 +3483,14 @@ def test_add_offset(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(3200, 's'),
                       timedelta(hours=23, minutes=30)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p + o

                 if isinstance(o, np.timedelta64):
                     with tm.assertRaises(TypeError):
                         o + p
                 else:
-                    with tm.assertRaises(ValueError):
+                    with tm.assertRaises(period.IncompatibleFrequency):
                         o + p

     def test_add_offset_nat(self):
@@ -3464,14 +3504,14 @@ def test_add_offset_nat(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(365, 'D'),
                       timedelta(365)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p + o

                 if isinstance(o, np.timedelta64):
                     with tm.assertRaises(TypeError):
                         o + p
                 else:
-                    with tm.assertRaises(ValueError):
+                    with tm.assertRaises(period.IncompatibleFrequency):
                         o + p

         for freq in ['M', '2M', '3M']:
@@ -3488,14 +3528,14 @@ def test_add_offset_nat(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(365, 'D'),
                       timedelta(365)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p + o

                 if isinstance(o, np.timedelta64):
                     with tm.assertRaises(TypeError):
                         o + p
                 else:
-                    with tm.assertRaises(ValueError):
+                    with tm.assertRaises(period.IncompatibleFrequency):
                         o + p

         # freq is Tick
         for freq in ['D', '2D', '3D']:
@@ -3515,14 +3555,14 @@ def test_add_offset_nat(self):
                       offsets.Minute(), np.timedelta64(4, 'h'),
                       timedelta(hours=23)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p + o

                 if isinstance(o, np.timedelta64):
                     with tm.assertRaises(TypeError):
                         o + p
                 else:
-                    with tm.assertRaises(ValueError):
+                    with tm.assertRaises(period.IncompatibleFrequency):
                         o + p

         for freq in ['H', '2H', '3H']:
@@ -3538,14 +3578,14 @@ def test_add_offset_nat(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(3200, 's'),
                       timedelta(hours=23, minutes=30)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p + o

                 if isinstance(o, np.timedelta64):
                     with tm.assertRaises(TypeError):
                         o + p
                 else:
-                    with tm.assertRaises(ValueError):
+                    with tm.assertRaises(period.IncompatibleFrequency):
                         o + p

     def test_sub_pdnat(self):
@@ -3567,7 +3607,7 @@ def test_sub_offset(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(365, 'D'),
                       timedelta(365)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p - o

         for freq in ['M', '2M', '3M']:
@@ -3580,7 +3620,7 @@ def test_sub_offset(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(365, 'D'),
                       timedelta(365)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p - o

         # freq is Tick
@@ -3602,7 +3642,7 @@ def test_sub_offset(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(4, 'h'),
                       timedelta(hours=23)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p - o

         for freq in ['H', '2H', '3H']:
@@ -3623,7 +3663,7 @@ def test_sub_offset(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(3200, 's'),
                       timedelta(hours=23, minutes=30)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p - o

     def test_sub_offset_nat(self):
@@ -3636,7 +3676,7 @@ def test_sub_offset_nat(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(365, 'D'),
                       timedelta(365)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p - o

         for freq in ['M', '2M', '3M']:
@@ -3647,7 +3687,7 @@ def test_sub_offset_nat(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(365, 'D'),
                       timedelta(365)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p - o

         # freq is Tick
@@ -3661,7 +3701,7 @@ def test_sub_offset_nat(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(4, 'h'),
                       timedelta(hours=23)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p - o

         for freq in ['H', '2H', '3H']:
@@ -3674,7 +3714,7 @@ def test_sub_offset_nat(self):
             for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
                       offsets.Minute(), np.timedelta64(3200, 's'),
                       timedelta(hours=23, minutes=30)]:
-                with tm.assertRaises(ValueError):
+                with tm.assertRaises(period.IncompatibleFrequency):
                     p - o

     def test_nat_ops(self):
@@ -3683,77 +3723,153 @@ def test_nat_ops(self):
         self.assertEqual((p + 1).ordinal, tslib.iNaT)
         self.assertEqual((1 + p).ordinal, tslib.iNaT)
         self.assertEqual((p - 1).ordinal, tslib.iNaT)
-        self.assertEqual(
-            (p - Period('2011-01', freq=freq)).ordinal, tslib.iNaT)
-        self.assertEqual(
-            (Period('2011-01', freq=freq) - p).ordinal, tslib.iNaT)
+        self.assertEqual((p - Period('2011-01', freq=freq)).ordinal,
+                         tslib.iNaT)
+        self.assertEqual((Period('2011-01', freq=freq) - p).ordinal,
+                         tslib.iNaT)
+
+    def test_period_ops_offset(self):
+        p = Period('2011-04-01', freq='D')
+        result = p + offsets.Day()
+        exp = pd.Period('2011-04-02', freq='D')
+        self.assertEqual(result, exp)

-    def test_pi_ops_nat(self):
-        idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
+        result = p - offsets.Day(2)
+        exp = pd.Period('2011-03-30', freq='D')
+        self.assertEqual(result, exp)
+
+        msg = "Input cannot be converted to Period\(freq=D\)"
+        with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+            p + offsets.Hour(2)
+
+        with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg):
+            p - offsets.Hour(2)
+
+
+class TestPeriodIndexSeriesMethods(tm.TestCase):
+    """ Test PeriodIndex and Period Series Ops consistency """
+
+    def _check(self, values, func, expected):
+        idx = pd.PeriodIndex(values)
+        result = func(idx)
+        tm.assert_index_equal(result, pd.PeriodIndex(expected))
+
+        s = pd.Series(values)
+        result = func(s)
+
+        exp = pd.Series(expected)
+        # Period(NaT) != Period(NaT)
+
+        lmask = result.map(lambda x: x.ordinal != tslib.iNaT)
+        rmask = exp.map(lambda x: x.ordinal != tslib.iNaT)
+        tm.assert_series_equal(lmask, rmask)
+        tm.assert_series_equal(result[lmask], exp[rmask])
+
+    def test_pi_ops(self):
+        idx = PeriodIndex(['2011-01', '2011-02', '2011-03',
                            '2011-04'], freq='M', name='idx')
-        result = idx + 2
-        expected = PeriodIndex(
-            ['2011-03', '2011-04', 'NaT', '2011-06'], freq='M', name='idx')
-        self.assertTrue(result.equals(expected))

-        result2 = result - 2
-        self.assertTrue(result2.equals(idx))
+        expected = PeriodIndex(['2011-03', '2011-04',
+                                '2011-05', '2011-06'], freq='M', name='idx')
+        self._check(idx, lambda x: x + 2, expected)
+        self._check(idx, lambda x: 2 + x, expected)
+
+        self._check(idx + 2, lambda x: x - 2,
idx) + result = idx - Period('2011-01', freq='M') + exp = pd.Index([0, 1, 2, 3], name='idx') + tm.assert_index_equal(result, exp) + + result = Period('2011-01', freq='M') - idx + exp = pd.Index([0, -1, -2, -3], name='idx') + tm.assert_index_equal(result, exp) + + def test_pi_ops_errors(self): + idx = PeriodIndex(['2011-01', '2011-02', '2011-03', + '2011-04'], freq='M', name='idx') + s = pd.Series(idx) msg = "unsupported operand type\(s\)" - with tm.assertRaisesRegexp(TypeError, msg): - idx + "str" + for obj in [idx, s]: + for ng in ["str", 1.5]: + with tm.assertRaisesRegexp(TypeError, msg): + obj + ng + + with tm.assertRaises(TypeError): + # error message differs between PY2 and 3 + ng + obj + + with tm.assertRaisesRegexp(TypeError, msg): + obj - ng + + def test_pi_ops_nat(self): + idx = PeriodIndex(['2011-01', '2011-02', 'NaT', + '2011-04'], freq='M', name='idx') + expected = PeriodIndex(['2011-03', '2011-04', + 'NaT', '2011-06'], freq='M', name='idx') + self._check(idx, lambda x: x + 2, expected) + self._check(idx, lambda x: 2 + x, expected) - def test_pi_ops_array(self): + self._check(idx + 2, lambda x: x - 2, idx) + + def test_pi_ops_array_int(self): idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M', name='idx') - result = idx + np.array([1, 2, 3, 4]) + f = lambda x: x + np.array([1, 2, 3, 4]) exp = PeriodIndex(['2011-02', '2011-04', 'NaT', '2011-08'], freq='M', name='idx') - self.assert_index_equal(result, exp) + self._check(idx, f, exp) - result = np.add(idx, np.array([4, -1, 1, 2])) + f = lambda x: np.add(x, np.array([4, -1, 1, 2])) exp = PeriodIndex(['2011-05', '2011-01', 'NaT', '2011-06'], freq='M', name='idx') - self.assert_index_equal(result, exp) + self._check(idx, f, exp) - result = idx - np.array([1, 2, 3, 4]) + f = lambda x: x - np.array([1, 2, 3, 4]) exp = PeriodIndex(['2010-12', '2010-12', 'NaT', '2010-12'], freq='M', name='idx') - self.assert_index_equal(result, exp) + self._check(idx, f, exp) - result = np.subtract(idx, 
np.array([3, 2, 3, -2])) + f = lambda x: np.subtract(x, np.array([3, 2, 3, -2])) exp = PeriodIndex(['2010-10', '2010-12', 'NaT', '2011-06'], freq='M', name='idx') - self.assert_index_equal(result, exp) - - # incompatible freq - msg = "Input has different freq from PeriodIndex\(freq=M\)" - with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): - idx + np.array([np.timedelta64(1, 'D')] * 4) - - idx = PeriodIndex(['2011-01-01 09:00', '2011-01-01 10:00', 'NaT', - '2011-01-01 12:00'], freq='H', name='idx') - result = idx + np.array([np.timedelta64(1, 'D')] * 4) - exp = PeriodIndex(['2011-01-02 09:00', '2011-01-02 10:00', 'NaT', - '2011-01-02 12:00'], freq='H', name='idx') - self.assert_index_equal(result, exp) - - result = idx - np.array([np.timedelta64(1, 'h')] * 4) - exp = PeriodIndex(['2011-01-01 08:00', '2011-01-01 09:00', 'NaT', - '2011-01-01 11:00'], freq='H', name='idx') - self.assert_index_equal(result, exp) + self._check(idx, f, exp) + + def test_pi_ops_offset(self): + idx = PeriodIndex(['2011-01-01', '2011-02-01', '2011-03-01', + '2011-04-01'], freq='D', name='idx') + f = lambda x: x + offsets.Day() + exp = PeriodIndex(['2011-01-02', '2011-02-02', '2011-03-02', + '2011-04-02'], freq='D', name='idx') + self._check(idx, f, exp) + + f = lambda x: x + offsets.Day(2) + exp = PeriodIndex(['2011-01-03', '2011-02-03', '2011-03-03', + '2011-04-03'], freq='D', name='idx') + self._check(idx, f, exp) + + f = lambda x: x - offsets.Day(2) + exp = PeriodIndex(['2010-12-30', '2011-01-30', '2011-02-27', + '2011-03-30'], freq='D', name='idx') + self._check(idx, f, exp) + + def test_pi_offset_errors(self): + idx = PeriodIndex(['2011-01-01', '2011-02-01', '2011-03-01', + '2011-04-01'], freq='D', name='idx') + s = pd.Series(idx) + + # Series op is applied per Period instance, thus error is raised + # from Period + msg_idx = "Input has different freq from PeriodIndex\(freq=D\)" + msg_s = "Input cannot be converted to Period\(freq=D\)" + for obj, msg in [(idx, msg_idx), (s, 
msg_s)]: + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): + obj + offsets.Hour(2) - msg = "Input has different freq from PeriodIndex\(freq=H\)" - with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): - idx + np.array([np.timedelta64(1, 's')] * 4) + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): + offsets.Hour(2) + obj - idx = PeriodIndex(['2011-01-01 09:00:00', '2011-01-01 10:00:00', 'NaT', - '2011-01-01 12:00:00'], freq='S', name='idx') - result = idx + np.array([np.timedelta64(1, 'h'), np.timedelta64( - 30, 's'), np.timedelta64(2, 'h'), np.timedelta64(15, 'm')]) - exp = PeriodIndex(['2011-01-01 10:00:00', '2011-01-01 10:00:30', 'NaT', - '2011-01-01 12:15:00'], freq='S', name='idx') - self.assert_index_equal(result, exp) + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): + obj - offsets.Hour(2) def test_pi_sub_period(self): # GH 13071 @@ -3871,7 +3987,7 @@ def test_equal(self): self.assertEqual(self.january1, self.january2) def test_equal_Raises_Value(self): - with tm.assertRaises(ValueError): + with tm.assertRaises(period.IncompatibleFrequency): self.january1 == self.day def test_notEqual(self): @@ -3882,7 +3998,7 @@ def test_greater(self): self.assertTrue(self.february > self.january1) def test_greater_Raises_Value(self): - with tm.assertRaises(ValueError): + with tm.assertRaises(period.IncompatibleFrequency): self.january1 > self.day def test_greater_Raises_Type(self): @@ -3893,8 +4009,9 @@ def test_greaterEqual(self): self.assertTrue(self.january1 >= self.january2) def test_greaterEqual_Raises_Value(self): - with tm.assertRaises(ValueError): + with tm.assertRaises(period.IncompatibleFrequency): self.january1 >= self.day + with tm.assertRaises(TypeError): print(self.january1 >= 1) @@ -3902,7 +4019,7 @@ def test_smallerEqual(self): self.assertTrue(self.january1 <= self.january2) def test_smallerEqual_Raises_Value(self): - with tm.assertRaises(ValueError): + with tm.assertRaises(period.IncompatibleFrequency): 
self.january1 <= self.day def test_smallerEqual_Raises_Type(self): @@ -3913,7 +4030,7 @@ def test_smaller(self): self.assertTrue(self.january1 < self.february) def test_smaller_Raises_Value(self): - with tm.assertRaises(ValueError): + with tm.assertRaises(period.IncompatibleFrequency): self.january1 < self.day def test_smaller_Raises_Type(self): @@ -3950,24 +4067,30 @@ def test_pi_pi_comp(self): exp = np.array([False, True, False, False]) self.assert_numpy_array_equal(base == p, exp) + self.assert_numpy_array_equal(p == base, exp) exp = np.array([True, False, True, True]) self.assert_numpy_array_equal(base != p, exp) + self.assert_numpy_array_equal(p != base, exp) exp = np.array([False, False, True, True]) self.assert_numpy_array_equal(base > p, exp) + self.assert_numpy_array_equal(p < base, exp) exp = np.array([True, False, False, False]) self.assert_numpy_array_equal(base < p, exp) + self.assert_numpy_array_equal(p > base, exp) exp = np.array([False, True, True, True]) self.assert_numpy_array_equal(base >= p, exp) + self.assert_numpy_array_equal(p <= base, exp) exp = np.array([True, True, False, False]) self.assert_numpy_array_equal(base <= p, exp) + self.assert_numpy_array_equal(p >= base, exp) - idx = PeriodIndex( - ['2011-02', '2011-01', '2011-03', '2011-05'], freq=freq) + idx = PeriodIndex(['2011-02', '2011-01', '2011-03', + '2011-05'], freq=freq) exp = np.array([False, False, True, False]) self.assert_numpy_array_equal(base == idx, exp) @@ -3992,7 +4115,10 @@ def test_pi_pi_comp(self): with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): base <= Period('2011', freq='A') - with tm.assertRaisesRegexp(ValueError, msg): + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): + Period('2011', freq='A') >= base + + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): idx = PeriodIndex(['2011', '2012', '2013', '2014'], freq='A') base <= idx @@ -4001,6 +4127,9 @@ def test_pi_pi_comp(self): with 
tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): base <= Period('2011', freq='4M') + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): + Period('2011', freq='4M') >= base + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): idx = PeriodIndex(['2011', '2012', '2013', '2014'], freq='4M') base <= idx @@ -4013,17 +4142,23 @@ def test_pi_nat_comp(self): result = idx1 > Period('2011-02', freq=freq) exp = np.array([False, False, False, True]) self.assert_numpy_array_equal(result, exp) + result = Period('2011-02', freq=freq) < idx1 + self.assert_numpy_array_equal(result, exp) result = idx1 == Period('NaT', freq=freq) exp = np.array([False, False, False, False]) self.assert_numpy_array_equal(result, exp) + result = Period('NaT', freq=freq) == idx1 + self.assert_numpy_array_equal(result, exp) result = idx1 != Period('NaT', freq=freq) exp = np.array([True, True, True, True]) self.assert_numpy_array_equal(result, exp) + result = Period('NaT', freq=freq) != idx1 + self.assert_numpy_array_equal(result, exp) - idx2 = PeriodIndex( - ['2011-02', '2011-01', '2011-04', 'NaT'], freq=freq) + idx2 = PeriodIndex(['2011-02', '2011-01', '2011-04', + 'NaT'], freq=freq) result = idx1 < idx2 exp = np.array([True, False, False, False]) self.assert_numpy_array_equal(result, exp) @@ -4044,11 +4179,12 @@ def test_pi_nat_comp(self): exp = np.array([False, False, True, False]) self.assert_numpy_array_equal(result, exp) - diff = PeriodIndex( - ['2011-02', '2011-01', '2011-04', 'NaT'], freq='4M') + diff = PeriodIndex(['2011-02', '2011-01', '2011-04', + 'NaT'], freq='4M') msg = "Input has different freq=4M from PeriodIndex" with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): idx1 > diff + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): idx1 == diff @@ -4089,19 +4225,19 @@ def test_constructor_cast_object(self): def test_series_comparison_scalars(self): val = pd.Period('2000-01-04', freq='D') result = self.series > val - expected = np.array([x 
> val for x in self.series]) - self.assert_numpy_array_equal(result, expected) + expected = pd.Series([x > val for x in self.series]) + tm.assert_series_equal(result, expected) val = self.series[5] result = self.series > val - expected = np.array([x > val for x in self.series]) - self.assert_numpy_array_equal(result, expected) + expected = pd.Series([x > val for x in self.series]) + tm.assert_series_equal(result, expected) def test_between(self): left, right = self.series[[2, 7]] result = self.series.between(left, right) expected = (self.series >= left) & (self.series <= right) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) # --------------------------------------------------------------------- # NaT support @@ -4120,7 +4256,7 @@ def test_NaT_scalar(self): def test_NaT_cast(self): result = Series([np.nan]).astype('period[D]') expected = Series([NaT]) - assert_series_equal(result, expected) + tm.assert_series_equal(result, expected) """ def test_set_none_nan(self): @@ -4151,6 +4287,176 @@ def test_intercept_astype_object(self): result = df.values.squeeze() self.assertTrue((result[:, 0] == expected.values).all()) + def test_ops_series_timedelta(self): + # GH 13043 + s = pd.Series([pd.Period('2015-01-01', freq='D'), + pd.Period('2015-01-02', freq='D')], name='xxx') + self.assertEqual(s.dtype, object) + + exp = pd.Series([pd.Period('2015-01-02', freq='D'), + pd.Period('2015-01-03', freq='D')], name='xxx') + tm.assert_series_equal(s + pd.Timedelta('1 days'), exp) + tm.assert_series_equal(pd.Timedelta('1 days') + s, exp) + + tm.assert_series_equal(s + pd.tseries.offsets.Day(), exp) + tm.assert_series_equal(pd.tseries.offsets.Day() + s, exp) + + def test_ops_series_period(self): + # GH 13043 + s = pd.Series([pd.Period('2015-01-01', freq='D'), + pd.Period('2015-01-02', freq='D')], name='xxx') + self.assertEqual(s.dtype, object) + + p = pd.Period('2015-01-10', freq='D') + # dtype will be object because of original dtype + exp = 
pd.Series([9, 8], name='xxx', dtype=object) + tm.assert_series_equal(p - s, exp) + tm.assert_series_equal(s - p, -exp) + + s2 = pd.Series([pd.Period('2015-01-05', freq='D'), + pd.Period('2015-01-04', freq='D')], name='xxx') + self.assertEqual(s2.dtype, object) + + exp = pd.Series([4, 2], name='xxx', dtype=object) + tm.assert_series_equal(s2 - s, exp) + tm.assert_series_equal(s - s2, -exp) + + def test_comp_series_period_scalar(self): + # GH 13200 + for freq in ['M', '2M', '3M']: + base = Series([Period(x, freq=freq) for x in + ['2011-01', '2011-02', '2011-03', '2011-04']]) + p = Period('2011-02', freq=freq) + + exp = pd.Series([False, True, False, False]) + tm.assert_series_equal(base == p, exp) + tm.assert_series_equal(p == base, exp) + + exp = pd.Series([True, False, True, True]) + tm.assert_series_equal(base != p, exp) + tm.assert_series_equal(p != base, exp) + + exp = pd.Series([False, False, True, True]) + tm.assert_series_equal(base > p, exp) + tm.assert_series_equal(p < base, exp) + + exp = pd.Series([True, False, False, False]) + tm.assert_series_equal(base < p, exp) + tm.assert_series_equal(p > base, exp) + + exp = pd.Series([False, True, True, True]) + tm.assert_series_equal(base >= p, exp) + tm.assert_series_equal(p <= base, exp) + + exp = pd.Series([True, True, False, False]) + tm.assert_series_equal(base <= p, exp) + tm.assert_series_equal(p >= base, exp) + + # different base freq + msg = "Input has different freq=A-DEC from Period" + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): + base <= Period('2011', freq='A') + + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): + Period('2011', freq='A') >= base + + def test_comp_series_period_series(self): + # GH 13200 + for freq in ['M', '2M', '3M']: + base = Series([Period(x, freq=freq) for x in + ['2011-01', '2011-02', '2011-03', '2011-04']]) + + s = Series([Period(x, freq=freq) for x in + ['2011-02', '2011-01', '2011-03', '2011-05']]) + + exp = Series([False, False, True, 
False]) + tm.assert_series_equal(base == s, exp) + + exp = Series([True, True, False, True]) + tm.assert_series_equal(base != s, exp) + + exp = Series([False, True, False, False]) + tm.assert_series_equal(base > s, exp) + + exp = Series([True, False, False, True]) + tm.assert_series_equal(base < s, exp) + + exp = Series([False, True, True, False]) + tm.assert_series_equal(base >= s, exp) + + exp = Series([True, False, True, True]) + tm.assert_series_equal(base <= s, exp) + + s2 = Series([Period(x, freq='A') for x in + ['2011', '2011', '2011', '2011']]) + + # different base freq + msg = "Input has different freq=A-DEC from Period" + with tm.assertRaisesRegexp(period.IncompatibleFrequency, msg): + base <= s2 + + def test_comp_series_period_object(self): + # GH 13200 + base = Series([Period('2011', freq='A'), Period('2011-02', freq='M'), + Period('2013', freq='A'), Period('2011-04', freq='M')]) + + s = Series([Period('2012', freq='A'), Period('2011-01', freq='M'), + Period('2013', freq='A'), Period('2011-05', freq='M')]) + + exp = Series([False, False, True, False]) + tm.assert_series_equal(base == s, exp) + + exp = Series([True, True, False, True]) + tm.assert_series_equal(base != s, exp) + + exp = Series([False, True, False, False]) + tm.assert_series_equal(base > s, exp) + + exp = Series([True, False, False, True]) + tm.assert_series_equal(base < s, exp) + + exp = Series([False, True, True, False]) + tm.assert_series_equal(base >= s, exp) + + exp = Series([True, False, True, True]) + tm.assert_series_equal(base <= s, exp) + + def test_ops_frame_period(self): + # GH 13043 + df = pd.DataFrame({'A': [pd.Period('2015-01', freq='M'), + pd.Period('2015-02', freq='M')], + 'B': [pd.Period('2014-01', freq='M'), + pd.Period('2014-02', freq='M')]}) + self.assertEqual(df['A'].dtype, object) + self.assertEqual(df['B'].dtype, object) + + p = pd.Period('2015-03', freq='M') + # dtype will be object because of original dtype + exp = pd.DataFrame({'A': np.array([2, 1], 
dtype=object), + 'B': np.array([14, 13], dtype=object)}) + tm.assert_frame_equal(p - df, exp) + tm.assert_frame_equal(df - p, -exp) + + df2 = pd.DataFrame({'A': [pd.Period('2015-05', freq='M'), + pd.Period('2015-06', freq='M')], + 'B': [pd.Period('2015-05', freq='M'), + pd.Period('2015-06', freq='M')]}) + self.assertEqual(df2['A'].dtype, object) + self.assertEqual(df2['B'].dtype, object) + + exp = pd.DataFrame({'A': np.array([4, 4], dtype=object), + 'B': np.array([16, 16], dtype=object)}) + tm.assert_frame_equal(df2 - df, exp) + tm.assert_frame_equal(df - df2, -exp) + + +class TestPeriodField(tm.TestCase): + def test_get_period_field_raises_on_out_of_range(self): + self.assertRaises(ValueError, _period.get_period_field, -1, 0, 0) + + def test_get_period_field_array_raises_on_out_of_range(self): + self.assertRaises(ValueError, _period.get_period_field_arr, -1, + np.empty(1), 0) if __name__ == '__main__': import nose diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py index 9fab9c0990ef0..2255f9fae73de 100644 --- a/pandas/tseries/tests/test_plotting.py +++ b/pandas/tseries/tests/test_plotting.py @@ -4,8 +4,6 @@ from pandas.compat import lrange, zip import numpy as np -from numpy.testing.decorators import slow - from pandas import Index, Series, DataFrame from pandas.tseries.index import date_range, bdate_range @@ -13,7 +11,7 @@ from pandas.tseries.period import period_range, Period, PeriodIndex from pandas.tseries.resample import DatetimeIndex -from pandas.util.testing import assert_series_equal, ensure_clean +from pandas.util.testing import assert_series_equal, ensure_clean, slow import pandas.util.testing as tm from pandas.tests.test_graphics import _skip_if_no_scipy_gaussian_kde @@ -76,6 +74,13 @@ def test_frame_inferred(self): df = DataFrame(np.random.randn(len(idx), 3), index=idx) _check_plot_works(df.plot) + def test_is_error_nozeroindex(self): + # GH11858 + i = np.array([1, 2, 3]) + a = DataFrame(i, index=i) + 
_check_plot_works(a.plot, xerr=a) + _check_plot_works(a.plot, yerr=a) + def test_nonnumeric_exclude(self): import matplotlib.pyplot as plt @@ -325,7 +330,7 @@ def test_dataframe(self): bts = DataFrame({'a': tm.makeTimeSeries()}) ax = bts.plot() idx = ax.get_lines()[0].get_xdata() - tm.assert_numpy_array_equal(bts.index.to_period(), PeriodIndex(idx)) + tm.assert_index_equal(bts.index.to_period(), PeriodIndex(idx)) @slow def test_axis_limits(self): @@ -1108,7 +1113,7 @@ def test_ax_plot(self): fig = plt.figure() ax = fig.add_subplot(111) lines = ax.plot(x, y, label='Y') - tm.assert_numpy_array_equal(DatetimeIndex(lines[0].get_xdata()), x) + tm.assert_index_equal(DatetimeIndex(lines[0].get_xdata()), x) @slow def test_mpl_nopandas(self): diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py index 77396c3e38c93..2236d20975eee 100644 --- a/pandas/tseries/tests/test_resample.py +++ b/pandas/tseries/tests/test_resample.py @@ -13,18 +13,20 @@ notnull, Timestamp) from pandas.compat import range, lrange, zip, product, OrderedDict from pandas.core.base import SpecificationError -from pandas.core.common import ABCSeries, ABCDataFrame +from pandas.core.common import (ABCSeries, ABCDataFrame, + UnsupportedFunctionCall) from pandas.core.groupby import DataError from pandas.tseries.frequencies import MONTHS, DAYS +from pandas.tseries.frequencies import to_offset from pandas.tseries.index import date_range from pandas.tseries.offsets import Minute, BDay from pandas.tseries.period import period_range, PeriodIndex, Period from pandas.tseries.resample import (DatetimeIndex, TimeGrouper, DatetimeIndexResampler) -from pandas.tseries.frequencies import to_offset from pandas.tseries.tdi import timedelta_range from pandas.util.testing import (assert_series_equal, assert_almost_equal, - assert_frame_equal) + assert_frame_equal, assert_index_equal) +from pandas._period import IncompatibleFrequency bday = BDay() @@ -577,6 +579,7 @@ class Base(object): base 
class for resampling testing, calling .create_series() generates a series of each index type """ + def create_index(self, *args, **kwargs): """ return the _index_factory created using the args, kwargs """ factory = self._index_factory() @@ -619,6 +622,75 @@ def test_resample_interpolate(self): df.resample('1T').asfreq().interpolate(), df.resample('1T').interpolate()) + def test_raises_on_non_datetimelike_index(self): + # this is a non datetimelike index + xp = DataFrame() + self.assertRaises(TypeError, lambda: xp.resample('A').mean()) + + def test_resample_empty_series(self): + # GH12771 & GH12868 + + s = self.create_series()[:0] + + for freq in ['M', 'D', 'H']: + # need to test for ohlc from GH13083 + methods = [method for method in resample_methods + if method != 'ohlc'] + for method in methods: + result = getattr(s.resample(freq), method)() + + expected = s.copy() + expected.index = s.index._shallow_copy(freq=freq) + assert_index_equal(result.index, expected.index) + self.assertEqual(result.index.freq, expected.index.freq) + + if (method == 'size' and + isinstance(result.index, PeriodIndex) and + freq in ['M', 'D']): + # GH12871 - TODO: name should propagate, but currently + # doesn't on lower / same frequency with PeriodIndex + assert_series_equal(result, expected, check_dtype=False, + check_names=False) + # this assert will break when fixed + self.assertTrue(result.name is None) + else: + assert_series_equal(result, expected, check_dtype=False) + + def test_resample_empty_dataframe(self): + # GH13212 + index = self.create_series().index[:0] + f = DataFrame(index=index) + + for freq in ['M', 'D', 'H']: + # count retains dimensions too + methods = downsample_methods + ['count'] + for method in methods: + result = getattr(f.resample(freq), method)() + + expected = f.copy() + expected.index = f.index._shallow_copy(freq=freq) + assert_index_equal(result.index, expected.index) + self.assertEqual(result.index.freq, expected.index.freq) + assert_frame_equal(result, 
expected, check_dtype=False) + + # test size for GH13212 (currently stays as df) + + def test_resample_empty_dtypes(self): + + # Empty series were sometimes causing a segfault (for the functions + # with Cython bounds-checking disabled) or an IndexError. We just run + # them to ensure they no longer do. (GH #10228) + for index in tm.all_timeseries_index_generator(0): + for dtype in (np.float, np.int, np.object, 'datetime64[ns]'): + for how in downsample_methods + upsample_methods: + empty_series = pd.Series([], index, dtype) + try: + getattr(empty_series.resample('d'), how)() + except DataError: + # Ignore these since some combinations are invalid + # (ex: doing mean with dtype of np.object) + pass + class TestDatetimeIndex(Base, tm.TestCase): _multiprocess_can_split_ = True @@ -746,6 +818,22 @@ def _ohlc(group): exc.args += ('how=%s' % arg,) raise + def test_numpy_compat(self): + # see gh-12811 + s = Series([1, 2, 3, 4, 5], index=date_range( + '20130101', periods=5, freq='s')) + r = s.resample('2s') + + msg = "numpy operations are not valid with resample" + + for func in ('min', 'max', 'sum', 'prod', + 'mean', 'var', 'std'): + tm.assertRaisesRegexp(UnsupportedFunctionCall, msg, + getattr(r, func), + func, 1, 2, 3) + tm.assertRaisesRegexp(UnsupportedFunctionCall, msg, + getattr(r, func), axis=1) + def test_resample_how_callables(self): # GH 7929 data = np.arange(5, dtype=np.int64) @@ -1330,7 +1418,7 @@ def test_resample_base(self): resampled = ts.resample('5min', base=2).mean() exp_rng = date_range('12/31/1999 23:57:00', '1/1/2000 01:57', freq='5min') - self.assertTrue(resampled.index.equals(exp_rng)) + self.assert_index_equal(resampled.index, exp_rng) def test_resample_base_with_timedeltaindex(self): @@ -1344,8 +1432,8 @@ def test_resample_base_with_timedeltaindex(self): exp_without_base = timedelta_range(start='0s', end='25s', freq='2s') exp_with_base = timedelta_range(start='5s', end='29s', freq='2s') - 
self.assertTrue(without_base.index.equals(exp_without_base)) - self.assertTrue(with_base.index.equals(exp_with_base)) + self.assert_index_equal(without_base.index, exp_without_base) + self.assert_index_equal(with_base.index, exp_with_base) def test_resample_categorical_data_with_timedeltaindex(self): # GH #12169 @@ -1376,7 +1464,7 @@ def test_resample_to_period_monthly_buglet(self): result = ts.resample('M', kind='period').mean() exp_index = period_range('Jan-2000', 'Dec-2000', freq='M') - self.assertTrue(result.index.equals(exp_index)) + self.assert_index_equal(result.index, exp_index) def test_period_with_agg(self): @@ -1391,39 +1479,6 @@ def test_period_with_agg(self): result = s2.resample('D').agg(lambda x: x.mean()) assert_series_equal(result, expected) - def test_resample_empty(self): - ts = _simple_ts('1/1/2000', '2/1/2000')[:0] - - result = ts.resample('A').mean() - self.assertEqual(len(result), 0) - self.assertEqual(result.index.freqstr, 'A-DEC') - - result = ts.resample('A', kind='period').mean() - self.assertEqual(len(result), 0) - self.assertEqual(result.index.freqstr, 'A-DEC') - - # this is a non datetimelike index - xp = DataFrame() - self.assertRaises(TypeError, lambda: xp.resample('A').mean()) - - # Empty series were sometimes causing a segfault (for the functions - # with Cython bounds-checking disabled) or an IndexError. We just run - # them to ensure they no longer do. 
(GH #10228)
-        for index in tm.all_timeseries_index_generator(0):
-            for dtype in (np.float, np.int, np.object, 'datetime64[ns]'):
-                for how in downsample_methods + upsample_methods:
-                    empty_series = pd.Series([], index, dtype)
-                    try:
-                        getattr(empty_series.resample('d'), how)()
-                    except DataError:
-                        # Ignore these since some combinations are invalid
-                        # (ex: doing mean with dtype of np.object)
-                        pass
-
-        # this should also tests nunique
-        # (IOW, use resample_methods)
-        # when GH12886 is closed
-
    def test_resample_segfault(self):
        # GH 8573
        # segfaulting in older versions
@@ -1572,7 +1627,7 @@ def test_corner_cases(self):
        result = ts.resample('5t', closed='right', label='left').mean()

        ex_index = date_range('1999-12-31 23:55', periods=4, freq='5t')
-        self.assertTrue(result.index.equals(ex_index))
+        self.assert_index_equal(result.index, ex_index)

        len0pts = _simple_pts('2007-01', '2010-05', freq='M')[:0]
        # it works
@@ -1846,6 +1901,32 @@ def test_resmaple_dst_anchor(self):
                                         freq='D', tz='Europe/Paris')),
            'D Frequency')

+    def test_resample_with_nat(self):
+        # GH 13020
+        index = DatetimeIndex([pd.NaT,
+                               '1970-01-01 00:00:00',
+                               pd.NaT,
+                               '1970-01-01 00:00:01',
+                               '1970-01-01 00:00:02'])
+        frame = DataFrame([2, 3, 5, 7, 11], index=index)
+
+        index_1s = DatetimeIndex(['1970-01-01 00:00:00',
+                                  '1970-01-01 00:00:01',
+                                  '1970-01-01 00:00:02'])
+        frame_1s = DataFrame([3, 7, 11], index=index_1s)
+        assert_frame_equal(frame.resample('1s').mean(), frame_1s)
+
+        index_2s = DatetimeIndex(['1970-01-01 00:00:00',
+                                  '1970-01-01 00:00:02'])
+        frame_2s = DataFrame([5, 11], index=index_2s)
+        assert_frame_equal(frame.resample('2s').mean(), frame_2s)
+
+        index_3s = DatetimeIndex(['1970-01-01 00:00:00'])
+        frame_3s = DataFrame([7], index=index_3s)
+        assert_frame_equal(frame.resample('3s').mean(), frame_3s)
+
+        assert_frame_equal(frame.resample('60s').mean(), frame_3s)
+

class TestPeriodIndex(Base, tm.TestCase):
    _multiprocess_can_split_ = True
@@ -2042,19 +2123,6 @@ def test_resample_basic(self):
        result2 = s.resample('T', kind='period').mean()
        assert_series_equal(result2, expected)

-    def test_resample_empty(self):
-
-        # GH12771 & GH12868
-        index = PeriodIndex(start='2000', periods=0, freq='D', name='idx')
-        s = Series(index=index)
-
-        expected_index = PeriodIndex([], name='idx', freq='M')
-        expected = Series(index=expected_index)
-
-        for method in resample_methods:
-            result = getattr(s.resample('M'), method)()
-            assert_series_equal(result, expected)
-
    def test_resample_count(self):

        # GH12774
@@ -2078,6 +2146,12 @@ def test_resample_same_freq(self):
            result = getattr(series.resample('M'), method)()
            assert_series_equal(result, expected)

+    def test_resample_incompat_freq(self):
+
+        with self.assertRaises(IncompatibleFrequency):
+            pd.Series(range(3), index=pd.period_range(
+                start='2000', periods=3, freq='M')).resample('W').mean()
+
    def test_with_local_timezone_pytz(self):
        # GH5430
        tm._skip_if_no_pytz()
@@ -2317,7 +2391,7 @@ def test_closed_left_corner(self):

        ex_index = date_range(start='1/1/2012 9:30', freq='10min', periods=3)
-        self.assertTrue(result.index.equals(ex_index))
+        self.assert_index_equal(result.index, ex_index)
        assert_series_equal(result, exp)

    def test_quarterly_resampling(self):
@@ -2439,7 +2513,6 @@ def create_series(self):
        return Series(np.arange(len(i)), index=i, name='tdi')

    def test_asfreq_bug(self):
-        import datetime as dt
        df = DataFrame(data=[1, 3],
                       index=[dt.timedelta(), dt.timedelta(minutes=3)])
@@ -2452,7 +2525,6 @@ def test_asfreq_bug(self):

class TestResamplerGrouper(tm.TestCase):
-
    def setUp(self):
        self.frame = DataFrame({'A': [1] * 20 + [2] * 12 + [3] * 8,
                                'B': np.arange(40)},
@@ -2519,6 +2591,25 @@ def test_getitem(self):
        result = g.resample('2s').mean().B
        assert_series_equal(result, expected)

+    def test_getitem_multiple(self):
+
+        # GH 13174
+        # multiple calls after selection causing an issue with aliasing
+        data = [{'id': 1, 'buyer': 'A'}, {'id': 2, 'buyer': 'B'}]
+        df = pd.DataFrame(data, index=pd.date_range('2016-01-01', periods=2))
+        r = df.groupby('id').resample('1D')
+        result = r['buyer'].count()
+        expected = pd.Series([1, 1],
+                             index=pd.MultiIndex.from_tuples(
+                                 [(1, pd.Timestamp('2016-01-01')),
+                                  (2, pd.Timestamp('2016-01-02'))],
+                                 names=['id', None]),
+                             name='buyer')
+        assert_series_equal(result, expected)
+
+        result = r['buyer'].count()
+        assert_series_equal(result, expected)
+
    def test_methods(self):
        g = self.frame.groupby('A')
        r = g.resample('2s')
@@ -2569,14 +2660,36 @@ def test_apply(self):

        def f(x):
            return x.resample('2s').sum()
+
        result = r.apply(f)
        assert_frame_equal(result, expected)

        def f(x):
            return x.resample('2s').apply(lambda y: y.sum())
+
        result = g.apply(f)
        assert_frame_equal(result, expected)

+    def test_resample_groupby_with_label(self):
+        # GH 13235
+        index = date_range('2000-01-01', freq='2D', periods=5)
+        df = DataFrame(index=index,
+                       data={'col0': [0, 0, 1, 1, 2], 'col1': [1, 1, 1, 1, 1]}
+                       )
+        result = df.groupby('col0').resample('1W', label='left').sum()
+
+        mi = [np.array([0, 0, 1, 2]),
+              pd.to_datetime(np.array(['1999-12-26', '2000-01-02',
+                                       '2000-01-02', '2000-01-02'])
+                             )
+              ]
+        mindex = pd.MultiIndex.from_arrays(mi, names=['col0', None])
+        expected = DataFrame(data={'col0': [0, 0, 2, 2], 'col1': [1, 1, 2, 1]},
+                             index=mindex
+                             )
+
+        assert_frame_equal(result, expected)
+
    def test_consistency_with_window(self):

        # consistent return values with window
@@ -2647,7 +2760,7 @@ def test_apply_iteration(self):

        # it works!
        result = grouped.apply(f)
-        self.assertTrue(result.index.equals(df.index))
+        self.assert_index_equal(result.index, df.index)

    def test_panel_aggregation(self):
        ind = pd.date_range('1/1/2000', periods=100)
diff --git a/pandas/tseries/tests/test_timedeltas.py b/pandas/tseries/tests/test_timedeltas.py
index c764f34b697c1..10276137b42a1 100644
--- a/pandas/tseries/tests/test_timedeltas.py
+++ b/pandas/tseries/tests/test_timedeltas.py
@@ -16,7 +16,6 @@
from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type as ct
from pandas.util.testing import (assert_series_equal, assert_frame_equal,
                                 assert_almost_equal, assert_index_equal)
-from numpy.testing import assert_allclose
from pandas.tseries.offsets import Day, Second
import pandas.util.testing as tm
from numpy.random import randn
@@ -413,6 +412,38 @@ def test_ops_series(self):
            tm.assert_series_equal(expected, td * other)
            tm.assert_series_equal(expected, other * td)

+    def test_ops_series_object(self):
+        # GH 13043
+        s = pd.Series([pd.Timestamp('2015-01-01', tz='US/Eastern'),
+                       pd.Timestamp('2015-01-01', tz='Asia/Tokyo')],
+                      name='xxx')
+        self.assertEqual(s.dtype, object)
+
+        exp = pd.Series([pd.Timestamp('2015-01-02', tz='US/Eastern'),
+                         pd.Timestamp('2015-01-02', tz='Asia/Tokyo')],
+                        name='xxx')
+        tm.assert_series_equal(s + pd.Timedelta('1 days'), exp)
+        tm.assert_series_equal(pd.Timedelta('1 days') + s, exp)
+
+        # object series & object series
+        s2 = pd.Series([pd.Timestamp('2015-01-03', tz='US/Eastern'),
+                        pd.Timestamp('2015-01-05', tz='Asia/Tokyo')],
+                       name='xxx')
+        self.assertEqual(s2.dtype, object)
+        exp = pd.Series([pd.Timedelta('2 days'), pd.Timedelta('4 days')],
+                        name='xxx')
+        tm.assert_series_equal(s2 - s, exp)
+        tm.assert_series_equal(s - s2, -exp)
+
+        s = pd.Series([pd.Timedelta('01:00:00'), pd.Timedelta('02:00:00')],
+                      name='xxx', dtype=object)
+        self.assertEqual(s.dtype, object)
+
+        exp = pd.Series([pd.Timedelta('01:30:00'), pd.Timedelta('02:30:00')],
+                        name='xxx')
+        tm.assert_series_equal(s + pd.Timedelta('00:30:00'), exp)
+        tm.assert_series_equal(pd.Timedelta('00:30:00') + s, exp)
+
    def test_compare_timedelta_series(self):
        # regresssion test for GH5963
        s = pd.Series([timedelta(days=1), timedelta(days=2)])
@@ -1159,12 +1190,6 @@ def test_append_numpy_bug_1681(self):
        result = a.append(c)
        self.assertTrue((result['B'] == td).all())

-    def test_astype(self):
-        rng = timedelta_range('1 days', periods=10)
-
-        result = rng.astype('i8')
-        self.assert_numpy_array_equal(result, rng.asi8)
-
    def test_fields(self):
        rng = timedelta_range('1 days, 10:11:12.100123456', periods=2,
                              freq='s')
@@ -1198,7 +1223,7 @@ def test_total_seconds(self):
                              freq='s')
        expt = [1 * 86400 + 10 * 3600 + 11 * 60 + 12 + 100123456. / 1e9,
                1 * 86400 + 10 * 3600 + 11 * 60 + 13 + 100123456. / 1e9]
-        assert_allclose(rng.total_seconds(), expt, atol=1e-10, rtol=0)
+        tm.assert_almost_equal(rng.total_seconds(), np.array(expt))

        # test Series
        s = Series(rng)
@@ -1213,14 +1238,14 @@ def test_total_seconds(self):

        # with both nat
        s = Series([np.nan, np.nan], dtype='timedelta64[ns]')
-        tm.assert_series_equal(s.dt.total_seconds(), Series(
-            [np.nan, np.nan], index=[0, 1]))
+        tm.assert_series_equal(s.dt.total_seconds(),
+                               Series([np.nan, np.nan], index=[0, 1]))

    def test_total_seconds_scalar(self):
        # GH 10939
        rng = Timedelta('1 days, 10:11:12.100123456')
        expt = 1 * 86400 + 10 * 3600 + 11 * 60 + 12 + 100123456. / 1e9
-        assert_allclose(rng.total_seconds(), expt, atol=1e-10, rtol=0)
+        tm.assert_almost_equal(rng.total_seconds(), expt)

        rng = Timedelta(np.nan)
        self.assertTrue(np.isnan(rng.total_seconds()))
@@ -1263,7 +1288,7 @@ def test_constructor(self):
    def test_constructor_coverage(self):
        rng = timedelta_range('1 days', periods=10.5)
        exp = timedelta_range('1 days', periods=10)
-        self.assertTrue(rng.equals(exp))
+        self.assert_index_equal(rng, exp)

        self.assertRaises(ValueError, TimedeltaIndex, start='1 days',
                          periods='foo', freq='D')
@@ -1277,16 +1302,16 @@ def test_constructor_coverage(self):
        gen = (timedelta(i) for i in range(10))
        result = TimedeltaIndex(gen)
        expected = TimedeltaIndex([timedelta(i) for i in range(10)])
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)

        # NumPy string array
        strings = np.array(['1 days', '2 days', '3 days'])
        result = TimedeltaIndex(strings)
        expected = to_timedelta([1, 2, 3], unit='d')
-        self.assertTrue(result.equals(expected))
+        self.assert_index_equal(result, expected)

        from_ints = TimedeltaIndex(expected.asi8)
-        self.assertTrue(from_ints.equals(expected))
+        self.assert_index_equal(from_ints, expected)

        # non-conforming freq
        self.assertRaises(ValueError, TimedeltaIndex,
@@ -1413,7 +1438,7 @@ def test_map(self):

        f = lambda x: x.days
        result = rng.map(f)
-        exp = [f(x) for x in rng]
+        exp = np.array([f(x) for x in rng], dtype=np.int64)
        self.assert_numpy_array_equal(result, exp)

    def test_misc_coverage(self):
@@ -1434,7 +1459,7 @@ def test_union(self):
        i2 = timedelta_range('3day', periods=5)
        result = i1.union(i2)
        expected = timedelta_range('1day', periods=7)
-        self.assert_numpy_array_equal(result, expected)
+        self.assert_index_equal(result, expected)

        i1 = Int64Index(np.arange(0, 20, 2))
        i2 = TimedeltaIndex(start='1 day', periods=10, freq='D')
@@ -1446,10 +1471,10 @@ def test_union_coverage(self):
        idx = TimedeltaIndex(['3d', '1d', '2d'])
        ordered = TimedeltaIndex(idx.sort_values(), freq='infer')
        result = ordered.union(idx)
-        self.assertTrue(result.equals(ordered))
+        self.assert_index_equal(result, ordered)

        result = ordered[:0].union(ordered)
-        self.assertTrue(result.equals(ordered))
+        self.assert_index_equal(result, ordered)
        self.assertEqual(result.freq, ordered.freq)

    def test_union_bug_1730(self):
@@ -1459,18 +1484,18 @@ def test_union_bug_1730(self):

        result = rng_a.union(rng_b)
        exp = TimedeltaIndex(sorted(set(list(rng_a)) | set(list(rng_b))))
-        self.assertTrue(result.equals(exp))
+        self.assert_index_equal(result, exp)

    def test_union_bug_1745(self):

        left = TimedeltaIndex(['1 day 15:19:49.695000'])
-        right = TimedeltaIndex(
-            ['2 day 13:04:21.322000', '1 day 15:27:24.873000',
-             '1 day 15:31:05.350000'])
+        right = TimedeltaIndex(['2 day 13:04:21.322000',
+                                '1 day 15:27:24.873000',
+                                '1 day 15:31:05.350000'])

        result = left.union(right)
        exp = TimedeltaIndex(sorted(set(list(left)) | set(list(right))))
-        self.assertTrue(result.equals(exp))
+        self.assert_index_equal(result, exp)

    def test_union_bug_4564(self):

@@ -1479,7 +1504,7 @@ def test_union_bug_4564(self):

        result = left.union(right)
        exp = TimedeltaIndex(sorted(set(list(left)) | set(list(right))))
-        self.assertTrue(result.equals(exp))
+        self.assert_index_equal(result, exp)

    def test_intersection_bug_1708(self):
        index_1 = timedelta_range('1 day', periods=4, freq='h')
@@ -1501,7 +1526,7 @@ def test_get_duplicates(self):

        result = idx.get_duplicates()
        ex = TimedeltaIndex(['2 day', '3day'])
-        self.assertTrue(result.equals(ex))
+        self.assert_index_equal(result, ex)

    def test_argmin_argmax(self):
        idx = TimedeltaIndex(['1 day 00:00:05', '1 day 00:00:01',
@@ -1521,11 +1546,13 @@ def test_sort_values(self):

        ordered, dexer = idx.sort_values(return_indexer=True)
        self.assertTrue(ordered.is_monotonic)
-        self.assert_numpy_array_equal(dexer, [1, 2, 0])
+        self.assert_numpy_array_equal(dexer,
+                                      np.array([1, 2, 0], dtype=np.int64))

        ordered, dexer = idx.sort_values(return_indexer=True, ascending=False)
        self.assertTrue(ordered[::-1].is_monotonic)
-        self.assert_numpy_array_equal(dexer, [0, 2, 1])
+        self.assert_numpy_array_equal(dexer,
+                                      np.array([0, 2, 1], dtype=np.int64))

    def test_insert(self):

@@ -1533,7 +1560,7 @@ def test_insert(self):
        result = idx.insert(2, timedelta(days=5))
        exp = TimedeltaIndex(['4day', '1day', '5day', '2day'], name='idx')
-        self.assertTrue(result.equals(exp))
+        self.assert_index_equal(result, exp)

        # insertion of non-datetime should coerce to object index
        result = idx.insert(1, 'inserted')
@@ -1569,7 +1596,7 @@ def test_insert(self):

        for n, d, expected in cases:
            result = idx.insert(n, d)
-            self.assertTrue(result.equals(expected))
+            self.assert_index_equal(result, expected)
            self.assertEqual(result.name, expected.name)
            self.assertEqual(result.freq, expected.freq)

@@ -1593,7 +1620,7 @@ def test_delete(self):
                 1: expected_1}
        for n, expected in compat.iteritems(cases):
            result = idx.delete(n)
-            self.assertTrue(result.equals(expected))
+            self.assert_index_equal(result, expected)
            self.assertEqual(result.name, expected.name)
            self.assertEqual(result.freq, expected.freq)

@@ -1620,12 +1647,12 @@ def test_delete_slice(self):
                 (3, 4, 5): expected_3_5}
        for n, expected in compat.iteritems(cases):
            result = idx.delete(n)
-            self.assertTrue(result.equals(expected))
+            self.assert_index_equal(result, expected)
            self.assertEqual(result.name, expected.name)
            self.assertEqual(result.freq, expected.freq)

            result = idx.delete(slice(n[0], n[-1] + 1))
-            self.assertTrue(result.equals(expected))
+            self.assert_index_equal(result, expected)
            self.assertEqual(result.name, expected.name)
            self.assertEqual(result.freq, expected.freq)

@@ -1639,7 +1666,7 @@ def test_take(self):
        taken2 = idx[[2, 4, 10]]

        for taken in [taken1, taken2]:
-            self.assertTrue(taken.equals(expected))
+            self.assert_index_equal(taken, expected)
            tm.assertIsInstance(taken, TimedeltaIndex)
            self.assertIsNone(taken.freq)
            self.assertEqual(taken.name, expected.name)
@@ -1686,7 +1713,7 @@ def test_isin(self):
        self.assertTrue(result.all())

        assert_almost_equal(index.isin([index[2], 5]),
-                            [False, False, True, False])
+                            np.array([False, False, True, False]))

    def test_does_not_convert_mixed_integer(self):
        df = tm.makeCustomDataframe(10, 10,
@@ -1723,18 +1750,18 @@ def test_factorize(self):

        arr, idx = idx1.factorize()
        self.assert_numpy_array_equal(arr, exp_arr)
-        self.assertTrue(idx.equals(exp_idx))
+        self.assert_index_equal(idx, exp_idx)

        arr, idx = idx1.factorize(sort=True)
        self.assert_numpy_array_equal(arr, exp_arr)
-        self.assertTrue(idx.equals(exp_idx))
+        self.assert_index_equal(idx, exp_idx)

        # freq must be preserved
        idx3 = timedelta_range('1 day', periods=4, freq='s')
        exp_arr = np.array([0, 1, 2, 3])
        arr, idx = idx3.factorize()
        self.assert_numpy_array_equal(arr, exp_arr)
-        self.assertTrue(idx.equals(idx3))
+        self.assert_index_equal(idx, idx3)


class TestSlicing(tm.TestCase):
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 3d8e389ba30f2..f6d80f7ee410b 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -5,7 +5,6 @@
import warnings
from datetime import datetime, time, timedelta
from numpy.random import rand
-from numpy.testing.decorators import slow

import nose
import numpy as np
@@ -31,7 +30,7 @@
from pandas.tslib import iNaT
from pandas.util.testing import (
    assert_frame_equal, assert_series_equal, assert_almost_equal,
-    _skip_if_has_locale)
+    _skip_if_has_locale, slow)

randn = np.random.randn

@@ -60,7 +59,7 @@ def test_index_unique(self):
        expected = DatetimeIndex([datetime(2000, 1, 2), datetime(2000, 1, 3),
                                  datetime(2000, 1, 4), datetime(2000, 1, 5)])
        self.assertEqual(uniques.dtype, 'M8[ns]')  # sanity
-        self.assertTrue(uniques.equals(expected))
+        tm.assert_index_equal(uniques, expected)
        self.assertEqual(self.dups.index.nunique(), 4)

        # #2563
@@ -69,22 +68,23 @@ def test_index_unique(self):
        dups_local = self.dups.index.tz_localize('US/Eastern')
        dups_local.name = 'foo'
        result = dups_local.unique()
-        expected = DatetimeIndex(expected).tz_localize('US/Eastern')
+        expected = DatetimeIndex(expected, name='foo')
+        expected = expected.tz_localize('US/Eastern')
        self.assertTrue(result.tz is not None)
        self.assertEqual(result.name, 'foo')
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

        # NaT, note this is excluded
        arr = [1370745748 + t for t in range(20)] + [iNaT]
        idx = DatetimeIndex(arr * 3)
-        self.assertTrue(idx.unique().equals(DatetimeIndex(arr)))
+        tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
        self.assertEqual(idx.nunique(), 20)
        self.assertEqual(idx.nunique(dropna=False), 21)

        arr = [Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t)
               for t in range(20)] + [NaT]
        idx = DatetimeIndex(arr * 3)
-        self.assertTrue(idx.unique().equals(DatetimeIndex(arr)))
+        tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
        self.assertEqual(idx.nunique(), 20)
        self.assertEqual(idx.nunique(dropna=False), 21)

@@ -285,12 +285,12 @@ def test_recreate_from_data(self):
        for f in freqs:
            org = DatetimeIndex(start='2001/02/01 09:00', freq=f, periods=1)
            idx = DatetimeIndex(org, freq=f)
-            self.assertTrue(idx.equals(org))
+            tm.assert_index_equal(idx, org)

            org = DatetimeIndex(start='2001/02/01 09:00', freq=f,
                                tz='US/Pacific', periods=1)
            idx = DatetimeIndex(org, freq=f, tz='US/Pacific')
-            self.assertTrue(idx.equals(org))
+            tm.assert_index_equal(idx, org)


def assert_range_equal(left, right):
@@ -762,6 +762,15 @@ def test_to_datetime_unit(self):
        with self.assertRaises(ValueError):
            to_datetime([1, 2, 111111111], unit='D')

+        # coerce we can process
+        expected = DatetimeIndex([Timestamp('1970-01-02'),
+                                  Timestamp('1970-01-03')] + ['NaT'] * 1)
+        result = to_datetime([1, 2, 'foo'], unit='D', errors='coerce')
+        tm.assert_index_equal(result, expected)
+
+        result = to_datetime([1, 2, 111111111], unit='D', errors='coerce')
+        tm.assert_index_equal(result, expected)
+
    def test_series_ctor_datetime64(self):
        rng = date_range('1/1/2000 00:00:00', '1/1/2000 1:59:50', freq='10s')
        dates = np.asarray(rng)
@@ -866,7 +875,7 @@ def test_string_na_nat_conversion(self):

        result2 = to_datetime(strings)
        tm.assertIsInstance(result2, DatetimeIndex)
-        tm.assert_numpy_array_equal(result, result2)
+        tm.assert_numpy_array_equal(result, result2.values)

        malformed = np.array(['1/100/2000', np.nan], dtype=object)

@@ -1057,7 +1066,7 @@ def test_to_datetime_list_of_integers(self):

        result = DatetimeIndex(ints)

-        self.assertTrue(rng.equals(result))
+        tm.assert_index_equal(rng, result)

    def test_to_datetime_freq(self):
        xp = bdate_range('2000-1-1', periods=10, tz='UTC')
@@ -1101,8 +1110,8 @@ def test_asfreq_keep_index_name(self):
        index = pd.date_range('20130101', periods=20, name=index_name)
        df = pd.DataFrame([x for x in range(20)], columns=['foo'], index=index)

-        tm.assert_equal(index_name, df.index.name)
-        tm.assert_equal(index_name, df.asfreq('10D').index.name)
+        self.assertEqual(index_name, df.index.name)
+        self.assertEqual(index_name, df.asfreq('10D').index.name)

    def test_promote_datetime_date(self):
        rng = date_range('1/1/2000', periods=20)
@@ -1154,15 +1163,15 @@ def test_date_range_gen_error(self):
    def test_date_range_negative_freq(self):
        # GH 11018
        rng = date_range('2011-12-31', freq='-2A', periods=3)
-        exp = pd.DatetimeIndex(
-            ['2011-12-31', '2009-12-31', '2007-12-31'], freq='-2A')
-        self.assert_index_equal(rng, exp)
+        exp = pd.DatetimeIndex(['2011-12-31', '2009-12-31',
+                                '2007-12-31'], freq='-2A')
+        tm.assert_index_equal(rng, exp)
        self.assertEqual(rng.freq, '-2A')

        rng = date_range('2011-01-31', freq='-2M', periods=3)
-        exp = pd.DatetimeIndex(
-            ['2011-01-31', '2010-11-30', '2010-09-30'], freq='-2M')
-        self.assert_index_equal(rng, exp)
+        exp = pd.DatetimeIndex(['2011-01-31', '2010-11-30',
+                                '2010-09-30'], freq='-2M')
+        tm.assert_index_equal(rng, exp)
        self.assertEqual(rng.freq, '-2M')

    def test_date_range_bms_bug(self):
@@ -1515,7 +1524,7 @@ def test_normalize(self):
        result = rng.normalize()
        expected = date_range('1/1/2000', periods=10, freq='D')
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

        rng_ns = pd.DatetimeIndex(np.array([1380585623454345752,
                                            1380585612343234312]).astype(
                                                "datetime64[ns]"))
        rng_ns_normalized = rng_ns.normalize()
        expected = pd.DatetimeIndex(np.array([1380585600000000000,
                                              1380585600000000000]).astype(
                                                  "datetime64[ns]"))
-        self.assertTrue(rng_ns_normalized.equals(expected))
+        tm.assert_index_equal(rng_ns_normalized, expected)

        self.assertTrue(result.is_normalized)
        self.assertFalse(rng.is_normalized)
@@ -1541,7 +1550,7 @@ def test_to_period(self):
        pts = ts.to_period('M')
        exp.index = exp.index.asfreq('M')
-        self.assertTrue(pts.index.equals(exp.index.asfreq('M')))
+        tm.assert_index_equal(pts.index, exp.index.asfreq('M'))
        assert_series_equal(pts, exp)

        # GH 7606 without freq
@@ -1599,7 +1608,7 @@ def test_to_period_tz_pytz(self):
        expected = ts[0].to_period()

        self.assertEqual(result, expected)
-        self.assertTrue(ts.to_period().equals(xp))
+        tm.assert_index_equal(ts.to_period(), xp)

        ts = date_range('1/1/2000', '4/1/2000', tz=UTC)

@@ -1607,7 +1616,7 @@ def test_to_period_tz_pytz(self):
        expected = ts[0].to_period()

        self.assertEqual(result, expected)
-        self.assertTrue(ts.to_period().equals(xp))
+        tm.assert_index_equal(ts.to_period(), xp)

        ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())

@@ -1615,7 +1624,7 @@ def test_to_period_tz_pytz(self):
        expected = ts[0].to_period()

        self.assertEqual(result, expected)
-        self.assertTrue(ts.to_period().equals(xp))
+        tm.assert_index_equal(ts.to_period(), xp)

    def test_to_period_tz_explicit_pytz(self):
        tm._skip_if_no_pytz()
@@ -1630,7 +1639,7 @@ def test_to_period_tz_explicit_pytz(self):
        expected = ts[0].to_period()

        self.assertTrue(result == expected)
-        self.assertTrue(ts.to_period().equals(xp))
+        tm.assert_index_equal(ts.to_period(), xp)

        ts = date_range('1/1/2000', '4/1/2000', tz=pytz.utc)

@@ -1638,7 +1647,7 @@ def test_to_period_tz_explicit_pytz(self):
        expected = ts[0].to_period()

        self.assertTrue(result == expected)
-        self.assertTrue(ts.to_period().equals(xp))
+        tm.assert_index_equal(ts.to_period(), xp)

        ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())

@@ -1646,7 +1655,7 @@ def test_to_period_tz_explicit_pytz(self):
        expected = ts[0].to_period()

        self.assertTrue(result == expected)
-        self.assertTrue(ts.to_period().equals(xp))
+        tm.assert_index_equal(ts.to_period(), xp)

    def test_to_period_tz_dateutil(self):
        tm._skip_if_no_dateutil()
@@ -1661,7 +1670,7 @@ def test_to_period_tz_dateutil(self):
        expected = ts[0].to_period()

        self.assertTrue(result == expected)
-        self.assertTrue(ts.to_period().equals(xp))
+        tm.assert_index_equal(ts.to_period(), xp)

        ts = date_range('1/1/2000', '4/1/2000', tz=dateutil.tz.tzutc())

@@ -1669,7 +1678,7 @@ def test_to_period_tz_dateutil(self):
        expected = ts[0].to_period()

        self.assertTrue(result == expected)
-        self.assertTrue(ts.to_period().equals(xp))
+        tm.assert_index_equal(ts.to_period(), xp)

        ts = date_range('1/1/2000', '4/1/2000', tz=tzlocal())

@@ -1677,7 +1686,7 @@ def test_to_period_tz_dateutil(self):
        expected = ts[0].to_period()

        self.assertTrue(result == expected)
-        self.assertTrue(ts.to_period().equals(xp))
+        tm.assert_index_equal(ts.to_period(), xp)

    def test_frame_to_period(self):
        K = 5
@@ -1694,7 +1703,7 @@ def test_frame_to_period(self):
        assert_frame_equal(pts, exp)

        pts = df.to_period('M')
-        self.assertTrue(pts.index.equals(exp.index.asfreq('M')))
+        tm.assert_index_equal(pts.index, exp.index.asfreq('M'))

        df = df.T
        pts = df.to_period(axis=1)
@@ -1703,7 +1712,7 @@ def test_frame_to_period(self):
        assert_frame_equal(pts, exp)

        pts = df.to_period('M', axis=1)
-        self.assertTrue(pts.columns.equals(exp.columns.asfreq('M')))
+        tm.assert_index_equal(pts.columns, exp.columns.asfreq('M'))

        self.assertRaises(ValueError, df.to_period, axis=2)

@@ -1791,11 +1800,11 @@ def test_datetimeindex_integers_shift(self):

        result = rng + 5
        expected = rng.shift(5)
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

        result = rng - 5
        expected = rng.shift(-5)
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

    def test_astype_object(self):
        # NumPy 1.6.1 weak ns support
@@ -1804,7 +1813,8 @@ def test_astype_object(self):

        casted = rng.astype('O')
        exp_values = list(rng)
-        self.assert_numpy_array_equal(casted, exp_values)
+        tm.assert_index_equal(casted, Index(exp_values, dtype=np.object_))
+        self.assertEqual(casted.tolist(), exp_values)

    def test_catch_infinite_loop(self):
        offset = datetools.DateOffset(minute=5)
@@ -1820,15 +1830,15 @@ def test_append_concat(self):
        result = ts.append(ts)
        result_df = df.append(df)
        ex_index = DatetimeIndex(np.tile(rng.values, 2))
-        self.assertTrue(result.index.equals(ex_index))
-        self.assertTrue(result_df.index.equals(ex_index))
+        tm.assert_index_equal(result.index, ex_index)
+        tm.assert_index_equal(result_df.index, ex_index)

        appended = rng.append(rng)
-        self.assertTrue(appended.equals(ex_index))
+        tm.assert_index_equal(appended, ex_index)

        appended = rng.append([rng, rng])
        ex_index = DatetimeIndex(np.tile(rng.values, 3))
-        self.assertTrue(appended.equals(ex_index))
+        tm.assert_index_equal(appended, ex_index)

        # different index names
        rng1 = rng.copy()
@@ -1855,11 +1865,11 @@ def test_append_concat_tz(self):

        result = ts.append(ts2)
        result_df = df.append(df2)
-        self.assertTrue(result.index.equals(rng3))
-        self.assertTrue(result_df.index.equals(rng3))
+        tm.assert_index_equal(result.index, rng3)
+        tm.assert_index_equal(result_df.index, rng3)

        appended = rng.append(rng2)
-        self.assertTrue(appended.equals(rng3))
+        tm.assert_index_equal(appended, rng3)

    def test_append_concat_tz_explicit_pytz(self):
        # GH 2938
@@ -1879,11 +1889,11 @@ def test_append_concat_tz_explicit_pytz(self):

        result = ts.append(ts2)
        result_df = df.append(df2)
-        self.assertTrue(result.index.equals(rng3))
-        self.assertTrue(result_df.index.equals(rng3))
+        tm.assert_index_equal(result.index, rng3)
+        tm.assert_index_equal(result_df.index, rng3)

        appended = rng.append(rng2)
-        self.assertTrue(appended.equals(rng3))
+        tm.assert_index_equal(appended, rng3)

    def test_append_concat_tz_dateutil(self):
        # GH 2938
@@ -1901,11 +1911,11 @@ def test_append_concat_tz_dateutil(self):

        result = ts.append(ts2)
        result_df = df.append(df2)
-        self.assertTrue(result.index.equals(rng3))
-        self.assertTrue(result_df.index.equals(rng3))
+        tm.assert_index_equal(result.index, rng3)
+        tm.assert_index_equal(result_df.index, rng3)

        appended = rng.append(rng2)
-        self.assertTrue(appended.equals(rng3))
+        tm.assert_index_equal(appended, rng3)

    def test_set_dataframe_column_ns_dtype(self):
        x = DataFrame([datetime.now(), datetime.now()])
@@ -2283,18 +2293,162 @@ def test_to_datetime_tz_psycopg2(self):
                                 dtype='datetime64[ns, UTC]')
        tm.assert_index_equal(result, expected)

+    def test_datetime_bool(self):
+        # GH13176
+        with self.assertRaises(TypeError):
+            to_datetime(False)
+        self.assertTrue(to_datetime(False, errors="coerce") is tslib.NaT)
+        self.assertEqual(to_datetime(False, errors="ignore"), False)
+        with self.assertRaises(TypeError):
+            to_datetime(True)
+        self.assertTrue(to_datetime(True, errors="coerce") is tslib.NaT)
+        self.assertEqual(to_datetime(True, errors="ignore"), True)
+        with self.assertRaises(TypeError):
+            to_datetime([False, datetime.today()])
+        with self.assertRaises(TypeError):
+            to_datetime(['20130101', True])
+        tm.assert_index_equal(to_datetime([0, False, tslib.NaT, 0.0],
+                                          errors="coerce"),
+                              DatetimeIndex([to_datetime(0), tslib.NaT,
+                                             tslib.NaT, to_datetime(0)]))
+
+    def test_datetime_invalid_datatype(self):
+        # GH13176
+
+        with self.assertRaises(TypeError):
+            pd.to_datetime(bool)
+        with self.assertRaises(TypeError):
+            pd.to_datetime(pd.to_datetime)
+
+    def test_unit(self):
+        # GH 11758
+        # test proper behavior with errors
+
+        with self.assertRaises(ValueError):
+            to_datetime([1], unit='D', format='%Y%m%d')
+
+        values = [11111111, 1, 1.0, tslib.iNaT, pd.NaT, np.nan,
+                  'NaT', '']
+        result = to_datetime(values, unit='D', errors='ignore')
+        expected = Index([11111111, Timestamp('1970-01-02'),
+                          Timestamp('1970-01-02'), pd.NaT,
+                          pd.NaT, pd.NaT, pd.NaT, pd.NaT],
+                         dtype=object)
+        tm.assert_index_equal(result, expected)
+
+        result = to_datetime(values, unit='D', errors='coerce')
+        expected = DatetimeIndex(['NaT', '1970-01-02', '1970-01-02',
+                                  'NaT', 'NaT', 'NaT', 'NaT', 'NaT'])
+        tm.assert_index_equal(result, expected)
+
+        with self.assertRaises(tslib.OutOfBoundsDatetime):
+            to_datetime(values, unit='D', errors='raise')
+
+        values = [1420043460000, tslib.iNaT, pd.NaT, np.nan, 'NaT']
+
+        result = to_datetime(values, errors='ignore', unit='s')
+        expected = Index([1420043460000, pd.NaT, pd.NaT,
+                          pd.NaT, pd.NaT], dtype=object)
+        tm.assert_index_equal(result, expected)
+
+        result = to_datetime(values, errors='coerce', unit='s')
+        expected = DatetimeIndex(['NaT', 'NaT', 'NaT', 'NaT', 'NaT'])
+        tm.assert_index_equal(result, expected)
+
+        with self.assertRaises(tslib.OutOfBoundsDatetime):
+            to_datetime(values, errors='raise', unit='s')
+
+        # if we have a string, then we raise a ValueError
+        # and NOT an OutOfBoundsDatetime
+        for val in ['foo', Timestamp('20130101')]:
+            try:
+                to_datetime(val, errors='raise', unit='s')
+            except tslib.OutOfBoundsDatetime:
+                raise AssertionError("incorrect exception raised")
+            except ValueError:
+                pass
+
+    def test_unit_consistency(self):
+
+        # consistency of conversions
+        expected = Timestamp('1970-05-09 14:25:11')
+        result = pd.to_datetime(11111111, unit='s', errors='raise')
+        self.assertEqual(result, expected)
+        self.assertIsInstance(result, Timestamp)
+
+        result = pd.to_datetime(11111111, unit='s', errors='coerce')
+        self.assertEqual(result, expected)
+        self.assertIsInstance(result, Timestamp)
+
+        result = pd.to_datetime(11111111, unit='s', errors='ignore')
+        self.assertEqual(result, expected)
+        self.assertIsInstance(result, Timestamp)
+
+    def test_unit_with_numeric(self):
+
+        # GH 13180
+        # coercions from floats/ints are ok
+        expected = DatetimeIndex(['2015-06-19 05:33:20',
+                                  '2015-05-27 22:33:20'])
+        arr1 = [1.434692e+18, 1.432766e+18]
+        arr2 = np.array(arr1).astype('int64')
+        for errors in ['ignore', 'raise', 'coerce']:
+            result = pd.to_datetime(arr1, errors=errors)
+            tm.assert_index_equal(result, expected)
+
+            result = pd.to_datetime(arr2, errors=errors)
+            tm.assert_index_equal(result, expected)
+
+        # but we want to make sure that we are coercing
+        # if we have ints/strings
+        expected = DatetimeIndex(['NaT',
+                                  '2015-06-19 05:33:20',
+                                  '2015-05-27 22:33:20'])
+        arr = ['foo', 1.434692e+18, 1.432766e+18]
+        result = pd.to_datetime(arr, errors='coerce')
+        tm.assert_index_equal(result, expected)
+
+        expected = DatetimeIndex(['2015-06-19 05:33:20',
+                                  '2015-05-27 22:33:20',
+                                  'NaT',
+                                  'NaT'])
+        arr = [1.434692e+18, 1.432766e+18, 'foo', 'NaT']
+        result = pd.to_datetime(arr, errors='coerce')
+        tm.assert_index_equal(result, expected)
+
+    def test_unit_mixed(self):
+
+        # mixed integers/datetimes
+        expected = DatetimeIndex(['2013-01-01', 'NaT', 'NaT'])
+        arr = [pd.Timestamp('20130101'), 1.434692e+18, 1.432766e+18]
+        result = pd.to_datetime(arr, errors='coerce')
+        tm.assert_index_equal(result, expected)
+
+        with self.assertRaises(ValueError):
+            pd.to_datetime(arr, errors='raise')
+
+        expected = DatetimeIndex(['NaT',
+                                  'NaT',
+                                  '2013-01-01'])
+        arr = [1.434692e+18, 1.432766e+18, pd.Timestamp('20130101')]
+        result = pd.to_datetime(arr, errors='coerce')
+        tm.assert_index_equal(result, expected)
+
+        with self.assertRaises(ValueError):
+            pd.to_datetime(arr, errors='raise')
+
    def test_index_to_datetime(self):
        idx = Index(['1/1/2000', '1/2/2000', '1/3/2000'])

        result = idx.to_datetime()
        expected = DatetimeIndex(datetools.to_datetime(idx.values))
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

        today = datetime.today()
        idx = Index([today], dtype=object)
        result = idx.to_datetime()
        expected = DatetimeIndex([today])
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

    def test_dataframe(self):

@@ -2437,34 +2591,6 @@ def test_append_join_nondatetimeindex(self):
        # it works
        rng.join(idx, how='outer')

-    def test_astype(self):
-        rng = date_range('1/1/2000', periods=10)
-
-        result = rng.astype('i8')
-        self.assert_numpy_array_equal(result, rng.asi8)
-
-        # with tz
-        rng = date_range('1/1/2000', periods=10, tz='US/Eastern')
-        result = rng.astype('datetime64[ns]')
-        expected = (date_range('1/1/2000', periods=10,
-                               tz='US/Eastern')
-                    .tz_convert('UTC').tz_localize(None))
-        tm.assert_index_equal(result, expected)
-
-        # BUG#10442 : testing astype(str) is correct for Series/DatetimeIndex
-        result = pd.Series(pd.date_range('2012-01-01', periods=3)).astype(str)
-        expected = pd.Series(
-            ['2012-01-01', '2012-01-02', '2012-01-03'], dtype=object)
-        tm.assert_series_equal(result, expected)
-
-        result = Series(pd.date_range('2012-01-01', periods=3,
-                                      tz='US/Eastern')).astype(str)
-        expected = Series(['2012-01-01 00:00:00-05:00',
-                           '2012-01-02 00:00:00-05:00',
-                           '2012-01-03 00:00:00-05:00'],
-                          dtype=object)
-        tm.assert_series_equal(result, expected)
-
    def test_to_period_nofreq(self):
        idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-04'])
        self.assertRaises(ValueError, idx.to_period)
@@ -2472,14 +2598,14 @@ def test_to_period_nofreq(self):
        idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'],
                            freq='infer')
        self.assertEqual(idx.freqstr, 'D')
-        expected = pd.PeriodIndex(
-            ['2000-01-01', '2000-01-02', '2000-01-03'], freq='D')
-        self.assertTrue(idx.to_period().equals(expected))
+        expected = pd.PeriodIndex(['2000-01-01', '2000-01-02',
+                                   '2000-01-03'], freq='D')
+        tm.assert_index_equal(idx.to_period(), expected)

        # GH 7606
        idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'])
        self.assertEqual(idx.freqstr, None)
-        self.assertTrue(idx.to_period().equals(expected))
+        tm.assert_index_equal(idx.to_period(), expected)

    def test_000constructor_resolution(self):
        # 2252
@@ -2491,7 +2617,7 @@ def test_constructor_coverage(self):
        rng = date_range('1/1/2000', periods=10.5)
        exp = date_range('1/1/2000', periods=10)
-        self.assertTrue(rng.equals(exp))
+        tm.assert_index_equal(rng, exp)

        self.assertRaises(ValueError, DatetimeIndex, start='1/1/2000',
                          periods='foo', freq='D')
@@ -2506,25 +2632,25 @@ def test_constructor_coverage(self):
        result = DatetimeIndex(gen)
        expected = DatetimeIndex([datetime(2000, 1, 1) + timedelta(i)
                                  for i in range(10)])
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

        # NumPy string array
        strings = np.array(['2000-01-01', '2000-01-02', '2000-01-03'])
        result = DatetimeIndex(strings)
        expected = DatetimeIndex(strings.astype('O'))
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

        from_ints = DatetimeIndex(expected.asi8)
-        self.assertTrue(from_ints.equals(expected))
+        tm.assert_index_equal(from_ints, expected)

        # string with NaT
        strings = np.array(['2000-01-01', '2000-01-02', 'NaT'])
        result = DatetimeIndex(strings)
        expected = DatetimeIndex(strings.astype('O'))
-        self.assertTrue(result.equals(expected))
+        tm.assert_index_equal(result, expected)

        from_ints = DatetimeIndex(expected.asi8)
-        self.assertTrue(from_ints.equals(expected))
+        tm.assert_index_equal(from_ints, expected)

        # non-conforming
        self.assertRaises(ValueError, DatetimeIndex,
@@ -2591,17 +2717,15 @@ def test_constructor_datetime64_tzformat(self):

    def test_constructor_dtype(self):

        # passing a dtype with a tz should localize
-        idx = DatetimeIndex(['2013-01-01',
-                             '2013-01-02'],
+        idx = DatetimeIndex(['2013-01-01', '2013-01-02'],
                            dtype='datetime64[ns, US/Eastern]')
        expected = DatetimeIndex(['2013-01-01', '2013-01-02']
                                 ).tz_localize('US/Eastern')
-        self.assertTrue(idx.equals(expected))
+        tm.assert_index_equal(idx, expected)

-        idx = DatetimeIndex(['2013-01-01',
-                             '2013-01-02'],
+        idx = DatetimeIndex(['2013-01-01', '2013-01-02'],
                            tz='US/Eastern')
-        self.assertTrue(idx.equals(expected))
+        tm.assert_index_equal(idx, expected)

        # if we already have a tz and its not the same, then raise
        idx = DatetimeIndex(['2013-01-01', '2013-01-02'],
@@ -2620,7 +2744,7 @@ def test_constructor_dtype(self):
                          idx, tz='CET',
                          dtype='datetime64[ns, US/Eastern]'))

        result = DatetimeIndex(idx, dtype='datetime64[ns, US/Eastern]')
-        self.assertTrue(idx.equals(result))
+        tm.assert_index_equal(idx, result)

    def test_constructor_name(self):
        idx = DatetimeIndex(start='2000-01-01', periods=1, freq='A',
@@ -2736,7 +2860,7 @@ def test_map(self):

        f = lambda x: x.strftime('%Y%m%d')
        result = rng.map(f)
-        exp = [f(x) for x in rng]
+        exp = np.array([f(x) for x in rng], dtype='<U8')
        tm.assert_almost_equal(result, exp)

    def test_iteration_preserves_tz(self):
@@ -2785,10 +2909,10 @@ def test_union_coverage(self):
        idx = DatetimeIndex(['2000-01-03', '2000-01-01', '2000-01-02'])
        ordered = DatetimeIndex(idx.sort_values(), freq='infer')
        result = ordered.union(idx)
-        self.assertTrue(result.equals(ordered))
+        tm.assert_index_equal(result, ordered)

        result = ordered[:0].union(ordered)
-        self.assertTrue(result.equals(ordered))
+        tm.assert_index_equal(result, ordered)
        self.assertEqual(result.freq, ordered.freq)

    def test_union_bug_1730(self):
@@ -2797,17 +2921,17 @@ def test_union_bug_1730(self):

        result = rng_a.union(rng_b)
        exp = DatetimeIndex(sorted(set(list(rng_a)) | set(list(rng_b))))
-        self.assertTrue(result.equals(exp))
+        tm.assert_index_equal(result, exp)

    def test_union_bug_1745(self):
        left = DatetimeIndex(['2012-05-11 15:19:49.695000'])
-        right = DatetimeIndex(
-            ['2012-05-29 13:04:21.322000', '2012-05-11 15:27:24.873000',
-             '2012-05-11 15:31:05.350000'])
+        right = DatetimeIndex(['2012-05-29 13:04:21.322000',
+                               '2012-05-11 15:27:24.873000',
+                               '2012-05-11 15:31:05.350000'])

        result = left.union(right)
        exp = DatetimeIndex(sorted(set(list(left)) | set(list(right))))
-        self.assertTrue(result.equals(exp))
+        tm.assert_index_equal(result, exp)

    def test_union_bug_4564(self):
        from pandas import DateOffset
@@ -2816,7 +2940,7 @@ def test_union_bug_4564(self):

        result = left.union(right)
        exp = DatetimeIndex(sorted(set(list(left)) | set(list(right))))
-        self.assertTrue(result.equals(exp))
+        tm.assert_index_equal(result, exp)

    def test_union_freq_both_none(self):
        # GH11086
@@ -2836,7 +2960,7 @@ def test_union_dataframe_index(self):
        df = DataFrame({'s1': s1, 's2': s2})

        exp = pd.date_range('1/1/1980', '1/1/2012', freq='MS')
-        self.assert_index_equal(df.index, exp)
+        tm.assert_index_equal(df.index, exp)

    def test_intersection_bug_1708(self):
        from pandas import DateOffset
@@ -2867,7 +2991,7 @@ def test_intersection(self):
        for (rng, expected) in [(rng2, expected2), (rng3, expected3),
                                (rng4, expected4)]:
            result = base.intersection(rng)
-            self.assertTrue(result.equals(expected))
+            tm.assert_index_equal(result, expected)
            self.assertEqual(result.name, expected.name)
            self.assertEqual(result.freq, expected.freq)
            self.assertEqual(result.tz, expected.tz)
@@ -2897,7 +3021,7 @@ def test_intersection(self):
        for (rng, expected) in [(rng2, expected2), (rng3, expected3),
                                (rng4, expected4)]:
            result = base.intersection(rng)
-            self.assertTrue(result.equals(expected))
+            tm.assert_index_equal(result, expected)
            self.assertEqual(result.name, expected.name)
            self.assertIsNone(result.freq)
            self.assertEqual(result.tz, expected.tz)
@@ -3044,7 +3168,7 @@ def test_get_duplicates(self):

        result = idx.get_duplicates()
        ex = DatetimeIndex(['2000-01-02', '2000-01-03'])
-        self.assertTrue(result.equals(ex))
+        tm.assert_index_equal(result, ex)

    def test_argmin_argmax(self):
        idx = DatetimeIndex(['2000-01-04', '2000-01-01', '2000-01-02'])
@@ -3062,11 +3186,13 @@ def test_sort_values(self):

        ordered, dexer = idx.sort_values(return_indexer=True)
        self.assertTrue(ordered.is_monotonic)
-        self.assert_numpy_array_equal(dexer, [1, 2, 0])
+        self.assert_numpy_array_equal(dexer,
+                                      np.array([1, 2, 0], dtype=np.intp))

        ordered, dexer = idx.sort_values(return_indexer=True,
                                         ascending=False)
        self.assertTrue(ordered[::-1].is_monotonic)
-
self.assert_numpy_array_equal(dexer, [0, 2, 1]) + self.assert_numpy_array_equal(dexer, + np.array([0, 2, 1], dtype=np.intp)) def test_round(self): @@ -3143,7 +3269,7 @@ def test_insert(self): result = idx.insert(2, datetime(2000, 1, 5)) exp = DatetimeIndex(['2000-01-04', '2000-01-01', '2000-01-05', '2000-01-02'], name='idx') - self.assertTrue(result.equals(exp)) + tm.assert_index_equal(result, exp) # insertion of non-datetime should coerce to object index result = idx.insert(1, 'inserted') @@ -3180,7 +3306,7 @@ def test_insert(self): for n, d, expected in cases: result = idx.insert(n, d) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertEqual(result.freq, expected.freq) @@ -3188,7 +3314,7 @@ def test_insert(self): result = idx.insert(3, datetime(2000, 1, 2)) expected = DatetimeIndex(['2000-01-31', '2000-02-29', '2000-03-31', '2000-01-02'], name='idx', freq=None) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertTrue(result.freq is None) @@ -3219,7 +3345,7 @@ def test_insert(self): pytz.timezone(tz).localize(datetime(2000, 1, 1, 15))]: result = idx.insert(6, d) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertEqual(result.freq, expected.freq) self.assertEqual(result.tz, expected.tz) @@ -3234,7 +3360,7 @@ def test_insert(self): for d in [pd.Timestamp('2000-01-01 10:00', tz=tz), pytz.timezone(tz).localize(datetime(2000, 1, 1, 10))]: result = idx.insert(6, d) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertTrue(result.freq is None) self.assertEqual(result.tz, expected.tz) @@ -3259,7 +3385,7 @@ def test_delete(self): 1: expected_1} for n, expected in compat.iteritems(cases): result = idx.delete(n) - 
self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertEqual(result.freq, expected.freq) @@ -3274,7 +3400,7 @@ def test_delete(self): expected = date_range(start='2000-01-01 10:00', periods=9, freq='H', name='idx', tz=tz) result = idx.delete(0) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertEqual(result.freqstr, 'H') self.assertEqual(result.tz, expected.tz) @@ -3282,7 +3408,7 @@ def test_delete(self): expected = date_range(start='2000-01-01 09:00', periods=9, freq='H', name='idx', tz=tz) result = idx.delete(-1) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertEqual(result.freqstr, 'H') self.assertEqual(result.tz, expected.tz) @@ -3306,12 +3432,12 @@ def test_delete_slice(self): (3, 4, 5): expected_3_5} for n, expected in compat.iteritems(cases): result = idx.delete(n) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertEqual(result.freq, expected.freq) result = idx.delete(slice(n[0], n[-1] + 1)) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertEqual(result.freq, expected.freq) @@ -3322,7 +3448,7 @@ def test_delete_slice(self): result = ts.drop(ts.index[:5]).index expected = pd.date_range('2000-01-01 14:00', periods=5, freq='H', name='idx', tz=tz) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertEqual(result.freq, expected.freq) self.assertEqual(result.tz, expected.tz) @@ -3333,7 +3459,7 @@ def test_delete_slice(self): '2000-01-01 13:00', '2000-01-01 15:00', '2000-01-01 17:00'], freq=None, name='idx', tz=tz) - 
self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) self.assertEqual(result.name, expected.name) self.assertEqual(result.freq, expected.freq) self.assertEqual(result.tz, expected.tz) @@ -3352,7 +3478,7 @@ def test_take(self): taken2 = idx[[5, 6, 8, 12]] for taken in [taken1, taken2]: - self.assertTrue(taken.equals(expected)) + tm.assert_index_equal(taken, expected) tm.assertIsInstance(taken, DatetimeIndex) self.assertIsNone(taken.freq) self.assertEqual(taken.tz, expected.tz) @@ -3455,14 +3581,14 @@ def test_isin(self): self.assertTrue(result.all()) assert_almost_equal(index.isin([index[2], 5]), - [False, False, True, False]) + np.array([False, False, True, False])) def test_union(self): i1 = Int64Index(np.arange(0, 20, 2)) i2 = Int64Index(np.arange(10, 30, 2)) result = i1.union(i2) expected = Int64Index(np.arange(0, 30, 2)) - self.assert_numpy_array_equal(result, expected) + tm.assert_index_equal(result, expected) def test_union_with_DatetimeIndex(self): i1 = Int64Index(np.arange(0, 20, 2)) @@ -3545,11 +3671,11 @@ def test_factorize(self): arr, idx = idx1.factorize() self.assert_numpy_array_equal(arr, exp_arr) - self.assertTrue(idx.equals(exp_idx)) + tm.assert_index_equal(idx, exp_idx) arr, idx = idx1.factorize(sort=True) self.assert_numpy_array_equal(arr, exp_arr) - self.assertTrue(idx.equals(exp_idx)) + tm.assert_index_equal(idx, exp_idx) # tz must be preserved idx1 = idx1.tz_localize('Asia/Tokyo') @@ -3557,7 +3683,7 @@ def test_factorize(self): arr, idx = idx1.factorize() self.assert_numpy_array_equal(arr, exp_arr) - self.assertTrue(idx.equals(exp_idx)) + tm.assert_index_equal(idx, exp_idx) idx2 = pd.DatetimeIndex(['2014-03', '2014-03', '2014-02', '2014-01', '2014-03', '2014-01']) @@ -3566,20 +3692,20 @@ def test_factorize(self): exp_idx = DatetimeIndex(['2014-01', '2014-02', '2014-03']) arr, idx = idx2.factorize(sort=True) self.assert_numpy_array_equal(arr, exp_arr) - self.assertTrue(idx.equals(exp_idx)) + 
tm.assert_index_equal(idx, exp_idx) exp_arr = np.array([0, 0, 1, 2, 0, 2]) exp_idx = DatetimeIndex(['2014-03', '2014-02', '2014-01']) arr, idx = idx2.factorize() self.assert_numpy_array_equal(arr, exp_arr) - self.assertTrue(idx.equals(exp_idx)) + tm.assert_index_equal(idx, exp_idx) # freq must be preserved idx3 = date_range('2000-01', periods=4, freq='M', tz='Asia/Tokyo') exp_arr = np.array([0, 1, 2, 3]) arr, idx = idx3.factorize() self.assert_numpy_array_equal(arr, exp_arr) - self.assertTrue(idx.equals(idx3)) + tm.assert_index_equal(idx, idx3) def test_slice_with_negative_step(self): ts = Series(np.arange(20), @@ -3831,7 +3957,7 @@ def test_datetimeindex_constructor(self): idx7 = DatetimeIndex(['12/05/2007', '25/01/2008'], dayfirst=True) idx8 = DatetimeIndex(['2007/05/12', '2008/01/25'], dayfirst=False, yearfirst=True) - self.assertTrue(idx7.equals(idx8)) + tm.assert_index_equal(idx7, idx8) for other in [idx2, idx3, idx4, idx5, idx6]: self.assertTrue((idx1.values == other.values).all()) @@ -3877,12 +4003,12 @@ def test_dayfirst(self): idx4 = to_datetime(np.array(arr), dayfirst=True) idx5 = DatetimeIndex(Index(arr), dayfirst=True) idx6 = DatetimeIndex(Series(arr), dayfirst=True) - self.assertTrue(expected.equals(idx1)) - self.assertTrue(expected.equals(idx2)) - self.assertTrue(expected.equals(idx3)) - self.assertTrue(expected.equals(idx4)) - self.assertTrue(expected.equals(idx5)) - self.assertTrue(expected.equals(idx6)) + tm.assert_index_equal(expected, idx1) + tm.assert_index_equal(expected, idx2) + tm.assert_index_equal(expected, idx3) + tm.assert_index_equal(expected, idx4) + tm.assert_index_equal(expected, idx5) + tm.assert_index_equal(expected, idx6) def test_dti_snap(self): dti = DatetimeIndex(['1/1/2002', '1/2/2002', '1/3/2002', '1/4/2002', @@ -3922,9 +4048,9 @@ def test_dti_set_index_reindex(self): idx2 = date_range('2013', periods=6, freq='A', tz='Asia/Tokyo') df = df.set_index(idx1) - self.assertTrue(df.index.equals(idx1)) + 
tm.assert_index_equal(df.index, idx1) df = df.reindex(idx2) - self.assertTrue(df.index.equals(idx2)) + tm.assert_index_equal(df.index, idx2) # 11314 # with tz @@ -4039,13 +4165,13 @@ def test_constructor_cast_object(self): def test_series_comparison_scalars(self): val = datetime(2000, 1, 4) result = self.series > val - expected = np.array([x > val for x in self.series]) - self.assert_numpy_array_equal(result, expected) + expected = Series([x > val for x in self.series]) + self.assert_series_equal(result, expected) val = self.series[5] result = self.series > val - expected = np.array([x > val for x in self.series]) - self.assert_numpy_array_equal(result, expected) + expected = Series([x > val for x in self.series]) + self.assert_series_equal(result, expected) def test_between(self): left, right = self.series[[2, 7]] @@ -4229,68 +4355,6 @@ def check(val, unit=None, h=1, s=1, us=0): result = Timestamp('NaT') self.assertIs(result, NaT) - def test_unit_errors(self): - # GH 11758 - # test proper behavior with erros - - with self.assertRaises(ValueError): - to_datetime([1], unit='D', format='%Y%m%d') - - values = [11111111, 1, 1.0, tslib.iNaT, pd.NaT, np.nan, - 'NaT', ''] - result = to_datetime(values, unit='D', errors='ignore') - expected = Index([11111111, Timestamp('1970-01-02'), - Timestamp('1970-01-02'), pd.NaT, - pd.NaT, pd.NaT, pd.NaT, pd.NaT], - dtype=object) - tm.assert_index_equal(result, expected) - - result = to_datetime(values, unit='D', errors='coerce') - expected = DatetimeIndex(['NaT', '1970-01-02', '1970-01-02', - 'NaT', 'NaT', 'NaT', 'NaT', 'NaT']) - tm.assert_index_equal(result, expected) - - with self.assertRaises(tslib.OutOfBoundsDatetime): - to_datetime(values, unit='D', errors='raise') - - values = [1420043460000, tslib.iNaT, pd.NaT, np.nan, 'NaT'] - - result = to_datetime(values, errors='ignore', unit='s') - expected = Index([1420043460000, pd.NaT, pd.NaT, - pd.NaT, pd.NaT], dtype=object) - tm.assert_index_equal(result, expected) - - result = 
to_datetime(values, errors='coerce', unit='s') - expected = DatetimeIndex(['NaT', 'NaT', 'NaT', 'NaT', 'NaT']) - tm.assert_index_equal(result, expected) - - with self.assertRaises(tslib.OutOfBoundsDatetime): - to_datetime(values, errors='raise', unit='s') - - # if we have a string, then we raise a ValueError - # and NOT an OutOfBoundsDatetime - for val in ['foo', Timestamp('20130101')]: - try: - to_datetime(val, errors='raise', unit='s') - except tslib.OutOfBoundsDatetime: - raise AssertionError("incorrect exception raised") - except ValueError: - pass - - # consistency of conversions - expected = Timestamp('1970-05-09 14:25:11') - result = pd.to_datetime(11111111, unit='s', errors='raise') - self.assertEqual(result, expected) - self.assertIsInstance(result, Timestamp) - - result = pd.to_datetime(11111111, unit='s', errors='coerce') - self.assertEqual(result, expected) - self.assertIsInstance(result, Timestamp) - - result = pd.to_datetime(11111111, unit='s', errors='ignore') - self.assertEqual(result, expected) - self.assertIsInstance(result, Timestamp) - def test_roundtrip(self): # test value to string and back conversions @@ -4713,10 +4777,9 @@ def test_date_range_normalize(self): rng = date_range(snap, periods=n, normalize=False, freq='2D') offset = timedelta(2) - values = np.array([snap + i * offset for i in range(n)], - dtype='M8[ns]') + values = DatetimeIndex([snap + i * offset for i in range(n)]) - self.assert_numpy_array_equal(rng, values) + tm.assert_index_equal(rng, values) rng = date_range('1/1/2000 08:15', periods=n, normalize=False, freq='B') @@ -4735,7 +4798,7 @@ def test_timedelta(self): result = index - timedelta(1) expected = index + timedelta(-1) - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) # GH4134, buggy with timedeltas rng = date_range('2013', '2014') @@ -4744,8 +4807,8 @@ def test_timedelta(self): result2 = DatetimeIndex(s - np.timedelta64(100000000)) result3 = rng - np.timedelta64(100000000) result4 = 
DatetimeIndex(s - pd.offsets.Hour(1)) - self.assertTrue(result1.equals(result4)) - self.assertTrue(result2.equals(result3)) + tm.assert_index_equal(result1, result4) + tm.assert_index_equal(result2, result3) def test_shift(self): ts = Series(np.random.randn(5), @@ -4753,12 +4816,12 @@ def test_shift(self): result = ts.shift(1, freq='5T') exp_index = ts.index.shift(1, freq='5T') - self.assertTrue(result.index.equals(exp_index)) + tm.assert_index_equal(result.index, exp_index) # GH #1063, multiple of same base result = ts.shift(1, freq='4H') exp_index = ts.index + datetools.Hour(4) - self.assertTrue(result.index.equals(exp_index)) + tm.assert_index_equal(result.index, exp_index) idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-04']) self.assertRaises(ValueError, idx.shift, 1) @@ -4910,7 +4973,7 @@ def test_to_datetime_format(self): elif isinstance(expected, Timestamp): self.assertEqual(result, expected) else: - self.assertTrue(result.equals(expected)) + tm.assert_index_equal(result, expected) def test_to_datetime_format_YYYYMMDD(self): s = Series([19801222, 19801222] + [19810105] * 5) @@ -4941,9 +5004,10 @@ def test_to_datetime_format_YYYYMMDD(self): # GH 7930 s = Series([20121231, 20141231, 99991231]) result = pd.to_datetime(s, format='%Y%m%d', errors='ignore') - expected = np.array([datetime(2012, 12, 31), datetime( - 2014, 12, 31), datetime(9999, 12, 31)], dtype=object) - self.assert_numpy_array_equal(result, expected) + expected = Series([datetime(2012, 12, 31), + datetime(2014, 12, 31), datetime(9999, 12, 31)], + dtype=object) + self.assert_series_equal(result, expected) result = pd.to_datetime(s, format='%Y%m%d', errors='coerce') expected = Series(['20121231', '20141231', 'NaT'], dtype='M8[ns]') @@ -5030,18 +5094,13 @@ def test_to_datetime_format_weeks(self): class TestToDatetimeInferFormat(tm.TestCase): def test_to_datetime_infer_datetime_format_consistent_format(self): - time_series = pd.Series(pd.date_range('20000101', periods=50, - freq='H')) + s = 
pd.Series(pd.date_range('20000101', periods=50, freq='H')) - test_formats = [ - '%m-%d-%Y', - '%m/%d/%Y %H:%M:%S.%f', - '%Y-%m-%dT%H:%M:%S.%f', - ] + test_formats = ['%m-%d-%Y', '%m/%d/%Y %H:%M:%S.%f', + '%Y-%m-%dT%H:%M:%S.%f'] for test_format in test_formats: - s_as_dt_strings = time_series.apply( - lambda x: x.strftime(test_format)) + s_as_dt_strings = s.apply(lambda x: x.strftime(test_format)) with_format = pd.to_datetime(s_as_dt_strings, format=test_format) no_infer = pd.to_datetime(s_as_dt_strings, @@ -5051,70 +5110,45 @@ def test_to_datetime_infer_datetime_format_consistent_format(self): # Whether the format is explicitly passed, it is inferred, or # it is not inferred, the results should all be the same - self.assert_numpy_array_equal(with_format, no_infer) - self.assert_numpy_array_equal(no_infer, yes_infer) + self.assert_series_equal(with_format, no_infer) + self.assert_series_equal(no_infer, yes_infer) def test_to_datetime_infer_datetime_format_inconsistent_format(self): - test_series = pd.Series(np.array([ - '01/01/2011 00:00:00', - '01-02-2011 00:00:00', - '2011-01-03T00:00:00', - ])) + s = pd.Series(np.array(['01/01/2011 00:00:00', + '01-02-2011 00:00:00', + '2011-01-03T00:00:00'])) # When the format is inconsistent, infer_datetime_format should just # fallback to the default parsing - self.assert_numpy_array_equal( - pd.to_datetime(test_series, infer_datetime_format=False), - pd.to_datetime(test_series, infer_datetime_format=True) - ) + tm.assert_series_equal(pd.to_datetime(s, infer_datetime_format=False), + pd.to_datetime(s, infer_datetime_format=True)) - test_series = pd.Series(np.array([ - 'Jan/01/2011', - 'Feb/01/2011', - 'Mar/01/2011', - ])) + s = pd.Series(np.array(['Jan/01/2011', 'Feb/01/2011', 'Mar/01/2011'])) - self.assert_numpy_array_equal( - pd.to_datetime(test_series, infer_datetime_format=False), - pd.to_datetime(test_series, infer_datetime_format=True) - ) + tm.assert_series_equal(pd.to_datetime(s, infer_datetime_format=False), + 
pd.to_datetime(s, infer_datetime_format=True)) def test_to_datetime_infer_datetime_format_series_with_nans(self): - test_series = pd.Series(np.array([ - '01/01/2011 00:00:00', - np.nan, - '01/03/2011 00:00:00', - np.nan, - ])) - - self.assert_numpy_array_equal( - pd.to_datetime(test_series, infer_datetime_format=False), - pd.to_datetime(test_series, infer_datetime_format=True) - ) + s = pd.Series(np.array(['01/01/2011 00:00:00', np.nan, + '01/03/2011 00:00:00', np.nan])) + tm.assert_series_equal(pd.to_datetime(s, infer_datetime_format=False), + pd.to_datetime(s, infer_datetime_format=True)) def test_to_datetime_infer_datetime_format_series_starting_with_nans(self): - test_series = pd.Series(np.array([ - np.nan, - np.nan, - '01/01/2011 00:00:00', - '01/02/2011 00:00:00', - '01/03/2011 00:00:00', - ])) + s = pd.Series(np.array([np.nan, np.nan, '01/01/2011 00:00:00', + '01/02/2011 00:00:00', '01/03/2011 00:00:00'])) - self.assert_numpy_array_equal( - pd.to_datetime(test_series, infer_datetime_format=False), - pd.to_datetime(test_series, infer_datetime_format=True) - ) + tm.assert_series_equal(pd.to_datetime(s, infer_datetime_format=False), + pd.to_datetime(s, infer_datetime_format=True)) def test_to_datetime_iso8601_noleading_0s(self): # GH 11871 - test_series = pd.Series(['2014-1-1', '2014-2-2', '2015-3-3']) + s = pd.Series(['2014-1-1', '2014-2-2', '2015-3-3']) expected = pd.Series([pd.Timestamp('2014-01-01'), pd.Timestamp('2014-02-02'), pd.Timestamp('2015-03-03')]) - tm.assert_series_equal(pd.to_datetime(test_series), expected) - tm.assert_series_equal(pd.to_datetime(test_series, format='%Y-%m-%d'), - expected) + tm.assert_series_equal(pd.to_datetime(s), expected) + tm.assert_series_equal(pd.to_datetime(s, format='%Y-%m-%d'), expected) class TestGuessDatetimeFormat(tm.TestCase): diff --git a/pandas/tseries/tests/test_timeseries_legacy.py b/pandas/tseries/tests/test_timeseries_legacy.py index 086f23cd2d4fd..6f58ad3a57b48 100644 --- 
a/pandas/tseries/tests/test_timeseries_legacy.py +++ b/pandas/tseries/tests/test_timeseries_legacy.py @@ -85,7 +85,7 @@ def test_unpickle_legacy_len0_daterange(self): ex_index = DatetimeIndex([], freq='B') - self.assertTrue(result.index.equals(ex_index)) + self.assert_index_equal(result.index, ex_index) tm.assertIsInstance(result.index.freq, offsets.BDay) self.assertEqual(len(result), 0) @@ -116,7 +116,7 @@ def _check_join(left, right, how='inner'): return_indexers=True) tm.assertIsInstance(ra, DatetimeIndex) - self.assertTrue(ra.equals(ea)) + self.assert_index_equal(ra, ea) assert_almost_equal(rb, eb) assert_almost_equal(rc, ec) @@ -150,24 +150,24 @@ def test_setops(self): result = index[:5].union(obj_index[5:]) expected = index tm.assertIsInstance(result, DatetimeIndex) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) result = index[:10].intersection(obj_index[5:]) expected = index[5:10] tm.assertIsInstance(result, DatetimeIndex) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) result = index[:10] - obj_index[5:] expected = index[:5] tm.assertIsInstance(result, DatetimeIndex) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) def test_index_conversion(self): index = self.frame.index obj_index = index.asobject conv = DatetimeIndex(obj_index) - self.assertTrue(conv.equals(index)) + self.assert_index_equal(conv, index) self.assertRaises(ValueError, DatetimeIndex, ['a', 'b', 'c', 'd']) @@ -188,11 +188,11 @@ def test_setops_conversion_fail(self): result = index.union(right) expected = Index(np.concatenate([index.asobject, right])) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) result = index.intersection(right) expected = Index([]) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) def test_legacy_time_rules(self): rules = [('WEEKDAY', 'B'), ('EOM', 'BM'), ('W@MON', 'W-MON'), @@ 
-211,7 +211,7 @@ def test_legacy_time_rules(self): for old_freq, new_freq in rules: old_rng = date_range(start, end, freq=old_freq) new_rng = date_range(start, end, freq=new_freq) - self.assertTrue(old_rng.equals(new_rng)) + self.assert_index_equal(old_rng, new_rng) # test get_legacy_offset_name offset = datetools.get_offset(new_freq) diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py index 1f0632377c851..afe9d0652db19 100644 --- a/pandas/tseries/tests/test_timezones.py +++ b/pandas/tseries/tests/test_timezones.py @@ -263,7 +263,7 @@ def test_create_with_fixed_tz(self): self.assertEqual(off, rng.tz) rng2 = date_range(start, periods=len(rng), tz=off) - self.assertTrue(rng.equals(rng2)) + self.assert_index_equal(rng, rng2) rng3 = date_range('3/11/2012 05:00:00+07:00', '6/11/2012 05:00:00+07:00') @@ -287,7 +287,7 @@ def test_date_range_localize(self): rng3 = date_range('3/11/2012 03:00', periods=15, freq='H') rng3 = rng3.tz_localize('US/Eastern') - self.assertTrue(rng.equals(rng3)) + self.assert_index_equal(rng, rng3) # DST transition time val = rng[0] @@ -296,14 +296,14 @@ def test_date_range_localize(self): self.assertEqual(val.hour, 3) self.assertEqual(exp.hour, 3) self.assertEqual(val, exp) # same UTC value - self.assertTrue(rng[:2].equals(rng2)) + self.assert_index_equal(rng[:2], rng2) # Right before the DST transition rng = date_range('3/11/2012 00:00', periods=2, freq='H', tz='US/Eastern') rng2 = DatetimeIndex(['3/11/2012 00:00', '3/11/2012 01:00'], tz='US/Eastern') - self.assertTrue(rng.equals(rng2)) + self.assert_index_equal(rng, rng2) exp = Timestamp('3/11/2012 00:00', tz='US/Eastern') self.assertEqual(exp.hour, 0) self.assertEqual(rng[0], exp) @@ -402,7 +402,7 @@ def test_tz_localize(self): dr = bdate_range('1/1/2009', '1/1/2010') dr_utc = bdate_range('1/1/2009', '1/1/2010', tz=pytz.utc) localized = dr.tz_localize(pytz.utc) - self.assert_numpy_array_equal(dr_utc, localized) + self.assert_index_equal(dr_utc, 
localized) def test_with_tz_ambiguous_times(self): tz = self.tz('US/Eastern') @@ -440,22 +440,22 @@ def test_ambiguous_infer(self): '11/06/2011 02:00', '11/06/2011 03:00'] di = DatetimeIndex(times) localized = di.tz_localize(tz, ambiguous='infer') - self.assert_numpy_array_equal(dr, localized) + self.assert_index_equal(dr, localized) with tm.assert_produces_warning(FutureWarning): localized_old = di.tz_localize(tz, infer_dst=True) - self.assert_numpy_array_equal(dr, localized_old) - self.assert_numpy_array_equal(dr, DatetimeIndex(times, tz=tz, - ambiguous='infer')) + self.assert_index_equal(dr, localized_old) + self.assert_index_equal(dr, DatetimeIndex(times, tz=tz, + ambiguous='infer')) # When there is no dst transition, nothing special happens dr = date_range(datetime(2011, 6, 1, 0), periods=10, freq=datetools.Hour()) localized = dr.tz_localize(tz) localized_infer = dr.tz_localize(tz, ambiguous='infer') - self.assert_numpy_array_equal(localized, localized_infer) + self.assert_index_equal(localized, localized_infer) with tm.assert_produces_warning(FutureWarning): localized_infer_old = dr.tz_localize(tz, infer_dst=True) - self.assert_numpy_array_equal(localized, localized_infer_old) + self.assert_index_equal(localized, localized_infer_old) def test_ambiguous_flags(self): # November 6, 2011, fall back, repeat 2 AM hour @@ -471,20 +471,20 @@ def test_ambiguous_flags(self): di = DatetimeIndex(times) is_dst = [1, 1, 0, 0, 0] localized = di.tz_localize(tz, ambiguous=is_dst) - self.assert_numpy_array_equal(dr, localized) - self.assert_numpy_array_equal(dr, DatetimeIndex(times, tz=tz, - ambiguous=is_dst)) + self.assert_index_equal(dr, localized) + self.assert_index_equal(dr, DatetimeIndex(times, tz=tz, + ambiguous=is_dst)) localized = di.tz_localize(tz, ambiguous=np.array(is_dst)) - self.assert_numpy_array_equal(dr, localized) + self.assert_index_equal(dr, localized) localized = di.tz_localize(tz, ambiguous=np.array(is_dst).astype('bool')) - 
self.assert_numpy_array_equal(dr, localized) + self.assert_index_equal(dr, localized) # Test constructor localized = DatetimeIndex(times, tz=tz, ambiguous=is_dst) - self.assert_numpy_array_equal(dr, localized) + self.assert_index_equal(dr, localized) # Test duplicate times where infer_dst fails times += times @@ -497,7 +497,7 @@ def test_ambiguous_flags(self): is_dst = np.hstack((is_dst, is_dst)) localized = di.tz_localize(tz, ambiguous=is_dst) dr = dr.append(dr) - self.assert_numpy_array_equal(dr, localized) + self.assert_index_equal(dr, localized) # When there is no dst transition, nothing special happens dr = date_range(datetime(2011, 6, 1, 0), periods=10, @@ -505,7 +505,7 @@ def test_ambiguous_flags(self): is_dst = np.array([1] * 10) localized = dr.tz_localize(tz) localized_is_dst = dr.tz_localize(tz, ambiguous=is_dst) - self.assert_numpy_array_equal(localized, localized_is_dst) + self.assert_index_equal(localized, localized_is_dst) # construction with an ambiguous end-point # GH 11626 @@ -531,7 +531,10 @@ def test_ambiguous_nat(self): times = ['11/06/2011 00:00', np.NaN, np.NaN, '11/06/2011 02:00', '11/06/2011 03:00'] di_test = DatetimeIndex(times, tz='US/Eastern') - self.assert_numpy_array_equal(di_test, localized) + + # left dtype is datetime64[ns, US/Eastern] + # right is datetime64[ns, tzfile('/usr/share/zoneinfo/US/Eastern')] + self.assert_numpy_array_equal(di_test.values, localized.values) def test_nonexistent_raise_coerce(self): # See issue 13057 @@ -580,7 +583,7 @@ def test_tz_string(self): tz=self.tzstr('US/Eastern')) expected = date_range('1/1/2000', periods=10, tz=self.tz('US/Eastern')) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) def test_take_dont_lose_meta(self): tm._skip_if_no_pytz() @@ -673,7 +676,7 @@ def test_convert_tz_aware_datetime_datetime(self): self.assertTrue(self.cmptz(result.tz, self.tz('US/Eastern'))) converted = to_datetime(dates_aware, utc=True) - ex_vals = [Timestamp(x).value for x in 
dates_aware] + ex_vals = np.array([Timestamp(x).value for x in dates_aware]) self.assert_numpy_array_equal(converted.asi8, ex_vals) self.assertIs(converted.tz, pytz.utc) @@ -779,10 +782,11 @@ def test_date_range_span_dst_transition(self): self.assertTrue((dr.hour == 0).all()) def test_convert_datetime_list(self): - dr = date_range('2012-06-02', periods=10, tz=self.tzstr('US/Eastern')) + dr = date_range('2012-06-02', periods=10, + tz=self.tzstr('US/Eastern'), name='foo') dr2 = DatetimeIndex(list(dr), name='foo') - self.assertTrue(dr.equals(dr2)) + self.assert_index_equal(dr, dr2) self.assertEqual(dr.tz, dr2.tz) self.assertEqual(dr2.name, 'foo') @@ -845,7 +849,7 @@ def test_datetimeindex_tz(self): idx4 = DatetimeIndex(np.array(arr), tz=self.tzstr('US/Eastern')) for other in [idx2, idx3, idx4]: - self.assertTrue(idx1.equals(other)) + self.assert_index_equal(idx1, other) def test_datetimeindex_tz_nat(self): idx = to_datetime([Timestamp("2013-1-1", tz=self.tzstr('US/Eastern')), @@ -898,6 +902,88 @@ def test_utc_with_system_utc(self): # check that the time hasn't changed. 
self.assertEqual(ts, ts.tz_convert(dateutil.tz.tzutc())) + def test_tz_convert_hour_overflow_dst(self): + # Regression test for: + # https://github.com/pydata/pandas/issues/13306 + + # sorted case US/Eastern -> UTC + ts = ['2008-05-12 09:50:00', + '2008-12-12 09:50:35', + '2009-05-12 09:50:32'] + tt = to_datetime(ts).tz_localize('US/Eastern') + ut = tt.tz_convert('UTC') + expected = np.array([13, 14, 13], dtype=np.int32) + self.assert_numpy_array_equal(ut.hour, expected) + + # sorted case UTC -> US/Eastern + ts = ['2008-05-12 13:50:00', + '2008-12-12 14:50:35', + '2009-05-12 13:50:32'] + tt = to_datetime(ts).tz_localize('UTC') + ut = tt.tz_convert('US/Eastern') + expected = np.array([9, 9, 9], dtype=np.int32) + self.assert_numpy_array_equal(ut.hour, expected) + + # unsorted case US/Eastern -> UTC + ts = ['2008-05-12 09:50:00', + '2008-12-12 09:50:35', + '2008-05-12 09:50:32'] + tt = to_datetime(ts).tz_localize('US/Eastern') + ut = tt.tz_convert('UTC') + expected = np.array([13, 14, 13], dtype=np.int32) + self.assert_numpy_array_equal(ut.hour, expected) + + # unsorted case UTC -> US/Eastern + ts = ['2008-05-12 13:50:00', + '2008-12-12 14:50:35', + '2008-05-12 13:50:32'] + tt = to_datetime(ts).tz_localize('UTC') + ut = tt.tz_convert('US/Eastern') + expected = np.array([9, 9, 9], dtype=np.int32) + self.assert_numpy_array_equal(ut.hour, expected) + + def test_tz_convert_hour_overflow_dst_timestamps(self): + # Regression test for: + # https://github.com/pydata/pandas/issues/13306 + + tz = self.tzstr('US/Eastern') + + # sorted case US/Eastern -> UTC + ts = [Timestamp('2008-05-12 09:50:00', tz=tz), + Timestamp('2008-12-12 09:50:35', tz=tz), + Timestamp('2009-05-12 09:50:32', tz=tz)] + tt = to_datetime(ts) + ut = tt.tz_convert('UTC') + expected = np.array([13, 14, 13], dtype=np.int32) + self.assert_numpy_array_equal(ut.hour, expected) + + # sorted case UTC -> US/Eastern + ts = [Timestamp('2008-05-12 13:50:00', tz='UTC'), + Timestamp('2008-12-12 14:50:35', tz='UTC'), + 
Timestamp('2009-05-12 13:50:32', tz='UTC')] + tt = to_datetime(ts) + ut = tt.tz_convert('US/Eastern') + expected = np.array([9, 9, 9], dtype=np.int32) + self.assert_numpy_array_equal(ut.hour, expected) + + # unsorted case US/Eastern -> UTC + ts = [Timestamp('2008-05-12 09:50:00', tz=tz), + Timestamp('2008-12-12 09:50:35', tz=tz), + Timestamp('2008-05-12 09:50:32', tz=tz)] + tt = to_datetime(ts) + ut = tt.tz_convert('UTC') + expected = np.array([13, 14, 13], dtype=np.int32) + self.assert_numpy_array_equal(ut.hour, expected) + + # unsorted case UTC -> US/Eastern + ts = [Timestamp('2008-05-12 13:50:00', tz='UTC'), + Timestamp('2008-12-12 14:50:35', tz='UTC'), + Timestamp('2008-05-12 13:50:32', tz='UTC')] + tt = to_datetime(ts) + ut = tt.tz_convert('US/Eastern') + expected = np.array([9, 9, 9], dtype=np.int32) + self.assert_numpy_array_equal(ut.hour, expected) + def test_tslib_tz_convert_trans_pos_plus_1__bug(self): # Regression test for tslib.tz_convert(vals, tz1, tz2). # See https://github.com/pydata/pandas/issues/4496 for details. 
@@ -1011,7 +1097,7 @@ def test_tz_localize_naive(self): conv = rng.tz_localize('US/Pacific') exp = date_range('1/1/2011', periods=100, freq='H', tz='US/Pacific') - self.assertTrue(conv.equals(exp)) + self.assert_index_equal(conv, exp) def test_tz_localize_roundtrip(self): for tz in self.timezones: @@ -1143,7 +1229,7 @@ def test_join_aware(self): result = test1.join(test2, how='outer') ex_index = test1.index.union(test2.index) - self.assertTrue(result.index.equals(ex_index)) + self.assert_index_equal(result.index, ex_index) self.assertTrue(result.index.tz.zone == 'US/Central') # non-overlapping @@ -1199,11 +1285,11 @@ def test_append_aware_naive(self): ts1 = Series(np.random.randn(len(rng1)), index=rng1) ts2 = Series(np.random.randn(len(rng2)), index=rng2) ts_result = ts1.append(ts2) + self.assertTrue(ts_result.index.equals(ts1.index.asobject.append( ts2.index.asobject))) # mixed - rng1 = date_range('1/1/2011 01:00', periods=1, freq='H') rng2 = lrange(100) ts1 = Series(np.random.randn(len(rng1)), index=rng1) @@ -1280,7 +1366,7 @@ def test_datetimeindex_tz(self): rng = date_range('03/12/2012 00:00', periods=10, freq='W-FRI', tz='US/Eastern') rng2 = DatetimeIndex(data=rng, tz='US/Eastern') - self.assertTrue(rng.equals(rng2)) + self.assert_index_equal(rng, rng2) def test_normalize_tz(self): rng = date_range('1/1/2000 9:30', periods=10, freq='D', @@ -1289,7 +1375,7 @@ def test_normalize_tz(self): result = rng.normalize() expected = date_range('1/1/2000', periods=10, freq='D', tz='US/Eastern') - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) self.assertTrue(result.is_normalized) self.assertFalse(rng.is_normalized) @@ -1298,7 +1384,7 @@ def test_normalize_tz(self): result = rng.normalize() expected = date_range('1/1/2000', periods=10, freq='D', tz='UTC') - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) self.assertTrue(result.is_normalized) self.assertFalse(rng.is_normalized) @@ -1307,7 +1393,7 @@ 
def test_normalize_tz(self): rng = date_range('1/1/2000 9:30', periods=10, freq='D', tz=tzlocal()) result = rng.normalize() expected = date_range('1/1/2000', periods=10, freq='D', tz=tzlocal()) - self.assertTrue(result.equals(expected)) + self.assert_index_equal(result, expected) self.assertTrue(result.is_normalized) self.assertFalse(rng.is_normalized) @@ -1324,45 +1410,45 @@ def test_tzaware_offset(self): '2010-11-01 07:00'], freq='H', tz=tz) offset = dates + offsets.Hour(5) - self.assertTrue(offset.equals(expected)) + self.assert_index_equal(offset, expected) offset = dates + np.timedelta64(5, 'h') - self.assertTrue(offset.equals(expected)) + self.assert_index_equal(offset, expected) offset = dates + timedelta(hours=5) - self.assertTrue(offset.equals(expected)) + self.assert_index_equal(offset, expected) def test_nat(self): # GH 5546 dates = [NaT] idx = DatetimeIndex(dates) idx = idx.tz_localize('US/Pacific') - self.assertTrue(idx.equals(DatetimeIndex(dates, tz='US/Pacific'))) + self.assert_index_equal(idx, DatetimeIndex(dates, tz='US/Pacific')) idx = idx.tz_convert('US/Eastern') - self.assertTrue(idx.equals(DatetimeIndex(dates, tz='US/Eastern'))) + self.assert_index_equal(idx, DatetimeIndex(dates, tz='US/Eastern')) idx = idx.tz_convert('UTC') - self.assertTrue(idx.equals(DatetimeIndex(dates, tz='UTC'))) + self.assert_index_equal(idx, DatetimeIndex(dates, tz='UTC')) dates = ['2010-12-01 00:00', '2010-12-02 00:00', NaT] idx = DatetimeIndex(dates) idx = idx.tz_localize('US/Pacific') - self.assertTrue(idx.equals(DatetimeIndex(dates, tz='US/Pacific'))) + self.assert_index_equal(idx, DatetimeIndex(dates, tz='US/Pacific')) idx = idx.tz_convert('US/Eastern') expected = ['2010-12-01 03:00', '2010-12-02 03:00', NaT] - self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Eastern'))) + self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Eastern')) idx = idx + offsets.Hour(5) expected = ['2010-12-01 08:00', '2010-12-02 08:00', NaT] - 
self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Eastern'))) + self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Eastern')) idx = idx.tz_convert('US/Pacific') expected = ['2010-12-01 05:00', '2010-12-02 05:00', NaT] - self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Pacific'))) + self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Pacific')) idx = idx + np.timedelta64(3, 'h') expected = ['2010-12-01 08:00', '2010-12-02 08:00', NaT] - self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Pacific'))) + self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Pacific')) idx = idx.tz_convert('US/Eastern') expected = ['2010-12-01 11:00', '2010-12-02 11:00', NaT] - self.assertTrue(idx.equals(DatetimeIndex(expected, tz='US/Eastern'))) + self.assert_index_equal(idx, DatetimeIndex(expected, tz='US/Eastern')) if __name__ == '__main__': diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py index 4543047a8a72a..c6436163b9edb 100644 --- a/pandas/tseries/tests/test_tslib.py +++ b/pandas/tseries/tests/test_tslib.py @@ -2,7 +2,7 @@ from distutils.version import LooseVersion import numpy as np -from pandas import tslib +from pandas import tslib, lib import pandas._period as period import datetime @@ -25,6 +25,35 @@ from pandas.util.testing import assert_series_equal, _skip_if_has_locale +class TestTsUtil(tm.TestCase): + + def test_try_parse_dates(self): + from dateutil.parser import parse + arr = np.array(['5/1/2000', '6/1/2000', '7/1/2000'], dtype=object) + + result = lib.try_parse_dates(arr, dayfirst=True) + expected = [parse(d, dayfirst=True) for d in arr] + self.assertTrue(np.array_equal(result, expected)) + + def test_min_valid(self): + # Ensure that Timestamp.min is a valid Timestamp + Timestamp(Timestamp.min) + + def test_max_valid(self): + # Ensure that Timestamp.max is a valid Timestamp + Timestamp(Timestamp.max) + + def test_to_datetime_bijective(self): + # Ensure that converting to datetime and 
back only loses precision + # by going from nanoseconds to microseconds. + self.assertEqual( + Timestamp(Timestamp.max.to_pydatetime()).value / 1000, + Timestamp.max.value / 1000) + self.assertEqual( + Timestamp(Timestamp.min.to_pydatetime()).value / 1000, + Timestamp.min.value / 1000) + + class TestTimestamp(tm.TestCase): def test_constructor(self): @@ -180,6 +209,52 @@ def test_constructor_invalid(self): with tm.assertRaisesRegexp(ValueError, 'Cannot convert Period'): Timestamp(Period('1000-01-01')) + def test_constructor_positional(self): + # GH 10758 + with tm.assertRaises(TypeError): + Timestamp(2000, 1) + with tm.assertRaises(ValueError): + Timestamp(2000, 0, 1) + with tm.assertRaises(ValueError): + Timestamp(2000, 13, 1) + with tm.assertRaises(ValueError): + Timestamp(2000, 1, 0) + with tm.assertRaises(ValueError): + Timestamp(2000, 1, 32) + + # GH 11630 + self.assertEqual( + repr(Timestamp(2015, 11, 12)), + repr(Timestamp('20151112'))) + + self.assertEqual( + repr(Timestamp(2015, 11, 12, 1, 2, 3, 999999)), + repr(Timestamp('2015-11-12 01:02:03.999999'))) + + self.assertIs(Timestamp(None), pd.NaT) + + def test_constructor_keyword(self): + # GH 10758 + with tm.assertRaises(TypeError): + Timestamp(year=2000, month=1) + with tm.assertRaises(ValueError): + Timestamp(year=2000, month=0, day=1) + with tm.assertRaises(ValueError): + Timestamp(year=2000, month=13, day=1) + with tm.assertRaises(ValueError): + Timestamp(year=2000, month=1, day=0) + with tm.assertRaises(ValueError): + Timestamp(year=2000, month=1, day=32) + + self.assertEqual( + repr(Timestamp(year=2015, month=11, day=12)), + repr(Timestamp('20151112'))) + + self.assertEqual( + repr(Timestamp(year=2015, month=11, day=12, + hour=1, minute=2, second=3, microsecond=999999)), + repr(Timestamp('2015-11-12 01:02:03.999999'))) + def test_conversion(self): # GH 9255 ts = Timestamp('2000-01-01') @@ -766,8 +841,9 @@ def test_parsers_time(self): self.assert_series_equal(tools.to_time(Series(arg, name="test")), 
Series(expected_arr, name="test")) - self.assert_numpy_array_equal(tools.to_time(np.array(arg)), - np.array(expected_arr, dtype=np.object_)) + res = tools.to_time(np.array(arg)) + self.assertIsInstance(res, list) + self.assert_equal(res, expected_arr) def test_parsers_monthfreq(self): cases = {'201101': datetime.datetime(2011, 1, 1, 0, 0), @@ -1290,6 +1366,24 @@ def test_shift_months(self): years=years, months=months) for x in s]) tm.assert_index_equal(actual, expected) + def test_round(self): + stamp = Timestamp('2000-01-05 05:09:15.13') + + def _check_round(freq, expected): + result = stamp.round(freq=freq) + self.assertEqual(result, expected) + + for freq, expected in [ + ('D', Timestamp('2000-01-05 00:00:00')), + ('H', Timestamp('2000-01-05 05:00:00')), + ('S', Timestamp('2000-01-05 05:09:15')) + ]: + _check_round(freq, expected) + + msg = "Could not evaluate" + tm.assertRaisesRegexp(ValueError, msg, + stamp.round, 'foo') + class TestTimestampOps(tm.TestCase): def test_timestamp_and_datetime(self): diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py index a46149035dbae..d5e87d1df2462 100644 --- a/pandas/tseries/tools.py +++ b/pandas/tseries/tools.py @@ -221,7 +221,8 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, - If True, require an exact format match. - If False, allow the format to match anywhere in the target string. - unit : unit of the arg (D,s,ms,us,ns) denote the unit in epoch + unit : string, default 'ns' + unit of the arg (D,s,ms,us,ns) denote the unit in epoch (e.g. a unix timestamp), which is an integer/float number. 
infer_datetime_format : boolean, default False If True and no `format` is given, attempt to infer the format of the diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx index a240558025090..6453e65ecdc81 100644 --- a/pandas/tslib.pyx +++ b/pandas/tslib.pyx @@ -214,8 +214,8 @@ cdef inline bint _is_fixed_offset(object tz): return 0 return 1 - _zero_time = datetime_time(0, 0) +_no_input = object() # Python front end to C extension type _Timestamp # This serves as the box for datetime64 @@ -225,6 +225,10 @@ class Timestamp(_Timestamp): for the entries that make up a DatetimeIndex, and other timeseries oriented data structures in pandas. + There are essentially three calling conventions for the constructor. The + primary form accepts four parameters. They can be passed by position or + keyword. + Parameters ---------- ts_input : datetime-like, str, int, float @@ -235,6 +239,23 @@ class Timestamp(_Timestamp): Time zone for time which Timestamp will have. unit : string numpy unit used for conversion, if ts_input is int or float + + The other two forms mimic the parameters from ``datetime.datetime``. They + can be passed by either position or keyword, but not both mixed together. + + :func:`datetime.datetime` Parameters + ------------------------------------ + + .. 
versionadded:: 0.18.2 + + year : int + month : int + day : int + hour : int, optional, default is 0 + minute : int, optional, default is 0 + second : int, optional, default is 0 + microsecond : int, optional, default is 0 + tzinfo : datetime.tzinfo, optional, default is None """ @classmethod @@ -288,10 +309,46 @@ class Timestamp(_Timestamp): def combine(cls, date, time): return cls(datetime.combine(date, time)) - def __new__(cls, object ts_input, object offset=None, tz=None, unit=None): + def __new__(cls, + object ts_input=_no_input, object offset=None, tz=None, unit=None, + year=None, month=None, day=None, + hour=None, minute=None, second=None, microsecond=None, + tzinfo=None): + # The parameter list folds together legacy parameter names (the first + # four) and positional and keyword parameter names from pydatetime. + # + # There are three calling forms: + # + # - In the legacy form, the first parameter, ts_input, is required + # and may be datetime-like, str, int, or float. The second + # parameter, offset, is optional and may be str or DateOffset. + # + # - ints in the first, second, and third arguments indicate + # pydatetime positional arguments. Only the first 8 arguments + # (standing in for year, month, day, hour, minute, second, + # microsecond, tzinfo) may be non-None. As a shortcut, we just + # check that the second argument is an int. + # + # - Nones for the first four (legacy) arguments indicate pydatetime + # keyword arguments. year, month, and day are required. As a + # shortcut, we just check that the first argument was not passed. + # + # Mixing pydatetime positional and keyword arguments is forbidden! + cdef _TSObject ts cdef _Timestamp ts_base + if ts_input is _no_input: + # User passed keyword arguments. 
+ return Timestamp(datetime(year, month, day, hour or 0, + minute or 0, second or 0, microsecond or 0, tzinfo), + tz=tzinfo) + elif is_integer_object(offset): + # User passed positional arguments: + # Timestamp(year, month, day[, hour[, minute[, second[, microsecond[, tzinfo]]]]]) + return Timestamp(datetime(ts_input, offset, tz, unit or 0, + year or 0, month or 0, day or 0, hour), tz=hour) + ts = convert_to_tsobject(ts_input, tz, unit, 0, 0) if ts.value == NPY_NAT: @@ -2082,6 +2139,7 @@ cpdef array_with_unit_to_datetime(ndarray values, unit, errors='coerce'): unit)) elif is_ignore: raise AssertionError + iresult[i] = NPY_NAT except: if is_raise: raise OutOfBoundsDatetime("cannot convert input {0}" @@ -2149,7 +2207,7 @@ cpdef array_to_datetime(ndarray[object] values, errors='raise', ndarray[int64_t] iresult ndarray[object] oresult pandas_datetimestruct dts - bint utc_convert = bool(utc), seen_integer=0, seen_datetime=0 + bint utc_convert = bool(utc), seen_integer=0, seen_string=0, seen_datetime=0 bint is_raise=errors=='raise', is_ignore=errors=='ignore', is_coerce=errors=='coerce' _TSObject _ts int out_local=0, out_tzoffset=0 @@ -2162,8 +2220,10 @@ cpdef array_to_datetime(ndarray[object] values, errors='raise', iresult = result.view('i8') for i in range(n): val = values[i] + if _checknull_with_nat(val): iresult[i] = NPY_NAT + elif PyDateTime_Check(val): seen_datetime=1 if val.tzinfo is not None: @@ -2192,6 +2252,7 @@ cpdef array_to_datetime(ndarray[object] values, errors='raise', iresult[i] = NPY_NAT continue raise + elif PyDate_Check(val): iresult[i] = _date_to_datetime64(val, &dts) try: @@ -2202,6 +2263,7 @@ cpdef array_to_datetime(ndarray[object] values, errors='raise', iresult[i] = NPY_NAT continue raise + elif util.is_datetime64_object(val): if get_datetime64_value(val) == NPY_NAT: iresult[i] = NPY_NAT @@ -2215,25 +2277,35 @@ cpdef array_to_datetime(ndarray[object] values, errors='raise', continue raise - # if we are coercing, dont' allow integers - elif 
is_integer_object(val) and not is_coerce: - if val == NPY_NAT: + elif is_integer_object(val) or is_float_object(val): + # these must be ns unit by-definition + + if val != val or val == NPY_NAT: iresult[i] = NPY_NAT - else: + elif is_raise or is_ignore: iresult[i] = val seen_integer=1 - elif is_float_object(val) and not is_coerce: - if val != val or val == NPY_NAT: - iresult[i] = NPY_NAT else: - iresult[i] = <int64_t>val - seen_integer=1 - else: + # coerce + # we now need to parse this as if unit='ns' + # we can ONLY accept integers at this point + # if we have previously (or in future accept + # datetimes/strings, then we must coerce) + seen_integer = 1 + try: + iresult[i] = cast_from_unit(val, 'ns') + except: + iresult[i] = NPY_NAT + + elif util.is_string_object(val): + # string + try: if len(val) == 0 or val in _nat_strings: iresult[i] = NPY_NAT continue + seen_string=1 _string_to_dts(val, &dts, &out_local, &out_tzoffset) value = pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &dts) if out_local == 1: @@ -2275,12 +2347,27 @@ cpdef array_to_datetime(ndarray[object] values, errors='raise', iresult[i] = NPY_NAT continue raise + else: + if is_coerce: + iresult[i] = NPY_NAT + else: + raise TypeError("{0} is not convertible to datetime" + .format(type(val))) + + if seen_datetime and seen_integer: + # we have mixed datetimes & integers - # don't allow mixed integers and datetime like - # higher levels can catch and is_coerce to object, for - # example - if seen_integer and seen_datetime: - raise ValueError("mixed datetimes and integers in passed array") + if is_coerce: + # coerce all of the integers/floats to NaT, preserve + # the datetimes and other convertibles + for i in range(n): + val = values[i] + if is_integer_object(val) or is_float_object(val): + result[i] = NPY_NAT + elif is_raise: + raise ValueError("mixed datetimes and integers in passed array") + else: + raise TypeError return result except OutOfBoundsDatetime: @@ -3667,8 +3754,8 @@ except: def 
tz_convert(ndarray[int64_t] vals, object tz1, object tz2): cdef: - ndarray[int64_t] utc_dates, tt, result, trans, deltas - Py_ssize_t i, pos, n = len(vals) + ndarray[int64_t] utc_dates, tt, result, trans, deltas, posn + Py_ssize_t i, j, pos, n = len(vals) int64_t v, offset pandas_datetimestruct dts Py_ssize_t trans_len @@ -3704,19 +3791,18 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2): return vals trans_len = len(trans) - pos = trans.searchsorted(tt[0]) - 1 - if pos < 0: - raise ValueError('First time before start of DST info') - - offset = deltas[pos] + posn = trans.searchsorted(tt, side='right') + j = 0 for i in range(n): v = vals[i] if v == NPY_NAT: utc_dates[i] = NPY_NAT else: - while pos + 1 < trans_len and v >= trans[pos + 1]: - pos += 1 - offset = deltas[pos] + pos = posn[j] - 1 + j = j + 1 + if pos < 0: + raise ValueError('First time before start of DST info') + offset = deltas[pos] utc_dates[i] = v - offset else: utc_dates = vals @@ -3751,20 +3837,18 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2): if (result==NPY_NAT).all(): return result - pos = trans.searchsorted(utc_dates[utc_dates!=NPY_NAT][0]) - 1 - if pos < 0: - raise ValueError('First time before start of DST info') - - # TODO: this assumed sortedness :/ - offset = deltas[pos] + posn = trans.searchsorted(utc_dates[utc_dates!=NPY_NAT], side='right') + j = 0 for i in range(n): v = utc_dates[i] if vals[i] == NPY_NAT: result[i] = vals[i] else: - while pos + 1 < trans_len and v >= trans[pos + 1]: - pos += 1 - offset = deltas[pos] + pos = posn[j] - 1 + j = j + 1 + if pos < 0: + raise ValueError('First time before start of DST info') + offset = deltas[pos] result[i] = v + offset return result diff --git a/pandas/types/api.py b/pandas/types/api.py index bb61025a41a37..721d8d29bba8b 100644 --- a/pandas/types/api.py +++ b/pandas/types/api.py @@ -28,7 +28,11 @@ def pandas_dtype(dtype): ------- np.dtype or a pandas dtype """ - if isinstance(dtype, string_types): + if 
isinstance(dtype, DatetimeTZDtype): + return dtype + elif isinstance(dtype, CategoricalDtype): + return dtype + elif isinstance(dtype, string_types): try: return DatetimeTZDtype.construct_from_string(dtype) except TypeError: @@ -40,3 +44,32 @@ def pandas_dtype(dtype): pass return np.dtype(dtype) + +def na_value_for_dtype(dtype): + """ + Return a dtype compat na value + + Parameters + ---------- + dtype : string / dtype + + Returns + ------- + dtype compat na value + """ + + from pandas.core import common as com + from pandas import NaT + dtype = pandas_dtype(dtype) + + if (com.is_datetime64_dtype(dtype) or + com.is_datetime64tz_dtype(dtype) or + com.is_timedelta64_dtype(dtype)): + return NaT + elif com.is_float_dtype(dtype): + return np.nan + elif com.is_integer_dtype(dtype): + return 0 + elif com.is_bool_dtype(dtype): + return False + return np.nan diff --git a/pandas/types/concat.py b/pandas/types/concat.py index eb18023d6409d..5cd7abb6889b7 100644 --- a/pandas/types/concat.py +++ b/pandas/types/concat.py @@ -249,7 +249,7 @@ def convert_to_pydatetime(x, axis): # thus no need to care # we require ALL of the same tz for datetimetz - tzs = set([x.tz for x in to_concat]) + tzs = set([str(x.tz) for x in to_concat]) if len(tzs) == 1: from pandas.tseries.index import DatetimeIndex new_values = np.concatenate([x.tz_localize(None).asi8 diff --git a/pandas/types/dtypes.py b/pandas/types/dtypes.py index e6adbc8500117..140d494c3e1b2 100644 --- a/pandas/types/dtypes.py +++ b/pandas/types/dtypes.py @@ -108,6 +108,16 @@ class CategoricalDtype(ExtensionDtype): kind = 'O' str = '|O08' base = np.dtype('O') + _cache = {} + + def __new__(cls): + + try: + return cls._cache[cls.name] + except KeyError: + c = object.__new__(cls) + cls._cache[cls.name] = c + return c def __hash__(self): # make myself hashable @@ -155,9 +165,11 @@ class DatetimeTZDtype(ExtensionDtype): base = np.dtype('M8[ns]') _metadata = ['unit', 'tz'] _match = re.compile("(datetime64|M8)\[(?P<unit>.+), (?P<tz>.+)\]") 
+ _cache = {} + + def __new__(cls, unit=None, tz=None): + """ Create a new unit if needed, otherwise return from the cache - def __init__(self, unit, tz=None): - """ Parameters ---------- unit : string unit that this represents, currently must be 'ns' @@ -165,28 +177,46 @@ def __init__(self, unit, tz=None): """ if isinstance(unit, DatetimeTZDtype): - self.unit, self.tz = unit.unit, unit.tz - return + unit, tz = unit.unit, unit.tz - if tz is None: + elif unit is None: + # we are called as an empty constructor + # generally for pickle compat + return object.__new__(cls) + + elif tz is None: # we were passed a string that we can construct try: - m = self._match.search(unit) + m = cls._match.search(unit) if m is not None: - self.unit = m.groupdict()['unit'] - self.tz = m.groupdict()['tz'] - return + unit = m.groupdict()['unit'] + tz = m.groupdict()['tz'] except: raise ValueError("could not construct DatetimeTZDtype") + elif isinstance(unit, compat.string_types): + + if unit != 'ns': + raise ValueError("DatetimeTZDtype only supports ns units") + + unit = unit + tz = tz + + if tz is None: raise ValueError("DatetimeTZDtype constructor must have a tz " "supplied") - if unit != 'ns': - raise ValueError("DatetimeTZDtype only supports ns units") - self.unit = unit - self.tz = tz + # set/retrieve from cache + key = (unit, str(tz)) + try: + return cls._cache[key] + except KeyError: + u = object.__new__(cls) + u.unit = unit + u.tz = tz + cls._cache[key] = u + return u @classmethod def construct_from_string(cls, string): diff --git a/pandas/util/print_versions.py b/pandas/util/print_versions.py index 115423f3e3e22..e74568f39418c 100644 --- a/pandas/util/print_versions.py +++ b/pandas/util/print_versions.py @@ -4,6 +4,7 @@ import struct import subprocess import codecs +import importlib def get_sys_info(): @@ -55,7 +56,6 @@ def get_sys_info(): def show_versions(as_json=False): - import imp sys_info = get_sys_info() deps = [ @@ -99,11 +99,7 @@ def show_versions(as_json=False): 
deps_blob = list() for (modname, ver_f) in deps: try: - try: - mod = imp.load_module(modname, *imp.find_module(modname)) - except (ImportError): - import importlib - mod = importlib.import_module(modname) + mod = importlib.import_module(modname) ver = ver_f(mod) deps_blob.append((modname, ver)) except: diff --git a/pandas/util/testing.py b/pandas/util/testing.py index 3ea4a09c453ee..03ccfcab24f58 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -19,18 +19,19 @@ from distutils.version import LooseVersion from numpy.random import randn, rand +from numpy.testing.decorators import slow # noqa import numpy as np import pandas as pd from pandas.core.common import (is_sequence, array_equivalent, is_list_like, is_datetimelike_v_numeric, - is_datetimelike_v_object, is_number, - needs_i8_conversion) + is_datetimelike_v_object, + is_number, is_bool, + needs_i8_conversion, is_categorical_dtype) from pandas.formats.printing import pprint_thing from pandas.core.algorithms import take_1d import pandas.compat as compat -import pandas.lib as lib from pandas.compat import( filter, map, zip, range, unichr, lrange, lmap, lzip, u, callable, Counter, raise_with_traceback, httplib, is_platform_windows, is_platform_32bit, @@ -115,23 +116,71 @@ def assertNotAlmostEquals(self, *args, **kwargs): self.assertNotAlmostEqual)(*args, **kwargs) -def assert_almost_equal(left, right, check_exact=False, **kwargs): +def assert_almost_equal(left, right, check_exact=False, + check_dtype='equiv', check_less_precise=False, + **kwargs): + """Check that the left and right objects are approximately equal. + + Parameters + ---------- + left : object + right : object + check_exact : bool, default False + Whether to compare numbers exactly. + check_dtype : bool or 'equiv', default 'equiv' + Check dtype if both a and b are the same type. + check_less_precise : bool or int, default False + Specify comparison precision. Only used when check_exact is False. + 5 digits (False) or 3 digits (True) after decimal points are compared. 
+ If int, then specify the digits to compare + """ if isinstance(left, pd.Index): return assert_index_equal(left, right, check_exact=check_exact, + exact=check_dtype, + check_less_precise=check_less_precise, **kwargs) elif isinstance(left, pd.Series): return assert_series_equal(left, right, check_exact=check_exact, + check_dtype=check_dtype, + check_less_precise=check_less_precise, **kwargs) elif isinstance(left, pd.DataFrame): return assert_frame_equal(left, right, check_exact=check_exact, + check_dtype=check_dtype, + check_less_precise=check_less_precise, **kwargs) - return _testing.assert_almost_equal(left, right, **kwargs) + else: + # other sequences + if check_dtype: + if is_number(left) and is_number(right): + # do not compare numeric classes, like np.float64 and float + pass + elif is_bool(left) and is_bool(right): + # do not compare bool classes, like np.bool_ and bool + pass + else: + if (isinstance(left, np.ndarray) or + isinstance(right, np.ndarray)): + obj = 'numpy array' + else: + obj = 'Input' + assert_class_equal(left, right, obj=obj) + return _testing.assert_almost_equal( + left, right, + check_dtype=check_dtype, + check_less_precise=check_less_precise, + **kwargs) + +def assert_dict_equal(left, right, compare_keys=True): -assert_dict_equal = _testing.assert_dict_equal + assertIsInstance(left, dict, '[dict] ') + assertIsInstance(right, dict, '[dict] ') + + return _testing.assert_dict_equal(left, right, compare_keys=compare_keys) def randbool(size=(), p=0.5): @@ -657,7 +706,7 @@ def assert_equal(a, b, msg=""): def assert_index_equal(left, right, exact='equiv', check_names=True, check_less_precise=False, check_exact=True, - obj='Index'): + check_categorical=True, obj='Index'): """Check that left and right Index are equal. Parameters @@ -670,11 +719,14 @@ def assert_index_equal(left, right, exact='equiv', check_names=True, Int64Index as well check_names : bool, default True Whether to check the names attribute. 
- check_less_precise : bool, default False + check_less_precise : bool or int, default False Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. + If int, then specify the digits to compare check_exact : bool, default True Whether to compare number exactly. + check_categorical : bool, default True + Whether to compare internal Categorical exactly. obj : str, default 'Index' Specify object name being compared, internally used to show appropriate assertion message @@ -751,6 +803,13 @@ def _get_ilevel_values(index, level): # metadata comparison if check_names: assert_attr_equal('names', left, right, obj=obj) + if isinstance(left, pd.PeriodIndex) or isinstance(right, pd.PeriodIndex): + assert_attr_equal('freq', left, right, obj=obj) + + if check_categorical: + if is_categorical_dtype(left) or is_categorical_dtype(right): + assert_categorical_equal(left.values, right.values, + obj='{0} category'.format(obj)) def assert_class_equal(left, right, exact=True, obj='Input'): @@ -903,18 +962,17 @@ def assertNotIsInstance(obj, cls, msg=''): raise AssertionError(err_msg.format(msg, cls)) -def assert_categorical_equal(res, exp): - assertIsInstance(res, pd.Categorical, '[Categorical] ') - assertIsInstance(exp, pd.Categorical, '[Categorical] ') +def assert_categorical_equal(left, right, check_dtype=True, + obj='Categorical'): + assertIsInstance(left, pd.Categorical, '[Categorical] ') + assertIsInstance(right, pd.Categorical, '[Categorical] ') - assert_index_equal(res.categories, exp.categories) + assert_index_equal(left.categories, right.categories, + obj='{0}.categories'.format(obj)) + assert_numpy_array_equal(left.codes, right.codes, check_dtype=check_dtype, + obj='{0}.codes'.format(obj)) - if not array_equivalent(res.codes, exp.codes): - raise AssertionError( - 'codes not equivalent: {0} vs {1}.'.format(res.codes, exp.codes)) - - if res.ordered != exp.ordered: - raise AssertionError("ordered 
not the same") + assert_attr_equal('ordered', left, right, obj=obj) def raise_assert_detail(obj, message, left, right): @@ -951,33 +1009,29 @@ def assert_numpy_array_equal(left, right, strict_nan=False, assertion message """ + # instance validation + # to show a detailed error message when classes are different + assert_class_equal(left, right, obj=obj) + # both classes must be an np.ndarray + assertIsInstance(left, np.ndarray, '[ndarray] ') + assertIsInstance(right, np.ndarray, '[ndarray] ') + def _raise(left, right, err_msg): if err_msg is None: - # show detailed error - if lib.isscalar(left) and lib.isscalar(right): - # show scalar comparison error - assert_equal(left, right) - elif is_list_like(left) and is_list_like(right): - # some test cases pass list - left = np.asarray(left) - right = np.array(right) - - if left.shape != right.shape: - raise_assert_detail(obj, '{0} shapes are different' - .format(obj), left.shape, right.shape) - - diff = 0 - for l, r in zip(left, right): - # count up differences - if not array_equivalent(l, r, strict_nan=strict_nan): - diff += 1 - - diff = diff * 100.0 / left.size - msg = '{0} values are different ({1} %)'\ - .format(obj, np.round(diff, 5)) - raise_assert_detail(obj, msg, left, right) - else: - assert_class_equal(left, right, obj=obj) + if left.shape != right.shape: + raise_assert_detail(obj, '{0} shapes are different' + .format(obj), left.shape, right.shape) + + diff = 0 + for l, r in zip(left, right): + # count up differences + if not array_equivalent(l, r, strict_nan=strict_nan): + diff += 1 + + diff = diff * 100.0 / left.size + msg = '{0} values are different ({1} %)'\ + .format(obj, np.round(diff, 5)) + raise_assert_detail(obj, msg, left, right) raise AssertionError(err_msg) @@ -1000,6 +1054,7 @@ def assert_series_equal(left, right, check_dtype=True, check_names=True, check_exact=False, check_datetimelike_compat=False, + check_categorical=True, obj='Series'): """Check that left and right Series are equal. 
@@ -1015,15 +1070,18 @@ def assert_series_equal(left, right, check_dtype=True, are identical. check_series_type : bool, default False Whether to check the Series class is identical. - check_less_precise : bool, default False + check_less_precise : bool or int, default False Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. + If int, then specify the digits to compare check_exact : bool, default False Whether to compare number exactly. check_names : bool, default True Whether to check the Series and Index names attribute. check_dateteimelike_compat : bool, default False Compare datetime-like which is comparable ignoring dtype. + check_categorical : bool, default True + Whether to compare internal Categorical exactly. obj : str, default 'Series' Specify object name being compared, internally used to show appropriate assertion message @@ -1050,6 +1108,7 @@ def assert_series_equal(left, right, check_dtype=True, check_names=check_names, check_less_precise=check_less_precise, check_exact=check_exact, + check_categorical=check_categorical, obj='{0}.index'.format(obj)) if check_dtype: @@ -1057,8 +1116,8 @@ def assert_series_equal(left, right, check_dtype=True, if check_exact: assert_numpy_array_equal(left.get_values(), right.get_values(), - obj='{0}'.format(obj), - check_dtype=check_dtype) + check_dtype=check_dtype, + obj='{0}'.format(obj),) elif check_datetimelike_compat: # we want to check only if we have compat dtypes # e.g. integer and M|m are NOT compat, but we can simply check @@ -1074,11 +1133,11 @@ def assert_series_equal(left, right, check_dtype=True, msg = '[datetimelike_compat=True] {0} is not equal to {1}.' 
raise AssertionError(msg.format(left.values, right.values)) else: - assert_numpy_array_equal(left.values, right.values, + assert_numpy_array_equal(left.get_values(), right.get_values(), check_dtype=check_dtype) else: _testing.assert_almost_equal(left.get_values(), right.get_values(), - check_less_precise, + check_less_precise=check_less_precise, check_dtype=check_dtype, obj='{0}'.format(obj)) @@ -1086,6 +1145,11 @@ def assert_series_equal(left, right, check_dtype=True, if check_names: assert_attr_equal('name', left, right, obj=obj) + if check_categorical: + if is_categorical_dtype(left) or is_categorical_dtype(right): + assert_categorical_equal(left.values, right.values, + obj='{0} category'.format(obj)) + # This could be refactored to use the NDFrame.equals method def assert_frame_equal(left, right, check_dtype=True, @@ -1097,6 +1161,7 @@ def assert_frame_equal(left, right, check_dtype=True, by_blocks=False, check_exact=False, check_datetimelike_compat=False, + check_categorical=True, check_like=False, obj='DataFrame'): @@ -1116,9 +1181,10 @@ def assert_frame_equal(left, right, check_dtype=True, are identical. check_frame_type : bool, default False Whether to check the DataFrame class is identical. - check_less_precise : bool, default False + check_less_precise : bool or it, default False Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. + If int, then specify the digits to compare check_names : bool, default True Whether to check the Index names attribute. by_blocks : bool, default False @@ -1128,6 +1194,8 @@ def assert_frame_equal(left, right, check_dtype=True, Whether to compare number exactly. check_dateteimelike_compat : bool, default False Compare datetime-like which is comparable ignoring dtype. + check_categorical : bool, default True + Whether to compare internal Categorical exactly. 
check_like : bool, default False If true, then reindex_like operands obj : str, default 'DataFrame' @@ -1169,6 +1237,7 @@ def assert_frame_equal(left, right, check_dtype=True, check_names=check_names, check_less_precise=check_less_precise, check_exact=check_exact, + check_categorical=check_categorical, obj='{0}.index'.format(obj)) # column comparison @@ -1176,6 +1245,7 @@ def assert_frame_equal(left, right, check_dtype=True, check_names=check_names, check_less_precise=check_less_precise, check_exact=check_exact, + check_categorical=check_categorical, obj='{0}.columns'.format(obj)) # compare by blocks @@ -1200,6 +1270,7 @@ def assert_frame_equal(left, right, check_dtype=True, check_less_precise=check_less_precise, check_exact=check_exact, check_names=check_names, check_datetimelike_compat=check_datetimelike_compat, + check_categorical=check_categorical, obj='DataFrame.iloc[:, {0}]'.format(i)) @@ -1220,9 +1291,10 @@ def assert_panelnd_equal(left, right, Whether to check the Panel dtype is identical. check_panel_type : bool, default False Whether to check the Panel class is identical. - check_less_precise : bool, default False + check_less_precise : bool or int, default False Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. + If int, then specify the digits to compare assert_func : function for comparing data check_names : bool, default True Whether to check the Index names attribute. 
@@ -1284,11 +1356,7 @@ def assert_sp_array_equal(left, right): raise_assert_detail('SparseArray.index', 'index are not equal', left.sp_index, right.sp_index) - if np.isnan(left.fill_value): - assert (np.isnan(right.fill_value)) - else: - assert (left.fill_value == right.fill_value) - + assert_attr_equal('fill_value', left, right) assert_attr_equal('dtype', left, right) assert_numpy_array_equal(left.values, right.values) diff --git a/pandas/util/validators.py b/pandas/util/validators.py index 2166dc45db605..bbfd24df9c13e 100644 --- a/pandas/util/validators.py +++ b/pandas/util/validators.py @@ -42,7 +42,16 @@ def _check_for_default_values(fname, arg_val_dict, compat_args): # as comparison may have been overriden for the left # hand object try: - match = (arg_val_dict[key] == compat_args[key]) + v1 = arg_val_dict[key] + v2 = compat_args[key] + + # check for None-ness otherwise we could end up + # comparing a numpy array vs None + if (v1 is not None and v2 is None) or \ + (v1 is None and v2 is not None): + match = False + else: + match = (v1 == v2) if not is_bool(match): raise ValueError("'match' is not a boolean")
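The `validators.py` hunk above checks for None-ness before comparing argument values, because comparing a NumPy array to `None` with `==` yields an elementwise array rather than a bool. A minimal standalone sketch of that guard (`values_match` is a hypothetical name for illustration, not the pandas helper):

```python
import numpy as np

def values_match(v1, v2):
    # Guard: `ndarray == None` returns an elementwise array, not a bool,
    # so handle None explicitly before the equality comparison.
    if (v1 is None) != (v2 is None):
        return False
    match = v1 == v2
    # An array comparison gives an array of bools; collapse it.
    if isinstance(match, np.ndarray):
        return bool(match.all())
    return bool(match)
```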
Continued in https://github.com/pydata/pandas/pull/13375 --- closes #7271 By passing a dict of {column name/column index: dtype}, multiple columns can be cast to different data types in a single command. Now users can do: `df = df.astype({'my_bool': 'bool', 'my_int': 'int'})` or: `df = df.astype({0: 'bool', 1: 'int'})` instead of: ``` df['my_bool'] = df.my_bool.astype('bool') df['my_int'] = df.my_int.astype('int') ```
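The dict-based cast described in the body can be sketched as follows (a minimal example assuming a pandas version that ships this feature; column names are illustrative):

```python
import pandas as pd

# Cast several columns in one call by passing a mapping of
# column name to target dtype, instead of one astype per column.
df = pd.DataFrame({"my_bool": [0, 1, 0], "my_int": [1.5, 2.0, 3.0]})
df = df.astype({"my_bool": "bool", "my_int": "int64"})
```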
https://api.github.com/repos/pandas-dev/pandas/pulls/12086
2016-01-18T22:33:29Z
2016-06-05T23:41:37Z
null
2023-05-11T01:13:20Z
Fix issue with merge-pr.py script
diff --git a/merge-py.py b/scripts/merge-py.py similarity index 82% rename from merge-py.py rename to scripts/merge-py.py index e8fb52674208f..9d611213ba517 100755 --- a/merge-py.py +++ b/scripts/merge-py.py @@ -17,7 +17,8 @@ # limitations under the License. # -# Utility for creating well-formed pull request merges and pushing them to Apache. +# Utility for creating well-formed pull request merges and pushing them to +# Apache. # usage: ./apache-pr-merge.py (see config env vars below) # # Lightly modified from version of this script in incubator-parquet-format @@ -121,25 +122,31 @@ def clean_up(): # merge the requested PR and return the merge hash def merge_pr(pr_num, target_ref): pr_branch_name = "%s_MERGE_PR_%s" % (BRANCH_PREFIX, pr_num) - target_branch_name = "%s_MERGE_PR_%s_%s" % (BRANCH_PREFIX, pr_num, target_ref.upper()) - run_cmd("git fetch %s pull/%s/head:%s" % (PR_REMOTE_NAME, pr_num, pr_branch_name)) - run_cmd("git fetch %s %s:%s" % (PUSH_REMOTE_NAME, target_ref, target_branch_name)) + target_branch_name = "%s_MERGE_PR_%s_%s" % (BRANCH_PREFIX, pr_num, + target_ref.upper()) + run_cmd("git fetch %s pull/%s/head:%s" % (PR_REMOTE_NAME, pr_num, + pr_branch_name)) + run_cmd("git fetch %s %s:%s" % (PUSH_REMOTE_NAME, target_ref, + target_branch_name)) run_cmd("git checkout %s" % target_branch_name) had_conflicts = False try: run_cmd(['git', 'merge', pr_branch_name, '--squash']) except Exception as e: - msg = "Error merging: %s\nWould you like to manually fix-up this merge?" % e + msg = ("Error merging: %s\nWould you like to manually fix-up " + "this merge?" % e) continue_maybe(msg) - msg = "Okay, please fix any conflicts and 'git add' conflicting files... Finished?" + msg = ("Okay, please fix any conflicts and 'git add' " + "conflicting files... 
Finished?") continue_maybe(msg) had_conflicts = True commit_authors = run_cmd(['git', 'log', 'HEAD..%s' % pr_branch_name, '--pretty=format:%an <%ae>']).split("\n") distinct_authors = sorted(set(commit_authors), - key=lambda x: commit_authors.count(x), reverse=True) + key=lambda x: commit_authors.count(x), + reverse=True) primary_author = distinct_authors[0] commits = run_cmd(['git', 'log', 'HEAD..%s' % pr_branch_name, '--pretty=format:%h [%an] %s']).split("\n\n") @@ -147,7 +154,7 @@ def merge_pr(pr_num, target_ref): merge_message_flags = [] merge_message_flags += ["-m", title] - if body != None: + if body is not None: merge_message_flags += ["-m", '\n'.join(textwrap.wrap(body))] authors = "\n".join(["Author: %s" % a for a in distinct_authors]) @@ -157,14 +164,17 @@ def merge_pr(pr_num, target_ref): if had_conflicts: committer_name = run_cmd("git config --get user.name").strip() committer_email = run_cmd("git config --get user.email").strip() - message = "This patch had conflicts when merged, resolved by\nCommitter: %s <%s>" % ( - committer_name, committer_email) + message = ("This patch had conflicts when merged, " + "resolved by\nCommitter: %s <%s>" + % (committer_name, committer_email)) merge_message_flags += ["-m", message] - # The string "Closes #%s" string is required for GitHub to correctly close the PR + # The string "Closes #%s" string is required for GitHub to correctly close + # the PR merge_message_flags += [ "-m", - "Closes #%s from %s and squashes the following commits:" % (pr_num, pr_repo_desc)] + "Closes #%s from %s and squashes the following commits:" + % (pr_num, pr_repo_desc)] for c in commits: merge_message_flags += ["-m", c] @@ -229,11 +239,11 @@ def fix_version_from_branch(branch, versions): return filter(lambda x: x.name.startswith(branch_ver), versions)[-1] -branches = get_json("%s/branches" % GITHUB_API_BASE) -branch_names = filter(lambda x: x.startswith("branch-"), - [x['name'] for x in branches]) +# branches = get_json("%s/branches" % 
GITHUB_API_BASE) +# branch_names = filter(lambda x: x.startswith("branch-"), +# [x['name'] for x in branches]) # Assumes branch names can be sorted lexicographically -latest_branch = sorted(branch_names, reverse=True)[0] +# latest_branch = sorted(branch_names, reverse=True)[0] pr_num = raw_input("Which pull request would you like to merge? (e.g. 34): ") pr = get_json("%s/pulls/%s" % (GITHUB_API_BASE, pr_num)) @@ -247,20 +257,8 @@ def fix_version_from_branch(branch, versions): pr_repo_desc = "%s/%s" % (user_login, base_ref) if pr["merged"] is True: - print("Pull request {0} has already been merged, assuming " - "you want to backport".format(pr_num)) - merge_commit_desc = run_cmd([ - 'git', 'log', '--merges', '--first-parent', - '--grep=pull request #%s' % pr_num, '--oneline']).split("\n")[0] - if merge_commit_desc == "": - fail("Couldn't find any merge commit for #{0}" - ", you may need to update HEAD.".format(pr_num)) - - merge_hash = merge_commit_desc[:7] - message = merge_commit_desc[8:] - - print("Found: %s" % message) - maybe_cherry_pick(pr_num, merge_hash, latest_branch) + print("Pull request {0} has already been merged, please backport manually" + .format(pr_num)) sys.exit(0) if not bool(pr["mergeable"]): @@ -268,16 +266,11 @@ def fix_version_from_branch(branch, versions): "Continue? (experts only!)".format(pr_num)) continue_maybe(msg) -print ("\n=== Pull Request #%s ===" % pr_num) -print ("title\t%s\nsource\t%s\ntarget\t%s\nurl\t%s" % ( - title, pr_repo_desc, target_ref, url)) +print("\n=== Pull Request #%s ===" % pr_num) +print("title\t%s\nsource\t%s\ntarget\t%s\nurl\t%s" + % (title, pr_repo_desc, target_ref, url)) continue_maybe("Proceed with merging pull request #%s?" % pr_num) merged_refs = [target_ref] merge_hash = merge_pr(pr_num, target_ref) - -pick_prompt = "Would you like to pick %s into another branch?" 
% merge_hash -while raw_input("\n%s (y/n): " % pick_prompt).lower() == "y": - merged_refs = merged_refs + [cherry_pick(pr_num, merge_hash, - latest_branch)]
The commented line shown produces an error on fresh clones of pandas, and is not needed AFAICT.
https://api.github.com/repos/pandas-dev/pandas/pulls/12083
2016-01-18T16:32:44Z
2016-01-19T12:07:40Z
null
2016-01-19T12:08:21Z
CLN: fix all flake8 warnings in pandas/tools
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py index 9211ffb5cfde5..6ea217c4a72a7 100644 --- a/pandas/tools/merge.py +++ b/pandas/tools/merge.py @@ -181,8 +181,8 @@ def __init__(self, left, right, how='inner', on=None, elif isinstance(self.indicator, bool): self.indicator_name = '_merge' if self.indicator else None else: - raise ValueError('indicator option can only accept boolean or string arguments') - + raise ValueError( + 'indicator option can only accept boolean or string arguments') # note this function has side effects (self.left_join_keys, @@ -191,7 +191,8 @@ def __init__(self, left, right, how='inner', on=None, def get_result(self): if self.indicator: - self.left, self.right = self._indicator_pre_merge(self.left, self.right) + self.left, self.right = self._indicator_pre_merge( + self.left, self.right) join_index, left_indexer, right_indexer = self._get_join_info() @@ -225,9 +226,11 @@ def _indicator_pre_merge(self, left, right): for i in ['_left_indicator', '_right_indicator']: if i in columns: - raise ValueError("Cannot use `indicator=True` option when data contains a column named {}".format(i)) + raise ValueError("Cannot use `indicator=True` option when " + "data contains a column named {}".format(i)) if self.indicator_name in columns: - raise ValueError("Cannot use name of an existing column for indicator column") + raise ValueError( + "Cannot use name of an existing column for indicator column") left = left.copy() right = right.copy() @@ -245,11 +248,15 @@ def _indicator_post_merge(self, result): result['_left_indicator'] = result['_left_indicator'].fillna(0) result['_right_indicator'] = result['_right_indicator'].fillna(0) - result[self.indicator_name] = Categorical((result['_left_indicator'] + result['_right_indicator']), categories=[1,2,3]) - result[self.indicator_name] = result[self.indicator_name].cat.rename_categories(['left_only', 'right_only', 'both']) - - result = result.drop(labels=['_left_indicator', '_right_indicator'], axis=1) 
+ result[self.indicator_name] = Categorical((result['_left_indicator'] + + result['_right_indicator']), + categories=[1, 2, 3]) + result[self.indicator_name] = ( + result[self.indicator_name] + .cat.rename_categories(['left_only', 'right_only', 'both'])) + result = result.drop(labels=['_left_indicator', '_right_indicator'], + axis=1) return result def _maybe_add_join_keys(self, result, left_indexer, right_indexer): @@ -274,8 +281,9 @@ def _maybe_add_join_keys(self, result, left_indexer, right_indexer): continue right_na_indexer = right_indexer.take(na_indexer) - result.iloc[na_indexer,key_indexer] = com.take_1d(self.right_join_keys[i], - right_na_indexer) + result.iloc[na_indexer, key_indexer] = ( + com.take_1d(self.right_join_keys[i], + right_na_indexer)) elif name in self.right: if len(self.right) == 0: continue @@ -285,8 +293,9 @@ def _maybe_add_join_keys(self, result, left_indexer, right_indexer): continue left_na_indexer = left_indexer.take(na_indexer) - result.iloc[na_indexer,key_indexer] = com.take_1d(self.left_join_keys[i], - left_na_indexer) + result.iloc[na_indexer, key_indexer] = ( + com.take_1d(self.left_join_keys[i], + left_na_indexer)) elif left_indexer is not None \ and isinstance(self.left_join_keys[i], np.ndarray): @@ -384,8 +393,10 @@ def _get_merge_keys(self): left_drop = [] left, right = self.left, self.right - is_lkey = lambda x: isinstance(x, (np.ndarray, ABCSeries)) and len(x) == len(left) - is_rkey = lambda x: isinstance(x, (np.ndarray, ABCSeries)) and len(x) == len(right) + is_lkey = lambda x: isinstance( + x, (np.ndarray, ABCSeries)) and len(x) == len(left) + is_rkey = lambda x: isinstance( + x, (np.ndarray, ABCSeries)) and len(x) == len(right) # ugh, spaghetti re #733 if _any(self.left_on) and _any(self.right_on): @@ -507,13 +518,13 @@ def _get_join_indexers(left_keys, right_keys, sort=False, how='inner'): from functools import partial assert len(left_keys) == len(right_keys), \ - 'left_key and right_keys must be the same length' + 
'left_key and right_keys must be the same length' # bind `sort` arg. of _factorize_keys fkeys = partial(_factorize_keys, sort=sort) # get left & right join labels and num. of levels at each location - llab, rlab, shape = map(list, zip( * map(fkeys, left_keys, right_keys))) + llab, rlab, shape = map(list, zip(* map(fkeys, left_keys, right_keys))) # get flat i8 keys from label lists lkey, rkey = _get_join_keys(llab, rlab, shape, sort) @@ -524,7 +535,7 @@ def _get_join_indexers(left_keys, right_keys, sort=False, how='inner'): lkey, rkey, count = fkeys(lkey, rkey) # preserve left frame order if how == 'left' and sort == False - kwargs = {'sort':sort} if how == 'left' else {} + kwargs = {'sort': sort} if how == 'left' else {} join_func = _join_functions[how] return join_func(lkey, rkey, count, **kwargs) @@ -563,8 +574,10 @@ def get_result(self): left_join_indexer = left_indexer right_join_indexer = right_indexer - lindexers = {1: left_join_indexer} if left_join_indexer is not None else {} - rindexers = {1: right_join_indexer} if right_join_indexer is not None else {} + lindexers = { + 1: left_join_indexer} if left_join_indexer is not None else {} + rindexers = { + 1: right_join_indexer} if right_join_indexer is not None else {} result_data = concatenate_block_managers( [(ldata, lindexers), (rdata, rindexers)], @@ -586,7 +599,7 @@ def _get_multiindex_indexer(join_keys, index, sort): fkeys = partial(_factorize_keys, sort=sort) # left & right join labels and num. 
of levels at each location - rlab, llab, shape = map(list, zip( * map(fkeys, index.levels, join_keys))) + rlab, llab, shape = map(list, zip(* map(fkeys, index.levels, join_keys))) if sort: rlab = list(map(np.take, rlab, index.labels)) else: @@ -751,12 +764,13 @@ def _get_join_keys(llab, rlab, shape, sort): return _get_join_keys(llab, rlab, shape, sort) -#---------------------------------------------------------------------- +# --------------------------------------------------------------------- # Concatenate DataFrame objects def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False, - keys=None, levels=None, names=None, verify_integrity=False, copy=True): + keys=None, levels=None, names=None, verify_integrity=False, + copy=True): """ Concatenate pandas objects along a particular axis with optional set logic along the other axes. Can also add a layer of hierarchical indexing on the @@ -885,10 +899,11 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None, else: # filter out the empties # if we have not multi-index possibiltes - df = DataFrame([ obj.shape for obj in objs ]).sum(1) - non_empties = df[df!=0] - if len(non_empties) and (keys is None and names is None and levels is None and join_axes is None): - objs = [ objs[i] for i in non_empties.index ] + df = DataFrame([obj.shape for obj in objs]).sum(1) + non_empties = df[df != 0] + if (len(non_empties) and (keys is None and names is None and + levels is None and join_axes is None)): + objs = [objs[i] for i in non_empties.index] sample = objs[0] if sample is None: @@ -917,12 +932,12 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None, if ndim == max_ndim: pass - elif ndim != max_ndim-1: + elif ndim != max_ndim - 1: raise ValueError("cannot concatenate unaligned mixed " "dimensional NDFrame objects") else: - name = getattr(obj,'name',None) + name = getattr(obj, 'name', None) if ignore_index or name is None: name = current_column current_column += 1 @@ -931,7 +946,7 @@ def 
__init__(self, objs, axis=0, join='outer', join_axes=None, # to line up if self._is_frame and axis == 1: name = 0 - obj = sample._constructor({ name : obj }) + obj = sample._constructor({name: obj}) self.objs.append(obj) @@ -957,17 +972,23 @@ def get_result(self): if self.axis == 0: new_data = com._concat_compat([x._values for x in self.objs]) name = com._consensus_name_attr(self.objs) - return Series(new_data, index=self.new_axes[0], name=name).__finalize__(self, method='concat') + return (Series(new_data, index=self.new_axes[0], name=name) + .__finalize__(self, method='concat')) # combine as columns in a frame else: data = dict(zip(range(len(self.objs)), self.objs)) index, columns = self.new_axes tmpdf = DataFrame(data, index=index) - # checks if the column variable already stores valid column names (because set via the 'key' argument - # in the 'concat' function call. If that's not the case, use the series names as column names - if columns.equals(Index(np.arange(len(self.objs)))) and not self.ignore_index: - columns = np.array([ data[i].name for i in range(len(data)) ], dtype='object') + # checks if the column variable already stores valid column + # names (because set via the 'key' argument in the 'concat' + # function call. 
If that's not the case, use the series names + # as column names + if (columns.equals(Index(np.arange(len(self.objs)))) and + not self.ignore_index): + columns = np.array([data[i].name + for i in range(len(data))], + dtype='object') indexer = isnull(columns) if indexer.any(): columns[indexer] = np.arange(len(indexer[indexer])) @@ -992,11 +1013,13 @@ def get_result(self): mgrs_indexers.append((obj._data, indexers)) new_data = concatenate_block_managers( - mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy) + mgrs_indexers, self.new_axes, + concat_axis=self.axis, copy=self.copy) if not self.copy: new_data._consolidate_inplace() - return self.objs[0]._from_axes(new_data, self.new_axes).__finalize__(self, method='concat') + return (self.objs[0]._from_axes(new_data, self.new_axes) + .__finalize__(self, method='concat')) def _get_result_dim(self): if self._is_series and self.axis == 1: @@ -1091,7 +1114,7 @@ def _maybe_check_integrity(self, concat_index): if not concat_index.is_unique: overlap = concat_index.get_duplicates() raise ValueError('Indexes have overlapping values: %s' - % str(overlap)) + % str(overlap)) def _concat_indexes(indexes): @@ -1106,7 +1129,8 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None): names = [None] * len(zipped) if levels is None: - levels = [Categorical.from_array(zp, ordered=True).categories for zp in zipped] + levels = [Categorical.from_array( + zp, ordered=True).categories for zp in zipped] else: levels = [_ensure_index(x) for x in levels] else: @@ -1152,7 +1176,7 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None): names = list(names) else: # make sure that all of the passed indices have the same nlevels - if not len(set([ i.nlevels for i in indexes ])) == 1: + if not len(set([i.nlevels for i in indexes])) == 1: raise AssertionError("Cannot concat indices that do" " not have the same number of levels") @@ -1201,7 +1225,8 @@ def _make_concat_multiindex(indexes, keys, levels=None, 
names=None): def _should_fill(lname, rname): - if not isinstance(lname, compat.string_types) or not isinstance(rname, compat.string_types): + if (not isinstance(lname, compat.string_types) or + not isinstance(rname, compat.string_types)): return True return lname == rname diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py index 97bd1f86d01cf..7a04847947bf2 100644 --- a/pandas/tools/pivot.py +++ b/pandas/tools/pivot.py @@ -24,13 +24,17 @@ def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean', ---------- data : DataFrame values : column to aggregate, optional - index : a column, Grouper, array which has the same length as data, or list of them. - Keys to group by on the pivot table index. - If an array is passed, it is being used as the same manner as column values. - columns : a column, Grouper, array which has the same length as data, or list of them. - Keys to group by on the pivot table column. - If an array is passed, it is being used as the same manner as column values. - aggfunc : function, default numpy.mean, or list of functions + index : column, Grouper, array, or list of the previous + If an array is passed, it must be the same length as the data. The list + can contain any of the other types (except list). + Keys to group by on the pivot table index. If an array is passed, it + is being used as the same manner as column values. + columns : column, Grouper, array, or list of the previous + If an array is passed, it must be the same length as the data. The list + can contain any of the other types (except list). + Keys to group by on the pivot table column. If an array is passed, it + is being used as the same manner as column values. 
+ aggfunc : function or list of functions, default numpy.mean If list of functions passed, the resulting pivot table will have hierarchical columns whose top level are the function names (inferred from the function objects themselves) @@ -78,7 +82,8 @@ def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean', pieces = [] keys = [] for func in aggfunc: - table = pivot_table(data, values=values, index=index, columns=columns, + table = pivot_table(data, values=values, index=index, + columns=columns, fill_value=fill_value, aggfunc=func, margins=margins) pieces.append(table) @@ -124,7 +129,7 @@ def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean', m = MultiIndex.from_arrays(cartesian_product(table.index.levels)) table = table.reindex_axis(m, axis=0) except AttributeError: - pass # it's a single level + pass # it's a single level try: m = MultiIndex.from_arrays(cartesian_product(table.columns.levels)) @@ -197,7 +202,7 @@ def _add_margins(table, data, values, rows, cols, aggfunc, result, margin_keys, row_margin = marginal_result_set else: marginal_result_set = _generate_marginal_results_without_values( - table, data, rows, cols, aggfunc, margins_name) + table, data, rows, cols, aggfunc, margins_name) if not isinstance(marginal_result_set, tuple): return marginal_result_set result, margin_keys, row_margin = marginal_result_set @@ -273,7 +278,8 @@ def _all_key(key): except TypeError: # we cannot reshape, so coerce the axis - piece.set_axis(cat_axis, piece._get_axis(cat_axis)._to_safe_for_reshape()) + piece.set_axis(cat_axis, piece._get_axis( + cat_axis)._to_safe_for_reshape()) piece[all_key] = margin[key] table_pieces.append(piece) @@ -349,13 +355,15 @@ def _all_key(): def _convert_by(by): if by is None: by = [] - elif (np.isscalar(by) or isinstance(by, (np.ndarray, Index, Series, Grouper)) + elif (np.isscalar(by) or isinstance(by, (np.ndarray, Index, + Series, Grouper)) or hasattr(by, '__call__')): by = [by] else: by = list(by) 
return by + def crosstab(index, columns, values=None, rownames=None, colnames=None, aggfunc=None, margins=False, dropna=True): """ diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 43bcd2373df69..8f7c0a2b1be9a 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -1,6 +1,5 @@ # being a bit too dynamic # pylint: disable=E1101 -import datetime import warnings import re from math import ceil @@ -17,10 +16,7 @@ from pandas.core.generic import _shared_docs, _shared_doc_kwargs from pandas.core.index import Index, MultiIndex from pandas.core.series import Series, remove_na -from pandas.tseries.index import DatetimeIndex -from pandas.tseries.period import PeriodIndex, Period -import pandas.tseries.frequencies as frequencies -from pandas.tseries.offsets import DateOffset +from pandas.tseries.period import PeriodIndex from pandas.compat import range, lrange, lmap, map, zip, string_types import pandas.compat as compat from pandas.util.decorators import Appender @@ -36,65 +32,65 @@ # to True. 
mpl_stylesheet = { 'axes.axisbelow': True, - 'axes.color_cycle': ['#348ABD', - '#7A68A6', - '#A60628', - '#467821', - '#CF4457', - '#188487', - '#E24A33'], - 'axes.edgecolor': '#bcbcbc', - 'axes.facecolor': '#eeeeee', - 'axes.grid': True, - 'axes.labelcolor': '#555555', - 'axes.labelsize': 'large', - 'axes.linewidth': 1.0, - 'axes.titlesize': 'x-large', - 'figure.edgecolor': 'white', - 'figure.facecolor': 'white', - 'figure.figsize': (6.0, 4.0), - 'figure.subplot.hspace': 0.5, - 'font.family': 'monospace', - 'font.monospace': ['Andale Mono', - 'Nimbus Mono L', - 'Courier New', - 'Courier', - 'Fixed', - 'Terminal', - 'monospace'], - 'font.size': 10, - 'interactive': True, - 'keymap.all_axes': ['a'], - 'keymap.back': ['left', 'c', 'backspace'], - 'keymap.forward': ['right', 'v'], - 'keymap.fullscreen': ['f'], - 'keymap.grid': ['g'], - 'keymap.home': ['h', 'r', 'home'], - 'keymap.pan': ['p'], - 'keymap.save': ['s'], - 'keymap.xscale': ['L', 'k'], - 'keymap.yscale': ['l'], - 'keymap.zoom': ['o'], - 'legend.fancybox': True, - 'lines.antialiased': True, - 'lines.linewidth': 1.0, - 'patch.antialiased': True, - 'patch.edgecolor': '#EEEEEE', - 'patch.facecolor': '#348ABD', - 'patch.linewidth': 0.5, - 'toolbar': 'toolbar2', - 'xtick.color': '#555555', - 'xtick.direction': 'in', - 'xtick.major.pad': 6.0, - 'xtick.major.size': 0.0, - 'xtick.minor.pad': 6.0, - 'xtick.minor.size': 0.0, - 'ytick.color': '#555555', - 'ytick.direction': 'in', - 'ytick.major.pad': 6.0, - 'ytick.major.size': 0.0, - 'ytick.minor.pad': 6.0, - 'ytick.minor.size': 0.0 + 'axes.color_cycle': ['#348ABD', + '#7A68A6', + '#A60628', + '#467821', + '#CF4457', + '#188487', + '#E24A33'], + 'axes.edgecolor': '#bcbcbc', + 'axes.facecolor': '#eeeeee', + 'axes.grid': True, + 'axes.labelcolor': '#555555', + 'axes.labelsize': 'large', + 'axes.linewidth': 1.0, + 'axes.titlesize': 'x-large', + 'figure.edgecolor': 'white', + 'figure.facecolor': 'white', + 'figure.figsize': (6.0, 4.0), + 'figure.subplot.hspace': 0.5, + 
'font.family': 'monospace', + 'font.monospace': ['Andale Mono', + 'Nimbus Mono L', + 'Courier New', + 'Courier', + 'Fixed', + 'Terminal', + 'monospace'], + 'font.size': 10, + 'interactive': True, + 'keymap.all_axes': ['a'], + 'keymap.back': ['left', 'c', 'backspace'], + 'keymap.forward': ['right', 'v'], + 'keymap.fullscreen': ['f'], + 'keymap.grid': ['g'], + 'keymap.home': ['h', 'r', 'home'], + 'keymap.pan': ['p'], + 'keymap.save': ['s'], + 'keymap.xscale': ['L', 'k'], + 'keymap.yscale': ['l'], + 'keymap.zoom': ['o'], + 'legend.fancybox': True, + 'lines.antialiased': True, + 'lines.linewidth': 1.0, + 'patch.antialiased': True, + 'patch.edgecolor': '#EEEEEE', + 'patch.facecolor': '#348ABD', + 'patch.linewidth': 0.5, + 'toolbar': 'toolbar2', + 'xtick.color': '#555555', + 'xtick.direction': 'in', + 'xtick.major.pad': 6.0, + 'xtick.major.size': 0.0, + 'xtick.minor.pad': 6.0, + 'xtick.minor.size': 0.0, + 'ytick.color': '#555555', + 'ytick.direction': 'in', + 'ytick.major.pad': 6.0, + 'ytick.major.size': 0.0, + 'ytick.minor.pad': 6.0, + 'ytick.minor.size': 0.0 } @@ -106,6 +102,7 @@ def _mpl_le_1_2_1(): except ImportError: return False + def _mpl_ge_1_3_1(): try: import matplotlib @@ -116,18 +113,20 @@ def _mpl_ge_1_3_1(): except ImportError: return False + def _mpl_ge_1_4_0(): try: import matplotlib - return (matplotlib.__version__ >= LooseVersion('1.4') + return (matplotlib.__version__ >= LooseVersion('1.4') or matplotlib.__version__[0] == '0') except ImportError: return False + def _mpl_ge_1_5_0(): try: import matplotlib - return (matplotlib.__version__ >= LooseVersion('1.5') + return (matplotlib.__version__ >= LooseVersion('1.5') or matplotlib.__version__[0] == '0') except ImportError: return False @@ -142,6 +141,7 @@ def _mpl_ge_1_5_0(): def _get_standard_kind(kind): return {'density': 'kde'}.get(kind, kind) + def _get_standard_colors(num_colors=None, colormap=None, color_type='default', color=None): import matplotlib.pyplot as plt @@ -164,7 +164,8 @@ def 
_get_standard_colors(num_colors=None, colormap=None, color_type='default', # need to call list() on the result to copy so we don't # modify the global rcParams below try: - colors = [c['color'] for c in list(plt.rcParams['axes.prop_cycle'])] + colors = [c['color'] + for c in list(plt.rcParams['axes.prop_cycle'])] except KeyError: colors = list(plt.rcParams.get('axes.color_cycle', list('bgrcmyk'))) @@ -172,6 +173,7 @@ def _get_standard_colors(num_colors=None, colormap=None, color_type='default', colors = list(colors) elif color_type == 'random': import random + def random_color(column): random.seed(column) return [random.random() for _ in range(3)] @@ -183,6 +185,7 @@ def random_color(column): if isinstance(colors, compat.string_types): import matplotlib.colors conv = matplotlib.colors.ColorConverter() + def _maybe_valid_colors(colors): try: [conv.to_rgba(c) for c in colors] @@ -207,7 +210,7 @@ def _maybe_valid_colors(colors): pass if len(colors) != num_colors: - multiple = num_colors//len(colors) - 1 + multiple = num_colors // len(colors) - 1 mod = num_colors % len(colors) colors += multiple * colors @@ -215,6 +218,7 @@ def _maybe_valid_colors(colors): return colors + class _Options(dict): """ Stores pandas plotting options. @@ -319,7 +323,6 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False, >>> scatter_matrix(df, alpha=0.2) """ import matplotlib.pyplot as plt - from matplotlib.artist import setp df = frame._get_numeric_data() n = df.columns.size @@ -345,7 +348,7 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False, values = df[a].values[mask[a].values] rmin_, rmax_ = np.min(values), np.max(values) rdelta_ext = (rmax_ - rmin_) * range_padding / 2. 
- boundaries_list.append((rmin_ - rdelta_ext, rmax_+ rdelta_ext)) + boundaries_list.append((rmin_ - rdelta_ext, rmax_ + rdelta_ext)) for i, a in zip(lrange(n), df.columns): for j, b in zip(lrange(n), df.columns): @@ -379,9 +382,9 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False, ax.set_xlabel(b) ax.set_ylabel(a) - if j!= 0: + if j != 0: ax.yaxis.set_visible(False) - if i != n-1: + if i != n - 1: ax.xaxis.set_visible(False) if len(df.columns) > 1: @@ -413,6 +416,7 @@ def _gcf(): import matplotlib.pyplot as plt return plt.gcf() + def _get_marker_compat(marker): import matplotlib.lines as mlines import matplotlib as mpl @@ -422,6 +426,7 @@ def _get_marker_compat(marker): return 'o' return marker + def radviz(frame, class_column, ax=None, color=None, colormap=None, **kwds): """RadViz - a multivariate data visualization algorithm @@ -506,18 +511,22 @@ def normalize(series): ax.axis('equal') return ax + @deprecate_kwarg(old_arg_name='data', new_arg_name='frame') def andrews_curves(frame, class_column, ax=None, samples=200, color=None, colormap=None, **kwds): """ - Generates a matplotlib plot of Andrews curves, for visualising clusters of multivariate data. + Generates a matplotlib plot of Andrews curves, for visualising clusters of + multivariate data. Andrews curves have the functional form: - f(t) = x_1/sqrt(2) + x_2 sin(t) + x_3 cos(t) + x_4 sin(2t) + x_5 cos(2t) + ... + f(t) = x_1/sqrt(2) + x_2 sin(t) + x_3 cos(t) + + x_4 sin(2t) + x_5 cos(2t) + ... - Where x coefficients correspond to the values of each dimension and t is linearly spaced between -pi and +pi. Each - row of frame then corresponds to a single curve. + Where x coefficients correspond to the values of each dimension and t is + linearly spaced between -pi and +pi. Each row of frame then corresponds to + a single curve. 
Parameters: ----------- @@ -547,12 +556,14 @@ def f(t): x1 = amplitudes[0] result = x1 / sqrt(2.0) - # Take the rest of the coefficients and resize them appropriately. Take a copy of amplitudes as otherwise - # numpy deletes the element from amplitudes itself. + # Take the rest of the coefficients and resize them + # appropriately. Take a copy of amplitudes as otherwise numpy + # deletes the element from amplitudes itself. coeffs = np.delete(np.copy(amplitudes), 0) coeffs.resize(int((coeffs.size + 1) / 2), 2) - # Generate the harmonics and arguments for the sin and cos functions. + # Generate the harmonics and arguments for the sin and cos + # functions. harmonics = np.arange(0, coeffs.shape[0]) + 1 trig_args = np.outer(harmonics, t) @@ -652,6 +663,7 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds): plt.setp(axis.get_yticklabels(), fontsize=8) return fig + @deprecate_kwarg(old_arg_name='colors', new_arg_name='color') @deprecate_kwarg(old_arg_name='data', new_arg_name='frame', stacklevel=3) def parallel_coordinates(frame, class_column, cols=None, ax=None, color=None, @@ -692,12 +704,14 @@ def parallel_coordinates(frame, class_column, cols=None, ax=None, color=None, >>> from pandas import read_csv >>> from pandas.tools.plotting import parallel_coordinates >>> from matplotlib import pyplot as plt - >>> df = read_csv('https://raw.github.com/pydata/pandas/master/pandas/tests/data/iris.csv') - >>> parallel_coordinates(df, 'Name', color=('#556270', '#4ECDC4', '#C7F464')) + >>> df = read_csv('https://raw.github.com/pydata/pandas/master' + '/pandas/tests/data/iris.csv') + >>> parallel_coordinates(df, 'Name', color=('#556270', + '#4ECDC4', '#C7F464')) >>> plt.show() """ if axvlines_kwds is None: - axvlines_kwds = {'linewidth':1,'color':'black'} + axvlines_kwds = {'linewidth': 1, 'color': 'black'} import matplotlib.pyplot as plt n = len(frame) @@ -811,7 +825,8 @@ def autocorrelation_plot(series, ax=None, **kwds): c0 = np.sum((data - mean) ** 2) / float(n) 
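Reviewer note: the rewrapped docstring and `f(t)` body above implement the Andrews functional form f(t) = x_1/sqrt(2) + x_2 sin(t) + x_3 cos(t) + x_4 sin(2t) + …. The same curve for a single row of amplitudes, self-contained:

```python
import numpy as np
from math import sqrt

def andrews_curve(amplitudes, t):
    # f(t) = x_1/sqrt(2) + x_2 sin(t) + x_3 cos(t) + x_4 sin(2t) + ...
    amplitudes = np.asarray(amplitudes, dtype=float)
    result = amplitudes[0] / sqrt(2.0)
    # pair the remaining coefficients as (sin, cos) weights per harmonic;
    # copy first so resize does not touch the caller's array
    coeffs = np.delete(np.copy(amplitudes), 0)
    coeffs.resize(int((coeffs.size + 1) / 2), 2)
    harmonics = np.arange(coeffs.shape[0]) + 1
    trig_args = np.outer(harmonics, t)
    result = result + np.sum(coeffs[:, 0, np.newaxis] * np.sin(trig_args) +
                             coeffs[:, 1, np.newaxis] * np.cos(trig_args),
                             axis=0)
    return result
```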
def r(h): - return ((data[:n - h] - mean) * (data[h:] - mean)).sum() / float(n) / c0 + return ((data[:n - h] - mean) * + (data[h:] - mean)).sum() / float(n) / c0 x = np.arange(n) + 1 y = lmap(r, x) z95 = 1.959963984540054 @@ -873,10 +888,11 @@ def __init__(self, data, kind=None, by=None, subplots=False, sharex=None, if sharex is None: if ax is None: - self.sharex = True + self.sharex = True else: - # if we get an axis, the users should do the visibility setting... - self.sharex = False + # if we get an axis, the users should do the visibility + # setting... + self.sharex = False else: self.sharex = sharex @@ -968,10 +984,11 @@ def _validate_color_args(self): # need only a single match for s in styles: if re.match('^[a-z]+?', s) is not None: - raise ValueError("Cannot pass 'style' string with a color " - "symbol and 'color' keyword argument. Please" - " use one or the other or pass 'style' " - "without a color symbol") + raise ValueError( + "Cannot pass 'style' string with a color " + "symbol and 'color' keyword argument. Please" + " use one or the other or pass 'style' " + "without a color symbol") def _iter_data(self, data=None, keep_index=False, fillna=None): if data is None: @@ -979,10 +996,11 @@ def _iter_data(self, data=None, keep_index=False, fillna=None): if fillna is not None: data = data.fillna(fillna) - if self.sort_columns: - columns = com._try_sort(data.columns) - else: - columns = data.columns + # TODO: unused? 
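Reviewer note: the rewrapped `r(h)` above is the lag-h sample autocorrelation that `autocorrelation_plot` maps over all lags. Extracted as a standalone function (the `c0` normaliser comes from the lines just before this hunk):

```python
import numpy as np

def autocorr(data, h):
    # lag-h sample autocorrelation, matching the r(h) helper in the hunk
    data = np.asarray(data, dtype=float)
    n = len(data)
    mean = data.mean()
    c0 = np.sum((data - mean) ** 2) / float(n)
    return ((data[:n - h] - mean) *
            (data[h:] - mean)).sum() / float(n) / c0
```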
+ # if self.sort_columns: + # columns = com._try_sort(data.columns) + # else: + # columns = data.columns for col, values in data.iteritems(): if keep_index is True: @@ -1147,7 +1165,7 @@ def _post_plot_logic_common(self, ax, data): self._apply_axis_properties(ax.yaxis, rot=self.rot, fontsize=self.fontsize) self._apply_axis_properties(ax.xaxis, fontsize=self.fontsize) - else: # pragma no cover + else: # pragma no cover raise ValueError def _post_plot_logic(self, ax, data): @@ -1206,7 +1224,7 @@ def legend_title(self): return ','.join(stringified) def _add_legend_handle(self, handle, label, index=None): - if not label is None: + if label is not None: if self.mark_right and index is not None: if self.on_right(index): label = label + ' (right)' @@ -1221,7 +1239,7 @@ def _make_legend(self): title = '' if not self.subplots: - if not leg is None: + if leg is not None: title = leg.get_title().get_text() handles = leg.legendHandles labels = [x.get_text() for x in leg.get_texts()] @@ -1233,7 +1251,7 @@ def _make_legend(self): handles += self.legend_handles labels += self.legend_labels - if not self.legend_title is None: + if self.legend_title is not None: title = self.legend_title if len(handles) > 0: @@ -1312,7 +1330,8 @@ def _plot(cls, ax, x, y, style=None, is_errorbar=False, **kwds): if is_errorbar: return ax.errorbar(x, y, **kwds) else: - # prevent style kwarg from going to errorbar, where it is unsupported + # prevent style kwarg from going to errorbar, where it is + # unsupported if style is not None: args = (x, y, style) else: @@ -1449,10 +1468,10 @@ def match_labels(data, e): # asymmetrical error bars if err.ndim == 3: if (err_shape[0] != self.nseries) or \ - (err_shape[1] != 2) or \ - (err_shape[2] != len(self.data)): + (err_shape[1] != 2) or \ + (err_shape[2] != len(self.data)): msg = "Asymmetrical error bars should be provided " + \ - "with the shape (%u, 2, %u)" % \ + "with the shape (%u, 2, %u)" % \ (self.nseries, len(self.data)) raise ValueError(msg) @@ -1492,7 
+1511,7 @@ def _get_errorbars(self, label=None, index=None, xerr=True, yerr=True): def _get_subplots(self): from matplotlib.axes import Subplot return [ax for ax in self.axes[0].get_figure().get_axes() - if isinstance(ax, Subplot)] + if isinstance(ax, Subplot)] def _get_axes_layout(self): axes = self._get_subplots() @@ -1594,7 +1613,8 @@ def _make_plot(self): if len(errors_x) > 0 or len(errors_y) > 0: err_kwds = dict(errors_x, **errors_y) err_kwds['ecolor'] = scatter.get_facecolor()[0] - ax.errorbar(data[x].values, data[y].values, linestyle='none', **err_kwds) + ax.errorbar(data[x].values, data[y].values, + linestyle='none', **err_kwds) class HexBinPlot(PlanePlot): @@ -1691,7 +1711,8 @@ def _make_plot(self): @classmethod def _plot(cls, ax, x, y, style=None, column_num=None, stacking_id=None, **kwds): - # column_num is used to get the target column from protf in line and area plots + # column_num is used to get the target column from plotf in line and + # area plots if column_num == 0: cls._initialize_stacker(ax, stacking_id, len(y)) y_values = cls._get_stacked_values(ax, stacking_id, y, kwds['label']) @@ -1753,8 +1774,10 @@ def _get_stacked_values(cls, ax, stacking_id, values, label): elif (values <= 0).all(): return ax._stacker_neg_prior[stacking_id] + values - raise ValueError('When stacked is True, each column must be either all positive or negative.' - '{0} contains both positive and negative values'.format(label)) + raise ValueError('When stacked is True, each column must be either ' + 'all positive or negative.'
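Reviewer note: the rewrapped `ValueError` in `_get_stacked_values` above enforces the rule that stacked plots need each column to be single-signed. The check in isolation (helper name is illustrative, not from the patch):

```python
import numpy as np

def check_single_signed(values, label):
    # each stacked column must be all >= 0 or all <= 0, as the
    # rewrapped ValueError in the hunk states (hypothetical helper name)
    values = np.asarray(values)
    if (values >= 0).all() or (values <= 0).all():
        return True
    raise ValueError('When stacked is True, each column must be either '
                     'all positive or negative. '
                     '{0} contains both positive and negative values'
                     .format(label))
```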
+ '{0} contains both positive and negative values' + .format(label)) @classmethod def _update_stacker(cls, ax, stacking_id, values): @@ -1820,10 +1843,10 @@ def _plot(cls, ax, x, y, style=None, column_num=None, else: start = np.zeros(len(y)) - if not 'color' in kwds: + if 'color' not in kwds: kwds['color'] = lines[0].get_color() - if cls.mpl_ge_1_5_0(): # mpl 1.5 added real support for poly legends + if cls.mpl_ge_1_5_0(): # mpl 1.5 added real support for poly legends kwds.pop('label') ax.fill_between(xdata, start, y_values, **kwds) cls._update_stacker(ax, stacking_id, y) @@ -1861,7 +1884,7 @@ def __init__(self, data, **kwargs): self.bottom = kwargs.pop('bottom', 0) self.left = kwargs.pop('left', 0) - self.log = kwargs.pop('log',False) + self.log = kwargs.pop('log', False) MPLPlot.__init__(self, data, **kwargs) if self.stacked or self.subplots: @@ -1915,7 +1938,7 @@ def _make_plot(self): label = com.pprint_thing(label) if (('yerr' in kwds) or ('xerr' in kwds)) \ - and (kwds.get('ecolor') is None): + and (kwds.get('ecolor') is None): kwds['ecolor'] = mpl.rcParams['xtick.color'] start = 0 @@ -1926,20 +1949,23 @@ def _make_plot(self): if self.subplots: w = self.bar_width / 2 rect = self._plot(ax, self.ax_pos + w, y, self.bar_width, - start=start, label=label, log=self.log, **kwds) + start=start, label=label, + log=self.log, **kwds) ax.set_title(label) elif self.stacked: mask = y > 0 start = np.where(mask, pos_prior, neg_prior) + self._start_base w = self.bar_width / 2 rect = self._plot(ax, self.ax_pos + w, y, self.bar_width, - start=start, label=label, log=self.log, **kwds) + start=start, label=label, + log=self.log, **kwds) pos_prior = pos_prior + np.where(mask, y, 0) neg_prior = neg_prior + np.where(mask, 0, y) else: w = self.bar_width / K rect = self._plot(ax, self.ax_pos + (i + 0.5) * w, y, w, - start=start, label=label, log=self.log, **kwds) + start=start, label=label, + log=self.log, **kwds) self._add_legend_handle(rect, label, index=i) def 
_post_plot_logic(self, ax, data): @@ -2000,9 +2026,10 @@ def _args_adjust(self): values = np.ravel(values) values = values[~com.isnull(values)] - hist, self.bins = np.histogram(values, bins=self.bins, - range=self.kwds.get('range', None), - weights=self.kwds.get('weights', None)) + hist, self.bins = np.histogram( + values, bins=self.bins, + range=self.kwds.get('range', None), + weights=self.kwds.get('weights', None)) if com.is_list_like(self.bottom): self.bottom = np.array(self.bottom) @@ -2015,7 +2042,8 @@ def _plot(cls, ax, y, style=None, bins=None, bottom=0, column_num=0, y = y[~com.isnull(y)] base = np.zeros(len(bins) - 1) - bottom = bottom + cls._get_stacked_values(ax, stacking_id, base, kwds['label']) + bottom = bottom + \ + cls._get_stacked_values(ax, stacking_id, base, kwds['label']) # ignore style n, bins, patches = ax.hist(y, bins=bins, bottom=bottom, **kwds) cls._update_stacker(ax, stacking_id, n) @@ -2134,7 +2162,8 @@ def _validate_color_args(self): pass def _make_plot(self): - colors = self._get_colors(num_colors=len(self.data), color_kwds='colors') + colors = self._get_colors( + num_colors=len(self.data), color_kwds='colors') self.kwds.setdefault('colors', colors) for i, (label, y) in enumerate(self._iter_data()): @@ -2190,14 +2219,16 @@ class BoxPlot(LinePlot): def __init__(self, data, return_type=None, **kwargs): # Do not call LinePlot.__init__ which may fill nan if return_type not in self._valid_return_types: - raise ValueError("return_type must be {None, 'axes', 'dict', 'both'}") + raise ValueError( + "return_type must be {None, 'axes', 'dict', 'both'}") self.return_type = return_type MPLPlot.__init__(self, data, **kwargs) def _args_adjust(self): if self.subplots: - # Disable label ax sharing. Otherwise, all subplots shows last column label + # Disable label ax sharing. 
Otherwise, all subplots show the last + # column label if self.orientation == 'vertical': self.sharex = False else: @@ -2233,8 +2264,10 @@ def _validate_color_args(self): valid_keys = ['boxes', 'whiskers', 'medians', 'caps'] for key, values in compat.iteritems(self.color): if key not in valid_keys: - raise ValueError("color dict contains invalid key '{0}' " - "The key must be either {1}".format(key, valid_keys)) + raise ValueError("color dict contains invalid " + "key '{0}' " + "The key must be either {1}" + .format(key, valid_keys)) else: self.color = None @@ -2332,7 +2365,8 @@ def result(self): # kinds supported by both dataframe and series -_common_kinds = ['line', 'bar', 'barh', 'kde', 'density', 'area', 'hist', 'box'] +_common_kinds = ['line', 'bar', 'barh', + 'kde', 'density', 'area', 'hist', 'box'] # kinds supported by dataframe _dataframe_kinds = ['scatter', 'hexbin'] # kinds supported only by series or dataframe single column @@ -2529,7 +2563,8 @@ def _plot(data, x=None, y=None, subplots=False, be transposed to meet matplotlib's default layout. If a Series or DataFrame is passed, use passed data to draw a table. yerr : DataFrame, Series, array-like, dict and str - See :ref:`Plotting with Error Bars <visualization.errorbars>` for detail. + See :ref:`Plotting with Error Bars <visualization.errorbars>` for + detail. xerr : same types as yerr.
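Reviewer note: the `_args_adjust` hunk for `HistPlot` a little further up only rewraps the `np.histogram` call that precomputes shared bin edges before plotting. The binning step on its own:

```python
import numpy as np

# flatten, drop NaNs, then let np.histogram pick shared bin edges,
# as in the HistPlot._args_adjust hunk above
values = np.ravel([[1.0, 2.0, np.nan], [3.0, 4.0, 4.5]])
values = values[~np.isnan(values)]
hist, bins = np.histogram(values, bins=4,
                          range=None, weights=None)
```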
%(klass_unique)s mark_right : boolean, default True @@ -2555,14 +2590,14 @@ def _plot(data, x=None, y=None, subplots=False, @Appender(_shared_docs['plot'] % _shared_doc_df_kwargs) -def plot_frame(data, x=None, y=None, kind='line', ax=None, # Dataframe unique - subplots=False, sharex=None, sharey=False, layout=None, # Dataframe unique +def plot_frame(data, x=None, y=None, kind='line', ax=None, + subplots=False, sharex=None, sharey=False, layout=None, figsize=None, use_index=True, title=None, grid=None, legend=True, style=None, logx=False, logy=False, loglog=False, xticks=None, yticks=None, xlim=None, ylim=None, rot=None, fontsize=None, colormap=None, table=False, yerr=None, xerr=None, - secondary_y=False, sort_columns=False, # Dataframe unique + secondary_y=False, sort_columns=False, **kwds): return _plot(data, kind=kind, x=x, y=y, ax=ax, subplots=subplots, sharex=sharex, sharey=sharey, @@ -2836,10 +2871,11 @@ def hist_frame(data, column=None, by=None, grid=True, xlabelsize=None, """ if by is not None: - axes = grouped_hist(data, column=column, by=by, ax=ax, grid=grid, figsize=figsize, - sharex=sharex, sharey=sharey, layout=layout, bins=bins, - xlabelsize=xlabelsize, xrot=xrot, ylabelsize=ylabelsize, yrot=yrot, - **kwds) + axes = grouped_hist(data, column=column, by=by, ax=ax, grid=grid, + figsize=figsize, sharex=sharex, sharey=sharey, + layout=layout, bins=bins, xlabelsize=xlabelsize, + xrot=xrot, ylabelsize=ylabelsize, + yrot=yrot, **kwds) return axes if column is not None: @@ -2861,14 +2897,15 @@ def hist_frame(data, column=None, by=None, grid=True, xlabelsize=None, ax.grid(grid) _set_ticks_props(axes, xlabelsize=xlabelsize, xrot=xrot, - ylabelsize=ylabelsize, yrot=yrot) + ylabelsize=ylabelsize, yrot=yrot) fig.subplots_adjust(wspace=0.3, hspace=0.3) return axes def hist_series(self, by=None, ax=None, grid=True, xlabelsize=None, - xrot=None, ylabelsize=None, yrot=None, figsize=None, bins=10, **kwds): + xrot=None, ylabelsize=None, yrot=None, figsize=None, + 
bins=10, **kwds): """ Draw histogram of the input series using matplotlib @@ -2910,7 +2947,7 @@ def hist_series(self, by=None, ax=None, grid=True, xlabelsize=None, fig = kwds.pop('figure', plt.gcf() if plt.get_fignums() else plt.figure(figsize=figsize)) if (figsize is not None and tuple(figsize) != - tuple(fig.get_size_inches())): + tuple(fig.get_size_inches())): fig.set_size_inches(*figsize, forward=True) if ax is None: ax = fig.gca() @@ -2923,16 +2960,16 @@ def hist_series(self, by=None, ax=None, grid=True, xlabelsize=None, axes = np.array([ax]) _set_ticks_props(axes, xlabelsize=xlabelsize, xrot=xrot, - ylabelsize=ylabelsize, yrot=yrot) + ylabelsize=ylabelsize, yrot=yrot) else: if 'figure' in kwds: raise ValueError("Cannot pass 'figure' when using the " "'by' argument, since a new 'Figure' instance " "will be created") - axes = grouped_hist(self, by=by, ax=ax, grid=grid, figsize=figsize, bins=bins, - xlabelsize=xlabelsize, xrot=xrot, ylabelsize=ylabelsize, yrot=yrot, - **kwds) + axes = grouped_hist(self, by=by, ax=ax, grid=grid, figsize=figsize, + bins=bins, xlabelsize=xlabelsize, xrot=xrot, + ylabelsize=ylabelsize, yrot=yrot, **kwds) if hasattr(axes, 'ndim'): if axes.ndim == 1 and len(axes) == 1: @@ -2976,7 +3013,7 @@ def plot_group(group, ax): figsize=figsize, layout=layout, rot=rot) _set_ticks_props(axes, xlabelsize=xlabelsize, xrot=xrot, - ylabelsize=ylabelsize, yrot=yrot) + ylabelsize=ylabelsize, yrot=yrot) fig.subplots_adjust(bottom=0.15, top=0.9, left=0.1, right=0.9, hspace=0.5, wspace=0.3) @@ -3032,8 +3069,8 @@ def boxplot_frame_groupby(grouped, subplots=True, column=None, fontsize=None, if subplots is True: naxes = len(grouped) fig, axes = _subplots(naxes=naxes, squeeze=False, - ax=ax, sharex=False, sharey=True, figsize=figsize, - layout=layout) + ax=ax, sharex=False, sharey=True, + figsize=figsize, layout=layout) axes = _flatten(axes) ret = compat.OrderedDict() @@ -3042,7 +3079,8 @@ def boxplot_frame_groupby(grouped, subplots=True, column=None, 
fontsize=None, rot=rot, grid=grid, **kwds) ax.set_title(com.pprint_thing(key)) ret[key] = d - fig.subplots_adjust(bottom=0.15, top=0.9, left=0.1, right=0.9, wspace=0.2) + fig.subplots_adjust(bottom=0.15, top=0.9, left=0.1, + right=0.9, wspace=0.2) else: from pandas.tools.merge import concat keys, frames = zip(*grouped) @@ -3054,7 +3092,8 @@ def boxplot_frame_groupby(grouped, subplots=True, column=None, fontsize=None, else: df = frames[0] ret = df.boxplot(column=column, fontsize=fontsize, rot=rot, - grid=grid, ax=ax, figsize=figsize, layout=layout, **kwds) + grid=grid, ax=ax, figsize=figsize, + layout=layout, **kwds) return ret @@ -3092,8 +3131,8 @@ def _grouped_plot(plotf, data, column=None, by=None, numeric_only=True, def _grouped_plot_by_column(plotf, data, columns=None, by=None, numeric_only=True, grid=False, - figsize=None, ax=None, layout=None, return_type=None, - **kwargs): + figsize=None, ax=None, layout=None, + return_type=None, **kwargs): grouped = data.groupby(by) if columns is None: if not isinstance(by, (list, tuple)): @@ -3129,7 +3168,6 @@ def _grouped_plot_by_column(plotf, data, columns=None, by=None, def table(ax, data, rowLabels=None, colLabels=None, **kwargs): - """ Helper function to convert DataFrame and Series to matplotlib.table @@ -3140,7 +3178,8 @@ def table(ax, data, rowLabels=None, colLabels=None, data for table contents `kwargs`: keywords, optional keyword arguments which passed to matplotlib.table.table. - If `rowLabels` or `colLabels` is not specified, data index or column name will be used. + If `rowLabels` or `colLabels` is not specified, data index or column + name will be used. 
Returns ------- @@ -3164,7 +3203,8 @@ def table(ax, data, rowLabels=None, colLabels=None, import matplotlib.table table = matplotlib.table.table(ax, cellText=cellText, - rowLabels=rowLabels, colLabels=colLabels, **kwargs) + rowLabels=rowLabels, + colLabels=colLabels, **kwargs) return table @@ -3177,7 +3217,7 @@ def _get_layout(nplots, layout=None, layout_type='box'): # Python 2 compat ceil_ = lambda x: int(ceil(x)) - if nrows == -1 and ncols >0: + if nrows == -1 and ncols > 0: layout = nrows, ncols = (ceil_(float(nplots) / ncols), ncols) elif ncols == -1 and nrows > 0: layout = nrows, ncols = (nrows, ceil_(float(nplots) / nrows)) @@ -3186,8 +3226,8 @@ def _get_layout(nplots, layout=None, layout_type='box'): raise ValueError(msg) if nrows * ncols < nplots: - raise ValueError('Layout of %sx%s must be larger than required size %s' % - (nrows, ncols, nplots)) + raise ValueError('Layout of %sx%s must be larger than ' + 'required size %s' % (nrows, ncols, nplots)) return layout @@ -3215,7 +3255,8 @@ def _get_layout(nplots, layout=None, layout_type='box'): def _subplots(naxes=None, sharex=False, sharey=False, squeeze=True, - subplot_kw=None, ax=None, layout=None, layout_type='box', **fig_kw): + subplot_kw=None, ax=None, layout=None, layout_type='box', + **fig_kw): """Create a figure with a set of subplots already made. This utility wrapper makes it convenient to create common layouts of @@ -3300,27 +3341,29 @@ def _subplots(naxes=None, sharex=False, sharey=False, squeeze=True, if com.is_list_like(ax): ax = _flatten(ax) if layout is not None: - warnings.warn("When passing multiple axes, layout keyword is ignored", UserWarning) + warnings.warn("When passing multiple axes, layout keyword is " + "ignored", UserWarning) if sharex or sharey: - warnings.warn("When passing multiple axes, sharex and sharey are ignored." - "These settings must be specified when creating axes", UserWarning) + warnings.warn("When passing multiple axes, sharex and sharey " + "are ignored. 
These settings must be specified " + "when creating axes", UserWarning) if len(ax) == naxes: fig = ax[0].get_figure() return fig, ax else: - raise ValueError("The number of passed axes must be {0}, the same as " - "the output plot".format(naxes)) + raise ValueError("The number of passed axes must be {0}, the " + "same as the output plot".format(naxes)) fig = ax.get_figure() - # if ax is passed and a number of subplots is 1, return ax as it is + # if ax is passed and a number of subplots is 1, return ax as it is if naxes == 1: if squeeze: return fig, ax else: return fig, _flatten(ax) else: - warnings.warn("To output multiple subplots, the figure containing the passed axes " - "is being cleared", UserWarning) + warnings.warn("To output multiple subplots, the figure containing " + "the passed axes is being cleared", UserWarning) fig.clear() nrows, ncols = _get_layout(naxes, layout=layout, layout_type=layout_type) @@ -3399,7 +3442,7 @@ def _handle_shared_axes(axarr, nplots, naxes, nrows, ncols, sharex, sharey): try: # first find out the ax layout, # so that we can correctly handle 'gaps" - layout = np.zeros((nrows+1,ncols+1), dtype=np.bool) + layout = np.zeros((nrows + 1, ncols + 1), dtype=np.bool) for ax in axarr: layout[ax.rowNum, ax.colNum] = ax.get_visible() @@ -3407,9 +3450,10 @@ def _handle_shared_axes(axarr, nplots, naxes, nrows, ncols, sharex, sharey): # only the last row of subplots should get x labels -> all # other off layout handles the case that the subplot is # the last in the column, because below is no subplot/gap. 
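Reviewer note: the `_get_layout` hunk earlier only fixes spacing around the `-1` wildcard handling; the resolution logic itself, lifted out (function name is illustrative):

```python
from math import ceil

def resolve_layout(nplots, nrows, ncols):
    # resolve a (nrows, ncols) grid where one dimension may be -1,
    # mirroring the _get_layout hunk earlier (hypothetical name)
    if nrows == -1 and ncols > 0:
        nrows = int(ceil(float(nplots) / ncols))
    elif ncols == -1 and nrows > 0:
        ncols = int(ceil(float(nplots) / nrows))
    if nrows * ncols < nplots:
        raise ValueError('Layout of %sx%s must be larger than '
                         'required size %s' % (nrows, ncols, nplots))
    return nrows, ncols
```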
- if not layout[ax.rowNum+1, ax.colNum]: + if not layout[ax.rowNum + 1, ax.colNum]: continue - if sharex or len(ax.get_shared_x_axes().get_siblings(ax)) > 1: + if sharex or len(ax.get_shared_x_axes() + .get_siblings(ax)) > 1: _remove_labels_from_axis(ax.xaxis) except IndexError: @@ -3418,21 +3462,21 @@ def _handle_shared_axes(axarr, nplots, naxes, nrows, ncols, sharex, sharey): for ax in axarr: if ax.is_last_row(): continue - if sharex or len(ax.get_shared_x_axes().get_siblings(ax)) > 1: + if sharex or len(ax.get_shared_x_axes() + .get_siblings(ax)) > 1: _remove_labels_from_axis(ax.xaxis) if ncols > 1: for ax in axarr: - # only the first column should get y labels -> set all other to off - # as we only have labels in teh first column and we always have a subplot there, - # we can skip the layout test + # only the first column should get y labels -> set all other to + # off as we only have labels in the first column and we always + # have a subplot there, we can skip the layout test if ax.is_first_col(): continue if sharey or len(ax.get_shared_y_axes().get_siblings(ax)) > 1: _remove_labels_from_axis(ax.yaxis) - def _flatten(axes): if not com.is_list_like(axes): return np.array([axes]) @@ -3479,6 +3523,7 @@ def _set_ticks_props(axes, xlabelsize=None, xrot=None, class BasePlotMethods(PandasObject): + def __init__(self, data): self._data = data @@ -3499,14 +3544,15 @@ class SeriesPlotMethods(BasePlotMethods): with the ``kind`` argument: ``s.plot(kind='line')`` is equivalent to ``s.plot.line()`` """ - def __call__(self, kind='line', ax=None, # Series unique + + def __call__(self, kind='line', ax=None, figsize=None, use_index=True, title=None, grid=None, - legend=False, style=None, logx=False, logy=False, loglog=False, - xticks=None, yticks=None, xlim=None, ylim=None, + legend=False, style=None, logx=False, logy=False, + loglog=False, xticks=None, yticks=None, + xlim=None, ylim=None, rot=None, fontsize=None, colormap=None, table=False, yerr=None, xerr=None, - label=None,
secondary_y=False, # Series unique - **kwds): + label=None, secondary_y=False, **kwds): return plot_series(self._data, kind=kind, ax=ax, figsize=figsize, use_index=use_index, title=title, grid=grid, legend=legend, style=style, logx=logx, logy=logy, @@ -3671,15 +3717,15 @@ class FramePlotMethods(BasePlotMethods): method with the ``kind`` argument: ``df.plot(kind='line')`` is equivalent to ``df.plot.line()`` """ - def __call__(self, x=None, y=None, kind='line', ax=None, # Dataframe unique - subplots=False, sharex=None, sharey=False, layout=None, # Dataframe unique + + def __call__(self, x=None, y=None, kind='line', ax=None, + subplots=False, sharex=None, sharey=False, layout=None, figsize=None, use_index=True, title=None, grid=None, legend=True, style=None, logx=False, logy=False, loglog=False, xticks=None, yticks=None, xlim=None, ylim=None, rot=None, fontsize=None, colormap=None, table=False, yerr=None, xerr=None, - secondary_y=False, sort_columns=False, # Dataframe unique - **kwds): + secondary_y=False, sort_columns=False, **kwds): return plot_frame(self._data, kind=kind, x=x, y=y, ax=ax, subplots=subplots, sharex=sharex, sharey=sharey, layout=layout, figsize=figsize, use_index=use_index, @@ -3913,8 +3959,8 @@ def hexbin(self, x, y, C=None, reduce_C_function=None, gridsize=None, import pandas.tools.plotting as plots import pandas.core.frame as fr - reload(plots) - reload(fr) + reload(plots) # noqa + reload(fr) # noqa from pandas.core.frame import DataFrame data = DataFrame([[3, 6, -5], [4, 8, 2], [4, 9, -6], diff --git a/pandas/tools/rplot.py b/pandas/tools/rplot.py index bc834689ffce8..5a748b60aae9c 100644 --- a/pandas/tools/rplot.py +++ b/pandas/tools/rplot.py @@ -15,7 +15,8 @@ "The rplot trellis plotting interface is deprecated and will be " "removed in a future version. We refer to external packages " "like seaborn for similar but more refined functionality. 
\n\n" - "See our docs http://pandas.pydata.org/pandas-docs/stable/visualization.html#rplot " + "See our docs http://pandas.pydata.org/pandas-docs/stable" + "/visualization.html#rplot " "for some example how to convert your existing code to these " "packages.", FutureWarning, stacklevel=2) @@ -26,19 +27,23 @@ class Scale: """ pass + class ScaleGradient(Scale): """ A mapping between a data attribute value and a point in colour space between two specified colours. """ + def __init__(self, column, colour1, colour2): """Initialize ScaleGradient instance. Parameters: ----------- column: string, pandas DataFrame column name - colour1: tuple, 3 element tuple with float values representing an RGB colour - colour2: tuple, 3 element tuple with float values representing an RGB colour + colour1: tuple + 3 element tuple with float values representing an RGB colour + colour2: tuple + 3 element tuple with float values representing an RGB colour """ self.column = column self.colour1 = colour1 @@ -55,7 +60,8 @@ def __call__(self, data, index): Returns: -------- - A three element tuple representing an RGB somewhere between colour1 and colour2 + A three element tuple representing an RGB somewhere between colour1 and + colour2 """ x = data[self.column].iget(index) a = min(data[self.column]) @@ -67,20 +73,25 @@ def __call__(self, data, index): g1 + (g2 - g1) * x_scaled, b1 + (b2 - b1) * x_scaled) + class ScaleGradient2(Scale): """ Create a mapping between a data attribute value and a point in colour space in a line of three specified colours. """ + def __init__(self, column, colour1, colour2, colour3): """Initialize ScaleGradient2 instance. 
Parameters: ----------- column: string, pandas DataFrame column name - colour1: tuple, 3 element tuple with float values representing an RGB colour - colour2: tuple, 3 element tuple with float values representing an RGB colour - colour3: tuple, 3 element tuple with float values representing an RGB colour + colour1: tuple + 3 element tuple with float values representing an RGB colour + colour2: tuple + 3 element tuple with float values representing an RGB colour + colour3: tuple + 3 element tuple with float values representing an RGB colour """ self.column = column self.colour1 = colour1 @@ -119,12 +130,15 @@ def __call__(self, data, index): g2 + (g3 - g2) * x_scaled, b2 + (b3 - b2) * x_scaled) + class ScaleSize(Scale): """ Provide a mapping between a DataFrame column and matplotlib scatter plot shape size. """ - def __init__(self, column, min_size=5.0, max_size=100.0, transform=lambda x: x): + + def __init__(self, column, min_size=5.0, max_size=100.0, + transform=lambda x: x): """Initialize ScaleSize instance. Parameters: @@ -132,7 +146,9 @@ def __init__(self, column, min_size=5.0, max_size=100.0, transform=lambda x: x): column: string, a column name min_size: float, minimum point size max_size: float, maximum point size - transform: a one argument function of form float -> float (e.g. lambda x: log(x)) + transform: function + a one argument function of form float -> float (e.g. lambda x: + log(x)) """ self.column = column self.min_size = min_size @@ -152,13 +168,15 @@ def __call__(self, data, index): a = float(min(data[self.column])) b = float(max(data[self.column])) return self.transform(self.min_size + ((x - a) / (b - a)) * - (self.max_size - self.min_size)) + (self.max_size - self.min_size)) + class ScaleShape(Scale): """ Provides a mapping between matplotlib marker shapes and attribute values. """ + def __init__(self, column): """Initialize ScaleShape instance. 
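Reviewer note: `ScaleGradient` and `ScaleSize` in the rplot hunks above both reduce to the same linear map of a column value onto an output range. A compact sketch (function names are illustrative, not from the patch):

```python
def lerp(x, a, b, lo, hi):
    # map x in [a, b] linearly onto [lo, hi], the arithmetic shared by
    # the ScaleGradient and ScaleSize __call__ hunks above
    x_scaled = (x - a) / float(b - a)
    return lo + (hi - lo) * x_scaled

def gradient_colour(x, a, b, colour1, colour2):
    # per-channel interpolation, as in ScaleGradient.__call__
    return tuple(lerp(x, a, b, c1, c2)
                 for c1, c2 in zip(colour1, colour2))
```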
@@ -185,14 +203,17 @@ def __call__(self, data, index): """ values = sorted(list(set(data[self.column]))) if len(values) > len(self.shapes): - raise ValueError("Too many different values of the categorical attribute for ScaleShape") + raise ValueError("Too many different values of the categorical " + "attribute for ScaleShape") x = data[self.column].iget(index) return self.shapes[values.index(x)] + class ScaleRandomColour(Scale): """ Maps a random colour to a DataFrame attribute. """ + def __init__(self, column): """Initialize ScaleRandomColour instance. @@ -215,10 +236,12 @@ def __call__(self, data, index): random.seed(data[self.column].iget(index)) return [random.random() for _ in range(3)] + class ScaleConstant(Scale): """ Constant returning scale. Usually used automatically. """ + def __init__(self, value): """Initialize ScaleConstant instance. @@ -243,6 +266,7 @@ def __call__(self, data, index): """ return self.value + def default_aes(x=None, y=None): """Create the default aesthetics dictionary. @@ -256,14 +280,15 @@ def default_aes(x=None, y=None): a dictionary with aesthetics bindings """ return { - 'x' : x, - 'y' : y, - 'size' : ScaleConstant(40.0), - 'colour' : ScaleConstant('grey'), - 'shape' : ScaleConstant('o'), - 'alpha' : ScaleConstant(1.0), + 'x': x, + 'y': y, + 'size': ScaleConstant(40.0), + 'colour': ScaleConstant('grey'), + 'shape': ScaleConstant('o'), + 'alpha': ScaleConstant(1.0), } + def make_aes(x=None, y=None, size=None, colour=None, shape=None, alpha=None): """Create an empty aesthetics dictionary. 
@@ -288,35 +313,48 @@ def make_aes(x=None, y=None, size=None, colour=None, shape=None, alpha=None): shape = ScaleConstant(shape) if not hasattr(alpha, '__call__') and alpha is not None: alpha = ScaleConstant(alpha) - if any([isinstance(size, scale) for scale in [ScaleConstant, ScaleSize]]) or size is None: + if any([isinstance(size, scale) + for scale in [ScaleConstant, ScaleSize]]) or size is None: pass else: - raise ValueError('size mapping should be done through ScaleConstant or ScaleSize') - if any([isinstance(colour, scale) for scale in [ScaleConstant, ScaleGradient, ScaleGradient2, ScaleRandomColour]]) or colour is None: + raise ValueError( + 'size mapping should be done through ScaleConstant or ScaleSize') + if (any([isinstance(colour, scale) + for scale in [ScaleConstant, ScaleGradient, + ScaleGradient2, ScaleRandomColour]]) or + colour is None): pass else: - raise ValueError('colour mapping should be done through ScaleConstant, ScaleRandomColour, ScaleGradient or ScaleGradient2') - if any([isinstance(shape, scale) for scale in [ScaleConstant, ScaleShape]]) or shape is None: + raise ValueError('colour mapping should be done through ' + 'ScaleConstant, ScaleRandomColour, ScaleGradient ' + 'or ScaleGradient2') + if (any([isinstance(shape, scale) + for scale in [ScaleConstant, ScaleShape]]) or + shape is None): pass else: - raise ValueError('shape mapping should be done through ScaleConstant or ScaleShape') - if any([isinstance(alpha, scale) for scale in [ScaleConstant]]) or alpha is None: + raise ValueError('shape mapping should be done through ScaleConstant ' + 'or ScaleShape') + if (any([isinstance(alpha, scale) for scale in [ScaleConstant]]) or + alpha is None): pass else: raise ValueError('alpha mapping should be done through ScaleConstant') return { - 'x' : x, - 'y' : y, - 'size' : size, - 'colour' : colour, - 'shape' : shape, - 'alpha' : alpha, + 'x': x, + 'y': y, + 'size': size, + 'colour': colour, + 'shape': shape, + 'alpha': alpha, } + class Layer: 
""" Layer object representing a single plot layer. """ + def __init__(self, data=None, **kwds): """Initialize layer object. @@ -343,7 +381,9 @@ def work(self, fig=None, ax=None): """ return fig, ax + class GeomPoint(Layer): + def work(self, fig=None, ax=None): """Render the layer on a matplotlib axis. You can specify either a figure or an axis to draw on. @@ -375,10 +415,10 @@ def work(self, fig=None, ax=None): marker_value = shape_scaler(self.data, index) alpha_value = alpha(self.data, index) patch = ax.scatter(x, y, - s=size_value, - c=colour_value, - marker=marker_value, - alpha=alpha_value) + s=size_value, + c=colour_value, + marker=marker_value, + alpha=alpha_value) label = [] if colour_scaler.categorical: label += [colour_scaler.column, row[colour_scaler.column]] @@ -389,10 +429,12 @@ def work(self, fig=None, ax=None): ax.set_ylabel(self.aes['y']) return fig, ax + class GeomPolyFit(Layer): """ Draw a polynomial fit of specified degree. """ + def __init__(self, degree, lw=2.0, colour='grey'): """Initialize GeomPolyFit object. @@ -436,10 +478,12 @@ def work(self, fig=None, ax=None): ax.plot(x_, y_, lw=self.lw, c=self.colour) return fig, ax + class GeomScatter(Layer): """ An efficient scatter plot, use this instead of GeomPoint for speed. """ + def __init__(self, marker='o', colour='lightblue', alpha=1.0): """Initialize GeomScatter instance. @@ -476,10 +520,12 @@ def work(self, fig=None, ax=None): ax.scatter(x, y, marker=self.marker, c=self.colour, alpha=self.alpha) return fig, ax + class GeomHistogram(Layer): """ An efficient histogram, use this instead of GeomBar for speed. """ + def __init__(self, bins=10, colour='lightblue'): """Initialize GeomHistogram instance. @@ -514,10 +560,12 @@ def work(self, fig=None, ax=None): ax.set_xlabel(self.aes['x']) return fig, ax + class GeomDensity(Layer): """ A kernel density estimation plot. """ + def work(self, fig=None, ax=None): """Draw a one dimensional kernel density plot. 
         You can specify either a figure or an axis to draw on.
@@ -543,7 +591,9 @@ def work(self, fig=None, ax=None):
         ax.plot(ind, gkde.evaluate(ind))
         return fig, ax
 
+
 class GeomDensity2D(Layer):
+
     def work(self, fig=None, ax=None):
         """Draw a two dimensional kernel density plot.
         You can specify either a figure or an axis to draw on.
@@ -564,7 +614,10 @@ def work(self, fig=None, ax=None):
             ax = fig.gca()
         x = self.data[self.aes['x']]
         y = self.data[self.aes['y']]
-        rvs = np.array([x, y])
+
+        # TODO: unused?
+        # rvs = np.array([x, y])
+
         x_min = x.min()
         x_max = x.max()
         y_min = y.min()
@@ -578,7 +631,9 @@ def work(self, fig=None, ax=None):
         ax.contour(Z, extent=[x_min, x_max, y_min, y_max])
         return fig, ax
 
+
 class TrellisGrid(Layer):
+
     def __init__(self, by):
         """Initialize TreelisGrid instance.
 
@@ -589,12 +644,14 @@ def __init__(self, by):
         if len(by) != 2:
             raise ValueError("You must give a list of length 2 to group by")
         elif by[0] == '.' and by[1] == '.':
-            raise ValueError("At least one of grouping attributes must be not a dot")
+            raise ValueError(
+                "At least one of grouping attributes must be not a dot")
         self.by = by
 
     def trellis(self, layers):
-        """Create a trellis structure for a list of layers.
-        Each layer will be cloned with different data in to a two dimensional grid.
+        """
+        Create a trellis structure for a list of layers. Each layer will be
+        cloned with different data in to a two dimensional grid.
Parameters: ----------- @@ -602,7 +659,8 @@ def trellis(self, layers): Returns: -------- - trellised_layers: Clones of each layer in the list arranged in a trellised latice + trellised_layers: Clones of each layer in the list arranged in a + trellised latice """ trellised_layers = [] for layer in layers: @@ -628,8 +686,10 @@ def trellis(self, layers): else: self.rows = len(shingle1) self.cols = len(shingle2) - trellised = [[None for _ in range(self.cols)] for _ in range(self.rows)] - self.group_grid = [[None for _ in range(self.cols)] for _ in range(self.rows)] + trellised = [[None for _ in range(self.cols)] + for _ in range(self.rows)] + self.group_grid = [[None for _ in range( + self.cols)] for _ in range(self.rows)] row = 0 col = 0 for group, data in grouped: @@ -644,6 +704,7 @@ def trellis(self, layers): trellised_layers.append(trellised) return trellised_layers + def dictionary_union(dict1, dict2): """Take two dictionaries, return dictionary union. @@ -666,6 +727,7 @@ def dictionary_union(dict1, dict2): result[key2] = dict2[key2] return result + def merge_aes(layer1, layer2): """Merges the aesthetics dictionaries for the two layers. Look up sequence_layers function. Which layer is first and which @@ -680,11 +742,16 @@ def merge_aes(layer1, layer2): if layer2.aes[key] is None: layer2.aes[key] = layer1.aes[key] + def sequence_layers(layers): - """Go through the list of layers and fill in the missing bits of information. + """ + Go through the list of layers and fill in the missing bits of information. The basic rules are this: - * If the current layer has data set to None, take the data from previous layer. - * For each aesthetic mapping, if that mapping is set to None, take it from previous layer. + + * If the current layer has data set to None, take the data from previous + layer. + * For each aesthetic mapping, if that mapping is set to None, take it from + previous layer. 
     Parameters:
     -----------
@@ -696,8 +763,11 @@ def sequence_layers(layers):
         merge_aes(layer1, layer2)
     return layers
 
+
 def sequence_grids(layer_grids):
-    """Go through the list of layer girds and perform the same thing as sequence_layers.
+    """
+    Go through the list of layer girds and perform the same thing as
+    sequence_layers.
 
     Parameters:
     -----------
@@ -711,8 +781,11 @@ def sequence_grids(layer_grids):
             merge_aes(layer1, layer2)
     return layer_grids
 
+
 def work_grid(grid, fig):
-    """Take a two dimensional grid, add subplots to a figure for each cell and do layer work.
+    """
+    Take a two dimensional grid, add subplots to a figure for each cell and do
+    layer work.
 
     Parameters:
     -----------
@@ -728,10 +801,12 @@ def work_grid(grid, fig):
     axes = [[None for _ in range(ncols)] for _ in range(nrows)]
     for row in range(nrows):
         for col in range(ncols):
-            axes[row][col] = fig.add_subplot(nrows, ncols, ncols * row + col + 1)
+            axes[row][col] = fig.add_subplot(
+                nrows, ncols, ncols * row + col + 1)
             grid[row][col].work(ax=axes[row][col])
     return axes
 
+
 def adjust_subplots(fig, axes, trellis, layers):
     """Adjust the subtplots on matplotlib figure with the fact
     that we have a trellis plot in mind.
@@ -763,20 +838,29 @@ def adjust_subplots(fig, axes, trellis, layers): axis.get_xaxis().set_ticks([]) axis.set_xlabel('') if trellis.by[0] == '.': - label1 = "%s = %s" % (trellis.by[1], trellis.group_grid[index // trellis.cols][index % trellis.cols]) + label1 = "%s = %s" % (trellis.by[1], trellis.group_grid[ + index // trellis.cols][index % trellis.cols]) label2 = None elif trellis.by[1] == '.': - label1 = "%s = %s" % (trellis.by[0], trellis.group_grid[index // trellis.cols][index % trellis.cols]) + label1 = "%s = %s" % (trellis.by[0], trellis.group_grid[ + index // trellis.cols][index % trellis.cols]) label2 = None else: - label1 = "%s = %s" % (trellis.by[0], trellis.group_grid[index // trellis.cols][index % trellis.cols][0]) - label2 = "%s = %s" % (trellis.by[1], trellis.group_grid[index // trellis.cols][index % trellis.cols][1]) + label1 = "%s = %s" % ( + trellis.by[0], + trellis.group_grid[index // trellis.cols] + [index % trellis.cols][0]) + label2 = "%s = %s" % ( + trellis.by[1], + trellis.group_grid[index // trellis.cols] + [index % trellis.cols][1]) if label2 is not None: axis.table(cellText=[[label1], [label2]], - loc='top', cellLoc='center', - cellColours=[['lightgrey'], ['lightgrey']]) + loc='top', cellLoc='center', + cellColours=[['lightgrey'], ['lightgrey']]) else: - axis.table(cellText=[[label1]], loc='top', cellLoc='center', cellColours=[['lightgrey']]) + axis.table(cellText=[[label1]], loc='top', + cellLoc='center', cellColours=[['lightgrey']]) # Flatten the layer grid layers = [layer for row in layers for layer in row] legend = {} @@ -800,15 +884,19 @@ def adjust_subplots(fig, axes, trellis, layers): col1, val1, col2, val2 = key labels.append("%s, %s" % (str(val1), str(val2))) else: - raise ValueError("Maximum 2 categorical attributes to display a lengend of") + raise ValueError( + "Maximum 2 categorical attributes to display a lengend of") if len(legend): fig.legend(patches, labels, loc='upper right') fig.subplots_adjust(wspace=0.05, hspace=0.2) + 
class RPlot: """ - The main plot object. Add layers to an instance of this object to create a plot. + The main plot object. Add layers to an instance of this object to create a + plot. """ + def __init__(self, data, x=None, y=None): """Initialize RPlot instance. @@ -819,7 +907,6 @@ def __init__(self, data, x=None, y=None): y: string, DataFrame column name """ self.layers = [Layer(data, **default_aes(x=x, y=y))] - trellised = False def add(self, layer): """Add a layer to RPlot instance. @@ -829,7 +916,8 @@ def add(self, layer): layer: Layer instance """ if not isinstance(layer, Layer): - raise TypeError("The operand on the right side of + must be a Layer instance") + raise TypeError( + "The operand on the right side of + must be a Layer instance") self.layers.append(layer) def render(self, fig=None): @@ -873,13 +961,13 @@ def render(self, fig=None): col1, val1, col2, val2 = key labels.append("%s, %s" % (str(val1), str(val2))) else: - raise ValueError("Maximum 2 categorical attributes to display a lengend of") + raise ValueError("Maximum 2 categorical attributes to " + "display a lengend of") if len(legend): fig.legend(patches, labels, loc='upper right') else: - # We have a trellised plot. - # First let's remove all other TrellisGrid instances from the layer list, - # including this one. + # We have a trellised plot. First let's remove all other + # TrellisGrid instances from the layer list, including this one. 
new_layers = [] for layer in self.layers: if not isinstance(layer, TrellisGrid): diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py index 6db2d2e15f699..9e64e0eeb2792 100644 --- a/pandas/tools/tests/test_merge.py +++ b/pandas/tools/tests/test_merge.py @@ -9,7 +9,7 @@ import random import pandas as pd -from pandas.compat import range, lrange, lzip, zip, StringIO +from pandas.compat import range, lrange, lzip, StringIO from pandas import compat from pandas.tseries.index import DatetimeIndex from pandas.tools.merge import merge, concat, ordered_merge, MergeError @@ -18,7 +18,8 @@ assert_almost_equal, makeCustomDataframe as mkdf, assertRaisesRegexp) -from pandas import isnull, DataFrame, Index, MultiIndex, Panel, Series, date_range, read_table, read_csv +from pandas import (isnull, DataFrame, Index, MultiIndex, Panel, + Series, date_range, read_csv) import pandas.algos as algos import pandas.util.testing as tm from numpy.testing.decorators import slow @@ -414,7 +415,7 @@ def test_join_inner_multiindex(self): data = np.random.randn(len(key1)) data = DataFrame({'key1': key1, 'key2': key2, - 'data': data}) + 'data': data}) index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two', 'three']], @@ -459,8 +460,8 @@ def test_join_hierarchical_mixed(self): def test_join_float64_float32(self): - a = DataFrame(randn(10, 2), columns=['a', 'b'], dtype = np.float64) - b = DataFrame(randn(10, 1), columns=['c'], dtype = np.float32) + a = DataFrame(randn(10, 2), columns=['a', 'b'], dtype=np.float64) + b = DataFrame(randn(10, 1), columns=['c'], dtype=np.float32) joined = a.join(b) self.assertEqual(joined.dtypes['a'], 'float64') self.assertEqual(joined.dtypes['b'], 'float64') @@ -470,7 +471,7 @@ def test_join_float64_float32(self): b = np.random.random(100).astype('float64') c = np.random.random(100).astype('float32') df = DataFrame({'a': a, 'b': b, 'c': c}) - xpdf = DataFrame({'a': a, 'b': b, 'c': c }) + xpdf = DataFrame({'a': a, 'b': b, 'c': 
c}) s = DataFrame(np.random.random(5).astype('float32'), columns=['md']) rs = df.merge(s, left_on='a', right_index=True) self.assertEqual(rs.dtypes['a'], 'int64') @@ -785,14 +786,14 @@ def test_merge_left_empty_right_notempty(self): right = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]], columns=['x', 'y', 'z']) - exp_out = pd.DataFrame({'a': np.array([np.nan]*3, dtype=object), - 'b': np.array([np.nan]*3, dtype=object), - 'c': np.array([np.nan]*3, dtype=object), + exp_out = pd.DataFrame({'a': np.array([np.nan] * 3, dtype=object), + 'b': np.array([np.nan] * 3, dtype=object), + 'c': np.array([np.nan] * 3, dtype=object), 'x': [1, 4, 7], 'y': [2, 5, 8], 'z': [3, 6, 9]}, columns=['a', 'b', 'c', 'x', 'y', 'z']) - exp_in = exp_out[0:0] # make empty DataFrame keeping dtype + exp_in = exp_out[0:0] # make empty DataFrame keeping dtype # result will have object dtype exp_in.index = exp_in.index.astype(object) @@ -820,11 +821,11 @@ def test_merge_left_notempty_right_empty(self): exp_out = pd.DataFrame({'a': [1, 4, 7], 'b': [2, 5, 8], 'c': [3, 6, 9], - 'x': np.array([np.nan]*3, dtype=object), - 'y': np.array([np.nan]*3, dtype=object), - 'z': np.array([np.nan]*3, dtype=object)}, + 'x': np.array([np.nan] * 3, dtype=object), + 'y': np.array([np.nan] * 3, dtype=object), + 'z': np.array([np.nan] * 3, dtype=object)}, columns=['a', 'b', 'c', 'x', 'y', 'z']) - exp_in = exp_out[0:0] # make empty DataFrame keeping dtype + exp_in = exp_out[0:0] # make empty DataFrame keeping dtype # result will have object dtype exp_in.index = exp_in.index.astype(object) @@ -871,24 +872,30 @@ def test_merge_nosort(self): self.assertTrue((df.var3.unique() == result.var3.unique()).all()) def test_merge_nan_right(self): - df1 = DataFrame({"i1" : [0, 1], "i2" : [0, 1]}) - df2 = DataFrame({"i1" : [0], "i3" : [0]}) + df1 = DataFrame({"i1": [0, 1], "i2": [0, 1]}) + df2 = DataFrame({"i1": [0], "i3": [0]}) result = df1.join(df2, on="i1", rsuffix="_") - expected = DataFrame({'i1': {0: 0.0, 1: 1}, 'i2': {0: 0, 1: 
1}, - 'i1_': {0: 0, 1: np.nan}, 'i3': {0: 0.0, 1: np.nan}, - None: {0: 0, 1: 0}}).set_index(None).reset_index()[['i1', 'i2', 'i1_', 'i3']] + expected = (DataFrame({'i1': {0: 0.0, 1: 1}, 'i2': {0: 0, 1: 1}, + 'i1_': {0: 0, 1: np.nan}, + 'i3': {0: 0.0, 1: np.nan}, + None: {0: 0, 1: 0}}) + .set_index(None) + .reset_index()[['i1', 'i2', 'i1_', 'i3']]) assert_frame_equal(result, expected, check_dtype=False) - df1 = DataFrame({"i1" : [0, 1], "i2" : [0.5, 1.5]}) - df2 = DataFrame({"i1" : [0], "i3" : [0.7]}) + df1 = DataFrame({"i1": [0, 1], "i2": [0.5, 1.5]}) + df2 = DataFrame({"i1": [0], "i3": [0.7]}) result = df1.join(df2, rsuffix="_", on='i1') - expected = DataFrame({'i1': {0: 0, 1: 1}, 'i1_': {0: 0.0, 1: nan}, - 'i2': {0: 0.5, 1: 1.5}, 'i3': {0: 0.69999999999999996, - 1: nan}})[['i1', 'i2', 'i1_', 'i3']] + expected = (DataFrame({'i1': {0: 0, 1: 1}, 'i1_': {0: 0.0, 1: nan}, + 'i2': {0: 0.5, 1: 1.5}, + 'i3': {0: 0.69999999999999996, + 1: nan}}) + [['i1', 'i2', 'i1_', 'i3']]) assert_frame_equal(result, expected) def test_merge_type(self): class NotADataFrame(DataFrame): + @property def _constructor(self): return NotADataFrame @@ -905,20 +912,24 @@ def test_append_dtype_coerce(self): import datetime as dt from pandas import NaT - df1 = DataFrame(index=[1,2], data=[dt.datetime(2013,1,1,0,0), - dt.datetime(2013,1,2,0,0)], + df1 = DataFrame(index=[1, 2], data=[dt.datetime(2013, 1, 1, 0, 0), + dt.datetime(2013, 1, 2, 0, 0)], columns=['start_time']) - df2 = DataFrame(index=[4,5], data=[[dt.datetime(2013,1,3,0,0), - dt.datetime(2013,1,3,6,10)], - [dt.datetime(2013,1,4,0,0), - dt.datetime(2013,1,4,7,10)]], - columns=['start_time','end_time']) - - expected = concat([ - Series([NaT,NaT,dt.datetime(2013,1,3,6,10),dt.datetime(2013,1,4,7,10)],name='end_time'), - Series([dt.datetime(2013,1,1,0,0),dt.datetime(2013,1,2,0,0),dt.datetime(2013,1,3,0,0),dt.datetime(2013,1,4,0,0)],name='start_time'), - ],axis=1) - result = df1.append(df2,ignore_index=True) + df2 = DataFrame(index=[4, 5], 
data=[[dt.datetime(2013, 1, 3, 0, 0), + dt.datetime(2013, 1, 3, 6, 10)], + [dt.datetime(2013, 1, 4, 0, 0), + dt.datetime(2013, 1, 4, 7, 10)]], + columns=['start_time', 'end_time']) + + expected = concat([Series([NaT, NaT, dt.datetime(2013, 1, 3, 6, 10), + dt.datetime(2013, 1, 4, 7, 10)], + name='end_time'), + Series([dt.datetime(2013, 1, 1, 0, 0), + dt.datetime(2013, 1, 2, 0, 0), + dt.datetime(2013, 1, 3, 0, 0), + dt.datetime(2013, 1, 4, 0, 0)], + name='start_time')], axis=1) + result = df1.append(df2, ignore_index=True) assert_frame_equal(result, expected) def test_join_append_timedeltas(self): @@ -934,18 +945,18 @@ def test_join_append_timedeltas(self): df = df.append(d, ignore_index=True) result = df.append(d, ignore_index=True) expected = DataFrame({'d': [dt.datetime(2013, 11, 5, 5, 56), - dt.datetime(2013, 11, 5, 5, 56) ], - 't': [ dt.timedelta(0, 22500), - dt.timedelta(0, 22500) ]}) + dt.datetime(2013, 11, 5, 5, 56)], + 't': [dt.timedelta(0, 22500), + dt.timedelta(0, 22500)]}) assert_frame_equal(result, expected) td = np.timedelta64(300000000) - lhs = DataFrame(Series([td,td],index=["A","B"])) - rhs = DataFrame(Series([td],index=["A"])) + lhs = DataFrame(Series([td, td], index=["A", "B"])) + rhs = DataFrame(Series([td], index=["A"])) - from pandas import NaT - result = lhs.join(rhs,rsuffix='r', how="left") - expected = DataFrame({ '0' : Series([td,td],index=list('AB')), '0r' : Series([td,NaT],index=list('AB')) }) + result = lhs.join(rhs, rsuffix='r', how="left") + expected = DataFrame({'0': Series([td, td], index=list('AB')), + '0r': Series([td, NaT], index=list('AB'))}) assert_frame_equal(result, expected) def test_overlapping_columns_error_message(self): @@ -959,10 +970,10 @@ def test_overlapping_columns_error_message(self): df.columns = ['key', 'foo', 'foo'] df2.columns = ['key', 'bar', 'bar'] expected = DataFrame({'key': [1, 2, 3], - 'v1': [4, 5, 6], - 'v2': [7, 8, 9], - 'v3': [4, 5, 6], - 'v4': [7, 8, 9]}) + 'v1': [4, 5, 6], + 'v2': [7, 8, 9], + 'v3': 
[4, 5, 6], + 'v4': [7, 8, 9]}) expected.columns = ['key', 'foo', 'foo', 'bar', 'bar'] assert_frame_equal(merge(df, df2), expected) @@ -973,48 +984,58 @@ def test_overlapping_columns_error_message(self): def test_merge_on_datetime64tz(self): # GH11405 - left = pd.DataFrame({'key' : pd.date_range('20151010',periods=2,tz='US/Eastern'), - 'value' : [1,2]}) - right = pd.DataFrame({'key' : pd.date_range('20151011',periods=3,tz='US/Eastern'), - 'value' : [1,2,3]}) - - expected = DataFrame({'key' : pd.date_range('20151010',periods=4,tz='US/Eastern'), - 'value_x' : [1,2,np.nan,np.nan], - 'value_y' : [np.nan,1,2,3]}) + left = pd.DataFrame({'key': pd.date_range('20151010', periods=2, + tz='US/Eastern'), + 'value': [1, 2]}) + right = pd.DataFrame({'key': pd.date_range('20151011', periods=3, + tz='US/Eastern'), + 'value': [1, 2, 3]}) + + expected = DataFrame({'key': pd.date_range('20151010', periods=4, + tz='US/Eastern'), + 'value_x': [1, 2, np.nan, np.nan], + 'value_y': [np.nan, 1, 2, 3]}) result = pd.merge(left, right, on='key', how='outer') assert_frame_equal(result, expected) - left = pd.DataFrame({'value' : pd.date_range('20151010',periods=2,tz='US/Eastern'), - 'key' : [1,2]}) - right = pd.DataFrame({'value' : pd.date_range('20151011',periods=2,tz='US/Eastern'), - 'key' : [2,3]}) - expected = DataFrame({'value_x' : list(pd.date_range('20151010',periods=2,tz='US/Eastern')) + [pd.NaT], - 'value_y' : [pd.NaT] + list(pd.date_range('20151011',periods=2,tz='US/Eastern')), - 'key' : [1.,2,3]}) + left = pd.DataFrame({'value': pd.date_range('20151010', periods=2, + tz='US/Eastern'), + 'key': [1, 2]}) + right = pd.DataFrame({'value': pd.date_range('20151011', periods=2, + tz='US/Eastern'), + 'key': [2, 3]}) + expected = DataFrame({ + 'value_x': list(pd.date_range('20151010', periods=2, + tz='US/Eastern')) + [pd.NaT], + 'value_y': [pd.NaT] + list(pd.date_range('20151011', periods=2, + tz='US/Eastern')), + 'key': [1., 2, 3]}) result = pd.merge(left, right, on='key', how='outer') 
assert_frame_equal(result, expected) def test_indicator(self): # PR #10054. xref #7412 and closes #8790. - df1 = DataFrame({'col1':[0,1], 'col_left':['a','b'], 'col_conflict':[1,2]}) + df1 = DataFrame({'col1': [0, 1], 'col_left': [ + 'a', 'b'], 'col_conflict': [1, 2]}) df1_copy = df1.copy() - df2 = DataFrame({'col1':[1,2,3,4,5],'col_right':[2,2,2,2,2], - 'col_conflict':[1,2,3,4,5]}) + df2 = DataFrame({'col1': [1, 2, 3, 4, 5], 'col_right': [2, 2, 2, 2, 2], + 'col_conflict': [1, 2, 3, 4, 5]}) df2_copy = df2.copy() - df_result = DataFrame({'col1':[0,1,2,3,4,5], - 'col_conflict_x':[1,2,np.nan,np.nan,np.nan,np.nan], - 'col_left':['a','b', np.nan,np.nan,np.nan,np.nan], - 'col_conflict_y':[np.nan,1,2,3,4,5], - 'col_right':[np.nan, 2,2,2,2,2]}, - dtype='float64') - df_result['_merge'] = Categorical(['left_only','both','right_only', - 'right_only','right_only','right_only'] - , categories=['left_only', 'right_only', 'both']) + df_result = DataFrame({ + 'col1': [0, 1, 2, 3, 4, 5], + 'col_conflict_x': [1, 2, np.nan, np.nan, np.nan, np.nan], + 'col_left': ['a', 'b', np.nan, np.nan, np.nan, np.nan], + 'col_conflict_y': [np.nan, 1, 2, 3, 4, 5], + 'col_right': [np.nan, 2, 2, 2, 2, 2]}, dtype='float64') + df_result['_merge'] = Categorical( + ['left_only', 'both', 'right_only', + 'right_only', 'right_only', 'right_only'], + categories=['left_only', 'right_only', 'both']) df_result = df_result[['col1', 'col_conflict_x', 'col_left', - 'col_conflict_y', 'col_right', '_merge' ]] + 'col_conflict_y', 'col_right', '_merge']] test = merge(df1, df2, on='col1', how='outer', indicator=True) assert_frame_equal(test, df_result) @@ -1027,11 +1048,14 @@ def test_indicator(self): # Check with custom name df_result_custom_name = df_result - df_result_custom_name = df_result_custom_name.rename(columns={'_merge':'custom_name'}) + df_result_custom_name = df_result_custom_name.rename( + columns={'_merge': 'custom_name'}) - test_custom_name = merge(df1, df2, on='col1', how='outer', 
indicator='custom_name') + test_custom_name = merge( + df1, df2, on='col1', how='outer', indicator='custom_name') assert_frame_equal(test_custom_name, df_result_custom_name) - test_custom_name = df1.merge(df2, on='col1', how='outer', indicator='custom_name') + test_custom_name = df1.merge( + df2, on='col1', how='outer', indicator='custom_name') assert_frame_equal(test_custom_name, df_result_custom_name) # Check only accepts strings and booleans @@ -1059,35 +1083,41 @@ def test_indicator(self): # Check if working name in df for i in ['_right_indicator', '_left_indicator', '_merge']: - df_badcolumn = DataFrame({'col1':[1,2], i:[2,2]}) + df_badcolumn = DataFrame({'col1': [1, 2], i: [2, 2]}) with tm.assertRaises(ValueError): - merge(df1, df_badcolumn, on='col1', how='outer', indicator=True) + merge(df1, df_badcolumn, on='col1', + how='outer', indicator=True) with tm.assertRaises(ValueError): df1.merge(df_badcolumn, on='col1', how='outer', indicator=True) # Check for name conflict with custom name - df_badcolumn = DataFrame({'col1':[1,2], 'custom_column_name':[2,2]}) + df_badcolumn = DataFrame( + {'col1': [1, 2], 'custom_column_name': [2, 2]}) with tm.assertRaises(ValueError): - merge(df1, df_badcolumn, on='col1', how='outer', indicator='custom_column_name') + merge(df1, df_badcolumn, on='col1', how='outer', + indicator='custom_column_name') with tm.assertRaises(ValueError): - df1.merge(df_badcolumn, on='col1', how='outer', indicator='custom_column_name') + df1.merge(df_badcolumn, on='col1', how='outer', + indicator='custom_column_name') # Merge on multiple columns - df3 = DataFrame({'col1':[0,1], 'col2':['a','b']}) + df3 = DataFrame({'col1': [0, 1], 'col2': ['a', 'b']}) - df4 = DataFrame({'col1':[1,1,3], 'col2':['b','x','y']}) + df4 = DataFrame({'col1': [1, 1, 3], 'col2': ['b', 'x', 'y']}) - hand_coded_result = DataFrame({'col1':[0,1,1,3.0], - 'col2':['a','b','x','y']}) + hand_coded_result = DataFrame({'col1': [0, 1, 1, 3.0], + 'col2': ['a', 'b', 'x', 'y']}) 
hand_coded_result['_merge'] = Categorical( - ['left_only','both','right_only','right_only'] - , categories=['left_only', 'right_only', 'both']) + ['left_only', 'both', 'right_only', 'right_only'], + categories=['left_only', 'right_only', 'both']) - test5 = merge(df3, df4, on=['col1', 'col2'], how='outer', indicator=True) + test5 = merge(df3, df4, on=['col1', 'col2'], + how='outer', indicator=True) assert_frame_equal(test5, hand_coded_result) - test5 = df3.merge(df4, on=['col1', 'col2'], how='outer', indicator=True) + test5 = df3.merge(df4, on=['col1', 'col2'], + how='outer', indicator=True) assert_frame_equal(test5, hand_coded_result) @@ -1099,7 +1129,8 @@ def _check_merge(x, y): sort=True) expected = expected.set_index('index') - assert_frame_equal(result, expected, check_names=False) # TODO check_names on merge? + # TODO check_names on merge? + assert_frame_equal(result, expected, check_names=False) class TestMergeMulti(tm.TestCase): @@ -1147,7 +1178,8 @@ def test_left_join_multi_index(self): def bind_cols(df): iord = lambda a: 0 if a != a else ord(a) f = lambda ts: ts.map(iord) - ord('a') - return f(df['1st']) + f(df['3rd'])* 1e2 + df['2nd'].fillna(0) * 1e4 + return (f(df['1st']) + f(df['3rd']) * 1e2 + + df['2nd'].fillna(0) * 1e4) def run_asserts(left, right): for sort in [False, True]: @@ -1157,14 +1189,15 @@ def run_asserts(left, right): self.assertFalse(res['4th'].isnull().any()) self.assertFalse(res['5th'].isnull().any()) - tm.assert_series_equal(res['4th'], - res['5th'], check_names=False) + tm.assert_series_equal( + res['4th'], - res['5th'], check_names=False) result = bind_cols(res.iloc[:, :-2]) tm.assert_series_equal(res['4th'], result, check_names=False) self.assertTrue(result.name is None) if sort: - tm.assert_frame_equal(res, - res.sort_values(icols, kind='mergesort')) + tm.assert_frame_equal( + res, res.sort_values(icols, kind='mergesort')) out = merge(left, right.reset_index(), on=icols, sort=sort, how='left') @@ -1203,10 +1236,11 @@ def 
test_merge_right_vs_left(self): # compare left vs right merge with multikey for sort in [False, True]: merged1 = self.data.merge(self.to_join, left_on=['key1', 'key2'], - right_index=True, how='left', sort=sort) + right_index=True, how='left', sort=sort) merged2 = self.to_join.merge(self.data, right_on=['key1', 'key2'], - left_index=True, how='right', sort=sort) + left_index=True, how='right', + sort=sort) merged2 = merged2.ix[:, merged1.columns] assert_frame_equal(merged1, merged2) @@ -1225,13 +1259,13 @@ def test_compress_group_combinations(self): 'value2': np.random.randn(10000)}) # just to hit the label compression code path - merged = merge(df, df2, how='outer') + merge(df, df2, how='outer') def test_left_join_index_preserve_order(self): left = DataFrame({'k1': [0, 1, 2] * 8, 'k2': ['foo', 'bar'] * 12, - 'v': np.array(np.arange(24),dtype=np.int64) }) + 'v': np.array(np.arange(24), dtype=np.int64)}) index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')]) right = DataFrame({'v2': [5, 7]}, index=index) @@ -1240,18 +1274,19 @@ def test_left_join_index_preserve_order(self): expected = left.copy() expected['v2'] = np.nan - expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'),'v2'] = 5 - expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'),'v2'] = 7 + expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'), 'v2'] = 5 + expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'), 'v2'] = 7 tm.assert_frame_equal(result, expected) - tm.assert_frame_equal(result.sort_values(['k1', 'k2'], kind='mergesort'), - left.join(right, on=['k1', 'k2'], sort=True)) + tm.assert_frame_equal( + result.sort_values(['k1', 'k2'], kind='mergesort'), + left.join(right, on=['k1', 'k2'], sort=True)) # test join with multi dtypes blocks left = DataFrame({'k1': [0, 1, 2] * 8, 'k2': ['foo', 'bar'] * 12, - 'k3' : np.array([0, 1, 2]*8, dtype=np.float32), - 'v': np.array(np.arange(24),dtype=np.int32) }) + 'k3': np.array([0, 1, 2] * 8, dtype=np.float32), + 'v': np.array(np.arange(24), 
dtype=np.int32)}) index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')]) right = DataFrame({'v2': [5, 7]}, index=index) @@ -1260,12 +1295,13 @@ def test_left_join_index_preserve_order(self): expected = left.copy() expected['v2'] = np.nan - expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'),'v2'] = 5 - expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'),'v2'] = 7 + expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'), 'v2'] = 5 + expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'), 'v2'] = 7 tm.assert_frame_equal(result, expected) - tm.assert_frame_equal(result.sort_values(['k1', 'k2'], kind='mergesort'), - left.join(right, on=['k1', 'k2'], sort=True)) + tm.assert_frame_equal( + result.sort_values(['k1', 'k2'], kind='mergesort'), + left.join(right, on=['k1', 'k2'], sort=True)) # do a right join for an extra test joined = merge(right, left, left_index=True, @@ -1288,18 +1324,18 @@ def test_left_join_index_multi_match_multiindex(self): index=[3, 2, 0, 1, 7, 6, 4, 5, 9, 8]) right = DataFrame([ - ['W', 'R', 'C', 0], - ['W', 'Q', 'B', 3], - ['W', 'Q', 'B', 8], - ['X', 'Y', 'A', 1], - ['X', 'Y', 'A', 4], - ['X', 'Y', 'B', 5], - ['X', 'Y', 'C', 6], - ['X', 'Y', 'C', 9], + ['W', 'R', 'C', 0], + ['W', 'Q', 'B', 3], + ['W', 'Q', 'B', 8], + ['X', 'Y', 'A', 1], + ['X', 'Y', 'A', 4], + ['X', 'Y', 'B', 5], + ['X', 'Y', 'C', 6], + ['X', 'Y', 'C', 9], ['X', 'Q', 'C', -6], ['X', 'R', 'C', -9], - ['V', 'Y', 'C', 7], - ['V', 'R', 'D', 2], + ['V', 'Y', 'C', 7], + ['V', 'R', 'D', 2], ['V', 'R', 'D', -1], ['V', 'Q', 'A', -3]], columns=['col1', 'col2', 'col3', 'val']) @@ -1308,20 +1344,20 @@ def test_left_join_index_multi_match_multiindex(self): result = left.join(right, on=['cola', 'colb', 'colc'], how='left') expected = DataFrame([ - ['X', 'Y', 'C', 'a', 6], - ['X', 'Y', 'C', 'a', 9], + ['X', 'Y', 'C', 'a', 6], + ['X', 'Y', 'C', 'a', 9], ['W', 'Y', 'C', 'e', nan], - ['V', 'Q', 'A', 'h', -3], - ['V', 'R', 'D', 'i', 2], - ['V', 'R', 'D', 'i', -1], + ['V', 'Q', 'A', 
'h', -3], + ['V', 'R', 'D', 'i', 2], + ['V', 'R', 'D', 'i', -1], ['X', 'Y', 'D', 'b', nan], - ['X', 'Y', 'A', 'c', 1], - ['X', 'Y', 'A', 'c', 4], - ['W', 'Q', 'B', 'f', 3], - ['W', 'Q', 'B', 'f', 8], - ['W', 'R', 'C', 'g', 0], - ['V', 'Y', 'C', 'j', 7], - ['X', 'Y', 'B', 'd', 5]], + ['X', 'Y', 'A', 'c', 1], + ['X', 'Y', 'A', 'c', 4], + ['W', 'Q', 'B', 'f', 3], + ['W', 'Q', 'B', 'f', 8], + ['W', 'R', 'C', 'g', 0], + ['V', 'Y', 'C', 'j', 7], + ['X', 'Y', 'B', 'd', 5]], columns=['cola', 'colb', 'colc', 'tag', 'val'], index=[3, 3, 2, 0, 1, 1, 7, 6, 6, 4, 4, 5, 9, 8]) @@ -1330,8 +1366,9 @@ def test_left_join_index_multi_match_multiindex(self): result = left.join(right, on=['cola', 'colb', 'colc'], how='left', sort=True) - tm.assert_frame_equal(result, - expected.sort_values(['cola', 'colb', 'colc'], kind='mergesort')) + tm.assert_frame_equal( + result, + expected.sort_values(['cola', 'colb', 'colc'], kind='mergesort')) # GH7331 - maintain left frame order in left merge right.reset_index(inplace=True) @@ -1378,7 +1415,8 @@ def test_left_join_index_multi_match(self): tm.assert_frame_equal(result, expected) result = left.join(right, on='tag', how='left', sort=True) - tm.assert_frame_equal(result, expected.sort_values('tag', kind='mergesort')) + tm.assert_frame_equal( + result, expected.sort_values('tag', kind='mergesort')) # GH7331 - maintain left frame order in left merge result = merge(left, right.reset_index(), how='left', on='tag') @@ -1388,13 +1426,14 @@ def test_left_join_index_multi_match(self): def test_join_multi_dtypes(self): # test with multi dtypes in the join index - def _test(dtype1,dtype2): + def _test(dtype1, dtype2): left = DataFrame({'k1': np.array([0, 1, 2] * 8, dtype=dtype1), 'k2': ['foo', 'bar'] * 12, - 'v': np.array(np.arange(24),dtype=np.int64) }) + 'v': np.array(np.arange(24), dtype=np.int64)}) index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')]) - right = DataFrame({'v2': np.array([5, 7], dtype=dtype2)}, index=index) + right = DataFrame( + 
{'v2': np.array([5, 7], dtype=dtype2)}, index=index) result = left.join(right, on=['k1', 'k2']) @@ -1402,9 +1441,9 @@ def _test(dtype1,dtype2): if dtype2.kind == 'i': dtype2 = np.dtype('float64') - expected['v2'] = np.array(np.nan,dtype=dtype2) - expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'),'v2'] = 5 - expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'),'v2'] = 7 + expected['v2'] = np.array(np.nan, dtype=dtype2) + expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'), 'v2'] = 5 + expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'), 'v2'] = 7 tm.assert_frame_equal(result, expected) @@ -1412,9 +1451,9 @@ def _test(dtype1,dtype2): expected.sort_values(['k1', 'k2'], kind='mergesort', inplace=True) tm.assert_frame_equal(result, expected) - for d1 in [np.int64,np.int32,np.int16,np.int8,np.uint8]: - for d2 in [np.int64,np.float64,np.float32,np.float16]: - _test(np.dtype(d1),np.dtype(d2)) + for d1 in [np.int64, np.int32, np.int16, np.int8, np.uint8]: + for d2 in [np.int64, np.float64, np.float32, np.float16]: + _test(np.dtype(d1), np.dtype(d2)) def test_left_merge_na_buglet(self): left = DataFrame({'id': list('abcde'), 'v1': randn(5), @@ -1517,7 +1556,8 @@ def test_int64_overflow_issues(self): # add duplicates to left frame left = concat([left, left], ignore_index=True) - right = DataFrame(np.random.randint(low, high, (n // 2, 7)).astype('int64'), + right = DataFrame(np.random.randint(low, high, (n // 2, 7)) + .astype('int64'), columns=list('ABCDEFG')) # add duplicates & overlap with left to the right frame @@ -1588,52 +1628,78 @@ def verify_order(df): assert_frame_equal(frame, align(res), check_dtype=how not in ('right', 'outer')) - def test_join_multi_levels(self): # GH 3662 # merge multi-levels - - household = DataFrame(dict(household_id = [1,2,3], - male = [0,1,0], - wealth = [196087.3,316478.7,294750]), - columns = ['household_id','male','wealth']).set_index('household_id') - portfolio = DataFrame(dict(household_id = [1,2,2,3,3,3,4], - 
asset_id = ["nl0000301109","nl0000289783","gb00b03mlx29","gb00b03mlx29","lu0197800237","nl0000289965",np.nan], - name = ["ABN Amro","Robeco","Royal Dutch Shell","Royal Dutch Shell","AAB Eastern Europe Equity Fund","Postbank BioTech Fonds",np.nan], - share = [1.0,0.4,0.6,0.15,0.6,0.25,1.0]), - columns = ['household_id','asset_id','name','share']).set_index(['household_id','asset_id']) + household = ( + DataFrame( + dict(household_id=[1, 2, 3], + male=[0, 1, 0], + wealth=[196087.3, 316478.7, 294750]), + columns=['household_id', 'male', 'wealth']) + .set_index('household_id')) + portfolio = ( + DataFrame( + dict(household_id=[1, 2, 2, 3, 3, 3, 4], + asset_id=["nl0000301109", "nl0000289783", "gb00b03mlx29", + "gb00b03mlx29", "lu0197800237", "nl0000289965", + np.nan], + name=["ABN Amro", "Robeco", "Royal Dutch Shell", + "Royal Dutch Shell", + "AAB Eastern Europe Equity Fund", + "Postbank BioTech Fonds", np.nan], + share=[1.0, 0.4, 0.6, 0.15, 0.6, 0.25, 1.0]), + columns=['household_id', 'asset_id', 'name', 'share']) + .set_index(['household_id', 'asset_id'])) result = household.join(portfolio, how='inner') - expected = DataFrame(dict(male = [0,1,1,0,0,0], - wealth = [ 196087.3, 316478.7, 316478.7, 294750.0, 294750.0, 294750.0 ], - name = ['ABN Amro','Robeco','Royal Dutch Shell','Royal Dutch Shell','AAB Eastern Europe Equity Fund','Postbank BioTech Fonds'], - share = [1.00,0.40,0.60,0.15,0.60,0.25], - household_id = [1,2,2,3,3,3], - asset_id = ['nl0000301109','nl0000289783','gb00b03mlx29','gb00b03mlx29','lu0197800237','nl0000289965']), - ).set_index(['household_id','asset_id']).reindex(columns=['male','wealth','name','share']) - assert_frame_equal(result,expected) + expected = ( + DataFrame( + dict(male=[0, 1, 1, 0, 0, 0], + wealth=[196087.3, 316478.7, 316478.7, + 294750.0, 294750.0, 294750.0], + name=['ABN Amro', 'Robeco', 'Royal Dutch Shell', + 'Royal Dutch Shell', + 'AAB Eastern Europe Equity Fund', + 'Postbank BioTech Fonds'], + share=[1.00, 0.40, 0.60, 0.15, 0.60, 
0.25], + household_id=[1, 2, 2, 3, 3, 3], + asset_id=['nl0000301109', 'nl0000289783', 'gb00b03mlx29', + 'gb00b03mlx29', 'lu0197800237', + 'nl0000289965'])) + .set_index(['household_id', 'asset_id']) + .reindex(columns=['male', 'wealth', 'name', 'share'])) + assert_frame_equal(result, expected) - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # equivalency - result2 = merge(household.reset_index(),portfolio.reset_index(),on=['household_id'],how='inner').set_index(['household_id','asset_id']) - assert_frame_equal(result2,expected) + result2 = (merge(household.reset_index(), portfolio.reset_index(), + on=['household_id'], how='inner') + .set_index(['household_id', 'asset_id'])) + assert_frame_equal(result2, expected) result = household.join(portfolio, how='outer') - expected = concat([expected,DataFrame(dict(share = [1.00]), - index=MultiIndex.from_tuples([(4,np.nan)], - names=['household_id','asset_id']))], - axis=0).reindex(columns=expected.columns) - assert_frame_equal(result,expected) + expected = (concat([ + expected, + (DataFrame( + dict(share=[1.00]), + index=MultiIndex.from_tuples( + [(4, np.nan)], + names=['household_id', 'asset_id']))) + ], axis=0).reindex(columns=expected.columns)) + assert_frame_equal(result, expected) # invalid cases household.index.name = 'foo' + def f(): household.join(portfolio, how='inner') self.assertRaises(ValueError, f) portfolio2 = portfolio.copy() - portfolio2.index.set_names(['household_id','foo']) + portfolio2.index.set_names(['household_id', 'foo']) + def f(): portfolio2.join(portfolio, how='inner') self.assertRaises(ValueError, f) @@ -1642,45 +1708,72 @@ def test_join_multi_levels2(self): # some more advanced merges # GH6360 - household = DataFrame(dict(household_id = [1,2,2,3,3,3,4], - asset_id = ["nl0000301109","nl0000301109","gb00b03mlx29","gb00b03mlx29","lu0197800237","nl0000289965",np.nan], - share = [1.0,0.4,0.6,0.15,0.6,0.25,1.0]), - columns = 
['household_id','asset_id','share']).set_index(['household_id','asset_id']) + household = ( + DataFrame( + dict(household_id=[1, 2, 2, 3, 3, 3, 4], + asset_id=["nl0000301109", "nl0000301109", "gb00b03mlx29", + "gb00b03mlx29", "lu0197800237", "nl0000289965", + np.nan], + share=[1.0, 0.4, 0.6, 0.15, 0.6, 0.25, 1.0]), + columns=['household_id', 'asset_id', 'share']) + .set_index(['household_id', 'asset_id'])) log_return = DataFrame(dict( - asset_id = ["gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", "lu0197800237", "lu0197800237"], - t = [233, 234, 235, 180, 181], - log_return = [.09604978, -.06524096, .03532373, .03025441, .036997] - )).set_index(["asset_id","t"]) - - expected = DataFrame(dict( - household_id = [2, 2, 2, 3, 3, 3, 3, 3], - asset_id = ["gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", "lu0197800237", "lu0197800237"], - t = [233, 234, 235, 233, 234, 235, 180, 181], - share = [0.6, 0.6, 0.6, 0.15, 0.15, 0.15, 0.6, 0.6], - log_return = [.09604978, -.06524096, .03532373, .09604978, -.06524096, .03532373, .03025441, .036997] - )).set_index(["household_id", "asset_id", "t"]).reindex(columns=['share','log_return']) + asset_id=["gb00b03mlx29", "gb00b03mlx29", + "gb00b03mlx29", "lu0197800237", "lu0197800237"], + t=[233, 234, 235, 180, 181], + log_return=[.09604978, -.06524096, .03532373, .03025441, .036997] + )).set_index(["asset_id", "t"]) + + expected = ( + DataFrame(dict( + household_id=[2, 2, 2, 3, 3, 3, 3, 3], + asset_id=["gb00b03mlx29", "gb00b03mlx29", + "gb00b03mlx29", "gb00b03mlx29", + "gb00b03mlx29", "gb00b03mlx29", + "lu0197800237", "lu0197800237"], + t=[233, 234, 235, 233, 234, 235, 180, 181], + share=[0.6, 0.6, 0.6, 0.15, 0.15, 0.15, 0.6, 0.6], + log_return=[.09604978, -.06524096, .03532373, + .09604978, -.06524096, .03532373, + .03025441, .036997] + )) + .set_index(["household_id", "asset_id", "t"]) + .reindex(columns=['share', 'log_return'])) def f(): household.join(log_return, how='inner') 
self.assertRaises(NotImplementedError, f) # this is the equivalency - result = merge(household.reset_index(),log_return.reset_index(),on=['asset_id'],how='inner').set_index(['household_id','asset_id','t']) - assert_frame_equal(result,expected) + result = (merge(household.reset_index(), log_return.reset_index(), + on=['asset_id'], how='inner') + .set_index(['household_id', 'asset_id', 't'])) + assert_frame_equal(result, expected) - expected = DataFrame(dict( - household_id = [1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4], - asset_id = ["nl0000301109", "nl0000289783", "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", "lu0197800237", "lu0197800237", "nl0000289965", None], - t = [None, None, 233, 234, 235, 233, 234, 235, 180, 181, None, None], - share = [1.0, 0.4, 0.6, 0.6, 0.6, 0.15, 0.15, 0.15, 0.6, 0.6, 0.25, 1.0], - log_return = [None, None, .09604978, -.06524096, .03532373, .09604978, -.06524096, .03532373, .03025441, .036997, None, None] - )).set_index(["household_id", "asset_id", "t"]) + expected = ( + DataFrame(dict( + household_id=[1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4], + asset_id=["nl0000301109", "nl0000289783", "gb00b03mlx29", + "gb00b03mlx29", "gb00b03mlx29", + "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29", + "lu0197800237", "lu0197800237", + "nl0000289965", None], + t=[None, None, 233, 234, 235, 233, 234, + 235, 180, 181, None, None], + share=[1.0, 0.4, 0.6, 0.6, 0.6, 0.15, + 0.15, 0.15, 0.6, 0.6, 0.25, 1.0], + log_return=[None, None, .09604978, -.06524096, .03532373, + .09604978, -.06524096, .03532373, + .03025441, .036997, None, None] + )) + .set_index(["household_id", "asset_id", "t"])) def f(): household.join(log_return, how='outer') self.assertRaises(NotImplementedError, f) + def _check_join(left, right, result, join_col, how='left', lsuffix='_x', rsuffix='_y'): @@ -1722,7 +1815,7 @@ def _restrict_to_columns(group, columns, suffix): found = [c for c in group.columns if c in columns or c.replace(suffix, '') in 
columns] - # filter + # filter group = group.ix[:, found] # get rid of suffixes, if any @@ -1823,7 +1916,8 @@ def test_append(self): # GH 6129 df = DataFrame({'a': {'x': 1, 'y': 2}, 'b': {'x': 3, 'y': 4}}) row = Series([5, 6, 7], index=['a', 'b', 'c'], name='z') - expected = DataFrame({'a': {'x': 1, 'y': 2, 'z': 5}, 'b': {'x': 3, 'y': 4, 'z': 6}, 'c' : {'z' : 7}}) + expected = DataFrame({'a': {'x': 1, 'y': 2, 'z': 5}, 'b': { + 'x': 3, 'y': 4, 'z': 6}, 'c': {'z': 7}}) result = df.append(row) assert_frame_equal(result, expected) @@ -1938,32 +2032,35 @@ def test_append_missing_column_proper_upcast(self): def test_concat_copy(self): df = DataFrame(np.random.randn(4, 3)) - df2 = DataFrame(np.random.randint(0,10,size=4).reshape(4,1)) - df3 = DataFrame({5 : 'foo'},index=range(4)) + df2 = DataFrame(np.random.randint(0, 10, size=4).reshape(4, 1)) + df3 = DataFrame({5: 'foo'}, index=range(4)) # these are actual copies - result = concat([df,df2,df3],axis=1,copy=True) + result = concat([df, df2, df3], axis=1, copy=True) for b in result._data.blocks: self.assertIsNone(b.values.base) # these are the same - result = concat([df,df2,df3],axis=1,copy=False) + result = concat([df, df2, df3], axis=1, copy=False) for b in result._data.blocks: if b.is_float: - self.assertTrue(b.values.base is df._data.blocks[0].values.base) + self.assertTrue( + b.values.base is df._data.blocks[0].values.base) elif b.is_integer: - self.assertTrue(b.values.base is df2._data.blocks[0].values.base) + self.assertTrue( + b.values.base is df2._data.blocks[0].values.base) elif b.is_object: self.assertIsNotNone(b.values.base) # float block was consolidated - df4 = DataFrame(np.random.randn(4,1)) - result = concat([df,df2,df3,df4],axis=1,copy=False) + df4 = DataFrame(np.random.randn(4, 1)) + result = concat([df, df2, df3, df4], axis=1, copy=False) for b in result._data.blocks: if b.is_float: self.assertIsNone(b.values.base) elif b.is_integer: - self.assertTrue(b.values.base is df2._data.blocks[0].values.base) + 
self.assertTrue( + b.values.base is df2._data.blocks[0].values.base) elif b.is_object: self.assertIsNotNone(b.values.base) @@ -1984,7 +2081,7 @@ def test_concat_with_group_keys(self): result = concat([df, df], keys=[0, 1]) exp_index2 = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1], - [0, 1, 2, 0, 1, 2]]) + [0, 1, 2, 0, 1, 2]]) expected = DataFrame(np.r_[df.values, df.values], index=exp_index2) tm.assert_frame_equal(result, expected) @@ -2015,10 +2112,11 @@ def test_concat_keys_specific_levels(self): self.assertEqual(result.columns.names[0], 'group_key') def test_concat_dataframe_keys_bug(self): - t1 = DataFrame({'value': Series([1, 2, 3], - index=Index(['a', 'b', 'c'], name='id'))}) - t2 = DataFrame({'value': Series([7, 8], - index=Index(['a', 'b'], name='id'))}) + t1 = DataFrame({ + 'value': Series([1, 2, 3], index=Index(['a', 'b', 'c'], + name='id'))}) + t2 = DataFrame({ + 'value': Series([7, 8], index=Index(['a', 'b'], name='id'))}) # it works result = concat([t1, t2], axis=1, keys=['t1', 't2']) @@ -2027,20 +2125,23 @@ def test_concat_dataframe_keys_bug(self): def test_concat_series_partial_columns_names(self): # GH10698 - foo = Series([1,2], name='foo') - bar = Series([1,2]) - baz = Series([4,5]) + foo = Series([1, 2], name='foo') + bar = Series([1, 2]) + baz = Series([4, 5]) result = concat([foo, bar, baz], axis=1) - expected = DataFrame({'foo' : [1,2], 0 : [1,2], 1 : [4,5]}, columns=['foo',0,1]) + expected = DataFrame({'foo': [1, 2], 0: [1, 2], 1: [ + 4, 5]}, columns=['foo', 0, 1]) tm.assert_frame_equal(result, expected) - result = concat([foo, bar, baz], axis=1, keys=['red','blue','yellow']) - expected = DataFrame({'red' : [1,2], 'blue' : [1,2], 'yellow' : [4,5]}, columns=['red','blue','yellow']) + result = concat([foo, bar, baz], axis=1, keys=[ + 'red', 'blue', 'yellow']) + expected = DataFrame({'red': [1, 2], 'blue': [1, 2], 'yellow': [ + 4, 5]}, columns=['red', 'blue', 'yellow']) tm.assert_frame_equal(result, expected) result = concat([foo, bar, baz], 
axis=1, ignore_index=True) - expected = DataFrame({0 : [1,2], 1 : [1,2], 2 : [4,5]}) + expected = DataFrame({0: [1, 2], 1: [1, 2], 2: [4, 5]}) tm.assert_frame_equal(result, expected) def test_concat_dict(self): @@ -2109,8 +2210,9 @@ def test_concat_multiindex_with_tz(self): df['dt'] = df['dt'].apply(lambda d: Timestamp(d, tz='US/Pacific')) df = df.set_index(['dt', 'b']) - exp_idx1 = DatetimeIndex(['2014-01-01', '2014-01-02', '2014-01-03'] * 2, - tz='US/Pacific', name='dt') + exp_idx1 = DatetimeIndex(['2014-01-01', '2014-01-02', + '2014-01-03'] * 2, + tz='US/Pacific', name='dt') exp_idx2 = Index(['A', 'B', 'C'] * 2, name='b') exp_idx = MultiIndex.from_arrays([exp_idx1, exp_idx2]) expected = DataFrame({'c': [1, 2, 3] * 2, 'd': [4, 5, 6] * 2}, @@ -2214,70 +2316,81 @@ def test_dups_index(self): # GH 4771 # single dtypes - df = DataFrame(np.random.randint(0,10,size=40).reshape(10,4),columns=['A','A','C','C']) + df = DataFrame(np.random.randint(0, 10, size=40).reshape( + 10, 4), columns=['A', 'A', 'C', 'C']) - result = concat([df,df],axis=1) - assert_frame_equal(result.iloc[:,:4],df) - assert_frame_equal(result.iloc[:,4:],df) + result = concat([df, df], axis=1) + assert_frame_equal(result.iloc[:, :4], df) + assert_frame_equal(result.iloc[:, 4:], df) - result = concat([df,df],axis=0) - assert_frame_equal(result.iloc[:10],df) - assert_frame_equal(result.iloc[10:],df) + result = concat([df, df], axis=0) + assert_frame_equal(result.iloc[:10], df) + assert_frame_equal(result.iloc[10:], df) # multi dtypes - df = concat([DataFrame(np.random.randn(10,4),columns=['A','A','B','B']), - DataFrame(np.random.randint(0,10,size=20).reshape(10,2),columns=['A','C'])], + df = concat([DataFrame(np.random.randn(10, 4), + columns=['A', 'A', 'B', 'B']), + DataFrame(np.random.randint(0, 10, size=20) + .reshape(10, 2), + columns=['A', 'C'])], axis=1) - result = concat([df,df],axis=1) - assert_frame_equal(result.iloc[:,:6],df) - assert_frame_equal(result.iloc[:,6:],df) + result = concat([df, df], 
axis=1) + assert_frame_equal(result.iloc[:, :6], df) + assert_frame_equal(result.iloc[:, 6:], df) - result = concat([df,df],axis=0) - assert_frame_equal(result.iloc[:10],df) - assert_frame_equal(result.iloc[10:],df) + result = concat([df, df], axis=0) + assert_frame_equal(result.iloc[:10], df) + assert_frame_equal(result.iloc[10:], df) # append - result = df.iloc[0:8,:].append(df.iloc[8:]) + result = df.iloc[0:8, :].append(df.iloc[8:]) assert_frame_equal(result, df) - result = df.iloc[0:8,:].append(df.iloc[8:9]).append(df.iloc[9:10]) + result = df.iloc[0:8, :].append(df.iloc[8:9]).append(df.iloc[9:10]) assert_frame_equal(result, df) - expected = concat([df,df],axis=0) + expected = concat([df, df], axis=0) result = df.append(df) assert_frame_equal(result, expected) def test_with_mixed_tuples(self): # 10697 # columns have mixed tuples, so handle properly - df1 = DataFrame({ u'A' : 'foo', (u'B',1) : 'bar' },index=range(2)) - df2 = DataFrame({ u'B' : 'foo', (u'B',1) : 'bar' },index=range(2)) - result = concat([df1,df2]) + df1 = DataFrame({u'A': 'foo', (u'B', 1): 'bar'}, index=range(2)) + df2 = DataFrame({u'B': 'foo', (u'B', 1): 'bar'}, index=range(2)) + + # it works + concat([df1, df2]) def test_join_dups(self): # joining dups - df = concat([DataFrame(np.random.randn(10,4),columns=['A','A','B','B']), - DataFrame(np.random.randint(0,10,size=20).reshape(10,2),columns=['A','C'])], + df = concat([DataFrame(np.random.randn(10, 4), + columns=['A', 'A', 'B', 'B']), + DataFrame(np.random.randint(0, 10, size=20) + .reshape(10, 2), + columns=['A', 'C'])], axis=1) - expected = concat([df,df],axis=1) - result = df.join(df,rsuffix='_2') + expected = concat([df, df], axis=1) + result = df.join(df, rsuffix='_2') result.columns = expected.columns assert_frame_equal(result, expected) # GH 4975, invalid join on dups - w = DataFrame(np.random.randn(4,2), columns=["x", "y"]) - x = DataFrame(np.random.randn(4,2), columns=["x", "y"]) - y = DataFrame(np.random.randn(4,2), columns=["x", "y"]) 
- z = DataFrame(np.random.randn(4,2), columns=["x", "y"]) + w = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) + x = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) + y = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) + z = DataFrame(np.random.randn(4, 2), columns=["x", "y"]) - dta = x.merge(y, left_index=True, right_index=True).merge(z, left_index=True, right_index=True, how="outer") + dta = x.merge(y, left_index=True, right_index=True).merge( + z, left_index=True, right_index=True, how="outer") dta = dta.merge(w, left_index=True, right_index=True) - expected = concat([x,y,z,w],axis=1) - expected.columns=['x_x','y_x','x_y','y_y','x_x','y_x','x_y','y_y'] - assert_frame_equal(dta,expected) + expected = concat([x, y, z, w], axis=1) + expected.columns = ['x_x', 'y_x', 'x_y', + 'y_y', 'x_x', 'y_x', 'x_y', 'y_y'] + assert_frame_equal(dta, expected) def test_handle_empty_objects(self): df = DataFrame(np.random.randn(10, 4), columns=list('abcd')) @@ -2291,22 +2404,23 @@ def test_handle_empty_objects(self): expected = df.ix[:, ['a', 'b', 'c', 'd', 'foo']] expected['foo'] = expected['foo'].astype('O') - expected.loc[0:4,'foo'] = 'bar' + expected.loc[0:4, 'foo'] = 'bar' tm.assert_frame_equal(concatted, expected) # empty as first element with time series # GH3259 - df = DataFrame(dict(A = range(10000)),index=date_range('20130101',periods=10000,freq='s')) + df = DataFrame(dict(A=range(10000)), index=date_range( + '20130101', periods=10000, freq='s')) empty = DataFrame() - result = concat([df,empty],axis=1) + result = concat([df, empty], axis=1) assert_frame_equal(result, df) - result = concat([empty,df],axis=1) + result = concat([empty, df], axis=1) assert_frame_equal(result, df) - result = concat([df,empty]) + result = concat([df, empty]) assert_frame_equal(result, df) - result = concat([empty,df]) + result = concat([empty, df]) assert_frame_equal(result, df) def test_concat_mixed_objs(self): @@ -2315,56 +2429,64 @@ def test_concat_mixed_objs(self): # G2385 # 
axis 1 - index=date_range('01-Jan-2013', periods=10, freq='H') + index = date_range('01-Jan-2013', periods=10, freq='H') arr = np.arange(10, dtype='int64') s1 = Series(arr, index=index) s2 = Series(arr, index=index) - df = DataFrame(arr.reshape(-1,1), index=index) + df = DataFrame(arr.reshape(-1, 1), index=index) - expected = DataFrame(np.repeat(arr,2).reshape(-1,2), index=index, columns = [0, 0]) - result = concat([df,df], axis=1) + expected = DataFrame(np.repeat(arr, 2).reshape(-1, 2), + index=index, columns=[0, 0]) + result = concat([df, df], axis=1) assert_frame_equal(result, expected) - expected = DataFrame(np.repeat(arr,2).reshape(-1,2), index=index, columns = [0, 1]) - result = concat([s1,s2], axis=1) + expected = DataFrame(np.repeat(arr, 2).reshape(-1, 2), + index=index, columns=[0, 1]) + result = concat([s1, s2], axis=1) assert_frame_equal(result, expected) - expected = DataFrame(np.repeat(arr,3).reshape(-1,3), index=index, columns = [0, 1, 2]) - result = concat([s1,s2,s1], axis=1) + expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), + index=index, columns=[0, 1, 2]) + result = concat([s1, s2, s1], axis=1) assert_frame_equal(result, expected) - expected = DataFrame(np.repeat(arr,5).reshape(-1,5), index=index, columns = [0, 0, 1, 2, 3]) - result = concat([s1,df,s2,s2,s1], axis=1) + expected = DataFrame(np.repeat(arr, 5).reshape(-1, 5), + index=index, columns=[0, 0, 1, 2, 3]) + result = concat([s1, df, s2, s2, s1], axis=1) assert_frame_equal(result, expected) # with names s1.name = 'foo' - expected = DataFrame(np.repeat(arr,3).reshape(-1,3), index=index, columns = ['foo', 0, 0]) - result = concat([s1,df,s2], axis=1) + expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), + index=index, columns=['foo', 0, 0]) + result = concat([s1, df, s2], axis=1) assert_frame_equal(result, expected) s2.name = 'bar' - expected = DataFrame(np.repeat(arr,3).reshape(-1,3), index=index, columns = ['foo', 0, 'bar']) - result = concat([s1,df,s2], axis=1) + expected = 
DataFrame(np.repeat(arr, 3).reshape(-1, 3), + index=index, columns=['foo', 0, 'bar']) + result = concat([s1, df, s2], axis=1) assert_frame_equal(result, expected) # ignore index - expected = DataFrame(np.repeat(arr,3).reshape(-1,3), index=index, columns = [0, 1, 2]) - result = concat([s1,df,s2], axis=1, ignore_index=True) + expected = DataFrame(np.repeat(arr, 3).reshape(-1, 3), + index=index, columns=[0, 1, 2]) + result = concat([s1, df, s2], axis=1, ignore_index=True) assert_frame_equal(result, expected) # axis 0 - expected = DataFrame(np.tile(arr,3).reshape(-1,1), index=index.tolist() * 3, columns = [0]) - result = concat([s1,df,s2]) + expected = DataFrame(np.tile(arr, 3).reshape(-1, 1), + index=index.tolist() * 3, columns=[0]) + result = concat([s1, df, s2]) assert_frame_equal(result, expected) - expected = DataFrame(np.tile(arr,3).reshape(-1,1), columns = [0]) - result = concat([s1,df,s2], ignore_index=True) + expected = DataFrame(np.tile(arr, 3).reshape(-1, 1), columns=[0]) + result = concat([s1, df, s2], ignore_index=True) assert_frame_equal(result, expected) # invalid concatente of mixed dims panel = tm.makePanel() - self.assertRaises(ValueError, lambda : concat([panel,s1],axis=1)) + self.assertRaises(ValueError, lambda: concat([panel, s1], axis=1)) def test_panel_join(self): panel = tm.makePanel() @@ -2576,7 +2698,8 @@ def test_concat_series_axis1(self): s2.name = None result = concat([s, s2], axis=1) - self.assertTrue(np.array_equal(result.columns, Index(['A', 0], dtype='object'))) + self.assertTrue(np.array_equal( + result.columns, Index(['A', 0], dtype='object'))) # must reindex, #2603 s = Series(randn(3), index=['c', 'a', 'b'], name='A') @@ -2614,7 +2737,7 @@ def test_concat_datetime64_block(self): def test_concat_timedelta64_block(self): from pandas import to_timedelta - rng = to_timedelta(np.arange(10),unit='s') + rng = to_timedelta(np.arange(10), unit='s') df = DataFrame({'time': rng}) @@ -2640,8 +2763,8 @@ def test_concat_bug_1719(self): ts1 = 
tm.makeTimeSeries() ts2 = tm.makeTimeSeries()[::2] - ## to join with union - ## these two are of different length! + # to join with union + # these two are of different length! left = concat([ts1, ts2], join='outer', axis=1) right = concat([ts2, ts1], join='outer', axis=1) @@ -2654,22 +2777,24 @@ def test_concat_bug_2972(self): result = concat([ts0, ts1], axis=1) expected = DataFrame({0: ts0, 1: ts1}) - expected.columns=['same name', 'same name'] + expected.columns = ['same name', 'same name'] assert_frame_equal(result, expected) def test_concat_bug_3602(self): # GH 3602, duplicate columns - df1 = DataFrame({'firmNo' : [0,0,0,0], 'stringvar' : ['rrr', 'rrr', 'rrr', 'rrr'], 'prc' : [6,6,6,6] }) - df2 = DataFrame({'misc' : [1,2,3,4], 'prc' : [6,6,6,6], 'C' : [9,10,11,12]}) - expected = DataFrame([[0,6,'rrr',9,1,6], - [0,6,'rrr',10,2,6], - [0,6,'rrr',11,3,6], - [0,6,'rrr',12,4,6]]) - expected.columns = ['firmNo','prc','stringvar','C','misc','prc'] - - result = concat([df1,df2],axis=1) - assert_frame_equal(result,expected) + df1 = DataFrame({'firmNo': [0, 0, 0, 0], 'stringvar': [ + 'rrr', 'rrr', 'rrr', 'rrr'], 'prc': [6, 6, 6, 6]}) + df2 = DataFrame({'misc': [1, 2, 3, 4], 'prc': [ + 6, 6, 6, 6], 'C': [9, 10, 11, 12]}) + expected = DataFrame([[0, 6, 'rrr', 9, 1, 6], + [0, 6, 'rrr', 10, 2, 6], + [0, 6, 'rrr', 11, 3, 6], + [0, 6, 'rrr', 12, 4, 6]]) + expected.columns = ['firmNo', 'prc', 'stringvar', 'C', 'misc', 'prc'] + + result = concat([df1, df2], axis=1) + assert_frame_equal(result, expected) def test_concat_series_axis1_same_names_ignore_index(self): dates = date_range('01-Jan-2013', '01-Jan-2014', freq='MS')[0:-1] @@ -2689,29 +2814,38 @@ def test_concat_iterables(self): expected = DataFrame([1, 2, 3, 4, 5, 6]) assert_frame_equal(concat((df1, df2), ignore_index=True), expected) assert_frame_equal(concat([df1, df2], ignore_index=True), expected) - assert_frame_equal(concat((df for df in (df1, df2)), ignore_index=True), expected) - assert_frame_equal(concat(deque((df1, 
df2)), ignore_index=True), expected) + assert_frame_equal(concat((df for df in (df1, df2)), + ignore_index=True), expected) + assert_frame_equal( + concat(deque((df1, df2)), ignore_index=True), expected) + class CustomIterator1(object): + def __len__(self): return 2 + def __getitem__(self, index): try: return {0: df1, 1: df2}[index] except KeyError: raise IndexError - assert_frame_equal(pd.concat(CustomIterator1(), ignore_index=True), expected) + assert_frame_equal(pd.concat(CustomIterator1(), + ignore_index=True), expected) + class CustomIterator2(Iterable): + def __iter__(self): yield df1 yield df2 - assert_frame_equal(pd.concat(CustomIterator2(), ignore_index=True), expected) + assert_frame_equal(pd.concat(CustomIterator2(), + ignore_index=True), expected) def test_concat_invalid(self): # trying to concat a ndframe with a non-ndframe df1 = mkdf(10, 2) - for obj in [1, dict(), [1, 2], (1, 2) ]: - self.assertRaises(TypeError, lambda x: concat([ df1, obj ])) + for obj in [1, dict(), [1, 2], (1, 2)]: + self.assertRaises(TypeError, lambda x: concat([df1, obj])) def test_concat_invalid_first_argument(self): df1 = mkdf(10, 2) @@ -2719,7 +2853,7 @@ def test_concat_invalid_first_argument(self): self.assertRaises(TypeError, concat, df1, df2) # generator ok though - concat(DataFrame(np.random.rand(5,5)) for _ in range(3)) + concat(DataFrame(np.random.rand(5, 5)) for _ in range(3)) # text reader ok # GH6583 @@ -2735,7 +2869,8 @@ def test_concat_invalid_first_argument(self): reader = read_csv(StringIO(data), chunksize=1) result = concat(reader, ignore_index=True) expected = read_csv(StringIO(data)) - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) + class TestOrderedMerge(tm.TestCase): @@ -2789,6 +2924,7 @@ def test_multigroup(self): def test_merge_type(self): class NotADataFrame(DataFrame): + @property def _constructor(self): return NotADataFrame diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py index 
cb7e9102b21a0..d303f489d9dea 100644 --- a/pandas/tools/tests/test_pivot.py +++ b/pandas/tools/tests/test_pivot.py @@ -1,4 +1,4 @@ -import datetime +from datetime import datetime, date, timedelta import numpy as np from numpy.testing import assert_equal @@ -32,9 +32,11 @@ def setUp(self): def test_pivot_table(self): index = ['A', 'B'] columns = 'C' - table = pivot_table(self.data, values='D', index=index, columns=columns) + table = pivot_table(self.data, values='D', + index=index, columns=columns) - table2 = self.data.pivot_table(values='D', index=index, columns=columns) + table2 = self.data.pivot_table( + values='D', index=index, columns=columns) tm.assert_frame_equal(table, table2) # this works @@ -50,13 +52,14 @@ def test_pivot_table(self): else: self.assertEqual(table.columns.name, columns[0]) - expected = self.data.groupby(index + [columns])['D'].agg(np.mean).unstack() + expected = self.data.groupby( + index + [columns])['D'].agg(np.mean).unstack() tm.assert_frame_equal(table, expected) def test_pivot_table_nocols(self): df = DataFrame({'rows': ['a', 'b', 'c'], 'cols': ['x', 'y', 'z'], - 'values': [1,2,3]}) + 'values': [1, 2, 3]}) rs = df.pivot_table(columns='cols', aggfunc=np.sum) xp = df.pivot_table(index='cols', aggfunc=np.sum).T tm.assert_frame_equal(rs, xp) @@ -70,9 +73,12 @@ def test_pivot_table_dropna(self): 'customer': {0: 'A', 1: 'A', 2: 'B', 3: 'C'}, 'month': {0: 201307, 1: 201309, 2: 201308, 3: 201310}, 'product': {0: 'a', 1: 'b', 2: 'c', 3: 'd'}, - 'quantity': {0: 2000000, 1: 500000, 2: 1000000, 3: 1000000}}) - pv_col = df.pivot_table('quantity', 'month', ['customer', 'product'], dropna=False) - pv_ind = df.pivot_table('quantity', ['customer', 'product'], 'month', dropna=False) + 'quantity': {0: 2000000, 1: 500000, + 2: 1000000, 3: 1000000}}) + pv_col = df.pivot_table('quantity', 'month', [ + 'customer', 'product'], dropna=False) + pv_ind = df.pivot_table( + 'quantity', ['customer', 'product'], 'month', dropna=False) m = 
MultiIndex.from_tuples([(u('A'), u('a')), (u('A'), u('b')), @@ -90,9 +96,9 @@ def test_pivot_table_dropna(self): assert_equal(pv_col.columns.values, m.values) assert_equal(pv_ind.index.values, m.values) - def test_pass_array(self): - result = self.data.pivot_table('D', index=self.data.A, columns=self.data.C) + result = self.data.pivot_table( + 'D', index=self.data.A, columns=self.data.C) expected = self.data.pivot_table('D', index='A', columns='C') tm.assert_frame_equal(result, expected) @@ -113,21 +119,25 @@ def test_pivot_table_multiple(self): def test_pivot_dtypes(self): # can convert dtypes - f = DataFrame({'a' : ['cat', 'bat', 'cat', 'bat'], 'v' : [1,2,3,4], 'i' : ['a','b','a','b']}) + f = DataFrame({'a': ['cat', 'bat', 'cat', 'bat'], 'v': [ + 1, 2, 3, 4], 'i': ['a', 'b', 'a', 'b']}) self.assertEqual(f.dtypes['v'], 'int64') - z = pivot_table(f, values='v', index=['a'], columns=['i'], fill_value=0, aggfunc=np.sum) + z = pivot_table(f, values='v', index=['a'], columns=[ + 'i'], fill_value=0, aggfunc=np.sum) result = z.get_dtype_counts() - expected = Series(dict(int64 = 2)) + expected = Series(dict(int64=2)) tm.assert_series_equal(result, expected) # cannot convert dtypes - f = DataFrame({'a' : ['cat', 'bat', 'cat', 'bat'], 'v' : [1.5,2.5,3.5,4.5], 'i' : ['a','b','a','b']}) + f = DataFrame({'a': ['cat', 'bat', 'cat', 'bat'], 'v': [ + 1.5, 2.5, 3.5, 4.5], 'i': ['a', 'b', 'a', 'b']}) self.assertEqual(f.dtypes['v'], 'float64') - z = pivot_table(f, values='v', index=['a'], columns=['i'], fill_value=0, aggfunc=np.mean) + z = pivot_table(f, values='v', index=['a'], columns=[ + 'i'], fill_value=0, aggfunc=np.mean) result = z.get_dtype_counts() - expected = Series(dict(float64 = 2)) + expected = Series(dict(float64=2)) tm.assert_series_equal(result, expected) def test_pivot_multi_values(self): @@ -160,20 +170,20 @@ def test_pivot_multi_functions(self): def test_pivot_index_with_nan(self): # GH 3588 nan = np.nan - df = DataFrame({'a':['R1', 'R2', nan, 'R4'], - 'b':['C1', 
'C2', 'C3' , 'C4'], - 'c':[10, 15, 17, 20]}) - result = df.pivot('a','b','c') - expected = DataFrame([[nan,nan,17,nan],[10,nan,nan,nan], - [nan,15,nan,nan],[nan,nan,nan,20]], - index = Index([nan,'R1','R2','R4'], name='a'), - columns = Index(['C1','C2','C3','C4'], name='b')) + df = DataFrame({'a': ['R1', 'R2', nan, 'R4'], + 'b': ['C1', 'C2', 'C3', 'C4'], + 'c': [10, 15, 17, 20]}) + result = df.pivot('a', 'b', 'c') + expected = DataFrame([[nan, nan, 17, nan], [10, nan, nan, nan], + [nan, 15, nan, nan], [nan, nan, nan, 20]], + index=Index([nan, 'R1', 'R2', 'R4'], name='a'), + columns=Index(['C1', 'C2', 'C3', 'C4'], name='b')) tm.assert_frame_equal(result, expected) tm.assert_frame_equal(df.pivot('b', 'a', 'c'), expected.T) # GH9491 - df = DataFrame({'a':pd.date_range('2014-02-01', periods=6, freq='D'), - 'c':100 + np.arange(6)}) + df = DataFrame({'a': pd.date_range('2014-02-01', periods=6, freq='D'), + 'c': 100 + np.arange(6)}) df['b'] = df['a'] - pd.Timestamp('2014-02-02') df.loc[1, 'a'] = df.loc[3, 'a'] = nan df.loc[1, 'b'] = df.loc[4, 'b'] = nan @@ -188,39 +198,46 @@ def test_pivot_index_with_nan(self): def test_pivot_with_tz(self): # GH 5878 - df = DataFrame({'dt1': [datetime.datetime(2013, 1, 1, 9, 0), - datetime.datetime(2013, 1, 2, 9, 0), - datetime.datetime(2013, 1, 1, 9, 0), - datetime.datetime(2013, 1, 2, 9, 0)], - 'dt2': [datetime.datetime(2014, 1, 1, 9, 0), - datetime.datetime(2014, 1, 1, 9, 0), - datetime.datetime(2014, 1, 2, 9, 0), - datetime.datetime(2014, 1, 2, 9, 0)], - 'data1': np.arange(4,dtype='int64'), - 'data2': np.arange(4,dtype='int64')}) + df = DataFrame({'dt1': [datetime(2013, 1, 1, 9, 0), + datetime(2013, 1, 2, 9, 0), + datetime(2013, 1, 1, 9, 0), + datetime(2013, 1, 2, 9, 0)], + 'dt2': [datetime(2014, 1, 1, 9, 0), + datetime(2014, 1, 1, 9, 0), + datetime(2014, 1, 2, 9, 0), + datetime(2014, 1, 2, 9, 0)], + 'data1': np.arange(4, dtype='int64'), + 'data2': np.arange(4, dtype='int64')}) df['dt1'] = df['dt1'].apply(lambda d: pd.Timestamp(d, 
tz='US/Pacific')) df['dt2'] = df['dt2'].apply(lambda d: pd.Timestamp(d, tz='Asia/Tokyo')) exp_col1 = Index(['data1', 'data1', 'data2', 'data2']) - exp_col2 = pd.DatetimeIndex(['2014/01/01 09:00', '2014/01/02 09:00'] * 2, + exp_col2 = pd.DatetimeIndex(['2014/01/01 09:00', + '2014/01/02 09:00'] * 2, name='dt2', tz='Asia/Tokyo') exp_col = pd.MultiIndex.from_arrays([exp_col1, exp_col2]) expected = DataFrame([[0, 2, 0, 2], [1, 3, 1, 3]], - index=pd.DatetimeIndex(['2013/01/01 09:00', '2013/01/02 09:00'], - name='dt1', tz='US/Pacific'), + index=pd.DatetimeIndex(['2013/01/01 09:00', + '2013/01/02 09:00'], + name='dt1', + tz='US/Pacific'), columns=exp_col) - pv = df.pivot(index='dt1', columns='dt2') + pv = df.pivot(index='dt1', columns='dt2') tm.assert_frame_equal(pv, expected) expected = DataFrame([[0, 2], [1, 3]], - index=pd.DatetimeIndex(['2013/01/01 09:00', '2013/01/02 09:00'], - name='dt1', tz='US/Pacific'), - columns=pd.DatetimeIndex(['2014/01/01 09:00', '2014/01/02 09:00'], - name='dt2', tz='Asia/Tokyo')) - - pv = df.pivot(index='dt1', columns='dt2', values='data1') + index=pd.DatetimeIndex(['2013/01/01 09:00', + '2013/01/02 09:00'], + name='dt1', + tz='US/Pacific'), + columns=pd.DatetimeIndex(['2014/01/01 09:00', + '2014/01/02 09:00'], + name='dt2', + tz='Asia/Tokyo')) + + pv = df.pivot(index='dt1', columns='dt2', values='data1') tm.assert_frame_equal(pv, expected) def test_margins(self): @@ -287,19 +304,19 @@ def _check_output(result, values_col, index=['A', 'B'], # issue number #8349: pivot_table with margins and dictionary aggfunc data = [ {'JOB': 'Worker', 'NAME': 'Bob', 'YEAR': 2013, - 'MONTH': 12, 'DAYS': 3, 'SALARY': 17}, + 'MONTH': 12, 'DAYS': 3, 'SALARY': 17}, {'JOB': 'Employ', 'NAME': - 'Mary', 'YEAR': 2013, 'MONTH': 12, 'DAYS': 5, 'SALARY': 23}, + 'Mary', 'YEAR': 2013, 'MONTH': 12, 'DAYS': 5, 'SALARY': 23}, {'JOB': 'Worker', 'NAME': 'Bob', 'YEAR': 2014, - 'MONTH': 1, 'DAYS': 10, 'SALARY': 100}, + 'MONTH': 1, 'DAYS': 10, 'SALARY': 100}, {'JOB': 'Worker', 
'NAME': 'Bob', 'YEAR': 2014, - 'MONTH': 1, 'DAYS': 11, 'SALARY': 110}, + 'MONTH': 1, 'DAYS': 11, 'SALARY': 110}, {'JOB': 'Employ', 'NAME': 'Mary', 'YEAR': 2014, - 'MONTH': 1, 'DAYS': 15, 'SALARY': 200}, + 'MONTH': 1, 'DAYS': 15, 'SALARY': 200}, {'JOB': 'Worker', 'NAME': 'Bob', 'YEAR': 2014, - 'MONTH': 2, 'DAYS': 8, 'SALARY': 80}, + 'MONTH': 2, 'DAYS': 8, 'SALARY': 80}, {'JOB': 'Employ', 'NAME': 'Mary', 'YEAR': 2014, - 'MONTH': 2, 'DAYS': 5, 'SALARY': 190}, + 'MONTH': 2, 'DAYS': 5, 'SALARY': 190}, ] df = DataFrame(data) @@ -328,14 +345,16 @@ def _check_output(result, values_col, index=['A', 'B'], def test_pivot_integer_columns(self): # caused by upstream bug in unstack - d = datetime.date.min + d = date.min data = list(product(['foo', 'bar'], ['A', 'B', 'C'], ['x1', 'x2'], - [d + datetime.timedelta(i) for i in range(20)], [1.0])) + [d + timedelta(i) + for i in range(20)], [1.0])) df = DataFrame(data) table = df.pivot_table(values=4, index=[0, 1, 3], columns=[2]) df2 = df.rename(columns=str) - table2 = df2.pivot_table(values='4', index=['0', '1', '3'], columns=['2']) + table2 = df2.pivot_table( + values='4', index=['0', '1', '3'], columns=['2']) tm.assert_frame_equal(table, table2, check_names=False) @@ -382,7 +401,8 @@ def test_pivot_columns_lexsorted(self): iproduct = np.random.randint(0, len(products), n) items['Index'] = products['Index'][iproduct] items['Symbol'] = products['Symbol'][iproduct] - dr = pd.date_range(datetime.date(2000, 1, 1), datetime.date(2010, 12, 31)) + dr = pd.date_range(date(2000, 1, 1), + date(2010, 12, 31)) dates = dr[np.random.randint(0, len(dr), n)] items['Year'] = dates.year items['Month'] = dates.month @@ -406,24 +426,32 @@ def test_pivot_complex_aggfunc(self): def test_margins_no_values_no_cols(self): # Regression test on pivot table: no values or cols passed. 
- result = self.data[['A', 'B']].pivot_table(index=['A', 'B'], aggfunc=len, margins=True) + result = self.data[['A', 'B']].pivot_table( + index=['A', 'B'], aggfunc=len, margins=True) result_list = result.tolist() self.assertEqual(sum(result_list[:-1]), result_list[-1]) def test_margins_no_values_two_rows(self): - # Regression test on pivot table: no values passed but rows are a multi-index - result = self.data[['A', 'B', 'C']].pivot_table(index=['A', 'B'], columns='C', aggfunc=len, margins=True) + # Regression test on pivot table: no values passed but rows are a + # multi-index + result = self.data[['A', 'B', 'C']].pivot_table( + index=['A', 'B'], columns='C', aggfunc=len, margins=True) self.assertEqual(result.All.tolist(), [3.0, 1.0, 4.0, 3.0, 11.0]) def test_margins_no_values_one_row_one_col(self): - # Regression test on pivot table: no values passed but row and col defined - result = self.data[['A', 'B']].pivot_table(index='A', columns='B', aggfunc=len, margins=True) + # Regression test on pivot table: no values passed but row and col + # defined + result = self.data[['A', 'B']].pivot_table( + index='A', columns='B', aggfunc=len, margins=True) self.assertEqual(result.All.tolist(), [4.0, 7.0, 11.0]) def test_margins_no_values_two_row_two_cols(self): - # Regression test on pivot table: no values passed but rows and cols are multi-indexed - self.data['D'] = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k'] - result = self.data[['A', 'B', 'C', 'D']].pivot_table(index=['A', 'B'], columns=['C', 'D'], aggfunc=len, margins=True) + # Regression test on pivot table: no values passed but rows and cols + # are multi-indexed + self.data['D'] = ['a', 'b', 'c', 'd', + 'e', 'f', 'g', 'h', 'i', 'j', 'k'] + result = self.data[['A', 'B', 'C', 'D']].pivot_table( + index=['A', 'B'], columns=['C', 'D'], aggfunc=len, margins=True) self.assertEqual(result.All.tolist(), [3.0, 1.0, 4.0, 3.0, 11.0]) def test_pivot_table_with_margins_set_margin_name(self): @@ -447,30 +475,37 @@ def 
test_pivot_table_with_margins_set_margin_name(self): def test_pivot_timegrouper(self): df = DataFrame({ - 'Branch' : 'A A A A A A A B'.split(), + 'Branch': 'A A A A A A A B'.split(), 'Buyer': 'Carl Mark Carl Carl Joe Joe Joe Carl'.split(), 'Quantity': [1, 3, 5, 1, 8, 1, 9, 3], - 'Date' : [datetime.datetime(2013, 1, 1), datetime.datetime(2013, 1, 1), - datetime.datetime(2013, 10, 1), datetime.datetime(2013, 10, 2), - datetime.datetime(2013, 10, 1), datetime.datetime(2013, 10, 2), - datetime.datetime(2013, 12, 2), datetime.datetime(2013, 12, 2),]}).set_index('Date') - - expected = DataFrame(np.array([10, 18, 3],dtype='int64').reshape(1, 3), - index=[datetime.datetime(2013, 12, 31)], + 'Date': [datetime(2013, 1, 1), + datetime(2013, 1, 1), + datetime(2013, 10, 1), + datetime(2013, 10, 2), + datetime(2013, 10, 1), + datetime(2013, 10, 2), + datetime(2013, 12, 2), + datetime(2013, 12, 2), ]}).set_index('Date') + + expected = DataFrame(np.array([10, 18, 3], dtype='int64') + .reshape(1, 3), + index=[datetime(2013, 12, 31)], columns='Carl Joe Mark'.split()) expected.index.name = 'Date' expected.columns.name = 'Buyer' result = pivot_table(df, index=Grouper(freq='A'), columns='Buyer', values='Quantity', aggfunc=np.sum) - tm.assert_frame_equal(result,expected) + tm.assert_frame_equal(result, expected) result = pivot_table(df, index='Buyer', columns=Grouper(freq='A'), values='Quantity', aggfunc=np.sum) - tm.assert_frame_equal(result,expected.T) + tm.assert_frame_equal(result, expected.T) - expected = DataFrame(np.array([1, np.nan, 3, 9, 18, np.nan]).reshape(2, 3), - index=[datetime.datetime(2013, 1, 1), datetime.datetime(2013, 7, 1)], + expected = DataFrame(np.array([1, np.nan, 3, 9, 18, np.nan]) + .reshape(2, 3), + index=[datetime(2013, 1, 1), + datetime(2013, 7, 1)], columns='Carl Joe Mark'.split()) expected.index.name = 'Date' expected.columns.name = 'Buyer' @@ -485,57 +520,80 @@ def test_pivot_timegrouper(self): # passing the name df = df.reset_index() - result = 
pivot_table(df, index=Grouper(freq='6MS', key='Date'), columns='Buyer', + result = pivot_table(df, index=Grouper(freq='6MS', key='Date'), + columns='Buyer', values='Quantity', aggfunc=np.sum) tm.assert_frame_equal(result, expected) - result = pivot_table(df, index='Buyer', columns=Grouper(freq='6MS', key='Date'), + result = pivot_table(df, index='Buyer', + columns=Grouper(freq='6MS', key='Date'), values='Quantity', aggfunc=np.sum) tm.assert_frame_equal(result, expected.T) - self.assertRaises(KeyError, lambda : pivot_table(df, index=Grouper(freq='6MS', key='foo'), - columns='Buyer', values='Quantity', aggfunc=np.sum)) - self.assertRaises(KeyError, lambda : pivot_table(df, index='Buyer', - columns=Grouper(freq='6MS', key='foo'), values='Quantity', aggfunc=np.sum)) + self.assertRaises(KeyError, lambda: pivot_table( + df, index=Grouper(freq='6MS', key='foo'), + columns='Buyer', values='Quantity', aggfunc=np.sum)) + self.assertRaises(KeyError, lambda: pivot_table( + df, index='Buyer', + columns=Grouper(freq='6MS', key='foo'), + values='Quantity', aggfunc=np.sum)) # passing the level df = df.set_index('Date') - result = pivot_table(df, index=Grouper(freq='6MS', level='Date'), columns='Buyer', - values='Quantity', aggfunc=np.sum) + result = pivot_table(df, index=Grouper(freq='6MS', level='Date'), + columns='Buyer', values='Quantity', + aggfunc=np.sum) tm.assert_frame_equal(result, expected) - result = pivot_table(df, index='Buyer', columns=Grouper(freq='6MS', level='Date'), + result = pivot_table(df, index='Buyer', + columns=Grouper(freq='6MS', level='Date'), values='Quantity', aggfunc=np.sum) tm.assert_frame_equal(result, expected.T) - self.assertRaises(ValueError, lambda : pivot_table(df, index=Grouper(freq='6MS', level='foo'), - columns='Buyer', values='Quantity', aggfunc=np.sum)) - self.assertRaises(ValueError, lambda : pivot_table(df, index='Buyer', - columns=Grouper(freq='6MS', level='foo'), values='Quantity', aggfunc=np.sum)) + self.assertRaises(ValueError, lambda: 
pivot_table( + df, index=Grouper(freq='6MS', level='foo'), + columns='Buyer', values='Quantity', aggfunc=np.sum)) + self.assertRaises(ValueError, lambda: pivot_table( + df, index='Buyer', + columns=Grouper(freq='6MS', level='foo'), + values='Quantity', aggfunc=np.sum)) # double grouper df = DataFrame({ - 'Branch' : 'A A A A A A A B'.split(), + 'Branch': 'A A A A A A A B'.split(), 'Buyer': 'Carl Mark Carl Carl Joe Joe Joe Carl'.split(), - 'Quantity': [1,3,5,1,8,1,9,3], - 'Date' : [datetime.datetime(2013,11,1,13,0), datetime.datetime(2013,9,1,13,5), - datetime.datetime(2013,10,1,20,0), datetime.datetime(2013,10,2,10,0), - datetime.datetime(2013,11,1,20,0), datetime.datetime(2013,10,2,10,0), - datetime.datetime(2013,10,2,12,0), datetime.datetime(2013,12,5,14,0)], - 'PayDay' : [datetime.datetime(2013,10,4,0,0), datetime.datetime(2013,10,15,13,5), - datetime.datetime(2013,9,5,20,0), datetime.datetime(2013,11,2,10,0), - datetime.datetime(2013,10,7,20,0), datetime.datetime(2013,9,5,10,0), - datetime.datetime(2013,12,30,12,0), datetime.datetime(2013,11,20,14,0),]}) + 'Quantity': [1, 3, 5, 1, 8, 1, 9, 3], + 'Date': [datetime(2013, 11, 1, 13, 0), datetime(2013, 9, 1, 13, 5), + datetime(2013, 10, 1, 20, 0), + datetime(2013, 10, 2, 10, 0), + datetime(2013, 11, 1, 20, 0), + datetime(2013, 10, 2, 10, 0), + datetime(2013, 10, 2, 12, 0), + datetime(2013, 12, 5, 14, 0)], + 'PayDay': [datetime(2013, 10, 4, 0, 0), + datetime(2013, 10, 15, 13, 5), + datetime(2013, 9, 5, 20, 0), + datetime(2013, 11, 2, 10, 0), + datetime(2013, 10, 7, 20, 0), + datetime(2013, 9, 5, 10, 0), + datetime(2013, 12, 30, 12, 0), + datetime(2013, 11, 20, 14, 0), ]}) result = pivot_table(df, index=Grouper(freq='M', key='Date'), columns=Grouper(freq='M', key='PayDay'), values='Quantity', aggfunc=np.sum) - expected = DataFrame(np.array([np.nan, 3, np.nan, np.nan, 6, np.nan, 1, 9, - np.nan, 9, np.nan, np.nan, np.nan, np.nan, 3, np.nan]).reshape(4, 4), - index=[datetime.datetime(2013, 9, 30), datetime.datetime(2013, 
10, 31), - datetime.datetime(2013, 11, 30), datetime.datetime(2013, 12, 31)], - columns=[datetime.datetime(2013, 9, 30), datetime.datetime(2013, 10, 31), - datetime.datetime(2013, 11, 30), datetime.datetime(2013, 12, 31)]) + expected = DataFrame(np.array([np.nan, 3, np.nan, np.nan, + 6, np.nan, 1, 9, + np.nan, 9, np.nan, np.nan, np.nan, + np.nan, 3, np.nan]).reshape(4, 4), + index=[datetime(2013, 9, 30), + datetime(2013, 10, 31), + datetime(2013, 11, 30), + datetime(2013, 12, 31)], + columns=[datetime(2013, 9, 30), + datetime(2013, 10, 31), + datetime(2013, 11, 30), + datetime(2013, 12, 31)]) expected.index.name = 'Date' expected.columns.name = 'PayDay' @@ -546,74 +604,97 @@ def test_pivot_timegrouper(self): values='Quantity', aggfunc=np.sum) tm.assert_frame_equal(result, expected.T) - tuples = [(datetime.datetime(2013, 9, 30), datetime.datetime(2013, 10, 31)), - (datetime.datetime(2013, 10, 31), datetime.datetime(2013, 9, 30)), - (datetime.datetime(2013, 10, 31), datetime.datetime(2013, 11, 30)), - (datetime.datetime(2013, 10, 31), datetime.datetime(2013, 12, 31)), - (datetime.datetime(2013, 11, 30), datetime.datetime(2013, 10, 31)), - (datetime.datetime(2013, 12, 31), datetime.datetime(2013, 11, 30)),] + tuples = [(datetime(2013, 9, 30), datetime(2013, 10, 31)), + (datetime(2013, 10, 31), + datetime(2013, 9, 30)), + (datetime(2013, 10, 31), + datetime(2013, 11, 30)), + (datetime(2013, 10, 31), + datetime(2013, 12, 31)), + (datetime(2013, 11, 30), + datetime(2013, 10, 31)), + (datetime(2013, 12, 31), datetime(2013, 11, 30)), ] idx = MultiIndex.from_tuples(tuples, names=['Date', 'PayDay']) expected = DataFrame(np.array([3, np.nan, 6, np.nan, 1, np.nan, - 9, np.nan, 9, np.nan, np.nan, 3]).reshape(6, 2), + 9, np.nan, 9, np.nan, + np.nan, 3]).reshape(6, 2), index=idx, columns=['A', 'B']) expected.columns.name = 'Branch' - result = pivot_table(df, index=[Grouper(freq='M', key='Date'), - Grouper(freq='M', key='PayDay')], columns=['Branch'], - values='Quantity', 
aggfunc=np.sum) + result = pivot_table( + df, index=[Grouper(freq='M', key='Date'), + Grouper(freq='M', key='PayDay')], columns=['Branch'], + values='Quantity', aggfunc=np.sum) tm.assert_frame_equal(result, expected) - result = pivot_table(df, index=['Branch'], columns=[Grouper(freq='M', key='Date'), - Grouper(freq='M', key='PayDay')], + result = pivot_table(df, index=['Branch'], + columns=[Grouper(freq='M', key='Date'), + Grouper(freq='M', key='PayDay')], values='Quantity', aggfunc=np.sum) tm.assert_frame_equal(result, expected.T) def test_pivot_datetime_tz(self): - dates1 = ['2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00', - '2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00'] - dates2 = ['2013-01-01 15:00:00', '2013-01-01 15:00:00', '2013-01-01 15:00:00', - '2013-02-01 15:00:00', '2013-02-01 15:00:00', '2013-02-01 15:00:00'] + dates1 = ['2011-07-19 07:00:00', '2011-07-19 08:00:00', + '2011-07-19 09:00:00', + '2011-07-19 07:00:00', '2011-07-19 08:00:00', + '2011-07-19 09:00:00'] + dates2 = ['2013-01-01 15:00:00', '2013-01-01 15:00:00', + '2013-01-01 15:00:00', + '2013-02-01 15:00:00', '2013-02-01 15:00:00', + '2013-02-01 15:00:00'] df = DataFrame({'label': ['a', 'a', 'a', 'b', 'b', 'b'], 'dt1': dates1, 'dt2': dates2, - 'value1': np.arange(6,dtype='int64'), 'value2': [1, 2] * 3}) + 'value1': np.arange(6, dtype='int64'), + 'value2': [1, 2] * 3}) df['dt1'] = df['dt1'].apply(lambda d: pd.Timestamp(d, tz='US/Pacific')) df['dt2'] = df['dt2'].apply(lambda d: pd.Timestamp(d, tz='Asia/Tokyo')) - exp_idx = pd.DatetimeIndex(['2011-07-19 07:00:00', '2011-07-19 08:00:00', - '2011-07-19 09:00:00'], tz='US/Pacific', name='dt1') + exp_idx = pd.DatetimeIndex(['2011-07-19 07:00:00', + '2011-07-19 08:00:00', + '2011-07-19 09:00:00'], + tz='US/Pacific', name='dt1') exp_col1 = Index(['value1', 'value1']) exp_col2 = Index(['a', 'b'], name='label') exp_col = MultiIndex.from_arrays([exp_col1, exp_col2]) expected = DataFrame([[0, 3], [1, 4], [2, 5]], 
index=exp_idx, columns=exp_col) - result = pivot_table(df, index=['dt1'], columns=['label'], values=['value1']) + result = pivot_table(df, index=['dt1'], columns=[ + 'label'], values=['value1']) tm.assert_frame_equal(result, expected) - - exp_col1 = Index(['sum', 'sum', 'sum', 'sum', 'mean', 'mean', 'mean', 'mean']) + exp_col1 = Index(['sum', 'sum', 'sum', 'sum', + 'mean', 'mean', 'mean', 'mean']) exp_col2 = Index(['value1', 'value1', 'value2', 'value2'] * 2) - exp_col3 = pd.DatetimeIndex(['2013-01-01 15:00:00', '2013-02-01 15:00:00'] * 4, + exp_col3 = pd.DatetimeIndex(['2013-01-01 15:00:00', + '2013-02-01 15:00:00'] * 4, tz='Asia/Tokyo', name='dt2') exp_col = MultiIndex.from_arrays([exp_col1, exp_col2, exp_col3]) expected = DataFrame(np.array([[0, 3, 1, 2, 0, 3, 1, 2], [1, 4, 2, 1, 1, 4, 2, 1], - [2, 5, 1, 2, 2, 5, 1, 2]], dtype='int64'), + [2, 5, 1, 2, 2, 5, 1, 2]], + dtype='int64'), index=exp_idx, columns=exp_col) - result = pivot_table(df, index=['dt1'], columns=['dt2'], values=['value1', 'value2'], + result = pivot_table(df, index=['dt1'], columns=['dt2'], + values=['value1', 'value2'], aggfunc=[np.sum, np.mean]) tm.assert_frame_equal(result, expected) def test_pivot_dtaccessor(self): # GH 8103 - dates1 = ['2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00', - '2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00'] - dates2 = ['2013-01-01 15:00:00', '2013-01-01 15:00:00', '2013-01-01 15:00:00', - '2013-02-01 15:00:00', '2013-02-01 15:00:00', '2013-02-01 15:00:00'] + dates1 = ['2011-07-19 07:00:00', '2011-07-19 08:00:00', + '2011-07-19 09:00:00', + '2011-07-19 07:00:00', '2011-07-19 08:00:00', + '2011-07-19 09:00:00'] + dates2 = ['2013-01-01 15:00:00', '2013-01-01 15:00:00', + '2013-01-01 15:00:00', + '2013-02-01 15:00:00', '2013-02-01 15:00:00', + '2013-02-01 15:00:00'] df = DataFrame({'label': ['a', 'a', 'a', 'b', 'b', 'b'], 'dt1': dates1, 'dt2': dates2, - 'value1': np.arange(6,dtype='int64'), 'value2': [1, 2] * 3}) + 'value1': 
np.arange(6, dtype='int64'), + 'value2': [1, 2] * 3}) df['dt1'] = df['dt1'].apply(lambda d: pd.Timestamp(d)) df['dt2'] = df['dt2'].apply(lambda d: pd.Timestamp(d)) @@ -621,31 +702,37 @@ def test_pivot_dtaccessor(self): values='value1') exp_idx = Index(['a', 'b'], name='label') - expected = DataFrame({7: [0, 3], 8: [1, 4], 9:[2, 5]}, - index=exp_idx, columns=Index([7, 8, 9],name='dt1')) + expected = DataFrame({7: [0, 3], 8: [1, 4], 9: [2, 5]}, + index=exp_idx, + columns=Index([7, 8, 9], name='dt1')) tm.assert_frame_equal(result, expected) - result = pivot_table(df, index=df['dt2'].dt.month, columns=df['dt1'].dt.hour, + result = pivot_table(df, index=df['dt2'].dt.month, + columns=df['dt1'].dt.hour, values='value1') - expected = DataFrame({7: [0, 3], 8: [1, 4], 9:[2, 5]}, - index=Index([1, 2],name='dt2'), columns=Index([7, 8, 9],name='dt1')) + expected = DataFrame({7: [0, 3], 8: [1, 4], 9: [2, 5]}, + index=Index([1, 2], name='dt2'), + columns=Index([7, 8, 9], name='dt1')) tm.assert_frame_equal(result, expected) result = pivot_table(df, index=df['dt2'].dt.year.values, columns=[df['dt1'].dt.hour, df['dt2'].dt.month], values='value1') - exp_col = MultiIndex.from_arrays([[7, 7, 8, 8, 9, 9], [1, 2] * 3],names=['dt1','dt2']) - expected = DataFrame(np.array([[0, 3, 1, 4, 2, 5]],dtype='int64'), + exp_col = MultiIndex.from_arrays( + [[7, 7, 8, 8, 9, 9], [1, 2] * 3], names=['dt1', 'dt2']) + expected = DataFrame(np.array([[0, 3, 1, 4, 2, 5]], dtype='int64'), index=[2013], columns=exp_col) tm.assert_frame_equal(result, expected) - result = pivot_table(df, index=np.array(['X', 'X', 'X', 'X', 'Y', 'Y']), + result = pivot_table(df, index=np.array(['X', 'X', 'X', + 'X', 'Y', 'Y']), columns=[df['dt1'].dt.hour, df['dt2'].dt.month], values='value1') expected = DataFrame(np.array([[0, 3, 1, np.nan, 2, np.nan], - [np.nan, np.nan, np.nan, 4, np.nan, 5]]), + [np.nan, np.nan, np.nan, + 4, np.nan, 5]]), index=['X', 'Y'], columns=exp_col) tm.assert_frame_equal(result, expected) @@ -748,16 
+835,20 @@ def test_crosstab_pass_values(self): df = DataFrame({'foo': a, 'bar': b, 'baz': c, 'values': values}) - expected = df.pivot_table('values', index=['foo', 'bar'], columns='baz', - aggfunc=np.sum) + expected = df.pivot_table('values', index=['foo', 'bar'], + columns='baz', aggfunc=np.sum) tm.assert_frame_equal(table, expected) def test_crosstab_dropna(self): # GH 3820 - a = np.array(['foo', 'foo', 'foo', 'bar', 'bar', 'foo', 'foo'], dtype=object) - b = np.array(['one', 'one', 'two', 'one', 'two', 'two', 'two'], dtype=object) - c = np.array(['dull', 'dull', 'dull', 'dull', 'dull', 'shiny', 'shiny'], dtype=object) - res = crosstab(a, [b, c], rownames=['a'], colnames=['b', 'c'], dropna=False) + a = np.array(['foo', 'foo', 'foo', 'bar', + 'bar', 'foo', 'foo'], dtype=object) + b = np.array(['one', 'one', 'two', 'one', + 'two', 'two', 'two'], dtype=object) + c = np.array(['dull', 'dull', 'dull', 'dull', + 'dull', 'shiny', 'shiny'], dtype=object) + res = crosstab(a, [b, c], rownames=['a'], + colnames=['b', 'c'], dropna=False) m = MultiIndex.from_tuples([('one', 'dull'), ('one', 'shiny'), ('two', 'dull'), ('two', 'shiny')]) assert_equal(res.columns.values, m.values) @@ -768,9 +859,9 @@ def test_categorical_margins(self): 'y': np.arange(8) // 4, 'z': np.arange(8) % 2}) - expected = pd.DataFrame([[1.0, 2.0, 1.5],[5, 6, 5.5],[3, 4, 3.5]]) - expected.index = Index([0,1,'All'],name='y') - expected.columns = Index([0,1,'All'],name='z') + expected = pd.DataFrame([[1.0, 2.0, 1.5], [5, 6, 5.5], [3, 4, 3.5]]) + expected.index = Index([0, 1, 'All'], name='y') + expected.columns = Index([0, 1, 'All'], name='z') data = df.copy() table = data.pivot_table('x', 'y', 'z', margins=True) diff --git a/pandas/tools/tests/test_tile.py b/pandas/tools/tests/test_tile.py index eac6973bffb25..63dc769f2ed75 100644 --- a/pandas/tools/tests/test_tile.py +++ b/pandas/tools/tests/test_tile.py @@ -4,7 +4,7 @@ import numpy as np from pandas.compat import zip -from pandas import DataFrame, Series, 
unique +from pandas import Series import pandas.util.testing as tm from pandas.util.testing import assertRaisesRegexp import pandas.core.common as com @@ -113,7 +113,7 @@ def test_na_handling(self): def test_inf_handling(self): data = np.arange(6) - data_ser = Series(data,dtype='int64') + data_ser = Series(data, dtype='int64') result = cut(data, [-np.inf, 2, 4, np.inf]) result_ser = cut(data_ser, [-np.inf, 2, 4, np.inf]) @@ -151,7 +151,8 @@ def test_qcut_specify_quantiles(self): self.assertTrue(factor.equals(expected)) def test_qcut_all_bins_same(self): - assertRaisesRegexp(ValueError, "edges.*unique", qcut, [0,0,0,0,0,0,0,0,0,0], 3) + assertRaisesRegexp(ValueError, "edges.*unique", qcut, + [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 3) def test_cut_out_of_bounds(self): arr = np.random.randn(100) @@ -230,19 +231,21 @@ def test_qcut_binning_issues(self): def test_cut_return_categorical(self): from pandas import Categorical - s = Series([0,1,2,3,4,5,6,7,8]) - res = cut(s,3) - exp = Series(Categorical.from_codes([0,0,0,1,1,1,2,2,2], - ["(-0.008, 2.667]", "(2.667, 5.333]", "(5.333, 8]"], + s = Series([0, 1, 2, 3, 4, 5, 6, 7, 8]) + res = cut(s, 3) + exp = Series(Categorical.from_codes([0, 0, 0, 1, 1, 1, 2, 2, 2], + ["(-0.008, 2.667]", + "(2.667, 5.333]", "(5.333, 8]"], ordered=True)) tm.assert_series_equal(res, exp) def test_qcut_return_categorical(self): from pandas import Categorical - s = Series([0,1,2,3,4,5,6,7,8]) - res = qcut(s,[0,0.333,0.666,1]) - exp = Series(Categorical.from_codes([0,0,0,1,1,1,2,2,2], - ["[0, 2.664]", "(2.664, 5.328]", "(5.328, 8]"], + s = Series([0, 1, 2, 3, 4, 5, 6, 7, 8]) + res = qcut(s, [0, 0.333, 0.666, 1]) + exp = Series(Categorical.from_codes([0, 0, 0, 1, 1, 1, 2, 2, 2], + ["[0, 2.664]", + "(2.664, 5.328]", "(5.328, 8]"], ordered=True)) tm.assert_series_equal(res, exp) diff --git a/pandas/tools/tests/test_util.py b/pandas/tools/tests/test_util.py index a00b27c81e668..8a40f65af869a 100644 --- a/pandas/tools/tests/test_util.py +++ 
b/pandas/tools/tests/test_util.py @@ -2,7 +2,7 @@ import locale import codecs import nose -from nose.tools import assert_raises, assert_true +from nose.tools import assert_raises import numpy as np from numpy.testing import assert_equal @@ -22,7 +22,7 @@ def test_simple(self): x, y = list('ABC'), [1, 22] result = cartesian_product([x, y]) expected = [np.array(['A', 'A', 'B', 'B', 'C', 'C']), - np.array([ 1, 22, 1, 22, 1, 22])] + np.array([1, 22, 1, 22, 1, 22])] assert_equal(result, expected) def test_datetimeindex(self): @@ -91,6 +91,7 @@ def test_set_locale(self): class TestToNumeric(tm.TestCase): + def test_series(self): s = pd.Series(['1', '-3.14', '7']) res = to_numeric(s) @@ -130,7 +131,7 @@ def test_numeric(self): tm.assert_series_equal(res, expected) def test_all_nan(self): - s = pd.Series(['a','b','c']) + s = pd.Series(['a', 'b', 'c']) res = to_numeric(s, errors='coerce') expected = pd.Series([np.nan, np.nan, np.nan]) tm.assert_series_equal(res, expected) @@ -147,4 +148,3 @@ def test_type_check(self): if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) - diff --git a/pandas/tools/tile.py b/pandas/tools/tile.py index 416addfcf2ad5..f66ace14ccf50 100644 --- a/pandas/tools/tile.py +++ b/pandas/tools/tile.py @@ -2,9 +2,8 @@ Quantilization functions and related stuff """ -from pandas.core.api import DataFrame, Series +from pandas.core.api import Series from pandas.core.categorical import Categorical -from pandas.core.index import _ensure_index import pandas.core.algorithms as algos import pandas.core.common as com import pandas.core.nanops as nanops @@ -34,8 +33,9 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3, right == True (the default), then the bins [1,2,3,4] indicate (1,2], (2,3], (3,4]. labels : array or boolean, default None - Used as labels for the resulting bins. Must be of the same length as the resulting - bins. If False, return only integer indicators of the bins. 
+ Used as labels for the resulting bins. Must be of the same length as + the resulting bins. If False, return only integer indicators of the + bins. retbins : bool, optional Whether to return the bins or not. Can be useful if bins is given as a scalar. @@ -47,9 +47,9 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3, Returns ------- out : Categorical or Series or array of integers if labels is False - The return type (Categorical or Series) depends on the input: a Series of type category if - input is a Series else Categorical. Bins are represented as categories when categorical - data is returned. + The return type (Categorical or Series) depends on the input: a Series + of type category if input is a Series else Categorical. Bins are + represented as categories when categorical data is returned. bins : ndarray of floats Returned only if `retbins` is True. @@ -66,10 +66,12 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3, Examples -------- >>> pd.cut(np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1]), 3, retbins=True) - ([(0.191, 3.367], (0.191, 3.367], (0.191, 3.367], (3.367, 6.533], (6.533, 9.7], (0.191, 3.367]] + ([(0.191, 3.367], (0.191, 3.367], (0.191, 3.367], (3.367, 6.533], + (6.533, 9.7], (0.191, 3.367]] Categories (3, object): [(0.191, 3.367] < (3.367, 6.533] < (6.533, 9.7]], array([ 0.1905 , 3.36666667, 6.53333333, 9.7 ])) - >>> pd.cut(np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1]), 3, labels=["good","medium","bad"]) + >>> pd.cut(np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1]), 3, + labels=["good","medium","bad"]) [good, good, good, medium, bad, good] Categories (3, object): [good < medium < bad] >>> pd.cut(np.ones(5), 4, labels=False) @@ -109,11 +111,11 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3, if (np.diff(bins) < 0).any(): raise ValueError('bins must increase monotonically.') - return _bins_to_cuts(x, bins, right=right, labels=labels,retbins=retbins, precision=precision, + return _bins_to_cuts(x, bins, 
right=right, labels=labels, + retbins=retbins, precision=precision, include_lowest=include_lowest) - def qcut(x, q, labels=None, retbins=False, precision=3): """ Quantile-based discretization function. Discretize variable into @@ -128,8 +130,9 @@ def qcut(x, q, labels=None, retbins=False, precision=3): Number of quantiles. 10 for deciles, 4 for quartiles, etc. Alternately array of quantiles, e.g. [0, .25, .5, .75, 1.] for quartiles labels : array or boolean, default None - Used as labels for the resulting bins. Must be of the same length as the resulting - bins. If False, return only integer indicators of the bins. + Used as labels for the resulting bins. Must be of the same length as + the resulting bins. If False, return only integer indicators of the + bins. retbins : bool, optional Whether to return the bins or not. Can be useful if bins is given as a scalar. @@ -139,9 +142,9 @@ def qcut(x, q, labels=None, retbins=False, precision=3): Returns ------- out : Categorical or Series or array of integers if labels is False - The return type (Categorical or Series) depends on the input: a Series of type category if - input is a Series else Categorical. Bins are represented as categories when categorical - data is returned. + The return type (Categorical or Series) depends on the input: a Series + of type category if input is a Series else Categorical. Bins are + represented as categories when categorical data is returned. bins : ndarray of floats Returned only if `retbins` is True. 
@@ -165,9 +168,8 @@ def qcut(x, q, labels=None, retbins=False, precision=3): else: quantiles = q bins = algos.quantile(x, quantiles) - return _bins_to_cuts(x, bins, labels=labels, retbins=retbins,precision=precision, - include_lowest=True) - + return _bins_to_cuts(x, bins, labels=labels, retbins=retbins, + precision=precision, include_lowest=True) def _bins_to_cuts(x, bins, right=True, labels=None, retbins=False, diff --git a/pandas/tools/util.py b/pandas/tools/util.py index c3ebadfdb9e0b..3b7becdf64a10 100644 --- a/pandas/tools/util.py +++ b/pandas/tools/util.py @@ -36,7 +36,7 @@ def cartesian_product(X): return [np.tile(np.repeat(np.asarray(com._values_from_object(x)), b[i]), np.product(a[i])) - for i, x in enumerate(X)] + for i, x in enumerate(X)] def _compose2(f, g):
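The reformatted `test_pivot_timegrouper` cases above pivot with `pd.Grouper` on a datetime column; a minimal sketch of that call pattern (note: `freq='YS'` is used here in place of the `'A'` alias from the tests, which is deprecated in current pandas):

```python
from datetime import datetime

import pandas as pd

df = pd.DataFrame({
    'Buyer': ['Carl', 'Mark', 'Carl', 'Joe'],
    'Quantity': [1, 3, 5, 8],
    'Date': [datetime(2013, 1, 1), datetime(2013, 1, 1),
             datetime(2013, 10, 1), datetime(2013, 10, 1)]})

# Group the Date column into yearly buckets while spreading Buyer
# across the columns, as the timegrouper tests above do.
result = pd.pivot_table(df, index=pd.Grouper(freq='YS', key='Date'),
                        columns='Buyer', values='Quantity', aggfunc='sum')
print(result)
```

Both quantities per buyer collapse into a single yearly row, which is what the tests compare against their hand-built expected frames.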
https://api.github.com/repos/pandas-dev/pandas/pulls/12082
2016-01-18T16:21:10Z
2016-01-19T11:57:16Z
null
2016-01-19T11:58:06Z
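The `tile.py` docstring rewording in the diff above describes `cut`'s return types (Categorical for array input, integer indicators when `labels=False`); a small illustration of that behavior as it works in current pandas:

```python
import numpy as np
import pandas as pd

# Array input -> a Categorical; labels=... names the three bins.
cat = pd.cut(np.array([0.2, 1.4, 2.5, 6.2, 9.7, 2.1]), 3,
             labels=["good", "medium", "bad"])
print(type(cat).__name__)   # Categorical

# Series input with labels=False -> integer bin indicators.
codes = pd.cut(pd.Series([0.2, 1.4, 2.5]), 3, labels=False)
print(codes.tolist())       # [0, 1, 2]
```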
BUG: GH12071 .reset_index() should create a RangeIndex
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt
index b2eb7d9d97d58..b84c3d635280d 100644
--- a/doc/source/whatsnew/v0.18.0.txt
+++ b/doc/source/whatsnew/v0.18.0.txt
@@ -110,7 +110,7 @@ Range Index

 A ``RangeIndex`` has been added to the ``Int64Index`` sub-classes to support a memory saving alternative for common use cases. This has a similar implementation to the python ``range`` object (``xrange`` in python 2), in that it only stores the start, stop, and step values for the index. It will transparently interact with the user API, converting to ``Int64Index`` if needed.

-This will now be the default constructed index for ``NDFrame`` objects, rather than previous an ``Int64Index``. (:issue:`939`)
+This will now be the default constructed index for ``NDFrame`` objects, rather than previous an ``Int64Index``. (:issue:`939`, :issue:`12071`)

 Previous Behavior:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 7f53e08b7c38b..e4f5e86e216ba 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -30,7 +30,7 @@
                                 is_internal_type, is_datetimetz,
                                 _possibly_infer_to_datetimelike, _dict_compat)
 from pandas.core.generic import NDFrame, _shared_docs
-from pandas.core.index import Index, MultiIndex, _ensure_index
+from pandas.core.index import Index, MultiIndex, _ensure_index, RangeIndex
 from pandas.core.indexing import (maybe_droplevels, convert_to_index_sliceable,
                                   check_bool_indexer)
 from pandas.core.internals import (BlockManager,
@@ -2891,7 +2891,7 @@ def _maybe_casted_values(index, labels=None):
                                     np.nan)
             return values

-        new_index = np.arange(len(new_obj), dtype='int64')
+        new_index = _default_index(len(new_obj))
         if isinstance(self.index, MultiIndex):
             if level is not None:
                 if not isinstance(level, (tuple, list)):
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index bea8eab932be3..884a147d7509f 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -7,7 +7,7 @@
 import numpy as np

 from pandas.compat import lrange
-from pandas import DataFrame, Series, Index, MultiIndex
+from pandas import DataFrame, Series, Index, MultiIndex, RangeIndex
 import pandas as pd

 from pandas.util.testing import (assert_almost_equal,
@@ -578,6 +578,17 @@ def test_reset_index_with_datetimeindex_cols(self):
                                               datetime(2013, 1, 2)])
         assert_frame_equal(result, expected)

+    def test_reset_index_range(self):
+        # GH 12071
+        df = pd.DataFrame([[0, 0], [1, 1]], columns=['A', 'B'],
+                          index=RangeIndex(stop=2))
+        result = df.reset_index()
+        tm.assertIsInstance(result.index, RangeIndex)
+        expected = pd.DataFrame([[0, 0, 0], [1, 1, 1]],
+                                columns=['index', 'A', 'B'],
+                                index=RangeIndex(stop=2))
+        assert_frame_equal(result, expected)
+
     def test_set_index_names(self):
         df = pd.util.testing.makeDataFrame()
         df.index.name = 'name'
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 4045825578aff..a67eb998ce819 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -20,7 +20,7 @@
 from pandas import (Index, Series, DataFrame, isnull, notnull, bdate_range,
                     NaT, date_range, period_range, timedelta_range,
                     _np_version_under1p8, _np_version_under1p9)
-from pandas.core.index import MultiIndex
+from pandas.core.index import MultiIndex, RangeIndex
 from pandas.core.indexing import IndexingError
 from pandas.tseries.period import PeriodIndex
 from pandas.tseries.index import Timestamp, DatetimeIndex
@@ -8553,6 +8553,15 @@ def test_reset_index(self):
         self.assertTrue(rs.index.equals(Index(index.get_level_values(1))))
         tm.assertIsInstance(rs, Series)

+    def test_reset_index_range(self):
+        # GH 12071
+        s = pd.Series(range(2), name='A', index=RangeIndex(stop=2))
+        series_result = s.reset_index()
+        tm.assertIsInstance(series_result.index, RangeIndex)
+        series_expected = pd.DataFrame([[0, 0], [1, 1]], columns=['index', 'A'],
+                                       index=RangeIndex(stop=2))
+        assert_frame_equal(series_result, series_expected)
+
     def test_set_index_makes_timeseries(self):
         idx = tm.makeDateIndex(10)
closes #12071
https://api.github.com/repos/pandas-dev/pandas/pulls/12080
2016-01-18T14:48:39Z
2016-01-20T14:10:00Z
null
2016-01-20T14:24:23Z
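The behavior this PR introduces can be sketched with a minimal check (an illustrative example, not part of the PR; it assumes a pandas version >= 0.18, where `RangeIndex` is exposed at the top level):

```python
import pandas as pd

# Since pandas 0.18 the default constructed index is a RangeIndex.
df = pd.DataFrame([[0, 0], [1, 1]], columns=['A', 'B'])

# Before GH12071, reset_index() built a new index via
# np.arange(len(obj), dtype='int64') (an Int64Index); after the fix it
# goes through _default_index(), producing a memory-saving RangeIndex.
result = df.reset_index()
assert isinstance(result.index, pd.RangeIndex)

# The old index is moved into an ordinary column named 'index'.
assert list(result.columns) == ['index', 'A', 'B']
```

The tests added in the diff assert exactly this: `reset_index` on both a `DataFrame` and a `Series` yields a result whose index is a `RangeIndex` rather than a materialized `Int64Index`.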
PEP: pandas/core round 5 (nanops, ops, panel*)
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py index 43533b67b5441..7764d6e1e1fa9 100644 --- a/pandas/core/nanops.py +++ b/pandas/core/nanops.py @@ -1,6 +1,7 @@ import itertools import functools import numpy as np +import operator try: import bottleneck as bn @@ -10,13 +11,10 @@ import pandas.hashtable as _hash from pandas import compat, lib, algos, tslib -from pandas.compat import builtins from pandas.core.common import (isnull, notnull, _values_from_object, - _maybe_upcast_putmask, - ensure_float, _ensure_float64, - _ensure_int64, _ensure_object, - is_float, is_integer, is_complex, - is_float_dtype, + _maybe_upcast_putmask, _ensure_float64, + _ensure_int64, _ensure_object, is_float, + is_integer, is_complex, is_float_dtype, is_complex_dtype, is_integer_dtype, is_bool_dtype, is_object_dtype, is_datetime64_dtype, is_timedelta64_dtype, @@ -26,7 +24,6 @@ class disallow(object): - def __init__(self, *dtypes): super(disallow, self).__init__() self.dtypes = tuple(np.dtype(dtype).type for dtype in dtypes) @@ -41,8 +38,8 @@ def _f(*args, **kwargs): obj_iter = itertools.chain(args, compat.itervalues(kwargs)) if any(self.check(obj) for obj in obj_iter): raise TypeError('reduction operation {0!r} not allowed for ' - 'this dtype'.format(f.__name__.replace('nan', - ''))) + 'this dtype'.format( + f.__name__.replace('nan', ''))) try: return f(*args, **kwargs) except ValueError as e: @@ -53,11 +50,11 @@ def _f(*args, **kwargs): if is_object_dtype(args[0]): raise TypeError(e) raise + return _f class bottleneck_switch(object): - def __init__(self, zero_value=None, **kwargs): self.zero_value = zero_value self.kwargs = kwargs @@ -91,8 +88,8 @@ def f(values, axis=None, skipna=True, **kwds): result.fill(0) return result - if _USE_BOTTLENECK and skipna and _bn_ok_dtype(values.dtype, - bn_name): + if (_USE_BOTTLENECK and skipna and + _bn_ok_dtype(values.dtype, bn_name)): result = bn_func(values, axis=axis, **kwds) # prefer to treat inf/-inf as NA, but must compute the func @@ 
-121,8 +118,7 @@ def f(values, axis=None, skipna=True, **kwds): def _bn_ok_dtype(dt, name): # Bottleneck chokes on datetime64 - if (not is_object_dtype(dt) and - not is_datetime_or_timedelta_dtype(dt)): + if (not is_object_dtype(dt) and not is_datetime_or_timedelta_dtype(dt)): # bottleneck does not properly upcast during the sum # so can overflow @@ -142,7 +138,7 @@ def _has_infs(result): return lib.has_infs_f4(result.ravel()) try: return np.isinf(result).any() - except (TypeError, NotImplementedError) as e: + except (TypeError, NotImplementedError): # if it doesn't support infs, then it can't have infs return False @@ -173,8 +169,9 @@ def _get_fill_value(dtype, fill_value=None, fill_value_typ=None): def _get_values(values, skipna, fill_value=None, fill_value_typ=None, isfinite=False, copy=True): """ utility to get the values view, mask, dtype - if necessary copy and mask using the specified fill_value - copy = True will force the copy """ + if necessary copy and mask using the specified fill_value + copy = True will force the copy + """ values = _values_from_object(values) if isfinite: mask = _isfinite(values) @@ -331,7 +328,8 @@ def get_median(x): if values.ndim > 1: # there's a non-empty array to apply over otherwise numpy raises if notempty: - return _wrap_results(np.apply_along_axis(get_median, axis, values), dtype) + return _wrap_results( + np.apply_along_axis(get_median, axis, values), dtype) # must return the correct shape, but median is not defined for the # empty set so return nans of shape "everything but the passed axis" @@ -400,7 +398,7 @@ def nanvar(values, axis=None, skipna=True, ddof=1): avg = _ensure_numeric(values.sum(axis=axis, dtype=np.float64)) / count if axis is not None: avg = np.expand_dims(avg, axis) - sqr = _ensure_numeric((avg - values) ** 2) + sqr = _ensure_numeric((avg - values)**2) np.putmask(sqr, mask, 0) result = sqr.sum(axis=axis, dtype=np.float64) / d @@ -429,13 +427,10 @@ def _nanminmax(meth, fill_value_typ): @bottleneck_switch() 
def reduction(values, axis=None, skipna=True): values, mask, dtype, dtype_max = _get_values( - values, - skipna, - fill_value_typ=fill_value_typ, - ) + values, skipna, fill_value_typ=fill_value_typ, ) - if ((axis is not None and values.shape[axis] == 0) - or values.size == 0): + if ((axis is not None and values.shape[axis] == 0) or + values.size == 0): try: result = getattr(values, meth)(axis, dtype=dtype_max) result.fill(np.nan) @@ -477,7 +472,7 @@ def nanargmin(values, axis=None, skipna=True): return result -@disallow('M8','m8') +@disallow('M8', 'm8') def nanskew(values, axis=None, skipna=True): mask = isnull(values) @@ -493,15 +488,15 @@ def nanskew(values, axis=None, skipna=True): typ = values.dtype.type A = values.sum(axis) / count - B = (values ** 2).sum(axis) / count - A ** typ(2) - C = (values ** 3).sum(axis) / count - A ** typ(3) - typ(3) * A * B + B = (values**2).sum(axis) / count - A**typ(2) + C = (values**3).sum(axis) / count - A**typ(3) - typ(3) * A * B # floating point error B = _zero_out_fperr(B) C = _zero_out_fperr(C) result = ((np.sqrt(count * count - count) * C) / - ((count - typ(2)) * np.sqrt(B) ** typ(3))) + ((count - typ(2)) * np.sqrt(B)**typ(3))) if isinstance(result, np.ndarray): result = np.where(B == 0, 0, result) @@ -514,7 +509,7 @@ def nanskew(values, axis=None, skipna=True): return result -@disallow('M8','m8') +@disallow('M8', 'm8') def nankurt(values, axis=None, skipna=True): mask = isnull(values) @@ -530,22 +525,25 @@ def nankurt(values, axis=None, skipna=True): typ = values.dtype.type A = values.sum(axis) / count - B = (values ** 2).sum(axis) / count - A ** typ(2) - C = (values ** 3).sum(axis) / count - A ** typ(3) - typ(3) * A * B - D = (values ** 4).sum(axis) / count - A ** typ(4) - typ(6) * B * A * A - typ(4) * C * A + B = (values**2).sum(axis) / count - A**typ(2) + C = (values**3).sum(axis) / count - A**typ(3) - typ(3) * A * B + D = ((values**4).sum(axis) / count - A**typ(4) - + typ(6) * B * A * A - typ(4) * C * A) B = 
_zero_out_fperr(B) D = _zero_out_fperr(D) if not isinstance(B, np.ndarray): - # if B is a scalar, check these corner cases first before doing division + # if B is a scalar, check these corner cases first before doing + # division if count < 4: return np.nan if B == 0: return 0 - result = (((count * count - typ(1)) * D / (B * B) - typ(3) * ((count - typ(1)) ** typ(2))) / - ((count - typ(2)) * (count - typ(3)))) + result = (((count * count - typ(1)) * D / (B * B) - typ(3) * + ((count - typ(1))**typ(2))) / ((count - typ(2)) * + (count - typ(3)))) if isinstance(result, np.ndarray): result = np.where(B == 0, 0, result) @@ -554,7 +552,7 @@ def nankurt(values, axis=None, skipna=True): return result -@disallow('M8','m8') +@disallow('M8', 'm8') def nanprod(values, axis=None, skipna=True): mask = isnull(values) if skipna and not is_any_int_dtype(values): @@ -621,7 +619,7 @@ def _zero_out_fperr(arg): return arg.dtype.type(0) if np.abs(arg) < 1e-14 else arg -@disallow('M8','m8') +@disallow('M8', 'm8') def nancorr(a, b, method='pearson', min_periods=None): """ a, b: ndarrays @@ -668,7 +666,7 @@ def _spearman(a, b): return _cor_methods[method] -@disallow('M8','m8') +@disallow('M8', 'm8') def nancov(a, b, min_periods=None): if len(a) != len(b): raise AssertionError('Operands to nancov must have same size') @@ -711,8 +709,6 @@ def _ensure_numeric(x): # NA-friendly array comparisons -import operator - def make_nancomp(op): def f(x, y): @@ -728,8 +724,10 @@ def f(x, y): np.putmask(result, mask, np.nan) return result + return f + nangt = make_nancomp(operator.gt) nange = make_nancomp(operator.ge) nanlt = make_nancomp(operator.lt) diff --git a/pandas/core/ops.py b/pandas/core/ops.py index 4d003456f8102..ef699be301d4b 100644 --- a/pandas/core/ops.py +++ b/pandas/core/ops.py @@ -18,12 +18,13 @@ from pandas.lib import isscalar from pandas.tslib import iNaT from pandas.compat import bind_method -from pandas.core.common import(is_list_like, notnull, isnull, - _values_from_object, 
_maybe_match_name, - needs_i8_conversion, is_datetimelike_v_numeric, - is_integer_dtype, is_categorical_dtype, is_object_dtype, - is_timedelta64_dtype, is_datetime64_dtype, is_datetime64tz_dtype, - is_bool_dtype) +from pandas.core.common import (is_list_like, notnull, isnull, + _values_from_object, _maybe_match_name, + needs_i8_conversion, is_datetimelike_v_numeric, + is_integer_dtype, is_categorical_dtype, + is_object_dtype, is_timedelta64_dtype, + is_datetime64_dtype, is_datetime64tz_dtype, + is_bool_dtype) from pandas.io.common import PerformanceWarning # ----------------------------------------------------------------------------- @@ -44,6 +45,7 @@ def _create_methods(arith_method, radd_func, comp_method, bool_method, else: op = lambda x: None if special: + def names(x): if x[-1] == "_": return "__%s_" % x @@ -54,6 +56,7 @@ def names(x): radd_func = radd_func or operator.add # Inframe, all special methods have default_axis=None, flex methods have # default_axis set to the default (columns) + # yapf: disable new_methods = dict( add=arith_method(operator.add, names('add'), op('+'), default_axis=default_axis), @@ -68,8 +71,8 @@ def names(x): default_axis=default_axis), floordiv=arith_method(operator.floordiv, names('floordiv'), op('//'), default_axis=default_axis, fill_zeros=np.inf), - # Causes a floating point exception in the tests when numexpr - # enabled, so for now no speedup + # Causes a floating point exception in the tests when numexpr enabled, + # so for now no speedup mod=arith_method(operator.mod, names('mod'), None, default_axis=default_axis, fill_zeros=np.nan), pow=arith_method(operator.pow, names('pow'), op('**'), @@ -88,12 +91,12 @@ def names(x): names('rfloordiv'), op('//'), default_axis=default_axis, fill_zeros=np.inf, reversed=True), - rpow=arith_method(lambda x, y: y ** x, names('rpow'), op('**'), + rpow=arith_method(lambda x, y: y**x, names('rpow'), op('**'), default_axis=default_axis, reversed=True), rmod=arith_method(lambda x, y: y % x, 
names('rmod'), op('%'), default_axis=default_axis, fill_zeros=np.nan, - reversed=True), - ) + reversed=True),) + # yapf: enable new_methods['div'] = new_methods['truediv'] new_methods['rdiv'] = new_methods['rtruediv'] @@ -105,19 +108,19 @@ def names(x): lt=comp_method(operator.lt, names('lt'), op('<')), gt=comp_method(operator.gt, names('gt'), op('>')), le=comp_method(operator.le, names('le'), op('<=')), - ge=comp_method(operator.ge, names('ge'), op('>=')), - )) + ge=comp_method(operator.ge, names('ge'), op('>=')), )) if bool_method: - new_methods.update(dict( - and_=bool_method(operator.and_, names('and_'), op('&')), - or_=bool_method(operator.or_, names('or_'), op('|')), - # For some reason ``^`` wasn't used in original. - xor=bool_method(operator.xor, names('xor'), op('^')), - rand_=bool_method(lambda x, y: operator.and_(y, x), - names('rand_'), op('&')), - ror_=bool_method(lambda x, y: operator.or_(y, x), names('ror_'), op('|')), - rxor=bool_method(lambda x, y: operator.xor(y, x), names('rxor'), op('^')) - )) + new_methods.update( + dict(and_=bool_method(operator.and_, names('and_'), op('&')), + or_=bool_method(operator.or_, names('or_'), op('|')), + # For some reason ``^`` wasn't used in original. 
+ xor=bool_method(operator.xor, names('xor'), op('^')), + rand_=bool_method(lambda x, y: operator.and_(y, x), + names('rand_'), op('&')), + ror_=bool_method(lambda x, y: operator.or_(y, x), + names('ror_'), op('|')), + rxor=bool_method(lambda x, y: operator.xor(y, x), + names('rxor'), op('^')))) new_methods = dict((names(k), v) for k, v in new_methods.items()) return new_methods @@ -142,7 +145,7 @@ def add_methods(cls, new_methods, force, select, exclude): bind_method(cls, name, method) -#---------------------------------------------------------------------- +# ---------------------------------------------------------------------- # Arithmetic def add_special_arithmetic_methods(cls, arith_method=None, radd_func=None, comp_method=None, bool_method=None, @@ -193,19 +196,19 @@ def f(self, other): # this makes sure that we are aligned like the input # we are updating inplace so we want to ignore is_copy - self._update_inplace(result.reindex_like(self,copy=False)._data, + self._update_inplace(result.reindex_like(self, copy=False)._data, verify_is_copy=False) return self + return f - new_methods.update(dict( - __iadd__=_wrap_inplace_method(new_methods["__add__"]), - __isub__=_wrap_inplace_method(new_methods["__sub__"]), - __imul__=_wrap_inplace_method(new_methods["__mul__"]), - __itruediv__=_wrap_inplace_method(new_methods["__truediv__"]), - __ipow__=_wrap_inplace_method(new_methods["__pow__"]), - )) + new_methods.update( + dict(__iadd__=_wrap_inplace_method(new_methods["__add__"]), + __isub__=_wrap_inplace_method(new_methods["__sub__"]), + __imul__=_wrap_inplace_method(new_methods["__mul__"]), + __itruediv__=_wrap_inplace_method(new_methods["__truediv__"]), + __ipow__=_wrap_inplace_method(new_methods["__pow__"]), )) if not compat.PY3: new_methods["__idiv__"] = new_methods["__div__"] @@ -243,14 +246,13 @@ def add_flex_arithmetic_methods(cls, flex_arith_method, radd_func=None, """ radd_func = radd_func or (lambda x, y: operator.add(y, x)) # in frame, default axis is 
'columns', doesn't matter for series and panel - new_methods = _create_methods( - flex_arith_method, radd_func, flex_comp_method, flex_bool_method, - use_numexpr, default_axis='columns', special=False) - new_methods.update(dict( - multiply=new_methods['mul'], - subtract=new_methods['sub'], - divide=new_methods['div'] - )) + new_methods = _create_methods(flex_arith_method, radd_func, + flex_comp_method, flex_bool_method, + use_numexpr, default_axis='columns', + special=False) + new_methods.update(dict(multiply=new_methods['mul'], + subtract=new_methods['sub'], + divide=new_methods['div'])) # opt out of bool flex methods for now for k in ('ror_', 'rxor', 'rand_'): if k in new_methods: @@ -261,7 +263,6 @@ def add_flex_arithmetic_methods(cls, flex_arith_method, radd_func=None, class _TimeOp(object): - """ Wrapper around Series datetime/time/timedelta arithmetic operations. Generally, you should use classmethod ``maybe_convert_for_time_op`` as an @@ -275,7 +276,7 @@ def __init__(self, left, right, name, na_op): # need to make sure that we are aligning the data if isinstance(left, pd.Series) and isinstance(right, pd.Series): - left, right = left.align(right,copy=False) + left, right = left.align(right, copy=False) lvalues = self._convert_to_array(left, name=name) rvalues = self._convert_to_array(right, name=name, other=lvalues) @@ -289,7 +290,8 @@ def __init__(self, left, right, name, na_op): self.is_timedelta_lhs = is_timedelta64_dtype(lvalues) self.is_datetime64_lhs = is_datetime64_dtype(lvalues) self.is_datetime64tz_lhs = is_datetime64tz_dtype(lvalues) - self.is_datetime_lhs = self.is_datetime64_lhs or self.is_datetime64tz_lhs + self.is_datetime_lhs = (self.is_datetime64_lhs or + self.is_datetime64tz_lhs) self.is_integer_lhs = left.dtype.kind in ['i', 'u'] self.is_floating_lhs = left.dtype.kind == 'f' @@ -298,21 +300,23 @@ def __init__(self, left, right, name, na_op): self.is_offset_rhs = self._is_offset(right) self.is_datetime64_rhs = is_datetime64_dtype(rvalues) 
self.is_datetime64tz_rhs = is_datetime64tz_dtype(rvalues) - self.is_datetime_rhs = self.is_datetime64_rhs or self.is_datetime64tz_rhs + self.is_datetime_rhs = (self.is_datetime64_rhs or + self.is_datetime64tz_rhs) self.is_timedelta_rhs = is_timedelta64_dtype(rvalues) self.is_integer_rhs = rvalues.dtype.kind in ('i', 'u') self.is_floating_rhs = rvalues.dtype.kind == 'f' self._validate(lvalues, rvalues, name) - self.lvalues, self.rvalues = self._convert_for_datetime(lvalues, rvalues) + self.lvalues, self.rvalues = self._convert_for_datetime(lvalues, + rvalues) def _validate(self, lvalues, rvalues, name): # timedelta and integer mul/div - if (self.is_timedelta_lhs and - (self.is_integer_rhs or self.is_floating_rhs)) or ( - self.is_timedelta_rhs and - (self.is_integer_lhs or self.is_floating_lhs)): + if ((self.is_timedelta_lhs and + (self.is_integer_rhs or self.is_floating_rhs)) or + (self.is_timedelta_rhs and + (self.is_integer_lhs or self.is_floating_lhs))): if name not in ('__div__', '__truediv__', '__mul__', '__rmul__'): raise TypeError("can only operate on a timedelta and an " @@ -320,59 +324,56 @@ def _validate(self, lvalues, rvalues, name): "multiplication, but the operator [%s] was" "passed" % name) - # 2 timedeltas elif ((self.is_timedelta_lhs and (self.is_timedelta_rhs or self.is_offset_rhs)) or (self.is_timedelta_rhs and (self.is_timedelta_lhs or self.is_offset_lhs))): - if name not in ('__div__', '__rdiv__', '__truediv__', '__rtruediv__', - '__add__', '__radd__', '__sub__', '__rsub__'): + if name not in ('__div__', '__rdiv__', '__truediv__', + '__rtruediv__', '__add__', '__radd__', '__sub__', + '__rsub__'): raise TypeError("can only operate on a timedeltas for " "addition, subtraction, and division, but the" " operator [%s] was passed" % name) - # datetime and timedelta/DateOffset elif (self.is_datetime_lhs and (self.is_timedelta_rhs or self.is_offset_rhs)): if name not in ('__add__', '__radd__', '__sub__'): - raise TypeError("can only operate on a datetime 
with a rhs of" - " a timedelta/DateOffset for addition and subtraction," - " but the operator [%s] was passed" % - name) + raise TypeError("can only operate on a datetime with a rhs of " + "a timedelta/DateOffset for addition and " + "subtraction, but the operator [%s] was " + "passed" % name) elif (self.is_datetime_rhs and (self.is_timedelta_lhs or self.is_offset_lhs)): if name not in ('__add__', '__radd__', '__rsub__'): - raise TypeError("can only operate on a timedelta/DateOffset with a rhs of" - " a datetime for addition," - " but the operator [%s] was passed" % - name) - + raise TypeError("can only operate on a timedelta/DateOffset " + "with a rhs of a datetime for addition, " + "but the operator [%s] was passed" % name) # 2 datetimes elif self.is_datetime_lhs and self.is_datetime_rhs: - if name not in ('__sub__','__rsub__'): + if name not in ('__sub__', '__rsub__'): raise TypeError("can only operate on a datetimes for" " subtraction, but the operator [%s] was" " passed" % name) # if tz's must be equal (same or None) - if getattr(lvalues,'tz',None) != getattr(rvalues,'tz',None): - raise ValueError("Incompatbile tz's on datetime subtraction ops") + if getattr(lvalues, 'tz', None) != getattr(rvalues, 'tz', None): + raise ValueError("Incompatbile tz's on datetime subtraction " + "ops") - - elif ((self.is_timedelta_lhs or self.is_offset_lhs) - and self.is_datetime_rhs): + elif ((self.is_timedelta_lhs or self.is_offset_lhs) and + self.is_datetime_rhs): if name not in ('__add__', '__radd__'): - raise TypeError("can only operate on a timedelta/DateOffset and" - " a datetime for addition, but the operator" - " [%s] was passed" % name) + raise TypeError("can only operate on a timedelta/DateOffset " + "and a datetime for addition, but the " + "operator [%s] was passed" % name) else: raise TypeError('cannot operate on a series without a rhs ' 'of a series/ndarray of type datetime64[ns] ' @@ -389,18 +390,17 @@ def _convert_to_array(self, values, name=None, other=None): # 
if this is a Series that contains relevant dtype info, then use this # instead of the inferred type; this avoids coercing Series([NaT], # dtype='datetime64[ns]') to Series([NaT], dtype='timedelta64[ns]') - elif isinstance(values, pd.Series) and ( - is_timedelta64_dtype(values) or is_datetime64_dtype(values)): + elif (isinstance(values, pd.Series) and + (is_timedelta64_dtype(values) or is_datetime64_dtype(values))): supplied_dtype = values.dtype inferred_type = supplied_dtype or lib.infer_dtype(values) - if (inferred_type in ('datetime64', 'datetime', 'date', 'time') - or com.is_datetimetz(inferred_type)): + if (inferred_type in ('datetime64', 'datetime', 'date', 'time') or + com.is_datetimetz(inferred_type)): # if we have a other of timedelta, but use pd.NaT here we # we are in the wrong path - if (supplied_dtype is None - and other is not None - and (other.dtype in ('timedelta64[ns]', 'datetime64[ns]')) - and isnull(values).all()): + if (supplied_dtype is None and other is not None and + (other.dtype in ('timedelta64[ns]', 'datetime64[ns]')) and + isnull(values).all()): values = np.empty(values.shape, dtype='timedelta64[ns]') values[:] = iNaT @@ -408,7 +408,8 @@ def _convert_to_array(self, values, name=None, other=None): elif isinstance(values, pd.DatetimeIndex): values = values.to_series() # datetime with tz - elif isinstance(ovalues, datetime.datetime) and hasattr(ovalues,'tz'): + elif (isinstance(ovalues, datetime.datetime) and + hasattr(ovalues, 'tz')): values = pd.DatetimeIndex(values) # datetime array with tz elif com.is_datetimetz(values): @@ -430,8 +431,8 @@ def _convert_to_array(self, values, name=None, other=None): raise TypeError("incompatible type for a datetime/timedelta " "operation [{0}]".format(name)) elif inferred_type == 'floating': - if isnull(values).all() and name in ('__add__', '__radd__', - '__sub__', '__rsub__'): + if (isnull(values).all() and + name in ('__add__', '__radd__', '__sub__', '__rsub__')): values = np.empty(values.shape, 
dtype=other.dtype) values[:] = iNaT return values @@ -471,15 +472,14 @@ def _offset(lvalues, rvalues): rvalues = pd.DatetimeIndex(rvalues) lvalues = lvalues[0] else: - warnings.warn("Adding/subtracting array of DateOffsets to Series not vectorized", - PerformanceWarning) + warnings.warn("Adding/subtracting array of DateOffsets to " + "Series not vectorized", PerformanceWarning) rvalues = rvalues.astype('O') # pass thru on the na_op - self.na_op = lambda x, y: getattr(x,self.name)(y) + self.na_op = lambda x, y: getattr(x, self.name)(y) return lvalues, rvalues - if self.is_offset_lhs: lvalues, rvalues = _offset(lvalues, rvalues) elif self.is_offset_rhs: @@ -512,10 +512,9 @@ def _offset(lvalues, rvalues): # time delta division -> unit less # integer gets converted to timedelta in np < 1.6 - if (self.is_timedelta_lhs and self.is_timedelta_rhs) and\ - not self.is_integer_rhs and\ - not self.is_integer_lhs and\ - self.name in ('__div__', '__truediv__'): + if ((self.is_timedelta_lhs and self.is_timedelta_rhs) and + not self.is_integer_rhs and not self.is_integer_lhs and + self.name in ('__div__', '__truediv__')): self.dtype = 'float64' self.fill_value = np.nan lvalues = lvalues.astype(np.float64) @@ -523,6 +522,7 @@ def _offset(lvalues, rvalues): # if we need to mask the results if mask.any(): + def f(x): # datetime64[ns]/timedelta64[ns] masking @@ -533,11 +533,11 @@ def f(x): np.putmask(x, mask, self.fill_value) return x + self.wrap_results = f return lvalues, rvalues - def _is_offset(self, arr_or_obj): """ check if obj or all elements of list-like is DateOffset """ if isinstance(arr_or_obj, pd.DateOffset): @@ -559,7 +559,8 @@ def maybe_convert_for_time_op(cls, left, right, name, na_op): """ # decide if we can do it is_timedelta_lhs = is_timedelta64_dtype(left) - is_datetime_lhs = is_datetime64_dtype(left) or is_datetime64tz_dtype(left) + is_datetime_lhs = (is_datetime64_dtype(left) or + is_datetime64tz_dtype(left)) if not (is_datetime_lhs or is_timedelta_lhs): return 
None @@ -567,12 +568,13 @@ def maybe_convert_for_time_op(cls, left, right, name, na_op): return cls(left, right, name, na_op) -def _arith_method_SERIES(op, name, str_rep, fill_zeros=None, - default_axis=None, **eval_kwargs): +def _arith_method_SERIES(op, name, str_rep, fill_zeros=None, default_axis=None, + **eval_kwargs): """ Wrapper function for Series arithmetic operations, to avoid code duplication. """ + def na_op(x, y): try: result = expressions.evaluate(op, str_rep, x, y, @@ -588,7 +590,9 @@ def na_op(x, y): mask = notnull(x) result[mask] = op(x[mask], y) else: - raise TypeError("{typ} cannot perform the operation {op}".format(typ=type(x).__name__,op=str_rep)) + raise TypeError("{typ} cannot perform the operation " + "{op}".format(typ=type(x).__name__, + op=str_rep)) result, changed = com._maybe_upcast_putmask(result, ~mask, np.nan) @@ -600,7 +604,8 @@ def wrapper(left, right, name=name, na_op=na_op): if isinstance(right, pd.DataFrame): return NotImplemented - time_converted = _TimeOp.maybe_convert_for_time_op(left, right, name, na_op) + time_converted = _TimeOp.maybe_convert_for_time_op(left, right, name, + na_op) if time_converted is None: lvalues, rvalues = left, right @@ -616,7 +621,7 @@ def wrapper(left, right, name=name, na_op=na_op): na_op = time_converted.na_op if isinstance(rvalues, pd.Series): - rindex = getattr(rvalues,'index',rvalues) + rindex = getattr(rvalues, 'index', rvalues) name = _maybe_match_name(left, rvalues) lvalues = getattr(lvalues, 'values', lvalues) rvalues = getattr(rvalues, 'values', rvalues) @@ -624,7 +629,7 @@ def wrapper(left, right, name=name, na_op=na_op): index = left.index else: index, lidx, ridx = left.index.join(rindex, how='outer', - return_indexers=True) + return_indexers=True) if lidx is not None: lvalues = com.take_1d(lvalues, lidx) @@ -638,12 +643,14 @@ def wrapper(left, right, name=name, na_op=na_op): name=name, dtype=dtype) else: # scalars - if hasattr(lvalues, 'values') and not isinstance(lvalues, 
pd.DatetimeIndex): + if (hasattr(lvalues, 'values') and + not isinstance(lvalues, pd.DatetimeIndex)): lvalues = lvalues.values return left._constructor(wrap_results(na_op(lvalues, rvalues)), index=left.index, name=left.name, dtype=dtype) + return wrapper @@ -652,14 +659,15 @@ def _comp_method_SERIES(op, name, str_rep, masker=False): Wrapper function for Series arithmetic operations, to avoid code duplication. """ + def na_op(x, y): # dispatch to the categorical if we have a categorical # in either operand if is_categorical_dtype(x): - return op(x,y) + return op(x, y) elif is_categorical_dtype(y) and not isscalar(y): - return op(y,x) + return op(y, x) if is_object_dtype(x.dtype): if isinstance(y, list): @@ -691,10 +699,11 @@ def na_op(x, y): # we have a datetime/timedelta and may need to convert mask = None - if needs_i8_conversion(x) or (not isscalar(y) and needs_i8_conversion(y)): + if (needs_i8_conversion(x) or + (not isscalar(y) and needs_i8_conversion(y))): if isscalar(y): - y = _index.convert_scalar(x,_values_from_object(y)) + y = _index.convert_scalar(x, _values_from_object(y)) else: y = y.view('i8') @@ -734,15 +743,16 @@ def wrapper(self, other, axis=None): index=self.index).__finalize__(self) elif isinstance(other, pd.Categorical): if not is_categorical_dtype(self): - msg = "Cannot compare a Categorical for op {op} with Series of dtype {typ}.\n"\ - "If you want to compare values, use 'series <op> np.asarray(other)'." - raise TypeError(msg.format(op=op,typ=self.dtype)) - + msg = ("Cannot compare a Categorical for op {op} with Series " + "of dtype {typ}.\nIf you want to compare values, use " + "'series <op> np.asarray(other)'.") + raise TypeError(msg.format(op=op, typ=self.dtype)) if is_categorical_dtype(self): - # cats are a special case as get_values() would return an ndarray, which would then - # not take categories ordering into account - # we can go directly to op, as the na_op would just test again and dispatch to it. 
+ # cats are a special case as get_values() would return an ndarray, + # which would then not take categories ordering into account + # we can go directly to op, as the na_op would just test again and + # dispatch to it. res = op(self.values, other) else: values = self.get_values() @@ -751,15 +761,15 @@ def wrapper(self, other, axis=None): res = na_op(values, other) if isscalar(res): - raise TypeError('Could not compare %s type with Series' - % type(other)) + raise TypeError('Could not compare %s type with Series' % + type(other)) # always return a full value series here res = _values_from_object(res) - res = pd.Series(res, index=self.index, name=self.name, - dtype='bool') + res = pd.Series(res, index=self.index, name=self.name, dtype='bool') return res + return wrapper @@ -768,6 +778,7 @@ def _bool_method_SERIES(op, name, str_rep): Wrapper function for Series arithmetic operations, to avoid code duplication. """ + def na_op(x, y): try: result = op(x, y) @@ -808,19 +819,21 @@ def wrapper(self, other): is_other_int_dtype = is_integer_dtype(other.dtype) other = fill_int(other) if is_other_int_dtype else fill_bool(other) - filler = fill_int if is_self_int_dtype and is_other_int_dtype else fill_bool + filler = (fill_int if is_self_int_dtype and is_other_int_dtype + else fill_bool) return filler(self._constructor(na_op(self.values, other.values), - index=self.index, - name=name)) + index=self.index, name=name)) elif isinstance(other, pd.DataFrame): return NotImplemented else: # scalars, list, tuple, np.array - filler = fill_int if is_self_int_dtype and is_integer_dtype(np.asarray(other)) else fill_bool - return filler(self._constructor(na_op(self.values, other), - index=self.index)).__finalize__(self) + filler = (fill_int if is_self_int_dtype and + is_integer_dtype(np.asarray(other)) else fill_bool) + return filler(self._constructor( + na_op(self.values, other), + index=self.index)).__finalize__(self) return wrapper @@ -835,13 +848,35 @@ def _radd_compat(left, right): 
return output -_op_descriptions = {'add': {'op': '+', 'desc': 'Addition', 'reversed': False, 'reverse': 'radd'}, - 'sub': {'op': '-', 'desc': 'Subtraction', 'reversed': False, 'reverse': 'rsub'}, - 'mul': {'op': '*', 'desc': 'Multiplication', 'reversed': False, 'reverse': 'rmul'}, - 'mod': {'op': '%', 'desc': 'Modulo', 'reversed': False, 'reverse': 'rmod'}, - 'pow': {'op': '**', 'desc': 'Exponential power', 'reversed': False, 'reverse': 'rpow'}, - 'truediv': {'op': '/', 'desc': 'Floating division', 'reversed': False, 'reverse': 'rtruediv'}, - 'floordiv': {'op': '//', 'desc': 'Integer division', 'reversed': False, 'reverse': 'rfloordiv'}} + +_op_descriptions = {'add': {'op': '+', + 'desc': 'Addition', + 'reversed': False, + 'reverse': 'radd'}, + 'sub': {'op': '-', + 'desc': 'Subtraction', + 'reversed': False, + 'reverse': 'rsub'}, + 'mul': {'op': '*', + 'desc': 'Multiplication', + 'reversed': False, + 'reverse': 'rmul'}, + 'mod': {'op': '%', + 'desc': 'Modulo', + 'reversed': False, + 'reverse': 'rmod'}, + 'pow': {'op': '**', + 'desc': 'Exponential power', + 'reversed': False, + 'reverse': 'rpow'}, + 'truediv': {'op': '/', + 'desc': 'Floating division', + 'reversed': False, + 'reverse': 'rtruediv'}, + 'floordiv': {'op': '//', + 'desc': 'Integer division', + 'reversed': False, + 'reverse': 'rfloordiv'}} _op_names = list(_op_descriptions.keys()) for k in _op_names: @@ -850,8 +885,9 @@ def _radd_compat(left, right): _op_descriptions[reverse_op]['reversed'] = True _op_descriptions[reverse_op]['reverse'] = k -def _flex_method_SERIES(op, name, str_rep, default_axis=None, - fill_zeros=None, **eval_kwargs): + +def _flex_method_SERIES(op, name, str_rep, default_axis=None, fill_zeros=None, + **eval_kwargs): op_name = name.replace('__', '') op_desc = _op_descriptions[op_name] if op_desc['reversed']: @@ -902,6 +938,7 @@ def flex_wrapper(self, other, level=None, fill_value=None, axis=0): flex_wrapper.__name__ = name return flex_wrapper + series_flex_funcs = 
dict(flex_arith_method=_flex_method_SERIES, radd_func=_radd_compat, flex_comp_method=_comp_method_SERIES) @@ -911,7 +948,6 @@ def flex_wrapper(self, other, level=None, fill_value=None, axis=0): comp_method=_comp_method_SERIES, bool_method=_bool_method_SERIES) - _arith_doc_FRAME = """ Binary operator %s with support to substitute a fill_value for missing data in one of the inputs @@ -942,8 +978,8 @@ def _arith_method_FRAME(op, name, str_rep=None, default_axis='columns', fill_zeros=None, **eval_kwargs): def na_op(x, y): try: - result = expressions.evaluate( - op, str_rep, x, y, raise_on_error=True, **eval_kwargs) + result = expressions.evaluate(op, str_rep, x, y, + raise_on_error=True, **eval_kwargs) except TypeError: xrav = x.ravel() if isinstance(y, (np.ndarray, pd.Series)): @@ -955,15 +991,16 @@ def na_op(x, y): yrav = yrav[mask] if np.prod(xrav.shape) and np.prod(yrav.shape): result[mask] = op(xrav, yrav) - elif hasattr(x,'size'): + elif hasattr(x, 'size'): result = np.empty(x.size, dtype=x.dtype) mask = notnull(xrav) xrav = xrav[mask] if np.prod(xrav.shape): result[mask] = op(xrav, y) else: - raise TypeError("cannot perform operation {op} between objects " - "of type {x} and {y}".format(op=name,x=type(x),y=type(y))) + raise TypeError("cannot perform operation {op} between " + "objects of type {x} and {y}".format( + op=name, x=type(x), y=type(y))) result, changed = com._maybe_upcast_putmask(result, ~mask, np.nan) result = result.reshape(x.shape) @@ -992,8 +1029,8 @@ def na_op(x, y): axis : {0, 1, 'index', 'columns'} For Series input, axis to match Series index on fill_value : None or float value, default None - Fill missing (NaN) values with this value. If both DataFrame locations are - missing, the result will be missing + Fill missing (NaN) values with this value. 
If both DataFrame + locations are missing, the result will be missing level : int or name Broadcast across a level, matching Index values on the passed MultiIndex level @@ -1015,7 +1052,7 @@ def na_op(x, y): @Appender(doc) def f(self, other, axis=default_axis, level=None, fill_value=None): - if isinstance(other, pd.DataFrame): # Another DataFrame + if isinstance(other, pd.DataFrame): # Another DataFrame return self._combine_frame(other, na_op, fill_value, level) elif isinstance(other, pd.Series): return self._combine_series(other, na_op, fill_value, axis, level) @@ -1038,8 +1075,8 @@ def f(self, other, axis=default_axis, level=None, fill_value=None): # casted = self._constructor_sliced(other, # index=self.columns) casted = pd.Series(other, index=self.columns) - return self._combine_series(casted, na_op, fill_value, - axis, level) + return self._combine_series(casted, na_op, fill_value, axis, + level) elif other.ndim == 2: # casted = self._constructor(other, index=self.index, # columns=self.columns) @@ -1060,7 +1097,6 @@ def f(self, other, axis=default_axis, level=None, fill_value=None): # Masker unused for now def _flex_comp_method_FRAME(op, name, str_rep=None, default_axis='columns', masker=False): - def na_op(x, y): try: result = op(x, y) @@ -1086,7 +1122,7 @@ def na_op(x, y): @Appender('Wrapper for flexible comparison methods %s' % name) def f(self, other, axis=default_axis, level=None): - if isinstance(other, pd.DataFrame): # Another DataFrame + if isinstance(other, pd.DataFrame): # Another DataFrame return self._flex_compare_frame(other, na_op, str_rep, level) elif isinstance(other, pd.Series): @@ -1130,7 +1166,7 @@ def f(self, other, axis=default_axis, level=None): def _comp_method_FRAME(func, name, str_rep, masker=False): @Appender('Wrapper for comparison method %s' % name) def f(self, other): - if isinstance(other, pd.DataFrame): # Another DataFrame + if isinstance(other, pd.DataFrame): # Another DataFrame return self._compare_frame(other, func, str_rep) 
elif isinstance(other, pd.Series): return self._combine_series_infer(other, func) @@ -1150,7 +1186,6 @@ def f(self, other): radd_func=_radd_compat, flex_comp_method=_flex_comp_method_FRAME) - frame_special_funcs = dict(arith_method=_arith_method_FRAME, radd_func=_radd_compat, comp_method=_comp_method_FRAME, @@ -1184,12 +1219,12 @@ def f(self, other): self._constructor.__name__) return self._combine(other, op) + f.__name__ = name return f def _comp_method_PANEL(op, name, str_rep=None, masker=False): - def na_op(x, y): try: result = expressions.evaluate(op, str_rep, x, y, diff --git a/pandas/core/panel.py b/pandas/core/panel.py index e0d9405a66b75..e0989574d79e1 100644 --- a/pandas/core/panel.py +++ b/pandas/core/panel.py @@ -13,8 +13,7 @@ import pandas.core.ops as ops from pandas import compat from pandas import lib -from pandas.compat import (map, zip, range, u, OrderedDict, - OrderedDefaultdict) +from pandas.compat import (map, zip, range, u, OrderedDict, OrderedDefaultdict) from pandas.core.categorical import Categorical from pandas.core.common import (PandasError, _try_sort, _default_index, _infer_dtype_from_scalar, is_list_like) @@ -36,7 +35,7 @@ klass="Panel", axes_single_arg="{0, 1, 2, 'items', 'major_axis', 'minor_axis'}") _shared_doc_kwargs['args_transpose'] = ("three positional arguments: each one" - "of\n %s" % + "of\n%s" % _shared_doc_kwargs['axes_single_arg']) @@ -171,7 +170,8 @@ def _init_data(self, data, copy, dtype, **kwargs): dtype, data = _infer_dtype_from_scalar(data) values = np.empty([len(x) for x in passed_axes], dtype=dtype) values.fill(data) - mgr = self._init_matrix(values, passed_axes, dtype=dtype, copy=False) + mgr = self._init_matrix(values, passed_axes, dtype=dtype, + copy=False) copy = False else: # pragma: no cover raise PandasError('Panel constructor not properly called!') @@ -184,8 +184,9 @@ def _init_dict(self, data, axes, dtype=None): # prefilter if haxis passed if haxis is not None: haxis = _ensure_index(haxis) - data = 
OrderedDict((k, v) for k, v - in compat.iteritems(data) if k in haxis) + data = OrderedDict((k, v) + for k, v in compat.iteritems(data) + if k in haxis) else: ks = list(data.keys()) if not isinstance(data, OrderedDict): @@ -197,8 +198,8 @@ def _init_dict(self, data, axes, dtype=None): data[k] = self._constructor_sliced(v) # extract axis for remaining axes & create the slicemap - raxes = [self._extract_axis(self, data, axis=i) - if a is None else a for i, a in enumerate(axes)] + raxes = [self._extract_axis(self, data, axis=i) if a is None else a + for i, a in enumerate(axes)] raxes_sm = self._extract_axes_for_slice(self, raxes) # shallow copy @@ -277,8 +278,7 @@ def _getitem_multilevel(self, key): if isinstance(loc, (slice, np.ndarray)): new_index = info[loc] result_index = maybe_droplevels(new_index, key) - slices = [loc] + [slice(None) for x in range( - self._AXIS_LEN - 1)] + slices = [loc] + [slice(None) for x in range(self._AXIS_LEN - 1)] new_values = self.values[slices] d = self._construct_axes_dict(self._AXIS_ORDERS[1:]) @@ -379,7 +379,8 @@ def _get_plane_axes(self, axis): (as compared with higher level planes), as we are returning a DataFrame axes """ - return [self._get_axis(axi) for axi in self._get_plane_axes_index(axis)] + return [self._get_axis(axi) + for axi in self._get_plane_axes_index(axis)] fromDict = from_dict @@ -400,8 +401,7 @@ def to_sparse(self, fill_value=None, kind='block'): frames = dict(compat.iteritems(self)) return SparsePanel(frames, items=self.items, major_axis=self.major_axis, - minor_axis=self.minor_axis, - default_kind=kind, + minor_axis=self.minor_axis, default_kind=kind, default_fill_value=fill_value) def to_excel(self, path, na_rep='', engine=None, **kwargs): @@ -544,8 +544,7 @@ def set_value(self, *args, **kwargs): result = self.reindex(**d) args = list(args) likely_dtype, args[-1] = _infer_dtype_from_scalar(args[-1]) - made_bigger = not np.array_equal( - axes[0], self._info_axis) + made_bigger = not np.array_equal(axes[0], 
self._info_axis) # how to make this logic simpler? if made_bigger: com._possibly_cast_item(result, args[0], likely_dtype) @@ -573,9 +572,9 @@ def __setitem__(self, key, value): mat = value.values elif isinstance(value, np.ndarray): if value.shape != shape[1:]: - raise ValueError( - 'shape of value must be {0}, shape of given object was ' - '{1}'.format(shape[1:], tuple(map(int, value.shape)))) + raise ValueError('shape of value must be {0}, shape of given ' + 'object was {1}'.format( + shape[1:], tuple(map(int, value.shape)))) mat = np.asarray(value) elif np.isscalar(value): dtype, value = _infer_dtype_from_scalar(value) @@ -624,24 +623,24 @@ def head(self, n=5): def tail(self, n=5): raise NotImplementedError - + def round(self, decimals=0): """ Round each value in Panel to a specified number of decimal places. - + .. versionadded:: 0.18.0 - + Parameters ---------- decimals : int - Number of decimal places to round to (default: 0). - If decimals is negative, it specifies the number of + Number of decimal places to round to (default: 0). + If decimals is negative, it specifies the number of positions to the left of the decimal point. 
- + Returns ------- Panel object - + See Also -------- numpy.around @@ -650,7 +649,6 @@ def round(self, decimals=0): result = np.apply_along_axis(np.round, 0, self.values) return self._wrap_result(result, axis=0) raise TypeError("decimals must be an integer") - def _needs_reindex_multi(self, axes, method, level): """ don't allow a multi reindex on Panel or above ndim """ @@ -708,9 +706,9 @@ def _combine(self, other, func, axis=0): elif np.isscalar(other): return self._combine_const(other, func) else: - raise NotImplementedError(str(type(other)) + - ' is not supported in combine operation with ' + - str(type(self))) + raise NotImplementedError("%s is not supported in combine " + "operation with %s" % + (str(type(other)), str(type(self)))) def _combine_const(self, other, func): new_values = func(self.values, other) @@ -768,8 +766,9 @@ def major_xs(self, key, copy=None): ----- major_xs is only for getting, not setting values. - MultiIndex Slicers is a generic way to get/set values on any level or levels - it is a superset of major_xs functionality, see :ref:`MultiIndex Slicers <advanced.mi_slicers>` + MultiIndex Slicers is a generic way to get/set values on any level or + levels and is a superset of major_xs functionality, see + :ref:`MultiIndex Slicers <advanced.mi_slicers>` """ if copy is not None: @@ -798,8 +797,9 @@ def minor_xs(self, key, copy=None): ----- minor_xs is only for getting, not setting values. - MultiIndex Slicers is a generic way to get/set values on any level or levels - it is a superset of minor_xs functionality, see :ref:`MultiIndex Slicers <advanced.mi_slicers>` + MultiIndex Slicers is a generic way to get/set values on any level or + levels and is a superset of minor_xs functionality, see + :ref:`MultiIndex Slicers <advanced.mi_slicers>` """ if copy is not None: @@ -828,8 +828,9 @@ def xs(self, key, axis=1, copy=None): ----- xs is only for getting, not setting values. 
- MultiIndex Slicers is a generic way to get/set values on any level or levels - it is a superset of xs functionality, see :ref:`MultiIndex Slicers <advanced.mi_slicers>` + MultiIndex Slicers is a generic way to get/set values on any level or + levels and is a superset of xs functionality, see + :ref:`MultiIndex Slicers <advanced.mi_slicers>` """ if copy is not None: @@ -860,7 +861,8 @@ def _ixs(self, i, axis=0): key = ax[i] # xs cannot handle a non-scalar key, so just reindex here - # if we have a multi-index and a single tuple, then its a reduction (GH 7516) + # if we have a multi-index and a single tuple, then its a reduction + # (GH 7516) if not (isinstance(ax, MultiIndex) and isinstance(key, tuple)): if is_list_like(key): indexer = {self._get_axis_name(axis): key} @@ -962,8 +964,8 @@ def construct_index_parts(idx, major=True): labels = major_labels + minor_labels names = major_names + minor_names - index = MultiIndex(levels=levels, labels=labels, - names=names, verify_integrity=False) + index = MultiIndex(levels=levels, labels=labels, names=names, + verify_integrity=False) return DataFrame(data, index=index, columns=self.items) @@ -979,9 +981,10 @@ def apply(self, func, axis='major', **kwargs): func : function Function to apply to each combination of 'other' axes e.g. 
if axis = 'items', the combination of major_axis/minor_axis - will each be passed as a Series; if axis = ('items', 'major'), DataFrames - of items & major axis will be passed - axis : {'items', 'minor', 'major'}, or {0, 1, 2}, or a tuple with two axes + will each be passed as a Series; if axis = ('items', 'major'), + DataFrames of items & major axis will be passed + axis : {'items', 'minor', 'major'}, or {0, 1, 2}, or a tuple with two + axes Additional keyword arguments will be passed as keywords to the function Examples @@ -1000,7 +1003,8 @@ def apply(self, func, axis='major', **kwargs): >>> p.apply(lambda x: x.sum(), axis='minor') - Return the shapes of each DataFrame over axis 2 (i.e the shapes of items x major), as a Series + Return the shapes of each DataFrame over axis 2 (i.e the shapes of + items x major), as a Series >>> p.apply(lambda x: x.shape, axis=(0,1)) @@ -1034,7 +1038,6 @@ def apply(self, func, axis='major', **kwargs): def _apply_1d(self, func, axis): axis_name = self._get_axis_name(axis) - ax = self._get_axis(axis) ndim = self.ndim values = self.values @@ -1119,8 +1122,8 @@ def _apply_2d(self, func, axis): def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None, filter_type=None, **kwds): if numeric_only: - raise NotImplementedError( - 'Panel.{0} does not implement numeric_only.'.format(name)) + raise NotImplementedError('Panel.{0} does not implement ' + 'numeric_only.'.format(name)) axis_name = self._get_axis_name(axis) axis_number = self._get_axis_number(axis_name) @@ -1153,7 +1156,7 @@ def _construct_return_type(self, result, axes=None): # same as self elif self.ndim == ndim: - """ return the construction dictionary for these axes """ + # return the construction dictionary for these axes if axes is None: return self._constructor(result) return self._constructor(result, **self._construct_axes_dict()) @@ -1178,19 +1181,19 @@ def _wrap_result(self, result, axis): @Appender(_shared_docs['reindex'] % _shared_doc_kwargs) def reindex(self, 
items=None, major_axis=None, minor_axis=None, **kwargs): - major_axis = (major_axis if major_axis is not None - else kwargs.pop('major', None)) - minor_axis = (minor_axis if minor_axis is not None - else kwargs.pop('minor', None)) + major_axis = (major_axis if major_axis is not None else + kwargs.pop('major', None)) + minor_axis = (minor_axis if minor_axis is not None else + kwargs.pop('minor', None)) return super(Panel, self).reindex(items=items, major_axis=major_axis, minor_axis=minor_axis, **kwargs) @Appender(_shared_docs['rename'] % _shared_doc_kwargs) def rename(self, items=None, major_axis=None, minor_axis=None, **kwargs): - major_axis = (major_axis if major_axis is not None - else kwargs.pop('major', None)) - minor_axis = (minor_axis if minor_axis is not None - else kwargs.pop('minor', None)) + major_axis = (major_axis if major_axis is not None else + kwargs.pop('major', None)) + minor_axis = (minor_axis if minor_axis is not None else + kwargs.pop('minor', None)) return super(Panel, self).rename(items=items, major_axis=major_axis, minor_axis=minor_axis, **kwargs) @@ -1209,10 +1212,9 @@ def transpose(self, *args, **kwargs): @Appender(_shared_docs['fillna'] % _shared_doc_kwargs) def fillna(self, value=None, method=None, axis=None, inplace=False, limit=None, downcast=None, **kwargs): - return super(Panel, self).fillna(value=value, method=method, - axis=axis, inplace=inplace, - limit=limit, downcast=downcast, - **kwargs) + return super(Panel, self).fillna(value=value, method=method, axis=axis, + inplace=inplace, limit=limit, + downcast=downcast, **kwargs) def count(self, axis='major'): """ @@ -1359,15 +1361,16 @@ def _get_join_index(self, other, how): @staticmethod def _extract_axes(self, data, axes, **kwargs): """ return a list of the axis indicies """ - return [self._extract_axis(self, data, axis=i, **kwargs) for i, a - in enumerate(axes)] + return [self._extract_axis(self, data, axis=i, **kwargs) + for i, a in enumerate(axes)] @staticmethod def 
_extract_axes_for_slice(self, axes): """ return the slice dictionary for these axes """ return dict([(self._AXIS_SLICEMAP[i], a) - for i, a in zip(self._AXIS_ORDERS[self._AXIS_LEN - - len(axes):], axes)]) + for i, a in zip( + self._AXIS_ORDERS[self._AXIS_LEN - len(axes):], + axes)]) @staticmethod def _prep_ndarray(self, values, copy=True): @@ -1519,7 +1522,8 @@ def na_op(x, y): Parameters ---------- - other : %s or %s""" % (cls._constructor_sliced.__name__, cls.__name__) + """ + other : %s or %s""" % (cls._constructor_sliced.__name__, + cls.__name__) + """ axis : {""" + ', '.join(cls._AXIS_ORDERS) + "}" + """ Axis to broadcast over @@ -1530,7 +1534,8 @@ def na_op(x, y): See also -------- """ + cls.__name__ + ".%s\n" - doc = _op_doc % (op_desc['desc'], op_name, equiv, op_desc['reverse']) + doc = _op_doc % (op_desc['desc'], op_name, equiv, + op_desc['reverse']) else: doc = _agg_doc % name @@ -1547,11 +1552,9 @@ def f(self, other, axis=0): flex_comp_method=ops._comp_method_PANEL) -Panel._setup_axes(axes=['items', 'major_axis', 'minor_axis'], - info_axis=0, - stat_axis=1, - aliases={'major': 'major_axis', - 'minor': 'minor_axis'}, +Panel._setup_axes(axes=['items', 'major_axis', 'minor_axis'], info_axis=0, + stat_axis=1, aliases={'major': 'major_axis', + 'minor': 'minor_axis'}, slicers={'major_axis': 'index', 'minor_axis': 'columns'}) @@ -1559,6 +1562,7 @@ def f(self, other, axis=0): Panel._add_aggregate_operations() Panel._add_numeric_operations() + # legacy class WidePanel(Panel): def __init__(self, *args, **kwargs): diff --git a/pandas/core/panel4d.py b/pandas/core/panel4d.py index 7fafbd0eaa2b5..33bd79195cc77 100644 --- a/pandas/core/panel4d.py +++ b/pandas/core/panel4d.py @@ -3,17 +3,19 @@ from pandas.core.panelnd import create_nd_panel_factory from pandas.core.panel import Panel -Panel4D = create_nd_panel_factory( - klass_name='Panel4D', - orders=['labels', 'items', 'major_axis', 'minor_axis'], - slices={'labels': 'labels', 'items': 'items', 'major_axis': 
'major_axis', - 'minor_axis': 'minor_axis'}, - slicer=Panel, - aliases={'major': 'major_axis', 'minor': 'minor_axis'}, - stat_axis=2, - ns=dict(__doc__=""" - Panel4D is a 4-Dimensional named container very much like a Panel, but - having 4 named dimensions. It is intended as a test bed for more +Panel4D = create_nd_panel_factory(klass_name='Panel4D', + orders=['labels', 'items', 'major_axis', + 'minor_axis'], + slices={'labels': 'labels', + 'items': 'items', + 'major_axis': 'major_axis', + 'minor_axis': 'minor_axis'}, + slicer=Panel, + aliases={'major': 'major_axis', + 'minor': 'minor_axis'}, stat_axis=2, + ns=dict(__doc__=""" + Panel4D is a 4-Dimensional named container very much like a Panel, but + having 4 named dimensions. It is intended as a test bed for more N-Dimensional named containers. Parameters @@ -29,15 +31,15 @@ Data type to force, otherwise infer copy : boolean, default False Copy data from inputs. Only affects DataFrame / 2d ndarray input - """) -) + """)) def panel4d_init(self, data=None, labels=None, items=None, major_axis=None, minor_axis=None, copy=False, dtype=None): self._init_data(data=data, labels=labels, items=items, - major_axis=major_axis, minor_axis=minor_axis, - copy=copy, dtype=dtype) + major_axis=major_axis, minor_axis=minor_axis, copy=copy, + dtype=dtype) + Panel4D.__init__ = panel4d_init diff --git a/pandas/core/panelnd.py b/pandas/core/panelnd.py index 35e6412efc760..04fbaab30b42e 100644 --- a/pandas/core/panelnd.py +++ b/pandas/core/panelnd.py @@ -8,21 +8,20 @@ def create_nd_panel_factory(klass_name, orders, slices, slicer, aliases=None, stat_axis=2, info_axis=0, ns=None): """ manufacture a n-d class: - Parameters - ---------- - klass_name : the klass name - orders : the names of the axes in order (highest to lowest) - slices : a dictionary that defines how the axes map to the slice axis - slicer : the class representing a slice of this panel - aliases : a dictionary defining aliases for various axes - default = { major : 
major_axis, minor : minor_axis } - stat_axis : the default statistic axis default = 2 - info_axis : the info axis - - Returns - ------- - a class object representing this panel - + Parameters + ---------- + klass_name : the klass name + orders : the names of the axes in order (highest to lowest) + slices : a dictionary that defines how the axes map to the slice axis + slicer : the class representing a slice of this panel + aliases : a dictionary defining aliases for various axes + default = { major : major_axis, minor : minor_axis } + stat_axis : the default statistic axis default = 2 + info_axis : the info axis + + Returns + ------- + a class object representing this panel """ # if slicer is a name, get the object @@ -35,7 +34,7 @@ def create_nd_panel_factory(klass_name, orders, slices, slicer, aliases=None, # build the klass ns = {} if not ns else ns - klass = type(klass_name, (slicer,), ns) + klass = type(klass_name, (slicer, ), ns) # setup the axes klass._setup_axes(axes=orders, info_axis=info_axis, stat_axis=stat_axis, @@ -46,19 +45,21 @@ def create_nd_panel_factory(klass_name, orders, slices, slicer, aliases=None, # define the methods #### def __init__(self, *args, **kwargs): if not (kwargs.get('data') or len(args)): - raise Exception( - "must supply at least a data argument to [%s]" % klass_name) + raise Exception("must supply at least a data argument to [%s]" % + klass_name) if 'copy' not in kwargs: kwargs['copy'] = False if 'dtype' not in kwargs: kwargs['dtype'] = None self._init_data(*args, **kwargs) + klass.__init__ = __init__ def _get_plane_axes_index(self, axis): """ return the sliced index for this object """ - axis_name = self._get_axis_name(axis) + # TODO: axis_name is not used, remove? 
+ axis_name = self._get_axis_name(axis) # noqa index = self._AXIS_ORDERS.index(axis) planes = [] @@ -68,12 +69,14 @@ def _get_plane_axes_index(self, axis): planes.extend(self._AXIS_ORDERS[index + 1:]) return planes + klass._get_plane_axes_index = _get_plane_axes_index def _combine(self, other, func, axis=0): if isinstance(other, klass): return self._combine_with_constructor(other, func) return super(klass, self)._combine(other, func, axis=axis) + klass._combine = _combine def _combine_with_constructor(self, other, func): @@ -92,13 +95,16 @@ def _combine_with_constructor(self, other, func): result_values = func(this.values, other.values) return self._constructor(result_values, **d) + klass._combine_with_constructor = _combine_with_constructor # set as NonImplemented operations which we don't support for f in ['to_frame', 'to_excel', 'to_sparse', 'groupby', 'join', 'filter', 'dropna', 'shift']: + def func(self, *args, **kwargs): raise NotImplementedError("this operation is not supported") + setattr(klass, f, func) # add the aggregate operations
https://api.github.com/repos/pandas-dev/pandas/pulls/12079
2016-01-18T13:44:59Z
2016-01-19T22:50:16Z
null
2016-01-19T22:50:25Z
CLN: fix all flake8 warnings in pandas/tseries
diff --git a/pandas/tseries/api.py b/pandas/tseries/api.py index 7c47bd9a232a9..9a07983b4d951 100644 --- a/pandas/tseries/api.py +++ b/pandas/tseries/api.py @@ -2,6 +2,7 @@ """ +# flake8: noqa from pandas.tseries.index import DatetimeIndex, date_range, bdate_range from pandas.tseries.frequencies import infer_freq diff --git a/pandas/tseries/base.py b/pandas/tseries/base.py index b7fddf37df0d0..4b8192edc56ce 100644 --- a/pandas/tseries/base.py +++ b/pandas/tseries/base.py @@ -17,29 +17,28 @@ import pandas.algos as _algos - class DatelikeOps(object): """ common ops for DatetimeIndex/PeriodIndex, but not TimedeltaIndex """ def strftime(self, date_format): - """ - Return an array of formatted strings specified by date_format, which - supports the same string format as the python standard library. Details - of the string format can be found in the `python string format doc - <https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior>`__ + return np.asarray(self.format(date_format=date_format)) + strftime.__doc__ = """ + Return an array of formatted strings specified by date_format, which + supports the same string format as the python standard library. Details + of the string format can be found in `python string format doc <{0}>`__ - .. versionadded:: 0.17.0 + .. versionadded:: 0.17.0 - Parameters - ---------- - date_format : str - date format string (e.g. "%Y-%m-%d") + Parameters + ---------- + date_format : str + date format string (e.g. 
"%Y-%m-%d") - Returns - ------- - ndarray of formatted strings - """ - return np.asarray(self.format(date_format=date_format)) + Returns + ------- + ndarray of formatted strings + """.format("https://docs.python.org/2/library/datetime.html" + "#strftime-and-strptime-behavior") class TimelikeOps(object): @@ -68,7 +67,7 @@ def _round(self, freq, rounder): unit = to_offset(freq).nanos # round the local times - if getattr(self,'tz',None) is not None: + if getattr(self, 'tz', None) is not None: values = self.tz_localize(None).asi8 else: values = self.asi8 @@ -81,7 +80,7 @@ def _round(self, freq, rounder): result = self._shallow_copy(result, **attribs) # reconvert to local tz - if getattr(self,'tz',None) is not None: + if getattr(self, 'tz', None) is not None: result = result.tz_localize(self.tz) return result @@ -181,7 +180,9 @@ def __getitem__(self, key): @property def freqstr(self): - """ return the frequency object as a string if its set, otherwise None """ + """ + Return the frequency object as a string if its set, otherwise None + """ if self.freq is None: return None return self.freq.freqstr @@ -291,7 +292,8 @@ def _maybe_mask_results(self, result, fill_value=None, convert=None): ------- result : ndarray with values replace by the fill_value - mask the result if needed, convert to the provided dtype if its not None + mask the result if needed, convert to the provided dtype if its not + None This is an internal routine """ @@ -408,7 +410,7 @@ def _format_attrs(self): freq = self.freqstr if freq is not None: freq = "'%s'" % freq - attrs.append(('freq',freq)) + attrs.append(('freq', freq)) return attrs @cache_readonly @@ -424,7 +426,8 @@ def resolution(self): def _convert_scalar_indexer(self, key, kind=None): """ - we don't allow integer or float indexing on datetime-like when using loc + we don't allow integer or float indexing on datetime-like when using + loc Parameters ---------- @@ -432,10 +435,12 @@ def _convert_scalar_indexer(self, key, kind=None): kind : 
optional, type of the indexing operation (loc/ix/iloc/None) """ - if kind in ['loc'] and lib.isscalar(key) and (is_integer(key) or is_float(key)): - self._invalid_indexer('index',key) + if (kind in ['loc'] and lib.isscalar(key) and + (is_integer(key) or is_float(key))): + self._invalid_indexer('index', key) - return super(DatetimeIndexOpsMixin, self)._convert_scalar_indexer(key, kind=kind) + return (super(DatetimeIndexOpsMixin, self) + ._convert_scalar_indexer(key, kind=kind)) def _add_datelike(self, other): raise AbstractMethodError(self) @@ -445,7 +450,10 @@ def _sub_datelike(self, other): @classmethod def _add_datetimelike_methods(cls): - """ add in the datetimelike methods (as we may have to override the superclass) """ + """ + add in the datetimelike methods (as we may have to override the + superclass) + """ def __add__(self, other): from pandas.core.index import Index @@ -454,14 +462,17 @@ def __add__(self, other): if isinstance(other, TimedeltaIndex): return self._add_delta(other) elif isinstance(self, TimedeltaIndex) and isinstance(other, Index): - if hasattr(other,'_add_delta'): + if hasattr(other, '_add_delta'): return other._add_delta(self) - raise TypeError("cannot add TimedeltaIndex and {typ}".format(typ=type(other))) + raise TypeError("cannot add TimedeltaIndex and {typ}" + .format(typ=type(other))) elif isinstance(other, Index): - warnings.warn("using '+' to provide set union with datetimelike Indexes is deprecated, " - "use .union()",FutureWarning, stacklevel=2) + warnings.warn("using '+' to provide set union with " + "datetimelike Indexes is deprecated, " + "use .union()", FutureWarning, stacklevel=2) return self.union(other) - elif isinstance(other, (DateOffset, timedelta, np.timedelta64, tslib.Timedelta)): + elif isinstance(other, (DateOffset, timedelta, np.timedelta64, + tslib.Timedelta)): return self._add_delta(other) elif com.is_integer(other): return self.shift(other) @@ -480,13 +491,16 @@ def __sub__(self, other): return 
self._add_delta(-other) elif isinstance(self, TimedeltaIndex) and isinstance(other, Index): if not isinstance(other, TimedeltaIndex): - raise TypeError("cannot subtract TimedeltaIndex and {typ}".format(typ=type(other))) + raise TypeError("cannot subtract TimedeltaIndex and {typ}" + .format(typ=type(other))) return self._add_delta(-other) elif isinstance(other, Index): - warnings.warn("using '-' to provide set differences with datetimelike Indexes is deprecated, " - "use .difference()",FutureWarning, stacklevel=2) + warnings.warn("using '-' to provide set differences with " + "datetimelike Indexes is deprecated, " + "use .difference()", FutureWarning, stacklevel=2) return self.difference(other) - elif isinstance(other, (DateOffset, timedelta, np.timedelta64, tslib.Timedelta)): + elif isinstance(other, (DateOffset, timedelta, np.timedelta64, + tslib.Timedelta)): return self._add_delta(-other) elif com.is_integer(other): return self.shift(-other) @@ -630,5 +644,5 @@ def summary(self, name=None): result += '\nFreq: %s' % self.freqstr # display as values, not quoted - result = result.replace("'","") + result = result.replace("'", "") return result diff --git a/pandas/tseries/common.py b/pandas/tseries/common.py index 345bea18f49c3..f9f90a9377f76 100644 --- a/pandas/tseries/common.py +++ b/pandas/tseries/common.py @@ -1,4 +1,6 @@ -## datetimelike delegation ## +""" +datetimelike delegation +""" import numpy as np from pandas.core.base import PandasDelegate, NoNewAttributesMixin @@ -8,13 +10,17 @@ from pandas.tseries.tdi import TimedeltaIndex from pandas import tslib from pandas.core.common import (_NS_DTYPE, _TD_DTYPE, is_period_arraylike, - is_datetime_arraylike, is_integer_dtype, is_list_like, + is_datetime_arraylike, is_integer_dtype, + is_list_like, is_datetime64_dtype, is_datetime64tz_dtype, is_timedelta64_dtype, is_categorical_dtype, get_dtype_kinds, take_1d) + def is_datetimelike(data): - """ return a boolean if we can be successfully converted to a datetimelike 
""" + """ + return a boolean if we can be successfully converted to a datetimelike + """ try: maybe_to_datetimelike(data) return True @@ -22,6 +28,7 @@ def is_datetimelike(data): pass return False + def maybe_to_datetimelike(data, copy=False): """ return a DelegatedClass of a Series that is datetimelike @@ -42,7 +49,8 @@ def maybe_to_datetimelike(data, copy=False): from pandas import Series if not isinstance(data, Series): - raise TypeError("cannot convert an object of type {0} to a datetimelike index".format(type(data))) + raise TypeError("cannot convert an object of type {0} to a " + "datetimelike index".format(type(data))) index = data.index name = data.name @@ -51,22 +59,28 @@ def maybe_to_datetimelike(data, copy=False): data = orig.values.categories if is_datetime64_dtype(data.dtype): - return DatetimeProperties(DatetimeIndex(data, copy=copy, freq='infer'), index, name=name, - orig=orig) + return DatetimeProperties(DatetimeIndex(data, copy=copy, freq='infer'), + index, name=name, orig=orig) elif is_datetime64tz_dtype(data.dtype): - return DatetimeProperties(DatetimeIndex(data, copy=copy, freq='infer', ambiguous='infer'), + return DatetimeProperties(DatetimeIndex(data, copy=copy, freq='infer', + ambiguous='infer'), index, data.name, orig=orig) elif is_timedelta64_dtype(data.dtype): - return TimedeltaProperties(TimedeltaIndex(data, copy=copy, freq='infer'), index, + return TimedeltaProperties(TimedeltaIndex(data, copy=copy, + freq='infer'), index, name=name, orig=orig) else: if is_period_arraylike(data): - return PeriodProperties(PeriodIndex(data, copy=copy), index, name=name, orig=orig) + return PeriodProperties(PeriodIndex(data, copy=copy), index, + name=name, orig=orig) if is_datetime_arraylike(data): - return DatetimeProperties(DatetimeIndex(data, copy=copy, freq='infer'), index, + return DatetimeProperties(DatetimeIndex(data, copy=copy, + freq='infer'), index, name=name, orig=orig) - raise TypeError("cannot convert an object of type {0} to a datetimelike 
index".format(type(data))) + raise TypeError("cannot convert an object of type {0} to a " + "datetimelike index".format(type(data))) + class Properties(PandasDelegate, NoNewAttributesMixin): @@ -80,7 +94,7 @@ def __init__(self, values, index, name, orig=None): def _delegate_property_get(self, name): from pandas import Series - result = getattr(self.values,name) + result = getattr(self.values, name) # maybe need to upcast (ints) if isinstance(result, np.ndarray): @@ -97,14 +111,16 @@ def _delegate_property_get(self, name): result = Series(result, index=self.index, name=self.name) # setting this object will show a SettingWithCopyWarning/Error - result.is_copy = ("modifications to a property of a datetimelike object are not " - "supported and are discarded. Change values on the original.") + result.is_copy = ("modifications to a property of a datetimelike " + "object are not supported and are discarded. " + "Change values on the original.") return result def _delegate_property_set(self, name, value, *args, **kwargs): - raise ValueError("modifications to a property of a datetimelike object are not " - "supported. Change values on the original.") + raise ValueError("modifications to a property of a datetimelike " + "object are not supported. Change values on the " + "original.") def _delegate_method(self, name, *args, **kwargs): from pandas import Series @@ -118,8 +134,9 @@ def _delegate_method(self, name, *args, **kwargs): result = Series(result, index=self.index, name=self.name) # setting this object will show a SettingWithCopyWarning/Error - result.is_copy = ("modifications to a method of a datetimelike object are not " - "supported and are discarded. Change values on the original.") + result.is_copy = ("modifications to a method of a datetimelike object " + "are not supported and are discarded. 
Change " + "values on the original.") return result @@ -205,9 +222,10 @@ class PeriodProperties(Properties): Raises TypeError if the Series does not contain datetimelike values. """ -PeriodProperties._add_delegate_accessors(delegate=PeriodIndex, - accessors=PeriodIndex._datetimelike_ops, - typ='property') +PeriodProperties._add_delegate_accessors( + delegate=PeriodIndex, + accessors=PeriodIndex._datetimelike_ops, + typ='property') PeriodProperties._add_delegate_accessors(delegate=PeriodIndex, accessors=["strftime"], typ='method') @@ -222,8 +240,8 @@ class CombinedDatetimelikeProperties(DatetimeProperties, TimedeltaProperties): def _concat_compat(to_concat, axis=0): """ - provide concatenation of an datetimelike array of arrays each of which is a single - M8[ns], datetimet64[ns, tz] or m8[ns] dtype + provide concatenation of a datetimelike array of arrays each of which is a + single M8[ns], datetime64[ns, tz] or m8[ns] dtype Parameters ---------- @@ -258,19 +276,21 @@ def convert_to_pydatetime(x, axis): if 'datetimetz' in typs: # we require ALL of the same tz for datetimetz - tzs = set([ getattr(x,'tz',None) for x in to_concat ])-set([None]) + tzs = set([getattr(x, 'tz', None) for x in to_concat]) - set([None]) if len(tzs) == 1: - return DatetimeIndex(np.concatenate([ x.tz_localize(None).asi8 for x in to_concat ]), tz=list(tzs)[0]) + return DatetimeIndex(np.concatenate([x.tz_localize(None).asi8 + for x in to_concat]), + tz=list(tzs)[0]) # single dtype if len(typs) == 1: - if not len(typs-set(['datetime'])): + if not len(typs - set(['datetime'])): new_values = np.concatenate([x.view(np.int64) for x in to_concat], axis=axis) return new_values.view(_NS_DTYPE) - elif not len(typs-set(['timedelta'])): + elif not len(typs - set(['timedelta'])): new_values = np.concatenate([x.view(np.int64) for x in to_concat], axis=axis) return new_values.view(_TD_DTYPE) @@ -278,4 +298,4 @@ def convert_to_pydatetime(x, axis): # need to coerce to object to_concat = 
[convert_to_pydatetime(x, axis) for x in to_concat] - return np.concatenate(to_concat,axis=axis) + return np.concatenate(to_concat, axis=axis) diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py index 9bcb6348f01cc..8ccfdfa05e9b5 100644 --- a/pandas/tseries/converter.py +++ b/pandas/tseries/converter.py @@ -23,9 +23,6 @@ from pandas.tseries.frequencies import FreqGroup from pandas.tseries.period import Period, PeriodIndex -from matplotlib.dates import (HOURS_PER_DAY, MINUTES_PER_DAY, - SEC_PER_DAY, MUSECONDS_PER_DAY) - def register(): units.registry[lib.Timestamp] = DatetimeConverter() @@ -81,7 +78,7 @@ def default_units(x, axis): return 'time' -### time formatter +# time formatter class TimeFormatter(Formatter): def __init__(self, locs): @@ -103,7 +100,7 @@ def __call__(self, x, pos=0): return pydt.time(h, m, s, us).strftime(fmt) -### Period Conversion +# Period Conversion class PeriodConverter(dates.DateConverter): @@ -112,7 +109,8 @@ class PeriodConverter(dates.DateConverter): def convert(values, units, axis): if not hasattr(axis, 'freq'): raise TypeError('Axis must have `freq` set to convert to Periods') - valid_types = (compat.string_types, datetime, Period, pydt.date, pydt.time) + valid_types = (compat.string_types, datetime, + Period, pydt.date, pydt.time) if (isinstance(values, valid_types) or com.is_integer(values) or com.is_float(values)): return get_datevalue(values, axis.freq) @@ -130,7 +128,8 @@ def convert(values, units, axis): def get_datevalue(date, freq): if isinstance(date, Period): return date.asfreq(freq).ordinal - elif isinstance(date, (compat.string_types, datetime, pydt.date, pydt.time)): + elif isinstance(date, (compat.string_types, datetime, + pydt.date, pydt.time)): return Period(date, freq).ordinal elif (com.is_integer(date) or com.is_float(date) or (isinstance(date, (np.ndarray, Index)) and (date.size == 1))): @@ -146,14 +145,15 @@ def _dt_to_float_ordinal(dt): preserving hours, minutes, seconds and microseconds. 
Return value is a :func:`float`. """ - if isinstance(dt, (np.ndarray, Index, Series)) and com.is_datetime64_ns_dtype(dt): + if (isinstance(dt, (np.ndarray, Index, Series)) and + com.is_datetime64_ns_dtype(dt)): base = dates.epoch2num(dt.asi8 / 1.0E9) else: base = dates.date2num(dt) return base -### Datetime Conversion +# Datetime Conversion class DatetimeConverter(dates.DateConverter): @staticmethod @@ -274,19 +274,20 @@ def __call__(self): if dmin > dmax: dmax, dmin = dmin, dmax - delta = relativedelta(dmax, dmin) - # We need to cap at the endpoints of valid datetime - try: - start = dmin - delta - except ValueError: - start = _from_ordinal(1.0) - try: - stop = dmax + delta - except ValueError: - # The magic number! - stop = _from_ordinal(3652059.9999999) + # TODO(wesm) unused? + # delta = relativedelta(dmax, dmin) + # try: + # start = dmin - delta + # except ValueError: + # start = _from_ordinal(1.0) + + # try: + # stop = dmax + delta + # except ValueError: + # # The magic number! + # stop = _from_ordinal(3652059.9999999) nmax, nmin = dates.date2num((dmax, dmin)) @@ -306,7 +307,7 @@ def __call__(self): raise RuntimeError(('MillisecondLocator estimated to generate %d ' 'ticks from %s to %s: exceeds Locator.MAXTICKS' '* 2 (%d) ') % - (estimate, dmin, dmax, self.MAXTICKS * 2)) + (estimate, dmin, dmax, self.MAXTICKS * 2)) freq = '%dL' % self._get_interval() tz = self.tz.tzname(None) @@ -318,7 +319,7 @@ def __call__(self): if len(all_dates) > 0: locs = self.raise_if_exceeds(dates.date2num(all_dates)) return locs - except Exception as e: # pragma: no cover + except Exception: # pragma: no cover pass lims = dates.date2num([dmin, dmax]) @@ -335,19 +336,21 @@ def autoscale(self): if dmin > dmax: dmax, dmin = dmin, dmax - delta = relativedelta(dmax, dmin) - # We need to cap at the endpoints of valid datetime - try: - start = dmin - delta - except ValueError: - start = _from_ordinal(1.0) - try: - stop = dmax + delta - except ValueError: - # The magic number! 
- stop = _from_ordinal(3652059.9999999) + # TODO(wesm): unused? + + # delta = relativedelta(dmax, dmin) + # try: + # start = dmin - delta + # except ValueError: + # start = _from_ordinal(1.0) + + # try: + # stop = dmax + delta + # except ValueError: + # # The magic number! + # stop = _from_ordinal(3652059.9999999) dmin, dmax = self.datalim_to_dt() @@ -377,11 +380,11 @@ def _from_ordinal(x, tz=None): return dt -### Fixed frequency dynamic tick locators and formatters +# Fixed frequency dynamic tick locators and formatters -##### ------------------------------------------------------------------------- -#---- --- Locators --- -##### ------------------------------------------------------------------------- +# ------------------------------------------------------------------------- +# --- Locators --- +# ------------------------------------------------------------------------- def _get_default_annual_spacing(nyears): @@ -660,7 +663,6 @@ def _second_finder(label_interval): minor_idx = year_start[(year_break % min_anndef == 0)] info_min[minor_idx] = True info_fmt[major_idx] = '%Y' - #............................................ return info @@ -671,7 +673,7 @@ def _monthly_finder(vmin, vmax, freq): vmin_orig = vmin (vmin, vmax) = (int(vmin), int(vmax)) span = vmax - vmin + 1 - #.............. + # Initialize the output info = np.zeros(span, dtype=[('val', int), ('maj', bool), ('min', bool), @@ -682,7 +684,7 @@ def _monthly_finder(vmin, vmax, freq): year_start = (dates_ % 12 == 0).nonzero()[0] info_maj = info['maj'] info_fmt = info['fmt'] - #.............. + if span <= 1.15 * periodsperyear: info_maj[year_start] = True info['min'] = True @@ -696,7 +698,7 @@ def _monthly_finder(vmin, vmax, freq): else: idx = 0 info_fmt[idx] = '%b\n%Y' - #.............. 
+ elif span <= 2.5 * periodsperyear: quarter_start = (dates_ % 3 == 0).nonzero() info_maj[year_start] = True @@ -706,7 +708,7 @@ def _monthly_finder(vmin, vmax, freq): info_fmt[quarter_start] = '%b' info_fmt[year_start] = '%b\n%Y' - #.............. + elif span <= 4 * periodsperyear: info_maj[year_start] = True info['min'] = True @@ -714,14 +716,14 @@ def _monthly_finder(vmin, vmax, freq): jan_or_jul = (dates_ % 12 == 0) | (dates_ % 12 == 6) info_fmt[jan_or_jul] = '%b' info_fmt[year_start] = '%b\n%Y' - #.............. + elif span <= 11 * periodsperyear: quarter_start = (dates_ % 3 == 0).nonzero() info_maj[year_start] = True info['min'][quarter_start] = True info_fmt[year_start] = '%Y' - #.................. + else: nyears = span / periodsperyear (min_anndef, maj_anndef) = _get_default_annual_spacing(nyears) @@ -731,7 +733,7 @@ def _monthly_finder(vmin, vmax, freq): info['min'][year_start[(years % min_anndef == 0)]] = True info_fmt[major_idx] = '%Y' - #.............. + return info @@ -740,7 +742,7 @@ def _quarterly_finder(vmin, vmax, freq): vmin_orig = vmin (vmin, vmax) = (int(vmin), int(vmax)) span = vmax - vmin + 1 - #............................................ + info = np.zeros(span, dtype=[('val', int), ('maj', bool), ('min', bool), ('fmt', '|S8')]) @@ -750,7 +752,7 @@ def _quarterly_finder(vmin, vmax, freq): info_maj = info['maj'] info_fmt = info['fmt'] year_start = (dates_ % 4 == 0).nonzero()[0] - #.............. + if span <= 3.5 * periodsperyear: info_maj[year_start] = True info['min'] = True @@ -763,12 +765,12 @@ def _quarterly_finder(vmin, vmax, freq): else: idx = 0 info_fmt[idx] = 'Q%q\n%F' - #.............. + elif span <= 11 * periodsperyear: info_maj[year_start] = True info['min'] = True info_fmt[year_start] = '%F' - #.............. 
+ else: years = dates_[year_start] // 4 + 1 nyears = span / periodsperyear @@ -777,27 +779,27 @@ def _quarterly_finder(vmin, vmax, freq): info_maj[major_idx] = True info['min'][year_start[(years % min_anndef == 0)]] = True info_fmt[major_idx] = '%F' - #.............. + return info def _annual_finder(vmin, vmax, freq): (vmin, vmax) = (int(vmin), int(vmax + 1)) span = vmax - vmin + 1 - #.............. + info = np.zeros(span, dtype=[('val', int), ('maj', bool), ('min', bool), ('fmt', '|S8')]) info['val'] = np.arange(vmin, vmax + 1) info['fmt'] = '' dates_ = info['val'] - #.............. + (min_anndef, maj_anndef) = _get_default_annual_spacing(span) major_idx = dates_ % maj_anndef == 0 info['maj'][major_idx] = True info['min'][(dates_ % min_anndef == 0)] = True info['fmt'][major_idx] = '%Y' - #.............. + return info @@ -896,9 +898,9 @@ def autoscale(self): vmax += 1 return nonsingular(vmin, vmax) -#####------------------------------------------------------------------------- -#---- --- Formatter --- -#####------------------------------------------------------------------------- +# ------------------------------------------------------------------------- +# --- Formatter --- +# ------------------------------------------------------------------------- class TimeSeries_DateFormatter(Formatter): diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py index fced24c706246..d83b0e3f250ca 100644 --- a/pandas/tseries/frequencies.py +++ b/pandas/tseries/frequencies.py @@ -1,4 +1,4 @@ -from datetime import datetime,timedelta +from datetime import timedelta from pandas.compat import range, long, zip from pandas import compat import re @@ -17,6 +17,7 @@ from pandas.tslib import Timedelta from pytz import AmbiguousTimeError + class FreqGroup(object): FR_ANN = 1000 FR_QTR = 2000 @@ -44,28 +45,29 @@ class Resolution(object): RESO_DAY = period.D_RESO _reso_str_map = { - RESO_US: 'microsecond', - RESO_MS: 'millisecond', - RESO_SEC: 'second', - RESO_MIN: 
'minute', - RESO_HR: 'hour', - RESO_DAY: 'day'} + RESO_US: 'microsecond', + RESO_MS: 'millisecond', + RESO_SEC: 'second', + RESO_MIN: 'minute', + RESO_HR: 'hour', + RESO_DAY: 'day'} _str_reso_map = dict([(v, k) for k, v in compat.iteritems(_reso_str_map)]) _reso_freq_map = { - 'year': 'A', - 'quarter': 'Q', - 'month': 'M', - 'day': 'D', - 'hour': 'H', - 'minute': 'T', - 'second': 'S', - 'millisecond': 'L', - 'microsecond': 'U', - 'nanosecond': 'N'} - - _freq_reso_map = dict([(v, k) for k, v in compat.iteritems(_reso_freq_map)]) + 'year': 'A', + 'quarter': 'Q', + 'month': 'M', + 'day': 'D', + 'hour': 'H', + 'minute': 'T', + 'second': 'S', + 'millisecond': 'L', + 'microsecond': 'U', + 'nanosecond': 'N'} + + _freq_reso_map = dict([(v, k) + for k, v in compat.iteritems(_reso_freq_map)]) @classmethod def get_str(cls, reso): @@ -277,11 +279,12 @@ def _get_freq_str(base, mult=1): return str(mult) + code -#---------------------------------------------------------------------- +# --------------------------------------------------------------------- # Offset names ("time rules") and related functions -from pandas.tseries.offsets import (Nano, Micro, Milli, Second, Minute, Hour, +from pandas.tseries.offsets import (Nano, Micro, Milli, Second, # noqa + Minute, Hour, Day, BDay, CDay, Week, MonthBegin, MonthEnd, BMonthBegin, BMonthEnd, QuarterBegin, QuarterEnd, BQuarterBegin, @@ -384,7 +387,7 @@ def get_period_alias(offset_str): 'us': 'U' } -#TODO: Can this be killed? +# TODO: Can this be killed? 
for _i, _weekday in enumerate(['MON', 'TUE', 'WED', 'THU', 'FRI']): for _iweek in range(4): _name = 'WOM-%d%s' % (_iweek + 1, _weekday) @@ -404,6 +407,7 @@ def get_period_alias(offset_str): 'microseconds': Micro(1), 'nanoseconds': Nano(1)} + def to_offset(freqstr): """ Return DateOffset object from string representation or @@ -548,7 +552,8 @@ def get_offset(name): try: split = name.split('-') klass = prefix_mapping[split[0]] - # handles case where there's no suffix (and will TypeError if too many '-') + # handles case where there's no suffix (and will TypeError if too + # many '-') offset = klass._from_name(*split[1:]) except (ValueError, TypeError, KeyError): # bad prefix or suffix @@ -586,6 +591,7 @@ def get_legacy_offset_name(offset): name = offset.name return _legacy_reverse_map.get(name, name) + def get_standard_freq(freq): """ Return the standardized frequency string @@ -599,7 +605,7 @@ def get_standard_freq(freq): code, stride = get_freq_code(freq) return _get_freq_str(code, stride) -#---------------------------------------------------------------------- +# --------------------------------------------------------------------- # Period codes # period frequency constants corresponding to scikits timeseries @@ -815,7 +821,7 @@ def infer_freq(index, warn=True): Parameters ---------- index : DatetimeIndex or TimedeltaIndex - if passed a Series will use the values of the series (NOT THE INDEX) + if passed a Series will use the values of the series (NOT THE INDEX) warn : boolean, default True Returns @@ -829,8 +835,11 @@ def infer_freq(index, warn=True): if isinstance(index, com.ABCSeries): values = index._values - if not (com.is_datetime64_dtype(values) or com.is_timedelta64_dtype(values) or values.dtype == object): - raise TypeError("cannot infer freq from a non-convertible dtype on a Series of {0}".format(index.dtype)) + if not (com.is_datetime64_dtype(values) or + com.is_timedelta64_dtype(values) or + values.dtype == object): + raise TypeError("cannot infer 
freq from a non-convertible " + "dtype on a Series of {0}".format(index.dtype)) index = values if com.is_period_arraylike(index): @@ -842,7 +851,8 @@ def infer_freq(index, warn=True): if isinstance(index, pd.Index) and not isinstance(index, pd.DatetimeIndex): if isinstance(index, (pd.Int64Index, pd.Float64Index)): - raise TypeError("cannot infer freq from a non-convertible index type {0}".format(type(index))) + raise TypeError("cannot infer freq from a non-convertible index " + "type {0}".format(type(index))) index = index.values if not isinstance(index, pd.DatetimeIndex): @@ -873,7 +883,7 @@ def __init__(self, index, warn=True): # This moves the values, which are implicitly in UTC, to the # the timezone so they are in local time - if hasattr(index,'tz'): + if hasattr(index, 'tz'): if index.tz is not None: self.values = tslib.tz_convert(self.values, 'UTC', index.tz) @@ -1069,10 +1079,10 @@ def _get_monthly_rule(self): 'ce': 'M', 'be': 'BM'}.get(pos_check) def _get_wom_rule(self): -# wdiffs = unique(np.diff(self.index.week)) - #We also need -47, -49, -48 to catch index spanning year boundary -# if not lib.ismember(wdiffs, set([4, 5, -47, -49, -48])).all(): -# return None + # wdiffs = unique(np.diff(self.index.week)) + # We also need -47, -49, -48 to catch index spanning year boundary + # if not lib.ismember(wdiffs, set([4, 5, -47, -49, -48])).all(): + # return None weekdays = unique(self.index.weekday) if len(weekdays) > 1: @@ -1092,6 +1102,7 @@ def _get_wom_rule(self): import pandas.core.algorithms as algos + class _TimedeltaFrequencyInferer(_FrequencyInferer): def _infer_daily_rule(self): @@ -1262,5 +1273,6 @@ def _is_weekly(rule): _month_aliases = tslib._MONTH_ALIASES _weekday_rule_aliases = dict((k, v) for k, v in enumerate(DAYS)) + def _is_multiple(us, mult): return us % mult == 0 diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py index 813354b2d0f86..31e40c6bcbb2c 100644 --- a/pandas/tseries/holiday.py +++ b/pandas/tseries/holiday.py @@ -3,7 
+3,7 @@ from pandas import DateOffset, DatetimeIndex, Series, Timestamp from pandas.compat import add_metaclass from datetime import datetime, timedelta -from dateutil.relativedelta import MO, TU, WE, TH, FR, SA, SU +from dateutil.relativedelta import MO, TU, WE, TH, FR, SA, SU # noqa from pandas.tseries.offsets import Easter, Day import numpy as np @@ -19,6 +19,7 @@ def next_monday(dt): return dt + timedelta(1) return dt + def next_monday_or_tuesday(dt): """ For second holiday of two adjacent ones! @@ -33,6 +34,7 @@ def next_monday_or_tuesday(dt): return dt + timedelta(1) return dt + def previous_friday(dt): """ If holiday falls on Saturday or Sunday, use previous Friday instead. @@ -43,6 +45,7 @@ def previous_friday(dt): return dt - timedelta(2) return dt + def sunday_to_monday(dt): """ If holiday falls on Sunday, use day thereafter (Monday) instead. @@ -119,6 +122,7 @@ class Holiday(object): Class that defines a holiday with start/end dates and rules for observance. """ + def __init__(self, name, year=None, month=None, day=None, offset=None, observance=None, start_date=None, end_date=None, days_of_week=None): @@ -159,8 +163,10 @@ class from pandas.tseries.offsets self.month = month self.day = day self.offset = offset - self.start_date = Timestamp(start_date) if start_date is not None else start_date - self.end_date = Timestamp(end_date) if end_date is not None else end_date + self.start_date = Timestamp( + start_date) if start_date is not None else start_date + self.end_date = Timestamp( + end_date) if end_date is not None else end_date self.observance = observance assert (days_of_week is None or type(days_of_week) == tuple) self.days_of_week = days_of_week @@ -212,16 +218,17 @@ def dates(self, start_date, end_date, return_name=False): self.days_of_week)] if self.start_date is not None: - filter_start_date = max(self.start_date.tz_localize(filter_start_date.tz), filter_start_date) + filter_start_date = max(self.start_date.tz_localize( + filter_start_date.tz), 
filter_start_date) if self.end_date is not None: - filter_end_date = min(self.end_date.tz_localize(filter_end_date.tz), filter_end_date) + filter_end_date = min(self.end_date.tz_localize( + filter_end_date.tz), filter_end_date) holiday_dates = holiday_dates[(holiday_dates >= filter_start_date) & (holiday_dates <= filter_end_date)] if return_name: return Series(self.name, index=holiday_dates) return holiday_dates - def _reference_dates(self, start_date, end_date): """ Get reference dates for the holiday. @@ -239,12 +246,13 @@ def _reference_dates(self, start_date, end_date): year_offset = DateOffset(years=1) reference_start_date = Timestamp( - datetime(start_date.year-1, self.month, self.day)) + datetime(start_date.year - 1, self.month, self.day)) reference_end_date = Timestamp( - datetime(end_date.year+1, self.month, self.day)) + datetime(end_date.year + 1, self.month, self.day)) # Don't process unnecessary holidays - dates = DatetimeIndex(start=reference_start_date, end=reference_end_date, + dates = DatetimeIndex(start=reference_start_date, + end=reference_end_date, freq=year_offset, tz=start_date.tz) return dates @@ -279,6 +287,8 @@ def _apply_rule(self, dates): return dates holiday_calendars = {} + + def register(cls): try: name = cls.name @@ -286,6 +296,7 @@ def register(cls): name = cls.__name__ holiday_calendars[name] = cls + def get_calendar(name): """ Return an instance of a calendar based on its name. 
@@ -297,12 +308,16 @@ def get_calendar(name): """ return holiday_calendars[name]() + class HolidayCalendarMetaClass(type): + def __new__(cls, clsname, bases, attrs): - calendar_class = super(HolidayCalendarMetaClass, cls).__new__(cls, clsname, bases, attrs) + calendar_class = super(HolidayCalendarMetaClass, cls).__new__( + cls, clsname, bases, attrs) register(calendar_class) return calendar_class + @add_metaclass(HolidayCalendarMetaClass) class AbstractHolidayCalendar(object): """ @@ -371,8 +386,10 @@ def holidays(self, start=None, end=None, return_name=False): end = Timestamp(end) holidays = None - # If we don't have a cache or the dates are outside the prior cache, we get them again - if self._cache is None or start < self._cache[0] or end > self._cache[1]: + # If we don't have a cache or the dates are outside the prior cache, we + # get them again + if (self._cache is None or start < self._cache[0] or + end > self._cache[1]): for rule in self.rules: rule_holidays = rule.dates(start, end, return_name=True) @@ -400,8 +417,10 @@ def merge_class(base, other): Parameters ---------- - base : AbstractHolidayCalendar instance/subclass or array of Holiday objects - other : AbstractHolidayCalendar instance/subclass or array of Holiday objects + base : AbstractHolidayCalendar + instance/subclass or array of Holiday objects + other : AbstractHolidayCalendar + instance/subclass or array of Holiday objects """ try: other = other.rules @@ -450,34 +469,39 @@ def merge(self, other, inplace=False): offset=DateOffset(weekday=MO(2))) USThanksgivingDay = Holiday('Thanksgiving', month=11, day=1, offset=DateOffset(weekday=TH(4))) -USMartinLutherKingJr = Holiday('Dr. Martin Luther King Jr.', start_date=datetime(1986,1,1), month=1, day=1, +USMartinLutherKingJr = Holiday('Dr. 
Martin Luther King Jr.', + start_date=datetime(1986, 1, 1), month=1, day=1, offset=DateOffset(weekday=MO(3))) USPresidentsDay = Holiday('President''s Day', month=2, day=1, offset=DateOffset(weekday=MO(3))) GoodFriday = Holiday("Good Friday", month=1, day=1, offset=[Easter(), Day(-2)]) -EasterMonday = Holiday("Easter Monday", month=1, day=1, offset=[Easter(), Day(1)]) +EasterMonday = Holiday("Easter Monday", month=1, day=1, + offset=[Easter(), Day(1)]) class USFederalHolidayCalendar(AbstractHolidayCalendar): """ - US Federal Government Holiday Calendar based on rules specified - by: https://www.opm.gov/policy-data-oversight/snow-dismissal-procedures/federal-holidays/ + US Federal Government Holiday Calendar based on rules specified by: + https://www.opm.gov/policy-data-oversight/ + snow-dismissal-procedures/federal-holidays/ """ rules = [ - Holiday('New Years Day', month=1, day=1, observance=nearest_workday), + Holiday('New Years Day', month=1, day=1, observance=nearest_workday), USMartinLutherKingJr, USPresidentsDay, USMemorialDay, - Holiday('July 4th', month=7, day=4, observance=nearest_workday), + Holiday('July 4th', month=7, day=4, observance=nearest_workday), USLaborDay, USColumbusDay, Holiday('Veterans Day', month=11, day=11, observance=nearest_workday), USThanksgivingDay, Holiday('Christmas', month=12, day=25, observance=nearest_workday) - ] + ] + -def HolidayCalendarFactory(name, base, other, base_class=AbstractHolidayCalendar): +def HolidayCalendarFactory(name, base, other, + base_class=AbstractHolidayCalendar): rules = AbstractHolidayCalendar.merge_class(base, other) calendar_class = type(name, (base_class,), {"rules": rules, "name": name}) return calendar_class diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py index 0dae564352967..bb4f878157595 100644 --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -52,14 +52,18 @@ def f(self): values = self._local_timestamps() if field in ['is_month_start', 'is_month_end', - 
'is_quarter_start', 'is_quarter_end', - 'is_year_start', 'is_year_end']: - month_kw = self.freq.kwds.get('startingMonth', self.freq.kwds.get('month', 12)) if self.freq else 12 - result = tslib.get_start_end_field(values, field, self.freqstr, month_kw) + 'is_quarter_start', 'is_quarter_end', + 'is_year_start', 'is_year_end']: + month_kw = (self.freq.kwds.get('startingMonth', + self.freq.kwds.get('month', 12)) + if self.freq else 12) + + result = tslib.get_start_end_field( + values, field, self.freqstr, month_kw) else: result = tslib.get_date_field(values, field) - return self._maybe_mask_results(result,convert='float64') + return self._maybe_mask_results(result, convert='float64') f.__name__ = name f.__doc__ = docstring @@ -70,9 +74,11 @@ def _dt_index_cmp(opname, nat_result=False): """ Wrap comparison operations to convert datetime-like to datetime64 """ + def wrapper(self, other): func = getattr(super(DatetimeIndex, self), opname) - if isinstance(other, datetime) or isinstance(other, compat.string_types): + if (isinstance(other, datetime) or + isinstance(other, compat.string_types)): other = _to_m8(other, tz=self.tz) result = func(other) if com.isnull(other): @@ -118,14 +124,16 @@ def _new_DatetimeIndex(cls, d): # data are already in UTC # so need to localize - tz = d.pop('tz',None) + tz = d.pop('tz', None) result = cls.__new__(cls, verify_integrity=False, **d) if tz is not None: result = result.tz_localize('UTC').tz_convert(tz) return result -class DatetimeIndex(DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin, Int64Index): + +class DatetimeIndex(DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin, + Int64Index): """ Immutable ndarray of datetime64 data, represented internally as int64, and which can be boxed to Timestamp objects that are subclasses of datetime and @@ -153,9 +161,11 @@ class DatetimeIndex(DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin, Int64Index) the 'left', 'right', or both sides (None) tz : pytz.timezone or dateutil.tz.tzfile ambiguous : 'infer', 
bool-ndarray, 'NaT', default 'raise' - - 'infer' will attempt to infer fall dst-transition hours based on order - - bool-ndarray where True signifies a DST time, False signifies - a non-DST time (note that this flag is only applicable for ambiguous times) + - 'infer' will attempt to infer fall dst-transition hours based on + order + - bool-ndarray where True signifies a DST time, False signifies a + non-DST time (note that this flag is only applicable for ambiguous + times) - 'NaT' will return NaT where there are ambiguous times - 'raise' will raise an AmbiguousTimeError if there are ambiguous times infer_dst : boolean, default False (DEPRECATED) @@ -168,7 +178,8 @@ class DatetimeIndex(DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin, Int64Index) _join_precedence = 10 def _join_i8_wrapper(joinf, **kwargs): - return DatetimeIndexOpsMixin._join_i8_wrapper(joinf, dtype='M8[ns]', **kwargs) + return DatetimeIndexOpsMixin._join_i8_wrapper(joinf, dtype='M8[ns]', + **kwargs) _inner_indexer = _join_i8_wrapper(_algos.inner_join_indexer_int64) _outer_indexer = _join_i8_wrapper(_algos.outer_join_indexer_int64) @@ -190,11 +201,13 @@ def _join_i8_wrapper(joinf, **kwargs): offset = None _comparables = ['name', 'freqstr', 'tz'] _attributes = ['name', 'freq', 'tz'] - _datetimelike_ops = ['year','month','day','hour','minute','second', - 'weekofyear','week','dayofweek','weekday','dayofyear','quarter', 'days_in_month', 'daysinmonth', - 'date','time','microsecond','nanosecond','is_month_start','is_month_end', - 'is_quarter_start','is_quarter_end','is_year_start','is_year_end', - 'tz','freq'] + _datetimelike_ops = ['year', 'month', 'day', 'hour', 'minute', 'second', + 'weekofyear', 'week', 'dayofweek', 'weekday', + 'dayofyear', 'quarter', 'days_in_month', + 'daysinmonth', 'date', 'time', 'microsecond', + 'nanosecond', 'is_month_start', 'is_month_end', + 'is_quarter_start', 'is_quarter_end', 'is_year_start', + 'is_year_end', 'tz', 'freq'] _is_numeric_dtype = False _infer_as_myclass = True 
@@ -300,14 +313,14 @@ def __new__(cls, data=None, values = data if lib.is_string_array(values): - subarr = tslib.parse_str_array_to_datetime(values, freq=freq, dayfirst=dayfirst, - yearfirst=yearfirst) - + subarr = tslib.parse_str_array_to_datetime( + values, freq=freq, dayfirst=dayfirst, yearfirst=yearfirst) else: try: subarr = tools.to_datetime(data, box=False) - # make sure that we have a index/ndarray like (and not a Series) + # make sure that we have an index/ndarray like (and not a + # Series) if isinstance(subarr, ABCSeries): subarr = subarr._values if subarr.dtype == np.object_: @@ -318,7 +331,8 @@ def __new__(cls, data=None, subarr = tools._to_datetime(data, box=False, utc=True) # we may not have been able to convert - if not (is_datetimetz(subarr) or np.issubdtype(subarr.dtype, np.datetime64)): + if not (is_datetimetz(subarr) or + np.issubdtype(subarr.dtype, np.datetime64)): raise ValueError('Unable to convert %s to datetime dtype' % str(data)) @@ -354,10 +368,13 @@ def __new__(cls, data=None, if freq is not None and not freq_infer: inferred = subarr.inferred_freq if inferred != freq.freqstr: - on_freq = cls._generate(subarr[0], None, len(subarr), None, freq, tz=tz, ambiguous=ambiguous) + on_freq = cls._generate(subarr[0], None, len( + subarr), None, freq, tz=tz, ambiguous=ambiguous) if not np.array_equal(subarr.asi8, on_freq.asi8): - raise ValueError('Inferred frequency {0} from passed dates does not ' - 'conform to passed frequency {1}'.format(inferred, freq.freqstr)) + raise ValueError('Inferred frequency {0} from passed ' + 'dates does not conform to passed ' + 'frequency {1}' + .format(inferred, freq.freqstr)) if freq_infer: inferred = subarr.inferred_freq @@ -402,7 +419,7 @@ def _generate(cls, start, end, periods, name, offset, inferred_tz = tools._infer_tzinfo(start, end) except: raise TypeError('Start and end cannot both be tz-aware with ' - 'different timezones') + 'different timezones') inferred_tz = tslib.maybe_get_tz(inferred_tz) @@ -486,7 
+503,7 @@ def _generate(cls, start, end, periods, name, offset, index = index.view(_NS_DTYPE) index = cls._simple_new(index, name=name, freq=offset, tz=tz) - if not left_closed and len(index) and index[0] == start: + if not left_closed and len(index) and index[0] == start: index = index[1:] if not right_closed and len(index) and index[-1] == end: index = index[:-1] @@ -519,21 +536,24 @@ def _local_timestamps(self): return result.take(reverse) @classmethod - def _simple_new(cls, values, name=None, freq=None, tz=None, dtype=None, **kwargs): + def _simple_new(cls, values, name=None, freq=None, tz=None, + dtype=None, **kwargs): """ we require the we have a dtype compat for the values if we are passed a non-dtype compat, then coerce using the constructor """ - if not getattr(values,'dtype',None): + if not getattr(values, 'dtype', None): # empty, but with dtype compat if values is None: values = np.empty(0, dtype=_NS_DTYPE) - return cls(values, name=name, freq=freq, tz=tz, dtype=dtype, **kwargs) - values = np.array(values,copy=False) + return cls(values, name=name, freq=freq, tz=tz, + dtype=dtype, **kwargs) + values = np.array(values, copy=False) if is_object_dtype(values): - return cls(values, name=name, freq=freq, tz=tz, dtype=dtype, **kwargs).values + return cls(values, name=name, freq=freq, tz=tz, + dtype=dtype, **kwargs).values elif not is_datetime64_dtype(values): values = com._ensure_int64(values).view(_NS_DTYPE) @@ -571,18 +591,20 @@ def _has_same_tz(self, other): def _cached_range(cls, start=None, end=None, periods=None, offset=None, name=None): if start is None and end is None: - # I somewhat believe this should never be raised externally and therefore - # should be a `PandasError` but whatever... + # I somewhat believe this should never be raised externally and + # therefore should be a `PandasError` but whatever... 
raise TypeError('Must specify either start or end.') if start is not None: start = Timestamp(start) if end is not None: end = Timestamp(end) if (start is None or end is None) and periods is None: - raise TypeError('Must either specify period or provide both start and end.') + raise TypeError( + 'Must either specify period or provide both start and end.') if offset is None: - # This can't happen with external-facing code, therefore PandasError + # This can't happen with external-facing code, therefore + # PandasError raise TypeError('Must provide offset.') drc = _daterange_cache @@ -700,12 +722,13 @@ def _sub_datelike(self, other): # require tz compat if not self._has_same_tz(other): - raise TypeError("Timestamp subtraction must have the same timezones or no timezones") + raise TypeError("Timestamp subtraction must have the same " + "timezones or no timezones") i8 = self.asi8 result = i8 - other.value - result = self._maybe_mask_results(result,fill_value=tslib.iNaT) - return TimedeltaIndex(result,name=self.name,copy=False) + result = self._maybe_mask_results(result, fill_value=tslib.iNaT) + return TimedeltaIndex(result, name=self.name, copy=False) def _maybe_update_attributes(self, attrs): """ Update Index attributes (e.g. freq) depending on op """ @@ -749,8 +772,8 @@ def _add_offset(self, offset): return result except NotImplementedError: - warnings.warn("Non-vectorized DateOffset being applied to Series or DatetimeIndex", - PerformanceWarning) + warnings.warn("Non-vectorized DateOffset being applied to Series " + "or DatetimeIndex", PerformanceWarning) return self.astype('O') + offset def _format_native_types(self, na_rep=u('NaT'), @@ -795,20 +818,20 @@ def to_series(self, keep_tz=False): Parameters ---------- keep_tz : optional, defaults False. - return the data keeping the timezone. + return the data keeping the timezone. - If keep_tz is True: + If keep_tz is True: - If the timezone is not set, the resulting - Series will have a datetime64[ns] dtype. 
+ If the timezone is not set, the resulting + Series will have a datetime64[ns] dtype. - Otherwise the Series will have an datetime64[ns, tz] dtype; the - tz will be preserved. + Otherwise the Series will have an datetime64[ns, tz] dtype; the + tz will be preserved. - If keep_tz is False: + If keep_tz is False: - Series will have a datetime64[ns] dtype. TZ aware - objects will have the tz removed. + Series will have a datetime64[ns] dtype. TZ aware + objects will have the tz removed. Returns ------- @@ -850,7 +873,8 @@ def to_period(self, freq=None): freq = self.freqstr or self.inferred_freq if freq is None: - msg = "You must pass a freq argument as current index has none." + msg = ("You must pass a freq argument as " + "current index has none.") raise ValueError(msg) freq = get_period_alias(freq) @@ -931,7 +955,8 @@ def to_perioddelta(self, freq): ------- y : TimedeltaIndex """ - return to_timedelta(self.asi8 - self.to_period(freq).to_timestamp().asi8) + return to_timedelta(self.asi8 - self.to_period(freq) + .to_timestamp().asi8) def union_many(self, others): """ @@ -1116,9 +1141,10 @@ def __iter__(self): chunksize = 10000 chunks = int(l / chunksize) + 1 for i in range(chunks): - start_i = i*chunksize - end_i = min((i+1)*chunksize,l) - converted = tslib.ints_to_pydatetime(data[start_i:end_i], tz=self.tz, offset=self.offset, box=True) + start_i = i * chunksize + end_i = min((i + 1) * chunksize, l) + converted = tslib.ints_to_pydatetime( + data[start_i:end_i], tz=self.tz, offset=self.offset, box=True) for v in converted: yield v @@ -1199,23 +1225,28 @@ def _parsed_string_to_bounds(self, reso, parsed): lower, upper: pd.Timestamp """ - is_monotonic = self.is_monotonic if reso == 'year': return (Timestamp(datetime(parsed.year, 1, 1), tz=self.tz), - Timestamp(datetime(parsed.year, 12, 31, 23, 59, 59, 999999), tz=self.tz)) + Timestamp(datetime(parsed.year, 12, 31, 23, + 59, 59, 999999), tz=self.tz)) elif reso == 'month': d = tslib.monthrange(parsed.year, parsed.month)[1] 
- return (Timestamp(datetime(parsed.year, parsed.month, 1), tz=self.tz), - Timestamp(datetime(parsed.year, parsed.month, d, 23, 59, 59, 999999), tz=self.tz)) + return (Timestamp(datetime(parsed.year, parsed.month, 1), + tz=self.tz), + Timestamp(datetime(parsed.year, parsed.month, d, 23, + 59, 59, 999999), tz=self.tz)) elif reso == 'quarter': qe = (((parsed.month - 1) + 2) % 12) + 1 # two months ahead d = tslib.monthrange(parsed.year, qe)[1] # at end of month - return (Timestamp(datetime(parsed.year, parsed.month, 1), tz=self.tz), - Timestamp(datetime(parsed.year, qe, d, 23, 59, 59, 999999), tz=self.tz)) + return (Timestamp(datetime(parsed.year, parsed.month, 1), + tz=self.tz), + Timestamp(datetime(parsed.year, qe, d, 23, 59, + 59, 999999), tz=self.tz)) elif reso == 'day': st = datetime(parsed.year, parsed.month, parsed.day) return (Timestamp(st, tz=self.tz), - Timestamp(Timestamp(st + offsets.Day(), tz=self.tz).value - 1)) + Timestamp(Timestamp(st + offsets.Day(), + tz=self.tz).value - 1)) elif reso == 'hour': st = datetime(parsed.year, parsed.month, parsed.day, hour=parsed.hour) @@ -1230,7 +1261,8 @@ def _parsed_string_to_bounds(self, reso, parsed): tz=self.tz).value - 1)) elif reso == 'second': st = datetime(parsed.year, parsed.month, parsed.day, - hour=parsed.hour, minute=parsed.minute, second=parsed.second) + hour=parsed.hour, minute=parsed.minute, + second=parsed.second) return (Timestamp(st, tz=self.tz), Timestamp(Timestamp(st + offsets.Second(), tz=self.tz).value - 1)) @@ -1265,14 +1297,17 @@ def _partial_date_slice(self, reso, parsed, use_lhs=True, use_rhs=True): if is_monotonic: # we are out of range - if len(stamps) and ( - (use_lhs and t1.value < stamps[0] and t2.value < stamps[0]) or ( - (use_rhs and t1.value > stamps[-1] and t2.value > stamps[-1]))): + if (len(stamps) and ((use_lhs and t1.value < stamps[0] and + t2.value < stamps[0]) or + ((use_rhs and t1.value > stamps[-1] and + t2.value > stamps[-1])))): raise KeyError # a monotonic (sorted) series 
can be sliced - left = stamps.searchsorted(t1.value, side='left') if use_lhs else None - right = stamps.searchsorted(t2.value, side='right') if use_rhs else None + left = stamps.searchsorted( + t1.value, side='left') if use_lhs else None + right = stamps.searchsorted( + t2.value, side='right') if use_rhs else None return slice(left, right) @@ -1306,7 +1341,8 @@ def get_value(self, series, key): return series.take(locs) try: - return _maybe_box(self, Index.get_value(self, series, key), series, key) + return _maybe_box(self, Index.get_value(self, series, key), + series, key) except KeyError: try: loc = self._get_string_slice(key) @@ -1386,7 +1422,7 @@ def _maybe_cast_slice_bound(self, label, side, kind): """ if is_float(label) or isinstance(label, time) or is_integer(label): - self._invalid_indexer('slice',label) + self._invalid_indexer('slice', label) if isinstance(label, compat.string_types): freq = getattr(self, 'freqstr', @@ -1437,14 +1473,16 @@ def slice_indexer(self, start=None, end=None, step=None, kind=None): # value-based partial (aka string) slices on non-monotonic arrays, # let's try that. 
if ((start is None or isinstance(start, compat.string_types)) and - (end is None or isinstance(end, compat.string_types))): + (end is None or isinstance(end, compat.string_types))): mask = True if start is not None: - start_casted = self._maybe_cast_slice_bound(start, 'left', kind) + start_casted = self._maybe_cast_slice_bound( + start, 'left', kind) mask = start_casted <= self if end is not None: - end_casted = self._maybe_cast_slice_bound(end, 'right', kind) + end_casted = self._maybe_cast_slice_bound( + end, 'right', kind) mask = (self <= end_casted) & mask indexer = mask.nonzero()[0][::step] @@ -1461,10 +1499,12 @@ def _get_freq(self): def _set_freq(self, value): self.offset = value - freq = property(fget=_get_freq, fset=_set_freq, doc="get/set the frequncy of the Index") + freq = property(fget=_get_freq, fset=_set_freq, + doc="get/set the frequency of the Index") year = _field_accessor('year', 'Y', "The year of the datetime") - month = _field_accessor('month', 'M', "The month as January=1, December=12") + month = _field_accessor( + 'month', 'M', "The month as January=1, December=12") day = _field_accessor('day', 'D', "The days of the datetime") hour = _field_accessor('hour', 'h', "The hours of the datetime") minute = _field_accessor('minute', 'm', "The minutes of the datetime") @@ -1530,15 +1570,17 @@ def time(self): """ Returns numpy array of datetime.time. The time part of the Timestamps. """ - return self._maybe_mask_results(_algos.arrmap_object(self.asobject.values, - lambda x: np.nan if x is tslib.NaT else x.time())) + return self._maybe_mask_results(_algos.arrmap_object( + self.asobject.values, + lambda x: np.nan if x is tslib.NaT else x.time())) @property def date(self): """ Returns numpy array of datetime.date. The date part of the Timestamps.
""" - return self._maybe_mask_results(_algos.arrmap_object(self.asobject.values, lambda x: x.date())) + return self._maybe_mask_results(_algos.arrmap_object( + self.asobject.values, lambda x: x.date())) def normalize(self): """ @@ -1573,7 +1615,7 @@ def inferred_type(self): def dtype(self): if self.tz is None: return _NS_DTYPE - return com.DatetimeTZDtype('ns',self.tz) + return com.DatetimeTZDtype('ns', self.tz) @property def is_all_dates(self): @@ -1631,10 +1673,12 @@ def insert(self, loc, item): if isinstance(item, (datetime, np.datetime64)): self._assert_can_do_op(item) if not self._has_same_tz(item): - raise ValueError('Passed item and index have different timezone') + raise ValueError( + 'Passed item and index have different timezone') # check freq can be preserved on edge cases if self.size and self.freq is not None: - if (loc == 0 or loc == -len(self)) and item + self.freq == self[0]: + if ((loc == 0 or loc == -len(self)) and + item + self.freq == self[0]): freq = self.freq elif (loc == len(self)) and item - self.freq == self[-1]: freq = self.freq @@ -1644,14 +1688,16 @@ def insert(self, loc, item): self[loc:].asi8)) if self.tz is not None: new_dates = tslib.tz_convert(new_dates, 'UTC', self.tz) - return DatetimeIndex(new_dates, name=self.name, freq=freq, tz=self.tz) + return DatetimeIndex(new_dates, name=self.name, freq=freq, + tz=self.tz) except (AttributeError, TypeError): # fall back to object index - if isinstance(item,compat.string_types): + if isinstance(item, compat.string_types): return self.asobject.insert(loc, item) - raise TypeError("cannot insert DatetimeIndex with incompatible label") + raise TypeError( + "cannot insert DatetimeIndex with incompatible label") def delete(self, loc): """ @@ -1674,7 +1720,8 @@ def delete(self, loc): freq = self.freq else: if com.is_list_like(loc): - loc = lib.maybe_indices_to_slice(com._ensure_int64(np.array(loc)), len(self)) + loc = lib.maybe_indices_to_slice( + com._ensure_int64(np.array(loc)), len(self)) if 
isinstance(loc, slice) and loc.step in (1, None): if (loc.start in (0, None) or loc.stop in (len(self), None)): freq = self.freq @@ -1685,7 +1732,8 @@ def delete(self, loc): def tz_convert(self, tz): """ - Convert tz-aware DatetimeIndex from one time zone to another (using pytz/dateutil) + Convert tz-aware DatetimeIndex from one time zone to another (using + pytz/dateutil) Parameters ---------- @@ -1714,11 +1762,11 @@ def tz_convert(self, tz): return self._shallow_copy(tz=tz) @deprecate_kwarg(old_arg_name='infer_dst', new_arg_name='ambiguous', - mapping={True: 'infer', False: 'raise'}) + mapping={True: 'infer', False: 'raise'}) def tz_localize(self, tz, ambiguous='raise'): """ - Localize tz-naive DatetimeIndex to given time zone (using pytz/dateutil), - or remove timezone from tz-aware DatetimeIndex + Localize tz-naive DatetimeIndex to given time zone (using + pytz/dateutil), or remove timezone from tz-aware DatetimeIndex Parameters ---------- @@ -1727,11 +1775,14 @@ def tz_localize(self, tz, ambiguous='raise'): time zone of the TimeSeries. None will remove timezone holding local time. 
ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise' - - 'infer' will attempt to infer fall dst-transition hours based on order - - bool-ndarray where True signifies a DST time, False signifies - a non-DST time (note that this flag is only applicable for ambiguous times) + - 'infer' will attempt to infer fall dst-transition hours based on + order + - bool-ndarray where True signifies a DST time, False signifies a + non-DST time (note that this flag is only applicable for + ambiguous times) - 'NaT' will return NaT where there are ambiguous times - - 'raise' will raise an AmbiguousTimeError if there are ambiguous times + - 'raise' will raise an AmbiguousTimeError if there are ambiguous + times infer_dst : boolean, default False (DEPRECATED) Attempt to infer fall dst-transition hours based on order @@ -1851,18 +1902,18 @@ def to_julian_date(self): year[testarr] -= 1 month[testarr] += 12 return Float64Index(day + - np.fix((153*month - 457)/5) + - 365*year + + np.fix((153 * month - 457) / 5) + + 365 * year + np.floor(year / 4) - np.floor(year / 100) + np.floor(year / 400) + 1721118.5 + (self.hour + - self.minute/60.0 + - self.second/3600.0 + - self.microsecond/3600.0/1e+6 + - self.nanosecond/3600.0/1e+9 - )/24.0) + self.minute / 60.0 + + self.second / 3600.0 + + self.microsecond / 3600.0 / 1e+6 + + self.nanosecond / 3600.0 / 1e+9 + ) / 24.0) DatetimeIndex._add_numeric_methods_disabled() @@ -1877,7 +1928,7 @@ def _generate_regular_range(start, end, periods, offset): b = Timestamp(start).value # cannot just use e = Timestamp(end) + 1 because arange breaks when # stride is too large, see GH10887 - e = b + (Timestamp(end).value - b)//stride * stride + stride//2 + e = b + (Timestamp(end).value - b) // stride * stride + stride // 2 # end.tz == start.tz by this point due to _generate implementation tz = start.tz elif start is not None: @@ -2038,7 +2089,7 @@ def cdate_range(start=None, end=None, periods=None, freq='C', tz=None, rng : DatetimeIndex """ - if freq=='C': + if 
freq == 'C': holidays = kwargs.pop('holidays', []) weekmask = kwargs.pop('weekmask', 'Mon Tue Wed Thu Fri') freq = CDay(holidays=holidays, weekmask=weekmask) @@ -2076,10 +2127,12 @@ def _naive_in_cache_range(start, end): def _in_range(start, end, rng_start, rng_end): return start > rng_start and end < rng_end + def _use_cached_range(offset, _normalized, start, end): return (offset._should_cache() and - not (offset._normalize_cache and not _normalized) and - _naive_in_cache_range(start, end)) + not (offset._normalize_cache and not _normalized) and + _naive_in_cache_range(start, end)) + def _time_to_micros(time): seconds = time.hour * 60 * 60 + 60 * time.minute + time.second diff --git a/pandas/tseries/interval.py b/pandas/tseries/interval.py index bcce64c3a71bf..6698c7e924758 100644 --- a/pandas/tseries/interval.py +++ b/pandas/tseries/interval.py @@ -14,7 +14,8 @@ def __init__(self, start, end): class PeriodInterval(object): """ - Represents an interval of time defined by two Period objects (time ordinals) + Represents an interval of time defined by two Period objects (time + ordinals) """ def __init__(self, start, end): @@ -26,6 +27,7 @@ class IntervalIndex(Index): """ """ + def __new__(self, starts, ends): pass diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py index 5beb65f0ba640..50c0a1ab7f336 100644 --- a/pandas/tseries/offsets.py +++ b/pandas/tseries/offsets.py @@ -3,8 +3,7 @@ from pandas import compat import numpy as np -from pandas.tseries.tools import to_datetime -from pandas.tseries.timedeltas import to_timedelta +from pandas.tseries.tools import to_datetime, normalize_date from pandas.core.common import ABCSeries, ABCDatetimeIndex # import after tools, dateutil check @@ -16,7 +15,7 @@ import functools __all__ = ['Day', 'BusinessDay', 'BDay', 'CustomBusinessDay', 'CDay', - 'CBMonthEnd','CBMonthBegin', + 'CBMonthEnd', 'CBMonthBegin', 'MonthBegin', 'BMonthBegin', 'MonthEnd', 'BMonthEnd', 'BusinessHour', 'YearBegin', 'BYearBegin', 'YearEnd', 
'BYearEnd', @@ -26,7 +25,10 @@ 'Hour', 'Minute', 'Second', 'Milli', 'Micro', 'Nano', 'DateOffset'] -# convert to/from datetime/timestamp to allow invalid Timestamp ranges to pass thru +# convert to/from datetime/timestamp to allow invalid Timestamp ranges to +# pass thru + + def as_timestamp(obj): if isinstance(obj, Timestamp): return obj @@ -36,12 +38,14 @@ def as_timestamp(obj): pass return obj + def as_datetime(obj): - f = getattr(obj,'to_pydatetime',None) + f = getattr(obj, 'to_pydatetime', None) if f is not None: obj = f() return obj + def apply_wraps(func): @functools.wraps(func) def wrapper(self, other): @@ -73,7 +77,8 @@ def wrapper(self, other): if not isinstance(self, Nano) and result.nanosecond != nano: if result.tz is not None: # convert to UTC - value = tslib.tz_convert_single(result.value, 'UTC', result.tz) + value = tslib.tz_convert_single( + result.value, 'UTC', result.tz) else: value = result.value result = Timestamp(value + nano) @@ -100,17 +105,18 @@ def apply_index_wraps(func): def wrapper(self, other): result = func(self, other) if self.normalize: - result = result.to_period('D').to_timestamp() + result = result.to_period('D').to_timestamp() return result return wrapper + def _is_normalized(dt): if (dt.hour != 0 or dt.minute != 0 or dt.second != 0 - or dt.microsecond != 0 or getattr(dt, 'nanosecond', 0) != 0): + or dt.microsecond != 0 or getattr(dt, 'nanosecond', 0) != 0): return False return True -#---------------------------------------------------------------------- +# --------------------------------------------------------------------- # DateOffset @@ -172,7 +178,7 @@ def __add__(date): 'years', 'months', 'weeks', 'days', 'year', 'month', 'week', 'day', 'weekday', 'hour', 'minute', 'second', 'microsecond' - ) + ) _use_relativedelta = False _adjust_dst = False @@ -186,13 +192,13 @@ def __init__(self, n=1, normalize=False, **kwds): self._offset, self._use_relativedelta = self._determine_offset() def _determine_offset(self): - # timedelta is 
used for sub-daily plural offsets and all singular offsets - # relativedelta is used for plural offsets of daily length or more - # nanosecond(s) are handled by apply_wraps + # timedelta is used for sub-daily plural offsets and all singular + # offsets relativedelta is used for plural offsets of daily length or + # more nanosecond(s) are handled by apply_wraps kwds_no_nanos = dict( (k, v) for k, v in self.kwds.items() if k not in ('nanosecond', 'nanoseconds') - ) + ) use_relativedelta = False if len(kwds_no_nanos) > 0: @@ -252,29 +258,30 @@ def apply_index(self, i): if not type(self) is DateOffset: raise NotImplementedError("DateOffset subclass %s " - "does not have a vectorized " - "implementation" - % (self.__class__.__name__,)) + "does not have a vectorized " + "implementation" + % (self.__class__.__name__,)) relativedelta_fast = set(['years', 'months', 'weeks', - 'days', 'hours', 'minutes', - 'seconds', 'microseconds']) + 'days', 'hours', 'minutes', + 'seconds', 'microseconds']) # relativedelta/_offset path only valid for base DateOffset if (self._use_relativedelta and - set(self.kwds).issubset(relativedelta_fast)): + set(self.kwds).issubset(relativedelta_fast)): months = ((self.kwds.get('years', 0) * 12 - + self.kwds.get('months', 0)) * self.n) + + self.kwds.get('months', 0)) * self.n) if months: shifted = tslib.shift_months(i.asi8, months) i = i._shallow_copy(shifted) weeks = (self.kwds.get('weeks', 0)) * self.n if weeks: - i = (i.to_period('W') + weeks).to_timestamp() + i.to_perioddelta('W') + i = (i.to_period('W') + weeks).to_timestamp() + \ + i.to_perioddelta('W') - timedelta_kwds = dict((k,v) for k,v in self.kwds.items() - if k in ['days','hours','minutes', - 'seconds','microseconds']) + timedelta_kwds = dict((k, v) for k, v in self.kwds.items() + if k in ['days', 'hours', 'minutes', + 'seconds', 'microseconds']) if timedelta_kwds: delta = Timedelta(**timedelta_kwds) i = i + (self.n * delta) @@ -302,8 +309,9 @@ def _params(self): all_paras = 
dict(list(vars(self).items()) + list(self.kwds.items())) if 'holidays' in all_paras and not all_paras['holidays']: all_paras.pop('holidays') - exclude = ['kwds', 'name','normalize', 'calendar'] - attrs = [(k, v) for k, v in all_paras.items() if (k not in exclude ) and (k[0] != '_')] + exclude = ['kwds', 'name', 'normalize', 'calendar'] + attrs = [(k, v) for k, v in all_paras.items() + if (k not in exclude) and (k[0] != '_')] attrs = sorted(set(attrs)) params = tuple([str(self.__class__)] + attrs) return params @@ -384,17 +392,20 @@ def __sub__(self, other): if isinstance(other, datetime): raise TypeError('Cannot subtract datetime from offset.') elif type(other) == type(self): - return self.__class__(self.n - other.n, normalize=self.normalize, **self.kwds) + return self.__class__(self.n - other.n, normalize=self.normalize, + **self.kwds) else: # pragma: no cover return NotImplemented def __rsub__(self, other): if isinstance(other, (ABCDatetimeIndex, ABCSeries)): return other - self - return self.__class__(-self.n, normalize=self.normalize, **self.kwds) + other + return self.__class__(-self.n, normalize=self.normalize, + **self.kwds) + other def __mul__(self, someInt): - return self.__class__(n=someInt * self.n, normalize=self.normalize, **self.kwds) + return self.__class__(n=someInt * self.n, normalize=self.normalize, + **self.kwds) def __rmul__(self, someInt): return self.__mul__(someInt) @@ -454,7 +465,6 @@ def _end_apply_index(self, i, freq): off = i.to_perioddelta('D') - import pandas.tseries.frequencies as frequencies from pandas.tseries.frequencies import get_freq_code base, mult = get_freq_code(freq) base_period = i.to_period(base) @@ -495,6 +505,7 @@ def freqstr(self): def nanos(self): raise ValueError("{0} is a non-fixed frequency".format(self)) + class SingleConstructorOffset(DateOffset): @classmethod @@ -504,6 +515,7 @@ def _from_name(cls, suffix=None): raise ValueError("Bad freq suffix %s" % suffix) return cls() + class BusinessMixin(object): """ mixin 
to business types to provide related functions """ @@ -535,6 +547,7 @@ def _repr_attrs(self): out += ': ' + ', '.join(attrs) return out + class BusinessDay(BusinessMixin, SingleConstructorOffset): """ DateOffset subclass representing possibly n business days @@ -694,7 +707,8 @@ def _validate_time(self, t_input): raise ValueError("time data must match '%H:%M' format") elif isinstance(t_input, dt_time): if t_input.second != 0 or t_input.microsecond != 0: - raise ValueError("time data must be specified only with hour and minute") + raise ValueError( + "time data must be specified only with hour and minute") return t_input else: raise ValueError("time data must be string or datetime.time") @@ -768,9 +782,11 @@ def rollback(self, dt): if not self.onOffset(dt): businesshours = self._get_business_hours_by_sec() if self.n >= 0: - dt = self._prev_opening_time(dt) + timedelta(seconds=businesshours) + dt = self._prev_opening_time( + dt) + timedelta(seconds=businesshours) else: - dt = self._next_opening_time(dt) + timedelta(seconds=businesshours) + dt = self._next_opening_time( + dt) + timedelta(seconds=businesshours) return dt @apply_wraps @@ -801,7 +817,7 @@ def apply(self, other): n = self.n if n >= 0: if (other.time() == self.end or - not self._onOffset(other, businesshours)): + not self._onOffset(other, businesshours)): other = self._next_opening_time(other) else: if other.time() == self.start: @@ -828,8 +844,9 @@ def apply(self, other): result = other + timedelta(hours=hours, minutes=minutes) # because of previous adjustment, time will be larger than start - if ((daytime and (result.time() < self.start or self.end < result.time())) or - not daytime and (self.end < result.time() < self.start)): + if ((daytime and (result.time() < self.start or + self.end < result.time())) or + not daytime and (self.end < result.time() < self.start)): if n >= 0: bday_edge = self._prev_opening_time(other) bday_edge = bday_edge + bhdelta @@ -849,11 +866,13 @@ def apply(self, other): else: if 
result.time() == self.start and nanosecond == 0: # adjustment to move to previous business day - result = self._next_opening_time(result- timedelta(seconds=1)) +bhdelta + result = self._next_opening_time( + result - timedelta(seconds=1)) + bhdelta return result else: - raise ApplyTypeError('Only know how to combine business hour with ') + raise ApplyTypeError( + 'Only know how to combine business hour with ') def onOffset(self, dt): if self.normalize and not _is_normalized(dt): @@ -919,8 +938,8 @@ def __init__(self, n=1, normalize=False, weekmask='Mon Tue Wed Thu Fri', self.kwds = kwds self.offset = kwds.get('offset', timedelta(0)) calendar, holidays = self.get_calendar(weekmask=weekmask, - holidays=holidays, - calendar=calendar) + holidays=holidays, + calendar=calendar) # CustomBusinessDay instances are identified by the # following two attributes. See DateOffset._params() # holidays, weekmask @@ -937,8 +956,8 @@ def get_calendar(self, weekmask, holidays, calendar): elif not isinstance(holidays, tuple): holidays = tuple(holidays) else: - # trust that calendar.holidays and holidays are - # consistent + # trust that calendar.holidays and holidays are + # consistent pass return calendar, holidays @@ -963,9 +982,9 @@ def get_calendar(self, weekmask, holidays, calendar): from distutils.version import LooseVersion if LooseVersion(np.__version__) < '1.7.0': - raise NotImplementedError("CustomBusinessDay requires numpy >= " - "1.7.0. Current version: " + - np.__version__) + raise NotImplementedError( + "CustomBusinessDay requires numpy >= " + "1.7.0. 
Current version: " + np.__version__) else: raise return busdaycalendar, holidays @@ -1006,7 +1025,7 @@ def apply(self, other): np_dt = np.datetime64(date_in.date()) np_incr_dt = np.busday_offset(np_dt, self.n, roll=roll, - busdaycal=self.calendar) + busdaycal=self.calendar) dt_date = np_incr_dt.astype(datetime) result = datetime.combine(dt_date, date_in.time()) @@ -1043,7 +1062,7 @@ def _to_dt64(dt, dtype='datetime64'): def onOffset(self, dt): if self.normalize and not _is_normalized(dt): return False - day64 = self._to_dt64(dt,'datetime64[D]') + day64 = self._to_dt64(dt, 'datetime64[D]') return np.is_busday(day64, busdaycal=self.calendar) @@ -1156,7 +1175,8 @@ def apply(self, other): other = other + relativedelta(months=n) wkday, _ = tslib.monthrange(other.year, other.month) first = _get_firstbday(wkday) - result = datetime(other.year, other.month, first, other.hour, other.minute, + result = datetime(other.year, other.month, first, + other.hour, other.minute, other.second, other.microsecond) return result @@ -1199,7 +1219,8 @@ class CustomBusinessMonthEnd(BusinessMixin, MonthOffset): _cacheable = False _prefix = 'CBM' - def __init__(self, n=1, normalize=False, weekmask='Mon Tue Wed Thu Fri', + + def __init__(self, n=1, normalize=False, weekmask='Mon Tue Wed Thu Fri', holidays=None, calendar=None, **kwds): self.n = int(n) self.normalize = normalize @@ -1212,7 +1233,7 @@ def __init__(self, n=1, normalize=False, weekmask='Mon Tue Wed Thu Fri', self.kwds['calendar'] = self.cbday.calendar # cache numpy calendar @apply_wraps - def apply(self,other): + def apply(self, other): n = self.n # First move to month offset cur_mend = self.m_offset.rollforward(other) @@ -1232,6 +1253,7 @@ def apply(self,other): result = self.cbday.rollback(new) return result + class CustomBusinessMonthBegin(BusinessMixin, MonthOffset): """ **EXPERIMENTAL** DateOffset of one custom business month @@ -1257,7 +1279,8 @@ class CustomBusinessMonthBegin(BusinessMixin, MonthOffset): _cacheable = False 
_prefix = 'CBMS' - def __init__(self, n=1, normalize=False, weekmask='Mon Tue Wed Thu Fri', + + def __init__(self, n=1, normalize=False, weekmask='Mon Tue Wed Thu Fri', holidays=None, calendar=None, **kwds): self.n = int(n) self.normalize = normalize @@ -1270,7 +1293,7 @@ def __init__(self, n=1, normalize=False, weekmask='Mon Tue Wed Thu Fri', self.kwds['calendar'] = self.cbday.calendar # cache numpy calendar @apply_wraps - def apply(self,other): + def apply(self, other): n = self.n dt_in = other # First move to month offset @@ -1291,6 +1314,7 @@ def apply(self,other): result = self.cbday.rollforward(new) return result + class Week(DateOffset): """ Weekly offset @@ -1301,6 +1325,7 @@ class Week(DateOffset): Always generate specific day of week. 0 for Monday """ _adjust_dst = True + def __init__(self, n=1, normalize=False, **kwds): self.n = n self.normalize = normalize @@ -1347,7 +1372,8 @@ def apply(self, other): @apply_index_wraps def apply_index(self, i): if self.weekday is None: - return (i.to_period('W') + self.n).to_timestamp() + i.to_perioddelta('W') + return ((i.to_period('W') + self.n).to_timestamp() + + i.to_perioddelta('W')) else: return self._end_apply_index(i, self.freqstr) @@ -1373,6 +1399,7 @@ def _from_name(cls, suffix=None): weekday = _weekday_to_int[suffix] return cls(weekday=weekday) + class WeekDay(object): MON = 0 TUE = 1 @@ -1452,7 +1479,8 @@ def apply(self, other): else: months = self.n + 1 - other = self.getOffsetOfMonth(other + relativedelta(months=months, day=1)) + other = self.getOffsetOfMonth( + other + relativedelta(months=months, day=1)) other = datetime(other.year, other.month, other.day, base.hour, base.minute, base.second, base.microsecond) return other @@ -1490,9 +1518,11 @@ def _from_name(cls, suffix=None): weekday = _weekday_to_int[suffix[1:]] return cls(week=week, weekday=weekday) + class LastWeekOfMonth(DateOffset): """ - Describes monthly dates in last week of month like "the last Tuesday of each month" + Describes monthly 
dates in last week of month like "the last Tuesday of + each month" Parameters ---------- @@ -1506,6 +1536,7 @@ class LastWeekOfMonth(DateOffset): 5: Saturdays 6: Sundays """ + def __init__(self, n=1, normalize=False, **kwds): self.n = n self.normalize = normalize @@ -1516,7 +1547,7 @@ def __init__(self, n=1, normalize=False, **kwds): if self.weekday < 0 or self.weekday > 6: raise ValueError('Day must be 0<=day<=6, got %d' % - self.weekday) + self.weekday) self.kwds = kwds @@ -1537,10 +1568,11 @@ def apply(self, other): else: months = self.n + 1 - return self.getOffsetOfMonth(other + relativedelta(months=months, day=1)) + return self.getOffsetOfMonth( + other + relativedelta(months=months, day=1)) def getOffsetOfMonth(self, dt): - m = MonthEnd() + m = MonthEnd() d = datetime(dt.year, dt.month, 1, dt.hour, dt.minute, dt.second, dt.microsecond, tzinfo=dt.tzinfo) eom = m.rollforward(d) @@ -1577,6 +1609,7 @@ class QuarterOffset(DateOffset): _adjust_dst = True # TODO: Consider combining QuarterOffset and YearOffset __init__ at some # point + def __init__(self, n=1, normalize=False, **kwds): self.n = n self.normalize = normalize @@ -1769,12 +1802,14 @@ def apply(self, other): def apply_index(self, i): freq_month = 12 if self.startingMonth == 1 else self.startingMonth - 1 # freq_month = self.startingMonth - freqstr = 'Q-%s' % (_int_to_month[freq_month],) + freqstr = 'Q-%s' % (_int_to_month[freq_month],) return self._beg_apply_index(i, freqstr) + class YearOffset(DateOffset): """DateOffset that just needs a month""" _adjust_dst = True + def __init__(self, n=1, normalize=False, **kwds): self.month = kwds.get('month', self._default_month) @@ -1967,7 +2002,7 @@ def _rollf(date): @apply_index_wraps def apply_index(self, i): freq_month = 12 if self.month == 1 else self.month - 1 - freqstr = 'A-%s' % (_int_to_month[freq_month],) + freqstr = 'A-%s' % (_int_to_month[freq_month],) return self._beg_apply_index(i, freqstr) def onOffset(self, dt): @@ -2044,8 +2079,8 @@ def 
__init__(self, n=1, normalize=False, **kwds): def isAnchored(self): return self.n == 1 \ - and self.startingMonth is not None \ - and self.weekday is not None + and self.startingMonth is not None \ + and self.weekday is not None def onOffset(self, dt): if self.normalize and not _is_normalized(dt): @@ -2064,11 +2099,11 @@ def onOffset(self, dt): def apply(self, other): n = self.n prev_year = self.get_year_end( - datetime(other.year - 1, self.startingMonth, 1)) + datetime(other.year - 1, self.startingMonth, 1)) cur_year = self.get_year_end( - datetime(other.year, self.startingMonth, 1)) + datetime(other.year, self.startingMonth, 1)) next_year = self.get_year_end( - datetime(other.year + 1, self.startingMonth, 1)) + datetime(other.year + 1, self.startingMonth, 1)) prev_year = tslib._localize_pydatetime(prev_year, other.tzinfo) cur_year = tslib._localize_pydatetime(cur_year, other.tzinfo) next_year = tslib._localize_pydatetime(next_year, other.tzinfo) @@ -2092,10 +2127,12 @@ def apply(self, other): else: assert False - result = self.get_year_end(datetime(year + n, self.startingMonth, 1)) + result = self.get_year_end( + datetime(year + n, self.startingMonth, 1)) result = datetime(result.year, result.month, result.day, - other.hour, other.minute, other.second, other.microsecond) + other.hour, other.minute, other.second, + other.microsecond) return result else: n = -n @@ -2117,10 +2154,12 @@ def apply(self, other): else: assert False - result = self.get_year_end(datetime(year - n, self.startingMonth, 1)) + result = self.get_year_end( + datetime(year - n, self.startingMonth, 1)) result = datetime(result.year, result.month, result.day, - other.hour, other.minute, other.second, other.microsecond) + other.hour, other.minute, other.second, + other.microsecond) return result def get_year_end(self, dt): @@ -2130,7 +2169,8 @@ def get_year_end(self, dt): return self._get_year_end_last(dt) def get_target_month_end(self, dt): - target_month = datetime(dt.year, self.startingMonth, 1, 
tzinfo=dt.tzinfo) + target_month = datetime( + dt.year, self.startingMonth, 1, tzinfo=dt.tzinfo) next_month_first_of = target_month + relativedelta(months=+1) return next_month_first_of + relativedelta(days=-1) @@ -2148,7 +2188,8 @@ def _get_year_end_nearest(self, dt): return backward def _get_year_end_last(self, dt): - current_year = datetime(dt.year, self.startingMonth, 1, tzinfo=dt.tzinfo) + current_year = datetime( + dt.year, self.startingMonth, 1, tzinfo=dt.tzinfo) return current_year + self._offset_lwom @property @@ -2166,9 +2207,9 @@ def _get_suffix_prefix(self): return self._suffix_prefix_last def get_rule_code_suffix(self): - return '%s-%s-%s' % (self._get_suffix_prefix(), \ - _int_to_month[self.startingMonth], \ - _int_to_weekday[self.weekday]) + return '%s-%s-%s' % (self._get_suffix_prefix(), + _int_to_month[self.startingMonth], + _int_to_weekday[self.weekday]) @classmethod def _parse_suffix(cls, varion_code, startingMonth_code, weekday_code): @@ -2184,10 +2225,10 @@ def _parse_suffix(cls, varion_code, startingMonth_code, weekday_code): weekday = _weekday_to_int[weekday_code] return { - "weekday": weekday, - "startingMonth": startingMonth, - "variation": variation, - } + "weekday": weekday, + "startingMonth": startingMonth, + "variation": variation, + } @classmethod def _from_name(cls, *args): @@ -2252,10 +2293,10 @@ def __init__(self, n=1, normalize=False, **kwds): if self.n == 0: raise ValueError('N cannot be 0') - self._offset = FY5253( \ - startingMonth=kwds['startingMonth'], \ - weekday=kwds["weekday"], - variation=kwds["variation"]) + self._offset = FY5253( + startingMonth=kwds['startingMonth'], + weekday=kwds["weekday"], + variation=kwds["variation"]) def isAnchored(self): return self.n == 1 and self._offset.isAnchored() @@ -2351,6 +2392,7 @@ def _from_name(cls, *args): return cls(**dict(FY5253._parse_suffix(*args[:-1]), qtr_with_extra_week=int(args[-1]))) + class Easter(DateOffset): ''' DateOffset for the Easter holiday using @@ -2366,10 +2408,12 
@@ def __init__(self, n=1, **kwds): @apply_wraps def apply(self, other): currentEaster = easter(other.year) - currentEaster = datetime(currentEaster.year, currentEaster.month, currentEaster.day) + currentEaster = datetime( + currentEaster.year, currentEaster.month, currentEaster.day) currentEaster = tslib._localize_pydatetime(currentEaster, other.tzinfo) - # NOTE: easter returns a datetime.date so we have to convert to type of other + # NOTE: easter returns a datetime.date so we have to convert to type of + # other if self.n >= 0: if other >= currentEaster: new = easter(other.year + self.n) @@ -2390,7 +2434,7 @@ def onOffset(self, dt): return False return date(dt.year, dt.month, dt.day) == easter(dt.year) -#---------------------------------------------------------------------- +# --------------------------------------------------------------------- # Ticks import operator @@ -2472,7 +2516,6 @@ def apply(self, other): _prefix = 'undefined' - def isAnchored(self): return False @@ -2643,7 +2686,7 @@ def generate_range(start=None, end=None, periods=None, BusinessHour, # 'BH' CustomBusinessDay, # 'C' CustomBusinessMonthEnd, # 'CBM' - CustomBusinessMonthBegin, # 'CBMS' + CustomBusinessMonthBegin, # 'CBMS' MonthEnd, # 'M' MonthBegin, # 'MS' Week, # 'W' diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py index 534804900c5e6..911277429ce86 100644 --- a/pandas/tseries/period.py +++ b/pandas/tseries/period.py @@ -19,15 +19,13 @@ import pandas.core.common as com from pandas.core.common import (isnull, _INT64_DTYPE, _maybe_box, _values_from_object, ABCSeries, - is_integer, is_float, is_object_dtype, - is_float_dtype) + is_integer, is_float, is_object_dtype) from pandas import compat from pandas.util.decorators import cache_readonly -from pandas.lib import Timestamp, Timedelta +from pandas.lib import Timedelta import pandas.lib as lib import pandas.tslib as tslib -import pandas.algos as _algos from pandas.compat import zip, u @@ -59,10 +57,12 @@ def 
dt64arr_to_periodarr(data, freq, tz): _DIFFERENT_FREQ_INDEX = period._DIFFERENT_FREQ_INDEX + def _period_index_cmp(opname, nat_result=False): """ Wrap comparison operations to convert datetime-like to datetime64 """ + def wrapper(self, other): if isinstance(other, Period): func = getattr(self.values, opname) @@ -152,9 +152,11 @@ class PeriodIndex(DatelikeOps, DatetimeIndexOpsMixin, Int64Index): """ _box_scalars = True _typ = 'periodindex' - _attributes = ['name','freq'] - _datetimelike_ops = ['year','month','day','hour','minute','second', - 'weekofyear','week','dayofweek','weekday','dayofyear','quarter', 'qyear', 'freq', 'days_in_month', 'daysinmonth'] + _attributes = ['name', 'freq'] + _datetimelike_ops = ['year', 'month', 'day', 'hour', 'minute', 'second', + 'weekofyear', 'week', 'dayofweek', 'weekday', + 'dayofyear', 'quarter', 'qyear', 'freq', + 'days_in_month', 'daysinmonth'] _is_numeric_dtype = False _infer_as_myclass = True @@ -207,8 +209,8 @@ def _generate_range(cls, start, end, periods, freq, fields): @classmethod def _from_arraylike(cls, data, freq, tz): - - if not isinstance(data, (np.ndarray, PeriodIndex, DatetimeIndex, Int64Index)): + if not isinstance(data, (np.ndarray, PeriodIndex, + DatetimeIndex, Int64Index)): if np.isscalar(data) or isinstance(data, Period): raise ValueError('PeriodIndex() must be called with a ' 'collection of some kind, %s was passed' @@ -267,8 +269,8 @@ def _from_arraylike(cls, data, freq, tz): @classmethod def _simple_new(cls, values, name=None, freq=None, **kwargs): - if not getattr(values,'dtype',None): - values = np.array(values,copy=False) + if not getattr(values, 'dtype', None): + values = np.array(values, copy=False) if is_object_dtype(values): return PeriodIndex(values, name=name, freq=freq, **kwargs) @@ -344,12 +346,10 @@ def __array_wrap__(self, result, context=None): def _box_func(self): return lambda x: Period._from_ordinal(ordinal=x, freq=self.freq) - def _convert_for_op(self): - """ Convert value to be insertable 
to ndarray """ - return self._box_func(value) - def _to_embed(self, keep_tz=False): - """ return an array repr of this object, potentially casting to object """ + """ + return an array repr of this object, potentially casting to object + """ return self.asobject.values @property @@ -487,17 +487,21 @@ def to_datetime(self, dayfirst=False): second = _field_accessor('second', 7, "The second of the period") weekofyear = _field_accessor('week', 8, "The week ordinal of the year") week = weekofyear - dayofweek = _field_accessor('dayofweek', 10, "The day of the week with Monday=0, Sunday=6") + dayofweek = _field_accessor( + 'dayofweek', 10, "The day of the week with Monday=0, Sunday=6") weekday = dayofweek - dayofyear = day_of_year = _field_accessor('dayofyear', 9, "The ordinal day of the year") + dayofyear = day_of_year = _field_accessor( + 'dayofyear', 9, "The ordinal day of the year") quarter = _field_accessor('quarter', 2, "The quarter of the date") qyear = _field_accessor('qyear', 1) - days_in_month = _field_accessor('days_in_month', 11, "The number of days in the month") + days_in_month = _field_accessor( + 'days_in_month', 11, "The number of days in the month") daysinmonth = days_in_month def _get_object_array(self): freq = self.freq - return np.array([ Period._from_ordinal(ordinal=x, freq=freq) for x in self.values], copy=False) + return np.array([Period._from_ordinal(ordinal=x, freq=freq) + for x in self.values], copy=False) def _mpl_repr(self): # how to represent ourselves to matplotlib @@ -547,7 +551,8 @@ def to_timestamp(self, freq=None, how='start'): return DatetimeIndex(new_data, freq='infer', name=self.name) def _maybe_convert_timedelta(self, other): - if isinstance(other, (timedelta, np.timedelta64, offsets.Tick, Timedelta)): + if isinstance(other, (timedelta, np.timedelta64, + offsets.Tick, Timedelta)): offset = frequencies.to_offset(self.freq.rule_code) if isinstance(offset, offsets.Tick): nanos = tslib._delta_to_nanoseconds(other) @@ -612,7 +617,8 @@ def 
get_value(self, series, key): """ s = _values_from_object(series) try: - return _maybe_box(self, super(PeriodIndex, self).get_value(s, key), series, key) + return _maybe_box(self, super(PeriodIndex, self).get_value(s, key), + series, key) except (KeyError, IndexError): try: asdt, parsed, reso = parse_time_string(key, self.freq) @@ -635,14 +641,16 @@ def get_value(self, series, key): return series[key] elif grp == freqn: key = Period(asdt, freq=self.freq).ordinal - return _maybe_box(self, self._engine.get_value(s, key), series, key) + return _maybe_box(self, self._engine.get_value(s, key), + series, key) else: raise KeyError(key) except TypeError: pass key = Period(key, self.freq).ordinal - return _maybe_box(self, self._engine.get_value(s, key), series, key) + return _maybe_box(self, self._engine.get_value(s, key), + series, key) def get_indexer(self, target, method=None, limit=None, tolerance=None): if hasattr(target, 'freq') and target.freq != self.freq: @@ -678,8 +686,8 @@ def get_loc(self, key, method=None, tolerance=None): def _maybe_cast_slice_bound(self, label, side, kind): """ - If label is a string or a datetime, cast it to Period.ordinal according to - resolution. + If label is a string or a datetime, cast it to Period.ordinal according + to resolution. 
Parameters ---------- @@ -706,7 +714,7 @@ def _maybe_cast_slice_bound(self, label, side, kind): except Exception: raise KeyError(label) elif is_integer(label) or is_float(label): - self._invalid_indexer('slice',label) + self._invalid_indexer('slice', label) return label @@ -729,10 +737,10 @@ def _parsed_string_to_bounds(self, reso, parsed): hour=parsed.hour, minute=parsed.minute, freq='T') elif reso == 'second': t1 = Period(year=parsed.year, month=parsed.month, day=parsed.day, - hour=parsed.hour, minute=parsed.minute, second=parsed.second, - freq='S') + hour=parsed.hour, minute=parsed.minute, + second=parsed.second, freq='S') else: - raise KeyError(key) + raise KeyError(reso) return (t1.asfreq(self.freq, how='start'), t1.asfreq(self.freq, how='end')) @@ -809,7 +817,8 @@ def __getitem__(self, key): return PeriodIndex(result, name=self.name, freq=self.freq) - def _format_native_types(self, na_rep=u('NaT'), date_format=None, **kwargs): + def _format_native_types(self, na_rep=u('NaT'), date_format=None, + **kwargs): values = np.array(list(self), dtype=object) mask = isnull(self.values) @@ -911,7 +920,8 @@ def __setstate__(self, state): def tz_convert(self, tz): """ - Convert tz-aware DatetimeIndex from one time zone to another (using pytz/dateutil) + Convert tz-aware DatetimeIndex from one time zone to another (using + pytz/dateutil) Parameters ---------- @@ -932,8 +942,8 @@ def tz_convert(self, tz): def tz_localize(self, tz, infer_dst=False): """ - Localize tz-naive DatetimeIndex to given time zone (using pytz/dateutil), - or remove timezone from tz-aware DatetimeIndex + Localize tz-naive DatetimeIndex to given time zone (using + pytz/dateutil), or remove timezone from tz-aware DatetimeIndex Parameters ---------- @@ -978,7 +988,7 @@ def _get_ordinal_range(start, end, periods, freq, mult=1): if is_start_per and is_end_per and start.freq != end.freq: raise ValueError('Start and end must have same freq') if ((is_start_per and start.ordinal == tslib.iNaT) or - (is_end_per 
and end.ordinal == tslib.iNaT)): + (is_end_per and end.ordinal == tslib.iNaT)): raise ValueError('Start and end must not be NaT') if freq is None: @@ -1035,7 +1045,8 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None, base, mult = _gfc(freq) arrays = _make_field_arrays(year, month, day, hour, minute, second) for y, mth, d, h, mn, s in zip(*arrays): - ordinals.append(period.period_ordinal(y, mth, d, h, mn, s, 0, 0, base)) + ordinals.append(period.period_ordinal( + y, mth, d, h, mn, s, 0, 0, base)) return np.array(ordinals, dtype=np.int64), freq diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py index ad27b412cddb9..729e85b0ad595 100644 --- a/pandas/tseries/plotting.py +++ b/pandas/tseries/plotting.py @@ -3,7 +3,7 @@ Pierre GF Gerard-Marchant & Matt Knox """ -#!!! TODO: Use the fact that axis can have units to simplify the process +# TODO: Use the fact that axis can have units to simplify the process import numpy as np @@ -18,7 +18,7 @@ from pandas.tseries.converter import (TimeSeries_DateLocator, TimeSeries_DateFormatter) -#---------------------------------------------------------------------- +# --------------------------------------------------------------------- # Plotting functions and monkey patches @@ -139,7 +139,8 @@ def _replot_ax(ax, freq, kwargs): from pandas.tools.plotting import _plot_klass plotf = _plot_klass[plotf]._plot - lines.append(plotf(ax, series.index._mpl_repr(), series.values, **kwds)[0]) + lines.append(plotf(ax, series.index._mpl_repr(), + series.values, **kwds)[0]) labels.append(com.pprint_thing(series.name)) return lines, labels @@ -287,7 +288,7 @@ def format_dateaxis(subplot, freq): subplot.xaxis.set_minor_formatter(minformatter) # x and y coord info - subplot.format_coord = lambda t, y: ("t = {0} " - "y = {1:8f}".format(Period(ordinal=int(t), freq=freq), y)) + subplot.format_coord = lambda t, y: ( + "t = {0} y = {1:8f}".format(Period(ordinal=int(t), freq=freq), y)) pylab.draw_if_interactive() diff 
--git a/pandas/tseries/tdi.py b/pandas/tseries/tdi.py index ea61e4f247e58..fafe13e1f2c09 100644 --- a/pandas/tseries/tdi.py +++ b/pandas/tseries/tdi.py @@ -3,18 +3,17 @@ from datetime import timedelta import numpy as np from pandas.core.common import (ABCSeries, _TD_DTYPE, _INT64_DTYPE, - is_timedelta64_dtype, _maybe_box, - _values_from_object, isnull, is_integer, is_float) + _maybe_box, + _values_from_object, isnull, + is_integer, is_float) from pandas.core.index import Index, Int64Index import pandas.compat as compat from pandas.compat import u -from pandas.util.decorators import cache_readonly from pandas.tseries.frequencies import to_offset import pandas.core.common as com -from pandas.tseries import timedeltas from pandas.tseries.base import TimelikeOps, DatetimeIndexOpsMixin -from pandas.tseries.timedeltas import to_timedelta, _coerce_scalar_to_timedelta_type -import pandas.tseries.offsets as offsets +from pandas.tseries.timedeltas import (to_timedelta, + _coerce_scalar_to_timedelta_type) from pandas.tseries.offsets import Tick, DateOffset import pandas.lib as lib @@ -24,10 +23,12 @@ Timedelta = tslib.Timedelta + def _td_index_cmp(opname, nat_result=False): """ Wrap comparison operations to convert timedelta-like to timedelta64 """ + def wrapper(self, other): func = getattr(super(TimedeltaIndex, self), opname) if _is_convertible_to_td(other): @@ -37,7 +38,8 @@ def wrapper(self, other): result.fill(nat_result) else: if not com.is_list_like(other): - raise TypeError("cannot compare a TimedeltaIndex with type {0}".format(type(other))) + raise TypeError("cannot compare a TimedeltaIndex with type " + "{0}".format(type(other))) other = TimedeltaIndex(other).values result = func(other) @@ -94,8 +96,10 @@ class TimedeltaIndex(DatetimeIndexOpsMixin, TimelikeOps, Int64Index): _typ = 'timedeltaindex' _join_precedence = 10 + def _join_i8_wrapper(joinf, **kwargs): - return DatetimeIndexOpsMixin._join_i8_wrapper(joinf, dtype='m8[ns]', **kwargs) + return 
DatetimeIndexOpsMixin._join_i8_wrapper( + joinf, dtype='m8[ns]', **kwargs) _inner_indexer = _join_i8_wrapper(_algos.inner_join_indexer_int64) _outer_indexer = _join_i8_wrapper(_algos.outer_join_indexer_int64) @@ -103,8 +107,8 @@ def _join_i8_wrapper(joinf, **kwargs): _left_indexer_unique = _join_i8_wrapper( _algos.left_join_indexer_unique_int64, with_indexers=False) _arrmap = None - _datetimelike_ops = ['days','seconds','microseconds','nanoseconds', - 'freq','components'] + _datetimelike_ops = ['days', 'seconds', 'microseconds', 'nanoseconds', + 'freq', 'components'] __eq__ = _td_index_cmp('__eq__') __ne__ = _td_index_cmp('__ne__', nat_result=True) @@ -167,10 +171,10 @@ def __new__(cls, data=None, unit=None, % repr(data)) # convert if not already - if getattr(data,'dtype',None) != _TD_DTYPE: - data = to_timedelta(data,unit=unit,box=False) + if getattr(data, 'dtype', None) != _TD_DTYPE: + data = to_timedelta(data, unit=unit, box=False) elif copy: - data = np.array(data,copy=True) + data = np.array(data, copy=True) # check that we are matching freqs if verify_integrity and len(data) > 0: @@ -178,10 +182,13 @@ def __new__(cls, data=None, unit=None, index = cls._simple_new(data, name=name) inferred = index.inferred_freq if inferred != freq.freqstr: - on_freq = cls._generate(index[0], None, len(index), name, freq) + on_freq = cls._generate( + index[0], None, len(index), name, freq) if not np.array_equal(index.asi8, on_freq.asi8): - raise ValueError('Inferred frequency {0} from passed timedeltas does not ' - 'conform to passed frequency {1}'.format(inferred, freq.freqstr)) + raise ValueError('Inferred frequency {0} from passed ' + 'timedeltas does not conform to ' + 'passed frequency {1}' + .format(inferred, freq.freqstr)) index.freq = freq return index @@ -239,8 +246,8 @@ def _box_func(self): @classmethod def _simple_new(cls, values, name=None, freq=None, **kwargs): - if not getattr(values,'dtype',None): - values = np.array(values,copy=False) + if not getattr(values, 
'dtype', None): + values = np.array(values, copy=False) if values.dtype == np.object_: values = tslib.array_to_timedelta64(values) if values.dtype != _TD_DTYPE: @@ -286,7 +293,8 @@ def _add_delta(self, delta): # update name when delta is index name = com._maybe_match_name(self, delta) else: - raise ValueError("cannot add the type {0} to a TimedeltaIndex".format(type(delta))) + raise ValueError("cannot add the type {0} to a TimedeltaIndex" + .format(type(delta))) result = TimedeltaIndex(new_values, freq='infer', name=name) return result @@ -294,16 +302,17 @@ def _add_delta(self, delta): def _evaluate_with_timedelta_like(self, other, op, opstr): # allow division by a timedelta - if opstr in ['__div__','__truediv__']: + if opstr in ['__div__', '__truediv__']: if _is_convertible_to_td(other): other = Timedelta(other) if isnull(other): - raise NotImplementedError("division by pd.NaT not implemented") + raise NotImplementedError( + "division by pd.NaT not implemented") i8 = self.asi8 - result = i8/float(other.value) - result = self._maybe_mask_results(result,convert='float64') - return Index(result,name=self.name,copy=False) + result = i8 / float(other.value) + result = self._maybe_mask_results(result, convert='float64') + return Index(result, name=self.name, copy=False) return NotImplemented @@ -314,8 +323,8 @@ def _add_datelike(self, other): other = Timestamp(other) i8 = self.asi8 result = i8 + other.value - result = self._maybe_mask_results(result,fill_value=tslib.iNaT) - return DatetimeIndex(result,name=self.name,copy=False) + result = self._maybe_mask_results(result, fill_value=tslib.iNaT) + return DatetimeIndex(result, name=self.name, copy=False) def _sub_datelike(self, other): raise TypeError("cannot subtract a datelike from a TimedeltaIndex") @@ -335,10 +344,12 @@ def _get_field(self, m): result = np.empty(len(self), dtype='float64') mask = self._isnan imask = ~mask - result.flat[imask] = np.array([ getattr(Timedelta(val),m) for val in values[imask] ]) + 
result.flat[imask] = np.array( + [getattr(Timedelta(val), m) for val in values[imask]]) result[mask] = np.nan else: - result = np.array([ getattr(Timedelta(val),m) for val in values ],dtype='int64') + result = np.array([getattr(Timedelta(val), m) + for val in values], dtype='int64') return result @property @@ -353,12 +364,17 @@ def seconds(self): @property def microseconds(self): - """ Number of microseconds (>= 0 and less than 1 second) for each element. """ + """ + Number of microseconds (>= 0 and less than 1 second) for each + element. """ return self._get_field('microseconds') @property def nanoseconds(self): - """ Number of nanoseconds (>= 0 and less than 1 microsecond) for each element. """ + """ + Number of nanoseconds (>= 0 and less than 1 microsecond) for each + element. + """ return self._get_field('nanoseconds') @property @@ -373,18 +389,19 @@ def components(self): """ from pandas import DataFrame - columns = ['days','hours','minutes','seconds','milliseconds','microseconds','nanoseconds'] + columns = ['days', 'hours', 'minutes', 'seconds', + 'milliseconds', 'microseconds', 'nanoseconds'] hasnans = self.hasnans if hasnans: def f(x): if isnull(x): - return [np.nan]*len(columns) + return [np.nan] * len(columns) return x.components else: def f(x): return x.components - result = DataFrame([ f(x) for x in self ]) + result = DataFrame([f(x) for x in self]) result.columns = columns if not hasnans: result = result.astype('int64') @@ -396,7 +413,7 @@ def total_seconds(self): .. 
versionadded:: 0.17.0 """ - return self._maybe_mask_results(1e-9*self.asi8) + return self._maybe_mask_results(1e-9 * self.asi8) def to_pytimedelta(self): """ @@ -422,9 +439,11 @@ def astype(self, dtype): # return an index (essentially this is division) result = self.values.astype(dtype) if self.hasnans: - return Index(self._maybe_mask_results(result,convert='float64'),name=self.name) + return Index(self._maybe_mask_results(result, + convert='float64'), + name=self.name) - return Index(result.astype('i8'),name=self.name) + return Index(result.astype('i8'), name=self.name) else: # pragma: no cover raise ValueError('Cannot cast TimedeltaIndex to dtype %s' % dtype) @@ -503,8 +522,8 @@ def join(self, other, how='left', level=None, return_indexers=False): def _wrap_joined_index(self, joined, other): name = self.name if self.name == other.name else None - if (isinstance(other, TimedeltaIndex) and self.freq == other.freq - and self._can_fast_union(other)): + if (isinstance(other, TimedeltaIndex) and self.freq == other.freq + and self._can_fast_union(other)): joined = self._shallow_copy(joined, name=name) return joined else: @@ -550,7 +569,7 @@ def _fast_union(self, other): else: left, right = other, self - left_start, left_end = left[0], left[-1] + left_end = left[-1] right_end = right[-1] # concatenate @@ -624,7 +643,8 @@ def get_value(self, series, key): return self.get_value_maybe_box(series, key) try: - return _maybe_box(self, Index.get_value(self, series, key), series, key) + return _maybe_box(self, Index.get_value(self, series, key), + series, key) except KeyError: try: loc = self._get_string_slice(key) @@ -699,7 +719,7 @@ def _maybe_cast_slice_bound(self, label, side, kind): return (lbound + to_offset(parsed.resolution) - Timedelta(1, 'ns')) elif is_integer(label) or is_float(label): - self._invalid_indexer('slice',label) + self._invalid_indexer('slice', label) return label @@ -707,7 +727,7 @@ def _get_string_slice(self, key, use_lhs=True, use_rhs=True): freq = 
getattr(self, 'freqstr', getattr(self, 'inferred_freq', None)) if is_integer(key) or is_float(key): - self._invalid_indexer('slice',key) + self._invalid_indexer('slice', key) loc = self._partial_td_slice(key, freq, use_lhs=use_lhs, use_rhs=use_rhs) return loc @@ -718,36 +738,44 @@ def _partial_td_slice(self, key, freq, use_lhs=True, use_rhs=True): if not isinstance(key, compat.string_types): return key - parsed = _coerce_scalar_to_timedelta_type(key, box=True) + raise NotImplementedError - is_monotonic = self.is_monotonic + # TODO(wesm): dead code + # parsed = _coerce_scalar_to_timedelta_type(key, box=True) - # figure out the resolution of the passed td - # and round to it - t1 = parsed.round(reso) - t2 = t1 + to_offset(parsed.resolution) - Timedelta(1,'ns') + # is_monotonic = self.is_monotonic - stamps = self.asi8 + # # figure out the resolution of the passed td + # # and round to it - if is_monotonic: + # # t1 = parsed.round(reso) - # we are out of range - if len(stamps) and ( - (use_lhs and t1.value < stamps[0] and t2.value < stamps[0]) or ( - (use_rhs and t1.value > stamps[-1] and t2.value > stamps[-1]))): - raise KeyError + # t2 = t1 + to_offset(parsed.resolution) - Timedelta(1, 'ns') - # a monotonic (sorted) series can be sliced - left = stamps.searchsorted(t1.value, side='left') if use_lhs else None - right = stamps.searchsorted(t2.value, side='right') if use_rhs else None + # stamps = self.asi8 - return slice(left, right) + # if is_monotonic: - lhs_mask = (stamps >= t1.value) if use_lhs else True - rhs_mask = (stamps <= t2.value) if use_rhs else True + # # we are out of range + # if (len(stamps) and ((use_lhs and t1.value < stamps[0] and + # t2.value < stamps[0]) or + # ((use_rhs and t1.value > stamps[-1] and + # t2.value > stamps[-1])))): + # raise KeyError - # try to find a the dates - return (lhs_mask & rhs_mask).nonzero()[0] + # # a monotonic (sorted) series can be sliced + # left = (stamps.searchsorted(t1.value, side='left') + # if use_lhs else None) + 
# right = (stamps.searchsorted(t2.value, side='right') + # if use_rhs else None) + + # return slice(left, right) + + # lhs_mask = (stamps >= t1.value) if use_lhs else True + # rhs_mask = (stamps <= t2.value) if use_rhs else True + + # # try to find a the dates + # return (lhs_mask & rhs_mask).nonzero()[0] def searchsorted(self, key, side='left'): if isinstance(key, (np.ndarray, Index)): @@ -816,7 +844,8 @@ def insert(self, loc, item): # check freq can be preserved on edge cases if self.freq is not None: - if (loc == 0 or loc == -len(self)) and item + self.freq == self[0]: + if ((loc == 0 or loc == -len(self)) and + item + self.freq == self[0]): freq = self.freq elif (loc == len(self)) and item - self.freq == self[-1]: freq = self.freq @@ -824,15 +853,16 @@ def insert(self, loc, item): try: new_tds = np.concatenate((self[:loc].asi8, [item.view(np.int64)], - self[loc:].asi8)) + self[loc:].asi8)) return TimedeltaIndex(new_tds, name=self.name, freq=freq) except (AttributeError, TypeError): # fall back to object index - if isinstance(item,compat.string_types): + if isinstance(item, compat.string_types): return self.asobject.insert(loc, item) - raise TypeError("cannot insert TimedeltaIndex with incompatible label") + raise TypeError( + "cannot insert TimedeltaIndex with incompatible label") def delete(self, loc): """ @@ -855,7 +885,8 @@ def delete(self, loc): freq = self.freq else: if com.is_list_like(loc): - loc = lib.maybe_indices_to_slice(com._ensure_int64(np.array(loc)), len(self)) + loc = lib.maybe_indices_to_slice( + com._ensure_int64(np.array(loc)), len(self)) if isinstance(loc, slice) and loc.step in (1, None): if (loc.start in (0, None) or loc.stop in (len(self), None)): freq = self.freq @@ -869,18 +900,22 @@ def delete(self, loc): def _is_convertible_to_index(other): - """ return a boolean whether I can attempt conversion to a TimedeltaIndex """ + """ + return a boolean whether I can attempt conversion to a TimedeltaIndex + """ if isinstance(other, 
TimedeltaIndex): return True elif (len(other) > 0 and - other.inferred_type not in ('floating', 'mixed-integer','integer', + other.inferred_type not in ('floating', 'mixed-integer', 'integer', 'mixed-integer-float', 'mixed')): return True return False def _is_convertible_to_td(key): - return isinstance(key, (DateOffset, timedelta, Timedelta, np.timedelta64, compat.string_types)) + return isinstance(key, (DateOffset, timedelta, Timedelta, + np.timedelta64, compat.string_types)) + def _to_m8(key): ''' @@ -893,6 +928,7 @@ def _to_m8(key): # return an type that can be compared return np.int64(key.value).view(_TD_DTYPE) + def _generate_regular_range(start, end, periods, offset): stride = offset.nanos if periods is None: diff --git a/pandas/tseries/tests/test_base.py b/pandas/tseries/tests/test_base.py index 2a1e59154f3d1..2f28c55ae520f 100644 --- a/pandas/tseries/tests/test_base.py +++ b/pandas/tseries/tests/test_base.py @@ -1,80 +1,93 @@ from __future__ import print_function -import re from datetime import datetime, timedelta import numpy as np import pandas as pd -from pandas.tseries.base import DatetimeIndexOpsMixin -from pandas.util.testing import assertRaisesRegexp, assertIsInstance -from pandas.tseries.common import is_datetimelike -from pandas import (Series, Index, Int64Index, Timestamp, DatetimeIndex, PeriodIndex, - TimedeltaIndex, Timedelta, timedelta_range, date_range, Float64Index) -import pandas.tseries.offsets as offsets +from pandas import (Series, Index, Int64Index, Timestamp, DatetimeIndex, + PeriodIndex, TimedeltaIndex, Timedelta, timedelta_range, + date_range, Float64Index) import pandas.tslib as tslib -import nose import pandas.util.testing as tm from pandas.tests.test_base import Ops + class TestDatetimeIndexOps(Ops): - tz = [None, 'UTC', 'Asia/Tokyo', 'US/Eastern', - 'dateutil/Asia/Singapore', 'dateutil/US/Pacific'] + tz = [None, 'UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/Asia/Singapore', + 'dateutil/US/Pacific'] def setUp(self): 
super(TestDatetimeIndexOps, self).setUp() - mask = lambda x: isinstance(x, DatetimeIndex) or isinstance(x, PeriodIndex) - self.is_valid_objs = [ o for o in self.objs if mask(o) ] - self.not_valid_objs = [ o for o in self.objs if not mask(o) ] + mask = lambda x: (isinstance(x, DatetimeIndex) or + isinstance(x, PeriodIndex)) + self.is_valid_objs = [o for o in self.objs if mask(o)] + self.not_valid_objs = [o for o in self.objs if not mask(o)] def test_ops_properties(self): - self.check_ops_properties(['year','month','day','hour','minute','second','weekofyear','week','dayofweek','dayofyear','quarter']) - self.check_ops_properties(['date','time','microsecond','nanosecond', 'is_month_start', 'is_month_end', 'is_quarter_start', - 'is_quarter_end', 'is_year_start', 'is_year_end'], lambda x: isinstance(x,DatetimeIndex)) + self.check_ops_properties( + ['year', 'month', 'day', 'hour', 'minute', 'second', 'weekofyear', + 'week', 'dayofweek', 'dayofyear', 'quarter']) + self.check_ops_properties(['date', 'time', 'microsecond', 'nanosecond', + 'is_month_start', 'is_month_end', + 'is_quarter_start', + 'is_quarter_end', 'is_year_start', + 'is_year_end'], + lambda x: isinstance(x, DatetimeIndex)) def test_ops_properties_basic(self): # sanity check that the behavior didn't change # GH7206 - for op in ['year','day','second','weekday']: - self.assertRaises(TypeError, lambda x: getattr(self.dt_series,op)) + for op in ['year', 'day', 'second', 'weekday']: + self.assertRaises(TypeError, lambda x: getattr(self.dt_series, op)) # attribute access should still work! 
- s = Series(dict(year=2000,month=1,day=10)) - self.assertEqual(s.year,2000) - self.assertEqual(s.month,1) - self.assertEqual(s.day,10) - self.assertRaises(AttributeError, lambda : s.weekday) + s = Series(dict(year=2000, month=1, day=10)) + self.assertEqual(s.year, 2000) + self.assertEqual(s.month, 1) + self.assertEqual(s.day, 10) + self.assertRaises(AttributeError, lambda: s.weekday) def test_astype_str(self): # test astype string - #10442 - result = date_range('2012-01-01', periods=4, name='test_name').astype(str) - expected = Index(['2012-01-01', '2012-01-02', '2012-01-03','2012-01-04'], - name='test_name', dtype=object) + result = date_range('2012-01-01', periods=4, + name='test_name').astype(str) + expected = Index(['2012-01-01', '2012-01-02', '2012-01-03', + '2012-01-04'], name='test_name', dtype=object) tm.assert_index_equal(result, expected) # test astype string with tz and name - result = date_range('2012-01-01', periods=3, name='test_name', tz='US/Eastern').astype(str) - expected = Index(['2012-01-01 00:00:00-05:00', '2012-01-02 00:00:00-05:00', - '2012-01-03 00:00:00-05:00'], name='test_name', dtype=object) + result = date_range('2012-01-01', periods=3, name='test_name', + tz='US/Eastern').astype(str) + expected = Index(['2012-01-01 00:00:00-05:00', + '2012-01-02 00:00:00-05:00', + '2012-01-03 00:00:00-05:00'], + name='test_name', dtype=object) tm.assert_index_equal(result, expected) # test astype string with freqH and name - result = date_range('1/1/2011', periods=3, freq='H', name='test_name').astype(str) - expected = Index(['2011-01-01 00:00:00', '2011-01-01 01:00:00', '2011-01-01 02:00:00'], + result = date_range('1/1/2011', periods=3, freq='H', + name='test_name').astype(str) + expected = Index(['2011-01-01 00:00:00', '2011-01-01 01:00:00', + '2011-01-01 02:00:00'], name='test_name', dtype=object) tm.assert_index_equal(result, expected) # test astype string with freqH and timezone result = date_range('3/6/2012 00:00', periods=2, freq='H', 
tz='Europe/London', name='test_name').astype(str) - expected = Index(['2012-03-06 00:00:00+00:00', '2012-03-06 01:00:00+00:00'], + expected = Index(['2012-03-06 00:00:00+00:00', + '2012-03-06 01:00:00+00:00'], dtype=object, name='test_name') tm.assert_index_equal(result, expected) def test_asobject_tolist(self): - idx = pd.date_range(start='2013-01-01', periods=4, freq='M', name='idx') - expected_list = [pd.Timestamp('2013-01-31'), pd.Timestamp('2013-02-28'), - pd.Timestamp('2013-03-31'), pd.Timestamp('2013-04-30')] + idx = pd.date_range(start='2013-01-01', periods=4, freq='M', + name='idx') + expected_list = [pd.Timestamp('2013-01-31'), + pd.Timestamp('2013-02-28'), + pd.Timestamp('2013-03-31'), + pd.Timestamp('2013-04-30')] expected = pd.Index(expected_list, dtype=object, name='idx') result = idx.asobject self.assertTrue(isinstance(result, Index)) @@ -84,7 +97,8 @@ def test_asobject_tolist(self): self.assertEqual(result.name, expected.name) self.assertEqual(idx.tolist(), expected_list) - idx = pd.date_range(start='2013-01-01', periods=4, freq='M', name='idx', tz='Asia/Tokyo') + idx = pd.date_range(start='2013-01-01', periods=4, freq='M', + name='idx', tz='Asia/Tokyo') expected_list = [pd.Timestamp('2013-01-31', tz='Asia/Tokyo'), pd.Timestamp('2013-02-28', tz='Asia/Tokyo'), pd.Timestamp('2013-03-31', tz='Asia/Tokyo'), @@ -99,8 +113,9 @@ def test_asobject_tolist(self): idx = DatetimeIndex([datetime(2013, 1, 1), datetime(2013, 1, 2), pd.NaT, datetime(2013, 1, 4)], name='idx') - expected_list = [pd.Timestamp('2013-01-01'), pd.Timestamp('2013-01-02'), - pd.NaT, pd.Timestamp('2013-01-04')] + expected_list = [pd.Timestamp('2013-01-01'), + pd.Timestamp('2013-01-02'), pd.NaT, + pd.Timestamp('2013-01-04')] expected = pd.Index(expected_list, dtype=object, name='idx') result = idx.asobject self.assertTrue(isinstance(result, Index)) @@ -144,22 +159,33 @@ def test_representation(self): idx.append(DatetimeIndex([], freq='D')) idx.append(DatetimeIndex(['2011-01-01'], freq='D')) 
idx.append(DatetimeIndex(['2011-01-01', '2011-01-02'], freq='D')) - idx.append(DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], freq='D')) - idx.append(DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'], - freq='H', tz='Asia/Tokyo')) - idx.append(DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', pd.NaT], - tz='US/Eastern')) - idx.append(DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', pd.NaT], - tz='UTC')) + idx.append(DatetimeIndex( + ['2011-01-01', '2011-01-02', '2011-01-03'], freq='D')) + idx.append(DatetimeIndex( + ['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00' + ], freq='H', tz='Asia/Tokyo')) + idx.append(DatetimeIndex( + ['2011-01-01 09:00', '2011-01-01 10:00', pd.NaT], tz='US/Eastern')) + idx.append(DatetimeIndex( + ['2011-01-01 09:00', '2011-01-01 10:00', pd.NaT], tz='UTC')) exp = [] exp.append("""DatetimeIndex([], dtype='datetime64[ns]', freq='D')""") - exp.append("""DatetimeIndex(['2011-01-01'], dtype='datetime64[ns]', freq='D')""") - exp.append("""DatetimeIndex(['2011-01-01', '2011-01-02'], dtype='datetime64[ns]', freq='D')""") - exp.append("""DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], dtype='datetime64[ns]', freq='D')""") - exp.append("""DatetimeIndex(['2011-01-01 09:00:00+09:00', '2011-01-01 10:00:00+09:00', '2011-01-01 11:00:00+09:00'], dtype='datetime64[ns, Asia/Tokyo]', freq='H')""") - exp.append("""DatetimeIndex(['2011-01-01 09:00:00-05:00', '2011-01-01 10:00:00-05:00', 'NaT'], dtype='datetime64[ns, US/Eastern]', freq=None)""") - exp.append("""DatetimeIndex(['2011-01-01 09:00:00+00:00', '2011-01-01 10:00:00+00:00', 'NaT'], dtype='datetime64[ns, UTC]', freq=None)""") + exp.append("DatetimeIndex(['2011-01-01'], dtype='datetime64[ns]', " + "freq='D')") + exp.append("DatetimeIndex(['2011-01-01', '2011-01-02'], " + "dtype='datetime64[ns]', freq='D')") + exp.append("DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], " + "dtype='datetime64[ns]', freq='D')") + 
exp.append("DatetimeIndex(['2011-01-01 09:00:00+09:00', " + "'2011-01-01 10:00:00+09:00', '2011-01-01 11:00:00+09:00']" + ", dtype='datetime64[ns, Asia/Tokyo]', freq='H')") + exp.append("DatetimeIndex(['2011-01-01 09:00:00-05:00', " + "'2011-01-01 10:00:00-05:00', 'NaT'], " + "dtype='datetime64[ns, US/Eastern]', freq=None)") + exp.append("DatetimeIndex(['2011-01-01 09:00:00+00:00', " + "'2011-01-01 10:00:00+00:00', 'NaT'], " + "dtype='datetime64[ns, UTC]', freq=None)""") with pd.option_context('display.width', 300): for indx, expected in zip(idx, exp): @@ -171,9 +197,10 @@ def test_representation_to_series(self): idx1 = DatetimeIndex([], freq='D') idx2 = DatetimeIndex(['2011-01-01'], freq='D') idx3 = DatetimeIndex(['2011-01-01', '2011-01-02'], freq='D') - idx4 = DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], freq='D') - idx5 = DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'], - freq='H', tz='Asia/Tokyo') + idx4 = DatetimeIndex( + ['2011-01-01', '2011-01-02', '2011-01-03'], freq='D') + idx5 = DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', + '2011-01-01 11:00'], freq='H', tz='Asia/Tokyo') idx6 = DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', pd.NaT], tz='US/Eastern') idx7 = DatetimeIndex(['2011-01-01 09:00', '2011-01-02 10:15']) @@ -207,8 +234,10 @@ def test_representation_to_series(self): dtype: datetime64[ns]""" with pd.option_context('display.width', 300): - for idx, expected in zip([idx1, idx2, idx3, idx4, idx5, idx6, idx7], - [exp1, exp2, exp3, exp4, exp5, exp6, exp7]): + for idx, expected in zip([idx1, idx2, idx3, idx4, + idx5, idx6, idx7], + [exp1, exp2, exp3, exp4, + exp5, exp6, exp7]): result = repr(Series(idx)) self.assertEqual(result, expected) @@ -217,22 +246,30 @@ def test_summary(self): idx1 = DatetimeIndex([], freq='D') idx2 = DatetimeIndex(['2011-01-01'], freq='D') idx3 = DatetimeIndex(['2011-01-01', '2011-01-02'], freq='D') - idx4 = DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], 
freq='D') - idx5 = DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'], + idx4 = DatetimeIndex( + ['2011-01-01', '2011-01-02', '2011-01-03'], freq='D') + idx5 = DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', + '2011-01-01 11:00'], freq='H', tz='Asia/Tokyo') idx6 = DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', pd.NaT], tz='US/Eastern') exp1 = """DatetimeIndex: 0 entries Freq: D""" + exp2 = """DatetimeIndex: 1 entries, 2011-01-01 to 2011-01-01 Freq: D""" + exp3 = """DatetimeIndex: 2 entries, 2011-01-01 to 2011-01-02 Freq: D""" + exp4 = """DatetimeIndex: 3 entries, 2011-01-01 to 2011-01-03 Freq: D""" - exp5 = """DatetimeIndex: 3 entries, 2011-01-01 09:00:00+09:00 to 2011-01-01 11:00:00+09:00 -Freq: H""" + + exp5 = ("DatetimeIndex: 3 entries, 2011-01-01 09:00:00+09:00 " + "to 2011-01-01 11:00:00+09:00\n" + "Freq: H") + exp6 = """DatetimeIndex: 3 entries, 2011-01-01 09:00:00-05:00 to NaT""" for idx, expected in zip([idx1, idx2, idx3, idx4, idx5, idx6], @@ -241,11 +278,14 @@ def test_summary(self): self.assertEqual(result, expected) def test_resolution(self): - for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T', 'S', 'L', 'U'], - ['day', 'day', 'day', 'day', - 'hour', 'minute', 'second', 'millisecond', 'microsecond']): + for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T', + 'S', 'L', 'U'], + ['day', 'day', 'day', 'day', 'hour', + 'minute', 'second', 'millisecond', + 'microsecond']): for tz in [None, 'Asia/Tokyo', 'US/Eastern']: - idx = pd.date_range(start='2013-04-01', periods=30, freq=freq, tz=tz) + idx = pd.date_range(start='2013-04-01', periods=30, freq=freq, + tz=tz) self.assertEqual(idx.resolution, expected) def test_add_iadd(self): @@ -263,7 +303,8 @@ def test_add_iadd(self): other3 = pd.DatetimeIndex([], tz=tz) expected3 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz) - for rng, other, expected in [(rng1, other1, expected1), (rng2, other2, expected2), + for rng, other, expected in [(rng1, other1, 
expected1), + (rng2, other2, expected2), (rng3, other3, expected3)]: # GH9094 with tm.assert_produces_warning(FutureWarning): @@ -279,20 +320,23 @@ def test_add_iadd(self): # offset offsets = [pd.offsets.Hour(2), timedelta(hours=2), - np.timedelta64(2, 'h'), Timedelta(hours=2)] + np.timedelta64(2, 'h'), Timedelta(hours=2)] for delta in offsets: rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz) result = rng + delta - expected = pd.date_range('2000-01-01 02:00', '2000-02-01 02:00', tz=tz) + expected = pd.date_range('2000-01-01 02:00', + '2000-02-01 02:00', tz=tz) tm.assert_index_equal(result, expected) rng += delta tm.assert_index_equal(rng, expected) # int - rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, tz=tz) + rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, + tz=tz) result = rng + 1 - expected = pd.date_range('2000-01-01 10:00', freq='H', periods=10, tz=tz) + expected = pd.date_range('2000-01-01 10:00', freq='H', periods=10, + tz=tz) tm.assert_index_equal(result, expected) rng += 1 tm.assert_index_equal(rng, expected) @@ -312,28 +356,32 @@ def test_sub_isub(self): other3 = pd.DatetimeIndex([], tz=tz) expected3 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz) - for rng, other, expected in [(rng1, other1, expected1), (rng2, other2, expected2), + for rng, other, expected in [(rng1, other1, expected1), + (rng2, other2, expected2), (rng3, other3, expected3)]: result_union = rng.difference(other) tm.assert_index_equal(result_union, expected) # offset - offsets = [pd.offsets.Hour(2), timedelta(hours=2), np.timedelta64(2, 'h'), - Timedelta(hours=2)] + offsets = [pd.offsets.Hour(2), timedelta(hours=2), + np.timedelta64(2, 'h'), Timedelta(hours=2)] for delta in offsets: rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz) result = rng - delta - expected = pd.date_range('1999-12-31 22:00', '2000-01-31 22:00', tz=tz) + expected = pd.date_range('1999-12-31 22:00', + '2000-01-31 22:00', tz=tz) tm.assert_index_equal(result, expected) 
rng -= delta tm.assert_index_equal(rng, expected) # int - rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, tz=tz) + rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, + tz=tz) result = rng - 1 - expected = pd.date_range('2000-01-01 08:00', freq='H', periods=10, tz=tz) + expected = pd.date_range('2000-01-01 08:00', freq='H', periods=10, + tz=tz) tm.assert_index_equal(result, expected) rng -= 1 tm.assert_index_equal(rng, expected) @@ -343,23 +391,29 @@ def test_value_counts_unique(self): for tz in [None, 'UTC', 'Asia/Tokyo', 'US/Eastern']: idx = pd.date_range('2011-01-01 09:00', freq='H', periods=10) # create repeated values, 'n'th element is repeated by n+1 times - idx = DatetimeIndex(np.repeat(idx.values, range(1, len(idx) + 1)), tz=tz) + idx = DatetimeIndex( + np.repeat(idx.values, range(1, len(idx) + 1)), tz=tz) - exp_idx = pd.date_range('2011-01-01 18:00', freq='-1H', periods=10, tz=tz) + exp_idx = pd.date_range('2011-01-01 18:00', freq='-1H', periods=10, + tz=tz) expected = Series(range(10, 0, -1), index=exp_idx, dtype='int64') tm.assert_series_equal(idx.value_counts(), expected) - expected = pd.date_range('2011-01-01 09:00', freq='H', periods=10, tz=tz) + expected = pd.date_range('2011-01-01 09:00', freq='H', periods=10, + tz=tz) tm.assert_index_equal(idx.unique(), expected) - idx = DatetimeIndex(['2013-01-01 09:00', '2013-01-01 09:00', '2013-01-01 09:00', - '2013-01-01 08:00', '2013-01-01 08:00', pd.NaT], tz=tz) + idx = DatetimeIndex(['2013-01-01 09:00', '2013-01-01 09:00', + '2013-01-01 09:00', '2013-01-01 08:00', + '2013-01-01 08:00', pd.NaT], tz=tz) - exp_idx = DatetimeIndex(['2013-01-01 09:00', '2013-01-01 08:00'], tz=tz) + exp_idx = DatetimeIndex( + ['2013-01-01 09:00', '2013-01-01 08:00'], tz=tz) expected = Series([3, 2], index=exp_idx) tm.assert_series_equal(idx.value_counts(), expected) - exp_idx = DatetimeIndex(['2013-01-01 09:00', '2013-01-01 08:00', pd.NaT], tz=tz) + exp_idx = DatetimeIndex( + ['2013-01-01 09:00', 
'2013-01-01 08:00', pd.NaT], tz=tz) expected = Series([3, 2, 1], index=exp_idx) tm.assert_series_equal(idx.value_counts(dropna=False), expected) @@ -367,15 +421,18 @@ def test_value_counts_unique(self): def test_nonunique_contains(self): # GH 9512 - for idx in map(DatetimeIndex, ([0, 1, 0], [0, 0, -1], [0, -1, -1], - ['2015', '2015', '2016'], ['2015', '2015', '2014'])): + for idx in map(DatetimeIndex, + ([0, 1, 0], [0, 0, -1], [0, -1, -1], + ['2015', '2015', '2016'], ['2015', '2015', '2014'])): tm.assertIn(idx[0], idx) def test_order(self): # with freq - idx1 = DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], freq='D', name='idx') - idx2 = DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'], - freq='H', tz='Asia/Tokyo', name='tzidx') + idx1 = DatetimeIndex( + ['2011-01-01', '2011-01-02', '2011-01-03'], freq='D', name='idx') + idx2 = DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', + '2011-01-01 11:00'], freq='H', + tz='Asia/Tokyo', name='tzidx') for idx in [idx1, idx2]: ordered = idx.sort_values() @@ -393,7 +450,8 @@ def test_order(self): self.assert_numpy_array_equal(indexer, np.array([0, 1, 2])) self.assertEqual(ordered.freq, idx.freq) - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) + ordered, indexer = idx.sort_values(return_indexer=True, + ascending=False) expected = idx[::-1] self.assert_index_equal(ordered, expected) self.assert_numpy_array_equal(indexer, np.array([2, 1, 0])) @@ -409,14 +467,17 @@ def test_order(self): idx2 = DatetimeIndex(['2011-01-01', '2011-01-03', '2011-01-05', '2011-01-02', '2011-01-01'], tz='Asia/Tokyo', name='idx2') - exp2 = DatetimeIndex(['2011-01-01', '2011-01-01', '2011-01-02', - '2011-01-03', '2011-01-05'], - tz='Asia/Tokyo', name='idx2') - idx3 = DatetimeIndex([pd.NaT, '2011-01-03', '2011-01-05', - '2011-01-02', pd.NaT], name='idx3') - exp3 = DatetimeIndex([pd.NaT, pd.NaT, '2011-01-02', '2011-01-03', - '2011-01-05'], name='idx3') + # TODO(wesm): unused? 
+ + # exp2 = DatetimeIndex(['2011-01-01', '2011-01-01', '2011-01-02', + # '2011-01-03', '2011-01-05'], + # tz='Asia/Tokyo', name='idx2') + + # idx3 = DatetimeIndex([pd.NaT, '2011-01-03', '2011-01-05', + # '2011-01-02', pd.NaT], name='idx3') + # exp3 = DatetimeIndex([pd.NaT, pd.NaT, '2011-01-02', '2011-01-03', + # '2011-01-05'], name='idx3') for idx, expected in [(idx1, exp1), (idx1, exp1), (idx1, exp1)]: ordered = idx.sort_values() @@ -432,14 +493,16 @@ def test_order(self): self.assert_numpy_array_equal(indexer, np.array([0, 4, 3, 1, 2])) self.assertIsNone(ordered.freq) - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) + ordered, indexer = idx.sort_values(return_indexer=True, + ascending=False) self.assert_index_equal(ordered, expected[::-1]) self.assert_numpy_array_equal(indexer, np.array([2, 1, 3, 4, 0])) self.assertIsNone(ordered.freq) def test_getitem(self): idx1 = pd.date_range('2011-01-01', '2011-01-31', freq='D', name='idx') - idx2 = pd.date_range('2011-01-01', '2011-01-31', freq='D', tz='Asia/Tokyo', name='idx') + idx2 = pd.date_range('2011-01-01', '2011-01-31', freq='D', + tz='Asia/Tokyo', name='idx') for idx in [idx1, idx2]: result = idx[0] @@ -471,22 +534,23 @@ def test_getitem(self): self.assertEqual(result.freq, expected.freq) def test_drop_duplicates_metadata(self): - #GH 10115 + # GH 10115 idx = pd.date_range('2011-01-01', '2011-01-31', freq='D', name='idx') result = idx.drop_duplicates() self.assert_index_equal(idx, result) self.assertEqual(idx.freq, result.freq) idx_dup = idx.append(idx) - self.assertIsNone(idx_dup.freq) # freq is reset + self.assertIsNone(idx_dup.freq) # freq is reset result = idx_dup.drop_duplicates() self.assert_index_equal(idx, result) self.assertIsNone(result.freq) def test_take(self): - #GH 10295 + # GH 10295 idx1 = pd.date_range('2011-01-01', '2011-01-31', freq='D', name='idx') - idx2 = pd.date_range('2011-01-01', '2011-01-31', freq='D', tz='Asia/Tokyo', name='idx') + idx2 = 
pd.date_range('2011-01-01', '2011-01-31', freq='D', + tz='Asia/Tokyo', name='idx') for idx in [idx1, idx2]: result = idx.take([0]) @@ -511,42 +575,46 @@ def test_take(self): self.assertEqual(result.freq, expected.freq) result = idx.take([3, 2, 5]) - expected = DatetimeIndex(['2011-01-04', '2011-01-03', '2011-01-06'], + expected = DatetimeIndex(['2011-01-04', '2011-01-03', + '2011-01-06'], freq=None, tz=idx.tz, name='idx') self.assert_index_equal(result, expected) self.assertIsNone(result.freq) result = idx.take([-3, 2, 5]) - expected = DatetimeIndex(['2011-01-29', '2011-01-03', '2011-01-06'], + expected = DatetimeIndex(['2011-01-29', '2011-01-03', + '2011-01-06'], freq=None, tz=idx.tz, name='idx') self.assert_index_equal(result, expected) self.assertIsNone(result.freq) def test_infer_freq(self): # GH 11018 - for freq in ['A', '2A', '-2A', 'Q', '-1Q', 'M', '-1M', 'D', '3D', '-3D', - 'W', '-1W', 'H', '2H', '-2H', 'T', '2T', 'S', '-3S']: + for freq in ['A', '2A', '-2A', 'Q', '-1Q', 'M', '-1M', 'D', '3D', + '-3D', 'W', '-1W', 'H', '2H', '-2H', 'T', '2T', 'S', + '-3S']: idx = pd.date_range('2011-01-01 09:00:00', freq=freq, periods=10) result = pd.DatetimeIndex(idx.asi8, freq='infer') tm.assert_index_equal(idx, result) self.assertEqual(result.freq, freq) -class TestTimedeltaIndexOps(Ops): +class TestTimedeltaIndexOps(Ops): def setUp(self): super(TestTimedeltaIndexOps, self).setUp() mask = lambda x: isinstance(x, TimedeltaIndex) - self.is_valid_objs = [ o for o in self.objs if mask(o) ] - self.not_valid_objs = [ ] + self.is_valid_objs = [o for o in self.objs if mask(o)] + self.not_valid_objs = [] def test_ops_properties(self): - self.check_ops_properties(['days','hours','minutes','seconds','milliseconds']) - self.check_ops_properties(['microseconds','nanoseconds']) + self.check_ops_properties(['days', 'hours', 'minutes', 'seconds', + 'milliseconds']) + self.check_ops_properties(['microseconds', 'nanoseconds']) def test_asobject_tolist(self): idx = timedelta_range(start='1 
days', periods=4, freq='D', name='idx') - expected_list = [Timedelta('1 days'),Timedelta('2 days'),Timedelta('3 days'), - Timedelta('4 days')] + expected_list = [Timedelta('1 days'), Timedelta('2 days'), + Timedelta('3 days'), Timedelta('4 days')] expected = pd.Index(expected_list, dtype=object, name='idx') result = idx.asobject self.assertTrue(isinstance(result, Index)) @@ -556,9 +624,9 @@ def test_asobject_tolist(self): self.assertEqual(result.name, expected.name) self.assertEqual(idx.tolist(), expected_list) - idx = TimedeltaIndex([timedelta(days=1),timedelta(days=2),pd.NaT, + idx = TimedeltaIndex([timedelta(days=1), timedelta(days=2), pd.NaT, timedelta(days=4)], name='idx') - expected_list = [Timedelta('1 days'),Timedelta('2 days'),pd.NaT, + expected_list = [Timedelta('1 days'), Timedelta('2 days'), pd.NaT, Timedelta('4 days')] expected = pd.Index(expected_list, dtype=object, name='idx') result = idx.asobject @@ -604,15 +672,19 @@ def test_representation(self): exp1 = """TimedeltaIndex([], dtype='timedelta64[ns]', freq='D')""" - exp2 = """TimedeltaIndex(['1 days'], dtype='timedelta64[ns]', freq='D')""" + exp2 = ("TimedeltaIndex(['1 days'], dtype='timedelta64[ns]', " + "freq='D')") - exp3 = """TimedeltaIndex(['1 days', '2 days'], dtype='timedelta64[ns]', freq='D')""" + exp3 = ("TimedeltaIndex(['1 days', '2 days'], " + "dtype='timedelta64[ns]', freq='D')") - exp4 = """TimedeltaIndex(['1 days', '2 days', '3 days'], dtype='timedelta64[ns]', freq='D')""" + exp4 = ("TimedeltaIndex(['1 days', '2 days', '3 days'], " + "dtype='timedelta64[ns]', freq='D')") - exp5 = """TimedeltaIndex(['1 days 00:00:01', '2 days 00:00:00', '3 days 00:00:00'], dtype='timedelta64[ns]', freq=None)""" + exp5 = ("TimedeltaIndex(['1 days 00:00:01', '2 days 00:00:00', " + "'3 days 00:00:00'], dtype='timedelta64[ns]', freq=None)") - with pd.option_context('display.width',300): + with pd.option_context('display.width', 300): for idx, expected in zip([idx1, idx2, idx3, idx4, idx5], [exp1, exp2, 
exp3, exp4, exp5]): for func in ['__repr__', '__unicode__', '__str__']: @@ -645,7 +717,7 @@ def test_representation_to_series(self): 2 3 days 00:00:00 dtype: timedelta64[ns]""" - with pd.option_context('display.width',300): + with pd.option_context('display.width', 300): for idx, expected in zip([idx1, idx2, idx3, idx4, idx5], [exp1, exp2, exp3, exp4, exp5]): result = repr(pd.Series(idx)) @@ -661,13 +733,18 @@ def test_summary(self): exp1 = """TimedeltaIndex: 0 entries Freq: D""" + exp2 = """TimedeltaIndex: 1 entries, 1 days to 1 days Freq: D""" + exp3 = """TimedeltaIndex: 2 entries, 1 days to 2 days Freq: D""" + exp4 = """TimedeltaIndex: 3 entries, 1 days to 3 days Freq: D""" - exp5 = """TimedeltaIndex: 3 entries, 1 days 00:00:01 to 3 days 00:00:00""" + + exp5 = ("TimedeltaIndex: 3 entries, 1 days 00:00:01 to 3 days " + "00:00:00") for idx, expected in zip([idx1, idx2, idx3, idx4, idx5], [exp1, exp2, exp3, exp4, exp5]): @@ -680,12 +757,13 @@ def test_add_iadd(self): # offset offsets = [pd.offsets.Hour(2), timedelta(hours=2), - np.timedelta64(2, 'h'), Timedelta(hours=2)] + np.timedelta64(2, 'h'), Timedelta(hours=2)] for delta in offsets: - rng = timedelta_range('1 days','10 days') + rng = timedelta_range('1 days', '10 days') result = rng + delta - expected = timedelta_range('1 days 02:00:00','10 days 02:00:00',freq='D') + expected = timedelta_range('1 days 02:00:00', '10 days 02:00:00', + freq='D') tm.assert_index_equal(result, expected) rng += delta tm.assert_index_equal(rng, expected) @@ -703,11 +781,11 @@ def test_sub_isub(self): # only test adding/sub offsets as - is now numeric # offset - offsets = [pd.offsets.Hour(2), timedelta(hours=2), np.timedelta64(2, 'h'), - Timedelta(hours=2)] + offsets = [pd.offsets.Hour(2), timedelta(hours=2), + np.timedelta64(2, 'h'), Timedelta(hours=2)] for delta in offsets: - rng = timedelta_range('1 days','10 days') + rng = timedelta_range('1 days', '10 days') result = rng - delta expected = timedelta_range('0 days 22:00:00', '9 
days 22:00:00') tm.assert_index_equal(result, expected) @@ -724,10 +802,10 @@ def test_sub_isub(self): def test_ops_compat(self): - offsets = [pd.offsets.Hour(2), timedelta(hours=2), np.timedelta64(2, 'h'), - Timedelta(hours=2)] + offsets = [pd.offsets.Hour(2), timedelta(hours=2), + np.timedelta64(2, 'h'), Timedelta(hours=2)] - rng = timedelta_range('1 days','10 days',name='foo') + rng = timedelta_range('1 days', '10 days', name='foo') # multiply for offset in offsets: @@ -744,10 +822,10 @@ def test_ops_compat(self): expected = Float64Index([12, np.nan, 24], name='foo') for offset in offsets: result = rng / offset - tm.assert_index_equal(result,expected) + tm.assert_index_equal(result, expected) # don't allow division by NaT (maybe could in the future) - self.assertRaises(TypeError, lambda : rng / pd.NaT) + self.assertRaises(TypeError, lambda: rng / pd.NaT) def test_subtraction_ops(self): @@ -757,10 +835,10 @@ def test_subtraction_ops(self): td = Timedelta('1 days') dt = Timestamp('20130101') - self.assertRaises(TypeError, lambda : tdi - dt) - self.assertRaises(TypeError, lambda : tdi - dti) - self.assertRaises(TypeError, lambda : td - dt) - self.assertRaises(TypeError, lambda : td - dti) + self.assertRaises(TypeError, lambda: tdi - dt) + self.assertRaises(TypeError, lambda: tdi - dti) + self.assertRaises(TypeError, lambda: td - dt) + self.assertRaises(TypeError, lambda: td - dti) result = dt - dti expected = TimedeltaIndex(['0 days', '-1 days', '-2 days'], name='bar') @@ -779,7 +857,8 @@ def test_subtraction_ops(self): tm.assert_index_equal(result, expected, check_names=False) result = dti - td - expected = DatetimeIndex(['20121231', '20130101', '20130102'], name='bar') + expected = DatetimeIndex( + ['20121231', '20130101', '20130102'], name='bar') tm.assert_index_equal(result, expected, check_names=False) result = dt - tdi @@ -789,17 +868,17 @@ def test_subtraction_ops(self): def test_subtraction_ops_with_tz(self): # check that dt/dti subtraction ops with tz are
validated - dti = date_range('20130101',periods=3) + dti = date_range('20130101', periods=3) ts = Timestamp('20130101') dt = ts.to_datetime() - dti_tz = date_range('20130101',periods=3).tz_localize('US/Eastern') + dti_tz = date_range('20130101', periods=3).tz_localize('US/Eastern') ts_tz = Timestamp('20130101').tz_localize('US/Eastern') ts_tz2 = Timestamp('20130101').tz_localize('CET') dt_tz = ts_tz.to_datetime() td = Timedelta('1 days') def _check(result, expected): - self.assertEqual(result,expected) + self.assertEqual(result, expected) self.assertIsInstance(result, Timedelta) # scalars @@ -816,43 +895,44 @@ def _check(result, expected): _check(result, expected) # tz mismatches - self.assertRaises(TypeError, lambda : dt_tz - ts) - self.assertRaises(TypeError, lambda : dt_tz - dt) - self.assertRaises(TypeError, lambda : dt_tz - ts_tz2) - self.assertRaises(TypeError, lambda : dt - dt_tz) - self.assertRaises(TypeError, lambda : ts - dt_tz) - self.assertRaises(TypeError, lambda : ts_tz2 - ts) - self.assertRaises(TypeError, lambda : ts_tz2 - dt) - self.assertRaises(TypeError, lambda : ts_tz - ts_tz2) + self.assertRaises(TypeError, lambda: dt_tz - ts) + self.assertRaises(TypeError, lambda: dt_tz - dt) + self.assertRaises(TypeError, lambda: dt_tz - ts_tz2) + self.assertRaises(TypeError, lambda: dt - dt_tz) + self.assertRaises(TypeError, lambda: ts - dt_tz) + self.assertRaises(TypeError, lambda: ts_tz2 - ts) + self.assertRaises(TypeError, lambda: ts_tz2 - dt) + self.assertRaises(TypeError, lambda: ts_tz - ts_tz2) # with dti - self.assertRaises(TypeError, lambda : dti - ts_tz) - self.assertRaises(TypeError, lambda : dti_tz - ts) - self.assertRaises(TypeError, lambda : dti_tz - ts_tz2) + self.assertRaises(TypeError, lambda: dti - ts_tz) + self.assertRaises(TypeError, lambda: dti_tz - ts) + self.assertRaises(TypeError, lambda: dti_tz - ts_tz2) - result = dti_tz-dt_tz - expected = TimedeltaIndex(['0 days','1 days','2 days']) - tm.assert_index_equal(result,expected) + result 
= dti_tz - dt_tz + expected = TimedeltaIndex(['0 days', '1 days', '2 days']) + tm.assert_index_equal(result, expected) - result = dt_tz-dti_tz - expected = TimedeltaIndex(['0 days','-1 days','-2 days']) - tm.assert_index_equal(result,expected) + result = dt_tz - dti_tz + expected = TimedeltaIndex(['0 days', '-1 days', '-2 days']) + tm.assert_index_equal(result, expected) - result = dti_tz-ts_tz - expected = TimedeltaIndex(['0 days','1 days','2 days']) - tm.assert_index_equal(result,expected) + result = dti_tz - ts_tz + expected = TimedeltaIndex(['0 days', '1 days', '2 days']) + tm.assert_index_equal(result, expected) - result = ts_tz-dti_tz - expected = TimedeltaIndex(['0 days','-1 days','-2 days']) - tm.assert_index_equal(result,expected) + result = ts_tz - dti_tz + expected = TimedeltaIndex(['0 days', '-1 days', '-2 days']) + tm.assert_index_equal(result, expected) result = td - td expected = Timedelta('0 days') _check(result, expected) result = dti_tz - td - expected = DatetimeIndex(['20121231','20130101','20130102'],tz='US/Eastern') - tm.assert_index_equal(result,expected) + expected = DatetimeIndex( + ['20121231', '20130101', '20130102'], tz='US/Eastern') + tm.assert_index_equal(result, expected) def test_dti_dti_deprecated_ops(self): @@ -860,51 +940,53 @@ # change to return subtraction -> TimeDeltaIndex in 0.17.0 # should move to the appropriate sections above - dti = date_range('20130101',periods=3) - dti_tz = date_range('20130101',periods=3).tz_localize('US/Eastern') + dti = date_range('20130101', periods=3) + dti_tz = date_range('20130101', periods=3).tz_localize('US/Eastern') with tm.assert_produces_warning(FutureWarning): - result = dti-dti + result = dti - dti expected = Index([]) - tm.assert_index_equal(result,expected) + tm.assert_index_equal(result, expected) with tm.assert_produces_warning(FutureWarning): - result = dti+dti + result = dti + dti expected = dti - tm.assert_index_equal(result,expected) +
tm.assert_index_equal(result, expected) with tm.assert_produces_warning(FutureWarning): - result = dti_tz-dti_tz + result = dti_tz - dti_tz expected = Index([]) - tm.assert_index_equal(result,expected) + tm.assert_index_equal(result, expected) with tm.assert_produces_warning(FutureWarning): - result = dti_tz+dti_tz + result = dti_tz + dti_tz expected = dti_tz - tm.assert_index_equal(result,expected) + tm.assert_index_equal(result, expected) with tm.assert_produces_warning(FutureWarning): - result = dti_tz-dti + result = dti_tz - dti expected = dti_tz - tm.assert_index_equal(result,expected) + tm.assert_index_equal(result, expected) with tm.assert_produces_warning(FutureWarning): - result = dti-dti_tz + result = dti - dti_tz expected = dti - tm.assert_index_equal(result,expected) + tm.assert_index_equal(result, expected) with tm.assert_produces_warning(FutureWarning): - self.assertRaises(TypeError, lambda : dti_tz+dti) + self.assertRaises(TypeError, lambda: dti_tz + dti) with tm.assert_produces_warning(FutureWarning): - self.assertRaises(TypeError, lambda : dti+dti_tz) + self.assertRaises(TypeError, lambda: dti + dti_tz) def test_dti_tdi_numeric_ops(self): # These are normally union/diff set-like ops tdi = TimedeltaIndex(['1 days', pd.NaT, '2 days'], name='foo') dti = date_range('20130101', periods=3, name='bar') - td = Timedelta('1 days') - dt = Timestamp('20130101') + + # TODO(wesm): unused? 
+ # td = Timedelta('1 days') + # dt = Timestamp('20130101') result = tdi - tdi expected = TimedeltaIndex(['0 days', pd.NaT, '0 days'], name='foo') @@ -914,7 +996,7 @@ def test_dti_tdi_numeric_ops(self): expected = TimedeltaIndex(['2 days', pd.NaT, '4 days'], name='foo') tm.assert_index_equal(result, expected) - result = dti - tdi # name will be reset + result = dti - tdi # name will be reset expected = DatetimeIndex(['20121231', pd.NaT, '20130101']) tm.assert_index_equal(result, expected) @@ -943,20 +1025,20 @@ def test_addition_ops(self): tm.assert_index_equal(result, expected) # unequal length - self.assertRaises(ValueError, lambda : tdi + dti[0:1]) - self.assertRaises(ValueError, lambda : tdi[0:1] + dti) + self.assertRaises(ValueError, lambda: tdi + dti[0:1]) + self.assertRaises(ValueError, lambda: tdi[0:1] + dti) # random indexes - self.assertRaises(TypeError, lambda : tdi + Int64Index([1,2,3])) + self.assertRaises(TypeError, lambda: tdi + Int64Index([1, 2, 3])) # this is a union! - #self.assertRaises(TypeError, lambda : Int64Index([1,2,3]) + tdi) + # self.assertRaises(TypeError, lambda : Int64Index([1,2,3]) + tdi) - result = tdi + dti # name will be reset + result = tdi + dti # name will be reset expected = DatetimeIndex(['20130102', pd.NaT, '20130105']) tm.assert_index_equal(result, expected) - result = dti + tdi # name will be reset + result = dti + tdi # name will be reset expected = DatetimeIndex(['20130102', pd.NaT, '20130105']) tm.assert_index_equal(result, expected) @@ -982,14 +1064,16 @@ def test_value_counts_unique(self): expected = timedelta_range('1 days 09:00:00', freq='H', periods=10) tm.assert_index_equal(idx.unique(), expected) - idx = TimedeltaIndex(['1 days 09:00:00', '1 days 09:00:00', '1 days 09:00:00', - '1 days 08:00:00', '1 days 08:00:00', pd.NaT]) + idx = TimedeltaIndex( + ['1 days 09:00:00', '1 days 09:00:00', '1 days 09:00:00', + '1 days 08:00:00', '1 days 08:00:00', pd.NaT]) exp_idx = TimedeltaIndex(['1 days 09:00:00', '1 days 
08:00:00']) expected = Series([3, 2], index=exp_idx) tm.assert_series_equal(idx.value_counts(), expected) - exp_idx = TimedeltaIndex(['1 days 09:00:00', '1 days 08:00:00', pd.NaT]) + exp_idx = TimedeltaIndex(['1 days 09:00:00', '1 days 08:00:00', pd.NaT + ]) expected = Series([3, 2, 1], index=exp_idx) tm.assert_series_equal(idx.value_counts(dropna=False), expected) @@ -1003,16 +1087,18 @@ def test_nonunique_contains(self): tm.assertIn(idx[0], idx) def test_unknown_attribute(self): - #GH 9680 - tdi = pd.timedelta_range(start=0,periods=10,freq='1s') - ts = pd.Series(np.random.normal(size=10),index=tdi) - self.assertNotIn('foo',ts.__dict__.keys()) - self.assertRaises(AttributeError,lambda : ts.foo) + # GH 9680 + tdi = pd.timedelta_range(start=0, periods=10, freq='1s') + ts = pd.Series(np.random.normal(size=10), index=tdi) + self.assertNotIn('foo', ts.__dict__.keys()) + self.assertRaises(AttributeError, lambda: ts.foo) def test_order(self): - #GH 10295 - idx1 = TimedeltaIndex(['1 day', '2 day', '3 day'], freq='D', name='idx') - idx2 = TimedeltaIndex(['1 hour', '2 hour', '3 hour'], freq='H', name='idx') + # GH 10295 + idx1 = TimedeltaIndex(['1 day', '2 day', '3 day'], freq='D', + name='idx') + idx2 = TimedeltaIndex( + ['1 hour', '2 hour', '3 hour'], freq='H', name='idx') for idx in [idx1, idx2]: ordered = idx.sort_values() @@ -1030,7 +1116,8 @@ def test_order(self): self.assert_numpy_array_equal(indexer, np.array([0, 1, 2])) self.assertEqual(ordered.freq, idx.freq) - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) + ordered, indexer = idx.sort_values(return_indexer=True, + ascending=False) self.assert_index_equal(ordered, idx[::-1]) self.assertEqual(ordered.freq, expected.freq) self.assertEqual(ordered.freq.n, -1) @@ -1042,13 +1129,15 @@ def test_order(self): idx2 = TimedeltaIndex(['1 day', '3 day', '5 day', '2 day', '1 day'], name='idx2') - exp2 = TimedeltaIndex(['1 day', '1 day', '2 day', - '3 day', '5 day'], name='idx2') - idx3 = 
TimedeltaIndex([pd.NaT, '3 minute', '5 minute', - '2 minute', pd.NaT], name='idx3') - exp3 = TimedeltaIndex([pd.NaT, pd.NaT, '2 minute', '3 minute', - '5 minute'], name='idx3') + # TODO(wesm): unused? + # exp2 = TimedeltaIndex(['1 day', '1 day', '2 day', + # '3 day', '5 day'], name='idx2') + + # idx3 = TimedeltaIndex([pd.NaT, '3 minute', '5 minute', + # '2 minute', pd.NaT], name='idx3') + # exp3 = TimedeltaIndex([pd.NaT, pd.NaT, '2 minute', '3 minute', + # '5 minute'], name='idx3') for idx, expected in [(idx1, exp1), (idx1, exp1), (idx1, exp1)]: ordered = idx.sort_values() @@ -1064,7 +1153,8 @@ def test_order(self): self.assert_numpy_array_equal(indexer, np.array([0, 4, 3, 1, 2])) self.assertIsNone(ordered.freq) - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) + ordered, indexer = idx.sort_values(return_indexer=True, + ascending=False) self.assert_index_equal(ordered, expected[::-1]) self.assert_numpy_array_equal(indexer, np.array([2, 1, 3, 4, 0])) self.assertIsNone(ordered.freq) @@ -1077,41 +1167,45 @@ def test_getitem(self): self.assertEqual(result, pd.Timedelta('1 day')) result = idx[0:5] - expected = pd.timedelta_range('1 day', '5 day', freq='D', name='idx') + expected = pd.timedelta_range('1 day', '5 day', freq='D', + name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) result = idx[0:10:2] - expected = pd.timedelta_range('1 day', '9 day', freq='2D', name='idx') + expected = pd.timedelta_range('1 day', '9 day', freq='2D', + name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) result = idx[-20:-5:3] - expected = pd.timedelta_range('12 day', '24 day', freq='3D', name='idx') + expected = pd.timedelta_range('12 day', '24 day', freq='3D', + name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) result = idx[4::-1] - expected = TimedeltaIndex(['5 day', '4 day', '3 day', '2 day', '1 day'], + expected = 
TimedeltaIndex(['5 day', '4 day', '3 day', + '2 day', '1 day'], freq='-1D', name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) def test_drop_duplicates_metadata(self): - #GH 10115 + # GH 10115 idx = pd.timedelta_range('1 day', '31 day', freq='D', name='idx') result = idx.drop_duplicates() self.assert_index_equal(idx, result) self.assertEqual(idx.freq, result.freq) idx_dup = idx.append(idx) - self.assertIsNone(idx_dup.freq) # freq is reset + self.assertIsNone(idx_dup.freq) # freq is reset result = idx_dup.drop_duplicates() self.assert_index_equal(idx, result) self.assertIsNone(result.freq) def test_take(self): - #GH 10295 + # GH 10295 idx1 = pd.timedelta_range('1 day', '31 day', freq='D', name='idx') for idx in [idx1]: @@ -1122,17 +1216,20 @@ def test_take(self): self.assertEqual(result, pd.Timedelta('31 day')) result = idx.take([0, 1, 2]) - expected = pd.timedelta_range('1 day', '3 day', freq='D', name='idx') + expected = pd.timedelta_range('1 day', '3 day', freq='D', + name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) result = idx.take([0, 2, 4]) - expected = pd.timedelta_range('1 day', '5 day', freq='2D', name='idx') + expected = pd.timedelta_range('1 day', '5 day', freq='2D', + name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) result = idx.take([7, 4, 1]) - expected = pd.timedelta_range('8 day', '2 day', freq='-3D', name='idx') + expected = pd.timedelta_range('8 day', '2 day', freq='-3D', + name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) @@ -1148,7 +1245,8 @@ def test_take(self): def test_infer_freq(self): # GH 11018 - for freq in ['D', '3D', '-3D', 'H', '2H', '-2H', 'T', '2T', 'S', '-3S']: + for freq in ['D', '3D', '-3D', 'H', '2H', '-2H', 'T', '2T', 'S', '-3S' + ]: idx = pd.timedelta_range('1', freq=freq, periods=10) result = pd.TimedeltaIndex(idx.asi8, freq='infer') 
tm.assert_index_equal(idx, result) @@ -1156,21 +1254,27 @@ def test_infer_freq(self): class TestPeriodIndexOps(Ops): - def setUp(self): super(TestPeriodIndexOps, self).setUp() - mask = lambda x: isinstance(x, DatetimeIndex) or isinstance(x, PeriodIndex) - self.is_valid_objs = [ o for o in self.objs if mask(o) ] - self.not_valid_objs = [ o for o in self.objs if not mask(o) ] + mask = lambda x: (isinstance(x, DatetimeIndex) or + isinstance(x, PeriodIndex)) + self.is_valid_objs = [o for o in self.objs if mask(o)] + self.not_valid_objs = [o for o in self.objs if not mask(o)] def test_ops_properties(self): - self.check_ops_properties(['year','month','day','hour','minute','second','weekofyear','week','dayofweek','dayofyear','quarter']) - self.check_ops_properties(['qyear'], lambda x: isinstance(x,PeriodIndex)) + self.check_ops_properties( + ['year', 'month', 'day', 'hour', 'minute', 'second', 'weekofyear', + 'week', 'dayofweek', 'dayofyear', 'quarter']) + self.check_ops_properties(['qyear'], + lambda x: isinstance(x, PeriodIndex)) def test_asobject_tolist(self): - idx = pd.period_range(start='2013-01-01', periods=4, freq='M', name='idx') - expected_list = [pd.Period('2013-01-31', freq='M'), pd.Period('2013-02-28', freq='M'), - pd.Period('2013-03-31', freq='M'), pd.Period('2013-04-30', freq='M')] + idx = pd.period_range(start='2013-01-01', periods=4, freq='M', + name='idx') + expected_list = [pd.Period('2013-01-31', freq='M'), + pd.Period('2013-02-28', freq='M'), + pd.Period('2013-03-31', freq='M'), + pd.Period('2013-04-30', freq='M')] expected = pd.Index(expected_list, dtype=object, name='idx') result = idx.asobject self.assertTrue(isinstance(result, Index)) @@ -1179,9 +1283,12 @@ def test_asobject_tolist(self): self.assertEqual(result.name, expected.name) self.assertEqual(idx.tolist(), expected_list) - idx = PeriodIndex(['2013-01-01', '2013-01-02', 'NaT', '2013-01-04'], freq='D', name='idx') - expected_list = [pd.Period('2013-01-01', freq='D'), pd.Period('2013-01-02', 
freq='D'), - pd.Period('NaT', freq='D'), pd.Period('2013-01-04', freq='D')] + idx = PeriodIndex(['2013-01-01', '2013-01-02', 'NaT', + '2013-01-04'], freq='D', name='idx') + expected_list = [pd.Period('2013-01-01', freq='D'), + pd.Period('2013-01-02', freq='D'), + pd.Period('NaT', freq='D'), + pd.Period('2013-01-04', freq='D')] expected = pd.Index(expected_list, dtype=object, name='idx') result = idx.asobject self.assertTrue(isinstance(result, Index)) @@ -1207,7 +1314,7 @@ def test_minmax(self): # non-monotonic idx2 = pd.PeriodIndex(['2011-01-01', pd.NaT, '2011-01-03', - '2011-01-02', pd.NaT], freq='D') + '2011-01-02', pd.NaT], freq='D') self.assertFalse(idx2.is_monotonic) for idx in [idx1, idx2]: @@ -1240,9 +1347,11 @@ def test_representation(self): idx1 = PeriodIndex([], freq='D') idx2 = PeriodIndex(['2011-01-01'], freq='D') idx3 = PeriodIndex(['2011-01-01', '2011-01-02'], freq='D') - idx4 = PeriodIndex(['2011-01-01', '2011-01-02', '2011-01-03'], freq='D') + idx4 = PeriodIndex( + ['2011-01-01', '2011-01-02', '2011-01-03'], freq='D') idx5 = PeriodIndex(['2011', '2012', '2013'], freq='A') - idx6 = PeriodIndex(['2011-01-01 09:00', '2012-02-01 10:00', 'NaT'], freq='H') + idx6 = PeriodIndex( + ['2011-01-01 09:00', '2012-02-01 10:00', 'NaT'], freq='H') idx7 = pd.period_range('2013Q1', periods=1, freq="Q") idx8 = pd.period_range('2013Q1', periods=2, freq="Q") @@ -1252,22 +1361,30 @@ def test_representation(self): exp2 = """PeriodIndex(['2011-01-01'], dtype='int64', freq='D')""" - exp3 = """PeriodIndex(['2011-01-01', '2011-01-02'], dtype='int64', freq='D')""" + exp3 = ("PeriodIndex(['2011-01-01', '2011-01-02'], dtype='int64', " + "freq='D')") - exp4 = """PeriodIndex(['2011-01-01', '2011-01-02', '2011-01-03'], dtype='int64', freq='D')""" + exp4 = ("PeriodIndex(['2011-01-01', '2011-01-02', '2011-01-03'], " + "dtype='int64', freq='D')") - exp5 = """PeriodIndex(['2011', '2012', '2013'], dtype='int64', freq='A-DEC')""" + exp5 = ("PeriodIndex(['2011', '2012', '2013'], 
dtype='int64', " + "freq='A-DEC')") - exp6 = """PeriodIndex(['2011-01-01 09:00', '2012-02-01 10:00', 'NaT'], dtype='int64', freq='H')""" + exp6 = ("PeriodIndex(['2011-01-01 09:00', '2012-02-01 10:00', 'NaT'], " + "dtype='int64', freq='H')") exp7 = """PeriodIndex(['2013Q1'], dtype='int64', freq='Q-DEC')""" - exp8 = """PeriodIndex(['2013Q1', '2013Q2'], dtype='int64', freq='Q-DEC')""" + exp8 = ("PeriodIndex(['2013Q1', '2013Q2'], dtype='int64', " + "freq='Q-DEC')") - exp9 = """PeriodIndex(['2013Q1', '2013Q2', '2013Q3'], dtype='int64', freq='Q-DEC')""" + exp9 = ("PeriodIndex(['2013Q1', '2013Q2', '2013Q3'], dtype='int64', " + "freq='Q-DEC')") - for idx, expected in zip([idx1, idx2, idx3, idx4, idx5, idx6, idx7, idx8, idx9], - [exp1, exp2, exp3, exp4, exp5, exp6, exp7, exp8, exp9]): + for idx, expected in zip([idx1, idx2, idx3, idx4, idx5, + idx6, idx7, idx8, idx9], + [exp1, exp2, exp3, exp4, exp5, + exp6, exp7, exp8, exp9]): for func in ['__repr__', '__unicode__', '__str__']: result = getattr(idx, func)() self.assertEqual(result, expected) @@ -1277,9 +1394,11 @@ def test_representation_to_series(self): idx1 = PeriodIndex([], freq='D') idx2 = PeriodIndex(['2011-01-01'], freq='D') idx3 = PeriodIndex(['2011-01-01', '2011-01-02'], freq='D') - idx4 = PeriodIndex(['2011-01-01', '2011-01-02', '2011-01-03'], freq='D') + idx4 = PeriodIndex( + ['2011-01-01', '2011-01-02', '2011-01-03'], freq='D') idx5 = PeriodIndex(['2011', '2012', '2013'], freq='A') - idx6 = PeriodIndex(['2011-01-01 09:00', '2012-02-01 10:00', 'NaT'], freq='H') + idx6 = PeriodIndex( + ['2011-01-01 09:00', '2012-02-01 10:00', 'NaT'], freq='H') idx7 = pd.period_range('2013Q1', periods=1, freq="Q") idx8 = pd.period_range('2013Q1', periods=2, freq="Q") @@ -1321,8 +1440,10 @@ def test_representation_to_series(self): 2 2013Q3 dtype: object""" - for idx, expected in zip([idx1, idx2, idx3, idx4, idx5, idx6, idx7, idx8, idx9], - [exp1, exp2, exp3, exp4, exp5, exp6, exp7, exp8, exp9]): + for idx, expected in zip([idx1, 
idx2, idx3, idx4, idx5, + idx6, idx7, idx8, idx9], + [exp1, exp2, exp3, exp4, exp5, + exp6, exp7, exp8, exp9]): result = repr(pd.Series(idx)) self.assertEqual(result, expected) @@ -1331,9 +1452,11 @@ def test_summary(self): idx1 = PeriodIndex([], freq='D') idx2 = PeriodIndex(['2011-01-01'], freq='D') idx3 = PeriodIndex(['2011-01-01', '2011-01-02'], freq='D') - idx4 = PeriodIndex(['2011-01-01', '2011-01-02', '2011-01-03'], freq='D') + idx4 = PeriodIndex( + ['2011-01-01', '2011-01-02', '2011-01-03'], freq='D') idx5 = PeriodIndex(['2011', '2012', '2013'], freq='A') - idx6 = PeriodIndex(['2011-01-01 09:00', '2012-02-01 10:00', 'NaT'], freq='H') + idx6 = PeriodIndex( + ['2011-01-01 09:00', '2012-02-01 10:00', 'NaT'], freq='H') idx7 = pd.period_range('2013Q1', periods=1, freq="Q") idx8 = pd.period_range('2013Q1', periods=2, freq="Q") @@ -1341,32 +1464,44 @@ def test_summary(self): exp1 = """PeriodIndex: 0 entries Freq: D""" + exp2 = """PeriodIndex: 1 entries, 2011-01-01 to 2011-01-01 Freq: D""" + exp3 = """PeriodIndex: 2 entries, 2011-01-01 to 2011-01-02 Freq: D""" + exp4 = """PeriodIndex: 3 entries, 2011-01-01 to 2011-01-03 Freq: D""" + exp5 = """PeriodIndex: 3 entries, 2011 to 2013 Freq: A-DEC""" + exp6 = """PeriodIndex: 3 entries, 2011-01-01 09:00 to NaT Freq: H""" + exp7 = """PeriodIndex: 1 entries, 2013Q1 to 2013Q1 Freq: Q-DEC""" + exp8 = """PeriodIndex: 2 entries, 2013Q1 to 2013Q2 Freq: Q-DEC""" + exp9 = """PeriodIndex: 3 entries, 2013Q1 to 2013Q3 Freq: Q-DEC""" - for idx, expected in zip([idx1, idx2, idx3, idx4, idx5, idx6, idx7, idx8, idx9], - [exp1, exp2, exp3, exp4, exp5, exp6, exp7, exp8, exp9]): + for idx, expected in zip([idx1, idx2, idx3, idx4, idx5, + idx6, idx7, idx8, idx9], + [exp1, exp2, exp3, exp4, exp5, + exp6, exp7, exp8, exp9]): result = idx.summary() self.assertEqual(result, expected) def test_resolution(self): - for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T', 'S', 'L', 'U'], + for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', + 'T', 
'S', 'L', 'U'], ['day', 'day', 'day', 'day', - 'hour', 'minute', 'second', 'millisecond', 'microsecond']): + 'hour', 'minute', 'second', + 'millisecond', 'microsecond']): idx = pd.period_range(start='2013-04-01', periods=30, freq=freq) self.assertEqual(idx.resolution, expected) @@ -1410,9 +1545,12 @@ def test_add_iadd(self): other7 = pd.period_range('1998-01-01', freq='A', periods=8) expected7 = pd.period_range('1998-01-01', freq='A', periods=10) - for rng, other, expected in [(rng1, other1, expected1), (rng2, other2, expected2), - (rng3, other3, expected3), (rng4, other4, expected4), - (rng5, other5, expected5), (rng6, other6, expected6), + for rng, other, expected in [(rng1, other1, expected1), + (rng2, other2, expected2), + (rng3, other3, expected3), (rng4, other4, + expected4), + (rng5, other5, expected5), (rng6, other6, + expected6), (rng7, other7, expected7)]: # GH9094 @@ -1439,10 +1577,12 @@ def test_add_iadd(self): rng += pd.offsets.YearEnd(5) tm.assert_index_equal(rng, expected) - for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), pd.offsets.Minute(), - np.timedelta64(365, 'D'), timedelta(365), Timedelta(days=365)]: + for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), + pd.offsets.Minute(), np.timedelta64(365, 'D'), + timedelta(365), Timedelta(days=365)]: msg = 'Input has different freq from PeriodIndex\\(freq=A-DEC\\)' - with tm.assertRaisesRegexp(ValueError, 'Input has different freq from Period'): + with tm.assertRaisesRegexp(ValueError, + 'Input has different freq from Period'): rng + o rng = pd.period_range('2014-01', '2016-12', freq='M') @@ -1452,17 +1592,19 @@ def test_add_iadd(self): rng += pd.offsets.MonthEnd(5) tm.assert_index_equal(rng, expected) - for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), pd.offsets.Minute(), - np.timedelta64(365, 'D'), timedelta(365), Timedelta(days=365)]: + for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), + pd.offsets.Minute(), np.timedelta64(365, 'D'), + timedelta(365), 
Timedelta(days=365)]: rng = pd.period_range('2014-01', '2016-12', freq='M') msg = 'Input has different freq from PeriodIndex\\(freq=M\\)' with tm.assertRaisesRegexp(ValueError, msg): rng + o # Tick - offsets = [pd.offsets.Day(3), timedelta(days=3), np.timedelta64(3, 'D'), - pd.offsets.Hour(72), timedelta(minutes=60*24*3), - np.timedelta64(72, 'h'), Timedelta('72:00:00')] + offsets = [pd.offsets.Day(3), timedelta(days=3), + np.timedelta64(3, 'D'), pd.offsets.Hour(72), + timedelta(minutes=60 * 24 * 3), np.timedelta64(72, 'h'), + Timedelta('72:00:00')] for delta in offsets: rng = pd.period_range('2014-05-01', '2014-05-15', freq='D') result = rng + delta @@ -1471,27 +1613,32 @@ def test_add_iadd(self): rng += delta tm.assert_index_equal(rng, expected) - for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), pd.offsets.Minute(), - np.timedelta64(4, 'h'), timedelta(hours=23), Timedelta('23:00:00')]: + for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), + pd.offsets.Minute(), np.timedelta64(4, 'h'), + timedelta(hours=23), Timedelta('23:00:00')]: rng = pd.period_range('2014-05-01', '2014-05-15', freq='D') msg = 'Input has different freq from PeriodIndex\\(freq=D\\)' with tm.assertRaisesRegexp(ValueError, msg): rng + o - offsets = [pd.offsets.Hour(2), timedelta(hours=2), np.timedelta64(2, 'h'), - pd.offsets.Minute(120), timedelta(minutes=120), - np.timedelta64(120, 'm'), Timedelta(minutes=120)] + offsets = [pd.offsets.Hour(2), timedelta(hours=2), + np.timedelta64(2, 'h'), pd.offsets.Minute(120), + timedelta(minutes=120), np.timedelta64(120, 'm'), + Timedelta(minutes=120)] for delta in offsets: - rng = pd.period_range('2014-01-01 10:00', '2014-01-05 10:00', freq='H') + rng = pd.period_range('2014-01-01 10:00', '2014-01-05 10:00', + freq='H') result = rng + delta - expected = pd.period_range('2014-01-01 12:00', '2014-01-05 12:00', freq='H') + expected = pd.period_range('2014-01-01 12:00', '2014-01-05 12:00', + freq='H') tm.assert_index_equal(result, expected) 
rng += delta tm.assert_index_equal(rng, expected) for delta in [pd.offsets.YearBegin(2), timedelta(minutes=30), - np.timedelta64(30, 's'), Timedelta(seconds=30)]: - rng = pd.period_range('2014-01-01 10:00', '2014-01-05 10:00', freq='H') + np.timedelta64(30, 's'), Timedelta(seconds=30)]: + rng = pd.period_range('2014-01-01 10:00', '2014-01-05 10:00', + freq='H') msg = 'Input has different freq from PeriodIndex\\(freq=H\\)' with tm.assertRaisesRegexp(ValueError, msg): result = rng + delta @@ -1526,7 +1673,8 @@ def test_sub_isub(self): rng5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:03', '2000-01-01 09:05'], freq='T') - other5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:05'], freq='T') + other5 = pd.PeriodIndex( + ['2000-01-01 09:01', '2000-01-01 09:05'], freq='T') expected5 = pd.PeriodIndex(['2000-01-01 09:03'], freq='T') rng6 = pd.period_range('2000-01-01', freq='M', periods=7) @@ -1537,10 +1685,13 @@ def test_sub_isub(self): other7 = pd.period_range('1998-01-01', freq='A', periods=8) expected7 = pd.period_range('2006-01-01', freq='A', periods=2) - for rng, other, expected in [(rng1, other1, expected1), (rng2, other2, expected2), - (rng3, other3, expected3), (rng4, other4, expected4), - (rng5, other5, expected5), (rng6, other6, expected6), - (rng7, other7, expected7),]: + for rng, other, expected in [(rng1, other1, expected1), + (rng2, other2, expected2), + (rng3, other3, expected3), + (rng4, other4, expected4), + (rng5, other5, expected5), + (rng6, other6, expected6), + (rng7, other7, expected7), ]: result_union = rng.difference(other) tm.assert_index_equal(result_union, expected) @@ -1553,8 +1704,9 @@ def test_sub_isub(self): rng -= pd.offsets.YearEnd(5) tm.assert_index_equal(rng, expected) - for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), pd.offsets.Minute(), - np.timedelta64(365, 'D'), timedelta(365)]: + for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), + pd.offsets.Minute(), np.timedelta64(365, 'D'), + timedelta(365)]: 
rng = pd.period_range('2014', '2024', freq='A') msg = 'Input has different freq from PeriodIndex\\(freq=A-DEC\\)' with tm.assertRaisesRegexp(ValueError, msg): @@ -1567,16 +1719,18 @@ def test_sub_isub(self): rng -= pd.offsets.MonthEnd(5) tm.assert_index_equal(rng, expected) - for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), pd.offsets.Minute(), - np.timedelta64(365, 'D'), timedelta(365)]: + for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), + pd.offsets.Minute(), np.timedelta64(365, 'D'), + timedelta(365)]: rng = pd.period_range('2014-01', '2016-12', freq='M') msg = 'Input has different freq from PeriodIndex\\(freq=M\\)' with tm.assertRaisesRegexp(ValueError, msg): rng - o # Tick - offsets = [pd.offsets.Day(3), timedelta(days=3), np.timedelta64(3, 'D'), - pd.offsets.Hour(72), timedelta(minutes=60*24*3), np.timedelta64(72, 'h')] + offsets = [pd.offsets.Day(3), timedelta(days=3), + np.timedelta64(3, 'D'), pd.offsets.Hour(72), + timedelta(minutes=60 * 24 * 3), np.timedelta64(72, 'h')] for delta in offsets: rng = pd.period_range('2014-05-01', '2014-05-15', freq='D') result = rng - delta @@ -1585,25 +1739,31 @@ def test_sub_isub(self): rng -= delta tm.assert_index_equal(rng, expected) - for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), pd.offsets.Minute(), - np.timedelta64(4, 'h'), timedelta(hours=23)]: + for o in [pd.offsets.YearBegin(2), pd.offsets.MonthBegin(1), + pd.offsets.Minute(), np.timedelta64(4, 'h'), + timedelta(hours=23)]: rng = pd.period_range('2014-05-01', '2014-05-15', freq='D') msg = 'Input has different freq from PeriodIndex\\(freq=D\\)' with tm.assertRaisesRegexp(ValueError, msg): rng - o - offsets = [pd.offsets.Hour(2), timedelta(hours=2), np.timedelta64(2, 'h'), - pd.offsets.Minute(120), timedelta(minutes=120), np.timedelta64(120, 'm')] + offsets = [pd.offsets.Hour(2), timedelta(hours=2), + np.timedelta64(2, 'h'), pd.offsets.Minute(120), + timedelta(minutes=120), np.timedelta64(120, 'm')] for delta in offsets: - rng = 
pd.period_range('2014-01-01 10:00', '2014-01-05 10:00', freq='H') + rng = pd.period_range('2014-01-01 10:00', '2014-01-05 10:00', + freq='H') result = rng - delta - expected = pd.period_range('2014-01-01 08:00', '2014-01-05 08:00', freq='H') + expected = pd.period_range('2014-01-01 08:00', '2014-01-05 08:00', + freq='H') tm.assert_index_equal(result, expected) rng -= delta tm.assert_index_equal(rng, expected) - for delta in [pd.offsets.YearBegin(2), timedelta(minutes=30), np.timedelta64(30, 's')]: - rng = pd.period_range('2014-01-01 10:00', '2014-01-05 10:00', freq='H') + for delta in [pd.offsets.YearBegin(2), timedelta(minutes=30), + np.timedelta64(30, 's')]: + rng = pd.period_range('2014-01-01 10:00', '2014-01-05 10:00', + freq='H') msg = 'Input has different freq from PeriodIndex\\(freq=H\\)' with tm.assertRaisesRegexp(ValueError, msg): result = rng + delta @@ -1622,11 +1782,14 @@ def test_value_counts_unique(self): # GH 7735 idx = pd.period_range('2011-01-01 09:00', freq='H', periods=10) # create repeated values, 'n'th element is repeated by n+1 times - idx = PeriodIndex(np.repeat(idx.values, range(1, len(idx) + 1)), freq='H') - - exp_idx = PeriodIndex(['2011-01-01 18:00', '2011-01-01 17:00', '2011-01-01 16:00', - '2011-01-01 15:00', '2011-01-01 14:00', '2011-01-01 13:00', - '2011-01-01 12:00', '2011-01-01 11:00', '2011-01-01 10:00', + idx = PeriodIndex( + np.repeat(idx.values, range(1, len(idx) + 1)), freq='H') + + exp_idx = PeriodIndex(['2011-01-01 18:00', '2011-01-01 17:00', + '2011-01-01 16:00', '2011-01-01 15:00', + '2011-01-01 14:00', '2011-01-01 13:00', + '2011-01-01 12:00', '2011-01-01 11:00', + '2011-01-01 10:00', '2011-01-01 09:00'], freq='H') expected = Series(range(10, 0, -1), index=exp_idx, dtype='int64') tm.assert_series_equal(idx.value_counts(), expected) @@ -1634,33 +1797,35 @@ def test_value_counts_unique(self): expected = pd.period_range('2011-01-01 09:00', freq='H', periods=10) tm.assert_index_equal(idx.unique(), expected) - idx = 
PeriodIndex(['2013-01-01 09:00', '2013-01-01 09:00', '2013-01-01 09:00', - '2013-01-01 08:00', '2013-01-01 08:00', pd.NaT], freq='H') + idx = PeriodIndex(['2013-01-01 09:00', '2013-01-01 09:00', + '2013-01-01 09:00', '2013-01-01 08:00', + '2013-01-01 08:00', pd.NaT], freq='H') - exp_idx = PeriodIndex(['2013-01-01 09:00', '2013-01-01 08:00'], freq='H') + exp_idx = PeriodIndex( + ['2013-01-01 09:00', '2013-01-01 08:00'], freq='H') expected = Series([3, 2], index=exp_idx) tm.assert_series_equal(idx.value_counts(), expected) - exp_idx = PeriodIndex(['2013-01-01 09:00', '2013-01-01 08:00', pd.NaT], freq='H') + exp_idx = PeriodIndex( + ['2013-01-01 09:00', '2013-01-01 08:00', pd.NaT], freq='H') expected = Series([3, 2, 1], index=exp_idx) tm.assert_series_equal(idx.value_counts(dropna=False), expected) tm.assert_index_equal(idx.unique(), exp_idx) def test_drop_duplicates_metadata(self): - #GH 10115 + # GH 10115 idx = pd.period_range('2011-01-01', '2011-01-31', freq='D', name='idx') result = idx.drop_duplicates() self.assert_index_equal(idx, result) self.assertEqual(idx.freq, result.freq) - idx_dup = idx.append(idx) # freq will not be reset + idx_dup = idx.append(idx) # freq will not be reset result = idx_dup.drop_duplicates() self.assert_index_equal(idx, result) self.assertEqual(idx.freq, result.freq) def test_order_compat(self): - def _check_freq(index, expected_index): if isinstance(index, PeriodIndex): self.assertEqual(index.freq, expected_index.freq) @@ -1682,13 +1847,16 @@ def _check_freq(index, expected_index): self.assert_numpy_array_equal(indexer, np.array([0, 1, 2])) _check_freq(ordered, idx) - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) + ordered, indexer = idx.sort_values(return_indexer=True, + ascending=False) self.assert_index_equal(ordered, idx[::-1]) self.assert_numpy_array_equal(indexer, np.array([2, 1, 0])) _check_freq(ordered, idx[::-1]) - pidx = PeriodIndex(['2011', '2013', '2015', '2012', '2011'], name='pidx', freq='A') - 
pexpected = PeriodIndex(['2011', '2011', '2012', '2013', '2015'], name='pidx', freq='A') + pidx = PeriodIndex(['2011', '2013', '2015', '2012', + '2011'], name='pidx', freq='A') + pexpected = PeriodIndex( + ['2011', '2011', '2012', '2013', '2015'], name='pidx', freq='A') # for compatibility check iidx = Index([2011, 2013, 2015, 2012, 2011], name='idx') iexpected = Index([2011, 2011, 2012, 2013, 2015], name='idx') @@ -1706,20 +1874,24 @@ def _check_freq(index, expected_index): self.assert_numpy_array_equal(indexer, np.array([0, 4, 3, 1, 2])) _check_freq(ordered, idx) - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) + ordered, indexer = idx.sort_values(return_indexer=True, + ascending=False) self.assert_index_equal(ordered, expected[::-1]) self.assert_numpy_array_equal(indexer, np.array([2, 1, 3, 4, 0])) _check_freq(ordered, idx) - pidx = PeriodIndex(['2011', '2013', 'NaT', '2011'], name='pidx', freq='D') + pidx = PeriodIndex(['2011', '2013', 'NaT', '2011'], name='pidx', + freq='D') result = pidx.sort_values() - expected = PeriodIndex(['NaT', '2011', '2011', '2013'], name='pidx', freq='D') + expected = PeriodIndex( + ['NaT', '2011', '2011', '2013'], name='pidx', freq='D') self.assert_index_equal(result, expected) self.assertEqual(result.freq, 'D') result = pidx.sort_values(ascending=False) - expected = PeriodIndex(['2013', '2011', '2011', 'NaT'], name='pidx', freq='D') + expected = PeriodIndex( + ['2013', '2011', '2011', 'NaT'], name='pidx', freq='D') self.assert_index_equal(result, expected) self.assertEqual(result.freq, 'D') @@ -1744,7 +1916,8 @@ def test_order(self): self.assertEqual(ordered.freq, idx.freq) self.assertEqual(ordered.freq, freq) - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) + ordered, indexer = idx.sort_values(return_indexer=True, + ascending=False) expected = idx[::-1] self.assert_index_equal(ordered, expected) self.assert_numpy_array_equal(indexer, np.array([2, 1, 0])) @@ -1756,17 +1929,18 @@ 
def test_order(self): exp1 = PeriodIndex(['2011-01-01', '2011-01-01', '2011-01-02', '2011-01-03', '2011-01-05'], freq='D', name='idx1') - idx2 = PeriodIndex(['2011-01-01', '2011-01-03', '2011-01-05', - '2011-01-02', '2011-01-01'], - freq='D', name='idx2') - exp2 = PeriodIndex(['2011-01-01', '2011-01-01', '2011-01-02', - '2011-01-03', '2011-01-05'], - freq='D', name='idx2') + # TODO(wesm): unused? + # idx2 = PeriodIndex(['2011-01-01', '2011-01-03', '2011-01-05', + # '2011-01-02', '2011-01-01'], + # freq='D', name='idx2') + # exp2 = PeriodIndex(['2011-01-01', '2011-01-01', '2011-01-02', + # '2011-01-03', '2011-01-05'], + # freq='D', name='idx2') - idx3 = PeriodIndex([pd.NaT, '2011-01-03', '2011-01-05', - '2011-01-02', pd.NaT], freq='D', name='idx3') - exp3 = PeriodIndex([pd.NaT, pd.NaT, '2011-01-02', '2011-01-03', - '2011-01-05'], freq='D', name='idx3') + # idx3 = PeriodIndex([pd.NaT, '2011-01-03', '2011-01-05', + # '2011-01-02', pd.NaT], freq='D', name='idx3') + # exp3 = PeriodIndex([pd.NaT, pd.NaT, '2011-01-02', '2011-01-03', + # '2011-01-05'], freq='D', name='idx3') for idx, expected in [(idx1, exp1), (idx1, exp1), (idx1, exp1)]: ordered = idx.sort_values() @@ -1782,13 +1956,15 @@ def test_order(self): self.assert_numpy_array_equal(indexer, np.array([0, 4, 3, 1, 2])) self.assertEqual(ordered.freq, 'D') - ordered, indexer = idx.sort_values(return_indexer=True, ascending=False) + ordered, indexer = idx.sort_values(return_indexer=True, + ascending=False) self.assert_index_equal(ordered, expected[::-1]) self.assert_numpy_array_equal(indexer, np.array([2, 1, 3, 4, 0])) self.assertEqual(ordered.freq, 'D') def test_getitem(self): - idx1 = pd.period_range('2011-01-01', '2011-01-31', freq='D', name='idx') + idx1 = pd.period_range('2011-01-01', '2011-01-31', freq='D', + name='idx') for idx in [idx1]: result = idx[0] @@ -1805,7 +1981,8 @@ def test_getitem(self): self.assertEqual(result.freq, 'D') result = idx[0:10:2] - expected = pd.PeriodIndex(['2011-01-01', '2011-01-03', 
'2011-01-05', + expected = pd.PeriodIndex(['2011-01-01', '2011-01-03', + '2011-01-05', '2011-01-07', '2011-01-09'], freq='D', name='idx') self.assert_index_equal(result, expected) @@ -1813,7 +1990,8 @@ def test_getitem(self): self.assertEqual(result.freq, 'D') result = idx[-20:-5:3] - expected = pd.PeriodIndex(['2011-01-12', '2011-01-15', '2011-01-18', + expected = pd.PeriodIndex(['2011-01-12', '2011-01-15', + '2011-01-18', '2011-01-21', '2011-01-24'], freq='D', name='idx') self.assert_index_equal(result, expected) @@ -1822,15 +2000,16 @@ def test_getitem(self): result = idx[4::-1] expected = PeriodIndex(['2011-01-05', '2011-01-04', '2011-01-03', - '2011-01-02', '2011-01-01'], + '2011-01-02', '2011-01-01'], freq='D', name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) self.assertEqual(result.freq, 'D') def test_take(self): - #GH 10295 - idx1 = pd.period_range('2011-01-01', '2011-01-31', freq='D', name='idx') + # GH 10295 + idx1 = pd.period_range('2011-01-01', '2011-01-31', freq='D', + name='idx') for idx in [idx1]: result = idx.take([0]) @@ -1847,14 +2026,15 @@ def test_take(self): self.assertEqual(result.freq, expected.freq) result = idx.take([0, 2, 4]) - expected = pd.PeriodIndex(['2011-01-01', '2011-01-03', '2011-01-05'], - freq='D', name='idx') + expected = pd.PeriodIndex(['2011-01-01', '2011-01-03', + '2011-01-05'], freq='D', name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) self.assertEqual(result.freq, 'D') result = idx.take([7, 4, 1]) - expected = pd.PeriodIndex(['2011-01-08', '2011-01-05', '2011-01-02'], + expected = pd.PeriodIndex(['2011-01-08', '2011-01-05', + '2011-01-02'], freq='D', name='idx') self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) @@ -1873,11 +2053,3 @@ def test_take(self): self.assert_index_equal(result, expected) self.assertEqual(result.freq, expected.freq) self.assertEqual(result.freq, 'D') - - -if 
__name__ == '__main__':
-    import nose
-
-    nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
-                   # '--with-coverage', '--cover-package=pandas.core'],
-                   exit=False)
diff --git a/pandas/tseries/tests/test_converter.py b/pandas/tseries/tests/test_converter.py
index 95c0b4466da26..1fe35838ef9ad 100644
--- a/pandas/tseries/tests/test_converter.py
+++ b/pandas/tseries/tests/test_converter.py
@@ -1,6 +1,4 @@
-from datetime import datetime, time, timedelta, date
-import sys
-import os
+from datetime import datetime, date
 
 import nose
 
@@ -18,11 +16,10 @@
 
 
 def test_timtetonum_accepts_unicode():
-    assert(converter.time2num("00:01") == converter.time2num(u("00:01")))
+    assert (converter.time2num("00:01") == converter.time2num(u("00:01")))
 
 
 class TestDateTimeConverter(tm.TestCase):
-
     def setUp(self):
         self.dtc = converter.DatetimeConverter()
         self.tc = converter.TimeFormatter(None)
@@ -30,7 +27,7 @@ def setUp(self):
     def test_convert_accepts_unicode(self):
         r1 = self.dtc.convert("12:22", None, None)
         r2 = self.dtc.convert(u("12:22"), None, None)
-        assert(r1 == r2), "DatetimeConverter.convert should accept unicode"
+        assert (r1 == r2), "DatetimeConverter.convert should accept unicode"
 
     def test_conversion(self):
         rs = self.dtc.convert(['2012-1-1'], None, None)[0]
@@ -56,21 +53,25 @@ def test_conversion(self):
         rs = self.dtc.convert(np.datetime64('2012-01-01'), None, None)
         self.assertEqual(rs, xp)
 
-        rs = self.dtc.convert(np.datetime64('2012-01-01 00:00:00+00:00'), None, None)
+        rs = self.dtc.convert(np.datetime64(
+            '2012-01-01 00:00:00+00:00'), None, None)
         self.assertEqual(rs, xp)
 
-        rs = self.dtc.convert(np.array([np.datetime64('2012-01-01 00:00:00+00:00'),
-                                        np.datetime64('2012-01-02 00:00:00+00:00')]), None, None)
+        rs = self.dtc.convert(np.array([
+            np.datetime64('2012-01-01 00:00:00+00:00'),
+            np.datetime64('2012-01-02 00:00:00+00:00')]), None, None)
         self.assertEqual(rs[0], xp)
 
     def test_conversion_float(self):
         decimals = 9
 
-        rs = self.dtc.convert(Timestamp('2012-1-1 01:02:03', tz='UTC'), None, None)
+        rs = self.dtc.convert(
+            Timestamp('2012-1-1 01:02:03', tz='UTC'), None, None)
         xp = converter.dates.date2num(Timestamp('2012-1-1 01:02:03', tz='UTC'))
         np_assert_almost_equal(rs, xp, decimals)
 
-        rs = self.dtc.convert(Timestamp('2012-1-1 09:02:03', tz='Asia/Hong_Kong'), None, None)
+        rs = self.dtc.convert(
+            Timestamp('2012-1-1 09:02:03', tz='Asia/Hong_Kong'), None, None)
         np_assert_almost_equal(rs, xp, decimals)
 
         rs = self.dtc.convert(datetime(2012, 1, 1, 1, 2, 3), None, None)
@@ -83,7 +84,7 @@ def test_dateindex_conversion(self):
         decimals = 9
 
         for freq in ('B', 'L', 'S'):
-            dateindex = tm.makeDateIndex(k = 10, freq = freq)
+            dateindex = tm.makeDateIndex(k=10, freq=freq)
             rs = self.dtc.convert(dateindex, None, None)
             xp = converter.dates.date2num(dateindex._mpl_repr())
             np_assert_almost_equal(rs, xp, decimals)
@@ -93,10 +94,11 @@
         def _assert_less(ts1, ts2):
             val1 = self.dtc.convert(ts1, None, None)
             val2 = self.dtc.convert(ts2, None, None)
             if not val1 < val2:
-                raise AssertionError('{0} is not less than {1}.'.format(val1, val2))
+                raise AssertionError('{0} is not less than {1}.'.format(val1,
+                                                                        val2))
 
-        # Matplotlib's time representation using floats cannot distinguish intervals smaller
-        # than ~10 microsecond in the common range of years.
+        # Matplotlib's time representation using floats cannot distinguish
+        # intervals smaller than ~10 microsecond in the common range of years.
         ts = Timestamp('2012-1-1')
         _assert_less(ts, ts + Second())
         _assert_less(ts, ts + Milli())
@@ -104,7 +106,6 @@ def _assert_less(ts1, ts2):
 
 
 class TestPeriodConverter(tm.TestCase):
-
     def setUp(self):
         self.pc = converter.PeriodConverter()
 
@@ -117,7 +118,8 @@ class Axis(object):
     def test_convert_accepts_unicode(self):
         r1 = self.pc.convert("2012-1-1", None, self.axis)
         r2 = self.pc.convert(u("2012-1-1"), None, self.axis)
-        self.assert_equal(r1, r2, "PeriodConverter.convert should accept unicode")
+        self.assert_equal(r1, r2,
+                          "PeriodConverter.convert should accept unicode")
 
     def test_conversion(self):
         rs = self.pc.convert(['2012-1-1'], None, self.axis)[0]
@@ -143,11 +145,13 @@ def test_conversion(self):
         # rs = self.pc.convert(np.datetime64('2012-01-01'), None, self.axis)
         # self.assertEqual(rs, xp)
         #
-        # rs = self.pc.convert(np.datetime64('2012-01-01 00:00:00+00:00'), None, self.axis)
+        # rs = self.pc.convert(np.datetime64('2012-01-01 00:00:00+00:00'),
+        #                      None, self.axis)
         # self.assertEqual(rs, xp)
         #
-        # rs = self.pc.convert(np.array([np.datetime64('2012-01-01 00:00:00+00:00'),
-        #                                np.datetime64('2012-01-02 00:00:00+00:00')]), None, self.axis)
+        # rs = self.pc.convert(np.array([
+        #     np.datetime64('2012-01-01 00:00:00+00:00'),
+        #     np.datetime64('2012-01-02 00:00:00+00:00')]), None, self.axis)
         # self.assertEqual(rs[0], xp)
 
     def test_integer_passthrough(self):
diff --git a/pandas/tseries/tests/test_daterange.py b/pandas/tseries/tests/test_daterange.py
index 00336615aeab4..5b0d9b593c344 100644
--- a/pandas/tseries/tests/test_daterange.py
+++ b/pandas/tseries/tests/test_daterange.py
@@ -17,7 +17,7 @@
 
 def eq_gen_range(kwargs, expected):
     rng = generate_range(**kwargs)
-    assert(np.array_equal(list(rng), expected))
+    assert (np.array_equal(list(rng), expected))
 
 
 START, END = datetime(2009, 1, 1), datetime(2010, 1, 1)
@@ -73,41 +73,44 @@ def test_precision_finer_than_offset(self):
 
 
 class TestDateRange(tm.TestCase):
-
     def setUp(self):
         self.rng = bdate_range(START, END)
 
     def test_constructor(self):
-        rng = bdate_range(START, END, freq=datetools.bday)
-        rng = bdate_range(START, periods=20, freq=datetools.bday)
-        rng = bdate_range(end=START, periods=20, freq=datetools.bday)
+        bdate_range(START, END, freq=datetools.bday)
+        bdate_range(START, periods=20, freq=datetools.bday)
+        bdate_range(end=START, periods=20, freq=datetools.bday)
 
         self.assertRaises(ValueError, date_range, '2011-1-1', '2012-1-1', 'B')
         self.assertRaises(ValueError, bdate_range, '2011-1-1', '2012-1-1', 'B')
 
     def test_naive_aware_conflicts(self):
         naive = bdate_range(START, END, freq=datetools.bday, tz=None)
-        aware = bdate_range(START, END, freq=datetools.bday, tz="Asia/Hong_Kong")
+        aware = bdate_range(START, END, freq=datetools.bday,
+                            tz="Asia/Hong_Kong")
         assertRaisesRegexp(TypeError, "tz-naive.*tz-aware", naive.join, aware)
         assertRaisesRegexp(TypeError, "tz-naive.*tz-aware", aware.join, naive)
 
     def test_cached_range(self):
-        rng = DatetimeIndex._cached_range(START, END,
-                                          offset=datetools.bday)
-        rng = DatetimeIndex._cached_range(START, periods=20,
-                                          offset=datetools.bday)
-        rng = DatetimeIndex._cached_range(end=START, periods=20,
-                                          offset=datetools.bday)
+        DatetimeIndex._cached_range(START, END, offset=datetools.bday)
+        DatetimeIndex._cached_range(START, periods=20,
+                                    offset=datetools.bday)
+        DatetimeIndex._cached_range(end=START, periods=20,
+                                    offset=datetools.bday)
 
-        assertRaisesRegexp(TypeError, "offset", DatetimeIndex._cached_range, START, END)
+        assertRaisesRegexp(TypeError, "offset", DatetimeIndex._cached_range,
+                           START, END)
 
-        assertRaisesRegexp(TypeError, "specify period", DatetimeIndex._cached_range, START,
-                           offset=datetools.bday)
+        assertRaisesRegexp(TypeError, "specify period",
+                           DatetimeIndex._cached_range, START,
+                           offset=datetools.bday)
 
-        assertRaisesRegexp(TypeError, "specify period", DatetimeIndex._cached_range, end=END,
-                           offset=datetools.bday)
+        assertRaisesRegexp(TypeError, "specify period",
+                           DatetimeIndex._cached_range, end=END,
+                           offset=datetools.bday)
 
-        assertRaisesRegexp(TypeError, "start or end", DatetimeIndex._cached_range, periods=20,
-                           offset=datetools.bday)
+        assertRaisesRegexp(TypeError, "start or end",
+                           DatetimeIndex._cached_range, periods=20,
+                           offset=datetools.bday)
 
     def test_cached_range_bug(self):
         rng = date_range('2010-09-01 05:00:00', periods=50,
@@ -124,7 +127,8 @@ def test_timezone_comparaison_bug(self):
 
     def test_timezone_comparaison_assert(self):
         start = Timestamp('20130220 10:00', tz='US/Eastern')
-        self.assertRaises(AssertionError, date_range, start, periods=2, tz='Europe/Berlin')
+        self.assertRaises(AssertionError, date_range, start, periods=2,
+                          tz='Europe/Berlin')
 
     def test_comparison(self):
         d = self.rng[10]
@@ -399,17 +403,18 @@ def test_range_tz_dst_straddle_pytz(self):
             dr = date_range(start, end, freq='D')
             self.assertEqual(dr[0], start)
             self.assertEqual(dr[-1], end)
-            self.assertEqual(np.all(dr.hour==0), True)
+            self.assertEqual(np.all(dr.hour == 0), True)
 
             dr = date_range(start, end, freq='D', tz='US/Eastern')
             self.assertEqual(dr[0], start)
             self.assertEqual(dr[-1], end)
-            self.assertEqual(np.all(dr.hour==0), True)
+            self.assertEqual(np.all(dr.hour == 0), True)
 
-            dr = date_range(start.replace(tzinfo=None), end.replace(tzinfo=None), freq='D', tz='US/Eastern')
+            dr = date_range(start.replace(tzinfo=None), end.replace(
+                tzinfo=None), freq='D', tz='US/Eastern')
             self.assertEqual(dr[0], start)
             self.assertEqual(dr[-1], end)
-            self.assertEqual(np.all(dr.hour==0), True)
+            self.assertEqual(np.all(dr.hour == 0), True)
 
     def test_range_tz_dateutil(self):
         # GH 2906
@@ -447,8 +452,10 @@ def test_month_range_union_tz_pytz(self):
         late_start = datetime(2011, 3, 1)
         late_end = datetime(2011, 5, 1)
 
-        early_dr = date_range(start=early_start, end=early_end, tz=tz, freq=datetools.monthEnd)
-        late_dr = date_range(start=late_start, end=late_end, tz=tz, freq=datetools.monthEnd)
+        early_dr = date_range(start=early_start, end=early_end, tz=tz,
+                              freq=datetools.monthEnd)
+        late_dr = date_range(start=late_start, end=late_end, tz=tz,
+                             freq=datetools.monthEnd)
 
         early_dr.union(late_dr)
 
@@ -464,8 +471,10 @@ def test_month_range_union_tz_dateutil(self):
         late_start = datetime(2011, 3, 1)
         late_end = datetime(2011, 5, 1)
 
-        early_dr = date_range(start=early_start, end=early_end, tz=tz, freq=datetools.monthEnd)
-        late_dr = date_range(start=late_start, end=late_end, tz=tz, freq=datetools.monthEnd)
+        early_dr = date_range(start=early_start, end=early_end, tz=tz,
+                              freq=datetools.monthEnd)
+        late_dr = date_range(start=late_start, end=late_end, tz=tz,
+                             freq=datetools.monthEnd)
 
         early_dr.union(late_dr)
 
@@ -491,9 +500,12 @@ def test_range_closed(self):
     def test_range_closed_boundary(self):
         # GH 11804
         for closed in ['right', 'left', None]:
-            right_boundary = date_range('2015-09-12', '2015-12-01', freq='QS-MAR', closed=closed)
-            left_boundary = date_range('2015-09-01', '2015-09-12', freq='QS-MAR', closed=closed)
-            both_boundary = date_range('2015-09-01', '2015-12-01', freq='QS-MAR', closed=closed)
+            right_boundary = date_range('2015-09-12', '2015-12-01',
+                                        freq='QS-MAR', closed=closed)
+            left_boundary = date_range('2015-09-01', '2015-09-12',
+                                       freq='QS-MAR', closed=closed)
+            both_boundary = date_range('2015-09-01', '2015-12-01',
+                                       freq='QS-MAR', closed=closed)
 
             expected_right = expected_left = expected_both = both_boundary
             if closed == 'right':
@@ -520,33 +532,36 @@ def test_freq_divides_end_in_nanos(self):
                               freq='345min')
         result_2 = date_range('2005-01-13 10:00', '2005-01-13 16:00',
                               freq='345min')
-        expected_1 = DatetimeIndex(['2005-01-12 10:00:00', '2005-01-12 15:45:00'],
-                                   dtype='datetime64[ns]', freq='345T', tz=None)
-        expected_2 = DatetimeIndex(['2005-01-13 10:00:00', '2005-01-13 15:45:00'],
-                                   dtype='datetime64[ns]', freq='345T', tz=None)
+        expected_1 = DatetimeIndex(['2005-01-12 10:00:00',
+                                    '2005-01-12 15:45:00'],
+                                   dtype='datetime64[ns]', freq='345T',
+                                   tz=None)
+        expected_2 = DatetimeIndex(['2005-01-13 10:00:00',
+                                    '2005-01-13 15:45:00'],
+                                   dtype='datetime64[ns]', freq='345T',
+                                   tz=None)
         self.assertTrue(result_1.equals(expected_1))
         self.assertTrue(result_2.equals(expected_2))
 
 
-class TestCustomDateRange(tm.TestCase):
 
+class TestCustomDateRange(tm.TestCase):
     def setUp(self):
         tm._skip_if_no_cday()
         self.rng = cdate_range(START, END)
 
     def test_constructor(self):
-        rng = cdate_range(START, END, freq=datetools.cday)
-        rng = cdate_range(START, periods=20, freq=datetools.cday)
-        rng = cdate_range(end=START, periods=20, freq=datetools.cday)
+        cdate_range(START, END, freq=datetools.cday)
+        cdate_range(START, periods=20, freq=datetools.cday)
+        cdate_range(end=START, periods=20, freq=datetools.cday)
 
         self.assertRaises(ValueError, date_range, '2011-1-1', '2012-1-1', 'C')
         self.assertRaises(ValueError, cdate_range, '2011-1-1', '2012-1-1', 'C')
 
     def test_cached_range(self):
-        rng = DatetimeIndex._cached_range(START, END,
-                                          offset=datetools.cday)
-        rng = DatetimeIndex._cached_range(START, periods=20,
-                                          offset=datetools.cday)
-        rng = DatetimeIndex._cached_range(end=START, periods=20,
-                                          offset=datetools.cday)
+        DatetimeIndex._cached_range(START, END, offset=datetools.cday)
+        DatetimeIndex._cached_range(START, periods=20,
+                                    offset=datetools.cday)
+        DatetimeIndex._cached_range(end=START, periods=20,
+                                    offset=datetools.cday)
 
         self.assertRaises(Exception, DatetimeIndex._cached_range, START, END)
@@ -746,8 +761,7 @@ def test_cdaterange_weekmask(self):
         self.assertTrue(xp.equals(rng))
 
     def test_cdaterange_holidays(self):
-        rng = cdate_range('2013-05-01', periods=3,
-                          holidays=['2013-05-01'])
+        rng = cdate_range('2013-05-01', periods=3, holidays=['2013-05-01'])
         xp = DatetimeIndex(['2013-05-02', '2013-05-03', '2013-05-06'])
         self.assertTrue(xp.equals(rng))
 
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index d9bc64136e390..653d92a6148e6 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -1,13 +1,10 @@
-from datetime import datetime, time, timedelta
+from datetime import datetime, timedelta
 from pandas.compat import range
-import sys
-import os
-
-import nose
 
 import numpy as np
 
-from pandas import Index, DatetimeIndex, Timestamp, Series, date_range, period_range
+from pandas import (Index, DatetimeIndex, Timestamp, Series,
+                    date_range, period_range)
 
 import pandas.tseries.frequencies as frequencies
 from pandas.tseries.tools import to_datetime
@@ -20,39 +17,40 @@
 import pandas.util.testing as tm
 
 from pandas import Timedelta
 
+
 def test_to_offset_multiple():
     freqstr = '2h30min'
     freqstr2 = '2h 30min'
 
     result = frequencies.to_offset(freqstr)
-    assert(result == frequencies.to_offset(freqstr2))
+    assert (result == frequencies.to_offset(freqstr2))
 
     expected = offsets.Minute(150)
-    assert(result == expected)
+    assert (result == expected)
 
     freqstr = '2h30min15s'
     result = frequencies.to_offset(freqstr)
     expected = offsets.Second(150 * 60 + 15)
-    assert(result == expected)
+    assert (result == expected)
 
     freqstr = '2h 60min'
     result = frequencies.to_offset(freqstr)
     expected = offsets.Hour(3)
-    assert(result == expected)
+    assert (result == expected)
 
     freqstr = '15l500u'
     result = frequencies.to_offset(freqstr)
     expected = offsets.Micro(15500)
-    assert(result == expected)
+    assert (result == expected)
 
     freqstr = '10s75L'
     result = frequencies.to_offset(freqstr)
     expected = offsets.Milli(10075)
-    assert(result == expected)
+    assert (result == expected)
 
     freqstr = '2800N'
     result = frequencies.to_offset(freqstr)
     expected = offsets.Nano(2800)
-    assert(result == expected)
+    assert (result == expected)
 
     # malformed
     try:
@@ -60,27 +58,27 @@ def test_to_offset_multiple():
     except ValueError:
         pass
     else:
-        assert(False)
+        assert (False)
 
 
 def test_to_offset_negative():
     freqstr = '-1S'
     result = frequencies.to_offset(freqstr)
-    assert(result.n == -1)
+    assert (result.n == -1)
 
     freqstr = '-5min10s'
     result = frequencies.to_offset(freqstr)
-    assert(result.n == -310)
+    assert (result.n == -310)
 
 
 def test_to_offset_leading_zero():
     freqstr = '00H 00T 01S'
     result = frequencies.to_offset(freqstr)
-    assert(result.n == 1)
+    assert (result.n == 1)
 
     freqstr = '-00H 03T 14S'
     result = frequencies.to_offset(freqstr)
-    assert(result.n == -194)
+    assert (result.n == -194)
 
 
 def test_to_offset_pd_timedelta():
@@ -88,37 +86,37 @@ def test_to_offset_pd_timedelta():
     td = Timedelta(days=1, seconds=1)
     result = frequencies.to_offset(td)
     expected = offsets.Second(86401)
-    assert(expected==result)
+    assert (expected == result)
 
     td = Timedelta(days=-1, seconds=1)
     result = frequencies.to_offset(td)
     expected = offsets.Second(-86399)
-    assert(expected==result)
+    assert (expected == result)
 
     td = Timedelta(hours=1, minutes=10)
     result = frequencies.to_offset(td)
     expected = offsets.Minute(70)
-    assert(expected==result)
+    assert (expected == result)
 
     td = Timedelta(hours=1, minutes=-10)
     result = frequencies.to_offset(td)
     expected = offsets.Minute(50)
-    assert(expected==result)
+    assert (expected == result)
 
     td = Timedelta(weeks=1)
     result = frequencies.to_offset(td)
     expected = offsets.Day(7)
-    assert(expected==result)
+    assert (expected == result)
 
     td1 = Timedelta(hours=1)
     result1 = frequencies.to_offset(td1)
     result2 = frequencies.to_offset('60min')
-    assert(result1 == result2)
+    assert (result1 == result2)
 
     td = Timedelta(microseconds=1)
     result = frequencies.to_offset(td)
     expected = offsets.Micro(1)
-    assert(expected == result)
+    assert (expected == result)
 
     td = Timedelta(microseconds=0)
     tm.assertRaises(ValueError, lambda: frequencies.to_offset(td))
@@ -127,53 +125,52 @@ def test_to_offset_pd_timedelta():
 def test_anchored_shortcuts():
     result = frequencies.to_offset('W')
     expected = frequencies.to_offset('W-SUN')
-    assert(result == expected)
+    assert (result == expected)
 
     result1 = frequencies.to_offset('Q')
     result2 = frequencies.to_offset('Q-DEC')
     expected = offsets.QuarterEnd(startingMonth=12)
-    assert(result1 == expected)
-    assert(result2 == expected)
+    assert (result1 == expected)
+    assert (result2 == expected)
 
     result1 = frequencies.to_offset('Q-MAY')
     expected = offsets.QuarterEnd(startingMonth=5)
-    assert(result1 == expected)
+    assert (result1 == expected)
 
 
 def test_get_rule_month():
     result = frequencies._get_rule_month('W')
-    assert(result == 'DEC')
+    assert (result == 'DEC')
    result = frequencies._get_rule_month(offsets.Week())
-    assert(result == 'DEC')
+    assert (result == 'DEC')
 
     result = frequencies._get_rule_month('D')
-    assert(result == 'DEC')
+    assert (result == 'DEC')
     result = frequencies._get_rule_month(offsets.Day())
-    assert(result == 'DEC')
+    assert (result == 'DEC')
 
     result = frequencies._get_rule_month('Q')
-    assert(result == 'DEC')
+    assert (result == 'DEC')
     result = frequencies._get_rule_month(offsets.QuarterEnd(startingMonth=12))
     print(result == 'DEC')
 
     result = frequencies._get_rule_month('Q-JAN')
-    assert(result == 'JAN')
+    assert (result == 'JAN')
     result = frequencies._get_rule_month(offsets.QuarterEnd(startingMonth=1))
-    assert(result == 'JAN')
+    assert (result == 'JAN')
 
     result = frequencies._get_rule_month('A-DEC')
-    assert(result == 'DEC')
+    assert (result == 'DEC')
     result = frequencies._get_rule_month(offsets.YearEnd())
-    assert(result == 'DEC')
+    assert (result == 'DEC')
 
     result = frequencies._get_rule_month('A-MAY')
-    assert(result == 'MAY')
+    assert (result == 'MAY')
     result = frequencies._get_rule_month(offsets.YearEnd(month=5))
-    assert(result == 'MAY')
+    assert (result == 'MAY')
 
 
 class TestFrequencyCode(tm.TestCase):
-
     def test_freq_code(self):
         self.assertEqual(frequencies.get_freq('A'), 1000)
         self.assertEqual(frequencies.get_freq('3A'), 1000)
@@ -200,15 +197,19 @@ def test_freq_group(self):
         self.assertEqual(frequencies.get_freq_group('A-JAN'), 1000)
         self.assertEqual(frequencies.get_freq_group('A-MAY'), 1000)
         self.assertEqual(frequencies.get_freq_group(offsets.YearEnd()), 1000)
-        self.assertEqual(frequencies.get_freq_group(offsets.YearEnd(month=1)), 1000)
-        self.assertEqual(frequencies.get_freq_group(offsets.YearEnd(month=5)), 1000)
+        self.assertEqual(frequencies.get_freq_group(
+            offsets.YearEnd(month=1)), 1000)
+        self.assertEqual(frequencies.get_freq_group(
+            offsets.YearEnd(month=5)), 1000)
 
         self.assertEqual(frequencies.get_freq_group('W'), 4000)
         self.assertEqual(frequencies.get_freq_group('W-MON'), 4000)
         self.assertEqual(frequencies.get_freq_group('W-FRI'), 4000)
         self.assertEqual(frequencies.get_freq_group(offsets.Week()), 4000)
-        self.assertEqual(frequencies.get_freq_group(offsets.Week(weekday=1)), 4000)
-        self.assertEqual(frequencies.get_freq_group(offsets.Week(weekday=5)), 4000)
+        self.assertEqual(frequencies.get_freq_group(
+            offsets.Week(weekday=1)), 4000)
+        self.assertEqual(frequencies.get_freq_group(
+            offsets.Week(weekday=5)), 4000)
 
     def test_get_to_timestamp_base(self):
         tsb = frequencies.get_to_timestamp_base
@@ -227,7 +228,6 @@ def test_get_to_timestamp_base(self):
         self.assertEqual(tsb(frequencies.get_freq_code('H')[0]),
                          frequencies.get_freq_code('S')[0])
 
-
     def test_freq_to_reso(self):
         Reso = frequencies.Resolution
 
@@ -297,15 +297,15 @@ def test_get_freq_code(self):
                          (frequencies.get_freq('W-TUE'), 1))
         self.assertEqual(frequencies.get_freq_code(offsets.Week(3, weekday=0)),
                          (frequencies.get_freq('W-MON'), 3))
-        self.assertEqual(frequencies.get_freq_code(offsets.Week(-2, weekday=4)),
-                         (frequencies.get_freq('W-FRI'), -2))
+        self.assertEqual(
+            frequencies.get_freq_code(offsets.Week(-2, weekday=4)),
+            (frequencies.get_freq('W-FRI'), -2))
 
 
 _dti = DatetimeIndex
 
 
 class TestFrequencyInference(tm.TestCase):
-
     def test_raise_if_period_index(self):
         index = PeriodIndex(start="1/1/1990", periods=20, freq="M")
         self.assertRaises(TypeError, frequencies.infer_freq, index)
@@ -358,12 +358,12 @@ def _check_tick(self, base_delta, code):
             exp_freq = code
         self.assertEqual(frequencies.infer_freq(index), exp_freq)
 
-        index = _dti([b + base_delta * 7] +
-                     [b + base_delta * j for j in range(3)])
+        index = _dti([b + base_delta * 7] + [b + base_delta * j for j in range(
+            3)])
 
         self.assertIsNone(frequencies.infer_freq(index))
 
-        index = _dti([b + base_delta * j for j in range(3)] +
-                     [b + base_delta * 7])
+        index = _dti([b + base_delta * j for j in range(3)] + [b + base_delta *
+                                                               7])
 
         self.assertIsNone(frequencies.infer_freq(index))
 
@@ -391,8 +391,9 @@ def test_fifth_week_of_month_infer(self):
         assert frequencies.infer_freq(index) is None
 
     def test_week_of_month_fake(self):
-        #All of these dates are on same day of week and are 4 or 5 weeks apart
-        index = DatetimeIndex(["2013-08-27","2013-10-01","2013-10-29","2013-11-26"])
+        # All of these dates are on same day of week and are 4 or 5 weeks apart
+        index = DatetimeIndex(["2013-08-27", "2013-10-01", "2013-10-29",
+                               "2013-11-26"])
         assert frequencies.infer_freq(index) != 'WOM-4TUE'
 
     def test_monthly(self):
@@ -433,15 +434,12 @@ def _check_generated_range(self, start, freq):
             self.assertEqual(frequencies.infer_freq(index), gen.freqstr)
         else:
             inf_freq = frequencies.infer_freq(index)
-            self.assertTrue((inf_freq == 'Q-DEC' and
-                             gen.freqstr in ('Q', 'Q-DEC', 'Q-SEP', 'Q-JUN',
-                                             'Q-MAR'))
-                            or
-                            (inf_freq == 'Q-NOV' and
-                             gen.freqstr in ('Q-NOV', 'Q-AUG', 'Q-MAY', 'Q-FEB'))
-                            or
-                            (inf_freq == 'Q-OCT' and
-                             gen.freqstr in ('Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')))
+            self.assertTrue((inf_freq == 'Q-DEC' and gen.freqstr in (
+                'Q', 'Q-DEC', 'Q-SEP', 'Q-JUN', 'Q-MAR')) or (
+                inf_freq == 'Q-NOV' and gen.freqstr in (
+                    'Q-NOV', 'Q-AUG', 'Q-MAY', 'Q-FEB')) or (
+                inf_freq == 'Q-OCT' and gen.freqstr in (
+                    'Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')))
 
         gen = date_range(start, periods=5, freq=freq)
         index = _dti(gen.values)
@@ -449,15 +447,12 @@ def _check_generated_range(self, start, freq):
             self.assertEqual(frequencies.infer_freq(index), gen.freqstr)
         else:
             inf_freq = frequencies.infer_freq(index)
-            self.assertTrue((inf_freq == 'Q-DEC' and
-                             gen.freqstr in ('Q', 'Q-DEC', 'Q-SEP', 'Q-JUN',
-                                             'Q-MAR'))
-                            or
-                            (inf_freq == 'Q-NOV' and
-                             gen.freqstr in ('Q-NOV', 'Q-AUG', 'Q-MAY', 'Q-FEB'))
-                            or
-                            (inf_freq == 'Q-OCT' and
-                             gen.freqstr in ('Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')))
+            self.assertTrue((inf_freq == 'Q-DEC' and gen.freqstr in (
+                'Q', 'Q-DEC', 'Q-SEP', 'Q-JUN', 'Q-MAR')) or (
+                inf_freq == 'Q-NOV' and gen.freqstr in (
+                    'Q-NOV', 'Q-AUG', 'Q-MAY', 'Q-FEB')) or (
+                inf_freq == 'Q-OCT' and gen.freqstr in (
+                    'Q-OCT', 'Q-JUL', 'Q-APR', 'Q-JAN')))
 
     def test_infer_freq(self):
         rng = period_range('1959Q2', '2009Q3', freq='Q')
@@ -474,13 +469,16 @@ def test_infer_freq(self):
 
     def test_infer_freq_tz(self):
 
-        freqs = {'AS-JAN': ['2009-01-01', '2010-01-01', '2011-01-01', '2012-01-01'],
-                 'Q-OCT': ['2009-01-31', '2009-04-30', '2009-07-31', '2009-10-31'],
+        freqs = {'AS-JAN':
+                 ['2009-01-01', '2010-01-01', '2011-01-01', '2012-01-01'],
+                 'Q-OCT':
+                 ['2009-01-31', '2009-04-30', '2009-07-31', '2009-10-31'],
                  'M': ['2010-11-30', '2010-12-31', '2011-01-31', '2011-02-28'],
-                 'W-SAT': ['2010-12-25', '2011-01-01', '2011-01-08', '2011-01-15'],
+                 'W-SAT':
+                 ['2010-12-25', '2011-01-01', '2011-01-08', '2011-01-15'],
                  'D': ['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04'],
-                 'H': ['2011-12-31 22:00', '2011-12-31 23:00', '2012-01-01 00:00', '2012-01-01 01:00']
-                 }
+                 'H': ['2011-12-31 22:00', '2011-12-31 23:00',
+                       '2012-01-01 00:00', '2012-01-01 01:00']}
 
         # GH 7310
         for tz in [None, 'Australia/Sydney', 'Asia/Tokyo', 'Europe/Paris',
@@ -491,49 +489,55 @@ def test_infer_freq_tz(self):
 
     def test_infer_freq_tz_transition(self):
         # Tests for #8772
-        date_pairs = [['2013-11-02', '2013-11-5'], #Fall DST
-                      ['2014-03-08', '2014-03-11'], #Spring DST
-                      ['2014-01-01', '2014-01-03']] #Regular Time
-        freqs = ['3H', '10T', '3601S', '3600001L', '3600000001U', '3600000000001N']
+        date_pairs = [['2013-11-02', '2013-11-5'],  # Fall DST
+                      ['2014-03-08', '2014-03-11'],  # Spring DST
+                      ['2014-01-01', '2014-01-03']]  # Regular Time
+        freqs = ['3H', '10T', '3601S', '3600001L', '3600000001U',
+                 '3600000000001N']
 
         for tz in [None, 'Australia/Sydney', 'Asia/Tokyo', 'Europe/Paris',
                    'US/Pacific', 'US/Eastern']:
             for date_pair in date_pairs:
                 for freq in freqs:
-                    idx = date_range(date_pair[0], date_pair[1], freq=freq, tz=tz)
+                    idx = date_range(date_pair[0], date_pair[
+                        1], freq=freq, tz=tz)
                     self.assertEqual(idx.inferred_freq, freq)
 
-        index = date_range("2013-11-03", periods=5, freq="3H").tz_localize("America/Chicago")
+        index = date_range("2013-11-03", periods=5,
+                           freq="3H").tz_localize("America/Chicago")
         self.assertIsNone(index.inferred_freq)
 
     def test_infer_freq_businesshour(self):
         # GH 7905
-        idx = DatetimeIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
-                             '2014-07-01 12:00', '2014-07-01 13:00', '2014-07-01 14:00'])
+        idx = DatetimeIndex(
+            ['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
+             '2014-07-01 12:00', '2014-07-01 13:00', '2014-07-01 14:00'])
         # hourly freq in a day must result in 'H'
         self.assertEqual(idx.inferred_freq, 'H')
 
-        idx = DatetimeIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
-                             '2014-07-01 12:00', '2014-07-01 13:00', '2014-07-01 14:00',
-                             '2014-07-01 15:00', '2014-07-01 16:00',
-                             '2014-07-02 09:00', '2014-07-02 10:00', '2014-07-02 11:00'])
+        idx = DatetimeIndex(
+            ['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
+             '2014-07-01 12:00', '2014-07-01 13:00', '2014-07-01 14:00',
+             '2014-07-01 15:00', '2014-07-01 16:00', '2014-07-02 09:00',
+             '2014-07-02 10:00', '2014-07-02 11:00'])
         self.assertEqual(idx.inferred_freq, 'BH')
-        idx = DatetimeIndex(['2014-07-04 09:00', '2014-07-04 10:00', '2014-07-04 11:00',
-                             '2014-07-04 12:00', '2014-07-04 13:00', '2014-07-04 14:00',
-                             '2014-07-04 15:00', '2014-07-04 16:00',
-                             '2014-07-07 09:00', '2014-07-07 10:00', '2014-07-07 11:00'])
+        idx = DatetimeIndex(
+            ['2014-07-04 09:00', '2014-07-04 10:00', '2014-07-04 11:00',
+             '2014-07-04 12:00', '2014-07-04 13:00', '2014-07-04 14:00',
+             '2014-07-04 15:00', '2014-07-04 16:00', '2014-07-07 09:00',
+             '2014-07-07 10:00', '2014-07-07 11:00'])
         self.assertEqual(idx.inferred_freq, 'BH')
-        idx = DatetimeIndex(['2014-07-04 09:00', '2014-07-04 10:00', '2014-07-04 11:00',
-                             '2014-07-04 12:00', '2014-07-04 13:00', '2014-07-04 14:00',
-                             '2014-07-04 15:00', '2014-07-04 16:00',
-                             '2014-07-07 09:00', '2014-07-07 10:00', '2014-07-07 11:00',
-                             '2014-07-07 12:00', '2014-07-07 13:00', '2014-07-07 14:00',
-                             '2014-07-07 15:00', '2014-07-07 16:00',
-                             '2014-07-08 09:00', '2014-07-08 10:00', '2014-07-08 11:00',
-                             '2014-07-08 12:00', '2014-07-08 13:00', '2014-07-08 14:00',
-                             '2014-07-08 15:00', '2014-07-08 16:00'])
+        idx = DatetimeIndex(
+            ['2014-07-04 09:00', '2014-07-04 10:00', '2014-07-04 11:00',
+             '2014-07-04 12:00', '2014-07-04 13:00', '2014-07-04 14:00',
+             '2014-07-04 15:00', '2014-07-04 16:00', '2014-07-07 09:00',
+             '2014-07-07 10:00', '2014-07-07 11:00', '2014-07-07 12:00',
+             '2014-07-07 13:00', '2014-07-07 14:00', '2014-07-07 15:00',
+             '2014-07-07 16:00', '2014-07-08 09:00', '2014-07-08 10:00',
+             '2014-07-08 11:00', '2014-07-08 12:00', '2014-07-08 13:00',
+             '2014-07-08 14:00', '2014-07-08 15:00', '2014-07-08 16:00'])
         self.assertEqual(idx.inferred_freq, 'BH')
 
     def test_not_monotonic(self):
@@ -541,7 +545,7 @@ def test_not_monotonic(self):
         rng = rng[::-1]
         self.assertEqual(rng.inferred_freq, '-1A-JAN')
 
-    def test_non_datetimeindex(self):
+    def test_non_datetimeindex2(self):
         rng = _dti(['1/31/2000', '1/31/2001', '1/31/2002'])
 
         vals = rng.to_pydatetime()
@@ -552,24 +556,25 @@ def test_non_datetimeindex(self):
     def test_invalid_index_types(self):
 
         # test all index types
-        for i in [ tm.makeIntIndex(10),
-                   tm.makeFloatIndex(10),
-                   tm.makePeriodIndex(10) ]:
-            self.assertRaises(TypeError, lambda : frequencies.infer_freq(i))
+        for i in [tm.makeIntIndex(10), tm.makeFloatIndex(10),
+                  tm.makePeriodIndex(10)]:
+            self.assertRaises(TypeError, lambda: frequencies.infer_freq(i))
 
         # GH 10822
        # odd error message on conversions to datetime for unicode
         if not is_platform_windows():
-            for i in [ tm.makeStringIndex(10),
-                       tm.makeUnicodeIndex(10) ]:
-                self.assertRaises(ValueError, lambda : frequencies.infer_freq(i))
+            for i in [tm.makeStringIndex(10), tm.makeUnicodeIndex(10)]:
+                self.assertRaises(ValueError,
+                                  lambda: frequencies.infer_freq(i))
 
     def test_string_datetimelike_compat(self):
 
         # GH 6463
-        expected = frequencies.infer_freq(['2004-01', '2004-02', '2004-03', '2004-04'])
-        result = frequencies.infer_freq(Index(['2004-01', '2004-02', '2004-03', '2004-04']))
-        self.assertEqual(result,expected)
+        expected = frequencies.infer_freq(['2004-01', '2004-02', '2004-03',
+                                           '2004-04'])
+        result = frequencies.infer_freq(Index(['2004-01', '2004-02', '2004-03',
+                                               '2004-04']))
+        self.assertEqual(result, expected)
 
     def test_series(self):
 
@@ -577,31 +582,33 @@ def test_series(self):
         # inferring series
 
         # invalid type of Series
-        for s in [ Series(np.arange(10)),
-                   Series(np.arange(10.))]:
-            self.assertRaises(TypeError, lambda : frequencies.infer_freq(s))
+        for s in [Series(np.arange(10)), Series(np.arange(10.))]:
+            self.assertRaises(TypeError, lambda: frequencies.infer_freq(s))
 
         # a non-convertible string
-        self.assertRaises(ValueError, lambda : frequencies.infer_freq(Series(['foo','bar'])))
+        self.assertRaises(ValueError,
+                          lambda: frequencies.infer_freq(
+                              Series(['foo', 'bar'])))
 
         # cannot infer on PeriodIndex
         for freq in [None, 'L']:
-            s = Series(period_range('2013',periods=10,freq=freq))
-            self.assertRaises(TypeError, lambda : frequencies.infer_freq(s))
+            s = Series(period_range('2013', periods=10, freq=freq))
+            self.assertRaises(TypeError, lambda: frequencies.infer_freq(s))
         for freq in ['Y']:
-            with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
-                s = Series(period_range('2013',periods=10,freq=freq))
-            self.assertRaises(TypeError, lambda : frequencies.infer_freq(s))
+            with tm.assert_produces_warning(FutureWarning,
+                                            check_stacklevel=False):
+                s = Series(period_range('2013', periods=10, freq=freq))
+            self.assertRaises(TypeError, lambda: frequencies.infer_freq(s))
 
         # DateTimeIndex
         for freq in ['M', 'L', 'S']:
-            s = Series(date_range('20130101',periods=10,freq=freq))
+            s = Series(date_range('20130101', periods=10, freq=freq))
             inferred = frequencies.infer_freq(s)
-            self.assertEqual(inferred,freq)
+            self.assertEqual(inferred, freq)
 
-        s = Series(date_range('20130101','20130110'))
+        s = Series(date_range('20130101', '20130110'))
         inferred = frequencies.infer_freq(s)
-        self.assertEqual(inferred,'D')
+        self.assertEqual(inferred, 'D')
 
     def test_legacy_offset_warnings(self):
         for k, v in compat.iteritems(frequencies._rule_aliases):
@@ -610,34 +617,29 @@ def test_legacy_offset_warnings(self):
             exp = frequencies.get_offset(v)
             self.assertEqual(result, exp)
 
-            with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+            with tm.assert_produces_warning(FutureWarning,
+                                            check_stacklevel=False):
                 idx = date_range('2011-01-01', periods=5, freq=k)
             exp = date_range('2011-01-01', periods=5, freq=v)
             self.assert_index_equal(idx, exp)
 
 
-MONTHS = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP',
-          'OCT', 'NOV', 'DEC']
+MONTHS = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP', 'OCT',
+          'NOV', 'DEC']
 
 
 def test_is_superperiod_subperiod():
-    assert(frequencies.is_superperiod(offsets.YearEnd(), offsets.MonthEnd()))
-    assert(frequencies.is_subperiod(offsets.MonthEnd(), offsets.YearEnd()))
-
-    assert(frequencies.is_superperiod(offsets.Hour(), offsets.Minute()))
-    assert(frequencies.is_subperiod(offsets.Minute(), offsets.Hour()))
-
-    assert(frequencies.is_superperiod(offsets.Second(), offsets.Milli()))
-    assert(frequencies.is_subperiod(offsets.Milli(), offsets.Second()))
+    assert (frequencies.is_superperiod(offsets.YearEnd(), offsets.MonthEnd()))
+    assert (frequencies.is_subperiod(offsets.MonthEnd(), offsets.YearEnd()))
 
-    assert(frequencies.is_superperiod(offsets.Milli(), offsets.Micro()))
-    assert(frequencies.is_subperiod(offsets.Micro(), offsets.Milli()))
+    assert (frequencies.is_superperiod(offsets.Hour(), offsets.Minute()))
+    assert (frequencies.is_subperiod(offsets.Minute(), offsets.Hour()))
 
-    assert(frequencies.is_superperiod(offsets.Micro(), offsets.Nano()))
-    assert(frequencies.is_subperiod(offsets.Nano(), offsets.Micro()))
+    assert (frequencies.is_superperiod(offsets.Second(), offsets.Milli()))
+    assert (frequencies.is_subperiod(offsets.Milli(), offsets.Second()))
+    assert (frequencies.is_superperiod(offsets.Milli(), offsets.Micro()))
+    assert (frequencies.is_subperiod(offsets.Micro(), offsets.Milli()))
 
-if __name__ == '__main__':
-    import nose
-    nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
-                   exit=False)
+    assert (frequencies.is_superperiod(offsets.Micro(), offsets.Nano()))
+    assert (frequencies.is_subperiod(offsets.Nano(), offsets.Micro()))
diff --git a/pandas/tseries/tests/test_holiday.py b/pandas/tseries/tests/test_holiday.py
index dc07a6d455f86..62446e8e637c6 100644
--- a/pandas/tseries/tests/test_holiday.py
+++ b/pandas/tseries/tests/test_holiday.py
@@ -1,34 +1,36 @@
-
 from datetime import datetime
 
 import pandas.util.testing as tm
 from pandas import compat
 from pandas import DatetimeIndex
-from pandas.tseries.holiday import (
-    USFederalHolidayCalendar, USMemorialDay, USThanksgivingDay,
-    nearest_workday, next_monday_or_tuesday, next_monday,
-    previous_friday, sunday_to_monday, Holiday, DateOffset,
-    MO, SA, Timestamp, AbstractHolidayCalendar, get_calendar,
-    HolidayCalendarFactory, next_workday, previous_workday,
-    before_nearest_workday, EasterMonday, GoodFriday,
-    after_nearest_workday, weekend_to_monday, USLaborDay,
-    USColumbusDay, USMartinLutherKingJr, USPresidentsDay)
+from pandas.tseries.holiday import (USFederalHolidayCalendar, USMemorialDay,
+                                    USThanksgivingDay, nearest_workday,
+                                    next_monday_or_tuesday, next_monday,
+                                    previous_friday, sunday_to_monday, Holiday,
+                                    DateOffset, MO, SA, Timestamp,
+                                    AbstractHolidayCalendar, get_calendar,
+                                    HolidayCalendarFactory, next_workday,
+                                    previous_workday, before_nearest_workday,
+                                    EasterMonday, GoodFriday,
+                                    after_nearest_workday, weekend_to_monday,
+                                    USLaborDay, USColumbusDay,
+                                    USMartinLutherKingJr, USPresidentsDay)
 from pytz import utc
 
 import nose
 
 
-class TestCalendar(tm.TestCase):
 
+class TestCalendar(tm.TestCase):
     def setUp(self):
         self.holiday_list = [
-                            datetime(2012, 1, 2),
-                            datetime(2012, 1, 16),
-                            datetime(2012, 2, 20),
-                            datetime(2012, 5, 28),
-                            datetime(2012, 7, 4),
-                            datetime(2012, 9, 3),
-                            datetime(2012, 10, 8),
-                            datetime(2012, 11, 12),
-                            datetime(2012, 11, 22),
-                            datetime(2012, 12, 25)]
+            datetime(2012, 1, 2),
+            datetime(2012, 1, 16),
+            datetime(2012, 2, 20),
+            datetime(2012, 5, 28),
+            datetime(2012, 7, 4),
+            datetime(2012, 9, 3),
+            datetime(2012, 10, 8),
+            datetime(2012, 11, 12),
+            datetime(2012, 11, 22),
+            datetime(2012, 12, 25)]
 
         self.start_date = datetime(2012, 1, 1)
         self.end_date = datetime(2012, 12, 31)
@@ -36,64 +38,55 @@ def setUp(self):
 
     def test_calendar(self):
         calendar = USFederalHolidayCalendar()
-        holidays = calendar.holidays(self.start_date,
-                                     self.end_date)
+        holidays = calendar.holidays(self.start_date, self.end_date)
 
         holidays_1 = calendar.holidays(
-                                     self.start_date.strftime('%Y-%m-%d'),
-                                     self.end_date.strftime('%Y-%m-%d'))
+            self.start_date.strftime('%Y-%m-%d'),
+            self.end_date.strftime('%Y-%m-%d'))
         holidays_2 = calendar.holidays(
-                                     Timestamp(self.start_date),
-                                     Timestamp(self.end_date))
+            Timestamp(self.start_date),
+            Timestamp(self.end_date))
 
-        self.assertEqual(list(holidays.to_pydatetime()),
-                         self.holiday_list)
-        self.assertEqual(list(holidays_1.to_pydatetime()),
-                         self.holiday_list)
-        self.assertEqual(list(holidays_2.to_pydatetime()),
-                         self.holiday_list)
+        self.assertEqual(list(holidays.to_pydatetime()), self.holiday_list)
+        self.assertEqual(list(holidays_1.to_pydatetime()), self.holiday_list)
+        self.assertEqual(list(holidays_2.to_pydatetime()), self.holiday_list)
 
     def test_calendar_caching(self):
         # Test for issue #9552
 
         class TestCalendar(AbstractHolidayCalendar):
             def __init__(self, name=None, rules=None):
-                super(TestCalendar, self).__init__(
-                    name=name,
-                    rules=rules
-                )
+                super(TestCalendar, self).__init__(name=name, rules=rules)
 
         jan1 = TestCalendar(rules=[Holiday('jan1', year=2015, month=1, day=1)])
         jan2 = TestCalendar(rules=[Holiday('jan2', year=2015, month=1, day=2)])
 
-        tm.assert_index_equal(
-            jan1.holidays(),
-            DatetimeIndex(['01-Jan-2015'])
-        )
-        tm.assert_index_equal(
-            jan2.holidays(),
-            DatetimeIndex(['02-Jan-2015'])
-        )
-
+        tm.assert_index_equal(jan1.holidays(), DatetimeIndex(['01-Jan-2015']))
+        tm.assert_index_equal(jan2.holidays(), DatetimeIndex(['02-Jan-2015']))
+
     def test_calendar_observance_dates(self):
         # Test for issue 11477
         USFedCal = get_calendar('USFederalHolidayCalendar')
-        holidays0 = USFedCal.holidays(datetime(2015,7,3), datetime(2015,7,3))  # <-- same start and end dates
-        holidays1 = USFedCal.holidays(datetime(2015,7,3), datetime(2015,7,6))  # <-- different start and end dates
-        holidays2 = USFedCal.holidays(datetime(2015,7,3), datetime(2015,7,3))  # <-- same start and end dates
-
+        holidays0 = USFedCal.holidays(datetime(2015, 7, 3), datetime(
+            2015, 7, 3))  # <-- same start and end dates
+        holidays1 = USFedCal.holidays(datetime(2015, 7, 3), datetime(
+            2015, 7, 6))  # <-- different start and end dates
+        holidays2 = USFedCal.holidays(datetime(2015, 7, 3), datetime(
+            2015, 7, 3))  # <-- same start and end dates
+
         tm.assert_index_equal(holidays0, holidays1)
         tm.assert_index_equal(holidays0, holidays2)
-
+
     def test_rule_from_name(self):
         USFedCal = get_calendar('USFederalHolidayCalendar')
-        self.assertEqual(USFedCal.rule_from_name('Thanksgiving'), USThanksgivingDay)
+        self.assertEqual(USFedCal.rule_from_name(
+            'Thanksgiving'), USThanksgivingDay)
 
 
-class TestHoliday(tm.TestCase):
 
+class TestHoliday(tm.TestCase):
    def setUp(self):
         self.start_date = datetime(2011, 1, 1)
-        self.end_date   = datetime(2020, 12, 31)
+        self.end_date = datetime(2020, 12, 31)
 
     def check_results(self, holiday, start, end, expected):
         self.assertEqual(list(holiday.dates(start, end)), expected)
@@ -109,23 +102,21 @@ def check_results(self, holiday,
start, end, expected): ) def test_usmemorialday(self): - self.check_results( - holiday=USMemorialDay, - start=self.start_date, - end=self.end_date, - expected=[ - datetime(2011, 5, 30), - datetime(2012, 5, 28), - datetime(2013, 5, 27), - datetime(2014, 5, 26), - datetime(2015, 5, 25), - datetime(2016, 5, 30), - datetime(2017, 5, 29), - datetime(2018, 5, 28), - datetime(2019, 5, 27), - datetime(2020, 5, 25), - ], - ) + self.check_results(holiday=USMemorialDay, + start=self.start_date, + end=self.end_date, + expected=[ + datetime(2011, 5, 30), + datetime(2012, 5, 28), + datetime(2013, 5, 27), + datetime(2014, 5, 26), + datetime(2015, 5, 25), + datetime(2016, 5, 30), + datetime(2017, 5, 29), + datetime(2018, 5, 28), + datetime(2019, 5, 27), + datetime(2020, 5, 25), + ], ) def test_non_observed_holiday(self): @@ -154,61 +145,55 @@ def test_non_observed_holiday(self): def test_easter(self): - self.check_results( - EasterMonday, - start=self.start_date, - end=self.end_date, - expected=[ - Timestamp('2011-04-25 00:00:00'), - Timestamp('2012-04-09 00:00:00'), - Timestamp('2013-04-01 00:00:00'), - Timestamp('2014-04-21 00:00:00'), - Timestamp('2015-04-06 00:00:00'), - Timestamp('2016-03-28 00:00:00'), - Timestamp('2017-04-17 00:00:00'), - Timestamp('2018-04-02 00:00:00'), - Timestamp('2019-04-22 00:00:00'), - Timestamp('2020-04-13 00:00:00'), - ], - ) - self.check_results( - GoodFriday, - start=self.start_date, - end=self.end_date, - expected=[ - Timestamp('2011-04-22 00:00:00'), - Timestamp('2012-04-06 00:00:00'), - Timestamp('2013-03-29 00:00:00'), - Timestamp('2014-04-18 00:00:00'), - Timestamp('2015-04-03 00:00:00'), - Timestamp('2016-03-25 00:00:00'), - Timestamp('2017-04-14 00:00:00'), - Timestamp('2018-03-30 00:00:00'), - Timestamp('2019-04-19 00:00:00'), - Timestamp('2020-04-10 00:00:00'), - ], - ) + self.check_results(EasterMonday, + start=self.start_date, + end=self.end_date, + expected=[ + Timestamp('2011-04-25 00:00:00'), + Timestamp('2012-04-09 00:00:00'), + 
Timestamp('2013-04-01 00:00:00'), + Timestamp('2014-04-21 00:00:00'), + Timestamp('2015-04-06 00:00:00'), + Timestamp('2016-03-28 00:00:00'), + Timestamp('2017-04-17 00:00:00'), + Timestamp('2018-04-02 00:00:00'), + Timestamp('2019-04-22 00:00:00'), + Timestamp('2020-04-13 00:00:00'), + ], ) + self.check_results(GoodFriday, + start=self.start_date, + end=self.end_date, + expected=[ + Timestamp('2011-04-22 00:00:00'), + Timestamp('2012-04-06 00:00:00'), + Timestamp('2013-03-29 00:00:00'), + Timestamp('2014-04-18 00:00:00'), + Timestamp('2015-04-03 00:00:00'), + Timestamp('2016-03-25 00:00:00'), + Timestamp('2017-04-14 00:00:00'), + Timestamp('2018-03-30 00:00:00'), + Timestamp('2019-04-19 00:00:00'), + Timestamp('2020-04-10 00:00:00'), + ], ) def test_usthanksgivingday(self): - self.check_results( - USThanksgivingDay, - start=self.start_date, - end=self.end_date, - expected=[ - datetime(2011, 11, 24), - datetime(2012, 11, 22), - datetime(2013, 11, 28), - datetime(2014, 11, 27), - datetime(2015, 11, 26), - datetime(2016, 11, 24), - datetime(2017, 11, 23), - datetime(2018, 11, 22), - datetime(2019, 11, 28), - datetime(2020, 11, 26), - ], - ) - + self.check_results(USThanksgivingDay, + start=self.start_date, + end=self.end_date, + expected=[ + datetime(2011, 11, 24), + datetime(2012, 11, 22), + datetime(2013, 11, 28), + datetime(2014, 11, 27), + datetime(2015, 11, 26), + datetime(2016, 11, 24), + datetime(2017, 11, 23), + datetime(2018, 11, 22), + datetime(2019, 11, 28), + datetime(2020, 11, 26), + ], ) + def test_holidays_within_dates(self): # Fix holiday behavior found in #11477 # where holiday.dates returned dates outside start/end date @@ -216,56 +201,55 @@ def test_holidays_within_dates(self): # was not in the original date range (e.g., 7/4/2015 -> 7/3/2015) start_date = datetime(2015, 7, 1) end_date = datetime(2015, 7, 1) - + calendar = get_calendar('USFederalHolidayCalendar') new_years = calendar.rule_from_name('New Years Day') july_4th = 
calendar.rule_from_name('July 4th') veterans_day = calendar.rule_from_name('Veterans Day') christmas = calendar.rule_from_name('Christmas') - + # Holiday: (start/end date, holiday) - holidays = {USMemorialDay: ("2015-05-25", "2015-05-25"), - USLaborDay: ("2015-09-07", "2015-09-07"), - USColumbusDay: ("2015-10-12", "2015-10-12"), - USThanksgivingDay: ("2015-11-26", "2015-11-26"), - USMartinLutherKingJr: ("2015-01-19", "2015-01-19"), - USPresidentsDay: ("2015-02-16", "2015-02-16"), - GoodFriday: ("2015-04-03", "2015-04-03"), - EasterMonday: [("2015-04-06", "2015-04-06"), - ("2015-04-05", [])], - new_years: [("2015-01-01", "2015-01-01"), - ("2011-01-01", []), - ("2010-12-31", "2010-12-31")], - july_4th: [("2015-07-03", "2015-07-03"), - ("2015-07-04", [])], - veterans_day: [("2012-11-11", []), - ("2012-11-12", "2012-11-12")], - christmas: [("2011-12-25", []), - ("2011-12-26", "2011-12-26")]} - + holidays = {USMemorialDay: ("2015-05-25", "2015-05-25"), + USLaborDay: ("2015-09-07", "2015-09-07"), + USColumbusDay: ("2015-10-12", "2015-10-12"), + USThanksgivingDay: ("2015-11-26", "2015-11-26"), + USMartinLutherKingJr: ("2015-01-19", "2015-01-19"), + USPresidentsDay: ("2015-02-16", "2015-02-16"), + GoodFriday: ("2015-04-03", "2015-04-03"), + EasterMonday: [("2015-04-06", "2015-04-06"), + ("2015-04-05", [])], + new_years: [("2015-01-01", "2015-01-01"), + ("2011-01-01", []), + ("2010-12-31", "2010-12-31")], + july_4th: [("2015-07-03", "2015-07-03"), + ("2015-07-04", [])], + veterans_day: [("2012-11-11", []), + ("2012-11-12", "2012-11-12")], + christmas: [("2011-12-25", []), + ("2011-12-26", "2011-12-26")]} + for rule, dates in compat.iteritems(holidays): empty_dates = rule.dates(start_date, end_date) self.assertEqual(empty_dates.tolist(), []) - + if isinstance(dates, tuple): dates = [dates] - + for start, expected in dates: if len(expected): expected = [Timestamp(expected)] self.check_results(rule, start, start, expected) def test_argument_types(self): - holidays = 
USThanksgivingDay.dates(self.start_date, - self.end_date) + holidays = USThanksgivingDay.dates(self.start_date, self.end_date) holidays_1 = USThanksgivingDay.dates( - self.start_date.strftime('%Y-%m-%d'), - self.end_date.strftime('%Y-%m-%d')) + self.start_date.strftime('%Y-%m-%d'), + self.end_date.strftime('%Y-%m-%d')) holidays_2 = USThanksgivingDay.dates( - Timestamp(self.start_date), - Timestamp(self.end_date)) + Timestamp(self.start_date), + Timestamp(self.end_date)) self.assert_index_equal(holidays, holidays_1) self.assert_index_equal(holidays, holidays_2) @@ -291,9 +275,11 @@ class TestCalendar(AbstractHolidayCalendar): self.assertEqual(TestCalendar, calendar.__class__) def test_factory(self): - class_1 = HolidayCalendarFactory('MemorialDay', AbstractHolidayCalendar, + class_1 = HolidayCalendarFactory('MemorialDay', + AbstractHolidayCalendar, USMemorialDay) - class_2 = HolidayCalendarFactory('Thansksgiving', AbstractHolidayCalendar, + class_2 = HolidayCalendarFactory('Thansksgiving', + AbstractHolidayCalendar, USThanksgivingDay) class_3 = HolidayCalendarFactory('Combined', class_1, class_2) @@ -303,15 +289,14 @@ def test_factory(self): class TestObservanceRules(tm.TestCase): - def setUp(self): - self.we = datetime(2014, 4, 9) - self.th = datetime(2014, 4, 10) - self.fr = datetime(2014, 4, 11) - self.sa = datetime(2014, 4, 12) - self.su = datetime(2014, 4, 13) - self.mo = datetime(2014, 4, 14) - self.tu = datetime(2014, 4, 15) + self.we = datetime(2014, 4, 9) + self.th = datetime(2014, 4, 10) + self.fr = datetime(2014, 4, 11) + self.sa = datetime(2014, 4, 12) + self.su = datetime(2014, 4, 13) + self.mo = datetime(2014, 4, 14) + self.tu = datetime(2014, 4, 15) def test_next_monday(self): self.assertEqual(next_monday(self.sa), self.mo) @@ -353,44 +338,55 @@ def test_before_nearest_workday(self): self.assertEqual(before_nearest_workday(self.sa), self.th) self.assertEqual(before_nearest_workday(self.su), self.fr) self.assertEqual(before_nearest_workday(self.tu), 
self.mo) - + def test_after_nearest_workday(self): self.assertEqual(after_nearest_workday(self.sa), self.mo) self.assertEqual(after_nearest_workday(self.su), self.tu) self.assertEqual(after_nearest_workday(self.fr), self.mo) + class TestFederalHolidayCalendar(tm.TestCase): # Test for issue 10278 def test_no_mlk_before_1984(self): class MLKCalendar(AbstractHolidayCalendar): - rules=[USMartinLutherKingJr] - holidays = MLKCalendar().holidays(start='1984', end='1988').to_pydatetime().tolist() + rules = [USMartinLutherKingJr] + + holidays = MLKCalendar().holidays(start='1984', + end='1988').to_pydatetime().tolist() # Testing to make sure holiday is not incorrectly observed before 1986 - self.assertEqual(holidays, [datetime(1986, 1, 20, 0, 0), datetime(1987, 1, 19, 0, 0)]) + self.assertEqual(holidays, [datetime(1986, 1, 20, 0, 0), datetime( + 1987, 1, 19, 0, 0)]) def test_memorial_day(self): class MemorialDay(AbstractHolidayCalendar): - rules=[USMemorialDay] - holidays = MemorialDay().holidays(start='1971', end='1980').to_pydatetime().tolist() + rules = [USMemorialDay] + + holidays = MemorialDay().holidays(start='1971', + end='1980').to_pydatetime().tolist() # Fixes 5/31 error and checked manually against wikipedia - self.assertEqual(holidays, [datetime(1971, 5, 31, 0, 0), datetime(1972, 5, 29, 0, 0), - datetime(1973, 5, 28, 0, 0), datetime(1974, 5, 27, 0, 0), - datetime(1975, 5, 26, 0, 0), datetime(1976, 5, 31, 0, 0), - datetime(1977, 5, 30, 0, 0), datetime(1978, 5, 29, 0, 0), - datetime(1979, 5, 28, 0, 0)]) + self.assertEqual(holidays, [datetime(1971, 5, 31, 0, 0), + datetime(1972, 5, 29, 0, 0), + datetime(1973, 5, 28, 0, 0), + datetime(1974, 5, 27, 0, + 0), datetime(1975, 5, 26, 0, 0), + datetime(1976, 5, 31, 0, + 0), datetime(1977, 5, 30, 0, 0), + datetime(1978, 5, 29, 0, + 0), datetime(1979, 5, 28, 0, 0)]) + class TestHolidayConflictingArguments(tm.TestCase): # GH 10217 def test_both_offset_observance_raises(self): + with self.assertRaises(NotImplementedError): + 
Holiday("Cyber Monday", month=11, day=1, + offset=[DateOffset(weekday=SA(4))], + observance=next_monday) - with self.assertRaises(NotImplementedError) as cm: - h = Holiday("Cyber Monday", month=11, day=1, - offset=[DateOffset(weekday=SA(4))], observance=next_monday) if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) - diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py index 70e0cc288458e..f1b5172a838cf 100644 --- a/pandas/tseries/tests/test_offsets.py +++ b/pandas/tseries/tests/test_offsets.py @@ -6,21 +6,22 @@ import nose from nose.tools import assert_raises - import numpy as np -from pandas.core.datetools import ( - bday, BDay, CDay, BQuarterEnd, BMonthEnd, BusinessHour, - CBMonthEnd, CBMonthBegin, - BYearEnd, MonthEnd, MonthBegin, BYearBegin, CustomBusinessDay, - QuarterBegin, BQuarterBegin, BMonthBegin, DateOffset, Week, - YearBegin, YearEnd, Hour, Minute, Second, Day, Micro, Milli, Nano, Easter, - WeekOfMonth, format, ole2datetime, QuarterEnd, to_datetime, normalize_date, - get_offset, get_standard_freq) - -from pandas import Series -from pandas.tseries.frequencies import _offset_map, get_freq_code, _get_freq_str -from pandas.tseries.index import _to_m8, DatetimeIndex, _daterange_cache, date_range +from pandas.core.datetools import (bday, BDay, CDay, BQuarterEnd, BMonthEnd, + BusinessHour, CBMonthEnd, CBMonthBegin, + BYearEnd, MonthEnd, MonthBegin, BYearBegin, + QuarterBegin, + BQuarterBegin, BMonthBegin, DateOffset, + Week, YearBegin, YearEnd, Hour, Minute, + Second, Day, Micro, Milli, Nano, Easter, + WeekOfMonth, format, ole2datetime, + QuarterEnd, to_datetime, normalize_date, + get_offset, get_standard_freq) + +from pandas.tseries.frequencies import (_offset_map, get_freq_code, + _get_freq_str) +from pandas.tseries.index import _to_m8, DatetimeIndex, _daterange_cache from pandas.tseries.tools import parse_time_string, DateParseError import 
pandas.tseries.offsets as offsets from pandas.io.pickle import read_pickle @@ -41,9 +42,8 @@ def test_monthrange(): for m in range(1, 13): assert tslib.monthrange(y, m) == calendar.monthrange(y, m) - #### -## Misc function tests +# Misc function tests #### @@ -82,15 +82,16 @@ def test_to_m8(): tm.assertIsInstance(valu, np.datetime64) # assert valu == np.datetime64(datetime(2007,10,1)) -# def test_datetime64_box(): -# valu = np.datetime64(datetime(2007,10,1)) -# valb = _dt_box(valu) -# assert type(valb) == datetime -# assert valb == datetime(2007,10,1) + # def test_datetime64_box(): + # valu = np.datetime64(datetime(2007,10,1)) + # valb = _dt_box(valu) + # assert type(valb) == datetime + # assert valb == datetime(2007,10,1) + + ##### + # DateOffset Tests + ##### -##### -### DateOffset Tests -##### class Base(tm.TestCase): _offset = None @@ -129,11 +130,12 @@ def test_apply_out_of_range(self): if self._offset is None: return - # try to create an out-of-bounds result timestamp; if we can't create the offset - # skip + # try to create an out-of-bounds result timestamp; if we can't create + # the offset skip try: if self._offset is BusinessHour: - # Using 10000 in BusinessHour fails in tz check because of DST difference + # Using 10000 in BusinessHour fails in tz check because of DST + # difference offset = self._get_offset(self._offset, value=100000) else: offset = self._get_offset(self._offset, value=10000) @@ -154,11 +156,12 @@ def test_apply_out_of_range(self): except (tslib.OutOfBoundsDatetime): raise except (ValueError, KeyError) as e: - raise nose.SkipTest("cannot create out_of_range offset: {0} {1}".format(str(self).split('.')[-1],e)) + raise nose.SkipTest( + "cannot create out_of_range offset: {0} {1}".format( + str(self).split('.')[-1], e)) class TestCommon(Base): - def setUp(self): # exected value created by Base._get_offset @@ -167,11 +170,15 @@ def setUp(self): self.expecteds = {'Day': Timestamp('2011-01-02 09:00:00'), 'DateOffset': Timestamp('2011-01-02 
09:00:00'), 'BusinessDay': Timestamp('2011-01-03 09:00:00'), - 'CustomBusinessDay': Timestamp('2011-01-03 09:00:00'), - 'CustomBusinessMonthEnd': Timestamp('2011-01-31 09:00:00'), - 'CustomBusinessMonthBegin': Timestamp('2011-01-03 09:00:00'), + 'CustomBusinessDay': + Timestamp('2011-01-03 09:00:00'), + 'CustomBusinessMonthEnd': + Timestamp('2011-01-31 09:00:00'), + 'CustomBusinessMonthBegin': + Timestamp('2011-01-03 09:00:00'), 'MonthBegin': Timestamp('2011-02-01 09:00:00'), - 'BusinessMonthBegin': Timestamp('2011-01-03 09:00:00'), + 'BusinessMonthBegin': + Timestamp('2011-01-03 09:00:00'), 'MonthEnd': Timestamp('2011-01-31 09:00:00'), 'BusinessMonthEnd': Timestamp('2011-01-31 09:00:00'), 'YearBegin': Timestamp('2012-01-01 09:00:00'), @@ -194,7 +201,8 @@ def setUp(self): 'Second': Timestamp('2011-01-01 09:00:01'), 'Milli': Timestamp('2011-01-01 09:00:00.001000'), 'Micro': Timestamp('2011-01-01 09:00:00.000001'), - 'Nano': Timestamp(np.datetime64('2011-01-01T09:00:00.000000001Z'))} + 'Nano': Timestamp(np.datetime64( + '2011-01-01T09:00:00.000000001Z'))} def test_return_type(self): for offset in self.offset_types: @@ -227,7 +235,8 @@ def test_offset_freqstr(self): offset = self._get_offset(offset_klass) freqstr = offset.freqstr - if freqstr not in ('<Easter>', "<DateOffset: kwds={'days': 1}>", + if freqstr not in ('<Easter>', + "<DateOffset: kwds={'days': 1}>", 'LWOM-SAT', ): code = get_offset(freqstr) self.assertEqual(offset.rule_code, code) @@ -298,8 +307,9 @@ def test_rollforward(self): expecteds = self.expecteds.copy() # result will not be changed if the target is on the offset - no_changes = ['Day', 'MonthBegin', 'YearBegin', 'Week', 'Hour', 'Minute', - 'Second', 'Milli', 'Micro', 'Nano', 'DateOffset'] + no_changes = ['Day', 'MonthBegin', 'YearBegin', 'Week', 'Hour', + 'Minute', 'Second', 'Milli', 'Micro', 'Nano', + 'DateOffset'] for n in no_changes: expecteds[n] = Timestamp('2011/01/01 09:00') @@ -328,16 +338,19 @@ def test_rollforward(self): for offset in 
self.offset_types: for dt in [sdt, ndt]: expected = expecteds[offset.__name__] - self._check_offsetfunc_works(offset, 'rollforward', dt, expected) + self._check_offsetfunc_works(offset, 'rollforward', dt, + expected) expected = norm_expected[offset.__name__] - self._check_offsetfunc_works(offset, 'rollforward', dt, expected, - normalize=True) + self._check_offsetfunc_works(offset, 'rollforward', dt, + expected, normalize=True) def test_rollback(self): expecteds = {'BusinessDay': Timestamp('2010-12-31 09:00:00'), 'CustomBusinessDay': Timestamp('2010-12-31 09:00:00'), - 'CustomBusinessMonthEnd': Timestamp('2010-12-31 09:00:00'), - 'CustomBusinessMonthBegin': Timestamp('2010-12-01 09:00:00'), + 'CustomBusinessMonthEnd': + Timestamp('2010-12-31 09:00:00'), + 'CustomBusinessMonthBegin': + Timestamp('2010-12-01 09:00:00'), 'BusinessMonthBegin': Timestamp('2010-12-01 09:00:00'), 'MonthEnd': Timestamp('2010-12-31 09:00:00'), 'BusinessMonthEnd': Timestamp('2010-12-31 09:00:00'), @@ -386,8 +399,8 @@ def test_rollback(self): self._check_offsetfunc_works(offset, 'rollback', dt, expected) expected = norm_expected[offset.__name__] - self._check_offsetfunc_works(offset, 'rollback', - dt, expected, normalize=True) + self._check_offsetfunc_works(offset, 'rollback', dt, expected, + normalize=True) def test_onOffset(self): for offset in self.offset_types: @@ -455,6 +468,7 @@ def test_pickle_v0_15_2(self): # tm.assert_dict_equal(offsets, read_pickle(pickle_path)) + class TestDateOffset(Base): _multiprocess_can_split_ = True @@ -474,19 +488,19 @@ def test_mul(self): def test_constructor(self): - assert((self.d + DateOffset(months=2)) == datetime(2008, 3, 2)) - assert((self.d - DateOffset(months=2)) == datetime(2007, 11, 2)) + assert ((self.d + DateOffset(months=2)) == datetime(2008, 3, 2)) + assert ((self.d - DateOffset(months=2)) == datetime(2007, 11, 2)) - assert((self.d + DateOffset(2)) == datetime(2008, 1, 4)) + assert ((self.d + DateOffset(2)) == datetime(2008, 1, 4)) assert not 
DateOffset(2).isAnchored() assert DateOffset(1).isAnchored() d = datetime(2008, 1, 31) - assert((d + DateOffset(months=1)) == datetime(2008, 2, 29)) + assert ((d + DateOffset(months=1)) == datetime(2008, 2, 29)) def test_copy(self): - assert(DateOffset(months=2).copy() == DateOffset(months=2)) + assert (DateOffset(months=2).copy() == DateOffset(months=2)) def test_eq(self): offset1 = DateOffset(days=1) @@ -553,8 +567,7 @@ def testMult1(self): self.assertEqual(self.d + 10 * self.offset, self.d + BDay(10)) def testMult2(self): - self.assertEqual(self.d + (-5 * BDay(-10)), - self.d + BDay(50)) + self.assertEqual(self.d + (-5 * BDay(-10)), self.d + BDay(50)) def testRollback1(self): self.assertEqual(BDay(10).rollback(self.d), self.d) @@ -592,49 +605,44 @@ def test_onOffset(self): tests = [(BDay(), datetime(2008, 1, 1), True), (BDay(), datetime(2008, 1, 5), False)] - for offset, date, expected in tests: - assertOnOffset(offset, date, expected) + for offset, d, expected in tests: + assertOnOffset(offset, d, expected) def test_apply(self): tests = [] - tests.append((bday, - {datetime(2008, 1, 1): datetime(2008, 1, 2), - datetime(2008, 1, 4): datetime(2008, 1, 7), - datetime(2008, 1, 5): datetime(2008, 1, 7), - datetime(2008, 1, 6): datetime(2008, 1, 7), - datetime(2008, 1, 7): datetime(2008, 1, 8)})) - - tests.append((2 * bday, - {datetime(2008, 1, 1): datetime(2008, 1, 3), - datetime(2008, 1, 4): datetime(2008, 1, 8), - datetime(2008, 1, 5): datetime(2008, 1, 8), - datetime(2008, 1, 6): datetime(2008, 1, 8), - datetime(2008, 1, 7): datetime(2008, 1, 9)})) - - tests.append((-bday, - {datetime(2008, 1, 1): datetime(2007, 12, 31), - datetime(2008, 1, 4): datetime(2008, 1, 3), - datetime(2008, 1, 5): datetime(2008, 1, 4), - datetime(2008, 1, 6): datetime(2008, 1, 4), - datetime(2008, 1, 7): datetime(2008, 1, 4), - datetime(2008, 1, 8): datetime(2008, 1, 7)})) - - tests.append((-2 * bday, - {datetime(2008, 1, 1): datetime(2007, 12, 28), - datetime(2008, 1, 4): datetime(2008, 
1, 2), - datetime(2008, 1, 5): datetime(2008, 1, 3), - datetime(2008, 1, 6): datetime(2008, 1, 3), - datetime(2008, 1, 7): datetime(2008, 1, 3), - datetime(2008, 1, 8): datetime(2008, 1, 4), - datetime(2008, 1, 9): datetime(2008, 1, 7)})) - - tests.append((BDay(0), - {datetime(2008, 1, 1): datetime(2008, 1, 1), - datetime(2008, 1, 4): datetime(2008, 1, 4), - datetime(2008, 1, 5): datetime(2008, 1, 7), - datetime(2008, 1, 6): datetime(2008, 1, 7), - datetime(2008, 1, 7): datetime(2008, 1, 7)})) + tests.append((bday, {datetime(2008, 1, 1): datetime(2008, 1, 2), + datetime(2008, 1, 4): datetime(2008, 1, 7), + datetime(2008, 1, 5): datetime(2008, 1, 7), + datetime(2008, 1, 6): datetime(2008, 1, 7), + datetime(2008, 1, 7): datetime(2008, 1, 8)})) + + tests.append((2 * bday, {datetime(2008, 1, 1): datetime(2008, 1, 3), + datetime(2008, 1, 4): datetime(2008, 1, 8), + datetime(2008, 1, 5): datetime(2008, 1, 8), + datetime(2008, 1, 6): datetime(2008, 1, 8), + datetime(2008, 1, 7): datetime(2008, 1, 9)})) + + tests.append((-bday, {datetime(2008, 1, 1): datetime(2007, 12, 31), + datetime(2008, 1, 4): datetime(2008, 1, 3), + datetime(2008, 1, 5): datetime(2008, 1, 4), + datetime(2008, 1, 6): datetime(2008, 1, 4), + datetime(2008, 1, 7): datetime(2008, 1, 4), + datetime(2008, 1, 8): datetime(2008, 1, 7)})) + + tests.append((-2 * bday, {datetime(2008, 1, 1): datetime(2007, 12, 28), + datetime(2008, 1, 4): datetime(2008, 1, 2), + datetime(2008, 1, 5): datetime(2008, 1, 3), + datetime(2008, 1, 6): datetime(2008, 1, 3), + datetime(2008, 1, 7): datetime(2008, 1, 3), + datetime(2008, 1, 8): datetime(2008, 1, 4), + datetime(2008, 1, 9): datetime(2008, 1, 7)})) + + tests.append((BDay(0), {datetime(2008, 1, 1): datetime(2008, 1, 1), + datetime(2008, 1, 4): datetime(2008, 1, 4), + datetime(2008, 1, 5): datetime(2008, 1, 7), + datetime(2008, 1, 6): datetime(2008, 1, 7), + datetime(2008, 1, 7): datetime(2008, 1, 7)})) for offset, cases in tests: for base, expected in 
compat.iteritems(cases): @@ -660,7 +668,7 @@ def test_apply_large_n(self): self.assertEqual(rs, xp) off = BDay() * 10 - rs = datetime(2014, 1, 5) + off # see #5890 + rs = datetime(2014, 1, 5) + off # see #5890 xp = datetime(2014, 1, 17) self.assertEqual(rs, xp) @@ -690,7 +698,8 @@ def setUp(self): from datetime import time as dt_time self.offset5 = BusinessHour(start=dt_time(11, 0), end=dt_time(14, 30)) self.offset6 = BusinessHour(start='20:00', end='05:00') - self.offset7 = BusinessHour(n=-2, start=dt_time(21, 30), end=dt_time(6, 30)) + self.offset7 = BusinessHour(n=-2, start=dt_time(21, 30), + end=dt_time(6, 30)) def test_constructor_errors(self): from datetime import time as dt_time @@ -710,13 +719,17 @@ def test_different_normalize_equals(self): def test_repr(self): self.assertEqual(repr(self.offset1), '<BusinessHour: BH=09:00-17:00>') - self.assertEqual(repr(self.offset2), '<3 * BusinessHours: BH=09:00-17:00>') - self.assertEqual(repr(self.offset3), '<-1 * BusinessHour: BH=09:00-17:00>') - self.assertEqual(repr(self.offset4), '<-4 * BusinessHours: BH=09:00-17:00>') + self.assertEqual(repr(self.offset2), + '<3 * BusinessHours: BH=09:00-17:00>') + self.assertEqual(repr(self.offset3), + '<-1 * BusinessHour: BH=09:00-17:00>') + self.assertEqual(repr(self.offset4), + '<-4 * BusinessHours: BH=09:00-17:00>') self.assertEqual(repr(self.offset5), '<BusinessHour: BH=11:00-14:30>') self.assertEqual(repr(self.offset6), '<BusinessHour: BH=20:00-05:00>') - self.assertEqual(repr(self.offset7), '<-2 * BusinessHours: BH=21:30-06:30>') + self.assertEqual(repr(self.offset7), + '<-2 * BusinessHours: BH=21:30-06:30>') def test_with_offset(self): expected = Timestamp('2014-07-01 13:00') @@ -730,9 +743,10 @@ def testEQ(self): self.assertNotEqual(BusinessHour(), BusinessHour(-1)) self.assertEqual(BusinessHour(start='09:00'), BusinessHour()) - self.assertNotEqual(BusinessHour(start='09:00'), BusinessHour(start='09:01')) + self.assertNotEqual(BusinessHour(start='09:00'), + 
BusinessHour(start='09:01')) self.assertNotEqual(BusinessHour(start='09:00', end='17:00'), - BusinessHour(start='17:00', end='09:01')) + BusinessHour(start='17:00', end='09:01')) def test_hash(self): self.assertEqual(hash(self.offset2), hash(self.offset2)) @@ -768,23 +782,28 @@ def testRollback1(self): self.assertEqual(self.offset2.rollback(self.d), self.d) self.assertEqual(self.offset3.rollback(self.d), self.d) self.assertEqual(self.offset4.rollback(self.d), self.d) - self.assertEqual(self.offset5.rollback(self.d), datetime(2014, 6, 30, 14, 30)) - self.assertEqual(self.offset6.rollback(self.d), datetime(2014, 7, 1, 5, 0)) - self.assertEqual(self.offset7.rollback(self.d), datetime(2014, 7, 1, 6, 30)) + self.assertEqual(self.offset5.rollback(self.d), + datetime(2014, 6, 30, 14, 30)) + self.assertEqual(self.offset6.rollback( + self.d), datetime(2014, 7, 1, 5, 0)) + self.assertEqual(self.offset7.rollback( + self.d), datetime(2014, 7, 1, 6, 30)) d = datetime(2014, 7, 1, 0) self.assertEqual(self.offset1.rollback(d), datetime(2014, 6, 30, 17)) self.assertEqual(self.offset2.rollback(d), datetime(2014, 6, 30, 17)) self.assertEqual(self.offset3.rollback(d), datetime(2014, 6, 30, 17)) self.assertEqual(self.offset4.rollback(d), datetime(2014, 6, 30, 17)) - self.assertEqual(self.offset5.rollback(d), datetime(2014, 6, 30, 14, 30)) + self.assertEqual(self.offset5.rollback( + d), datetime(2014, 6, 30, 14, 30)) self.assertEqual(self.offset6.rollback(d), d) self.assertEqual(self.offset7.rollback(d), d) self.assertEqual(self._offset(5).rollback(self.d), self.d) def testRollback2(self): - self.assertEqual(self._offset(-3).rollback(datetime(2014, 7, 5, 15, 0)), + self.assertEqual(self._offset(-3) + .rollback(datetime(2014, 7, 5, 15, 0)), datetime(2014, 7, 4, 17, 0)) def testRollforward1(self): @@ -792,9 +811,12 @@ def testRollforward1(self): self.assertEqual(self.offset2.rollforward(self.d), self.d) self.assertEqual(self.offset3.rollforward(self.d), self.d) 
self.assertEqual(self.offset4.rollforward(self.d), self.d) - self.assertEqual(self.offset5.rollforward(self.d), datetime(2014, 7, 1, 11, 0)) - self.assertEqual(self.offset6.rollforward(self.d), datetime(2014, 7, 1, 20, 0)) - self.assertEqual(self.offset7.rollforward(self.d), datetime(2014, 7, 1, 21, 30)) + self.assertEqual(self.offset5.rollforward( + self.d), datetime(2014, 7, 1, 11, 0)) + self.assertEqual(self.offset6.rollforward( + self.d), datetime(2014, 7, 1, 20, 0)) + self.assertEqual(self.offset7.rollforward( + self.d), datetime(2014, 7, 1, 21, 30)) d = datetime(2014, 7, 1, 0) self.assertEqual(self.offset1.rollforward(d), datetime(2014, 7, 1, 9)) @@ -808,7 +830,8 @@ def testRollforward1(self): self.assertEqual(self._offset(5).rollforward(self.d), self.d) def testRollforward2(self): - self.assertEqual(self._offset(-3).rollforward(datetime(2014, 7, 5, 16, 0)), + self.assertEqual(self._offset(-3) + .rollforward(datetime(2014, 7, 5, 16, 0)), datetime(2014, 7, 7, 9)) def test_roll_date_object(self): @@ -848,7 +871,8 @@ def test_normalize(self): datetime(2014, 7, 5, 23): datetime(2014, 7, 4), datetime(2014, 7, 6, 10): datetime(2014, 7, 4)})) - tests.append((BusinessHour(1, normalize=True, start='17:00', end='04:00'), + tests.append((BusinessHour(1, normalize=True, start='17:00', + end='04:00'), {datetime(2014, 7, 1, 8): datetime(2014, 7, 1), datetime(2014, 7, 1, 17): datetime(2014, 7, 1), datetime(2014, 7, 1, 23): datetime(2014, 7, 2), @@ -866,38 +890,37 @@ def test_normalize(self): def test_onOffset(self): tests = [] - tests.append((BusinessHour(), - {datetime(2014, 7, 1, 9): True, - datetime(2014, 7, 1, 8, 59): False, - datetime(2014, 7, 1, 8): False, - datetime(2014, 7, 1, 17): True, - datetime(2014, 7, 1, 17, 1): False, - datetime(2014, 7, 1, 18): False, - datetime(2014, 7, 5, 9): False, - datetime(2014, 7, 6, 12): False})) + tests.append((BusinessHour(), {datetime(2014, 7, 1, 9): True, + datetime(2014, 7, 1, 8, 59): False, + datetime(2014, 7, 1, 8): False, + 
datetime(2014, 7, 1, 17): True, + datetime(2014, 7, 1, 17, 1): False, + datetime(2014, 7, 1, 18): False, + datetime(2014, 7, 5, 9): False, + datetime(2014, 7, 6, 12): False})) tests.append((BusinessHour(start='10:00', end='15:00'), - {datetime(2014, 7, 1, 9): False, - datetime(2014, 7, 1, 10): True, - datetime(2014, 7, 1, 15): True, - datetime(2014, 7, 1, 15, 1): False, - datetime(2014, 7, 5, 12): False, - datetime(2014, 7, 6, 12): False})) + {datetime(2014, 7, 1, 9): False, + datetime(2014, 7, 1, 10): True, + datetime(2014, 7, 1, 15): True, + datetime(2014, 7, 1, 15, 1): False, + datetime(2014, 7, 5, 12): False, + datetime(2014, 7, 6, 12): False})) tests.append((BusinessHour(start='19:00', end='05:00'), - {datetime(2014, 7, 1, 9, 0): False, - datetime(2014, 7, 1, 10, 0): False, - datetime(2014, 7, 1, 15): False, - datetime(2014, 7, 1, 15, 1): False, - datetime(2014, 7, 5, 12, 0): False, - datetime(2014, 7, 6, 12, 0): False, - datetime(2014, 7, 1, 19, 0): True, - datetime(2014, 7, 2, 0, 0): True, - datetime(2014, 7, 4, 23): True, - datetime(2014, 7, 5, 1): True, - datetime(2014, 7, 5, 5, 0): True, - datetime(2014, 7, 6, 23, 0): False, - datetime(2014, 7, 7, 3, 0): False})) + {datetime(2014, 7, 1, 9, 0): False, + datetime(2014, 7, 1, 10, 0): False, + datetime(2014, 7, 1, 15): False, + datetime(2014, 7, 1, 15, 1): False, + datetime(2014, 7, 5, 12, 0): False, + datetime(2014, 7, 6, 12, 0): False, + datetime(2014, 7, 1, 19, 0): True, + datetime(2014, 7, 2, 0, 0): True, + datetime(2014, 7, 4, 23): True, + datetime(2014, 7, 5, 1): True, + datetime(2014, 7, 5, 5, 0): True, + datetime(2014, 7, 6, 23, 0): False, + datetime(2014, 7, 7, 3, 0): False})) for offset, cases in tests: for dt, expected in compat.iteritems(cases): @@ -906,94 +929,164 @@ def test_onOffset(self): def test_opening_time(self): tests = [] - # opening time should be affected by sign of n, not by n's value and end - tests.append(([BusinessHour(), BusinessHour(n=2), BusinessHour(n=4), - 
BusinessHour(end='10:00'), BusinessHour(n=2, end='4:00'), - BusinessHour(n=4, end='15:00')], - {datetime(2014, 7, 1, 11): (datetime(2014, 7, 2, 9), datetime(2014, 7, 1, 9)), - datetime(2014, 7, 1, 18): (datetime(2014, 7, 2, 9), datetime(2014, 7, 1, 9)), - datetime(2014, 7, 1, 23): (datetime(2014, 7, 2, 9), datetime(2014, 7, 1, 9)), - datetime(2014, 7, 2, 8): (datetime(2014, 7, 2, 9), datetime(2014, 7, 1, 9)), - # if timestamp is on opening time, next opening time is as it is - datetime(2014, 7, 2, 9): (datetime(2014, 7, 2, 9), datetime(2014, 7, 2, 9)), - datetime(2014, 7, 2, 10): (datetime(2014, 7, 3, 9), datetime(2014, 7, 2, 9)), - # 2014-07-05 is saturday - datetime(2014, 7, 5, 10): (datetime(2014, 7, 7, 9), datetime(2014, 7, 4, 9)), - datetime(2014, 7, 4, 10): (datetime(2014, 7, 7, 9), datetime(2014, 7, 4, 9)), - datetime(2014, 7, 4, 23): (datetime(2014, 7, 7, 9), datetime(2014, 7, 4, 9)), - datetime(2014, 7, 6, 10): (datetime(2014, 7, 7, 9), datetime(2014, 7, 4, 9)), - datetime(2014, 7, 7, 5): (datetime(2014, 7, 7, 9), datetime(2014, 7, 4, 9)), - datetime(2014, 7, 7, 9, 1): (datetime(2014, 7, 8, 9), datetime(2014, 7, 7, 9))})) - - tests.append(([BusinessHour(start='11:15'), BusinessHour(n=2, start='11:15'), + # opening time should be affected by sign of n, not by n's value and + # end + tests.append(( + [BusinessHour(), BusinessHour(n=2), BusinessHour( + n=4), BusinessHour(end='10:00'), BusinessHour(n=2, end='4:00'), + BusinessHour(n=4, end='15:00')], + {datetime(2014, 7, 1, 11): (datetime(2014, 7, 2, 9), datetime( + 2014, 7, 1, 9)), + datetime(2014, 7, 1, 18): (datetime(2014, 7, 2, 9), datetime( + 2014, 7, 1, 9)), + datetime(2014, 7, 1, 23): (datetime(2014, 7, 2, 9), datetime( + 2014, 7, 1, 9)), + datetime(2014, 7, 2, 8): (datetime(2014, 7, 2, 9), datetime( + 2014, 7, 1, 9)), + # if timestamp is on opening time, next opening time is + # as it is + datetime(2014, 7, 2, 9): (datetime(2014, 7, 2, 9), datetime( + 2014, 7, 2, 9)), + datetime(2014, 7, 2, 10): 
(datetime(2014, 7, 3, 9), datetime( + 2014, 7, 2, 9)), + # 2014-07-05 is saturday + datetime(2014, 7, 5, 10): (datetime(2014, 7, 7, 9), datetime( + 2014, 7, 4, 9)), + datetime(2014, 7, 4, 10): (datetime(2014, 7, 7, 9), datetime( + 2014, 7, 4, 9)), + datetime(2014, 7, 4, 23): (datetime(2014, 7, 7, 9), datetime( + 2014, 7, 4, 9)), + datetime(2014, 7, 6, 10): (datetime(2014, 7, 7, 9), datetime( + 2014, 7, 4, 9)), + datetime(2014, 7, 7, 5): (datetime(2014, 7, 7, 9), datetime( + 2014, 7, 4, 9)), + datetime(2014, 7, 7, 9, 1): (datetime(2014, 7, 8, 9), datetime( + 2014, 7, 7, 9))})) + + tests.append(([BusinessHour(start='11:15'), + BusinessHour(n=2, start='11:15'), BusinessHour(n=3, start='11:15'), BusinessHour(start='11:15', end='10:00'), BusinessHour(n=2, start='11:15', end='4:00'), BusinessHour(n=3, start='11:15', end='15:00')], - {datetime(2014, 7, 1, 11): (datetime(2014, 7, 1, 11, 15), datetime(2014, 6, 30, 11, 15)), - datetime(2014, 7, 1, 18): (datetime(2014, 7, 2, 11, 15), datetime(2014, 7, 1, 11, 15)), - datetime(2014, 7, 1, 23): (datetime(2014, 7, 2, 11, 15), datetime(2014, 7, 1, 11, 15)), - datetime(2014, 7, 2, 8): (datetime(2014, 7, 2, 11, 15), datetime(2014, 7, 1, 11, 15)), - datetime(2014, 7, 2, 9): (datetime(2014, 7, 2, 11, 15), datetime(2014, 7, 1, 11, 15)), - datetime(2014, 7, 2, 10): (datetime(2014, 7, 2, 11, 15), datetime(2014, 7, 1, 11, 15)), - datetime(2014, 7, 2, 11, 15): (datetime(2014, 7, 2, 11, 15), datetime(2014, 7, 2, 11, 15)), - datetime(2014, 7, 2, 11, 15, 1): (datetime(2014, 7, 3, 11, 15), datetime(2014, 7, 2, 11, 15)), - datetime(2014, 7, 5, 10): (datetime(2014, 7, 7, 11, 15), datetime(2014, 7, 4, 11, 15)), - datetime(2014, 7, 4, 10): (datetime(2014, 7, 4, 11, 15), datetime(2014, 7, 3, 11, 15)), - datetime(2014, 7, 4, 23): (datetime(2014, 7, 7, 11, 15), datetime(2014, 7, 4, 11, 15)), - datetime(2014, 7, 6, 10): (datetime(2014, 7, 7, 11, 15), datetime(2014, 7, 4, 11, 15)), - datetime(2014, 7, 7, 5): (datetime(2014, 7, 7, 11, 15), 
datetime(2014, 7, 4, 11, 15)), - datetime(2014, 7, 7, 9, 1): (datetime(2014, 7, 7, 11, 15), datetime(2014, 7, 4, 11, 15))})) - - tests.append(([BusinessHour(-1), BusinessHour(n=-2), BusinessHour(n=-4), - BusinessHour(n=-1, end='10:00'), BusinessHour(n=-2, end='4:00'), + {datetime(2014, 7, 1, 11): (datetime( + 2014, 7, 1, 11, 15), datetime(2014, 6, 30, 11, 15)), + datetime(2014, 7, 1, 18): (datetime( + 2014, 7, 2, 11, 15), datetime(2014, 7, 1, 11, 15)), + datetime(2014, 7, 1, 23): (datetime( + 2014, 7, 2, 11, 15), datetime(2014, 7, 1, 11, 15)), + datetime(2014, 7, 2, 8): (datetime(2014, 7, 2, 11, 15), + datetime(2014, 7, 1, 11, 15)), + datetime(2014, 7, 2, 9): (datetime(2014, 7, 2, 11, 15), + datetime(2014, 7, 1, 11, 15)), + datetime(2014, 7, 2, 10): (datetime( + 2014, 7, 2, 11, 15), datetime(2014, 7, 1, 11, 15)), + datetime(2014, 7, 2, 11, 15): (datetime( + 2014, 7, 2, 11, 15), datetime(2014, 7, 2, 11, 15)), + datetime(2014, 7, 2, 11, 15, 1): (datetime( + 2014, 7, 3, 11, 15), datetime(2014, 7, 2, 11, 15)), + datetime(2014, 7, 5, 10): (datetime( + 2014, 7, 7, 11, 15), datetime(2014, 7, 4, 11, 15)), + datetime(2014, 7, 4, 10): (datetime( + 2014, 7, 4, 11, 15), datetime(2014, 7, 3, 11, 15)), + datetime(2014, 7, 4, 23): (datetime( + 2014, 7, 7, 11, 15), datetime(2014, 7, 4, 11, 15)), + datetime(2014, 7, 6, 10): (datetime( + 2014, 7, 7, 11, 15), datetime(2014, 7, 4, 11, 15)), + datetime(2014, 7, 7, 5): (datetime(2014, 7, 7, 11, 15), + datetime(2014, 7, 4, 11, 15)), + datetime(2014, 7, 7, 9, 1): ( + datetime(2014, 7, 7, 11, 15), + datetime(2014, 7, 4, 11, 15))})) + + tests.append(([BusinessHour(-1), BusinessHour(n=-2), + BusinessHour(n=-4), + BusinessHour(n=-1, end='10:00'), + BusinessHour(n=-2, end='4:00'), BusinessHour(n=-4, end='15:00')], - {datetime(2014, 7, 1, 11): (datetime(2014, 7, 1, 9), datetime(2014, 7, 2, 9)), - datetime(2014, 7, 1, 18): (datetime(2014, 7, 1, 9), datetime(2014, 7, 2, 9)), - datetime(2014, 7, 1, 23): (datetime(2014, 7, 1, 9), datetime(2014, 7, 
2, 9)), - datetime(2014, 7, 2, 8): (datetime(2014, 7, 1, 9), datetime(2014, 7, 2, 9)), - datetime(2014, 7, 2, 9): (datetime(2014, 7, 2, 9), datetime(2014, 7, 2, 9)), - datetime(2014, 7, 2, 10): (datetime(2014, 7, 2, 9), datetime(2014, 7, 3, 9)), - datetime(2014, 7, 5, 10): (datetime(2014, 7, 4, 9), datetime(2014, 7, 7, 9)), - datetime(2014, 7, 4, 10): (datetime(2014, 7, 4, 9), datetime(2014, 7, 7, 9)), - datetime(2014, 7, 4, 23): (datetime(2014, 7, 4, 9), datetime(2014, 7, 7, 9)), - datetime(2014, 7, 6, 10): (datetime(2014, 7, 4, 9), datetime(2014, 7, 7, 9)), - datetime(2014, 7, 7, 5): (datetime(2014, 7, 4, 9), datetime(2014, 7, 7, 9)), - datetime(2014, 7, 7, 9): (datetime(2014, 7, 7, 9), datetime(2014, 7, 7, 9)), - datetime(2014, 7, 7, 9, 1): (datetime(2014, 7, 7, 9), datetime(2014, 7, 8, 9))})) + {datetime(2014, 7, 1, 11): (datetime(2014, 7, 1, 9), + datetime(2014, 7, 2, 9)), + datetime(2014, 7, 1, 18): (datetime(2014, 7, 1, 9), + datetime(2014, 7, 2, 9)), + datetime(2014, 7, 1, 23): (datetime(2014, 7, 1, 9), + datetime(2014, 7, 2, 9)), + datetime(2014, 7, 2, 8): (datetime(2014, 7, 1, 9), + datetime(2014, 7, 2, 9)), + datetime(2014, 7, 2, 9): (datetime(2014, 7, 2, 9), + datetime(2014, 7, 2, 9)), + datetime(2014, 7, 2, 10): (datetime(2014, 7, 2, 9), + datetime(2014, 7, 3, 9)), + datetime(2014, 7, 5, 10): (datetime(2014, 7, 4, 9), + datetime(2014, 7, 7, 9)), + datetime(2014, 7, 4, 10): (datetime(2014, 7, 4, 9), + datetime(2014, 7, 7, 9)), + datetime(2014, 7, 4, 23): (datetime(2014, 7, 4, 9), + datetime(2014, 7, 7, 9)), + datetime(2014, 7, 6, 10): (datetime(2014, 7, 4, 9), + datetime(2014, 7, 7, 9)), + datetime(2014, 7, 7, 5): (datetime(2014, 7, 4, 9), + datetime(2014, 7, 7, 9)), + datetime(2014, 7, 7, 9): (datetime(2014, 7, 7, 9), + datetime(2014, 7, 7, 9)), + datetime(2014, 7, 7, 9, 1): (datetime(2014, 7, 7, 9), + datetime(2014, 7, 8, 9))})) tests.append(([BusinessHour(start='17:00', end='05:00'), BusinessHour(n=3, start='17:00', end='03:00')], - {datetime(2014, 
7, 1, 11): (datetime(2014, 7, 1, 17), datetime(2014, 6, 30, 17)), - datetime(2014, 7, 1, 18): (datetime(2014, 7, 2, 17), datetime(2014, 7, 1, 17)), - datetime(2014, 7, 1, 23): (datetime(2014, 7, 2, 17), datetime(2014, 7, 1, 17)), - datetime(2014, 7, 2, 8): (datetime(2014, 7, 2, 17), datetime(2014, 7, 1, 17)), - datetime(2014, 7, 2, 9): (datetime(2014, 7, 2, 17), datetime(2014, 7, 1, 17)), - datetime(2014, 7, 4, 17): (datetime(2014, 7, 4, 17), datetime(2014, 7, 4, 17)), - datetime(2014, 7, 5, 10): (datetime(2014, 7, 7, 17), datetime(2014, 7, 4, 17)), - datetime(2014, 7, 4, 10): (datetime(2014, 7, 4, 17), datetime(2014, 7, 3, 17)), - datetime(2014, 7, 4, 23): (datetime(2014, 7, 7, 17), datetime(2014, 7, 4, 17)), - datetime(2014, 7, 6, 10): (datetime(2014, 7, 7, 17), datetime(2014, 7, 4, 17)), - datetime(2014, 7, 7, 5): (datetime(2014, 7, 7, 17), datetime(2014, 7, 4, 17)), - datetime(2014, 7, 7, 17, 1): (datetime(2014, 7, 8, 17), datetime(2014, 7, 7, 17)),})) + {datetime(2014, 7, 1, 11): (datetime(2014, 7, 1, 17), + datetime(2014, 6, 30, 17)), + datetime(2014, 7, 1, 18): (datetime(2014, 7, 2, 17), + datetime(2014, 7, 1, 17)), + datetime(2014, 7, 1, 23): (datetime(2014, 7, 2, 17), + datetime(2014, 7, 1, 17)), + datetime(2014, 7, 2, 8): (datetime(2014, 7, 2, 17), + datetime(2014, 7, 1, 17)), + datetime(2014, 7, 2, 9): (datetime(2014, 7, 2, 17), + datetime(2014, 7, 1, 17)), + datetime(2014, 7, 4, 17): (datetime(2014, 7, 4, 17), + datetime(2014, 7, 4, 17)), + datetime(2014, 7, 5, 10): (datetime(2014, 7, 7, 17), + datetime(2014, 7, 4, 17)), + datetime(2014, 7, 4, 10): (datetime(2014, 7, 4, 17), + datetime(2014, 7, 3, 17)), + datetime(2014, 7, 4, 23): (datetime(2014, 7, 7, 17), + datetime(2014, 7, 4, 17)), + datetime(2014, 7, 6, 10): (datetime(2014, 7, 7, 17), + datetime(2014, 7, 4, 17)), + datetime(2014, 7, 7, 5): (datetime(2014, 7, 7, 17), + datetime(2014, 7, 4, 17)), + datetime(2014, 7, 7, 17, 1): (datetime( + 2014, 7, 8, 17), datetime(2014, 7, 7, 17)), })) 
tests.append(([BusinessHour(-1, start='17:00', end='05:00'), BusinessHour(n=-2, start='17:00', end='03:00')], - {datetime(2014, 7, 1, 11): (datetime(2014, 6, 30, 17), datetime(2014, 7, 1, 17)), - datetime(2014, 7, 1, 18): (datetime(2014, 7, 1, 17), datetime(2014, 7, 2, 17)), - datetime(2014, 7, 1, 23): (datetime(2014, 7, 1, 17), datetime(2014, 7, 2, 17)), - datetime(2014, 7, 2, 8): (datetime(2014, 7, 1, 17), datetime(2014, 7, 2, 17)), - datetime(2014, 7, 2, 9): (datetime(2014, 7, 1, 17), datetime(2014, 7, 2, 17)), - datetime(2014, 7, 2, 16, 59): (datetime(2014, 7, 1, 17), datetime(2014, 7, 2, 17)), - datetime(2014, 7, 5, 10): (datetime(2014, 7, 4, 17), datetime(2014, 7, 7, 17)), - datetime(2014, 7, 4, 10): (datetime(2014, 7, 3, 17), datetime(2014, 7, 4, 17)), - datetime(2014, 7, 4, 23): (datetime(2014, 7, 4, 17), datetime(2014, 7, 7, 17)), - datetime(2014, 7, 6, 10): (datetime(2014, 7, 4, 17), datetime(2014, 7, 7, 17)), - datetime(2014, 7, 7, 5): (datetime(2014, 7, 4, 17), datetime(2014, 7, 7, 17)), - datetime(2014, 7, 7, 18): (datetime(2014, 7, 7, 17), datetime(2014, 7, 8, 17))})) - - for offsets, cases in tests: - for offset in offsets: + {datetime(2014, 7, 1, 11): (datetime(2014, 6, 30, 17), + datetime(2014, 7, 1, 17)), + datetime(2014, 7, 1, 18): (datetime(2014, 7, 1, 17), + datetime(2014, 7, 2, 17)), + datetime(2014, 7, 1, 23): (datetime(2014, 7, 1, 17), + datetime(2014, 7, 2, 17)), + datetime(2014, 7, 2, 8): (datetime(2014, 7, 1, 17), + datetime(2014, 7, 2, 17)), + datetime(2014, 7, 2, 9): (datetime(2014, 7, 1, 17), + datetime(2014, 7, 2, 17)), + datetime(2014, 7, 2, 16, 59): (datetime( + 2014, 7, 1, 17), datetime(2014, 7, 2, 17)), + datetime(2014, 7, 5, 10): (datetime(2014, 7, 4, 17), + datetime(2014, 7, 7, 17)), + datetime(2014, 7, 4, 10): (datetime(2014, 7, 3, 17), + datetime(2014, 7, 4, 17)), + datetime(2014, 7, 4, 23): (datetime(2014, 7, 4, 17), + datetime(2014, 7, 7, 17)), + datetime(2014, 7, 6, 10): (datetime(2014, 7, 4, 17), + datetime(2014, 7, 7, 
17)), + datetime(2014, 7, 7, 5): (datetime(2014, 7, 4, 17), + datetime(2014, 7, 7, 17)), + datetime(2014, 7, 7, 18): (datetime(2014, 7, 7, 17), + datetime(2014, 7, 8, 17))})) + + for _offsets, cases in tests: + for offset in _offsets: for dt, (exp_next, exp_prev) in compat.iteritems(cases): self.assertEqual(offset._next_opening_time(dt), exp_next) self.assertEqual(offset._prev_opening_time(dt), exp_prev) @@ -1001,79 +1094,87 @@ def test_opening_time(self): def test_apply(self): tests = [] - tests.append((BusinessHour(), - {datetime(2014, 7, 1, 11): datetime(2014, 7, 1, 12), - datetime(2014, 7, 1, 13): datetime(2014, 7, 1, 14), - datetime(2014, 7, 1, 15): datetime(2014, 7, 1, 16), - datetime(2014, 7, 1, 19): datetime(2014, 7, 2, 10), - datetime(2014, 7, 1, 16): datetime(2014, 7, 2, 9), - datetime(2014, 7, 1, 16, 30, 15): datetime(2014, 7, 2, 9, 30, 15), - datetime(2014, 7, 1, 17): datetime(2014, 7, 2, 10), - datetime(2014, 7, 2, 11): datetime(2014, 7, 2, 12), - # out of business hours - datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 10), - datetime(2014, 7, 2, 19): datetime(2014, 7, 3, 10), - datetime(2014, 7, 2, 23): datetime(2014, 7, 3, 10), - datetime(2014, 7, 3, 0): datetime(2014, 7, 3, 10), - # saturday - datetime(2014, 7, 5, 15): datetime(2014, 7, 7, 10), - datetime(2014, 7, 4, 17): datetime(2014, 7, 7, 10), - datetime(2014, 7, 4, 16, 30): datetime(2014, 7, 7, 9, 30), - datetime(2014, 7, 4, 16, 30, 30): datetime(2014, 7, 7, 9, 30, 30)})) - - tests.append((BusinessHour(4), - {datetime(2014, 7, 1, 11): datetime(2014, 7, 1, 15), - datetime(2014, 7, 1, 13): datetime(2014, 7, 2, 9), - datetime(2014, 7, 1, 15): datetime(2014, 7, 2, 11), - datetime(2014, 7, 1, 16): datetime(2014, 7, 2, 12), - datetime(2014, 7, 1, 17): datetime(2014, 7, 2, 13), - datetime(2014, 7, 2, 11): datetime(2014, 7, 2, 15), - datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 13), - datetime(2014, 7, 2, 19): datetime(2014, 7, 3, 13), - datetime(2014, 7, 2, 23): datetime(2014, 7, 3, 13), - 
datetime(2014, 7, 3, 0): datetime(2014, 7, 3, 13), - datetime(2014, 7, 5, 15): datetime(2014, 7, 7, 13), - datetime(2014, 7, 4, 17): datetime(2014, 7, 7, 13), - datetime(2014, 7, 4, 16, 30): datetime(2014, 7, 7, 12, 30), - datetime(2014, 7, 4, 16, 30, 30): datetime(2014, 7, 7, 12, 30, 30)})) - - tests.append((BusinessHour(-1), - {datetime(2014, 7, 1, 11): datetime(2014, 7, 1, 10), - datetime(2014, 7, 1, 13): datetime(2014, 7, 1, 12), - datetime(2014, 7, 1, 15): datetime(2014, 7, 1, 14), - datetime(2014, 7, 1, 16): datetime(2014, 7, 1, 15), - datetime(2014, 7, 1, 10): datetime(2014, 6, 30, 17), - datetime(2014, 7, 1, 16, 30, 15): datetime(2014, 7, 1, 15, 30, 15), - datetime(2014, 7, 1, 9, 30, 15): datetime(2014, 6, 30, 16, 30, 15), - datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 16), - datetime(2014, 7, 1, 5): datetime(2014, 6, 30, 16), - datetime(2014, 7, 2, 11): datetime(2014, 7, 2, 10), - # out of business hours - datetime(2014, 7, 2, 8): datetime(2014, 7, 1, 16), - datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 16), - datetime(2014, 7, 2, 23): datetime(2014, 7, 2, 16), - datetime(2014, 7, 3, 0): datetime(2014, 7, 2, 16), - # saturday - datetime(2014, 7, 5, 15): datetime(2014, 7, 4, 16), - datetime(2014, 7, 7, 9): datetime(2014, 7, 4, 16), - datetime(2014, 7, 7, 9, 30): datetime(2014, 7, 4, 16, 30), - datetime(2014, 7, 7, 9, 30, 30): datetime(2014, 7, 4, 16, 30, 30)})) - - tests.append((BusinessHour(-4), - {datetime(2014, 7, 1, 11): datetime(2014, 6, 30, 15), - datetime(2014, 7, 1, 13): datetime(2014, 6, 30, 17), - datetime(2014, 7, 1, 15): datetime(2014, 7, 1, 11), - datetime(2014, 7, 1, 16): datetime(2014, 7, 1, 12), - datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 13), - datetime(2014, 7, 2, 11): datetime(2014, 7, 1, 15), - datetime(2014, 7, 2, 8): datetime(2014, 7, 1, 13), - datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 13), - datetime(2014, 7, 2, 23): datetime(2014, 7, 2, 13), - datetime(2014, 7, 3, 0): datetime(2014, 7, 2, 13), - datetime(2014, 7, 5, 15): 
datetime(2014, 7, 4, 13), - datetime(2014, 7, 4, 18): datetime(2014, 7, 4, 13), - datetime(2014, 7, 7, 9, 30): datetime(2014, 7, 4, 13, 30), - datetime(2014, 7, 7, 9, 30, 30): datetime(2014, 7, 4, 13, 30, 30)})) + tests.append(( + BusinessHour(), + {datetime(2014, 7, 1, 11): datetime(2014, 7, 1, 12), + datetime(2014, 7, 1, 13): datetime(2014, 7, 1, 14), + datetime(2014, 7, 1, 15): datetime(2014, 7, 1, 16), + datetime(2014, 7, 1, 19): datetime(2014, 7, 2, 10), + datetime(2014, 7, 1, 16): datetime(2014, 7, 2, 9), + datetime(2014, 7, 1, 16, 30, 15): datetime(2014, 7, 2, 9, 30, 15), + datetime(2014, 7, 1, 17): datetime(2014, 7, 2, 10), + datetime(2014, 7, 2, 11): datetime(2014, 7, 2, 12), + # out of business hours + datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 10), + datetime(2014, 7, 2, 19): datetime(2014, 7, 3, 10), + datetime(2014, 7, 2, 23): datetime(2014, 7, 3, 10), + datetime(2014, 7, 3, 0): datetime(2014, 7, 3, 10), + # saturday + datetime(2014, 7, 5, 15): datetime(2014, 7, 7, 10), + datetime(2014, 7, 4, 17): datetime(2014, 7, 7, 10), + datetime(2014, 7, 4, 16, 30): datetime(2014, 7, 7, 9, 30), + datetime(2014, 7, 4, 16, 30, 30): datetime(2014, 7, 7, 9, 30, + 30)})) + + tests.append((BusinessHour( + 4), {datetime(2014, 7, 1, 11): datetime(2014, 7, 1, 15), + datetime(2014, 7, 1, 13): datetime(2014, 7, 2, 9), + datetime(2014, 7, 1, 15): datetime(2014, 7, 2, 11), + datetime(2014, 7, 1, 16): datetime(2014, 7, 2, 12), + datetime(2014, 7, 1, 17): datetime(2014, 7, 2, 13), + datetime(2014, 7, 2, 11): datetime(2014, 7, 2, 15), + datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 13), + datetime(2014, 7, 2, 19): datetime(2014, 7, 3, 13), + datetime(2014, 7, 2, 23): datetime(2014, 7, 3, 13), + datetime(2014, 7, 3, 0): datetime(2014, 7, 3, 13), + datetime(2014, 7, 5, 15): datetime(2014, 7, 7, 13), + datetime(2014, 7, 4, 17): datetime(2014, 7, 7, 13), + datetime(2014, 7, 4, 16, 30): datetime(2014, 7, 7, 12, 30), + datetime(2014, 7, 4, 16, 30, 30): datetime(2014, 7, 7, 12, 30, + 
30)})) + + tests.append( + (BusinessHour(-1), + {datetime(2014, 7, 1, 11): datetime(2014, 7, 1, 10), + datetime(2014, 7, 1, 13): datetime(2014, 7, 1, 12), + datetime(2014, 7, 1, 15): datetime(2014, 7, 1, 14), + datetime(2014, 7, 1, 16): datetime(2014, 7, 1, 15), + datetime(2014, 7, 1, 10): datetime(2014, 6, 30, 17), + datetime(2014, 7, 1, 16, 30, 15): datetime( + 2014, 7, 1, 15, 30, 15), + datetime(2014, 7, 1, 9, 30, 15): datetime( + 2014, 6, 30, 16, 30, 15), + datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 16), + datetime(2014, 7, 1, 5): datetime(2014, 6, 30, 16), + datetime(2014, 7, 2, 11): datetime(2014, 7, 2, 10), + # out of business hours + datetime(2014, 7, 2, 8): datetime(2014, 7, 1, 16), + datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 16), + datetime(2014, 7, 2, 23): datetime(2014, 7, 2, 16), + datetime(2014, 7, 3, 0): datetime(2014, 7, 2, 16), + # saturday + datetime(2014, 7, 5, 15): datetime(2014, 7, 4, 16), + datetime(2014, 7, 7, 9): datetime(2014, 7, 4, 16), + datetime(2014, 7, 7, 9, 30): datetime(2014, 7, 4, 16, 30), + datetime(2014, 7, 7, 9, 30, 30): datetime(2014, 7, 4, 16, 30, + 30)})) + + tests.append((BusinessHour( + -4), {datetime(2014, 7, 1, 11): datetime(2014, 6, 30, 15), + datetime(2014, 7, 1, 13): datetime(2014, 6, 30, 17), + datetime(2014, 7, 1, 15): datetime(2014, 7, 1, 11), + datetime(2014, 7, 1, 16): datetime(2014, 7, 1, 12), + datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 13), + datetime(2014, 7, 2, 11): datetime(2014, 7, 1, 15), + datetime(2014, 7, 2, 8): datetime(2014, 7, 1, 13), + datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 13), + datetime(2014, 7, 2, 23): datetime(2014, 7, 2, 13), + datetime(2014, 7, 3, 0): datetime(2014, 7, 2, 13), + datetime(2014, 7, 5, 15): datetime(2014, 7, 4, 13), + datetime(2014, 7, 4, 18): datetime(2014, 7, 4, 13), + datetime(2014, 7, 7, 9, 30): datetime(2014, 7, 4, 13, 30), + datetime(2014, 7, 7, 9, 30, 30): datetime(2014, 7, 4, 13, 30, + 30)})) tests.append((BusinessHour(start='13:00', end='16:00'), 
{datetime(2014, 7, 1, 11): datetime(2014, 7, 1, 14), @@ -1081,21 +1182,23 @@ def test_apply(self): datetime(2014, 7, 1, 15): datetime(2014, 7, 2, 13), datetime(2014, 7, 1, 19): datetime(2014, 7, 2, 14), datetime(2014, 7, 1, 16): datetime(2014, 7, 2, 14), - datetime(2014, 7, 1, 15, 30, 15): datetime(2014, 7, 2, 13, 30, 15), + datetime(2014, 7, 1, 15, 30, 15): datetime(2014, 7, 2, + 13, 30, 15), datetime(2014, 7, 5, 15): datetime(2014, 7, 7, 14), datetime(2014, 7, 4, 17): datetime(2014, 7, 7, 14)})) - tests.append((BusinessHour(n=2, start='13:00', end='16:00'), - {datetime(2014, 7, 1, 17): datetime(2014, 7, 2, 15), - datetime(2014, 7, 2, 14): datetime(2014, 7, 3, 13), - datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 15), - datetime(2014, 7, 2, 19): datetime(2014, 7, 3, 15), - datetime(2014, 7, 2, 14, 30): datetime(2014, 7, 3, 13, 30), - datetime(2014, 7, 3, 0): datetime(2014, 7, 3, 15), - datetime(2014, 7, 5, 15): datetime(2014, 7, 7, 15), - datetime(2014, 7, 4, 17): datetime(2014, 7, 7, 15), - datetime(2014, 7, 4, 14, 30): datetime(2014, 7, 7, 13, 30), - datetime(2014, 7, 4, 14, 30, 30): datetime(2014, 7, 7, 13, 30, 30)})) + tests.append((BusinessHour(n=2, start='13:00', end='16:00'), { + datetime(2014, 7, 1, 17): datetime(2014, 7, 2, 15), + datetime(2014, 7, 2, 14): datetime(2014, 7, 3, 13), + datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 15), + datetime(2014, 7, 2, 19): datetime(2014, 7, 3, 15), + datetime(2014, 7, 2, 14, 30): datetime(2014, 7, 3, 13, 30), + datetime(2014, 7, 3, 0): datetime(2014, 7, 3, 15), + datetime(2014, 7, 5, 15): datetime(2014, 7, 7, 15), + datetime(2014, 7, 4, 17): datetime(2014, 7, 7, 15), + datetime(2014, 7, 4, 14, 30): datetime(2014, 7, 7, 13, 30), + datetime(2014, 7, 4, 14, 30, 30): datetime(2014, 7, 7, 13, 30, 30) + })) tests.append((BusinessHour(n=-1, start='13:00', end='16:00'), {datetime(2014, 7, 2, 11): datetime(2014, 7, 1, 15), @@ -1104,54 +1207,58 @@ def test_apply(self): datetime(2014, 7, 2, 15): datetime(2014, 7, 2, 14), 
datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 15), datetime(2014, 7, 2, 16): datetime(2014, 7, 2, 15), - datetime(2014, 7, 2, 13, 30, 15): datetime(2014, 7, 1, 15, 30, 15), + datetime(2014, 7, 2, 13, 30, 15): datetime(2014, 7, 1, + 15, 30, 15), datetime(2014, 7, 5, 15): datetime(2014, 7, 4, 15), datetime(2014, 7, 7, 11): datetime(2014, 7, 4, 15)})) - tests.append((BusinessHour(n=-3, start='10:00', end='16:00'), - {datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 13), - datetime(2014, 7, 2, 14): datetime(2014, 7, 2, 11), - datetime(2014, 7, 2, 8): datetime(2014, 7, 1, 13), - datetime(2014, 7, 2, 13): datetime(2014, 7, 1, 16), - datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 13), - datetime(2014, 7, 2, 11, 30): datetime(2014, 7, 1, 14, 30), - datetime(2014, 7, 3, 0): datetime(2014, 7, 2, 13), - datetime(2014, 7, 4, 10): datetime(2014, 7, 3, 13), - datetime(2014, 7, 5, 15): datetime(2014, 7, 4, 13), - datetime(2014, 7, 4, 16): datetime(2014, 7, 4, 13), - datetime(2014, 7, 4, 12, 30): datetime(2014, 7, 3, 15, 30), - datetime(2014, 7, 4, 12, 30, 30): datetime(2014, 7, 3, 15, 30, 30)})) - - tests.append((BusinessHour(start='19:00', end='05:00'), - {datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 20), - datetime(2014, 7, 2, 14): datetime(2014, 7, 2, 20), - datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 20), - datetime(2014, 7, 2, 13): datetime(2014, 7, 2, 20), - datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 20), - datetime(2014, 7, 2, 4, 30): datetime(2014, 7, 2, 19, 30), - datetime(2014, 7, 3, 0): datetime(2014, 7, 3, 1), - datetime(2014, 7, 4, 10): datetime(2014, 7, 4, 20), - datetime(2014, 7, 4, 23): datetime(2014, 7, 5, 0), - datetime(2014, 7, 5, 0): datetime(2014, 7, 5, 1), - datetime(2014, 7, 5, 4): datetime(2014, 7, 7, 19), - datetime(2014, 7, 5, 4, 30): datetime(2014, 7, 7, 19, 30), - datetime(2014, 7, 5, 4, 30, 30): datetime(2014, 7, 7, 19, 30, 30)})) - - tests.append((BusinessHour(n=-1, start='19:00', end='05:00'), - {datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 
4), - datetime(2014, 7, 2, 14): datetime(2014, 7, 2, 4), - datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 4), - datetime(2014, 7, 2, 13): datetime(2014, 7, 2, 4), - datetime(2014, 7, 2, 20): datetime(2014, 7, 2, 5), - datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 4), - datetime(2014, 7, 2, 19, 30): datetime(2014, 7, 2, 4, 30), - datetime(2014, 7, 3, 0): datetime(2014, 7, 2, 23), - datetime(2014, 7, 3, 6): datetime(2014, 7, 3, 4), - datetime(2014, 7, 4, 23): datetime(2014, 7, 4, 22), - datetime(2014, 7, 5, 0): datetime(2014, 7, 4, 23), - datetime(2014, 7, 5, 4): datetime(2014, 7, 5, 3), - datetime(2014, 7, 7, 19, 30): datetime(2014, 7, 5, 4, 30), - datetime(2014, 7, 7, 19, 30, 30): datetime(2014, 7, 5, 4, 30, 30)})) + tests.append((BusinessHour(n=-3, start='10:00', end='16:00'), { + datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 13), + datetime(2014, 7, 2, 14): datetime(2014, 7, 2, 11), + datetime(2014, 7, 2, 8): datetime(2014, 7, 1, 13), + datetime(2014, 7, 2, 13): datetime(2014, 7, 1, 16), + datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 13), + datetime(2014, 7, 2, 11, 30): datetime(2014, 7, 1, 14, 30), + datetime(2014, 7, 3, 0): datetime(2014, 7, 2, 13), + datetime(2014, 7, 4, 10): datetime(2014, 7, 3, 13), + datetime(2014, 7, 5, 15): datetime(2014, 7, 4, 13), + datetime(2014, 7, 4, 16): datetime(2014, 7, 4, 13), + datetime(2014, 7, 4, 12, 30): datetime(2014, 7, 3, 15, 30), + datetime(2014, 7, 4, 12, 30, 30): datetime(2014, 7, 3, 15, 30, 30) + })) + + tests.append((BusinessHour(start='19:00', end='05:00'), { + datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 20), + datetime(2014, 7, 2, 14): datetime(2014, 7, 2, 20), + datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 20), + datetime(2014, 7, 2, 13): datetime(2014, 7, 2, 20), + datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 20), + datetime(2014, 7, 2, 4, 30): datetime(2014, 7, 2, 19, 30), + datetime(2014, 7, 3, 0): datetime(2014, 7, 3, 1), + datetime(2014, 7, 4, 10): datetime(2014, 7, 4, 20), + datetime(2014, 7, 4, 23): 
datetime(2014, 7, 5, 0), + datetime(2014, 7, 5, 0): datetime(2014, 7, 5, 1), + datetime(2014, 7, 5, 4): datetime(2014, 7, 7, 19), + datetime(2014, 7, 5, 4, 30): datetime(2014, 7, 7, 19, 30), + datetime(2014, 7, 5, 4, 30, 30): datetime(2014, 7, 7, 19, 30, 30) + })) + + tests.append((BusinessHour(n=-1, start='19:00', end='05:00'), { + datetime(2014, 7, 1, 17): datetime(2014, 7, 1, 4), + datetime(2014, 7, 2, 14): datetime(2014, 7, 2, 4), + datetime(2014, 7, 2, 8): datetime(2014, 7, 2, 4), + datetime(2014, 7, 2, 13): datetime(2014, 7, 2, 4), + datetime(2014, 7, 2, 20): datetime(2014, 7, 2, 5), + datetime(2014, 7, 2, 19): datetime(2014, 7, 2, 4), + datetime(2014, 7, 2, 19, 30): datetime(2014, 7, 2, 4, 30), + datetime(2014, 7, 3, 0): datetime(2014, 7, 2, 23), + datetime(2014, 7, 3, 6): datetime(2014, 7, 3, 4), + datetime(2014, 7, 4, 23): datetime(2014, 7, 4, 22), + datetime(2014, 7, 5, 0): datetime(2014, 7, 4, 23), + datetime(2014, 7, 5, 4): datetime(2014, 7, 5, 3), + datetime(2014, 7, 7, 19, 30): datetime(2014, 7, 5, 4, 30), + datetime(2014, 7, 7, 19, 30, 30): datetime(2014, 7, 5, 4, 30, 30) + })) for offset, cases in tests: for base, expected in compat.iteritems(cases): @@ -1160,38 +1267,43 @@ def test_apply(self): def test_apply_large_n(self): tests = [] - tests.append((BusinessHour(40), # A week later - {datetime(2014, 7, 1, 11): datetime(2014, 7, 8, 11), - datetime(2014, 7, 1, 13): datetime(2014, 7, 8, 13), - datetime(2014, 7, 1, 15): datetime(2014, 7, 8, 15), - datetime(2014, 7, 1, 16): datetime(2014, 7, 8, 16), - datetime(2014, 7, 1, 17): datetime(2014, 7, 9, 9), - datetime(2014, 7, 2, 11): datetime(2014, 7, 9, 11), - datetime(2014, 7, 2, 8): datetime(2014, 7, 9, 9), - datetime(2014, 7, 2, 19): datetime(2014, 7, 10, 9), - datetime(2014, 7, 2, 23): datetime(2014, 7, 10, 9), - datetime(2014, 7, 3, 0): datetime(2014, 7, 10, 9), - datetime(2014, 7, 5, 15): datetime(2014, 7, 14, 9), - datetime(2014, 7, 4, 18): datetime(2014, 7, 14, 9), - datetime(2014, 7, 7, 9, 30): 
datetime(2014, 7, 14, 9, 30), - datetime(2014, 7, 7, 9, 30, 30): datetime(2014, 7, 14, 9, 30, 30)})) - - tests.append((BusinessHour(-25), # 3 days and 1 hour before - {datetime(2014, 7, 1, 11): datetime(2014, 6, 26, 10), - datetime(2014, 7, 1, 13): datetime(2014, 6, 26, 12), - datetime(2014, 7, 1, 9): datetime(2014, 6, 25, 16), - datetime(2014, 7, 1, 10): datetime(2014, 6, 25, 17), - datetime(2014, 7, 3, 11): datetime(2014, 6, 30, 10), - datetime(2014, 7, 3, 8): datetime(2014, 6, 27, 16), - datetime(2014, 7, 3, 19): datetime(2014, 6, 30, 16), - datetime(2014, 7, 3, 23): datetime(2014, 6, 30, 16), - datetime(2014, 7, 4, 9): datetime(2014, 6, 30, 16), - datetime(2014, 7, 5, 15): datetime(2014, 7, 1, 16), - datetime(2014, 7, 6, 18): datetime(2014, 7, 1, 16), - datetime(2014, 7, 7, 9, 30): datetime(2014, 7, 1, 16, 30), - datetime(2014, 7, 7, 10, 30, 30): datetime(2014, 7, 2, 9, 30, 30)})) - - tests.append((BusinessHour(28, start='21:00', end='02:00'), # 5 days and 3 hours later + tests.append( + (BusinessHour(40), # A week later + {datetime(2014, 7, 1, 11): datetime(2014, 7, 8, 11), + datetime(2014, 7, 1, 13): datetime(2014, 7, 8, 13), + datetime(2014, 7, 1, 15): datetime(2014, 7, 8, 15), + datetime(2014, 7, 1, 16): datetime(2014, 7, 8, 16), + datetime(2014, 7, 1, 17): datetime(2014, 7, 9, 9), + datetime(2014, 7, 2, 11): datetime(2014, 7, 9, 11), + datetime(2014, 7, 2, 8): datetime(2014, 7, 9, 9), + datetime(2014, 7, 2, 19): datetime(2014, 7, 10, 9), + datetime(2014, 7, 2, 23): datetime(2014, 7, 10, 9), + datetime(2014, 7, 3, 0): datetime(2014, 7, 10, 9), + datetime(2014, 7, 5, 15): datetime(2014, 7, 14, 9), + datetime(2014, 7, 4, 18): datetime(2014, 7, 14, 9), + datetime(2014, 7, 7, 9, 30): datetime(2014, 7, 14, 9, 30), + datetime(2014, 7, 7, 9, 30, 30): datetime(2014, 7, 14, 9, 30, + 30)})) + + tests.append( + (BusinessHour(-25), # 3 days and 1 hour before + {datetime(2014, 7, 1, 11): datetime(2014, 6, 26, 10), + datetime(2014, 7, 1, 13): datetime(2014, 6, 26, 12), + 
datetime(2014, 7, 1, 9): datetime(2014, 6, 25, 16), + datetime(2014, 7, 1, 10): datetime(2014, 6, 25, 17), + datetime(2014, 7, 3, 11): datetime(2014, 6, 30, 10), + datetime(2014, 7, 3, 8): datetime(2014, 6, 27, 16), + datetime(2014, 7, 3, 19): datetime(2014, 6, 30, 16), + datetime(2014, 7, 3, 23): datetime(2014, 6, 30, 16), + datetime(2014, 7, 4, 9): datetime(2014, 6, 30, 16), + datetime(2014, 7, 5, 15): datetime(2014, 7, 1, 16), + datetime(2014, 7, 6, 18): datetime(2014, 7, 1, 16), + datetime(2014, 7, 7, 9, 30): datetime(2014, 7, 1, 16, 30), + datetime(2014, 7, 7, 10, 30, 30): datetime(2014, 7, 2, 9, 30, + 30)})) + + # 5 days and 3 hours later + tests.append((BusinessHour(28, start='21:00', end='02:00'), {datetime(2014, 7, 1, 11): datetime(2014, 7, 9, 0), datetime(2014, 7, 1, 22): datetime(2014, 7, 9, 1), datetime(2014, 7, 1, 23): datetime(2014, 7, 9, 21), @@ -1204,7 +1316,8 @@ def test_apply_large_n(self): datetime(2014, 7, 5, 15): datetime(2014, 7, 15, 0), datetime(2014, 7, 6, 18): datetime(2014, 7, 15, 0), datetime(2014, 7, 7, 1): datetime(2014, 7, 15, 0), - datetime(2014, 7, 7, 23, 30): datetime(2014, 7, 15, 21, 30)})) + datetime(2014, 7, 7, 23, 30): datetime(2014, 7, 15, 21, + 30)})) for offset, cases in tests: for base, expected in compat.iteritems(cases): @@ -1214,16 +1327,20 @@ def test_apply_nanoseconds(self): tests = [] tests.append((BusinessHour(), - {Timestamp('2014-07-04 15:00') + Nano(5): Timestamp('2014-07-04 16:00') + Nano(5), - Timestamp('2014-07-04 16:00') + Nano(5): Timestamp('2014-07-07 09:00') + Nano(5), - Timestamp('2014-07-04 16:00') - Nano(5): Timestamp('2014-07-04 17:00') - Nano(5) - })) + {Timestamp('2014-07-04 15:00') + Nano(5): Timestamp( + '2014-07-04 16:00') + Nano(5), + Timestamp('2014-07-04 16:00') + Nano(5): Timestamp( + '2014-07-07 09:00') + Nano(5), + Timestamp('2014-07-04 16:00') - Nano(5): Timestamp( + '2014-07-04 17:00') - Nano(5)})) tests.append((BusinessHour(-1), - {Timestamp('2014-07-04 15:00') + Nano(5): 
Timestamp('2014-07-04 14:00') + Nano(5), - Timestamp('2014-07-04 10:00') + Nano(5): Timestamp('2014-07-04 09:00') + Nano(5), - Timestamp('2014-07-04 10:00') - Nano(5): Timestamp('2014-07-03 17:00') - Nano(5), - })) + {Timestamp('2014-07-04 15:00') + Nano(5): Timestamp( + '2014-07-04 14:00') + Nano(5), + Timestamp('2014-07-04 10:00') + Nano(5): Timestamp( + '2014-07-04 09:00') + Nano(5), + Timestamp('2014-07-04 10:00') - Nano(5): Timestamp( + '2014-07-03 17:00') - Nano(5), })) for offset, cases in tests: for base, expected in compat.iteritems(cases): @@ -1236,26 +1353,36 @@ def test_offsets_compare_equal(self): self.assertFalse(offset1 != offset2) def test_datetimeindex(self): - idx1 = DatetimeIndex(start='2014-07-04 15:00', end='2014-07-08 10:00', freq='BH') + idx1 = DatetimeIndex(start='2014-07-04 15:00', end='2014-07-08 10:00', + freq='BH') idx2 = DatetimeIndex(start='2014-07-04 15:00', periods=12, freq='BH') idx3 = DatetimeIndex(end='2014-07-08 10:00', periods=12, freq='BH') - expected = DatetimeIndex(['2014-07-04 15:00', '2014-07-04 16:00', '2014-07-07 09:00', - '2014-07-07 10:00', '2014-07-07 11:00', '2014-07-07 12:00', - '2014-07-07 13:00', '2014-07-07 14:00', '2014-07-07 15:00', - '2014-07-07 16:00', '2014-07-08 09:00', '2014-07-08 10:00'], - freq='BH') + expected = DatetimeIndex(['2014-07-04 15:00', '2014-07-04 16:00', + '2014-07-07 09:00', + '2014-07-07 10:00', '2014-07-07 11:00', + '2014-07-07 12:00', + '2014-07-07 13:00', '2014-07-07 14:00', + '2014-07-07 15:00', + '2014-07-07 16:00', '2014-07-08 09:00', + '2014-07-08 10:00'], + freq='BH') for idx in [idx1, idx2, idx3]: tm.assert_index_equal(idx, expected) - idx1 = DatetimeIndex(start='2014-07-04 15:45', end='2014-07-08 10:45', freq='BH') + idx1 = DatetimeIndex(start='2014-07-04 15:45', end='2014-07-08 10:45', + freq='BH') idx2 = DatetimeIndex(start='2014-07-04 15:45', periods=12, freq='BH') idx3 = DatetimeIndex(end='2014-07-08 10:45', periods=12, freq='BH') - expected = DatetimeIndex(['2014-07-04 
15:45', '2014-07-04 16:45', '2014-07-07 09:45', - '2014-07-07 10:45', '2014-07-07 11:45', '2014-07-07 12:45', - '2014-07-07 13:45', '2014-07-07 14:45', '2014-07-07 15:45', - '2014-07-07 16:45', '2014-07-08 09:45', '2014-07-08 10:45'], - freq='BH') + expected = DatetimeIndex(['2014-07-04 15:45', '2014-07-04 16:45', + '2014-07-07 09:45', + '2014-07-07 10:45', '2014-07-07 11:45', + '2014-07-07 12:45', + '2014-07-07 13:45', '2014-07-07 14:45', + '2014-07-07 15:45', + '2014-07-07 16:45', '2014-07-08 09:45', + '2014-07-08 10:45'], + freq='BH') expected = idx1 for idx in [idx1, idx2, idx3]: tm.assert_index_equal(idx, expected) @@ -1322,8 +1449,7 @@ def testMult1(self): self.assertEqual(self.d + 10 * self.offset, self.d + CDay(10)) def testMult2(self): - self.assertEqual(self.d + (-5 * CDay(-10)), - self.d + CDay(50)) + self.assertEqual(self.d + (-5 * CDay(-10)), self.d + CDay(50)) def testRollback1(self): self.assertEqual(CDay(10).rollback(self.d), self.d) @@ -1361,50 +1487,45 @@ def test_onOffset(self): tests = [(CDay(), datetime(2008, 1, 1), True), (CDay(), datetime(2008, 1, 5), False)] - for offset, date, expected in tests: - assertOnOffset(offset, date, expected) + for offset, d, expected in tests: + assertOnOffset(offset, d, expected) def test_apply(self): from pandas.core.datetools import cday tests = [] - tests.append((cday, - {datetime(2008, 1, 1): datetime(2008, 1, 2), - datetime(2008, 1, 4): datetime(2008, 1, 7), - datetime(2008, 1, 5): datetime(2008, 1, 7), - datetime(2008, 1, 6): datetime(2008, 1, 7), - datetime(2008, 1, 7): datetime(2008, 1, 8)})) - - tests.append((2 * cday, - {datetime(2008, 1, 1): datetime(2008, 1, 3), - datetime(2008, 1, 4): datetime(2008, 1, 8), - datetime(2008, 1, 5): datetime(2008, 1, 8), - datetime(2008, 1, 6): datetime(2008, 1, 8), - datetime(2008, 1, 7): datetime(2008, 1, 9)})) - - tests.append((-cday, - {datetime(2008, 1, 1): datetime(2007, 12, 31), - datetime(2008, 1, 4): datetime(2008, 1, 3), - datetime(2008, 1, 5): datetime(2008, 
1, 4), - datetime(2008, 1, 6): datetime(2008, 1, 4), - datetime(2008, 1, 7): datetime(2008, 1, 4), - datetime(2008, 1, 8): datetime(2008, 1, 7)})) - - tests.append((-2 * cday, - {datetime(2008, 1, 1): datetime(2007, 12, 28), - datetime(2008, 1, 4): datetime(2008, 1, 2), - datetime(2008, 1, 5): datetime(2008, 1, 3), - datetime(2008, 1, 6): datetime(2008, 1, 3), - datetime(2008, 1, 7): datetime(2008, 1, 3), - datetime(2008, 1, 8): datetime(2008, 1, 4), - datetime(2008, 1, 9): datetime(2008, 1, 7)})) - - tests.append((CDay(0), - {datetime(2008, 1, 1): datetime(2008, 1, 1), - datetime(2008, 1, 4): datetime(2008, 1, 4), - datetime(2008, 1, 5): datetime(2008, 1, 7), - datetime(2008, 1, 6): datetime(2008, 1, 7), - datetime(2008, 1, 7): datetime(2008, 1, 7)})) + tests.append((cday, {datetime(2008, 1, 1): datetime(2008, 1, 2), + datetime(2008, 1, 4): datetime(2008, 1, 7), + datetime(2008, 1, 5): datetime(2008, 1, 7), + datetime(2008, 1, 6): datetime(2008, 1, 7), + datetime(2008, 1, 7): datetime(2008, 1, 8)})) + + tests.append((2 * cday, {datetime(2008, 1, 1): datetime(2008, 1, 3), + datetime(2008, 1, 4): datetime(2008, 1, 8), + datetime(2008, 1, 5): datetime(2008, 1, 8), + datetime(2008, 1, 6): datetime(2008, 1, 8), + datetime(2008, 1, 7): datetime(2008, 1, 9)})) + + tests.append((-cday, {datetime(2008, 1, 1): datetime(2007, 12, 31), + datetime(2008, 1, 4): datetime(2008, 1, 3), + datetime(2008, 1, 5): datetime(2008, 1, 4), + datetime(2008, 1, 6): datetime(2008, 1, 4), + datetime(2008, 1, 7): datetime(2008, 1, 4), + datetime(2008, 1, 8): datetime(2008, 1, 7)})) + + tests.append((-2 * cday, {datetime(2008, 1, 1): datetime(2007, 12, 28), + datetime(2008, 1, 4): datetime(2008, 1, 2), + datetime(2008, 1, 5): datetime(2008, 1, 3), + datetime(2008, 1, 6): datetime(2008, 1, 3), + datetime(2008, 1, 7): datetime(2008, 1, 3), + datetime(2008, 1, 8): datetime(2008, 1, 4), + datetime(2008, 1, 9): datetime(2008, 1, 7)})) + + tests.append((CDay(0), {datetime(2008, 1, 1): datetime(2008, 
1, 1), + datetime(2008, 1, 4): datetime(2008, 1, 4), + datetime(2008, 1, 5): datetime(2008, 1, 7), + datetime(2008, 1, 6): datetime(2008, 1, 7), + datetime(2008, 1, 7): datetime(2008, 1, 7)})) for offset, cases in tests: for base, expected in compat.iteritems(cases): @@ -1451,8 +1572,8 @@ def test_holidays(self): def test_weekmask(self): weekmask_saudi = 'Sat Sun Mon Tue Wed' # Thu-Fri Weekend - weekmask_uae = '1111001' # Fri-Sat Weekend - weekmask_egypt = [1,1,1,1,0,0,1] # Fri-Sat Weekend + weekmask_uae = '1111001' # Fri-Sat Weekend + weekmask_egypt = [1, 1, 1, 1, 0, 0, 1] # Fri-Sat Weekend bday_saudi = CDay(weekmask=weekmask_saudi) bday_uae = CDay(weekmask=weekmask_uae) bday_egypt = CDay(weekmask=weekmask_egypt) @@ -1486,12 +1607,13 @@ def test_roundtrip_pickle(self): def _check_roundtrip(obj): unpickled = self.round_trip_pickle(obj) self.assertEqual(unpickled, obj) + _check_roundtrip(self.offset) _check_roundtrip(self.offset2) - _check_roundtrip(self.offset*2) + _check_roundtrip(self.offset * 2) def test_pickle_compat_0_14_1(self): - hdays = [datetime(2013,1,1) for ele in range(4)] + hdays = [datetime(2013, 1, 1) for ele in range(4)] pth = tm.get_data_path() @@ -1499,6 +1621,7 @@ def test_pickle_compat_0_14_1(self): cday = CDay(holidays=hdays) self.assertEqual(cday, cday0_14_1) + class CustomBusinessMonthBase(object): _multiprocess_can_split_ = True @@ -1526,15 +1649,13 @@ def testSub(self): self.assertRaises(Exception, off.__sub__, self.d) self.assertEqual(2 * off - off, off) - self.assertEqual(self.d - self.offset2, - self.d + self._object(-2)) + self.assertEqual(self.d - self.offset2, self.d + self._object(-2)) def testRSub(self): self.assertEqual(self.d - self.offset2, (-self.offset2).apply(self.d)) def testMult1(self): - self.assertEqual(self.d + 10 * self.offset, - self.d + self._object(10)) + self.assertEqual(self.d + 10 * self.offset, self.d + self._object(10)) def testMult2(self): self.assertEqual(self.d + (-5 * self._object(-10)), @@ -1549,9 +1670,10 
@@ def test_roundtrip_pickle(self): def _check_roundtrip(obj): unpickled = self.round_trip_pickle(obj) self.assertEqual(unpickled, obj) + _check_roundtrip(self._object()) _check_roundtrip(self._object(2)) - _check_roundtrip(self._object()*2) + _check_roundtrip(self._object() * 2) class TestCustomBusinessMonthEnd(CustomBusinessMonthBase, Base): @@ -1577,10 +1699,11 @@ def testRollback1(self): def testRollback2(self): self.assertEqual(CBMonthEnd(10).rollback(self.d), - datetime(2007,12,31)) + datetime(2007, 12, 31)) def testRollforward1(self): - self.assertEqual(CBMonthEnd(10).rollforward(self.d), datetime(2008,1,31)) + self.assertEqual(CBMonthEnd(10).rollforward( + self.d), datetime(2008, 1, 31)) def test_roll_date_object(self): offset = CBMonthEnd() @@ -1604,29 +1727,25 @@ def test_onOffset(self): tests = [(CBMonthEnd(), datetime(2008, 1, 31), True), (CBMonthEnd(), datetime(2008, 1, 1), False)] - for offset, date, expected in tests: - assertOnOffset(offset, date, expected) - + for offset, d, expected in tests: + assertOnOffset(offset, d, expected) def test_apply(self): cbm = CBMonthEnd() tests = [] - tests.append((cbm, - {datetime(2008, 1, 1): datetime(2008, 1, 31), - datetime(2008, 2, 7): datetime(2008, 2, 29)})) + tests.append((cbm, {datetime(2008, 1, 1): datetime(2008, 1, 31), + datetime(2008, 2, 7): datetime(2008, 2, 29)})) - tests.append((2 * cbm, - {datetime(2008, 1, 1): datetime(2008, 2, 29), - datetime(2008, 2, 7): datetime(2008, 3, 31)})) + tests.append((2 * cbm, {datetime(2008, 1, 1): datetime(2008, 2, 29), + datetime(2008, 2, 7): datetime(2008, 3, 31)})) - tests.append((-cbm, - {datetime(2008, 1, 1): datetime(2007, 12, 31), - datetime(2008, 2, 8): datetime(2008, 1, 31)})) + tests.append((-cbm, {datetime(2008, 1, 1): datetime(2007, 12, 31), + datetime(2008, 2, 8): datetime(2008, 1, 31)})) - tests.append((-2 * cbm, - {datetime(2008, 1, 1): datetime(2007, 11, 30), - datetime(2008, 2, 9): datetime(2007, 12, 31)})) + tests.append((-2 * cbm, {datetime(2008, 1, 
1): datetime(2007, 11, 30), + datetime(2008, 2, 9): datetime(2007, 12, 31)} + )) tests.append((CBMonthEnd(0), {datetime(2008, 1, 1): datetime(2008, 1, 31), @@ -1660,18 +1779,19 @@ def test_holidays(self): holidays = ['2012-01-31', datetime(2012, 2, 28), np.datetime64('2012-02-29')] bm_offset = CBMonthEnd(holidays=holidays) - dt = datetime(2012,1,1) - self.assertEqual(dt + bm_offset,datetime(2012,1,30)) - self.assertEqual(dt + 2*bm_offset,datetime(2012,2,27)) + dt = datetime(2012, 1, 1) + self.assertEqual(dt + bm_offset, datetime(2012, 1, 30)) + self.assertEqual(dt + 2 * bm_offset, datetime(2012, 2, 27)) def test_datetimeindex(self): from pandas.tseries.holiday import USFederalHolidayCalendar hcal = USFederalHolidayCalendar() freq = CBMonthEnd(calendar=hcal) - self.assertEqual(DatetimeIndex(start='20120101',end='20130101', + self.assertEqual(DatetimeIndex(start='20120101', end='20130101', freq=freq).tolist()[0], - datetime(2012,1,31)) + datetime(2012, 1, 31)) + class TestCustomBusinessMonthBegin(CustomBusinessMonthBase, Base): _object = CBMonthBegin @@ -1696,10 +1816,11 @@ def testRollback1(self): def testRollback2(self): self.assertEqual(CBMonthBegin(10).rollback(self.d), - datetime(2008,1,1)) + datetime(2008, 1, 1)) def testRollforward1(self): - self.assertEqual(CBMonthBegin(10).rollforward(self.d), datetime(2008,1,1)) + self.assertEqual(CBMonthBegin(10).rollforward( + self.d), datetime(2008, 1, 1)) def test_roll_date_object(self): offset = CBMonthBegin() @@ -1723,29 +1844,24 @@ def test_onOffset(self): tests = [(CBMonthBegin(), datetime(2008, 1, 1), True), (CBMonthBegin(), datetime(2008, 1, 31), False)] - for offset, date, expected in tests: - assertOnOffset(offset, date, expected) - + for offset, dt, expected in tests: + assertOnOffset(offset, dt, expected) def test_apply(self): cbm = CBMonthBegin() tests = [] - tests.append((cbm, - {datetime(2008, 1, 1): datetime(2008, 2, 1), - datetime(2008, 2, 7): datetime(2008, 3, 3)})) + tests.append((cbm, {datetime(2008, 
1, 1): datetime(2008, 2, 1), + datetime(2008, 2, 7): datetime(2008, 3, 3)})) - tests.append((2 * cbm, - {datetime(2008, 1, 1): datetime(2008, 3, 3), - datetime(2008, 2, 7): datetime(2008, 4, 1)})) + tests.append((2 * cbm, {datetime(2008, 1, 1): datetime(2008, 3, 3), + datetime(2008, 2, 7): datetime(2008, 4, 1)})) - tests.append((-cbm, - {datetime(2008, 1, 1): datetime(2007, 12, 3), - datetime(2008, 2, 8): datetime(2008, 2, 1)})) + tests.append((-cbm, {datetime(2008, 1, 1): datetime(2007, 12, 3), + datetime(2008, 2, 8): datetime(2008, 2, 1)})) - tests.append((-2 * cbm, - {datetime(2008, 1, 1): datetime(2007, 11, 1), - datetime(2008, 2, 9): datetime(2008, 1, 1)})) + tests.append((-2 * cbm, {datetime(2008, 1, 1): datetime(2007, 11, 1), + datetime(2008, 2, 9): datetime(2008, 1, 1)})) tests.append((CBMonthBegin(0), {datetime(2008, 1, 1): datetime(2008, 1, 1), @@ -1779,16 +1895,16 @@ def test_holidays(self): holidays = ['2012-02-01', datetime(2012, 2, 2), np.datetime64('2012-03-01')] bm_offset = CBMonthBegin(holidays=holidays) - dt = datetime(2012,1,1) - self.assertEqual(dt + bm_offset,datetime(2012,1,2)) - self.assertEqual(dt + 2*bm_offset,datetime(2012,2,3)) + dt = datetime(2012, 1, 1) + self.assertEqual(dt + bm_offset, datetime(2012, 1, 2)) + self.assertEqual(dt + 2 * bm_offset, datetime(2012, 2, 3)) def test_datetimeindex(self): hcal = USFederalHolidayCalendar() cbmb = CBMonthBegin(calendar=hcal) self.assertEqual(DatetimeIndex(start='20120101', end='20130101', freq=cbmb).tolist()[0], - datetime(2012,1,3)) + datetime(2012, 1, 3)) def assertOnOffset(offset, date, expected): @@ -1804,7 +1920,8 @@ class TestWeek(Base): def test_repr(self): self.assertEqual(repr(Week(weekday=0)), "<Week: weekday=0>") self.assertEqual(repr(Week(n=-1, weekday=0)), "<-1 * Week: weekday=0>") - self.assertEqual(repr(Week(n=-2, weekday=0)), "<-2 * Weeks: weekday=0>") + self.assertEqual(repr(Week(n=-2, weekday=0)), + "<-2 * Weeks: weekday=0>") def test_corner(self): self.assertRaises(ValueError, 
Week, weekday=7) @@ -1873,14 +1990,20 @@ class TestWeekOfMonth(Base): _offset = WeekOfMonth def test_constructor(self): - assertRaisesRegexp(ValueError, "^N cannot be 0", WeekOfMonth, n=0, week=1, weekday=1) - assertRaisesRegexp(ValueError, "^Week", WeekOfMonth, n=1, week=4, weekday=0) - assertRaisesRegexp(ValueError, "^Week", WeekOfMonth, n=1, week=-1, weekday=0) - assertRaisesRegexp(ValueError, "^Day", WeekOfMonth, n=1, week=0, weekday=-1) - assertRaisesRegexp(ValueError, "^Day", WeekOfMonth, n=1, week=0, weekday=7) + assertRaisesRegexp(ValueError, "^N cannot be 0", WeekOfMonth, n=0, + week=1, weekday=1) + assertRaisesRegexp(ValueError, "^Week", WeekOfMonth, n=1, week=4, + weekday=0) + assertRaisesRegexp(ValueError, "^Week", WeekOfMonth, n=1, week=-1, + weekday=0) + assertRaisesRegexp(ValueError, "^Day", WeekOfMonth, n=1, week=0, + weekday=-1) + assertRaisesRegexp(ValueError, "^Day", WeekOfMonth, n=1, week=0, + weekday=7) def test_repr(self): - self.assertEqual(repr(WeekOfMonth(weekday=1,week=2)), "<WeekOfMonth: week=2, weekday=1>") + self.assertEqual(repr(WeekOfMonth(weekday=1, week=2)), + "<WeekOfMonth: week=2, weekday=1>") def test_offset(self): date1 = datetime(2011, 1, 4) # 1st Tuesday of Month @@ -1924,9 +2047,9 @@ def test_offset(self): (2, 2, 1, date4, datetime(2011, 3, 15)), ] - for n, week, weekday, date, expected in test_cases: + for n, week, weekday, dt, expected in test_cases: offset = WeekOfMonth(n, week=week, weekday=weekday) - assertEq(offset, date, expected) + assertEq(offset, dt, expected) # try subtracting result = datetime(2011, 2, 1) - WeekOfMonth(week=1, weekday=2) @@ -1944,24 +2067,26 @@ def test_onOffset(self): (0, 1, datetime(2011, 2, 8), False), ] - for week, weekday, date, expected in test_cases: + for week, weekday, dt, expected in test_cases: offset = WeekOfMonth(week=week, weekday=weekday) - self.assertEqual(offset.onOffset(date), expected) + self.assertEqual(offset.onOffset(dt), expected) + class TestLastWeekOfMonth(Base): _offset = 
LastWeekOfMonth def test_constructor(self): - assertRaisesRegexp(ValueError, "^N cannot be 0", \ - LastWeekOfMonth, n=0, weekday=1) + assertRaisesRegexp(ValueError, "^N cannot be 0", LastWeekOfMonth, n=0, + weekday=1) - assertRaisesRegexp(ValueError, "^Day", LastWeekOfMonth, n=1, weekday=-1) + assertRaisesRegexp(ValueError, "^Day", LastWeekOfMonth, n=1, + weekday=-1) assertRaisesRegexp(ValueError, "^Day", LastWeekOfMonth, n=1, weekday=7) def test_offset(self): - #### Saturday - last_sat = datetime(2013,8,31) - next_sat = datetime(2013,9,28) + # Saturday + last_sat = datetime(2013, 8, 31) + next_sat = datetime(2013, 9, 28) offset_sat = LastWeekOfMonth(n=1, weekday=5) one_day_before = (last_sat + timedelta(days=-1)) @@ -1970,14 +2095,14 @@ def test_offset(self): one_day_after = (last_sat + timedelta(days=+1)) self.assertEqual(one_day_after + offset_sat, next_sat) - #Test On that day + # Test On that day self.assertEqual(last_sat + offset_sat, next_sat) - #### Thursday + # Thursday offset_thur = LastWeekOfMonth(n=1, weekday=3) - last_thurs = datetime(2013,1,31) - next_thurs = datetime(2013,2,28) + last_thurs = datetime(2013, 1, 31) + next_thurs = datetime(2013, 2, 28) one_day_before = last_thurs + timedelta(days=-1) self.assertEqual(one_day_before + offset_thur, last_thurs) @@ -1995,14 +2120,15 @@ def test_offset(self): self.assertEqual(two_after + offset_thur, next_thurs) offset_sunday = LastWeekOfMonth(n=1, weekday=WeekDay.SUN) - self.assertEqual(datetime(2013,7,31) + offset_sunday, datetime(2013,8,25)) + self.assertEqual(datetime(2013, 7, 31) + + offset_sunday, datetime(2013, 8, 25)) def test_onOffset(self): test_cases = [ (WeekDay.SUN, datetime(2013, 1, 27), True), (WeekDay.SAT, datetime(2013, 3, 30), True), - (WeekDay.MON, datetime(2013, 2, 18), False), #Not the last Mon - (WeekDay.SUN, datetime(2013, 2, 25), False), #Not a SUN + (WeekDay.MON, datetime(2013, 2, 18), False), # Not the last Mon + (WeekDay.SUN, datetime(2013, 2, 25), False), # Not a SUN 
(WeekDay.MON, datetime(2013, 2, 25), True), (WeekDay.SAT, datetime(2013, 11, 30), True), @@ -2015,9 +2141,9 @@ def test_onOffset(self): (WeekDay.SAT, datetime(2019, 8, 31), True), ] - for weekday, date, expected in test_cases: + for weekday, dt, expected in test_cases: offset = LastWeekOfMonth(weekday=weekday) - self.assertEqual(offset.onOffset(date), expected, msg=date) + self.assertEqual(offset.onOffset(dt), expected, msg=date) class TestBMonthBegin(Base): @@ -2027,13 +2153,13 @@ def test_offset(self): tests = [] tests.append((BMonthBegin(), - {datetime(2008, 1, 1): datetime(2008, 2, 1), - datetime(2008, 1, 31): datetime(2008, 2, 1), - datetime(2006, 12, 29): datetime(2007, 1, 1), - datetime(2006, 12, 31): datetime(2007, 1, 1), - datetime(2006, 9, 1): datetime(2006, 10, 2), - datetime(2007, 1, 1): datetime(2007, 2, 1), - datetime(2006, 12, 1): datetime(2007, 1, 1)})) + {datetime(2008, 1, 1): datetime(2008, 2, 1), + datetime(2008, 1, 31): datetime(2008, 2, 1), + datetime(2006, 12, 29): datetime(2007, 1, 1), + datetime(2006, 12, 31): datetime(2007, 1, 1), + datetime(2006, 9, 1): datetime(2006, 10, 2), + datetime(2007, 1, 1): datetime(2007, 2, 1), + datetime(2006, 12, 1): datetime(2007, 1, 1)})) tests.append((BMonthBegin(0), {datetime(2008, 1, 1): datetime(2008, 1, 1), @@ -2044,22 +2170,22 @@ def test_offset(self): datetime(2006, 9, 15): datetime(2006, 10, 2)})) tests.append((BMonthBegin(2), - {datetime(2008, 1, 1): datetime(2008, 3, 3), - datetime(2008, 1, 15): datetime(2008, 3, 3), - datetime(2006, 12, 29): datetime(2007, 2, 1), - datetime(2006, 12, 31): datetime(2007, 2, 1), - datetime(2007, 1, 1): datetime(2007, 3, 1), - datetime(2006, 11, 1): datetime(2007, 1, 1)})) + {datetime(2008, 1, 1): datetime(2008, 3, 3), + datetime(2008, 1, 15): datetime(2008, 3, 3), + datetime(2006, 12, 29): datetime(2007, 2, 1), + datetime(2006, 12, 31): datetime(2007, 2, 1), + datetime(2007, 1, 1): datetime(2007, 3, 1), + datetime(2006, 11, 1): datetime(2007, 1, 1)})) 
tests.append((BMonthBegin(-1), - {datetime(2007, 1, 1): datetime(2006, 12, 1), - datetime(2008, 6, 30): datetime(2008, 6, 2), - datetime(2008, 6, 1): datetime(2008, 5, 1), - datetime(2008, 3, 10): datetime(2008, 3, 3), - datetime(2008, 12, 31): datetime(2008, 12, 1), - datetime(2006, 12, 29): datetime(2006, 12, 1), - datetime(2006, 12, 30): datetime(2006, 12, 1), - datetime(2007, 1, 1): datetime(2006, 12, 1)})) + {datetime(2007, 1, 1): datetime(2006, 12, 1), + datetime(2008, 6, 30): datetime(2008, 6, 2), + datetime(2008, 6, 1): datetime(2008, 5, 1), + datetime(2008, 3, 10): datetime(2008, 3, 3), + datetime(2008, 12, 31): datetime(2008, 12, 1), + datetime(2006, 12, 29): datetime(2006, 12, 1), + datetime(2006, 12, 30): datetime(2006, 12, 1), + datetime(2007, 1, 1): datetime(2006, 12, 1)})) for offset, cases in tests: for base, expected in compat.iteritems(cases): @@ -2072,8 +2198,8 @@ def test_onOffset(self): (BMonthBegin(), datetime(2001, 4, 2), True), (BMonthBegin(), datetime(2008, 3, 3), True)] - for offset, date, expected in tests: - assertOnOffset(offset, date, expected) + for offset, dt, expected in tests: + assertOnOffset(offset, dt, expected) def test_offsets_compare_equal(self): # root cause of #456 @@ -2089,12 +2215,12 @@ def test_offset(self): tests = [] tests.append((BMonthEnd(), - {datetime(2008, 1, 1): datetime(2008, 1, 31), - datetime(2008, 1, 31): datetime(2008, 2, 29), - datetime(2006, 12, 29): datetime(2007, 1, 31), - datetime(2006, 12, 31): datetime(2007, 1, 31), - datetime(2007, 1, 1): datetime(2007, 1, 31), - datetime(2006, 12, 1): datetime(2006, 12, 29)})) + {datetime(2008, 1, 1): datetime(2008, 1, 31), + datetime(2008, 1, 31): datetime(2008, 2, 29), + datetime(2006, 12, 29): datetime(2007, 1, 31), + datetime(2006, 12, 31): datetime(2007, 1, 31), + datetime(2007, 1, 1): datetime(2007, 1, 31), + datetime(2006, 12, 1): datetime(2006, 12, 29)})) tests.append((BMonthEnd(0), {datetime(2008, 1, 1): datetime(2008, 1, 31), @@ -2104,20 +2230,20 @@ def 
test_offset(self): datetime(2007, 1, 1): datetime(2007, 1, 31)})) tests.append((BMonthEnd(2), - {datetime(2008, 1, 1): datetime(2008, 2, 29), - datetime(2008, 1, 31): datetime(2008, 3, 31), - datetime(2006, 12, 29): datetime(2007, 2, 28), - datetime(2006, 12, 31): datetime(2007, 2, 28), - datetime(2007, 1, 1): datetime(2007, 2, 28), - datetime(2006, 11, 1): datetime(2006, 12, 29)})) + {datetime(2008, 1, 1): datetime(2008, 2, 29), + datetime(2008, 1, 31): datetime(2008, 3, 31), + datetime(2006, 12, 29): datetime(2007, 2, 28), + datetime(2006, 12, 31): datetime(2007, 2, 28), + datetime(2007, 1, 1): datetime(2007, 2, 28), + datetime(2006, 11, 1): datetime(2006, 12, 29)})) tests.append((BMonthEnd(-1), - {datetime(2007, 1, 1): datetime(2006, 12, 29), - datetime(2008, 6, 30): datetime(2008, 5, 30), - datetime(2008, 12, 31): datetime(2008, 11, 28), - datetime(2006, 12, 29): datetime(2006, 11, 30), - datetime(2006, 12, 30): datetime(2006, 12, 29), - datetime(2007, 1, 1): datetime(2006, 12, 29)})) + {datetime(2007, 1, 1): datetime(2006, 12, 29), + datetime(2008, 6, 30): datetime(2008, 5, 30), + datetime(2008, 12, 31): datetime(2008, 11, 28), + datetime(2006, 12, 29): datetime(2006, 11, 30), + datetime(2006, 12, 30): datetime(2006, 12, 29), + datetime(2007, 1, 1): datetime(2006, 12, 29)})) for offset, cases in tests: for base, expected in compat.iteritems(cases): @@ -2135,8 +2261,8 @@ def test_onOffset(self): tests = [(BMonthEnd(), datetime(2007, 12, 31), True), (BMonthEnd(), datetime(2008, 1, 1), False)] - for offset, date, expected in tests: - assertOnOffset(offset, date, expected) + for offset, dt, expected in tests: + assertOnOffset(offset, dt, expected) def test_offsets_compare_equal(self): # root cause of #456 @@ -2154,11 +2280,11 @@ def test_offset(self): # NOTE: I'm not entirely happy with the logic here for Begin -ss # see thread 'offset conventions' on the ML tests.append((MonthBegin(), - {datetime(2008, 1, 31): datetime(2008, 2, 1), - datetime(2008, 2, 1): 
datetime(2008, 3, 1), - datetime(2006, 12, 31): datetime(2007, 1, 1), - datetime(2006, 12, 1): datetime(2007, 1, 1), - datetime(2007, 1, 31): datetime(2007, 2, 1)})) + {datetime(2008, 1, 31): datetime(2008, 2, 1), + datetime(2008, 2, 1): datetime(2008, 3, 1), + datetime(2006, 12, 31): datetime(2007, 1, 1), + datetime(2006, 12, 1): datetime(2007, 1, 1), + datetime(2007, 1, 31): datetime(2007, 2, 1)})) tests.append((MonthBegin(0), {datetime(2008, 1, 31): datetime(2008, 2, 1), @@ -2167,19 +2293,19 @@ def test_offset(self): datetime(2007, 1, 31): datetime(2007, 2, 1)})) tests.append((MonthBegin(2), - {datetime(2008, 2, 29): datetime(2008, 4, 1), - datetime(2008, 1, 31): datetime(2008, 3, 1), - datetime(2006, 12, 31): datetime(2007, 2, 1), - datetime(2007, 12, 28): datetime(2008, 2, 1), - datetime(2007, 1, 1): datetime(2007, 3, 1), - datetime(2006, 11, 1): datetime(2007, 1, 1)})) + {datetime(2008, 2, 29): datetime(2008, 4, 1), + datetime(2008, 1, 31): datetime(2008, 3, 1), + datetime(2006, 12, 31): datetime(2007, 2, 1), + datetime(2007, 12, 28): datetime(2008, 2, 1), + datetime(2007, 1, 1): datetime(2007, 3, 1), + datetime(2006, 11, 1): datetime(2007, 1, 1)})) tests.append((MonthBegin(-1), - {datetime(2007, 1, 1): datetime(2006, 12, 1), - datetime(2008, 5, 31): datetime(2008, 5, 1), - datetime(2008, 12, 31): datetime(2008, 12, 1), - datetime(2006, 12, 29): datetime(2006, 12, 1), - datetime(2006, 1, 2): datetime(2006, 1, 1)})) + {datetime(2007, 1, 1): datetime(2006, 12, 1), + datetime(2008, 5, 31): datetime(2008, 5, 1), + datetime(2008, 12, 31): datetime(2008, 12, 1), + datetime(2006, 12, 29): datetime(2006, 12, 1), + datetime(2006, 1, 2): datetime(2006, 1, 1)})) for offset, cases in tests: for base, expected in compat.iteritems(cases): @@ -2193,12 +2319,12 @@ def test_offset(self): tests = [] tests.append((MonthEnd(), - {datetime(2008, 1, 1): datetime(2008, 1, 31), - datetime(2008, 1, 31): datetime(2008, 2, 29), - datetime(2006, 12, 29): datetime(2006, 12, 31), - 
datetime(2006, 12, 31): datetime(2007, 1, 31), - datetime(2007, 1, 1): datetime(2007, 1, 31), - datetime(2006, 12, 1): datetime(2006, 12, 31)})) + {datetime(2008, 1, 1): datetime(2008, 1, 31), + datetime(2008, 1, 31): datetime(2008, 2, 29), + datetime(2006, 12, 29): datetime(2006, 12, 31), + datetime(2006, 12, 31): datetime(2007, 1, 31), + datetime(2007, 1, 1): datetime(2007, 1, 31), + datetime(2006, 12, 1): datetime(2006, 12, 31)})) tests.append((MonthEnd(0), {datetime(2008, 1, 1): datetime(2008, 1, 31), @@ -2208,20 +2334,20 @@ def test_offset(self): datetime(2007, 1, 1): datetime(2007, 1, 31)})) tests.append((MonthEnd(2), - {datetime(2008, 1, 1): datetime(2008, 2, 29), - datetime(2008, 1, 31): datetime(2008, 3, 31), - datetime(2006, 12, 29): datetime(2007, 1, 31), - datetime(2006, 12, 31): datetime(2007, 2, 28), - datetime(2007, 1, 1): datetime(2007, 2, 28), - datetime(2006, 11, 1): datetime(2006, 12, 31)})) + {datetime(2008, 1, 1): datetime(2008, 2, 29), + datetime(2008, 1, 31): datetime(2008, 3, 31), + datetime(2006, 12, 29): datetime(2007, 1, 31), + datetime(2006, 12, 31): datetime(2007, 2, 28), + datetime(2007, 1, 1): datetime(2007, 2, 28), + datetime(2006, 11, 1): datetime(2006, 12, 31)})) tests.append((MonthEnd(-1), - {datetime(2007, 1, 1): datetime(2006, 12, 31), - datetime(2008, 6, 30): datetime(2008, 5, 31), - datetime(2008, 12, 31): datetime(2008, 11, 30), - datetime(2006, 12, 29): datetime(2006, 11, 30), - datetime(2006, 12, 30): datetime(2006, 11, 30), - datetime(2007, 1, 1): datetime(2006, 12, 31)})) + {datetime(2007, 1, 1): datetime(2006, 12, 31), + datetime(2008, 6, 30): datetime(2008, 5, 31), + datetime(2008, 12, 31): datetime(2008, 11, 30), + datetime(2006, 12, 29): datetime(2006, 11, 30), + datetime(2006, 12, 30): datetime(2006, 11, 30), + datetime(2007, 1, 1): datetime(2006, 12, 31)})) for offset, cases in tests: for base, expected in compat.iteritems(cases): @@ -2250,17 +2376,20 @@ def test_onOffset(self): tests = [(MonthEnd(), datetime(2007, 
12, 31), True), (MonthEnd(), datetime(2008, 1, 1), False)] - for offset, date, expected in tests: - assertOnOffset(offset, date, expected) + for offset, dt, expected in tests: + assertOnOffset(offset, dt, expected) class TestBQuarterBegin(Base): _offset = BQuarterBegin def test_repr(self): - self.assertEqual(repr(BQuarterBegin()),"<BusinessQuarterBegin: startingMonth=3>") - self.assertEqual(repr(BQuarterBegin(startingMonth=3)), "<BusinessQuarterBegin: startingMonth=3>") - self.assertEqual(repr(BQuarterBegin(startingMonth=1)), "<BusinessQuarterBegin: startingMonth=1>") + self.assertEqual(repr(BQuarterBegin()), + "<BusinessQuarterBegin: startingMonth=3>") + self.assertEqual(repr(BQuarterBegin(startingMonth=3)), + "<BusinessQuarterBegin: startingMonth=3>") + self.assertEqual(repr(BQuarterBegin(startingMonth=1)), + "<BusinessQuarterBegin: startingMonth=1>") def test_isAnchored(self): self.assertTrue(BQuarterBegin(startingMonth=1).isAnchored()) @@ -2349,9 +2478,12 @@ class TestBQuarterEnd(Base): _offset = BQuarterEnd def test_repr(self): - self.assertEqual(repr(BQuarterEnd()),"<BusinessQuarterEnd: startingMonth=3>") - self.assertEqual(repr(BQuarterEnd(startingMonth=3)), "<BusinessQuarterEnd: startingMonth=3>") - self.assertEqual(repr(BQuarterEnd(startingMonth=1)), "<BusinessQuarterEnd: startingMonth=1>") + self.assertEqual(repr(BQuarterEnd()), + "<BusinessQuarterEnd: startingMonth=3>") + self.assertEqual(repr(BQuarterEnd(startingMonth=3)), + "<BusinessQuarterEnd: startingMonth=3>") + self.assertEqual(repr(BQuarterEnd(startingMonth=1)), + "<BusinessQuarterEnd: startingMonth=1>") def test_isAnchored(self): self.assertTrue(BQuarterEnd(startingMonth=1).isAnchored()) @@ -2450,30 +2582,37 @@ def test_onOffset(self): (BQuarterEnd(1, startingMonth=3), datetime(2007, 6, 30), False), ] - for offset, date, expected in tests: - assertOnOffset(offset, date, expected) + for offset, dt, expected in tests: + assertOnOffset(offset, dt, expected) + def makeFY5253LastOfMonthQuarter(*args, 
**kwds): return FY5253Quarter(*args, variation="last", **kwds) + def makeFY5253NearestEndMonthQuarter(*args, **kwds): return FY5253Quarter(*args, variation="nearest", **kwds) + def makeFY5253NearestEndMonth(*args, **kwds): return FY5253(*args, variation="nearest", **kwds) + def makeFY5253LastOfMonth(*args, **kwds): return FY5253(*args, variation="last", **kwds) -class TestFY5253LastOfMonth(Base): +class TestFY5253LastOfMonth(Base): def test_onOffset(self): - offset_lom_sat_aug = makeFY5253LastOfMonth(1, startingMonth=8, weekday=WeekDay.SAT) - offset_lom_sat_sep = makeFY5253LastOfMonth(1, startingMonth=9, weekday=WeekDay.SAT) + offset_lom_sat_aug = makeFY5253LastOfMonth(1, startingMonth=8, + weekday=WeekDay.SAT) + offset_lom_sat_sep = makeFY5253LastOfMonth(1, startingMonth=9, + weekday=WeekDay.SAT) tests = [ - #From Wikipedia (see: http://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar#Last_Saturday_of_the_month_at_fiscal_year_end) + # From Wikipedia (see: + # http://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar#Last_Saturday_of_the_month_at_fiscal_year_end) (offset_lom_sat_aug, datetime(2006, 8, 26), True), (offset_lom_sat_aug, datetime(2007, 8, 25), True), (offset_lom_sat_aug, datetime(2008, 8, 30), True), @@ -2504,19 +2643,23 @@ def test_onOffset(self): (offset_lom_sat_aug, datetime(2011, 8, 26), False), (offset_lom_sat_aug, datetime(2019, 8, 30), False), - #From GMCR (see for example: http://yahoo.brand.edgar-online.com/Default.aspx?companyid=3184&formtypeID=7) + # From GMCR (see for example: + # http://yahoo.brand.edgar-online.com/Default.aspx? 
+            # companyid=3184&formtypeID=7)
             (offset_lom_sat_sep, datetime(2010, 9, 25), True),
             (offset_lom_sat_sep, datetime(2011, 9, 24), True),
             (offset_lom_sat_sep, datetime(2012, 9, 29), True),
         ]

-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

     def test_apply(self):
-        offset_lom_aug_sat = makeFY5253LastOfMonth(startingMonth=8, weekday=WeekDay.SAT)
-        offset_lom_aug_sat_1 = makeFY5253LastOfMonth(n=1, startingMonth=8, weekday=WeekDay.SAT)
+        offset_lom_aug_sat = makeFY5253LastOfMonth(startingMonth=8,
+                                                   weekday=WeekDay.SAT)
+        offset_lom_aug_sat_1 = makeFY5253LastOfMonth(n=1, startingMonth=8,
+                                                     weekday=WeekDay.SAT)

         date_seq_lom_aug_sat = [datetime(2006, 8, 26), datetime(2007, 8, 25),
                                 datetime(2008, 8, 30), datetime(2009, 8, 29),
@@ -2526,12 +2669,16 @@ def test_apply(self):
                                 datetime(2016, 8, 27)]

         tests = [
-            (offset_lom_aug_sat, date_seq_lom_aug_sat),
-            (offset_lom_aug_sat_1, date_seq_lom_aug_sat),
-            (offset_lom_aug_sat, [datetime(2006, 8, 25)] + date_seq_lom_aug_sat),
-            (offset_lom_aug_sat_1, [datetime(2006, 8, 27)] + date_seq_lom_aug_sat[1:]),
-            (makeFY5253LastOfMonth(n=-1, startingMonth=8, weekday=WeekDay.SAT), list(reversed(date_seq_lom_aug_sat))),
-        ]
+            (offset_lom_aug_sat, date_seq_lom_aug_sat),
+            (offset_lom_aug_sat_1, date_seq_lom_aug_sat),
+            (offset_lom_aug_sat, [
+                datetime(2006, 8, 25)] + date_seq_lom_aug_sat),
+            (offset_lom_aug_sat_1, [
+                datetime(2006, 8, 27)] + date_seq_lom_aug_sat[1:]),
+            (makeFY5253LastOfMonth(n=-1, startingMonth=8,
+                                   weekday=WeekDay.SAT),
+             list(reversed(date_seq_lom_aug_sat))),
+        ]
         for test in tests:
             offset, data = test
             current = data[0]
@@ -2539,53 +2686,82 @@ def test_apply(self):
             current = current + offset
             self.assertEqual(current, datum)

-class TestFY5253NearestEndMonth(Base):

+class TestFY5253NearestEndMonth(Base):
     def test_get_target_month_end(self):
-        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT).get_target_month_end(datetime(2013,1,1)), datetime(2013,8,31))
-        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=12, weekday=WeekDay.SAT).get_target_month_end(datetime(2013,1,1)), datetime(2013,12,31))
-        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=2, weekday=WeekDay.SAT).get_target_month_end(datetime(2013,1,1)), datetime(2013,2,28))
+        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8,
+                                                   weekday=WeekDay.SAT)
+                         .get_target_month_end(
+                             datetime(2013, 1, 1)), datetime(2013, 8, 31))
+        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=12,
+                                                   weekday=WeekDay.SAT)
+                         .get_target_month_end(datetime(2013, 1, 1)),
+                         datetime(2013, 12, 31))
+        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=2,
+                                                   weekday=WeekDay.SAT)
+                         .get_target_month_end(datetime(2013, 1, 1)),
+                         datetime(2013, 2, 28))

     def test_get_year_end(self):
-        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT).get_year_end(datetime(2013,1,1)), datetime(2013,8,31))
-        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SUN).get_year_end(datetime(2013,1,1)), datetime(2013,9,1))
-        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.FRI).get_year_end(datetime(2013,1,1)), datetime(2013,8,30))
+        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8,
+                                                   weekday=WeekDay.SAT)
+                         .get_year_end(datetime(2013, 1, 1)),
+                         datetime(2013, 8, 31))
+        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8,
+                                                   weekday=WeekDay.SUN)
+                         .get_year_end(datetime(2013, 1, 1)),
+                         datetime(2013, 9, 1))
+        self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8,
+                                                   weekday=WeekDay.FRI)
+                         .get_year_end(datetime(2013, 1, 1)),
+                         datetime(2013, 8, 30))

         offset_n = FY5253(weekday=WeekDay.TUE, startingMonth=12,
-                        variation="nearest")
-        self.assertEqual(offset_n.get_year_end(datetime(2012,1,1)), datetime(2013,1,1))
-        self.assertEqual(offset_n.get_year_end(datetime(2012,1,10)), datetime(2013,1,1))
-
-        self.assertEqual(offset_n.get_year_end(datetime(2013,1,1)), datetime(2013,12,31))
-        self.assertEqual(offset_n.get_year_end(datetime(2013,1,2)), datetime(2013,12,31))
-        self.assertEqual(offset_n.get_year_end(datetime(2013,1,3)), datetime(2013,12,31))
-        self.assertEqual(offset_n.get_year_end(datetime(2013,1,10)), datetime(2013,12,31))
+                          variation="nearest")
+        self.assertEqual(offset_n.get_year_end(
+            datetime(2012, 1, 1)), datetime(2013, 1, 1))
+        self.assertEqual(offset_n.get_year_end(
+            datetime(2012, 1, 10)), datetime(2013, 1, 1))
+
+        self.assertEqual(offset_n.get_year_end(
+            datetime(2013, 1, 1)), datetime(2013, 12, 31))
+        self.assertEqual(offset_n.get_year_end(
+            datetime(2013, 1, 2)), datetime(2013, 12, 31))
+        self.assertEqual(offset_n.get_year_end(
+            datetime(2013, 1, 3)), datetime(2013, 12, 31))
+        self.assertEqual(offset_n.get_year_end(
+            datetime(2013, 1, 10)), datetime(2013, 12, 31))

         JNJ = FY5253(n=1, startingMonth=12, weekday=6, variation="nearest")
-        self.assertEqual(JNJ.get_year_end(datetime(2006, 1, 1)), datetime(2006, 12, 31))
+        self.assertEqual(JNJ.get_year_end(
+            datetime(2006, 1, 1)), datetime(2006, 12, 31))

     def test_onOffset(self):
-        offset_lom_aug_sat = makeFY5253NearestEndMonth(1, startingMonth=8, weekday=WeekDay.SAT)
-        offset_lom_aug_thu = makeFY5253NearestEndMonth(1, startingMonth=8, weekday=WeekDay.THU)
+        offset_lom_aug_sat = makeFY5253NearestEndMonth(1, startingMonth=8,
+                                                       weekday=WeekDay.SAT)
+        offset_lom_aug_thu = makeFY5253NearestEndMonth(1, startingMonth=8,
+                                                       weekday=WeekDay.THU)
         offset_n = FY5253(weekday=WeekDay.TUE, startingMonth=12,
-                        variation="nearest")
+                          variation="nearest")

         tests = [
-# From Wikipedia (see: http://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar#Saturday_nearest_the_end_of_month)
-# 2006-09-02 2006 September 2
-# 2007-09-01 2007 September 1
-# 2008-08-30 2008 August 30 (leap year)
-# 2009-08-29 2009 August 29
-# 2010-08-28 2010 August 28
-# 2011-09-03 2011 September 3
-# 2012-09-01 2012 September 1 (leap year)
-# 2013-08-31 2013 August 31
-# 2014-08-30 2014 August 30
-# 2015-08-29 2015 August 29
-# 2016-09-03 2016 September 3 (leap year)
-# 2017-09-02 2017 September 2
-# 2018-09-01 2018 September 1
-# 2019-08-31 2019 August 31
+            # From Wikipedia (see:
+            # http://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar
+            # #Saturday_nearest_the_end_of_month)
+            # 2006-09-02 2006 September 2
+            # 2007-09-01 2007 September 1
+            # 2008-08-30 2008 August 30 (leap year)
+            # 2009-08-29 2009 August 29
+            # 2010-08-28 2010 August 28
+            # 2011-09-03 2011 September 3
+            # 2012-09-01 2012 September 1 (leap year)
+            # 2013-08-31 2013 August 31
+            # 2014-08-30 2014 August 30
+            # 2015-08-29 2015 August 29
+            # 2016-09-03 2016 September 3 (leap year)
+            # 2017-09-02 2017 September 2
+            # 2018-09-01 2018 September 1
+            # 2019-08-31 2019 August 31
             (offset_lom_aug_sat, datetime(2006, 9, 2), True),
             (offset_lom_aug_sat, datetime(2007, 9, 1), True),
             (offset_lom_aug_sat, datetime(2008, 8, 30), True),
@@ -2613,7 +2789,8 @@ def test_onOffset(self):
             (offset_lom_aug_sat, datetime(2011, 8, 26), False),
             (offset_lom_aug_sat, datetime(2019, 8, 30), False),

-            #From Micron, see: http://google.brand.edgar-online.com/?sym=MU&formtypeID=7
+            # From Micron, see:
+            # http://google.brand.edgar-online.com/?sym=MU&formtypeID=7
             (offset_lom_aug_thu, datetime(2012, 8, 30), True),
             (offset_lom_aug_thu, datetime(2011, 9, 1), True),
@@ -2622,8 +2799,8 @@ def test_onOffset(self):
             (offset_n, datetime(2013, 1, 2), False),
         ]

-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

     def test_apply(self):
         date_seq_nem_8_sat = [datetime(2006, 9, 2), datetime(2007, 9, 1),
@@ -2636,20 +2813,37 @@ def test_apply(self):
                               datetime(2011, 1, 2), datetime(2012, 1, 1),
                               datetime(2012, 12, 30)]

-        DEC_SAT = FY5253(n=-1, startingMonth=12, weekday=5, variation="nearest")
+        DEC_SAT = FY5253(n=-1, startingMonth=12, weekday=5,
+                         variation="nearest")

         tests = [
-            (makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT), date_seq_nem_8_sat),
-            (makeFY5253NearestEndMonth(n=1, startingMonth=8, weekday=WeekDay.SAT), date_seq_nem_8_sat),
-            (makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT), [datetime(2006, 9, 1)] + date_seq_nem_8_sat),
-            (makeFY5253NearestEndMonth(n=1, startingMonth=8, weekday=WeekDay.SAT), [datetime(2006, 9, 3)] + date_seq_nem_8_sat[1:]),
-            (makeFY5253NearestEndMonth(n=-1, startingMonth=8, weekday=WeekDay.SAT), list(reversed(date_seq_nem_8_sat))),
-            (makeFY5253NearestEndMonth(n=1, startingMonth=12, weekday=WeekDay.SUN), JNJ),
-            (makeFY5253NearestEndMonth(n=-1, startingMonth=12, weekday=WeekDay.SUN), list(reversed(JNJ))),
-            (makeFY5253NearestEndMonth(n=1, startingMonth=12, weekday=WeekDay.SUN), [datetime(2005,1,2), datetime(2006, 1, 1)]),
-            (makeFY5253NearestEndMonth(n=1, startingMonth=12, weekday=WeekDay.SUN), [datetime(2006,1,2), datetime(2006, 12, 31)]),
-            (DEC_SAT, [datetime(2013,1,15), datetime(2012,12,29)])
-        ]
+            (makeFY5253NearestEndMonth(startingMonth=8,
+                                       weekday=WeekDay.SAT),
+             date_seq_nem_8_sat),
+            (makeFY5253NearestEndMonth(n=1, startingMonth=8,
+                                       weekday=WeekDay.SAT),
+             date_seq_nem_8_sat),
+            (makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT),
+             [datetime(2006, 9, 1)] + date_seq_nem_8_sat),
+            (makeFY5253NearestEndMonth(n=1, startingMonth=8,
+                                       weekday=WeekDay.SAT),
+             [datetime(2006, 9, 3)] + date_seq_nem_8_sat[1:]),
+            (makeFY5253NearestEndMonth(n=-1, startingMonth=8,
+                                       weekday=WeekDay.SAT),
+             list(reversed(date_seq_nem_8_sat))),
+            (makeFY5253NearestEndMonth(n=1, startingMonth=12,
+                                       weekday=WeekDay.SUN), JNJ),
+            (makeFY5253NearestEndMonth(n=-1, startingMonth=12,
+                                       weekday=WeekDay.SUN),
+             list(reversed(JNJ))),
+            (makeFY5253NearestEndMonth(n=1, startingMonth=12,
+                                       weekday=WeekDay.SUN),
+             [datetime(2005, 1, 2), datetime(2006, 1, 1)]),
+            (makeFY5253NearestEndMonth(n=1, startingMonth=12,
+                                       weekday=WeekDay.SUN),
+             [datetime(2006, 1, 2), datetime(2006, 12, 31)]),
+            (DEC_SAT, [datetime(2013, 1, 15), datetime(2012, 12, 29)])
+        ]
         for test in tests:
             offset, data = test
             current = data[0]
@@ -2657,51 +2851,78 @@ def test_apply(self):
             current = current + offset
             self.assertEqual(current, datum)

-class TestFY5253LastOfMonthQuarter(Base):

+class TestFY5253LastOfMonthQuarter(Base):
     def test_isAnchored(self):
-        self.assertTrue(makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4).isAnchored())
-        self.assertTrue(makeFY5253LastOfMonthQuarter(weekday=WeekDay.SAT, startingMonth=3, qtr_with_extra_week=4).isAnchored())
-        self.assertFalse(makeFY5253LastOfMonthQuarter(2, startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4).isAnchored())
+        self.assertTrue(
+            makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT,
+                                         qtr_with_extra_week=4).isAnchored())
+        self.assertTrue(
+            makeFY5253LastOfMonthQuarter(weekday=WeekDay.SAT, startingMonth=3,
+                                         qtr_with_extra_week=4).isAnchored())
+        self.assertFalse(makeFY5253LastOfMonthQuarter(
+            2, startingMonth=1, weekday=WeekDay.SAT,
+            qtr_with_extra_week=4).isAnchored())

     def test_equality(self):
-        self.assertEqual(makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4), makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4))
-        self.assertNotEqual(makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4), makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SUN, qtr_with_extra_week=4))
-        self.assertNotEqual(makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4), makeFY5253LastOfMonthQuarter(startingMonth=2, weekday=WeekDay.SAT, qtr_with_extra_week=4))
+        self.assertEqual(makeFY5253LastOfMonthQuarter(startingMonth=1,
+                                                      weekday=WeekDay.SAT,
+                                                      qtr_with_extra_week=4),
+                         makeFY5253LastOfMonthQuarter(startingMonth=1,
+                                                      weekday=WeekDay.SAT,
+                                                      qtr_with_extra_week=4))
+        self.assertNotEqual(
+            makeFY5253LastOfMonthQuarter(
+                startingMonth=1, weekday=WeekDay.SAT,
+                qtr_with_extra_week=4),
+            makeFY5253LastOfMonthQuarter(
+                startingMonth=1, weekday=WeekDay.SUN,
+                qtr_with_extra_week=4))
+        self.assertNotEqual(
+            makeFY5253LastOfMonthQuarter(
+                startingMonth=1, weekday=WeekDay.SAT,
+                qtr_with_extra_week=4),
+            makeFY5253LastOfMonthQuarter(
+                startingMonth=2, weekday=WeekDay.SAT,
+                qtr_with_extra_week=4))

     def test_offset(self):
-        offset = makeFY5253LastOfMonthQuarter(1, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
-        offset2 = makeFY5253LastOfMonthQuarter(2, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
-        offset4 = makeFY5253LastOfMonthQuarter(4, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
-
-        offset_neg1 = makeFY5253LastOfMonthQuarter(-1, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
-        offset_neg2 = makeFY5253LastOfMonthQuarter(-2, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
-
-        GMCR = [datetime(2010, 3, 27),
-                datetime(2010, 6, 26),
-                datetime(2010, 9, 25),
-                datetime(2010, 12, 25),
-                datetime(2011, 3, 26),
-                datetime(2011, 6, 25),
-                datetime(2011, 9, 24),
-                datetime(2011, 12, 24),
-                datetime(2012, 3, 24),
-                datetime(2012, 6, 23),
-                datetime(2012, 9, 29),
-                datetime(2012, 12, 29),
-                datetime(2013, 3, 30),
-                datetime(2013, 6, 29)]
-
+        offset = makeFY5253LastOfMonthQuarter(1, startingMonth=9,
+                                              weekday=WeekDay.SAT,
+                                              qtr_with_extra_week=4)
+        offset2 = makeFY5253LastOfMonthQuarter(2, startingMonth=9,
+                                               weekday=WeekDay.SAT,
+                                               qtr_with_extra_week=4)
+        offset4 = makeFY5253LastOfMonthQuarter(4, startingMonth=9,
+                                               weekday=WeekDay.SAT,
+                                               qtr_with_extra_week=4)
+
+        offset_neg1 = makeFY5253LastOfMonthQuarter(-1, startingMonth=9,
+                                                   weekday=WeekDay.SAT,
+                                                   qtr_with_extra_week=4)
+        offset_neg2 = makeFY5253LastOfMonthQuarter(-2, startingMonth=9,
+                                                   weekday=WeekDay.SAT,
+                                                   qtr_with_extra_week=4)
+
+        GMCR = [datetime(2010, 3, 27), datetime(2010, 6, 26),
+                datetime(2010, 9, 25), datetime(2010, 12, 25),
+                datetime(2011, 3, 26), datetime(2011, 6, 25),
+                datetime(2011, 9, 24), datetime(2011, 12, 24),
+                datetime(2012, 3, 24), datetime(2012, 6, 23),
+                datetime(2012, 9, 29), datetime(2012, 12, 29),
+                datetime(2013, 3, 30), datetime(2013, 6, 29)]

         assertEq(offset, base=GMCR[0], expected=GMCR[1])
-        assertEq(offset, base=GMCR[0] + relativedelta(days=-1), expected=GMCR[0])
+        assertEq(offset, base=GMCR[0] + relativedelta(days=-1),
+                 expected=GMCR[0])
         assertEq(offset, base=GMCR[1], expected=GMCR[2])

         assertEq(offset2, base=GMCR[0], expected=GMCR[2])
         assertEq(offset4, base=GMCR[0], expected=GMCR[4])

         assertEq(offset_neg1, base=GMCR[-1], expected=GMCR[-2])
-        assertEq(offset_neg1, base=GMCR[-1] + relativedelta(days=+1), expected=GMCR[-1])
+        assertEq(offset_neg1, base=GMCR[-1] + relativedelta(days=+1),
+                 expected=GMCR[-1])
         assertEq(offset_neg2, base=GMCR[-1], expected=GMCR[-3])

         date = GMCR[0] + relativedelta(days=-1)
@@ -2714,13 +2935,16 @@ def test_offset(self):
             assertEq(offset_neg1, date, expected)
             date = date + offset_neg1

-
     def test_onOffset(self):
-        lomq_aug_sat_4 = makeFY5253LastOfMonthQuarter(1, startingMonth=8, weekday=WeekDay.SAT, qtr_with_extra_week=4)
-        lomq_sep_sat_4 = makeFY5253LastOfMonthQuarter(1, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+        lomq_aug_sat_4 = makeFY5253LastOfMonthQuarter(1, startingMonth=8,
+                                                      weekday=WeekDay.SAT,
+                                                      qtr_with_extra_week=4)
+        lomq_sep_sat_4 = makeFY5253LastOfMonthQuarter(1, startingMonth=9,
+                                                      weekday=WeekDay.SAT,
+                                                      qtr_with_extra_week=4)

         tests = [
-            #From Wikipedia
+            # From Wikipedia
             (lomq_aug_sat_4, datetime(2006, 8, 26), True),
             (lomq_aug_sat_4, datetime(2007, 8, 25), True),
             (lomq_aug_sat_4, datetime(2008, 8, 30), True),
@@ -2744,7 +2968,7 @@ def test_onOffset(self):
             (lomq_aug_sat_4, datetime(2011, 8, 26), False),
             (lomq_aug_sat_4, datetime(2019, 8, 30), False),

-            #From GMCR
+            # From GMCR
             (lomq_sep_sat_4, datetime(2010, 9, 25), True),
             (lomq_sep_sat_4, datetime(2011, 9, 24), True),
             (lomq_sep_sat_4, datetime(2012, 9, 29), True),
@@ -2759,57 +2983,111 @@ def test_onOffset(self):
             (lomq_sep_sat_4, datetime(2012, 12, 29), True),
             (lomq_sep_sat_4, datetime(2011, 12, 24), True),

-            #INTC (extra week in Q1)
-            #See: http://www.intc.com/releasedetail.cfm?ReleaseID=542844
-            (makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1), datetime(2011, 4, 2), True),
-
-            #see: http://google.brand.edgar-online.com/?sym=INTC&formtypeID=7
-            (makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1), datetime(2012, 12, 29), True),
-            (makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1), datetime(2011, 12, 31), True),
-            (makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1), datetime(2010, 12, 25), True),
-
+            # INTC (extra week in Q1)
+            # See: http://www.intc.com/releasedetail.cfm?ReleaseID=542844
+            (makeFY5253LastOfMonthQuarter(1, startingMonth=12,
+                                          weekday=WeekDay.SAT,
+                                          qtr_with_extra_week=1),
+             datetime(2011, 4, 2), True),
+
+            # see: http://google.brand.edgar-online.com/?sym=INTC&formtypeID=7
+            (makeFY5253LastOfMonthQuarter(1, startingMonth=12,
+                                          weekday=WeekDay.SAT,
+                                          qtr_with_extra_week=1),
+             datetime(2012, 12, 29), True),
+            (makeFY5253LastOfMonthQuarter(1, startingMonth=12,
+                                          weekday=WeekDay.SAT,
+                                          qtr_with_extra_week=1),
+             datetime(2011, 12, 31), True),
+            (makeFY5253LastOfMonthQuarter(1, startingMonth=12,
+                                          weekday=WeekDay.SAT,
+                                          qtr_with_extra_week=1),
+             datetime(2010, 12, 25), True),
         ]

-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

     def test_year_has_extra_week(self):
-        #End of long Q1
-        self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2011, 4, 2)))
-
-        #Start of long Q1
-        self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2010, 12, 26)))
-
-        #End of year before year with long Q1
-        self.assertFalse(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2010, 12, 25)))
-
-        for year in [x for x in range(1994, 2011+1) if x not in [2011, 2005, 2000, 1994]]:
-            self.assertFalse(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(year, 4, 2)))
-
-        #Other long years
-        self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2005, 4, 2)))
-        self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2000, 4, 2)))
-        self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(1994, 4, 2)))
+        # End of long Q1
+        self.assertTrue(
+            makeFY5253LastOfMonthQuarter(1, startingMonth=12,
+                                         weekday=WeekDay.SAT,
+                                         qtr_with_extra_week=1)
+            .year_has_extra_week(datetime(2011, 4, 2)))
+
+        # Start of long Q1
+        self.assertTrue(
+            makeFY5253LastOfMonthQuarter(
+                1, startingMonth=12, weekday=WeekDay.SAT,
+                qtr_with_extra_week=1)
+            .year_has_extra_week(datetime(2010, 12, 26)))
+
+        # End of year before year with long Q1
+        self.assertFalse(
+            makeFY5253LastOfMonthQuarter(
+                1, startingMonth=12, weekday=WeekDay.SAT,
+                qtr_with_extra_week=1)
+            .year_has_extra_week(datetime(2010, 12, 25)))
+
+        for year in [x
+                     for x in range(1994, 2011 + 1)
+                     if x not in [2011, 2005, 2000, 1994]]:
+            self.assertFalse(
+                makeFY5253LastOfMonthQuarter(
+                    1, startingMonth=12, weekday=WeekDay.SAT,
+                    qtr_with_extra_week=1)
+                .year_has_extra_week(datetime(year, 4, 2)))
+
+        # Other long years
+        self.assertTrue(
+            makeFY5253LastOfMonthQuarter(
+                1, startingMonth=12, weekday=WeekDay.SAT,
+                qtr_with_extra_week=1)
+            .year_has_extra_week(datetime(2005, 4, 2)))
+
+        self.assertTrue(
+            makeFY5253LastOfMonthQuarter(
+                1, startingMonth=12, weekday=WeekDay.SAT,
+                qtr_with_extra_week=1)
+            .year_has_extra_week(datetime(2000, 4, 2)))
+
+        self.assertTrue(
+            makeFY5253LastOfMonthQuarter(
+                1, startingMonth=12, weekday=WeekDay.SAT,
+                qtr_with_extra_week=1)
+            .year_has_extra_week(datetime(1994, 4, 2)))

     def test_get_weeks(self):
-        sat_dec_1 = makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1)
-        sat_dec_4 = makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+        sat_dec_1 = makeFY5253LastOfMonthQuarter(1, startingMonth=12,
+                                                 weekday=WeekDay.SAT,
+                                                 qtr_with_extra_week=1)
+        sat_dec_4 = makeFY5253LastOfMonthQuarter(1, startingMonth=12,
+                                                 weekday=WeekDay.SAT,
+                                                 qtr_with_extra_week=4)

-        self.assertEqual(sat_dec_1.get_weeks(datetime(2011, 4, 2)), [14, 13, 13, 13])
-        self.assertEqual(sat_dec_4.get_weeks(datetime(2011, 4, 2)), [13, 13, 13, 14])
-        self.assertEqual(sat_dec_1.get_weeks(datetime(2010, 12, 25)), [13, 13, 13, 13])
+        self.assertEqual(sat_dec_1.get_weeks(
+            datetime(2011, 4, 2)), [14, 13, 13, 13])
+        self.assertEqual(sat_dec_4.get_weeks(
+            datetime(2011, 4, 2)), [13, 13, 13, 14])
+        self.assertEqual(sat_dec_1.get_weeks(
+            datetime(2010, 12, 25)), [13, 13, 13, 13])

-class TestFY5253NearestEndMonthQuarter(Base):

+class TestFY5253NearestEndMonthQuarter(Base):
     def test_onOffset(self):
-        offset_nem_sat_aug_4 = makeFY5253NearestEndMonthQuarter(1, startingMonth=8, weekday=WeekDay.SAT, qtr_with_extra_week=4)
-        offset_nem_thu_aug_4 = makeFY5253NearestEndMonthQuarter(1, startingMonth=8, weekday=WeekDay.THU, qtr_with_extra_week=4)
+        offset_nem_sat_aug_4 = makeFY5253NearestEndMonthQuarter(
+            1, startingMonth=8, weekday=WeekDay.SAT,
+            qtr_with_extra_week=4)
+        offset_nem_thu_aug_4 = makeFY5253NearestEndMonthQuarter(
+            1, startingMonth=8, weekday=WeekDay.THU,
+            qtr_with_extra_week=4)
         offset_n = FY5253(weekday=WeekDay.TUE, startingMonth=12,
-                        variation="nearest", qtr_with_extra_week=4)
+                          variation="nearest", qtr_with_extra_week=4)

         tests = [
-            #From Wikipedia
+            # From Wikipedia
             (offset_nem_sat_aug_4, datetime(2006, 9, 2), True),
             (offset_nem_sat_aug_4, datetime(2007, 9, 1), True),
             (offset_nem_sat_aug_4, datetime(2008, 8, 30), True),
@@ -2837,11 +3115,12 @@ def test_onOffset(self):
             (offset_nem_sat_aug_4, datetime(2011, 8, 26), False),
             (offset_nem_sat_aug_4, datetime(2019, 8, 30), False),

-            #From Micron, see: http://google.brand.edgar-online.com/?sym=MU&formtypeID=7
+            # From Micron, see:
+            # http://google.brand.edgar-online.com/?sym=MU&formtypeID=7
             (offset_nem_thu_aug_4, datetime(2012, 8, 30), True),
             (offset_nem_thu_aug_4, datetime(2011, 9, 1), True),

-            #See: http://google.brand.edgar-online.com/?sym=MU&formtypeID=13
+            # See: http://google.brand.edgar-online.com/?sym=MU&formtypeID=13
             (offset_nem_thu_aug_4, datetime(2013, 5, 30), True),
             (offset_nem_thu_aug_4, datetime(2013, 2, 28), True),
             (offset_nem_thu_aug_4, datetime(2012, 11, 29), True),
@@ -2854,13 +3133,17 @@ def test_onOffset(self):
             (offset_n, datetime(2013, 1, 2), False)
         ]

-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

     def test_offset(self):
-        offset = makeFY5253NearestEndMonthQuarter(1, startingMonth=8, weekday=WeekDay.THU, qtr_with_extra_week=4)
+        offset = makeFY5253NearestEndMonthQuarter(1, startingMonth=8,
+                                                  weekday=WeekDay.THU,
+                                                  qtr_with_extra_week=4)

-        MU = [datetime(2012, 5, 31), datetime(2012, 8, 30), datetime(2012, 11, 29), datetime(2013, 2, 28), datetime(2013, 5, 30)]
+        MU = [datetime(2012, 5, 31), datetime(2012, 8, 30), datetime(2012, 11,
+                                                                     29),
+              datetime(2013, 2, 28), datetime(2013, 5, 30)]

         date = MU[0] + relativedelta(days=-1)
         for expected in MU:
@@ -2870,17 +3153,20 @@ def test_offset(self):
         assertEq(offset, datetime(2012, 5, 31), datetime(2012, 8, 30))
         assertEq(offset, datetime(2012, 5, 30), datetime(2012, 5, 31))

-        offset2 = FY5253Quarter(weekday=5, startingMonth=12,
-                               variation="last", qtr_with_extra_week=4)
+        offset2 = FY5253Quarter(weekday=5, startingMonth=12, variation="last",
+                                qtr_with_extra_week=4)

-        assertEq(offset2, datetime(2013,1,15), datetime(2013, 3, 30))
+        assertEq(offset2, datetime(2013, 1, 15), datetime(2013, 3, 30))

-class TestQuarterBegin(Base):

+class TestQuarterBegin(Base):
     def test_repr(self):
-        self.assertEqual(repr(QuarterBegin()), "<QuarterBegin: startingMonth=3>")
-        self.assertEqual(repr(QuarterBegin(startingMonth=3)), "<QuarterBegin: startingMonth=3>")
-        self.assertEqual(repr(QuarterBegin(startingMonth=1)),"<QuarterBegin: startingMonth=1>")
+        self.assertEqual(repr(QuarterBegin()),
+                         "<QuarterBegin: startingMonth=3>")
+        self.assertEqual(repr(QuarterBegin(startingMonth=3)),
+                         "<QuarterBegin: startingMonth=3>")
+        self.assertEqual(repr(QuarterBegin(startingMonth=1)),
+                         "<QuarterBegin: startingMonth=1>")

     def test_isAnchored(self):
         self.assertTrue(QuarterBegin(startingMonth=1).isAnchored())
@@ -2955,8 +3241,10 @@ class TestQuarterEnd(Base):

     def test_repr(self):
         self.assertEqual(repr(QuarterEnd()), "<QuarterEnd: startingMonth=3>")
-        self.assertEqual(repr(QuarterEnd(startingMonth=3)), "<QuarterEnd: startingMonth=3>")
-        self.assertEqual(repr(QuarterEnd(startingMonth=1)), "<QuarterEnd: startingMonth=1>")
+        self.assertEqual(repr(QuarterEnd(startingMonth=3)),
+                         "<QuarterEnd: startingMonth=3>")
+        self.assertEqual(repr(QuarterEnd(startingMonth=1)),
+                         "<QuarterEnd: startingMonth=1>")

     def test_isAnchored(self):
         self.assertTrue(QuarterEnd(startingMonth=1).isAnchored())
@@ -3027,65 +3315,63 @@ def test_offset(self):

     def test_onOffset(self):

         tests = [(QuarterEnd(1, startingMonth=1), datetime(2008, 1, 31), True),
-                 (QuarterEnd(
-                     1, startingMonth=1), datetime(2007, 12, 31), False),
-                 (QuarterEnd(
-                     1, startingMonth=1), datetime(2008, 2, 29), False),
-                 (QuarterEnd(
-                     1, startingMonth=1), datetime(2007, 3, 30), False),
-                 (QuarterEnd(
-                     1, startingMonth=1), datetime(2007, 3, 31), False),
+                 (QuarterEnd(1, startingMonth=1), datetime(2007, 12, 31),
+                  False),
+                 (QuarterEnd(1, startingMonth=1), datetime(2008, 2, 29),
+                  False),
+                 (QuarterEnd(1, startingMonth=1), datetime(2007, 3, 30),
+                  False),
+                 (QuarterEnd(1, startingMonth=1), datetime(2007, 3, 31),
+                  False),
                 (QuarterEnd(1, startingMonth=1), datetime(2008, 4, 30), True),
-                 (QuarterEnd(
-                     1, startingMonth=1), datetime(2008, 5, 30), False),
-                 (QuarterEnd(
-                     1, startingMonth=1), datetime(2008, 5, 31), False),
-                 (QuarterEnd(
-                     1, startingMonth=1), datetime(2007, 6, 29), False),
-                 (QuarterEnd(
-                     1, startingMonth=1), datetime(2007, 6, 30), False),
-
-                 (QuarterEnd(
-                     1, startingMonth=2), datetime(2008, 1, 31), False),
-                 (QuarterEnd(
-                     1, startingMonth=2), datetime(2007, 12, 31), False),
+                 (QuarterEnd(1, startingMonth=1), datetime(2008, 5, 30),
+                  False),
+                 (QuarterEnd(1, startingMonth=1), datetime(2008, 5, 31),
+                  False),
+                 (QuarterEnd(1, startingMonth=1), datetime(2007, 6, 29),
+                  False),
+                 (QuarterEnd(1, startingMonth=1), datetime(2007, 6, 30),
+                  False),
+                 (QuarterEnd(1, startingMonth=2), datetime(2008, 1, 31),
+                  False),
+                 (QuarterEnd(1, startingMonth=2), datetime(2007, 12, 31),
+                  False),
                 (QuarterEnd(1, startingMonth=2), datetime(2008, 2, 29), True),
-                 (QuarterEnd(
-                     1, startingMonth=2), datetime(2007, 3, 30), False),
-                 (QuarterEnd(
-                     1, startingMonth=2), datetime(2007, 3, 31), False),
-                 (QuarterEnd(
-                     1, startingMonth=2), datetime(2008, 4, 30), False),
-                 (QuarterEnd(
-                     1, startingMonth=2), datetime(2008, 5, 30), False),
+                 (QuarterEnd(1, startingMonth=2), datetime(2007, 3, 30),
+                  False),
+                 (QuarterEnd(1, startingMonth=2), datetime(2007, 3, 31),
+                  False),
+                 (QuarterEnd(1, startingMonth=2), datetime(2008, 4, 30),
+                  False),
+                 (QuarterEnd(1, startingMonth=2), datetime(2008, 5, 30),
+                  False),
                 (QuarterEnd(1, startingMonth=2), datetime(2008, 5, 31), True),
-                 (QuarterEnd(
-                     1, startingMonth=2), datetime(2007, 6, 29), False),
-                 (QuarterEnd(
-                     1, startingMonth=2), datetime(2007, 6, 30), False),
-
-                 (QuarterEnd(
-                     1, startingMonth=3), datetime(2008, 1, 31), False),
-                 (QuarterEnd(
-                     1, startingMonth=3), datetime(2007, 12, 31), True),
-                 (QuarterEnd(
-                     1, startingMonth=3), datetime(2008, 2, 29), False),
-                 (QuarterEnd(
-                     1, startingMonth=3), datetime(2007, 3, 30), False),
+                 (QuarterEnd(1, startingMonth=2), datetime(2007, 6, 29),
+                  False),
+                 (QuarterEnd(1, startingMonth=2), datetime(2007, 6, 30),
+                  False),
+                 (QuarterEnd(1, startingMonth=3), datetime(2008, 1, 31),
+                  False),
+                 (QuarterEnd(1, startingMonth=3), datetime(2007, 12, 31),
+                  True),
+                 (QuarterEnd(1, startingMonth=3), datetime(2008, 2, 29),
+                  False),
+                 (QuarterEnd(1, startingMonth=3), datetime(2007, 3, 30),
+                  False),
                 (QuarterEnd(1, startingMonth=3), datetime(2007, 3, 31), True),
-                 (QuarterEnd(
-                     1, startingMonth=3), datetime(2008, 4, 30), False),
-                 (QuarterEnd(
-                     1, startingMonth=3), datetime(2008, 5, 30), False),
-                 (QuarterEnd(
-                     1, startingMonth=3), datetime(2008, 5, 31), False),
-                 (QuarterEnd(
-                     1, startingMonth=3), datetime(2007, 6, 29), False),
-                 (QuarterEnd(1, startingMonth=3), datetime(2007, 6, 30), True),
-                 ]
-
-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+                 (QuarterEnd(1, startingMonth=3), datetime(2008, 4, 30),
+                  False),
+                 (QuarterEnd(1, startingMonth=3), datetime(2008, 5, 30),
+                  False),
+                 (QuarterEnd(1, startingMonth=3), datetime(2008, 5, 31),
+                  False),
+                 (QuarterEnd(1, startingMonth=3), datetime(2007, 6, 29),
+                  False),
+                 (QuarterEnd(1, startingMonth=3), datetime(2007, 6, 30),
+                  True), ]
+
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

 class TestBYearBegin(Base):
@@ -3105,9 +3391,7 @@ def test_offset(self):
                        datetime(2011, 1, 1): datetime(2011, 1, 3),
                        datetime(2011, 1, 3): datetime(2012, 1, 2),
                        datetime(2005, 12, 30): datetime(2006, 1, 2),
-                       datetime(2005, 12, 31): datetime(2006, 1, 2)
-                       }
-                      ))
+                       datetime(2005, 12, 31): datetime(2006, 1, 2)}))

         tests.append((BYearBegin(0),
                       {datetime(2008, 1, 1): datetime(2008, 1, 1),
@@ -3225,12 +3509,11 @@ def test_onOffset(self):
             (YearBegin(), datetime(2006, 1, 2), False),
         ]

-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

 class TestBYearEndLagged(Base):
-
     def test_bad_month_fail(self):
         self.assertRaises(Exception, BYearEnd, month=13)
         self.assertRaises(Exception, BYearEnd, month=0)
@@ -3240,13 +3523,11 @@ def test_offset(self):

         tests.append((BYearEnd(month=6),
                       {datetime(2008, 1, 1): datetime(2008, 6, 30),
-                       datetime(2007, 6, 30): datetime(2008, 6, 30)},
-                      ))
+                       datetime(2007, 6, 30): datetime(2008, 6, 30)}, ))

         tests.append((BYearEnd(n=-1, month=6),
                       {datetime(2008, 1, 1): datetime(2007, 6, 29),
-                       datetime(2007, 6, 30): datetime(2007, 6, 29)},
-                      ))
+                       datetime(2007, 6, 30): datetime(2007, 6, 29)}, ))

         for offset, cases in tests:
             for base, expected in compat.iteritems(cases):
@@ -3266,8 +3547,8 @@ def test_onOffset(self):
             (BYearEnd(month=6), datetime(2007, 6, 30), False),
         ]

-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

 class TestBYearEnd(Base):
@@ -3315,8 +3596,8 @@ def test_onOffset(self):
             (BYearEnd(), datetime(2006, 12, 29), True),
         ]

-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

 class TestYearEnd(Base):
@@ -3367,12 +3648,11 @@ def test_onOffset(self):
             (YearEnd(), datetime(2006, 12, 29), False),
         ]

-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

 class TestYearEndDiffMonth(Base):
-
     def test_offset(self):
         tests = []
@@ -3416,8 +3696,8 @@ def test_onOffset(self):
             (YearEnd(month=3), datetime(2006, 3, 29), False),
         ]

-        for offset, date, expected in tests:
-            assertOnOffset(offset, date, expected)
+        for offset, dt, expected in tests:
+            assertOnOffset(offset, dt, expected)

 def assertEq(offset, base, expected):
@@ -3431,7 +3711,8 @@ def assertEq(offset, base, expected):
     except AssertionError:
         raise AssertionError("\nExpected: %s\nActual: %s\nFor Offset: %s)"
                              "\nAt Date: %s" %
-                            (expected, actual, offset, base))
+                             (expected, actual, offset, base))
+

 def test_Easter():
     assertEq(Easter(), datetime(2010, 1, 1), datetime(2010, 4, 4))
@@ -3481,8 +3762,10 @@ def test_Hour(self):

     def test_Minute(self):
         assertEq(Minute(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 1))
         assertEq(Minute(-1), datetime(2010, 1, 1, 0, 1), datetime(2010, 1, 1))
-        assertEq(2 * Minute(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 2))
-        assertEq(-1 * Minute(), datetime(2010, 1, 1, 0, 1), datetime(2010, 1, 1))
+        assertEq(2 * Minute(), datetime(2010, 1, 1),
+                 datetime(2010, 1, 1, 0, 2))
+        assertEq(-1 * Minute(), datetime(2010, 1, 1, 0, 1),
+                 datetime(2010, 1, 1))

         self.assertEqual(Minute(3) + Minute(2), Minute(5))
         self.assertEqual(Minute(3) - Minute(2), Minute())
@@ -3490,33 +3773,46 @@ def test_Minute(self):

     def test_Second(self):
         assertEq(Second(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 1))
-        assertEq(Second(-1), datetime(2010, 1, 1, 0, 0, 1), datetime(2010, 1, 1))
-        assertEq(2 * Second(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 2))
-        assertEq(
-            -1 * Second(), datetime(2010, 1, 1, 0, 0, 1), datetime(2010, 1, 1))
+        assertEq(Second(-1), datetime(2010, 1, 1,
+                                      0, 0, 1), datetime(2010, 1, 1))
+        assertEq(2 * Second(), datetime(2010, 1, 1),
+                 datetime(2010, 1, 1, 0, 0, 2))
+        assertEq(-1 * Second(), datetime(2010, 1, 1, 0, 0, 1),
+                 datetime(2010, 1, 1))

         self.assertEqual(Second(3) + Second(2), Second(5))
         self.assertEqual(Second(3) - Second(2), Second())

     def test_Millisecond(self):
-        assertEq(Milli(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 0, 1000))
-        assertEq(Milli(-1), datetime(2010, 1, 1, 0, 0, 0, 1000), datetime(2010, 1, 1))
-        assertEq(Milli(2), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 0, 2000))
-        assertEq(2 * Milli(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 0, 2000))
-        assertEq(-1 * Milli(), datetime(2010, 1, 1, 0, 0, 0, 1000), datetime(2010, 1, 1))
+        assertEq(Milli(), datetime(2010, 1, 1),
+                 datetime(2010, 1, 1, 0, 0, 0, 1000))
+        assertEq(Milli(-1), datetime(2010, 1, 1, 0,
+                                     0, 0, 1000), datetime(2010, 1, 1))
+        assertEq(Milli(2), datetime(2010, 1, 1),
+                 datetime(2010, 1, 1, 0, 0, 0, 2000))
+        assertEq(2 * Milli(), datetime(2010, 1, 1),
+                 datetime(2010, 1, 1, 0, 0, 0, 2000))
+        assertEq(-1 * Milli(), datetime(2010, 1, 1, 0, 0, 0, 1000),
+                 datetime(2010, 1, 1))

         self.assertEqual(Milli(3) + Milli(2), Milli(5))
         self.assertEqual(Milli(3) - Milli(2), Milli())

     def test_MillisecondTimestampArithmetic(self):
-        assertEq(Milli(), Timestamp('2010-01-01'), Timestamp('2010-01-01 00:00:00.001'))
-        assertEq(Milli(-1), Timestamp('2010-01-01 00:00:00.001'), Timestamp('2010-01-01'))
+        assertEq(Milli(), Timestamp('2010-01-01'),
+                 Timestamp('2010-01-01 00:00:00.001'))
+        assertEq(Milli(-1), Timestamp('2010-01-01 00:00:00.001'),
+                 Timestamp('2010-01-01'))

     def test_Microsecond(self):
-        assertEq(Micro(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 0, 1))
-        assertEq(Micro(-1), datetime(2010, 1, 1, 0, 0, 0, 1), datetime(2010, 1, 1))
-        assertEq(2 * Micro(), datetime(2010, 1, 1), datetime(2010, 1, 1, 0, 0, 0, 2))
-        assertEq(-1 * Micro(), datetime(2010, 1, 1, 0, 0, 0, 1), datetime(2010, 1, 1))
+        assertEq(Micro(), datetime(2010, 1, 1),
+                 datetime(2010, 1, 1, 0, 0, 0, 1))
+        assertEq(Micro(-1), datetime(2010, 1, 1,
+                                     0, 0, 0, 1), datetime(2010, 1, 1))
+        assertEq(2 * Micro(), datetime(2010, 1, 1),
+                 datetime(2010, 1, 1, 0, 0, 0, 2))
+        assertEq(-1 * Micro(), datetime(2010, 1, 1, 0, 0, 0, 1),
+                 datetime(2010, 1, 1))

         self.assertEqual(Micro(3) + Micro(2), Micro(5))
         self.assertEqual(Micro(3) - Micro(2), Micro())
@@ -3602,32 +3898,43 @@ def test_get_offset_name(self):
         self.assertEqual(Week(weekday=3).freqstr, 'W-THU')
         self.assertEqual(Week(weekday=4).freqstr, 'W-FRI')

-        self.assertEqual(LastWeekOfMonth(weekday=WeekDay.SUN).freqstr, "LWOM-SUN")
-        self.assertEqual(makeFY5253LastOfMonthQuarter(weekday=1, startingMonth=3, qtr_with_extra_week=4).freqstr,
-                         "REQ-L-MAR-TUE-4")
-        self.assertEqual(makeFY5253NearestEndMonthQuarter(weekday=1, startingMonth=3, qtr_with_extra_week=3).freqstr,
-                         "REQ-N-MAR-TUE-3")
+        self.assertEqual(LastWeekOfMonth(
+            weekday=WeekDay.SUN).freqstr, "LWOM-SUN")
+        self.assertEqual(
+            makeFY5253LastOfMonthQuarter(weekday=1, startingMonth=3,
+                                         qtr_with_extra_week=4).freqstr,
+            "REQ-L-MAR-TUE-4")
+        self.assertEqual(
+            makeFY5253NearestEndMonthQuarter(weekday=1, startingMonth=3,
+                                             qtr_with_extra_week=3).freqstr,
+            "REQ-N-MAR-TUE-3")
+

 def test_get_offset():
     assertRaisesRegexp(ValueError, "rule.*GIBBERISH", get_offset, 'gibberish')
     assertRaisesRegexp(ValueError, "rule.*QS-JAN-B", get_offset, 'QS-JAN-B')
     pairs = [
-        ('B', BDay()), ('b', BDay()), ('bm', BMonthEnd()),
-        ('Bm', BMonthEnd()), ('W-MON', Week(weekday=0)),
-        ('W-TUE', Week(weekday=1)), ('W-WED', Week(weekday=2)),
-        ('W-THU', Week(weekday=3)), ('W-FRI', Week(weekday=4)),
-        ("RE-N-DEC-MON", makeFY5253NearestEndMonth(weekday=0, startingMonth=12)),
-        ("RE-L-DEC-TUE", makeFY5253LastOfMonth(weekday=1, startingMonth=12)),
-        ("REQ-L-MAR-TUE-4", makeFY5253LastOfMonthQuarter(weekday=1, startingMonth=3, qtr_with_extra_week=4)),
-        ("REQ-L-DEC-MON-3", makeFY5253LastOfMonthQuarter(weekday=0, startingMonth=12, qtr_with_extra_week=3)),
-        ("REQ-N-DEC-MON-3", makeFY5253NearestEndMonthQuarter(weekday=0, startingMonth=12, qtr_with_extra_week=3)),
-    ]
+        ('B', BDay()), ('b', BDay()), ('bm', BMonthEnd()),
+        ('Bm', BMonthEnd()), ('W-MON', Week(weekday=0)),
+        ('W-TUE', Week(weekday=1)), ('W-WED', Week(weekday=2)),
+        ('W-THU', Week(weekday=3)), ('W-FRI', Week(weekday=4)),
+        ("RE-N-DEC-MON", makeFY5253NearestEndMonth(weekday=0,
+                                                   startingMonth=12)),
+        ("RE-L-DEC-TUE", makeFY5253LastOfMonth(weekday=1, startingMonth=12)),
+        ("REQ-L-MAR-TUE-4", makeFY5253LastOfMonthQuarter(
+            weekday=1, startingMonth=3, qtr_with_extra_week=4)),
+        ("REQ-L-DEC-MON-3", makeFY5253LastOfMonthQuarter(
+            weekday=0, startingMonth=12, qtr_with_extra_week=3)),
+        ("REQ-N-DEC-MON-3", makeFY5253NearestEndMonthQuarter(
+            weekday=0, startingMonth=12, qtr_with_extra_week=3)),
+    ]
     for name, expected in pairs:
         offset = get_offset(name)
         assert offset == expected, ("Expected %r to yield %r (actual: %r)" %
                                     (name, expected, offset))

+
 def test_get_offset_legacy():
     pairs = [('w@Sat', Week(weekday=5))]
     for name, expected in pairs:
@@ -3636,8 +3943,8 @@ def test_get_offset_legacy():
         assert offset == expected, ("Expected %r to yield %r (actual: %r)" %
                                     (name, expected, offset))

-class TestParseTimeString(tm.TestCase):

+class TestParseTimeString(tm.TestCase):
     def test_parse_time_string(self):
         (date, parsed, reso) = parse_time_string('4Q1984')
         (date_lower, parsed_lower, reso_lower) = parse_time_string('4q1984')
@@ -3647,10 +3954,7 @@ def test_parse_time_string(self):

     def test_parse_time_quarter_w_dash(self):
         # https://github.com/pydata/pandas/issue/9688
-        pairs = [
-            ('1988-Q2', '1988Q2'),
-            ('2Q-1988', '2Q1988'),
-        ]
+        pairs = [('1988-Q2', '1988Q2'), ('2Q-1988', '2Q1988'), ]

         for dashed, normal in pairs:
             (date_dash, parsed_dash, reso_dash) = parse_time_string(dashed)
@@ -3692,11 +3996,10 @@ def test_quarterly_dont_normalize():

     for klass in offsets:
         result = date + klass()
-        assert(result.time() == date.time())
+        assert (result.time() == date.time())

 class TestOffsetAliases(tm.TestCase):
-
     def setUp(self):
         _offset_map.clear()
@@ -3737,33 +4040,34 @@ def test_rule_code(self):
             self.assertEqual(stride, 3)
             self.assertEqual(k, _get_freq_str(code))

+
 def test_apply_ticks():
     result = offsets.Hour(3).apply(offsets.Hour(4))
     exp = offsets.Hour(7)
-    assert(result == exp)
+    assert (result == exp)

 def test_delta_to_tick():
     delta = timedelta(3)
     tick = offsets._delta_to_tick(delta)
-    assert(tick == offsets.Day(3))
+    assert (tick == offsets.Day(3))

 def test_dateoffset_misc():
     oset = offsets.DateOffset(months=2, days=4)
     # it works
-    result = oset.freqstr
+    oset.freqstr

-    assert(not offsets.DateOffset(months=2) == 2)
+    assert (not offsets.DateOffset(months=2) == 2)

 def test_freq_offsets():
     off = BDay(1, offset=timedelta(0, 1800))
-    assert(off.freqstr == 'B+30Min')
+    assert (off.freqstr == 'B+30Min')

     off = BDay(1, offset=timedelta(0, -1800))
-    assert(off.freqstr == 'B-30Min')
+    assert (off.freqstr == 'B-30Min')

 def get_all_subclasses(cls):
@@ -3774,6 +4078,7 @@ def get_all_subclasses(cls):
         ret | get_all_subclasses(this_subclass)
     return ret

+
 class TestCaching(tm.TestCase):

     # as of GH 6479 (in 0.14.0), offset caching is turned off
@@ -3791,7 +4096,8 @@ def run_X_index_creation(self, cls):

         self.assertTrue(inst1._should_cache(), cls)

-        DatetimeIndex(start=datetime(2013,1,31), end=datetime(2013,3,31), freq=inst1, normalize=True)
+        DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 31),
+                      freq=inst1, normalize=True)
         self.assertTrue(cls() in _daterange_cache, cls)

     def test_should_cache_month_end(self):
@@ -3806,35 +4112,40 @@ def test_should_cache_week_month(self):
     def test_all_cacheableoffsets(self):
         for subclass in get_all_subclasses(CacheableOffset):
             if subclass.__name__[0] == "_" \
-                or subclass in TestCaching.no_simple_ctr:
+                    or subclass in TestCaching.no_simple_ctr:
                 continue
             self.run_X_index_creation(subclass)

     def test_month_end_index_creation(self):
-        DatetimeIndex(start=datetime(2013,1,31), end=datetime(2013,3,31), freq=MonthEnd(), normalize=True)
+        DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 31),
+                      freq=MonthEnd(), normalize=True)
         self.assertFalse(MonthEnd() in _daterange_cache)

     def test_bmonth_end_index_creation(self):
-        DatetimeIndex(start=datetime(2013,1,31), end=datetime(2013,3,29), freq=BusinessMonthEnd(), normalize=True)
+        DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 29),
+                      freq=BusinessMonthEnd(), normalize=True)
        self.assertFalse(BusinessMonthEnd() in _daterange_cache)

    def
test_week_of_month_index_creation(self): inst1 = WeekOfMonth(weekday=1, week=2) - DatetimeIndex(start=datetime(2013,1,31), end=datetime(2013,3,29), freq=inst1, normalize=True) + DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 29), + freq=inst1, normalize=True) inst2 = WeekOfMonth(weekday=1, week=2) self.assertFalse(inst2 in _daterange_cache) + class TestReprNames(tm.TestCase): def test_str_for_named_is_name(self): # look at all the amazing combinations! month_prefixes = ['A', 'AS', 'BA', 'BAS', 'Q', 'BQ', 'BQS', 'QS'] - names = [prefix + '-' + month for prefix in month_prefixes - for month in ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', - 'JUL', 'AUG', 'SEP', 'OCT', 'NOV', 'DEC']] + names = [prefix + '-' + month + for prefix in month_prefixes + for month in ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', + 'AUG', 'SEP', 'OCT', 'NOV', 'DEC']] days = ['MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT', 'SUN'] names += ['W-' + day for day in days] - names += ['WOM-' + week + day for week in ('1', '2', '3', '4') - for day in days] + names += ['WOM-' + week + day + for week in ('1', '2', '3', '4') for day in days] _offset_map.clear() for name in names: offset = get_offset(name) @@ -3857,23 +4168,19 @@ class TestDST(tm.TestCase): # test both basic names and dateutil timezones timezone_utc_offsets = { - 'US/Eastern': dict( - utc_offset_daylight=-4, - utc_offset_standard=-5, - ), - 'dateutil/US/Pacific': dict( - utc_offset_daylight=-7, - utc_offset_standard=-8, - ) - } + 'US/Eastern': dict(utc_offset_daylight=-4, + utc_offset_standard=-5, ), + 'dateutil/US/Pacific': dict(utc_offset_daylight=-7, + utc_offset_standard=-8, ) + } valid_date_offsets_singular = [ 'weekday', 'day', 'hour', 'minute', 'second', 'microsecond' - ] + ] valid_date_offsets_plural = [ 'weeks', 'days', 'hours', 'minutes', 'seconds', 'milliseconds', 'microseconds' - ] + ] def _test_all_offsets(self, n, **kwds): valid_offsets = self.valid_date_offsets_plural if n > 1 \ @@ -3890,35 +4197,29 @@ def 
_test_offset(self, offset_name, offset_n, tstart, expected_utc_offset): if offset_name == 'weeks': # dates should match - self.assertTrue( - t.date() == - timedelta(days=7 * offset.kwds['weeks']) + tstart.date() - ) + self.assertTrue(t.date() == timedelta(days=7 * offset.kwds[ + 'weeks']) + tstart.date()) # expect the same day of week, hour of day, minute, second, ... - self.assertTrue( - t.dayofweek == tstart.dayofweek and - t.hour == tstart.hour and - t.minute == tstart.minute and - t.second == tstart.second - ) + self.assertTrue(t.dayofweek == tstart.dayofweek and t.hour == + tstart.hour and t.minute == tstart.minute and + t.second == tstart.second) elif offset_name == 'days': # dates should match - self.assertTrue(timedelta(offset.kwds['days']) + tstart.date() == t.date()) + self.assertTrue(timedelta(offset.kwds['days']) + tstart.date() == + t.date()) # expect the same hour of day, minute, second, ... - self.assertTrue( - t.hour == tstart.hour and - t.minute == tstart.minute and - t.second == tstart.second - ) + self.assertTrue(t.hour == tstart.hour and t.minute == tstart.minute + and t.second == tstart.second) elif offset_name in self.valid_date_offsets_singular: # expect the singular offset value to match between tstart and t - datepart_offset = getattr(t, offset_name if offset_name != 'weekday' else 'dayofweek') + datepart_offset = getattr(t, offset_name + if offset_name != 'weekday' else + 'dayofweek') self.assertTrue(datepart_offset == offset.kwds[offset_name]) else: # the offset should be the same as if it was done in UTC - self.assertTrue( - t == (tstart.tz_convert('UTC') + offset).tz_convert('US/Pacific') - ) + self.assertTrue(t == (tstart.tz_convert('UTC') + offset + ).tz_convert('US/Pacific')) def _make_timestamp(self, string, hrs_offset, tz): offset_string = '{hrs:02d}00'.format(hrs=hrs_offset) if hrs_offset >= 0 else \
utc_offsets['utc_offset_standard'] self._test_all_offsets( - n=3, - tstart=self._make_timestamp(self.ts_pre_fallback, hrs_pre, tz), - expected_utc_offset=hrs_post - ) + n=3, tstart=self._make_timestamp(self.ts_pre_fallback, + hrs_pre, tz), + expected_utc_offset=hrs_post) def test_springforward_plural(self): """test moving from standard to daylight savings""" @@ -3942,31 +4242,24 @@ def test_springforward_plural(self): hrs_pre = utc_offsets['utc_offset_standard'] hrs_post = utc_offsets['utc_offset_daylight'] self._test_all_offsets( - n=3, - tstart=self._make_timestamp(self.ts_pre_springfwd, hrs_pre, tz), - expected_utc_offset=hrs_post - ) + n=3, tstart=self._make_timestamp(self.ts_pre_springfwd, + hrs_pre, tz), + expected_utc_offset=hrs_post) def test_fallback_singular(self): - # in the case of signular offsets, we dont neccesarily know which utc offset - # the new Timestamp will wind up in (the tz for 1 month may be different from 1 second) - # so we don't specify an expected_utc_offset + # in the case of singular offsets, we don't necessarily know which utc + # offset the new Timestamp will wind up in (the tz for 1 month may be + # different from 1 second) so we don't specify an expected_utc_offset for tz, utc_offsets in self.timezone_utc_offsets.items(): hrs_pre = utc_offsets['utc_offset_standard'] - self._test_all_offsets( - n=1, - tstart=self._make_timestamp(self.ts_pre_fallback, hrs_pre, tz), - expected_utc_offset=None - ) + self._test_all_offsets(n=1, tstart=self._make_timestamp( + self.ts_pre_fallback, hrs_pre, tz), expected_utc_offset=None) def test_springforward_singular(self): for tz, utc_offsets in self.timezone_utc_offsets.items(): hrs_pre = utc_offsets['utc_offset_standard'] - self._test_all_offsets( - n=1, - tstart=self._make_timestamp(self.ts_pre_springfwd, hrs_pre, tz), - expected_utc_offset=None - ) + self._test_all_offsets(n=1, tstart=self._make_timestamp( + self.ts_pre_springfwd, hrs_pre, tz), expected_utc_offset=None) def
test_all_offset_classes(self): tests = {MonthBegin: ['11/2/2012', '12/1/2012'], @@ -3984,14 +4277,14 @@ def test_all_offset_classes(self): QuarterEnd: ['11/2/2012', '12/31/2012'], BQuarterBegin: ['11/2/2012', '12/3/2012'], BQuarterEnd: ['11/2/2012', '12/31/2012'], - Day: ['11/4/2012', '11/4/2012 23:00'] - } + Day: ['11/4/2012', '11/4/2012 23:00']} for offset, test_values in iteritems(tests): first = Timestamp(test_values[0], tz='US/Eastern') + offset() second = Timestamp(test_values[1], tz='US/Eastern') self.assertEqual(first, second, str(offset)) + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py index d884766280da2..e37ffa3974729 100644 --- a/pandas/tseries/tests/test_period.py +++ b/pandas/tseries/tests/test_period.py @@ -26,14 +26,15 @@ from pandas import Series, DataFrame, _np_version_under1p9 from pandas import tslib -from pandas.util.testing import(assert_series_equal, assert_almost_equal, - assertRaisesRegexp) +from pandas.util.testing import (assert_series_equal, assert_almost_equal, + assertRaisesRegexp) import pandas.util.testing as tm from pandas import compat class TestPeriodProperties(tm.TestCase): "Test properties such as year, month, weekday, etc...." 
+ # def test_quarterly_negative_ordinals(self): @@ -132,8 +133,7 @@ def test_period_cons_mult(self): with tm.assertRaisesRegexp(ValueError, msg): Period('2011-01', freq='-3M') - msg = ('Frequency must be positive, because it' - ' represents span: 0M') + msg = ('Frequency must be positive, because it' ' represents span: 0M') with tm.assertRaisesRegexp(ValueError, msg): Period('2011-01', freq='0M') @@ -178,13 +178,15 @@ def test_timestamp_tz_arg_dateutil(self): from pandas.tslib import maybe_get_tz for case in ['dateutil/Europe/Brussels', 'dateutil/Asia/Tokyo', 'dateutil/US/Pacific']: - p = Period('1/1/2005', freq='M').to_timestamp(tz=maybe_get_tz(case)) + p = Period('1/1/2005', freq='M').to_timestamp( + tz=maybe_get_tz(case)) exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case) self.assertEqual(p, exp) self.assertEqual(p.tz, gettz(case.split('/', 1)[1])) self.assertEqual(p.tz, exp.tz) - p = Period('1/1/2005', freq='M').to_timestamp(freq='3H', tz=maybe_get_tz(case)) + p = Period('1/1/2005', + freq='M').to_timestamp(freq='3H', tz=maybe_get_tz(case)) exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case) self.assertEqual(p, exp) self.assertEqual(p.tz, gettz(case.split('/', 1)[1])) @@ -192,7 +194,8 @@ def test_timestamp_tz_arg_dateutil(self): def test_timestamp_tz_arg_dateutil_from_string(self): from pandas.tslib import _dateutil_gettz as gettz - p = Period('1/1/2005', freq='M').to_timestamp(tz='dateutil/Europe/Brussels') + p = Period('1/1/2005', + freq='M').to_timestamp(tz='dateutil/Europe/Brussels') self.assertEqual(p.tz, gettz('Europe/Brussels')) def test_timestamp_nat_tz(self): @@ -352,7 +355,6 @@ def test_period_constructor(self): self.assertRaises(ValueError, Period, '2007-1-1', freq='X') - def test_period_constructor_offsets(self): self.assertEqual(Period('1/1/2005', freq=offsets.MonthEnd()), Period('1/1/2005', freq='M')) @@ -374,16 +376,18 @@ def test_period_constructor_offsets(self): self.assertEqual(Period(year=2005, month=3, day=1, freq=offsets.Day()), 
Period(year=2005, month=3, day=1, freq='D')) - self.assertEqual(Period(year=2012, month=3, day=10, freq=offsets.BDay()), + self.assertEqual(Period(year=2012, month=3, day=10, + freq=offsets.BDay()), Period(year=2012, month=3, day=10, freq='B')) expected = Period('2005-03-01', freq='3D') - self.assertEqual(Period(year=2005, month=3, day=1, freq=offsets.Day(3)), - expected) + self.assertEqual(Period(year=2005, month=3, day=1, + freq=offsets.Day(3)), expected) self.assertEqual(Period(year=2005, month=3, day=1, freq='3D'), expected) - self.assertEqual(Period(year=2012, month=3, day=10, freq=offsets.BDay(3)), + self.assertEqual(Period(year=2012, month=3, day=10, + freq=offsets.BDay(3)), Period(year=2012, month=3, day=10, freq='3B')) self.assertEqual(Period(200701, freq=offsets.MonthEnd()), @@ -428,7 +432,6 @@ def test_period_constructor_offsets(self): self.assertRaises(ValueError, Period, '2007-1-1', freq='X') - def test_freq_str(self): i1 = Period('1982', freq='Min') self.assertEqual(i1.freq, offsets.Minute()) @@ -458,8 +461,8 @@ def test_microsecond_repr(self): def test_strftime(self): p = Period('2000-1-1 12:34:12', freq='S') res = p.strftime('%Y-%m-%d %H:%M:%S') - self.assertEqual(res, '2000-01-01 12:34:12') - tm.assertIsInstance(res, compat.text_type) # GH3363 + self.assertEqual(res, '2000-01-01 12:34:12') + tm.assertIsInstance(res, compat.text_type) # GH3363 def test_sub_delta(self): left, right = Period('2011', freq='A'), Period('2007', freq='A') @@ -484,8 +487,7 @@ def test_to_timestamp(self): self.assertEqual(end_ts, p.to_timestamp('D', how=a)) self.assertEqual(end_ts, p.to_timestamp('3D', how=a)) - from_lst = ['A', 'Q', 'M', 'W', 'B', - 'D', 'H', 'Min', 'S'] + from_lst = ['A', 'Q', 'M', 'W', 'B', 'D', 'H', 'Min', 'S'] def _ex(p): return Timestamp((p + 1).start_time.value - 1) @@ -515,7 +517,6 @@ def _ex(p): result = p.to_timestamp('2T', how='end') self.assertEqual(result, expected) - result = p.to_timestamp(how='end') expected = datetime(1985, 12, 31) 
self.assertEqual(result, expected) @@ -640,7 +641,8 @@ def test_properties_weekly(self): assert_equal(w_date.week, 1) assert_equal((w_date - 1).week, 52) assert_equal(w_date.days_in_month, 31) - assert_equal(Period(freq='W', year=2012, month=2, day=1).days_in_month, 29) + assert_equal(Period(freq='W', year=2012, + month=2, day=1).days_in_month, 29) def test_properties_weekly_legacy(self): # Test properties on Periods with daily frequency. @@ -668,7 +670,8 @@ def test_properties_daily(self): assert_equal(b_date.weekday, 0) assert_equal(b_date.dayofyear, 1) assert_equal(b_date.days_in_month, 31) - assert_equal(Period(freq='B', year=2012, month=2, day=1).days_in_month, 29) + assert_equal(Period(freq='B', year=2012, + month=2, day=1).days_in_month, 29) # d_date = Period(freq='D', year=2007, month=1, day=1) # @@ -717,8 +720,8 @@ def test_properties_minutely(self): def test_properties_secondly(self): # Test properties on Periods with secondly frequency. - s_date = Period(freq='Min', year=2007, month=1, day=1, - hour=0, minute=0, second=0) + s_date = Period(freq='Min', year=2007, month=1, day=1, hour=0, + minute=0, second=0) # assert_equal(s_date.year, 2007) assert_equal(s_date.quarter, 1) @@ -737,8 +740,8 @@ def test_properties_nat(self): p_nat = Period('NaT', freq='M') t_nat = pd.Timestamp('NaT') # confirm Period('NaT') work identical with Timestamp('NaT') - for f in ['year', 'month', 'day', 'hour', 'minute', 'second', - 'week', 'dayofyear', 'quarter', 'days_in_month']: + for f in ['year', 'month', 'day', 'hour', 'minute', 'second', 'week', + 'dayofyear', 'quarter', 'days_in_month']: self.assertTrue(np.isnan(getattr(p_nat, f))) self.assertTrue(np.isnan(getattr(t_nat, f))) @@ -801,11 +804,14 @@ def test_constructor_infer_freq(self): def test_asfreq_MS(self): initial = Period("2013") - self.assertEqual(initial.asfreq(freq="M", how="S"), Period('2013-01', 'M')) + self.assertEqual(initial.asfreq(freq="M", how="S"), + Period('2013-01', 'M')) self.assertRaises(ValueError, 
initial.asfreq, freq="MS", how="S") - tm.assertRaisesRegexp(ValueError, "Unknown freqstr: MS", pd.Period, '2013-01', 'MS') + tm.assertRaisesRegexp(ValueError, "Unknown freqstr: MS", pd.Period, + '2013-01', 'MS') self.assertTrue(_period_code_map.get("MS") is None) + def noWrap(item): return item @@ -842,16 +848,15 @@ def test_conv_annual(self): ival_A_to_B_end = Period(freq='B', year=2007, month=12, day=31) ival_A_to_D_start = Period(freq='D', year=2007, month=1, day=1) ival_A_to_D_end = Period(freq='D', year=2007, month=12, day=31) - ival_A_to_H_start = Period(freq='H', year=2007, month=1, day=1, - hour=0) + ival_A_to_H_start = Period(freq='H', year=2007, month=1, day=1, hour=0) ival_A_to_H_end = Period(freq='H', year=2007, month=12, day=31, hour=23) ival_A_to_T_start = Period(freq='Min', year=2007, month=1, day=1, hour=0, minute=0) ival_A_to_T_end = Period(freq='Min', year=2007, month=12, day=31, hour=23, minute=59) - ival_A_to_S_start = Period(freq='S', year=2007, month=1, day=1, - hour=0, minute=0, second=0) + ival_A_to_S_start = Period(freq='S', year=2007, month=1, day=1, hour=0, + minute=0, second=0) ival_A_to_S_end = Period(freq='S', year=2007, month=12, day=31, hour=23, minute=59, second=59) @@ -910,18 +915,16 @@ def test_conv_quarterly(self): ival_Q_to_B_end = Period(freq='B', year=2007, month=3, day=30) ival_Q_to_D_start = Period(freq='D', year=2007, month=1, day=1) ival_Q_to_D_end = Period(freq='D', year=2007, month=3, day=31) - ival_Q_to_H_start = Period(freq='H', year=2007, month=1, day=1, - hour=0) - ival_Q_to_H_end = Period(freq='H', year=2007, month=3, day=31, - hour=23) + ival_Q_to_H_start = Period(freq='H', year=2007, month=1, day=1, hour=0) + ival_Q_to_H_end = Period(freq='H', year=2007, month=3, day=31, hour=23) ival_Q_to_T_start = Period(freq='Min', year=2007, month=1, day=1, hour=0, minute=0) ival_Q_to_T_end = Period(freq='Min', year=2007, month=3, day=31, hour=23, minute=59) - ival_Q_to_S_start = Period(freq='S', year=2007, month=1, day=1, - 
hour=0, minute=0, second=0) - ival_Q_to_S_end = Period(freq='S', year=2007, month=3, day=31, - hour=23, minute=59, second=59) + ival_Q_to_S_start = Period(freq='S', year=2007, month=1, day=1, hour=0, + minute=0, second=0) + ival_Q_to_S_end = Period(freq='S', year=2007, month=3, day=31, hour=23, + minute=59, second=59) ival_QEJAN_to_D_start = Period(freq='D', year=2006, month=2, day=1) ival_QEJAN_to_D_end = Period(freq='D', year=2006, month=4, day=30) @@ -968,18 +971,16 @@ def test_conv_monthly(self): ival_M_to_B_end = Period(freq='B', year=2007, month=1, day=31) ival_M_to_D_start = Period(freq='D', year=2007, month=1, day=1) ival_M_to_D_end = Period(freq='D', year=2007, month=1, day=31) - ival_M_to_H_start = Period(freq='H', year=2007, month=1, day=1, - hour=0) - ival_M_to_H_end = Period(freq='H', year=2007, month=1, day=31, - hour=23) + ival_M_to_H_start = Period(freq='H', year=2007, month=1, day=1, hour=0) + ival_M_to_H_end = Period(freq='H', year=2007, month=1, day=31, hour=23) ival_M_to_T_start = Period(freq='Min', year=2007, month=1, day=1, hour=0, minute=0) ival_M_to_T_end = Period(freq='Min', year=2007, month=1, day=31, hour=23, minute=59) - ival_M_to_S_start = Period(freq='S', year=2007, month=1, day=1, - hour=0, minute=0, second=0) - ival_M_to_S_end = Period(freq='S', year=2007, month=1, day=31, - hour=23, minute=59, second=59) + ival_M_to_S_start = Period(freq='S', year=2007, month=1, day=1, hour=0, + minute=0, second=0) + ival_M_to_S_end = Period(freq='S', year=2007, month=1, day=31, hour=23, + minute=59, second=59) assert_equal(ival_M.asfreq('A'), ival_M_to_A) assert_equal(ival_M_end_of_year.asfreq('A'), ival_M_to_A) @@ -1041,11 +1042,9 @@ def test_conv_weekly(self): ival_W_to_A_end_of_year = Period(freq='A', year=2008) if Period(freq='D', year=2007, month=3, day=31).weekday == 6: - ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007, - quarter=1) + ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007, quarter=1) else: - 
ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007, - quarter=2) + ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007, quarter=2) if Period(freq='D', year=2007, month=1, day=31).weekday == 6: ival_W_to_M_end_of_month = Period(freq='M', year=2007, month=1) @@ -1056,28 +1055,24 @@ def test_conv_weekly(self): ival_W_to_B_end = Period(freq='B', year=2007, month=1, day=5) ival_W_to_D_start = Period(freq='D', year=2007, month=1, day=1) ival_W_to_D_end = Period(freq='D', year=2007, month=1, day=7) - ival_W_to_H_start = Period(freq='H', year=2007, month=1, day=1, - hour=0) - ival_W_to_H_end = Period(freq='H', year=2007, month=1, day=7, - hour=23) + ival_W_to_H_start = Period(freq='H', year=2007, month=1, day=1, hour=0) + ival_W_to_H_end = Period(freq='H', year=2007, month=1, day=7, hour=23) ival_W_to_T_start = Period(freq='Min', year=2007, month=1, day=1, hour=0, minute=0) ival_W_to_T_end = Period(freq='Min', year=2007, month=1, day=7, hour=23, minute=59) - ival_W_to_S_start = Period(freq='S', year=2007, month=1, day=1, - hour=0, minute=0, second=0) - ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7, - hour=23, minute=59, second=59) + ival_W_to_S_start = Period(freq='S', year=2007, month=1, day=1, hour=0, + minute=0, second=0) + ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7, hour=23, + minute=59, second=59) assert_equal(ival_W.asfreq('A'), ival_W_to_A) - assert_equal(ival_W_end_of_year.asfreq('A'), - ival_W_to_A_end_of_year) + assert_equal(ival_W_end_of_year.asfreq('A'), ival_W_to_A_end_of_year) assert_equal(ival_W.asfreq('Q'), ival_W_to_Q) assert_equal(ival_W_end_of_quarter.asfreq('Q'), ival_W_to_Q_end_of_quarter) assert_equal(ival_W.asfreq('M'), ival_W_to_M) - assert_equal(ival_W_end_of_month.asfreq('M'), - ival_W_to_M_end_of_month) + assert_equal(ival_W_end_of_month.asfreq('M'), ival_W_to_M_end_of_month) assert_equal(ival_W.asfreq('B', 'S'), ival_W_to_B_start) assert_equal(ival_W.asfreq('B', 'E'), ival_W_to_B_end) @@ -1148,7 +1143,8 
@@ def test_conv_weekly_legacy(self): with tm.assert_produces_warning(FutureWarning): ival_W_end_of_year = Period(freq='WK', year=2007, month=12, day=31) with tm.assert_produces_warning(FutureWarning): - ival_W_end_of_quarter = Period(freq='WK', year=2007, month=3, day=31) + ival_W_end_of_quarter = Period(freq='WK', year=2007, month=3, + day=31) with tm.assert_produces_warning(FutureWarning): ival_W_end_of_month = Period(freq='WK', year=2007, month=1, day=31) ival_W_to_A = Period(freq='A', year=2007) @@ -1161,11 +1157,9 @@ def test_conv_weekly_legacy(self): ival_W_to_A_end_of_year = Period(freq='A', year=2008) if Period(freq='D', year=2007, month=3, day=31).weekday == 6: - ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007, - quarter=1) + ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007, quarter=1) else: - ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007, - quarter=2) + ival_W_to_Q_end_of_quarter = Period(freq='Q', year=2007, quarter=2) if Period(freq='D', year=2007, month=1, day=31).weekday == 6: ival_W_to_M_end_of_month = Period(freq='M', year=2007, month=1) @@ -1176,28 +1170,24 @@ def test_conv_weekly_legacy(self): ival_W_to_B_end = Period(freq='B', year=2007, month=1, day=5) ival_W_to_D_start = Period(freq='D', year=2007, month=1, day=1) ival_W_to_D_end = Period(freq='D', year=2007, month=1, day=7) - ival_W_to_H_start = Period(freq='H', year=2007, month=1, day=1, - hour=0) - ival_W_to_H_end = Period(freq='H', year=2007, month=1, day=7, - hour=23) + ival_W_to_H_start = Period(freq='H', year=2007, month=1, day=1, hour=0) + ival_W_to_H_end = Period(freq='H', year=2007, month=1, day=7, hour=23) ival_W_to_T_start = Period(freq='Min', year=2007, month=1, day=1, hour=0, minute=0) ival_W_to_T_end = Period(freq='Min', year=2007, month=1, day=7, hour=23, minute=59) - ival_W_to_S_start = Period(freq='S', year=2007, month=1, day=1, - hour=0, minute=0, second=0) - ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7, - hour=23, minute=59, 
second=59) + ival_W_to_S_start = Period(freq='S', year=2007, month=1, day=1, hour=0, + minute=0, second=0) + ival_W_to_S_end = Period(freq='S', year=2007, month=1, day=7, hour=23, + minute=59, second=59) assert_equal(ival_W.asfreq('A'), ival_W_to_A) - assert_equal(ival_W_end_of_year.asfreq('A'), - ival_W_to_A_end_of_year) + assert_equal(ival_W_end_of_year.asfreq('A'), ival_W_to_A_end_of_year) assert_equal(ival_W.asfreq('Q'), ival_W_to_Q) assert_equal(ival_W_end_of_quarter.asfreq('Q'), ival_W_to_Q_end_of_quarter) assert_equal(ival_W.asfreq('M'), ival_W_to_M) - assert_equal(ival_W_end_of_month.asfreq('M'), - ival_W_to_M_end_of_month) + assert_equal(ival_W_end_of_month.asfreq('M'), ival_W_to_M_end_of_month) assert_equal(ival_W.asfreq('B', 'S'), ival_W_to_B_start) assert_equal(ival_W.asfreq('B', 'E'), ival_W_to_B_end) @@ -1244,18 +1234,16 @@ def test_conv_business(self): ival_B_to_M = Period(freq='M', year=2007, month=1) ival_B_to_W = Period(freq='W', year=2007, month=1, day=7) ival_B_to_D = Period(freq='D', year=2007, month=1, day=1) - ival_B_to_H_start = Period(freq='H', year=2007, month=1, day=1, - hour=0) - ival_B_to_H_end = Period(freq='H', year=2007, month=1, day=1, - hour=23) + ival_B_to_H_start = Period(freq='H', year=2007, month=1, day=1, hour=0) + ival_B_to_H_end = Period(freq='H', year=2007, month=1, day=1, hour=23) ival_B_to_T_start = Period(freq='Min', year=2007, month=1, day=1, hour=0, minute=0) ival_B_to_T_end = Period(freq='Min', year=2007, month=1, day=1, hour=23, minute=59) - ival_B_to_S_start = Period(freq='S', year=2007, month=1, day=1, - hour=0, minute=0, second=0) - ival_B_to_S_end = Period(freq='S', year=2007, month=1, day=1, - hour=23, minute=59, second=59) + ival_B_to_S_start = Period(freq='S', year=2007, month=1, day=1, hour=0, + minute=0, second=0) + ival_B_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=23, + minute=59, second=59) assert_equal(ival_B.asfreq('A'), ival_B_to_A) assert_equal(ival_B_end_of_year.asfreq('A'), 
ival_B_to_A) @@ -1289,7 +1277,9 @@ def test_conv_daily(self): ival_D_friday = Period(freq='D', year=2007, month=1, day=5) ival_D_saturday = Period(freq='D', year=2007, month=1, day=6) ival_D_sunday = Period(freq='D', year=2007, month=1, day=7) - ival_D_monday = Period(freq='D', year=2007, month=1, day=8) + + # TODO: unused? + # ival_D_monday = Period(freq='D', year=2007, month=1, day=8) ival_B_friday = Period(freq='B', year=2007, month=1, day=5) ival_B_monday = Period(freq='B', year=2007, month=1, day=8) @@ -1307,27 +1297,22 @@ def test_conv_daily(self): ival_D_to_M = Period(freq='M', year=2007, month=1) ival_D_to_W = Period(freq='W', year=2007, month=1, day=7) - ival_D_to_H_start = Period(freq='H', year=2007, month=1, day=1, - hour=0) - ival_D_to_H_end = Period(freq='H', year=2007, month=1, day=1, - hour=23) + ival_D_to_H_start = Period(freq='H', year=2007, month=1, day=1, hour=0) + ival_D_to_H_end = Period(freq='H', year=2007, month=1, day=1, hour=23) ival_D_to_T_start = Period(freq='Min', year=2007, month=1, day=1, hour=0, minute=0) ival_D_to_T_end = Period(freq='Min', year=2007, month=1, day=1, hour=23, minute=59) - ival_D_to_S_start = Period(freq='S', year=2007, month=1, day=1, - hour=0, minute=0, second=0) - ival_D_to_S_end = Period(freq='S', year=2007, month=1, day=1, - hour=23, minute=59, second=59) + ival_D_to_S_start = Period(freq='S', year=2007, month=1, day=1, hour=0, + minute=0, second=0) + ival_D_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=23, + minute=59, second=59) assert_equal(ival_D.asfreq('A'), ival_D_to_A) - assert_equal(ival_D_end_of_quarter.asfreq('A-JAN'), - ival_Deoq_to_AJAN) - assert_equal(ival_D_end_of_quarter.asfreq('A-JUN'), - ival_Deoq_to_AJUN) - assert_equal(ival_D_end_of_quarter.asfreq('A-DEC'), - ival_Deoq_to_ADEC) + assert_equal(ival_D_end_of_quarter.asfreq('A-JAN'), ival_Deoq_to_AJAN) + assert_equal(ival_D_end_of_quarter.asfreq('A-JUN'), ival_Deoq_to_AJUN) + assert_equal(ival_D_end_of_quarter.asfreq('A-DEC'), 
ival_Deoq_to_ADEC)
         assert_equal(ival_D_end_of_year.asfreq('A'), ival_D_to_A)
         assert_equal(ival_D_end_of_quarter.asfreq('Q'), ival_D_to_QEDEC)
@@ -1380,12 +1365,12 @@ def test_conv_hourly(self):
         ival_H_to_T_start = Period(freq='Min', year=2007, month=1, day=1,
                                    hour=0, minute=0)
-        ival_H_to_T_end = Period(freq='Min', year=2007, month=1, day=1,
-                                 hour=0, minute=59)
-        ival_H_to_S_start = Period(freq='S', year=2007, month=1, day=1,
-                                   hour=0, minute=0, second=0)
-        ival_H_to_S_end = Period(freq='S', year=2007, month=1, day=1,
-                                 hour=0, minute=59, second=59)
+        ival_H_to_T_end = Period(freq='Min', year=2007, month=1, day=1, hour=0,
+                                 minute=59)
+        ival_H_to_S_start = Period(freq='S', year=2007, month=1, day=1, hour=0,
+                                   minute=0, second=0)
+        ival_H_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=0,
+                                 minute=59, second=59)

         assert_equal(ival_H.asfreq('A'), ival_H_to_A)
         assert_equal(ival_H_end_of_year.asfreq('A'), ival_H_to_A)
@@ -1410,8 +1395,8 @@ def test_conv_hourly(self):

     def test_conv_minutely(self):
         # frequency conversion tests: from Minutely Frequency"

-        ival_T = Period(freq='Min', year=2007, month=1, day=1,
-                        hour=0, minute=0)
+        ival_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
+                        minute=0)

         ival_T_end_of_year = Period(freq='Min', year=2007, month=12, day=31,
                                     hour=23, minute=59)
         ival_T_end_of_quarter = Period(freq='Min', year=2007, month=3, day=31,
@@ -1435,10 +1420,10 @@ def test_conv_minutely(self):
         ival_T_to_B = Period(freq='B', year=2007, month=1, day=1)
         ival_T_to_H = Period(freq='H', year=2007, month=1, day=1, hour=0)
-        ival_T_to_S_start = Period(freq='S', year=2007, month=1, day=1,
-                                   hour=0, minute=0, second=0)
-        ival_T_to_S_end = Period(freq='S', year=2007, month=1, day=1,
-                                 hour=0, minute=0, second=59)
+        ival_T_to_S_start = Period(freq='S', year=2007, month=1, day=1, hour=0,
+                                   minute=0, second=0)
+        ival_T_to_S_end = Period(freq='S', year=2007, month=1, day=1, hour=0,
+                                 minute=0, second=59)

         assert_equal(ival_T.asfreq('A'), ival_T_to_A)
         assert_equal(ival_T_end_of_year.asfreq('A'), ival_T_to_A)
@@ -1463,8 +1448,8 @@ def test_conv_minutely(self):

     def test_conv_secondly(self):
         # frequency conversion tests: from Secondly Frequency"

-        ival_S = Period(freq='S', year=2007, month=1, day=1,
-                        hour=0, minute=0, second=0)
+        ival_S = Period(freq='S', year=2007, month=1, day=1, hour=0, minute=0,
+                        second=0)
         ival_S_end_of_year = Period(freq='S', year=2007, month=12, day=31,
                                     hour=23, minute=59, second=59)
         ival_S_end_of_quarter = Period(freq='S', year=2007, month=3, day=31,
@@ -1488,10 +1473,9 @@ def test_conv_secondly(self):
         ival_S_to_W = Period(freq='W', year=2007, month=1, day=7)
         ival_S_to_D = Period(freq='D', year=2007, month=1, day=1)
         ival_S_to_B = Period(freq='B', year=2007, month=1, day=1)
-        ival_S_to_H = Period(freq='H', year=2007, month=1, day=1,
-                             hour=0)
-        ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1,
-                             hour=0, minute=0)
+        ival_S_to_H = Period(freq='H', year=2007, month=1, day=1, hour=0)
+        ival_S_to_T = Period(freq='Min', year=2007, month=1, day=1, hour=0,
+                             minute=0)

         assert_equal(ival_S.asfreq('A'), ival_S_to_A)
         assert_equal(ival_S_end_of_year.asfreq('A'), ival_S_to_A)
@@ -1606,14 +1590,12 @@ def test_asfreq_mult_nat(self):

 class TestPeriodIndex(tm.TestCase):
-
     def setUp(self):
         pass

     def test_hash_error(self):
         index = period_range('20010101', periods=10)
-        with tm.assertRaisesRegexp(TypeError,
-                                   "unhashable type: %r" %
+        with tm.assertRaisesRegexp(TypeError, "unhashable type: %r" %
                                    type(index).__name__):
             hash(index)
@@ -1745,10 +1727,10 @@ def test_constructor_simple_new(self):
         self.assertTrue(result.equals(idx))

     def test_constructor_nat(self):
-        self.assertRaises(
-            ValueError, period_range, start='NaT', end='2011-01-01', freq='M')
-        self.assertRaises(
-            ValueError, period_range, start='2011-01-01', end='NaT', freq='M')
+        self.assertRaises(ValueError, period_range, start='NaT',
+                          end='2011-01-01', freq='M')
+        self.assertRaises(ValueError, period_range, start='2011-01-01',
+                          end='NaT', freq='M')

     def test_constructor_year_and_quarter(self):
         year = pd.Series([2001, 2002, 2003])
@@ -1764,11 +1746,13 @@ def test_constructor_freq_mult(self):
         for func in [PeriodIndex, period_range]:
             # must be the same, but for sure...
             pidx = func(start='2014-01', freq='2M', periods=4)
-            expected = PeriodIndex(['2014-01', '2014-03', '2014-05', '2014-07'], freq='M')
+            expected = PeriodIndex(
+                ['2014-01', '2014-03', '2014-05', '2014-07'], freq='M')
             tm.assert_index_equal(pidx, expected)

             pidx = func(start='2014-01-02', end='2014-01-15', freq='3D')
-            expected = PeriodIndex(['2014-01-02', '2014-01-05', '2014-01-08', '2014-01-11',
+            expected = PeriodIndex(['2014-01-02', '2014-01-05',
+                                    '2014-01-08', '2014-01-11',
                                     '2014-01-14'], freq='D')
             tm.assert_index_equal(pidx, expected)
@@ -1782,13 +1766,11 @@ def test_constructor_freq_mult(self):
         with tm.assertRaisesRegexp(ValueError, msg):
             PeriodIndex(['2011-01'], freq='-1M')

-        msg = ('Frequency must be positive, because it'
-               ' represents span: 0M')
+        msg = ('Frequency must be positive, because it' ' represents span: 0M')
         with tm.assertRaisesRegexp(ValueError, msg):
             PeriodIndex(['2011-01'], freq='0M')

-        msg = ('Frequency must be positive, because it'
-               ' represents span: 0M')
+        msg = ('Frequency must be positive, because it' ' represents span: 0M')
         with tm.assertRaisesRegexp(ValueError, msg):
             period_range('2011-01', periods=3, freq='0M')
@@ -1799,7 +1781,8 @@ def test_constructor_freq_mult_dti_compat(self):
         for mult, freq in itertools.product(mults, freqs):
             freqstr = str(mult) + freq
             pidx = PeriodIndex(start='2014-04-01', freq=freqstr, periods=10)
-            expected = date_range(start='2014-04-01', freq=freqstr, periods=10).to_period(freq)
+            expected = date_range(start='2014-04-01', freq=freqstr,
+                                  periods=10).to_period(freq)
             tm.assert_index_equal(pidx, expected)

     def test_is_(self):
@@ -1809,7 +1792,8 @@ def test_is_(self):
         self.assertEqual(index.is_(index), True)
         self.assertEqual(index.is_(create_index()), False)
         self.assertEqual(index.is_(index.view()), True)
-        self.assertEqual(index.is_(index.view().view().view().view().view()), True)
+        self.assertEqual(
+            index.is_(index.view().view().view().view().view()), True)
         self.assertEqual(index.view().is_(index), True)
         ind2 = index.view()
         index.name = "Apple"
@@ -1863,9 +1847,10 @@ def test_getitem_partial(self):
         assert_series_equal(exp, result)

         ts = ts[10:].append(ts[10:])
-        self.assertRaisesRegexp(
-            KeyError, "left slice bound for non-unique label: '2008'",
-            ts.__getitem__, slice('2008', '2009'))
+        self.assertRaisesRegexp(KeyError,
+                                "left slice bound for non-unique "
+                                "label: '2008'",
+                                ts.__getitem__, slice('2008', '2009'))

     def test_getitem_datetime(self):
         rng = period_range(start='2012-01-01', periods=10, freq='W-MON')
@@ -1894,9 +1879,12 @@ def assert_slices_equivalent(l_slc, i_slc):
         assert_slices_equivalent(SLC[:'2014-10':-1], SLC[:8:-1])

         assert_slices_equivalent(SLC['2015-02':'2014-10':-1], SLC[13:8:-1])
-        assert_slices_equivalent(SLC[Period('2015-02'):Period('2014-10'):-1], SLC[13:8:-1])
-        assert_slices_equivalent(SLC['2015-02':Period('2014-10'):-1], SLC[13:8:-1])
-        assert_slices_equivalent(SLC[Period('2015-02'):'2014-10':-1], SLC[13:8:-1])
+        assert_slices_equivalent(SLC[Period('2015-02'):Period('2014-10'):-1],
+                                 SLC[13:8:-1])
+        assert_slices_equivalent(SLC['2015-02':Period('2014-10'):-1],
+                                 SLC[13:8:-1])
+        assert_slices_equivalent(SLC[Period('2015-02'):'2014-10':-1],
+                                 SLC[13:8:-1])

         assert_slices_equivalent(SLC['2014-10':'2015-02':-1], SLC[:0])
@@ -1925,8 +1913,8 @@ def test_sub(self):
         self.assertTrue(result.equals(exp))

     def test_periods_number_check(self):
-        self.assertRaises(
-            ValueError, period_range, '2011-1-1', '2012-1-1', 'B')
+        self.assertRaises(ValueError, period_range, '2011-1-1', '2012-1-1',
+                          'B')

     def test_tolist(self):
         index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
@@ -1996,16 +1984,17 @@ def test_to_timestamp_preserve_name(self):
         self.assertEqual(conv.name, 'foo')

     def test_to_timestamp_repr_is_code(self):
-        zs=[Timestamp('99-04-17 00:00:00',tz='UTC'),
-            Timestamp('2001-04-17 00:00:00',tz='UTC'),
-            Timestamp('2001-04-17 00:00:00',tz='America/Los_Angeles'),
-            Timestamp('2001-04-17 00:00:00',tz=None)]
+        zs = [Timestamp('99-04-17 00:00:00', tz='UTC'),
+              Timestamp('2001-04-17 00:00:00', tz='UTC'),
+              Timestamp('2001-04-17 00:00:00', tz='America/Los_Angeles'),
+              Timestamp('2001-04-17 00:00:00', tz=None)]
         for z in zs:
-            self.assertEqual( eval(repr(z)), z)
+            self.assertEqual(eval(repr(z)), z)

     def test_to_timestamp_pi_nat(self):
         # GH 7228
-        index = PeriodIndex(['NaT', '2011-01', '2011-02'], freq='M', name='idx')
+        index = PeriodIndex(['NaT', '2011-01', '2011-02'], freq='M',
+                            name='idx')

         result = index.to_timestamp('D')
         expected = DatetimeIndex([pd.NaT, datetime(2011, 1, 1),
@@ -2030,10 +2019,12 @@ def test_to_timestamp_pi_nat(self):
     def test_to_timestamp_pi_mult(self):
         idx = PeriodIndex(['2011-01', 'NaT', '2011-02'], freq='2M', name='idx')
         result = idx.to_timestamp()
-        expected = DatetimeIndex(['2011-01-01', 'NaT', '2011-02-01'], name='idx')
+        expected = DatetimeIndex(
+            ['2011-01-01', 'NaT', '2011-02-01'], name='idx')
         self.assert_index_equal(result, expected)
         result = idx.to_timestamp(how='E')
-        expected = DatetimeIndex(['2011-02-28', 'NaT', '2011-03-31'], name='idx')
+        expected = DatetimeIndex(
+            ['2011-02-28', 'NaT', '2011-03-31'], name='idx')
         self.assert_index_equal(result, expected)

     def test_as_frame_columns(self):
@@ -2182,8 +2173,10 @@ def test_index_unique(self):
         self.assert_numpy_array_equal(idx.unique(), expected.values)
         self.assertEqual(idx.nunique(), 3)

-        idx = PeriodIndex([2000, 2007, 2007, 2009, 2007], freq='A-JUN', tz='US/Eastern')
-        expected = PeriodIndex([2000, 2007, 2009], freq='A-JUN', tz='US/Eastern')
+        idx = PeriodIndex([2000, 2007, 2007, 2009, 2007], freq='A-JUN',
+                          tz='US/Eastern')
+        expected = PeriodIndex([2000, 2007, 2009], freq='A-JUN',
+                               tz='US/Eastern')
         self.assert_numpy_array_equal(idx.unique(), expected.values)
         self.assertEqual(idx.nunique(), 3)
@@ -2302,21 +2295,27 @@ def test_shift(self):
         assert_equal(pi1.shift(-1).values, pi2.values)

     def test_shift_nat(self):
-        idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M', name='idx')
+        idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
+                           '2011-04'], freq='M', name='idx')
         result = idx.shift(1)
-        expected = PeriodIndex(['2011-02', '2011-03', 'NaT', '2011-05'], freq='M', name='idx')
+        expected = PeriodIndex(
+            ['2011-02', '2011-03', 'NaT', '2011-05'], freq='M', name='idx')
         self.assertTrue(result.equals(expected))
         self.assertEqual(result.name, expected.name)

     def test_shift_ndarray(self):
-        idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M', name='idx')
+        idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
+                           '2011-04'], freq='M', name='idx')
         result = idx.shift(np.array([1, 2, 3, 4]))
-        expected = PeriodIndex(['2011-02', '2011-04', 'NaT', '2011-08'], freq='M', name='idx')
+        expected = PeriodIndex(
+            ['2011-02', '2011-04', 'NaT', '2011-08'], freq='M', name='idx')
         self.assertTrue(result.equals(expected))

-        idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M', name='idx')
+        idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
+                           '2011-04'], freq='M', name='idx')
         result = idx.shift(np.array([1, -2, 3, -4]))
-        expected = PeriodIndex(['2011-02', '2010-12', 'NaT', '2010-12'], freq='M', name='idx')
+        expected = PeriodIndex(
+            ['2011-02', '2010-12', 'NaT', '2010-12'], freq='M', name='idx')
         self.assertTrue(result.equals(expected))

     def test_asfreq(self):
@@ -2503,12 +2502,12 @@ def test_badinput(self):
         # self.assertRaises(datetools.DateParseError, Period, '0', 'A')

     def test_negative_ordinals(self):
-        p = Period(ordinal=-1000, freq='A')
-        p = Period(ordinal=0, freq='A')
+        Period(ordinal=-1000, freq='A')
+        Period(ordinal=0, freq='A')

         idx1 = PeriodIndex(ordinal=[-1, 0, 1], freq='A')
         idx2 = PeriodIndex(ordinal=np.array([-1, 0, 1]), freq='A')
-        tm.assert_numpy_array_equal(idx1,idx2)
+        tm.assert_numpy_array_equal(idx1, idx2)

     def test_dti_to_period(self):
         dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
@@ -2524,9 +2523,12 @@ def test_dti_to_period(self):
         self.assertEqual(pi2[-1], Period('11/30/2005', freq='D'))
         self.assertEqual(pi3[-1], Period('11/30/2005', freq='3D'))

-        tm.assert_index_equal(pi1, period_range('1/1/2005', '11/1/2005', freq='M'))
-        tm.assert_index_equal(pi2, period_range('1/1/2005', '11/1/2005', freq='M').asfreq('D'))
-        tm.assert_index_equal(pi3, period_range('1/1/2005', '11/1/2005', freq='M').asfreq('3D'))
+        tm.assert_index_equal(pi1, period_range('1/1/2005', '11/1/2005',
+                                                freq='M'))
+        tm.assert_index_equal(pi2, period_range('1/1/2005', '11/1/2005',
+                                                freq='M').asfreq('D'))
+        tm.assert_index_equal(pi3, period_range('1/1/2005', '11/1/2005',
+                                                freq='M').asfreq('3D'))

     def test_pindex_slice_index(self):
         pi = PeriodIndex(start='1/1/10', end='12/31/12', freq='M')
@@ -2546,8 +2548,8 @@ def test_getitem_day(self):
         for idx in [didx, pidx]:

             # getitem against index should raise ValueError
-            values = ['2014', '2013/02', '2013/01/02',
-                      '2013/02/01 9H', '2013/02/01 09:00']
+            values = ['2014', '2013/02', '2013/01/02', '2013/02/01 9H',
+                      '2013/02/01 09:00']
             for v in values:

                 if _np_version_under1p9:
@@ -2557,7 +2559,7 @@ def test_getitem_day(self):
                     # GH7116
                     # these show deprecations as we are trying
                     # to slice with non-integer indexers
-                    #with tm.assertRaises(IndexError):
+                    # with tm.assertRaises(IndexError):
                     #    idx[v]
                     continue

@@ -2578,8 +2580,8 @@ def test_range_slice_day(self):
         for idx in [didx, pidx]:

             # slices against index should raise IndexError
-            values = ['2014', '2013/02', '2013/01/02',
-                      '2013/02/01 9H', '2013/02/01 09:00']
+            values = ['2014', '2013/02', '2013/01/02', '2013/02/01 9H',
+                      '2013/02/01 09:00']
             for v in values:
                 with tm.assertRaises(IndexError):
                     idx[v:]
@@ -2598,13 +2600,14 @@ def test_range_slice_day(self):

     def test_getitem_seconds(self):
         # GH 6716
-        didx = DatetimeIndex(start='2013/01/01 09:00:00', freq='S', periods=4000)
+        didx = DatetimeIndex(start='2013/01/01 09:00:00', freq='S',
+                             periods=4000)
         pidx = PeriodIndex(start='2013/01/01 09:00:00', freq='S', periods=4000)

         for idx in [didx, pidx]:

             # getitem against index should raise ValueError
-            values = ['2014', '2013/02', '2013/01/02',
-                      '2013/02/01 9H', '2013/02/01 09:00']
+            values = ['2014', '2013/02', '2013/01/02', '2013/02/01 9H',
+                      '2013/02/01 09:00']
             for v in values:
                 if _np_version_under1p9:
                     with tm.assertRaises(ValueError):
@@ -2613,7 +2616,7 @@ def test_getitem_seconds(self):
                     # GH7116
                     # these show deprecations as we are trying
                     # to slice with non-integer indexers
-                    #with tm.assertRaises(IndexError):
+                    # with tm.assertRaises(IndexError):
                     #    idx[v]
                     continue

@@ -2625,21 +2628,24 @@ def test_getitem_seconds(self):

     def test_range_slice_seconds(self):
         # GH 6716
-        didx = DatetimeIndex(start='2013/01/01 09:00:00', freq='S', periods=4000)
+        didx = DatetimeIndex(start='2013/01/01 09:00:00', freq='S',
+                             periods=4000)
         pidx = PeriodIndex(start='2013/01/01 09:00:00', freq='S', periods=4000)

         for idx in [didx, pidx]:

             # slices against index should raise IndexError
-            values = ['2014', '2013/02', '2013/01/02',
-                      '2013/02/01 9H', '2013/02/01 09:00']
+            values = ['2014', '2013/02', '2013/01/02', '2013/02/01 9H',
+                      '2013/02/01 09:00']
             for v in values:
                 with tm.assertRaises(IndexError):
                     idx[v:]

             s = Series(np.random.rand(len(idx)), index=idx)

-            assert_series_equal(s['2013/01/01 09:05':'2013/01/01 09:10'], s[300:660])
-            assert_series_equal(s['2013/01/01 10:00':'2013/01/01 10:05'], s[3600:3960])
+            assert_series_equal(s['2013/01/01 09:05':'2013/01/01 09:10'],
+                                s[300:660])
+            assert_series_equal(s['2013/01/01 10:00':'2013/01/01 10:05'],
+                                s[3600:3960])
             assert_series_equal(s['2013/01/01 10H':], s[3600:])
             assert_series_equal(s[:'2013/01/01 09:30'], s[:1860])
             for d in ['2013/01/01', '2013/01', '2013']:
@@ -2652,7 +2658,8 @@ def test_range_slice_outofbounds(self):
         for idx in [didx, pidx]:
             df = DataFrame(dict(units=[100 + i for i in range(10)]), index=idx)
-            empty = DataFrame(index=idx.__class__([], freq='D'), columns=['units'])
+            empty = DataFrame(index=idx.__class__(
+                [], freq='D'), columns=['units'])
             empty['units'] = empty['units'].astype('int64')

             tm.assert_frame_equal(df['2013/09/01':'2013/09/30'], empty)
@@ -2664,8 +2671,10 @@ def test_range_slice_outofbounds(self):
             tm.assert_frame_equal(df['2013-11':'2013-12'], empty)

     def test_pindex_fieldaccessor_nat(self):
-        idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2012-03', '2012-04'], freq='D')
-        self.assert_numpy_array_equal(idx.year, np.array([2011, 2011, -1, 2012, 2012]))
+        idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
+                           '2012-03', '2012-04'], freq='D')
+        self.assert_numpy_array_equal(idx.year,
+                                      np.array([2011, 2011, -1, 2012, 2012]))
         self.assert_numpy_array_equal(idx.month, np.array([1, 2, -1, 3, 4]))

     def test_pindex_qaccess(self):
@@ -2756,10 +2765,11 @@ def test_iteration(self):
         self.assertEqual(result[0].freq, index.freq)

     def test_take(self):
-        index = PeriodIndex(start='1/1/10', end='12/31/12', freq='D', name='idx')
+        index = PeriodIndex(start='1/1/10', end='12/31/12', freq='D',
+                            name='idx')
         expected = PeriodIndex([datetime(2010, 1, 6), datetime(2010, 1, 7),
                                 datetime(2010, 1, 9), datetime(2010, 1, 13)],
-                               freq='D', name='idx')
+                               freq='D', name='idx')

         taken1 = index.take([5, 6, 8, 12])
         taken2 = index[[5, 6, 8, 12]]
@@ -2787,9 +2797,9 @@ def test_join_self(self):
             self.assertIs(index, res)

     def test_join_does_not_recur(self):
-        df = tm.makeCustomDataframe(3, 2, data_gen_f=lambda *args:
-                                    np.random.randint(2), c_idx_type='p',
-                                    r_idx_type='dt')
+        df = tm.makeCustomDataframe(
+            3, 2, data_gen_f=lambda *args: np.random.randint(2),
+            c_idx_type='p', r_idx_type='dt')
         s = df.iloc[:2, 0]

         res = s.index.join(df.columns, how='outer')
@@ -2902,9 +2912,9 @@ def test_fields(self):
         self._check_all_fields(i1)

     def _check_all_fields(self, periodindex):
-        fields = ['year', 'month', 'day', 'hour', 'minute',
-                  'second', 'weekofyear', 'week', 'dayofweek',
-                  'weekday', 'dayofyear', 'quarter', 'qyear', 'days_in_month']
+        fields = ['year', 'month', 'day', 'hour', 'minute', 'second',
+                  'weekofyear', 'week', 'dayofweek', 'weekday', 'dayofyear',
+                  'quarter', 'qyear', 'days_in_month']

         periods = list(periodindex)
@@ -3076,14 +3086,16 @@ def test_recreate_from_data(self):
     def test_combine_first(self):
         # GH 3367
         didx = pd.DatetimeIndex(start='1950-01-31', end='1950-07-31', freq='M')
-        pidx = pd.PeriodIndex(start=pd.Period('1950-1'), end=pd.Period('1950-7'), freq='M')
+        pidx = pd.PeriodIndex(start=pd.Period('1950-1'),
+                              end=pd.Period('1950-7'), freq='M')
         # check to be consistent with DatetimeIndex
         for idx in [didx, pidx]:
             a = pd.Series([1, np.nan, np.nan, 4, 5, np.nan, 7], index=idx)
             b = pd.Series([9, 9, 9, 9, 9, 9, 9], index=idx)

             result = a.combine_first(b)
-            expected = pd.Series([1, 9, 9, 4, 5, 9, 7], index=idx, dtype=np.float64)
+            expected = pd.Series([1, 9, 9, 4, 5, 9, 7], index=idx,
+                                 dtype=np.float64)
             tm.assert_series_equal(result, expected)

     def test_searchsorted(self):
@@ -3105,13 +3117,13 @@ def test_searchsorted(self):
             with self.assertRaisesRegexp(ValueError, msg):
                 pidx.searchsorted(pd.Period('2014-01-01', freq='5D'))

-
     def test_round_trip(self):
         p = Period('2000Q1')
         new_p = self.round_trip_pickle(p)
         self.assertEqual(new_p, p)

+
 def _permute(obj):
     return obj.take(np.random.permutation(len(obj)))
@@ -3138,47 +3150,65 @@ def test_add_offset(self):
             p = Period('2011', freq=freq)
             self.assertEqual(p + offsets.YearEnd(2), Period('2013', freq=freq))

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(365, 'D'), timedelta(365)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(365, 'D'),
+                      timedelta(365)]:
                 with tm.assertRaises(ValueError):
                     p + o

         for freq in ['M', '2M', '3M']:
             p = Period('2011-03', freq=freq)
-            self.assertEqual(p + offsets.MonthEnd(2), Period('2011-05', freq=freq))
-            self.assertEqual(p + offsets.MonthEnd(12), Period('2012-03', freq=freq))
-
-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(365, 'D'), timedelta(365)]:
+            self.assertEqual(p + offsets.MonthEnd(2),
+                             Period('2011-05', freq=freq))
+            self.assertEqual(p + offsets.MonthEnd(12),
+                             Period('2012-03', freq=freq))
+
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(365, 'D'),
+                      timedelta(365)]:
                 with tm.assertRaises(ValueError):
                     p + o

         # freq is Tick
         for freq in ['D', '2D', '3D']:
             p = Period('2011-04-01', freq=freq)
-            self.assertEqual(p + offsets.Day(5), Period('2011-04-06', freq=freq))
-            self.assertEqual(p + offsets.Hour(24), Period('2011-04-02', freq=freq))
-            self.assertEqual(p + np.timedelta64(2, 'D'), Period('2011-04-03', freq=freq))
-            self.assertEqual(p + np.timedelta64(3600 * 24, 's'), Period('2011-04-02', freq=freq))
-            self.assertEqual(p + timedelta(-2), Period('2011-03-30', freq=freq))
-            self.assertEqual(p + timedelta(hours=48), Period('2011-04-03', freq=freq))
-
-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(4, 'h'), timedelta(hours=23)]:
+            self.assertEqual(p + offsets.Day(5),
+                             Period('2011-04-06', freq=freq))
+            self.assertEqual(p + offsets.Hour(24),
+                             Period('2011-04-02', freq=freq))
+            self.assertEqual(p + np.timedelta64(2, 'D'),
+                             Period('2011-04-03', freq=freq))
+            self.assertEqual(p + np.timedelta64(3600 * 24, 's'),
+                             Period('2011-04-02', freq=freq))
+            self.assertEqual(p + timedelta(-2),
+                             Period('2011-03-30', freq=freq))
+            self.assertEqual(p + timedelta(hours=48),
+                             Period('2011-04-03', freq=freq))
+
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(4, 'h'),
+                      timedelta(hours=23)]:
                 with tm.assertRaises(ValueError):
                     p + o

         for freq in ['H', '2H', '3H']:
             p = Period('2011-04-01 09:00', freq=freq)
-            self.assertEqual(p + offsets.Day(2), Period('2011-04-03 09:00', freq=freq))
-            self.assertEqual(p + offsets.Hour(3), Period('2011-04-01 12:00', freq=freq))
-            self.assertEqual(p + np.timedelta64(3, 'h'), Period('2011-04-01 12:00', freq=freq))
-            self.assertEqual(p + np.timedelta64(3600, 's'), Period('2011-04-01 10:00', freq=freq))
-            self.assertEqual(p + timedelta(minutes=120), Period('2011-04-01 11:00', freq=freq))
-            self.assertEqual(p + timedelta(days=4, minutes=180), Period('2011-04-05 12:00', freq=freq))
-
-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
+            self.assertEqual(p + offsets.Day(2),
+                             Period('2011-04-03 09:00', freq=freq))
+            self.assertEqual(p + offsets.Hour(3),
+                             Period('2011-04-01 12:00', freq=freq))
+            self.assertEqual(p + np.timedelta64(3, 'h'),
+                             Period('2011-04-01 12:00', freq=freq))
+            self.assertEqual(p + np.timedelta64(3600, 's'),
+                             Period('2011-04-01 10:00', freq=freq))
+            self.assertEqual(p + timedelta(minutes=120),
+                             Period('2011-04-01 11:00', freq=freq))
+            self.assertEqual(p + timedelta(days=4, minutes=180),
+                             Period('2011-04-05 12:00', freq=freq))
+
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(3200, 's'),
+                      timedelta(hours=23, minutes=30)]:
                 with tm.assertRaises(ValueError):
                     p + o

@@ -3189,8 +3219,9 @@ def test_add_offset_nat(self):
             for o in [offsets.YearEnd(2)]:
                 self.assertEqual((p + o).ordinal, tslib.iNaT)

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(365, 'D'), timedelta(365)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(365, 'D'),
+                      timedelta(365)]:
                 with tm.assertRaises(ValueError):
                     p + o

@@ -3199,8 +3230,9 @@ def test_add_offset_nat(self):
             for o in [offsets.MonthEnd(2), offsets.MonthEnd(12)]:
                 self.assertEqual((p + o).ordinal, tslib.iNaT)

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(365, 'D'), timedelta(365)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(365, 'D'),
+                      timedelta(365)]:
                 with tm.assertRaises(ValueError):
                     p + o

@@ -3208,11 +3240,13 @@ def test_add_offset_nat(self):
         for freq in ['D', '2D', '3D']:
             p = Period('NaT', freq=freq)
             for o in [offsets.Day(5), offsets.Hour(24), np.timedelta64(2, 'D'),
-                      np.timedelta64(3600 * 24, 's'), timedelta(-2), timedelta(hours=48)]:
+                      np.timedelta64(3600 * 24, 's'), timedelta(-2),
+                      timedelta(hours=48)]:
                 self.assertEqual((p + o).ordinal, tslib.iNaT)

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(4, 'h'), timedelta(hours=23)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(4, 'h'),
+                      timedelta(hours=23)]:
                 with tm.assertRaises(ValueError):
                     p + o

@@ -3223,8 +3257,9 @@ def test_add_offset_nat(self):
                       timedelta(days=4, minutes=180)]:
                 self.assertEqual((p + o).ordinal, tslib.iNaT)

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(3200, 's'),
+                      timedelta(hours=23, minutes=30)]:
                 with tm.assertRaises(ValueError):
                     p + o

@@ -3234,47 +3269,65 @@ def test_sub_offset(self):
             p = Period('2011', freq=freq)
             self.assertEqual(p - offsets.YearEnd(2), Period('2009', freq=freq))

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(365, 'D'), timedelta(365)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(365, 'D'),
+                      timedelta(365)]:
                 with tm.assertRaises(ValueError):
                     p - o

         for freq in ['M', '2M', '3M']:
             p = Period('2011-03', freq=freq)
-            self.assertEqual(p - offsets.MonthEnd(2), Period('2011-01', freq=freq))
-            self.assertEqual(p - offsets.MonthEnd(12), Period('2010-03', freq=freq))
-
-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(365, 'D'), timedelta(365)]:
+            self.assertEqual(p - offsets.MonthEnd(2),
+                             Period('2011-01', freq=freq))
+            self.assertEqual(p - offsets.MonthEnd(12),
+                             Period('2010-03', freq=freq))
+
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(365, 'D'),
+                      timedelta(365)]:
                 with tm.assertRaises(ValueError):
                     p - o

         # freq is Tick
         for freq in ['D', '2D', '3D']:
             p = Period('2011-04-01', freq=freq)
-            self.assertEqual(p - offsets.Day(5), Period('2011-03-27', freq=freq))
-            self.assertEqual(p - offsets.Hour(24), Period('2011-03-31', freq=freq))
-            self.assertEqual(p - np.timedelta64(2, 'D'), Period('2011-03-30', freq=freq))
-            self.assertEqual(p - np.timedelta64(3600 * 24, 's'), Period('2011-03-31', freq=freq))
-            self.assertEqual(p - timedelta(-2), Period('2011-04-03', freq=freq))
-            self.assertEqual(p - timedelta(hours=48), Period('2011-03-30', freq=freq))
-
-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(4, 'h'), timedelta(hours=23)]:
+            self.assertEqual(p - offsets.Day(5),
+                             Period('2011-03-27', freq=freq))
+            self.assertEqual(p - offsets.Hour(24),
+                             Period('2011-03-31', freq=freq))
+            self.assertEqual(p - np.timedelta64(2, 'D'),
+                             Period('2011-03-30', freq=freq))
+            self.assertEqual(p - np.timedelta64(3600 * 24, 's'),
+                             Period('2011-03-31', freq=freq))
+            self.assertEqual(p - timedelta(-2),
+                             Period('2011-04-03', freq=freq))
+            self.assertEqual(p - timedelta(hours=48),
+                             Period('2011-03-30', freq=freq))
+
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(4, 'h'),
+                      timedelta(hours=23)]:
                 with tm.assertRaises(ValueError):
                     p - o

         for freq in ['H', '2H', '3H']:
             p = Period('2011-04-01 09:00', freq=freq)
-            self.assertEqual(p - offsets.Day(2), Period('2011-03-30 09:00', freq=freq))
-            self.assertEqual(p - offsets.Hour(3), Period('2011-04-01 06:00', freq=freq))
-            self.assertEqual(p - np.timedelta64(3, 'h'), Period('2011-04-01 06:00', freq=freq))
-            self.assertEqual(p - np.timedelta64(3600, 's'), Period('2011-04-01 08:00', freq=freq))
-            self.assertEqual(p - timedelta(minutes=120), Period('2011-04-01 07:00', freq=freq))
-            self.assertEqual(p - timedelta(days=4, minutes=180), Period('2011-03-28 06:00', freq=freq))
-
-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
+            self.assertEqual(p - offsets.Day(2),
+                             Period('2011-03-30 09:00', freq=freq))
+            self.assertEqual(p - offsets.Hour(3),
+                             Period('2011-04-01 06:00', freq=freq))
+            self.assertEqual(p - np.timedelta64(3, 'h'),
+                             Period('2011-04-01 06:00', freq=freq))
+            self.assertEqual(p - np.timedelta64(3600, 's'),
+                             Period('2011-04-01 08:00', freq=freq))
+            self.assertEqual(p - timedelta(minutes=120),
+                             Period('2011-04-01 07:00', freq=freq))
+            self.assertEqual(p - timedelta(days=4, minutes=180),
+                             Period('2011-03-28 06:00', freq=freq))
+
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(3200, 's'),
+                      timedelta(hours=23, minutes=30)]:
                 with tm.assertRaises(ValueError):
                     p - o

@@ -3285,8 +3338,9 @@ def test_sub_offset_nat(self):
             for o in [offsets.YearEnd(2)]:
                 self.assertEqual((p - o).ordinal, tslib.iNaT)

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(365, 'D'), timedelta(365)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(365, 'D'),
+                      timedelta(365)]:
                 with tm.assertRaises(ValueError):
                     p - o

@@ -3295,8 +3349,9 @@ def test_sub_offset_nat(self):
             for o in [offsets.MonthEnd(2), offsets.MonthEnd(12)]:
                 self.assertEqual((p - o).ordinal, tslib.iNaT)

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(365, 'D'), timedelta(365)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(365, 'D'),
+                      timedelta(365)]:
                 with tm.assertRaises(ValueError):
                     p - o

@@ -3304,11 +3359,13 @@ def test_sub_offset_nat(self):
         for freq in ['D', '2D', '3D']:
             p = Period('NaT', freq=freq)
             for o in [offsets.Day(5), offsets.Hour(24), np.timedelta64(2, 'D'),
-                      np.timedelta64(3600 * 24, 's'), timedelta(-2), timedelta(hours=48)]:
+                      np.timedelta64(3600 * 24, 's'), timedelta(-2),
+                      timedelta(hours=48)]:
                 self.assertEqual((p - o).ordinal, tslib.iNaT)

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(4, 'h'), timedelta(hours=23)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(4, 'h'),
+                      timedelta(hours=23)]:
                 with tm.assertRaises(ValueError):
                     p - o

@@ -3319,8 +3376,9 @@ def test_sub_offset_nat(self):
                       timedelta(days=4, minutes=180)]:
                 self.assertEqual((p - o).ordinal, tslib.iNaT)

-            for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
-                      np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
+            for o in [offsets.YearBegin(2), offsets.MonthBegin(1),
+                      offsets.Minute(), np.timedelta64(3200, 's'),
+                      timedelta(hours=23, minutes=30)]:
                 with tm.assertRaises(ValueError):
                     p - o

@@ -3329,13 +3387,17 @@ def test_nat_ops(self):
             p = Period('NaT', freq=freq)
             self.assertEqual((p + 1).ordinal, tslib.iNaT)
             self.assertEqual((p - 1).ordinal, tslib.iNaT)
-            self.assertEqual((p - Period('2011-01', freq=freq)).ordinal, tslib.iNaT)
-            self.assertEqual((Period('2011-01', freq=freq) - p).ordinal, tslib.iNaT)
+            self.assertEqual(
+                (p - Period('2011-01', freq=freq)).ordinal, tslib.iNaT)
+            self.assertEqual(
+                (Period('2011-01', freq=freq) - p).ordinal, tslib.iNaT)

     def test_pi_ops_nat(self):
-        idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M', name='idx')
+        idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
+                           '2011-04'], freq='M', name='idx')
         result = idx + 2
-        expected = PeriodIndex(['2011-03', '2011-04', 'NaT', '2011-06'], freq='M', name='idx')
+        expected = PeriodIndex(
+            ['2011-03', '2011-04', 'NaT', '2011-06'], freq='M', name='idx')
         self.assertTrue(result.equals(expected))

         result2 = result - 2
@@ -3346,21 +3408,26 @@ def test_pi_ops_nat(self):
             idx + "str"

     def test_pi_ops_array(self):
-        idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M', name='idx')
+        idx = PeriodIndex(['2011-01', '2011-02', 'NaT',
+                           '2011-04'], freq='M', name='idx')
         result = idx + np.array([1, 2, 3, 4])
-        exp = PeriodIndex(['2011-02', '2011-04', 'NaT', '2011-08'], freq='M', name='idx')
+        exp = PeriodIndex(['2011-02', '2011-04', 'NaT',
+                           '2011-08'], freq='M', name='idx')
         self.assert_index_equal(result, exp)

         result = np.add(idx, np.array([4, -1, 1, 2]))
-        exp = PeriodIndex(['2011-05', '2011-01', 'NaT', '2011-06'], freq='M', name='idx')
+        exp = PeriodIndex(['2011-05', '2011-01', 'NaT',
+                           '2011-06'], freq='M', name='idx')
         self.assert_index_equal(result, exp)

         result = idx - np.array([1, 2, 3, 4])
-        exp = PeriodIndex(['2010-12', '2010-12', 'NaT', '2010-12'], freq='M', name='idx')
+        exp = PeriodIndex(['2010-12', '2010-12', 'NaT',
+                           '2010-12'], freq='M', name='idx')
         self.assert_index_equal(result, exp)

         result = np.subtract(idx, np.array([3, 2, 3, -2]))
-        exp = PeriodIndex(['2010-10', '2010-12', 'NaT', '2011-06'], freq='M', name='idx')
+        exp = PeriodIndex(['2010-10', '2010-12', 'NaT',
+                           '2011-06'], freq='M', name='idx')
         self.assert_index_equal(result, exp)

         # incompatible freq
@@ -3386,8 +3453,8 @@ def test_pi_ops_array(self):
         idx = PeriodIndex(['2011-01-01 09:00:00', '2011-01-01 10:00:00', 'NaT',
                            '2011-01-01 12:00:00'], freq='S', name='idx')
-        result = idx + np.array([np.timedelta64(1, 'h'), np.timedelta64(30, 's'),
-                                 np.timedelta64(2, 'h'), np.timedelta64(15, 'm')])
+        result = idx + np.array([np.timedelta64(1, 'h'), np.timedelta64(
+            30, 's'), np.timedelta64(2, 'h'), np.timedelta64(15, 'm')])
         exp = PeriodIndex(['2011-01-01 10:00:00', '2011-01-01 10:00:30', 'NaT',
                            '2011-01-01 12:15:00'], freq='S', name='idx')
         self.assert_index_equal(result, exp)

@@ -3527,8 +3594,8 @@ def test_period_nat_comp(self):
         nat = pd.Timestamp('NaT')
         t = pd.Timestamp('2011-01-01')
         # confirm Period('NaT') work identical with Timestamp('NaT')
-        for left, right in [(p_nat, p), (p, p_nat), (p_nat, p_nat),
-                            (nat, t), (t, nat), (nat, nat)]:
+        for left, right in [(p_nat, p), (p, p_nat), (p_nat, p_nat), (nat, t),
+                            (t, nat), (nat, nat)]:
             self.assertEqual(left < right, False)
             self.assertEqual(left > right, False)
             self.assertEqual(left == right, False)
@@ -3561,7 +3628,8 @@ def test_pi_pi_comp(self):
             exp = np.array([True, True, False, False])
             self.assert_numpy_array_equal(base <= p, exp)

-            idx = PeriodIndex(['2011-02', '2011-01', '2011-03', '2011-05'], freq=freq)
+            idx = PeriodIndex(
+                ['2011-02', '2011-01', '2011-03', '2011-05'], freq=freq)

             exp = np.array([False, False, True, False])
             self.assert_numpy_array_equal(base == idx, exp)
@@ -3601,7 +3669,8 @@ def test_pi_pi_comp(self):

     def test_pi_nat_comp(self):
         for freq in ['M', '2M', '3M']:
-            idx1 = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-05'], freq=freq)
+            idx1 = PeriodIndex(
+                ['2011-01', '2011-02', 'NaT', '2011-05'], freq=freq)

             result = idx1 > Period('2011-02', freq=freq)
             exp = np.array([False, False, False, True])
@@ -3615,7 +3684,8 @@ def test_pi_nat_comp(self):
             exp = np.array([True, True, True, True])
             self.assert_numpy_array_equal(result, exp)

-            idx2 = PeriodIndex(['2011-02', '2011-01', '2011-04', 'NaT'], freq=freq)
+            idx2 = PeriodIndex(
+                ['2011-02', '2011-01', '2011-04', 'NaT'], freq=freq)
             result = idx1 < idx2
             exp = np.array([True, False, False, False])
             self.assert_numpy_array_equal(result, exp)
@@ -3636,7 +3706,8 @@ def test_pi_nat_comp(self):
             exp = np.array([False, False, True, False])
             self.assert_numpy_array_equal(result, exp)

-            diff = PeriodIndex(['2011-02', '2011-01', '2011-04', 'NaT'], freq='4M')
+            diff = PeriodIndex(
+                ['2011-02', '2011-01', '2011-04', 'NaT'], freq='4M')
             msg = "Input has different freq=4M from PeriodIndex"
             with tm.assertRaisesRegexp(ValueError, msg):
                 idx1 > diff
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index bcd1e400d3974..4a06a5500094a 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -9,7 +9,7 @@
 from pandas import Index, Series, DataFrame
 from pandas.tseries.index import date_range, bdate_range
-from pandas.tseries.offsets import DateOffset, Week
+from pandas.tseries.offsets import DateOffset
 from pandas.tseries.period import period_range, Period, PeriodIndex
 from pandas.tseries.resample import DatetimeIndex
@@ -49,7 +49,7 @@ def test_ts_plot_with_tz(self):

     def test_fontsize_set_correctly(self):
         # For issue #8765
-        import matplotlib.pyplot as plt
+        import matplotlib.pyplot as plt  # noqa
         df = DataFrame(np.random.randn(10, 9), index=range(10))
         ax = df.plot(fontsize=2)
         for label in (ax.get_xticklabels() + ax.get_yticklabels()):
@@ -58,7 +58,7 @@ def test_fontsize_set_correctly(self):
     @slow
     def test_frame_inferred(self):
         # inferred freq
-        import matplotlib.pyplot as plt
+        import matplotlib.pyplot as plt  # noqa
         idx = date_range('1/1/1987', freq='MS', periods=100)
         idx = DatetimeIndex(idx.values, freq=None)
@@ -80,10 +80,10 @@ def test_nonnumeric_exclude(self):
         import matplotlib.pyplot as plt

         idx = date_range('1/1/1987', freq='A', periods=3)
-        df = DataFrame({'A': ["x", "y", "z"], 'B': [1,2,3]}, idx)
+        df = DataFrame({'A': ["x", "y", "z"], 'B': [1, 2, 3]}, idx)

-        ax = df.plot()  # it works
-        self.assertEqual(len(ax.get_lines()), 1)  #B was plotted
+        ax = df.plot()  # it works
+        self.assertEqual(len(ax.get_lines()), 1)  # B was plotted
         plt.close(plt.gcf())

         self.assertRaises(TypeError, df['A'].plot)
@@ -114,7 +114,7 @@ def test_tsplot(self):
         self.assertEqual((0., 0., 0.), ax.get_lines()[0].get_color())

     def test_both_style_and_color(self):
-        import matplotlib.pyplot as plt
+        import matplotlib.pyplot as plt  # noqa
         ts = tm.makeTimeSeries()
         self.assertRaises(ValueError, ts.plot, style='b-', color='#000099')
@@ -146,16 +146,21 @@ def check_format_of_first_point(ax, expected_string):
             first_x = first_line.get_xdata()[0].ordinal
             first_y = first_line.get_ydata()[0]
             try:
-                self.assertEqual(expected_string, ax.format_coord(first_x, first_y))
+                self.assertEqual(expected_string,
+                                 ax.format_coord(first_x, first_y))
             except (ValueError):
-                raise nose.SkipTest("skipping test because issue forming test comparison GH7664")
+                raise nose.SkipTest("skipping test because issue forming "
+                                    "test comparison GH7664")

-        annual = Series(1, index=date_range('2014-01-01', periods=3, freq='A-DEC'))
+        annual = Series(1, index=date_range('2014-01-01', periods=3,
+                                            freq='A-DEC'))
         check_format_of_first_point(annual.plot(), 't = 2014 y = 1.000000')

-        # note this is added to the annual plot already in existence, and changes its freq field
+        # note this is added to the annual plot already in existence, and
+        # changes its freq field
         daily = Series(1, index=date_range('2014-01-01', periods=3, freq='D'))
-        check_format_of_first_point(daily.plot(), 't = 2014-01-01 y = 1.000000')
+        check_format_of_first_point(daily.plot(),
+                                    't = 2014-01-01 y = 1.000000')
         tm.close()

         # tsplot
@@ -218,9 +223,8 @@ def test_plot_offset_freq(self):

     @slow
     def test_plot_multiple_inferred_freq(self):
-        dr = Index([datetime(2000, 1, 1),
-                    datetime(2000, 1, 6),
-                    datetime(2000, 1, 11)])
+        dr = Index([datetime(2000, 1, 1), datetime(2000, 1, 6), datetime(
+            2000, 1, 11)])
         ser = Series(np.random.randn(len(dr)), dr)
         _check_plot_works(ser.plot)
@@ -261,7 +265,8 @@ def test_irreg_hf(self):
         diffs = Series(ax.get_lines()[0].get_xydata()[:, 0]).diff()
         sec = 1. / 24 / 60 / 60
-        self.assertTrue((np.fabs(diffs[1:] - [sec, sec * 2, sec]) < 1e-8).all())
+        self.assertTrue((np.fabs(diffs[1:] - [sec, sec * 2, sec]) < 1e-8).all(
+        ))
         plt.clf()
         fig.add_subplot(111)
@@ -286,7 +291,7 @@ def test_irregular_datetime64_repr_bug(self):
         self.assertEqual(rs, xp)

     def test_business_freq(self):
-        import matplotlib.pyplot as plt
+        import matplotlib.pyplot as plt  # noqa
         bts = tm.makePeriodSeries()
         ax = bts.plot()
         self.assertEqual(ax.get_lines()[0].get_xydata()[0, 0],
@@ -309,8 +314,8 @@ def test_business_freq_convert(self):

     def test_nonzero_base(self):
         # GH2571
-        idx = (date_range('2012-12-20', periods=24, freq='H') +
-               timedelta(minutes=30))
+        idx = (date_range('2012-12-20', periods=24, freq='H') + timedelta(
+            minutes=30))
         df = DataFrame(np.arange(24), index=idx)
         ax = df.plot()
         rs = ax.get_lines()[0].get_xdata()
@@ -601,7 +606,7 @@ def test_secondary_kde(self):
         tm._skip_if_no_scipy()
         _skip_if_no_scipy_gaussian_kde()

-        import matplotlib.pyplot as plt
+        import matplotlib.pyplot as plt  # noqa
         ser = Series(np.random.randn(10))
         ax = ser.plot(secondary_y=True, kind='density')
         self.assertTrue(hasattr(ax, 'left_ax'))
@@ -616,7 +621,7 @@ def test_secondary_bar(self):
         ax = ser.plot(secondary_y=True, kind='bar')
         fig = ax.get_figure()
         axes = fig.get_axes()
-        self.assertEqual(axes[1].get_yaxis().get_ticks_position(), 'right')
+        self.assertEqual(axes[1].get_yaxis().get_ticks_position(), 'right')

     @slow
     def test_secondary_frame(self):
@@ -635,10 +640,13 @@ def test_secondary_bar_frame(self):
         self.assertEqual(axes[2].get_yaxis().get_ticks_position(), 'right')

     def test_mixed_freq_regular_first(self):
-        import matplotlib.pyplot as plt
+        import matplotlib.pyplot as plt  # noqa
         s1 = tm.makeTimeSeries()
         s2 = s1[[0, 5, 10, 11, 12, 13, 14, 15]]
-        ax = s1.plot()
+
+        # it works!
+ s1.plot() + ax2 = s2.plot(style='g') lines = ax2.get_lines() idx1 = PeriodIndex(lines[0].get_xdata()) @@ -652,7 +660,7 @@ def test_mixed_freq_regular_first(self): @slow def test_mixed_freq_irregular_first(self): - import matplotlib.pyplot as plt + import matplotlib.pyplot as plt # noqa s1 = tm.makeTimeSeries() s2 = s1[[0, 5, 10, 11, 12, 13, 14, 15]] s2.plot(style='g') @@ -666,7 +674,7 @@ def test_mixed_freq_irregular_first(self): def test_mixed_freq_regular_first_df(self): # GH 9852 - import matplotlib.pyplot as plt + import matplotlib.pyplot as plt # noqa s1 = tm.makeTimeSeries().to_frame() s2 = s1.iloc[[0, 5, 10, 11, 12, 13, 14, 15], :] ax = s1.plot() @@ -684,7 +692,7 @@ def test_mixed_freq_regular_first_df(self): @slow def test_mixed_freq_irregular_first_df(self): # GH 9852 - import matplotlib.pyplot as plt + import matplotlib.pyplot as plt # noqa s1 = tm.makeTimeSeries().to_frame() s2 = s1.iloc[[0, 5, 10, 11, 12, 13, 14, 15], :] ax = s2.plot(style='g') @@ -783,12 +791,12 @@ def test_from_weekly_resampling(self): ax = high.plot() expected_h = idxh.to_period().asi8 - expected_l = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, 1544, 1549, - 1553, 1558, 1562]) + expected_l = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, 1544, + 1549, 1553, 1558, 1562]) for l in ax.get_lines(): self.assertTrue(PeriodIndex(data=l.get_xdata()).freq, idxh.freq) xdata = l.get_xdata(orig=False) - if len(xdata) == 12: # idxl lines + if len(xdata) == 12: # idxl lines self.assert_numpy_array_equal(xdata, expected_l) else: self.assert_numpy_array_equal(xdata, expected_h) @@ -803,7 +811,7 @@ def test_from_weekly_resampling(self): for l in lines: self.assertTrue(PeriodIndex(data=l.get_xdata()).freq, idxh.freq) xdata = l.get_xdata(orig=False) - if len(xdata) == 12: # idxl lines + if len(xdata) == 12: # idxl lines self.assert_numpy_array_equal(xdata, expected_l) else: self.assert_numpy_array_equal(xdata, expected_h) @@ -815,7 +823,7 @@ def test_from_resampling_area_line_mixed(self): 
high = DataFrame(np.random.rand(len(idxh), 3), index=idxh, columns=[0, 1, 2]) low = DataFrame(np.random.rand(len(idxl), 3), - index=idxl, columns=[0, 1, 2]) + index=idxl, columns=[0, 1, 2]) # low to high for kind1, kind2 in [('line', 'area'), ('area', 'line')]: @@ -823,26 +831,31 @@ def test_from_resampling_area_line_mixed(self): ax = high.plot(kind=kind2, stacked=True, ax=ax) # check low dataframe result - expected_x = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, 1544, 1549, - 1553, 1558, 1562]) + expected_x = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, + 1544, 1549, 1553, 1558, 1562]) expected_y = np.zeros(len(expected_x)) for i in range(3): l = ax.lines[i] self.assertEqual(PeriodIndex(l.get_xdata()).freq, idxh.freq) - self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x) + self.assert_numpy_array_equal( + l.get_xdata(orig=False), expected_x) # check stacked values are correct expected_y += low[i].values - self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y) + self.assert_numpy_array_equal( + l.get_ydata(orig=False), expected_y) # check high dataframe result expected_x = idxh.to_period().asi8 expected_y = np.zeros(len(expected_x)) for i in range(3): l = ax.lines[3 + i] - self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq) - self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x) + self.assertEqual(PeriodIndex( + data=l.get_xdata()).freq, idxh.freq) + self.assert_numpy_array_equal( + l.get_xdata(orig=False), expected_x) expected_y += high[i].values - self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y) + self.assert_numpy_array_equal( + l.get_ydata(orig=False), expected_y) # high to low for kind1, kind2 in [('line', 'area'), ('area', 'line')]: @@ -854,21 +867,27 @@ def test_from_resampling_area_line_mixed(self): expected_y = np.zeros(len(expected_x)) for i in range(3): l = ax.lines[i] - self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq) - 
self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x) + self.assertEqual(PeriodIndex( + data=l.get_xdata()).freq, idxh.freq) + self.assert_numpy_array_equal( + l.get_xdata(orig=False), expected_x) expected_y += high[i].values - self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y) + self.assert_numpy_array_equal( + l.get_ydata(orig=False), expected_y) # check low dataframe result - expected_x = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, 1544, 1549, - 1553, 1558, 1562]) + expected_x = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, + 1544, 1549, 1553, 1558, 1562]) expected_y = np.zeros(len(expected_x)) for i in range(3): l = ax.lines[3 + i] - self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq) - self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x) + self.assertEqual(PeriodIndex( + data=l.get_xdata()).freq, idxh.freq) + self.assert_numpy_array_equal( + l.get_xdata(orig=False), expected_x) expected_y += low[i].values - self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y) + self.assert_numpy_array_equal( + l.get_ydata(orig=False), expected_y) @slow def test_mixed_freq_second_millisecond(self): @@ -956,7 +975,10 @@ def test_time_musec(self): labels = ax.get_xticklabels() for t, l in zip(ticks, labels): m, s = divmod(int(t), 60) - us = int((t - int(t)) * 1e6) + + # TODO: unused? 
+ # us = int((t - int(t)) * 1e6) + h, m = divmod(m, 60) xp = l.get_text() if len(xp) > 0: @@ -1079,8 +1101,7 @@ def test_format_date_axis(self): def test_ax_plot(self): import matplotlib.pyplot as plt - x = DatetimeIndex(start='2012-01-02', periods=10, - freq='D') + x = DatetimeIndex(start='2012-01-02', periods=10, freq='D') y = lrange(len(x)) fig = plt.figure() ax = fig.add_subplot(111) @@ -1105,9 +1126,9 @@ def test_mpl_nopandas(self): line1, line2 = ax.get_lines() tm.assert_numpy_array_equal(np.array([x.toordinal() for x in dates]), - line1.get_xydata()[:, 0]) + line1.get_xydata()[:, 0]) tm.assert_numpy_array_equal(np.array([x.toordinal() for x in dates]), - line2.get_xydata()[:, 0]) + line2.get_xydata()[:, 0]) @slow def test_irregular_ts_shared_ax_xlim(self): diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py index b48f077bd6f6d..850071e8e49e6 100644 --- a/pandas/tseries/tests/test_resample.py +++ b/pandas/tseries/tests/test_resample.py @@ -6,8 +6,8 @@ from pandas.compat import range, lrange, zip, product import numpy as np -from pandas import (Series, TimeSeries, DataFrame, Panel, Index, - isnull, notnull, Timestamp) +from pandas import (Series, DataFrame, Panel, Index, isnull, + notnull, Timestamp) from pandas.core.groupby import DataError from pandas.tseries.index import date_range @@ -73,7 +73,8 @@ def test_custom_grouper(self): result = g.agg(np.sum) assert_series_equal(result, expect) - df = DataFrame(np.random.rand(len(dti), 10), index=dti, dtype='float64') + df = DataFrame(np.random.rand(len(dti), 10), + index=dti, dtype='float64') r = df.groupby(b).agg(np.sum) self.assertEqual(len(r.columns), 10) @@ -93,8 +94,10 @@ def test_resample_basic(self): result = s.resample('5min', how='mean', closed='left', label='right') - exp_idx = date_range('1/1/2000 00:05', periods=3, freq='5min', name='index') - expected = Series([s[:5].mean(), s[5:10].mean(), s[10:].mean()], index=exp_idx) + exp_idx = date_range('1/1/2000 00:05', 
periods=3, freq='5min', + name='index') + expected = Series([s[:5].mean(), s[5:10].mean(), + s[10:].mean()], index=exp_idx) assert_series_equal(result, expected) s = self.series @@ -104,21 +107,22 @@ def test_resample_basic(self): assert_series_equal(result, expect) def test_resample_how(self): - rng = date_range('1/1/2000 00:00:00', '1/1/2000 00:13:00', - freq='min', name='index') + rng = date_range('1/1/2000 00:00:00', '1/1/2000 00:13:00', freq='min', + name='index') s = Series(np.random.randn(14), index=rng) grouplist = np.ones_like(s) grouplist[0] = 0 grouplist[1:6] = 1 grouplist[6:11] = 2 grouplist[11:] = 3 - args = ['sum', 'mean', 'std', 'sem', 'max', 'min', - 'median', 'first', 'last', 'ohlc'] + args = ['sum', 'mean', 'std', 'sem', 'max', 'min', 'median', 'first', + 'last', 'ohlc'] def _ohlc(group): if isnull(group).all(): return np.repeat(np.nan, 4) return [group[0], group.max(), group.min(), group[-1]] + inds = date_range('1/1/2000', periods=4, freq='5min', name='index') for arg in args: @@ -127,8 +131,8 @@ def _ohlc(group): else: func = arg try: - result = s.resample('5min', how=arg, - closed='right', label='right') + result = s.resample('5min', how=arg, closed='right', + label='right') expected = s.groupby(grouplist).agg(func) self.assertEqual(result.index.name, 'index') @@ -142,7 +146,7 @@ def _ohlc(group): assert_series_equal(result, expected) except BaseException as exc: - exc.args += ('how=%s' % arg,) + exc.args += ('how=%s' % arg, ) raise def test_resample_how_callables(self): @@ -171,12 +175,13 @@ def __call__(self, x): def test_resample_with_timedeltas(self): - expected = DataFrame({'A' : np.arange(1480)}) + expected = DataFrame({'A': np.arange(1480)}) expected = expected.groupby(expected.index // 30).sum() - expected.index = pd.timedelta_range('0 days',freq='30T',periods=50) + expected.index = pd.timedelta_range('0 days', freq='30T', periods=50) - df = DataFrame({'A' : np.arange(1480)},index=pd.to_timedelta(np.arange(1480),unit='T')) - result = 
df.resample('30T',how='sum') + df = DataFrame({'A': np.arange(1480)}, index=pd.to_timedelta( + np.arange(1480), unit='T')) + result = df.resample('30T', how='sum') assert_frame_equal(result, expected) @@ -206,33 +211,43 @@ def test_resample_rounding(self): 11-08-2014,00:00:21.191,1""" from pandas.compat import StringIO - df = pd.read_csv(StringIO(data), parse_dates={'timestamp': ['date', 'time']}, index_col='timestamp') + df = pd.read_csv(StringIO(data), parse_dates={'timestamp': [ + 'date', 'time']}, index_col='timestamp') df.index.name = None result = df.resample('6s', how='sum') - expected = DataFrame({'value' : [4,9,4,2]},index=date_range('2014-11-08',freq='6s',periods=4)) - assert_frame_equal(result,expected) + expected = DataFrame({'value': [ + 4, 9, 4, 2 + ]}, index=date_range('2014-11-08', freq='6s', periods=4)) + assert_frame_equal(result, expected) result = df.resample('7s', how='sum') - expected = DataFrame({'value' : [4,10,4,1]},index=date_range('2014-11-08',freq='7s',periods=4)) - assert_frame_equal(result,expected) + expected = DataFrame({'value': [ + 4, 10, 4, 1 + ]}, index=date_range('2014-11-08', freq='7s', periods=4)) + assert_frame_equal(result, expected) result = df.resample('11s', how='sum') - expected = DataFrame({'value' : [11,8]},index=date_range('2014-11-08',freq='11s',periods=2)) - assert_frame_equal(result,expected) + expected = DataFrame({'value': [ + 11, 8 + ]}, index=date_range('2014-11-08', freq='11s', periods=2)) + assert_frame_equal(result, expected) result = df.resample('13s', how='sum') - expected = DataFrame({'value' : [13,6]},index=date_range('2014-11-08',freq='13s',periods=2)) - assert_frame_equal(result,expected) + expected = DataFrame({'value': [ + 13, 6 + ]}, index=date_range('2014-11-08', freq='13s', periods=2)) + assert_frame_equal(result, expected) result = df.resample('17s', how='sum') - expected = DataFrame({'value' : [16,3]},index=date_range('2014-11-08',freq='17s',periods=2)) - assert_frame_equal(result,expected) + 
expected = DataFrame({'value': [ + 16, 3 + ]}, index=date_range('2014-11-08', freq='17s', periods=2)) + assert_frame_equal(result, expected) def test_resample_basic_from_daily(self): # from daily - dti = DatetimeIndex( - start=datetime(2005, 1, 1), end=datetime(2005, 1, 10), - freq='D', name='index') + dti = DatetimeIndex(start=datetime(2005, 1, 1), + end=datetime(2005, 1, 10), freq='D', name='index') s = Series(np.random.rand(len(dti)), dti) @@ -278,7 +293,9 @@ def test_resample_basic_from_daily(self): # to biz day result = s.resample('B', how='last') self.assertEqual(len(result), 7) - self.assertTrue((result.index.dayofweek == [4, 0, 1, 2, 3, 4, 0]).all()) + self.assertTrue((result.index.dayofweek == [ + 4, 0, 1, 2, 3, 4, 0 + ]).all()) self.assertEqual(result.iloc[0], s['1/2/2005']) self.assertEqual(result.iloc[1], s['1/3/2005']) self.assertEqual(result.iloc[5], s['1/9/2005']) @@ -287,28 +304,31 @@ def test_resample_basic_from_daily(self): def test_resample_upsampling_picked_but_not_correct(self): # Test for issue #3020 - dates = date_range('01-Jan-2014','05-Jan-2014', freq='D') + dates = date_range('01-Jan-2014', '05-Jan-2014', freq='D') series = Series(1, index=dates) result = series.resample('D') self.assertEqual(result.index[0], dates[0]) # GH 5955 - # incorrect deciding to upsample when the axis frequency matches the resample frequency + # incorrect deciding to upsample when the axis frequency matches the + # resample frequency import datetime - s = Series(np.arange(1.,6),index=[datetime.datetime(1975, 1, i, 12, 0) for i in range(1, 6)]) - expected = Series(np.arange(1.,6),index=date_range('19750101',periods=5,freq='D')) + s = Series(np.arange(1., 6), index=[datetime.datetime( + 1975, 1, i, 12, 0) for i in range(1, 6)]) + expected = Series(np.arange(1., 6), index=date_range( + '19750101', periods=5, freq='D')) - result = s.resample('D',how='count') - assert_series_equal(result,Series(1,index=expected.index)) + result = s.resample('D', how='count') + 
assert_series_equal(result, Series(1, index=expected.index)) - result1 = s.resample('D',how='sum') - result2 = s.resample('D',how='mean') + result1 = s.resample('D', how='sum') + result2 = s.resample('D', how='mean') result3 = s.resample('D') - assert_series_equal(result1,expected) - assert_series_equal(result2,expected) - assert_series_equal(result3,expected) + assert_series_equal(result1, expected) + assert_series_equal(result2, expected) + assert_series_equal(result3, expected) def test_resample_frame_basic(self): df = tm.makeTimeDataFrame() @@ -341,22 +361,19 @@ def test_resample_loffset(self): index=idx + timedelta(minutes=1)) assert_series_equal(result, expected) - expected = s.resample( - '5min', how='mean', closed='right', label='right', - loffset='1min') + expected = s.resample('5min', how='mean', closed='right', + label='right', loffset='1min') assert_series_equal(result, expected) - expected = s.resample( - '5min', how='mean', closed='right', label='right', - loffset=Minute(1)) + expected = s.resample('5min', how='mean', closed='right', + label='right', loffset=Minute(1)) assert_series_equal(result, expected) self.assertEqual(result.index.freq, Minute(5)) - # from daily - dti = DatetimeIndex( - start=datetime(2005, 1, 1), end=datetime(2005, 1, 10), - freq='D') + # from daily + dti = DatetimeIndex(start=datetime(2005, 1, 1), + end=datetime(2005, 1, 10), freq='D') ser = Series(np.random.rand(len(dti)), dti) # to weekly @@ -366,9 +383,8 @@ def test_resample_loffset(self): def test_resample_upsample(self): # from daily - dti = DatetimeIndex( - start=datetime(2005, 1, 1), end=datetime(2005, 1, 10), - freq='D', name='index') + dti = DatetimeIndex(start=datetime(2005, 1, 1), + end=datetime(2005, 1, 10), freq='D', name='index') s = Series(np.random.rand(len(dti)), dti) @@ -383,10 +399,11 @@ def test_resample_upsample(self): def test_resample_extra_index_point(self): # GH 9756 index = DatetimeIndex(start='20150101', end='20150331', freq='BM') - expected = 
DataFrame({'A' : Series([21,41,63], index=index)}) + expected = DataFrame({'A': Series([21, 41, 63], index=index)}) index = DatetimeIndex(start='20150101', end='20150331', freq='B') - df = DataFrame({'A' : Series(range(len(index)),index=index)},dtype='int64') + df = DataFrame( + {'A': Series(range(len(index)), index=index)}, dtype='int64') result = df.resample('BM', how='last') assert_frame_equal(result, expected) @@ -421,25 +438,30 @@ def test_resample_ohlc(self): self.assertEqual(xs['close'], s[4]) def test_resample_ohlc_dataframe(self): - df = (pd.DataFrame({'PRICE': {Timestamp('2011-01-06 10:59:05', tz=None): 24990, - Timestamp('2011-01-06 12:43:33', tz=None): 25499, - Timestamp('2011-01-06 12:54:09', tz=None): 25499}, - 'VOLUME': {Timestamp('2011-01-06 10:59:05', tz=None): 1500000000, - Timestamp('2011-01-06 12:43:33', tz=None): 5000000000, - Timestamp('2011-01-06 12:54:09', tz=None): 100000000}}) - ).reindex_axis(['VOLUME', 'PRICE'], axis=1) + df = ( + pd.DataFrame({ + 'PRICE': { + Timestamp('2011-01-06 10:59:05', tz=None): 24990, + Timestamp('2011-01-06 12:43:33', tz=None): 25499, + Timestamp('2011-01-06 12:54:09', tz=None): 25499}, + 'VOLUME': { + Timestamp('2011-01-06 10:59:05', tz=None): 1500000000, + Timestamp('2011-01-06 12:43:33', tz=None): 5000000000, + Timestamp('2011-01-06 12:54:09', tz=None): 100000000}}) + ).reindex_axis(['VOLUME', 'PRICE'], axis=1) res = df.resample('H', how='ohlc') exp = pd.concat([df['VOLUME'].resample('H', how='ohlc'), - df['PRICE'].resample('H', how='ohlc')], + df['PRICE'].resample('H', how='ohlc')], axis=1, keys=['VOLUME', 'PRICE']) assert_frame_equal(exp, res) df.columns = [['a', 'b'], ['c', 'd']] res = df.resample('H', how='ohlc') - exp.columns = pd.MultiIndex.from_tuples([('a', 'c', 'open'), ('a', 'c', 'high'), - ('a', 'c', 'low'), ('a', 'c', 'close'), ('b', 'd', 'open'), - ('b', 'd', 'high'), ('b', 'd', 'low'), ('b', 'd', 'close')]) + exp.columns = pd.MultiIndex.from_tuples([('a', 'c', 'open'), ( + 'a', 'c', 'high'), 
('a', 'c', 'low'), ('a', 'c', 'close'), ( + 'b', 'd', 'open'), ('b', 'd', 'high'), ('b', 'd', 'low'), ( + 'b', 'd', 'close')]) assert_frame_equal(exp, res) # dupe columns fail atm @@ -449,17 +471,19 @@ def test_resample_dup_index(self): # GH 4812 # dup columns with resample raising - df = DataFrame(np.random.randn(4,12),index=[2000,2000,2000,2000],columns=[ Period(year=2000,month=i+1,freq='M') for i in range(12) ]) - df.iloc[3,:] = np.nan - result = df.resample('Q',axis=1) - expected = df.groupby(lambda x: int((x.month-1)/3),axis=1).mean() - expected.columns = [ Period(year=2000,quarter=i+1,freq='Q') for i in range(4) ] + df = DataFrame(np.random.randn(4, 12), index=[2000, 2000, 2000, 2000], + columns=[Period(year=2000, month=i + 1, freq='M') + for i in range(12)]) + df.iloc[3, :] = np.nan + result = df.resample('Q', axis=1) + expected = df.groupby(lambda x: int((x.month - 1) / 3), axis=1).mean() + expected.columns = [ + Period(year=2000, quarter=i + 1, freq='Q') for i in range(4)] assert_frame_equal(result, expected) def test_resample_reresample(self): - dti = DatetimeIndex( - start=datetime(2005, 1, 1), end=datetime(2005, 1, 10), - freq='D') + dti = DatetimeIndex(start=datetime(2005, 1, 1), + end=datetime(2005, 1, 10), freq='D') s = Series(np.random.rand(len(dti)), dti) bs = s.resample('B', closed='right', label='right') result = bs.resample('8H') @@ -496,8 +520,7 @@ def _ohlc(group): return np.repeat(np.nan, 4) return [group[0], group.max(), group.min(), group[-1]] - rng = date_range('1/1/2000 00:00:00', '1/1/2000 5:59:50', - freq='10s') + rng = date_range('1/1/2000 00:00:00', '1/1/2000 5:59:50', freq='10s') ts = Series(np.random.randn(len(rng)), index=rng) resampled = ts.resample('5min', how='ohlc', closed='right', @@ -586,8 +609,8 @@ def test_resample_panel_numpy(self): def test_resample_anchored_ticks(self): # If a fixed delta (5 minute, 4 hour) evenly divides a day, we should # "anchor" the origin at midnight so we get regular intervals rather - # than 
starting from the first timestamp which might start in the middle - # of a desired interval + # than starting from the first timestamp which might start in the + # middle of a desired interval rng = date_range('1/1/2000 04:00:00', periods=86400, freq='s') ts = Series(np.random.randn(len(rng)), index=rng) @@ -631,14 +654,14 @@ def test_resample_base(self): def test_resample_base_with_timedeltaindex(self): # GH 10530 - rng = timedelta_range(start = '0s', periods = 25, freq = 's') - ts = Series(np.random.randn(len(rng)), index = rng) + rng = timedelta_range(start='0s', periods=25, freq='s') + ts = Series(np.random.randn(len(rng)), index=rng) - with_base = ts.resample('2s', base = 5) + with_base = ts.resample('2s', base=5) without_base = ts.resample('2s') - exp_without_base = timedelta_range(start = '0s', end = '25s', freq = '2s') - exp_with_base = timedelta_range(start = '5s', end = '29s', freq = '2s') + exp_without_base = timedelta_range(start='0s', end='25s', freq='2s') + exp_with_base = timedelta_range(start='5s', end='29s', freq='2s') self.assertTrue(without_base.index.equals(exp_without_base)) self.assertTrue(with_base.index.equals(exp_with_base)) @@ -705,7 +728,7 @@ def test_monthly_resample_error(self): dates = date_range('4/16/2012 20:00', periods=5000, freq='h') ts = Series(np.random.randn(len(dates)), index=dates) # it works! 
- result = ts.resample('M') + ts.resample('M') def test_resample_anchored_intraday(self): # #1471, #1458 @@ -746,7 +769,7 @@ def test_resample_anchored_monthstart(self): freqs = ['MS', 'BMS', 'QS-MAR', 'AS-DEC', 'AS-JUN'] for freq in freqs: - result = ts.resample(freq, how='mean') + result = ts.resample(freq, how='mean') # noqa def test_resample_anchored_multiday(self): # When resampling a range spanning multiple days, ensure that the @@ -819,10 +842,10 @@ def test_resample_not_monotonic(self): def test_resample_median_bug_1688(self): - for dtype in ['int64','int32','float64','float32']: + for dtype in ['int64', 'int32', 'float64', 'float32']: df = DataFrame([1, 2], index=[datetime(2012, 1, 1, 0, 0, 0), datetime(2012, 1, 1, 0, 5, 0)], - dtype = dtype) + dtype=dtype) result = df.resample("T", how=lambda x: x.mean()) exp = df.asfreq('T') @@ -869,8 +892,8 @@ def test_resample_consistency(self): # GH 6418 # resample with bfill / limit / reindex consistency - i30 = index=pd.date_range('2002-02-02', periods=4, freq='30T') - s=pd.Series(np.arange(4.), index=i30) + i30 = pd.date_range('2002-02-02', periods=4, freq='30T') + s = pd.Series(np.arange(4.), index=i30) s[2] = np.NaN # Upsample by factor 3 with reindex() and resample() methods: @@ -899,15 +922,18 @@ def test_resample_timegrouper(self): for dates in [dates1, dates2, dates3]: df = DataFrame(dict(A=dates, B=np.arange(len(dates)))) result = df.set_index('A').resample('M', how='count') - exp_idx = pd.DatetimeIndex(['2014-07-31', '2014-08-31', '2014-09-30', - '2014-10-31', '2014-11-30'], freq='M', name='A') + exp_idx = pd.DatetimeIndex(['2014-07-31', '2014-08-31', + '2014-09-30', + '2014-10-31', '2014-11-30'], + freq='M', name='A') expected = DataFrame({'B': [1, 0, 2, 2, 1]}, index=exp_idx) assert_frame_equal(result, expected) result = df.groupby(pd.Grouper(freq='M', key='A')).count() assert_frame_equal(result, expected) - df = DataFrame(dict(A=dates, B=np.arange(len(dates)), C=np.arange(len(dates)))) + df = 
DataFrame(dict(A=dates, B=np.arange(len(dates)), C=np.arange( + len(dates)))) result = df.set_index('A').resample('M', how='count') expected = DataFrame({'B': [1, 0, 2, 2, 1], 'C': [1, 0, 2, 2, 1]}, index=exp_idx, columns=['B', 'C']) @@ -923,8 +949,7 @@ def test_resample_group_info(self): # GH10914 index=np.random.choice(dr, n)) left = ts.resample('30T', how='nunique') - ix = date_range(start=ts.index.min(), - end=ts.index.max(), + ix = date_range(start=ts.index.min(), end=ts.index.max(), freq='30T') vals = ts.values @@ -936,7 +961,8 @@ def test_resample_group_info(self): # GH10914 mask = np.r_[True, vals[1:] != vals[:-1]] mask |= np.r_[True, bins[1:] != bins[:-1]] - arr = np.bincount(bins[mask] - 1, minlength=len(ix)).astype('int64',copy=False) + arr = np.bincount(bins[mask] - 1, + minlength=len(ix)).astype('int64', copy=False) right = Series(arr, index=ix) assert_series_equal(left, right) @@ -950,7 +976,8 @@ def test_resample_size(self): ix = date_range(start=left.index.min(), end=ts.index.max(), freq='7T') bins = np.searchsorted(ix.values, ts.index.values, side='right') - val = np.bincount(bins, minlength=len(ix) + 1)[1:].astype('int64',copy=False) + val = np.bincount(bins, minlength=len(ix) + 1)[1:].astype('int64', + copy=False) right = Series(val, index=ix) assert_series_equal(left, right) @@ -962,55 +989,66 @@ def test_resmaple_dst_anchor(self): assert_frame_equal(df.resample(rule='D', how='sum'), DataFrame([5], index=df.index.normalize())) df.resample(rule='MS', how='sum') - assert_frame_equal(df.resample(rule='MS', how='sum'), - DataFrame([5], index=DatetimeIndex([datetime(2012, 11, 1)], - tz='US/Eastern'))) + assert_frame_equal( + df.resample(rule='MS', how='sum'), + DataFrame([5], index=DatetimeIndex([datetime(2012, 11, 1)], + tz='US/Eastern'))) - dti = date_range('2013-09-30', '2013-11-02', freq='30Min', tz='Europe/Paris') + dti = date_range('2013-09-30', '2013-11-02', freq='30Min', + tz='Europe/Paris') values = range(dti.size) - df = DataFrame({"a": 
values, "b": values, "c": values}, index=dti, dtype='int64') + df = DataFrame({"a": values, + "b": values, + "c": values}, index=dti, dtype='int64') how = {"a": "min", "b": "max", "c": "count"} - assert_frame_equal(df.resample("W-MON", how=how)[["a", "b", "c"]], - DataFrame({"a": [0, 48, 384, 720, 1056, 1394], - "b": [47, 383, 719, 1055, 1393, 1586], - "c": [48, 336, 336, 336, 338, 193]}, - index=date_range('9/30/2013', '11/4/2013', - freq='W-MON', tz='Europe/Paris')), - 'W-MON Frequency') - - assert_frame_equal(df.resample("2W-MON", how=how)[["a", "b", "c"]], - DataFrame({"a": [0, 48, 720, 1394], - "b": [47, 719, 1393, 1586], - "c": [48, 672, 674, 193]}, - index=date_range('9/30/2013', '11/11/2013', - freq='2W-MON', tz='Europe/Paris')), - '2W-MON Frequency') - - assert_frame_equal(df.resample("MS", how=how)[["a", "b", "c"]], - DataFrame({"a": [0, 48, 1538], - "b": [47, 1537, 1586], - "c": [48, 1490, 49]}, - index=date_range('9/1/2013', '11/1/2013', - freq='MS', tz='Europe/Paris')), - 'MS Frequency') - - assert_frame_equal(df.resample("2MS", how=how)[["a", "b", "c"]], - DataFrame({"a": [0, 1538], - "b": [1537, 1586], - "c": [1538, 49]}, - index=date_range('9/1/2013', '11/1/2013', - freq='2MS', tz='Europe/Paris')), - '2MS Frequency') + assert_frame_equal( + df.resample("W-MON", how=how)[["a", "b", "c"]], + DataFrame({"a": [0, 48, 384, 720, 1056, 1394], + "b": [47, 383, 719, 1055, 1393, 1586], + "c": [48, 336, 336, 336, 338, 193]}, + index=date_range('9/30/2013', '11/4/2013', + freq='W-MON', tz='Europe/Paris')), + 'W-MON Frequency') + + assert_frame_equal( + df.resample("2W-MON", how=how)[["a", "b", "c"]], + DataFrame({"a": [0, 48, 720, 1394], + "b": [47, 719, 1393, 1586], + "c": [48, 672, 674, 193]}, + index=date_range('9/30/2013', '11/11/2013', + freq='2W-MON', tz='Europe/Paris')), + '2W-MON Frequency') + + assert_frame_equal( + df.resample("MS", how=how)[["a", "b", "c"]], + DataFrame({"a": [0, 48, 1538], + "b": [47, 1537, 1586], + "c": [48, 1490, 49]}, + 
index=date_range('9/1/2013', '11/1/2013',
+                             freq='MS', tz='Europe/Paris')),
+            'MS Frequency')
+
+        assert_frame_equal(
+            df.resample("2MS", how=how)[["a", "b", "c"]],
+            DataFrame({"a": [0, 1538],
+                       "b": [1537, 1586],
+                       "c": [1538, 49]},
+                      index=date_range('9/1/2013', '11/1/2013',
+                                       freq='2MS', tz='Europe/Paris')),
+            '2MS Frequency')
 
         df_daily = df['10/26/2013':'10/29/2013']
-        assert_frame_equal(df_daily.resample("D", how={"a": "min", "b": "max", "c": "count"})[["a", "b", "c"]],
-                           DataFrame({"a": [1248, 1296, 1346, 1394],
-                                      "b": [1295, 1345, 1393, 1441],
-                                      "c": [48, 50, 48, 48]},
-                                     index=date_range('10/26/2013', '10/29/2013',
-                                                      freq='D', tz='Europe/Paris')),
-                           'D Frequency')
+        assert_frame_equal(
+            df_daily.resample("D", how={"a": "min", "b": "max", "c": "count"})
+            [["a", "b", "c"]],
+            DataFrame({"a": [1248, 1296, 1346, 1394],
+                       "b": [1295, 1345, 1393, 1441],
+                       "c": [48, 50, 48, 48]},
+                      index=date_range('10/26/2013', '10/29/2013',
+                                       freq='D', tz='Europe/Paris')),
+            'D Frequency')
+
 
 def _simple_ts(start, end, freq='D'):
     rng = date_range(start, end, freq=freq)
@@ -1066,8 +1104,7 @@ def _check_annual_upsample_cases(self, targ, conv, meth, end='12/31/1991'):
         for month in MONTHS:
             ts = _simple_pts('1/1/1990', end, freq='A-%s' % month)
-            result = ts.resample(targ, fill_method=meth,
-                                 convention=conv)
+            result = ts.resample(targ, fill_method=meth, convention=conv)
             expected = result.to_timestamp(targ, how=conv)
             expected = expected.asfreq(targ, meth).to_period()
             assert_series_equal(result, expected)
@@ -1077,8 +1114,7 @@ def test_basic_downsample(self):
         result = ts.resample('a-dec')
 
         expected = ts.groupby(ts.index.year).mean()
-        expected.index = period_range('1/1/1990', '6/30/1995',
-                                      freq='a-dec')
+        expected.index = period_range('1/1/1990', '6/30/1995', freq='a-dec')
 
         assert_series_equal(result, expected)
 
         # this is ok
@@ -1150,15 +1186,14 @@ def test_monthly_upsample(self):
         ts = _simple_pts('1/1/1990', '12/31/1995', freq='M')
 
         for targ, conv in product(targets, ['start', 'end']):
-            result = ts.resample(targ, fill_method='ffill',
-                                 convention=conv)
+            result = ts.resample(targ, fill_method='ffill', convention=conv)
             expected = result.to_timestamp(targ, how=conv)
             expected = expected.asfreq(targ, 'ffill').to_period()
             assert_series_equal(result, expected)
 
     def test_fill_method_and_how_upsample(self):
         # GH2073
-        s = Series(np.arange(9,dtype='int64'),
+        s = Series(np.arange(9, dtype='int64'),
                    index=date_range('2010-01-01', periods=9, freq='Q'))
         last = s.resample('M', fill_method='ffill')
         both = s.resample('M', how='last', fill_method='ffill').astype('int64')
@@ -1325,12 +1360,17 @@ def test_resample_tz_localized(self):
         rng = date_range('1/1/2011', periods=20000, freq='H')
         rng = rng.tz_localize('EST')
         ts = DataFrame(index=rng)
-        ts['first']=np.random.randn(len(rng))
-        ts['second']=np.cumsum(np.random.randn(len(rng)))
-        expected = DataFrame({ 'first' : ts.resample('A',how=np.sum)['first'],
-                               'second' : ts.resample('A',how=np.mean)['second'] },columns=['first','second'])
-        result = ts.resample('A', how={'first':np.sum, 'second':np.mean}).reindex(columns=['first','second'])
-        assert_frame_equal(result,expected)
+        ts['first'] = np.random.randn(len(rng))
+        ts['second'] = np.cumsum(np.random.randn(len(rng)))
+        expected = DataFrame(
+            {
+                'first': ts.resample('A', how=np.sum)['first'],
+                'second': ts.resample('A', how=np.mean)['second']},
+            columns=['first', 'second'])
+        result = ts.resample(
+            'A', how={'first': np.sum,
+                      'second': np.mean}).reindex(columns=['first', 'second'])
+        assert_frame_equal(result, expected)
 
     def test_closed_left_corner(self):
         # #1465
@@ -1372,7 +1412,7 @@ def test_resample_weekly_bug_1726(self):
 
     def test_resample_bms_2752(self):
         # GH2753
-        foo = pd.Series(index=pd.bdate_range('20000101','20000201'))
+        foo = pd.Series(index=pd.bdate_range('20000101', '20000201'))
         res1 = foo.resample("BMS")
         res2 = foo.resample("BMS").resample("B")
         self.assertEqual(res1.index[0], Timestamp('20000103'))
@@ -1396,8 +1436,7 @@ def test_default_right_closed_label(self):
         end_types = ['M', 'A', 'Q', 'W']
         for from_freq, to_freq in zip(end_freq, end_types):
-            idx = DatetimeIndex(start='8/15/2012', periods=100,
-                                freq=from_freq)
+            idx = DatetimeIndex(start='8/15/2012', periods=100, freq=from_freq)
             df = DataFrame(np.random.randn(len(idx), 2), idx)
 
             resampled = df.resample(to_freq)
@@ -1409,8 +1448,7 @@ def test_default_left_closed_label(self):
         others_freq = ['D', 'Q', 'M', 'H', 'T']
         for from_freq, to_freq in zip(others_freq, others):
-            idx = DatetimeIndex(start='8/15/2012', periods=100,
-                                freq=from_freq)
+            idx = DatetimeIndex(start='8/15/2012', periods=100, freq=from_freq)
             df = DataFrame(np.random.randn(len(idx), 2), idx)
 
             resampled = df.resample(to_freq)
@@ -1429,11 +1467,13 @@ def test_evenly_divisible_with_no_extra_bins(self):
         # 4076
         # when the frequency is evenly divisible, sometimes extra bins
 
-        df = DataFrame(np.random.randn(9, 3), index=date_range('2000-1-1', periods=9))
+        df = DataFrame(np.random.randn(9, 3),
+                       index=date_range('2000-1-1', periods=9))
         result = df.resample('5D')
-        expected = pd.concat([df.iloc[0:5].mean(),df.iloc[5:].mean()],axis=1).T
-        expected.index = [Timestamp('2000-1-1'),Timestamp('2000-1-6')]
-        assert_frame_equal(result,expected)
+        expected = pd.concat(
+            [df.iloc[0:5].mean(), df.iloc[5:].mean()], axis=1).T
+        expected.index = [Timestamp('2000-1-1'), Timestamp('2000-1-6')]
+        assert_frame_equal(result, expected)
 
         index = date_range(start='2001-5-4', periods=28)
         df = DataFrame(
@@ -1443,23 +1483,23 @@ def test_evenly_divisible_with_no_extra_bins(self):
              'COOP_DLY_TRN_QT': 50,
             'COOP_DLY_SLS_AMT': 20}] * 28,
            index=index.append(index)).sort_index()
-        index = date_range('2001-5-4',periods=4,freq='7D')
+        index = date_range('2001-5-4', periods=4, freq='7D')
         expected = DataFrame(
            [{'REST_KEY': 14,
             'DLY_TRN_QT': 14,
             'DLY_SLS_AMT': 14,
             'COOP_DLY_TRN_QT': 14,
             'COOP_DLY_SLS_AMT': 14}] * 4,
            index=index)
         result = df.resample('7D', how='count')
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)
 
         expected = DataFrame(
            [{'REST_KEY': 21,
             'DLY_TRN_QT': 1050,
             'DLY_SLS_AMT': 700,
             'COOP_DLY_TRN_QT': 560,
             'COOP_DLY_SLS_AMT': 280}] * 4,
            index=index)
         result = df.resample('7D', how='sum')
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)
 
 
-class TestTimeGrouper(tm.TestCase):
+class TestTimeGrouper(tm.TestCase):
     def setUp(self):
         self.ts = Series(np.random.randn(1000),
                          index=date_range('1/1/2000', periods=1000))
@@ -1481,7 +1521,9 @@ def test_apply(self):
 
     def test_count(self):
         self.ts[::3] = np.nan
 
-        grouper = TimeGrouper('A', label='right', closed='right')
+        # TODO: unused?
+        grouper = TimeGrouper('A', label='right', closed='right')  # noqa
+
         result = self.ts.resample('A', how='count')
 
         expected = self.ts.groupby(lambda x: x.year).count()
@@ -1526,8 +1568,9 @@ def test_panel_aggregation(self):
         binagg = bingrouped.mean()
 
         def f(x):
-            assert(isinstance(x, Panel))
+            assert (isinstance(x, Panel))
             return x.mean(1)
+
         result = bingrouped.agg(f)
         tm.assert_panel_equal(result, binagg)
@@ -1555,8 +1598,9 @@ def test_aggregate_normal(self):
         normal_df['key'] = [1, 2, 3, 4, 5] * 4
 
         dt_df = DataFrame(data, columns=['A', 'B', 'C', 'D'])
-        dt_df['key'] = [datetime(2013, 1, 1), datetime(2013, 1, 2), datetime(2013, 1, 3),
-                        datetime(2013, 1, 4), datetime(2013, 1, 5)] * 4
+        dt_df['key'] = [datetime(2013, 1, 1), datetime(2013, 1, 2),
+                        datetime(2013, 1, 3), datetime(2013, 1, 4),
+                        datetime(2013, 1, 5)] * 4
 
         normal_grouped = normal_df.groupby('key')
         dt_grouped = dt_df.groupby(TimeGrouper(key='key', freq='D'))
@@ -1564,36 +1608,41 @@
         for func in ['min', 'max', 'prod', 'var', 'std', 'mean']:
             expected = getattr(normal_grouped, func)()
             dt_result = getattr(dt_grouped, func)()
-            expected.index = date_range(start='2013-01-01', freq='D', periods=5, name='key')
+            expected.index = date_range(start='2013-01-01', freq='D',
+                                        periods=5, name='key')
             assert_frame_equal(expected, dt_result)
 
         for func in ['count', 'sum']:
             expected = getattr(normal_grouped, func)()
-            expected.index = date_range(start='2013-01-01', freq='D', periods=5, name='key')
+            expected.index = date_range(start='2013-01-01', freq='D',
+                                        periods=5, name='key')
             dt_result = getattr(dt_grouped, func)()
             assert_frame_equal(expected, dt_result)
 
         # GH 7453
         for func in ['size']:
             expected = getattr(normal_grouped, func)()
-            expected.index = date_range(start='2013-01-01', freq='D', periods=5, name='key')
+            expected.index = date_range(start='2013-01-01', freq='D',
+                                        periods=5, name='key')
             dt_result = getattr(dt_grouped, func)()
             assert_series_equal(expected, dt_result)
-
         """
         for func in ['first', 'last']:
             expected = getattr(normal_grouped, func)()
-            expected.index = date_range(start='2013-01-01', freq='D', periods=5, name='key')
+            expected.index = date_range(start='2013-01-01', freq='D',
+                                        periods=5, name='key')
             dt_result = getattr(dt_grouped, func)()
             assert_frame_equal(expected, dt_result)
 
        for func in ['nth']:
            expected = getattr(normal_grouped, func)(3)
-            expected.index = date_range(start='2013-01-01', freq='D', periods=5, name='key')
+            expected.index = date_range(start='2013-01-01',
+                                        freq='D', periods=5, name='key')
            dt_result = getattr(dt_grouped, func)(3)
            assert_frame_equal(expected, dt_result)
         """
-        # if TimeGrouper is used included, 'first','last' and 'nth' doesn't work yet
+        # if TimeGrouper is used, 'first', 'last' and 'nth' don't
+        # work yet
 
     def test_aggregate_with_nat(self):
         # check TimeGrouper's aggregation is identical as normal groupby
@@ -1613,19 +1662,22 @@ def test_aggregate_with_nat(self):
         for func in ['min', 'max', 'sum', 'prod']:
             normal_result = getattr(normal_grouped, func)()
             dt_result = getattr(dt_grouped, func)()
-            pad = DataFrame([[np.nan, np.nan, np.nan, np.nan]],
-                            index=[3], columns=['A', 'B', 'C', 'D'])
+            pad = DataFrame([[np.nan, np.nan, np.nan, np.nan]], index=[3],
+                            columns=['A', 'B', 'C', 'D'])
             expected = normal_result.append(pad)
             expected = expected.sort_index()
-            expected.index = date_range(start='2013-01-01', freq='D', periods=5, name='key')
+            expected.index = date_range(start='2013-01-01', freq='D',
+                                        periods=5, name='key')
             assert_frame_equal(expected, dt_result)
 
         for func in ['count']:
             normal_result = getattr(normal_grouped, func)()
-            pad = DataFrame([[0, 0, 0, 0]], index=[3], columns=['A', 'B', 'C', 'D'])
+            pad = DataFrame([[0, 0, 0, 0]], index=[3],
+                            columns=['A', 'B', 'C', 'D'])
             expected = normal_result.append(pad)
             expected = expected.sort_index()
-            expected.index = date_range(start='2013-01-01', freq='D', periods=5, name='key')
+            expected.index = date_range(start='2013-01-01', freq='D',
+                                        periods=5, name='key')
             dt_result = getattr(dt_grouped, func)()
             assert_frame_equal(expected, dt_result)
@@ -1634,13 +1686,15 @@ def test_aggregate_with_nat(self):
             pad = Series([0], index=[3])
             expected = normal_result.append(pad)
             expected = expected.sort_index()
-            expected.index = date_range(start='2013-01-01', freq='D', periods=5, name='key')
+            expected.index = date_range(start='2013-01-01', freq='D',
+                                        periods=5, name='key')
             dt_result = getattr(dt_grouped, func)()
             assert_series_equal(expected, dt_result)
 
         # GH 9925
         self.assertEqual(dt_result.index.name, 'key')
 
-        # if NaT is included, 'var', 'std', 'mean', 'first','last' and 'nth' doesn't work yet
+        # if NaT is included, 'var', 'std', 'mean', 'first','last' and 'nth'
+        # don't work yet
 
 
 if __name__ == '__main__':
diff --git a/pandas/tseries/tests/test_timedeltas.py b/pandas/tseries/tests/test_timedeltas.py
index 176b5e90bee0f..d5a057d25e752 100644
--- a/pandas/tseries/tests/test_timedeltas.py
+++ b/pandas/tseries/tests/test_timedeltas.py
@@ -1,34 +1,30 @@
 # pylint: disable-msg=E1101,W0612
 from __future__ import division
-from datetime import datetime, timedelta, time
+from datetime import timedelta, time
 import nose
 
 from distutils.version import LooseVersion
 import numpy as np
 import pandas as pd
 
-from pandas import (Index, Series, DataFrame, Timestamp, Timedelta, TimedeltaIndex, isnull, notnull,
-                    bdate_range, date_range, timedelta_range, Int64Index)
-import pandas.core.common as com
-from pandas.compat import StringIO, lrange, range, zip, u, OrderedDict, long
+from pandas import (Index, Series, DataFrame, Timestamp, Timedelta,
+                    TimedeltaIndex, isnull, date_range,
+                    timedelta_range, Int64Index)
+from pandas.compat import range
 from pandas import compat, to_timedelta, tslib
 from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type as ct
-from pandas.util.testing import (assert_series_equal,
-                                 assert_frame_equal,
-                                 assert_almost_equal,
-                                 assert_index_equal,
-                                 ensure_clean)
+from pandas.util.testing import (assert_series_equal, assert_frame_equal,
+                                 assert_almost_equal, assert_index_equal)
 from numpy.testing import assert_allclose
-from pandas.tseries.offsets import Day, Second, Hour
+from pandas.tseries.offsets import Day, Second
 import pandas.util.testing as tm
-from numpy.random import rand, randn
+from numpy.random import randn
 from pandas import _np_version_under1p8
-import pandas.compat as compat
-
 iNaT = tslib.iNaT
 
+
 class TestTimedeltas(tm.TestCase):
     _multiprocess_can_split_ = True
@@ -37,35 +33,46 @@ def setUp(self):
 
     def test_construction(self):
 
-        expected = np.timedelta64(10,'D').astype('m8[ns]').view('i8')
-        self.assertEqual(Timedelta(10,unit='d').value, expected)
-        self.assertEqual(Timedelta(10.0,unit='d').value, expected)
+        expected = np.timedelta64(10, 'D').astype('m8[ns]').view('i8')
+        self.assertEqual(Timedelta(10, unit='d').value, expected)
+        self.assertEqual(Timedelta(10.0, unit='d').value, expected)
         self.assertEqual(Timedelta('10 days').value, expected)
         self.assertEqual(Timedelta(days=10).value, expected)
         self.assertEqual(Timedelta(days=10.0).value, expected)
 
-        expected += np.timedelta64(10,'s').astype('m8[ns]').view('i8')
+        expected += np.timedelta64(10, 's').astype('m8[ns]').view('i8')
         self.assertEqual(Timedelta('10 days 00:00:10').value, expected)
-        self.assertEqual(Timedelta(days=10,seconds=10).value, expected)
-        self.assertEqual(Timedelta(days=10,milliseconds=10*1000).value, expected)
-        self.assertEqual(Timedelta(days=10,microseconds=10*1000*1000).value, expected)
+        self.assertEqual(Timedelta(days=10, seconds=10).value, expected)
+        self.assertEqual(
+            Timedelta(days=10, milliseconds=10 * 1000).value, expected)
+        self.assertEqual(
+            Timedelta(days=10, microseconds=10 * 1000 * 1000).value, expected)
 
         # test construction with np dtypes
         # GH 8757
-        timedelta_kwargs = {'days':'D', 'seconds':'s', 'microseconds':'us',
-                            'milliseconds':'ms', 'minutes':'m', 'hours':'h', 'weeks':'W'}
-        npdtypes = [np.int64, np.int32, np.int16,
-                    np.float64, np.float32, np.float16]
+        timedelta_kwargs = {'days': 'D',
+                            'seconds': 's',
+                            'microseconds': 'us',
+                            'milliseconds': 'ms',
+                            'minutes': 'm',
+                            'hours': 'h',
+                            'weeks': 'W'}
+        npdtypes = [np.int64, np.int32, np.int16, np.float64, np.float32,
+                    np.float16]
         for npdtype in npdtypes:
             for pykwarg, npkwarg in timedelta_kwargs.items():
-                expected = np.timedelta64(1, npkwarg).astype('m8[ns]').view('i8')
-                self.assertEqual(Timedelta(**{pykwarg:npdtype(1)}).value, expected)
+                expected = np.timedelta64(1,
+                                          npkwarg).astype('m8[ns]').view('i8')
+                self.assertEqual(
+                    Timedelta(**{pykwarg: npdtype(1)}).value, expected)
 
         # rounding cases
         self.assertEqual(Timedelta(82739999850000).value, 82739999850000)
-        self.assertTrue('0 days 22:58:59.999850' in str(Timedelta(82739999850000)))
+        self.assertTrue('0 days 22:58:59.999850' in str(Timedelta(
+            82739999850000)))
         self.assertEqual(Timedelta(123072001000000).value, 123072001000000)
-        self.assertTrue('1 days 10:11:12.001' in str(Timedelta(123072001000000)))
+        self.assertTrue('1 days 10:11:12.001' in str(Timedelta(
+            123072001000000)))
 
         # string conversion with/without leading zero
         # GH 9570
@@ -94,69 +101,75 @@ def test_construction(self):
         self.assertEqual(Timedelta('1 us'), timedelta(microseconds=1))
         self.assertEqual(Timedelta('1 micros'), timedelta(microseconds=1))
         self.assertEqual(Timedelta('1 microsecond'), timedelta(microseconds=1))
-        self.assertEqual(Timedelta('1.5 microsecond'), Timedelta('00:00:00.000001500'))
+        self.assertEqual(Timedelta('1.5 microsecond'),
+                         Timedelta('00:00:00.000001500'))
         self.assertEqual(Timedelta('1 ns'), Timedelta('00:00:00.000000001'))
-        self.assertEqual(Timedelta('1 nano'), Timedelta('00:00:00.000000001'))
-        self.assertEqual(Timedelta('1 nanosecond'), Timedelta('00:00:00.000000001'))
+        self.assertEqual(Timedelta('1 nano'), Timedelta('00:00:00.000000001'))
+        self.assertEqual(Timedelta('1 nanosecond'),
+                         Timedelta('00:00:00.000000001'))
 
         # combos
-        self.assertEqual(Timedelta('10 days 1 hour'), timedelta(days=10,hours=1))
-        self.assertEqual(Timedelta('10 days 1 h'), timedelta(days=10,hours=1))
-        self.assertEqual(Timedelta('10 days 1 h 1m 1s'), timedelta(days=10,hours=1,minutes=1,seconds=1))
-        self.assertEqual(Timedelta('-10 days 1 h 1m 1s'), -timedelta(days=10,hours=1,minutes=1,seconds=1))
-        self.assertEqual(Timedelta('-10 days 1 h 1m 1s'), -timedelta(days=10,hours=1,minutes=1,seconds=1))
-        self.assertEqual(Timedelta('-10 days 1 h 1m 1s 3us'), -timedelta(days=10,hours=1,minutes=1,seconds=1,microseconds=3))
-        self.assertEqual(Timedelta('-10 days 1 h 1.5m 1s 3us'), -timedelta(days=10,hours=1,minutes=1,seconds=31,microseconds=3))
-
-        # currently invalid as it has a - on the hhmmdd part (only allowed on the days)
-        self.assertRaises(ValueError, lambda : Timedelta('-10 days -1 h 1.5m 1s 3us'))
+        self.assertEqual(Timedelta('10 days 1 hour'),
+                         timedelta(days=10, hours=1))
+        self.assertEqual(Timedelta('10 days 1 h'), timedelta(days=10, hours=1))
+        self.assertEqual(Timedelta('10 days 1 h 1m 1s'), timedelta(
+            days=10, hours=1, minutes=1, seconds=1))
+        self.assertEqual(Timedelta('-10 days 1 h 1m 1s'), -
+                         timedelta(days=10, hours=1, minutes=1, seconds=1))
+        self.assertEqual(Timedelta('-10 days 1 h 1m 1s'), -
+                         timedelta(days=10, hours=1, minutes=1, seconds=1))
+        self.assertEqual(Timedelta('-10 days 1 h 1m 1s 3us'), -
+                         timedelta(days=10, hours=1, minutes=1,
+                                   seconds=1, microseconds=3))
+        self.assertEqual(Timedelta('-10 days 1 h 1.5m 1s 3us'), -
+                         timedelta(days=10, hours=1, minutes=1,
+                                   seconds=31, microseconds=3))
+
+        # currently invalid as it has a - on the hhmmdd part (only allowed on
+        # the days)
+        self.assertRaises(ValueError,
+                          lambda: Timedelta('-10 days -1 h 1.5m 1s 3us'))
 
         # only leading neg signs are allowed
-        self.assertRaises(ValueError, lambda : Timedelta('10 days -1 h 1.5m 1s 3us'))
+        self.assertRaises(ValueError,
+                          lambda: Timedelta('10 days -1 h 1.5m 1s 3us'))
 
         # no units specified
-        self.assertRaises(ValueError, lambda : Timedelta('3.1415'))
+        self.assertRaises(ValueError, lambda: Timedelta('3.1415'))
 
         # invalid construction
+        tm.assertRaisesRegexp(ValueError, "cannot construct a TimeDelta",
+                              lambda: Timedelta())
+        tm.assertRaisesRegexp(ValueError, "unit abbreviation w/o a number",
+                              lambda: Timedelta('foo'))
         tm.assertRaisesRegexp(ValueError,
-                              "cannot construct a TimeDelta",
-                              lambda : Timedelta())
-        tm.assertRaisesRegexp(ValueError,
-                              "unit abbreviation w/o a number",
-                              lambda : Timedelta('foo'))
-        tm.assertRaisesRegexp(ValueError,
-                              "cannot construct a TimeDelta from the passed arguments, allowed keywords are ",
-                              lambda : Timedelta(day=10))
+                              "cannot construct a TimeDelta from the passed "
+                              "arguments, allowed keywords are ",
+                              lambda: Timedelta(day=10))
 
         # roundtripping both for string and value
-        for v in ['1s',
-                  '-1s',
-                  '1us',
-                  '-1us',
-                  '1 day',
-                  '-1 day',
-                  '-23:59:59.999999',
-                  '-1 days +23:59:59.999999',
-                  '-1ns',
-                  '1ns',
-                  '-23:59:59.999999999']:
+        for v in ['1s', '-1s', '1us', '-1us', '1 day', '-1 day',
+                  '-23:59:59.999999', '-1 days +23:59:59.999999', '-1ns',
+                  '1ns', '-23:59:59.999999999']:
 
             td = Timedelta(v)
-            self.assertEqual(Timedelta(td.value),td)
+            self.assertEqual(Timedelta(td.value), td)
 
             # str does not normally display nanos
             if not td.nanoseconds:
-                self.assertEqual(Timedelta(str(td)),td)
-            self.assertEqual(Timedelta(td._repr_base(format='all')),td)
+                self.assertEqual(Timedelta(str(td)), td)
+            self.assertEqual(Timedelta(td._repr_base(format='all')), td)
 
         # floats
-        expected = np.timedelta64(10,'s').astype('m8[ns]').view('i8') + np.timedelta64(500,'ms').astype('m8[ns]').view('i8')
-        self.assertEqual(Timedelta(10.5,unit='s').value, expected)
+        expected = np.timedelta64(
+            10, 's').astype('m8[ns]').view('i8') + np.timedelta64(
+            500, 'ms').astype('m8[ns]').view('i8')
+        self.assertEqual(Timedelta(10.5, unit='s').value, expected)
 
         # nat
-        self.assertEqual(Timedelta('').value,iNaT)
-        self.assertEqual(Timedelta('nat').value,iNaT)
-        self.assertEqual(Timedelta('NAT').value,iNaT)
+        self.assertEqual(Timedelta('').value, iNaT)
+        self.assertEqual(Timedelta('nat').value, iNaT)
+        self.assertEqual(Timedelta('NAT').value, iNaT)
         self.assertTrue(isnull(Timestamp('nat')))
         self.assertTrue(isnull(Timedelta('nat')))
@@ -184,41 +197,24 @@ def test_round(self):
 
         t2 = Timedelta('-1 days 02:34:56.789123456')
         for (freq, s1, s2) in [('N', t1, t2),
-                               ('U',
-                                Timedelta('1 days 02:34:56.789123000'),
-                                Timedelta('-1 days 02:34:56.789123000')
-                                ),
-                               ('L',
-                                Timedelta('1 days 02:34:56.789000000'),
-                                Timedelta('-1 days 02:34:56.789000000')
-                                ),
-                               ('S',
-                                Timedelta('1 days 02:34:57'),
-                                Timedelta('-1 days 02:34:57')
-                                ),
-                               ('2S',
-                                Timedelta('1 days 02:34:56'),
-                                Timedelta('-1 days 02:34:56')
-                                ),
-                               ('5S',
-                                Timedelta('1 days 02:34:55'),
-                                Timedelta('-1 days 02:34:55')
-                                ),
-                               ('T',
-                                Timedelta('1 days 02:35:00'),
-                                Timedelta('-1 days 02:35:00')
-                                ),
-                               ('12T',
-                                Timedelta('1 days 02:36:00'),
+                               ('U', Timedelta('1 days 02:34:56.789123000'),
+                                Timedelta('-1 days 02:34:56.789123000')),
+                               ('L', Timedelta('1 days 02:34:56.789000000'),
+                                Timedelta('-1 days 02:34:56.789000000')),
+                               ('S', Timedelta('1 days 02:34:57'),
+                                Timedelta('-1 days 02:34:57')),
+                               ('2S', Timedelta('1 days 02:34:56'),
+                                Timedelta('-1 days 02:34:56')),
+                               ('5S', Timedelta('1 days 02:34:55'),
+                                Timedelta('-1 days 02:34:55')),
+                               ('T', Timedelta('1 days 02:35:00'),
+                                Timedelta('-1 days 02:35:00')),
+                               ('12T', Timedelta('1 days 02:36:00'),
                                 Timedelta('-1 days 02:36:00')),
-                               ('H',
-                                Timedelta('1 days 03:00:00'),
-                                Timedelta('-1 days 03:00:00')
-                                ),
-                               ('d',
-                                Timedelta('1 days'),
-                                Timedelta('-1 days')
-                                )]:
+                               ('H', Timedelta('1 days 03:00:00'),
+                                Timedelta('-1 days 03:00:00')),
+                               ('d', Timedelta('1 days'),
+                                Timedelta('-1 days'))]:
             r1 = t1.round(freq)
             self.assertEqual(r1, s1)
             r2 = t2.round(freq)
@@ -279,25 +275,30 @@ def test_round(self):
 
     def test_repr(self):
 
-        self.assertEqual(repr(Timedelta(10,unit='d')),"Timedelta('10 days 00:00:00')")
-        self.assertEqual(repr(Timedelta(10,unit='s')),"Timedelta('0 days 00:00:10')")
-        self.assertEqual(repr(Timedelta(10,unit='ms')),"Timedelta('0 days 00:00:00.010000')")
-        self.assertEqual(repr(Timedelta(-10,unit='ms')),"Timedelta('-1 days +23:59:59.990000')")
+        self.assertEqual(repr(Timedelta(10, unit='d')),
+                         "Timedelta('10 days 00:00:00')")
+        self.assertEqual(repr(Timedelta(10, unit='s')),
+                         "Timedelta('0 days 00:00:10')")
+        self.assertEqual(repr(Timedelta(10, unit='ms')),
+                         "Timedelta('0 days 00:00:00.010000')")
+        self.assertEqual(repr(Timedelta(-10, unit='ms')),
+                         "Timedelta('-1 days +23:59:59.990000')")
 
     def test_identity(self):
 
-        td = Timedelta(10,unit='d')
+        td = Timedelta(10, unit='d')
         self.assertTrue(isinstance(td, Timedelta))
         self.assertTrue(isinstance(td, timedelta))
 
     def test_conversion(self):
 
-        for td in [ Timedelta(10,unit='d'), Timedelta('1 days, 10:11:12.012345') ]:
+        for td in [Timedelta(10, unit='d'),
+                   Timedelta('1 days, 10:11:12.012345')]:
             pydt = td.to_pytimedelta()
             self.assertTrue(td == Timedelta(pydt))
             self.assertEqual(td, pydt)
-            self.assertTrue(isinstance(pydt, timedelta)
-                            and not isinstance(pydt, Timedelta))
+            self.assertTrue(isinstance(pydt, timedelta) and not isinstance(
+                pydt, Timedelta))
 
             self.assertEqual(td, np.timedelta64(td.value, 'ns'))
             td64 = td.to_timedelta64()
@@ -311,36 +312,36 @@ def test_ops(self):
 
-        td = Timedelta(10,unit='d')
-        self.assertEqual(-td,Timedelta(-10,unit='d'))
-        self.assertEqual(+td,Timedelta(10,unit='d'))
-        self.assertEqual(td - td, Timedelta(0,unit='ns'))
+        td = Timedelta(10, unit='d')
+        self.assertEqual(-td, Timedelta(-10, unit='d'))
+        self.assertEqual(+td, Timedelta(10, unit='d'))
+        self.assertEqual(td - td, Timedelta(0, unit='ns'))
         self.assertTrue((td - pd.NaT) is pd.NaT)
-        self.assertEqual(td + td, Timedelta(20,unit='d'))
+        self.assertEqual(td + td, Timedelta(20, unit='d'))
         self.assertTrue((td + pd.NaT) is pd.NaT)
-        self.assertEqual(td * 2, Timedelta(20,unit='d'))
+        self.assertEqual(td * 2, Timedelta(20, unit='d'))
         self.assertTrue((td * pd.NaT) is pd.NaT)
-        self.assertEqual(td / 2, Timedelta(5,unit='d'))
+        self.assertEqual(td / 2, Timedelta(5, unit='d'))
         self.assertEqual(abs(td), td)
         self.assertEqual(abs(-td), td)
         self.assertEqual(td / td, 1)
         self.assertTrue((td / pd.NaT) is np.nan)
 
         # invert
-        self.assertEqual(-td,Timedelta('-10d'))
-        self.assertEqual(td * -1,Timedelta('-10d'))
-        self.assertEqual(-1 * td,Timedelta('-10d'))
-        self.assertEqual(abs(-td),Timedelta('10d'))
+        self.assertEqual(-td, Timedelta('-10d'))
+        self.assertEqual(td * -1, Timedelta('-10d'))
+        self.assertEqual(-1 * td, Timedelta('-10d'))
+        self.assertEqual(abs(-td), Timedelta('10d'))
 
         # invalid
-        self.assertRaises(TypeError, lambda : Timedelta(11,unit='d') // 2)
+        self.assertRaises(TypeError, lambda: Timedelta(11, unit='d') // 2)
 
         # invalid multiply with another timedelta
-        self.assertRaises(TypeError, lambda : td * td)
+        self.assertRaises(TypeError, lambda: td * td)
 
         # can't operate with integers
-        self.assertRaises(TypeError, lambda : td + 2)
-        self.assertRaises(TypeError, lambda : td - 2)
+        self.assertRaises(TypeError, lambda: td + 2)
+        self.assertRaises(TypeError, lambda: td - 2)
 
     def test_ops_offsets(self):
         td = Timedelta(10, unit='d')
@@ -354,11 +355,11 @@ def test_ops_offsets(self):
 
     def test_freq_conversion(self):
 
         td = Timedelta('1 days 2 hours 3 ns')
-        result = td / np.timedelta64(1,'D')
-        self.assertEqual(result, td.value/float(86400*1e9))
-        result = td / np.timedelta64(1,'s')
-        self.assertEqual(result, td.value/float(1e9))
-        result = td / np.timedelta64(1,'ns')
+        result = td / np.timedelta64(1, 'D')
+        self.assertEqual(result, td.value / float(86400 * 1e9))
+        result = td / np.timedelta64(1, 's')
+        self.assertEqual(result, td.value / float(1e9))
+        result = td / np.timedelta64(1, 'ns')
         self.assertEqual(result, td.value)
 
     def test_ops_ndarray(self):
@@ -428,6 +429,7 @@ def test_compare_timedelta_ndarray(self):
     def test_ops_notimplemented(self):
         class Other:
             pass
+
         other = Other()
 
         td = Timedelta('1 day')
@@ -438,7 +440,6 @@ class Other:
         self.assertTrue(td.__floordiv__(td) is NotImplemented)
 
     def test_fields(self):
-
         def check(value):
             # that we are int/long like
             self.assertTrue(isinstance(value, (int, compat.long)))
 
         # compat to datetime.timedelta
         rng = to_timedelta('1 days, 10:11:12')
         self.assertEqual(rng.days, 1)
-        self.assertEqual(rng.seconds, 10*3600+11*60+12)
+        self.assertEqual(rng.seconds, 10 * 3600 + 11 * 60 + 12)
         self.assertEqual(rng.microseconds, 0)
         self.assertEqual(rng.nanoseconds, 0)
 
-        self.assertRaises(AttributeError, lambda : rng.hours)
-        self.assertRaises(AttributeError, lambda : rng.minutes)
-        self.assertRaises(AttributeError, lambda : rng.milliseconds)
+        self.assertRaises(AttributeError, lambda: rng.hours)
+        self.assertRaises(AttributeError, lambda: rng.minutes)
+        self.assertRaises(AttributeError, lambda: rng.milliseconds)
 
         # GH 10050
         check(rng.days)
@@ -469,12 +470,12 @@ def check(value):
 
         rng = to_timedelta('-1 days, 10:11:12.100123456')
         self.assertEqual(rng.days, -1)
-        self.assertEqual(rng.seconds, 10*3600+11*60+12)
-        self.assertEqual(rng.microseconds, 100*1000+123)
+        self.assertEqual(rng.seconds, 10 * 3600 + 11 * 60 + 12)
+        self.assertEqual(rng.microseconds, 100 * 1000 + 123)
         self.assertEqual(rng.nanoseconds, 456)
 
-        self.assertRaises(AttributeError, lambda : rng.hours)
-        self.assertRaises(AttributeError, lambda : rng.minutes)
-        self.assertRaises(AttributeError, lambda : rng.milliseconds)
+        self.assertRaises(AttributeError, lambda: rng.hours)
+        self.assertRaises(AttributeError, lambda: rng.minutes)
+        self.assertRaises(AttributeError, lambda: rng.milliseconds)
 
         # components
         tup = pd.to_timedelta(-1, 'us').components
@@ -515,10 +516,11 @@ def test_timedelta_range(self):
 
         tm.assert_index_equal(result, expected)
 
         expected = to_timedelta(np.arange(5), unit='D') + Second(2) + Day()
-        result = timedelta_range('1 days, 00:00:02', '5 days, 00:00:02', freq='D')
+        result = timedelta_range('1 days, 00:00:02', '5 days, 00:00:02',
+                                 freq='D')
         tm.assert_index_equal(result, expected)
 
-        expected = to_timedelta([1,3,5,7,9], unit='D') + Second(2)
+        expected = to_timedelta([1, 3, 5, 7, 9], unit='D') + Second(2)
         result = timedelta_range('1 days, 00:00:02', periods=5, freq='2D')
         tm.assert_index_equal(result, expected)
@@ -537,69 +539,76 @@ def test_timedelta_range(self):
             to_timedelta(arg, errors=errors)
 
         # issue10583
-        df = pd.DataFrame(np.random.normal(size=(10,4)))
+        df = pd.DataFrame(np.random.normal(size=(10, 4)))
         df.index = pd.timedelta_range(start='0s', periods=10, freq='s')
-        expected = df.loc[pd.Timedelta('0s'):,:]
-        result = df.loc['0s':,:]
+        expected = df.loc[pd.Timedelta('0s'):, :]
+        result = df.loc['0s':, :]
         assert_frame_equal(expected, result)
 
-
     def test_numeric_conversions(self):
-        self.assertEqual(ct(0), np.timedelta64(0,'ns'))
-        self.assertEqual(ct(10), np.timedelta64(10,'ns'))
-        self.assertEqual(ct(10,unit='ns'), np.timedelta64(10,'ns').astype('m8[ns]'))
-
-        self.assertEqual(ct(10,unit='us'), np.timedelta64(10,'us').astype('m8[ns]'))
-        self.assertEqual(ct(10,unit='ms'), np.timedelta64(10,'ms').astype('m8[ns]'))
-        self.assertEqual(ct(10,unit='s'), np.timedelta64(10,'s').astype('m8[ns]'))
-        self.assertEqual(ct(10,unit='d'), np.timedelta64(10,'D').astype('m8[ns]'))
+        self.assertEqual(ct(0), np.timedelta64(0, 'ns'))
+        self.assertEqual(ct(10), np.timedelta64(10, 'ns'))
+        self.assertEqual(ct(10, unit='ns'), np.timedelta64(
+            10, 'ns').astype('m8[ns]'))
+
+        self.assertEqual(ct(10, unit='us'), np.timedelta64(
+            10, 'us').astype('m8[ns]'))
+        self.assertEqual(ct(10, unit='ms'), np.timedelta64(
+            10, 'ms').astype('m8[ns]'))
+        self.assertEqual(ct(10, unit='s'), np.timedelta64(
+            10, 's').astype('m8[ns]'))
+        self.assertEqual(ct(10, unit='d'), np.timedelta64(
+            10, 'D').astype('m8[ns]'))
 
     def test_timedelta_conversions(self):
-        self.assertEqual(ct(timedelta(seconds=1)), np.timedelta64(1,'s').astype('m8[ns]'))
-        self.assertEqual(ct(timedelta(microseconds=1)), np.timedelta64(1,'us').astype('m8[ns]'))
-        self.assertEqual(ct(timedelta(days=1)), np.timedelta64(1,'D').astype('m8[ns]'))
+        self.assertEqual(ct(timedelta(seconds=1)),
+                         np.timedelta64(1, 's').astype('m8[ns]'))
+        self.assertEqual(ct(timedelta(microseconds=1)),
+                         np.timedelta64(1, 'us').astype('m8[ns]'))
+        self.assertEqual(ct(timedelta(days=1)),
+                         np.timedelta64(1, 'D').astype('m8[ns]'))
 
     def test_short_format_converters(self):
         def conv(v):
             return v.astype('m8[ns]')
 
-        self.assertEqual(ct('10'), np.timedelta64(10,'ns'))
-        self.assertEqual(ct('10ns'), np.timedelta64(10,'ns'))
-        self.assertEqual(ct('100'), np.timedelta64(100,'ns'))
-        self.assertEqual(ct('100ns'), np.timedelta64(100,'ns'))
-
-        self.assertEqual(ct('1000'), np.timedelta64(1000,'ns'))
-        self.assertEqual(ct('1000ns'), np.timedelta64(1000,'ns'))
-        self.assertEqual(ct('1000NS'), np.timedelta64(1000,'ns'))
-
-        self.assertEqual(ct('10us'), np.timedelta64(10000,'ns'))
-        self.assertEqual(ct('100us'), np.timedelta64(100000,'ns'))
-        self.assertEqual(ct('1000us'), np.timedelta64(1000000,'ns'))
-        self.assertEqual(ct('1000Us'), np.timedelta64(1000000,'ns'))
-        self.assertEqual(ct('1000uS'), np.timedelta64(1000000,'ns'))
-
-        self.assertEqual(ct('1ms'), np.timedelta64(1000000,'ns'))
-        self.assertEqual(ct('10ms'), np.timedelta64(10000000,'ns'))
-        self.assertEqual(ct('100ms'), np.timedelta64(100000000,'ns'))
-        self.assertEqual(ct('1000ms'), np.timedelta64(1000000000,'ns'))
-
-        self.assertEqual(ct('-1s'), -np.timedelta64(1000000000,'ns'))
-        self.assertEqual(ct('1s'), np.timedelta64(1000000000,'ns'))
-        self.assertEqual(ct('10s'), np.timedelta64(10000000000,'ns'))
-        self.assertEqual(ct('100s'), np.timedelta64(100000000000,'ns'))
-        self.assertEqual(ct('1000s'), np.timedelta64(1000000000000,'ns'))
-
-        self.assertEqual(ct('1d'), conv(np.timedelta64(1,'D')))
-        self.assertEqual(ct('-1d'), -conv(np.timedelta64(1,'D')))
-        self.assertEqual(ct('1D'), conv(np.timedelta64(1,'D')))
-        self.assertEqual(ct('10D'), conv(np.timedelta64(10,'D')))
-        self.assertEqual(ct('100D'), conv(np.timedelta64(100,'D')))
-        self.assertEqual(ct('1000D'), conv(np.timedelta64(1000,'D')))
-        self.assertEqual(ct('10000D'), conv(np.timedelta64(10000,'D')))
+        self.assertEqual(ct('10'), np.timedelta64(10, 'ns'))
+        self.assertEqual(ct('10ns'), np.timedelta64(10, 'ns'))
+        self.assertEqual(ct('100'), np.timedelta64(100, 'ns'))
+        self.assertEqual(ct('100ns'), np.timedelta64(100, 'ns'))
+
+        self.assertEqual(ct('1000'), np.timedelta64(1000, 'ns'))
+        self.assertEqual(ct('1000ns'), np.timedelta64(1000, 'ns'))
+        self.assertEqual(ct('1000NS'), np.timedelta64(1000, 'ns'))
+
+        self.assertEqual(ct('10us'), np.timedelta64(10000, 'ns'))
+        self.assertEqual(ct('100us'), np.timedelta64(100000, 'ns'))
+        self.assertEqual(ct('1000us'), np.timedelta64(1000000, 'ns'))
+        self.assertEqual(ct('1000Us'), np.timedelta64(1000000, 'ns'))
+        self.assertEqual(ct('1000uS'), np.timedelta64(1000000, 'ns'))
+
+        self.assertEqual(ct('1ms'), np.timedelta64(1000000, 'ns'))
+        self.assertEqual(ct('10ms'), np.timedelta64(10000000, 'ns'))
+        self.assertEqual(ct('100ms'), np.timedelta64(100000000, 'ns'))
+        self.assertEqual(ct('1000ms'), np.timedelta64(1000000000, 'ns'))
+
+        self.assertEqual(ct('-1s'), -np.timedelta64(1000000000, 'ns'))
+        self.assertEqual(ct('1s'), np.timedelta64(1000000000, 'ns'))
+        self.assertEqual(ct('10s'), np.timedelta64(10000000000, 'ns'))
+        self.assertEqual(ct('100s'), np.timedelta64(100000000000, 'ns'))
+        self.assertEqual(ct('1000s'), np.timedelta64(1000000000000, 'ns'))
+
+        self.assertEqual(ct('1d'), conv(np.timedelta64(1, 'D')))
+        self.assertEqual(ct('-1d'), -conv(np.timedelta64(1, 'D')))
+        self.assertEqual(ct('1D'), conv(np.timedelta64(1, 'D')))
+        self.assertEqual(ct('10D'), conv(np.timedelta64(10, 'D')))
+        self.assertEqual(ct('100D'), conv(np.timedelta64(100, 'D')))
+        self.assertEqual(ct('1000D'), conv(np.timedelta64(1000, 'D')))
+        self.assertEqual(ct('10000D'), conv(np.timedelta64(10000, 'D')))
 
         # space
-        self.assertEqual(ct(' 10000D '), conv(np.timedelta64(10000,'D')))
-        self.assertEqual(ct(' - 10000D '), -conv(np.timedelta64(10000,'D')))
+        self.assertEqual(ct(' 10000D '), conv(np.timedelta64(10000, 'D')))
+        self.assertEqual(ct(' - 10000D '), -conv(np.timedelta64(10000, 'D')))
 
         # invalid
         self.assertRaises(ValueError, ct, '1foo')
@@ -608,102 +617,117 @@ def conv(v):
 
     def test_full_format_converters(self):
         def conv(v):
             return v.astype('m8[ns]')
-        d1 = np.timedelta64(1,'D')
+
+        d1 = np.timedelta64(1, 'D')
 
         self.assertEqual(ct('1days'), conv(d1))
         self.assertEqual(ct('1days,'), conv(d1))
         self.assertEqual(ct('- 1days,'), -conv(d1))
 
-        self.assertEqual(ct('00:00:01'), conv(np.timedelta64(1,'s')))
-        self.assertEqual(ct('06:00:01'), conv(np.timedelta64(6*3600+1,'s')))
-        self.assertEqual(ct('06:00:01.0'), conv(np.timedelta64(6*3600+1,'s')))
-        self.assertEqual(ct('06:00:01.01'), conv(np.timedelta64(1000*(6*3600+1)+10,'ms')))
-
-        self.assertEqual(ct('- 1days, 00:00:01'), conv(-d1+np.timedelta64(1,'s')))
-        self.assertEqual(ct('1days, 06:00:01'), conv(d1+np.timedelta64(6*3600+1,'s')))
-        self.assertEqual(ct('1days, 06:00:01.01'), conv(d1+np.timedelta64(1000*(6*3600+1)+10,'ms')))
+        self.assertEqual(ct('00:00:01'), conv(np.timedelta64(1, 's')))
+        self.assertEqual(ct('06:00:01'), conv(
+            np.timedelta64(6 * 3600 + 1, 's')))
+        self.assertEqual(ct('06:00:01.0'), conv(
+            np.timedelta64(6 * 3600 + 1, 's')))
+        self.assertEqual(ct('06:00:01.01'), conv(
+            np.timedelta64(1000 * (6 * 3600 + 1) + 10, 'ms')))
+
+        self.assertEqual(ct('- 1days, 00:00:01'),
+                         conv(-d1 + np.timedelta64(1, 's')))
+        self.assertEqual(ct('1days, 06:00:01'), conv(
+            d1 + np.timedelta64(6 * 3600 + 1, 's')))
+        self.assertEqual(ct('1days, 06:00:01.01'), conv(
+            d1 + np.timedelta64(1000 * (6 * 3600 + 1) + 10, 'ms')))
 
         # invalid
         self.assertRaises(ValueError, ct, '- 1days, 00')
 
     def test_nat_converters(self):
-        self.assertEqual(to_timedelta('nat',box=False).astype('int64'), tslib.iNaT)
-        self.assertEqual(to_timedelta('nan',box=False).astype('int64'), tslib.iNaT)
+        self.assertEqual(to_timedelta(
+            'nat', box=False).astype('int64'), tslib.iNaT)
+        self.assertEqual(to_timedelta(
+            'nan', box=False).astype('int64'), tslib.iNaT)
 
     def test_to_timedelta(self):
         def conv(v):
             return v.astype('m8[ns]')
-        d1 = np.timedelta64(1,'D')
-        self.assertEqual(to_timedelta('1 days 06:05:01.00003',box=False), conv(d1+np.timedelta64(6*3600+5*60+1,'s')+np.timedelta64(30,'us')))
-        self.assertEqual(to_timedelta('15.5us',box=False), conv(np.timedelta64(15500,'ns')))
+
+        d1 = np.timedelta64(1, 'D')
+
+        self.assertEqual(to_timedelta('1 days 06:05:01.00003', box=False),
+                         conv(d1 + np.timedelta64(6 * 3600 +
+                                                  5 * 60 + 1, 's') +
+                              np.timedelta64(30, 'us')))
+        self.assertEqual(to_timedelta('15.5us', box=False),
+                         conv(np.timedelta64(15500, 'ns')))
 
         # empty string
-        result = to_timedelta('',box=False)
+        result = to_timedelta('', box=False)
         self.assertEqual(result.astype('int64'), tslib.iNaT)
 
         result = to_timedelta(['', ''])
         self.assertTrue(isnull(result).all())
 
         # pass thru
-        result = to_timedelta(np.array([np.timedelta64(1,'s')]))
-        expected = np.array([np.timedelta64(1,'s')])
-        tm.assert_almost_equal(result,expected)
+        result = to_timedelta(np.array([np.timedelta64(1, 's')]))
+        expected = np.array([np.timedelta64(1, 's')])
+        tm.assert_almost_equal(result, expected)
 
         # ints
-        result = np.timedelta64(0,'ns')
-        expected = to_timedelta(0,box=False)
+        result = np.timedelta64(0, 'ns')
+        expected = to_timedelta(0, box=False)
         self.assertEqual(result, expected)
 
         # Series
         expected = Series([timedelta(days=1), timedelta(days=1, seconds=1)])
-        result = to_timedelta(Series(['1d','1days 00:00:01']))
+        result = to_timedelta(Series(['1d', '1days 00:00:01']))
         tm.assert_series_equal(result, expected)
 
         # with units
-        result = TimedeltaIndex([ np.timedelta64(0,'ns'), np.timedelta64(10,'s').astype('m8[ns]') ])
-        expected = to_timedelta([0,10],unit='s')
+        result = TimedeltaIndex([np.timedelta64(0, 'ns'), np.timedelta64(
+            10, 's').astype('m8[ns]')])
+        expected = to_timedelta([0, 10], unit='s')
         tm.assert_index_equal(result, expected)
 
         # single element conversion
         v = timedelta(seconds=1)
-        result = to_timedelta(v,box=False)
+        result = to_timedelta(v, box=False)
         expected = np.timedelta64(timedelta(seconds=1))
         self.assertEqual(result, expected)
 
         v = np.timedelta64(timedelta(seconds=1))
-        result = to_timedelta(v,box=False)
+        result = to_timedelta(v, box=False)
         expected = np.timedelta64(timedelta(seconds=1))
         self.assertEqual(result, expected)
 
         # arrays of various dtypes
-        arr = np.array([1]*5,dtype='int64')
-        result = to_timedelta(arr,unit='s')
-        expected = TimedeltaIndex([ np.timedelta64(1,'s') ]*5)
+        arr = np.array([1] * 5, dtype='int64')
+        result = to_timedelta(arr, unit='s')
+        expected = TimedeltaIndex([np.timedelta64(1, 's')] * 5)
        tm.assert_index_equal(result, expected)
 
-        arr = np.array([1]*5,dtype='int64')
-        result = to_timedelta(arr,unit='m')
-        expected = TimedeltaIndex([ np.timedelta64(1,'m') ]*5)
+        arr = np.array([1] * 5, dtype='int64')
+        result = to_timedelta(arr, unit='m')
+        expected = TimedeltaIndex([np.timedelta64(1, 'm')] * 5)
        tm.assert_index_equal(result, expected)
 
-        arr = np.array([1]*5,dtype='int64')
-        result = to_timedelta(arr,unit='h')
-        expected = TimedeltaIndex([ np.timedelta64(1,'h') ]*5)
+        arr = np.array([1] * 5, dtype='int64')
+        result = to_timedelta(arr, unit='h')
+        expected =
TimedeltaIndex([np.timedelta64(1, 'h')] * 5) tm.assert_index_equal(result, expected) - arr = np.array([1]*5,dtype='timedelta64[s]') + arr = np.array([1] * 5, dtype='timedelta64[s]') result = to_timedelta(arr) - expected = TimedeltaIndex([ np.timedelta64(1,'s') ]*5) + expected = TimedeltaIndex([np.timedelta64(1, 's')] * 5) tm.assert_index_equal(result, expected) - arr = np.array([1]*5,dtype='timedelta64[D]') + arr = np.array([1] * 5, dtype='timedelta64[D]') result = to_timedelta(arr) - expected = TimedeltaIndex([ np.timedelta64(1,'D') ]*5) + expected = TimedeltaIndex([np.timedelta64(1, 'D')] * 5) tm.assert_index_equal(result, expected) # Test with lists as input when box=false - expected = np.array(np.arange(3)*1000000000, dtype='timedelta64[ns]') + expected = np.array(np.arange(3) * 1000000000, dtype='timedelta64[ns]') result = to_timedelta(range(3), unit='s', box=False) tm.assert_numpy_array_equal(expected, result) @@ -714,59 +738,65 @@ def conv(v): tm.assert_numpy_array_equal(expected, result) # Tests with fractional seconds as input: - expected = np.array([0, 500000000, 800000000, 1200000000], dtype='timedelta64[ns]') + expected = np.array( + [0, 500000000, 800000000, 1200000000], dtype='timedelta64[ns]') result = to_timedelta([0., 0.5, 0.8, 1.2], unit='s', box=False) tm.assert_numpy_array_equal(expected, result) def testit(unit, transform): # array - result = to_timedelta(np.arange(5),unit=unit) - expected = TimedeltaIndex([ np.timedelta64(i,transform(unit)) for i in np.arange(5).tolist() ]) + result = to_timedelta(np.arange(5), unit=unit) + expected = TimedeltaIndex([np.timedelta64(i, transform(unit)) + for i in np.arange(5).tolist()]) tm.assert_index_equal(result, expected) # scalar - result = to_timedelta(2,unit=unit) - expected = Timedelta(np.timedelta64(2,transform(unit)).astype('timedelta64[ns]')) + result = to_timedelta(2, unit=unit) + expected = Timedelta(np.timedelta64(2, transform(unit)).astype( + 'timedelta64[ns]')) self.assertEqual(result, expected) 
# validate all units # GH 6855 - for unit in ['Y','M','W','D','y','w','d']: - testit(unit,lambda x: x.upper()) - for unit in ['days','day','Day','Days']: - testit(unit,lambda x: 'D') - for unit in ['h','m','s','ms','us','ns','H','S','MS','US','NS']: - testit(unit,lambda x: x.lower()) + for unit in ['Y', 'M', 'W', 'D', 'y', 'w', 'd']: + testit(unit, lambda x: x.upper()) + for unit in ['days', 'day', 'Day', 'Days']: + testit(unit, lambda x: 'D') + for unit in ['h', 'm', 's', 'ms', 'us', 'ns', 'H', 'S', 'MS', 'US', + 'NS']: + testit(unit, lambda x: x.lower()) # offsets # m - testit('T',lambda x: 'm') + testit('T', lambda x: 'm') # ms - testit('L',lambda x: 'ms') + testit('L', lambda x: 'ms') def test_to_timedelta_invalid(self): # these will error - self.assertRaises(ValueError, lambda : to_timedelta([1,2],unit='foo')) - self.assertRaises(ValueError, lambda : to_timedelta(1,unit='foo')) + self.assertRaises(ValueError, lambda: to_timedelta([1, 2], unit='foo')) + self.assertRaises(ValueError, lambda: to_timedelta(1, unit='foo')) # time not supported ATM - self.assertRaises(ValueError, lambda :to_timedelta(time(second=1))) - self.assertTrue(to_timedelta(time(second=1), errors='coerce') is pd.NaT) + self.assertRaises(ValueError, lambda: to_timedelta(time(second=1))) + self.assertTrue(to_timedelta( + time(second=1), errors='coerce') is pd.NaT) - self.assertRaises(ValueError, lambda : to_timedelta(['foo','bar'])) - tm.assert_index_equal(TimedeltaIndex([pd.NaT,pd.NaT]), - to_timedelta(['foo','bar'], errors='coerce')) + self.assertRaises(ValueError, lambda: to_timedelta(['foo', 'bar'])) + tm.assert_index_equal(TimedeltaIndex([pd.NaT, pd.NaT]), + to_timedelta(['foo', 'bar'], errors='coerce')) tm.assert_index_equal(TimedeltaIndex(['1 day', pd.NaT, '1 min']), - to_timedelta(['1 day','bar','1 min'], errors='coerce')) + to_timedelta(['1 day', 'bar', '1 min'], + errors='coerce')) def test_to_timedelta_via_apply(self): # GH 5458 - expected = Series([np.timedelta64(1,'s')]) + expected 
= Series([np.timedelta64(1, 's')]) result = Series(['00:00:01']).apply(to_timedelta) tm.assert_series_equal(result, expected) @@ -776,7 +806,8 @@ def test_to_timedelta_via_apply(self): def test_timedelta_ops(self): # GH4984 # make sure ops return Timedelta - s = Series([Timestamp('20130101') + timedelta(seconds=i*i) for i in range(10) ]) + s = Series([Timestamp('20130101') + timedelta(seconds=i * i) + for i in range(10)]) td = s.diff() result = td.mean() @@ -787,7 +818,7 @@ def test_timedelta_ops(self): self.assertEqual(result[0], expected) result = td.quantile(.1) - expected = Timedelta(np.timedelta64(2600,'ms')) + expected = Timedelta(np.timedelta64(2600, 'ms')) self.assertEqual(result, expected) result = td.median() @@ -815,35 +846,39 @@ def test_timedelta_ops(self): self.assertEqual(result[0], expected) # invalid ops - for op in ['skew','kurt','sem','prod']: - self.assertRaises(TypeError, getattr(td,op)) + for op in ['skew', 'kurt', 'sem', 'prod']: + self.assertRaises(TypeError, getattr(td, op)) # GH 10040 # make sure NaT is properly handled by median() s = Series([Timestamp('2015-02-03'), Timestamp('2015-02-07')]) self.assertEqual(s.diff().median(), timedelta(days=4)) - s = Series([Timestamp('2015-02-03'), Timestamp('2015-02-07'), Timestamp('2015-02-15')]) + s = Series([Timestamp('2015-02-03'), Timestamp('2015-02-07'), + Timestamp('2015-02-15')]) self.assertEqual(s.diff().median(), timedelta(days=6)) def test_overflow(self): # GH 9442 - s = Series(pd.date_range('20130101',periods=100000,freq='H')) + s = Series(pd.date_range('20130101', periods=100000, freq='H')) s[0] += pd.Timedelta('1s 1ms') # mean - result = (s-s.min()).mean() - expected = pd.Timedelta((pd.DatetimeIndex((s-s.min())).asi8/len(s)).sum()) + result = (s - s.min()).mean() + expected = pd.Timedelta((pd.DatetimeIndex((s - s.min())).asi8 / len(s) + ).sum()) - # the computation is converted to float so might be some loss of precision - self.assertTrue(np.allclose(result.value/1000, 
expected.value/1000)) + # the computation is converted to float so might be some loss of + # precision + self.assertTrue(np.allclose(result.value / 1000, expected.value / + 1000)) # sum - self.assertRaises(ValueError, lambda : (s-s.min()).sum()) + self.assertRaises(ValueError, lambda: (s - s.min()).sum()) s1 = s[0:10000] - self.assertRaises(ValueError, lambda : (s1-s1.min()).sum()) + self.assertRaises(ValueError, lambda: (s1 - s1.min()).sum()) s2 = s[0:1000] - result = (s2-s2.min()).sum() + result = (s2 - s2.min()).sum() def test_timedelta_ops_scalar(self): # GH 6808 @@ -851,10 +886,9 @@ def test_timedelta_ops_scalar(self): expected_add = pd.to_datetime('20130101 09:01:22.123456') expected_sub = pd.to_datetime('20130101 09:01:02.123456') - for offset in [pd.to_timedelta(10,unit='s'), - timedelta(seconds=10), - np.timedelta64(10,'s'), - np.timedelta64(10000000000,'ns'), + for offset in [pd.to_timedelta(10, unit='s'), timedelta(seconds=10), + np.timedelta64(10, 's'), + np.timedelta64(10000000000, 'ns'), pd.offsets.Second(10)]: result = base + offset self.assertEqual(result, expected_add) @@ -868,9 +902,9 @@ def test_timedelta_ops_scalar(self): for offset in [pd.to_timedelta('1 day, 00:00:10'), pd.to_timedelta('1 days, 00:00:10'), - timedelta(days=1,seconds=10), - np.timedelta64(1,'D')+np.timedelta64(10,'s'), - pd.offsets.Day()+pd.offsets.Second(10)]: + timedelta(days=1, seconds=10), + np.timedelta64(1, 'D') + np.timedelta64(10, 's'), + pd.offsets.Day() + pd.offsets.Second(10)]: result = base + offset self.assertEqual(result, expected_add) @@ -882,7 +916,8 @@ def test_to_timedelta_on_missing_values(self): timedelta_NaT = np.timedelta64('NaT') actual = pd.to_timedelta(Series(['00:00:01', np.nan])) - expected = Series([np.timedelta64(1000000000, 'ns'), timedelta_NaT], dtype='<m8[ns]') + expected = Series([np.timedelta64(1000000000, 'ns'), + timedelta_NaT], dtype='<m8[ns]') assert_series_equal(actual, expected) actual = pd.to_timedelta(Series(['00:00:01', pd.NaT])) @@ 
-900,7 +935,8 @@ def test_to_timedelta_on_nanoseconds(self): expected = Timedelta('100ns') self.assertEqual(result, expected) - result = Timedelta(days=1,hours=1,minutes=1,weeks=1,seconds=1,milliseconds=1,microseconds=1,nanoseconds=1) + result = Timedelta(days=1, hours=1, minutes=1, weeks=1, seconds=1, + milliseconds=1, microseconds=1, nanoseconds=1) expected = Timedelta(694861001001001) self.assertEqual(result, expected) @@ -912,7 +948,7 @@ def test_to_timedelta_on_nanoseconds(self): expected = Timedelta('999ns') self.assertEqual(result, expected) - result = Timedelta(microseconds=1) + 5*Timedelta(nanoseconds=-2) + result = Timedelta(microseconds=1) + 5 * Timedelta(nanoseconds=-2) expected = Timedelta('990ns') self.assertEqual(result, expected) @@ -1017,8 +1053,10 @@ def test_apply_to_timedelta(self): # assert_series_equal(a, b) list_of_strings = ['00:00:01', np.nan, pd.NaT, timedelta_NaT] - a = pd.to_timedelta(list_of_strings) - b = Series(list_of_strings).apply(pd.to_timedelta) + + # TODO: unused? 
+ a = pd.to_timedelta(list_of_strings) # noqa + b = Series(list_of_strings).apply(pd.to_timedelta) # noqa # Can't compare until apply on a Series gives the correct dtype # assert_series_equal(a, b) @@ -1026,10 +1064,10 @@ def test_pickle(self): v = Timedelta('1 days 10:11:12.0123456') v_p = self.round_trip_pickle(v) - self.assertEqual(v,v_p) + self.assertEqual(v, v_p) def test_timedelta_hash_equality(self): - #GH 11129 + # GH 11129 v = Timedelta(1, 'D') td = timedelta(days=1) self.assertEqual(hash(v), hash(td)) @@ -1038,8 +1076,8 @@ def test_timedelta_hash_equality(self): self.assertEqual(d[v], 2) tds = timedelta_range('1 second', periods=20) - self.assertTrue( - all(hash(td) == hash(td.to_pytimedelta()) for td in tds)) + self.assertTrue(all(hash(td) == hash(td.to_pytimedelta()) for td in + tds)) # python timedeltas drop ns resolution ns_td = Timedelta(1, 'ns') @@ -1051,7 +1089,7 @@ class TestTimedeltaIndex(tm.TestCase): def test_pass_TimedeltaIndex_to_index(self): - rng = timedelta_range('1 days','10 days') + rng = timedelta_range('1 days', '10 days') idx = Index(rng, dtype=object) expected = Index(rng.to_pytimedelta(), dtype=object) @@ -1062,12 +1100,11 @@ def test_pickle(self): rng = timedelta_range('1 days', periods=10) rng_p = self.round_trip_pickle(rng) - tm.assert_index_equal(rng,rng_p) + tm.assert_index_equal(rng, rng_p) def test_hash_error(self): index = timedelta_range('1 days', periods=10) - with tm.assertRaisesRegexp(TypeError, - "unhashable type: %r" % + with tm.assertRaisesRegexp(TypeError, "unhashable type: %r" % type(index).__name__): hash(index) @@ -1083,7 +1120,7 @@ def test_append_join_nondatetimeindex(self): def test_append_numpy_bug_1681(self): - td = timedelta_range('1 days','10 days',freq='2D') + td = timedelta_range('1 days', '10 days', freq='2D') a = DataFrame() c = DataFrame({'A': 'foo', 'B': td}, index=td) str(c) @@ -1098,48 +1135,60 @@ def test_astype(self): self.assert_numpy_array_equal(result, rng.asi8) def test_fields(self): - rng = 
timedelta_range('1 days, 10:11:12.100123456', periods=2, freq='s') - self.assert_numpy_array_equal(rng.days, np.array([1,1],dtype='int64')) - self.assert_numpy_array_equal(rng.seconds, np.array([10*3600+11*60+12,10*3600+11*60+13],dtype='int64')) - self.assert_numpy_array_equal(rng.microseconds, np.array([100*1000+123,100*1000+123],dtype='int64')) - self.assert_numpy_array_equal(rng.nanoseconds, np.array([456,456],dtype='int64')) - - self.assertRaises(AttributeError, lambda : rng.hours) - self.assertRaises(AttributeError, lambda : rng.minutes) - self.assertRaises(AttributeError, lambda : rng.milliseconds) + rng = timedelta_range('1 days, 10:11:12.100123456', periods=2, + freq='s') + self.assert_numpy_array_equal(rng.days, np.array( + [1, 1], dtype='int64')) + self.assert_numpy_array_equal( + rng.seconds, + np.array([10 * 3600 + 11 * 60 + 12, 10 * 3600 + 11 * 60 + 13], + dtype='int64')) + self.assert_numpy_array_equal(rng.microseconds, np.array( + [100 * 1000 + 123, 100 * 1000 + 123], dtype='int64')) + self.assert_numpy_array_equal(rng.nanoseconds, np.array( + [456, 456], dtype='int64')) + + self.assertRaises(AttributeError, lambda: rng.hours) + self.assertRaises(AttributeError, lambda: rng.minutes) + self.assertRaises(AttributeError, lambda: rng.milliseconds) # with nat s = Series(rng) s[1] = np.nan - tm.assert_series_equal(s.dt.days,Series([1,np.nan],index=[0,1])) - tm.assert_series_equal(s.dt.seconds,Series([10*3600+11*60+12,np.nan],index=[0,1])) + tm.assert_series_equal(s.dt.days, Series([1, np.nan], index=[0, 1])) + tm.assert_series_equal(s.dt.seconds, Series( + [10 * 3600 + 11 * 60 + 12, np.nan], index=[0, 1])) def test_total_seconds(self): # GH 10939 # test index - rng = timedelta_range('1 days, 10:11:12.100123456', periods=2, freq='s') - expt = [1*86400+10*3600+11*60+12+100123456./1e9,1*86400+10*3600+11*60+13+100123456./1e9] + rng = timedelta_range('1 days, 10:11:12.100123456', periods=2, + freq='s') + expt = [1 * 86400 + 10 * 3600 + 11 * 60 + 12 + 100123456. 
/ 1e9, + 1 * 86400 + 10 * 3600 + 11 * 60 + 13 + 100123456. / 1e9] assert_allclose(rng.total_seconds(), expt, atol=1e-10, rtol=0) # test Series s = Series(rng) - s_expt = Series(expt,index=[0,1]) - tm.assert_series_equal(s.dt.total_seconds(),s_expt) + s_expt = Series(expt, index=[0, 1]) + tm.assert_series_equal(s.dt.total_seconds(), s_expt) # with nat s[1] = np.nan - s_expt = Series([1*86400+10*3600+11*60+12+100123456./1e9,np.nan],index=[0,1]) - tm.assert_series_equal(s.dt.total_seconds(),s_expt) + s_expt = Series([1 * 86400 + 10 * 3600 + 11 * 60 + + 12 + 100123456. / 1e9, np.nan], index=[0, 1]) + tm.assert_series_equal(s.dt.total_seconds(), s_expt) # with both nat - s = Series([np.nan,np.nan], dtype='timedelta64[ns]') - tm.assert_series_equal(s.dt.total_seconds(),Series([np.nan,np.nan],index=[0,1])) + s = Series([np.nan, np.nan], dtype='timedelta64[ns]') + tm.assert_series_equal(s.dt.total_seconds(), Series( + [np.nan, np.nan], index=[0, 1])) def test_total_seconds_scalar(self): # GH 10939 rng = Timedelta('1 days, 10:11:12.100123456') - expt = 1*86400+10*3600+11*60+12+100123456./1e9 + expt = 1 * 86400 + 10 * 3600 + 11 * 60 + 12 + 100123456. 
/ 1e9 assert_allclose(rng.total_seconds(), expt, atol=1e-10, rtol=0) rng = Timedelta(np.nan) @@ -1158,20 +1207,15 @@ def test_components(self): self.assertTrue(result.iloc[1].isnull().all()) def test_constructor(self): - expected = TimedeltaIndex(['1 days', '1 days 00:00:05', - '2 days', '2 days 00:00:02', - '0 days 00:00:03']) - result = TimedeltaIndex(['1 days', '1 days, 00:00:05', - np.timedelta64(2, 'D'), - timedelta(days=2, seconds=2), - pd.offsets.Second(3)]) + expected = TimedeltaIndex(['1 days', '1 days 00:00:05', '2 days', + '2 days 00:00:02', '0 days 00:00:03']) + result = TimedeltaIndex(['1 days', '1 days, 00:00:05', np.timedelta64( + 2, 'D'), timedelta(days=2, seconds=2), pd.offsets.Second(3)]) tm.assert_index_equal(result, expected) # unicode - result = TimedeltaIndex([u'1 days', '1 days, 00:00:05', - np.timedelta64(2, 'D'), - timedelta(days=2, seconds=2), - pd.offsets.Second(3)]) + result = TimedeltaIndex([u'1 days', '1 days, 00:00:05', np.timedelta64( + 2, 'D'), timedelta(days=2, seconds=2), pd.offsets.Second(3)]) expected = TimedeltaIndex(['0 days 00:00:00', '0 days 00:00:01', '0 days 00:00:02']) @@ -1179,9 +1223,9 @@ def test_constructor(self): expected = TimedeltaIndex(['0 days 00:00:00', '0 days 00:00:05', '0 days 00:00:09']) tm.assert_index_equal(TimedeltaIndex([0, 5, 9], unit='s'), expected) - expected = TimedeltaIndex(['0 days 00:00:00.400', - '0 days 00:00:00.450', - '0 days 00:00:01.200']) + expected = TimedeltaIndex( + ['0 days 00:00:00.400', '0 days 00:00:00.450', + '0 days 00:00:01.200']) tm.assert_index_equal(TimedeltaIndex([400, 450, 1200], unit='ms'), expected) @@ -1207,7 +1251,7 @@ def test_constructor_coverage(self): # NumPy string array strings = np.array(['1 days', '2 days', '3 days']) result = TimedeltaIndex(strings) - expected = to_timedelta([1,2,3],unit='d') + expected = to_timedelta([1, 2, 3], unit='d') self.assertTrue(result.equals(expected)) from_ints = TimedeltaIndex(expected.asi8) @@ -1215,14 +1259,12 @@ def 
test_constructor_coverage(self): # non-conforming freq self.assertRaises(ValueError, TimedeltaIndex, - ['1 days', '2 days', '4 days'], - freq='D') + ['1 days', '2 days', '4 days'], freq='D') self.assertRaises(ValueError, TimedeltaIndex, periods=10, freq='D') def test_constructor_name(self): - idx = TimedeltaIndex(start='1 days', periods=1, freq='D', - name='TEST') + idx = TimedeltaIndex(start='1 days', periods=1, freq='D', name='TEST') self.assertEqual(idx.name, 'TEST') # GH10025 @@ -1234,49 +1276,52 @@ def test_freq_conversion(self): # doc example # series - td = Series(date_range('20130101',periods=4)) - \ - Series(date_range('20121201',periods=4)) - td[2] += timedelta(minutes=5,seconds=3) + td = Series(date_range('20130101', periods=4)) - \ + Series(date_range('20121201', periods=4)) + td[2] += timedelta(minutes=5, seconds=3) td[3] = np.nan - result = td / np.timedelta64(1,'D') - expected = Series([31,31,(31*86400+5*60+3)/86400.0,np.nan]) - assert_series_equal(result,expected) + result = td / np.timedelta64(1, 'D') + expected = Series([31, 31, (31 * 86400 + 5 * 60 + 3) / 86400.0, np.nan + ]) + assert_series_equal(result, expected) result = td.astype('timedelta64[D]') - expected = Series([31,31,31,np.nan]) - assert_series_equal(result,expected) + expected = Series([31, 31, 31, np.nan]) + assert_series_equal(result, expected) - result = td / np.timedelta64(1,'s') - expected = Series([31*86400,31*86400,31*86400+5*60+3,np.nan]) - assert_series_equal(result,expected) + result = td / np.timedelta64(1, 's') + expected = Series([31 * 86400, 31 * 86400, 31 * 86400 + 5 * 60 + 3, + np.nan]) + assert_series_equal(result, expected) result = td.astype('timedelta64[s]') - assert_series_equal(result,expected) + assert_series_equal(result, expected) # tdi td = TimedeltaIndex(td) - result = td / np.timedelta64(1,'D') - expected = Index([31,31,(31*86400+5*60+3)/86400.0,np.nan]) - assert_index_equal(result,expected) + result = td / np.timedelta64(1, 'D') + expected = Index([31, 31, 
(31 * 86400 + 5 * 60 + 3) / 86400.0, np.nan]) + assert_index_equal(result, expected) result = td.astype('timedelta64[D]') - expected = Index([31,31,31,np.nan]) - assert_index_equal(result,expected) + expected = Index([31, 31, 31, np.nan]) + assert_index_equal(result, expected) - result = td / np.timedelta64(1,'s') - expected = Index([31*86400,31*86400,31*86400+5*60+3,np.nan]) - assert_index_equal(result,expected) + result = td / np.timedelta64(1, 's') + expected = Index([31 * 86400, 31 * 86400, 31 * 86400 + 5 * 60 + 3, + np.nan]) + assert_index_equal(result, expected) result = td.astype('timedelta64[s]') - assert_index_equal(result,expected) + assert_index_equal(result, expected) def test_comparisons_coverage(self): rng = timedelta_range('1 days', periods=10) result = rng < rng[3] - exp = np.array([True, True, True]+[False]*7) + exp = np.array([True, True, True] + [False] * 7) self.assert_numpy_array_equal(result, exp) # raise TypeError for now @@ -1292,11 +1337,11 @@ def test_comparisons_nat(self): '1 day 00:00:01', '5 day 00:00:03']) tdidx2 = pd.TimedeltaIndex(['2 day', '2 day', pd.NaT, pd.NaT, '1 day 00:00:02', '5 days 00:00:03']) - tdarr = np.array([np.timedelta64(2,'D'), - np.timedelta64(2,'D'), - np.timedelta64('nat'), np.timedelta64('nat'), - np.timedelta64(1,'D') + np.timedelta64(2,'s'), - np.timedelta64(5,'D') + np.timedelta64(3,'s')]) + tdarr = np.array([np.timedelta64(2, 'D'), + np.timedelta64(2, 'D'), np.timedelta64('nat'), + np.timedelta64('nat'), + np.timedelta64(1, 'D') + np.timedelta64(2, 's'), + np.timedelta64(5, 'D') + np.timedelta64(3, 's')]) if _np_version_under1p8: # cannot test array because np.datetime('nat') returns today's date @@ -1346,7 +1391,7 @@ def test_misc_coverage(self): result = rng.groupby(rng.days) tm.assertIsInstance(list(result.values())[0][0], Timedelta) - idx = TimedeltaIndex(['3d','1d','2d']) + idx = TimedeltaIndex(['3d', '1d', '2d']) self.assertTrue(idx.equals(list(idx))) non_td = Index(list('abc')) @@ -1354,10 +1399,10 @@ 
def test_misc_coverage(self): def test_union(self): - i1 = timedelta_range('1day',periods=5) - i2 = timedelta_range('3day',periods=5) + i1 = timedelta_range('1day', periods=5) + i2 = timedelta_range('3day', periods=5) result = i1.union(i2) - expected = timedelta_range('1day',periods=7) + expected = timedelta_range('1day', periods=7) self.assert_numpy_array_equal(result, expected) i1 = Int64Index(np.arange(0, 20, 2)) @@ -1367,7 +1412,7 @@ def test_union(self): def test_union_coverage(self): - idx = TimedeltaIndex(['3d','1d','2d']) + idx = TimedeltaIndex(['3d', '1d', '2d']) ordered = TimedeltaIndex(idx.sort_values(), freq='infer') result = ordered.union(idx) self.assertTrue(result.equals(ordered)) @@ -1388,9 +1433,9 @@ def test_union_bug_1730(self): def test_union_bug_1745(self): left = TimedeltaIndex(['1 day 15:19:49.695000']) - right = TimedeltaIndex(['2 day 13:04:21.322000', - '1 day 15:27:24.873000', - '1 day 15:31:05.350000']) + right = TimedeltaIndex( + ['2 day 13:04:21.322000', '1 day 15:27:24.873000', + '1 day 15:31:05.350000']) result = left.union(right) exp = TimedeltaIndex(sorted(set(list(left)) | set(list(right)))) @@ -1398,7 +1443,7 @@ def test_union_bug_1745(self): def test_union_bug_4564(self): - left = timedelta_range("1 day","30d") + left = timedelta_range("1 day", "30d") right = left + pd.offsets.Minute(15) result = left.union(right) @@ -1416,24 +1461,26 @@ def test_intersection_bug_1708(self): index_2 = index_1 + pd.offsets.Hour(1) result = index_1 & index_2 - expected = timedelta_range('1 day 01:00:00',periods=3,freq='h') - tm.assert_index_equal(result,expected) + expected = timedelta_range('1 day 01:00:00', periods=3, freq='h') + tm.assert_index_equal(result, expected) def test_get_duplicates(self): - idx = TimedeltaIndex(['1 day','2 day','2 day','3 day','3day', '4day']) + idx = TimedeltaIndex(['1 day', '2 day', '2 day', '3 day', '3day', + '4day']) result = idx.get_duplicates() - ex = TimedeltaIndex(['2 day','3day']) + ex = TimedeltaIndex(['2 
day', '3day']) self.assertTrue(result.equals(ex)) def test_argmin_argmax(self): - idx = TimedeltaIndex(['1 day 00:00:05','1 day 00:00:01','1 day 00:00:02']) + idx = TimedeltaIndex(['1 day 00:00:05', '1 day 00:00:01', + '1 day 00:00:02']) self.assertEqual(idx.argmin(), 1) self.assertEqual(idx.argmax(), 0) def test_sort_values(self): - idx = TimedeltaIndex(['4d','1d','2d']) + idx = TimedeltaIndex(['4d', '1d', '2d']) ordered = idx.sort_values() self.assertTrue(ordered.is_monotonic) @@ -1451,10 +1498,10 @@ def test_sort_values(self): def test_insert(self): - idx = TimedeltaIndex(['4day','1day','2day'], name='idx') + idx = TimedeltaIndex(['4day', '1day', '2day'], name='idx') result = idx.insert(2, timedelta(days=5)) - exp = TimedeltaIndex(['4day','1day','5day','2day'],name='idx') + exp = TimedeltaIndex(['4day', '1day', '5day', '2day'], name='idx') self.assertTrue(result.equals(exp)) # insertion of non-datetime should coerce to object index @@ -1468,15 +1515,19 @@ def test_insert(self): idx = timedelta_range('1day 00:00:01', periods=3, freq='s', name='idx') # preserve freq - expected_0 = TimedeltaIndex(['1day','1day 00:00:01','1day 00:00:02','1day 00:00:03'], + expected_0 = TimedeltaIndex(['1day', '1day 00:00:01', '1day 00:00:02', + '1day 00:00:03'], name='idx', freq='s') - expected_3 = TimedeltaIndex(['1day 00:00:01','1day 00:00:02','1day 00:00:03','1day 00:00:04'], + expected_3 = TimedeltaIndex(['1day 00:00:01', '1day 00:00:02', + '1day 00:00:03', '1day 00:00:04'], name='idx', freq='s') # reset freq to None - expected_1_nofreq = TimedeltaIndex(['1day 00:00:01','1day 00:00:01','1day 00:00:02','1day 00:00:03'], + expected_1_nofreq = TimedeltaIndex(['1day 00:00:01', '1day 00:00:01', + '1day 00:00:02', '1day 00:00:03'], name='idx', freq=None) - expected_3_nofreq = TimedeltaIndex(['1day 00:00:01','1day 00:00:02','1day 00:00:03','1day 00:00:05'], + expected_3_nofreq = TimedeltaIndex(['1day 00:00:01', '1day 00:00:02', + '1day 00:00:03', '1day 00:00:05'], name='idx', 
freq=None) cases = [(0, Timedelta('1day'), expected_0), @@ -1495,15 +1546,20 @@ def test_delete(self): idx = timedelta_range(start='1 Days', periods=5, freq='D', name='idx') # prserve freq - expected_0 = timedelta_range(start='2 Days', periods=4, freq='D', name='idx') - expected_4 = timedelta_range(start='1 Days', periods=4, freq='D', name='idx') + expected_0 = timedelta_range(start='2 Days', periods=4, freq='D', + name='idx') + expected_4 = timedelta_range(start='1 Days', periods=4, freq='D', + name='idx') # reset freq to None - expected_1 = TimedeltaIndex(['1 day','3 day','4 day', '5 day'],freq=None,name='idx') - - cases ={0: expected_0, -5: expected_0, - -1: expected_4, 4: expected_4, - 1: expected_1} + expected_1 = TimedeltaIndex( + ['1 day', '3 day', '4 day', '5 day'], freq=None, name='idx') + + cases = {0: expected_0, + -5: expected_0, + -1: expected_4, + 4: expected_4, + 1: expected_1} for n, expected in compat.iteritems(cases): result = idx.delete(n) self.assertTrue(result.equals(expected)) @@ -1518,16 +1574,19 @@ def test_delete_slice(self): idx = timedelta_range(start='1 days', periods=10, freq='D', name='idx') # prserve freq - expected_0_2 = timedelta_range(start='4 days', periods=7, freq='D', name='idx') - expected_7_9 = timedelta_range(start='1 days', periods=7, freq='D', name='idx') + expected_0_2 = timedelta_range(start='4 days', periods=7, freq='D', + name='idx') + expected_7_9 = timedelta_range(start='1 days', periods=7, freq='D', + name='idx') # reset freq to None - expected_3_5 = TimedeltaIndex(['1 d','2 d','3 d', - '7 d','8 d','9 d','10d'], freq=None, name='idx') + expected_3_5 = TimedeltaIndex(['1 d', '2 d', '3 d', + '7 d', '8 d', '9 d', '10d'], + freq=None, name='idx') - cases ={(0, 1, 2): expected_0_2, - (7, 8, 9): expected_7_9, - (3, 4, 5): expected_3_5} + cases = {(0, 1, 2): expected_0_2, + (7, 8, 9): expected_7_9, + (3, 4, 5): expected_3_5} for n, expected in compat.iteritems(cases): result = idx.delete(n) 
self.assertTrue(result.equals(expected)) @@ -1541,12 +1600,12 @@ def test_delete_slice(self): def test_take(self): - tds = ['1day 02:00:00','1 day 04:00:00','1 day 10:00:00'] - idx = TimedeltaIndex(start='1d',end='2d',freq='H',name='idx') + tds = ['1day 02:00:00', '1 day 04:00:00', '1 day 10:00:00'] + idx = TimedeltaIndex(start='1d', end='2d', freq='H', name='idx') expected = TimedeltaIndex(tds, freq=None, name='idx') taken1 = idx.take([2, 4, 10]) - taken2 = idx[[2,4,10]] + taken2 = idx[[2, 4, 10]] for taken in [taken1, taken2]: self.assertTrue(taken.equals(expected)) @@ -1567,8 +1626,9 @@ def test_isin(self): [False, False, True, False]) def test_does_not_convert_mixed_integer(self): - df = tm.makeCustomDataframe(10, 10, data_gen_f=lambda *args, **kwargs: - randn(), r_idx_type='i', c_idx_type='td') + df = tm.makeCustomDataframe(10, 10, + data_gen_f=lambda *args, **kwargs: randn(), + r_idx_type='i', c_idx_type='td') str(df) cols = df.columns.join(df.index, how='outer') @@ -1580,7 +1640,7 @@ def test_does_not_convert_mixed_integer(self): def test_slice_keeps_name(self): # GH4226 - dr = pd.timedelta_range('1d','5d', freq='H', name='timebucket') + dr = pd.timedelta_range('1d', '5d', freq='H', name='timebucket') self.assertEqual(dr[1:].name, dr.name) def test_join_self(self): @@ -1592,11 +1652,11 @@ def test_join_self(self): self.assertIs(index, joined) def test_factorize(self): - idx1 = TimedeltaIndex(['1 day','1 day','2 day', - '2 day','3 day','3 day']) + idx1 = TimedeltaIndex(['1 day', '1 day', '2 day', '2 day', '3 day', + '3 day']) exp_arr = np.array([0, 0, 1, 1, 2, 2]) - exp_idx = TimedeltaIndex(['1 day','2 day','3 day']) + exp_idx = TimedeltaIndex(['1 day', '2 day', '3 day']) arr, idx = idx1.factorize() self.assert_numpy_array_equal(arr, exp_arr) @@ -1613,10 +1673,10 @@ def test_factorize(self): self.assert_numpy_array_equal(arr, exp_arr) self.assertTrue(idx.equals(idx3)) -class TestSlicing(tm.TestCase): +class TestSlicing(tm.TestCase): def 
test_partial_slice(self): - rng = timedelta_range('1 day 10:11:12', freq='h',periods=500) + rng = timedelta_range('1 day 10:11:12', freq='h', periods=500) s = Series(np.arange(len(rng)), index=rng) result = s['5 day':'6 day'] @@ -1639,7 +1699,7 @@ def test_partial_slice(self): def test_partial_slice_high_reso(self): # higher reso - rng = timedelta_range('1 day 10:11:12', freq='us',periods=2000) + rng = timedelta_range('1 day 10:11:12', freq='us', periods=2000) s = Series(np.arange(len(rng)), index=rng) result = s['1 day 10:11:12':] @@ -1654,8 +1714,7 @@ def test_partial_slice_high_reso(self): self.assertEqual(result, s.iloc[1001]) def test_slice_with_negative_step(self): - ts = Series(np.arange(20), - timedelta_range('0', periods=20, freq='H')) + ts = Series(np.arange(20), timedelta_range('0', periods=20, freq='H')) SLC = pd.IndexSlice def assert_slices_equivalent(l_slc, i_slc): @@ -1670,15 +1729,17 @@ def assert_slices_equivalent(l_slc, i_slc): assert_slices_equivalent(SLC[:'7 hours':-1], SLC[:6:-1]) assert_slices_equivalent(SLC['15 hours':'7 hours':-1], SLC[15:6:-1]) - assert_slices_equivalent(SLC[Timedelta(hours=15):Timedelta(hours=7):-1], SLC[15:6:-1]) - assert_slices_equivalent(SLC['15 hours':Timedelta(hours=7):-1], SLC[15:6:-1]) - assert_slices_equivalent(SLC[Timedelta(hours=15):'7 hours':-1], SLC[15:6:-1]) + assert_slices_equivalent(SLC[Timedelta(hours=15):Timedelta(hours=7):- + 1], SLC[15:6:-1]) + assert_slices_equivalent(SLC['15 hours':Timedelta(hours=7):-1], + SLC[15:6:-1]) + assert_slices_equivalent(SLC[Timedelta(hours=15):'7 hours':-1], + SLC[15:6:-1]) assert_slices_equivalent(SLC['7 hours':'15 hours':-1], SLC[:0]) def test_slice_with_zero_step_raises(self): - ts = Series(np.arange(20), - timedelta_range('0', periods=20, freq='H')) + ts = Series(np.arange(20), timedelta_range('0', periods=20, freq='H')) self.assertRaisesRegexp(ValueError, 'slice step cannot be zero', lambda: ts[::0]) self.assertRaisesRegexp(ValueError, 'slice step cannot be zero', @@ 
-1694,7 +1755,7 @@ def test_tdi_ops_attributes(self): tm.assert_index_equal(result, exp) self.assertEqual(result.freq, '2D') - result = rng -2 + result = rng - 2 exp = timedelta_range('-2 days', periods=5, freq='2D', name='x') tm.assert_index_equal(result, exp) self.assertEqual(result.freq, '2D') @@ -1709,7 +1770,7 @@ def test_tdi_ops_attributes(self): tm.assert_index_equal(result, exp) self.assertEqual(result.freq, 'D') - result = - rng + result = -rng exp = timedelta_range('-2 days', periods=5, freq='-2D', name='x') tm.assert_index_equal(result, exp) self.assertEqual(result.freq, '-2D') diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py index e81df61983216..84065c0340aad 100644 --- a/pandas/tseries/tests/test_timeseries.py +++ b/pandas/tseries/tests/test_timeseries.py @@ -11,10 +11,9 @@ import pandas.tslib as tslib import pandas.index as _index import pandas as pd -from pandas import (Index, Series, DataFrame, - isnull, date_range, Timestamp, Period, DatetimeIndex, - Int64Index, to_datetime, bdate_range, Float64Index, - NaT, timedelta_range, Timedelta) +from pandas import (Index, Series, DataFrame, isnull, date_range, Timestamp, + Period, DatetimeIndex, Int64Index, to_datetime, + bdate_range, Float64Index, NaT, timedelta_range, Timedelta) import pandas.core.datetools as datetools import pandas.tseries.offsets as offsets @@ -80,20 +79,19 @@ def test_index_unique(self): self.assertTrue(result.equals(expected)) # NaT, note this is excluded - arr = [ 1370745748 + t for t in range(20) ] + [iNaT] + arr = [1370745748 + t for t in range(20)] + [iNaT] idx = DatetimeIndex(arr * 3) self.assertTrue(idx.unique().equals(DatetimeIndex(arr))) self.assertEqual(idx.nunique(), 20) self.assertEqual(idx.nunique(dropna=False), 21) - arr = [Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for - t in range(20)] + [NaT] + arr = [Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) + for t in range(20)] + [NaT] idx = DatetimeIndex(arr 
* 3) self.assertTrue(idx.unique().equals(DatetimeIndex(arr))) self.assertEqual(idx.nunique(), 20) self.assertEqual(idx.nunique(dropna=False), 21) - def test_index_dupes_contains(self): d = datetime(2011, 12, 5, 20, 30) ix = DatetimeIndex([d, d]) @@ -122,8 +120,8 @@ def test_duplicate_dates_indexing(self): self.assertRaises(KeyError, ts.__getitem__, datetime(2000, 1, 6)) # new index - ts[datetime(2000,1,6)] = 0 - self.assertEqual(ts[datetime(2000,1,6)], 0) + ts[datetime(2000, 1, 6)] = 0 + self.assertEqual(ts[datetime(2000, 1, 6)], 0) def test_range_slice(self): idx = DatetimeIndex(['1/1/2000', '1/2/2000', '1/2/2000', '1/3/2000', @@ -187,11 +185,13 @@ def test_indexing_over_size_cutoff(self): def test_indexing_unordered(self): # GH 2437 rng = date_range(start='2011-01-01', end='2011-01-15') - ts = Series(randn(len(rng)), index=rng) - ts2 = concat([ts[0:4],ts[-4:],ts[4:-4]]) + ts = Series(randn(len(rng)), index=rng) + ts2 = concat([ts[0:4], ts[-4:], ts[4:-4]]) for t in ts.index: - s = str(t) + # TODO: unused? 
+ s = str(t) # noqa + expected = ts[t] result = ts2[t] self.assertTrue(expected == result) @@ -201,21 +201,21 @@ def compare(slobj): result = ts2[slobj].copy() result = result.sort_index() expected = ts[slobj] - assert_series_equal(result,expected) + assert_series_equal(result, expected) - compare(slice('2011-01-01','2011-01-15')) - compare(slice('2010-12-30','2011-01-15')) - compare(slice('2011-01-01','2011-01-16')) + compare(slice('2011-01-01', '2011-01-15')) + compare(slice('2010-12-30', '2011-01-15')) + compare(slice('2011-01-01', '2011-01-16')) # partial ranges - compare(slice('2011-01-01','2011-01-6')) - compare(slice('2011-01-06','2011-01-8')) - compare(slice('2011-01-06','2011-01-12')) + compare(slice('2011-01-01', '2011-01-6')) + compare(slice('2011-01-06', '2011-01-8')) + compare(slice('2011-01-06', '2011-01-12')) # single values result = ts2['2011'].sort_index() expected = ts['2011'] - assert_series_equal(result,expected) + assert_series_equal(result, expected) # diff freq rng = date_range(datetime(2005, 1, 1), periods=20, freq='M') @@ -229,7 +229,7 @@ def compare(slobj): def test_indexing(self): idx = date_range("2001-1-1", periods=20, freq='M') - ts = Series(np.random.rand(len(idx)),index=idx) + ts = Series(np.random.rand(len(idx)), index=idx) # getting @@ -237,7 +237,7 @@ def test_indexing(self): expected = ts['2001'] expected.name = 'A' - df = DataFrame(dict(A = ts)) + df = DataFrame(dict(A=ts)) result = df['2001']['A'] assert_series_equal(expected, result) @@ -280,11 +280,11 @@ def test_indexing(self): assert_frame_equal(result, expected) # this is a single date, so will raise - self.assertRaises(KeyError, df.__getitem__, df.index[2],) + self.assertRaises(KeyError, df.__getitem__, df.index[2], ) def test_recreate_from_data(self): - freqs = ['M', 'Q', 'A', 'D', 'B', 'BH', 'T', - 'S', 'L', 'U', 'H', 'N', 'C'] + freqs = ['M', 'Q', 'A', 'D', 'B', 'BH', 'T', 'S', 'L', 'U', 'H', 'N', + 'C'] for f in freqs: org = DatetimeIndex(start='2001/02/01 09:00', 
freq=f, periods=1) @@ -298,9 +298,9 @@ def test_recreate_from_data(self): def assert_range_equal(left, right): - assert(left.equals(right)) - assert(left.freq == right.freq) - assert(left.tz == right.tz) + assert (left.equals(right)) + assert (left.freq == right.freq) + assert (left.tz == right.tz) class TestTimeSeries(tm.TestCase): @@ -373,7 +373,7 @@ def test_series_box_timestamp(self): tm.assertIsInstance(s.iat[5], Timestamp) def test_series_box_timedelta(self): - rng = timedelta_range('1 day 1 s',periods=5,freq='h') + rng = timedelta_range('1 day 1 s', periods=5, freq='h') s = Series(rng) tm.assertIsInstance(s[1], Timedelta) tm.assertIsInstance(s.iat[2], Timedelta) @@ -383,13 +383,12 @@ def test_date_range_ambiguous_arguments(self): start = datetime(2011, 1, 1, 5, 3, 40) end = datetime(2011, 1, 1, 8, 9, 40) - self.assertRaises(ValueError, date_range, start, end, - freq='s', periods=10) + self.assertRaises(ValueError, date_range, start, end, freq='s', + periods=10) def test_timestamp_to_datetime(self): tm._skip_if_no_pytz() - rng = date_range('20090415', '20090519', - tz='US/Eastern') + rng = date_range('20090415', '20090519', tz='US/Eastern') stamp = rng[0] dtval = stamp.to_pydatetime() @@ -398,8 +397,7 @@ def test_timestamp_to_datetime(self): def test_timestamp_to_datetime_dateutil(self): tm._skip_if_no_pytz() - rng = date_range('20090415', '20090519', - tz='dateutil/US/Eastern') + rng = date_range('20090415', '20090519', tz='dateutil/US/Eastern') stamp = rng[0] dtval = stamp.to_pydatetime() @@ -421,8 +419,7 @@ def test_timestamp_to_datetime_explicit_dateutil(self): tm._skip_if_windows_python_3() tm._skip_if_no_dateutil() from pandas.tslib import _dateutil_gettz as gettz - rng = date_range('20090415', '20090519', - tz=gettz('US/Eastern')) + rng = date_range('20090415', '20090519', tz=gettz('US/Eastern')) stamp = rng[0] dtval = stamp.to_pydatetime() @@ -668,20 +665,17 @@ def test_pad_require_monotonicity(self): # neither monotonic increasing or decreasing rng2 
= rng[[1, 0, 2]] - self.assertRaises(ValueError, rng2.get_indexer, rng, - method='pad') + self.assertRaises(ValueError, rng2.get_indexer, rng, method='pad') def test_frame_ctor_datetime64_column(self): - rng = date_range('1/1/2000 00:00:00', '1/1/2000 1:59:50', - freq='10s') + rng = date_range('1/1/2000 00:00:00', '1/1/2000 1:59:50', freq='10s') dates = np.asarray(rng) df = DataFrame({'A': np.random.randn(len(rng)), 'B': dates}) self.assertTrue(np.issubdtype(df['B'].dtype, np.dtype('M8[ns]'))) def test_frame_add_datetime64_column(self): - rng = date_range('1/1/2000 00:00:00', '1/1/2000 1:59:50', - freq='10s') + rng = date_range('1/1/2000 00:00:00', '1/1/2000 1:59:50', freq='10s') df = DataFrame(index=np.arange(len(rng))) df['A'] = rng @@ -730,34 +724,40 @@ def test_frame_add_datetime64_col_other_units(self): def test_to_datetime_unit(self): epoch = 1370745748 - s = Series([ epoch + t for t in range(20) ]) - result = to_datetime(s,unit='s') - expected = Series([ Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ]) - assert_series_equal(result,expected) - - s = Series([ epoch + t for t in range(20) ]).astype(float) - result = to_datetime(s,unit='s') - expected = Series([ Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ]) - assert_series_equal(result,expected) - - s = Series([ epoch + t for t in range(20) ] + [iNaT]) - result = to_datetime(s,unit='s') - expected = Series([ Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ] + [NaT]) - assert_series_equal(result,expected) - - s = Series([ epoch + t for t in range(20) ] + [iNaT]).astype(float) - result = to_datetime(s,unit='s') - expected = Series([ Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ] + [NaT]) - assert_series_equal(result,expected) - - s = concat([Series([ epoch + t for t in range(20) ]).astype(float),Series([np.nan])],ignore_index=True) - result = to_datetime(s,unit='s') - expected = Series([ 
Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t) for t in range(20) ] + [NaT]) - assert_series_equal(result,expected) + s = Series([epoch + t for t in range(20)]) + result = to_datetime(s, unit='s') + expected = Series([Timestamp('2013-06-09 02:42:28') + timedelta( + seconds=t) for t in range(20)]) + assert_series_equal(result, expected) + + s = Series([epoch + t for t in range(20)]).astype(float) + result = to_datetime(s, unit='s') + expected = Series([Timestamp('2013-06-09 02:42:28') + timedelta( + seconds=t) for t in range(20)]) + assert_series_equal(result, expected) + + s = Series([epoch + t for t in range(20)] + [iNaT]) + result = to_datetime(s, unit='s') + expected = Series([Timestamp('2013-06-09 02:42:28') + timedelta( + seconds=t) for t in range(20)] + [NaT]) + assert_series_equal(result, expected) + + s = Series([epoch + t for t in range(20)] + [iNaT]).astype(float) + result = to_datetime(s, unit='s') + expected = Series([Timestamp('2013-06-09 02:42:28') + timedelta( + seconds=t) for t in range(20)] + [NaT]) + assert_series_equal(result, expected) + + s = concat([Series([epoch + t for t in range(20)] + ).astype(float), Series([np.nan])], + ignore_index=True) + result = to_datetime(s, unit='s') + expected = Series([Timestamp('2013-06-09 02:42:28') + timedelta( + seconds=t) for t in range(20)] + [NaT]) + assert_series_equal(result, expected) def test_series_ctor_datetime64(self): - rng = date_range('1/1/2000 00:00:00', '1/1/2000 1:59:50', - freq='10s') + rng = date_range('1/1/2000 00:00:00', '1/1/2000 1:59:50', freq='10s') dates = np.asarray(rng) series = Series(dates) @@ -865,13 +865,13 @@ def test_string_na_nat_conversion(self): malformed = np.array(['1/100/2000', np.nan], dtype=object) # GH 10636, default is now 'raise' - self.assertRaises(ValueError, lambda : to_datetime(malformed, errors='raise')) + self.assertRaises(ValueError, + lambda: to_datetime(malformed, errors='raise')) result = to_datetime(malformed, errors='ignore') 
tm.assert_numpy_array_equal(result, malformed) - self.assertRaises(ValueError, to_datetime, malformed, - errors='raise') + self.assertRaises(ValueError, to_datetime, malformed, errors='raise') idx = ['a', 'b', 'c', 'd', 'e'] series = Series(['1/1/2000', np.nan, '1/3/2000', np.nan, @@ -911,9 +911,11 @@ def test_to_datetime_default(self): xp = datetime(2001, 1, 1) self.assertTrue(rs, xp) - #### dayfirst is essentially broken - #### to_datetime('01-13-2012', dayfirst=True) - #### self.assertRaises(ValueError, to_datetime('01-13-2012', dayfirst=True)) + # dayfirst is essentially broken + + # to_datetime('01-13-2012', dayfirst=True) + # self.assertRaises(ValueError, to_datetime('01-13-2012', + # dayfirst=True)) def test_to_datetime_on_datetime64_series(self): # #2699 @@ -928,25 +930,30 @@ def test_to_datetime_with_apply(self): # GH 5195 # with a format and coerce a single item to_datetime fails - td = Series(['May 04', 'Jun 02', 'Dec 11'], index=[1,2,3]) + td = Series(['May 04', 'Jun 02', 'Dec 11'], index=[1, 2, 3]) expected = pd.to_datetime(td, format='%b %y') result = td.apply(pd.to_datetime, format='%b %y') assert_series_equal(result, expected) - td = pd.Series(['May 04', 'Jun 02', ''], index=[1,2,3]) - self.assertRaises(ValueError, lambda : pd.to_datetime(td,format='%b %y', errors='raise')) - self.assertRaises(ValueError, lambda : td.apply(pd.to_datetime, format='%b %y', errors='raise')) + td = pd.Series(['May 04', 'Jun 02', ''], index=[1, 2, 3]) + self.assertRaises(ValueError, + lambda: pd.to_datetime(td, format='%b %y', + errors='raise')) + self.assertRaises(ValueError, + lambda: td.apply(pd.to_datetime, format='%b %y', + errors='raise')) expected = pd.to_datetime(td, format='%b %y', errors='coerce') - result = td.apply(lambda x: pd.to_datetime(x, format='%b %y', errors='coerce')) + result = td.apply( + lambda x: pd.to_datetime(x, format='%b %y', errors='coerce')) assert_series_equal(result, expected) def test_nat_vector_field_access(self): idx = 
DatetimeIndex(['1/1/2000', None, None, '1/4/2000']) - fields = ['year', 'quarter', 'month', 'day', 'hour', - 'minute', 'second', 'microsecond', 'nanosecond', - 'week', 'dayofyear', 'days_in_month'] + fields = ['year', 'quarter', 'month', 'day', 'hour', 'minute', + 'second', 'microsecond', 'nanosecond', 'week', 'dayofyear', + 'days_in_month'] for field in fields: result = getattr(idx, field) expected = [getattr(x, field) if x is not NaT else np.nan @@ -954,22 +961,21 @@ def test_nat_vector_field_access(self): self.assert_numpy_array_equal(result, np.array(expected)) def test_nat_scalar_field_access(self): - fields = ['year', 'quarter', 'month', 'day', 'hour', - 'minute', 'second', 'microsecond', 'nanosecond', - 'week', 'dayofyear', 'days_in_month', 'daysinmonth', - 'dayofweek'] + fields = ['year', 'quarter', 'month', 'day', 'hour', 'minute', + 'second', 'microsecond', 'nanosecond', 'week', 'dayofyear', + 'days_in_month', 'daysinmonth', 'dayofweek'] for field in fields: result = getattr(NaT, field) self.assertTrue(np.isnan(result)) def test_NaT_methods(self): # GH 9513 - raise_methods = ['astimezone', 'combine', 'ctime', 'dst', 'fromordinal', - 'fromtimestamp', 'isocalendar', 'isoformat', - 'strftime', 'strptime', - 'time', 'timestamp', 'timetuple', 'timetz', - 'toordinal', 'tzname', 'utcfromtimestamp', - 'utcnow', 'utcoffset', 'utctimetuple'] + raise_methods = ['astimezone', 'combine', 'ctime', 'dst', + 'fromordinal', 'fromtimestamp', 'isocalendar', + 'isoformat', 'strftime', 'strptime', 'time', + 'timestamp', 'timetuple', 'timetz', 'toordinal', + 'tzname', 'utcfromtimestamp', 'utcnow', 'utcoffset', + 'utctimetuple'] nat_methods = ['date', 'now', 'replace', 'to_datetime', 'today'] nan_methods = ['weekday', 'isoweekday'] @@ -1004,16 +1010,16 @@ def test_to_datetime_types(self): result = to_datetime('2012') self.assertEqual(result, expected) - ### array = ['2012','20120101','20120101 12:01:01'] - array = ['20120101','20120101 12:01:01'] + # array = 
['2012','20120101','20120101 12:01:01'] + array = ['20120101', '20120101 12:01:01'] expected = list(to_datetime(array)) - result = lmap(Timestamp,array) - tm.assert_almost_equal(result,expected) + result = lmap(Timestamp, array) + tm.assert_almost_equal(result, expected) - ### currently fails ### - ### result = Timestamp('2012') - ### expected = to_datetime('2012') - ### self.assertEqual(result, expected) + # currently fails ### + # result = Timestamp('2012') + # expected = to_datetime('2012') + # self.assertEqual(result, expected) def test_to_datetime_unprocessable_input(self): # GH 4928 @@ -1051,15 +1057,9 @@ def test_to_datetime_dt64s(self): ] for dt in in_bound_dts: - self.assertEqual( - pd.to_datetime(dt), - Timestamp(dt) - ) + self.assertEqual(pd.to_datetime(dt), Timestamp(dt)) - oob_dts = [ - np.datetime64('1000-01-01'), - np.datetime64('5000-01-02'), - ] + oob_dts = [np.datetime64('1000-01-01'), np.datetime64('5000-01-02'), ] for dt in oob_dts: self.assertRaises(ValueError, pd.to_datetime, dt, errors='raise') @@ -1067,10 +1067,7 @@ def test_to_datetime_dt64s(self): self.assertIs(pd.to_datetime(dt, errors='coerce'), NaT) def test_to_datetime_array_of_dt64s(self): - dts = [ - np.datetime64('2000-01-01'), - np.datetime64('2000-01-02'), - ] + dts = [np.datetime64('2000-01-01'), np.datetime64('2000-01-02'), ] # Assuming all datetimes are in bounds, to_datetime() returns # an array that is equal to Timestamp() parsing @@ -1082,22 +1079,18 @@ def test_to_datetime_array_of_dt64s(self): # A list of datetimes where the last one is out of bounds dts_with_oob = dts + [np.datetime64('9999-01-01')] - self.assertRaises( - ValueError, - pd.to_datetime, - dts_with_oob, - errors='raise' - ) + self.assertRaises(ValueError, pd.to_datetime, dts_with_oob, + errors='raise') self.assert_numpy_array_equal( pd.to_datetime(dts_with_oob, box=False, errors='coerce'), np.array( - [ - Timestamp(dts_with_oob[0]).asm8, - Timestamp(dts_with_oob[1]).asm8, - iNaT, - ], - dtype='M8' + [ + 
Timestamp(dts_with_oob[0]).asm8, + Timestamp(dts_with_oob[1]).asm8, + iNaT, + ], + dtype='M8' ) ) @@ -1107,8 +1100,8 @@ def test_to_datetime_array_of_dt64s(self): self.assert_numpy_array_equal( pd.to_datetime(dts_with_oob, box=False, errors='ignore'), np.array( - [dt.item() for dt in dts_with_oob], - dtype='O' + [dt.item() for dt in dts_with_oob], + dtype='O' ) ) @@ -1116,14 +1109,17 @@ def test_to_datetime_tz(self): # xref 8260 # uniform returns a DatetimeIndex - arr = [pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'),pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Pacific')] + arr = [pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'), + pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Pacific')] result = pd.to_datetime(arr) - expected = DatetimeIndex(['2013-01-01 13:00:00','2013-01-02 14:00:00'],tz='US/Pacific') + expected = DatetimeIndex( + ['2013-01-01 13:00:00', '2013-01-02 14:00:00'], tz='US/Pacific') tm.assert_index_equal(result, expected) # mixed tzs will raise - arr = [pd.Timestamp('2013-01-01 13:00:00', tz='US/Pacific'),pd.Timestamp('2013-01-02 14:00:00', tz='US/Eastern')] - self.assertRaises(ValueError, lambda : pd.to_datetime(arr)) + arr = [pd.Timestamp('2013-01-01 13:00:00', tz='US/Pacific'), + pd.Timestamp('2013-01-02 14:00:00', tz='US/Eastern')] + self.assertRaises(ValueError, lambda: pd.to_datetime(arr)) def test_to_datetime_tz_pytz(self): @@ -1132,10 +1128,15 @@ def test_to_datetime_tz_pytz(self): import pytz us_eastern = pytz.timezone('US/Eastern') - arr = np.array([us_eastern.localize(datetime(year=2000, month=1, day=1, hour=3, minute=0)), - us_eastern.localize(datetime(year=2000, month=6, day=1, hour=3, minute=0))],dtype=object) + arr = np.array([us_eastern.localize(datetime(year=2000, month=1, day=1, + hour=3, minute=0)), + us_eastern.localize(datetime(year=2000, month=6, day=1, + hour=3, minute=0))], + dtype=object) result = pd.to_datetime(arr, utc=True) - expected = DatetimeIndex(['2000-01-01 08:00:00+00:00', '2000-06-01 
07:00:00+00:00'], dtype='datetime64[ns, UTC]', freq=None) + expected = DatetimeIndex(['2000-01-01 08:00:00+00:00', + '2000-06-01 07:00:00+00:00'], + dtype='datetime64[ns, UTC]', freq=None) tm.assert_index_equal(result, expected) def test_to_datetime_tz_psycopg2(self): @@ -1147,15 +1148,22 @@ def test_to_datetime_tz_psycopg2(self): raise nose.SkipTest("no psycopg2 installed") # misc cases - arr = np.array([ datetime(2000, 1, 1, 3, 0, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-300, name=None)), - datetime(2000, 6, 1, 3, 0, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-240, name=None))], dtype=object) + tz1 = psycopg2.tz.FixedOffsetTimezone(offset=-300, name=None) + tz2 = psycopg2.tz.FixedOffsetTimezone(offset=-240, name=None) + arr = np.array([datetime(2000, 1, 1, 3, 0, tzinfo=tz1), + datetime(2000, 6, 1, 3, 0, tzinfo=tz2)], + dtype=object) result = pd.to_datetime(arr, errors='coerce', utc=True) - expected = DatetimeIndex(['2000-01-01 08:00:00+00:00', '2000-06-01 07:00:00+00:00'], dtype='datetime64[ns, UTC]', freq=None) + expected = DatetimeIndex(['2000-01-01 08:00:00+00:00', + '2000-06-01 07:00:00+00:00'], + dtype='datetime64[ns, UTC]', freq=None) tm.assert_index_equal(result, expected) # dtype coercion - i = pd.DatetimeIndex(['2000-01-01 08:00:00+00:00'],tz=psycopg2.tz.FixedOffsetTimezone(offset=-300, name=None)) + i = pd.DatetimeIndex([ + '2000-01-01 08:00:00+00:00' + ], tz=psycopg2.tz.FixedOffsetTimezone(offset=-300, name=None)) self.assertFalse(com.is_datetime64_ns_dtype(i)) # tz coerceion @@ -1218,8 +1226,8 @@ def test_reindex_with_datetimes(self): def test_asfreq_keep_index_name(self): # GH #9854 index_name = 'bar' - index = pd.date_range('20130101',periods=20,name=index_name) - df = pd.DataFrame([x for x in range(20)],columns=['foo'],index=index) + index = pd.date_range('20130101', periods=20, name=index_name) + df = pd.DataFrame([x for x in range(20)], columns=['foo'], index=index) tm.assert_equal(index_name, df.index.name) tm.assert_equal(index_name, 
df.asfreq('10D').index.name) @@ -1291,12 +1299,14 @@ def test_date_range_gen_error(self): def test_date_range_negative_freq(self): # GH 11018 rng = date_range('2011-12-31', freq='-2A', periods=3) - exp = pd.DatetimeIndex(['2011-12-31', '2009-12-31', '2007-12-31'], freq='-2A') + exp = pd.DatetimeIndex( + ['2011-12-31', '2009-12-31', '2007-12-31'], freq='-2A') self.assert_index_equal(rng, exp) self.assertEqual(rng.freq, '-2A') rng = date_range('2011-01-31', freq='-2M', periods=3) - exp = pd.DatetimeIndex(['2011-01-31', '2010-11-30', '2010-09-30'], freq='-2M') + exp = pd.DatetimeIndex( + ['2011-01-31', '2010-11-30', '2010-09-30'], freq='-2M') self.assert_index_equal(rng, exp) self.assertEqual(rng.freq, '-2M') @@ -1566,10 +1576,10 @@ def test_between_time_formats(self): rng = date_range('1/1/2000', '1/5/2000', freq='5min') ts = DataFrame(np.random.randn(len(rng), 2), index=rng) - strings = [("2:00", "2:30"), ("0200", "0230"), - ("2:00am", "2:30am"), ("0200am", "0230am"), - ("2:00:00", "2:30:00"), ("020000", "023000"), - ("2:00:00am", "2:30:00am"), ("020000am", "023000am")] + strings = [("2:00", "2:30"), ("0200", "0230"), ("2:00am", "2:30am"), + ("0200am", "0230am"), ("2:00:00", "2:30:00"), + ("020000", "023000"), ("2:00:00am", "2:30:00am"), + ("020000am", "023000am")] expected_length = 28 for time_string in strings: @@ -1590,13 +1600,15 @@ def test_dti_constructor_years_only(self): expected1 = date_range('2014-01-31', '2014-12-31', freq='M', tz=tz) rng2 = date_range('2014', '2015', freq='MS', tz=tz) - expected2 = date_range('2014-01-01', '2015-01-01', freq='MS', tz=tz) + expected2 = date_range('2014-01-01', '2015-01-01', freq='MS', + tz=tz) rng3 = date_range('2014', '2020', freq='A', tz=tz) expected3 = date_range('2014-12-31', '2019-12-31', freq='A', tz=tz) rng4 = date_range('2014', '2020', freq='AS', tz=tz) - expected4 = date_range('2014-01-01', '2020-01-01', freq='AS', tz=tz) + expected4 = date_range('2014-01-01', '2020-01-01', freq='AS', + tz=tz) for rng, expected 
in [(rng1, expected1), (rng2, expected2), (rng3, expected3), (rng4, expected4)]: @@ -1609,9 +1621,13 @@ def test_normalize(self): expected = date_range('1/1/2000', periods=10, freq='D') self.assertTrue(result.equals(expected)) - rng_ns = pd.DatetimeIndex(np.array([1380585623454345752, 1380585612343234312]).astype("datetime64[ns]")) + rng_ns = pd.DatetimeIndex(np.array([1380585623454345752, + 1380585612343234312]).astype( + "datetime64[ns]")) rng_ns_normalized = rng_ns.normalize() - expected = pd.DatetimeIndex(np.array([1380585600000000000, 1380585600000000000]).astype("datetime64[ns]")) + expected = pd.DatetimeIndex(np.array([1380585600000000000, + 1380585600000000000]).astype( + "datetime64[ns]")) self.assertTrue(rng_ns_normalized.equals(expected)) self.assertTrue(result.is_normalized) @@ -1633,7 +1649,8 @@ def test_to_period(self): assert_series_equal(pts, exp) # GH 7606 without freq - idx = DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04']) + idx = DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', + '2011-01-04']) exp_idx = pd.PeriodIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04'], freq='D') @@ -1798,7 +1815,10 @@ def test_timestamp_fields(self): # extra fields from DatetimeIndex like quarter and week idx = tm.makeDateIndex(100) - fields = ['dayofweek', 'dayofyear', 'week', 'weekofyear', 'quarter', 'days_in_month', 'is_month_start', 'is_month_end', 'is_quarter_start', 'is_quarter_end', 'is_year_start', 'is_year_end'] + fields = ['dayofweek', 'dayofyear', 'week', 'weekofyear', 'quarter', + 'days_in_month', 'is_month_start', 'is_month_end', + 'is_quarter_start', 'is_quarter_end', 'is_year_start', + 'is_year_end'] for f in fields: expected = getattr(idx, f)[-1] result = getattr(Timestamp(idx[-1]), f) @@ -1809,33 +1829,34 @@ def test_timestamp_fields(self): def test_woy_boundary(self): # make sure weeks at year boundaries are correct - d = datetime(2013,12,31) + d = datetime(2013, 12, 31) result = Timestamp(d).week - 
expected = 1 # ISO standard + expected = 1 # ISO standard self.assertEqual(result, expected) - d = datetime(2008,12,28) + d = datetime(2008, 12, 28) result = Timestamp(d).week - expected = 52 # ISO standard + expected = 52 # ISO standard self.assertEqual(result, expected) - d = datetime(2009,12,31) + d = datetime(2009, 12, 31) result = Timestamp(d).week - expected = 53 # ISO standard + expected = 53 # ISO standard self.assertEqual(result, expected) - d = datetime(2010,1,1) + d = datetime(2010, 1, 1) result = Timestamp(d).week - expected = 53 # ISO standard + expected = 53 # ISO standard self.assertEqual(result, expected) - d = datetime(2010,1,3) + d = datetime(2010, 1, 3) result = Timestamp(d).week - expected = 53 # ISO standard + expected = 53 # ISO standard self.assertEqual(result, expected) - result = np.array([Timestamp(datetime(*args)).week for args in - [(2000,1,1),(2000,1,2),(2005,1,1),(2005,1,2)]]) + result = np.array([Timestamp(datetime(*args)).week + for args in [(2000, 1, 1), (2000, 1, 2), ( + 2005, 1, 1), (2005, 1, 2)]]) self.assertTrue((result == [52, 52, 53, 53]).all()) def test_timestamp_date_out_of_range(self): @@ -1866,7 +1887,7 @@ def test_timestamp_from_ordinal(self): # with a tzinfo stamp = Timestamp('2011-4-16', tz='US/Eastern') dt_tz = stamp.to_pydatetime() - ts = Timestamp.fromordinal(dt_tz.toordinal(),tz='US/Eastern') + ts = Timestamp.fromordinal(dt_tz.toordinal(), tz='US/Eastern') self.assertEqual(ts.to_pydatetime(), dt_tz) def test_datetimeindex_integers_shift(self): @@ -1922,15 +1943,15 @@ def test_append_concat(self): self.assertIsNone(rng1.append(rng2).name) def test_append_concat_tz(self): - #GH 2938 + # GH 2938 tm._skip_if_no_pytz() rng = date_range('5/8/2012 1:45', periods=10, freq='5T', tz='US/Eastern') rng2 = date_range('5/8/2012 2:35', periods=10, freq='5T', - tz='US/Eastern') + tz='US/Eastern') rng3 = date_range('5/8/2012 1:45', periods=20, freq='5T', - tz='US/Eastern') + tz='US/Eastern') ts = Series(np.random.randn(len(rng)), 
rng) df = DataFrame(np.random.randn(len(rng), 4), index=rng) ts2 = Series(np.random.randn(len(rng2)), rng2) @@ -1952,9 +1973,9 @@ def test_append_concat_tz_explicit_pytz(self): rng = date_range('5/8/2012 1:45', periods=10, freq='5T', tz=timezone('US/Eastern')) rng2 = date_range('5/8/2012 2:35', periods=10, freq='5T', - tz=timezone('US/Eastern')) + tz=timezone('US/Eastern')) rng3 = date_range('5/8/2012 1:45', periods=20, freq='5T', - tz=timezone('US/Eastern')) + tz=timezone('US/Eastern')) ts = Series(np.random.randn(len(rng)), rng) df = DataFrame(np.random.randn(len(rng), 4), index=rng) ts2 = Series(np.random.randn(len(rng2)), rng2) @@ -1971,14 +1992,12 @@ def test_append_concat_tz_explicit_pytz(self): def test_append_concat_tz_dateutil(self): # GH 2938 tm._skip_if_no_dateutil() - from pandas.tslib import _dateutil_gettz as timezone - rng = date_range('5/8/2012 1:45', periods=10, freq='5T', tz='dateutil/US/Eastern') rng2 = date_range('5/8/2012 2:35', periods=10, freq='5T', - tz='dateutil/US/Eastern') + tz='dateutil/US/Eastern') rng3 = date_range('5/8/2012 1:45', periods=20, freq='5T', - tz='dateutil/US/Eastern') + tz='dateutil/US/Eastern') ts = Series(np.random.randn(len(rng)), rng) df = DataFrame(np.random.randn(len(rng), 4), index=rng) ts2 = Series(np.random.randn(len(rng2)), rng2) @@ -2148,10 +2167,11 @@ def f(x): def test_series_map_box_timedelta(self): # GH 11349 - s = Series(timedelta_range('1 day 1 s',periods=5,freq='h')) + s = Series(timedelta_range('1 day 1 s', periods=5, freq='h')) def f(x): return x.total_seconds() + s.map(f) s.apply(f) DataFrame(s).applymap(f) @@ -2165,16 +2185,18 @@ def test_concat_datetime_datetime64_frame(self): df2_obj = DataFrame.from_records(rows, columns=['date', 'test']) ind = date_range(start="2000/1/1", freq="D", periods=10) - df1 = DataFrame({'date': ind, 'test':lrange(10)}) + df1 = DataFrame({'date': ind, 'test': lrange(10)}) # it works! 
         pd.concat([df1, df2_obj])

     def test_period_resample(self):
         # GH3609
-        s = Series(range(100),index=date_range('20130101', freq='s', periods=100), dtype='float')
+        s = Series(range(100), index=date_range(
+            '20130101', freq='s', periods=100), dtype='float')
         s[10:30] = np.nan
-        expected = Series([34.5, 79.5], index=[Period('2013-01-01 00:00', 'T'), Period('2013-01-01 00:01', 'T')])
+        expected = Series([34.5, 79.5], index=[Period(
+            '2013-01-01 00:00', 'T'), Period('2013-01-01 00:01', 'T')])
         result = s.to_period().resample('T', kind='period')
         assert_series_equal(result, expected)
         result2 = s.resample('T', kind='period')
@@ -2187,9 +2209,11 @@ def test_period_resample_with_local_timezone_pytz(self):
         local_timezone = pytz.timezone('America/Los_Angeles')

-        start = datetime(year=2013, month=11, day=1, hour=0, minute=0, tzinfo=pytz.utc)
+        start = datetime(year=2013, month=11, day=1, hour=0, minute=0,
+                         tzinfo=pytz.utc)
         # 1 day later
-        end = datetime(year=2013, month=11, day=2, hour=0, minute=0, tzinfo=pytz.utc)
+        end = datetime(year=2013, month=11, day=2, hour=0, minute=0,
+                       tzinfo=pytz.utc)

         index = pd.date_range(start, end, freq='H')

@@ -2197,7 +2221,9 @@ def test_period_resample_with_local_timezone_pytz(self):
         series = series.tz_convert(local_timezone)
         result = series.resample('D', kind='period')
         # Create the expected series
-        expected_index = (pd.period_range(start=start, end=end, freq='D') - 1)  # Index is moved back a day with the timezone conversion from UTC to Pacific
+        # Index is moved back a day with the timezone conversion from UTC to
+        # Pacific
+        expected_index = (pd.period_range(start=start, end=end, freq='D') - 1)
         expected = pd.Series(1, index=expected_index)
         assert_series_equal(result, expected)
@@ -2208,9 +2234,11 @@ def test_period_resample_with_local_timezone_dateutil(self):
         local_timezone = 'dateutil/America/Los_Angeles'

-        start = datetime(year=2013, month=11, day=1, hour=0, minute=0, tzinfo=dateutil.tz.tzutc())
+        start = datetime(year=2013, month=11, day=1, hour=0, minute=0,
+                         tzinfo=dateutil.tz.tzutc())
         # 1 day later
-        end = datetime(year=2013, month=11, day=2, hour=0, minute=0, tzinfo=dateutil.tz.tzutc())
+        end = datetime(year=2013, month=11, day=2, hour=0, minute=0,
+                       tzinfo=dateutil.tz.tzutc())

         index = pd.date_range(start, end, freq='H')

@@ -2218,7 +2246,9 @@ def test_period_resample_with_local_timezone_dateutil(self):
         series = series.tz_convert(local_timezone)
         result = series.resample('D', kind='period')
         # Create the expected series
-        expected_index = (pd.period_range(start=start, end=end, freq='D') - 1)  # Index is moved back a day with the timezone conversion from UTC to Pacific
+        # Index is moved back a day with the timezone conversion from UTC to
+        # Pacific
+        expected_index = (pd.period_range(start=start, end=end, freq='D') - 1)
         expected = pd.Series(1, index=expected_index)
         assert_series_equal(result, expected)
@@ -2243,21 +2273,20 @@ def test_pickle(self):

     def test_timestamp_equality(self):
         # GH 11034
-        s = Series([Timestamp('2000-01-29 01:59:00'),'NaT'])
+        s = Series([Timestamp('2000-01-29 01:59:00'), 'NaT'])
         result = s != s
-        assert_series_equal(result, Series([False,True]))
+        assert_series_equal(result, Series([False, True]))
         result = s != s[0]
-        assert_series_equal(result, Series([False,True]))
+        assert_series_equal(result, Series([False, True]))
         result = s != s[1]
-        assert_series_equal(result, Series([True,True]))
+        assert_series_equal(result, Series([True, True]))

         result = s == s
-        assert_series_equal(result, Series([True,False]))
+        assert_series_equal(result, Series([True, False]))
         result = s == s[0]
-        assert_series_equal(result, Series([True,False]))
+        assert_series_equal(result, Series([True, False]))
         result = s == s[1]
-        assert_series_equal(result, Series([False,False]))
-
+        assert_series_equal(result, Series([False, False]))

 def _simple_ts(start, end, freq='D'):
@@ -2270,18 +2299,17 @@ class TestDatetimeIndex(tm.TestCase):

     def test_hash_error(self):
         index = date_range('20010101', periods=10)
-        with tm.assertRaisesRegexp(TypeError,
-                                   "unhashable type: %r" %
+        with tm.assertRaisesRegexp(TypeError, "unhashable type: %r" %
                                    type(index).__name__):
             hash(index)

     def test_stringified_slice_with_tz(self):
-        #GH2658
+        # GH2658
         import datetime
-        start=datetime.datetime.now()
-        idx=DatetimeIndex(start=start,freq="1d",periods=10)
-        df=DataFrame(lrange(10),index=idx)
-        df["2013-01-14 23:44:34.437768-05:00":] # no exception here
+        start = datetime.datetime.now()
+        idx = DatetimeIndex(start=start, freq="1d", periods=10)
+        df = DataFrame(lrange(10), index=idx)
+        df["2013-01-14 23:44:34.437768-05:00":]  # no exception here

     def test_append_join_nondatetimeindex(self):
         rng = date_range('1/1/2000', periods=10)
@@ -2293,7 +2321,6 @@ def test_append_join_nondatetimeindex(self):
         # it works
         rng.join(idx, how='outer')

-
     def test_astype(self):
         rng = date_range('1/1/2000', periods=10)

@@ -2303,20 +2330,25 @@ def test_astype(self):
         # with tz
         rng = date_range('1/1/2000', periods=10, tz='US/Eastern')
         result = rng.astype('datetime64[ns]')
-        expected = date_range('1/1/2000', periods=10, tz='US/Eastern').tz_convert('UTC').tz_localize(None)
+        expected = (date_range('1/1/2000', periods=10,
+                               tz='US/Eastern')
+                    .tz_convert('UTC').tz_localize(None))
         tm.assert_index_equal(result, expected)

         # BUG#10442 : testing astype(str) is correct for Series/DatetimeIndex
         result = pd.Series(pd.date_range('2012-01-01', periods=3)).astype(str)
-        expected = pd.Series(['2012-01-01', '2012-01-02', '2012-01-03'], dtype=object)
+        expected = pd.Series(
+            ['2012-01-01', '2012-01-02', '2012-01-03'], dtype=object)
         tm.assert_series_equal(result, expected)

-        result = Series(pd.date_range('2012-01-01', periods=3, tz='US/Eastern')).astype(str)
-        expected = Series(['2012-01-01 00:00:00-05:00', '2012-01-02 00:00:00-05:00', '2012-01-03 00:00:00-05:00'],
+        result = Series(pd.date_range('2012-01-01', periods=3,
+                                      tz='US/Eastern')).astype(str)
+        expected = Series(['2012-01-01 00:00:00-05:00',
+                           '2012-01-02 00:00:00-05:00',
+                           '2012-01-03 00:00:00-05:00'],
                           dtype=object)
         tm.assert_series_equal(result, expected)

-
     def test_to_period_nofreq(self):
         idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-04'])
         self.assertRaises(ValueError, idx.to_period)
@@ -2324,7 +2356,8 @@ def test_to_period_nofreq(self):
         idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'],
                             freq='infer')
         self.assertEqual(idx.freqstr, 'D')
-        expected = pd.PeriodIndex(['2000-01-01', '2000-01-02', '2000-01-03'], freq='D')
+        expected = pd.PeriodIndex(
+            ['2000-01-01', '2000-01-02', '2000-01-03'], freq='D')
         self.assertTrue(idx.to_period().equals(expected))

         # GH 7606
@@ -2379,13 +2412,12 @@ def test_constructor_coverage(self):

         # non-conforming
         self.assertRaises(ValueError, DatetimeIndex,
-                          ['2000-01-01', '2000-01-02', '2000-01-04'],
-                          freq='D')
+                          ['2000-01-01', '2000-01-02', '2000-01-04'], freq='D')

-        self.assertRaises(ValueError, DatetimeIndex,
-                          start='2011-01-01', freq='b')
-        self.assertRaises(ValueError, DatetimeIndex,
-                          end='2011-01-01', freq='B')
+        self.assertRaises(ValueError, DatetimeIndex, start='2011-01-01',
+                          freq='b')
+        self.assertRaises(ValueError, DatetimeIndex, end='2011-01-01',
+                          freq='B')
         self.assertRaises(ValueError, DatetimeIndex, periods=10, freq='D')

     def test_constructor_datetime64_tzformat(self):
@@ -2394,42 +2426,50 @@ def test_constructor_datetime64_tzformat(self):
         import pytz
         # ISO 8601 format results in pytz.FixedOffset
         for freq in ['AS', 'W-SUN']:
-            idx = date_range('2013-01-01T00:00:00-05:00', '2016-01-01T23:59:59-05:00', freq=freq)
+            idx = date_range('2013-01-01T00:00:00-05:00',
+                             '2016-01-01T23:59:59-05:00', freq=freq)
             expected = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
                                   freq=freq, tz=pytz.FixedOffset(-300))
             tm.assert_index_equal(idx, expected)
             # Unable to use `US/Eastern` because of DST
-            expected_i8 = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
-                                     freq=freq, tz='America/Lima')
+            expected_i8 = date_range('2013-01-01T00:00:00',
+                                     '2016-01-01T23:59:59', freq=freq,
+                                     tz='America/Lima')
             self.assert_numpy_array_equal(idx.asi8, expected_i8.asi8)

-            idx = date_range('2013-01-01T00:00:00+09:00', '2016-01-01T23:59:59+09:00', freq=freq)
+            idx = date_range('2013-01-01T00:00:00+09:00',
+                             '2016-01-01T23:59:59+09:00', freq=freq)
             expected = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
                                   freq=freq, tz=pytz.FixedOffset(540))
             tm.assert_index_equal(idx, expected)
-            expected_i8 = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
-                                     freq=freq, tz='Asia/Tokyo')
+            expected_i8 = date_range('2013-01-01T00:00:00',
+                                     '2016-01-01T23:59:59', freq=freq,
+                                     tz='Asia/Tokyo')
             self.assert_numpy_array_equal(idx.asi8, expected_i8.asi8)

         tm._skip_if_no_dateutil()
         from dateutil.tz import tzoffset
         # Non ISO 8601 format results in dateutil.tz.tzoffset
         for freq in ['AS', 'W-SUN']:
-            idx = date_range('2013/1/1 0:00:00-5:00', '2016/1/1 23:59:59-5:00', freq=freq)
+            idx = date_range('2013/1/1 0:00:00-5:00', '2016/1/1 23:59:59-5:00',
+                             freq=freq)
             expected = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
                                   freq=freq, tz=tzoffset(None, -18000))
             tm.assert_index_equal(idx, expected)
             # Unable to use `US/Eastern` because of DST
-            expected_i8 = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
-                                     freq=freq, tz='America/Lima')
+            expected_i8 = date_range('2013-01-01T00:00:00',
+                                     '2016-01-01T23:59:59', freq=freq,
+                                     tz='America/Lima')
             self.assert_numpy_array_equal(idx.asi8, expected_i8.asi8)

-            idx = date_range('2013/1/1 0:00:00+9:00', '2016/1/1 23:59:59+09:00', freq=freq)
+            idx = date_range('2013/1/1 0:00:00+9:00',
+                             '2016/1/1 23:59:59+09:00', freq=freq)
             expected = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
                                   freq=freq, tz=tzoffset(None, 32400))
             tm.assert_index_equal(idx, expected)
-            expected_i8 = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
-                                     freq=freq, tz='Asia/Tokyo')
+            expected_i8 = date_range('2013-01-01T00:00:00',
+                                     '2016-01-01T23:59:59', freq=freq,
+                                     tz='Asia/Tokyo')
             self.assert_numpy_array_equal(idx.asi8, expected_i8.asi8)

     def test_constructor_name(self):
@@ -2546,7 +2586,6 @@ def test_map(self):
             exp = [f(x) for x in rng]
             tm.assert_almost_equal(result, exp)

-
     def test_iteration_preserves_tz(self):

         tm._skip_if_no_dateutil()
@@ -2560,7 +2599,8 @@ def test_iteration_preserves_tz(self):
             expected = index[i]
             self.assertEqual(result, expected)

-        index = date_range("2012-01-01", periods=3, freq='H', tz=dateutil.tz.tzoffset(None, -28800))
+        index = date_range("2012-01-01", periods=3, freq='H',
+                           tz=dateutil.tz.tzoffset(None, -28800))

         for i, ts in enumerate(index):
             result = ts
@@ -2569,14 +2609,14 @@ def test_iteration_preserves_tz(self):
             self.assertEqual(result, expected)

         # 9100
-        index = pd.DatetimeIndex(['2014-12-01 03:32:39.987000-08:00','2014-12-01 04:12:34.987000-08:00'])
+        index = pd.DatetimeIndex(['2014-12-01 03:32:39.987000-08:00',
+                                  '2014-12-01 04:12:34.987000-08:00'])
         for i, ts in enumerate(index):
             result = ts
             expected = index[i]
             self.assertEqual(result._repr_base, expected._repr_base)
             self.assertEqual(result, expected)

-
     def test_misc_coverage(self):
         rng = date_range('1/1/2000', periods=5)
         result = rng.groupby(rng.day)
@@ -2608,9 +2648,9 @@ def test_union_bug_1730(self):

     def test_union_bug_1745(self):
         left = DatetimeIndex(['2012-05-11 15:19:49.695000'])
-        right = DatetimeIndex(['2012-05-29 13:04:21.322000',
-                               '2012-05-11 15:27:24.873000',
-                               '2012-05-11 15:31:05.350000'])
+        right = DatetimeIndex(
+            ['2012-05-29 13:04:21.322000', '2012-05-11 15:27:24.873000',
+             '2012-05-11 15:31:05.350000'])

         result = left.union(right)
         exp = DatetimeIndex(sorted(set(list(left)) | set(list(right))))
@@ -2634,7 +2674,7 @@ def test_intersection_bug_1708(self):
         self.assertEqual(len(result), 0)

     def test_union_freq_both_none(self):
-        #GH11086
+        # GH11086
         expected = bdate_range('20150101', periods=10)
         expected.freq = None

@@ -2681,8 +2721,9 @@ def test_datetime64_with_DateOffset(self):
             with tm.assert_produces_warning(PerformanceWarning):
                 s = klass([Timestamp('2000-1-1'), Timestamp('2000-2-1')])
                 result = s + Series([pd.offsets.DateOffset(years=1),
-                                    pd.offsets.MonthEnd()])
-                exp = klass([Timestamp('2001-1-1'), Timestamp('2000-2-29')])
+                                     pd.offsets.MonthEnd()])
+                exp = klass([Timestamp('2001-1-1'), Timestamp('2000-2-29')
+                             ])
                 assert_func(result, exp)

                 # same offset
@@ -2691,36 +2732,45 @@ def test_datetime64_with_DateOffset(self):
                 exp = klass([Timestamp('2001-1-1'), Timestamp('2001-2-1')])
                 assert_func(result, exp)

-        s = klass([Timestamp('2000-01-05 00:15:00'), Timestamp('2000-01-31 00:23:00'),
-                   Timestamp('2000-01-01'), Timestamp('2000-03-31'),
-                   Timestamp('2000-02-29'), Timestamp('2000-12-31')])
+        s = klass([Timestamp('2000-01-05 00:15:00'), Timestamp(
+            '2000-01-31 00:23:00'), Timestamp('2000-01-01'), Timestamp(
+                '2000-03-31'), Timestamp('2000-02-29'), Timestamp(
+                    '2000-12-31')])

-        #DateOffset relativedelta fastpath
+        # DateOffset relativedelta fastpath
         relative_kwargs = [('years', 2), ('months', 5), ('days', 3),
-                          ('hours', 5), ('minutes', 10), ('seconds', 2),
-                          ('microseconds', 5)]
+                           ('hours', 5), ('minutes', 10), ('seconds', 2),
+                           ('microseconds', 5)]
         for i, kwd in enumerate(relative_kwargs):
             op = pd.DateOffset(**dict([kwd]))
             assert_func(klass([x + op for x in s]), s + op)
             assert_func(klass([x - op for x in s]), s - op)
-            op = pd.DateOffset(**dict(relative_kwargs[:i+1]))
+            op = pd.DateOffset(**dict(relative_kwargs[:i + 1]))
             assert_func(klass([x + op for x in s]), s + op)
             assert_func(klass([x - op for x in s]), s - op)

-        # assert these are equal on a piecewise basis
-        offsets = ['YearBegin', ('YearBegin', {'month': 5}),
-                   'YearEnd', ('YearEnd', {'month': 5}),
-                   'MonthBegin', 'MonthEnd', 'Week', ('Week', {'weekday': 3}),
-                   'BusinessDay', 'BDay', 'QuarterEnd', 'QuarterBegin',
-                   'CustomBusinessDay', 'CDay', 'CBMonthEnd','CBMonthBegin',
-                   'BMonthBegin', 'BMonthEnd', 'BusinessHour', 'BYearBegin',
-                   'BYearEnd','BQuarterBegin', ('LastWeekOfMonth', {'weekday':2}),
-                   ('FY5253Quarter', {'qtr_with_extra_week': 1, 'startingMonth': 1,
-                                      'weekday': 2, 'variation': 'nearest'}),
-                   ('FY5253',{'weekday': 0, 'startingMonth': 2, 'variation': 'nearest'}),
-                   ('WeekOfMonth', {'weekday': 2, 'week': 2}), 'Easter',
-                   ('DateOffset', {'day': 4}), ('DateOffset', {'month': 5})]
+        offsets = ['YearBegin', ('YearBegin', {'month': 5}), 'YearEnd',
+                   ('YearEnd', {'month': 5}), 'MonthBegin', 'MonthEnd',
+                   'Week', ('Week', {
+                       'weekday': 3
+                   }), 'BusinessDay', 'BDay', 'QuarterEnd', 'QuarterBegin',
+                   'CustomBusinessDay', 'CDay', 'CBMonthEnd',
+                   'CBMonthBegin', 'BMonthBegin', 'BMonthEnd',
+                   'BusinessHour', 'BYearBegin', 'BYearEnd',
+                   'BQuarterBegin', ('LastWeekOfMonth', {
+                       'weekday': 2
+                   }), ('FY5253Quarter', {'qtr_with_extra_week': 1,
+                                          'startingMonth': 1,
+                                          'weekday': 2,
+                                          'variation': 'nearest'}),
+                   ('FY5253', {'weekday': 0,
+                               'startingMonth': 2,
+                               'variation':
+                               'nearest'}), ('WeekOfMonth', {'weekday': 2,
+                                                             'week': 2}),
+                   'Easter', ('DateOffset', {'day': 4}),
+                   ('DateOffset', {'month': 5})]

         with warnings.catch_warnings(record=True):
             for normalize in (True, False):
@@ -2732,10 +2782,12 @@ def test_datetime64_with_DateOffset(self):
                     kwargs = {}

                 for n in [0, 5]:
-                    if (do in ['WeekOfMonth','LastWeekOfMonth',
-                               'FY5253Quarter','FY5253'] and n == 0):
+                    if (do in ['WeekOfMonth', 'LastWeekOfMonth',
+                               'FY5253Quarter', 'FY5253'] and n == 0):
                         continue
-                    op = getattr(pd.offsets,do)(n, normalize=normalize, **kwargs)
+                    op = getattr(pd.offsets, do)(n,
+                                                 normalize=normalize,
+                                                 **kwargs)
                     assert_func(klass([x + op for x in s]), s + op)
                     assert_func(klass([x - op for x in s]), s - op)
                     assert_func(klass([op + x for x in s]), op + s)
@@ -2832,20 +2884,22 @@ def test_round(self):
         result = dt.round('s')
         self.assertEqual(result, dt)

-        dti = date_range('20130101 09:10:11',periods=5).tz_localize('UTC').tz_convert('US/Eastern')
+        dti = date_range('20130101 09:10:11',
+                         periods=5).tz_localize('UTC').tz_convert('US/Eastern')
         result = dti.round('D')
-        expected = date_range('20130101',periods=5).tz_localize('US/Eastern')
+        expected = date_range('20130101', periods=5).tz_localize('US/Eastern')
         tm.assert_index_equal(result, expected)

         result = dti.round('s')
         tm.assert_index_equal(result, dti)

         # invalid
-        for freq in ['Y','M','foobar']:
-            self.assertRaises(ValueError, lambda : dti.round(freq))
+        for freq in ['Y', 'M', 'foobar']:
+            self.assertRaises(ValueError, lambda: dti.round(freq))

     def test_insert(self):
-        idx = DatetimeIndex(['2000-01-04', '2000-01-01', '2000-01-02'], name='idx')
+        idx = DatetimeIndex(
+            ['2000-01-04', '2000-01-01', '2000-01-02'], name='idx')

         result = idx.insert(2, datetime(2000, 1, 5))
         exp = DatetimeIndex(['2000-01-04', '2000-01-01', '2000-01-05',
@@ -2854,7 +2908,8 @@ def test_insert(self):

         # insertion of non-datetime should coerce to object index
         result = idx.insert(1, 'inserted')
-        expected = Index([datetime(2000, 1, 4), 'inserted', datetime(2000, 1, 1),
+        expected = Index([datetime(2000, 1, 4), 'inserted',
+                          datetime(2000, 1, 1),
                           datetime(2000, 1, 2)], name='idx')
         self.assertNotIsInstance(result, DatetimeIndex)
         tm.assert_index_equal(result, expected)
@@ -2869,10 +2924,14 @@ def test_insert(self):
                                   '2000-04-30'], name='idx', freq='M')

         # reset freq to None
-        expected_1_nofreq = DatetimeIndex(['2000-01-31', '2000-01-31', '2000-02-29',
-                                           '2000-03-31'], name='idx', freq=None)
-        expected_3_nofreq = DatetimeIndex(['2000-01-31', '2000-02-29', '2000-03-31',
-                                           '2000-01-02'], name='idx', freq=None)
+        expected_1_nofreq = DatetimeIndex(['2000-01-31', '2000-01-31',
+                                           '2000-02-29',
+                                           '2000-03-31'], name='idx',
+                                          freq=None)
+        expected_3_nofreq = DatetimeIndex(['2000-01-31', '2000-02-29',
+                                           '2000-03-31',
+                                           '2000-01-02'], name='idx',
+                                          freq=None)

         cases = [(0, datetime(1999, 12, 31), expected_0),
                  (-3, datetime(1999, 12, 31), expected_0),
@@ -2898,7 +2957,8 @@ def test_insert(self):
         tm._skip_if_no_pytz()
         import pytz

-        idx = date_range('1/1/2000', periods=3, freq='D', tz='Asia/Tokyo', name='idx')
+        idx = date_range('1/1/2000', periods=3, freq='D', tz='Asia/Tokyo',
+                         name='idx')
         with tm.assertRaises(ValueError):
             result = idx.insert(3, pd.Timestamp('2000-01-04'))
         with tm.assertRaises(ValueError):
@@ -2906,12 +2966,16 @@ def test_insert(self):
         with tm.assertRaises(ValueError):
             result = idx.insert(3, pd.Timestamp('2000-01-04', tz='US/Eastern'))
         with tm.assertRaises(ValueError):
-            result = idx.insert(3, datetime(2000, 1, 4, tzinfo=pytz.timezone('US/Eastern')))
+            result = idx.insert(3,
+                                datetime(2000, 1, 4,
+                                         tzinfo=pytz.timezone('US/Eastern')))

         for tz in ['US/Pacific', 'Asia/Singapore']:
-            idx = date_range('1/1/2000 09:00', periods=6, freq='H', tz=tz, name='idx')
+            idx = date_range('1/1/2000 09:00', periods=6, freq='H', tz=tz,
+                             name='idx')
             # preserve freq
-            expected = date_range('1/1/2000 09:00', periods=7, freq='H', tz=tz, name='idx')
+            expected = date_range('1/1/2000 09:00', periods=7, freq='H', tz=tz,
+                                  name='idx')
             for d in [pd.Timestamp('2000-01-01 15:00', tz=tz),
                       pytz.timezone(tz).localize(datetime(2000, 1, 1, 15))]:
@@ -2921,10 +2985,12 @@ def test_insert(self):
                 self.assertEqual(result.freq, expected.freq)
                 self.assertEqual(result.tz, expected.tz)

-            expected = DatetimeIndex(['2000-01-01 09:00', '2000-01-01 10:00', '2000-01-01 11:00',
-                                      '2000-01-01 12:00', '2000-01-01 13:00', '2000-01-01 14:00',
+            expected = DatetimeIndex(['2000-01-01 09:00', '2000-01-01 10:00',
+                                      '2000-01-01 11:00',
+                                      '2000-01-01 12:00', '2000-01-01 13:00',
+                                      '2000-01-01 14:00',
                                       '2000-01-01 10:00'], name='idx',
-                                      tz=tz, freq=None)
+                                     tz=tz, freq=None)
             # reset freq to None
             for d in [pd.Timestamp('2000-01-01 10:00', tz=tz),
                       pytz.timezone(tz).localize(datetime(2000, 1, 1, 10))]:
@@ -2938,16 +3004,20 @@ def test_delete(self):
         idx = date_range(start='2000-01-01', periods=5, freq='M', name='idx')

         # prserve freq
-        expected_0 = date_range(start='2000-02-01', periods=4, freq='M', name='idx')
-        expected_4 = date_range(start='2000-01-01', periods=4, freq='M', name='idx')
+        expected_0 = date_range(start='2000-02-01', periods=4, freq='M',
+                                name='idx')
+        expected_4 = date_range(start='2000-01-01', periods=4, freq='M',
+                                name='idx')

         # reset freq to None
         expected_1 = DatetimeIndex(['2000-01-31', '2000-03-31', '2000-04-30',
                                     '2000-05-31'], freq=None, name='idx')

-        cases ={0: expected_0, -5: expected_0,
-                -1: expected_4, 4: expected_4,
-                1: expected_1}
+        cases = {0: expected_0,
+                 -5: expected_0,
+                 -1: expected_4,
+                 4: expected_4,
+                 1: expected_1}
         for n, expected in compat.iteritems(cases):
             result = idx.delete(n)
             self.assertTrue(result.equals(expected))
@@ -2959,8 +3029,8 @@ def test_delete(self):
             result = idx.delete(5)

         for tz in [None, 'Asia/Tokyo', 'US/Pacific']:
-            idx = date_range(start='2000-01-01 09:00', periods=10,
-                             freq='H', name='idx', tz=tz)
+            idx = date_range(start='2000-01-01 09:00', periods=10, freq='H',
+                             name='idx', tz=tz)

             expected = date_range(start='2000-01-01 10:00', periods=9,
                                   freq='H', name='idx', tz=tz)
@@ -2982,17 +3052,19 @@ def test_delete_slice(self):
         idx = date_range(start='2000-01-01', periods=10, freq='D', name='idx')

         # prserve freq
-        expected_0_2 = date_range(start='2000-01-04', periods=7, freq='D', name='idx')
-        expected_7_9 = date_range(start='2000-01-01', periods=7, freq='D', name='idx')
+        expected_0_2 = date_range(start='2000-01-04', periods=7, freq='D',
+                                  name='idx')
+        expected_7_9 = date_range(start='2000-01-01', periods=7, freq='D',
+                                  name='idx')

         # reset freq to None
         expected_3_5 = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03',
-                                      '2000-01-07', '2000-01-08', '2000-01-09',
-                                      '2000-01-10'], freq=None, name='idx')
+                                      '2000-01-07', '2000-01-08', '2000-01-09',
+                                      '2000-01-10'], freq=None, name='idx')

-        cases ={(0, 1, 2): expected_0_2,
-                (7, 8, 9): expected_7_9,
-                (3, 4, 5): expected_3_5}
+        cases = {(0, 1, 2): expected_0_2,
+                 (7, 8, 9): expected_7_9,
+                 (3, 4, 5): expected_3_5}
         for n, expected in compat.iteritems(cases):
             result = idx.delete(n)
             self.assertTrue(result.equals(expected))
@@ -3005,11 +3077,12 @@ def test_delete_slice(self):
             self.assertEqual(result.freq, expected.freq)

         for tz in [None, 'Asia/Tokyo', 'US/Pacific']:
-            ts = pd.Series(1, index=pd.date_range('2000-01-01 09:00', periods=10,
-                                                  freq='H', name='idx', tz=tz))
+            ts = pd.Series(1, index=pd.date_range(
+                '2000-01-01 09:00', periods=10, freq='H', name='idx', tz=tz))
             # preserve freq
             result = ts.drop(ts.index[:5]).index
-            expected = pd.date_range('2000-01-01 14:00', periods=5, freq='H', name='idx', tz=tz)
+            expected = pd.date_range('2000-01-01 14:00', periods=5, freq='H',
+                                     name='idx', tz=tz)
             self.assertTrue(result.equals(expected))
             self.assertEqual(result.name, expected.name)
             self.assertEqual(result.freq, expected.freq)
@@ -3017,9 +3090,10 @@ def test_delete_slice(self):

             # reset freq to None
             result = ts.drop(ts.index[[1, 3, 5, 7, 9]]).index
-            expected = DatetimeIndex(['2000-01-01 09:00', '2000-01-01 11:00', '2000-01-01 13:00',
+            expected = DatetimeIndex(['2000-01-01 09:00', '2000-01-01 11:00',
+                                      '2000-01-01 13:00',
                                       '2000-01-01 15:00', '2000-01-01 17:00'],
-                                      freq=None, name='idx', tz=tz)
+                                     freq=None, name='idx', tz=tz)
             self.assertTrue(result.equals(expected))
             self.assertEqual(result.name, expected.name)
             self.assertEqual(result.freq, expected.freq)
@@ -3030,8 +3104,9 @@ def test_take(self):
                  datetime(2010, 1, 1, 17), datetime(2010, 1, 1, 21)]

         for tz in [None, 'US/Eastern', 'Asia/Tokyo']:
-            idx = DatetimeIndex(start='2010-01-01 09:00', end='2010-02-01 09:00',
-                                freq='H', tz=tz, name='idx')
+            idx = DatetimeIndex(start='2010-01-01 09:00',
+                                end='2010-02-01 09:00', freq='H', tz=tz,
+                                name='idx')
             expected = DatetimeIndex(dates, freq=None, name='idx', tz=tz)

             taken1 = idx.take([5, 6, 8, 12])
@@ -3106,8 +3181,9 @@ def test_date(self):
         self.assertTrue((result == expected).all())

     def test_does_not_convert_mixed_integer(self):
-        df = tm.makeCustomDataframe(10, 10, data_gen_f=lambda *args, **kwargs:
-                                    randn(), r_idx_type='i', c_idx_type='dt')
+        df = tm.makeCustomDataframe(10, 10,
+                                    data_gen_f=lambda *args, **kwargs: randn(),
+                                    r_idx_type='i', c_idx_type='dt')
         cols = df.columns.join(df.index, how='outer')
         joined = cols.join(df.columns)
         self.assertEqual(cols.dtype, np.dtype('O'))
@@ -3142,13 +3218,14 @@ def test_ns_index(self):
         index = pd.DatetimeIndex(dt, freq=freq, name='time')
         self.assert_index_parameters(index)

-        new_index = pd.DatetimeIndex(start=index[0], end=index[-1], freq=index.freq)
+        new_index = pd.DatetimeIndex(start=index[0], end=index[-1],
+                                     freq=index.freq)
         self.assert_index_parameters(new_index)

     def test_join_with_period_index(self):
-        df = tm.makeCustomDataframe(10, 10, data_gen_f=lambda *args:
-                                    np.random.randint(2), c_idx_type='p',
-                                    r_idx_type='dt')
+        df = tm.makeCustomDataframe(
+            10, 10, data_gen_f=lambda *args: np.random.randint(2),
+            c_idx_type='p', r_idx_type='dt')
         s = df.iloc[:5, 0]
         joins = 'left', 'right', 'inner', 'outer'

@@ -3158,8 +3235,8 @@ def test_join_with_period_index(self):
                 df.columns.join(s.index, how=join)

     def test_factorize(self):
-        idx1 = DatetimeIndex(['2014-01', '2014-01', '2014-02',
-                              '2014-02', '2014-03', '2014-03'])
+        idx1 = DatetimeIndex(['2014-01', '2014-01', '2014-02', '2014-02',
+                              '2014-03', '2014-03'])

         exp_arr = np.array([0, 0, 1, 1, 2, 2])
         exp_idx = DatetimeIndex(['2014-01', '2014-02', '2014-03'])
@@ -3181,7 +3258,7 @@ def test_factorize(self):
         self.assertTrue(idx.equals(exp_idx))

         idx2 = pd.DatetimeIndex(['2014-03', '2014-03', '2014-02', '2014-01',
-                                '2014-03', '2014-01'])
+                                 '2014-03', '2014-01'])

         exp_arr = np.array([2, 2, 1, 0, 2, 0])
         exp_idx = DatetimeIndex(['2014-01', '2014-02', '2014-03'])
@@ -3202,7 +3279,6 @@ def test_factorize(self):
         self.assert_numpy_array_equal(arr, exp_arr)
         self.assertTrue(idx.equals(idx3))

-
     def test_slice_with_negative_step(self):
         ts = Series(np.arange(20),
                     date_range('2014-01-01', periods=20, freq='MS'))
@@ -3219,10 +3295,14 @@ def assert_slices_equivalent(l_slc, i_slc):
         assert_slices_equivalent(SLC[:Timestamp('2014-10-01'):-1], SLC[:8:-1])
         assert_slices_equivalent(SLC[:'2014-10-01':-1], SLC[:8:-1])

-        assert_slices_equivalent(SLC['2015-02-01':'2014-10-01':-1], SLC[13:8:-1])
-        assert_slices_equivalent(SLC[Timestamp('2015-02-01'):Timestamp('2014-10-01'):-1], SLC[13:8:-1])
-        assert_slices_equivalent(SLC['2015-02-01':Timestamp('2014-10-01'):-1], SLC[13:8:-1])
-        assert_slices_equivalent(SLC[Timestamp('2015-02-01'):'2014-10-01':-1], SLC[13:8:-1])
+        assert_slices_equivalent(SLC['2015-02-01':'2014-10-01':-1],
+                                 SLC[13:8:-1])
+        assert_slices_equivalent(SLC[Timestamp('2015-02-01'):Timestamp(
+            '2014-10-01'):-1], SLC[13:8:-1])
+        assert_slices_equivalent(SLC['2015-02-01':Timestamp('2014-10-01'):-1],
+                                 SLC[13:8:-1])
+        assert_slices_equivalent(SLC[Timestamp('2015-02-01'):'2014-10-01':-1],
+                                 SLC[13:8:-1])

         assert_slices_equivalent(SLC['2014-10-01':'2015-02-01':-1], SLC[:0])

@@ -3237,7 +3317,6 @@ def test_slice_with_zero_step_raises(self):
                                 lambda: ts.ix[::0])

-
 class TestDatetime64(tm.TestCase):
     """
     Also test support for datetime64[ns] in Series / DataFrame
@@ -3249,8 +3328,7 @@ def setUp(self):
         self.series = Series(rand(len(dti)), dti)

     def test_datetimeindex_accessors(self):
-        dti = DatetimeIndex(
-            freq='D', start=datetime(1998, 1, 1), periods=365)
+        dti = DatetimeIndex(freq='D', start=datetime(1998, 1, 1), periods=365)

         self.assertEqual(dti.year[0], 1998)
         self.assertEqual(dti.month[0], 1)
@@ -3309,15 +3387,16 @@ def test_datetimeindex_accessors(self):
         self.assertEqual(len(dti.is_year_start), 365)
         self.assertEqual(len(dti.is_year_end), 365)

-        dti = DatetimeIndex(
-            freq='BQ-FEB', start=datetime(1998, 1, 1), periods=4)
+        dti = DatetimeIndex(freq='BQ-FEB', start=datetime(1998, 1, 1),
+                            periods=4)

         self.assertEqual(sum(dti.is_quarter_start), 0)
         self.assertEqual(sum(dti.is_quarter_end), 4)
         self.assertEqual(sum(dti.is_year_start), 0)
         self.assertEqual(sum(dti.is_year_end), 1)

-        # Ensure is_start/end accessors throw ValueError for CustomBusinessDay, CBD requires np >= 1.7
+        # Ensure is_start/end accessors throw ValueError for CustomBusinessDay,
+        # CBD requires np >= 1.7
         bday_egypt = offsets.CustomBusinessDay(weekmask='Sun Mon Tue Wed Thu')
         dti = date_range(datetime(2013, 4, 30), periods=5, freq=bday_egypt)
         self.assertRaises(ValueError, lambda: dti.is_month_start)
@@ -3363,7 +3442,6 @@ def test_datetimeindex_accessors(self):
         for ts, value in tests:
             self.assertEqual(ts, value)

-
     def test_nanosecond_field(self):
         dti = DatetimeIndex(np.arange(10))

@@ -3425,8 +3503,8 @@ def test_datetimeindex_constructor(self):
         arr = to_datetime(['1/1/2005', '1/2/2005', '1/3/2005', '2005-01-04'])
         idx5 = DatetimeIndex(arr)

-        arr = to_datetime(
-            ['1/1/2005', '1/2/2005', 'Jan 3, 2005', '2005-01-04'])
+        arr = to_datetime(['1/1/2005', '1/2/2005', 'Jan 3, 2005', '2005-01-04'
+                           ])
         idx6 = DatetimeIndex(arr)

         idx7 = DatetimeIndex(['12/05/2007', '25/01/2008'], dayfirst=True)
@@ -3470,8 +3548,7 @@ def test_datetimeindex_constructor(self):
     def test_dayfirst(self):
         # GH 5917
         arr = ['10/02/2014', '11/02/2014', '12/02/2014']
-        expected = DatetimeIndex([datetime(2014, 2, 10),
-                                  datetime(2014, 2, 11),
+        expected = DatetimeIndex([datetime(2014, 2, 10), datetime(2014, 2, 11),
                                   datetime(2014, 2, 12)])
         idx1 = DatetimeIndex(arr, dayfirst=True)
         idx2 = DatetimeIndex(np.array(arr), dayfirst=True)
@@ -3530,11 +3607,17 @@ def test_dti_set_index_reindex(self):

         # 11314
         # with tz
-        index = date_range(datetime(2015, 10, 1), datetime(2015,10,1,23), freq='H', tz='US/Eastern')
+        index = date_range(datetime(2015, 10, 1), datetime(
+            2015, 10, 1, 23), freq='H', tz='US/Eastern')
         df = DataFrame(np.random.randn(24, 1), columns=['a'], index=index)
-        new_index = date_range(datetime(2015, 10, 2), datetime(2015,10,2,23), freq='H', tz='US/Eastern')
-        result = df.set_index(new_index)
-        self.assertEqual(new_index.freq,index.freq)
+        new_index = date_range(datetime(2015, 10, 2),
+                               datetime(2015, 10, 2, 23),
+                               freq='H', tz='US/Eastern')
+
+        # TODO: unused?
+        result = df.set_index(new_index)  # noqa
+
+        self.assertEqual(new_index.freq, index.freq)

     def test_datetimeindex_union_join_empty(self):
         dti = DatetimeIndex(start='1/1/2001', end='2/1/2001', freq='D')
@@ -3576,40 +3659,42 @@ def test_slicing_datetimes(self):

         # GH 7523

         # unique
-        df = DataFrame(np.arange(4.,dtype='float64'),
-                       index=[datetime(2001, 1, i, 10, 00) for i in [1,2,3,4]])
-        result = df.ix[datetime(2001,1,1,10):]
-        assert_frame_equal(result,df)
-        result = df.ix[:datetime(2001,1,4,10)]
-        assert_frame_equal(result,df)
-        result = df.ix[datetime(2001,1,1,10):datetime(2001,1,4,10)]
-        assert_frame_equal(result,df)
-
-        result = df.ix[datetime(2001,1,1,11):]
+        df = DataFrame(np.arange(4., dtype='float64'),
+                       index=[datetime(2001, 1, i, 10, 00)
+                              for i in [1, 2, 3, 4]])
+        result = df.ix[datetime(2001, 1, 1, 10):]
+        assert_frame_equal(result, df)
+        result = df.ix[:datetime(2001, 1, 4, 10)]
+        assert_frame_equal(result, df)
+        result = df.ix[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
+        assert_frame_equal(result, df)
+
+        result = df.ix[datetime(2001, 1, 1, 11):]
         expected = df.iloc[1:]
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)
         result = df.ix['20010101 11':]
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)

         # duplicates
-        df = pd.DataFrame(np.arange(5.,dtype='float64'),
-                          index=[datetime(2001, 1, i, 10, 00) for i in [1,2,2,3,4]])
-
-        result = df.ix[datetime(2001,1,1,10):]
-        assert_frame_equal(result,df)
-        result = df.ix[:datetime(2001,1,4,10)]
-        assert_frame_equal(result,df)
-        result = df.ix[datetime(2001,1,1,10):datetime(2001,1,4,10)]
-        assert_frame_equal(result,df)
-
-        result = df.ix[datetime(2001,1,1,11):]
+        df = pd.DataFrame(np.arange(5., dtype='float64'),
+                          index=[datetime(2001, 1, i, 10, 00)
+                                 for i in [1, 2, 2, 3, 4]])
+
+        result = df.ix[datetime(2001, 1, 1, 10):]
+        assert_frame_equal(result, df)
+        result = df.ix[:datetime(2001, 1, 4, 10)]
+        assert_frame_equal(result, df)
+        result = df.ix[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
+        assert_frame_equal(result, df)
+
+        result = df.ix[datetime(2001, 1, 1, 11):]
         expected = df.iloc[1:]
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)
         result = df.ix['20010101 11':]
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)

-class TestSeriesDatetime64(tm.TestCase):

+class TestSeriesDatetime64(tm.TestCase):
     def setUp(self):
         self.series = Series(date_range('1/1/2000', periods=10))
@@ -3639,7 +3724,7 @@ def test_between(self):
         expected = (self.series >= left) & (self.series <= right)
         assert_series_equal(result, expected)

-    #----------------------------------------------------------------------
+    # ---------------------------------------------------------------------
     # NaT support

     def test_NaT_scalar(self):
@@ -3672,7 +3757,8 @@ def test_set_none_nan(self):

     def test_intercept_astype_object(self):

-        # this test no longer makes sense as series is by default already M8[ns]
+        # this test no longer makes sense as series is by default already
+        # M8[ns]
         expected = self.series.astype('object')

         df = DataFrame({'a': self.series,
@@ -3681,8 +3767,7 @@ def test_intercept_astype_object(self):
         result = df.values.squeeze()
         self.assertTrue((result[:, 0] == expected.values).all())

-        df = DataFrame({'a': self.series,
-                        'b': ['foo'] * len(self.series)})
+        df = DataFrame({'a': self.series, 'b': ['foo'] * len(self.series)})

         result = df.values.squeeze()
         self.assertTrue((result[:, 0] == expected.values).all())
@@ -3703,16 +3788,19 @@ def test_intersection(self):

             # if target has the same name, it is preserved
             rng2 = date_range('5/15/2000', '6/20/2000', freq='D', name='idx')
-            expected2 = date_range('6/1/2000', '6/20/2000', freq='D', name='idx')
+            expected2 = date_range('6/1/2000', '6/20/2000', freq='D',
+                                   name='idx')

             # if target name is different, it will be reset
             rng3 = date_range('5/15/2000', '6/20/2000', freq='D', name='other')
-            expected3 = date_range('6/1/2000', '6/20/2000', freq='D', name=None)
+            expected3 = date_range('6/1/2000', '6/20/2000', freq='D',
+                                   name=None)

             rng4 = date_range('7/1/2000', '7/31/2000', freq='D', name='idx')
             expected4 = DatetimeIndex([], name='idx')

-            for (rng, expected) in [(rng2, expected2), (rng3, expected3), (rng4, expected4)]:
+            for (rng, expected) in [(rng2, expected2), (rng3, expected3),
+                                    (rng4, expected4)]:
                 result = base.intersection(rng)
                 self.assertTrue(result.equals(expected))
                 self.assertEqual(result.name, expected.name)
@@ -3720,22 +3808,29 @@ def test_intersection(self):
                 self.assertEqual(result.tz, expected.tz)

             # non-monotonic
-            base = DatetimeIndex(['2011-01-05', '2011-01-04', '2011-01-02', '2011-01-03'],
-                                 tz=tz, name='idx')
+            base = DatetimeIndex(['2011-01-05', '2011-01-04',
+                                  '2011-01-02', '2011-01-03'],
+                                 tz=tz, name='idx')

-            rng2 = DatetimeIndex(['2011-01-04', '2011-01-02', '2011-02-02', '2011-02-03'],
+            rng2 = DatetimeIndex(['2011-01-04', '2011-01-02',
+                                  '2011-02-02', '2011-02-03'],
                                  tz=tz, name='idx')
-            expected2 = DatetimeIndex(['2011-01-04', '2011-01-02'], tz=tz, name='idx')
+            expected2 = DatetimeIndex(
+                ['2011-01-04', '2011-01-02'], tz=tz, name='idx')

-            rng3 = DatetimeIndex(['2011-01-04', '2011-01-02', '2011-02-02', '2011-02-03'],
+            rng3 = DatetimeIndex(['2011-01-04', '2011-01-02',
+                                  '2011-02-02', '2011-02-03'],
                                  tz=tz, name='other')
-            expected3 = DatetimeIndex(['2011-01-04', '2011-01-02'], tz=tz, name=None)
+            expected3 = DatetimeIndex(
+                ['2011-01-04', '2011-01-02'], tz=tz, name=None)

             # GH 7880
-            rng4 = date_range('7/1/2000', '7/31/2000', freq='D', tz=tz, name='idx')
+            rng4 = date_range('7/1/2000', '7/31/2000', freq='D', tz=tz,
+                              name='idx')
             expected4 = DatetimeIndex([], tz=tz, name='idx')

-            for (rng, expected) in [(rng2, expected2), (rng3, expected3), (rng4, expected4)]:
+            for (rng, expected) in [(rng2, expected2), (rng3, expected3),
+                                    (rng4, expected4)]:
                 result = base.intersection(rng)
                 self.assertTrue(result.equals(expected))
                 self.assertEqual(result.name, expected.name)
@@ -3758,25 +3853,36 @@ def test_date_range_bms_bug(self):
         self.assertEqual(rng[0], ex_first)

     def test_date_range_businesshour(self):
-        idx = DatetimeIndex(['2014-07-04 09:00', '2014-07-04 10:00', '2014-07-04 11:00',
-                             '2014-07-04 12:00', '2014-07-04 13:00', '2014-07-04 14:00',
-                             '2014-07-04 15:00', '2014-07-04 16:00'], freq='BH')
+        idx = DatetimeIndex(['2014-07-04 09:00', '2014-07-04 10:00',
+                             '2014-07-04 11:00',
+                             '2014-07-04 12:00', '2014-07-04 13:00',
+                             '2014-07-04 14:00',
+                             '2014-07-04 15:00', '2014-07-04 16:00'],
+                            freq='BH')
         rng = date_range('2014-07-04 09:00', '2014-07-04 16:00', freq='BH')
         tm.assert_index_equal(idx, rng)

-        idx = DatetimeIndex(['2014-07-04 16:00', '2014-07-07 09:00'], freq='BH')
+        idx = DatetimeIndex(
+            ['2014-07-04 16:00', '2014-07-07 09:00'], freq='BH')
         rng = date_range('2014-07-04 16:00', '2014-07-07 09:00', freq='BH')
         tm.assert_index_equal(idx, rng)

-        idx = DatetimeIndex(['2014-07-04 09:00', '2014-07-04 10:00', '2014-07-04 11:00',
-                             '2014-07-04 12:00', '2014-07-04 13:00', '2014-07-04 14:00',
+        idx = DatetimeIndex(['2014-07-04 09:00', '2014-07-04 10:00',
+                             '2014-07-04 11:00',
+                             '2014-07-04 12:00', '2014-07-04 13:00',
+                             '2014-07-04 14:00',
                              '2014-07-04 15:00', '2014-07-04 16:00',
-                             '2014-07-07 09:00', '2014-07-07 10:00', '2014-07-07 11:00',
-                             '2014-07-07 12:00', '2014-07-07 13:00', '2014-07-07 14:00',
+                             '2014-07-07 09:00', '2014-07-07 10:00',
+                             '2014-07-07 11:00',
+                             '2014-07-07 12:00', '2014-07-07 13:00',
+                             '2014-07-07 14:00',
                              '2014-07-07 15:00', '2014-07-07 16:00',
-                             '2014-07-08 09:00', '2014-07-08 10:00', '2014-07-08 11:00',
-                             '2014-07-08 12:00', '2014-07-08 13:00', '2014-07-08 14:00',
-                             '2014-07-08 15:00', '2014-07-08 16:00'], freq='BH')
+                             '2014-07-08 09:00', '2014-07-08 10:00',
+                             '2014-07-08 11:00',
+                             '2014-07-08 12:00', '2014-07-08 13:00',
+                             '2014-07-08 14:00',
+                             '2014-07-08 15:00', '2014-07-08 16:00'],
+                            freq='BH')
         rng = date_range('2014-07-04 09:00', '2014-07-08 16:00', freq='BH')
         tm.assert_index_equal(idx, rng)

@@ -3793,13 +3899,13 @@ def test_string_index_series_name_converted(self):

 class TestTimestamp(tm.TestCase):
-
     def test_class_ops_pytz(self):
         tm._skip_if_no_pytz()
         from pytz import timezone

         def compare(x, y):
-            self.assertEqual(int(Timestamp(x).value / 1e9), int(Timestamp(y).value / 1e9))
+            self.assertEqual(int(Timestamp(x).value / 1e9),
+                             int(Timestamp(y).value / 1e9))

         compare(Timestamp.now(), datetime.now())
         compare(Timestamp.now('UTC'), datetime.now(timezone('UTC')))
@@ -3820,13 +3926,14 @@ def test_class_ops_dateutil(self):
         tm._skip_if_no_dateutil()
         from dateutil.tz import tzutc

-        def compare(x,y):
-            self.assertEqual(int(np.round(Timestamp(x).value/1e9)), int(np.round(Timestamp(y).value/1e9)))
+        def compare(x, y):
+            self.assertEqual(int(np.round(Timestamp(x).value / 1e9)),
+                             int(np.round(Timestamp(y).value / 1e9)))

-        compare(Timestamp.now(),datetime.now())
+        compare(Timestamp.now(), datetime.now())
         compare(Timestamp.now('UTC'), datetime.now(tzutc()))
-        compare(Timestamp.utcnow(),datetime.utcnow())
-        compare(Timestamp.today(),datetime.today())
+        compare(Timestamp.utcnow(), datetime.utcnow())
+        compare(Timestamp.today(), datetime.today())
         current_time = calendar.timegm(datetime.now().utctimetuple())
         compare(Timestamp.utcfromtimestamp(current_time),
                 datetime.utcfromtimestamp(current_time))
@@ -3847,7 +3954,7 @@ def test_basics_nanos(self):
         self.assertEqual(stamp.nanosecond, 500)

     def test_unit(self):
-        def check(val,unit=None,h=1,s=1,us=0):
+        def check(val, unit=None, h=1, s=1, us=0):
             stamp = Timestamp(val, unit=unit)
             self.assertEqual(stamp.year, 2000)
             self.assertEqual(stamp.month, 1)
@@ -3868,34 +3975,34 @@ def check(val,unit=None,h=1,s=1,us=0):
         days = (ts - Timestamp('1970-01-01')).days

         check(val)
-        check(val/long(1000),unit='us')
-        check(val/long(1000000),unit='ms')
-        check(val/long(1000000000),unit='s')
-        check(days,unit='D',h=0)
+        check(val / long(1000), unit='us')
+        check(val / long(1000000), unit='ms')
+        check(val / long(1000000000), unit='s')
+        check(days, unit='D', h=0)

         # using truediv, so these are like floats
         if compat.PY3:
-            check((val+500000)/long(1000000000),unit='s',us=500)
-            check((val+500000000)/long(1000000000),unit='s',us=500000)
-            check((val+500000)/long(1000000),unit='ms',us=500)
+            check((val + 500000) / long(1000000000), unit='s', us=500)
+            check((val + 500000000) / long(1000000000), unit='s', us=500000)
+            check((val + 500000) / long(1000000), unit='ms', us=500)

         # get chopped in py2
         else:
-            check((val+500000)/long(1000000000),unit='s')
-            check((val+500000000)/long(1000000000),unit='s')
-            check((val+500000)/long(1000000),unit='ms')
+            check((val + 500000) / long(1000000000), unit='s')
+            check((val + 500000000) / long(1000000000), unit='s')
+            check((val + 500000) / long(1000000), unit='ms')

         # ok
-        check((val+500000)/long(1000),unit='us',us=500)
-        check((val+500000000)/long(1000000),unit='ms',us=500000)
+        check((val + 500000) / long(1000), unit='us', us=500)
+        check((val + 500000000) / long(1000000), unit='ms', us=500000)

         # floats
-        check(val/1000.0 + 5,unit='us',us=5)
-        check(val/1000.0 + 5000,unit='us',us=5000)
-        check(val/1000000.0 + 0.5,unit='ms',us=500)
-        check(val/1000000.0 + 0.005,unit='ms',us=5)
-        check(val/1000000000.0 + 0.5,unit='s',us=500000)
-        check(days + 0.5,unit='D',h=12)
+        check(val / 1000.0 + 5, unit='us', us=5)
+        check(val / 1000.0 + 5000, unit='us', us=5000)
+        check(val / 1000000.0 + 0.5, unit='ms', us=500)
+        check(val / 1000000.0 + 0.005, unit='ms', us=5)
+        check(val / 1000000000.0 + 0.5, unit='s', us=500000)
+        check(days + 0.5, unit='D', h=12)

         # nan
         result = Timestamp(np.nan)
@@ -3920,25 +4027,25 @@ def test_roundtrip(self):

         base = Timestamp('20140101 00:00:00')

         result = Timestamp(base.value + pd.Timedelta('5ms').value)
-        self.assertEqual(result,Timestamp(str(base) + ".005000"))
-        self.assertEqual(result.microsecond,5000)
+        self.assertEqual(result, Timestamp(str(base) + ".005000"))
+        self.assertEqual(result.microsecond, 5000)

         result = Timestamp(base.value + pd.Timedelta('5us').value)
-
self.assertEqual(result,Timestamp(str(base) + ".000005")) - self.assertEqual(result.microsecond,5) + self.assertEqual(result, Timestamp(str(base) + ".000005")) + self.assertEqual(result.microsecond, 5) result = Timestamp(base.value + pd.Timedelta('5ns').value) - self.assertEqual(result,Timestamp(str(base) + ".000000005")) - self.assertEqual(result.nanosecond,5) - self.assertEqual(result.microsecond,0) + self.assertEqual(result, Timestamp(str(base) + ".000000005")) + self.assertEqual(result.nanosecond, 5) + self.assertEqual(result.microsecond, 0) result = Timestamp(base.value + pd.Timedelta('6ms 5us').value) - self.assertEqual(result,Timestamp(str(base) + ".006005")) - self.assertEqual(result.microsecond,5+6*1000) + self.assertEqual(result, Timestamp(str(base) + ".006005")) + self.assertEqual(result.microsecond, 5 + 6 * 1000) result = Timestamp(base.value + pd.Timedelta('200ms 5us').value) - self.assertEqual(result,Timestamp(str(base) + ".200005")) - self.assertEqual(result.microsecond,5+200*1000) + self.assertEqual(result, Timestamp(str(base) + ".200005")) + self.assertEqual(result.microsecond, 5 + 200 * 1000) def test_comparison(self): # 5-18-2012 00:00:00.000 @@ -3979,7 +4086,7 @@ def test_compare_invalid(self): self.assertFalse(val == 1) self.assertFalse(val == long(1)) self.assertFalse(val == []) - self.assertFalse(val == {'foo' : 1}) + self.assertFalse(val == {'foo': 1}) self.assertFalse(val == np.float64(1)) self.assertFalse(val == np.int64(1)) @@ -3988,12 +4095,12 @@ def test_compare_invalid(self): self.assertTrue(val != 1) self.assertTrue(val != long(1)) self.assertTrue(val != []) - self.assertTrue(val != {'foo' : 1}) + self.assertTrue(val != {'foo': 1}) self.assertTrue(val != np.float64(1)) self.assertTrue(val != np.int64(1)) # ops testing - df = DataFrame(randn(5,2)) + df = DataFrame(randn(5, 2)) a = df[0] b = Series(randn(5)) b.name = Timestamp('2000-01-01') @@ -4075,7 +4182,7 @@ def test_delta_preserve_nanos(self): def test_frequency_misc(self): 
self.assertEqual(frequencies.get_freq_group('T'), - frequencies.FreqGroup.FR_MIN) + frequencies.FreqGroup.FR_MIN) code, stride = frequencies.get_freq_code(offsets.Hour()) self.assertEqual(code, frequencies.FreqGroup.FR_HR) @@ -4112,8 +4219,12 @@ def test_timestamp_compare_scalars(self): rhs = Timestamp('now') nat = Timestamp('nat') - ops = {'gt': 'lt', 'lt': 'gt', 'ge': 'le', 'le': 'ge', 'eq': 'eq', - 'ne': 'ne'} + ops = {'gt': 'lt', + 'lt': 'gt', + 'ge': 'le', + 'le': 'ge', + 'eq': 'eq', + 'ne': 'ne'} for left, right in ops.items(): left_f = getattr(operator, left) @@ -4164,7 +4275,6 @@ def test_timestamp_compare_series(self): class TestSlicing(tm.TestCase): - def test_slice_year(self): dti = DatetimeIndex(freq='B', start=datetime(2005, 1, 1), periods=500) @@ -4281,28 +4391,35 @@ def test_partial_slicing_with_multiindex(self): # GH 4758 # partial string indexing with a multi-index buggy - df = DataFrame({'ACCOUNT':["ACCT1", "ACCT1", "ACCT1", "ACCT2"], - 'TICKER':["ABC", "MNP", "XYZ", "XYZ"], - 'val':[1,2,3,4]}, - index=date_range("2013-06-19 09:30:00", periods=4, freq='5T')) + df = DataFrame({'ACCOUNT': ["ACCT1", "ACCT1", "ACCT1", "ACCT2"], + 'TICKER': ["ABC", "MNP", "XYZ", "XYZ"], + 'val': [1, 2, 3, 4]}, + index=date_range("2013-06-19 09:30:00", + periods=4, freq='5T')) df_multi = df.set_index(['ACCOUNT', 'TICKER'], append=True) - expected = DataFrame([[1]],index=Index(['ABC'],name='TICKER'),columns=['val']) + expected = DataFrame([ + [1] + ], index=Index(['ABC'], name='TICKER'), columns=['val']) result = df_multi.loc[('2013-06-19 09:30:00', 'ACCT1')] assert_frame_equal(result, expected) - expected = df_multi.loc[(pd.Timestamp('2013-06-19 09:30:00', tz=None), 'ACCT1', 'ABC')] + expected = df_multi.loc[ + (pd.Timestamp('2013-06-19 09:30:00', tz=None), 'ACCT1', 'ABC')] result = df_multi.loc[('2013-06-19 09:30:00', 'ACCT1', 'ABC')] assert_series_equal(result, expected) - # this is a KeyError as we don't do partial string selection on multi-levels + # this is a 
KeyError as we don't do partial string selection on + # multi-levels def f(): df_multi.loc[('2013-06-19', 'ACCT1', 'ABC')] + self.assertRaises(KeyError, f) # GH 4294 # partial slice on a series mi - s = pd.DataFrame(randn(1000, 1000), index=pd.date_range('2000-1-1', periods=1000)).stack() + s = pd.DataFrame(randn(1000, 1000), index=pd.date_range( + '2000-1-1', periods=1000)).stack() s2 = s[:-1].copy() expected = s2['2000-1-4'] @@ -4330,8 +4447,8 @@ def test_date_range_normalize(self): self.assert_numpy_array_equal(rng, values) - rng = date_range( - '1/1/2000 08:15', periods=n, normalize=False, freq='B') + rng = date_range('1/1/2000 08:15', periods=n, normalize=False, + freq='B') the_time = time(8, 15) for val in rng: self.assertEqual(val.time(), the_time) @@ -4427,8 +4544,7 @@ def test_min_max(self): def test_min_max_series(self): rng = date_range('1/1/2000', periods=10, freq='4h') lvls = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C'] - df = DataFrame({'TS': rng, 'V': np.random.randn(len(rng)), - 'L': lvls}) + df = DataFrame({'TS': rng, 'V': np.random.randn(len(rng)), 'L': lvls}) result = df.TS.max() exp = Timestamp(df.TS.iat[-1]) @@ -4441,8 +4557,7 @@ def test_min_max_series(self): self.assertEqual(result, exp) def test_from_M8_structured(self): - dates = [(datetime(2012, 9, 9, 0, 0), - datetime(2012, 9, 8, 15, 10))] + dates = [(datetime(2012, 9, 9, 0, 0), datetime(2012, 9, 8, 15, 10))] arr = np.array(dates, dtype=[('Date', 'M8[us]'), ('Forecasting', 'M8[us]')]) df = DataFrame(arr) @@ -4462,8 +4577,7 @@ def test_get_level_values_box(self): dates = date_range('1/1/2000', periods=4) levels = [dates, [0, 1]] - labels = [[0, 0, 1, 1, 2, 2, 3, 3], - [0, 1, 0, 1, 0, 1, 0, 1]] + labels = [[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]] index = MultiIndex(levels=levels, labels=labels) @@ -4479,18 +4593,14 @@ def test_frame_apply_dont_convert_datetime64(self): self.assertTrue(df.x1.dtype == 'M8[ns]') def test_date_range_fy5252(self): - dr = 
date_range(start="2013-01-01", - periods=2, - freq=offsets.FY5253(startingMonth=1, - weekday=3, - variation="nearest")) + dr = date_range(start="2013-01-01", periods=2, freq=offsets.FY5253( + startingMonth=1, weekday=3, variation="nearest")) self.assertEqual(dr[0], Timestamp('2013-01-31')) self.assertEqual(dr[1], Timestamp('2014-01-30')) def test_partial_slice_doesnt_require_monotonicity(self): # For historical reasons. - s = pd.Series(np.arange(10), - pd.date_range('2014-01-01', periods=10)) + s = pd.Series(np.arange(10), pd.date_range('2014-01-01', periods=10)) nonmonotonic = s[[3, 5, 4]] expected = nonmonotonic.iloc[:0] @@ -4509,15 +4619,16 @@ class TimeConversionFormats(tm.TestCase): def test_to_datetime_format(self): values = ['1/1/2000', '1/2/2000', '1/3/2000'] - results1 = [ Timestamp('20000101'), Timestamp('20000201'), - Timestamp('20000301') ] - results2 = [ Timestamp('20000101'), Timestamp('20000102'), - Timestamp('20000103') ] - for vals, expecteds in [ (values, (Index(results1), Index(results2))), - (Series(values),(Series(results1), Series(results2))), - (values[0], (results1[0], results2[0])), - (values[1], (results1[1], results2[1])), - (values[2], (results1[2], results2[2])) ]: + results1 = [Timestamp('20000101'), Timestamp('20000201'), + Timestamp('20000301')] + results2 = [Timestamp('20000101'), Timestamp('20000102'), + Timestamp('20000103')] + for vals, expecteds in [(values, (Index(results1), Index(results2))), + (Series(values), + (Series(results1), Series(results2))), + (values[0], (results1[0], results2[0])), + (values[1], (results1[1], results2[1])), + (values[2], (results1[2], results2[2]))]: for i, fmt in enumerate(['%d/%m/%Y', '%m/%d/%Y']): result = to_datetime(vals, format=fmt) @@ -4531,52 +4642,55 @@ def test_to_datetime_format(self): self.assertTrue(result.equals(expected)) def test_to_datetime_format_YYYYMMDD(self): - s = Series([19801222,19801222] + [19810105]*5) - expected = Series([ Timestamp(x) for x in s.apply(str) ]) + s = 
Series([19801222, 19801222] + [19810105] * 5) + expected = Series([Timestamp(x) for x in s.apply(str)]) - result = to_datetime(s,format='%Y%m%d') + result = to_datetime(s, format='%Y%m%d') assert_series_equal(result, expected) - result = to_datetime(s.apply(str),format='%Y%m%d') + result = to_datetime(s.apply(str), format='%Y%m%d') assert_series_equal(result, expected) # with NaT - expected = Series([Timestamp("19801222"),Timestamp("19801222")] + [Timestamp("19810105")]*5) + expected = Series([Timestamp("19801222"), Timestamp("19801222")] + + [Timestamp("19810105")] * 5) expected[2] = np.nan s[2] = np.nan - result = to_datetime(s,format='%Y%m%d') + result = to_datetime(s, format='%Y%m%d') assert_series_equal(result, expected) # string with NaT s = s.apply(str) s[2] = 'nat' - result = to_datetime(s,format='%Y%m%d') + result = to_datetime(s, format='%Y%m%d') assert_series_equal(result, expected) # coercion # GH 7930 s = Series([20121231, 20141231, 99991231]) - result = pd.to_datetime(s,format='%Y%m%d',errors='ignore') - expected = np.array([ datetime(2012,12,31), datetime(2014,12,31), datetime(9999,12,31) ], dtype=object) + result = pd.to_datetime(s, format='%Y%m%d', errors='ignore') + expected = np.array([datetime(2012, 12, 31), datetime( + 2014, 12, 31), datetime(9999, 12, 31)], dtype=object) self.assert_numpy_array_equal(result, expected) - result = pd.to_datetime(s,format='%Y%m%d', errors='coerce') - expected = Series(['20121231','20141231','NaT'],dtype='M8[ns]') + result = pd.to_datetime(s, format='%Y%m%d', errors='coerce') + expected = Series(['20121231', '20141231', 'NaT'], dtype='M8[ns]') assert_series_equal(result, expected) # GH 10178 def test_to_datetime_format_integer(self): s = Series([2000, 2001, 2002]) - expected = Series([ Timestamp(x) for x in s.apply(str) ]) + expected = Series([Timestamp(x) for x in s.apply(str)]) - result = to_datetime(s,format='%Y') + result = to_datetime(s, format='%Y') assert_series_equal(result, expected) s = Series([200001, 
200105, 200206]) - expected = Series([ Timestamp(x[:4] + '-' + x[4:]) for x in s.apply(str) ]) + expected = Series([Timestamp(x[:4] + '-' + x[4:]) for x in s.apply(str) + ]) - result = to_datetime(s,format='%Y%m') + result = to_datetime(s, format='%Y%m') assert_series_equal(result, expected) def test_to_datetime_format_microsecond(self): @@ -4588,13 +4702,19 @@ def test_to_datetime_format_microsecond(self): def test_to_datetime_format_time(self): data = [ - ['01/10/2010 15:20', '%m/%d/%Y %H:%M', Timestamp('2010-01-10 15:20')], - ['01/10/2010 05:43', '%m/%d/%Y %I:%M', Timestamp('2010-01-10 05:43')], - ['01/10/2010 13:56:01', '%m/%d/%Y %H:%M:%S', Timestamp('2010-01-10 13:56:01')]#, - #['01/10/2010 08:14 PM', '%m/%d/%Y %I:%M %p', Timestamp('2010-01-10 20:14')], - #['01/10/2010 07:40 AM', '%m/%d/%Y %I:%M %p', Timestamp('2010-01-10 07:40')], - #['01/10/2010 09:12:56 AM', '%m/%d/%Y %I:%M:%S %p', Timestamp('2010-01-10 09:12:56')] - ] + ['01/10/2010 15:20', '%m/%d/%Y %H:%M', + Timestamp('2010-01-10 15:20')], + ['01/10/2010 05:43', '%m/%d/%Y %I:%M', + Timestamp('2010-01-10 05:43')], + ['01/10/2010 13:56:01', '%m/%d/%Y %H:%M:%S', + Timestamp('2010-01-10 13:56:01')] # , + # ['01/10/2010 08:14 PM', '%m/%d/%Y %I:%M %p', + # Timestamp('2010-01-10 20:14')], + # ['01/10/2010 07:40 AM', '%m/%d/%Y %I:%M %p', + # Timestamp('2010-01-10 07:40')], + # ['01/10/2010 09:12:56 AM', '%m/%d/%Y %I:%M:%S %p', + # Timestamp('2010-01-10 09:12:56')] + ] for s, format, dt in data: self.assertEqual(to_datetime(s, format=format), dt) @@ -4607,9 +4727,10 @@ def test_to_datetime_with_non_exact(self): if sys.version_info < (2, 7): raise nose.SkipTest('on python version < 2.7') - s = Series(['19MAY11','foobar19MAY11','19MAY11:00:00:00','19MAY11 00:00:00Z']) - result = to_datetime(s,format='%d%b%y',exact=False) - expected = to_datetime(s.str.extract('(\d+\w+\d+)'),format='%d%b%y') + s = Series(['19MAY11', 'foobar19MAY11', '19MAY11:00:00:00', + '19MAY11 00:00:00Z']) + result = to_datetime(s, 
format='%d%b%y', exact=False) + expected = to_datetime(s.str.extract('(\d+\w+\d+)'), format='%d%b%y') assert_series_equal(result, expected) def test_parse_nanoseconds_with_formula(self): @@ -4620,25 +4741,24 @@ def test_parse_nanoseconds_with_formula(self): "2012-01-01 09:00:00.000001", "2012-01-01 09:00:00.001", "2012-01-01 09:00:00.001000", - "2012-01-01 09:00:00.001000000", - ]: + "2012-01-01 09:00:00.001000000", ]: expected = pd.to_datetime(v) - result = pd.to_datetime(v, format="%Y-%m-%d %H:%M:%S.%f") - self.assertEqual(result,expected) + result = pd.to_datetime(v, format="%Y-%m-%d %H:%M:%S.%f") + self.assertEqual(result, expected) def test_to_datetime_format_weeks(self): data = [ - ['2009324', '%Y%W%w', Timestamp('2009-08-13')], - ['2013020', '%Y%U%w', Timestamp('2013-01-13')] - ] + ['2009324', '%Y%W%w', Timestamp('2009-08-13')], + ['2013020', '%Y%U%w', Timestamp('2013-01-13')] + ] for s, format, dt in data: self.assertEqual(to_datetime(s, format=format), dt) + class TestToDatetimeInferFormat(tm.TestCase): def test_to_datetime_infer_datetime_format_consistent_format(self): - time_series = pd.Series( - pd.date_range('20000101', periods=50, freq='H') - ) + time_series = pd.Series(pd.date_range('20000101', periods=50, + freq='H')) test_formats = [ '%m-%d-%Y', @@ -4648,16 +4768,13 @@ def test_to_datetime_infer_datetime_format_consistent_format(self): for test_format in test_formats: s_as_dt_strings = time_series.apply( - lambda x: x.strftime(test_format) - ) + lambda x: x.strftime(test_format)) with_format = pd.to_datetime(s_as_dt_strings, format=test_format) - no_infer = pd.to_datetime( - s_as_dt_strings, infer_datetime_format=False - ) - yes_infer = pd.to_datetime( - s_as_dt_strings, infer_datetime_format=True - ) + no_infer = pd.to_datetime(s_as_dt_strings, + infer_datetime_format=False) + yes_infer = pd.to_datetime(s_as_dt_strings, + infer_datetime_format=True) # Whether the format is explicitly passed, it is inferred, or # it is not inferred, the results 
should all be the same @@ -4665,11 +4782,10 @@ def test_to_datetime_infer_datetime_format_consistent_format(self): self.assert_numpy_array_equal(no_infer, yes_infer) def test_to_datetime_infer_datetime_format_inconsistent_format(self): - test_series = pd.Series( - np.array([ - '01/01/2011 00:00:00', - '01-02-2011 00:00:00', - '2011-01-03T00:00:00', + test_series = pd.Series(np.array([ + '01/01/2011 00:00:00', + '01-02-2011 00:00:00', + '2011-01-03T00:00:00', ])) # When the format is inconsistent, infer_datetime_format should just @@ -4679,11 +4795,10 @@ def test_to_datetime_infer_datetime_format_inconsistent_format(self): pd.to_datetime(test_series, infer_datetime_format=True) ) - test_series = pd.Series( - np.array([ - 'Jan/01/2011', - 'Feb/01/2011', - 'Mar/01/2011', + test_series = pd.Series(np.array([ + 'Jan/01/2011', + 'Feb/01/2011', + 'Mar/01/2011', ])) self.assert_numpy_array_equal( @@ -4692,12 +4807,11 @@ def test_to_datetime_infer_datetime_format_inconsistent_format(self): ) def test_to_datetime_infer_datetime_format_series_with_nans(self): - test_series = pd.Series( - np.array([ - '01/01/2011 00:00:00', - np.nan, - '01/03/2011 00:00:00', - np.nan, + test_series = pd.Series(np.array([ + '01/01/2011 00:00:00', + np.nan, + '01/03/2011 00:00:00', + np.nan, ])) self.assert_numpy_array_equal( @@ -4706,13 +4820,12 @@ def test_to_datetime_infer_datetime_format_series_with_nans(self): ) def test_to_datetime_infer_datetime_format_series_starting_with_nans(self): - test_series = pd.Series( - np.array([ - np.nan, - np.nan, - '01/01/2011 00:00:00', - '01/02/2011 00:00:00', - '01/03/2011 00:00:00', + test_series = pd.Series(np.array([ + np.nan, + np.nan, + '01/01/2011 00:00:00', + '01/02/2011 00:00:00', + '01/03/2011 00:00:00', ])) self.assert_numpy_array_equal( @@ -4723,14 +4836,13 @@ def test_to_datetime_infer_datetime_format_series_starting_with_nans(self): class TestGuessDatetimeFormat(tm.TestCase): def test_guess_datetime_format_with_parseable_formats(self): - 
dt_string_to_format = ( - ('20111230', '%Y%m%d'), - ('2011-12-30', '%Y-%m-%d'), - ('30-12-2011', '%d-%m-%Y'), - ('2011-12-30 00:00:00', '%Y-%m-%d %H:%M:%S'), - ('2011-12-30T00:00:00', '%Y-%m-%dT%H:%M:%S'), - ('2011-12-30 00:00:00.000000', '%Y-%m-%d %H:%M:%S.%f'), - ) + dt_string_to_format = (('20111230', '%Y%m%d'), + ('2011-12-30', '%Y-%m-%d'), + ('30-12-2011', '%d-%m-%Y'), + ('2011-12-30 00:00:00', '%Y-%m-%d %H:%M:%S'), + ('2011-12-30T00:00:00', '%Y-%m-%dT%H:%M:%S'), + ('2011-12-30 00:00:00.000000', + '%Y-%m-%d %H:%M:%S.%f'), ) for dt_string, dt_format in dt_string_to_format: self.assertEqual( @@ -4754,11 +4866,9 @@ def test_guess_datetime_format_with_locale_specific_formats(self): # case these wont be parsed properly (dateutil can't parse them) _skip_if_has_locale() - dt_string_to_format = ( - ('30/Dec/2011', '%d/%b/%Y'), - ('30/December/2011', '%d/%B/%Y'), - ('30/Dec/2011 00:00:00', '%d/%b/%Y %H:%M:%S'), - ) + dt_string_to_format = (('30/Dec/2011', '%d/%b/%Y'), + ('30/December/2011', '%d/%B/%Y'), + ('30/Dec/2011 00:00:00', '%d/%b/%Y %H:%M:%S'), ) for dt_string, dt_format in dt_string_to_format: self.assertEqual( @@ -4786,14 +4896,12 @@ def test_guess_datetime_format_invalid_inputs(self): def test_guess_datetime_format_nopadding(self): # GH 11142 - dt_string_to_format = ( - ('2011-1-1', '%Y-%m-%d'), - ('30-1-2011', '%d-%m-%Y'), - ('1/1/2011', '%m/%d/%Y'), - ('2011-1-1 00:00:00', '%Y-%m-%d %H:%M:%S'), - ('2011-1-1 0:0:0', '%Y-%m-%d %H:%M:%S'), - ('2011-1-3T00:00:0', '%Y-%m-%dT%H:%M:%S') - ) + dt_string_to_format = (('2011-1-1', '%Y-%m-%d'), + ('30-1-2011', '%d-%m-%Y'), + ('1/1/2011', '%m/%d/%Y'), + ('2011-1-1 00:00:00', '%Y-%m-%d %H:%M:%S'), + ('2011-1-1 0:0:0', '%Y-%m-%d %H:%M:%S'), + ('2011-1-3T00:00:0', '%Y-%m-%dT%H:%M:%S')) for dt_string, dt_format in dt_string_to_format: self.assertEqual( @@ -4801,7 +4909,6 @@ def test_guess_datetime_format_nopadding(self): dt_format ) - def test_guess_datetime_format_for_array(self): expected_format = '%Y-%m-%d %H:%M:%S.%f' 
dt_string = datetime(2011, 12, 30, 0, 0, 0).strftime(expected_format) @@ -4819,13 +4926,12 @@ def test_guess_datetime_format_for_array(self): ) format_for_string_of_nans = tools._guess_datetime_format_for_array( - np.array([np.nan, np.nan, np.nan], dtype='O') - ) + np.array( + [np.nan, np.nan, np.nan], dtype='O')) self.assertTrue(format_for_string_of_nans is None) class TestTimestampToJulianDate(tm.TestCase): - def test_compare_1700(self): r = Timestamp('1700-06-23').to_julian_date() self.assertEqual(r, 2342145.5) @@ -4849,98 +4955,96 @@ def test_compare_hour13(self): class TestDateTimeIndexToJulianDate(tm.TestCase): def test_1700(self): - r1 = Float64Index([2345897.5, - 2345898.5, - 2345899.5, - 2345900.5, + r1 = Float64Index([2345897.5, 2345898.5, 2345899.5, 2345900.5, 2345901.5]) - r2 = date_range(start=Timestamp('1710-10-01'), - periods=5, + r2 = date_range(start=Timestamp('1710-10-01'), periods=5, freq='D').to_julian_date() self.assertIsInstance(r2, Float64Index) tm.assert_index_equal(r1, r2) def test_2000(self): - r1 = Float64Index([2451601.5, - 2451602.5, - 2451603.5, - 2451604.5, + r1 = Float64Index([2451601.5, 2451602.5, 2451603.5, 2451604.5, 2451605.5]) - r2 = date_range(start=Timestamp('2000-02-27'), - periods=5, + r2 = date_range(start=Timestamp('2000-02-27'), periods=5, freq='D').to_julian_date() self.assertIsInstance(r2, Float64Index) tm.assert_index_equal(r1, r2) def test_hour(self): - r1 = Float64Index([2451601.5, - 2451601.5416666666666666, - 2451601.5833333333333333, - 2451601.625, - 2451601.6666666666666666]) - r2 = date_range(start=Timestamp('2000-02-27'), - periods=5, + r1 = Float64Index( + [2451601.5, 2451601.5416666666666666, 2451601.5833333333333333, + 2451601.625, 2451601.6666666666666666]) + r2 = date_range(start=Timestamp('2000-02-27'), periods=5, freq='H').to_julian_date() self.assertIsInstance(r2, Float64Index) tm.assert_index_equal(r1, r2) def test_minute(self): - r1 = Float64Index([2451601.5, - 2451601.5006944444444444, - 
2451601.5013888888888888, - 2451601.5020833333333333, - 2451601.5027777777777777]) - r2 = date_range(start=Timestamp('2000-02-27'), - periods=5, + r1 = Float64Index( + [2451601.5, 2451601.5006944444444444, 2451601.5013888888888888, + 2451601.5020833333333333, 2451601.5027777777777777]) + r2 = date_range(start=Timestamp('2000-02-27'), periods=5, freq='T').to_julian_date() self.assertIsInstance(r2, Float64Index) tm.assert_index_equal(r1, r2) def test_second(self): - r1 = Float64Index([2451601.5, - 2451601.500011574074074, - 2451601.5000231481481481, - 2451601.5000347222222222, - 2451601.5000462962962962]) - r2 = date_range(start=Timestamp('2000-02-27'), - periods=5, + r1 = Float64Index( + [2451601.5, 2451601.500011574074074, 2451601.5000231481481481, + 2451601.5000347222222222, 2451601.5000462962962962]) + r2 = date_range(start=Timestamp('2000-02-27'), periods=5, freq='S').to_julian_date() self.assertIsInstance(r2, Float64Index) tm.assert_index_equal(r1, r2) -class TestDaysInMonth(tm.TestCase): +class TestDaysInMonth(tm.TestCase): def test_coerce_deprecation(self): # deprecation of coerce with tm.assert_produces_warning(FutureWarning): to_datetime('2015-02-29', coerce=True) with tm.assert_produces_warning(FutureWarning): - self.assertRaises(ValueError, lambda : to_datetime('2015-02-29', coerce=False)) + self.assertRaises(ValueError, + lambda: to_datetime('2015-02-29', coerce=False)) # multiple arguments - for e, c in zip(['raise','ignore','coerce'],[True,False]): + for e, c in zip(['raise', 'ignore', 'coerce'], [True, False]): with tm.assert_produces_warning(FutureWarning): - self.assertRaises(TypeError, lambda : to_datetime('2015-02-29', errors=e, coerce=c)) + self.assertRaises(TypeError, + lambda: to_datetime('2015-02-29', errors=e, + coerce=c)) # tests for issue #10154 def test_day_not_in_month_coerce(self): self.assertTrue(isnull(to_datetime('2015-02-29', errors='coerce'))) - self.assertTrue(isnull(to_datetime('2015-02-29', format="%Y-%m-%d", errors='coerce'))) - 
self.assertTrue(isnull(to_datetime('2015-02-32', format="%Y-%m-%d", errors='coerce'))) - self.assertTrue(isnull(to_datetime('2015-04-31', format="%Y-%m-%d", errors='coerce'))) + self.assertTrue(isnull(to_datetime('2015-02-29', format="%Y-%m-%d", + errors='coerce'))) + self.assertTrue(isnull(to_datetime('2015-02-32', format="%Y-%m-%d", + errors='coerce'))) + self.assertTrue(isnull(to_datetime('2015-04-31', format="%Y-%m-%d", + errors='coerce'))) def test_day_not_in_month_raise(self): - self.assertRaises(ValueError, to_datetime, '2015-02-29', errors='raise') - self.assertRaises(ValueError, to_datetime, '2015-02-29', errors='raise', format="%Y-%m-%d") - self.assertRaises(ValueError, to_datetime, '2015-02-32', errors='raise', format="%Y-%m-%d") - self.assertRaises(ValueError, to_datetime, '2015-04-31', errors='raise', format="%Y-%m-%d") + self.assertRaises(ValueError, to_datetime, '2015-02-29', + errors='raise') + self.assertRaises(ValueError, to_datetime, '2015-02-29', + errors='raise', format="%Y-%m-%d") + self.assertRaises(ValueError, to_datetime, '2015-02-32', + errors='raise', format="%Y-%m-%d") + self.assertRaises(ValueError, to_datetime, '2015-04-31', + errors='raise', format="%Y-%m-%d") def test_day_not_in_month_ignore(self): - self.assertEqual(to_datetime('2015-02-29', errors='ignore'), '2015-02-29') - self.assertEqual(to_datetime('2015-02-29', errors='ignore', format="%Y-%m-%d"), '2015-02-29') - self.assertEqual(to_datetime('2015-02-32', errors='ignore', format="%Y-%m-%d"), '2015-02-32') - self.assertEqual(to_datetime('2015-04-31', errors='ignore', format="%Y-%m-%d"), '2015-04-31') + self.assertEqual(to_datetime( + '2015-02-29', errors='ignore'), '2015-02-29') + self.assertEqual(to_datetime( + '2015-02-29', errors='ignore', format="%Y-%m-%d"), '2015-02-29') + self.assertEqual(to_datetime( + '2015-02-32', errors='ignore', format="%Y-%m-%d"), '2015-02-32') + self.assertEqual(to_datetime( + '2015-04-31', errors='ignore', format="%Y-%m-%d"), '2015-04-31') + if 
__name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], diff --git a/pandas/tseries/tests/test_timeseries_legacy.py b/pandas/tseries/tests/test_timeseries_legacy.py index 4cbc171364ee6..96a9fd67733a1 100644 --- a/pandas/tseries/tests/test_timeseries_legacy.py +++ b/pandas/tseries/tests/test_timeseries_legacy.py @@ -1,5 +1,5 @@ # pylint: disable-msg=E1101,W0612 -from datetime import datetime, time, timedelta +from datetime import datetime import sys import os @@ -8,9 +8,8 @@ import numpy as np randn = np.random.randn -from pandas import (Index, Series, DataFrame, - isnull, date_range, Timestamp, DatetimeIndex, - Int64Index, to_datetime, bdate_range) +from pandas import (Index, Series, date_range, Timestamp, + DatetimeIndex, Int64Index, to_datetime) import pandas.core.datetools as datetools import pandas.tseries.offsets as offsets @@ -19,9 +18,7 @@ from pandas.util.testing import assert_series_equal, assert_almost_equal import pandas.util.testing as tm -from pandas.compat import( - range, long, StringIO, lrange, lmap, map, zip, cPickle as pickle, product -) +from pandas.compat import StringIO, cPickle as pickle from pandas import read_pickle from numpy.random import rand import pandas.compat as compat @@ -182,7 +179,7 @@ def test_tolist(self): tm.assertIsInstance(result[0], Timestamp) def test_object_convert_fail(self): - idx = DatetimeIndex([NaT]) + idx = DatetimeIndex([np.NaT]) self.assertRaises(ValueError, idx.astype, 'O') def test_setops_conversion_fail(self): @@ -199,17 +196,16 @@ def test_setops_conversion_fail(self): self.assertTrue(result.equals(expected)) def test_legacy_time_rules(self): - rules = [('WEEKDAY', 'B'), - ('EOM', 'BM'), - ('W@MON', 'W-MON'), ('W@TUE', 'W-TUE'), ('W@WED', 'W-WED'), - ('W@THU', 'W-THU'), ('W@FRI', 'W-FRI'), - ('Q@JAN', 'BQ-JAN'), ('Q@FEB', 'BQ-FEB'), ('Q@MAR', 'BQ-MAR'), - ('A@JAN', 'BA-JAN'), ('A@FEB', 'BA-FEB'), ('A@MAR', 'BA-MAR'), - ('A@APR', 'BA-APR'), ('A@MAY', 'BA-MAY'), ('A@JUN', 
'BA-JUN'), - ('A@JUL', 'BA-JUL'), ('A@AUG', 'BA-AUG'), ('A@SEP', 'BA-SEP'), - ('A@OCT', 'BA-OCT'), ('A@NOV', 'BA-NOV'), ('A@DEC', 'BA-DEC'), - ('WOM@1FRI', 'WOM-1FRI'), ('WOM@2FRI', 'WOM-2FRI'), - ('WOM@3FRI', 'WOM-3FRI'), ('WOM@4FRI', 'WOM-4FRI')] + rules = [('WEEKDAY', 'B'), ('EOM', 'BM'), ('W@MON', 'W-MON'), + ('W@TUE', 'W-TUE'), ('W@WED', 'W-WED'), ('W@THU', 'W-THU'), + ('W@FRI', 'W-FRI'), ('Q@JAN', 'BQ-JAN'), ('Q@FEB', 'BQ-FEB'), + ('Q@MAR', 'BQ-MAR'), ('A@JAN', 'BA-JAN'), ('A@FEB', 'BA-FEB'), + ('A@MAR', 'BA-MAR'), ('A@APR', 'BA-APR'), ('A@MAY', 'BA-MAY'), + ('A@JUN', 'BA-JUN'), ('A@JUL', 'BA-JUL'), ('A@AUG', 'BA-AUG'), + ('A@SEP', 'BA-SEP'), ('A@OCT', 'BA-OCT'), ('A@NOV', 'BA-NOV'), + ('A@DEC', 'BA-DEC'), ('WOM@1FRI', 'WOM-1FRI'), + ('WOM@2FRI', 'WOM-2FRI'), ('WOM@3FRI', 'WOM-3FRI'), + ('WOM@4FRI', 'WOM-4FRI')] start, end = '1/1/2000', '1/1/2010' @@ -233,6 +229,7 @@ def test_rule_aliases(self): rule = datetools.to_offset('10us') self.assertEqual(rule, datetools.Micro(10)) + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py index 37c40dd48cf6a..7a951683abaec 100644 --- a/pandas/tseries/tests/test_timezones.py +++ b/pandas/tseries/tests/test_timezones.py @@ -21,9 +21,8 @@ from pandas.util.testing import assert_frame_equal from pandas.compat import lrange, zip - try: - import pytz + import pytz # noqa except ImportError: pass @@ -49,6 +48,7 @@ def tzname(self, dt): def dst(self, dt): return timedelta(0) + fixed_off = FixedOffset(-420, '-07:00') fixed_off_no_name = FixedOffset(-330, None) @@ -60,18 +60,21 @@ def setUp(self): tm._skip_if_no_pytz() def tz(self, tz): - ''' Construct a timezone object from a string. Overridden in subclass to parameterize tests. ''' + # Construct a timezone object from a string. Overridden in subclass to + # parameterize tests. 
return pytz.timezone(tz) def tzstr(self, tz): - ''' Construct a timezone string from a string. Overridden in subclass to parameterize tests. ''' + # Construct a timezone string from a string. Overridden in subclass to + # parameterize tests. return tz def localize(self, tz, x): return tz.localize(x) def cmptz(self, tz1, tz2): - ''' Compare two timezones. Overridden in subclass to parameterize tests. ''' + # Compare two timezones. Overridden in subclass to parameterize + # tests. return tz1.zone == tz2.zone def test_utc_to_local_no_modify(self): @@ -92,7 +95,6 @@ def test_utc_to_local_no_modify_explicit(self): self.assertEqual(rng_eastern.tz, self.tz('US/Eastern')) - def test_localize_utc_conversion(self): # Localizing to time zone should: # 1) check for DST ambiguities @@ -107,7 +109,8 @@ def test_localize_utc_conversion(self): # DST ambiguity, this should fail rng = date_range('3/11/2012', '3/12/2012', freq='30T') # Is this really how it should fail?? - self.assertRaises(NonExistentTimeError, rng.tz_localize, self.tzstr('US/Eastern')) + self.assertRaises(NonExistentTimeError, rng.tz_localize, + self.tzstr('US/Eastern')) def test_localize_utc_conversion_explicit(self): # Localizing to time zone should: @@ -122,7 +125,8 @@ def test_localize_utc_conversion_explicit(self): # DST ambiguity, this should fail rng = date_range('3/11/2012', '3/12/2012', freq='30T') # Is this really how it should fail?? 
-        self.assertRaises(NonExistentTimeError, rng.tz_localize, self.tz('US/Eastern'))
+        self.assertRaises(NonExistentTimeError, rng.tz_localize,
+                          self.tz('US/Eastern'))

     def test_timestamp_tz_localize(self):
         stamp = Timestamp('3/11/2012 04:00')
@@ -195,30 +199,27 @@ def test_timedelta_push_over_dst_boundary_explicit(self):
         self.assertEqual(result, expected)

     def test_tz_localize_dti(self):
-        from pandas.tseries.offsets import Hour
-
         dti = DatetimeIndex(start='1/1/2005', end='1/1/2005 0:00:30.256',
                             freq='L')
         dti2 = dti.tz_localize(self.tzstr('US/Eastern'))

         dti_utc = DatetimeIndex(start='1/1/2005 05:00',
-                                end='1/1/2005 5:00:30.256', freq='L',
-                                tz='utc')
+                                end='1/1/2005 5:00:30.256', freq='L', tz='utc')

         self.assert_numpy_array_equal(dti2.values, dti_utc.values)

         dti3 = dti2.tz_convert(self.tzstr('US/Pacific'))
         self.assert_numpy_array_equal(dti3.values, dti_utc.values)

-        dti = DatetimeIndex(start='11/6/2011 1:59',
-                            end='11/6/2011 2:00', freq='L')
+        dti = DatetimeIndex(start='11/6/2011 1:59', end='11/6/2011 2:00',
+                            freq='L')
         self.assertRaises(pytz.AmbiguousTimeError, dti.tz_localize,
                           self.tzstr('US/Eastern'))

         dti = DatetimeIndex(start='3/13/2011 1:59', end='3/13/2011 2:00',
                             freq='L')
-        self.assertRaises(
-            pytz.NonExistentTimeError, dti.tz_localize, self.tzstr('US/Eastern'))
+        self.assertRaises(pytz.NonExistentTimeError, dti.tz_localize,
+                          self.tzstr('US/Eastern'))

     def test_tz_localize_empty_series(self):
         # #2248
@@ -242,8 +243,8 @@ def test_create_with_tz(self):
         stamp = Timestamp('3/11/2012 05:00', tz=self.tzstr('US/Eastern'))
         self.assertEqual(stamp.hour, 5)

-        rng = date_range(
-            '3/11/2012 04:00', periods=10, freq='H', tz=self.tzstr('US/Eastern'))
+        rng = date_range('3/11/2012 04:00', periods=10, freq='H',
+                         tz=self.tzstr('US/Eastern'))

         self.assertEqual(stamp, rng[1])
@@ -264,8 +265,8 @@ def test_create_with_fixed_tz(self):
         rng2 = date_range(start, periods=len(rng), tz=off)
         self.assertTrue(rng.equals(rng2))

-        rng3 = date_range(
-            '3/11/2012 05:00:00+07:00', '6/11/2012 05:00:00+07:00')
+        rng3 = date_range('3/11/2012 05:00:00+07:00',
+                          '6/11/2012 05:00:00+07:00')
         self.assertTrue((rng.values == rng3.values).all())

     def test_create_with_fixedoffset_noname(self):
@@ -279,8 +280,8 @@ def test_create_with_fixedoffset_noname(self):
         self.assertEqual(off, idx.tz)

     def test_date_range_localize(self):
-        rng = date_range(
-            '3/11/2012 03:00', periods=15, freq='H', tz='US/Eastern')
+        rng = date_range('3/11/2012 03:00', periods=15, freq='H',
+                         tz='US/Eastern')
         rng2 = DatetimeIndex(['3/11/2012 03:00', '3/11/2012 04:00'],
                              tz='US/Eastern')
         rng3 = date_range('3/11/2012 03:00', periods=15, freq='H')
@@ -298,8 +299,8 @@ def test_date_range_localize(self):
         self.assertTrue(rng[:2].equals(rng2))

         # Right before the DST transition
-        rng = date_range(
-            '3/11/2012 00:00', periods=2, freq='H', tz='US/Eastern')
+        rng = date_range('3/11/2012 00:00', periods=2, freq='H',
+                         tz='US/Eastern')
         rng2 = DatetimeIndex(['3/11/2012 00:00', '3/11/2012 01:00'],
                              tz='US/Eastern')
         self.assertTrue(rng.equals(rng2))
@@ -330,7 +331,8 @@ def test_utc_box_timestamp_and_localize(self):
         rng_eastern = rng.tz_convert(self.tzstr('US/Eastern'))

         # test not valid for dateutil timezones.
         # self.assertIn('EDT', repr(rng_eastern[0].tzinfo))
-        self.assertTrue('EDT' in repr(rng_eastern[0].tzinfo) or 'tzfile' in repr(rng_eastern[0].tzinfo))
+        self.assertTrue('EDT' in repr(rng_eastern[0].tzinfo) or 'tzfile' in
+                        repr(rng_eastern[0].tzinfo))

     def test_timestamp_tz_convert(self):
         strdates = ['1/1/2012', '3/1/2012', '4/1/2012']
@@ -379,11 +381,13 @@ def test_with_tz(self):
         # normalized
         central = dr.tz_convert(tz)
         self.assertIs(central.tz, tz)
-        comp = self.localize(tz, central[0].to_pydatetime().replace(tzinfo=None)).tzinfo
+        comp = self.localize(tz, central[0].to_pydatetime().replace(
+            tzinfo=None)).tzinfo
         self.assertIs(central[0].tz, comp)

         # compare vs a localized tz
-        comp = self.localize(tz, dr[0].to_pydatetime().replace(tzinfo=None)).tzinfo
+        comp = self.localize(tz,
+                             dr[0].to_pydatetime().replace(tzinfo=None)).tzinfo
         self.assertIs(central[0].tz, comp)

         # datetimes with tzinfo set
@@ -391,8 +395,8 @@ def test_with_tz(self):
                               '1/1/2009', tz=pytz.utc)

         self.assertRaises(Exception, bdate_range,
-                          datetime(2005, 1, 1, tzinfo=pytz.utc),
-                          '1/1/2009', tz=tz)
+                          datetime(2005, 1, 1, tzinfo=pytz.utc), '1/1/2009',
+                          tz=tz)

     def test_tz_localize(self):
         dr = bdate_range('1/1/2009', '1/1/2010')
@@ -432,16 +436,16 @@ def test_ambiguous_infer(self):
         # With repeated hours, we can infer the transition
         dr = date_range(datetime(2011, 11, 6, 0), periods=5,
                         freq=datetools.Hour(), tz=tz)
-        times = ['11/06/2011 00:00', '11/06/2011 01:00',
-                 '11/06/2011 01:00', '11/06/2011 02:00',
-                 '11/06/2011 03:00']
+        times = ['11/06/2011 00:00', '11/06/2011 01:00', '11/06/2011 01:00',
+                 '11/06/2011 02:00', '11/06/2011 03:00']
         di = DatetimeIndex(times)
         localized = di.tz_localize(tz, ambiguous='infer')
         self.assert_numpy_array_equal(dr, localized)
         with tm.assert_produces_warning(FutureWarning):
             localized_old = di.tz_localize(tz, infer_dst=True)
         self.assert_numpy_array_equal(dr, localized_old)
-        self.assert_numpy_array_equal(dr, DatetimeIndex(times, tz=tz, ambiguous='infer'))
+        self.assert_numpy_array_equal(dr, DatetimeIndex(times, tz=tz,
+                                                        ambiguous='infer'))

         # When there is no dst transition, nothing special happens
         dr = date_range(datetime(2011, 6, 1, 0), periods=10,
@@ -460,21 +464,22 @@ def test_ambiguous_flags(self):
         # Pass in flags to determine right dst transition
         dr = date_range(datetime(2011, 11, 6, 0), periods=5,
                         freq=datetools.Hour(), tz=tz)
-        times = ['11/06/2011 00:00', '11/06/2011 01:00',
-                 '11/06/2011 01:00', '11/06/2011 02:00',
-                 '11/06/2011 03:00']
+        times = ['11/06/2011 00:00', '11/06/2011 01:00', '11/06/2011 01:00',
+                 '11/06/2011 02:00', '11/06/2011 03:00']

         # Test tz_localize
         di = DatetimeIndex(times)
         is_dst = [1, 1, 0, 0, 0]
         localized = di.tz_localize(tz, ambiguous=is_dst)
         self.assert_numpy_array_equal(dr, localized)
-        self.assert_numpy_array_equal(dr, DatetimeIndex(times, tz=tz, ambiguous=is_dst))
+        self.assert_numpy_array_equal(dr, DatetimeIndex(times, tz=tz,
+                                                        ambiguous=is_dst))

         localized = di.tz_localize(tz, ambiguous=np.array(is_dst))
         self.assert_numpy_array_equal(dr, localized)

-        localized = di.tz_localize(tz, ambiguous=np.array(is_dst).astype('bool'))
+        localized = di.tz_localize(tz,
+                                   ambiguous=np.array(is_dst).astype('bool'))
         self.assert_numpy_array_equal(dr, localized)

         # Test constructor
@@ -504,30 +509,26 @@ def test_ambiguous_flags(self):

         # construction with an ambiguous end-point
         # GH 11626
-        tz=self.tzstr("Europe/London")
+        tz = self.tzstr("Europe/London")

         def f():
             date_range("2013-10-26 23:00", "2013-10-27 01:00",
-                       tz="Europe/London",
-                       freq="H")
+                       tz="Europe/London", freq="H")
         self.assertRaises(pytz.AmbiguousTimeError, f)

-        times = date_range("2013-10-26 23:00", "2013-10-27 01:00",
-                           freq="H",
-                           tz=tz,
-                           ambiguous='infer')
-        self.assertEqual(times[0],Timestamp('2013-10-26 23:00',tz=tz))
-        self.assertEqual(times[-1],Timestamp('2013-10-27 01:00',tz=tz))
+
+        times = date_range("2013-10-26 23:00", "2013-10-27 01:00", freq="H",
+                           tz=tz, ambiguous='infer')
+        self.assertEqual(times[0], Timestamp('2013-10-26 23:00', tz=tz))
+        self.assertEqual(times[-1], Timestamp('2013-10-27 01:00', tz=tz))

     def test_ambiguous_nat(self):
         tz = self.tz('US/Eastern')
-        times = ['11/06/2011 00:00', '11/06/2011 01:00',
-                 '11/06/2011 01:00', '11/06/2011 02:00',
-                 '11/06/2011 03:00']
+        times = ['11/06/2011 00:00', '11/06/2011 01:00', '11/06/2011 01:00',
+                 '11/06/2011 02:00', '11/06/2011 03:00']

         di = DatetimeIndex(times)
         localized = di.tz_localize(tz, ambiguous='NaT')

-        times = ['11/06/2011 00:00', np.NaN,
-                 np.NaN, '11/06/2011 02:00',
+        times = ['11/06/2011 00:00', np.NaN, np.NaN, '11/06/2011 02:00',
                  '11/06/2011 03:00']
         di_test = DatetimeIndex(times, tz='US/Eastern')
         self.assert_numpy_array_equal(di_test, localized)
@@ -542,22 +543,25 @@ def test_infer_tz(self):
             start = self.localize(eastern, _start)
             end = self.localize(eastern, _end)

-            assert(tools._infer_tzinfo(start, end) is self.localize(eastern, _start).tzinfo)
-            assert(tools._infer_tzinfo(start, None) is self.localize(eastern, _start).tzinfo)
-            assert(tools._infer_tzinfo(None, end) is self.localize(eastern, _end).tzinfo)
+            assert (tools._infer_tzinfo(start, end) is self.localize(
+                eastern, _start).tzinfo)
+            assert (tools._infer_tzinfo(start, None) is self.localize(
+                eastern, _start).tzinfo)
+            assert (tools._infer_tzinfo(None, end) is self.localize(eastern,
+                                                                    _end).tzinfo)

             start = utc.localize(_start)
             end = utc.localize(_end)
-            assert(tools._infer_tzinfo(start, end) is utc)
+            assert (tools._infer_tzinfo(start, end) is utc)

             end = self.localize(eastern, _end)
             self.assertRaises(Exception, tools._infer_tzinfo, start, end)
             self.assertRaises(Exception, tools._infer_tzinfo, end, start)

     def test_tz_string(self):
-        result = date_range('1/1/2000', periods=10, tz=self.tzstr('US/Eastern'))
-        expected = date_range('1/1/2000', periods=10,
-                              tz=self.tz('US/Eastern'))
+        result = date_range('1/1/2000', periods=10,
+                            tz=self.tzstr('US/Eastern'))
+        expected = date_range('1/1/2000', periods=10, tz=self.tz('US/Eastern'))
         self.assertTrue(result.equals(expected))
@@ -604,13 +608,15 @@ def test_localized_at_time_between_time(self):
         ts_local = ts.tz_localize(self.tzstr('US/Eastern'))

         result = ts_local.at_time(time(10, 0))
-        expected = ts.at_time(time(10, 0)).tz_localize(self.tzstr('US/Eastern'))
+        expected = ts.at_time(time(10, 0)).tz_localize(self.tzstr(
+            'US/Eastern'))
         tm.assert_series_equal(result, expected)
         self.assertTrue(self.cmptz(result.index.tz, self.tz('US/Eastern')))

         t1, t2 = time(10, 0), time(11, 0)
         result = ts_local.between_time(t1, t2)
-        expected = ts.between_time(t1, t2).tz_localize(self.tzstr('US/Eastern'))
+        expected = ts.between_time(t1,
+                                   t2).tz_localize(self.tzstr('US/Eastern'))
         tm.assert_series_equal(result, expected)
         self.assertTrue(self.cmptz(result.index.tz, self.tz('US/Eastern')))
@@ -685,23 +691,24 @@ def test_frame_no_datetime64_dtype(self):
         dr = date_range('2011/1/1', '2012/1/1', freq='W-FRI')
         dr_tz = dr.tz_localize(self.tzstr('US/Eastern'))
         e = DataFrame({'A': 'foo', 'B': dr_tz}, index=dr)
-        tz_expected = DatetimeTZDtype('ns',dr_tz.tzinfo)
+        tz_expected = DatetimeTZDtype('ns', dr_tz.tzinfo)
         self.assertEqual(e['B'].dtype, tz_expected)

         # GH 2810 (with timezones)
-        datetimes_naive = [ ts.to_pydatetime() for ts in dr ]
-        datetimes_with_tz = [ ts.to_pydatetime() for ts in dr_tz ]
-        df = DataFrame({'dr' : dr, 'dr_tz' : dr_tz,
+        datetimes_naive = [ts.to_pydatetime() for ts in dr]
+        datetimes_with_tz = [ts.to_pydatetime() for ts in dr_tz]
+        df = DataFrame({'dr': dr,
+                        'dr_tz': dr_tz,
                         'datetimes_naive': datetimes_naive,
-                        'datetimes_with_tz' : datetimes_with_tz })
+                        'datetimes_with_tz': datetimes_with_tz})
         result = df.get_dtype_counts().sort_index()
-        expected = Series({ 'datetime64[ns]' : 2, str(tz_expected) : 2 }).sort_index()
+        expected = Series({'datetime64[ns]': 2,
+                           str(tz_expected): 2}).sort_index()
         tm.assert_series_equal(result, expected)

     def test_hongkong_tz_convert(self):
         # #1673
-        dr = date_range(
-            '2012-01-01', '2012-01-10', freq='D', tz='Hongkong')
+        dr = date_range('2012-01-01', '2012-01-10', freq='D', tz='Hongkong')

         # it works!
         dr.hour
@@ -722,8 +729,8 @@ def test_shift_localized(self):
         self.assertEqual(result.tz, dr_tz.tz)

     def test_tz_aware_asfreq(self):
-        dr = date_range(
-            '2011-12-01', '2012-07-20', freq='D', tz=self.tzstr('US/Eastern'))
+        dr = date_range('2011-12-01', '2012-07-20', freq='D',
+                        tz=self.tzstr('US/Eastern'))

         s = Series(np.random.randn(len(dr)), index=dr)
@@ -791,12 +798,13 @@ def test_dateutil_tzoffset_support(self):
         repr(series.index[0])

     def test_getitem_pydatetime_tz(self):
-        index = date_range(start='2012-12-24 16:00',
-                           end='2012-12-24 18:00', freq='H',
-                           tz=self.tzstr('Europe/Berlin'))
+        index = date_range(start='2012-12-24 16:00', end='2012-12-24 18:00',
+                           freq='H', tz=self.tzstr('Europe/Berlin'))
         ts = Series(index=index, data=index.hour)
-        time_pandas = Timestamp('2012-12-24 17:00', tz=self.tzstr('Europe/Berlin'))
-        time_datetime = self.localize(self.tz('Europe/Berlin'), datetime(2012, 12, 24, 17, 0))
+        time_pandas = Timestamp('2012-12-24 17:00',
+                                tz=self.tzstr('Europe/Berlin'))
+        time_datetime = self.localize(
+            self.tz('Europe/Berlin'), datetime(2012, 12, 24, 17, 0))
         self.assertEqual(ts[time_pandas], ts[time_datetime])

     def test_index_drop_dont_lose_tz(self):
@@ -814,7 +822,8 @@ def test_datetimeindex_tz(self):
         arr = ['11/10/2005 08:00:00', '11/10/2005 09:00:00']

         idx1 = to_datetime(arr).tz_localize(self.tzstr('US/Eastern'))
-        idx2 = DatetimeIndex(start="2005-11-10 08:00:00", freq='H', periods=2, tz=self.tzstr('US/Eastern'))
+        idx2 = DatetimeIndex(start="2005-11-10 08:00:00", freq='H', periods=2,
+                             tz=self.tzstr('US/Eastern'))
         idx3 = DatetimeIndex(arr, tz=self.tzstr('US/Eastern'))
         idx4 = DatetimeIndex(np.array(arr), tz=self.tzstr('US/Eastern'))
@@ -822,7 +831,8 @@ def test_datetimeindex_tz(self):
             self.assertTrue(idx1.equals(other))

     def test_datetimeindex_tz_nat(self):
-        idx = to_datetime([Timestamp("2013-1-1", tz=self.tzstr('US/Eastern')), NaT])
+        idx = to_datetime([Timestamp("2013-1-1", tz=self.tzstr('US/Eastern')),
+                           NaT])

         self.assertTrue(isnull(idx[1]))
         self.assertTrue(idx[0].tzinfo is not None)
@@ -843,11 +853,13 @@ def tz(self, tz):
         return tslib.maybe_get_tz('dateutil/' + tz)

     def tzstr(self, tz):
-        ''' Construct a timezone string from a string. Overridden in subclass to parameterize tests. '''
+        ''' Construct a timezone string from a string. Overridden in subclass
+        to parameterize tests. '''
         return 'dateutil/' + tz

     def cmptz(self, tz1, tz2):
-        ''' Compare two timezones. Overridden in subclass to parameterize tests. '''
+        ''' Compare two timezones. Overridden in subclass to parameterize
+        tests. '''
         return tz1 == tz2

     def localize(self, tz, x):
@@ -873,7 +885,8 @@ def test_tslib_tz_convert_trans_pos_plus_1__bug(self):
         # Regression test for tslib.tz_convert(vals, tz1, tz2).
         # See https://github.com/pydata/pandas/issues/4496 for details.
         for freq, n in [('H', 1), ('T', 60), ('S', 3600)]:
-            idx = date_range(datetime(2011, 3, 26, 23), datetime(2011, 3, 27, 1), freq=freq)
+            idx = date_range(datetime(2011, 3, 26, 23),
+                             datetime(2011, 3, 27, 1), freq=freq)
             idx = idx.tz_localize('UTC')
             idx = idx.tz_convert('Europe/Moscow')
@@ -883,47 +896,59 @@ def test_tslib_tz_convert_dst(self):
         for freq, n in [('H', 1), ('T', 60), ('S', 3600)]:
             # Start DST
-            idx = date_range('2014-03-08 23:00', '2014-03-09 09:00', freq=freq, tz='UTC')
+            idx = date_range('2014-03-08 23:00', '2014-03-09 09:00', freq=freq,
+                             tz='UTC')
             idx = idx.tz_convert('US/Eastern')
-            expected = np.repeat(np.array([18, 19, 20, 21, 22, 23, 0, 1, 3, 4, 5]),
+            expected = np.repeat(np.array([18, 19, 20, 21, 22, 23,
+                                           0, 1, 3, 4, 5]),
                                  np.array([n, n, n, n, n, n, n, n, n, n, 1]))
             self.assert_numpy_array_equal(idx.hour, expected)

-            idx = date_range('2014-03-08 18:00', '2014-03-09 05:00', freq=freq, tz='US/Eastern')
+            idx = date_range('2014-03-08 18:00', '2014-03-09 05:00', freq=freq,
+                             tz='US/Eastern')
             idx = idx.tz_convert('UTC')
             expected = np.repeat(np.array([23, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]),
                                  np.array([n, n, n, n, n, n, n, n, n, n, 1]))
             self.assert_numpy_array_equal(idx.hour, expected)

             # End DST
-            idx = date_range('2014-11-01 23:00', '2014-11-02 09:00', freq=freq, tz='UTC')
+            idx = date_range('2014-11-01 23:00', '2014-11-02 09:00', freq=freq,
+                             tz='UTC')
             idx = idx.tz_convert('US/Eastern')
-            expected = np.repeat(np.array([19, 20, 21, 22, 23, 0, 1, 1, 2, 3, 4]),
+            expected = np.repeat(np.array([19, 20, 21, 22, 23,
+                                           0, 1, 1, 2, 3, 4]),
                                  np.array([n, n, n, n, n, n, n, n, n, n, 1]))
             self.assert_numpy_array_equal(idx.hour, expected)

-            idx = date_range('2014-11-01 18:00', '2014-11-02 05:00', freq=freq, tz='US/Eastern')
+            idx = date_range('2014-11-01 18:00', '2014-11-02 05:00', freq=freq,
+                             tz='US/Eastern')
             idx = idx.tz_convert('UTC')
-            expected = np.repeat(np.array([22, 23, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),
-                                 np.array([n, n, n, n, n, n, n, n, n, n, n, n, 1]))
+            expected = np.repeat(np.array([22, 23, 0, 1, 2, 3, 4, 5, 6,
+                                           7, 8, 9, 10]),
+                                 np.array([n, n, n, n, n, n, n, n, n,
+                                           n, n, n, 1]))
             self.assert_numpy_array_equal(idx.hour, expected)

         # daily
         # Start DST
-        idx = date_range('2014-03-08 00:00', '2014-03-09 00:00', freq='D', tz='UTC')
+        idx = date_range('2014-03-08 00:00', '2014-03-09 00:00', freq='D',
+                         tz='UTC')
         idx = idx.tz_convert('US/Eastern')
         self.assert_numpy_array_equal(idx.hour, np.array([19, 19]))

-        idx = date_range('2014-03-08 00:00', '2014-03-09 00:00', freq='D', tz='US/Eastern')
+        idx = date_range('2014-03-08 00:00', '2014-03-09 00:00', freq='D',
+                         tz='US/Eastern')
         idx = idx.tz_convert('UTC')
         self.assert_numpy_array_equal(idx.hour, np.array([5, 5]))

         # End DST
-        idx = date_range('2014-11-01 00:00', '2014-11-02 00:00', freq='D', tz='UTC')
+        idx = date_range('2014-11-01 00:00', '2014-11-02 00:00', freq='D',
+                         tz='UTC')
         idx = idx.tz_convert('US/Eastern')
         self.assert_numpy_array_equal(idx.hour, np.array([20, 20]))

-        idx = date_range('2014-11-01 00:00', '2014-11-02 000:00', freq='D', tz='US/Eastern')
+        idx = date_range('2014-11-01 00:00', '2014-11-02 000:00', freq='D',
+                         tz='US/Eastern')
         idx = idx.tz_convert('UTC')
         self.assert_numpy_array_equal(idx.hour, np.array([4, 4]))
@@ -940,7 +965,8 @@ def test_cache_keys_are_distinct_for_pytz_vs_dateutil(self):
             if tz_d is None:
                 # skip timezones that dateutil doesn't know about.
                 continue
-            self.assertNotEqual(tslib._p_tz_cache_key(tz_p), tslib._p_tz_cache_key(tz_d))
+            self.assertNotEqual(tslib._p_tz_cache_key(
+                tz_p), tslib._p_tz_cache_key(tz_d))


 class TestTimeZones(tm.TestCase):
@@ -952,8 +978,7 @@ def setUp(self):

     def test_index_equals_with_tz(self):
         left = date_range('1/1/2011', periods=100, freq='H', tz='utc')
-        right = date_range('1/1/2011', periods=100, freq='H',
-                           tz='US/Eastern')
+        right = date_range('1/1/2011', periods=100, freq='H', tz='US/Eastern')

         self.assertFalse(left.equals(right))
@@ -973,7 +998,8 @@ def test_tz_localize_roundtrip(self):
         idx4 = date_range(start='2014-08-01', end='2014-10-31', freq='T')
         for idx in [idx1, idx2, idx3, idx4]:
             localized = idx.tz_localize(tz)
-            expected = date_range(start=idx[0], end=idx[-1], freq=idx.freq, tz=tz)
+            expected = date_range(start=idx[0], end=idx[-1], freq=idx.freq,
+                                  tz=tz)
             tm.assert_index_equal(localized, expected)

             with tm.assertRaises(TypeError):
@@ -1005,11 +1031,11 @@ def test_series_frame_tz_localize(self):
         # Can't localize if already tz-aware
         rng = date_range('1/1/2011', periods=100, freq='H', tz='utc')
         ts = Series(1, index=rng)
-        tm.assertRaisesRegexp(TypeError, 'Already tz-aware', ts.tz_localize, 'US/Eastern')
+        tm.assertRaisesRegexp(TypeError, 'Already tz-aware', ts.tz_localize,
+                              'US/Eastern')

     def test_series_frame_tz_convert(self):
-        rng = date_range('1/1/2011', periods=200, freq='D',
-                         tz='US/Eastern')
+        rng = date_range('1/1/2011', periods=200, freq='D', tz='US/Eastern')
         ts = Series(1, index=rng)

         result = ts.tz_convert('Europe/Berlin')
@@ -1029,30 +1055,35 @@ def test_series_frame_tz_convert(self):
         # can't convert tz-naive
         rng = date_range('1/1/2011', periods=200, freq='D')
         ts = Series(1, index=rng)
-        tm.assertRaisesRegexp(TypeError, "Cannot convert tz-naive", ts.tz_convert, 'US/Eastern')
+        tm.assertRaisesRegexp(TypeError, "Cannot convert tz-naive",
+                              ts.tz_convert, 'US/Eastern')

     def test_tz_convert_roundtrip(self):
         for tz in self.timezones:
-            idx1 = date_range(start='2014-01-01', end='2014-12-31', freq='M', tz='UTC')
+            idx1 = date_range(start='2014-01-01', end='2014-12-31', freq='M',
+                              tz='UTC')
             exp1 = date_range(start='2014-01-01', end='2014-12-31', freq='M')

-            idx2 = date_range(start='2014-01-01', end='2014-12-31', freq='D', tz='UTC')
+            idx2 = date_range(start='2014-01-01', end='2014-12-31', freq='D',
+                              tz='UTC')
             exp2 = date_range(start='2014-01-01', end='2014-12-31', freq='D')

-            idx3 = date_range(start='2014-01-01', end='2014-03-01', freq='H', tz='UTC')
+            idx3 = date_range(start='2014-01-01', end='2014-03-01', freq='H',
+                              tz='UTC')
             exp3 = date_range(start='2014-01-01', end='2014-03-01', freq='H')

-            idx4 = date_range(start='2014-08-01', end='2014-10-31', freq='T', tz='UTC')
+            idx4 = date_range(start='2014-08-01', end='2014-10-31', freq='T',
+                              tz='UTC')
             exp4 = date_range(start='2014-08-01', end='2014-10-31', freq='T')
-
-            for idx, expected in [(idx1, exp1), (idx2, exp2), (idx3, exp3), (idx4, exp4)]:
+            for idx, expected in [(idx1, exp1), (idx2, exp2), (idx3, exp3),
+                                  (idx4, exp4)]:
                 converted = idx.tz_convert(tz)
                 reset = converted.tz_convert(None)
                 tm.assert_index_equal(reset, expected)
                 self.assertTrue(reset.tzinfo is None)
-                tm.assert_index_equal(reset, converted.tz_convert('UTC').tz_localize(None))
-
+                tm.assert_index_equal(reset, converted.tz_convert(
+                    'UTC').tz_localize(None))

     def test_join_utc_convert(self):
         rng = date_range('1/1/2011', periods=100, freq='H', tz='utc')
@@ -1093,11 +1124,11 @@ def test_join_aware(self):
         self.assertTrue(result.index.tz.zone == 'US/Central')

         # non-overlapping
-        rng = date_range("2012-11-15 00:00:00", periods=6,
-                         freq="H", tz="US/Central")
+        rng = date_range("2012-11-15 00:00:00", periods=6, freq="H",
+                         tz="US/Central")

-        rng2 = date_range("2012-11-15 12:00:00", periods=6,
-                          freq="H", tz="US/Eastern")
+        rng2 = date_range("2012-11-15 12:00:00", periods=6, freq="H",
+                          tz="US/Eastern")

         result = rng.union(rng2)
         self.assertTrue(result.tz.zone == 'UTC')
@@ -1121,10 +1152,8 @@ def test_append_aware(self):
         ts_result = ts1.append(ts2)
         self.assertEqual(ts_result.index.tz, rng1.tz)

-        rng1 = date_range('1/1/2011 01:00', periods=1, freq='H',
-                          tz='UTC')
-        rng2 = date_range('1/1/2011 02:00', periods=1, freq='H',
-                          tz='UTC')
+        rng1 = date_range('1/1/2011 01:00', periods=1, freq='H', tz='UTC')
+        rng2 = date_range('1/1/2011 02:00', periods=1, freq='H', tz='UTC')
         ts1 = Series(np.random.randn(len(rng1)), index=rng1)
         ts2 = Series(np.random.randn(len(rng2)), index=rng2)
         ts_result = ts1.append(ts2)
@@ -1147,8 +1176,8 @@ def test_append_aware_naive(self):
         ts1 = Series(np.random.randn(len(rng1)), index=rng1)
         ts2 = Series(np.random.randn(len(rng2)), index=rng2)
         ts_result = ts1.append(ts2)
-        self.assertTrue(ts_result.index.equals(
-            ts1.index.asobject.append(ts2.index.asobject)))
+        self.assertTrue(ts_result.index.equals(ts1.index.asobject.append(
+            ts2.index.asobject)))

         # mixed
@@ -1157,8 +1186,8 @@ def test_append_aware_naive(self):
         ts1 = Series(np.random.randn(len(rng1)), index=rng1)
         ts2 = Series(np.random.randn(len(rng2)), index=rng2)
         ts_result = ts1.append(ts2)
-        self.assertTrue(ts_result.index.equals(
-            ts1.index.asobject.append(ts2.index)))
+        self.assertTrue(ts_result.index.equals(ts1.index.asobject.append(
+            ts2.index)))

     def test_equal_join_ensure_utc(self):
         rng = date_range('1/1/2011', periods=10, freq='H', tz='US/Eastern')
@@ -1242,23 +1271,19 @@ def test_normalize_tz(self):
         self.assertTrue(result.is_normalized)
         self.assertFalse(rng.is_normalized)

-        rng = date_range('1/1/2000 9:30', periods=10, freq='D',
-                         tz='UTC')
+        rng = date_range('1/1/2000 9:30', periods=10, freq='D', tz='UTC')

         result = rng.normalize()
-        expected = date_range('1/1/2000', periods=10, freq='D',
-                              tz='UTC')
+        expected = date_range('1/1/2000', periods=10, freq='D', tz='UTC')
         self.assertTrue(result.equals(expected))

         self.assertTrue(result.is_normalized)
         self.assertFalse(rng.is_normalized)

         from dateutil.tz import tzlocal
-        rng = date_range('1/1/2000 9:30', periods=10, freq='D',
-                         tz=tzlocal())
+        rng = date_range('1/1/2000 9:30', periods=10, freq='D', tz=tzlocal())
         result = rng.normalize()
-        expected = date_range('1/1/2000', periods=10, freq='D',
-                              tz=tzlocal())
+        expected = date_range('1/1/2000', periods=10, freq='D', tz=tzlocal())
         self.assertTrue(result.equals(expected))

         self.assertTrue(result.is_normalized)
diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index 55a6bf6f13b63..123b91d8bbf82 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -19,7 +19,6 @@


 class TestTimestamp(tm.TestCase):
-
     def test_constructor(self):
         base_str = '2014-07-01 09:00'
         base_dt = datetime.datetime(2014, 7, 1, 9)
@@ -27,23 +26,27 @@ def test_constructor(self):

         # confirm base representation is correct
         import calendar
-        self.assertEqual(calendar.timegm(base_dt.timetuple()) * 1000000000, base_expected)
+        self.assertEqual(calendar.timegm(base_dt.timetuple())
+                         * 1000000000, base_expected)

         tests = [(base_str, base_dt, base_expected),
                  ('2014-07-01 10:00', datetime.datetime(2014, 7, 1, 10),
                   base_expected + 3600 * 1000000000),
                  ('2014-07-01 09:00:00.000008000',
-                  datetime.datetime(2014, 7, 1, 9, 0, 0, 8), base_expected + 8000),
+                  datetime.datetime(2014, 7, 1, 9, 0, 0, 8),
+                  base_expected + 8000),
                  ('2014-07-01 09:00:00.000000005',
-                  Timestamp('2014-07-01 09:00:00.000000005'), base_expected + 5)]
+                  Timestamp('2014-07-01 09:00:00.000000005'),
+                  base_expected + 5)]

         tm._skip_if_no_pytz()
         tm._skip_if_no_dateutil()
         import pytz
         import dateutil
-        timezones = [(None, 0), ('UTC', 0), (pytz.utc, 0),
-                     ('Asia/Tokyo', 9), ('US/Eastern', -4), ('dateutil/US/Pacific', -7),
-                     (pytz.FixedOffset(-180), -3), (dateutil.tz.tzoffset(None, 18000), 5)]
+        timezones = [(None, 0), ('UTC', 0), (pytz.utc, 0), ('Asia/Tokyo', 9),
+                     ('US/Eastern', -4), ('dateutil/US/Pacific', -7),
+                     (pytz.FixedOffset(-180), -3),
+                     (dateutil.tz.tzoffset(None, 18000), 5)]

         for date_str, date, expected in tests:
             for result in [Timestamp(date_str), Timestamp(date)]:
@@ -58,7 +61,8 @@ def test_constructor(self):

             # with timezone
             for tz, offset in timezones:
-                for result in [Timestamp(date_str, tz=tz), Timestamp(date, tz=tz)]:
+                for result in [Timestamp(date_str, tz=tz), Timestamp(date,
+                                                                     tz=tz)]:
                     expected_tz = expected - offset * 3600 * 1000000000
                     self.assertEqual(result.value, expected_tz)
                     self.assertEqual(tslib.pydt_to_i8(result), expected_tz)
@@ -82,10 +86,12 @@ def test_constructor_with_stringoffset(self):

         # confirm base representation is correct
         import calendar
-        self.assertEqual(calendar.timegm(base_dt.timetuple()) * 1000000000, base_expected)
+        self.assertEqual(calendar.timegm(base_dt.timetuple())
+                         * 1000000000, base_expected)

         tests = [(base_str, base_expected),
-                 ('2014-07-01 12:00:00+02:00', base_expected + 3600 * 1000000000),
+                 ('2014-07-01 12:00:00+02:00',
+                  base_expected + 3600 * 1000000000),
                  ('2014-07-01 11:00:00.000008000+02:00', base_expected + 8000),
                  ('2014-07-01 11:00:00.000000005+02:00', base_expected + 5)]
@@ -93,10 +99,10 @@ def test_constructor_with_stringoffset(self):
         tm._skip_if_no_dateutil()
         import pytz
         import dateutil
-        timezones = [(None, 0), ('UTC', 0), (pytz.utc, 0),
-                     ('Asia/Tokyo', 9), ('US/Eastern', -4),
-                     ('dateutil/US/Pacific', -7),
-                     (pytz.FixedOffset(-180), -3), (dateutil.tz.tzoffset(None, 18000), 5)]
+        timezones = [(None, 0), ('UTC', 0), (pytz.utc, 0), ('Asia/Tokyo', 9),
+                     ('US/Eastern', -4), ('dateutil/US/Pacific', -7),
+                     (pytz.FixedOffset(-180), -3),
+                     (dateutil.tz.tzoffset(None, 18000), 5)]

         for date_str, expected in tests:
             for result in [Timestamp(date_str)]:
@@ -185,14 +191,18 @@ def test_repr(self):
         tm._skip_if_no_pytz()
         tm._skip_if_no_dateutil()

-        dates = ['2014-03-07', '2014-01-01 09:00', '2014-01-01 00:00:00.000000001']
+        dates = ['2014-03-07', '2014-01-01 09:00',
+                 '2014-01-01 00:00:00.000000001']

         # dateutil zone change (only matters for repr)
         import dateutil
-        if dateutil.__version__ >= LooseVersion('2.3') and dateutil.__version__ <= LooseVersion('2.4.0'):
-            timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']
+        if dateutil.__version__ >= LooseVersion(
+                '2.3') and dateutil.__version__ <= LooseVersion('2.4.0'):
+            timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern',
+                         'dateutil/US/Pacific']
         else:
-            timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/America/Los_Angeles']
+            timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern',
+                         'dateutil/America/Los_Angeles']

         freqs = ['D', 'M', 'S', 'N']
@@ -231,43 +241,33 @@ def test_repr(self):
             self.assertIn(freq_repr, repr(date_tz_freq))
             self.assertEqual(date_tz_freq, eval(repr(date_tz_freq)))

-        # this can cause the tz field to be populated, but it's redundant to information in the datestring
+        # this can cause the tz field to be populated, but it's redundant to
+        # information in the datestring
         tm._skip_if_no_pytz()
-        import pytz
+        import pytz  # noqa
         date_with_utc_offset = Timestamp('2014-03-13 00:00:00-0400', tz=None)
         self.assertIn('2014-03-13 00:00:00-0400', repr(date_with_utc_offset))
         self.assertNotIn('tzoffset', repr(date_with_utc_offset))
         self.assertIn('pytz.FixedOffset(-240)', repr(date_with_utc_offset))
         expr = repr(date_with_utc_offset).replace("'pytz.FixedOffset(-240)'",
-                                                 'pytz.FixedOffset(-240)')
+                                                  'pytz.FixedOffset(-240)')
         self.assertEqual(date_with_utc_offset, eval(expr))

     def test_bounds_with_different_units(self):
-        out_of_bounds_dates = (
-            '1677-09-21',
-            '2262-04-12',
-        )
+        out_of_bounds_dates = ('1677-09-21', '2262-04-12', )

         time_units = ('D', 'h', 'm', 's', 'ms', 'us')

         for date_string in out_of_bounds_dates:
             for unit in time_units:
-                self.assertRaises(
-                    ValueError,
-                    Timestamp,
-                    np.datetime64(date_string, dtype='M8[%s]' % unit)
-                )
-
-        in_bounds_dates = (
-            '1677-09-23',
-            '2262-04-11',
-        )
+                self.assertRaises(ValueError, Timestamp, np.datetime64(
+                    date_string, dtype='M8[%s]' % unit))
+
+        in_bounds_dates = ('1677-09-23', '2262-04-11', )

         for date_string in in_bounds_dates:
             for unit in time_units:
-                Timestamp(
-                    np.datetime64(date_string, dtype='M8[%s]' % unit)
-                )
+                Timestamp(np.datetime64(date_string, dtype='M8[%s]' % unit))

     def test_tz(self):
         t = '2014-02-01 09:00'
@@ -276,8 +276,7 @@ def test_tz(self):
         self.assertEqual(local.hour, 9)
         self.assertEqual(local, Timestamp(t, tz='Asia/Tokyo'))
         conv = local.tz_convert('US/Eastern')
-        self.assertEqual(conv,
-                         Timestamp('2014-01-31 19:00', tz='US/Eastern'))
+        self.assertEqual(conv, Timestamp('2014-01-31 19:00', tz='US/Eastern'))
         self.assertEqual(conv.hour, 19)

         # preserves nanosecond
@@ -298,21 +297,24 @@ def test_tz_localize_ambiguous(self):
         rng = date_range('2014-11-02', periods=3, freq='H', tz='US/Eastern')
         self.assertEqual(rng[1], ts_dst)
         self.assertEqual(rng[2], ts_no_dst)
-        self.assertRaises(ValueError, ts.tz_localize, 'US/Eastern', ambiguous='infer')
+        self.assertRaises(ValueError, ts.tz_localize, 'US/Eastern',
+                          ambiguous='infer')

         # GH 8025
-        with tm.assertRaisesRegexp(TypeError, 'Cannot localize tz-aware Timestamp, use '
+        with tm.assertRaisesRegexp(TypeError,
+                                   'Cannot localize tz-aware Timestamp, use '
                                    'tz_convert for conversions'):
-            Timestamp('2011-01-01' ,tz='US/Eastern').tz_localize('Asia/Tokyo')
+            Timestamp('2011-01-01', tz='US/Eastern').tz_localize('Asia/Tokyo')

-        with tm.assertRaisesRegexp(TypeError, 'Cannot convert tz-naive Timestamp, use '
-                                   'tz_localize to localize'):
+        with tm.assertRaisesRegexp(TypeError,
+                                   'Cannot convert tz-naive Timestamp, use '
+                                   'tz_localize to localize'):
             Timestamp('2011-01-01').tz_convert('Asia/Tokyo')

     def test_tz_localize_roundtrip(self):
         for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']:
-            for t in ['2014-02-01 09:00', '2014-07-08 09:00', '2014-11-01 17:00',
-                      '2014-11-05 00:00']:
+            for t in ['2014-02-01 09:00', '2014-07-08 09:00',
+                      '2014-11-01 17:00', '2014-11-05 00:00']:
                 ts = Timestamp(t)
                 localized = ts.tz_localize(tz)
                 self.assertEqual(localized, Timestamp(t, tz=tz))
@@ -326,15 +328,16 @@ def test_tz_localize_roundtrip(self):

     def test_tz_convert_roundtrip(self):
         for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']:
-            for t in ['2014-02-01 09:00', '2014-07-08 09:00', '2014-11-01 17:00',
-                      '2014-11-05 00:00']:
+            for t in ['2014-02-01 09:00', '2014-07-08 09:00',
+                      '2014-11-01 17:00', '2014-11-05 00:00']:
                 ts = Timestamp(t, tz='UTC')
                 converted = ts.tz_convert(tz)

                 reset = converted.tz_convert(None)
                 self.assertEqual(reset, Timestamp(t))
                 self.assertTrue(reset.tzinfo is None)
-                self.assertEqual(reset, converted.tz_convert('UTC').tz_localize(None))
+                self.assertEqual(reset,
+                                 converted.tz_convert('UTC').tz_localize(None))

     def test_barely_oob_dts(self):
         one_us = np.timedelta64(1).astype('timedelta64[us]')
@@ -355,7 +358,8 @@ def test_barely_oob_dts(self):
         self.assertRaises(ValueError, Timestamp, max_ts_us + one_us)

     def test_utc_z_designator(self):
-        self.assertEqual(get_timezone(Timestamp('2014-11-02 01:00Z').tzinfo), 'UTC')
+        self.assertEqual(get_timezone(
+            Timestamp('2014-11-02 01:00Z').tzinfo), 'UTC')

     def test_now(self):
         # #9000
@@ -366,13 +370,14 @@ def test_now(self):
         ts_from_string_tz = Timestamp('now', tz='US/Eastern')
         ts_from_method_tz = Timestamp.now(tz='US/Eastern')

-        # Check that the delta between the times is less than 1s (arbitrarily small)
+        # Check that the delta between the times is less than 1s (arbitrarily
+        # small)
         delta = Timedelta(seconds=1)
         self.assertTrue(abs(ts_from_method - ts_from_string) < delta)
         self.assertTrue(abs(ts_datetime - ts_from_method) < delta)
         self.assertTrue(abs(ts_from_method_tz - ts_from_string_tz) < delta)
-        self.assertTrue(abs(ts_from_string_tz.tz_localize(None) -
-                        ts_from_method_tz.tz_localize(None)) < delta)
+        self.assertTrue(abs(ts_from_string_tz.tz_localize(None) -
+                            ts_from_method_tz.tz_localize(None)) < delta)

     def test_today(self):
@@ -383,21 +388,18 @@ def test_today(self):
         ts_from_string_tz = Timestamp('today', tz='US/Eastern')
         ts_from_method_tz = Timestamp.today(tz='US/Eastern')

-        # Check that the delta between the times is less than 1s (arbitrarily small)
+        # Check that the delta between the times is less than 1s (arbitrarily
+        # small)
         delta = Timedelta(seconds=1)
         self.assertTrue(abs(ts_from_method - ts_from_string) < delta)
         self.assertTrue(abs(ts_datetime - ts_from_method) < delta)
         self.assertTrue(abs(ts_from_method_tz - ts_from_string_tz) < delta)
-        self.assertTrue(abs(ts_from_string_tz.tz_localize(None) -
-                        ts_from_method_tz.tz_localize(None)) < delta)
+        self.assertTrue(abs(ts_from_string_tz.tz_localize(None) -
+                            ts_from_method_tz.tz_localize(None)) < delta)

     def test_asm8(self):
         np.random.seed(7960929)
-        ns = [
-            Timestamp.min.value,
-            Timestamp.max.value,
-            1000,
-        ]
+        ns = [Timestamp.min.value, Timestamp.max.value, 1000, ]
         for n in ns:
             self.assertEqual(Timestamp(n).asm8.view('i8'),
                              np.datetime64(n, 'ns').view('i8'), n)
@@ -405,7 +407,6 @@ def test_asm8(self):
                          np.datetime64('nat', 'ns').view('i8'))

     def test_fields(self):
-
         def check(value, equal):
             # that we are int/long like
             self.assertTrue(isinstance(value, (int, compat.long)))
@@ -419,7 +420,7 @@ def check(value, equal):
         check(ts.hour, 9)
         check(ts.minute, 6)
         check(ts.second, 3)
-        self.assertRaises(AttributeError, lambda : ts.millisecond)
+        self.assertRaises(AttributeError, lambda: ts.millisecond)
         check(ts.microsecond, 100)
         check(ts.nanosecond, 1)
         check(ts.dayofweek, 6)
@@ -449,34 +450,23 @@ def test_nat_fields(self):


 class TestDatetimeParsingWrappers(tm.TestCase):
-
     def test_does_not_convert_mixed_integer(self):
-        bad_date_strings = (
-            '-50000',
-            '999',
-            '123.1234',
-            'm',
-            'T'
-        )
+        bad_date_strings = ('-50000', '999', '123.1234', 'm', 'T')

         for bad_date_string in bad_date_strings:
-            self.assertFalse(
- tslib._does_string_look_like_datetime(bad_date_string) - ) + self.assertFalse(tslib._does_string_look_like_datetime( + bad_date_string)) - good_date_strings = ( - '2012-01-01', - '01/01/2012', - 'Mon Sep 16, 2013', - '01012012', - '0101', - '1-1', - ) + good_date_strings = ('2012-01-01', + '01/01/2012', + 'Mon Sep 16, 2013', + '01012012', + '0101', + '1-1', ) for good_date_string in good_date_strings: - self.assertTrue( - tslib._does_string_look_like_datetime(good_date_string) - ) + self.assertTrue(tslib._does_string_look_like_datetime( + good_date_string)) def test_parsers(self): cases = {'2011-01-01': datetime.datetime(2011, 1, 1), @@ -495,15 +485,12 @@ def test_parsers(self): '4Q2000': datetime.datetime(2000, 10, 1), '4Q00': datetime.datetime(2000, 10, 1), '2000q4': datetime.datetime(2000, 10, 1), - '2000-Q4': datetime.datetime(2000, 10, 1), '00-Q4': datetime.datetime(2000, 10, 1), '4Q-2000': datetime.datetime(2000, 10, 1), '4Q-00': datetime.datetime(2000, 10, 1), - '2000q4': datetime.datetime(2000, 10, 1), '00q4': datetime.datetime(2000, 10, 1), - '2005': datetime.datetime(2005, 1, 1), '2005-11': datetime.datetime(2005, 11, 1), '2005 11': datetime.datetime(2005, 11, 1), @@ -511,16 +498,14 @@ def test_parsers(self): '11 2005': datetime.datetime(2005, 11, 1), '200511': datetime.datetime(2020, 5, 11), '20051109': datetime.datetime(2005, 11, 9), - '20051109 10:15': datetime.datetime(2005, 11, 9, 10, 15), '20051109 08H': datetime.datetime(2005, 11, 9, 8, 0), - '2005-11-09 10:15': datetime.datetime(2005, 11, 9, 10, 15), '2005-11-09 08H': datetime.datetime(2005, 11, 9, 8, 0), '2005/11/09 10:15': datetime.datetime(2005, 11, 9, 10, 15), '2005/11/09 08H': datetime.datetime(2005, 11, 9, 8, 0), - - "Thu Sep 25 10:36:28 2003": datetime.datetime(2003, 9, 25, 10, 36, 28), + "Thu Sep 25 10:36:28 2003": datetime.datetime(2003, 9, 25, 10, + 36, 28), "Thu Sep 25 2003": datetime.datetime(2003, 9, 25), "Sep 25 2003": datetime.datetime(2003, 9, 25), "January 1 2014": 
datetime.datetime(2014, 1, 1), @@ -529,8 +514,7 @@ def test_parsers(self): '2014-06': datetime.datetime(2014, 6, 1), '06-2014': datetime.datetime(2014, 6, 1), '2014-6': datetime.datetime(2014, 6, 1), - '6-2014': datetime.datetime(2014, 6, 1), - } + '6-2014': datetime.datetime(2014, 6, 1), } for date_str, expected in compat.iteritems(cases): result1, _, _ = tools.parse_time_string(date_str) @@ -560,8 +544,7 @@ def test_parsers(self): def test_parsers_quarter_invalid(self): - cases = ['2Q 2005', '2Q-200A', '2Q-200', - '22Q2005', '6Q-20', '2Q200.'] + cases = ['2Q 2005', '2Q-200A', '2Q-200', '22Q2005', '6Q-20', '2Q200.'] for case in cases: self.assertRaises(ValueError, tools.parse_time_string, case) @@ -579,8 +562,9 @@ def test_parsers_dayfirst_yearfirst(self): tm._skip_if_no_dateutil() from dateutil.parser import parse for date_str, values in compat.iteritems(cases): - for dayfirst, yearfirst ,expected in values: - result1, _, _ = tools.parse_time_string(date_str, dayfirst=dayfirst, + for dayfirst, yearfirst, expected in values: + result1, _, _ = tools.parse_time_string(date_str, + dayfirst=dayfirst, yearfirst=yearfirst) result2 = to_datetime(date_str, dayfirst=dayfirst, @@ -596,7 +580,8 @@ def test_parsers_dayfirst_yearfirst(self): self.assertEqual(result3, expected) # compare with dateutil result - dateutil_result = parse(date_str, dayfirst=dayfirst, yearfirst=yearfirst) + dateutil_result = parse(date_str, dayfirst=dayfirst, + yearfirst=yearfirst) self.assertEqual(dateutil_result, expected) def test_parsers_timestring(self): @@ -605,7 +590,7 @@ def test_parsers_timestring(self): # must be the same as dateutil result cases = {'10:15': (parse('10:15'), datetime.datetime(1, 1, 1, 10, 15)), - '9:05': (parse('9:05'), datetime.datetime(1, 1, 1, 9, 5)) } + '9:05': (parse('9:05'), datetime.datetime(1, 1, 1, 9, 5))} for date_str, (exp_now, exp_def) in compat.iteritems(cases): result1, _, _ = tools.parse_time_string(date_str) @@ -647,9 +632,9 @@ def test_parsers_time(self): 
self.assert_numpy_array_equal(tools.to_time(arg, format="%I:%M%p", errors="ignore"), np.array(arg)) - self.assertRaises(ValueError, lambda: tools.to_time(arg, - format="%I:%M%p", - errors="raise")) + self.assertRaises(ValueError, + lambda: tools.to_time(arg, format="%I:%M%p", + errors="raise")) self.assert_series_equal(tools.to_time(Series(arg, name="test")), Series(expected_arr, name="test")) self.assert_numpy_array_equal(tools.to_time(np.array(arg)), @@ -666,13 +651,14 @@ def test_parsers_monthfreq(self): self.assertEqual(result2, expected) def test_parsers_quarterly_with_freq(self): - - msg = 'Incorrect quarterly string is given, quarter must be between 1 and 4: 2013Q5' + msg = ('Incorrect quarterly string is given, quarter ' + 'must be between 1 and 4: 2013Q5') with tm.assertRaisesRegexp(tslib.DateParseError, msg): tools.parse_time_string('2013Q5') # GH 5418 - msg = 'Unable to retrieve month information from given freq: INVLD-L-DEC-SAT' + msg = ('Unable to retrieve month information from given freq: ' + 'INVLD-L-DEC-SAT') with tm.assertRaisesRegexp(tslib.DateParseError, msg): tools.parse_time_string('2013Q1', freq='INVLD-L-DEC-SAT') @@ -702,17 +688,18 @@ def test_parsers_timezone_minute_offsets_roundtrip(self): converted_time = dt_time.tz_localize('UTC').tz_convert(tz) self.assertEqual(dt_string_repr, repr(converted_time)) + class TestArrayToDatetime(tm.TestCase): def test_parsing_valid_dates(self): arr = np.array(['01-01-2013', '01-02-2013'], dtype=object) self.assert_numpy_array_equal( tslib.array_to_datetime(arr), np.array( - [ - '2013-01-01T00:00:00.000000000-0000', - '2013-01-02T00:00:00.000000000-0000' - ], - dtype='M8[ns]' + [ + '2013-01-01T00:00:00.000000000-0000', + '2013-01-02T00:00:00.000000000-0000' + ], + dtype='M8[ns]' ) ) @@ -720,11 +707,11 @@ def test_parsing_valid_dates(self): self.assert_numpy_array_equal( tslib.array_to_datetime(arr), np.array( - [ - '2013-09-16T00:00:00.000000000-0000', - '2013-09-17T00:00:00.000000000-0000' - ], - 
dtype='M8[ns]' + [ + '2013-09-16T00:00:00.000000000-0000', + '2013-09-17T00:00:00.000000000-0000' + ], + dtype='M8[ns]' ) ) @@ -733,10 +720,12 @@ def test_number_looking_strings_not_into_datetime(self): # These strings don't look like datetimes so they shouldn't be # attempted to be converted arr = np.array(['-352.737091', '183.575577'], dtype=object) - self.assert_numpy_array_equal(tslib.array_to_datetime(arr, errors='ignore'), arr) + self.assert_numpy_array_equal( + tslib.array_to_datetime(arr, errors='ignore'), arr) arr = np.array(['1', '2', '3', '4', '5'], dtype=object) - self.assert_numpy_array_equal(tslib.array_to_datetime(arr, errors='ignore'), arr) + self.assert_numpy_array_equal( + tslib.array_to_datetime(arr, errors='ignore'), arr) def test_coercing_dates_outside_of_datetime64_ns_bounds(self): invalid_dates = [ @@ -748,12 +737,11 @@ def test_coercing_dates_outside_of_datetime64_ns_bounds(self): ] for invalid_date in invalid_dates: - self.assertRaises( - ValueError, - tslib.array_to_datetime, - np.array([invalid_date], dtype='object'), - errors='raise', - ) + self.assertRaises(ValueError, + tslib.array_to_datetime, + np.array( + [invalid_date], dtype='object'), + errors='raise', ) self.assert_numpy_array_equal( tslib.array_to_datetime( np.array([invalid_date], dtype='object'), @@ -765,11 +753,11 @@ def test_coercing_dates_outside_of_datetime64_ns_bounds(self): self.assert_numpy_array_equal( tslib.array_to_datetime(arr, errors='coerce'), np.array( - [ - tslib.iNaT, - '2000-01-01T00:00:00.000000000-0000' - ], - dtype='M8[ns]' + [ + tslib.iNaT, + '2000-01-01T00:00:00.000000000-0000' + ], + dtype='M8[ns]' ) ) @@ -778,18 +766,19 @@ def test_coerce_of_invalid_datetimes(self): # Without coercing, the presence of any invalid dates prevents # any values from being converted - self.assert_numpy_array_equal(tslib.array_to_datetime(arr,errors='ignore'), arr) + self.assert_numpy_array_equal( + tslib.array_to_datetime(arr, errors='ignore'), arr) # With coercing, the 
invalid dates becomes iNaT self.assert_numpy_array_equal( tslib.array_to_datetime(arr, errors='coerce'), np.array( - [ - '2013-01-01T00:00:00.000000000-0000', - tslib.iNaT, - tslib.iNaT - ], - dtype='M8[ns]' + [ + '2013-01-01T00:00:00.000000000-0000', + tslib.iNaT, + tslib.iNaT + ], + dtype='M8[ns]' ) ) @@ -803,9 +792,8 @@ def test_parsing_timezone_offsets(self): '12-31-2012 23:00:00-01:00' ] - expected_output = tslib.array_to_datetime( - np.array(['01-01-2013 00:00:00'], dtype=object) - ) + expected_output = tslib.array_to_datetime(np.array( + ['01-01-2013 00:00:00'], dtype=object)) for dt_string in dt_strings: self.assert_numpy_array_equal( @@ -815,6 +803,7 @@ def test_parsing_timezone_offsets(self): expected_output ) + class TestTimestampNsOperations(tm.TestCase): def setUp(self): self.timestamp = Timestamp(datetime.datetime.utcnow()) @@ -826,13 +815,16 @@ def assert_ns_timedelta(self, modified_timestamp, expected_value): self.assertEqual(modified_value - value, expected_value) def test_timedelta_ns_arithmetic(self): - self.assert_ns_timedelta(self.timestamp + np.timedelta64(-123, 'ns'), -123) + self.assert_ns_timedelta(self.timestamp + np.timedelta64(-123, 'ns'), + -123) def test_timedelta_ns_based_arithmetic(self): - self.assert_ns_timedelta(self.timestamp + np.timedelta64(1234567898, 'ns'), 1234567898) + self.assert_ns_timedelta(self.timestamp + np.timedelta64( + 1234567898, 'ns'), 1234567898) def test_timedelta_us_arithmetic(self): - self.assert_ns_timedelta(self.timestamp + np.timedelta64(-123, 'us'), -123000) + self.assert_ns_timedelta(self.timestamp + np.timedelta64(-123, 'us'), + -123000) def test_timedelta_ms_arithmetic(self): time = self.timestamp + np.timedelta64(-123, 'ms') @@ -927,65 +919,99 @@ def test_nat_arithmetic(self): self.assertTrue((left - right) is nat) - class TestTslib(tm.TestCase): - def test_intraday_conversion_factors(self): - self.assertEqual(period_asfreq(1, get_freq('D'), get_freq('H'), False), 24) - 
self.assertEqual(period_asfreq(1, get_freq('D'), get_freq('T'), False), 1440) - self.assertEqual(period_asfreq(1, get_freq('D'), get_freq('S'), False), 86400) - self.assertEqual(period_asfreq(1, get_freq('D'), get_freq('L'), False), 86400000) - self.assertEqual(period_asfreq(1, get_freq('D'), get_freq('U'), False), 86400000000) - self.assertEqual(period_asfreq(1, get_freq('D'), get_freq('N'), False), 86400000000000) - - self.assertEqual(period_asfreq(1, get_freq('H'), get_freq('T'), False), 60) - self.assertEqual(period_asfreq(1, get_freq('H'), get_freq('S'), False), 3600) - self.assertEqual(period_asfreq(1, get_freq('H'), get_freq('L'), False), 3600000) - self.assertEqual(period_asfreq(1, get_freq('H'), get_freq('U'), False), 3600000000) - self.assertEqual(period_asfreq(1, get_freq('H'), get_freq('N'), False), 3600000000000) - - self.assertEqual(period_asfreq(1, get_freq('T'), get_freq('S'), False), 60) - self.assertEqual(period_asfreq(1, get_freq('T'), get_freq('L'), False), 60000) - self.assertEqual(period_asfreq(1, get_freq('T'), get_freq('U'), False), 60000000) - self.assertEqual(period_asfreq(1, get_freq('T'), get_freq('N'), False), 60000000000) - - self.assertEqual(period_asfreq(1, get_freq('S'), get_freq('L'), False), 1000) - self.assertEqual(period_asfreq(1, get_freq('S'), get_freq('U'), False), 1000000) - self.assertEqual(period_asfreq(1, get_freq('S'), get_freq('N'), False), 1000000000) - - self.assertEqual(period_asfreq(1, get_freq('L'), get_freq('U'), False), 1000) - self.assertEqual(period_asfreq(1, get_freq('L'), get_freq('N'), False), 1000000) - - self.assertEqual(period_asfreq(1, get_freq('U'), get_freq('N'), False), 1000) + self.assertEqual(period_asfreq( + 1, get_freq('D'), get_freq('H'), False), 24) + self.assertEqual(period_asfreq( + 1, get_freq('D'), get_freq('T'), False), 1440) + self.assertEqual(period_asfreq( + 1, get_freq('D'), get_freq('S'), False), 86400) + self.assertEqual(period_asfreq(1, get_freq( + 'D'), get_freq('L'), False), 
86400000) + self.assertEqual(period_asfreq(1, get_freq( + 'D'), get_freq('U'), False), 86400000000) + self.assertEqual(period_asfreq(1, get_freq( + 'D'), get_freq('N'), False), 86400000000000) + + self.assertEqual(period_asfreq( + 1, get_freq('H'), get_freq('T'), False), 60) + self.assertEqual(period_asfreq( + 1, get_freq('H'), get_freq('S'), False), 3600) + self.assertEqual(period_asfreq(1, get_freq('H'), + get_freq('L'), False), 3600000) + self.assertEqual(period_asfreq(1, get_freq( + 'H'), get_freq('U'), False), 3600000000) + self.assertEqual(period_asfreq(1, get_freq( + 'H'), get_freq('N'), False), 3600000000000) + + self.assertEqual(period_asfreq( + 1, get_freq('T'), get_freq('S'), False), 60) + self.assertEqual(period_asfreq( + 1, get_freq('T'), get_freq('L'), False), 60000) + self.assertEqual(period_asfreq(1, get_freq( + 'T'), get_freq('U'), False), 60000000) + self.assertEqual(period_asfreq(1, get_freq( + 'T'), get_freq('N'), False), 60000000000) + + self.assertEqual(period_asfreq( + 1, get_freq('S'), get_freq('L'), False), 1000) + self.assertEqual(period_asfreq(1, get_freq('S'), + get_freq('U'), False), 1000000) + self.assertEqual(period_asfreq(1, get_freq( + 'S'), get_freq('N'), False), 1000000000) + + self.assertEqual(period_asfreq( + 1, get_freq('L'), get_freq('U'), False), 1000) + self.assertEqual(period_asfreq(1, get_freq('L'), + get_freq('N'), False), 1000000) + + self.assertEqual(period_asfreq( + 1, get_freq('U'), get_freq('N'), False), 1000) def test_period_ordinal_start_values(self): # information for 1.1.1970 - self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('A'))) - self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('M'))) - self.assertEqual(1, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('W'))) - self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('D'))) - self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('B'))) + self.assertEqual(0, period_ordinal(1970, 1, 
1, 0, 0, 0, 0, 0, + get_freq('A'))) + self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, + get_freq('M'))) + self.assertEqual(1, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, + get_freq('W'))) + self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, + get_freq('D'))) + self.assertEqual(0, period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, + get_freq('B'))) def test_period_ordinal_week(self): - self.assertEqual(1, period_ordinal(1970, 1, 4, 0, 0, 0, 0, 0, get_freq('W'))) - self.assertEqual(2, period_ordinal(1970, 1, 5, 0, 0, 0, 0, 0, get_freq('W'))) + self.assertEqual(1, period_ordinal(1970, 1, 4, 0, 0, 0, 0, 0, + get_freq('W'))) + self.assertEqual(2, period_ordinal(1970, 1, 5, 0, 0, 0, 0, 0, + get_freq('W'))) - self.assertEqual(2284, period_ordinal(2013, 10, 6, 0, 0, 0, 0, 0, get_freq('W'))) - self.assertEqual(2285, period_ordinal(2013, 10, 7, 0, 0, 0, 0, 0, get_freq('W'))) + self.assertEqual(2284, period_ordinal(2013, 10, 6, 0, 0, 0, 0, 0, + get_freq('W'))) + self.assertEqual(2285, period_ordinal(2013, 10, 7, 0, 0, 0, 0, 0, + get_freq('W'))) def test_period_ordinal_business_day(self): # Thursday - self.assertEqual(11415, period_ordinal(2013, 10, 3, 0, 0, 0, 0, 0, get_freq('B'))) + self.assertEqual(11415, period_ordinal(2013, 10, 3, 0, 0, 0, 0, 0, + get_freq('B'))) # Friday - self.assertEqual(11416, period_ordinal(2013, 10, 4, 0, 0, 0, 0, 0, get_freq('B'))) + self.assertEqual(11416, period_ordinal(2013, 10, 4, 0, 0, 0, 0, 0, + get_freq('B'))) # Saturday - self.assertEqual(11417, period_ordinal(2013, 10, 5, 0, 0, 0, 0, 0, get_freq('B'))) + self.assertEqual(11417, period_ordinal(2013, 10, 5, 0, 0, 0, 0, 0, + get_freq('B'))) # Sunday - self.assertEqual(11417, period_ordinal(2013, 10, 6, 0, 0, 0, 0, 0, get_freq('B'))) + self.assertEqual(11417, period_ordinal(2013, 10, 6, 0, 0, 0, 0, 0, + get_freq('B'))) # Monday - self.assertEqual(11417, period_ordinal(2013, 10, 7, 0, 0, 0, 0, 0, get_freq('B'))) + self.assertEqual(11417, period_ordinal(2013, 10, 7, 0, 0, 0, 0, 0, 
+ get_freq('B'))) # Tuesday - self.assertEqual(11418, period_ordinal(2013, 10, 8, 0, 0, 0, 0, 0, get_freq('B'))) + self.assertEqual(11418, period_ordinal(2013, 10, 8, 0, 0, 0, 0, 0, + get_freq('B'))) def test_tslib_tz_convert(self): def compare_utc_to_local(tz_didx, utc_didx): @@ -1006,7 +1032,8 @@ def compare_local_to_utc(tz_didx, utc_didx): tz_didx = date_range('2014-03-01', '2015-01-10', freq='H', tz=tz) utc_didx = date_range('2014-03-01', '2015-01-10', freq='H') compare_utc_to_local(tz_didx, utc_didx) - # local tz to UTC can be differ in hourly (or higher) freqs because of DST + # local tz to UTC can differ in hourly (or higher) freqs because + # of DST compare_local_to_utc(tz_didx, utc_didx) tz_didx = date_range('2000-01-01', '2020-01-01', freq='D', tz=tz) @@ -1029,26 +1056,32 @@ def compare_local_to_utc(tz_didx, utc_didx): result = tslib.tz_convert(np.array([tslib.iNaT], dtype=np.int64), tslib.maybe_get_tz('US/Eastern'), tslib.maybe_get_tz('Asia/Tokyo')) - self.assert_numpy_array_equal(result, np.array([tslib.iNaT], dtype=np.int64)) + self.assert_numpy_array_equal(result, np.array( + [tslib.iNaT], dtype=np.int64)) def test_shift_months(self): - s = DatetimeIndex([Timestamp('2000-01-05 00:15:00'), Timestamp('2000-01-31 00:23:00'), - Timestamp('2000-01-01'), Timestamp('2000-02-29'), Timestamp('2000-12-31')]) + s = DatetimeIndex([Timestamp('2000-01-05 00:15:00'), Timestamp( + '2000-01-31 00:23:00'), Timestamp('2000-01-01'), Timestamp( + '2000-02-29'), Timestamp('2000-12-31')]) for years in [-1, 0, 1]: for months in [-2, 0, 2]: - actual = DatetimeIndex(tslib.shift_months(s.asi8, years * 12 + months)) - expected = DatetimeIndex([x + offsets.DateOffset(years=years, months=months) for x in s]) + actual = DatetimeIndex(tslib.shift_months(s.asi8, years * 12 + + months)) + expected = DatetimeIndex([x + offsets.DateOffset( + years=years, months=months) for x in s]) tm.assert_index_equal(actual, expected) - class TestTimestampOps(tm.TestCase): def
test_timestamp_and_datetime(self): - self.assertEqual((Timestamp(datetime.datetime(2013, 10, 13)) - datetime.datetime(2013, 10, 12)).days, 1) - self.assertEqual((datetime.datetime(2013, 10, 12) - Timestamp(datetime.datetime(2013, 10, 13))).days, -1) + self.assertEqual((Timestamp(datetime.datetime( + 2013, 10, 13)) - datetime.datetime(2013, 10, 12)).days, 1) + self.assertEqual((datetime.datetime(2013, 10, 12) - + Timestamp(datetime.datetime(2013, 10, 13))).days, -1) def test_timestamp_and_series(self): - timestamp_series = Series(date_range('2014-03-17', periods=2, freq='D', tz='US/Eastern')) + timestamp_series = Series(date_range('2014-03-17', periods=2, freq='D', + tz='US/Eastern')) first_timestamp = timestamp_series[0] delta_series = Series([np.timedelta64(0, 'D'), np.timedelta64(1, 'D')]) @@ -1056,25 +1089,34 @@ def test_timestamp_and_series(self): assert_series_equal(first_timestamp - timestamp_series, -delta_series) def test_addition_subtraction_types(self): - # Assert on the types resulting from Timestamp +/- various date/time objects + # Assert on the types resulting from Timestamp +/- various date/time + # objects datetime_instance = datetime.datetime(2014, 3, 4) timedelta_instance = datetime.timedelta(seconds=1) - # build a timestamp with a frequency, since then it supports addition/subtraction of integers - timestamp_instance = date_range(datetime_instance, periods=1, freq='D')[0] + # build a timestamp with a frequency, since then it supports + # addition/subtraction of integers + timestamp_instance = date_range(datetime_instance, periods=1, + freq='D')[0] self.assertEqual(type(timestamp_instance + 1), Timestamp) self.assertEqual(type(timestamp_instance - 1), Timestamp) - # Timestamp + datetime not supported, though subtraction is supported and yields timedelta - # more tests in tseries/base/tests/test_base.py - self.assertEqual(type(timestamp_instance - datetime_instance), Timedelta) - self.assertEqual(type(timestamp_instance + timedelta_instance), 
Timestamp) - self.assertEqual(type(timestamp_instance - timedelta_instance), Timestamp) - - # Timestamp +/- datetime64 not supported, so not tested (could possibly assert error raised?) + # Timestamp + datetime not supported, though subtraction is supported + # and yields timedelta more tests in tseries/base/tests/test_base.py + self.assertEqual( + type(timestamp_instance - datetime_instance), Timedelta) + self.assertEqual( + type(timestamp_instance + timedelta_instance), Timestamp) + self.assertEqual( + type(timestamp_instance - timedelta_instance), Timestamp) + + # Timestamp +/- datetime64 not supported, so not tested (could possibly + # assert error raised?) timedelta64_instance = np.timedelta64(1, 'D') - self.assertEqual(type(timestamp_instance + timedelta64_instance), Timestamp) - self.assertEqual(type(timestamp_instance - timedelta64_instance), Timestamp) + self.assertEqual( + type(timestamp_instance + timedelta64_instance), Timestamp) + self.assertEqual( + type(timestamp_instance - timedelta64_instance), Timestamp) def test_addition_subtraction_preserve_frequency(self): timestamp_instance = date_range('2014-03-05', periods=1, freq='D')[0] @@ -1082,20 +1124,30 @@ def test_addition_subtraction_preserve_frequency(self): original_freq = timestamp_instance.freq self.assertEqual((timestamp_instance + 1).freq, original_freq) self.assertEqual((timestamp_instance - 1).freq, original_freq) - self.assertEqual((timestamp_instance + timedelta_instance).freq, original_freq) - self.assertEqual((timestamp_instance - timedelta_instance).freq, original_freq) + self.assertEqual( + (timestamp_instance + timedelta_instance).freq, original_freq) + self.assertEqual( + (timestamp_instance - timedelta_instance).freq, original_freq) timedelta64_instance = np.timedelta64(1, 'D') - self.assertEqual((timestamp_instance + timedelta64_instance).freq, original_freq) - self.assertEqual((timestamp_instance - timedelta64_instance).freq, original_freq) + self.assertEqual( + (timestamp_instance 
+ timedelta64_instance).freq, original_freq) + self.assertEqual( + (timestamp_instance - timedelta64_instance).freq, original_freq) def test_resolution(self): - for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T', 'S', 'L', 'U'], - [period.D_RESO, period.D_RESO, period.D_RESO, period.D_RESO, - period.H_RESO, period.T_RESO, period.S_RESO, period.MS_RESO, period.US_RESO]): - for tz in [None, 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Eastern']: - idx = date_range(start='2013-04-01', periods=30, freq=freq, tz=tz) + for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T', + 'S', 'L', 'U'], + [period.D_RESO, period.D_RESO, + period.D_RESO, period.D_RESO, + period.H_RESO, period.T_RESO, + period.S_RESO, period.MS_RESO, + period.US_RESO]): + for tz in [None, 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Eastern' + ]: + idx = date_range(start='2013-04-01', periods=30, freq=freq, + tz=tz) result = period.resolution(idx.asi8, idx.tz) self.assertEqual(result, expected) diff --git a/pandas/tseries/tests/test_util.py b/pandas/tseries/tests/test_util.py index c75fcbdac07c0..9c5c9b7a03445 100644 --- a/pandas/tseries/tests/test_util.py +++ b/pandas/tseries/tests/test_util.py @@ -2,7 +2,6 @@ import nose import numpy as np -from numpy.testing.decorators import slow from pandas import Series, date_range import pandas.util.testing as tm @@ -17,6 +16,7 @@ class TestPivotAnnual(tm.TestCase): """ New pandas of scikits.timeseries pivot_annual """ + def test_daily(self): rng = date_range('1/1/2000', '12/31/2004', freq='D') ts = Series(np.random.randn(len(rng)), index=rng) @@ -42,8 +42,8 @@ def test_daily(self): tm.assert_series_equal(annual[day].dropna(), leaps) def test_hourly(self): - rng_hourly = date_range( - '1/1/1994', periods=(18 * 8760 + 4 * 24), freq='H') + rng_hourly = date_range('1/1/1994', periods=(18 * 8760 + 4 * 24), + freq='H') data_hourly = np.random.randint(100, 350, rng_hourly.size) ts_hourly = Series(data_hourly, index=rng_hourly) @@ -64,15 +64,13 @@ def test_hourly(self): 
tm.assert_series_equal(result, subset, check_names=False) self.assertEqual(result.name, i) - leaps = ts_hourly[(ts_hourly.index.month == 2) & - (ts_hourly.index.day == 29) & - (ts_hourly.index.hour == 0)] + leaps = ts_hourly[(ts_hourly.index.month == 2) & ( + ts_hourly.index.day == 29) & (ts_hourly.index.hour == 0)] hour = leaps.index.dayofyear[0] * 24 - 23 leaps.index = leaps.index.year leaps.name = 1417 tm.assert_series_equal(annual[hour].dropna(), leaps) - def test_weekly(self): pass @@ -104,12 +102,13 @@ def test_normalize_date(): value = date(2012, 9, 7) result = normalize_date(value) - assert(result == datetime(2012, 9, 7)) + assert (result == datetime(2012, 9, 7)) value = datetime(2012, 9, 7, 12) result = normalize_date(value) - assert(result == datetime(2012, 9, 7)) + assert (result == datetime(2012, 9, 7)) + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], diff --git a/pandas/tseries/timedeltas.py b/pandas/tseries/timedeltas.py index b769eb2406b33..7ff5d7adcaa35 100644 --- a/pandas/tseries/timedeltas.py +++ b/pandas/tseries/timedeltas.py @@ -2,15 +2,14 @@ timedelta support tools """ -import re import numpy as np import pandas.tslib as tslib -from pandas import compat from pandas.core.common import (ABCSeries, is_integer_dtype, is_timedelta64_dtype, is_list_like, - isnull, _ensure_object, ABCIndexClass) + _ensure_object, ABCIndexClass) from pandas.util.decorators import deprecate_kwarg + @deprecate_kwarg(old_arg_name='coerce', new_arg_name='errors', mapping={True: 'coerce', False: 'raise'}) def to_timedelta(arg, unit='ns', box=True, errors='raise', coerce=None): @@ -20,10 +19,12 @@ def to_timedelta(arg, unit='ns', box=True, errors='raise', coerce=None): Parameters ---------- arg : string, timedelta, list, tuple, 1-d array, or Series - unit : unit of the arg (D,h,m,s,ms,us,ns) denote the unit, which is an integer/float number + unit : unit of the arg (D,h,m,s,ms,us,ns) denote the unit, which is an + 
integer/float number box : boolean, default True - If True returns a Timedelta/TimedeltaIndex of the results - - if False returns a np.timedelta64 or ndarray of values of dtype timedelta64[ns] + - if False returns a np.timedelta64 or ndarray of values of dtype + timedelta64[ns] errors : {'ignore', 'raise', 'coerce'}, default 'raise' - If 'raise', then invalid parsing will raise an exception - If 'coerce', then invalid parsing will be set as NaT @@ -46,14 +47,18 @@ def to_timedelta(arg, unit='ns', box=True, errors='raise', coerce=None): Parsing a list or array of strings: >>> pd.to_timedelta(['1 days 06:05:01.00003', '15.5us', 'nan']) - TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015', NaT], dtype='timedelta64[ns]', freq=None) + TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015', NaT], + dtype='timedelta64[ns]', freq=None) Converting numbers by specifying the `unit` keyword argument: >>> pd.to_timedelta(np.arange(5), unit='s') - TimedeltaIndex(['00:00:00', '00:00:01', '00:00:02', '00:00:03', '00:00:04'], dtype='timedelta64[ns]', freq=None) + TimedeltaIndex(['00:00:00', '00:00:01', '00:00:02', + '00:00:03', '00:00:04'], + dtype='timedelta64[ns]', freq=None) >>> pd.to_timedelta(np.arange(5), unit='d') - TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype='timedelta64[ns]', freq=None) + TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], + dtype='timedelta64[ns]', freq=None) """ unit = _validate_timedelta_unit(unit) @@ -66,14 +71,16 @@ def _convert_listlike(arg, box, unit, name=None): if is_timedelta64_dtype(arg): value = arg.astype('timedelta64[ns]') elif is_integer_dtype(arg): - value = arg.astype('timedelta64[{0}]'.format(unit)).astype('timedelta64[ns]', copy=False) + value = arg.astype('timedelta64[{0}]'.format( + unit)).astype('timedelta64[ns]', copy=False) else: - value = tslib.array_to_timedelta64(_ensure_object(arg), unit=unit, errors=errors) + value = tslib.array_to_timedelta64( + 
_ensure_object(arg), unit=unit, errors=errors) value = value.astype('timedelta64[ns]', copy=False) if box: from pandas import TimedeltaIndex - value = TimedeltaIndex(value,unit='ns', name=name) + value = TimedeltaIndex(value, unit='ns', name=name) return value if arg is None: @@ -87,37 +94,40 @@ def _convert_listlike(arg, box, unit, name=None): elif is_list_like(arg) and getattr(arg, 'ndim', 1) == 1: return _convert_listlike(arg, box=box, unit=unit) elif getattr(arg, 'ndim', 1) > 1: - raise TypeError('arg must be a string, timedelta, list, tuple, 1-d array, or Series') + raise TypeError('arg must be a string, timedelta, list, tuple, ' + '1-d array, or Series') # ...so it must be a scalar value. Return scalar. - return _coerce_scalar_to_timedelta_type(arg, unit=unit, box=box, errors=errors) + return _coerce_scalar_to_timedelta_type(arg, unit=unit, + box=box, errors=errors) _unit_map = { - 'Y' : 'Y', - 'y' : 'Y', - 'W' : 'W', - 'w' : 'W', - 'D' : 'D', - 'd' : 'D', - 'days' : 'D', - 'Days' : 'D', - 'day' : 'D', - 'Day' : 'D', - 'M' : 'M', - 'H' : 'h', - 'h' : 'h', - 'm' : 'm', - 'T' : 'm', - 'S' : 's', - 's' : 's', - 'L' : 'ms', - 'MS' : 'ms', - 'ms' : 'ms', - 'US' : 'us', - 'us' : 'us', - 'NS' : 'ns', - 'ns' : 'ns', - } + 'Y': 'Y', + 'y': 'Y', + 'W': 'W', + 'w': 'W', + 'D': 'D', + 'd': 'D', + 'days': 'D', + 'Days': 'D', + 'day': 'D', + 'Day': 'D', + 'M': 'M', + 'H': 'h', + 'h': 'h', + 'm': 'm', + 'T': 'm', + 'S': 's', + 's': 's', + 'L': 'ms', + 'MS': 'ms', + 'ms': 'ms', + 'US': 'us', + 'us': 'us', + 'NS': 'ns', + 'ns': 'ns', +} + def _validate_timedelta_unit(arg): """ provide validation / translation for timedelta short units """ @@ -128,10 +138,14 @@ def _validate_timedelta_unit(arg): return 'ns' raise ValueError("invalid timedelta unit {0} provided".format(arg)) + def _coerce_scalar_to_timedelta_type(r, unit='ns', box=True, errors='raise'): - """ convert strings to timedelta; coerce to Timedelta (if box), else np.timedelta64""" + """ + convert strings to timedelta; 
coerce to Timedelta (if box), else + np.timedelta64 + """ - result = tslib.convert_to_timedelta(r,unit,errors) + result = tslib.convert_to_timedelta(r, unit, errors) if box: result = tslib.Timedelta(result) diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py index 795e125daff5f..734857c6d724d 100644 --- a/pandas/tseries/tools.py +++ b/pandas/tseries/tools.py @@ -37,6 +37,7 @@ def _lexer_split_from_str(dt_str): except (ImportError, AttributeError): pass + def _infer_tzinfo(start, end): def _infer(a, b): tz = a.tzinfo @@ -169,6 +170,7 @@ def _guess_datetime_format(dt_str, dayfirst=False, if parsed_datetime.strftime(guessed_format) == dt_str: return guessed_format + def _guess_datetime_format_for_array(arr, **kwargs): # Try to guess the format based on the first non-NaN element non_nan_elements = com.notnull(arr).nonzero()[0] @@ -193,13 +195,16 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, - If 'ignore', then invalid parsing will return the input dayfirst : boolean, default False Specify a date parse order if `arg` is str or its list-likes. - If True, parses dates with the day first, eg 10/11/12 is parsed as 2012-11-10. + If True, parses dates with the day first, eg 10/11/12 is parsed as + 2012-11-10. Warning: dayfirst=True is not strict, but will prefer to parse with day first (this is a known bug, based on dateutil behavior). yearfirst : boolean, default False Specify a date parse order if `arg` is str or its list-likes. - - If True parses dates with the year first, eg 10/11/12 is parsed as 2010-11-12. - - If both dayfirst and yearfirst are True, yearfirst is preceded (same as dateutil). + - If True parses dates with the year first, eg 10/11/12 is parsed as + 2010-11-12. + - If both dayfirst and yearfirst are True, yearfirst takes precedence + (same as dateutil). Warning: yearfirst=True is not strict, but will prefer to parse with year first (this is a known bug, based on dateutil behavior).
@@ -269,7 +274,8 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, >>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce') NaT """ - return _to_datetime(arg, errors=errors, dayfirst=dayfirst, yearfirst=yearfirst, + return _to_datetime(arg, errors=errors, dayfirst=dayfirst, + yearfirst=yearfirst, utc=utc, box=box, format=format, exact=exact, unit=unit, infer_datetime_format=infer_datetime_format) @@ -293,7 +299,8 @@ def _convert_listlike(arg, box, format, name=None): if com.is_datetime64_ns_dtype(arg): if box and not isinstance(arg, DatetimeIndex): try: - return DatetimeIndex(arg, tz='utc' if utc else None, name=name) + return DatetimeIndex(arg, tz='utc' if utc else None, + name=name) except ValueError: pass @@ -306,13 +313,15 @@ def _convert_listlike(arg, box, format, name=None): arg = arg.tz_convert(None) return arg - elif format is None and com.is_integer_dtype(arg) and unit=='ns': + elif format is None and com.is_integer_dtype(arg) and unit == 'ns': result = arg.astype('datetime64[ns]') if box: - return DatetimeIndex(result, tz='utc' if utc else None, name=name) + return DatetimeIndex(result, tz='utc' if utc else None, + name=name) return result elif getattr(arg, 'ndim', 1) > 1: - raise TypeError('arg must be a string, datetime, list, tuple, 1-d array, or Series') + raise TypeError('arg must be a string, datetime, list, tuple, ' + '1-d array, or Series') arg = com._ensure_object(arg) require_iso8601 = False @@ -400,7 +409,7 @@ def _convert_listlike(arg, box, format, name=None): elif com.is_list_like(arg): return _convert_listlike(arg, box, format) - return _convert_listlike(np.array([ arg ]), box, format)[0] + return _convert_listlike(np.array([arg]), box, format)[0] def _attempt_YYYYMMDD(arg, errors): @@ -417,8 +426,8 @@ def _attempt_YYYYMMDD(arg, errors): def calc(carg): # calculate the actual result carg = carg.astype(object) - parsed = lib.try_parse_year_month_day(carg/10000, - carg/100 % 100, + parsed = 
lib.try_parse_year_month_day(carg / 10000, + carg / 100 % 100, carg % 100) return tslib.array_to_datetime(parsed, errors=errors) @@ -439,14 +448,14 @@ def calc_with_mask(carg, mask): # a float with actual np.nan try: carg = arg.astype(np.float64) - return calc_with_mask(carg,com.notnull(carg)) + return calc_with_mask(carg, com.notnull(carg)) except: pass # string with NaN-like try: mask = ~lib.ismember(arg, tslib._nat_strings) - return calc_with_mask(arg,mask) + return calc_with_mask(arg, mask) except: pass diff --git a/pandas/tseries/util.py b/pandas/tseries/util.py index 4f29b2bf31f83..7e314657cb25c 100644 --- a/pandas/tseries/util.py +++ b/pandas/tseries/util.py @@ -1,4 +1,4 @@ -from pandas.compat import range, lrange +from pandas.compat import lrange import numpy as np import pandas.core.common as com from pandas.core.frame import DataFrame
https://api.github.com/repos/pandas-dev/pandas/pulls/12075
2016-01-18T02:30:02Z
2016-01-19T20:06:02Z
null
2016-01-19T20:06:48Z
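The diff in the row above re-wraps the `to_datetime` docstring text describing `dayfirst` and `yearfirst` parse order. As a minimal sketch of the behavior that docstring documents (not part of the PR data itself; assumes a standard pandas install), the ambiguous string `'10/11/12'` parses differently under each flag:

```python
import pandas as pd

# Default parse order is month-first: '10/11/12' -> 2012-10-11
default = pd.to_datetime('10/11/12')

# dayfirst=True prefers to parse the day first: -> 2012-11-10
day_first = pd.to_datetime('10/11/12', dayfirst=True)

# yearfirst=True prefers to parse the year first: -> 2010-11-12
year_first = pd.to_datetime('10/11/12', yearfirst=True)

print(default, day_first, year_first)
```

As the docstring warns, neither flag is strict: they express a preference passed through to dateutil, not a hard format requirement.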
PEP: pandas/core round 4 (indexing, internals, missing)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 9df72053fb0af..c61ef2e3d1e60 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -6,34 +6,41 @@ import pandas.core.common as com from pandas.core.common import (is_bool_indexer, is_integer_dtype, _asarray_tuplesafe, is_list_like, isnull, - is_null_slice, is_full_slice, - ABCSeries, ABCDataFrame, ABCPanel, is_float, - _values_from_object, _infer_fill_value, is_integer) + is_null_slice, is_full_slice, ABCSeries, + ABCDataFrame, ABCPanel, is_float, + _values_from_object, _infer_fill_value, + is_integer) import numpy as np + # the supported indexers def get_indexers_list(): return [ - ('ix', _IXIndexer), + ('ix', _IXIndexer), ('iloc', _iLocIndexer), - ('loc', _LocIndexer), - ('at', _AtIndexer), - ('iat', _iAtIndexer), + ('loc', _LocIndexer), + ('at', _AtIndexer), + ('iat', _iAtIndexer), ] # "null slice" _NS = slice(None, None) + # the public IndexSlicerMaker class _IndexSlice(object): def __getitem__(self, arg): return arg + + IndexSlice = _IndexSlice() + class IndexingError(Exception): pass + class _NDFrameIndexer(object): _valid_types = None _exception = KeyError @@ -50,7 +57,7 @@ def __call__(self, *args, **kwargs): # set the passed in values for k, v in compat.iteritems(kwargs): - setattr(self,k,v) + setattr(self, k, v) return self def __iter__(self): @@ -79,8 +86,7 @@ def _get_label(self, label, axis=0): return self.obj._xs(label, axis=axis) except: return self.obj[label] - elif (isinstance(label, tuple) and - isinstance(label[axis], slice)): + elif isinstance(label, tuple) and isinstance(label[axis], slice): raise IndexingError('no slices here, handle elsewhere') return self.obj._xs(label, axis=axis) @@ -129,7 +135,9 @@ def _has_valid_tuple(self, key): "types" % self._valid_types) def _should_validate_iterable(self, axis=0): - """ return a boolean whether this axes needs validation for a passed iterable """ + """ return a boolean whether this axes needs validation for a passed 
+ iterable + """ ax = self.obj._get_axis(axis) if isinstance(ax, MultiIndex): return False @@ -139,8 +147,8 @@ def _should_validate_iterable(self, axis=0): return True def _is_nested_tuple_indexer(self, tup): - if any([ isinstance(ax, MultiIndex) for ax in self.obj.axes ]): - return any([ is_nested_tuple(tup,ax) for ax in self.obj.axes ]) + if any([isinstance(ax, MultiIndex) for ax in self.obj.axes]): + return any([is_nested_tuple(tup, ax) for ax in self.obj.axes]) return False def _convert_tuple(self, key, is_setter=False): @@ -149,7 +157,8 @@ def _convert_tuple(self, key, is_setter=False): axis = self.obj._get_axis_number(self.axis) for i in range(self.ndim): if i == axis: - keyidx.append(self._convert_to_indexer(key, axis=axis, is_setter=is_setter)) + keyidx.append(self._convert_to_indexer( + key, axis=axis, is_setter=is_setter)) else: keyidx.append(slice(None)) else: @@ -178,7 +187,8 @@ def _has_valid_setitem_indexer(self, indexer): def _has_valid_positional_setitem_indexer(self, indexer): """ validate that an positional indexer cannot enlarge its target - will raise if needed, does not modify the indexer externally """ + will raise if needed, does not modify the indexer externally + """ if isinstance(indexer, dict): raise IndexError("{0} cannot enlarge its target object" .format(self.name)) @@ -206,7 +216,8 @@ def _setitem_with_indexer(self, indexer, value): self._has_valid_setitem_indexer(indexer) # also has the side effect of consolidating in-place - from pandas import Panel, DataFrame, Series + # TODO: Panel, DataFrame are not imported, remove? 
+ from pandas import Panel, DataFrame, Series # noqa info_axis = self.obj._info_axis_number # maybe partial set @@ -217,16 +228,19 @@ def _setitem_with_indexer(self, indexer, value): if not take_split_path and self.obj._data.blocks: blk, = self.obj._data.blocks if 1 < blk.ndim: # in case of dict, keys are indices - val = list(value.values()) if isinstance(value,dict) else value + val = list(value.values()) if isinstance(value, + dict) else value take_split_path = not blk._can_hold_element(val) if isinstance(indexer, tuple) and len(indexer) == len(self.obj.axes): for i, ax in zip(indexer, self.obj.axes): - # if we have any multi-indexes that have non-trivial slices (not null slices) - # then we must take the split path, xref GH 10360 - if isinstance(ax, MultiIndex) and not (is_integer(i) or is_null_slice(i)): + # if we have any multi-indexes that have non-trivial slices + # (not null slices) then we must take the split path, xref + # GH 10360 + if (isinstance(ax, MultiIndex) and + not (is_integer(i) or is_null_slice(i))): take_split_path = True break @@ -261,7 +275,6 @@ def _setitem_with_indexer(self, indexer, value): self.obj[key] = value return self.obj - # add a new item with the dtype setup self.obj[key] = _infer_fill_value(value) @@ -276,10 +289,10 @@ def _setitem_with_indexer(self, indexer, value): # just replacing the block manager here # so the object is the same index = self.obj._get_axis(i) - labels = index.insert(len(index),key) + labels = index.insert(len(index), key) self.obj._data = self.obj.reindex_axis(labels, i)._data self.obj._maybe_update_cacher(clear=True) - self.obj.is_copy=None + self.obj.is_copy = None nindexer.append(labels.get_loc(key)) @@ -297,7 +310,7 @@ def _setitem_with_indexer(self, indexer, value): # and set inplace if self.ndim == 1: index = self.obj.index - new_index = index.insert(len(index),indexer) + new_index = index.insert(len(index), indexer) # this preserves dtype of the value new_values = Series([value])._values @@ -314,14 
+327,14 @@ def _setitem_with_indexer(self, indexer, value): # no columns and scalar if not len(self.obj.columns): - raise ValueError( - "cannot set a frame with no defined columns" - ) + raise ValueError("cannot set a frame with no defined " + "columns") # append a Series if isinstance(value, Series): - value = value.reindex(index=self.obj.columns,copy=True) + value = value.reindex(index=self.obj.columns, + copy=True) value.name = indexer # a list-list @@ -330,11 +343,11 @@ def _setitem_with_indexer(self, indexer, value): # must have conforming columns if is_list_like_indexer(value): if len(value) != len(self.obj.columns): - raise ValueError( - "cannot set a row with mismatched columns" - ) + raise ValueError("cannot set a row with " + "mismatched columns") - value = Series(value,index=self.obj.columns,name=indexer) + value = Series(value, index=self.obj.columns, + name=indexer) self.obj._data = self.obj.append(value)._data self.obj._maybe_update_cacher(clear=True) @@ -375,23 +388,24 @@ def _setitem_with_indexer(self, indexer, value): # require that we are setting the right number of values that # we are indexing - if is_list_like_indexer(value) and np.iterable(value) and lplane_indexer != len(value): + if is_list_like_indexer(value) and np.iterable( + value) and lplane_indexer != len(value): if len(obj[idx]) != len(value): - raise ValueError( - "cannot set using a multi-index selection indexer " - "with a different length than the value" - ) + raise ValueError("cannot set using a multi-index " + "selection indexer with a different " + "length than the value") # make sure we have an ndarray - value = getattr(value,'values',value).ravel() + value = getattr(value, 'values', value).ravel() # we can directly set the series here # as we select a slice indexer on the mi idx = index._convert_slice_indexer(idx) obj._consolidate_inplace() obj = obj.copy() - obj._data = obj._data.setitem(indexer=tuple([idx]), value=value) + obj._data = obj._data.setitem(indexer=tuple([idx]), 
+ value=value) self.obj[item] = obj return @@ -411,9 +425,13 @@ def setter(item, v): # perform the equivalent of a setitem on the info axis # as we have a null slice or a slice with full bounds - # which means essentially reassign to the columns of a multi-dim object + # which means essentially reassign to the columns of a + # multi-dim object # GH6149 (null slice), GH10408 (full bounds) - if isinstance(pi, tuple) and all(is_null_slice(idx) or is_full_slice(idx, len(self.obj)) for idx in pi): + if (isinstance(pi, tuple) and + all(is_null_slice(idx) or + is_full_slice(idx, len(self.obj)) + for idx in pi)): s = v else: # set the item, possibly having a dtype change @@ -444,7 +462,7 @@ def can_do_equal_len(): # we need an iterable, with a ndim of at least 1 # eg. don't pass through np.array(0) - if is_list_like_indexer(value) and getattr(value,'ndim',1) > 0: + if is_list_like_indexer(value) and getattr(value, 'ndim', 1) > 0: # we have an equal len Frame if isinstance(value, ABCDataFrame) and value.ndim > 1: @@ -455,8 +473,8 @@ def can_do_equal_len(): if item in value: sub_indexer[info_axis] = item v = self._align_series( - tuple(sub_indexer), value[item], multiindex_indexer - ) + tuple(sub_indexer), value[item], + multiindex_indexer) else: v = np.nan @@ -467,7 +485,7 @@ def can_do_equal_len(): # note that this coerces the dtype if we are mixed # GH 7551 - value = np.array(value,dtype=object) + value = np.array(value, dtype=object) if len(labels) != value.shape[1]: raise ValueError('Must have equal len keys and value ' 'when setting with an ndarray') @@ -503,8 +521,10 @@ def can_do_equal_len(): # if we are setting on the info axis ONLY # set using those methods to avoid block-splitting # logic here - if len(indexer) > info_axis and is_integer(indexer[info_axis]) and all( - is_null_slice(idx) for i, idx in enumerate(indexer) if i != info_axis): + if (len(indexer) > info_axis and + is_integer(indexer[info_axis]) and + all(is_null_slice(idx) for i, idx in 
enumerate(indexer) + if i != info_axis)): self.obj[item_labels[indexer[info_axis]]] = value return @@ -522,7 +542,8 @@ def can_do_equal_len(): # actually do the set self.obj._consolidate_inplace() - self.obj._data = self.obj._data.setitem(indexer=indexer, value=value) + self.obj._data = self.obj._data.setitem(indexer=indexer, + value=value) self.obj._maybe_update_cacher(clear=True) def _align_series(self, indexer, ser, multiindex_indexer=False): @@ -658,7 +679,8 @@ def _align_frame(self, indexer, df): aligners = [not is_null_slice(idx) for idx in indexer] sum_aligners = sum(aligners) - single_aligner = sum_aligners == 1 + # TODO: single_aligner is not used + single_aligner = sum_aligners == 1 # noqa idx, cols = None, None sindexers = [] @@ -697,8 +719,8 @@ def _align_frame(self, indexer, df): val = df.reindex(idx, columns=cols)._values return val - elif ((isinstance(indexer, slice) or is_list_like_indexer(indexer)) - and is_frame): + elif ((isinstance(indexer, slice) or is_list_like_indexer(indexer)) and + is_frame): ax = self.obj.index[indexer] if df.index.equals(ax): val = df.copy()._values @@ -706,9 +728,11 @@ def _align_frame(self, indexer, df): # we have a multi-index and are trying to align # with a particular, level GH3738 - if isinstance(ax, MultiIndex) and isinstance( - df.index, MultiIndex) and ax.nlevels != df.index.nlevels: - raise TypeError("cannot align on a multi-index with out specifying the join levels") + if (isinstance(ax, MultiIndex) and + isinstance(df.index, MultiIndex) and + ax.nlevels != df.index.nlevels): + raise TypeError("cannot align on a multi-index with out " + "specifying the join levels") val = df.reindex(index=ax)._values return val @@ -728,8 +752,9 @@ def _align_frame(self, indexer, df): raise ValueError('Incompatible indexer with DataFrame') def _align_panel(self, indexer, df): - is_frame = self.obj.ndim == 2 - is_panel = self.obj.ndim >= 3 + # TODO: is_frame, is_panel are unused + is_frame = self.obj.ndim == 2 # noqa + is_panel = 
self.obj.ndim >= 3 # noqa raise NotImplementedError("cannot set using an indexer with a Panel " "yet!") @@ -786,10 +811,9 @@ def _multi_take(self, tup): """ try: o = self.obj - d = dict([ - (a, self._convert_for_reindex(t, axis=o._get_axis_number(a))) - for t, a in zip(tup, o._AXIS_ORDERS) - ]) + d = dict( + [(a, self._convert_for_reindex(t, axis=o._get_axis_number(a))) + for t, a in zip(tup, o._AXIS_ORDERS)]) return o.reindex(**d) except: raise self._exception @@ -876,8 +900,8 @@ def _getitem_lowerdim(self, tup): # unfortunately need an odious kludge here because of # DataFrame transposing convention - if (isinstance(section, ABCDataFrame) and i > 0 - and len(new_key) == 2): + if (isinstance(section, ABCDataFrame) and i > 0 and + len(new_key) == 2): a, b = new_key new_key = b, a @@ -901,7 +925,8 @@ def _getitem_nested_tuple(self, tup): if result is not None: return result - # this is a series with a multi-index specified a tuple of selectors + # this is a series with a multi-index specified a tuple of + # selectors return self._getitem_axis(tup, axis=0) # handle the multi-axis by taking sections and reducing @@ -919,7 +944,7 @@ def _getitem_nested_tuple(self, tup): axis += 1 # if we have a scalar, we are done - if np.isscalar(obj) or not hasattr(obj,'ndim'): + if np.isscalar(obj) or not hasattr(obj, 'ndim'): break # has the dim of the obj changed? 
@@ -944,8 +969,9 @@ def _getitem_axis(self, key, axis=0): labels = self.obj._get_axis(axis) if isinstance(key, slice): return self._get_slice_axis(key, axis=axis) - elif is_list_like_indexer(key) and not (isinstance(key, tuple) and - isinstance(labels, MultiIndex)): + elif (is_list_like_indexer(key) and + not (isinstance(key, tuple) and + isinstance(labels, MultiIndex))): if hasattr(key, 'ndim') and key.ndim > 1: raise ValueError('Cannot index with multidimensional key') @@ -1004,13 +1030,16 @@ def _getitem_iterable(self, key, axis=0): if labels.is_unique and Index(keyarr).is_unique: try: - result = self.obj.reindex_axis(keyarr, axis=axis, level=level) + result = self.obj.reindex_axis(keyarr, axis=axis, + level=level) # this is an error as we are trying to find # keys in a multi-index that don't exist if isinstance(labels, MultiIndex) and level is not None: - if hasattr(result,'ndim') and not np.prod(result.shape) and len(keyarr): - raise KeyError("cannot index a multi-index axis with these keys") + if (hasattr(result, 'ndim') and + not np.prod(result.shape) and len(keyarr)): + raise KeyError("cannot index a multi-index axis " + "with these keys") return result @@ -1029,19 +1058,19 @@ def _getitem_iterable(self, key, axis=0): raise AssertionError("invalid indexing error with " "non-unique index") - new_target, indexer, new_indexer = labels._reindex_non_unique(keyarr) + new_target, indexer, new_indexer = labels._reindex_non_unique( + keyarr) if new_indexer is not None: - result = self.obj.take(indexer[indexer!=-1], axis=axis, + result = self.obj.take(indexer[indexer != -1], axis=axis, convert=False) - result = result._reindex_with_indexers({ - axis: [new_target, new_indexer] - }, copy=True, allow_dups=True) + result = result._reindex_with_indexers( + {axis: [new_target, new_indexer]}, + copy=True, allow_dups=True) else: - result = self.obj.take(indexer, axis=axis, - convert=False) + result = self.obj.take(indexer, axis=axis, convert=False) return result @@ -1182,7 
+1211,6 @@ def _get_slice_axis(self, slice_obj, axis=0): class _IXIndexer(_NDFrameIndexer): - """A primarily label-location based indexer, with integer position fallback. @@ -1258,7 +1286,6 @@ def _get_slice_axis(self, slice_obj, axis=0): class _LocIndexer(_LocationIndexer): - """Purely label-location based indexer for selection by label. ``.loc[]`` is primarily label based, but may also be used with a @@ -1317,16 +1344,16 @@ def _has_valid_type(self, key, axis): def error(): if isnull(key): - raise TypeError( - "cannot use label indexing with a null key") + raise TypeError("cannot use label indexing with a null " + "key") raise KeyError("the label [%s] is not in the [%s]" % (key, self.obj._get_axis_name(axis))) try: key = self._convert_scalar_indexer(key, axis) - if not key in ax: + if key not in ax: error() - except (TypeError) as e: + except TypeError as e: # python 3 type errors should be raised if 'unorderable' in str(e): # pragma: no cover @@ -1351,12 +1378,12 @@ def _getitem_axis(self, key, axis=0): # possibly convert a list-like into a nested tuple # but don't convert a list-like of tuples if isinstance(labels, MultiIndex): - if not isinstance(key, tuple) and len(key) > 1 and not isinstance(key[0], tuple): + if (not isinstance(key, tuple) and len(key) > 1 and + not isinstance(key[0], tuple)): key = tuple([key]) # an iterable multi-selection - if not (isinstance(key, tuple) and - isinstance(labels, MultiIndex)): + if not (isinstance(key, tuple) and isinstance(labels, MultiIndex)): if hasattr(key, 'ndim') and key.ndim > 1: raise ValueError('Cannot index with multidimensional key') @@ -1366,7 +1393,7 @@ def _getitem_axis(self, key, axis=0): # nested tuple slicing if is_nested_tuple(key, labels): locs = labels.get_locs(key) - indexer = [ slice(None) ] * self.ndim + indexer = [slice(None)] * self.ndim indexer[axis] = locs return self.obj.iloc[tuple(indexer)] @@ -1376,7 +1403,6 @@ def _getitem_axis(self, key, axis=0): class _iLocIndexer(_LocationIndexer): - 
"""Purely integer-location based indexing for selection by position. ``.iloc[]`` is primarily integer position based (from ``0`` to @@ -1406,10 +1432,9 @@ def _has_valid_type(self, key, axis): if is_bool_indexer(key): if hasattr(key, 'index') and isinstance(key.index, Index): if key.index.inferred_type == 'integer': - raise NotImplementedError( - "iLocation based boolean indexing on an integer type " - "is not available" - ) + raise NotImplementedError("iLocation based boolean " + "indexing on an integer type " + "is not available") raise ValueError("iLocation based boolean indexing cannot use " "an indexable as a mask") return True @@ -1434,9 +1459,9 @@ def _is_valid_integer(self, key, axis): raise IndexError("single positional indexer is out-of-bounds") return True - def _is_valid_list_like(self, key, axis): - # return a boolean if we are a valid list-like (e.g. that we dont' have out-of-bounds values) + # return a boolean if we are a valid list-like (e.g. that we don't + # have out-of-bounds values) # coerce the key to not exceed the maximum size of the index arr = np.array(key) @@ -1456,7 +1481,7 @@ def _getitem_tuple(self, tup): pass retval = self.obj - axis=0 + axis = 0 for i, key in enumerate(tup): if i >= self.obj.ndim: raise IndexingError('Too many indexers') @@ -1468,7 +1493,7 @@ def _getitem_tuple(self, tup): retval = getattr(retval, self.name)._getitem_axis(key, axis=axis) # if the dim was reduced, then pass a lower-dim the next time - if retval.ndim<self.ndim: + if retval.ndim < self.ndim: axis -= 1 # try to get for the next axis @@ -1539,7 +1564,6 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False): class _ScalarAccessIndexer(_NDFrameIndexer): - """ access scalars quickly """ def _convert_key(self, key, is_setter=False): @@ -1569,7 +1593,6 @@ def __setitem__(self, key, value): class _AtIndexer(_ScalarAccessIndexer): - """Fast label-based scalar accessor Similarly to ``loc``, ``at`` provides **label** based scalar lookups. 
@@ -1580,7 +1603,9 @@ class _AtIndexer(_ScalarAccessIndexer): _takeable = False def _convert_key(self, key, is_setter=False): - """ require they keys to be the same type as the index (so we don't fallback) """ + """ require they keys to be the same type as the index (so we don't + fallback) + """ # allow arbitrary setting if is_setter: @@ -1589,16 +1614,17 @@ def _convert_key(self, key, is_setter=False): for ax, i in zip(self.obj.axes, key): if ax.is_integer(): if not is_integer(i): - raise ValueError("At based indexing on an integer index can only have integer " - "indexers") + raise ValueError("At based indexing on an integer index " + "can only have integer indexers") else: if is_integer(i): - raise ValueError("At based indexing on an non-integer index can only have non-integer " + raise ValueError("At based indexing on an non-integer " + "index can only have non-integer " "indexers") return key -class _iAtIndexer(_ScalarAccessIndexer): +class _iAtIndexer(_ScalarAccessIndexer): """Fast integer location scalar accessor. Similarly to ``iloc``, ``iat`` provides **integer** based lookups. 
@@ -1643,7 +1669,7 @@ def length_of_indexer(indexer, target=None): step = 1 elif step < 0: step = -step - return (stop - start + step-1) // step + return (stop - start + step - 1) // step elif isinstance(indexer, (ABCSeries, Index, np.ndarray, list)): return len(indexer) elif not is_list_like_indexer(indexer): @@ -1676,8 +1702,8 @@ def convert_to_index_sliceable(obj, key): def is_index_slice(obj): def _is_valid_index(x): - return (is_integer(x) or is_float(x) - and np.allclose(x, int(x), rtol=_eps, atol=0)) + return (is_integer(x) or is_float(x) and + np.allclose(x, int(x), rtol=_eps, atol=0)) def _crit(v): return v is None or _is_valid_index(v) @@ -1712,7 +1738,8 @@ def check_bool_indexer(ax, key): def convert_missing_indexer(indexer): """ reverse convert a missing indexer, which is a dict - return the scalar indexer and a boolean indicating if we converted """ + return the scalar indexer and a boolean indicating if we converted + """ if isinstance(indexer, dict): @@ -1728,9 +1755,11 @@ def convert_missing_indexer(indexer): def convert_from_missing_indexer_tuple(indexer, axes): """ create a filtered indexer that doesn't have any missing indexers """ + def get_indexer(_i, _idx): - return (axes[_i].get_loc(_idx['key']) - if isinstance(_idx, dict) else _idx) + return (axes[_i].get_loc(_idx['key']) if isinstance(_idx, dict) else + _idx) + return tuple([get_indexer(_i, _idx) for _i, _idx in enumerate(indexer)]) @@ -1783,9 +1812,12 @@ def is_nested_tuple(tup, labels): return False + def is_list_like_indexer(key): # allow a list_like, but exclude NamedTuples which can be indexers - return is_list_like(key) and not (isinstance(key, tuple) and type(key) is not tuple) + return is_list_like(key) and not (isinstance(key, tuple) and + type(key) is not tuple) + def is_label_like(key): # select a label or row @@ -1793,8 +1825,7 @@ def is_label_like(key): def need_slice(obj): - return (obj.start is not None or - obj.stop is not None or + return (obj.start is not None or obj.stop 
is not None or (obj.step is not None and obj.step != 1)) @@ -1826,8 +1857,8 @@ def _non_reducing_slice(slice_): """ # default to column slice, like DataFrame # ['A', 'B'] -> IndexSlices[:, ['A', 'B']] - kinds = tuple(list(compat.string_types) + - [ABCSeries, np.ndarray, Index, list]) + kinds = tuple(list(compat.string_types) + [ABCSeries, np.ndarray, Index, + list]) if isinstance(slice_, kinds): slice_ = IndexSlice[:, slice_] diff --git a/pandas/core/internals.py b/pandas/core/internals.py index b10b1b5771bf7..6e9005395281c 100644 --- a/pandas/core/internals.py +++ b/pandas/core/internals.py @@ -8,15 +8,15 @@ import numpy as np from pandas.core.base import PandasObject -from pandas.core.common import (_possibly_downcast_to_dtype, isnull, - _NS_DTYPE, _TD_DTYPE, ABCSeries, is_list_like, - ABCSparseSeries, _infer_dtype_from_scalar, - is_null_slice, is_dtype_equal, - is_null_datelike_scalar, _maybe_promote, - is_timedelta64_dtype, is_datetime64_dtype, - is_datetime64tz_dtype, is_datetimetz, is_sparse, - array_equivalent, _maybe_convert_string_to_object, - is_categorical, needs_i8_conversion, is_datetimelike_v_numeric, +from pandas.core.common import (_possibly_downcast_to_dtype, isnull, _NS_DTYPE, + _TD_DTYPE, ABCSeries, is_list_like, + _infer_dtype_from_scalar, is_null_slice, + is_dtype_equal, is_null_datelike_scalar, + _maybe_promote, is_timedelta64_dtype, + is_datetime64_dtype, is_datetimetz, is_sparse, + array_equivalent, + _maybe_convert_string_to_object, + is_categorical, is_datetimelike_v_numeric, is_numeric_v_string_like, is_internal_type) from pandas.core.dtypes import DatetimeTZDtype @@ -33,17 +33,14 @@ import pandas.computation.expressions as expressions from pandas.util.decorators import cache_readonly -from pandas.tslib import Timestamp, Timedelta +from pandas.tslib import Timedelta from pandas import compat from pandas.compat import range, map, zip, u -from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type - from pandas.lib import 
BlockPlacement class Block(PandasObject): - """ Canonical n-dimensional unit of homogeneous dtype contained in a pandas data structure @@ -82,9 +79,9 @@ def __init__(self, values, placement, ndim=None, fastpath=False): self.values = values if len(self.mgr_locs) != len(self.values): - raise ValueError('Wrong number of items passed %d,' - ' placement implies %d' % ( - len(self.values), len(self.mgr_locs))) + raise ValueError('Wrong number of items passed %d, placement ' + 'implies %d' % (len(self.values), + len(self.mgr_locs))) @property def _consolidate_key(self): @@ -125,7 +122,8 @@ def external_values(self, dtype=None): def internal_values(self, dtype=None): """ return an internal format, currently just the ndarray - this should be the pure internal API format """ + this should be the pure internal API format + """ return self.values def get_values(self, dtype=None): @@ -141,7 +139,7 @@ def to_dense(self): def to_object_block(self, mgr): """ return myself as an object block """ values = self.get_values(dtype=object) - return self.make_block(values,klass=ObjectBlock) + return self.make_block(values, klass=ObjectBlock) @property def fill_value(self): @@ -153,13 +151,15 @@ def mgr_locs(self): @property def array_dtype(self): - """ the dtype to return if I want to construct this block as an array """ + """ the dtype to return if I want to construct this block as an + array + """ return self.dtype def make_block(self, values, placement=None, ndim=None, **kwargs): """ - Create a new block, with type inference - propogate any values that are not specified + Create a new block, with type inference propogate any values that are + not specified """ if placement is None: placement = self.mgr_locs @@ -168,7 +168,8 @@ def make_block(self, values, placement=None, ndim=None, **kwargs): return make_block(values, placement=placement, ndim=ndim, **kwargs) - def make_block_same_class(self, values, placement=None, fastpath=True, **kwargs): + def make_block_same_class(self, values, 
placement=None, fastpath=True, + **kwargs): """ Wrap given values in a block of same type as self. """ if placement is None: placement = self.mgr_locs @@ -188,15 +189,13 @@ def __unicode__(self): name = com.pprint_thing(self.__class__.__name__) if self._is_single_block: - result = '%s: %s dtype: %s' % ( - name, len(self), self.dtype) + result = '%s: %s dtype: %s' % (name, len(self), self.dtype) else: shape = ' x '.join([com.pprint_thing(s) for s in self.shape]) - result = '%s: %s, %s, dtype: %s' % ( - name, com.pprint_thing(self.mgr_locs.indexer), shape, - self.dtype) + result = '%s: %s, %s, dtype: %s' % (name, com.pprint_thing( + self.mgr_locs.indexer), shape, self.dtype) return result @@ -226,19 +225,15 @@ def reshape_nd(self, labels, shape, ref_items, mgr=None): return a new block that is transformed to a nd block """ - return _block2d_to_blocknd( - values=self.get_values().T, - placement=self.mgr_locs, - shape=shape, - labels=labels, - ref_items=ref_items) + return _block2d_to_blocknd(values=self.get_values().T, + placement=self.mgr_locs, shape=shape, + labels=labels, ref_items=ref_items) def getitem_block(self, slicer, new_mgr_locs=None): """ Perform __getitem__-like, return result as block. As of now, only supports slices that preserve dimensionality. 
- """ if new_mgr_locs is None: if isinstance(slicer, tuple): @@ -285,8 +280,7 @@ def reindex_axis(self, indexer, method=None, axis=1, fill_value=None, new_values = com.take_nd(self.values, indexer, axis, fill_value=fill_value, mask_info=mask_info) - return self.make_block(new_values, - fastpath=True) + return self.make_block(new_values, fastpath=True) def get(self, item): loc = self.items.get_loc(item) @@ -313,16 +307,20 @@ def delete(self, loc): self.mgr_locs = self.mgr_locs.delete(loc) def apply(self, func, mgr=None, **kwargs): - """ apply the function to my values; return a block if we are not one """ + """ apply the function to my values; return a block if we are not + one + """ result = func(self.values, **kwargs) if not isinstance(result, Block): result = self.make_block(values=_block_shape(result)) return result - def fillna(self, value, limit=None, inplace=False, downcast=None, mgr=None): - """ fillna on the block with the value. If we fail, then convert to ObjectBlock - and try again """ + def fillna(self, value, limit=None, inplace=False, downcast=None, + mgr=None): + """ fillna on the block with the value. 
If we fail, then convert to + ObjectBlock and try again + """ if not self._can_hold_na: if inplace: @@ -336,13 +334,14 @@ def fillna(self, value, limit=None, inplace=False, downcast=None, mgr=None): if self.ndim > 2: raise NotImplementedError("number of dimensions for 'fillna' " "is currently limited to 2") - mask[mask.cumsum(self.ndim-1) > limit] = False + mask[mask.cumsum(self.ndim - 1) > limit] = False # fillna, but if we cannot coerce, then try again as an ObjectBlock try: values, _, value, _ = self._try_coerce_args(self.values, value) blocks = self.putmask(mask, value, inplace=inplace) - blocks = [ b.make_block(values=self._try_coerce_result(b.values)) for b in blocks ] + blocks = [b.make_block(values=self._try_coerce_result(b.values)) + for b in blocks] return self._maybe_downcast(blocks, downcast) except (TypeError, ValueError): @@ -366,7 +365,7 @@ def _maybe_downcast(self, blocks, downcast=None): elif downcast is None and (self.is_timedelta or self.is_datetime): return blocks - return _extend_blocks([ b.downcast(downcast) for b in blocks ]) + return _extend_blocks([b.downcast(downcast) for b in blocks]) def downcast(self, dtypes=None, mgr=None): """ try to downcast each item to the dict of dtypes if present """ @@ -385,8 +384,7 @@ def downcast(self, dtypes=None, mgr=None): dtypes = 'infer' nv = _possibly_downcast_to_dtype(values, dtypes) - return self.make_block(nv, - fastpath=True) + return self.make_block(nv, fastpath=True) # ndim > 1 if dtypes is None: @@ -405,7 +403,8 @@ def downcast(self, dtypes=None, mgr=None): dtype = 'infer' else: raise AssertionError("dtypes as dict is not supported yet") - dtype = dtypes.get(item, self._downcast_dtype) + # TODO: This either should be completed or removed + dtype = dtypes.get(item, self._downcast_dtype) # noqa if dtype is None: nv = _block_shape(values[i], ndim=self.ndim) @@ -413,13 +412,12 @@ def downcast(self, dtypes=None, mgr=None): nv = _possibly_downcast_to_dtype(values[i], dtype) nv = _block_shape(nv, 
ndim=self.ndim) - blocks.append(self.make_block(nv, - fastpath=True, - placement=[rl])) + blocks.append(self.make_block(nv, fastpath=True, placement=[rl])) return blocks - def astype(self, dtype, copy=False, raise_on_error=True, values=None, **kwargs): + def astype(self, dtype, copy=False, raise_on_error=True, values=None, + **kwargs): return self._astype(dtype, copy=copy, raise_on_error=raise_on_error, values=values, **kwargs) @@ -449,7 +447,8 @@ def _astype(self, dtype, copy=False, raise_on_error=True, values=None, # force the copy here if values is None: - if issubclass(dtype.type, (compat.text_type, compat.string_types)): + if issubclass(dtype.type, + (compat.text_type, compat.string_types)): # use native type formatting for datetime/tz/timedelta if self.is_datelike: @@ -466,9 +465,7 @@ def _astype(self, dtype, copy=False, raise_on_error=True, values=None, values = com._astype_nansafe(values.ravel(), dtype, copy=True) values = values.reshape(self.shape) - newb = make_block(values, - placement=self.mgr_locs, - dtype=dtype, + newb = make_block(values, placement=self.mgr_locs, dtype=dtype, klass=klass) except: if raise_on_error is True: @@ -485,9 +482,10 @@ def _astype(self, dtype, copy=False, raise_on_error=True, values=None, return newb def convert(self, copy=True, **kwargs): - """ attempt to coerce any object types to better types - return a copy of the block (if copy = True) - by definition we are not an ObjectBlock here! """ + """ attempt to coerce any object types to better types return a copy + of the block (if copy = True) by definition we are not an ObjectBlock + here! 
+ """ return self.copy() if copy else self @@ -498,8 +496,9 @@ def _try_cast(self, value): raise NotImplementedError() def _try_cast_result(self, result, dtype=None): - """ try to cast the result to our original type, - we may have roundtripped thru object in the mean-time """ + """ try to cast the result to our original type, we may have + roundtripped thru object in the mean-time + """ if dtype is None: dtype = self.dtype @@ -549,7 +548,8 @@ def _try_coerce_and_cast_result(self, result, dtype=None): def _try_fill(self, value): return value - def to_native_types(self, slicer=None, na_rep='nan', quoting=None, **kwargs): + def to_native_types(self, slicer=None, na_rep='nan', quoting=None, + **kwargs): """ convert to our native types format, slicing if desired """ values = self.values @@ -578,14 +578,17 @@ def replace(self, to_replace, value, inplace=False, filter=None, """ replace the to_replace value with value, possible to create new blocks here this is just a call to putmask. regex is not used here. It is used in ObjectBlocks. It is here for API - compatibility.""" + compatibility. 
+ """ original_to_replace = to_replace mask = isnull(self.values) - # try to replace, if we raise an error, convert to ObjectBlock and retry + # try to replace, if we raise an error, convert to ObjectBlock and + # retry try: - values, _, to_replace, _ = self._try_coerce_args(self.values, to_replace) + values, _, to_replace, _ = self._try_coerce_args(self.values, + to_replace) mask = com.mask_missing(values, to_replace) if filter is not None: filtered_out = ~self.mgr_locs.isin(filter) @@ -593,7 +596,8 @@ def replace(self, to_replace, value, inplace=False, filter=None, blocks = self.putmask(mask, value, inplace=inplace) if convert: - blocks = [ b.convert(by_item=True, numeric=False, copy=not inplace) for b in blocks ] + blocks = [b.convert(by_item=True, numeric=False, + copy=not inplace) for b in blocks] return blocks except (TypeError, ValueError): @@ -601,13 +605,9 @@ def replace(self, to_replace, value, inplace=False, filter=None, if not mask.any(): return self if inplace else self.copy() - return self.to_object_block(mgr=mgr).replace(to_replace=original_to_replace, - value=value, - inplace=inplace, - filter=filter, - regex=regex, - convert=convert) - + return self.to_object_block(mgr=mgr).replace( + to_replace=original_to_replace, value=value, inplace=inplace, + filter=filter, regex=regex, convert=convert) def _replace_single(self, *args, **kwargs): """ no-op on a non-ObjectBlock """ @@ -665,7 +665,7 @@ def _is_scalar_indexer(indexer): if arr_value.ndim == 1: if not isinstance(indexer, tuple): indexer = tuple([indexer]) - return all([ np.isscalar(idx) for idx in indexer ]) + return all([np.isscalar(idx) for idx in indexer]) return False def _is_empty_indexer(indexer): @@ -674,7 +674,8 @@ def _is_empty_indexer(indexer): if arr_value.ndim == 1: if not isinstance(indexer, tuple): indexer = tuple([indexer]) - return any(isinstance(idx, np.ndarray) and len(idx) == 0 for idx in indexer) + return any(isinstance(idx, np.ndarray) and len(idx) == 0 + for idx in indexer) 
return False # empty indexers @@ -682,14 +683,17 @@ def _is_empty_indexer(indexer): if _is_empty_indexer(indexer): pass - # setting a single element for each dim and with a rhs that could be say a list + # setting a single element for each dim and with a rhs that could + # be say a list # GH 6043 elif _is_scalar_indexer(indexer): values[indexer] = value # if we are an exact match (ex-broadcasting), # then use the resultant dtype - elif len(arr_value.shape) and arr_value.shape[0] == values.shape[0] and np.prod(arr_value.shape) == np.prod(values.shape): + elif (len(arr_value.shape) and + arr_value.shape[0] == values.shape[0] and + np.prod(arr_value.shape) == np.prod(values.shape)): values[indexer] = value values = values.astype(arr_value.dtype) @@ -703,23 +707,22 @@ def _is_empty_indexer(indexer): else: dtype = 'infer' values = self._try_coerce_and_cast_result(values, dtype) - block = self.make_block(transf(values), - fastpath=True) + block = self.make_block(transf(values), fastpath=True) # may have to soft convert_objects here if block.is_object and not self.is_object: block = block.convert(numeric=False) return block - except (ValueError, TypeError) as detail: + except (ValueError, TypeError): raise - except Exception as detail: + except Exception: pass return [self] - def putmask(self, mask, new, align=True, inplace=False, - axis=0, transpose=False, mgr=None): + def putmask(self, mask, new, align=True, inplace=False, axis=0, + transpose=False, mgr=None): """ putmask the data to the block; it is possible that we may create a new dtype of block @@ -758,12 +761,12 @@ def putmask(self, mask, new, align=True, inplace=False, new = self._try_cast(new) - # If the default repeat behavior in np.putmask would go in the wrong - # direction, then explictly repeat and reshape new instead + # If the default repeat behavior in np.putmask would go in the + # wrong direction, then explictly repeat and reshape new instead if getattr(new, 'ndim', 0) >= 1: if self.ndim - 1 == new.ndim 
and axis == 1: - new = np.repeat(new, new_values.shape[-1]).reshape( - self.shape) + new = np.repeat( + new, new_values.shape[-1]).reshape(self.shape) new = new.astype(new_values.dtype) np.putmask(new_values, mask, new) @@ -810,15 +813,13 @@ def putmask(self, mask, new, align=True, inplace=False, # Put back the dimension that was taken from it and make # a block out of the result. block = self.make_block(values=nv[np.newaxis], - placement=[ref_loc], - fastpath=True) + placement=[ref_loc], fastpath=True) new_blocks.append(block) else: nv = _putmask_smart(new_values, mask, new) - new_blocks.append(self.make_block(values=nv, - fastpath=True)) + new_blocks.append(self.make_block(values=nv, fastpath=True)) return new_blocks @@ -828,14 +829,12 @@ def putmask(self, mask, new, align=True, inplace=False, if transpose: new_values = new_values.T - return [self.make_block(new_values, - fastpath=True)] - - def interpolate(self, method='pad', axis=0, index=None, - values=None, inplace=False, limit=None, - limit_direction='forward', - fill_value=None, coerce=False, downcast=None, mgr=None, **kwargs): + return [self.make_block(new_values, fastpath=True)] + def interpolate(self, method='pad', axis=0, index=None, values=None, + inplace=False, limit=None, limit_direction='forward', + fill_value=None, coerce=False, downcast=None, mgr=None, + **kwargs): def check_int_bool(self, inplace): # Only FloatBlocks will contain NaNs. 
# timedelta subclasses IntBlock @@ -855,14 +854,11 @@ def check_int_bool(self, inplace): r = check_int_bool(self, inplace) if r is not None: return r - return self._interpolate_with_fill(method=m, - axis=axis, - inplace=inplace, - limit=limit, + return self._interpolate_with_fill(method=m, axis=axis, + inplace=inplace, limit=limit, fill_value=fill_value, coerce=coerce, - downcast=downcast, - mgr=mgr) + downcast=downcast, mgr=mgr) # try an interp method try: m = mis._clean_interp_method(method, **kwargs) @@ -873,17 +869,11 @@ def check_int_bool(self, inplace): r = check_int_bool(self, inplace) if r is not None: return r - return self._interpolate(method=m, - index=index, - values=values, - axis=axis, - limit=limit, + return self._interpolate(method=m, index=index, values=values, + axis=axis, limit=limit, limit_direction=limit_direction, - fill_value=fill_value, - inplace=inplace, - downcast=downcast, - mgr=mgr, - **kwargs) + fill_value=fill_value, inplace=inplace, + downcast=downcast, mgr=mgr, **kwargs) raise ValueError("invalid method '{0}' to interpolate.".format(method)) @@ -904,23 +894,18 @@ def _interpolate_with_fill(self, method='pad', axis=0, inplace=False, values = self.values if inplace else self.values.copy() values, _, fill_value, _ = self._try_coerce_args(values, fill_value) values = self._try_operate(values) - values = mis.interpolate_2d(values, - method=method, - axis=axis, - limit=limit, - fill_value=fill_value, + values = mis.interpolate_2d(values, method=method, axis=axis, + limit=limit, fill_value=fill_value, dtype=self.dtype) values = self._try_coerce_result(values) - blocks = [self.make_block(values, - klass=self.__class__, - fastpath=True)] + blocks = [self.make_block(values, klass=self.__class__, fastpath=True)] return self._maybe_downcast(blocks, downcast) def _interpolate(self, method=None, index=None, values=None, fill_value=None, axis=0, limit=None, - limit_direction='forward', - inplace=False, downcast=None, mgr=None, **kwargs): + 
limit_direction='forward', inplace=False, downcast=None, + mgr=None, **kwargs): """ interpolate using scipy wrappers """ data = self.values if inplace else self.values.copy() @@ -953,8 +938,7 @@ def func(x): # interp each column independently interp_values = np.apply_along_axis(func, axis, data) - blocks = [self.make_block(interp_values, - klass=self.__class__, + blocks = [self.make_block(interp_values, klass=self.__class__, fastpath=True)] return self._maybe_downcast(blocks, downcast) @@ -999,8 +983,7 @@ def take_nd(self, indexer, axis, new_mgr_locs=None, fill_tuple=None): def diff(self, n, axis=1, mgr=None): """ return block for the diff of the values """ new_values = com.diff(self.values, n, axis=axis) - return [self.make_block(values=new_values, - fastpath=True)] + return [self.make_block(values=new_values, fastpath=True)] def shift(self, periods, axis=0, mgr=None): """ shift the block by periods, possibly upcast """ @@ -1016,21 +999,21 @@ def shift(self, periods, axis=0, mgr=None): axis = new_values.ndim - axis - 1 if np.prod(new_values.shape): - new_values = np.roll(new_values, com._ensure_platform_int(periods), axis=axis) + new_values = np.roll(new_values, com._ensure_platform_int(periods), + axis=axis) - axis_indexer = [ slice(None) ] * self.ndim + axis_indexer = [slice(None)] * self.ndim if periods > 0: - axis_indexer[axis] = slice(None,periods) + axis_indexer[axis] = slice(None, periods) else: - axis_indexer[axis] = slice(periods,None) + axis_indexer[axis] = slice(periods, None) new_values[tuple(axis_indexer)] = fill_value # restore original order if f_ordered: new_values = new_values.T - return [self.make_block(new_values, - fastpath=True)] + return [self.make_block(new_values, fastpath=True)] def eval(self, func, other, raise_on_error=True, try_cast=False, mgr=None): """ @@ -1072,7 +1055,8 @@ def eval(self, func, other, raise_on_error=True, try_cast=False, mgr=None): transf = (lambda x: x.T) if is_transposed else (lambda x: x) # coerce/transpose the 
args if needed - values, values_mask, other, other_mask = self._try_coerce_args(transf(values), other) + values, values_mask, other, other_mask = self._try_coerce_args( + transf(values), other) # get the result, may need to transpose the other def get_result(other): @@ -1090,13 +1074,13 @@ def get_result(other): # mask if needed if isinstance(values_mask, np.ndarray) and values_mask.any(): - result = result.astype('float64',copy=False) + result = result.astype('float64', copy=False) result[values_mask] = np.nan if other_mask is True: - result = result.astype('float64',copy=False) + result = result.astype('float64', copy=False) result[:] = np.nan elif isinstance(other_mask, np.ndarray) and other_mask.any(): - result = result.astype('float64',copy=False) + result = result.astype('float64', copy=False) result[other_mask.ravel()] = np.nan return self._try_coerce_result(result) @@ -1105,8 +1089,8 @@ def get_result(other): def handle_error(): if raise_on_error: - raise TypeError('Could not operate %s with block values %s' - % (repr(other), str(detail))) + raise TypeError('Could not operate %s with block values %s' % + (repr(other), str(detail))) else: # return the values result = np.empty(values.shape, dtype='O') @@ -1135,8 +1119,8 @@ def handle_error(): raise ValueError('Invalid broadcasting comparison [%s] ' 'with block values' % repr(other)) - raise TypeError('Could not compare [%s] with block values' - % repr(other)) + raise TypeError('Could not compare [%s] with block values' % + repr(other)) # transpose if needed result = transf(result) @@ -1145,8 +1129,7 @@ def handle_error(): if try_cast: result = self._try_cast_result(result) - return [self.make_block(result, - fastpath=True,)] + return [self.make_block(result, fastpath=True, )] def where(self, other, cond, align=True, raise_on_error=True, try_cast=False, axis=0, transpose=False, mgr=None): @@ -1183,10 +1166,11 @@ def where(self, other, cond, align=True, raise_on_error=True, # explictly reshape other instead if 
getattr(other, 'ndim', 0) >= 1: if values.ndim - 1 == other.ndim and axis == 1: - other = other.reshape(tuple(other.shape + (1,))) + other = other.reshape(tuple(other.shape + (1, ))) if not hasattr(cond, 'shape'): - raise ValueError("where must have a condition that is ndarray like") + raise ValueError("where must have a condition that is ndarray " + "like") other = _maybe_convert_string_to_object(other) @@ -1195,11 +1179,11 @@ def func(cond, values, other): if cond.ravel().all(): return values - values, values_mask, other, other_mask = self._try_coerce_args(values, other) + values, values_mask, other, other_mask = self._try_coerce_args( + values, other) try: - return self._try_coerce_result( - expressions.where(cond, values, other, raise_on_error=True) - ) + return self._try_coerce_result(expressions.where( + cond, values, other, raise_on_error=True)) except Exception as detail: if raise_on_error: raise TypeError('Could not operate [%s] with block values ' @@ -1233,15 +1217,16 @@ def func(cond, values, other): result_blocks = [] for m in [mask, ~mask]: if m.any(): - r = self._try_cast_result( - result.take(m.nonzero()[0], axis=axis)) - result_blocks.append(self.make_block(r.T, - placement=self.mgr_locs[m])) + r = self._try_cast_result(result.take(m.nonzero()[0], + axis=axis)) + result_blocks.append( + self.make_block(r.T, placement=self.mgr_locs[m])) return result_blocks def equals(self, other): - if self.dtype != other.dtype or self.shape != other.shape: return False + if self.dtype != other.dtype or self.shape != other.shape: + return False return array_equivalent(self.values, other.values) @@ -1252,8 +1237,7 @@ class NonConsolidatableMixIn(object): _validate_ndim = False _holder = None - def __init__(self, values, placement, - ndim=None, fastpath=False, **kwargs): + def __init__(self, values, placement, ndim=None, fastpath=False, **kwargs): # Placement must be converted to BlockPlacement via property setter # before ndim logic, because placement may be a slice 
which doesn't @@ -1312,11 +1296,11 @@ def get(self, item): else: return self.values - def putmask(self, mask, new, align=True, inplace=False, - axis=0, transpose=False, mgr=None): + def putmask(self, mask, new, align=True, inplace=False, axis=0, + transpose=False, mgr=None): """ - putmask the data to the block; we must be a single block and not generate - other blocks + putmask the data to the block; we must be a single block and not + generate other blocks return the resulting block @@ -1358,7 +1342,8 @@ class FloatOrComplexBlock(NumericBlock): __slots__ = () def equals(self, other): - if self.dtype != other.dtype or self.shape != other.shape: return False + if self.dtype != other.dtype or self.shape != other.shape: + return False left, right = self.values, other.values return ((left == right) | (np.isnan(left) & np.isnan(right))).all() @@ -1372,10 +1357,11 @@ def _can_hold_element(self, element): if is_list_like(element): element = np.array(element) tipo = element.dtype.type - return issubclass(tipo, (np.floating, np.integer)) and not issubclass( - tipo, (np.datetime64, np.timedelta64)) - return isinstance(element, (float, int, np.float_, np.int_)) and not isinstance( - element, (bool, np.bool_, datetime, timedelta, np.datetime64, np.timedelta64)) + return (issubclass(tipo, (np.floating, np.integer)) and + not issubclass(tipo, (np.datetime64, np.timedelta64))) + return (isinstance(element, (float, int, np.float_, np.int_)) and + not isinstance(element, (bool, np.bool_, datetime, timedelta, + np.datetime64, np.timedelta64))) def _try_cast(self, element): try: @@ -1383,8 +1369,8 @@ def _try_cast(self, element): except: # pragma: no cover return element - def to_native_types(self, slicer=None, na_rep='', float_format=None, decimal='.', - quoting=None, **kwargs): + def to_native_types(self, slicer=None, na_rep='', float_format=None, + decimal='.', quoting=None, **kwargs): """ convert to our native types format, slicing if desired """ values = self.values @@ -1411,8 
+1397,10 @@ class ComplexBlock(FloatOrComplexBlock): def _can_hold_element(self, element): if is_list_like(element): element = np.array(element) - return issubclass(element.dtype.type, (np.floating, np.integer, np.complexfloating)) - return (isinstance(element, (float, int, complex, np.float_, np.int_)) and + return issubclass(element.dtype.type, + (np.floating, np.integer, np.complexfloating)) + return (isinstance(element, + (float, int, complex, np.float_, np.int_)) and not isinstance(bool, np.bool_)) def _try_cast(self, element): @@ -1434,7 +1422,8 @@ def _can_hold_element(self, element): if is_list_like(element): element = np.array(element) tipo = element.dtype.type - return issubclass(tipo, np.integer) and not issubclass(tipo, (np.datetime64, np.timedelta64)) + return (issubclass(tipo, np.integer) and + not issubclass(tipo, (np.datetime64, np.timedelta64))) return com.is_integer(element) def _try_cast(self, element): @@ -1462,7 +1451,7 @@ def fillna(self, value, **kwargs): # allow filling with integers to be # interpreted as seconds if not isinstance(value, np.timedelta64) and com.is_integer(value): - value = Timedelta(value,unit='s') + value = Timedelta(value, unit='s') return super(TimeDeltaBlock, self).fillna(value, **kwargs) def _try_coerce_args(self, values, other): @@ -1499,7 +1488,7 @@ def _try_coerce_args(self, values, other): other = Timedelta(other).value elif isinstance(other, np.ndarray): other_mask = isnull(other) - other = other.astype('i8',copy=False).view('i8') + other = other.astype('i8', copy=False).view('i8') else: # scalar other = Timedelta(other) @@ -1526,7 +1515,8 @@ def _try_coerce_result(self, result): def should_store(self, value): return issubclass(value.dtype.type, np.timedelta64) - def to_native_types(self, slicer=None, na_rep=None, quoting=None, **kwargs): + def to_native_types(self, slicer=None, na_rep=None, quoting=None, + **kwargs): """ convert to our native types format, slicing if desired """ values = self.values @@ -1540,7 
+1530,7 @@ def to_native_types(self, slicer=None, na_rep=None, quoting=None, **kwargs): rvalues[mask] = na_rep imask = (~mask).ravel() - #### FIXME #### + # FIXME: # should use the core.format.Timedelta64Formatter here # to figure what format to pass to the Timedelta # e.g. to not show the decimals say @@ -1549,14 +1539,14 @@ def to_native_types(self, slicer=None, na_rep=None, quoting=None, **kwargs): dtype=object) return rvalues - def get_values(self, dtype=None): # return object dtypes as Timedelta if dtype == object: - return lib.map_infer(self.values.ravel(), lib.Timedelta - ).reshape(self.values.shape) + return lib.map_infer(self.values.ravel(), + lib.Timedelta).reshape(self.values.shape) return self.values + class BoolBlock(NumericBlock): __slots__ = () is_bool = True @@ -1592,13 +1582,12 @@ class ObjectBlock(Block): is_object = True _can_hold_na = True - def __init__(self, values, ndim=2, fastpath=False, - placement=None, **kwargs): + def __init__(self, values, ndim=2, fastpath=False, placement=None, + **kwargs): if issubclass(values.dtype.type, compat.string_types): values = np.array(values, dtype=object) - super(ObjectBlock, self).__init__(values, ndim=ndim, - fastpath=fastpath, + super(ObjectBlock, self).__init__(values, ndim=ndim, fastpath=fastpath, placement=placement, **kwargs) @property @@ -1610,17 +1599,16 @@ def is_bool(self): # TODO: Refactor when convert_objects is removed since there will be 1 path def convert(self, *args, **kwargs): - """ attempt to coerce any object types to better types - return a copy of the block (if copy = True) - by definition we ARE an ObjectBlock!!!!! + """ attempt to coerce any object types to better types return a copy of + the block (if copy = True) by definition we ARE an ObjectBlock!!!!! - can return multiple blocks! - """ + can return multiple blocks! 
+ """ if args: raise NotImplementedError by_item = True if 'by_item' not in kwargs else kwargs['by_item'] - new_inputs = ['coerce','datetime','numeric','timedelta'] + new_inputs = ['coerce', 'datetime', 'numeric', 'timedelta'] new_style = False for kw in new_inputs: new_style |= kw in kwargs @@ -1630,7 +1618,8 @@ def convert(self, *args, **kwargs): fn_inputs = new_inputs else: fn = convert._possibly_convert_objects - fn_inputs = ['convert_dates','convert_numeric','convert_timedeltas'] + fn_inputs = ['convert_dates', 'convert_numeric', + 'convert_timedeltas'] fn_inputs += ['copy'] fn_kwargs = {} @@ -1651,8 +1640,10 @@ def convert(self, *args, **kwargs): blocks.append(newb) else: - values = fn(self.values.ravel(), **fn_kwargs).reshape(self.values.shape) - blocks.append(make_block(values, ndim=self.ndim, placement=self.mgr_locs)) + values = fn( + self.values.ravel(), **fn_kwargs).reshape(self.values.shape) + blocks.append(make_block(values, ndim=self.ndim, + placement=self.mgr_locs)) return blocks @@ -1680,18 +1671,18 @@ def set(self, locs, values, check=False): # see GH6171 new_shape = list(values.shape) new_shape[0] = len(self.items) - self.values = np.empty(tuple(new_shape),dtype=self.dtype) + self.values = np.empty(tuple(new_shape), dtype=self.dtype) self.values.fill(np.nan) self.values[locs] = values - def _maybe_downcast(self, blocks, downcast=None): if downcast is not None: return blocks # split and convert the blocks - return _extend_blocks([ b.convert(datetime=True, numeric=False) for b in blocks ]) + return _extend_blocks([b.convert(datetime=True, numeric=False) + for b in blocks]) def _can_hold_element(self, element): return True @@ -1701,8 +1692,9 @@ def _try_cast(self, element): def should_store(self, value): return not (issubclass(value.dtype.type, - (np.integer, np.floating, np.complexfloating, - np.datetime64, np.bool_)) or is_internal_type(value)) + (np.integer, np.floating, np.complexfloating, + np.datetime64, np.bool_)) or + is_internal_type(value)) 
def replace(self, to_replace, value, inplace=False, filter=None, regex=False, convert=True, mgr=None): @@ -1715,9 +1707,9 @@ def replace(self, to_replace, value, inplace=False, filter=None, blocks = [self] if not either_list and com.is_re(to_replace): - return self._replace_single(to_replace, value, - inplace=inplace, filter=filter, - regex=True, convert=convert, mgr=mgr) + return self._replace_single(to_replace, value, inplace=inplace, + filter=filter, regex=True, + convert=convert, mgr=mgr) elif not (either_list or regex): return super(ObjectBlock, self).replace(to_replace, value, inplace=inplace, @@ -1738,17 +1730,16 @@ def replace(self, to_replace, value, inplace=False, filter=None, for to_rep in to_replace: result_blocks = [] for b in blocks: - result = b._replace_single(to_rep, value, - inplace=inplace, + result = b._replace_single(to_rep, value, inplace=inplace, filter=filter, regex=regex, convert=convert, mgr=mgr) result_blocks = _extend_blocks(result, result_blocks) blocks = result_blocks return result_blocks - return self._replace_single(to_replace, value, - inplace=inplace, filter=filter, - convert=convert, regex=regex, mgr=mgr) + return self._replace_single(to_replace, value, inplace=inplace, + filter=filter, convert=convert, + regex=regex, mgr=mgr) def _replace_single(self, to_replace, value, inplace=False, filter=None, regex=False, convert=True, mgr=None): @@ -1785,8 +1776,7 @@ def _replace_single(self, to_replace, value, inplace=False, filter=None, # the superclass method -> to_replace is some kind of object return super(ObjectBlock, self).replace(to_replace, value, inplace=inplace, - filter=filter, - regex=regex, + filter=filter, regex=regex, mgr=mgr) new_values = self.values if inplace else self.values.copy() @@ -1794,6 +1784,7 @@ def _replace_single(self, to_replace, value, inplace=False, filter=None, # deal with replacing values with objects (strings) that match but # whose replacement is not a string (numeric, nan, object) if isnull(value) or 
not isinstance(value, compat.string_types): + def re_replacer(s): try: return value if rx.search(s) is not None else s @@ -1820,10 +1811,11 @@ def re_replacer(s): # convert block = self.make_block(new_values) if convert: - block = block.convert(by_item=True,numeric=False) + block = block.convert(by_item=True, numeric=False) return block + class CategoricalBlock(NonConsolidatableMixIn, ObjectBlock): __slots__ = () is_categorical = True @@ -1831,13 +1823,12 @@ class CategoricalBlock(NonConsolidatableMixIn, ObjectBlock): _can_hold_na = True _holder = Categorical - def __init__(self, values, placement, - fastpath=False, **kwargs): + def __init__(self, values, placement, fastpath=False, **kwargs): # coerce to categorical if we can super(CategoricalBlock, self).__init__(maybe_to_categorical(values), - fastpath=True, placement=placement, - **kwargs) + fastpath=True, + placement=placement, **kwargs) @property def is_view(self): @@ -1852,7 +1843,9 @@ def convert(self, copy=True, **kwargs): @property def array_dtype(self): - """ the dtype to return if I want to construct this block as an array """ + """ the dtype to return if I want to construct this block as an + array + """ return np.object_ def _slice(self, slicer): @@ -1862,7 +1855,8 @@ def _slice(self, slicer): # return same dims as we currently have return self.values._slice(slicer) - def fillna(self, value, limit=None, inplace=False, downcast=None, mgr=None): + def fillna(self, value, limit=None, inplace=False, downcast=None, + mgr=None): # we may need to upcast our fill to match our dtype if limit is not None: raise NotImplementedError("specifying a limit for 'fillna' has " @@ -1873,14 +1867,14 @@ def fillna(self, value, limit=None, inplace=False, downcast=None, mgr=None): limit=limit)) return [self.make_block(values=values)] - def interpolate(self, method='pad', axis=0, inplace=False, - limit=None, fill_value=None, **kwargs): + def interpolate(self, method='pad', axis=0, inplace=False, limit=None, + fill_value=None, 
**kwargs): values = self.values if inplace else self.values.copy() - return self.make_block_same_class(values=values.fillna(fill_value=fill_value, - method=method, - limit=limit), - placement=self.mgr_locs) + return self.make_block_same_class( + values=values.fillna(fill_value=fill_value, method=method, + limit=limit), + placement=self.mgr_locs) def shift(self, periods, axis=0, mgr=None): return self.make_block_same_class(values=self.values.shift(periods), @@ -1889,7 +1883,6 @@ def shift(self, periods, axis=0, mgr=None): def take_nd(self, indexer, axis=0, new_mgr_locs=None, fill_tuple=None): """ Take values according to indexer and return them as a block.bb - """ if fill_tuple is None: fill_value = None @@ -1939,21 +1932,20 @@ def to_native_types(self, slicer=None, na_rep='', quoting=None, **kwargs): values[mask] = na_rep # we are expected to return a 2-d ndarray - return values.reshape(1,len(values)) + return values.reshape(1, len(values)) + class DatetimeBlock(Block): __slots__ = () is_datetime = True _can_hold_na = True - def __init__(self, values, placement, - fastpath=False, **kwargs): + def __init__(self, values, placement, fastpath=False, **kwargs): if values.dtype != _NS_DTYPE: values = tslib.cast_to_nanoseconds(values) - super(DatetimeBlock, self).__init__(values, - fastpath=True, placement=placement, - **kwargs) + super(DatetimeBlock, self).__init__(values, fastpath=True, + placement=placement, **kwargs) def _astype(self, dtype, mgr=None, **kwargs): """ @@ -1966,7 +1958,7 @@ def _astype(self, dtype, mgr=None, **kwargs): dtype = DatetimeTZDtype(dtype) values = self.values - if getattr(values,'tz',None) is None: + if getattr(values, 'tz', None) is None: values = DatetimeIndex(values).tz_localize('UTC') values = values.tz_convert(dtype.tz) return self.make_block(values) @@ -1974,13 +1966,11 @@ def _astype(self, dtype, mgr=None, **kwargs): # delegate return super(DatetimeBlock, self)._astype(dtype=dtype, **kwargs) - def _can_hold_element(self, element): if 
is_list_like(element): element = np.array(element) return element.dtype == _NS_DTYPE or element.dtype == np.int64 - return (com.is_integer(element) or - isinstance(element, datetime) or + return (com.is_integer(element) or isinstance(element, datetime) or isnull(element)) def _try_cast(self, element): @@ -2021,8 +2011,9 @@ def _try_coerce_args(self, values, other): other_mask = True elif isinstance(other, (datetime, np.datetime64, date)): other = lib.Timestamp(other) - if getattr(other,'tz') is not None: - raise TypeError("cannot coerce a Timestamp with a tz on a naive Block") + if getattr(other, 'tz') is not None: + raise TypeError("cannot coerce a Timestamp with a tz on a " + "naive Block") other_mask = isnull(other) other = other.asm8.view('i8') elif hasattr(other, 'dtype') and com.is_integer_dtype(other): @@ -2032,7 +2023,7 @@ def _try_coerce_args(self, values, other): other = np.asarray(other) other_mask = isnull(other) - other = other.astype('i8',copy=False).view('i8') + other = other.astype('i8', copy=False).view('i8') except ValueError: # coercion issues @@ -2065,14 +2056,14 @@ def to_native_types(self, slicer=None, na_rep=None, date_format=None, from pandas.core.format import _get_format_datetime64_from_values format = _get_format_datetime64_from_values(values, date_format) - result = tslib.format_array_from_datetime(values.view('i8').ravel(), - tz=getattr(self.values,'tz',None), - format=format, - na_rep=na_rep).reshape(values.shape) + result = tslib.format_array_from_datetime( + values.view('i8').ravel(), tz=getattr(self.values, 'tz', None), + format=format, na_rep=na_rep).reshape(values.shape) return np.atleast_2d(result) def should_store(self, value): - return issubclass(value.dtype.type, np.datetime64) and not is_datetimetz(value) + return (issubclass(value.dtype.type, np.datetime64) and + not is_datetimetz(value)) def set(self, locs, values, check=False): """ @@ -2091,28 +2082,27 @@ def set(self, locs, values, check=False): def get_values(self, 
dtype=None): # return object dtype as Timestamps if dtype == object: - return lib.map_infer(self.values.ravel(), lib.Timestamp)\ - .reshape(self.values.shape) + return lib.map_infer( + self.values.ravel(), lib.Timestamp).reshape(self.values.shape) return self.values + class DatetimeTZBlock(NonConsolidatableMixIn, DatetimeBlock): """ implement a datetime64 block with a tz attribute """ __slots__ = () _holder = DatetimeIndex is_datetimetz = True - def __init__(self, values, placement, ndim=2, - **kwargs): + def __init__(self, values, placement, ndim=2, **kwargs): if not isinstance(values, self._holder): values = self._holder(values) if values.tz is None: raise ValueError("cannot create a DatetimeTZBlock without a tz") - super(DatetimeTZBlock, self).__init__(values, - placement=placement, - ndim=ndim, - **kwargs) + super(DatetimeTZBlock, self).__init__(values, placement=placement, + ndim=ndim, **kwargs) + def copy(self, deep=True, mgr=None): """ copy constructor """ values = self.values @@ -2121,15 +2111,17 @@ def copy(self, deep=True, mgr=None): return self.make_block_same_class(values) def external_values(self): - """ we internally represent the data as a DatetimeIndex, but for external - compat with ndarray, export as a ndarray of Timestamps """ + """ we internally represent the data as a DatetimeIndex, but for + external compat with ndarray, export as a ndarray of Timestamps + """ return self.values.astype('datetime64[ns]').values def get_values(self, dtype=None): # return object dtype as Timestamps with the zones if dtype == object: - return lib.map_infer(self.values.ravel(), lambda x: lib.Timestamp(x,tz=self.values.tz))\ - .reshape(self.values.shape) + f = lambda x: lib.Timestamp(x, tz=self.values.tz) + return lib.map_infer( + self.values.ravel(), f).reshape(self.values.shape) return self.values def to_object_block(self, mgr): @@ -2142,9 +2134,9 @@ def to_object_block(self, mgr): values = self.get_values(dtype=object) kwargs = {} if mgr.ndim > 1: - values = 
_block_shape(values,ndim=mgr.ndim) + values = _block_shape(values, ndim=mgr.ndim) kwargs['ndim'] = mgr.ndim - kwargs['placement']=[0] + kwargs['placement'] = [0] return self.make_block(values, klass=ObjectBlock, **kwargs) def replace(self, *args, **kwargs): @@ -2216,7 +2208,8 @@ def _try_coerce_result(self, result): def shift(self, periods, axis=0, mgr=None): """ shift the block by periods """ - ### think about moving this to the DatetimeIndex. This is a non-freq (number of periods) shift ### + # think about moving this to the DatetimeIndex. This is a non-freq + # (number of periods) shift ### N = len(self) indexer = np.zeros(N, dtype=int) @@ -2233,8 +2226,10 @@ def shift(self, periods, axis=0, mgr=None): else: new_values[periods:] = tslib.iNaT - new_values = DatetimeIndex(new_values,tz=self.values.tz) - return [self.make_block_same_class(new_values, placement=self.mgr_locs)] + new_values = DatetimeIndex(new_values, tz=self.values.tz) + return [self.make_block_same_class(new_values, + placement=self.mgr_locs)] + class SparseBlock(NonConsolidatableMixIn, Block): """ implement as a list of sparse arrays of the same dtype """ @@ -2256,7 +2251,7 @@ def itemsize(self): @property def fill_value(self): - #return np.nan + # return np.nan return self.values.fill_value @fill_value.setter @@ -2301,9 +2296,9 @@ def copy(self, deep=True, mgr=None): kind=self.kind, copy=deep, placement=self.mgr_locs) - def make_block_same_class(self, values, placement, - sparse_index=None, kind=None, dtype=None, - fill_value=None, copy=False, fastpath=True, **kwargs): + def make_block_same_class(self, values, placement, sparse_index=None, + kind=None, dtype=None, fill_value=None, + copy=False, fastpath=True, **kwargs): """ return a new block """ if dtype is None: dtype = self.dtype @@ -2332,19 +2327,19 @@ def make_block_same_class(self, values, placement, new_values = SparseArray(values, sparse_index=sparse_index, kind=kind or self.kind, dtype=dtype, fill_value=fill_value, copy=copy) - return 
self.make_block(new_values, - fastpath=fastpath, + return self.make_block(new_values, fastpath=fastpath, placement=placement) - def interpolate(self, method='pad', axis=0, inplace=False, - limit=None, fill_value=None, **kwargs): + def interpolate(self, method='pad', axis=0, inplace=False, limit=None, + fill_value=None, **kwargs): - values = mis.interpolate_2d( - self.values.to_dense(), method, axis, limit, fill_value) + values = mis.interpolate_2d(self.values.to_dense(), method, axis, + limit, fill_value) return self.make_block_same_class(values=values, placement=self.mgr_locs) - def fillna(self, value, limit=None, inplace=False, downcast=None, mgr=None): + def fillna(self, value, limit=None, inplace=False, downcast=None, + mgr=None): # we may need to upcast our fill to match our dtype if limit is not None: raise NotImplementedError("specifying a limit for 'fillna' has " @@ -2372,7 +2367,8 @@ def shift(self, periods, axis=0, mgr=None): new_values[:periods] = fill_value else: new_values[periods:] = fill_value - return [self.make_block_same_class(new_values, placement=self.mgr_locs)] + return [self.make_block_same_class(new_values, + placement=self.mgr_locs)] def reindex_axis(self, indexer, method=None, axis=1, fill_value=None, limit=None, mask_info=None): @@ -2396,11 +2392,11 @@ def sparse_reindex(self, new_index): values = values.sp_index.to_int_index().reindex( values.sp_values.astype('float64'), values.fill_value, new_index) return self.make_block_same_class(values, sparse_index=new_index, - placement=self.mgr_locs) + placement=self.mgr_locs) -def make_block(values, placement, klass=None, ndim=None, - dtype=None, fastpath=False): +def make_block(values, placement, klass=None, ndim=None, dtype=None, + fastpath=False): if klass is None: dtype = dtype or values.dtype vtype = dtype.type @@ -2410,15 +2406,15 @@ def make_block(values, placement, klass=None, ndim=None, elif issubclass(vtype, np.floating): klass = FloatBlock elif (issubclass(vtype, np.integer) and - 
issubclass(vtype, np.timedelta64)): + issubclass(vtype, np.timedelta64)): klass = TimeDeltaBlock elif (issubclass(vtype, np.integer) and - not issubclass(vtype, np.datetime64)): + not issubclass(vtype, np.datetime64)): klass = IntBlock elif dtype == np.bool_: klass = BoolBlock elif issubclass(vtype, np.datetime64): - if hasattr(values,'tz'): + if hasattr(values, 'tz'): klass = DatetimeTZBlock else: klass = DatetimeBlock @@ -2431,15 +2427,12 @@ def make_block(values, placement, klass=None, ndim=None, else: klass = ObjectBlock - return klass(values, ndim=ndim, fastpath=fastpath, - placement=placement) - + return klass(values, ndim=ndim, fastpath=fastpath, placement=placement) # TODO: flexible with index=None and/or items=None class BlockManager(PandasObject): - """ Core internal data structure to implement DataFrame, Series, Panel, etc. @@ -2501,12 +2494,13 @@ def __init__(self, blocks, axes, do_integrity_check=True, fastpath=True): for block in blocks: if block.is_sparse: if len(block.mgr_locs) != 1: - raise AssertionError("Sparse block refers to multiple items") + raise AssertionError("Sparse block refers to multiple " + "items") else: if self.ndim != block.ndim: - raise AssertionError(('Number of Block dimensions (%d) must ' - 'equal number of axes (%d)') - % (block.ndim, self.ndim)) + raise AssertionError('Number of Block dimensions (%d) ' + 'must equal number of axes (%d)' % + (block.ndim, self.ndim)) if do_integrity_check: self._verify_integrity() @@ -2518,9 +2512,8 @@ def __init__(self, blocks, axes, do_integrity_check=True, fastpath=True): def make_empty(self, axes=None): """ return an empty BlockManager with the items axis of len 0 """ if axes is None: - axes = [_ensure_index([])] + [ - _ensure_index(a) for a in self.axes[1:] - ] + axes = [_ensure_index([])] + [_ensure_index(a) + for a in self.axes[1:]] # preserve dtype if possible if self.ndim == 1: @@ -2550,7 +2543,8 @@ def set_axis(self, axis, new_labels): if new_len != old_len: raise ValueError('Length 
mismatch: Expected axis has %d elements, ' - 'new values have %d elements' % (old_len, new_len)) + 'new values have %d elements' % + (old_len, new_len)) self.axes[axis] = new_labels @@ -2612,6 +2606,7 @@ def _rebuild_blknos_and_blklocs(self): # make items read only for now def _get_items(self): return self.axes[0] + items = property(fget=_get_items) def _get_counts(self, f): @@ -2645,8 +2640,7 @@ def __getstate__(self): extra_state = { '0.14.1': { 'axes': axes_array, - 'blocks': [dict(values=b.values, - mgr_locs=b.mgr_locs.indexer) + 'blocks': [dict(values=b.values, mgr_locs=b.mgr_locs.indexer) for b in self.blocks] } } @@ -2662,13 +2656,12 @@ def unpickle_block(values, mgr_locs): values = values.astype('M8[ns]') return make_block(values, placement=mgr_locs) - if (isinstance(state, tuple) and len(state) >= 4 - and '0.14.1' in state[3]): + if (isinstance(state, tuple) and len(state) >= 4 and + '0.14.1' in state[3]): state = state[3]['0.14.1'] self.axes = [_ensure_index(ax) for ax in state['axes']] - self.blocks = tuple( - unpickle_block(b['values'], b['mgr_locs']) - for b in state['blocks']) + self.blocks = tuple(unpickle_block(b['values'], b['mgr_locs']) + for b in state['blocks']) else: # discard anything after 3rd, support beta pickling format for a # little while longer @@ -2724,10 +2717,11 @@ def _verify_integrity(self): if len(self.items) != tot_items: raise AssertionError('Number of manager items must equal union of ' 'block items\n# manager items: {0}, # ' - 'tot_items: {1}'.format(len(self.items), - tot_items)) + 'tot_items: {1}'.format( + len(self.items), tot_items)) - def apply(self, f, axes=None, filter=None, do_integrity_check=False, consolidate=True, **kwargs): + def apply(self, f, axes=None, filter=None, do_integrity_check=False, + consolidate=True, **kwargs): """ iterate over the blocks, collect and create a new block manager @@ -2737,8 +2731,10 @@ def apply(self, f, axes=None, filter=None, do_integrity_check=False, consolidate axes : optional (if 
not supplied, use self.axes) filter : list, if supplied, only call the block if the filter is in the block - do_integrity_check : boolean, default False. Do the block manager integrity check - consolidate: boolean, default True. Join together blocks having same dtype + do_integrity_check : boolean, default False. Do the block manager + integrity check + consolidate: boolean, default True. Join together blocks having same + dtype Returns ------- @@ -2783,7 +2779,8 @@ def apply(self, f, axes=None, filter=None, do_integrity_check=False, consolidate else: align_keys = [] - aligned_args = dict((k, kwargs[k]) for k in align_keys + aligned_args = dict((k, kwargs[k]) + for k in align_keys if hasattr(kwargs[k], 'reindex_axis')) for b in self.blocks: @@ -2850,7 +2847,8 @@ def convert(self, **kwargs): def replace(self, **kwargs): return self.apply('replace', **kwargs) - def replace_list(self, src_list, dest_list, inplace=False, regex=False, mgr=None): + def replace_list(self, src_list, dest_list, inplace=False, regex=False, + mgr=None): """ do a list replace """ if mgr is None: @@ -2864,6 +2862,7 @@ def comp(s): return isnull(values) return _possibly_compare(values, getattr(s, 'asm8', s), operator.eq) + masks = [comp(s) for i, s in enumerate(src_list)] result_blocks = [] @@ -2876,8 +2875,8 @@ def comp(s): new_rb = [] for b in rb: if b.dtype == np.object_: - result = b.replace(s, d, inplace=inplace, - regex=regex, mgr=mgr) + result = b.replace(s, d, inplace=inplace, regex=regex, + mgr=mgr) new_rb = _extend_blocks(result, new_rb) else: # get our mask for this element, sized to this @@ -2970,7 +2969,8 @@ def combine(self, blocks, copy=True): return self.make_empty() # FIXME: optimization potential - indexer = np.sort(np.concatenate([b.mgr_locs.as_array for b in blocks])) + indexer = np.sort(np.concatenate([b.mgr_locs.as_array + for b in blocks])) inv_indexer = lib.get_reverse_indexer(indexer, self.shape[0]) new_items = self.items.take(indexer) @@ -3033,7 +3033,7 @@ def 
copy(self, deep=True, mgr=None): copy = lambda ax: ax.copy(deep=True) else: copy = lambda ax: ax.view() - new_axes = [ copy(ax) for ax in self.axes] + new_axes = [copy(ax) for ax in self.axes] else: new_axes = list(self.axes) return self.apply('copy', axes=new_axes, deep=deep, @@ -3088,8 +3088,8 @@ def _interleave(self): def xs(self, key, axis=1, copy=True, takeable=False): if axis < 1: - raise AssertionError('Can only take xs across axis >= 1, got %d' - % axis) + raise AssertionError('Can only take xs across axis >= 1, got %d' % + axis) # take by position if takeable: @@ -3122,8 +3122,10 @@ def xs(self, key, axis=1, copy=True, takeable=False): vals = block.values[slicer] if copy: vals = vals.copy() - new_blocks = [make_block(values=vals, placement=block.mgr_locs, - klass=block.__class__, fastpath=True,)] + new_blocks = [make_block(values=vals, + placement=block.mgr_locs, + klass=block.__class__, + fastpath=True, )] return self.__class__(new_blocks, new_axes) @@ -3208,14 +3210,14 @@ def get(self, item, fastpath=True): indexer = self.items.get_indexer_for([item]) return self.reindex_indexer(new_axis=self.items[indexer], - indexer=indexer, axis=0, allow_dups=True) + indexer=indexer, axis=0, + allow_dups=True) def iget(self, i, fastpath=True): """ Return the data as a SingleBlockManager if fastpath=True and possible Otherwise return as a ndarray - """ block = self.blocks[self._blknos[i]] values = block.iget(self._blklocs[i]) @@ -3223,19 +3225,17 @@ def iget(self, i, fastpath=True): return values # fastpath shortcut for select a single-dim from a 2-dim BM - return SingleBlockManager([ block.make_block_same_class(values, - placement=slice(0, len(values)), - ndim=1, - fastpath=True) ], - self.axes[1]) - + return SingleBlockManager( + [block.make_block_same_class(values, + placement=slice(0, len(values)), + ndim=1, fastpath=True)], + self.axes[1]) def get_scalar(self, tup): """ Retrieve single item """ - full_loc = list(ax.get_loc(x) - for ax, x in zip(self.axes, tup)) + 
full_loc = list(ax.get_loc(x) for ax, x in zip(self.axes, tup)) blk = self.blocks[self._blknos[full_loc[0]]] values = blk.values @@ -3297,6 +3297,7 @@ def set(self, item, value, check=False): # categorical/spares/datetimetz if value_is_internal_type: + def value_getitem(placement): return value else: @@ -3306,8 +3307,10 @@ def value_getitem(placement): def value_getitem(placement): return value else: + def value_getitem(placement): return value[placement.indexer] + if value.shape[1:] != self.shape[1:]: raise AssertionError('Shape of new values must be compatible ' 'with manager shape') @@ -3416,9 +3419,8 @@ def insert(self, loc, item, value, allow_duplicates=False): # insert to the axis; this could possibly raise a TypeError new_axis = self.items.insert(loc, item) - block = make_block(values=value, - ndim=self.ndim, - placement=slice(loc, loc+1)) + block = make_block(values=value, ndim=self.ndim, + placement=slice(loc, loc + 1)) for blkno, count in _fast_count_smallints(self._blknos[loc:]): blk = self.blocks[blkno] @@ -3453,8 +3455,8 @@ def reindex_axis(self, new_index, axis, method=None, limit=None, Conform block manager to new index. 
""" new_index = _ensure_index(new_index) - new_index, indexer = self.axes[axis].reindex( - new_index, method=method, limit=limit) + new_index, indexer = self.axes[axis].reindex(new_index, method=method, + limit=limit) return self.reindex_indexer(new_index, indexer, axis=axis, fill_value=fill_value, copy=copy) @@ -3491,13 +3493,12 @@ def reindex_indexer(self, new_axis, indexer, axis, fill_value=None, raise IndexError("Requested axis not found in manager") if axis == 0: - new_blocks = self._slice_take_blocks_ax0( - indexer, fill_tuple=(fill_value,)) + new_blocks = self._slice_take_blocks_ax0(indexer, + fill_tuple=(fill_value,)) else: - new_blocks = [blk.take_nd(indexer, axis=axis, - fill_tuple=(fill_value if fill_value is not None else - blk.fill_value,)) - for blk in self.blocks] + new_blocks = [blk.take_nd(indexer, axis=axis, fill_tuple=( + fill_value if fill_value is not None else blk.fill_value,)) + for blk in self.blocks] new_axes = list(self.axes) new_axes[axis] = new_axis @@ -3524,12 +3525,11 @@ def _slice_take_blocks_ax0(self, slice_or_indexer, fill_tuple=None): blk = self.blocks[0] if sl_type in ('slice', 'mask'): - return [blk.getitem_block(slobj, - new_mgr_locs=slice(0, sllen))] + return [blk.getitem_block(slobj, new_mgr_locs=slice(0, sllen))] elif not allow_fill or self.ndim == 1: if allow_fill and fill_tuple[0] is None: _, fill_value = com._maybe_promote(blk.dtype) - fill_tuple = (fill_value,) + fill_tuple = (fill_value, ) return [blk.take_nd(slobj, axis=0, new_mgr_locs=slice(0, sllen), @@ -3556,24 +3556,25 @@ def _slice_take_blocks_ax0(self, slice_or_indexer, fill_tuple=None): # If we've got here, fill_tuple was not None. fill_value = fill_tuple[0] - blocks.append(self._make_na_block( - placement=mgr_locs, fill_value=fill_value)) + blocks.append(self._make_na_block(placement=mgr_locs, + fill_value=fill_value)) else: blk = self.blocks[blkno] # Otherwise, slicing along items axis is necessary. 
if not blk._can_consolidate: - # A non-consolidatable block, it's easy, because there's only one item - # and each mgr loc is a copy of that single item. + # A non-consolidatable block, it's easy, because there's + # only one item and each mgr loc is a copy of that single + # item. for mgr_loc in mgr_locs: newblk = blk.copy(deep=True) newblk.mgr_locs = slice(mgr_loc, mgr_loc + 1) blocks.append(newblk) else: - blocks.append(blk.take_nd( - blklocs[mgr_locs.indexer], axis=0, - new_mgr_locs=mgr_locs, fill_tuple=None)) + blocks.append(blk.take_nd(blklocs[mgr_locs.indexer], + axis=0, new_mgr_locs=mgr_locs, + fill_tuple=None)) return blocks @@ -3595,9 +3596,10 @@ def take(self, indexer, axis=1, verify=True, convert=True): Take items along any axis. """ self._consolidate_inplace() - indexer = np.arange(indexer.start, indexer.stop, indexer.step, - dtype='int64') if isinstance(indexer, slice) \ - else np.asanyarray(indexer, dtype='int64') + indexer = (np.arange(indexer.start, indexer.stop, indexer.step, + dtype='int64') + if isinstance(indexer, slice) + else np.asanyarray(indexer, dtype='int64')) n = self.shape[axis] if convert: @@ -3620,8 +3622,7 @@ def merge(self, other, lsuffix='', rsuffix=''): right=other.items, rsuffix=rsuffix) new_items = _concat_indexes([l, r]) - new_blocks = [blk.copy(deep=False) - for blk in self.blocks] + new_blocks = [blk.copy(deep=False) for blk in self.blocks] offset = self.shape[0] for blk in other.blocks: @@ -3639,8 +3640,8 @@ def _is_indexed_like(self, other): Check all axes except items """ if self.ndim != other.ndim: - raise AssertionError(('Number of dimensions must agree ' - 'got %d and %d') % (self.ndim, other.ndim)) + raise AssertionError('Number of dimensions must agree ' + 'got %d and %d' % (self.ndim, other.ndim)) for ax, oax in zip(self.axes[1:], other.axes[1:]): if not ax.equals(oax): return False @@ -3650,7 +3651,7 @@ def equals(self, other): self_axes, other_axes = self.axes, other.axes if len(self_axes) != len(other_axes): 
return False - if not all (ax1.equals(ax2) for ax1, ax2 in zip(self_axes, other_axes)): + if not all(ax1.equals(ax2) for ax1, ax2 in zip(self_axes, other_axes)): return False self._consolidate_inplace() other._consolidate_inplace() @@ -3666,8 +3667,8 @@ def canonicalize(block): self_blocks = sorted(self.blocks, key=canonicalize) other_blocks = sorted(other.blocks, key=canonicalize) - return all(block.equals(oblock) for block, oblock in - zip(self_blocks, other_blocks)) + return all(block.equals(oblock) + for block, oblock in zip(self_blocks, other_blocks)) class SingleBlockManager(BlockManager): @@ -3682,8 +3683,8 @@ def __init__(self, block, axis, do_integrity_check=False, fastpath=False): if isinstance(axis, list): if len(axis) != 1: - raise ValueError( - "cannot create SingleBlockManager with more than 1 axis") + raise ValueError("cannot create SingleBlockManager with more " + "than 1 axis") axis = axis[0] # passed from constructor, single block, single axis @@ -3716,9 +3717,8 @@ def __init__(self, block, axis, do_integrity_check=False, fastpath=False): block = block[0] if not isinstance(block, Block): - block = make_block(block, - placement=slice(0, len(axis)), - ndim=1, fastpath=True) + block = make_block(block, placement=slice(0, len(axis)), ndim=1, + fastpath=True) self.blocks = [block] @@ -3754,8 +3754,7 @@ def reindex(self, new_axis, indexer=None, method=None, fill_value=None, else: fill_value = np.nan - new_values = com.take_1d(values, indexer, - fill_value=fill_value) + new_values = com.take_1d(values, indexer, fill_value=fill_value) # fill if needed if method is not None or limit is not None: @@ -3820,7 +3819,7 @@ def internal_values(self): def get_values(self): """ return a dense type view """ - return np.array(self._block.to_dense(),copy=False) + return np.array(self._block.to_dense(), copy=False) @property def itemsize(self): @@ -3864,7 +3863,7 @@ def construction_error(tot_items, block_shape, axes, e=None): if passed == implied and e is not None: 
raise e raise ValueError("Shape of passed values is {0}, indices imply {1}".format( - passed,implied)) + passed, implied)) def create_block_manager_from_blocks(blocks, axes): @@ -3874,8 +3873,8 @@ def create_block_manager_from_blocks(blocks, axes): if not len(blocks[0]): blocks = [] else: - # It's OK if a single block is passed as values, its placement is - # basically "all items", but if there're many, don't bother + # It's OK if a single block is passed as values, its placement + # is basically "all items", but if there're many, don't bother # converting, it's an error anyway. blocks = [make_block(values=blocks[0], placement=slice(0, len(axes[0])))] @@ -3897,7 +3896,7 @@ def create_block_manager_from_arrays(arrays, names, axes): mgr = BlockManager(blocks, axes) mgr._consolidate_inplace() return mgr - except (ValueError) as e: + except ValueError as e: construction_error(len(arrays), arrays[0].shape, axes, e) @@ -3949,7 +3948,7 @@ def form_blocks(arrays, names, axes): elif issubclass(v.dtype.type, np.integer): if v.dtype == np.uint64: # HACK #2355 definite overflow - if (v > 2 ** 63 - 1).any(): + if (v > 2**63 - 1).any(): object_items.append((i, k, v)) continue int_items.append((i, k, v)) @@ -3974,26 +3973,23 @@ def form_blocks(arrays, names, axes): blocks.extend(int_blocks) if len(datetime_items): - datetime_blocks = _simple_blockify( - datetime_items, _NS_DTYPE) + datetime_blocks = _simple_blockify(datetime_items, _NS_DTYPE) blocks.extend(datetime_blocks) if len(datetime_tz_items): - dttz_blocks = [ make_block(array, - klass=DatetimeTZBlock, - fastpath=True, - placement=[i], - ) for i, names, array in datetime_tz_items ] + dttz_blocks = [make_block(array, + klass=DatetimeTZBlock, + fastpath=True, + placement=[i], ) + for i, names, array in datetime_tz_items] blocks.extend(dttz_blocks) if len(bool_items): - bool_blocks = _simple_blockify( - bool_items, np.bool_) + bool_blocks = _simple_blockify(bool_items, np.bool_) blocks.extend(bool_blocks) if len(object_items) 
> 0: - object_blocks = _simple_blockify( - object_items, np.object_) + object_blocks = _simple_blockify(object_items, np.object_) blocks.extend(object_blocks) if len(sparse_items) > 0: @@ -4001,11 +3997,9 @@ def form_blocks(arrays, names, axes): blocks.extend(sparse_blocks) if len(cat_items) > 0: - cat_blocks = [ make_block(array, - klass=CategoricalBlock, - fastpath=True, - placement=[i] - ) for i, names, array in cat_items ] + cat_blocks = [make_block(array, klass=CategoricalBlock, fastpath=True, + placement=[i]) + for i, names, array in cat_items] blocks.extend(cat_blocks) if len(extra_locs): @@ -4044,8 +4038,7 @@ def _multi_blockify(tuples, dtype=None): new_blocks = [] for dtype, tup_block in grouper: - values, placement = _stack_arrays( - list(tup_block), dtype) + values, placement = _stack_arrays(list(tup_block), dtype) block = make_block(values, placement=placement) new_blocks.append(block) @@ -4061,9 +4054,8 @@ def _sparse_blockify(tuples, dtype=None): new_blocks = [] for i, names, array in tuples: array = _maybe_to_sparse(array) - block = make_block( - array, klass=SparseBlock, fastpath=True, - placement=[i]) + block = make_block(array, klass=SparseBlock, fastpath=True, + placement=[i]) new_blocks.append(block) return new_blocks @@ -4121,18 +4113,17 @@ def _lcd_dtype(l): have_dt64_tz = len(counts[DatetimeTZBlock]) > 0 have_td64 = len(counts[TimeDeltaBlock]) > 0 have_cat = len(counts[CategoricalBlock]) > 0 - have_sparse = len(counts[SparseBlock]) > 0 + # TODO: have_sparse is not used + have_sparse = len(counts[SparseBlock]) > 0 # noqa have_numeric = have_float or have_complex or have_int has_non_numeric = have_dt64 or have_dt64_tz or have_td64 or have_cat if (have_object or - (have_bool and (have_numeric or have_dt64 or have_dt64_tz or have_td64)) or - (have_numeric and has_non_numeric) or - have_cat or - have_dt64 or - have_dt64_tz or - have_td64): + (have_bool and + (have_numeric or have_dt64 or have_dt64_tz or have_td64)) or + (have_numeric and 
has_non_numeric) or have_cat or have_dt64 or + have_dt64_tz or have_td64): return np.dtype(object) elif have_bool: return np.dtype(bool) @@ -4197,8 +4188,7 @@ def _merge_blocks(blocks, dtype=None, _can_consolidate=True): new_values = new_values[argsort] new_mgr_locs = new_mgr_locs[argsort] - return make_block(new_values, - fastpath=True, placement=new_mgr_locs) + return make_block(new_values, fastpath=True, placement=new_mgr_locs) # no merge return blocks @@ -4220,12 +4210,13 @@ def _extend_blocks(result, blocks=None): blocks.append(result) return blocks + def _block_shape(values, ndim=1, shape=None): """ guarantee the shape of the values to be at least 1 d """ if values.ndim <= ndim: if shape is None: shape = values.shape - values = values.reshape(tuple((1,) + shape)) + values = values.reshape(tuple((1, ) + shape)) return values @@ -4310,6 +4301,7 @@ def _factor_indexer(shape, labels): return com._ensure_platform_int( np.sum(np.array(labels).T * np.append(mult, [1]), axis=1).T) + def _get_blkno_placements(blknos, blk_count, group=True): """ @@ -4391,7 +4383,7 @@ def _putmask_smart(v, m, n): # n should be the length of the mask or a scalar here if not is_list_like(n): n = np.array([n] * len(m)) - elif isinstance(n, np.ndarray) and n.ndim == 0: # numpy scalar + elif isinstance(n, np.ndarray) and n.ndim == 0: # numpy scalar n = np.repeat(np.array(n, ndmin=1), len(m)) # see if we are only masking values that if putted @@ -4434,9 +4426,9 @@ def concatenate_block_managers(mgrs_indexers, axes, concat_axis, copy): copy : bool """ - concat_plan = combine_concat_plans([get_mgr_concatenation_plan(mgr, indexers) - for mgr, indexers in mgrs_indexers], - concat_axis) + concat_plan = combine_concat_plans( + [get_mgr_concatenation_plan(mgr, indexers) + for mgr, indexers in mgrs_indexers], concat_axis) blocks = [make_block(concatenate_join_units(join_units, concat_axis, copy=copy), @@ -4550,6 +4542,7 @@ def concatenate_join_units(join_units, concat_axis, copy): return 
concat_values + def get_mgr_concatenation_plan(mgr, indexers): """ Construct concatenation plan for given block manager and indexers. @@ -4603,17 +4596,18 @@ def get_mgr_concatenation_plan(mgr, indexers): blk = mgr.blocks[blkno] ax0_blk_indexer = blklocs[placements.indexer] - unit_no_ax0_reindexing = ( - len(placements) == len(blk.mgr_locs) and - # Fastpath detection of join unit not needing to reindex its - # block: no ax0 reindexing took place and block placement was - # sequential before. - ((ax0_indexer is None - and blk.mgr_locs.is_slice_like - and blk.mgr_locs.as_slice.step == 1) or - # Slow-ish detection: all indexer locs are sequential (and - # length match is checked above). - (np.diff(ax0_blk_indexer) == 1).all())) + unit_no_ax0_reindexing = (len(placements) == len(blk.mgr_locs) and + # Fastpath detection of join unit not + # needing to reindex its block: no ax0 + # reindexing took place and block + # placement was sequential before. + ((ax0_indexer is None and + blk.mgr_locs.is_slice_like and + blk.mgr_locs.as_slice.step == 1) or + # Slow-ish detection: all indexer locs + # are sequential (and length match is + # checked above). + (np.diff(ax0_blk_indexer) == 1).all())) # Omit indexer if no item reindexing is required. if unit_no_ax0_reindexing: @@ -4652,6 +4646,7 @@ def combine_concat_plans(plans, concat_axis): else: num_ended = [0] + def _next_or_none(seq): retval = next(seq, None) if retval is None: @@ -4723,14 +4718,14 @@ class JoinUnit(object): def __init__(self, block, shape, indexers=None): # Passing shape explicitly is required for cases when block is None. 
if indexers is None: - indexers = {} + indexers = {} self.block = block self.indexers = indexers self.shape = shape def __repr__(self): - return '%s(%r, %s)' % (self.__class__.__name__, - self.block, self.indexers) + return '%s(%r, %s)' % (self.__class__.__name__, self.block, + self.indexers) @cache_readonly def needs_filling(self): @@ -4773,7 +4768,7 @@ def is_null(self): total_len = values_flat.shape[0] chunk_len = max(total_len // 40, 1000) for i in range(0, total_len, chunk_len): - if not isnull(values_flat[i: i + chunk_len]).all(): + if not isnull(values_flat[i:i + chunk_len]).all(): return False return True @@ -4787,7 +4782,8 @@ def get_reindexed_values(self, empty_dtype, upcasted_na): else: fill_value = upcasted_na - if self.is_null and not getattr(self.block,'is_categorical',None): + if self.is_null and not getattr(self.block, 'is_categorical', + None): missing_arr = np.empty(self.shape, dtype=empty_dtype) if np.prod(self.shape): # NumPy 1.6 workaround: this statement gets strange if all diff --git a/pandas/core/missing.py b/pandas/core/missing.py index f1143ad808b91..86640cffc136e 100644 --- a/pandas/core/missing.py +++ b/pandas/core/missing.py @@ -2,11 +2,8 @@ Routines for filling missing data """ -from functools import partial - import numpy as np -import pandas as pd import pandas.core.common as com import pandas.algos as algos import pandas.lib as lib @@ -28,8 +25,8 @@ def _clean_fill_method(method, allow_nearest=False): valid_methods.append('nearest') expecting = 'pad (ffill), backfill (bfill) or nearest' if method not in valid_methods: - msg = ('Invalid fill method. Expecting %s. Got %s' - % (expecting, method)) + msg = ('Invalid fill method. Expecting %s. 
Got %s' % + (expecting, method)) raise ValueError(msg) return method @@ -37,9 +34,8 @@ def _clean_fill_method(method, allow_nearest=False): def _clean_interp_method(method, **kwargs): order = kwargs.get('order') valid = ['linear', 'time', 'index', 'values', 'nearest', 'zero', 'slinear', - 'quadratic', 'cubic', 'barycentric', 'polynomial', - 'krogh', 'piecewise_polynomial', - 'pchip', 'spline'] + 'quadratic', 'cubic', 'barycentric', 'polynomial', 'krogh', + 'piecewise_polynomial', 'pchip', 'spline'] if method in ('spline', 'polynomial') and order is None: raise ValueError("You must specify the order of the spline or " "polynomial.") @@ -50,8 +46,8 @@ def _clean_interp_method(method, **kwargs): def interpolate_1d(xvalues, yvalues, method='linear', limit=None, - limit_direction='forward', - fill_value=None, bounds_error=False, order=None, **kwargs): + limit_direction='forward', fill_value=None, + bounds_error=False, order=None, **kwargs): """ Logic for the 1-d interpolation. The result should be 1-d, inputs xvalues and yvalues will each be 1-d arrays of the same length. @@ -76,7 +72,7 @@ def interpolate_1d(xvalues, yvalues, method='linear', limit=None, if method == 'time': if not getattr(xvalues, 'is_all_dates', None): - # if not issubclass(xvalues.dtype.type, np.datetime64): + # if not issubclass(xvalues.dtype.type, np.datetime64): raise ValueError('time-weighted interpolation only works ' 'on Series or DataFrames with a ' 'DatetimeIndex') @@ -91,22 +87,21 @@ def _interp_limit(invalid, fw_limit, bw_limit): valid_limit_directions = ['forward', 'backward', 'both'] limit_direction = limit_direction.lower() if limit_direction not in valid_limit_directions: - msg = 'Invalid limit_direction: expecting one of %r, got %r.' % ( - valid_limit_directions, limit_direction) - raise ValueError(msg) + raise ValueError('Invalid limit_direction: expecting one of %r, got ' + '%r.' 
% (valid_limit_directions, limit_direction)) from pandas import Series ys = Series(yvalues) start_nans = set(range(ys.first_valid_index())) end_nans = set(range(1 + ys.last_valid_index(), len(valid))) - # This is a list of the indexes in the series whose yvalue is currently NaN, - # but whose interpolated yvalue will be overwritten with NaN after computing - # the interpolation. For each index in this list, one of these conditions is - # true of the corresponding NaN in the yvalues: + # This is a list of the indexes in the series whose yvalue is currently + # NaN, but whose interpolated yvalue will be overwritten with NaN after + # computing the interpolation. For each index in this list, one of these + # conditions is true of the corresponding NaN in the yvalues: # - # a) It is one of a chain of NaNs at the beginning of the series, and either - # limit is not specified or limit_direction is 'forward'. + # a) It is one of a chain of NaNs at the beginning of the series, and + # either limit is not specified or limit_direction is 'forward'. # b) It is one of a chain of NaNs at the end of the series, and limit is # specified and limit_direction is 'backward' or 'both'. 
# c) Limit is nonzero and it is further than limit from the nearest non-NaN @@ -118,9 +113,11 @@ def _interp_limit(invalid, fw_limit, bw_limit): if limit: if limit_direction == 'forward': - violate_limit = sorted(start_nans | set(_interp_limit(invalid, limit, 0))) + violate_limit = sorted(start_nans | set(_interp_limit(invalid, + limit, 0))) if limit_direction == 'backward': - violate_limit = sorted(end_nans | set(_interp_limit(invalid, 0, limit))) + violate_limit = sorted(end_nans | set(_interp_limit(invalid, 0, + limit))) if limit_direction == 'both': violate_limit = sorted(_interp_limit(invalid, limit, limit)) @@ -150,10 +147,13 @@ def _interp_limit(invalid, fw_limit, bw_limit): # hack for DatetimeIndex, #1646 if issubclass(inds.dtype.type, np.datetime64): inds = inds.view(np.int64) - result[invalid] = _interpolate_scipy_wrapper( - inds[valid], yvalues[valid], inds[invalid], method=method, - fill_value=fill_value, - bounds_error=bounds_error, order=order, **kwargs) + result[invalid] = _interpolate_scipy_wrapper(inds[valid], + yvalues[valid], + inds[invalid], + method=method, + fill_value=fill_value, + bounds_error=bounds_error, + order=order, **kwargs) result[violate_limit] = np.nan return result @@ -167,7 +167,8 @@ def _interpolate_scipy_wrapper(x, y, new_x, method, fill_value=None, """ try: from scipy import interpolate - from pandas import DatetimeIndex + # TODO: Why is DatetimeIndex being imported here? 
+ from pandas import DatetimeIndex # noqa except ImportError: raise ImportError('{0} interpolation requires Scipy'.format(method)) @@ -219,7 +220,8 @@ def _interpolate_scipy_wrapper(x, y, new_x, method, fill_value=None, return new_y -def interpolate_2d(values, method='pad', axis=0, limit=None, fill_value=None, dtype=None): +def interpolate_2d(values, method='pad', axis=0, limit=None, fill_value=None, + dtype=None): """ perform an actual interpolation of values, values will be make 2-d if needed fills inplace, returns the result """ @@ -232,7 +234,7 @@ def interpolate_2d(values, method='pad', axis=0, limit=None, fill_value=None, dt if axis != 0: # pragma: no cover raise AssertionError("cannot interpolate on a ndim == 1 with " "axis != 0") - values = values.reshape(tuple((1,) + values.shape)) + values = values.reshape(tuple((1, ) + values.shape)) if fill_value is None: mask = None @@ -241,9 +243,11 @@ def interpolate_2d(values, method='pad', axis=0, limit=None, fill_value=None, dt method = _clean_fill_method(method) if method == 'pad': - values = transf(pad_2d(transf(values), limit=limit, mask=mask, dtype=dtype)) + values = transf(pad_2d( + transf(values), limit=limit, mask=mask, dtype=dtype)) else: - values = transf(backfill_2d(transf(values), limit=limit, mask=mask, dtype=dtype)) + values = transf(backfill_2d( + transf(values), limit=limit, mask=mask, dtype=dtype)) # reshape back if ndim == 1: @@ -256,13 +260,13 @@ def _interp_wrapper(f, wrap_dtype, na_override=None): def wrapper(arr, mask, limit=None): view = arr.view(wrap_dtype) f(view, mask, limit=limit) + return wrapper _pad_1d_datetime = _interp_wrapper(algos.pad_inplace_int64, np.int64) _pad_2d_datetime = _interp_wrapper(algos.pad_2d_inplace_int64, np.int64) -_backfill_1d_datetime = _interp_wrapper(algos.backfill_inplace_int64, - np.int64) +_backfill_1d_datetime = _interp_wrapper(algos.backfill_inplace_int64, np.int64) _backfill_2d_datetime = _interp_wrapper(algos.backfill_2d_inplace_int64, np.int64)
https://api.github.com/repos/pandas-dev/pandas/pulls/12074
2016-01-18T01:14:37Z
2016-01-19T22:48:23Z
null
2016-01-19T22:48:58Z
ENH: GH12034 RangeIndex.__floordiv__ returns RangeIndex if possible
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index b2eb7d9d97d58..160a2936cca70 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -110,7 +110,7 @@ Range Index A ``RangeIndex`` has been added to the ``Int64Index`` sub-classes to support a memory saving alternative for common use cases. This has a similar implementation to the python ``range`` object (``xrange`` in python 2), in that it only stores the start, stop, and step values for the index. It will transparently interact with the user API, converting to ``Int64Index`` if needed. -This will now be the default constructed index for ``NDFrame`` objects, rather than previous an ``Int64Index``. (:issue:`939`) +This will now be the default constructed index for ``NDFrame`` objects, rather than previous an ``Int64Index``. (:issue:`939`, :issue:`12070`) Previous Behavior: diff --git a/pandas/core/index.py b/pandas/core/index.py index 63b748ada6afa..8459cbe3f810e 100644 --- a/pandas/core/index.py +++ b/pandas/core/index.py @@ -4360,6 +4360,22 @@ def __getitem__(self, key): # fall back to Int64Index return super_getitem(key) + def __floordiv__(self, other): + if com.is_integer(other): + if (len(self) == 0 or + self._start % other == 0 and + self._step % other == 0): + start = self._start // other + step = self._step // other + stop = start + len(self) * step + return RangeIndex(start, stop, step, name=self.name, + fastpath=True) + if len(self) == 1: + start = self._start // other + return RangeIndex(start, start + 1, 1, name=self.name, + fastpath=True) + return self._int64index // other + @classmethod def _add_numeric_methods_binary(cls): """ add in numeric methods, specialized to RangeIndex """ diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py index 2c909d653df85..b0210c9fde2e9 100644 --- a/pandas/tests/test_index.py +++ b/pandas/tests/test_index.py @@ -3733,7 +3733,7 @@ def test_numeric_compat2(self): 
self.assertTrue(result.equals(expected)) result = idx // 1 - expected = idx._int64index // 1 + expected = idx tm.assert_index_equal(result, expected, exact=True) # __mul__ @@ -3748,15 +3748,18 @@ def test_numeric_compat2(self): tm.assert_index_equal(Index(result.values), expected, exact=True) # __floordiv__ - idx = RangeIndex(0, 1000, 2) - result = idx // 2 - expected = idx._int64index // 2 - tm.assert_index_equal(result, expected, exact=True) - - idx = RangeIndex(0, 1000, 1) - result = idx // 2 - expected = idx._int64index // 2 - tm.assert_index_equal(result, expected, exact=True) + cases_exact = [(RangeIndex(0, 1000, 2), 2, RangeIndex(0, 500, 1)), + (RangeIndex(-99, -201, -3), -3, RangeIndex(33, 67, 1)), + (RangeIndex(0, 1000, 1), 2, + RangeIndex(0, 1000, 1)._int64index // 2), + (RangeIndex(0, 100, 1), 2.0, + RangeIndex(0, 100, 1)._int64index // 2.0), + (RangeIndex(), 50, RangeIndex()), + (RangeIndex(2, 4, 2), 3, RangeIndex(0, 1, 1)), + (RangeIndex(-5, -10, -6), 4, RangeIndex(-2, -1, 1)), + (RangeIndex(-100, -200, 3), 2, RangeIndex())] + for idx, div, expected in cases_exact: + tm.assert_index_equal(idx // div, expected, exact=True) def test_constructor_corner(self): arr = np.array([1, 2, 3, 4], dtype=object) @@ -3857,7 +3860,6 @@ def test_is_monotonic(self): self.assertTrue(index.is_monotonic_decreasing) def test_equals(self): - equiv_pairs = [(RangeIndex(0, 9, 2), RangeIndex(0, 10, 2)), (RangeIndex(0), RangeIndex(1, -1, 3)), (RangeIndex(1, 2, 3), RangeIndex(1, 3, 4)),
xref #12034
https://api.github.com/repos/pandas-dev/pandas/pulls/12070
2016-01-17T16:18:25Z
2016-01-20T14:04:48Z
null
2016-01-20T14:04:48Z
CLN: Fix all flake8 warnings in pandas/tests
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py index 9e48702ad2b0a..eedcce82c733d 100644 --- a/pandas/tests/frame/test_operators.py +++ b/pandas/tests/frame/test_operators.py @@ -477,7 +477,7 @@ def test_binary_ops_align(self): result = getattr(df, op)(x, level='second', axis=0) expected = (pd.concat([opa(df.loc[idx[:, i], :], v) - for i, v in x.iteritems()]) + for i, v in x.iteritems()]) .reindex_like(df).sortlevel()) assert_frame_equal(result, expected) diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py index 252250c5a55b8..4b17736dd149a 100644 --- a/pandas/tests/test_algos.py +++ b/pandas/tests/test_algos.py @@ -28,14 +28,14 @@ def test_ints(self): expected = Series(np.array([0, 2, 1, 1, 0, 2, np.nan, 0])) tm.assert_series_equal(result, expected) - s = pd.Series(np.arange(5),dtype=np.float32) - result = algos.match(s, [2,4]) + s = pd.Series(np.arange(5), dtype=np.float32) + result = algos.match(s, [2, 4]) expected = np.array([-1, -1, 0, -1, 1]) self.assert_numpy_array_equal(result, expected) - result = Series(algos.match(s, [2,4], np.nan)) + result = Series(algos.match(s, [2, 4], np.nan)) expected = Series(np.array([np.nan, np.nan, 0, np.nan, 1])) - tm.assert_series_equal(result,expected) + tm.assert_series_equal(result, expected) def test_strings(self): values = ['foo', 'bar', 'baz'] @@ -47,7 +47,8 @@ def test_strings(self): result = Series(algos.match(to_match, values, np.nan)) expected = Series(np.array([1, 0, np.nan, 0, 1, 2, np.nan])) - tm.assert_series_equal(result,expected) + tm.assert_series_equal(result, expected) + class TestFactorize(tm.TestCase): _multiprocess_can_split_ = True @@ -60,31 +61,42 @@ def test_warn(self): def test_basic(self): - labels, uniques = algos.factorize(['a', 'b', 'b', 'a', - 'a', 'c', 'c', 'c']) - # self.assert_numpy_array_equal(labels, np.array([ 0, 1, 1, 0, 0, 2, 2, 2],dtype=np.int64)) - self.assert_numpy_array_equal(uniques, np.array(['a','b','c'], dtype=object)) + 
labels, uniques = algos.factorize(['a', 'b', 'b', 'a', 'a', 'c', 'c', + 'c']) + self.assert_numpy_array_equal( + uniques, np.array(['a', 'b', 'c'], dtype=object)) labels, uniques = algos.factorize(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'], sort=True) - self.assert_numpy_array_equal(labels, np.array([ 0, 1, 1, 0, 0, 2, 2, 2],dtype=np.int64)) - self.assert_numpy_array_equal(uniques, np.array(['a','b','c'], dtype=object)) + self.assert_numpy_array_equal(labels, np.array( + [0, 1, 1, 0, 0, 2, 2, 2], dtype=np.int64)) + self.assert_numpy_array_equal(uniques, np.array( + ['a', 'b', 'c'], dtype=object)) labels, uniques = algos.factorize(list(reversed(range(5)))) - self.assert_numpy_array_equal(labels, np.array([0, 1, 2, 3, 4], dtype=np.int64)) - self.assert_numpy_array_equal(uniques, np.array([ 4, 3, 2, 1, 0],dtype=np.int64)) + self.assert_numpy_array_equal(labels, np.array( + [0, 1, 2, 3, 4], dtype=np.int64)) + self.assert_numpy_array_equal(uniques, np.array( + [4, 3, 2, 1, 0], dtype=np.int64)) labels, uniques = algos.factorize(list(reversed(range(5))), sort=True) - self.assert_numpy_array_equal(labels, np.array([ 4, 3, 2, 1, 0],dtype=np.int64)) - self.assert_numpy_array_equal(uniques, np.array([0, 1, 2, 3, 4], dtype=np.int64)) + self.assert_numpy_array_equal(labels, np.array( + [4, 3, 2, 1, 0], dtype=np.int64)) + self.assert_numpy_array_equal(uniques, np.array( + [0, 1, 2, 3, 4], dtype=np.int64)) labels, uniques = algos.factorize(list(reversed(np.arange(5.)))) - self.assert_numpy_array_equal(labels, np.array([0., 1., 2., 3., 4.], dtype=np.float64)) - self.assert_numpy_array_equal(uniques, np.array([ 4, 3, 2, 1, 0],dtype=np.int64)) - - labels, uniques = algos.factorize(list(reversed(np.arange(5.))), sort=True) - self.assert_numpy_array_equal(labels, np.array([ 4, 3, 2, 1, 0],dtype=np.int64)) - self.assert_numpy_array_equal(uniques, np.array([0., 1., 2., 3., 4.], dtype=np.float64)) + self.assert_numpy_array_equal(labels, np.array( + [0., 1., 2., 3., 4.], dtype=np.float64)) 
+ self.assert_numpy_array_equal(uniques, np.array( + [4, 3, 2, 1, 0], dtype=np.int64)) + + labels, uniques = algos.factorize( + list(reversed(np.arange(5.))), sort=True) + self.assert_numpy_array_equal(labels, np.array( + [4, 3, 2, 1, 0], dtype=np.int64)) + self.assert_numpy_array_equal(uniques, np.array( + [0., 1., 2., 3., 4.], dtype=np.float64)) def test_mixed(self): @@ -92,39 +104,49 @@ def test_mixed(self): x = Series(['A', 'A', np.nan, 'B', 3.14, np.inf]) labels, uniques = algos.factorize(x) - self.assert_numpy_array_equal(labels, np.array([ 0, 0, -1, 1, 2, 3],dtype=np.int64)) - self.assert_numpy_array_equal(uniques, np.array(['A', 'B', 3.14, np.inf], dtype=object)) + self.assert_numpy_array_equal(labels, np.array( + [0, 0, -1, 1, 2, 3], dtype=np.int64)) + self.assert_numpy_array_equal(uniques, np.array( + ['A', 'B', 3.14, np.inf], dtype=object)) labels, uniques = algos.factorize(x, sort=True) - self.assert_numpy_array_equal(labels, np.array([ 2, 2, -1, 3, 0, 1],dtype=np.int64)) - self.assert_numpy_array_equal(uniques, np.array([3.14, np.inf, 'A', 'B'], dtype=object)) + self.assert_numpy_array_equal(labels, np.array( + [2, 2, -1, 3, 0, 1], dtype=np.int64)) + self.assert_numpy_array_equal(uniques, np.array( + [3.14, np.inf, 'A', 'B'], dtype=object)) def test_datelike(self): # M8 v1 = pd.Timestamp('20130101 09:00:00.00004') v2 = pd.Timestamp('20130101') - x = Series([v1,v1,v1,v2,v2,v1]) + x = Series([v1, v1, v1, v2, v2, v1]) labels, uniques = algos.factorize(x) - self.assert_numpy_array_equal(labels, np.array([ 0,0,0,1,1,0],dtype=np.int64)) - self.assert_numpy_array_equal(uniques, np.array([v1.value,v2.value],dtype='M8[ns]')) + self.assert_numpy_array_equal(labels, np.array( + [0, 0, 0, 1, 1, 0], dtype=np.int64)) + self.assert_numpy_array_equal(uniques, np.array( + [v1.value, v2.value], dtype='M8[ns]')) labels, uniques = algos.factorize(x, sort=True) - self.assert_numpy_array_equal(labels, np.array([ 1,1,1,0,0,1],dtype=np.int64)) - 
self.assert_numpy_array_equal(uniques, np.array([v2.value,v1.value],dtype='M8[ns]')) + self.assert_numpy_array_equal(labels, np.array( + [1, 1, 1, 0, 0, 1], dtype=np.int64)) + self.assert_numpy_array_equal(uniques, np.array( + [v2.value, v1.value], dtype='M8[ns]')) # period - v1 = pd.Period('201302',freq='M') - v2 = pd.Period('201303',freq='M') - x = Series([v1,v1,v1,v2,v2,v1]) + v1 = pd.Period('201302', freq='M') + v2 = pd.Period('201303', freq='M') + x = Series([v1, v1, v1, v2, v2, v1]) # periods are not 'sorted' as they are converted back into an index labels, uniques = algos.factorize(x) - self.assert_numpy_array_equal(labels, np.array([ 0,0,0,1,1,0],dtype=np.int64)) + self.assert_numpy_array_equal(labels, np.array( + [0, 0, 0, 1, 1, 0], dtype=np.int64)) self.assert_numpy_array_equal(uniques, pd.PeriodIndex([v1, v2])) - labels, uniques = algos.factorize(x,sort=True) - self.assert_numpy_array_equal(labels, np.array([ 0,0,0,1,1,0],dtype=np.int64)) + labels, uniques = algos.factorize(x, sort=True) + self.assert_numpy_array_equal(labels, np.array( + [0, 0, 0, 1, 1, 0], dtype=np.int64)) self.assert_numpy_array_equal(uniques, pd.PeriodIndex([v1, v2])) def test_factorize_nan(self): @@ -137,15 +159,20 @@ def test_factorize_nan(self): ids = rizer.factorize(key, sort=True, na_sentinel=na_sentinel) expected = np.array([0, 1, 0, na_sentinel], dtype='int32') self.assertEqual(len(set(key)), len(set(expected))) - self.assertTrue(np.array_equal(pd.isnull(key), expected == na_sentinel)) + self.assertTrue(np.array_equal( + pd.isnull(key), expected == na_sentinel)) # nan still maps to na_sentinel when sort=False key = np.array([0, np.nan, 1], dtype='O') na_sentinel = -1 - ids = rizer.factorize(key, sort=False, na_sentinel=na_sentinel) - expected = np.array([ 2, -1, 0], dtype='int32') + + # TODO(wesm): unused? 
+ ids = rizer.factorize(key, sort=False, na_sentinel=na_sentinel) # noqa + + expected = np.array([2, -1, 0], dtype='int32') self.assertEqual(len(set(key)), len(set(expected))) - self.assertTrue(np.array_equal(pd.isnull(key), expected == na_sentinel)) + self.assertTrue( + np.array_equal(pd.isnull(key), expected == na_sentinel)) def test_vector_resize(self): # Test for memory errors after internal vector @@ -161,14 +188,15 @@ def _test_vector_resize(htable, uniques, dtype, nvals): test_cases = [ (hashtable.PyObjectHashTable, hashtable.ObjectVector, 'object'), - (hashtable.Float64HashTable, hashtable.Float64Vector, 'float64'), - (hashtable.Int64HashTable, hashtable.Int64Vector, 'int64')] + (hashtable.Float64HashTable, hashtable.Float64Vector, 'float64'), + (hashtable.Int64HashTable, hashtable.Int64Vector, 'int64')] for (tbl, vect, dtype) in test_cases: # resizing to empty is a special case _test_vector_resize(tbl(), vect(), dtype, 0) _test_vector_resize(tbl(), vect(), dtype, 10) + class TestIndexer(tm.TestCase): _multiprocess_can_split_ = True @@ -180,15 +208,15 @@ def test_outer_join_indexer(self): ('object', algos.algos.outer_join_indexer_object)] for dtype, indexer in typemap: - left = np.arange(3, dtype = dtype) - right = np.arange(2,5, dtype = dtype) - empty = np.array([], dtype = dtype) + left = np.arange(3, dtype=dtype) + right = np.arange(2, 5, dtype=dtype) + empty = np.array([], dtype=dtype) result, lindexer, rindexer = indexer(left, right) tm.assertIsInstance(result, np.ndarray) tm.assertIsInstance(lindexer, np.ndarray) tm.assertIsInstance(rindexer, np.ndarray) - tm.assert_numpy_array_equal(result, np.arange(5, dtype = dtype)) + tm.assert_numpy_array_equal(result, np.arange(5, dtype=dtype)) tm.assert_numpy_array_equal(lindexer, np.array([0, 1, 2, -1, -1])) tm.assert_numpy_array_equal(rindexer, np.array([-1, -1, 0, 1, 2])) @@ -202,6 +230,7 @@ def test_outer_join_indexer(self): tm.assert_numpy_array_equal(lindexer, np.array([0, 1, 2])) 
tm.assert_numpy_array_equal(rindexer, np.array([-1, -1, -1])) + class TestUnique(tm.TestCase): _multiprocess_can_split_ = True @@ -224,8 +253,8 @@ def test_object_refcount_bug(self): def test_on_index_object(self): - mindex = pd.MultiIndex.from_arrays([np.arange(5).repeat(5), - np.tile(np.arange(5), 5)]) + mindex = pd.MultiIndex.from_arrays([np.arange(5).repeat(5), np.tile( + np.arange(5), 5)]) expected = mindex.values expected.sort() @@ -239,7 +268,8 @@ def test_on_index_object(self): def test_datetime64_dtype_array_returned(self): # GH 9431 expected = np.array(['2015-01-03T00:00:00.000000000+0000', - '2015-01-01T00:00:00.000000000+0000'], dtype='M8[ns]') + '2015-01-01T00:00:00.000000000+0000'], + dtype='M8[ns]') dt_index = pd.to_datetime(['2015-01-03T00:00:00.000000000+0000', '2015-01-01T00:00:00.000000000+0000', @@ -258,7 +288,6 @@ def test_datetime64_dtype_array_returned(self): tm.assert_numpy_array_equal(result, expected) self.assertEqual(result.dtype, expected.dtype) - def test_timedelta64_dtype_array_returned(self): # GH 9431 expected = np.array([31200, 45678, 10000], dtype='m8[ns]') @@ -278,70 +307,70 @@ def test_timedelta64_dtype_array_returned(self): tm.assert_numpy_array_equal(result, expected) self.assertEqual(result.dtype, expected.dtype) + class TestIsin(tm.TestCase): _multiprocess_can_split_ = True def test_invalid(self): - self.assertRaises(TypeError, lambda : algos.isin(1,1)) - self.assertRaises(TypeError, lambda : algos.isin(1,[1])) - self.assertRaises(TypeError, lambda : algos.isin([1],1)) + self.assertRaises(TypeError, lambda: algos.isin(1, 1)) + self.assertRaises(TypeError, lambda: algos.isin(1, [1])) + self.assertRaises(TypeError, lambda: algos.isin([1], 1)) def test_basic(self): - result = algos.isin([1,2],[1]) - expected = np.array([True,False]) + result = algos.isin([1, 2], [1]) + expected = np.array([True, False]) tm.assert_numpy_array_equal(result, expected) - result = algos.isin(np.array([1,2]),[1]) - expected = np.array([True,False]) + 
result = algos.isin(np.array([1, 2]), [1]) + expected = np.array([True, False]) tm.assert_numpy_array_equal(result, expected) - result = algos.isin(pd.Series([1,2]),[1]) - expected = np.array([True,False]) + result = algos.isin(pd.Series([1, 2]), [1]) + expected = np.array([True, False]) tm.assert_numpy_array_equal(result, expected) - result = algos.isin(pd.Series([1,2]),pd.Series([1])) - expected = np.array([True,False]) + result = algos.isin(pd.Series([1, 2]), pd.Series([1])) + expected = np.array([True, False]) tm.assert_numpy_array_equal(result, expected) - result = algos.isin(['a','b'],['a']) - expected = np.array([True,False]) + result = algos.isin(['a', 'b'], ['a']) + expected = np.array([True, False]) tm.assert_numpy_array_equal(result, expected) - result = algos.isin(pd.Series(['a','b']),pd.Series(['a'])) - expected = np.array([True,False]) + result = algos.isin(pd.Series(['a', 'b']), pd.Series(['a'])) + expected = np.array([True, False]) tm.assert_numpy_array_equal(result, expected) - result = algos.isin(['a','b'],[1]) - expected = np.array([False,False]) + result = algos.isin(['a', 'b'], [1]) + expected = np.array([False, False]) tm.assert_numpy_array_equal(result, expected) - arr = pd.date_range('20130101',periods=3).values - result = algos.isin(arr,[arr[0]]) - expected = np.array([True,False,False]) + arr = pd.date_range('20130101', periods=3).values + result = algos.isin(arr, [arr[0]]) + expected = np.array([True, False, False]) tm.assert_numpy_array_equal(result, expected) - result = algos.isin(arr,arr[0:2]) - expected = np.array([True,True,False]) + result = algos.isin(arr, arr[0:2]) + expected = np.array([True, True, False]) tm.assert_numpy_array_equal(result, expected) - arr = pd.timedelta_range('1 day',periods=3).values - result = algos.isin(arr,[arr[0]]) - expected = np.array([True,False,False]) + arr = pd.timedelta_range('1 day', periods=3).values + result = algos.isin(arr, [arr[0]]) + expected = np.array([True, False, False]) 
tm.assert_numpy_array_equal(result, expected) - - def test_large(self): - s = pd.date_range('20000101',periods=2000000,freq='s').values - result = algos.isin(s,s[0:2]) - expected = np.zeros(len(s),dtype=bool) + s = pd.date_range('20000101', periods=2000000, freq='s').values + result = algos.isin(s, s[0:2]) + expected = np.zeros(len(s), dtype=bool) expected[0] = True expected[1] = True tm.assert_numpy_array_equal(result, expected) + class TestValueCounts(tm.TestCase): _multiprocess_can_split_ = True @@ -354,14 +383,10 @@ def test_value_counts(self): tm.assertIsInstance(factor, Categorical) result = algos.value_counts(factor) - cats = ['(-1.194, -0.535]', - '(-0.535, 0.121]', - '(0.121, 0.777]', - '(0.777, 1.433]' - ] + cats = ['(-1.194, -0.535]', '(-0.535, 0.121]', '(0.121, 0.777]', + '(0.777, 1.433]'] expected_index = CategoricalIndex(cats, cats, ordered=True) - expected = Series([1, 1, 1, 1], - index=expected_index) + expected = Series([1, 1, 1, 1], index=expected_index) tm.assert_series_equal(result.sort_index(), expected.sort_index()) def test_value_counts_bins(self): @@ -385,7 +410,8 @@ def test_value_counts_dtypes(self): result = algos.value_counts(Series([1, 1., '1'])) # object self.assertEqual(len(result), 2) - self.assertRaises(TypeError, lambda s: algos.value_counts(s, bins=1), ['1', 1]) + self.assertRaises(TypeError, lambda s: algos.value_counts(s, bins=1), + ['1', 1]) def test_value_counts_nat(self): td = Series([np.timedelta64(10000), pd.NaT], dtype='timedelta64[ns]') @@ -404,7 +430,8 @@ def test_value_counts_nat(self): def test_categorical(self): s = Series(pd.Categorical(list('aaabbc'))) result = s.value_counts() - expected = pd.Series([3, 2, 1], index=pd.CategoricalIndex(['a', 'b', 'c'])) + expected = pd.Series([3, 2, 1], + index=pd.CategoricalIndex(['a', 'b', 'c'])) tm.assert_series_equal(result, expected, check_index_type=True) # preserve order? 
@@ -414,44 +441,41 @@ def test_categorical(self): tm.assert_series_equal(result, expected, check_index_type=True) def test_categorical_nans(self): - s = Series(pd.Categorical(list('aaaaabbbcc'))) # 4,3,2,1 (nan) + s = Series(pd.Categorical(list('aaaaabbbcc'))) # 4,3,2,1 (nan) s.iloc[1] = np.nan result = s.value_counts() - expected = pd.Series([4, 3, 2], - index=pd.CategoricalIndex(['a', 'b', 'c'], - categories=['a', 'b', 'c'])) + expected = pd.Series([4, 3, 2], index=pd.CategoricalIndex( + ['a', 'b', 'c'], categories=['a', 'b', 'c'])) tm.assert_series_equal(result, expected, check_index_type=True) result = s.value_counts(dropna=False) - expected = pd.Series([4, 3, 2, 1], index=pd.CategoricalIndex( - ['a', 'b', 'c', np.nan])) + expected = pd.Series([ + 4, 3, 2, 1 + ], index=pd.CategoricalIndex(['a', 'b', 'c', np.nan])) tm.assert_series_equal(result, expected, check_index_type=True) # out of order - s = Series(pd.Categorical(list('aaaaabbbcc'), - ordered=True, categories=['b', 'a', 'c'])) + s = Series(pd.Categorical( + list('aaaaabbbcc'), ordered=True, categories=['b', 'a', 'c'])) s.iloc[1] = np.nan result = s.value_counts() - expected = pd.Series([4, 3, 2], - index=pd.CategoricalIndex(['a', 'b', 'c'], - categories=['b', 'a', 'c'], - ordered=True)) + expected = pd.Series([4, 3, 2], index=pd.CategoricalIndex( + ['a', 'b', 'c'], categories=['b', 'a', 'c'], ordered=True)) tm.assert_series_equal(result, expected, check_index_type=True) result = s.value_counts(dropna=False) expected = pd.Series([4, 3, 2, 1], index=pd.CategoricalIndex( - ['a', 'b', 'c', np.nan], categories=['b', 'a', 'c'], ordered=True)) + ['a', 'b', 'c', np.nan], categories=['b', 'a', 'c'], ordered=True)) tm.assert_series_equal(result, expected, check_index_type=True) def test_categorical_zeroes(self): # keep the `d` category with 0 - s = Series(pd.Categorical(list('bbbaac'), categories=list('abcd'), - ordered=True)) + s = Series(pd.Categorical( + list('bbbaac'), categories=list('abcd'), ordered=True)) 
result = s.value_counts() expected = Series([3, 2, 1, 0], index=pd.Categorical( ['b', 'a', 'c', 'd'], categories=list('abcd'), ordered=True)) tm.assert_series_equal(result, expected, check_index_type=True) - def test_dropna(self): # https://github.com/pydata/pandas/issues/9443#issuecomment-73719328 @@ -529,8 +553,7 @@ def test_group_var_generic_2d_all_finite(self): values = 10 * prng.rand(10, 2).astype(self.dtype) labels = np.tile(np.arange(5), (2, )).astype('int64') - expected_out = np.std( - values.reshape(2, 5, 2), ddof=1, axis=0) ** 2 + expected_out = np.std(values.reshape(2, 5, 2), ddof=1, axis=0) ** 2 expected_counts = counts + 2 self.algo(out, counts, values, labels) @@ -546,10 +569,10 @@ def test_group_var_generic_2d_some_nan(self): values[:, 1] = np.nan labels = np.tile(np.arange(5), (2, )).astype('int64') - expected_out = np.vstack([ - values[:, 0].reshape(5, 2, order='F').std(ddof=1, axis=1) ** 2, - np.nan * np.ones(5) - ]).T + expected_out = np.vstack([values[:, 0] + .reshape(5, 2, order='F') + .std(ddof=1, axis=1) ** 2, + np.nan * np.ones(5)]).T expected_counts = counts + 2 self.algo(out, counts, values, labels) @@ -560,7 +583,7 @@ def test_group_var_constant(self): # Regression test from GH 10448. 
out = np.array([[np.nan]], dtype=self.dtype) - counts = np.array([0],dtype='int64') + counts = np.array([0], dtype='int64') values = 0.832845131556193 * np.ones((3, 1), dtype=self.dtype) labels = np.zeros(3, dtype='int64') @@ -584,7 +607,7 @@ def test_group_var_large_inputs(self): prng = RandomState(1234) out = np.array([[np.nan]], dtype=self.dtype) - counts = np.array([0],dtype='int64') + counts = np.array([0], dtype='int64') values = (prng.rand(10 ** 6) + 10 ** 12).astype(self.dtype) values.shape = (10 ** 6, 1) labels = np.zeros(10 ** 6, dtype='int64') @@ -611,6 +634,7 @@ def test_quantile(): expected = algos.quantile(s.values, [0, .25, .5, .75, 1.]) tm.assert_almost_equal(result, expected) + def test_unique_label_indices(): from pandas.hashtable import unique_label_indices @@ -622,10 +646,11 @@ def test_unique_label_indices(): tm.assert_numpy_array_equal(left, right) a[np.random.choice(len(a), 10)] = -1 - left= unique_label_indices(a) + left = unique_label_indices(a) right = np.unique(a, return_index=True)[1][1:] tm.assert_numpy_array_equal(left, right) + if __name__ == '__main__': import nose nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py index 741e3eecc96a0..10a5b9dbefe02 100644 --- a/pandas/tests/test_base.py +++ b/pandas/tests/test_base.py @@ -11,23 +11,23 @@ import pandas.compat as compat import pandas.core.common as com import pandas.util.testing as tm -from pandas import (Series, Index, DatetimeIndex, - TimedeltaIndex, PeriodIndex, Timedelta) +from pandas import (Series, Index, DatetimeIndex, TimedeltaIndex, PeriodIndex, + Timedelta) from pandas.compat import u, StringIO -from pandas.core.base import (FrozenList, FrozenNDArray, - PandasDelegate, NoNewAttributesMixin) +from pandas.core.base import (FrozenList, FrozenNDArray, PandasDelegate, + NoNewAttributesMixin) from pandas.tseries.base import DatetimeIndexOpsMixin -from pandas.util.testing import 
(assertRaisesRegexp, - assertIsInstance) +from pandas.util.testing import (assertRaisesRegexp, assertIsInstance) class CheckStringMixin(object): + def test_string_methods_dont_fail(self): repr(self.container) str(self.container) bytes(self.container) if not compat.PY3: - unicode(self.container) + unicode(self.container) # noqa def test_tricky_container(self): if not hasattr(self, 'unicode_container'): @@ -36,30 +36,35 @@ def test_tricky_container(self): str(self.unicode_container) bytes(self.unicode_container) if not compat.PY3: - unicode(self.unicode_container) + unicode(self.unicode_container) # noqa class CheckImmutable(object): mutable_regex = re.compile('does not support mutable operations') def check_mutable_error(self, *args, **kwargs): - # pass whatever functions you normally would to assertRaises (after the Exception kind) + # pass whatever functions you normally would to assertRaises (after the + # Exception kind) assertRaisesRegexp(TypeError, self.mutable_regex, *args, **kwargs) def test_no_mutable_funcs(self): - def setitem(): self.container[0] = 5 + def setitem(): + self.container[0] = 5 self.check_mutable_error(setitem) - def setslice(): self.container[1:2] = 3 + def setslice(): + self.container[1:2] = 3 self.check_mutable_error(setslice) - def delitem(): del self.container[0] + def delitem(): + del self.container[0] self.check_mutable_error(delitem) - def delslice(): del self.container[0:3] + def delslice(): + del self.container[0:3] self.check_mutable_error(delslice) mutable_methods = getattr(self, "mutable_methods", []) @@ -116,14 +121,15 @@ def test_shallow_copying(self): original = self.container.copy() assertIsInstance(self.container.view(), FrozenNDArray) self.assertFalse(isinstance( - self.container.view(np.ndarray), FrozenNDArray - )) + self.container.view(np.ndarray), FrozenNDArray)) self.assertIsNot(self.container.view(), self.container) self.assert_numpy_array_equal(self.container, original) # shallow copy should be the same too 
assertIsInstance(self.container._shallow_copy(), FrozenNDArray) + # setting should not be allowed - def testit(container): container[0] = 16 + def testit(container): + container[0] = 16 self.check_mutable_error(testit, self.container) @@ -164,8 +170,10 @@ def bar(self, *args, **kwargs): pass class Delegate(PandasDelegate): + def __init__(self, obj): self.obj = obj + Delegate._add_delegate_accessors(delegate=Delegator, accessors=Delegator._properties, typ='property') @@ -177,12 +185,17 @@ def __init__(self, obj): def f(): delegate.foo + self.assertRaises(TypeError, f) + def f(): delegate.foo = 5 + self.assertRaises(TypeError, f) + def f(): delegate.foo() + self.assertRaises(TypeError, f) @@ -191,32 +204,36 @@ class Ops(tm.TestCase): def _allow_na_ops(self, obj): """Whether to skip test cases including NaN""" if (isinstance(obj, Index) and - (obj.is_boolean() or not obj._can_hold_na)): + (obj.is_boolean() or not obj._can_hold_na)): # don't test boolean / int64 index return False return True def setUp(self): - self.bool_index = tm.makeBoolIndex(10, name='a') - self.int_index = tm.makeIntIndex(10, name='a') - self.float_index = tm.makeFloatIndex(10, name='a') - self.dt_index = tm.makeDateIndex(10, name='a') - self.dt_tz_index = tm.makeDateIndex(10, name='a').tz_localize(tz='US/Eastern') - self.period_index = tm.makePeriodIndex(10, name='a') - self.string_index = tm.makeStringIndex(10, name='a') - self.unicode_index = tm.makeUnicodeIndex(10, name='a') + self.bool_index = tm.makeBoolIndex(10, name='a') + self.int_index = tm.makeIntIndex(10, name='a') + self.float_index = tm.makeFloatIndex(10, name='a') + self.dt_index = tm.makeDateIndex(10, name='a') + self.dt_tz_index = tm.makeDateIndex(10, name='a').tz_localize( + tz='US/Eastern') + self.period_index = tm.makePeriodIndex(10, name='a') + self.string_index = tm.makeStringIndex(10, name='a') + self.unicode_index = tm.makeUnicodeIndex(10, name='a') arr = np.random.randn(10) - self.int_series = Series(arr, 
index=self.int_index, name='a') - self.float_series = Series(arr, index=self.float_index, name='a') - self.dt_series = Series(arr, index=self.dt_index, name='a') - self.dt_tz_series = self.dt_tz_index.to_series(keep_tz=True) + self.int_series = Series(arr, index=self.int_index, name='a') + self.float_series = Series(arr, index=self.float_index, name='a') + self.dt_series = Series(arr, index=self.dt_index, name='a') + self.dt_tz_series = self.dt_tz_index.to_series(keep_tz=True) self.period_series = Series(arr, index=self.period_index, name='a') self.string_series = Series(arr, index=self.string_index, name='a') - types = ['bool','int','float','dt', 'dt_tz', 'period','string', 'unicode'] - fmts = [ "{0}_{1}".format(t,f) for t in types for f in ['index','series'] ] - self.objs = [ getattr(self,f) for f in fmts if getattr(self,f,None) is not None ] + types = ['bool', 'int', 'float', 'dt', 'dt_tz', 'period', 'string', + 'unicode'] + fmts = ["{0}_{1}".format(t, f) + for t in types for f in ['index', 'series']] + self.objs = [getattr(self, f) + for f in fmts if getattr(self, f, None) is not None] def check_ops_properties(self, props, filter=None, ignore_failures=False): for op in props: @@ -230,36 +247,39 @@ def check_ops_properties(self, props, filter=None, ignore_failures=False): try: if isinstance(o, Series): - expected = Series(getattr(o.index,op), index=o.index, name='a') + expected = Series( + getattr(o.index, op), index=o.index, name='a') else: expected = getattr(o, op) except (AttributeError): if ignore_failures: continue - result = getattr(o,op) + result = getattr(o, op) # these couuld be series, arrays or scalars - if isinstance(result,Series) and isinstance(expected,Series): - tm.assert_series_equal(result,expected) - elif isinstance(result,Index) and isinstance(expected,Index): - tm.assert_index_equal(result,expected) - elif isinstance(result,np.ndarray) and isinstance(expected,np.ndarray): - self.assert_numpy_array_equal(result,expected) + if 
isinstance(result, Series) and isinstance(expected, Series): + tm.assert_series_equal(result, expected) + elif isinstance(result, Index) and isinstance(expected, Index): + tm.assert_index_equal(result, expected) + elif isinstance(result, np.ndarray) and isinstance(expected, + np.ndarray): + self.assert_numpy_array_equal(result, expected) else: self.assertEqual(result, expected) - # freq raises AttributeError on an Int64Index because its not defined - # we mostly care about Series hwere anyhow + # freq raises AttributeError on an Int64Index because it's not + # defined; we mostly care about Series here anyhow if not ignore_failures: for o in self.not_valid_objs: - # an object that is datetimelike will raise a TypeError, otherwise - # an AttributeError + # an object that is datetimelike will raise a TypeError, + # otherwise an AttributeError if issubclass(type(o), DatetimeIndexOpsMixin): - self.assertRaises(TypeError, lambda : getattr(o,op)) + self.assertRaises(TypeError, lambda: getattr(o, op)) else: - self.assertRaises(AttributeError, lambda : getattr(o,op)) + self.assertRaises(AttributeError, + lambda: getattr(o, op)) def test_binary_ops_docs(self): from pandas import DataFrame, Panel @@ -270,24 +290,28 @@ def test_binary_ops_docs(self): 'pow': '**', 'truediv': '/', 'floordiv': '//'} - for op_name in ['add', 'sub', 'mul', 'mod', 'pow', 'truediv', 'floordiv']: + for op_name in ['add', 'sub', 'mul', 'mod', 'pow', 'truediv', + 'floordiv']: for klass in [Series, DataFrame, Panel]: operand1 = klass.__name__.lower() operand2 = 'other' op = op_map[op_name] expected_str = ' '.join([operand1, op, operand2]) - self.assertTrue(expected_str in getattr(klass, op_name).__doc__) + self.assertTrue(expected_str in getattr(klass, + op_name).__doc__) # reverse version of the binary ops expected_str = ' '.join([operand2, op, operand1]) - self.assertTrue(expected_str in getattr(klass, 'r' + op_name).__doc__) + self.assertTrue(expected_str in getattr(klass, 'r' + + op_name).__doc__) + 
class TestIndexOps(Ops): def setUp(self): super(TestIndexOps, self).setUp() - self.is_valid_objs = [ o for o in self.objs if o._allow_index_ops ] - self.not_valid_objs = [ o for o in self.objs if not o._allow_index_ops ] + self.is_valid_objs = [o for o in self.objs if o._allow_index_ops] + self.not_valid_objs = [o for o in self.objs if not o._allow_index_ops] def test_none_comparison(self): @@ -299,12 +323,12 @@ def test_none_comparison(self): o[0] = np.nan # noinspection PyComparisonWithNone - result = o == None + result = o == None # noqa self.assertFalse(result.iat[0]) self.assertFalse(result.iat[1]) # noinspection PyComparisonWithNone - result = o != None + result = o != None # noqa self.assertTrue(result.iat[0]) self.assertTrue(result.iat[1]) @@ -314,9 +338,9 @@ def test_none_comparison(self): # this fails for numpy < 1.9 # and oddly for *some* platforms - #result = None != o - #self.assertTrue(result.iat[0]) - #self.assertTrue(result.iat[1]) + # result = None != o # noqa + # self.assertTrue(result.iat[0]) + # self.assertTrue(result.iat[1]) result = None > o self.assertFalse(result.iat[0]) @@ -326,14 +350,13 @@ def test_none_comparison(self): self.assertFalse(result.iat[0]) self.assertFalse(result.iat[1]) - def test_ndarray_compat_properties(self): for o in self.objs: # check that we work - for p in ['shape', 'dtype', 'flags', 'T', - 'strides', 'itemsize', 'nbytes']: + for p in ['shape', 'dtype', 'flags', 'T', 'strides', 'itemsize', + 'nbytes']: self.assertIsNotNone(getattr(o, p, None)) self.assertTrue(hasattr(o, 'base')) @@ -352,23 +375,25 @@ def test_ndarray_compat_properties(self): self.assertEqual(Series([1]).item(), 1) def test_ops(self): - for op in ['max','min']: + for op in ['max', 'min']: for o in self.objs: - result = getattr(o,op)() + result = getattr(o, op)() if not isinstance(o, PeriodIndex): expected = getattr(o.values, op)() else: - expected = pd.Period(ordinal=getattr(o.values, op)(), freq=o.freq) + expected = 
pd.Period(ordinal=getattr(o.values, op)(), + freq=o.freq) try: self.assertEqual(result, expected) except TypeError: - # comparing tz-aware series with np.array results in TypeError + # comparing tz-aware series with np.array results in + # TypeError expected = expected.astype('M8[ns]').astype('int64') self.assertEqual(result.value, expected) def test_nanops(self): # GH 7261 - for op in ['max','min']: + for op in ['max', 'min']: for klass in [Index, Series]: obj = klass([np.nan, 2.0]) @@ -389,25 +414,26 @@ def test_nanops(self): self.assertEqual(getattr(obj, op)(), datetime(2011, 11, 1)) # argmin/max - obj = Index(np.arange(5,dtype='int64')) - self.assertEqual(obj.argmin(),0) - self.assertEqual(obj.argmax(),4) + obj = Index(np.arange(5, dtype='int64')) + self.assertEqual(obj.argmin(), 0) + self.assertEqual(obj.argmax(), 4) obj = Index([np.nan, 1, np.nan, 2]) - self.assertEqual(obj.argmin(),1) - self.assertEqual(obj.argmax(),3) + self.assertEqual(obj.argmin(), 1) + self.assertEqual(obj.argmax(), 3) obj = Index([np.nan]) - self.assertEqual(obj.argmin(),-1) - self.assertEqual(obj.argmax(),-1) + self.assertEqual(obj.argmin(), -1) + self.assertEqual(obj.argmax(), -1) - obj = Index([pd.NaT, datetime(2011, 11, 1), datetime(2011,11,2),pd.NaT]) - self.assertEqual(obj.argmin(),1) - self.assertEqual(obj.argmax(),2) + obj = Index([pd.NaT, datetime(2011, 11, 1), datetime(2011, 11, 2), + pd.NaT]) + self.assertEqual(obj.argmin(), 1) + self.assertEqual(obj.argmax(), 2) obj = Index([pd.NaT]) - self.assertEqual(obj.argmin(),-1) - self.assertEqual(obj.argmax(),-1) + self.assertEqual(obj.argmin(), -1) + self.assertEqual(obj.argmax(), -1) def test_value_counts_unique_nunique(self): for o in self.objs: @@ -447,9 +473,13 @@ def test_value_counts_unique_nunique(self): else: expected_index = pd.Index(values[::-1]) idx = o.index.repeat(range(1, len(o) + 1)) - o = klass(np.repeat(values, range(1, len(o) + 1)), index=idx, name='a') + o = klass( + np.repeat(values, range(1, + len(o) + 1)), 
index=idx, name='a') - expected_s = Series(range(10, 0, -1), index=expected_index, dtype='int64', name='a') + expected_s = Series( + range(10, 0, - + 1), index=expected_index, dtype='int64', name='a') result = o.value_counts() tm.assert_series_equal(result, expected_s) @@ -484,28 +514,36 @@ def test_value_counts_unique_nunique(self): o = o.copy() o[0:2] = pd.tslib.iNaT values = o.values - elif o.values.dtype == 'datetime64[ns]' or isinstance(o, PeriodIndex): + elif o.values.dtype == 'datetime64[ns]' or isinstance( + o, PeriodIndex): values[0:2] = pd.tslib.iNaT else: values[0:2] = null_obj - # create repeated values, 'n'th element is repeated by n+1 times + # create repeated values, 'n'th element is repeated by n+1 + # times if isinstance(o, PeriodIndex): - # freq must be specified because repeat makes freq ambiguous + # freq must be specified because repeat makes freq + # ambiguous # resets name from Index expected_index = pd.Index(o, name=None) # attach name to klass - o = klass(np.repeat(values, range(1, len(o) + 1)), freq=o.freq, name='a') + o = klass( + np.repeat(values, range( + 1, len(o) + 1)), freq=o.freq, name='a') elif isinstance(o, Index): expected_index = pd.Index(values, name=None) - o = klass(np.repeat(values, range(1, len(o) + 1)), name='a') + o = klass( + np.repeat(values, range(1, len(o) + 1)), name='a') else: expected_index = pd.Index(values, name=None) idx = np.repeat(o.index.values, range(1, len(o) + 1)) - o = klass(np.repeat(values, range(1, len(o) + 1)), index=idx, name='a') + o = klass( + np.repeat(values, range( + 1, len(o) + 1)), index=idx, name='a') - expected_s_na = Series(list(range(10, 2, -1)) +[3], + expected_s_na = Series(list(range(10, 2, -1)) + [3], index=expected_index[9:0:-1], dtype='int64', name='a') expected_s = Series(list(range(10, 2, -1)), @@ -543,7 +581,8 @@ def test_value_counts_inferred(self): self.assert_numpy_array_equal(s.unique(), np.unique(s_values)) self.assertEqual(s.nunique(), 4) - # don't sort, have to sort after 
the fact as not sorting is platform-dep + # don't sort, have to sort after the fact as not sorting is + # platform-dep hist = s.value_counts(sort=False).sort_values() expected = Series([3, 1, 4, 2], index=list('acbd')).sort_values() tm.assert_series_equal(hist, expected) @@ -559,7 +598,8 @@ def test_value_counts_inferred(self): tm.assert_series_equal(hist, expected) # bins - self.assertRaises(TypeError, lambda bins: s.value_counts(bins=bins), 1) + self.assertRaises(TypeError, + lambda bins: s.value_counts(bins=bins), 1) s1 = Series([1, 1, 2, 3]) res1 = s1.value_counts(bins=1) @@ -573,45 +613,60 @@ def test_value_counts_inferred(self): self.assertEqual(s1.nunique(), 3) res4 = s1.value_counts(bins=4) - exp4 = Series({0.998: 2, 1.5: 1, 2.0: 0, 2.5: 1}, index=[0.998, 2.5, 1.5, 2.0]) + exp4 = Series({0.998: 2, + 1.5: 1, + 2.0: 0, + 2.5: 1}, index=[0.998, 2.5, 1.5, 2.0]) tm.assert_series_equal(res4, exp4) res4n = s1.value_counts(bins=4, normalize=True) - exp4n = Series({0.998: 0.5, 1.5: 0.25, 2.0: 0.0, 2.5: 0.25}, index=[0.998, 2.5, 1.5, 2.0]) + exp4n = Series( + {0.998: 0.5, + 1.5: 0.25, + 2.0: 0.0, + 2.5: 0.25}, index=[0.998, 2.5, 1.5, 2.0]) tm.assert_series_equal(res4n, exp4n) # handle NA's properly - s_values = ['a', 'b', 'b', 'b', np.nan, np.nan, 'd', 'd', 'a', 'a', 'b'] + s_values = ['a', 'b', 'b', 'b', np.nan, np.nan, 'd', 'd', 'a', 'a', + 'b'] s = klass(s_values) expected = Series([4, 3, 2], index=['b', 'a', 'd']) tm.assert_series_equal(s.value_counts(), expected) - self.assert_numpy_array_equal(s.unique(), np.array(['a', 'b', np.nan, 'd'], dtype='O')) + self.assert_numpy_array_equal(s.unique(), np.array( + ['a', 'b', np.nan, 'd'], dtype='O')) self.assertEqual(s.nunique(), 3) s = klass({}) expected = Series([], dtype=np.int64) - tm.assert_series_equal(s.value_counts(), expected, check_index_type=False) + tm.assert_series_equal(s.value_counts(), expected, + check_index_type=False) self.assert_numpy_array_equal(s.unique(), np.array([])) 
self.assertEqual(s.nunique(), 0) # GH 3002, datetime64[ns] # don't test names though - txt = "\n".join(['xxyyzz20100101PIE', 'xxyyzz20100101GUM', 'xxyyzz20100101EGG', - 'xxyyww20090101EGG', 'foofoo20080909PIE', 'foofoo20080909GUM']) + txt = "\n".join(['xxyyzz20100101PIE', 'xxyyzz20100101GUM', + 'xxyyzz20100101EGG', 'xxyyww20090101EGG', + 'foofoo20080909PIE', 'foofoo20080909GUM']) f = StringIO(txt) - df = pd.read_fwf(f, widths=[6, 8, 3], names=["person_id", "dt", "food"], + df = pd.read_fwf(f, widths=[6, 8, 3], + names=["person_id", "dt", "food"], parse_dates=["dt"]) s = klass(df['dt'].copy()) s.name = None - idx = pd.to_datetime(['2010-01-01 00:00:00Z', '2008-09-09 00:00:00Z', - '2009-01-01 00:00:00X']) + idx = pd.to_datetime( + ['2010-01-01 00:00:00Z', '2008-09-09 00:00:00Z', + '2009-01-01 00:00:00X']) expected_s = Series([3, 2, 1], index=idx) tm.assert_series_equal(s.value_counts(), expected_s) - expected = np.array(['2010-01-01 00:00:00Z', '2009-01-01 00:00:00Z', - '2008-09-09 00:00:00Z'], dtype='datetime64[ns]') + expected = np.array(['2010-01-01 00:00:00Z', + '2009-01-01 00:00:00Z', + '2008-09-09 00:00:00Z'], + dtype='datetime64[ns]') if isinstance(s, DatetimeIndex): expected = DatetimeIndex(expected) self.assertTrue(s.unique().equals(expected)) @@ -637,7 +692,8 @@ def test_value_counts_inferred(self): # numpy_array_equal cannot compare pd.NaT self.assert_numpy_array_equal(unique[:3], expected) - self.assertTrue(unique[3] is pd.NaT or unique[3].astype('int64') == pd.tslib.iNaT) + self.assertTrue(unique[3] is pd.NaT or unique[3].astype('int64') == + pd.tslib.iNaT) self.assertEqual(s.nunique(), 3) self.assertEqual(s.nunique(dropna=False), 4) @@ -664,10 +720,10 @@ def test_value_counts_inferred(self): def test_factorize(self): for o in self.objs: - if isinstance(o,Index) and o.is_boolean(): - exp_arr = np.array([0,1] + [0] * 8) + if isinstance(o, Index) and o.is_boolean(): + exp_arr = np.array([0, 1] + [0] * 8) exp_uniques = o - exp_uniques = Index([False,True]) 
+ exp_uniques = Index([False, True]) else: exp_arr = np.array(range(len(o))) exp_uniques = o @@ -683,7 +739,7 @@ def test_factorize(self): for o in self.objs: # don't test boolean - if isinstance(o,Index) and o.is_boolean(): + if isinstance(o, Index) and o.is_boolean(): continue # sort by value, and create duplicates @@ -710,7 +766,8 @@ def test_factorize(self): self.assert_numpy_array_equal(labels, exp_arr) if isinstance(o, Series): - expected = Index(np.concatenate([o.values[5:10], o.values[:5]])) + expected = Index(np.concatenate([o.values[5:10], o.values[:5] + ])) self.assert_numpy_array_equal(uniques, expected) else: expected = o[5:].append(o[:5]) @@ -725,7 +782,7 @@ def test_duplicated_drop_duplicates(self): # special case if original.is_boolean(): result = original.drop_duplicates() - expected = Index([False,True], name='a') + expected = Index([False, True], name='a') tm.assert_index_equal(result, expected) continue @@ -743,7 +800,8 @@ def test_duplicated_drop_duplicates(self): # create repeated values, 3rd and 5th values are duplicated idx = original[list(range(len(original))) + [5, 3]] - expected = np.array([False] * len(original) + [True, True], dtype=bool) + expected = np.array([False] * len(original) + [True, True], + dtype=bool) duplicated = idx.duplicated() tm.assert_numpy_array_equal(duplicated, expected) self.assertTrue(duplicated.dtype == bool) @@ -780,8 +838,9 @@ def test_duplicated_drop_duplicates(self): result = idx.drop_duplicates(keep=False) tm.assert_index_equal(result, idx[~expected]) - with tm.assertRaisesRegexp(TypeError, - "drop_duplicates\(\) got an unexpected keyword argument"): + with tm.assertRaisesRegexp( + TypeError, "drop_duplicates\(\) got an unexpected " + "keyword argument"): idx.drop_duplicates(inplace=True) else: @@ -812,7 +871,8 @@ def test_duplicated_drop_duplicates(self): # deprecate take_last with tm.assert_produces_warning(FutureWarning): - tm.assert_series_equal(s.duplicated(take_last=True), expected) + 
tm.assert_series_equal( + s.duplicated(take_last=True), expected) with tm.assert_produces_warning(FutureWarning): tm.assert_series_equal(s.drop_duplicates(take_last=True), s[~np.array(base)]) @@ -863,13 +923,15 @@ def get_fill_value(obj): fill_value = get_fill_value(o) # special assign to the numpy array - if o.values.dtype == 'datetime64[ns]' or isinstance(o, PeriodIndex): + if o.values.dtype == 'datetime64[ns]' or isinstance( + o, PeriodIndex): values[0:2] = pd.tslib.iNaT else: values[0:2] = null_obj if isinstance(o, PeriodIndex): - # freq must be specified because repeat makes freq ambiguous + # freq must be specified because repeat makes freq + # ambiguous expected = [fill_value.ordinal] * 2 + list(values[2:]) expected = klass(ordinal=expected, freq=o.freq) o = klass(ordinal=values, freq=o.freq) @@ -891,9 +953,8 @@ def test_memory_usage(self): res = o.memory_usage() res_deep = o.memory_usage(deep=True) - if (com.is_object_dtype(o) or - (isinstance(o, Series) and - com.is_object_dtype(o.index))): + if (com.is_object_dtype(o) or (isinstance(o, Series) and + com.is_object_dtype(o.index))): # if there are objects, only deep will pick them up self.assertTrue(res_deep > res) else: @@ -913,6 +974,7 @@ def test_memory_usage(self): class TestFloat64HashTable(tm.TestCase): + def test_lookup_nan(self): from pandas.hashtable import Float64HashTable xs = np.array([2.718, 3.14, np.nan, -7, 5, 2, 3]) @@ -932,10 +994,12 @@ class T(NoNewAttributesMixin): t.a = "test" self.assertEqual(t.a, "test") t._freeze() - #self.assertTrue("__frozen" not in dir(t)) + # self.assertTrue("__frozen" not in dir(t)) self.assertIs(getattr(t, "__frozen"), True) + def f(): t.b = "test" + self.assertRaises(AttributeError, f) self.assertFalse(hasattr(t, "b")) diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py index 7e09b2e13a3c1..8a9827b9d5533 100755 --- a/pandas/tests/test_categorical.py +++ b/pandas/tests/test_categorical.py @@ -12,18 +12,22 @@ import pandas.compat as 
compat import pandas.core.common as com import pandas.util.testing as tm -from pandas import (Categorical, Index, Series, DataFrame, - PeriodIndex, Timestamp, CategoricalIndex) +from pandas import (Categorical, Index, Series, DataFrame, PeriodIndex, + Timestamp, CategoricalIndex) from pandas.compat import range, lrange, u, PY3 from pandas.core.config import option_context +# GH 12066 +# flake8: noqa + class TestCategorical(tm.TestCase): _multiprocess_can_split_ = True def setUp(self): self.factor = Categorical.from_array(['a', 'b', 'b', 'a', - 'a', 'c', 'c', 'c'], ordered=True) + 'a', 'c', 'c', 'c'], + ordered=True) def test_getitem(self): self.assertEqual(self.factor[0], 'a') @@ -56,7 +60,7 @@ def test_setitem(self): # boolean c = self.factor.copy() - indexer = np.zeros(len(c),dtype='bool') + indexer = np.zeros(len(c), dtype='bool') indexer[0] = True indexer[-1] = True c[indexer] = 'c' @@ -70,7 +74,8 @@ def test_setitem_listlike(self): # GH 9469 # properly coerce the input indexers np.random.seed(1) - c = Categorical(np.random.randint(0, 5, size=150000).astype(np.int8)).add_categories([-1000]) + c = Categorical(np.random.randint(0, 5, size=150000).astype( + np.int8)).add_categories([-1000]) indexer = np.array([100000]).astype(np.int64) c[indexer] = -1000 @@ -87,11 +92,15 @@ def test_constructor_unsortable(self): self.assertFalse(factor.ordered) if compat.PY3: - self.assertRaises(TypeError, lambda : Categorical.from_array(arr, ordered=True)) + self.assertRaises( + TypeError, lambda: Categorical.from_array(arr, ordered=True)) else: - # this however will raise as cannot be sorted (on PY3 or older numpies) + # this however will raise as cannot be sorted (on PY3 or older + # numpies) if LooseVersion(np.__version__) < "1.10": - self.assertRaises(TypeError, lambda : Categorical.from_array(arr, ordered=True)) + self.assertRaises( + TypeError, + lambda: Categorical.from_array(arr, ordered=True)) else: Categorical.from_array(arr, ordered=True) @@ -99,9 +108,9 @@ def 
test_is_equal_dtype(self): # test dtype comparisons between cats - c1 = Categorical(list('aabca'),categories=list('abc'),ordered=False) - c2 = Categorical(list('aabca'),categories=list('cab'),ordered=False) - c3 = Categorical(list('aabca'),categories=list('cab'),ordered=True) + c1 = Categorical(list('aabca'), categories=list('abc'), ordered=False) + c2 = Categorical(list('aabca'), categories=list('cab'), ordered=False) + c3 = Categorical(list('aabca'), categories=list('cab'), ordered=True) self.assertTrue(c1.is_dtype_equal(c1)) self.assertTrue(c2.is_dtype_equal(c2)) self.assertTrue(c3.is_dtype_equal(c3)) @@ -110,29 +119,35 @@ def test_is_equal_dtype(self): self.assertFalse(c1.is_dtype_equal(Index(list('aabca')))) self.assertFalse(c1.is_dtype_equal(c1.astype(object))) self.assertTrue(c1.is_dtype_equal(CategoricalIndex(c1))) - self.assertFalse(c1.is_dtype_equal(CategoricalIndex(c1,categories=list('cab')))) - self.assertFalse(c1.is_dtype_equal(CategoricalIndex(c1,ordered=True))) + self.assertFalse(c1.is_dtype_equal( + CategoricalIndex(c1, categories=list('cab')))) + self.assertFalse(c1.is_dtype_equal(CategoricalIndex(c1, ordered=True))) def test_constructor(self): exp_arr = np.array(["a", "b", "c", "a", "b", "c"]) c1 = Categorical(exp_arr) self.assert_numpy_array_equal(c1.__array__(), exp_arr) - c2 = Categorical(exp_arr, categories=["a","b","c"]) + c2 = Categorical(exp_arr, categories=["a", "b", "c"]) self.assert_numpy_array_equal(c2.__array__(), exp_arr) - c2 = Categorical(exp_arr, categories=["c","b","a"]) + c2 = Categorical(exp_arr, categories=["c", "b", "a"]) self.assert_numpy_array_equal(c2.__array__(), exp_arr) # categories must be unique def f(): - Categorical([1,2], [1,2,2]) + Categorical([1, 2], [1, 2, 2]) + self.assertRaises(ValueError, f) + def f(): - Categorical(["a","b"], ["a","b","b"]) + Categorical(["a", "b"], ["a", "b", "b"]) + self.assertRaises(ValueError, f) + def f(): with tm.assert_produces_warning(FutureWarning): - Categorical([1,2], [1,2,np.nan, 
np.nan]) + Categorical([1, 2], [1, 2, np.nan, np.nan]) + self.assertRaises(ValueError, f) # The default should be unordered @@ -144,25 +159,25 @@ def f(): c2 = Categorical(c1) self.assertTrue(c1.equals(c2)) - c1 = Categorical(["a", "b", "c", "a"], categories=["a","b","c","d"]) + c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"]) c2 = Categorical(c1) self.assertTrue(c1.equals(c2)) - c1 = Categorical(["a", "b", "c", "a"], categories=["a","c","b"]) + c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"]) c2 = Categorical(c1) self.assertTrue(c1.equals(c2)) - c1 = Categorical(["a", "b", "c", "a"], categories=["a","c","b"]) - c2 = Categorical(c1, categories=["a","b","c"]) + c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"]) + c2 = Categorical(c1, categories=["a", "b", "c"]) self.assert_numpy_array_equal(c1.__array__(), c2.__array__()) - self.assert_numpy_array_equal(c2.categories, np.array(["a","b","c"])) + self.assert_numpy_array_equal(c2.categories, np.array(["a", "b", "c"])) # Series of dtype category - c1 = Categorical(["a", "b", "c", "a"], categories=["a","b","c","d"]) + c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"]) c2 = Categorical(Series(c1)) self.assertTrue(c1.equals(c2)) - c1 = Categorical(["a", "b", "c", "a"], categories=["a","c","b"]) + c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"]) c2 = Categorical(Series(c1)) self.assertTrue(c1.equals(c2)) @@ -171,43 +186,50 @@ def f(): c2 = Categorical(Series(["a", "b", "c", "a"])) self.assertTrue(c1.equals(c2)) - c1 = Categorical(["a", "b", "c", "a"], categories=["a","b","c","d"]) - c2 = Categorical(Series(["a", "b", "c", "a"]), categories=["a","b","c","d"]) + c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"]) + c2 = Categorical( + Series(["a", "b", "c", "a"]), categories=["a", "b", "c", "d"]) self.assertTrue(c1.equals(c2)) # This should result in integer categories, not float! 
- cat = pd.Categorical([1,2,3,np.nan], categories=[1,2,3]) + cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3]) self.assertTrue(com.is_integer_dtype(cat.categories)) # https://github.com/pydata/pandas/issues/3678 - cat = pd.Categorical([np.nan,1, 2, 3]) + cat = pd.Categorical([np.nan, 1, 2, 3]) self.assertTrue(com.is_integer_dtype(cat.categories)) # this should result in floats - cat = pd.Categorical([np.nan, 1, 2., 3 ]) + cat = pd.Categorical([np.nan, 1, 2., 3]) self.assertTrue(com.is_float_dtype(cat.categories)) - cat = pd.Categorical([np.nan, 1., 2., 3. ]) + cat = pd.Categorical([np.nan, 1., 2., 3.]) self.assertTrue(com.is_float_dtype(cat.categories)) # Deprecating NaNs in categoires (GH #10748) - # preserve int as far as possible by converting to object if NaN is in categories + # preserve int as far as possible by converting to object if NaN is in + # categories with tm.assert_produces_warning(FutureWarning): - cat = pd.Categorical([np.nan, 1, 2, 3], categories=[np.nan, 1, 2, 3]) + cat = pd.Categorical([np.nan, 1, 2, 3], + categories=[np.nan, 1, 2, 3]) self.assertTrue(com.is_object_dtype(cat.categories)) - # This doesn't work -> this would probably need some kind of "remember the original type" - # feature to try to cast the array interface result to... - #vals = np.asarray(cat[cat.notnull()]) - #self.assertTrue(com.is_integer_dtype(vals)) + + # This doesn't work -> this would probably need some kind of "remember + # the original type" feature to try to cast the array interface result + # to... 
+ + # vals = np.asarray(cat[cat.notnull()]) + # self.assertTrue(com.is_integer_dtype(vals)) with tm.assert_produces_warning(FutureWarning): - cat = pd.Categorical([np.nan,"a", "b", "c"], categories=[np.nan,"a", "b", "c"]) + cat = pd.Categorical([np.nan, "a", "b", "c"], + categories=[np.nan, "a", "b", "c"]) self.assertTrue(com.is_object_dtype(cat.categories)) # but don't do it for floats with tm.assert_produces_warning(FutureWarning): - cat = pd.Categorical([np.nan, 1., 2., 3.], categories=[np.nan, 1., 2., 3.]) + cat = pd.Categorical([np.nan, 1., 2., 3.], + categories=[np.nan, 1., 2., 3.]) self.assertTrue(com.is_float_dtype(cat.categories)) - # corner cases cat = pd.Categorical([1]) self.assertTrue(len(cat.categories) == 1) @@ -239,35 +261,39 @@ def f(): # - when the first is an integer dtype and the second is not # - when the resulting codes are all -1/NaN with tm.assert_produces_warning(RuntimeWarning): - c_old = Categorical([0,1,2,0,1,2], categories=["a","b","c"]) + c_old = Categorical([0, 1, 2, 0, 1, 2], + categories=["a", "b", "c"]) # noqa with tm.assert_produces_warning(RuntimeWarning): - c_old = Categorical([0,1,2,0,1,2], categories=[3,4,5]) + c_old = Categorical([0, 1, 2, 0, 1, 2], # noqa + categories=[3, 4, 5]) - # the next one are from the old docs, but unfortunately these don't trigger :-( + # the next one are from the old docs, but unfortunately these don't + # trigger :-( with tm.assert_produces_warning(None): - c_old2 = Categorical([0, 1, 2, 0, 1, 2], [1, 2, 3]) - cat = Categorical([1,2], categories=[1,2,3]) + c_old2 = Categorical([0, 1, 2, 0, 1, 2], [1, 2, 3]) # noqa + cat = Categorical([1, 2], categories=[1, 2, 3]) # this is a legitimate constructor with tm.assert_produces_warning(None): - c = Categorical(np.array([],dtype='int64'),categories=[3,2,1],ordered=True) + c = Categorical(np.array([], dtype='int64'), # noqa + categories=[3, 2, 1], ordered=True) def test_constructor_with_index(self): - - ci = 
CategoricalIndex(list('aabbca'),categories=list('cab')) + ci = CategoricalIndex(list('aabbca'), categories=list('cab')) self.assertTrue(ci.values.equals(Categorical(ci))) - ci = CategoricalIndex(list('aabbca'),categories=list('cab')) - self.assertTrue(ci.values.equals(Categorical(ci.astype(object),categories=ci.categories))) + ci = CategoricalIndex(list('aabbca'), categories=list('cab')) + self.assertTrue(ci.values.equals(Categorical( + ci.astype(object), categories=ci.categories))) def test_constructor_with_generator(self): - # This was raising an Error in isnull(single_val).any() because isnull returned a scalar - # for a generator - from pandas.compat import range as xrange + # This was raising an Error in isnull(single_val).any() because isnull + # returned a scalar for a generator + xrange = range - exp = Categorical([0,1,2]) - cat = Categorical((x for x in [0,1,2])) + exp = Categorical([0, 1, 2]) + cat = Categorical((x for x in [0, 1, 2])) self.assertTrue(cat.equals(exp)) cat = Categorical(xrange(3)) self.assertTrue(cat.equals(exp)) @@ -277,42 +303,44 @@ def test_constructor_with_generator(self): MultiIndex.from_product([range(5), ['a', 'b', 'c']]) # check that categories accept generators and sequences - cat = pd.Categorical([0,1,2], categories=(x for x in [0,1,2])) + cat = pd.Categorical([0, 1, 2], categories=(x for x in [0, 1, 2])) self.assertTrue(cat.equals(exp)) - cat = pd.Categorical([0,1,2], categories=xrange(3)) + cat = pd.Categorical([0, 1, 2], categories=xrange(3)) self.assertTrue(cat.equals(exp)) - def test_from_codes(self): # too few categories def f(): - Categorical.from_codes([1,2], [1,2]) + Categorical.from_codes([1, 2], [1, 2]) + self.assertRaises(ValueError, f) # no int codes def f(): - Categorical.from_codes(["a"], [1,2]) + Categorical.from_codes(["a"], [1, 2]) + self.assertRaises(ValueError, f) # no unique categories def f(): - Categorical.from_codes([0,1,2], ["a","a","b"]) + Categorical.from_codes([0, 1, 2], ["a", "a", "b"]) + 
self.assertRaises(ValueError, f) # too negative def f(): - Categorical.from_codes([-2,1,2], ["a","b","c"]) - self.assertRaises(ValueError, f) + Categorical.from_codes([-2, 1, 2], ["a", "b", "c"]) + self.assertRaises(ValueError, f) - exp = Categorical(["a","b","c"], ordered=False) - res = Categorical.from_codes([0,1,2], ["a","b","c"]) + exp = Categorical(["a", "b", "c"], ordered=False) + res = Categorical.from_codes([0, 1, 2], ["a", "b", "c"]) self.assertTrue(exp.equals(res)) # Not available in earlier numpy versions if hasattr(np.random, "choice"): - codes = np.random.choice([0,1], 5, p=[0.9,0.1]) + codes = np.random.choice([0, 1], 5, p=[0.9, 0.1]) pd.Categorical.from_codes(codes, categories=["train", "test"]) def test_comparisons(self): @@ -353,10 +381,13 @@ def test_comparisons(self): self.assert_numpy_array_equal(result, expected) # comparisons with categoricals - cat_rev = pd.Categorical(["a","b","c"], categories=["c","b","a"], ordered=True) - cat_rev_base = pd.Categorical(["b","b","b"], categories=["c","b","a"], ordered=True) - cat = pd.Categorical(["a","b","c"], ordered=True) - cat_base = pd.Categorical(["b","b","b"], categories=cat.categories, ordered=True) + cat_rev = pd.Categorical(["a", "b", "c"], categories=["c", "b", "a"], + ordered=True) + cat_rev_base = pd.Categorical( + ["b", "b", "b"], categories=["c", "b", "a"], ordered=True) + cat = pd.Categorical(["a", "b", "c"], ordered=True) + cat_base = pd.Categorical(["b", "b", "b"], categories=cat.categories, + ordered=True) # comparisons need to take categories ordering into account res_rev = cat_rev > cat_rev_base @@ -374,30 +405,36 @@ def test_comparisons(self): # Only categories with same categories can be compared def f(): cat > cat_rev + self.assertRaises(TypeError, f) - cat_rev_base2 = pd.Categorical(["b","b","b"], categories=["c","b","a","d"]) + cat_rev_base2 = pd.Categorical( + ["b", "b", "b"], categories=["c", "b", "a", "d"]) + def f(): cat_rev > cat_rev_base2 + self.assertRaises(TypeError, f) # 
         # Only categories with same ordering information can be compared
         cat_unorderd = cat.set_ordered(False)
         self.assertFalse((cat > cat).any())
+
         def f():
             cat > cat_unorderd
+
         self.assertRaises(TypeError, f)

         # comparison (in both directions) with Series will raise
-        s = Series(["b","b","b"])
+        s = Series(["b", "b", "b"])
         self.assertRaises(TypeError, lambda: cat > s)
         self.assertRaises(TypeError, lambda: cat_rev > s)
         self.assertRaises(TypeError, lambda: s < cat)
         self.assertRaises(TypeError, lambda: s < cat_rev)

-        # comparison with numpy.array will raise in both direction, but only on newer
-        # numpy versions
-        a = np.array(["b","b","b"])
+        # comparison with numpy.array will raise in both direction, but only
+        # on newer numpy versions
+        a = np.array(["b", "b", "b"])
         self.assertRaises(TypeError, lambda: cat > a)
         self.assertRaises(TypeError, lambda: cat_rev > a)
@@ -407,13 +444,14 @@ def f():
         self.assertRaises(TypeError, lambda: a < cat)
         self.assertRaises(TypeError, lambda: a < cat_rev)

-        # Make sure that unequal comparison take the categories order in account
-        cat_rev = pd.Categorical(list("abc"), categories=list("cba"), ordered=True)
+        # Make sure that unequal comparison take the categories order in
+        # account
+        cat_rev = pd.Categorical(
+            list("abc"), categories=list("cba"), ordered=True)
         exp = np.array([True, False, False])
         res = cat_rev > "b"
         self.assert_numpy_array_equal(res, exp)

-
     def test_na_flags_int_categories(self):
         # #1457
@@ -435,55 +473,63 @@ def test_describe(self):
         # string type
         desc = self.factor.describe()
         expected = DataFrame({'counts': [3, 2, 3],
-                              'freqs': [3/8., 2/8., 3/8.]},
-                             index=pd.CategoricalIndex(['a', 'b', 'c'], name='categories'))
+                              'freqs': [3 / 8., 2 / 8., 3 / 8.]},
+                             index=pd.CategoricalIndex(['a', 'b', 'c'],
+                                                       name='categories'))
         tm.assert_frame_equal(desc, expected)

         # check unused categories
         cat = self.factor.copy()
-        cat.set_categories(["a","b","c","d"], inplace=True)
+        cat.set_categories(["a", "b", "c", "d"], inplace=True)
         desc = cat.describe()
         expected = DataFrame({'counts': [3, 2, 3, 0],
-                              'freqs': [3/8., 2/8., 3/8., 0]},
-                             index=pd.CategoricalIndex(['a', 'b', 'c', 'd'], name='categories'))
+                              'freqs': [3 / 8., 2 / 8., 3 / 8., 0]},
+                             index=pd.CategoricalIndex(['a', 'b', 'c', 'd'],
+                                                       name='categories'))
         tm.assert_frame_equal(desc, expected)

         # check an integer one
-        desc = Categorical([1,2,3,1,2,3,3,2,1,1,1]).describe()
+        desc = Categorical([1, 2, 3, 1, 2, 3, 3, 2, 1, 1, 1]).describe()
         expected = DataFrame({'counts': [5, 3, 3],
-                              'freqs': [5/11., 3/11., 3/11.]},
-                             index=pd.CategoricalIndex([1, 2, 3], name='categories'))
+                              'freqs': [5 / 11., 3 / 11., 3 / 11.]},
+                             index=pd.CategoricalIndex([1, 2, 3],
+                                                       name='categories'))
         tm.assert_frame_equal(desc, expected)

         # https://github.com/pydata/pandas/issues/3678
         # describe should work with NaN
-        cat = pd.Categorical([np.nan,1, 2, 2])
+        cat = pd.Categorical([np.nan, 1, 2, 2])
         desc = cat.describe()
         expected = DataFrame({'counts': [1, 2, 1],
-                              'freqs': [1/4., 2/4., 1/4.]},
-                             index=pd.CategoricalIndex([1, 2, np.nan], categories=[1, 2],
+                              'freqs': [1 / 4., 2 / 4., 1 / 4.]},
+                             index=pd.CategoricalIndex([1, 2, np.nan],
+                                                       categories=[1, 2],
                                                        name='categories'))
         tm.assert_frame_equal(desc, expected)

         # NA as a category
         with tm.assert_produces_warning(FutureWarning):
-            cat = pd.Categorical(["a", "c", "c", np.nan], categories=["b", "a", "c", np.nan])
+            cat = pd.Categorical(["a", "c", "c", np.nan],
+                                 categories=["b", "a", "c", np.nan])
         result = cat.describe()

-        expected = DataFrame([[0,0],[1,0.25],[2,0.5],[1,0.25]],
-                             columns=['counts','freqs'],
-                             index=pd.CategoricalIndex(['b', 'a', 'c', np.nan], name='categories'))
-        tm.assert_frame_equal(result,expected)
+        expected = DataFrame([[0, 0], [1, 0.25], [2, 0.5], [1, 0.25]],
+                             columns=['counts', 'freqs'],
+                             index=pd.CategoricalIndex(['b', 'a', 'c', np.nan],
+                                                       name='categories'))
+        tm.assert_frame_equal(result, expected)

         # NA as an unused category
         with tm.assert_produces_warning(FutureWarning):
-            cat = pd.Categorical(["a", "c", "c"], categories=["b", "a", "c", np.nan])
+            cat = pd.Categorical(["a", "c", "c"],
+                                 categories=["b", "a", "c", np.nan])
         result = cat.describe()

-        exp_idx = pd.CategoricalIndex(['b', 'a', 'c', np.nan], name='categories')
-        expected = DataFrame([[0, 0], [1, 1/3.], [2, 2/3.], [0, 0]],
+        exp_idx = pd.CategoricalIndex(
+            ['b', 'a', 'c', np.nan], name='categories')
+        expected = DataFrame([[0, 0], [1, 1 / 3.], [2, 2 / 3.], [0, 0]],
                              columns=['counts', 'freqs'], index=exp_idx)
-        tm.assert_frame_equal(result,expected)
+        tm.assert_frame_equal(result, expected)

     def test_print(self):
         expected = ["[a, b, b, a, a, c, c, c]",
@@ -493,9 +539,9 @@ def test_print(self):
         self.assertEqual(actual, expected)

     def test_big_print(self):
-        factor = Categorical([0,1,2,0,1,2]*100, ['a', 'b', 'c'], name='cat', fastpath=True)
-        expected = ["[a, b, c, a, b, ..., b, c, a, b, c]",
-                    "Length: 600",
+        factor = Categorical([0, 1, 2, 0, 1, 2] * 100, ['a', 'b', 'c'],
+                             name='cat', fastpath=True)
+        expected = ["[a, b, c, a, b, ..., b, c, a, b, c]", "Length: 600",
                     "Categories (3, object): [a, b, c]"]
         expected = "\n".join(expected)
@@ -504,14 +550,14 @@ def test_big_print(self):
         self.assertEqual(actual, expected)

     def test_empty_print(self):
-        factor = Categorical([], ["a","b","c"])
+        factor = Categorical([], ["a", "b", "c"])
         expected = ("[], Categories (3, object): [a, b, c]")
         # hack because array_repr changed in numpy > 1.6.x
         actual = repr(factor)
         self.assertEqual(actual, expected)

         self.assertEqual(expected, actual)
-        factor = Categorical([], ["a","b","c"], ordered=True)
+        factor = Categorical([], ["a", "b", "c"], ordered=True)
         expected = ("[], Categories (3, object): [a < b < c]")
         actual = repr(factor)
         self.assertEqual(expected, actual)
@@ -522,9 +568,9 @@ def test_empty_print(self):

     def test_print_none_width(self):
         # GH10087
-        a = pd.Series(pd.Categorical([1,2,3,4]))
+        a = pd.Series(pd.Categorical([1, 2, 3, 4]))
         exp = u("0    1\n1    2\n2    3\n3    4\n" +
-                "dtype: category\nCategories (4, int64): [1, 2, 3, 4]")
+                "dtype: category\nCategories (4, int64): [1, 2, 3, 4]")

         with option_context("display.width", None):
             self.assertEqual(exp, repr(a))
@@ -533,27 +579,35 @@ def test_unicode_print(self):
         if PY3:
             _rep = repr
         else:
-            _rep = unicode
+            _rep = unicode  # noqa

         c = pd.Categorical(['aaaaa', 'bb', 'cccc'] * 20)
-        expected = u"""[aaaaa, bb, cccc, aaaaa, bb, ..., bb, cccc, aaaaa, bb, cccc]
+        expected = u"""\
+[aaaaa, bb, cccc, aaaaa, bb, ..., bb, cccc, aaaaa, bb, cccc]
 Length: 60
 Categories (3, object): [aaaaa, bb, cccc]"""
+
         self.assertEqual(_rep(c), expected)

-        c = pd.Categorical([u'ああああ', u'いいいいい', u'ううううううう'] * 20)
-        expected = u"""[ああああ, いいいいい, ううううううう, ああああ, いいいいい, ..., いいいいい, ううううううう, ああああ, いいいいい, ううううううう]
+        c = pd.Categorical([u'ああああ', u'いいいいい', u'ううううううう']
+                           * 20)
+        expected = u"""\
+[ああああ, いいいいい, ううううううう, ああああ, いいいいい, ..., いいいいい, ううううううう, ああああ, いいいいい, ううううううう]
 Length: 60
-Categories (3, object): [ああああ, いいいいい, ううううううう]"""
+Categories (3, object): [ああああ, いいいいい, ううううううう]"""  # noqa
+
         self.assertEqual(_rep(c), expected)

-        # unicode option should not affect to Categorical, as it doesn't care the repr width
+        # unicode option should not affect to Categorical, as it doesn't care
+        # the repr width
         with option_context('display.unicode.east_asian_width', True):
-            c = pd.Categorical([u'ああああ', u'いいいいい', u'ううううううう'] * 20)
+            c = pd.Categorical([u'ああああ', u'いいいいい', u'ううううううう']
+                               * 20)
             expected = u"""[ああああ, いいいいい, ううううううう, ああああ, いいいいい, ..., いいいいい, ううううううう, ああああ, いいいいい, ううううううう]
 Length: 60
-Categories (3, object): [ああああ, いいいいい, ううううううう]"""
+Categories (3, object): [ああああ, いいいいい, ううううううう]"""  # noqa
+
             self.assertEqual(_rep(c), expected)

     def test_periodindex(self):
@@ -562,7 +616,7 @@ def test_periodindex(self):
         cat1 = Categorical.from_array(idx1)
         str(cat1)
-        exp_arr = np.array([0, 0, 1, 1, 2, 2],dtype='int64')
+        exp_arr = np.array([0, 0, 1, 1, 2, 2], dtype='int64')
         exp_idx = PeriodIndex(['2014-01', '2014-02', '2014-03'], freq='M')
         self.assert_numpy_array_equal(cat1._codes, exp_arr)
         self.assertTrue(cat1.categories.equals(exp_idx))
@@ -571,7 +625,7 @@ def test_periodindex(self):
                             '2014-03', '2014-01'], freq='M')
         cat2 = Categorical.from_array(idx2, ordered=True)
         str(cat2)
-        exp_arr = np.array([2, 2, 1, 0, 2, 0],dtype='int64')
+        exp_arr = np.array([2, 2, 1, 0, 2, 0], dtype='int64')
         exp_idx2 = PeriodIndex(['2014-01', '2014-02', '2014-03'], freq='M')
         self.assert_numpy_array_equal(cat2._codes, exp_arr)
         self.assertTrue(cat2.categories.equals(exp_idx2))
@@ -579,57 +633,63 @@ def test_periodindex(self):
         idx3 = PeriodIndex(['2013-12', '2013-11', '2013-10', '2013-09',
                             '2013-08', '2013-07', '2013-05'], freq='M')
         cat3 = Categorical.from_array(idx3, ordered=True)
-        exp_arr = np.array([6, 5, 4, 3, 2, 1, 0],dtype='int64')
+        exp_arr = np.array([6, 5, 4, 3, 2, 1, 0], dtype='int64')
         exp_idx = PeriodIndex(['2013-05', '2013-07', '2013-08', '2013-09',
                                '2013-10', '2013-11', '2013-12'], freq='M')
         self.assert_numpy_array_equal(cat3._codes, exp_arr)
         self.assertTrue(cat3.categories.equals(exp_idx))

     def test_categories_assigments(self):
-        s = pd.Categorical(["a","b","c","a"])
-        exp = np.array([1,2,3,1])
-        s.categories = [1,2,3]
+        s = pd.Categorical(["a", "b", "c", "a"])
+        exp = np.array([1, 2, 3, 1])
+        s.categories = [1, 2, 3]
         self.assert_numpy_array_equal(s.__array__(), exp)
-        self.assert_numpy_array_equal(s.categories, np.array([1,2,3]))
+        self.assert_numpy_array_equal(s.categories, np.array([1, 2, 3]))
+
         # lengthen
         def f():
-            s.categories = [1,2,3,4]
+            s.categories = [1, 2, 3, 4]
+
         self.assertRaises(ValueError, f)
+
         # shorten
         def f():
-            s.categories = [1,2]
+            s.categories = [1, 2]
+
         self.assertRaises(ValueError, f)

     def test_construction_with_ordered(self):
         # GH 9347, 9190
-        cat = Categorical([0,1,2])
+        cat = Categorical([0, 1, 2])
         self.assertFalse(cat.ordered)
-        cat = Categorical([0,1,2],ordered=False)
+        cat = Categorical([0, 1, 2], ordered=False)
         self.assertFalse(cat.ordered)
-        cat = Categorical([0,1,2],ordered=True)
+        cat = Categorical([0, 1, 2], ordered=True)
         self.assertTrue(cat.ordered)

     def test_ordered_api(self):
         # GH 9347
-        cat1 = pd.Categorical(["a","c","b"], ordered=False)
-        self.assertTrue(cat1.categories.equals(Index(['a','b','c'])))
+        cat1 = pd.Categorical(["a", "c", "b"], ordered=False)
+        self.assertTrue(cat1.categories.equals(Index(['a', 'b', 'c'])))
         self.assertFalse(cat1.ordered)

-        cat2 = pd.Categorical(["a","c","b"], categories=['b','c','a'], ordered=False)
-        self.assertTrue(cat2.categories.equals(Index(['b','c','a'])))
+        cat2 = pd.Categorical(["a", "c", "b"], categories=['b', 'c', 'a'],
+                              ordered=False)
+        self.assertTrue(cat2.categories.equals(Index(['b', 'c', 'a'])))
         self.assertFalse(cat2.ordered)

-        cat3 = pd.Categorical(["a","c","b"], ordered=True)
-        self.assertTrue(cat3.categories.equals(Index(['a','b','c'])))
+        cat3 = pd.Categorical(["a", "c", "b"], ordered=True)
+        self.assertTrue(cat3.categories.equals(Index(['a', 'b', 'c'])))
         self.assertTrue(cat3.ordered)

-        cat4 = pd.Categorical(["a","c","b"], categories=['b','c','a'], ordered=True)
-        self.assertTrue(cat4.categories.equals(Index(['b','c','a'])))
+        cat4 = pd.Categorical(["a", "c", "b"], categories=['b', 'c', 'a'],
+                              ordered=True)
+        self.assertTrue(cat4.categories.equals(Index(['b', 'c', 'a'])))
         self.assertTrue(cat4.ordered)

     def test_set_ordered(self):
-        cat = Categorical(["a","b","c","a"], ordered=True)
+        cat = Categorical(["a", "b", "c", "a"], ordered=True)
         cat2 = cat.as_unordered()
         self.assertFalse(cat2.ordered)
         cat2 = cat.as_ordered()
@@ -655,123 +715,142 @@ def test_set_ordered(self):
         self.assertTrue(cat.ordered)

     def test_set_categories(self):
-        cat = Categorical(["a","b","c","a"], ordered=True)
-        exp_categories = np.array(["c","b","a"])
-        exp_values = np.array(["a","b","c","a"])
+        cat = Categorical(["a", "b", "c", "a"], ordered=True)
+        exp_categories = np.array(["c", "b", "a"])
+        exp_values = np.array(["a", "b", "c", "a"])

-        res = cat.set_categories(["c","b","a"], inplace=True)
+        res = cat.set_categories(["c", "b", "a"], inplace=True)
         self.assert_numpy_array_equal(cat.categories, exp_categories)
         self.assert_numpy_array_equal(cat.__array__(), exp_values)
         self.assertIsNone(res)

-        res = cat.set_categories(["a","b","c"])
+        res = cat.set_categories(["a", "b", "c"])
         # cat must be the same as before
         self.assert_numpy_array_equal(cat.categories, exp_categories)
         self.assert_numpy_array_equal(cat.__array__(), exp_values)
         # only res is changed
-        exp_categories_back = np.array(["a","b","c"])
+        exp_categories_back = np.array(["a", "b", "c"])
         self.assert_numpy_array_equal(res.categories, exp_categories_back)
         self.assert_numpy_array_equal(res.__array__(), exp_values)

-        # not all "old" included in "new" -> all not included ones are now np.nan
-        cat = Categorical(["a","b","c","a"], ordered=True)
+        # not all "old" included in "new" -> all not included ones are now
+        # np.nan
+        cat = Categorical(["a", "b", "c", "a"], ordered=True)
         res = cat.set_categories(["a"])
-        self.assert_numpy_array_equal(res.codes, np.array([0,-1,-1,0]))
+        self.assert_numpy_array_equal(res.codes, np.array([0, -1, -1, 0]))

         # still not all "old" in "new"
-        res = cat.set_categories(["a","b","d"])
-        self.assert_numpy_array_equal(res.codes, np.array([0,1,-1,0]))
-        self.assert_numpy_array_equal(res.categories, np.array(["a","b","d"]))
+        res = cat.set_categories(["a", "b", "d"])
+        self.assert_numpy_array_equal(res.codes, np.array([0, 1, -1, 0]))
+        self.assert_numpy_array_equal(res.categories,
+                                      np.array(["a", "b", "d"]))

         # all "old" included in "new"
-        cat = cat.set_categories(["a","b","c","d"])
-        exp_categories = np.array(["a","b","c","d"])
+        cat = cat.set_categories(["a", "b", "c", "d"])
+        exp_categories = np.array(["a", "b", "c", "d"])
         self.assert_numpy_array_equal(cat.categories, exp_categories)

         # internals...
-        c = Categorical([1,2,3,4,1], categories=[1,2,3,4], ordered=True)
-        self.assert_numpy_array_equal(c._codes, np.array([0,1,2,3,0]))
-        self.assert_numpy_array_equal(c.categories , np.array([1,2,3,4] ))
-        self.assert_numpy_array_equal(c.get_values(), np.array([1,2,3,4,1] ))
-        c = c.set_categories([4,3,2,1]) # all "pointers" to '4' must be changed from 3 to 0,...
-        self.assert_numpy_array_equal(c._codes, np.array([3,2,1,0,3])) # positions are changed
-        self.assert_numpy_array_equal(c.categories, np.array([4,3,2,1])) # categories are now in new order
-        self.assert_numpy_array_equal(c.get_values(), np.array([1,2,3,4,1])) # output is the same
+        c = Categorical([1, 2, 3, 4, 1], categories=[1, 2, 3, 4], ordered=True)
+        self.assert_numpy_array_equal(c._codes, np.array([0, 1, 2, 3, 0]))
+        self.assert_numpy_array_equal(c.categories, np.array([1, 2, 3, 4]))
+        self.assert_numpy_array_equal(c.get_values(),
+                                      np.array([1, 2, 3, 4, 1]))
+        c = c.set_categories(
+            [4, 3, 2, 1
+             ])  # all "pointers" to '4' must be changed from 3 to 0,...
+        self.assert_numpy_array_equal(c._codes, np.array([3, 2, 1, 0, 3])
+                                      )  # positions are changed
+        self.assert_numpy_array_equal(c.categories, np.array([4, 3, 2, 1])
+                                      )  # categories are now in new order
+        self.assert_numpy_array_equal(c.get_values(), np.array([1, 2, 3, 4, 1])
+                                      )  # output is the same
         self.assertTrue(c.min(), 4)
         self.assertTrue(c.max(), 1)

         # set_categories should set the ordering if specified
-        c2 = c.set_categories([4,3,2,1],ordered=False)
+        c2 = c.set_categories([4, 3, 2, 1], ordered=False)
         self.assertFalse(c2.ordered)
         self.assert_numpy_array_equal(c.get_values(), c2.get_values())

         # set_categories should pass thru the ordering
-        c2 = c.set_ordered(False).set_categories([4,3,2,1])
+        c2 = c.set_ordered(False).set_categories([4, 3, 2, 1])
         self.assertFalse(c2.ordered)
         self.assert_numpy_array_equal(c.get_values(), c2.get_values())

     def test_rename_categories(self):
-        cat = pd.Categorical(["a","b","c","a"])
+        cat = pd.Categorical(["a", "b", "c", "a"])

         # inplace=False: the old one must not be changed
-        res = cat.rename_categories([1,2,3])
-        self.assert_numpy_array_equal(res.__array__(), np.array([1,2,3,1]))
-        self.assert_numpy_array_equal(res.categories, np.array([1,2,3]))
-        self.assert_numpy_array_equal(cat.__array__(), np.array(["a","b","c","a"]))
-        self.assert_numpy_array_equal(cat.categories, np.array(["a","b","c"]))
-        res = cat.rename_categories([1,2,3], inplace=True)
+        res = cat.rename_categories([1, 2, 3])
+        self.assert_numpy_array_equal(res.__array__(), np.array([1, 2, 3, 1]))
+        self.assert_numpy_array_equal(res.categories, np.array([1, 2, 3]))
+        self.assert_numpy_array_equal(cat.__array__(),
+                                      np.array(["a", "b", "c", "a"]))
+        self.assert_numpy_array_equal(cat.categories,
+                                      np.array(["a", "b", "c"]))
+        res = cat.rename_categories([1, 2, 3], inplace=True)

         # and now inplace
         self.assertIsNone(res)
-        self.assert_numpy_array_equal(cat.__array__(), np.array([1,2,3,1]))
-        self.assert_numpy_array_equal(cat.categories, np.array([1,2,3]))
+        self.assert_numpy_array_equal(cat.__array__(), np.array([1, 2, 3, 1]))
+        self.assert_numpy_array_equal(cat.categories, np.array([1, 2, 3]))

         # lengthen
         def f():
-            cat.rename_categories([1,2,3,4])
+            cat.rename_categories([1, 2, 3, 4])
+
         self.assertRaises(ValueError, f)
+
         # shorten
         def f():
-            cat.rename_categories([1,2])
+            cat.rename_categories([1, 2])
+
         self.assertRaises(ValueError, f)

     def test_reorder_categories(self):
-        cat = Categorical(["a","b","c","a"], ordered=True)
+        cat = Categorical(["a", "b", "c", "a"], ordered=True)
         old = cat.copy()
-        new = Categorical(["a","b","c","a"], categories=["c","b","a"], ordered=True)
+        new = Categorical(["a", "b", "c", "a"], categories=["c", "b", "a"],
+                          ordered=True)

         # first inplace == False
-        res = cat.reorder_categories(["c","b","a"])
+        res = cat.reorder_categories(["c", "b", "a"])
         # cat must be the same as before
         self.assert_categorical_equal(cat, old)
         # only res is changed
         self.assert_categorical_equal(res, new)

         # inplace == True
-        res = cat.reorder_categories(["c","b","a"], inplace=True)
+        res = cat.reorder_categories(["c", "b", "a"], inplace=True)
         self.assertIsNone(res)
         self.assert_categorical_equal(cat, new)

         # not all "old" included in "new"
-        cat = Categorical(["a","b","c","a"], ordered=True)
+        cat = Categorical(["a", "b", "c", "a"], ordered=True)
+
         def f():
             cat.reorder_categories(["a"])
+
         self.assertRaises(ValueError, f)

         # still not all "old" in "new"
         def f():
-            cat.reorder_categories(["a","b","d"])
+            cat.reorder_categories(["a", "b", "d"])
+
         self.assertRaises(ValueError, f)

         # all "old" included in "new", but too long
         def f():
-            cat.reorder_categories(["a","b","c","d"])
+            cat.reorder_categories(["a", "b", "c", "d"])
+
         self.assertRaises(ValueError, f)

     def test_add_categories(self):
-        cat = Categorical(["a","b","c","a"], ordered=True)
+        cat = Categorical(["a", "b", "c", "a"], ordered=True)
         old = cat.copy()
-        new = Categorical(["a","b","c","a"], categories=["a","b","c","d"], ordered=True)
+        new = Categorical(["a", "b", "c", "a"],
+                          categories=["a", "b", "c", "d"], ordered=True)

         # first inplace == False
         res = cat.add_categories("d")
@@ -790,11 +869,13 @@ def test_add_categories(self):
         # new is in old categories
         def f():
             cat.add_categories(["d"])
+
         self.assertRaises(ValueError, f)

         # GH 9927
         cat = Categorical(list("abc"), ordered=True)
-        expected = Categorical(list("abc"), categories=list("abcde"), ordered=True)
+        expected = Categorical(
+            list("abc"), categories=list("abcde"), ordered=True)

         # test with Series, np.array, index, list
         res = cat.add_categories(Series(["d", "e"]))
         self.assert_categorical_equal(res, expected)
@@ -806,9 +887,10 @@ def f():
         self.assert_categorical_equal(res, expected)

     def test_remove_categories(self):
-        cat = Categorical(["a","b","c","a"], ordered=True)
+        cat = Categorical(["a", "b", "c", "a"], ordered=True)
         old = cat.copy()
-        new = Categorical(["a","b",np.nan,"a"], categories=["a","b"], ordered=True)
+        new = Categorical(["a", "b", np.nan, "a"], categories=["a", "b"],
+                          ordered=True)

         # first inplace == False
         res = cat.remove_categories("c")
@@ -827,12 +909,14 @@ def test_remove_categories(self):
         # removal is not in categories
         def f():
             cat.remove_categories(["c"])
+
         self.assertRaises(ValueError, f)

     def test_remove_unused_categories(self):
-        c = Categorical(["a","b","c","d","a"], categories=["a","b","c","d","e"])
-        exp_categories_all = np.array(["a","b","c","d","e"])
-        exp_categories_dropped = np.array(["a","b","c","d"])
+        c = Categorical(["a", "b", "c", "d", "a"],
+                        categories=["a", "b", "c", "d", "e"])
+        exp_categories_all = np.array(["a", "b", "c", "d", "e"])
+        exp_categories_dropped = np.array(["a", "b", "c", "d"])

         self.assert_numpy_array_equal(c.categories, exp_categories_all)
@@ -845,16 +929,18 @@ def test_remove_unused_categories(self):
         self.assertIsNone(res)

         # with NaN values (GH11599)
-        c = Categorical(["a","b","c",np.nan], categories=["a","b","c","d","e"])
+        c = Categorical(["a", "b", "c", np.nan],
+                        categories=["a", "b", "c", "d", "e"])
         res = c.remove_unused_categories()
-        self.assert_numpy_array_equal(res.categories, np.array(["a","b","c"]))
+        self.assert_numpy_array_equal(res.categories,
+                                      np.array(["a", "b", "c"]))
         self.assert_numpy_array_equal(c.categories, exp_categories_all)

         val = ['F', np.nan, 'D', 'B', 'D', 'F', np.nan]
         cat = pd.Categorical(values=val, categories=list('ABCDEFG'))
         out = cat.remove_unused_categories()
         self.assert_numpy_array_equal(out.categories, ['B', 'D', 'F'])
-        self.assert_numpy_array_equal(out.codes, [ 2, -1,  1,  0,  1,  2, -1])
+        self.assert_numpy_array_equal(out.codes, [2, -1, 1, 0, 1, 2, -1])
         self.assertEqual(out.get_values().tolist(), val)

         alpha = list('abcdefghijklmnopqrstuvwxyz')
@@ -868,51 +954,62 @@ def test_remove_unused_categories(self):

     def test_nan_handling(self):

         # Nans are represented as -1 in codes
-        c = Categorical(["a","b",np.nan,"a"])
-        self.assert_numpy_array_equal(c.categories , np.array(["a","b"]))
-        self.assert_numpy_array_equal(c._codes , np.array([0,1,-1,0]))
+        c = Categorical(["a", "b", np.nan, "a"])
+        self.assert_numpy_array_equal(c.categories, np.array(["a", "b"]))
+        self.assert_numpy_array_equal(c._codes, np.array([0, 1, -1, 0]))
         c[1] = np.nan
-        self.assert_numpy_array_equal(c.categories , np.array(["a","b"]))
-        self.assert_numpy_array_equal(c._codes , np.array([0,-1,-1,0]))
+        self.assert_numpy_array_equal(c.categories, np.array(["a", "b"]))
+        self.assert_numpy_array_equal(c._codes, np.array([0, -1, -1, 0]))

-        # If categories have nan included, the code should point to that instead
+        # If categories have nan included, the code should point to that
+        # instead
         with tm.assert_produces_warning(FutureWarning):
-            c = Categorical(["a","b",np.nan,"a"], categories=["a","b",np.nan])
-        self.assert_numpy_array_equal(c.categories, np.array(["a","b",np.nan],
-                                                             dtype=np.object_))
-        self.assert_numpy_array_equal(c._codes, np.array([0,1,2,0]))
+            c = Categorical(["a", "b", np.nan, "a"],
+                            categories=["a", "b", np.nan])
+        self.assert_numpy_array_equal(c.categories,
+                                      np.array(["a", "b", np.nan],
+                                               dtype=np.object_))
+        self.assert_numpy_array_equal(c._codes, np.array([0, 1, 2, 0]))
         c[1] = np.nan
-        self.assert_numpy_array_equal(c.categories, np.array(["a","b",np.nan],
-                                                             dtype=np.object_))
-        self.assert_numpy_array_equal(c._codes, np.array([0,2,2,0]))
+        self.assert_numpy_array_equal(c.categories,
+                                      np.array(["a", "b", np.nan],
+                                               dtype=np.object_))
+        self.assert_numpy_array_equal(c._codes, np.array([0, 2, 2, 0]))

         # Changing categories should also make the replaced category np.nan
-        c = Categorical(["a","b","c","a"])
+        c = Categorical(["a", "b", "c", "a"])
         with tm.assert_produces_warning(FutureWarning):
-            c.categories = ["a","b",np.nan]
-        self.assert_numpy_array_equal(c.categories, np.array(["a","b",np.nan],
-                                                             dtype=np.object_))
-        self.assert_numpy_array_equal(c._codes, np.array([0,1,2,0]))
-
-        # Adding nan to categories should make assigned nan point to the category!
-        c = Categorical(["a","b",np.nan,"a"])
-        self.assert_numpy_array_equal(c.categories , np.array(["a","b"]))
-        self.assert_numpy_array_equal(c._codes , np.array([0,1,-1,0]))
+            c.categories = ["a", "b", np.nan]  # noqa
+
+        self.assert_numpy_array_equal(c.categories,
+                                      np.array(["a", "b", np.nan],
+                                               dtype=np.object_))
+        self.assert_numpy_array_equal(c._codes, np.array([0, 1, 2, 0]))
+
+        # Adding nan to categories should make assigned nan point to the
+        # category!
+        c = Categorical(["a", "b", np.nan, "a"])
+        self.assert_numpy_array_equal(c.categories, np.array(["a", "b"]))
+        self.assert_numpy_array_equal(c._codes, np.array([0, 1, -1, 0]))
         with tm.assert_produces_warning(FutureWarning):
-            c.set_categories(["a","b",np.nan], rename=True, inplace=True)
-        self.assert_numpy_array_equal(c.categories, np.array(["a","b",np.nan],
-                                                             dtype=np.object_))
-        self.assert_numpy_array_equal(c._codes, np.array([0,1,-1,0]))
+            c.set_categories(["a", "b", np.nan], rename=True, inplace=True)
+
+        self.assert_numpy_array_equal(c.categories,
+                                      np.array(["a", "b", np.nan],
+                                               dtype=np.object_))
+        self.assert_numpy_array_equal(c._codes, np.array([0, 1, -1, 0]))
         c[1] = np.nan
-        self.assert_numpy_array_equal(c.categories , np.array(["a","b",np.nan],
-                                                              dtype=np.object_))
-        self.assert_numpy_array_equal(c._codes, np.array([0,2,-1,0]))
+        self.assert_numpy_array_equal(c.categories,
+                                      np.array(["a", "b", np.nan],
+                                               dtype=np.object_))
+        self.assert_numpy_array_equal(c._codes, np.array([0, 2, -1, 0]))

         # Remove null categories (GH 10156)
         cases = [
             ([1.0, 2.0, np.nan], [1.0, 2.0]),
             (['a', 'b', None], ['a', 'b']),
-            ([pd.Timestamp('2012-05-01'), pd.NaT], [pd.Timestamp('2012-05-01')])
+            ([pd.Timestamp('2012-05-01'), pd.NaT],
+             [pd.Timestamp('2012-05-01')])
         ]

         null_values = [np.nan, None, pd.NaT]
@@ -933,25 +1030,25 @@ def test_nan_handling(self):
         def f():
             with tm.assert_produces_warning(FutureWarning):
                 Categorical([], categories=nulls)
-            self.assertRaises(ValueError, f)
+
+        self.assertRaises(ValueError, f)

     def test_isnull(self):
         exp = np.array([False, False, True])
-        c = Categorical(["a","b",np.nan])
+        c = Categorical(["a", "b", np.nan])
         res = c.isnull()
         self.assert_numpy_array_equal(res, exp)

         with tm.assert_produces_warning(FutureWarning):
-            c = Categorical(["a","b",np.nan], categories=["a","b",np.nan])
+            c = Categorical(["a", "b", np.nan], categories=["a", "b", np.nan])
         res = c.isnull()
         self.assert_numpy_array_equal(res, exp)

         # test both nan in categories and as -1
         exp = np.array([True, False, True])
-        c = Categorical(["a","b",np.nan])
+        c = Categorical(["a", "b", np.nan])
         with tm.assert_produces_warning(FutureWarning):
-            c.set_categories(["a","b",np.nan], rename=True, inplace=True)
+            c.set_categories(["a", "b", np.nan], rename=True, inplace=True)
         c[0] = np.nan
         res = c.isnull()
         self.assert_numpy_array_equal(res, exp)
@@ -959,48 +1056,53 @@ def test_isnull(self):

     def test_codes_immutable(self):

         # Codes should be read only
-        c = Categorical(["a","b","c","a", np.nan])
-        exp = np.array([0,1,2,0,-1],dtype='int8')
+        c = Categorical(["a", "b", "c", "a", np.nan])
+        exp = np.array([0, 1, 2, 0, -1], dtype='int8')
         self.assert_numpy_array_equal(c.codes, exp)

         # Assignments to codes should raise
         def f():
-            c.codes = np.array([0,1,2,0,1],dtype='int8')
+            c.codes = np.array([0, 1, 2, 0, 1], dtype='int8')
+
         self.assertRaises(ValueError, f)

         # changes in the codes array should raise
         # np 1.6.1 raises RuntimeError rather than ValueError
-        codes= c.codes
+        codes = c.codes
+
         def f():
             codes[4] = 1
+
         self.assertRaises(ValueError, f)

-        # But even after getting the codes, the original array should still be writeable!
+        # But even after getting the codes, the original array should still be
+        # writeable!
         c[4] = "a"
-        exp = np.array([0,1,2,0,0],dtype='int8')
+        exp = np.array([0, 1, 2, 0, 0], dtype='int8')
         self.assert_numpy_array_equal(c.codes, exp)
         c._codes[4] = 2
-        exp = np.array([0,1,2,0, 2],dtype='int8')
+        exp = np.array([0, 1, 2, 0, 2], dtype='int8')
         self.assert_numpy_array_equal(c.codes, exp)

-
     def test_min_max(self):

         # unordered cats have no min/max
-        cat = Categorical(["a","b","c","d"], ordered=False)
-        self.assertRaises(TypeError, lambda : cat.min())
-        self.assertRaises(TypeError, lambda : cat.max())
-        cat = Categorical(["a","b","c","d"], ordered=True)
+        cat = Categorical(["a", "b", "c", "d"], ordered=False)
+        self.assertRaises(TypeError, lambda: cat.min())
+        self.assertRaises(TypeError, lambda: cat.max())
+        cat = Categorical(["a", "b", "c", "d"], ordered=True)
         _min = cat.min()
         _max = cat.max()
         self.assertEqual(_min, "a")
         self.assertEqual(_max, "d")
-        cat = Categorical(["a","b","c","d"], categories=['d','c','b','a'], ordered=True)
+        cat = Categorical(["a", "b", "c", "d"],
+                          categories=['d', 'c', 'b', 'a'], ordered=True)
         _min = cat.min()
         _max = cat.max()
         self.assertEqual(_min, "d")
         self.assertEqual(_max, "a")
-        cat = Categorical([np.nan,"b","c",np.nan], categories=['d','c','b','a'], ordered=True)
+        cat = Categorical([np.nan, "b", "c", np.nan],
+                          categories=['d', 'c', 'b', 'a'], ordered=True)
         _min = cat.min()
         _max = cat.max()
         self.assertTrue(np.isnan(_min))
@@ -1011,7 +1113,8 @@ def test_min_max(self):
         _max = cat.max(numeric_only=True)
         self.assertEqual(_max, "b")

-        cat = Categorical([np.nan,1,2,np.nan], categories=[5,4,3,2,1], ordered=True)
+        cat = Categorical([np.nan, 1, 2, np.nan], categories=[5, 4, 3, 2, 1],
+                          ordered=True)
         _min = cat.min()
         _max = cat.max()
         self.assertTrue(np.isnan(_min))
@@ -1034,18 +1137,22 @@ def test_unique(self):
         self.assert_numpy_array_equal(res, exp)
         tm.assert_categorical_equal(res, Categorical(exp))

-        cat = Categorical(["c", "a", "b", "a", "a"], categories=["a", "b", "c"])
+        cat = Categorical(["c", "a", "b", "a", "a"],
+                          categories=["a", "b", "c"])
         exp = np.asarray(["c", "a", "b"])
         res = cat.unique()
         self.assert_numpy_array_equal(res, exp)
-        tm.assert_categorical_equal(res, Categorical(exp, categories=['c', 'a', 'b']))
+        tm.assert_categorical_equal(res, Categorical(
+            exp, categories=['c', 'a', 'b']))

         # nan must be removed
-        cat = Categorical(["b", np.nan, "b", np.nan, "a"], categories=["a", "b", "c"])
+        cat = Categorical(["b", np.nan, "b", np.nan, "a"],
+                          categories=["a", "b", "c"])
         res = cat.unique()
         exp = np.asarray(["b", np.nan, "a"], dtype=object)
         self.assert_numpy_array_equal(res, exp)
-        tm.assert_categorical_equal(res, Categorical(["b", np.nan, "a"], categories=["b", "a"]))
+        tm.assert_categorical_equal(res, Categorical(
+            ["b", np.nan, "a"], categories=["b", "a"]))

     def test_unique_ordered(self):
         # keep categories order when ordered=True
@@ -1056,21 +1163,24 @@ def test_unique_ordered(self):
         self.assert_numpy_array_equal(res, exp)
         tm.assert_categorical_equal(res, exp_cat)

-        cat = Categorical(['c', 'b', 'a', 'a'], categories=['a', 'b', 'c'], ordered=True)
+        cat = Categorical(['c', 'b', 'a', 'a'], categories=['a', 'b', 'c'],
+                          ordered=True)
         res = cat.unique()
         exp = np.asarray(['c', 'b', 'a'])
         exp_cat = Categorical(exp, categories=['a', 'b', 'c'], ordered=True)
         self.assert_numpy_array_equal(res, exp)
         tm.assert_categorical_equal(res, exp_cat)

-        cat = Categorical(['b', 'a', 'a'], categories=['a', 'b', 'c'], ordered=True)
+        cat = Categorical(['b', 'a', 'a'], categories=['a', 'b', 'c'],
+                          ordered=True)
         res = cat.unique()
         exp = np.asarray(['b', 'a'])
         exp_cat = Categorical(exp, categories=['a', 'b'], ordered=True)
         self.assert_numpy_array_equal(res, exp)
         tm.assert_categorical_equal(res, exp_cat)

-        cat = Categorical(['b', 'b', np.nan, 'a'], categories=['a', 'b', 'c'], ordered=True)
+        cat = Categorical(['b', 'b', np.nan, 'a'], categories=['a', 'b', 'c'],
+                          ordered=True)
         res = cat.unique()
         exp = np.asarray(['b', np.nan, 'a'], dtype=object)
         exp_cat = Categorical(exp, categories=['a', 'b'], ordered=True)
@@ -1078,111 +1188,117 @@ def test_unique_ordered(self):
         tm.assert_categorical_equal(res, exp_cat)

     def test_mode(self):
-        s = Categorical([1,1,2,4,5,5,5], categories=[5,4,3,2,1], ordered=True)
+        s = Categorical([1, 1, 2, 4, 5, 5, 5], categories=[5, 4, 3, 2, 1],
+                        ordered=True)
         res = s.mode()
-        exp = Categorical([5], categories=[5,4,3,2,1], ordered=True)
+        exp = Categorical([5], categories=[5, 4, 3, 2, 1], ordered=True)
         self.assertTrue(res.equals(exp))
-        s = Categorical([1,1,1,4,5,5,5], categories=[5,4,3,2,1], ordered=True)
+        s = Categorical([1, 1, 1, 4, 5, 5, 5], categories=[5, 4, 3, 2, 1],
+                        ordered=True)
         res = s.mode()
-        exp = Categorical([5,1], categories=[5,4,3,2,1], ordered=True)
+        exp = Categorical([5, 1], categories=[5, 4, 3, 2, 1], ordered=True)
         self.assertTrue(res.equals(exp))
-        s = Categorical([1,2,3,4,5], categories=[5,4,3,2,1], ordered=True)
+        s = Categorical([1, 2, 3, 4, 5], categories=[5, 4, 3, 2, 1],
+                        ordered=True)
         res = s.mode()
-        exp = Categorical([], categories=[5,4,3,2,1], ordered=True)
+        exp = Categorical([], categories=[5, 4, 3, 2, 1], ordered=True)
         self.assertTrue(res.equals(exp))
         # NaN should not become the mode!
-        s = Categorical([np.nan,np.nan,np.nan,4,5], categories=[5,4,3,2,1], ordered=True)
+        s = Categorical([np.nan, np.nan, np.nan, 4, 5],
+                        categories=[5, 4, 3, 2, 1], ordered=True)
        res = s.mode()
-        exp = Categorical([], categories=[5,4,3,2,1], ordered=True)
+        exp = Categorical([], categories=[5, 4, 3, 2, 1], ordered=True)
         self.assertTrue(res.equals(exp))
-        s = Categorical([np.nan,np.nan,np.nan,4,5,4], categories=[5,4,3,2,1], ordered=True)
+        s = Categorical([np.nan, np.nan, np.nan, 4, 5, 4],
+                        categories=[5, 4, 3, 2, 1], ordered=True)
         res = s.mode()
-        exp = Categorical([4], categories=[5,4,3,2,1], ordered=True)
+        exp = Categorical([4], categories=[5, 4, 3, 2, 1], ordered=True)
         self.assertTrue(res.equals(exp))
-        s = Categorical([np.nan,np.nan,4,5,4], categories=[5,4,3,2,1], ordered=True)
+        s = Categorical([np.nan, np.nan, 4, 5, 4], categories=[5, 4, 3, 2, 1],
+                        ordered=True)
         res = s.mode()
-        exp = Categorical([4], categories=[5,4,3,2,1], ordered=True)
+        exp = Categorical([4], categories=[5, 4, 3, 2, 1], ordered=True)
         self.assertTrue(res.equals(exp))

-
     def test_sort(self):

         # unordered cats are sortable
-        cat = Categorical(["a","b","b","a"], ordered=False)
+        cat = Categorical(["a", "b", "b", "a"], ordered=False)
         cat.sort_values()
         cat.sort()

-        cat = Categorical(["a","c","b","d"], ordered=True)
+        cat = Categorical(["a", "c", "b", "d"], ordered=True)

         # sort_values
         res = cat.sort_values()
-        exp = np.array(["a","b","c","d"],dtype=object)
+        exp = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp)

-        cat = Categorical(["a","c","b","d"], categories=["a","b","c","d"], ordered=True)
+        cat = Categorical(["a", "c", "b", "d"],
+                          categories=["a", "b", "c", "d"], ordered=True)
         res = cat.sort_values()
-        exp = np.array(["a","b","c","d"],dtype=object)
+        exp = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp)

         res = cat.sort_values(ascending=False)
-        exp = np.array(["d","c","b","a"],dtype=object)
+        exp = np.array(["d", "c", "b", "a"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp)

         # sort (inplace order)
         cat1 = cat.copy()
         cat1.sort()
-        exp = np.array(["a","b","c","d"],dtype=object)
+        exp = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(cat1.__array__(), exp)

     def test_slicing_directly(self):
-        cat = Categorical(["a","b","c","d","a","b","c"])
+        cat = Categorical(["a", "b", "c", "d", "a", "b", "c"])
         sliced = cat[3]
         tm.assert_equal(sliced, "d")
         sliced = cat[3:5]
-        expected = Categorical(["d","a"], categories=['a', 'b', 'c', 'd'])
+        expected = Categorical(["d", "a"], categories=['a', 'b', 'c', 'd'])
         self.assert_numpy_array_equal(sliced._codes, expected._codes)
         tm.assert_index_equal(sliced.categories, expected.categories)

     def test_set_item_nan(self):
-        cat = pd.Categorical([1,2,3])
-        exp = pd.Categorical([1,np.nan,3], categories=[1,2,3])
+        cat = pd.Categorical([1, 2, 3])
+        exp = pd.Categorical([1, np.nan, 3], categories=[1, 2, 3])
         cat[1] = np.nan
         self.assertTrue(cat.equals(exp))

         # if nan in categories, the proper code should be set!
-        cat = pd.Categorical([1,2,3, np.nan], categories=[1,2,3])
+        cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
         with tm.assert_produces_warning(FutureWarning):
-            cat.set_categories([1,2,3, np.nan], rename=True, inplace=True)
+            cat.set_categories([1, 2, 3, np.nan], rename=True, inplace=True)
         cat[1] = np.nan
-        exp = np.array([0,3,2,-1])
+        exp = np.array([0, 3, 2, -1])
         self.assert_numpy_array_equal(cat.codes, exp)

-        cat = pd.Categorical([1,2,3, np.nan], categories=[1,2,3])
+        cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
         with tm.assert_produces_warning(FutureWarning):
-            cat.set_categories([1,2,3, np.nan], rename=True, inplace=True)
+            cat.set_categories([1, 2, 3, np.nan], rename=True, inplace=True)
         cat[1:3] = np.nan
-        exp = np.array([0,3,3,-1])
+        exp = np.array([0, 3, 3, -1])
         self.assert_numpy_array_equal(cat.codes, exp)

-        cat = pd.Categorical([1,2,3, np.nan], categories=[1,2,3])
+        cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
         with tm.assert_produces_warning(FutureWarning):
-            cat.set_categories([1,2,3, np.nan], rename=True, inplace=True)
+            cat.set_categories([1, 2, 3, np.nan], rename=True, inplace=True)
         cat[1:3] = [np.nan, 1]
-        exp = np.array([0,3,0,-1])
+        exp = np.array([0, 3, 0, -1])
         self.assert_numpy_array_equal(cat.codes, exp)

-        cat = pd.Categorical([1,2,3, np.nan], categories=[1,2,3])
+        cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
         with tm.assert_produces_warning(FutureWarning):
-            cat.set_categories([1,2,3, np.nan], rename=True, inplace=True)
+            cat.set_categories([1, 2, 3, np.nan], rename=True, inplace=True)
         cat[1:3] = [np.nan, np.nan]
-        exp = np.array([0,3,3,-1])
+        exp = np.array([0, 3, 3, -1])
         self.assert_numpy_array_equal(cat.codes, exp)

-        cat = pd.Categorical([1,2, np.nan, 3], categories=[1,2,3])
+        cat = pd.Categorical([1, 2, np.nan, 3], categories=[1, 2, 3])
         with tm.assert_produces_warning(FutureWarning):
-            cat.set_categories([1,2,3, np.nan], rename=True, inplace=True)
+            cat.set_categories([1, 2, 3, np.nan], rename=True, inplace=True)
         cat[pd.isnull(cat)] = np.nan
-        exp = np.array([0,1,3,2])
+        exp = np.array([0, 1, 3, 2])
         self.assert_numpy_array_equal(cat.codes, exp)

     def test_shift(self):
@@ -1198,7 +1314,7 @@ def test_shift(self):
         # shift back
         sn2 = cat.shift(-2)
         xp2 = pd.Categorical(['c', 'd', 'a', np.nan, np.nan],
-                            categories=['a', 'b', 'c', 'd'])
+                             categories=['a', 'b', 'c', 'd'])
         self.assert_categorical_equal(sn2, xp2)
         self.assert_categorical_equal(cat[2:], sn2[:-2])
@@ -1206,16 +1322,16 @@ def test_shift(self):
         self.assert_categorical_equal(cat, cat.shift(0))

     def test_nbytes(self):
-        cat = pd.Categorical([1,2,3])
+        cat = pd.Categorical([1, 2, 3])
         exp = cat._codes.nbytes + cat._categories.values.nbytes
         self.assertEqual(cat.nbytes, exp)

     def test_memory_usage(self):
-        cat = pd.Categorical([1,2,3])
+        cat = pd.Categorical([1, 2, 3])
         self.assertEqual(cat.nbytes, cat.memory_usage())
         self.assertEqual(cat.nbytes, cat.memory_usage(deep=True))

-        cat = pd.Categorical(['foo','foo','bar'])
+        cat = pd.Categorical(['foo', 'foo', 'bar'])
         self.assertEqual(cat.nbytes, cat.memory_usage())
         self.assertTrue(cat.memory_usage(deep=True) > cat.nbytes)
@@ -1226,10 +1342,8 @@ def test_memory_usage(self):

     def test_searchsorted(self):
         # https://github.com/pydata/pandas/issues/8420
-        s1 = pd.Series(['apple', 'bread', 'bread', 'cheese',
-                        'milk'])
-        s2 = pd.Series(['apple', 'bread', 'bread', 'cheese',
-                        'milk', 'donuts'])
+        s1 = pd.Series(['apple', 'bread', 'bread', 'cheese', 'milk'])
+        s2 = pd.Series(['apple', 'bread', 'bread', 'cheese', 'milk', 'donuts'])
         c1 = pd.Categorical(s1, ordered=True)
         c2 = pd.Categorical(s2, ordered=True)
@@ -1241,7 +1355,8 @@ def test_searchsorted(self):
         self.assert_numpy_array_equal(res, chk)

         # Scalar version of single item array
-        # Categorical return np.array like pd.Series, but different from np.array.searchsorted()
+        # Categorical return np.array like pd.Series, but different from
+        # np.array.searchsorted()
         res = c1.searchsorted('bread')
         chk = s1.searchsorted('bread')
         exp = np.array([1])
@@ -1258,20 +1373,24 @@ def test_searchsorted(self):
         # Searching for a value that is not present, to the right
         res = c1.searchsorted(['bread', 'eggs'], side='right')
         chk = s1.searchsorted(['bread', 'eggs'], side='right')
-        exp = np.array([3, 4])     # eggs before milk
+        exp = np.array([3, 4])  # eggs before milk
         self.assert_numpy_array_equal(res, exp)
         self.assert_numpy_array_equal(res, chk)

         # As above, but with a sorter array to reorder an unsorted array
-        res = c2.searchsorted(['bread', 'eggs'], side='right', sorter=[0, 1, 2, 3, 5, 4])
-        chk = s2.searchsorted(['bread', 'eggs'], side='right', sorter=[0, 1, 2, 3, 5, 4])
-        exp = np.array([3, 5])  # eggs after donuts, after switching milk and donuts
+        res = c2.searchsorted(['bread', 'eggs'], side='right',
+                              sorter=[0, 1, 2, 3, 5, 4])
+        chk = s2.searchsorted(['bread', 'eggs'], side='right',
+                              sorter=[0, 1, 2, 3, 5, 4])
+        exp = np.array([3, 5]
+                       )  # eggs after donuts, after switching milk and donuts
         self.assert_numpy_array_equal(res, exp)
         self.assert_numpy_array_equal(res, chk)

     def test_deprecated_labels(self):
-        # TODO: labels is deprecated and should be removed in 0.18 or 2017, whatever is earlier
-        cat = pd.Categorical([1,2,3, np.nan], categories=[1,2,3])
+        # TODO: labels is deprecated and should be removed in 0.18 or 2017,
+        # whatever is earlier
+        cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
         exp = cat.codes
         with tm.assert_produces_warning(FutureWarning):
             res = cat.labels
@@ -1279,14 +1398,15 @@ def test_deprecated_labels(self):
         self.assertFalse(LooseVersion(pd.__version__) >= '0.18')

     def test_deprecated_levels(self):
-        # TODO: levels is deprecated and should be removed in 0.18 or 2017, whatever is earlier
-        cat = pd.Categorical([1,2,3, np.nan], categories=[1,2,3])
+        # TODO: levels is deprecated and should be removed in 0.18 or 2017,
+        # whatever is earlier
+        cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
         exp = cat.categories
         with
tm.assert_produces_warning(FutureWarning): res = cat.levels self.assert_numpy_array_equal(res, exp) with tm.assert_produces_warning(FutureWarning): - res = pd.Categorical([1,2,3, np.nan], levels=[1,2,3]) + res = pd.Categorical([1, 2, 3, np.nan], levels=[1, 2, 3]) self.assert_numpy_array_equal(res.categories, exp) self.assertFalse(LooseVersion(pd.__version__) >= '0.18') @@ -1295,13 +1415,14 @@ def test_removed_names_produces_warning(self): # 10482 with tm.assert_produces_warning(UserWarning): - Categorical([0,1], name="a") + Categorical([0, 1], name="a") with tm.assert_produces_warning(UserWarning): - Categorical.from_codes([1,2], ["a","b","c"], name="a") + Categorical.from_codes([1, 2], ["a", "b", "c"], name="a") def test_datetime_categorical_comparison(self): - dt_cat = pd.Categorical(pd.date_range('2014-01-01', periods=3), ordered=True) + dt_cat = pd.Categorical( + pd.date_range('2014-01-01', periods=3), ordered=True) self.assert_numpy_array_equal(dt_cat > dt_cat[0], [False, True, True]) self.assert_numpy_array_equal(dt_cat[0] < dt_cat, [False, True, True]) @@ -1312,9 +1433,9 @@ def test_reflected_comparison_with_scalars(self): self.assert_numpy_array_equal(cat[0] < cat, [False, True, True]) def test_comparison_with_unknown_scalars(self): - # https://github.com/pydata/pandas/issues/9836#issuecomment-92123057 and following - # comparisons with scalars not in categories should raise for unequal comps, but not for - # equal/not equal + # https://github.com/pydata/pandas/issues/9836#issuecomment-92123057 + # and following comparisons with scalars not in categories should raise + # for unequal comps, but not for equal/not equal cat = pd.Categorical([1, 2, 3], ordered=True) self.assertRaises(TypeError, lambda: cat < 4) @@ -1322,176 +1443,183 @@ def test_comparison_with_unknown_scalars(self): self.assertRaises(TypeError, lambda: 4 < cat) self.assertRaises(TypeError, lambda: 4 > cat) - self.assert_numpy_array_equal(cat == 4 , [False, False, False]) - 
self.assert_numpy_array_equal(cat != 4 , [True, True, True]) + self.assert_numpy_array_equal(cat == 4, [False, False, False]) + self.assert_numpy_array_equal(cat != 4, [True, True, True]) class TestCategoricalAsBlock(tm.TestCase): _multiprocess_can_split_ = True def setUp(self): - self.factor = Categorical.from_array(['a', 'b', 'b', 'a', - 'a', 'c', 'c', 'c']) + self.factor = Categorical.from_array(['a', 'b', 'b', 'a', 'a', 'c', + 'c', 'c']) df = DataFrame({'value': np.random.randint(0, 10000, 100)}) - labels = [ "{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500) ] + labels = ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)] df = df.sort_values(by=['value'], ascending=True) - df['value_group'] = pd.cut(df.value, range(0, 10500, 500), right=False, labels=labels) + df['value_group'] = pd.cut(df.value, range(0, 10500, 500), right=False, + labels=labels) self.cat = df def test_dtypes(self): - # GH8143 - index = ['cat','obj','num'] + index = ['cat', 'obj', 'num'] cat = pd.Categorical(['a', 'b', 'c']) obj = pd.Series(['a', 'b', 'c']) num = pd.Series([1, 2, 3]) df = pd.concat([pd.Series(cat), obj, num], axis=1, keys=index) result = df.dtypes == 'object' - expected = Series([False,True,False],index=index) + expected = Series([False, True, False], index=index) tm.assert_series_equal(result, expected) result = df.dtypes == 'int64' - expected = Series([False,False,True],index=index) + expected = Series([False, False, True], index=index) tm.assert_series_equal(result, expected) result = df.dtypes == 'category' - expected = Series([True,False,False],index=index) + expected = Series([True, False, False], index=index) tm.assert_series_equal(result, expected) def test_codes_dtypes(self): # GH 8453 - result = Categorical(['foo','bar','baz']) + result = Categorical(['foo', 'bar', 'baz']) self.assertTrue(result.codes.dtype == 'int8') - result = Categorical(['foo%05d' % i for i in range(400) ]) + result = Categorical(['foo%05d' % i for i in range(400)]) 
self.assertTrue(result.codes.dtype == 'int16') - result = Categorical(['foo%05d' % i for i in range(40000) ]) + result = Categorical(['foo%05d' % i for i in range(40000)]) self.assertTrue(result.codes.dtype == 'int32') # adding cats - result = Categorical(['foo','bar','baz']) + result = Categorical(['foo', 'bar', 'baz']) self.assertTrue(result.codes.dtype == 'int8') - result = result.add_categories(['foo%05d' % i for i in range(400) ]) + result = result.add_categories(['foo%05d' % i for i in range(400)]) self.assertTrue(result.codes.dtype == 'int16') # removing cats - result = result.remove_categories(['foo%05d' % i for i in range(300) ]) + result = result.remove_categories(['foo%05d' % i for i in range(300)]) self.assertTrue(result.codes.dtype == 'int8') def test_basic(self): # test basic creation / coercion of categoricals s = Series(self.factor, name='A') - self.assertEqual(s.dtype,'category') - self.assertEqual(len(s),len(self.factor)) + self.assertEqual(s.dtype, 'category') + self.assertEqual(len(s), len(self.factor)) str(s.values) str(s) # in a frame - df = DataFrame({'A' : self.factor }) + df = DataFrame({'A': self.factor}) result = df['A'] - tm.assert_series_equal(result,s) - result = df.iloc[:,0] - tm.assert_series_equal(result,s) - self.assertEqual(len(df),len(self.factor)) + tm.assert_series_equal(result, s) + result = df.iloc[:, 0] + tm.assert_series_equal(result, s) + self.assertEqual(len(df), len(self.factor)) str(df.values) str(df) - df = DataFrame({'A' : s }) + df = DataFrame({'A': s}) result = df['A'] - tm.assert_series_equal(result,s) - self.assertEqual(len(df),len(self.factor)) + tm.assert_series_equal(result, s) + self.assertEqual(len(df), len(self.factor)) str(df.values) str(df) # multiples - df = DataFrame({'A' : s, 'B' : s, 'C' : 1}) + df = DataFrame({'A': s, 'B': s, 'C': 1}) result1 = df['A'] result2 = df['B'] tm.assert_series_equal(result1, s) tm.assert_series_equal(result2, s, check_names=False) self.assertEqual(result2.name, 'B') - 
self.assertEqual(len(df),len(self.factor)) + self.assertEqual(len(df), len(self.factor)) str(df.values) str(df) # GH8623 - x = pd.DataFrame([[1,'John P. Doe'],[2,'Jane Dove'],[1,'John P. Doe']], - columns=['person_id','person_name']) - x['person_name'] = pd.Categorical(x.person_name) # doing this breaks transform + x = pd.DataFrame([[1, 'John P. Doe'], [2, 'Jane Dove'], + [1, 'John P. Doe']], + columns=['person_id', 'person_name']) + x['person_name'] = pd.Categorical(x.person_name + ) # doing this breaks transform expected = x.iloc[0].person_name result = x.person_name.iloc[0] - self.assertEqual(result,expected) + self.assertEqual(result, expected) result = x.person_name[0] - self.assertEqual(result,expected) + self.assertEqual(result, expected) result = x.person_name.loc[0] - self.assertEqual(result,expected) + self.assertEqual(result, expected) def test_creation_astype(self): - l = ["a","b","c","a"] + l = ["a", "b", "c", "a"] s = pd.Series(l) exp = pd.Series(Categorical(l)) res = s.astype('category') tm.assert_series_equal(res, exp) - l = [1,2,3,1] + l = [1, 2, 3, 1] s = pd.Series(l) exp = pd.Series(Categorical(l)) res = s.astype('category') tm.assert_series_equal(res, exp) - df = pd.DataFrame({"cats":[1,2,3,4,5,6], "vals":[1,2,3,4,5,6]}) - cats = Categorical([1,2,3,4,5,6]) - exp_df = pd.DataFrame({"cats":cats, "vals":[1,2,3,4,5,6]}) - df["cats"] = df["cats"].astype("category") + df = pd.DataFrame({"cats": [1, 2, 3, 4, 5, 6], + "vals": [1, 2, 3, 4, 5, 6]}) + cats = Categorical([1, 2, 3, 4, 5, 6]) + exp_df = pd.DataFrame({"cats": cats, "vals": [1, 2, 3, 4, 5, 6]}) + df["cats"] = df["cats"].astype("category") tm.assert_frame_equal(exp_df, df) - df = pd.DataFrame({"cats":['a', 'b', 'b', 'a', 'a', 'd'], "vals":[1,2,3,4,5,6]}) + df = pd.DataFrame({"cats": ['a', 'b', 'b', 'a', 'a', 'd'], + "vals": [1, 2, 3, 4, 5, 6]}) cats = Categorical(['a', 'b', 'b', 'a', 'a', 'd']) - exp_df = pd.DataFrame({"cats":cats, "vals":[1,2,3,4,5,6]}) - df["cats"] = 
df["cats"].astype("category") + exp_df = pd.DataFrame({"cats": cats, "vals": [1, 2, 3, 4, 5, 6]}) + df["cats"] = df["cats"].astype("category") tm.assert_frame_equal(exp_df, df) # with keywords - l = ["a","b","c","a"] + l = ["a", "b", "c", "a"] s = pd.Series(l) exp = pd.Series(Categorical(l, ordered=True)) res = s.astype('category', ordered=True) tm.assert_series_equal(res, exp) - exp = pd.Series(Categorical(l, categories=list('abcdef'), ordered=True)) + exp = pd.Series(Categorical( + l, categories=list('abcdef'), ordered=True)) res = s.astype('category', categories=list('abcdef'), ordered=True) tm.assert_series_equal(res, exp) def test_construction_series(self): - l = [1,2,3,1] + l = [1, 2, 3, 1] exp = Series(l).astype('category') - res = Series(l,dtype='category') + res = Series(l, dtype='category') tm.assert_series_equal(res, exp) - l = ["a","b","c","a"] + l = ["a", "b", "c", "a"] exp = Series(l).astype('category') - res = Series(l,dtype='category') + res = Series(l, dtype='category') tm.assert_series_equal(res, exp) # insert into frame with different index # GH 8076 index = pd.date_range('20000101', periods=3) - expected = Series(Categorical(values=[np.nan,np.nan,np.nan],categories=['a', 'b', 'c'])) + expected = Series(Categorical(values=[np.nan, np.nan, np.nan], + categories=['a', 'b', 'c'])) expected.index = index expected = DataFrame({'x': expected}) - df = DataFrame({'x': Series(['a', 'b', 'c'],dtype='category')}, index=index) + df = DataFrame( + {'x': Series(['a', 'b', 'c'], dtype='category')}, index=index) tm.assert_frame_equal(df, expected) def test_construction_frame(self): @@ -1499,7 +1627,7 @@ def test_construction_frame(self): # GH8626 # dict creation - df = DataFrame({ 'A' : list('abc') }, dtype='category') + df = DataFrame({'A': list('abc')}, dtype='category') expected = Series(list('abc'), dtype='category', name='A') tm.assert_series_equal(df['A'], expected) @@ -1519,25 +1647,31 @@ def test_construction_frame(self): # ndim != 1 df = 
DataFrame([pd.Categorical(list('abc'))]) - expected = DataFrame({ 0 : Series(list('abc'),dtype='category')}) - tm.assert_frame_equal(df,expected) + expected = DataFrame({0: Series(list('abc'), dtype='category')}) + tm.assert_frame_equal(df, expected) - df = DataFrame([pd.Categorical(list('abc')),pd.Categorical(list('abd'))]) - expected = DataFrame({ 0 : Series(list('abc'),dtype='category'), - 1 : Series(list('abd'),dtype='category')},columns=[0,1]) - tm.assert_frame_equal(df,expected) + df = DataFrame([pd.Categorical(list('abc')), pd.Categorical(list( + 'abd'))]) + expected = DataFrame({0: Series(list('abc'), dtype='category'), + 1: Series(list('abd'), dtype='category')}, + columns=[0, 1]) + tm.assert_frame_equal(df, expected) # mixed - df = DataFrame([pd.Categorical(list('abc')),list('def')]) - expected = DataFrame({ 0 : Series(list('abc'),dtype='category'), - 1 : list('def')},columns=[0,1]) - tm.assert_frame_equal(df,expected) + df = DataFrame([pd.Categorical(list('abc')), list('def')]) + expected = DataFrame({0: Series(list('abc'), dtype='category'), + 1: list('def')}, columns=[0, 1]) + tm.assert_frame_equal(df, expected) # invalid (shape) - self.assertRaises(ValueError, lambda : DataFrame([pd.Categorical(list('abc')),pd.Categorical(list('abdefg'))])) + self.assertRaises( + ValueError, + lambda: DataFrame([pd.Categorical(list('abc')), + pd.Categorical(list('abdefg'))])) # ndim > 1 - self.assertRaises(NotImplementedError, lambda : pd.Categorical(np.array([list('abcd')]))) + self.assertRaises(NotImplementedError, + lambda: pd.Categorical(np.array([list('abcd')]))) def test_reshaping(self): @@ -1547,12 +1681,12 @@ def test_reshaping(self): df['category'] = df['str'].astype('category') result = df['category'].unstack() - c = Categorical(['foo']*len(p.major_axis)) - expected = DataFrame({'A' : c.copy(), - 'B' : c.copy(), - 'C' : c.copy(), - 'D' : c.copy()}, - columns=Index(list('ABCD'),name='minor'), + c = Categorical(['foo'] * len(p.major_axis)) + expected = 
DataFrame({'A': c.copy(), + 'B': c.copy(), + 'C': c.copy(), + 'D': c.copy()}, + columns=Index(list('ABCD'), name='minor'), index=p.major_axis.set_names('major')) tm.assert_frame_equal(result, expected) @@ -1561,91 +1695,94 @@ def test_reindex(self): index = pd.date_range('20000101', periods=3) # reindexing to an invalid Categorical - s = Series(['a', 'b', 'c'],dtype='category') + s = Series(['a', 'b', 'c'], dtype='category') result = s.reindex(index) - expected = Series(Categorical(values=[np.nan,np.nan,np.nan],categories=['a', 'b', 'c'])) + expected = Series(Categorical(values=[np.nan, np.nan, np.nan], + categories=['a', 'b', 'c'])) expected.index = index tm.assert_series_equal(result, expected) # partial reindexing - expected = Series(Categorical(values=['b','c'],categories=['a', 'b', 'c'])) - expected.index = [1,2] - result = s.reindex([1,2]) + expected = Series(Categorical(values=['b', 'c'], categories=['a', 'b', + 'c'])) + expected.index = [1, 2] + result = s.reindex([1, 2]) tm.assert_series_equal(result, expected) - expected = Series(Categorical(values=['c',np.nan],categories=['a', 'b', 'c'])) - expected.index = [2,3] - result = s.reindex([2,3]) + expected = Series(Categorical( + values=['c', np.nan], categories=['a', 'b', 'c'])) + expected.index = [2, 3] + result = s.reindex([2, 3]) tm.assert_series_equal(result, expected) - - def test_sideeffects_free(self): - - # Passing a categorical to a Series and then changing values in either the series or the - # categorical should not change the values in the other one, IF you specify copy! - cat = Categorical(["a","b","c","a"]) - s = pd.Series(cat, copy=True) + # Passing a categorical to a Series and then changing values in either + # the series or the categorical should not change the values in the + # other one, IF you specify copy! 
+        cat = Categorical(["a", "b", "c", "a"])
+        s = pd.Series(cat, copy=True)
         self.assertFalse(s.cat is cat)
-        s.cat.categories = [1,2,3]
-        exp_s = np.array([1,2,3,1])
-        exp_cat = np.array(["a","b","c","a"])
+        s.cat.categories = [1, 2, 3]
+        exp_s = np.array([1, 2, 3, 1])
+        exp_cat = np.array(["a", "b", "c", "a"])
         self.assert_numpy_array_equal(s.__array__(), exp_s)
         self.assert_numpy_array_equal(cat.__array__(), exp_cat)

         # setting
         s[0] = 2
-        exp_s2 = np.array([2,2,3,1])
+        exp_s2 = np.array([2, 2, 3, 1])
         self.assert_numpy_array_equal(s.__array__(), exp_s2)
         self.assert_numpy_array_equal(cat.__array__(), exp_cat)

         # however, copy is False by default
         # so this WILL change values
-        cat = Categorical(["a","b","c","a"])
-        s = pd.Series(cat)
+        cat = Categorical(["a", "b", "c", "a"])
+        s = pd.Series(cat)
         self.assertTrue(s.values is cat)
-        s.cat.categories = [1,2,3]
-        exp_s = np.array([1,2,3,1])
+        s.cat.categories = [1, 2, 3]
+        exp_s = np.array([1, 2, 3, 1])
         self.assert_numpy_array_equal(s.__array__(), exp_s)
         self.assert_numpy_array_equal(cat.__array__(), exp_s)
         s[0] = 2
-        exp_s2 = np.array([2,2,3,1])
+        exp_s2 = np.array([2, 2, 3, 1])
         self.assert_numpy_array_equal(s.__array__(), exp_s2)
         self.assert_numpy_array_equal(cat.__array__(), exp_s2)

     def test_nan_handling(self):

         # Nans are represented as -1 in labels
-        s = Series(Categorical(["a","b",np.nan,"a"]))
-        self.assert_numpy_array_equal(s.cat.categories, np.array(["a","b"]))
-        self.assert_numpy_array_equal(s.values.codes, np.array([0,1,-1,0]))
+        s = Series(Categorical(["a", "b", np.nan, "a"]))
+        self.assert_numpy_array_equal(s.cat.categories, np.array(["a", "b"]))
+        self.assert_numpy_array_equal(s.values.codes, np.array([0, 1, -1, 0]))

-        # If categories have nan included, the label should point to that instead
+        # If categories have nan included, the label should point to that
+        # instead
         with tm.assert_produces_warning(FutureWarning):
-            s2 = Series(Categorical(["a","b",np.nan,"a"], categories=["a","b",np.nan]))
-        self.assert_numpy_array_equal(s2.cat.categories,
-                                      np.array(["a","b",np.nan], dtype=np.object_))
-        self.assert_numpy_array_equal(s2.values.codes, np.array([0,1,2,0]))
+            s2 = Series(Categorical(
+                ["a", "b", np.nan, "a"], categories=["a", "b", np.nan]))
+        self.assert_numpy_array_equal(s2.cat.categories, np.array(
+            ["a", "b", np.nan], dtype=np.object_))
+        self.assert_numpy_array_equal(s2.values.codes, np.array([0, 1, 2, 0]))

         # Changing categories should also make the replaced category np.nan
-        s3 = Series(Categorical(["a","b","c","a"]))
+        s3 = Series(Categorical(["a", "b", "c", "a"]))
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
-            s3.cat.categories = ["a","b",np.nan]
-        self.assert_numpy_array_equal(s3.cat.categories,
-                                      np.array(["a","b",np.nan], dtype=np.object_))
-        self.assert_numpy_array_equal(s3.values.codes, np.array([0,1,2,0]))
+            s3.cat.categories = ["a", "b", np.nan]
+        self.assert_numpy_array_equal(s3.cat.categories, np.array(
+            ["a", "b", np.nan], dtype=np.object_))
+        self.assert_numpy_array_equal(s3.values.codes, np.array([0, 1, 2, 0]))

     def test_cat_accessor(self):
-        s = Series(Categorical(["a","b",np.nan,"a"]))
-        self.assert_numpy_array_equal(s.cat.categories, np.array(["a","b"]))
+        s = Series(Categorical(["a", "b", np.nan, "a"]))
+        self.assert_numpy_array_equal(s.cat.categories, np.array(["a", "b"]))
         self.assertEqual(s.cat.ordered, False)
-        exp = Categorical(["a","b",np.nan,"a"], categories=["b","a"])
+        exp = Categorical(["a", "b", np.nan, "a"], categories=["b", "a"])
         s.cat.set_categories(["b", "a"], inplace=True)
         self.assertTrue(s.values.equals(exp))
         res = s.cat.set_categories(["b", "a"])
         self.assertTrue(res.values.equals(exp))
-        exp = Categorical(["a","b",np.nan,"a"], categories=["b","a"])
+        exp = Categorical(["a", "b", np.nan, "a"], categories=["b", "a"])
         s[:] = "a"
         s = s.cat.remove_unused_categories()
         self.assert_numpy_array_equal(s.cat.categories, np.array(["a"]))
@@ -1654,13 +1791,14 @@ def test_sequence_like(self):
         # GH 7839
         # make sure can iterate
-        df = DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
+        df = DataFrame({"id": [1, 2, 3, 4, 5, 6],
+                        "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
         df['grade'] = Categorical(df['raw_grade'])

         # basic sequencing testing
         result = list(df.grade.values)
         expected = np.array(df.grade.values).tolist()
-        tm.assert_almost_equal(result,expected)
+        tm.assert_almost_equal(result, expected)

         # iteration
         for t in df.itertuples(index=False):
@@ -1675,24 +1813,27 @@ def test_sequence_like(self):

     def test_series_delegations(self):

         # invalid accessor
-        self.assertRaises(AttributeError, lambda : Series([1,2,3]).cat)
-        tm.assertRaisesRegexp(AttributeError,
-                              r"Can only use .cat accessor with a 'category' dtype",
-                              lambda : Series([1,2,3]).cat)
-        self.assertRaises(AttributeError, lambda : Series(['a','b','c']).cat)
-        self.assertRaises(AttributeError, lambda : Series(np.arange(5.)).cat)
-        self.assertRaises(AttributeError, lambda : Series([Timestamp('20130101')]).cat)
-
-        # Series should delegate calls to '.categories', '.codes', '.ordered' and the
-        # methods '.set_categories()' 'drop_unused_categories()' to the categorical
-        s = Series(Categorical(["a","b","c","a"], ordered=True))
-        exp_categories = np.array(["a","b","c"])
+        self.assertRaises(AttributeError, lambda: Series([1, 2, 3]).cat)
+        tm.assertRaisesRegexp(
+            AttributeError,
+            r"Can only use .cat accessor with a 'category' dtype",
+            lambda: Series([1, 2, 3]).cat)
+        self.assertRaises(AttributeError, lambda: Series(['a', 'b', 'c']).cat)
+        self.assertRaises(AttributeError, lambda: Series(np.arange(5.)).cat)
+        self.assertRaises(AttributeError,
+                          lambda: Series([Timestamp('20130101')]).cat)
+
+        # Series should delegate calls to '.categories', '.codes', '.ordered'
+        # and the methods '.set_categories()' 'drop_unused_categories()' to the
+        # categorical
+        s = Series(Categorical(["a", "b", "c", "a"], ordered=True))
+        exp_categories = np.array(["a", "b", "c"])
self.assert_numpy_array_equal(s.cat.categories, exp_categories) - s.cat.categories = [1,2,3] - exp_categories = np.array([1,2,3]) + s.cat.categories = [1, 2, 3] + exp_categories = np.array([1, 2, 3]) self.assert_numpy_array_equal(s.cat.categories, exp_categories) - exp_codes = Series([0,1,2,0],dtype='int8') + exp_codes = Series([0, 1, 2, 0], dtype='int8') tm.assert_series_equal(s.cat.codes, exp_codes) self.assertEqual(s.cat.ordered, True) @@ -1702,39 +1843,44 @@ def test_series_delegations(self): self.assertEqual(s.cat.ordered, True) # reorder - s = Series(Categorical(["a","b","c","a"], ordered=True)) - exp_categories = np.array(["c","b","a"]) - exp_values = np.array(["a","b","c","a"]) - s = s.cat.set_categories(["c","b","a"]) + s = Series(Categorical(["a", "b", "c", "a"], ordered=True)) + exp_categories = np.array(["c", "b", "a"]) + exp_values = np.array(["a", "b", "c", "a"]) + s = s.cat.set_categories(["c", "b", "a"]) self.assert_numpy_array_equal(s.cat.categories, exp_categories) self.assert_numpy_array_equal(s.values.__array__(), exp_values) self.assert_numpy_array_equal(s.__array__(), exp_values) # remove unused categories - s = Series(Categorical(["a","b","b","a"], categories=["a","b","c"])) - exp_categories = np.array(["a","b"]) - exp_values = np.array(["a","b","b","a"]) + s = Series(Categorical(["a", "b", "b", "a"], categories=["a", "b", "c" + ])) + exp_categories = np.array(["a", "b"]) + exp_values = np.array(["a", "b", "b", "a"]) s = s.cat.remove_unused_categories() self.assert_numpy_array_equal(s.cat.categories, exp_categories) self.assert_numpy_array_equal(s.values.__array__(), exp_values) self.assert_numpy_array_equal(s.__array__(), exp_values) - # This method is likely to be confused, so test that it raises an error on wrong inputs: + # This method is likely to be confused, so test that it raises an error + # on wrong inputs: def f(): - s.set_categories([4,3,2,1]) + s.set_categories([4, 3, 2, 1]) + self.assertRaises(Exception, f) # right: 
s.cat.set_categories([4,3,2,1]) def test_series_functions_no_warnings(self): df = pd.DataFrame({'value': np.random.randint(0, 100, 20)}) - labels = [ "{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)] + labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)] with tm.assert_produces_warning(False): - df['group'] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels) + df['group'] = pd.cut(df.value, range(0, 105, 10), right=False, + labels=labels) def test_assignment_to_dataframe(self): # assignment - df = DataFrame({'value': np.array(np.random.randint(0, 10000, 100),dtype='int32')}) - labels = [ "{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500) ] + df = DataFrame({'value': np.array( + np.random.randint(0, 10000, 100), dtype='int32')}) + labels = ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)] df = df.sort_values(by=['value'], ascending=True) s = pd.cut(df.value, range(0, 10500, 500), right=False, labels=labels) @@ -1743,16 +1889,18 @@ def test_assignment_to_dataframe(self): str(df) result = df.dtypes - expected = Series([np.dtype('int32'), com.CategoricalDtype()],index=['value','D']) - tm.assert_series_equal(result,expected) + expected = Series( + [np.dtype('int32'), com.CategoricalDtype()], index=['value', 'D']) + tm.assert_series_equal(result, expected) df['E'] = s str(df) result = df.dtypes - expected = Series([np.dtype('int32'), com.CategoricalDtype(), com.CategoricalDtype()], - index=['value','D','E']) - tm.assert_series_equal(result,expected) + expected = Series([np.dtype('int32'), com.CategoricalDtype(), + com.CategoricalDtype()], + index=['value', 'D', 'E']) + tm.assert_series_equal(result, expected) result1 = df['D'] result2 = df['E'] @@ -1762,122 +1910,164 @@ def test_assignment_to_dataframe(self): s.name = 'E' self.assertTrue(result2.sort_index().equals(s.sort_index())) - cat = pd.Categorical([1,2,3,10], categories=[1,2,3,4,10]) + cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10]) df = 
pd.DataFrame(pd.Series(cat)) def test_describe(self): # Categoricals should not show up together with numerical columns result = self.cat.describe() - self.assertEqual(len(result.columns),1) - + self.assertEqual(len(result.columns), 1) - # In a frame, describe() for the cat should be the same as for string arrays (count, unique, - # top, freq) + # In a frame, describe() for the cat should be the same as for string + # arrays (count, unique, top, freq) - cat = Categorical(["a","b","b","b"], categories=['a','b','c'], ordered=True) + cat = Categorical(["a", "b", "b", "b"], categories=['a', 'b', 'c'], + ordered=True) s = Series(cat) result = s.describe() - expected = Series([4,2,"b",3],index=['count','unique','top', 'freq']) - tm.assert_series_equal(result,expected) + expected = Series([4, 2, "b", 3], + index=['count', 'unique', 'top', 'freq']) + tm.assert_series_equal(result, expected) - cat = pd.Series(pd.Categorical(["a","b","c","c"])) - df3 = pd.DataFrame({"cat":cat, "s":["a","b","c","c"]}) + cat = pd.Series(pd.Categorical(["a", "b", "c", "c"])) + df3 = pd.DataFrame({"cat": cat, "s": ["a", "b", "c", "c"]}) res = df3.describe() self.assert_numpy_array_equal(res["cat"].values, res["s"].values) def test_repr(self): - a = pd.Series(pd.Categorical([1,2,3,4])) + a = pd.Series(pd.Categorical([1, 2, 3, 4])) exp = u("0 1\n1 2\n2 3\n3 4\n" + - "dtype: category\nCategories (4, int64): [1, 2, 3, 4]") + "dtype: category\nCategories (4, int64): [1, 2, 3, 4]") self.assertEqual(exp, a.__unicode__()) - a = pd.Series(pd.Categorical(["a","b"] *25)) - exp = u("0 a\n1 b\n" + " ..\n" + - "48 a\n49 b\n" + + a = pd.Series(pd.Categorical(["a", "b"] * 25)) + exp = u("0 a\n1 b\n" + " ..\n" + "48 a\n49 b\n" + "dtype: category\nCategories (2, object): [a, b]") with option_context("display.max_rows", 5): self.assertEqual(exp, repr(a)) levs = list("abcdefghijklmnopqrstuvwxyz") - a = pd.Series(pd.Categorical(["a","b"], categories=levs, ordered=True)) - exp = u("0 a\n1 b\n" + - "dtype: category\n" 
+        a = pd.Series(pd.Categorical(
+            ["a", "b"], categories=levs, ordered=True))
+        exp = u("0    a\n1    b\n"
+                "dtype: category\n"
                 "Categories (26, object): [a < b < c < d ... w < x < y < z]")
-        self.assertEqual(exp,a.__unicode__())
+        self.assertEqual(exp, a.__unicode__())
 
     def test_categorical_repr(self):
-        c = pd.Categorical([1, 2 ,3])
+        c = pd.Categorical([1, 2, 3])
         exp = """[1, 2, 3]
 Categories (3, int64): [1, 2, 3]"""
+
         self.assertEqual(repr(c), exp)
 
-        c = pd.Categorical([1, 2 ,3, 1, 2 ,3], categories=[1, 2, 3])
+        c = pd.Categorical([1, 2, 3, 1, 2, 3], categories=[1, 2, 3])
         exp = """[1, 2, 3, 1, 2, 3]
 Categories (3, int64): [1, 2, 3]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical([1, 2, 3, 4, 5] * 10)
         exp = """[1, 2, 3, 4, 5, ..., 1, 2, 3, 4, 5]
 Length: 50
 Categories (5, int64): [1, 2, 3, 4, 5]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(np.arange(20))
         exp = """[0, 1, 2, 3, 4, ..., 15, 16, 17, 18, 19]
 Length: 20
 Categories (20, int64): [0, 1, 2, 3, ..., 16, 17, 18, 19]"""
+
         self.assertEqual(repr(c), exp)
 
     def test_categorical_repr_ordered(self):
-        c = pd.Categorical([1, 2 ,3], ordered=True)
+        c = pd.Categorical([1, 2, 3], ordered=True)
         exp = """[1, 2, 3]
 Categories (3, int64): [1 < 2 < 3]"""
+
         self.assertEqual(repr(c), exp)
 
-        c = pd.Categorical([1, 2 ,3, 1, 2 ,3], categories=[1, 2, 3], ordered=True)
+        c = pd.Categorical([1, 2, 3, 1, 2, 3], categories=[1, 2, 3],
+                           ordered=True)
         exp = """[1, 2, 3, 1, 2, 3]
 Categories (3, int64): [1 < 2 < 3]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical([1, 2, 3, 4, 5] * 10, ordered=True)
         exp = """[1, 2, 3, 4, 5, ..., 1, 2, 3, 4, 5]
 Length: 50
 Categories (5, int64): [1 < 2 < 3 < 4 < 5]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(np.arange(20), ordered=True)
         exp = """[0, 1, 2, 3, 4, ..., 15, 16, 17, 18, 19]
 Length: 20
 Categories (20, int64): [0 < 1 < 2 < 3 ... 16 < 17 < 18 < 19]"""
+
         self.assertEqual(repr(c), exp)
 
     def test_categorical_repr_datetime(self):
         idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5)
         c = pd.Categorical(idx)
-        exp = """[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00]
-Categories (5, datetime64[ns]): [2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00,
-                                 2011-01-01 12:00:00, 2011-01-01 13:00:00]"""
+
+        # TODO(wesm): exceeding 80 characters in the console is not good
+        # behavior
+        exp = (
+            "[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, "
+            "2011-01-01 12:00:00, 2011-01-01 13:00:00]\n"
+            "Categories (5, datetime64[ns]): [2011-01-01 09:00:00, "
+            "2011-01-01 10:00:00, 2011-01-01 11:00:00,\n"
+            "                                 2011-01-01 12:00:00, "
+            "2011-01-01 13:00:00]")
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx)
-        exp = """[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00, 2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00]
-Categories (5, datetime64[ns]): [2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00,
-                                 2011-01-01 12:00:00, 2011-01-01 13:00:00]"""
+        exp = (
+            "[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, "
+            "2011-01-01 12:00:00, 2011-01-01 13:00:00, 2011-01-01 09:00:00, "
+            "2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, "
+            "2011-01-01 13:00:00]\n"
+            "Categories (5, datetime64[ns]): [2011-01-01 09:00:00, "
+            "2011-01-01 10:00:00, 2011-01-01 11:00:00,\n"
+            "                                 2011-01-01 12:00:00, "
+            "2011-01-01 13:00:00]")
+
         self.assertEqual(repr(c), exp)
 
-        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5, tz='US/Eastern')
+        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
+                            tz='US/Eastern')
         c = pd.Categorical(idx)
-        exp = """[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00]
-Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00,
-                                             2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00,
-                                             2011-01-01 13:00:00-05:00]"""
+        exp = (
+            "[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, "
+            "2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, "
+            "2011-01-01 13:00:00-05:00]\n"
+            "Categories (5, datetime64[ns, US/Eastern]): "
+            "[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00,\n"
+            "                                             "
+            "2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00,\n"
+            "                                             "
+            "2011-01-01 13:00:00-05:00]")
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx)
-        exp = """[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00, 2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00]
-Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00,
-                                             2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00,
-                                             2011-01-01 13:00:00-05:00]"""
+        exp = (
+            "[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, "
+            "2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, "
+            "2011-01-01 13:00:00-05:00, 2011-01-01 09:00:00-05:00, "
+            "2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, "
+            "2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00]\n"
+            "Categories (5, datetime64[ns, US/Eastern]): "
+            "[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00,\n"
+            "                                             "
+            "2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00,\n"
+            "                                             "
+            "2011-01-01 13:00:00-05:00]")
+
         self.assertEqual(repr(c), exp)
 
     def test_categorical_repr_datetime_ordered(self):
@@ -1885,21 +2075,25 @@ def test_categorical_repr_datetime_ordered(self):
         c = pd.Categorical(idx, ordered=True)
         exp = """[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00]
 Categories (5, datetime64[ns]): [2011-01-01 09:00:00 < 2011-01-01 10:00:00 < 2011-01-01 11:00:00 <
-                                 2011-01-01 12:00:00 < 2011-01-01 13:00:00]"""
+                                 2011-01-01 12:00:00 < 2011-01-01 13:00:00]"""  # noqa
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
         exp = """[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00, 2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00]
 Categories (5, datetime64[ns]): [2011-01-01 09:00:00 < 2011-01-01 10:00:00 < 2011-01-01 11:00:00 <
-                                 2011-01-01 12:00:00 < 2011-01-01 13:00:00]"""
+                                 2011-01-01 12:00:00 < 2011-01-01 13:00:00]"""  # noqa
+
         self.assertEqual(repr(c), exp)
 
-        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5, tz='US/Eastern')
+        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
+                            tz='US/Eastern')
         c = pd.Categorical(idx, ordered=True)
         exp = """[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00]
 Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00 < 2011-01-01 10:00:00-05:00 <
                                              2011-01-01 11:00:00-05:00 < 2011-01-01 12:00:00-05:00 <
-                                             2011-01-01 13:00:00-05:00]"""
+                                             2011-01-01 13:00:00-05:00]"""  # noqa
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
@@ -1907,6 +2101,7 @@ def test_categorical_repr_datetime_ordered(self):
 Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00 < 2011-01-01 10:00:00-05:00 <
                                              2011-01-01 11:00:00-05:00 < 2011-01-01 12:00:00-05:00 <
                                              2011-01-01 13:00:00-05:00]"""
+
         self.assertEqual(repr(c), exp)
 
     def test_categorical_repr_period(self):
@@ -1915,23 +2110,27 @@ def test_categorical_repr_period(self):
         exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00]
 Categories (5, period): [2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00,
                          2011-01-01 13:00]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx)
         exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00, 2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00]
 Categories (5, period): [2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00,
                          2011-01-01 13:00]"""
+
         self.assertEqual(repr(c), exp)
 
         idx = pd.period_range('2011-01', freq='M', periods=5)
         c = pd.Categorical(idx)
         exp = """[2011-01, 2011-02, 2011-03, 2011-04, 2011-05]
 Categories (5, period): [2011-01, 2011-02, 2011-03, 2011-04, 2011-05]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx)
         exp = """[2011-01, 2011-02, 2011-03, 2011-04, 2011-05, 2011-01, 2011-02, 2011-03, 2011-04, 2011-05]
 Categories (5, period): [2011-01, 2011-02, 2011-03, 2011-04, 2011-05]"""
+
         self.assertEqual(repr(c), exp)
 
     def test_categorical_repr_period_ordered(self):
@@ -1940,23 +2139,27 @@ def test_categorical_repr_period_ordered(self):
         exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00]
 Categories (5, period): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 <
                          2011-01-01 13:00]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
         exp = """[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00, 2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00]
 Categories (5, period): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 <
                          2011-01-01 13:00]"""
+
         self.assertEqual(repr(c), exp)
 
         idx = pd.period_range('2011-01', freq='M', periods=5)
         c = pd.Categorical(idx, ordered=True)
         exp = """[2011-01, 2011-02, 2011-03, 2011-04, 2011-05]
 Categories (5, period): [2011-01 < 2011-02 < 2011-03 < 2011-04 < 2011-05]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
         exp = """[2011-01, 2011-02, 2011-03, 2011-04, 2011-05, 2011-01, 2011-02, 2011-03, 2011-04, 2011-05]
 Categories (5, period): [2011-01 < 2011-02 < 2011-03 < 2011-04 < 2011-05]"""
+
         self.assertEqual(repr(c), exp)
 
     def test_categorical_repr_timedelta(self):
@@ -1964,11 +2167,13 @@ def test_categorical_repr_timedelta(self):
         c = pd.Categorical(idx)
         exp = """[1 days, 2 days, 3 days, 4 days, 5 days]
 Categories (5, timedelta64[ns]): [1 days, 2 days, 3 days, 4 days, 5 days]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx)
         exp = """[1 days, 2 days, 3 days, 4 days, 5 days, 1 days, 2 days, 3 days, 4 days, 5 days]
 Categories (5, timedelta64[ns]): [1 days, 2 days, 3 days, 4 days, 5 days]"""
+
         self.assertEqual(repr(c), exp)
 
         idx = pd.timedelta_range('1 hours', periods=20)
@@ -1978,6 +2183,7 @@ def test_categorical_repr_timedelta(self):
 Categories (20, timedelta64[ns]): [0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, ...,
                                    16 days 01:00:00, 17 days 01:00:00, 18 days 01:00:00, 19 days 01:00:00]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx)
@@ -1986,6 +2192,7 @@ def test_categorical_repr_timedelta(self):
 Categories (20, timedelta64[ns]): [0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, ...,
                                    16 days 01:00:00, 17 days 01:00:00, 18 days 01:00:00, 19 days 01:00:00]"""
+
         self.assertEqual(repr(c), exp)
 
     def test_categorical_repr_timedelta_ordered(self):
@@ -1993,11 +2200,13 @@ def test_categorical_repr_timedelta_ordered(self):
         c = pd.Categorical(idx, ordered=True)
         exp = """[1 days, 2 days, 3 days, 4 days, 5 days]
 Categories (5, timedelta64[ns]): [1 days < 2 days < 3 days < 4 days < 5 days]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
         exp = """[1 days, 2 days, 3 days, 4 days, 5 days, 1 days, 2 days, 3 days, 4 days, 5 days]
 Categories (5, timedelta64[ns]): [1 days < 2 days < 3 days < 4 days < 5 days]"""
+
         self.assertEqual(repr(c), exp)
 
         idx = pd.timedelta_range('1 hours', periods=20)
@@ -2007,6 +2216,7 @@ def test_categorical_repr_timedelta_ordered(self):
 Categories (20, timedelta64[ns]): [0 days 01:00:00 < 1 days 01:00:00 < 2 days 01:00:00 < 3 days 01:00:00 ...
                                    16 days 01:00:00 < 17 days 01:00:00 < 18 days 01:00:00 < 19 days 01:00:00]"""
+
         self.assertEqual(repr(c), exp)
 
         c = pd.Categorical(idx.append(idx), categories=idx, ordered=True)
@@ -2015,15 +2225,17 @@ def test_categorical_repr_timedelta_ordered(self):
 Categories (20, timedelta64[ns]): [0 days 01:00:00 < 1 days 01:00:00 < 2 days 01:00:00 < 3 days 01:00:00 ...
                                    16 days 01:00:00 < 17 days 01:00:00 < 18 days 01:00:00 < 19 days 01:00:00]"""
+
         self.assertEqual(repr(c), exp)
 
     def test_categorical_series_repr(self):
-        s = pd.Series(pd.Categorical([1, 2 ,3]))
+        s = pd.Series(pd.Categorical([1, 2, 3]))
         exp = """0    1
 1    2
 2    3
 dtype: category
 Categories (3, int64): [1, 2, 3]"""
+
         self.assertEqual(repr(s), exp)
 
         s = pd.Series(pd.Categorical(np.arange(10)))
@@ -2039,15 +2251,17 @@ def test_categorical_series_repr(self):
 9    9
 dtype: category
 Categories (10, int64): [0, 1, 2, 3, ..., 6, 7, 8, 9]"""
+
         self.assertEqual(repr(s), exp)
 
     def test_categorical_series_repr_ordered(self):
-        s = pd.Series(pd.Categorical([1, 2 ,3], ordered=True))
+        s = pd.Series(pd.Categorical([1, 2, 3], ordered=True))
         exp = """0    1
 1    2
 2    3
 dtype: category
 Categories (3, int64): [1 < 2 < 3]"""
+
         self.assertEqual(repr(s), exp)
 
         s = pd.Series(pd.Categorical(np.arange(10), ordered=True))
@@ -2063,6 +2277,7 @@ def test_categorical_series_repr_ordered(self):
 9    9
 dtype: category
 Categories (10, int64): [0 < 1 < 2 < 3 ... 6 < 7 < 8 < 9]"""
+
         self.assertEqual(repr(s), exp)
 
     def test_categorical_series_repr_datetime(self):
@@ -2076,9 +2291,11 @@ def test_categorical_series_repr_datetime(self):
 dtype: category
 Categories (5, datetime64[ns]): [2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00,
                                  2011-01-01 12:00:00, 2011-01-01 13:00:00]"""
+
         self.assertEqual(repr(s), exp)
 
-        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5, tz='US/Eastern')
+        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
+                            tz='US/Eastern')
         s = pd.Series(pd.Categorical(idx))
         exp = """0   2011-01-01 09:00:00-05:00
 1   2011-01-01 10:00:00-05:00
@@ -2089,6 +2306,7 @@ def test_categorical_series_repr_datetime(self):
 Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00,
                                              2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00,
                                              2011-01-01 13:00:00-05:00]"""
+
         self.assertEqual(repr(s), exp)
 
     def test_categorical_series_repr_datetime_ordered(self):
@@ -2102,9 +2320,11 @@ def test_categorical_series_repr_datetime_ordered(self):
 dtype: category
 Categories (5, datetime64[ns]): [2011-01-01 09:00:00 < 2011-01-01 10:00:00 < 2011-01-01 11:00:00 <
                                  2011-01-01 12:00:00 < 2011-01-01 13:00:00]"""
+
         self.assertEqual(repr(s), exp)
 
-        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5, tz='US/Eastern')
+        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
+                            tz='US/Eastern')
         s = pd.Series(pd.Categorical(idx, ordered=True))
         exp = """0   2011-01-01 09:00:00-05:00
 1   2011-01-01 10:00:00-05:00
@@ -2115,6 +2335,7 @@ def test_categorical_series_repr_datetime_ordered(self):
 Categories (5, datetime64[ns, US/Eastern]): [2011-01-01 09:00:00-05:00 < 2011-01-01 10:00:00-05:00 <
                                              2011-01-01 11:00:00-05:00 < 2011-01-01 12:00:00-05:00 <
                                              2011-01-01 13:00:00-05:00]"""
+
         self.assertEqual(repr(s), exp)
 
     def test_categorical_series_repr_period(self):
@@ -2128,6 +2349,7 @@ def test_categorical_series_repr_period(self):
 dtype: category
 Categories (5, period): [2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00,
                          2011-01-01 13:00]"""
+
         self.assertEqual(repr(s), exp)
 
         idx = pd.period_range('2011-01', freq='M', periods=5)
@@ -2139,6 +2361,7 @@ def test_categorical_series_repr_period(self):
 4   2011-05
 dtype: category
 Categories (5, period): [2011-01, 2011-02, 2011-03, 2011-04, 2011-05]"""
+
         self.assertEqual(repr(s), exp)
 
     def test_categorical_series_repr_period_ordered(self):
@@ -2152,6 +2375,7 @@ def test_categorical_series_repr_period_ordered(self):
 dtype: category
 Categories (5, period): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 <
                          2011-01-01 13:00]"""
+
         self.assertEqual(repr(s), exp)
 
         idx = pd.period_range('2011-01', freq='M', periods=5)
@@ -2163,6 +2387,7 @@ def test_categorical_series_repr_period_ordered(self):
 4   2011-05
 dtype: category
 Categories (5, period): [2011-01 < 2011-02 < 2011-03 < 2011-04 < 2011-05]"""
+
         self.assertEqual(repr(s), exp)
 
     def test_categorical_series_repr_timedelta(self):
@@ -2175,6 +2400,7 @@ def test_categorical_series_repr_timedelta(self):
 4   5 days
 dtype: category
 Categories (5, timedelta64[ns]): [1 days, 2 days, 3 days, 4 days, 5 days]"""
+
         self.assertEqual(repr(s), exp)
 
         idx = pd.timedelta_range('1 hours', periods=10)
@@ -2193,6 +2419,7 @@ def test_categorical_series_repr_timedelta(self):
 Categories (10, timedelta64[ns]): [0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, ...,
                                    6 days 01:00:00, 7 days 01:00:00, 8 days 01:00:00, 9 days 01:00:00]"""
+
         self.assertEqual(repr(s), exp)
 
     def test_categorical_series_repr_timedelta_ordered(self):
@@ -2205,6 +2432,7 @@ def test_categorical_series_repr_timedelta_ordered(self):
 4   5 days
 dtype: category
 Categories (5, timedelta64[ns]): [1 days < 2 days < 3 days < 4 days < 5 days]"""
+
         self.assertEqual(repr(s), exp)
 
         idx = pd.timedelta_range('1 hours', periods=10)
@@ -2223,10 +2451,11 @@ def test_categorical_series_repr_timedelta_ordered(self):
 Categories (10, timedelta64[ns]): [0 days 01:00:00 < 1 days 01:00:00 < 2 days 01:00:00 < 3 days 01:00:00 ...
                                    6 days 01:00:00 < 7 days 01:00:00 < 8 days 01:00:00 < 9 days 01:00:00]"""
+
         self.assertEqual(repr(s), exp)
 
     def test_categorical_index_repr(self):
-        idx = pd.CategoricalIndex(pd.Categorical([1, 2 ,3]))
+        idx = pd.CategoricalIndex(pd.Categorical([1, 2, 3]))
         exp = """CategoricalIndex([1, 2, 3], categories=[1, 2, 3], ordered=False, dtype='category')"""
         self.assertEqual(repr(idx), exp)
@@ -2235,7 +2464,7 @@ def test_categorical_index_repr(self):
         self.assertEqual(repr(i), exp)
 
     def test_categorical_index_repr_ordered(self):
-        i = pd.CategoricalIndex(pd.Categorical([1, 2 ,3], ordered=True))
+        i = pd.CategoricalIndex(pd.Categorical([1, 2, 3], ordered=True))
         exp = """CategoricalIndex([1, 2, 3], categories=[1, 2, 3], ordered=True, dtype='category')"""
         self.assertEqual(repr(i), exp)
@@ -2250,14 +2479,17 @@ def test_categorical_index_repr_datetime(self):
                   '2011-01-01 11:00:00', '2011-01-01 12:00:00',
                   '2011-01-01 13:00:00'],
                  categories=[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00], ordered=False, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
-        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5, tz='US/Eastern')
+        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
+                            tz='US/Eastern')
         i = pd.CategoricalIndex(pd.Categorical(idx))
         exp = """CategoricalIndex(['2011-01-01 09:00:00-05:00', '2011-01-01 10:00:00-05:00',
                   '2011-01-01 11:00:00-05:00', '2011-01-01 12:00:00-05:00',
                   '2011-01-01 13:00:00-05:00'],
                  categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=False, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
     def test_categorical_index_repr_datetime_ordered(self):
@@ -2267,14 +2499,17 @@ def test_categorical_index_repr_datetime_ordered(self):
                   '2011-01-01 11:00:00', '2011-01-01 12:00:00',
                   '2011-01-01 13:00:00'],
                  categories=[2011-01-01 09:00:00, 2011-01-01 10:00:00, 2011-01-01 11:00:00, 2011-01-01 12:00:00, 2011-01-01 13:00:00], ordered=True, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
-        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5, tz='US/Eastern')
+        idx = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
+                            tz='US/Eastern')
         i = pd.CategoricalIndex(pd.Categorical(idx, ordered=True))
         exp = """CategoricalIndex(['2011-01-01 09:00:00-05:00', '2011-01-01 10:00:00-05:00',
                   '2011-01-01 11:00:00-05:00', '2011-01-01 12:00:00-05:00',
                   '2011-01-01 13:00:00-05:00'],
                  categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=True, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
         i = pd.CategoricalIndex(pd.Categorical(idx.append(idx), ordered=True))
@@ -2284,6 +2519,7 @@ def test_categorical_index_repr_datetime_ordered(self):
                   '2011-01-01 10:00:00-05:00', '2011-01-01 11:00:00-05:00',
                   '2011-01-01 12:00:00-05:00', '2011-01-01 13:00:00-05:00'],
                  categories=[2011-01-01 09:00:00-05:00, 2011-01-01 10:00:00-05:00, 2011-01-01 11:00:00-05:00, 2011-01-01 12:00:00-05:00, 2011-01-01 13:00:00-05:00], ordered=True, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
     def test_categorical_index_repr_period(self):
@@ -2308,6 +2544,7 @@ def test_categorical_index_repr_period(self):
         exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00',
                   '2011-01-01 12:00', '2011-01-01 13:00'],
                  categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=False, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
         i = pd.CategoricalIndex(pd.Categorical(idx.append(idx)))
@@ -2316,6 +2553,7 @@ def test_categorical_index_repr_period(self):
                   '2011-01-01 10:00', '2011-01-01 11:00', '2011-01-01 12:00',
                   '2011-01-01 13:00'],
                  categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=False, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
         idx = pd.period_range('2011-01', freq='M', periods=5)
@@ -2329,6 +2567,7 @@ def test_categorical_index_repr_period_ordered(self):
         exp = """CategoricalIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00',
                   '2011-01-01 12:00', '2011-01-01 13:00'],
                  categories=[2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00, 2011-01-01 13:00], ordered=True, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
         idx = pd.period_range('2011-01', freq='M', periods=5)
@@ -2349,6 +2588,7 @@ def test_categorical_index_repr_timedelta(self):
                   '6 days 01:00:00', '7 days 01:00:00', '8 days 01:00:00',
                   '9 days 01:00:00'],
                  categories=[0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, 4 days 01:00:00, 5 days 01:00:00, 6 days 01:00:00, 7 days 01:00:00, ...], ordered=False, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
     def test_categorical_index_repr_timedelta_ordered(self):
@@ -2364,11 +2604,13 @@ def test_categorical_index_repr_timedelta_ordered(self):
                   '6 days 01:00:00', '7 days 01:00:00', '8 days 01:00:00',
                   '9 days 01:00:00'],
                  categories=[0 days 01:00:00, 1 days 01:00:00, 2 days 01:00:00, 3 days 01:00:00, 4 days 01:00:00, 5 days 01:00:00, 6 days 01:00:00, 7 days 01:00:00, ...], ordered=True, dtype='category')"""
+
         self.assertEqual(repr(i), exp)
 
     def test_categorical_frame(self):
         # normal DataFrame
-        dt = pd.date_range('2011-01-01 09:00', freq='H', periods=5, tz='US/Eastern')
+        dt = pd.date_range('2011-01-01 09:00', freq='H', periods=5,
+                           tz='US/Eastern')
         p = pd.period_range('2011-01', freq='M', periods=5)
         df = pd.DataFrame({'dt': dt, 'p': p})
         exp = """                         dt        p
@@ -2385,12 +2627,13 @@ def test_info(self):
 
         # make sure it works
         n = 2500
-        df = DataFrame({ 'int64' : np.random.randint(100,size=n) })
-        df['category'] = Series(np.array(list('abcdefghij')).take(np.random.randint(0,10,size=n))).astype('category')
+        df = DataFrame({'int64': np.random.randint(100, size=n)})
+        df['category'] = Series(np.array(list('abcdefghij')).take(
+            np.random.randint(0, 10, size=n))).astype('category')
         df.isnull()
         df.info()
 
-        df2 = df[df['category']=='d']
+        df2 = df[df['category'] == 'd']
         df2.info()
 
     def test_groupby_sort(self):
 
         # http://stackoverflow.com/questions/23814368/sorting-pandas-categorical-labels-after-groupby
         # This should result in a properly sorted Series so that the plot
         # has a sorted x axis
-        #self.cat.groupby(['value_group'])['value_group'].count().plot(kind='bar')
+        # self.cat.groupby(['value_group'])['value_group'].count().plot(kind='bar')
 
         res = self.cat.groupby(['value_group'])['value_group'].count()
         exp = res[sorted(res.index, key=lambda x: float(x.split()[0]))]
@@ -2407,56 +2650,68 @@ def test_min_max(self):
 
         # unordered cats have no min/max
-        cat = Series(Categorical(["a","b","c","d"], ordered=False))
-        self.assertRaises(TypeError, lambda : cat.min())
-        self.assertRaises(TypeError, lambda : cat.max())
+        cat = Series(Categorical(["a", "b", "c", "d"], ordered=False))
+        self.assertRaises(TypeError, lambda: cat.min())
+        self.assertRaises(TypeError, lambda: cat.max())
 
-        cat = Series(Categorical(["a","b","c","d"], ordered=True))
+        cat = Series(Categorical(["a", "b", "c", "d"], ordered=True))
         _min = cat.min()
         _max = cat.max()
         self.assertEqual(_min, "a")
         self.assertEqual(_max, "d")
 
-        cat = Series(Categorical(["a","b","c","d"], categories=['d','c','b','a'], ordered=True))
+        cat = Series(Categorical(["a", "b", "c", "d"], categories=[
+            'd', 'c', 'b', 'a'], ordered=True))
         _min = cat.min()
         _max = cat.max()
         self.assertEqual(_min, "d")
         self.assertEqual(_max, "a")
 
-        cat = Series(Categorical([np.nan,"b","c",np.nan], categories=['d','c','b','a'], ordered=True))
+        cat = Series(Categorical(
+            [np.nan, "b", "c", np.nan], categories=['d', 'c', 'b', 'a'
+                                                    ], ordered=True))
         _min = cat.min()
         _max = cat.max()
         self.assertTrue(np.isnan(_min))
         self.assertEqual(_max, "b")
 
-        cat = Series(Categorical([np.nan,1,2,np.nan], categories=[5,4,3,2,1], ordered=True))
+        cat = Series(Categorical(
+            [np.nan, 1, 2, np.nan], categories=[5, 4, 3, 2, 1], ordered=True))
         _min = cat.min()
         _max = cat.max()
         self.assertTrue(np.isnan(_min))
         self.assertEqual(_max, 1)
 
     def test_mode(self):
-        s = Series(Categorical([1,1,2,4,5,5,5], categories=[5,4,3,2,1], ordered=True))
+        s = Series(Categorical([1, 1, 2, 4, 5, 5, 5],
+                               categories=[5, 4, 3, 2, 1], ordered=True))
         res = s.mode()
-        exp = Series(Categorical([5], categories=[5,4,3,2,1], ordered=True))
+        exp = Series(Categorical([5], categories=[
+            5, 4, 3, 2, 1], ordered=True))
         tm.assert_series_equal(res, exp)
-        s = Series(Categorical([1,1,1,4,5,5,5], categories=[5,4,3,2,1], ordered=True))
+        s = Series(Categorical([1, 1, 1, 4, 5, 5, 5],
+                               categories=[5, 4, 3, 2, 1], ordered=True))
         res = s.mode()
-        exp = Series(Categorical([5,1], categories=[5,4,3,2,1], ordered=True))
+        exp = Series(Categorical([5, 1], categories=[
+            5, 4, 3, 2, 1], ordered=True))
         tm.assert_series_equal(res, exp)
-        s = Series(Categorical([1,2,3,4,5], categories=[5,4,3,2,1], ordered=True))
+        s = Series(Categorical([1, 2, 3, 4, 5], categories=[5, 4, 3, 2, 1],
+                               ordered=True))
        res = s.mode()
-        exp = Series(Categorical([], categories=[5,4,3,2,1], ordered=True))
+        exp = Series(Categorical([], categories=[5, 4, 3, 2, 1], ordered=True))
        tm.assert_series_equal(res, exp)
 
     def test_value_counts(self):
-        s = pd.Series(pd.Categorical(["a","b","c","c","c","b"], categories=["c","a","b","d"]))
+        s = pd.Series(pd.Categorical(
+            ["a", "b", "c", "c", "c", "b"], categories=["c", "a", "b", "d"]))
         res = s.value_counts(sort=False)
-        exp = Series([3,1,2,0], index=pd.CategoricalIndex(["c","a","b","d"]))
+        exp = Series([3, 1, 2, 0],
+                     index=pd.CategoricalIndex(["c", "a", "b", "d"]))
         tm.assert_series_equal(res, exp)
         res = s.value_counts(sort=True)
-        exp = Series([3,2,1,0], index=pd.CategoricalIndex(["c","b","a","d"]))
+        exp = Series([3, 2, 1, 0],
+                     index=pd.CategoricalIndex(["c", "b", "a", "d"]))
         tm.assert_series_equal(res, exp)
 
     def test_value_counts_with_nan(self):
@@ -2481,40 +2736,50 @@ def test_value_counts_with_nan(self):
         # category, it should be last.
         tm.assert_series_equal(
             s.value_counts(dropna=False, sort=False),
-            pd.Series([2, 1, 3], index=pd.CategoricalIndex(["a", "b", np.nan])))
+            pd.Series([2, 1, 3],
+                      index=pd.CategoricalIndex(["a", "b", np.nan])))
 
         with tm.assert_produces_warning(FutureWarning):
-            s = pd.Series(pd.Categorical(["a", "b", "a"], categories=["a", "b", np.nan]))
+            s = pd.Series(pd.Categorical(
+                ["a", "b", "a"], categories=["a", "b", np.nan]))
         tm.assert_series_equal(
             s.value_counts(dropna=True),
             pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"])))
         tm.assert_series_equal(
             s.value_counts(dropna=False),
-            pd.Series([2, 1, 0], index=pd.CategoricalIndex(["a", "b", np.nan])))
+            pd.Series([2, 1, 0],
+                      index=pd.CategoricalIndex(["a", "b", np.nan])))
 
         with tm.assert_produces_warning(FutureWarning):
-            s = pd.Series(pd.Categorical(["a", "b", None, "a", None, None],
-                                         categories=["a", "b", np.nan]))
+            s = pd.Series(pd.Categorical(
+                ["a", "b", None, "a", None, None], categories=["a", "b", np.nan
+                                                               ]))
         tm.assert_series_equal(
             s.value_counts(dropna=True),
             pd.Series([2, 1], index=pd.CategoricalIndex(["a", "b"])))
         tm.assert_series_equal(
             s.value_counts(dropna=False),
-            pd.Series([3, 2, 1], index=pd.CategoricalIndex([np.nan, "a", "b"])))
+            pd.Series([3, 2, 1],
+                      index=pd.CategoricalIndex([np.nan, "a", "b"])))
 
     def test_groupby(self):
-        cats = Categorical(["a", "a", "a", "b", "b", "b", "c", "c", "c"], categories=["a","b","c","d"], ordered=True)
-        data = DataFrame({"a":[1,1,1,2,2,2,3,4,5], "b":cats})
+        cats = Categorical(
+            ["a", "a", "a", "b", "b", "b", "c", "c", "c"
+             ], categories=["a", "b", "c", "d"], ordered=True)
+        data = DataFrame({"a": [1, 1, 1, 2, 2, 2, 3, 4, 5], "b": cats})
 
-        expected = DataFrame({'a': Series([1, 2, 4, np.nan],
-                                          index=pd.CategoricalIndex(['a', 'b', 'c', 'd'], name='b'))})
+        expected = DataFrame({'a': Series(
+            [1, 2, 4, np.nan], index=pd.CategoricalIndex(
+                ['a', 'b', 'c', 'd'], name='b'))})
+
         result = data.groupby("b").mean()
         tm.assert_frame_equal(result, expected)
 
-        raw_cat1 = Categorical(["a","a","b","b"], categories=["a","b","z"], ordered=True)
-        raw_cat2 = Categorical(["c","d","c","d"], categories=["c","d","y"], ordered=True)
-        df = DataFrame({"A":raw_cat1,"B":raw_cat2, "values":[1,2,3,4]})
+        raw_cat1 = Categorical(["a", "a", "b", "b"],
                               categories=["a", "b", "z"], ordered=True)
+        raw_cat2 = Categorical(["c", "d", "c", "d"],
+                               categories=["c", "d", "y"], ordered=True)
+        df = DataFrame({"A": raw_cat1, "B": raw_cat2, "values": [1, 2, 3, 4]})
 
         # single grouper
         gb = df.groupby("A")
@@ -2524,60 +2789,63 @@ def test_groupby(self):
         tm.assert_frame_equal(result, expected)
 
         # multiple groupers
-        gb = df.groupby(['A','B'])
-        expected = DataFrame({ 'values' : Series([1,2,np.nan,3,4,np.nan,np.nan,np.nan,np.nan],
-                                                 index=pd.MultiIndex.from_product([['a','b','z'],['c','d','y']],names=['A','B'])) })
+        gb = df.groupby(['A', 'B'])
+        expected = DataFrame({'values': Series(
+            [1, 2, np.nan, 3, 4, np.nan, np.nan, np.nan, np.nan
+             ], index=pd.MultiIndex.from_product(
+                 [['a', 'b', 'z'], ['c', 'd', 'y']], names=['A', 'B']))})
         result = gb.sum()
         tm.assert_frame_equal(result, expected)
 
         # multiple groupers with a non-cat
         df = df.copy()
-        df['C'] = ['foo','bar']*2
-        gb = df.groupby(['A','B','C'])
-        expected = DataFrame({ 'values' :
-                               Series(np.nan,index=pd.MultiIndex.from_product([['a','b','z'],
-                                                                               ['c','d','y'],
-                                                                               ['foo','bar']],
                                                                              names=['A','B','C']))
-                               }).sortlevel()
-        expected.iloc[[1,2,7,8],0] = [1,2,3,4]
+        df['C'] = ['foo', 'bar'] * 2
+        gb = df.groupby(['A', 'B', 'C'])
+        expected = DataFrame({'values': Series(
+            np.nan, index=pd.MultiIndex.from_product(
+                [['a', 'b', 'z'], ['c', 'd', 'y'], ['foo', 'bar']
+                 ], names=['A', 'B', 'C']))}).sortlevel()
+        expected.iloc[[1, 2, 7, 8], 0] = [1, 2, 3, 4]
         result = gb.sum()
         tm.assert_frame_equal(result, expected)
 
         # GH 8623
-        x=pd.DataFrame([[1,'John P. Doe'],[2,'Jane Dove'],[1,'John P. Doe']],
-                       columns=['person_id','person_name'])
+        x = pd.DataFrame([[1, 'John P. Doe'], [2, 'Jane Dove'],
+                          [1, 'John P. Doe']],
+                         columns=['person_id', 'person_name'])
         x['person_name'] = pd.Categorical(x.person_name)
 
         g = x.groupby(['person_id'])
-        result = g.transform(lambda x:x)
+        result = g.transform(lambda x: x)
         tm.assert_frame_equal(result, x[['person_name']])
 
         result = x.drop_duplicates('person_name')
-        expected = x.iloc[[0,1]]
+        expected = x.iloc[[0, 1]]
         tm.assert_frame_equal(result, expected)
 
         def f(x):
             return x.drop_duplicates('person_name').iloc[0]
 
         result = g.apply(f)
-        expected = x.iloc[[0,1]].copy()
-        expected.index = Index([1,2],name='person_id')
+        expected = x.iloc[[0, 1]].copy()
+        expected.index = Index([1, 2], name='person_id')
         expected['person_name'] = expected['person_name'].astype('object')
         tm.assert_frame_equal(result, expected)
 
         # GH 9921
         # Monotonic
         df = DataFrame({"a": [5, 15, 25]})
-        c = pd.cut(df.a, bins=[0,10,20,30,40])
+        c = pd.cut(df.a, bins=[0, 10, 20, 30, 40])
 
         result = df.a.groupby(c).transform(sum)
         tm.assert_series_equal(result, df['a'], check_names=False)
         self.assertTrue(result.name is None)
 
-        tm.assert_series_equal(df.a.groupby(c).transform(lambda xs: np.sum(xs)), df['a'])
+        tm.assert_series_equal(
+            df.a.groupby(c).transform(lambda xs: np.sum(xs)), df['a'])
         tm.assert_frame_equal(df.groupby(c).transform(sum), df[['a']])
-        tm.assert_frame_equal(df.groupby(c).transform(lambda xs: np.max(xs)), df[['a']])
+        tm.assert_frame_equal(
            df.groupby(c).transform(lambda xs: np.max(xs)), df[['a']])
 
         # Filter
         tm.assert_series_equal(df.a.groupby(c).filter(np.all), df['a'])
@@ -2585,45 +2853,53 @@ def f(x):
 
         # Non-monotonic
         df = DataFrame({"a": [5, 15, 25, -5]})
-        c = pd.cut(df.a, bins=[-10, 0,10,20,30,40])
+        c = pd.cut(df.a, bins=[-10, 0, 10, 20, 30, 40])
 
         result = df.a.groupby(c).transform(sum)
         tm.assert_series_equal(result, df['a'], check_names=False)
         self.assertTrue(result.name is None)
 
-        tm.assert_series_equal(df.a.groupby(c).transform(lambda xs: np.sum(xs)), df['a'])
+        tm.assert_series_equal(
            df.a.groupby(c).transform(lambda xs: np.sum(xs)), df['a'])
         tm.assert_frame_equal(df.groupby(c).transform(sum), df[['a']])
-        tm.assert_frame_equal(df.groupby(c).transform(lambda xs: np.sum(xs)), df[['a']])
+        tm.assert_frame_equal(
            df.groupby(c).transform(lambda xs: np.sum(xs)), df[['a']])
 
         # GH 9603
         df = pd.DataFrame({'a': [1, 0, 0, 0]})
         c = pd.cut(df.a, [0, 1, 2, 3, 4])
         result = df.groupby(c).apply(len)
-        expected = pd.Series([1, 0, 0, 0], index=pd.CategoricalIndex(c.values.categories))
+        expected = pd.Series([1, 0, 0, 0],
+                             index=pd.CategoricalIndex(c.values.categories))
         expected.index.name = 'a'
         tm.assert_series_equal(result, expected)
 
     def test_pivot_table(self):
-        raw_cat1 = Categorical(["a","a","b","b"], categories=["a","b","z"], ordered=True)
-        raw_cat2 = Categorical(["c","d","c","d"], categories=["c","d","y"], ordered=True)
-        df = DataFrame({"A":raw_cat1,"B":raw_cat2, "values":[1,2,3,4]})
+        raw_cat1 = Categorical(["a", "a", "b", "b"],
                               categories=["a", "b", "z"], ordered=True)
+        raw_cat2 = Categorical(["c", "d", "c", "d"],
                               categories=["c", "d", "y"], ordered=True)
+        df = DataFrame({"A": raw_cat1, "B": raw_cat2, "values": [1, 2, 3, 4]})
         result = pd.pivot_table(df, values='values', index=['A', 'B'])
 
-        expected = Series([1,2,np.nan,3,4,np.nan,np.nan,np.nan,np.nan],
-                          index=pd.MultiIndex.from_product([['a','b','z'],['c','d','y']],names=['A','B']),
+        expected = Series([1, 2, np.nan, 3, 4, np.nan, np.nan, np.nan, np.nan],
+                          index=pd.MultiIndex.from_product(
+                              [['a', 'b', 'z'], ['c', 'd', 'y']],
                              names=['A', 'B']),
                           name='values')
         tm.assert_series_equal(result, expected)
 
     def test_count(self):
-        s = Series(Categorical([np.nan,1,2,np.nan], categories=[5,4,3,2,1], ordered=True))
+        s = Series(Categorical([np.nan, 1, 2, np.nan],
                               categories=[5, 4, 3, 2, 1], ordered=True))
         result = s.count()
         self.assertEqual(result, 2)
 
     def test_sort(self):
-        c = Categorical(["a","b","b","a"], ordered=False)
+        c = Categorical(["a", "b", "b", "a"], ordered=False)
         cat = Series(c)
 
         # 9816 deprecated
@@ -2631,28 +2907,36 @@ def test_sort(self):
             c.order()
 
         # sort in the categories order
-        expected = Series(Categorical(["a","a","b","b"], ordered=False),index=[0,3,1,2])
+        expected = Series(
+            Categorical(["a", "a", "b", "b"],
+                        ordered=False), index=[0, 3, 1, 2])
         result = cat.sort_values()
         tm.assert_series_equal(result, expected)
 
-        cat = Series(Categorical(["a","c","b","d"], ordered=True))
+        cat = Series(Categorical(["a", "c", "b", "d"], ordered=True))
         res = cat.sort_values()
-        exp = np.array(["a","b","c","d"])
+        exp = np.array(["a", "b", "c", "d"])
         self.assert_numpy_array_equal(res.__array__(), exp)
 
-        cat = Series(Categorical(["a","c","b","d"], categories=["a","b","c","d"], ordered=True))
+        cat = Series(Categorical(["a", "c", "b", "d"], categories=[
            "a", "b", "c", "d"], ordered=True))
         res = cat.sort_values()
-        exp = np.array(["a","b","c","d"])
+        exp = np.array(["a", "b", "c", "d"])
         self.assert_numpy_array_equal(res.__array__(), exp)
 
         res = cat.sort_values(ascending=False)
-        exp = np.array(["d","c","b","a"])
+        exp = np.array(["d", "c", "b", "a"])
         self.assert_numpy_array_equal(res.__array__(), exp)
 
-        raw_cat1 = Categorical(["a","b","c","d"], categories=["a","b","c","d"], ordered=False)
-        raw_cat2 = Categorical(["a","b","c","d"], categories=["d","c","b","a"], ordered=True)
-        s = ["a","b","c","d"]
-        df = DataFrame({"unsort":raw_cat1,"sort":raw_cat2, "string":s, "values":[1,2,3,4]})
+        raw_cat1 = Categorical(["a", "b", "c", "d"],
                               categories=["a", "b", "c", "d"], ordered=False)
+        raw_cat2 = Categorical(["a", "b", "c", "d"],
                               categories=["d", "c", "b", "a"], ordered=True)
+        s = ["a", "b", "c", "d"]
+        df = DataFrame({"unsort": raw_cat1,
                        "sort": raw_cat2,
                        "string": s,
                        "values": [1, 2, 3, 4]})
 
         # Cats must be sorted in a dataframe
         res = df.sort_values(by=["string"], ascending=False)
@@ -2671,78 +2955,79 @@ def test_sort(self):
 
         # multi-columns sort
         # GH 7848
-        df = DataFrame({"id":[6,5,4,3,2,1], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
+        df = DataFrame({"id": [6, 5, 4, 3, 2, 1],
                        "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
         df["grade"] = pd.Categorical(df["raw_grade"], ordered=True)
         df['grade'] = df['grade'].cat.set_categories(['b', 'e', 'a'])
 
         # sorts 'grade' according to the order of the categories
         result = df.sort_values(by=['grade'])
-        expected = df.iloc[[1,2,5,0,3,4]]
-        tm.assert_frame_equal(result,expected)
+        expected = df.iloc[[1, 2, 5, 0, 3, 4]]
+        tm.assert_frame_equal(result, expected)
 
         # multi
         result = df.sort_values(by=['grade', 'id'])
-        expected = df.iloc[[2,1,5,4,3,0]]
-        tm.assert_frame_equal(result,expected)
+        expected = df.iloc[[2, 1, 5, 4, 3, 0]]
+        tm.assert_frame_equal(result, expected)
 
         # reverse
-        cat = Categorical(["a","c","c","b","d"], ordered=True)
+        cat = Categorical(["a", "c", "c", "b", "d"], ordered=True)
         res = cat.sort_values(ascending=False)
-        exp_val = np.array(["d","c", "c", "b","a"],dtype=object)
-        exp_categories = np.array(["a","b","c","d"],dtype=object)
+        exp_val = np.array(["d", "c", "c", "b", "a"], dtype=object)
+        exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp_val)
         self.assert_numpy_array_equal(res.categories, exp_categories)
 
         # some NaN positions
-        cat = Categorical(["a","c","b","d", np.nan], ordered=True)
+        cat = Categorical(["a", "c", "b", "d", np.nan], ordered=True)
         res = cat.sort_values(ascending=False, na_position='last')
-        exp_val = np.array(["d","c","b","a", np.nan],dtype=object)
-        exp_categories = np.array(["a","b","c","d"],dtype=object)
+        exp_val = np.array(["d", "c", "b", "a", np.nan], dtype=object)
+        exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp_val)
         self.assert_numpy_array_equal(res.categories, exp_categories)
 
-        cat = Categorical(["a","c","b","d", np.nan], ordered=True)
+        cat = Categorical(["a", "c", "b", "d", np.nan], ordered=True)
         res = cat.sort_values(ascending=False, na_position='first')
-        exp_val = np.array([np.nan, "d","c","b","a"],dtype=object)
-        exp_categories = np.array(["a","b","c","d"],dtype=object)
+        exp_val = np.array([np.nan, "d", "c", "b", "a"], dtype=object)
+        exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp_val)
         self.assert_numpy_array_equal(res.categories, exp_categories)
 
-        cat = Categorical(["a","c","b","d", np.nan], ordered=True)
+        cat = Categorical(["a", "c", "b", "d", np.nan], ordered=True)
         res = cat.sort_values(ascending=False, na_position='first')
-        exp_val = np.array([np.nan, "d","c","b","a"],dtype=object)
-        exp_categories = np.array(["a","b","c","d"],dtype=object)
+        exp_val = np.array([np.nan, "d", "c", "b", "a"], dtype=object)
+        exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp_val)
         self.assert_numpy_array_equal(res.categories, exp_categories)
 
-        cat = Categorical(["a","c","b","d", np.nan], ordered=True)
+        cat = Categorical(["a", "c", "b", "d", np.nan], ordered=True)
         res = cat.sort_values(ascending=False, na_position='last')
-        exp_val = np.array(["d","c","b","a",np.nan],dtype=object)
-        exp_categories = np.array(["a","b","c","d"],dtype=object)
+        exp_val = np.array(["d", "c", "b", "a", np.nan], dtype=object)
+        exp_categories = np.array(["a", "b", "c", "d"], dtype=object)
         self.assert_numpy_array_equal(res.__array__(), exp_val)
         self.assert_numpy_array_equal(res.categories, exp_categories)
 
     def test_slicing(self):
-        cat = Series(Categorical([1,2,3,4]))
+        cat = Series(Categorical([1, 2, 3, 4]))
         reversed = cat[::-1]
-        exp = np.array([4,3,2,1])
+        exp = np.array([4, 3, 2, 1])
         self.assert_numpy_array_equal(reversed.__array__(), exp)
 
-        df = DataFrame({'value': (np.arange(100)+1).astype('int64')})
-        df['D'] = pd.cut(df.value, bins=[0,25,50,75,100])
+        df = DataFrame({'value': (np.arange(100) + 1).astype('int64')})
+        df['D'] =
pd.cut(df.value, bins=[0, 25, 50, 75, 100]) - expected = Series([11,'(0, 25]'], index=['value','D'], name=10) + expected = Series([11, '(0, 25]'], index=['value', 'D'], name=10) result = df.iloc[10] tm.assert_series_equal(result, expected) - expected = DataFrame({'value': np.arange(11,21).astype('int64')}, - index=np.arange(10,20).astype('int64')) - expected['D'] = pd.cut(expected.value, bins=[0,25,50,75,100]) + expected = DataFrame({'value': np.arange(11, 21).astype('int64')}, + index=np.arange(10, 20).astype('int64')) + expected['D'] = pd.cut(expected.value, bins=[0, 25, 50, 75, 100]) result = df.iloc[10:20] tm.assert_frame_equal(result, expected) - expected = Series([9,'(0, 25]'],index=['value', 'D'], name=8) + expected = Series([9, '(0, 25]'], index=['value', 'D'], name=8) result = df.loc[8] tm.assert_series_equal(result, expected) @@ -2755,107 +3040,109 @@ def test_slicing_and_getting_ops(self): # - returning a row # - returning a single value - cats = pd.Categorical(["a","c","b","c","c","c","c"], categories=["a","b","c"]) - idx = pd.Index(["h","i","j","k","l","m","n"]) - values= [1,2,3,4,5,6,7] - df = pd.DataFrame({"cats":cats,"values":values}, index=idx) + cats = pd.Categorical( + ["a", "c", "b", "c", "c", "c", "c"], categories=["a", "b", "c"]) + idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) + values = [1, 2, 3, 4, 5, 6, 7] + df = pd.DataFrame({"cats": cats, "values": values}, index=idx) # the expected values - cats2 = pd.Categorical(["b","c"], categories=["a","b","c"]) - idx2 = pd.Index(["j","k"]) - values2= [3,4] + cats2 = pd.Categorical(["b", "c"], categories=["a", "b", "c"]) + idx2 = pd.Index(["j", "k"]) + values2 = [3, 4] # 2:4,: | "j":"k",: - exp_df = pd.DataFrame({"cats":cats2,"values":values2}, index=idx2) + exp_df = pd.DataFrame({"cats": cats2, "values": values2}, index=idx2) # :,"cats" | :,0 - exp_col = pd.Series(cats,index=idx,name='cats') + exp_col = pd.Series(cats, index=idx, name='cats') # "j",: | 2,: - exp_row = pd.Series(["b",3], 
index=["cats","values"], dtype="object", name="j") + exp_row = pd.Series(["b", 3], index=["cats", "values"], dtype="object", + name="j") # "j","cats | 2,0 exp_val = "b" # iloc # frame - res_df = df.iloc[2:4,:] + res_df = df.iloc[2:4, :] tm.assert_frame_equal(res_df, exp_df) self.assertTrue(com.is_categorical_dtype(res_df["cats"])) # row - res_row = df.iloc[2,:] + res_row = df.iloc[2, :] tm.assert_series_equal(res_row, exp_row) tm.assertIsInstance(res_row["cats"], compat.string_types) # col - res_col = df.iloc[:,0] + res_col = df.iloc[:, 0] tm.assert_series_equal(res_col, exp_col) self.assertTrue(com.is_categorical_dtype(res_col)) # single value - res_val = df.iloc[2,0] + res_val = df.iloc[2, 0] self.assertEqual(res_val, exp_val) # loc # frame - res_df = df.loc["j":"k",:] + res_df = df.loc["j":"k", :] tm.assert_frame_equal(res_df, exp_df) self.assertTrue(com.is_categorical_dtype(res_df["cats"])) # row - res_row = df.loc["j",:] + res_row = df.loc["j", :] tm.assert_series_equal(res_row, exp_row) tm.assertIsInstance(res_row["cats"], compat.string_types) # col - res_col = df.loc[:,"cats"] + res_col = df.loc[:, "cats"] tm.assert_series_equal(res_col, exp_col) self.assertTrue(com.is_categorical_dtype(res_col)) # single value - res_val = df.loc["j","cats"] + res_val = df.loc["j", "cats"] self.assertEqual(res_val, exp_val) # ix # frame - #res_df = df.ix["j":"k",[0,1]] # doesn't work? - res_df = df.ix["j":"k",:] + # res_df = df.ix["j":"k",[0,1]] # doesn't work? 
+ res_df = df.ix["j":"k", :] tm.assert_frame_equal(res_df, exp_df) self.assertTrue(com.is_categorical_dtype(res_df["cats"])) # row - res_row = df.ix["j",:] + res_row = df.ix["j", :] tm.assert_series_equal(res_row, exp_row) tm.assertIsInstance(res_row["cats"], compat.string_types) # col - res_col = df.ix[:,"cats"] + res_col = df.ix[:, "cats"] tm.assert_series_equal(res_col, exp_col) self.assertTrue(com.is_categorical_dtype(res_col)) # single value - res_val = df.ix["j",0] + res_val = df.ix["j", 0] self.assertEqual(res_val, exp_val) # iat - res_val = df.iat[2,0] + res_val = df.iat[2, 0] self.assertEqual(res_val, exp_val) # at - res_val = df.at["j","cats"] + res_val = df.at["j", "cats"] self.assertEqual(res_val, exp_val) # fancy indexing exp_fancy = df.iloc[[2]] res_fancy = df[df["cats"] == "b"] - tm.assert_frame_equal(res_fancy,exp_fancy) + tm.assert_frame_equal(res_fancy, exp_fancy) res_fancy = df[df["values"] == 3] - tm.assert_frame_equal(res_fancy,exp_fancy) + tm.assert_frame_equal(res_fancy, exp_fancy) # get_value - res_val = df.get_value("j","cats") + res_val = df.get_value("j", "cats") self.assertEqual(res_val, exp_val) # i : int, slice, or sequence of integers @@ -2863,372 +3150,430 @@ def test_slicing_and_getting_ops(self): tm.assert_series_equal(res_row, exp_row) tm.assertIsInstance(res_row["cats"], compat.string_types) - res_df = df.iloc[slice(2,4)] + res_df = df.iloc[slice(2, 4)] tm.assert_frame_equal(res_df, exp_df) self.assertTrue(com.is_categorical_dtype(res_df["cats"])) - res_df = df.iloc[[2,3]] + res_df = df.iloc[[2, 3]] tm.assert_frame_equal(res_df, exp_df) self.assertTrue(com.is_categorical_dtype(res_df["cats"])) - res_col = df.iloc[:,0] + res_col = df.iloc[:, 0] tm.assert_series_equal(res_col, exp_col) self.assertTrue(com.is_categorical_dtype(res_col)) - res_df = df.iloc[:,slice(0,2)] + res_df = df.iloc[:, slice(0, 2)] tm.assert_frame_equal(res_df, df) self.assertTrue(com.is_categorical_dtype(res_df["cats"])) - res_df = df.iloc[:,[0,1]] + res_df = 
df.iloc[:, [0, 1]] tm.assert_frame_equal(res_df, df) self.assertTrue(com.is_categorical_dtype(res_df["cats"])) def test_slicing_doc_examples(self): - #GH 7918 - cats = Categorical(["a","b","b","b","c","c","c"], categories=["a","b","c"]) - idx = Index(["h","i","j","k","l","m","n",]) - values= [1,2,2,2,3,4,5] - df = DataFrame({"cats":cats,"values":values}, index=idx) - - result = df.iloc[2:4,:] - expected = DataFrame({"cats":Categorical(['b','b'],categories=['a','b','c']),"values":[2,2]}, index=['j','k']) + # GH 7918 + cats = Categorical( + ["a", "b", "b", "b", "c", "c", "c"], categories=["a", "b", "c"]) + idx = Index(["h", "i", "j", "k", "l", "m", "n", ]) + values = [1, 2, 2, 2, 3, 4, 5] + df = DataFrame({"cats": cats, "values": values}, index=idx) + + result = df.iloc[2:4, :] + expected = DataFrame( + {"cats": Categorical( + ['b', 'b'], categories=['a', 'b', 'c']), + "values": [2, 2]}, index=['j', 'k']) tm.assert_frame_equal(result, expected) - result = df.iloc[2:4,:].dtypes - expected = Series(['category','int64'],['cats','values']) + result = df.iloc[2:4, :].dtypes + expected = Series(['category', 'int64'], ['cats', 'values']) tm.assert_series_equal(result, expected) - result = df.loc["h":"j","cats"] - expected = Series(Categorical(['a','b','b'], - categories=['a','b','c']), index=['h','i','j'], name='cats') + result = df.loc["h":"j", "cats"] + expected = Series(Categorical(['a', 'b', 'b'], + categories=['a', 'b', 'c']), + index=['h', 'i', 'j'], name='cats') tm.assert_series_equal(result, expected) - result = df.ix["h":"j",0:1] - expected = DataFrame({'cats' : Series(Categorical(['a','b','b'],categories=['a','b','c']),index=['h','i','j']) }) + result = df.ix["h":"j", 0:1] + expected = DataFrame({'cats': Series( + Categorical( + ['a', 'b', 'b'], categories=['a', 'b', 'c']), index=['h', 'i', + 'j'])}) tm.assert_frame_equal(result, expected) def test_assigning_ops(self): - # systematically test the assigning operations: # for all slicing ops: # for value in 
categories and value not in categories: + # - assign a single value -> exp_single_cats_value + # - assign a complete row (mixed values) -> exp_single_row - # - assign multiple rows (mixed values) (-> array) -> exp_multi_row - # - assign a part of a column with dtype == categorical -> exp_parts_cats_col - # - assign a part of a column with dtype != categorical -> exp_parts_cats_col - cats = pd.Categorical(["a","a","a","a","a","a","a"], categories=["a","b"]) - idx = pd.Index(["h","i","j","k","l","m","n"]) - values = [1,1,1,1,1,1,1] - orig = pd.DataFrame({"cats":cats,"values":values}, index=idx) + # assign multiple rows (mixed values) (-> array) -> exp_multi_row - ### the expected values - # changed single row - cats1 = pd.Categorical(["a","a","b","a","a","a","a"], categories=["a","b"]) - idx1 = pd.Index(["h","i","j","k","l","m","n"]) - values1 = [1,1,2,1,1,1,1] - exp_single_row = pd.DataFrame({"cats":cats1,"values":values1}, index=idx1) + # assign a part of a column with dtype == categorical -> + # exp_parts_cats_col + + # assign a part of a column with dtype != categorical -> + # exp_parts_cats_col - #changed multiple rows - cats2 = pd.Categorical(["a","a","b","b","a","a","a"], categories=["a","b"]) - idx2 = pd.Index(["h","i","j","k","l","m","n"]) - values2 = [1,1,2,2,1,1,1] - exp_multi_row = pd.DataFrame({"cats":cats2,"values":values2}, index=idx2) + cats = pd.Categorical( + ["a", "a", "a", "a", "a", "a", "a"], categories=["a", "b"]) + idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) + values = [1, 1, 1, 1, 1, 1, 1] + orig = pd.DataFrame({"cats": cats, "values": values}, index=idx) + + # the expected values + # changed single row + cats1 = pd.Categorical( + ["a", "a", "b", "a", "a", "a", "a"], categories=["a", "b"]) + idx1 = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) + values1 = [1, 1, 2, 1, 1, 1, 1] + exp_single_row = pd.DataFrame( + {"cats": cats1, + "values": values1}, index=idx1) + + # changed multiple rows + cats2 = pd.Categorical( + ["a", "a", "b", "b", 
"a", "a", "a"], categories=["a", "b"]) + idx2 = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) + values2 = [1, 1, 2, 2, 1, 1, 1] + exp_multi_row = pd.DataFrame( + {"cats": cats2, + "values": values2}, index=idx2) # changed part of the cats column - cats3 = pd.Categorical(["a","a","b","b","a","a","a"], categories=["a","b"]) - idx3 = pd.Index(["h","i","j","k","l","m","n"]) - values3 = [1,1,1,1,1,1,1] - exp_parts_cats_col = pd.DataFrame({"cats":cats3,"values":values3}, index=idx3) + cats3 = pd.Categorical( + ["a", "a", "b", "b", "a", "a", "a"], categories=["a", "b"]) + idx3 = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) + values3 = [1, 1, 1, 1, 1, 1, 1] + exp_parts_cats_col = pd.DataFrame( + {"cats": cats3, + "values": values3}, index=idx3) # changed single value in cats col - cats4 = pd.Categorical(["a","a","b","a","a","a","a"], categories=["a","b"]) - idx4 = pd.Index(["h","i","j","k","l","m","n"]) - values4 = [1,1,1,1,1,1,1] - exp_single_cats_value = pd.DataFrame({"cats":cats4,"values":values4}, index=idx4) - - #### iloc ##### - ################ + cats4 = pd.Categorical( + ["a", "a", "b", "a", "a", "a", "a"], categories=["a", "b"]) + idx4 = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) + values4 = [1, 1, 1, 1, 1, 1, 1] + exp_single_cats_value = pd.DataFrame( + {"cats": cats4, + "values": values4}, index=idx4) + + # iloc + # ############### # - assign a single value -> exp_single_cats_value df = orig.copy() - df.iloc[2,0] = "b" + df.iloc[2, 0] = "b" tm.assert_frame_equal(df, exp_single_cats_value) - df = orig.copy() - df.iloc[df.index == "j",0] = "b" + df.iloc[df.index == "j", 0] = "b" tm.assert_frame_equal(df, exp_single_cats_value) - # - assign a single value not in the current categories set def f(): df = orig.copy() - df.iloc[2,0] = "c" + df.iloc[2, 0] = "c" + self.assertRaises(ValueError, f) # - assign a complete row (mixed values) -> exp_single_row df = orig.copy() - df.iloc[2,:] = ["b",2] + df.iloc[2, :] = ["b", 2] tm.assert_frame_equal(df, exp_single_row) # - 
assign a complete row (mixed values) not in categories set def f(): df = orig.copy() - df.iloc[2,:] = ["c",2] + df.iloc[2, :] = ["c", 2] + self.assertRaises(ValueError, f) # - assign multiple rows (mixed values) -> exp_multi_row df = orig.copy() - df.iloc[2:4,:] = [["b",2],["b",2]] + df.iloc[2:4, :] = [["b", 2], ["b", 2]] tm.assert_frame_equal(df, exp_multi_row) def f(): df = orig.copy() - df.iloc[2:4,:] = [["c",2],["c",2]] + df.iloc[2:4, :] = [["c", 2], ["c", 2]] + self.assertRaises(ValueError, f) - # - assign a part of a column with dtype == categorical -> exp_parts_cats_col + # assign a part of a column with dtype == categorical -> + # exp_parts_cats_col df = orig.copy() - df.iloc[2:4,0] = pd.Categorical(["b","b"], categories=["a","b"]) + df.iloc[2:4, 0] = pd.Categorical(["b", "b"], categories=["a", "b"]) tm.assert_frame_equal(df, exp_parts_cats_col) with tm.assertRaises(ValueError): # different categories -> not sure if this should fail or pass df = orig.copy() - df.iloc[2:4,0] = pd.Categorical(["b","b"], categories=["a","b","c"]) + df.iloc[2:4, 0] = pd.Categorical( + ["b", "b"], categories=["a", "b", "c"]) with tm.assertRaises(ValueError): # different values df = orig.copy() - df.iloc[2:4,0] = pd.Categorical(["c","c"], categories=["a","b","c"]) + df.iloc[2:4, 0] = pd.Categorical( + ["c", "c"], categories=["a", "b", "c"]) - # - assign a part of a column with dtype != categorical -> exp_parts_cats_col + # assign a part of a column with dtype != categorical -> + # exp_parts_cats_col df = orig.copy() - df.iloc[2:4,0] = ["b","b"] + df.iloc[2:4, 0] = ["b", "b"] tm.assert_frame_equal(df, exp_parts_cats_col) with tm.assertRaises(ValueError): - df.iloc[2:4,0] = ["c","c"] + df.iloc[2:4, 0] = ["c", "c"] - #### loc ##### - ################ + # loc + # ############## # - assign a single value -> exp_single_cats_value df = orig.copy() - df.loc["j","cats"] = "b" + df.loc["j", "cats"] = "b" tm.assert_frame_equal(df, exp_single_cats_value) df = orig.copy() - df.loc[df.index == 
"j","cats"] = "b" + df.loc[df.index == "j", "cats"] = "b" tm.assert_frame_equal(df, exp_single_cats_value) # - assign a single value not in the current categories set def f(): df = orig.copy() - df.loc["j","cats"] = "c" + df.loc["j", "cats"] = "c" + self.assertRaises(ValueError, f) # - assign a complete row (mixed values) -> exp_single_row df = orig.copy() - df.loc["j",:] = ["b",2] + df.loc["j", :] = ["b", 2] tm.assert_frame_equal(df, exp_single_row) # - assign a complete row (mixed values) not in categories set def f(): df = orig.copy() - df.loc["j",:] = ["c",2] + df.loc["j", :] = ["c", 2] + self.assertRaises(ValueError, f) # - assign multiple rows (mixed values) -> exp_multi_row df = orig.copy() - df.loc["j":"k",:] = [["b",2],["b",2]] + df.loc["j":"k", :] = [["b", 2], ["b", 2]] tm.assert_frame_equal(df, exp_multi_row) def f(): df = orig.copy() - df.loc["j":"k",:] = [["c",2],["c",2]] + df.loc["j":"k", :] = [["c", 2], ["c", 2]] + self.assertRaises(ValueError, f) - # - assign a part of a column with dtype == categorical -> exp_parts_cats_col + # assign a part of a column with dtype == categorical -> + # exp_parts_cats_col df = orig.copy() - df.loc["j":"k","cats"] = pd.Categorical(["b","b"], categories=["a","b"]) + df.loc["j":"k", "cats"] = pd.Categorical( + ["b", "b"], categories=["a", "b"]) tm.assert_frame_equal(df, exp_parts_cats_col) with tm.assertRaises(ValueError): # different categories -> not sure if this should fail or pass df = orig.copy() - df.loc["j":"k","cats"] = pd.Categorical(["b","b"], categories=["a","b","c"]) + df.loc["j":"k", "cats"] = pd.Categorical( + ["b", "b"], categories=["a", "b", "c"]) with tm.assertRaises(ValueError): # different values df = orig.copy() - df.loc["j":"k","cats"] = pd.Categorical(["c","c"], categories=["a","b","c"]) + df.loc["j":"k", "cats"] = pd.Categorical( + ["c", "c"], categories=["a", "b", "c"]) - # - assign a part of a column with dtype != categorical -> exp_parts_cats_col + # assign a part of a column with dtype != 
categorical -> + # exp_parts_cats_col df = orig.copy() - df.loc["j":"k","cats"] = ["b","b"] + df.loc["j":"k", "cats"] = ["b", "b"] tm.assert_frame_equal(df, exp_parts_cats_col) with tm.assertRaises(ValueError): - df.loc["j":"k","cats"] = ["c","c"] + df.loc["j":"k", "cats"] = ["c", "c"] - #### ix ##### - ################ + # ix + # ############## # - assign a single value -> exp_single_cats_value df = orig.copy() - df.ix["j",0] = "b" + df.ix["j", 0] = "b" tm.assert_frame_equal(df, exp_single_cats_value) df = orig.copy() - df.ix[df.index == "j",0] = "b" + df.ix[df.index == "j", 0] = "b" tm.assert_frame_equal(df, exp_single_cats_value) # - assign a single value not in the current categories set def f(): df = orig.copy() - df.ix["j",0] = "c" + df.ix["j", 0] = "c" + self.assertRaises(ValueError, f) # - assign a complete row (mixed values) -> exp_single_row df = orig.copy() - df.ix["j",:] = ["b",2] + df.ix["j", :] = ["b", 2] tm.assert_frame_equal(df, exp_single_row) # - assign a complete row (mixed values) not in categories set def f(): df = orig.copy() - df.ix["j",:] = ["c",2] + df.ix["j", :] = ["c", 2] + self.assertRaises(ValueError, f) # - assign multiple rows (mixed values) -> exp_multi_row df = orig.copy() - df.ix["j":"k",:] = [["b",2],["b",2]] + df.ix["j":"k", :] = [["b", 2], ["b", 2]] tm.assert_frame_equal(df, exp_multi_row) def f(): df = orig.copy() - df.ix["j":"k",:] = [["c",2],["c",2]] + df.ix["j":"k", :] = [["c", 2], ["c", 2]] + self.assertRaises(ValueError, f) - # - assign a part of a column with dtype == categorical -> exp_parts_cats_col + # assign a part of a column with dtype == categorical -> + # exp_parts_cats_col df = orig.copy() - df.ix["j":"k",0] = pd.Categorical(["b","b"], categories=["a","b"]) + df.ix["j":"k", 0] = pd.Categorical(["b", "b"], categories=["a", "b"]) tm.assert_frame_equal(df, exp_parts_cats_col) with tm.assertRaises(ValueError): # different categories -> not sure if this should fail or pass df = orig.copy() - df.ix["j":"k",0] = 
pd.Categorical(["b","b"], categories=["a","b","c"]) + df.ix["j":"k", 0] = pd.Categorical( + ["b", "b"], categories=["a", "b", "c"]) with tm.assertRaises(ValueError): # different values df = orig.copy() - df.ix["j":"k",0] = pd.Categorical(["c","c"], categories=["a","b","c"]) + df.ix["j":"k", 0] = pd.Categorical( + ["c", "c"], categories=["a", "b", "c"]) - # - assign a part of a column with dtype != categorical -> exp_parts_cats_col + # assign a part of a column with dtype != categorical -> + # exp_parts_cats_col df = orig.copy() - df.ix["j":"k",0] = ["b","b"] + df.ix["j":"k", 0] = ["b", "b"] tm.assert_frame_equal(df, exp_parts_cats_col) with tm.assertRaises(ValueError): - df.ix["j":"k",0] = ["c","c"] + df.ix["j":"k", 0] = ["c", "c"] # iat df = orig.copy() - df.iat[2,0] = "b" + df.iat[2, 0] = "b" tm.assert_frame_equal(df, exp_single_cats_value) # - assign a single value not in the current categories set def f(): df = orig.copy() - df.iat[2,0] = "c" + df.iat[2, 0] = "c" + self.assertRaises(ValueError, f) # at # - assign a single value -> exp_single_cats_value df = orig.copy() - df.at["j","cats"] = "b" + df.at["j", "cats"] = "b" tm.assert_frame_equal(df, exp_single_cats_value) # - assign a single value not in the current categories set def f(): df = orig.copy() - df.at["j","cats"] = "c" + df.at["j", "cats"] = "c" + self.assertRaises(ValueError, f) # fancy indexing - catsf = pd.Categorical(["a","a","c","c","a","a","a"], categories=["a","b","c"]) - idxf = pd.Index(["h","i","j","k","l","m","n"]) - valuesf = [1,1,3,3,1,1,1] - df = pd.DataFrame({"cats":catsf,"values":valuesf}, index=idxf) + catsf = pd.Categorical( + ["a", "a", "c", "c", "a", "a", "a"], categories=["a", "b", "c"]) + idxf = pd.Index(["h", "i", "j", "k", "l", "m", "n"]) + valuesf = [1, 1, 3, 3, 1, 1, 1] + df = pd.DataFrame({"cats": catsf, "values": valuesf}, index=idxf) exp_fancy = exp_multi_row.copy() - exp_fancy["cats"].cat.set_categories(["a","b","c"], inplace=True) + 
exp_fancy["cats"].cat.set_categories(["a", "b", "c"], inplace=True) - df[df["cats"] == "c"] = ["b",2] + df[df["cats"] == "c"] = ["b", 2] tm.assert_frame_equal(df, exp_multi_row) # set_value df = orig.copy() - df.set_value("j","cats", "b") + df.set_value("j", "cats", "b") tm.assert_frame_equal(df, exp_single_cats_value) def f(): df = orig.copy() - df.set_value("j","cats", "c") + df.set_value("j", "cats", "c") + self.assertRaises(ValueError, f) - # Assigning a Category to parts of a int/... column uses the values of the Catgorical - df = pd.DataFrame({"a":[1,1,1,1,1], "b":["a","a","a","a","a"]}) - exp = pd.DataFrame({"a":[1,"b","b",1,1], "b":["a","a","b","b","a"]}) - df.loc[1:2,"a"] = pd.Categorical(["b","b"], categories=["a","b"]) - df.loc[2:3,"b"] = pd.Categorical(["b","b"], categories=["a","b"]) + # Assigning a Category to parts of a int/... column uses the values of + # the Catgorical + df = pd.DataFrame({"a": [1, 1, 1, 1, 1], + "b": ["a", "a", "a", "a", "a"]}) + exp = pd.DataFrame({"a": [1, "b", "b", 1, 1], + "b": ["a", "a", "b", "b", "a"]}) + df.loc[1:2, "a"] = pd.Categorical(["b", "b"], categories=["a", "b"]) + df.loc[2:3, "b"] = pd.Categorical(["b", "b"], categories=["a", "b"]) tm.assert_frame_equal(df, exp) - ######### Series ########## - orig = Series(pd.Categorical(["b","b"], categories=["a","b"])) + # Series + orig = Series(pd.Categorical(["b", "b"], categories=["a", "b"])) s = orig.copy() s[:] = "a" - exp = Series(pd.Categorical(["a","a"], categories=["a","b"])) + exp = Series(pd.Categorical(["a", "a"], categories=["a", "b"])) tm.assert_series_equal(s, exp) s = orig.copy() s[1] = "a" - exp = Series(pd.Categorical(["b","a"], categories=["a","b"])) + exp = Series(pd.Categorical(["b", "a"], categories=["a", "b"])) tm.assert_series_equal(s, exp) s = orig.copy() s[s.index > 0] = "a" - exp = Series(pd.Categorical(["b","a"], categories=["a","b"])) + exp = Series(pd.Categorical(["b", "a"], categories=["a", "b"])) tm.assert_series_equal(s, exp) s = orig.copy() 
s[[False, True]] = "a" - exp = Series(pd.Categorical(["b","a"], categories=["a","b"])) + exp = Series(pd.Categorical(["b", "a"], categories=["a", "b"])) tm.assert_series_equal(s, exp) s = orig.copy() s.index = ["x", "y"] s["y"] = "a" - exp = Series(pd.Categorical(["b","a"], categories=["a","b"]), index=["x", "y"]) + exp = Series( + pd.Categorical(["b", "a"], + categories=["a", "b"]), index=["x", "y"]) tm.assert_series_equal(s, exp) # ensure that one can set something to np.nan - s = Series(Categorical([1,2,3])) - exp = Series(Categorical([1,np.nan,3])) + s = Series(Categorical([1, 2, 3])) + exp = Series(Categorical([1, np.nan, 3])) s[1] = np.nan tm.assert_series_equal(s, exp) - def test_comparisons(self): tests_data = [(list("abc"), list("cba"), list("bbb")), - ([1,2,3], [3,2,1], [2,2,2])] - for data , reverse, base in tests_data: - cat_rev = pd.Series(pd.Categorical(data, categories=reverse, ordered=True)) - cat_rev_base = pd.Series(pd.Categorical(base, categories=reverse, ordered=True)) + ([1, 2, 3], [3, 2, 1], [2, 2, 2])] + for data, reverse, base in tests_data: + cat_rev = pd.Series(pd.Categorical(data, categories=reverse, + ordered=True)) + cat_rev_base = pd.Series(pd.Categorical(base, categories=reverse, + ordered=True)) cat = pd.Series(pd.Categorical(data, ordered=True)) - cat_base = pd.Series(pd.Categorical(base, categories=cat.cat.categories, ordered=True)) + cat_base = pd.Series(pd.Categorical( + base, categories=cat.cat.categories, ordered=True)) s = Series(base) a = np.array(base) @@ -3260,10 +3605,11 @@ def test_comparisons(self): # Only categories with same categories can be compared def f(): cat > cat_rev + self.assertRaises(TypeError, f) - # categorical cannot be compared to Series or numpy array, and also not the other way - # around + # categorical cannot be compared to Series or numpy array, and also + # not the other way around self.assertRaises(TypeError, lambda: cat > s) self.assertRaises(TypeError, lambda: cat_rev > s) 
self.assertRaises(TypeError, lambda: cat > a) @@ -3277,17 +3623,21 @@ def f(): # unequal comparison should raise for unordered cats cat = Series(Categorical(list("abc"))) + def f(): cat > "b" + self.assertRaises(TypeError, f) cat = Series(Categorical(list("abc"), ordered=False)) + def f(): cat > "b" + self.assertRaises(TypeError, f) - # https://github.com/pydata/pandas/issues/9836#issuecomment-92123057 and following - # comparisons with scalars not in categories should raise for unequal comps, but not for - # equal/not equal + # https://github.com/pydata/pandas/issues/9836#issuecomment-92123057 + # and following comparisons with scalars not in categories should raise + # for unequal comps, but not for equal/not equal cat = Series(Categorical(list("abc"), ordered=True)) self.assertRaises(TypeError, lambda: cat < "d") @@ -3295,12 +3645,11 @@ def f(): self.assertRaises(TypeError, lambda: "d" < cat) self.assertRaises(TypeError, lambda: "d" > cat) - self.assert_series_equal(cat == "d" , Series([False, False, False])) - self.assert_series_equal(cat != "d" , Series([True, True, True])) - + self.assert_series_equal(cat == "d", Series([False, False, False])) + self.assert_series_equal(cat != "d", Series([True, True, True])) # And test NaN handling... 
- cat = Series(Categorical(["a","b","c", np.nan])) + cat = Series(Categorical(["a", "b", "c", np.nan])) exp = Series([True, True, True, False]) res = (cat == cat) tm.assert_series_equal(res, exp) @@ -3309,47 +3658,47 @@ def test_cat_equality(self): # GH 8938 # allow equality comparisons - a = Series(list('abc'),dtype="category") - b = Series(list('abc'),dtype="object") - c = Series(['a','b','cc'],dtype="object") - d = Series(list('acb'),dtype="object") + a = Series(list('abc'), dtype="category") + b = Series(list('abc'), dtype="object") + c = Series(['a', 'b', 'cc'], dtype="object") + d = Series(list('acb'), dtype="object") e = Categorical(list('abc')) f = Categorical(list('acb')) # vs scalar - self.assertFalse((a=='a').all()) - self.assertTrue(((a!='a') == ~(a=='a')).all()) + self.assertFalse((a == 'a').all()) + self.assertTrue(((a != 'a') == ~(a == 'a')).all()) - self.assertFalse(('a'==a).all()) - self.assertTrue((a=='a')[0]) - self.assertTrue(('a'==a)[0]) - self.assertFalse(('a'!=a)[0]) + self.assertFalse(('a' == a).all()) + self.assertTrue((a == 'a')[0]) + self.assertTrue(('a' == a)[0]) + self.assertFalse(('a' != a)[0]) # vs list-like - self.assertTrue((a==a).all()) - self.assertFalse((a!=a).all()) + self.assertTrue((a == a).all()) + self.assertFalse((a != a).all()) - self.assertTrue((a==list(a)).all()) - self.assertTrue((a==b).all()) - self.assertTrue((b==a).all()) - self.assertTrue(((~(a==b))==(a!=b)).all()) - self.assertTrue(((~(b==a))==(b!=a)).all()) + self.assertTrue((a == list(a)).all()) + self.assertTrue((a == b).all()) + self.assertTrue((b == a).all()) + self.assertTrue(((~(a == b)) == (a != b)).all()) + self.assertTrue(((~(b == a)) == (b != a)).all()) - self.assertFalse((a==c).all()) - self.assertFalse((c==a).all()) - self.assertFalse((a==d).all()) - self.assertFalse((d==a).all()) + self.assertFalse((a == c).all()) + self.assertFalse((c == a).all()) + self.assertFalse((a == d).all()) + self.assertFalse((d == a).all()) # vs a cat-like - 
self.assertTrue((a==e).all()) - self.assertTrue((e==a).all()) - self.assertFalse((a==f).all()) - self.assertFalse((f==a).all()) + self.assertTrue((a == e).all()) + self.assertTrue((e == a).all()) + self.assertFalse((a == f).all()) + self.assertFalse((f == a).all()) - self.assertTrue(((~(a==e)==(a!=e)).all())) - self.assertTrue(((~(e==a)==(e!=a)).all())) - self.assertTrue(((~(a==f)==(a!=f)).all())) - self.assertTrue(((~(f==a)==(f!=a)).all())) + self.assertTrue(((~(a == e) == (a != e)).all())) + self.assertTrue(((~(e == a) == (e != a)).all())) + self.assertTrue(((~(a == f) == (a != f)).all())) + self.assertTrue(((~(f == a) == (f != a)).all())) # non-equality is not comparable self.assertRaises(TypeError, lambda: a < b) @@ -3358,109 +3707,147 @@ def test_cat_equality(self): self.assertRaises(TypeError, lambda: b > a) def test_concat(self): - cat = pd.Categorical(["a","b"], categories=["a","b"]) - vals = [1,2] - df = pd.DataFrame({"cats":cat, "vals":vals}) - cat2 = pd.Categorical(["a","b","a","b"], categories=["a","b"]) - vals2 = [1,2,1,2] - exp = pd.DataFrame({"cats":cat2, "vals":vals2}, index=pd.Index([0, 1, 0, 1])) - - res = pd.concat([df,df]) + cat = pd.Categorical(["a", "b"], categories=["a", "b"]) + vals = [1, 2] + df = pd.DataFrame({"cats": cat, "vals": vals}) + cat2 = pd.Categorical(["a", "b", "a", "b"], categories=["a", "b"]) + vals2 = [1, 2, 1, 2] + exp = pd.DataFrame({"cats": cat2, + "vals": vals2}, index=pd.Index([0, 1, 0, 1])) + + res = pd.concat([df, df]) tm.assert_frame_equal(exp, res) - # Concat should raise if the two categoricals do not have the same categories - cat3 = pd.Categorical(["a","b"], categories=["a","b","c"]) - vals3 = [1,2] - df_wrong_categories = pd.DataFrame({"cats":cat3, "vals":vals3}) + # Concat should raise if the two categoricals do not have the same + # categories + cat3 = pd.Categorical(["a", "b"], categories=["a", "b", "c"]) + vals3 = [1, 2] + df_wrong_categories = pd.DataFrame({"cats": cat3, "vals": vals3}) def f(): - 
pd.concat([df,df_wrong_categories]) + pd.concat([df, df_wrong_categories]) + self.assertRaises(ValueError, f) # GH 7864 # make sure ordering is preserverd - df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']}) + df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6], + "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']}) df["grade"] = pd.Categorical(df["raw_grade"]) df['grade'].cat.set_categories(['e', 'a', 'b']) df1 = df[0:3] df2 = df[3:] - self.assert_numpy_array_equal(df['grade'].cat.categories, df1['grade'].cat.categories) - self.assert_numpy_array_equal(df['grade'].cat.categories, df2['grade'].cat.categories) + self.assert_numpy_array_equal(df['grade'].cat.categories, + df1['grade'].cat.categories) + self.assert_numpy_array_equal(df['grade'].cat.categories, + df2['grade'].cat.categories) dfx = pd.concat([df1, df2]) dfx['grade'].cat.categories - self.assert_numpy_array_equal(df['grade'].cat.categories, dfx['grade'].cat.categories) + self.assert_numpy_array_equal(df['grade'].cat.categories, + dfx['grade'].cat.categories) def test_concat_preserve(self): # GH 8641 # series concat not preserving category dtype - s = Series(list('abc'),dtype='category') - s2 = Series(list('abd'),dtype='category') + s = Series(list('abc'), dtype='category') + s2 = Series(list('abd'), dtype='category') def f(): - pd.concat([s,s2]) + pd.concat([s, s2]) + self.assertRaises(ValueError, f) - result = pd.concat([s,s],ignore_index=True) + result = pd.concat([s, s], ignore_index=True) expected = Series(list('abcabc')).astype('category') tm.assert_series_equal(result, expected) - result = pd.concat([s,s]) - expected = Series(list('abcabc'),index=[0,1,2,0,1,2]).astype('category') + result = pd.concat([s, s]) + expected = Series( + list('abcabc'), index=[0, 1, 2, 0, 1, 2]).astype('category') tm.assert_series_equal(result, expected) - a = Series(np.arange(6,dtype='int64')) + a = Series(np.arange(6, dtype='int64')) b = Series(list('aabbca')) - df2 = DataFrame({'A' : a, 'B' : 
b.astype('category',categories=list('cab')) }) - result = pd.concat([df2,df2]) - expected = DataFrame({'A' : pd.concat([a,a]), 'B' : pd.concat([b,b]).astype('category',categories=list('cab')) }) + df2 = DataFrame({'A': a, + 'B': b.astype('category', categories=list('cab'))}) + result = pd.concat([df2, df2]) + expected = DataFrame({'A': pd.concat([a, a]), + 'B': pd.concat([b, b]).astype( + 'category', categories=list('cab'))}) tm.assert_frame_equal(result, expected) def test_categorical_index_preserver(self): - a = Series(np.arange(6,dtype='int64')) + a = Series(np.arange(6, dtype='int64')) b = Series(list('aabbca')) - df2 = DataFrame({'A' : a, 'B' : b.astype('category',categories=list('cab')) }).set_index('B') - result = pd.concat([df2,df2]) - expected = DataFrame({'A' : pd.concat([a,a]), 'B' : pd.concat([b,b]).astype('category',categories=list('cab')) }).set_index('B') + df2 = DataFrame({'A': a, + 'B': b.astype('category', categories=list( + 'cab'))}).set_index('B') + result = pd.concat([df2, df2]) + expected = DataFrame({'A': pd.concat([a, a]), + 'B': pd.concat([b, b]).astype( + 'category', categories=list( + 'cab'))}).set_index('B') tm.assert_frame_equal(result, expected) # wrong catgories - df3 = DataFrame({'A' : a, 'B' : b.astype('category',categories=list('abc')) }).set_index('B') - self.assertRaises(TypeError, lambda : pd.concat([df2,df3])) + df3 = DataFrame({'A': a, + 'B': b.astype('category', categories=list( + 'abc'))}).set_index('B') + self.assertRaises(TypeError, lambda: pd.concat([df2, df3])) def test_append(self): - cat = pd.Categorical(["a","b"], categories=["a","b"]) - vals = [1,2] - df = pd.DataFrame({"cats":cat, "vals":vals}) - cat2 = pd.Categorical(["a","b","a","b"], categories=["a","b"]) - vals2 = [1,2,1,2] - exp = pd.DataFrame({"cats":cat2, "vals":vals2}, index=pd.Index([0, 1, 0, 1])) + cat = pd.Categorical(["a", "b"], categories=["a", "b"]) + vals = [1, 2] + df = pd.DataFrame({"cats": cat, "vals": vals}) + cat2 = pd.Categorical(["a", "b", "a", 
"b"], categories=["a", "b"]) + vals2 = [1, 2, 1, 2] + exp = pd.DataFrame({"cats": cat2, + "vals": vals2}, index=pd.Index([0, 1, 0, 1])) res = df.append(df) tm.assert_frame_equal(exp, res) - # Concat should raise if the two categoricals do not have the same categories - cat3 = pd.Categorical(["a","b"], categories=["a","b","c"]) - vals3 = [1,2] - df_wrong_categories = pd.DataFrame({"cats":cat3, "vals":vals3}) + # Concat should raise if the two categoricals do not have the same + # categories + cat3 = pd.Categorical(["a", "b"], categories=["a", "b", "c"]) + vals3 = [1, 2] + df_wrong_categories = pd.DataFrame({"cats": cat3, "vals": vals3}) def f(): df.append(df_wrong_categories) + self.assertRaises(ValueError, f) def test_merge(self): # GH 9426 - right = DataFrame({'c': {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'}, - 'd': {0: 'null', 1: 'null', 2: 'null', 3: 'null', 4: 'null'}}) - left = DataFrame({'a': {0: 'f', 1: 'f', 2: 'f', 3: 'f', 4: 'f'}, - 'b': {0: 'g', 1: 'g', 2: 'g', 3: 'g', 4: 'g'}}) + right = DataFrame({'c': {0: 'a', + 1: 'b', + 2: 'c', + 3: 'd', + 4: 'e'}, + 'd': {0: 'null', + 1: 'null', + 2: 'null', + 3: 'null', + 4: 'null'}}) + left = DataFrame({'a': {0: 'f', + 1: 'f', + 2: 'f', + 3: 'f', + 4: 'f'}, + 'b': {0: 'g', + 1: 'g', + 2: 'g', + 3: 'g', + 4: 'g'}}) df = pd.merge(left, right, how='left', left_on='b', right_on='c') # object-object @@ -3487,33 +3874,34 @@ def test_merge(self): tm.assert_frame_equal(result, expected) def test_repeat(self): - #GH10183 - cat = pd.Categorical(["a","b"], categories=["a","b"]) - exp = pd.Categorical(["a", "a", "b", "b"], categories=["a","b"]) + # GH10183 + cat = pd.Categorical(["a", "b"], categories=["a", "b"]) + exp = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b"]) res = cat.repeat(2) self.assert_categorical_equal(res, exp) def test_na_actions(self): - cat = pd.Categorical([1,2,3,np.nan], categories=[1,2,3]) - vals = ["a","b",np.nan,"d"] - df = pd.DataFrame({"cats":cat, "vals":vals}) - cat2 = 
pd.Categorical([1,2,3,3], categories=[1,2,3]) - vals2 = ["a","b","b","d"] - df_exp_fill = pd.DataFrame({"cats":cat2, "vals":vals2}) - cat3 = pd.Categorical([1,2,3], categories=[1,2,3]) - vals3 = ["a","b",np.nan] - df_exp_drop_cats = pd.DataFrame({"cats":cat3, "vals":vals3}) - cat4 = pd.Categorical([1,2], categories=[1,2,3]) - vals4 = ["a","b"] - df_exp_drop_all = pd.DataFrame({"cats":cat4, "vals":vals4}) + cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3]) + vals = ["a", "b", np.nan, "d"] + df = pd.DataFrame({"cats": cat, "vals": vals}) + cat2 = pd.Categorical([1, 2, 3, 3], categories=[1, 2, 3]) + vals2 = ["a", "b", "b", "d"] + df_exp_fill = pd.DataFrame({"cats": cat2, "vals": vals2}) + cat3 = pd.Categorical([1, 2, 3], categories=[1, 2, 3]) + vals3 = ["a", "b", np.nan] + df_exp_drop_cats = pd.DataFrame({"cats": cat3, "vals": vals3}) + cat4 = pd.Categorical([1, 2], categories=[1, 2, 3]) + vals4 = ["a", "b"] + df_exp_drop_all = pd.DataFrame({"cats": cat4, "vals": vals4}) # fillna - res = df.fillna(value={"cats":3, "vals":"b"}) + res = df.fillna(value={"cats": 3, "vals": "b"}) tm.assert_frame_equal(res, df_exp_fill) def f(): - df.fillna(value={"cats":4, "vals":"c"}) + df.fillna(value={"cats": 4, "vals": "c"}) + self.assertRaises(ValueError, f) res = df.fillna(method='pad') @@ -3525,72 +3913,77 @@ def f(): res = df.dropna() tm.assert_frame_equal(res, df_exp_drop_all) - # make sure that fillna takes both missing values and NA categories into account - c = Categorical(["a","b",np.nan]) + # make sure that fillna takes both missing values and NA categories + # into account + c = Categorical(["a", "b", np.nan]) with tm.assert_produces_warning(FutureWarning): - c.set_categories(["a","b",np.nan], rename=True, inplace=True) + c.set_categories(["a", "b", np.nan], rename=True, inplace=True) c[0] = np.nan - df = pd.DataFrame({"cats":c, "vals":[1,2,3]}) - df_exp = pd.DataFrame({"cats": Categorical(["a","b","a"]), "vals": [1,2,3]}) + df = pd.DataFrame({"cats": c, "vals": 
[1, 2, 3]}) + df_exp = pd.DataFrame({"cats": Categorical(["a", "b", "a"]), + "vals": [1, 2, 3]}) res = df.fillna("a") tm.assert_frame_equal(res, df_exp) - def test_astype_to_other(self): s = self.cat['value_group'] expected = s - tm.assert_series_equal(s.astype('category'),expected) - tm.assert_series_equal(s.astype(com.CategoricalDtype()),expected) - self.assertRaises(ValueError, lambda : s.astype('float64')) + tm.assert_series_equal(s.astype('category'), expected) + tm.assert_series_equal(s.astype(com.CategoricalDtype()), expected) + self.assertRaises(ValueError, lambda: s.astype('float64')) cat = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'])) exp = Series(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c']) tm.assert_series_equal(cat.astype('str'), exp) s2 = Series(Categorical.from_array(['1', '2', '3', '4'])) - exp2 = Series([1,2,3,4]).astype(int) - tm.assert_series_equal(s2.astype('int') , exp2) + exp2 = Series([1, 2, 3, 4]).astype(int) + tm.assert_series_equal(s2.astype('int'), exp2) - # object don't sort correctly, so just compare that we have the same values - def cmp(a,b): - tm.assert_almost_equal(np.sort(np.unique(a)),np.sort(np.unique(b))) - expected = Series(np.array(s.values),name='value_group') - cmp(s.astype('object'),expected) - cmp(s.astype(np.object_),expected) + # object don't sort correctly, so just compare that we have the same + # values + def cmp(a, b): + tm.assert_almost_equal( + np.sort(np.unique(a)), np.sort(np.unique(b))) + + expected = Series(np.array(s.values), name='value_group') + cmp(s.astype('object'), expected) + cmp(s.astype(np.object_), expected) # array conversion - tm.assert_almost_equal(np.array(s),np.array(s.values)) + tm.assert_almost_equal(np.array(s), np.array(s.values)) # valid conversion for valid in [lambda x: x.astype('category'), lambda x: x.astype(com.CategoricalDtype()), lambda x: x.astype('object').astype('category'), - lambda x: x.astype('object').astype(com.CategoricalDtype())]: + lambda x: 
x.astype('object').astype( + com.CategoricalDtype()) + ]: result = valid(s) - tm.assert_series_equal(result,s) + tm.assert_series_equal(result, s) # invalid conversion (these are NOT a dtype) for invalid in [lambda x: x.astype(pd.Categorical), lambda x: x.astype('object').astype(pd.Categorical)]: - self.assertRaises(TypeError, lambda : invalid(s)) - + self.assertRaises(TypeError, lambda: invalid(s)) def test_astype_categorical(self): cat = Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c']) - tm.assert_categorical_equal(cat,cat.astype('category')) - tm.assert_almost_equal(np.array(cat),cat.astype('object')) + tm.assert_categorical_equal(cat, cat.astype('category')) + tm.assert_almost_equal(np.array(cat), cat.astype('object')) - self.assertRaises(ValueError, lambda : cat.astype(float)) + self.assertRaises(ValueError, lambda: cat.astype(float)) def test_to_records(self): # GH8626 # dict creation - df = DataFrame({ 'A' : list('abc') }, dtype='category') + df = DataFrame({'A': list('abc')}, dtype='category') expected = Series(list('abc'), dtype='category', name='A') tm.assert_series_equal(df['A'], expected) @@ -3609,40 +4002,44 @@ def test_to_records(self): def test_numeric_like_ops(self): # numeric ops should not succeed - for op in ['__add__','__sub__','__mul__','__truediv__']: - self.assertRaises(TypeError, lambda : getattr(self.cat,op)(self.cat)) + for op in ['__add__', '__sub__', '__mul__', '__truediv__']: + self.assertRaises(TypeError, + lambda: getattr(self.cat, op)(self.cat)) - # reduction ops should not succeed (unless specifically defined, e.g. min/max) + # reduction ops should not succeed (unless specifically defined, e.g. 
+ # min/max) s = self.cat['value_group'] - for op in ['kurt','skew','var','std','mean','sum','median']: - self.assertRaises(TypeError, lambda : getattr(s,op)(numeric_only=False)) + for op in ['kurt', 'skew', 'var', 'std', 'mean', 'sum', 'median']: + self.assertRaises(TypeError, + lambda: getattr(s, op)(numeric_only=False)) # mad technically works because it takes always the numeric data # numpy ops - s = pd.Series(pd.Categorical([1,2,3,4])) - self.assertRaises(TypeError, lambda : np.sum(s)) + s = pd.Series(pd.Categorical([1, 2, 3, 4])) + self.assertRaises(TypeError, lambda: np.sum(s)) # numeric ops on a Series - for op in ['__add__','__sub__','__mul__','__truediv__']: - self.assertRaises(TypeError, lambda : getattr(s,op)(2)) + for op in ['__add__', '__sub__', '__mul__', '__truediv__']: + self.assertRaises(TypeError, lambda: getattr(s, op)(2)) # invalid ufunc - self.assertRaises(TypeError, lambda : np.log(s)) + self.assertRaises(TypeError, lambda: np.log(s)) def test_cat_tab_completition(self): - # test the tab completion display - ok_for_cat = ['categories','codes','ordered','set_categories', - 'add_categories', 'remove_categories', 'rename_categories', - 'reorder_categories', 'remove_unused_categories', - 'as_ordered', 'as_unordered'] + # test the tab completion display + ok_for_cat = ['categories', 'codes', 'ordered', 'set_categories', + 'add_categories', 'remove_categories', + 'rename_categories', 'reorder_categories', + 'remove_unused_categories', 'as_ordered', 'as_unordered'] + def get_dir(s): - results = [ r for r in s.cat.__dir__() if not r.startswith('_') ] + results = [r for r in s.cat.__dir__() if not r.startswith('_')] return list(sorted(set(results))) s = Series(list('aabbcde')).astype('category') results = get_dir(s) - tm.assert_almost_equal(results,list(sorted(set(ok_for_cat)))) + tm.assert_almost_equal(results, list(sorted(set(ok_for_cat)))) def test_cat_accessor_api(self): # GH 9322 @@ -3659,7 +4056,8 @@ def test_cat_accessor_api(self): def 
test_cat_accessor_no_new_attributes(self): # https://github.com/pydata/pandas/issues/10673 c = Series(list('aabbcde')).astype('category') - with tm.assertRaisesRegexp(AttributeError, "You cannot add any new attribute"): + with tm.assertRaisesRegexp(AttributeError, + "You cannot add any new attribute"): c.cat.xlabel = "a" def test_str_accessor_api_for_categorical(self): @@ -3684,22 +4082,22 @@ def test_str_accessor_api_for_categorical(self): ('findall', ("a",), {}), ('index', (" ",), {}), ('ljust', (10,), {}), - ('match', ("a"), {}), # deprecated... + ('match', ("a"), {}), # deprecated... ('normalize', ("NFC",), {}), ('pad', (10,), {}), - ('partition', (" ",), {"expand": False}), # not default - ('partition', (" ",), {"expand": True}), # default + ('partition', (" ",), {"expand": False}), # not default + ('partition', (" ",), {"expand": True}), # default ('repeat', (3,), {}), ('replace', ("a", "z"), {}), ('rfind', ("a",), {}), ('rindex', (" ",), {}), ('rjust', (10,), {}), - ('rpartition', (" ",), {"expand": False}), # not default - ('rpartition', (" ",), {"expand": True}), # default - ('slice', (0,1), {}), - ('slice_replace', (0,1,"z"), {}), - ('split', (" ",), {"expand":False}), #default - ('split', (" ",), {"expand":True}), # not default + ('rpartition', (" ",), {"expand": False}), # not default + ('rpartition', (" ",), {"expand": True}), # default + ('slice', (0, 1), {}), + ('slice_replace', (0, 1, "z"), {}), + ('split', (" ",), {"expand": False}), # default + ('split', (" ",), {"expand": True}), # not default ('startswith', ("a",), {}), ('wrap', (2,), {}), ('zfill', (10,), {}) @@ -3712,14 +4110,14 @@ def test_str_accessor_api_for_categorical(self): # * `translate` has different interfaces for py2 vs. 
py3 _ignore_names = ["get", "join", "translate"] - str_func_names = [f for f in dir(s.str) if not (f.startswith("_") or - f in _special_func_names or - f in _ignore_names)] + str_func_names = [f + for f in dir(s.str) + if not (f.startswith("_") or f in _special_func_names + or f in _ignore_names)] func_defs = [(f, (), {}) for f in str_func_names] func_defs.extend(special_func_defs) - for func, args, kwargs in func_defs: res = getattr(c.str, func)(*args, **kwargs) exp = getattr(s.str, func)(*args, **kwargs) @@ -3729,8 +4127,9 @@ def test_str_accessor_api_for_categorical(self): else: tm.assert_series_equal(res, exp) - invalid = Series([1,2,3]).astype('category') - with tm.assertRaisesRegexp(AttributeError, "Can only use .str accessor with string"): + invalid = Series([1, 2, 3]).astype('category') + with tm.assertRaisesRegexp(AttributeError, + "Can only use .str accessor with string"): invalid.str self.assertFalse(hasattr(invalid, 'str')) @@ -3747,7 +4146,7 @@ def test_dt_accessor_api_for_categorical(self): s_pr = Series(period_range('1/1/2015', freq='D', periods=5)) c_pr = s_pr.astype("category") - s_tdr = Series(timedelta_range('1 days','10 days')) + s_tdr = Series(timedelta_range('1 days', '10 days')) c_tdr = s_tdr.astype("category") test_data = [ @@ -3771,10 +4170,10 @@ def test_dt_accessor_api_for_categorical(self): _ignore_names = ['tz_localize'] for name, attr_names, s, c in test_data: - func_names = [f for f in dir(s.dt) if not (f.startswith("_") or - f in attr_names or - f in _special_func_names or - f in _ignore_names)] + func_names = [f + for f in dir(s.dt) + if not (f.startswith("_") or f in attr_names or f in + _special_func_names or f in _ignore_names)] func_defs = [(f, (), {}) for f in func_names] for f_def in special_func_defs: @@ -3807,8 +4206,9 @@ def test_dt_accessor_api_for_categorical(self): else: tm.assert_numpy_array_equal(res, exp) - invalid = Series([1,2,3]).astype('category') - with tm.assertRaisesRegexp(AttributeError, "Can only use .dt 
accessor with datetimelike"): + invalid = Series([1, 2, 3]).astype('category') + with tm.assertRaisesRegexp( + AttributeError, "Can only use .dt accessor with datetimelike"): invalid.dt self.assertFalse(hasattr(invalid, 'str')) @@ -3852,25 +4252,31 @@ def test_pickle_v0_15_2(self): def test_concat_categorical(self): # See GH 10177 - df1 = pd.DataFrame(np.arange(18, dtype='int64').reshape(6, 3), columns=["a", "b", "c"]) + df1 = pd.DataFrame( + np.arange(18, dtype='int64').reshape(6, + 3), columns=["a", "b", "c"]) - df2 = pd.DataFrame(np.arange(14, dtype='int64').reshape(7, 2), columns=["a", "c"]) - df2['h'] = pd.Series(pd.Categorical(["one", "one", "two", "one", "two", "two", "one"])) + df2 = pd.DataFrame( + np.arange(14, dtype='int64').reshape(7, 2), columns=["a", "c"]) + df2['h'] = pd.Series(pd.Categorical(["one", "one", "two", "one", "two", + "two", "one"])) df_concat = pd.concat((df1, df2), axis=0).reset_index(drop=True) - df_expected = pd.DataFrame({'a': [0, 3, 6, 9, 12, 15, 0, 2, 4, 6, 8, 10, 12], - 'b': [1, 4, 7, 10, 13, 16, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], - 'c': [2, 5, 8, 11, 14, 17, 1, 3, 5, 7, 9, 11, 13]}) - df_expected['h'] = pd.Series(pd.Categorical([None, None, None, None, None, None, - "one", "one", "two", "one", "two", "two", "one"])) + df_expected = pd.DataFrame( + {'a': [0, 3, 6, 9, 12, 15, 0, 2, 4, 6, 8, 10, 12], + 'b': [1, 4, 7, 10, 13, 16, np.nan, np.nan, np.nan, np.nan, np.nan, + np.nan, np.nan], + 'c': [2, 5, 8, 11, 14, 17, 1, 3, 5, 7, 9, 11, 13]}) + df_expected['h'] = pd.Series(pd.Categorical( + [None, None, None, None, None, None, "one", "one", "two", "one", + "two", "two", "one"])) tm.assert_frame_equal(df_expected, df_concat) - if __name__ == '__main__': import nose nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], # '--with-coverage', '--cover-package=pandas.core'] - exit=False) + exit=False) diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index 
a22d8f11c9a75..3fd8ee5879ff8 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -8,9 +8,8 @@ import numpy as np import pandas as pd from pandas.tslib import iNaT, NaT -from pandas import (Series, DataFrame, date_range, - DatetimeIndex, TimedeltaIndex, - Timestamp, Float64Index) +from pandas import (Series, DataFrame, date_range, DatetimeIndex, + TimedeltaIndex, Timestamp, Float64Index) from pandas import compat from pandas.compat import range, long, lrange, lmap, u from pandas.core.common import notnull, isnull, array_equivalent @@ -33,17 +32,18 @@ def test_mut_exclusive(): def test_is_sequence(): is_seq = com.is_sequence - assert(is_seq((1, 2))) - assert(is_seq([1, 2])) - assert(not is_seq("abcd")) - assert(not is_seq(u("abcd"))) - assert(not is_seq(np.int64)) + assert (is_seq((1, 2))) + assert (is_seq([1, 2])) + assert (not is_seq("abcd")) + assert (not is_seq(u("abcd"))) + assert (not is_seq(np.int64)) class A(object): + def __getitem__(self): return 1 - assert(not is_seq(A())) + assert (not is_seq(A())) def test_get_callable_name(): @@ -52,13 +52,15 @@ def test_get_callable_name(): def fn(x): return x + lambda_ = lambda x: x part1 = partial(fn) part2 = partial(part1) class somecall(object): + def __call__(self): - return x + return x # noqa assert getname(fn) == 'fn' assert getname(lambda_) @@ -67,7 +69,8 @@ def __call__(self): assert getname(somecall()) == 'somecall' assert getname(1) is None -#Issue 10859 + +# Issue 10859 class TestABCClasses(tm.TestCase): tuples = [[1, 2, 2], ['red', 'blue', 'red']] multi_index = pd.MultiIndex.from_arrays(tuples, names=('number', 'color')) @@ -88,7 +91,8 @@ def test_abc_types(self): self.assertIsInstance(self.datetime_index, com.ABCDatetimeIndex) self.assertIsInstance(self.timedelta_index, com.ABCTimedeltaIndex) self.assertIsInstance(self.period_index, com.ABCPeriodIndex) - self.assertIsInstance(self.categorical_df.index, com.ABCCategoricalIndex) + self.assertIsInstance(self.categorical_df.index, + 
com.ABCCategoricalIndex) self.assertIsInstance(pd.Index(['a', 'b', 'c']), com.ABCIndexClass) self.assertIsInstance(pd.Int64Index([1, 2, 3]), com.ABCIndexClass) self.assertIsInstance(pd.Series([1, 2, 3]), com.ABCSeries) @@ -103,12 +107,11 @@ def test_abc_types(self): class TestInferDtype(tm.TestCase): def test_infer_dtype_from_scalar(self): - # Test that _infer_dtype_from_scalar is returning correct dtype for int and float. + # Test that _infer_dtype_from_scalar is returning correct dtype for int + # and float. - for dtypec in [ np.uint8, np.int8, - np.uint16, np.int16, - np.uint32, np.int32, - np.uint64, np.int64 ]: + for dtypec in [np.uint8, np.int8, np.uint16, np.int16, np.uint32, + np.int32, np.uint64, np.int64]: data = dtypec(12) dtype, val = com._infer_dtype_from_scalar(data) self.assertEqual(dtype, type(data)) @@ -117,7 +120,7 @@ def test_infer_dtype_from_scalar(self): dtype, val = com._infer_dtype_from_scalar(data) self.assertEqual(dtype, np.int64) - for dtypec in [ np.float16, np.float32, np.float64 ]: + for dtypec in [np.float16, np.float32, np.float64]: data = dtypec(12) dtype, val = com._infer_dtype_from_scalar(data) self.assertEqual(dtype, dtypec) @@ -126,36 +129,31 @@ def test_infer_dtype_from_scalar(self): dtype, val = com._infer_dtype_from_scalar(data) self.assertEqual(dtype, np.float64) - for data in [ True, False ]: + for data in [True, False]: dtype, val = com._infer_dtype_from_scalar(data) self.assertEqual(dtype, np.bool_) - for data in [ np.complex64(1), np.complex128(1) ]: + for data in [np.complex64(1), np.complex128(1)]: dtype, val = com._infer_dtype_from_scalar(data) self.assertEqual(dtype, np.complex_) import datetime - for data in [ np.datetime64(1,'ns'), - pd.Timestamp(1), - datetime.datetime(2000,1,1,0,0) - ]: + for data in [np.datetime64(1, 'ns'), pd.Timestamp(1), + datetime.datetime(2000, 1, 1, 0, 0)]: dtype, val = com._infer_dtype_from_scalar(data) self.assertEqual(dtype, 'M8[ns]') - for data in [ np.timedelta64(1,'ns'), - 
pd.Timedelta(1), - datetime.timedelta(1) - ]: + for data in [np.timedelta64(1, 'ns'), pd.Timedelta(1), + datetime.timedelta(1)]: dtype, val = com._infer_dtype_from_scalar(data) self.assertEqual(dtype, 'm8[ns]') - for data in [ datetime.date(2000,1,1), - pd.Timestamp(1,tz='US/Eastern'), - 'foo' - ]: + for data in [datetime.date(2000, 1, 1), + pd.Timestamp(1, tz='US/Eastern'), 'foo']: dtype, val = com._infer_dtype_from_scalar(data) self.assertEqual(dtype, np.object_) + def test_notnull(): assert notnull(1.) assert not notnull(None) @@ -178,9 +176,11 @@ def test_notnull(): assert result.sum() == 2 with cf.option_context("mode.use_inf_as_null", False): - for s in [tm.makeFloatSeries(),tm.makeStringSeries(), - tm.makeObjectSeries(),tm.makeTimeSeries(),tm.makePeriodSeries()]: - assert(isinstance(isnull(s), Series)) + for s in [tm.makeFloatSeries(), tm.makeStringSeries(), + tm.makeObjectSeries(), tm.makeTimeSeries(), + tm.makePeriodSeries()]: + assert (isinstance(isnull(s), Series)) + def test_isnull(): assert not isnull(1.) 
@@ -190,52 +190,58 @@ def test_isnull(): assert not isnull(-np.inf) # series - for s in [tm.makeFloatSeries(),tm.makeStringSeries(), - tm.makeObjectSeries(),tm.makeTimeSeries(),tm.makePeriodSeries()]: - assert(isinstance(isnull(s), Series)) + for s in [tm.makeFloatSeries(), tm.makeStringSeries(), + tm.makeObjectSeries(), tm.makeTimeSeries(), + tm.makePeriodSeries()]: + assert (isinstance(isnull(s), Series)) # frame - for df in [tm.makeTimeDataFrame(),tm.makePeriodFrame(),tm.makeMixedDataFrame()]: + for df in [tm.makeTimeDataFrame(), tm.makePeriodFrame(), + tm.makeMixedDataFrame()]: result = isnull(df) expected = df.apply(isnull) tm.assert_frame_equal(result, expected) # panel - for p in [ tm.makePanel(), tm.makePeriodPanel(), tm.add_nans(tm.makePanel()) ]: + for p in [tm.makePanel(), tm.makePeriodPanel(), tm.add_nans(tm.makePanel()) + ]: result = isnull(p) expected = p.apply(isnull) tm.assert_panel_equal(result, expected) # panel 4d - for p in [ tm.makePanel4D(), tm.add_nans_panel4d(tm.makePanel4D()) ]: + for p in [tm.makePanel4D(), tm.add_nans_panel4d(tm.makePanel4D())]: result = isnull(p) expected = p.apply(isnull) tm.assert_panel4d_equal(result, expected) + def test_isnull_lists(): result = isnull([[False]]) exp = np.array([[False]]) - assert(np.array_equal(result, exp)) + assert (np.array_equal(result, exp)) result = isnull([[1], [2]]) exp = np.array([[False], [False]]) - assert(np.array_equal(result, exp)) + assert (np.array_equal(result, exp)) # list of strings / unicode result = isnull(['foo', 'bar']) - assert(not result.any()) + assert (not result.any()) result = isnull([u('foo'), u('bar')]) - assert(not result.any()) + assert (not result.any()) + def test_isnull_nat(): result = isnull([NaT]) exp = np.array([True]) - assert(np.array_equal(result, exp)) + assert (np.array_equal(result, exp)) result = isnull(np.array([NaT], dtype=object)) exp = np.array([True]) - assert(np.array_equal(result, exp)) + assert (np.array_equal(result, exp)) + def 
test_isnull_numpy_nat(): arr = np.array([NaT, np.datetime64('NaT'), np.timedelta64('NaT'), @@ -244,31 +250,33 @@ def test_isnull_numpy_nat(): expected = np.array([True] * 4) tm.assert_numpy_array_equal(result, expected) + def test_isnull_datetime(): assert (not isnull(datetime.now())) assert notnull(datetime.now()) idx = date_range('1/1/1990', periods=20) - assert(notnull(idx).all()) + assert (notnull(idx).all()) idx = np.asarray(idx) idx[0] = iNaT idx = DatetimeIndex(idx) mask = isnull(idx) - assert(mask[0]) - assert(not mask[1:].any()) + assert (mask[0]) + assert (not mask[1:].any()) # GH 9129 pidx = idx.to_period(freq='M') mask = isnull(pidx) - assert(mask[0]) - assert(not mask[1:].any()) + assert (mask[0]) + assert (not mask[1:].any()) mask = isnull(pidx[1:]) - assert(not mask.any()) + assert (not mask.any()) class TestIsNull(tm.TestCase): + def test_0d_array(self): self.assertTrue(isnull(np.array(np.nan))) self.assertFalse(isnull(np.array(0.0))) @@ -298,25 +306,27 @@ def test_downcast_conv(): # conversions - expected = np.array([1,2]) - for dtype in [np.float64,object,np.int64]: - arr = np.array([1.0,2.0],dtype=dtype) - result = com._possibly_downcast_to_dtype(arr,'infer') + expected = np.array([1, 2]) + for dtype in [np.float64, object, np.int64]: + arr = np.array([1.0, 2.0], dtype=dtype) + result = com._possibly_downcast_to_dtype(arr, 'infer') tm.assert_almost_equal(result, expected) - expected = np.array([1.0,2.0,np.nan]) - for dtype in [np.float64,object]: - arr = np.array([1.0,2.0,np.nan],dtype=dtype) - result = com._possibly_downcast_to_dtype(arr,'infer') + expected = np.array([1.0, 2.0, np.nan]) + for dtype in [np.float64, object]: + arr = np.array([1.0, 2.0, np.nan], dtype=dtype) + result = com._possibly_downcast_to_dtype(arr, 'infer') tm.assert_almost_equal(result, expected) # empties - for dtype in [np.int32,np.float64,np.float32,np.bool_,np.int64,object]: - arr = np.array([],dtype=dtype) - result = com._possibly_downcast_to_dtype(arr,'int64') - 
tm.assert_almost_equal(result, np.array([],dtype=np.int64)) + for dtype in [np.int32, np.float64, np.float32, np.bool_, np.int64, object + ]: + arr = np.array([], dtype=dtype) + result = com._possibly_downcast_to_dtype(arr, 'int64') + tm.assert_almost_equal(result, np.array([], dtype=np.int64)) assert result.dtype == np.int64 + def test_array_equivalent(): assert array_equivalent(np.array([np.nan, np.nan]), np.array([np.nan, np.nan])) @@ -326,70 +336,76 @@ def test_array_equivalent(): np.array([np.nan, None], dtype='object')) assert array_equivalent(np.array([np.nan, 1 + 1j], dtype='complex'), np.array([np.nan, 1 + 1j], dtype='complex')) - assert not array_equivalent(np.array([np.nan, 1 + 1j], dtype='complex'), - np.array([np.nan, 1 + 2j], dtype='complex')) - assert not array_equivalent(np.array([np.nan, 1, np.nan]), - np.array([np.nan, 2, np.nan])) - assert not array_equivalent(np.array(['a', 'b', 'c', 'd']), - np.array(['e', 'e'])) + assert not array_equivalent( + np.array([np.nan, 1 + 1j], dtype='complex'), np.array( + [np.nan, 1 + 2j], dtype='complex')) + assert not array_equivalent( + np.array([np.nan, 1, np.nan]), np.array([np.nan, 2, np.nan])) + assert not array_equivalent( + np.array(['a', 'b', 'c', 'd']), np.array(['e', 'e'])) assert array_equivalent(Float64Index([0, np.nan]), Float64Index([0, np.nan])) - assert not array_equivalent(Float64Index([0, np.nan]), - Float64Index([1, np.nan])) + assert not array_equivalent( + Float64Index([0, np.nan]), Float64Index([1, np.nan])) assert array_equivalent(DatetimeIndex([0, np.nan]), DatetimeIndex([0, np.nan])) - assert not array_equivalent(DatetimeIndex([0, np.nan]), - DatetimeIndex([1, np.nan])) + assert not array_equivalent( + DatetimeIndex([0, np.nan]), DatetimeIndex([1, np.nan])) assert array_equivalent(TimedeltaIndex([0, np.nan]), TimedeltaIndex([0, np.nan])) - assert not array_equivalent(TimedeltaIndex([0, np.nan]), - TimedeltaIndex([1, np.nan])) + assert not array_equivalent( + TimedeltaIndex([0, np.nan]), 
TimedeltaIndex([1, np.nan])) assert array_equivalent(DatetimeIndex([0, np.nan], tz='US/Eastern'), DatetimeIndex([0, np.nan], tz='US/Eastern')) - assert not array_equivalent(DatetimeIndex([0, np.nan], tz='US/Eastern'), - DatetimeIndex([1, np.nan], tz='US/Eastern')) - assert not array_equivalent(DatetimeIndex([0, np.nan]), - DatetimeIndex([0, np.nan], tz='US/Eastern')) - assert not array_equivalent(DatetimeIndex([0, np.nan], tz='CET'), - DatetimeIndex([0, np.nan], tz='US/Eastern')) - assert not array_equivalent(DatetimeIndex([0, np.nan]), - TimedeltaIndex([0, np.nan])) + assert not array_equivalent( + DatetimeIndex([0, np.nan], tz='US/Eastern'), DatetimeIndex( + [1, np.nan], tz='US/Eastern')) + assert not array_equivalent( + DatetimeIndex([0, np.nan]), DatetimeIndex( + [0, np.nan], tz='US/Eastern')) + assert not array_equivalent( + DatetimeIndex([0, np.nan], tz='CET'), DatetimeIndex( + [0, np.nan], tz='US/Eastern')) + assert not array_equivalent( + DatetimeIndex([0, np.nan]), TimedeltaIndex([0, np.nan])) def test_datetimeindex_from_empty_datetime64_array(): for unit in ['ms', 'us', 'ns']: idx = DatetimeIndex(np.array([], dtype='datetime64[%s]' % unit)) - assert(len(idx) == 0) + assert (len(idx) == 0) def test_nan_to_nat_conversions(): df = DataFrame(dict({ - 'A' : np.asarray(lrange(10),dtype='float64'), - 'B' : Timestamp('20010101') })) - df.iloc[3:6,:] = np.nan - result = df.loc[4,'B'].value - assert(result == iNaT) + 'A': np.asarray( + lrange(10), dtype='float64'), + 'B': Timestamp('20010101') + })) + df.iloc[3:6, :] = np.nan + result = df.loc[4, 'B'].value + assert (result == iNaT) s = df['B'].copy() - s._data = s._data.setitem(indexer=tuple([slice(8,9)]),value=np.nan) - assert(isnull(s[8])) + s._data = s._data.setitem(indexer=tuple([slice(8, 9)]), value=np.nan) + assert (isnull(s[8])) # numpy < 1.7.0 is wrong from distutils.version import LooseVersion if LooseVersion(np.__version__) >= '1.7.0': - assert(s[8].value == np.datetime64('NaT').astype(np.int64)) + 
assert (s[8].value == np.datetime64('NaT').astype(np.int64)) def test_any_none(): - assert(com._any_none(1, 2, 3, None)) - assert(not com._any_none(1, 2, 3, 4)) + assert (com._any_none(1, 2, 3, None)) + assert (not com._any_none(1, 2, 3, 4)) def test_all_not_none(): - assert(com._all_not_none(1, 2, 3, 4)) - assert(not com._all_not_none(1, 2, 3, None)) - assert(not com._all_not_none(None, None, None, None)) + assert (com._all_not_none(1, 2, 3, 4)) + assert (not com._all_not_none(1, 2, 3, None)) + assert (not com._all_not_none(None, None, None, None)) def test_repr_binary_type(): @@ -408,23 +424,18 @@ def test_repr_binary_type(): def test_adjoin(): - data = [['a', 'b', 'c'], - ['dd', 'ee', 'ff'], - ['ggg', 'hhh', 'iii']] + data = [['a', 'b', 'c'], ['dd', 'ee', 'ff'], ['ggg', 'hhh', 'iii']] expected = 'a dd ggg\nb ee hhh\nc ff iii' adjoined = com.adjoin(2, *data) - assert(adjoined == expected) - + assert (adjoined == expected) class TestFormattBase(tm.TestCase): def test_adjoin(self): - data = [['a', 'b', 'c'], - ['dd', 'ee', 'ff'], - ['ggg', 'hhh', 'iii']] + data = [['a', 'b', 'c'], ['dd', 'ee', 'ff'], ['ggg', 'hhh', 'iii']] expected = 'a dd ggg\nb ee hhh\nc ff iii' adjoined = com.adjoin(2, *data) @@ -432,9 +443,7 @@ def test_adjoin(self): self.assertEqual(adjoined, expected) def test_adjoin_unicode(self): - data = [[u'あ', 'b', 'c'], - ['dd', u'ええ', 'ff'], - ['ggg', 'hhh', u'いいい']] + data = [[u'あ', 'b', 'c'], ['dd', u'ええ', 'ff'], ['ggg', 'hhh', u'いいい']] expected = u'あ dd ggg\nb ええ hhh\nc ff いいい' adjoined = com.adjoin(2, *data) self.assertEqual(adjoined, expected) @@ -444,6 +453,7 @@ def test_adjoin_unicode(self): expected = u"""あ dd ggg b ええ hhh c ff いいい""" + adjoined = adj.adjoin(2, *data) self.assertEqual(adjoined, expected) cols = adjoined.split('\n') @@ -454,6 +464,7 @@ def test_adjoin_unicode(self): expected = u"""あ dd ggg b ええ hhh c ff いいい""" + adjoined = adj.adjoin(7, *data) self.assertEqual(adjoined, expected) cols = adjoined.split('\n') @@ -494,7 +505,6 @@ 
def test_east_asian_len(self): self.assertEqual(adj.len(u'パンダpanda'), 11) self.assertEqual(adj.len(u'パンダpanda'), 10) - def test_ambiguous_width(self): adj = fmt.EastAsianTextAdjustment() self.assertEqual(adj.len(u'¡¡ab'), 4) @@ -503,8 +513,7 @@ def test_ambiguous_width(self): adj = fmt.EastAsianTextAdjustment() self.assertEqual(adj.len(u'¡¡ab'), 6) - data = [[u'あ', 'b', 'c'], - ['dd', u'ええ', 'ff'], + data = [[u'あ', 'b', 'c'], ['dd', u'ええ', 'ff'], ['ggg', u'¡¡ab', u'いいい']] expected = u'あ dd ggg \nb ええ ¡¡ab\nc ff いいい' adjoined = adj.adjoin(2, *data) @@ -513,13 +522,11 @@ def test_ambiguous_width(self): def test_iterpairs(): data = [1, 2, 3, 4] - expected = [(1, 2), - (2, 3), - (3, 4)] + expected = [(1, 2), (2, 3), (3, 4)] result = list(com.iterpairs(data)) - assert(result == expected) + assert (result == expected) def test_split_ranges(): @@ -556,12 +563,12 @@ def test_indent(): s = 'a b c\nd e f' result = com.indent(s, spaces=6) - assert(result == ' a b c\n d e f') + assert (result == ' a b c\n d e f') def test_banner(): ban = com.banner('hi') - assert(ban == ('%s\nhi\n%s' % ('=' * 80, '=' * 80))) + assert (ban == ('%s\nhi\n%s' % ('=' * 80, '=' * 80))) def test_map_indices_py(): @@ -570,7 +577,7 @@ def test_map_indices_py(): result = com.map_indices_py(data) - assert(result == expected) + assert (result == expected) def test_union(): @@ -579,7 +586,7 @@ def test_union(): union = sorted(com.union(a, b)) - assert((a + b) == union) + assert ((a + b) == union) def test_difference(): @@ -588,7 +595,7 @@ def test_difference(): inter = sorted(com.difference(b, a)) - assert([4, 5, 6] == inter) + assert ([4, 5, 6] == inter) def test_intersection(): @@ -597,7 +604,7 @@ def test_intersection(): inter = sorted(com.intersection(a, b)) - assert(a == inter) + assert (a == inter) def test_groupby(): @@ -613,7 +620,7 @@ def test_groupby(): def test_is_list_like(): - passes = ([], [1], (1,), (1, 2), {'a': 1}, set([1, 'a']), Series([1]), + passes = ([], [1], (1, ), (1, 2), {'a': 1}, 
set([1, 'a']), Series([1]), Series([]), Series(['a']).str) fails = (1, '2', object()) @@ -623,9 +630,10 @@ def test_is_list_like(): for f in fails: assert not com.is_list_like(f) + def test_is_named_tuple(): - passes = (collections.namedtuple('Test',list('abc'))(1,2,3),) - fails = ((1,2,3), 'a', Series({'pi':3.14})) + passes = (collections.namedtuple('Test', list('abc'))(1, 2, 3), ) + fails = ((1, 2, 3), 'a', Series({'pi': 3.14})) for p in passes: assert com.is_named_tuple(p) @@ -633,6 +641,7 @@ def test_is_named_tuple(): for f in fails: assert not com.is_named_tuple(f) + def test_is_hashable(): # all new-style classes are hashable by default @@ -643,18 +652,19 @@ class UnhashableClass1(object): __hash__ = None class UnhashableClass2(object): + def __hash__(self): raise TypeError("Not hashable") - hashable = ( - 1, 3.14, np.float64(3.14), 'a', tuple(), (1,), HashableClass(), - ) - not_hashable = ( - [], UnhashableClass1(), - ) - abc_hashable_not_really_hashable = ( - ([],), UnhashableClass2(), - ) + hashable = (1, + 3.14, + np.float64(3.14), + 'a', + tuple(), + (1, ), + HashableClass(), ) + not_hashable = ([], UnhashableClass1(), ) + abc_hashable_not_really_hashable = (([], ), UnhashableClass2(), ) for i in hashable: assert com.is_hashable(i) @@ -671,8 +681,10 @@ def __hash__(self): # old-style classes in Python 2 don't appear hashable to # collections.Hashable but also seem to support hash() by default if compat.PY2: + class OldStyleClass(): pass + c = OldStyleClass() assert not isinstance(c, collections.Hashable) assert com.is_hashable(c) @@ -682,11 +694,11 @@ class OldStyleClass(): def test_ensure_int32(): values = np.arange(10, dtype=np.int32) result = com._ensure_int32(values) - assert(result.dtype == np.int32) + assert (result.dtype == np.int32) values = np.arange(10, dtype=np.int64) result = com._ensure_int32(values) - assert(result.dtype == np.int32) + assert (result.dtype == np.int32) def test_ensure_platform_int(): @@ -697,17 +709,17 @@ def 
test_ensure_platform_int(): # int64 x = Int64Index([1, 2, 3], dtype='int64') - assert(x.dtype == np.int64) + assert (x.dtype == np.int64) pi = com._ensure_platform_int(x) - assert(pi.dtype == np.int_) + assert (pi.dtype == np.int_) # int32 x = Int64Index([1, 2, 3], dtype='int32') - assert(x.dtype == np.int32) + assert (x.dtype == np.int32) pi = com._ensure_platform_int(x) - assert(pi.dtype == np.int_) + assert (pi.dtype == np.int_) # TODO: fix this broken test @@ -737,8 +749,8 @@ def test_is_re(): def test_is_recompilable(): - passes = (r'a', u('x'), r'asdf', re.compile('adsf'), - u(r'\u2233\s*'), re.compile(r'')) + passes = (r'a', u('x'), r'asdf', re.compile('adsf'), u(r'\u2233\s*'), + re.compile(r'')) fails = 1, [], object() for p in passes: @@ -747,6 +759,7 @@ def test_is_recompilable(): for f in fails: assert not com.is_re_compilable(f) + def test_random_state(): import numpy.random as npr # Check with seed @@ -755,7 +768,8 @@ def test_random_state(): # Check with random state object state2 = npr.RandomState(10) - assert_equal(com._random_state(state2).uniform(), npr.RandomState(10).uniform()) + assert_equal( + com._random_state(state2).uniform(), npr.RandomState(10).uniform()) # check with no arg random state assert isinstance(com._random_state(), npr.RandomState) @@ -770,23 +784,27 @@ def test_random_state(): def test_maybe_match_name(): - matched = com._maybe_match_name(Series([1], name='x'), Series([2], name='x')) - assert(matched == 'x') + matched = com._maybe_match_name( + Series([1], name='x'), Series( + [2], name='x')) + assert (matched == 'x') - matched = com._maybe_match_name(Series([1], name='x'), Series([2], name='y')) - assert(matched is None) + matched = com._maybe_match_name( + Series([1], name='x'), Series( + [2], name='y')) + assert (matched is None) matched = com._maybe_match_name(Series([1]), Series([2], name='x')) - assert(matched is None) + assert (matched is None) matched = com._maybe_match_name(Series([1], name='x'), Series([2])) - 
assert(matched is None) + assert (matched is None) matched = com._maybe_match_name(Series([1], name='x'), [2]) - assert(matched == 'x') + assert (matched == 'x') matched = com._maybe_match_name([1], Series([2], name='y')) - assert(matched == 'y') + assert (matched == 'y') class TestTake(tm.TestCase): @@ -843,15 +861,15 @@ def _test_dtype(dtype, fill_value, out_dtype): indexer = [2, 1, 0, -1] result = com.take_1d(data, indexer, fill_value=fill_value) - assert((result[[0, 1, 2]] == data[[2, 1, 0]]).all()) - assert(result[3] == fill_value) - assert(result.dtype == out_dtype) + assert ((result[[0, 1, 2]] == data[[2, 1, 0]]).all()) + assert (result[3] == fill_value) + assert (result.dtype == out_dtype) indexer = [2, 1, 0, 1] result = com.take_1d(data, indexer, fill_value=fill_value) - assert((result[[0, 1, 2, 3]] == data[indexer]).all()) - assert(result.dtype == dtype) + assert ((result[[0, 1, 2, 3]] == data[indexer]).all()) + assert (result.dtype == dtype) _test_dtype(np.int8, np.int16(127), np.int8) _test_dtype(np.int8, np.int16(128), np.int16) @@ -934,24 +952,24 @@ def _test_dtype(dtype, fill_value, out_dtype): indexer = [2, 1, 0, -1] result = com.take_nd(data, indexer, axis=0, fill_value=fill_value) - assert((result[[0, 1, 2], :] == data[[2, 1, 0], :]).all()) - assert((result[3, :] == fill_value).all()) - assert(result.dtype == out_dtype) + assert ((result[[0, 1, 2], :] == data[[2, 1, 0], :]).all()) + assert ((result[3, :] == fill_value).all()) + assert (result.dtype == out_dtype) result = com.take_nd(data, indexer, axis=1, fill_value=fill_value) - assert((result[:, [0, 1, 2]] == data[:, [2, 1, 0]]).all()) - assert((result[:, 3] == fill_value).all()) - assert(result.dtype == out_dtype) + assert ((result[:, [0, 1, 2]] == data[:, [2, 1, 0]]).all()) + assert ((result[:, 3] == fill_value).all()) + assert (result.dtype == out_dtype) indexer = [2, 1, 0, 1] result = com.take_nd(data, indexer, axis=0, fill_value=fill_value) - assert((result[[0, 1, 2, 3], :] == data[indexer, 
:]).all()) - assert(result.dtype == dtype) + assert ((result[[0, 1, 2, 3], :] == data[indexer, :]).all()) + assert (result.dtype == dtype) result = com.take_nd(data, indexer, axis=1, fill_value=fill_value) - assert((result[:, [0, 1, 2, 3]] == data[:, indexer]).all()) - assert(result.dtype == dtype) + assert ((result[:, [0, 1, 2, 3]] == data[:, indexer]).all()) + assert (result.dtype == dtype) _test_dtype(np.int8, np.int16(127), np.int8) _test_dtype(np.int8, np.int16(128), np.int16) @@ -1038,33 +1056,33 @@ def _test_dtype(dtype, fill_value, out_dtype): indexer = [2, 1, 0, -1] result = com.take_nd(data, indexer, axis=0, fill_value=fill_value) - assert((result[[0, 1, 2], :, :] == data[[2, 1, 0], :, :]).all()) - assert((result[3, :, :] == fill_value).all()) - assert(result.dtype == out_dtype) + assert ((result[[0, 1, 2], :, :] == data[[2, 1, 0], :, :]).all()) + assert ((result[3, :, :] == fill_value).all()) + assert (result.dtype == out_dtype) result = com.take_nd(data, indexer, axis=1, fill_value=fill_value) - assert((result[:, [0, 1, 2], :] == data[:, [2, 1, 0], :]).all()) - assert((result[:, 3, :] == fill_value).all()) - assert(result.dtype == out_dtype) + assert ((result[:, [0, 1, 2], :] == data[:, [2, 1, 0], :]).all()) + assert ((result[:, 3, :] == fill_value).all()) + assert (result.dtype == out_dtype) result = com.take_nd(data, indexer, axis=2, fill_value=fill_value) - assert((result[:, :, [0, 1, 2]] == data[:, :, [2, 1, 0]]).all()) - assert((result[:, :, 3] == fill_value).all()) - assert(result.dtype == out_dtype) + assert ((result[:, :, [0, 1, 2]] == data[:, :, [2, 1, 0]]).all()) + assert ((result[:, :, 3] == fill_value).all()) + assert (result.dtype == out_dtype) indexer = [2, 1, 0, 1] result = com.take_nd(data, indexer, axis=0, fill_value=fill_value) - assert((result[[0, 1, 2, 3], :, :] == data[indexer, :, :]).all()) - assert(result.dtype == dtype) + assert ((result[[0, 1, 2, 3], :, :] == data[indexer, :, :]).all()) + assert (result.dtype == dtype) result = 
com.take_nd(data, indexer, axis=1, fill_value=fill_value) - assert((result[:, [0, 1, 2, 3], :] == data[:, indexer, :]).all()) - assert(result.dtype == dtype) + assert ((result[:, [0, 1, 2, 3], :] == data[:, indexer, :]).all()) + assert (result.dtype == dtype) result = com.take_nd(data, indexer, axis=2, fill_value=fill_value) - assert((result[:, :, [0, 1, 2, 3]] == data[:, :, indexer]).all()) - assert(result.dtype == dtype) + assert ((result[:, :, [0, 1, 2, 3]] == data[:, :, indexer]).all()) + assert (result.dtype == dtype) _test_dtype(np.int8, np.int16(127), np.int8) _test_dtype(np.int8, np.int16(128), np.int16) @@ -1126,9 +1144,7 @@ def test_1d_bool(self): self.assertEqual(result.dtype, np.object_) def test_2d_bool(self): - arr = np.array([[0, 1, 0], - [1, 0, 1], - [0, 1, 1]], dtype=bool) + arr = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=bool) result = com.take_nd(arr, [0, 2, 2, 1]) expected = arr.take([0, 2, 2, 1], axis=0) @@ -1155,7 +1171,7 @@ def test_2d_float32(self): expected[[2, 4], :] = np.nan tm.assert_almost_equal(result, expected) - #### this now accepts a float32! # test with float64 out buffer + # this now accepts a float32! # test with float64 out buffer out = np.empty((len(indexer), arr.shape[1]), dtype='float32') com.take_nd(arr, indexer, out=out) # it works! 
@@ -1171,7 +1187,8 @@ def test_2d_float32(self): def test_2d_datetime64(self): # 2005/01/01 - 2006/01/01 - arr = np.random.randint(long(11045376), long(11360736), (5, 3))*100000000000 + arr = np.random.randint( + long(11045376), long(11360736), (5, 3)) * 100000000000 arr = arr.view(dtype='datetime64[ns]') indexer = [0, 2, -1, 1, -1] @@ -1245,6 +1262,7 @@ def test_maybe_convert_string_to_array(self): tm.assert_numpy_array_equal(result, np.array(['x', 2], dtype=object)) self.assertTrue(result.dtype == object) + def test_possibly_convert_objects_copy(): values = np.array([1, 2]) @@ -1254,7 +1272,7 @@ def test_possibly_convert_objects_copy(): out = convert._possibly_convert_objects(values, copy=True) assert_true(values is not out) - values = np.array(['apply','banana']) + values = np.array(['apply', 'banana']) out = convert._possibly_convert_objects(values, copy=False) assert_true(values is out) @@ -1267,9 +1285,9 @@ def test_dict_compat(): np.datetime64('2015-03-15'): 2} data_unchanged = {1: 2, 3: 4, 5: 6} expected = {Timestamp('1990-3-15'): 1, Timestamp('2015-03-15'): 2} - assert(com._dict_compat(data_datetime64) == expected) - assert(com._dict_compat(expected) == expected) - assert(com._dict_compat(data_unchanged) == data_unchanged) + assert (com._dict_compat(data_datetime64) == expected) + assert (com._dict_compat(expected) == expected) + assert (com._dict_compat(data_unchanged) == data_unchanged) if __name__ == '__main__': diff --git a/pandas/tests/test_compat.py b/pandas/tests/test_compat.py index 13596bd35bb62..2ea95b4e0b300 100644 --- a/pandas/tests/test_compat.py +++ b/pandas/tests/test_compat.py @@ -3,18 +3,16 @@ Testing that functions from compat work as expected """ -from pandas.compat import ( - range, zip, map, filter, - lrange, lzip, lmap, lfilter, - builtins -) -import unittest -import nose +from pandas.compat import (range, zip, map, filter, lrange, lzip, lmap, + lfilter, builtins) import pandas.util.testing as tm + class 
TestBuiltinIterators(tm.TestCase): + def check_result(self, actual, expected, lengths): - for (iter_res, list_res), exp, length in zip(actual, expected, lengths): + for (iter_res, list_res), exp, length in zip(actual, expected, + lengths): self.assertNotIsInstance(iter_res, list) tm.assertIsInstance(list_res, list) iter_res = list(iter_res) @@ -47,7 +45,6 @@ def test_map(self): lengths = 10, self.check_result(actual, expected, lengths) - def test_filter(self): func = lambda x: x lst = list(builtins.range(10)) @@ -64,8 +61,3 @@ def test_zip(self): expected = list(builtins.zip(*lst)), lengths = 10, self.check_result(actual, expected, lengths) - -if __name__ == '__main__': - nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], - # '--with-coverage', '--cover-package=pandas.core'], - exit=False) diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py index 0e286e93160b8..693b1d0ec71de 100644 --- a/pandas/tests/test_config.py +++ b/pandas/tests/test_config.py @@ -3,7 +3,6 @@ import pandas as pd import unittest import warnings -import nose class TestConfig(unittest.TestCase): @@ -39,11 +38,11 @@ def test_api(self): self.assertTrue(hasattr(pd, 'describe_option')) def test_is_one_of_factory(self): - v = self.cf.is_one_of_factory([None,12]) + v = self.cf.is_one_of_factory([None, 12]) v(12) v(None) - self.assertRaises(ValueError,v,1.1) + self.assertRaises(ValueError, v, 1.1) def test_register_option(self): self.cf.register_option('a', 1, 'doc') @@ -117,7 +116,7 @@ def test_describe_option(self): # current value is reported self.assertFalse( 'bar' in self.cf.describe_option('l', _print_desc=False)) - self.cf.set_option("l","bar") + self.cf.set_option("l", "bar") self.assertTrue( 'bar' in self.cf.describe_option('l', _print_desc=False)) @@ -168,7 +167,6 @@ def test_set_option(self): self.assertRaises(KeyError, self.cf.set_option, 'no.such.key', None) - def test_set_option_empty_args(self): self.assertRaises(ValueError, self.cf.set_option) @@ 
-244,9 +242,8 @@ def test_reset_option_all(self): self.assertEqual(self.cf.get_option('b.c'), 'hullo') def test_deprecate_option(self): - import sys - self.cf.deprecate_option( - 'foo') # we can deprecate non-existent options + # we can deprecate non-existent options + self.cf.deprecate_option('foo') self.assertTrue(self.cf._is_deprecated('foo')) with warnings.catch_warnings(record=True) as w: diff --git a/pandas/tests/test_dtypes.py b/pandas/tests/test_dtypes.py index 4403465576848..943e7c92d988b 100644 --- a/pandas/tests/test_dtypes.py +++ b/pandas/tests/test_dtypes.py @@ -4,14 +4,16 @@ import nose import numpy as np from pandas import Series, Categorical, date_range -import pandas.core.common as com -from pandas.core.common import (CategoricalDtype, is_categorical_dtype, is_categorical, - DatetimeTZDtype, is_datetime64tz_dtype, is_datetimetz, - is_dtype_equal, is_datetime64_ns_dtype, is_datetime64_dtype) +from pandas.core.common import (CategoricalDtype, is_categorical_dtype, + is_categorical, DatetimeTZDtype, + is_datetime64tz_dtype, is_datetimetz, + is_dtype_equal, is_datetime64_ns_dtype, + is_datetime64_dtype) import pandas.util.testing as tm _multiprocess_can_split_ = True + class Base(object): def test_hash(self): @@ -25,6 +27,7 @@ def test_numpy_informed(self): # np.dtype doesn't know about our new dtype def f(): np.dtype(self.dtype) + self.assertRaises(TypeError, f) self.assertNotEqual(self.dtype, np.str_) @@ -34,6 +37,7 @@ def test_pickle(self): result = self.round_trip_pickle(self.dtype) self.assertEqual(result, self.dtype) + class TestCategoricalDtype(Base, tm.TestCase): def setUp(self): @@ -47,7 +51,8 @@ def test_equality(self): def test_construction_from_string(self): result = CategoricalDtype.construct_from_string('category') self.assertTrue(is_dtype_equal(self.dtype, result)) - self.assertRaises(TypeError, lambda : CategoricalDtype.construct_from_string('foo')) + self.assertRaises( + TypeError, lambda: CategoricalDtype.construct_from_string('foo')) 
def test_is_dtype(self): self.assertTrue(CategoricalDtype.is_dtype(self.dtype)) @@ -60,10 +65,10 @@ def test_basic(self): self.assertTrue(is_categorical_dtype(self.dtype)) - factor = Categorical.from_array(['a', 'b', 'b', 'a', - 'a', 'c', 'c', 'c']) + factor = Categorical.from_array(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c' + ]) - s = Series(factor,name='A') + s = Series(factor, name='A') # dtypes self.assertTrue(is_categorical_dtype(s.dtype)) @@ -75,13 +80,15 @@ def test_basic(self): self.assertFalse(is_categorical(np.dtype('float64'))) self.assertFalse(is_categorical(1.0)) + class TestDatetimeTZDtype(Base, tm.TestCase): def setUp(self): - self.dtype = DatetimeTZDtype('ns','US/Eastern') + self.dtype = DatetimeTZDtype('ns', 'US/Eastern') def test_construction(self): - self.assertRaises(ValueError, lambda : DatetimeTZDtype('ms','US/Eastern')) + self.assertRaises(ValueError, + lambda: DatetimeTZDtype('ms', 'US/Eastern')) def test_subclass(self): a = DatetimeTZDtype('datetime64[ns, US/Eastern]') @@ -99,33 +106,41 @@ def test_compat(self): def test_construction_from_string(self): result = DatetimeTZDtype('datetime64[ns, US/Eastern]') self.assertTrue(is_dtype_equal(self.dtype, result)) - result = DatetimeTZDtype.construct_from_string('datetime64[ns, US/Eastern]') + result = DatetimeTZDtype.construct_from_string( + 'datetime64[ns, US/Eastern]') self.assertTrue(is_dtype_equal(self.dtype, result)) - self.assertRaises(TypeError, lambda : DatetimeTZDtype.construct_from_string('foo')) + self.assertRaises(TypeError, + lambda: DatetimeTZDtype.construct_from_string('foo')) def test_is_dtype(self): self.assertTrue(DatetimeTZDtype.is_dtype(self.dtype)) self.assertTrue(DatetimeTZDtype.is_dtype('datetime64[ns, US/Eastern]')) self.assertFalse(DatetimeTZDtype.is_dtype('foo')) - self.assertTrue(DatetimeTZDtype.is_dtype(DatetimeTZDtype('ns','US/Pacific'))) + self.assertTrue(DatetimeTZDtype.is_dtype(DatetimeTZDtype( + 'ns', 'US/Pacific'))) 
self.assertFalse(DatetimeTZDtype.is_dtype(np.float64)) def test_equality(self): - self.assertTrue(is_dtype_equal(self.dtype, 'datetime64[ns, US/Eastern]')) - self.assertTrue(is_dtype_equal(self.dtype, DatetimeTZDtype('ns','US/Eastern'))) + self.assertTrue(is_dtype_equal(self.dtype, + 'datetime64[ns, US/Eastern]')) + self.assertTrue(is_dtype_equal(self.dtype, DatetimeTZDtype( + 'ns', 'US/Eastern'))) self.assertFalse(is_dtype_equal(self.dtype, 'foo')) - self.assertFalse(is_dtype_equal(self.dtype, DatetimeTZDtype('ns','CET'))) - self.assertFalse(is_dtype_equal(DatetimeTZDtype('ns','US/Eastern'), DatetimeTZDtype('ns','US/Pacific'))) + self.assertFalse(is_dtype_equal(self.dtype, DatetimeTZDtype('ns', + 'CET'))) + self.assertFalse(is_dtype_equal( + DatetimeTZDtype('ns', 'US/Eastern'), DatetimeTZDtype( + 'ns', 'US/Pacific'))) # numpy compat - self.assertTrue(is_dtype_equal(np.dtype("M8[ns]"),"datetime64[ns]")) + self.assertTrue(is_dtype_equal(np.dtype("M8[ns]"), "datetime64[ns]")) def test_basic(self): self.assertTrue(is_datetime64tz_dtype(self.dtype)) - dr = date_range('20130101',periods=3,tz='US/Eastern') - s = Series(dr,name='A') + dr = date_range('20130101', periods=3, tz='US/Eastern') + s = Series(dr, name='A') # dtypes self.assertTrue(is_datetime64tz_dtype(s.dtype)) @@ -159,8 +174,6 @@ def test_parser(self): ) - - if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py index 3bd76dfb9da61..688f074e31a42 100644 --- a/pandas/tests/test_expressions.py +++ b/pandas/tests/test_expressions.py @@ -13,37 +13,45 @@ from pandas.core.api import DataFrame, Panel from pandas.computation import expressions as expr from pandas import compat - from pandas.util.testing import (assert_almost_equal, assert_series_equal, assert_frame_equal, assert_panel_equal, assert_panel4d_equal) +import pandas.core.common as com import pandas.util.testing as tm from 
numpy.testing.decorators import slow - if not expr._USE_NUMEXPR: try: - import numexpr + import numexpr # noqa except ImportError: msg = "don't have" else: msg = "not using" raise nose.SkipTest("{0} numexpr".format(msg)) -_frame = DataFrame(randn(10000, 4), columns=list('ABCD'), dtype='float64') -_frame2 = DataFrame(randn(100, 4), columns = list('ABCD'), dtype='float64') -_mixed = DataFrame({ 'A' : _frame['A'].copy(), 'B' : _frame['B'].astype('float32'), 'C' : _frame['C'].astype('int64'), 'D' : _frame['D'].astype('int32') }) -_mixed2 = DataFrame({ 'A' : _frame2['A'].copy(), 'B' : _frame2['B'].astype('float32'), 'C' : _frame2['C'].astype('int64'), 'D' : _frame2['D'].astype('int32') }) -_integer = DataFrame(np.random.randint(1, 100, size=(10001, 4)), columns = list('ABCD'), dtype='int64') +_frame = DataFrame(randn(10000, 4), columns=list('ABCD'), dtype='float64') +_frame2 = DataFrame(randn(100, 4), columns=list('ABCD'), dtype='float64') +_mixed = DataFrame({'A': _frame['A'].copy(), + 'B': _frame['B'].astype('float32'), + 'C': _frame['C'].astype('int64'), + 'D': _frame['D'].astype('int32')}) +_mixed2 = DataFrame({'A': _frame2['A'].copy(), + 'B': _frame2['B'].astype('float32'), + 'C': _frame2['C'].astype('int64'), + 'D': _frame2['D'].astype('int32')}) +_integer = DataFrame( + np.random.randint(1, 100, + size=(10001, 4)), columns=list('ABCD'), dtype='int64') _integer2 = DataFrame(np.random.randint(1, 100, size=(101, 4)), columns=list('ABCD'), dtype='int64') -_frame_panel = Panel(dict(ItemA=_frame.copy(), ItemB=(_frame.copy() + 3), ItemC=_frame.copy(), ItemD=_frame.copy())) +_frame_panel = Panel(dict(ItemA=_frame.copy(), ItemB=( + _frame.copy() + 3), ItemC=_frame.copy(), ItemD=_frame.copy())) _frame2_panel = Panel(dict(ItemA=_frame2.copy(), ItemB=(_frame2.copy() + 3), ItemC=_frame2.copy(), ItemD=_frame2.copy())) -_integer_panel = Panel(dict(ItemA=_integer, - ItemB=(_integer + 34).astype('int64'))) -_integer2_panel = Panel(dict(ItemA=_integer2, - ItemB=(_integer2 + 
34).astype('int64'))) +_integer_panel = Panel(dict(ItemA=_integer, ItemB=(_integer + 34).astype( + 'int64'))) +_integer2_panel = Panel(dict(ItemA=_integer2, ItemB=(_integer2 + 34).astype( + 'int64'))) _mixed_panel = Panel(dict(ItemA=_mixed, ItemB=(_mixed + 3))) _mixed2_panel = Panel(dict(ItemA=_mixed2, ItemB=(_mixed2 + 3))) @@ -54,9 +62,9 @@ class TestExpressions(tm.TestCase): def setUp(self): - self.frame = _frame.copy() + self.frame = _frame.copy() self.frame2 = _frame2.copy() - self.mixed = _mixed.copy() + self.mixed = _mixed.copy() self.mixed2 = _mixed2.copy() self.integer = _integer.copy() self._MIN_ELEMENTS = expr._MIN_ELEMENTS @@ -97,13 +105,13 @@ def run_arithmetic_test(self, df, other, assert_func, check_dtype=False, def test_integer_arithmetic(self): self.run_arithmetic_test(self.integer, self.integer, assert_frame_equal) - self.run_arithmetic_test(self.integer.iloc[:,0], self.integer.iloc[:, 0], - assert_series_equal, check_dtype=True) + self.run_arithmetic_test(self.integer.iloc[:, 0], + self.integer.iloc[:, 0], assert_series_equal, + check_dtype=True) @nose.tools.nottest - def run_binary_test(self, df, other, assert_func, - test_flex=False, numexpr_ops=set(['gt', 'lt', 'ge', - 'le', 'eq', 'ne'])): + def run_binary_test(self, df, other, assert_func, test_flex=False, + numexpr_ops=set(['gt', 'lt', 'ge', 'le', 'eq', 'ne'])): """ tests solely that the result is the same whether or not numexpr is enabled. Need to test whether the function does the correct thing @@ -159,10 +167,10 @@ def run_series(self, ser, other, binary_comp=None, **kwargs): # series doesn't uses vec_compare instead of numexpr... 
# if binary_comp is None: # binary_comp = other + 1 - # self.run_binary_test(ser, binary_comp, assert_frame_equal, test_flex=False, - # **kwargs) - # self.run_binary_test(ser, binary_comp, assert_frame_equal, test_flex=True, - # **kwargs) + # self.run_binary_test(ser, binary_comp, assert_frame_equal, + # test_flex=False, **kwargs) + # self.run_binary_test(ser, binary_comp, assert_frame_equal, + # test_flex=True, **kwargs) def run_panel(self, panel, other, binary_comp=None, run_binary=True, assert_func=assert_panel_equal, **kwargs): @@ -231,51 +239,60 @@ def test_mixed_arithmetic(self): def test_integer_with_zeros(self): self.integer *= np.random.randint(0, 2, size=np.shape(self.integer)) - self.run_arithmetic_test(self.integer, self.integer, assert_frame_equal) - self.run_arithmetic_test(self.integer.iloc[:, 0], self.integer.iloc[:, 0], - assert_series_equal) + self.run_arithmetic_test(self.integer, self.integer, + assert_frame_equal) + self.run_arithmetic_test(self.integer.iloc[:, 0], + self.integer.iloc[:, 0], assert_series_equal) def test_invalid(self): # no op - result = expr._can_use_numexpr(operator.add, None, self.frame, self.frame, 'evaluate') + result = expr._can_use_numexpr(operator.add, None, self.frame, + self.frame, 'evaluate') self.assertFalse(result) # mixed - result = expr._can_use_numexpr(operator.add, '+', self.mixed, self.frame, 'evaluate') + result = expr._can_use_numexpr(operator.add, '+', self.mixed, + self.frame, 'evaluate') self.assertFalse(result) # min elements - result = expr._can_use_numexpr(operator.add, '+', self.frame2, self.frame2, 'evaluate') + result = expr._can_use_numexpr(operator.add, '+', self.frame2, + self.frame2, 'evaluate') self.assertFalse(result) # ok, we only check on first part of expression - result = expr._can_use_numexpr(operator.add, '+', self.frame, self.frame2, 'evaluate') + result = expr._can_use_numexpr(operator.add, '+', self.frame, + self.frame2, 'evaluate') self.assertTrue(result) def test_binary_ops(self): - 
         def testit():

-            for f, f2 in [ (self.frame, self.frame2), (self.mixed, self.mixed2) ]:
+            for f, f2 in [(self.frame, self.frame2),
+                          (self.mixed, self.mixed2)]:

-                for op, op_str in [('add','+'),('sub','-'),('mul','*'),('div','/'),('pow','**')]:
+                for op, op_str in [('add', '+'), ('sub', '-'), ('mul', '*'),
+                                   ('div', '/'), ('pow', '**')]:

                     if op == 'div':
                         op = getattr(operator, 'truediv', None)
                     else:
                         op = getattr(operator, op, None)
                     if op is not None:
-                        result = expr._can_use_numexpr(op, op_str, f, f, 'evaluate')
+                        result = expr._can_use_numexpr(op, op_str, f, f,
+                                                       'evaluate')
                         self.assertNotEqual(result, f._is_mixed_type)

-                        result = expr.evaluate(op, op_str, f, f, use_numexpr=True)
-                        expected = expr.evaluate(op, op_str, f, f, use_numexpr=False)
-                        tm.assert_numpy_array_equal(result,expected.values)
+                        result = expr.evaluate(op, op_str, f, f,
+                                               use_numexpr=True)
+                        expected = expr.evaluate(op, op_str, f, f,
+                                                 use_numexpr=False)
+                        tm.assert_numpy_array_equal(result, expected.values)

-                        result = expr._can_use_numexpr(op, op_str, f2, f2, 'evaluate')
+                        result = expr._can_use_numexpr(op, op_str, f2, f2,
+                                                       'evaluate')
                         self.assertFalse(result)
-
         expr.set_use_numexpr(False)
         testit()
         expr.set_use_numexpr(True)
@@ -285,10 +302,9 @@ def testit():
         testit()

     def test_boolean_ops(self):
-
         def testit():
-            for f, f2 in [ (self.frame, self.frame2), (self.mixed, self.mixed2) ]:
+            for f, f2 in [(self.frame, self.frame2),
+                          (self.mixed, self.mixed2)]:

                 f11 = f
                 f12 = f + 1
@@ -296,18 +312,23 @@ def testit():
                 f21 = f2
                 f22 = f2 + 1

-                for op, op_str in [('gt','>'),('lt','<'),('ge','>='),('le','<='),('eq','=='),('ne','!=')]:
+                for op, op_str in [('gt', '>'), ('lt', '<'), ('ge', '>='),
+                                   ('le', '<='), ('eq', '=='), ('ne', '!=')]:

-                    op = getattr(operator,op)
+                    op = getattr(operator, op)

-                    result = expr._can_use_numexpr(op, op_str, f11, f12, 'evaluate')
+                    result = expr._can_use_numexpr(op, op_str, f11, f12,
+                                                   'evaluate')
                     self.assertNotEqual(result, f11._is_mixed_type)

-                    result = expr.evaluate(op, op_str, f11, f12, use_numexpr=True)
-                    expected = expr.evaluate(op, op_str, f11, f12, use_numexpr=False)
-                    tm.assert_numpy_array_equal(result,expected.values)
+                    result = expr.evaluate(op, op_str, f11, f12,
+                                           use_numexpr=True)
+                    expected = expr.evaluate(op, op_str, f11, f12,
+                                             use_numexpr=False)
+                    tm.assert_numpy_array_equal(result, expected.values)

-                    result = expr._can_use_numexpr(op, op_str, f21, f22, 'evaluate')
+                    result = expr._can_use_numexpr(op, op_str, f21, f22,
+                                                   'evaluate')
                     self.assertFalse(result)

         expr.set_use_numexpr(False)
@@ -319,18 +340,16 @@ def testit():
         testit()

     def test_where(self):
-
         def testit():
-            for f in [ self.frame, self.frame2, self.mixed, self.mixed2 ]:
-
+            for f in [self.frame, self.frame2, self.mixed, self.mixed2]:

-                for cond in [ True, False ]:
+                for cond in [True, False]:

-                    c = np.empty(f.shape,dtype=np.bool_)
+                    c = np.empty(f.shape, dtype=np.bool_)
                     c.fill(cond)

-                    result = expr.where(c, f.values, f.values+1)
-                    expected = np.where(c, f.values, f.values+1)
-                    tm.assert_numpy_array_equal(result,expected)
+                    result = expr.where(c, f.values, f.values + 1)
+                    expected = np.where(c, f.values, f.values + 1)
+                    tm.assert_numpy_array_equal(result, expected)

         expr.set_use_numexpr(False)
         testit()
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index a73b459459321..b7691033dfc83 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -1,4 +1,8 @@
 # -*- coding: utf-8 -*-
+
+# TODO(wesm): lots of issues making flake8 hard
+# flake8: noqa
+
 from __future__ import print_function
 from distutils.version import LooseVersion
 import re
@@ -32,8 +36,8 @@ import pandas.core.common as com
 from pandas.util.terminal import get_terminal_size
 import pandas as pd
-from pandas.core.config import (set_option, get_option,
-                                option_context, reset_option)
+from pandas.core.config import (set_option, get_option, option_context,
+                                reset_option)
 from datetime import datetime
 import nose
@@ -45,44 +49,54 @@ def curpath():
     pth, _ = os.path.split(os.path.abspath(__file__))
     return pth

+
 def has_info_repr(df):
     r = repr(df)
     c1 = r.split('\n')[0].startswith("<class")
     c2 = r.split('\n')[0].startswith(r"&lt;class")  # _repr_html_
     return c1 or c2

+
 def has_non_verbose_info_repr(df):
     has_info = has_info_repr(df)
     r = repr(df)
-    nv = len(r.split('\n')) == 6  # 1. <class>, 2. Index, 3. Columns, 4. dtype, 5. memory usage, 6. trailing newline
+    nv = len(r.split(
+        '\n')) == 6  # 1. <class>, 2. Index, 3. Columns, 4. dtype, 5. memory usage, 6. trailing newline
     return has_info and nv

+
 def has_horizontally_truncated_repr(df):
-    try: # Check header row
+    try:  # Check header row
         fst_line = np.array(repr(df).splitlines()[0].split())
-        cand_col = np.where(fst_line=='...')[0][0]
+        cand_col = np.where(fst_line == '...')[0][0]
     except:
         return False
     # Make sure each row has this ... in the same place
     r = repr(df)
-    for ix,l in enumerate(r.splitlines()):
+    for ix, l in enumerate(r.splitlines()):
         if not r.split()[cand_col] == '...':
             return False
     return True

+
 def has_vertically_truncated_repr(df):
     r = repr(df)
     only_dot_row = False
     for row in r.splitlines():
-        if re.match('^[\.\ ]+$',row):
+        if re.match('^[\.\ ]+$', row):
             only_dot_row = True
     return only_dot_row

+
 def has_truncated_repr(df):
-    return has_horizontally_truncated_repr(df) or has_vertically_truncated_repr(df)
+    return has_horizontally_truncated_repr(
+        df) or has_vertically_truncated_repr(df)

+
 def has_doubly_truncated_repr(df):
-    return has_horizontally_truncated_repr(df) and has_vertically_truncated_repr(df)
+    return has_horizontally_truncated_repr(
+        df) and has_vertically_truncated_repr(df)

+
 def has_expanded_repr(df):
     r = repr(df)
@@ -91,13 +105,13 @@ def has_expanded_repr(df):
             return True
     return False

+
 class TestDataFrameFormatting(tm.TestCase):
     _multiprocess_can_split_ = True

     def setUp(self):
         self.warn_filters = warnings.filters
-        warnings.filterwarnings('ignore',
-                                category=FutureWarning,
+        warnings.filterwarnings('ignore', category=FutureWarning,
                                 module=".*format")

         self.frame = _frame.copy()
@@ -130,20 +144,22 @@ def test_eng_float_formatter(self):

     def test_show_null_counts(self):

-        df = DataFrame(1,columns=range(10),index=range(10))
-        df.iloc[1,1] = np.nan
+        df = DataFrame(1, columns=range(10), index=range(10))
+        df.iloc[1, 1] = np.nan

         def check(null_counts, result):
             buf = StringIO()
             df.info(buf=buf, null_counts=null_counts)
             self.assertTrue(('non-null' in buf.getvalue()) is result)

-        with option_context('display.max_info_rows',20,'display.max_info_columns',20):
+        with option_context('display.max_info_rows', 20,
+                            'display.max_info_columns', 20):
             check(None, True)
             check(True, True)
             check(False, False)

-        with option_context('display.max_info_rows',5,'display.max_info_columns',5):
+        with option_context('display.max_info_rows', 5,
+                            'display.max_info_columns', 5):
             check(None, False)
             check(True, False)
             check(False, False)
@@ -159,8 +175,9 @@ def test_repr_truncation(self):
         max_len = 20
         with option_context("display.max_colwidth", max_len):
             df = DataFrame({'A': np.random.randn(10),
-                            'B': [tm.rands(np.random.randint(max_len - 1,
-                                                             max_len + 1)) for i in range(10)]})
+                            'B': [tm.rands(np.random.randint(
+                                max_len - 1, max_len + 1)) for i in range(10)
+                            ]})
             r = repr(df)
             r = r[r.find('\n') + 1:]
@@ -179,34 +196,35 @@ def test_repr_truncation(self):
             self.assertNotIn('...', repr(df))

     def test_repr_chop_threshold(self):
-        df = DataFrame([[0.1, 0.5],[0.5, -0.1]])
-        pd.reset_option("display.chop_threshold") # default None
+        df = DataFrame([[0.1, 0.5], [0.5, -0.1]])
+        pd.reset_option("display.chop_threshold")  # default None
         self.assertEqual(repr(df), ' 0 1\n0 0.1 0.5\n1 0.5 -0.1')

-        with option_context("display.chop_threshold", 0.2 ):
+        with option_context("display.chop_threshold", 0.2):
             self.assertEqual(repr(df), ' 0 1\n0 0.0 0.5\n1 0.5 0.0')

-        with option_context("display.chop_threshold", 0.6 ):
+        with option_context("display.chop_threshold", 0.6):
             self.assertEqual(repr(df), ' 0 1\n0 0 0\n1 0 0')

-        with option_context("display.chop_threshold", None ):
-            self.assertEqual(repr(df), ' 0 1\n0 0.1 0.5\n1 0.5 -0.1')
+        with option_context("display.chop_threshold", None):
+            self.assertEqual(repr(df), ' 0 1\n0 0.1 0.5\n1 0.5 -0.1')

     def test_repr_obeys_max_seq_limit(self):
-        with option_context("display.max_seq_items",2000):
+        with option_context("display.max_seq_items", 2000):
             self.assertTrue(len(com.pprint_thing(lrange(1000))) > 1000)

-        with option_context("display.max_seq_items",5):
+        with option_context("display.max_seq_items", 5):
             self.assertTrue(len(com.pprint_thing(lrange(1000))) < 100)

     def test_repr_set(self):
         self.assertEqual(com.pprint_thing(set([1])), '{1}')

     def test_repr_is_valid_construction_code(self):
-        # for the case of Index, where the repr is traditional rather then stylized
-        idx = Index(['a','b'])
-        res = eval("pd."+repr(idx))
-        tm.assert_series_equal(Series(res),Series(idx))
+        # for the case of Index, where the repr is traditional rather then
+        # stylized
+        idx = Index(['a', 'b'])
+        res = eval("pd." + repr(idx))
+        tm.assert_series_equal(Series(res), Series(idx))

     def test_repr_should_return_str(self):
         # http://docs.python.org/py3k/reference/datamodel.html#object.__repr__
@@ -215,10 +233,8 @@ def test_repr_should_return_str(self):
         # (str on py2.x, str (unicode) on py3)

-
         data = [8, 5, 3, 5]
-        index1 = [u("\u03c3"), u("\u03c4"), u("\u03c5"),
-                  u("\u03c6")]
+        index1 = [u("\u03c3"), u("\u03c4"), u("\u03c5"), u("\u03c6")]
         cols = [u("\u03c8")]
         df = DataFrame(data, columns=cols, index=index1)
         self.assertTrue(type(df.__repr__()) == str)  # both py2 / 3
@@ -234,8 +250,7 @@ def test_expand_frame_repr(self):
         df_tall = DataFrame('hello', lrange(30), lrange(5))

         with option_context('mode.sim_interactive', True):
-            with option_context('display.max_columns', 10,
-                                'display.width',20,
+            with option_context('display.max_columns', 10, 'display.width', 20,
                                 'display.max_rows', 20,
                                 'display.show_dimensions', True):
                 with option_context('display.expand_frame_repr', True):
@@ -259,10 +274,8 @@ def test_repr_non_interactive(self):
         # result of terminal auto size detection
         df = DataFrame('hello', lrange(1000), lrange(5))

-        with option_context('mode.sim_interactive', False,
-                            'display.width', 0,
-                            'display.height', 0,
-                            'display.max_rows',5000):
+        with option_context('mode.sim_interactive', False, 'display.width', 0,
+                            'display.height', 0, 'display.max_rows', 5000):
             self.assertFalse(has_truncated_repr(df))
             self.assertFalse(has_expanded_repr(df))
@@ -301,9 +314,8 @@ def mkframe(n):
                 self.assertTrue(has_vertically_truncated_repr(df10))

             # width=None in terminal, auto detection
-            with option_context('display.max_columns', 100,
-                                'display.max_rows', term_width * 20,
-                                'display.width', None):
+            with option_context('display.max_columns', 100, 'display.max_rows',
+                                term_width * 20, 'display.width', None):
                 df = mkframe((term_width // 7) - 2)
                 self.assertFalse(has_expanded_repr(df))
                 df = mkframe((term_width // 7) + 2)
@@ -312,18 +324,23 @@ def mkframe(n):

     def test_str_max_colwidth(self):
         # GH 7856
-        df = pd.DataFrame([{'a': 'foo', 'b': 'bar',
+        df = pd.DataFrame([{'a': 'foo',
+                            'b': 'bar',
                             'c': 'uncomfortably long line with lots of stuff',
-                            'd': 1},
-                           {'a': 'foo', 'b': 'bar', 'c': 'stuff', 'd': 1}])
+                            'd': 1}, {'a': 'foo',
+                                      'b': 'bar',
+                                      'c': 'stuff',
+                                      'd': 1}])
         df.set_index(['a', 'b', 'c'])
-        self.assertTrue(str(df) == ' a b c d\n'
-                        '0 foo bar uncomfortably long line with lots of stuff 1\n'
-                        '1 foo bar stuff 1')
+        self.assertTrue(
+            str(df) ==
+            ' a b c d\n'
+            '0 foo bar uncomfortably long line with lots of stuff 1\n'
+            '1 foo bar stuff 1')
         with option_context('max_colwidth', 20):
             self.assertTrue(str(df) == ' a b c d\n'
-                            '0 foo bar uncomfortably lo... 1\n'
-                            '1 foo bar stuff 1')
+                            '0 foo bar uncomfortably lo... 1\n'
+                            '1 foo bar stuff 1')

     def test_auto_detect(self):
         term_width, term_height = get_terminal_size()
@@ -332,29 +349,28 @@ def test_auto_detect(self):
         index = range(10)
         df = DataFrame(index=index, columns=cols)
         with option_context('mode.sim_interactive', True):
-            with option_context('max_rows',None):
-                with option_context('max_columns',None):
+            with option_context('max_rows', None):
+                with option_context('max_columns', None):
                     # Wrap around with None
                     self.assertTrue(has_expanded_repr(df))
-            with option_context('max_rows',0):
-                with option_context('max_columns',0):
+            with option_context('max_rows', 0):
+                with option_context('max_columns', 0):
                     # Truncate with auto detection.
                     self.assertTrue(has_horizontally_truncated_repr(df))

             index = range(int(term_height * fac))
             df = DataFrame(index=index, columns=cols)
-            with option_context('max_rows',0):
-                with option_context('max_columns',None):
+            with option_context('max_rows', 0):
+                with option_context('max_columns', None):
                     # Wrap around with None
                     self.assertTrue(has_expanded_repr(df))
                     # Truncate vertically
                     self.assertTrue(has_vertically_truncated_repr(df))

-            with option_context('max_rows',None):
-                with option_context('max_columns',0):
+            with option_context('max_rows', None):
+                with option_context('max_columns', 0):
                     self.assertTrue(has_horizontally_truncated_repr(df))

-
     def test_to_string_repr_unicode(self):
         buf = StringIO()
@@ -379,7 +395,7 @@ def test_to_string_repr_unicode(self):
             self.assertEqual(len(line), line_len)

         # it works even if sys.stdin in None
-        _stdin= sys.stdin
+        _stdin = sys.stdin
         try:
             sys.stdin = None
             repr(df)
@@ -436,10 +452,8 @@ def test_to_string_with_formatters(self):

     def test_to_string_with_formatters_unicode(self):
         df = DataFrame({u('c/\u03c3'): [1, 2, 3]})
-        result = df.to_string(formatters={u('c/\u03c3'):
-                                          lambda x: '%s' % x})
-        self.assertEqual(result, u(' c/\u03c3\n') +
-                         '0 1\n1 2\n2 3')
+        result = df.to_string(formatters={u('c/\u03c3'): lambda x: '%s' % x})
+        self.assertEqual(result, u(' c/\u03c3\n') + '0 1\n1 2\n2 3')

     def test_east_asian_unicode_frame(self):
         if PY3:
@@ -499,7 +513,8 @@ def test_east_asian_unicode_frame(self):
                             'b': [u'あ', u'いいい', u'う', u'ええええええ']},
                            index=pd.Index([u'あ', u'い', u'うう', u'え'], name=u'おおおお'))
             expected = (u" a b\nおおおお \nあ あああああ あ\n"
-                        u"い い いいい\nうう う う\nえ えええ ええええええ")
+                        u"い い いいい\nうう う う\nえ えええ ええええええ"
+                        )
             self.assertEqual(_rep(df), expected)

             # all
@@ -511,8 +526,8 @@ def test_east_asian_unicode_frame(self):
             self.assertEqual(_rep(df), expected)

             # MultiIndex
-            idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'),
-                                             (u'おおお', u'かかかか'), (u'き', u'くく')])
+            idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), (
+                u'おおお', u'かかかか'), (u'き', u'くく')])
             df = DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
                             'b': [u'あ', u'いいい', u'う', u'ええええええ']}, index=idx)
             expected = (u" a b\nあ いい あああああ あ\n"
@@ -533,7 +548,7 @@ def test_east_asian_unicode_frame(self):
                             u"\n[4 rows x 4 columns]")
                 self.assertEqual(_rep(df), expected)

-                df.index = [u'あああ', u'いいいい', u'う', 'aaa']
+                df.index = [u'あああ', u'いいいい', u'う', 'aaa']
                 expected = (u" a ... ああああ\nあああ あああああ ... さ\n"
                             u".. ... ... ...\naaa えええ ... せ\n"
                             u"\n[4 rows x 4 columns]")
@@ -544,8 +559,8 @@ def test_east_asian_unicode_frame(self):

             # mid col
             df = DataFrame({'a': [u'あ', u'いいい', u'う', u'ええええええ'],
-                            'b': [1, 222, 33333, 4]},
-                           index=['a', 'bb', 'c', 'ddd'])
+                            'b': [1, 222, 33333, 4]},
+                           index=['a', 'bb', 'c', 'ddd'])
             expected = (u" a b\na あ 1\n"
                         u"bb いいい 222\nc う 33333\n"
                         u"ddd ええええええ 4")
             self.assertEqual(_rep(df), expected)

             # last col
             df = DataFrame({'a': [1, 222, 33333, 4],
-                            'b': [u'あ', u'いいい', u'う', u'ええええええ']},
-                           index=['a', 'bb', 'c', 'ddd'])
+                            'b': [u'あ', u'いいい', u'う', u'ええええええ']},
+                           index=['a', 'bb', 'c', 'ddd'])
             expected = (u" a b\na 1 あ\n"
                         u"bb 222 いいい\nc 33333 う\n"
                         u"ddd 4 ええええええ")
@@ -562,17 +577,18 @@ def test_east_asian_unicode_frame(self):

             # all col
             df = DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
-                            'b': [u'あ', u'いいい', u'う', u'ええええええ']},
-                           index=['a', 'bb', 'c', 'ddd'])
+                            'b': [u'あ', u'いいい', u'う', u'ええええええ']},
+                           index=['a', 'bb', 'c', 'ddd'])
             expected = (u" a b\na あああああ あ\n"
                         u"bb い いいい\nc う う\n"
-                        u"ddd えええ ええええええ""")
+                        u"ddd えええ ええええええ"
+                        "")
             self.assertEqual(_rep(df), expected)

             # column name
             df = DataFrame({u'あああああ': [1, 222, 33333, 4],
-                            'b': [u'あ', u'いいい', u'う', u'ええええええ']},
-                           index=['a', 'bb', 'c', 'ddd'])
+                            'b': [u'あ', u'いいい', u'う', u'ええええええ']},
+                           index=['a', 'bb', 'c', 'ddd'])
             expected = (u" b あああああ\na あ 1\n"
                         u"bb いいい 222\nc う 33333\n"
                         u"ddd ええええええ 4")
@@ -593,7 +609,8 @@ def test_east_asian_unicode_frame(self):
                            index=pd.Index([u'あ', u'い', u'うう', u'え'], name=u'おおおお'))
             expected = (u" a b\nおおおお \n"
                         u"あ あああああ あ\nい い いいい\n"
-                        u"うう う う\nえ えええ ええええええ")
+                        u"うう う う\nえ えええ ええええええ"
+                        )
             self.assertEqual(_rep(df), expected)

             # all
@@ -606,8 +623,8 @@ def test_east_asian_unicode_frame(self):
             self.assertEqual(_rep(df), expected)

             # MultiIndex
-            idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'),
-                                             (u'おおお', u'かかかか'), (u'き', u'くく')])
+            idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), (
+                u'おおお', u'かかかか'), (u'き', u'くく')])
             df = DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
                             'b': [u'あ', u'いいい', u'う', u'ええええええ']}, index=idx)
             expected = (u" a b\nあ いい あああああ あ\n"
@@ -616,7 +633,8 @@ def test_east_asian_unicode_frame(self):
             self.assertEqual(_rep(df), expected)

             # truncate
-            with option_context('display.max_rows', 3, 'display.max_columns', 3):
+            with option_context('display.max_rows', 3, 'display.max_columns',
+                                3):
                 df = pd.DataFrame({'a': [u'あああああ', u'い', u'う', u'えええ'],
                                    'b': [u'あ', u'いいい', u'う', u'ええええええ'],
@@ -629,7 +647,7 @@ def test_east_asian_unicode_frame(self):
                             u"\n[4 rows x 4 columns]")
                 self.assertEqual(_rep(df), expected)

-                df.index = [u'あああ', u'いいいい', u'う', 'aaa']
+                df.index = [u'あああ', u'いいいい', u'う', 'aaa']
                 expected = (u" a ... ああああ\nあああ あああああ ... さ\n"
                             u"... ... ... ...\naaa えええ ... せ\n"
                             u"\n[4 rows x 4 columns]")
@@ -637,8 +655,8 @@ def test_east_asian_unicode_frame(self):

             # ambiguous unicode
             df = DataFrame({u'あああああ': [1, 222, 33333, 4],
-                            'b': [u'あ', u'いいい', u'¡¡', u'ええええええ']},
-                           index=['a', 'bb', 'c', '¡¡¡'])
+                            'b': [u'あ', u'いいい', u'¡¡', u'ええええええ']},
+                           index=['a', 'bb', 'c', '¡¡¡'])
             expected = (u" b あああああ\na あ 1\n"
                         u"bb いいい 222\nc ¡¡ 33333\n"
                         u"¡¡¡ ええええええ 4")
@@ -671,37 +689,44 @@ def test_to_string_with_col_space(self):
         self.assertEqual(len(with_header_row1), len(no_header))

     def test_to_string_truncate_indices(self):
-        for index in [ tm.makeStringIndex, tm.makeUnicodeIndex, tm.makeIntIndex,
-                       tm.makeDateIndex, tm.makePeriodIndex ]:
-            for column in [ tm.makeStringIndex ]:
-                for h in [10,20]:
-                    for w in [10,20]:
-                        with option_context("display.expand_frame_repr",False):
+        for index in [tm.makeStringIndex, tm.makeUnicodeIndex, tm.makeIntIndex,
+                      tm.makeDateIndex, tm.makePeriodIndex]:
+            for column in [tm.makeStringIndex]:
+                for h in [10, 20]:
+                    for w in [10, 20]:
+                        with option_context("display.expand_frame_repr",
+                                            False):
                             df = DataFrame(index=index(h), columns=column(w))
                             with option_context("display.max_rows", 15):
                                 if h == 20:
-                                    self.assertTrue(has_vertically_truncated_repr(df))
+                                    self.assertTrue(
+                                        has_vertically_truncated_repr(df))
                                 else:
-                                    self.assertFalse(has_vertically_truncated_repr(df))
+                                    self.assertFalse(
+                                        has_vertically_truncated_repr(df))
                             with option_context("display.max_columns", 15):
                                 if w == 20:
-                                    self.assertTrue(has_horizontally_truncated_repr(df))
+                                    self.assertTrue(
+                                        has_horizontally_truncated_repr(df))
                                 else:
-                                    self.assertFalse(has_horizontally_truncated_repr(df))
-                            with option_context("display.max_rows", 15,"display.max_columns", 15):
+                                    self.assertFalse(
+                                        has_horizontally_truncated_repr(df))
+                            with option_context("display.max_rows", 15,
+                                                "display.max_columns", 15):
                                 if h == 20 and w == 20:
-                                    self.assertTrue(has_doubly_truncated_repr(df))
+                                    self.assertTrue(has_doubly_truncated_repr(
+                                        df))
                                 else:
-                                    self.assertFalse(has_doubly_truncated_repr(df))
+                                    self.assertFalse(has_doubly_truncated_repr(
+                                        df))

     def test_to_string_truncate_multilevel(self):
         arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
                   ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
-        df = DataFrame(index=arrays,columns=arrays)
-        with option_context("display.max_rows", 7,"display.max_columns", 7):
+        df = DataFrame(index=arrays, columns=arrays)
+        with option_context("display.max_rows", 7, "display.max_columns", 7):
             self.assertTrue(has_doubly_truncated_repr(df))

-
     def test_to_html_with_col_space(self):
         def check_with_width(df, col_space):
             import re
@@ -740,8 +765,8 @@ def test_to_html_escaped(self):
         test_dict = {'co<l1': {a: "<type 'str'>",
                                b: "<type 'str'>"},
-                     'co>l2':{a: "<type 'str'>",
-                              b: "<type 'str'>"}}
+                     'co>l2': {a: "<type 'str'>",
+                               b: "<type 'str'>"}}
         rs = DataFrame(test_dict).to_html()
         xp = """<table border="1" class="dataframe">
   <thead>
@@ -764,6 +789,7 @@ def test_to_html_escaped(self):
     </tr>
   </tbody>
 </table>"""
+
         self.assertEqual(xp, rs)

     def test_to_html_escape_disabled(self):
@@ -796,6 +822,7 @@ def test_to_html_escape_disabled(self):
     </tr>
   </tbody>
 </table>"""
+
         self.assertEqual(xp, rs)

     def test_to_html_multiindex_index_false(self):
@@ -804,8 +831,8 @@ def test_to_html_multiindex_index_false(self):
             'a': range(2),
             'b': range(3, 5),
             'c': range(5, 7),
-            'd': range(3, 5)}
-        )
+            'd': range(3, 5)
+        })
         df.columns = MultiIndex.from_product([['a', 'b'], ['c', 'd']])
         result = df.to_html(index=False)
         expected = """\
@@ -837,6 +864,7 @@ def test_to_html_multiindex_index_false(self):
     </tr>
   </tbody>
 </table>"""
+
         self.assertEqual(result, expected)

         df.index = Index(df.index.values, name='idx')
@@ -846,7 +874,7 @@ def test_to_html_multiindex_index_false(self):
     def test_to_html_multiindex_sparsify_false_multi_sparse(self):
         with option_context('display.multi_sparse', False):
             index = MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]],
-                                           names=['foo', None])
+                                           names=['foo', None])

             df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], index=index)
@@ -894,6 +922,7 @@ def test_to_html_multiindex_sparsify_false_multi_sparse(self):
     </tr>
   </tbody>
 </table>"""
+
             self.assertEqual(result, expected)

             df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]],
@@ -949,11 +978,12 @@ def test_to_html_multiindex_sparsify_false_multi_sparse(self):
     </tr>
   </tbody>
 </table>"""
+
             self.assertEqual(result, expected)

     def test_to_html_multiindex_sparsify(self):
         index = MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]],
-                                       names=['foo', None])
+                                       names=['foo', None])

         df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], index=index)
@@ -998,10 +1028,11 @@ def test_to_html_multiindex_sparsify(self):
     </tr>
   </tbody>
 </table>"""
+
         self.assertEqual(result, expected)

-        df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]],
-                       columns=index[::2], index=index)
+        df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], columns=index[::2],
+                       index=index)

         result = df.to_html()
         expected = """\
@@ -1051,13 +1082,14 @@ def test_to_html_multiindex_sparsify(self):
     </tr>
   </tbody>
 </table>"""
+
         self.assertEqual(result, expected)

     def test_to_html_index_formatter(self):
-        df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]],
-                       columns=['foo', None], index=lrange(4))
+        df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], columns=['foo', None],
+                       index=lrange(4))

-        f = lambda x: 'abcd'[x]
+        f = lambda x: 'abcd' [x]
         result = df.to_html(formatters={'__index__': f})
         expected = """\
@@ -1091,22 +1123,24 @@ def test_to_html_index_formatter(self):
     </tr>
   </tbody>
 </table>"""
+
         self.assertEqual(result, expected)

     def test_to_html_regression_GH6098(self):
         df = DataFrame({u('clé1'): [u('a'), u('a'), u('b'), u('b'), u('a')],
-                        u('clé2'): [u('1er'), u('2ème'), u('1er'), u('2ème'), u('1er')],
-                        'données1': np.random.randn(5),
-                        'données2': np.random.randn(5)})
+                        u('clé2'): [u('1er'), u('2ème'), u('1er'), u('2ème'),
+                                    u('1er')],
+                        'données1': np.random.randn(5),
+                        'données2': np.random.randn(5)})

         # it works
         df.pivot_table(index=[u('clé1')], columns=[u('clé2')])._repr_html_()

     def test_to_html_truncate(self):
         raise nose.SkipTest("unreliable on travis")
-        index = pd.DatetimeIndex(start='20010101',freq='D',periods=20)
-        df = DataFrame(index=index,columns=range(20))
-        fmt.set_option('display.max_rows',8)
-        fmt.set_option('display.max_columns',4)
+        index = pd.DatetimeIndex(start='20010101', freq='D', periods=20)
+        df = DataFrame(index=index, columns=range(20))
+        fmt.set_option('display.max_rows', 8)
+        fmt.set_option('display.max_columns', 4)
         result = df._repr_html_()
         expected = '''\
 <div{0}>
@@ -1206,9 +1240,9 @@ def test_to_html_truncate_multi_index(self):
         raise nose.SkipTest("unreliable on travis")
         arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
                   ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
-        df = DataFrame(index=arrays,columns=arrays)
-        fmt.set_option('display.max_rows',7)
-        fmt.set_option('display.max_columns',7)
+        df = DataFrame(index=arrays, columns=arrays)
+        fmt.set_option('display.max_rows', 7)
+        fmt.set_option('display.max_columns', 7)
         result = df._repr_html_()
         expected = '''\
 <div{0}>
@@ -1323,10 +1357,10 @@ def test_to_html_truncate_multi_index_sparse_off(self):
         raise nose.SkipTest("unreliable on travis")
         arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
                   ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
-        df = DataFrame(index=arrays,columns=arrays)
-        fmt.set_option('display.max_rows',7)
-        fmt.set_option('display.max_columns',7)
-        fmt.set_option('display.multi_sparse',False)
+        df = DataFrame(index=arrays, columns=arrays)
+        fmt.set_option('display.max_rows', 7)
+        fmt.set_option('display.max_columns', 7)
+        fmt.set_option('display.multi_sparse', False)
         result = df._repr_html_()
         expected = '''\
 <div{0}>
@@ -1430,8 +1464,6 @@ def test_to_html_truncate_multi_index_sparse_off(self):
             expected = expected.decode('utf-8')
         self.assertEqual(result, expected)

-
-
     def test_nonunicode_nonascii_alignment(self):
         df = DataFrame([["aa\xc3\xa4\xc3\xa4", 1], ["bbbb", 2]])
         rep_str = df.to_string()
@@ -1468,31 +1500,30 @@ def test_pprint_thing(self):
         if PY3:
             raise nose.SkipTest("doesn't work on Python 3")

-        self.assertEqual(pp_t('a') , u('a'))
-        self.assertEqual(pp_t(u('a')) , u('a'))
-        self.assertEqual(pp_t(None) , 'None')
-        self.assertEqual(pp_t(u('\u05d0'), quote_strings=True),
-                         u("u'\u05d0'"))
-        self.assertEqual(pp_t(u('\u05d0'), quote_strings=False),
-                         u('\u05d0'))
+        self.assertEqual(pp_t('a'), u('a'))
+        self.assertEqual(pp_t(u('a')), u('a'))
+        self.assertEqual(pp_t(None), 'None')
+        self.assertEqual(pp_t(u('\u05d0'), quote_strings=True), u("u'\u05d0'"))
+        self.assertEqual(pp_t(u('\u05d0'), quote_strings=False), u('\u05d0'))
         self.assertEqual(pp_t((u('\u05d0'),
-                               u('\u05d1')), quote_strings=True),
-                         u("(u'\u05d0', u'\u05d1')"))
+                               u('\u05d1')), quote_strings=True),
+                         u("(u'\u05d0', u'\u05d1')"))
         self.assertEqual(pp_t((u('\u05d0'), (u('\u05d1'),
-                                             u('\u05d2'))),
-                         quote_strings=True),
-                         u("(u'\u05d0', (u'\u05d1', u'\u05d2'))"))
+                                             u('\u05d2'))),
+                              quote_strings=True),
+                         u("(u'\u05d0', (u'\u05d1', u'\u05d2'))"))
         self.assertEqual(pp_t(('foo', u('\u05d0'), (u('\u05d0'),
-                                                    u('\u05d0'))),
-                         quote_strings=True),
-                         u("(u'foo', u'\u05d0', (u'\u05d0', u'\u05d0'))"))
+                                                    u('\u05d0'))),
+                              quote_strings=True),
+                         u("(u'foo', u'\u05d0', (u'\u05d0', u'\u05d0'))"))

         # escape embedded tabs in string
         # GH #2038
-        self.assertTrue(not "\t" in pp_t("a\tb", escape_chars=("\t",)))
+        self.assertTrue(not "\t" in pp_t("a\tb", escape_chars=("\t", )))

     def test_wide_repr(self):
-        with option_context('mode.sim_interactive', True, 'display.show_dimensions', True):
+        with option_context('mode.sim_interactive', True,
+                            'display.show_dimensions', True):
             max_cols = get_option('display.max_columns')
             df = DataFrame(tm.rands_array(25, size=(10, max_cols - 1)))
             set_option('display.expand_frame_repr', False)
@@ -1562,10 +1593,9 @@ def test_wide_repr_multiindex(self):
     def test_wide_repr_multiindex_cols(self):
         with option_context('mode.sim_interactive', True):
             max_cols = get_option('display.max_columns')
-            midx = MultiIndex.from_arrays(
-                tm.rands_array(5, size=(2, 10)))
-            mcols = MultiIndex.from_arrays(
-                tm.rands_array(3, size=(2, max_cols - 1)))
+            midx = MultiIndex.from_arrays(tm.rands_array(5, size=(2, 10)))
+            mcols = MultiIndex.from_arrays(tm.rands_array(3, size=(2, max_cols
+                                                                   - 1)))
             df = DataFrame(tm.rands_array(25, (10, max_cols - 1)),
                            index=midx, columns=mcols)
             df.index.names = ['Level 0', 'Level 1']
@@ -1599,8 +1629,8 @@ def test_wide_repr_unicode(self):

     def test_wide_repr_wide_long_columns(self):
         with option_context('mode.sim_interactive', True):
-            df = DataFrame(
-                {'a': ['a' * 30, 'b' * 30], 'b': ['c' * 70, 'd' * 80]})
+            df = DataFrame({'a': ['a' * 30, 'b' * 30],
+                            'b': ['c' * 70, 'd' * 80]})

             result = repr(df)
             self.assertTrue('ccccc' in result)
@@ -1608,58 +1638,78 @@ def test_wide_repr_wide_long_columns(self):

     def test_long_series(self):
         n = 1000
-        s = Series(np.random.randint(-50,50,n),index=['s%04d' % x for x in range(n)], dtype='int64')
+        s = Series(
+            np.random.randint(-50, 50, n),
+            index=['s%04d' % x for x in range(n)], dtype='int64')

         import re
         str_rep = str(s)
-        nmatches = len(re.findall('dtype',str_rep))
+        nmatches = len(re.findall('dtype', str_rep))
         self.assertEqual(nmatches, 1)

     def test_index_with_nan(self):
         # GH 2850
-        df = DataFrame({'id1': {0: '1a3', 1: '9h4'}, 'id2': {0: np.nan, 1: 'd67'},
-                        'id3': {0: '78d', 1: '79d'}, 'value': {0: 123, 1: 64}})
+        df = DataFrame({'id1': {0: '1a3',
+                                1: '9h4'},
+                        'id2': {0: np.nan,
+                                1: 'd67'},
+                        'id3': {0: '78d',
+                                1: '79d'},
+                        'value': {0: 123,
+                                  1: 64}})

         # multi-index
         y = df.set_index(['id1', 'id2', 'id3'])
         result = y.to_string()
-        expected = u(' value\nid1 id2 id3 \n1a3 NaN 78d 123\n9h4 d67 79d 64')
+        expected = u(
+            ' value\nid1 id2 id3 \n1a3 NaN 78d 123\n9h4 d67 79d 64')
         self.assertEqual(result, expected)

         # index
         y = df.set_index('id2')
         result = y.to_string()
-        expected = u(' id1 id3 value\nid2 \nNaN 1a3 78d 123\nd67 9h4 79d 64')
+        expected = u(
+            ' id1 id3 value\nid2 \nNaN 1a3 78d 123\nd67 9h4 79d 64')
         self.assertEqual(result, expected)

         # with append (this failed in 0.12)
         y = df.set_index(['id1', 'id2']).set_index('id3', append=True)
         result = y.to_string()
-        expected = u(' value\nid1 id2 id3 \n1a3 NaN 78d 123\n9h4 d67 79d 64')
+        expected = u(
+            ' value\nid1 id2 id3 \n1a3 NaN 78d 123\n9h4 d67 79d 64')
         self.assertEqual(result, expected)

         # all-nan in mi
         df2 = df.copy()
-        df2.ix[:,'id2'] = np.nan
+        df2.ix[:, 'id2'] = np.nan
         y = df2.set_index('id2')
         result = y.to_string()
-        expected = u(' id1 id3 value\nid2 \nNaN 1a3 78d 123\nNaN 9h4 79d 64')
+        expected = u(
+            ' id1 id3 value\nid2 \nNaN 1a3 78d 123\nNaN 9h4 79d 64')
         self.assertEqual(result, expected)

         # partial nan in mi
         df2 = df.copy()
-        df2.ix[:,'id2'] = np.nan
-        y = df2.set_index(['id2','id3'])
+        df2.ix[:, 'id2'] = np.nan
+        y = df2.set_index(['id2', 'id3'])
         result = y.to_string()
-        expected = u(' id1 value\nid2 id3 \nNaN 78d 1a3 123\n 79d 9h4 64')
+        expected = u(
+            ' id1 value\nid2 id3 \nNaN 78d 1a3 123\n 79d 9h4 64')
         self.assertEqual(result, expected)

-        df = DataFrame({'id1': {0: np.nan, 1: '9h4'}, 'id2': {0: np.nan, 1: 'd67'},
-                        'id3': {0: np.nan, 1: '79d'}, 'value': {0: 123, 1: 64}})
+        df = DataFrame({'id1': {0: np.nan,
+                                1: '9h4'},
+                        'id2': {0: np.nan,
+                                1: 'd67'},
+                        'id3': {0: np.nan,
+                                1: '79d'},
+                        'value': {0: 123,
+                                  1: 64}})

-        y = df.set_index(['id1','id2','id3'])
+        y = df.set_index(['id1', 'id2', 'id3'])
         result = y.to_string()
-        expected = u(' value\nid1 id2 id3 \nNaN NaN NaN 123\n9h4 d67 79d 64')
+        expected = u(
+            ' value\nid1 id2 id3 \nNaN NaN NaN 123\n9h4 d67 79d 64')
         self.assertEqual(result, expected)

     def test_to_string(self):
@@ -1671,8 +1721,8 @@ def test_to_string(self):
                             'B': tm.makeStringIndex(200)},
                            index=lrange(200))

-        biggie.loc[:20,'A'] = nan
-        biggie.loc[:20,'B'] = nan
+        biggie.loc[:20, 'A'] = nan
+        biggie.loc[:20, 'B'] = nan

         s = biggie.to_string()

         buf = StringIO()
@@ -1692,8 +1742,8 @@ def test_to_string(self):
                              header=None, sep=' ')
         tm.assert_series_equal(recons['B'], biggie['B'])
         self.assertEqual(recons['A'].count(), biggie['A'].count())
-        self.assertTrue((np.abs(recons['A'].dropna() -
-                                biggie['A'].dropna()) < 0.1).all())
+        self.assertTrue((np.abs(recons['A'].dropna() - biggie['A'].dropna()) <
+                         0.1).all())

         # expected = ['B', 'A']
         # self.assertEqual(header, expected)
@@ -1707,15 +1757,13 @@ def test_to_string(self):
                          formatters={'A': lambda x: '%.1f' % x})

         biggie.to_string(columns=['B', 'A'], float_format=str)
-        biggie.to_string(columns=['B', 'A'], col_space=12,
-                         float_format=str)
+        biggie.to_string(columns=['B', 'A'], col_space=12, float_format=str)

         frame = DataFrame(index=np.arange(200))
         frame.to_string()

     def test_to_string_no_header(self):
-        df = DataFrame({'x': [1, 2, 3],
-                        'y': [4, 5, 6]})
+        df = DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})

         df_s = df.to_string(header=False)
         expected = "0 1 4\n1 2 5\n2 3 6"

         self.assertEqual(df_s, expected)

     def test_to_string_no_index(self):
-        df = DataFrame({'x': [1, 2, 3],
-                        'y': [4, 5, 6]})
+        df = DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})

         df_s = df.to_string(index=False)
         expected = "x y\n1 4\n2 5\n3 6"
@@ -1733,11 +1780,11 @@ def test_to_string_float_formatting(self):
         self.reset_display_options()
-        fmt.set_option('display.precision', 5, 'display.column_space',
-                       12, 'display.notebook_repr_html', False)
+        fmt.set_option('display.precision', 5, 'display.column_space', 12,
+                       'display.notebook_repr_html', False)

-        df = DataFrame({'x': [0, 0.25, 3456.000, 12e+45, 1.64e+6,
-                              1.7e+8, 1.253456, np.pi, -1e6]})
+        df = DataFrame({'x': [0, 0.25, 3456.000, 12e+45, 1.64e+6, 1.7e+8,
+                              1.253456, np.pi, -1e6]})

         df_s = df.to_string()
@@ -1758,9 +1805,7 @@ def test_to_string_float_formatting(self):
         df = DataFrame({'x': [3234, 0.253]})
         df_s = df.to_string()

-        expected = (' x\n'
-                    '0 3234.000\n'
-                    '1 0.253')
+        expected = (' x\n' '0 3234.000\n' '1 0.253')
         self.assertEqual(df_s, expected)

         self.reset_display_options()
@@ -1800,10 +1845,7 @@ def test_to_string_small_float_values(self):
         # but not all exactly zero
         df = df * 0
         result = df.to_string()
-        expected = (' 0\n'
-                    '0 0\n'
-                    '1 0\n'
-                    '2 -0')
+        expected = (' 0\n' '0 0\n' '1 0\n' '2 -0')

     def test_to_string_float_index(self):
         index = Index([1.5, 2, 3, 4, 5])
@@ -1819,9 +1861,7 @@ def test_to_string_float_index(self):
         self.assertEqual(result, expected)

     def test_to_string_ascii_error(self):
-        data = [('0 ',
-                 u(' .gitignore '),
-                 u(' 5 '),
+        data = [('0 ', u(' .gitignore '), u(' 5 '),
                  ' \xe2\x80\xa2\xe2\x80\xa2\xe2\x80'
                  '\xa2\xe2\x80\xa2\xe2\x80\xa2')]
         df = DataFrame(data)
@@ -1834,17 +1874,13 @@ def test_to_string_int_formatting(self):
         self.assertTrue(issubclass(df['x'].dtype.type, np.integer))

         output = df.to_string()
-        expected = (' x\n'
-                    '0 -15\n'
-                    '1 20\n'
-                    '2 25\n'
-                    '3 -35')
+        expected = (' x\n' '0 -15\n' '1 20\n' '2 25\n' '3 -35')
         self.assertEqual(output, expected)

     def test_to_string_index_formatter(self):
         df = DataFrame([lrange(5), lrange(5, 10), lrange(10, 15)])

-        rs = df.to_string(formatters={'__index__': lambda x: 'abc'[x]})
+        rs = df.to_string(formatters={'__index__': lambda x: 'abc' [x]})
         xp = """\
 0 1 2 3 4
@@ -1852,15 +1888,14 @@ def test_to_string_index_formatter(self):
 b 5 6 7 8 9
 c 10 11 12 13 14\
 """
+
         self.assertEqual(rs, xp)

     def test_to_string_left_justify_cols(self):
         self.reset_display_options()
         df = DataFrame({'x': [3234, 0.253]})
         df_s = df.to_string(justify='left')
-        expected = (' x \n'
-                    '0 3234.000\n'
-                    '1 0.253')
+        expected = (' x \n' '0 3234.000\n' '1 0.253')
         self.assertEqual(df_s, expected)

     def test_to_string_format_na(self):
@@ -1897,20 +1932,24 @@ def test_to_string_line_width(self):
     def test_show_dimensions(self):
         df = DataFrame(123, lrange(10, 15), lrange(30))

-        with option_context('display.max_rows', 10, 'display.max_columns', 40, 'display.width',
-                            500, 'display.expand_frame_repr', 'info', 'display.show_dimensions', True):
+        with option_context('display.max_rows', 10, 'display.max_columns', 40,
+                            'display.width', 500, 'display.expand_frame_repr',
+                            'info', 'display.show_dimensions', True):
             self.assertTrue('5 rows' in str(df))
             self.assertTrue('5 rows' in df._repr_html_())
-        with option_context('display.max_rows', 10, 'display.max_columns', 40, 'display.width',
-                            500, 'display.expand_frame_repr', 'info', 'display.show_dimensions', False):
+        with option_context('display.max_rows', 10, 'display.max_columns', 40,
+                            'display.width', 500, 'display.expand_frame_repr',
+                            'info', 'display.show_dimensions', False):
             self.assertFalse('5 rows' in str(df))
             self.assertFalse('5 rows' in df._repr_html_())
-        with option_context('display.max_rows', 2, 'display.max_columns', 2, 'display.width',
-                            500, 'display.expand_frame_repr', 'info', 'display.show_dimensions', 'truncate'):
+        with option_context('display.max_rows', 2, 'display.max_columns', 2,
+                            'display.width', 500, 'display.expand_frame_repr',
+                            'info', 'display.show_dimensions', 'truncate'):
             self.assertTrue('5 rows' in str(df))
             self.assertTrue('5 rows' in df._repr_html_())
-        with option_context('display.max_rows', 10, 'display.max_columns', 40, 'display.width',
-                            500, 'display.expand_frame_repr', 'info', 'display.show_dimensions', 'truncate'):
+        with option_context('display.max_rows', 10, 'display.max_columns', 40,
+                            'display.width', 500, 'display.expand_frame_repr',
+                            'info', 'display.show_dimensions', 'truncate'):
             self.assertFalse('5 rows' in str(df))
             self.assertFalse('5 rows' in df._repr_html_())
@@ -1920,8 +1959,8 @@ def test_to_html(self):
                             'B': tm.makeStringIndex(200)},
                            index=lrange(200))

-        biggie.loc[:20,'A'] = nan
-        biggie.loc[:20,'B'] = nan
+        biggie.loc[:20, 'A'] = nan
+        biggie.loc[:20, 'B'] = nan
         s = biggie.to_html()

         buf = StringIO()
@@ -1936,8 +1975,7 @@ def test_to_html(self):
                        formatters={'A': lambda x: '%.1f' % x})

         biggie.to_html(columns=['B', 'A'], float_format=str)
-        biggie.to_html(columns=['B', 'A'], col_space=12,
-                       float_format=str)
+        biggie.to_html(columns=['B', 'A'], col_space=12, float_format=str)

         frame = DataFrame(index=np.arange(200))
         frame.to_html()
@@ -1947,8 +1985,8 @@ def test_to_html_filename(self):
                             'B': tm.makeStringIndex(200)},
                            index=lrange(200))

-        biggie.loc[:20,'A'] = nan
-        biggie.loc[:20,'B'] = nan
+        biggie.loc[:20, 'A'] = nan
+        biggie.loc[:20, 'B'] = nan

         with tm.ensure_clean('test.html') as path:
             biggie.to_html(path)
             with open(path, 'r') as f:
@@ -1973,8 +2011,8 @@ def test_to_html_columns_arg(self):

     def test_to_html_multiindex(self):
         columns = MultiIndex.from_tuples(list(zip(np.arange(2).repeat(2),
-                                                  np.mod(lrange(4), 2))),
-                                         names=['CL0', 'CL1'])
+                                                  np.mod(lrange(4), 2))),
+                                         names=['CL0', 'CL1'])
         df = DataFrame([list('abcd'), list('efgh')], columns=columns)
         result = df.to_html(justify='left')
         expected = ('<table border="1" class="dataframe">\n'
@@ -2012,8 +2050,9 @@ def test_to_html_multiindex(self):

         self.assertEqual(result, expected)

-        columns = MultiIndex.from_tuples(list(zip(range(4),
-                                                  np.mod(lrange(4), 2))))
+        columns = MultiIndex.from_tuples(list(zip(
+            range(4), np.mod(
+                lrange(4), 2))))
         df = DataFrame([list('abcd'), list('efgh')], columns=columns)

         result = df.to_html(justify='right')
@@ -2056,9 +2095,9 @@ def test_to_html_multiindex(self):

     def test_to_html_justify(self):
         df = DataFrame({'A': [6, 30000, 2],
-                        'B': [1, 2, 70000],
-                        'C': [223442, 0, 1]},
-                       columns=['A', 'B', 'C'])
+                        'B': [1, 2, 70000],
+                        'C': [223442, 0, 1]},
+                       columns=['A', 'B', 'C'])
         result = df.to_html(justify='left')
         expected = ('<table border="1" class="dataframe">\n'
                     '  <thead>\n'
@@ -2128,10 +2167,10 @@ def test_to_html_justify(self):
     def test_to_html_index(self):
         index = ['foo', 'bar', 'baz']
         df = DataFrame({'A': [1, 2, 3],
-                        'B': [1.2, 3.4, 5.6],
-                        'C': ['one', 'two', np.NaN]},
-                       columns=['A', 'B', 'C'],
-                       index=index)
+                        'B': [1.2, 3.4, 5.6],
+                        'C': ['one', 'two', np.NaN]},
+                       columns=['A', 'B', 'C'],
+                       index=index)
         expected_with_index = ('<table border="1" class="dataframe">\n'
                                '  <thead>\n'
                                '    <tr style="text-align: right;">\n'
@@ -2354,17 +2393,17 @@ def test_repr_html_wide(self):

     def test_repr_html_wide_multiindex_cols(self):
         max_cols = get_option('display.max_columns')

-        mcols = MultiIndex.from_product([np.arange(max_cols//2),
-                                         ['foo', 'bar']],
-                                        names=['first', 'second'])
+        mcols = MultiIndex.from_product([np.arange(max_cols // 2),
+                                         ['foo', 'bar']],
+                                        names=['first', 'second'])
         df = DataFrame(tm.rands_array(25, size=(10, len(mcols))),
                        columns=mcols)
         reg_repr = df._repr_html_()
         assert '...' not in reg_repr

-        mcols = MultiIndex.from_product((np.arange(1+(max_cols//2)),
-                                         ['foo', 'bar']),
-                                        names=['first', 'second'])
+        mcols = MultiIndex.from_product((np.arange(1 + (max_cols // 2)),
+                                         ['foo', 'bar']),
+                                        names=['first', 'second'])
         df = DataFrame(tm.rands_array(25, size=(10, len(mcols))),
                        columns=mcols)
         wide_repr = df._repr_html_()
@@ -2373,13 +2412,13 @@ def test_repr_html_wide_multiindex_cols(self):
     def test_repr_html_long(self):
         max_rows = get_option('display.max_rows')
         h = max_rows - 1
-        df = DataFrame({'A':np.arange(1,1+h), 'B':np.arange(41, 41+h)})
+        df = DataFrame({'A': np.arange(1, 1 + h), 'B': np.arange(41, 41 + h)})
         reg_repr = df._repr_html_()
         assert '..' not in reg_repr
         assert str(41 + max_rows // 2) in reg_repr

         h = max_rows + 1
-        df = DataFrame({'A':np.arange(1,1+h), 'B':np.arange(41, 41+h)})
+        df = DataFrame({'A': np.arange(1, 1 + h), 'B': np.arange(41, 41 + h)})
         long_repr = df._repr_html_()
         assert '..' in long_repr
         assert str(41 + max_rows // 2) not in long_repr
@@ -2389,13 +2428,17 @@ def test_repr_html_long(self):
     def test_repr_html_float(self):
         max_rows = get_option('display.max_rows')
         h = max_rows - 1
-        df = DataFrame({'idx':np.linspace(-10,10,h), 'A':np.arange(1,1+h), 'B': np.arange(41, 41+h) }).set_index('idx')
+        df = DataFrame({'idx': np.linspace(-10, 10, h),
+                        'A': np.arange(1, 1 + h),
+                        'B': np.arange(41, 41 + h)}).set_index('idx')
         reg_repr = df._repr_html_()
         assert '..'
not in reg_repr assert str(40 + h) in reg_repr h = max_rows + 1 - df = DataFrame({'idx':np.linspace(-10,10,h), 'A':np.arange(1,1+h), 'B': np.arange(41, 41+h) }).set_index('idx') + df = DataFrame({'idx': np.linspace(-10, 10, h), + 'A': np.arange(1, 1 + h), + 'B': np.arange(41, 41 + h)}).set_index('idx') long_repr = df._repr_html_() assert '..' in long_repr assert '31' not in long_repr @@ -2404,18 +2447,18 @@ def test_repr_html_float(self): def test_repr_html_long_multiindex(self): max_rows = get_option('display.max_rows') - max_L1 = max_rows//2 + max_L1 = max_rows // 2 tuples = list(itertools.product(np.arange(max_L1), ['foo', 'bar'])) idx = MultiIndex.from_tuples(tuples, names=['first', 'second']) - df = DataFrame(np.random.randn(max_L1*2, 2), index=idx, + df = DataFrame(np.random.randn(max_L1 * 2, 2), index=idx, columns=['A', 'B']) reg_repr = df._repr_html_() assert '...' not in reg_repr - tuples = list(itertools.product(np.arange(max_L1+1), ['foo', 'bar'])) + tuples = list(itertools.product(np.arange(max_L1 + 1), ['foo', 'bar'])) idx = MultiIndex.from_tuples(tuples, names=['first', 'second']) - df = DataFrame(np.random.randn((max_L1+1)*2, 2), index=idx, + df = DataFrame(np.random.randn((max_L1 + 1) * 2, 2), index=idx, columns=['A', 'B']) long_repr = df._repr_html_() assert '...' in long_repr @@ -2424,27 +2467,27 @@ def test_repr_html_long_and_wide(self): max_cols = get_option('display.max_columns') max_rows = get_option('display.max_rows') - h, w = max_rows-1, max_cols-1 - df = DataFrame(dict((k,np.arange(1,1+h)) for k in np.arange(w))) - assert '...' not in df._repr_html_() + h, w = max_rows - 1, max_cols - 1 + df = DataFrame(dict((k, np.arange(1, 1 + h)) for k in np.arange(w))) + assert '...' not in df._repr_html_() - h, w = max_rows+1, max_cols+1 - df = DataFrame(dict((k,np.arange(1,1+h)) for k in np.arange(w))) - assert '...' 
in df._repr_html_() + h, w = max_rows + 1, max_cols + 1 + df = DataFrame(dict((k, np.arange(1, 1 + h)) for k in np.arange(w))) + assert '...' in df._repr_html_() def test_info_repr(self): max_rows = get_option('display.max_rows') max_cols = get_option('display.max_columns') # Long - h, w = max_rows+1, max_cols-1 - df = DataFrame(dict((k,np.arange(1,1+h)) for k in np.arange(w))) + h, w = max_rows + 1, max_cols - 1 + df = DataFrame(dict((k, np.arange(1, 1 + h)) for k in np.arange(w))) assert has_vertically_truncated_repr(df) with option_context('display.large_repr', 'info'): assert has_info_repr(df) # Wide - h, w = max_rows-1, max_cols+1 - df = DataFrame(dict((k,np.arange(1,1+h)) for k in np.arange(w))) + h, w = max_rows - 1, max_cols + 1 + df = DataFrame(dict((k, np.arange(1, 1 + h)) for k in np.arange(w))) assert has_horizontally_truncated_repr(df) with option_context('display.large_repr', 'info'): assert has_info_repr(df) @@ -2469,24 +2512,23 @@ def test_info_repr_html(self): max_rows = get_option('display.max_rows') max_cols = get_option('display.max_columns') # Long - h, w = max_rows+1, max_cols-1 - df = DataFrame(dict((k,np.arange(1,1+h)) for k in np.arange(w))) + h, w = max_rows + 1, max_cols - 1 + df = DataFrame(dict((k, np.arange(1, 1 + h)) for k in np.arange(w))) assert r'&lt;class' not in df._repr_html_() with option_context('display.large_repr', 'info'): assert r'&lt;class' in df._repr_html_() # Wide - h, w = max_rows-1, max_cols+1 - df = DataFrame(dict((k,np.arange(1,1+h)) for k in np.arange(w))) + h, w = max_rows - 1, max_cols + 1 + df = DataFrame(dict((k, np.arange(1, 1 + h)) for k in np.arange(w))) assert '<class' not in df._repr_html_() with option_context('display.large_repr', 'info'): assert '&lt;class' in df._repr_html_() def test_fake_qtconsole_repr_html(self): def get_ipython(): - return {'config': - {'KernelApp': - {'parent_appname': 'ipython-qtconsole'}}} + return {'config': {'KernelApp': + {'parent_appname': 'ipython-qtconsole'}}} repstr = 
self.frame._repr_html_()
         self.assertIsNotNone(repstr)
@@ -2523,11 +2565,14 @@ def test_pprint_pathological_object(self):
         if the test fails, the stack will overflow and nose crash,
         but it won't hang.
         """
+
         class A:
+
             def __getitem__(self, key):
-                return 3 # obviously simplified
+                return 3  # obviously simplified
+
         df = DataFrame([A()])
-        repr(df) # just don't dine
+        repr(df)  # just don't dine

     def test_float_trim_zeros(self):
         vals = [2.08430917305e+10, 3.52205017305e+10, 2.30674817305e+10,
@@ -2578,8 +2623,7 @@ def test_to_latex(self):
         # it works!
         self.frame.to_latex()

-        df = DataFrame({'a': [1, 2],
-                        'b': ['b1', 'b2']})
+        df = DataFrame({'a': [1, 2], 'b': ['b1', 'b2']})
         withindex_result = df.to_latex()
         withindex_expected = r"""\begin{tabular}{lrl}
 \toprule
@@ -2590,6 +2634,7 @@ def test_to_latex(self):
 \bottomrule
 \end{tabular}
 """
+
         self.assertEqual(withindex_result, withindex_expected)

         withoutindex_result = df.to_latex(index=False)
@@ -2602,14 +2647,14 @@ def test_to_latex(self):
 \bottomrule
 \end{tabular}
 """
+
         self.assertEqual(withoutindex_result, withoutindex_expected)

     def test_to_latex_format(self):
         # GH Bug #9402
         self.frame.to_latex(column_format='ccc')

-        df = DataFrame({'a': [1, 2],
-                        'b': ['b1', 'b2']})
+        df = DataFrame({'a': [1, 2], 'b': ['b1', 'b2']})
         withindex_result = df.to_latex(column_format='ccc')
         withindex_expected = r"""\begin{tabular}{ccc}
 \toprule
@@ -2620,6 +2665,7 @@ def test_to_latex_format(self):
 \bottomrule
 \end{tabular}
 """
+
         self.assertEqual(withindex_result, withindex_expected)

     def test_to_latex_multiindex(self):
@@ -2634,6 +2680,7 @@ def test_to_latex_multiindex(self):
 \bottomrule
 \end{tabular}
 """
+
         self.assertEqual(result, expected)

         result = df.T.to_latex()
@@ -2645,6 +2692,7 @@ def test_to_latex_multiindex(self):
 \bottomrule
 \end{tabular}
 """
+
         self.assertEqual(result, expected)

         df = DataFrame.from_dict({
@@ -2667,10 +2715,13 @@ def test_to_latex_multiindex(self):
 \bottomrule
 \end{tabular}
 """
+
         self.assertEqual(result, expected)

         # GH
10660 - df = pd.DataFrame({'a':[0,0,1,1], 'b':list('abab'), 'c':[1,2,3,4]}) + df = pd.DataFrame({'a': [0, 0, 1, 1], + 'b': list('abab'), + 'c': [1, 2, 3, 4]}) result = df.set_index(['a', 'b']).to_latex() expected = r"""\begin{tabular}{llr} \toprule @@ -2684,6 +2735,7 @@ def test_to_latex_multiindex(self): \bottomrule \end{tabular} """ + self.assertEqual(result, expected) result = df.groupby('a').describe().to_latex() @@ -2711,19 +2763,21 @@ def test_to_latex_multiindex(self): \bottomrule \end{tabular} """ + self.assertEqual(result, expected) def test_to_latex_escape(self): a = 'a' b = 'b' - test_dict = {u('co^l1') : {a: "a", - b: "b"}, + test_dict = {u('co^l1'): {a: "a", + b: "b"}, u('co$e^x$'): {a: "a", b: "b"}} unescaped_result = DataFrame(test_dict).to_latex(escape=False) - escaped_result = DataFrame(test_dict).to_latex() # default: escape=True + escaped_result = DataFrame(test_dict).to_latex( + ) # default: escape=True unescaped_expected = r'''\begin{tabular}{lll} \toprule @@ -2744,14 +2798,14 @@ def test_to_latex_escape(self): \bottomrule \end{tabular} ''' + self.assertEqual(unescaped_result, unescaped_expected) self.assertEqual(escaped_result, escaped_expected) def test_to_latex_longtable(self): self.frame.to_latex(longtable=True) - df = DataFrame({'a': [1, 2], - 'b': ['b1', 'b2']}) + df = DataFrame({'a': [1, 2], 'b': ['b1', 'b2']}) withindex_result = df.to_latex(longtable=True) withindex_expected = r"""\begin{longtable}{lrl} \toprule @@ -2769,6 +2823,7 @@ def test_to_latex_longtable(self): 1 & 2 & b2 \\ \end{longtable} """ + self.assertEqual(withindex_result, withindex_expected) withoutindex_result = df.to_latex(index=False, longtable=True) @@ -2788,11 +2843,12 @@ def test_to_latex_longtable(self): 2 & b2 \\ \end{longtable} """ + self.assertEqual(withoutindex_result, withoutindex_expected) def test_to_latex_escape_special_chars(self): - special_characters = ['&','%','$','#','_', - '{','}','~','^','\\'] + special_characters = ['&', '%', '$', '#', '_', '{', 
'}', '~', '^',
+                              '\\']
         df = DataFrame(data=special_characters)
         observed = df.to_latex()
         expected = r"""\begin{tabular}{ll}
@@ -2812,12 +2868,12 @@ def test_to_latex_escape_special_chars(self):
 \bottomrule
 \end{tabular}
 """
+
         self.assertEqual(observed, expected)

     def test_to_latex_no_header(self):
         # GH 7124
-        df = DataFrame({'a': [1, 2],
-                        'b': ['b1', 'b2']})
+        df = DataFrame({'a': [1, 2], 'b': ['b1', 'b2']})
         withindex_result = df.to_latex(header=False)
         withindex_expected = r"""\begin{tabular}{lrl}
 \toprule
@@ -2826,6 +2882,7 @@ def test_to_latex_no_header(self):
 \bottomrule
 \end{tabular}
 """
+
         self.assertEqual(withindex_result, withindex_expected)

         withoutindex_result = df.to_latex(index=False, header=False)
@@ -2836,17 +2893,19 @@ def test_to_latex_no_header(self):
 \bottomrule
 \end{tabular}
 """
+
         self.assertEqual(withoutindex_result, withoutindex_expected)

     def test_to_csv_quotechar(self):
-        df = DataFrame({'col' : [1,2]})
+        df = DataFrame({'col': [1, 2]})
         expected = """\
"","col"
"0","1"
"1","2"
"""
+
         with tm.ensure_clean('test.csv') as path:
-            df.to_csv(path, quoting=1) # 1=QUOTE_ALL
+            df.to_csv(path, quoting=1)  # 1=QUOTE_ALL
             with open(path, 'r') as f:
                 self.assertEqual(f.read(), expected)
         with tm.ensure_clean('test.csv') as path:
@@ -2894,19 +2955,20 @@ def test_to_csv_doublequote(self): from _csv import Error with tm.ensure_clean('test.csv') as path: with tm.assertRaisesRegexp(Error, 'escapechar'): - df.to_csv(path, doublequote=False) # no escapechar set + df.to_csv(path, doublequote=False) # no escapechar set with tm.ensure_clean('test.csv') as path: with tm.assertRaisesRegexp(Error, 'escapechar'): df.to_csv(path, doublequote=False, engine='python') def test_to_csv_escapechar(self): - df = DataFrame({'col' : ['a"a', '"bb"']}) + df = DataFrame({'col': ['a"a', '"bb"']}) expected = """\ "","col" "0","a\\"a" "1","\\"bb\\"" """ - with tm.ensure_clean('test.csv') as path: # QUOTE_ALL + + with tm.ensure_clean('test.csv') as path: # QUOTE_ALL df.to_csv(path, quoting=1, doublequote=False, escapechar='\\') with open(path, 'r') as f: self.assertEqual(f.read(), expected) @@ -2916,14 +2978,15 @@ def test_to_csv_escapechar(self): with open(path, 'r') as f: self.assertEqual(f.read(), expected) - df = DataFrame({'col' : ['a,a', ',bb,']}) + df = DataFrame({'col': ['a,a', ',bb,']}) expected = """\ ,col 0,a\\,a 1,\\,bb\\, """ + with tm.ensure_clean('test.csv') as path: - df.to_csv(path, quoting=3, escapechar='\\') # QUOTE_NONE + df.to_csv(path, quoting=3, escapechar='\\') # QUOTE_NONE with open(path, 'r') as f: self.assertEqual(f.read(), expected) with tm.ensure_clean('test.csv') as path: @@ -2932,35 +2995,37 @@ def test_to_csv_escapechar(self): self.assertEqual(f.read(), expected) def test_csv_to_string(self): - df = DataFrame({'col' : [1,2]}) + df = DataFrame({'col': [1, 2]}) expected = ',col\n0,1\n1,2\n' self.assertEqual(df.to_csv(), expected) def test_to_csv_decimal(self): # GH 781 - df = DataFrame({'col1' : [1], 'col2' : ['a'], 'col3' : [10.1] }) + df = DataFrame({'col1': [1], 'col2': ['a'], 'col3': [10.1]}) expected_default = ',col1,col2,col3\n0,1,a,10.1\n' self.assertEqual(df.to_csv(), expected_default) expected_european_excel = ';col1;col2;col3\n0;1;a;10,1\n' - 
self.assertEqual(df.to_csv(decimal=',',sep=';'), expected_european_excel) + self.assertEqual( + df.to_csv(decimal=',', sep=';'), expected_european_excel) expected_float_format_default = ',col1,col2,col3\n0,1,a,10.10\n' - self.assertEqual(df.to_csv(float_format = '%.2f'), expected_float_format_default) + self.assertEqual( + df.to_csv(float_format='%.2f'), expected_float_format_default) expected_float_format = ';col1;col2;col3\n0;1;a;10,10\n' - self.assertEqual(df.to_csv(decimal=',',sep=';', float_format = '%.2f'), expected_float_format) + self.assertEqual( + df.to_csv(decimal=',', sep=';', + float_format='%.2f'), expected_float_format) # GH 11553: testing if decimal is taken into account for '0.0' df = pd.DataFrame({'a': [0, 1.1], 'b': [2.2, 3.3], 'c': 1}) expected = 'a,b,c\n0^0,2^2,1\n1^1,3^3,1\n' - self.assertEqual( - df.to_csv(index=False, decimal='^'), expected) + self.assertEqual(df.to_csv(index=False, decimal='^'), expected) # same but for an index - self.assertEqual( - df.set_index('a').to_csv(decimal='^'), expected) + self.assertEqual(df.set_index('a').to_csv(decimal='^'), expected) # same for a multi-index self.assertEqual( @@ -3000,8 +3065,10 @@ def test_to_csv_na_rep(self): def test_to_csv_date_format(self): # GH 10209 - df_sec = DataFrame({'A': pd.date_range('20130101',periods=5,freq='s')}) - df_day = DataFrame({'A': pd.date_range('20130101',periods=5,freq='d')}) + df_sec = DataFrame({'A': pd.date_range('20130101', periods=5, freq='s') + }) + df_day = DataFrame({'A': pd.date_range('20130101', periods=5, freq='d') + }) expected_default_sec = ',A\n0,2013-01-01 00:00:00\n1,2013-01-01 00:00:01\n2,2013-01-01 00:00:02' + \ '\n3,2013-01-01 00:00:03\n4,2013-01-01 00:00:04\n' @@ -3009,14 +3076,18 @@ def test_to_csv_date_format(self): expected_ymdhms_day = ',A\n0,2013-01-01 00:00:00\n1,2013-01-02 00:00:00\n2,2013-01-03 00:00:00' + \ '\n3,2013-01-04 00:00:00\n4,2013-01-05 00:00:00\n' - self.assertEqual(df_day.to_csv(date_format='%Y-%m-%d %H:%M:%S'), 
expected_ymdhms_day) + self.assertEqual( + df_day.to_csv( + date_format='%Y-%m-%d %H:%M:%S'), expected_ymdhms_day) expected_ymd_sec = ',A\n0,2013-01-01\n1,2013-01-01\n2,2013-01-01\n3,2013-01-01\n4,2013-01-01\n' - self.assertEqual(df_sec.to_csv(date_format='%Y-%m-%d'), expected_ymd_sec) + self.assertEqual( + df_sec.to_csv(date_format='%Y-%m-%d'), expected_ymd_sec) expected_default_day = ',A\n0,2013-01-01\n1,2013-01-02\n2,2013-01-03\n3,2013-01-04\n4,2013-01-05\n' self.assertEqual(df_day.to_csv(), expected_default_day) - self.assertEqual(df_day.to_csv(date_format='%Y-%m-%d'), expected_default_day) + self.assertEqual( + df_day.to_csv(date_format='%Y-%m-%d'), expected_default_day) # testing if date_format parameter is taken into account for # multi-indexed dataframes (GH 7791) @@ -3024,15 +3095,13 @@ def test_to_csv_date_format(self): df_sec['C'] = 1 expected_ymd_sec = 'A,B,C\n2013-01-01,0,1\n' df_sec_grouped = df_sec.groupby([pd.Grouper(key='A', freq='1h'), 'B']) - self.assertEqual( - df_sec_grouped.mean().to_csv(date_format='%Y-%m-%d'), - expected_ymd_sec - ) + self.assertEqual(df_sec_grouped.mean().to_csv(date_format='%Y-%m-%d'), + expected_ymd_sec) # deprecation GH11274 def test_to_csv_engine_kw_deprecation(self): with tm.assert_produces_warning(FutureWarning): - df = DataFrame({'col1' : [1], 'col2' : ['a'], 'col3' : [10.1] }) + df = DataFrame({'col1': [1], 'col2': ['a'], 'col3': [10.1]}) df.to_csv(engine='python') @@ -3078,7 +3147,9 @@ def test_to_string(self): cp.name = 'foo' result = cp.to_string(length=True, name=True, dtype=True) last_line = result.split('\n')[-1].strip() - self.assertEqual(last_line, "Freq: B, Name: foo, Length: %d, dtype: float64" % len(cp)) + self.assertEqual(last_line, + "Freq: B, Name: foo, Length: %d, dtype: float64" % + len(cp)) def test_freq_name_separation(self): s = Series(np.random.randn(10), @@ -3090,27 +3161,19 @@ def test_freq_name_separation(self): def test_to_string_mixed(self): s = Series(['foo', np.nan, -1.23, 4.56]) result = 
s.to_string() - expected = (u('0 foo\n') + - u('1 NaN\n') + - u('2 -1.23\n') + + expected = (u('0 foo\n') + u('1 NaN\n') + u('2 -1.23\n') + u('3 4.56')) self.assertEqual(result, expected) # but don't count NAs as floats s = Series(['foo', np.nan, 'bar', 'baz']) result = s.to_string() - expected = (u('0 foo\n') + - '1 NaN\n' + - '2 bar\n' + - '3 baz') + expected = (u('0 foo\n') + '1 NaN\n' + '2 bar\n' + '3 baz') self.assertEqual(result, expected) s = Series(['foo', 5, 'bar', 'baz']) result = s.to_string() - expected = (u('0 foo\n') + - '1 5\n' + - '2 bar\n' + - '3 baz') + expected = (u('0 foo\n') + '1 5\n' + '2 bar\n' + '3 baz') self.assertEqual(result, expected) def test_to_string_float_na_spacing(self): @@ -3118,21 +3181,15 @@ def test_to_string_float_na_spacing(self): s[::2] = np.nan result = s.to_string() - expected = (u('0 NaN\n') + - '1 1.5678\n' + - '2 NaN\n' + - '3 -3.0000\n' + - '4 NaN') + expected = (u('0 NaN\n') + '1 1.5678\n' + '2 NaN\n' + + '3 -3.0000\n' + '4 NaN') self.assertEqual(result, expected) def test_to_string_without_index(self): # GH 11729 Test index=False option s = Series([1, 2, 3, 4]) result = s.to_string(index=False) - expected = (u('1\n') + - '2\n' + - '3\n' + - '4') + expected = (u('1\n') + '2\n' + '3\n' + '4') self.assertEqual(result, expected) def test_unicode_name_in_footer(self): @@ -3155,7 +3212,8 @@ def test_east_asian_unicode_series(self): self.assertEqual(_rep(s), expected) # unicode values - s = Series([u'あ', u'いい', u'ううう', u'ええええ'], index=['a', 'bb', 'c', 'ddd']) + s = Series([u'あ', u'いい', u'ううう', u'ええええ'], + index=['a', 'bb', 'c', 'ddd']) expected = (u"a あ\nbb いい\nc ううう\n" u"ddd ええええ\ndtype: object") self.assertEqual(_rep(s), expected) @@ -3169,15 +3227,14 @@ def test_east_asian_unicode_series(self): # unicode footer s = Series([u'あ', u'いい', u'ううう', u'ええええ'], - index=[u'ああ', u'いいいい', u'う', u'えええ'], - name=u'おおおおおおお') + index=[u'ああ', u'いいいい', u'う', u'えええ'], name=u'おおおおおおお') expected = (u"ああ あ\nいいいい いい\nう ううう\n" u"えええ 
ええええ\nName: おおおおおおお, dtype: object") self.assertEqual(_rep(s), expected) # MultiIndex - idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), - (u'おおお', u'かかかか'), (u'き', u'くく')]) + idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), ( + u'おおお', u'かかかか'), (u'き', u'くく')]) s = Series([1, 22, 3333, 44444], index=idx) expected = (u"あ いい 1\nう え 22\nおおお かかかか 3333\n" u"き くく 44444\ndtype: int64") @@ -3193,13 +3250,13 @@ def test_east_asian_unicode_series(self): s = Series([1, 22, 3333, 44444], index=[1, 'AB', pd.Timestamp('2011-01-01'), u'あああ']) expected = (u"1 1\nAB 22\n" - u"2011-01-01 00:00:00 3333\nあああ 44444\ndtype: int64") + u"2011-01-01 00:00:00 3333\nあああ 44444\ndtype: int64" + ) self.assertEqual(_rep(s), expected) # truncate with option_context('display.max_rows', 3): - s = Series([u'あ', u'いい', u'ううう', u'ええええ'], - name=u'おおおおおおお') + s = Series([u'あ', u'いい', u'ううう', u'ええええ'], name=u'おおおおおおお') expected = (u"0 あ\n ... \n" u"3 ええええ\nName: おおおおおおお, dtype: object") @@ -3221,7 +3278,8 @@ def test_east_asian_unicode_series(self): self.assertEqual(_rep(s), expected) # unicode values - s = Series([u'あ', u'いい', u'ううう', u'ええええ'], index=['a', 'bb', 'c', 'ddd']) + s = Series([u'あ', u'いい', u'ううう', u'ええええ'], + index=['a', 'bb', 'c', 'ddd']) expected = (u"a あ\nbb いい\nc ううう\n" u"ddd ええええ\ndtype: object") self.assertEqual(_rep(s), expected) @@ -3235,15 +3293,14 @@ def test_east_asian_unicode_series(self): # unicode footer s = Series([u'あ', u'いい', u'ううう', u'ええええ'], - index=[u'ああ', u'いいいい', u'う', u'えええ'], - name=u'おおおおおおお') + index=[u'ああ', u'いいいい', u'う', u'えええ'], name=u'おおおおおおお') expected = (u"ああ あ\nいいいい いい\nう ううう\n" u"えええ ええええ\nName: おおおおおおお, dtype: object") self.assertEqual(_rep(s), expected) # MultiIndex - idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), - (u'おおお', u'かかかか'), (u'き', u'くく')]) + idx = pd.MultiIndex.from_tuples([(u'あ', u'いい'), (u'う', u'え'), ( + u'おおお', u'かかかか'), (u'き', u'くく')]) s = Series([1, 22, 3333, 44444], index=idx) expected = (u"あ いい 
1\nう え 22\nおおお かかかか 3333\n" u"き くく 44444\ndtype: int64") @@ -3259,13 +3316,13 @@ def test_east_asian_unicode_series(self): s = Series([1, 22, 3333, 44444], index=[1, 'AB', pd.Timestamp('2011-01-01'), u'あああ']) expected = (u"1 1\nAB 22\n" - u"2011-01-01 00:00:00 3333\nあああ 44444\ndtype: int64") + u"2011-01-01 00:00:00 3333\nあああ 44444\ndtype: int64" + ) self.assertEqual(_rep(s), expected) # truncate with option_context('display.max_rows', 3): - s = Series([u'あ', u'いい', u'ううう', u'ええええ'], - name=u'おおおおおおお') + s = Series([u'あ', u'いい', u'ううう', u'ええええ'], name=u'おおおおおおお') expected = (u"0 あ\n ... \n" u"3 ええええ\nName: おおおおおおお, dtype: object") self.assertEqual(_rep(s), expected) @@ -3295,13 +3352,13 @@ def test_float_trim_zeros(self): def test_datetimeindex(self): - index = date_range('20130102',periods=6) - s = Series(1,index=index) + index = date_range('20130102', periods=6) + s = Series(1, index=index) result = s.to_string() self.assertTrue('2013-01-02' in result) # nat in index - s2 = Series(2, index=[ Timestamp('20130111'), NaT ]) + s2 = Series(2, index=[Timestamp('20130111'), NaT]) s = s2.append(s) result = s.to_string() self.assertTrue('NaT' in result) @@ -3321,39 +3378,39 @@ def test_timedelta64(self): # GH2146 # adding NaTs - y = s-s.shift(1) + y = s - s.shift(1) result = y.to_string() self.assertTrue('1 days' in result) self.assertTrue('00:00:00' not in result) self.assertTrue('NaT' in result) # with frac seconds - o = Series([datetime(2012,1,1,microsecond=150)]*3) - y = s-o + o = Series([datetime(2012, 1, 1, microsecond=150)] * 3) + y = s - o result = y.to_string() self.assertTrue('-1 days +23:59:59.999850' in result) # rounding? 
- o = Series([datetime(2012,1,1,1)]*3) - y = s-o + o = Series([datetime(2012, 1, 1, 1)] * 3) + y = s - o result = y.to_string() self.assertTrue('-1 days +23:00:00' in result) self.assertTrue('1 days 23:00:00' in result) - o = Series([datetime(2012,1,1,1,1)]*3) - y = s-o + o = Series([datetime(2012, 1, 1, 1, 1)] * 3) + y = s - o result = y.to_string() self.assertTrue('-1 days +22:59:00' in result) self.assertTrue('1 days 22:59:00' in result) - o = Series([datetime(2012,1,1,1,1,microsecond=150)]*3) - y = s-o + o = Series([datetime(2012, 1, 1, 1, 1, microsecond=150)] * 3) + y = s - o result = y.to_string() self.assertTrue('-1 days +22:58:59.999850' in result) self.assertTrue('0 days 22:58:59.999850' in result) # neg time - td = timedelta(minutes=5,seconds=3) + td = timedelta(minutes=5, seconds=3) s2 = Series(date_range('2012-1-1', periods=3, freq='D')) + td y = s - s2 result = y.to_string() @@ -3366,13 +3423,12 @@ def test_timedelta64(self): self.assertTrue('2012-01-01 23:59:59.999450' in result) # no boxing of the actual elements - td = Series(pd.timedelta_range('1 days',periods=3)) + td = Series(pd.timedelta_range('1 days', periods=3)) result = td.to_string() - self.assertEqual(result,u("0 1 days\n1 2 days\n2 3 days")) + self.assertEqual(result, u("0 1 days\n1 2 days\n2 3 days")) def test_mixed_datetime64(self): - df = DataFrame({'A': [1, 2], - 'B': ['2012-01-01', '2012-01-02']}) + df = DataFrame({'A': [1, 2], 'B': ['2012-01-01', '2012-01-02']}) df['B'] = pd.to_datetime(df.B) result = repr(df.ix[0]) @@ -3391,33 +3447,33 @@ def test_max_multi_index_display(self): s = Series(randn(8), index=index) with option_context("display.max_rows", 10): - self.assertEqual(len(str(s).split('\n')),10) + self.assertEqual(len(str(s).split('\n')), 10) with option_context("display.max_rows", 3): - self.assertEqual(len(str(s).split('\n')),5) + self.assertEqual(len(str(s).split('\n')), 5) with option_context("display.max_rows", 2): - self.assertEqual(len(str(s).split('\n')),5) + 
self.assertEqual(len(str(s).split('\n')), 5)
         with option_context("display.max_rows", 1):
-            self.assertEqual(len(str(s).split('\n')),4)
+            self.assertEqual(len(str(s).split('\n')), 4)
         with option_context("display.max_rows", 0):
-            self.assertEqual(len(str(s).split('\n')),10)
+            self.assertEqual(len(str(s).split('\n')), 10)

         # index
         s = Series(randn(8), None)

         with option_context("display.max_rows", 10):
-            self.assertEqual(len(str(s).split('\n')),9)
+            self.assertEqual(len(str(s).split('\n')), 9)
         with option_context("display.max_rows", 3):
-            self.assertEqual(len(str(s).split('\n')),4)
+            self.assertEqual(len(str(s).split('\n')), 4)
         with option_context("display.max_rows", 2):
-            self.assertEqual(len(str(s).split('\n')),4)
+            self.assertEqual(len(str(s).split('\n')), 4)
         with option_context("display.max_rows", 1):
-            self.assertEqual(len(str(s).split('\n')),3)
+            self.assertEqual(len(str(s).split('\n')), 3)
         with option_context("display.max_rows", 0):
-            self.assertEqual(len(str(s).split('\n')),9)
+            self.assertEqual(len(str(s).split('\n')), 9)

     # Make sure #8532 is fixed
     def test_consistent_format(self):
-        s = pd.Series([1,1,1,1,1,1,1,1,1,1,0.9999,1,1]*10)
+        s = pd.Series([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0.9999, 1, 1] * 10)
         with option_context("display.max_rows", 10):
             res = repr(s)
         exp = ('0 1.0000\n1 1.0000\n2 1.0000\n3 '
@@ -3428,8 +3484,8 @@ def test_consistent_format(self):

     @staticmethod
     def gen_test_series():
-        s1 = pd.Series(['a']*100)
-        s2 = pd.Series(['ab']*100)
+        s1 = pd.Series(['a'] * 100)
+        s2 = pd.Series(['ab'] * 100)
         s3 = pd.Series(['a', 'ab', 'abc', 'abcd', 'abcde', 'abcdef'])
         s4 = s3[::-1]
         test_sers = {'onel': s1, 'twol': s2, 'asc': s3, 'desc': s4}
@@ -3439,7 +3495,7 @@ def chck_ncols(self, s):
         with option_context("display.max_rows", 10):
             res = repr(s)
         lines = res.split('\n')
-        lines = [line for line in repr(s).split('\n') \
+        lines = [line for line in repr(s).split('\n')
                  if not re.match('[^\.]*\.+', line)][:-1]
         ncolsizes = len(set(len(line.strip()) for line in lines))
self.assertEqual(ncolsizes, 1) @@ -3469,7 +3525,7 @@ def test_ncols(self): self.chck_ncols(s) def test_max_rows_eq_one(self): - s = Series(range(10),dtype='int64') + s = Series(range(10), dtype='int64') with option_context("display.max_rows", 1): strrepr = repr(s).split('\n') exp1 = ['0', '0'] @@ -3494,7 +3550,7 @@ def getndots(s): self.assertEqual(getndots(strrepr), 3) def test_to_string_name(self): - s = Series(range(100),dtype='int64') + s = Series(range(100), dtype='int64') s.name = 'myser' res = s.to_string(max_rows=2, name=True) exp = '0 0\n ..\n99 99\nName: myser' @@ -3504,7 +3560,7 @@ def test_to_string_name(self): self.assertEqual(res, exp) def test_to_string_dtype(self): - s = Series(range(100),dtype='int64') + s = Series(range(100), dtype='int64') res = s.to_string(max_rows=2, dtype=True) exp = '0 0\n ..\n99 99\ndtype: int64' self.assertEqual(res, exp) @@ -3513,7 +3569,7 @@ def test_to_string_dtype(self): self.assertEqual(res, exp) def test_to_string_length(self): - s = Series(range(100),dtype='int64') + s = Series(range(100), dtype='int64') res = s.to_string(max_rows=2, length=True) exp = '0 0\n ..\n99 99\nLength: 100' self.assertEqual(res, exp) @@ -3532,7 +3588,7 @@ def test_to_string_float_format(self): self.assertEqual(res, exp) def test_to_string_header(self): - s = pd.Series(range(10),dtype='int64') + s = pd.Series(range(10), dtype='int64') s.index.name = 'foo' res = s.to_string(header=True, max_rows=2) exp = 'foo\n0 0\n ..\n9 9' @@ -3579,8 +3635,8 @@ def test_eng_float_formatter(self): def compare(self, formatter, input, output): formatted_input = formatter(input) - msg = ("formatting of %s results in '%s', expected '%s'" - % (str(input), formatted_input, output)) + msg = ("formatting of %s results in '%s', expected '%s'" % + (str(input), formatted_input, output)) self.assertEqual(formatted_input, output, msg) def compare_all(self, formatter, in_out): @@ -3601,57 +3657,32 @@ def compare_all(self, formatter, in_out): def 
test_exponents_with_eng_prefix(self): formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True) f = np.sqrt(2) - in_out = [(f * 10 ** -24, " 1.414y"), - (f * 10 ** -23, " 14.142y"), - (f * 10 ** -22, " 141.421y"), - (f * 10 ** -21, " 1.414z"), - (f * 10 ** -20, " 14.142z"), - (f * 10 ** -19, " 141.421z"), - (f * 10 ** -18, " 1.414a"), - (f * 10 ** -17, " 14.142a"), - (f * 10 ** -16, " 141.421a"), - (f * 10 ** -15, " 1.414f"), - (f * 10 ** -14, " 14.142f"), - (f * 10 ** -13, " 141.421f"), - (f * 10 ** -12, " 1.414p"), - (f * 10 ** -11, " 14.142p"), - (f * 10 ** -10, " 141.421p"), - (f * 10 ** -9, " 1.414n"), - (f * 10 ** -8, " 14.142n"), - (f * 10 ** -7, " 141.421n"), - (f * 10 ** -6, " 1.414u"), - (f * 10 ** -5, " 14.142u"), - (f * 10 ** -4, " 141.421u"), - (f * 10 ** -3, " 1.414m"), - (f * 10 ** -2, " 14.142m"), - (f * 10 ** -1, " 141.421m"), - (f * 10 ** 0, " 1.414"), - (f * 10 ** 1, " 14.142"), - (f * 10 ** 2, " 141.421"), - (f * 10 ** 3, " 1.414k"), - (f * 10 ** 4, " 14.142k"), - (f * 10 ** 5, " 141.421k"), - (f * 10 ** 6, " 1.414M"), - (f * 10 ** 7, " 14.142M"), - (f * 10 ** 8, " 141.421M"), - (f * 10 ** 9, " 1.414G"), - (f * 10 ** 10, " 14.142G"), - (f * 10 ** 11, " 141.421G"), - (f * 10 ** 12, " 1.414T"), - (f * 10 ** 13, " 14.142T"), - (f * 10 ** 14, " 141.421T"), - (f * 10 ** 15, " 1.414P"), - (f * 10 ** 16, " 14.142P"), - (f * 10 ** 17, " 141.421P"), - (f * 10 ** 18, " 1.414E"), - (f * 10 ** 19, " 14.142E"), - (f * 10 ** 20, " 141.421E"), - (f * 10 ** 21, " 1.414Z"), - (f * 10 ** 22, " 14.142Z"), - (f * 10 ** 23, " 141.421Z"), - (f * 10 ** 24, " 1.414Y"), - (f * 10 ** 25, " 14.142Y"), - (f * 10 ** 26, " 141.421Y")] + in_out = [(f * 10 ** -24, " 1.414y"), (f * 10 ** -23, " 14.142y"), + (f * 10 ** -22, " 141.421y"), (f * 10 ** -21, " 1.414z"), + (f * 10 ** -20, " 14.142z"), (f * 10 ** -19, " 141.421z"), + (f * 10 ** -18, " 1.414a"), (f * 10 ** -17, " 14.142a"), + (f * 10 ** -16, " 141.421a"), (f * 10 ** -15, " 1.414f"), + (f * 10 ** -14, " 14.142f"), (f 
* 10 ** -13, " 141.421f"), + (f * 10 ** -12, " 1.414p"), (f * 10 ** -11, " 14.142p"), + (f * 10 ** -10, " 141.421p"), (f * 10 ** -9, " 1.414n"), + (f * 10 ** -8, " 14.142n"), (f * 10 ** -7, " 141.421n"), + (f * 10 ** -6, " 1.414u"), (f * 10 ** -5, " 14.142u"), + (f * 10 ** -4, " 141.421u"), (f * 10 ** -3, " 1.414m"), + (f * 10 ** -2, " 14.142m"), (f * 10 ** -1, " 141.421m"), + (f * 10 ** 0, " 1.414"), (f * 10 ** 1, " 14.142"), + (f * 10 ** 2, " 141.421"), (f * 10 ** 3, " 1.414k"), + (f * 10 ** 4, " 14.142k"), (f * 10 ** 5, " 141.421k"), + (f * 10 ** 6, " 1.414M"), (f * 10 ** 7, " 14.142M"), + (f * 10 ** 8, " 141.421M"), (f * 10 ** 9, " 1.414G"), ( + f * 10 ** 10, " 14.142G"), (f * 10 ** 11, " 141.421G"), + (f * 10 ** 12, " 1.414T"), (f * 10 ** 13, " 14.142T"), ( + f * 10 ** 14, " 141.421T"), (f * 10 ** 15, " 1.414P"), ( + f * 10 ** 16, " 14.142P"), (f * 10 ** 17, " 141.421P"), ( + f * 10 ** 18, " 1.414E"), (f * 10 ** 19, " 14.142E"), + (f * 10 ** 20, " 141.421E"), (f * 10 ** 21, " 1.414Z"), ( + f * 10 ** 22, " 14.142Z"), (f * 10 ** 23, " 141.421Z"), ( + f * 10 ** 24, " 1.414Y"), (f * 10 ** 25, " 14.142Y"), ( + f * 10 ** 26, " 141.421Y")] self.compare_all(formatter, in_out) def test_exponents_without_eng_prefix(self): @@ -3672,70 +3703,42 @@ def test_exponents_without_eng_prefix(self): (f * 10 ** -12, " 3.1416E-12"), (f * 10 ** -11, " 31.4159E-12"), (f * 10 ** -10, " 314.1593E-12"), - (f * 10 ** -9, " 3.1416E-09"), - (f * 10 ** -8, " 31.4159E-09"), - (f * 10 ** -7, " 314.1593E-09"), - (f * 10 ** -6, " 3.1416E-06"), - (f * 10 ** -5, " 31.4159E-06"), - (f * 10 ** -4, " 314.1593E-06"), - (f * 10 ** -3, " 3.1416E-03"), - (f * 10 ** -2, " 31.4159E-03"), - (f * 10 ** -1, " 314.1593E-03"), - (f * 10 ** 0, " 3.1416E+00"), - (f * 10 ** 1, " 31.4159E+00"), - (f * 10 ** 2, " 314.1593E+00"), - (f * 10 ** 3, " 3.1416E+03"), - (f * 10 ** 4, " 31.4159E+03"), - (f * 10 ** 5, " 314.1593E+03"), - (f * 10 ** 6, " 3.1416E+06"), - (f * 10 ** 7, " 31.4159E+06"), - (f * 10 ** 8, " 
314.1593E+06"), - (f * 10 ** 9, " 3.1416E+09"), - (f * 10 ** 10, " 31.4159E+09"), - (f * 10 ** 11, " 314.1593E+09"), - (f * 10 ** 12, " 3.1416E+12"), - (f * 10 ** 13, " 31.4159E+12"), - (f * 10 ** 14, " 314.1593E+12"), - (f * 10 ** 15, " 3.1416E+15"), - (f * 10 ** 16, " 31.4159E+15"), - (f * 10 ** 17, " 314.1593E+15"), - (f * 10 ** 18, " 3.1416E+18"), - (f * 10 ** 19, " 31.4159E+18"), - (f * 10 ** 20, " 314.1593E+18"), - (f * 10 ** 21, " 3.1416E+21"), - (f * 10 ** 22, " 31.4159E+21"), - (f * 10 ** 23, " 314.1593E+21"), - (f * 10 ** 24, " 3.1416E+24"), - (f * 10 ** 25, " 31.4159E+24"), - (f * 10 ** 26, " 314.1593E+24")] + (f * 10 ** -9, " 3.1416E-09"), (f * 10 ** -8, " 31.4159E-09"), + (f * 10 ** -7, " 314.1593E-09"), (f * 10 ** -6, " 3.1416E-06"), + (f * 10 ** -5, " 31.4159E-06"), (f * 10 ** -4, + " 314.1593E-06"), + (f * 10 ** -3, " 3.1416E-03"), (f * 10 ** -2, " 31.4159E-03"), + (f * 10 ** -1, " 314.1593E-03"), (f * 10 ** 0, " 3.1416E+00"), ( + f * 10 ** 1, " 31.4159E+00"), (f * 10 ** 2, " 314.1593E+00"), + (f * 10 ** 3, " 3.1416E+03"), (f * 10 ** 4, " 31.4159E+03"), ( + f * 10 ** 5, " 314.1593E+03"), (f * 10 ** 6, " 3.1416E+06"), + (f * 10 ** 7, " 31.4159E+06"), (f * 10 ** 8, " 314.1593E+06"), ( + f * 10 ** 9, " 3.1416E+09"), (f * 10 ** 10, " 31.4159E+09"), + (f * 10 ** 11, " 314.1593E+09"), (f * 10 ** 12, " 3.1416E+12"), + (f * 10 ** 13, " 31.4159E+12"), (f * 10 ** 14, " 314.1593E+12"), + (f * 10 ** 15, " 3.1416E+15"), (f * 10 ** 16, " 31.4159E+15"), + (f * 10 ** 17, " 314.1593E+15"), (f * 10 ** 18, " 3.1416E+18"), + (f * 10 ** 19, " 31.4159E+18"), (f * 10 ** 20, " 314.1593E+18"), + (f * 10 ** 21, " 3.1416E+21"), (f * 10 ** 22, " 31.4159E+21"), + (f * 10 ** 23, " 314.1593E+21"), (f * 10 ** 24, " 3.1416E+24"), + (f * 10 ** 25, " 31.4159E+24"), (f * 10 ** 26, " 314.1593E+24")] self.compare_all(formatter, in_out) def test_rounding(self): formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True) - in_out = [(5.55555, ' 5.556'), - (55.5555, ' 55.556'), - 
(555.555, ' 555.555'), - (5555.55, ' 5.556k'), - (55555.5, ' 55.556k'), - (555555, ' 555.555k')] + in_out = [(5.55555, ' 5.556'), (55.5555, ' 55.556'), + (555.555, ' 555.555'), (5555.55, ' 5.556k'), + (55555.5, ' 55.556k'), (555555, ' 555.555k')] self.compare_all(formatter, in_out) formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True) - in_out = [(5.55555, ' 5.6'), - (55.5555, ' 55.6'), - (555.555, ' 555.6'), - (5555.55, ' 5.6k'), - (55555.5, ' 55.6k'), - (555555, ' 555.6k')] + in_out = [(5.55555, ' 5.6'), (55.5555, ' 55.6'), (555.555, ' 555.6'), + (5555.55, ' 5.6k'), (55555.5, ' 55.6k'), (555555, ' 555.6k')] self.compare_all(formatter, in_out) formatter = fmt.EngFormatter(accuracy=0, use_eng_prefix=True) - in_out = [(5.55555, ' 6'), - (55.5555, ' 56'), - (555.555, ' 556'), - (5555.55, ' 6k'), - (55555.5, ' 56k'), - (555555, ' 556k')] + in_out = [(5.55555, ' 6'), (55.5555, ' 56'), (555.555, ' 556'), + (5555.55, ' 6k'), (55555.5, ' 56k'), (555555, ' 556k')] self.compare_all(formatter, in_out) formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True) @@ -3766,14 +3769,22 @@ def test_output_significant_digits(self): # In case default display precision changes: with pd.option_context('display.precision', 6): # DataFrame example from issue #9764 - d=pd.DataFrame({'col1':[9.999e-8, 1e-7, 1.0001e-7, 2e-7, 4.999e-7, 5e-7, 5.0001e-7, 6e-7, 9.999e-7, 1e-6, 1.0001e-6, 2e-6, 4.999e-6, 5e-6, 5.0001e-6, 6e-6]}) - - expected_output={ - (0,6):' col1\n0 9.999000e-08\n1 1.000000e-07\n2 1.000100e-07\n3 2.000000e-07\n4 4.999000e-07\n5 5.000000e-07', - (1,6):' col1\n1 1.000000e-07\n2 1.000100e-07\n3 2.000000e-07\n4 4.999000e-07\n5 5.000000e-07', - (1,8):' col1\n1 1.000000e-07\n2 1.000100e-07\n3 2.000000e-07\n4 4.999000e-07\n5 5.000000e-07\n6 5.000100e-07\n7 6.000000e-07', - (8,16):' col1\n8 9.999000e-07\n9 1.000000e-06\n10 1.000100e-06\n11 2.000000e-06\n12 4.999000e-06\n13 5.000000e-06\n14 5.000100e-06\n15 6.000000e-06', - (9,16):' col1\n9 0.000001\n10 0.000001\n11 
0.000002\n12 0.000005\n13 0.000005\n14 0.000005\n15 0.000006' + d = pd.DataFrame( + {'col1': [9.999e-8, 1e-7, 1.0001e-7, 2e-7, 4.999e-7, 5e-7, + 5.0001e-7, 6e-7, 9.999e-7, 1e-6, 1.0001e-6, 2e-6, + 4.999e-6, 5e-6, 5.0001e-6, 6e-6]}) + + expected_output = { + (0, 6): + ' col1\n0 9.999000e-08\n1 1.000000e-07\n2 1.000100e-07\n3 2.000000e-07\n4 4.999000e-07\n5 5.000000e-07', + (1, 6): + ' col1\n1 1.000000e-07\n2 1.000100e-07\n3 2.000000e-07\n4 4.999000e-07\n5 5.000000e-07', + (1, 8): + ' col1\n1 1.000000e-07\n2 1.000100e-07\n3 2.000000e-07\n4 4.999000e-07\n5 5.000000e-07\n6 5.000100e-07\n7 6.000000e-07', + (8, 16): + ' col1\n8 9.999000e-07\n9 1.000000e-06\n10 1.000100e-06\n11 2.000000e-06\n12 4.999000e-06\n13 5.000000e-06\n14 5.000100e-06\n15 6.000000e-06', + (9, 16): + ' col1\n9 0.000001\n10 0.000001\n11 0.000002\n12 0.000005\n13 0.000005\n14 0.000005\n15 0.000006' } for (start, stop), v in expected_output.items(): @@ -3782,13 +3793,15 @@ def test_output_significant_digits(self): def test_too_long(self): # GH 10451 with pd.option_context('display.precision', 4): - # need both a number > 1e8 and something that normally formats to having length > display.precision + 6 + # need both a number > 1e8 and something that normally formats to + # having length > display.precision + 6 df = pd.DataFrame(dict(x=[12345.6789])) self.assertEqual(str(df), ' x\n0 12345.6789') df = pd.DataFrame(dict(x=[2e8])) self.assertEqual(str(df), ' x\n0 200000000') df = pd.DataFrame(dict(x=[12345.6789, 2e8])) - self.assertEqual(str(df), ' x\n0 1.2346e+04\n1 2.0000e+08') + self.assertEqual( + str(df), ' x\n0 1.2346e+04\n1 2.0000e+08') class TestRepr_timedelta64(tm.TestCase): @@ -3806,7 +3819,8 @@ def test_none(self): self.assertEqual(drepr(delta_1s), "0 days 00:00:01") self.assertEqual(drepr(delta_500ms), "0 days 00:00:00.500000") self.assertEqual(drepr(delta_1d + delta_1s), "1 days 00:00:01") - self.assertEqual(drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000") + self.assertEqual( + 
drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000") def test_even_day(self): delta_1d = pd.to_timedelta(1, unit='D') @@ -3821,7 +3835,8 @@ def test_even_day(self): self.assertEqual(drepr(delta_1s), "0 days 00:00:01") self.assertEqual(drepr(delta_500ms), "0 days 00:00:00.500000") self.assertEqual(drepr(delta_1d + delta_1s), "1 days 00:00:01") - self.assertEqual(drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000") + self.assertEqual( + drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000") def test_sub_day(self): delta_1d = pd.to_timedelta(1, unit='D') @@ -3836,7 +3851,8 @@ def test_sub_day(self): self.assertEqual(drepr(delta_1s), "00:00:01") self.assertEqual(drepr(delta_500ms), "00:00:00.500000") self.assertEqual(drepr(delta_1d + delta_1s), "1 days 00:00:01") - self.assertEqual(drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000") + self.assertEqual( + drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000") def test_long(self): delta_1d = pd.to_timedelta(1, unit='D') @@ -3851,7 +3867,8 @@ def test_long(self): self.assertEqual(drepr(delta_1s), "0 days 00:00:01") self.assertEqual(drepr(delta_500ms), "0 days 00:00:00.500000") self.assertEqual(drepr(delta_1d + delta_1s), "1 days 00:00:01") - self.assertEqual(drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000") + self.assertEqual( + drepr(delta_1d + delta_500ms), "1 days 00:00:00.500000") def test_all(self): delta_1d = pd.to_timedelta(1, unit='D') @@ -3863,53 +3880,55 @@ def test_all(self): self.assertEqual(drepr(delta_0d), "0 days 00:00:00.000000000") self.assertEqual(drepr(delta_1ns), "0 days 00:00:00.000000001") + class TestTimedelta64Formatter(tm.TestCase): def test_days(self): x = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='D') - result = fmt.Timedelta64Formatter(x,box=True).get_result() + result = fmt.Timedelta64Formatter(x, box=True).get_result() self.assertEqual(result[0].strip(), "'0 days'") self.assertEqual(result[1].strip(), "'1 days'") - result = 
fmt.Timedelta64Formatter(x[1:2],box=True).get_result() + result = fmt.Timedelta64Formatter(x[1:2], box=True).get_result() self.assertEqual(result[0].strip(), "'1 days'") - result = fmt.Timedelta64Formatter(x,box=False).get_result() + result = fmt.Timedelta64Formatter(x, box=False).get_result() self.assertEqual(result[0].strip(), "0 days") self.assertEqual(result[1].strip(), "1 days") - result = fmt.Timedelta64Formatter(x[1:2],box=False).get_result() + result = fmt.Timedelta64Formatter(x[1:2], box=False).get_result() self.assertEqual(result[0].strip(), "1 days") def test_days_neg(self): x = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='D') - result = fmt.Timedelta64Formatter(-x,box=True).get_result() + result = fmt.Timedelta64Formatter(-x, box=True).get_result() self.assertEqual(result[0].strip(), "'0 days'") self.assertEqual(result[1].strip(), "'-1 days'") def test_subdays(self): y = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='s') - result = fmt.Timedelta64Formatter(y,box=True).get_result() + result = fmt.Timedelta64Formatter(y, box=True).get_result() self.assertEqual(result[0].strip(), "'00:00:00'") self.assertEqual(result[1].strip(), "'00:00:01'") def test_subdays_neg(self): y = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='s') - result = fmt.Timedelta64Formatter(-y,box=True).get_result() + result = fmt.Timedelta64Formatter(-y, box=True).get_result() self.assertEqual(result[0].strip(), "'00:00:00'") self.assertEqual(result[1].strip(), "'-1 days +23:59:59'") def test_zero(self): x = pd.to_timedelta(list(range(1)) + [pd.NaT], unit='D') - result = fmt.Timedelta64Formatter(x,box=True).get_result() + result = fmt.Timedelta64Formatter(x, box=True).get_result() self.assertEqual(result[0].strip(), "'0 days'") x = pd.to_timedelta(list(range(1)), unit='D') - result = fmt.Timedelta64Formatter(x,box=True).get_result() + result = fmt.Timedelta64Formatter(x, box=True).get_result() self.assertEqual(result[0].strip(), "'0 days'") class 
TestDatetime64Formatter(tm.TestCase): + def test_mixed(self): x = Series([datetime(2013, 1, 1), datetime(2013, 1, 1, 12), pd.NaT]) result = fmt.Datetime64Formatter(x).get_result() @@ -3931,42 +3950,44 @@ def test_dates_display(self): # 10170 # make sure that we are consistently display date formatting - x = Series(date_range('20130101 09:00:00',periods=5,freq='D')) + x = Series(date_range('20130101 09:00:00', periods=5, freq='D')) x.iloc[1] = np.nan result = fmt.Datetime64Formatter(x).get_result() self.assertEqual(result[0].strip(), "2013-01-01 09:00:00") self.assertEqual(result[1].strip(), "NaT") self.assertEqual(result[4].strip(), "2013-01-05 09:00:00") - x = Series(date_range('20130101 09:00:00',periods=5,freq='s')) + x = Series(date_range('20130101 09:00:00', periods=5, freq='s')) x.iloc[1] = np.nan result = fmt.Datetime64Formatter(x).get_result() self.assertEqual(result[0].strip(), "2013-01-01 09:00:00") self.assertEqual(result[1].strip(), "NaT") self.assertEqual(result[4].strip(), "2013-01-01 09:00:04") - x = Series(date_range('20130101 09:00:00',periods=5,freq='ms')) + x = Series(date_range('20130101 09:00:00', periods=5, freq='ms')) x.iloc[1] = np.nan result = fmt.Datetime64Formatter(x).get_result() self.assertEqual(result[0].strip(), "2013-01-01 09:00:00.000") self.assertEqual(result[1].strip(), "NaT") self.assertEqual(result[4].strip(), "2013-01-01 09:00:00.004") - x = Series(date_range('20130101 09:00:00',periods=5,freq='us')) + x = Series(date_range('20130101 09:00:00', periods=5, freq='us')) x.iloc[1] = np.nan result = fmt.Datetime64Formatter(x).get_result() self.assertEqual(result[0].strip(), "2013-01-01 09:00:00.000000") self.assertEqual(result[1].strip(), "NaT") self.assertEqual(result[4].strip(), "2013-01-01 09:00:00.000004") - x = Series(date_range('20130101 09:00:00',periods=5,freq='N')) + x = Series(date_range('20130101 09:00:00', periods=5, freq='N')) x.iloc[1] = np.nan result = fmt.Datetime64Formatter(x).get_result() 
self.assertEqual(result[0].strip(), "2013-01-01 09:00:00.000000000") self.assertEqual(result[1].strip(), "NaT") self.assertEqual(result[4].strip(), "2013-01-01 09:00:00.000000004") + class TestNaTFormatting(tm.TestCase): + def test_repr(self): self.assertEqual(repr(pd.NaT), "NaT") @@ -3975,6 +3996,7 @@ def test_str(self): class TestDatetimeIndexFormat(tm.TestCase): + def test_datetime(self): formatted = pd.to_datetime([datetime(2003, 1, 1, 12), pd.NaT]).format() self.assertEqual(formatted[0], "2003-01-01 12:00:00") @@ -3986,31 +4008,37 @@ def test_date(self): self.assertEqual(formatted[1], "NaT") def test_date_tz(self): - formatted = pd.to_datetime([datetime(2013,1,1)], utc=True).format() + formatted = pd.to_datetime([datetime(2013, 1, 1)], utc=True).format() self.assertEqual(formatted[0], "2013-01-01 00:00:00+00:00") - formatted = pd.to_datetime([datetime(2013,1,1), pd.NaT], utc=True).format() + formatted = pd.to_datetime( + [datetime(2013, 1, 1), pd.NaT], utc=True).format() self.assertEqual(formatted[0], "2013-01-01 00:00:00+00:00") def test_date_explict_date_format(self): - formatted = pd.to_datetime([datetime(2003, 2, 1), pd.NaT]).format(date_format="%m-%d-%Y", na_rep="UT") + formatted = pd.to_datetime([datetime(2003, 2, 1), pd.NaT]).format( + date_format="%m-%d-%Y", na_rep="UT") self.assertEqual(formatted[0], "02-01-2003") self.assertEqual(formatted[1], "UT") class TestDatetimeIndexUnicode(tm.TestCase): + def test_dates(self): - text = str(pd.to_datetime([datetime(2013,1,1), datetime(2014,1,1)])) + text = str(pd.to_datetime([datetime(2013, 1, 1), datetime(2014, 1, 1) + ])) self.assertTrue("['2013-01-01'," in text) self.assertTrue(", '2014-01-01']" in text) def test_mixed(self): - text = str(pd.to_datetime([datetime(2013,1,1), datetime(2014,1,1,12), datetime(2014,1,1)])) + text = str(pd.to_datetime([datetime(2013, 1, 1), datetime( + 2014, 1, 1, 12), datetime(2014, 1, 1)])) self.assertTrue("'2013-01-01 00:00:00'," in text) self.assertTrue("'2014-01-01 
00:00:00']" in text) class TestStringRepTimestamp(tm.TestCase): + def test_no_tz(self): dt_date = datetime(2013, 1, 2) self.assertEqual(str(dt_date), str(Timestamp(dt_date))) @@ -4055,6 +4083,7 @@ def test_tz_dateutil(self): dt_datetime_us = datetime(2013, 1, 2, 12, 1, 3, 45, tzinfo=utc) self.assertEqual(str(dt_datetime_us), str(Timestamp(dt_datetime_us))) + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py index 37cb38454f74e..3754155cca0a3 100644 --- a/pandas/tests/test_generic.py +++ b/pandas/tests/test_generic.py @@ -1,36 +1,33 @@ # -*- coding: utf-8 -*- # pylint: disable-msg=E1101,W0612 -from datetime import datetime, timedelta import nose import numpy as np from numpy import nan import pandas as pd -from pandas import (Index, Series, DataFrame, Panel, - isnull, notnull, date_range, period_range) -from pandas.core.index import Index, MultiIndex +from pandas import (Index, Series, DataFrame, Panel, isnull, + date_range, period_range) +from pandas.core.index import MultiIndex import pandas.core.common as com -from pandas.compat import StringIO, lrange, range, zip, u, OrderedDict, long +from pandas.compat import range, zip from pandas import compat from pandas.util.testing import (assert_series_equal, assert_frame_equal, assert_panel_equal, - assert_almost_equal, - assert_equal, - ensure_clean) + assert_equal) import pandas.util.testing as tm def _skip_if_no_pchip(): try: - from scipy.interpolate import pchip_interpolate + from scipy.interpolate import pchip_interpolate # noqa except ImportError: raise nose.SkipTest('scipy.interpolate.pchip missing') -#------------------------------------------------------------------------------ +# ---------------------------------------------------------------------- # Generic types test cases @@ -54,7 +51,7 @@ def _construct(self, shape, value=None, dtype=None, **kwargs): if value is specified 
use that if its a scalar if value is an array, repeat it as needed """ - if isinstance(shape,int): + if isinstance(shape, int): shape = tuple([shape] * self._ndim) if value is not None: if np.isscalar(value): @@ -62,39 +59,39 @@ def _construct(self, shape, value=None, dtype=None, **kwargs): arr = None # remove the info axis - kwargs.pop(self._typ._info_axis_name,None) + kwargs.pop(self._typ._info_axis_name, None) else: - arr = np.empty(shape,dtype=dtype) + arr = np.empty(shape, dtype=dtype) arr.fill(value) else: fshape = np.prod(shape) arr = value.ravel() - new_shape = fshape/arr.shape[0] + new_shape = fshape / arr.shape[0] if fshape % arr.shape[0] != 0: raise Exception("invalid value passed in _construct") - arr = np.repeat(arr,new_shape).reshape(shape) + arr = np.repeat(arr, new_shape).reshape(shape) else: arr = np.random.randn(*shape) - return self._typ(arr,dtype=dtype,**kwargs) + return self._typ(arr, dtype=dtype, **kwargs) def _compare(self, result, expected): - self._comparator(result,expected) + self._comparator(result, expected) def test_rename(self): # single axis for axis in self._axes(): - kwargs = { axis : list('ABCD') } - obj = self._construct(4,**kwargs) + kwargs = {axis: list('ABCD')} + obj = self._construct(4, **kwargs) # no values passed - #self.assertRaises(Exception, o.rename(str.lower)) + # self.assertRaises(Exception, o.rename(str.lower)) # rename a single axis - result = obj.rename(**{ axis : str.lower }) + result = obj.rename(**{axis: str.lower}) expected = obj.copy() - setattr(expected,axis,list('abcd')) + setattr(expected, axis, list('abcd')) self._compare(result, expected) # multiple axes at once @@ -102,27 +99,28 @@ def test_rename(self): def test_get_numeric_data(self): n = 4 - kwargs = { } + kwargs = {} for i in range(self._ndim): kwargs[self._typ._AXIS_NAMES[i]] = list(range(n)) # get the numeric data - o = self._construct(n,**kwargs) + o = self._construct(n, **kwargs) result = o._get_numeric_data() self._compare(result, o) # 
non-inclusion result = o._get_bool_data() - expected = self._construct(n,value='empty',**kwargs) - self._compare(result,expected) + expected = self._construct(n, value='empty', **kwargs) + self._compare(result, expected) # get the bool data - arr = np.array([True,True,False,True]) - o = self._construct(n,value=arr,**kwargs) + arr = np.array([True, True, False, True]) + o = self._construct(n, value=arr, **kwargs) result = o._get_numeric_data() self._compare(result, o) - # _get_numeric_data is includes _get_bool_data, so can't test for non-inclusion + # _get_numeric_data is includes _get_bool_data, so can't test for + # non-inclusion def test_get_default(self): @@ -133,7 +131,7 @@ def test_get_default(self): for data, index in ((d0, d1), (d1, d0)): s = Series(data, index=index) - for i,d in zip(index, data): + for i, d in zip(index, data): self.assertEqual(s.get(i), d) self.assertEqual(s.get(i, d), d) self.assertEqual(s.get(i, "z"), d) @@ -146,45 +144,47 @@ def test_nonzero(self): # GH 4633 # look at the boolean/nonzero behavior for objects obj = self._construct(shape=4) - self.assertRaises(ValueError, lambda : bool(obj == 0)) - self.assertRaises(ValueError, lambda : bool(obj == 1)) - self.assertRaises(ValueError, lambda : bool(obj)) + self.assertRaises(ValueError, lambda: bool(obj == 0)) + self.assertRaises(ValueError, lambda: bool(obj == 1)) + self.assertRaises(ValueError, lambda: bool(obj)) - obj = self._construct(shape=4,value=1) - self.assertRaises(ValueError, lambda : bool(obj == 0)) - self.assertRaises(ValueError, lambda : bool(obj == 1)) - self.assertRaises(ValueError, lambda : bool(obj)) + obj = self._construct(shape=4, value=1) + self.assertRaises(ValueError, lambda: bool(obj == 0)) + self.assertRaises(ValueError, lambda: bool(obj == 1)) + self.assertRaises(ValueError, lambda: bool(obj)) - obj = self._construct(shape=4,value=np.nan) - self.assertRaises(ValueError, lambda : bool(obj == 0)) - self.assertRaises(ValueError, lambda : bool(obj == 1)) - 
self.assertRaises(ValueError, lambda : bool(obj)) + obj = self._construct(shape=4, value=np.nan) + self.assertRaises(ValueError, lambda: bool(obj == 0)) + self.assertRaises(ValueError, lambda: bool(obj == 1)) + self.assertRaises(ValueError, lambda: bool(obj)) # empty obj = self._construct(shape=0) - self.assertRaises(ValueError, lambda : bool(obj)) + self.assertRaises(ValueError, lambda: bool(obj)) # invalid behaviors - obj1 = self._construct(shape=4,value=1) - obj2 = self._construct(shape=4,value=1) + obj1 = self._construct(shape=4, value=1) + obj2 = self._construct(shape=4, value=1) def f(): if obj1: com.pprint_thing("this works and shouldn't") + self.assertRaises(ValueError, f) - self.assertRaises(ValueError, lambda : obj1 and obj2) - self.assertRaises(ValueError, lambda : obj1 or obj2) - self.assertRaises(ValueError, lambda : not obj1) + self.assertRaises(ValueError, lambda: obj1 and obj2) + self.assertRaises(ValueError, lambda: obj1 or obj2) + self.assertRaises(ValueError, lambda: not obj1) def test_numpy_1_7_compat_numeric_methods(self): # GH 4435 # numpy in 1.7 tries to pass addtional arguments to pandas functions o = self._construct(shape=4) - for op in ['min','max','max','var','std','prod','sum','cumsum','cumprod', - 'median','skew','kurt','compound','cummax','cummin','all','any']: - f = getattr(np,op,None) + for op in ['min', 'max', 'max', 'var', 'std', 'prod', 'sum', 'cumsum', + 'cumprod', 'median', 'skew', 'kurt', 'compound', 'cummax', + 'cummin', 'all', 'any']: + f = getattr(np, op, None) if f is not None: f(o) @@ -221,7 +221,9 @@ def test_constructor_compound_dtypes(self): def f(dtype): return self._construct(shape=3, dtype=dtype) - self.assertRaises(NotImplementedError, f, [("A","datetime64[h]"), ("B","str"), ("C","int32")]) + self.assertRaises(NotImplementedError, f, [("A", "datetime64[h]"), + ("B", "str"), + ("C", "int32")]) # these work (though results may be unexpected) f('int64') @@ -230,11 +232,11 @@ def f(dtype): def check_metadata(self, x, 
y=None): for m in x._metadata: - v = getattr(x,m,None) + v = getattr(x, m, None) if y is None: self.assertIsNone(v) else: - self.assertEqual(v, getattr(y,m,None)) + self.assertEqual(v, getattr(y, m, None)) def test_metadata_propagation(self): # check that the metadata matches up on the resulting ops @@ -250,28 +252,27 @@ def test_metadata_propagation(self): # this, though it actually does work) # can remove all of these try: except: blocks on the actual operations - # ---------- # preserving # ---------- # simple ops with scalars - for op in [ '__add__','__sub__','__truediv__','__mul__' ]: - result = getattr(o,op)(1) - self.check_metadata(o,result) + for op in ['__add__', '__sub__', '__truediv__', '__mul__']: + result = getattr(o, op)(1) + self.check_metadata(o, result) # ops with like - for op in [ '__add__','__sub__','__truediv__','__mul__' ]: + for op in ['__add__', '__sub__', '__truediv__', '__mul__']: try: - result = getattr(o,op)(o) - self.check_metadata(o,result) + result = getattr(o, op)(o) + self.check_metadata(o, result) except (ValueError, AttributeError): pass # simple boolean - for op in [ '__eq__','__le__', '__ge__' ]: - v1 = getattr(o,op)(o) - self.check_metadata(o,v1) + for op in ['__eq__', '__le__', '__ge__']: + v1 = getattr(o, op)(o) + self.check_metadata(o, v1) try: self.check_metadata(o, v1 & v1) @@ -286,7 +287,7 @@ def test_metadata_propagation(self): # combine_first try: result = o.combine_first(o2) - self.check_metadata(o,result) + self.check_metadata(o, result) except (AttributeError): pass @@ -302,12 +303,12 @@ def test_metadata_propagation(self): pass # simple boolean - for op in [ '__eq__','__le__', '__ge__' ]: + for op in ['__eq__', '__le__', '__ge__']: # this is a name matching op - v1 = getattr(o,op)(o) + v1 = getattr(o, op)(o) - v2 = getattr(o,op)(o2) + v2 = getattr(o, op)(o2) self.check_metadata(v2) try: @@ -326,17 +327,18 @@ def test_head_tail(self): o = self._construct(shape=10) # check all index types - for index in [ 
tm.makeFloatIndex, tm.makeIntIndex, - tm.makeStringIndex, tm.makeUnicodeIndex, - tm.makeDateIndex, tm.makePeriodIndex ]: + for index in [tm.makeFloatIndex, tm.makeIntIndex, tm.makeStringIndex, + tm.makeUnicodeIndex, tm.makeDateIndex, + tm.makePeriodIndex]: axis = o._get_axis_name(0) - setattr(o,axis,index(len(getattr(o,axis)))) + setattr(o, axis, index(len(getattr(o, axis)))) # Panel + dims try: o.head() except (NotImplementedError): - raise nose.SkipTest('not implemented on {0}'.format(o.__class__.__name__)) + raise nose.SkipTest('not implemented on {0}'.format( + o.__class__.__name__)) self._compare(o.head(), o.iloc[:5]) self._compare(o.tail(), o.iloc[-5:]) @@ -346,8 +348,8 @@ def test_head_tail(self): self._compare(o.tail(0), o.iloc[0:0]) # bounded - self._compare(o.head(len(o)+1), o) - self._compare(o.tail(len(o)+1), o) + self._compare(o.head(len(o) + 1), o) + self._compare(o.tail(len(o) + 1), o) # neg index self._compare(o.head(-3), o.head(7)) @@ -362,18 +364,24 @@ def test_sample(self): # Check behavior of random_state argument ### - # Check for stability when receives seed or random state -- run 10 times. + # Check for stability when receives seed or random state -- run 10 + # times. 
for test in range(10): - seed = np.random.randint(0,100) - self._compare(o.sample(n=4, random_state=seed), o.sample(n=4, random_state=seed)) - self._compare(o.sample(frac=0.7,random_state=seed), o.sample(frac=0.7, random_state=seed)) - - self._compare(o.sample(n=4, random_state=np.random.RandomState(test)), - o.sample(n=4, random_state=np.random.RandomState(test))) - - self._compare(o.sample(frac=0.7,random_state=np.random.RandomState(test)), - o.sample(frac=0.7, random_state=np.random.RandomState(test))) - + seed = np.random.randint(0, 100) + self._compare( + o.sample(n=4, random_state=seed), o.sample(n=4, + random_state=seed)) + self._compare( + o.sample(frac=0.7, random_state=seed), o.sample( + frac=0.7, random_state=seed)) + + self._compare( + o.sample(n=4, random_state=np.random.RandomState(test)), + o.sample(n=4, random_state=np.random.RandomState(test))) + + self._compare( + o.sample(frac=0.7, random_state=np.random.RandomState(test)), + o.sample(frac=0.7, random_state=np.random.RandomState(test))) # Check for error when random_state argument invalid. 
with tm.assertRaises(ValueError): @@ -395,7 +403,7 @@ def test_sample(self): # Make sure float values of `n` give error with tm.assertRaises(ValueError): - o.sample(n= 3.2) + o.sample(n=3.2) # Check lengths are right self.assertTrue(len(o.sample(n=4) == 4)) @@ -408,51 +416,50 @@ def test_sample(self): # Weight length must be right with tm.assertRaises(ValueError): - o.sample(n=3, weights=[0,1]) + o.sample(n=3, weights=[0, 1]) with tm.assertRaises(ValueError): - bad_weights = [0.5]*11 + bad_weights = [0.5] * 11 o.sample(n=3, weights=bad_weights) with tm.assertRaises(ValueError): - bad_weight_series = Series([0,0,0.2]) + bad_weight_series = Series([0, 0, 0.2]) o.sample(n=4, weights=bad_weight_series) # Check won't accept negative weights with tm.assertRaises(ValueError): - bad_weights = [-0.1]*10 + bad_weights = [-0.1] * 10 o.sample(n=3, weights=bad_weights) # Check inf and -inf throw errors: with tm.assertRaises(ValueError): - weights_with_inf = [0.1]*10 + weights_with_inf = [0.1] * 10 weights_with_inf[0] = np.inf o.sample(n=3, weights=weights_with_inf) with tm.assertRaises(ValueError): - weights_with_ninf = [0.1]*10 - weights_with_ninf[0] = -np.inf + weights_with_ninf = [0.1] * 10 + weights_with_ninf[0] = -np.inf o.sample(n=3, weights=weights_with_ninf) # All zeros raises errors - zero_weights = [0]*10 + zero_weights = [0] * 10 with tm.assertRaises(ValueError): o.sample(n=3, weights=zero_weights) # All missing weights - nan_weights = [np.nan]*10 + nan_weights = [np.nan] * 10 with tm.assertRaises(ValueError): o.sample(n=3, weights=nan_weights) - # A few dataframe test with degenerate weights. 
- easy_weight_list = [0]*10 + easy_weight_list = [0] * 10 easy_weight_list[5] = 1 - df = pd.DataFrame({'col1':range(10,20), - 'col2':range(20,30), - 'colString': ['a']*10, - 'easyweights':easy_weight_list}) + df = pd.DataFrame({'col1': range(10, 20), + 'col2': range(20, 30), + 'colString': ['a'] * 10, + 'easyweights': easy_weight_list}) sample1 = df.sample(n=1, weights='easyweights') assert_frame_equal(sample1, df.iloc[5:6]) @@ -462,47 +469,52 @@ def test_sample(self): with tm.assertRaises(ValueError): s.sample(n=3, weights='weight_column') - panel = pd.Panel(items = [0,1,2], major_axis = [2,3,4], minor_axis = [3,4,5]) + panel = pd.Panel(items=[0, 1, 2], major_axis=[2, 3, 4], + minor_axis=[3, 4, 5]) with tm.assertRaises(ValueError): panel.sample(n=1, weights='weight_column') with tm.assertRaises(ValueError): - df.sample(n=1, weights='weight_column', axis = 1) + df.sample(n=1, weights='weight_column', axis=1) # Check weighting key error with tm.assertRaises(KeyError): df.sample(n=3, weights='not_a_real_column_name') - # Check np.nan are replaced by zeros. - weights_with_nan = [np.nan]*10 + # Check np.nan are replaced by zeros. + weights_with_nan = [np.nan] * 10 weights_with_nan[5] = 0.5 - self._compare(o.sample(n=1, axis=0, weights=weights_with_nan), o.iloc[5:6]) + self._compare( + o.sample(n=1, axis=0, weights=weights_with_nan), o.iloc[5:6]) # Check None are also replaced by zeros. - weights_with_None = [None]*10 + weights_with_None = [None] * 10 weights_with_None[5] = 0.5 - self._compare(o.sample(n=1, axis=0, weights=weights_with_None), o.iloc[5:6]) + self._compare( + o.sample(n=1, axis=0, weights=weights_with_None), o.iloc[5:6]) # Check that re-normalizes weights that don't sum to one. 
-        weights_less_than_1 = [0]*10
+        weights_less_than_1 = [0] * 10
         weights_less_than_1[0] = 0.5
-        tm.assert_frame_equal(df.sample(n=1, weights=weights_less_than_1), df.iloc[:1])
-
+        tm.assert_frame_equal(
+            df.sample(n=1, weights=weights_less_than_1), df.iloc[:1])

         ###
         # Test axis argument
         ###

         # Test axis argument
-        df = pd.DataFrame({'col1':range(10), 'col2':['a']*10})
-        second_column_weight = [0,1]
-        assert_frame_equal(df.sample(n=1, axis=1, weights=second_column_weight), df[['col2']])
+        df = pd.DataFrame({'col1': range(10), 'col2': ['a'] * 10})
+        second_column_weight = [0, 1]
+        assert_frame_equal(
+            df.sample(n=1, axis=1, weights=second_column_weight), df[['col2']])

         # Different axis arg types
-        assert_frame_equal(df.sample(n=1, axis='columns', weights=second_column_weight),
+        assert_frame_equal(df.sample(n=1, axis='columns',
+                                     weights=second_column_weight),
                            df[['col2']])

-        weight = [0]*10
+        weight = [0] * 10
         weight[5] = 0.5
         assert_frame_equal(df.sample(n=1, axis='rows', weights=weight),
                            df.iloc[5:6])

@@ -522,56 +534,62 @@ def test_sample(self):
         # Test weight length compared to correct axis
         with tm.assertRaises(ValueError):
-            df.sample(n=1, axis=1, weights=[0.5]*10)
+            df.sample(n=1, axis=1, weights=[0.5] * 10)

         # Check weights with axis = 1
-        easy_weight_list = [0]*3
+        easy_weight_list = [0] * 3
         easy_weight_list[2] = 1
-        df = pd.DataFrame({'col1':range(10,20),
-                           'col2':range(20,30),
-                           'colString': ['a']*10})
+        df = pd.DataFrame({'col1': range(10, 20),
+                           'col2': range(20, 30),
+                           'colString': ['a'] * 10})
         sample1 = df.sample(n=1, axis=1, weights=easy_weight_list)
         assert_frame_equal(sample1, df[['colString']])

         # Test default axes
-        p = pd.Panel(items = ['a','b','c'], major_axis=[2,4,6], minor_axis=[1,3,5])
-        assert_panel_equal(p.sample(n=3, random_state=42), p.sample(n=3, axis=1, random_state=42))
-        assert_frame_equal(df.sample(n=3, random_state=42), df.sample(n=3, axis=0, random_state=42))
+        p = pd.Panel(items=['a', 'b', 'c'], major_axis=[2, 4, 6],
+                     minor_axis=[1, 3, 5])
+        assert_panel_equal(
+            p.sample(n=3, random_state=42), p.sample(n=3, axis=1,
+                                                     random_state=42))
+        assert_frame_equal(
+            df.sample(n=3, random_state=42), df.sample(n=3, axis=0,
+                                                       random_state=42))

         # Test that function aligns weights with frame
-        df = DataFrame({'col1':[5,6,7], 'col2':['a','b','c'], }, index = [9,5,3])
-        s = Series([1,0,0], index=[3,5,9])
+        df = DataFrame(
+            {'col1': [5, 6, 7],
+             'col2': ['a', 'b', 'c'], }, index=[9, 5, 3])
+        s = Series([1, 0, 0], index=[3, 5, 9])
         assert_frame_equal(df.loc[[3]], df.sample(1, weights=s))

         # Weights have index values to be dropped because not in
         # sampled DataFrame
-        s2 = Series([0.001,0,10000], index=[3,5,10])
+        s2 = Series([0.001, 0, 10000], index=[3, 5, 10])
         assert_frame_equal(df.loc[[3]], df.sample(1, weights=s2))

         # Weights have empty values to be filed with zeros
-        s3 = Series([0.01,0], index=[3,5])
+        s3 = Series([0.01, 0], index=[3, 5])
         assert_frame_equal(df.loc[[3]], df.sample(1, weights=s3))

         # No overlap in weight and sampled DataFrame indices
-        s4 = Series([1,0], index=[1,2])
+        s4 = Series([1, 0], index=[1, 2])
         with tm.assertRaises(ValueError):
             df.sample(1, weights=s4)

-
     def test_size_compat(self):
         # GH8846
         # size property should be defined
         o = self._construct(shape=10)
         self.assertTrue(o.size == np.prod(o.shape))
-        self.assertTrue(o.size == 10**len(o.axes))
+        self.assertTrue(o.size == 10 ** len(o.axes))

     def test_split_compat(self):
         # xref GH8846
         o = self._construct(shape=10)
-        self.assertTrue(len(np.array_split(o,5)) == 5)
-        self.assertTrue(len(np.array_split(o,2)) == 2)
+        self.assertTrue(len(np.array_split(o, 5)) == 5)
+        self.assertTrue(len(np.array_split(o, 2)) == 2)

     def test_unexpected_keyword(self):  # GH8597
         from pandas.util.testing import assertRaisesRegexp

@@ -593,9 +611,10 @@ def test_unexpected_keyword(self):  # GH8597
         with assertRaisesRegexp(TypeError, 'unexpected keyword'):
             ts.fillna(0, in_place=True)

+
 class TestSeries(tm.TestCase, Generic):
     _typ = Series
-    _comparator = lambda self, x, y: assert_series_equal(x,y)
+    _comparator = lambda self, x, y: assert_series_equal(x, y)

     def setUp(self):
         self.ts = tm.makeTimeSeries()  # Was at top level in test_series

@@ -605,9 +624,10 @@ def setUp(self):
         self.series.name = 'series'

     def test_rename_mi(self):
-        s = Series([11,21,31],
-                   index=MultiIndex.from_tuples([("A",x) for x in ["a","B","c"]]))
-        result = s.rename(str.lower)
+        s = Series([11, 21, 31],
+                   index=MultiIndex.from_tuples(
+                       [("A", x) for x in ["a", "B", "c"]]))
+        s.rename(str.lower)

     def test_get_numeric_data_preserve_dtype(self):

@@ -629,9 +649,9 @@ def test_get_numeric_data_preserve_dtype(self):
         result = o._get_bool_data()
         self._compare(result, o)

-        o = Series(date_range('20130101',periods=3))
+        o = Series(date_range('20130101', periods=3))
         result = o._get_numeric_data()
-        expected = Series([],dtype='M8[ns]', index=pd.Index([], dtype=object))
+        expected = Series([], dtype='M8[ns]', index=pd.Index([], dtype=object))
         self._compare(result, expected)

     def test_nonzero_single_element(self):

@@ -644,57 +664,59 @@ def test_nonzero_single_element(self):
         self.assertFalse(s.bool())

         # single item nan to raise
-        for s in [ Series([np.nan]), Series([pd.NaT]), Series([True]), Series([False]) ]:
-            self.assertRaises(ValueError, lambda : bool(s))
+        for s in [Series([np.nan]), Series([pd.NaT]), Series([True]),
+                  Series([False])]:
+            self.assertRaises(ValueError, lambda: bool(s))

-        for s in [ Series([np.nan]), Series([pd.NaT])]:
-            self.assertRaises(ValueError, lambda : s.bool())
+        for s in [Series([np.nan]), Series([pd.NaT])]:
+            self.assertRaises(ValueError, lambda: s.bool())

         # multiple bool are still an error
-        for s in [Series([True,True]), Series([False, False])]:
-            self.assertRaises(ValueError, lambda : bool(s))
-            self.assertRaises(ValueError, lambda : s.bool())
+        for s in [Series([True, True]), Series([False, False])]:
+            self.assertRaises(ValueError, lambda: bool(s))
+            self.assertRaises(ValueError, lambda: s.bool())

         # single non-bool are an error
-        for s in [Series([1]), Series([0]),
-                  Series(['a']), Series([0.0])]:
-            self.assertRaises(ValueError, lambda : bool(s))
-            self.assertRaises(ValueError, lambda : s.bool())
+        for s in [Series([1]), Series([0]), Series(['a']), Series([0.0])]:
+            self.assertRaises(ValueError, lambda: bool(s))
+            self.assertRaises(ValueError, lambda: s.bool())

     def test_metadata_propagation_indiv(self):

         # check that the metadata matches up on the resulting ops
-        o = Series(range(3),range(3))
+        o = Series(range(3), range(3))
         o.name = 'foo'
-        o2 = Series(range(3),range(3))
+        o2 = Series(range(3), range(3))
         o2.name = 'bar'

         result = o.T
-        self.check_metadata(o,result)
+        self.check_metadata(o, result)

         # resample
         ts = Series(np.random.rand(1000),
-                    index=date_range('20130101',periods=1000,freq='s'),
+                    index=date_range('20130101', periods=1000, freq='s'),
                     name='foo')
         result = ts.resample('1T')
-        self.check_metadata(ts,result)
+        self.check_metadata(ts, result)

-        result = ts.resample('1T',how='min')
-        self.check_metadata(ts,result)
+        result = ts.resample('1T', how='min')
+        self.check_metadata(ts, result)

-        result = ts.resample('1T',how=lambda x: x.sum())
-        self.check_metadata(ts,result)
+        result = ts.resample('1T', how=lambda x: x.sum())
+        self.check_metadata(ts, result)

         _metadata = Series._metadata
         _finalize = Series.__finalize__
-        Series._metadata = ['name','filename']
+        Series._metadata = ['name', 'filename']
         o.filename = 'foo'
         o2.filename = 'bar'

         def finalize(self, other, method=None, **kwargs):
             for name in self._metadata:
                 if method == 'concat' and name == 'filename':
-                    value = '+'.join([ getattr(o,name) for o in other.objs if getattr(o,name,None) ])
+                    value = '+'.join([getattr(
+                        o, name) for o in other.objs if getattr(o, name, None)
+                    ])
                     object.__setattr__(self, name, value)
                 else:
                     object.__setattr__(self, name, getattr(other, name, None))

@@ -704,7 +726,7 @@ def finalize(self, other, method=None, **kwargs):
         Series.__finalize__ = finalize

         result = pd.concat([o, o2])
-        self.assertEqual(result.filename,'foo+bar')
+        self.assertEqual(result.filename, 'foo+bar')
         self.assertIsNone(result.name)

         # reset

@@ -742,7 +764,8 @@ def test_interp_regression(self):
         ser = Series(np.sort(np.random.uniform(size=100)))

         # interpolate at new_index
-        new_index = ser.index.union(Index([49.25, 49.5, 49.75, 50.25, 50.5, 50.75]))
+        new_index = ser.index.union(Index([49.25, 49.5, 49.75, 50.25, 50.5,
+                                           50.75]))
         interp_s = ser.reindex(new_index).interpolate(method='pchip')
         # does not blow up, GH5977
         interp_s[49:51]

@@ -772,8 +795,9 @@ def test_interpolate_index_values(self):
         expected = s.copy()
         bad = isnull(expected.values)
         good = ~bad
-        expected = Series(
-            np.interp(vals[bad], vals[good], s.values[good]), index=s.index[bad])
+        expected = Series(np.interp(vals[bad], vals[good],
+                                    s.values[good]),
+                          index=s.index[bad])

         assert_series_equal(result[bad], expected)

@@ -867,24 +891,22 @@ def test_interp_limit_forward(self):
         # Provide 'forward' (the default) explicitly here.
         expected = Series([1., 3., 5., 7., np.nan, 11.])

-        result = s.interpolate(
-            method='linear', limit=2, limit_direction='forward')
+        result = s.interpolate(method='linear', limit=2,
+                               limit_direction='forward')
         assert_series_equal(result, expected)

-        result = s.interpolate(
-            method='linear', limit=2, limit_direction='FORWARD')
+        result = s.interpolate(method='linear', limit=2,
+                               limit_direction='FORWARD')
         assert_series_equal(result, expected)

     def test_interp_limit_bad_direction(self):
         s = Series([1, 3, np.nan, np.nan, np.nan, 11])

-        self.assertRaises(ValueError, s.interpolate,
-                          method='linear', limit=2,
+        self.assertRaises(ValueError, s.interpolate, method='linear', limit=2,
                           limit_direction='abc')

         # raises an error even if no limit is specified.
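For context on the `limit`/`limit_direction` tests above, the behavior can be demonstrated standalone (illustrative only; same data as the tests):

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 3, np.nan, np.nan, np.nan, 11])

# `limit` caps how many consecutive NaNs get filled; `limit_direction`
# chooses which end of each NaN run the fill proceeds from.
forward = s.interpolate(method="linear", limit=2, limit_direction="forward")
backward = s.interpolate(method="linear", limit=2, limit_direction="backward")
```

On the linear ramp from 3 to 11 the interior values are 5, 7, 9; forward filling keeps the first two (5, 7) and leaves the third NaN, while backward filling keeps the last two (7, 9).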
-        self.assertRaises(ValueError, s.interpolate,
-                          method='linear',
+        self.assertRaises(ValueError, s.interpolate, method='linear',
                           limit_direction='abc')

     def test_interp_limit_direction(self):

@@ -892,26 +914,27 @@ def test_interp_limit_direction(self):
         s = Series([1, 3, np.nan, np.nan, np.nan, 11])

         expected = Series([1., 3., np.nan, 7., 9., 11.])
-        result = s.interpolate(
-            method='linear', limit=2, limit_direction='backward')
+        result = s.interpolate(method='linear', limit=2,
+                               limit_direction='backward')
         assert_series_equal(result, expected)

         expected = Series([1., 3., 5., np.nan, 9., 11.])
-        result = s.interpolate(
-            method='linear', limit=1, limit_direction='both')
+        result = s.interpolate(method='linear', limit=1,
+                               limit_direction='both')
         assert_series_equal(result, expected)

         # Check that this works on a longer series of nans.
-        s = Series([1, 3, np.nan, np.nan, np.nan, 7, 9, np.nan, np.nan, 12, np.nan])
+        s = Series([1, 3, np.nan, np.nan, np.nan, 7, 9, np.nan, np.nan, 12,
+                    np.nan])

         expected = Series([1., 3., 4., 5., 6., 7., 9., 10., 11., 12., 12.])
-        result = s.interpolate(
-            method='linear', limit=2, limit_direction='both')
+        result = s.interpolate(method='linear', limit=2,
+                               limit_direction='both')
         assert_series_equal(result, expected)

         expected = Series([1., 3., 4., np.nan, 6., 7., 9., 10., 11., 12., 12.])
-        result = s.interpolate(
-            method='linear', limit=1, limit_direction='both')
+        result = s.interpolate(method='linear', limit=1,
+                               limit_direction='both')
         assert_series_equal(result, expected)

     def test_interp_limit_to_ends(self):

@@ -919,13 +942,13 @@ def test_interp_limit_to_ends(self):
         s = Series([np.nan, np.nan, 5, 7, 9, np.nan])

         expected = Series([5., 5., 5., 7., 9., np.nan])
-        result = s.interpolate(
-            method='linear', limit=2, limit_direction='backward')
+        result = s.interpolate(method='linear', limit=2,
+                               limit_direction='backward')
         assert_series_equal(result, expected)

         expected = Series([5., 5., 5., 7., 9., 9.])
-        result = s.interpolate(
-            method='linear', limit=2, limit_direction='both')
+        result = s.interpolate(method='linear', limit=2,
+                               limit_direction='both')
         assert_series_equal(result, expected)

     def test_interp_limit_before_ends(self):

@@ -933,18 +956,18 @@ def test_interp_limit_before_ends(self):
         s = Series([np.nan, np.nan, 5, 7, np.nan, np.nan])

         expected = Series([np.nan, np.nan, 5., 7., 7., np.nan])
-        result = s.interpolate(
-            method='linear', limit=1, limit_direction='forward')
+        result = s.interpolate(method='linear', limit=1,
+                               limit_direction='forward')
         assert_series_equal(result, expected)

         expected = Series([np.nan, 5., 5., 7., np.nan, np.nan])
-        result = s.interpolate(
-            method='linear', limit=1, limit_direction='backward')
+        result = s.interpolate(method='linear', limit=1,
+                               limit_direction='backward')
         assert_series_equal(result, expected)

         expected = Series([np.nan, 5., 5., 7., 7., np.nan])
-        result = s.interpolate(
-            method='linear', limit=1, limit_direction='both')
+        result = s.interpolate(method='linear', limit=1,
+                               limit_direction='both')
         assert_series_equal(result, expected)

     def test_interp_all_good(self):

@@ -981,7 +1004,8 @@ def test_interp_datetime64(self):
         tm._skip_if_no_scipy()
         df = Series([1, np.nan, 3], index=date_range('1/1/2000', periods=3))
         result = df.interpolate(method='nearest')
-        expected = Series([1., 1., 3.], index=date_range('1/1/2000', periods=3))
+        expected = Series([1., 1., 3.],
+                          index=date_range('1/1/2000', periods=3))
         assert_series_equal(result, expected)

     def test_interp_limit_no_nans(self):

@@ -992,8 +1016,8 @@ def test_interp_limit_no_nans(self):
         assert_series_equal(result, expected)

     def test_describe(self):
-        _ = self.series.describe()
-        _ = self.ts.describe()
+        self.series.describe()
+        self.ts.describe()

     def test_describe_objects(self):
         s = Series(['a', 'b', 'b', np.nan, np.nan, np.nan, 'c', 'd', 'a', 'a'])

@@ -1035,12 +1059,13 @@ def test_describe_none(self):

 class TestDataFrame(tm.TestCase, Generic):
     _typ = DataFrame
-    _comparator = lambda self, x, y: assert_frame_equal(x,y)
+    _comparator = lambda self, x, y: assert_frame_equal(x, y)

     def test_rename_mi(self):
-        df = DataFrame([11,21,31],
-                       index=MultiIndex.from_tuples([("A",x) for x in ["a","B","c"]]))
-        result = df.rename(str.lower)
+        df = DataFrame([
+            11, 21, 31
+        ], index=MultiIndex.from_tuples([("A", x) for x in ["a", "B", "c"]]))
+        df.rename(str.lower)

     def test_nonzero_single_element(self):

@@ -1052,8 +1077,8 @@ def test_nonzero_single_element(self):
         self.assertFalse(df.bool())

         df = DataFrame([[False, False]])
-        self.assertRaises(ValueError, lambda : df.bool())
-        self.assertRaises(ValueError, lambda : bool(df))
+        self.assertRaises(ValueError, lambda: df.bool())
+        self.assertRaises(ValueError, lambda: bool(df))

     def test_get_numeric_data_preserve_dtype(self):

@@ -1064,28 +1089,36 @@ def test_get_numeric_data_preserve_dtype(self):
         self._compare(result, expected)

     def test_interp_basic(self):
-        df = DataFrame({'A': [1, 2, np.nan, 4], 'B': [1, 4, 9, np.nan],
-                        'C': [1, 2, 3, 5], 'D': list('abcd')})
-        expected = DataFrame({'A': [1., 2., 3., 4.], 'B': [1., 4., 9., 9.],
-                              'C': [1, 2, 3, 5], 'D': list('abcd')})
+        df = DataFrame({'A': [1, 2, np.nan, 4],
+                        'B': [1, 4, 9, np.nan],
+                        'C': [1, 2, 3, 5],
+                        'D': list('abcd')})
+        expected = DataFrame({'A': [1., 2., 3., 4.],
+                              'B': [1., 4., 9., 9.],
+                              'C': [1, 2, 3, 5],
+                              'D': list('abcd')})
         result = df.interpolate()
         assert_frame_equal(result, expected)

         result = df.set_index('C').interpolate()
         expected = df.set_index('C')
-        expected.loc[3,'A'] = 3
-        expected.loc[5,'B'] = 9
+        expected.loc[3, 'A'] = 3
+        expected.loc[5, 'B'] = 9
         assert_frame_equal(result, expected)

     def test_interp_bad_method(self):
-        df = DataFrame({'A': [1, 2, np.nan, 4], 'B': [1, 4, 9, np.nan],
-                        'C': [1, 2, 3, 5], 'D': list('abcd')})
+        df = DataFrame({'A': [1, 2, np.nan, 4],
+                        'B': [1, 4, 9, np.nan],
+                        'C': [1, 2, 3, 5],
+                        'D': list('abcd')})
         with tm.assertRaises(ValueError):
             df.interpolate(method='not_a_method')

     def test_interp_combo(self):
-        df = DataFrame({'A': [1., 2., np.nan, 4.], 'B': [1, 4, 9, np.nan],
-                        'C': [1, 2, 3, 5], 'D': list('abcd')})
+        df = DataFrame({'A': [1., 2., np.nan, 4.],
+                        'B': [1, 4, 9, np.nan],
+                        'C': [1, 2, 3, 5],
+                        'D': list('abcd')})

         result = df['A'].interpolate()
         expected = Series([1., 2., 3., 4.], name='A')

@@ -1149,8 +1182,8 @@ def test_interp_alt_scipy(self):
                         'C': [1, 2, 3, 5, 8, 13, 21]})
         result = df.interpolate(method='barycentric')
         expected = df.copy()
-        expected.ix[2,'A'] = 3
-        expected.ix[5,'A'] = 6
+        expected.ix[2, 'A'] = 3
+        expected.ix[5, 'A'] = 6
         assert_frame_equal(result, expected)

         result = df.interpolate(method='barycentric', downcast='infer')

@@ -1163,8 +1196,8 @@ def test_interp_alt_scipy(self):
         _skip_if_no_pchip()

         result = df.interpolate(method='pchip')
-        expected.ix[2,'A'] = 3
-        expected.ix[5,'A'] = 6.125
+        expected.ix[2, 'A'] = 3
+        expected.ix[5, 'A'] = 6.125
         assert_frame_equal(result, expected)

     def test_interp_rowwise(self):

@@ -1175,9 +1208,9 @@ def test_interp_rowwise(self):
                         4: [1, 2, 3, 4]})
         result = df.interpolate(axis=1)
         expected = df.copy()
-        expected.loc[3,1] = 5
-        expected.loc[0,2] = 3
-        expected.loc[1,3] = 3
+        expected.loc[3, 1] = 5
+        expected.loc[0, 2] = 3
+        expected.loc[1, 3] = 3
         expected[4] = expected[4].astype(np.float64)
         assert_frame_equal(result, expected)

@@ -1208,8 +1241,10 @@ def test_interp_leading_nans(self):
         assert_frame_equal(result, expected)

     def test_interp_raise_on_only_mixed(self):
-        df = DataFrame({'A': [1, 2, np.nan, 4], 'B': ['a', 'b', 'c', 'd'],
-                        'C': [np.nan, 2, 5, 7], 'D': [np.nan, np.nan, 9, 9],
+        df = DataFrame({'A': [1, 2, np.nan, 4],
+                        'B': ['a', 'b', 'c', 'd'],
+                        'C': [np.nan, 2, 5, 7],
+                        'D': [np.nan, np.nan, 9, 9],
                         'E': [1, 2, 3, 4]})
         with tm.assertRaises(TypeError):
             df.interpolate(axis=1)

@@ -1227,7 +1262,8 @@ def test_interp_inplace(self):

     def test_interp_inplace_row(self):
         # GH 10395
-        result = DataFrame({'a': [1.,2.,3.,4.], 'b': [np.nan, 2., 3., 4.],
+        result = DataFrame({'a': [1., 2., 3., 4.],
+                            'b': [np.nan, 2., 3., 4.],
                             'c': [3, 2, 2, 2]})
         expected = result.interpolate(method='linear', axis=1, inplace=False)
         result.interpolate(method='linear', axis=1, inplace=True)

@@ -1239,10 +1275,14 @@ def test_interp_ignore_all_good(self):
                         'B': [1, 2, 3, 4],
                         'C': [1., 2., np.nan, 4.],
                         'D': [1., 2., 3., 4.]})
-        expected = DataFrame({'A': np.array([1, 2, 3, 4], dtype='float64'),
-                              'B': np.array([1, 2, 3, 4], dtype='int64'),
-                              'C': np.array([1., 2., 3, 4.], dtype='float64'),
-                              'D': np.array([1., 2., 3., 4.], dtype='float64')})
+        expected = DataFrame({'A': np.array(
+            [1, 2, 3, 4], dtype='float64'),
+                              'B': np.array(
+                                  [1, 2, 3, 4], dtype='int64'),
+                              'C': np.array(
+                                  [1., 2., 3, 4.], dtype='float64'),
+                              'D': np.array(
+                                  [1., 2., 3., 4.], dtype='float64')})

         result = df.interpolate(downcast=None)
         assert_frame_equal(result, expected)

@@ -1252,9 +1292,9 @@ def test_interp_ignore_all_good(self):
         assert_frame_equal(result, df[['B', 'D']])

     def test_describe(self):
-        desc = tm.makeDataFrame().describe()
-        desc = tm.makeMixedDataFrame().describe()
-        desc = tm.makeTimeDataFrame().describe()
+        tm.makeDataFrame().describe()
+        tm.makeMixedDataFrame().describe()
+        tm.makeTimeDataFrame().describe()

     def test_describe_percentiles_percent_or_raw(self):
         msg = 'percentiles should all be in the interval \\[0, 1\\]'

@@ -1332,7 +1372,8 @@ def test_describe_objects(self):
                              index=['count', 'unique', 'top', 'freq'])
         assert_frame_equal(result, expected)

-        df = DataFrame({"C1": pd.date_range('2010-01-01', periods=4, freq='D')})
+        df = DataFrame({"C1": pd.date_range('2010-01-01', periods=4, freq='D')
+                        })
         df.loc[4] = pd.Timestamp('2010-01-04')
         result = df.describe()
         expected = DataFrame({"C1": [5, 4, pd.Timestamp('2010-01-04'), 2,

@@ -1373,7 +1414,7 @@ def test_describe_typefiltering(self):
                         'ts': tm.makeTimeSeries()[:24].index})

         descN = df.describe()
-        expected_cols = ['numC', 'numD',]
+        expected_cols = ['numC', 'numD', ]
         expected = DataFrame(dict((k, df[k].describe())
                                   for k in expected_cols),
                              columns=expected_cols)

@@ -1384,7 +1425,7 @@ def test_describe_typefiltering(self):
         desc = df.describe(exclude=['object', 'datetime'])
         assert_frame_equal(desc, descN)
         desc = df.describe(include=['float'])
-        assert_frame_equal(desc, descN.drop('numC',1))
+        assert_frame_equal(desc, descN.drop('numC', 1))

         descC = df.describe(include=['O'])
         expected_cols = ['catA', 'catB']

@@ -1394,32 +1435,34 @@ def test_describe_typefiltering(self):
         assert_frame_equal(descC, expected)

         descD = df.describe(include=['datetime'])
-        assert_series_equal( descD.ts, df.ts.describe())
+        assert_series_equal(descD.ts, df.ts.describe())

-        desc = df.describe(include=['object','number', 'datetime'])
-        assert_frame_equal(desc.loc[:,["numC","numD"]].dropna(), descN)
-        assert_frame_equal(desc.loc[:,["catA","catB"]].dropna(), descC)
-        descDs = descD.sort_index() # the index order change for mixed-types
-        assert_frame_equal(desc.loc[:,"ts":].dropna().sort_index(), descDs)
+        desc = df.describe(include=['object', 'number', 'datetime'])
+        assert_frame_equal(desc.loc[:, ["numC", "numD"]].dropna(), descN)
+        assert_frame_equal(desc.loc[:, ["catA", "catB"]].dropna(), descC)
+        descDs = descD.sort_index()  # the index order change for mixed-types
+        assert_frame_equal(desc.loc[:, "ts":].dropna().sort_index(), descDs)

-        desc = df.loc[:,'catA':'catB'].describe(include='all')
+        desc = df.loc[:, 'catA':'catB'].describe(include='all')
         assert_frame_equal(desc, descC)
-        desc = df.loc[:,'numC':'numD'].describe(include='all')
+        desc = df.loc[:, 'numC':'numD'].describe(include='all')
         assert_frame_equal(desc, descN)

-        desc = df.describe(percentiles = [], include='all')
-        cnt = Series(data=[4,4,6,6,6], index=['catA','catB','numC','numD','ts'])
-        assert_series_equal( desc.count(), cnt)
+        desc = df.describe(percentiles=[], include='all')
+        cnt = Series(data=[4, 4, 6, 6, 6],
+                     index=['catA', 'catB', 'numC', 'numD', 'ts'])
+        assert_series_equal(desc.count(), cnt)
         self.assertTrue('count' in desc.index)
         self.assertTrue('unique' in desc.index)
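The `include`/`exclude` filtering exercised throughout `test_describe_typefiltering` can be shown in isolation (an illustrative sketch, not part of the patch; the object dtype is forced explicitly so the example doesn't depend on string-dtype inference):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"catA": pd.Series(["foo", "bar", "foo"], dtype=object),
                   "numC": np.arange(3),
                   "numD": np.arange(3.0) + 0.5})

# `include` / `exclude` select which dtypes describe() summarizes:
# numeric columns get count/mean/std/quantiles, object columns get
# count/unique/top/freq.
num_desc = df.describe(include=["number"])
obj_desc = df.describe(include=["object"])
```

This is why the test can compare `df.describe(exclude=['object', 'datetime'])` against the numeric-only `descN` above.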
         self.assertTrue('50%' in desc.index)
         self.assertTrue('first' in desc.index)

-        desc = df.drop("ts", 1).describe(percentiles = [], include='all')
-        assert_series_equal( desc.count(), cnt.drop("ts"))
+        desc = df.drop("ts", 1).describe(percentiles=[], include='all')
+        assert_series_equal(desc.count(), cnt.drop("ts"))
         self.assertTrue('first' not in desc.index)
-        desc = df.drop(["numC","numD"], 1).describe(percentiles = [], include='all')
-        assert_series_equal( desc.count(), cnt.drop(["numC","numD"]))
+        desc = df.drop(["numC", "numD"], 1).describe(percentiles=[],
+                                                     include='all')
+        assert_series_equal(desc.count(), cnt.drop(["numC", "numD"]))
         self.assertTrue('50%' not in desc.index)

     def test_describe_typefiltering_category_bool(self):

@@ -1446,8 +1489,9 @@ def test_describe_typefiltering_category_bool(self):
         assert_frame_equal(desc1, desc2)

     def test_describe_timedelta(self):
-        df = DataFrame({"td": pd.to_timedelta(np.arange(24)%20,"D")})
-        self.assertTrue(df.describe().loc["mean"][0] == pd.to_timedelta("8d4h"))
+        df = DataFrame({"td": pd.to_timedelta(np.arange(24) % 20, "D")})
+        self.assertTrue(df.describe().loc["mean"][0] == pd.to_timedelta(
+            "8d4h"))

     def test_describe_typefiltering_dupcol(self):
         df = DataFrame({'catA': ['foo', 'foo', 'bar'] * 8,

@@ -1462,33 +1506,37 @@ def test_describe_typefiltering_dupcol(self):

     def test_describe_typefiltering_groupby(self):
         df = DataFrame({'catA': ['foo', 'foo', 'bar'] * 8,
-                       'catB': ['a', 'b', 'c', 'd'] * 6,
-                       'numC': np.arange(24),
-                       'numD': np.arange(24.) + .5,
-                       'ts': tm.makeTimeSeries()[:24].index})
+                        'catB': ['a', 'b', 'c', 'd'] * 6,
+                        'numC': np.arange(24),
+                        'numD': np.arange(24.) + .5,
+                        'ts': tm.makeTimeSeries()[:24].index})
         G = df.groupby('catA')
         self.assertTrue(G.describe(include=['number']).shape == (16, 2))
-        self.assertTrue(G.describe(include=['number', 'object']).shape == (22, 3))
+        self.assertTrue(G.describe(include=['number', 'object']).shape == (22,
+                                                                           3))
         self.assertTrue(G.describe(include='all').shape == (26, 4))

     def test_describe_multi_index_df_column_names(self):
         """ Test that column names persist after the describe operation."""

-        df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
-                           'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
-                           'C': np.random.randn(8),
-                           'D': np.random.randn(8)})
+        df = pd.DataFrame(
+            {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
+             'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
+             'C': np.random.randn(8),
+             'D': np.random.randn(8)})

         # GH 11517
         # test for hierarchical index
         hierarchical_index_df = df.groupby(['A', 'B']).mean().T
         self.assertTrue(hierarchical_index_df.columns.names == ['A', 'B'])
-        self.assertTrue(hierarchical_index_df.describe().columns.names == ['A', 'B'])
+        self.assertTrue(hierarchical_index_df.describe().columns.names ==
+                        ['A', 'B'])

         # test for non-hierarchical index
         non_hierarchical_index_df = df.groupby(['A']).mean().T
         self.assertTrue(non_hierarchical_index_df.columns.names == ['A'])
-        self.assertTrue(non_hierarchical_index_df.describe().columns.names == ['A'])
+        self.assertTrue(non_hierarchical_index_df.describe().columns.names ==
+                        ['A'])

     def test_no_order(self):
         tm._skip_if_no_scipy()

@@ -1506,7 +1554,9 @@ def test_spline(self):
         assert_series_equal(result, expected)

     def test_spline_extrapolate(self):
-        tm.skip_if_no_package('scipy', '0.15', 'setting ext on scipy.interpolate.UnivariateSpline')
+        tm.skip_if_no_package(
+            'scipy', '0.15',
+            'setting ext on scipy.interpolate.UnivariateSpline')
         s = Series([1, 2, 3, 4, np.nan, 6, np.nan])
         result3 = s.interpolate(method='spline', order=1, ext=3)
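The `8d4h` constant in `test_describe_timedelta` is not arbitrary; the arithmetic behind it can be checked directly (illustrative snippet, not part of the patch):

```python
import numpy as np
import pandas as pd

# The 24 values of np.arange(24) % 20 are 0..19 plus 0, 1, 2, 3,
# which sum to 196 days; 196 / 24 days is exactly 8 days 4 hours.
td = pd.to_timedelta(np.arange(24) % 20, unit="D")
mean_td = pd.Series(td).mean()
```

Means of timedelta data come back as `Timedelta` values, which is why the test can compare against `pd.to_timedelta("8d4h")`.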
         expected3 = Series([1., 2., 3., 4., 5., 6., 6.])

@@ -1525,8 +1575,8 @@ def test_spline_smooth(self):

     def test_spline_interpolation(self):
         tm._skip_if_no_scipy()

-        s = Series(np.arange(10)**2)
-        s[np.random.randint(0,9,3)] = np.nan
+        s = Series(np.arange(10) ** 2)
+        s[np.random.randint(0, 9, 3)] = np.nan
         result1 = s.interpolate(method='spline', order=1)
         expected1 = s.interpolate(method='spline', order=1)
         assert_series_equal(result1, expected1)

@@ -1535,8 +1585,8 @@ def test_spline_error(self):
         tm._skip_if_no_scipy()

-        s = pd.Series(np.arange(10)**2)
-        s[np.random.randint(0,9,3)] = np.nan
+        s = pd.Series(np.arange(10) ** 2)
+        s[np.random.randint(0, 9, 3)] = np.nan
         with tm.assertRaises(ValueError):
             s.interpolate(method='spline')

@@ -1546,20 +1596,19 @@ def test_spline_error(self):
     def test_metadata_propagation_indiv(self):

         # groupby
-        df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
-                              'foo', 'bar', 'foo', 'foo'],
-                        'B': ['one', 'one', 'two', 'three',
-                              'two', 'two', 'one', 'three'],
-                        'C': np.random.randn(8),
-                        'D': np.random.randn(8)})
+        df = DataFrame(
+            {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
+             'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
+             'C': np.random.randn(8),
+             'D': np.random.randn(8)})
         result = df.groupby('A').sum()
-        self.check_metadata(df,result)
+        self.check_metadata(df, result)

         # resample
-        df = DataFrame(np.random.randn(1000,2),
-                       index=date_range('20130101',periods=1000,freq='s'))
+        df = DataFrame(np.random.randn(1000, 2),
+                       index=date_range('20130101', periods=1000, freq='s'))
         result = df.resample('1T')
-        self.check_metadata(df,result)
+        self.check_metadata(df, result)

         # merging with override
         # GH 6923

@@ -1578,7 +1627,8 @@ def finalize(self, other, method=None, **kwargs):
             for name in self._metadata:
                 if method == 'merge':
                     left, right = other.left, other.right
-                    value = getattr(left, name, '') + '|' + getattr(right, name, '')
+                    value = getattr(left, name, '') + '|' + getattr(right,
+                                                                    name, '')
                     object.__setattr__(self, name, value)
                 else:
                     object.__setattr__(self, name, getattr(other, name, ''))

@@ -1587,7 +1637,7 @@ def finalize(self, other, method=None, **kwargs):
         DataFrame.__finalize__ = finalize

         result = df1.merge(df2, left_on=['a'], right_on=['c'], how='inner')
-        self.assertEqual(result.filename,'fname1.csv|fname2.csv')
+        self.assertEqual(result.filename, 'fname1.csv|fname2.csv')

         # concat
         # GH 6927

@@ -1598,7 +1648,9 @@ def finalize(self, other, method=None, **kwargs):
         def finalize(self, other, method=None, **kwargs):
             for name in self._metadata:
                 if method == 'concat':
-                    value = '+'.join([ getattr(o,name) for o in other.objs if getattr(o,name,None) ])
+                    value = '+'.join([getattr(
+                        o, name) for o in other.objs if getattr(o, name, None)
+                    ])
                     object.__setattr__(self, name, value)
                 else:
                     object.__setattr__(self, name, getattr(other, name, None))

@@ -1608,7 +1660,7 @@ def finalize(self, other, method=None, **kwargs):
         DataFrame.__finalize__ = finalize

         result = pd.concat([df1, df1])
-        self.assertEqual(result.filename,'foo+foo')
+        self.assertEqual(result.filename, 'foo+foo')

         # reset
         DataFrame._metadata = _metadata

@@ -1636,8 +1688,8 @@ def test_tz_convert_and_localize(self):

             for idx in [l0, l1]:

-                l0_expected  = getattr(idx, fn)('US/Pacific')
-                l1_expected  = getattr(idx, fn)('US/Pacific')
+                l0_expected = getattr(idx, fn)('US/Pacific')
+                l1_expected = getattr(idx, fn)('US/Pacific')

                 df1 = DataFrame(np.ones(5), index=l0)
                 df1 = getattr(df1, fn)('US/Pacific')

@@ -1645,8 +1697,7 @@ def test_tz_convert_and_localize(self):

                 # MultiIndex
                 # GH7846
-                df2 = DataFrame(np.ones(5),
-                                MultiIndex.from_arrays([l0, l1]))
+                df2 = DataFrame(np.ones(5), MultiIndex.from_arrays([l0, l1]))

                 df3 = getattr(df2, fn)('US/Pacific', level=0)
                 self.assertFalse(df3.index.levels[0].equals(l0))

@@ -1663,7 +1714,9 @@ def test_tz_convert_and_localize(self):
                 df4 = DataFrame(np.ones(5),
                                 MultiIndex.from_arrays([int_idx, l0]))

-                df5 = getattr(df4, fn)('US/Pacific', level=1)
+                # TODO: untested
+                df5 = getattr(df4, fn)('US/Pacific', level=1)  # noqa
+
                 self.assertTrue(df3.index.levels[0].equals(l0))
                 self.assertFalse(df3.index.levels[0].equals(l0_expected))
                 self.assertTrue(df3.index.levels[1].equals(l1_expected))

@@ -1679,7 +1732,7 @@ def test_tz_convert_and_localize(self):
             # Not DatetimeIndex / PeriodIndex
             with tm.assertRaisesRegexp(TypeError, 'DatetimeIndex'):
                 df = DataFrame(np.ones(5),
-                              MultiIndex.from_arrays([int_idx, l0]))
+                               MultiIndex.from_arrays([int_idx, l0]))
                 df = getattr(df, fn)('US/Pacific', level=0)

             # Invalid level

@@ -1690,7 +1743,7 @@ def test_tz_convert_and_localize(self):
     def test_set_attribute(self):
         # Test for consistent setattr behavior when an attribute and a column
         # have the same name (Issue #8994)
-        df = DataFrame({'x':[1, 2, 3]})
+        df = DataFrame({'x': [1, 2, 3]})

         df.y = 2
         df['y'] = [2, 4, 6]

@@ -1701,20 +1754,23 @@ def test_set_attribute(self):

     def test_pct_change(self):
         # GH 11150
-        pnl = DataFrame([np.arange(0, 40, 10), np.arange(0, 40, 10), np.arange(0, 40, 10)]).astype(np.float64)
-        pnl.iat[1,0] = np.nan
-        pnl.iat[1,1] = np.nan
-        pnl.iat[2,3] = 60
+        pnl = DataFrame([np.arange(0, 40, 10), np.arange(0, 40, 10), np.arange(
+            0, 40, 10)]).astype(np.float64)
+        pnl.iat[1, 0] = np.nan
+        pnl.iat[1, 1] = np.nan
+        pnl.iat[2, 3] = 60

         mask = pnl.isnull()

         for axis in range(2):
-            expected = pnl.ffill(axis=axis)/pnl.ffill(axis=axis).shift(axis=axis) - 1
+            expected = pnl.ffill(axis=axis) / pnl.ffill(axis=axis).shift(
+                axis=axis) - 1
             expected[mask] = np.nan
             result = pnl.pct_change(axis=axis, fill_method='pad')
             self.assert_frame_equal(result, expected)

+
 class TestPanel(tm.TestCase, Generic):
     _typ = Panel
     _comparator = lambda self, x, y: assert_panel_equal(x, y)

@@ -1725,40 +1781,40 @@ class TestNDFrame(tm.TestCase):

     def test_squeeze(self):
         # noop
-        for s in [ tm.makeFloatSeries(), tm.makeStringSeries(), tm.makeObjectSeries() ]:
-            tm.assert_series_equal(s.squeeze(),s)
-        for df in [ tm.makeTimeDataFrame() ]:
-            tm.assert_frame_equal(df.squeeze(),df)
-        for p in [ tm.makePanel() ]:
-            tm.assert_panel_equal(p.squeeze(),p)
-        for p4d in [ tm.makePanel4D() ]:
-            tm.assert_panel4d_equal(p4d.squeeze(),p4d)
+        for s in [tm.makeFloatSeries(), tm.makeStringSeries(),
+                  tm.makeObjectSeries()]:
+            tm.assert_series_equal(s.squeeze(), s)
+        for df in [tm.makeTimeDataFrame()]:
+            tm.assert_frame_equal(df.squeeze(), df)
+        for p in [tm.makePanel()]:
+            tm.assert_panel_equal(p.squeeze(), p)
+        for p4d in [tm.makePanel4D()]:
+            tm.assert_panel4d_equal(p4d.squeeze(), p4d)

         # squeezing
         df = tm.makeTimeDataFrame().reindex(columns=['A'])
-        tm.assert_series_equal(df.squeeze(),df['A'])
+        tm.assert_series_equal(df.squeeze(), df['A'])

         p = tm.makePanel().reindex(items=['ItemA'])
-        tm.assert_frame_equal(p.squeeze(),p['ItemA'])
+        tm.assert_frame_equal(p.squeeze(), p['ItemA'])

-        p = tm.makePanel().reindex(items=['ItemA'],minor_axis=['A'])
-        tm.assert_series_equal(p.squeeze(),p.ix['ItemA',:,'A'])
+        p = tm.makePanel().reindex(items=['ItemA'], minor_axis=['A'])
+        tm.assert_series_equal(p.squeeze(), p.ix['ItemA', :, 'A'])

         p4d = tm.makePanel4D().reindex(labels=['label1'])
-        tm.assert_panel_equal(p4d.squeeze(),p4d['label1'])
+        tm.assert_panel_equal(p4d.squeeze(), p4d['label1'])

-        p4d = tm.makePanel4D().reindex(labels=['label1'],items=['ItemA'])
-        tm.assert_frame_equal(p4d.squeeze(),p4d.ix['label1','ItemA'])
+        p4d = tm.makePanel4D().reindex(labels=['label1'], items=['ItemA'])
+        tm.assert_frame_equal(p4d.squeeze(), p4d.ix['label1', 'ItemA'])

         # don't fail with 0 length dimensions GH11229 & GH8999
-        empty_series=pd.Series([], name='five')
-        empty_frame=pd.DataFrame([empty_series])
-        empty_panel=pd.Panel({'six':empty_frame})
+        empty_series = pd.Series([], name='five')
+        empty_frame = pd.DataFrame([empty_series])
+        empty_panel = pd.Panel({'six': empty_frame})

         [tm.assert_series_equal(empty_series, higher_dim.squeeze())
          for higher_dim in [empty_series, empty_frame, empty_panel]]

-
     def test_equals(self):
         s1 = pd.Series([1, 2, 3], index=[0, 2, 1])
         s2 = s1.copy()

@@ -1782,8 +1838,10 @@ def test_equals(self):

         # Add object dtype column with nans
         index = np.random.random(10)
-        df1 = DataFrame(np.random.random(10,), index=index, columns=['floats'])
-        df1['text'] = 'the sky is so blue. we could use more chocolate.'.split()
+        df1 = DataFrame(
+            np.random.random(10, ), index=index, columns=['floats'])
+        df1['text'] = 'the sky is so blue. we could use more chocolate.'.split(
+        )
         df1['start'] = date_range('2000-1-1', periods=10, freq='T')
         df1['end'] = date_range('2000-1-1', periods=10, freq='D')
         df1['diff'] = df1['end'] - df1['start']

@@ -1874,10 +1932,10 @@ def test_pipe_tuple_error(self):
         df = DataFrame({"A": [1, 2, 3]})
         f = lambda x, y: y
         with tm.assertRaises(ValueError):
-            result = df.pipe((f, 'y'), x=1, y=0)
+            df.pipe((f, 'y'), x=1, y=0)

         with tm.assertRaises(ValueError):
-            result = df.A.pipe((f, 'y'), x=1, y=0)
+            df.A.pipe((f, 'y'), x=1, y=0)

     def test_pipe_panel(self):
         wp = Panel({'r1': DataFrame({"A": [1, 2, 3]})})
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 0fc5916676dd3..add7245561d3f 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -6,7 +6,6 @@
 import os
 import string
 import warnings
-from distutils.version import LooseVersion

 from datetime import datetime, date

@@ -21,7 +20,6 @@
 from pandas.util.testing import ensure_clean
 from pandas.core.config import set_option

-
 import numpy as np
 from numpy import random
 from numpy.random import rand, randn

@@ -29,8 +27,6 @@
 from numpy.testing import assert_allclose
 from numpy.testing.decorators import slow
 import pandas.tools.plotting as plotting
-
-
 """
 These tests are for ``Dataframe.plot`` and ``Series.plot``.
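The `squeeze` behavior covered in `test_squeeze` above reduces to two cases, which a small standalone snippet can illustrate (not part of the patch):

```python
import pandas as pd

# A single-column frame squeezes down to the Series it wraps...
single = pd.DataFrame({"A": [1.0, 2.0, 3.0]})
squeezed = single.squeeze()

# ...while a frame with no length-one axis is returned unchanged.
wide = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
unchanged = wide.squeeze()
```

Note that `squeeze()` collapses *every* length-one axis, so a one-row, multi-column frame also squeezes (to a row Series); the tests' empty-dimension cases (GH11229, GH8999) guard the degenerate end of the same rule.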
Other plot methods such as ``.hist``, ``.boxplot`` and other miscellaneous @@ -40,17 +36,15 @@ def _skip_if_no_scipy_gaussian_kde(): try: - import scipy - from scipy.stats import gaussian_kde + from scipy.stats import gaussian_kde # noqa except ImportError: raise nose.SkipTest("scipy version doesn't support gaussian_kde") def _ok_for_gaussian_kde(kind): - if kind in ['kde','density']: + if kind in ['kde', 'density']: try: - import scipy - from scipy.stats import gaussian_kde + from scipy.stats import gaussian_kde # noqa except ImportError: return False return True @@ -91,7 +85,6 @@ def setUp(self): else: self.polycollection_factor = 1 - def tearDown(self): tm.close() @@ -115,7 +108,8 @@ def _check_legend_labels(self, axes, labels=None, visible=True): labels : list-like expected legend labels visible : bool - expected legend visibility. labels are checked only when visible is True + expected legend visibility. labels are checked only when visible is + True """ if visible and (labels is None): @@ -161,7 +155,8 @@ def _check_visible(self, collections, visible=True): expected visibility """ from matplotlib.collections import Collection - if not isinstance(collections, Collection) and not com.is_list_like(collections): + if not isinstance(collections, + Collection) and not com.is_list_like(collections): collections = [collections] for patch in collections: @@ -276,14 +271,17 @@ def _check_ticks_props(self, axes, xlabelsize=None, xrot=None, for ax in axes: if xlabelsize or xrot: if isinstance(ax.xaxis.get_minor_formatter(), NullFormatter): - # If minor ticks has NullFormatter, rot / fontsize are not retained + # If minor ticks has NullFormatter, rot / fontsize are not + # retained labels = ax.get_xticklabels() else: - labels = ax.get_xticklabels() + ax.get_xticklabels(minor=True) + labels = ax.get_xticklabels() + ax.get_xticklabels( + minor=True) for label in labels: if xlabelsize is not None: - self.assertAlmostEqual(label.get_fontsize(), xlabelsize) + 
self.assertAlmostEqual(label.get_fontsize(), + xlabelsize) if xrot is not None: self.assertAlmostEqual(label.get_rotation(), xrot) @@ -291,11 +289,13 @@ def _check_ticks_props(self, axes, xlabelsize=None, xrot=None, if isinstance(ax.yaxis.get_minor_formatter(), NullFormatter): labels = ax.get_yticklabels() else: - labels = ax.get_yticklabels() + ax.get_yticklabels(minor=True) + labels = ax.get_yticklabels() + ax.get_yticklabels( + minor=True) for label in labels: if ylabelsize is not None: - self.assertAlmostEqual(label.get_fontsize(), ylabelsize) + self.assertAlmostEqual(label.get_fontsize(), + ylabelsize) if yrot is not None: self.assertAlmostEqual(label.get_rotation(), yrot) @@ -316,7 +316,8 @@ def _check_ax_scales(self, axes, xaxis='linear', yaxis='linear'): self.assertEqual(ax.xaxis.get_scale(), xaxis) self.assertEqual(ax.yaxis.get_scale(), yaxis) - def _check_axes_shape(self, axes, axes_num=None, layout=None, figsize=(8.0, 6.0)): + def _check_axes_shape(self, axes, axes_num=None, layout=None, + figsize=(8.0, 6.0)): """ Check expected number of axes is drawn in expected layout @@ -324,7 +325,8 @@ def _check_axes_shape(self, axes, axes_num=None, layout=None, figsize=(8.0, 6.0) ---------- axes : matplotlib Axes object, or its list-like axes_num : number - expected number of axes. Unnecessary axes should be set to invisible. + expected number of axes. Unnecessary axes should be set to + invisible. 
         layout : tuple
             expected layout, (expected number of rows , columns)
         figsize : tuple
@@ -342,8 +344,9 @@ def _check_axes_shape(self, axes, axes_num=None, layout=None, figsize=(8.0, 6.0)
             result = self._get_axes_layout(plotting._flatten(axes))
             self.assertEqual(result, layout)

-        self.assert_numpy_array_equal(np.round(visible_axes[0].figure.get_size_inches()),
-                                      np.array(figsize))
+        self.assert_numpy_array_equal(
+            np.round(visible_axes[0].figure.get_size_inches()),
+            np.array(figsize))

     def _get_axes_layout(self, axes):
         x_set = set()
@@ -457,33 +460,39 @@ def _check_grid_settings(self, obj, kinds, kws={}):
         import matplotlib as mpl

         def is_grid_on():
-            xoff = all(not g.gridOn for g in self.plt.gca().xaxis.get_major_ticks())
-            yoff = all(not g.gridOn for g in self.plt.gca().yaxis.get_major_ticks())
-            return not(xoff and yoff)
+            xoff = all(not g.gridOn
+                       for g in self.plt.gca().xaxis.get_major_ticks())
+            yoff = all(not g.gridOn
+                       for g in self.plt.gca().yaxis.get_major_ticks())
+            return not (xoff and yoff)

-        spndx=1
+        spndx = 1
         for kind in kinds:
             if not _ok_for_gaussian_kde(kind):
                 continue

-            self.plt.subplot(1,4*len(kinds),spndx); spndx+=1
-            mpl.rc('axes',grid=False)
+            self.plt.subplot(1, 4 * len(kinds), spndx)
+            spndx += 1
+            mpl.rc('axes', grid=False)
             obj.plot(kind=kind, **kws)
             self.assertFalse(is_grid_on())

-            self.plt.subplot(1,4*len(kinds),spndx); spndx+=1
-            mpl.rc('axes',grid=True)
+            self.plt.subplot(1, 4 * len(kinds), spndx)
+            spndx += 1
+            mpl.rc('axes', grid=True)
             obj.plot(kind=kind, grid=False, **kws)
             self.assertFalse(is_grid_on())

             if kind != 'pie':
-                self.plt.subplot(1,4*len(kinds),spndx); spndx+=1
-                mpl.rc('axes',grid=True)
+                self.plt.subplot(1, 4 * len(kinds), spndx)
+                spndx += 1
+                mpl.rc('axes', grid=True)
                 obj.plot(kind=kind, **kws)
                 self.assertTrue(is_grid_on())

-                self.plt.subplot(1,4*len(kinds),spndx); spndx+=1
-                mpl.rc('axes',grid=False)
+                self.plt.subplot(1, 4 * len(kinds), spndx)
+                spndx += 1
+                mpl.rc('axes', grid=False)
                 obj.plot(kind=kind, grid=True, **kws)
                 self.assertTrue(is_grid_on())
@@ -651,8 +660,10 @@ def test_line_area_nan_series(self):
             ax = _check_plot_works(d.plot)
             masked = ax.lines[0].get_ydata()
             # remove nan for comparison purpose
-            self.assert_numpy_array_equal(np.delete(masked.data, 2), np.array([1, 2, 3]))
-            self.assert_numpy_array_equal(masked.mask, np.array([False, False, True, False]))
+            self.assert_numpy_array_equal(
+                np.delete(masked.data, 2), np.array([1, 2, 3]))
+            self.assert_numpy_array_equal(
+                masked.mask, np.array([False, False, True, False]))

             expected = np.array([1, 2, 0, 3])
             ax = _check_plot_works(d.plot, stacked=True)
@@ -694,12 +705,14 @@ def test_bar_log(self):
             expected = np.hstack((1.0e-04, expected, 1.0e+01))

         ax = Series([0.1, 0.01, 0.001]).plot(log=True, kind='bar')
-        tm.assert_numpy_array_equal(ax.get_ylim(), (0.001, 0.10000000000000001))
+        tm.assert_numpy_array_equal(ax.get_ylim(),
+                                    (0.001, 0.10000000000000001))
         tm.assert_numpy_array_equal(ax.yaxis.get_ticklocs(), expected)

         tm.close()

         ax = Series([0.1, 0.01, 0.001]).plot(log=True, kind='barh')
-        tm.assert_numpy_array_equal(ax.get_xlim(), (0.001, 0.10000000000000001))
+        tm.assert_numpy_array_equal(ax.get_xlim(),
+                                    (0.001, 0.10000000000000001))
         tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(), expected)

     @slow
@@ -728,7 +741,8 @@ def test_irregular_datetime(self):

     @slow
     def test_pie_series(self):
-        # if sum of values is less than 1.0, pie handle them as rate and draw semicircle.
+        # if sum of values is less than 1.0, pie handle them as rate and draw
+        # semicircle.
         series = Series(np.random.randint(1, 5),
                         index=['a', 'b', 'c', 'd', 'e'], name='YLABEL')
         ax = _check_plot_works(series.plot.pie)
@@ -749,14 +763,16 @@ def test_pie_series(self):
         # with labels and colors
         labels = ['A', 'B', 'C', 'D', 'E']
         color_args = ['r', 'g', 'b', 'c', 'm']
-        ax = _check_plot_works(series.plot.pie, labels=labels, colors=color_args)
+        ax = _check_plot_works(series.plot.pie, labels=labels,
+                               colors=color_args)
         self._check_text_labels(ax.texts, labels)
         self._check_colors(ax.patches, facecolors=color_args)

         # with autopct and fontsize
         ax = _check_plot_works(series.plot.pie, colors=color_args,
                                autopct='%.2f', fontsize=7)
-        pcts = ['{0:.2f}'.format(s * 100) for s in series.values / float(series.sum())]
+        pcts = ['{0:.2f}'.format(s * 100)
+                for s in series.values / float(series.sum())]
         iters = [iter(series.index), iter(pcts)]
         expected_texts = list(next(it) for it in itertools.cycle(iters))
         self._check_text_labels(ax.texts, expected_texts)
@@ -769,8 +785,8 @@ def test_pie_series(self):
             series.plot.pie()

         # includes nan
-        series = Series([1, 2, np.nan, 4],
-                        index=['a', 'b', 'c', 'd'], name='YLABEL')
+        series = Series([1, 2, np.nan, 4], index=['a', 'b', 'c', 'd'],
+                        name='YLABEL')
         ax = _check_plot_works(series.plot.pie)
         self._check_text_labels(ax.texts, ['a', 'b', '', 'd'])
@@ -791,12 +807,13 @@ def test_hist_df_kwargs(self):
     def test_hist_df_with_nonnumerics(self):
         # GH 9853
         with tm.RNGContext(1):
-            df = DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
+            df = DataFrame(
+                np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
         df['E'] = ['x', 'y'] * 5
         ax = df.plot.hist(bins=5)
         self.assertEqual(len(ax.patches), 20)

-        ax = df.plot.hist() # bins=10
+        ax = df.plot.hist()  # bins=10
         self.assertEqual(len(ax.patches), 40)

     @slow
@@ -804,8 +821,10 @@ def test_hist_legacy(self):
         _check_plot_works(self.ts.hist)
         _check_plot_works(self.ts.hist, grid=False)
         _check_plot_works(self.ts.hist, figsize=(8, 10))
-        _check_plot_works(self.ts.hist, filterwarnings='ignore', by=self.ts.index.month)
-        _check_plot_works(self.ts.hist, filterwarnings='ignore', by=self.ts.index.month, bins=5)
+        _check_plot_works(self.ts.hist, filterwarnings='ignore',
+                          by=self.ts.index.month)
+        _check_plot_works(self.ts.hist, filterwarnings='ignore',
+                          by=self.ts.index.month, bins=5)

         fig, ax = self.plt.subplots(1, 1)
         _check_plot_works(self.ts.hist, ax=ax)
@@ -868,7 +887,8 @@ def test_hist_layout_with_by(self):
         self._check_axes_shape(axes, axes_num=3, layout=(2, 2))

         axes = df.height.hist(by=df.category, layout=(4, 2), figsize=(12, 7))
-        self._check_axes_shape(axes, axes_num=4, layout=(4, 2), figsize=(12, 7))
+        self._check_axes_shape(axes, axes_num=4, layout=(4, 2),
+                               figsize=(12, 7))

     @slow
     def test_hist_no_overlap(self):
@@ -903,7 +923,8 @@ def test_hist_secondary_legend(self):
         df['b'].plot.hist(ax=ax, legend=True, secondary_y=True)
         # both legends are draw on left ax
         # left axis must be invisible, right axis must be visible
-        self._check_legend_labels(ax.left_ax, labels=['a (right)', 'b (right)'])
+        self._check_legend_labels(ax.left_ax,
+                                  labels=['a (right)', 'b (right)'])
         self.assertFalse(ax.left_ax.get_yaxis().get_visible())
         self.assertTrue(ax.get_yaxis().get_visible())
         tm.close()
@@ -1010,9 +1031,12 @@ def test_kde_kwargs(self):
         tm._skip_if_no_scipy()
         _skip_if_no_scipy_gaussian_kde()
         from numpy import linspace
-        _check_plot_works(self.ts.plot.kde, bw_method=.5, ind=linspace(-100,100,20))
-        _check_plot_works(self.ts.plot.density, bw_method=.5, ind=linspace(-100,100,20))
-        ax = self.ts.plot.kde(logy=True, bw_method=.5, ind=linspace(-100,100,20))
+        _check_plot_works(self.ts.plot.kde, bw_method=.5,
+                          ind=linspace(-100, 100, 20))
+        _check_plot_works(self.ts.plot.density, bw_method=.5,
+                          ind=linspace(-100, 100, 20))
+        ax = self.ts.plot.kde(logy=True, bw_method=.5,
+                              ind=linspace(-100, 100, 20))
         self._check_ax_scales(ax, yaxis='log')
         self._check_text_labels(ax.yaxis.get_label(), 'Density')
@@ -1022,7 +1046,7 @@ def test_kde_missing_vals(self):
         _skip_if_no_scipy_gaussian_kde()
         s = Series(np.random.uniform(size=50))
         s[0] = np.nan
-        ax = _check_plot_works(s.plot.kde)
+        _check_plot_works(s.plot.kde)

     @slow
     def test_hist_kwargs(self):
@@ -1116,7 +1140,7 @@ def test_errorbar_plot(self):
         s = Series(np.arange(10), name='x')
         s_err = np.random.randn(10)
-        d_err = DataFrame(randn(10, 2), index=s.index,  columns=['x', 'y'])
+        d_err = DataFrame(randn(10, 2), index=s.index, columns=['x', 'y'])
         # test line and bar plots
         kinds = ['line', 'bar']
         for kind in kinds:
@@ -1138,7 +1162,7 @@ def test_errorbar_plot(self):
         ix = date_range('1/1/2000', '1/1/2001', freq='M')
         ts = Series(np.arange(12), index=ix, name='x')
         ts_err = Series(np.random.randn(12), index=ix)
-        td_err = DataFrame(randn(12, 2), index=ix,  columns=['x', 'y'])
+        td_err = DataFrame(randn(12, 2), index=ix, columns=['x', 'y'])

         ax = _check_plot_works(ts.plot, yerr=ts_err)
         self._check_has_errorbars(ax, xerr=0, yerr=1)
@@ -1149,7 +1173,7 @@ def test_errorbar_plot(self):
         with tm.assertRaises(ValueError):
             s.plot(yerr=np.arange(11))

-        s_err = ['zzz']*10
+        s_err = ['zzz'] * 10
         # in mpl 1.5+ this is a TypeError
         with tm.assertRaises((ValueError, TypeError)):
             s.plot(yerr=s_err)
@@ -1161,8 +1185,9 @@ def test_table(self):
     @slow
     def test_series_grid_settings(self):
         # Make sure plot defaults to rcParams['axes.grid'] setting, GH 9792
-        self._check_grid_settings(Series([1,2,3]),
-                                  plotting._series_kinds + plotting._common_kinds)
+        self._check_grid_settings(Series([1, 2, 3]),
+                                  plotting._series_kinds +
+                                  plotting._common_kinds)

     @slow
     def test_standard_colors(self):
@@ -1241,13 +1266,14 @@ def test_time_series_plot_color_with_empty_kwargs(self):
     def test_xticklabels(self):
         # GH11529
         s = Series(np.arange(10), index=['P%02d' % i for i in range(10)])
-        ax = s.plot(xticks=[0,3,5,9])
-        exp = ['P%02d' % i for i in [0,3,5,9]]
+        ax = s.plot(xticks=[0, 3, 5, 9])
+        exp = ['P%02d' % i for i in [0, 3, 5, 9]]
         self._check_text_labels(ax.get_xticklabels(), exp)
@tm.mplskip class TestDataFramePlots(TestPlotBase): + def setUp(self): TestPlotBase.setUp(self) import matplotlib as mpl @@ -1255,8 +1281,9 @@ def setUp(self): self.tdf = tm.makeTimeDataFrame() self.hexbin_df = DataFrame({"A": np.random.uniform(size=20), - "B": np.random.uniform(size=20), - "C": np.arange(20) + np.random.uniform(size=20)}) + "B": np.random.uniform(size=20), + "C": np.arange(20) + np.random.uniform( + size=20)}) from pandas import read_csv path = os.path.join(curpath(), 'data', 'iris.csv') @@ -1266,7 +1293,8 @@ def setUp(self): def test_plot(self): df = self.tdf _check_plot_works(df.plot, filterwarnings='ignore', grid=False) - axes = _check_plot_works(df.plot, filterwarnings='ignore', subplots=True) + axes = _check_plot_works(df.plot, filterwarnings='ignore', + subplots=True) self._check_axes_shape(axes, axes_num=4, layout=(4, 1)) axes = _check_plot_works(df.plot, filterwarnings='ignore', @@ -1290,16 +1318,19 @@ def test_plot(self): _check_plot_works(df.plot, xticks=[1, 5, 10]) _check_plot_works(df.plot, ylim=(-100, 100), xlim=(-100, 100)) - _check_plot_works(df.plot, filterwarnings='ignore', subplots=True, title='blah') - # We have to redo it here because _check_plot_works does two plots, once without an ax - # kwarg and once with an ax kwarg and the new sharex behaviour does not remove the - # visibility of the latter axis (as ax is present). - # see: https://github.com/pydata/pandas/issues/9737 + _check_plot_works(df.plot, filterwarnings='ignore', + subplots=True, title='blah') + + # We have to redo it here because _check_plot_works does two plots, + # once without an ax kwarg and once with an ax kwarg and the new sharex + # behaviour does not remove the visibility of the latter axis (as ax is + # present). 
see: https://github.com/pydata/pandas/issues/9737 + axes = df.plot(subplots=True, title='blah') self._check_axes_shape(axes, axes_num=3, layout=(3, 1)) - #axes[0].figure.savefig("test.png") + # axes[0].figure.savefig("test.png") for ax in axes[:2]: - self._check_visible(ax.xaxis) # xaxis must be visible for grid + self._check_visible(ax.xaxis) # xaxis must be visible for grid self._check_visible(ax.get_xticklabels(), visible=False) self._check_visible(ax.get_xticklabels(minor=True), visible=False) self._check_visible([ax.xaxis.get_label()], visible=False) @@ -1326,8 +1357,8 @@ def test_plot(self): (u('\u03b4'), 6), (u('\u03b4'), 7)], names=['i0', 'i1']) columns = MultiIndex.from_tuples([('bar', u('\u0394')), - ('bar', u('\u0395'))], names=['c0', - 'c1']) + ('bar', u('\u0395'))], names=['c0', + 'c1']) df = DataFrame(np.random.randint(0, 10, (8, 2)), columns=columns, index=index) @@ -1339,8 +1370,7 @@ def test_plot(self): axes = _check_plot_works(df.plot.bar, subplots=True) self._check_axes_shape(axes, axes_num=1, layout=(1, 1)) - axes = _check_plot_works(df.plot.bar, subplots=True, - layout=(-1, 1)) + axes = _check_plot_works(df.plot.bar, subplots=True, layout=(-1, 1)) self._check_axes_shape(axes, axes_num=1, layout=(1, 1)) # When ax is supplied and required number of axes is 1, # passed ax should be used: @@ -1357,7 +1387,7 @@ def test_color_and_style_arguments(self): df = DataFrame({'x': [1, 2], 'y': [3, 4]}) # passing both 'color' and 'style' arguments should be allowed # if there is no color symbol in the style strings: - ax = df.plot(color = ['red', 'black'], style = ['-', '--']) + ax = df.plot(color=['red', 'black'], style=['-', '--']) # check that the linestyles are correctly set: linestyle = [line.get_linestyle() for line in ax.lines] self.assertEqual(linestyle, ['-', '--']) @@ -1367,7 +1397,7 @@ def test_color_and_style_arguments(self): # passing both 'color' and 'style' arguments should not be allowed # if there is a color symbol in the style strings: with 
tm.assertRaises(ValueError): - df.plot(color = ['red', 'black'], style = ['k-', 'r--']) + df.plot(color=['red', 'black'], style=['k-', 'r--']) def test_nonnumeric_exclude(self): df = DataFrame({'A': ["x", "y", "z"], 'B': [1, 2, 3]}) @@ -1392,34 +1422,31 @@ def test_donot_overwrite_index_name(self): def test_plot_xy(self): # columns.inferred_type == 'string' df = self.tdf - self._check_data(df.plot(x=0, y=1), - df.set_index('A')['B'].plot()) + self._check_data(df.plot(x=0, y=1), df.set_index('A')['B'].plot()) self._check_data(df.plot(x=0), df.set_index('A').plot()) self._check_data(df.plot(y=0), df.B.plot()) - self._check_data(df.plot(x='A', y='B'), - df.set_index('A').B.plot()) + self._check_data(df.plot(x='A', y='B'), df.set_index('A').B.plot()) self._check_data(df.plot(x='A'), df.set_index('A').plot()) self._check_data(df.plot(y='B'), df.B.plot()) # columns.inferred_type == 'integer' df.columns = lrange(1, len(df.columns) + 1) - self._check_data(df.plot(x=1, y=2), - df.set_index(1)[2].plot()) + self._check_data(df.plot(x=1, y=2), df.set_index(1)[2].plot()) self._check_data(df.plot(x=1), df.set_index(1).plot()) self._check_data(df.plot(y=1), df[1].plot()) # figsize and title ax = df.plot(x=1, y=2, title='Test', figsize=(16, 8)) self._check_text_labels(ax.title, 'Test') - self._check_axes_shape(ax, axes_num=1, layout=(1, 1), figsize=(16., 8.)) + self._check_axes_shape(ax, axes_num=1, layout=(1, 1), + figsize=(16., 8.)) # columns.inferred_type == 'mixed' # TODO add MultiIndex test @slow def test_logscales(self): - df = DataFrame({'a': np.arange(100)}, - index=np.arange(100)) + df = DataFrame({'a': np.arange(100)}, index=np.arange(100)) ax = df.plot(logy=True) self._check_ax_scales(ax, yaxis='log') @@ -1477,8 +1504,8 @@ def test_period_compat(self): tm.close() def test_unsorted_index(self): - df = DataFrame({'y': np.arange(100)}, - index=np.arange(99, -1, -1), dtype=np.int64) + df = DataFrame({'y': np.arange(100)}, index=np.arange(99, -1, -1), + dtype=np.int64) ax = 
df.plot() l = ax.get_lines()[0] rs = l.get_xydata() @@ -1504,12 +1531,14 @@ def test_subplots(self): self.assertEqual(axes.shape, (3, )) for ax, column in zip(axes, df.columns): - self._check_legend_labels(ax, labels=[com.pprint_thing(column)]) + self._check_legend_labels(ax, + labels=[com.pprint_thing(column)]) for ax in axes[:-2]: - self._check_visible(ax.xaxis) # xaxis must be visible for grid + self._check_visible(ax.xaxis) # xaxis must be visible for grid self._check_visible(ax.get_xticklabels(), visible=False) - self._check_visible(ax.get_xticklabels(minor=True), visible=False) + self._check_visible( + ax.get_xticklabels(minor=True), visible=False) self._check_visible(ax.xaxis.get_label(), visible=False) self._check_visible(ax.get_yticklabels()) @@ -1542,9 +1571,10 @@ def test_subplots_timeseries(self): for ax in axes[:-2]: # GH 7801 - self._check_visible(ax.xaxis) # xaxis must be visible for grid + self._check_visible(ax.xaxis) # xaxis must be visible for grid self._check_visible(ax.get_xticklabels(), visible=False) - self._check_visible(ax.get_xticklabels(minor=True), visible=False) + self._check_visible( + ax.get_xticklabels(minor=True), visible=False) self._check_visible(ax.xaxis.get_label(), visible=False) self._check_visible(ax.get_yticklabels()) @@ -1555,14 +1585,16 @@ def test_subplots_timeseries(self): self._check_visible(axes[-1].get_yticklabels()) self._check_ticks_props(axes, xrot=0) - axes = df.plot(kind=kind, subplots=True, sharex=False, rot=45, fontsize=7) + axes = df.plot(kind=kind, subplots=True, sharex=False, rot=45, + fontsize=7) for ax in axes: self._check_visible(ax.xaxis) self._check_visible(ax.get_xticklabels()) self._check_visible(ax.get_xticklabels(minor=True)) self._check_visible(ax.xaxis.get_label()) self._check_visible(ax.get_yticklabels()) - self._check_ticks_props(ax, xlabelsize=7, xrot=45, ylabelsize=7) + self._check_ticks_props(ax, xlabelsize=7, xrot=45, + ylabelsize=7) @slow def test_subplots_layout(self): @@ -1632,12 +1664,14 
@@ def test_subplots_multiple_axes(self): df = DataFrame(np.random.rand(10, 3), index=list(string.ascii_letters[:10])) - returned = df.plot(subplots=True, ax=axes[0], sharex=False, sharey=False) + returned = df.plot(subplots=True, ax=axes[0], sharex=False, + sharey=False) self._check_axes_shape(returned, axes_num=3, layout=(1, 3)) self.assertEqual(returned.shape, (3, )) self.assertIs(returned[0].figure, fig) # draw on second row - returned = df.plot(subplots=True, ax=axes[1], sharex=False, sharey=False) + returned = df.plot(subplots=True, ax=axes[1], sharex=False, + sharey=False) self._check_axes_shape(returned, axes_num=3, layout=(1, 3)) self.assertEqual(returned.shape, (3, )) self.assertIs(returned[0].figure, fig) @@ -1687,23 +1721,25 @@ def test_subplots_ts_share_axes(self): # GH 3964 fig, axes = self.plt.subplots(3, 3, sharex=True, sharey=True) self.plt.subplots_adjust(left=0.05, right=0.95, hspace=0.3, wspace=0.3) - df = DataFrame(np.random.randn(10, 9), index=date_range(start='2014-07-01', freq='M', periods=10)) + df = DataFrame( + np.random.randn(10, 9), + index=date_range(start='2014-07-01', freq='M', periods=10)) for i, ax in enumerate(axes.ravel()): df[i].plot(ax=ax, fontsize=5) - #Rows other than bottom should not be visible + # Rows other than bottom should not be visible for ax in axes[0:-1].ravel(): self._check_visible(ax.get_xticklabels(), visible=False) - #Bottom row should be visible + # Bottom row should be visible for ax in axes[-1].ravel(): self._check_visible(ax.get_xticklabels(), visible=True) - #First column should be visible + # First column should be visible for ax in axes[[0, 1, 2], [0]].ravel(): self._check_visible(ax.get_yticklabels(), visible=True) - #Other columns should not be visible + # Other columns should not be visible for ax in axes[[0, 1, 2], [1]].ravel(): self._check_visible(ax.get_yticklabels(), visible=False) for ax in axes[[0, 1, 2], [2]].ravel(): @@ -1746,8 +1782,8 @@ def test_subplots_dup_columns(self): def 
test_negative_log(self): df = - DataFrame(rand(6, 4), - index=list(string.ascii_letters[:6]), - columns=['x', 'y', 'z', 'four']) + index=list(string.ascii_letters[:6]), + columns=['x', 'y', 'z', 'four']) with tm.assertRaises(ValueError): df.plot.area(logy=True) @@ -1757,20 +1793,22 @@ def test_negative_log(self): def _compare_stacked_y_cood(self, normal_lines, stacked_lines): base = np.zeros(len(normal_lines[0].get_data()[1])) for nl, sl in zip(normal_lines, stacked_lines): - base += nl.get_data()[1] # get y coodinates + base += nl.get_data()[1] # get y coodinates sy = sl.get_data()[1] self.assert_numpy_array_equal(base, sy) def test_line_area_stacked(self): with tm.RNGContext(42): - df = DataFrame(rand(6, 4), - columns=['w', 'x', 'y', 'z']) - neg_df = - df + df = DataFrame(rand(6, 4), columns=['w', 'x', 'y', 'z']) + neg_df = -df # each column has either positive or negative value - sep_df = DataFrame({'w': rand(6), 'x': rand(6), - 'y': - rand(6), 'z': - rand(6)}) + sep_df = DataFrame({'w': rand(6), + 'x': rand(6), + 'y': -rand(6), + 'z': -rand(6)}) # each column has positive-negative mixed value - mixed_df = DataFrame(randn(6, 4), index=list(string.ascii_letters[:6]), + mixed_df = DataFrame(randn(6, 4), + index=list(string.ascii_letters[:6]), columns=['w', 'x', 'y', 'z']) for kind in ['line', 'area']: @@ -1797,28 +1835,35 @@ def test_line_area_nan_df(self): values1 = [1, 2, np.nan, 3] values2 = [3, np.nan, 2, 1] df = DataFrame({'a': values1, 'b': values2}) - tdf = DataFrame({'a': values1, 'b': values2}, index=tm.makeDateIndex(k=4)) + tdf = DataFrame({'a': values1, + 'b': values2}, index=tm.makeDateIndex(k=4)) for d in [df, tdf]: ax = _check_plot_works(d.plot) masked1 = ax.lines[0].get_ydata() masked2 = ax.lines[1].get_ydata() # remove nan for comparison purpose - self.assert_numpy_array_equal(np.delete(masked1.data, 2), np.array([1, 2, 3])) - self.assert_numpy_array_equal(np.delete(masked2.data, 1), np.array([3, 2, 1])) - 
self.assert_numpy_array_equal(masked1.mask, np.array([False, False, True, False])) - self.assert_numpy_array_equal(masked2.mask, np.array([False, True, False, False])) + self.assert_numpy_array_equal( + np.delete(masked1.data, 2), np.array([1, 2, 3])) + self.assert_numpy_array_equal( + np.delete(masked2.data, 1), np.array([3, 2, 1])) + self.assert_numpy_array_equal( + masked1.mask, np.array([False, False, True, False])) + self.assert_numpy_array_equal( + masked2.mask, np.array([False, True, False, False])) expected1 = np.array([1, 2, 0, 3]) expected2 = np.array([3, 0, 2, 1]) ax = _check_plot_works(d.plot, stacked=True) self.assert_numpy_array_equal(ax.lines[0].get_ydata(), expected1) - self.assert_numpy_array_equal(ax.lines[1].get_ydata(), expected1 + expected2) + self.assert_numpy_array_equal(ax.lines[1].get_ydata(), + expected1 + expected2) ax = _check_plot_works(d.plot.area) self.assert_numpy_array_equal(ax.lines[0].get_ydata(), expected1) - self.assert_numpy_array_equal(ax.lines[1].get_ydata(), expected1 + expected2) + self.assert_numpy_array_equal(ax.lines[1].get_ydata(), + expected1 + expected2) ax = _check_plot_works(d.plot.area, stacked=False) self.assert_numpy_array_equal(ax.lines[0].get_ydata(), expected1) @@ -1849,10 +1894,9 @@ def test_line_lim(self): self.assertEqual(xmax, lines[0].get_data()[0][-1]) def test_area_lim(self): - df = DataFrame(rand(6, 4), - columns=['x', 'y', 'z', 'four']) + df = DataFrame(rand(6, 4), columns=['x', 'y', 'z', 'four']) - neg_df = - df + neg_df = -df for stacked in [True, False]: ax = _check_plot_works(df.plot.area, stacked=stacked) xmin, xmax = ax.get_xlim() @@ -1964,12 +2008,18 @@ def test_bar_barwidth(self): @slow def test_bar_barwidth_position(self): df = DataFrame(randn(5, 5)) - self._check_bar_alignment(df, kind='bar', stacked=False, width=0.9, position=0.2) - self._check_bar_alignment(df, kind='bar', stacked=True, width=0.9, position=0.2) - self._check_bar_alignment(df, kind='barh', stacked=False, width=0.9, 
position=0.2) - self._check_bar_alignment(df, kind='barh', stacked=True, width=0.9, position=0.2) - self._check_bar_alignment(df, kind='bar', subplots=True, width=0.9, position=0.2) - self._check_bar_alignment(df, kind='barh', subplots=True, width=0.9, position=0.2) + self._check_bar_alignment(df, kind='bar', stacked=False, width=0.9, + position=0.2) + self._check_bar_alignment(df, kind='bar', stacked=True, width=0.9, + position=0.2) + self._check_bar_alignment(df, kind='barh', stacked=False, width=0.9, + position=0.2) + self._check_bar_alignment(df, kind='barh', stacked=True, width=0.9, + position=0.2) + self._check_bar_alignment(df, kind='bar', subplots=True, width=0.9, + position=0.2) + self._check_bar_alignment(df, kind='barh', subplots=True, width=0.9, + position=0.2) @slow def test_bar_bottom_left(self): @@ -2002,7 +2052,8 @@ def test_bar_bottom_left(self): @slow def test_bar_nan(self): - df = DataFrame({'A': [10, np.nan, 20], 'B': [5, 10, 20], + df = DataFrame({'A': [10, np.nan, 20], + 'B': [5, 10, 20], 'C': [1, 2, 3]}) ax = df.plot.bar() expected = [10, 0, 20, 5, 10, 20, 1, 2, 3] @@ -2081,10 +2132,8 @@ def test_plot_scatter_with_c(self): # identical to the values we supplied, normally we'd be on shaky ground # comparing floats for equality but here we expect them to be # identical. - self.assertTrue( - np.array_equal( - ax.collections[0].get_facecolor(), - rgba_array)) + self.assertTrue(np.array_equal(ax.collections[0].get_facecolor(), + rgba_array)) # we don't test the colors of the faces in this next plot because they # are dependent on the spring colormap, which may change its colors # later. 
@@ -2134,12 +2183,11 @@ def test_plot_bar(self):
         self._check_ticks_props(ax, yrot=55, ylabelsize=11, xlabelsize=11)

     def _check_bar_alignment(self, df, kind='bar', stacked=False,
-                             subplots=False, align='center',
-                             width=0.5, position=0.5):
+                             subplots=False, align='center', width=0.5,
+                             position=0.5):

         axes = df.plot(kind=kind, stacked=stacked, subplots=subplots,
-                       align=align, width=width, position=position,
-                       grid=True)
+                       align=align, width=width, position=position, grid=True)

         axes = self._flatten_visible(axes)
@@ -2153,7 +2201,8 @@ def _check_bar_alignment(self, df, kind='bar', stacked=False,
                 axis = ax.yaxis
                 ax_min, ax_max = ax.get_ylim()
                 min_edge = min([p.get_y() for p in ax.patches])
-                max_edge = max([p.get_y() + p.get_height() for p in ax.patches])
+                max_edge = max([p.get_y() + p.get_height() for p in ax.patches
+                                ])
             else:
                 raise ValueError
@@ -2173,7 +2222,8 @@ def _check_bar_alignment(self, df, kind='bar', stacked=False,
                 center = p.get_y() + p.get_height() * position
                 edge = p.get_y()
             elif kind == 'barh' and stacked is False:
-                center = p.get_y() + p.get_height() * len(df.columns) * position
+                center = p.get_y() + p.get_height() * len(
+                    df.columns) * position
                 edge = p.get_y()
             else:
                 raise ValueError
@@ -2232,25 +2282,25 @@ def test_bar_edge(self):
         df = DataFrame({'A': [3] * 5, 'B': lrange(5)}, index=lrange(5))

         self._check_bar_alignment(df, kind='bar', stacked=True, align='edge')
-        self._check_bar_alignment(df, kind='bar', stacked=True,
-                                  width=0.9, align='edge')
+        self._check_bar_alignment(df, kind='bar', stacked=True, width=0.9,
+                                  align='edge')
         self._check_bar_alignment(df, kind='barh', stacked=True, align='edge')
-        self._check_bar_alignment(df, kind='barh', stacked=True,
-                                  width=0.9, align='edge')
+        self._check_bar_alignment(df, kind='barh', stacked=True, width=0.9,
+                                  align='edge')

         self._check_bar_alignment(df, kind='bar', stacked=False, align='edge')
-        self._check_bar_alignment(df, kind='bar', stacked=False,
-                                  width=0.9, align='edge')
+        self._check_bar_alignment(df, kind='bar', stacked=False, width=0.9,
+                                  align='edge')
         self._check_bar_alignment(df, kind='barh', stacked=False, align='edge')
-        self._check_bar_alignment(df, kind='barh', stacked=False,
-                                  width=0.9, align='edge')
+        self._check_bar_alignment(df, kind='barh', stacked=False, width=0.9,
+                                  align='edge')

         self._check_bar_alignment(df, kind='bar', subplots=True, align='edge')
-        self._check_bar_alignment(df, kind='bar', subplots=True,
-                                  width=0.9, align='edge')
+        self._check_bar_alignment(df, kind='bar', subplots=True, width=0.9,
+                                  align='edge')
         self._check_bar_alignment(df, kind='barh', subplots=True, align='edge')
-        self._check_bar_alignment(df, kind='barh', subplots=True,
-                                  width=0.9, align='edge')
+        self._check_bar_alignment(df, kind='barh', subplots=True, width=0.9,
+                                  align='edge')

     @slow
     def test_bar_log_no_subplots(self):
@@ -2272,8 +2322,8 @@ def test_bar_log_subplots(self):
         if not self.mpl_le_1_2_1:
             expected = np.hstack((.1, expected, 1e4))

-        ax = DataFrame([Series([200, 300]),
-                        Series([300, 500])]).plot.bar(log=True, subplots=True)
+        ax = DataFrame([Series([200, 300]), Series([300, 500])]).plot.bar(
+            log=True, subplots=True)

         tm.assert_numpy_array_equal(ax[0].yaxis.get_ticklocs(), expected)
         tm.assert_numpy_array_equal(ax[1].yaxis.get_ticklocs(), expected)
@@ -2288,14 +2338,12 @@ def test_boxplot(self):
         ax = _check_plot_works(df.plot.box)
         self._check_text_labels(ax.get_xticklabels(), labels)
         tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(),
-                                    np.arange(1, len(numeric_cols) + 1))
-        self.assertEqual(len(ax.lines),
-                         self.bp_n_objects * len(numeric_cols))
+                                    np.arange(1, len(numeric_cols) + 1))
+        self.assertEqual(len(ax.lines), self.bp_n_objects * len(numeric_cols))

         # different warning on py3
         if not PY3:
-            axes = _check_plot_works(df.plot.box,
-                                     subplots=True, logy=True)
+            axes = _check_plot_works(df.plot.box, subplots=True, logy=True)
             self._check_axes_shape(axes, axes_num=3, layout=(1, 3))
             self._check_ax_scales(axes, yaxis='log')
@@ -2329,8 +2377,8 @@ def test_boxplot_vertical(self):
         self._check_text_labels(ax.get_yticklabels(), labels)
         self.assertEqual(len(ax.lines), self.bp_n_objects * len(numeric_cols))

-        axes = _check_plot_works(df.plot.box, filterwarnings='ignore', subplots=True,
-                                 vert=False, logx=True)
+        axes = _check_plot_works(df.plot.box, filterwarnings='ignore',
+                                 subplots=True, vert=False, logx=True)
         self._check_axes_shape(axes, axes_num=3, layout=(1, 3))
         self._check_ax_scales(axes, xaxis='log')
         for ax, label in zip(axes, labels):
@@ -2367,14 +2415,15 @@ def test_boxplot_subplots_return_type(self):
         # normal style: return_type=None
         result = df.plot.box(subplots=True)
         self.assertIsInstance(result, np.ndarray)
-        self._check_box_return_type(result, None,
-                                    expected_keys=['height', 'weight', 'category'])
+        self._check_box_return_type(result, None, expected_keys=[
+            'height', 'weight', 'category'])

         for t in ['dict', 'axes', 'both']:
             returned = df.plot.box(return_type=t, subplots=True)
-            self._check_box_return_type(returned, t,
-                                        expected_keys=['height', 'weight', 'category'],
-                                        check_ax_title=False)
+            self._check_box_return_type(
+                returned, t,
+                expected_keys=['height', 'weight', 'category'],
+                check_ax_title=False)

     @slow
     def test_kde_df(self):
@@ -2389,7 +2438,8 @@ def test_kde_df(self):
         ax = df.plot(kind='kde', rot=20, fontsize=5)
         self._check_ticks_props(ax, xrot=20, xlabelsize=5, ylabelsize=5)

-        axes = _check_plot_works(df.plot, filterwarnings='ignore', kind='kde', subplots=True)
+        axes = _check_plot_works(df.plot, filterwarnings='ignore', kind='kde',
+                                 subplots=True)
         self._check_axes_shape(axes, axes_num=4, layout=(4, 1))

         axes = df.plot(kind='kde', logy=True, subplots=True)
@@ -2401,7 +2451,7 @@ def test_kde_missing_vals(self):
         _skip_if_no_scipy_gaussian_kde()
         df = DataFrame(np.random.uniform(size=(100, 4)))
         df.loc[0, 0] = np.nan
-        ax = _check_plot_works(df.plot, kind='kde')
+        _check_plot_works(df.plot, kind='kde')

     @slow
     def test_hist_df(self):
@@ -2416,7 +2466,8 @@ def test_hist_df(self):
         expected = [com.pprint_thing(c) for c in df.columns]
         self._check_legend_labels(ax, labels=expected)

-        axes = _check_plot_works(df.plot.hist, filterwarnings='ignore', subplots=True, logy=True)
+        axes = _check_plot_works(df.plot.hist, filterwarnings='ignore',
+                                 subplots=True, logy=True)
         self._check_axes_shape(axes, axes_num=4, layout=(4, 1))
         self._check_ax_scales(axes, yaxis='log')
@@ -2464,7 +2515,7 @@ def test_hist_df_coord(self):
                                          np.array([8, 8, 8, 8, 8])),
                                'C': np.repeat(np.array([1, 2, 3, 4, 5]),
                                               np.array([6, 7, 8, 9, 10]))},
-                              columns=['A', 'B', 'C'])
+                              columns=['A', 'B', 'C'])

         nan_df = DataFrame({'A': np.repeat(np.array([np.nan, 1, 2, 3, 4, 5]),
                                            np.array([3, 10, 9, 8, 7, 6])),
@@ -2476,55 +2527,74 @@ def test_hist_df_coord(self):
         for df in [normal_df, nan_df]:
             ax = df.plot.hist(bins=5)
-            self._check_box_coord(ax.patches[:5], expected_y=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(ax.patches[:5],
+                                  expected_y=np.array([0, 0, 0, 0, 0]),
                                   expected_h=np.array([10, 9, 8, 7, 6]))
-            self._check_box_coord(ax.patches[5:10], expected_y=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(ax.patches[5:10],
+                                  expected_y=np.array([0, 0, 0, 0, 0]),
                                   expected_h=np.array([8, 8, 8, 8, 8]))
-            self._check_box_coord(ax.patches[10:], expected_y=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(ax.patches[10:],
+                                  expected_y=np.array([0, 0, 0, 0, 0]),
                                   expected_h=np.array([6, 7, 8, 9, 10]))

             ax = df.plot.hist(bins=5, stacked=True)
-            self._check_box_coord(ax.patches[:5], expected_y=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(ax.patches[:5],
+                                  expected_y=np.array([0, 0, 0, 0, 0]),
                                   expected_h=np.array([10, 9, 8, 7, 6]))
-            self._check_box_coord(ax.patches[5:10], expected_y=np.array([10, 9, 8, 7, 6]),
+            self._check_box_coord(ax.patches[5:10],
+                                  expected_y=np.array([10, 9, 8, 7, 6]),
                                   expected_h=np.array([8, 8, 8, 8, 8]))
-            self._check_box_coord(ax.patches[10:], expected_y=np.array([18, 17, 16, 15, 14]),
+            self._check_box_coord(ax.patches[10:],
+                                  expected_y=np.array([18, 17, 16, 15, 14]),
                                   expected_h=np.array([6, 7, 8, 9, 10]))

             axes = df.plot.hist(bins=5, stacked=True, subplots=True)
-            self._check_box_coord(axes[0].patches, expected_y=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(axes[0].patches,
+                                  expected_y=np.array([0, 0, 0, 0, 0]),
                                   expected_h=np.array([10, 9, 8, 7, 6]))
-            self._check_box_coord(axes[1].patches, expected_y=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(axes[1].patches,
+                                  expected_y=np.array([0, 0, 0, 0, 0]),
                                   expected_h=np.array([8, 8, 8, 8, 8]))
-            self._check_box_coord(axes[2].patches, expected_y=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(axes[2].patches,
+                                  expected_y=np.array([0, 0, 0, 0, 0]),
                                   expected_h=np.array([6, 7, 8, 9, 10]))

         if self.mpl_ge_1_3_1:

             # horizontal
             ax = df.plot.hist(bins=5, orientation='horizontal')
-            self._check_box_coord(ax.patches[:5], expected_x=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(ax.patches[:5],
+                                  expected_x=np.array([0, 0, 0, 0, 0]),
                                   expected_w=np.array([10, 9, 8, 7, 6]))
-            self._check_box_coord(ax.patches[5:10], expected_x=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(ax.patches[5:10],
+                                  expected_x=np.array([0, 0, 0, 0, 0]),
                                   expected_w=np.array([8, 8, 8, 8, 8]))
-            self._check_box_coord(ax.patches[10:], expected_x=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(ax.patches[10:],
+                                  expected_x=np.array([0, 0, 0, 0, 0]),
                                   expected_w=np.array([6, 7, 8, 9, 10]))

-            ax = df.plot.hist(bins=5, stacked=True, orientation='horizontal')
-            self._check_box_coord(ax.patches[:5], expected_x=np.array([0, 0, 0, 0, 0]),
+            ax = df.plot.hist(bins=5, stacked=True,
+                              orientation='horizontal')
+            self._check_box_coord(ax.patches[:5],
+                                  expected_x=np.array([0, 0, 0, 0, 0]),
                                   expected_w=np.array([10, 9, 8, 7, 6]))
-            self._check_box_coord(ax.patches[5:10], expected_x=np.array([10, 9, 8, 7, 6]),
+            self._check_box_coord(ax.patches[5:10],
+                                  expected_x=np.array([10, 9, 8, 7, 6]),
                                   expected_w=np.array([8, 8, 8, 8, 8]))
-            self._check_box_coord(ax.patches[10:], expected_x=np.array([18, 17, 16, 15, 14]),
-                                  expected_w=np.array([6, 7, 8, 9, 10]))
-
-            axes = df.plot.hist(bins=5, stacked=True,
-                                subplots=True, orientation='horizontal')
-            self._check_box_coord(axes[0].patches, expected_x=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(ax.patches[10:], expected_x=np.array(
+                [18, 17, 16, 15, 14]),
+                expected_w=np.array([6, 7, 8, 9, 10]))
+
+            axes = df.plot.hist(bins=5, stacked=True, subplots=True,
+                                orientation='horizontal')
+            self._check_box_coord(axes[0].patches,
+                                  expected_x=np.array([0, 0, 0, 0, 0]),
                                   expected_w=np.array([10, 9, 8, 7, 6]))
-            self._check_box_coord(axes[1].patches, expected_x=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(axes[1].patches,
+                                  expected_x=np.array([0, 0, 0, 0, 0]),
                                   expected_w=np.array([8, 8, 8, 8, 8]))
-            self._check_box_coord(axes[2].patches, expected_x=np.array([0, 0, 0, 0, 0]),
+            self._check_box_coord(axes[2].patches,
+                                  expected_x=np.array([0, 0, 0, 0, 0]),
                                   expected_w=np.array([6, 7, 8, 9, 10]))

     @slow
@@ -2554,7 +2624,8 @@ def test_df_legend_labels(self):
             self._check_legend_labels(ax, labels=df.columns.union(df3.columns))

             ax = df4.plot(kind=kind, legend='reverse', ax=ax)
-            expected = list(df.columns.union(df3.columns)) + list(reversed(df4.columns))
+            expected = list(df.columns.union(df3.columns)) + list(reversed(
+                df4.columns))
             self._check_legend_labels(ax, labels=expected)

         # Secondary Y
@@ -2563,7 +2634,8 @@ def test_df_legend_labels(self):
         ax = df2.plot(legend=False, ax=ax)
         self._check_legend_labels(ax, labels=['a', 'b (right)', 'c'])
         ax = df3.plot(kind='bar', legend=True, secondary_y='h', ax=ax)
-        self._check_legend_labels(ax, labels=['a', 'b (right)', 'c', 'g', 'h (right)', 'i'])
+        self._check_legend_labels(
+            ax, labels=['a', 'b (right)', 'c', 'g', 'h (right)', 'i'])

         # Time Series
         ind = date_range('1/1/2014', periods=3)
@@ -2575,13 +2647,13 @@ def test_df_legend_labels(self):
         ax = df2.plot(legend=False, ax=ax)
         self._check_legend_labels(ax, labels=['a', 'b (right)', 'c'])
         ax = df3.plot(legend=True, ax=ax)
-        self._check_legend_labels(ax, labels=['a', 'b (right)', 'c', 'g', 'h', 'i'])
+        self._check_legend_labels(
+            ax, labels=['a', 'b (right)', 'c', 'g', 'h', 'i'])

         # scatter
         ax = df.plot.scatter(x='a', y='b', label='data1')
         self._check_legend_labels(ax, labels=['data1'])
-        ax = df2.plot.scatter(x='d', y='e', legend=False,
-                              label='data2', ax=ax)
+        ax = df2.plot.scatter(x='d', y='e', legend=False, label='data2', ax=ax)
         self._check_legend_labels(ax, labels=['data1'])
         ax = df3.plot.scatter(x='g', y='h', label='data3', ax=ax)
         self._check_legend_labels(ax, labels=['data1', 'data3'])
@@ -2596,9 +2668,8 @@ def test_df_legend_labels(self):
         self._check_legend_labels(ax, labels=['LABEL_b'])
         self._check_text_labels(ax.xaxis.get_label(), 'a')
         ax = df5.plot(y='c', label='LABEL_c', ax=ax)
-        self._check_legend_labels(ax, labels=['LABEL_b','LABEL_c'])
-        self.assertTrue(df5.columns.tolist() == ['b','c'])
-
+        self._check_legend_labels(ax, labels=['LABEL_b', 'LABEL_c'])
+        self.assertTrue(df5.columns.tolist() == ['b', 'c'])

     def test_legend_name(self):
         multi = DataFrame(randn(4, 4),
@@ -2642,10 +2713,10 @@ def test_style_by_column(self):
         fig = plt.gcf()

         df = DataFrame(randn(100, 3))
-        for markers in [{0: '^', 1: '+', 2: 'o'},
-                        {0: '^', 1: '+'},
-                        ['^', '+', 'o'],
-                        ['^', '+']]:
+        for markers in [{0: '^',
+                         1: '+',
+                         2: 'o'}, {0: '^',
+                                   1: '+'}, ['^', '+', 'o'], ['^', '+']]:
             fig.clf()
             fig.add_subplot(111)
             ax = df.plot(style=markers)
@@ -2659,8 +2730,7 @@ def test_line_label_none(self):
         self.assertEqual(ax.get_legend(), None)

         ax = s.plot(legend=True)
-        self.assertEqual(ax.get_legend().get_texts()[0].get_text(),
-                         'None')
+        self.assertEqual(ax.get_legend().get_texts()[0].get_text(), 'None')

     @slow
     def test_line_colors(self):
@@ -2855,19 +2925,19 @@ def test_hist_colors(self):
         tm.close()

         custom_colors = 'rgcby'
-        ax = df.plot.hist( color=custom_colors)
+        ax = df.plot.hist(color=custom_colors)
         self._check_colors(ax.patches[::10], facecolors=custom_colors)
         tm.close()

         from matplotlib import cm
         # Test str -> colormap functionality
-        ax = df.plot.hist( colormap='jet')
+        ax = df.plot.hist(colormap='jet')
         rgba_colors = lmap(cm.jet, np.linspace(0, 1, 5))
         self._check_colors(ax.patches[::10], facecolors=rgba_colors)
         tm.close()

         # Test colormap functionality
-        ax = df.plot.hist( colormap=cm.jet)
+        ax = df.plot.hist(colormap=cm.jet)
         rgba_colors = lmap(cm.jet, np.linspace(0, 1, 5))
         self._check_colors(ax.patches[::10], facecolors=rgba_colors)
         tm.close()
@@ -2944,7 +3014,8 @@ def test_kde_colors_and_styles_subplots(self):

         # make color a list if plotting one column frame
         # handles cases like df.plot(color='DodgerBlue')
-        axes = df.ix[:, [0]].plot(kind='kde', color='DodgerBlue', subplots=True)
+        axes = df.ix[:, [0]].plot(kind='kde', color='DodgerBlue',
+                                  subplots=True)
        self._check_colors(axes[0].lines, linecolors=['DodgerBlue'])

         # single character style
@@ -2962,19 +3033,25 @@ def test_kde_colors_and_styles_subplots(self):

     @slow
     def test_boxplot_colors(self):
-
-        def _check_colors(bp, box_c, whiskers_c, medians_c, caps_c='k', fliers_c='b'):
-            self._check_colors(bp['boxes'], linecolors=[box_c] * len(bp['boxes']))
-            self._check_colors(bp['whiskers'], linecolors=[whiskers_c] * len(bp['whiskers']))
-            self._check_colors(bp['medians'], linecolors=[medians_c] * len(bp['medians']))
-            self._check_colors(bp['fliers'], linecolors=[fliers_c] * len(bp['fliers']))
-            self._check_colors(bp['caps'], linecolors=[caps_c] * len(bp['caps']))
+        def _check_colors(bp, box_c, whiskers_c, medians_c, caps_c='k',
+                          fliers_c='b'):
+            self._check_colors(bp['boxes'],
+                               linecolors=[box_c] * len(bp['boxes']))
+            self._check_colors(bp['whiskers'],
+                               linecolors=[whiskers_c] * len(bp['whiskers']))
+            self._check_colors(bp['medians'],
+                               linecolors=[medians_c] * len(bp['medians']))
+            self._check_colors(bp['fliers'],
+                               linecolors=[fliers_c] * len(bp['fliers']))
+            self._check_colors(bp['caps'],
+                               linecolors=[caps_c] * len(bp['caps']))

         default_colors = self._maybe_unpack_cycler(self.plt.rcParams)

         df = DataFrame(randn(5, 5))
         bp = df.plot.box(return_type='dict')
-        _check_colors(bp, default_colors[0], default_colors[0], default_colors[2])
+        _check_colors(bp, default_colors[0], default_colors[0],
+                      default_colors[2])
         tm.close()

         dict_colors = dict(boxes='#572923', whiskers='#982042',
@@ -3009,7 +3086,8 @@ def _check_colors(bp, box_c, whiskers_c, medians_c, caps_c='k', fliers_c='b'):

         # tuple is also applied to all artists except fliers
         bp = df.plot.box(color=(0, 1, 0), sym='#123456', return_type='dict')
-        _check_colors(bp, (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0), '#123456')
+        _check_colors(bp, (0, 1, 0), (0, 1, 0), (0, 1, 0),
+                      (0, 1, 0), '#123456')

         with tm.assertRaises(ValueError):
             # Color contains invalid key results in ValueError
@@ -3096,7 +3174,8 @@ def test_hexbin_basic(self):
         # GH 6951
         axes = df.plot.hexbin(x='A', y='B', subplots=True)
-        # hexbin should have 2 axes in the figure, 1 for plotting and another is colorbar
+        # hexbin should have 2 axes in the figure, 1 for plotting and another
+        # is colorbar
         self.assertEqual(len(axes[0].figure.axes), 2)
         # return value is single axes
         self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
@@ -3138,8 +3217,7 @@ def test_allow_cmap(self):
         self.assertEqual(ax.collections[0].cmap.name, 'YlGn')

         with tm.assertRaises(TypeError):
-            df.plot.hexbin(x='A', y='B', cmap='YlGn',
-                           colormap='BuGn')
+            df.plot.hexbin(x='A', y='B', cmap='YlGn', colormap='BuGn')

     @slow
     def test_pie_df(self):
@@ -3154,7 +3232,8 @@ def test_pie_df(self):
         ax = _check_plot_works(df.plot.pie, y=2)
         self._check_text_labels(ax.texts, df.index)

-        axes = _check_plot_works(df.plot.pie, filterwarnings='ignore', subplots=True)
+        axes = _check_plot_works(df.plot.pie, filterwarnings='ignore',
+                                 subplots=True)
         self.assertEqual(len(axes), len(df.columns))
         for ax in axes:
             self._check_text_labels(ax.texts, df.index)
@@ -3163,8 +3242,9 @@ def test_pie_df(self):
         labels = ['A', 'B', 'C', 'D', 'E']
         color_args = ['r', 'g', 'b', 'c', 'm']
-        axes = _check_plot_works(df.plot.pie, filterwarnings='ignore', subplots=True,
-                                 labels=labels, colors=color_args)
+        axes = _check_plot_works(df.plot.pie, filterwarnings='ignore',
+                                 subplots=True, labels=labels,
+                                 colors=color_args)
         self.assertEqual(len(axes), len(df.columns))

         for ax in axes:
@@ -3189,13 +3269,13 @@ def test_pie_df_nan(self):
             # see https://github.com/pydata/pandas/issues/8390
             self.assertEqual([x.get_text() for x in
                               ax.get_legend().get_texts()],
-                             base_expected[:i] + base_expected[i+1:])
+                             base_expected[:i] + base_expected[i + 1:])

     @slow
     def test_errorbar_plot(self):
         d = {'x': np.arange(12), 'y': np.arange(12, 0, -1)}
         df = DataFrame(d)
-        d_err = {'x': np.ones(12)*0.2, 'y': np.ones(12)*0.4}
+        d_err = {'x': np.ones(12) * 0.2, 'y': np.ones(12) * 0.4}
         df_err = DataFrame(d_err)

         # check line plots
@@ -3212,23 +3292,27 @@ def test_errorbar_plot(self):
             self._check_has_errorbars(ax, xerr=0, yerr=2)
             ax = _check_plot_works(df.plot, yerr=d_err, kind=kind)
             self._check_has_errorbars(ax, xerr=0, yerr=2)
-            ax = _check_plot_works(df.plot, yerr=df_err, xerr=df_err, kind=kind)
+            ax = _check_plot_works(df.plot, yerr=df_err, xerr=df_err,
+                                   kind=kind)
             self._check_has_errorbars(ax, xerr=2, yerr=2)
-            ax = _check_plot_works(df.plot, yerr=df_err['x'], xerr=df_err['x'], kind=kind)
+            ax = _check_plot_works(df.plot, yerr=df_err['x'], xerr=df_err['x'],
+                                   kind=kind)
             self._check_has_errorbars(ax, xerr=2, yerr=2)
             ax = _check_plot_works(df.plot, xerr=0.2, yerr=0.2, kind=kind)
             self._check_has_errorbars(ax, xerr=2, yerr=2)
-            axes = _check_plot_works(df.plot, filterwarnings='ignore', yerr=df_err,
-                                     xerr=df_err, subplots=True, kind=kind)
+            axes = _check_plot_works(df.plot, filterwarnings='ignore',
+                                     yerr=df_err, xerr=df_err, subplots=True,
+                                     kind=kind)
             self._check_has_errorbars(axes, xerr=1, yerr=1)

-        ax = _check_plot_works((df+1).plot, yerr=df_err, xerr=df_err, kind='bar', log=True)
+        ax = _check_plot_works((df + 1).plot, yerr=df_err,
+                               xerr=df_err, kind='bar', log=True)
         self._check_has_errorbars(ax, xerr=2, yerr=2)

         # yerr is raw error values
-        ax = _check_plot_works(df['y'].plot, yerr=np.ones(12)*0.4)
+        ax = _check_plot_works(df['y'].plot, yerr=np.ones(12) * 0.4)
         self._check_has_errorbars(ax, xerr=0, yerr=1)
-        ax = _check_plot_works(df.plot, yerr=np.ones((2, 12))*0.4)
+        ax = _check_plot_works(df.plot, yerr=np.ones((2, 12)) * 0.4)
         self._check_has_errorbars(ax, xerr=0, yerr=2)

         # yerr is iterator
@@ -3239,7 +3323,7 @@ def test_errorbar_plot(self):
         # yerr is column name
         for yerr in ['yerr', u('誤差')]:
             s_df = df.copy()
-            s_df[yerr] = np.ones(12)*0.2
+            s_df[yerr] = np.ones(12) * 0.2
             ax = _check_plot_works(s_df.plot, yerr=yerr)
             self._check_has_errorbars(ax, xerr=0, yerr=2)
             ax = _check_plot_works(s_df.plot, y='y', x='x', yerr=yerr)
@@ -3248,7 +3332,7 @@ def test_errorbar_plot(self):
         with tm.assertRaises(ValueError):
             df.plot(yerr=np.random.randn(11))

-        df_err = DataFrame({'x': ['zzz']*12, 'y': ['zzz']*12})
+        df_err = DataFrame({'x': ['zzz'] * 12, 'y': ['zzz'] * 12})
         with tm.assertRaises((ValueError, TypeError)):
             df.plot(yerr=df_err)
@@ -3279,7 +3363,7 @@ def test_errorbar_with_partial_columns(self):
         d = {'x': np.arange(12), 'y': np.arange(12, 0, -1)}
         df = DataFrame(d)

-        d_err = {'x': np.ones(12)*0.2, 'z': np.ones(12)*0.4}
+        d_err = {'x': np.ones(12) * 0.2, 'z': np.ones(12) * 0.4}
         df_err = DataFrame(d_err)
         for err in [d_err, df_err]:
             ax = _check_plot_works(df.plot, yerr=err)
@@ -3289,7 +3373,7 @@ def test_errorbar_with_partial_columns(self):

     def test_errorbar_timeseries(self):
         d = {'x': np.arange(12), 'y': np.arange(12, 0, -1)}
-        d_err = {'x': np.ones(12)*0.2, 'y': np.ones(12)*0.4}
+        d_err = {'x': np.ones(12) * 0.2, 'y': np.ones(12) * 0.4}

         # check time-series plots
         ix = date_range('1/1/2000', '1/1/2001', freq='M')
@@ -3302,14 +3386,15 @@ def test_errorbar_timeseries(self):
             self._check_has_errorbars(ax, xerr=0, yerr=2)
             ax = _check_plot_works(tdf.plot, yerr=d_err, kind=kind)
             self._check_has_errorbars(ax, xerr=0, yerr=2)
-            ax = _check_plot_works(tdf.plot, y='y', yerr=tdf_err['x'], kind=kind)
+            ax = _check_plot_works(tdf.plot, y='y', yerr=tdf_err['x'],
+                                   kind=kind)
             self._check_has_errorbars(ax, xerr=0, yerr=1)
             ax = _check_plot_works(tdf.plot, y='y', yerr='x', kind=kind)
             self._check_has_errorbars(ax, xerr=0, yerr=1)
             ax = _check_plot_works(tdf.plot, yerr=tdf_err, kind=kind)
             self._check_has_errorbars(ax, xerr=0, yerr=2)
-            axes = _check_plot_works(tdf.plot, filterwarnings='ignore', kind=kind,
-                                     yerr=tdf_err, subplots=True)
+            axes = _check_plot_works(tdf.plot, filterwarnings='ignore',
+                                     kind=kind, yerr=tdf_err, subplots=True)
             self._check_has_errorbars(axes, xerr=0, yerr=1)

     def test_errorbar_asymmetrical(self):
@@ -3320,13 +3405,13 @@ def test_errorbar_asymmetrical(self):
         data = np.random.randn(5, 3)
         df = DataFrame(data)

-        ax = df.plot(yerr=err, xerr=err/2)
+        ax = df.plot(yerr=err, xerr=err / 2)

-        self.assertEqual(ax.lines[7].get_ydata()[0], data[0,1]-err[1,0,0])
-        self.assertEqual(ax.lines[8].get_ydata()[0], data[0,1]+err[1,1,0])
+        self.assertEqual(ax.lines[7].get_ydata()[0], data[0, 1] - err[1, 0, 0])
+        self.assertEqual(ax.lines[8].get_ydata()[0], data[0, 1] + err[1, 1, 0])

-        self.assertEqual(ax.lines[5].get_xdata()[0], -err[1,0,0]/2)
-        self.assertEqual(ax.lines[6].get_xdata()[0], err[1,1,0]/2)
+        self.assertEqual(ax.lines[5].get_xdata()[0], -err[1, 0, 0] / 2)
+        self.assertEqual(ax.lines[6].get_xdata()[0], err[1, 1, 0] / 2)

         with tm.assertRaises(ValueError):
             df.plot(yerr=err.T)
@@ -3345,7 +3430,8 @@ def test_table(self):
         self.assertTrue(len(ax.tables) == 1)

     def test_errorbar_scatter(self):
-        df = DataFrame(np.random.randn(5, 2), index=range(5), columns=['x', 'y'])
+        df = DataFrame(
+            np.random.randn(5, 2), index=range(5), columns=['x', 'y'])
         df_err = DataFrame(np.random.randn(5, 2) / 5,
                            index=range(5), columns=['x', 'y'])
@@ -3356,16 +3442,18 @@ def test_errorbar_scatter(self):
         ax = _check_plot_works(df.plot.scatter, x='x', y='y', yerr=df_err)
         self._check_has_errorbars(ax, xerr=0, yerr=1)
-        ax = _check_plot_works(df.plot.scatter, x='x', y='y',
-                               xerr=df_err, yerr=df_err)
+        ax = _check_plot_works(df.plot.scatter, x='x', y='y', xerr=df_err,
+                               yerr=df_err)
         self._check_has_errorbars(ax, xerr=1, yerr=1)

         def _check_errorbar_color(containers, expected, has_err='has_xerr'):
-            errs = [c.lines[1][0] for c in ax.containers if getattr(c, has_err, False)]
+            errs = [c.lines[1][0]
+                    for c in ax.containers if getattr(c, has_err, False)]
             self._check_colors(errs, linecolors=[expected] * len(errs))

         # GH 8081
-        df = DataFrame(np.random.randn(10, 5), columns=['a', 'b', 'c', 'd', 'e'])
+        df = DataFrame(
+            np.random.randn(10, 5), columns=['a', 'b', 'c', 'd', 'e'])
         ax = df.plot.scatter(x='a', y='b', xerr='d', yerr='e', c='red')
         self._check_has_errorbars(ax, xerr=1, yerr=1)
         _check_errorbar_color(ax.containers, 'red', has_err='has_xerr')
@@ -3377,9 +3465,9 @@ def _check_errorbar_color(containers, expected, has_err='has_xerr'):

     @slow
     def test_sharex_and_ax(self):
-        # https://github.com/pydata/pandas/issues/9737
-        # using gridspec, the axis in fig.get_axis() are sorted differently than pandas expected
-        # them, so make sure that only the right ones are removed
+        # https://github.com/pydata/pandas/issues/9737 using gridspec, the axis
+        # in fig.get_axis() are sorted differently than pandas expected them,
+        # so make sure that only the right ones are removed
         import matplotlib.pyplot as plt
         plt.close('all')
         gs, axes = _generate_4_axes_via_gridspec()
@@ -3395,10 +3483,12 @@ def _check(axes):
                 self._check_visible(ax.get_yticklabels(), visible=True)
             for ax in [axes[0], axes[2]]:
                 self._check_visible(ax.get_xticklabels(), visible=False)
-                self._check_visible(ax.get_xticklabels(minor=True), visible=False)
+                self._check_visible(
+                    ax.get_xticklabels(minor=True), visible=False)
             for ax in [axes[1], axes[3]]:
                 self._check_visible(ax.get_xticklabels(), visible=True)
-                self._check_visible(ax.get_xticklabels(minor=True), visible=True)
+                self._check_visible(
+                    ax.get_xticklabels(minor=True), visible=True)

         for ax in axes:
             df.plot(x="a", y="b", title="title", ax=ax, sharex=True)
@@ -3427,9 +3517,9 @@ def _check(axes):

     @slow
     def test_sharey_and_ax(self):
-        # https://github.com/pydata/pandas/issues/9737
-        # using gridspec, the axis in fig.get_axis() are sorted differently than pandas expected
-        # them, so make sure that only the right ones are removed
+        # https://github.com/pydata/pandas/issues/9737 using gridspec, the axis
+        # in fig.get_axis() are sorted differently than pandas expected them,
+        # so make sure that only the right ones are removed
        import matplotlib.pyplot as plt

         gs, axes = _generate_4_axes_via_gridspec()
@@ -3443,7 +3533,8 @@ def _check(axes):
             for ax in axes:
                 self.assertEqual(len(ax.lines), 1)
                 self._check_visible(ax.get_xticklabels(), visible=True)
-                self._check_visible(ax.get_xticklabels(minor=True), visible=True)
+                self._check_visible(
+                    ax.get_xticklabels(minor=True), visible=True)
             for ax in [axes[0], axes[1]]:
                 self._check_visible(ax.get_yticklabels(), visible=True)
             for ax in [axes[2], axes[3]]:
@@ -3586,7 +3677,8 @@ def _get_horizontal_grid():
         for ax in [ax1, ax2]:
             self._check_visible(ax.get_yticklabels(), visible=True)
             self._check_visible(ax.get_xticklabels(), visible=True)
-            self._check_visible(ax.get_xticklabels(minor=True), visible=True)
+            self._check_visible(
+                ax.get_xticklabels(minor=True), visible=True)
         tm.close()

         # subplots=True
@@ -3597,14 +3689,15 @@ def _get_horizontal_grid():
         for ax in axes:
             self._check_visible(ax.get_yticklabels(), visible=True)
             self._check_visible(ax.get_xticklabels(), visible=True)
-            self._check_visible(ax.get_xticklabels(minor=True), visible=True)
+            self._check_visible(
+                ax.get_xticklabels(minor=True), visible=True)
         tm.close()

         # vertical / subplots / sharex=True / sharey=True
         ax1, ax2 = _get_vertical_grid()
         with tm.assert_produces_warning(UserWarning):
-            axes = df.plot(subplots=True, ax=[ax1, ax2],
-                           sharex=True, sharey=True)
+            axes = df.plot(subplots=True, ax=[ax1, ax2], sharex=True,
+                           sharey=True)
         self.assertEqual(len(axes[0].lines), 1)
         self.assertEqual(len(axes[1].lines), 1)
         for ax in [ax1, ax2]:
@@ -3620,8 +3713,8 @@ def _get_horizontal_grid():
         # horizontal / subplots / sharex=True / sharey=True
         ax1, ax2 = _get_horizontal_grid()
         with tm.assert_produces_warning(UserWarning):
-            axes = df.plot(subplots=True, ax=[ax1, ax2],
-                           sharex=True, sharey=True)
+            axes = df.plot(subplots=True, ax=[ax1, ax2], sharex=True,
+                           sharey=True)
         self.assertEqual(len(axes[0].lines), 1)
         self.assertEqual(len(axes[1].lines), 1)
         self._check_visible(axes[0].get_yticklabels(), visible=True)
@@ -3635,7 +3728,7 @@ def _get_horizontal_grid():

         # boxed
         def _get_boxed_grid():
-            gs = gridspec.GridSpec(3,3)
+            gs = gridspec.GridSpec(3, 3)
             fig = plt.figure()
             ax1 = fig.add_subplot(gs[:2, :2])
             ax2 = fig.add_subplot(gs[:2, 2])
@@ -3645,7 +3738,7 @@ def _get_boxed_grid():
         axes = _get_boxed_grid()
         df = DataFrame(np.random.randn(10, 4),
-                      index=ts.index, columns=list('ABCD'))
+                       index=ts.index, columns=list('ABCD'))
         axes = df.plot(subplots=True, ax=axes)
         for ax in axes:
             self.assertEqual(len(ax.lines), 1)
@@ -3661,14 +3754,14 @@ def _get_boxed_grid():
         axes = df.plot(subplots=True, ax=axes, sharex=True, sharey=True)
         for ax in axes:
             self.assertEqual(len(ax.lines), 1)
-        for ax in [axes[0], axes[2]]: # left column
+        for ax in [axes[0], axes[2]]:  # left column
             self._check_visible(ax.get_yticklabels(), visible=True)
-        for ax in [axes[1], axes[3]]: # right column
+        for ax in [axes[1], axes[3]]:  # right column
             self._check_visible(ax.get_yticklabels(), visible=False)
-        for ax in [axes[0], axes[1]]: # top row
+        for ax in [axes[0], axes[1]]:  # top row
             self._check_visible(ax.get_xticklabels(), visible=False)
             self._check_visible(ax.get_xticklabels(minor=True), visible=False)
-        for ax in [axes[2], axes[3]]: # bottom row
+        for ax in [axes[2], axes[3]]:  # bottom row
             self._check_visible(ax.get_xticklabels(), visible=True)
             self._check_visible(ax.get_xticklabels(minor=True), visible=True)
         tm.close()
@@ -3676,8 +3769,9 @@ def _get_boxed_grid():
     @slow
     def test_df_grid_settings(self):
         # Make sure plot defaults to rcParams['axes.grid'] setting, GH 9792
-        self._check_grid_settings(DataFrame({'a':[1,2,3],'b':[2,3,4]}),
-                                  plotting._dataframe_kinds, kws={'x':'a','y':'b'})
+        self._check_grid_settings(
+            DataFrame({'a': [1, 2, 3], 'b': [2, 3, 4]}),
+            plotting._dataframe_kinds, kws={'x': 'a', 'y': 'b'})

     def test_option_mpl_style(self):
         set_option('display.mpl_style', 'default')
@@ -3705,7 +3799,7 @@ def test_plain_axes(self):
         # a new ax is created for the colorbar -> also multiples axes (GH11520)
         df = DataFrame({'a': randn(8), 'b': randn(8)})
         fig = self.plt.figure()
-        ax = fig.add_axes((0,0,1,1))
+        ax = fig.add_axes((0, 0, 1, 1))
         df.plot(kind='scatter', ax=ax, x='a', y='b', c='a', cmap='hsv')

         # other examples
@@ -3726,19 +3820,22 @@ def test_passed_bar_colors(self):
         import matplotlib as mpl
         color_tuples = [(0.9, 0, 0, 1), (0, 0.9, 0, 1), (0, 0, 0.9, 1)]
         colormap = mpl.colors.ListedColormap(color_tuples)
-        barplot = pd.DataFrame([[1,2,3]]).plot(kind="bar", cmap=colormap)
-        self.assertEqual(color_tuples, [c.get_facecolor() for c in barplot.patches])
+        barplot = pd.DataFrame([[1, 2, 3]]).plot(kind="bar", cmap=colormap)
+        self.assertEqual(color_tuples, [c.get_facecolor()
+                                        for c in barplot.patches])

     def test_rcParams_bar_colors(self):
         import matplotlib as mpl
         color_tuples = [(0.9, 0, 0, 1), (0, 0.9, 0, 1), (0, 0, 0.9, 1)]
-        try: # mpl 1.5
-            with mpl.rc_context(rc={'axes.prop_cycle': mpl.cycler("color", color_tuples)}):
-                barplot = pd.DataFrame([[1,2,3]]).plot(kind="bar")
-        except (AttributeError, KeyError): # mpl 1.4
+        try:  # mpl 1.5
+            with mpl.rc_context(
+                    rc={'axes.prop_cycle': mpl.cycler("color", color_tuples)}):
+                barplot = pd.DataFrame([[1, 2, 3]]).plot(kind="bar")
+        except (AttributeError, KeyError):  # mpl 1.4
             with mpl.rc_context(rc={'axes.color_cycle': color_tuples}):
-                barplot = pd.DataFrame([[1,2,3]]).plot(kind="bar")
-        self.assertEqual(color_tuples, [c.get_facecolor() for c in barplot.patches])
+                barplot = pd.DataFrame([[1, 2, 3]]).plot(kind="bar")
+        self.assertEqual(color_tuples, [c.get_facecolor()
+                                        for c in barplot.patches])


 @tm.mplskip
@@ -3755,15 +3852,15 @@ def test_series_groupby_plotting_nominally_works(self):
         tm.close()
         height.groupby(gender).hist()
         tm.close()
-        #Regression test for GH8733
+        # Regression test for GH8733
         height.groupby(gender).plot(alpha=0.5)
         tm.close()

     def test_plotting_with_float_index_works(self):
         # GH 7025
-        df = DataFrame({'def': [1,1,1,2,2,2,3,3,3],
+        df = DataFrame({'def': [1, 1, 1, 2, 2, 2, 3, 3, 3],
                         'val': np.random.randn(9)},
-                       index=[1.0,2.0,3.0,1.0,2.0,3.0,1.0,2.0,3.0])
+                       index=[1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0])

         df.groupby('def')['val'].plot()
         tm.close()
@@ -3773,7 +3870,9 @@ def test_plotting_with_float_index_works(self):
     def test_hist_single_row(self):
         # GH10214
         bins = np.arange(80, 100 + 2, 1)
-        df = DataFrame({"Name": ["AAA", "BBB"], "ByCol": [1, 2], "Mark": [85, 89]})
+        df = DataFrame({"Name": ["AAA", "BBB"],
+                        "ByCol": [1, 2],
+                        "Mark": [85, 89]})
         df["Mark"].hist(by=df["ByCol"], bins=bins)
         df = DataFrame({"Name": ["AAA"], "ByCol": [1], "Mark": [85]})
         df["Mark"].hist(by=df["ByCol"], bins=bins)
@@ -3812,9 +3911,9 @@ def assert_is_valid_plot_return_object(objs):
                            ''.format(el.__class__.__name__))
     else:
         assert isinstance(objs, (plt.Artist, tuple, dict)), \
-            ('objs is neither an ndarray of Artist instances nor a '
-             'single Artist instance, tuple, or dict, "objs" is a {0!r} '
-             ''.format(objs.__class__.__name__))
+            ('objs is neither an ndarray of Artist instances nor a '
+             'single Artist instance, tuple, or dict, "objs" is a {0!r} '
+             ''.format(objs.__class__.__name__))


 def _check_plot_works(f, filterwarnings='always', **kwargs):
@@ -3830,7 +3929,7 @@ def _check_plot_works(f, filterwarnings='always', **kwargs):

         plt.clf()

-        ax = kwargs.get('ax', fig.add_subplot(211))
+        ax = kwargs.get('ax', fig.add_subplot(211))  # noqa
         ret = f(**kwargs)

         assert_is_valid_plot_return_object(ret)
@@ -3850,16 +3949,17 @@ def _check_plot_works(f, filterwarnings='always', **kwargs):

     return ret

+
 def _generate_4_axes_via_gridspec():
     import matplotlib.pyplot as plt
     import matplotlib as mpl
-    import matplotlib.gridspec
+    import matplotlib.gridspec  # noqa

     gs = mpl.gridspec.GridSpec(2, 2)
-    ax_tl = plt.subplot(gs[0,0])
-    ax_ll = plt.subplot(gs[1,0])
-    ax_tr = plt.subplot(gs[0,1])
-    ax_lr = plt.subplot(gs[1,1])
+    ax_tl = plt.subplot(gs[0, 0])
+    ax_ll = plt.subplot(gs[1, 0])
+    ax_tr = plt.subplot(gs[0, 1])
+    ax_lr = plt.subplot(gs[1, 1])

     return gs, [ax_tl, ax_ll, ax_tr, ax_lr]
diff --git a/pandas/tests/test_graphics_others.py b/pandas/tests/test_graphics_others.py
index 0fb1864f998b2..7301edcd52c3c 100644
--- a/pandas/tests/test_graphics_others.py
+++ b/pandas/tests/test_graphics_others.py
@@ -8,24 +8,14 @@
 import warnings
 from distutils.version import LooseVersion

-from datetime import datetime, date
-
-from pandas import (Series, DataFrame, MultiIndex, PeriodIndex, date_range,
-                    bdate_range)
-from pandas.compat import (range, lrange, StringIO, lmap, lzip, u, zip,
-                           iteritems, OrderedDict, PY3)
-from pandas.util.decorators import cache_readonly
-import pandas.core.common as com
+from pandas import Series, DataFrame, MultiIndex
+from pandas.compat import range, lmap, lzip
 import pandas.util.testing as tm
-from pandas.util.testing import ensure_clean
-from pandas.core.config import set_option
-
 import numpy as np
 from numpy import random
-from numpy.random import rand, randn
+from numpy.random import randn

-from numpy.testing import assert_array_equal, assert_allclose
 from numpy.testing.decorators import slow

 import pandas.tools.plotting as plotting
@@ -115,20 +105,25 @@ def test_hist_layout_with_by(self):
         axes = _check_plot_works(df.height.hist, by=df.category, layout=(4, 1))
         self._check_axes_shape(axes, axes_num=4, layout=(4, 1))

-        axes = _check_plot_works(df.height.hist, by=df.category, layout=(2, -1))
+        axes = _check_plot_works(
+            df.height.hist, by=df.category, layout=(2, -1))
         self._check_axes_shape(axes, axes_num=4, layout=(2, 2))

-        axes = _check_plot_works(df.height.hist, by=df.category, layout=(3, -1))
+        axes = _check_plot_works(
+            df.height.hist, by=df.category, layout=(3, -1))
         self._check_axes_shape(axes, axes_num=4, layout=(3, 2))

-        axes = _check_plot_works(df.height.hist, by=df.category, layout=(-1, 4))
+        axes = _check_plot_works(
+            df.height.hist, by=df.category, layout=(-1, 4))
         self._check_axes_shape(axes, axes_num=4, layout=(1, 4))

-        axes = _check_plot_works(df.height.hist, by=df.classroom, layout=(2, 2))
+        axes = _check_plot_works(
+            df.height.hist, by=df.classroom, layout=(2, 2))
         self._check_axes_shape(axes, axes_num=3, layout=(2, 2))

         axes = df.height.hist(by=df.category, layout=(4, 2), figsize=(12, 7))
-        self._check_axes_shape(axes, axes_num=4, layout=(4, 2), figsize=(12, 7))
+        self._check_axes_shape(
+            axes, axes_num=4, layout=(4, 2), figsize=(12, 7))

     @slow
     def test_hist_no_overlap(self):
@@ -146,7 +141,7 @@ def test_hist_no_overlap(self):
     @slow
     def test_hist_by_no_extra_plots(self):
         df = self.hist_df
-        axes = df.height.hist(by=df.gender)
+        axes = df.height.hist(by=df.gender)  # noqa
         self.assertEqual(len(self.plt.get_fignums()), 1)

     @slow
@@ -188,9 +183,10 @@ def setUp(self):
         mpl.rcdefaults()

         self.tdf = tm.makeTimeDataFrame()
-        self.hexbin_df = DataFrame({"A": np.random.uniform(size=20),
-                                    "B": np.random.uniform(size=20),
-                                    "C": np.arange(20) + np.random.uniform(size=20)})
+        self.hexbin_df = DataFrame({
+            "A": np.random.uniform(size=20),
+            "B": np.random.uniform(size=20),
+            "C": np.arange(20) + np.random.uniform(size=20)})

         from pandas import read_csv
         path = os.path.join(curpath(), 'data', 'iris.csv')
@@ -205,7 +201,8 @@ def test_boxplot_legacy(self):
         df['indic2'] = ['foo', 'bar', 'foo'] * 2

         _check_plot_works(df.boxplot, return_type='dict')
-        _check_plot_works(df.boxplot, column=['one', 'two'], return_type='dict')
+        _check_plot_works(df.boxplot, column=[
+            'one', 'two'], return_type='dict')
         _check_plot_works(df.boxplot, column=['one', 'two'], by='indic')
         _check_plot_works(df.boxplot, column='one', by=['indic', 'indic2'])
         _check_plot_works(df.boxplot, by='indic')
@@ -231,10 +228,12 @@ def test_boxplot_legacy(self):

         # Multiple columns with an ax argument should use same figure
         fig, ax = self.plt.subplots()
-        axes = df.boxplot(column=['Col1', 'Col2'], by='X', ax=ax, return_type='axes')
+        axes = df.boxplot(column=['Col1', 'Col2'],
+                          by='X', ax=ax, return_type='axes')
         self.assertIs(axes['Col1'].get_figure(), fig)

-        # When by is None, check that all relevant lines are present in the dict
+        # When by is None, check that all relevant lines are present in the
+        # dict
         fig, ax = self.plt.subplots()
         d = df.boxplot(ax=ax, return_type='dict')
         lines = list(itertools.chain.from_iterable(d.values()))
@@ -243,7 +242,7 @@ def test_boxplot_legacy(self):
     @slow
     def test_boxplot_return_type_legacy(self):
         # API change in https://github.com/pydata/pandas/pull/7096
-        import matplotlib as mpl
+        import matplotlib as mpl  # noqa

         df = DataFrame(randn(6, 4),
                        index=list(string.ascii_letters[:6]),
@@ -426,21 +425,23 @@ def test_scatter_matrix_axis(self):
         with tm.RNGContext(42):
             df = DataFrame(randn(100, 3))

-        axes = _check_plot_works(scatter_matrix, filterwarnings='always', frame=df,
-                                 range_padding=.1)
+        axes = _check_plot_works(scatter_matrix, filterwarnings='always',
+                                 frame=df, range_padding=.1)
         axes0_labels = axes[0][0].yaxis.get_majorticklabels()

         # GH 5662
         expected = ['-2', '-1', '0', '1', '2']
         self._check_text_labels(axes0_labels, expected)
-        self._check_ticks_props(axes, xlabelsize=8, xrot=90, ylabelsize=8, yrot=0)
+        self._check_ticks_props(
+            axes, xlabelsize=8, xrot=90, ylabelsize=8, yrot=0)

         df[0] = ((df[0] - 2) / 3)
-        axes = _check_plot_works(scatter_matrix, filterwarnings='always', frame=df,
-                                 range_padding=.1)
+        axes = _check_plot_works(scatter_matrix, filterwarnings='always',
+                                 frame=df, range_padding=.1)
         axes0_labels = axes[0][0].yaxis.get_majorticklabels()
         expected = ['-1.2', '-1.0', '-0.8', '-0.6', '-0.4', '-0.2', '0.0']
         self._check_text_labels(axes0_labels, expected)
-        self._check_ticks_props(axes, xlabelsize=8, xrot=90, ylabelsize=8, yrot=0)
+        self._check_ticks_props(
+            axes, xlabelsize=8, xrot=90, ylabelsize=8, yrot=0)

     @slow
     def test_andrews_curves(self):
@@ -452,16 +453,22 @@ def test_andrews_curves(self):
         _check_plot_works(andrews_curves, frame=df, class_column='Name')

         rgba = ('#556270', '#4ECDC4', '#C7F464')
-        ax = _check_plot_works(andrews_curves, frame=df, class_column='Name', color=rgba)
-        self._check_colors(ax.get_lines()[:10], linecolors=rgba, mapping=df['Name'][:10])
+        ax = _check_plot_works(andrews_curves, frame=df,
+                               class_column='Name', color=rgba)
+        self._check_colors(
+            ax.get_lines()[:10], linecolors=rgba, mapping=df['Name'][:10])

         cnames = ['dodgerblue', 'aquamarine', 'seagreen']
-        ax = _check_plot_works(andrews_curves, frame=df, class_column='Name', color=cnames)
-        self._check_colors(ax.get_lines()[:10], linecolors=cnames, mapping=df['Name'][:10])
+        ax = _check_plot_works(andrews_curves, frame=df,
+                               class_column='Name', color=cnames)
+        self._check_colors(
+            ax.get_lines()[:10], linecolors=cnames, mapping=df['Name'][:10])

-        ax = _check_plot_works(andrews_curves, frame=df, class_column='Name', colormap=cm.jet)
+        ax = _check_plot_works(andrews_curves, frame=df,
+                               class_column='Name', colormap=cm.jet)
         cmaps = lmap(cm.jet, np.linspace(0, 1, df['Name'].nunique()))
-        self._check_colors(ax.get_lines()[:10], linecolors=cmaps, mapping=df['Name'][:10])
+        self._check_colors(
+            ax.get_lines()[:10], linecolors=cmaps, mapping=df['Name'][:10])

         length = 10
         df = DataFrame({"A": random.rand(length),
@@ -472,16 +479,22 @@ def test_andrews_curves(self):
         _check_plot_works(andrews_curves, frame=df, class_column='Name')

         rgba = ('#556270', '#4ECDC4', '#C7F464')
-        ax = _check_plot_works(andrews_curves, frame=df, class_column='Name', color=rgba)
-        self._check_colors(ax.get_lines()[:10], linecolors=rgba, mapping=df['Name'][:10])
+        ax =
_check_plot_works(andrews_curves, frame=df, + class_column='Name', color=rgba) + self._check_colors( + ax.get_lines()[:10], linecolors=rgba, mapping=df['Name'][:10]) cnames = ['dodgerblue', 'aquamarine', 'seagreen'] - ax = _check_plot_works(andrews_curves, frame=df, class_column='Name', color=cnames) - self._check_colors(ax.get_lines()[:10], linecolors=cnames, mapping=df['Name'][:10]) + ax = _check_plot_works(andrews_curves, frame=df, + class_column='Name', color=cnames) + self._check_colors( + ax.get_lines()[:10], linecolors=cnames, mapping=df['Name'][:10]) - ax = _check_plot_works(andrews_curves, frame=df, class_column='Name', colormap=cm.jet) + ax = _check_plot_works(andrews_curves, frame=df, + class_column='Name', colormap=cm.jet) cmaps = lmap(cm.jet, np.linspace(0, 1, df['Name'].nunique())) - self._check_colors(ax.get_lines()[:10], linecolors=cmaps, mapping=df['Name'][:10]) + self._check_colors( + ax.get_lines()[:10], linecolors=cmaps, mapping=df['Name'][:10]) colors = ['b', 'g', 'r'] df = DataFrame({"A": [1, 2, 3], @@ -502,23 +515,31 @@ def test_parallel_coordinates(self): df = self.iris - ax = _check_plot_works(parallel_coordinates, frame=df, class_column='Name') + ax = _check_plot_works(parallel_coordinates, + frame=df, class_column='Name') nlines = len(ax.get_lines()) nxticks = len(ax.xaxis.get_ticklabels()) rgba = ('#556270', '#4ECDC4', '#C7F464') - ax = _check_plot_works(parallel_coordinates, frame=df, class_column='Name', color=rgba) - self._check_colors(ax.get_lines()[:10], linecolors=rgba, mapping=df['Name'][:10]) + ax = _check_plot_works(parallel_coordinates, + frame=df, class_column='Name', color=rgba) + self._check_colors( + ax.get_lines()[:10], linecolors=rgba, mapping=df['Name'][:10]) cnames = ['dodgerblue', 'aquamarine', 'seagreen'] - ax = _check_plot_works(parallel_coordinates, frame=df, class_column='Name', color=cnames) - self._check_colors(ax.get_lines()[:10], linecolors=cnames, mapping=df['Name'][:10]) + ax = 
_check_plot_works(parallel_coordinates, + frame=df, class_column='Name', color=cnames) + self._check_colors( + ax.get_lines()[:10], linecolors=cnames, mapping=df['Name'][:10]) - ax = _check_plot_works(parallel_coordinates, frame=df, class_column='Name', colormap=cm.jet) + ax = _check_plot_works(parallel_coordinates, + frame=df, class_column='Name', colormap=cm.jet) cmaps = lmap(cm.jet, np.linspace(0, 1, df['Name'].nunique())) - self._check_colors(ax.get_lines()[:10], linecolors=cmaps, mapping=df['Name'][:10]) + self._check_colors( + ax.get_lines()[:10], linecolors=cmaps, mapping=df['Name'][:10]) - ax = _check_plot_works(parallel_coordinates, frame=df, class_column='Name', axvlines=False) + ax = _check_plot_works(parallel_coordinates, + frame=df, class_column='Name', axvlines=False) assert len(ax.get_lines()) == (nlines - nxticks) colors = ['b', 'g', 'r'] @@ -544,17 +565,20 @@ def test_radviz(self): _check_plot_works(radviz, frame=df, class_column='Name') rgba = ('#556270', '#4ECDC4', '#C7F464') - ax = _check_plot_works(radviz, frame=df, class_column='Name', color=rgba) + ax = _check_plot_works( + radviz, frame=df, class_column='Name', color=rgba) # skip Circle drawn as ticks patches = [p for p in ax.patches[:20] if p.get_label() != ''] - self._check_colors(patches[:10], facecolors=rgba, mapping=df['Name'][:10]) + self._check_colors( + patches[:10], facecolors=rgba, mapping=df['Name'][:10]) cnames = ['dodgerblue', 'aquamarine', 'seagreen'] _check_plot_works(radviz, frame=df, class_column='Name', color=cnames) patches = [p for p in ax.patches[:20] if p.get_label() != ''] self._check_colors(patches, facecolors=cnames, mapping=df['Name'][:10]) - _check_plot_works(radviz, frame=df, class_column='Name', colormap=cm.jet) + _check_plot_works(radviz, frame=df, + class_column='Name', colormap=cm.jet) cmaps = lmap(cm.jet, np.linspace(0, 1, df['Name'].nunique())) patches = [p for p in ax.patches[:20] if p.get_label() != ''] self._check_colors(patches, facecolors=cmaps, 
mapping=df['Name'][:10]) @@ -656,7 +680,8 @@ def test_grouped_hist_legacy(self): xrot, yrot = 30, 40 axes = plotting.grouped_hist(df.A, by=df.C, normed=True, cumulative=True, bins=4, - xlabelsize=xf, xrot=xrot, ylabelsize=yf, yrot=yrot) + xlabelsize=xf, xrot=xrot, + ylabelsize=yf, yrot=yrot) # height of last bin (index 5) must be 1.0 for ax in axes.ravel(): rects = [x for x in ax.get_children() if isinstance(x, Rectangle)] @@ -700,13 +725,15 @@ def test_grouped_box_return_type(self): # old style: return_type=None result = df.boxplot(by='gender') self.assertIsInstance(result, np.ndarray) - self._check_box_return_type(result, None, - expected_keys=['height', 'weight', 'category']) + self._check_box_return_type( + result, None, + expected_keys=['height', 'weight', 'category']) # now for groupby with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): result = df.groupby('gender').boxplot() - self._check_box_return_type(result, 'dict', expected_keys=['Male', 'Female']) + self._check_box_return_type( + result, 'dict', expected_keys=['Male', 'Female']) columns2 = 'X B C D A G Y N Q O'.split() df2 = DataFrame(random.randn(50, 10), columns=columns2) @@ -715,11 +742,13 @@ def test_grouped_box_return_type(self): for t in ['dict', 'axes', 'both']: returned = df.groupby('classroom').boxplot(return_type=t) - self._check_box_return_type(returned, t, expected_keys=['A', 'B', 'C']) + self._check_box_return_type( + returned, t, expected_keys=['A', 'B', 'C']) returned = df.boxplot(by='classroom', return_type=t) - self._check_box_return_type(returned, t, - expected_keys=['height', 'weight', 'category']) + self._check_box_return_type( + returned, t, + expected_keys=['height', 'weight', 'category']) returned = df2.groupby('category').boxplot(return_type=t) self._check_box_return_type(returned, t, expected_keys=categories2) @@ -733,7 +762,8 @@ def test_grouped_box_layout(self): self.assertRaises(ValueError, df.boxplot, column=['weight', 'height'], by=df.gender, layout=(1, 
1)) - self.assertRaises(ValueError, df.boxplot, column=['height', 'weight', 'category'], + self.assertRaises(ValueError, df.boxplot, + column=['height', 'weight', 'category'], layout=(2, 1), return_type='dict') self.assertRaises(ValueError, df.boxplot, column=['weight', 'height'], by=df.gender, layout=(-1, -1)) @@ -742,7 +772,8 @@ def test_grouped_box_layout(self): return_type='dict') self._check_axes_shape(self.plt.gcf().axes, axes_num=2, layout=(1, 2)) - box = _check_plot_works(df.groupby('category').boxplot, column='height', + box = _check_plot_works(df.groupby('category').boxplot, + column='height', return_type='dict') self._check_axes_shape(self.plt.gcf().axes, axes_num=4, layout=(2, 2)) @@ -766,10 +797,12 @@ def test_grouped_box_layout(self): column=['height', 'weight', 'category'], return_type='dict') self._check_axes_shape(self.plt.gcf().axes, axes_num=3, layout=(2, 2)) - box = _check_plot_works(df.groupby('category').boxplot, column='height', + box = _check_plot_works(df.groupby('category').boxplot, + column='height', layout=(3, 2), return_type='dict') self._check_axes_shape(self.plt.gcf().axes, axes_num=4, layout=(3, 2)) - box = _check_plot_works(df.groupby('category').boxplot, column='height', + box = _check_plot_works(df.groupby('category').boxplot, + column='height', layout=(3, -1), return_type='dict') self._check_axes_shape(self.plt.gcf().axes, axes_num=4, layout=(3, 2)) @@ -786,7 +819,7 @@ def test_grouped_box_layout(self): return_type='dict') self._check_axes_shape(self.plt.gcf().axes, axes_num=3, layout=(1, 4)) - box = df.groupby('classroom').boxplot( + box = df.groupby('classroom').boxplot( # noqa column=['height', 'weight', 'category'], layout=(1, -1), return_type='dict') self._check_axes_shape(self.plt.gcf().axes, axes_num=3, layout=(1, 3)) @@ -803,8 +836,10 @@ def test_grouped_box_multiple_axes(self): # which has earlier alphabetical order with tm.assert_produces_warning(UserWarning): fig, axes = self.plt.subplots(2, 2) - 
df.groupby('category').boxplot(column='height', return_type='axes', ax=axes) - self._check_axes_shape(self.plt.gcf().axes, axes_num=4, layout=(2, 2)) + df.groupby('category').boxplot( + column='height', return_type='axes', ax=axes) + self._check_axes_shape(self.plt.gcf().axes, + axes_num=4, layout=(2, 2)) fig, axes = self.plt.subplots(2, 3) with warnings.catch_warnings(): @@ -856,12 +891,15 @@ def test_grouped_hist_layout(self): axes = df.hist(column='height', by=df.category, layout=(-1, 1)) self._check_axes_shape(axes, axes_num=4, layout=(4, 1)) - axes = df.hist(column='height', by=df.category, layout=(4, 2), figsize=(12, 8)) - self._check_axes_shape(axes, axes_num=4, layout=(4, 2), figsize=(12, 8)) + axes = df.hist(column='height', by=df.category, + layout=(4, 2), figsize=(12, 8)) + self._check_axes_shape( + axes, axes_num=4, layout=(4, 2), figsize=(12, 8)) tm.close() # GH 6769 - axes = _check_plot_works(df.hist, column='height', by='classroom', layout=(2, 2)) + axes = _check_plot_works( + df.hist, column='height', by='classroom', layout=(2, 2)) self._check_axes_shape(axes, axes_num=3, layout=(2, 2)) # without column diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py index 5eb8606f4c30c..7e40885fdacb5 100644 --- a/pandas/tests/test_groupby.py +++ b/pandas/tests/test_groupby.py @@ -7,20 +7,18 @@ from datetime import datetime from numpy import nan -from pandas import date_range,bdate_range, Timestamp -from pandas.core.index import Index, MultiIndex, Int64Index, CategoricalIndex +from pandas import date_range, bdate_range, Timestamp +from pandas.core.index import Index, MultiIndex, CategoricalIndex from pandas.core.api import Categorical, DataFrame -from pandas.core.groupby import (SpecificationError, DataError, - _nargsort, _lexsort_indexer) +from pandas.core.groupby import (SpecificationError, DataError, _nargsort, + _lexsort_indexer) from pandas.core.series import Series from pandas.core.config import option_context from pandas.util.testing 
import (assert_panel_equal, assert_frame_equal, assert_series_equal, assert_almost_equal, assert_index_equal, assertRaisesRegexp) -from pandas.compat import( - range, long, lrange, StringIO, lmap, lzip, map, - zip, builtins, OrderedDict, product as cart_product -) +from pandas.compat import (range, long, lrange, StringIO, lmap, lzip, map, zip, + builtins, OrderedDict, product as cart_product) from pandas import compat from pandas.core.panel import Panel from pandas.tools.merge import concat @@ -36,24 +34,6 @@ from numpy.testing import assert_equal -def commonSetUp(self): - self.dateRange = bdate_range('1/1/2005', periods=250) - self.stringIndex = Index([rands(8).upper() for x in range(250)]) - - self.groupId = Series([x[0] for x in self.stringIndex], - index=self.stringIndex) - self.groupDict = dict((k, v) for k, v in compat.iteritems(self.groupId)) - - self.columnIndex = Index(['A', 'B', 'C', 'D', 'E']) - - randMat = np.random.randn(250, 5) - self.stringMatrix = DataFrame(randMat, columns=self.columnIndex, - index=self.stringIndex) - - self.timeMatrix = DataFrame(randMat, columns=self.columnIndex, - index=self.dateRange) - - class TestGroupBy(tm.TestCase): _multiprocess_can_split_ = True @@ -66,44 +46,39 @@ def setUp(self): self.frame = DataFrame(self.seriesd) self.tsframe = DataFrame(self.tsd) - self.df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', - 'foo', 'bar', 'foo', 'foo'], - 'B': ['one', 'one', 'two', 'three', - 'two', 'two', 'one', 'three'], - 'C': np.random.randn(8), - 'D': np.random.randn(8)}) - - self.df_mixed_floats = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', - 'foo', 'bar', 'foo', 'foo'], - 'B': ['one', 'one', 'two', 'three', - 'two', 'two', 'one', 'three'], - 'C': np.random.randn(8), - 'D': np.array(np.random.randn(8), - dtype='float32')}) - - index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], - ['one', 'two', 'three']], + self.df = DataFrame( + {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'], + 'B': ['one', 'one', 'two', 
'three', 'two', 'two', 'one', 'three'], + 'C': np.random.randn(8), + 'D': np.random.randn(8)}) + + self.df_mixed_floats = DataFrame( + {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'], + 'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'], + 'C': np.random.randn(8), + 'D': np.array( + np.random.randn(8), dtype='float32')}) + + index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two', + 'three']], labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], names=['first', 'second']) self.mframe = DataFrame(np.random.randn(10, 3), index=index, columns=['A', 'B', 'C']) - self.three_group = DataFrame({'A': ['foo', 'foo', 'foo', 'foo', - 'bar', 'bar', 'bar', 'bar', - 'foo', 'foo', 'foo'], - 'B': ['one', 'one', 'one', 'two', - 'one', 'one', 'one', 'two', - 'two', 'two', 'one'], - 'C': ['dull', 'dull', 'shiny', 'dull', - 'dull', 'shiny', 'shiny', 'dull', - 'shiny', 'shiny', 'shiny'], - 'D': np.random.randn(11), - 'E': np.random.randn(11), - 'F': np.random.randn(11)}) + self.three_group = DataFrame( + {'A': ['foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar', + 'foo', 'foo', 'foo'], + 'B': ['one', 'one', 'one', 'two', 'one', 'one', 'one', 'two', + 'two', 'two', 'one'], + 'C': ['dull', 'dull', 'shiny', 'dull', 'dull', 'shiny', 'shiny', + 'dull', 'shiny', 'shiny', 'shiny'], + 'D': np.random.randn(11), + 'E': np.random.randn(11), + 'F': np.random.randn(11)}) def test_basic(self): - def checkit(dtype): data = Series(np.arange(9) // 3, index=np.arange(9), dtype=dtype) @@ -134,14 +109,9 @@ def checkit(dtype): # complex agg agged = grouped.aggregate([np.mean, np.std]) - agged = grouped.aggregate({'one': np.mean, - 'two': np.std}) - - group_constants = { - 0: 10, - 1: 20, - 2: 30 - } + agged = grouped.aggregate({'one': np.mean, 'two': np.std}) + + group_constants = {0: 10, 1: 20, 2: 30} agged = grouped.agg(lambda x: group_constants[x.name] + x.mean()) self.assertEqual(agged[1], 21) @@ -166,8 +136,8 @@ def 
test_first_last_nth(self): # tests for first / last / nth grouped = self.df.groupby('A') first = grouped.first() - expected = self.df.ix[[1, 0], ['B','C','D']] - expected.index = Index(['bar', 'foo'],name='A') + expected = self.df.ix[[1, 0], ['B', 'C', 'D']] + expected.index = Index(['bar', 'foo'], name='A') expected = expected.sort_index() assert_frame_equal(first, expected) @@ -175,16 +145,16 @@ def test_first_last_nth(self): assert_frame_equal(nth, expected) last = grouped.last() - expected = self.df.ix[[5, 7], ['B','C','D']] - expected.index = Index(['bar', 'foo'],name='A') + expected = self.df.ix[[5, 7], ['B', 'C', 'D']] + expected.index = Index(['bar', 'foo'], name='A') assert_frame_equal(last, expected) nth = grouped.nth(-1) assert_frame_equal(nth, expected) nth = grouped.nth(1) - expected = self.df.ix[[2, 3],['B','C','D']].copy() - expected.index = Index(['foo', 'bar'],name='A') + expected = self.df.ix[[2, 3], ['B', 'C', 'D']].copy() + expected.index = Index(['foo', 'bar'], name='A') expected = expected.sort_index() assert_frame_equal(nth, expected) @@ -196,17 +166,18 @@ def test_first_last_nth(self): self.df.loc[self.df['A'] == 'foo', 'B'] = np.nan self.assertTrue(com.isnull(grouped['B'].first()['foo'])) self.assertTrue(com.isnull(grouped['B'].last()['foo'])) - self.assertTrue(com.isnull(grouped['B'].nth(0)[0])) # not sure what this is testing + self.assertTrue(com.isnull(grouped['B'].nth(0)[0]) + ) # not sure what this is testing # v0.14.0 whatsnew df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B']) g = df.groupby('A') result = g.first() - expected = df.iloc[[1,2]].set_index('A') + expected = df.iloc[[1, 2]].set_index('A') assert_frame_equal(result, expected) - expected = df.iloc[[1,2]].set_index('A') - result = g.nth(0,dropna='any') + expected = df.iloc[[1, 2]].set_index('A') + result = g.nth(0, dropna='any') assert_frame_equal(result, expected) def test_first_last_nth_dtypes(self): @@ -230,7 +201,7 @@ def test_first_last_nth_dtypes(self): 
assert_frame_equal(last, expected) nth = grouped.nth(1) - expected = df.ix[[3, 2],['B', 'C', 'D', 'E', 'F']] + expected = df.ix[[3, 2], ['B', 'C', 'D', 'E', 'F']] expected.index = Index(['bar', 'foo'], name='A') expected = expected.sort_index() assert_frame_equal(nth, expected) @@ -249,13 +220,14 @@ def test_nth(self): assert_frame_equal(g.nth(0), df.iloc[[0, 2]].set_index('A')) assert_frame_equal(g.nth(1), df.iloc[[1]].set_index('A')) - assert_frame_equal(g.nth(2), df.loc[[],['B']]) + assert_frame_equal(g.nth(2), df.loc[[], ['B']]) assert_frame_equal(g.nth(-1), df.iloc[[1, 2]].set_index('A')) assert_frame_equal(g.nth(-2), df.iloc[[0]].set_index('A')) - assert_frame_equal(g.nth(-3), df.loc[[],['B']]) + assert_frame_equal(g.nth(-3), df.loc[[], ['B']]) assert_series_equal(g.B.nth(0), df.B.iloc[[0, 2]]) assert_series_equal(g.B.nth(1), df.B.iloc[[1]]) - assert_frame_equal(g[['B']].nth(0), df.ix[[0, 2], ['A', 'B']].set_index('A')) + assert_frame_equal(g[['B']].nth(0), + df.ix[[0, 2], ['A', 'B']].set_index('A')) exp = df.set_index('A') assert_frame_equal(g.nth(0, dropna='any'), exp.iloc[[1, 2]]) @@ -267,22 +239,39 @@ def test_nth(self): # out of bounds, regression from 0.13.1 # GH 6621 - df = DataFrame({'color': {0: 'green', 1: 'green', 2: 'red', 3: 'red', 4: 'red'}, - 'food': {0: 'ham', 1: 'eggs', 2: 'eggs', 3: 'ham', 4: 'pork'}, - 'two': {0: 1.5456590000000001, 1: -0.070345000000000005, 2: -2.4004539999999999, 3: 0.46206000000000003, 4: 0.52350799999999997}, - 'one': {0: 0.56573799999999996, 1: -0.9742360000000001, 2: 1.033801, 3: -0.78543499999999999, 4: 0.70422799999999997}}).set_index(['color', 'food']) + df = DataFrame({'color': {0: 'green', + 1: 'green', + 2: 'red', + 3: 'red', + 4: 'red'}, + 'food': {0: 'ham', + 1: 'eggs', + 2: 'eggs', + 3: 'ham', + 4: 'pork'}, + 'two': {0: 1.5456590000000001, + 1: -0.070345000000000005, + 2: -2.4004539999999999, + 3: 0.46206000000000003, + 4: 0.52350799999999997}, + 'one': {0: 0.56573799999999996, + 1: -0.9742360000000001, + 2: 
1.033801, + 3: -0.78543499999999999, + 4: 0.70422799999999997}}).set_index(['color', + 'food']) result = df.groupby(level=0).nth(2) expected = df.iloc[[-1]] - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) result = df.groupby(level=0).nth(3) expected = df.loc[[]] - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # GH 7559 # from the vbench - df = DataFrame(np.random.randint(1, 10, (100, 2)),dtype='int64') + df = DataFrame(np.random.randint(1, 10, (100, 2)), dtype='int64') s = df[1] g = df[0] expected = s.groupby(g).first() @@ -292,15 +281,15 @@ def test_nth(self): self.assertEqual(expected.name, 1) # validate first - v = s[g==1].iloc[0] - self.assertEqual(expected.iloc[0],v) - self.assertEqual(expected2.iloc[0],v) + v = s[g == 1].iloc[0] + self.assertEqual(expected.iloc[0], v) + self.assertEqual(expected2.iloc[0], v) # this is NOT the same as .first (as sorted is default!) # as it keeps the order in the series (and not the group order) # related GH 7287 - expected = s.groupby(g,sort=False).first() - expected.index = pd.Index(range(1,10), name=0) + expected = s.groupby(g, sort=False).first() + expected.index = pd.Index(range(1, 10), name=0) result = s.groupby(g).nth(0, dropna='all') assert_series_equal(result, expected) @@ -309,7 +298,7 @@ def test_nth(self): g = df.groupby('A') result = g.B.nth(0, dropna=True) expected = g.B.first() - assert_series_equal(result,expected) + assert_series_equal(result, expected) # test multiple nth values df = DataFrame([[1, np.nan], [1, 3], [1, 4], [5, 6], [5, 7]], @@ -319,19 +308,25 @@ def test_nth(self): assert_frame_equal(g.nth(0), df.iloc[[0, 3]].set_index('A')) assert_frame_equal(g.nth([0]), df.iloc[[0, 3]].set_index('A')) assert_frame_equal(g.nth([0, 1]), df.iloc[[0, 1, 3, 4]].set_index('A')) - assert_frame_equal(g.nth([0, -1]), df.iloc[[0, 2, 3, 4]].set_index('A')) - assert_frame_equal(g.nth([0, 1, 2]), df.iloc[[0, 1, 2, 3, 4]].set_index('A')) - 
assert_frame_equal(g.nth([0, 1, -1]), df.iloc[[0, 1, 2, 3, 4]].set_index('A')) + assert_frame_equal( + g.nth([0, -1]), df.iloc[[0, 2, 3, 4]].set_index('A')) + assert_frame_equal( + g.nth([0, 1, 2]), df.iloc[[0, 1, 2, 3, 4]].set_index('A')) + assert_frame_equal( + g.nth([0, 1, -1]), df.iloc[[0, 1, 2, 3, 4]].set_index('A')) assert_frame_equal(g.nth([2]), df.iloc[[2]].set_index('A')) - assert_frame_equal(g.nth([3, 4]), df.loc[[],['B']]) + assert_frame_equal(g.nth([3, 4]), df.loc[[], ['B']]) - business_dates = pd.date_range(start='4/1/2014', end='6/30/2014', freq='B') + business_dates = pd.date_range(start='4/1/2014', end='6/30/2014', + freq='B') df = DataFrame(1, index=business_dates, columns=['a', 'b']) # get the first, fourth and last two business days for each month - result = df.groupby((df.index.year, df.index.month)).nth([0, 3, -2, -1]) - expected_dates = pd.to_datetime(['2014/4/1', '2014/4/4', '2014/4/29', '2014/4/30', - '2014/5/1', '2014/5/6', '2014/5/29', '2014/5/30', - '2014/6/2', '2014/6/5', '2014/6/27', '2014/6/30']) + result = df.groupby((df.index.year, df.index.month)).nth([0, 3, -2, -1 + ]) + expected_dates = pd.to_datetime( + ['2014/4/1', '2014/4/4', '2014/4/29', '2014/4/30', '2014/5/1', + '2014/5/6', '2014/5/29', '2014/5/30', '2014/6/2', '2014/6/5', + '2014/6/27', '2014/6/30']) expected = DataFrame(1, columns=['a', 'b'], index=expected_dates) assert_frame_equal(result, expected) @@ -343,34 +338,32 @@ def test_nth_multi_index(self): expected = grouped.first() assert_frame_equal(result, expected) - def test_nth_multi_index_as_expected(self): # PR 9090, related to issue 8979 # test nth on MultiIndex - three_group = DataFrame({'A': ['foo', 'foo', 'foo', 'foo', - 'bar', 'bar', 'bar', 'bar', - 'foo', 'foo', 'foo'], - 'B': ['one', 'one', 'one', 'two', - 'one', 'one', 'one', 'two', - 'two', 'two', 'one'], - 'C': ['dull', 'dull', 'shiny', 'dull', - 'dull', 'shiny', 'shiny', 'dull', - 'shiny', 'shiny', 'shiny']}) + three_group = DataFrame( + {'A': ['foo', 
'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar', + 'foo', 'foo', 'foo'], + 'B': ['one', 'one', 'one', 'two', 'one', 'one', 'one', 'two', + 'two', 'two', 'one'], + 'C': ['dull', 'dull', 'shiny', 'dull', 'dull', 'shiny', 'shiny', + 'dull', 'shiny', 'shiny', 'shiny']}) grouped = three_group.groupby(['A', 'B']) result = grouped.nth(0) - expected = DataFrame({'C': ['dull', 'dull', 'dull', 'dull']}, - index=MultiIndex.from_arrays([['bar', 'bar', 'foo', 'foo'], ['one', 'two', 'one', 'two']], - names=['A', 'B'])) + expected = DataFrame( + {'C': ['dull', 'dull', 'dull', 'dull']}, + index=MultiIndex.from_arrays([['bar', 'bar', 'foo', 'foo'], + ['one', 'two', 'one', 'two']], + names=['A', 'B'])) assert_frame_equal(result, expected) - def test_grouper_index_types(self): # related GH5375 # groupby misbehaving when using a Floatlike index - df = DataFrame(np.arange(10).reshape(5,2),columns=list('AB')) - for index in [ tm.makeFloatIndex, tm.makeStringIndex, - tm.makeUnicodeIndex, tm.makeIntIndex, - tm.makeDateIndex, tm.makePeriodIndex ]: + df = DataFrame(np.arange(10).reshape(5, 2), columns=list('AB')) + for index in [tm.makeFloatIndex, tm.makeStringIndex, + tm.makeUnicodeIndex, tm.makeIntIndex, tm.makeDateIndex, + tm.makePeriodIndex]: df.index = index(len(df)) df.groupby(list('abcde')).apply(lambda x: x) @@ -385,28 +378,29 @@ def test_grouper_multilevel_freq(self): from datetime import date, timedelta d0 = date.today() - timedelta(days=14) dates = date_range(d0, date.today()) - date_index = pd.MultiIndex.from_product([dates, dates], names=['foo', 'bar']) + date_index = pd.MultiIndex.from_product( + [dates, dates], names=['foo', 'bar']) df = pd.DataFrame(np.random.randint(0, 100, 225), index=date_index) # Check string level - expected = df.reset_index().groupby([pd.Grouper(key='foo', freq='W'), - pd.Grouper(key='bar', freq='W')]).sum() + expected = df.reset_index().groupby([pd.Grouper( + key='foo', freq='W'), pd.Grouper(key='bar', freq='W')]).sum() # reset index changes columns 
dtype to object expected.columns = pd.Index([0], dtype='int64') - result = df.groupby([pd.Grouper(level='foo', freq='W'), - pd.Grouper(level='bar', freq='W')]).sum() + result = df.groupby([pd.Grouper(level='foo', freq='W'), pd.Grouper( + level='bar', freq='W')]).sum() assert_frame_equal(result, expected) # Check integer level - result = df.groupby([pd.Grouper(level=0, freq='W'), - pd.Grouper(level=1, freq='W')]).sum() + result = df.groupby([pd.Grouper(level=0, freq='W'), pd.Grouper( + level=1, freq='W')]).sum() assert_frame_equal(result, expected) def test_grouper_creation_bug(self): # GH 8795 - df = DataFrame({'A':[0,0,1,1,2,2], 'B':[1,2,3,4,5,6]}) + df = DataFrame({'A': [0, 0, 1, 1, 2, 2], 'B': [1, 2, 3, 4, 5, 6]}) g = df.groupby('A') expected = g.sum() @@ -417,18 +411,19 @@ def test_grouper_creation_bug(self): result = g.apply(lambda x: x.sum()) assert_frame_equal(result, expected) - g = df.groupby(pd.Grouper(key='A',axis=0)) + g = df.groupby(pd.Grouper(key='A', axis=0)) result = g.sum() assert_frame_equal(result, expected) # GH8866 - s = Series(np.arange(8,dtype='int64'), - index=pd.MultiIndex.from_product([list('ab'), - range(2), - date_range('20130101',periods=2)], - names=['one','two','three'])) - result = s.groupby(pd.Grouper(level='three',freq='M')).sum() - expected = Series([28],index=Index([Timestamp('2013-01-31')],freq='M',name='three')) + s = Series(np.arange(8, dtype='int64'), + index=pd.MultiIndex.from_product( + [list('ab'), range(2), + date_range('20130101', periods=2)], + names=['one', 'two', 'three'])) + result = s.groupby(pd.Grouper(level='three', freq='M')).sum() + expected = Series([28], index=Index( + [Timestamp('2013-01-31')], freq='M', name='three')) assert_series_equal(result, expected) # just specifying a level breaks @@ -441,14 +436,16 @@ def test_grouper_getting_correct_binner(self): # GH 10063 # using a non-time-based grouper and a time-based grouper # and specifying levels - df = DataFrame({'A' : 1 }, - 
                         index=pd.MultiIndex.from_product([list('ab'),
-                                                          date_range('20130101',periods=80)],
-                                                         names=['one','two']))
-        result = df.groupby([pd.Grouper(level='one'),pd.Grouper(level='two',freq='M')]).sum()
-        expected = DataFrame({'A' : [31,28,21,31,28,21]},
-                             index=MultiIndex.from_product([list('ab'),date_range('20130101',freq='M',periods=3)],
-                                                           names=['one','two']))
+        df = DataFrame({'A': 1}, index=pd.MultiIndex.from_product(
+            [list('ab'), date_range('20130101', periods=80)], names=['one',
+                                                                     'two']))
+        result = df.groupby([pd.Grouper(level='one'), pd.Grouper(
+            level='two', freq='M')]).sum()
+        expected = DataFrame({'A': [31, 28, 21, 31, 28, 21]},
+                             index=MultiIndex.from_product(
+                                 [list('ab'),
+                                  date_range('20130101', freq='M', periods=3)],
+                                 names=['one', 'two']))
         assert_frame_equal(result, expected)

     def test_grouper_iter(self):
@@ -467,8 +464,8 @@ def test_groupby_grouper(self):

     def test_groupby_duplicated_column_errormsg(self):
         # GH7511
-        df = DataFrame(columns=['A','B','A','C'], \
-                       data=[range(4), range(2,6), range(0, 8, 2)])
+        df = DataFrame(columns=['A', 'B', 'A', 'C'],
+                       data=[range(4), range(2, 6), range(0, 8, 2)])

         self.assertRaises(ValueError, df.groupby, 'A')
         self.assertRaises(ValueError, df.groupby, ['A', 'B'])

@@ -500,9 +497,9 @@ def test_groupby_dict_mapping(self):
     def test_groupby_bounds_check(self):
         # groupby_X is code-generated, so if one variant
         # does, the rest probably do to
-        a = np.array([1,2],dtype='object')
-        b = np.array([1,2,3],dtype='object')
-        self.assertRaises(AssertionError, pd.algos.groupby_object,a, b)
+        a = np.array([1, 2], dtype='object')
+        b = np.array([1, 2, 3], dtype='object')
+        self.assertRaises(AssertionError, pd.algos.groupby_object, a, b)

     def test_groupby_grouper_f_sanity_checked(self):
         dates = date_range('01-Jan-2013', periods=12, freq='MS')
@@ -517,7 +514,7 @@ def test_groupby_grouper_f_sanity_checked(self):
         # when the elements are Timestamp.
         # the result is Index[0:6], very confusing.

-        self.assertRaises(AssertionError, ts.groupby,lambda key: key[0:6])
+        self.assertRaises(AssertionError, ts.groupby, lambda key: key[0:6])

     def test_groupby_nonobject_dtype(self):
         key = self.mframe.index.labels[0]
@@ -536,84 +533,101 @@ def max_value(group):

         applied = df.groupby('A').apply(max_value)
         result = applied.get_dtype_counts().sort_values()
-        expected = Series({ 'object' : 2, 'float64' : 2, 'int64' : 1 }).sort_values()
-        assert_series_equal(result,expected)
+        expected = Series({'object': 2,
+                           'float64': 2,
+                           'int64': 1}).sort_values()
+        assert_series_equal(result, expected)

     def test_groupby_return_type(self):

         # GH2893, return a reduced type
-        df1 = DataFrame([{"val1": 1, "val2" : 20}, {"val1":1, "val2": 19},
-                         {"val1":2, "val2": 27}, {"val1":2, "val2": 12}])
+        df1 = DataFrame([{"val1": 1,
+                          "val2": 20}, {"val1": 1,
+                                        "val2": 19}, {"val1": 2,
+                                                      "val2": 27}, {"val1": 2,
+                                                                    "val2": 12}
+                         ])

         def func(dataf):
-            return dataf["val2"] - dataf["val2"].mean()
+            return dataf["val2"] - dataf["val2"].mean()

         result = df1.groupby("val1", squeeze=True).apply(func)
-        tm.assertIsInstance(result,Series)
+        tm.assertIsInstance(result, Series)
+
+        df2 = DataFrame([{"val1": 1,
+                          "val2": 20}, {"val1": 1,
+                                        "val2": 19}, {"val1": 1,
+                                                      "val2": 27}, {"val1": 1,
+                                                                    "val2": 12}
+                         ])

-        df2 = DataFrame([{"val1": 1, "val2" : 20}, {"val1":1, "val2": 19},
-                         {"val1":1, "val2": 27}, {"val1":1, "val2": 12}])
         def func(dataf):
-            return dataf["val2"] - dataf["val2"].mean()
+            return dataf["val2"] - dataf["val2"].mean()

         result = df2.groupby("val1", squeeze=True).apply(func)
-        tm.assertIsInstance(result,Series)
+        tm.assertIsInstance(result, Series)

         # GH3596, return a consistent type (regression in 0.11 from 0.10.1)
-        df = DataFrame([[1,1],[1,1]],columns=['X','Y'])
-        result = df.groupby('X',squeeze=False).count()
-        tm.assertIsInstance(result,DataFrame)
+        df = DataFrame([[1, 1], [1, 1]], columns=['X', 'Y'])
+        result = df.groupby('X', squeeze=False).count()
+        tm.assertIsInstance(result, DataFrame)

         # GH5592
         # inconcistent return type
-        df = DataFrame(dict(A = [ 'Tiger', 'Tiger', 'Tiger', 'Lamb', 'Lamb', 'Pony', 'Pony' ],
-                            B = Series(np.arange(7),dtype='int64'),
-                            C = date_range('20130101',periods=7)))
+        df = DataFrame(dict(A=['Tiger', 'Tiger', 'Tiger', 'Lamb', 'Lamb',
+                               'Pony', 'Pony'], B=Series(
+                                   np.arange(7), dtype='int64'), C=date_range(
+                                       '20130101', periods=7)))

         def f(grp):
             return grp.iloc[0]
+
         expected = df.groupby('A').first()[['B']]
         result = df.groupby('A').apply(f)[['B']]
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)

         def f(grp):
             if grp.name == 'Tiger':
                 return None
             return grp.iloc[0]
+
         result = df.groupby('A').apply(f)[['B']]
         e = expected.copy()
         e.loc['Tiger'] = np.nan
-        assert_frame_equal(result,e)
+        assert_frame_equal(result, e)

         def f(grp):
             if grp.name == 'Pony':
                 return None
             return grp.iloc[0]
+
         result = df.groupby('A').apply(f)[['B']]
         e = expected.copy()
         e.loc['Pony'] = np.nan
-        assert_frame_equal(result,e)
+        assert_frame_equal(result, e)

         # 5592 revisited, with datetimes
         def f(grp):
             if grp.name == 'Pony':
                 return None
             return grp.iloc[0]
+
         result = df.groupby('A').apply(f)[['C']]
         e = df.groupby('A').first()[['C']]
         e.loc['Pony'] = pd.NaT
-        assert_frame_equal(result,e)
+        assert_frame_equal(result, e)

         # scalar outputs
         def f(grp):
             if grp.name == 'Pony':
                 return None
             return grp.iloc[0].loc['C']
+
         result = df.groupby('A').apply(f)
         e = df.groupby('A').first()['C'].copy()
         e.loc['Pony'] = np.nan
         e.name = None
-        assert_series_equal(result,e)
+        assert_series_equal(result, e)

     def test_agg_api(self):

@@ -621,19 +635,19 @@ def test_agg_api(self):
        # http://stackoverflow.com/questions/21706030/pandas-groupby-agg-function-column-dtype-error
        # different api for agg when passed custom function with mixed frame

-        df = DataFrame({'data1':np.random.randn(5),
-                        'data2':np.random.randn(5),
-                        'key1':['a','a','b','b','a'],
-                        'key2':['one','two','one','two','one']})
+        df = DataFrame({'data1': np.random.randn(5),
+                        'data2': np.random.randn(5),
+                        'key1': ['a', 'a', 'b', 'b', 'a'],
+                        'key2': ['one', 'two', 'one', 'two', 'one']})
         grouped = df.groupby('key1')

         def peak_to_peak(arr):
             return arr.max() - arr.min()

         expected = grouped.agg([peak_to_peak])
-        expected.columns=['data1','data2']
+        expected.columns = ['data1', 'data2']

         result = grouped.agg(peak_to_peak)
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)

     def test_agg_regression1(self):
         grouped = self.tsframe.groupby([lambda x: x.year, lambda x: x.month])
@@ -642,16 +656,14 @@ def test_agg_regression1(self):
         assert_frame_equal(result, expected)

     def test_agg_datetimes_mixed(self):
-        data = [[1, '2012-01-01', 1.0],
-                [2, '2012-01-02', 2.0],
-                [3, None, 3.0]]
+        data = [[1, '2012-01-01', 1.0], [2, '2012-01-02', 2.0], [3, None, 3.0]]

         df1 = DataFrame({'key': [x[0] for x in data],
                          'date': [x[1] for x in data],
                          'value': [x[2] for x in data]})

-        data = [[row[0], datetime.strptime(row[1], '%Y-%m-%d').date()
-                 if row[1] else None, row[2]] for row in data]
+        data = [[row[0], datetime.strptime(row[1], '%Y-%m-%d').date() if row[1]
+                 else None, row[2]] for row in data]

         df2 = DataFrame({'key': [x[0] for x in data],
                          'date': [x[1] for x in data],
@@ -663,7 +675,7 @@ def test_agg_datetimes_mixed(self):
         df2['weights'] = df1['value'] / df1['value'].sum()

         gb2 = df2.groupby('date').aggregate(np.sum)
-        assert(len(gb1) == len(gb2))
+        assert (len(gb1) == len(gb2))

     def test_agg_period_index(self):
         from pandas import period_range, PeriodIndex
@@ -676,7 +688,7 @@ def test_agg_period_index(self):
         index = period_range(start='1999-01', periods=5, freq='M')
         s1 = Series(np.random.rand(len(index)), index=index)
         s2 = Series(np.random.rand(len(index)), index=index)
-        series = [('s1', s1), ('s2',s2)]
+        series = [('s1', s1), ('s2', s2)]
         df = DataFrame.from_items(series)
         grouped = df.groupby(df.index.month)
         list(grouped)
@@ -687,7 +699,9 @@ def test_agg_must_agg(self):
         self.assertRaises(Exception, grouped.agg, lambda x: x.index[:2])

     def test_agg_ser_multi_key(self):
-        ser = self.df.C
+        # TODO(wesm): unused
+        ser = self.df.C  # noqa
+
         f = lambda x: x.sum()
         results = self.df.C.groupby([self.df.A, self.df.B]).aggregate(f)
         expected = self.df.groupby(['A', 'B']).sum()['C']
@@ -703,48 +717,49 @@ def test_get_group(self):

         # GH 5267
         # be datelike friendly
-        df = DataFrame({'DATE' : pd.to_datetime(['10-Oct-2013', '10-Oct-2013', '10-Oct-2013',
-                                                 '11-Oct-2013', '11-Oct-2013', '11-Oct-2013']),
-                        'label' : ['foo','foo','bar','foo','foo','bar'],
-                        'VAL' : [1,2,3,4,5,6]})
+        df = DataFrame({'DATE': pd.to_datetime(
+            ['10-Oct-2013', '10-Oct-2013', '10-Oct-2013', '11-Oct-2013',
+             '11-Oct-2013', '11-Oct-2013']),
+            'label': ['foo', 'foo', 'bar', 'foo', 'foo', 'bar'],
+            'VAL': [1, 2, 3, 4, 5, 6]})
         g = df.groupby('DATE')
         key = list(g.groups)[0]
         result1 = g.get_group(key)
         result2 = g.get_group(Timestamp(key).to_datetime())
         result3 = g.get_group(str(Timestamp(key)))
-        assert_frame_equal(result1,result2)
-        assert_frame_equal(result1,result3)
+        assert_frame_equal(result1, result2)
+        assert_frame_equal(result1, result3)

-        g = df.groupby(['DATE','label'])
+        g = df.groupby(['DATE', 'label'])
         key = list(g.groups)[0]
         result1 = g.get_group(key)
-        result2 = g.get_group((Timestamp(key[0]).to_datetime(),key[1]))
-        result3 = g.get_group((str(Timestamp(key[0])),key[1]))
-        assert_frame_equal(result1,result2)
-        assert_frame_equal(result1,result3)
+        result2 = g.get_group((Timestamp(key[0]).to_datetime(), key[1]))
+        result3 = g.get_group((str(Timestamp(key[0])), key[1]))
+        assert_frame_equal(result1, result2)
+        assert_frame_equal(result1, result3)

         # must pass a same-length tuple with multiple keys
-        self.assertRaises(ValueError, lambda : g.get_group('foo'))
-        self.assertRaises(ValueError, lambda : g.get_group(('foo')))
-        self.assertRaises(ValueError, lambda : g.get_group(('foo','bar','baz')))
+        self.assertRaises(ValueError, lambda: g.get_group('foo'))
+        self.assertRaises(ValueError, lambda: g.get_group(('foo')))
+        self.assertRaises(ValueError,
+                          lambda: g.get_group(('foo', 'bar', 'baz')))

     def test_get_group_grouped_by_tuple(self):
         # GH 8121
-        df = DataFrame([[(1,), (1, 2), (1,), (1, 2)]],
-                       index=['ids']).T
+        df = DataFrame([[(1, ), (1, 2), (1, ), (1, 2)]], index=['ids']).T
         gr = df.groupby('ids')
-        expected = DataFrame({'ids': [(1,), (1,)]}, index=[0, 2])
-        result = gr.get_group((1,))
+        expected = DataFrame({'ids': [(1, ), (1, )]}, index=[0, 2])
+        result = gr.get_group((1, ))
         assert_frame_equal(result, expected)

         dt = pd.to_datetime(['2010-01-01', '2010-01-02', '2010-01-01',
-                            '2010-01-02'])
-        df = DataFrame({'ids': [(x,) for x in dt]})
+                             '2010-01-02'])
+        df = DataFrame({'ids': [(x, ) for x in dt]})
         gr = df.groupby('ids')
-        result = gr.get_group(('2010-01-01',))
-        expected = DataFrame({'ids': [(dt[0],), (dt[0],)]}, index=[0, 2])
+        result = gr.get_group(('2010-01-01', ))
+        expected = DataFrame({'ids': [(dt[0], ), (dt[0], )]}, index=[0, 2])
         assert_frame_equal(result, expected)

     def test_agg_apply_corner(self):
@@ -753,7 +768,8 @@ def test_agg_apply_corner(self):
         self.assertEqual(self.ts.dtype, np.float64)

         # groupby float64 values results in Float64Index
-        exp = Series([], dtype=np.float64, index=pd.Index([], dtype=np.float64))
+        exp = Series([], dtype=np.float64, index=pd.Index(
+            [], dtype=np.float64))
         assert_series_equal(grouped.sum(), exp)
         assert_series_equal(grouped.agg(np.sum), exp)
         assert_series_equal(grouped.apply(np.sum), exp, check_index_type=False)
@@ -761,7 +777,8 @@ def test_agg_apply_corner(self):
         # DataFrame
         grouped = self.tsframe.groupby(self.tsframe['A'] * np.nan)
         exp_df = DataFrame(columns=self.tsframe.columns, dtype=float,
-                           index=pd.Index([], dtype=np.float64))
+                           index=pd.Index(
+                               [], dtype=np.float64))
         assert_frame_equal(grouped.sum(), exp_df, check_names=False)
         assert_frame_equal(grouped.agg(np.sum), exp_df, check_names=False)
         assert_frame_equal(grouped.apply(np.sum), DataFrame({}, dtype=float))
@@ -787,8 +804,8 @@ def test_agg_grouping_is_list_tuple(self):

     def test_grouping_error_on_multidim_input(self):
         from pandas.core.groupby import Grouping
-        self.assertRaises(ValueError, \
-                          Grouping, self.df.index, self.df[['A','A']])
+        self.assertRaises(ValueError,
+                          Grouping, self.df.index, self.df[['A', 'A']])

     def test_agg_python_multiindex(self):
         grouped = self.mframe.groupby(['A', 'B'])
@@ -799,12 +816,12 @@ def test_agg_python_multiindex(self):

     def test_apply_describe_bug(self):
         grouped = self.mframe.groupby(level='first')
-        result = grouped.describe()  # it works!
+        grouped.describe()  # it works!

     def test_apply_issues(self):
         # GH 5788

-        s="""2011.05.16,00:00,1.40893
+        s = """2011.05.16,00:00,1.40893
2011.05.16,01:00,1.40760
2011.05.16,02:00,1.40750
2011.05.16,03:00,1.40649
@@ -817,27 +834,34 @@ def test_apply_issues(self):
2011.05.18,04:00,1.40750
2011.05.18,05:00,1.40649"""

-        df = pd.read_csv(StringIO(s), header=None, names=['date', 'time', 'value'], parse_dates=[['date', 'time']])
+        df = pd.read_csv(
+            StringIO(s), header=None, names=['date', 'time', 'value'],
+            parse_dates=[['date', 'time']])
         df = df.set_index('date_time')

         expected = df.groupby(df.index.date).idxmax()
         result = df.groupby(df.index.date).apply(lambda x: x.idxmax())
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)

         # GH 5789
         # don't auto coerce dates
-        df = pd.read_csv(StringIO(s), header=None, names=['date', 'time', 'value'])
-        exp_idx = pd.Index(['2011.05.16','2011.05.17','2011.05.18'], dtype=object, name='date')
-        expected = Series(['00:00','02:00','02:00'], index=exp_idx)
-        result = df.groupby('date').apply(lambda x: x['time'][x['value'].idxmax()])
+        df = pd.read_csv(
+            StringIO(s), header=None, names=['date', 'time', 'value'])
+        exp_idx = pd.Index(
+            ['2011.05.16', '2011.05.17', '2011.05.18'
+             ], dtype=object, name='date')
+        expected = Series(['00:00', '02:00', '02:00'], index=exp_idx)
+        result = df.groupby('date').apply(
+            lambda x: x['time'][x['value'].idxmax()])
         assert_series_equal(result, expected)

     def test_time_field_bug(self):
-        # Test a fix for the following error related to GH issue 11324
-        # When non-key fields in a group-by dataframe contained time-based fields that
-        # were not returned by the apply function, an exception would be raised.
+        # Test a fix for the following error related to GH issue 11324 When
+        # non-key fields in a group-by dataframe contained time-based fields
+        # that were not returned by the apply function, an exception would be
+        # raised.

-        df = pd.DataFrame({'a': 1,'b': [datetime.now() for nn in range(10)]})
+        df = pd.DataFrame({'a': 1, 'b': [datetime.now() for nn in range(10)]})

         def func_with_no_date(batch):
             return pd.Series({'c': 2})
@@ -850,7 +874,9 @@ def func_with_date(batch):
         dfg_no_conversion_expected.index.name = 'a'

         dfg_conversion = df.groupby(by=['a']).apply(func_with_date)
-        dfg_conversion_expected = pd.DataFrame({'b': datetime(2015, 1, 1), 'c': 2}, index=[1])
+        dfg_conversion_expected = pd.DataFrame(
+            {'b': datetime(2015, 1, 1),
+             'c': 2}, index=[1])
         dfg_conversion_expected.index.name = 'a'

         self.assert_frame_equal(dfg_no_conversion, dfg_no_conversion_expected)
@@ -858,18 +884,16 @@ def func_with_date(batch):

     def test_len(self):
         df = tm.makeTimeDataFrame()
-        grouped = df.groupby([lambda x: x.year,
-                              lambda x: x.month,
+        grouped = df.groupby([lambda x: x.year, lambda x: x.month,
                               lambda x: x.day])
         self.assertEqual(len(grouped), len(df))

-        grouped = df.groupby([lambda x: x.year,
-                              lambda x: x.month])
+        grouped = df.groupby([lambda x: x.year, lambda x: x.month])
         expected = len(set([(x.year, x.month) for x in df.index]))
         self.assertEqual(len(grouped), expected)

         # issue 11016
-        df = pd.DataFrame(dict(a=[np.nan]*3, b=[1,2,3]))
+        df = pd.DataFrame(dict(a=[np.nan] * 3, b=[1, 2, 3]))
         self.assertEqual(len(df.groupby(('a'))), 0)
         self.assertEqual(len(df.groupby(('b'))), 3)
         self.assertEqual(len(df.groupby(('a', 'b'))), 3)
@@ -890,7 +914,6 @@ def test_groups(self):
             self.assertTrue((self.df.ix[v]['B'] == k[1]).all())

     def test_aggregate_str_func(self):
-
         def _check_results(grouped):
             # single series
             result = grouped['A'].agg('std')
@@ -903,14 +926,11 @@ def _check_results(grouped):
             assert_frame_equal(result, expected)

             # group frame by function dict
-            result = grouped.agg(OrderedDict([['A', 'var'],
-                                              ['B', 'std'],
-                                              ['C', 'mean'],
-                                              ['D', 'sem']]))
-            expected = DataFrame(OrderedDict([['A', grouped['A'].var()],
-                                              ['B', grouped['B'].std()],
-                                              ['C', grouped['C'].mean()],
-                                              ['D', grouped['D'].sem()]]))
+            result = grouped.agg(OrderedDict([['A', 'var'], ['B', 'std'],
+                                              ['C', 'mean'], ['D', 'sem']]))
+            expected = DataFrame(OrderedDict([['A', grouped['A'].var(
+            )], ['B', grouped['B'].std()], ['C', grouped['C'].mean()],
+                ['D', grouped['D'].sem()]]))
             assert_frame_equal(result, expected)

         by_weekday = self.tsframe.groupby(lambda x: x.weekday())
@@ -940,11 +960,14 @@ def test_aggregate_item_by_item(self):

         # GH5782
         # odd comparisons can result here, so cast to make easy
-        assert_almost_equal(result.xs('foo'), np.array([foo] * K).astype('float64'))
-        assert_almost_equal(result.xs('bar'), np.array([bar] * K).astype('float64'))
+        assert_almost_equal(
+            result.xs('foo'), np.array([foo] * K).astype('float64'))
+        assert_almost_equal(
+            result.xs('bar'), np.array([bar] * K).astype('float64'))

         def aggfun(ser):
             return ser.size
+
         result = DataFrame().groupby(self.df.A).agg(aggfun)
         tm.assertIsInstance(result, DataFrame)
         self.assertEqual(len(result), 0)
@@ -959,15 +982,14 @@ def raiseException(df):
             com.pprint_thing(df.to_string())
             raise TypeError

-        self.assertRaises(TypeError, df.groupby(0).agg,
-                          raiseException)
+        self.assertRaises(TypeError, df.groupby(0).agg, raiseException)

     def test_basic_regression(self):
         # regression
         T = [1.0 * x for x in lrange(1, 10) * 10][:1095]
         result = Series(T, lrange(0, len(T)))

-        groupings = np.random.random((1100,))
+        groupings = np.random.random((1100, ))
         groupings = Series(groupings, lrange(0, len(groupings))) * 10.

         grouped = result.groupby(groupings)
@@ -988,10 +1010,14 @@ def test_transform(self):

         # GH 8046
         # make sure that we preserve the input order
-        df = DataFrame(np.arange(6,dtype='int64').reshape(3,2), columns=["a","b"], index=[0,2,1])
-        key = [0,0,1]
-        expected = df.sort_index().groupby(key).transform(lambda x: x-x.mean()).groupby(key).mean()
-        result = df.groupby(key).transform(lambda x: x-x.mean()).groupby(key).mean()
+        df = DataFrame(
+            np.arange(6, dtype='int64').reshape(
+                3, 2), columns=["a", "b"], index=[0, 2, 1])
+        key = [0, 0, 1]
+        expected = df.sort_index().groupby(key).transform(
+            lambda x: x - x.mean()).groupby(key).mean()
+        result = df.groupby(key).transform(lambda x: x - x.mean()).groupby(
+            key).mean()
         assert_frame_equal(result, expected)

         def demean(arr):
@@ -1008,28 +1034,29 @@ def demean(arr):
         # GH 8430
         df = tm.makeTimeDataFrame()
         g = df.groupby(pd.TimeGrouper('M'))
-        g.transform(lambda x: x-1)
+        g.transform(lambda x: x - 1)

         # GH 9700
-        df = DataFrame({'a' : range(5, 10), 'b' : range(5)})
+        df = DataFrame({'a': range(5, 10), 'b': range(5)})
         result = df.groupby('a').transform(max)
-        expected = DataFrame({'b' : range(5)})
+        expected = DataFrame({'b': range(5)})
         tm.assert_frame_equal(result, expected)

     def test_transform_fast(self):
-        df = DataFrame( { 'id' : np.arange( 100000 ) / 3,
-                          'val': np.random.randn( 100000) } )
+        df = DataFrame({'id': np.arange(100000) / 3,
+                        'val': np.random.randn(100000)})

-        grp=df.groupby('id')['val']
+        grp = df.groupby('id')['val']

-        values = np.repeat(grp.mean().values, com._ensure_platform_int(grp.count().values))
-        expected = pd.Series(values,index=df.index)
+        values = np.repeat(grp.mean().values,
+                           com._ensure_platform_int(grp.count().values))
+        expected = pd.Series(values, index=df.index)

         result = grp.transform(np.mean)
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         result = grp.transform('mean')
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

     def test_transform_broadcast(self):
         grouped = self.ts.groupby(lambda x: x.month)
@@ -1071,16 +1098,17 @@ def test_transform_dtype(self):

     def test_transform_bug(self):
         # GH 5712
         # transforming on a datetime column
-        df = DataFrame(dict(A = Timestamp('20130101'), B = np.arange(5)))
-        result = df.groupby('A')['B'].transform(lambda x: x.rank(ascending=False))
-        expected = Series(np.arange(5,0,step=-1),name='B')
-        assert_series_equal(result,expected)
+        df = DataFrame(dict(A=Timestamp('20130101'), B=np.arange(5)))
+        result = df.groupby('A')['B'].transform(
+            lambda x: x.rank(ascending=False))
+        expected = Series(np.arange(5, 0, step=-1), name='B')
+        assert_series_equal(result, expected)

     def test_transform_multiple(self):
         grouped = self.ts.groupby([lambda x: x.year, lambda x: x.month])

-        transformed = grouped.transform(lambda x: x * 2)
-        broadcasted = grouped.transform(np.mean)
+        grouped.transform(lambda x: x * 2)
+        grouped.transform(np.mean)

     def test_dispatch_transform(self):
         df = self.tsframe[::5].reindex(self.tsframe.index)
@@ -1125,10 +1153,12 @@ def test_transform_function_aliases(self):

     def test_transform_length(self):
         # GH 9697
-        df = pd.DataFrame({'col1':[1,1,2,2], 'col2':[1,2,3,np.nan]})
-        expected = pd.Series([3.0]*4)
+        df = pd.DataFrame({'col1': [1, 1, 2, 2], 'col2': [1, 2, 3, np.nan]})
+        expected = pd.Series([3.0] * 4)
+
         def nsum(x):
             return np.nansum(x)
+
         results = [df.groupby('col1').transform(sum)['col2'],
                    df.groupby('col1')['col2'].transform(sum),
                    df.groupby('col1').transform(nsum)['col2'],
@@ -1139,19 +1169,19 @@ def nsum(x):

     def test_with_na(self):
         index = Index(np.arange(10))

-        for dtype in ['float64','float32','int64','int32','int16','int8']:
+        for dtype in ['float64', 'float32', 'int64', 'int32', 'int16', 'int8']:
             values = Series(np.ones(10), index, dtype=dtype)
             labels = Series([nan, 'foo', 'bar', 'bar', nan, nan, 'bar',
                              'bar', nan, 'foo'], index=index)

-            # this SHOULD be an int
             grouped = values.groupby(labels)
             agged = grouped.agg(len)
             expected = Series([4, 2], index=['bar', 'foo'])

             assert_series_equal(agged, expected, check_dtype=False)
-            #self.assertTrue(issubclass(agged.dtype.type, np.integer))
+
+            # self.assertTrue(issubclass(agged.dtype.type, np.integer))

             # explicity return a float from my function
             def f(x):
@@ -1168,85 +1198,74 @@ def test_groupby_transform_with_int(self):

         # GH 3740, make sure that we might upcast on item-by-item transform

         # floats
-        df = DataFrame(dict(A = [1,1,1,2,2,2], B = Series(1,dtype='float64'), C = Series([1,2,3,1,2,3],dtype='float64'), D = 'foo'))
-        result = df.groupby('A').transform(lambda x: (x-x.mean())/x.std())
-        expected = DataFrame(dict(B = np.nan, C = Series([-1,0,1,-1,0,1],dtype='float64')))
-        assert_frame_equal(result,expected)
+        df = DataFrame(dict(A=[1, 1, 1, 2, 2, 2], B=Series(1, dtype='float64'),
+                            C=Series(
+                                [1, 2, 3, 1, 2, 3], dtype='float64'), D='foo'))
+        result = df.groupby('A').transform(lambda x: (x - x.mean()) / x.std())
+        expected = DataFrame(dict(B=np.nan, C=Series(
+            [-1, 0, 1, -1, 0, 1], dtype='float64')))
+        assert_frame_equal(result, expected)

         # int case
-        df = DataFrame(dict(A = [1,1,1,2,2,2], B = 1, C = [1,2,3,1,2,3], D = 'foo'))
-        result = df.groupby('A').transform(lambda x: (x-x.mean())/x.std())
-        expected = DataFrame(dict(B = np.nan, C = [-1,0,1,-1,0,1]))
-        assert_frame_equal(result,expected)
+        df = DataFrame(dict(A=[1, 1, 1, 2, 2, 2], B=1,
+                            C=[1, 2, 3, 1, 2, 3], D='foo'))
+        result = df.groupby('A').transform(lambda x: (x - x.mean()) / x.std())
+        expected = DataFrame(dict(B=np.nan, C=[-1, 0, 1, -1, 0, 1]))
+        assert_frame_equal(result, expected)

         # int that needs float conversion
-        s = Series([2,3,4,10,5,-1])
-        df = DataFrame(dict(A = [1,1,1,2,2,2], B = 1, C = s, D = 'foo'))
-        result = df.groupby('A').transform(lambda x: (x-x.mean())/x.std())
+        s = Series([2, 3, 4, 10, 5, -1])
+        df = DataFrame(dict(A=[1, 1, 1, 2, 2, 2], B=1, C=s, D='foo'))
+        result = df.groupby('A').transform(lambda x: (x - x.mean()) / x.std())

         s1 = s.iloc[0:3]
-        s1 = (s1-s1.mean())/s1.std()
+        s1 = (s1 - s1.mean()) / s1.std()
         s2 = s.iloc[3:6]
-        s2 = (s2-s2.mean())/s2.std()
-        expected = DataFrame(dict(B = np.nan, C = concat([s1,s2])))
-        assert_frame_equal(result,expected)
+        s2 = (s2 - s2.mean()) / s2.std()
+        expected = DataFrame(dict(B=np.nan, C=concat([s1, s2])))
+        assert_frame_equal(result, expected)

         # int downcasting
-        result = df.groupby('A').transform(lambda x: x*2/2)
-        expected = DataFrame(dict(B = 1, C = [2,3,4,10,5,-1]))
-        assert_frame_equal(result,expected)
+        result = df.groupby('A').transform(lambda x: x * 2 / 2)
+        expected = DataFrame(dict(B=1, C=[2, 3, 4, 10, 5, -1]))
+        assert_frame_equal(result, expected)

     def test_indices_concatenation_order(self):

         # GH 2808

         def f1(x):
-            y = x[(x.b % 2) == 1]**2
+            y = x[(x.b % 2) == 1] ** 2
             if y.empty:
-                multiindex = MultiIndex(
-                    levels = [[]]*2,
-                    labels = [[]]*2,
-                    names = ['b', 'c']
-                )
-                res = DataFrame(None,
-                                columns=['a'],
-                                index=multiindex)
+                multiindex = MultiIndex(levels=[[]] * 2, labels=[[]] * 2,
                                        names=['b', 'c'])
+                res = DataFrame(None, columns=['a'], index=multiindex)
                 return res
             else:
-                y = y.set_index(['b','c'])
+                y = y.set_index(['b', 'c'])
                 return y

         def f2(x):
-            y = x[(x.b % 2) == 1]**2
+            y = x[(x.b % 2) == 1] ** 2
             if y.empty:
                 return DataFrame()
             else:
-                y = y.set_index(['b','c'])
+                y = y.set_index(['b', 'c'])
                 return y

         def f3(x):
-            y = x[(x.b % 2) == 1]**2
+            y = x[(x.b % 2) == 1] ** 2
             if y.empty:
-                multiindex = MultiIndex(
-                    levels = [[]]*2,
-                    labels = [[]]*2,
-                    names = ['foo', 'bar']
-                )
-                res = DataFrame(None,
-                                columns=['a','b'],
-                                index=multiindex)
+                multiindex = MultiIndex(levels=[[]] * 2, labels=[[]] * 2,
+                                        names=['foo', 'bar'])
+                res = DataFrame(None, columns=['a', 'b'], index=multiindex)
                 return res
             else:
                 return y

-        df = DataFrame({'a':[1,2,2,2],
-                        'b':lrange(4),
-                        'c':lrange(5,9)})
-
-        df2 = DataFrame({'a':[3,2,2,2],
-                         'b':lrange(4),
-                         'c':lrange(5,9)})
+        df = DataFrame({'a': [1, 2, 2, 2], 'b': lrange(4), 'c': lrange(5, 9)})
+        df2 = DataFrame({'a': [3, 2, 2, 2], 'b': lrange(4), 'c': lrange(5, 9)})

         # correct result
         result1 = df.groupby('a').apply(f1)
@@ -1307,21 +1326,19 @@ def test_series_agg_multikey(self):
         assert_series_equal(result, expected)

     def test_series_agg_multi_pure_python(self):
-        data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
-                                'bar', 'bar', 'bar', 'bar',
-                                'foo', 'foo', 'foo'],
-                          'B': ['one', 'one', 'one', 'two',
-                                'one', 'one', 'one', 'two',
-                                'two', 'two', 'one'],
-                          'C': ['dull', 'dull', 'shiny', 'dull',
-                                'dull', 'shiny', 'shiny', 'dull',
-                                'shiny', 'shiny', 'shiny'],
-                          'D': np.random.randn(11),
-                          'E': np.random.randn(11),
-                          'F': np.random.randn(11)})
+        data = DataFrame(
+            {'A': ['foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar',
+                   'foo', 'foo', 'foo'],
+             'B': ['one', 'one', 'one', 'two', 'one', 'one', 'one', 'two',
+                   'two', 'two', 'one'],
+             'C': ['dull', 'dull', 'shiny', 'dull', 'dull', 'shiny', 'shiny',
+                   'dull', 'shiny', 'shiny', 'shiny'],
+             'D': np.random.randn(11),
+             'E': np.random.randn(11),
+             'F': np.random.randn(11)})

         def bad(x):
-            assert(len(x.base) > 0)
+            assert (len(x.base) > 0)
             return 'foo'

         result = data.groupby(['A', 'B']).agg(bad)
@@ -1334,8 +1351,7 @@ def test_series_index_name(self):
         self.assertEqual(result.index.name, 'A')

     def test_frame_describe_multikey(self):
-        grouped = self.tsframe.groupby([lambda x: x.year,
-                                        lambda x: x.month])
+        grouped = self.tsframe.groupby([lambda x: x.year, lambda x: x.month])
         result = grouped.describe()

         for col in self.tsframe:
@@ -1391,17 +1407,15 @@ def test_frame_groupby(self):

     def test_grouping_is_iterable(self):
         # this code path isn't used anywhere else
         # not sure it's useful
-        grouped = self.tsframe.groupby([lambda x: x.weekday(),
-                                        lambda x: x.year])
+        grouped = self.tsframe.groupby([lambda x: x.weekday(), lambda x: x.year
+                                        ])

         # test it works
         for g in grouped.grouper.groupings[0]:
             pass

     def test_frame_groupby_columns(self):
-        mapping = {
-            'A': 0, 'B': 0, 'C': 1, 'D': 1
-        }
+        mapping = {'A': 0, 'B': 0, 'C': 1, 'D': 1}
         grouped = self.tsframe.groupby(mapping, axis=1)

         # aggregate
@@ -1448,41 +1462,41 @@ def test_aggregate_api_consistency(self):

         # make sure that the aggregates via dict
         # are consistent
-
         def compare(result, expected):
-            # if we ar passin dicts then ordering is not guaranteed for output columns
+            # if we ar passin dicts then ordering is not guaranteed for output
+            # columns
             assert_frame_equal(result.reindex_like(expected), expected)

-
-        df = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
-                               'foo', 'bar', 'foo', 'foo'],
-                        'B' : ['one', 'one', 'two', 'three',
-                               'two', 'two', 'one', 'three'],
-                        'C' : np.random.randn(8),
-                        'D' : np.random.randn(8)})
+        df = DataFrame(
+            {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
+             'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
+             'C': np.random.randn(8),
+             'D': np.random.randn(8)})

         grouped = df.groupby(['A', 'B'])

-        result = grouped[['D','C']].agg({'r':np.sum, 'r2':np.mean})
-        expected = pd.concat([grouped[['D','C']].sum(),
-                              grouped[['D','C']].mean()],
-                             keys=['r','r2'],
+        result = grouped[['D', 'C']].agg({'r': np.sum, 'r2': np.mean})
+        expected = pd.concat([grouped[['D', 'C']].sum(),
+                              grouped[['D', 'C']].mean()],
+                             keys=['r', 'r2'],
                              axis=1).stack(level=1)
         compare(result, expected)

-        result = grouped[['D','C']].agg({'r': { 'C' : np.sum }, 'r2' : { 'D' : np.mean }})
+        result = grouped[['D', 'C']].agg({'r': {'C': np.sum},
+                                          'r2': {'D': np.mean}})
         expected = pd.concat([grouped[['C']].sum(),
                               grouped[['D']].mean()],
                              axis=1)
-        expected.columns = MultiIndex.from_tuples([('r','C'),('r2','D')])
+        expected.columns = MultiIndex.from_tuples([('r', 'C'), ('r2', 'D')])
         compare(result, expected)

-        result = grouped[['D','C']].agg([np.sum, np.mean])
+        result = grouped[['D', 'C']].agg([np.sum, np.mean])
         expected = pd.concat([grouped['D'].sum(),
                               grouped['D'].mean(),
                               grouped['C'].sum(),
                               grouped['C'].mean()],
                              axis=1)
-        expected.columns = MultiIndex.from_product([['D','C'],['sum','mean']])
+        expected.columns = MultiIndex.from_product([['D', 'C'], ['sum', 'mean']
+                                                    ])
         compare(result, expected)

     def test_multi_iter(self):
@@ -1493,10 +1507,8 @@ def test_multi_iter(self):

         grouped = s.groupby([k1, k2])

         iterated = list(grouped)
-        expected = [('a', '1', s[[0, 2]]),
-                    ('a', '2', s[[1]]),
-                    ('b', '1', s[[4]]),
-                    ('b', '2', s[[3, 5]])]
+        expected = [('a', '1', s[[0, 2]]), ('a', '2', s[[1]]),
+                    ('b', '1', s[[4]]), ('b', '2', s[[3, 5]])]
         for i, ((one, two), three) in enumerate(iterated):
             e1, e2, e3 = expected[i]
             self.assertEqual(e1, one)
@@ -1547,7 +1559,8 @@ def test_multi_iter_panel(self):
                              axis=1)

         for (month, wd), group in grouped:
-            exp_axis = [x for x in wp.major_axis
+            exp_axis = [x
+                        for x in wp.major_axis
                         if x.month == month and x.weekday() == wd]
             expected = wp.reindex(major=exp_axis)
             assert_panel_equal(group, expected)
@@ -1559,8 +1572,7 @@ def test_multi_func(self):
         grouped = self.df.groupby([col1.get, col2.get])
         agged = grouped.mean()
         expected = self.df.groupby(['A', 'B']).mean()
-        assert_frame_equal(agged.ix[:, ['C', 'D']],
-                           expected.ix[:, ['C', 'D']],
+        assert_frame_equal(agged.ix[:, ['C', 'D']], expected.ix[:, ['C', 'D']],
                            check_names=False)  # TODO groupby get drops names

         # some "groups" with no data
@@ -1582,18 +1594,16 @@ def test_multi_key_multiple_functions(self):
         assert_frame_equal(agged, expected)

     def test_frame_multi_key_function_list(self):
-        data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
-                                'bar', 'bar', 'bar', 'bar',
-                                'foo', 'foo', 'foo'],
-                          'B': ['one', 'one', 'one', 'two',
-                                'one', 'one', 'one', 'two',
-                                'two', 'two', 'one'],
-                          'C': ['dull', 'dull', 'shiny', 'dull',
-                                'dull', 'shiny', 'shiny', 'dull',
-                                'shiny', 'shiny', 'shiny'],
-                          'D': np.random.randn(11),
-                          'E': np.random.randn(11),
-                          'F': np.random.randn(11)})
+        data = DataFrame(
+            {'A': ['foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar',
+                   'foo', 'foo', 'foo'],
+             'B': ['one', 'one', 'one', 'two', 'one', 'one', 'one', 'two',
+                   'two', 'two', 'one'],
+             'C': ['dull', 'dull', 'shiny', 'dull', 'dull', 'shiny', 'shiny',
+                   'dull', 'shiny', 'shiny', 'shiny'],
+             'D': np.random.randn(11),
+             'E': np.random.randn(11),
+             'F': np.random.randn(11)})

         grouped = data.groupby(['A', 'B'])
         funcs = [np.mean, np.std]
@@ -1601,8 +1611,8 @@ def test_frame_multi_key_function_list(self):
         expected = concat([grouped['D'].agg(funcs), grouped['E'].agg(funcs),
                            grouped['F'].agg(funcs)],
                           keys=['D', 'E', 'F'], axis=1)
-        assert(isinstance(agged.index, MultiIndex))
-        assert(isinstance(expected.index, MultiIndex))
+        assert (isinstance(agged.index, MultiIndex))
+        assert (isinstance(expected.index, MultiIndex))
         assert_frame_equal(agged, expected)

     def test_groupby_multiple_columns(self):
@@ -1617,7 +1627,8 @@ def _check_op(op):
             for n1, gp1 in data.groupby('A'):
                 for n2, gp2 in gp1.groupby('B'):
                     expected[n1][n2] = op(gp2.ix[:, ['C', 'D']])
-            expected = dict((k, DataFrame(v)) for k, v in compat.iteritems(expected))
+            expected = dict((k, DataFrame(v))
+                            for k, v in compat.iteritems(expected))
             expected = Panel.fromDict(expected).swapaxes(0, 1)
             expected.major_axis.name, expected.minor_axis.name = 'A', 'B'
@@ -1683,7 +1694,7 @@ def test_groupby_as_index_agg(self):
         ts = Series(np.random.randint(5, 10, 50), name='jim')

         gr = df.groupby(ts)
-        _ = gr.nth(0)  # invokes _set_selection_from_grouper internally
+        gr.nth(0)  # invokes set_selection_from_grouper internally
         assert_frame_equal(gr.apply(sum), df.groupby(ts).apply(sum))

         for attr in ['mean', 'max', 'count', 'idxmax', 'cumsum', 'all']:
@@ -1711,11 +1722,13 @@ def check_nunique(df, keys):

         days = date_range('2015-08-23', periods=10)

-        for n, m in product(10**np.arange(2, 6), (10, 100, 1000)):
+        for n, m in product(10 ** np.arange(2, 6), (10, 100, 1000)):
             frame = DataFrame({
-                'jim':np.random.choice(list(ascii_lowercase), n),
-                'joe':np.random.choice(days, n),
-                'julie':np.random.randint(0, m, n)})
+                'jim': np.random.choice(
+                    list(ascii_lowercase), n),
+                'joe': np.random.choice(days, n),
+                'julie': np.random.randint(0, m, n)
+            })

             check_nunique(frame, ['jim'])
             check_nunique(frame, ['jim', 'joe'])
@@ -1743,8 +1756,7 @@ def check_value_counts(df, keys, bins):
                 in product((False, True), repeat=5):

             kwargs = dict(normalize=normalize, sort=sort,
-                          ascending=ascending, dropna=dropna,
-                          bins=bins)
+                          ascending=ascending, dropna=dropna, bins=bins)

             gr = df.groupby(keys, sort=isort)
             left = gr['3rd'].value_counts(**kwargs)
@@ -1754,7 +1766,7 @@ def check_value_counts(df, keys, bins):
             right.index.names = right.index.names[:-1] + ['3rd']

             # have to sort on index because of unstable sort on values
-            left, right = map(rebuild_index, (left, right)) # xref GH9212
+            left, right = map(rebuild_index, (left, right))  # xref GH9212
             assert_series_equal(left.sort_index(), right.sort_index())

         def loop(df):
@@ -1767,9 +1779,11 @@ def loop(df):

         for n, m in product((100, 10000), (5, 20)):
             frame = DataFrame({
-                '1st':np.random.choice(list('abcd'), n),
-                '2nd':np.random.choice(days, n),
-                '3rd':np.random.randint(1, m + 1, n)})
+                '1st': np.random.choice(
+                    list('abcd'), n),
+                '2nd': np.random.choice(days, n),
+                '3rd': np.random.randint(1, m + 1, n)
+            })

             loop(frame)
@@ -1785,10 +1799,10 @@ def test_mulitindex_passthru(self):

         # GH 7997
         # regression from 0.14.1
-        df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]])
-        df.columns = pd.MultiIndex.from_tuples([(0,1),(1,1),(2,1)])
+        df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
+        df.columns = pd.MultiIndex.from_tuples([(0, 1), (1, 1), (2, 1)])

-        result = df.groupby(axis=1, level=[0,1]).first()
+        result = df.groupby(axis=1, level=[0, 1]).first()
         assert_frame_equal(result, df)

     def test_multifunc_select_col_integer_cols(self):
@@ -1796,7 +1810,7 @@ def test_multifunc_select_col_integer_cols(self):
         df.columns = np.arange(len(df.columns))

         # it works!
-        result = df.groupby(1, as_index=False)[2].agg({'Q': np.mean})
+        df.groupby(1, as_index=False)[2].agg({'Q': np.mean})

     def test_as_index_series_return_frame(self):
         grouped = self.df.groupby('A', as_index=False)
@@ -1823,8 +1837,7 @@ def test_as_index_series_return_frame(self):
         assert_frame_equal(result2, expected2)

         # corner case
-        self.assertRaises(Exception, grouped['C'].__getitem__,
-                          'D')
+        self.assertRaises(Exception, grouped['C'].__getitem__, 'D')

     def test_groupby_as_index_cython(self):
         data = self.df
@@ -1858,16 +1871,16 @@ def test_groupby_as_index_series_scalar(self):
         assert_frame_equal(result, expected)

     def test_groupby_as_index_corner(self):
-        self.assertRaises(TypeError, self.ts.groupby,
-                          lambda x: x.weekday(), as_index=False)
+        self.assertRaises(TypeError, self.ts.groupby, lambda x: x.weekday(),
+                          as_index=False)

-        self.assertRaises(ValueError, self.df.groupby,
-                          lambda x: x.lower(), as_index=False, axis=1)
+        self.assertRaises(ValueError, self.df.groupby, lambda x: x.lower(),
+                          as_index=False, axis=1)

     def test_groupby_as_index_apply(self):
         # GH #4648 and #3417
         df = DataFrame({'item_id': ['b', 'b', 'a', 'c', 'a', 'b'],
-                        'user_id': [1,2,1,1,3,1],
+                        'user_id': [1, 2, 1, 1, 3, 1],
                         'time': range(6)})

         g_as = df.groupby('user_id', as_index=True)
@@ -1884,7 +1897,8 @@ def test_groupby_as_index_apply(self):

         # apply doesn't maintain the original ordering
         # changed in GH5610 as the as_index=False returns a MI here
-        exp_not_as_apply = MultiIndex.from_tuples([(0, 0), (0, 2), (1, 1), (2, 4)])
+        exp_not_as_apply = MultiIndex.from_tuples([(0, 0), (0, 2), (1, 1), (
+            2, 4)])
         tp = [(1, 0), (1, 2), (2, 1), (3, 4)]
         exp_as_apply = MultiIndex.from_tuples(tp, names=['user_id', None])
@@ -1905,8 +1919,8 @@ def test_groupby_head_tail(self):
         assert_frame_equal(df.loc[[0, 2]], g_not_as.head(1))
         assert_frame_equal(df.loc[[1, 2]], g_not_as.tail(1))

-        empty_not_as = DataFrame(columns=df.columns, index=pd.Index([],
-                                 dtype=df.index.dtype))
+        empty_not_as = DataFrame(columns=df.columns, index=pd.Index(
+            [], dtype=df.index.dtype))
         empty_not_as['A'] = empty_not_as['A'].astype(df.A.dtype)
         empty_not_as['B'] = empty_not_as['B'].astype(df.B.dtype)
         assert_frame_equal(empty_not_as, g_not_as.head(0))
@@ -1914,7 +1928,7 @@ def test_groupby_head_tail(self):
         assert_frame_equal(empty_not_as, g_not_as.head(-1))
         assert_frame_equal(empty_not_as, g_not_as.tail(-1))

-        assert_frame_equal(df, g_not_as.head(7)) # contains all
+        assert_frame_equal(df, g_not_as.head(7))  # contains all
         assert_frame_equal(df, g_not_as.tail(7))

         # as_index=True, (used to be different)
@@ -1931,24 +1945,23 @@ def test_groupby_head_tail(self):
         assert_frame_equal(empty_as, g_as.head(-1))
         assert_frame_equal(empty_as, g_as.tail(-1))

-        assert_frame_equal(df_as, g_as.head(7)) # contains all
+        assert_frame_equal(df_as, g_as.head(7))  # contains all
         assert_frame_equal(df_as, g_as.tail(7))

         # test with selection
-        assert_frame_equal(g_as[[]].head(1), df_as.loc[[0,2], []])
-        assert_frame_equal(g_as[['A']].head(1), df_as.loc[[0,2], ['A']])
-        assert_frame_equal(g_as[['B']].head(1), df_as.loc[[0,2], ['B']])
-        assert_frame_equal(g_as[['A', 'B']].head(1), df_as.loc[[0,2]])
+        assert_frame_equal(g_as[[]].head(1), df_as.loc[[0, 2], []])
+        assert_frame_equal(g_as[['A']].head(1), df_as.loc[[0, 2], ['A']])
+        assert_frame_equal(g_as[['B']].head(1), df_as.loc[[0, 2], ['B']])
+        assert_frame_equal(g_as[['A', 'B']].head(1), df_as.loc[[0, 2]])

-        assert_frame_equal(g_not_as[[]].head(1), df_as.loc[[0,2], []])
-        assert_frame_equal(g_not_as[['A']].head(1), df_as.loc[[0,2], ['A']])
-        assert_frame_equal(g_not_as[['B']].head(1), df_as.loc[[0,2], ['B']])
-        assert_frame_equal(g_not_as[['A', 'B']].head(1), df_as.loc[[0,2]])
+        assert_frame_equal(g_not_as[[]].head(1), df_as.loc[[0, 2], []])
+        assert_frame_equal(g_not_as[['A']].head(1), df_as.loc[[0, 2], ['A']])
+        assert_frame_equal(g_not_as[['B']].head(1), df_as.loc[[0, 2], ['B']])
+        assert_frame_equal(g_not_as[['A', 'B']].head(1), df_as.loc[[0, 2]])
def test_groupby_multiple_key(self): df = tm.makeTimeDataFrame() - grouped = df.groupby([lambda x: x.year, - lambda x: x.month, + grouped = df.groupby([lambda x: x.year, lambda x: x.month, lambda x: x.day]) agged = grouped.sum() assert_almost_equal(df.values, agged.values) @@ -2062,20 +2075,21 @@ def test_nonsense_func(self): df = DataFrame([0]) self.assertRaises(Exception, df.groupby, lambda x: x + 'foo') - def test_builtins_apply(self): # GH8155 + def test_builtins_apply(self): # GH8155 df = pd.DataFrame(np.random.randint(1, 50, (1000, 2)), columns=['jim', 'joe']) df['jolie'] = np.random.randn(1000) for keys in ['jim', ['jim', 'joe']]: # single key & multi-key - if keys == 'jim': continue + if keys == 'jim': + continue for f in [max, min, sum]: fname = f.__name__ result = df.groupby(keys).apply(f) - _shape = result.shape + result.shape ngroups = len(df.drop_duplicates(subset=keys)) assert result.shape == (ngroups, 3), 'invalid frame shape: '\ - '{} (expected ({}, 3))'.format(result.shape, ngroups) + '{} (expected ({}, 3))'.format(result.shape, ngroups) assert_frame_equal(result, # numpy's equivalent function df.groupby(keys).apply(getattr(np, fname))) @@ -2093,11 +2107,11 @@ def test_cythonized_aggers(self): 'B': ['A', 'B'] * 6, 'C': np.random.randn(12)} df = DataFrame(data) - df.loc[2:10:2,'C'] = nan + df.loc[2:10:2, 'C'] = nan def _testit(name): - op = lambda x: getattr(x,name)() + op = lambda x: getattr(x, name)() # single column grouped = df.drop(['B'], axis=1).groupby('A') @@ -2135,7 +2149,9 @@ def _testit(name): def test_max_min_non_numeric(self): # #2700 - aa = DataFrame({'nn':[11,11,22,22],'ii':[1,2,3,4],'ss':4*['mama']}) + aa = DataFrame({'nn': [11, 11, 22, 22], + 'ii': [1, 2, 3, 4], + 'ss': 4 * ['mama']}) result = aa.groupby('nn').max() self.assertTrue('ss' in result) @@ -2171,7 +2187,9 @@ def test_cython_agg_nothing_to_agg_with_dates(self): def test_groupby_timedelta_cython_count(self): df = DataFrame({'g': list('ab' * 2), 'delt': 
np.arange(4).astype('timedelta64[ns]')}) - expected = Series([2, 2], index=pd.Index(['a', 'b'], name='g'), name='delt') + expected = Series([ + 2, 2 + ], index=pd.Index(['a', 'b'], name='g'), name='delt') result = df.groupby('g').delt.count() tm.assert_series_equal(expected, result) @@ -2179,10 +2197,10 @@ def test_cython_agg_frame_columns(self): # #2113 df = DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]}) - result = df.groupby(level=0, axis='columns').mean() - result = df.groupby(level=0, axis='columns').mean() - result = df.groupby(level=0, axis='columns').mean() - _ = df.groupby(level=0, axis='columns').mean() + df.groupby(level=0, axis='columns').mean() + df.groupby(level=0, axis='columns').mean() + df.groupby(level=0, axis='columns').mean() + df.groupby(level=0, axis='columns').mean() def test_wrap_aggregated_output_multindex(self): df = self.mframe.T @@ -2197,6 +2215,7 @@ def aggfun(ser): raise TypeError else: return ser.sum() + agged2 = df.groupby(keys).aggregate(aggfun) self.assertEqual(len(agged2.columns) + 1, len(df.columns)) @@ -2237,19 +2256,17 @@ def test_groupby_level(self): # raise exception for non-MultiIndex self.assertRaises(ValueError, self.df.groupby, level=1) - - - def test_groupby_level_index_names(self): - ## GH4014 this used to raise ValueError since 'exp'>1 (in py2) - df = DataFrame({'exp' : ['A']*3 + ['B']*3, 'var1' : lrange(6),}).set_index('exp') + # GH4014 this used to raise ValueError since 'exp'>1 (in py2) + df = DataFrame({'exp': ['A'] * 3 + ['B'] * 3, + 'var1': lrange(6), }).set_index('exp') df.groupby(level='exp') self.assertRaises(ValueError, df.groupby, level='foo') def test_groupby_level_with_nas(self): index = MultiIndex(levels=[[1, 0], [0, 1, 2, 3]], - labels=[[1, 1, 1, 1, 0, 0, 0, 0], - [0, 1, 2, 3, 0, 1, 2, 3]]) + labels=[[1, 1, 1, 1, 0, 0, 0, 0], [0, 1, 2, 3, 0, 1, + 2, 3]]) # factorizing doesn't confuse things s = Series(np.arange(8.), index=index) @@ -2258,8 +2275,8 @@ def test_groupby_level_with_nas(self): 
assert_series_equal(result, expected) index = MultiIndex(levels=[[1, 0], [0, 1, 2, 3]], - labels=[[1, 1, 1, 1, -1, 0, 0, 0], - [0, 1, 2, 3, 0, 1, 2, 3]]) + labels=[[1, 1, 1, 1, -1, 0, 0, 0], [0, 1, 2, 3, 0, + 1, 2, 3]]) # factorizing doesn't confuse things s = Series(np.arange(8.), index=index) @@ -2279,22 +2296,28 @@ def test_groupby_level_apply(self): self.assertEqual(result.index.name, 'first') def test_groupby_args(self): - #PR8618 and issue 8015 + # PR8618 and issue 8015 frame = self.mframe + def j(): - frame.groupby() - self.assertRaisesRegexp(TypeError, "You have to supply one of 'by' and 'level'", j) + frame.groupby() + + self.assertRaisesRegexp(TypeError, + "You have to supply one of 'by' and 'level'", + j) def k(): frame.groupby(by=None, level=None) - self.assertRaisesRegexp(TypeError, "You have to supply one of 'by' and 'level'", k) + + self.assertRaisesRegexp(TypeError, + "You have to supply one of 'by' and 'level'", + k) def test_groupby_level_mapper(self): frame = self.mframe deleveled = frame.reset_index() - mapper0 = {'foo': 0, 'bar': 0, - 'baz': 1, 'qux': 1} + mapper0 = {'foo': 0, 'bar': 0, 'baz': 1, 'qux': 1} mapper1 = {'one': 0, 'two': 0, 'three': 1} result0 = frame.groupby(mapper0, level=0).sum() @@ -2312,7 +2335,7 @@ def test_groupby_level_mapper(self): def test_groupby_level_0_nonmulti(self): # #1313 a = Series([1, 2, 3, 10, 4, 5, 20, 6], Index([1, 2, 3, 1, - 4, 5, 2, 6], name='foo')) + 4, 5, 2, 6], name='foo')) result = a.groupby(level=0).sum() self.assertEqual(result.index.name, a.index.name) @@ -2386,8 +2409,7 @@ def test_apply_transform(self): assert_series_equal(result, expected) def test_apply_multikey_corner(self): - grouped = self.tsframe.groupby([lambda x: x.year, - lambda x: x.month]) + grouped = self.tsframe.groupby([lambda x: x.year, lambda x: x.month]) def f(group): return group.sort_values('A')[-5:] @@ -2401,11 +2423,12 @@ def test_mutate_groups(self): # GH3380 mydf = DataFrame({ - 'cat1' : ['a'] * 8 + ['b'] * 6, - 'cat2' : ['c'] 
* 2 + ['d'] * 2 + ['e'] * 2 + ['f'] * 2 + ['c'] * 2 + ['d'] * 2 + ['e'] * 2, - 'cat3' : lmap(lambda x: 'g%s' % x, lrange(1,15)), - 'val' : np.random.randint(100, size=14), - }) + 'cat1': ['a'] * 8 + ['b'] * 6, + 'cat2': ['c'] * 2 + ['d'] * 2 + ['e'] * 2 + ['f'] * 2 + ['c'] * 2 + + ['d'] * 2 + ['e'] * 2, + 'cat3': lmap(lambda x: 'g%s' % x, lrange(1, 15)), + 'val': np.random.randint(100, size=14), + }) def f_copy(x): x = x.copy() @@ -2416,17 +2439,16 @@ def f_no_copy(x): x['rank'] = x.val.rank(method='min') return x.groupby('cat2')['rank'].min() - grpby_copy = mydf.groupby('cat1').apply(f_copy) + grpby_copy = mydf.groupby('cat1').apply(f_copy) grpby_no_copy = mydf.groupby('cat1').apply(f_no_copy) - assert_series_equal(grpby_copy,grpby_no_copy) + assert_series_equal(grpby_copy, grpby_no_copy) def test_no_mutate_but_looks_like(self): # GH 8467 # first show's mutation indicator # second does not, but should yield the same results - df = DataFrame({'key': [1, 1, 1, 2, 2, 2, 3, 3, 3], - 'value': range(9)}) + df = DataFrame({'key': [1, 1, 1, 2, 2, 2, 3, 3, 3], 'value': range(9)}) result1 = df.groupby('key', group_keys=True).apply(lambda x: x[:].key) result2 = df.groupby('key', group_keys=True).apply(lambda x: x.key) @@ -2451,7 +2473,7 @@ def test_apply_no_name_column_conflict(self): # it works! 
#2605 grouped = df.groupby(['name', 'name2']) - grouped.apply(lambda x: x.sort_values('value',inplace=True)) + grouped.apply(lambda x: x.sort_values('value', inplace=True)) def test_groupby_series_indexed_differently(self): s1 = Series([5.0, -9.0, 4.0, 100., -5., 55., 6.7], @@ -2465,15 +2487,13 @@ def test_groupby_series_indexed_differently(self): assert_series_equal(agged, exp) def test_groupby_with_hier_columns(self): - tuples = list(zip(*[['bar', 'bar', 'baz', 'baz', - 'foo', 'foo', 'qux', 'qux'], - ['one', 'two', 'one', 'two', - 'one', 'two', 'one', 'two']])) + tuples = list(zip(*[['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', + 'qux'], ['one', 'two', 'one', 'two', 'one', 'two', + 'one', 'two']])) index = MultiIndex.from_tuples(tuples) - columns = MultiIndex.from_tuples([('A', 'cat'), ('B', 'dog'), - ('B', 'cat'), ('A', 'dog')]) - df = DataFrame(np.random.randn(8, 4), index=index, - columns=columns) + columns = MultiIndex.from_tuples([('A', 'cat'), ('B', 'dog'), ( + 'B', 'cat'), ('A', 'dog')]) + df = DataFrame(np.random.randn(8, 4), index=index, columns=columns) result = df.groupby(level=0).mean() self.assertTrue(result.columns.equals(columns)) @@ -2502,6 +2522,7 @@ def test_pass_args_kwargs(self): def f(x, q=None, axis=0): return percentile(x, q, axis=axis) + g = lambda x: percentile(x, 80, axis=0) # Series @@ -2537,14 +2558,6 @@ def f(x, q=None, axis=0): assert_frame_equal(agg_result, expected) assert_frame_equal(apply_result, expected) - # def test_cython_na_bug(self): - # values = np.random.randn(10) - # shape = (5, 5) - # label_list = [np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2], dtype=np.int32), - # np.array([1, 2, 3, 4, 0, 1, 2, 3, 3, 4], dtype=np.int32)] - - # lib.group_aggregate(values, label_list, shape) - def test_size(self): grouped = self.df.groupby(['A', 'B']) result = grouped.size() @@ -2578,15 +2591,18 @@ def test_count(self): dr = date_range('2015-08-30', periods=n // 10, freq='T') df = DataFrame({ - '1st':np.random.choice(list(ascii_lowercase), 
n), - '2nd':np.random.randint(0, 5, n), - '3rd':np.random.randn(n).round(3), - '4th':np.random.randint(-10, 10, n), - '5th':np.random.choice(dr, n), - '6th':np.random.randn(n).round(3), - '7th':np.random.randn(n).round(3), - '8th':np.random.choice(dr, n) - np.random.choice(dr, 1), - '9th':np.random.choice(list(ascii_lowercase), n)}) + '1st': np.random.choice( + list(ascii_lowercase), n), + '2nd': np.random.randint(0, 5, n), + '3rd': np.random.randn(n).round(3), + '4th': np.random.randint(-10, 10, n), + '5th': np.random.choice(dr, n), + '6th': np.random.randn(n).round(3), + '7th': np.random.randn(n).round(3), + '8th': np.random.choice(dr, n) - np.random.choice(dr, 1), + '9th': np.random.choice( + list(ascii_lowercase), n) + }) for col in df.columns.drop(['1st', '2nd', '4th']): df.loc[np.random.choice(n, n // 10), col] = np.nan @@ -2606,8 +2622,9 @@ def test_count(self): count_as = df.groupby('A').count() count_not_as = df.groupby('A', as_index=False).count() - expected = DataFrame([[1, 2], [0, 0]], columns=['B', 'C'], index=[1,3]) - expected.index.name='A' + expected = DataFrame([[1, 2], [0, 0]], columns=['B', 'C'], + index=[1, 3]) + expected.index.name = 'A' assert_frame_equal(count_not_as, expected.reset_index()) assert_frame_equal(count_as, expected) @@ -2615,24 +2632,27 @@ def test_count(self): assert_series_equal(count_B, expected['B']) def test_count_object(self): - df = pd.DataFrame({'a': ['a'] * 3 + ['b'] * 3, - 'c': [2] * 3 + [3] * 3}) + df = pd.DataFrame({'a': ['a'] * 3 + ['b'] * 3, 'c': [2] * 3 + [3] * 3}) result = df.groupby('c').a.count() - expected = pd.Series([3, 3], index=pd.Index([2, 3], name='c'), name='a') + expected = pd.Series([ + 3, 3 + ], index=pd.Index([2, 3], name='c'), name='a') tm.assert_series_equal(result, expected) df = pd.DataFrame({'a': ['a', np.nan, np.nan] + ['b'] * 3, 'c': [2] * 3 + [3] * 3}) result = df.groupby('c').a.count() - expected = pd.Series([1, 3], index=pd.Index([2, 3], name='c'), name='a') + expected = pd.Series([ + 1, 3 
+ ], index=pd.Index([2, 3], name='c'), name='a') tm.assert_series_equal(result, expected) def test_count_cross_type(self): # GH8169 - vals = np.hstack((np.random.randint(0,5,(100,2)), - np.random.randint(0,2,(100,2)))) + vals = np.hstack((np.random.randint(0, 5, (100, 2)), np.random.randint( + 0, 2, (100, 2)))) df = pd.DataFrame(vals, columns=['a', 'b', 'c', 'd']) - df[df==2] = np.nan + df[df == 2] = np.nan expected = df.groupby(['c', 'd']).count() for t in ['float32', 'object']: @@ -2646,62 +2666,76 @@ def test_non_cython_api(self): # GH5610 # non-cython calls should not include the grouper - df = DataFrame([[1, 2, 'foo'], [1, nan, 'bar',], [3, nan, 'baz']], columns=['A', 'B','C']) + df = DataFrame( + [[1, 2, 'foo'], [1, + nan, + 'bar', ], [3, nan, 'baz'] + ], columns=['A', 'B', 'C']) g = df.groupby('A') - gni = df.groupby('A',as_index=False) + gni = df.groupby('A', as_index=False) # mad - expected = DataFrame([[0],[nan]],columns=['B'],index=[1,3]) + expected = DataFrame([[0], [nan]], columns=['B'], index=[1, 3]) expected.index.name = 'A' result = g.mad() - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) - expected = DataFrame([[0.,0.],[0,nan]],columns=['A','B'],index=[0,1]) + expected = DataFrame([[0., 0.], [0, nan]], columns=['A', 'B'], + index=[0, 1]) result = gni.mad() - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # describe - expected = DataFrame(dict(B = concat([df.loc[[0,1],'B'].describe(),df.loc[[2],'B'].describe()],keys=[1,3]))) - expected.index.names = ['A',None] + expected = DataFrame(dict(B=concat( + [df.loc[[0, 1], 'B'].describe(), df.loc[[2], 'B'].describe() + ], keys=[1, 3]))) + expected.index.names = ['A', None] result = g.describe() - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) - expected = concat([df.loc[[0,1],['A','B']].describe(),df.loc[[2],['A','B']].describe()],keys=[0,1]) + expected = concat( + [df.loc[[0, 1], ['A', 'B']].describe(), + df.loc[[2], ['A', 
'B']].describe()], keys=[0, 1]) result = gni.describe() - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # any - expected = DataFrame([[True, True],[False, True]],columns=['B','C'],index=[1,3]) + expected = DataFrame([[True, True], [False, True]], columns=['B', 'C'], + index=[1, 3]) expected.index.name = 'A' result = g.any() - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # idxmax - expected = DataFrame([[0],[nan]],columns=['B'],index=[1,3]) + expected = DataFrame([[0], [nan]], columns=['B'], index=[1, 3]) expected.index.name = 'A' result = g.idxmax() - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) def test_cython_api2(self): # this takes the fast apply path # cumsum (GH5614) - df = DataFrame([[1, 2, np.nan], [1, np.nan, 9], [3, 4, 9]], columns=['A', 'B', 'C']) - expected = DataFrame([[2, np.nan], [np.nan, 9], [4, 9]], columns=['B', 'C']) + df = DataFrame( + [[1, 2, np.nan], [1, np.nan, 9], [3, 4, 9] + ], columns=['A', 'B', 'C']) + expected = DataFrame( + [[2, np.nan], [np.nan, 9], [4, 9]], columns=['B', 'C']) result = df.groupby('A').cumsum() - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # GH 5755 - cumsum is a transformer and should ignore as_index result = df.groupby('A', as_index=False).cumsum() - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) def test_grouping_ndarray(self): grouped = self.df.groupby(self.df['A'].values) result = grouped.sum() expected = self.df.groupby('A').sum() - assert_frame_equal(result, expected, check_names=False) # Note: no names when grouping by value + assert_frame_equal(result, expected, check_names=False + ) # Note: no names when grouping by value def test_agg_consistency(self): # agg with ([]) and () not consistent @@ -2714,9 +2748,10 @@ def P1(a): return np.nan import datetime as dt - df = DataFrame({'col1':[1,2,3,4], - 'col2':[10,25,26,31], - 
'date':[dt.date(2013,2,10),dt.date(2013,2,10),dt.date(2013,2,11),dt.date(2013,2,11)]}) + df = DataFrame({'col1': [1, 2, 3, 4], + 'col2': [10, 25, 26, 31], + 'date': [dt.date(2013, 2, 10), dt.date(2013, 2, 10), + dt.date(2013, 2, 11), dt.date(2013, 2, 11)]}) g = df.groupby('date') @@ -2728,7 +2763,8 @@ def P1(a): def test_apply_typecast_fail(self): df = DataFrame({'d': [1., 1., 1., 2., 2., 2.], - 'c': np.tile(['a', 'b', 'c'], 2), + 'c': np.tile( + ['a', 'b', 'c'], 2), 'v': np.arange(1., 7.)}) def f(group): @@ -2744,8 +2780,8 @@ def f(group): assert_frame_equal(result, expected) def test_apply_multiindex_fail(self): - index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1], - [1, 2, 3, 1, 2, 3]]) + index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1], [1, 2, 3, 1, 2, 3] + ]) df = DataFrame({'d': [1., 1., 1., 2., 2., 2.], 'c': np.tile(['a', 'b', 'c'], 2), 'v': np.arange(1., 7.)}, index=index) @@ -2771,7 +2807,9 @@ def test_apply_without_copy(self): # GH 5545 # returning a non-copy in an applied function fails - data = DataFrame({'id_field' : [100, 100, 200, 300], 'category' : ['a','b','c','c'], 'value' : [1,2,3,4]}) + data = DataFrame({'id_field': [100, 100, 200, 300], + 'category': ['a', 'b', 'c', 'c'], + 'value': [1, 2, 3, 4]}) def filt1(x): if x.shape[0] == 1: @@ -2787,15 +2825,17 @@ def filt2(x): expected = data.groupby('id_field').apply(filt1) result = data.groupby('id_field').apply(filt2) - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) def test_apply_use_categorical_name(self): from pandas import qcut cats = qcut(self.df.C, 4) def get_stats(group): - return {'min': group.min(), 'max': group.max(), - 'count': group.count(), 'mean': group.mean()} + return {'min': group.min(), + 'max': group.max(), + 'count': group.count(), + 'mean': group.mean()} result = self.df.groupby(cats).D.apply(get_stats) self.assertEqual(result.index.names[0], 'C') @@ -2805,7 +2845,8 @@ def test_apply_categorical_data(self): for ordered in [True, False]: dense = 
Categorical(list('abc'), ordered=ordered) # 'b' is in the categories but not in the list - missing = Categorical(list('aaa'), categories=['a', 'b'], ordered=ordered) + missing = Categorical( + list('aaa'), categories=['a', 'b'], ordered=ordered) values = np.arange(len(dense)) df = DataFrame({'missing': missing, 'dense': dense, @@ -2848,8 +2889,8 @@ def f(g): self.assertTrue('value3' in result) def test_transform_mixed_type(self): - index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1], - [1, 2, 3, 1, 2, 3]]) + index = MultiIndex.from_arrays([[0, 0, 0, 1, 1, 1], [1, 2, 3, 1, 2, 3] + ]) df = DataFrame({'d': [1., 1., 1., 2., 2., 2.], 'c': np.tile(['a', 'b', 'c'], 2), 'v': np.arange(1., 7.)}, index=index) @@ -2864,7 +2905,7 @@ def f(group): self.assertEqual(result['d'].dtype, np.float64) # this is by definition a mutating operation! - with option_context('mode.chained_assignment',None): + with option_context('mode.chained_assignment', None): for key, group in grouped: res = f(group) assert_frame_equal(res, result.ix[key]) @@ -2877,6 +2918,7 @@ def test_groupby_wrong_multi_labels(self): 2,foo2,bar2,baz1,spam2,40 3,foo1,bar1,baz2,spam1,50 4,foo3,bar1,baz2,spam1,60""" + data = read_csv(StringIO(data), index_col=0) grouped = data.groupby(['foo', 'bar', 'baz', 'spam']) @@ -2904,17 +2946,13 @@ def test_seriesgroupby_name_attr(self): self.assertEqual(result.count().name, 'C') self.assertEqual(result.mean().name, 'C') - testFunc = lambda x: np.sum(x)*2 + testFunc = lambda x: np.sum(x) * 2 self.assertEqual(result.agg(testFunc).name, 'C') def test_groupby_name_propagation(self): # GH 6124 def summarize(df, name=None): - return Series({ - 'count': 1, - 'mean': 2, - 'omissions': 3, - }, name=name) + return Series({'count': 1, 'mean': 2, 'omissions': 3, }, name=name) def summarize_random_name(df): # Provide a different name for each Series. 
In this case, groupby @@ -2974,7 +3012,7 @@ def convert_fast(x): def convert_force_pure(x): # base will be length 0 - assert(len(x.base) > 0) + assert (len(x.base) > 0) return Decimal(str(x.mean())) grouped = s.groupby(labels) @@ -2999,6 +3037,7 @@ def test_fast_apply(self): 'key2': labels2, 'value1': np.random.randn(N), 'value2': ['foo', 'bar', 'baz', 'qux'] * (N // 4)}) + def f(g): return 1 @@ -3014,50 +3053,53 @@ def f(g): def test_apply_with_mixed_dtype(self): # GH3480, apply with mixed dtype on axis=1 breaks in 0.11 - df = DataFrame({'foo1' : ['one', 'two', 'two', 'three', 'one', 'two'], - 'foo2' : np.random.randn(6)}) + df = DataFrame({'foo1': ['one', 'two', 'two', 'three', 'one', 'two'], + 'foo2': np.random.randn(6)}) result = df.apply(lambda x: x, axis=1) assert_series_equal(df.get_dtype_counts(), result.get_dtype_counts()) - # GH 3610 incorrect dtype conversion with as_index=False - df = DataFrame({"c1" : [1,2,6,6,8]}) - df["c2"] = df.c1/2.0 + df = DataFrame({"c1": [1, 2, 6, 6, 8]}) + df["c2"] = df.c1 / 2.0 result1 = df.groupby("c2").mean().reset_index().c2 result2 = df.groupby("c2", as_index=False).mean().c2 - assert_series_equal(result1,result2) + assert_series_equal(result1, result2) def test_groupby_aggregation_mixed_dtype(self): # GH 6212 expected = DataFrame({ - 'v1': [5,5,7,np.nan,3,3,4,1], - 'v2': [55,55,77,np.nan,33,33,44,11]}, - index=MultiIndex.from_tuples([(1,95),(1,99),(2,95),(2,99),('big','damp'), - ('blue','dry'),('red','red'),('red','wet')], - names=['by1','by2'])) + 'v1': [5, 5, 7, np.nan, 3, 3, 4, 1], + 'v2': [55, 55, 77, np.nan, 33, 33, 44, 11]}, + index=MultiIndex.from_tuples([(1, 95), (1, 99), (2, 95), (2, 99), + ('big', 'damp'), + ('blue', 'dry'), + ('red', 'red'), ('red', 'wet')], + names=['by1', 'by2'])) df = DataFrame({ - 'v1': [1,3,5,7,8,3,5,np.nan,4,5,7,9], - 'v2': [11,33,55,77,88,33,55,np.nan,44,55,77,99], - 'by1': ["red", "blue", 1, 2, np.nan, "big", 1, 2, "red", 1, np.nan, 12], - 'by2': ["wet", "dry", 99, 95, np.nan, "damp", 
95, 99, "red", 99, np.nan, - np.nan] - }) + 'v1': [1, 3, 5, 7, 8, 3, 5, np.nan, 4, 5, 7, 9], + 'v2': [11, 33, 55, 77, 88, 33, 55, np.nan, 44, 55, 77, 99], + 'by1': ["red", "blue", 1, 2, np.nan, "big", 1, 2, "red", 1, np.nan, + 12], + 'by2': ["wet", "dry", 99, 95, np.nan, "damp", 95, 99, "red", 99, + np.nan, np.nan] + }) - g = df.groupby(['by1','by2']) - result = g[['v1','v2']].mean() - assert_frame_equal(result,expected) + g = df.groupby(['by1', 'by2']) + result = g[['v1', 'v2']].mean() + assert_frame_equal(result, expected) def test_groupby_dtype_inference_empty(self): # GH 6733 - df = DataFrame({'x': [], 'range': np.arange(0,dtype='int64')}) + df = DataFrame({'x': [], 'range': np.arange(0, dtype='int64')}) self.assertEqual(df['x'].dtype, np.float64) result = df.groupby('x').first() exp_index = Index([], name='x', dtype=np.float64) - expected = DataFrame({'range' : Series([], index=exp_index, dtype='int64')}) - assert_frame_equal(result,expected, by_blocks=True) + expected = DataFrame({'range': Series( + [], index=exp_index, dtype='int64')}) + assert_frame_equal(result, expected, by_blocks=True) def test_groupby_list_infer_array_like(self): result = self.df.groupby(list(self.df['A'])).mean() @@ -3067,7 +3109,8 @@ def test_groupby_list_infer_array_like(self): self.assertRaises(Exception, self.df.groupby, list(self.df['A'][:-1])) # pathological case of ambiguity - df = DataFrame({'foo': [0, 1], 'bar': [3, 4], + df = DataFrame({'foo': [0, 1], + 'bar': [3, 4], 'val': np.random.randn(2)}) result = df.groupby(['foo', 'bar']).mean() @@ -3076,10 +3119,11 @@ def test_groupby_list_infer_array_like(self): def test_groupby_keys_same_size_as_index(self): # GH 11185 freq = 's' - index = pd.date_range(start=np.datetime64( - '2015-09-29T11:34:44-0700'), periods=2, freq=freq) + index = pd.date_range(start=np.datetime64('2015-09-29T11:34:44-0700'), + periods=2, freq=freq) df = pd.DataFrame([['A', 10], ['B', 15]], columns=[ - 'metric', 'values'], index=index) + 'metric', 'values' + 
], index=index) result = df.groupby([pd.Grouper(level=0, freq=freq), 'metric']).mean() expected = df.set_index([df.index, 'metric']) @@ -3094,11 +3138,12 @@ def test_groupby_one_row(self): def test_groupby_nat_exclude(self): # GH 6992 - df = pd.DataFrame({'values': np.random.randn(8), - 'dt': [np.nan, pd.Timestamp('2013-01-01'), np.nan, pd.Timestamp('2013-02-01'), - np.nan, pd.Timestamp('2013-02-01'), np.nan, pd.Timestamp('2013-01-01')], - 'str': [np.nan, 'a', np.nan, 'a', - np.nan, 'a', np.nan, 'b']}) + df = pd.DataFrame( + {'values': np.random.randn(8), + 'dt': [np.nan, pd.Timestamp('2013-01-01'), np.nan, pd.Timestamp( + '2013-02-01'), np.nan, pd.Timestamp('2013-02-01'), np.nan, + pd.Timestamp('2013-01-01')], + 'str': [np.nan, 'a', np.nan, 'a', np.nan, 'a', np.nan, 'b']}) grouped = df.groupby('dt') expected = [[1, 7], [3, 5]] @@ -3117,8 +3162,10 @@ def test_groupby_nat_exclude(self): for k in grouped.indices: self.assert_numpy_array_equal(grouped.indices[k], expected[k]) - tm.assert_frame_equal(grouped.get_group(Timestamp('2013-01-01')), df.iloc[[1, 7]]) - tm.assert_frame_equal(grouped.get_group(Timestamp('2013-02-01')), df.iloc[[3, 5]]) + tm.assert_frame_equal( + grouped.get_group(Timestamp('2013-01-01')), df.iloc[[1, 7]]) + tm.assert_frame_equal( + grouped.get_group(Timestamp('2013-02-01')), df.iloc[[3, 5]]) self.assertRaises(KeyError, grouped.get_group, pd.NaT) @@ -3176,7 +3223,8 @@ def test_panel_groupby(self): grouped = self.panel.groupby(lambda x: x.month, axis='major') agged = grouped.mean() - self.assert_numpy_array_equal(agged.major_axis, sorted(list(set(self.panel.major_axis.month)))) + self.assert_numpy_array_equal(agged.major_axis, sorted(list(set( + self.panel.major_axis.month)))) grouped = self.panel.groupby({'A': 0, 'B': 0, 'C': 1, 'D': 1}, axis='minor') @@ -3211,11 +3259,13 @@ def test_groupby_2d_malformed(self): self.assert_numpy_array_equal(tmp.values, res_values) def test_int32_overflow(self): - B = np.concatenate((np.arange(10000), 
np.arange(10000), - np.arange(5000))) + B = np.concatenate((np.arange(10000), np.arange(10000), np.arange(5000) + )) A = np.arange(25000) - df = DataFrame({'A': A, 'B': B, - 'C': A, 'D': B, + df = DataFrame({'A': A, + 'B': B, + 'C': A, + 'D': B, 'E': np.random.randn(25000)}) left = df.groupby(['A', 'B', 'C', 'D']).sum() @@ -3225,13 +3275,16 @@ def test_int32_overflow(self): def test_int64_overflow(self): from pandas.core.groupby import _int64_overflow_possible - B = np.concatenate((np.arange(1000), np.arange(1000), - np.arange(500))) + B = np.concatenate((np.arange(1000), np.arange(1000), np.arange(500))) A = np.arange(2500) - df = DataFrame({'A': A, 'B': B, - 'C': A, 'D': B, - 'E': A, 'F': B, - 'G': A, 'H': B, + df = DataFrame({'A': A, + 'B': B, + 'C': A, + 'D': B, + 'E': A, + 'F': B, + 'G': A, + 'H': B, 'values': np.random.randn(2500)}) lg = df.groupby(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']) @@ -3246,8 +3299,8 @@ def test_int64_overflow(self): exp_index, _ = right.index.sortlevel(0) self.assertTrue(right.index.equals(exp_index)) - tups = list(map(tuple, df[['A', 'B', 'C', 'D', - 'E', 'F', 'G', 'H']].values)) + tups = list(map(tuple, df[['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H' + ]].values)) tups = com._asarray_tuplesafe(tups) expected = df.groupby(tups).sum()['values'] @@ -3258,12 +3311,14 @@ def test_int64_overflow(self): # GH9096 values = range(55109) - data = pd.DataFrame.from_dict({'a': values, 'b': values, - 'c': values, 'd': values}) + data = pd.DataFrame.from_dict({'a': values, + 'b': values, + 'c': values, + 'd': values}) grouped = data.groupby(['a', 'b', 'c', 'd']) self.assertEqual(len(grouped), len(values)) - arr = np.random.randint(- 1 << 12, 1 << 12, (1 << 15, 5)) + arr = np.random.randint(-1 << 12, 1 << 12, (1 << 15, 5)) i = np.random.choice(len(arr), len(arr) * 4) arr = np.vstack((arr, arr[i])) # add sume duplicate rows @@ -3304,8 +3359,7 @@ def test_groupby_sort_multi(self): tups = lmap(tuple, df[['a', 'b', 'c']].values) tups = 
com._asarray_tuplesafe(tups) result = df.groupby(['a', 'b', 'c'], sort=True).sum() - self.assert_numpy_array_equal(result.index.values, - tups[[1, 2, 0]]) + self.assert_numpy_array_equal(result.index.values, tups[[1, 2, 0]]) tups = lmap(tuple, df[['c', 'a', 'b']].values) tups = com._asarray_tuplesafe(tups) @@ -3315,8 +3369,7 @@ def test_groupby_sort_multi(self): tups = lmap(tuple, df[['b', 'c', 'a']].values) tups = com._asarray_tuplesafe(tups) result = df.groupby(['b', 'c', 'a'], sort=True).sum() - self.assert_numpy_array_equal(result.index.values, - tups[[2, 1, 0]]) + self.assert_numpy_array_equal(result.index.values, tups[[2, 1, 0]]) df = DataFrame({'a': [0, 1, 2, 0, 1, 2], 'b': [0, 0, 0, 1, 1, 1], @@ -3424,6 +3477,7 @@ def func(ser): raise TypeError else: return ser.sum() + result = grouped.aggregate(func) exp_grouped = self.three_group.ix[:, self.three_group.columns != 'C'] expected = exp_grouped.groupby(['A', 'B']).aggregate(func) @@ -3453,13 +3507,12 @@ def g(group): assert_series_equal(result, expected) def test_getitem_list_of_columns(self): - df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', - 'foo', 'bar', 'foo', 'foo'], - 'B': ['one', 'one', 'two', 'three', - 'two', 'two', 'one', 'three'], - 'C': np.random.randn(8), - 'D': np.random.randn(8), - 'E': np.random.randn(8)}) + df = DataFrame( + {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'], + 'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'], + 'C': np.random.randn(8), + 'D': np.random.randn(8), + 'E': np.random.randn(8)}) result = df.groupby('A')[['C', 'D']].mean() result2 = df.groupby('A')['C', 'D'].mean() @@ -3527,9 +3580,9 @@ def foo(x): def bar(x): return np.std(x, ddof=1) - d = OrderedDict([['C', np.mean], - ['D', OrderedDict([['foo', np.mean], - ['bar', np.std]])]]) + + d = OrderedDict([['C', np.mean], ['D', OrderedDict( + [['foo', np.mean], ['bar', np.std]])]]) result = grouped.aggregate(d) d = OrderedDict([['C', [np.mean]], ['D', [foo, bar]]]) @@ -3541,21 +3594,18 @@ 
def test_multi_function_flexible_mix(self): # GH #1268 grouped = self.df.groupby('A') - d = OrderedDict([['C', OrderedDict([['foo', 'mean'], - [ - 'bar', 'std']])], - ['D', 'sum']]) + d = OrderedDict([['C', OrderedDict([['foo', 'mean'], [ + 'bar', 'std' + ]])], ['D', 'sum']]) result = grouped.aggregate(d) - d2 = OrderedDict([['C', OrderedDict([['foo', 'mean'], - [ - 'bar', 'std']])], - ['D', ['sum']]]) + d2 = OrderedDict([['C', OrderedDict([['foo', 'mean'], [ + 'bar', 'std' + ]])], ['D', ['sum']]]) result2 = grouped.aggregate(d2) - d3 = OrderedDict([['C', OrderedDict([['foo', 'mean'], - [ - 'bar', 'std']])], - ['D', {'sum': 'sum'}]]) + d3 = OrderedDict([['C', OrderedDict([['foo', 'mean'], [ + 'bar', 'std' + ]])], ['D', {'sum': 'sum'}]]) expected = grouped.aggregate(d3) assert_frame_equal(result, expected) @@ -3563,15 +3613,14 @@ def test_multi_function_flexible_mix(self): def test_agg_callables(self): # GH 7929 - df = DataFrame({'foo' : [1,2], 'bar' :[3,4]}).astype(np.int64) + df = DataFrame({'foo': [1, 2], 'bar': [3, 4]}).astype(np.int64) class fn_class(object): + def __call__(self, x): return sum(x) - equiv_callables = [sum, np.sum, - lambda x: sum(x), - lambda x: x.sum(), + equiv_callables = [sum, np.sum, lambda x: sum(x), lambda x: x.sum(), partial(sum), fn_class()] expected = df.groupby("foo").agg(sum) @@ -3612,8 +3661,8 @@ def test_no_dummy_key_names(self): result = self.df.groupby(self.df['A'].values).sum() self.assertIsNone(result.index.name) - result = self.df.groupby([self.df['A'].values, - self.df['B'].values]).sum() + result = self.df.groupby([self.df['A'].values, self.df['B'].values + ]).sum() self.assertEqual(result.index.names, (None, None)) def test_groupby_sort_categorical(self): @@ -3626,7 +3675,8 @@ def test_groupby_sort_categorical(self): ['(0, 2.5]', 1, 60], ['(5, 7.5]', 7, 70]], columns=['range', 'foo', 'bar']) df['range'] = Categorical(df['range'], ordered=True) - index = CategoricalIndex(['(0, 2.5]', '(2.5, 5]', '(5, 7.5]', '(7.5, 10]'], 
name='range') + index = CategoricalIndex( + ['(0, 2.5]', '(2.5, 5]', '(5, 7.5]', '(7.5, 10]'], name='range') result_sort = DataFrame([[1, 60], [5, 30], [6, 40], [10, 10]], columns=['foo', 'bar'], index=index) @@ -3636,17 +3686,19 @@ def test_groupby_sort_categorical(self): assert_frame_equal(result_sort, df.groupby(col, sort=False).first()) df['range'] = Categorical(df['range'], ordered=False) - index = CategoricalIndex(['(0, 2.5]', '(2.5, 5]', '(5, 7.5]', '(7.5, 10]'], name='range') + index = CategoricalIndex( + ['(0, 2.5]', '(2.5, 5]', '(5, 7.5]', '(7.5, 10]'], name='range') result_sort = DataFrame([[1, 60], [5, 30], [6, 40], [10, 10]], columns=['foo', 'bar'], index=index) - index = CategoricalIndex(['(7.5, 10]', '(2.5, 5]', '(5, 7.5]', '(0, 2.5]'], + index = CategoricalIndex(['(7.5, 10]', '(2.5, 5]', + '(5, 7.5]', '(0, 2.5]'], name='range') result_nosort = DataFrame([[10, 10], [5, 30], [6, 40], [1, 60]], index=index, columns=['foo', 'bar']) col = 'range' - #### this is an unordered categorical, but we allow this #### + # this is an unordered categorical, but we allow this #### assert_frame_equal(result_sort, df.groupby(col, sort=True).first()) assert_frame_equal(result_nosort, df.groupby(col, sort=False).first()) @@ -3667,7 +3719,8 @@ def test_groupby_sort_categorical_datetimelike(self): df['dt'] = Categorical(df['dt'], ordered=True) index = [datetime(2011, 1, 1), datetime(2011, 2, 1), datetime(2011, 5, 1), datetime(2011, 7, 1)] - result_sort = DataFrame([[1, 60], [5, 30], [6, 40], [10, 10]], columns=['foo', 'bar']) + result_sort = DataFrame( + [[1, 60], [5, 30], [6, 40], [10, 10]], columns=['foo', 'bar']) result_sort.index = CategoricalIndex(index, name='dt', ordered=True) index = [datetime(2011, 7, 1), datetime(2011, 2, 1), @@ -3686,30 +3739,31 @@ def test_groupby_sort_categorical_datetimelike(self): df['dt'] = Categorical(df['dt'], ordered=False) index = [datetime(2011, 1, 1), datetime(2011, 2, 1), datetime(2011, 5, 1), datetime(2011, 7, 1)] - result_sort = 
DataFrame([[1, 60], [5, 30], [6, 40], [10, 10]], columns=['foo', 'bar']) + result_sort = DataFrame( + [[1, 60], [5, 30], [6, 40], [10, 10]], columns=['foo', 'bar']) result_sort.index = CategoricalIndex(index, name='dt') index = [datetime(2011, 7, 1), datetime(2011, 2, 1), datetime(2011, 5, 1), datetime(2011, 1, 1)] result_nosort = DataFrame([[10, 10], [5, 30], [6, 40], [1, 60]], columns=['foo', 'bar']) - result_nosort.index = CategoricalIndex(index, categories=index, name='dt') + result_nosort.index = CategoricalIndex(index, categories=index, + name='dt') col = 'dt' assert_frame_equal(result_sort, df.groupby(col, sort=True).first()) assert_frame_equal(result_nosort, df.groupby(col, sort=False).first()) - def test_groupby_sort_multiindex_series(self): - # series multiindex groupby sort argument was not being passed through _compress_group_index + # series multiindex groupby sort argument was not being passed through + # _compress_group_index # GH 9444 index = MultiIndex(levels=[[1, 2], [1, 2]], labels=[[0, 0, 0, 0, 1, 1], [1, 1, 0, 0, 0, 0]], names=['a', 'b']) mseries = Series([0, 1, 2, 3, 4, 5], index=index) index = MultiIndex(levels=[[1, 2], [1, 2]], - labels=[[0, 0, 1], [1, 0, 0]], - names=['a', 'b']) + labels=[[0, 0, 1], [1, 0, 0]], names=['a', 'b']) mseries_result = Series([0, 2, 4], index=index) result = mseries.groupby(level=['a', 'b'], sort=False).first() @@ -3739,15 +3793,18 @@ def test_groupby_categorical(self): idx = cats.codes.argsort() ord_labels = np.asarray(cats).take(idx) ord_data = data.take(idx) - expected = ord_data.groupby(Categorical(ord_labels), sort=False).describe() + expected = ord_data.groupby( + Categorical(ord_labels), sort=False).describe() expected.index.names = [None, None] assert_frame_equal(desc_result, expected) # GH 10460 - expc = Categorical.from_codes(np.arange(4).repeat(8), levels, ordered=True) + expc = Categorical.from_codes( + np.arange(4).repeat(8), levels, ordered=True) exp = CategoricalIndex(expc) 
self.assert_index_equal(desc_result.index.get_level_values(0), exp) - exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', '75%', 'max'] * 4) + exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', '75%', 'max'] + * 4) self.assert_index_equal(desc_result.index.get_level_values(1), exp) def test_groupby_datetime_categorical(self): @@ -3762,7 +3819,8 @@ def test_groupby_datetime_categorical(self): expected = data.groupby(np.asarray(cats)).mean() expected = expected.reindex(levels) - expected.index = CategoricalIndex(expected.index, categories=expected.index, + expected.index = CategoricalIndex(expected.index, + categories=expected.index, ordered=True) assert_frame_equal(result, expected) @@ -3777,34 +3835,43 @@ def test_groupby_datetime_categorical(self): expected.index.names = [None, None] assert_frame_equal(desc_result, expected) tm.assert_index_equal(desc_result.index, expected.index) - tm.assert_index_equal(desc_result.index.get_level_values(0), expected.index.get_level_values(0)) + tm.assert_index_equal( + desc_result.index.get_level_values(0), + expected.index.get_level_values(0)) # GH 10460 - expc = Categorical.from_codes(np.arange(4).repeat(8), levels, ordered=True) + expc = Categorical.from_codes( + np.arange(4).repeat(8), levels, ordered=True) exp = CategoricalIndex(expc) self.assert_index_equal(desc_result.index.get_level_values(0), exp) - exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', '75%', 'max'] * 4) + exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', '75%', 'max'] + * 4) self.assert_index_equal(desc_result.index.get_level_values(1), exp) - def test_groupby_categorical_index(self): levels = ['foo', 'bar', 'baz', 'qux'] codes = np.random.randint(0, 4, size=20) cats = Categorical.from_codes(codes, levels, ordered=True) - df = DataFrame(np.repeat(np.arange(20),4).reshape(-1,4), columns=list('abcd')) + df = DataFrame( + np.repeat( + np.arange(20), 4).reshape(-1, 4), columns=list('abcd')) df['cats'] = cats # with a cat index 
result = df.set_index('cats').groupby(level=0).sum() expected = df[list('abcd')].groupby(cats.codes).sum() - expected.index = CategoricalIndex(Categorical.from_codes([0,1,2,3], levels, ordered=True),name='cats') + expected.index = CategoricalIndex( + Categorical.from_codes( + [0, 1, 2, 3], levels, ordered=True), name='cats') assert_frame_equal(result, expected) # with a cat column, should produce a cat index result = df.groupby('cats').sum() expected = df[list('abcd')].groupby(cats.codes).sum() - expected.index = CategoricalIndex(Categorical.from_codes([0,1,2,3], levels, ordered=True),name='cats') + expected.index = CategoricalIndex( + Categorical.from_codes( + [0, 1, 2, 3], levels, ordered=True), name='cats') assert_frame_equal(result, expected) def test_groupby_groups_datetimeindex(self): @@ -3822,21 +3889,25 @@ def test_groupby_groups_datetimeindex(self): def test_groupby_groups_datetimeindex_tz(self): # GH 3950 - dates = ['2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00', - '2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00'] + dates = ['2011-07-19 07:00:00', '2011-07-19 08:00:00', + '2011-07-19 09:00:00', '2011-07-19 07:00:00', + '2011-07-19 08:00:00', '2011-07-19 09:00:00'] df = DataFrame({'label': ['a', 'a', 'a', 'b', 'b', 'b'], 'datetime': dates, - 'value1': np.arange(6,dtype='int64'), + 'value1': np.arange(6, dtype='int64'), 'value2': [1, 2] * 3}) - df['datetime'] = df['datetime'].apply(lambda d: Timestamp(d, tz='US/Pacific')) - - exp_idx1 = pd.DatetimeIndex(['2011-07-19 07:00:00', '2011-07-19 07:00:00', - '2011-07-19 08:00:00', '2011-07-19 08:00:00', - '2011-07-19 09:00:00', '2011-07-19 09:00:00'], - tz='US/Pacific', name='datetime') + df['datetime'] = df['datetime'].apply( + lambda d: Timestamp(d, tz='US/Pacific')) + + exp_idx1 = pd.DatetimeIndex( + ['2011-07-19 07:00:00', '2011-07-19 07:00:00', + '2011-07-19 08:00:00', '2011-07-19 08:00:00', + '2011-07-19 09:00:00', '2011-07-19 09:00:00'], + tz='US/Pacific', 
name='datetime') exp_idx2 = Index(['a', 'b'] * 3, name='label') exp_idx = MultiIndex.from_arrays([exp_idx1, exp_idx2]) - expected = DataFrame({'value1': [0, 3, 1, 4, 2, 5], 'value2': [1, 2, 2, 1, 1, 2]}, + expected = DataFrame({'value1': [0, 3, 1, 4, 2, 5], + 'value2': [1, 2, 2, 1, 1, 2]}, index=exp_idx, columns=['value1', 'value2']) result = df.groupby(['datetime', 'label']).sum() @@ -3844,12 +3915,13 @@ def test_groupby_groups_datetimeindex_tz(self): # by level didx = pd.DatetimeIndex(dates, tz='Asia/Tokyo') - df = DataFrame({'value1': np.arange(6,dtype='int64'), + df = DataFrame({'value1': np.arange(6, dtype='int64'), 'value2': [1, 2, 3, 1, 2, 3]}, index=didx) - exp_idx = pd.DatetimeIndex(['2011-07-19 07:00:00', '2011-07-19 08:00:00', - '2011-07-19 09:00:00'], tz='Asia/Tokyo') + exp_idx = pd.DatetimeIndex( + ['2011-07-19 07:00:00', '2011-07-19 08:00:00', + '2011-07-19 09:00:00'], tz='Asia/Tokyo') expected = DataFrame({'value1': [3, 5, 7], 'value2': [2, 4, 6]}, index=exp_idx, columns=['value1', 'value2']) @@ -3860,23 +3932,30 @@ def test_groupby_multi_timezone(self): # combining multiple / different timezones yields UTC - data="""0,2000-01-28 16:47:00,America/Chicago + data = """0,2000-01-28 16:47:00,America/Chicago 1,2000-01-29 16:48:00,America/Chicago 2,2000-01-30 16:49:00,America/Los_Angeles 3,2000-01-31 16:50:00,America/Chicago 4,2000-01-01 16:50:00,America/New_York""" - df = pd.read_csv(StringIO(data),header=None, names=['value','date','tz']) - result = df.groupby('tz').date.apply(lambda x: pd.to_datetime(x).dt.tz_localize(x.name)) + df = pd.read_csv( + StringIO(data), header=None, names=['value', 'date', 'tz']) + result = df.groupby('tz').date.apply( + lambda x: pd.to_datetime(x).dt.tz_localize(x.name)) - expected = pd.to_datetime(Series(['2000-01-28 22:47:00', '2000-01-29 22:48:00', '2000-01-31 00:49:00', '2000-01-31 22:50:00', '2000-01-01 21:50:00'])) + expected = pd.to_datetime(Series( + ['2000-01-28 22:47:00', '2000-01-29 22:48:00', + '2000-01-31 
00:49:00', '2000-01-31 22:50:00', + '2000-01-01 21:50:00'])) assert_series_equal(result, expected) tz = 'America/Chicago' - result = pd.to_datetime(df.groupby('tz').date.get_group(tz)).dt.tz_localize(tz) - expected = pd.to_datetime(Series(['2000-01-28 16:47:00', '2000-01-29 16:48:00','2000-01-31 16:50:00'], - index=[0,1,3], - name='date')).dt.tz_localize(tz) + result = pd.to_datetime(df.groupby('tz').date.get_group( + tz)).dt.tz_localize(tz) + expected = pd.to_datetime(Series( + ['2000-01-28 16:47:00', '2000-01-29 16:48:00', + '2000-01-31 16:50:00'], index=[0, 1, 3 + ], name='date')).dt.tz_localize(tz) assert_series_equal(result, expected) def test_groupby_reindex_inside_function(self): @@ -3891,6 +3970,7 @@ def agg_before(hour, func, fix=False): """ Run an aggregate func on the subset of data. """ + def _func(data): d = data.select(lambda x: x.hour < 11).dropna() if fix: @@ -3898,6 +3978,7 @@ def _func(data): if len(d) == 0: return None return func(d) + return _func def afunc(data): @@ -3959,7 +4040,8 @@ def test_groupby_categorical_no_compress(self): result = data.groupby(cats).mean() exp = data.groupby(codes).mean() - exp.index = CategoricalIndex(exp.index,categories=cats.categories,ordered=cats.ordered) + exp.index = CategoricalIndex(exp.index, categories=cats.categories, + ordered=cats.ordered) assert_series_equal(result, exp) codes = np.array([0, 0, 0, 1, 1, 1, 3, 3, 3]) @@ -3967,35 +4049,48 @@ def test_groupby_categorical_no_compress(self): result = data.groupby(cats).mean() exp = data.groupby(codes).mean().reindex(cats.categories) - exp.index = CategoricalIndex(exp.index,categories=cats.categories,ordered=cats.ordered) + exp.index = CategoricalIndex(exp.index, categories=cats.categories, + ordered=cats.ordered) assert_series_equal(result, exp) cats = Categorical(["a", "a", "a", "b", "b", "b", "c", "c", "c"], - categories=["a","b","c","d"], ordered=True) - data = DataFrame({"a":[1,1,1,2,2,2,3,4,5], "b":cats}) + categories=["a", "b", "c", "d"], ordered=True) 
+ data = DataFrame({"a": [1, 1, 1, 2, 2, 2, 3, 4, 5], "b": cats}) result = data.groupby("b").mean() result = result["a"].values - exp = np.array([1,2,4,np.nan]) + exp = np.array([1, 2, 4, np.nan]) self.assert_numpy_array_equal(result, exp) def test_groupby_non_arithmetic_agg_types(self): # GH9311, GH6620 - df = pd.DataFrame([{'a': 1, 'b': 1}, - {'a': 1, 'b': 2}, - {'a': 2, 'b': 3}, - {'a': 2, 'b': 4}]) - - dtypes = ['int8', 'int16', 'int32', 'int64', - 'float32', 'float64'] - - grp_exp = {'first': {'df': [{'a': 1, 'b': 1}, {'a': 2, 'b': 3}]}, - 'last': {'df': [{'a': 1, 'b': 2}, {'a': 2, 'b': 4}]}, - 'min': {'df': [{'a': 1, 'b': 1}, {'a': 2, 'b': 3}]}, - 'max': {'df': [{'a': 1, 'b': 2}, {'a': 2, 'b': 4}]}, - 'nth': {'df': [{'a': 1, 'b': 2}, {'a': 2, 'b': 4}], + df = pd.DataFrame([{'a': 1, + 'b': 1}, {'a': 1, + 'b': 2}, {'a': 2, + 'b': 3}, {'a': 2, + 'b': 4}]) + + dtypes = ['int8', 'int16', 'int32', 'int64', 'float32', 'float64'] + + grp_exp = {'first': {'df': [{'a': 1, + 'b': 1}, {'a': 2, + 'b': 3}]}, + 'last': {'df': [{'a': 1, + 'b': 2}, {'a': 2, + 'b': 4}]}, + 'min': {'df': [{'a': 1, + 'b': 1}, {'a': 2, + 'b': 3}]}, + 'max': {'df': [{'a': 1, + 'b': 2}, {'a': 2, + 'b': 4}]}, + 'nth': {'df': [{'a': 1, + 'b': 2}, {'a': 2, + 'b': 4}], 'args': [1]}, - 'count': {'df': [{'a': 1, 'b': 2}, {'a': 2, 'b': 2}], + 'count': {'df': [{'a': 1, + 'b': 2}, {'a': 2, + 'b': 2}], 'out_type': 'int64'}} for dtype in dtypes: @@ -4026,20 +4121,17 @@ def test_groupby_non_arithmetic_agg_intlike_precision(self): c = 24650000000000000 inputs = ((Timestamp('2011-01-15 12:50:28.502376'), - Timestamp('2011-01-20 12:50:28.593448')), - (1 + c, 2 + c)) + Timestamp('2011-01-20 12:50:28.593448')), (1 + c, 2 + c)) for i in inputs: - df = pd.DataFrame([{'a': 1, - 'b': i[0]}, - {'a': 1, - 'b': i[1]}]) + df = pd.DataFrame([{'a': 1, 'b': i[0]}, {'a': 1, 'b': i[1]}]) grp_exp = {'first': {'expected': i[0]}, 'last': {'expected': i[1]}, 'min': {'expected': i[0]}, 'max': {'expected': i[1]}, - 'nth': 
{'expected': i[1], 'args': [1]}, + 'nth': {'expected': i[1], + 'args': [1]}, 'count': {'expected': 2}} for method, data in compat.iteritems(grp_exp): @@ -4067,39 +4159,41 @@ def test_groupby_first_datetime64(self): def test_groupby_max_datetime64(self): # GH 5869 # datetimelike dtype conversion from int - df = DataFrame(dict(A = Timestamp('20130101'), B = np.arange(5))) + df = DataFrame(dict(A=Timestamp('20130101'), B=np.arange(5))) expected = df.groupby('A')['A'].apply(lambda x: x.max()) result = df.groupby('A')['A'].max() - assert_series_equal(result,expected) + assert_series_equal(result, expected) def test_groupby_datetime64_32_bit(self): # GH 6410 / numpy 4328 # 32-bit under 1.9-dev indexing issue - df = DataFrame({"A": range(2), "B": [pd.Timestamp('2000-01-1')]*2}) + df = DataFrame({"A": range(2), "B": [pd.Timestamp('2000-01-1')] * 2}) result = df.groupby("A")["B"].transform(min) - expected = Series([pd.Timestamp('2000-01-1')]*2) - assert_series_equal(result,expected) + expected = Series([pd.Timestamp('2000-01-1')] * 2) + assert_series_equal(result, expected) def test_groupby_categorical_unequal_len(self): - #GH3011 + # GH3011 series = Series([np.nan, np.nan, 1, 1, 2, 2, 3, 3, 4, 4]) - # The raises only happens with categorical, not with series of types category - bins = pd.cut(series.dropna().values, 4) + # The raises only happens with categorical, not with series of types + # category + bins = pd.cut(series.dropna().values, 4) # len(bins) != len(series) here - self.assertRaises(ValueError,lambda : series.groupby(bins).mean()) + self.assertRaises(ValueError, lambda: series.groupby(bins).mean()) def test_groupby_multiindex_missing_pair(self): # GH9049 - df = DataFrame({'group1': ['a','a','a','b'], - 'group2': ['c','c','d','c'], - 'value': [1,1,1,5]}) + df = DataFrame({'group1': ['a', 'a', 'a', 'b'], + 'group2': ['c', 'c', 'd', 'c'], + 'value': [1, 1, 1, 5]}) df = df.set_index(['group1', 'group2']) - df_grouped = df.groupby(level=['group1','group2'], 
sort=True) + df_grouped = df.groupby(level=['group1', 'group2'], sort=True) res = df_grouped.agg('sum') - idx = MultiIndex.from_tuples([('a','c'), ('a','d'), ('b','c')], names=['group1', 'group2']) + idx = MultiIndex.from_tuples( + [('a', 'c'), ('a', 'd'), ('b', 'c')], names=['group1', 'group2']) exp = DataFrame([[2], [1], [5]], index=idx, columns=['value']) tm.assert_frame_equal(res, exp) @@ -4107,7 +4201,8 @@ def test_groupby_multiindex_missing_pair(self): def test_groupby_levels_and_columns(self): # GH9344, GH9049 idx_names = ['x', 'y'] - idx = pd.MultiIndex.from_tuples([(1, 1), (1, 2), (3, 4), (5, 6)], names=idx_names) + idx = pd.MultiIndex.from_tuples( + [(1, 1), (1, 2), (3, 4), (5, 6)], names=idx_names) df = pd.DataFrame(np.arange(12).reshape(-1, 3), index=idx) by_levels = df.groupby(level=idx_names).mean() @@ -4122,16 +4217,17 @@ def test_groupby_levels_and_columns(self): def test_gb_apply_list_of_unequal_len_arrays(self): # GH1738 - df = DataFrame({'group1': ['a','a','a','b','b','b','a','a','a','b','b','b'], - 'group2': ['c','c','d','d','d','e','c','c','d','d','d','e'], - 'weight': [1.1,2,3,4,5,6,2,4,6,8,1,2], - 'value': [7.1,8,9,10,11,12,8,7,6,5,4,3] - }) + df = DataFrame({'group1': ['a', 'a', 'a', 'b', 'b', 'b', 'a', 'a', 'a', + 'b', 'b', 'b'], + 'group2': ['c', 'c', 'd', 'd', 'd', 'e', 'c', 'c', 'd', + 'd', 'd', 'e'], + 'weight': [1.1, 2, 3, 4, 5, 6, 2, 4, 6, 8, 1, 2], + 'value': [7.1, 8, 9, 10, 11, 12, 8, 7, 6, 5, 4, 3]}) df = df.set_index(['group1', 'group2']) - df_grouped = df.groupby(level=['group1','group2'], sort=True) + df_grouped = df.groupby(level=['group1', 'group2'], sort=True) def noddy(value, weight): - out = np.array( value * weight ).repeat(3) + out = np.array(value * weight).repeat(3) return out # the kernel function returns arrays of unequal length @@ -4140,7 +4236,7 @@ def noddy(value, weight): # and so tries a vstack # don't die - no_toes = df_grouped.apply(lambda x: noddy(x.value, x.weight )) + df_grouped.apply(lambda x: 
noddy(x.value, x.weight)) def test_groupby_with_empty(self): index = pd.DatetimeIndex(()) @@ -4156,21 +4252,22 @@ def test_groupby_with_timezone_selection(self): np.random.seed(42) df = pd.DataFrame({ 'factor': np.random.randint(0, 3, size=60), - 'time': pd.date_range('01/01/2000 00:00', periods=60, freq='s', tz='UTC') + 'time': pd.date_range('01/01/2000 00:00', periods=60, + freq='s', tz='UTC') }) df1 = df.groupby('factor').max()['time'] df2 = df.groupby('factor')['time'].max() tm.assert_series_equal(df1, df2) def test_timezone_info(self): - #GH 11682 + # GH 11682 # Timezone info lost when broadcasting scalar datetime to DataFrame tm._skip_if_no_pytz() import pytz df = pd.DataFrame({'a': [1], 'b': [datetime.now(pytz.utc)]}) tm.assert_equal(df['b'][0].tzinfo, pytz.utc) - df = pd.DataFrame({'a': [1,2,3]}) + df = pd.DataFrame({'a': [1, 2, 3]}) df['b'] = datetime.now(pytz.utc) tm.assert_equal(df['b'][0].tzinfo, pytz.utc) @@ -4181,15 +4278,16 @@ def test_groupby_with_timegrouper(self): import datetime as DT df_original = DataFrame({ 'Buyer': 'Carl Carl Carl Carl Joe Carl'.split(), - 'Quantity': [18,3,5,1,9,3], - 'Date' : [ - DT.datetime(2013,9,1,13,0), - DT.datetime(2013,9,1,13,5), - DT.datetime(2013,10,1,20,0), - DT.datetime(2013,10,3,10,0), - DT.datetime(2013,12,2,12,0), - DT.datetime(2013,9,2,14,0), - ]}) + 'Quantity': [18, 3, 5, 1, 9, 3], + 'Date': [ + DT.datetime(2013, 9, 1, 13, 0), + DT.datetime(2013, 9, 1, 13, 5), + DT.datetime(2013, 10, 1, 20, 0), + DT.datetime(2013, 10, 3, 10, 0), + DT.datetime(2013, 12, 2, 12, 0), + DT.datetime(2013, 9, 2, 14, 0), + ] + }) # GH 6908 change target column's order df_reordered = df_original.sort_values(by='Quantity') @@ -4197,12 +4295,15 @@ def test_groupby_with_timegrouper(self): for df in [df_original, df_reordered]: df = df.set_index(['Date']) - expected = DataFrame({ 'Quantity' : np.nan }, - index=date_range('20130901 13:00:00','20131205 13:00:00', - freq='5D',name='Date',closed='left')) - expected.iloc[[0,6,18],0] = 
np.array([24.,6.,9.],dtype='float64') + expected = DataFrame( + {'Quantity': np.nan}, + index=date_range('20130901 13:00:00', + '20131205 13:00:00', freq='5D', + name='Date', closed='left')) + expected.iloc[[0, 6, 18], 0] = np.array( + [24., 6., 9.], dtype='float64') - result1 = df.resample('5D',how=sum) + result1 = df.resample('5D', how=sum) assert_frame_equal(result1, expected) df_sorted = df.sort_index() @@ -4218,17 +4319,18 @@ def test_groupby_with_timegrouper_methods(self): import datetime as DT df_original = pd.DataFrame({ - 'Branch' : 'A A A A A B'.split(), + 'Branch': 'A A A A A B'.split(), 'Buyer': 'Carl Mark Carl Joe Joe Carl'.split(), - 'Quantity': [1,3,5,8,9,3], - 'Date' : [ - DT.datetime(2013,1,1,13,0), - DT.datetime(2013,1,1,13,5), - DT.datetime(2013,10,1,20,0), - DT.datetime(2013,10,2,10,0), - DT.datetime(2013,12,2,12,0), - DT.datetime(2013,12,2,14,0), - ]}) + 'Quantity': [1, 3, 5, 8, 9, 3], + 'Date': [ + DT.datetime(2013, 1, 1, 13, 0), + DT.datetime(2013, 1, 1, 13, 5), + DT.datetime(2013, 10, 1, 20, 0), + DT.datetime(2013, 10, 2, 10, 0), + DT.datetime(2013, 12, 2, 12, 0), + DT.datetime(2013, 12, 2, 14, 0), + ] + }) df_sorted = df_original.sort_values(by='Quantity', ascending=False) @@ -4236,9 +4338,9 @@ def test_groupby_with_timegrouper_methods(self): df = df.set_index('Date', drop=False) g = df.groupby(pd.TimeGrouper('6M')) self.assertTrue(g.group_keys) - self.assertTrue(isinstance(g.grouper,pd.core.groupby.BinGrouper)) + self.assertTrue(isinstance(g.grouper, pd.core.groupby.BinGrouper)) groups = g.groups - self.assertTrue(isinstance(groups,dict)) + self.assertTrue(isinstance(groups, dict)) self.assertTrue(len(groups) == 3) def test_timegrouper_with_reg_groups(self): @@ -4249,160 +4351,184 @@ def test_timegrouper_with_reg_groups(self): import datetime as DT df_original = DataFrame({ - 'Branch' : 'A A A A A A A B'.split(), + 'Branch': 'A A A A A A A B'.split(), 'Buyer': 'Carl Mark Carl Carl Joe Joe Joe Carl'.split(), - 'Quantity': [1,3,5,1,8,1,9,3], 
- 'Date' : [ - DT.datetime(2013,1,1,13,0), - DT.datetime(2013,1,1,13,5), - DT.datetime(2013,10,1,20,0), - DT.datetime(2013,10,2,10,0), - DT.datetime(2013,10,1,20,0), - DT.datetime(2013,10,2,10,0), - DT.datetime(2013,12,2,12,0), - DT.datetime(2013,12,2,14,0), - ]}).set_index('Date') + 'Quantity': [1, 3, 5, 1, 8, 1, 9, 3], + 'Date': [ + DT.datetime(2013, 1, 1, 13, 0), + DT.datetime(2013, 1, 1, 13, 5), + DT.datetime(2013, 10, 1, 20, 0), + DT.datetime(2013, 10, 2, 10, 0), + DT.datetime(2013, 10, 1, 20, 0), + DT.datetime(2013, 10, 2, 10, 0), + DT.datetime(2013, 12, 2, 12, 0), + DT.datetime(2013, 12, 2, 14, 0), + ] + }).set_index('Date') df_sorted = df_original.sort_values(by='Quantity', ascending=False) for df in [df_original, df_sorted]: expected = DataFrame({ 'Buyer': 'Carl Joe Mark'.split(), - 'Quantity': [10,18,3], - 'Date' : [ - DT.datetime(2013,12,31,0,0), - DT.datetime(2013,12,31,0,0), - DT.datetime(2013,12,31,0,0), - ]}).set_index(['Date','Buyer']) - - result = df.groupby([pd.Grouper(freq='A'),'Buyer']).sum() - assert_frame_equal(result,expected) + 'Quantity': [10, 18, 3], + 'Date': [ + DT.datetime(2013, 12, 31, 0, 0), + DT.datetime(2013, 12, 31, 0, 0), + DT.datetime(2013, 12, 31, 0, 0), + ] + }).set_index(['Date', 'Buyer']) + + result = df.groupby([pd.Grouper(freq='A'), 'Buyer']).sum() + assert_frame_equal(result, expected) expected = DataFrame({ 'Buyer': 'Carl Mark Carl Joe'.split(), - 'Quantity': [1,3,9,18], - 'Date' : [ - DT.datetime(2013,1,1,0,0), - DT.datetime(2013,1,1,0,0), - DT.datetime(2013,7,1,0,0), - DT.datetime(2013,7,1,0,0), - ]}).set_index(['Date','Buyer']) - result = df.groupby([pd.Grouper(freq='6MS'),'Buyer']).sum() - assert_frame_equal(result,expected) + 'Quantity': [1, 3, 9, 18], + 'Date': [ + DT.datetime(2013, 1, 1, 0, 0), + DT.datetime(2013, 1, 1, 0, 0), + DT.datetime(2013, 7, 1, 0, 0), + DT.datetime(2013, 7, 1, 0, 0), + ] + }).set_index(['Date', 'Buyer']) + result = df.groupby([pd.Grouper(freq='6MS'), 'Buyer']).sum() + 
assert_frame_equal(result, expected) df_original = DataFrame({ - 'Branch' : 'A A A A A A A B'.split(), + 'Branch': 'A A A A A A A B'.split(), 'Buyer': 'Carl Mark Carl Carl Joe Joe Joe Carl'.split(), - 'Quantity': [1,3,5,1,8,1,9,3], - 'Date' : [ - DT.datetime(2013,10,1,13,0), - DT.datetime(2013,10,1,13,5), - DT.datetime(2013,10,1,20,0), - DT.datetime(2013,10,2,10,0), - DT.datetime(2013,10,1,20,0), - DT.datetime(2013,10,2,10,0), - DT.datetime(2013,10,2,12,0), - DT.datetime(2013,10,2,14,0), - ]}).set_index('Date') + 'Quantity': [1, 3, 5, 1, 8, 1, 9, 3], + 'Date': [ + DT.datetime(2013, 10, 1, 13, 0), + DT.datetime(2013, 10, 1, 13, 5), + DT.datetime(2013, 10, 1, 20, 0), + DT.datetime(2013, 10, 2, 10, 0), + DT.datetime(2013, 10, 1, 20, 0), + DT.datetime(2013, 10, 2, 10, 0), + DT.datetime(2013, 10, 2, 12, 0), + DT.datetime(2013, 10, 2, 14, 0), + ] + }).set_index('Date') df_sorted = df_original.sort_values(by='Quantity', ascending=False) for df in [df_original, df_sorted]: expected = DataFrame({ 'Buyer': 'Carl Joe Mark Carl Joe'.split(), - 'Quantity': [6,8,3,4,10], - 'Date' : [ - DT.datetime(2013,10,1,0,0), - DT.datetime(2013,10,1,0,0), - DT.datetime(2013,10,1,0,0), - DT.datetime(2013,10,2,0,0), - DT.datetime(2013,10,2,0,0), - ]}).set_index(['Date','Buyer']) - - result = df.groupby([pd.Grouper(freq='1D'),'Buyer']).sum() - assert_frame_equal(result,expected) - - result = df.groupby([pd.Grouper(freq='1M'),'Buyer']).sum() + 'Quantity': [6, 8, 3, 4, 10], + 'Date': [ + DT.datetime(2013, 10, 1, 0, 0), + DT.datetime(2013, 10, 1, 0, 0), + DT.datetime(2013, 10, 1, 0, 0), + DT.datetime(2013, 10, 2, 0, 0), + DT.datetime(2013, 10, 2, 0, 0), + ] + }).set_index(['Date', 'Buyer']) + + result = df.groupby([pd.Grouper(freq='1D'), 'Buyer']).sum() + assert_frame_equal(result, expected) + + result = df.groupby([pd.Grouper(freq='1M'), 'Buyer']).sum() expected = DataFrame({ 'Buyer': 'Carl Joe Mark'.split(), - 'Quantity': [10,18,3], - 'Date' : [ - DT.datetime(2013,10,31,0,0), - 
DT.datetime(2013,10,31,0,0), - DT.datetime(2013,10,31,0,0), - ]}).set_index(['Date','Buyer']) - assert_frame_equal(result,expected) + 'Quantity': [10, 18, 3], + 'Date': [ + DT.datetime(2013, 10, 31, 0, 0), + DT.datetime(2013, 10, 31, 0, 0), + DT.datetime(2013, 10, 31, 0, 0), + ] + }).set_index(['Date', 'Buyer']) + assert_frame_equal(result, expected) # passing the name df = df.reset_index() - result = df.groupby([pd.Grouper(freq='1M',key='Date'),'Buyer']).sum() - assert_frame_equal(result,expected) + result = df.groupby([pd.Grouper(freq='1M', key='Date'), 'Buyer' + ]).sum() + assert_frame_equal(result, expected) - self.assertRaises(KeyError, lambda : df.groupby([pd.Grouper(freq='1M',key='foo'),'Buyer']).sum()) + with self.assertRaises(KeyError): + df.groupby([pd.Grouper(freq='1M', key='foo'), 'Buyer']).sum() # passing the level df = df.set_index('Date') - result = df.groupby([pd.Grouper(freq='1M',level='Date'),'Buyer']).sum() - assert_frame_equal(result,expected) - result = df.groupby([pd.Grouper(freq='1M',level=0),'Buyer']).sum() - assert_frame_equal(result,expected) + result = df.groupby([pd.Grouper(freq='1M', level='Date'), 'Buyer' + ]).sum() + assert_frame_equal(result, expected) + result = df.groupby([pd.Grouper(freq='1M', level=0), 'Buyer']).sum( + ) + assert_frame_equal(result, expected) - self.assertRaises(ValueError, lambda : df.groupby([pd.Grouper(freq='1M',level='foo'),'Buyer']).sum()) + with self.assertRaises(ValueError): + df.groupby([pd.Grouper(freq='1M', level='foo'), + 'Buyer']).sum() # multi names df = df.copy() df['Date'] = df.index + pd.offsets.MonthEnd(2) - result = df.groupby([pd.Grouper(freq='1M',key='Date'),'Buyer']).sum() + result = df.groupby([pd.Grouper(freq='1M', key='Date'), 'Buyer' + ]).sum() expected = DataFrame({ 'Buyer': 'Carl Joe Mark'.split(), - 'Quantity': [10,18,3], - 'Date' : [ - DT.datetime(2013,11,30,0,0), - DT.datetime(2013,11,30,0,0), - DT.datetime(2013,11,30,0,0), - ]}).set_index(['Date','Buyer']) - 
assert_frame_equal(result,expected) + 'Quantity': [10, 18, 3], + 'Date': [ + DT.datetime(2013, 11, 30, 0, 0), + DT.datetime(2013, 11, 30, 0, 0), + DT.datetime(2013, 11, 30, 0, 0), + ] + }).set_index(['Date', 'Buyer']) + assert_frame_equal(result, expected) # error as we have both a level and a name! - self.assertRaises(ValueError, lambda : df.groupby([pd.Grouper(freq='1M',key='Date',level='Date'),'Buyer']).sum()) - + with self.assertRaises(ValueError): + df.groupby([pd.Grouper(freq='1M', key='Date', + level='Date'), 'Buyer']).sum() # single groupers - expected = DataFrame({ 'Quantity' : [31], - 'Date' : [DT.datetime(2013,10,31,0,0)] }).set_index('Date') + expected = DataFrame({'Quantity': [31], + 'Date': [DT.datetime(2013, 10, 31, 0, 0) + ]}).set_index('Date') result = df.groupby(pd.Grouper(freq='1M')).sum() assert_frame_equal(result, expected) result = df.groupby([pd.Grouper(freq='1M')]).sum() assert_frame_equal(result, expected) - expected = DataFrame({ 'Quantity' : [31], - 'Date' : [DT.datetime(2013,11,30,0,0)] }).set_index('Date') - result = df.groupby(pd.Grouper(freq='1M',key='Date')).sum() + expected = DataFrame({'Quantity': [31], + 'Date': [DT.datetime(2013, 11, 30, 0, 0) + ]}).set_index('Date') + result = df.groupby(pd.Grouper(freq='1M', key='Date')).sum() assert_frame_equal(result, expected) - result = df.groupby([pd.Grouper(freq='1M',key='Date')]).sum() + result = df.groupby([pd.Grouper(freq='1M', key='Date')]).sum() assert_frame_equal(result, expected) # GH 6764 multiple grouping with/without sort df = DataFrame({ - 'date' : pd.to_datetime([ - '20121002','20121007','20130130','20130202','20130305','20121002', - '20121207','20130130','20130202','20130305','20130202','20130305']), - 'user_id' : [1,1,1,1,1,3,3,3,5,5,5,5], - 'whole_cost' : [1790,364,280,259,201,623,90,312,359,301,359,801], - 'cost1' : [12,15,10,24,39,1,0,90,45,34,1,12] }).set_index('date') + 'date': pd.to_datetime([ + '20121002', '20121007', '20130130', '20130202', '20130305', + '20121002', 
'20121207', '20130130', '20130202', '20130305', + '20130202', '20130305' + ]), + 'user_id': [1, 1, 1, 1, 1, 3, 3, 3, 5, 5, 5, 5], + 'whole_cost': [1790, 364, 280, 259, 201, 623, 90, 312, 359, 301, + 359, 801], + 'cost1': [12, 15, 10, 24, 39, 1, 0, 90, 45, 34, 1, 12] + }).set_index('date') for freq in ['D', 'M', 'A', 'Q-APR']: - expected = df.groupby('user_id')['whole_cost'].resample( - freq, how='sum').dropna().reorder_levels( - ['date','user_id']).sortlevel().astype('int64') + expected = df.groupby('user_id')[ + 'whole_cost'].resample( + freq, how='sum').dropna().reorder_levels( + ['date', 'user_id']).sortlevel().astype('int64') expected.name = 'whole_cost' - result1 = df.sort_index().groupby([pd.TimeGrouper(freq=freq), 'user_id'])['whole_cost'].sum() + result1 = df.sort_index().groupby([pd.TimeGrouper(freq=freq), + 'user_id'])['whole_cost'].sum() assert_series_equal(result1, expected) - result2 = df.groupby([pd.TimeGrouper(freq=freq), 'user_id'])['whole_cost'].sum() + result2 = df.groupby([pd.TimeGrouper(freq=freq), 'user_id'])[ + 'whole_cost'].sum() assert_series_equal(result2, expected) def test_timegrouper_get_group(self): @@ -4410,10 +4536,14 @@ def test_timegrouper_get_group(self): df_original = DataFrame({ 'Buyer': 'Carl Joe Joe Carl Joe Carl'.split(), - 'Quantity': [18,3,5,1,9,3], - 'Date' : [datetime(2013,9,1,13,0), datetime(2013,9,1,13,5), - datetime(2013,10,1,20,0), datetime(2013,10,3,10,0), - datetime(2013,12,2,12,0), datetime(2013,9,2,14,0),]}) + 'Quantity': [18, 3, 5, 1, 9, 3], + 'Date': [datetime(2013, 9, 1, 13, 0), + datetime(2013, 9, 1, 13, 5), + datetime(2013, 10, 1, 20, 0), + datetime(2013, 10, 3, 10, 0), + datetime(2013, 12, 2, 12, 0), + datetime(2013, 9, 2, 14, 0), ] + }) df_reordered = df_original.sort_values(by='Quantity') # single grouping @@ -4431,7 +4561,8 @@ def test_timegrouper_get_group(self): # multiple grouping expected_list = [df_original.iloc[[1]], df_original.iloc[[3]], df_original.iloc[[4]]] - g_list = [('Joe', '2013-09-30'), 
('Carl', '2013-10-31'), ('Joe', '2013-12-31')] + g_list = [('Joe', '2013-09-30'), ('Carl', '2013-10-31'), + ('Joe', '2013-12-31')] for df in [df_original, df_reordered]: grouped = df.groupby(['Buyer', pd.Grouper(freq='M', key='Date')]) @@ -4468,13 +4599,15 @@ def test_cumcount_empty(self): ge = DataFrame().groupby(level=0) se = Series().groupby(level=0) - e = Series(dtype='int64') # edge case, as this is usually considered float + e = Series(dtype='int64' + ) # edge case, as this is usually considered float assert_series_equal(e, ge.cumcount()) assert_series_equal(e, se.cumcount()) def test_cumcount_dupe_index(self): - df = DataFrame([['a'], ['a'], ['a'], ['b'], ['a']], columns=['A'], index=[0] * 5) + df = DataFrame([['a'], ['a'], ['a'], ['b'], ['a']], columns=['A'], + index=[0] * 5) g = df.groupby('A') sg = g.A @@ -4485,7 +4618,8 @@ def test_cumcount_dupe_index(self): def test_cumcount_mi(self): mi = MultiIndex.from_tuples([[0, 1], [1, 2], [2, 2], [2, 2], [1, 0]]) - df = DataFrame([['a'], ['a'], ['a'], ['b'], ['a']], columns=['A'], index=mi) + df = DataFrame([['a'], ['a'], ['a'], ['b'], ['a']], columns=['A'], + index=mi) g = df.groupby('A') sg = g.A @@ -4495,7 +4629,8 @@ def test_cumcount_mi(self): assert_series_equal(expected, sg.cumcount()) def test_cumcount_groupby_not_col(self): - df = DataFrame([['a'], ['a'], ['a'], ['b'], ['a']], columns=['A'], index=[0] * 5) + df = DataFrame([['a'], ['a'], ['a'], ['b'], ['a']], columns=['A'], + index=[0] * 5) g = df.groupby([0, 0, 0, 1, 0]) sg = g.A @@ -4535,10 +4670,10 @@ def test_filter_single_column_df(self): # Test dropna=False. 
         assert_frame_equal(
             grouped.filter(lambda x: x.mean() < 10, dropna=False),
-            expected_odd.reindex(df.index))
+            expected_odd.reindex(df.index))
         assert_frame_equal(
             grouped.filter(lambda x: x.mean() > 10, dropna=False),
-            expected_even.reindex(df.index))
+            expected_even.reindex(df.index))
 
     def test_filter_multi_column_df(self):
         df = pd.DataFrame({'A': [1, 12, 12, 1], 'B': [1, 1, 1, 1]})
@@ -4546,14 +4681,14 @@ def test_filter_multi_column_df(self):
         grouped = df.groupby(grouper)
         expected = pd.DataFrame({'A': [12, 12], 'B': [1, 1]}, index=[1, 2])
         assert_frame_equal(
-            grouped.filter(lambda x: x['A'].sum() - x['B'].sum() > 10), expected)
+            grouped.filter(lambda x: x['A'].sum() - x['B'].sum() > 10),
+            expected)
 
     def test_filter_mixed_df(self):
         df = pd.DataFrame({'A': [1, 12, 12, 1], 'B': 'a b c d'.split()})
         grouper = df['A'].apply(lambda x: x % 2)
         grouped = df.groupby(grouper)
-        expected = pd.DataFrame({'A': [12, 12], 'B': ['b', 'c']},
-                                index=[1, 2])
+        expected = pd.DataFrame({'A': [12, 12], 'B': ['b', 'c']}, index=[1, 2])
         assert_frame_equal(
             grouped.filter(lambda x: x['A'].sum() > 10), expected)
 
@@ -4561,8 +4696,7 @@ def test_filter_out_all_groups(self):
         s = pd.Series([1, 3, 20, 5, 22, 24, 7])
         grouper = s.apply(lambda x: x % 2)
         grouped = s.groupby(grouper)
-        assert_series_equal(
-            grouped.filter(lambda x: x.mean() > 1000), s[[]])
+        assert_series_equal(grouped.filter(lambda x: x.mean() > 1000), s[[]])
         df = pd.DataFrame({'A': [1, 12, 12, 1], 'B': 'a b c d'.split()})
         grouper = df['A'].apply(lambda x: x % 2)
         grouped = df.groupby(grouper)
@@ -4587,7 +4721,8 @@ def raise_if_sum_is_zero(x):
                 raise ValueError
             else:
                 return x.sum() > 0
-        s = pd.Series([-1,0,1,2])
+
+        s = pd.Series([-1, 0, 1, 2])
         grouper = s.apply(lambda x: x % 2)
         grouped = s.groupby(grouper)
         self.assertRaises(TypeError,
@@ -4596,13 +4731,17 @@ def raise_if_sum_is_zero(x):
     def test_filter_with_axis_in_groupby(self):
         # issue 11041
         index = pd.MultiIndex.from_product([range(10), [0, 1]])
-        data = pd.DataFrame(np.arange(100).reshape(-1, 20), columns=index, dtype='int64')
-        result = data.groupby(level=0, axis=1).filter(lambda x: x.iloc[0, 0] > 10)
-        expected = data.iloc[:,12:20]
+        data = pd.DataFrame(
+            np.arange(100).reshape(-1, 20), columns=index, dtype='int64')
+        result = data.groupby(level=0,
+                              axis=1).filter(lambda x: x.iloc[0, 0] > 10)
+        expected = data.iloc[:, 12:20]
         assert_frame_equal(result, expected)
 
     def test_filter_bad_shapes(self):
-        df = DataFrame({'A': np.arange(8), 'B': list('aabbbbcc'), 'C': np.arange(8)})
+        df = DataFrame({'A': np.arange(8),
+                        'B': list('aabbbbcc'),
+                        'C': np.arange(8)})
         s = df['B']
         g_df = df.groupby('B')
         g_s = s.groupby(s)
@@ -4620,7 +4759,9 @@ def test_filter_bad_shapes(self):
         self.assertRaises(TypeError, lambda: g_s.filter(f))
 
     def test_filter_nan_is_false(self):
-        df = DataFrame({'A': np.arange(8), 'B': list('aabbbbcc'), 'C': np.arange(8)})
+        df = DataFrame({'A': np.arange(8),
+                        'B': list('aabbbbcc'),
+                        'C': np.arange(8)})
         s = df['B']
         g_df = df.groupby(df['B'])
         g_s = s.groupby(s)
@@ -4632,7 +4773,7 @@ def test_filter_nan_is_false(self):
     def test_filter_against_workaround(self):
         np.random.seed(0)
         # Series of ints
-        s = Series(np.random.randint(0,100,1000))
+        s = Series(np.random.randint(0, 100, 1000))
         grouper = s.apply(lambda x: np.round(x, -1))
         grouped = s.groupby(grouper)
         f = lambda x: x.mean() > 10
@@ -4641,7 +4782,7 @@ def test_filter_against_workaround(self):
         assert_series_equal(new_way.sort_values(), old_way.sort_values())
 
         # Series of floats
-        s = 100*Series(np.random.random(1000))
+        s = 100 * Series(np.random.random(1000))
         grouper = s.apply(lambda x: np.round(x, -1))
         grouped = s.groupby(grouper)
         f = lambda x: x.mean() > 10
@@ -4655,38 +4796,42 @@ def test_filter_against_workaround(self):
         N = 1000
         random_letters = letters.take(np.random.randint(0, 26, N))
         df = DataFrame({'ints': Series(np.random.randint(0, 100, N)),
-                        'floats': N/10*Series(np.random.random(N)),
+                        'floats': N / 10 * Series(np.random.random(N)),
                         'letters': Series(random_letters)})
 
         # Group by ints; filter on floats.
         grouped = df.groupby('ints')
-        old_way = df[grouped.floats.\
-            transform(lambda x: x.mean() > N/20).astype('bool')]
-        new_way = grouped.filter(lambda x: x['floats'].mean() > N/20)
+        old_way = df[grouped.floats.
+                     transform(lambda x: x.mean() > N / 20).astype('bool')]
+        new_way = grouped.filter(lambda x: x['floats'].mean() > N / 20)
         assert_frame_equal(new_way, old_way)
 
         # Group by floats (rounded); filter on strings.
         grouper = df.floats.apply(lambda x: np.round(x, -1))
         grouped = df.groupby(grouper)
-        old_way = df[grouped.letters.\
-            transform(lambda x: len(x) < N/10).astype('bool')]
-        new_way = grouped.filter(
-            lambda x: len(x.letters) < N/10)
+        old_way = df[grouped.letters.
+                     transform(lambda x: len(x) < N / 10).astype('bool')]
+        new_way = grouped.filter(lambda x: len(x.letters) < N / 10)
         assert_frame_equal(new_way, old_way)
 
         # Group by strings; filter on ints.
         grouped = df.groupby('letters')
-        old_way = df[grouped.ints.\
-            transform(lambda x: x.mean() > N/20).astype('bool')]
-        new_way = grouped.filter(lambda x: x['ints'].mean() > N/20)
+        old_way = df[grouped.ints.
+                     transform(lambda x: x.mean() > N / 20).astype('bool')]
+        new_way = grouped.filter(lambda x: x['ints'].mean() > N / 20)
         assert_frame_equal(new_way, old_way)
 
     def test_filter_using_len(self):
         # BUG GH4447
-        df = DataFrame({'A': np.arange(8), 'B': list('aabbbbcc'), 'C': np.arange(8)})
+        df = DataFrame({'A': np.arange(8),
+                        'B': list('aabbbbcc'),
+                        'C': np.arange(8)})
         grouped = df.groupby('B')
         actual = grouped.filter(lambda x: len(x) > 2)
-        expected = DataFrame({'A': np.arange(2, 6), 'B': list('bbbb'), 'C': np.arange(2, 6)}, index=np.arange(2, 6))
+        expected = DataFrame(
+            {'A': np.arange(2, 6),
+             'B': list('bbbb'),
+             'C': np.arange(2, 6)}, index=np.arange(2, 6))
         assert_frame_equal(actual, expected)
 
         actual = grouped.filter(lambda x: len(x) > 4)
@@ -4697,7 +4842,7 @@ def test_filter_using_len(self):
         s = df['B']
         grouped = s.groupby(s)
         actual = grouped.filter(lambda x: len(x) > 2)
-        expected = Series(4*['b'], index=np.arange(2, 6), name='B')
+        expected = Series(4 * ['b'], index=np.arange(2, 6), name='B')
         assert_series_equal(actual, expected)
 
         actual = grouped.filter(lambda x: len(x) > 4)
@@ -4706,8 +4851,8 @@ def test_filter_using_len(self):
     def test_filter_maintains_ordering(self):
         # Simple case: index is sequential. #4621
-        df = DataFrame({'pid' : [1,1,1,2,2,3,3,3],
-                        'tag' : [23,45,62,24,45,34,25,62]})
+        df = DataFrame({'pid': [1, 1, 1, 2, 2, 3, 3, 3],
+                        'tag': [23, 45, 62, 24, 45, 34, 25, 62]})
         s = df['pid']
         grouped = df.groupby('tag')
         actual = grouped.filter(lambda x: len(x) > 1)
@@ -4748,9 +4893,9 @@ def test_filter_maintains_ordering(self):
 
     def test_filter_multiple_timestamp(self):
         # GH 10114
-        df = DataFrame({'A' : np.arange(5,dtype='int64'),
-                        'B' : ['foo','bar','foo','bar','bar'],
-                        'C' : Timestamp('20130101') })
+        df = DataFrame({'A': np.arange(5, dtype='int64'),
+                        'B': ['foo', 'bar', 'foo', 'bar', 'bar'],
+                        'C': Timestamp('20130101')})
 
         grouped = df.groupby(['B', 'C'])
 
@@ -4765,18 +4910,18 @@ def test_filter_multiple_timestamp(self):
         assert_frame_equal(df, result)
 
         result = grouped.transform('sum')
-        expected = DataFrame({'A' : [2, 8, 2, 8, 8]})
+        expected = DataFrame({'A': [2, 8, 2, 8, 8]})
         assert_frame_equal(result, expected)
 
         result = grouped.transform(len)
-        expected = DataFrame({'A' : [2, 3, 2, 3, 3]})
+        expected = DataFrame({'A': [2, 3, 2, 3, 3]})
         assert_frame_equal(result, expected)
 
     def test_filter_and_transform_with_non_unique_int_index(self):
         # GH4620
         index = [1, 1, 1, 2, 1, 1, 0, 1]
-        df = DataFrame({'pid' : [1,1,1,2,2,3,3,3],
-                        'tag' : [23,45,62,24,45,34,25,62]}, index=index)
+        df = DataFrame({'pid': [1, 1, 1, 2, 2, 3, 3, 3],
+                        'tag': [23, 45, 62, 24, 45, 34, 25, 62]}, index=index)
         grouped_df = df.groupby('tag')
         ser = df['pid']
         grouped_ser = ser.groupby(df['tag'])
@@ -4799,7 +4944,7 @@ def test_filter_and_transform_with_non_unique_int_index(self):
 
         actual = grouped_ser.filter(lambda x: len(x) > 1, dropna=False)
         NA = np.nan
-        expected = Series([NA,1,1,NA,2,NA,NA,3], index, name='pid')
+        expected = Series([NA, 1, 1, NA, 2, NA, NA, 3], index, name='pid')
         # ^ made manually because this can get confusing!
         assert_series_equal(actual, expected)
 
@@ -4815,8 +4960,8 @@ def test_filter_and_transform_with_non_unique_int_index(self):
     def test_filter_and_transform_with_multiple_non_unique_int_index(self):
         # GH4620
         index = [1, 1, 1, 2, 0, 0, 0, 1]
-        df = DataFrame({'pid' : [1,1,1,2,2,3,3,3],
-                        'tag' : [23,45,62,24,45,34,25,62]}, index=index)
+        df = DataFrame({'pid': [1, 1, 1, 2, 2, 3, 3, 3],
+                        'tag': [23, 45, 62, 24, 45, 34, 25, 62]}, index=index)
         grouped_df = df.groupby('tag')
         ser = df['pid']
         grouped_ser = ser.groupby(df['tag'])
@@ -4839,7 +4984,7 @@ def test_filter_and_transform_with_multiple_non_unique_int_index(self):
 
         actual = grouped_ser.filter(lambda x: len(x) > 1, dropna=False)
         NA = np.nan
-        expected = Series([NA,1,1,NA,2,NA,NA,3], index, name='pid')
+        expected = Series([NA, 1, 1, NA, 2, NA, NA, 3], index, name='pid')
         # ^ made manually because this can get confusing!
         assert_series_equal(actual, expected)
 
@@ -4855,48 +5000,8 @@ def test_filter_and_transform_with_multiple_non_unique_int_index(self):
     def test_filter_and_transform_with_non_unique_float_index(self):
         # GH4620
         index = np.array([1, 1, 1, 2, 1, 1, 0, 1], dtype=float)
-        df = DataFrame({'pid' : [1,1,1,2,2,3,3,3],
-                        'tag' : [23,45,62,24,45,34,25,62]}, index=index)
-        grouped_df = df.groupby('tag')
-        ser = df['pid']
-        grouped_ser = ser.groupby(df['tag'])
-        expected_indexes = [1, 2, 4, 7]
-
-        # Filter DataFrame
-        actual = grouped_df.filter(lambda x: len(x) > 1)
-        expected = df.iloc[expected_indexes]
-        assert_frame_equal(actual, expected)
-
-        actual = grouped_df.filter(lambda x: len(x) > 1, dropna=False)
-        expected = df.copy()
-        expected.iloc[[0, 3, 5, 6]] = np.nan
-        assert_frame_equal(actual, expected)
-
-        # Filter Series
-        actual = grouped_ser.filter(lambda x: len(x) > 1)
-        expected = ser.take(expected_indexes)
-        assert_series_equal(actual, expected)
-
-        actual = grouped_ser.filter(lambda x: len(x) > 1, dropna=False)
-        NA = np.nan
-        expected = Series([NA,1,1,NA,2,NA,NA,3], index, name='pid')
-        # ^ made manually because this can get confusing!
-        assert_series_equal(actual, expected)
-
-        # Transform Series
-        actual = grouped_ser.transform(len)
-        expected = Series([1, 2, 2, 1, 2, 1, 1, 2], index)
-        assert_series_equal(actual, expected)
-
-        # Transform (a column from) DataFrameGroupBy
-        actual = grouped_df.pid.transform(len)
-        assert_series_equal(actual, expected)
-
-    def test_filter_and_transform_with_non_unique_float_index(self):
-        # GH4620
-        index = np.array([1, 1, 1, 2, 0, 0, 0, 1], dtype=float)
-        df = DataFrame({'pid' : [1,1,1,2,2,3,3,3],
-                        'tag' : [23,45,62,24,45,34,25,62]}, index=index)
+        df = DataFrame({'pid': [1, 1, 1, 2, 2, 3, 3, 3],
+                        'tag': [23, 45, 62, 24, 45, 34, 25, 62]}, index=index)
         grouped_df = df.groupby('tag')
         ser = df['pid']
         grouped_ser = ser.groupby(df['tag'])
@@ -4919,7 +5024,7 @@ def test_filter_and_transform_with_non_unique_float_index(self):
 
         actual = grouped_ser.filter(lambda x: len(x) > 1, dropna=False)
         NA = np.nan
-        expected = Series([NA,1,1,NA,2,NA,NA,3], index, name='pid')
+        expected = Series([NA, 1, 1, NA, 2, NA, NA, 3], index, name='pid')
         # ^ made manually because this can get confusing!
         assert_series_equal(actual, expected)
 
@@ -4938,8 +5043,8 @@ def test_filter_and_transform_with_non_unique_timestamp_index(self):
         t1 = Timestamp('2013-10-30 00:05:00')
         t2 = Timestamp('2013-11-30 00:05:00')
         index = [t1, t1, t1, t2, t1, t1, t0, t1]
-        df = DataFrame({'pid' : [1,1,1,2,2,3,3,3],
-                        'tag' : [23,45,62,24,45,34,25,62]}, index=index)
+        df = DataFrame({'pid': [1, 1, 1, 2, 2, 3, 3, 3],
+                        'tag': [23, 45, 62, 24, 45, 34, 25, 62]}, index=index)
         grouped_df = df.groupby('tag')
         ser = df['pid']
         grouped_ser = ser.groupby(df['tag'])
@@ -4962,7 +5067,7 @@ def test_filter_and_transform_with_non_unique_timestamp_index(self):
 
         actual = grouped_ser.filter(lambda x: len(x) > 1, dropna=False)
         NA = np.nan
-        expected = Series([NA,1,1,NA,2,NA,NA,3], index, name='pid')
+        expected = Series([NA, 1, 1, NA, 2, NA, NA, 3], index, name='pid')
         # ^ made manually because this can get confusing!
         assert_series_equal(actual, expected)
 
@@ -4978,8 +5083,8 @@ def test_filter_and_transform_with_non_unique_timestamp_index(self):
     def test_filter_and_transform_with_non_unique_string_index(self):
         # GH4620
         index = list('bbbcbbab')
-        df = DataFrame({'pid' : [1,1,1,2,2,3,3,3],
-                        'tag' : [23,45,62,24,45,34,25,62]}, index=index)
+        df = DataFrame({'pid': [1, 1, 1, 2, 2, 3, 3, 3],
+                        'tag': [23, 45, 62, 24, 45, 34, 25, 62]}, index=index)
        grouped_df = df.groupby('tag')
         ser = df['pid']
         grouped_ser = ser.groupby(df['tag'])
@@ -5002,7 +5107,7 @@ def test_filter_and_transform_with_non_unique_string_index(self):
 
         actual = grouped_ser.filter(lambda x: len(x) > 1, dropna=False)
         NA = np.nan
-        expected = Series([NA,1,1,NA,2,NA,NA,3], index, name='pid')
+        expected = Series([NA, 1, 1, NA, 2, NA, NA, 3], index, name='pid')
         # ^ made manually because this can get confusing!
         assert_series_equal(actual, expected)
 
@@ -5023,27 +5128,27 @@ def test_filter_has_access_to_grouped_cols(self):
         assert_frame_equal(filt, df.iloc[[0, 1]])
 
     def test_filter_enforces_scalarness(self):
-        df = pd.DataFrame([
+        df = pd.DataFrame([
             ['best', 'a', 'x'],
             ['worst', 'b', 'y'],
             ['best', 'c', 'x'],
-            ['best','d', 'y'],
-            ['worst','d', 'y'],
-            ['worst','d', 'y'],
-            ['best','d', 'z'],
+            ['best', 'd', 'y'],
+            ['worst', 'd', 'y'],
+            ['worst', 'd', 'y'],
+            ['best', 'd', 'z'],
         ], columns=['a', 'b', 'c'])
         with tm.assertRaisesRegexp(TypeError, 'filter function returned a.*'):
             df.groupby('c').filter(lambda g: g['a'] == 'best')
 
     def test_filter_non_bool_raises(self):
-        df = pd.DataFrame([
+        df = pd.DataFrame([
             ['best', 'a', 1],
             ['worst', 'b', 1],
             ['best', 'c', 1],
-            ['best','d', 1],
-            ['worst','d', 1],
-            ['worst','d', 1],
-            ['best','d', 1],
+            ['best', 'd', 1],
+            ['worst', 'd', 1],
+            ['worst', 'd', 1],
+            ['best', 'd', 1],
         ], columns=['a', 'b', 'c'])
         with tm.assertRaisesRegexp(TypeError, 'filter function returned a.*'):
             df.groupby('a').filter(lambda g: g.c.mean())
 
@@ -5053,11 +5158,14 @@ def test_fill_constistency(self):
         # GH9221
         # pass thru keyword arguments to the generated wrapper
         # are set if the passed kw is None (only)
-        df = DataFrame(index=pd.MultiIndex.from_product([['value1','value2'],
-                                                         date_range('2014-01-01','2014-01-06')]),
-                       columns=Index(['1','2'], name='id'))
-        df['1'] = [np.nan, 1, np.nan, np.nan, 11, np.nan, np.nan, 2, np.nan, np.nan, 22, np.nan]
-        df['2'] = [np.nan, 3, np.nan, np.nan, 33, np.nan, np.nan, 4, np.nan, np.nan, 44, np.nan]
+        df = DataFrame(index=pd.MultiIndex.from_product(
+            [['value1', 'value2'], date_range('2014-01-01', '2014-01-06')]),
+                       columns=Index(
+                           ['1', '2'], name='id'))
+        df['1'] = [np.nan, 1, np.nan, np.nan, 11, np.nan, np.nan, 2, np.nan,
+                   np.nan, 22, np.nan]
+        df['2'] = [np.nan, 3, np.nan, np.nan, 33, np.nan, np.nan, 4, np.nan,
+                   np.nan, 44, np.nan]
 
         expected = df.groupby(level=0, axis=0).fillna(method='ffill')
         result = df.T.groupby(level=0, axis=1).fillna(method='ffill').T
@@ -5103,17 +5211,22 @@ def test_groupby_selection_with_methods(self):
 
         # methods which are called as .foo()
         methods = ['count',
                    'corr',
-                   'cummax', 'cummin', 'cumprod',
-                   'describe', 'rank',
+                   'cummax',
+                   'cummin',
+                   'cumprod',
+                   'describe',
+                   'rank',
                    'quantile',
-                   'diff', 'shift',
-                   'all', 'any',
-                   'idxmin', 'idxmax',
-                   'ffill', 'bfill',
+                   'diff',
+                   'shift',
+                   'all',
+                   'any',
+                   'idxmin',
+                   'idxmax',
+                   'ffill',
+                   'bfill',
                    'pct_change',
-                   'tshift',
-                   #'ohlc'
-                   ]
+                   'tshift']
 
         for m in methods:
             res = getattr(g, m)()
@@ -5143,52 +5256,89 @@ def test_groupby_whitelist(self):
         s = df.floats
 
         df_whitelist = frozenset([
-            'last', 'first',
-            'mean', 'sum', 'min', 'max',
-            'head', 'tail',
-            'cumsum', 'cumprod', 'cummin', 'cummax', 'cumcount',
+            'last',
+            'first',
+            'mean',
+            'sum',
+            'min',
+            'max',
+            'head',
+            'tail',
+            'cumsum',
+            'cumprod',
+            'cummin',
+            'cummax',
+            'cumcount',
             'resample',
             'describe',
-            'rank', 'quantile',
+            'rank',
+            'quantile',
             'fillna',
             'mad',
-            'any', 'all',
+            'any',
+            'all',
             'take',
-            'idxmax', 'idxmin',
-            'shift', 'tshift',
-            'ffill', 'bfill',
-            'pct_change', 'skew',
-            'plot', 'boxplot', 'hist',
-            'median', 'dtypes',
-            'corrwith', 'corr', 'cov',
+            'idxmax',
+            'idxmin',
+            'shift',
+            'tshift',
+            'ffill',
+            'bfill',
+            'pct_change',
+            'skew',
+            'plot',
+            'boxplot',
+            'hist',
+            'median',
+            'dtypes',
+            'corrwith',
+            'corr',
+            'cov',
             'diff',
         ])
         s_whitelist = frozenset([
-            'last', 'first',
-            'mean', 'sum', 'min', 'max',
-            'head', 'tail',
-            'cumsum', 'cumprod', 'cummin', 'cummax', 'cumcount',
+            'last',
+            'first',
+            'mean',
+            'sum',
+            'min',
+            'max',
+            'head',
+            'tail',
+            'cumsum',
+            'cumprod',
+            'cummin',
+            'cummax',
+            'cumcount',
             'resample',
             'describe',
-            'rank', 'quantile',
+            'rank',
+            'quantile',
            'fillna',
            'mad',
-            'any', 'all',
+            'any',
+            'all',
            'take',
-            'idxmax', 'idxmin',
-            'shift', 'tshift',
-            'ffill', 'bfill',
-            'pct_change', 'skew',
-            'plot', 'hist',
-            'median', 'dtype',
-            'corr', 'cov',
+            'idxmax',
+            'idxmin',
+            'shift',
+            'tshift',
+            'ffill',
+            'bfill',
+            'pct_change',
+            'skew',
+            'plot',
+            'hist',
+            'median',
+            'dtype',
+            'corr',
+            'cov',
             'diff',
             'unique',
             # 'nlargest', 'nsmallest',
         ])
 
-        for obj, whitelist in zip((df, s),
-                                  (df_whitelist, s_whitelist)):
+        for obj, whitelist in zip((df, s), (df_whitelist, s_whitelist)):
             gb = obj.groupby(df.letters)
             self.assertEqual(whitelist, gb._apply_whitelist)
             for m in whitelist:
@@ -5212,38 +5362,39 @@ def test_groupby_whitelist_deprecations(self):
         with tm.assert_produces_warning(FutureWarning):
             df.groupby('letters').floats.irow(0)
 
-    def test_regression_whitelist_methods(self) :
+    def test_regression_whitelist_methods(self):
 
         # GH6944
         # explicity test the whitelest methods
-        index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
-                                   ['one', 'two', 'three']],
+        index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
+                                                                  'three']],
                           labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
                                    [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
                           names=['first', 'second'])
         raw_frame = DataFrame(np.random.randn(10, 3), index=index,
-                               columns=Index(['A', 'B', 'C'], name='exp'))
+                              columns=Index(['A', 'B', 'C'], name='exp'))
         raw_frame.ix[1, [1, 2]] = np.nan
         raw_frame.ix[7, [0, 1]] = np.nan
 
         for op, level, axis, skipna in cart_product(self.AGG_FUNCTIONS,
                                                     lrange(2), lrange(2),
-                                                    [True,False]) :
+                                                    [True, False]):
 
-            if axis == 0 :
                frame = raw_frame
-            else :
+            if axis == 0:
                frame = raw_frame
+            else:
                 frame = raw_frame.T
 
-            if op in self.AGG_FUNCTIONS_WITH_SKIPNA :
-                grouped = frame.groupby(level=level,axis=axis)
-                result = getattr(grouped,op)(skipna=skipna)
-                expected = getattr(frame,op)(level=level,axis=axis,skipna=skipna)
+            if op in self.AGG_FUNCTIONS_WITH_SKIPNA:
+                grouped = frame.groupby(level=level, axis=axis)
+                result = getattr(grouped, op)(skipna=skipna)
+                expected = getattr(frame, op)(level=level, axis=axis,
+                                              skipna=skipna)
                 assert_frame_equal(result, expected)
-            else :
-                grouped = frame.groupby(level=level,axis=axis)
-                result = getattr(grouped,op)()
-                expected = getattr(frame,op)(level=level,axis=axis)
+            else:
+                grouped = frame.groupby(level=level, axis=axis)
+                result = getattr(grouped, op)()
+                expected = getattr(frame, op)(level=level, axis=axis)
                 assert_frame_equal(result, expected)
 
     def test_groupby_blacklist(self):
@@ -5281,22 +5432,20 @@ def test_groupby_blacklist(self):
     def test_tab_completion(self):
         grp = self.mframe.groupby(level='second')
         results = set([v for v in dir(grp) if not v.startswith('_')])
-        expected = set(['A','B','C',
-            'agg','aggregate','apply','boxplot','filter','first','get_group',
-            'groups','hist','indices','last','max','mean','median',
-            'min','name','ngroups','nth','ohlc','plot', 'prod',
-            'size', 'std', 'sum', 'transform', 'var', 'sem', 'count', 'head',
-            'irow',
-            'describe', 'cummax', 'quantile', 'rank', 'cumprod', 'tail',
-            'resample', 'cummin', 'fillna', 'cumsum', 'cumcount',
-            'all', 'shift', 'skew', 'bfill', 'ffill',
-            'take', 'tshift', 'pct_change', 'any', 'mad', 'corr', 'corrwith',
-            'cov', 'dtypes', 'diff', 'idxmax', 'idxmin'
-            ])
+        expected = set(
+            ['A', 'B', 'C', 'agg', 'aggregate', 'apply', 'boxplot', 'filter',
+             'first', 'get_group', 'groups', 'hist', 'indices', 'last', 'max',
+             'mean', 'median', 'min', 'name', 'ngroups', 'nth', 'ohlc', 'plot',
+             'prod', 'size', 'std', 'sum', 'transform', 'var', 'sem', 'count',
+             'head', 'irow', 'describe', 'cummax', 'quantile', 'rank',
+             'cumprod', 'tail', 'resample', 'cummin', 'fillna', 'cumsum',
+             'cumcount', 'all', 'shift', 'skew', 'bfill', 'ffill', 'take',
+             'tshift', 'pct_change', 'any', 'mad', 'corr', 'corrwith', 'cov',
+             'dtypes', 'diff', 'idxmax', 'idxmin'])
         self.assertEqual(results, expected)
 
     def test_lexsort_indexer(self):
-        keys = [[nan]*5 + list(range(100)) + [nan]*5]
+        keys = [[nan] * 5 + list(range(100)) + [nan] * 5]
         # orders=True, na_position='last'
         result = _lexsort_indexer(keys, orders=True, na_position='last')
         expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
@@ -5309,17 +5458,19 @@ def test_lexsort_indexer(self):
 
         # orders=False, na_position='last'
         result = _lexsort_indexer(keys, orders=False, na_position='last')
-        expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
+        expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
+                                                                         110))
         assert_equal(result, expected)
 
         # orders=False, na_position='first'
         result = _lexsort_indexer(keys, orders=False, na_position='first')
-        expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
+        expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
+                                                                       -1))
         assert_equal(result, expected)
 
     def test_nargsort(self):
         # np.argsort(items) places NaNs last
-        items = [nan]*5 + list(range(100)) + [nan]*5
+        items = [nan] * 5 + list(range(100)) + [nan] * 5
         # np.argsort(items2) may not place NaNs first
         items2 = np.array(items, dtype='O')
 
@@ -5327,78 +5478,86 @@ def test_nargsort(self):
             # GH 2785; due to a regression in NumPy1.6.2
             np.argsort(np.array([[1, 2], [1, 3], [1, 2]], dtype='i'))
             np.argsort(items2, kind='mergesort')
-        except TypeError as err:
+        except TypeError:
             raise nose.SkipTest('requested sort not available for type')
 
-        # mergesort is the most difficult to get right because we want it to be stable.
+        # mergesort is the most difficult to get right because we want it to be
+        # stable.
 
-        # According to numpy/core/tests/test_multiarray, """The number
-        # of sorted items must be greater than ~50 to check the actual algorithm
+        # According to numpy/core/tests/test_multiarray, """The number of
+        # sorted items must be greater than ~50 to check the actual algorithm
         # because quick and merge sort fall over to insertion sort for small
         # arrays."""
 
         # mergesort, ascending=True, na_position='last'
-        result = _nargsort(
-            items, kind='mergesort', ascending=True, na_position='last')
+        result = _nargsort(items, kind='mergesort', ascending=True,
+                           na_position='last')
         expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
         assert_equal(result, expected)
 
         # mergesort, ascending=True, na_position='first'
-        result = _nargsort(
-            items, kind='mergesort', ascending=True, na_position='first')
+        result = _nargsort(items, kind='mergesort', ascending=True,
+                           na_position='first')
         expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
         assert_equal(result, expected)
 
         # mergesort, ascending=False, na_position='last'
-        result = _nargsort(
-            items, kind='mergesort', ascending=False, na_position='last')
-        expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
+        result = _nargsort(items, kind='mergesort', ascending=False,
+                           na_position='last')
+        expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
+                                                                         110))
         assert_equal(result, expected)
 
         # mergesort, ascending=False, na_position='first'
-        result = _nargsort(
-            items, kind='mergesort', ascending=False, na_position='first')
-        expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
+        result = _nargsort(items, kind='mergesort', ascending=False,
+                           na_position='first')
+        expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
+                                                                       -1))
        assert_equal(result, expected)
 
         # mergesort, ascending=True, na_position='last'
-        result = _nargsort(
-            items2, kind='mergesort', ascending=True, na_position='last')
+        result = _nargsort(items2, kind='mergesort', ascending=True,
+                           na_position='last')
         expected = list(range(5, 105)) + list(range(5)) + list(range(105, 110))
         assert_equal(result, expected)
 
         # mergesort, ascending=True, na_position='first'
-        result = _nargsort(
-            items2, kind='mergesort', ascending=True, na_position='first')
+        result = _nargsort(items2, kind='mergesort', ascending=True,
+                           na_position='first')
         expected = list(range(5)) + list(range(105, 110)) + list(range(5, 105))
         assert_equal(result, expected)
 
         # mergesort, ascending=False, na_position='last'
-        result = _nargsort(
-            items2, kind='mergesort', ascending=False, na_position='last')
-        expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105, 110))
+        result = _nargsort(items2, kind='mergesort', ascending=False,
+                           na_position='last')
+        expected = list(range(104, 4, -1)) + list(range(5)) + list(range(105,
+                                                                         110))
         assert_equal(result, expected)
 
         # mergesort, ascending=False, na_position='first'
-        result = _nargsort(
-            items2, kind='mergesort', ascending=False, na_position='first')
-        expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4, -1))
+        result = _nargsort(items2, kind='mergesort', ascending=False,
+                           na_position='first')
+        expected = list(range(5)) + list(range(105, 110)) + list(range(104, 4,
+                                                                       -1))
         assert_equal(result, expected)
 
     def test_datetime_count(self):
-        df = DataFrame({'a': [1,2,3] * 2,
+        df = DataFrame({'a': [1, 2, 3] * 2,
                         'dates': pd.date_range('now', periods=6, freq='T')})
         result = df.groupby('a').dates.count()
-        expected = Series([2, 2, 2], index=Index([1, 2, 3], name='a'),
-                          name='dates')
+        expected = Series([
+            2, 2, 2
+        ], index=Index([1, 2, 3], name='a'), name='dates')
         tm.assert_series_equal(result, expected)
 
     def test_lower_int_prec_count(self):
-        df = DataFrame({'a': np.array([0, 1, 2, 100], np.int8),
-                        'b': np.array([1, 2, 3, 6], np.uint32),
-                        'c': np.array([4, 5, 6, 8], np.int16),
-                        'grp': list('ab' * 2)})
+        df = DataFrame({'a': np.array(
+            [0, 1, 2, 100], np.int8),
+                        'b': np.array(
+                            [1, 2, 3, 6], np.uint32),
+                        'c': np.array(
+                            [4, 5, 6, 8], np.int16),
+                        'grp': list('ab' * 2)})
         result = df.groupby('grp').count()
         expected = DataFrame({'a': [2, 2],
                               'b': [2, 2],
@@ -5411,6 +5570,7 @@ class RaisingObjectException(Exception):
             pass
 
         class RaisingObject(object):
+
            def __init__(self, msg='I will raise inside Cython'):
                super(RaisingObject, self).__init__()
                self.msg = msg
@@ -5422,8 +5582,8 @@ def __eq__(self, other):
         df = DataFrame({'a': [RaisingObject() for _ in range(4)],
                         'grp': list('ab' * 2)})
         result = df.groupby('grp').count()
-        expected = DataFrame({'a': [2, 2]}, index=pd.Index(list('ab'),
-                                                           name='grp'))
+        expected = DataFrame({'a': [2, 2]}, index=pd.Index(
+            list('ab'), name='grp'))
         tm.assert_frame_equal(result, expected)
 
     def test__cython_agg_general(self):
@@ -5435,8 +5595,7 @@ def test__cython_agg_general(self):
                ('min', np.min),
                ('max', np.max),
                ('first', lambda x: x.iloc[0]),
-               ('last', lambda x: x.iloc[-1]),
-               ]
+               ('last', lambda x: x.iloc[-1]), ]
         df = DataFrame(np.random.randn(1000))
         labels = np.random.randint(0, 50, size=1000).astype(float)
 
@@ -5446,33 +5605,30 @@ def test__cython_agg_general(self):
             try:
                 tm.assert_frame_equal(result, expected)
             except BaseException as exc:
-                exc.args += ('operation: %s' % op,)
+                exc.args += ('operation: %s' % op, )
                 raise
 
     def test_cython_group_transform_algos(self):
-        #GH 4095
-        dtypes = [np.int8, np.int16, np.int32, np.int64,
-                  np.uint8, np.uint32, np.uint64,
-                  np.float32, np.float64]
+        # GH 4095
+        dtypes = [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint32,
+                  np.uint64, np.float32, np.float64]
 
         ops = [(pd.algos.group_cumprod_float64, np.cumproduct, [np.float64]),
               (pd.algos.group_cumsum, np.cumsum, dtypes)]
 
         for pd_op, np_op, dtypes in ops:
             for dtype in dtypes:
-                data = np.array([[1],[2],[3],[4]], dtype=dtype)
+                data = np.array([[1], [2], [3], [4]], dtype=dtype)
                 ans = np.zeros_like(data)
                 accum = np.array([[0]], dtype=dtype)
-                labels = np.array([0,0,0,0], dtype=np.int64)
+                labels = np.array([0, 0, 0, 0], dtype=np.int64)
                 pd_op(ans, data, labels, accum)
-                self.assert_numpy_array_equal(np_op(data), ans[:,0])
-
-
+                self.assert_numpy_array_equal(np_op(data), ans[:, 0])
 
         # with nans
-        labels = np.array([0,0,0,0,0], dtype=np.int64)
+        labels = np.array([0, 0, 0, 0, 0], dtype=np.int64)
 
-        data = np.array([[1],[2],[3],[np.nan],[4]], dtype='float64')
+        data = np.array([[1], [2], [3], [np.nan], [4]], dtype='float64')
         accum = np.array([[0.0]])
         actual = np.zeros_like(data)
         actual.fill(np.nan)
@@ -5493,45 +5649,46 @@ def test_cython_group_transform_algos(self):
         actual = np.zeros_like(data, dtype='int64')
         actual.fill(np.nan)
         pd.algos.group_cumsum(actual, data.view('int64'), labels, accum)
-        expected = np.array(
-            [np.timedelta64(1, 'ns'), np.timedelta64(2, 'ns'),
-             np.timedelta64(3, 'ns'), np.timedelta64(4, 'ns'),
-             np.timedelta64(5, 'ns')])
+        expected = np.array([np.timedelta64(1, 'ns'), np.timedelta64(
+            2, 'ns'), np.timedelta64(3, 'ns'), np.timedelta64(4, 'ns'),
+            np.timedelta64(5, 'ns')])
         self.assert_numpy_array_equal(actual[:, 0].view('m8[ns]'), expected)
 
-
-
     def test_cython_transform(self):
         # GH 4095
-        ops = [(('cumprod', ()), lambda x: x.cumprod()),
-               (('cumsum', ()), lambda x: x.cumsum()),
-               (('shift', (-1,)), lambda x: x.shift(-1)),
-               (('shift', (1,)), lambda x: x.shift())]
+        ops = [(('cumprod',
+                 ()), lambda x: x.cumprod()), (('cumsum', ()),
+                                               lambda x: x.cumsum()),
+               (('shift', (-1, )),
+                lambda x: x.shift(-1)), (('shift',
+                                          (1, )), lambda x: x.shift())]
 
         s = Series(np.random.randn(1000))
         s_missing = s.copy()
         s_missing.iloc[2:10] = np.nan
         labels = np.random.randint(0, 50, size=1000).astype(float)
 
-        #series
+        # series
         for (op, args), targop in ops:
            for data in [s, s_missing]:
                 # print(data.head())
                 expected = data.groupby(labels).transform(targop)
 
                 tm.assert_series_equal(expected,
-                                       data.groupby(labels).transform(op, *args))
-                tm.assert_series_equal(expected,
-                                       getattr(data.groupby(labels), op)(*args))
+                                       data.groupby(labels).transform(op,
+                                                                      *args))
+                tm.assert_series_equal(expected, getattr(
+                    data.groupby(labels), op)(*args))
 
         strings = list('qwertyuiopasdfghjklz')
         strings_missing = strings[:]
         strings_missing[5] = np.nan
         df = DataFrame({'float': s,
                         'float_missing': s_missing,
-                        'int': [1,1,1,1,2] * 200,
+                        'int': [1, 1, 1, 1, 2] * 200,
                         'datetime': pd.date_range('1990-1-1', periods=1000),
-                        'timedelta': pd.timedelta_range(1, freq='s', periods=1000),
+                        'timedelta': pd.timedelta_range(1, freq='s',
+                                                        periods=1000),
                         'string': strings * 50,
                         'string_missing': strings_missing * 50})
         df['cat'] = df['string'].astype('category')
@@ -5539,12 +5696,12 @@ def test_cython_transform(self):
         df2 = df.copy()
         df2.index = pd.MultiIndex.from_product([range(100), range(10)])
 
-        #DataFrame - Single and MultiIndex,
-        #group by values, index level, columns
+        # DataFrame - Single and MultiIndex,
+        # group by values, index level, columns
         for df in [df, df2]:
-            for gb_target in [dict(by=labels), dict(level=0),
-                              dict(by='string')]:  # dict(by='string_missing')]:
-                              # dict(by=['int','string'])]:
+            for gb_target in [dict(by=labels), dict(level=0), dict(by='string')
+                              ]:  # dict(by='string_missing')]:
+                # dict(by=['int','string'])]:
                 gb = df.groupby(**gb_target)
                 # whitelisted methods set the selection before applying
@@ -5558,19 +5715,20 @@ def test_cython_transform(self):
                     # numeric apply fastpath promotes dtype so have
                     # to apply seperately and concat
                     i = gb[['int']].apply(targop)
-                    f = gb[['float','float_missing']].apply(targop)
-                    expected = pd.concat([f,i], axis=1)
+                    f = gb[['float', 'float_missing']].apply(targop)
+                    expected = pd.concat([f, i], axis=1)
                 else:
                     expected = gb.apply(targop)
 
                 expected = expected.sort_index(axis=1)
                 tm.assert_frame_equal(expected,
-                                      gb.transform(op, *args).sort_index(axis=1))
-                tm.assert_frame_equal(expected,
-                                      getattr(gb, op)(*args))
+                                      gb.transform(op, *args).sort_index(
+                                          axis=1))
+                tm.assert_frame_equal(expected, getattr(gb, op)(*args))
 
                 # individual columns
                 for c in df:
-                    if c not in ['float', 'int', 'float_missing'] and op != 'shift':
+                    if c not in ['float', 'int', 'float_missing'
+                                 ] and op != 'shift':
                         self.assertRaises(DataError, gb[c].transform, op)
                         self.assertRaises(DataError, getattr(gb[c], op))
                     else:
@@ -5580,6 +5738,7 @@ def test_cython_transform(self):
                                                gb[c].transform(op, *args))
                         tm.assert_series_equal(expected,
                                                getattr(gb[c], op)(*args))
+
     def test_groupby_cumprod(self):
         # GH 4095
         df = pd.DataFrame({'key': ['b'] * 10, 'value': 2})
@@ -5598,7 +5757,6 @@ def test_groupby_cumprod(self):
         expected.name = 'value'
         tm.assert_series_equal(actual, expected)
 
-
     def test_ops_general(self):
         ops = [('mean', np.mean),
                ('median', np.median),
@@ -5610,8 +5768,7 @@ def test_ops_general(self):
                ('max', np.max),
                ('first', lambda x: x.iloc[0]),
                ('last', lambda x: x.iloc[-1]),
-               ('count', np.size),
-               ]
+               ('count', np.size), ]
         try:
            from scipy.stats import sem
         except ImportError:
@@ -5627,7 +5784,7 @@ def test_ops_general(self):
             try:
                 tm.assert_frame_equal(result, expected)
             except BaseException as exc:
-                exc.args += ('operation: %s' % op,)
+                exc.args += ('operation: %s' % op, )
                 raise
 
     def test_max_nan_bug(self):
@@ -5635,6 +5792,7 @@ def test_max_nan_bug(self):
 2013-04-23,2013-04-23 00:00:00,,log080001.log
 2013-05-06,2013-05-06 00:00:00,,log.log
 2013-05-07,2013-05-07 00:00:00,OE,xlsx"""
+
         df = pd.read_csv(StringIO(raw), parse_dates=[0])
         gb = df.groupby('Date')
         r = gb[['File']].max()
@@ -5647,17 +5805,16 @@ def test_nlargest(self):
         b = Series(list('a' * 5 + 'b' * 5))
         gb = a.groupby(b)
         r = gb.nlargest(3)
-        e = Series([7, 5, 3, 10, 9, 6],
-                   index=MultiIndex.from_arrays([list('aaabbb'),
-                                                 [3, 2, 1, 9, 5, 8]]))
+        e = Series([
+            7, 5, 3, 10, 9, 6
+        ], index=MultiIndex.from_arrays([list('aaabbb'), [3, 2, 1, 9, 5, 8]]))
         tm.assert_series_equal(r, e)
 
-
         a = Series([1, 1, 3, 2, 0, 3, 3, 2, 1, 0])
         gb = a.groupby(b)
-        e = Series([3, 2, 1, 3, 3, 2],
-                   index=MultiIndex.from_arrays([list('aaabbb'),
-                                                 [2, 3, 1, 6, 5, 7]]))
+        e = Series([
+            3, 2, 1, 3, 3, 2
+        ], index=MultiIndex.from_arrays([list('aaabbb'), [2, 3,
1, 6, 5, 7]])) assert_series_equal(gb.nlargest(3, keep='last'), e) with tm.assert_produces_warning(FutureWarning): assert_series_equal(gb.nlargest(3, take_last=True), e) @@ -5667,16 +5824,16 @@ def test_nsmallest(self): b = Series(list('a' * 5 + 'b' * 5)) gb = a.groupby(b) r = gb.nsmallest(3) - e = Series([1, 2, 3, 0, 4, 6], - index=MultiIndex.from_arrays([list('aaabbb'), - [0, 4, 1, 6, 7, 8]])) + e = Series([ + 1, 2, 3, 0, 4, 6 + ], index=MultiIndex.from_arrays([list('aaabbb'), [0, 4, 1, 6, 7, 8]])) tm.assert_series_equal(r, e) a = Series([1, 1, 3, 2, 0, 3, 3, 2, 1, 0]) gb = a.groupby(b) - e = Series([0, 1, 1, 0, 1, 2], - index=MultiIndex.from_arrays([list('aaabbb'), - [4, 1, 0, 9, 8, 7]])) + e = Series([ + 0, 1, 1, 0, 1, 2 + ], index=MultiIndex.from_arrays([list('aaabbb'), [4, 1, 0, 9, 8, 7]])) assert_series_equal(gb.nsmallest(3, keep='last'), e) with tm.assert_produces_warning(FutureWarning): assert_series_equal(gb.nsmallest(3, take_last=True), e) @@ -5698,23 +5855,27 @@ def test_transform_doesnt_clobber_ints(self): def test_groupby_categorical_two_columns(self): # https://github.com/pydata/pandas/issues/8138 - d = {'cat': pd.Categorical(["a","b","a","b"], categories=["a", "b", "c"], ordered=True), - 'ints': [1, 1, 2, 2],'val': [10, 20, 30, 40]} + d = {'cat': + pd.Categorical(["a", "b", "a", "b"], categories=["a", "b", "c"], + ordered=True), + 'ints': [1, 1, 2, 2], + 'val': [10, 20, 30, 40]} test = pd.DataFrame(d) # Grouping on a single column groups_single_key = test.groupby("cat") res = groups_single_key.agg('mean') - exp = DataFrame({"ints":[1.5,1.5,np.nan], "val":[20,30,np.nan]}, + exp = DataFrame({"ints": [1.5, 1.5, np.nan], "val": [20, 30, np.nan]}, index=pd.CategoricalIndex(["a", "b", "c"], name="cat")) tm.assert_frame_equal(res, exp) # Grouping on two columns - groups_double_key = test.groupby(["cat","ints"]) + groups_double_key = test.groupby(["cat", "ints"]) res = groups_double_key.agg('mean') - exp = DataFrame({"val":[10,30,20,40,np.nan,np.nan], - 
"cat": ["a","a","b","b","c","c"], - "ints": [1,2,1,2,1,2]}).set_index(["cat","ints"]) + exp = DataFrame({"val": [10, 30, 20, 40, np.nan, np.nan], + "cat": ["a", "a", "b", "b", "c", "c"], + "ints": [1, 2, 1, 2, 1, 2]}).set_index(["cat", "ints" + ]) tm.assert_frame_equal(res, exp) # GH 10132 @@ -5728,23 +5889,28 @@ def test_groupby_categorical_two_columns(self): test = pd.DataFrame(d) values = pd.cut(test['C1'], [1, 2, 3, 6]) values.name = "cat" - groups_double_key = test.groupby([values,'C2']) + groups_double_key = test.groupby([values, 'C2']) res = groups_double_key.agg('mean') nan = np.nan - idx = MultiIndex.from_product([["(1, 2]", "(2, 3]", "(3, 6]"],[1,2,3,4]], + idx = MultiIndex.from_product([["(1, 2]", "(2, 3]", "(3, 6]"], + [1, 2, 3, 4]], names=["cat", "C2"]) - exp = DataFrame({"C1":[nan,nan,nan,nan, 3, 3,nan,nan, nan,nan, 4, 5], - "C3":[nan,nan,nan,nan, 10,100,nan,nan, nan,nan,200,34]}, index=idx) + exp = DataFrame({"C1": [nan, nan, nan, nan, 3, 3, + nan, nan, nan, nan, 4, 5], + "C3": [nan, nan, nan, nan, 10, 100, + nan, nan, nan, nan, 200, 34]}, index=idx) tm.assert_frame_equal(res, exp) def test_groupby_apply_all_none(self): # Tests to make sure no errors if apply function returns all None # values. Issue 9684. - test_df = DataFrame({'groups': [0,0,1,1], 'random_vars': [8,7,4,5]}) + test_df = DataFrame({'groups': [0, 0, 1, 1], + 'random_vars': [8, 7, 4, 5]}) def test_func(x): pass + result = test_df.groupby('groups').apply(test_func) expected = DataFrame() tm.assert_frame_equal(result, expected) @@ -5754,35 +5920,38 @@ def test_first_last_max_min_on_time_data(self): # Verify that NaT is not in the result of max, min, first and last on # Dataframe with datetime or timedelta values. 
from datetime import timedelta as td - df_test=DataFrame({'dt':[nan,'2015-07-24 10:10','2015-07-25 11:11','2015-07-23 12:12',nan], - 'td':[nan,td(days=1),td(days=2),td(days=3),nan]}) - df_test.dt=pd.to_datetime(df_test.dt) - df_test['group']='A' - df_ref=df_test[df_test.dt.notnull()] - - grouped_test=df_test.groupby('group') - grouped_ref=df_ref.groupby('group') - - assert_frame_equal(grouped_ref.max(),grouped_test.max()) - assert_frame_equal(grouped_ref.min(),grouped_test.min()) - assert_frame_equal(grouped_ref.first(),grouped_test.first()) - assert_frame_equal(grouped_ref.last(),grouped_test.last()) + df_test = DataFrame( + {'dt': [nan, '2015-07-24 10:10', '2015-07-25 11:11', + '2015-07-23 12:12', nan], + 'td': [nan, td(days=1), td(days=2), td(days=3), nan]}) + df_test.dt = pd.to_datetime(df_test.dt) + df_test['group'] = 'A' + df_ref = df_test[df_test.dt.notnull()] + + grouped_test = df_test.groupby('group') + grouped_ref = df_ref.groupby('group') + + assert_frame_equal(grouped_ref.max(), grouped_test.max()) + assert_frame_equal(grouped_ref.min(), grouped_test.min()) + assert_frame_equal(grouped_ref.first(), grouped_test.first()) + assert_frame_equal(grouped_ref.last(), grouped_test.last()) def test_groupby_preserves_sort(self): # Test to ensure that groupby always preserves sort order of original # object. 
Issue #8588 and #9651 - df = DataFrame({'int_groups':[3,1,0,1,0,3,3,3], - 'string_groups':['z','a','z','a','a','g','g','g'], - 'ints':[8,7,4,5,2,9,1,1], - 'floats':[2.3,5.3,6.2,-2.4,2.2,1.1,1.1,5], - 'strings':['z','d','a','e','word','word2','42','47']}) + df = DataFrame( + {'int_groups': [3, 1, 0, 1, 0, 3, 3, 3], + 'string_groups': ['z', 'a', 'z', 'a', 'a', 'g', 'g', 'g'], + 'ints': [8, 7, 4, 5, 2, 9, 1, 1], + 'floats': [2.3, 5.3, 6.2, -2.4, 2.2, 1.1, 1.1, 5], + 'strings': ['z', 'd', 'a', 'e', 'word', 'word2', '42', '47']}) # Try sorting on different types and with different group types - for sort_column in ['ints', 'floats', 'strings', ['ints','floats'], - ['ints','strings']]: + for sort_column in ['ints', 'floats', 'strings', ['ints', 'floats'], + ['ints', 'strings']]: for group_column in ['int_groups', 'string_groups', - ['int_groups','string_groups']]: + ['int_groups', 'string_groups']]: df = df.sort_values(by=sort_column) @@ -5819,7 +5988,7 @@ def _check_groupby(df, result, keys, field, f=lambda x: x.sum()): tups = com._asarray_tuplesafe(tups) expected = f(df.groupby(tups)[field]) for k, v in compat.iteritems(expected): - assert(result[k] == v) + assert (result[k] == v) def test_decons(): @@ -5830,20 +5999,19 @@ def testit(label_list, shape): label_list2 = decons_group_index(group_index, shape) for a, b in zip(label_list, label_list2): - assert(np.array_equal(a, b)) + assert (np.array_equal(a, b)) shape = (4, 5, 6) - label_list = [np.tile([0, 1, 2, 3, 0, 1, 2, 3], 100), - np.tile([0, 2, 4, 3, 0, 1, 2, 3], 100), - np.tile([5, 1, 0, 2, 3, 0, 5, 4], 100)] + label_list = [np.tile([0, 1, 2, 3, 0, 1, 2, 3], 100), np.tile( + [0, 2, 4, 3, 0, 1, 2, 3], 100), np.tile( + [5, 1, 0, 2, 3, 0, 5, 4], 100)] testit(label_list, shape) shape = (10000, 10000) - label_list = [np.tile(np.arange(10000), 5), - np.tile(np.arange(10000), 5)] + label_list = [np.tile(np.arange(10000), 5), np.tile(np.arange(10000), 5)] testit(label_list, shape) if __name__ == '__main__': - 
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure', - '-s'], exit=False) + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure', '-s' + ], exit=False) diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py index 4dcc390787908..2c909d653df85 100644 --- a/pandas/tests/test_index.py +++ b/pandas/tests/test_index.py @@ -1,10 +1,13 @@ # -*- coding: utf-8 -*- # pylint: disable=E1101,E1103,W0232 +# TODO(wesm): fix long line flake8 issues +# flake8: noqa + from datetime import datetime, timedelta, time from pandas import compat -from pandas.compat import (long, is_platform_windows, range, - lrange, lzip, u, zip, PY3) +from pandas.compat import (long, is_platform_windows, range, lrange, lzip, u, + zip, PY3) from itertools import combinations import operator import re @@ -14,15 +17,14 @@ import numpy as np -from pandas import (period_range, date_range, Categorical, Series, - DataFrame, Index, Float64Index, Int64Index, RangeIndex, - MultiIndex, CategoricalIndex, DatetimeIndex, - TimedeltaIndex, PeriodIndex) +from pandas import (period_range, date_range, Categorical, Series, DataFrame, + Index, Float64Index, Int64Index, RangeIndex, MultiIndex, + CategoricalIndex, DatetimeIndex, TimedeltaIndex, + PeriodIndex) from pandas.core.index import InvalidIndexError from pandas.util.testing import (assert_almost_equal, assertRaisesRegexp, assert_copy) - import pandas.util.testing as tm import pandas.core.config as cf @@ -32,6 +34,9 @@ from pandas.lib import Timestamp from itertools import product +if PY3: + unicode = lambda x: x + class Base(object): """ base class for index sub-class tests """ @@ -43,7 +48,7 @@ def setup_indices(self): for name, ind in self.indices.items(): setattr(self, name, ind) - def verify_pickle(self,index): + def verify_pickle(self, index): unpickled = self.round_trip_pickle(index) self.assertTrue(index.equals(unpickled)) @@ -64,8 +69,8 @@ def test_shift(self): def test_create_index_existing_name(self): - # 
GH11193, when an existing index is passed, and a new name is not specified, the new index should inherit the - # previous object name + # GH11193, when an existing index is passed, and a new name is not + # specified, the new index should inherit the previous object name expected = self.create_index() if not isinstance(expected, MultiIndex): expected.name = 'foo' @@ -78,57 +83,54 @@ def test_create_index_existing_name(self): else: expected.names = ['foo', 'bar'] result = pd.Index(expected) - tm.assert_index_equal(result, Index(Index([('foo', 'one'), ('foo', 'two'), ('bar', 'one'), ('baz', 'two'), - ('qux', 'one'), ('qux', 'two')], dtype='object'), - names=['foo', 'bar'])) + tm.assert_index_equal( + result, Index(Index([('foo', 'one'), ('foo', 'two'), + ('bar', 'one'), ('baz', 'two'), + ('qux', 'one'), ('qux', 'two')], + dtype='object'), + names=['foo', 'bar'])) result = pd.Index(expected, names=['A', 'B']) - tm.assert_index_equal(result, Index(Index([('foo', 'one'), ('foo', 'two'), ('bar', 'one'), ('baz', 'two'), - ('qux', 'one'), ('qux', 'two')], dtype='object'), - names=['A', 'B'])) + tm.assert_index_equal( + result, + Index(Index([('foo', 'one'), ('foo', 'two'), ('bar', 'one'), + ('baz', 'two'), ('qux', 'one'), ('qux', 'two')], + dtype='object'), names=['A', 'B'])) def test_numeric_compat(self): idx = self.create_index() - tm.assertRaisesRegexp(TypeError, - "cannot perform __mul__", + tm.assertRaisesRegexp(TypeError, "cannot perform __mul__", lambda: idx * 1) - tm.assertRaisesRegexp(TypeError, - "cannot perform __mul__", + tm.assertRaisesRegexp(TypeError, "cannot perform __mul__", lambda: 1 * idx) div_err = "cannot perform __truediv__" if PY3 \ else "cannot perform __div__" - tm.assertRaisesRegexp(TypeError, - div_err, - lambda: idx / 1) - tm.assertRaisesRegexp(TypeError, - div_err, - lambda: 1 / idx) - tm.assertRaisesRegexp(TypeError, - "cannot perform __floordiv__", + tm.assertRaisesRegexp(TypeError, div_err, lambda: idx / 1) + tm.assertRaisesRegexp(TypeError, 
div_err, lambda: 1 / idx) + tm.assertRaisesRegexp(TypeError, "cannot perform __floordiv__", lambda: idx // 1) - tm.assertRaisesRegexp(TypeError, - "cannot perform __floordiv__", + tm.assertRaisesRegexp(TypeError, "cannot perform __floordiv__", lambda: 1 // idx) def test_logical_compat(self): idx = self.create_index() - tm.assertRaisesRegexp(TypeError, - 'cannot perform all', + tm.assertRaisesRegexp(TypeError, 'cannot perform all', lambda: idx.all()) - tm.assertRaisesRegexp(TypeError, - 'cannot perform any', + tm.assertRaisesRegexp(TypeError, 'cannot perform any', lambda: idx.any()) def test_boolean_context_compat(self): # boolean context compat idx = self.create_index() + def f(): if idx: pass - tm.assertRaisesRegexp(ValueError,'The truth value of a',f) + + tm.assertRaisesRegexp(ValueError, 'The truth value of a', f) def test_reindex_base(self): idx = self.create_index() @@ -157,7 +159,7 @@ def test_ndarray_compat_properties(self): def test_repr_roundtrip(self): idx = self.create_index() - tm.assert_index_equal(eval(repr(idx)),idx) + tm.assert_index_equal(eval(repr(idx)), idx) def test_str(self): @@ -209,7 +211,7 @@ def test_set_name_methods(self): self.assertIsNone(res) self.assertEqual(ind.name, new_name) self.assertEqual(ind.names, [new_name]) - #with assertRaisesRegexp(TypeError, "list-like"): + # with assertRaisesRegexp(TypeError, "list-like"): # # should still fail even if it would be the right length # ind.set_names("a") with assertRaisesRegexp(ValueError, "Level must be None"): @@ -223,8 +225,7 @@ def test_set_name_methods(self): def test_hash_error(self): for ind in self.indices.values(): - with tm.assertRaisesRegexp(TypeError, - "unhashable type: %r" % + with tm.assertRaisesRegexp(TypeError, "unhashable type: %r" % type(ind).__name__): hash(ind) @@ -252,7 +253,7 @@ def test_duplicates(self): continue if isinstance(ind, MultiIndex): continue - idx = self._holder([ind[0]]*5) + idx = self._holder([ind[0]] * 5) self.assertFalse(idx.is_unique) 
self.assertTrue(idx.has_duplicates) @@ -261,7 +262,7 @@ def test_duplicates(self): idx.name = 'foo' result = idx.drop_duplicates() self.assertEqual(result.name, 'foo') - self.assert_index_equal(result, Index([ind[0]],name='foo')) + self.assert_index_equal(result, Index([ind[0]], name='foo')) def test_sort(self): for ind in self.indices.values(): @@ -286,7 +287,7 @@ def test_view(self): def test_compat(self): for ind in self.indices.values(): - self.assertEqual(ind.tolist(),list(ind)) + self.assertEqual(ind.tolist(), list(ind)) def test_argsort(self): for k, ind in self.indices.items(): @@ -310,14 +311,15 @@ def test_take(self): for k, ind in self.indices.items(): # separate - if k in ['boolIndex','tuples','empty']: + if k in ['boolIndex', 'tuples', 'empty']: continue result = ind.take(indexer) expected = ind[indexer] self.assertTrue(result.equals(expected)) - if not isinstance(ind, (DatetimeIndex, PeriodIndex, TimedeltaIndex)): + if not isinstance(ind, + (DatetimeIndex, PeriodIndex, TimedeltaIndex)): # GH 10791 with tm.assertRaises(AttributeError): ind.freq @@ -326,7 +328,8 @@ def test_setops_errorcases(self): for name, idx in compat.iteritems(self.indices): # # non-iterable input cases = [0.5, 'xxx'] - methods = [idx.intersection, idx.union, idx.difference, idx.sym_diff] + methods = [idx.intersection, idx.union, idx.difference, + idx.sym_diff] for method in methods: for case in cases: @@ -346,7 +349,8 @@ def test_intersection_base(self): self.assertTrue(tm.equalContents(intersect, second)) # GH 10149 - cases = [klass(second.values) for klass in [np.array, Series, list]] + cases = [klass(second.values) + for klass in [np.array, Series, list]] for case in cases: if isinstance(idx, PeriodIndex): msg = "can only call with other PeriodIndex-ed objects" @@ -372,7 +376,8 @@ def test_union_base(self): self.assertTrue(tm.equalContents(union, everything)) # GH 10149 - cases = [klass(second.values) for klass in [np.array, Series, list]] + cases = [klass(second.values) + for 
klass in [np.array, Series, list]] for case in cases: if isinstance(idx, PeriodIndex): msg = "can only call with other PeriodIndex-ed objects" @@ -402,7 +407,8 @@ def test_difference_base(self): self.assertTrue(tm.equalContents(result, answer)) # GH 10149 - cases = [klass(second.values) for klass in [np.array, Series, list]] + cases = [klass(second.values) + for klass in [np.array, Series, list]] for case in cases: if isinstance(idx, PeriodIndex): msg = "can only call with other PeriodIndex-ed objects" @@ -434,7 +440,8 @@ def test_symmetric_diff(self): self.assertTrue(tm.equalContents(result, answer)) # GH 10149 - cases = [klass(second.values) for klass in [np.array, Series, list]] + cases = [klass(second.values) + for klass in [np.array, Series, list]] for case in cases: if isinstance(idx, PeriodIndex): msg = "can only call with other PeriodIndex-ed objects" @@ -459,9 +466,8 @@ def test_insert_base(self): if not len(idx): continue - #test 0th element - self.assertTrue(idx[0:4].equals( - result.insert(0, idx[0]))) + # test 0th element + self.assertTrue(idx[0:4].equals(result.insert(0, idx[0]))) def test_delete_base(self): @@ -557,10 +563,10 @@ def test_numpy_ufuncs(self): for name, idx in compat.iteritems(self.indices): for func in [np.exp, np.exp2, np.expm1, np.log, np.log2, np.log10, - np.log1p, np.sqrt, np.sin, np.cos, - np.tan, np.arcsin, np.arccos, np.arctan, - np.sinh, np.cosh, np.tanh, np.arcsinh, np.arccosh, - np.arctanh, np.deg2rad, np.rad2deg]: + np.log1p, np.sqrt, np.sin, np.cos, np.tan, np.arcsin, + np.arccos, np.arctan, np.sinh, np.cosh, np.tanh, + np.arcsinh, np.arccosh, np.arctanh, np.deg2rad, + np.rad2deg]: if isinstance(idx, pd.tseries.base.DatetimeIndexOpsMixin): # raise TypeError or ValueError (PeriodIndex) # PeriodIndex behavior should be changed in future version @@ -679,21 +685,19 @@ class TestIndex(Base, tm.TestCase): _multiprocess_can_split_ = True def setUp(self): - self.indices = dict( - unicodeIndex=tm.makeUnicodeIndex(100), - 
strIndex=tm.makeStringIndex(100), - dateIndex=tm.makeDateIndex(100), - periodIndex=tm.makePeriodIndex(100), - tdIndex=tm.makeTimedeltaIndex(100), - intIndex=tm.makeIntIndex(100), - rangeIndex=tm.makeIntIndex(100), - floatIndex=tm.makeFloatIndex(100), - boolIndex=Index([True, False]), - catIndex=tm.makeCategoricalIndex(100), - empty=Index([]), - tuples=MultiIndex.from_tuples(lzip(['foo', 'bar', 'baz'], - [1, 2, 3])) - ) + self.indices = dict(unicodeIndex=tm.makeUnicodeIndex(100), + strIndex=tm.makeStringIndex(100), + dateIndex=tm.makeDateIndex(100), + periodIndex=tm.makePeriodIndex(100), + tdIndex=tm.makeTimedeltaIndex(100), + intIndex=tm.makeIntIndex(100), + rangeIndex=tm.makeIntIndex(100), + floatIndex=tm.makeFloatIndex(100), + boolIndex=Index([True, False]), + catIndex=tm.makeCategoricalIndex(100), + empty=Index([]), + tuples=MultiIndex.from_tuples(lzip( + ['foo', 'bar', 'baz'], [1, 2, 3]))) self.setup_indices() def create_index(self): @@ -742,15 +746,19 @@ def test_construction_list_mixed_tuples(self): # 10697 # if we are constructing from a mixed list of tuples, make sure that we # are independent of the sorting order - idx1 = Index([('A',1),'B']) - self.assertIsInstance(idx1, Index) and self.assertNotInstance(idx1, MultiIndex) - idx2 = Index(['B',('A',1)]) - self.assertIsInstance(idx2, Index) and self.assertNotInstance(idx2, MultiIndex) + idx1 = Index([('A', 1), 'B']) + self.assertIsInstance(idx1, Index) and self.assertNotInstance( + idx1, MultiIndex) + idx2 = Index(['B', ('A', 1)]) + self.assertIsInstance(idx2, Index) and self.assertNotInstance( + idx2, MultiIndex) def test_constructor_from_series(self): - expected = DatetimeIndex([Timestamp('20110101'),Timestamp('20120101'),Timestamp('20130101')]) - s = Series([Timestamp('20110101'),Timestamp('20120101'),Timestamp('20130101')]) + expected = DatetimeIndex([Timestamp('20110101'), Timestamp('20120101'), + Timestamp('20130101')]) + s = Series([Timestamp('20110101'), Timestamp('20120101'), Timestamp( + 
'20130101')]) result = Index(s) self.assertTrue(result.equals(expected)) result = DatetimeIndex(s) @@ -758,37 +766,44 @@ def test_constructor_from_series(self): # GH 6273 # create from a series, passing a freq - s = Series(pd.to_datetime(['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990'])) + s = Series(pd.to_datetime(['1-1-1990', '2-1-1990', '3-1-1990', + '4-1-1990', '5-1-1990'])) result = DatetimeIndex(s, freq='MS') - expected = DatetimeIndex(['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990'],freq='MS') + expected = DatetimeIndex( + ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990' + ], freq='MS') self.assertTrue(result.equals(expected)) - df = pd.DataFrame(np.random.rand(5,3)) - df['date'] = ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990'] + df = pd.DataFrame(np.random.rand(5, 3)) + df['date'] = ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', + '5-1-1990'] result = DatetimeIndex(df['date'], freq='MS') self.assertTrue(result.equals(expected)) self.assertEqual(df['date'].dtype, object) - exp = pd.Series(['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990'], name='date') + exp = pd.Series( + ['1-1-1990', '2-1-1990', '3-1-1990', '4-1-1990', '5-1-1990' + ], name='date') self.assert_series_equal(df['date'], exp) # GH 6274 # infer freq of same result = pd.infer_freq(df['date']) - self.assertEqual(result,'MS') + self.assertEqual(result, 'MS') def test_constructor_ndarray_like(self): # GH 5460#issuecomment-44474502 # it should be possible to convert any object that satisfies the numpy # ndarray interface directly into an Index class ArrayLike(object): + def __init__(self, array): self.array = array + def __array__(self, dtype=None): return self.array - for array in [np.arange(5), - np.array(['a', 'b', 'c']), + for array in [np.arange(5), np.array(['a', 'b', 'c']), date_range('2000-01-01', periods=3).values]: expected = pd.Index(array) result = pd.Index(ArrayLike(array)) @@ -815,65 +830,82 @@ def 
test_constructor_simple_new(self): def test_constructor_dtypes(self): - for idx in [Index(np.array([1, 2, 3], dtype=int)), - Index(np.array([1, 2, 3], dtype=int), dtype=int), - Index(np.array([1., 2., 3.], dtype=float), dtype=int), - Index([1, 2, 3], dtype=int), - Index([1., 2., 3.], dtype=int)]: + for idx in [Index(np.array([1, 2, 3], dtype=int)), Index( + np.array( + [1, 2, 3], dtype=int), dtype=int), Index( + np.array( + [1., 2., 3.], dtype=float), dtype=int), Index( + [1, 2, 3], dtype=int), Index( + [1., 2., 3.], dtype=int)]: self.assertIsInstance(idx, Int64Index) - for idx in [Index(np.array([1., 2., 3.], dtype=float)), - Index(np.array([1, 2, 3], dtype=int), dtype=float), - Index(np.array([1., 2., 3.], dtype=float), dtype=float), - Index([1, 2, 3], dtype=float), - Index([1., 2., 3.], dtype=float)]: + for idx in [Index(np.array([1., 2., 3.], dtype=float)), Index( + np.array( + [1, 2, 3], dtype=int), dtype=float), Index( + np.array( + [1., 2., 3.], dtype=float), dtype=float), Index( + [1, 2, 3], dtype=float), Index( + [1., 2., 3.], dtype=float)]: self.assertIsInstance(idx, Float64Index) - for idx in [Index(np.array([True, False, True], dtype=bool)), - Index([True, False, True]), - Index(np.array([True, False, True], dtype=bool), dtype=bool), - Index([True, False, True], dtype=bool)]: + for idx in [Index(np.array( + [True, False, True], dtype=bool)), Index([True, False, True]), + Index( + np.array( + [True, False, True], dtype=bool), dtype=bool), + Index( + [True, False, True], dtype=bool)]: self.assertIsInstance(idx, Index) self.assertEqual(idx.dtype, object) - for idx in [Index(np.array([1, 2, 3], dtype=int), dtype='category'), - Index([1, 2, 3], dtype='category'), - Index(np.array([np.datetime64('2011-01-01'), np.datetime64('2011-01-02')]), dtype='category'), - Index([datetime(2011, 1, 1), datetime(2011, 1, 2)], dtype='category')]: + for idx in [Index( + np.array([1, 2, 3], dtype=int), dtype='category'), Index( + [1, 2, 3], dtype='category'), Index( + 
np.array([np.datetime64('2011-01-01'), np.datetime64( + '2011-01-02')]), dtype='category'), Index( + [datetime(2011, 1, 1), datetime(2011, 1, 2) + ], dtype='category')]: self.assertIsInstance(idx, CategoricalIndex) - for idx in [Index(np.array([np.datetime64('2011-01-01'), np.datetime64('2011-01-02')])), + for idx in [Index(np.array([np.datetime64('2011-01-01'), np.datetime64( + '2011-01-02')])), Index([datetime(2011, 1, 1), datetime(2011, 1, 2)])]: self.assertIsInstance(idx, DatetimeIndex) - for idx in [Index(np.array([np.datetime64('2011-01-01'), np.datetime64('2011-01-02')]), dtype=object), - Index([datetime(2011, 1, 1), datetime(2011, 1, 2)], dtype=object)]: + for idx in [Index( + np.array([np.datetime64('2011-01-01'), np.datetime64( + '2011-01-02')]), dtype=object), Index( + [datetime(2011, 1, 1), datetime(2011, 1, 2) + ], dtype=object)]: self.assertNotIsInstance(idx, DatetimeIndex) self.assertIsInstance(idx, Index) self.assertEqual(idx.dtype, object) - for idx in [Index(np.array([np.timedelta64(1, 'D'), np.timedelta64(1, 'D')])), - Index([timedelta(1), timedelta(1)])]: + for idx in [Index(np.array([np.timedelta64(1, 'D'), np.timedelta64( + 1, 'D')])), Index([timedelta(1), timedelta(1)])]: self.assertIsInstance(idx, TimedeltaIndex) - for idx in [Index(np.array([np.timedelta64(1, 'D'), np.timedelta64(1, 'D')]), dtype=object), - Index([timedelta(1), timedelta(1)], dtype=object)]: + for idx in [Index( + np.array([np.timedelta64(1, 'D'), np.timedelta64(1, 'D')]), + dtype=object), Index( + [timedelta(1), timedelta(1)], dtype=object)]: self.assertNotIsInstance(idx, TimedeltaIndex) self.assertIsInstance(idx, Index) self.assertEqual(idx.dtype, object) def test_view_with_args(self): - restricted = ['unicodeIndex','strIndex','catIndex','boolIndex','empty'] + restricted = ['unicodeIndex', 'strIndex', 'catIndex', 'boolIndex', + 'empty'] for i in restricted: ind = self.indices[i] # with arguments - self.assertRaises(TypeError, lambda : ind.view('i8')) + 
self.assertRaises(TypeError, lambda: ind.view('i8')) # these are ok - for i in list(set(self.indices.keys())-set(restricted)): + for i in list(set(self.indices.keys()) - set(restricted)): ind = self.indices[i] # with arguments @@ -883,8 +915,8 @@ def test_legacy_pickle_identity(self): # GH 8431 pth = tm.get_data_path() - s1 = pd.read_pickle(os.path.join(pth,'s1-0.12.0.pickle')) - s2 = pd.read_pickle(os.path.join(pth,'s2-0.12.0.pickle')) + s1 = pd.read_pickle(os.path.join(pth, 's1-0.12.0.pickle')) + s2 = pd.read_pickle(os.path.join(pth, 's2-0.12.0.pickle')) self.assertFalse(s1.index.identical(s2.index)) self.assertFalse(s1.index.equals(s2.index)) @@ -918,22 +950,20 @@ def test_insert(self): # validate neg/pos inserts result = Index(['b', 'c', 'd']) - #test 0th element - self.assertTrue(Index(['a', 'b', 'c', 'd']).equals( - result.insert(0, 'a'))) + # test 0th element + self.assertTrue(Index(['a', 'b', 'c', 'd']).equals(result.insert(0, + 'a'))) - #test Nth element that follows Python list behavior - self.assertTrue(Index(['b', 'c', 'e', 'd']).equals( - result.insert(-1, 'e'))) + # test Nth element that follows Python list behavior + self.assertTrue(Index(['b', 'c', 'e', 'd']).equals(result.insert(-1, + 'e'))) - #test loc +/- neq (0, -1) - self.assertTrue(result.insert(1, 'z').equals( - result.insert(-2, 'z'))) + # test loc +/- neq (0, -1) + self.assertTrue(result.insert(1, 'z').equals(result.insert(-2, 'z'))) - #test empty + # test empty null_index = Index([]) - self.assertTrue(Index(['a']).equals( - null_index.insert(0, 'a'))) + self.assertTrue(Index(['a']).equals(null_index.insert(0, 'a'))) def test_delete(self): idx = Index(['a', 'b', 'c', 'd'], name='idx') @@ -1021,9 +1051,13 @@ def test_nanosecond_index_access(self): first_value = x.asof(x.index[0]) # this does not yet work, as parsing strings is done via dateutil - #self.assertEqual(first_value, x['2013-01-01 00:00:00.000000050+0000']) + # self.assertEqual(first_value, + # x['2013-01-01 
00:00:00.000000050+0000']) - self.assertEqual(first_value, x[Timestamp(np.datetime64('2013-01-01 00:00:00.000000050+0000', 'ns'))]) + self.assertEqual( + first_value, + x[Timestamp(np.datetime64('2013-01-01 00:00:00.000000050+0000', + 'ns'))]) def test_comparators(self): index = self.dateIndex @@ -1127,8 +1161,8 @@ def test_intersection(self): self.assertEqual(result3.name, expected3.name) # non-monotonic non-unique - idx1 = Index(['A','B','A','C']) - idx2 = Index(['B','D']) + idx1 = Index(['A', 'B', 'A', 'C']) + idx2 = Index(['B', 'D']) expected = Index(['B'], dtype='object') result = idx1.intersection(idx2) self.assertTrue(result.equals(expected)) @@ -1345,23 +1379,22 @@ def test_format(self): index = Index([datetime.now()]) - - # windows has different precision on datetime.datetime.now (it doesn't include us - # since the default for Timestamp shows these but Index formating does not - # we are skipping + # windows has different precision on datetime.datetime.now (it doesn't + # include us since the default for Timestamp shows these but Index + # formating does not we are skipping if not is_platform_windows(): formatted = index.format() expected = [str(index[0])] self.assertEqual(formatted, expected) # 2845 - index = Index([1, 2.0+3.0j, np.nan]) + index = Index([1, 2.0 + 3.0j, np.nan]) formatted = index.format() expected = [str(index[0]), str(index[1]), u('NaN')] self.assertEqual(formatted, expected) # is this really allowed? 
-        index = Index([1, 2.0+3.0j, None])
+        index = Index([1, 2.0 + 3.0j, None])
         formatted = index.format()
         expected = [str(index[0]), str(index[1]), u('NaN')]
         self.assertEqual(formatted, expected)
@@ -1453,15 +1486,19 @@ def test_get_indexer_nearest(self):
             actual = idx.get_indexer([0, 5, 9], method=method, tolerance=0)
             tm.assert_numpy_array_equal(actual, [0, 5, 9])

-        for method, expected in zip(all_methods, [[0, 1, 8], [1, 2, 9], [0, 2, 9]]):
+        for method, expected in zip(all_methods, [[0, 1, 8], [1, 2, 9], [0, 2,
+                                                                         9]]):
             actual = idx.get_indexer([0.2, 1.8, 8.5], method=method)
             tm.assert_numpy_array_equal(actual, expected)

-            actual = idx.get_indexer([0.2, 1.8, 8.5], method=method, tolerance=1)
+            actual = idx.get_indexer([0.2, 1.8, 8.5], method=method,
+                                     tolerance=1)
             tm.assert_numpy_array_equal(actual, expected)

-        for method, expected in zip(all_methods, [[0, -1, -1], [-1, 2, -1], [0, 2, -1]]):
-            actual = idx.get_indexer([0.2, 1.8, 8.5], method=method, tolerance=0.2)
+        for method, expected in zip(all_methods, [[0, -1, -1], [-1, 2, -1],
+                                                  [0, 2, -1]]):
+            actual = idx.get_indexer([0.2, 1.8, 8.5], method=method,
+                                     tolerance=0.2)
             tm.assert_numpy_array_equal(actual, expected)

         with tm.assertRaisesRegexp(ValueError, 'limit argument'):
@@ -1475,7 +1512,8 @@ def test_get_indexer_nearest_decreasing(self):
             actual = idx.get_indexer([0, 5, 9], method=method)
             tm.assert_numpy_array_equal(actual, [9, 4, 0])

-        for method, expected in zip(all_methods, [[8, 7, 0], [9, 8, 1], [9, 7, 0]]):
+        for method, expected in zip(all_methods, [[8, 7, 0], [9, 8, 1], [9, 7,
+                                                                         0]]):
             actual = idx.get_indexer([0.2, 1.8, 8.5], method=method)
             tm.assert_numpy_array_equal(actual, expected)
@@ -1666,8 +1704,9 @@ def test_tuple_union_bug(self):
         aidx1 = np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')],
                          dtype=[('num', int), ('let', 'a1')])
-        aidx2 = np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B'), (1, 'C'), (2,
-                          'C')], dtype=[('num', int), ('let', 'a1')])
+        aidx2 = np.array([(1, 'A'), (2, 'A'), (1, 'B'),
+                          (2, 'B'), (1, 'C'), (2, 'C')],
+                         dtype=[('num', int), ('let', 'a1')])
         idx1 = pandas.Index(aidx1)
         idx2 = pandas.Index(aidx2)
@@ -1694,8 +1733,7 @@ def test_get_set_value(self):
         values = np.random.randn(100)
         date = self.dateIndex[67]

-        assert_almost_equal(self.dateIndex.get_value(values, date),
-                            values[67])
+        assert_almost_equal(self.dateIndex.get_value(values, date), values[67])

         self.dateIndex.set_value(values, date, 10)
         self.assertEqual(values[67], 10)
@@ -1748,7 +1786,7 @@ def check_idx(idx):
             idx.name = 'foobar'
             tm.assert_numpy_array_equal(expected,
-                                    idx.isin(values, level='foobar'))
+                                        idx.isin(values, level='foobar'))
             self.assertRaises(KeyError, idx.isin, values, level='xyzzy')
             self.assertRaises(KeyError, idx.isin, values, level=np.nan)
@@ -1763,7 +1801,8 @@ def test_boolean_cmp(self):
         idx = Index(values)

         res = (idx == values)
-        tm.assert_numpy_array_equal(res,np.array([True,True,True,True],dtype=bool))
+        tm.assert_numpy_array_equal(res, np.array(
+            [True, True, True, True], dtype=bool))

     def test_get_level_values(self):
         result = self.strIndex.get_level_values(0)
@@ -1790,15 +1829,16 @@ def test_str_attribute(self):
         idx = Index([' jack', 'jill ', ' jesse ', 'frank'])
         for method in methods:
             expected = Index([getattr(str, method)(x) for x in idx.values])
-            tm.assert_index_equal(getattr(Index.str, method)(idx.str), expected)
+            tm.assert_index_equal(
+                getattr(Index.str, method)(idx.str), expected)

         # create a few instances that are not able to use .str accessor
-        indices = [Index(range(5)),
-                   tm.makeDateIndex(10),
+        indices = [Index(range(5)), tm.makeDateIndex(10),
                    MultiIndex.from_tuples([('foo', '1'), ('bar', '3')]),
                    PeriodIndex(start='2000', end='2010', freq='A')]
         for idx in indices:
-            with self.assertRaisesRegexp(AttributeError, 'only use .str accessor'):
+            with self.assertRaisesRegexp(AttributeError,
+                                         'only use .str accessor'):
                 idx.str.repeat(2)

         idx = Index(['a b c', 'd e', 'f'])
@@ -1806,8 +1846,7 @@
tm.assert_index_equal(idx.str.split(), expected) tm.assert_index_equal(idx.str.split(expand=False), expected) - expected = MultiIndex.from_tuples([('a', 'b', 'c'), - ('d', 'e', np.nan), + expected = MultiIndex.from_tuples([('a', 'b', 'c'), ('d', 'e', np.nan), ('f', np.nan, np.nan)]) tm.assert_index_equal(idx.str.split(expand=True), expected) @@ -1831,10 +1870,9 @@ def test_tab_completion(self): def test_indexing_doesnt_change_class(self): idx = Index([1, 2, 3, 'a', 'b', 'c']) - self.assertTrue(idx[1:3].identical( - pd.Index([2, 3], dtype=np.object_))) - self.assertTrue(idx[[0,1]].identical( - pd.Index([1, 2], dtype=np.object_))) + self.assertTrue(idx[1:3].identical(pd.Index([2, 3], dtype=np.object_))) + self.assertTrue(idx[[0, 1]].identical(pd.Index( + [1, 2], dtype=np.object_))) def test_outer_join_sort(self): left_idx = Index(np.random.permutation(15)) @@ -1888,6 +1926,7 @@ def test_reindex_preserves_name_if_target_is_list_or_ndarray(self): def test_reindex_preserves_type_if_target_is_empty_list_or_array(self): # GH7774 idx = pd.Index(list('abc')) + def get_reindex_type(target): return idx.reindex(target)[0].dtype.type @@ -1899,6 +1938,7 @@ def get_reindex_type(target): def test_reindex_doesnt_preserve_type_if_target_is_empty_index(self): # GH7774 idx = pd.Index(list('abc')) + def get_reindex_type(target): return idx.reindex(target)[0].dtype.type @@ -1906,15 +1946,14 @@ def get_reindex_type(target): self.assertEqual(get_reindex_type(pd.Float64Index([])), np.float64) self.assertEqual(get_reindex_type(pd.DatetimeIndex([])), np.datetime64) - reindexed = idx.reindex(pd.MultiIndex([pd.Int64Index([]), - pd.Float64Index([])], - [[], []]))[0] + reindexed = idx.reindex(pd.MultiIndex( + [pd.Int64Index([]), pd.Float64Index([])], [[], []]))[0] self.assertEqual(reindexed.levels[0].dtype.type, np.int64) self.assertEqual(reindexed.levels[1].dtype.type, np.float64) def test_groupby(self): idx = Index(range(5)) - groups = idx.groupby(np.array([1,1,2,2,2])) + groups = 
idx.groupby(np.array([1, 1, 2, 2, 2])) exp = {1: [0, 1], 2: [2, 3, 4]} tm.assert_dict_equal(groups, exp) @@ -1923,7 +1962,8 @@ def test_equals_op_multiindex(self): # test comparisons of multiindex from pandas.compat import StringIO df = pd.read_csv(StringIO('a,b,c\n1,2,3\n4,5,6'), index_col=[0, 1]) - tm.assert_numpy_array_equal(df.index == df.index, np.array([True, True])) + tm.assert_numpy_array_equal(df.index == df.index, + np.array([True, True])) mi1 = MultiIndex.from_tuples([(1, 2), (4, 5)]) tm.assert_numpy_array_equal(df.index == mi1, np.array([True, True])) @@ -1936,10 +1976,11 @@ def test_equals_op_multiindex(self): index_a = Index(['foo', 'bar', 'baz']) with tm.assertRaisesRegexp(ValueError, "Lengths must match"): df.index == index_a - tm.assert_numpy_array_equal(index_a == mi3, np.array([False, False, False])) + tm.assert_numpy_array_equal(index_a == mi3, + np.array([False, False, False])) def test_conversion_preserves_name(self): - #GH 10875 + # GH 10875 i = pd.Index(['01:02:03', '01:02:04'], name='label') self.assertEqual(i.name, pd.to_datetime(i).name) self.assertEqual(i.name, pd.to_timedelta(i).name) @@ -1960,31 +2001,39 @@ def test_string_index_repr(self): # multiple lines idx = pd.Index(['a', 'bb', 'ccc'] * 10) if PY3: - expected = u"""Index(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', + expected = u"""\ +Index(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], dtype='object')""" + self.assertEqual(repr(idx), expected) else: - expected = u"""Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', + expected = u"""\ +Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'], dtype='object')""" + 
self.assertEqual(unicode(idx), expected) # truncated idx = pd.Index(['a', 'bb', 'ccc'] * 100) if PY3: - expected = u"""Index(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', + expected = u"""\ +Index(['a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', ... 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], dtype='object', length=300)""" + self.assertEqual(repr(idx), expected) else: - expected = u"""Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', + expected = u"""\ +Index([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', ... u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'], dtype='object', length=300)""" + self.assertEqual(unicode(idx), expected) # short @@ -1993,7 +2042,8 @@ def test_string_index_repr(self): expected = u"""Index(['あ', 'いい', 'ううう'], dtype='object')""" self.assertEqual(repr(idx), expected) else: - expected = u"""Index([u'あ', u'いい', u'ううう'], dtype='object')""" + expected = u"""\ +Index([u'あ', u'いい', u'ううう'], dtype='object')""" self.assertEqual(unicode(idx), expected) # multiple lines @@ -2003,12 +2053,14 @@ def test_string_index_repr(self): 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], dtype='object')""" + self.assertEqual(repr(idx), expected) else: expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], dtype='object')""" + self.assertEqual(unicode(idx), expected) # truncated @@ -2018,12 +2070,14 @@ def test_string_index_repr(self): ... 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], dtype='object', length=300)""" + self.assertEqual(repr(idx), expected) else: expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', ... 
u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], dtype='object', length=300)""" + self.assertEqual(unicode(idx), expected) # Emable Unicode option ----------------------------------------- @@ -2046,6 +2100,7 @@ def test_string_index_repr(self): 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], dtype='object')""" + self.assertEqual(repr(idx), expected) else: expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', @@ -2053,6 +2108,7 @@ def test_string_index_repr(self): u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], dtype='object')""" + self.assertEqual(unicode(idx), expected) # truncated @@ -2064,6 +2120,7 @@ def test_string_index_repr(self): 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], dtype='object', length=300)""" + self.assertEqual(repr(idx), expected) else: expected = u"""Index([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', @@ -2072,6 +2129,7 @@ def test_string_index_repr(self): u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], dtype='object', length=300)""" + self.assertEqual(unicode(idx), expected) @@ -2079,13 +2137,14 @@ class TestCategoricalIndex(Base, tm.TestCase): _holder = CategoricalIndex def setUp(self): - self.indices = dict(catIndex = tm.makeCategoricalIndex(100)) + self.indices = dict(catIndex=tm.makeCategoricalIndex(100)) self.setup_indices() def create_index(self, categories=None, ordered=False): if categories is None: categories = list('cab') - return CategoricalIndex(list('aabbca'), categories=categories, ordered=ordered) + return CategoricalIndex( + list('aabbca'), categories=categories, ordered=ordered) def test_construction(self): @@ -2093,49 +2152,55 @@ def test_construction(self): categories = ci.categories result = Index(ci) - tm.assert_index_equal(result,ci,exact=True) + tm.assert_index_equal(result, ci, exact=True) self.assertFalse(result.ordered) result = Index(ci.values) 
-        tm.assert_index_equal(result,ci,exact=True)
+        tm.assert_index_equal(result, ci, exact=True)
         self.assertFalse(result.ordered)

         # empty
         result = CategoricalIndex(categories=categories)
         self.assertTrue(result.categories.equals(Index(categories)))
-        tm.assert_numpy_array_equal(result.codes, np.array([],dtype='int8'))
+        tm.assert_numpy_array_equal(result.codes, np.array([], dtype='int8'))
         self.assertFalse(result.ordered)

         # passing categories
-        result = CategoricalIndex(list('aabbca'),categories=categories)
+        result = CategoricalIndex(list('aabbca'), categories=categories)
         self.assertTrue(result.categories.equals(Index(categories)))
-        tm.assert_numpy_array_equal(result.codes,np.array([0,0,1,1,2,0],dtype='int8'))
+        tm.assert_numpy_array_equal(result.codes, np.array(
+            [0, 0, 1, 1, 2, 0], dtype='int8'))

         c = pd.Categorical(list('aabbca'))
         result = CategoricalIndex(c)
         self.assertTrue(result.categories.equals(Index(list('abc'))))
-        tm.assert_numpy_array_equal(result.codes,np.array([0,0,1,1,2,0],dtype='int8'))
+        tm.assert_numpy_array_equal(result.codes, np.array(
+            [0, 0, 1, 1, 2, 0], dtype='int8'))
         self.assertFalse(result.ordered)

-        result = CategoricalIndex(c,categories=categories)
+        result = CategoricalIndex(c, categories=categories)
         self.assertTrue(result.categories.equals(Index(categories)))
-        tm.assert_numpy_array_equal(result.codes,np.array([0,0,1,1,2,0],dtype='int8'))
+        tm.assert_numpy_array_equal(result.codes, np.array(
+            [0, 0, 1, 1, 2, 0], dtype='int8'))
         self.assertFalse(result.ordered)

-        ci = CategoricalIndex(c,categories=list('abcd'))
+        ci = CategoricalIndex(c, categories=list('abcd'))
         result = CategoricalIndex(ci)
         self.assertTrue(result.categories.equals(Index(categories)))
-        tm.assert_numpy_array_equal(result.codes,np.array([0,0,1,1,2,0],dtype='int8'))
+        tm.assert_numpy_array_equal(result.codes, np.array(
+            [0, 0, 1, 1, 2, 0], dtype='int8'))
         self.assertFalse(result.ordered)

         result = CategoricalIndex(ci, categories=list('ab'))
         self.assertTrue(result.categories.equals(Index(list('ab'))))
-        tm.assert_numpy_array_equal(result.codes,np.array([0,0,1,1,-1,0],dtype='int8'))
+        tm.assert_numpy_array_equal(result.codes, np.array(
+            [0, 0, 1, 1, -1, 0], dtype='int8'))
         self.assertFalse(result.ordered)

         result = CategoricalIndex(ci, categories=list('ab'), ordered=True)
         self.assertTrue(result.categories.equals(Index(list('ab'))))
-        tm.assert_numpy_array_equal(result.codes,np.array([0,0,1,1,-1,0],dtype='int8'))
+        tm.assert_numpy_array_equal(result.codes, np.array(
+            [0, 0, 1, 1, -1, 0], dtype='int8'))
         self.assertTrue(result.ordered)

         # turn me to an Index
@@ -2149,19 +2214,21 @@ def test_construction_with_dtype(self):
         ci = self.create_index(categories=list('abc'))
         result = Index(np.array(ci), dtype='category')
-        tm.assert_index_equal(result,ci,exact=True)
+        tm.assert_index_equal(result, ci, exact=True)

         result = Index(np.array(ci).tolist(), dtype='category')
-        tm.assert_index_equal(result,ci,exact=True)
+        tm.assert_index_equal(result, ci, exact=True)

         # these are generally only equal when the categories are reordered
         ci = self.create_index()
-        result = Index(np.array(ci), dtype='category').reorder_categories(ci.categories)
-        tm.assert_index_equal(result,ci,exact=True)
+        result = Index(
+            np.array(ci), dtype='category').reorder_categories(ci.categories)
+        tm.assert_index_equal(result, ci, exact=True)

         # make sure indexes are handled
-        expected = CategoricalIndex([0,1,2], categories=[0,1,2], ordered=True)
+        expected = CategoricalIndex([0, 1, 2], categories=[0, 1, 2],
+                                    ordered=True)
         idx = Index(range(3))
         result = CategoricalIndex(idx, categories=idx, ordered=True)
         tm.assert_index_equal(result, expected, exact=True)
@@ -2172,30 +2239,34 @@ def test_disallow_set_ops(self):
         # set ops (+/-) raise TypeError
         idx = pd.Index(pd.Categorical(['a', 'b']))

-        self.assertRaises(TypeError, lambda : idx - idx)
-        self.assertRaises(TypeError, lambda : idx + idx)
-        self.assertRaises(TypeError, lambda : idx - ['a','b'])
-        self.assertRaises(TypeError, lambda : idx + ['a','b'])
-        self.assertRaises(TypeError, lambda : ['a','b'] - idx)
-        self.assertRaises(TypeError, lambda : ['a','b'] + idx)
+        self.assertRaises(TypeError, lambda: idx - idx)
+        self.assertRaises(TypeError, lambda: idx + idx)
+        self.assertRaises(TypeError, lambda: idx - ['a', 'b'])
+        self.assertRaises(TypeError, lambda: idx + ['a', 'b'])
+        self.assertRaises(TypeError, lambda: ['a', 'b'] - idx)
+        self.assertRaises(TypeError, lambda: ['a', 'b'] + idx)

     def test_method_delegation(self):

         ci = CategoricalIndex(list('aabbca'), categories=list('cabdef'))
         result = ci.set_categories(list('cab'))
-        tm.assert_index_equal(result, CategoricalIndex(list('aabbca'), categories=list('cab')))
+        tm.assert_index_equal(result, CategoricalIndex(
+            list('aabbca'), categories=list('cab')))

         ci = CategoricalIndex(list('aabbca'), categories=list('cab'))
         result = ci.rename_categories(list('efg'))
-        tm.assert_index_equal(result, CategoricalIndex(list('ffggef'), categories=list('efg')))
+        tm.assert_index_equal(result, CategoricalIndex(
+            list('ffggef'), categories=list('efg')))

         ci = CategoricalIndex(list('aabbca'), categories=list('cab'))
         result = ci.add_categories(['d'])
-        tm.assert_index_equal(result, CategoricalIndex(list('aabbca'), categories=list('cabd')))
+        tm.assert_index_equal(result, CategoricalIndex(
+            list('aabbca'), categories=list('cabd')))

         ci = CategoricalIndex(list('aabbca'), categories=list('cab'))
         result = ci.remove_categories(['c'])
-        tm.assert_index_equal(result, CategoricalIndex(list('aabb') + [np.nan] + ['a'], categories=list('ab')))
+        tm.assert_index_equal(result, CategoricalIndex(
+            list('aabb') + [np.nan] + ['a'], categories=list('ab')))

         ci = CategoricalIndex(list('aabbca'), categories=list('cabdef'))
         result = ci.as_unordered()
@@ -2203,10 +2274,12 @@ def test_method_delegation(self):
         ci = CategoricalIndex(list('aabbca'), categories=list('cabdef'))
         result = ci.as_ordered()
-        tm.assert_index_equal(result, CategoricalIndex(list('aabbca'), categories=list('cabdef'), ordered=True))
+        tm.assert_index_equal(result, CategoricalIndex(
+            list('aabbca'), categories=list('cabdef'), ordered=True))

         # invalid
-        self.assertRaises(ValueError, lambda : ci.set_categories(list('cab'), inplace=True))
+        self.assertRaises(ValueError, lambda: ci.set_categories(
+            list('cab'), inplace=True))

     def test_contains(self):
@@ -2222,22 +2295,24 @@ def test_contains(self):
         self.assertFalse(1 in ci)

         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
-            ci = CategoricalIndex(list('aabbca'), categories=list('cabdef') + [np.nan])
+            ci = CategoricalIndex(
+                list('aabbca'), categories=list('cabdef') + [np.nan])
             self.assertFalse(np.nan in ci)

-        ci = CategoricalIndex(list('aabbca') + [np.nan], categories=list('cabdef'))
+        ci = CategoricalIndex(
+            list('aabbca') + [np.nan], categories=list('cabdef'))
         self.assertTrue(np.nan in ci)

     def test_min_max(self):

         ci = self.create_index(ordered=False)
-        self.assertRaises(TypeError, lambda : ci.min())
-        self.assertRaises(TypeError, lambda : ci.max())
+        self.assertRaises(TypeError, lambda: ci.min())
+        self.assertRaises(TypeError, lambda: ci.max())

         ci = self.create_index(ordered=True)
-        self.assertEqual(ci.min(),'c')
-        self.assertEqual(ci.max(),'b')
+        self.assertEqual(ci.min(), 'c')
+        self.assertEqual(ci.max(), 'b')

     def test_append(self):
@@ -2246,50 +2321,54 @@ def test_append(self):
         # append cats with the same categories
         result = ci[:3].append(ci[3:])
-        tm.assert_index_equal(result,ci,exact=True)
+        tm.assert_index_equal(result, ci, exact=True)

         foos = [ci[:1], ci[1:3], ci[3:]]
         result = foos[0].append(foos[1:])
-        tm.assert_index_equal(result,ci,exact=True)
+        tm.assert_index_equal(result, ci, exact=True)

         # empty
         result = ci.append([])
-        tm.assert_index_equal(result,ci,exact=True)
+        tm.assert_index_equal(result, ci, exact=True)

         # appending with different categories or reoreded is not ok
-        self.assertRaises(TypeError, lambda : ci.append(ci.values.set_categories(list('abcd'))))
-        self.assertRaises(TypeError, lambda : ci.append(ci.values.reorder_categories(list('abc'))))
+        self.assertRaises(
+            TypeError,
+            lambda: ci.append(ci.values.set_categories(list('abcd'))))
+        self.assertRaises(
+            TypeError,
+            lambda: ci.append(ci.values.reorder_categories(list('abc'))))

         # with objects
-        result = ci.append(['c','a'])
+        result = ci.append(['c', 'a'])
         expected = CategoricalIndex(list('aabbcaca'), categories=categories)
-        tm.assert_index_equal(result,expected,exact=True)
+        tm.assert_index_equal(result, expected, exact=True)

         # invalid objects
-        self.assertRaises(TypeError, lambda : ci.append(['a','d']))
+        self.assertRaises(TypeError, lambda: ci.append(['a', 'd']))

     def test_insert(self):

         ci = self.create_index()
         categories = ci.categories

-        #test 0th element
+        # test 0th element
         result = ci.insert(0, 'a')
-        expected = CategoricalIndex(list('aaabbca'),categories=categories)
-        tm.assert_index_equal(result,expected,exact=True)
+        expected = CategoricalIndex(list('aaabbca'), categories=categories)
+        tm.assert_index_equal(result, expected, exact=True)

-        #test Nth element that follows Python list behavior
+        # test Nth element that follows Python list behavior
         result = ci.insert(-1, 'a')
-        expected = CategoricalIndex(list('aabbcaa'),categories=categories)
-        tm.assert_index_equal(result,expected,exact=True)
+        expected = CategoricalIndex(list('aabbcaa'), categories=categories)
+        tm.assert_index_equal(result, expected, exact=True)

-        #test empty
+        # test empty
         result = CategoricalIndex(categories=categories).insert(0, 'a')
-        expected = CategoricalIndex(['a'],categories=categories)
-        tm.assert_index_equal(result,expected,exact=True)
+        expected = CategoricalIndex(['a'], categories=categories)
+        tm.assert_index_equal(result, expected, exact=True)

         # invalid
-        self.assertRaises(TypeError, lambda : ci.insert(0,'d'))
+        self.assertRaises(TypeError, lambda: ci.insert(0, 'd'))

     def test_delete(self):
@@ -2297,12 +2376,12 @@ def test_delete(self):
         categories = ci.categories

         result = ci.delete(0)
-        expected = CategoricalIndex(list('abbca'),categories=categories)
-        tm.assert_index_equal(result,expected,exact=True)
+        expected = CategoricalIndex(list('abbca'), categories=categories)
+        tm.assert_index_equal(result, expected, exact=True)

         result = ci.delete(-1)
-        expected = CategoricalIndex(list('aabbc'),categories=categories)
-        tm.assert_index_equal(result,expected,exact=True)
+        expected = CategoricalIndex(list('aabbc'), categories=categories)
+        tm.assert_index_equal(result, expected, exact=True)

         with tm.assertRaises((IndexError, ValueError)):
             # either depeidnig on numpy version
@@ -2312,7 +2391,7 @@ def test_astype(self):

         ci = self.create_index()
         result = ci.astype('category')
-        tm.assert_index_equal(result,ci,exact=True)
+        tm.assert_index_equal(result, ci, exact=True)

         result = ci.astype(object)
         self.assertTrue(result.equals(Index(np.array(ci))))
@@ -2326,7 +2405,7 @@ def test_reindex_base(self):

         # determined by cat ordering
         idx = self.create_index()
-        expected = np.array([4,0,1,5,2,3])
+        expected = np.array([4, 0, 1, 5, 2, 3])
         actual = idx.get_indexer(idx)
         tm.assert_numpy_array_equal(expected, actual)
@@ -2339,28 +2418,37 @@ def test_reindexing(self):
         ci = self.create_index()
         oidx = Index(np.array(ci))

-        for n in [1,2,5,len(ci)]:
-            finder = oidx[np.random.randint(0,len(ci),size=n)]
+        for n in [1, 2, 5, len(ci)]:
+            finder = oidx[np.random.randint(0, len(ci), size=n)]
             expected = oidx.get_indexer_non_unique(finder)[0]

             actual = ci.get_indexer(finder)
             tm.assert_numpy_array_equal(expected, actual)

     def test_reindex_dtype(self):
-        res, indexer = CategoricalIndex(['a', 'b', 'c', 'a']).reindex(['a', 'c'])
+        res, indexer = CategoricalIndex(['a', 'b', 'c', 'a']).reindex(['a', 'c'
+                                                                      ])
         tm.assert_index_equal(res, Index(['a', 'a', 'c']), exact=True)
         tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2]))

-        res, indexer = CategoricalIndex(['a', 'b', 'c', 'a']).reindex(Categorical(['a', 'c']))
-        tm.assert_index_equal(res, CategoricalIndex(['a', 'a', 'c'], categories=['a', 'c']), exact=True)
+        res, indexer = CategoricalIndex(['a', 'b', 'c', 'a']).reindex(
+            Categorical(['a', 'c']))
+        tm.assert_index_equal(res, CategoricalIndex(
+            ['a', 'a', 'c'], categories=['a', 'c']), exact=True)
         tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2]))

-        res, indexer = CategoricalIndex(['a', 'b', 'c', 'a'], categories=['a', 'b', 'c', 'd']).reindex(['a', 'c'])
-        tm.assert_index_equal(res, Index(['a', 'a', 'c'], dtype='object'), exact=True)
+        res, indexer = CategoricalIndex(
+            ['a', 'b', 'c', 'a'
+             ], categories=['a', 'b', 'c', 'd']).reindex(['a', 'c'])
+        tm.assert_index_equal(res, Index(
+            ['a', 'a', 'c'], dtype='object'), exact=True)
         tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2]))

-        res, indexer = CategoricalIndex(['a', 'b', 'c', 'a'], categories=['a', 'b', 'c', 'd']).reindex(Categorical(['a', 'c']))
-        tm.assert_index_equal(res, CategoricalIndex(['a', 'a', 'c'], categories=['a', 'c']), exact=True)
+        res, indexer = CategoricalIndex(
+            ['a', 'b', 'c', 'a'],
+            categories=['a', 'b', 'c', 'd']).reindex(Categorical(['a', 'c']))
+        tm.assert_index_equal(res, CategoricalIndex(
+            ['a', 'a', 'c'], categories=['a', 'c']), exact=True)
         tm.assert_numpy_array_equal(indexer, np.array([0, 3, 2]))

     def test_duplicates(self):
@@ -2374,16 +2462,19 @@ def test_duplicates(self):
     def test_get_indexer(self):

-        idx1 = CategoricalIndex(list('aabcde'),categories=list('edabc'))
+        idx1 = CategoricalIndex(list('aabcde'), categories=list('edabc'))
         idx2 = CategoricalIndex(list('abf'))

         for indexer in [idx2, list('abf'), Index(list('abf'))]:
             r1 = idx1.get_indexer(idx2)
             assert_almost_equal(r1, [0, 1, 2, -1])

-        self.assertRaises(NotImplementedError, lambda : idx2.get_indexer(idx1, method='pad'))
-        self.assertRaises(NotImplementedError, lambda : idx2.get_indexer(idx1, method='backfill'))
-        self.assertRaises(NotImplementedError, lambda : idx2.get_indexer(idx1, method='nearest'))
+        self.assertRaises(NotImplementedError,
+                          lambda: idx2.get_indexer(idx1, method='pad'))
+        self.assertRaises(NotImplementedError,
+                          lambda: idx2.get_indexer(idx1, method='backfill'))
+        self.assertRaises(NotImplementedError,
+                          lambda: idx2.get_indexer(idx1, method='nearest'))

     def test_repr_roundtrip(self):
@@ -2407,19 +2498,29 @@ def test_repr_roundtrip(self):
     def test_isin(self):

-        ci = CategoricalIndex(list('aabca') + [np.nan],categories=['c','a','b'])
-        tm.assert_numpy_array_equal(ci.isin(['c']),np.array([False,False,False,True,False,False]))
-        tm.assert_numpy_array_equal(ci.isin(['c','a','b']),np.array([True]*5 + [False]))
-        tm.assert_numpy_array_equal(ci.isin(['c','a','b',np.nan]),np.array([True]*6))
+        ci = CategoricalIndex(
+            list('aabca') + [np.nan], categories=['c', 'a', 'b'])
+        tm.assert_numpy_array_equal(
+            ci.isin(['c']),
+            np.array([False, False, False, True, False, False]))
+        tm.assert_numpy_array_equal(
+            ci.isin(['c', 'a', 'b']), np.array([True] * 5 + [False]))
+        tm.assert_numpy_array_equal(
+            ci.isin(['c', 'a', 'b', np.nan]), np.array([True] * 6))

         # mismatched categorical -> coerced to ndarray so doesn't matter
-        tm.assert_numpy_array_equal(ci.isin(ci.set_categories(list('abcdefghi'))),np.array([True]*6))
-        tm.assert_numpy_array_equal(ci.isin(ci.set_categories(list('defghi'))),np.array([False]*5 + [True]))
+        tm.assert_numpy_array_equal(
+            ci.isin(ci.set_categories(list('abcdefghi'))), np.array([True] *
+                                                                    6))
+        tm.assert_numpy_array_equal(
+            ci.isin(ci.set_categories(list('defghi'))),
+            np.array([False] * 5 + [True]))

     def test_identical(self):

         ci1 = CategoricalIndex(['a', 'b'], categories=['a', 'b'], ordered=True)
-        ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'], ordered=True)
+        ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'],
+                               ordered=True)
         self.assertTrue(ci1.identical(ci1))
         self.assertTrue(ci1.identical(ci1.copy()))
         self.assertFalse(ci1.identical(ci2))
@@ -2427,7 +2528,8 @@ def test_equals(self):

         ci1 = CategoricalIndex(['a', 'b'], categories=['a', 'b'], ordered=True)
-        ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'], ordered=True)
+        ci2 = CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'],
+                               ordered=True)

         self.assertTrue(ci1.equals(ci1))
         self.assertFalse(ci1.equals(ci2))
@@ -2442,24 +2544,34 @@ def test_equals(self):
         self.assertTrue((ci1 >= ci1).all())

         self.assertFalse((ci1 == 1).all())
-        self.assertTrue((ci1 == Index(['a','b'])).all())
+        self.assertTrue((ci1 == Index(['a', 'b'])).all())
         self.assertTrue((ci1 == ci1.values).all())

         # invalid comparisons
         with tm.assertRaisesRegexp(ValueError, "Lengths must match"):
-            ci1 == Index(['a','b','c'])
-        self.assertRaises(TypeError, lambda : ci1 == ci2)
-        self.assertRaises(TypeError, lambda : ci1 == Categorical(ci1.values, ordered=False))
-        self.assertRaises(TypeError, lambda : ci1 == Categorical(ci1.values, categories=list('abc')))
+            ci1 == Index(['a', 'b', 'c'])
+        self.assertRaises(TypeError, lambda: ci1 == ci2)
+        self.assertRaises(
+            TypeError, lambda: ci1 == Categorical(ci1.values, ordered=False))
+        self.assertRaises(
+            TypeError,
+            lambda: ci1 == Categorical(ci1.values, categories=list('abc')))

         # tests
         # make sure that we are testing for category inclusion properly
-        self.assertTrue(CategoricalIndex(list('aabca'),categories=['c','a','b']).equals(list('aabca')))
+        self.assertTrue(CategoricalIndex(
+            list('aabca'), categories=['c', 'a', 'b']).equals(list('aabca')))
         with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
-            self.assertTrue(CategoricalIndex(list('aabca'),categories=['c','a','b',np.nan]).equals(list('aabca')))
+            self.assertTrue(CategoricalIndex(
+                list('aabca'), categories=['c', 'a', 'b', np.nan]).equals(list(
+                    'aabca')))

-        self.assertFalse(CategoricalIndex(list('aabca') + [np.nan],categories=['c','a','b']).equals(list('aabca')))
-        self.assertTrue(CategoricalIndex(list('aabca') + [np.nan],categories=['c','a','b']).equals(list('aabca') + [np.nan]))
+ self.assertFalse(CategoricalIndex( + list('aabca') + [np.nan], categories=['c', 'a', 'b']).equals(list( + 'aabca'))) + self.assertTrue(CategoricalIndex( + list('aabca') + [np.nan], categories=['c', 'a', 'b']).equals(list( + 'aabca') + [np.nan])) def test_string_categorical_index_repr(self): # short @@ -2478,6 +2590,7 @@ def test_string_categorical_index_repr(self): 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], categories=['a', 'bb', 'ccc'], ordered=False, dtype='category')""" + self.assertEqual(repr(idx), expected) else: expected = u"""CategoricalIndex([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', @@ -2485,6 +2598,7 @@ def test_string_categorical_index_repr(self): u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'], categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category')""" + self.assertEqual(unicode(idx), expected) # truncated @@ -2494,6 +2608,7 @@ def test_string_categorical_index_repr(self): ... 
'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc', 'a', 'bb', 'ccc'], categories=['a', 'bb', 'ccc'], ordered=False, dtype='category', length=300)""" + self.assertEqual(repr(idx), expected) else: expected = u"""CategoricalIndex([u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', @@ -2502,6 +2617,7 @@ def test_string_categorical_index_repr(self): u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc', u'a', u'bb', u'ccc'], categories=[u'a', u'bb', u'ccc'], ordered=False, dtype='category', length=300)""" + self.assertEqual(unicode(idx), expected) # larger categories @@ -2510,6 +2626,7 @@ def test_string_categorical_index_repr(self): expected = u"""CategoricalIndex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'm', 'o'], categories=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', ...], ordered=False, dtype='category')""" + self.assertEqual(repr(idx), expected) else: expected = u"""CategoricalIndex([u'a', u'b', u'c', u'd', u'e', u'f', u'g', u'h', u'i', u'j', @@ -2534,6 +2651,7 @@ def test_string_categorical_index_repr(self): 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" + self.assertEqual(repr(idx), expected) else: expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', @@ -2541,6 +2659,7 @@ def test_string_categorical_index_repr(self): u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" + self.assertEqual(unicode(idx), expected) # truncated @@ -2550,6 +2669,7 @@ def test_string_categorical_index_repr(self): ... 
'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category', length=300)""" + self.assertEqual(repr(idx), expected) else: expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', @@ -2558,6 +2678,7 @@ def test_string_categorical_index_repr(self): u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category', length=300)""" + self.assertEqual(unicode(idx), expected) # larger categories @@ -2566,11 +2687,13 @@ def test_string_categorical_index_repr(self): expected = u"""CategoricalIndex(['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', 'け', 'こ', 'さ', 'し', 'す', 'せ', 'そ'], categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')""" + self.assertEqual(repr(idx), expected) else: expected = u"""CategoricalIndex([u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', u'け', u'こ', u'さ', u'し', u'す', u'せ', u'そ'], categories=[u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', ...], ordered=False, dtype='category')""" + self.assertEqual(unicode(idx), expected) # Emable Unicode option ----------------------------------------- @@ -2593,6 +2716,7 @@ def test_string_categorical_index_repr(self): 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], ordered=False, dtype='category')""" + self.assertEqual(repr(idx), expected) else: expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', @@ -2601,6 +2725,7 @@ def test_string_categorical_index_repr(self): u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'], categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category')""" + self.assertEqual(unicode(idx), expected) # truncated @@ -2612,6 +2737,7 @@ def test_string_categorical_index_repr(self): 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう', 'あ', 'いい', 'ううう'], categories=['あ', 'いい', 'ううう'], 
                 ordered=False, dtype='category', length=300)"""
+            self.assertEqual(repr(idx), expected)
         else:
             expected = u"""CategoricalIndex([u'あ', u'いい', u'ううう', u'あ',
@@ -2620,6 +2746,7 @@ def test_string_categorical_index_repr(self):
                  u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう', u'あ', u'いい', u'ううう'],
                 categories=[u'あ', u'いい', u'ううう'], ordered=False, dtype='category', length=300)"""
+            self.assertEqual(unicode(idx), expected)
 
         # larger categories
@@ -2628,11 +2755,13 @@ def test_string_categorical_index_repr(self):
             expected = u"""CategoricalIndex(['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', 'け', 'こ', 'さ', 'し', 'す', 'せ', 'そ'],
                 categories=['あ', 'い', 'う', 'え', 'お', 'か', 'き', 'く', ...], ordered=False, dtype='category')"""
+            self.assertEqual(repr(idx), expected)
         else:
             expected = u"""CategoricalIndex([u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', u'け', u'こ', u'さ', u'し', u'す', u'せ', u'そ'],
                 categories=[u'あ', u'い', u'う', u'え', u'お', u'か', u'き', u'く', ...], ordered=False, dtype='category')"""
+            self.assertEqual(unicode(idx), expected)
 
     def test_fillna_categorical(self):
@@ -2692,14 +2821,13 @@ def test_numeric_compat(self):
         tm.assert_index_equal(result, didx)
 
         result = idx * Series(np.arange(5, dtype='float64') + 0.1)
-        expected = Float64Index(np.arange(5, dtype='float64') * (
-            np.arange(5, dtype='float64') + 0.1))
+        expected = Float64Index(np.arange(5, dtype='float64') *
+                                (np.arange(5, dtype='float64') + 0.1))
         tm.assert_index_equal(result, expected)
 
         # invalid
-        self.assertRaises(TypeError, lambda: idx * date_range('20130101',
-                                                              periods=5)
-                          )
+        self.assertRaises(TypeError,
+                          lambda: idx * date_range('20130101', periods=5))
         self.assertRaises(ValueError, lambda: idx * idx[0:3])
         self.assertRaises(ValueError, lambda: idx * np.array([1, 2]))
 
@@ -2707,31 +2835,31 @@ def test_explicit_conversions(self):
 
         # GH 8608
         # add/sub are overriden explicity for Float/Int Index
-        idx = self._holder(np.arange(5,dtype='int64'))
+        idx = self._holder(np.arange(5, dtype='int64'))
 
         # float conversions
-        arr = np.arange(5,dtype='int64')*3.2
+        arr = np.arange(5, dtype='int64') * 3.2
         expected = Float64Index(arr)
         fidx = idx * 3.2
-        tm.assert_index_equal(fidx,expected)
+        tm.assert_index_equal(fidx, expected)
         fidx = 3.2 * idx
-        tm.assert_index_equal(fidx,expected)
+        tm.assert_index_equal(fidx, expected)
 
         # interops with numpy arrays
         expected = Float64Index(arr)
-        a = np.zeros(5,dtype='float64')
+        a = np.zeros(5, dtype='float64')
         result = fidx - a
-        tm.assert_index_equal(result,expected)
+        tm.assert_index_equal(result, expected)
 
         expected = Float64Index(-arr)
-        a = np.zeros(5,dtype='float64')
+        a = np.zeros(5, dtype='float64')
         result = a - fidx
-        tm.assert_index_equal(result,expected)
+        tm.assert_index_equal(result, expected)
 
     def test_ufunc_compat(self):
-        idx = self._holder(np.arange(5,dtype='int64'))
+        idx = self._holder(np.arange(5, dtype='int64'))
         result = np.sin(idx)
-        expected = Float64Index(np.sin(np.arange(5,dtype='int64')))
+        expected = Float64Index(np.sin(np.arange(5, dtype='int64')))
         tm.assert_index_equal(result, expected)
 
     def test_index_groupby(self):
@@ -2753,9 +2881,8 @@ def test_index_groupby(self):
                                     datetime(2011, 11, 1)], tz='UTC').values
 
-        ex_keys = pd.tslib.datetime_to_datetime64(
-            np.array([Timestamp('2011-11-01'),
-                      Timestamp('2011-12-01')]))
+        ex_keys = pd.tslib.datetime_to_datetime64(np.array([Timestamp(
+            '2011-11-01'), Timestamp('2011-12-01')]))
         expected = {ex_keys[0][0]: [idx[0], idx[5]],
                     ex_keys[0][1]: [idx[1], idx[4]]}
         self.assertEqual(idx.groupby(to_groupby), expected)
@@ -2772,8 +2899,8 @@ class TestFloat64Index(Numeric, tm.TestCase):
     _multiprocess_can_split_ = True
 
     def setUp(self):
-        self.indices = dict(mixed = Float64Index([1.5, 2, 3, 4, 5]),
-                            float = Float64Index(np.arange(5) * 2.5))
+        self.indices = dict(mixed=Float64Index([1.5, 2, 3, 4, 5]),
+                            float=Float64Index(np.arange(5) * 2.5))
         self.setup_indices()
 
     def create_index(self):
@@ -2797,22 +2924,23 @@ def check_coerce(self, a, b, is_float_index=True):
 
     def test_constructor(self):
 
         # explicit construction
-        index = Float64Index([1,2,3,4,5])
+        index = Float64Index([1, 2, 3, 4, 5])
         self.assertIsInstance(index, Float64Index)
-        self.assertTrue((index.values == np.array([1,2,3,4,5],dtype='float64')).all())
-        index = Float64Index(np.array([1,2,3,4,5]))
+        self.assertTrue((index.values == np.array(
+            [1, 2, 3, 4, 5], dtype='float64')).all())
+        index = Float64Index(np.array([1, 2, 3, 4, 5]))
         self.assertIsInstance(index, Float64Index)
-        index = Float64Index([1.,2,3,4,5])
+        index = Float64Index([1., 2, 3, 4, 5])
         self.assertIsInstance(index, Float64Index)
-        index = Float64Index(np.array([1.,2,3,4,5]))
+        index = Float64Index(np.array([1., 2, 3, 4, 5]))
         self.assertIsInstance(index, Float64Index)
         self.assertEqual(index.dtype, float)
 
-        index = Float64Index(np.array([1.,2,3,4,5]),dtype=np.float32)
+        index = Float64Index(np.array([1., 2, 3, 4, 5]), dtype=np.float32)
         self.assertIsInstance(index, Float64Index)
         self.assertEqual(index.dtype, np.float64)
 
-        index = Float64Index(np.array([1,2,3,4,5]),dtype=np.float32)
+        index = Float64Index(np.array([1, 2, 3, 4, 5]), dtype=np.float32)
         self.assertIsInstance(index, Float64Index)
         self.assertEqual(index.dtype, np.float64)
 
@@ -2828,22 +2956,24 @@ def test_constructor_invalid(self):
 
         # invalid
         self.assertRaises(TypeError, Float64Index, 0.)
-        self.assertRaises(TypeError, Float64Index, ['a','b',0.])
+        self.assertRaises(TypeError, Float64Index, ['a', 'b', 0.])
         self.assertRaises(TypeError, Float64Index, [Timestamp('20130101')])
 
     def test_constructor_coerce(self):
 
-        self.check_coerce(self.mixed,Index([1.5, 2, 3, 4, 5]))
-        self.check_coerce(self.float,Index(np.arange(5) * 2.5))
-        self.check_coerce(self.float,Index(np.array(np.arange(5) * 2.5, dtype=object)))
+        self.check_coerce(self.mixed, Index([1.5, 2, 3, 4, 5]))
+        self.check_coerce(self.float, Index(np.arange(5) * 2.5))
+        self.check_coerce(self.float, Index(np.array(
+            np.arange(5) * 2.5, dtype=object)))
 
     def test_constructor_explicit(self):
 
         # these don't auto convert
-        self.check_coerce(self.float,Index((np.arange(5) * 2.5), dtype=object),
-                          is_float_index=False)
-        self.check_coerce(self.mixed,Index([1.5, 2, 3, 4, 5],dtype=object),
+        self.check_coerce(self.float,
+                          Index((np.arange(5) * 2.5), dtype=object),
                           is_float_index=False)
+        self.check_coerce(self.mixed, Index(
+            [1.5, 2, 3, 4, 5], dtype=object),
                           is_float_index=False)
 
     def test_astype(self):
@@ -2861,18 +2991,18 @@ def test_astype(self):
 
     def test_equals(self):
 
-        i = Float64Index([1.0,2.0])
+        i = Float64Index([1.0, 2.0])
         self.assertTrue(i.equals(i))
         self.assertTrue(i.identical(i))
 
-        i2 = Float64Index([1.0,2.0])
+        i2 = Float64Index([1.0, 2.0])
         self.assertTrue(i.equals(i2))
 
-        i = Float64Index([1.0,np.nan])
+        i = Float64Index([1.0, np.nan])
         self.assertTrue(i.equals(i))
         self.assertTrue(i.identical(i))
 
-        i2 = Float64Index([1.0,np.nan])
+        i2 = Float64Index([1.0, np.nan])
         self.assertTrue(i.equals(i2))
 
     def test_get_indexer(self):
@@ -2881,8 +3011,10 @@ def test_get_indexer(self):
         target = [-0.1, 0.5, 1.1]
         tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1])
-        tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'), [0, 1, 2])
-        tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'), [0, 1, 1])
+        tm.assert_numpy_array_equal(
+            idx.get_indexer(target, 'backfill'), [0, 1, 2])
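As an aside, the `get_indexer` filling semantics that the hunk above merely reformats are easy to reproduce standalone. This sketch assumes a current pandas, where a plain float64 `pd.Index` stands in for the legacy `Float64Index` class used by the test suite; the target values and expected positions come straight from the test.

```python
import pandas as pd

# A float-valued index; pd.Index with float64 dtype stands in for the
# legacy Float64Index class exercised in the test suite.
idx = pd.Index([0.0, 1.0, 2.0])
target = [-0.1, 0.5, 1.1]

# 'pad' matches the previous label, 'backfill' the next, 'nearest' the
# closest one; -1 means "no match" (nothing to pad -0.1 with).
pad = list(idx.get_indexer(target, method='pad'))            # [-1, 0, 1]
backfill = list(idx.get_indexer(target, method='backfill'))  # [0, 1, 2]
nearest = list(idx.get_indexer(target, method='nearest'))    # [0, 1, 1]
```

Note that 0.5 is equidistant from 0.0 and 1.0; as the test asserts, `'nearest'` breaks the tie toward the later position.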
+        tm.assert_numpy_array_equal(
+            idx.get_indexer(target, 'nearest'), [0, 1, 1])
 
     def test_get_loc(self):
         idx = Float64Index([0.0, 1.0, 2.0])
@@ -2897,8 +3029,8 @@ def test_get_loc(self):
         self.assertRaises(KeyError, idx.get_loc, 'foo')
         self.assertRaises(KeyError, idx.get_loc, 1.5)
-        self.assertRaises(KeyError, idx.get_loc, 1.5,
-                          method='pad', tolerance=0.1)
+        self.assertRaises(KeyError, idx.get_loc, 1.5, method='pad',
+                          tolerance=0.1)
 
         with tm.assertRaisesRegexp(ValueError, 'must be numeric'):
             idx.get_loc(1.4, method='nearest', tolerance='foo')
@@ -2941,13 +3073,11 @@ def test_nan_multiple_containment(self):
         tm.assert_numpy_array_equal(i.isin([1.0]), np.array([True, False]))
         tm.assert_numpy_array_equal(i.isin([2.0, np.pi]),
                                     np.array([False, False]))
-        tm.assert_numpy_array_equal(i.isin([np.nan]),
-                                    np.array([False, True]))
+        tm.assert_numpy_array_equal(i.isin([np.nan]), np.array([False, True]))
         tm.assert_numpy_array_equal(i.isin([1.0, np.nan]),
                                     np.array([True, True]))
         i = Float64Index([1.0, 2.0])
-        tm.assert_numpy_array_equal(i.isin([np.nan]),
-                                    np.array([False, False]))
+        tm.assert_numpy_array_equal(i.isin([np.nan]), np.array([False, False]))
 
     def test_astype_from_object(self):
         index = Index([1.0, np.nan, 0.2], dtype='object')
@@ -2977,7 +3107,7 @@ class TestInt64Index(Numeric, tm.TestCase):
     _multiprocess_can_split_ = True
 
     def setUp(self):
-        self.indices = dict(index = Int64Index(np.arange(0, 20, 2)))
+        self.indices = dict(index=Int64Index(np.arange(0, 20, 2)))
         self.setup_indices()
 
     def create_index(self):
@@ -2986,6 +3116,7 @@ def create_index(self):
     def test_too_many_names(self):
         def testit():
             self.index.names = ["roger", "harold"]
+
         assertRaisesRegexp(ValueError, "^Length", testit)
 
     def test_constructor(self):
@@ -3077,8 +3208,7 @@ def test_is_monotonic_na(self):
                     pd.to_datetime(['NaT']),
                     pd.to_datetime(['NaT', '2000-01-01']),
                     pd.to_datetime(['2000-01-01', 'NaT', '2000-01-02']),
-                    pd.to_timedelta(['1 day', 'NaT']),
-                    ]
+                    pd.to_timedelta(['1 day', 'NaT']), ]
         for index in examples:
             self.assertFalse(index.is_monotonic_increasing)
             self.assertFalse(index.is_monotonic_decreasing)
@@ -3106,12 +3236,11 @@ def test_identical(self):
         self.assertTrue(same_values.identical(i))
 
         self.assertFalse(i.identical(self.index))
-        self.assertTrue(Index(same_values, name='foo', dtype=object
-                              ).identical(i))
+        self.assertTrue(Index(same_values, name='foo', dtype=object).identical(
+            i))
 
-        self.assertFalse(
-            self.index.copy(dtype=object)
-            .identical(self.index.copy(dtype='int64')))
+        self.assertFalse(self.index.copy(dtype=object)
+                         .identical(self.index.copy(dtype='int64')))
 
     def test_get_indexer(self):
         target = Int64Index(np.arange(10))
@@ -3247,8 +3376,7 @@ def test_join_right(self):
         res, lidx, ridx = self.index.join(other, how='right',
                                           return_indexers=True)
         eres = other
-        elidx = np.array([-1, 6, -1, -1, 1, -1],
-                         dtype=np.int64)
+        elidx = np.array([-1, 6, -1, -1, 1, -1], dtype=np.int64)
 
         tm.assertIsInstance(other, Int64Index)
         self.assertTrue(res.equals(eres))
@@ -3259,8 +3387,7 @@ def test_join_right(self):
         res, lidx, ridx = self.index.join(other_mono, how='right',
                                           return_indexers=True)
         eres = other_mono
-        elidx = np.array([-1, 1, -1, -1, 6, -1],
-                         dtype=np.int64)
+        elidx = np.array([-1, 1, -1, -1, 6, -1], dtype=np.int64)
         tm.assertIsInstance(other, Int64Index)
         self.assertTrue(res.equals(eres))
         tm.assert_numpy_array_equal(lidx, elidx)
@@ -3387,8 +3514,9 @@ def test_int_name_format(self):
         repr(df)
 
     def test_print_unicode_columns(self):
-        df = pd.DataFrame(
-            {u("\u05d0"): [1, 2, 3], "\u05d1": [4, 5, 6], "c": [7, 8, 9]})
+        df = pd.DataFrame({u("\u05d0"): [1, 2, 3],
+                           "\u05d1": [4, 5, 6],
+                           "c": [7, 8, 9]})
         repr(df.columns)  # should not raise UnicodeDecodeError
 
     def test_repr_summary(self):
@@ -3466,13 +3594,11 @@ def create_index(self):
         return RangeIndex(5)
 
     def test_binops(self):
-        ops = [operator.add, operator.sub, operator.mul,
-               operator.floordiv, operator.truediv, pow]
+        ops = [operator.add, operator.sub, operator.mul, operator.floordiv,
+               operator.truediv, pow]
         scalars = [-1, 1, 2]
-        idxs = [RangeIndex(0, 10, 1),
-                RangeIndex(0, 20, 2),
-                RangeIndex(-10, 10, 2),
-                RangeIndex(5, -5, -1)]
+        idxs = [RangeIndex(0, 10, 1), RangeIndex(0, 20, 2),
+                RangeIndex(-10, 10, 2), RangeIndex(5, -5, -1)]
         for op in ops:
             for a, b in combinations(idxs, 2):
                 result = op(a, b)
@@ -3487,6 +3613,7 @@ def test_binops(self):
     def test_too_many_names(self):
         def testit():
             self.index.names = ["roger", "harold"]
+
         assertRaisesRegexp(ValueError, "^Length", testit)
 
     def test_constructor(self):
@@ -3527,13 +3654,8 @@ def test_constructor(self):
         self.assertRaises(TypeError, lambda: Index(0, 1000))
 
         # invalid args
-        for i in [Index(['a', 'b']),
-                  Series(['a', 'b']),
-                  np.array(['a', 'b']),
-                  [],
-                  'foo',
-                  datetime(2000, 1, 1, 0, 0),
-                  np.arange(0, 10)]:
+        for i in [Index(['a', 'b']), Series(['a', 'b']), np.array(['a', 'b']),
+                  [], 'foo', datetime(2000, 1, 1, 0, 0), np.arange(0, 10)]:
             self.assertRaises(TypeError, lambda: RangeIndex(i))
 
     def test_constructor_same(self):
@@ -3647,8 +3769,7 @@ def test_constructor_corner(self):
         self.assertRaises(TypeError, RangeIndex, 1.1, 10.2, 1.3)
 
         # invalid passed type
-        self.assertRaises(TypeError,
-                          lambda: RangeIndex(1, 5, dtype='float64'))
+        self.assertRaises(TypeError, lambda: RangeIndex(1, 5, dtype='float64'))
 
     def test_copy(self):
         i = RangeIndex(5, name='Foo')
@@ -3686,8 +3807,7 @@ def test_insert(self):
         result = idx[1:4]
 
         # test 0th element
-        self.assertTrue(idx[0:4].equals(
-            result.insert(0, idx[0])))
+        self.assertTrue(idx[0:4].equals(result.insert(0, idx[0])))
 
     def test_delete(self):
 
@@ -3768,12 +3888,11 @@ def test_identical(self):
         self.assertTrue(same_values.identical(self.index.copy(dtype=object)))
 
         self.assertFalse(i.identical(self.index))
-        self.assertTrue(Index(same_values, name='foo', dtype=object
-                              ).identical(i))
+        self.assertTrue(Index(same_values, name='foo', dtype=object).identical(
+            i))
 
-        self.assertFalse(
-            self.index.copy(dtype=object)
-            .identical(self.index.copy(dtype='int64')))
+        self.assertFalse(self.index.copy(dtype=object)
+                         .identical(self.index.copy(dtype='int64')))
 
     def test_get_indexer(self):
         target = RangeIndex(10)
@@ -3802,8 +3921,8 @@ def test_join_outer(self):
         noidx_res = self.index.join(other, how='outer')
         self.assertTrue(res.equals(noidx_res))
 
-        eres = Int64Index([0, 2, 4, 6, 8, 10, 12, 14, 15, 16, 17, 18,
-                           19, 20, 21, 22, 23, 24, 25])
+        eres = Int64Index([0, 2, 4, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20,
+                           21, 22, 23, 24, 25])
         elidx = np.array([0, 1, 2, 3, 4, 5, 6, 7, -1, 8, -1, 9,
                           -1, -1, -1, -1, -1, -1, -1], dtype=np.int64)
         eridx = np.array([-1, -1, -1, -1, -1, -1, -1, -1, 10, 9, 8, 7, 6,
@@ -4039,8 +4158,9 @@ def test_take_preserve_name(self):
         self.assertEqual(index.name, taken.name)
 
     def test_print_unicode_columns(self):
-        df = pd.DataFrame(
-            {u("\u05d0"): [1, 2, 3], "\u05d1": [4, 5, 6], "c": [7, 8, 9]})
+        df = pd.DataFrame({u("\u05d0"): [1, 2, 3],
+                           "\u05d1": [4, 5, 6],
+                           "c": [7, 8, 9]})
         repr(df.columns)  # should not raise UnicodeDecodeError
 
     def test_repr_roundtrip(self):
@@ -4221,10 +4341,10 @@ def test_str(self):
             self.assertTrue("'foo'" in str(idx))
             self.assertTrue(idx.__class__.__name__ in str(idx))
 
-            if hasattr(idx,'tz'):
+            if hasattr(idx, 'tz'):
                 if idx.tz is not None:
                     self.assertTrue(idx.tz in str(idx))
-            if hasattr(idx,'freq'):
+            if hasattr(idx, 'freq'):
                 self.assertTrue("freq='%s'" % idx.freqstr in str(idx))
 
     def test_view(self):
@@ -4238,14 +4358,15 @@ def test_view(self):
         i_view = i.view(self._holder)
         result = self._holder(i)
-        tm.assert_index_equal(result, i)
+        tm.assert_index_equal(result, i_view)
+
 
 class TestDatetimeIndex(DatetimeLike, tm.TestCase):
     _holder = DatetimeIndex
     _multiprocess_can_split_ = True
 
     def setUp(self):
-        self.indices = dict(index = tm.makeDateIndex(10))
+        self.indices = dict(index=tm.makeDateIndex(10))
         self.setup_indices()
 
     def create_index(self):
@@ -4258,24 +4379,26 @@ def test_shift(self):
         drange = self.create_index()
         result = drange.shift(1)
-        expected = DatetimeIndex(['2013-01-02', '2013-01-03', '2013-01-04', '2013-01-05',
-                                  '2013-01-06'], freq='D')
+        expected = DatetimeIndex(['2013-01-02', '2013-01-03', '2013-01-04',
+                                  '2013-01-05',
+                                  '2013-01-06'], freq='D')
         self.assert_index_equal(result, expected)
 
         result = drange.shift(-1)
-        expected = DatetimeIndex(['2012-12-31','2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04'],
+        expected = DatetimeIndex(['2012-12-31', '2013-01-01', '2013-01-02',
+                                  '2013-01-03', '2013-01-04'],
                                  freq='D')
         self.assert_index_equal(result, expected)
 
         result = drange.shift(3, freq='2D')
-        expected = DatetimeIndex(['2013-01-07', '2013-01-08', '2013-01-09', '2013-01-10',
-                                  '2013-01-11'],freq='D')
+        expected = DatetimeIndex(['2013-01-07', '2013-01-08', '2013-01-09',
+                                  '2013-01-10',
+                                  '2013-01-11'], freq='D')
         self.assert_index_equal(result, expected)
 
-
     def test_construction_with_alt(self):
-        i = pd.date_range('20130101',periods=5,freq='H',tz='US/Eastern')
+        i = pd.date_range('20130101', periods=5, freq='H', tz='US/Eastern')
         i2 = DatetimeIndex(i, dtype=i.dtype)
         self.assert_index_equal(i, i2)
@@ -4285,7 +4408,8 @@ def test_construction_with_alt(self):
         i2 = DatetimeIndex(i.tz_localize(None).asi8, dtype=i.dtype)
         self.assert_index_equal(i, i2)
 
-        i2 = DatetimeIndex(i.tz_localize(None).asi8, dtype=i.dtype, tz=i.dtype.tz)
+        i2 = DatetimeIndex(
+            i.tz_localize(None).asi8, dtype=i.dtype, tz=i.dtype.tz)
         self.assert_index_equal(i, i2)
 
         # localize into the provided tz
@@ -4298,7 +4422,8 @@ def test_construction_with_alt(self):
         self.assert_index_equal(i2, expected)
 
         # incompat tz/dtype
-        self.assertRaises(ValueError, lambda : DatetimeIndex(i.tz_localize(None).asi8, dtype=i.dtype, tz='US/Pacific'))
+        self.assertRaises(ValueError, lambda: DatetimeIndex(
+            i.tz_localize(None).asi8, dtype=i.dtype, tz='US/Pacific'))
 
     def test_pickle_compat_construction(self):
         pass
@@ -4306,16 +4431,21 @@ def test_pickle_compat_construction(self):
     def test_construction_index_with_mixed_timezones(self):
         # GH 11488
         # no tz results in DatetimeIndex
-        result = Index([Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx')
+        result = Index(
+            [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx')
+        exp = DatetimeIndex(
+            [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
         self.assertIsNone(result.tz)
 
         # same tz results in DatetimeIndex
         result = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
-                        Timestamp('2011-01-02 10:00', tz='Asia/Tokyo')], name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00')], tz='Asia/Tokyo', name='idx')
+                        Timestamp('2011-01-02 10:00', tz='Asia/Tokyo')],
+                       name='idx')
+        exp = DatetimeIndex(
+            [Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00')
+             ], tz='Asia/Tokyo', name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
         self.assertIsNotNone(result.tz)
@@ -4323,8 +4453,10 @@ def test_construction_index_with_mixed_timezones(self):
 
         # same tz results in DatetimeIndex (DST)
         result = Index([Timestamp('2011-01-01 10:00', tz='US/Eastern'),
-                        Timestamp('2011-08-01 10:00', tz='US/Eastern')], name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), Timestamp('2011-08-01 10:00')],
+                        Timestamp('2011-08-01 10:00', tz='US/Eastern')],
+                       name='idx')
+        exp = DatetimeIndex([Timestamp('2011-01-01 10:00'),
+                             Timestamp('2011-08-01 10:00')],
                             tz='US/Eastern', name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
@@ -4332,14 +4464,18 @@ def test_construction_index_with_mixed_timezones(self):
         self.assertEqual(result.tz, exp.tz)
 
         # different tz results in Index(dtype=object)
-        result = Index([Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00', tz='US/Eastern')], name='idx')
-        exp = Index([Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+        result = Index([Timestamp('2011-01-01 10:00'),
+                        Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+                       name='idx')
+        exp = Index([Timestamp('2011-01-01 10:00'),
+                     Timestamp('2011-01-02 10:00', tz='US/Eastern')],
                     dtype='object', name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertFalse(isinstance(result, DatetimeIndex))
 
         result = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
-                        Timestamp('2011-01-02 10:00', tz='US/Eastern')], name='idx')
+                        Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+                       name='idx')
         exp = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
                      Timestamp('2011-01-02 10:00', tz='US/Eastern')],
                     dtype='object', name='idx')
@@ -4347,9 +4483,11 @@ def test_construction_index_with_mixed_timezones(self):
         self.assertFalse(isinstance(result, DatetimeIndex))
 
         # passing tz results in DatetimeIndex
-        result = Index([Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00', tz='US/Eastern')],
-                       tz='Asia/Tokyo', name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01 19:00'), Timestamp('2011-01-03 00:00')],
+        result = Index([Timestamp('2011-01-01 10:00'),
+                        Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+                       tz='Asia/Tokyo', name='idx')
+        exp = DatetimeIndex([Timestamp('2011-01-01 19:00'),
+                             Timestamp('2011-01-03 00:00')],
                             tz='Asia/Tokyo', name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
@@ -4362,8 +4500,10 @@ def test_construction_index_with_mixed_timezones(self):
         self.assertIsNone(result.tz)
 
         # length = 1 with tz
-        result = Index([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo')], name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01 10:00')], tz='Asia/Tokyo', name='idx')
+        result = Index(
+            [Timestamp('2011-01-01 10:00', tz='Asia/Tokyo')], name='idx')
+        exp = DatetimeIndex([Timestamp('2011-01-01 10:00')], tz='Asia/Tokyo',
+                            name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
         self.assertIsNotNone(result.tz)
@@ -4381,9 +4521,12 @@ def test_construction_index_with_mixed_timezones_with_NaT(self):
 
         # same tz results in DatetimeIndex
         result = Index([pd.NaT, Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
-                        pd.NaT, Timestamp('2011-01-02 10:00', tz='Asia/Tokyo')], name='idx')
+                        pd.NaT, Timestamp('2011-01-02 10:00',
+                                          tz='Asia/Tokyo')],
+                       name='idx')
         exp = DatetimeIndex([pd.NaT, Timestamp('2011-01-01 10:00'),
-                             pd.NaT, Timestamp('2011-01-02 10:00')], tz='Asia/Tokyo', name='idx')
+                             pd.NaT, Timestamp('2011-01-02 10:00')],
+                            tz='Asia/Tokyo', name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
         self.assertIsNotNone(result.tz)
@@ -4391,8 +4534,11 @@ def test_construction_index_with_mixed_timezones_with_NaT(self):
 
         # same tz results in DatetimeIndex (DST)
         result = Index([Timestamp('2011-01-01 10:00', tz='US/Eastern'),
-                        pd.NaT, Timestamp('2011-08-01 10:00', tz='US/Eastern')], name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), pd.NaT, Timestamp('2011-08-01 10:00')],
+                        pd.NaT,
+                        Timestamp('2011-08-01 10:00', tz='US/Eastern')],
+                       name='idx')
+        exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), pd.NaT,
+                             Timestamp('2011-08-01 10:00')],
                             tz='US/Eastern', name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
@@ -4401,7 +4547,9 @@ def test_construction_index_with_mixed_timezones_with_NaT(self):
 
         # different tz results in Index(dtype=object)
         result = Index([pd.NaT, Timestamp('2011-01-01 10:00'),
-                        pd.NaT, Timestamp('2011-01-02 10:00', tz='US/Eastern')], name='idx')
+                        pd.NaT, Timestamp('2011-01-02 10:00',
+                                          tz='US/Eastern')],
+                       name='idx')
         exp = Index([pd.NaT, Timestamp('2011-01-01 10:00'),
                      pd.NaT, Timestamp('2011-01-02 10:00', tz='US/Eastern')],
                     dtype='object', name='idx')
@@ -4409,7 +4557,8 @@ def test_construction_index_with_mixed_timezones_with_NaT(self):
         self.assertFalse(isinstance(result, DatetimeIndex))
 
         result = Index([pd.NaT, Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
-                        pd.NaT, Timestamp('2011-01-02 10:00', tz='US/Eastern')], name='idx')
+                        pd.NaT, Timestamp('2011-01-02 10:00',
+                                          tz='US/Eastern')], name='idx')
         exp = Index([pd.NaT, Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
                      pd.NaT, Timestamp('2011-01-02 10:00', tz='US/Eastern')],
                     dtype='object', name='idx')
@@ -4418,8 +4567,9 @@ def test_construction_index_with_mixed_timezones_with_NaT(self):
 
         # passing tz results in DatetimeIndex
         result = Index([pd.NaT, Timestamp('2011-01-01 10:00'),
-                        pd.NaT, Timestamp('2011-01-02 10:00', tz='US/Eastern')],
-                       tz='Asia/Tokyo', name='idx')
+                        pd.NaT, Timestamp('2011-01-02 10:00',
+                                          tz='US/Eastern')],
+                       tz='Asia/Tokyo', name='idx')
         exp = DatetimeIndex([pd.NaT, Timestamp('2011-01-01 19:00'),
                              pd.NaT, Timestamp('2011-01-03 00:00')],
                             tz='Asia/Tokyo', name='idx')
@@ -4445,30 +4595,41 @@ def test_construction_dti_with_mixed_timezones(self):
         # GH 11488 (not changed, added explicit tests)
 
         # no tz results in DatetimeIndex
-        result = DatetimeIndex([Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx')
+        result = DatetimeIndex(
+            [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx')
+        exp = DatetimeIndex(
+            [Timestamp('2011-01-01'), Timestamp('2011-01-02')], name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
 
         # same tz results in DatetimeIndex
         result = DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
-                                Timestamp('2011-01-02 10:00', tz='Asia/Tokyo')], name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00')], tz='Asia/Tokyo', name='idx')
+                                Timestamp('2011-01-02 10:00',
+                                          tz='Asia/Tokyo')],
+                               name='idx')
+        exp = DatetimeIndex(
+            [Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00')
+             ], tz='Asia/Tokyo', name='idx')
         self.assert_index_equal(result, exp, exact=True)
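The timezone inference rules these hunks reformat (from GH 11488) can be demonstrated directly. This is a sketch on a current pandas, where the outcomes the tests assert still hold: homogeneous tz-aware inputs infer a `DatetimeIndex`, while mixing tz-naive and tz-aware values falls back to a plain object-dtype `Index`.

```python
import pandas as pd

# Timestamps sharing one timezone infer a tz-aware DatetimeIndex.
same = pd.Index([pd.Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
                 pd.Timestamp('2011-01-02 10:00', tz='Asia/Tokyo')])
assert isinstance(same, pd.DatetimeIndex)
assert same.tz is not None

# Mixing tz-naive and tz-aware values cannot be represented as a single
# datetime64 dtype, so the constructor falls back to object dtype.
mixed = pd.Index([pd.Timestamp('2011-01-01 10:00'),
                  pd.Timestamp('2011-01-02 10:00', tz='US/Eastern')])
assert mixed.dtype == object
assert not isinstance(mixed, pd.DatetimeIndex)
```

Passing `tz=` explicitly to `Index` (as the tests above also exercise) instead converts everything into that zone and produces a `DatetimeIndex`.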
         self.assertTrue(isinstance(result, DatetimeIndex))
 
         # same tz results in DatetimeIndex (DST)
         result = DatetimeIndex([Timestamp('2011-01-01 10:00', tz='US/Eastern'),
-                                Timestamp('2011-08-01 10:00', tz='US/Eastern')], name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01 10:00'), Timestamp('2011-08-01 10:00')],
+                                Timestamp('2011-08-01 10:00',
+                                          tz='US/Eastern')],
+                               name='idx')
+        exp = DatetimeIndex([Timestamp('2011-01-01 10:00'),
+                             Timestamp('2011-08-01 10:00')],
                             tz='US/Eastern', name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
 
         # different tz coerces tz-naive to tz-awareIndex(dtype=object)
         result = DatetimeIndex([Timestamp('2011-01-01 10:00'),
-                                Timestamp('2011-01-02 10:00', tz='US/Eastern')], name='idx')
-        exp = DatetimeIndex([Timestamp('2011-01-01 05:00'), Timestamp('2011-01-02 10:00')],
+                                Timestamp('2011-01-02 10:00',
+                                          tz='US/Eastern')], name='idx')
+        exp = DatetimeIndex([Timestamp('2011-01-01 05:00'),
+                             Timestamp('2011-01-02 10:00')],
                             tz='US/Eastern', name='idx')
         self.assert_index_equal(result, exp, exact=True)
         self.assertTrue(isinstance(result, DatetimeIndex))
@@ -4476,15 +4637,18 @@ def test_construction_dti_with_mixed_timezones(self):
         # tz mismatch affecting to tz-aware raises TypeError/ValueError
         with tm.assertRaises(ValueError):
             DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
-                           Timestamp('2011-01-02 10:00', tz='US/Eastern')], name='idx')
+                           Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+                          name='idx')
 
         with tm.assertRaises(TypeError):
-            DatetimeIndex([Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+            DatetimeIndex([Timestamp('2011-01-01 10:00'),
+                           Timestamp('2011-01-02 10:00', tz='US/Eastern')],
                           tz='Asia/Tokyo', name='idx')
 
         with tm.assertRaises(ValueError):
             DatetimeIndex([Timestamp('2011-01-01 10:00', tz='Asia/Tokyo'),
-                           Timestamp('2011-01-02 10:00', tz='US/Eastern')], tz='US/Eastern', name='idx')
+                           Timestamp('2011-01-02 10:00', tz='US/Eastern')],
+                          tz='US/Eastern', name='idx')
 
     def test_get_loc(self):
         idx = pd.date_range('2000-01-01', periods=3)
@@ -4512,8 +4676,7 @@ def test_get_loc(self):
         with tm.assertRaisesRegexp(ValueError, 'must be convertible'):
             idx.get_loc('2000-01-01T12', method='nearest', tolerance='foo')
         with tm.assertRaises(KeyError):
-            idx.get_loc('2000-01-01T03', method='nearest',
-                        tolerance='2 hours')
+            idx.get_loc('2000-01-01T03', method='nearest', tolerance='2 hours')
 
         self.assertEqual(idx.get_loc('2000', method='nearest'), slice(0, 3))
         self.assertEqual(idx.get_loc('2000-01', method='nearest'), slice(0, 3))
@@ -4547,12 +4710,16 @@ def test_get_indexer(self):
         idx = pd.date_range('2000-01-01', periods=3)
         tm.assert_numpy_array_equal(idx.get_indexer(idx), [0, 1, 2])
 
-        target = idx[0] + pd.to_timedelta(['-1 hour', '12 hours', '1 day 1 hour'])
+        target = idx[0] + pd.to_timedelta(['-1 hour', '12 hours',
+                                           '1 day 1 hour'])
         tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1])
-        tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'), [0, 1, 2])
-        tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'), [0, 1, 1])
         tm.assert_numpy_array_equal(
-            idx.get_indexer(target, 'nearest', tolerance=pd.Timedelta('1 hour')),
+            idx.get_indexer(target, 'backfill'), [0, 1, 2])
+        tm.assert_numpy_array_equal(
+            idx.get_indexer(target, 'nearest'), [0, 1, 1])
+        tm.assert_numpy_array_equal(
+            idx.get_indexer(target, 'nearest',
+                            tolerance=pd.Timedelta('1 hour')),
             [0, -1, 1])
         with tm.assertRaises(ValueError):
             idx.get_indexer(idx[[0]], method='nearest', tolerance='foo')
@@ -4561,7 +4728,7 @@ def test_roundtrip_pickle_with_tz(self):
 
         # GH 8367
         # round-trip of timezone
-        index=date_range('20130101',periods=3,tz='US/Eastern',name='foo')
+        index = date_range('20130101', periods=3, tz='US/Eastern', name='foo')
         unpickled = self.round_trip_pickle(index)
         self.assertTrue(index.equals(unpickled))
 
@@ -4575,7 +4742,7 @@ def test_time_loc(self):  # GH8667
         from datetime import time
         from pandas.index import _SIZE_CUTOFF
 
-        ns = _SIZE_CUTOFF + np.array([-100, 100],dtype=np.int64)
+        ns = _SIZE_CUTOFF + np.array([-100, 100], dtype=np.int64)
         key = time(15, 11, 30)
         start = key.hour * 3600 + key.minute * 60 + key.second
         step = 24 * 3600
@@ -4641,7 +4808,6 @@ def test_union(self):
     def test_nat(self):
         self.assertIs(DatetimeIndex([np.nan])[0], pd.NaT)
 
-
     def test_ufunc_coercions(self):
         idx = date_range('2011-01-01', periods=3, freq='2D', name='x')
@@ -4677,34 +4843,47 @@ def test_ufunc_coercions(self):
 
     def test_fillna_datetime64(self):
         # GH 11343
         for tz in ['US/Eastern', 'Asia/Tokyo']:
-            idx = pd.DatetimeIndex(['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'])
+            idx = pd.DatetimeIndex(['2011-01-01 09:00', pd.NaT,
+                                    '2011-01-01 11:00'])
 
-            exp = pd.DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'])
-            self.assert_index_equal(idx.fillna(pd.Timestamp('2011-01-01 10:00')), exp)
+            exp = pd.DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00',
+                                    '2011-01-01 11:00'])
+            self.assert_index_equal(
+                idx.fillna(pd.Timestamp('2011-01-01 10:00')), exp)
 
             # tz mismatch
-            exp = pd.Index([pd.Timestamp('2011-01-01 09:00'), pd.Timestamp('2011-01-01 10:00', tz=tz),
+            exp = pd.Index([pd.Timestamp('2011-01-01 09:00'),
+                            pd.Timestamp('2011-01-01 10:00', tz=tz),
                             pd.Timestamp('2011-01-01 11:00')], dtype=object)
-            self.assert_index_equal(idx.fillna(pd.Timestamp('2011-01-01 10:00', tz=tz)), exp)
+            self.assert_index_equal(
+                idx.fillna(pd.Timestamp('2011-01-01 10:00', tz=tz)), exp)
 
             # object
             exp = pd.Index([pd.Timestamp('2011-01-01 09:00'), 'x',
                             pd.Timestamp('2011-01-01 11:00')], dtype=object)
             self.assert_index_equal(idx.fillna('x'), exp)
 
+            idx = pd.DatetimeIndex(
+                ['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'], tz=tz)
-            idx = pd.DatetimeIndex(['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'], tz=tz)
-
-            exp = pd.DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'], tz=tz)
-            self.assert_index_equal(idx.fillna(pd.Timestamp('2011-01-01 10:00', tz=tz)), exp)
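The `fillna` behavior pinned down by `test_fillna_datetime64` (GH 11343) survives in current pandas for the compatible-value case, and can be sketched standalone. The values are taken from the test; note the tz-mismatch and string-fill branches shown above degraded to an object-dtype `Index` in the pandas of this PR's era, which later versions may handle differently, so this sketch sticks to the compatible fill.

```python
import pandas as pd

# A naive DatetimeIndex with one missing slot.
idx = pd.DatetimeIndex(['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'])

# Filling with a dtype-compatible Timestamp keeps the DatetimeIndex dtype
# and replaces only the NaT position.
filled = idx.fillna(pd.Timestamp('2011-01-01 10:00'))
assert isinstance(filled, pd.DatetimeIndex)
assert filled[1] == pd.Timestamp('2011-01-01 10:00')
assert not filled.isna().any()
```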
+            exp = pd.DatetimeIndex(
+                ['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'
+                 ], tz=tz)
+            self.assert_index_equal(
+                idx.fillna(pd.Timestamp('2011-01-01 10:00', tz=tz)), exp)
 
-            exp = pd.Index([pd.Timestamp('2011-01-01 09:00', tz=tz), pd.Timestamp('2011-01-01 10:00'),
-                            pd.Timestamp('2011-01-01 11:00', tz=tz)], dtype=object)
-            self.assert_index_equal(idx.fillna(pd.Timestamp('2011-01-01 10:00')), exp)
+            exp = pd.Index([pd.Timestamp('2011-01-01 09:00', tz=tz),
+                            pd.Timestamp('2011-01-01 10:00'),
+                            pd.Timestamp('2011-01-01 11:00', tz=tz)],
+                           dtype=object)
+            self.assert_index_equal(
+                idx.fillna(pd.Timestamp('2011-01-01 10:00')), exp)
 
             # object
-            exp = pd.Index([pd.Timestamp('2011-01-01 09:00', tz=tz), 'x',
-                            pd.Timestamp('2011-01-01 11:00', tz=tz)], dtype=object)
+            exp = pd.Index([pd.Timestamp('2011-01-01 09:00', tz=tz),
+                            'x',
+                            pd.Timestamp('2011-01-01 11:00', tz=tz)],
+                           dtype=object)
             self.assert_index_equal(idx.fillna('x'), exp)
 
@@ -4713,7 +4892,7 @@ class TestPeriodIndex(DatetimeLike, tm.TestCase):
     _multiprocess_can_split_ = True
 
     def setUp(self):
-        self.indices = dict(index = tm.makePeriodIndex(10))
+        self.indices = dict(index=tm.makePeriodIndex(10))
         self.setup_indices()
 
     def create_index(self):
@@ -4725,8 +4904,8 @@ def test_shift(self):
         # GH8083
         drange = self.create_index()
         result = drange.shift(1)
-        expected = PeriodIndex(['2013-01-02', '2013-01-03', '2013-01-04', '2013-01-05',
-                                '2013-01-06'], freq='D')
+        expected = PeriodIndex(['2013-01-02', '2013-01-03', '2013-01-04',
+                                '2013-01-05', '2013-01-06'], freq='D')
         self.assert_index_equal(result, expected)
 
     def test_pickle_compat_construction(self):
@@ -4737,9 +4916,11 @@ def test_get_loc(self):
         for method in [None, 'pad', 'backfill', 'nearest']:
             self.assertEqual(idx.get_loc(idx[1], method), 1)
-            self.assertEqual(idx.get_loc(idx[1].asfreq('H', how='start'), method), 1)
+            self.assertEqual(
+                idx.get_loc(idx[1].asfreq('H', how='start'), method), 1)
             self.assertEqual(idx.get_loc(idx[1].to_timestamp(), method), 1)
-            self.assertEqual(idx.get_loc(idx[1].to_timestamp().to_pydatetime(), method), 1)
+            self.assertEqual(
+                idx.get_loc(idx[1].to_timestamp().to_pydatetime(), method), 1)
             self.assertEqual(idx.get_loc(str(idx[1]), method), 1)
 
         idx = pd.period_range('2000-01-01', periods=5)[::2]
@@ -4767,8 +4948,10 @@ def test_get_indexer(self):
         target = pd.PeriodIndex(['1999-12-31T23', '2000-01-01T12',
                                  '2000-01-02T01'], freq='H')
         tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1])
-        tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'), [0, 1, 2])
-        tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'), [0, 1, 1])
+        tm.assert_numpy_array_equal(
+            idx.get_indexer(target, 'backfill'), [0, 1, 2])
+        tm.assert_numpy_array_equal(
+            idx.get_indexer(target, 'nearest'), [0, 1, 1])
         tm.assert_numpy_array_equal(
             idx.get_indexer(target, 'nearest', tolerance='1 hour'),
             [0, -1, 1])
@@ -4790,9 +4973,9 @@ def test_repeat(self):
 
     def test_period_index_indexer(self):
-        #GH4125
-        idx = pd.period_range('2002-01','2003-12', freq='M')
-        df = pd.DataFrame(pd.np.random.randn(24,10), index=idx)
+        # GH4125
+        idx = pd.period_range('2002-01', '2003-12', freq='M')
+        df = pd.DataFrame(pd.np.random.randn(24, 10), index=idx)
         self.assert_frame_equal(df, df.ix[idx])
         self.assert_frame_equal(df, df.ix[list(idx)])
         self.assert_frame_equal(df, df.loc[list(idx)])
@@ -4801,16 +4984,22 @@ def test_fillna_period(self):
 
         # GH 11343
-        idx = pd.PeriodIndex(['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'], freq='H')
+        idx = pd.PeriodIndex(
+            ['2011-01-01 09:00', pd.NaT, '2011-01-01 11:00'], freq='H')
 
-        exp = pd.PeriodIndex(['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'], freq='H')
-        self.assert_index_equal(idx.fillna(pd.Period('2011-01-01 10:00', freq='H')), exp)
+        exp = pd.PeriodIndex(
+            ['2011-01-01 09:00', '2011-01-01 10:00', '2011-01-01 11:00'
+             ], freq='H')
+        self.assert_index_equal(
+            idx.fillna(pd.Period('2011-01-01 10:00', freq='H')), exp)
 
         exp = pd.Index([pd.Period('2011-01-01 09:00', freq='H'), 'x',
                         pd.Period('2011-01-01 11:00', freq='H')], dtype=object)
         self.assert_index_equal(idx.fillna('x'), exp)
 
-        with tm.assertRaisesRegexp(ValueError, 'Input has different freq=D from PeriodIndex\\(freq=H\\)'):
+        with tm.assertRaisesRegexp(
+                ValueError,
+                'Input has different freq=D from PeriodIndex\\(freq=H\\)'):
             idx.fillna(pd.Period('2011-01-01', freq='D'))
 
     def test_no_millisecond_field(self):
@@ -4820,12 +5009,13 @@ def test_no_millisecond_field(self):
         with self.assertRaises(AttributeError):
             DatetimeIndex([]).millisecond
 
+
 class TestTimedeltaIndex(DatetimeLike, tm.TestCase):
     _holder = TimedeltaIndex
     _multiprocess_can_split_ = True
 
     def setUp(self):
-        self.indices = dict(index = tm.makeTimedeltaIndex(10))
+        self.indices = dict(index=tm.makeTimedeltaIndex(10))
         self.setup_indices()
 
     def create_index(self):
@@ -4837,13 +5027,16 @@ def test_shift(self):
         drange = self.create_index()
         result = drange.shift(1)
-        expected = TimedeltaIndex(['1 days 01:00:00', '2 days 01:00:00', '3 days 01:00:00',
-                                   '4 days 01:00:00', '5 days 01:00:00'],freq='D')
+        expected = TimedeltaIndex(['1 days 01:00:00', '2 days 01:00:00',
+                                   '3 days 01:00:00',
+                                   '4 days 01:00:00', '5 days 01:00:00'],
+                                  freq='D')
         self.assert_index_equal(result, expected)
 
         result = drange.shift(3, freq='2D 1s')
-        expected = TimedeltaIndex(['6 days 01:00:03', '7 days 01:00:03', '8 days 01:00:03',
-                                   '9 days 01:00:03', '10 days 01:00:03'],freq='D')
+        expected = TimedeltaIndex(['6 days 01:00:03', '7 days 01:00:03',
+                                   '8 days 01:00:03', '9 days 01:00:03',
+                                   '10 days 01:00:03'], freq='D')
         self.assert_index_equal(result, expected)
 
     def test_get_loc(self):
@@ -4854,8 +5047,10 @@ def test_get_loc(self):
             self.assertEqual(idx.get_loc(idx[1].to_pytimedelta(), method), 1)
             self.assertEqual(idx.get_loc(str(idx[1]), method), 1)
 
-        self.assertEqual(idx.get_loc(idx[1], 'pad', tolerance=pd.Timedelta(0)), 1)
+        self.assertEqual(
+            idx.get_loc(idx[1], 'pad', tolerance=pd.Timedelta(0)), 1)
        self.assertEqual(idx.get_loc(idx[1], 'pad', tolerance=np.timedelta64(0,
's')), 1) + self.assertEqual( + idx.get_loc(idx[1], 'pad', tolerance=pd.Timedelta(0)), 1) + self.assertEqual( + idx.get_loc(idx[1], 'pad', tolerance=np.timedelta64(0, 's')), 1) self.assertEqual(idx.get_loc(idx[1], 'pad', tolerance=timedelta(0)), 1) with tm.assertRaisesRegexp(ValueError, 'must be convertible'): @@ -4870,8 +5065,10 @@ def test_get_indexer(self): target = pd.to_timedelta(['-1 hour', '12 hours', '1 day 1 hour']) tm.assert_numpy_array_equal(idx.get_indexer(target, 'pad'), [-1, 0, 1]) - tm.assert_numpy_array_equal(idx.get_indexer(target, 'backfill'), [0, 1, 2]) - tm.assert_numpy_array_equal(idx.get_indexer(target, 'nearest'), [0, 1, 1]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'backfill'), [0, 1, 2]) + tm.assert_numpy_array_equal( + idx.get_indexer(target, 'nearest'), [0, 1, 1]) tm.assert_numpy_array_equal( idx.get_indexer(target, 'nearest', tolerance=pd.Timedelta('1 hour')), @@ -4879,9 +5076,8 @@ def test_get_indexer(self): def test_numeric_compat(self): - idx = self._holder(np.arange(5,dtype='int64')) - didx = self._holder(np.arange(5,dtype='int64')**2 - ) + idx = self._holder(np.arange(5, dtype='int64')) + didx = self._holder(np.arange(5, dtype='int64') ** 2) result = idx * 1 tm.assert_index_equal(result, idx) @@ -4905,9 +5101,8 @@ def test_numeric_compat(self): tm.assert_index_equal(result, didx) result = idx * Series(np.arange(5, dtype='float64') + 0.1) - tm.assert_index_equal(result, - self._holder(np.arange(5, dtype='float64') * ( - np.arange(5, dtype='float64') + 0.1))) + tm.assert_index_equal(result, self._holder(np.arange( + 5, dtype='float64') * (np.arange(5, dtype='float64') + 0.1))) # invalid self.assertRaises(TypeError, lambda: idx * idx) @@ -4920,7 +5115,7 @@ def test_pickle_compat_construction(self): def test_ufunc_coercions(self): # normal ops are also tested in tseries/test_timedeltas.py idx = TimedeltaIndex(['2H', '4H', '6H', '8H', '10H'], - freq='2H', name='x') + freq='2H', name='x') for result in [idx * 2, 
np.multiply(idx, 2)]: tm.assertIsInstance(result, TimedeltaIndex) @@ -4938,7 +5133,7 @@ def test_ufunc_coercions(self): idx = TimedeltaIndex(['2H', '4H', '6H', '8H', '10H'], freq='2H', name='x') - for result in [ - idx, np.negative(idx)]: + for result in [-idx, np.negative(idx)]: tm.assertIsInstance(result, TimedeltaIndex) exp = TimedeltaIndex(['-2H', '-4H', '-6H', '-8H', '-10H'], freq='-2H', name='x') @@ -4947,7 +5142,7 @@ def test_ufunc_coercions(self): idx = TimedeltaIndex(['-2H', '-1H', '0H', '1H', '2H'], freq='H', name='x') - for result in [ abs(idx), np.absolute(idx)]: + for result in [abs(idx), np.absolute(idx)]: tm.assertIsInstance(result, TimedeltaIndex) exp = TimedeltaIndex(['2H', '1H', '0H', '1H', '2H'], freq=None, name='x') @@ -4964,7 +5159,8 @@ def test_fillna_timedelta(self): exp = pd.TimedeltaIndex(['1 day', '3 hour', '3 day']) idx.fillna(pd.Timedelta('3 hour')) - exp = pd.Index([pd.Timedelta('1 day'), 'x', pd.Timedelta('3 day')], dtype=object) + exp = pd.Index( + [pd.Timedelta('1 day'), 'x', pd.Timedelta('3 day')], dtype=object) self.assert_index_equal(idx.fillna('x'), exp) @@ -4980,9 +5176,10 @@ def setUp(self): major_labels = np.array([0, 0, 1, 2, 3, 3]) minor_labels = np.array([0, 1, 0, 1, 0, 1]) self.index_names = ['first', 'second'] - self.indices = dict(index = MultiIndex(levels=[major_axis, minor_axis], - labels=[major_labels, minor_labels], - names=self.index_names, verify_integrity=False)) + self.indices = dict(index=MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels + ], names=self.index_names, + verify_integrity=False)) self.setup_indices() def create_index(self): @@ -4999,7 +5196,8 @@ def test_boolean_context_compat2(self): def f(): if common: pass - tm.assertRaisesRegexp(ValueError,'The truth value of a',f) + + tm.assertRaisesRegexp(ValueError, 'The truth value of a', f) def test_labels_dtypes(self): @@ -5008,16 +5206,16 @@ def test_labels_dtypes(self): self.assertTrue(i.labels[0].dtype == 'int8') 
self.assertTrue(i.labels[1].dtype == 'int8') - i = MultiIndex.from_product([['a'],range(40)]) + i = MultiIndex.from_product([['a'], range(40)]) self.assertTrue(i.labels[1].dtype == 'int8') - i = MultiIndex.from_product([['a'],range(400)]) + i = MultiIndex.from_product([['a'], range(400)]) self.assertTrue(i.labels[1].dtype == 'int16') - i = MultiIndex.from_product([['a'],range(40000)]) + i = MultiIndex.from_product([['a'], range(40000)]) self.assertTrue(i.labels[1].dtype == 'int32') - i = pd.MultiIndex.from_product([['a'],range(1000)]) - self.assertTrue((i.labels[0]>=0).all()) - self.assertTrue((i.labels[1]>=0).all()) + i = pd.MultiIndex.from_product([['a'], range(1000)]) + self.assertTrue((i.labels[0] >= 0).all()) + self.assertTrue((i.labels[1] >= 0).all()) def test_set_name_methods(self): # so long as these are synonyms, we don't need to test set_names @@ -5051,12 +5249,10 @@ def test_set_name_methods(self): self.assertIsNone(res) self.assertEqual(ind.names, new_names2) - def test_set_levels(self): - # side note - you probably wouldn't want to use levels and labels # directly like this - but it is possible. - levels, labels = self.index.levels, self.index.labels + levels = self.index.levels new_levels = [[lev + 'a' for lev in level] for level in levels] def assert_matching(actual, expected): @@ -5108,7 +5304,8 @@ def assert_matching(actual, expected): # level changing multiple levels [w/ mutation] ind2 = self.index.copy() - inplace_return = ind2.set_levels(new_levels, level=[0, 1], inplace=True) + inplace_return = ind2.set_levels(new_levels, level=[0, 1], + inplace=True) self.assertIsNone(inplace_return) assert_matching(ind2.levels, new_levels) assert_matching(self.index.levels, levels) @@ -5116,7 +5313,7 @@ def assert_matching(actual, expected): def test_set_labels(self): # side note - you probably wouldn't want to use levels and labels # directly like this - but it is possible. 
- levels, labels = self.index.levels, self.index.labels + labels = self.index.labels major_labels, minor_labels = labels major_labels = [(x + 1) % 3 for x in major_labels] minor_labels = [(x + 1) % 1 for x in minor_labels] @@ -5171,7 +5368,8 @@ def assert_matching(actual, expected): # label changing multiple levels [w/ mutation] ind2 = self.index.copy() - inplace_return = ind2.set_labels(new_labels, level=[0, 1], inplace=True) + inplace_return = ind2.set_labels(new_labels, level=[0, 1], + inplace=True) self.assertIsNone(inplace_return) assert_matching(ind2.labels, new_labels) assert_matching(self.index.labels, labels) @@ -5295,9 +5493,7 @@ def test_set_value_keeps_names(self): # motivating example from #3742 lev1 = ['hans', 'hans', 'hans', 'grethe', 'grethe', 'grethe'] lev2 = ['1', '2', '3'] * 2 - idx = pd.MultiIndex.from_arrays( - [lev1, lev2], - names=['Name', 'Number']) + idx = pd.MultiIndex.from_arrays([lev1, lev2], names=['Name', 'Number']) df = pd.DataFrame( np.random.randn(6, 4), columns=['one', 'two', 'three', 'four'], @@ -5342,10 +5538,12 @@ def test_names(self): self.assertEqual(ind_names, level_names) def test_reference_duplicate_name(self): - idx = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')], names=['x', 'x']) + idx = MultiIndex.from_tuples( + [('a', 'b'), ('c', 'd')], names=['x', 'x']) self.assertTrue(idx._reference_duplicate_name('x')) - idx = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')], names=['x', 'y']) + idx = MultiIndex.from_tuples( + [('a', 'b'), ('c', 'd')], names=['x', 'y']) self.assertFalse(idx._reference_duplicate_name('x')) def test_astype(self): @@ -5360,8 +5558,7 @@ def test_astype(self): def test_constructor_single_level(self): single_level = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']], - labels=[[0, 1, 2, 3]], - names=['first']) + labels=[[0, 1, 2, 3]], names=['first']) tm.assertIsInstance(single_level, Index) self.assertNotIsInstance(single_level, MultiIndex) self.assertEqual(single_level.name, 'first') @@ -5390,7 +5587,8 
@@ def test_constructor_mismatched_label_levels(self): # important to check that it's looking at the right thing. with tm.assertRaisesRegexp(ValueError, length_error): - MultiIndex(levels=[['a'], ['b']], labels=[[0, 1, 2, 3], [0, 3, 4, 1]]) + MultiIndex(levels=[['a'], ['b']], + labels=[[0, 1, 2, 3], [0, 3, 4, 1]]) with tm.assertRaisesRegexp(ValueError, label_error): MultiIndex(levels=[['a'], ['b']], labels=[[0, 0, 0, 0], [0, 0]]) @@ -5412,7 +5610,6 @@ def test_constructor_mismatched_label_levels(self): with tm.assertRaisesRegexp(ValueError, label_error): self.index.copy().labels = [[0, 0, 0, 0], [0, 0]] - def assert_multiindex_copied(self, copy, original): # levels shoudl be (at least, shallow copied) assert_copy(copy.levels, original.levels) @@ -5494,9 +5691,11 @@ def test_from_arrays(self): self.assertEqual(list(result), list(self.index)) # infer correctly - result = MultiIndex.from_arrays([[pd.NaT, Timestamp('20130101')], ['a', 'b']]) - self.assertTrue(result.levels[0].equals(Index([Timestamp('20130101')]))) - self.assertTrue(result.levels[1].equals(Index(['a','b']))) + result = MultiIndex.from_arrays([[pd.NaT, Timestamp('20130101')], + ['a', 'b']]) + self.assertTrue(result.levels[0].equals(Index([Timestamp('20130101') + ]))) + self.assertTrue(result.levels[1].equals(Index(['a', 'b']))) def test_from_product(self): @@ -5505,9 +5704,9 @@ def test_from_product(self): names = ['first', 'second'] result = MultiIndex.from_product([first, second], names=names) - tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), - ('bar', 'a'), ('bar', 'b'), ('bar', 'c'), - ('buz', 'a'), ('buz', 'b'), ('buz', 'c')] + tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), ('bar', 'a'), + ('bar', 'b'), ('bar', 'c'), ('buz', 'a'), ('buz', 'b'), + ('buz', 'c')] expected = MultiIndex.from_tuples(tuples, names=names) tm.assert_numpy_array_equal(result, expected) @@ -5516,21 +5715,20 @@ def test_from_product(self): def test_from_product_datetimeindex(self): dt_index = date_range('2000-01-01', 
periods=2) mi = pd.MultiIndex.from_product([[1, 2], dt_index]) - etalon = pd.lib.list_to_object_array([(1, pd.Timestamp('2000-01-01')), - (1, pd.Timestamp('2000-01-02')), - (2, pd.Timestamp('2000-01-01')), - (2, pd.Timestamp('2000-01-02'))]) + etalon = pd.lib.list_to_object_array([(1, pd.Timestamp( + '2000-01-01')), (1, pd.Timestamp('2000-01-02')), (2, pd.Timestamp( + '2000-01-01')), (2, pd.Timestamp('2000-01-02'))]) tm.assert_numpy_array_equal(mi.values, etalon) def test_values_boxed(self): - tuples = [(1, pd.Timestamp('2000-01-01')), - (2, pd.NaT), + tuples = [(1, pd.Timestamp('2000-01-01')), (2, pd.NaT), (3, pd.Timestamp('2000-01-03')), (1, pd.Timestamp('2000-01-04')), (2, pd.Timestamp('2000-01-02')), (3, pd.Timestamp('2000-01-03'))] mi = pd.MultiIndex.from_tuples(tuples) - tm.assert_numpy_array_equal(mi.values, pd.lib.list_to_object_array(tuples)) + tm.assert_numpy_array_equal(mi.values, + pd.lib.list_to_object_array(tuples)) # Check that code branches for boxed values produce identical results tm.assert_numpy_array_equal(mi.values[:4], mi[:4].values) @@ -5558,13 +5756,12 @@ def test_get_level_values(self): tm.assert_numpy_array_equal(result, expected) # GH 10460 - index = MultiIndex(levels=[CategoricalIndex(['A', 'B']), - CategoricalIndex([1, 2, 3])], - labels=[np.array([0, 0, 0, 1, 1, 1]), - np.array([0, 1, 2, 0, 1, 2])]) + index = MultiIndex(levels=[CategoricalIndex( + ['A', 'B']), CategoricalIndex([1, 2, 3])], labels=[np.array( + [0, 0, 0, 1, 1, 1]), np.array([0, 1, 2, 0, 1, 2])]) exp = CategoricalIndex(['A', 'A', 'A', 'B', 'B', 'B']) self.assert_index_equal(index.get_level_values(0), exp) - exp = CategoricalIndex([1, 2 ,3, 1, 2, 3]) + exp = CategoricalIndex([1, 2, 3, 1, 2, 3]) self.assert_index_equal(index.get_level_values(1), exp) def test_get_level_values_na(self): @@ -5586,7 +5783,7 @@ def test_get_level_values_na(self): expected = [np.nan, np.nan, np.nan] tm.assert_numpy_array_equal(values.values.astype(float), expected) values = 
index.get_level_values(1) - expected = np.array(['a', np.nan, 1],dtype=object) + expected = np.array(['a', np.nan, 1], dtype=object) tm.assert_numpy_array_equal(values.values, expected) arrays = [['a', 'b', 'b'], pd.DatetimeIndex([0, 1, pd.NaT])] @@ -5598,7 +5795,7 @@ def test_get_level_values_na(self): arrays = [[], []] index = pd.MultiIndex.from_arrays(arrays) values = index.get_level_values(0) - self.assertEqual(values.shape, (0,)) + self.assertEqual(values.shape, (0, )) def test_reorder_levels(self): # this blows up @@ -5658,7 +5855,10 @@ def test_roundtrip_pickle_with_tz(self): # GH 8367 # round-trip of timezone - index=MultiIndex.from_product([[1,2],['a','b'],date_range('20130101',periods=3,tz='US/Eastern')],names=['one','two','three']) + index = MultiIndex.from_product( + [[1, 2], ['a', 'b'], date_range('20130101', periods=3, + tz='US/Eastern') + ], names=['one', 'two', 'three']) unpickled = self.round_trip_pickle(index) self.assertTrue(index.equal_levels(unpickled)) @@ -5709,12 +5909,9 @@ def test_get_loc(self): method='nearest') # 3 levels - index = MultiIndex(levels=[Index(lrange(4)), - Index(lrange(4)), - Index(lrange(4))], - labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), - np.array([0, 1, 0, 0, 0, 1, 0, 1]), - np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) self.assertRaises(KeyError, index.get_loc, (1, 1)) self.assertEqual(index.get_loc((2, 0)), slice(3, 5)) @@ -5728,15 +5925,12 @@ def test_get_loc_duplicates(self): index = Index(['c', 'a', 'a', 'b', 'b']) rs = index.get_loc('c') xp = 0 - assert(rs == xp) + assert (rs == xp) def test_get_loc_level(self): - index = MultiIndex(levels=[Index(lrange(4)), - Index(lrange(4)), - Index(lrange(4))], - labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), - np.array([0, 1, 0, 0, 0, 1, 0, 1]), - np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + 
index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) loc, new_index = index.get_loc_level((0, 1)) expected = slice(1, 2) @@ -5751,9 +5945,8 @@ def test_get_loc_level(self): self.assertRaises(KeyError, index.get_loc_level, (2, 2)) - index = MultiIndex(levels=[[2000], lrange(4)], - labels=[np.array([0, 0, 0, 0]), - np.array([0, 1, 2, 3])]) + index = MultiIndex(levels=[[2000], lrange(4)], labels=[np.array( + [0, 0, 0, 0]), np.array([0, 1, 2, 3])]) result, new_index = index.get_loc_level((2000, slice(None, None))) expected = slice(None, None) self.assertEqual(result, expected) @@ -5793,12 +5986,9 @@ def test_slice_locs_with_type_mismatch(self): idx.slice_locs(df.index[1], (16, "a")) def test_slice_locs_not_sorted(self): - index = MultiIndex(levels=[Index(lrange(4)), - Index(lrange(4)), - Index(lrange(4))], - labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), - np.array([0, 1, 0, 0, 0, 1, 0, 1]), - np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) assertRaisesRegexp(KeyError, "[Kk]ey length.*greater than MultiIndex" " lexsort depth", index.slice_locs, (1, 0, 1), @@ -5829,8 +6019,7 @@ def test_slice_locs_not_contained(self): index = MultiIndex(levels=[[0, 2, 4, 6], [0, 2, 4]], labels=[[0, 0, 0, 1, 1, 2, 3, 3, 3], - [0, 1, 2, 1, 2, 2, 0, 1, 2]], - sortorder=0) + [0, 1, 2, 1, 2, 2, 0, 1, 2]], sortorder=0) result = index.slice_locs((1, 0), (5, 2)) self.assertEqual(result, (3, 6)) @@ -5941,8 +6130,8 @@ def test_get_indexer(self): idx1 = Index(lrange(10) + lrange(10)) idx2 = Index(lrange(20)) assertRaisesRegexp(InvalidIndexError, "Reindexing only valid with" - " uniquely valued Index objects", - idx1.get_indexer, idx2) + " uniquely valued 
Index objects", idx1.get_indexer, + idx2) def test_get_indexer_nearest(self): midx = MultiIndex.from_tuples([('a', 1), ('b', 2)]) @@ -5957,24 +6146,20 @@ def test_format(self): def test_format_integer_names(self): index = MultiIndex(levels=[[0, 1], [0, 1]], - labels=[[0, 0, 1, 1], [0, 1, 0, 1]], - names=[0, 1]) + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], names=[0, 1]) index.format(names=True) def test_format_sparse_display(self): index = MultiIndex(levels=[[0, 1], [0, 1], [0, 1], [0]], - labels=[[0, 0, 0, 1, 1, 1], - [0, 0, 1, 0, 0, 1], - [0, 1, 0, 0, 1, 0], - [0, 0, 0, 0, 0, 0]]) + labels=[[0, 0, 0, 1, 1, 1], [0, 0, 1, 0, 0, 1], + [0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0]]) result = index.format() self.assertEqual(result[3], '1 0 0 0') def test_format_sparse_config(self): warn_filters = warnings.filters - warnings.filterwarnings('ignore', - category=FutureWarning, + warnings.filterwarnings('ignore', category=FutureWarning, module=".*format") # GH1538 pd.set_option('display.multi_sparse', False) @@ -5987,8 +6172,8 @@ def test_format_sparse_config(self): warnings.filters = warn_filters def test_to_hierarchical(self): - index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), - (2, 'one'), (2, 'two')]) + index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), ( + 2, 'two')]) result = index.to_hierarchical(3) expected = MultiIndex(levels=[[1, 2], ['one', 'two']], labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1], @@ -6010,8 +6195,10 @@ def test_to_hierarchical(self): names=['N1', 'N2']) result = index.to_hierarchical(2) - expected = MultiIndex.from_tuples([(2, 'c'), (2, 'c'), (1, 'b'), (1, 'b'), - (2, 'a'), (2, 'a'), (2, 'b'), (2, 'b')], + expected = MultiIndex.from_tuples([(2, 'c'), (2, 'c'), (1, 'b'), + (1, 'b'), + (2, 'a'), (2, 'a'), + (2, 'b'), (2, 'b')], names=['N1', 'N2']) tm.assert_index_equal(result, expected) self.assertEqual(result.names, index.names) @@ -6028,15 +6215,11 @@ def test_equals(self): self.assertTrue(self.index.equals(self.index._tuple_index)) # 
different number of levels - index = MultiIndex(levels=[Index(lrange(4)), - Index(lrange(4)), - Index(lrange(4))], - labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), - np.array([0, 1, 0, 0, 0, 1, 0, 1]), - np.array([1, 0, 1, 1, 0, 0, 1, 0])]) - - index2 = MultiIndex(levels=index.levels[:-1], - labels=index.labels[:-1]) + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + + index2 = MultiIndex(levels=index.levels[:-1], labels=index.labels[:-1]) self.assertFalse(index.equals(index2)) self.assertFalse(index.equal_levels(index2)) @@ -6132,16 +6315,16 @@ def test_union(self): # result = self.index[:4] | tuples[4:] # self.assertTrue(result.equals(tuples)) - # not valid for python 3 - # def test_union_with_regular_index(self): - # other = Index(['A', 'B', 'C']) + # not valid for python 3 + # def test_union_with_regular_index(self): + # other = Index(['A', 'B', 'C']) - # result = other.union(self.index) - # self.assertIn(('foo', 'one'), result) - # self.assertIn('B', result) + # result = other.union(self.index) + # self.assertIn(('foo', 'one'), result) + # self.assertIn('B', result) - # result2 = self.index.union(other) - # self.assertTrue(result.equals(result2)) + # result2 = self.index.union(other) + # self.assertTrue(result.equals(result2)) def test_intersection(self): piece1 = self.index[:5][::-1] @@ -6179,7 +6362,7 @@ def test_difference(self): with tm.assert_produces_warning(): self.index[-3:] - first.tolist() - self.assertRaises(TypeError, lambda : first.tolist() - self.index[-3:]) + self.assertRaises(TypeError, lambda: first.tolist() - self.index[-3:]) expected = MultiIndex.from_tuples(sorted(self.index[:-3].values), sortorder=0, @@ -6228,9 +6411,8 @@ def test_difference(self): # name from non-empty array result = first.difference([('foo', 'one')]) - expected = pd.MultiIndex.from_tuples([('bar', 'one'), ('baz', 
'two'), - ('foo', 'two'), ('qux', 'one'), - ('qux', 'two')]) + expected = pd.MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'), ( + 'foo', 'two'), ('qux', 'one'), ('qux', 'two')]) expected.names = first.names self.assertEqual(first.names, result.names) assertRaisesRegexp(TypeError, "other must be a MultiIndex or a list" @@ -6357,13 +6539,10 @@ def test_droplevel_with_names(self): dropped = index.droplevel(0) self.assertEqual(dropped.name, 'second') - index = MultiIndex(levels=[Index(lrange(4)), - Index(lrange(4)), - Index(lrange(4))], - labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), - np.array([0, 1, 0, 0, 0, 1, 0, 1]), - np.array([1, 0, 1, 1, 0, 0, 1, 0])], - names=['one', 'two', 'three']) + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])], + names=['one', 'two', 'three']) dropped = index.droplevel(0) self.assertEqual(dropped.names, ('two', 'three')) @@ -6372,13 +6551,10 @@ def test_droplevel_with_names(self): self.assertTrue(dropped.equals(expected)) def test_droplevel_multiple(self): - index = MultiIndex(levels=[Index(lrange(4)), - Index(lrange(4)), - Index(lrange(4))], - labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), - np.array([0, 1, 0, 0, 0, 1, 0, 1]), - np.array([1, 0, 1, 1, 0, 0, 1, 0])], - names=['one', 'two', 'three']) + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])], + names=['one', 'two', 'three']) dropped = index[:2].droplevel(['three', 'one']) expected = index[:2].droplevel(2).droplevel(0) @@ -6393,14 +6569,14 @@ def test_insert(self): # key not contained in all levels new_index = self.index.insert(0, ('abc', 'three')) tm.assert_numpy_array_equal(new_index.levels[0], - list(self.index.levels[0]) + ['abc']) + list(self.index.levels[0]) + ['abc']) 
tm.assert_numpy_array_equal(new_index.levels[1], - list(self.index.levels[1]) + ['three']) + list(self.index.levels[1]) + ['three']) self.assertEqual(new_index[0], ('abc', 'three')) # key wrong length assertRaisesRegexp(ValueError, "Item must have length equal to number" - " of levels", self.index.insert, 0, ('foo2',)) + " of levels", self.index.insert, 0, ('foo2', )) left = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1]], columns=['1st', '2nd', '3rd']) @@ -6421,15 +6597,15 @@ def test_insert(self): ts.loc[('a', 'w')] = 5 ts.loc['a', 'a'] = 6 - right = pd.DataFrame([['a', 'b', 0], - ['b', 'd', 1], - ['b', 'x', 2], + right = pd.DataFrame([['a', 'b', 0], + ['b', 'd', 1], + ['b', 'x', 2], ['b', 'a', -1], - ['b', 'b', 3], - ['a', 'x', 4], - ['a', 'w', 5], - ['a', 'a', 6]], - columns=['1st', '2nd', '3rd']) + ['b', 'b', 3], + ['a', 'x', 4], + ['a', 'w', 5], + ['a', 'a', 6]], + columns=['1st', '2nd', '3rd']) right.set_index(['1st', '2nd'], inplace=True) # FIXME data types changes to float because # of intermediate nan insertion; @@ -6438,8 +6614,8 @@ def test_insert(self): # GH9250 idx = [('test1', i) for i in range(5)] + \ - [('test2', i) for i in range(6)] + \ - [('test', 17), ('test', 18)] + [('test2', i) for i in range(6)] + \ + [('test', 17), ('test', 18)] left = pd.Series(np.linspace(0, 10, 11), pd.MultiIndex.from_tuples(idx[:-2])) @@ -6509,12 +6685,14 @@ def test_join_self(self): def test_join_multi(self): # GH 10665 - midx = pd.MultiIndex.from_product([np.arange(4), np.arange(4)], names=['a', 'b']) + midx = pd.MultiIndex.from_product( + [np.arange(4), np.arange(4)], names=['a', 'b']) idx = pd.Index([1, 2, 5], name='b') # inner jidx, lidx, ridx = midx.join(idx, how='inner', return_indexers=True) - exp_idx = pd.MultiIndex.from_product([np.arange(4), [1, 2]], names=['a', 'b']) + exp_idx = pd.MultiIndex.from_product( + [np.arange(4), [1, 2]], names=['a', 'b']) exp_lidx = np.array([1, 2, 5, 6, 9, 10, 13, 14]) exp_ridx = np.array([0, 1, 0, 1, 0, 1, 0, 1]) 
self.assert_index_equal(jidx, exp_idx) @@ -6528,7 +6706,8 @@ def test_join_multi(self): # keep MultiIndex jidx, lidx, ridx = midx.join(idx, how='left', return_indexers=True) - exp_ridx = np.array([-1, 0, 1, -1, -1, 0, 1, -1, -1, 0, 1, -1, -1, 0, 1, -1]) + exp_ridx = np.array([-1, 0, 1, -1, -1, 0, 1, -1, -1, 0, 1, -1, -1, 0, + 1, -1]) self.assert_index_equal(jidx, midx) self.assertIsNone(lidx) self.assert_numpy_array_equal(ridx, exp_ridx) @@ -6569,16 +6748,15 @@ def test_reindex_level(self): self.index.reindex, self.index, method='pad', level='second') - assertRaisesRegexp(TypeError, "Fill method not supported", - idx.reindex, idx, method='bfill', level='first') + assertRaisesRegexp(TypeError, "Fill method not supported", idx.reindex, + idx, method='bfill', level='first') def test_duplicates(self): self.assertFalse(self.index.has_duplicates) self.assertTrue(self.index.append(self.index).has_duplicates) - index = MultiIndex(levels=[[0, 1], [0, 1, 2]], - labels=[[0, 0, 0, 0, 1, 1, 1], - [0, 1, 2, 0, 0, 1, 2]]) + index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[ + [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]]) self.assertTrue(index.has_duplicates) # GH 9075 @@ -6613,7 +6791,7 @@ def check(nlevels, with_nulls): labels[500] = -1 # common nan value labels = list(labels.copy() for i in range(nlevels)) for i in range(nlevels): - labels[i][500 + i - nlevels // 2 ] = -1 + labels[i][500 + i - nlevels // 2] = -1 labels += [np.array([-1, 1]).repeat(500)] else: @@ -6660,7 +6838,8 @@ def check(nlevels, with_nulls): mi = MultiIndex.from_arrays([[101, a], [3.5, np.nan]]) self.assertFalse(mi.has_duplicates) self.assertEqual(mi.get_duplicates(), []) - tm.assert_numpy_array_equal(mi.duplicated(), np.zeros(2, dtype='bool')) + tm.assert_numpy_array_equal(mi.duplicated(), np.zeros( + 2, dtype='bool')) for n in range(1, 6): # 1st level shape for m in range(1, 5): # 2nd level shape @@ -6671,19 +6850,17 @@ def check(nlevels, with_nulls): self.assertEqual(len(mi), (n + 1) * (m + 1)) 
self.assertFalse(mi.has_duplicates) self.assertEqual(mi.get_duplicates(), []) - tm.assert_numpy_array_equal(mi.duplicated(), - np.zeros(len(mi), dtype='bool')) + tm.assert_numpy_array_equal(mi.duplicated(), np.zeros( + len(mi), dtype='bool')) def test_duplicate_meta_data(self): # GH 10115 - index = MultiIndex(levels=[[0, 1], [0, 1, 2]], - labels=[[0, 0, 0, 0, 1, 1, 1], - [0, 1, 2, 0, 0, 1, 2]]) + index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[ + [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]]) for idx in [index, index.set_names([None, None]), index.set_names([None, 'Num']), - index.set_names(['Upper','Num']), - ]: + index.set_names(['Upper', 'Num']), ]: self.assertTrue(idx.has_duplicates) self.assertEqual(idx.drop_duplicates().names, idx.names) @@ -6693,10 +6870,11 @@ def test_tolist(self): self.assertEqual(result, exp) def test_repr_with_unicode_data(self): - with pd.core.config.option_context("display.encoding",'UTF-8'): + with pd.core.config.option_context("display.encoding", 'UTF-8'): d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} index = pd.DataFrame(d).set_index(["a", "b"]).index - self.assertFalse("\\u" in repr(index)) # we don't want unicode-escaped + self.assertFalse("\\u" in repr(index) + ) # we don't want unicode-escaped def test_repr_roundtrip(self): @@ -6710,10 +6888,13 @@ def test_repr_roundtrip(self): result = eval(repr(mi)) # string coerces to unicode tm.assert_index_equal(result, mi, exact=False) - self.assertEqual(mi.get_level_values('first').inferred_type, 'string') - self.assertEqual(result.get_level_values('first').inferred_type, 'unicode') + self.assertEqual( + mi.get_level_values('first').inferred_type, 'string') + self.assertEqual( + result.get_level_values('first').inferred_type, 'unicode') - mi_u = MultiIndex.from_product([list(u'ab'),range(3)],names=['first','second']) + mi_u = MultiIndex.from_product( + [list(u'ab'), range(3)], names=['first', 'second']) result = eval(repr(mi_u)) tm.assert_index_equal(result, mi_u, 
                              exact=True)

@@ -6734,10 +6915,13 @@ def test_repr_roundtrip(self):
         result = eval(repr(mi))
         # string coerces to unicode
         tm.assert_index_equal(result, mi, exact=False)
-        self.assertEqual(mi.get_level_values('first').inferred_type, 'string')
-        self.assertEqual(result.get_level_values('first').inferred_type, 'unicode')
+        self.assertEqual(
+            mi.get_level_values('first').inferred_type, 'string')
+        self.assertEqual(
+            result.get_level_values('first').inferred_type, 'unicode')

-        mi = MultiIndex.from_product([list(u'abcdefg'),range(10)],names=['first','second'])
+        mi = MultiIndex.from_product(
+            [list(u'abcdefg'), range(10)], names=['first', 'second'])
         result = eval(repr(mi_u))
         tm.assert_index_equal(result, mi_u, exact=True)

@@ -6777,8 +6961,8 @@ def test_isnull_behavior(self):

     def test_level_setting_resets_attributes(self):
         ind = MultiIndex.from_arrays([
-            ['A', 'A', 'B', 'B', 'B'],
-            [1, 2, 1, 2, 3]])
+            ['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3]
+        ])
         assert ind.is_monotonic
         ind.set_levels([['A', 'B', 'A', 'A', 'B'], [2, 1, 3, -2, 5]],
                        inplace=True)

@@ -6788,8 +6972,8 @@ def test_level_setting_resets_attributes(self):

     def test_isin(self):
         values = [('foo', 2), ('bar', 3), ('quux', 4)]

-        idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'],
-                                      np.arange(4)])
+        idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange(
+            4)])
         result = idx.isin(values)
         expected = np.array([False, False, True, True])
         tm.assert_numpy_array_equal(result, expected)

@@ -6803,13 +6987,13 @@ def test_isin(self):

     def test_isin_nan(self):
         idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]])
         tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]),
-                                    [False, False])
+                                   [False, False])
         tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]),
-                                    [False, False])
+                                   [False, False])

     def test_isin_level_kwarg(self):
-        idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'],
-                                      np.arange(4)])
+        idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange(
+            4)])
         vals_0 = ['foo', 'bar', 'quux']
         vals_1 = [2, 3, 10]

@@ -6847,16 +7031,20 @@ def test_reindex_preserves_names_when_target_is_list_or_ndarray(self):
         self.assertEqual(idx.reindex(np.array([]))[0].names, [None, None])
         self.assertEqual(idx.reindex(target.tolist())[0].names, [None, None])
         self.assertEqual(idx.reindex(target.values)[0].names, [None, None])
-        self.assertEqual(idx.reindex(other_dtype.tolist())[0].names, [None, None])
-        self.assertEqual(idx.reindex(other_dtype.values)[0].names, [None, None])
+        self.assertEqual(
+            idx.reindex(other_dtype.tolist())[0].names, [None, None])
+        self.assertEqual(
+            idx.reindex(other_dtype.values)[0].names, [None, None])

         idx.names = ['foo', 'bar']
         self.assertEqual(idx.reindex([])[0].names, ['foo', 'bar'])
         self.assertEqual(idx.reindex(np.array([]))[0].names, ['foo', 'bar'])
         self.assertEqual(idx.reindex(target.tolist())[0].names, ['foo', 'bar'])
         self.assertEqual(idx.reindex(target.values)[0].names, ['foo', 'bar'])
-        self.assertEqual(idx.reindex(other_dtype.tolist())[0].names, ['foo', 'bar'])
-        self.assertEqual(idx.reindex(other_dtype.values)[0].names, ['foo', 'bar'])
+        self.assertEqual(
+            idx.reindex(other_dtype.tolist())[0].names, ['foo', 'bar'])
+        self.assertEqual(
+            idx.reindex(other_dtype.values)[0].names, ['foo', 'bar'])

     def test_reindex_lvl_preserves_names_when_target_is_list_or_array(self):
         # GH7774

@@ -6905,8 +7093,7 @@ def test_equals_operator(self):

 def test_get_combined_index():
     from pandas.core.index import _get_combined_index
     result = _get_combined_index([])
-    assert(result.equals(Index([])))
-
+    assert (result.equals(Index([])))

 if __name__ == '__main__':
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 5c3e4c01a965a..fc7a57ae2f179 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -30,7 +30,7 @@

 _verbose = False

-#-------------------------------------------------------------------------------
+# ------------------------------------------------------------------------
 # Indexing test cases

@@ -42,10 +42,11 @@ def _generate_indices(f, values=False):
     axes = f.axes
     if values:
-        axes = [ lrange(len(a)) for a in axes ]
+        axes = [lrange(len(a)) for a in axes]

     return itertools.product(*axes)

+
 def _get_value(f, i, values=False):
     """ return the value for the location i """

@@ -54,12 +55,13 @@ def _get_value(f, i, values=False):
         return f.values[i]

     # this is equiv of f[col][row].....
-    #v = f
-    #for a in reversed(i):
+    # v = f
+    # for a in reversed(i):
     #      v = v.__getitem__(a)
-    #return v
+    # return v
     return f.ix[i]

+
 def _get_result(obj, method, key, axis):
     """ return the result for this obj with this key and this axis """

@@ -70,81 +72,90 @@ def _get_result(obj, method, key, axis):
     # so ix can work for comparisions
     if method == 'indexer':
         method = 'ix'
-        key    = obj._get_axis(axis)[key]
+        key = obj._get_axis(axis)[key]

     # in case we actually want 0 index slicing
     try:
-        xp = getattr(obj, method).__getitem__(_axify(obj,key,axis))
+        xp = getattr(obj, method).__getitem__(_axify(obj, key, axis))
     except:
-        xp = getattr(obj, method).__getitem__(key)
+        xp = getattr(obj, method).__getitem__(key)

     return xp

+
 def _axify(obj, key, axis):
     # create a tuple accessor
-    if axis is not None:
-        axes = [ slice(None) ] * obj.ndim
-        axes[axis] = key
-        return tuple(axes)
-    return k
+    axes = [slice(None)] * obj.ndim
+    axes[axis] = key
+    return tuple(axes)


-def _mklbl(prefix,n):
-    return ["%s%s" % (prefix,i) for i in range(n)]
+def _mklbl(prefix, n):
+    return ["%s%s" % (prefix, i) for i in range(n)]

+
 class TestIndexing(tm.TestCase):

     _multiprocess_can_split_ = True

-    _objs = set(['series','frame','panel'])
-    _typs = set(['ints','labels','mixed','ts','floats','empty'])
+    _objs = set(['series', 'frame', 'panel'])
+    _typs = set(['ints', 'labels', 'mixed', 'ts', 'floats', 'empty'])

     def setUp(self):

         import warnings
         warnings.filterwarnings(action='ignore', category=FutureWarning)

-        self.series_ints = Series(np.random.rand(4), index=lrange(0,8,2))
-        self.frame_ints = DataFrame(np.random.randn(4, 4), index=lrange(0, 8, 2), columns=lrange(0,12,3))
-        self.panel_ints = Panel(np.random.rand(4,4,4), items=lrange(0,8,2),major_axis=lrange(0,12,3),minor_axis=lrange(0,16,4))
+        self.series_ints = Series(np.random.rand(4), index=lrange(0, 8, 2))
+        self.frame_ints = DataFrame(
+            np.random.randn(
+                4, 4), index=lrange(0, 8, 2), columns=lrange(0, 12, 3))
+        self.panel_ints = Panel(
+            np.random.rand(4, 4, 4), items=lrange(0, 8, 2),
+            major_axis=lrange(0, 12, 3), minor_axis=lrange(0, 16, 4))

         self.series_labels = Series(np.random.randn(4), index=list('abcd'))
-        self.frame_labels = DataFrame(np.random.randn(4, 4), index=list('abcd'), columns=list('ABCD'))
-        self.panel_labels = Panel(np.random.randn(4,4,4), items=list('abcd'), major_axis=list('ABCD'), minor_axis=list('ZYXW'))
-
-        self.series_mixed = Series(np.random.randn(4), index=[2, 4, 'null', 8])
-        self.frame_mixed = DataFrame(np.random.randn(4, 4), index=[2, 4, 'null', 8])
-        self.panel_mixed = Panel(np.random.randn(4,4,4), items=[2,4,'null',8])
-
-        self.series_ts = Series(np.random.randn(4), index=date_range('20130101', periods=4))
-        self.frame_ts = DataFrame(np.random.randn(4, 4), index=date_range('20130101', periods=4))
-        self.panel_ts = Panel(np.random.randn(4, 4, 4), items=date_range('20130101', periods=4))
-
-        #self.series_floats = Series(np.random.randn(4), index=[1.00, 2.00, 3.00, 4.00])
-        #self.frame_floats = DataFrame(np.random.randn(4, 4), columns=[1.00, 2.00, 3.00, 4.00])
-        #self.panel_floats = Panel(np.random.rand(4,4,4), items = [1.00,2.00,3.00,4.00])
-
-        self.frame_empty = DataFrame({})
-        self.series_empty = Series({})
-        self.panel_empty = Panel({})
+        self.frame_labels = DataFrame(
+            np.random.randn(4, 4), index=list('abcd'), columns=list('ABCD'))
+        self.panel_labels = Panel(
+            np.random.randn(4, 4, 4), items=list('abcd'),
+            major_axis=list('ABCD'), minor_axis=list('ZYXW'))
+
+        self.series_mixed = Series(np.random.randn(4), index=[2, 4, 'null', 8])
+        self.frame_mixed = DataFrame(
+            np.random.randn(4, 4), index=[2, 4, 'null', 8])
+        self.panel_mixed = Panel(
+            np.random.randn(4, 4, 4), items=[2, 4, 'null', 8])
+
+        self.series_ts = Series(
+            np.random.randn(4), index=date_range('20130101', periods=4))
+        self.frame_ts = DataFrame(
+            np.random.randn(4, 4), index=date_range('20130101', periods=4))
+        self.panel_ts = Panel(
+            np.random.randn(4, 4, 4), items=date_range('20130101', periods=4))
+
+        self.frame_empty = DataFrame({})
+        self.series_empty = Series({})
+        self.panel_empty = Panel({})

         # form agglomerates
         for o in self._objs:

             d = dict()
             for t in self._typs:
-                d[t] = getattr(self,'%s_%s' % (o,t),None)
+                d[t] = getattr(self, '%s_%s' % (o, t), None)

-            setattr(self,o,d)
+            setattr(self, o, d)

-    def check_values(self, f, func, values = False):
+    def check_values(self, f, func, values=False):

-        if f is None: return
+        if f is None:
+            return
         axes = f.axes
         indicies = itertools.product(*axes)

         for i in indicies:
-            result = getattr(f,func)[i]
+            result = getattr(f, func)[i]

             # check agains values
             if values:

@@ -156,33 +167,32 @@ def check_values(self, f, func, values = False):

         assert_almost_equal(result, expected)

-
-    def check_result(self, name, method1, key1, method2, key2, typs = None, objs = None, axes = None, fails = None):
-
-
+    def check_result(self, name, method1, key1, method2, key2, typs=None,
+                     objs=None, axes=None, fails=None):
         def _eq(t, o, a, obj, k1, k2):
             """ compare equal for these 2 keys """

-            if a is not None and a > obj.ndim-1:
+            if a is not None and a > obj.ndim - 1:
                 return

-            def _print(result, error = None):
+            def _print(result, error=None):
                 if error is not None:
                     error = str(error)
-                v = "%-16.16s [%-16.16s]: [typ->%-8.8s,obj->%-8.8s,key1->(%-4.4s),key2->(%-4.4s),axis->%s] %s" % (name,result,t,o,method1,method2,a,error or '')
+                v = ("%-16.16s [%-16.16s]: [typ->%-8.8s,obj->%-8.8s,"
+                     "key1->(%-4.4s),key2->(%-4.4s),axis->%s] %s" %
+                     (name, result, t, o, method1, method2, a, error or ''))
                 if _verbose:
                     com.pprint_thing(v)

             try:
-
-                ### good debug location ###
-                #if name == 'bool' and t == 'empty' and o == 'series' and method1 == 'loc':
+                # if (name == 'bool' and t == 'empty' and o == 'series' and
+                #         method1 == 'loc'):
                 #    import pdb; pdb.set_trace()

-                rs = getattr(obj, method1).__getitem__(_axify(obj,k1,a))
+                rs = getattr(obj, method1).__getitem__(_axify(obj, k1, a))

                 try:
-                    xp = _get_result(obj,method2,k2,a)
+                    xp = _get_result(obj, method2, k2, a)
                 except:
                     result = 'no comp'
                     _print(result)

@@ -192,11 +202,11 @@ def _print(result, error = None):
                 if np.isscalar(rs) and np.isscalar(xp):
                     self.assertEqual(rs, xp)
                 elif xp.ndim == 1:
-                    assert_series_equal(rs,xp)
+                    assert_series_equal(rs, xp)
                 elif xp.ndim == 2:
-                    assert_frame_equal(rs,xp)
+                    assert_frame_equal(rs, xp)
                 elif xp.ndim == 3:
-                    assert_panel_equal(rs,xp)
+                    assert_panel_equal(rs, xp)
                 result = 'ok'
             except (AssertionError):
                 result = 'fail'

@@ -223,7 +233,7 @@ def _print(result, error = None):
                     return

                 result = type(detail).__name__
-            raise AssertionError(_print(result, error = detail))
+            raise AssertionError(_print(result, error=detail))

         if typs is None:
             typs = self._typs

@@ -232,19 +242,19 @@ def _print(result, error = None):
             objs = self._objs

         if axes is not None:
-            if not isinstance(axes,(tuple,list)):
-                axes = [ axes ]
+            if not isinstance(axes, (tuple, list)):
+                axes = [axes]
             else:
                 axes = list(axes)
         else:
-            axes = [ 0, 1, 2]
+            axes = [0, 1, 2]

         # check
         for o in objs:
             if o not in self._objs:
                 continue

-            d = getattr(self,o)
+            d = getattr(self, o)
             for a in axes:
                 for t in typs:
                     if t not in self._typs:
                         continue

@@ -269,61 +279,59 @@ def test_indexer_caching(self):
         # setitem
         expected = Series(np.ones(n), index=index)
         s = Series(np.zeros(n), index=index)
-        s[s==0] = 1
-        assert_series_equal(s,expected)
+        s[s == 0] = 1
+        assert_series_equal(s, expected)

     def test_at_and_iat_get(self):
-
-        def _check(f, func, values = False):
+        def _check(f, func, values=False):

             if f is not None:
                 indicies = _generate_indices(f, values)
                 for i in indicies:
-                    result = getattr(f,func)[i]
-                    expected = _get_value(f,i,values)
+                    result = getattr(f, func)[i]
+                    expected = _get_value(f, i, values)
                     assert_almost_equal(result, expected)

         for o in self._objs:

-            d = getattr(self,o)
+            d = getattr(self, o)

             # iat
-            _check(d['ints'],'iat', values=True)
-            for f in [d['labels'],d['ts'],d['floats']]:
+            _check(d['ints'], 'iat', values=True)
+            for f in [d['labels'], d['ts'], d['floats']]:
                 if f is not None:
                     self.assertRaises(ValueError, self.check_values, f, 'iat')

             # at
-            _check(d['ints'],  'at')
-            _check(d['labels'],'at')
-            _check(d['ts'],    'at')
-            _check(d['floats'],'at')
+            _check(d['ints'], 'at')
+            _check(d['labels'], 'at')
+            _check(d['ts'], 'at')
+            _check(d['floats'], 'at')

     def test_at_and_iat_set(self):
-
-        def _check(f, func, values = False):
+        def _check(f, func, values=False):

             if f is not None:
                 indicies = _generate_indices(f, values)
                 for i in indicies:
-                    getattr(f,func)[i] = 1
-                    expected = _get_value(f,i,values)
+                    getattr(f, func)[i] = 1
+                    expected = _get_value(f, i, values)
                     assert_almost_equal(expected, 1)

         for t in self._objs:

-            d = getattr(self,t)
+            d = getattr(self, t)

-            _check(d['ints'],'iat',values=True)
-            for f in [d['labels'],d['ts'],d['floats']]:
+            _check(d['ints'], 'iat', values=True)
+            for f in [d['labels'], d['ts'], d['floats']]:
                 if f is not None:
                     self.assertRaises(ValueError, _check, f, 'iat')

             # at
-            _check(d['ints'],  'at')
-            _check(d['labels'],'at')
-            _check(d['ts'],    'at')
-            _check(d['floats'],'at')
+            _check(d['ints'], 'at')
+            _check(d['labels'], 'at')
+            _check(d['ts'], 'at')
+            _check(d['floats'], 'at')

     def test_at_iat_coercion(self):

@@ -333,7 +341,7 @@ def test_at_iat_coercion(self):

         s = df['A']

         result = s.at[dates[5]]
-        xp     = s.values[5]
+        xp = s.values[5]
         self.assertEqual(result, xp)

         # GH 7729

@@ -341,14 +349,14 @@ def test_at_iat_coercion(self):

         s = Series(['2014-01-01', '2014-02-02'], dtype='datetime64[ns]')
         expected = Timestamp('2014-02-02')

-        for r in [ lambda : s.iat[1], lambda : s.iloc[1] ]:
+        for r in [lambda: s.iat[1], lambda: s.iloc[1]]:
             result = r()
             self.assertEqual(result, expected)

-        s = Series(['1 days','2 days'], dtype='timedelta64[ns]')
+        s = Series(['1 days', '2 days'], dtype='timedelta64[ns]')
         expected = Timedelta('2 days')

-        for r in [ lambda : s.iat[1], lambda : s.iloc[1] ]:
+        for r in [lambda: s.iat[1], lambda: s.iloc[1]]:
             result = r()
             self.assertEqual(result, expected)

@@ -360,149 +368,180 @@ def test_imethods_with_dups(self):

         # GH6493
         # iat/iloc with dups

-        s = Series(range(5), index=[1,1,2,2,3], dtype='int64')
+        s = Series(range(5), index=[1, 1, 2, 2, 3], dtype='int64')
         result = s.iloc[2]
-        self.assertEqual(result,2)
+        self.assertEqual(result, 2)
         result = s.iat[2]
-        self.assertEqual(result,2)
+        self.assertEqual(result, 2)

-        self.assertRaises(IndexError, lambda : s.iat[10])
-        self.assertRaises(IndexError, lambda : s.iat[-10])
+        self.assertRaises(IndexError, lambda: s.iat[10])
+        self.assertRaises(IndexError, lambda: s.iat[-10])

-        result = s.iloc[[2,3]]
-        expected = Series([2,3],[2,2],dtype='int64')
-        assert_series_equal(result,expected)
+        result = s.iloc[[2, 3]]
+        expected = Series([2, 3], [2, 2], dtype='int64')
+        assert_series_equal(result, expected)

         df = s.to_frame()
         result = df.iloc[2]
         expected = Series(2, index=[0], name=2)
         assert_series_equal(result, expected)

-        result = df.iat[2,0]
+        result = df.iat[2, 0]
         expected = 2
-        self.assertEqual(result,2)
+        self.assertEqual(result, 2)

     def test_repeated_getitem_dups(self):
         # GH 5678
         # repeated gettitems on a dup index returing a ndarray
-        df = DataFrame(np.random.random_sample((20,5)), index=['ABCDE'[x%5] for x in range(20)])
-        expected = df.loc['A',0]
-        result = df.loc[:,0].loc['A']
-        assert_series_equal(result,expected)
+        df = DataFrame(
+            np.random.random_sample((20, 5)),
+            index=['ABCDE' [x % 5] for x in range(20)])
+        expected = df.loc['A', 0]
+        result = df.loc[:, 0].loc['A']
+        assert_series_equal(result, expected)

     def test_iloc_exceeds_bounds(self):

         # GH6296
         # iloc should allow indexers that exceed the bounds
-        df = DataFrame(np.random.random_sample((20,5)), columns=list('ABCDE'))
+        df = DataFrame(np.random.random_sample((20, 5)), columns=list('ABCDE'))
         expected = df

         # lists of positions should raise IndexErrror!
-        with tm.assertRaisesRegexp(IndexError, 'positional indexers are out-of-bounds'):
-            df.iloc[:,[0,1,2,3,4,5]]
-        self.assertRaises(IndexError, lambda : df.iloc[[1,30]])
-        self.assertRaises(IndexError, lambda : df.iloc[[1,-30]])
-        self.assertRaises(IndexError, lambda : df.iloc[[100]])
+        with tm.assertRaisesRegexp(IndexError,
+                                   'positional indexers are out-of-bounds'):
+            df.iloc[:, [0, 1, 2, 3, 4, 5]]
+        self.assertRaises(IndexError, lambda: df.iloc[[1, 30]])
+        self.assertRaises(IndexError, lambda: df.iloc[[1, -30]])
+        self.assertRaises(IndexError, lambda: df.iloc[[100]])

         s = df['A']
-        self.assertRaises(IndexError, lambda : s.iloc[[100]])
-        self.assertRaises(IndexError, lambda : s.iloc[[-100]])
+        self.assertRaises(IndexError, lambda: s.iloc[[100]])
+        self.assertRaises(IndexError, lambda: s.iloc[[-100]])

         # still raise on a single indexer
-        with tm.assertRaisesRegexp(IndexError, 'single positional indexer is out-of-bounds'):
+        with tm.assertRaisesRegexp(
+                IndexError, 'single positional indexer is out-of-bounds'):
            df.iloc[30]
-        self.assertRaises(IndexError, lambda : df.iloc[-30])
+        self.assertRaises(IndexError, lambda: df.iloc[-30])

         # GH10779
-        # single positive/negative indexer exceeding Series bounds should raise an IndexError
-        with tm.assertRaisesRegexp(IndexError, 'single positional indexer is out-of-bounds'):
+        # single positive/negative indexer exceeding Series bounds should raise
+        # an IndexError
+        with tm.assertRaisesRegexp(
+                IndexError, 'single positional indexer is out-of-bounds'):
            s.iloc[30]
-        self.assertRaises(IndexError, lambda : s.iloc[-30])
+        self.assertRaises(IndexError, lambda: s.iloc[-30])

         # slices are ok
-        result = df.iloc[:,4:10]  # 0 < start < len < stop
-        expected = df.iloc[:,4:]
-        assert_frame_equal(result,expected)
+        result = df.iloc[:, 4:10]  # 0 < start < len < stop
+        expected = df.iloc[:, 4:]
+        assert_frame_equal(result, expected)

-        result = df.iloc[:,-4:-10]  # stop < 0 < start < len
-        expected = df.iloc[:,:0]
-        assert_frame_equal(result,expected)
+        result = df.iloc[:, -4:-10]  # stop < 0 < start < len
+        expected = df.iloc[:, :0]
+        assert_frame_equal(result, expected)

-        result = df.iloc[:,10:4:-1]  # 0 < stop < len < start (down)
-        expected = df.iloc[:,:4:-1]
-        assert_frame_equal(result,expected)
+        result = df.iloc[:, 10:4:-1]  # 0 < stop < len < start (down)
+        expected = df.iloc[:, :4:-1]
+        assert_frame_equal(result, expected)

-        result = df.iloc[:,4:-10:-1]  # stop < 0 < start < len (down)
-        expected = df.iloc[:,4::-1]
-        assert_frame_equal(result,expected)
+        result = df.iloc[:, 4:-10:-1]  # stop < 0 < start < len (down)
+        expected = df.iloc[:, 4::-1]
+        assert_frame_equal(result, expected)

-        result = df.iloc[:,-10:4]  # start < 0 < stop < len
-        expected = df.iloc[:,:4]
-        assert_frame_equal(result,expected)
+        result = df.iloc[:, -10:4]  # start < 0 < stop < len
+        expected = df.iloc[:, :4]
+        assert_frame_equal(result, expected)

-        result = df.iloc[:,10:4]  # 0 < stop < len < start
-        expected = df.iloc[:,:0]
-        assert_frame_equal(result,expected)
+        result = df.iloc[:, 10:4]  # 0 < stop < len < start
+        expected = df.iloc[:, :0]
+        assert_frame_equal(result, expected)

-        result = df.iloc[:,-10:-11:-1]  # stop < start < 0 < len (down)
-        expected = df.iloc[:,:0]
-        assert_frame_equal(result,expected)
+        result = df.iloc[:, -10:-11:-1]  # stop < start < 0 < len (down)
+        expected = df.iloc[:, :0]
+        assert_frame_equal(result, expected)

-        result = df.iloc[:,10:11]  # 0 < len < start < stop
-        expected = df.iloc[:,:0]
-        assert_frame_equal(result,expected)
+        result = df.iloc[:, 10:11]  # 0 < len < start < stop
+        expected = df.iloc[:, :0]
+        assert_frame_equal(result, expected)

         # slice bounds exceeding is ok
         result = s.iloc[18:30]
         expected = s.iloc[18:]
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         result = s.iloc[30:]
         expected = s.iloc[:0]
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         result = s.iloc[30::-1]
         expected = s.iloc[::-1]
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         # doc example
-        def check(result,expected):
+        def check(result, expected):
             str(result)
             result.dtypes
-            assert_frame_equal(result,expected)
+            assert_frame_equal(result, expected)

-        dfl = DataFrame(np.random.randn(5,2),columns=list('AB'))
-        check(dfl.iloc[:,2:3],DataFrame(index=dfl.index))
-        check(dfl.iloc[:,1:3],dfl.iloc[:,[1]])
-        check(dfl.iloc[4:6],dfl.iloc[[4]])
+        dfl = DataFrame(np.random.randn(5, 2), columns=list('AB'))
+        check(dfl.iloc[:, 2:3], DataFrame(index=dfl.index))
+        check(dfl.iloc[:, 1:3], dfl.iloc[:, [1]])
+        check(dfl.iloc[4:6], dfl.iloc[[4]])

-        self.assertRaises(IndexError, lambda : dfl.iloc[[4,5,6]])
-        self.assertRaises(IndexError, lambda : dfl.iloc[:,4])
+        self.assertRaises(IndexError, lambda: dfl.iloc[[4, 5, 6]])
+        self.assertRaises(IndexError, lambda: dfl.iloc[:, 4])

     def test_iloc_getitem_int(self):

         # integer
-        self.check_result('integer', 'iloc', 2, 'ix', { 0 : 4, 1: 6, 2: 8 }, typs = ['ints'])
-        self.check_result('integer', 'iloc', 2, 'indexer', 2, typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+        self.check_result('integer', 'iloc', 2, 'ix', {0: 4,
+                                                       1: 6,
+                                                       2: 8}, typs=['ints'])
+        self.check_result('integer', 'iloc', 2, 'indexer', 2,
+                          typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
+                          fails=IndexError)

     def test_iloc_getitem_neg_int(self):

         # neg integer
-        self.check_result('neg int', 'iloc', -1, 'ix', { 0 : 6, 1: 9, 2: 12 }, typs = ['ints'])
-        self.check_result('neg int', 'iloc', -1, 'indexer', -1, typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+        self.check_result('neg int', 'iloc', -1, 'ix', {0: 6,
+                                                        1: 9,
+                                                        2: 12}, typs=['ints'])
+        self.check_result('neg int', 'iloc', -1, 'indexer', -1,
+                          typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
+                          fails=IndexError)
     def test_iloc_getitem_list_int(self):

         # list of ints
-        self.check_result('list int', 'iloc', [0,1,2], 'ix', { 0 : [0,2,4], 1 : [0,3,6], 2: [0,4,8] }, typs = ['ints'])
-        self.check_result('list int', 'iloc', [2], 'ix', { 0 : [4], 1 : [6], 2: [8] }, typs = ['ints'])
-        self.check_result('list int', 'iloc', [0,1,2], 'indexer', [0,1,2], typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
-
-        # array of ints
-        # (GH5006), make sure that a single indexer is returning the correct type
-        self.check_result('array int', 'iloc', np.array([0,1,2]), 'ix', { 0 : [0,2,4], 1 : [0,3,6], 2: [0,4,8] }, typs = ['ints'])
-        self.check_result('array int', 'iloc', np.array([2]), 'ix', { 0 : [4], 1 : [6], 2: [8] }, typs = ['ints'])
-        self.check_result('array int', 'iloc', np.array([0,1,2]), 'indexer', [0,1,2], typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+        self.check_result('list int', 'iloc', [0, 1, 2], 'ix', {0: [0, 2, 4],
+                                                                1: [0, 3, 6],
+                                                                2: [0, 4, 8]},
+                          typs=['ints'])
+        self.check_result('list int', 'iloc', [2], 'ix', {0: [4],
+                                                          1: [6],
+                                                          2: [8]},
+                          typs=['ints'])
+        self.check_result('list int', 'iloc', [0, 1, 2], 'indexer', [0, 1, 2],
+                          typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
+                          fails=IndexError)
+
+        # array of ints (GH5006), make sure that a single indexer is returning
+        # the correct type
+        self.check_result('array int', 'iloc', np.array([0, 1, 2]), 'ix',
+                          {0: [0, 2, 4],
+                           1: [0, 3, 6],
+                           2: [0, 4, 8]}, typs=['ints'])
+        self.check_result('array int', 'iloc', np.array([2]), 'ix', {0: [4],
+                                                                     1: [6],
+                                                                     2: [8]},
+                          typs=['ints'])
+        self.check_result('array int', 'iloc', np.array([0, 1, 2]), 'indexer',
+                          [0, 1, 2],
+                          typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
+                          fails=IndexError)

     def test_iloc_getitem_neg_int_can_reach_first_index(self):
         # GH10547 and GH10779

@@ -534,160 +573,176 @@ def test_iloc_getitem_neg_int_can_reach_first_index(self):

     def test_iloc_getitem_dups(self):

         # no dups in panel (bug?)
-        self.check_result('list int (dups)', 'iloc', [0,1,1,3], 'ix', { 0 : [0,2,2,6], 1 : [0,3,3,9] }, objs = ['series','frame'], typs = ['ints'])
+        self.check_result('list int (dups)', 'iloc', [0, 1, 1, 3], 'ix',
+                          {0: [0, 2, 2, 6],
+                           1: [0, 3, 3, 9
+                               ]}, objs=['series', 'frame'], typs=['ints'])

         # GH 6766
-        df1 = DataFrame([{'A':None, 'B':1},{'A':2, 'B':2}])
-        df2 = DataFrame([{'A':3, 'B':3},{'A':4, 'B':4}])
+        df1 = DataFrame([{'A': None, 'B': 1}, {'A': 2, 'B': 2}])
+        df2 = DataFrame([{'A': 3, 'B': 3}, {'A': 4, 'B': 4}])
         df = concat([df1, df2], axis=1)

         # cross-sectional indexing
-        result = df.iloc[0,0]
+        result = df.iloc[0, 0]
         self.assertTrue(isnull(result))

-        result = df.iloc[0,:]
-        expected = Series([np.nan, 1, 3, 3], index=['A','B','A','B'], name=0)
-        assert_series_equal(result,expected)
+        result = df.iloc[0, :]
+        expected = Series([np.nan, 1, 3, 3], index=['A', 'B', 'A', 'B'],
+                          name=0)
+        assert_series_equal(result, expected)

     def test_iloc_getitem_array(self):

         # array like
-        s = Series(index=lrange(1,4))
-        self.check_result('array like', 'iloc', s.index, 'ix', { 0 : [2,4,6], 1 : [3,6,9], 2: [4,8,12] }, typs = ['ints'])
+        s = Series(index=lrange(1, 4))
+        self.check_result('array like', 'iloc', s.index, 'ix', {0: [2, 4, 6],
+                                                                1: [3, 6, 9],
+                                                                2: [4, 8, 12]},
+                          typs=['ints'])

     def test_iloc_getitem_bool(self):

         # boolean indexers
-        b = [True,False,True,False,]
-        self.check_result('bool', 'iloc', b, 'ix', b, typs = ['ints'])
-        self.check_result('bool', 'iloc', b, 'ix', b, typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+        b = [True, False, True, False, ]
+        self.check_result('bool', 'iloc', b, 'ix', b, typs=['ints'])
+        self.check_result('bool', 'iloc', b, 'ix', b,
+                          typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
+                          fails=IndexError)

     def test_iloc_getitem_slice(self):

         # slices
-        self.check_result('slice', 'iloc', slice(1,3), 'ix', { 0 : [2,4], 1: [3,6], 2: [4,8] }, typs = ['ints'])
-        self.check_result('slice', 'iloc', slice(1,3), 'indexer', slice(1,3), typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+        self.check_result('slice', 'iloc', slice(1, 3), 'ix', {0: [2, 4],
+                                                               1: [3, 6],
+                                                               2: [4, 8]},
+                          typs=['ints'])
+        self.check_result('slice', 'iloc', slice(1, 3), 'indexer', slice(
+            1, 3), typs=['labels', 'mixed', 'ts', 'floats', 'empty'],
+            fails=IndexError)

     def test_iloc_getitem_slice_dups(self):

-        df1 = DataFrame(np.random.randn(10,4),columns=['A','A','B','B'])
-        df2 = DataFrame(np.random.randint(0,10,size=20).reshape(10,2),columns=['A','C'])
+        df1 = DataFrame(np.random.randn(10, 4), columns=['A', 'A', 'B', 'B'])
+        df2 = DataFrame(
+            np.random.randint(0, 10, size=20).reshape(10,
+                                                      2), columns=['A', 'C'])

         # axis=1
-        df = concat([df1,df2],axis=1)
-        assert_frame_equal(df.iloc[:,:4],df1)
-        assert_frame_equal(df.iloc[:,4:],df2)
+        df = concat([df1, df2], axis=1)
+        assert_frame_equal(df.iloc[:, :4], df1)
+        assert_frame_equal(df.iloc[:, 4:], df2)

-        df = concat([df2,df1],axis=1)
-        assert_frame_equal(df.iloc[:,:2],df2)
-        assert_frame_equal(df.iloc[:,2:],df1)
+        df = concat([df2, df1], axis=1)
+        assert_frame_equal(df.iloc[:, :2], df2)
+        assert_frame_equal(df.iloc[:, 2:], df1)

-        assert_frame_equal(df.iloc[:,0:3],concat([df2,df1.iloc[:,[0]]],axis=1))
+        assert_frame_equal(df.iloc[:, 0:3], concat(
+            [df2, df1.iloc[:, [0]]], axis=1))

         # axis=0
-        df = concat([df,df],axis=0)
-        assert_frame_equal(df.iloc[0:10,:2],df2)
-        assert_frame_equal(df.iloc[0:10,2:],df1)
-        assert_frame_equal(df.iloc[10:,:2],df2)
-        assert_frame_equal(df.iloc[10:,2:],df1)
+        df = concat([df, df], axis=0)
+        assert_frame_equal(df.iloc[0:10, :2], df2)
+        assert_frame_equal(df.iloc[0:10, 2:], df1)
+        assert_frame_equal(df.iloc[10:, :2], df2)
+        assert_frame_equal(df.iloc[10:, 2:], df1)

-    def test_iloc_getitem_multiindex(self):
+    def test_iloc_getitem_multiindex2(self):
+        # TODO(wesm): fix this
+        raise nose.SkipTest('this test was being suppressed, '
+                            'needs to be fixed')
         arr = np.random.randn(3, 3)
-        df = DataFrame(arr,
-                       columns=[[2,2,4],[6,8,10]],
-                       index=[[4,4,8],[8,10,12]])
+        df = DataFrame(arr, columns=[[2, 2, 4], [6, 8, 10]],
+                       index=[[4, 4, 8], [8, 10, 12]])

         rs = df.iloc[2]
-        xp = Series(arr[2],index=df.columns)
+        xp = Series(arr[2], index=df.columns)
         assert_series_equal(rs, xp)

-        rs = df.iloc[:,2]
-        xp = Series(arr[:, 2],index=df.index)
+        rs = df.iloc[:, 2]
+        xp = Series(arr[:, 2], index=df.index)
         assert_series_equal(rs, xp)

-        rs = df.iloc[2,2]
-        xp = df.values[2,2]
+        rs = df.iloc[2, 2]
+        xp = df.values[2, 2]
         self.assertEqual(rs, xp)

         # for multiple items
         # GH 5528
-        rs = df.iloc[[0,1]]
-        xp = df.xs(4,drop_level=False)
-        assert_frame_equal(rs,xp)
+        rs = df.iloc[[0, 1]]
+        xp = df.xs(4, drop_level=False)
+        assert_frame_equal(rs, xp)

-        tup = zip(*[['a','a','b','b'],['x','y','x','y']])
+        tup = zip(*[['a', 'a', 'b', 'b'], ['x', 'y', 'x', 'y']])
         index = MultiIndex.from_tuples(tup)
         df = DataFrame(np.random.randn(4, 4), index=index)
         rs = df.iloc[[2, 3]]
-        xp = df.xs('b',drop_level=False)
-        assert_frame_equal(rs,xp)
+        xp = df.xs('b', drop_level=False)
+        assert_frame_equal(rs, xp)

     def test_iloc_setitem(self):
         df = self.frame_ints

-        df.iloc[1,1] = 1
-        result = df.iloc[1,1]
+        df.iloc[1, 1] = 1
+        result = df.iloc[1, 1]
         self.assertEqual(result, 1)

-        df.iloc[:,2:3] = 0
-        expected = df.iloc[:,2:3]
-        result = df.iloc[:,2:3]
+        df.iloc[:, 2:3] = 0
+        expected = df.iloc[:, 2:3]
+        result = df.iloc[:, 2:3]
         assert_frame_equal(result, expected)

         # GH5771
-        s = Series(0,index=[4,5,6])
+        s = Series(0, index=[4, 5, 6])
         s.iloc[1:2] += 1
-        expected = Series([0,1,0],index=[4,5,6])
+        expected = Series([0, 1, 0], index=[4, 5, 6])
         assert_series_equal(s, expected)

     def test_ix_loc_setitem_consistency(self):

         # GH 5771
         # loc with slice and series
-        s = Series(0,index=[4,5,6])
+        s = Series(0, index=[4, 5, 6])
         s.loc[4:5] += 1
-        expected = Series([1,1,0],index=[4,5,6])
+        expected = Series([1, 1, 0], index=[4, 5, 6])
         assert_series_equal(s, expected)

         # GH 5928
         # chained indexing assignment
-        df = DataFrame({'a' : [0,1,2] })
+        df = DataFrame({'a': [0, 1, 2]})
         expected = df.copy()
-        expected.ix[[0,1,2],'a'] = -expected.ix[[0,1,2],'a']
+        expected.ix[[0, 1, 2], 'a'] = -expected.ix[[0, 1, 2], 'a']

-        df['a'].ix[[0,1,2]] = -df['a'].ix[[0,1,2]]
-        assert_frame_equal(df,expected)
+        df['a'].ix[[0, 1, 2]] = -df['a'].ix[[0, 1, 2]]
+        assert_frame_equal(df, expected)

-        df = DataFrame({'a' : [0,1,2], 'b' :[0,1,2] })
-        df['a'].ix[[0,1,2]] = -df['a'].ix[[0,1,2]].astype('float64') + 0.5
-        expected = DataFrame({'a' : [0.5,-0.5,-1.5], 'b' : [0,1,2] })
-        assert_frame_equal(df,expected)
+        df = DataFrame({'a': [0, 1, 2], 'b': [0, 1, 2]})
+        df['a'].ix[[0, 1, 2]] = -df['a'].ix[[0, 1, 2]].astype('float64') + 0.5
+        expected = DataFrame({'a': [0.5, -0.5, -1.5], 'b': [0, 1, 2]})
+        assert_frame_equal(df, expected)

         # GH 8607
         # ix setitem consistency
-        df = DataFrame(
-            {'timestamp':[1413840976, 1413842580, 1413760580],
-             'delta':[1174, 904, 161],
-             'elapsed':[7673, 9277, 1470]
-             })
-        expected = DataFrame(
-            {'timestamp':pd.to_datetime([1413840976, 1413842580, 1413760580], unit='s'),
-             'delta':[1174, 904, 161],
-             'elapsed':[7673, 9277, 1470]
-             })
+        df = DataFrame({'timestamp': [1413840976, 1413842580, 1413760580],
+                        'delta': [1174, 904, 161],
+                        'elapsed': [7673, 9277, 1470]})
+        expected = DataFrame({'timestamp': pd.to_datetime(
+            [1413840976, 1413842580, 1413760580], unit='s'),
+            'delta': [1174, 904, 161],
+            'elapsed': [7673, 9277, 1470]})

         df2 = df.copy()
         df2['timestamp'] = pd.to_datetime(df['timestamp'], unit='s')
-        assert_frame_equal(df2,expected)
+        assert_frame_equal(df2, expected)

         df2 = df.copy()
-        df2.loc[:,'timestamp'] = pd.to_datetime(df['timestamp'], unit='s')
-        assert_frame_equal(df2,expected)
+        df2.loc[:, 'timestamp'] = pd.to_datetime(df['timestamp'], unit='s')
+        assert_frame_equal(df2, expected)

         df2 = df.copy()
-        df2.ix[:,2] = pd.to_datetime(df['timestamp'], unit='s')
-        assert_frame_equal(df2,expected)
+        df2.ix[:, 2] = pd.to_datetime(df['timestamp'], unit='s')
+        assert_frame_equal(df2, expected)
     def test_ix_loc_consistency(self):

@@ -702,26 +757,29 @@ def compare(result, expected):
             self.assertTrue(expected.equals(result))

         # failure cases for .loc, but these work for .ix
-        df = pd.DataFrame(np.random.randn(5,4), columns=list('ABCD'))
-        for key in [ slice(1,3), tuple([slice(0,2),slice(0,2)]), tuple([slice(0,2),df.columns[0:2]]) ]:
+        df = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'))
+        for key in [slice(1, 3), tuple([slice(0, 2), slice(0, 2)]),
+                    tuple([slice(0, 2), df.columns[0:2]])]:

-            for index in [ tm.makeStringIndex, tm.makeUnicodeIndex,
-                           tm.makeDateIndex, tm.makePeriodIndex, tm.makeTimedeltaIndex ]:
+            for index in [tm.makeStringIndex, tm.makeUnicodeIndex,
+                          tm.makeDateIndex, tm.makePeriodIndex,
+                          tm.makeTimedeltaIndex]:
                 df.index = index(len(df.index))
                 df.ix[key]

-                self.assertRaises(TypeError, lambda : df.loc[key])
+                self.assertRaises(TypeError, lambda: df.loc[key])

-        df = pd.DataFrame(np.random.randn(5,4), columns=list('ABCD'), index=pd.date_range('2012-01-01', periods=5))
+        df = pd.DataFrame(
+            np.random.randn(5, 4), columns=list('ABCD'),
+            index=pd.date_range('2012-01-01', periods=5))

-        for key in [ '2012-01-03',
-                     '2012-01-31',
-                     slice('2012-01-03','2012-01-03'),
-                     slice('2012-01-03','2012-01-04'),
-                     slice('2012-01-03','2012-01-06',2),
-                     slice('2012-01-03','2012-01-31'),
-                     tuple([[True,True,True,False,True]]),
-                     ]:
+        for key in ['2012-01-03',
+                    '2012-01-31',
+                    slice('2012-01-03', '2012-01-03'),
+                    slice('2012-01-03', '2012-01-04'),
+                    slice('2012-01-03', '2012-01-06', 2),
+                    slice('2012-01-03', '2012-01-31'),
+                    tuple([[True, True, True, False, True]]), ]:

             # getitem

@@ -729,7 +787,7 @@ def compare(result, expected):
             try:
                 expected = df.ix[key]
             except KeyError:
-                self.assertRaises(KeyError, lambda : df.loc[key])
+                self.assertRaises(KeyError, lambda: df.loc[key])
                 continue

             result = df.loc[key]

@@ -744,27 +802,28 @@ def compare(result, expected):
             compare(df2, df1)

         # edge cases
-        s = Series([1,2,3,4], index=list('abde'))
+        s = Series([1, 2, 3, 4], index=list('abde'))

         result1 = s['a':'c']
         result2 = s.ix['a':'c']
         result3 = s.loc['a':'c']
-        assert_series_equal(result1,result2)
-        assert_series_equal(result1,result3)
+        assert_series_equal(result1, result2)
+        assert_series_equal(result1, result3)

         # now work rather than raising KeyError
-        s = Series(range(5),[-2,-1,1,2,3])
+        s = Series(range(5), [-2, -1, 1, 2, 3])

         result1 = s.ix[-10:3]
         result2 = s.loc[-10:3]
-        assert_series_equal(result1,result2)
+        assert_series_equal(result1, result2)

         result1 = s.ix[0:3]
         result2 = s.loc[0:3]
-        assert_series_equal(result1,result2)
+        assert_series_equal(result1, result2)

     def test_setitem_multiindex(self):
         for index_fn in ('ix', 'loc'):
+
             def check(target, indexers, value, compare_fn, expected=None):
                 fn = getattr(target, index_fn)
                 fn.__setitem__(indexers, value)

@@ -773,33 +832,36 @@ def check(target, indexers, value, compare_fn, expected=None):
                     expected = value
                 compare_fn(result, expected)
             # GH7190
-            index = pd.MultiIndex.from_product([np.arange(0,100), np.arange(0, 80)], names=['time', 'firm'])
+            index = pd.MultiIndex.from_product(
+                [np.arange(0, 100), np.arange(0, 80)], names=['time', 'firm'])
             t, n = 0, 2
-            df = DataFrame(np.nan,columns=['A', 'w', 'l', 'a', 'x', 'X', 'd', 'profit'], index=index)
-            check(
-                target=df, indexers=((t,n), 'X'),
-                value=0, compare_fn=self.assertEqual
-            )
-
-            df = DataFrame(-999,columns=['A', 'w', 'l', 'a', 'x', 'X', 'd', 'profit'], index=index)
-            check(
-                target=df, indexers=((t,n), 'X'),
-                value=1, compare_fn=self.assertEqual
-            )
-
-            df = DataFrame(columns=['A', 'w', 'l', 'a', 'x', 'X', 'd', 'profit'], index=index)
-            check(
-                target=df, indexers=((t,n), 'X'),
-                value=2, compare_fn=self.assertEqual
-            )
+            df = DataFrame(
+                np.nan, columns=['A', 'w', 'l', 'a', 'x', 'X', 'd', 'profit'],
+                index=index)
+            check(target=df, indexers=((t, n), 'X'), value=0,
+                  compare_fn=self.assertEqual)
+
+            df = DataFrame(
+                -999, columns=['A', 'w', 'l', 'a', 'x', 'X', 'd', 'profit'],
+                index=index)
+            check(target=df, indexers=((t, n), 'X'), value=1,
+                  compare_fn=self.assertEqual)
+
+            df = DataFrame(
+                columns=['A', 'w', 'l', 'a', 'x', 'X', 'd', 'profit'],
+                index=index)
+            check(target=df, indexers=((t, n), 'X'), value=2,
+                  compare_fn=self.assertEqual)

             # GH 7218, assinging with 0-dim arrays
-            df = DataFrame(-999,columns=['A', 'w', 'l', 'a', 'x', 'X', 'd', 'profit'], index=index)
-            check(
-                target=df, indexers=((t,n), 'X'),
-                value=np.array(3), compare_fn=self.assertEqual,
-                expected=3,
-            )
+            df = DataFrame(
+                -999, columns=['A', 'w', 'l', 'a', 'x', 'X', 'd', 'profit'],
+                index=index)
+            check(target=df,
+                  indexers=((t, n), 'X'),
+                  value=np.array(3),
+                  compare_fn=self.assertEqual,
+                  expected=3, )

             # GH5206
             df = pd.DataFrame(

@@ -812,67 +874,67 @@ def check(target, indexers, value, compare_fn, expected=None):
             df.ix[row_selection, col_selection] = df['F']
             output = pd.DataFrame(99., index=[0, 2, 4], columns=['B', 'C'])
             assert_frame_equal(df.ix[row_selection, col_selection], output)
-            check(
-                target=df, indexers=(row_selection, col_selection),
-                value=df['F'], compare_fn=assert_frame_equal,
-                expected=output,
-            )
+            check(target=df,
+                  indexers=(row_selection, col_selection),
+                  value=df['F'],
+                  compare_fn=assert_frame_equal,
+                  expected=output, )

             # GH11372
             idx = pd.MultiIndex.from_product([
-                ['A', 'B', 'C'],
-                pd.date_range('2015-01-01', '2015-04-01', freq='MS')
+                ['A', 'B', 'C'], pd.date_range(
+                    '2015-01-01', '2015-04-01', freq='MS')
             ])
             cols = pd.MultiIndex.from_product([
-                ['foo', 'bar'],
-                pd.date_range('2016-01-01', '2016-02-01', freq='MS')
+                ['foo', 'bar'], pd.date_range(
+                    '2016-01-01', '2016-02-01', freq='MS')
             ])

-            df = pd.DataFrame(np.random.random((12, 4)), index=idx, columns=cols)
-
-            subidx = pd.MultiIndex.from_tuples(
-                [('A', pd.Timestamp('2015-01-01')), ('A', pd.Timestamp('2015-02-01'))]
-            )
-            subcols = pd.MultiIndex.from_tuples(
-                [('foo', pd.Timestamp('2016-01-01')), ('foo', pd.Timestamp('2016-02-01'))]
-            )
-
-            vals = pd.DataFrame(np.random.random((2, 2)), index=subidx, columns=subcols)
-            check(
-                target=df, indexers=(subidx, subcols),
-                value=vals, compare_fn=assert_frame_equal,
-            )
+            df = pd.DataFrame(
+                np.random.random((12, 4)), index=idx, columns=cols)
+            subidx = pd.MultiIndex.from_tuples([('A', pd.Timestamp(
+                '2015-01-01')), ('A', pd.Timestamp('2015-02-01'))])
+            subcols = pd.MultiIndex.from_tuples([('foo', pd.Timestamp(
+                '2016-01-01')), ('foo', pd.Timestamp('2016-02-01'))])
+            vals = pd.DataFrame(
+                np.random.random((2, 2)), index=subidx, columns=subcols)
+            check(target=df,
+                  indexers=(subidx, subcols),
+                  value=vals,
+                  compare_fn=assert_frame_equal, )
             # set all columns
-            vals = pd.DataFrame(np.random.random((2, 4)), index=subidx, columns=cols)
-            check(
-                target=df, indexers=(subidx, slice(None, None, None)),
-                value=vals, compare_fn=assert_frame_equal,
-            )
+            vals = pd.DataFrame(
+                np.random.random((2, 4)), index=subidx, columns=cols)
+            check(target=df,
+                  indexers=(subidx, slice(None, None, None)),
+                  value=vals,
+                  compare_fn=assert_frame_equal, )
             # identity
             copy = df.copy()
-            check(
-                target=df, indexers=(df.index, df.columns),
-                value=df, compare_fn=assert_frame_equal,
-                expected=copy
-            )
+            check(target=df, indexers=(df.index, df.columns), value=df,
+                  compare_fn=assert_frame_equal, expected=copy)

     def test_indexing_with_datetime_tz(self):

         # 8260
         # support datetime64 with tz

-        idx = Index(date_range('20130101',periods=3,tz='US/Eastern'),
+        idx = Index(date_range('20130101', periods=3, tz='US/Eastern'),
                     name='foo')
-        dr  = date_range('20130110',periods=3)
-        df  = DataFrame({'A' : idx, 'B' : dr})
+        dr = date_range('20130110', periods=3)
+        df = DataFrame({'A': idx, 'B': dr})
         df['C'] = idx
-        df.iloc[1,1] = pd.NaT
-        df.iloc[1,2] = pd.NaT
+        df.iloc[1, 1] = pd.NaT
+        df.iloc[1, 2] = pd.NaT

         # indexing
         result = df.iloc[1]
-        expected = Series([Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern'), np.nan, np.nan],
+        expected = Series([Timestamp('2013-01-02 00:00:00-0500',
+                                     tz='US/Eastern'), np.nan, np.nan],
                           index=list('ABC'),
dtype='object', name=1) assert_series_equal(result, expected) result = df.loc[1] - expected = Series([Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern'), np.nan, np.nan], + expected = Series([Timestamp('2013-01-02 00:00:00-0500', + tz='US/Eastern'), np.nan, np.nan], index=list('ABC'), dtype='object', name=1) assert_series_equal(result, expected) @@ -891,85 +953,91 @@ def test_indexing_with_datetime_tz(self): assert_frame_equal(result, expected) # indexing - setting an element - df = DataFrame( data = pd.to_datetime(['2015-03-30 20:12:32','2015-03-12 00:11:11']) ,columns=['time'] ) - df['new_col']=['new','old'] - df.time=df.set_index('time').index.tz_localize('UTC') - v = df[df.new_col=='new'].set_index('time').index.tz_convert('US/Pacific') + df = DataFrame(data=pd.to_datetime( + ['2015-03-30 20:12:32', '2015-03-12 00:11:11']), columns=['time']) + df['new_col'] = ['new', 'old'] + df.time = df.set_index('time').index.tz_localize('UTC') + v = df[df.new_col == 'new'].set_index('time').index.tz_convert( + 'US/Pacific') # trying to set a single element on a part of a different timezone def f(): - df.loc[df.new_col=='new','time'] = v + df.loc[df.new_col == 'new', 'time'] = v + self.assertRaises(ValueError, f) - v = df.loc[df.new_col=='new','time'] + pd.Timedelta('1s') - df.loc[df.new_col=='new','time'] = v - assert_series_equal(df.loc[df.new_col=='new','time'],v) + v = df.loc[df.new_col == 'new', 'time'] + pd.Timedelta('1s') + df.loc[df.new_col == 'new', 'time'] = v + assert_series_equal(df.loc[df.new_col == 'new', 'time'], v) def test_loc_setitem_dups(self): # GH 6541 - df_orig = DataFrame({'me' : list('rttti'), - 'foo': list('aaade'), - 'bar': np.arange(5,dtype='float64')*1.34+2, - 'bar2': np.arange(5,dtype='float64')*-.34+2}).set_index('me') + df_orig = DataFrame( + {'me': list('rttti'), + 'foo': list('aaade'), + 'bar': np.arange(5, dtype='float64') * 1.34 + 2, + 'bar2': np.arange(5, dtype='float64') * -.34 + 2}).set_index('me') - indexer = 
tuple(['r',['bar','bar2']]) + indexer = tuple(['r', ['bar', 'bar2']]) df = df_orig.copy() - df.loc[indexer]*=2.0 - assert_series_equal(df.loc[indexer],2.0*df_orig.loc[indexer]) + df.loc[indexer] *= 2.0 + assert_series_equal(df.loc[indexer], 2.0 * df_orig.loc[indexer]) - indexer = tuple(['r','bar']) + indexer = tuple(['r', 'bar']) df = df_orig.copy() - df.loc[indexer]*=2.0 - self.assertEqual(df.loc[indexer],2.0*df_orig.loc[indexer]) + df.loc[indexer] *= 2.0 + self.assertEqual(df.loc[indexer], 2.0 * df_orig.loc[indexer]) - indexer = tuple(['t',['bar','bar2']]) + indexer = tuple(['t', ['bar', 'bar2']]) df = df_orig.copy() - df.loc[indexer]*=2.0 - assert_frame_equal(df.loc[indexer],2.0*df_orig.loc[indexer]) + df.loc[indexer] *= 2.0 + assert_frame_equal(df.loc[indexer], 2.0 * df_orig.loc[indexer]) def test_iloc_setitem_dups(self): # GH 6766 # iloc with a mask aligning from another iloc - df1 = DataFrame([{'A':None, 'B':1},{'A':2, 'B':2}]) - df2 = DataFrame([{'A':3, 'B':3},{'A':4, 'B':4}]) + df1 = DataFrame([{'A': None, 'B': 1}, {'A': 2, 'B': 2}]) + df2 = DataFrame([{'A': 3, 'B': 3}, {'A': 4, 'B': 4}]) df = concat([df1, df2], axis=1) expected = df.fillna(3) expected['A'] = expected['A'].astype('float64') inds = np.isnan(df.iloc[:, 0]) mask = inds[inds].index - df.iloc[mask,0] = df.iloc[mask,2] + df.iloc[mask, 0] = df.iloc[mask, 2] assert_frame_equal(df, expected) # del a dup column across blocks - expected = DataFrame({ 0 : [1,2], 1 : [3,4] }) - expected.columns=['B','B'] + expected = DataFrame({0: [1, 2], 1: [3, 4]}) + expected.columns = ['B', 'B'] del df['A'] assert_frame_equal(df, expected) # assign back to self - df.iloc[[0,1],[0,1]] = df.iloc[[0,1],[0,1]] + df.iloc[[0, 1], [0, 1]] = df.iloc[[0, 1], [0, 1]] assert_frame_equal(df, expected) # reversed x 2 - df.iloc[[1,0],[0,1]] = df.iloc[[1,0],[0,1]].reset_index(drop=True) - df.iloc[[1,0],[0,1]] = df.iloc[[1,0],[0,1]].reset_index(drop=True) + df.iloc[[1, 0], [0, 1]] = df.iloc[[1, 0], [0, 1]].reset_index( + drop=True) 
+ df.iloc[[1, 0], [0, 1]] = df.iloc[[1, 0], [0, 1]].reset_index( + drop=True) assert_frame_equal(df, expected) def test_chained_getitem_with_lists(self): # GH6394 - # Regression in chained getitem indexing with embedded list-like from 0.12 + # Regression in chained getitem indexing with embedded list-like from + # 0.12 def check(result, expected): - tm.assert_numpy_array_equal(result,expected) + tm.assert_numpy_array_equal(result, expected) tm.assertIsInstance(result, np.ndarray) - - df = DataFrame({'A': 5*[np.zeros(3)], 'B':5*[np.ones(3)]}) + df = DataFrame({'A': 5 * [np.zeros(3)], 'B': 5 * [np.ones(3)]}) expected = df['A'].iloc[2] - result = df.loc[2,'A'] + result = df.loc[2, 'A'] check(result, expected) result2 = df.iloc[2]['A'] check(result2, expected) @@ -981,95 +1049,135 @@ def check(result, expected): def test_loc_getitem_int(self): # int label - self.check_result('int label', 'loc', 2, 'ix', 2, typs = ['ints'], axes = 0) - self.check_result('int label', 'loc', 3, 'ix', 3, typs = ['ints'], axes = 1) - self.check_result('int label', 'loc', 4, 'ix', 4, typs = ['ints'], axes = 2) - self.check_result('int label', 'loc', 2, 'ix', 2, typs = ['label'], fails = KeyError) + self.check_result('int label', 'loc', 2, 'ix', 2, typs=['ints'], + axes=0) + self.check_result('int label', 'loc', 3, 'ix', 3, typs=['ints'], + axes=1) + self.check_result('int label', 'loc', 4, 'ix', 4, typs=['ints'], + axes=2) + self.check_result('int label', 'loc', 2, 'ix', 2, typs=['label'], + fails=KeyError) def test_loc_getitem_label(self): # label - self.check_result('label', 'loc', 'c', 'ix', 'c', typs = ['labels'], axes=0) - self.check_result('label', 'loc', 'null', 'ix', 'null', typs = ['mixed'] , axes=0) - self.check_result('label', 'loc', 8, 'ix', 8, typs = ['mixed'] , axes=0) - self.check_result('label', 'loc', Timestamp('20130102'), 'ix', 1, typs = ['ts'], axes=0) - self.check_result('label', 'loc', 'c', 'ix', 'c', typs = ['empty'], fails = KeyError) + self.check_result('label', 
'loc', 'c', 'ix', 'c', typs=['labels'], + axes=0) + self.check_result('label', 'loc', 'null', 'ix', 'null', typs=['mixed'], + axes=0) + self.check_result('label', 'loc', 8, 'ix', 8, typs=['mixed'], axes=0) + self.check_result('label', 'loc', Timestamp('20130102'), 'ix', 1, + typs=['ts'], axes=0) + self.check_result('label', 'loc', 'c', 'ix', 'c', typs=['empty'], + fails=KeyError) def test_loc_getitem_label_out_of_range(self): # out of range label - self.check_result('label range', 'loc', 'f', 'ix', 'f', typs = ['ints','labels','mixed','ts'], fails=KeyError) - self.check_result('label range', 'loc', 'f', 'ix', 'f', typs = ['floats'], fails=TypeError) - self.check_result('label range', 'loc', 20, 'ix', 20, typs = ['ints','labels','mixed'], fails=KeyError) - self.check_result('label range', 'loc', 20, 'ix', 20, typs = ['ts'], axes=0, fails=TypeError) - self.check_result('label range', 'loc', 20, 'ix', 20, typs = ['floats'], axes=0, fails=TypeError) + self.check_result('label range', 'loc', 'f', 'ix', 'f', + typs=['ints', 'labels', 'mixed', 'ts'], + fails=KeyError) + self.check_result('label range', 'loc', 'f', 'ix', 'f', + typs=['floats'], fails=TypeError) + self.check_result('label range', 'loc', 20, 'ix', 20, + typs=['ints', 'labels', 'mixed'], fails=KeyError) + self.check_result('label range', 'loc', 20, 'ix', 20, typs=['ts'], + axes=0, fails=TypeError) + self.check_result('label range', 'loc', 20, 'ix', 20, typs=['floats'], + axes=0, fails=TypeError) def test_loc_getitem_label_list(self): # list of labels - self.check_result('list lbl', 'loc', [0,2,4], 'ix', [0,2,4], typs = ['ints'], axes=0) - self.check_result('list lbl', 'loc', [3,6,9], 'ix', [3,6,9], typs = ['ints'], axes=1) - self.check_result('list lbl', 'loc', [4,8,12], 'ix', [4,8,12], typs = ['ints'], axes=2) - self.check_result('list lbl', 'loc', ['a','b','d'], 'ix', ['a','b','d'], typs = ['labels'], axes=0) - self.check_result('list lbl', 'loc', ['A','B','C'], 'ix', ['A','B','C'], typs = ['labels'], 
axes=1) - self.check_result('list lbl', 'loc', ['Z','Y','W'], 'ix', ['Z','Y','W'], typs = ['labels'], axes=2) - self.check_result('list lbl', 'loc', [2,8,'null'], 'ix', [2,8,'null'], typs = ['mixed'], axes=0) - self.check_result('list lbl', 'loc', [Timestamp('20130102'),Timestamp('20130103')], 'ix', - [Timestamp('20130102'),Timestamp('20130103')], typs = ['ts'], axes=0) - - self.check_result('list lbl', 'loc', [0,1,2], 'indexer', [0,1,2], typs = ['empty'], fails = KeyError) - self.check_result('list lbl', 'loc', [0,2,3], 'ix', [0,2,3], typs = ['ints'], axes=0, fails = KeyError) - self.check_result('list lbl', 'loc', [3,6,7], 'ix', [3,6,7], typs = ['ints'], axes=1, fails = KeyError) - self.check_result('list lbl', 'loc', [4,8,10], 'ix', [4,8,10], typs = ['ints'], axes=2, fails = KeyError) + self.check_result('list lbl', 'loc', [0, 2, 4], 'ix', [0, 2, 4], + typs=['ints'], axes=0) + self.check_result('list lbl', 'loc', [3, 6, 9], 'ix', [3, 6, 9], + typs=['ints'], axes=1) + self.check_result('list lbl', 'loc', [4, 8, 12], 'ix', [4, 8, 12], + typs=['ints'], axes=2) + self.check_result('list lbl', 'loc', ['a', 'b', 'd'], 'ix', + ['a', 'b', 'd'], typs=['labels'], axes=0) + self.check_result('list lbl', 'loc', ['A', 'B', 'C'], 'ix', + ['A', 'B', 'C'], typs=['labels'], axes=1) + self.check_result('list lbl', 'loc', ['Z', 'Y', 'W'], 'ix', + ['Z', 'Y', 'W'], typs=['labels'], axes=2) + self.check_result('list lbl', 'loc', [2, 8, 'null'], 'ix', + [2, 8, 'null'], typs=['mixed'], axes=0) + self.check_result('list lbl', 'loc', + [Timestamp('20130102'), Timestamp('20130103')], 'ix', + [Timestamp('20130102'), Timestamp('20130103')], + typs=['ts'], axes=0) + + self.check_result('list lbl', 'loc', [0, 1, 2], 'indexer', [0, 1, 2], + typs=['empty'], fails=KeyError) + self.check_result('list lbl', 'loc', [0, 2, 3], 'ix', [0, 2, 3], + typs=['ints'], axes=0, fails=KeyError) + self.check_result('list lbl', 'loc', [3, 6, 7], 'ix', [3, 6, 7], + typs=['ints'], axes=1, fails=KeyError) + 
self.check_result('list lbl', 'loc', [4, 8, 10], 'ix', [4, 8, 10], + typs=['ints'], axes=2, fails=KeyError) # fails - self.check_result('list lbl', 'loc', [20,30,40], 'ix', [20,30,40], typs = ['ints'], axes=1, fails = KeyError) - self.check_result('list lbl', 'loc', [20,30,40], 'ix', [20,30,40], typs = ['ints'], axes=2, fails = KeyError) + self.check_result('list lbl', 'loc', [20, 30, 40], 'ix', [20, 30, 40], + typs=['ints'], axes=1, fails=KeyError) + self.check_result('list lbl', 'loc', [20, 30, 40], 'ix', [20, 30, 40], + typs=['ints'], axes=2, fails=KeyError) # array like - self.check_result('array like', 'loc', Series(index=[0,2,4]).index, 'ix', [0,2,4], typs = ['ints'], axes=0) - self.check_result('array like', 'loc', Series(index=[3,6,9]).index, 'ix', [3,6,9], typs = ['ints'], axes=1) - self.check_result('array like', 'loc', Series(index=[4,8,12]).index, 'ix', [4,8,12], typs = ['ints'], axes=2) + self.check_result('array like', 'loc', Series(index=[0, 2, 4]).index, + 'ix', [0, 2, 4], typs=['ints'], axes=0) + self.check_result('array like', 'loc', Series(index=[3, 6, 9]).index, + 'ix', [3, 6, 9], typs=['ints'], axes=1) + self.check_result('array like', 'loc', Series(index=[4, 8, 12]).index, + 'ix', [4, 8, 12], typs=['ints'], axes=2) def test_loc_getitem_bool(self): # boolean indexers - b = [True,False,True,False] - self.check_result('bool', 'loc', b, 'ix', b, typs = ['ints','labels','mixed','ts','floats']) - self.check_result('bool', 'loc', b, 'ix', b, typs = ['empty'], fails = KeyError) + b = [True, False, True, False] + self.check_result('bool', 'loc', b, 'ix', b, + typs=['ints', 'labels', 'mixed', 'ts', 'floats']) + self.check_result('bool', 'loc', b, 'ix', b, typs=['empty'], + fails=KeyError) def test_loc_getitem_int_slice(self): # ok - self.check_result('int slice2', 'loc', slice(2,4), 'ix', [2,4], typs = ['ints'], axes = 0) - self.check_result('int slice2', 'loc', slice(3,6), 'ix', [3,6], typs = ['ints'], axes = 1) - self.check_result('int slice2', 'loc', 
slice(4,8), 'ix', [4,8], typs = ['ints'], axes = 2) + self.check_result('int slice2', 'loc', slice(2, 4), 'ix', [2, 4], + typs=['ints'], axes=0) + self.check_result('int slice2', 'loc', slice(3, 6), 'ix', [3, 6], + typs=['ints'], axes=1) + self.check_result('int slice2', 'loc', slice(4, 8), 'ix', [4, 8], + typs=['ints'], axes=2) # GH 3053 # loc should treat integer slices like label slices from itertools import product - index = MultiIndex.from_tuples([t for t in product([6,7,8], ['a', 'b'])]) + index = MultiIndex.from_tuples([t for t in product( + [6, 7, 8], ['a', 'b'])]) df = DataFrame(np.random.randn(6, 6), index, index) - result = df.loc[6:8,:] - expected = df.ix[6:8,:] - assert_frame_equal(result,expected) + result = df.loc[6:8, :] + expected = df.ix[6:8, :] + assert_frame_equal(result, expected) - index = MultiIndex.from_tuples([t for t in product([10, 20, 30], ['a', 'b'])]) + index = MultiIndex.from_tuples([t + for t in product( + [10, 20, 30], ['a', 'b'])]) df = DataFrame(np.random.randn(6, 6), index, index) - result = df.loc[20:30,:] - expected = df.ix[20:30,:] - assert_frame_equal(result,expected) + result = df.loc[20:30, :] + expected = df.ix[20:30, :] + assert_frame_equal(result, expected) # doc examples - result = df.loc[10,:] - expected = df.ix[10,:] - assert_frame_equal(result,expected) + result = df.loc[10, :] + expected = df.ix[10, :] + assert_frame_equal(result, expected) - result = df.loc[:,10] - #expected = df.ix[:,10] (this fails) + result = df.loc[:, 10] + # expected = df.ix[:,10] (this fails) expected = df[10] - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) def test_loc_to_fail(self): @@ -1079,7 +1187,8 @@ def test_loc_to_fail(self): columns=['e', 'f', 'g']) # raise a KeyError? 
- self.assertRaises(KeyError, df.loc.__getitem__, tuple([[1, 2], [1, 2]])) + self.assertRaises(KeyError, df.loc.__getitem__, + tuple([[1, 2], [1, 2]])) # GH 7496 # loc should not fallback @@ -1088,139 +1197,167 @@ def test_loc_to_fail(self): s.loc[1] = 1 s.loc['a'] = 2 - self.assertRaises(KeyError, lambda : s.loc[-1]) - self.assertRaises(KeyError, lambda : s.loc[[-1, -2]]) + self.assertRaises(KeyError, lambda: s.loc[-1]) + self.assertRaises(KeyError, lambda: s.loc[[-1, -2]]) - self.assertRaises(KeyError, lambda : s.loc[['4']]) + self.assertRaises(KeyError, lambda: s.loc[['4']]) s.loc[-1] = 3 - result = s.loc[[-1,-2]] - expected = Series([3,np.nan],index=[-1,-2]) + result = s.loc[[-1, -2]] + expected = Series([3, np.nan], index=[-1, -2]) assert_series_equal(result, expected) s['a'] = 2 - self.assertRaises(KeyError, lambda : s.loc[[-2]]) + self.assertRaises(KeyError, lambda: s.loc[[-2]]) del s['a'] + def f(): s.loc[[-2]] = 0 + self.assertRaises(KeyError, f) # inconsistency between .loc[values] and .loc[values,:] # GH 7999 - df = DataFrame([['a'],['b']],index=[1,2],columns=['value']) + df = DataFrame([['a'], ['b']], index=[1, 2], columns=['value']) def f(): - df.loc[[3],:] + df.loc[[3], :] + self.assertRaises(KeyError, f) def f(): df.loc[[3]] + self.assertRaises(KeyError, f) # at should not fallback # GH 7814 - s = Series([1,2,3], index=list('abc')) + s = Series([1, 2, 3], index=list('abc')) result = s.at['a'] self.assertEqual(result, 1) - self.assertRaises(ValueError, lambda : s.at[0]) + self.assertRaises(ValueError, lambda: s.at[0]) - df = DataFrame({'A' : [1,2,3]},index=list('abc')) - result = df.at['a','A'] + df = DataFrame({'A': [1, 2, 3]}, index=list('abc')) + result = df.at['a', 'A'] self.assertEqual(result, 1) - self.assertRaises(ValueError, lambda : df.at['a',0]) + self.assertRaises(ValueError, lambda: df.at['a', 0]) - s = Series([1,2,3], index=[3,2,1]) + s = Series([1, 2, 3], index=[3, 2, 1]) result = s.at[1] self.assertEqual(result, 3) - 
self.assertRaises(ValueError, lambda : s.at['a']) + self.assertRaises(ValueError, lambda: s.at['a']) - df = DataFrame({0 : [1,2,3]},index=[3,2,1]) - result = df.at[1,0] + df = DataFrame({0: [1, 2, 3]}, index=[3, 2, 1]) + result = df.at[1, 0] self.assertEqual(result, 3) - self.assertRaises(ValueError, lambda : df.at['a',0]) + self.assertRaises(ValueError, lambda: df.at['a', 0]) def test_loc_getitem_label_slice(self): # label slices (with ints) - self.check_result('lab slice', 'loc', slice(1,3), 'ix', slice(1,3), typs = ['labels','mixed','empty','ts','floats'], fails=TypeError) + self.check_result('lab slice', 'loc', slice(1, 3), 'ix', slice( + 1, 3), typs=['labels', 'mixed', 'empty', 'ts', 'floats'], + fails=TypeError) # real label slices - self.check_result('lab slice', 'loc', slice('a','c'), 'ix', slice('a','c'), typs = ['labels'], axes=0) - self.check_result('lab slice', 'loc', slice('A','C'), 'ix', slice('A','C'), typs = ['labels'], axes=1) - self.check_result('lab slice', 'loc', slice('W','Z'), 'ix', slice('W','Z'), typs = ['labels'], axes=2) - - self.check_result('ts slice', 'loc', slice('20130102','20130104'), 'ix', slice('20130102','20130104'), typs = ['ts'], axes=0) - self.check_result('ts slice', 'loc', slice('20130102','20130104'), 'ix', slice('20130102','20130104'), typs = ['ts'], axes=1, fails=TypeError) - self.check_result('ts slice', 'loc', slice('20130102','20130104'), 'ix', slice('20130102','20130104'), typs = ['ts'], axes=2, fails=TypeError) - - self.check_result('mixed slice', 'loc', slice(2,8), 'ix', slice(2,8), typs = ['mixed'], axes=0, fails=TypeError) - self.check_result('mixed slice', 'loc', slice(2,8), 'ix', slice(2,8), typs = ['mixed'], axes=1, fails=KeyError) - self.check_result('mixed slice', 'loc', slice(2,8), 'ix', slice(2,8), typs = ['mixed'], axes=2, fails=KeyError) - - self.check_result('mixed slice', 'loc', slice(2,4,2), 'ix', slice(2,4,2), typs = ['mixed'], axes=0, fails=TypeError) + self.check_result('lab slice', 'loc', slice('a', 
'c'), 'ix', slice( + 'a', 'c'), typs=['labels'], axes=0) + self.check_result('lab slice', 'loc', slice('A', 'C'), 'ix', slice( + 'A', 'C'), typs=['labels'], axes=1) + self.check_result('lab slice', 'loc', slice('W', 'Z'), 'ix', slice( + 'W', 'Z'), typs=['labels'], axes=2) + + self.check_result('ts slice', 'loc', slice( + '20130102', '20130104'), 'ix', slice('20130102', '20130104'), + typs=['ts'], axes=0) + self.check_result('ts slice', 'loc', slice( + '20130102', '20130104'), 'ix', slice('20130102', '20130104'), + typs=['ts'], axes=1, fails=TypeError) + self.check_result('ts slice', 'loc', slice( + '20130102', '20130104'), 'ix', slice('20130102', '20130104'), + typs=['ts'], axes=2, fails=TypeError) + + self.check_result('mixed slice', 'loc', slice(2, 8), 'ix', slice(2, 8), + typs=['mixed'], axes=0, fails=TypeError) + self.check_result('mixed slice', 'loc', slice(2, 8), 'ix', slice(2, 8), + typs=['mixed'], axes=1, fails=KeyError) + self.check_result('mixed slice', 'loc', slice(2, 8), 'ix', slice(2, 8), + typs=['mixed'], axes=2, fails=KeyError) + + self.check_result('mixed slice', 'loc', slice(2, 4, 2), 'ix', slice( + 2, 4, 2), typs=['mixed'], axes=0, fails=TypeError) def test_loc_general(self): - df = DataFrame(np.random.rand(4,4),columns=['A','B','C','D'], index=['A','B','C','D']) + df = DataFrame( + np.random.rand(4, 4), columns=['A', 'B', 'C', 'D'], + index=['A', 'B', 'C', 'D']) # want this to work - result = df.loc[:,"A":"B"].iloc[0:2,:] - self.assertTrue((result.columns == ['A','B']).all() == True) - self.assertTrue((result.index == ['A','B']).all() == True) + result = df.loc[:, "A":"B"].iloc[0:2, :] + self.assertTrue((result.columns == ['A', 'B']).all()) + self.assertTrue((result.index == ['A', 'B']).all()) # mixed type - result = DataFrame({ 'a' : [Timestamp('20130101')], 'b' : [1] }).iloc[0] - expected = Series([ Timestamp('20130101'), 1], index=['a','b'], name=0) + result = DataFrame({'a': [Timestamp('20130101')], 'b': [1]}).iloc[0] + expected = 
Series([Timestamp('20130101'), 1], index=['a', 'b'], name=0) assert_series_equal(result, expected) self.assertEqual(result.dtype, object) def test_loc_setitem_consistency(self): - # GH 6149 # coerce similary for setitem and loc when rows have a null-slice - expected = DataFrame({ 'date': Series(0,index=range(5),dtype=np.int64), - 'val' : Series(range(5),dtype=np.int64) }) - - df = DataFrame({ 'date': date_range('2000-01-01','2000-01-5'), - 'val' : Series(range(5),dtype=np.int64) }) - df.loc[:,'date'] = 0 - assert_frame_equal(df,expected) - - df = DataFrame({ 'date': date_range('2000-01-01','2000-01-5'), - 'val' : Series(range(5),dtype=np.int64) }) - df.loc[:,'date'] = np.array(0,dtype=np.int64) - assert_frame_equal(df,expected) - - df = DataFrame({ 'date': date_range('2000-01-01','2000-01-5'), - 'val' : Series(range(5),dtype=np.int64) }) - df.loc[:,'date'] = np.array([0,0,0,0,0],dtype=np.int64) - assert_frame_equal(df,expected) - - expected = DataFrame({ 'date': Series('foo',index=range(5)), - 'val' : Series(range(5),dtype=np.int64) }) - df = DataFrame({ 'date': date_range('2000-01-01','2000-01-5'), - 'val' : Series(range(5),dtype=np.int64) }) - df.loc[:,'date'] = 'foo' - assert_frame_equal(df,expected) - - expected = DataFrame({ 'date': Series(1.0,index=range(5)), - 'val' : Series(range(5),dtype=np.int64) }) - df = DataFrame({ 'date': date_range('2000-01-01','2000-01-5'), - 'val' : Series(range(5),dtype=np.int64) }) - df.loc[:,'date'] = 1.0 - assert_frame_equal(df,expected) + expected = DataFrame({'date': Series(0, index=range(5), + dtype=np.int64), + 'val': Series(range(5), dtype=np.int64)}) + + df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'), + 'val': Series( + range(5), dtype=np.int64)}) + df.loc[:, 'date'] = 0 + assert_frame_equal(df, expected) + + df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'), + 'val': Series( + range(5), dtype=np.int64)}) + df.loc[:, 'date'] = np.array(0, dtype=np.int64) + assert_frame_equal(df, expected) + + df 
= DataFrame({'date': date_range('2000-01-01', '2000-01-5'), + 'val': Series( + range(5), dtype=np.int64)}) + df.loc[:, 'date'] = np.array([0, 0, 0, 0, 0], dtype=np.int64) + assert_frame_equal(df, expected) + + expected = DataFrame({'date': Series('foo', index=range(5)), + 'val': Series( + range(5), dtype=np.int64)}) + df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'), + 'val': Series( + range(5), dtype=np.int64)}) + df.loc[:, 'date'] = 'foo' + assert_frame_equal(df, expected) + + expected = DataFrame({'date': Series(1.0, index=range(5)), + 'val': Series( + range(5), dtype=np.int64)}) + df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'), + 'val': Series( + range(5), dtype=np.int64)}) + df.loc[:, 'date'] = 1.0 + assert_frame_equal(df, expected) # empty (essentially noops) expected = DataFrame(columns=['x', 'y']) expected['x'] = expected['x'].astype(np.int64) df = DataFrame(columns=['x', 'y']) df.loc[:, 'x'] = 1 - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) df = DataFrame(columns=['x', 'y']) df['x'] = 1 - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) # .loc[:,column] setting with slice == len of the column # GH10408 @@ -1232,109 +1369,122 @@ def test_loc_setitem_consistency(self): Region_1,Site_2,3977723249,A,5/20/2015 8:27,5/20/2015 8:41,Yes, Region_1,Site_2,3977723089,A,5/20/2015 8:33,5/20/2015 9:09,Yes,No""" - df = pd.read_csv(StringIO(data),header=[0,1], index_col=[0,1,2]) - df.loc[:,('Respondent','StartDate')] = pd.to_datetime(df.loc[:,('Respondent','StartDate')]) - df.loc[:,('Respondent','EndDate')] = pd.to_datetime(df.loc[:,('Respondent','EndDate')]) - df.loc[:,('Respondent','Duration')] = df.loc[:,('Respondent','EndDate')] - df.loc[:,('Respondent','StartDate')] + df = pd.read_csv(StringIO(data), header=[0, 1], index_col=[0, 1, 2]) + df.loc[:, ('Respondent', 'StartDate')] = pd.to_datetime(df.loc[:, ( + 'Respondent', 'StartDate')]) + df.loc[:, ('Respondent', 'EndDate')] = pd.to_datetime(df.loc[:, 
( + 'Respondent', 'EndDate')]) + df.loc[:, ('Respondent', 'Duration')] = df.loc[:, ( + 'Respondent', 'EndDate')] - df.loc[:, ('Respondent', 'StartDate')] - df.loc[:,('Respondent','Duration')] = df.loc[:,('Respondent','Duration')].astype('timedelta64[s]') - expected = Series([1380,720,840,2160.],index=df.index,name=('Respondent','Duration')) - assert_series_equal(df[('Respondent','Duration')],expected) + df.loc[:, ('Respondent', 'Duration')] = df.loc[:, ( + 'Respondent', 'Duration')].astype('timedelta64[s]') + expected = Series([1380, 720, 840, 2160.], index=df.index, + name=('Respondent', 'Duration')) + assert_series_equal(df[('Respondent', 'Duration')], expected) def test_loc_setitem_frame(self): df = self.frame_labels - result = df.iloc[0,0] + result = df.iloc[0, 0] - df.loc['a','A'] = 1 - result = df.loc['a','A'] + df.loc['a', 'A'] = 1 + result = df.loc['a', 'A'] self.assertEqual(result, 1) - result = df.iloc[0,0] + result = df.iloc[0, 0] self.assertEqual(result, 1) - df.loc[:,'B':'D'] = 0 - expected = df.loc[:,'B':'D'] - result = df.ix[:,1:] + df.loc[:, 'B':'D'] = 0 + expected = df.loc[:, 'B':'D'] + result = df.ix[:, 1:] assert_frame_equal(result, expected) # GH 6254 # setting issue df = DataFrame(index=[3, 5, 4], columns=['A']) - df.loc[[4, 3, 5], 'A'] = np.array([1, 2, 3],dtype='int64') - expected = DataFrame(dict(A = Series([1,2,3],index=[4, 3, 5]))).reindex(index=[3,5,4]) + df.loc[[4, 3, 5], 'A'] = np.array([1, 2, 3], dtype='int64') + expected = DataFrame(dict(A=Series( + [1, 2, 3], index=[4, 3, 5]))).reindex(index=[3, 5, 4]) assert_frame_equal(df, expected) # GH 6252 # setting with an empty frame keys1 = ['@' + str(i) for i in range(5)] - val1 = np.arange(5,dtype='int64') + val1 = np.arange(5, dtype='int64') keys2 = ['@' + str(i) for i in range(4)] - val2 = np.arange(4,dtype='int64') + val2 = np.arange(4, dtype='int64') index = list(set(keys1).union(keys2)) - df = DataFrame(index = index) + df = DataFrame(index=index) df['A'] = nan df.loc[keys1, 'A'] = 
val1 df['B'] = nan df.loc[keys2, 'B'] = val2 - expected = DataFrame(dict(A = Series(val1,index=keys1), B = Series(val2,index=keys2))).reindex(index=index) + expected = DataFrame(dict(A=Series(val1, index=keys1), B=Series( + val2, index=keys2))).reindex(index=index) assert_frame_equal(df, expected) # GH 8669 # invalid coercion of nan -> int - df = DataFrame({'A' : [1,2,3], 'B' : np.nan }) + df = DataFrame({'A': [1, 2, 3], 'B': np.nan}) df.loc[df.B > df.A, 'B'] = df.A - expected = DataFrame({'A' : [1,2,3], 'B' : np.nan}) + expected = DataFrame({'A': [1, 2, 3], 'B': np.nan}) assert_frame_equal(df, expected) # GH 6546 # setting with mixed labels - df = DataFrame({1:[1,2],2:[3,4],'a':['a','b']}) + df = DataFrame({1: [1, 2], 2: [3, 4], 'a': ['a', 'b']}) - result = df.loc[0, [1,2]] - expected = Series([1,3],index=[1,2],dtype=object, name=0) + result = df.loc[0, [1, 2]] + expected = Series([1, 3], index=[1, 2], dtype=object, name=0) assert_series_equal(result, expected) - expected = DataFrame({1:[5,2],2:[6,4],'a':['a','b']}) - df.loc[0, [1,2]] = [5,6] + expected = DataFrame({1: [5, 2], 2: [6, 4], 'a': ['a', 'b']}) + df.loc[0, [1, 2]] = [5, 6] assert_frame_equal(df, expected) def test_loc_setitem_frame_multiples(self): # multiple setting - df = DataFrame({ 'A' : ['foo','bar','baz'], - 'B' : Series(range(3),dtype=np.int64) }) + df = DataFrame({'A': ['foo', 'bar', 'baz'], + 'B': Series( + range(3), dtype=np.int64)}) rhs = df.loc[1:2] rhs.index = df.index[0:2] df.loc[0:1] = rhs - expected = DataFrame({ 'A' : ['bar','baz','baz'], - 'B' : Series([1,2,2],dtype=np.int64) }) + expected = DataFrame({'A': ['bar', 'baz', 'baz'], + 'B': Series( + [1, 2, 2], dtype=np.int64)}) assert_frame_equal(df, expected) - # multiple setting with frame on rhs (with M8) - df = DataFrame({ 'date' : date_range('2000-01-01','2000-01-5'), - 'val' : Series(range(5),dtype=np.int64) }) - expected = DataFrame({ 'date' : [Timestamp('20000101'),Timestamp('20000102'),Timestamp('20000101'), - 
Timestamp('20000102'),Timestamp('20000103')], - 'val' : Series([0,1,0,1,2],dtype=np.int64) }) + df = DataFrame({'date': date_range('2000-01-01', '2000-01-5'), + 'val': Series( + range(5), dtype=np.int64)}) + expected = DataFrame({'date': [Timestamp('20000101'), Timestamp( + '20000102'), Timestamp('20000101'), Timestamp('20000102'), + Timestamp('20000103')], + 'val': Series( + [0, 1, 0, 1, 2], dtype=np.int64)}) rhs = df.loc[0:2] rhs.index = df.index[2:5] df.loc[2:4] = rhs assert_frame_equal(df, expected) def test_iloc_getitem_frame(self): - df = DataFrame(np.random.randn(10, 4), index=lrange(0, 20, 2), columns=lrange(0,8,2)) + df = DataFrame( + np.random.randn( + 10, 4), index=lrange(0, 20, 2), columns=lrange(0, 8, 2)) result = df.iloc[2] exp = df.ix[4] assert_series_equal(result, exp) - result = df.iloc[2,2] - exp = df.ix[4,4] + result = df.iloc[2, 2] + exp = df.ix[4, 4] self.assertEqual(result, exp) # slice @@ -1342,134 +1492,142 @@ def test_iloc_getitem_frame(self): expected = df.ix[8:14] assert_frame_equal(result, expected) - result = df.iloc[:,2:3] - expected = df.ix[:,4:5] + result = df.iloc[:, 2:3] + expected = df.ix[:, 4:5] assert_frame_equal(result, expected) # list of integers - result = df.iloc[[0,1,3]] - expected = df.ix[[0,2,6]] + result = df.iloc[[0, 1, 3]] + expected = df.ix[[0, 2, 6]] assert_frame_equal(result, expected) - result = df.iloc[[0,1,3],[0,1]] - expected = df.ix[[0,2,6],[0,2]] + result = df.iloc[[0, 1, 3], [0, 1]] + expected = df.ix[[0, 2, 6], [0, 2]] assert_frame_equal(result, expected) # neg indicies - result = df.iloc[[-1,1,3],[-1,1]] - expected = df.ix[[18,2,6],[6,2]] + result = df.iloc[[-1, 1, 3], [-1, 1]] + expected = df.ix[[18, 2, 6], [6, 2]] assert_frame_equal(result, expected) # dups indicies - result = df.iloc[[-1,-1,1,3],[-1,1]] - expected = df.ix[[18,18,2,6],[6,2]] + result = df.iloc[[-1, -1, 1, 3], [-1, 1]] + expected = df.ix[[18, 18, 2, 6], [6, 2]] assert_frame_equal(result, expected) # with index-like - s = 
Series(index=lrange(1,5)) + s = Series(index=lrange(1, 5)) result = df.iloc[s.index] - expected = df.ix[[2,4,6,8]] + expected = df.ix[[2, 4, 6, 8]] assert_frame_equal(result, expected) # try with labelled frame - df = DataFrame(np.random.randn(10, 4), index=list('abcdefghij'), columns=list('ABCD')) + df = DataFrame( + np.random.randn(10, + 4), index=list('abcdefghij'), columns=list('ABCD')) - result = df.iloc[1,1] - exp = df.ix['b','B'] + result = df.iloc[1, 1] + exp = df.ix['b', 'B'] self.assertEqual(result, exp) - result = df.iloc[:,2:3] - expected = df.ix[:,['C']] + result = df.iloc[:, 2:3] + expected = df.ix[:, ['C']] assert_frame_equal(result, expected) # negative indexing - result = df.iloc[-1,-1] - exp = df.ix['j','D'] + result = df.iloc[-1, -1] + exp = df.ix['j', 'D'] self.assertEqual(result, exp) # out-of-bounds exception - self.assertRaises(IndexError, df.iloc.__getitem__, tuple([10,5])) + self.assertRaises(IndexError, df.iloc.__getitem__, tuple([10, 5])) # trying to use a label - self.assertRaises(ValueError, df.iloc.__getitem__, tuple(['j','D'])) + self.assertRaises(ValueError, df.iloc.__getitem__, tuple(['j', 'D'])) def test_iloc_getitem_panel(self): # GH 7189 - p = Panel(np.arange(4*3*2).reshape(4,3,2), - items=['A','B','C','D'], - major_axis=['a','b','c'], - minor_axis=['one','two']) + p = Panel(np.arange(4 * 3 * 2).reshape(4, 3, 2), + items=['A', 'B', 'C', 'D'], + major_axis=['a', 'b', 'c'], + minor_axis=['one', 'two']) result = p.iloc[1] expected = p.loc['B'] assert_frame_equal(result, expected) - result = p.iloc[1,1] - expected = p.loc['B','b'] + result = p.iloc[1, 1] + expected = p.loc['B', 'b'] assert_series_equal(result, expected) - result = p.iloc[1,1,1] - expected = p.loc['B','b','two'] - self.assertEqual(result,expected) + result = p.iloc[1, 1, 1] + expected = p.loc['B', 'b', 'two'] + self.assertEqual(result, expected) # slice result = p.iloc[1:3] - expected = p.loc[['B','C']] + expected = p.loc[['B', 'C']] assert_panel_equal(result, 
expected) - result = p.iloc[:,0:2] - expected = p.loc[:,['a','b']] + result = p.iloc[:, 0:2] + expected = p.loc[:, ['a', 'b']] assert_panel_equal(result, expected) # list of integers - result = p.iloc[[0,2]] - expected = p.loc[['A','C']] + result = p.iloc[[0, 2]] + expected = p.loc[['A', 'C']] assert_panel_equal(result, expected) # neg indicies - result = p.iloc[[-1,1],[-1,1]] - expected = p.loc[['D','B'],['c','b']] + result = p.iloc[[-1, 1], [-1, 1]] + expected = p.loc[['D', 'B'], ['c', 'b']] assert_panel_equal(result, expected) # dups indicies - result = p.iloc[[-1,-1,1],[-1,1]] - expected = p.loc[['D','D','B'],['c','b']] + result = p.iloc[[-1, -1, 1], [-1, 1]] + expected = p.loc[['D', 'D', 'B'], ['c', 'b']] assert_panel_equal(result, expected) # combined - result = p.iloc[0,[True,True],[0,1]] - expected = p.loc['A',['a','b'],['one','two']] + result = p.iloc[0, [True, True], [0, 1]] + expected = p.loc['A', ['a', 'b'], ['one', 'two']] assert_frame_equal(result, expected) # out-of-bounds exception - self.assertRaises(IndexError, p.iloc.__getitem__, tuple([10,5])) + self.assertRaises(IndexError, p.iloc.__getitem__, tuple([10, 5])) + def f(): - p.iloc[0,[True,True],[0,1,2]] + p.iloc[0, [True, True], [0, 1, 2]] + self.assertRaises(IndexError, f) # trying to use a label - self.assertRaises(ValueError, p.iloc.__getitem__, tuple(['j','D'])) + self.assertRaises(ValueError, p.iloc.__getitem__, tuple(['j', 'D'])) # GH - p = Panel(np.random.rand(4,3,2), items=['A','B','C','D'], major_axis=['U','V','W'], minor_axis=['X','Y']) + p = Panel( + np.random.rand(4, 3, 2), items=['A', 'B', 'C', 'D'], + major_axis=['U', 'V', 'W'], minor_axis=['X', 'Y']) expected = p['A'] - result = p.iloc[0,:,:] + result = p.iloc[0, :, :] assert_frame_equal(result, expected) - result = p.iloc[0,[True,True,True],:] + result = p.iloc[0, [True, True, True], :] assert_frame_equal(result, expected) - result = p.iloc[0,[True,True,True],[0,1]] + result = p.iloc[0, [True, True, True], [0, 1]] 
assert_frame_equal(result, expected) def f(): - p.iloc[0,[True,True,True],[0,1,2]] + p.iloc[0, [True, True, True], [0, 1, 2]] + self.assertRaises(IndexError, f) def f(): - p.iloc[0,[True,True,True],[2]] + p.iloc[0, [True, True, True], [2]] + self.assertRaises(IndexError, f) # GH 7199 @@ -1480,124 +1638,134 @@ def f(): names=['UPPER', 'lower']) simple_index = [x[0] for x in multi_index] - wd1 = Panel(items=['First', 'Second'], - major_axis=['a', 'b', 'c', 'd'], + wd1 = Panel(items=['First', 'Second'], major_axis=['a', 'b', 'c', 'd'], minor_axis=multi_index) - wd2 = Panel(items=['First', 'Second'], - major_axis=['a', 'b', 'c', 'd'], + wd2 = Panel(items=['First', 'Second'], major_axis=['a', 'b', 'c', 'd'], minor_axis=simple_index) expected1 = wd1['First'].iloc[[True, True, True, False], [0, 2]] result1 = wd1.iloc[0, [True, True, True, False], [0, 2]] # WRONG - assert_frame_equal(result1,expected1) + assert_frame_equal(result1, expected1) expected2 = wd2['First'].iloc[[True, True, True, False], [0, 2]] result2 = wd2.iloc[0, [True, True, True, False], [0, 2]] - assert_frame_equal(result2,expected2) + assert_frame_equal(result2, expected2) - expected1 = DataFrame(index=['a'],columns=multi_index,dtype='float64') - result1 = wd1.iloc[0,[0],[0,1,2]] - assert_frame_equal(result1,expected1) + expected1 = DataFrame(index=['a'], columns=multi_index, + dtype='float64') + result1 = wd1.iloc[0, [0], [0, 1, 2]] + assert_frame_equal(result1, expected1) - expected2 = DataFrame(index=['a'],columns=simple_index,dtype='float64') - result2 = wd2.iloc[0,[0],[0,1,2]] - assert_frame_equal(result2,expected2) + expected2 = DataFrame(index=['a'], columns=simple_index, + dtype='float64') + result2 = wd2.iloc[0, [0], [0, 1, 2]] + assert_frame_equal(result2, expected2) # GH 7516 - mi = MultiIndex.from_tuples([(0,'x'), (1,'y'), (2,'z')]) - p = Panel(np.arange(3*3*3,dtype='int64').reshape(3,3,3), items=['a','b','c'], major_axis=mi, minor_axis=['u','v','w']) + mi = MultiIndex.from_tuples([(0, 'x'), 
(1, 'y'), (2, 'z')]) + p = Panel( + np.arange(3 * 3 * 3, dtype='int64').reshape(3, 3, 3), + items=['a', 'b', 'c'], major_axis=mi, minor_axis=['u', 'v', 'w']) result = p.iloc[:, 1, 0] - expected = Series([3,12,21],index=['a','b','c'], name='u') - assert_series_equal(result,expected) + expected = Series([3, 12, 21], index=['a', 'b', 'c'], name='u') + assert_series_equal(result, expected) - result = p.loc[:, (1,'y'), 'u'] - assert_series_equal(result,expected) + result = p.loc[:, (1, 'y'), 'u'] + assert_series_equal(result, expected) def test_iloc_getitem_doc_issue(self): # multi axis slicing issue with single block # surfaced in GH 6059 - arr = np.random.randn(6,4) - index = date_range('20130101',periods=6) + arr = np.random.randn(6, 4) + index = date_range('20130101', periods=6) columns = list('ABCD') - df = DataFrame(arr,index=index,columns=columns) + df = DataFrame(arr, index=index, columns=columns) # defines ref_locs df.describe() - result = df.iloc[3:5,0:2] + result = df.iloc[3:5, 0:2] str(result) result.dtypes - expected = DataFrame(arr[3:5,0:2],index=index[3:5],columns=columns[0:2]) - assert_frame_equal(result,expected) + expected = DataFrame(arr[3:5, 0:2], index=index[3:5], + columns=columns[0:2]) + assert_frame_equal(result, expected) # for dups df.columns = list('aaaa') - result = df.iloc[3:5,0:2] + result = df.iloc[3:5, 0:2] str(result) result.dtypes - expected = DataFrame(arr[3:5,0:2],index=index[3:5],columns=list('aa')) - assert_frame_equal(result,expected) + expected = DataFrame(arr[3:5, 0:2], index=index[3:5], + columns=list('aa')) + assert_frame_equal(result, expected) # related - arr = np.random.randn(6,4) - index = list(range(0,12,2)) - columns = list(range(0,8,2)) - df = DataFrame(arr,index=index,columns=columns) + arr = np.random.randn(6, 4) + index = list(range(0, 12, 2)) + columns = list(range(0, 8, 2)) + df = DataFrame(arr, index=index, columns=columns) df._data.blocks[0].mgr_locs - result = df.iloc[1:5,2:4] + result = df.iloc[1:5, 2:4] 
str(result) result.dtypes - expected = DataFrame(arr[1:5,2:4],index=index[1:5],columns=columns[2:4]) - assert_frame_equal(result,expected) + expected = DataFrame(arr[1:5, 2:4], index=index[1:5], + columns=columns[2:4]) + assert_frame_equal(result, expected) def test_setitem_ndarray_1d(self): # GH5508 # len of indexer vs length of the 1d ndarray - df = DataFrame(index=Index(lrange(1,11))) + df = DataFrame(index=Index(lrange(1, 11))) df['foo'] = np.zeros(10, dtype=np.float64) df['bar'] = np.zeros(10, dtype=np.complex) # invalid def f(): - df.ix[2:5, 'bar'] = np.array([2.33j, 1.23+0.1j, 2.2]) + df.ix[2:5, 'bar'] = np.array([2.33j, 1.23 + 0.1j, 2.2]) + self.assertRaises(ValueError, f) # valid - df.ix[2:5, 'bar'] = np.array([2.33j, 1.23+0.1j, 2.2, 1.0]) + df.ix[2:5, 'bar'] = np.array([2.33j, 1.23 + 0.1j, 2.2, 1.0]) result = df.ix[2:5, 'bar'] - expected = Series([2.33j, 1.23+0.1j, 2.2, 1.0], index=[2,3,4,5], name='bar') + expected = Series([2.33j, 1.23 + 0.1j, 2.2, 1.0], index=[2, 3, 4, 5], + name='bar') assert_series_equal(result, expected) # dtype getting changed? 
- df = DataFrame(index=Index(lrange(1,11))) + df = DataFrame(index=Index(lrange(1, 11))) df['foo'] = np.zeros(10, dtype=np.float64) df['bar'] = np.zeros(10, dtype=np.complex) def f(): - df[2:5] = np.arange(1,4)*1j + df[2:5] = np.arange(1, 4) * 1j + self.assertRaises(ValueError, f) def test_iloc_setitem_series(self): - df = DataFrame(np.random.randn(10, 4), index=list('abcdefghij'), columns=list('ABCD')) + df = DataFrame( + np.random.randn(10, + 4), index=list('abcdefghij'), columns=list('ABCD')) - df.iloc[1,1] = 1 - result = df.iloc[1,1] + df.iloc[1, 1] = 1 + result = df.iloc[1, 1] self.assertEqual(result, 1) - df.iloc[:,2:3] = 0 - expected = df.iloc[:,2:3] - result = df.iloc[:,2:3] + df.iloc[:, 2:3] = 0 + expected = df.iloc[:, 2:3] + result = df.iloc[:, 2:3] assert_frame_equal(result, expected) - s = Series(np.random.randn(10), index=lrange(0,20,2)) + s = Series(np.random.randn(10), index=lrange(0, 20, 2)) s.iloc[1] = 1 result = s.iloc[1] @@ -1608,36 +1776,39 @@ def test_iloc_setitem_series(self): result = s.iloc[:4] assert_series_equal(result, expected) - s= Series([-1]*6) - s.iloc[0::2]= [0,2,4] - s.iloc[1::2]= [1,3,5] - result = s - expected= Series([0,1,2,3,4,5]) + s = Series([-1] * 6) + s.iloc[0::2] = [0, 2, 4] + s.iloc[1::2] = [1, 3, 5] + result = s + expected = Series([0, 1, 2, 3, 4, 5]) assert_series_equal(result, expected) def test_iloc_setitem_list_of_lists(self): # GH 7551 # list-of-list is set incorrectly in mixed vs. 
single dtyped frames - df = DataFrame(dict(A = np.arange(5,dtype='int64'), B = np.arange(5,10,dtype='int64'))) - df.iloc[2:4] = [[10,11],[12,13]] - expected = DataFrame(dict(A = [0,1,10,12,4], B = [5,6,11,13,9])) + df = DataFrame(dict(A=np.arange(5, dtype='int64'), B=np.arange( + 5, 10, dtype='int64'))) + df.iloc[2:4] = [[10, 11], [12, 13]] + expected = DataFrame(dict(A=[0, 1, 10, 12, 4], B=[5, 6, 11, 13, 9])) assert_frame_equal(df, expected) - df = DataFrame(dict(A = list('abcde'), B = np.arange(5,10,dtype='int64'))) - df.iloc[2:4] = [['x',11],['y',13]] - expected = DataFrame(dict(A = ['a','b','x','y','e'], B = [5,6,11,13,9])) + df = DataFrame( + dict(A=list('abcde'), B=np.arange(5, 10, dtype='int64'))) + df.iloc[2:4] = [['x', 11], ['y', 13]] + expected = DataFrame(dict(A=['a', 'b', 'x', 'y', 'e'], B=[5, 6, 11, 13, + 9])) assert_frame_equal(df, expected) def test_iloc_getitem_multiindex(self): mi_labels = DataFrame(np.random.randn(4, 3), columns=[['i', 'i', 'j'], ['A', 'A', 'B']], - index=[['i', 'i', 'j', 'k'], ['X', 'X', 'Y','Y']]) - - mi_int = DataFrame(np.random.randn(3, 3), - columns=[[2,2,4],[6,8,10]], - index=[[4,4,8],[8,10,12]]) + index=[['i', 'i', 'j', 'k'], + ['X', 'X', 'Y', 'Y']]) + mi_int = DataFrame(np.random.randn(3, 3), + columns=[[2, 2, 4], [6, 8, 10]], + index=[[4, 4, 8], [8, 10, 12]]) # the first row rs = mi_int.iloc[0] @@ -1647,18 +1818,18 @@ def test_iloc_getitem_multiindex(self): self.assertEqual(xp.name, 8) # 2nd (last) columns - rs = mi_int.iloc[:,2] - xp = mi_int.ix[:,2] + rs = mi_int.iloc[:, 2] + xp = mi_int.ix[:, 2] assert_series_equal(rs, xp) # corner column - rs = mi_int.iloc[2,2] - xp = mi_int.ix[:,2].ix[2] + rs = mi_int.iloc[2, 2] + xp = mi_int.ix[:, 2].ix[2] self.assertEqual(rs, xp) # this is basically regular indexing - rs = mi_labels.iloc[2,2] - xp = mi_labels.ix['j'].ix[:,'j'].ix[0,0] + rs = mi_labels.iloc[2, 2] + xp = mi_labels.ix['j'].ix[:, 'j'].ix[0, 0] self.assertEqual(rs, xp) def test_loc_multiindex(self): @@ -1667,9 +1838,9 
@@ def test_loc_multiindex(self): ['A', 'A', 'B']], index=[['i', 'i', 'j'], ['X', 'X', 'Y']]) - mi_int = DataFrame(np.random.randn(3, 3), - columns=[[2,2,4],[6,8,10]], - index=[[4,4,8],[8,10,12]]) + mi_int = DataFrame(np.random.randn(3, 3), + columns=[[2, 2, 4], [6, 8, 10]], + index=[[4, 4, 8], [8, 10, 12]]) # the first row rs = mi_labels.loc['i'] @@ -1677,30 +1848,30 @@ def test_loc_multiindex(self): assert_frame_equal(rs, xp) # 2nd (last) columns - rs = mi_labels.loc[:,'j'] - xp = mi_labels.ix[:,'j'] + rs = mi_labels.loc[:, 'j'] + xp = mi_labels.ix[:, 'j'] assert_frame_equal(rs, xp) # corner column - rs = mi_labels.loc['j'].loc[:,'j'] - xp = mi_labels.ix['j'].ix[:,'j'] - assert_frame_equal(rs,xp) + rs = mi_labels.loc['j'].loc[:, 'j'] + xp = mi_labels.ix['j'].ix[:, 'j'] + assert_frame_equal(rs, xp) # with a tuple - rs = mi_labels.loc[('i','X')] - xp = mi_labels.ix[('i','X')] - assert_frame_equal(rs,xp) + rs = mi_labels.loc[('i', 'X')] + xp = mi_labels.ix[('i', 'X')] + assert_frame_equal(rs, xp) rs = mi_int.loc[4] xp = mi_int.ix[4] - assert_frame_equal(rs,xp) + assert_frame_equal(rs, xp) # GH6788 # multi-index indexer is None (meaning take all) attributes = ['Attribute' + str(i) for i in range(1)] attribute_values = ['Value' + str(i) for i in range(5)] - index = MultiIndex.from_product([attributes,attribute_values]) + index = MultiIndex.from_product([attributes, attribute_values]) df = 0.1 * np.random.randn(10, 1 * 5) + 0.5 df = DataFrame(df, columns=index) result = df[attributes] @@ -1708,15 +1879,19 @@ def test_loc_multiindex(self): # GH 7349 # loc with a multi-index seems to be doing fallback - df = DataFrame(np.arange(12).reshape(-1,1),index=pd.MultiIndex.from_product([[1,2,3,4],[1,2,3]])) + df = DataFrame( + np.arange(12).reshape(-1, 1), + index=pd.MultiIndex.from_product([[1, 2, 3, 4], [1, 2, 3]])) - expected = df.loc[([1,2],),:] - result = df.loc[[1,2]] + expected = df.loc[([1, 2], ), :] + result = df.loc[[1, 2]] assert_frame_equal(result, expected) # GH 
7399 # incomplete indexers - s = pd.Series(np.arange(15,dtype='int64'),MultiIndex.from_product([range(5), ['a', 'b', 'c']])) + s = pd.Series( + np.arange(15, dtype='int64'), + MultiIndex.from_product([range(5), ['a', 'b', 'c']])) expected = s.loc[:, 'a':'c'] result = s.loc[0:4, 'a':'c'] @@ -1733,8 +1908,10 @@ def test_loc_multiindex(self): # GH 7400 # multiindexer gettitem with list of indexers skips wrong element - s = pd.Series(np.arange(15,dtype='int64'),MultiIndex.from_product([range(5), ['a', 'b', 'c']])) - expected = s.iloc[[6,7,8,12,13,14]] + s = pd.Series( + np.arange(15, dtype='int64'), + MultiIndex.from_product([range(5), ['a', 'b', 'c']])) + expected = s.iloc[[6, 7, 8, 12, 13, 14]] result = s.loc[2:4:2, 'a':'c'] assert_series_equal(result, expected) @@ -1743,16 +1920,17 @@ def test_multiindex_perf_warn(self): if sys.version_info < (2, 7): raise nose.SkipTest('python version < 2.7') - df = DataFrame({'jim':[0, 0, 1, 1], - 'joe':['x', 'x', 'z', 'y'], - 'jolie':np.random.rand(4)}).set_index(['jim', 'joe']) + df = DataFrame({'jim': [0, 0, 1, 1], + 'joe': ['x', 'x', 'z', 'y'], + 'jolie': np.random.rand(4)}).set_index(['jim', 'joe']) - with tm.assert_produces_warning(PerformanceWarning, clear=[pd.core.index]): - _ = df.loc[(1, 'z')] + with tm.assert_produces_warning(PerformanceWarning, + clear=[pd.core.index]): + df.loc[(1, 'z')] - df = df.iloc[[2,1,3,0]] + df = df.iloc[[2, 1, 3, 0]] with tm.assert_produces_warning(PerformanceWarning): - _ = df.loc[(0,)] + df.loc[(0, )] @slow def test_multiindex_get_loc(self): # GH7724, GH2646 @@ -1771,25 +1949,26 @@ def validate(mi, df, key): mask &= df.iloc[:, i] == k if not mask.any(): - self.assertNotIn(key[:i+1], mi.index) + self.assertNotIn(key[:i + 1], mi.index) continue - self.assertIn(key[:i+1], mi.index) + self.assertIn(key[:i + 1], mi.index) right = df[mask].copy() if i + 1 != len(key): # partial key - right.drop(cols[:i+1], axis=1, inplace=True) - right.set_index(cols[i+1:-1], inplace=True) - 
assert_frame_equal(mi.loc[key[:i+1]], right) + right.drop(cols[:i + 1], axis=1, inplace=True) + right.set_index(cols[i + 1:-1], inplace=True) + assert_frame_equal(mi.loc[key[:i + 1]], right) else: # full key right.set_index(cols[:-1], inplace=True) if len(right) == 1: # single hit right = Series(right['jolia'].values, - name=right.index[0], index=['jolia']) - assert_series_equal(mi.loc[key[:i+1]], right) + name=right.index[0], + index=['jolia']) + assert_series_equal(mi.loc[key[:i + 1]], right) else: # multi hit - assert_frame_equal(mi.loc[key[:i+1]], right) + assert_frame_equal(mi.loc[key[:i + 1]], right) def loop(mi, df, keys): for key in keys: @@ -1797,17 +1976,19 @@ def loop(mi, df, keys): n, m = 1000, 50 - vals = [randint(0, 10, n), choice(list('abcdefghij'), n), - choice(pd.date_range('20141009', periods=10).tolist(), n), - choice(list('ZYXWVUTSRQ'), n), randn(n)] + vals = [randint(0, 10, n), choice( + list('abcdefghij'), n), choice( + pd.date_range('20141009', periods=10).tolist(), n), choice( + list('ZYXWVUTSRQ'), n), randn(n)] vals = list(map(tuple, zip(*vals))) # bunch of keys for testing - keys = [randint(0, 11, m), choice(list('abcdefghijk'), m), - choice(pd.date_range('20141009', periods=11).tolist(), m), - choice(list('ZYXWVUTSRQP'), m)] + keys = [randint(0, 11, m), choice( + list('abcdefghijk'), m), choice( + pd.date_range('20141009', periods=11).tolist(), m), choice( + list('ZYXWVUTSRQP'), m)] keys = list(map(tuple, zip(*keys))) - keys += list(map(lambda t: t[:-1], vals[::n//m])) + keys += list(map(lambda t: t[:-1], vals[::n // m])) # covers both unique index and non-unique index df = pd.DataFrame(vals, columns=cols) @@ -1815,7 +1996,8 @@ def loop(mi, df, keys): for frame in a, b: for i in range(5): # lexsort depth - df = frame.copy() if i == 0 else frame.sort_values(by=cols[:i]) + df = frame.copy() if i == 0 else frame.sort_values( + by=cols[:i]) mi = df.set_index(cols[:-1]) assert not mi.index.lexsort_depth < i loop(mi, df, keys) @@ -1825,37 
+2007,36 @@ def test_series_getitem_multiindex(self): # GH 6018 # series regression getitem with a multi-index - s = Series([1,2,3]) - s.index = MultiIndex.from_tuples([(0,0),(1,1), (2,1)]) + s = Series([1, 2, 3]) + s.index = MultiIndex.from_tuples([(0, 0), (1, 1), (2, 1)]) - result = s[:,0] - expected = Series([1],index=[0]) - assert_series_equal(result,expected) + result = s[:, 0] + expected = Series([1], index=[0]) + assert_series_equal(result, expected) - result = s.ix[:,1] - expected = Series([2,3],index=[1,2]) - assert_series_equal(result,expected) + result = s.ix[:, 1] + expected = Series([2, 3], index=[1, 2]) + assert_series_equal(result, expected) # xs - result = s.xs(0,level=0) - expected = Series([1],index=[0]) - assert_series_equal(result,expected) + result = s.xs(0, level=0) + expected = Series([1], index=[0]) + assert_series_equal(result, expected) - result = s.xs(1,level=1) - expected = Series([2,3],index=[1,2]) - assert_series_equal(result,expected) + result = s.xs(1, level=1) + expected = Series([2, 3], index=[1, 2]) + assert_series_equal(result, expected) # GH6258 - s = Series([1,3,4,1,3,4], - index=MultiIndex.from_product([list('AB'), - list(date_range('20130903',periods=3))])) - result = s.xs('20130903',level=1) - expected = Series([1,1],index=list('AB')) - assert_series_equal(result,expected) + s = Series([1, 3, 4, 1, 3, 4], index=MultiIndex.from_product([list( + 'AB'), list(date_range('20130903', periods=3))])) + result = s.xs('20130903', level=1) + expected = Series([1, 1], index=list('AB')) + assert_series_equal(result, expected) # GH5684 - idx = MultiIndex.from_tuples([('a', 'one'), ('a', 'two'), - ('b', 'one'), ('b', 'two')]) + idx = MultiIndex.from_tuples([('a', 'one'), ('a', 'two'), ('b', 'one'), + ('b', 'two')]) s = Series([1, 2, 3, 4], index=idx) s.index.set_names(['L1', 'L2'], inplace=True) result = s.xs('one', level='L2') @@ -1868,9 +2049,21 @@ def test_ix_general(self): # ix general issues # GH 2817 - data = {'amount': {0: 700, 1: 
600, 2: 222, 3: 333, 4: 444}, - 'col': {0: 3.5, 1: 3.5, 2: 4.0, 3: 4.0, 4: 4.0}, - 'year': {0: 2012, 1: 2011, 2: 2012, 3: 2012, 4: 2012}} + data = {'amount': {0: 700, + 1: 600, + 2: 222, + 3: 333, + 4: 444}, + 'col': {0: 3.5, + 1: 3.5, + 2: 4.0, + 3: 4.0, + 4: 4.0}, + 'year': {0: 2012, + 1: 2011, + 2: 2012, + 3: 2012, + 4: 2012}} df = DataFrame(data).set_index(keys=['col', 'year']) key = 4.0, 2012 @@ -1888,23 +2081,34 @@ def test_ix_general(self): tm.assert_frame_equal(res, expected) def test_ix_weird_slicing(self): - ## http://stackoverflow.com/q/17056560/1240268 - df = DataFrame({'one' : [1, 2, 3, np.nan, np.nan], 'two' : [1, 2, 3, 4, 5]}) - df.ix[df['one']>1, 'two'] = -df['two'] - - expected = DataFrame({'one': {0: 1.0, 1: 2.0, 2: 3.0, 3: nan, 4: nan}, - 'two': {0: 1, 1: -2, 2: -3, 3: 4, 4: 5}}) + # http://stackoverflow.com/q/17056560/1240268 + df = DataFrame({'one': [1, 2, 3, np.nan, np.nan], + 'two': [1, 2, 3, 4, 5]}) + df.ix[df['one'] > 1, 'two'] = -df['two'] + + expected = DataFrame({'one': {0: 1.0, + 1: 2.0, + 2: 3.0, + 3: nan, + 4: nan}, + 'two': {0: 1, + 1: -2, + 2: -3, + 3: 4, + 4: 5}}) assert_frame_equal(df, expected) def test_xs_multiindex(self): # GH2903 - columns = MultiIndex.from_tuples([('a', 'foo'), ('a', 'bar'), ('b', 'hello'), ('b', 'world')], names=['lvl0', 'lvl1']) + columns = MultiIndex.from_tuples( + [('a', 'foo'), ('a', 'bar'), ('b', 'hello'), + ('b', 'world')], names=['lvl0', 'lvl1']) df = DataFrame(np.random.randn(4, 4), columns=columns) - df.sortlevel(axis=1,inplace=True) + df.sortlevel(axis=1, inplace=True) result = df.xs('a', level='lvl0', axis=1) - expected = df.iloc[:,0:2].loc[:,'a'] - assert_frame_equal(result,expected) + expected = df.iloc[:, 0:2].loc[:, 'a'] + assert_frame_equal(result, expected) result = df.xs('foo', level='lvl1', axis=1) expected = df.iloc[:, 1:2].copy() @@ -1915,127 +2119,142 @@ def test_per_axis_per_level_getitem(self): # GH6134 # example test case - ix = 
MultiIndex.from_product([_mklbl('A',5),_mklbl('B',7),_mklbl('C',4),_mklbl('D',2)]) - df = DataFrame(np.arange(len(ix.get_values())),index=ix) - - result = df.loc[(slice('A1','A3'),slice(None), ['C1','C3']),:] - expected = df.loc[[ tuple([a,b,c,d]) for a,b,c,d in df.index.values if ( - a == 'A1' or a == 'A2' or a == 'A3') and (c == 'C1' or c == 'C3')]] + ix = MultiIndex.from_product([_mklbl('A', 5), _mklbl('B', 7), _mklbl( + 'C', 4), _mklbl('D', 2)]) + df = DataFrame(np.arange(len(ix.get_values())), index=ix) + + result = df.loc[(slice('A1', 'A3'), slice(None), ['C1', 'C3']), :] + expected = df.loc[[tuple([a, b, c, d]) + for a, b, c, d in df.index.values + if (a == 'A1' or a == 'A2' or a == 'A3') and ( + c == 'C1' or c == 'C3')]] assert_frame_equal(result, expected) - expected = df.loc[[ tuple([a,b,c,d]) for a,b,c,d in df.index.values if ( - a == 'A1' or a == 'A2' or a == 'A3') and (c == 'C1' or c == 'C2' or c == 'C3')]] - result = df.loc[(slice('A1','A3'),slice(None), slice('C1','C3')),:] + expected = df.loc[[tuple([a, b, c, d]) + for a, b, c, d in df.index.values + if (a == 'A1' or a == 'A2' or a == 'A3') and ( + c == 'C1' or c == 'C2' or c == 'C3')]] + result = df.loc[(slice('A1', 'A3'), slice(None), slice('C1', 'C3')), :] assert_frame_equal(result, expected) # test multi-index slicing with per axis and per index controls - index = MultiIndex.from_tuples([('A',1),('A',2),('A',3),('B',1)], - names=['one','two']) - columns = MultiIndex.from_tuples([('a','foo'),('a','bar'),('b','foo'),('b','bah')], + index = MultiIndex.from_tuples([('A', 1), ('A', 2), + ('A', 3), ('B', 1)], + names=['one', 'two']) + columns = MultiIndex.from_tuples([('a', 'foo'), ('a', 'bar'), + ('b', 'foo'), ('b', 'bah')], names=['lvl0', 'lvl1']) - df = DataFrame(np.arange(16,dtype='int64').reshape(4, 4), index=index, columns=columns) + df = DataFrame( + np.arange(16, dtype='int64').reshape( + 4, 4), index=index, columns=columns) df = df.sortlevel(axis=0).sortlevel(axis=1) # identity - result = 
df.loc[(slice(None),slice(None)),:] + result = df.loc[(slice(None), slice(None)), :] assert_frame_equal(result, df) - result = df.loc[(slice(None),slice(None)),(slice(None),slice(None))] + result = df.loc[(slice(None), slice(None)), (slice(None), slice(None))] assert_frame_equal(result, df) - result = df.loc[:,(slice(None),slice(None))] + result = df.loc[:, (slice(None), slice(None))] assert_frame_equal(result, df) # index - result = df.loc[(slice(None),[1]),:] - expected = df.iloc[[0,3]] + result = df.loc[(slice(None), [1]), :] + expected = df.iloc[[0, 3]] assert_frame_equal(result, expected) - result = df.loc[(slice(None),1),:] - expected = df.iloc[[0,3]] + result = df.loc[(slice(None), 1), :] + expected = df.iloc[[0, 3]] assert_frame_equal(result, expected) # columns - result = df.loc[:,(slice(None),['foo'])] - expected = df.iloc[:,[1,3]] + result = df.loc[:, (slice(None), ['foo'])] + expected = df.iloc[:, [1, 3]] assert_frame_equal(result, expected) # both - result = df.loc[(slice(None),1),(slice(None),['foo'])] - expected = df.iloc[[0,3],[1,3]] + result = df.loc[(slice(None), 1), (slice(None), ['foo'])] + expected = df.iloc[[0, 3], [1, 3]] assert_frame_equal(result, expected) - result = df.loc['A','a'] - expected = DataFrame(dict(bar = [1,5,9], foo = [0,4,8]), - index=Index([1,2,3],name='two'), - columns=Index(['bar','foo'],name='lvl1')) + result = df.loc['A', 'a'] + expected = DataFrame(dict(bar=[1, 5, 9], foo=[0, 4, 8]), + index=Index([1, 2, 3], name='two'), + columns=Index(['bar', 'foo'], name='lvl1')) assert_frame_equal(result, expected) - result = df.loc[(slice(None),[1,2]),:] - expected = df.iloc[[0,1,3]] + result = df.loc[(slice(None), [1, 2]), :] + expected = df.iloc[[0, 1, 3]] assert_frame_equal(result, expected) # multi-level series - s = Series(np.arange(len(ix.get_values())),index=ix) - result = s.loc['A1':'A3', :, ['C1','C3']] - expected = s.loc[[ tuple([a,b,c,d]) for a,b,c,d in s.index.values if ( - a == 'A1' or a == 'A2' or a == 'A3') and (c == 
'C1' or c == 'C3')]] + s = Series(np.arange(len(ix.get_values())), index=ix) + result = s.loc['A1':'A3', :, ['C1', 'C3']] + expected = s.loc[[tuple([a, b, c, d]) + for a, b, c, d in s.index.values + if (a == 'A1' or a == 'A2' or a == 'A3') and ( + c == 'C1' or c == 'C3')]] assert_series_equal(result, expected) # boolean indexers - result = df.loc[(slice(None),df.loc[:,('a','bar')]>5),:] - expected = df.iloc[[2,3]] + result = df.loc[(slice(None), df.loc[:, ('a', 'bar')] > 5), :] + expected = df.iloc[[2, 3]] assert_frame_equal(result, expected) def f(): - df.loc[(slice(None),np.array([True,False])),:] + df.loc[(slice(None), np.array([True, False])), :] + self.assertRaises(ValueError, f) # ambiguous cases # these can be multiply interpreted (e.g. in this case # as df.loc[slice(None),[1]] as well - self.assertRaises(KeyError, lambda : df.loc[slice(None),[1]]) + self.assertRaises(KeyError, lambda: df.loc[slice(None), [1]]) - result = df.loc[(slice(None),[1]),:] - expected = df.iloc[[0,3]] + result = df.loc[(slice(None), [1]), :] + expected = df.iloc[[0, 3]] assert_frame_equal(result, expected) # not lexsorted - self.assertEqual(df.index.lexsort_depth,2) - df = df.sortlevel(level=1,axis=0) - self.assertEqual(df.index.lexsort_depth,0) - with tm.assertRaisesRegexp(KeyError, 'MultiIndex Slicing requires the index to be fully lexsorted tuple len \(2\), lexsort depth \(0\)'): - df.loc[(slice(None),df.loc[:,('a','bar')]>5),:] + self.assertEqual(df.index.lexsort_depth, 2) + df = df.sortlevel(level=1, axis=0) + self.assertEqual(df.index.lexsort_depth, 0) + with tm.assertRaisesRegexp( + KeyError, + 'MultiIndex Slicing requires the index to be fully ' + 'lexsorted tuple len \(2\), lexsort depth \(0\)'): + df.loc[(slice(None), df.loc[:, ('a', 'bar')] > 5), :] def test_multiindex_slicers_non_unique(self): # GH 7106 # non-unique mi index support - df = DataFrame(dict(A = ['foo','foo','foo','foo'], - B = ['a','a','a','a'], - C = [1,2,1,3], - D = 
[1,2,3,4])).set_index(['A','B','C']).sortlevel() + df = (DataFrame(dict(A=['foo', 'foo', 'foo', 'foo'], + B=['a', 'a', 'a', 'a'], + C=[1, 2, 1, 3], + D=[1, 2, 3, 4])) + .set_index(['A', 'B', 'C']).sortlevel()) self.assertFalse(df.index.is_unique) - expected = DataFrame(dict(A = ['foo','foo'], - B = ['a','a'], - C = [1,1], - D = [1,3])).set_index(['A','B','C']).sortlevel() - result = df.loc[(slice(None),slice(None),1),:] + expected = (DataFrame(dict(A=['foo', 'foo'], B=['a', 'a'], + C=[1, 1], D=[1, 3])) + .set_index(['A', 'B', 'C']).sortlevel()) + result = df.loc[(slice(None), slice(None), 1), :] assert_frame_equal(result, expected) # this is equivalent of an xs expression - result = df.xs(1,level=2,drop_level=False) + result = df.xs(1, level=2, drop_level=False) assert_frame_equal(result, expected) - df = DataFrame(dict(A = ['foo','foo','foo','foo'], - B = ['a','a','a','a'], - C = [1,2,1,2], - D = [1,2,3,4])).set_index(['A','B','C']).sortlevel() + df = (DataFrame(dict(A=['foo', 'foo', 'foo', 'foo'], + B=['a', 'a', 'a', 'a'], + C=[1, 2, 1, 2], + D=[1, 2, 3, 4])) + .set_index(['A', 'B', 'C']).sortlevel()) self.assertFalse(df.index.is_unique) - expected = DataFrame(dict(A = ['foo','foo'], - B = ['a','a'], - C = [1,1], - D = [1,3])).set_index(['A','B','C']).sortlevel() - result = df.loc[(slice(None),slice(None),1),:] + expected = (DataFrame(dict(A=['foo', 'foo'], B=['a', 'a'], + C=[1, 1], D=[1, 3])) + .set_index(['A', 'B', 'C']).sortlevel()) + result = df.loc[(slice(None), slice(None), 1), :] self.assertFalse(result.index.is_unique) assert_frame_equal(result, expected) @@ -2044,99 +2263,101 @@ def test_multiindex_slicers_datetimelike(self): # GH 7429 # buggy/inconsistent behavior when slicing with datetime-like import datetime - dates = [datetime.datetime(2012,1,1,12,12,12) + datetime.timedelta(days=i) for i in range(6)] - freq = [1,2] - index = MultiIndex.from_product([dates,freq], names=['date','frequency']) + dates = [datetime.datetime(2012, 1, 1, 12, 12, 12) + + 
datetime.timedelta(days=i) for i in range(6)] + freq = [1, 2] + index = MultiIndex.from_product( + [dates, freq], names=['date', 'frequency']) - df = DataFrame(np.arange(6*2*4,dtype='int64').reshape(-1,4),index=index,columns=list('ABCD')) + df = DataFrame( + np.arange(6 * 2 * 4, dtype='int64').reshape( + -1, 4), index=index, columns=list('ABCD')) # multi-axis slicing idx = pd.IndexSlice - expected = df.iloc[[0,2,4],[0,1]] - result = df.loc[(slice(Timestamp('2012-01-01 12:12:12'),Timestamp('2012-01-03 12:12:12')),slice(1,1)), slice('A','B')] - assert_frame_equal(result,expected) + expected = df.iloc[[0, 2, 4], [0, 1]] + result = df.loc[(slice( + Timestamp('2012-01-01 12:12:12'), Timestamp( + '2012-01-03 12:12:12')), slice(1, 1)), slice('A', 'B')] + assert_frame_equal(result, expected) - result = df.loc[(idx[Timestamp('2012-01-01 12:12:12'):Timestamp('2012-01-03 12:12:12')],idx[1:1]), slice('A','B')] - assert_frame_equal(result,expected) + result = df.loc[(idx[Timestamp('2012-01-01 12:12:12'):Timestamp( + '2012-01-03 12:12:12')], idx[1:1]), slice('A', 'B')] + assert_frame_equal(result, expected) - result = df.loc[(slice(Timestamp('2012-01-01 12:12:12'),Timestamp('2012-01-03 12:12:12')),1), slice('A','B')] - assert_frame_equal(result,expected) + result = df.loc[(slice( + Timestamp('2012-01-01 12:12:12'), Timestamp( + '2012-01-03 12:12:12')), 1), slice('A', 'B')] + assert_frame_equal(result, expected) # with strings - result = df.loc[(slice('2012-01-01 12:12:12','2012-01-03 12:12:12'),slice(1,1)), slice('A','B')] - assert_frame_equal(result,expected) - - result = df.loc[(idx['2012-01-01 12:12:12':'2012-01-03 12:12:12'],1), idx['A','B']] - assert_frame_equal(result,expected) + result = df.loc[(slice('2012-01-01 12:12:12', '2012-01-03 12:12:12'), + slice(1, 1)), slice('A', 'B')] + assert_frame_equal(result, expected) + result = df.loc[(idx['2012-01-01 12:12:12':'2012-01-03 12:12:12'], 1), + idx['A', 'B']] + assert_frame_equal(result, expected) def 
test_multiindex_slicers_edges(self): - # GH 8132 # various edge cases - df = DataFrame({'A': ['A0'] * 5 + ['A1']*5 + ['A2']*5, - 'B': ['B0','B0','B1','B1','B2'] * 3, - 'DATE': ["2013-06-11", - "2013-07-02", - "2013-07-09", - "2013-07-30", - "2013-08-06", - "2013-06-11", - "2013-07-02", - "2013-07-09", - "2013-07-30", - "2013-08-06", - "2013-09-03", - "2013-10-01", - "2013-07-09", - "2013-08-06", - "2013-09-03"], - 'VALUES': [22, 35, 14, 9, 4, 40, 18, 4, 2, 5, 1, 2, 3,4, 2]}) + df = DataFrame( + {'A': ['A0'] * 5 + ['A1'] * 5 + ['A2'] * 5, + 'B': ['B0', 'B0', 'B1', 'B1', 'B2'] * 3, + 'DATE': ["2013-06-11", "2013-07-02", "2013-07-09", "2013-07-30", + "2013-08-06", "2013-06-11", "2013-07-02", "2013-07-09", + "2013-07-30", "2013-08-06", "2013-09-03", "2013-10-01", + "2013-07-09", "2013-08-06", "2013-09-03"], + 'VALUES': [22, 35, 14, 9, 4, 40, 18, 4, 2, 5, 1, 2, 3, 4, 2]}) df['DATE'] = pd.to_datetime(df['DATE']) df1 = df.set_index(['A', 'B', 'DATE']) df1 = df1.sortlevel() - df2 = df.set_index('DATE') # A1 - Get all values under "A0" and "A1" - result = df1.loc[(slice('A1')),:] + result = df1.loc[(slice('A1')), :] expected = df1.iloc[0:10] assert_frame_equal(result, expected) # A2 - Get all values from the start to "A2" - result = df1.loc[(slice('A2')),:] + result = df1.loc[(slice('A2')), :] expected = df1 assert_frame_equal(result, expected) # A3 - Get all values under "B1" or "B2" - result = df1.loc[(slice(None),slice('B1','B2')),:] - expected = df1.iloc[[2,3,4,7,8,9,12,13,14]] + result = df1.loc[(slice(None), slice('B1', 'B2')), :] + expected = df1.iloc[[2, 3, 4, 7, 8, 9, 12, 13, 14]] assert_frame_equal(result, expected) # A4 - Get all values between 2013-07-02 and 2013-07-09 - result = df1.loc[(slice(None),slice(None),slice('20130702','20130709')),:] - expected = df1.iloc[[1,2,6,7,12]] + result = df1.loc[(slice(None), slice(None), slice('20130702', + '20130709')), :] + expected = df1.iloc[[1, 2, 6, 7, 12]] assert_frame_equal(result, expected) # B1 - Get all values in 
B0 that are also under A0, A1 and A2 - result = df1.loc[(slice('A2'),slice('B0')),:] - expected = df1.iloc[[0,1,5,6,10,11]] + result = df1.loc[(slice('A2'), slice('B0')), :] + expected = df1.iloc[[0, 1, 5, 6, 10, 11]] assert_frame_equal(result, expected) - # B2 - Get all values in B0, B1 and B2 (similar to what #2 is doing for the As) - result = df1.loc[(slice(None),slice('B2')),:] + # B2 - Get all values in B0, B1 and B2 (similar to what #2 is doing for + # the As) + result = df1.loc[(slice(None), slice('B2')), :] expected = df1 assert_frame_equal(result, expected) # B3 - Get all values from B1 to B2 and up to 2013-08-06 - result = df1.loc[(slice(None),slice('B1','B2'),slice('2013-08-06')),:] - expected = df1.iloc[[2,3,4,7,8,9,12,13]] + result = df1.loc[(slice(None), slice('B1', 'B2'), slice('2013-08-06') + ), :] + expected = df1.iloc[[2, 3, 4, 7, 8, 9, 12, 13]] assert_frame_equal(result, expected) # B4 - Same as A4 but the start of the date slice is not a key. # shows indexing on a partial selection slice - result = df1.loc[(slice(None),slice(None),slice('20130701','20130709')),:] - expected = df1.iloc[[1,2,6,7,12]] + result = df1.loc[(slice(None), slice(None), slice('20130701', + '20130709')), :] + expected = df1.iloc[[1, 2, 6, 7, 12]] assert_frame_equal(result, expected) def test_per_axis_per_level_doc_examples(self): @@ -2145,88 +2366,95 @@ def test_per_axis_per_level_doc_examples(self): idx = pd.IndexSlice # from indexing.rst / advanced - index = MultiIndex.from_product([_mklbl('A',4), - _mklbl('B',2), - _mklbl('C',4), - _mklbl('D',2)]) - columns = MultiIndex.from_tuples([('a','foo'),('a','bar'), - ('b','foo'),('b','bah')], + index = MultiIndex.from_product([_mklbl('A', 4), _mklbl('B', 2), + _mklbl('C', 4), _mklbl('D', 2)]) + columns = MultiIndex.from_tuples([('a', 'foo'), ('a', 'bar'), + ('b', 'foo'), ('b', 'bah')], names=['lvl0', 'lvl1']) - df = DataFrame(np.arange(len(index)*len(columns),dtype='int64').reshape((len(index),len(columns))), + df = 
DataFrame(np.arange(len(index) * len(columns), dtype='int64') + .reshape((len(index), len(columns))), index=index, columns=columns) - result = df.loc[(slice('A1','A3'),slice(None), ['C1','C3']),:] - expected = df.loc[[ tuple([a,b,c,d]) for a,b,c,d in df.index.values if ( - a == 'A1' or a == 'A2' or a == 'A3') and (c == 'C1' or c == 'C3')]] + result = df.loc[(slice('A1', 'A3'), slice(None), ['C1', 'C3']), :] + expected = df.loc[[tuple([a, b, c, d]) + for a, b, c, d in df.index.values + if (a == 'A1' or a == 'A2' or a == 'A3') and ( + c == 'C1' or c == 'C3')]] assert_frame_equal(result, expected) - result = df.loc[idx['A1':'A3',:,['C1','C3']],:] + result = df.loc[idx['A1':'A3', :, ['C1', 'C3']], :] assert_frame_equal(result, expected) - result = df.loc[(slice(None),slice(None), ['C1','C3']),:] - expected = df.loc[[ tuple([a,b,c,d]) for a,b,c,d in df.index.values if ( - c == 'C1' or c == 'C3')]] + result = df.loc[(slice(None), slice(None), ['C1', 'C3']), :] + expected = df.loc[[tuple([a, b, c, d]) + for a, b, c, d in df.index.values + if (c == 'C1' or c == 'C3')]] assert_frame_equal(result, expected) - result = df.loc[idx[:,:,['C1','C3']],:] + result = df.loc[idx[:, :, ['C1', 'C3']], :] assert_frame_equal(result, expected) # not sorted def f(): - df.loc['A1',(slice(None),'foo')] + df.loc['A1', (slice(None), 'foo')] + self.assertRaises(KeyError, f) df = df.sortlevel(axis=1) # slicing - df.loc['A1',(slice(None),'foo')] - df.loc[(slice(None),slice(None), ['C1','C3']),(slice(None),'foo')] + df.loc['A1', (slice(None), 'foo')] + df.loc[(slice(None), slice(None), ['C1', 'C3']), (slice(None), 'foo')] # setitem - df.loc(axis=0)[:,:,['C1','C3']] = -10 + df.loc(axis=0)[:, :, ['C1', 'C3']] = -10 def test_loc_arguments(self): - index = MultiIndex.from_product([_mklbl('A',4), - _mklbl('B',2), - _mklbl('C',4), - _mklbl('D',2)]) - columns = MultiIndex.from_tuples([('a','foo'),('a','bar'), - ('b','foo'),('b','bah')], + index = MultiIndex.from_product([_mklbl('A', 4), _mklbl('B', 2), + 
_mklbl('C', 4), _mklbl('D', 2)]) + columns = MultiIndex.from_tuples([('a', 'foo'), ('a', 'bar'), + ('b', 'foo'), ('b', 'bah')], names=['lvl0', 'lvl1']) - df = DataFrame(np.arange(len(index)*len(columns),dtype='int64').reshape((len(index),len(columns))), + df = DataFrame(np.arange(len(index) * len(columns), dtype='int64') + .reshape((len(index), len(columns))), index=index, columns=columns).sortlevel().sortlevel(axis=1) - # axis 0 - result = df.loc(axis=0)['A1':'A3',:,['C1','C3']] - expected = df.loc[[ tuple([a,b,c,d]) for a,b,c,d in df.index.values if ( - a == 'A1' or a == 'A2' or a == 'A3') and (c == 'C1' or c == 'C3')]] + result = df.loc(axis=0)['A1':'A3', :, ['C1', 'C3']] + expected = df.loc[[tuple([a, b, c, d]) + for a, b, c, d in df.index.values + if (a == 'A1' or a == 'A2' or a == 'A3') and ( + c == 'C1' or c == 'C3')]] assert_frame_equal(result, expected) - result = df.loc(axis='index')[:,:,['C1','C3']] - expected = df.loc[[ tuple([a,b,c,d]) for a,b,c,d in df.index.values if ( - c == 'C1' or c == 'C3')]] + result = df.loc(axis='index')[:, :, ['C1', 'C3']] + expected = df.loc[[tuple([a, b, c, d]) + for a, b, c, d in df.index.values + if (c == 'C1' or c == 'C3')]] assert_frame_equal(result, expected) # axis 1 - result = df.loc(axis=1)[:,'foo'] - expected = df.loc[:,(slice(None),'foo')] + result = df.loc(axis=1)[:, 'foo'] + expected = df.loc[:, (slice(None), 'foo')] assert_frame_equal(result, expected) - result = df.loc(axis='columns')[:,'foo'] - expected = df.loc[:,(slice(None),'foo')] + result = df.loc(axis='columns')[:, 'foo'] + expected = df.loc[:, (slice(None), 'foo')] assert_frame_equal(result, expected) # invalid axis def f(): - df.loc(axis=-1)[:,:,['C1','C3']] + df.loc(axis=-1)[:, :, ['C1', 'C3']] + self.assertRaises(ValueError, f) def f(): - df.loc(axis=2)[:,:,['C1','C3']] + df.loc(axis=2)[:, :, ['C1', 'C3']] + self.assertRaises(ValueError, f) def f(): - df.loc(axis='foo')[:,:,['C1','C3']] + df.loc(axis='foo')[:, :, ['C1', 'C3']] + 
self.assertRaises(ValueError, f) def test_per_axis_per_level_setitem(self): @@ -2235,119 +2463,132 @@ def test_per_axis_per_level_setitem(self): idx = pd.IndexSlice # test multi-index slicing with per axis and per index controls - index = MultiIndex.from_tuples([('A',1),('A',2),('A',3),('B',1)], - names=['one','two']) - columns = MultiIndex.from_tuples([('a','foo'),('a','bar'),('b','foo'),('b','bah')], + index = MultiIndex.from_tuples([('A', 1), ('A', 2), + ('A', 3), ('B', 1)], + names=['one', 'two']) + columns = MultiIndex.from_tuples([('a', 'foo'), ('a', 'bar'), + ('b', 'foo'), ('b', 'bah')], names=['lvl0', 'lvl1']) - df_orig = DataFrame(np.arange(16,dtype='int64').reshape(4, 4), index=index, columns=columns) + df_orig = DataFrame( + np.arange(16, dtype='int64').reshape( + 4, 4), index=index, columns=columns) df_orig = df_orig.sortlevel(axis=0).sortlevel(axis=1) # identity df = df_orig.copy() - df.loc[(slice(None),slice(None)),:] = 100 + df.loc[(slice(None), slice(None)), :] = 100 expected = df_orig.copy() - expected.iloc[:,:] = 100 + expected.iloc[:, :] = 100 assert_frame_equal(df, expected) df = df_orig.copy() - df.loc(axis=0)[:,:] = 100 + df.loc(axis=0)[:, :] = 100 expected = df_orig.copy() - expected.iloc[:,:] = 100 + expected.iloc[:, :] = 100 assert_frame_equal(df, expected) df = df_orig.copy() - df.loc[(slice(None),slice(None)),(slice(None),slice(None))] = 100 + df.loc[(slice(None), slice(None)), (slice(None), slice(None))] = 100 expected = df_orig.copy() - expected.iloc[:,:] = 100 + expected.iloc[:, :] = 100 assert_frame_equal(df, expected) df = df_orig.copy() - df.loc[:,(slice(None),slice(None))] = 100 + df.loc[:, (slice(None), slice(None))] = 100 expected = df_orig.copy() - expected.iloc[:,:] = 100 + expected.iloc[:, :] = 100 assert_frame_equal(df, expected) # index df = df_orig.copy() - df.loc[(slice(None),[1]),:] = 100 + df.loc[(slice(None), [1]), :] = 100 expected = df_orig.copy() - expected.iloc[[0,3]] = 100 + expected.iloc[[0, 3]] = 100 
assert_frame_equal(df, expected) df = df_orig.copy() - df.loc[(slice(None),1),:] = 100 + df.loc[(slice(None), 1), :] = 100 expected = df_orig.copy() - expected.iloc[[0,3]] = 100 + expected.iloc[[0, 3]] = 100 assert_frame_equal(df, expected) df = df_orig.copy() - df.loc(axis=0)[:,1] = 100 + df.loc(axis=0)[:, 1] = 100 expected = df_orig.copy() - expected.iloc[[0,3]] = 100 + expected.iloc[[0, 3]] = 100 assert_frame_equal(df, expected) # columns df = df_orig.copy() - df.loc[:,(slice(None),['foo'])] = 100 + df.loc[:, (slice(None), ['foo'])] = 100 expected = df_orig.copy() - expected.iloc[:,[1,3]] = 100 + expected.iloc[:, [1, 3]] = 100 assert_frame_equal(df, expected) # both df = df_orig.copy() - df.loc[(slice(None),1),(slice(None),['foo'])] = 100 + df.loc[(slice(None), 1), (slice(None), ['foo'])] = 100 expected = df_orig.copy() - expected.iloc[[0,3],[1,3]] = 100 + expected.iloc[[0, 3], [1, 3]] = 100 assert_frame_equal(df, expected) df = df_orig.copy() - df.loc[idx[:,1],idx[:,['foo']]] = 100 + df.loc[idx[:, 1], idx[:, ['foo']]] = 100 expected = df_orig.copy() - expected.iloc[[0,3],[1,3]] = 100 + expected.iloc[[0, 3], [1, 3]] = 100 assert_frame_equal(df, expected) df = df_orig.copy() - df.loc['A','a'] = 100 + df.loc['A', 'a'] = 100 expected = df_orig.copy() - expected.iloc[0:3,0:2] = 100 + expected.iloc[0:3, 0:2] = 100 assert_frame_equal(df, expected) # setting with a list-like df = df_orig.copy() - df.loc[(slice(None),1),(slice(None),['foo'])] = np.array([[100, 100], [100, 100]],dtype='int64') + df.loc[(slice(None), 1), (slice(None), ['foo'])] = np.array( + [[100, 100], [100, 100]], dtype='int64') expected = df_orig.copy() - expected.iloc[[0,3],[1,3]] = 100 + expected.iloc[[0, 3], [1, 3]] = 100 assert_frame_equal(df, expected) # not enough values df = df_orig.copy() + def f(): - df.loc[(slice(None),1),(slice(None),['foo'])] = np.array([[100], [100, 100]],dtype='int64') + df.loc[(slice(None), 1), (slice(None), ['foo'])] = np.array( + [[100], [100, 100]], dtype='int64') + 
self.assertRaises(ValueError, f) + def f(): - df.loc[(slice(None),1),(slice(None),['foo'])] = np.array([100, 100, 100, 100],dtype='int64') + df.loc[(slice(None), 1), (slice(None), ['foo'])] = np.array( + [100, 100, 100, 100], dtype='int64') + self.assertRaises(ValueError, f) # with an alignable rhs df = df_orig.copy() - df.loc[(slice(None),1),(slice(None),['foo'])] = df.loc[(slice(None),1),(slice(None),['foo'])] * 5 + df.loc[(slice(None), 1), (slice(None), ['foo'])] = df.loc[(slice( + None), 1), (slice(None), ['foo'])] * 5 expected = df_orig.copy() - expected.iloc[[0,3],[1,3]] = expected.iloc[[0,3],[1,3]] * 5 + expected.iloc[[0, 3], [1, 3]] = expected.iloc[[0, 3], [1, 3]] * 5 assert_frame_equal(df, expected) df = df_orig.copy() - df.loc[(slice(None),1),(slice(None),['foo'])] *= df.loc[(slice(None),1),(slice(None),['foo'])] + df.loc[(slice(None), 1), (slice(None), ['foo'])] *= df.loc[(slice( + None), 1), (slice(None), ['foo'])] expected = df_orig.copy() - expected.iloc[[0,3],[1,3]] *= expected.iloc[[0,3],[1,3]] + expected.iloc[[0, 3], [1, 3]] *= expected.iloc[[0, 3], [1, 3]] assert_frame_equal(df, expected) - rhs = df_orig.loc[(slice(None),1),(slice(None),['foo'])].copy() - rhs.loc[:,('c','bah')] = 10 + rhs = df_orig.loc[(slice(None), 1), (slice(None), ['foo'])].copy() + rhs.loc[:, ('c', 'bah')] = 10 df = df_orig.copy() - df.loc[(slice(None),1),(slice(None),['foo'])] *= rhs + df.loc[(slice(None), 1), (slice(None), ['foo'])] *= rhs expected = df_orig.copy() - expected.iloc[[0,3],[1,3]] *= expected.iloc[[0,3],[1,3]] + expected.iloc[[0, 3], [1, 3]] *= expected.iloc[[0, 3], [1, 3]] assert_frame_equal(df, expected) def test_multiindex_setitem(self): @@ -2362,118 +2603,128 @@ def test_multiindex_setitem(self): index=arrays, columns=['A', 'B', 'C']).sort_index() - expected = df_orig.loc[['bar']]*2 + expected = df_orig.loc[['bar']] * 2 df = df_orig.copy() df.loc[['bar']] *= 2 - assert_frame_equal(df.loc[['bar']],expected) + assert_frame_equal(df.loc[['bar']], expected) # 
raise because these have differing levels def f(): df.loc['bar'] *= 2 + self.assertRaises(TypeError, f) # from SO - #http://stackoverflow.com/questions/24572040/pandas-access-the-level-of-multiindex-for-inplace-operation + # http://stackoverflow.com/questions/24572040/pandas-access-the-level-of-multiindex-for-inplace-operation df_orig = DataFrame.from_dict({'price': { ('DE', 'Coal', 'Stock'): 2, ('DE', 'Gas', 'Stock'): 4, ('DE', 'Elec', 'Demand'): 1, ('FR', 'Gas', 'Stock'): 5, ('FR', 'Solar', 'SupIm'): 0, - ('FR', 'Wind', 'SupIm'): 0}}) - df_orig.index = MultiIndex.from_tuples(df_orig.index, names=['Sit', 'Com', 'Type']) + ('FR', 'Wind', 'SupIm'): 0 + }}) + df_orig.index = MultiIndex.from_tuples(df_orig.index, + names=['Sit', 'Com', 'Type']) expected = df_orig.copy() - expected.iloc[[0,2,3]] *= 2 + expected.iloc[[0, 2, 3]] *= 2 idx = pd.IndexSlice df = df_orig.copy() - df.loc[idx[:,:,'Stock'],:] *= 2 + df.loc[idx[:, :, 'Stock'], :] *= 2 assert_frame_equal(df, expected) df = df_orig.copy() - df.loc[idx[:,:,'Stock'],'price'] *= 2 + df.loc[idx[:, :, 'Stock'], 'price'] *= 2 assert_frame_equal(df, expected) def test_getitem_multiindex(self): - - # GH 5725 - # the 'A' happens to be a valid Timestamp so the doesn't raise the appropriate - # error, only in PY3 of course! - index = MultiIndex(levels=[['D', 'B', 'C'], [0, 26, 27, 37, 57, 67, 75, 82]], - labels=[[0, 0, 0, 1, 2, 2, 2, 2, 2, 2], [1, 3, 4, 6, 0, 2, 2, 3, 5, 7]], + # GH 5725 the 'A' happens to be a valid Timestamp so the doesn't raise + # the appropriate error, only in PY3 of course! 
+ index = MultiIndex(levels=[['D', 'B', 'C'], [0, 26, 27, 37, 57, 67, 75, + 82]], + labels=[[0, 0, 0, 1, 2, 2, 2, 2, 2, 2], + [1, 3, 4, 6, 0, 2, 2, 3, 5, 7]], names=['tag', 'day']) - arr = np.random.randn(len(index),1) - df = DataFrame(arr,index=index,columns=['val']) + arr = np.random.randn(len(index), 1) + df = DataFrame(arr, index=index, columns=['val']) result = df.val['D'] - expected = Series(arr.ravel()[0:3],name='val',index=Index([26,37,57],name='day')) - assert_series_equal(result,expected) + expected = Series(arr.ravel()[0:3], name='val', index=Index( + [26, 37, 57], name='day')) + assert_series_equal(result, expected) def f(): df.val['A'] + self.assertRaises(KeyError, f) def f(): df.val['X'] + self.assertRaises(KeyError, f) # A is treated as a special Timestamp - index = MultiIndex(levels=[['A', 'B', 'C'], [0, 26, 27, 37, 57, 67, 75, 82]], - labels=[[0, 0, 0, 1, 2, 2, 2, 2, 2, 2], [1, 3, 4, 6, 0, 2, 2, 3, 5, 7]], + index = MultiIndex(levels=[['A', 'B', 'C'], [0, 26, 27, 37, 57, 67, 75, + 82]], + labels=[[0, 0, 0, 1, 2, 2, 2, 2, 2, 2], + [1, 3, 4, 6, 0, 2, 2, 3, 5, 7]], names=['tag', 'day']) - df = DataFrame(arr,index=index,columns=['val']) + df = DataFrame(arr, index=index, columns=['val']) result = df.val['A'] - expected = Series(arr.ravel()[0:3],name='val',index=Index([26,37,57],name='day')) - assert_series_equal(result,expected) + expected = Series(arr.ravel()[0:3], name='val', index=Index( + [26, 37, 57], name='day')) + assert_series_equal(result, expected) def f(): df.val['X'] - self.assertRaises(KeyError, f) + self.assertRaises(KeyError, f) # GH 7866 # multi-index slicing with missing indexers - s = pd.Series(np.arange(9,dtype='int64'), - index=pd.MultiIndex.from_product([['A','B','C'],['foo','bar','baz']], - names=['one','two']) - ).sortlevel() + s = pd.Series(np.arange(9, dtype='int64'), + index=pd.MultiIndex.from_product( + [['A', 'B', 'C'], ['foo', 'bar', 'baz']], + names=['one', 'two'])).sortlevel() - expected = 
pd.Series(np.arange(3,dtype='int64'), - index=pd.MultiIndex.from_product([['A'],['foo','bar','baz']], - names=['one','two']) - ).sortlevel() + expected = pd.Series(np.arange(3, dtype='int64'), + index=pd.MultiIndex.from_product( + [['A'], ['foo', 'bar', 'baz']], + names=['one', 'two'])).sortlevel() result = s.loc[['A']] - assert_series_equal(result,expected) - result = s.loc[['A','D']] - assert_series_equal(result,expected) + assert_series_equal(result, expected) + result = s.loc[['A', 'D']] + assert_series_equal(result, expected) # not any values found - self.assertRaises(KeyError, lambda : s.loc[['D']]) + self.assertRaises(KeyError, lambda: s.loc[['D']]) # empty ok result = s.loc[[]] expected = s.iloc[[]] - assert_series_equal(result,expected) + assert_series_equal(result, expected) idx = pd.IndexSlice - expected = pd.Series([0,3,6], - index=pd.MultiIndex.from_product([['A','B','C'],['foo']], - names=['one','two']) - ).sortlevel() + expected = pd.Series([0, 3, 6], index=pd.MultiIndex.from_product( + [['A', 'B', 'C'], ['foo']], names=['one', 'two'])).sortlevel() - result = s.loc[idx[:,['foo']]] - assert_series_equal(result,expected) - result = s.loc[idx[:,['foo','bah']]] - assert_series_equal(result,expected) + result = s.loc[idx[:, ['foo']]] + assert_series_equal(result, expected) + result = s.loc[idx[:, ['foo', 'bah']]] + assert_series_equal(result, expected) # GH 8737 # empty indexer - multi_index = pd.MultiIndex.from_product((['foo', 'bar', 'baz'], ['alpha', 'beta'])) - df = DataFrame(np.random.randn(5, 6), index=range(5), columns=multi_index) + multi_index = pd.MultiIndex.from_product((['foo', 'bar', 'baz'], + ['alpha', 'beta'])) + df = DataFrame( + np.random.randn(5, 6), index=range(5), columns=multi_index) df = df.sortlevel(0, axis=1) - expected = DataFrame(index=range(5),columns=multi_index.reindex([])[0]) + expected = DataFrame(index=range(5), + columns=multi_index.reindex([])[0]) result1 = df.loc[:, ([], slice(None))] result2 = df.loc[:, (['foo'], [])] 
assert_frame_equal(result1, expected) @@ -2481,12 +2732,12 @@ def f(): # regression from < 0.14.0 # GH 7914 - df = DataFrame([[np.mean, np.median],['mean','median']], - columns=MultiIndex.from_tuples([('functs','mean'), - ('functs','median')]), + df = DataFrame([[np.mean, np.median], ['mean', 'median']], + columns=MultiIndex.from_tuples([('functs', 'mean'), + ('functs', 'median')]), index=['function', 'name']) - result = df.loc['function',('functs','mean')] - self.assertEqual(result,np.mean) + result = df.loc['function', ('functs', 'mean')] + self.assertEqual(result, np.mean) def test_setitem_dtype_upcast(self): @@ -2495,33 +2746,34 @@ def test_setitem_dtype_upcast(self): df['c'] = np.nan self.assertEqual(df['c'].dtype, np.float64) - df.ix[0,'c'] = 'foo' - expected = DataFrame([{"a": 1, "c" : 'foo'}, {"a": 3, "b": 2, "c" : np.nan}]) - assert_frame_equal(df,expected) + df.ix[0, 'c'] = 'foo' + expected = DataFrame([{"a": 1, + "c": 'foo'}, {"a": 3, + "b": 2, + "c": np.nan}]) + assert_frame_equal(df, expected) # GH10280 - df = DataFrame(np.arange(6,dtype='int64').reshape(2, 3), + df = DataFrame(np.arange(6, dtype='int64').reshape(2, 3), index=list('ab'), columns=['foo', 'bar', 'baz']) for val in [3.14, 'wxyz']: left = df.copy() left.loc['a', 'bar'] = val - right = DataFrame([[0, val, 2], [3, 4, 5]], - index=list('ab'), + right = DataFrame([[0, val, 2], [3, 4, 5]], index=list('ab'), columns=['foo', 'bar', 'baz']) assert_frame_equal(left, right) self.assertTrue(com.is_integer_dtype(left['foo'])) self.assertTrue(com.is_integer_dtype(left['baz'])) - left = DataFrame(np.arange(6,dtype='int64').reshape(2, 3) / 10.0, + left = DataFrame(np.arange(6, dtype='int64').reshape(2, 3) / 10.0, index=list('ab'), columns=['foo', 'bar', 'baz']) left.loc['a', 'bar'] = 'wxyz' - right = DataFrame([[0, 'wxyz', .2], [.3, .4, .5]], - index=list('ab'), + right = DataFrame([[0, 'wxyz', .2], [.3, .4, .5]], index=list('ab'), columns=['foo', 'bar', 'baz']) assert_frame_equal(left, right) @@ 
-2530,67 +2782,83 @@ def test_setitem_dtype_upcast(self): def test_setitem_iloc(self): - # setitem with an iloc list - df = DataFrame(np.arange(9).reshape((3, 3)), index=["A", "B", "C"], columns=["A", "B", "C"]) - df.iloc[[0,1],[1,2]] - df.iloc[[0,1],[1,2]] += 100 + df = DataFrame( + np.arange(9).reshape((3, 3)), index=["A", "B", "C"], + columns=["A", "B", "C"]) + df.iloc[[0, 1], [1, 2]] + df.iloc[[0, 1], [1, 2]] += 100 - expected = DataFrame(np.array([0,101,102,3,104,105,6,7,8]).reshape((3, 3)), index=["A", "B", "C"], columns=["A", "B", "C"]) - assert_frame_equal(df,expected) + expected = DataFrame( + np.array([0, 101, 102, 3, 104, 105, 6, 7, 8]).reshape((3, 3)), + index=["A", "B", "C"], columns=["A", "B", "C"]) + assert_frame_equal(df, expected) def test_dups_fancy_indexing(self): # GH 3455 from pandas.util.testing import makeCustomDataframe as mkdf - df= mkdf(10, 3) - df.columns = ['a','a','b'] - cols = ['b','a'] - result = df[['b','a']].columns - expected = Index(['b','a','a']) + df = mkdf(10, 3) + df.columns = ['a', 'a', 'b'] + result = df[['b', 'a']].columns + expected = Index(['b', 'a', 'a']) self.assertTrue(result.equals(expected)) # across dtypes - df = DataFrame([[1,2,1.,2.,3.,'foo','bar']], columns=list('aaaaaaa')) + df = DataFrame([[1, 2, 1., 2., 3., 'foo', 'bar']], + columns=list('aaaaaaa')) df.head() str(df) - result = DataFrame([[1,2,1.,2.,3.,'foo','bar']]) + result = DataFrame([[1, 2, 1., 2., 3., 'foo', 'bar']]) result.columns = list('aaaaaaa') - df_v = df.iloc[:,4] - res_v = result.iloc[:,4] + # TODO(wesm): unused? 
+ df_v = df.iloc[:, 4] # noqa + res_v = result.iloc[:, 4] # noqa - assert_frame_equal(df,result) + assert_frame_equal(df, result) # GH 3561, dups not in selected order - df = DataFrame({'test': [5,7,9,11], 'test1': [4.,5,6,7], 'other': list('abcd') }, index=['A', 'A', 'B', 'C']) + df = DataFrame( + {'test': [5, 7, 9, 11], + 'test1': [4., 5, 6, 7], + 'other': list('abcd')}, index=['A', 'A', 'B', 'C']) rows = ['C', 'B'] - expected = DataFrame({'test' : [11,9], 'test1': [ 7., 6], 'other': ['d','c']},index=rows) + expected = DataFrame( + {'test': [11, 9], + 'test1': [7., 6], + 'other': ['d', 'c']}, index=rows) result = df.ix[rows] assert_frame_equal(result, expected) result = df.ix[Index(rows)] assert_frame_equal(result, expected) - rows = ['C','B','E'] - expected = DataFrame({'test' : [11,9,np.nan], 'test1': [7.,6,np.nan], 'other': ['d','c',np.nan]},index=rows) + rows = ['C', 'B', 'E'] + expected = DataFrame( + {'test': [11, 9, np.nan], + 'test1': [7., 6, np.nan], + 'other': ['d', 'c', np.nan]}, index=rows) result = df.ix[rows] assert_frame_equal(result, expected) # see GH5553, make sure we use the right indexer - rows = ['F','G','H','C','B','E'] - expected = DataFrame({'test' : [np.nan,np.nan,np.nan,11,9,np.nan], - 'test1': [np.nan,np.nan,np.nan,7.,6,np.nan], - 'other': [np.nan,np.nan,np.nan,'d','c',np.nan]},index=rows) + rows = ['F', 'G', 'H', 'C', 'B', 'E'] + expected = DataFrame({'test': [np.nan, np.nan, np.nan, 11, 9, np.nan], + 'test1': [np.nan, np.nan, np.nan, 7., 6, np.nan], + 'other': [np.nan, np.nan, np.nan, + 'd', 'c', np.nan]}, + index=rows) result = df.ix[rows] assert_frame_equal(result, expected) - # inconsistent returns for unique/duplicate indices when values are missing - df = DataFrame(randn(4,3),index=list('ABCD')) + # inconsistent returns for unique/duplicate indices when values are + # missing + df = DataFrame(randn(4, 3), index=list('ABCD')) expected = df.ix[['E']] - dfnu = DataFrame(randn(5,3),index=list('AABCD')) + dfnu = DataFrame(randn(5, 3), 
index=list('AABCD')) result = dfnu.ix[['E']] assert_frame_equal(result, expected) @@ -2603,146 +2871,216 @@ def test_dups_fancy_indexing(self): assert_frame_equal(result, expected, check_index_type=False) df = DataFrame({"A": list('abc')}) - result = df.ix[[0,8,0]] + result = df.ix[[0, 8, 0]] expected = DataFrame({"A": ['a', np.nan, 'a']}, index=[0, 8, 0]) assert_frame_equal(result, expected, check_index_type=False) # non unique with non unique selector df = DataFrame({'test': [5, 7, 9, 11]}, index=['A', 'A', 'B', 'C']) - expected = DataFrame({'test' : [5, 7, 5, 7, np.nan]}, index=['A', 'A', 'A', 'A', 'E']) + expected = DataFrame( + {'test': [5, 7, 5, 7, np.nan]}, index=['A', 'A', 'A', 'A', 'E']) result = df.ix[['A', 'A', 'E']] assert_frame_equal(result, expected) # GH 5835 # dups on index and missing values - df = DataFrame(np.random.randn(5, 5), columns=['A', 'B', 'B', 'B', 'A']) + df = DataFrame( + np.random.randn(5, 5), columns=['A', 'B', 'B', 'B', 'A']) - expected = pd.concat([df.ix[:,['A','B']],DataFrame(np.nan,columns=['C'],index=df.index)],axis=1) - result = df.ix[:,['A','B','C']] + expected = pd.concat( + [df.ix[:, ['A', 'B']], DataFrame(np.nan, columns=['C'], + index=df.index)], axis=1) + result = df.ix[:, ['A', 'B', 'C']] assert_frame_equal(result, expected) # GH 6504, multi-axis indexing - df = DataFrame(np.random.randn(9,2), index=[1,1,1,2,2,2,3,3,3], columns=['a', 'b']) + df = DataFrame( + np.random.randn( + 9, 2), index=[1, 1, 1, 2, 2, 2, 3, 3, 3], columns=['a', 'b']) expected = df.iloc[0:6] result = df.loc[[1, 2]] assert_frame_equal(result, expected) expected = df - result = df.loc[:,['a', 'b']] + result = df.loc[:, ['a', 'b']] assert_frame_equal(result, expected) - expected = df.iloc[0:6,:] + expected = df.iloc[0:6, :] result = df.loc[[1, 2], ['a', 'b']] assert_frame_equal(result, expected) def test_indexing_mixed_frame_bug(self): # GH3492 - df=DataFrame({'a':{1:'aaa',2:'bbb',3:'ccc'},'b':{1:111,2:222,3:333}}) + df = DataFrame({'a': {1: 'aaa', + 2: 
'bbb', + 3: 'ccc'}, + 'b': {1: 111, + 2: 222, + 3: 333}}) # this works, new column is created correctly - df['test']=df['a'].apply(lambda x: '_' if x=='aaa' else x) + df['test'] = df['a'].apply(lambda x: '_' if x == 'aaa' else x) # this does not work, ie column test is not changed - idx=df['test']=='_' - temp=df.ix[idx,'a'].apply(lambda x: '-----' if x=='aaa' else x) - df.ix[idx,'test']=temp - self.assertEqual(df.iloc[0,2], '-----') + idx = df['test'] == '_' + temp = df.ix[idx, 'a'].apply(lambda x: '-----' if x == 'aaa' else x) + df.ix[idx, 'test'] = temp + self.assertEqual(df.iloc[0, 2], '-----') - #if I look at df, then element [0,2] equals '_'. If instead I type df.ix[idx,'test'], I get '-----', finally by typing df.iloc[0,2] I get '_'. + # if I look at df, then element [0,2] equals '_'. If instead I type + # df.ix[idx,'test'], I get '-----', finally by typing df.iloc[0,2] I + # get '_'. def test_multitype_list_index_access(self): - #GH 10610 - df = pd.DataFrame(np.random.random((10, 5)), columns=["a"] + [20, 21, 22, 23]) + # GH 10610 + df = pd.DataFrame(np.random.random((10, 5)), + columns=["a"] + [20, 21, 22, 23]) with self.assertRaises(IndexError): - vals = df[[22, 26, -8]] + df[[22, 26, -8]] self.assertEqual(df[21].shape[0], df.shape[0]) def test_set_index_nan(self): # GH 3586 - df = DataFrame({'PRuid': {17: 'nonQC', 18: 'nonQC', 19: 'nonQC', 20: '10', 21: '11', 22: '12', 23: '13', - 24: '24', 25: '35', 26: '46', 27: '47', 28: '48', 29: '59', 30: '10'}, - 'QC': {17: 0.0, 18: 0.0, 19: 0.0, 20: nan, 21: nan, 22: nan, 23: nan, 24: 1.0, 25: nan, - 26: nan, 27: nan, 28: nan, 29: nan, 30: nan}, - 'data': {17: 7.9544899999999998, 18: 8.0142609999999994, 19: 7.8591520000000008, 20: 0.86140349999999999, - 21: 0.87853110000000001, 22: 0.8427041999999999, 23: 0.78587700000000005, 24: 0.73062459999999996, - 25: 0.81668560000000001, 26: 0.81927080000000008, 27: 0.80705009999999999, 28: 0.81440240000000008, - 29: 0.80140849999999997, 30: 0.81307740000000006}, - 'year': 
{17: 2006, 18: 2007, 19: 2008, 20: 1985, 21: 1985, 22: 1985, 23: 1985, - 24: 1985, 25: 1985, 26: 1985, 27: 1985, 28: 1985, 29: 1985, 30: 1986}}).reset_index() - - result = df.set_index(['year','PRuid','QC']).reset_index().reindex(columns=df.columns) - assert_frame_equal(result,df) + df = DataFrame({'PRuid': {17: 'nonQC', + 18: 'nonQC', + 19: 'nonQC', + 20: '10', + 21: '11', + 22: '12', + 23: '13', + 24: '24', + 25: '35', + 26: '46', + 27: '47', + 28: '48', + 29: '59', + 30: '10'}, + 'QC': {17: 0.0, + 18: 0.0, + 19: 0.0, + 20: nan, + 21: nan, + 22: nan, + 23: nan, + 24: 1.0, + 25: nan, + 26: nan, + 27: nan, + 28: nan, + 29: nan, + 30: nan}, + 'data': {17: 7.9544899999999998, + 18: 8.0142609999999994, + 19: 7.8591520000000008, + 20: 0.86140349999999999, + 21: 0.87853110000000001, + 22: 0.8427041999999999, + 23: 0.78587700000000005, + 24: 0.73062459999999996, + 25: 0.81668560000000001, + 26: 0.81927080000000008, + 27: 0.80705009999999999, + 28: 0.81440240000000008, + 29: 0.80140849999999997, + 30: 0.81307740000000006}, + 'year': {17: 2006, + 18: 2007, + 19: 2008, + 20: 1985, + 21: 1985, + 22: 1985, + 23: 1985, + 24: 1985, + 25: 1985, + 26: 1985, + 27: 1985, + 28: 1985, + 29: 1985, + 30: 1986}}).reset_index() + + result = df.set_index(['year', 'PRuid', 'QC']).reset_index().reindex( + columns=df.columns) + assert_frame_equal(result, df) def test_multi_nan_indexing(self): # GH 3588 - df = DataFrame({"a":['R1', 'R2', np.nan, 'R4'], 'b':["C1", "C2", "C3" , "C4"], "c":[10, 15, np.nan , 20]}) - result = df.set_index(['a','b'], drop=False) - expected = DataFrame({"a":['R1', 'R2', np.nan, 'R4'], 'b':["C1", "C2", "C3" , "C4"], "c":[10, 15, np.nan , 20]}, - index = [Index(['R1','R2',np.nan,'R4'],name='a'),Index(['C1','C2','C3','C4'],name='b')]) - assert_frame_equal(result,expected) - + df = DataFrame({"a": ['R1', 'R2', np.nan, 'R4'], + 'b': ["C1", "C2", "C3", "C4"], + "c": [10, 15, np.nan, 20]}) + result = df.set_index(['a', 'b'], drop=False) + expected = DataFrame({"a": ['R1', 
'R2', np.nan, 'R4'], + 'b': ["C1", "C2", "C3", "C4"], + "c": [10, 15, np.nan, 20]}, + index=[Index(['R1', 'R2', np.nan, 'R4'], + name='a'), + Index(['C1', 'C2', 'C3', 'C4'], name='b')]) + assert_frame_equal(result, expected) def test_iloc_panel_issue(self): # GH 3617 p = Panel(randn(4, 4, 4)) - self.assertEqual(p.iloc[:3, :3, :3].shape, (3,3,3)) - self.assertEqual(p.iloc[1, :3, :3].shape, (3,3)) - self.assertEqual(p.iloc[:3, 1, :3].shape, (3,3)) - self.assertEqual(p.iloc[:3, :3, 1].shape, (3,3)) - self.assertEqual(p.iloc[1, 1, :3].shape, (3,)) - self.assertEqual(p.iloc[1, :3, 1].shape, (3,)) - self.assertEqual(p.iloc[:3, 1, 1].shape, (3,)) + self.assertEqual(p.iloc[:3, :3, :3].shape, (3, 3, 3)) + self.assertEqual(p.iloc[1, :3, :3].shape, (3, 3)) + self.assertEqual(p.iloc[:3, 1, :3].shape, (3, 3)) + self.assertEqual(p.iloc[:3, :3, 1].shape, (3, 3)) + self.assertEqual(p.iloc[1, 1, :3].shape, (3, )) + self.assertEqual(p.iloc[1, :3, 1].shape, (3, )) + self.assertEqual(p.iloc[:3, 1, 1].shape, (3, )) def test_panel_getitem(self): - # GH4016, date selection returns a frame when a partial string selection + # GH4016, date selection returns a frame when a partial string + # selection ind = date_range(start="2000", freq="D", periods=1000) - df = DataFrame(np.random.randn(len(ind), 5), index=ind, columns=list('ABCDE')) - panel = Panel(dict([ ('frame_'+c,df) for c in list('ABC') ])) + df = DataFrame( + np.random.randn( + len(ind), 5), index=ind, columns=list('ABCDE')) + panel = Panel(dict([('frame_' + c, df) for c in list('ABC')])) test2 = panel.ix[:, "2002":"2002-12-31"] test1 = panel.ix[:, "2002"] - tm.assert_panel_equal(test1,test2) + tm.assert_panel_equal(test1, test2) # GH8710 # multi-element getting with a list panel = tm.makePanel() - expected = panel.iloc[[0,1]] + expected = panel.iloc[[0, 1]] - result = panel.loc[['ItemA','ItemB']] - tm.assert_panel_equal(result,expected) + result = panel.loc[['ItemA', 'ItemB']] + tm.assert_panel_equal(result, expected) - result = 
panel.loc[['ItemA','ItemB'],:,:] - tm.assert_panel_equal(result,expected) + result = panel.loc[['ItemA', 'ItemB'], :, :] + tm.assert_panel_equal(result, expected) - result = panel[['ItemA','ItemB']] - tm.assert_panel_equal(result,expected) + result = panel[['ItemA', 'ItemB']] + tm.assert_panel_equal(result, expected) result = panel.loc['ItemA':'ItemB'] - tm.assert_panel_equal(result,expected) + tm.assert_panel_equal(result, expected) result = panel.ix['ItemA':'ItemB'] - tm.assert_panel_equal(result,expected) + tm.assert_panel_equal(result, expected) - result = panel.ix[['ItemA','ItemB']] - tm.assert_panel_equal(result,expected) + result = panel.ix[['ItemA', 'ItemB']] + tm.assert_panel_equal(result, expected) # with an object-like # GH 9140 class TestObject: + def __str__(self): return "TestObject" obj = TestObject() - p = Panel(np.random.randn(1,5,4), items=[obj], - major_axis = date_range('1/1/2000', periods=5), + p = Panel(np.random.randn(1, 5, 4), items=[obj], + major_axis=date_range('1/1/2000', periods=5), minor_axis=['A', 'B', 'C', 'D']) expected = p.iloc[0] @@ -2754,16 +3092,19 @@ def test_panel_setitem(self): # GH 7763 # loc and setitem have setting differences np.random.seed(0) - index=range(3) + index = range(3) columns = list('abc') - panel = Panel({'A' : DataFrame(np.random.randn(3, 3), index=index, columns=columns), - 'B' : DataFrame(np.random.randn(3, 3), index=index, columns=columns), - 'C' : DataFrame(np.random.randn(3, 3), index=index, columns=columns) - }) + panel = Panel( + {'A': DataFrame( + np.random.randn(3, 3), index=index, columns=columns), + 'B': DataFrame( + np.random.randn(3, 3), index=index, columns=columns), + 'C': DataFrame( + np.random.randn(3, 3), index=index, columns=columns)}) - replace = DataFrame(np.eye(3,3), index=range(3), columns=columns) - expected = Panel({ 'A' : replace, 'B' : replace, 'C' : replace }) + replace = DataFrame(np.eye(3, 3), index=range(3), columns=columns) + expected = Panel({'A': replace, 'B': replace, 'C': 
replace}) p = panel.copy() for idx in list('ABC'): @@ -2772,113 +3113,138 @@ def test_panel_setitem(self): p = panel.copy() for idx in list('ABC'): - p.loc[idx,:,:] = replace + p.loc[idx, :, :] = replace tm.assert_panel_equal(p, expected) - def test_panel_setitem_with_multiindex(self): # 10360 # failing with a multi-index - arr = np.array([[[1,2,3],[0,0,0]],[[0,0,0],[0,0,0]]],dtype=np.float64) + arr = np.array( + [[[1, 2, 3], [0, 0, 0]], [[0, 0, 0], [0, 0, 0]]], dtype=np.float64) # reg index - axes = dict(items=['A', 'B'], major_axis=[0, 1], minor_axis=['X', 'Y' ,'Z']) + axes = dict(items=['A', 'B'], major_axis=[0, 1], + minor_axis=['X', 'Y', 'Z']) p1 = Panel(0., **axes) p1.iloc[0, 0, :] = [1, 2, 3] expected = Panel(arr, **axes) tm.assert_panel_equal(p1, expected) # multi-indexes - axes['items'] = pd.MultiIndex.from_tuples([('A','a'), ('B','b')]) + axes['items'] = pd.MultiIndex.from_tuples([('A', 'a'), ('B', 'b')]) p2 = Panel(0., **axes) p2.iloc[0, 0, :] = [1, 2, 3] expected = Panel(arr, **axes) tm.assert_panel_equal(p2, expected) - axes['major_axis']=pd.MultiIndex.from_tuples([('A',1),('A',2)]) + axes['major_axis'] = pd.MultiIndex.from_tuples([('A', 1), ('A', 2)]) p3 = Panel(0., **axes) p3.iloc[0, 0, :] = [1, 2, 3] expected = Panel(arr, **axes) tm.assert_panel_equal(p3, expected) - axes['minor_axis']=pd.MultiIndex.from_product([['X'],range(3)]) + axes['minor_axis'] = pd.MultiIndex.from_product([['X'], range(3)]) p4 = Panel(0., **axes) p4.iloc[0, 0, :] = [1, 2, 3] expected = Panel(arr, **axes) tm.assert_panel_equal(p4, expected) - arr = np.array([[[1,0,0],[2,0,0]],[[0,0,0],[0,0,0]]],dtype=np.float64) + arr = np.array( + [[[1, 0, 0], [2, 0, 0]], [[0, 0, 0], [0, 0, 0]]], dtype=np.float64) p5 = Panel(0., **axes) p5.iloc[0, :, 0] = [1, 2] expected = Panel(arr, **axes) tm.assert_panel_equal(p5, expected) def test_panel_assignment(self): - # GH3777 - wp = Panel(randn(2, 5, 4), items=['Item1', 'Item2'], major_axis=date_range('1/1/2000', periods=5), minor_axis=['A', 'B', 
'C', 'D']) - wp2 = Panel(randn(2, 5, 4), items=['Item1', 'Item2'], major_axis=date_range('1/1/2000', periods=5), minor_axis=['A', 'B', 'C', 'D']) - expected = wp.loc[['Item1', 'Item2'], :, ['A', 'B']] + wp = Panel( + randn(2, 5, 4), items=['Item1', 'Item2'], + major_axis=date_range('1/1/2000', periods=5), + minor_axis=['A', 'B', 'C', 'D']) + wp2 = Panel( + randn(2, 5, 4), items=['Item1', 'Item2'], + major_axis=date_range('1/1/2000', periods=5), + minor_axis=['A', 'B', 'C', 'D']) + + # TODO: unused? + # expected = wp.loc[['Item1', 'Item2'], :, ['A', 'B']] def f(): - wp.loc[['Item1', 'Item2'], :, ['A', 'B']] = wp2.loc[['Item1', 'Item2'], :, ['A', 'B']] + wp.loc[['Item1', 'Item2'], :, ['A', 'B']] = wp2.loc[ + ['Item1', 'Item2'], :, ['A', 'B']] + self.assertRaises(NotImplementedError, f) - #wp.loc[['Item1', 'Item2'], :, ['A', 'B']] = wp2.loc[['Item1', 'Item2'], :, ['A', 'B']] - #result = wp.loc[['Item1', 'Item2'], :, ['A', 'B']] - #tm.assert_panel_equal(result,expected) + # to_assign = wp2.loc[['Item1', 'Item2'], :, ['A', 'B']] + # wp.loc[['Item1', 'Item2'], :, ['A', 'B']] = to_assign + # result = wp.loc[['Item1', 'Item2'], :, ['A', 'B']] + # tm.assert_panel_equal(result,expected) def test_multiindex_assignment(self): # GH3777 part 2 # mixed dtype - df = DataFrame(np.random.randint(5,10,size=9).reshape(3, 3), + df = DataFrame(np.random.randint(5, 10, size=9).reshape(3, 3), columns=list('abc'), - index=[[4,4,8],[8,10,12]]) + index=[[4, 4, 8], [8, 10, 12]]) df['d'] = np.nan - arr = np.array([0.,1.]) + arr = np.array([0., 1.]) - df.ix[4,'d'] = arr - assert_series_equal(df.ix[4,'d'],Series(arr,index=[8,10],name='d')) + df.ix[4, 'd'] = arr + assert_series_equal(df.ix[4, 'd'], Series(arr, index=[8, 10], + name='d')) # single dtype - df = DataFrame(np.random.randint(5,10,size=9).reshape(3, 3), + df = DataFrame(np.random.randint(5, 10, size=9).reshape(3, 3), columns=list('abc'), - index=[[4,4,8],[8,10,12]]) + index=[[4, 4, 8], [8, 10, 12]]) - df.ix[4,'c'] = arr - 
assert_series_equal(df.ix[4,'c'],Series(arr,index=[8,10],name='c',dtype='int64')) + df.ix[4, 'c'] = arr + assert_series_equal(df.ix[4, 'c'], Series(arr, index=[8, 10], name='c', + dtype='int64')) # scalar ok - df.ix[4,'c'] = 10 - assert_series_equal(df.ix[4,'c'],Series(10,index=[8,10],name='c',dtype='int64')) + df.ix[4, 'c'] = 10 + assert_series_equal(df.ix[4, 'c'], Series(10, index=[8, 10], name='c', + dtype='int64')) # invalid assignments def f(): - df.ix[4,'c'] = [0,1,2,3] + df.ix[4, 'c'] = [0, 1, 2, 3] + self.assertRaises(ValueError, f) def f(): - df.ix[4,'c'] = [0] + df.ix[4, 'c'] = [0] + self.assertRaises(ValueError, f) # groupby example NUM_ROWS = 100 NUM_COLS = 10 - col_names = ['A'+num for num in map(str,np.arange(NUM_COLS).tolist())] + col_names = ['A' + num + for num in map(str, np.arange(NUM_COLS).tolist())] index_cols = col_names[:5] - df = DataFrame(np.random.randint(5, size=(NUM_ROWS,NUM_COLS)), dtype=np.int64, columns=col_names) + df = DataFrame( + np.random.randint(5, size=(NUM_ROWS, NUM_COLS)), dtype=np.int64, + columns=col_names) df = df.set_index(index_cols).sort_index() grp = df.groupby(level=index_cols[:4]) df['new_col'] = np.nan f_index = np.arange(5) - def f(name,df2): - return Series(np.arange(df2.shape[0]),name=df2.index.values[0]).reindex(f_index) - new_df = pd.concat([ f(name,df2) for name, df2 in grp ],axis=1).T + + def f(name, df2): + return Series( + np.arange(df2.shape[0]), + name=df2.index.values[0]).reindex(f_index) + + # TODO(wesm): unused? 
+ # new_df = pd.concat([f(name, df2) for name, df2 in grp], axis=1).T # we are actually operating on a copy here # but in this case, that's ok @@ -2889,49 +3255,49 @@ def f(name,df2): def test_multi_assign(self): # GH 3626, an assignement of a sub-df to a df - df = DataFrame({'FC':['a','b','a','b','a','b'], - 'PF':[0,0,0,0,1,1], - 'col1':lrange(6), - 'col2':lrange(6,12)}) - df.ix[1,0]=np.nan + df = DataFrame({'FC': ['a', 'b', 'a', 'b', 'a', 'b'], + 'PF': [0, 0, 0, 0, 1, 1], + 'col1': lrange(6), + 'col2': lrange(6, 12)}) + df.ix[1, 0] = np.nan df2 = df.copy() - mask=~df2.FC.isnull() - cols=['col1', 'col2'] + mask = ~df2.FC.isnull() + cols = ['col1', 'col2'] dft = df2 * 2 - dft.ix[3,3] = np.nan - - expected = DataFrame({'FC':['a',np.nan,'a','b','a','b'], - 'PF':[0,0,0,0,1,1], - 'col1':Series([0,1,4,6,8,10]), - 'col2':[12,7,16,np.nan,20,22]}) + dft.ix[3, 3] = np.nan + expected = DataFrame({'FC': ['a', np.nan, 'a', 'b', 'a', 'b'], + 'PF': [0, 0, 0, 0, 1, 1], + 'col1': Series([0, 1, 4, 6, 8, 10]), + 'col2': [12, 7, 16, np.nan, 20, 22]}) # frame on rhs - df2.ix[mask, cols]= dft.ix[mask, cols] - assert_frame_equal(df2,expected) + df2.ix[mask, cols] = dft.ix[mask, cols] + assert_frame_equal(df2, expected) - df2.ix[mask, cols]= dft.ix[mask, cols] - assert_frame_equal(df2,expected) + df2.ix[mask, cols] = dft.ix[mask, cols] + assert_frame_equal(df2, expected) # with an ndarray on rhs df2 = df.copy() - df2.ix[mask, cols]= dft.ix[mask, cols].values - assert_frame_equal(df2,expected) - df2.ix[mask, cols]= dft.ix[mask, cols].values - assert_frame_equal(df2,expected) + df2.ix[mask, cols] = dft.ix[mask, cols].values + assert_frame_equal(df2, expected) + df2.ix[mask, cols] = dft.ix[mask, cols].values + assert_frame_equal(df2, expected) # broadcasting on the rhs is required - df = DataFrame(dict(A = [1,2,0,0,0],B=[0,0,0,10,11],C=[0,0,0,10,11],D=[3,4,5,6,7])) + df = DataFrame(dict(A=[1, 2, 0, 0, 0], B=[0, 0, 0, 10, 11], C=[ + 0, 0, 0, 10, 11], D=[3, 4, 5, 6, 7])) expected = df.copy() 
mask = expected['A'] == 0 - for col in ['A','B']: - expected.loc[mask,col] = df['D'] + for col in ['A', 'B']: + expected.loc[mask, col] = df['D'] - df.loc[df['A']==0,['A','B']] = df['D'] - assert_frame_equal(df,expected) + df.loc[df['A'] == 0, ['A', 'B']] = df['D'] + assert_frame_equal(df, expected) def test_ix_assign_column_mixed(self): # GH #1142 @@ -2943,37 +3309,38 @@ def test_ix_assign_column_mixed(self): assert_series_equal(df.B, orig + 1) # GH 3668, mixed frame with series value - df = DataFrame({'x':lrange(10), 'y':lrange(10,20),'z' : 'bar'}) + df = DataFrame({'x': lrange(10), 'y': lrange(10, 20), 'z': 'bar'}) expected = df.copy() for i in range(5): - indexer = i*2 - v = 1000 + i*200 + indexer = i * 2 + v = 1000 + i * 200 expected.ix[indexer, 'y'] = v self.assertEqual(expected.ix[indexer, 'y'], v) df.ix[df.x % 2 == 0, 'y'] = df.ix[df.x % 2 == 0, 'y'] * 100 - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) # GH 4508, making sure consistency of assignments - df = DataFrame({'a':[1,2,3],'b':[0,1,2]}) - df.ix[[0,2,],'b'] = [100,-100] - expected = DataFrame({'a' : [1,2,3], 'b' : [100,1,-100] }) - assert_frame_equal(df,expected) + df = DataFrame({'a': [1, 2, 3], 'b': [0, 1, 2]}) + df.ix[[0, 2, ], 'b'] = [100, -100] + expected = DataFrame({'a': [1, 2, 3], 'b': [100, 1, -100]}) + assert_frame_equal(df, expected) - df = pd.DataFrame({'a': lrange(4) }) + df = pd.DataFrame({'a': lrange(4)}) df['b'] = np.nan - df.ix[[1,3],'b'] = [100,-100] - expected = DataFrame({'a' : [0,1,2,3], 'b' : [np.nan,100,np.nan,-100] }) - assert_frame_equal(df,expected) + df.ix[[1, 3], 'b'] = [100, -100] + expected = DataFrame({'a': [0, 1, 2, 3], + 'b': [np.nan, 100, np.nan, -100]}) + assert_frame_equal(df, expected) # ok, but chained assignments are dangerous # if we turn off chained assignement it will work - with option_context('chained_assignment',None): - df = pd.DataFrame({'a': lrange(4) }) + with option_context('chained_assignment', None): + df = pd.DataFrame({'a': 
lrange(4)}) df['b'] = np.nan - df['b'].ix[[1,3]] = [100,-100] - assert_frame_equal(df,expected) + df['b'].ix[[1, 3]] = [100, -100] + assert_frame_equal(df, expected) def test_ix_get_set_consistency(self): @@ -2999,41 +3366,46 @@ def test_setitem_list(self): # GH 6043 # ix with a list - df = DataFrame(index=[0,1], columns=[0]) - df.ix[1,0] = [1,2,3] - df.ix[1,0] = [1,2] + df = DataFrame(index=[0, 1], columns=[0]) + df.ix[1, 0] = [1, 2, 3] + df.ix[1, 0] = [1, 2] - result = DataFrame(index=[0,1], columns=[0]) - result.ix[1,0] = [1,2] + result = DataFrame(index=[0, 1], columns=[0]) + result.ix[1, 0] = [1, 2] - assert_frame_equal(result,df) + assert_frame_equal(result, df) # ix with an object class TO(object): + def __init__(self, value): self.value = value + def __str__(self): return "[{0}]".format(self.value) + __repr__ = __str__ + def __eq__(self, other): return self.value == other.value + def view(self): return self - df = DataFrame(index=[0,1], columns=[0]) - df.ix[1,0] = TO(1) - df.ix[1,0] = TO(2) + df = DataFrame(index=[0, 1], columns=[0]) + df.ix[1, 0] = TO(1) + df.ix[1, 0] = TO(2) - result = DataFrame(index=[0,1], columns=[0]) - result.ix[1,0] = TO(2) + result = DataFrame(index=[0, 1], columns=[0]) + result.ix[1, 0] = TO(2) - assert_frame_equal(result,df) + assert_frame_equal(result, df) # remains object dtype even after setting it back - df = DataFrame(index=[0,1], columns=[0]) - df.ix[1,0] = TO(1) - df.ix[1,0] = np.nan - result = DataFrame(index=[0,1], columns=[0]) + df = DataFrame(index=[0, 1], columns=[0]) + df.ix[1, 0] = TO(1) + df.ix[1, 0] = np.nan + result = DataFrame(index=[0, 1], columns=[0]) assert_frame_equal(result, df) @@ -3041,37 +3413,40 @@ def test_iloc_mask(self): # GH 3631, iloc with a mask (of a series) should raise df = DataFrame(lrange(5), list('ABCDE'), columns=['a']) - mask = (df.a%2 == 0) + mask = (df.a % 2 == 0) self.assertRaises(ValueError, df.iloc.__getitem__, tuple([mask])) mask.index = lrange(len(mask)) - 
self.assertRaises(NotImplementedError, df.iloc.__getitem__, tuple([mask])) + self.assertRaises(NotImplementedError, df.iloc.__getitem__, + tuple([mask])) # ndarray ok - result = df.iloc[np.array([True] * len(mask),dtype=bool)] - assert_frame_equal(result,df) + result = df.iloc[np.array([True] * len(mask), dtype=bool)] + assert_frame_equal(result, df) # the possibilities locs = np.arange(4) - nums = 2**locs + nums = 2 ** locs reps = lmap(bin, nums) - df = DataFrame({'locs':locs, 'nums':nums}, reps) + df = DataFrame({'locs': locs, 'nums': nums}, reps) expected = { - (None,'') : '0b1100', - (None,'.loc') : '0b1100', - (None,'.iloc') : '0b1100', - ('index','') : '0b11', - ('index','.loc') : '0b11', - ('index','.iloc') : 'iLocation based boolean indexing cannot use an indexable as a mask', - ('locs','') : 'Unalignable boolean Series key provided', - ('locs','.loc') : 'Unalignable boolean Series key provided', - ('locs','.iloc') : 'iLocation based boolean indexing on an integer type is not available', - } + (None, ''): '0b1100', + (None, '.loc'): '0b1100', + (None, '.iloc'): '0b1100', + ('index', ''): '0b11', + ('index', '.loc'): '0b11', + ('index', '.iloc'): ('iLocation based boolean indexing ' + 'cannot use an indexable as a mask'), + ('locs', ''): 'Unalignable boolean Series key provided', + ('locs', '.loc'): 'Unalignable boolean Series key provided', + ('locs', '.iloc'): ('iLocation based boolean indexing on an ' + 'integer type is not available'), + } warnings.filterwarnings(action='ignore', category=UserWarning) result = dict() for idx in [None, 'index', 'locs']: - mask = (df.nums>2).values + mask = (df.nums > 2).values if idx: mask = Series(mask, list(reversed(getattr(df, idx)))) for method in ['', '.loc', '.iloc']: @@ -3084,53 +3459,75 @@ def test_iloc_mask(self): except Exception as e: ans = str(e) - key = tuple([idx,method]) + key = tuple([idx, method]) r = expected.get(key) if r != ans: - raise AssertionError("[%s] does not match [%s], received [%s]" % - 
(key,ans,r)) + raise AssertionError( + "[%s] does not match [%s], received [%s]" + % (key, ans, r)) warnings.filterwarnings(action='always', category=UserWarning) def test_ix_slicing_strings(self): - ##GH3836 - data = {'Classification': ['SA EQUITY CFD', 'bbb', 'SA EQUITY', 'SA SSF', 'aaa'], - 'Random': [1,2,3,4,5], - 'X': ['correct', 'wrong','correct', 'correct','wrong']} + # GH3836 + data = {'Classification': + ['SA EQUITY CFD', 'bbb', 'SA EQUITY', 'SA SSF', 'aaa'], + 'Random': [1, 2, 3, 4, 5], + 'X': ['correct', 'wrong', 'correct', 'correct', 'wrong']} df = DataFrame(data) - x = df[~df.Classification.isin(['SA EQUITY CFD', 'SA EQUITY', 'SA SSF'])] - df.ix[x.index,'X'] = df['Classification'] - - expected = DataFrame({'Classification': {0: 'SA EQUITY CFD', 1: 'bbb', - 2: 'SA EQUITY', 3: 'SA SSF', 4: 'aaa'}, - 'Random': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}, - 'X': {0: 'correct', 1: 'bbb', 2: 'correct', - 3: 'correct', 4: 'aaa'}}) # bug was 4: 'bbb' + x = df[~df.Classification.isin(['SA EQUITY CFD', 'SA EQUITY', 'SA SSF' + ])] + df.ix[x.index, 'X'] = df['Classification'] + + expected = DataFrame({'Classification': {0: 'SA EQUITY CFD', + 1: 'bbb', + 2: 'SA EQUITY', + 3: 'SA SSF', + 4: 'aaa'}, + 'Random': {0: 1, + 1: 2, + 2: 3, + 3: 4, + 4: 5}, + 'X': {0: 'correct', + 1: 'bbb', + 2: 'correct', + 3: 'correct', + 4: 'aaa'}}) # bug was 4: 'bbb' assert_frame_equal(df, expected) def test_non_unique_loc(self): - ## GH3659 - ## non-unique indexer with loc slice - ## https://groups.google.com/forum/?fromgroups#!topic/pydata/zTm2No0crYs + # GH3659 + # non-unique indexer with loc slice + # https://groups.google.com/forum/?fromgroups#!topic/pydata/zTm2No0crYs # these are going to raise becuase the we are non monotonic - df = DataFrame({'A' : [1,2,3,4,5,6], 'B' : [3,4,5,6,7,8]}, index = [0,1,0,1,2,3]) - self.assertRaises(KeyError, df.loc.__getitem__, tuple([slice(1,None)])) - self.assertRaises(KeyError, df.loc.__getitem__, tuple([slice(0,None)])) - self.assertRaises(KeyError, 
df.loc.__getitem__, tuple([slice(1,2)])) + df = DataFrame( + {'A': [1, 2, 3, 4, 5, 6], + 'B': [3, 4, 5, 6, 7, 8]}, index=[0, 1, 0, 1, 2, 3]) + self.assertRaises(KeyError, df.loc.__getitem__, + tuple([slice(1, None)])) + self.assertRaises(KeyError, df.loc.__getitem__, + tuple([slice(0, None)])) + self.assertRaises(KeyError, df.loc.__getitem__, tuple([slice(1, 2)])) # monotonic are ok - df = DataFrame({'A' : [1,2,3,4,5,6], 'B' : [3,4,5,6,7,8]}, index = [0,1,0,1,2,3]).sort_index(axis=0) + df = DataFrame( + {'A': [1, 2, 3, 4, 5, 6], + 'B': [3, 4, 5, 6, 7, 8]}, index=[0, 1, 0, 1, 2, 3]).sort_index( + axis=0) result = df.loc[1:] - expected = DataFrame({'A' : [2,4,5,6], 'B' : [4, 6,7,8]}, index = [1,1,2,3]) - assert_frame_equal(result,expected) + expected = DataFrame( + {'A': [2, 4, 5, 6], + 'B': [4, 6, 7, 8]}, index=[1, 1, 2, 3]) + assert_frame_equal(result, expected) result = df.loc[0:] - assert_frame_equal(result,df) + assert_frame_equal(result, df) result = df.loc[1:2] - expected = DataFrame({'A' : [2,4,5], 'B' : [4,6,7]}, index = [1,1,2]) - assert_frame_equal(result,expected) + expected = DataFrame({'A': [2, 4, 5], 'B': [4, 6, 7]}, index=[1, 1, 2]) + assert_frame_equal(result, expected) def test_loc_name(self): # GH 3880 @@ -3147,30 +3544,31 @@ def test_loc_name(self): def test_iloc_non_unique_indexing(self): - #GH 4017, non-unique indexing (on the axis) - df = DataFrame({'A' : [0.1] * 3000, 'B' : [1] * 3000}) + # GH 4017, non-unique indexing (on the axis) + df = DataFrame({'A': [0.1] * 3000, 'B': [1] * 3000}) idx = np.array(lrange(30)) * 99 expected = df.iloc[idx] - df3 = pd.concat([df, 2*df, 3*df]) + df3 = pd.concat([df, 2 * df, 3 * df]) result = df3.iloc[idx] assert_frame_equal(result, expected) - df2 = DataFrame({'A' : [0.1] * 1000, 'B' : [1] * 1000}) - df2 = pd.concat([df2, 2*df2, 3*df2]) + df2 = DataFrame({'A': [0.1] * 1000, 'B': [1] * 1000}) + df2 = pd.concat([df2, 2 * df2, 3 * df2]) sidx = df2.index.to_series() - expected = df2.iloc[idx[idx<=sidx.max()]] + 
expected = df2.iloc[idx[idx <= sidx.max()]] new_list = [] for r, s in expected.iterrows(): new_list.append(s) - new_list.append(s*2) - new_list.append(s*3) + new_list.append(s * 2) + new_list.append(s * 3) expected = DataFrame(new_list) - expected = pd.concat([ expected, DataFrame(index=idx[idx>sidx.max()]) ]) + expected = pd.concat([expected, DataFrame(index=idx[idx > sidx.max()]) + ]) result = df2.loc[idx] assert_frame_equal(result, expected, check_index_type=False) @@ -3186,27 +3584,31 @@ def test_mi_access(self): 5 f B 6 A2 6 """ - df = pd.read_csv(StringIO(data),sep='\s+',index_col=0) + df = pd.read_csv(StringIO(data), sep='\s+', index_col=0) df2 = df.set_index(['main', 'sub']).T.sort_index(1) - index = Index(['h1','h3','h5']) - columns = MultiIndex.from_tuples([('A','A1')],names=['main','sub']) - expected = DataFrame([['a',1,1]],index=columns,columns=index).T + index = Index(['h1', 'h3', 'h5']) + columns = MultiIndex.from_tuples([('A', 'A1')], names=['main', 'sub']) + expected = DataFrame([['a', 1, 1]], index=columns, columns=index).T - result = df2.loc[:,('A','A1')] - assert_frame_equal(result,expected) + result = df2.loc[:, ('A', 'A1')] + assert_frame_equal(result, expected) - result = df2[('A','A1')] - assert_frame_equal(result,expected) + result = df2[('A', 'A1')] + assert_frame_equal(result, expected) # GH 4146, not returning a block manager when selecting a unique index # from a duplicate index - # as of 4879, this returns a Series (which is similar to what happens with a non-unique) - expected = Series(['a',1,1], index=['h1','h3','h5'], name='A1') + # as of 4879, this returns a Series (which is similar to what happens + # with a non-unique) + expected = Series(['a', 1, 1], index=['h1', 'h3', 'h5'], name='A1') result = df2['A']['A1'] assert_series_equal(result, expected) # selecting a non_unique from the 2nd level - expected = DataFrame([['d',4,4],['e',5,5]],index=Index(['B2','B2'],name='sub'),columns=['h1','h3','h5'],).T + expected = DataFrame([['d', 
4, 4], ['e', 5, 5]], + index=Index( + ['B2', 'B2'], name='sub'), + columns=['h1', 'h3', 'h5'], ).T result = df2['A']['B2'] assert_frame_equal(result, expected) @@ -3216,97 +3618,108 @@ def test_non_unique_loc_memory_error(self): # non_unique index with a large selection triggers a memory error columns = list('ABCDEFG') - def gen_test(l,l2): - return pd.concat([ DataFrame(randn(l,len(columns)),index=lrange(l),columns=columns), - DataFrame(np.ones((l2,len(columns))),index=[0]*l2,columns=columns) ]) + def gen_test(l, l2): + return pd.concat([DataFrame( + randn(l, len(columns)), index=lrange( + l), columns=columns), DataFrame( + np.ones((l2, len(columns) + )), index=[0] * l2, columns=columns)]) - def gen_expected(df,mask): + def gen_expected(df, mask): l = len(mask) - return pd.concat([ - df.take([0],convert=False), - DataFrame(np.ones((l,len(columns))),index=[0]*l,columns=columns), - df.take(mask[1:],convert=False) ]) + return pd.concat([df.take([0], convert=False), + DataFrame(np.ones((l, len(columns))), + index=[0] * l, + columns=columns), + df.take(mask[1:], convert=False)]) - df = gen_test(900,100) + df = gen_test(900, 100) self.assertFalse(df.index.is_unique) mask = np.arange(100) result = df.loc[mask] - expected = gen_expected(df,mask) - assert_frame_equal(result,expected) + expected = gen_expected(df, mask) + assert_frame_equal(result, expected) - df = gen_test(900000,100000) + df = gen_test(900000, 100000) self.assertFalse(df.index.is_unique) mask = np.arange(100000) result = df.loc[mask] - expected = gen_expected(df,mask) - assert_frame_equal(result,expected) + expected = gen_expected(df, mask) + assert_frame_equal(result, expected) def test_astype_assignment(self): # GH4312 (iloc) - df_orig = DataFrame([['1','2','3','.4',5,6.,'foo']],columns=list('ABCDEFG')) + df_orig = DataFrame( + [['1', '2', '3', '.4', 5, 6., 'foo']], columns=list('ABCDEFG')) df = df_orig.copy() - df.iloc[:,0:2] = df.iloc[:,0:2].astype(np.int64) - expected = 
DataFrame([[1,2,'3','.4',5,6.,'foo']],columns=list('ABCDEFG')) - assert_frame_equal(df,expected) + df.iloc[:, 0:2] = df.iloc[:, 0:2].astype(np.int64) + expected = DataFrame( + [[1, 2, '3', '.4', 5, 6., 'foo']], columns=list('ABCDEFG')) + assert_frame_equal(df, expected) df = df_orig.copy() - df.iloc[:,0:2] = df.iloc[:,0:2]._convert(datetime=True, numeric=True) - expected = DataFrame([[1,2,'3','.4',5,6.,'foo']],columns=list('ABCDEFG')) - assert_frame_equal(df,expected) + df.iloc[:, 0:2] = df.iloc[:, 0:2]._convert(datetime=True, numeric=True) + expected = DataFrame( + [[1, 2, '3', '.4', 5, 6., 'foo']], columns=list('ABCDEFG')) + assert_frame_equal(df, expected) # GH5702 (loc) df = df_orig.copy() - df.loc[:,'A'] = df.loc[:,'A'].astype(np.int64) - expected = DataFrame([[1,'2','3','.4',5,6.,'foo']],columns=list('ABCDEFG')) - assert_frame_equal(df,expected) + df.loc[:, 'A'] = df.loc[:, 'A'].astype(np.int64) + expected = DataFrame( + [[1, '2', '3', '.4', 5, 6., 'foo']], columns=list('ABCDEFG')) + assert_frame_equal(df, expected) df = df_orig.copy() - df.loc[:,['B','C']] = df.loc[:,['B','C']].astype(np.int64) - expected = DataFrame([['1',2,3,'.4',5,6.,'foo']],columns=list('ABCDEFG')) - assert_frame_equal(df,expected) + df.loc[:, ['B', 'C']] = df.loc[:, ['B', 'C']].astype(np.int64) + expected = DataFrame( + [['1', 2, 3, '.4', 5, 6., 'foo']], columns=list('ABCDEFG')) + assert_frame_equal(df, expected) # full replacements / no nans df = DataFrame({'A': [1., 2., 3., 4.]}) df.iloc[:, 0] = df['A'].astype(np.int64) expected = DataFrame({'A': [1, 2, 3, 4]}) - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) df = DataFrame({'A': [1., 2., 3., 4.]}) df.loc[:, 'A'] = df['A'].astype(np.int64) expected = DataFrame({'A': [1, 2, 3, 4]}) - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) def test_astype_assignment_with_dups(self): # GH 4686 # assignment with dups that has a dtype change df = DataFrame( - np.arange(3).reshape((1,3)), + 
np.arange(3).reshape((1, 3)), columns=pd.MultiIndex.from_tuples( [('A', '1'), ('B', '1'), ('A', '2')] - ), + ), dtype=object - ) + ) index = df.index.copy() df['A'] = df['A'].astype(np.float64) - result = df.get_dtype_counts().sort_index() - expected = Series({ 'float64' : 2, 'object' : 1 }).sort_index() self.assertTrue(df.index.equals(index)) + # TODO(wesm): unused variables + # result = df.get_dtype_counts().sort_index() + # expected = Series({'float64': 2, 'object': 1}).sort_index() + def test_dups_loc(self): # GH4726 # dup indexing with iloc/loc df = DataFrame([[1, 2, 'foo', 'bar', Timestamp('20130101')]], - columns=['a','a','a','a','a'], index=[1]) + columns=['a', 'a', 'a', 'a', 'a'], index=[1]) expected = Series([1, 2, 'foo', 'bar', Timestamp('20130101')], - index=['a','a','a','a','a'], name=1) + index=['a', 'a', 'a', 'a', 'a'], name=1) result = df.iloc[0] assert_series_equal(result, expected) @@ -3318,141 +3731,160 @@ def test_partial_setting(self): # GH2578, allow ix and friends to partially set - ### series ### - s_orig = Series([1,2,3]) + # series + s_orig = Series([1, 2, 3]) s = s_orig.copy() s[5] = 5 - expected = Series([1,2,3,5],index=[0,1,2,5]) - assert_series_equal(s,expected) + expected = Series([1, 2, 3, 5], index=[0, 1, 2, 5]) + assert_series_equal(s, expected) s = s_orig.copy() s.loc[5] = 5 - expected = Series([1,2,3,5],index=[0,1,2,5]) - assert_series_equal(s,expected) + expected = Series([1, 2, 3, 5], index=[0, 1, 2, 5]) + assert_series_equal(s, expected) s = s_orig.copy() s[5] = 5. - expected = Series([1,2,3,5.],index=[0,1,2,5]) - assert_series_equal(s,expected) + expected = Series([1, 2, 3, 5.], index=[0, 1, 2, 5]) + assert_series_equal(s, expected) s = s_orig.copy() s.loc[5] = 5. - expected = Series([1,2,3,5.],index=[0,1,2,5]) - assert_series_equal(s,expected) + expected = Series([1, 2, 3, 5.], index=[0, 1, 2, 5]) + assert_series_equal(s, expected) # iloc/iat raise s = s_orig.copy() + def f(): s.iloc[3] = 5. 
+ self.assertRaises(IndexError, f) + def f(): s.iat[3] = 5. + self.assertRaises(IndexError, f) - ### frame ### + # ## frame ## - df_orig = DataFrame(np.arange(6).reshape(3,2),columns=['A','B'],dtype='int64') + df_orig = DataFrame( + np.arange(6).reshape(3, 2), columns=['A', 'B'], dtype='int64') # iloc/iat raise df = df_orig.copy() + def f(): - df.iloc[4,2] = 5. + df.iloc[4, 2] = 5. + self.assertRaises(IndexError, f) + def f(): - df.iat[4,2] = 5. + df.iat[4, 2] = 5. + self.assertRaises(IndexError, f) # row setting where it exists - expected = DataFrame(dict({ 'A' : [0,4,4], 'B' : [1,5,5] })) + expected = DataFrame(dict({'A': [0, 4, 4], 'B': [1, 5, 5]})) df = df_orig.copy() df.iloc[1] = df.iloc[2] - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) - expected = DataFrame(dict({ 'A' : [0,4,4], 'B' : [1,5,5] })) + expected = DataFrame(dict({'A': [0, 4, 4], 'B': [1, 5, 5]})) df = df_orig.copy() df.loc[1] = df.loc[2] - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) # like 2578, partial setting with dtype preservation - expected = DataFrame(dict({ 'A' : [0,2,4,4], 'B' : [1,3,5,5] })) + expected = DataFrame(dict({'A': [0, 2, 4, 4], 'B': [1, 3, 5, 5]})) df = df_orig.copy() df.loc[3] = df.loc[2] - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) # single dtype frame, overwrite - expected = DataFrame(dict({ 'A' : [0,2,4], 'B' : [0,2,4] })) + expected = DataFrame(dict({'A': [0, 2, 4], 'B': [0, 2, 4]})) df = df_orig.copy() - df.ix[:,'B'] = df.ix[:,'A'] - assert_frame_equal(df,expected) + df.ix[:, 'B'] = df.ix[:, 'A'] + assert_frame_equal(df, expected) # mixed dtype frame, overwrite - expected = DataFrame(dict({ 'A' : [0,2,4], 'B' : Series([0,2,4]) })) + expected = DataFrame(dict({'A': [0, 2, 4], 'B': Series([0, 2, 4])})) df = df_orig.copy() df['B'] = df['B'].astype(np.float64) - df.ix[:,'B'] = df.ix[:,'A'] - assert_frame_equal(df,expected) + df.ix[:, 'B'] = df.ix[:, 'A'] + assert_frame_equal(df, expected) # single dtype 
frame, partial setting expected = df_orig.copy() expected['C'] = df['A'] df = df_orig.copy() - df.ix[:,'C'] = df.ix[:,'A'] - assert_frame_equal(df,expected) + df.ix[:, 'C'] = df.ix[:, 'A'] + assert_frame_equal(df, expected) # mixed frame, partial setting expected = df_orig.copy() expected['C'] = df['A'] df = df_orig.copy() - df.ix[:,'C'] = df.ix[:,'A'] - assert_frame_equal(df,expected) + df.ix[:, 'C'] = df.ix[:, 'A'] + assert_frame_equal(df, expected) - ### panel ### - p_orig = Panel(np.arange(16).reshape(2,4,2),items=['Item1','Item2'],major_axis=pd.date_range('2001/1/12',periods=4),minor_axis=['A','B'],dtype='float64') + # ## panel ## + p_orig = Panel( + np.arange(16).reshape(2, 4, 2), items=['Item1', 'Item2'], + major_axis=pd.date_range('2001/1/12', periods=4), + minor_axis=['A', 'B'], dtype='float64') # panel setting via item - p_orig = Panel(np.arange(16).reshape(2,4,2),items=['Item1','Item2'],major_axis=pd.date_range('2001/1/12',periods=4),minor_axis=['A','B'],dtype='float64') + p_orig = Panel( + np.arange(16).reshape(2, 4, 2), items=['Item1', 'Item2'], + major_axis=pd.date_range('2001/1/12', periods=4), + minor_axis=['A', 'B'], dtype='float64') expected = p_orig.copy() expected['Item3'] = expected['Item1'] p = p_orig.copy() p.loc['Item3'] = p['Item1'] - assert_panel_equal(p,expected) + assert_panel_equal(p, expected) # panel with aligned series expected = p_orig.copy() - expected = expected.transpose(2,1,0) - expected['C'] = DataFrame({ 'Item1' : [30,30,30,30], 'Item2' : [32,32,32,32] },index=p_orig.major_axis) - expected = expected.transpose(2,1,0) + expected = expected.transpose(2, 1, 0) + expected['C'] = DataFrame( + {'Item1': [30, 30, 30, 30], + 'Item2': [32, 32, 32, 32]}, index=p_orig.major_axis) + expected = expected.transpose(2, 1, 0) p = p_orig.copy() - p.loc[:,:,'C'] = Series([30,32],index=p_orig.items) - assert_panel_equal(p,expected) + p.loc[:, :, 'C'] = Series([30, 32], index=p_orig.items) + assert_panel_equal(p, expected) # GH 8473 dates = 
date_range('1/1/2000', periods=8) - df_orig = DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D']) + df_orig = DataFrame( + np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D']) - expected = pd.concat([df_orig,DataFrame({'A' : 7},index=[dates[-1]+1])]) + expected = pd.concat([df_orig, DataFrame( + {'A': 7}, index=[dates[-1] + 1])]) df = df_orig.copy() - df.loc[dates[-1]+1, 'A'] = 7 - assert_frame_equal(df,expected) + df.loc[dates[-1] + 1, 'A'] = 7 + assert_frame_equal(df, expected) df = df_orig.copy() - df.at[dates[-1]+1, 'A'] = 7 - assert_frame_equal(df,expected) + df.at[dates[-1] + 1, 'A'] = 7 + assert_frame_equal(df, expected) - expected = pd.concat([df_orig,DataFrame({0 : 7},index=[dates[-1]+1])],axis=1) + expected = pd.concat( + [df_orig, DataFrame({0: 7}, index=[dates[-1] + 1])], axis=1) df = df_orig.copy() - df.loc[dates[-1]+1, 0] = 7 - assert_frame_equal(df,expected) + df.loc[dates[-1] + 1, 0] = 7 + assert_frame_equal(df, expected) df = df_orig.copy() - df.at[dates[-1]+1, 0] = 7 - assert_frame_equal(df,expected) + df.at[dates[-1] + 1, 0] = 7 + assert_frame_equal(df, expected) def test_partial_setting_mixed_dtype(self): # in a mixed dtype environment, try to preserve dtypes # by appending - df = DataFrame([[True, 1],[False, 2]], - columns = ["female","fitness"]) + df = DataFrame([[True, 1], [False, 2]], columns=["female", "fitness"]) s = df.loc[1].copy() s.name = 2 @@ -3462,34 +3894,39 @@ def test_partial_setting_mixed_dtype(self): assert_frame_equal(df, expected) # columns will align - df = DataFrame(columns=['A','B']) - df.loc[0] = Series(1,index=range(4)) - assert_frame_equal(df,DataFrame(columns=['A','B'],index=[0])) + df = DataFrame(columns=['A', 'B']) + df.loc[0] = Series(1, index=range(4)) + assert_frame_equal(df, DataFrame(columns=['A', 'B'], index=[0])) # columns will align - df = DataFrame(columns=['A','B']) - df.loc[0] = Series(1,index=['B']) - assert_frame_equal(df,DataFrame([[np.nan, 1]], 
columns=['A','B'],index=[0],dtype='float64')) + df = DataFrame(columns=['A', 'B']) + df.loc[0] = Series(1, index=['B']) + assert_frame_equal(df, DataFrame( + [[np.nan, 1]], columns=['A', 'B'], index=[0], dtype='float64')) # list-like must conform - df = DataFrame(columns=['A','B']) + df = DataFrame(columns=['A', 'B']) + def f(): - df.loc[0] = [1,2,3] + df.loc[0] = [1, 2, 3] + self.assertRaises(ValueError, f) # these are coerced to float unavoidably (as its a list-like to begin) - df = DataFrame(columns=['A','B']) - df.loc[3] = [6,7] - assert_frame_equal(df,DataFrame([[6,7]],index=[3],columns=['A','B'],dtype='float64')) + df = DataFrame(columns=['A', 'B']) + df.loc[3] = [6, 7] + assert_frame_equal(df, DataFrame( + [[6, 7]], index=[3], columns=['A', 'B'], dtype='float64')) def test_partial_setting_with_datetimelike_dtype(self): # GH9478 # a datetimeindex alignment issue with partial setting - df = pd.DataFrame(np.arange(6.).reshape(3,2), columns=list('AB'), - index=pd.date_range('1/1/2000', periods=3, freq='1H')) + df = pd.DataFrame(np.arange(6.).reshape(3, 2), columns=list('AB'), + index=pd.date_range('1/1/2000', periods=3, + freq='1H')) expected = df.copy() - expected['C'] = [expected.index[0]] + [pd.NaT,pd.NaT] + expected['C'] = [expected.index[0]] + [pd.NaT, pd.NaT] mask = df.A < 1 df.loc[mask, 'C'] = df.loc[mask].index @@ -3505,10 +3942,10 @@ def test_loc_setitem_datetime(self): lambda x: x.to_pydatetime(), lambda x: np.datetime64(x)]: df = pd.DataFrame() - df.loc[conv(dt1),'one'] = 100 - df.loc[conv(dt2),'one'] = 200 + df.loc[conv(dt1), 'one'] = 100 + df.loc[conv(dt2), 'one'] = 200 - expected = DataFrame({'one' : [100.0, 200.0]},index=[dt1, dt2]) + expected = DataFrame({'one': [100.0, 200.0]}, index=[dt1, dt2]) assert_frame_equal(df, expected) def test_series_partial_set(self): @@ -3534,7 +3971,7 @@ def test_series_partial_set(self): assert_series_equal(result, expected, check_index_type=True) # raises as nothing in in the index - self.assertRaises(KeyError, 
lambda : ser.loc[[3, 3, 3]]) + self.assertRaises(KeyError, lambda: ser.loc[[3, 3, 3]]) expected = Series([0.2, 0.2, np.nan], index=[2, 2, 3]) result = ser.loc[[2, 2, 3]] @@ -3545,19 +3982,23 @@ def test_series_partial_set(self): assert_series_equal(result, expected, check_index_type=True) expected = Series([np.nan, 0.3, 0.3], index=[5, 3, 3]) - result = Series([0.1, 0.2, 0.3, 0.4], index=[1, 2, 3, 4]).loc[[5, 3, 3]] + result = Series([0.1, 0.2, 0.3, 0.4], + index=[1, 2, 3, 4]).loc[[5, 3, 3]] assert_series_equal(result, expected, check_index_type=True) expected = Series([np.nan, 0.4, 0.4], index=[5, 4, 4]) - result = Series([0.1, 0.2, 0.3, 0.4], index=[1, 2, 3, 4]).loc[[5, 4, 4]] + result = Series([0.1, 0.2, 0.3, 0.4], + index=[1, 2, 3, 4]).loc[[5, 4, 4]] assert_series_equal(result, expected, check_index_type=True) expected = Series([0.4, np.nan, np.nan], index=[7, 2, 2]) - result = Series([0.1, 0.2, 0.3, 0.4], index=[4, 5, 6, 7]).loc[[7, 2, 2]] + result = Series([0.1, 0.2, 0.3, 0.4], + index=[4, 5, 6, 7]).loc[[7, 2, 2]] assert_series_equal(result, expected, check_index_type=True) expected = Series([0.4, np.nan, np.nan], index=[4, 5, 5]) - result = Series([0.1, 0.2, 0.3, 0.4], index=[1, 2, 3, 4]).loc[[4, 5, 5]] + result = Series([0.1, 0.2, 0.3, 0.4], + index=[1, 2, 3, 4]).loc[[4, 5, 5]] assert_series_equal(result, expected, check_index_type=True) # iloc @@ -3578,7 +4019,8 @@ def test_series_partial_set_with_name(self): assert_series_equal(result, expected, check_index_type=True) exp_idx = Index([3, 2, 3, 'x'], dtype='object', name='idx') - expected = Series([np.nan, 0.2, np.nan, np.nan], index=exp_idx, name='s') + expected = Series([np.nan, 0.2, np.nan, np.nan], index=exp_idx, + name='s') result = ser.loc[[3, 2, 3, 'x']] assert_series_equal(result, expected, check_index_type=True) @@ -3593,7 +4035,7 @@ def test_series_partial_set_with_name(self): assert_series_equal(result, expected, check_index_type=True) # raises as nothing in in the index - 
self.assertRaises(KeyError, lambda : ser.loc[[3, 3, 3]]) + self.assertRaises(KeyError, lambda: ser.loc[[3, 3, 3]]) exp_idx = Index([2, 2, 3], dtype='int64', name='idx') expected = Series([0.2, 0.2, np.nan], index=exp_idx, name='s') @@ -3609,31 +4051,35 @@ def test_series_partial_set_with_name(self): exp_idx = Index([5, 3, 3], dtype='int64', name='idx') expected = Series([np.nan, 0.3, 0.3], index=exp_idx, name='s') idx = Index([1, 2, 3, 4], dtype='int64', name='idx') - result = Series([0.1, 0.2, 0.3, 0.4], index=idx, name='s').loc[[5, 3, 3]] + result = Series([0.1, 0.2, 0.3, 0.4], index=idx, + name='s').loc[[5, 3, 3]] assert_series_equal(result, expected, check_index_type=True) exp_idx = Index([5, 4, 4], dtype='int64', name='idx') expected = Series([np.nan, 0.4, 0.4], index=exp_idx, name='s') idx = Index([1, 2, 3, 4], dtype='int64', name='idx') - result = Series([0.1, 0.2, 0.3, 0.4], index=idx, name='s').loc[[5, 4, 4]] + result = Series([0.1, 0.2, 0.3, 0.4], index=idx, + name='s').loc[[5, 4, 4]] assert_series_equal(result, expected, check_index_type=True) exp_idx = Index([7, 2, 2], dtype='int64', name='idx') expected = Series([0.4, np.nan, np.nan], index=exp_idx, name='s') idx = Index([4, 5, 6, 7], dtype='int64', name='idx') - result = Series([0.1, 0.2, 0.3, 0.4], index=idx, name='s').loc[[7, 2, 2]] + result = Series([0.1, 0.2, 0.3, 0.4], index=idx, + name='s').loc[[7, 2, 2]] assert_series_equal(result, expected, check_index_type=True) exp_idx = Index([4, 5, 5], dtype='int64', name='idx') expected = Series([0.4, np.nan, np.nan], index=exp_idx, name='s') idx = Index([1, 2, 3, 4], dtype='int64', name='idx') - result = Series([0.1, 0.2, 0.3, 0.4], index=idx, name='s').loc[[4, 5, 5]] + result = Series([0.1, 0.2, 0.3, 0.4], index=idx, + name='s').loc[[4, 5, 5]] assert_series_equal(result, expected, check_index_type=True) # iloc exp_idx = Index([2, 2, 1, 1], dtype='int64', name='idx') expected = Series([0.2, 0.2, 0.1, 0.1], index=exp_idx, name='s') - result = 
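The `assertRaises(KeyError, ...)` cases in these hunks pin down the rule that `.loc` with a list of labels, none of which exist, raises rather than returning an all-NaN result. A standalone sketch of that contract (names are illustrative):

```python
import pandas as pd

ser = pd.Series([0.1, 0.2], index=[1, 2], name="s")

# A list selection whose labels are entirely absent from the index
# raises KeyError instead of producing NaN-filled output.
try:
    ser.loc[[3, 3, 3]]
except KeyError:
    print("KeyError: none of [3, 3, 3] are in the index")
```

(Note: later pandas versions tightened this further so that *any* missing label in a list raises, not only the all-missing case these tests cover.)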
ser.iloc[[1,1,0,0]] + result = ser.iloc[[1, 1, 0, 0]] assert_series_equal(result, expected, check_index_type=True) def test_series_partial_set_datetime(self): @@ -3646,12 +4092,16 @@ def test_series_partial_set_datetime(self): exp = Series([0.1, 0.2], index=idx, name='s') assert_series_equal(result, exp, check_index_type=True) - keys = [Timestamp('2011-01-02'), Timestamp('2011-01-02'), Timestamp('2011-01-01')] - exp = Series([0.2, 0.2, 0.1], index=pd.DatetimeIndex(keys, name='idx'), name='s') + keys = [Timestamp('2011-01-02'), Timestamp('2011-01-02'), + Timestamp('2011-01-01')] + exp = Series([0.2, 0.2, 0.1], index=pd.DatetimeIndex(keys, name='idx'), + name='s') assert_series_equal(ser.loc[keys], exp, check_index_type=True) - keys = [Timestamp('2011-01-03'), Timestamp('2011-01-02'), Timestamp('2011-01-03')] - exp = Series([np.nan, 0.2, np.nan], index=pd.DatetimeIndex(keys, name='idx'), name='s') + keys = [Timestamp('2011-01-03'), Timestamp('2011-01-02'), + Timestamp('2011-01-03')] + exp = Series([np.nan, 0.2, np.nan], + index=pd.DatetimeIndex(keys, name='idx'), name='s') assert_series_equal(ser.loc[keys], exp, check_index_type=True) def test_series_partial_set_period(self): @@ -3660,18 +4110,23 @@ def test_series_partial_set_period(self): idx = pd.period_range('2011-01-01', '2011-01-02', freq='D', name='idx') ser = Series([0.1, 0.2], index=idx, name='s') - result = ser.loc[[pd.Period('2011-01-01', freq='D'), pd.Period('2011-01-02', freq='D')]] + result = ser.loc[[pd.Period('2011-01-01', freq='D'), pd.Period( + '2011-01-02', freq='D')]] exp = Series([0.1, 0.2], index=idx, name='s') assert_series_equal(result, exp, check_index_type=True) - keys = [pd.Period('2011-01-02', freq='D'), pd.Period('2011-01-02', freq='D'), + keys = [pd.Period('2011-01-02', freq='D'), + pd.Period('2011-01-02', freq='D'), pd.Period('2011-01-01', freq='D')] - exp = Series([0.2, 0.2, 0.1], index=pd.PeriodIndex(keys, name='idx'), name='s') + exp = Series([0.2, 0.2, 0.1], 
index=pd.PeriodIndex(keys, name='idx'), + name='s') assert_series_equal(ser.loc[keys], exp, check_index_type=True) - keys = [pd.Period('2011-01-03', freq='D'), pd.Period('2011-01-02', freq='D'), + keys = [pd.Period('2011-01-03', freq='D'), + pd.Period('2011-01-02', freq='D'), pd.Period('2011-01-03', freq='D')] - exp = Series([np.nan, 0.2, np.nan], index=pd.PeriodIndex(keys, name='idx'), name='s') + exp = Series([np.nan, 0.2, np.nan], + index=pd.PeriodIndex(keys, name='idx'), name='s') assert_series_equal(ser.loc[keys], exp, check_index_type=True) def test_partial_set_invalid(self): @@ -3684,20 +4139,26 @@ def test_partial_set_invalid(self): # don't allow not string inserts def f(): df.loc[100.0, :] = df.ix[0] + self.assertRaises(TypeError, f) + def f(): - df.loc[100,:] = df.ix[0] + df.loc[100, :] = df.ix[0] + self.assertRaises(TypeError, f) def f(): df.ix[100.0, :] = df.ix[0] + self.assertRaises(ValueError, f) + def f(): - df.ix[100,:] = df.ix[0] + df.ix[100, :] = df.ix[0] + self.assertRaises(ValueError, f) # allow object conversion here - df.loc['a',:] = df.ix[0] + df.loc['a', :] = df.ix[0] def test_partial_set_empty(self): @@ -3707,23 +4168,23 @@ def test_partial_set_empty(self): # series s = Series() s.loc[1] = 1 - assert_series_equal(s,Series([1],index=[1])) + assert_series_equal(s, Series([1], index=[1])) s.loc[3] = 3 - assert_series_equal(s,Series([1,3],index=[1,3])) + assert_series_equal(s, Series([1, 3], index=[1, 3])) s = Series() s.loc[1] = 1. - assert_series_equal(s,Series([1.],index=[1])) + assert_series_equal(s, Series([1.], index=[1])) s.loc[3] = 3. 
- assert_series_equal(s,Series([1.,3.],index=[1,3])) + assert_series_equal(s, Series([1., 3.], index=[1, 3])) s = Series() s.loc['foo'] = 1 - assert_series_equal(s,Series([1],index=['foo'])) + assert_series_equal(s, Series([1], index=['foo'])) s.loc['bar'] = 3 - assert_series_equal(s,Series([1,3],index=['foo','bar'])) + assert_series_equal(s, Series([1, 3], index=['foo', 'bar'])) s.loc[3] = 4 - assert_series_equal(s,Series([1,3,4],index=['foo','bar',3])) + assert_series_equal(s, Series([1, 3, 4], index=['foo', 'bar', 3])) # partially set with an empty object # frame @@ -3731,77 +4192,98 @@ def test_partial_set_empty(self): def f(): df.loc[1] = 1 + self.assertRaises(ValueError, f) + def f(): - df.loc[1] = Series([1],index=['foo']) + df.loc[1] = Series([1], index=['foo']) + self.assertRaises(ValueError, f) + def f(): - df.loc[:,1] = 1 + df.loc[:, 1] = 1 + self.assertRaises(ValueError, f) # these work as they don't really change # anything but the index # GH5632 - expected = DataFrame(columns=['foo'], index=pd.Index([], dtype='int64')) + expected = DataFrame(columns=['foo'], index=pd.Index( + [], dtype='int64')) + def f(): df = DataFrame() df['foo'] = Series([], dtype='object') return df + assert_frame_equal(f(), expected) + def f(): df = DataFrame() df['foo'] = Series(df.index) return df + assert_frame_equal(f(), expected) + def f(): df = DataFrame() df['foo'] = df.index return df + assert_frame_equal(f(), expected) - expected = DataFrame(columns=['foo'], index=pd.Index([], dtype='int64')) + expected = DataFrame(columns=['foo'], index=pd.Index( + [], dtype='int64')) expected['foo'] = expected['foo'].astype('float64') + def f(): df = DataFrame() df['foo'] = [] return df + assert_frame_equal(f(), expected) + def f(): df = DataFrame() df['foo'] = Series(range(len(df))) return df + assert_frame_equal(f(), expected) + def f(): df = DataFrame() df['foo'] = range(len(df)) return df + assert_frame_equal(f(), expected) df = DataFrame() df2 = DataFrame() df2[1] = Series([1], 
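The `test_partial_set_empty` assertions above rely on an empty `Series` growing one label at a time through `.loc`. A minimal sketch, assuming an explicit dtype to avoid the empty-construction ambiguity in later pandas:

```python
import pandas as pd

s = pd.Series(dtype="float64")
s.loc[1] = 1.0        # enlarging an empty Series creates the first entry
s.loc[3] = 3.0        # each new label appends in insertion order
s.loc["foo"] = 4.0    # mixed label types are allowed; index becomes object

print(s)
```

Label lookups (`s.loc[3]`) keep working after the index is promoted to object dtype, which is what lets the tests mix `'foo'`, `'bar'`, and `3` as labels.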
index=['foo']) - df.loc[:,1] = Series([1], index=['foo']) - assert_frame_equal(df,DataFrame([[1]], index=['foo'], columns=[1])) - assert_frame_equal(df,df2) + df.loc[:, 1] = Series([1], index=['foo']) + assert_frame_equal(df, DataFrame([[1]], index=['foo'], columns=[1])) + assert_frame_equal(df, df2) # no index to start - expected = DataFrame({ 0 : Series(1,index=range(4)) }, columns=['A','B',0]) + expected = DataFrame( + {0: Series(1, index=range(4))}, columns=['A', 'B', 0]) - df = DataFrame(columns=['A','B']) + df = DataFrame(columns=['A', 'B']) df[0] = Series(1, index=range(4)) df.dtypes str(df) - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) - df = DataFrame(columns=['A','B']) - df.loc[:,0] = Series(1,index=range(4)) + df = DataFrame(columns=['A', 'B']) + df.loc[:, 0] = Series(1, index=range(4)) df.dtypes str(df) - assert_frame_equal(df,expected) + assert_frame_equal(df, expected) # GH5720, GH5744 # don't create rows when empty - expected = DataFrame(columns=['A', 'B', 'New'], index=pd.Index([], dtype='int64')) + expected = DataFrame(columns=['A', 'B', 'New'], index=pd.Index( + [], dtype='int64')) expected['A'] = expected['A'].astype('int64') expected['B'] = expected['B'].astype('float64') expected['New'] = expected['New'].astype('float64') @@ -3809,69 +4291,71 @@ def f(): y = df[df.A > 5] y['New'] = np.nan assert_frame_equal(y, expected) - #assert_frame_equal(y,expected) + # assert_frame_equal(y,expected) expected = DataFrame(columns=['a', 'b', 'c c', 'd']) expected['d'] = expected['d'].astype('int64') df = DataFrame(columns=['a', 'b', 'c c']) df['d'] = 3 assert_frame_equal(df, expected) - assert_series_equal(df['c c'],Series(name='c c',dtype=object)) + assert_series_equal(df['c c'], Series(name='c c', dtype=object)) # reindex columns is ok df = DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]}) y = df[df.A > 5] - result = y.reindex(columns=['A','B','C']) - expected = DataFrame(columns=['A','B','C'], index=pd.Index([], dtype='int64')) + 
result = y.reindex(columns=['A', 'B', 'C']) + expected = DataFrame(columns=['A', 'B', 'C'], index=pd.Index( + [], dtype='int64')) expected['A'] = expected['A'].astype('int64') expected['B'] = expected['B'].astype('float64') expected['C'] = expected['C'].astype('float64') - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # GH 5756 # setting with empty Series df = DataFrame(Series()) - assert_frame_equal(df, DataFrame({ 0 : Series() })) + assert_frame_equal(df, DataFrame({0: Series()})) df = DataFrame(Series(name='foo')) - assert_frame_equal(df, DataFrame({ 'foo' : Series() })) + assert_frame_equal(df, DataFrame({'foo': Series()})) # GH 5932 # copy on empty with assignment fails df = DataFrame(index=[0]) df = df.copy() df['a'] = 0 - expected = DataFrame(0,index=[0],columns=['a']) + expected = DataFrame(0, index=[0], columns=['a']) assert_frame_equal(df, expected) # GH 6171 # consistency on empty frames df = DataFrame(columns=['x', 'y']) df['x'] = [1, 2] - expected = DataFrame(dict(x = [1,2], y = [np.nan,np.nan])) + expected = DataFrame(dict(x=[1, 2], y=[np.nan, np.nan])) assert_frame_equal(df, expected, check_dtype=False) df = DataFrame(columns=['x', 'y']) df['x'] = ['1', '2'] - expected = DataFrame(dict(x = ['1','2'], y = [np.nan,np.nan]),dtype=object) + expected = DataFrame( + dict(x=['1', '2'], y=[np.nan, np.nan]), dtype=object) assert_frame_equal(df, expected) df = DataFrame(columns=['x', 'y']) df.loc[0, 'x'] = 1 - expected = DataFrame(dict(x = [1], y = [np.nan])) + expected = DataFrame(dict(x=[1], y=[np.nan])) assert_frame_equal(df, expected, check_dtype=False) def test_cache_updating(self): # GH 4939, make sure to update the cache on setitem df = tm.makeDataFrame() - df['A'] # cache series + df['A'] # cache series df.ix["Hello Friend"] = df.ix[0] self.assertIn("Hello Friend", df['A'].index) self.assertIn("Hello Friend", df['B'].index) panel = tm.makePanel() - panel.ix[0] # get first item into cache + panel.ix[0] # get first item into 
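The GH5720/GH5744 hunks above assert that operating on a zero-row frame adds columns without inventing rows. A sketch of the two shapes being checked (frames here are illustrative, not the fixtures):

```python
import pandas as pd

# Reindexing columns on an empty selection stays empty.
df = pd.DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
y = df[df.A > 5]                        # zero-row selection
y = y.reindex(columns=["A", "B", "C"])  # gains column C, still zero rows
print(y.shape)

# Broadcasting a scalar into a new column of a zero-row frame
# likewise creates the column but no rows.
df2 = pd.DataFrame(columns=["a", "b"])
df2["d"] = 3
print(df2.shape)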
cache panel.ix[:, :, 'A+1'] = panel.ix[:, :, 'A'] + 1 self.assertIn("A+1", panel.ix[0].columns) self.assertIn("A+1", panel.ix[1].columns) @@ -3887,33 +4371,38 @@ def test_cache_updating(self): # setting via chained assignment # but actually works, since everything is a view df.loc[0]['z'].iloc[0] = 1. - result = df.loc[(0,0),'z'] + result = df.loc[(0, 0), 'z'] self.assertEqual(result, 1) # correct setting - df.loc[(0,0),'z'] = 2 - result = df.loc[(0,0),'z'] + df.loc[(0, 0), 'z'] = 2 + result = df.loc[(0, 0), 'z'] self.assertEqual(result, 2) # 10264 - df = DataFrame(np.zeros((5,5),dtype='int64'),columns=['a','b','c','d','e'],index=range(5)) + df = DataFrame(np.zeros((5, 5), dtype='int64'), columns=[ + 'a', 'b', 'c', 'd', 'e'], index=range(5)) df['f'] = 0 df.f.values[3] = 1 - y = df.iloc[np.arange(2,len(df))] + + # TODO(wesm): unused? + # y = df.iloc[np.arange(2, len(df))] + df.f.values[3] = 2 - expected = DataFrame(np.zeros((5,6),dtype='int64'),columns=['a','b','c','d','e','f'],index=range(5)) - expected.at[3,'f'] = 2 + expected = DataFrame(np.zeros((5, 6), dtype='int64'), columns=[ + 'a', 'b', 'c', 'd', 'e', 'f'], index=range(5)) + expected.at[3, 'f'] = 2 assert_frame_equal(df, expected) - expected = Series([0,0,0,2,0],name='f') + expected = Series([0, 0, 0, 2, 0], name='f') assert_series_equal(df.f, expected) def test_slice_consolidate_invalidate_item_cache(self): # this is chained assignment, but will 'work' - with option_context('chained_assignment',None): + with option_context('chained_assignment', None): # #3970 - df = DataFrame({ "aa":lrange(5), "bb":[2.2]*5}) + df = DataFrame({"aa": lrange(5), "bb": [2.2] * 5}) # Creates a second float block df["cc"] = 0.0 @@ -3931,39 +4420,44 @@ def test_slice_consolidate_invalidate_item_cache(self): def test_setitem_cache_updating(self): # GH 5424 - cont = ['one', 'two','three', 'four', 'five', 'six', 'seven'] + cont = ['one', 'two', 'three', 'four', 'five', 'six', 'seven'] - for do_ref in [False,False]: - df = 
DataFrame({'a' : cont, "b":cont[3:]+cont[:3] ,'c' : np.arange(7)}) + for do_ref in [False, False]: + df = DataFrame({'a': cont, + "b": cont[3:] + cont[:3], + 'c': np.arange(7)}) # ref the cache if do_ref: - df.ix[0,"c"] + df.ix[0, "c"] # set it - df.ix[7,'c'] = 1 + df.ix[7, 'c'] = 1 - self.assertEqual(df.ix[0,'c'], 0.0) - self.assertEqual(df.ix[7,'c'], 1.0) + self.assertEqual(df.ix[0, 'c'], 0.0) + self.assertEqual(df.ix[7, 'c'], 1.0) # GH 7084 # not updating cache on series setting with slices - expected = DataFrame({'A': [600, 600, 600]}, index=date_range('5/7/2014', '5/9/2014')) - out = DataFrame({'A': [0, 0, 0]}, index=date_range('5/7/2014', '5/9/2014')) + expected = DataFrame({'A': [600, 600, 600]}, + index=date_range('5/7/2014', '5/9/2014')) + out = DataFrame({'A': [0, 0, 0]}, + index=date_range('5/7/2014', '5/9/2014')) df = DataFrame({'C': ['A', 'A', 'A'], 'D': [100, 200, 300]}) - #loop through df to update out + # loop through df to update out six = Timestamp('5/7/2014') eix = Timestamp('5/9/2014') for ix, row in df.iterrows(): - out.loc[six:eix,row['C']] = out.loc[six:eix,row['C']] + row['D'] + out.loc[six:eix, row['C']] = out.loc[six:eix, row['C']] + row['D'] assert_frame_equal(out, expected) assert_series_equal(out['A'], expected['A']) # try via a chain indexing # this actually works - out = DataFrame({'A': [0, 0, 0]}, index=date_range('5/7/2014', '5/9/2014')) + out = DataFrame({'A': [0, 0, 0]}, + index=date_range('5/7/2014', '5/9/2014')) for ix, row in df.iterrows(): v = out[row['C']][six:eix] + row['D'] out[row['C']][six:eix] = v @@ -3971,9 +4465,10 @@ def test_setitem_cache_updating(self): assert_frame_equal(out, expected) assert_series_equal(out['A'], expected['A']) - out = DataFrame({'A': [0, 0, 0]}, index=date_range('5/7/2014', '5/9/2014')) + out = DataFrame({'A': [0, 0, 0]}, + index=date_range('5/7/2014', '5/9/2014')) for ix, row in df.iterrows(): - out.loc[six:eix,row['C']] += row['D'] + out.loc[six:eix, row['C']] += row['D'] 
assert_frame_equal(out, expected) assert_series_equal(out['A'], expected['A']) @@ -3988,89 +4483,109 @@ def test_setitem_chained_setfault(self): df = DataFrame({'response': np.array(data)}) mask = df.response == 'timeout' df.response[mask] = 'none' - assert_frame_equal(df, DataFrame({'response': mdata })) + assert_frame_equal(df, DataFrame({'response': mdata})) recarray = np.rec.fromarrays([data], names=['response']) df = DataFrame(recarray) mask = df.response == 'timeout' df.response[mask] = 'none' - assert_frame_equal(df, DataFrame({'response': mdata })) + assert_frame_equal(df, DataFrame({'response': mdata})) - df = DataFrame({'response': data, 'response1' : data }) + df = DataFrame({'response': data, 'response1': data}) mask = df.response == 'timeout' df.response[mask] = 'none' - assert_frame_equal(df, DataFrame({'response': mdata, 'response1' : data })) + assert_frame_equal(df, DataFrame({'response': mdata, + 'response1': data})) # GH 6056 - expected = DataFrame(dict(A = [np.nan,'bar','bah','foo','bar'])) - df = DataFrame(dict(A = np.array(['foo','bar','bah','foo','bar']))) + expected = DataFrame(dict(A=[np.nan, 'bar', 'bah', 'foo', 'bar'])) + df = DataFrame(dict(A=np.array(['foo', 'bar', 'bah', 'foo', 'bar']))) df['A'].iloc[0] = np.nan result = df.head() assert_frame_equal(result, expected) - df = DataFrame(dict(A = np.array(['foo','bar','bah','foo','bar']))) + df = DataFrame(dict(A=np.array(['foo', 'bar', 'bah', 'foo', 'bar']))) df.A.iloc[0] = np.nan result = df.head() assert_frame_equal(result, expected) def test_detect_chained_assignment(self): - pd.set_option('chained_assignment','raise') + pd.set_option('chained_assignment', 'raise') # work with the chain - expected = DataFrame([[-5,1],[-6,3]],columns=list('AB')) - df = DataFrame(np.arange(4).reshape(2,2),columns=list('AB'),dtype='int64') + expected = DataFrame([[-5, 1], [-6, 3]], columns=list('AB')) + df = DataFrame( + np.arange(4).reshape(2, 2), columns=list('AB'), dtype='int64') 
self.assertIsNone(df.is_copy) df['A'][0] = -5 df['A'][1] = -6 assert_frame_equal(df, expected) # test with the chaining - df = DataFrame({ 'A' : Series(range(2),dtype='int64'), 'B' : np.array(np.arange(2,4),dtype=np.float64)}) + df = DataFrame({'A': Series( + range(2), dtype='int64'), + 'B': np.array( + np.arange(2, 4), dtype=np.float64)}) self.assertIsNone(df.is_copy) + def f(): df['A'][0] = -5 + self.assertRaises(com.SettingWithCopyError, f) + def f(): df['A'][1] = np.nan + self.assertRaises(com.SettingWithCopyError, f) self.assertIsNone(df['A'].is_copy) # using a copy (the chain), fails - df = DataFrame({ 'A' : Series(range(2),dtype='int64'), 'B' : np.array(np.arange(2,4),dtype=np.float64)}) + df = DataFrame({'A': Series( + range(2), dtype='int64'), + 'B': np.array( + np.arange(2, 4), dtype=np.float64)}) + def f(): df.loc[0]['A'] = -5 + self.assertRaises(com.SettingWithCopyError, f) # doc example - df = DataFrame({'a' : ['one', 'one', 'two', - 'three', 'two', 'one', 'six'], - 'c' : Series(range(7),dtype='int64') }) + df = DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', 'six' + ], + 'c': Series( + range(7), dtype='int64')}) self.assertIsNone(df.is_copy) - expected = DataFrame({'a' : ['one', 'one', 'two', - 'three', 'two', 'one', 'six'], - 'c' : [42,42,2,3,4,42,6]}) + expected = DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', + 'six'], + 'c': [42, 42, 2, 3, 4, 42, 6]}) def f(): indexer = df.a.str.startswith('o') df[indexer]['c'] = 42 + self.assertRaises(com.SettingWithCopyError, f) - expected = DataFrame({'A':[111,'bbb','ccc'],'B':[1,2,3]}) - df = DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]}) + expected = DataFrame({'A': [111, 'bbb', 'ccc'], 'B': [1, 2, 3]}) + df = DataFrame({'A': ['aaa', 'bbb', 'ccc'], 'B': [1, 2, 3]}) + def f(): df['A'][0] = 111 + self.assertRaises(com.SettingWithCopyError, f) + def f(): df.loc[0]['A'] = 111 + self.assertRaises(com.SettingWithCopyError, f) - df.loc[0,'A'] = 111 - assert_frame_equal(df,expected) + 
df.loc[0, 'A'] = 111 + assert_frame_equal(df, expected) # make sure that is_copy is picked up reconstruction # GH5475 - df = DataFrame({"A": [1,2]}) + df = DataFrame({"A": [1, 2]}) self.assertIsNone(df.is_copy) with tm.ensure_clean('__tmp__pickle') as path: df.to_pickle(path) @@ -4085,7 +4600,7 @@ def f(): def random_text(nobs=100): df = [] for i in range(nobs): - idx= np.random.randint(len(letters), size=2) + idx = np.random.randint(len(letters), size=2) idx.sort() df.append([letters[idx[0]:idx[1]]]) @@ -4094,30 +4609,30 @@ def random_text(nobs=100): df = random_text(100000) # always a copy - x = df.iloc[[0,1,2]] + x = df.iloc[[0, 1, 2]] self.assertIsNotNone(x.is_copy) - x = df.iloc[[0,1,2,4]] + x = df.iloc[[0, 1, 2, 4]] self.assertIsNotNone(x.is_copy) # explicity copy - indexer = df.letters.apply(lambda x : len(x) > 10) + indexer = df.letters.apply(lambda x: len(x) > 10) df = df.ix[indexer].copy() self.assertIsNone(df.is_copy) df['letters'] = df['letters'].apply(str.lower) # implicity take df = random_text(100000) - indexer = df.letters.apply(lambda x : len(x) > 10) + indexer = df.letters.apply(lambda x: len(x) > 10) df = df.ix[indexer] self.assertIsNotNone(df.is_copy) df['letters'] = df['letters'].apply(str.lower) # implicity take 2 df = random_text(100000) - indexer = df.letters.apply(lambda x : len(x) > 10) + indexer = df.letters.apply(lambda x: len(x) > 10) df = df.ix[indexer] self.assertIsNotNone(df.is_copy) - df.loc[:,'letters'] = df['letters'].apply(str.lower) + df.loc[:, 'letters'] = df['letters'].apply(str.lower) # should be ok even though it's a copy! 
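The `detect_chained_assignment` tests above hinge on the difference between one `.loc` call and two chained indexing calls: `df.loc[0, 'A'] = 111` writes to `df`, while `df[indexer]['c'] = 42` writes to a temporary. A sketch of the failure mode (warnings/exceptions are suppressed because the exact signal — `SettingWithCopyWarning`, or a chained-assignment error under copy-on-write — varies by pandas version):

```python
import warnings
import pandas as pd

df = pd.DataFrame({"a": ["one", "two", "three"], "c": [1, 2, 3]})

# Chained indexing first materializes df[indexer] (a copy), so the
# assignment lands on that temporary, never on df itself.
indexer = df.a.str.startswith("o")
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    try:
        df[indexer]["c"] = 42
    except Exception:
        pass  # newer pandas may raise instead of warn

print(df["c"].tolist())  # the original frame is unchanged
```

This is why the tests assert `SettingWithCopyError` for the chained forms but plain success for `df.loc[0, 'A'] = 111`.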
self.assertIsNone(df.is_copy) @@ -4125,77 +4640,99 @@ def random_text(nobs=100): self.assertIsNone(df.is_copy) df = random_text(100000) - indexer = df.letters.apply(lambda x : len(x) > 10) - df.ix[indexer,'letters'] = df.ix[indexer,'letters'].apply(str.lower) + indexer = df.letters.apply(lambda x: len(x) > 10) + df.ix[indexer, 'letters'] = df.ix[indexer, 'letters'].apply(str.lower) # an identical take, so no copy - df = DataFrame({'a' : [1]}).dropna() + df = DataFrame({'a': [1]}).dropna() self.assertIsNone(df.is_copy) df['a'] += 1 # inplace ops - # original from: http://stackoverflow.com/questions/20508968/series-fillna-in-a-multiindex-dataframe-does-not-fill-is-this-a-bug + # original from: + # http://stackoverflow.com/questions/20508968/series-fillna-in-a-multiindex-dataframe-does-not-fill-is-this-a-bug a = [12, 23] b = [123, None] c = [1234, 2345] d = [12345, 23456] - tuples = [('eyes', 'left'), ('eyes', 'right'), ('ears', 'left'), ('ears', 'right')] - events = {('eyes', 'left'): a, ('eyes', 'right'): b, ('ears', 'left'): c, ('ears', 'right'): d} + tuples = [('eyes', 'left'), ('eyes', 'right'), ('ears', 'left'), + ('ears', 'right')] + events = {('eyes', 'left'): a, + ('eyes', 'right'): b, + ('ears', 'left'): c, + ('ears', 'right'): d} multiind = MultiIndex.from_tuples(tuples, names=['part', 'side']) zed = DataFrame(events, index=['a', 'b'], columns=multiind) + def f(): zed['eyes']['right'].fillna(value=555, inplace=True) + self.assertRaises(com.SettingWithCopyError, f) - df = DataFrame(np.random.randn(10,4)) - s = df.iloc[:,0].sort_values() - assert_series_equal(s,df.iloc[:,0].sort_values()) - assert_series_equal(s,df[0].sort_values()) + df = DataFrame(np.random.randn(10, 4)) + s = df.iloc[:, 0].sort_values() + assert_series_equal(s, df.iloc[:, 0].sort_values()) + assert_series_equal(s, df[0].sort_values()) # false positives GH6025 - df = DataFrame ({'column1':['a', 'a', 'a'], 'column2': [4,8,9] }) + df = DataFrame({'column1': ['a', 'a', 'a'], 'column2': [4, 8, 
9]}) str(df) df['column1'] = df['column1'] + 'b' str(df) - df = df [df['column2']!=8] + df = df[df['column2'] != 8] str(df) df['column1'] = df['column1'] + 'c' str(df) - # from SO: http://stackoverflow.com/questions/24054495/potential-bug-setting-value-for-undefined-column-using-iloc - df = DataFrame(np.arange(0,9), columns=['count']) + # from SO: + # http://stackoverflow.com/questions/24054495/potential-bug-setting-value-for-undefined-column-using-iloc + df = DataFrame(np.arange(0, 9), columns=['count']) df['group'] = 'b' + def f(): df.iloc[0:5]['group'] = 'a' + self.assertRaises(com.SettingWithCopyError, f) # mixed type setting # same dtype & changing dtype - df = DataFrame(dict(A=date_range('20130101',periods=5),B=np.random.randn(5),C=np.arange(5,dtype='int64'),D=list('abcde'))) + df = DataFrame(dict(A=date_range('20130101', periods=5), + B=np.random.randn(5), + C=np.arange(5, dtype='int64'), + D=list('abcde'))) def f(): df.ix[2]['D'] = 'foo' + self.assertRaises(com.SettingWithCopyError, f) + def f(): df.ix[2]['C'] = 'foo' + self.assertRaises(com.SettingWithCopyError, f) + def f(): df['C'][2] = 'foo' + self.assertRaises(com.SettingWithCopyError, f) def test_setting_with_copy_bug(self): # operating on a copy - df = pd.DataFrame({'a': list(range(4)), 'b': list('ab..'), 'c': ['a', 'b', np.nan, 'd']}) + df = pd.DataFrame({'a': list(range(4)), + 'b': list('ab..'), + 'c': ['a', 'b', np.nan, 'd']}) mask = pd.isnull(df.c) def f(): df[['c']][mask] = df[['b']][mask] + self.assertRaises(com.SettingWithCopyError, f) # invalid warning as we are returning a new object # GH 8730 - df1 = DataFrame({'x': Series(['a','b','c']), 'y': Series(['d','e','f'])}) + df1 = DataFrame({'x': Series(['a', 'b', 'c']), + 'y': Series(['d', 'e', 'f'])}) df2 = df1[['x']] # this should not raise @@ -4204,24 +4741,173 @@ def f(): def test_detect_chained_assignment_warnings(self): # warnings - with option_context('chained_assignment','warn'): - df = DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]}) - 
-            with tm.assert_produces_warning(expected_warning=com.SettingWithCopyWarning):
+        with option_context('chained_assignment', 'warn'):
+            df = DataFrame({'A': ['aaa', 'bbb', 'ccc'], 'B': [1, 2, 3]})
+            with tm.assert_produces_warning(
+                    expected_warning=com.SettingWithCopyWarning):
                 df.loc[0]['A'] = 111

     def test_float64index_slicing_bug(self):
         # GH 5557, related to slicing a float index
-        ser = {256: 2321.0, 1: 78.0, 2: 2716.0, 3: 0.0, 4: 369.0, 5: 0.0, 6: 269.0, 7: 0.0, 8: 0.0, 9: 0.0, 10: 3536.0, 11: 0.0, 12: 24.0, 13: 0.0, 14: 931.0, 15: 0.0, 16: 101.0, 17: 78.0, 18: 9643.0, 19: 0.0, 20: 0.0, 21: 0.0, 22: 63761.0, 23: 0.0, 24: 446.0, 25: 0.0, 26: 34773.0, 27: 0.0, 28: 729.0, 29: 78.0, 30: 0.0, 31: 0.0, 32: 3374.0, 33: 0.0, 34: 1391.0, 35: 0.0, 36: 361.0, 37: 0.0, 38: 61808.0, 39: 0.0, 40: 0.0, 41: 0.0, 42: 6677.0, 43: 0.0, 44: 802.0, 45: 0.0, 46: 2691.0, 47: 0.0, 48: 3582.0, 49: 0.0, 50: 734.0, 51: 0.0, 52: 627.0, 53: 70.0, 54: 2584.0, 55: 0.0, 56: 324.0, 57: 0.0, 58: 605.0, 59: 0.0, 60: 0.0, 61: 0.0, 62: 3989.0, 63: 10.0, 64: 42.0, 65: 0.0, 66: 904.0, 67: 0.0, 68: 88.0, 69: 70.0, 70: 8172.0, 71: 0.0, 72: 0.0, 73: 0.0, 74: 64902.0, 75: 0.0, 76: 347.0, 77: 0.0, 78: 36605.0, 79: 0.0, 80: 379.0, 81: 70.0, 82: 0.0, 83: 0.0, 84: 3001.0, 85: 0.0, 86: 1630.0, 87: 7.0, 88: 364.0, 89: 0.0, 90: 67404.0, 91: 9.0, 92: 0.0, 93: 0.0, 94: 7685.0, 95: 0.0, 96: 1017.0, 97: 0.0, 98: 2831.0, 99: 0.0, 100: 2963.0, 101: 0.0, 102: 854.0, 103: 0.0, 104: 0.0, 105: 0.0, 106: 0.0, 107: 0.0, 108: 0.0, 109: 0.0, 110: 0.0, 111: 0.0, 112: 0.0, 113: 0.0, 114: 0.0, 115: 0.0, 116: 0.0, 117: 0.0, 118: 0.0, 119: 0.0, 120: 0.0, 121: 0.0, 122: 0.0, 123: 0.0, 124: 0.0, 125: 0.0, 126: 67744.0, 127: 22.0, 128: 264.0, 129: 0.0, 260: 197.0, 268: 0.0, 265: 0.0, 269: 0.0, 261: 0.0, 266: 1198.0, 267: 0.0, 262: 2629.0, 258: 775.0, 257: 0.0, 263: 0.0, 259: 0.0, 264: 163.0, 250: 10326.0, 251: 0.0, 252: 1228.0, 253: 0.0, 254: 2769.0, 255: 0.0}
+        ser = {256: 2321.0,
+               1: 78.0,
+               2: 2716.0,
+               3: 0.0,
+               4: 369.0,
+               5: 0.0,
+               6: 269.0,
+               7: 0.0,
+               8: 0.0,
+               9: 0.0,
+               10: 3536.0,
+               11: 0.0,
+               12: 24.0,
+               13: 0.0,
+               14: 931.0,
+               15: 0.0,
+               16: 101.0,
+               17: 78.0,
+               18: 9643.0,
+               19: 0.0,
+               20: 0.0,
+               21: 0.0,
+               22: 63761.0,
+               23: 0.0,
+               24: 446.0,
+               25: 0.0,
+               26: 34773.0,
+               27: 0.0,
+               28: 729.0,
+               29: 78.0,
+               30: 0.0,
+               31: 0.0,
+               32: 3374.0,
+               33: 0.0,
+               34: 1391.0,
+               35: 0.0,
+               36: 361.0,
+               37: 0.0,
+               38: 61808.0,
+               39: 0.0,
+               40: 0.0,
+               41: 0.0,
+               42: 6677.0,
+               43: 0.0,
+               44: 802.0,
+               45: 0.0,
+               46: 2691.0,
+               47: 0.0,
+               48: 3582.0,
+               49: 0.0,
+               50: 734.0,
+               51: 0.0,
+               52: 627.0,
+               53: 70.0,
+               54: 2584.0,
+               55: 0.0,
+               56: 324.0,
+               57: 0.0,
+               58: 605.0,
+               59: 0.0,
+               60: 0.0,
+               61: 0.0,
+               62: 3989.0,
+               63: 10.0,
+               64: 42.0,
+               65: 0.0,
+               66: 904.0,
+               67: 0.0,
+               68: 88.0,
+               69: 70.0,
+               70: 8172.0,
+               71: 0.0,
+               72: 0.0,
+               73: 0.0,
+               74: 64902.0,
+               75: 0.0,
+               76: 347.0,
+               77: 0.0,
+               78: 36605.0,
+               79: 0.0,
+               80: 379.0,
+               81: 70.0,
+               82: 0.0,
+               83: 0.0,
+               84: 3001.0,
+               85: 0.0,
+               86: 1630.0,
+               87: 7.0,
+               88: 364.0,
+               89: 0.0,
+               90: 67404.0,
+               91: 9.0,
+               92: 0.0,
+               93: 0.0,
+               94: 7685.0,
+               95: 0.0,
+               96: 1017.0,
+               97: 0.0,
+               98: 2831.0,
+               99: 0.0,
+               100: 2963.0,
+               101: 0.0,
+               102: 854.0,
+               103: 0.0,
+               104: 0.0,
+               105: 0.0,
+               106: 0.0,
+               107: 0.0,
+               108: 0.0,
+               109: 0.0,
+               110: 0.0,
+               111: 0.0,
+               112: 0.0,
+               113: 0.0,
+               114: 0.0,
+               115: 0.0,
+               116: 0.0,
+               117: 0.0,
+               118: 0.0,
+               119: 0.0,
+               120: 0.0,
+               121: 0.0,
+               122: 0.0,
+               123: 0.0,
+               124: 0.0,
+               125: 0.0,
+               126: 67744.0,
+               127: 22.0,
+               128: 264.0,
+               129: 0.0,
+               260: 197.0,
+               268: 0.0,
+               265: 0.0,
+               269: 0.0,
+               261: 0.0,
+               266: 1198.0,
+               267: 0.0,
+               262: 2629.0,
+               258: 775.0,
+               257: 0.0,
+               263: 0.0,
+               259: 0.0,
+               264: 163.0,
+               250: 10326.0,
+               251: 0.0,
+               252: 1228.0,
+               253: 0.0,
+               254: 2769.0,
+               255: 0.0}

         # smoke test for the repr
         s = Series(ser)
-        result = s.value_counts()
+        result = s.value_counts()
         str(result)

     def test_floating_index_doc_example(self):

         index = Index([1.5, 2, 3, 4.5, 5])
-        s = Series(range(5),index=index)
+        s = Series(range(5), index=index)
         self.assertEqual(s[3], 2)
         self.assertEqual(s.ix[3], 2)
         self.assertEqual(s.loc[3], 2)
@@ -4258,20 +4944,20 @@ def test_floating_index(self):
         # value not found (and no fallbacking at all)

         # scalar integers
-        self.assertRaises(KeyError, lambda : s.loc[4])
-        self.assertRaises(KeyError, lambda : s.ix[4])
-        self.assertRaises(KeyError, lambda : s[4])
+        self.assertRaises(KeyError, lambda: s.loc[4])
+        self.assertRaises(KeyError, lambda: s.ix[4])
+        self.assertRaises(KeyError, lambda: s[4])

         # fancy floats/integers create the correct entry (as nan)
         # fancy tests
         expected = Series([2, 0], index=Float64Index([5.0, 0.0]))
-        for fancy_idx in [[5.0, 0.0], np.array([5.0, 0.0])]: # float
+        for fancy_idx in [[5.0, 0.0], np.array([5.0, 0.0])]:  # float
             assert_series_equal(s[fancy_idx], expected)
             assert_series_equal(s.loc[fancy_idx], expected)
             assert_series_equal(s.ix[fancy_idx], expected)

         expected = Series([2, 0], index=Index([5, 0], dtype='int64'))
-        for fancy_idx in [[5, 0], np.array([5, 0])]: #int
+        for fancy_idx in [[5, 0], np.array([5, 0])]:  # int
             assert_series_equal(s[fancy_idx], expected)
             assert_series_equal(s.loc[fancy_idx], expected)
             assert_series_equal(s.ix[fancy_idx], expected)
@@ -4311,39 +4997,41 @@ def test_floating_index(self):
         assert_series_equal(result1, result3)

         # list selection
-        result1 = s[[0.0,5,10]]
-        result2 = s.loc[[0.0,5,10]]
-        result3 = s.ix[[0.0,5,10]]
-        result4 = s.iloc[[0,2,4]]
+        result1 = s[[0.0, 5, 10]]
+        result2 = s.loc[[0.0, 5, 10]]
+        result3 = s.ix[[0.0, 5, 10]]
+        result4 = s.iloc[[0, 2, 4]]
         assert_series_equal(result1, result2)
         assert_series_equal(result1, result3)
         assert_series_equal(result1, result4)

-        result1 = s[[1.6,5,10]]
-        result2 = s.loc[[1.6,5,10]]
-        result3 = s.ix[[1.6,5,10]]
+        result1 = s[[1.6, 5, 10]]
+        result2 = s.loc[[1.6, 5, 10]]
+        result3 = s.ix[[1.6, 5, 10]]
         assert_series_equal(result1, result2)
         assert_series_equal(result1, result3)
-        assert_series_equal(result1, Series([np.nan,2,4],index=[1.6,5,10]))
+        assert_series_equal(result1, Series(
+            [np.nan, 2, 4], index=[1.6, 5, 10]))

-        result1 = s[[0,1,2]]
-        result2 = s.ix[[0,1,2]]
-        result3 = s.loc[[0,1,2]]
+        result1 = s[[0, 1, 2]]
+        result2 = s.ix[[0, 1, 2]]
+        result3 = s.loc[[0, 1, 2]]
         assert_series_equal(result1, result2)
         assert_series_equal(result1, result3)
-        assert_series_equal(result1, Series([0.0,np.nan,np.nan],index=[0,1,2]))
+        assert_series_equal(result1, Series(
+            [0.0, np.nan, np.nan], index=[0, 1, 2]))

         result1 = s.loc[[2.5, 5]]
         result2 = s.ix[[2.5, 5]]
         assert_series_equal(result1, result2)
-        assert_series_equal(result1, Series([1,2],index=[2.5,5.0]))
+        assert_series_equal(result1, Series([1, 2], index=[2.5, 5.0]))

         result1 = s[[2.5]]
         result2 = s.ix[[2.5]]
         result3 = s.loc[[2.5]]
         assert_series_equal(result1, result2)
         assert_series_equal(result1, result3)
-        assert_series_equal(result1, Series([1],index=[2.5]))
+        assert_series_equal(result1, Series([1], index=[2.5]))

     def test_scalar_indexer(self):
         # float indexing checked above
@@ -4368,8 +5056,8 @@ def check_invalid(index, loc=None, iloc=None, ix=None, getitem=None):
             self.assertRaises(getitem, lambda: s[3.5])

         for index in [tm.makeStringIndex, tm.makeUnicodeIndex,
-                      tm.makeIntIndex, tm.makeRangeIndex,
-                      tm.makeDateIndex, tm.makePeriodIndex]:
+                      tm.makeIntIndex, tm.makeRangeIndex, tm.makeDateIndex,
+                      tm.makePeriodIndex]:
             check_invalid(index())
         check_invalid(Index(np.arange(5) * 2.5),
                       loc=KeyError,
@@ -4378,7 +5066,7 @@ def check_invalid(index, loc=None, iloc=None, ix=None, getitem=None):
         def check_index(index, error):

             index = index()
-            s = Series(np.arange(len(index)),index=index)
+            s = Series(np.arange(len(index)), index=index)

             # positional selection
             result1 = s[5]
@@ -4387,8 +5075,8 @@ def check_index(index, error):
             result4 = s.iloc[5.0]

             # by value
-            self.assertRaises(error, lambda : s.loc[5])
-            self.assertRaises(error, lambda : s.loc[5.0])
+            self.assertRaises(error, lambda: s.loc[5])
+            self.assertRaises(error, lambda: s.loc[5.0])

             # this is fallback, so it works
             result5 = s.ix[5]
@@ -4401,15 +5089,16 @@ def check_index(index, error):
             self.assertEqual(result1, result6)

         # string-like
-        for index in [ tm.makeStringIndex, tm.makeUnicodeIndex ]:
+        for index in [tm.makeStringIndex, tm.makeUnicodeIndex]:
             check_index(index, KeyError)

         # datetimelike
-        for index in [ tm.makeDateIndex, tm.makeTimedeltaIndex, tm.makePeriodIndex ]:
+        for index in [tm.makeDateIndex, tm.makeTimedeltaIndex,
+                      tm.makePeriodIndex]:
             check_index(index, TypeError)

         # exact indexing when found on IntIndex
-        s = Series(np.arange(10),dtype='int64')
+        s = Series(np.arange(10), dtype='int64')

         result1 = s[5.0]
         result2 = s.loc[5.0]
@@ -4424,24 +5113,23 @@ def check_index(index, error):
         self.assertEqual(result1, result6)

     def test_slice_indexer(self):
-
         def check_iloc_compat(s):
             # invalid type for iloc (but works with a warning)
             # check_stacklevel=False -> impossible to get it right for all
             # index types
-            with self.assert_produces_warning(
-                    FutureWarning, check_stacklevel=False):
+            with self.assert_produces_warning(FutureWarning,
+                                              check_stacklevel=False):
                 s.iloc[6.0:8]
-            with self.assert_produces_warning(
-                    FutureWarning, check_stacklevel=False):
+            with self.assert_produces_warning(FutureWarning,
+                                              check_stacklevel=False):
                 s.iloc[6.0:8.0]
-            with self.assert_produces_warning(
-                    FutureWarning, check_stacklevel=False):
+            with self.assert_produces_warning(FutureWarning,
+                                              check_stacklevel=False):
                 s.iloc[6:8.0]

         def check_slicing_positional(index):

-            s = Series(np.arange(len(index))+10,index=index)
+            s = Series(np.arange(len(index)) + 10, index=index)

             # these are all positional
             result1 = s[2:5]
@@ -4451,26 +5139,27 @@ def check_slicing_positional(index):
             assert_series_equal(result1, result3)

             # loc will fail
-            self.assertRaises(TypeError, lambda : s.loc[2:5])
+            self.assertRaises(TypeError, lambda: s.loc[2:5])

             # make all float slicing fail
-            self.assertRaises(TypeError, lambda : s[2.0:5])
-            self.assertRaises(TypeError, lambda : s[2.0:5.0])
-            self.assertRaises(TypeError, lambda : s[2:5.0])
+            self.assertRaises(TypeError, lambda: s[2.0:5])
+            self.assertRaises(TypeError, lambda: s[2.0:5.0])
+            self.assertRaises(TypeError, lambda: s[2:5.0])

-            self.assertRaises(TypeError, lambda : s.ix[2.0:5])
-            self.assertRaises(TypeError, lambda : s.ix[2.0:5.0])
-            self.assertRaises(TypeError, lambda : s.ix[2:5.0])
+            self.assertRaises(TypeError, lambda: s.ix[2.0:5])
+            self.assertRaises(TypeError, lambda: s.ix[2.0:5.0])
+            self.assertRaises(TypeError, lambda: s.ix[2:5.0])

-            self.assertRaises(TypeError, lambda : s.loc[2.0:5])
-            self.assertRaises(TypeError, lambda : s.loc[2.0:5.0])
-            self.assertRaises(TypeError, lambda : s.loc[2:5.0])
+            self.assertRaises(TypeError, lambda: s.loc[2.0:5])
+            self.assertRaises(TypeError, lambda: s.loc[2.0:5.0])
+            self.assertRaises(TypeError, lambda: s.loc[2:5.0])

             check_iloc_compat(s)

         # all index types except int, float
-        for index in [ tm.makeStringIndex, tm.makeUnicodeIndex,
-                       tm.makeDateIndex, tm.makeTimedeltaIndex, tm.makePeriodIndex ]:
+        for index in [tm.makeStringIndex, tm.makeUnicodeIndex,
+                      tm.makeDateIndex, tm.makeTimedeltaIndex,
+                      tm.makePeriodIndex]:
             check_slicing_positional(index())

         ############
@@ -4524,30 +5213,32 @@ def check_slicing_positional(index):
         # these are valid for all methods
         # these are treated like labels (e.g. the rhs IS included)
         def compare(slicers, expected):
-            for method in [lambda x: x, lambda x: x.loc, lambda x: x.ix ]:
+            for method in [lambda x: x, lambda x: x.loc, lambda x: x.ix]:
                 for slices in slicers:
                     result = method(s)[slices]
                     assert_series_equal(result, expected)

-        compare([slice(6.0,8),slice(6.0,8.0),slice(6,8.0)],
-                s[(s.index>=6.0)&(s.index<=8)])
-        compare([slice(6.5,8),slice(6.5,8.5)],
-                s[(s.index>=6.5)&(s.index<=8.5)])
-        compare([slice(6,8.5)],
-                s[(s.index>=6.0)&(s.index<=8.5)])
-        compare([slice(6.5,6.5)],
-                s[(s.index>=6.5)&(s.index<=6.5)])
+        compare([slice(6.0, 8), slice(6.0, 8.0), slice(6, 8.0)],
+                s[(s.index >= 6.0) & (s.index <= 8)])
+        compare([slice(6.5, 8), slice(6.5, 8.5)],
+                s[(s.index >= 6.5) & (s.index <= 8.5)])
+        compare([slice(6, 8.5)], s[(s.index >= 6.0) & (s.index <= 8.5)])
+        compare([slice(6.5, 6.5)], s[(s.index >= 6.5) & (s.index <= 6.5)])

         check_iloc_compat(s)

     def test_set_ix_out_of_bounds_axis_0(self):
-        df = pd.DataFrame(randn(2, 5), index=["row%s" % i for i in range(2)], columns=["col%s" % i for i in range(5)])
+        df = pd.DataFrame(
+            randn(2, 5), index=["row%s" % i for i in range(2)],
+            columns=["col%s" % i for i in range(5)])
         self.assertRaises(ValueError, df.ix.__setitem__, (2, 0), 100)

     def test_set_ix_out_of_bounds_axis_1(self):
-        df = pd.DataFrame(randn(5, 2), index=["row%s" % i for i in range(5)], columns=["col%s" % i for i in range(2)])
-        self.assertRaises(ValueError, df.ix.__setitem__, (0 , 2), 100)
+        df = pd.DataFrame(
+            randn(5, 2), index=["row%s" % i for i in range(5)],
+            columns=["col%s" % i for i in range(2)])
+        self.assertRaises(ValueError, df.ix.__setitem__, (0, 2), 100)

     def test_iloc_empty_list_indexer_is_ok(self):
         from pandas.util.testing import makeCustomDataframe as mkdf
@@ -4559,8 +5250,8 @@ def test_iloc_empty_list_indexer_is_ok(self):
         assert_frame_equal(df.iloc[[], :], df.iloc[:0, :],
                            check_index_type=True, check_column_type=True)
         # horizontal empty
-        assert_frame_equal(df.iloc[[]], df.iloc[:0, :],
-                           check_index_type=True, check_column_type=True)
+        assert_frame_equal(df.iloc[[]], df.iloc[:0, :], check_index_type=True,
+                           check_column_type=True)

     def test_loc_empty_list_indexer_is_ok(self):
         from pandas.util.testing import makeCustomDataframe as mkdf
@@ -4572,21 +5263,21 @@ def test_loc_empty_list_indexer_is_ok(self):
         assert_frame_equal(df.loc[[], :], df.iloc[:0, :],
                            check_index_type=True, check_column_type=True)
         # horizontal empty
-        assert_frame_equal(df.loc[[]], df.iloc[:0, :],
-                           check_index_type=True, check_column_type=True)
+        assert_frame_equal(df.loc[[]], df.iloc[:0, :], check_index_type=True,
+                           check_column_type=True)

     def test_ix_empty_list_indexer_is_ok(self):
         from pandas.util.testing import makeCustomDataframe as mkdf
         df = mkdf(5, 2)
         # vertical empty
-        assert_frame_equal(df.ix[:, []], df.iloc[:, :0],
-                           check_index_type=True, check_column_type=True)
+        assert_frame_equal(df.ix[:, []], df.iloc[:, :0], check_index_type=True,
+                           check_column_type=True)
         # horizontal empty
-        assert_frame_equal(df.ix[[], :], df.iloc[:0, :],
-                           check_index_type=True, check_column_type=True)
+        assert_frame_equal(df.ix[[], :], df.iloc[:0, :], check_index_type=True,
+                           check_column_type=True)
         # horizontal empty
-        assert_frame_equal(df.ix[[]], df.iloc[:0, :],
-                           check_index_type=True, check_column_type=True)
+        assert_frame_equal(df.ix[[]], df.iloc[:0, :], check_index_type=True,
+                           check_column_type=True)

     def test_deprecate_float_indexers(self):
@@ -4600,29 +5291,34 @@ def test_deprecate_float_indexers(self):
         def check_index(index):

             i = index(5)
-            for s in [ Series(np.arange(len(i)),index=i), DataFrame(np.random.randn(len(i),len(i)),index=i,columns=i) ]:
-                self.assertRaises(FutureWarning, lambda :
-                                  s.iloc[3.0])
+            for s in [Series(
+                    np.arange(len(i)), index=i), DataFrame(
+                        np.random.randn(
+                            len(i), len(i)), index=i, columns=i)]:
+                self.assertRaises(FutureWarning, lambda: s.iloc[3.0])

                 # setting
                 def f():
                     s.iloc[3.0] = 0
+
                 self.assertRaises(FutureWarning, f)

             # fallsback to position selection ,series only
-            s = Series(np.arange(len(i)),index=i)
+            s = Series(np.arange(len(i)), index=i)
             s[3]
-            self.assertRaises(FutureWarning, lambda : s[3.0])
+            self.assertRaises(FutureWarning, lambda: s[3.0])

-        for index in [ tm.makeStringIndex, tm.makeUnicodeIndex,
-                       tm.makeDateIndex, tm.makeTimedeltaIndex, tm.makePeriodIndex ]:
+        for index in [tm.makeStringIndex, tm.makeUnicodeIndex,
+                      tm.makeDateIndex, tm.makeTimedeltaIndex,
+                      tm.makePeriodIndex]:
             check_index(index)

         # ints
         i = index(5)
-        for s in [ Series(np.arange(len(i))), DataFrame(np.random.randn(len(i),len(i)),index=i,columns=i) ]:
-            self.assertRaises(FutureWarning, lambda :
-                              s.iloc[3.0])
+        for s in [Series(np.arange(len(i))), DataFrame(
+                np.random.randn(
+                    len(i), len(i)), index=i, columns=i)]:
+            self.assertRaises(FutureWarning, lambda: s.iloc[3.0])

             # on some arch's this doesn't provide a warning (and thus raise)
             # and some it does
@@ -4634,20 +5330,23 @@ def f():
             # setting
             def f():
                 s.iloc[3.0] = 0
+
            self.assertRaises(FutureWarning, f)

         # floats: these are all ok!
         i = np.arange(5.)
-        for s in [ Series(np.arange(len(i)),index=i), DataFrame(np.random.randn(len(i),len(i)),index=i,columns=i) ]:
+        for s in [Series(
+                np.arange(len(i)), index=i), DataFrame(
+                    np.random.randn(
+                        len(i), len(i)), index=i, columns=i)]:
             with tm.assert_produces_warning(False):
                 s[3.0]

             with tm.assert_produces_warning(False):
                 s[3]

-            self.assertRaises(FutureWarning, lambda :
-                              s.iloc[3.0])
+            self.assertRaises(FutureWarning, lambda: s.iloc[3.0])

             with tm.assert_produces_warning(False):
                 s.iloc[3]
@@ -4660,6 +5359,7 @@ def f():
             def f():
                 s.iloc[3.0] = 0
+
             self.assertRaises(FutureWarning, f)

         # slices
@@ -4668,33 +5368,35 @@ def f():
                       tm.makeDateIndex, tm.makePeriodIndex]:

             index = index(5)
-            for s in [Series(range(5), index=index),
-                      DataFrame(np.random.randn(5, 2), index=index)]:
+            for s in [Series(
+                    range(5), index=index), DataFrame(
+                        np.random.randn(5, 2), index=index)]:

                 # getitem
-                self.assertRaises(FutureWarning, lambda:
-                                  s.iloc[3.0:4])
-                self.assertRaises(FutureWarning, lambda:
-                                  s.iloc[3.0:4.0])
-                self.assertRaises(FutureWarning, lambda:
-                                  s.iloc[3:4.0])
+                self.assertRaises(FutureWarning, lambda: s.iloc[3.0:4])
+                self.assertRaises(FutureWarning, lambda: s.iloc[3.0:4.0])
+                self.assertRaises(FutureWarning, lambda: s.iloc[3:4.0])

                 # setitem
                 def f():
                     s.iloc[3.0:4] = 0
+
                 self.assertRaises(FutureWarning, f)

                 def f():
                     s.iloc[3:4.0] = 0
+
                 self.assertRaises(FutureWarning, f)

                 def f():
                     s.iloc[3.0:4.0] = 0
+
                 self.assertRaises(FutureWarning, f)

         warnings.filterwarnings(action='ignore', category=FutureWarning)

     def test_float_index_to_mixed(self):
-        df = DataFrame({0.0: np.random.rand(10),
-                        1.0: np.random.rand(10)})
+        df = DataFrame({0.0: np.random.rand(10), 1.0: np.random.rand(10)})
         df['a'] = 10
         tm.assert_frame_equal(DataFrame({0.0: df[0.0],
                                          1.0: df[1.0],
@@ -4709,15 +5411,15 @@ def test_duplicate_ix_returns_series(self):
         tm.assert_series_equal(r, e)

     def test_float_index_non_scalar_assignment(self):
-        df = DataFrame({'a': [1,2,3], 'b': [3,4,5]},index=[1.,2.,3.])
+        df = DataFrame({'a': [1, 2, 3], 'b': [3, 4, 5]}, index=[1., 2., 3.])
         df.loc[df.index[:2]] = 1
-        expected = DataFrame({'a':[1,1,3],'b':[1,1,5]},index=df.index)
+        expected = DataFrame({'a': [1, 1, 3], 'b': [1, 1, 5]}, index=df.index)
         tm.assert_frame_equal(expected, df)

-        df = DataFrame({'a': [1,2,3], 'b': [3,4,5]},index=[1.,2.,3.])
+        df = DataFrame({'a': [1, 2, 3], 'b': [3, 4, 5]}, index=[1., 2., 3.])
         df2 = df.copy()
         df.loc[df.index] = df.loc[df.index]
-        tm.assert_frame_equal(df,df2)
+        tm.assert_frame_equal(df, df2)

     def test_float_index_at_iat(self):
         s = pd.Series([1, 2, 3], index=[0.1, 0.2, 0.3])
@@ -4759,7 +5461,7 @@ def run_tests(df, rhs, right):
         df = pd.DataFrame(xs, columns=cols, index=list('abcde'))

         # right hand side; permute the indices and multiplpy by -2
-        rhs = - 2 * df.iloc[3:0:-1, 2:0:-1]
+        rhs = -2 * df.iloc[3:0:-1, 2:0:-1]

         # expected `right` result; just multiply by -2
         right = df.copy()
@@ -4808,15 +5510,15 @@ def assert_slices_equivalent(l_slc, i_slc):
         assert_slices_equivalent(SLC[::-1], SLC[::-1])

         assert_slices_equivalent(SLC['d'::-1], SLC[15::-1])
-        assert_slices_equivalent(SLC[('d',)::-1], SLC[15::-1])
+        assert_slices_equivalent(SLC[('d', )::-1], SLC[15::-1])

         assert_slices_equivalent(SLC[:'d':-1], SLC[:11:-1])
-        assert_slices_equivalent(SLC[:('d',):-1], SLC[:11:-1])
+        assert_slices_equivalent(SLC[:('d', ):-1], SLC[:11:-1])

         assert_slices_equivalent(SLC['d':'b':-1], SLC[15:3:-1])
-        assert_slices_equivalent(SLC[('d',):'b':-1], SLC[15:3:-1])
-        assert_slices_equivalent(SLC['d':('b',):-1], SLC[15:3:-1])
-        assert_slices_equivalent(SLC[('d',):('b',):-1], SLC[15:3:-1])
+        assert_slices_equivalent(SLC[('d', ):'b':-1], SLC[15:3:-1])
+        assert_slices_equivalent(SLC['d':('b', ):-1], SLC[15:3:-1])
+        assert_slices_equivalent(SLC[('d', ):('b', ):-1], SLC[15:3:-1])
         assert_slices_equivalent(SLC['b':'d':-1], SLC[:0])

         assert_slices_equivalent(SLC[('c', 2)::-1], SLC[10::-1])
@@ -4844,12 +5546,12 @@ def test_indexing_assignment_dict_already_exists(self):
     def test_indexing_dtypes_on_empty(self):
         # Check that .iloc and .ix return correct dtypes GH9983
-        df = DataFrame({'a':[1,2,3],'b':['b','b2','b3']})
-        df2 = df.ix[[],:]
+        df = DataFrame({'a': [1, 2, 3], 'b': ['b', 'b2', 'b3']})
+        df2 = df.ix[[], :]

-        self.assertEqual(df2.loc[:,'a'].dtype, np.int64)
-        assert_series_equal(df2.loc[:,'a'], df2.iloc[:,0])
-        assert_series_equal(df2.loc[:,'a'], df2.ix[:,0])
+        self.assertEqual(df2.loc[:, 'a'].dtype, np.int64)
+        assert_series_equal(df2.loc[:, 'a'], df2.iloc[:, 0])
+        assert_series_equal(df2.loc[:, 'a'], df2.ix[:, 0])

     def test_range_in_series_indexing(self):
         # range can cause an indexing error
@@ -4857,24 +5559,24 @@ def test_range_in_series_indexing(self):
         for x in [5, 999999, 1000000]:
             s = pd.Series(index=range(x))
             s.loc[range(1)] = 42
-            assert_series_equal(s.loc[range(1)],Series(42.0,index=[0]))
+            assert_series_equal(s.loc[range(1)], Series(42.0, index=[0]))

             s.loc[range(2)] = 43
-            assert_series_equal(s.loc[range(2)],Series(43.0,index=[0,1]))
+            assert_series_equal(s.loc[range(2)], Series(43.0, index=[0, 1]))

     @slow
     def test_large_dataframe_indexing(self):
-        #GH10692
-        result = DataFrame({'x': range(10**6)},dtype='int64')
+        # GH10692
+        result = DataFrame({'x': range(10 ** 6)}, dtype='int64')
         result.loc[len(result)] = len(result) + 1
-        expected = DataFrame({'x': range(10**6 + 1)},dtype='int64')
+        expected = DataFrame({'x': range(10 ** 6 + 1)}, dtype='int64')
         assert_frame_equal(result, expected)

     @slow
     def test_large_mi_dataframe_indexing(self):
-        #GH10645
-        result = MultiIndex.from_arrays([range(10**6), range(10**6)])
-        assert(not (10**6, 0) in result)
+        # GH10645
+        result = MultiIndex.from_arrays([range(10 ** 6), range(10 ** 6)])
+        assert (not (10 ** 6, 0) in result)

     def test_non_reducing_slice(self):
         df = pd.DataFrame([[0, 1], [2, 3]])
@@ -4923,76 +5625,93 @@ class TestCategoricalIndex(tm.TestCase):

     def setUp(self):

-        self.df = DataFrame({'A' : np.arange(6,dtype='int64'),
-                             'B' : Series(list('aabbca')).astype('category',categories=list('cab')) }).set_index('B')
-        self.df2 = DataFrame({'A' : np.arange(6,dtype='int64'),
-                              'B' : Series(list('aabbca')).astype('category',categories=list('cabe')) }).set_index('B')
-        self.df3 = DataFrame({'A' : np.arange(6,dtype='int64'),
-                              'B' : Series([1,1,2,1,3,2]).astype('category',categories=[3,2,1],ordered=True) }).set_index('B')
-        self.df4 = DataFrame({'A' : np.arange(6,dtype='int64'),
-                              'B' : Series([1,1,2,1,3,2]).astype('category',categories=[3,2,1],ordered=False) }).set_index('B')
-
+        self.df = DataFrame({'A': np.arange(6, dtype='int64'),
+                             'B': Series(list('aabbca')).astype(
+                                 'category', categories=list(
+                                     'cab'))}).set_index('B')
+        self.df2 = DataFrame({'A': np.arange(6, dtype='int64'),
+                              'B': Series(list('aabbca')).astype(
+                                  'category', categories=list(
+                                      'cabe'))}).set_index('B')
+        self.df3 = DataFrame({'A': np.arange(6, dtype='int64'),
+                              'B': (Series([1, 1, 2, 1, 3, 2])
+                                    .astype('category', categories=[3, 2, 1],
+                                            ordered=True))}).set_index('B')
+        self.df4 = DataFrame({'A': np.arange(6, dtype='int64'),
+                              'B': (Series([1, 1, 2, 1, 3, 2])
                                    .astype('category', categories=[3, 2, 1],
                                            ordered=False))}).set_index('B')

     def test_loc_scalar(self):
-
         result = self.df.loc['a']
-        expected = DataFrame({'A' : [0,1,5],
-                              'B' : Series(list('aaa')).astype('category',categories=list('cab')) }).set_index('B')
+        expected = (DataFrame({'A': [0, 1, 5],
+                               'B': (Series(list('aaa'))
+                                     .astype('category',
+                                             categories=list('cab')))})
+                    .set_index('B'))
         assert_frame_equal(result, expected)

-
         df = self.df.copy()
         df.loc['a'] = 20
-        expected = DataFrame({'A' : [20,20,2,3,4,20],
-                              'B' : Series(list('aabbca')).astype('category',categories=list('cab')) }).set_index('B')
+        expected = (DataFrame({'A': [20, 20, 2, 3, 4, 20],
+                               'B': (Series(list('aabbca'))
+                                     .astype('category',
+                                             categories=list('cab')))})
+                    .set_index('B'))
         assert_frame_equal(df, expected)

         # value not in the categories
-        self.assertRaises(KeyError, lambda : df.loc['d'])
+        self.assertRaises(KeyError, lambda: df.loc['d'])

         def f():
             df.loc['d'] = 10
+
         self.assertRaises(TypeError, f)

         def f():
-            df.loc['d','A'] = 10
+            df.loc['d', 'A'] = 10
+
         self.assertRaises(TypeError, f)

         def f():
-            df.loc['d','C'] = 10
+            df.loc['d', 'C'] = 10
+
         self.assertRaises(TypeError, f)

     def test_loc_listlike(self):

         # list of labels
-        result = self.df.loc[['c','a']]
-        expected = self.df.iloc[[4,0,1,5]]
+        result = self.df.loc[['c', 'a']]
+        expected = self.df.iloc[[4, 0, 1, 5]]
         assert_frame_equal(result, expected, check_index_type=True)

-        result = self.df2.loc[['a','b','e']]
-        exp_index = pd.CategoricalIndex(list('aaabbe'), categories=list('cabe'), name='B')
-        expected = DataFrame({'A' : [0,1,5,2,3,np.nan]}, index=exp_index)
+        result = self.df2.loc[['a', 'b', 'e']]
+        exp_index = pd.CategoricalIndex(
+            list('aaabbe'), categories=list('cabe'), name='B')
+        expected = DataFrame({'A': [0, 1, 5, 2, 3, np.nan]}, index=exp_index)
         assert_frame_equal(result, expected, check_index_type=True)

         # element in the categories but not in the values
-        self.assertRaises(KeyError, lambda : self.df2.loc['e'])
+        self.assertRaises(KeyError, lambda: self.df2.loc['e'])

         # assign is ok
         df = self.df2.copy()
         df.loc['e'] = 20
-        result = df.loc[['a','b','e']]
-        exp_index = pd.CategoricalIndex(list('aaabbe'), categories=list('cabe'), name='B')
-        expected = DataFrame({'A' : [0, 1, 5, 2, 3, 20]}, index=exp_index)
+        result = df.loc[['a', 'b', 'e']]
+        exp_index = pd.CategoricalIndex(
+            list('aaabbe'), categories=list('cabe'), name='B')
+        expected = DataFrame({'A': [0, 1, 5, 2, 3, 20]}, index=exp_index)
         assert_frame_equal(result, expected)

         df = self.df2.copy()
-        result = df.loc[['a','b','e']]
-        exp_index = pd.CategoricalIndex(list('aaabbe'), categories=list('cabe'), name='B')
-        expected = DataFrame({'A' : [0, 1, 5, 2, 3, np.nan]}, index=exp_index)
+        result = df.loc[['a', 'b', 'e']]
+        exp_index = pd.CategoricalIndex(
+            list('aaabbe'), categories=list('cabe'), name='B')
+        expected = DataFrame({'A': [0, 1, 5, 2, 3, np.nan]}, index=exp_index)
         assert_frame_equal(result, expected, check_index_type=True)

         # not all labels in the categories
-        self.assertRaises(KeyError, lambda : self.df2.loc[['a','d']])
+        self.assertRaises(KeyError, lambda: self.df2.loc[['a', 'd']])

     def test_loc_listlike_dtypes(self):
         # GH 11586
@@ -5003,15 +5722,21 @@ def test_loc_listlike_dtypes(self):

         # unique slice
         res = df.loc[['a', 'b']]
-        exp = DataFrame({'A': [1, 2], 'B': [4, 5]}, index=pd.CategoricalIndex(['a', 'b']))
+        exp = DataFrame({'A': [1, 2],
+                         'B': [4, 5]}, index=pd.CategoricalIndex(['a', 'b']))
         tm.assert_frame_equal(res, exp, check_index_type=True)

         # duplicated slice
         res = df.loc[['a', 'a', 'b']]
-        exp = DataFrame({'A': [1, 1, 2], 'B': [4, 4, 5]}, index=pd.CategoricalIndex(['a', 'a', 'b']))
+        exp = DataFrame({'A': [1, 1, 2],
+                         'B': [4, 4, 5]},
+                        index=pd.CategoricalIndex(['a', 'a', 'b']))
         tm.assert_frame_equal(res, exp, check_index_type=True)

-        with tm.assertRaisesRegexp(KeyError, 'a list-indexer must only include values that are in the categories'):
+        with tm.assertRaisesRegexp(
+                KeyError,
+                'a list-indexer must only include values that are '
+                'in the categories'):
             df.loc[['a', 'x']]

         # duplicated categories and codes
@@ -5020,38 +5745,53 @@ def test_loc_listlike_dtypes(self):

         # unique slice
         res = df.loc[['a', 'b']]
-        exp = DataFrame({'A': [1, 3, 2], 'B': [4, 6, 5]}, index=pd.CategoricalIndex(['a', 'a', 'b']))
+        exp = DataFrame({'A': [1, 3, 2],
+                         'B': [4, 6, 5]},
+                        index=pd.CategoricalIndex(['a', 'a', 'b']))
         tm.assert_frame_equal(res, exp, check_index_type=True)

         # duplicated slice
         res = df.loc[['a', 'a', 'b']]
-        exp = DataFrame({'A': [1, 3, 1, 3, 2], 'B': [4, 6, 4, 6, 5]}, index=pd.CategoricalIndex(['a', 'a', 'a', 'a', 'b']))
+        exp = DataFrame(
+            {'A': [1, 3, 1, 3, 2],
+             'B': [4, 6, 4, 6, 5
+                   ]}, index=pd.CategoricalIndex(['a', 'a', 'a', 'a', 'b']))
        tm.assert_frame_equal(res, exp, check_index_type=True)

-        with tm.assertRaisesRegexp(KeyError, 'a list-indexer must only include values that are in the categories'):
+        with tm.assertRaisesRegexp(
+                KeyError,
+                'a list-indexer must only include values '
+                'that are in the categories'):
             df.loc[['a', 'x']]

         # contains unused category
-        index = pd.CategoricalIndex(['a', 'b', 'a', 'c'], categories=list('abcde'))
+        index = pd.CategoricalIndex(
+            ['a', 'b', 'a', 'c'], categories=list('abcde'))
         df = DataFrame({'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]}, index=index)

         res = df.loc[['a', 'b']]
-        exp = DataFrame({'A': [1, 3, 2], 'B': [5, 7, 6]},
-                        index=pd.CategoricalIndex(['a', 'a', 'b'], categories=list('abcde')))
+        exp = DataFrame({'A': [1, 3, 2],
+                         'B': [5, 7, 6]}, index=pd.CategoricalIndex(
+                             ['a', 'a', 'b'], categories=list('abcde')))
         tm.assert_frame_equal(res, exp, check_index_type=True)

         res = df.loc[['a', 'e']]
         exp = DataFrame({'A': [1, 3, np.nan], 'B': [5, 7, np.nan]},
-                        index=pd.CategoricalIndex(['a', 'a', 'e'], categories=list('abcde')))
+                        index=pd.CategoricalIndex(['a', 'a', 'e'],
+                                                  categories=list('abcde')))
         tm.assert_frame_equal(res, exp, check_index_type=True)

         # duplicated slice
         res = df.loc[['a', 'a', 'b']]
         exp = DataFrame({'A': [1, 3, 1, 3, 2], 'B': [5, 7, 5, 7, 6]},
-                        index=pd.CategoricalIndex(['a', 'a', 'a', 'a', 'b'], categories=list('abcde')))
+                        index=pd.CategoricalIndex(['a', 'a', 'a', 'a', 'b'],
+                                                  categories=list('abcde')))
         tm.assert_frame_equal(res, exp, check_index_type=True)

-        with tm.assertRaisesRegexp(KeyError, 'a list-indexer must only include values that are in the categories'):
+        with tm.assertRaisesRegexp(
+                KeyError,
+                'a list-indexer must only include values '
+                'that are in the categories'):
             df.loc[['a', 'x']]

     def test_read_only_source(self):
@@ -5063,99 +5803,109 @@ def test_read_only_source(self):
         ro_array.setflags(write=False)
         ro_df = DataFrame(ro_array)

-        assert_frame_equal(rw_df.iloc[[1,2,3]],ro_df.iloc[[1,2,3]])
-        assert_frame_equal(rw_df.iloc[[1]],ro_df.iloc[[1]])
-        assert_series_equal(rw_df.iloc[1],ro_df.iloc[1])
-        assert_frame_equal(rw_df.iloc[1:3],ro_df.iloc[1:3])
+        assert_frame_equal(rw_df.iloc[[1, 2, 3]], ro_df.iloc[[1, 2, 3]])
+        assert_frame_equal(rw_df.iloc[[1]], ro_df.iloc[[1]])
+        assert_series_equal(rw_df.iloc[1], ro_df.iloc[1])
+        assert_frame_equal(rw_df.iloc[1:3], ro_df.iloc[1:3])

-        assert_frame_equal(rw_df.loc[[1,2,3]],ro_df.loc[[1,2,3]])
-        assert_frame_equal(rw_df.loc[[1]],ro_df.loc[[1]])
-        assert_series_equal(rw_df.loc[1],ro_df.loc[1])
-        assert_frame_equal(rw_df.loc[1:3],ro_df.loc[1:3])
+        assert_frame_equal(rw_df.loc[[1, 2, 3]], ro_df.loc[[1, 2, 3]])
+        assert_frame_equal(rw_df.loc[[1]], ro_df.loc[[1]])
+        assert_series_equal(rw_df.loc[1], ro_df.loc[1])
+        assert_frame_equal(rw_df.loc[1:3], ro_df.loc[1:3])

     def test_reindexing(self):

         # reindexing
         # convert to a regular index
-        result = self.df2.reindex(['a','b','e'])
-        expected = DataFrame({'A' : [0,1,5,2,3,np.nan],
-                              'B' : Series(list('aaabbe')) }).set_index('B')
+        result = self.df2.reindex(['a', 'b', 'e'])
+        expected = DataFrame({'A': [0, 1, 5, 2, 3, np.nan],
+                              'B': Series(list('aaabbe'))}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

-        result = self.df2.reindex(['a','b'])
-        expected = DataFrame({'A' : [0,1,5,2,3],
-                              'B' : Series(list('aaabb')) }).set_index('B')
+        result = self.df2.reindex(['a', 'b'])
+        expected = DataFrame({'A': [0, 1, 5, 2, 3],
+                              'B': Series(list('aaabb'))}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

         result = self.df2.reindex(['e'])
-        expected = DataFrame({'A' : [np.nan],
-                              'B' : Series(['e']) }).set_index('B')
+        expected = DataFrame({'A': [np.nan],
+                              'B': Series(['e'])}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

         result = self.df2.reindex(['d'])
-        expected = DataFrame({'A' : [np.nan],
-                              'B' : Series(['d']) }).set_index('B')
+        expected = DataFrame({'A': [np.nan],
+                              'B': Series(['d'])}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

         # since we are actually reindexing with a Categorical
         # then return a Categorical
         cats = list('cabe')

-        result = self.df2.reindex(pd.Categorical(['a','d'],categories=cats))
-        expected = DataFrame({'A' : [0,1,5,np.nan],
-                              'B' : Series(list('aaad')).astype('category',categories=cats) }).set_index('B')
+        result = self.df2.reindex(pd.Categorical(['a', 'd'], categories=cats))
+        expected = DataFrame({'A': [0, 1, 5, np.nan],
+                              'B': Series(list('aaad')).astype(
+                                  'category', categories=cats)}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

-        result = self.df2.reindex(pd.Categorical(['a'],categories=cats))
-        expected = DataFrame({'A' : [0,1,5],
-                              'B' : Series(list('aaa')).astype('category',categories=cats) }).set_index('B')
+        result = self.df2.reindex(pd.Categorical(['a'], categories=cats))
+        expected = DataFrame({'A': [0, 1, 5],
+                              'B': Series(list('aaa')).astype(
+                                  'category', categories=cats)}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

-        result = self.df2.reindex(['a','b','e'])
-        expected = DataFrame({'A' : [0,1,5,2,3,np.nan],
-                              'B' : Series(list('aaabbe')) }).set_index('B')
+        result = self.df2.reindex(['a', 'b', 'e'])
+        expected = DataFrame({'A': [0, 1, 5, 2, 3, np.nan],
+                              'B': Series(list('aaabbe'))}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

-        result = self.df2.reindex(['a','b'])
-        expected = DataFrame({'A' : [0,1,5,2,3],
-                              'B' : Series(list('aaabb')) }).set_index('B')
+        result = self.df2.reindex(['a', 'b'])
+        expected = DataFrame({'A': [0, 1, 5, 2, 3],
+                              'B': Series(list('aaabb'))}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

         result = self.df2.reindex(['e'])
-        expected = DataFrame({'A' : [np.nan],
-                              'B' : Series(['e']) }).set_index('B')
+        expected = DataFrame({'A': [np.nan],
+                              'B': Series(['e'])}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

         # give back the type of categorical that we received
-        result = self.df2.reindex(pd.Categorical(['a','d'],categories=cats,ordered=True))
-        expected = DataFrame({'A' : [0,1,5,np.nan],
-                              'B' : Series(list('aaad')).astype('category',categories=cats,ordered=True) }).set_index('B')
+        result = self.df2.reindex(pd.Categorical(
+            ['a', 'd'], categories=cats, ordered=True))
+        expected = DataFrame(
+            {'A': [0, 1, 5, np.nan],
+             'B': Series(list('aaad')).astype('category', categories=cats,
+                                              ordered=True)}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

-        result = self.df2.reindex(pd.Categorical(['a','d'],categories=['a','d']))
-        expected = DataFrame({'A' : [0,1,5,np.nan],
-                              'B' : Series(list('aaad')).astype('category',categories=['a','d']) }).set_index('B')
+        result = self.df2.reindex(pd.Categorical(
+            ['a', 'd'], categories=['a', 'd']))
+        expected = DataFrame({'A': [0, 1, 5, np.nan],
+                              'B': Series(list('aaad')).astype(
+                                  'category', categories=['a', 'd'
+                                                          ])}).set_index('B')
         assert_frame_equal(result, expected, check_index_type=True)

         # passed duplicate indexers are not allowed
-        self.assertRaises(ValueError, lambda : self.df2.reindex(['a','a']))
+        self.assertRaises(ValueError, lambda: self.df2.reindex(['a', 'a']))

         # args NotImplemented ATM
-        self.assertRaises(NotImplementedError, lambda : self.df2.reindex(['a'],method='ffill'))
-        self.assertRaises(NotImplementedError, lambda : self.df2.reindex(['a'],level=1))
-        self.assertRaises(NotImplementedError, lambda : self.df2.reindex(['a'],limit=2))
+        self.assertRaises(NotImplementedError,
+                          lambda: self.df2.reindex(['a'], method='ffill'))
+        self.assertRaises(NotImplementedError,
+                          lambda: self.df2.reindex(['a'], level=1))
+        self.assertRaises(NotImplementedError,
+                          lambda: self.df2.reindex(['a'], limit=2))

     def test_loc_slice(self):
-
         # slicing
         # not implemented ATM
         # GH9748
-        self.assertRaises(TypeError, lambda : self.df.loc[1:5])
+        self.assertRaises(TypeError, lambda: self.df.loc[1:5])

-        #result = df.loc[1:5]
-        #expected = df.iloc[[1,2,3,4]]
-        #assert_frame_equal(result, expected)
+        # result = df.loc[1:5]
+        # expected = df.iloc[[1,2,3,4]]
+        # assert_frame_equal(result, expected)

     def test_boolean_selection(self):
@@ -5164,19 +5914,19 @@ def test_boolean_selection(self):

         result = df3[df3.index == 'a']
         expected = df3.iloc[[]]
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)

         result = df4[df4.index == 'a']
         expected = df4.iloc[[]]
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)

         result = df3[df3.index == 1]
-        expected = df3.iloc[[0,1,3]]
-        assert_frame_equal(result,expected)
+        expected = df3.iloc[[0, 1, 3]]
+        assert_frame_equal(result, expected)

         result = df4[df4.index == 1]
-        expected = df4.iloc[[0,1,3]]
-        assert_frame_equal(result,expected)
+        expected = df4.iloc[[0, 1, 3]]
+        assert_frame_equal(result, expected)

         # since we have an ordered categorical
@@ -5186,11 +5936,11 @@ def test_boolean_selection(self):
         #         name=u'B')
         result = df3[df3.index < 2]
         expected = df3.iloc[[4]]
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)

         result = df3[df3.index > 1]
         expected = df3.iloc[[]]
-        assert_frame_equal(result,expected)
+        assert_frame_equal(result, expected)

         # unordered
         # cannot be compared
@@ -5199,8 +5949,9 @@ def test_boolean_selection(self):
         #         categories=[3, 2, 1],
         #         ordered=False,
         #         name=u'B')
-        self.assertRaises(TypeError, lambda : df4[df4.index < 2])
-        self.assertRaises(TypeError, lambda : df4[df4.index > 1])
+        self.assertRaises(TypeError, lambda: df4[df4.index < 2])
+        self.assertRaises(TypeError, lambda: df4[df4.index > 1])
+

 class TestSeriesNoneCoercion(tm.TestCase):
     EXPECTED_RESULTS = [
@@ -5224,9 +5975,9 @@ def test_coercion_with_setitem(self):
             expected_series = Series(expected_result)

             assert_attr_equal('dtype', start_series, expected_series)
-            tm.assert_numpy_array_equal(
-                start_series.values,
-                expected_series.values, strict_nan=True)
+            tm.assert_numpy_array_equal(start_series.values,
+                                        expected_series.values,
+                                        strict_nan=True)

     def test_coercion_with_loc_setitem(self):
         for start_data, expected_result in self.EXPECTED_RESULTS:
@@ -5236,9 +5987,9 @@ def test_coercion_with_loc_setitem(self):
            expected_series = Series(expected_result)

             assert_attr_equal('dtype', start_series, expected_series)
-            tm.assert_numpy_array_equal(
-                start_series.values,
-                expected_series.values, strict_nan=True)
+            tm.assert_numpy_array_equal(start_series.values,
+                                        expected_series.values,
+                                        strict_nan=True)

     def test_coercion_with_setitem_and_series(self):
         for start_data, expected_result in self.EXPECTED_RESULTS:
@@ -5248,9 +5999,9 @@ def test_coercion_with_setitem_and_series(self):
             expected_series = Series(expected_result)

             assert_attr_equal('dtype', start_series, expected_series)
-            tm.assert_numpy_array_equal(
-                start_series.values,
-                expected_series.values, strict_nan=True)
+            tm.assert_numpy_array_equal(start_series.values,
+                                        expected_series.values,
+                                        strict_nan=True)

     def test_coercion_with_loc_and_series(self):
         for start_data, expected_result in self.EXPECTED_RESULTS:
@@ -5260,9 +6011,9 @@ def test_coercion_with_loc_and_series(self):
             expected_series = Series(expected_result)

             assert_attr_equal('dtype', start_series, expected_series)
-            tm.assert_numpy_array_equal(
-                start_series.values,
-                expected_series.values, strict_nan=True)
+            tm.assert_numpy_array_equal(start_series.values,
+                                        expected_series.values,
+                                        strict_nan=True)


 class TestDataframeNoneCoercion(tm.TestCase):
@@ -5286,54 +6037,63 @@ def test_coercion_with_loc(self):

             expected_dataframe = DataFrame({'foo': expected_result})

-            assert_attr_equal('dtype', start_dataframe['foo'], expected_dataframe['foo'])
-            tm.assert_numpy_array_equal(
-                start_dataframe['foo'].values,
-                expected_dataframe['foo'].values, strict_nan=True)
+            assert_attr_equal('dtype', start_dataframe['foo'],
+                              expected_dataframe['foo'])
+            tm.assert_numpy_array_equal(start_dataframe['foo'].values,
+                                        expected_dataframe['foo'].values,
+                                        strict_nan=True)

     def
test_coercion_with_setitem_and_dataframe(self): for start_data, expected_result, in self.EXPECTED_SINGLE_ROW_RESULTS: start_dataframe = DataFrame({'foo': start_data}) - start_dataframe[start_dataframe['foo'] == start_dataframe['foo'][0]] = None + start_dataframe[start_dataframe['foo'] == start_dataframe['foo'][ + 0]] = None expected_dataframe = DataFrame({'foo': expected_result}) - assert_attr_equal('dtype', start_dataframe['foo'], expected_dataframe['foo']) - tm.assert_numpy_array_equal( - start_dataframe['foo'].values, - expected_dataframe['foo'].values, strict_nan=True) + assert_attr_equal('dtype', start_dataframe['foo'], + expected_dataframe['foo']) + tm.assert_numpy_array_equal(start_dataframe['foo'].values, + expected_dataframe['foo'].values, + strict_nan=True) def test_none_coercion_loc_and_dataframe(self): for start_data, expected_result, in self.EXPECTED_SINGLE_ROW_RESULTS: start_dataframe = DataFrame({'foo': start_data}) - start_dataframe.loc[start_dataframe['foo'] == start_dataframe['foo'][0]] = None + start_dataframe.loc[start_dataframe['foo'] == start_dataframe[ + 'foo'][0]] = None expected_dataframe = DataFrame({'foo': expected_result}) - assert_attr_equal('dtype', start_dataframe['foo'], expected_dataframe['foo']) - tm.assert_numpy_array_equal( - start_dataframe['foo'].values, - expected_dataframe['foo'].values, strict_nan=True) + assert_attr_equal('dtype', start_dataframe['foo'], + expected_dataframe['foo']) + tm.assert_numpy_array_equal(start_dataframe['foo'].values, + expected_dataframe['foo'].values, + strict_nan=True) def test_none_coercion_mixed_dtypes(self): start_dataframe = DataFrame({ 'a': [1, 2, 3], 'b': [1.0, 2.0, 3.0], - 'c': [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)], - 'd': ['a', 'b', 'c']}) + 'c': [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, + 3)], + 'd': ['a', 'b', 'c'] + }) start_dataframe.iloc[0] = None expected_dataframe = DataFrame({ 'a': [np.nan, 2, 3], 'b': [np.nan, 2.0, 3.0], 'c': 
[NaT, datetime(2000, 1, 2), datetime(2000, 1, 3)], - 'd': [None, 'b', 'c']}) + 'd': [None, 'b', 'c'] + }) for column in expected_dataframe.columns: - assert_attr_equal('dtype', start_dataframe[column], expected_dataframe[column]) - tm.assert_numpy_array_equal( - start_dataframe[column].values, - expected_dataframe[column].values, strict_nan=True) + assert_attr_equal('dtype', start_dataframe[column], + expected_dataframe[column]) + tm.assert_numpy_array_equal(start_dataframe[column].values, + expected_dataframe[column].values, + strict_nan=True) if __name__ == '__main__': diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py index 23e8aad01bf52..69e05e1f4e7ca 100644 --- a/pandas/tests/test_internals.py +++ b/pandas/tests/test_internals.py @@ -8,31 +8,30 @@ import re import itertools -from pandas import Index, MultiIndex, DataFrame, DatetimeIndex, Series, Categorical +from pandas import (Index, MultiIndex, DataFrame, DatetimeIndex, + Series, Categorical) from pandas.compat import OrderedDict, lrange from pandas.sparse.array import SparseArray -from pandas.core.internals import (BlockPlacement, SingleBlockManager, make_block, - BlockManager) +from pandas.core.internals import (BlockPlacement, SingleBlockManager, + make_block, BlockManager) import pandas.core.common as com -import pandas.core.internals as internals import pandas.util.testing as tm import pandas as pd -from pandas.util.testing import ( - assert_almost_equal, assert_frame_equal, randn, assert_series_equal) +from pandas.util.testing import (assert_almost_equal, assert_frame_equal, + randn, assert_series_equal) from pandas.compat import zip, u def assert_block_equal(left, right): assert_almost_equal(left.values, right.values) - assert(left.dtype == right.dtype) + assert (left.dtype == right.dtype) assert_almost_equal(left.mgr_locs, right.mgr_locs) def get_numeric_mat(shape): arr = np.arange(shape[0]) - return np.lib.stride_tricks.as_strided( - x=arr, shape=shape, - 
strides=(arr.itemsize,) + (0,) * (len(shape) - 1)).copy() + return np.lib.stride_tricks.as_strided(x=arr, shape=shape, strides=( + arr.itemsize, ) + (0, ) * (len(shape) - 1)).copy() N = 10 @@ -59,14 +58,13 @@ def create_block(typestr, placement, item_shape=None, num_offset=0): num_items = len(placement) if item_shape is None: - item_shape = (N,) + item_shape = (N, ) - shape = (num_items,) + item_shape + shape = (num_items, ) + item_shape mat = get_numeric_mat(shape) - if typestr in ('float', 'f8', 'f4', 'f2', - 'int', 'i8', 'i4', 'i2', 'i1', + if typestr in ('float', 'f8', 'f4', 'f2', 'int', 'i8', 'i4', 'i2', 'i1', 'uint', 'u8', 'u4', 'u2', 'u1'): values = mat.astype(typestr) + num_offset elif typestr in ('complex', 'c16', 'c8'): @@ -74,7 +72,7 @@ def create_block(typestr, placement, item_shape=None, num_offset=0): elif typestr in ('object', 'string', 'O'): values = np.reshape(['A%d' % i for i in mat.ravel() + num_offset], shape) - elif typestr in ('b','bool',): + elif typestr in ('b', 'bool', ): values = np.ones(shape, dtype=np.bool_) elif typestr in ('datetime', 'dt', 'M8[ns]'): values = (mat * 1e9).astype('M8[ns]') @@ -87,10 +85,11 @@ def create_block(typestr, placement, item_shape=None, num_offset=0): values = DatetimeIndex(np.arange(N) * 1e9, tz=tz) elif typestr in ('timedelta', 'td', 'm8[ns]'): values = (mat * 1).astype('m8[ns]') - elif typestr in ('category',): - values = Categorical([1,1,2,2,3,3,3,3,4,4]) - elif typestr in ('category2',): - values = Categorical(['a','a','a','a','b','b','c','c','c','d']) + elif typestr in ('category', ): + values = Categorical([1, 1, 2, 2, 3, 3, 3, 3, 4, 4]) + elif typestr in ('category2', ): + values = Categorical(['a', 'a', 'a', 'a', 'b', 'b', 'c', 'c', 'c', 'd' + ]) elif typestr in ('sparse', 'sparse_na'): # FIXME: doesn't support num_rows != 10 assert shape[-1] == 10 @@ -140,7 +139,7 @@ def create_mgr(descr, item_shape=None): """ if item_shape is None: - item_shape = (N,) + item_shape = (N, ) offset = 0 mgr_items = [] @@ 
-167,15 +166,16 @@ def create_mgr(descr, item_shape=None): num_offset = 0 for blockstr, placement in block_placements.items(): typestr = blockstr.split('-')[0] - blocks.append(create_block(typestr, placement, item_shape=item_shape, - num_offset=num_offset,)) + blocks.append(create_block(typestr, + placement, + item_shape=item_shape, + num_offset=num_offset, )) num_offset += len(placement) return BlockManager(sorted(blocks, key=lambda b: b.mgr_locs[0]), [mgr_items] + [np.arange(n) for n in item_shape]) - class TestBlock(tm.TestCase): _multiprocess_can_split_ = True @@ -198,7 +198,6 @@ def test_constructor(self): self.assertEqual(int32block.dtype, np.int32) def test_pickle(self): - def _check(blk): assert_block_equal(self.round_trip_pickle(blk), blk) @@ -221,10 +220,8 @@ def test_merge(self): ref_cols = Index(['e', 'a', 'b', 'd', 'f']) - ablock = make_block(avals, - ref_cols.get_indexer(['e', 'b'])) - bblock = make_block(bvals, - ref_cols.get_indexer(['a', 'd'])) + ablock = make_block(avals, ref_cols.get_indexer(['e', 'b'])) + bblock = make_block(bvals, ref_cols.get_indexer(['a', 'd'])) merged = ablock.merge(bblock) assert_almost_equal(merged.mgr_locs, [0, 1, 2, 3]) assert_almost_equal(merged.values[[0, 2]], avals) @@ -284,21 +281,9 @@ def test_split_block_at(self): self.assertEqual(len(bs), 1) self.assertTrue(np.array_equal(bs[0].items, ['a', 'c'])) - bblock = get_bool_ex(['f']) - bs = list(bblock.split_block_at('f')) - self.assertEqual(len(bs), 0) - - def test_get(self): - pass - - def test_set(self): - pass - - def test_fillna(self): - pass - - def test_repr(self): - pass + # bblock = get_bool_ex(['f']) + # bs = list(bblock.split_block_at('f')) + # self.assertEqual(len(bs), 0) class TestDatetimeBlock(tm.TestCase): @@ -312,8 +297,7 @@ def test_try_coerce_arg(self): self.assertTrue(pd.Timestamp(none_coerced) is pd.NaT) # coerce different types of date bojects - vals = (np.datetime64('2010-10-10'), - datetime(2010, 10, 10), + vals = (np.datetime64('2010-10-10'), 
datetime(2010, 10, 10), date(2010, 10, 10)) for val in vals: coerced = block._try_coerce_args(block.values, val)[2] @@ -325,9 +309,10 @@ class TestBlockManager(tm.TestCase): _multiprocess_can_split_ = True def setUp(self): - self.mgr = create_mgr('a: f8; b: object; c: f8; d: object; e: f8;' - 'f: bool; g: i8; h: complex; i: datetime-1; j: datetime-2;' - 'k: M8[ns, US/Eastern]; l: M8[ns, CET];') + self.mgr = create_mgr( + 'a: f8; b: object; c: f8; d: object; e: f8;' + 'f: bool; g: i8; h: complex; i: datetime-1; j: datetime-2;' + 'k: M8[ns, US/Eastern]; l: M8[ns, CET];') def test_constructor_corner(self): pass @@ -352,8 +337,8 @@ def test_is_indexed_like(self): self.assertTrue(mgr1._is_indexed_like(mgr2)) self.assertTrue(mgr1._is_indexed_like(mgr3)) - self.assertFalse(mgr1._is_indexed_like( - mgr1.get_slice(slice(-1), axis=1))) + self.assertFalse(mgr1._is_indexed_like(mgr1.get_slice( + slice(-1), axis=1))) def test_duplicate_ref_loc_failure(self): tmp_mgr = create_mgr('a:bool; a: f8') @@ -421,8 +406,7 @@ def test_get_scalar(self): def test_get(self): cols = Index(list('abc')) values = np.random.rand(3, 3) - block = make_block(values=values.copy(), - placement=np.arange(3)) + block = make_block(values=values.copy(), placement=np.arange(3)) mgr = BlockManager(blocks=[block], axes=[cols, np.arange(3)]) assert_almost_equal(mgr.get('a', fastpath=False), values[0]) @@ -433,7 +417,7 @@ def test_get(self): assert_almost_equal(mgr.get('c').internal_values(), values[2]) def test_set(self): - mgr = create_mgr('a,b,c: int', item_shape=(3,)) + mgr = create_mgr('a,b,c: int', item_shape=(3, )) mgr.set('d', np.array(['foo'] * 3)) mgr.set('b', np.array(['bar'] * 3)) @@ -467,16 +451,17 @@ def test_set_change_dtype(self): mgr2.set('quux', randn(N)) self.assertEqual(mgr2.get('quux').dtype, np.float_) - def test_set_change_dtype_slice(self): # GH8850 - cols = MultiIndex.from_tuples([('1st','a'), ('2nd','b'), ('3rd','c')]) + def test_set_change_dtype_slice(self): # GH8850 + cols = 
MultiIndex.from_tuples([('1st', 'a'), ('2nd', 'b'), ('3rd', 'c') + ]) df = DataFrame([[1.0, 2, 3], [4.0, 5, 6]], columns=cols) df['2nd'] = df['2nd'] * 2.0 self.assertEqual(sorted(df.blocks.keys()), ['float64', 'int64']) - assert_frame_equal(df.blocks['float64'], - DataFrame([[1.0, 4.0], [4.0, 10.0]], columns=cols[:2])) - assert_frame_equal(df.blocks['int64'], - DataFrame([[3], [6]], columns=cols[2:])) + assert_frame_equal(df.blocks['float64'], DataFrame( + [[1.0, 4.0], [4.0, 10.0]], columns=cols[:2])) + assert_frame_equal(df.blocks['int64'], DataFrame( + [[3], [6]], columns=cols[2:])) def test_copy(self): cp = self.mgr.copy(deep=False) @@ -489,14 +474,14 @@ def test_copy(self): cp = self.mgr.copy(deep=True) for blk, cp_blk in zip(self.mgr.blocks, cp.blocks): - # copy assertion - # we either have a None for a base or in case of some blocks it is an array (e.g. datetimetz), - # but was copied + # copy assertion we either have a None for a base or in case of + # some blocks it is an array (e.g. 
datetimetz), but was copied self.assertTrue(cp_blk.equals(blk)) if cp_blk.values.base is not None and blk.values.base is not None: self.assertFalse(cp_blk.values.base is blk.values.base) else: - self.assertTrue(cp_blk.values.base is None and blk.values.base is None) + self.assertTrue(cp_blk.values.base is None and blk.values.base + is None) def test_sparse(self): mgr = create_mgr('a: sparse-1; b: sparse-2') @@ -592,11 +577,11 @@ def _compare(old_mgr, new_mgr): # noops mgr = create_mgr('f: i8; g: f8') new_mgr = mgr.convert() - _compare(mgr,new_mgr) + _compare(mgr, new_mgr) mgr = create_mgr('a, b: object; f: i8; g: f8') new_mgr = mgr.convert() - _compare(mgr,new_mgr) + _compare(mgr, new_mgr) # convert mgr = create_mgr('a,b,foo: object; f: i8; g: f8') @@ -628,53 +613,53 @@ def _compare(old_mgr, new_mgr): def test_interleave(self): - # self - for dtype in ['f8','i8','object','bool','complex','M8[ns]','m8[ns]']: + for dtype in ['f8', 'i8', 'object', 'bool', 'complex', 'M8[ns]', + 'm8[ns]']: mgr = create_mgr('a: {0}'.format(dtype)) - self.assertEqual(mgr.as_matrix().dtype,dtype) + self.assertEqual(mgr.as_matrix().dtype, dtype) mgr = create_mgr('a: {0}; b: {0}'.format(dtype)) - self.assertEqual(mgr.as_matrix().dtype,dtype) + self.assertEqual(mgr.as_matrix().dtype, dtype) # will be converted according the actual dtype of the underlying mgr = create_mgr('a: category') - self.assertEqual(mgr.as_matrix().dtype,'i8') + self.assertEqual(mgr.as_matrix().dtype, 'i8') mgr = create_mgr('a: category; b: category') - self.assertEqual(mgr.as_matrix().dtype,'i8'), + self.assertEqual(mgr.as_matrix().dtype, 'i8'), mgr = create_mgr('a: category; b: category2') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: category2') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: category2; b: category2') - 
self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') # combinations mgr = create_mgr('a: f8') - self.assertEqual(mgr.as_matrix().dtype,'f8') + self.assertEqual(mgr.as_matrix().dtype, 'f8') mgr = create_mgr('a: f8; b: i8') - self.assertEqual(mgr.as_matrix().dtype,'f8') + self.assertEqual(mgr.as_matrix().dtype, 'f8') mgr = create_mgr('a: f4; b: i8') - self.assertEqual(mgr.as_matrix().dtype,'f4') + self.assertEqual(mgr.as_matrix().dtype, 'f4') mgr = create_mgr('a: f4; b: i8; d: object') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: bool; b: i8') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: complex') - self.assertEqual(mgr.as_matrix().dtype,'complex') + self.assertEqual(mgr.as_matrix().dtype, 'complex') mgr = create_mgr('a: f8; b: category') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: M8[ns]; b: category') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: M8[ns]; b: bool') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: M8[ns]; b: i8') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: m8[ns]; b: bool') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: m8[ns]; b: i8') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') mgr = create_mgr('a: M8[ns]; b: m8[ns]') - self.assertEqual(mgr.as_matrix().dtype,'object') + self.assertEqual(mgr.as_matrix().dtype, 'object') def test_interleave_non_unique_cols(self): df = DataFrame([ @@ -718,20 +703,32 @@ def 
test_reindex_items(self): reindexed = mgr.reindex_axis(['g', 'c', 'a', 'd'], axis=0) self.assertEqual(reindexed.nblocks, 2) assert_almost_equal(reindexed.items, ['g', 'c', 'a', 'd']) - assert_almost_equal(mgr.get('g',fastpath=False), reindexed.get('g',fastpath=False)) - assert_almost_equal(mgr.get('c',fastpath=False), reindexed.get('c',fastpath=False)) - assert_almost_equal(mgr.get('a',fastpath=False), reindexed.get('a',fastpath=False)) - assert_almost_equal(mgr.get('d',fastpath=False), reindexed.get('d',fastpath=False)) - assert_almost_equal(mgr.get('g').internal_values(), reindexed.get('g').internal_values()) - assert_almost_equal(mgr.get('c').internal_values(), reindexed.get('c').internal_values()) - assert_almost_equal(mgr.get('a').internal_values(), reindexed.get('a').internal_values()) - assert_almost_equal(mgr.get('d').internal_values(), reindexed.get('d').internal_values()) + assert_almost_equal( + mgr.get('g', fastpath=False), reindexed.get('g', fastpath=False)) + assert_almost_equal( + mgr.get('c', fastpath=False), reindexed.get('c', fastpath=False)) + assert_almost_equal( + mgr.get('a', fastpath=False), reindexed.get('a', fastpath=False)) + assert_almost_equal( + mgr.get('d', fastpath=False), reindexed.get('d', fastpath=False)) + assert_almost_equal( + mgr.get('g').internal_values(), + reindexed.get('g').internal_values()) + assert_almost_equal( + mgr.get('c').internal_values(), + reindexed.get('c').internal_values()) + assert_almost_equal( + mgr.get('a').internal_values(), + reindexed.get('a').internal_values()) + assert_almost_equal( + mgr.get('d').internal_values(), + reindexed.get('d').internal_values()) def test_multiindex_xs(self): mgr = create_mgr('a,b,c: f8; d,e,f: i8') - index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], - ['one', 'two', 'three']], + index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two', + 'three']], labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], names=['first', 'second']) @@ 
-745,48 +742,63 @@ def test_multiindex_xs(self): def test_get_numeric_data(self): mgr = create_mgr('int: int; float: float; complex: complex;' 'str: object; bool: bool; obj: object; dt: datetime', - item_shape=(3,)) + item_shape=(3, )) mgr.set('obj', np.array([1, 2, 3], dtype=np.object_)) numeric = mgr.get_numeric_data() assert_almost_equal(numeric.items, ['int', 'float', 'complex', 'bool']) - assert_almost_equal(mgr.get('float',fastpath=False), numeric.get('float',fastpath=False)) - assert_almost_equal(mgr.get('float').internal_values(), numeric.get('float').internal_values()) + assert_almost_equal( + mgr.get('float', fastpath=False), numeric.get('float', + fastpath=False)) + assert_almost_equal( + mgr.get('float').internal_values(), + numeric.get('float').internal_values()) # Check sharing numeric.set('float', np.array([100., 200., 300.])) - assert_almost_equal(mgr.get('float',fastpath=False), np.array([100., 200., 300.])) - assert_almost_equal(mgr.get('float').internal_values(), np.array([100., 200., 300.])) + assert_almost_equal( + mgr.get('float', fastpath=False), np.array([100., 200., 300.])) + assert_almost_equal( + mgr.get('float').internal_values(), np.array([100., 200., 300.])) numeric2 = mgr.get_numeric_data(copy=True) assert_almost_equal(numeric.items, ['int', 'float', 'complex', 'bool']) numeric2.set('float', np.array([1000., 2000., 3000.])) - assert_almost_equal(mgr.get('float',fastpath=False), np.array([100., 200., 300.])) - assert_almost_equal(mgr.get('float').internal_values(), np.array([100., 200., 300.])) + assert_almost_equal( + mgr.get('float', fastpath=False), np.array([100., 200., 300.])) + assert_almost_equal( + mgr.get('float').internal_values(), np.array([100., 200., 300.])) def test_get_bool_data(self): mgr = create_mgr('int: int; float: float; complex: complex;' 'str: object; bool: bool; obj: object; dt: datetime', - item_shape=(3,)) + item_shape=(3, )) mgr.set('obj', np.array([True, False, True], dtype=np.object_)) bools = 
mgr.get_bool_data() assert_almost_equal(bools.items, ['bool']) - assert_almost_equal(mgr.get('bool',fastpath=False), bools.get('bool',fastpath=False)) - assert_almost_equal(mgr.get('bool').internal_values(), bools.get('bool').internal_values()) + assert_almost_equal( + mgr.get('bool', fastpath=False), bools.get('bool', fastpath=False)) + assert_almost_equal( + mgr.get('bool').internal_values(), + bools.get('bool').internal_values()) bools.set('bool', np.array([True, False, True])) - assert_almost_equal(mgr.get('bool',fastpath=False), [True, False, True]) - assert_almost_equal(mgr.get('bool').internal_values(), [True, False, True]) + assert_almost_equal( + mgr.get('bool', fastpath=False), [True, False, True]) + assert_almost_equal( + mgr.get('bool').internal_values(), [True, False, True]) # Check sharing bools2 = mgr.get_bool_data(copy=True) bools2.set('bool', np.array([False, True, False])) - assert_almost_equal(mgr.get('bool',fastpath=False), [True, False, True]) - assert_almost_equal(mgr.get('bool').internal_values(), [True, False, True]) + assert_almost_equal( + mgr.get('bool', fastpath=False), [True, False, True]) + assert_almost_equal( + mgr.get('bool').internal_values(), [True, False, True]) def test_unicode_repr_doesnt_raise(self): - str_repr = repr(create_mgr(u('b,\u05d0: object'))) + repr(create_mgr(u('b,\u05d0: object'))) def test_missing_unicode_key(self): df = DataFrame({"a": [1]}) @@ -809,12 +821,12 @@ def test_equals_block_order_different_dtypes(self): # GH 9330 mgr_strings = [ - "a:i8;b:f8", # basic case - "a:i8;b:f8;c:c8;d:b", # many types - "a:i8;e:dt;f:td;g:string", # more types - "a:i8;b:category;c:category2;d:category2", # categories - "c:sparse;d:sparse_na;b:f8", # sparse - ] + "a:i8;b:f8", # basic case + "a:i8;b:f8;c:c8;d:b", # many types + "a:i8;e:dt;f:td;g:string", # more types + "a:i8;b:category;c:category2;d:category2", # categories + "c:sparse;d:sparse_na;b:f8", # sparse + ] for mgr_string in mgr_strings: bm = create_mgr(mgr_string) @@ 
-841,7 +853,7 @@ class TestIndexing(object): MANAGERS = [ create_single_mgr('f8', N), create_single_mgr('i8', N), - #create_single_mgr('sparse', N), + # create_single_mgr('sparse', N), create_single_mgr('sparse_na', N), # 2-dim @@ -849,7 +861,7 @@ class TestIndexing(object): create_mgr('a,b,c,d,e,f: i8', item_shape=(N,)), create_mgr('a,b: f8; c,d: i8; e,f: string', item_shape=(N,)), create_mgr('a,b: f8; c,d: i8; e,f: f8', item_shape=(N,)), - #create_mgr('a: sparse', item_shape=(N,)), + # create_mgr('a: sparse', item_shape=(N,)), create_mgr('a: sparse_na', item_shape=(N,)), # 3-dim @@ -872,9 +884,10 @@ def assert_slice_ok(mgr, axis, slobj): if isinstance(slobj, np.ndarray): ax = mgr.axes[axis] if len(ax) and len(slobj) and len(slobj) != len(ax): - slobj = np.concatenate([slobj, np.zeros(len(ax)-len(slobj),dtype=bool)]) + slobj = np.concatenate([slobj, np.zeros( + len(ax) - len(slobj), dtype=bool)]) sliced = mgr.get_slice(slobj, axis=axis) - mat_slobj = (slice(None),) * axis + (slobj,) + mat_slobj = (slice(None), ) * axis + (slobj, ) assert_almost_equal(mat[mat_slobj], sliced.as_matrix()) assert_almost_equal(mgr.axes[axis][slobj], sliced.axes[axis]) @@ -897,8 +910,8 @@ def assert_slice_ok(mgr, axis, slobj): if mgr.shape[ax] >= 3: yield (assert_slice_ok, mgr, ax, np.arange(mgr.shape[ax]) % 3 == 0) - yield (assert_slice_ok, mgr, ax, - np.array([True, True, False], dtype=np.bool_)) + yield (assert_slice_ok, mgr, ax, np.array( + [True, True, False], dtype=np.bool_)) # fancy indexer yield assert_slice_ok, mgr, ax, [] @@ -912,10 +925,8 @@ def test_take(self): def assert_take_ok(mgr, axis, indexer): mat = mgr.as_matrix() taken = mgr.take(indexer, axis) - assert_almost_equal(np.take(mat, indexer, axis), - taken.as_matrix()) - assert_almost_equal(mgr.axes[axis].take(indexer), - taken.axes[axis]) + assert_almost_equal(np.take(mat, indexer, axis), taken.as_matrix()) + assert_almost_equal(mgr.axes[axis].take(indexer), taken.axes[axis]) for mgr in self.MANAGERS: for ax in 
range(mgr.ndim): @@ -929,8 +940,7 @@ def assert_take_ok(mgr, axis, indexer): yield assert_take_ok, mgr, ax, [-1, -2, -3] def test_reindex_axis(self): - def assert_reindex_axis_is_ok(mgr, axis, new_labels, - fill_value): + def assert_reindex_axis_is_ok(mgr, axis, new_labels, fill_value): mat = mgr.as_matrix() indexer = mgr.axes[axis].get_indexer_for(new_labels) @@ -945,8 +955,8 @@ def assert_reindex_axis_is_ok(mgr, axis, new_labels, for ax in range(mgr.ndim): for fill_value in (None, np.nan, 100.): yield assert_reindex_axis_is_ok, mgr, ax, [], fill_value - yield (assert_reindex_axis_is_ok, mgr, ax, - mgr.axes[ax], fill_value) + yield (assert_reindex_axis_is_ok, mgr, ax, mgr.axes[ax], + fill_value) yield (assert_reindex_axis_is_ok, mgr, ax, mgr.axes[ax][[0, 0, 0]], fill_value) yield (assert_reindex_axis_is_ok, mgr, ax, @@ -976,20 +986,18 @@ def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer, for mgr in self.MANAGERS: for ax in range(mgr.ndim): for fill_value in (None, np.nan, 100.): - yield (assert_reindex_indexer_is_ok, mgr, ax, - [], [], fill_value) - yield (assert_reindex_indexer_is_ok, mgr, ax, - mgr.axes[ax], np.arange(mgr.shape[ax]), fill_value) - yield (assert_reindex_indexer_is_ok, mgr, ax, - ['foo'] * mgr.shape[ax], np.arange(mgr.shape[ax]), + yield (assert_reindex_indexer_is_ok, mgr, ax, [], [], fill_value) + yield (assert_reindex_indexer_is_ok, mgr, ax, mgr.axes[ax], + np.arange(mgr.shape[ax]), fill_value) + yield (assert_reindex_indexer_is_ok, mgr, ax, ['foo'] * + mgr.shape[ax], np.arange(mgr.shape[ax]), fill_value) yield (assert_reindex_indexer_is_ok, mgr, ax, mgr.axes[ax][::-1], np.arange(mgr.shape[ax]), fill_value) - yield (assert_reindex_indexer_is_ok, mgr, ax, - mgr.axes[ax], np.arange(mgr.shape[ax])[::-1], - fill_value) + yield (assert_reindex_indexer_is_ok, mgr, ax, mgr.axes[ax], + np.arange(mgr.shape[ax])[::-1], fill_value) yield (assert_reindex_indexer_is_ok, mgr, ax, ['foo', 'bar', 'baz'], [0, 0, 0], fill_value) yield 
(assert_reindex_indexer_is_ok, mgr, ax, @@ -1002,7 +1010,6 @@ def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer, yield (assert_reindex_indexer_is_ok, mgr, ax, ['foo', 'bar', 'baz'], [0, 1, 2], fill_value) - # test_get_slice(slice_like, axis) # take(indexer, axis) # reindex_axis(new_labels, axis) @@ -1092,7 +1099,7 @@ def test_slice_iter(self): self.assertEqual(list(BlockPlacement(slice(3, 0, -1))), [3, 2, 1]) self.assertEqual(list(BlockPlacement(slice(3, None, -1))), - [3, 2, 1, 0]) + [3, 2, 1, 0]) def test_slice_to_array_conversion(self): def assert_as_array_equals(slc, asarray): @@ -1111,15 +1118,12 @@ def assert_as_array_equals(slc, asarray): def test_blockplacement_add(self): bpl = BlockPlacement(slice(0, 5)) self.assertEqual(bpl.add(1).as_slice, slice(1, 6, 1)) - self.assertEqual(bpl.add(np.arange(5)).as_slice, - slice(0, 10, 2)) - self.assertEqual(list(bpl.add(np.arange(5, 0, -1))), - [5, 5, 5, 5, 5]) + self.assertEqual(bpl.add(np.arange(5)).as_slice, slice(0, 10, 2)) + self.assertEqual(list(bpl.add(np.arange(5, 0, -1))), [5, 5, 5, 5, 5]) def test_blockplacement_add_int(self): def assert_add_equals(val, inc, result): - self.assertEqual(list(BlockPlacement(val).add(inc)), - result) + self.assertEqual(list(BlockPlacement(val).add(inc)), result) assert_add_equals(slice(0, 0), 0, []) assert_add_equals(slice(1, 4), 0, [1, 2, 3]) @@ -1145,13 +1149,7 @@ def assert_add_equals(val, inc, result): self.assertRaises(ValueError, lambda: BlockPlacement(slice(2, None, -1)).add(-1)) - # def test_blockplacement_array_add(self): - - # assert_add_equals(slice(0, 2), [0, 1, 1], [0, 2, 3]) - # assert_add_equals(slice(2, None, -1), [1, 1, 0], [3, 2, 0]) - if __name__ == '__main__': - import nose nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py index a24f71482c404..fc0030718f2c9 100644 --- a/pandas/tests/test_lib.py +++ b/pandas/tests/test_lib.py @@ -53,16 +53,19 @@ def 
test_maybe_indices_to_slice_left_edge(self): indices = np.arange(0, end, step, dtype=np.int64) maybe_slice = lib.maybe_indices_to_slice(indices, len(target)) self.assertTrue(isinstance(maybe_slice, slice)) - self.assert_numpy_array_equal(target[indices], target[maybe_slice]) + self.assert_numpy_array_equal(target[indices], + target[maybe_slice]) # reverse indices = indices[::-1] maybe_slice = lib.maybe_indices_to_slice(indices, len(target)) self.assertTrue(isinstance(maybe_slice, slice)) - self.assert_numpy_array_equal(target[indices], target[maybe_slice]) + self.assert_numpy_array_equal(target[indices], + target[maybe_slice]) # not slice - for case in [[2, 1, 2, 0], [2, 2, 1, 0], [0, 1, 2, 1], [-2, 0, 2], [2, 0, -2]]: + for case in [[2, 1, 2, 0], [2, 2, 1, 0], [0, 1, 2, 1], [-2, 0, 2], + [2, 0, -2]]: indices = np.array(case, dtype=np.int64) maybe_slice = lib.maybe_indices_to_slice(indices, len(target)) self.assertFalse(isinstance(maybe_slice, slice)) @@ -78,13 +81,15 @@ def test_maybe_indices_to_slice_right_edge(self): indices = np.arange(start, 99, step, dtype=np.int64) maybe_slice = lib.maybe_indices_to_slice(indices, len(target)) self.assertTrue(isinstance(maybe_slice, slice)) - self.assert_numpy_array_equal(target[indices], target[maybe_slice]) + self.assert_numpy_array_equal(target[indices], + target[maybe_slice]) # reverse indices = indices[::-1] maybe_slice = lib.maybe_indices_to_slice(indices, len(target)) self.assertTrue(isinstance(maybe_slice, slice)) - self.assert_numpy_array_equal(target[indices], target[maybe_slice]) + self.assert_numpy_array_equal(target[indices], + target[maybe_slice]) # not slice indices = np.array([97, 98, 99, 100], dtype=np.int64) @@ -145,13 +150,15 @@ def test_maybe_indices_to_slice_middle(self): indices = np.arange(start, end, step, dtype=np.int64) maybe_slice = lib.maybe_indices_to_slice(indices, len(target)) self.assertTrue(isinstance(maybe_slice, slice)) - self.assert_numpy_array_equal(target[indices], target[maybe_slice]) + 
self.assert_numpy_array_equal(target[indices], + target[maybe_slice]) # reverse indices = indices[::-1] maybe_slice = lib.maybe_indices_to_slice(indices, len(target)) self.assertTrue(isinstance(maybe_slice, slice)) - self.assert_numpy_array_equal(target[indices], target[maybe_slice]) + self.assert_numpy_array_equal(target[indices], + target[maybe_slice]) # not slice for case in [[14, 12, 10, 12], [12, 12, 11, 10], [10, 11, 12, 11]]: @@ -162,7 +169,7 @@ def test_maybe_indices_to_slice_middle(self): self.assert_numpy_array_equal(target[indices], target[maybe_slice]) def test_isinf_scalar(self): - #GH 11352 + # GH 11352 self.assertTrue(lib.isposinf_scalar(float('inf'))) self.assertTrue(lib.isposinf_scalar(np.inf)) self.assertFalse(lib.isposinf_scalar(-np.inf)) @@ -175,6 +182,7 @@ def test_isinf_scalar(self): self.assertFalse(lib.isneginf_scalar(1)) self.assertFalse(lib.isneginf_scalar('a')) + class Testisscalar(tm.TestCase): def test_isscalar_builtin_scalars(self): @@ -197,7 +205,7 @@ def test_isscalar_builtin_nonscalars(self): self.assertFalse(lib.isscalar([])) self.assertFalse(lib.isscalar([1])) self.assertFalse(lib.isscalar(())) - self.assertFalse(lib.isscalar((1,))) + self.assertFalse(lib.isscalar((1, ))) self.assertFalse(lib.isscalar(slice(None))) self.assertFalse(lib.isscalar(Ellipsis)) @@ -213,8 +221,7 @@ def test_isscalar_numpy_array_scalars(self): self.assertTrue(lib.isscalar(np.timedelta64(1, 'h'))) def test_isscalar_numpy_zerodim_arrays(self): - for zerodim in [np.array(1), - np.array('foobar'), + for zerodim in [np.array(1), np.array('foobar'), np.array(np.datetime64('2014-01-01')), np.array(np.timedelta64(1, 'h'))]: self.assertFalse(lib.isscalar(zerodim)) diff --git a/pandas/tests/test_msgpack/test_case.py b/pandas/tests/test_msgpack/test_case.py index 187668b242495..5e8bbff390d07 100644 --- a/pandas/tests/test_msgpack/test_case.py +++ b/pandas/tests/test_msgpack/test_case.py @@ -10,68 +10,75 @@ def check(length, obj): "%r length should be %r but get %r" 
% (obj, length, len(v)) assert unpackb(v, use_list=0) == obj + def test_1(): for o in [None, True, False, 0, 1, (1 << 6), (1 << 7) - 1, -1, - -((1<<5)-1), -(1<<5)]: + -((1 << 5) - 1), -(1 << 5)]: check(1, o) + def test_2(): - for o in [1 << 7, (1 << 8) - 1, - -((1<<5)+1), -(1<<7) - ]: + for o in [1 << 7, (1 << 8) - 1, -((1 << 5) + 1), -(1 << 7)]: check(2, o) + def test_3(): - for o in [1 << 8, (1 << 16) - 1, - -((1<<7)+1), -(1<<15)]: + for o in [1 << 8, (1 << 16) - 1, -((1 << 7) + 1), -(1 << 15)]: check(3, o) + def test_5(): - for o in [1 << 16, (1 << 32) - 1, - -((1<<15)+1), -(1<<31)]: + for o in [1 << 16, (1 << 32) - 1, -((1 << 15) + 1), -(1 << 31)]: check(5, o) + def test_9(): - for o in [1 << 32, (1 << 64) - 1, - -((1<<31)+1), -(1<<63), - 1.0, 0.1, -0.1, -1.0]: + for o in [1 << 32, (1 << 64) - 1, -((1 << 31) + 1), -(1 << 63), 1.0, 0.1, + -0.1, -1.0]: check(9, o) def check_raw(overhead, num): check(num + overhead, b" " * num) + def test_fixraw(): check_raw(1, 0) - check_raw(1, (1<<5) - 1) + check_raw(1, (1 << 5) - 1) + def test_raw16(): - check_raw(3, 1<<5) - check_raw(3, (1<<16) - 1) + check_raw(3, 1 << 5) + check_raw(3, (1 << 16) - 1) + def test_raw32(): - check_raw(5, 1<<16) + check_raw(5, 1 << 16) def check_array(overhead, num): - check(num + overhead, (None,) * num) + check(num + overhead, (None, ) * num) + def test_fixarray(): check_array(1, 0) check_array(1, (1 << 4) - 1) + def test_array16(): check_array(3, 1 << 4) - check_array(3, (1<<16)-1) + check_array(3, (1 << 16) - 1) + def test_array32(): - check_array(5, (1<<16)) + check_array(5, (1 << 16)) def match(obj, buf): assert packb(obj) == buf assert unpackb(buf, use_list=0) == obj + def test_match(): cases = [ (None, b'\xc0'), @@ -84,19 +91,26 @@ def test_match(): (-1, b'\xff'), (-33, b'\xd0\xdf'), (-129, b'\xd1\xff\x7f'), - ({1:1}, b'\x81\x01\x01'), + ({1: 1}, b'\x81\x01\x01'), (1.0, b"\xcb\x3f\xf0\x00\x00\x00\x00\x00\x00"), ((), b'\x90'), - 
(tuple(range(15)),b"\x9f\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e"), - (tuple(range(16)),b"\xdc\x00\x10\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"), + (tuple(range(15)), (b"\x9f\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09" + b"\x0a\x0b\x0c\x0d\x0e")), + (tuple(range(16)), (b"\xdc\x00\x10\x00\x01\x02\x03\x04\x05\x06\x07" + b"\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f")), ({}, b'\x80'), - (dict([(x,x) for x in range(15)]), b'\x8f\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e'), - (dict([(x,x) for x in range(16)]), b'\xde\x00\x10\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e\x0f\x0f'), - ] + (dict([(x, x) for x in range(15)]), + (b'\x8f\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07' + b'\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e')), + (dict([(x, x) for x in range(16)]), + (b'\xde\x00\x10\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06' + b'\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e' + b'\x0f\x0f')), + ] for v, p in cases: match(v, p) + def test_unicode(): assert unpackb(packb('foobar'), use_list=1) == b'foobar' - diff --git a/pandas/tests/test_msgpack/test_except.py b/pandas/tests/test_msgpack/test_except.py index a0239336ca20d..79290ebb891fd 100644 --- a/pandas/tests/test_msgpack/test_except.py +++ b/pandas/tests/test_msgpack/test_except.py @@ -2,14 +2,13 @@ # coding: utf-8 import unittest -import nose - -import datetime from pandas.msgpack import packb, unpackb + class DummyException(Exception): pass + class TestExceptions(unittest.TestCase): def test_raise_on_find_unsupported_value(self): @@ -19,11 +18,17 @@ def test_raise_on_find_unsupported_value(self): def test_raise_from_object_hook(self): def hook(obj): raise DummyException + self.assertRaises(DummyException, unpackb, packb({}), object_hook=hook) - self.assertRaises(DummyException, unpackb, packb({'fizz': 'buzz'}), 
object_hook=hook) - self.assertRaises(DummyException, unpackb, packb({'fizz': 'buzz'}), object_pairs_hook=hook) - self.assertRaises(DummyException, unpackb, packb({'fizz': {'buzz': 'spam'}}), object_hook=hook) - self.assertRaises(DummyException, unpackb, packb({'fizz': {'buzz': 'spam'}}), object_pairs_hook=hook) + self.assertRaises(DummyException, unpackb, packb({'fizz': 'buzz'}), + object_hook=hook) + self.assertRaises(DummyException, unpackb, packb({'fizz': 'buzz'}), + object_pairs_hook=hook) + self.assertRaises(DummyException, unpackb, + packb({'fizz': {'buzz': 'spam'}}), object_hook=hook) + self.assertRaises(DummyException, unpackb, + packb({'fizz': {'buzz': 'spam'}}), + object_pairs_hook=hook) def test_invalidvalue(self): self.assertRaises(ValueError, unpackb, b'\xd9\x97#DL_') diff --git a/pandas/tests/test_msgpack/test_extension.py b/pandas/tests/test_msgpack/test_extension.py index 3172605c0aae1..97f0962a753d9 100644 --- a/pandas/tests/test_msgpack/test_extension.py +++ b/pandas/tests/test_msgpack/test_extension.py @@ -9,40 +9,42 @@ def p(s): packer = msgpack.Packer() packer.pack_ext_type(0x42, s) return packer.bytes() - assert p(b'A') == b'\xd4\x42A' # fixext 1 - assert p(b'AB') == b'\xd5\x42AB' # fixext 2 - assert p(b'ABCD') == b'\xd6\x42ABCD' # fixext 4 - assert p(b'ABCDEFGH') == b'\xd7\x42ABCDEFGH' # fixext 8 - assert p(b'A'*16) == b'\xd8\x42' + b'A'*16 # fixext 16 - assert p(b'ABC') == b'\xc7\x03\x42ABC' # ext 8 - assert p(b'A'*0x0123) == b'\xc8\x01\x23\x42' + b'A'*0x0123 # ext 16 - assert p(b'A'*0x00012345) == b'\xc9\x00\x01\x23\x45\x42' + b'A'*0x00012345 # ext 32 + + assert p(b'A') == b'\xd4\x42A' # fixext 1 + assert p(b'AB') == b'\xd5\x42AB' # fixext 2 + assert p(b'ABCD') == b'\xd6\x42ABCD' # fixext 4 + assert p(b'ABCDEFGH') == b'\xd7\x42ABCDEFGH' # fixext 8 + assert p(b'A' * 16) == b'\xd8\x42' + b'A' * 16 # fixext 16 + assert p(b'ABC') == b'\xc7\x03\x42ABC' # ext 8 + assert p(b'A' * 0x0123) == b'\xc8\x01\x23\x42' + b'A' * 0x0123 # ext 16 + assert 
(p(b'A' * 0x00012345) == + b'\xc9\x00\x01\x23\x45\x42' + b'A' * 0x00012345) # ext 32 def test_unpack_ext_type(): def check(b, expected): assert msgpack.unpackb(b) == expected - check(b'\xd4\x42A', ExtType(0x42, b'A')) # fixext 1 - check(b'\xd5\x42AB', ExtType(0x42, b'AB')) # fixext 2 - check(b'\xd6\x42ABCD', ExtType(0x42, b'ABCD')) # fixext 4 - check(b'\xd7\x42ABCDEFGH', ExtType(0x42, b'ABCDEFGH')) # fixext 8 - check(b'\xd8\x42' + b'A'*16, ExtType(0x42, b'A'*16)) # fixext 16 - check(b'\xc7\x03\x42ABC', ExtType(0x42, b'ABC')) # ext 8 - check(b'\xc8\x01\x23\x42' + b'A'*0x0123, - ExtType(0x42, b'A'*0x0123)) # ext 16 - check(b'\xc9\x00\x01\x23\x45\x42' + b'A'*0x00012345, - ExtType(0x42, b'A'*0x00012345)) # ext 32 + check(b'\xd4\x42A', ExtType(0x42, b'A')) # fixext 1 + check(b'\xd5\x42AB', ExtType(0x42, b'AB')) # fixext 2 + check(b'\xd6\x42ABCD', ExtType(0x42, b'ABCD')) # fixext 4 + check(b'\xd7\x42ABCDEFGH', ExtType(0x42, b'ABCDEFGH')) # fixext 8 + check(b'\xd8\x42' + b'A' * 16, ExtType(0x42, b'A' * 16)) # fixext 16 + check(b'\xc7\x03\x42ABC', ExtType(0x42, b'ABC')) # ext 8 + check(b'\xc8\x01\x23\x42' + b'A' * 0x0123, + ExtType(0x42, b'A' * 0x0123)) # ext 16 + check(b'\xc9\x00\x01\x23\x45\x42' + b'A' * 0x00012345, + ExtType(0x42, b'A' * 0x00012345)) # ext 32 def test_extension_type(): def default(obj): print('default called', obj) if isinstance(obj, array.array): - typecode = 123 # application specific typecode + typecode = 123 # application specific typecode data = obj.tostring() return ExtType(typecode, data) - raise TypeError("Unknwon type object %r" % (obj,)) + raise TypeError("Unknwon type object %r" % (obj, )) def ext_hook(code, data): print('ext_hook called', code, data) diff --git a/pandas/tests/test_msgpack/test_format.py b/pandas/tests/test_msgpack/test_format.py index 706c48436d7d3..203726ae6a5f9 100644 --- a/pandas/tests/test_msgpack/test_format.py +++ b/pandas/tests/test_msgpack/test_format.py @@ -3,68 +3,90 @@ from pandas.msgpack import unpackb + def 
check(src, should, use_list=0): assert unpackb(src, use_list=use_list) == should + def testSimpleValue(): - check(b"\x93\xc0\xc2\xc3", - (None, False, True,)) + check(b"\x93\xc0\xc2\xc3", (None, False, True, )) + def testFixnum(): - check(b"\x92\x93\x00\x40\x7f\x93\xe0\xf0\xff", - ((0,64,127,), (-32,-16,-1,),) - ) + check(b"\x92\x93\x00\x40\x7f\x93\xe0\xf0\xff", ((0, + 64, + 127, ), + (-32, + -16, + -1, ), )) + def testFixArray(): - check(b"\x92\x90\x91\x91\xc0", - ((),((None,),),), - ) + check(b"\x92\x90\x91\x91\xc0", ((), ((None, ), ), ), ) + def testFixRaw(): - check(b"\x94\xa0\xa1a\xa2bc\xa3def", - (b"", b"a", b"bc", b"def",), - ) + check(b"\x94\xa0\xa1a\xa2bc\xa3def", (b"", b"a", b"bc", b"def", ), ) + def testFixMap(): - check( - b"\x82\xc2\x81\xc0\xc0\xc3\x81\xc0\x80", - {False: {None: None}, True:{None:{}}}, - ) + check(b"\x82\xc2\x81\xc0\xc0\xc3\x81\xc0\x80", + {False: {None: None}, + True: {None: {}}}, ) + def testUnsignedInt(): - check( - b"\x99\xcc\x00\xcc\x80\xcc\xff\xcd\x00\x00\xcd\x80\x00" + check(b"\x99\xcc\x00\xcc\x80\xcc\xff\xcd\x00\x00\xcd\x80\x00" b"\xcd\xff\xff\xce\x00\x00\x00\x00\xce\x80\x00\x00\x00" b"\xce\xff\xff\xff\xff", - (0, 128, 255, 0, 32768, 65535, 0, 2147483648, 4294967295,), - ) + (0, + 128, + 255, + 0, + 32768, + 65535, + 0, + 2147483648, + 4294967295, ), ) + def testSignedInt(): check(b"\x99\xd0\x00\xd0\x80\xd0\xff\xd1\x00\x00\xd1\x80\x00" b"\xd1\xff\xff\xd2\x00\x00\x00\x00\xd2\x80\x00\x00\x00" - b"\xd2\xff\xff\xff\xff", - (0, -128, -1, 0, -32768, -1, 0, -2147483648, -1,)) + b"\xd2\xff\xff\xff\xff", (0, + -128, + -1, + 0, + -32768, + -1, + 0, + -2147483648, + -1, )) + def testRaw(): check(b"\x96\xda\x00\x00\xda\x00\x01a\xda\x00\x02ab\xdb\x00\x00" - b"\x00\x00\xdb\x00\x00\x00\x01a\xdb\x00\x00\x00\x02ab", - (b"", b"a", b"ab", b"", b"a", b"ab")) + b"\x00\x00\xdb\x00\x00\x00\x01a\xdb\x00\x00\x00\x02ab", + (b"", b"a", b"ab", b"", b"a", b"ab")) + def testArray(): check(b"\x96\xdc\x00\x00\xdc\x00\x01\xc0\xdc\x00\x02\xc2\xc3\xdd\x00" - 
b"\x00\x00\x00\xdd\x00\x00\x00\x01\xc0\xdd\x00\x00\x00\x02" - b"\xc2\xc3", - ((), (None,), (False,True), (), (None,), (False,True)) - ) + b"\x00\x00\x00\xdd\x00\x00\x00\x01\xc0\xdd\x00\x00\x00\x02" + b"\xc2\xc3", ((), (None, ), (False, True), (), (None, ), + (False, True))) + def testMap(): - check( - b"\x96" - b"\xde\x00\x00" - b"\xde\x00\x01\xc0\xc2" - b"\xde\x00\x02\xc0\xc2\xc3\xc2" - b"\xdf\x00\x00\x00\x00" - b"\xdf\x00\x00\x00\x01\xc0\xc2" - b"\xdf\x00\x00\x00\x02\xc0\xc2\xc3\xc2", - ({}, {None: False}, {True: False, None: False}, {}, - {None: False}, {True: False, None: False})) + check(b"\x96" + b"\xde\x00\x00" + b"\xde\x00\x01\xc0\xc2" + b"\xde\x00\x02\xc0\xc2\xc3\xc2" + b"\xdf\x00\x00\x00\x00" + b"\xdf\x00\x00\x00\x01\xc0\xc2" + b"\xdf\x00\x00\x00\x02\xc0\xc2\xc3\xc2", ({}, {None: False}, + {True: False, + None: False}, {}, + {None: False}, + {True: False, + None: False})) diff --git a/pandas/tests/test_msgpack/test_limits.py b/pandas/tests/test_msgpack/test_limits.py index d9aa957182d65..2cf52aae65f2a 100644 --- a/pandas/tests/test_msgpack/test_limits.py +++ b/pandas/tests/test_msgpack/test_limits.py @@ -1,33 +1,33 @@ #!/usr/bin/env python # coding: utf-8 -from __future__ import absolute_import, division, print_function, unicode_literals +from __future__ import (absolute_import, division, print_function, + unicode_literals) import pandas.util.testing as tm from pandas.msgpack import packb, unpackb, Packer, Unpacker, ExtType + class TestLimits(tm.TestCase): + def test_integer(self): x = -(2 ** 63) assert unpackb(packb(x)) == x - self.assertRaises((OverflowError, ValueError), packb, x-1) + self.assertRaises((OverflowError, ValueError), packb, x - 1) x = 2 ** 64 - 1 assert unpackb(packb(x)) == x - self.assertRaises((OverflowError, ValueError), packb, x+1) - + self.assertRaises((OverflowError, ValueError), packb, x + 1) def test_array_header(self): packer = Packer() - packer.pack_array_header(2**32-1) + packer.pack_array_header(2 ** 32 - 1) 
self.assertRaises((OverflowError, ValueError), - packer.pack_array_header, 2**32) - + packer.pack_array_header, 2 ** 32) def test_map_header(self): packer = Packer() - packer.pack_map_header(2**32-1) + packer.pack_map_header(2 ** 32 - 1) self.assertRaises((OverflowError, ValueError), - packer.pack_array_header, 2**32) - + packer.pack_array_header, 2 ** 32) def test_max_str_len(self): d = 'x' * 3 @@ -41,7 +41,6 @@ def test_max_str_len(self): unpacker.feed(packed) self.assertRaises(ValueError, unpacker.unpack) - def test_max_bin_len(self): d = b'x' * 3 packed = packb(d, use_bin_type=True) @@ -54,7 +53,6 @@ def test_max_bin_len(self): unpacker.feed(packed) self.assertRaises(ValueError, unpacker.unpack) - def test_max_array_len(self): d = [1, 2, 3] packed = packb(d) @@ -67,7 +65,6 @@ def test_max_array_len(self): unpacker.feed(packed) self.assertRaises(ValueError, unpacker.unpack) - def test_max_map_len(self): d = {1: 2, 3: 4, 5: 6} packed = packb(d) @@ -80,7 +77,6 @@ def test_max_map_len(self): unpacker.feed(packed) self.assertRaises(ValueError, unpacker.unpack) - def test_max_ext_len(self): d = ExtType(42, b"abc") packed = packb(d) diff --git a/pandas/tests/test_msgpack/test_newspec.py b/pandas/tests/test_msgpack/test_newspec.py index 8532ab8cfb1a4..4eb9a0425c57b 100644 --- a/pandas/tests/test_msgpack/test_newspec.py +++ b/pandas/tests/test_msgpack/test_newspec.py @@ -66,23 +66,27 @@ def test_bin32(): assert b[5:] == data assert unpackb(b) == data + def test_ext(): def check(ext, packed): assert packb(ext) == packed assert unpackb(packed) == ext - check(ExtType(0x42, b'Z'), b'\xd4\x42Z') # fixext 1 - check(ExtType(0x42, b'ZZ'), b'\xd5\x42ZZ') # fixext 2 - check(ExtType(0x42, b'Z'*4), b'\xd6\x42' + b'Z'*4) # fixext 4 - check(ExtType(0x42, b'Z'*8), b'\xd7\x42' + b'Z'*8) # fixext 8 - check(ExtType(0x42, b'Z'*16), b'\xd8\x42' + b'Z'*16) # fixext 16 + + check(ExtType(0x42, b'Z'), b'\xd4\x42Z') # fixext 1 + check(ExtType(0x42, b'ZZ'), b'\xd5\x42ZZ') # fixext 2 + 
check(ExtType(0x42, b'Z' * 4), b'\xd6\x42' + b'Z' * 4) # fixext 4 + check(ExtType(0x42, b'Z' * 8), b'\xd7\x42' + b'Z' * 8) # fixext 8 + check(ExtType(0x42, b'Z' * 16), b'\xd8\x42' + b'Z' * 16) # fixext 16 # ext 8 check(ExtType(0x42, b''), b'\xc7\x00\x42') - check(ExtType(0x42, b'Z'*255), b'\xc7\xff\x42' + b'Z'*255) + check(ExtType(0x42, b'Z' * 255), b'\xc7\xff\x42' + b'Z' * 255) # ext 16 - check(ExtType(0x42, b'Z'*256), b'\xc8\x01\x00\x42' + b'Z'*256) - check(ExtType(0x42, b'Z'*0xffff), b'\xc8\xff\xff\x42' + b'Z'*0xffff) + check(ExtType(0x42, b'Z' * 256), b'\xc8\x01\x00\x42' + b'Z' * 256) + check(ExtType(0x42, b'Z' * 0xffff), b'\xc8\xff\xff\x42' + b'Z' * 0xffff) # ext 32 - check(ExtType(0x42, b'Z'*0x10000), b'\xc9\x00\x01\x00\x00\x42' + b'Z'*0x10000) + check( + ExtType(0x42, b'Z' * + 0x10000), b'\xc9\x00\x01\x00\x00\x42' + b'Z' * 0x10000) # needs large memory - #check(ExtType(0x42, b'Z'*0xffffffff), + # check(ExtType(0x42, b'Z'*0xffffffff), # b'\xc9\xff\xff\xff\xff\x42' + b'Z'*0xffffffff) diff --git a/pandas/tests/test_msgpack/test_obj.py b/pandas/tests/test_msgpack/test_obj.py index 886fec522d4f3..bcc76929fe8f8 100644 --- a/pandas/tests/test_msgpack/test_obj.py +++ b/pandas/tests/test_msgpack/test_obj.py @@ -1,14 +1,13 @@ # coding: utf-8 import unittest -import nose - -import datetime from pandas.msgpack import packb, unpackb + class DecodeError(Exception): pass + class TestObj(unittest.TestCase): def _arr_to_str(self, arr): @@ -28,32 +27,37 @@ def _encode_complex(self, obj): return obj def test_encode_hook(self): - packed = packb([3, 1+2j], default=self._encode_complex) + packed = packb([3, 1 + 2j], default=self._encode_complex) unpacked = unpackb(packed, use_list=1) assert unpacked[1] == {b'__complex__': True, b'real': 1, b'imag': 2} def test_decode_hook(self): packed = packb([3, {b'__complex__': True, b'real': 1, b'imag': 2}]) - unpacked = unpackb(packed, object_hook=self._decode_complex, use_list=1) - assert unpacked[1] == 1+2j + unpacked = unpackb(packed, 
object_hook=self._decode_complex, + use_list=1) + assert unpacked[1] == 1 + 2j def test_decode_pairs_hook(self): packed = packb([3, {1: 2, 3: 4}]) prod_sum = 1 * 2 + 3 * 4 - unpacked = unpackb(packed, object_pairs_hook=lambda l: sum(k * v for k, v in l), use_list=1) + unpacked = unpackb( + packed, object_pairs_hook=lambda l: sum(k * v for k, v in l), + use_list=1) assert unpacked[1] == prod_sum def test_only_one_obj_hook(self): - self.assertRaises(TypeError, unpackb, b'', object_hook=lambda x: x, object_pairs_hook=lambda x: x) + self.assertRaises(TypeError, unpackb, b'', object_hook=lambda x: x, + object_pairs_hook=lambda x: x) def test_bad_hook(self): def f(): - packed = packb([3, 1+2j], default=lambda o: o) - unpacked = unpackb(packed, use_list=1) + packed = packb([3, 1 + 2j], default=lambda o: o) + unpacked = unpackb(packed, use_list=1) # noqa + self.assertRaises(TypeError, f) def test_array_hook(self): - packed = packb([1,2,3]) + packed = packb([1, 2, 3]) unpacked = unpackb(packed, list_hook=self._arr_to_str, use_list=1) assert unpacked == '123' @@ -61,11 +65,12 @@ def test_an_exception_in_objecthook1(self): def f(): packed = packb({1: {'__complex__': True, 'real': 1, 'imag': 2}}) unpackb(packed, object_hook=self.bad_complex_decoder) - self.assertRaises(DecodeError, f) + self.assertRaises(DecodeError, f) def test_an_exception_in_objecthook2(self): def f(): packed = packb({1: [{'__complex__': True, 'real': 1, 'imag': 2}]}) unpackb(packed, list_hook=self.bad_complex_decoder, use_list=1) + self.assertRaises(DecodeError, f) diff --git a/pandas/tests/test_msgpack/test_pack.py b/pandas/tests/test_msgpack/test_pack.py index 22df6df5e2e45..99c7453212b8b 100644 --- a/pandas/tests/test_msgpack/test_pack.py +++ b/pandas/tests/test_msgpack/test_pack.py @@ -2,13 +2,13 @@ # coding: utf-8 import unittest -import nose import struct from pandas import compat from pandas.compat import u, OrderedDict from pandas.msgpack import packb, unpackb, Unpacker, Packer + class 
TestPack(unittest.TestCase): def check(self, data, use_list=False): @@ -20,25 +20,25 @@ def testPack(self): 0, 1, 127, 128, 255, 256, 65535, 65536, -1, -32, -33, -128, -129, -32768, -32769, 1.0, - b"", b"a", b"a"*31, b"a"*32, + b"", b"a", b"a" * 31, b"a" * 32, None, True, False, (), ((),), ((), None,), {None: 0}, - (1<<23), - ] + (1 << 23), + ] for td in test_data: self.check(td) def testPackUnicode(self): - test_data = [ - u(""), u("abcd"), [u("defgh")], u("Русский текст"), - ] + test_data = [u(""), u("abcd"), [u("defgh")], u("Русский текст"), ] for td in test_data: - re = unpackb(packb(td, encoding='utf-8'), use_list=1, encoding='utf-8') + re = unpackb( + packb(td, encoding='utf-8'), use_list=1, encoding='utf-8') assert re == td packer = Packer(encoding='utf-8') data = packer.pack(td) - re = Unpacker(compat.BytesIO(data), encoding='utf-8', use_list=1).unpack() + re = Unpacker( + compat.BytesIO(data), encoding='utf-8', use_list=1).unpack() assert re == td def testPackUTF32(self): @@ -47,30 +47,36 @@ def testPackUTF32(self): compat.u("abcd"), [compat.u("defgh")], compat.u("Русский текст"), - ] + ] for td in test_data: - re = unpackb(packb(td, encoding='utf-32'), use_list=1, encoding='utf-32') + re = unpackb( + packb(td, encoding='utf-32'), use_list=1, encoding='utf-32') assert re == td def testPackBytes(self): - test_data = [ - b"", b"abcd", (b"defgh",), - ] + test_data = [b"", b"abcd", (b"defgh", ), ] for td in test_data: self.check(td) def testIgnoreUnicodeErrors(self): - re = unpackb(packb(b'abc\xeddef'), encoding='utf-8', unicode_errors='ignore', use_list=1) + re = unpackb( + packb(b'abc\xeddef'), encoding='utf-8', unicode_errors='ignore', + use_list=1) assert re == "abcdef" def testStrictUnicodeUnpack(self): - self.assertRaises(UnicodeDecodeError, unpackb, packb(b'abc\xeddef'), encoding='utf-8', use_list=1) + self.assertRaises(UnicodeDecodeError, unpackb, packb(b'abc\xeddef'), + encoding='utf-8', use_list=1) def testStrictUnicodePack(self): - 
self.assertRaises(UnicodeEncodeError, packb, compat.u("abc\xeddef"), encoding='ascii', unicode_errors='strict') + self.assertRaises(UnicodeEncodeError, packb, compat.u("abc\xeddef"), + encoding='ascii', unicode_errors='strict') def testIgnoreErrorsPack(self): - re = unpackb(packb(compat.u("abcФФФdef"), encoding='ascii', unicode_errors='ignore'), encoding='utf-8', use_list=1) + re = unpackb( + packb( + compat.u("abcФФФdef"), encoding='ascii', + unicode_errors='ignore'), encoding='utf-8', use_list=1) assert re == compat.u("abcdef") def testNoEncoding(self): @@ -81,8 +87,10 @@ def testDecodeBinary(self): assert re == b"abc" def testPackFloat(self): - assert packb(1.0, use_single_float=True) == b'\xca' + struct.pack('>f', 1.0) - assert packb(1.0, use_single_float=False) == b'\xcb' + struct.pack('>d', 1.0) + assert packb(1.0, + use_single_float=True) == b'\xca' + struct.pack('>f', 1.0) + assert packb( + 1.0, use_single_float=False) == b'\xcb' + struct.pack('>d', 1.0) def testArraySize(self, sizes=[0, 5, 50, 1000]): bio = compat.BytesIO() @@ -118,23 +126,24 @@ def testMapSize(self, sizes=[0, 5, 50, 1000]): for size in sizes: bio.write(packer.pack_map_header(size)) for i in range(size): - bio.write(packer.pack(i)) # key - bio.write(packer.pack(i * 2)) # value + bio.write(packer.pack(i)) # key + bio.write(packer.pack(i * 2)) # value bio.seek(0) unpacker = Unpacker(bio) for size in sizes: assert unpacker.unpack() == dict((i, i * 2) for i in range(size)) - def test_odict(self): seq = [(b'one', 1), (b'two', 2), (b'three', 3), (b'four', 4)] od = OrderedDict(seq) assert unpackb(packb(od), use_list=1) == dict(seq) + def pair_hook(seq): return list(seq) - assert unpackb(packb(od), object_pairs_hook=pair_hook, use_list=1) == seq + assert unpackb( + packb(od), object_pairs_hook=pair_hook, use_list=1) == seq def test_pairlist(self): pairlist = [(b'a', 1), (2, b'b'), (b'foo', b'bar')] diff --git a/pandas/tests/test_msgpack/test_read_size.py 
b/pandas/tests/test_msgpack/test_read_size.py index 7cbb9c9807201..965e97a7007de 100644 --- a/pandas/tests/test_msgpack/test_read_size.py +++ b/pandas/tests/test_msgpack/test_read_size.py @@ -2,6 +2,7 @@ from pandas.msgpack import packb, Unpacker, OutOfData UnexpectedTypeException = ValueError + def test_read_array_header(): unpacker = Unpacker() unpacker.feed(packb(['a', 'b', 'c'])) @@ -28,6 +29,7 @@ def test_read_map_header(): except OutOfData: assert 1, 'okay' + def test_incorrect_type_array(): unpacker = Unpacker() unpacker.feed(packb(1)) @@ -37,6 +39,7 @@ def test_incorrect_type_array(): except UnexpectedTypeException: assert 1, 'okay' + def test_incorrect_type_map(): unpacker = Unpacker() unpacker.feed(packb(1)) @@ -46,6 +49,7 @@ def test_incorrect_type_map(): except UnexpectedTypeException: assert 1, 'okay' + def test_correct_type_nested_array(): unpacker = Unpacker() unpacker.feed(packb({'a': ['b', 'c', 'd']})) @@ -55,6 +59,7 @@ def test_correct_type_nested_array(): except UnexpectedTypeException: assert 1, 'okay' + def test_incorrect_type_nested_map(): unpacker = Unpacker() unpacker.feed(packb([{'a': 'b'}])) @@ -63,4 +68,3 @@ def test_incorrect_type_nested_map(): assert 0, 'should raise exception' except UnexpectedTypeException: assert 1, 'okay' - diff --git a/pandas/tests/test_msgpack/test_seq.py b/pandas/tests/test_msgpack/test_seq.py index 464ff6d0174af..76a21b98f22da 100644 --- a/pandas/tests/test_msgpack/test_seq.py +++ b/pandas/tests/test_msgpack/test_seq.py @@ -4,9 +4,9 @@ import io import pandas.msgpack as msgpack - binarydata = bytes(bytearray(range(256))) + def gen_binary_data(idx): return binarydata[:idx % 300] @@ -18,10 +18,16 @@ def test_exceeding_unpacker_read_size(): NUMBER_OF_STRINGS = 6 read_size = 16 - # 5 ok for read_size=16, while 6 glibc detected *** python: double free or corruption (fasttop): - # 20 ok for read_size=256, while 25 segfaults / glibc detected *** python: double free or corruption (!prev) - # 40 ok for read_size=1024, 
while 50 introduces errors - # 7000 ok for read_size=1024*1024, while 8000 leads to glibc detected *** python: double free or corruption (!prev): + + # 5 ok for read_size=16, while 6 glibc detected *** python: double free or + # corruption (fasttop): + + # 20 ok for read_size=256, while 25 segfaults / glibc detected *** python: + # double free or corruption (!prev) + + # 40 ok for read_size=1024, while 50 introduces errors + # 7000 ok for read_size=1024*1024, while 8000 leads to glibc detected *** + # python: double free or corruption (!prev): for idx in range(NUMBER_OF_STRINGS): data = gen_binary_data(idx) diff --git a/pandas/tests/test_msgpack/test_sequnpack.py b/pandas/tests/test_msgpack/test_sequnpack.py index 72ceed0471437..2f496b9fbbafa 100644 --- a/pandas/tests/test_msgpack/test_sequnpack.py +++ b/pandas/tests/test_msgpack/test_sequnpack.py @@ -2,12 +2,12 @@ # coding: utf-8 import unittest -import nose from pandas import compat from pandas.msgpack import Unpacker, BufferFull from pandas.msgpack import OutOfData + class TestPack(unittest.TestCase): def test_partialdata(self): @@ -89,8 +89,8 @@ def test_issue124(self): assert tuple(unpacker) == (b'?', b'!') assert tuple(unpacker) == () unpacker.feed(b"\xa1?\xa1") - assert tuple(unpacker) == (b'?',) + assert tuple(unpacker) == (b'?', ) assert tuple(unpacker) == () unpacker.feed(b"!") - assert tuple(unpacker) == (b'!',) + assert tuple(unpacker) == (b'!', ) assert tuple(unpacker) == () diff --git a/pandas/tests/test_msgpack/test_subtype.py b/pandas/tests/test_msgpack/test_subtype.py index 0934b31cebeda..c89b36717a159 100644 --- a/pandas/tests/test_msgpack/test_subtype.py +++ b/pandas/tests/test_msgpack/test_subtype.py @@ -1,20 +1,25 @@ #!/usr/bin/env python # coding: utf-8 -from pandas.msgpack import packb, unpackb +from pandas.msgpack import packb from collections import namedtuple + class MyList(list): pass + class MyDict(dict): pass + class MyTuple(tuple): pass + MyNamedTuple = namedtuple('MyNamedTuple', 'x 
y') + def test_types(): assert packb(MyDict()) == packb(dict()) assert packb(MyList()) == packb(list()) diff --git a/pandas/tests/test_msgpack/test_unpack.py b/pandas/tests/test_msgpack/test_unpack.py index fe840083ae1c2..a182c676adb3b 100644 --- a/pandas/tests/test_msgpack/test_unpack.py +++ b/pandas/tests/test_msgpack/test_unpack.py @@ -4,9 +4,11 @@ import pandas.util.testing as tm import nose + class TestUnpack(tm.TestCase): + def test_unpack_array_header_from_file(self): - f = BytesIO(packb([1,2,3,4])) + f = BytesIO(packb([1, 2, 3, 4])) unpacker = Unpacker(f) assert unpacker.read_array_header() == 4 assert unpacker.unpack() == 1 @@ -15,7 +17,6 @@ def test_unpack_array_header_from_file(self): assert unpacker.unpack() == 4 self.assertRaises(OutOfData, unpacker.unpack) - def test_unpacker_hook_refcnt(self): if not hasattr(sys, 'getrefcount'): raise nose.SkipTest('no sys.getrefcount()') @@ -41,9 +42,7 @@ def hook(x): assert sys.getrefcount(hook) == basecnt - def test_unpacker_ext_hook(self): - class MyUnpacker(Unpacker): def __init__(self): diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py index 5b00ea163d85f..6302c011a4491 100644 --- a/pandas/tests/test_multilevel.py +++ b/pandas/tests/test_multilevel.py @@ -10,14 +10,12 @@ from pandas.core.index import Index, MultiIndex from pandas import Panel, DataFrame, Series, notnull, isnull, Timestamp -from pandas.util.testing import (assert_almost_equal, - assert_series_equal, - assert_frame_equal, - assertRaisesRegexp) +from pandas.util.testing import (assert_almost_equal, assert_series_equal, + assert_frame_equal, assertRaisesRegexp) import pandas.core.common as com import pandas.util.testing as tm -from pandas.compat import (range, lrange, StringIO, lzip, u, - product as cart_product, zip) +from pandas.compat import (range, lrange, StringIO, lzip, u, product as + cart_product, zip) import pandas as pd import pandas.index as _index @@ -29,8 +27,8 @@ class TestMultiLevel(tm.TestCase): def 
setUp(self): - index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], - ['one', 'two', 'three']], + index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two', + 'three']], labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], names=['first', 'second']) @@ -38,8 +36,7 @@ def setUp(self): columns=Index(['A', 'B', 'C'], name='exp')) self.single_level = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']], - labels=[[0, 1, 2, 3]], - names=['first']) + labels=[[0, 1, 2, 3]], names=['first']) # create test series object arrays = [['bar', 'bar', 'baz', 'baz', 'qux', 'qux', 'foo', 'foo'], @@ -57,10 +54,9 @@ def setUp(self): # use Int64Index, to make sure things work self.ymd.index.set_levels([lev.astype('i8') - for lev in self.ymd.index.levels], - inplace=True) - self.ymd.index.set_names(['year', 'month', 'day'], - inplace=True) + for lev in self.ymd.index.levels], + inplace=True) + self.ymd.index.set_names(['year', 'month', 'day'], inplace=True) def test_append(self): a, b = self.frame[:5], self.frame[5:] @@ -75,7 +71,8 @@ def test_append_index(self): tm._skip_if_no_pytz() idx1 = Index([1.1, 1.2, 1.3]) - idx2 = pd.date_range('2011-01-01', freq='D', periods=3, tz='Asia/Tokyo') + idx2 = pd.date_range('2011-01-01', freq='D', periods=3, + tz='Asia/Tokyo') idx3 = Index(['A', 'B', 'C']) midx_lv2 = MultiIndex.from_arrays([idx1, idx2]) @@ -97,7 +94,8 @@ def test_append_index(self): self.assertTrue(result.equals(expected)) result = midx_lv2.append(midx_lv2) - expected = MultiIndex.from_arrays([idx1.append(idx1), idx2.append(idx2)]) + expected = MultiIndex.from_arrays([idx1.append(idx1), idx2.append(idx2) + ]) self.assertTrue(result.equals(expected)) result = midx_lv2.append(midx_lv3) @@ -108,7 +106,7 @@ def test_append_index(self): np.array([(1.1, datetime.datetime(2011, 1, 1, tzinfo=tz), 'A'), (1.2, datetime.datetime(2011, 1, 2, tzinfo=tz), 'B'), (1.3, datetime.datetime(2011, 1, 3, tzinfo=tz), 'C')] - + expected_tuples), None) + + expected_tuples), 
None) self.assertTrue(result.equals(expected)) def test_dataframe_constructor(self): @@ -124,16 +122,15 @@ def test_dataframe_constructor(self): tm.assertIsInstance(multi.columns, MultiIndex) def test_series_constructor(self): - multi = Series(1., index=[np.array(['a', 'a', 'b', 'b']), - np.array(['x', 'y', 'x', 'y'])]) + multi = Series(1., index=[np.array(['a', 'a', 'b', 'b']), np.array( + ['x', 'y', 'x', 'y'])]) tm.assertIsInstance(multi.index, MultiIndex) - multi = Series(1., index=[['a', 'a', 'b', 'b'], - ['x', 'y', 'x', 'y']]) + multi = Series(1., index=[['a', 'a', 'b', 'b'], ['x', 'y', 'x', 'y']]) tm.assertIsInstance(multi.index, MultiIndex) multi = Series(lrange(4), index=[['a', 'a', 'b', 'b'], - ['x', 'y', 'x', 'y']]) + ['x', 'y', 'x', 'y']]) tm.assertIsInstance(multi.index, MultiIndex) def test_reindex_level(self): @@ -168,8 +165,8 @@ def _check_op(opname): # Series op = getattr(Series, opname) result = op(self.ymd['A'], month_sums['A'], level='month') - broadcasted = self.ymd['A'].groupby( - level='month').transform(np.sum) + broadcasted = self.ymd['A'].groupby(level='month').transform( + np.sum) expected = op(self.ymd['A'], broadcasted) expected.name = 'A' assert_series_equal(result, expected) @@ -180,7 +177,6 @@ def _check_op(opname): _check_op('div') def test_pickle(self): - def _test_roundtrip(frame): unpickled = self.round_trip_pickle(frame) assert_frame_equal(frame, unpickled) @@ -217,38 +213,40 @@ def test_sort_index_preserve_levels(self): def test_sorting_repr_8017(self): np.random.seed(0) - data = np.random.randn(3,4) + data = np.random.randn(3, 4) - for gen, extra in [([1.,3.,2.,5.],4.), - ([1,3,2,5],4), - ([Timestamp('20130101'),Timestamp('20130103'),Timestamp('20130102'),Timestamp('20130105')],Timestamp('20130104')), - (['1one','3one','2one','5one'],'4one')]: + for gen, extra in [([1., 3., 2., 5.], 4.), ([1, 3, 2, 5], 4), + ([Timestamp('20130101'), Timestamp('20130103'), + Timestamp('20130102'), Timestamp('20130105')], + 
Timestamp('20130104')), + (['1one', '3one', '2one', '5one'], '4one')]: columns = MultiIndex.from_tuples([('red', i) for i in gen]) df = DataFrame(data, index=list('def'), columns=columns) - df2 = pd.concat([df,DataFrame('world', - index=list('def'), - columns=MultiIndex.from_tuples([('red', extra)]))],axis=1) + df2 = pd.concat([df, + DataFrame('world', index=list('def'), + columns=MultiIndex.from_tuples( + [('red', extra)]))], axis=1) # check that the repr is good # make sure that we have a correct sparsified repr # e.g. only 1 header of read - self.assertEqual(str(df2).splitlines()[0].split(),['red']) + self.assertEqual(str(df2).splitlines()[0].split(), ['red']) # GH 8017 # sorting fails after columns added # construct single-dtype then sort result = df.copy().sort_index(axis=1) - expected = df.iloc[:,[0,2,1,3]] + expected = df.iloc[:, [0, 2, 1, 3]] assert_frame_equal(result, expected) result = df2.sort_index(axis=1) - expected = df2.iloc[:,[0,2,1,4,3]] + expected = df2.iloc[:, [0, 2, 1, 4, 3]] assert_frame_equal(result, expected) # setitem then sort result = df.copy() - result[('red',extra)] = 'world' + result[('red', extra)] = 'world' result = result.sort_index(axis=1) assert_frame_equal(result, expected) @@ -285,7 +283,10 @@ def test_series_getitem(self): s = self.ymd['A'] result = s[2000, 3] - result2 = s.ix[2000, 3] + + # TODO(wesm): unused? 
+ # result2 = s.ix[2000, 3] + expected = s.reindex(s.index[42:65]) expected.index = expected.index.droplevel(0).droplevel(0) assert_series_equal(result, expected) @@ -389,10 +390,8 @@ def test_frame_getitem_setitem_multislice(self): assert_frame_equal(df, result) def test_frame_getitem_multicolumn_empty_level(self): - f = DataFrame({'a': ['1', '2', '3'], - 'b': ['2', '3', '4']}) - f.columns = [['level1 item1', 'level1 item2'], - ['', 'level2 item2'], + f = DataFrame({'a': ['1', '2', '3'], 'b': ['2', '3', '4']}) + f.columns = [['level1 item1', 'level1 item2'], ['', 'level2 item2'], ['level3 item1', 'level3 item2']] result = f['level1 item1'] @@ -413,7 +412,7 @@ def test_frame_setitem_multi_column(self): cp['a'] = cp['b'].values assert_frame_equal(cp['a'], cp['b']) - #---------------------------------------- + # --------------------------------------- # #1803 columns = MultiIndex.from_tuples([('A', '1'), ('A', '2'), ('B', '1')]) df = DataFrame(index=[1, 3, 5], columns=columns) @@ -482,18 +481,20 @@ def test_xs(self): # GH 6574 # missing values in returned index should be preserrved acc = [ - ('a','abcde',1), - ('b','bbcde',2), - ('y','yzcde',25), - ('z','xbcde',24), - ('z',None,26), - ('z','zbcde',25), - ('z','ybcde',26), - ] - df = DataFrame(acc, columns=['a1','a2','cnt']).set_index(['a1','a2']) - expected = DataFrame({ 'cnt' : [24,26,25,26] }, index=Index(['xbcde',np.nan,'zbcde','ybcde'],name='a2')) - - result = df.xs('z',level='a1') + ('a', 'abcde', 1), + ('b', 'bbcde', 2), + ('y', 'yzcde', 25), + ('z', 'xbcde', 24), + ('z', None, 26), + ('z', 'zbcde', 25), + ('z', 'ybcde', 26), + ] + df = DataFrame(acc, + columns=['a1', 'a2', 'cnt']).set_index(['a1', 'a2']) + expected = DataFrame({'cnt': [24, 26, 25, 26]}, index=Index( + ['xbcde', np.nan, 'zbcde', 'ybcde'], name='a2')) + + result = df.xs('z', level='a1') assert_frame_equal(result, expected) def test_xs_partial(self): @@ -510,8 +511,8 @@ def test_xs_partial(self): # ex from #1796 index = MultiIndex(levels=[['foo', 
'bar'], ['one', 'two'], [-1, 1]], labels=[[0, 0, 0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 0, 0, 1, 1], - [0, 1, 0, 1, 0, 1, 0, 1]]) + [0, 0, 1, 1, 0, 0, 1, 1], [0, 1, 0, 1, 0, 1, + 0, 1]]) df = DataFrame(np.random.randn(8, 4), index=index, columns=list('abcd')) @@ -526,8 +527,8 @@ def test_xs_level(self): assert_frame_equal(result, expected) - index = MultiIndex.from_tuples([('x', 'y', 'z'), ('a', 'b', 'c'), - ('p', 'q', 'r')]) + index = MultiIndex.from_tuples([('x', 'y', 'z'), ('a', 'b', 'c'), ( + 'p', 'q', 'r')]) df = DataFrame(np.random.randn(3, 5), index=index) result = df.xs('c', level=2) expected = df[1:2] @@ -541,6 +542,7 @@ def test_xs_level(self): # as we are trying to write a view def f(x): x[:] = 10 + self.assertRaises(com.SettingWithCopyError, f, result) def test_xs_level_multiple(self): @@ -564,6 +566,7 @@ def test_xs_level_multiple(self): # as we are trying to write a view def f(x): x[:] = 10 + self.assertRaises(com.SettingWithCopyError, f, result) # GH2107 @@ -638,8 +641,7 @@ def test_getitem_toplevel(self): def test_getitem_setitem_slice_integers(self): index = MultiIndex(levels=[[0, 1, 2], [0, 2]], - labels=[[0, 0, 1, 1, 2, 2], - [0, 1, 0, 1, 0, 1]]) + labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]]) frame = DataFrame(np.random.randn(len(index), 4), index=index, columns=['a', 'b', 'c', 'd']) @@ -761,27 +763,27 @@ def test_sortlevel(self): def test_sortlevel_large_cardinality(self): # #2684 (int64) - index = MultiIndex.from_arrays([np.arange(4000)]*3) - df = DataFrame(np.random.randn(4000), index=index, dtype = np.int64) + index = MultiIndex.from_arrays([np.arange(4000)] * 3) + df = DataFrame(np.random.randn(4000), index=index, dtype=np.int64) # it works! 
result = df.sortlevel(0) self.assertTrue(result.index.lexsort_depth == 3) # #2684 (int32) - index = MultiIndex.from_arrays([np.arange(4000)]*3) - df = DataFrame(np.random.randn(4000), index=index, dtype = np.int32) + index = MultiIndex.from_arrays([np.arange(4000)] * 3) + df = DataFrame(np.random.randn(4000), index=index, dtype=np.int32) # it works! result = df.sortlevel(0) - self.assertTrue((result.dtypes.values == df.dtypes.values).all() == True) + self.assertTrue((result.dtypes.values == df.dtypes.values).all()) self.assertTrue(result.index.lexsort_depth == 3) def test_delevel_infer_dtype(self): - tuples = [tuple for tuple in cart_product(['foo', 'bar'], - [10, 20], [1.0, 1.1])] - index = MultiIndex.from_tuples(tuples, - names=['prm0', 'prm1', 'prm2']) + tuples = [tuple + for tuple in cart_product( + ['foo', 'bar'], [10, 20], [1.0, 1.1])] + index = MultiIndex.from_tuples(tuples, names=['prm0', 'prm1', 'prm2']) df = DataFrame(np.random.randn(8, 3), columns=['A', 'B', 'C'], index=index) deleveled = df.reset_index() @@ -850,10 +852,9 @@ def _check_counts(frame, axis=0): assert_almost_equal(result.columns, ['A', 'B', 'C']) def test_count_level_series(self): - index = MultiIndex(levels=[['foo', 'bar', 'baz'], - ['one', 'two', 'three', 'four']], - labels=[[0, 0, 0, 2, 2], - [2, 0, 1, 1, 2]]) + index = MultiIndex(levels=[['foo', 'bar', 'baz'], ['one', 'two', + 'three', 'four']], + labels=[[0, 0, 0, 2, 2], [2, 0, 1, 1, 2]]) s = Series(np.random.randn(len(index)), index=index) @@ -888,17 +889,17 @@ def test_get_level_number_out_of_bounds(self): def test_unstack(self): # just check that it works for now unstacked = self.ymd.unstack() - unstacked2 = unstacked.unstack() + unstacked.unstack() # test that ints work - unstacked = self.ymd.astype(int).unstack() + self.ymd.astype(int).unstack() # test that int32 work - unstacked = self.ymd.astype(np.int32).unstack() + self.ymd.astype(np.int32).unstack() def test_unstack_multiple_no_empty_columns(self): - index = 
MultiIndex.from_tuples([(0, 'foo', 0), (0, 'bar', 0), - (1, 'baz', 1), (1, 'qux', 1)]) + index = MultiIndex.from_tuples([(0, 'foo', 0), (0, 'bar', 0), ( + 1, 'baz', 1), (1, 'qux', 1)]) s = Series(np.random.randn(4), index=index) @@ -973,16 +974,17 @@ def check(left, right): columns=['1st', '2nd', '3rd']) mi = MultiIndex(levels=[['a', 'b'], ['1st', '2nd', '3rd']], - labels=[np.tile(np.arange(2).repeat(3), 2), - np.tile(np.arange(3), 4)]) + labels=[np.tile( + np.arange(2).repeat(3), 2), np.tile( + np.arange(3), 4)]) left, right = df.stack(), Series(np.arange(12), index=mi) check(left, right) df.columns = ['1st', '2nd', '1st'] - mi = MultiIndex(levels=[['a', 'b'], ['1st', '2nd']], - labels=[np.tile(np.arange(2).repeat(3), 2), - np.tile([0, 1, 0], 4)]) + mi = MultiIndex(levels=[['a', 'b'], ['1st', '2nd']], labels=[np.tile( + np.arange(2).repeat(3), 2), np.tile( + [0, 1, 0], 4)]) left, right = df.stack(), Series(np.arange(12), index=mi) check(left, right) @@ -990,9 +992,10 @@ def check(left, right): tpls = ('a', 2), ('b', 1), ('a', 1), ('b', 2) df.index = MultiIndex.from_tuples(tpls) mi = MultiIndex(levels=[['a', 'b'], [1, 2], ['1st', '2nd']], - labels=[np.tile(np.arange(2).repeat(3), 2), - np.repeat([1, 0, 1], [3, 6, 3]), - np.tile([0, 1, 0], 4)]) + labels=[np.tile( + np.arange(2).repeat(3), 2), np.repeat( + [1, 0, 1], [3, 6, 3]), np.tile( + [0, 1, 0], 4)]) left, right = df.stack(), Series(np.arange(12), index=mi) check(left, right) @@ -1031,8 +1034,8 @@ def test_stack_mixed_dtype(self): self.assertEqual(stacked['bar'].dtype, np.float_) def test_unstack_bug(self): - df = DataFrame({'state': ['naive', 'naive', 'naive', - 'activ', 'activ', 'activ'], + df = DataFrame({'state': ['naive', 'naive', 'naive', 'activ', 'activ', + 'activ'], 'exp': ['a', 'b', 'b', 'b', 'a', 'a'], 'barcode': [1, 2, 3, 4, 1, 3], 'v': ['hi', 'hi', 'bye', 'bye', 'bye', 'peace'], @@ -1072,8 +1075,7 @@ def test_stack_unstack_multiple(self): unstacked = self.ymd.unstack(['year', 'month']) expected = 
self.ymd.unstack('year').unstack('month') assert_frame_equal(unstacked, expected) - self.assertEqual(unstacked.columns.names, - expected.columns.names) + self.assertEqual(unstacked.columns.names, expected.columns.names) # series s = self.ymd['A'] @@ -1126,7 +1128,8 @@ def test_unstack_period_series(self): result2 = s.unstack(level=1) result3 = s.unstack(level=0) - e_idx = pd.PeriodIndex(['2013-01', '2013-02', '2013-03'], freq='M', name='period') + e_idx = pd.PeriodIndex( + ['2013-01', '2013-02', '2013-03'], freq='M', name='period') expected = DataFrame({'A': [1, 3, 5], 'B': [2, 4, 6]}, index=e_idx, columns=['A', 'B']) expected.columns.name = 'str' @@ -1147,9 +1150,11 @@ def test_unstack_period_series(self): result2 = s.unstack(level=1) result3 = s.unstack(level=0) - e_idx = pd.PeriodIndex(['2013-01', '2013-02', '2013-03'], freq='M', name='period1') + e_idx = pd.PeriodIndex( + ['2013-01', '2013-02', '2013-03'], freq='M', name='period1') e_cols = pd.PeriodIndex(['2013-07', '2013-08', '2013-09', '2013-10', - '2013-11', '2013-12'], freq='M', name='period2') + '2013-11', '2013-12'], + freq='M', name='period2') expected = DataFrame([[np.nan, np.nan, np.nan, np.nan, 2, 1], [np.nan, np.nan, 4, 3, np.nan, np.nan], [6, 5, np.nan, np.nan, np.nan, np.nan]], @@ -1161,9 +1166,11 @@ def test_unstack_period_series(self): def test_unstack_period_frame(self): # GH 4342 - idx1 = pd.PeriodIndex(['2014-01', '2014-02', '2014-02', '2014-02', '2014-01', '2014-01'], + idx1 = pd.PeriodIndex(['2014-01', '2014-02', '2014-02', '2014-02', + '2014-01', '2014-01'], freq='M', name='period1') - idx2 = pd.PeriodIndex(['2013-12', '2013-12', '2014-02', '2013-10', '2013-10', '2014-02'], + idx2 = pd.PeriodIndex(['2013-12', '2013-12', '2014-02', '2013-10', + '2013-10', '2014-02'], freq='M', name='period2') value = {'A': [1, 2, 3, 4, 5, 6], 'B': [6, 5, 4, 3, 2, 1]} idx = pd.MultiIndex.from_arrays([idx1, idx2]) @@ -1185,7 +1192,8 @@ def test_unstack_period_frame(self): e_1 = pd.PeriodIndex(['2014-01', 
'2014-02', '2014-01', '2014-02'], freq='M', name='period1') - e_2 = pd.PeriodIndex(['2013-10', '2013-12', '2014-02'], freq='M', name='period2') + e_2 = pd.PeriodIndex( + ['2013-10', '2013-12', '2014-02'], freq='M', name='period2') e_cols = pd.MultiIndex.from_arrays(['A A B B'.split(), e_1]) expected = DataFrame([[5, 4, 2, 3], [1, 2, 6, 5], [6, 3, 1, 4]], index=e_2, columns=e_cols) @@ -1212,9 +1220,7 @@ def test_stack_multiple_bug(self): def test_stack_dropna(self): # GH #3997 - df = pd.DataFrame({'A': ['a1', 'a2'], - 'B': ['b1', 'b2'], - 'C': [1, 1]}) + df = pd.DataFrame({'A': ['a1', 'a2'], 'B': ['b1', 'b2'], 'C': [1, 1]}) df = df.set_index(['A', 'B']) stacked = df.unstack().stack(dropna=False) @@ -1225,8 +1231,8 @@ def test_stack_dropna(self): def test_unstack_multiple_hierarchical(self): df = DataFrame(index=[[0, 0, 0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 0, 0, 1, 1], - [0, 1, 0, 1, 0, 1, 0, 1]], + [0, 0, 1, 1, 0, 0, 1, 1], [0, 1, 0, 1, 0, 1, 0, 1 + ]], columns=[[0, 0, 1, 1], [0, 1, 0, 1]]) df.index.names = ['a', 'b', 'c'] @@ -1280,7 +1286,8 @@ def test_unstack_unobserved_keys(self): def test_groupby_corner(self): midx = MultiIndex(levels=[['foo'], ['bar'], ['baz']], - labels=[[0], [0], [0]], names=['one', 'two', 'three']) + labels=[[0], [0], [0]], + names=['one', 'two', 'three']) df = DataFrame([np.random.rand(4)], columns=['a', 'b', 'c', 'd'], index=midx) # should work @@ -1288,9 +1295,8 @@ def test_groupby_corner(self): def test_groupby_level_no_obs(self): # #1697 - midx = MultiIndex.from_tuples([('f1', 's1'), ('f1', 's2'), - ('f2', 's1'), ('f2', 's2'), - ('f3', 's1'), ('f3', 's2')]) + midx = MultiIndex.from_tuples([('f1', 's1'), ('f1', 's2'), ( + 'f2', 's1'), ('f2', 's2'), ('f3', 's1'), ('f3', 's2')]) df = DataFrame( [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12]], columns=midx) df1 = df.select(lambda u: u[0] in ['f2', 'f3'], axis=1) @@ -1309,7 +1315,8 @@ def test_join(self): self.assertFalse(np.isnan(joined.values).all()) - assert_frame_equal(joined, expected, 
check_names=False) # TODO what should join do with names ? + assert_frame_equal(joined, expected, check_names=False + ) # TODO what should join do with names ? def test_swaplevel(self): swapped = self.frame['A'].swaplevel(0, 1) @@ -1328,8 +1335,7 @@ def test_swaplevel(self): assert_frame_equal(swapped, exp) def test_swaplevel_panel(self): - panel = Panel({'ItemA': self.frame, - 'ItemB': self.frame * 2}) + panel = Panel({'ItemA': self.frame, 'ItemB': self.frame * 2}) result = panel.swaplevel(0, 1, axis='major') expected = panel.copy() @@ -1362,11 +1368,11 @@ def test_insert_index(self): self.assertTrue((df[2000, 1, 10] == df[2000, 1, 7]).all()) def test_alignment(self): - x = Series(data=[1, 2, 3], - index=MultiIndex.from_tuples([("A", 1), ("A", 2), ("B", 3)])) + x = Series(data=[1, 2, 3], index=MultiIndex.from_tuples([("A", 1), ( + "A", 2), ("B", 3)])) - y = Series(data=[4, 5, 6], - index=MultiIndex.from_tuples([("Z", 1), ("Z", 2), ("B", 3)])) + y = Series(data=[4, 5, 6], index=MultiIndex.from_tuples([("Z", 1), ( + "Z", 2), ("B", 3)])) res = x - y exp_index = x.index.union(y.index) @@ -1383,18 +1389,15 @@ def test_is_lexsorted(self): levels = [[0, 1], [0, 1, 2]] index = MultiIndex(levels=levels, - labels=[[0, 0, 0, 1, 1, 1], - [0, 1, 2, 0, 1, 2]]) + labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]]) self.assertTrue(index.is_lexsorted()) index = MultiIndex(levels=levels, - labels=[[0, 0, 0, 1, 1, 1], - [0, 1, 2, 0, 2, 1]]) + labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 2, 1]]) self.assertFalse(index.is_lexsorted()) index = MultiIndex(levels=levels, - labels=[[0, 0, 1, 0, 1, 1], - [0, 1, 0, 2, 2, 1]]) + labels=[[0, 0, 1, 0, 1, 1], [0, 1, 0, 2, 2, 1]]) self.assertFalse(index.is_lexsorted()) self.assertEqual(index.lexsort_depth, 0) @@ -1414,6 +1417,7 @@ def test_frame_getitem_view(self): def f(): df['foo']['one'] = 2 return df + self.assertRaises(com.SettingWithCopyError, f) try: @@ -1445,7 +1449,7 @@ def test_frame_getitem_not_sorted(self): def 
test_series_getitem_not_sorted(self): arrays = [['bar', 'bar', 'baz', 'baz', 'qux', 'qux', 'foo', 'foo'], - ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']] + ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']] tuples = lzip(*arrays) index = MultiIndex.from_tuples(tuples) s = Series(randn(8), index=index) @@ -1491,8 +1495,7 @@ def test_count(self): 'mad', 'std', 'var', 'sem'] def test_series_group_min_max(self): - for op, level, skipna in cart_product(self.AGG_FUNCTIONS, - lrange(2), + for op, level, skipna in cart_product(self.AGG_FUNCTIONS, lrange(2), [False, True]): grouped = self.series.groupby(level=level) aggf = lambda x: getattr(x, op)(skipna=skipna) @@ -1520,6 +1523,7 @@ def test_frame_group_ops(self): def aggf(x): pieces.append(x) return getattr(x, op)(skipna=skipna, axis=axis) + leftside = grouped.agg(aggf) rightside = getattr(frame, op)(level=level, axis=axis, skipna=skipna) @@ -1555,8 +1559,8 @@ def test_frame_any_all_group(self): assert_frame_equal(result, ex) def test_std_var_pass_ddof(self): - index = MultiIndex.from_arrays([np.arange(5).repeat(10), - np.tile(np.arange(10), 5)]) + index = MultiIndex.from_arrays([np.arange(5).repeat(10), np.tile( + np.arange(10), 5)]) df = DataFrame(np.random.randn(len(index), 5), index=index) for meth in ['var', 'std']: @@ -1588,7 +1592,8 @@ def test_groupby_multilevel(self): expected = self.ymd.groupby([k1, k2]).mean() - assert_frame_equal(result, expected, check_names=False) # TODO groupby with level_values drops names + assert_frame_equal(result, expected, check_names=False + ) # TODO groupby with level_values drops names self.assertEqual(result.index.names, self.ymd.index.names[:2]) result2 = self.ymd.groupby(level=self.ymd.index.names[:2]).mean() @@ -1598,8 +1603,8 @@ def test_groupby_multilevel_with_transform(self): pass def test_multilevel_consolidate(self): - index = MultiIndex.from_tuples([('foo', 'one'), ('foo', 'two'), - ('bar', 'one'), ('bar', 'two')]) + index = 
MultiIndex.from_tuples([('foo', 'one'), ('foo', 'two'), ( + 'bar', 'one'), ('bar', 'two')]) df = DataFrame(np.random.randn(4, 4), index=index, columns=index) df['Totals', ''] = df.sum(1) df = df.consolidate() @@ -1658,8 +1663,7 @@ def test_unstack_group_index_overflow(self): # test roundtrip stacked = result.stack() - assert_series_equal(s, - stacked.reindex(s.index)) + assert_series_equal(s, stacked.reindex(s.index)) # put it at beginning index = MultiIndex(levels=[[0, 1]] + [level] * 8, @@ -1671,8 +1675,8 @@ def test_unstack_group_index_overflow(self): # put it in middle index = MultiIndex(levels=[level] * 4 + [[0, 1]] + [level] * 4, - labels=([labels] * 4 + [np.arange(2).repeat(500)] - + [labels] * 4)) + labels=([labels] * 4 + [np.arange(2).repeat(500)] + + [labels] * 4)) s = Series(np.arange(1000), index=index) result = s.unstack(4) @@ -1682,12 +1686,11 @@ def test_getitem_lowerdim_corner(self): self.assertRaises(KeyError, self.frame.ix.__getitem__, (('bar', 'three'), 'B')) - # in theory should be inserting in a sorted space???? - self.frame.ix[('bar','three'),'B'] = 0 - self.assertEqual(self.frame.sortlevel().ix[('bar','three'),'B'], 0) + self.frame.ix[('bar', 'three'), 'B'] = 0 + self.assertEqual(self.frame.sortlevel().ix[('bar', 'three'), 'B'], 0) - #---------------------------------------------------------------------- + # --------------------------------------------------------------------- # AMBIGUOUS CASES! 
def test_partial_ix_missing(self): @@ -1706,7 +1709,7 @@ def test_partial_ix_missing(self): self.assertRaises(Exception, self.ymd.ix.__getitem__, (2000, 6)) self.assertRaises(Exception, self.ymd.ix.__getitem__, (2000, 6), 0) - #---------------------------------------------------------------------- + # --------------------------------------------------------------------- def test_to_html(self): self.ymd.columns.name = 'foo' @@ -1714,10 +1717,9 @@ def test_to_html(self): self.ymd.T.to_html() def test_level_with_tuples(self): - index = MultiIndex(levels=[[('foo', 'bar', 0), ('foo', 'baz', 0), - ('foo', 'qux', 0)], - [0, 1]], - labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]]) + index = MultiIndex(levels=[[('foo', 'bar', 0), ('foo', 'baz', 0), ( + 'foo', 'qux', 0)], [0, 1]], + labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]]) series = Series(np.random.randn(6), index=index) frame = DataFrame(np.random.randn(6, 4), index=index) @@ -1738,10 +1740,9 @@ def test_level_with_tuples(self): assert_frame_equal(result, expected) assert_frame_equal(result2, expected) - index = MultiIndex(levels=[[('foo', 'bar'), ('foo', 'baz'), - ('foo', 'qux')], - [0, 1]], - labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]]) + index = MultiIndex(levels=[[('foo', 'bar'), ('foo', 'baz'), ( + 'foo', 'qux')], [0, 1]], + labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]]) series = Series(np.random.randn(6), index=index) frame = DataFrame(np.random.randn(6, 4), index=index) @@ -1978,8 +1979,8 @@ def test_unicode_repr_level_names(self): def test_dataframe_insert_column_all_na(self): # GH #1534 - mix = MultiIndex.from_tuples( - [('1a', '2a'), ('1a', '2b'), ('1a', '2c')]) + mix = MultiIndex.from_tuples([('1a', '2a'), ('1a', '2b'), ('1a', '2c') + ]) df = DataFrame([[1, 2], [3, 4], [5, 6]], index=mix) s = Series({(1, 1): 1, (1, 2): 2}) df['new'] = s @@ -2006,21 +2007,23 @@ def test_set_column_scalar_with_ix(self): self.assertTrue((self.frame.ix[subset, 'B'] == 97).all()) def 
test_frame_dict_constructor_empty_series(self): - s1 = Series([1, 2, 3, 4], index=MultiIndex.from_tuples([(1, 2), (1, 3), - (2, 2), (2, 4)])) - s2 = Series([1, 2, 3, 4], - index=MultiIndex.from_tuples([(1, 2), (1, 3), (3, 2), (3, 4)])) + s1 = Series([ + 1, 2, 3, 4 + ], index=MultiIndex.from_tuples([(1, 2), (1, 3), (2, 2), (2, 4)])) + s2 = Series([ + 1, 2, 3, 4 + ], index=MultiIndex.from_tuples([(1, 2), (1, 3), (3, 2), (3, 4)])) s3 = Series() # it works! - df = DataFrame({'foo': s1, 'bar': s2, 'baz': s3}) - df = DataFrame.from_dict({'foo': s1, 'baz': s3, 'bar': s2}) + DataFrame({'foo': s1, 'bar': s2, 'baz': s3}) + DataFrame.from_dict({'foo': s1, 'baz': s3, 'bar': s2}) def test_indexing_ambiguity_bug_1678(self): - columns = MultiIndex.from_tuples([('Ohio', 'Green'), ('Ohio', 'Red'), - ('Colorado', 'Green')]) - index = MultiIndex.from_tuples( - [('a', 1), ('a', 2), ('b', 1), ('b', 2)]) + columns = MultiIndex.from_tuples([('Ohio', 'Green'), ('Ohio', 'Red'), ( + 'Colorado', 'Green')]) + index = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b', 2) + ]) frame = DataFrame(np.arange(12).reshape((4, 3)), index=index, columns=columns) @@ -2090,8 +2093,8 @@ def test_assign_index_sequences(self): def test_tuples_have_na(self): index = MultiIndex(levels=[[1, 0], [0, 1, 2, 3]], - labels=[[1, 1, 1, 1, -1, 0, 0, 0], - [0, 1, 2, 3, 0, 1, 2, 3]]) + labels=[[1, 1, 1, 1, -1, 0, 0, 0], [0, 1, 2, 3, 0, + 1, 2, 3]]) self.assertTrue(isnull(index[4][0])) self.assertTrue(isnull(index.values[4][0])) @@ -2099,9 +2102,9 @@ def test_tuples_have_na(self): def test_duplicate_groupby_issues(self): idx_tp = [('600809', '20061231'), ('600809', '20070331'), ('600809', '20070630'), ('600809', '20070331')] - dt = ['demo','demo','demo','demo'] + dt = ['demo', 'demo', 'demo', 'demo'] - idx = MultiIndex.from_tuples(idx_tp,names = ['STK_ID','RPT_Date']) + idx = MultiIndex.from_tuples(idx_tp, names=['STK_ID', 'RPT_Date']) s = Series(dt, index=idx) result = s.groupby(s.index).first() @@ -2109,39 
+2112,43 @@ def test_duplicate_groupby_issues(self): def test_duplicate_mi(self): # GH 4516 - df = DataFrame([['foo','bar',1.0,1],['foo','bar',2.0,2],['bah','bam',3.0,3], - ['bah','bam',4.0,4],['foo','bar',5.0,5],['bah','bam',6.0,6]], + df = DataFrame([['foo', 'bar', 1.0, 1], ['foo', 'bar', 2.0, 2], + ['bah', 'bam', 3.0, 3], + ['bah', 'bam', 4.0, 4], ['foo', 'bar', 5.0, 5], + ['bah', 'bam', 6.0, 6]], columns=list('ABCD')) - df = df.set_index(['A','B']) + df = df.set_index(['A', 'B']) df = df.sortlevel(0) - expected = DataFrame([['foo','bar',1.0,1],['foo','bar',2.0,2],['foo','bar',5.0,5]], - columns=list('ABCD')).set_index(['A','B']) - result = df.loc[('foo','bar')] - assert_frame_equal(result,expected) + expected = DataFrame([['foo', 'bar', 1.0, 1], ['foo', 'bar', 2.0, 2], + ['foo', 'bar', 5.0, 5]], + columns=list('ABCD')).set_index(['A', 'B']) + result = df.loc[('foo', 'bar')] + assert_frame_equal(result, expected) def test_duplicated_drop_duplicates(self): # GH 4060 - idx = MultiIndex.from_arrays(([1, 2, 3, 1, 2 ,3], [1, 1, 1, 1, 2, 2])) + idx = MultiIndex.from_arrays(([1, 2, 3, 1, 2, 3], [1, 1, 1, 1, 2, 2])) - expected = np.array([False, False, False, True, False, False], dtype=bool) + expected = np.array( + [False, False, False, True, False, False], dtype=bool) duplicated = idx.duplicated() tm.assert_numpy_array_equal(duplicated, expected) self.assertTrue(duplicated.dtype == bool) - expected = MultiIndex.from_arrays(([1, 2, 3, 2 ,3], [1, 1, 1, 2, 2])) + expected = MultiIndex.from_arrays(([1, 2, 3, 2, 3], [1, 1, 1, 2, 2])) tm.assert_index_equal(idx.drop_duplicates(), expected) expected = np.array([True, False, False, False, False, False]) duplicated = idx.duplicated(keep='last') tm.assert_numpy_array_equal(duplicated, expected) self.assertTrue(duplicated.dtype == bool) - expected = MultiIndex.from_arrays(([2, 3, 1, 2 ,3], [1, 1, 1, 2, 2])) + expected = MultiIndex.from_arrays(([2, 3, 1, 2, 3], [1, 1, 1, 2, 2])) 
tm.assert_index_equal(idx.drop_duplicates(keep='last'), expected) expected = np.array([True, False, False, True, False, False]) duplicated = idx.duplicated(keep=False) tm.assert_numpy_array_equal(duplicated, expected) self.assertTrue(duplicated.dtype == bool) - expected = MultiIndex.from_arrays(([2, 3, 2 ,3], [1, 1, 2, 2])) + expected = MultiIndex.from_arrays(([2, 3, 2, 3], [1, 1, 2, 2])) tm.assert_index_equal(idx.drop_duplicates(keep=False), expected) # deprecate take_last @@ -2150,9 +2157,10 @@ def test_duplicated_drop_duplicates(self): duplicated = idx.duplicated(take_last=True) tm.assert_numpy_array_equal(duplicated, expected) self.assertTrue(duplicated.dtype == bool) - expected = MultiIndex.from_arrays(([2, 3, 1, 2 ,3], [1, 1, 1, 2, 2])) + expected = MultiIndex.from_arrays(([2, 3, 1, 2, 3], [1, 1, 1, 2, 2])) with tm.assert_produces_warning(FutureWarning): - tm.assert_index_equal(idx.drop_duplicates(take_last=True), expected) + tm.assert_index_equal( + idx.drop_duplicates(take_last=True), expected) def test_multiindex_set_index(self): # segfault in #3308 @@ -2166,11 +2174,16 @@ def test_multiindex_set_index(self): df.set_index(index) def test_datetimeindex(self): - idx1 = pd.DatetimeIndex(['2013-04-01 9:00', '2013-04-02 9:00', '2013-04-03 9:00'] * 2, tz='Asia/Tokyo') - idx2 = pd.date_range('2010/01/01', periods=6, freq='M', tz='US/Eastern') + idx1 = pd.DatetimeIndex( + ['2013-04-01 9:00', '2013-04-02 9:00', '2013-04-03 9:00' + ] * 2, tz='Asia/Tokyo') + idx2 = pd.date_range('2010/01/01', periods=6, freq='M', + tz='US/Eastern') idx = MultiIndex.from_arrays([idx1, idx2]) - expected1 = pd.DatetimeIndex(['2013-04-01 9:00', '2013-04-02 9:00', '2013-04-03 9:00'], tz='Asia/Tokyo') + expected1 = pd.DatetimeIndex( + ['2013-04-01 9:00', '2013-04-02 9:00', '2013-04-03 9:00' + ], tz='Asia/Tokyo') self.assertTrue(idx.levels[0].equals(expected1)) self.assertTrue(idx.levels[1].equals(idx2)) @@ -2181,10 +2194,11 @@ def test_datetimeindex(self): date2 = datetime.datetime.today() 
date3 = Timestamp.today() - for d1, d2 in itertools.product([date1,date2,date3],[date1,date2,date3]): - index = pd.MultiIndex.from_product([[d1],[d2]]) - self.assertIsInstance(index.levels[0],pd.DatetimeIndex) - self.assertIsInstance(index.levels[1],pd.DatetimeIndex) + for d1, d2 in itertools.product( + [date1, date2, date3], [date1, date2, date3]): + index = pd.MultiIndex.from_product([[d1], [d2]]) + self.assertIsInstance(index.levels[0], pd.DatetimeIndex) + self.assertIsInstance(index.levels[1], pd.DatetimeIndex) def test_constructor_with_tz(self): @@ -2203,15 +2217,18 @@ def test_constructor_with_tz(self): def test_set_index_datetime(self): # GH 3950 - df = pd.DataFrame({'label':['a', 'a', 'a', 'b', 'b', 'b'], - 'datetime':['2011-07-19 07:00:00', '2011-07-19 08:00:00', - '2011-07-19 09:00:00', '2011-07-19 07:00:00', - '2011-07-19 08:00:00', '2011-07-19 09:00:00'], - 'value':range(6)}) + df = pd.DataFrame( + {'label': ['a', 'a', 'a', 'b', 'b', 'b'], + 'datetime': ['2011-07-19 07:00:00', '2011-07-19 08:00:00', + '2011-07-19 09:00:00', '2011-07-19 07:00:00', + '2011-07-19 08:00:00', '2011-07-19 09:00:00'], + 'value': range(6)}) df.index = pd.to_datetime(df.pop('datetime'), utc=True) df.index = df.index.tz_localize('UTC').tz_convert('US/Pacific') - expected = pd.DatetimeIndex(['2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00']) + expected = pd.DatetimeIndex( + ['2011-07-19 07:00:00', '2011-07-19 08:00:00', + '2011-07-19 09:00:00']) expected = expected.tz_localize('UTC').tz_convert('US/Pacific') df = df.set_index('label', append=True) @@ -2222,13 +2239,14 @@ def test_set_index_datetime(self): self.assertTrue(df.index.levels[0].equals(pd.Index(['a', 'b']))) self.assertTrue(df.index.levels[1].equals(expected)) - df = DataFrame(np.random.random(6)) idx1 = pd.DatetimeIndex(['2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00', '2011-07-19 07:00:00', - '2011-07-19 08:00:00', '2011-07-19 09:00:00'], tz='US/Eastern') - idx2 = 
pd.DatetimeIndex(['2012-04-01 09:00', '2012-04-01 09:00', '2012-04-01 09:00', - '2012-04-02 09:00', '2012-04-02 09:00', '2012-04-02 09:00'], + '2011-07-19 08:00:00', '2011-07-19 09:00:00'], + tz='US/Eastern') + idx2 = pd.DatetimeIndex(['2012-04-01 09:00', '2012-04-01 09:00', + '2012-04-01 09:00', '2012-04-02 09:00', + '2012-04-02 09:00', '2012-04-02 09:00'], tz='US/Eastern') idx3 = pd.date_range('2011-01-01 09:00', periods=6, tz='Asia/Tokyo') @@ -2236,9 +2254,11 @@ def test_set_index_datetime(self): df = df.set_index(idx2, append=True) df = df.set_index(idx3, append=True) - expected1 = pd.DatetimeIndex(['2011-07-19 07:00:00', '2011-07-19 08:00:00', - '2011-07-19 09:00:00'], tz='US/Eastern') - expected2 = pd.DatetimeIndex(['2012-04-01 09:00', '2012-04-02 09:00'], tz='US/Eastern') + expected1 = pd.DatetimeIndex(['2011-07-19 07:00:00', + '2011-07-19 08:00:00', + '2011-07-19 09:00:00'], tz='US/Eastern') + expected2 = pd.DatetimeIndex( + ['2012-04-01 09:00', '2012-04-02 09:00'], tz='US/Eastern') self.assertTrue(df.index.levels[0].equals(expected1)) self.assertTrue(df.index.levels[1].equals(expected2)) @@ -2252,69 +2272,90 @@ def test_set_index_datetime(self): def test_reset_index_datetime(self): # GH 3950 for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern']: - idx1 = pd.date_range('1/1/2011', periods=5, freq='D', tz=tz, name='idx1') - idx2 = pd.Index(range(5), name='idx2',dtype='int64') + idx1 = pd.date_range('1/1/2011', periods=5, freq='D', tz=tz, + name='idx1') + idx2 = pd.Index(range(5), name='idx2', dtype='int64') idx = pd.MultiIndex.from_arrays([idx1, idx2]) - df = pd.DataFrame({'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx) + df = pd.DataFrame( + {'a': np.arange(5, dtype='int64'), + 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx) expected = pd.DataFrame({'idx1': [datetime.datetime(2011, 1, 1), datetime.datetime(2011, 1, 2), datetime.datetime(2011, 1, 3), datetime.datetime(2011, 1, 4), datetime.datetime(2011, 1, 5)], - 'idx2': 
np.arange(5,dtype='int64'), - 'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']}, - columns=['idx1', 'idx2', 'a', 'b']) - expected['idx1'] = expected['idx1'].apply(lambda d: pd.Timestamp(d, tz=tz)) + 'idx2': np.arange(5, dtype='int64'), + 'a': np.arange(5, dtype='int64'), + 'b': ['A', 'B', 'C', 'D', 'E']}, + columns=['idx1', 'idx2', 'a', 'b']) + expected['idx1'] = expected['idx1'].apply( + lambda d: pd.Timestamp(d, tz=tz)) assert_frame_equal(df.reset_index(), expected) - idx3 = pd.date_range('1/1/2012', periods=5, freq='MS', tz='Europe/Paris', name='idx3') + idx3 = pd.date_range('1/1/2012', periods=5, freq='MS', + tz='Europe/Paris', name='idx3') idx = pd.MultiIndex.from_arrays([idx1, idx2, idx3]) - df = pd.DataFrame({'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx) + df = pd.DataFrame( + {'a': np.arange(5, dtype='int64'), + 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx) expected = pd.DataFrame({'idx1': [datetime.datetime(2011, 1, 1), datetime.datetime(2011, 1, 2), datetime.datetime(2011, 1, 3), datetime.datetime(2011, 1, 4), datetime.datetime(2011, 1, 5)], - 'idx2': np.arange(5,dtype='int64'), + 'idx2': np.arange(5, dtype='int64'), 'idx3': [datetime.datetime(2012, 1, 1), datetime.datetime(2012, 2, 1), datetime.datetime(2012, 3, 1), datetime.datetime(2012, 4, 1), datetime.datetime(2012, 5, 1)], - 'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']}, - columns=['idx1', 'idx2', 'idx3', 'a', 'b']) - expected['idx1'] = expected['idx1'].apply(lambda d: pd.Timestamp(d, tz=tz)) - expected['idx3'] = expected['idx3'].apply(lambda d: pd.Timestamp(d, tz='Europe/Paris')) + 'a': np.arange(5, dtype='int64'), + 'b': ['A', 'B', 'C', 'D', 'E']}, + columns=['idx1', 'idx2', 'idx3', 'a', 'b']) + expected['idx1'] = expected['idx1'].apply( + lambda d: pd.Timestamp(d, tz=tz)) + expected['idx3'] = expected['idx3'].apply( + lambda d: pd.Timestamp(d, tz='Europe/Paris')) assert_frame_equal(df.reset_index(), expected) # GH 7793 - idx = 
pd.MultiIndex.from_product([['a','b'], pd.date_range('20130101', periods=3, tz=tz)]) - df = pd.DataFrame(np.arange(6,dtype='int64').reshape(6,1), columns=['a'], index=idx) + idx = pd.MultiIndex.from_product([['a', 'b'], pd.date_range( + '20130101', periods=3, tz=tz)]) + df = pd.DataFrame( + np.arange(6, dtype='int64').reshape( + 6, 1), columns=['a'], index=idx) expected = pd.DataFrame({'level_0': 'a a a b b b'.split(), - 'level_1': [datetime.datetime(2013, 1, 1), - datetime.datetime(2013, 1, 2), - datetime.datetime(2013, 1, 3)] * 2, + 'level_1': [ + datetime.datetime(2013, 1, 1), + datetime.datetime(2013, 1, 2), + datetime.datetime(2013, 1, 3)] * 2, 'a': np.arange(6, dtype='int64')}, - columns=['level_0', 'level_1', 'a']) - expected['level_1'] = expected['level_1'].apply(lambda d: pd.Timestamp(d, offset='D', tz=tz)) + columns=['level_0', 'level_1', 'a']) + expected['level_1'] = expected['level_1'].apply( + lambda d: pd.Timestamp(d, offset='D', tz=tz)) assert_frame_equal(df.reset_index(), expected) def test_reset_index_period(self): # GH 7746 - idx = pd.MultiIndex.from_product([pd.period_range('20130101', periods=3, freq='M'), - ['a','b','c']], names=['month', 'feature']) - - df = pd.DataFrame(np.arange(9,dtype='int64').reshape(-1,1), index=idx, columns=['a']) - expected = pd.DataFrame({'month': [pd.Period('2013-01', freq='M')] * 3 + - [pd.Period('2013-02', freq='M')] * 3 + - [pd.Period('2013-03', freq='M')] * 3, - 'feature': ['a', 'b', 'c'] * 3, - 'a': np.arange(9, dtype='int64')}, - columns=['month', 'feature', 'a']) + idx = pd.MultiIndex.from_product([pd.period_range('20130101', + periods=3, freq='M'), + ['a', 'b', 'c']], + names=['month', 'feature']) + + df = pd.DataFrame(np.arange(9, dtype='int64') + .reshape(-1, 1), + index=idx, columns=['a']) + expected = pd.DataFrame({ + 'month': ([pd.Period('2013-01', freq='M')] * 3 + + [pd.Period('2013-02', freq='M')] * 3 + + [pd.Period('2013-03', freq='M')] * 3), + 'feature': ['a', 'b', 'c'] * 3, + 'a': np.arange(9, 
dtype='int64') + }, columns=['month', 'feature', 'a']) assert_frame_equal(df.reset_index(), expected) def test_set_index_period(self): @@ -2344,15 +2385,12 @@ def test_set_index_period(self): def test_repeat(self): # GH 9361 # fixed by # GH 7891 - m_idx = pd.MultiIndex.from_tuples([(1, 2), (3, 4), - (5, 6), (7, 8)]) + m_idx = pd.MultiIndex.from_tuples([(1, 2), (3, 4), (5, 6), (7, 8)]) data = ['a', 'b', 'c', 'd'] m_df = pd.Series(data, index=m_idx) - assert m_df.repeat(3).shape == (3 * len(data),) + assert m_df.repeat(3).shape == (3 * len(data), ) if __name__ == '__main__': - - import nose nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py index b9db95fe06a43..ecd3fa6ed53ee 100644 --- a/pandas/tests/test_nanops.py +++ b/pandas/tests/test_nanops.py @@ -12,6 +12,7 @@ use_bn = nanops._USE_BOTTLENECK + class TestnanopsDataFrame(tm.TestCase): def setUp(self): @@ -22,7 +23,7 @@ def setUp(self): self.arr_float = np.random.randn(*self.arr_shape) self.arr_float1 = np.random.randn(*self.arr_shape) - self.arr_complex = self.arr_float + self.arr_float1*1j + self.arr_complex = self.arr_float + self.arr_float1 * 1j self.arr_int = np.random.randint(-10, 10, self.arr_shape) self.arr_bool = np.random.randint(0, 2, self.arr_shape) == 0 self.arr_str = np.abs(self.arr_float).astype('S') @@ -38,37 +39,31 @@ def setUp(self): self.arr_nan_float1 = np.vstack([self.arr_nan, self.arr_float1]) self.arr_nan_nan = np.vstack([self.arr_nan, self.arr_nan]) - self.arr_inf = self.arr_float*np.inf + self.arr_inf = self.arr_float * np.inf self.arr_float_inf = np.vstack([self.arr_float, self.arr_inf]) self.arr_float1_inf = np.vstack([self.arr_float1, self.arr_inf]) self.arr_inf_float1 = np.vstack([self.arr_inf, self.arr_float1]) self.arr_inf_inf = np.vstack([self.arr_inf, self.arr_inf]) self.arr_nan_inf = np.vstack([self.arr_nan, self.arr_inf]) - self.arr_float_nan_inf = np.vstack([self.arr_float, - 
self.arr_nan, + self.arr_float_nan_inf = np.vstack([self.arr_float, self.arr_nan, self.arr_inf]) - self.arr_nan_float1_inf = np.vstack([self.arr_float, - self.arr_inf, + self.arr_nan_float1_inf = np.vstack([self.arr_float, self.arr_inf, self.arr_nan]) - self.arr_nan_nan_inf = np.vstack([self.arr_nan, - self.arr_nan, + self.arr_nan_nan_inf = np.vstack([self.arr_nan, self.arr_nan, self.arr_inf]) - self.arr_obj = np.vstack([self.arr_float.astype('O'), - self.arr_int.astype('O'), - self.arr_bool.astype('O'), - self.arr_complex.astype('O'), - self.arr_str.astype('O'), - self.arr_utf.astype('O'), - self.arr_date.astype('O'), - self.arr_tdelta.astype('O')]) - - self.arr_nan_nanj = self.arr_nan + self.arr_nan*1j + self.arr_obj = np.vstack([self.arr_float.astype( + 'O'), self.arr_int.astype('O'), self.arr_bool.astype( + 'O'), self.arr_complex.astype('O'), self.arr_str.astype( + 'O'), self.arr_utf.astype('O'), self.arr_date.astype('O'), + self.arr_tdelta.astype('O')]) + + self.arr_nan_nanj = self.arr_nan + self.arr_nan * 1j self.arr_complex_nan = np.vstack([self.arr_complex, self.arr_nan_nanj]) - self.arr_nan_infj = self.arr_inf*1j + self.arr_nan_infj = self.arr_inf * 1j self.arr_complex_nan_infj = np.vstack([self.arr_complex, - self.arr_nan_infj]) + self.arr_nan_infj]) self.arr_float_2d = self.arr_float[:, :, 0] self.arr_float1_2d = self.arr_float1[:, :, 0] @@ -136,7 +131,8 @@ def _coerce_tds(targ, res): return targ, res try: - if axis != 0 and hasattr(targ, 'shape') and targ.ndim and targ.shape != res.shape: + if axis != 0 and hasattr( + targ, 'shape') and targ.ndim and targ.shape != res.shape: res = np.split(res, [targ.shape[0]], axis=0)[0] except: targ, res = _coerce_tds(targ, res) @@ -176,9 +172,9 @@ def _coerce_tds(targ, res): tm.assert_almost_equal(targ.real, res.real) tm.assert_almost_equal(targ.imag, res.imag) - def check_fun_data(self, testfunc, targfunc, - testarval, targarval, targarnanval, **kwargs): - for axis in list(range(targarval.ndim))+[None]: + def 
check_fun_data(self, testfunc, targfunc, testarval, targarval, + targarnanval, **kwargs): + for axis in list(range(targarval.ndim)) + [None]: for skipna in [False, True]: targartempval = targarval if skipna else targarnanval try: @@ -196,9 +192,8 @@ def check_fun_data(self, testfunc, targfunc, res = testfunc(testarval, **kwargs) self.check_results(targ, res, axis) except BaseException as exc: - exc.args += ('axis: %s of %s' % (axis, testarval.ndim-1), - 'skipna: %s' % skipna, - 'kwargs: %s' % kwargs) + exc.args += ('axis: %s of %s' % (axis, testarval.ndim - 1), + 'skipna: %s' % skipna, 'kwargs: %s' % kwargs) raise if testarval.ndim <= 1: @@ -210,13 +205,11 @@ def check_fun_data(self, testfunc, targfunc, targarnanval2 = np.take(targarnanval, 0, axis=-1) except ValueError: return - self.check_fun_data(testfunc, targfunc, - testarval2, targarval2, targarnanval2, - **kwargs) + self.check_fun_data(testfunc, targfunc, testarval2, targarval2, + targarnanval2, **kwargs) - def check_fun(self, testfunc, targfunc, - testar, targar=None, targarnan=None, - **kwargs): + def check_fun(self, testfunc, targfunc, testar, targar=None, + targarnan=None, **kwargs): if targar is None: targar = testar if targarnan is None: @@ -225,25 +218,22 @@ def check_fun(self, testfunc, targfunc, targarval = getattr(self, targar) targarnanval = getattr(self, targarnan) try: - self.check_fun_data(testfunc, targfunc, - testarval, targarval, targarnanval, **kwargs) + self.check_fun_data(testfunc, targfunc, testarval, targarval, + targarnanval, **kwargs) except BaseException as exc: - exc.args += ('testar: %s' % testar, - 'targar: %s' % targar, + exc.args += ('testar: %s' % testar, 'targar: %s' % targar, 'targarnan: %s' % targarnan) raise - def check_funs(self, testfunc, targfunc, - allow_complex=True, allow_all_nan=True, allow_str=True, - allow_date=True, allow_tdelta=True, allow_obj=True, - **kwargs): + def check_funs(self, testfunc, targfunc, allow_complex=True, + allow_all_nan=True, allow_str=True, 
allow_date=True, + allow_tdelta=True, allow_obj=True, **kwargs): self.check_fun(testfunc, targfunc, 'arr_float', **kwargs) self.check_fun(testfunc, targfunc, 'arr_float_nan', 'arr_float', **kwargs) self.check_fun(testfunc, targfunc, 'arr_int', **kwargs) self.check_fun(testfunc, targfunc, 'arr_bool', **kwargs) - objs = [self.arr_float.astype('O'), - self.arr_int.astype('O'), + objs = [self.arr_float.astype('O'), self.arr_int.astype('O'), self.arr_bool.astype('O')] if allow_all_nan: @@ -251,8 +241,8 @@ def check_funs(self, testfunc, targfunc, if allow_complex: self.check_fun(testfunc, targfunc, 'arr_complex', **kwargs) - self.check_fun(testfunc, targfunc, - 'arr_complex_nan', 'arr_complex', **kwargs) + self.check_fun(testfunc, targfunc, 'arr_complex_nan', + 'arr_complex', **kwargs) if allow_all_nan: self.check_fun(testfunc, targfunc, 'arr_nan_nanj', **kwargs) objs += [self.arr_complex.astype('O')] @@ -260,8 +250,7 @@ def check_funs(self, testfunc, targfunc, if allow_str: self.check_fun(testfunc, targfunc, 'arr_str', **kwargs) self.check_fun(testfunc, targfunc, 'arr_utf', **kwargs) - objs += [self.arr_str.astype('O'), - self.arr_utf.astype('O')] + objs += [self.arr_str.astype('O'), self.arr_utf.astype('O')] if allow_date: try: @@ -287,21 +276,26 @@ def check_funs(self, testfunc, targfunc, # counterparts, so the numpy functions need to be given something # else if allow_obj == 'convert': - targfunc = partial(self._badobj_wrap, - func=targfunc, allow_complex=allow_complex) + targfunc = partial(self._badobj_wrap, func=targfunc, + allow_complex=allow_complex) self.check_fun(testfunc, targfunc, 'arr_obj', **kwargs) - def check_funs_ddof(self, testfunc, targfunc, - allow_complex=True, allow_all_nan=True, allow_str=True, - allow_date=False, allow_tdelta=False, allow_obj=True,): + def check_funs_ddof(self, + testfunc, + targfunc, + allow_complex=True, + allow_all_nan=True, + allow_str=True, + allow_date=False, + allow_tdelta=False, + allow_obj=True, ): for ddof in range(3): 
try: - self.check_funs(testfunc, targfunc, - allow_complex, allow_all_nan, allow_str, - allow_date, allow_tdelta, allow_obj, - ddof=ddof) + self.check_funs(testfunc, targfunc, allow_complex, + allow_all_nan, allow_str, allow_date, + allow_tdelta, allow_obj, ddof=ddof) except BaseException as exc: - exc.args += ('ddof %s' % ddof,) + exc.args += ('ddof %s' % ddof, ) raise def _badobj_wrap(self, value, func, allow_complex=True, **kwargs): @@ -313,21 +307,21 @@ def _badobj_wrap(self, value, func, allow_complex=True, **kwargs): return func(value, **kwargs) def test_nanany(self): - self.check_funs(nanops.nanany, np.any, - allow_all_nan=False, allow_str=False, allow_date=False, allow_tdelta=False) + self.check_funs(nanops.nanany, np.any, allow_all_nan=False, + allow_str=False, allow_date=False, allow_tdelta=False) def test_nanall(self): - self.check_funs(nanops.nanall, np.all, - allow_all_nan=False, allow_str=False, allow_date=False, allow_tdelta=False) + self.check_funs(nanops.nanall, np.all, allow_all_nan=False, + allow_str=False, allow_date=False, allow_tdelta=False) def test_nansum(self): - self.check_funs(nanops.nansum, np.sum, - allow_str=False, allow_date=False, allow_tdelta=True) + self.check_funs(nanops.nansum, np.sum, allow_str=False, + allow_date=False, allow_tdelta=True) def test_nanmean(self): - self.check_funs(nanops.nanmean, np.mean, - allow_complex=False, allow_obj=False, - allow_str=False, allow_date=False, allow_tdelta=True) + self.check_funs(nanops.nanmean, np.mean, allow_complex=False, + allow_obj=False, allow_str=False, allow_date=False, + allow_tdelta=True) def test_nanmean_overflow(self): # GH 10155 @@ -348,7 +342,7 @@ def test_nanmean_overflow(self): def test_returned_dtype(self): dtypes = [np.int16, np.int32, np.int64, np.float32, np.float64] - if hasattr(np,'float128'): + if hasattr(np, 'float128'): dtypes.append(np.float128) for dtype in dtypes: @@ -358,44 +352,38 @@ def test_returned_dtype(self): for method in group_a + group_b: result = 
getattr(s, method)() if is_integer_dtype(dtype) and method in group_a: - self.assertTrue(result.dtype == np.float64, - "return dtype expected from %s is np.float64, got %s instead" % (method, result.dtype)) + self.assertTrue( + result.dtype == np.float64, + "return dtype expected from %s is np.float64, " + "got %s instead" % (method, result.dtype)) else: - self.assertTrue(result.dtype == dtype, - "return dtype expected from %s is %s, got %s instead" % (method, dtype, result.dtype)) + self.assertTrue( + result.dtype == dtype, + "return dtype expected from %s is %s, " + "got %s instead" % (method, dtype, result.dtype)) def test_nanmedian(self): with warnings.catch_warnings(record=True): - self.check_funs(nanops.nanmedian, np.median, - allow_complex=False, allow_str=False, allow_date=False, - allow_tdelta=True, - allow_obj='convert') + self.check_funs(nanops.nanmedian, np.median, allow_complex=False, + allow_str=False, allow_date=False, + allow_tdelta=True, allow_obj='convert') def test_nanvar(self): - self.check_funs_ddof(nanops.nanvar, np.var, - allow_complex=False, - allow_str=False, - allow_date=False, - allow_tdelta=True, - allow_obj='convert') + self.check_funs_ddof(nanops.nanvar, np.var, allow_complex=False, + allow_str=False, allow_date=False, + allow_tdelta=True, allow_obj='convert') def test_nanstd(self): - self.check_funs_ddof(nanops.nanstd, np.std, - allow_complex=False, - allow_str=False, - allow_date=False, - allow_tdelta=True, - allow_obj='convert') + self.check_funs_ddof(nanops.nanstd, np.std, allow_complex=False, + allow_str=False, allow_date=False, + allow_tdelta=True, allow_obj='convert') def test_nansem(self): tm.skip_if_no_package('scipy.stats') from scipy.stats import sem - self.check_funs_ddof(nanops.nansem, sem, - allow_complex=False, - allow_str=False, - allow_date=False, - allow_tdelta=True, - allow_obj='convert') + self.check_funs_ddof(nanops.nansem, sem, allow_complex=False, + allow_str=False, allow_date=False, + allow_tdelta=True, 
allow_obj='convert') def _minmax_wrap(self, value, axis=None, func=None): res = func(value, axis) @@ -405,13 +393,11 @@ def _minmax_wrap(self, value, axis=None, func=None): def test_nanmin(self): func = partial(self._minmax_wrap, func=np.min) - self.check_funs(nanops.nanmin, func, - allow_str=False, allow_obj=False) + self.check_funs(nanops.nanmin, func, allow_str=False, allow_obj=False) def test_nanmax(self): func = partial(self._minmax_wrap, func=np.max) - self.check_funs(nanops.nanmax, func, - allow_str=False, allow_obj=False) + self.check_funs(nanops.nanmax, func, allow_str=False, allow_obj=False) def _argminmax_wrap(self, value, axis=None, func=None): res = func(value, axis) @@ -426,21 +412,18 @@ def _argminmax_wrap(self, value, axis=None, func=None): def test_nanargmax(self): func = partial(self._argminmax_wrap, func=np.argmax) - self.check_funs(nanops.nanargmax, func, - allow_str=False, allow_obj=False, - allow_date=True, - allow_tdelta=True) + self.check_funs(nanops.nanargmax, func, allow_str=False, + allow_obj=False, allow_date=True, allow_tdelta=True) def test_nanargmin(self): func = partial(self._argminmax_wrap, func=np.argmin) if tm.sys.version_info[0:2] == (2, 6): - self.check_funs(nanops.nanargmin, func, - allow_date=True, - allow_tdelta=True, - allow_str=False, allow_obj=False) + self.check_funs(nanops.nanargmin, func, allow_date=True, + allow_tdelta=True, allow_str=False, + allow_obj=False) else: - self.check_funs(nanops.nanargmin, func, - allow_str=False, allow_obj=False) + self.check_funs(nanops.nanargmin, func, allow_str=False, + allow_obj=False) def _skew_kurt_wrap(self, values, axis=None, func=None): if not isinstance(values.dtype.type, np.floating): @@ -458,53 +441,45 @@ def test_nanskew(self): tm.skip_if_no_package('scipy.stats') from scipy.stats import skew func = partial(self._skew_kurt_wrap, func=skew) - self.check_funs(nanops.nanskew, func, - allow_complex=False, allow_str=False, allow_date=False, allow_tdelta=False) + 
self.check_funs(nanops.nanskew, func, allow_complex=False, + allow_str=False, allow_date=False, allow_tdelta=False) def test_nankurt(self): tm.skip_if_no_package('scipy.stats') from scipy.stats import kurtosis func1 = partial(kurtosis, fisher=True) func = partial(self._skew_kurt_wrap, func=func1) - self.check_funs(nanops.nankurt, func, - allow_complex=False, allow_str=False, allow_date=False, allow_tdelta=False) + self.check_funs(nanops.nankurt, func, allow_complex=False, + allow_str=False, allow_date=False, allow_tdelta=False) def test_nanprod(self): - self.check_funs(nanops.nanprod, np.prod, - allow_str=False, allow_date=False, allow_tdelta=False) + self.check_funs(nanops.nanprod, np.prod, allow_str=False, + allow_date=False, allow_tdelta=False) def check_nancorr_nancov_2d(self, checkfun, targ0, targ1, **kwargs): - res00 = checkfun(self.arr_float_2d, self.arr_float1_2d, - **kwargs) + res00 = checkfun(self.arr_float_2d, self.arr_float1_2d, **kwargs) res01 = checkfun(self.arr_float_2d, self.arr_float1_2d, - min_periods=len(self.arr_float_2d)-1, - **kwargs) + min_periods=len(self.arr_float_2d) - 1, **kwargs) tm.assert_almost_equal(targ0, res00) tm.assert_almost_equal(targ0, res01) res10 = checkfun(self.arr_float_nan_2d, self.arr_float1_nan_2d, **kwargs) res11 = checkfun(self.arr_float_nan_2d, self.arr_float1_nan_2d, - min_periods=len(self.arr_float_2d)-1, - **kwargs) + min_periods=len(self.arr_float_2d) - 1, **kwargs) tm.assert_almost_equal(targ1, res10) tm.assert_almost_equal(targ1, res11) targ2 = np.nan - res20 = checkfun(self.arr_nan_2d, self.arr_float1_2d, - **kwargs) - res21 = checkfun(self.arr_float_2d, self.arr_nan_2d, - **kwargs) - res22 = checkfun(self.arr_nan_2d, self.arr_nan_2d, - **kwargs) + res20 = checkfun(self.arr_nan_2d, self.arr_float1_2d, **kwargs) + res21 = checkfun(self.arr_float_2d, self.arr_nan_2d, **kwargs) + res22 = checkfun(self.arr_nan_2d, self.arr_nan_2d, **kwargs) res23 = checkfun(self.arr_float_nan_2d, self.arr_nan_float1_2d, **kwargs) 
res24 = checkfun(self.arr_float_nan_2d, self.arr_nan_float1_2d, - min_periods=len(self.arr_float_2d)-1, - **kwargs) + min_periods=len(self.arr_float_2d) - 1, **kwargs) res25 = checkfun(self.arr_float_2d, self.arr_float1_2d, - min_periods=len(self.arr_float_2d)+1, - **kwargs) + min_periods=len(self.arr_float_2d) + 1, **kwargs) tm.assert_almost_equal(targ2, res20) tm.assert_almost_equal(targ2, res21) tm.assert_almost_equal(targ2, res22) @@ -513,42 +488,29 @@ def check_nancorr_nancov_2d(self, checkfun, targ0, targ1, **kwargs): tm.assert_almost_equal(targ2, res25) def check_nancorr_nancov_1d(self, checkfun, targ0, targ1, **kwargs): - res00 = checkfun(self.arr_float_1d, self.arr_float1_1d, - **kwargs) + res00 = checkfun(self.arr_float_1d, self.arr_float1_1d, **kwargs) res01 = checkfun(self.arr_float_1d, self.arr_float1_1d, - min_periods=len(self.arr_float_1d)-1, - **kwargs) + min_periods=len(self.arr_float_1d) - 1, **kwargs) tm.assert_almost_equal(targ0, res00) tm.assert_almost_equal(targ0, res01) - res10 = checkfun(self.arr_float_nan_1d, - self.arr_float1_nan_1d, - **kwargs) - res11 = checkfun(self.arr_float_nan_1d, - self.arr_float1_nan_1d, - min_periods=len(self.arr_float_1d)-1, + res10 = checkfun(self.arr_float_nan_1d, self.arr_float1_nan_1d, **kwargs) + res11 = checkfun(self.arr_float_nan_1d, self.arr_float1_nan_1d, + min_periods=len(self.arr_float_1d) - 1, **kwargs) tm.assert_almost_equal(targ1, res10) tm.assert_almost_equal(targ1, res11) targ2 = np.nan - res20 = checkfun(self.arr_nan_1d, self.arr_float1_1d, - **kwargs) - res21 = checkfun(self.arr_float_1d, self.arr_nan_1d, - **kwargs) - res22 = checkfun(self.arr_nan_1d, self.arr_nan_1d, - **kwargs) - res23 = checkfun(self.arr_float_nan_1d, - self.arr_nan_float1_1d, - **kwargs) - res24 = checkfun(self.arr_float_nan_1d, - self.arr_nan_float1_1d, - min_periods=len(self.arr_float_1d)-1, - **kwargs) - res25 = checkfun(self.arr_float_1d, - self.arr_float1_1d, - min_periods=len(self.arr_float_1d)+1, + res20 = 
checkfun(self.arr_nan_1d, self.arr_float1_1d, **kwargs) + res21 = checkfun(self.arr_float_1d, self.arr_nan_1d, **kwargs) + res22 = checkfun(self.arr_nan_1d, self.arr_nan_1d, **kwargs) + res23 = checkfun(self.arr_float_nan_1d, self.arr_nan_float1_1d, **kwargs) + res24 = checkfun(self.arr_float_nan_1d, self.arr_nan_float1_1d, + min_periods=len(self.arr_float_1d) - 1, **kwargs) + res25 = checkfun(self.arr_float_1d, self.arr_float1_1d, + min_periods=len(self.arr_float_1d) + 1, **kwargs) tm.assert_almost_equal(targ2, res20) tm.assert_almost_equal(targ2, res21) tm.assert_almost_equal(targ2, res22) @@ -636,7 +598,7 @@ def check_nancomp(self, checkfun, targ0): res2 = checkfun(arr_float_nan, arr_nan_float1) tm.assert_almost_equal(targ2, res2) except Exception as exc: - exc.args += ('ndim: %s' % arr_float.ndim,) + exc.args += ('ndim: %s' % arr_float.ndim, ) raise try: @@ -684,7 +646,7 @@ def check_bool(self, func, value, correct, *args, **kwargs): else: self.assertFalse(res0) except BaseException as exc: - exc.args += ('dim: %s' % getattr(value, 'ndim', value),) + exc.args += ('dim: %s' % getattr(value, 'ndim', value), ) raise if not hasattr(value, 'ndim'): break @@ -694,26 +656,15 @@ def check_bool(self, func, value, correct, *args, **kwargs): break def test__has_infs(self): - pairs = [('arr_complex', False), - ('arr_int', False), - ('arr_bool', False), - ('arr_str', False), - ('arr_utf', False), - ('arr_complex', False), - ('arr_complex_nan', False), - - ('arr_nan_nanj', False), - ('arr_nan_infj', True), + pairs = [('arr_complex', False), ('arr_int', False), + ('arr_bool', False), ('arr_str', False), ('arr_utf', False), + ('arr_complex', False), ('arr_complex_nan', False), + ('arr_nan_nanj', False), ('arr_nan_infj', True), ('arr_complex_nan_infj', True)] - pairs_float = [('arr_float', False), - ('arr_nan', False), - ('arr_float_nan', False), - ('arr_nan_nan', False), - - ('arr_float_inf', True), - ('arr_inf', True), - ('arr_nan_inf', True), - ('arr_float_nan_inf', True), + 
pairs_float = [('arr_float', False), ('arr_nan', False), + ('arr_float_nan', False), ('arr_nan_nan', False), + ('arr_float_inf', True), ('arr_inf', True), + ('arr_nan_inf', True), ('arr_float_nan_inf', True), ('arr_nan_nan_inf', True)] for arr, correct in pairs: @@ -721,7 +672,7 @@ def test__has_infs(self): try: self.check_bool(nanops._has_infs, val, correct) except BaseException as exc: - exc.args += (arr,) + exc.args += (arr, ) raise for arr, correct in pairs_float: @@ -731,40 +682,32 @@ def test__has_infs(self): self.check_bool(nanops._has_infs, val.astype('f4'), correct) self.check_bool(nanops._has_infs, val.astype('f2'), correct) except BaseException as exc: - exc.args += (arr,) + exc.args += (arr, ) raise def test__isfinite(self): - pairs = [('arr_complex', False), - ('arr_int', False), - ('arr_bool', False), - ('arr_str', False), - ('arr_utf', False), - ('arr_complex', False), - ('arr_complex_nan', True), - - ('arr_nan_nanj', True), - ('arr_nan_infj', True), + pairs = [('arr_complex', False), ('arr_int', False), + ('arr_bool', False), ('arr_str', False), ('arr_utf', False), + ('arr_complex', False), ('arr_complex_nan', True), + ('arr_nan_nanj', True), ('arr_nan_infj', True), ('arr_complex_nan_infj', True)] - pairs_float = [('arr_float', False), - ('arr_nan', True), - ('arr_float_nan', True), - ('arr_nan_nan', True), - - ('arr_float_inf', True), - ('arr_inf', True), - ('arr_nan_inf', True), - ('arr_float_nan_inf', True), + pairs_float = [('arr_float', False), ('arr_nan', True), + ('arr_float_nan', True), ('arr_nan_nan', True), + ('arr_float_inf', True), ('arr_inf', True), + ('arr_nan_inf', True), ('arr_float_nan_inf', True), ('arr_nan_nan_inf', True)] func1 = lambda x: np.any(nanops._isfinite(x).ravel()) - func2 = lambda x: np.any(nanops._isfinite(x).values.ravel()) + + # TODO: unused? 
+ # func2 = lambda x: np.any(nanops._isfinite(x).values.ravel()) + for arr, correct in pairs: val = getattr(self, arr) try: self.check_bool(func1, val, correct) except BaseException as exc: - exc.args += (arr,) + exc.args += (arr, ) raise for arr, correct in pairs_float: @@ -774,7 +717,7 @@ def test__isfinite(self): self.check_bool(func1, val.astype('f4'), correct) self.check_bool(func1, val.astype('f2'), correct) except BaseException as exc: - exc.args += (arr,) + exc.args += (arr, ) raise def test__bn_ok_dtype(self): @@ -790,6 +733,7 @@ def test__bn_ok_dtype(self): class TestEnsureNumeric(tm.TestCase): + def test_numeric_values(self): # Test integer self.assertEqual(nanops._ensure_numeric(1), 1, 'Failed for int') @@ -817,8 +761,7 @@ def test_ndarray(self): # Test non-convertible string ndarray s_values = np.array(['foo', 'bar', 'baz'], dtype=object) - self.assertRaises(ValueError, - lambda: nanops._ensure_numeric(s_values)) + self.assertRaises(ValueError, lambda: nanops._ensure_numeric(s_values)) def test_convertable_values(self): self.assertTrue(np.allclose(nanops._ensure_numeric('1'), 1.0), @@ -829,12 +772,9 @@ def test_convertable_values(self): 'Failed for convertible complex string') def test_non_convertable_values(self): - self.assertRaises(TypeError, - lambda: nanops._ensure_numeric('foo')) - self.assertRaises(TypeError, - lambda: nanops._ensure_numeric({})) - self.assertRaises(TypeError, - lambda: nanops._ensure_numeric([])) + self.assertRaises(TypeError, lambda: nanops._ensure_numeric('foo')) + self.assertRaises(TypeError, lambda: nanops._ensure_numeric({})) + self.assertRaises(TypeError, lambda: nanops._ensure_numeric([])) class TestNanvarFixedValues(tm.TestCase): @@ -849,32 +789,30 @@ def setUp(self): def test_nanvar_all_finite(self): samples = self.samples actual_variance = nanops.nanvar(samples) - np.testing.assert_almost_equal( - actual_variance, self.variance, decimal=2) + np.testing.assert_almost_equal(actual_variance, self.variance, + decimal=2) 
def test_nanvar_nans(self): samples = np.nan * np.ones(2 * self.samples.shape[0]) samples[::2] = self.samples actual_variance = nanops.nanvar(samples, skipna=True) - np.testing.assert_almost_equal( - actual_variance, self.variance, decimal=2) + np.testing.assert_almost_equal(actual_variance, self.variance, + decimal=2) actual_variance = nanops.nanvar(samples, skipna=False) - np.testing.assert_almost_equal( - actual_variance, np.nan, decimal=2) + np.testing.assert_almost_equal(actual_variance, np.nan, decimal=2) def test_nanstd_nans(self): samples = np.nan * np.ones(2 * self.samples.shape[0]) samples[::2] = self.samples actual_std = nanops.nanstd(samples, skipna=True) - np.testing.assert_almost_equal( - actual_std, self.variance ** 0.5, decimal=2) + np.testing.assert_almost_equal(actual_std, self.variance ** 0.5, + decimal=2) actual_std = nanops.nanvar(samples, skipna=False) - np.testing.assert_almost_equal( - actual_std, np.nan, decimal=2) + np.testing.assert_almost_equal(actual_std, np.nan, decimal=2) def test_nanvar_axis(self): # Generate some sample data. @@ -883,12 +821,12 @@ def test_nanvar_axis(self): samples = np.vstack([samples_norm, samples_unif]) actual_variance = nanops.nanvar(samples, axis=1) - np.testing.assert_array_almost_equal( - actual_variance, np.array([self.variance, 1.0 / 12]), decimal=2) + np.testing.assert_array_almost_equal(actual_variance, np.array( + [self.variance, 1.0 / 12]), decimal=2) def test_nanvar_ddof(self): n = 5 - samples = self.prng.uniform(size=(10000, n+1)) + samples = self.prng.uniform(size=(10000, n + 1)) samples[:, -1] = np.nan # Force use of our own algorithm. variance_0 = nanops.nanvar(samples, axis=1, skipna=True, ddof=0).mean() @@ -899,37 +837,34 @@ def test_nanvar_ddof(self): var = 1.0 / 12 np.testing.assert_almost_equal(variance_1, var, decimal=2) # The underestimated variance. 
- np.testing.assert_almost_equal( - variance_0, (n - 1.0) / n * var, decimal=2) + np.testing.assert_almost_equal(variance_0, (n - 1.0) / n * var, + decimal=2) # The overestimated variance. - np.testing.assert_almost_equal( - variance_2, (n - 1.0) / (n - 2.0) * var, decimal=2) + np.testing.assert_almost_equal(variance_2, (n - 1.0) / (n - 2.0) * var, + decimal=2) def test_ground_truth(self): # Test against values that were precomputed with Numpy. samples = np.empty((4, 4)) - samples[:3, :3] = np.array([[0.97303362, 0.21869576, 0.55560287], - [0.72980153, 0.03109364, 0.99155171], + samples[:3, :3] = np.array([[0.97303362, 0.21869576, 0.55560287 + ], [0.72980153, 0.03109364, 0.99155171], [0.09317602, 0.60078248, 0.15871292]]) samples[3] = samples[:, 3] = np.nan # Actual variances along axis=0, 1 for ddof=0, 1, 2 - variance = np.array( - [[[0.13762259, 0.05619224, 0.11568816], - [0.20643388, 0.08428837, 0.17353224], - [0.41286776, 0.16857673, 0.34706449]], - [[0.09519783, 0.16435395, 0.05082054], - [0.14279674, 0.24653093, 0.07623082], - [0.28559348, 0.49306186, 0.15246163]]] - ) + variance = np.array([[[0.13762259, 0.05619224, 0.11568816 + ], [0.20643388, 0.08428837, 0.17353224], + [0.41286776, 0.16857673, 0.34706449]], + [[0.09519783, 0.16435395, 0.05082054 + ], [0.14279674, 0.24653093, 0.07623082], + [0.28559348, 0.49306186, 0.15246163]]]) # Test nanvar. for axis in range(2): for ddof in range(3): var = nanops.nanvar(samples, skipna=True, axis=axis, ddof=ddof) - np.testing.assert_array_almost_equal( - var[:3], variance[axis, ddof] - ) + np.testing.assert_array_almost_equal(var[:3], + variance[axis, ddof]) np.testing.assert_equal(var[3], np.nan) # Test nanstd. 
@@ -937,8 +872,7 @@ def test_ground_truth(self): for ddof in range(3): std = nanops.nanstd(samples, skipna=True, axis=axis, ddof=ddof) np.testing.assert_array_almost_equal( - std[:3], variance[axis, ddof] ** 0.5 - ) + std[:3], variance[axis, ddof] ** 0.5) np.testing.assert_equal(std[3], np.nan) def test_nanstd_roundoff(self): @@ -956,5 +890,5 @@ def prng(self): if __name__ == '__main__': import nose - nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure', - '-s'], exit=False) + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure', '-s' + ], exit=False) diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py index f12d851a6772d..a1f2b3edf892f 100644 --- a/pandas/tests/test_panel.py +++ b/pandas/tests/test_panel.py @@ -20,36 +20,35 @@ from pandas.compat import range, lrange, StringIO, OrderedDict from pandas import SparsePanel -from pandas.util.testing import (assert_panel_equal, - assert_frame_equal, - assert_series_equal, - assert_almost_equal, - assert_produces_warning, - ensure_clean, - assertRaisesRegexp, - makeCustomDataframe as mkdf, - makeMixedDataFrame - ) +from pandas.util.testing import (assert_panel_equal, assert_frame_equal, + assert_series_equal, assert_almost_equal, + assert_produces_warning, ensure_clean, + assertRaisesRegexp, makeCustomDataframe as + mkdf, makeMixedDataFrame) import pandas.core.panel as panelm import pandas.util.testing as tm + def ignore_sparse_panel_future_warning(func): """ decorator to ignore FutureWarning if we have a SparsePanel can be removed when SparsePanel is fully removed """ + @wraps(func) def wrapper(self, *args, **kwargs): if isinstance(self.panel, SparsePanel): - with assert_produces_warning(FutureWarning, check_stacklevel=False): + with assert_produces_warning(FutureWarning, + check_stacklevel=False): return func(self, *args, **kwargs) else: return func(self, *args, **kwargs) return wrapper + class PanelTests(object): panel = None @@ -72,7 +71,7 @@ class 
SafeForLongAndSparse(object): _multiprocess_can_split_ = True def test_repr(self): - foo = repr(self.panel) + repr(self.panel) @ignore_sparse_panel_future_warning def test_copy_names(self): @@ -122,6 +121,7 @@ def this_skew(x): if len(x) < 3: return np.nan return skew(x, bias=False) + self._check_stat_op('skew', this_skew) # def test_mad(self): @@ -133,6 +133,7 @@ def alt(x): if len(x) < 2: return np.nan return np.var(x, ddof=1) + self._check_stat_op('var', alt) def test_std(self): @@ -140,13 +141,15 @@ def alt(x): if len(x) < 2: return np.nan return np.std(x, ddof=1) + self._check_stat_op('std', alt) def test_sem(self): def alt(x): if len(x) < 2: return np.nan - return np.std(x, ddof=1)/np.sqrt(len(x)) + return np.std(x, ddof=1) / np.sqrt(len(x)) + self._check_stat_op('sem', alt) # def test_skew(self): @@ -170,6 +173,7 @@ def _check_stat_op(self, name, alternative, obj=None, has_skipna=True): f = getattr(obj, name) if has_skipna: + def skipna_wrapper(x): nona = remove_na(x) if len(nona) == 0: @@ -207,9 +211,9 @@ def assert_panel_equal(cls, x, y): assert_panel_equal(x, y) def test_get_axis(self): - assert(self.panel._get_axis(0) is self.panel.items) - assert(self.panel._get_axis(1) is self.panel.major_axis) - assert(self.panel._get_axis(2) is self.panel.minor_axis) + assert (self.panel._get_axis(0) is self.panel.items) + assert (self.panel._get_axis(1) is self.panel.major_axis) + assert (self.panel._get_axis(2) is self.panel.minor_axis) def test_set_axis(self): new_items = Index(np.arange(len(self.panel.items))) @@ -224,12 +228,16 @@ def test_set_axis(self): self.assertNotIn('ItemA', self.panel._item_cache) self.assertIs(self.panel.items, new_items) - item = self.panel[0] + # TODO: unused? + item = self.panel[0] # noqa + self.panel.major_axis = new_major self.assertIs(self.panel[0].index, new_major) self.assertIs(self.panel.major_axis, new_major) - item = self.panel[0] + # TODO: unused? 
+ item = self.panel[0] # noqa + self.panel.minor_axis = new_minor self.assertIs(self.panel[0].columns, new_minor) self.assertIs(self.panel.minor_axis, new_minor) @@ -366,7 +374,7 @@ def check_op(op, name): try: check_op(operator.truediv, 'div') except: - com.pprint_thing("Failing operation: %r" % name) + com.pprint_thing("Failing operation: %r" % 'div') raise @ignore_sparse_panel_future_warning @@ -380,13 +388,15 @@ def test_neg(self): # issue 7692 def test_raise_when_not_implemented(self): - p = Panel(np.arange(3*4*5).reshape(3,4,5), items=['ItemA','ItemB','ItemC'], - major_axis=pd.date_range('20130101',periods=4),minor_axis=list('ABCDE')) + p = Panel(np.arange(3 * 4 * 5).reshape(3, 4, 5), + items=['ItemA', 'ItemB', 'ItemC'], + major_axis=pd.date_range('20130101', periods=4), + minor_axis=list('ABCDE')) d = p.sum(axis=1).ix[0] ops = ['add', 'sub', 'mul', 'truediv', 'floordiv', 'div', 'mod', 'pow'] for op in ops: with self.assertRaises(NotImplementedError): - getattr(p,op)(d, axis=0) + getattr(p, op)(d, axis=0) @ignore_sparse_panel_future_warning def test_select(self): @@ -409,7 +419,7 @@ def test_select(self): self.assert_panel_equal(result, expected) # corner case, empty thing - result = p.select(lambda x: x in ('foo',), axis='items') + result = p.select(lambda x: x in ('foo', ), axis='items') self.assert_panel_equal(result, p.reindex(items=[])) def test_get_value(self): @@ -500,8 +510,7 @@ def test_setitem(self): df2 = self.panel['ItemF'] - assert_frame_equal(df, df2.reindex(index=df.index, - columns=df.columns)) + assert_frame_equal(df, df2.reindex(index=df.index, columns=df.columns)) # scalar self.panel['ItemG'] = 1 @@ -547,15 +556,17 @@ def test_set_minor_major(self): # GH 11014 df1 = DataFrame(['a', 'a', 'a', np.nan, 'a', np.nan]) df2 = DataFrame([1.0, np.nan, 1.0, np.nan, 1.0, 1.0]) - panel = Panel({'Item1' : df1, 'Item2': df2}) + panel = Panel({'Item1': df1, 'Item2': df2}) newminor = notnull(panel.iloc[:, :, 0]) panel.loc[:, :, 'NewMinor'] = newminor - 
assert_frame_equal(panel.loc[:, :, 'NewMinor'], newminor.astype(object)) + assert_frame_equal(panel.loc[:, :, 'NewMinor'], + newminor.astype(object)) newmajor = notnull(panel.iloc[:, 0, :]) panel.loc[:, 'NewMajor', :] = newmajor - assert_frame_equal(panel.loc[:, 'NewMajor', :], newmajor.astype(object)) + assert_frame_equal(panel.loc[:, 'NewMajor', :], + newmajor.astype(object)) def test_major_xs(self): ref = self.panel['ItemA'] @@ -632,14 +643,11 @@ def test_getitem_fancy_labels(self): p.reindex(items=items, major=dates)) # only 1 - assert_panel_equal(p.ix[items, :, :], - p.reindex(items=items)) + assert_panel_equal(p.ix[items, :, :], p.reindex(items=items)) - assert_panel_equal(p.ix[:, dates, :], - p.reindex(major=dates)) + assert_panel_equal(p.ix[:, dates, :], p.reindex(major=dates)) - assert_panel_equal(p.ix[:, :, cols], - p.reindex(minor=cols)) + assert_panel_equal(p.ix[:, :, cols], p.reindex(minor=cols)) def test_getitem_fancy_slice(self): pass @@ -681,7 +689,6 @@ def test_getitem_fancy_xs(self): def test_getitem_fancy_xs_check_view(self): item = 'ItemB' date = self.panel.major_axis[5] - col = 'C' # make sure it's always a view NS = slice(None, None) @@ -731,10 +738,9 @@ def test_ix_align(self): assert_series_equal(df.ix[0, 0, :].reindex(b.index), b) def test_ix_frame_align(self): - from pandas import DataFrame p_orig = tm.makePanel() df = p_orig.ix[0].copy() - assert_frame_equal(p_orig['ItemA'],df) + assert_frame_equal(p_orig['ItemA'], df) p = p_orig.copy() p.ix[0, :, :] = df @@ -767,12 +773,13 @@ def test_ix_frame_align(self): p = p_orig.copy() p.ix[0, [0, 1, 3, 5], -2:] = df out = p.ix[0, [0, 1, 3, 5], -2:] - assert_frame_equal(out, df.iloc[[0,1,3,5],[2,3]]) + assert_frame_equal(out, df.iloc[[0, 1, 3, 5], [2, 3]]) # GH3830, panel assignent by values/frame - for dtype in ['float64','int64']: + for dtype in ['float64', 'int64']: - panel = Panel(np.arange(40).reshape((2,4,5)), items=['a1','a2'], dtype=dtype) + panel = Panel(np.arange(40).reshape((2, 4, 5)), + 
items=['a1', 'a2'], dtype=dtype) df1 = panel.iloc[0] df2 = panel.iloc[1] @@ -802,8 +809,8 @@ def _check_view(self, indexer, comp): comp(cp.ix[indexer].reindex_like(obj), obj) def test_logical_with_nas(self): - d = Panel({'ItemA': {'a': [np.nan, False]}, 'ItemB': { - 'a': [True, True]}}) + d = Panel({'ItemA': {'a': [np.nan, False]}, + 'ItemB': {'a': [True, True]}}) result = d['ItemA'] | d['ItemB'] expected = DataFrame({'a': [np.nan, True]}) @@ -884,12 +891,12 @@ def test_set_value(self): " plus the value provided"): self.panel.set_value('a') + _panel = tm.makePanel() tm.add_nans(_panel) -class TestPanel(tm.TestCase, PanelTests, CheckIndexing, - SafeForLongAndSparse, +class TestPanel(tm.TestCase, PanelTests, CheckIndexing, SafeForLongAndSparse, SafeForSparse): _multiprocess_can_split_ = True @@ -927,8 +934,7 @@ def test_constructor(self): assert_panel_equal(wp, self.panel) # strings handled prop - wp = Panel([[['foo', 'foo', 'foo', ], - ['foo', 'foo', 'foo']]]) + wp = Panel([[['foo', 'foo', 'foo', ], ['foo', 'foo', 'foo']]]) self.assertEqual(wp.values.dtype, np.object_) vals = self.panel.values @@ -943,15 +949,18 @@ def test_constructor(self): # GH #8285, test when scalar data is used to construct a Panel # if dtype is not passed, it should be inferred - value_and_dtype = [(1, 'int64'), (3.14, 'float64'), ('foo', np.object_)] + value_and_dtype = [(1, 'int64'), (3.14, 'float64'), + ('foo', np.object_)] for (val, dtype) in value_and_dtype: - wp = Panel(val, items=range(2), major_axis=range(3), minor_axis=range(4)) + wp = Panel(val, items=range(2), major_axis=range(3), + minor_axis=range(4)) vals = np.empty((2, 3, 4), dtype=dtype) vals.fill(val) assert_panel_equal(wp, Panel(vals, dtype=dtype)) # test the case when dtype is passed - wp = Panel(1, items=range(2), major_axis=range(3), minor_axis=range(4), dtype='float32') + wp = Panel(1, items=range(2), major_axis=range(3), minor_axis=range(4), + dtype='float32') vals = np.empty((2, 3, 4), dtype='float32') vals.fill(1) 
assert_panel_equal(wp, Panel(vals, dtype='float32')) @@ -997,25 +1006,35 @@ def _check_dtype(panel, dtype): self.assertEqual(panel[i].values.dtype.name, dtype) # only nan holding types allowed here - for dtype in ['float64','float32','object']: - panel = Panel(items=lrange(2),major_axis=lrange(10),minor_axis=lrange(5),dtype=dtype) - _check_dtype(panel,dtype) + for dtype in ['float64', 'float32', 'object']: + panel = Panel(items=lrange(2), major_axis=lrange(10), + minor_axis=lrange(5), dtype=dtype) + _check_dtype(panel, dtype) - for dtype in ['float64','float32','int64','int32','object']: - panel = Panel(np.array(np.random.randn(2,10,5),dtype=dtype),items=lrange(2),major_axis=lrange(10),minor_axis=lrange(5),dtype=dtype) - _check_dtype(panel,dtype) + for dtype in ['float64', 'float32', 'int64', 'int32', 'object']: + panel = Panel(np.array(np.random.randn(2, 10, 5), dtype=dtype), + items=lrange(2), + major_axis=lrange(10), + minor_axis=lrange(5), dtype=dtype) + _check_dtype(panel, dtype) - for dtype in ['float64','float32','int64','int32','object']: - panel = Panel(np.array(np.random.randn(2,10,5),dtype='O'),items=lrange(2),major_axis=lrange(10),minor_axis=lrange(5),dtype=dtype) - _check_dtype(panel,dtype) + for dtype in ['float64', 'float32', 'int64', 'int32', 'object']: + panel = Panel(np.array(np.random.randn(2, 10, 5), dtype='O'), + items=lrange(2), + major_axis=lrange(10), + minor_axis=lrange(5), dtype=dtype) + _check_dtype(panel, dtype) - for dtype in ['float64','float32','int64','int32','object']: - panel = Panel(np.random.randn(2,10,5),items=lrange(2),major_axis=lrange(10),minor_axis=lrange(5),dtype=dtype) - _check_dtype(panel,dtype) + for dtype in ['float64', 'float32', 'int64', 'int32', 'object']: + panel = Panel(np.random.randn(2, 10, 5), items=lrange( + 2), major_axis=lrange(10), minor_axis=lrange(5), dtype=dtype) + _check_dtype(panel, dtype) for dtype in ['float64', 'float32', 'int64', 'int32', 'object']: - df1 = DataFrame(np.random.randn(2, 5), 
index=lrange(2), columns=lrange(5)) - df2 = DataFrame(np.random.randn(2, 5), index=lrange(2), columns=lrange(5)) + df1 = DataFrame(np.random.randn(2, 5), + index=lrange(2), columns=lrange(5)) + df2 = DataFrame(np.random.randn(2, 5), + index=lrange(2), columns=lrange(5)) panel = Panel.from_dict({'a': df1, 'b': df2}, dtype=dtype) _check_dtype(panel, dtype) @@ -1045,7 +1064,10 @@ def test_ctor_dict(self): wp = Panel.from_dict(d) wp2 = Panel.from_dict(d2) # nested Dict - wp3 = Panel.from_dict(d3) + + # TODO: unused? + wp3 = Panel.from_dict(d3) # noqa + self.assertTrue(wp.major_axis.equals(self.panel.major_axis)) assert_panel_equal(wp, wp2) @@ -1060,7 +1082,10 @@ def test_ctor_dict(self): # a pathological case d4 = {'A': None, 'B': None} - wp4 = Panel.from_dict(d4) + + # TODO: unused? + wp4 = Panel.from_dict(d4) # noqa + assert_panel_equal(Panel(d4), Panel(items=['A', 'B'])) # cast @@ -1099,8 +1124,9 @@ def test_constructor_dict_mixed(self): self.assertRaises(Exception, Panel, data) def test_ctor_orderedDict(self): - keys = list(set(np.random.randint(0,5000,100)))[:50] # unique random int keys - d = OrderedDict([(k,mkdf(10,5)) for k in keys]) + keys = list(set(np.random.randint(0, 5000, 100)))[ + :50] # unique random int keys + d = OrderedDict([(k, mkdf(10, 5)) for k in keys]) p = Panel(d) self.assertTrue(list(p.items) == keys) @@ -1113,8 +1139,7 @@ def test_constructor_resize(self): major = self.panel.major_axis[:-1] minor = self.panel.minor_axis[:-1] - result = Panel(data, items=items, major_axis=major, - minor_axis=minor) + result = Panel(data, items=items, major_axis=major, minor_axis=minor) expected = self.panel.reindex(items=items, major=major, minor=minor) assert_panel_equal(result, expected) @@ -1134,8 +1159,7 @@ def test_from_dict_mixed_orient(self): df = tm.makeDataFrame() df['foo'] = 'bar' - data = {'k1': df, - 'k2': df} + data = {'k1': df, 'k2': df} panel = Panel.from_dict(data, orient='minor') @@ -1143,147 +1167,177 @@ def test_from_dict_mixed_orient(self): 
self.assertEqual(panel['A'].values.dtype, np.float64) def test_constructor_error_msgs(self): - def testit(): - Panel(np.random.randn(3,4,5), lrange(4), lrange(5), lrange(5)) - assertRaisesRegexp(ValueError, "Shape of passed values is \(3, 4, 5\), indices imply \(4, 5, 5\)", testit) + Panel(np.random.randn(3, 4, 5), lrange(4), lrange(5), lrange(5)) + + assertRaisesRegexp(ValueError, + "Shape of passed values is \(3, 4, 5\), " + "indices imply \(4, 5, 5\)", + testit) def testit(): - Panel(np.random.randn(3,4,5), lrange(5), lrange(4), lrange(5)) - assertRaisesRegexp(ValueError, "Shape of passed values is \(3, 4, 5\), indices imply \(5, 4, 5\)", testit) + Panel(np.random.randn(3, 4, 5), lrange(5), lrange(4), lrange(5)) + + assertRaisesRegexp(ValueError, + "Shape of passed values is \(3, 4, 5\), " + "indices imply \(5, 4, 5\)", + testit) def testit(): - Panel(np.random.randn(3,4,5), lrange(5), lrange(5), lrange(4)) - assertRaisesRegexp(ValueError, "Shape of passed values is \(3, 4, 5\), indices imply \(5, 5, 4\)", testit) + Panel(np.random.randn(3, 4, 5), lrange(5), lrange(5), lrange(4)) + + assertRaisesRegexp(ValueError, + "Shape of passed values is \(3, 4, 5\), " + "indices imply \(5, 5, 4\)", + testit) def test_conform(self): df = self.panel['ItemA'][:-5].filter(items=['A', 'B']) conformed = self.panel.conform(df) - assert(conformed.index.equals(self.panel.major_axis)) - assert(conformed.columns.equals(self.panel.minor_axis)) + assert (conformed.index.equals(self.panel.major_axis)) + assert (conformed.columns.equals(self.panel.minor_axis)) def test_convert_objects(self): # GH 4937 - p = Panel(dict(A = dict(a = ['1','1.0']))) - expected = Panel(dict(A = dict(a = [1,1.0]))) + p = Panel(dict(A=dict(a=['1', '1.0']))) + expected = Panel(dict(A=dict(a=[1, 1.0]))) result = p._convert(numeric=True, coerce=True) assert_panel_equal(result, expected) def test_dtypes(self): result = self.panel.dtypes - expected = Series(np.dtype('float64'),index=self.panel.items) + expected = 
Series(np.dtype('float64'), index=self.panel.items) assert_series_equal(result, expected) def test_apply(self): # GH1148 - from pandas import Series,DataFrame - # ufunc applied = self.panel.apply(np.sqrt) - self.assertTrue(assert_almost_equal(applied.values, - np.sqrt(self.panel.values))) + self.assertTrue(assert_almost_equal(applied.values, np.sqrt( + self.panel.values))) # ufunc same shape - result = self.panel.apply(lambda x: x*2, axis='items') - expected = self.panel*2 + result = self.panel.apply(lambda x: x * 2, axis='items') + expected = self.panel * 2 assert_panel_equal(result, expected) - result = self.panel.apply(lambda x: x*2, axis='major_axis') - expected = self.panel*2 + result = self.panel.apply(lambda x: x * 2, axis='major_axis') + expected = self.panel * 2 assert_panel_equal(result, expected) - result = self.panel.apply(lambda x: x*2, axis='minor_axis') - expected = self.panel*2 + result = self.panel.apply(lambda x: x * 2, axis='minor_axis') + expected = self.panel * 2 assert_panel_equal(result, expected) # reduction to DataFrame result = self.panel.apply(lambda x: x.dtype, axis='items') - expected = DataFrame(np.dtype('float64'),index=self.panel.major_axis,columns=self.panel.minor_axis) - assert_frame_equal(result,expected) + expected = DataFrame(np.dtype('float64'), index=self.panel.major_axis, + columns=self.panel.minor_axis) + assert_frame_equal(result, expected) result = self.panel.apply(lambda x: x.dtype, axis='major_axis') - expected = DataFrame(np.dtype('float64'),index=self.panel.minor_axis,columns=self.panel.items) - assert_frame_equal(result,expected) + expected = DataFrame(np.dtype('float64'), index=self.panel.minor_axis, + columns=self.panel.items) + assert_frame_equal(result, expected) result = self.panel.apply(lambda x: x.dtype, axis='minor_axis') - expected = DataFrame(np.dtype('float64'),index=self.panel.major_axis,columns=self.panel.items) - assert_frame_equal(result,expected) + expected = DataFrame(np.dtype('float64'), 
index=self.panel.major_axis, + columns=self.panel.items) + assert_frame_equal(result, expected) # reductions via other dims expected = self.panel.sum(0) result = self.panel.apply(lambda x: x.sum(), axis='items') - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) expected = self.panel.sum(1) result = self.panel.apply(lambda x: x.sum(), axis='major_axis') - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) expected = self.panel.sum(2) result = self.panel.apply(lambda x: x.sum(), axis='minor_axis') - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # pass kwargs result = self.panel.apply(lambda x, y: x.sum() + y, axis='items', y=5) expected = self.panel.sum(0) + 5 - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) def test_apply_slabs(self): # same shape as original - result = self.panel.apply(lambda x: x*2, axis = ['items','major_axis']) - expected = (self.panel*2).transpose('minor_axis','major_axis','items') - assert_panel_equal(result,expected) - result = self.panel.apply(lambda x: x*2, axis = ['major_axis','items']) - assert_panel_equal(result,expected) - - result = self.panel.apply(lambda x: x*2, axis = ['items','minor_axis']) - expected = (self.panel*2).transpose('major_axis','minor_axis','items') - assert_panel_equal(result,expected) - result = self.panel.apply(lambda x: x*2, axis = ['minor_axis','items']) - assert_panel_equal(result,expected) - - result = self.panel.apply(lambda x: x*2, axis = ['major_axis','minor_axis']) - expected = self.panel*2 - assert_panel_equal(result,expected) - result = self.panel.apply(lambda x: x*2, axis = ['minor_axis','major_axis']) - assert_panel_equal(result,expected) + result = self.panel.apply(lambda x: x * 2, + axis=['items', 'major_axis']) + expected = (self.panel * 2).transpose('minor_axis', 'major_axis', + 'items') + assert_panel_equal(result, expected) + result = self.panel.apply(lambda x: x * 2, + axis=['major_axis', 
'items']) + assert_panel_equal(result, expected) + + result = self.panel.apply(lambda x: x * 2, + axis=['items', 'minor_axis']) + expected = (self.panel * 2).transpose('major_axis', 'minor_axis', + 'items') + assert_panel_equal(result, expected) + result = self.panel.apply(lambda x: x * 2, + axis=['minor_axis', 'items']) + assert_panel_equal(result, expected) + + result = self.panel.apply(lambda x: x * 2, + axis=['major_axis', 'minor_axis']) + expected = self.panel * 2 + assert_panel_equal(result, expected) + result = self.panel.apply(lambda x: x * 2, + axis=['minor_axis', 'major_axis']) + assert_panel_equal(result, expected) # reductions - result = self.panel.apply(lambda x: x.sum(0), axis = ['items','major_axis']) + result = self.panel.apply(lambda x: x.sum(0), axis=[ + 'items', 'major_axis' + ]) expected = self.panel.sum(1).T - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) - result = self.panel.apply(lambda x: x.sum(1), axis = ['items','major_axis']) + result = self.panel.apply(lambda x: x.sum(1), axis=[ + 'items', 'major_axis' + ]) expected = self.panel.sum(0) - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) # transforms - f = lambda x: ((x.T-x.mean(1))/x.std(1)).T + f = lambda x: ((x.T - x.mean(1)) / x.std(1)).T # make sure that we don't trigger any warnings with tm.assert_produces_warning(False): - result = self.panel.apply(f, axis = ['items','major_axis']) - expected = Panel(dict([ (ax,f(self.panel.loc[:,:,ax])) for ax in self.panel.minor_axis ])) - assert_panel_equal(result,expected) - - result = self.panel.apply(f, axis = ['major_axis','minor_axis']) - expected = Panel(dict([ (ax,f(self.panel.loc[ax])) for ax in self.panel.items ])) - assert_panel_equal(result,expected) + result = self.panel.apply(f, axis=['items', 'major_axis']) + expected = Panel(dict([(ax, f(self.panel.loc[:, :, ax])) + for ax in self.panel.minor_axis])) + assert_panel_equal(result, expected) + + result = self.panel.apply(f, 
axis=['major_axis', 'minor_axis']) + expected = Panel(dict([(ax, f(self.panel.loc[ax])) + for ax in self.panel.items])) + assert_panel_equal(result, expected) - result = self.panel.apply(f, axis = ['minor_axis','items']) - expected = Panel(dict([ (ax,f(self.panel.loc[:,ax])) for ax in self.panel.major_axis ])) - assert_panel_equal(result,expected) + result = self.panel.apply(f, axis=['minor_axis', 'items']) + expected = Panel(dict([(ax, f(self.panel.loc[:, ax])) + for ax in self.panel.major_axis])) + assert_panel_equal(result, expected) # with multi-indexes # GH7469 - index = MultiIndex.from_tuples([('one', 'a'), ('one', 'b'), ('two', 'a'), ('two', 'b')]) - dfa = DataFrame(np.array(np.arange(12, dtype='int64')).reshape(4,3), columns=list("ABC"), index=index) - dfb = DataFrame(np.array(np.arange(10, 22, dtype='int64')).reshape(4,3), columns=list("ABC"), index=index) - p = Panel({'f':dfa, 'g':dfb}) + index = MultiIndex.from_tuples([('one', 'a'), ('one', 'b'), ( + 'two', 'a'), ('two', 'b')]) + dfa = DataFrame(np.array(np.arange(12, dtype='int64')).reshape( + 4, 3), columns=list("ABC"), index=index) + dfb = DataFrame(np.array(np.arange(10, 22, dtype='int64')).reshape( + 4, 3), columns=list("ABC"), index=index) + p = Panel({'f': dfa, 'g': dfb}) result = p.apply(lambda x: x.sum(), axis=0) # on windows this will be in32 result = result.astype('int64') expected = p.sum(0) - assert_frame_equal(result,expected) + assert_frame_equal(result, expected) def test_apply_no_or_zero_ndim(self): # GH10332 @@ -1303,7 +1357,6 @@ def test_apply_no_or_zero_ndim(self): assert_series_equal(result_float, expected_float) assert_series_equal(result_float64, expected_float64) - def test_reindex(self): ref = self.panel['ItemB'] @@ -1317,8 +1370,8 @@ def test_reindex(self): assert_frame_equal(result['ItemB'], ref.reindex(index=new_major)) # raise exception put both major and major_axis - self.assertRaises(Exception, self.panel.reindex, - major_axis=new_major, major=new_major) + 
self.assertRaises(Exception, self.panel.reindex, major_axis=new_major, + major=new_major) # minor new_minor = list(self.panel.minor_axis[:2]) @@ -1327,22 +1380,21 @@ def test_reindex(self): # this ok result = self.panel.reindex() - assert_panel_equal(result,self.panel) + assert_panel_equal(result, self.panel) self.assertFalse(result is self.panel) # with filling smaller_major = self.panel.major_axis[::5] smaller = self.panel.reindex(major=smaller_major) - larger = smaller.reindex(major=self.panel.major_axis, - method='pad') + larger = smaller.reindex(major=self.panel.major_axis, method='pad') assert_frame_equal(larger.major_xs(self.panel.major_axis[1]), smaller.major_xs(smaller_major[0])) # don't necessarily copy result = self.panel.reindex(major=self.panel.major_axis, copy=False) - assert_panel_equal(result,self.panel) + assert_panel_equal(result, self.panel) self.assertTrue(result is self.panel) def test_reindex_multi(self): @@ -1350,8 +1402,7 @@ def test_reindex_multi(self): # with and without copy full reindexing result = self.panel.reindex(items=self.panel.items, major=self.panel.major_axis, - minor=self.panel.minor_axis, - copy = False) + minor=self.panel.minor_axis, copy=False) self.assertIs(result.items, self.panel.items) self.assertIs(result.major_axis, self.panel.major_axis) @@ -1359,31 +1410,36 @@ def test_reindex_multi(self): result = self.panel.reindex(items=self.panel.items, major=self.panel.major_axis, - minor=self.panel.minor_axis, - copy = False) - assert_panel_equal(result,self.panel) + minor=self.panel.minor_axis, copy=False) + assert_panel_equal(result, self.panel) # multi-axis indexing consistency # GH 5900 - df = DataFrame(np.random.randn(4,3)) - p = Panel({ 'Item1' : df }) - expected = Panel({ 'Item1' : df }) + df = DataFrame(np.random.randn(4, 3)) + p = Panel({'Item1': df}) + expected = Panel({'Item1': df}) expected['Item2'] = np.nan - items = ['Item1','Item2'] + items = ['Item1', 'Item2'] major_axis = np.arange(4) minor_axis = np.arange(3) 
results = [] - results.append(p.reindex(items=items, major_axis=major_axis, copy=True)) - results.append(p.reindex(items=items, major_axis=major_axis, copy=False)) - results.append(p.reindex(items=items, minor_axis=minor_axis, copy=True)) - results.append(p.reindex(items=items, minor_axis=minor_axis, copy=False)) - results.append(p.reindex(items=items, major_axis=major_axis, minor_axis=minor_axis, copy=True)) - results.append(p.reindex(items=items, major_axis=major_axis, minor_axis=minor_axis, copy=False)) + results.append(p.reindex(items=items, major_axis=major_axis, + copy=True)) + results.append(p.reindex(items=items, major_axis=major_axis, + copy=False)) + results.append(p.reindex(items=items, minor_axis=minor_axis, + copy=True)) + results.append(p.reindex(items=items, minor_axis=minor_axis, + copy=False)) + results.append(p.reindex(items=items, major_axis=major_axis, + minor_axis=minor_axis, copy=True)) + results.append(p.reindex(items=items, major_axis=major_axis, + minor_axis=minor_axis, copy=False)) for i, r in enumerate(results): - assert_panel_equal(expected,r) + assert_panel_equal(expected, r) def test_reindex_like(self): # reindex_like @@ -1465,9 +1521,9 @@ def test_fillna(self): self.assertRaises(TypeError, self.panel.fillna, (1, 2)) # limit not implemented when only value is specified - p = Panel(np.random.randn(3,4,5)) - p.iloc[0:2,0:2,0:2] = np.nan - self.assertRaises(NotImplementedError, lambda : p.fillna(999,limit=1)) + p = Panel(np.random.randn(3, 4, 5)) + p.iloc[0:2, 0:2, 0:2] = np.nan + self.assertRaises(NotImplementedError, lambda: p.fillna(999, limit=1)) def test_ffill_bfill(self): assert_panel_equal(self.panel.ffill(), @@ -1504,7 +1560,7 @@ def test_swapaxes(self): # this works, but return a copy result = self.panel.swapaxes('items', 'items') - assert_panel_equal(self.panel,result) + assert_panel_equal(self.panel, result) self.assertNotEqual(id(self.panel), id(result)) def test_transpose(self): @@ -1528,11 +1584,13 @@ def 
test_transpose(self): assert_panel_equal(result, expected) # duplicate axes - with tm.assertRaisesRegexp(TypeError, 'not enough/duplicate arguments'): + with tm.assertRaisesRegexp(TypeError, + 'not enough/duplicate arguments'): self.panel.transpose('minor', maj='major', minor='items') with tm.assertRaisesRegexp(ValueError, 'repeated axis in transpose'): - self.panel.transpose('minor', 'major', major='minor', minor='items') + self.panel.transpose('minor', 'major', major='minor', + minor='items') result = self.panel.transpose(2, 1, 0) assert_panel_equal(result, expected) @@ -1597,7 +1655,8 @@ def test_to_frame_mixed(self): lp = panel.to_frame() wp = lp.to_panel() self.assertEqual(wp['bool'].values.dtype, np.bool_) - # Previously, this was mutating the underlying index and changing its name + # Previously, this was mutating the underlying index and changing its + # name assert_frame_equal(wp['bool'], panel['bool'], check_names=False) # GH 8704 @@ -1610,23 +1669,28 @@ def test_to_frame_mixed(self): p = df.to_panel() expected = panel.copy() expected['category'] = 'foo' - assert_panel_equal(p,expected) + assert_panel_equal(p, expected) def test_to_frame_multi_major(self): - idx = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), - (2, 'two')]) + idx = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), ( + 2, 'two')]) df = DataFrame([[1, 'a', 1], [2, 'b', 1], [3, 'c', 1], [4, 'd', 1]], columns=['A', 'B', 'C'], index=idx) wp = Panel({'i1': df, 'i2': df}) - expected_idx = MultiIndex.from_tuples([(1, 'one', 'A'), (1, 'one', 'B'), - (1, 'one', 'C'), (1, 'two', 'A'), - (1, 'two', 'B'), (1, 'two', 'C'), - (2, 'one', 'A'), (2, 'one', 'B'), - (2, 'one', 'C'), (2, 'two', 'A'), - (2, 'two', 'B'), (2, 'two', 'C')], - names=[None, None, 'minor']) - expected = DataFrame({'i1': [1, 'a', 1, 2, 'b', 1, 3, 'c', 1, 4, 'd', 1], - 'i2': [1, 'a', 1, 2, 'b', 1, 3, 'c', 1, 4, 'd', 1]}, + expected_idx = MultiIndex.from_tuples( + [ + (1, 'one', 'A'), (1, 'one', 'B'), + (1, 
'one', 'C'), (1, 'two', 'A'), + (1, 'two', 'B'), (1, 'two', 'C'), + (2, 'one', 'A'), (2, 'one', 'B'), + (2, 'one', 'C'), (2, 'two', 'A'), + (2, 'two', 'B'), (2, 'two', 'C') + ], + names=[None, None, 'minor']) + expected = DataFrame({'i1': [1, 'a', 1, 2, 'b', 1, 3, + 'c', 1, 4, 'd', 1], + 'i2': [1, 'a', 1, 2, 'b', + 1, 3, 'c', 1, 4, 'd', 1]}, index=expected_idx) result = wp.to_frame() assert_frame_equal(result, expected) @@ -1635,17 +1699,23 @@ def test_to_frame_multi_major(self): result = wp.to_frame() assert_frame_equal(result, expected[1:]) - idx = MultiIndex.from_tuples([(1, 'two'), (1, 'one'), (2, 'one'), - (np.nan, 'two')]) + idx = MultiIndex.from_tuples([(1, 'two'), (1, 'one'), (2, 'one'), ( + np.nan, 'two')]) df = DataFrame([[1, 'a', 1], [2, 'b', 1], [3, 'c', 1], [4, 'd', 1]], columns=['A', 'B', 'C'], index=idx) wp = Panel({'i1': df, 'i2': df}) - ex_idx = MultiIndex.from_tuples([(1, 'two', 'A'), (1, 'two', 'B'), (1, 'two', 'C'), - (1, 'one', 'A'), (1, 'one', 'B'), (1, 'one', 'C'), - (2, 'one', 'A'), (2, 'one', 'B'), (2, 'one', 'C'), - (np.nan, 'two', 'A'), (np.nan, 'two', 'B'), + ex_idx = MultiIndex.from_tuples([(1, 'two', 'A'), (1, 'two', 'B'), + (1, 'two', 'C'), + (1, 'one', 'A'), + (1, 'one', 'B'), + (1, 'one', 'C'), + (2, 'one', 'A'), + (2, 'one', 'B'), + (2, 'one', 'C'), + (np.nan, 'two', 'A'), + (np.nan, 'two', 'B'), (np.nan, 'two', 'C')], - names=[None, None, 'minor']) + names=[None, None, 'minor']) expected.index = ex_idx result = wp.to_frame() assert_frame_equal(result, expected) @@ -1653,31 +1723,33 @@ def test_to_frame_multi_major(self): def test_to_frame_multi_major_minor(self): cols = MultiIndex(levels=[['C_A', 'C_B'], ['C_1', 'C_2']], labels=[[0, 0, 1, 1], [0, 1, 0, 1]]) - idx = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), - (2, 'two'), (3, 'three'), (4, 'four')]) - df = DataFrame([[1, 2, 11, 12], [3, 4, 13, 14], ['a', 'b', 'w', 'x'], - ['c', 'd', 'y', 'z'], [-1, -2, -3, -4], [-5, -6, -7, -8] - ], columns=cols, index=idx) + idx = 
MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), ( + 2, 'two'), (3, 'three'), (4, 'four')]) + df = DataFrame([[1, 2, 11, 12], [3, 4, 13, 14], + ['a', 'b', 'w', 'x'], + ['c', 'd', 'y', 'z'], [-1, -2, -3, -4], + [-5, -6, -7, -8]], columns=cols, index=idx) wp = Panel({'i1': df, 'i2': df}) - exp_idx = MultiIndex.from_tuples([(1, 'one', 'C_A', 'C_1'), (1, 'one', 'C_A', 'C_2'), - (1, 'one', 'C_B', 'C_1'), (1, 'one', 'C_B', 'C_2'), - (1, 'two', 'C_A', 'C_1'), (1, 'two', 'C_A', 'C_2'), - (1, 'two', 'C_B', 'C_1'), (1, 'two', 'C_B', 'C_2'), - (2, 'one', 'C_A', 'C_1'), (2, 'one', 'C_A', 'C_2'), - (2, 'one', 'C_B', 'C_1'), (2, 'one', 'C_B', 'C_2'), - (2, 'two', 'C_A', 'C_1'), (2, 'two', 'C_A', 'C_2'), - (2, 'two', 'C_B', 'C_1'), (2, 'two', 'C_B', 'C_2'), - (3, 'three', 'C_A', 'C_1'), (3, 'three', 'C_A', 'C_2'), - (3, 'three', 'C_B', 'C_1'), (3, 'three', 'C_B', 'C_2'), - (4, 'four', 'C_A', 'C_1'), (4, 'four', 'C_A', 'C_2'), - (4, 'four', 'C_B', 'C_1'), (4, 'four', 'C_B', 'C_2')], - names=[None, None, None, None]) - exp_val = [[1, 1], [2, 2], [11, 11], [12, 12], [3, 3], [4, 4], [13, 13], - [14, 14], ['a', 'a'], ['b', 'b'], ['w', 'w'], ['x', 'x'], - ['c', 'c'], ['d', 'd'], ['y', 'y'], ['z', 'z'], [-1, -1], - [-2, -2], [-3, -3], [-4, -4], [-5, -5], [-6, -6], [-7, -7], - [-8, -8]] + exp_idx = MultiIndex.from_tuples( + [(1, 'one', 'C_A', 'C_1'), (1, 'one', 'C_A', 'C_2'), + (1, 'one', 'C_B', 'C_1'), (1, 'one', 'C_B', 'C_2'), + (1, 'two', 'C_A', 'C_1'), (1, 'two', 'C_A', 'C_2'), + (1, 'two', 'C_B', 'C_1'), (1, 'two', 'C_B', 'C_2'), + (2, 'one', 'C_A', 'C_1'), (2, 'one', 'C_A', 'C_2'), + (2, 'one', 'C_B', 'C_1'), (2, 'one', 'C_B', 'C_2'), + (2, 'two', 'C_A', 'C_1'), (2, 'two', 'C_A', 'C_2'), + (2, 'two', 'C_B', 'C_1'), (2, 'two', 'C_B', 'C_2'), + (3, 'three', 'C_A', 'C_1'), (3, 'three', 'C_A', 'C_2'), + (3, 'three', 'C_B', 'C_1'), (3, 'three', 'C_B', 'C_2'), + (4, 'four', 'C_A', 'C_1'), (4, 'four', 'C_A', 'C_2'), + (4, 'four', 'C_B', 'C_1'), (4, 'four', 'C_B', 'C_2')], + 
names=[None, None, None, None])
+        exp_val = [[1, 1], [2, 2], [11, 11], [12, 12], [3, 3], [4, 4],
+                   [13, 13], [14, 14], ['a', 'a'], ['b', 'b'], ['w', 'w'],
+                   ['x', 'x'], ['c', 'c'], ['d', 'd'], ['y', 'y'], ['z', 'z'],
+                   [-1, -1], [-2, -2], [-3, -3], [-4, -4], [-5, -5], [-6, -6],
+                   [-7, -7], [-8, -8]]
         result = wp.to_frame()
         expected = DataFrame(exp_val, columns=['i1', 'i2'], index=exp_idx)
         assert_frame_equal(result, expected)
@@ -1724,8 +1796,8 @@ def test_panel_dups(self):
         result = panel.loc['E']
         assert_frame_equal(result, expected)

-        expected = no_dup_panel.loc[['A','B']]
-        expected.items = ['A','A']
+        expected = no_dup_panel.loc[['A', 'B']]
+        expected.items = ['A', 'A']
         result = panel.loc['A']
         assert_panel_equal(result, expected)
@@ -1734,17 +1806,17 @@ def test_panel_dups(self):
         no_dup_panel = Panel(data, major_axis=list("ABCDE"))
         panel = Panel(data, major_axis=list("AACDE"))

-        expected = no_dup_panel.loc[:,'A']
-        result = panel.iloc[:,0]
+        expected = no_dup_panel.loc[:, 'A']
+        result = panel.iloc[:, 0]
         assert_frame_equal(result, expected)

-        expected = no_dup_panel.loc[:,'E']
-        result = panel.loc[:,'E']
+        expected = no_dup_panel.loc[:, 'E']
+        result = panel.loc[:, 'E']
         assert_frame_equal(result, expected)

-        expected = no_dup_panel.loc[:,['A','B']]
-        expected.major_axis = ['A','A']
-        result = panel.loc[:,'A']
+        expected = no_dup_panel.loc[:, ['A', 'B']]
+        expected.major_axis = ['A', 'A']
+        result = panel.loc[:, 'A']
         assert_panel_equal(result, expected)

         # minor
@@ -1752,17 +1824,17 @@ def test_panel_dups(self):
         no_dup_panel = Panel(data, minor_axis=list("ABCDE"))
         panel = Panel(data, minor_axis=list("AACDE"))

-        expected = no_dup_panel.loc[:,:,'A']
-        result = panel.iloc[:,:,0]
+        expected = no_dup_panel.loc[:, :, 'A']
+        result = panel.iloc[:, :, 0]
         assert_frame_equal(result, expected)

-        expected = no_dup_panel.loc[:,:,'E']
-        result = panel.loc[:,:,'E']
+        expected = no_dup_panel.loc[:, :, 'E']
+        result = panel.loc[:, :, 'E']
         assert_frame_equal(result, expected)

-        expected = no_dup_panel.loc[:,:,['A','B']]
-        expected.minor_axis = ['A','A']
-        result = panel.loc[:,:,'A']
+        expected = no_dup_panel.loc[:, :, ['A', 'B']]
+        expected.minor_axis = ['A', 'A']
+        result = panel.loc[:, :, 'A']
         assert_panel_equal(result, expected)

     def test_filter(self):
@@ -1780,22 +1852,19 @@ def test_shift(self):
         idx = self.panel.major_axis[0]
         idx_lag = self.panel.major_axis[1]
         shifted = self.panel.shift(1)
-        assert_frame_equal(self.panel.major_xs(idx),
-                           shifted.major_xs(idx_lag))
+        assert_frame_equal(self.panel.major_xs(idx), shifted.major_xs(idx_lag))

         # minor
         idx = self.panel.minor_axis[0]
         idx_lag = self.panel.minor_axis[1]
         shifted = self.panel.shift(1, axis='minor')
-        assert_frame_equal(self.panel.minor_xs(idx),
-                           shifted.minor_xs(idx_lag))
+        assert_frame_equal(self.panel.minor_xs(idx), shifted.minor_xs(idx_lag))

         # items
         idx = self.panel.items[0]
         idx_lag = self.panel.items[1]
         shifted = self.panel.shift(1, axis='items')
-        assert_frame_equal(self.panel[idx],
-                           shifted[idx_lag])
+        assert_frame_equal(self.panel[idx], shifted[idx_lag])

         # negative numbers, #2164
         result = self.panel.shift(-1)
@@ -1804,7 +1873,7 @@ def test_shift(self):
         assert_panel_equal(result, expected)

         # mixed dtypes #6959
-        data = [('item '+ch, makeMixedDataFrame()) for ch in list('abcde')]
+        data = [('item ' + ch, makeMixedDataFrame()) for ch in list('abcde')]
         data = dict(data)
         mixed_panel = Panel.from_dict(data, orient='minor')
         shifted = mixed_panel.shift(1)
@@ -1836,10 +1905,9 @@ def test_tshift(self):
         shifted2 = panel.tshift(freq=panel.major_axis.freq)
         assert_panel_equal(shifted, shifted2)

-        inferred_ts = Panel(panel.values,
-                            items=panel.items,
-                            major_axis=Index(np.asarray(panel.major_axis)),
-                            minor_axis=panel.minor_axis)
+        inferred_ts = Panel(panel.values, items=panel.items,
+                            major_axis=Index(np.asarray(panel.major_axis)),
+                            minor_axis=panel.minor_axis)
         shifted = inferred_ts.tshift(1)
         unshifted = shifted.tshift(-1)
         assert_panel_equal(shifted, panel.tshift(1))
@@ -1886,9 +1954,9 @@ def test_pct_change(self):
         expected = Panel({'i1': DataFrame({'c1': [np.nan, np.nan, np.nan],
                                            'c2': [np.nan, np.nan, np.nan]}),
                           'i2': DataFrame({'c1': [1, 0.5, .2],
-                                           'c2': [1./3, 0.25, 1./6]}),
-                          'i3': DataFrame({'c1': [.5, 1./3, 1./6],
-                                           'c2': [.25, .2, 1./7]})})
+                                           'c2': [1. / 3, 0.25, 1. / 6]}),
+                          'i3': DataFrame({'c1': [.5, 1. / 3, 1. / 6],
+                                           'c2': [.25, .2, 1. / 7]})})
         assert_panel_equal(result, expected)
         result = wp.pct_change(axis=0)
         assert_panel_equal(result, expected)
@@ -1899,25 +1967,25 @@ def test_pct_change(self):
                           'i2': DataFrame({'c1': [np.nan, np.nan, np.nan],
                                            'c2': [np.nan, np.nan, np.nan]}),
                           'i3': DataFrame({'c1': [2, 1, .4],
-                                           'c2': [2./3, .5, 1./3]})})
+                                           'c2': [2. / 3, .5, 1. / 3]})})
         assert_panel_equal(result, expected)
-
+
     def test_round(self):
-        values = [[[-3.2,2.2],[0,-4.8213],[3.123,123.12],
-                   [-1566.213,88.88],[-12,94.5]],
-                  [[-5.82,3.5],[6.21,-73.272], [-9.087,23.12],
-                   [272.212,-99.99],[23,-76.5]]]
-        evalues = [[[float(np.around(i)) for i in j] for j in k] for k in values]
+        values = [[[-3.2, 2.2], [0, -4.8213], [3.123, 123.12],
+                   [-1566.213, 88.88], [-12, 94.5]],
+                  [[-5.82, 3.5], [6.21, -73.272], [-9.087, 23.12],
+                   [272.212, -99.99], [23, -76.5]]]
+        evalues = [[[float(np.around(i)) for i in j] for j in k]
+                   for k in values]
         p = Panel(values, items=['Item1', 'Item2'],
                   major_axis=pd.date_range('1/1/2000', periods=5),
-                  minor_axis=['A','B'])
+                  minor_axis=['A', 'B'])
         expected = Panel(evalues, items=['Item1', 'Item2'],
                          major_axis=pd.date_range('1/1/2000', periods=5),
-                         minor_axis=['A','B'])
+                         minor_axis=['A', 'B'])
         result = p.round()
         self.assert_panel_equal(expected, result)
-
     def test_multiindex_get(self):
         ind = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1), ('b', 2)],
                                      names=['first', 'second'])
@@ -1951,11 +2019,7 @@ def test_repr_empty(self):
         repr(empty)

     def test_rename(self):
-        mapper = {
-            'ItemA': 'foo',
-            'ItemB': 'bar',
-            'ItemC': 'baz'
-        }
+        mapper = {'ItemA': 'foo', 'ItemB': 'bar', 'ItemC': 'baz'}

         renamed = self.panel.rename_axis(mapper, axis=0)
         exp = Index(['foo', 'bar', 'baz'])
@@ -1979,21 +2043,19 @@ def test_get_attr(self):
         self.panel['i'] = self.panel['ItemA']
         assert_frame_equal(self.panel['i'], self.panel.i)

-
     def test_from_frame_level1_unsorted(self):
-        tuples = [('MSFT', 3), ('MSFT', 2), ('AAPL', 2),
-                  ('AAPL', 1), ('MSFT', 1)]
+        tuples = [('MSFT', 3), ('MSFT', 2), ('AAPL', 2), ('AAPL', 1),
+                  ('MSFT', 1)]
         midx = MultiIndex.from_tuples(tuples)
         df = DataFrame(np.random.rand(5, 4), index=midx)
         p = df.to_panel()
         assert_frame_equal(p.minor_xs(2), df.xs(2, level=1).sort_index())

     def test_to_excel(self):
-        import os
         try:
-            import xlwt
-            import xlrd
-            import openpyxl
+            import xlwt  # noqa
+            import xlrd  # noqa
+            import openpyxl  # noqa
             from pandas.io.excel import ExcelFile
         except ImportError:
             raise nose.SkipTest("need xlwt xlrd openpyxl")
@@ -2013,8 +2075,8 @@ def test_to_excel(self):
     def test_to_excel_xlsxwriter(self):
         try:
-            import xlrd
-            import xlsxwriter
+            import xlrd  # noqa
+            import xlsxwriter  # noqa
             from pandas.io.excel import ExcelFile
         except ImportError:
             raise nose.SkipTest("Requires xlrd and xlsxwriter. Skipping test.")
@@ -2112,135 +2174,96 @@ def check_drop(drop_val, axis_number, aliases, expected):
         check_drop("B", 2, ['minor_axis', 'minor'], expected)

     def test_update(self):
-        pan = Panel([[[1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.]],
-                     [[1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.]]])
-
-        other = Panel([[[3.6, 2., np.nan],
-                        [np.nan, np.nan, 7]]], items=[1])
+        pan = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                      [1.5, np.nan, 3.]],
+                     [[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                      [1.5, np.nan, 3.]]])
+
+        other = Panel([[[3.6, 2., np.nan], [np.nan, np.nan, 7]]], items=[1])

         pan.update(other)

-        expected = Panel([[[1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.]],
-                          [[3.6, 2., 3],
-                           [1.5, np.nan, 7],
-                           [1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.]]])
+        expected = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                           [1.5, np.nan, 3.], [1.5, np.nan, 3.]],
+                          [[3.6, 2., 3], [1.5, np.nan, 7], [1.5, np.nan, 3.],
+                           [1.5, np.nan, 3.]]])

         assert_panel_equal(pan, expected)

     def test_update_from_dict(self):
-        pan = Panel({'one': DataFrame([[1.5, np.nan, 3],
-                                       [1.5, np.nan, 3],
-                                       [1.5, np.nan, 3.],
-                                       [1.5, np.nan, 3.]]),
-                     'two': DataFrame([[1.5, np.nan, 3.],
-                                       [1.5, np.nan, 3.],
-                                       [1.5, np.nan, 3.],
-                                       [1.5, np.nan, 3.]])})
-
-        other = {'two': DataFrame([[3.6, 2., np.nan],
-                                   [np.nan, np.nan, 7]])}
+        pan = Panel({'one': DataFrame([[1.5, np.nan, 3], [1.5, np.nan, 3],
+                                       [1.5, np.nan, 3.], [1.5, np.nan, 3.]]),
+                     'two': DataFrame([[1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                                       [1.5, np.nan, 3.], [1.5, np.nan, 3.]])})
+
+        other = {'two': DataFrame([[3.6, 2., np.nan], [np.nan, np.nan, 7]])}

         pan.update(other)

-        expected = Panel({'two': DataFrame([[3.6, 2., 3],
-                                            [1.5, np.nan, 7],
-                                            [1.5, np.nan, 3.],
-                                            [1.5, np.nan, 3.]]),
-                          'one': DataFrame([[1.5, np.nan, 3.],
-                                            [1.5, np.nan, 3.],
-                                            [1.5, np.nan, 3.],
-                                            [1.5, np.nan, 3.]])})
+        expected = Panel(
+            {'two': DataFrame([[3.6, 2., 3], [1.5, np.nan, 7],
+                               [1.5, np.nan, 3.], [1.5, np.nan, 3.]]),
+             'one': DataFrame([[1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                               [1.5, np.nan, 3.], [1.5, np.nan, 3.]])})

         assert_panel_equal(pan, expected)

     def test_update_nooverwrite(self):
-        pan = Panel([[[1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.]],
-                     [[1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.]]])
-
-        other = Panel([[[3.6, 2., np.nan],
-                        [np.nan, np.nan, 7]]], items=[1])
+        pan = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                      [1.5, np.nan, 3.]],
+                     [[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                      [1.5, np.nan, 3.]]])
+
+        other = Panel([[[3.6, 2., np.nan], [np.nan, np.nan, 7]]], items=[1])

         pan.update(other, overwrite=False)

-        expected = Panel([[[1.5, np.nan, 3],
-                           [1.5, np.nan, 3],
-                           [1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.]],
-                          [[1.5, 2., 3.],
-                           [1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.]]])
+        expected = Panel([[[1.5, np.nan, 3], [1.5, np.nan, 3],
+                           [1.5, np.nan, 3.], [1.5, np.nan, 3.]],
+                          [[1.5, 2., 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                           [1.5, np.nan, 3.]]])

         assert_panel_equal(pan, expected)

     def test_update_filtered(self):
-        pan = Panel([[[1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.]],
-                     [[1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.]]])
-
-        other = Panel([[[3.6, 2., np.nan],
-                        [np.nan, np.nan, 7]]], items=[1])
+        pan = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                      [1.5, np.nan, 3.]],
+                     [[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                      [1.5, np.nan, 3.]]])
+
+        other = Panel([[[3.6, 2., np.nan], [np.nan, np.nan, 7]]], items=[1])

         pan.update(other, filter_func=lambda x: x > 2)

-        expected = Panel([[[1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.]],
-                          [[1.5, np.nan, 3],
-                           [1.5, np.nan, 7],
-                           [1.5, np.nan, 3.],
-                           [1.5, np.nan, 3.]]])
+        expected = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                           [1.5, np.nan, 3.], [1.5, np.nan, 3.]],
+                          [[1.5, np.nan, 3], [1.5, np.nan, 7],
+                           [1.5, np.nan, 3.], [1.5, np.nan, 3.]]])

         assert_panel_equal(pan, expected)

     def test_update_raise(self):
-        pan = Panel([[[1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.]],
-                     [[1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.],
-                      [1.5, np.nan, 3.]]])
-
-        np.testing.assert_raises(Exception, pan.update, *(pan,),
+        pan = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                      [1.5, np.nan, 3.]],
+                     [[1.5, np.nan, 3.], [1.5, np.nan, 3.], [1.5, np.nan, 3.],
+                      [1.5, np.nan, 3.]]])
+
+        np.testing.assert_raises(Exception, pan.update, *(pan, ),
                                  **{'raise_conflict': True})

     def test_all_any(self):
-        self.assertTrue((self.panel.all(axis=0).values ==
-                         nanall(self.panel, axis=0)).all())
-        self.assertTrue((self.panel.all(axis=1).values ==
-                         nanall(self.panel, axis=1).T).all())
-        self.assertTrue((self.panel.all(axis=2).values ==
-                         nanall(self.panel, axis=2).T).all())
-        self.assertTrue((self.panel.any(axis=0).values ==
-                         nanany(self.panel, axis=0)).all())
-        self.assertTrue((self.panel.any(axis=1).values ==
-                         nanany(self.panel, axis=1).T).all())
-        self.assertTrue((self.panel.any(axis=2).values ==
-                         nanany(self.panel, axis=2).T).all())
+        self.assertTrue((self.panel.all(axis=0).values == nanall(
+            self.panel, axis=0)).all())
+        self.assertTrue((self.panel.all(axis=1).values == nanall(
+            self.panel, axis=1).T).all())
+        self.assertTrue((self.panel.all(axis=2).values == nanall(
+            self.panel, axis=2).T).all())
+        self.assertTrue((self.panel.any(axis=0).values == nanany(
+            self.panel, axis=0)).all())
+        self.assertTrue((self.panel.any(axis=1).values == nanany(
+            self.panel, axis=1).T).all())
+        self.assertTrue((self.panel.any(axis=2).values == nanany(
+            self.panel, axis=2).T).all())

     def test_all_any_unhandled(self):
         self.assertRaises(NotImplementedError, self.panel.all, bool_only=True)
@@ -2274,8 +2297,8 @@ def test_ops_differently_indexed(self):

         # careful, mutation
         self.panel['foo'] = lp2['ItemA']
-        assert_series_equal(self.panel['foo'].reindex(lp2.index),
-                            lp2['ItemA'], check_names=False)
+        assert_series_equal(self.panel['foo'].reindex(lp2.index), lp2['ItemA'],
+                            check_names=False)

     def test_ops_scalar(self):
         result = self.panel.mul(2)
@@ -2325,7 +2348,7 @@ def test_arith_flex_panel(self):
         aliases = {'div': 'truediv'}
         self.panel = self.panel.to_panel()

-        for n in [ np.random.randint(-50, -1), np.random.randint(1, 50), 0]:
+        for n in [np.random.randint(-50, -1), np.random.randint(1, 50), 0]:
             for op in ops:
                 alias = aliases.get(op, op)
                 f = getattr(operator, alias)
@@ -2361,17 +2384,20 @@ def test_truncate(self):

         trunced = self.panel.truncate(start, end).to_panel()
         expected = self.panel.to_panel()['ItemA'].truncate(start, end)

-        assert_frame_equal(trunced['ItemA'], expected, check_names=False)  # TODO trucate drops index.names
+        # TODO trucate drops index.names
+        assert_frame_equal(trunced['ItemA'], expected, check_names=False)

         trunced = self.panel.truncate(before=start).to_panel()
         expected = self.panel.to_panel()['ItemA'].truncate(before=start)

-        assert_frame_equal(trunced['ItemA'], expected, check_names=False)  # TODO trucate drops index.names
+        # TODO trucate drops index.names
+        assert_frame_equal(trunced['ItemA'], expected, check_names=False)

         trunced = self.panel.truncate(after=end).to_panel()
         expected = self.panel.to_panel()['ItemA'].truncate(after=end)

-        assert_frame_equal(trunced['ItemA'], expected, check_names=False)  # TODO trucate drops index.names
+        # TODO trucate drops index.names
+        assert_frame_equal(trunced['ItemA'], expected, check_names=False)

         # truncate on dates that aren't in there
         wp = self.panel.to_panel()
@@ -2401,10 +2427,7 @@ def test_axis_dummies(self):
         self.assertEqual(len(major_dummies.columns),
                          len(self.panel.index.levels[0]))

-        mapping = {'A': 'one',
-                   'B': 'one',
-                   'C': 'two',
-                   'D': 'two'}
+        mapping = {'A': 'one', 'B': 'one', 'C': 'two', 'D': 'two'}

         transformed = make_axis_dummies(self.panel, 'minor',
                                         transform=mapping.get)
@@ -2502,9 +2525,10 @@ def _monotonic(arr):

 def test_panel_index():
     index = panelm.panel_index([1, 2, 3, 4], [1, 2, 3])
-    expected = MultiIndex.from_arrays([np.tile([1, 2, 3, 4], 3),
-                                       np.repeat([1, 2, 3], 4)])
-    assert(index.equals(expected))
+    expected = MultiIndex.from_arrays([np.tile(
+        [1, 2, 3, 4], 3), np.repeat(
+        [1, 2, 3], 4)])
+    assert (index.equals(expected))


 def test_import_warnings():
@@ -2513,7 +2537,7 @@ def test_import_warnings():
     with assert_produces_warning():
         panel.major_xs(1, copy=False)

+
 if __name__ == '__main__':
-    import nose
     nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
                    exit=False)
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 3772d4b9c272b..6238f13864552 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -1,19 +1,17 @@
 # -*- coding: utf-8 -*-
 from datetime import datetime
 from pandas.compat import range, lrange
-import os
 import operator
 import nose

 import numpy as np

-from pandas import Series, DataFrame, Index, isnull, notnull, pivot, MultiIndex
+from pandas import Series, Index, isnull, notnull
 from pandas.core.datetools import bday
 from pandas.core.panel import Panel
 from pandas.core.panel4d import Panel4D
 from pandas.core.series import remove_na
 import pandas.core.common as com
-import pandas.core.panel as panelmod
 from pandas import compat

 from pandas.util.testing import (assert_panel_equal,
@@ -22,7 +20,6 @@
                                  assert_series_equal,
                                  assert_almost_equal)
 import pandas.util.testing as tm
-import pandas.compat as compat


 def add_nans(panel4d):
@@ -36,7 +33,7 @@ class SafeForLongAndSparse(object):
     _multiprocess_can_split_ = True

     def test_repr(self):
-        foo = repr(self.panel4d)
+        repr(self.panel4d)

     def test_iter(self):
         tm.equalContents(list(self.panel4d), self.panel4d.labels)
@@ -102,7 +99,7 @@ def test_sem(self):
         def alt(x):
             if len(x) < 2:
                 return np.nan
-            return np.std(x, ddof=1)/np.sqrt(len(x))
+            return np.std(x, ddof=1) / np.sqrt(len(x))
         self._check_stat_op('sem', alt)

     # def test_skew(self):
@@ -170,12 +167,18 @@ def test_get_axis(self):

     def test_set_axis(self):
         new_labels = Index(np.arange(len(self.panel4d.labels)))
-        new_items = Index(np.arange(len(self.panel4d.items)))
+
+        # TODO: unused?
+        # new_items = Index(np.arange(len(self.panel4d.items)))
+
         new_major = Index(np.arange(len(self.panel4d.major_axis)))
         new_minor = Index(np.arange(len(self.panel4d.minor_axis)))

         # ensure propagate to potentially prior-cached items too
-        label = self.panel4d['l1']
+
+        # TODO: unused?
+        # label = self.panel4d['l1']
+
         self.panel4d.labels = new_labels

         if hasattr(self.panel4d, '_item_cache'):
@@ -343,7 +346,7 @@ def test_delitem_and_pop(self):
         assert_panel_equal(panel4dc[0], panel4d[0])

     def test_setitem(self):
-        ## LongPanel with one item
+        # LongPanel with one item
         # lp = self.panel.filter(['ItemA', 'ItemB']).to_frame()
         # self.assertRaises(Exception, self.panel.__setitem__,
         #                  'ItemE', lp)
@@ -379,23 +382,24 @@ def test_setitem_by_indexer(self):
         # Panel
         panel4dc = self.panel4d.copy()
         p = panel4dc.iloc[0]
+
         def func():
             self.panel4d.iloc[0] = p
         self.assertRaises(NotImplementedError, func)

         # DataFrame
         panel4dc = self.panel4d.copy()
-        df = panel4dc.iloc[0,0]
+        df = panel4dc.iloc[0, 0]
         df.iloc[:] = 1
-        panel4dc.iloc[0,0] = df
-        self.assertTrue((panel4dc.iloc[0,0].values == 1).all())
+        panel4dc.iloc[0, 0] = df
+        self.assertTrue((panel4dc.iloc[0, 0].values == 1).all())

         # Series
         panel4dc = self.panel4d.copy()
-        s = panel4dc.iloc[0,0,:,0]
+        s = panel4dc.iloc[0, 0, :, 0]
         s.iloc[:] = 1
-        panel4dc.iloc[0,0,:,0] = s
-        self.assertTrue((panel4dc.iloc[0,0,:,0].values == 1).all())
+        panel4dc.iloc[0, 0, :, 0] = s
+        self.assertTrue((panel4dc.iloc[0, 0, :, 0].values == 1).all())

         # scalar
         panel4dc = self.panel4d.copy()
@@ -419,8 +423,6 @@ def test_setitem_by_indexer_mixed_type(self):
         self.assertTrue(panel4dc.iloc[1].values.all())
         self.assertTrue((panel4dc.iloc[2].values == 'foo').all())

-
-
     def test_comparisons(self):
         p1 = tm.makePanel4D()
         p2 = tm.makePanel4D()
@@ -459,7 +461,8 @@ def test_setitem_ndarray(self):
    #                        offset=datetools.MonthEnd())
    #     lons_coarse = np.linspace(-177.5, 177.5, 72)
    #     lats_coarse = np.linspace(-87.5, 87.5, 36)
-   #     P = Panel(items=timeidx, major_axis=lons_coarse, minor_axis=lats_coarse)
+   #     P = Panel(items=timeidx, major_axis=lons_coarse,
+   #               minor_axis=lats_coarse)
    #     data = np.random.randn(72*36).reshape((72,36))
    #     key = datetime(2009,2,28)
    #     P[key] = data#
@@ -472,7 +475,8 @@ def test_major_xs(self):
         idx = self.panel4d.major_axis[5]
         xs = self.panel4d.major_xs(idx)

-        assert_series_equal(xs['l1'].T['ItemA'], ref.xs(idx), check_names=False)
+        assert_series_equal(xs['l1'].T['ItemA'],
+                            ref.xs(idx), check_names=False)

         # not contained
         idx = self.panel4d.major_axis[0] - bday
@@ -527,11 +531,13 @@ def test_getitem_fancy_labels(self):

         # all 4 specified
         assert_panel4d_equal(panel4d.ix[labels, items, dates, cols],
-                             panel4d.reindex(labels=labels, items=items, major=dates, minor=cols))
+                             panel4d.reindex(labels=labels, items=items,
+                                             major=dates, minor=cols))

         # 3 specified
         assert_panel4d_equal(panel4d.ix[:, items, dates, cols],
-                             panel4d.reindex(items=items, major=dates, minor=cols))
+                             panel4d.reindex(items=items, major=dates,
+                                             minor=cols))

         # 2 specified
         assert_panel4d_equal(panel4d.ix[:, :, dates, cols],
@@ -632,15 +638,18 @@ def test_constructor(self):

         # GH #8285, test when scalar data is used to construct a Panel4D
         # if dtype is not passed, it should be inferred
-        value_and_dtype = [(1, 'int64'), (3.14, 'float64'), ('foo', np.object_)]
+        value_and_dtype = [(1, 'int64'), (3.14, 'float64'),
+                           ('foo', np.object_)]
         for (val, dtype) in value_and_dtype:
-            panel4d = Panel4D(val, labels=range(2), items=range(3), major_axis=range(4), minor_axis=range(5))
+            panel4d = Panel4D(val, labels=range(2), items=range(
+                3), major_axis=range(4), minor_axis=range(5))
             vals = np.empty((2, 3, 4, 5), dtype=dtype)
             vals.fill(val)
             assert_panel4d_equal(panel4d, Panel4D(vals, dtype=dtype))

         # test the case when dtype is passed
-        panel4d = Panel4D(1, labels=range(2), items=range(3), major_axis=range(4), minor_axis=range(5), dtype='float32')
+        panel4d = Panel4D(1, labels=range(2), items=range(
+            3), major_axis=range(4), minor_axis=range(5), dtype='float32')
         vals = np.empty((2, 3, 4, 5), dtype='float32')
         vals.fill(1)
         assert_panel4d_equal(panel4d, Panel4D(vals, dtype='float32'))
@@ -829,7 +838,7 @@ def test_reindex(self):

         # don't necessarily copy
         result = self.panel4d.reindex()
-        assert_panel4d_equal(result,self.panel4d)
+        assert_panel4d_equal(result, self.panel4d)
         self.assertFalse(result is self.panel4d)

         # with filling
@@ -845,7 +854,7 @@ def test_reindex(self):
         # don't necessarily copy
         result = self.panel4d.reindex(
             major=self.panel4d.major_axis, copy=False)
-        assert_panel4d_equal(result,self.panel4d)
+        assert_panel4d_equal(result, self.panel4d)
         self.assertTrue(result is self.panel4d)

     def test_not_hashable(self):
@@ -913,7 +922,8 @@ def test_fillna(self):
         filled = self.panel4d.fillna(0)
         self.assertTrue(np.isfinite(filled.values).all())

-        self.assertRaises(NotImplementedError, self.panel4d.fillna, method='pad')
+        self.assertRaises(NotImplementedError,
+                          self.panel4d.fillna, method='pad')

     def test_swapaxes(self):
         result = self.panel4d.swapaxes('labels', 'items')
@@ -937,7 +947,7 @@ def test_swapaxes(self):

         # this works, but return a copy
         result = self.panel4d.swapaxes('items', 'items')
-        assert_panel4d_equal(self.panel4d,result)
+        assert_panel4d_equal(self.panel4d, result)
         self.assertNotEqual(id(self.panel4d), id(result))

     def test_to_frame(self):
@@ -1001,7 +1011,7 @@ def test_apply(self):

     def test_dtypes(self):
         result = self.panel4d.dtypes
-        expected = Series(np.dtype('float64'),index=self.panel4d.labels)
+        expected = Series(np.dtype('float64'), index=self.panel4d.labels)
         assert_series_equal(result, expected)

     def test_compound(self):
@@ -1090,7 +1100,6 @@ def test_rename(self):
     def test_get_attr(self):
         assert_panel_equal(self.panel4d['l1'], self.panel4d.l1)

-
     def test_from_frame_level1_unsorted(self):
         raise nose.SkipTest("skipping for now")
@@ -1099,6 +1108,5 @@ def test_to_excel(self):


 if __name__ == '__main__':
-    import nose
     nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
                    exit=False)
diff --git a/pandas/tests/test_panelnd.py b/pandas/tests/test_panelnd.py
index 67d015b940885..ac497bc580585 100644
--- a/pandas/tests/test_panelnd.py
+++ b/pandas/tests/test_panelnd.py
@@ -1,21 +1,10 @@
 # -*- coding: utf-8 -*-
-from datetime import datetime
-import os
-import operator
 import nose

-import numpy as np
-
 from pandas.core import panelnd
 from pandas.core.panel import Panel
-import pandas.core.common as com

-from pandas import compat
-
-from pandas.util.testing import (assert_panel_equal,
-                                 assert_panel4d_equal,
-                                 assert_frame_equal,
-                                 assert_series_equal,
-                                 assert_almost_equal)
+
+from pandas.util.testing import assert_panel_equal
 import pandas.util.testing as tm
@@ -36,7 +25,7 @@ def test_4d_construction(self):
             aliases={'major': 'major_axis', 'minor': 'minor_axis'},
             stat_axis=2)

-        p4d = Panel4D(dict(L1=tm.makePanel(), L2=tm.makePanel()))
+        p4d = Panel4D(dict(L1=tm.makePanel(), L2=tm.makePanel()))  # noqa

     def test_4d_construction_alt(self):
@@ -50,7 +39,7 @@ def test_4d_construction_alt(self):
             aliases={'major': 'major_axis', 'minor': 'minor_axis'},
             stat_axis=2)

-        p4d = Panel4D(dict(L1=tm.makePanel(), L2=tm.makePanel()))
+        p4d = Panel4D(dict(L1=tm.makePanel(), L2=tm.makePanel()))  # noqa

     def test_4d_construction_error(self):
@@ -106,6 +95,5 @@ def test_5d_construction(self):
         # expected =

 if __name__ == '__main__':
-    import nose
     nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
                    exit=False)
diff --git a/pandas/tests/test_reshape.py b/pandas/tests/test_reshape.py
index 2961301366188..6de589f87cfd8 100644
--- a/pandas/tests/test_reshape.py
+++ b/pandas/tests/test_reshape.py
@@ -1,10 +1,5 @@
 # -*- coding: utf-8 -*-
 # pylint: disable-msg=W0612,E1101
-from copy import deepcopy
-from datetime import datetime, timedelta
-import operator
-import os
-
 import nose

 from pandas import DataFrame, Series
@@ -16,10 +11,9 @@

 from pandas.util.testing import assert_frame_equal

-from pandas.core.reshape import (melt, lreshape, get_dummies,
-                                 wide_to_long)
+from pandas.core.reshape import (melt, lreshape, get_dummies, wide_to_long)
 import pandas.util.testing as tm
-from pandas.compat import StringIO, cPickle, range, u
+from pandas.compat import range, u

 _multiprocess_can_split_ = True
@@ -34,9 +28,9 @@ def setUp(self):
         self.var_name = 'var'
         self.value_name = 'val'

-        self.df1 = pd.DataFrame([[ 1.067683, -1.110463, 0.20867 ],
-                                 [-1.321405, 0.368915, -1.055342],
-                                 [-0.807333, 0.08298 , -0.873361]])
+        self.df1 = pd.DataFrame([[1.067683, -1.110463, 0.20867
+                                  ], [-1.321405, 0.368915, -1.055342],
+                                 [-0.807333, 0.08298, -0.873361]])
         self.df1.columns = [list('ABC'), list('abc')]
         self.df1.columns.names = ['CAP', 'low']
@@ -45,10 +39,12 @@ def test_default_col_names(self):
         self.assertEqual(result.columns.tolist(), ['variable', 'value'])

         result1 = melt(self.df, id_vars=['id1'])
-        self.assertEqual(result1.columns.tolist(), ['id1', 'variable', 'value'])
+        self.assertEqual(result1.columns.tolist(), ['id1', 'variable', 'value'
+                                                    ])

         result2 = melt(self.df, id_vars=['id1', 'id2'])
-        self.assertEqual(result2.columns.tolist(), ['id1', 'id2', 'variable', 'value'])
+        self.assertEqual(result2.columns.tolist(), ['id1', 'id2', 'variable',
+                                                    'value'])

     def test_value_vars(self):
         result3 = melt(self.df, id_vars=['id1', 'id2'], value_vars='A')
@@ -57,8 +53,9 @@ def test_value_vars(self):
         result4 = melt(self.df, id_vars=['id1', 'id2'], value_vars=['A', 'B'])
         expected4 = DataFrame({'id1': self.df['id1'].tolist() * 2,
                                'id2': self.df['id2'].tolist() * 2,
-                               'variable': ['A']*10 + ['B']*10,
-                               'value': self.df['A'].tolist() + self.df['B'].tolist()},
+                               'variable': ['A'] * 10 + ['B'] * 10,
+                               'value': (self.df['A'].tolist() +
+                                         self.df['B'].tolist())},
                               columns=['id1', 'id2', 'variable', 'value'])
         tm.assert_frame_equal(result4, expected4)
@@ -70,18 +67,21 @@ def test_custom_var_name(self):
         self.assertEqual(result6.columns.tolist(), ['id1', 'var', 'value'])

         result7 = melt(self.df, id_vars=['id1', 'id2'], var_name=self.var_name)
-        self.assertEqual(result7.columns.tolist(), ['id1', 'id2', 'var', 'value'])
+        self.assertEqual(result7.columns.tolist(), ['id1', 'id2', 'var',
+                                                    'value'])

-        result8 = melt(self.df, id_vars=['id1', 'id2'],
-                       value_vars='A', var_name=self.var_name)
-        self.assertEqual(result8.columns.tolist(), ['id1', 'id2', 'var', 'value'])
+        result8 = melt(self.df, id_vars=['id1', 'id2'], value_vars='A',
+                       var_name=self.var_name)
+        self.assertEqual(result8.columns.tolist(), ['id1', 'id2', 'var',
+                                                    'value'])

-        result9 = melt(self.df, id_vars=['id1', 'id2'],
-                       value_vars=['A', 'B'], var_name=self.var_name)
+        result9 = melt(self.df, id_vars=['id1', 'id2'], value_vars=['A', 'B'],
+                       var_name=self.var_name)
         expected9 = DataFrame({'id1': self.df['id1'].tolist() * 2,
                                'id2': self.df['id2'].tolist() * 2,
-                               self.var_name: ['A']*10 + ['B']*10,
-                               'value': self.df['A'].tolist() + self.df['B'].tolist()},
+                               self.var_name: ['A'] * 10 + ['B'] * 10,
+                               'value': (self.df['A'].tolist() +
+                                         self.df['B'].tolist())},
                               columns=['id1', 'id2', self.var_name, 'value'])
         tm.assert_frame_equal(result9, expected9)
@@ -92,45 +92,56 @@ def test_custom_value_name(self):
         result11 = melt(self.df, id_vars=['id1'], value_name=self.value_name)
         self.assertEqual(result11.columns.tolist(), ['id1', 'variable', 'val'])

-        result12 = melt(self.df, id_vars=['id1', 'id2'], value_name=self.value_name)
-        self.assertEqual(result12.columns.tolist(), ['id1', 'id2', 'variable', 'val'])
+        result12 = melt(self.df, id_vars=['id1', 'id2'],
+                        value_name=self.value_name)
+        self.assertEqual(result12.columns.tolist(), ['id1', 'id2', 'variable',
+                                                     'val'])

-        result13 = melt(self.df, id_vars=['id1', 'id2'],
-                        value_vars='A', value_name=self.value_name)
-        self.assertEqual(result13.columns.tolist(), ['id1', 'id2', 'variable', 'val'])
+        result13 = melt(self.df, id_vars=['id1', 'id2'], value_vars='A',
+                        value_name=self.value_name)
+        self.assertEqual(result13.columns.tolist(), ['id1', 'id2', 'variable',
+                                                     'val'])

-        result14 = melt(self.df, id_vars=['id1', 'id2'],
-                        value_vars=['A', 'B'], value_name=self.value_name)
+        result14 = melt(self.df, id_vars=['id1', 'id2'], value_vars=['A', 'B'],
+                        value_name=self.value_name)
         expected14 = DataFrame({'id1': self.df['id1'].tolist() * 2,
                                 'id2': self.df['id2'].tolist() * 2,
-                                'variable': ['A']*10 + ['B']*10,
-                                self.value_name: self.df['A'].tolist() + self.df['B'].tolist()},
-                               columns=['id1', 'id2', 'variable', self.value_name])
+                                'variable': ['A'] * 10 + ['B'] * 10,
+                                self.value_name: (self.df['A'].tolist() +
+                                                  self.df['B'].tolist())},
+                               columns=['id1', 'id2', 'variable',
+                                        self.value_name])
         tm.assert_frame_equal(result14, expected14)

     def test_custom_var_and_value_name(self):
-        result15 = melt(self.df, var_name=self.var_name, value_name=self.value_name)
+        result15 = melt(self.df, var_name=self.var_name,
+                        value_name=self.value_name)
         self.assertEqual(result15.columns.tolist(), ['var', 'val'])

-        result16 = melt(self.df, id_vars=['id1'], var_name=self.var_name, value_name=self.value_name)
+        result16 = melt(self.df, id_vars=['id1'], var_name=self.var_name,
+                        value_name=self.value_name)
         self.assertEqual(result16.columns.tolist(), ['id1', 'var', 'val'])

         result17 = melt(self.df, id_vars=['id1', 'id2'],
                         var_name=self.var_name, value_name=self.value_name)
-        self.assertEqual(result17.columns.tolist(), ['id1', 'id2', 'var', 'val'])
+        self.assertEqual(result17.columns.tolist(), ['id1', 'id2', 'var', 'val'
+                                                     ])

-        result18 = melt(self.df, id_vars=['id1', 'id2'],
-                        value_vars='A', var_name=self.var_name, value_name=self.value_name)
-        self.assertEqual(result18.columns.tolist(), ['id1', 'id2', 'var', 'val'])
+        result18 = melt(self.df, id_vars=['id1', 'id2'], value_vars='A',
+                        var_name=self.var_name, value_name=self.value_name)
+        self.assertEqual(result18.columns.tolist(), ['id1', 'id2', 'var', 'val'
+                                                     ])

-        result19 = melt(self.df, id_vars=['id1', 'id2'],
-                        value_vars=['A', 'B'], var_name=self.var_name, value_name=self.value_name)
+        result19 = melt(self.df, id_vars=['id1', 'id2'], value_vars=['A', 'B'],
+                        var_name=self.var_name, value_name=self.value_name)
         expected19 = DataFrame({'id1': self.df['id1'].tolist() * 2,
                                 'id2': self.df['id2'].tolist() * 2,
-                                self.var_name: ['A']*10 + ['B']*10,
-                                self.value_name: self.df['A'].tolist() + self.df['B'].tolist()},
-                               columns=['id1', 'id2', self.var_name, self.value_name])
+                                self.var_name: ['A'] * 10 + ['B'] * 10,
+                                self.value_name: (self.df['A'].tolist() +
+                                                  self.df['B'].tolist())},
+                               columns=['id1', 'id2', self.var_name,
+                                        self.value_name])
         tm.assert_frame_equal(result19, expected19)

         df20 = self.df.copy()
@@ -142,7 +153,7 @@ def test_col_level(self):
         res1 = melt(self.df1, col_level=0)
         res2 = melt(self.df1, col_level='CAP')
         self.assertEqual(res1.columns.tolist(), ['CAP', 'value'])
-        self.assertEqual(res1.columns.tolist(), ['CAP', 'value'])
+        self.assertEqual(res2.columns.tolist(), ['CAP', 'value'])

     def test_multiindex(self):
         res = pd.melt(self.df1)
@@ -154,7 +165,8 @@ class TestGetDummies(tm.TestCase):
     sparse = False

     def setUp(self):
-        self.df = DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'b', 'c'],
+        self.df = DataFrame({'A': ['a', 'b', 'a'],
+                             'B': ['b', 'b', 'c'],
                              'C': [1, 2, 3]})

     def test_basic(self):
@@ -162,14 +174,21 @@ def test_basic(self):
         s_series = Series(s_list)
         s_series_index = Series(s_list, list('ABC'))

-        expected = DataFrame({'a': {0: 1.0, 1: 0.0, 2: 0.0},
-                              'b': {0: 0.0, 1: 1.0, 2: 0.0},
-                              'c': {0: 0.0, 1: 0.0, 2: 1.0}})
+        expected = DataFrame({'a': {0: 1.0,
+                                    1: 0.0,
+                                    2: 0.0},
+                              'b': {0: 0.0,
+                                    1: 1.0,
+                                    2: 0.0},
+                              'c': {0: 0.0,
+                                    1: 0.0,
+                                    2: 1.0}})
         assert_frame_equal(get_dummies(s_list, sparse=self.sparse), expected)
         assert_frame_equal(get_dummies(s_series, sparse=self.sparse), expected)

         expected.index = list('ABC')
-        assert_frame_equal(get_dummies(s_series_index, sparse=self.sparse), expected)
+        assert_frame_equal(
+            get_dummies(s_series_index, sparse=self.sparse), expected)

     def test_basic_types(self):
         # GH 10531
@@ -180,14 +199,16 @@ def test_basic_types(self):
                          'c': [2, 3, 3, 3, 2]})

         if not self.sparse:
-            exp_df_type = DataFrame
+            exp_df_type = DataFrame
             exp_blk_type = pd.core.internals.FloatBlock
         else:
             exp_df_type = SparseDataFrame
             exp_blk_type = pd.core.internals.SparseBlock

-        self.assertEqual(type(get_dummies(s_list, sparse=self.sparse)), exp_df_type)
-        self.assertEqual(type(get_dummies(s_series, sparse=self.sparse)), exp_df_type)
+        self.assertEqual(
+            type(get_dummies(s_list, sparse=self.sparse)), exp_df_type)
+        self.assertEqual(
+            type(get_dummies(s_series, sparse=self.sparse)), exp_df_type)

         r = get_dummies(s_df, sparse=self.sparse, columns=s_df.columns)
         self.assertEqual(type(r), exp_df_type)
@@ -197,15 +218,15 @@ def test_basic_types(self):
         self.assertEqual(type(r[['a_1']]._data.blocks[0]), exp_blk_type)
         self.assertEqual(type(r[['a_2']]._data.blocks[0]), exp_blk_type)

-
     def test_just_na(self):
         just_na_list = [np.nan]
         just_na_series = Series(just_na_list)
-        just_na_series_index = Series(just_na_list, index = ['A'])
+        just_na_series_index = Series(just_na_list, index=['A'])

         res_list = get_dummies(just_na_list, sparse=self.sparse)
         res_series = get_dummies(just_na_series, sparse=self.sparse)
-        res_series_index = get_dummies(just_na_series_index, sparse=self.sparse)
+        res_series_index = get_dummies(just_na_series_index,
+                                       sparse=self.sparse)

         self.assertEqual(res_list.empty, True)
         self.assertEqual(res_series.empty, True)
@@ -218,56 +239,79 @@ def test_just_na(self):
     def test_include_na(self):
         s = ['a', 'b', np.nan]
         res = get_dummies(s, sparse=self.sparse)
-        exp = DataFrame({'a': {0: 1.0, 1: 0.0, 2: 0.0},
-                         'b': {0: 0.0, 1: 1.0, 2: 0.0}})
+        exp = DataFrame({'a': {0: 1.0,
+                               1: 0.0,
+                               2: 0.0},
+                         'b': {0: 0.0,
+                               1: 1.0,
+                               2: 0.0}})
         assert_frame_equal(res, exp)

         # Sparse dataframes do not allow nan labelled columns, see #GH8822
         res_na = get_dummies(s, dummy_na=True, sparse=self.sparse)
-        exp_na = DataFrame({nan: {0: 0.0, 1: 0.0, 2: 1.0},
-                            'a': {0: 1.0, 1: 0.0, 2: 0.0},
-                            'b': {0: 0.0, 1: 1.0, 2: 0.0}}).reindex_axis(['a', 'b', nan], 1)
+        exp_na = DataFrame({nan: {0: 0.0,
+                                  1: 0.0,
+                                  2: 1.0},
+                            'a': {0: 1.0,
+                                  1: 0.0,
+                                  2: 0.0},
+                            'b': {0: 0.0,
+                                  1: 1.0,
+                                  2: 0.0}}).reindex_axis(
+                                      ['a', 'b', nan], 1)
         # hack (NaN handling in assert_index_equal)
         exp_na.columns = res_na.columns
         assert_frame_equal(res_na, exp_na)

         res_just_na = get_dummies([nan], dummy_na=True, sparse=self.sparse)
-        exp_just_na = DataFrame(Series(1.0,index=[0]),columns=[nan])
+        exp_just_na = DataFrame(Series(1.0, index=[0]), columns=[nan])
         tm.assert_numpy_array_equal(res_just_na.values, exp_just_na.values)

-    def test_unicode(self):  # See GH 6885 - get_dummies chokes on unicode values
+    def test_unicode(self
+                     ):  # See GH 6885 - get_dummies chokes on unicode values
         import unicodedata
         e = 'e'
         eacute = unicodedata.lookup('LATIN SMALL LETTER E WITH ACUTE')
         s = [e, eacute, eacute]
         res = get_dummies(s, prefix='letter', sparse=self.sparse)
-        exp = DataFrame({'letter_e': {0: 1.0, 1: 0.0, 2: 0.0},
-                         u('letter_%s') % eacute: {0: 0.0, 1: 1.0, 2: 1.0}})
+        exp = DataFrame({'letter_e': {0: 1.0,
+                                      1: 0.0,
+                                      2: 0.0},
+                         u('letter_%s') % eacute: {0: 0.0,
+                                                   1: 1.0,
+                                                   2: 1.0}})
         assert_frame_equal(res, exp)

     def test_dataframe_dummies_all_obj(self):
         df = self.df[['A', 'B']]
         result = get_dummies(df, sparse=self.sparse)
-        expected = DataFrame({'A_a': [1., 0, 1], 'A_b': [0., 1, 0],
-                              'B_b': [1., 1, 0], 'B_c': [0., 0, 1]})
+        expected = DataFrame({'A_a': [1., 0, 1],
+                              'A_b': [0., 1, 0],
+                              'B_b': [1., 1, 0],
+                              'B_c': [0., 0, 1]})
         assert_frame_equal(result, expected)

     def test_dataframe_dummies_mix_default(self):
         df = self.df
         result = get_dummies(df, sparse=self.sparse)
-        expected = DataFrame({'C': [1, 2, 3], 'A_a': [1., 0, 1],
-                              'A_b': [0., 1, 0], 'B_b': [1., 1, 0],
+        expected = DataFrame({'C': [1, 2, 3],
+                              'A_a': [1., 0, 1],
+                              'A_b': [0., 1, 0],
+                              'B_b': [1., 1, 0],
                               'B_c': [0., 0, 1]})
         expected = expected[['C', 'A_a', 'A_b', 'B_b', 'B_c']]
         assert_frame_equal(result, expected)

     def test_dataframe_dummies_prefix_list(self):
         prefixes = ['from_A', 'from_B']
-        df = DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'b', 'c'],
+        df = DataFrame({'A': ['a', 'b', 'a'],
+                        'B': ['b', 'b', 'c'],
                         'C': [1, 2, 3]})
         result = get_dummies(df, prefix=prefixes, sparse=self.sparse)
-        expected = DataFrame({'C': [1, 2, 3], 'from_A_a': [1., 0, 1],
-                              'from_A_b': [0., 1, 0], 'from_B_b': [1., 1, 0],
+        expected = DataFrame({'C': [1, 2, 3],
+                              'from_A_a': [1., 0, 1],
+                              'from_A_b': [0., 1, 0],
+                              'from_B_b': [1., 1, 0],
                               'from_B_c': [0., 0, 1]})
         expected = expected[['C', 'from_A_a', 'from_A_b', 'from_B_b',
                              'from_B_c']]
@@ -285,17 +329,21 @@ def test_dataframe_dummies_prefix_str(self):

     def test_dataframe_dummies_subset(self):
         df = self.df
-        result = get_dummies(df, prefix=['from_A'],
-                             columns=['A'], sparse=self.sparse)
-        expected = DataFrame({'from_A_a': [1., 0, 1], 'from_A_b': [0., 1, 0],
-                              'B': ['b', 'b', 'c'], 'C': [1, 2, 3]})
+        result = get_dummies(df, prefix=['from_A'], columns=['A'],
+                             sparse=self.sparse)
+        expected = DataFrame({'from_A_a': [1., 0, 1],
+                              'from_A_b': [0., 1, 0],
+                              'B': ['b', 'b', 'c'],
+                              'C': [1, 2, 3]})
         assert_frame_equal(result, expected)

     def test_dataframe_dummies_prefix_sep(self):
         df = self.df
         result = get_dummies(df, prefix_sep='..', sparse=self.sparse)
-        expected = DataFrame({'C': [1, 2, 3], 'A..a': [1., 0, 1],
-                              'A..b': [0., 1, 0], 'B..b': [1., 1, 0],
+        expected = DataFrame({'C': [1, 2, 3],
+                              'A..a': [1., 0, 1],
+                              'A..b': [0., 1, 0],
+                              'B..b': [1., 1, 0],
                               'B..c': [0., 0, 1]})
         expected = expected[['C', 'A..a', 'A..b', 'B..b', 'B..c']]
         assert_frame_equal(result, expected)
@@ -304,7 +352,8 @@ def test_dataframe_dummies_prefix_sep(self):
         expected = expected.rename(columns={'B..b': 'B__b', 'B..c': 'B__c'})
         assert_frame_equal(result, expected)

-        result = get_dummies(df, prefix_sep={'A': '..', 'B': '__'}, sparse=self.sparse)
+        result = get_dummies(df, prefix_sep={'A': '..',
+                                             'B': '__'}, sparse=self.sparse)
         assert_frame_equal(result, expected)

     def test_dataframe_dummies_prefix_bad_length(self):
@@ -317,11 +366,14 @@ def test_dataframe_dummies_prefix_sep_bad_length(self):

     def test_dataframe_dummies_prefix_dict(self):
         prefixes = {'A': 'from_A', 'B': 'from_B'}
-        df = DataFrame({'A': ['a', 'b', 'a'], 'B': ['b', 'b', 'c'],
+        df = DataFrame({'A': ['a', 'b', 'a'],
+                        'B': ['b', 'b', 'c'],
                         'C': [1, 2, 3]})
         result = get_dummies(df, prefix=prefixes, sparse=self.sparse)
-        expected = DataFrame({'from_A_a': [1., 0, 1], 'from_A_b': [0., 1, 0],
-                              'from_B_b': [1., 1, 0], 'from_B_c': [0., 0, 1],
+        expected = DataFrame({'from_A_a': [1., 0, 1],
+                              'from_A_b': [0., 1, 0],
+                              'from_B_b': [1., 1, 0],
+                              'from_B_c': [0., 0, 1],
                               'C': [1, 2, 3]})
         assert_frame_equal(result, expected)
@@ -329,11 +381,15 @@ def test_dataframe_dummies_with_na(self):
         df = self.df
         df.loc[3, :] = [np.nan, np.nan, np.nan]
         result = get_dummies(df, dummy_na=True, sparse=self.sparse)
-        expected = DataFrame({'C': [1, 2, 3, np.nan], 'A_a': [1., 0, 1, 0],
-                              'A_b': [0., 1, 0, 0], 'A_nan': [0., 0, 0, 1], 'B_b': [1., 1, 0, 0],
-                              'B_c': [0., 0, 1, 0], 'B_nan': [0., 0, 0, 1]})
-        expected = expected[['C', 'A_a', 'A_b', 'A_nan', 'B_b', 'B_c',
-                             'B_nan']]
+        expected = DataFrame({'C': [1, 2, 3, np.nan],
+                              'A_a': [1., 0, 1, 0],
+                              'A_b': [0., 1, 0, 0],
+                              'A_nan': [0., 0, 0, 1],
+                              'B_b': [1., 1, 0, 0],
+                              'B_c': [0., 0, 1, 0],
+                              'B_nan': [0., 0, 0, 1]})
+        expected = expected[['C', 'A_a', 'A_b', 'A_nan', 'B_b', 'B_c', 'B_nan'
+                             ]]
         assert_frame_equal(result, expected)

         result = get_dummies(df, dummy_na=False, sparse=self.sparse)
@@ -344,12 +400,15 @@ def test_dataframe_dummies_with_categorical(self):
         df = self.df
         df['cat'] = pd.Categorical(['x', 'y', 'y'])
         result = get_dummies(df, sparse=self.sparse)
-        expected =
DataFrame({'C': [1, 2, 3], 'A_a': [1., 0, 1], - 'A_b': [0., 1, 0], 'B_b': [1., 1, 0], - 'B_c': [0., 0, 1], 'cat_x': [1., 0, 0], + expected = DataFrame({'C': [1, 2, 3], + 'A_a': [1., 0, 1], + 'A_b': [0., 1, 0], + 'B_b': [1., 1, 0], + 'B_c': [0., 0, 1], + 'cat_x': [1., 0, 0], 'cat_y': [0., 1, 1]}) - expected = expected[['C', 'A_a', 'A_b', 'B_b', 'B_c', - 'cat_x', 'cat_y']] + expected = expected[['C', 'A_a', 'A_b', 'B_b', 'B_c', 'cat_x', 'cat_y' + ]] assert_frame_equal(result, expected) @@ -360,14 +419,15 @@ class TestGetDummiesSparse(TestGetDummies): class TestLreshape(tm.TestCase): def test_pairs(self): - data = {'birthdt': ['08jan2009', '20dec2008', '30dec2008', - '21dec2008', '11jan2009'], + data = {'birthdt': ['08jan2009', '20dec2008', '30dec2008', '21dec2008', + '11jan2009'], 'birthwt': [1766, 3301, 1454, 3139, 4133], 'id': [101, 102, 103, 104, 105], 'sex': ['Male', 'Female', 'Female', 'Female', 'Female'], 'visitdt1': ['11jan2009', '22dec2008', '04jan2009', '29dec2008', '20jan2009'], - 'visitdt2': ['21jan2009', nan, '22jan2009', '31dec2008', '03feb2009'], + 'visitdt2': + ['21jan2009', nan, '22jan2009', '31dec2008', '03feb2009'], 'visitdt3': ['05feb2009', nan, nan, '02jan2009', '15feb2009'], 'wt1': [1823, 3338, 1549, 3298, 4306], 'wt2': [2011.0, nan, 1892.0, 3338.0, 4575.0], @@ -379,49 +439,47 @@ def test_pairs(self): 'wt': ['wt%d' % i for i in range(1, 4)]} result = lreshape(df, spec) - exp_data = {'birthdt': ['08jan2009', '20dec2008', '30dec2008', - '21dec2008', '11jan2009', '08jan2009', - '30dec2008', '21dec2008', '11jan2009', - '08jan2009', '21dec2008', '11jan2009'], - 'birthwt': [1766, 3301, 1454, 3139, 4133, 1766, - 1454, 3139, 4133, 1766, 3139, 4133], - 'id': [101, 102, 103, 104, 105, 101, - 103, 104, 105, 101, 104, 105], + exp_data = {'birthdt': + ['08jan2009', '20dec2008', '30dec2008', '21dec2008', + '11jan2009', '08jan2009', '30dec2008', '21dec2008', + '11jan2009', '08jan2009', '21dec2008', '11jan2009'], + 'birthwt': [1766, 3301, 1454, 3139, 4133, 1766, 
1454, 3139, + 4133, 1766, 3139, 4133], + 'id': [101, 102, 103, 104, 105, 101, 103, 104, 105, 101, + 104, 105], 'sex': ['Male', 'Female', 'Female', 'Female', 'Female', 'Male', 'Female', 'Female', 'Female', 'Male', 'Female', 'Female'], - 'visitdt': ['11jan2009', '22dec2008', '04jan2009', '29dec2008', - '20jan2009', '21jan2009', '22jan2009', '31dec2008', - '03feb2009', '05feb2009', '02jan2009', '15feb2009'], + 'visitdt': ['11jan2009', '22dec2008', '04jan2009', + '29dec2008', '20jan2009', '21jan2009', + '22jan2009', '31dec2008', '03feb2009', + '05feb2009', '02jan2009', '15feb2009'], 'wt': [1823.0, 3338.0, 1549.0, 3298.0, 4306.0, 2011.0, 1892.0, 3338.0, 4575.0, 2293.0, 3377.0, 4805.0]} exp = DataFrame(exp_data, columns=result.columns) tm.assert_frame_equal(result, exp) result = lreshape(df, spec, dropna=False) - exp_data = {'birthdt': ['08jan2009', '20dec2008', '30dec2008', - '21dec2008', '11jan2009', - '08jan2009', '20dec2008', '30dec2008', - '21dec2008', '11jan2009', - '08jan2009', '20dec2008', '30dec2008', - '21dec2008', '11jan2009'], - 'birthwt': [1766, 3301, 1454, 3139, 4133, - 1766, 3301, 1454, 3139, 4133, - 1766, 3301, 1454, 3139, 4133], - 'id': [101, 102, 103, 104, 105, - 101, 102, 103, 104, 105, + exp_data = {'birthdt': + ['08jan2009', '20dec2008', '30dec2008', '21dec2008', + '11jan2009', '08jan2009', '20dec2008', '30dec2008', + '21dec2008', '11jan2009', '08jan2009', '20dec2008', + '30dec2008', '21dec2008', '11jan2009'], + 'birthwt': [1766, 3301, 1454, 3139, 4133, 1766, 3301, 1454, + 3139, 4133, 1766, 3301, 1454, 3139, 4133], + 'id': [101, 102, 103, 104, 105, 101, 102, 103, 104, 105, 101, 102, 103, 104, 105], 'sex': ['Male', 'Female', 'Female', 'Female', 'Female', 'Male', 'Female', 'Female', 'Female', 'Female', 'Male', 'Female', 'Female', 'Female', 'Female'], 'visitdt': ['11jan2009', '22dec2008', '04jan2009', - '29dec2008', '20jan2009', - '21jan2009', nan, '22jan2009', - '31dec2008', '03feb2009', - '05feb2009', nan, nan, '02jan2009', '15feb2009'], - 'wt': 
[1823.0, 3338.0, 1549.0, 3298.0, 4306.0, 2011.0, - nan, 1892.0, 3338.0, 4575.0, 2293.0, nan, nan, - 3377.0, 4805.0]} + '29dec2008', '20jan2009', '21jan2009', nan, + '22jan2009', '31dec2008', '03feb2009', + '05feb2009', nan, nan, '02jan2009', + '15feb2009'], + 'wt': [1823.0, 3338.0, 1549.0, 3298.0, 4306.0, 2011.0, nan, + 1892.0, 3338.0, 4575.0, 2293.0, nan, nan, 3377.0, + 4805.0]} exp = DataFrame(exp_data, columns=result.columns) tm.assert_frame_equal(result, exp) @@ -429,22 +487,32 @@ def test_pairs(self): 'wt': ['wt%d' % i for i in range(1, 4)]} self.assertRaises(ValueError, lreshape, df, spec) + class TestWideToLong(tm.TestCase): + def test_simple(self): np.random.seed(123) x = np.random.randn(3) - df = pd.DataFrame({"A1970" : {0 : "a", 1 : "b", 2 : "c"}, - "A1980" : {0 : "d", 1 : "e", 2 : "f"}, - "B1970" : {0 : 2.5, 1 : 1.2, 2 : .7}, - "B1980" : {0 : 3.2, 1 : 1.3, 2 : .1}, - "X" : dict(zip(range(3), x)) - }) + df = pd.DataFrame({"A1970": {0: "a", + 1: "b", + 2: "c"}, + "A1980": {0: "d", + 1: "e", + 2: "f"}, + "B1970": {0: 2.5, + 1: 1.2, + 2: .7}, + "B1980": {0: 3.2, + 1: 1.3, + 2: .1}, + "X": dict(zip( + range(3), x))}) df["id"] = df.index - exp_data = {"X" : x.tolist() + x.tolist(), - "A" : ['a', 'b', 'c', 'd', 'e', 'f'], - "B" : [2.5, 1.2, 0.7, 3.2, 1.3, 0.1], - "year" : [1970, 1970, 1970, 1980, 1980, 1980], - "id" : [0, 1, 2, 0, 1, 2]} + exp_data = {"X": x.tolist() + x.tolist(), + "A": ['a', 'b', 'c', 'd', 'e', 'f'], + "B": [2.5, 1.2, 0.7, 3.2, 1.3, 0.1], + "year": [1970, 1970, 1970, 1980, 1980, 1980], + "id": [0, 1, 2, 0, 1, 2]} exp_frame = DataFrame(exp_data) exp_frame = exp_frame.set_index(['id', 'year'])[["X", "A", "B"]] long_frame = wide_to_long(df, ["A", "B"], i="id", j="year") @@ -452,12 +520,15 @@ def test_simple(self): def test_stubs(self): # GH9204 - df = pd.DataFrame([[0,1,2,3,8],[4,5,6,7,9]]) + df = pd.DataFrame([[0, 1, 2, 3, 8], [4, 5, 6, 7, 9]]) df.columns = ['id', 'inc1', 'inc2', 'edu1', 'edu2'] stubs = ['inc', 'edu'] - df_long = 
pd.wide_to_long(df, stubs, i='id', j='age') - self.assertEqual(stubs,['inc', 'edu']) + # TODO: unused? + df_long = pd.wide_to_long(df, stubs, i='id', j='age') # noqa + + self.assertEqual(stubs, ['inc', 'edu']) + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], diff --git a/pandas/tests/test_rplot.py b/pandas/tests/test_rplot.py index 4342417db193b..6be6c53cbb201 100644 --- a/pandas/tests/test_rplot.py +++ b/pandas/tests/test_rplot.py @@ -3,11 +3,11 @@ import pandas.util.testing as tm from pandas import read_csv import os -import nose with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): import pandas.tools.rplot as rplot + def curpath(): pth, _ = os.path.split(os.path.abspath(__file__)) return pth @@ -37,6 +37,7 @@ class TestUtilityFunctions(tm.TestCase): """ Tests for RPlot utility functions. """ + def setUp(self): path = os.path.join(curpath(), 'data/iris.csv') self.data = read_csv(path, sep=',') @@ -62,8 +63,8 @@ def test_make_aes2(self): alpha=rplot.ScaleShape('test')) def test_dictionary_union(self): - dict1 = {1 : 1, 2 : 2, 3 : 3} - dict2 = {1 : 1, 2 : 2, 4 : 4} + dict1 = {1: 1, 2: 2, 3: 3} + dict2 = {1: 1, 2: 2, 4: 4} union = rplot.dictionary_union(dict1, dict2) self.assertEqual(len(union), 4) keys = list(union.keys()) @@ -103,6 +104,7 @@ def test_sequence_layers(self): @tm.mplskip class TestTrellis(tm.TestCase): + def setUp(self): path = os.path.join(curpath(), 'data/tips.csv') self.data = read_csv(path, sep=',') @@ -151,6 +153,7 @@ def test_trellis_cols_rows(self): @tm.mplskip class TestScaleGradient(tm.TestCase): + def setUp(self): path = os.path.join(curpath(), 'data/iris.csv') self.data = read_csv(path, sep=',') @@ -160,7 +163,7 @@ def setUp(self): def test_gradient(self): for index in range(len(self.data)): - row = self.data.iloc[index] + # row = self.data.iloc[index] r, g, b = self.gradient(self.data, index) r1, g1, b1 = self.gradient.colour1 r2, g2, b2 = self.gradient.colour2 @@ 
-171,10 +174,12 @@ def test_gradient(self): @tm.mplskip class TestScaleGradient2(tm.TestCase): + def setUp(self): path = os.path.join(curpath(), 'data/iris.csv') self.data = read_csv(path, sep=',') - self.gradient = rplot.ScaleGradient2("SepalLength", colour1=(0.2, 0.3, 0.4), colour2=(0.8, 0.7, 0.6), colour3=(0.5, 0.5, 0.5)) + self.gradient = rplot.ScaleGradient2("SepalLength", colour1=( + 0.2, 0.3, 0.4), colour2=(0.8, 0.7, 0.6), colour3=(0.5, 0.5, 0.5)) def test_gradient2(self): for index in range(len(self.data)): @@ -199,6 +204,7 @@ def test_gradient2(self): @tm.mplskip class TestScaleRandomColour(tm.TestCase): + def setUp(self): path = os.path.join(curpath(), 'data/iris.csv') self.data = read_csv(path, sep=',') @@ -219,6 +225,7 @@ def test_random_colour(self): @tm.mplskip class TestScaleConstant(tm.TestCase): + def test_scale_constant(self): scale = rplot.ScaleConstant(1.0) self.assertEqual(scale(None, None), 1.0) @@ -227,6 +234,7 @@ def test_scale_constant(self): class TestScaleSize(tm.TestCase): + def setUp(self): path = os.path.join(curpath(), 'data/iris.csv') self.data = read_csv(path, sep=',') @@ -236,7 +244,8 @@ def setUp(self): def test_scale_size(self): for index in range(len(self.data)): marker = self.scale1(self.data, index) - self.assertTrue(marker in ['o', '+', 's', '*', '^', '<', '>', 'v', '|', 'x']) + self.assertTrue( + marker in ['o', '+', 's', '*', '^', '<', '>', 'v', '|', 'x']) def test_scale_overflow(self): def f(): @@ -248,6 +257,7 @@ def f(): @tm.mplskip class TestRPlot(tm.TestCase): + def test_rplot1(self): import matplotlib.pyplot as plt path = os.path.join(curpath(), 'data/tips.csv') @@ -255,7 +265,8 @@ def test_rplot1(self): self.data = read_csv(path, sep=',') self.plot = rplot.RPlot(self.data, x='tip', y='total_bill') self.plot.add(rplot.TrellisGrid(['sex', 'smoker'])) - self.plot.add(rplot.GeomPoint(colour=rplot.ScaleRandomColour('day'), shape=rplot.ScaleShape('size'))) + self.plot.add(rplot.GeomPoint(colour=rplot.ScaleRandomColour( + 
'day'), shape=rplot.ScaleShape('size'))) self.fig = plt.gcf() self.plot.render(self.fig) @@ -266,7 +277,8 @@ def test_rplot2(self): self.data = read_csv(path, sep=',') self.plot = rplot.RPlot(self.data, x='tip', y='total_bill') self.plot.add(rplot.TrellisGrid(['.', 'smoker'])) - self.plot.add(rplot.GeomPoint(colour=rplot.ScaleRandomColour('day'), shape=rplot.ScaleShape('size'))) + self.plot.add(rplot.GeomPoint(colour=rplot.ScaleRandomColour( + 'day'), shape=rplot.ScaleShape('size'))) self.fig = plt.gcf() self.plot.render(self.fig) @@ -277,7 +289,8 @@ def test_rplot3(self): self.data = read_csv(path, sep=',') self.plot = rplot.RPlot(self.data, x='tip', y='total_bill') self.plot.add(rplot.TrellisGrid(['sex', '.'])) - self.plot.add(rplot.GeomPoint(colour=rplot.ScaleRandomColour('day'), shape=rplot.ScaleShape('size'))) + self.plot.add(rplot.GeomPoint(colour=rplot.ScaleRandomColour( + 'day'), shape=rplot.ScaleShape('size'))) self.fig = plt.gcf() self.plot.render(self.fig) @@ -287,8 +300,12 @@ def test_rplot_iris(self): plt.figure() self.data = read_csv(path, sep=',') plot = rplot.RPlot(self.data, x='SepalLength', y='SepalWidth') - plot.add(rplot.GeomPoint(colour=rplot.ScaleGradient('PetalLength', colour1=(0.0, 1.0, 0.5), colour2=(1.0, 0.0, 0.5)), - size=rplot.ScaleSize('PetalWidth', min_size=10.0, max_size=200.0), + plot.add(rplot.GeomPoint( + colour=rplot.ScaleGradient('PetalLength', + colour1=(0.0, 1.0, 0.5), + colour2=(1.0, 0.0, 0.5)), + size=rplot.ScaleSize('PetalWidth', min_size=10.0, + max_size=200.0), shape=rplot.ScaleShape('Name'))) self.fig = plt.gcf() plot.render(self.fig) diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py index a2b1a84e78f22..4045825578aff 100644 --- a/pandas/tests/test_series.py +++ b/pandas/tests/test_series.py @@ -1,7 +1,6 @@ # coding=utf-8 # pylint: disable-msg=E1101,W0612 -import re import sys from datetime import datetime, timedelta import operator @@ -9,7 +8,6 @@ from inspect import getargspec from itertools import 
product, starmap from distutils.version import LooseVersion -import warnings import random import nose @@ -36,15 +34,12 @@ from pandas.compat import StringIO, lrange, range, zip, u, OrderedDict, long from pandas import compat -from pandas.util.testing import (assert_series_equal, - assert_almost_equal, - assert_frame_equal, - assert_index_equal, +from pandas.util.testing import (assert_series_equal, assert_almost_equal, + assert_frame_equal, assert_index_equal, ensure_clean) import pandas.util.testing as tm - -#------------------------------------------------------------------------------ +# ----------------------------------------------------------------------------- # Series test cases JOIN_TYPES = ['inner', 'outer', 'left', 'right'] @@ -108,20 +103,21 @@ def get_expected(s, name): result = result.astype('int64') elif not com.is_list_like(result): return result - return Series(result,index=s.index) + return Series(result, index=s.index) def compare(s, name): - a = getattr(s.dt,prop) - b = get_expected(s,prop) + a = getattr(s.dt, prop) + b = get_expected(s, prop) if not (com.is_list_like(a) and com.is_list_like(b)): - self.assertEqual(a,b) + self.assertEqual(a, b) else: - tm.assert_series_equal(a,b) + tm.assert_series_equal(a, b) # datetimeindex - for s in [Series(date_range('20130101',periods=5)), - Series(date_range('20130101',periods=5,freq='s')), - Series(date_range('20130101 00:00:00',periods=5,freq='ms'))]: + for s in [Series(date_range('20130101', periods=5)), + Series(date_range('20130101', periods=5, freq='s')), + Series(date_range('20130101 00:00:00', periods=5, freq='ms')) + ]: for prop in ok_for_dt: # we test freq below if prop != 'freq': @@ -131,64 +127,65 @@ def compare(s, name): getattr(s.dt, prop) result = s.dt.to_pydatetime() - self.assertIsInstance(result,np.ndarray) + self.assertIsInstance(result, np.ndarray) self.assertTrue(result.dtype == object) result = s.dt.tz_localize('US/Eastern') - expected = 
Series(DatetimeIndex(s.values).tz_localize('US/Eastern'),index=s.index) + expected = Series( + DatetimeIndex(s.values).tz_localize('US/Eastern'), + index=s.index) tm.assert_series_equal(result, expected) tz_result = result.dt.tz self.assertEqual(str(tz_result), 'US/Eastern') freq_result = s.dt.freq - self.assertEqual(freq_result, DatetimeIndex(s.values, freq='infer').freq) + self.assertEqual(freq_result, DatetimeIndex(s.values, + freq='infer').freq) # let's localize, then convert result = s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern') - expected = Series(DatetimeIndex(s.values).tz_localize('UTC').tz_convert('US/Eastern'),index=s.index) + expected = Series( + DatetimeIndex(s.values).tz_localize('UTC').tz_convert( + 'US/Eastern'), index=s.index) tm.assert_series_equal(result, expected) # round - s = Series(pd.to_datetime(['2012-01-01 13:00:00', - '2012-01-01 12:01:00', - '2012-01-01 08:00:00'])) + s = Series(pd.to_datetime( + ['2012-01-01 13:00:00', '2012-01-01 12:01:00', + '2012-01-01 08:00:00'])) result = s.dt.round('D') - expected = Series(pd.to_datetime(['2012-01-02', - '2012-01-02', + expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02', '2012-01-01'])) tm.assert_series_equal(result, expected) # round with tz - result = s.dt.tz_localize('UTC').dt.tz_convert( - 'US/Eastern').dt.round('D') - expected = Series(pd.to_datetime(['2012-01-01', - '2012-01-01', - '2012-01-01'] - ).tz_localize('US/Eastern')) + result = s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern').dt.round( + 'D') + expected = Series(pd.to_datetime(['2012-01-01', '2012-01-01', + '2012-01-01']).tz_localize( + 'US/Eastern')) tm.assert_series_equal(result, expected) # floor - s = Series(pd.to_datetime(['2012-01-01 13:00:00', - '2012-01-01 12:01:00', - '2012-01-01 08:00:00'])) + s = Series(pd.to_datetime( + ['2012-01-01 13:00:00', '2012-01-01 12:01:00', + '2012-01-01 08:00:00'])) result = s.dt.floor('D') - expected = Series(pd.to_datetime(['2012-01-01', - '2012-01-01', + expected = 
Series(pd.to_datetime(['2012-01-01', '2012-01-01', '2012-01-01'])) tm.assert_series_equal(result, expected) # ceil - s = Series(pd.to_datetime(['2012-01-01 13:00:00', - '2012-01-01 12:01:00', - '2012-01-01 08:00:00'])) + s = Series(pd.to_datetime( + ['2012-01-01 13:00:00', '2012-01-01 12:01:00', + '2012-01-01 08:00:00'])) result = s.dt.ceil('D') - expected = Series(pd.to_datetime(['2012-01-02', - '2012-01-02', + expected = Series(pd.to_datetime(['2012-01-02', '2012-01-02', '2012-01-02'])) tm.assert_series_equal(result, expected) # datetimeindex with tz - s = Series(date_range('20130101',periods=5,tz='US/Eastern')) + s = Series(date_range('20130101', periods=5, tz='US/Eastern')) for prop in ok_for_dt: # we test freq below @@ -196,25 +193,28 @@ def compare(s, name): compare(s, prop) for prop in ok_for_dt_methods: - getattr(s.dt,prop) + getattr(s.dt, prop) result = s.dt.to_pydatetime() - self.assertIsInstance(result,np.ndarray) + self.assertIsInstance(result, np.ndarray) self.assertTrue(result.dtype == object) result = s.dt.tz_convert('CET') - expected = Series(s._values.tz_convert('CET'),index=s.index) + expected = Series(s._values.tz_convert('CET'), index=s.index) tm.assert_series_equal(result, expected) tz_result = result.dt.tz self.assertEqual(str(tz_result), 'CET') freq_result = s.dt.freq - self.assertEqual(freq_result, DatetimeIndex(s.values, freq='infer').freq) + self.assertEqual(freq_result, DatetimeIndex(s.values, + freq='infer').freq) # timedeltaindex - for s in [Series(timedelta_range('1 day',periods=5),index=list('abcde')), - Series(timedelta_range('1 day 01:23:45',periods=5,freq='s')), - Series(timedelta_range('2 days 01:23:45.012345',periods=5,freq='ms'))]: + for s in [Series( + timedelta_range('1 day', periods=5), index=list('abcde')), + Series(timedelta_range('1 day 01:23:45', periods=5, freq='s')), + Series(timedelta_range('2 days 01:23:45.012345', periods=5, + freq='ms'))]: for prop in ok_for_td: # we test freq below if prop != 'freq': @@ -224,30 
+224,38 @@ def compare(s, name): getattr(s.dt, prop) result = s.dt.components - self.assertIsInstance(result,DataFrame) - tm.assert_index_equal(result.index,s.index) + self.assertIsInstance(result, DataFrame) + tm.assert_index_equal(result.index, s.index) result = s.dt.to_pytimedelta() - self.assertIsInstance(result,np.ndarray) + self.assertIsInstance(result, np.ndarray) self.assertTrue(result.dtype == object) result = s.dt.total_seconds() - self.assertIsInstance(result,pd.Series) + self.assertIsInstance(result, pd.Series) self.assertTrue(result.dtype == 'float64') freq_result = s.dt.freq - self.assertEqual(freq_result, TimedeltaIndex(s.values, freq='infer').freq) + self.assertEqual(freq_result, TimedeltaIndex(s.values, + freq='infer').freq) # both - index = date_range('20130101',periods=3,freq='D') - s = Series(date_range('20140204',periods=3,freq='s'),index=index) - tm.assert_series_equal(s.dt.year,Series(np.array([2014,2014,2014],dtype='int64'),index=index)) - tm.assert_series_equal(s.dt.month,Series(np.array([2,2,2],dtype='int64'),index=index)) - tm.assert_series_equal(s.dt.second,Series(np.array([0,1,2],dtype='int64'),index=index)) - tm.assert_series_equal(s.dt.normalize(), pd.Series([s[0]] * 3, index=index)) + index = date_range('20130101', periods=3, freq='D') + s = Series(date_range('20140204', periods=3, freq='s'), index=index) + tm.assert_series_equal(s.dt.year, Series( + np.array( + [2014, 2014, 2014], dtype='int64'), index=index)) + tm.assert_series_equal(s.dt.month, Series( + np.array( + [2, 2, 2], dtype='int64'), index=index)) + tm.assert_series_equal(s.dt.second, Series( + np.array( + [0, 1, 2], dtype='int64'), index=index)) + tm.assert_series_equal(s.dt.normalize(), pd.Series( + [s[0]] * 3, index=index)) # periodindex - for s in [Series(period_range('20130101',periods=5,freq='D'))]: + for s in [Series(period_range('20130101', periods=5, freq='D'))]: for prop in ok_for_period: # we test freq below if prop != 'freq': @@ -261,87 +269,103 @@ def 
compare(s, name): # test limited display api def get_dir(s): - results = [ r for r in s.dt.__dir__() if not r.startswith('_') ] + results = [r for r in s.dt.__dir__() if not r.startswith('_')] return list(sorted(set(results))) - s = Series(date_range('20130101',periods=5,freq='D')) + s = Series(date_range('20130101', periods=5, freq='D')) results = get_dir(s) - tm.assert_almost_equal(results,list(sorted(set(ok_for_dt + ok_for_dt_methods)))) + tm.assert_almost_equal( + results, list(sorted(set(ok_for_dt + ok_for_dt_methods)))) - s = Series(period_range('20130101',periods=5,freq='D').asobject) + s = Series(period_range('20130101', periods=5, freq='D').asobject) results = get_dir(s) - tm.assert_almost_equal(results, list(sorted(set(ok_for_period + ok_for_period_methods)))) + tm.assert_almost_equal( + results, list(sorted(set(ok_for_period + ok_for_period_methods)))) # 11295 # ambiguous time error on the conversions s = Series(pd.date_range('2015-01-01', '2016-01-01', freq='T')) s = s.dt.tz_localize('UTC').dt.tz_convert('America/Chicago') results = get_dir(s) - tm.assert_almost_equal(results, list(sorted(set(ok_for_dt + ok_for_dt_methods)))) - expected = Series(pd.date_range('2015-01-01', - '2016-01-01', - freq='T', - tz='UTC').tz_convert('America/Chicago')) + tm.assert_almost_equal( + results, list(sorted(set(ok_for_dt + ok_for_dt_methods)))) + expected = Series(pd.date_range('2015-01-01', '2016-01-01', freq='T', + tz='UTC').tz_convert( + 'America/Chicago')) tm.assert_series_equal(s, expected) # no setting allowed - s = Series(date_range('20130101',periods=5,freq='D')) + s = Series(date_range('20130101', periods=5, freq='D')) with tm.assertRaisesRegexp(ValueError, "modifications"): s.dt.hour = 5 # trying to set a copy - with pd.option_context('chained_assignment','raise'): + with pd.option_context('chained_assignment', 'raise'): + def f(): s.dt.hour[0] = 5 + self.assertRaises(com.SettingWithCopyError, f) def test_dt_accessor_no_new_attributes(self): # 
https://github.com/pydata/pandas/issues/10673 - s = Series(date_range('20130101',periods=5,freq='D')) - with tm.assertRaisesRegexp(AttributeError, "You cannot add any new attribute"): + s = Series(date_range('20130101', periods=5, freq='D')) + with tm.assertRaisesRegexp(AttributeError, + "You cannot add any new attribute"): s.dt.xlabel = "a" def test_strftime(self): # GH 10086 s = Series(date_range('20130101', periods=5)) result = s.dt.strftime('%Y/%m/%d') - expected = Series(['2013/01/01', '2013/01/02', '2013/01/03', '2013/01/04', '2013/01/05']) + expected = Series(['2013/01/01', '2013/01/02', '2013/01/03', + '2013/01/04', '2013/01/05']) tm.assert_series_equal(result, expected) s = Series(date_range('2015-02-03 11:22:33.4567', periods=5)) result = s.dt.strftime('%Y/%m/%d %H-%M-%S') - expected = Series(['2015/02/03 11-22-33', '2015/02/04 11-22-33', '2015/02/05 11-22-33', - '2015/02/06 11-22-33', '2015/02/07 11-22-33']) + expected = Series(['2015/02/03 11-22-33', '2015/02/04 11-22-33', + '2015/02/05 11-22-33', '2015/02/06 11-22-33', + '2015/02/07 11-22-33']) tm.assert_series_equal(result, expected) s = Series(period_range('20130101', periods=5)) result = s.dt.strftime('%Y/%m/%d') - expected = Series(['2013/01/01', '2013/01/02', '2013/01/03', '2013/01/04', '2013/01/05']) + expected = Series(['2013/01/01', '2013/01/02', '2013/01/03', + '2013/01/04', '2013/01/05']) tm.assert_series_equal(result, expected) - s = Series(period_range('2015-02-03 11:22:33.4567', periods=5, freq='s')) + s = Series(period_range( + '2015-02-03 11:22:33.4567', periods=5, freq='s')) result = s.dt.strftime('%Y/%m/%d %H-%M-%S') - expected = Series(['2015/02/03 11-22-33', '2015/02/03 11-22-34', '2015/02/03 11-22-35', - '2015/02/03 11-22-36', '2015/02/03 11-22-37']) + expected = Series(['2015/02/03 11-22-33', '2015/02/03 11-22-34', + '2015/02/03 11-22-35', '2015/02/03 11-22-36', + '2015/02/03 11-22-37']) tm.assert_series_equal(result, expected) s = Series(date_range('20130101', periods=5)) 
s.iloc[0] = pd.NaT result = s.dt.strftime('%Y/%m/%d') - expected = Series(['NaT', '2013/01/02', '2013/01/03', '2013/01/04', '2013/01/05']) + expected = Series(['NaT', '2013/01/02', '2013/01/03', '2013/01/04', + '2013/01/05']) tm.assert_series_equal(result, expected) datetime_index = date_range('20150301', periods=5) result = datetime_index.strftime("%Y/%m/%d") - expected = np.array(['2015/03/01', '2015/03/02', '2015/03/03', '2015/03/04', '2015/03/05'], dtype=object) + expected = np.array( + ['2015/03/01', '2015/03/02', '2015/03/03', '2015/03/04', + '2015/03/05'], dtype=object) self.assert_numpy_array_equal(result, expected) period_index = period_range('20150301', periods=5) result = period_index.strftime("%Y/%m/%d") - expected = np.array(['2015/03/01', '2015/03/02', '2015/03/03', '2015/03/04', '2015/03/05'], dtype=object) + expected = np.array( + ['2015/03/01', '2015/03/02', '2015/03/03', '2015/03/04', + '2015/03/05'], dtype=object) self.assert_numpy_array_equal(result, expected) - s = Series([datetime(2013, 1, 1, 2, 32, 59), datetime(2013, 1, 2, 14, 32, 1)]) + s = Series([datetime(2013, 1, 1, 2, 32, 59), datetime(2013, 1, 2, 14, + 32, 1)]) result = s.dt.strftime('%Y-%m-%d %H:%M:%S') expected = Series(["2013-01-01 02:32:59", "2013-01-02 14:32:01"]) tm.assert_series_equal(result, expected) @@ -353,8 +377,9 @@ def test_strftime(self): s = Series(period_range('20130101', periods=4, freq='L')) result = s.dt.strftime('%Y/%m/%d %H:%M:%S.%l') - expected = Series(["2013/01/01 00:00:00.000", "2013/01/01 00:00:00.001", - "2013/01/01 00:00:00.002", "2013/01/01 00:00:00.003"]) + expected = Series( + ["2013/01/01 00:00:00.000", "2013/01/01 00:00:00.001", + "2013/01/01 00:00:00.002", "2013/01/01 00:00:00.003"]) tm.assert_series_equal(result, expected) def test_valid_dt_with_missing_values(self): @@ -362,21 +387,25 @@ def test_valid_dt_with_missing_values(self): from datetime import date, time # GH 8689 - s = Series(date_range('20130101',periods=5,freq='D')) + s = 
Series(date_range('20130101', periods=5, freq='D')) s.iloc[2] = pd.NaT - for attr in ['microsecond','nanosecond','second','minute','hour','day']: - expected = getattr(s.dt,attr).copy() + for attr in ['microsecond', 'nanosecond', 'second', 'minute', 'hour', + 'day']: + expected = getattr(s.dt, attr).copy() expected.iloc[2] = np.nan - result = getattr(s.dt,attr) + result = getattr(s.dt, attr) tm.assert_series_equal(result, expected) result = s.dt.date - expected = Series([date(2013,1,1),date(2013,1,2),np.nan,date(2013,1,4),date(2013,1,5)],dtype='object') + expected = Series( + [date(2013, 1, 1), date(2013, 1, 2), np.nan, date(2013, 1, 4), + date(2013, 1, 5)], dtype='object') tm.assert_series_equal(result, expected) result = s.dt.time - expected = Series([time(0),time(0),np.nan,time(0),time(0)],dtype='object') + expected = Series( + [time(0), time(0), np.nan, time(0), time(0)], dtype='object') tm.assert_series_equal(result, expected) def test_dt_accessor_api(self): @@ -388,8 +417,7 @@ def test_dt_accessor_api(self): s = Series(date_range('2000-01-01', periods=3)) self.assertIsInstance(s.dt, DatetimeProperties) - for s in [Series(np.arange(5)), - Series(list('abcde')), + for s in [Series(np.arange(5)), Series(list('abcde')), Series(np.random.randn(5))]: with tm.assertRaisesRegexp(AttributeError, "only use .dt accessor"): @@ -410,19 +438,18 @@ def test_tab_completion(self): self.assertTrue('str' not in dir(s)) self.assertTrue('cat' not in dir(s)) - # similiarly for .cat, but with the twist that str and dt should be there - # if the categories are of that type - # first cat and str + # similiarly for .cat, but with the twist that str and dt should be + # there if the categories are of that type first cat and str s = Series(list('abbcd'), dtype="category") self.assertTrue('cat' in dir(s)) - self.assertTrue('str' in dir(s)) # as it is a string categorical + self.assertTrue('str' in dir(s)) # as it is a string categorical self.assertTrue('dt' not in dir(s)) # similar to cat 
and str s = Series(date_range('1/1/2015', periods=5)).astype("category") self.assertTrue('cat' in dir(s)) self.assertTrue('str' not in dir(s)) - self.assertTrue('dt' in dir(s)) # as it is a datetime categorical + self.assertTrue('dt' in dir(s)) # as it is a datetime categorical def test_binop_maybe_preserve_name(self): # names match, preserve @@ -477,38 +504,39 @@ def test_combine_first_dt64(self): def test_get(self): # GH 6383 - s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, - 45, 51, 39, 55, 43, 54, 52, 51, 54])) + s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, 45, + 51, 39, 55, 43, 54, 52, 51, 54])) result = s.get(25, 0) expected = 0 - self.assertEqual(result,expected) + self.assertEqual(result, expected) s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, 45, 51, 39, 55, 43, 54, 52, 51, 54]), - index=pd.Float64Index([25.0, 36.0, 49.0, 64.0, 81.0, 100.0, - 121.0, 144.0, 169.0, 196.0, 1225.0, - 1296.0, 1369.0, 1444.0, 1521.0, 1600.0, - 1681.0, 1764.0, 1849.0, 1936.0], - dtype='object')) + index=pd.Float64Index( + [25.0, 36.0, 49.0, 64.0, 81.0, 100.0, + 121.0, 144.0, 169.0, 196.0, 1225.0, + 1296.0, 1369.0, 1444.0, 1521.0, 1600.0, + 1681.0, 1764.0, 1849.0, 1936.0], + dtype='object')) result = s.get(25, 0) expected = 43 - self.assertEqual(result,expected) + self.assertEqual(result, expected) # GH 7407 # with a boolean accessor - df = pd.DataFrame({'i':[0]*3, 'b':[False]*3}) + df = pd.DataFrame({'i': [0] * 3, 'b': [False] * 3}) vc = df.i.value_counts() - result = vc.get(99,default='Missing') - self.assertEqual(result,'Missing') + result = vc.get(99, default='Missing') + self.assertEqual(result, 'Missing') vc = df.b.value_counts() - result = vc.get(False,default='Missing') - self.assertEqual(result,3) + result = vc.get(False, default='Missing') + self.assertEqual(result, 3) - result = vc.get(True,default='Missing') - self.assertEqual(result,'Missing') + result = vc.get(True, default='Missing') + self.assertEqual(result, 
'Missing') def test_delitem(self): @@ -517,36 +545,42 @@ def test_delitem(self): s = Series(lrange(5)) del s[0] - expected = Series(lrange(1,5),index=lrange(1,5)) + expected = Series(lrange(1, 5), index=lrange(1, 5)) assert_series_equal(s, expected) del s[1] - expected = Series(lrange(2,5),index=lrange(2,5)) + expected = Series(lrange(2, 5), index=lrange(2, 5)) assert_series_equal(s, expected) # empty s = Series() + def f(): del s[0] + self.assertRaises(KeyError, f) # only 1 left, del, add, del s = Series(1) del s[0] - assert_series_equal(s, Series(dtype='int64', index=Index([], dtype='int64'))) + assert_series_equal(s, Series(dtype='int64', index=Index( + [], dtype='int64'))) s[0] = 1 assert_series_equal(s, Series(1)) del s[0] - assert_series_equal(s, Series(dtype='int64', index=Index([], dtype='int64'))) + assert_series_equal(s, Series(dtype='int64', index=Index( + [], dtype='int64'))) # Index(dtype=object) s = Series(1, index=['a']) del s['a'] - assert_series_equal(s, Series(dtype='int64', index=Index([], dtype='object'))) + assert_series_equal(s, Series(dtype='int64', index=Index( + [], dtype='object'))) s['a'] = 1 assert_series_equal(s, Series(1, index=['a'])) del s['a'] - assert_series_equal(s, Series(dtype='int64', index=Index([], dtype='object'))) + assert_series_equal(s, Series(dtype='int64', index=Index( + [], dtype='object'))) def test_getitem_preserve_name(self): result = self.ts[self.ts > 0] @@ -576,30 +610,24 @@ def test_getitem_negative_out_of_bounds(self): self.assertRaises(IndexError, s.__setitem__, -11, 'foo') def test_multilevel_name_print(self): - index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], - ['one', 'two', 'three']], + index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two', + 'three']], labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], names=['first', 'second']) s = Series(lrange(0, len(index)), index=index, name='sth') - expected = ["first second", - "foo one 0", - " two 1", - " three 2", - 
"bar one 3", - " two 4", - "baz two 5", - " three 6", - "qux one 7", - " two 8", - " three 9", - "Name: sth, dtype: int64"] + expected = ["first second", "foo one 0", + " two 1", " three 2", + "bar one 3", " two 4", + "baz two 5", " three 6", + "qux one 7", " two 8", + " three 9", "Name: sth, dtype: int64"] expected = "\n".join(expected) self.assertEqual(repr(s), expected) def test_multilevel_preserve_name(self): - index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], - ['one', 'two', 'three']], + index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two', + 'three']], labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], names=['first', 'second']) @@ -695,25 +723,25 @@ def test_nansum_buglet(self): def test_overflow(self): # GH 6915 # overflowing on the smaller int dtypes - for dtype in ['int32','int64']: - v = np.arange(5000000,dtype=dtype) + for dtype in ['int32', 'int64']: + v = np.arange(5000000, dtype=dtype) s = Series(v) # no bottleneck result = s.sum(skipna=False) - self.assertEqual(int(result),v.sum(dtype='int64')) + self.assertEqual(int(result), v.sum(dtype='int64')) result = s.min(skipna=False) - self.assertEqual(int(result),0) + self.assertEqual(int(result), 0) result = s.max(skipna=False) - self.assertEqual(int(result),v[-1]) + self.assertEqual(int(result), v[-1]) # use bottleneck if available result = s.sum() - self.assertEqual(int(result),v.sum(dtype='int64')) + self.assertEqual(int(result), v.sum(dtype='int64')) result = s.min() - self.assertEqual(int(result),0) + self.assertEqual(int(result), 0) result = s.max() - self.assertEqual(int(result),v[-1]) + self.assertEqual(int(result), v[-1]) for dtype in ['float32', 'float64']: v = np.arange(5000000, dtype=dtype) @@ -735,18 +763,19 @@ def test_overflow(self): result = s.max() self.assertTrue(np.allclose(float(result), v[-1])) + class SafeForSparse(object): pass + _ts = tm.makeTimeSeries() + class TestSeries(tm.TestCase, CheckNameIntegration): _multiprocess_can_split_ = 
True def setUp(self): - import warnings - self.ts = _ts.copy() self.ts.name = 'ts' @@ -769,11 +798,10 @@ def test_scalar_conversion(self): self.assertEqual(int(Series([1.])), 1) self.assertEqual(long(Series([1.])), 1) - def test_astype(self): - s = Series(np.random.randn(5),name='foo') + s = Series(np.random.randn(5), name='foo') - for dtype in ['float32','float64','int64','int32']: + for dtype in ['float32', 'float64', 'int64', 'int32']: astyped = s.astype(dtype) self.assertEqual(astyped.dtype, dtype) self.assertEqual(astyped.name, s.name) @@ -782,7 +810,7 @@ def test_TimeSeries_deprecation(self): # deprecation TimeSeries, #10890 with tm.assert_produces_warning(FutureWarning): - pd.TimeSeries(1,index=date_range('20130101',periods=3)) + pd.TimeSeries(1, index=date_range('20130101', periods=3)) def test_constructor(self): # Recognize TimeSeries @@ -845,8 +873,8 @@ def test_constructor_series(self): def test_constructor_iterator(self): - expected = Series(list(range(10)),dtype='int64') - result = Series(range(10),dtype='int64') + expected = Series(list(range(10)), dtype='int64') + result = Series(range(10), dtype='int64') assert_series_equal(result, expected) def test_constructor_generator(self): @@ -875,12 +903,13 @@ def test_constructor_map(self): assert_series_equal(result, exp) def test_constructor_categorical(self): - cat = pd.Categorical([0, 1, 2, 0, 1, 2], ['a', 'b', 'c'], fastpath=True) + cat = pd.Categorical([0, 1, 2, 0, 1, 2], ['a', 'b', 'c'], + fastpath=True) res = Series(cat) self.assertTrue(res.values.equals(cat)) def test_constructor_maskedarray(self): - data = ma.masked_all((3,), dtype=float) + data = ma.masked_all((3, ), dtype=float) result = Series(data) expected = Series([nan, nan, nan]) assert_series_equal(result, expected) @@ -897,7 +926,7 @@ def test_constructor_maskedarray(self): expected = Series([0.0, 1.0, 2.0], index=index) assert_series_equal(result, expected) - data = ma.masked_all((3,), dtype=int) + data = ma.masked_all((3, ), dtype=int) 
result = Series(data) expected = Series([nan, nan, nan], dtype=float) assert_series_equal(result, expected) @@ -914,7 +943,7 @@ def test_constructor_maskedarray(self): expected = Series([0, 1, 2], index=index, dtype=int) assert_series_equal(result, expected) - data = ma.masked_all((3,), dtype=bool) + data = ma.masked_all((3, ), dtype=bool) result = Series(data) expected = Series([nan, nan, nan], dtype=object) assert_series_equal(result, expected) @@ -932,7 +961,7 @@ def test_constructor_maskedarray(self): assert_series_equal(result, expected) from pandas import tslib - data = ma.masked_all((3,), dtype='M8[ns]') + data = ma.masked_all((3, ), dtype='M8[ns]') result = Series(data) expected = Series([tslib.iNaT, tslib.iNaT, tslib.iNaT], dtype='M8[ns]') assert_series_equal(result, expected) @@ -979,7 +1008,7 @@ def test_constructor_pass_none(self): # inference on the index s = Series(index=np.array([None])) expected = Series(index=Index([None])) - assert_series_equal(s,expected) + assert_series_equal(s, expected) def test_constructor_cast(self): self.assertRaises(ValueError, Series, ['a', 'b', 'c'], dtype=float) @@ -996,19 +1025,23 @@ def test_constructor_dtype_nocast(self): def test_constructor_datelike_coercion(self): # GH 9477 - # incorrectly infering on dateimelike looking when object dtype is specified - s = Series([Timestamp('20130101'),'NOV'],dtype=object) - self.assertEqual(s.iloc[0],Timestamp('20130101')) - self.assertEqual(s.iloc[1],'NOV') + # incorrectly inferring on datetimelike-looking when object dtype is + # specified + s = Series([Timestamp('20130101'), 'NOV'], dtype=object) + self.assertEqual(s.iloc[0], Timestamp('20130101')) + self.assertEqual(s.iloc[1], 'NOV') self.assertTrue(s.dtype == object) - # the dtype was being reset on the slicing and re-inferred to datetime even - # thought the blocks are mixed + # the dtype was being reset on the slicing and re-inferred to datetime + # even though the blocks are mixed belly = '216 3T19'.split() wing1 = '2T15 
4H19'.split() wing2 = '416 4T20'.split() mat = pd.to_datetime('2016-01-22 2019-09-07'.split()) - df = pd.DataFrame({'wing1':wing1, 'wing2':wing2, 'mat':mat}, index=belly) + df = pd.DataFrame( + {'wing1': wing1, + 'wing2': wing2, + 'mat': mat}, index=belly) result = df.loc['3T19'] self.assertTrue(result.dtype == object) @@ -1042,7 +1075,7 @@ def test_constructor_dtype_datetime64(self): np.datetime64(datetime(2013, 1, 1)), np.datetime64(datetime(2013, 1, 2)), np.datetime64(datetime(2013, 1, 3)), - ] + ] s = Series(dates) self.assertEqual(s.dtype, 'M8[ns]') @@ -1057,18 +1090,18 @@ def test_constructor_dtype_datetime64(self): # GH3414 related self.assertRaises(TypeError, lambda x: Series( Series(dates).astype('int') / 1000000, dtype='M8[ms]')) - self.assertRaises( - TypeError, lambda x: Series(dates, dtype='datetime64')) + self.assertRaises(TypeError, + lambda x: Series(dates, dtype='datetime64')) # invalid dates can be help as object - result = Series([datetime(2,1,1)]) - self.assertEqual(result[0], datetime(2,1,1,0,0)) + result = Series([datetime(2, 1, 1)]) + self.assertEqual(result[0], datetime(2, 1, 1, 0, 0)) - result = Series([datetime(3000,1,1)]) - self.assertEqual(result[0], datetime(3000,1,1,0,0)) + result = Series([datetime(3000, 1, 1)]) + self.assertEqual(result[0], datetime(3000, 1, 1, 0, 0)) # don't mix types - result = Series([ Timestamp('20130101'), 1],index=['a','b']) + result = Series([Timestamp('20130101'), 1], index=['a', 'b']) self.assertEqual(result['a'], Timestamp('20130101')) self.assertEqual(result['b'], 1) @@ -1081,32 +1114,32 @@ def test_constructor_dtype_datetime64(self): for dtype in ['s', 'D', 'ms', 'us', 'ns']: values1 = dates.view(np.ndarray).astype('M8[{0}]'.format(dtype)) result = Series(values1, dates) - assert_series_equal(result,expected) + assert_series_equal(result, expected) # leave datetime.date alone dates2 = np.array([d.date() for d in dates.to_pydatetime()], dtype=object) series1 = Series(dates2, dates) - 
self.assert_numpy_array_equal(series1.values,dates2) - self.assertEqual(series1.dtype,object) + self.assert_numpy_array_equal(series1.values, dates2) + self.assertEqual(series1.dtype, object) # these will correctly infer a datetime s = Series([None, pd.NaT, '2013-08-05 15:30:00.000001']) - self.assertEqual(s.dtype,'datetime64[ns]') + self.assertEqual(s.dtype, 'datetime64[ns]') s = Series([np.nan, pd.NaT, '2013-08-05 15:30:00.000001']) - self.assertEqual(s.dtype,'datetime64[ns]') + self.assertEqual(s.dtype, 'datetime64[ns]') s = Series([pd.NaT, None, '2013-08-05 15:30:00.000001']) - self.assertEqual(s.dtype,'datetime64[ns]') + self.assertEqual(s.dtype, 'datetime64[ns]') s = Series([pd.NaT, np.nan, '2013-08-05 15:30:00.000001']) - self.assertEqual(s.dtype,'datetime64[ns]') + self.assertEqual(s.dtype, 'datetime64[ns]') # tz-aware (UTC and other tz's) # GH 8411 - dr = date_range('20130101',periods=3) + dr = date_range('20130101', periods=3) self.assertTrue(Series(dr).iloc[0].tz is None) - dr = date_range('20130101',periods=3,tz='UTC') + dr = date_range('20130101', periods=3, tz='UTC') self.assertTrue(str(Series(dr).iloc[0].tz) == 'UTC') - dr = date_range('20130101',periods=3,tz='US/Eastern') + dr = date_range('20130101', periods=3, tz='US/Eastern') self.assertTrue(str(Series(dr).iloc[0].tz) == 'US/Eastern') # non-convertible @@ -1132,7 +1165,7 @@ def test_constructor_with_datetime_tz(self): # 8260 # support datetime64 with tz - dr = date_range('20130101',periods=3,tz='US/Eastern') + dr = date_range('20130101', periods=3, tz='US/Eastern') s = Series(dr) self.assertTrue(s.dtype.name == 'datetime64[ns, US/Eastern]') self.assertTrue(s.dtype == 'datetime64[ns, US/Eastern]') @@ -1143,23 +1176,26 @@ def test_constructor_with_datetime_tz(self): result = s.values self.assertIsInstance(result, np.ndarray) self.assertTrue(result.dtype == 'datetime64[ns]') - self.assertTrue(dr.equals(pd.DatetimeIndex(result).tz_localize('UTC').tz_convert(tz=s.dt.tz))) + 
self.assertTrue(dr.equals(pd.DatetimeIndex(result).tz_localize( + 'UTC').tz_convert(tz=s.dt.tz))) # indexing result = s.iloc[0] - self.assertEqual(result,Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern', offset='D')) + self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern', offset='D')) result = s[0] - self.assertEqual(result,Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern', offset='D')) + self.assertEqual(result, Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern', offset='D')) - result = s[Series([True,True,False],index=s.index)] - assert_series_equal(result,s[0:2]) + result = s[Series([True, True, False], index=s.index)] + assert_series_equal(result, s[0:2]) result = s.iloc[0:1] - assert_series_equal(result,Series(dr[0:1])) + assert_series_equal(result, Series(dr[0:1])) # concat - result = pd.concat([s.iloc[0:1],s.iloc[1:]]) - assert_series_equal(result,s) + result = pd.concat([s.iloc[0:1], s.iloc[1:]]) + assert_series_equal(result, s) # astype result = s.astype(object) @@ -1177,7 +1213,7 @@ def test_constructor_with_datetime_tz(self): assert_series_equal(result, s) result = s.astype('datetime64[ns, CET]') - expected = Series(date_range('20130101 06:00:00',periods=3,tz='CET')) + expected = Series(date_range('20130101 06:00:00', periods=3, tz='CET')) assert_series_equal(result, expected) # short str @@ -1189,18 +1225,20 @@ def test_constructor_with_datetime_tz(self): self.assertTrue('NaT' in str(result)) # long str - t = Series(date_range('20130101',periods=1000,tz='US/Eastern')) + t = Series(date_range('20130101', periods=1000, tz='US/Eastern')) self.assertTrue('datetime64[ns, US/Eastern]' in str(t)) - result = pd.DatetimeIndex(s,freq='infer') + result = pd.DatetimeIndex(s, freq='infer') tm.assert_index_equal(result, dr) # inference - s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'),pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Pacific')]) + s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', 
tz='US/Pacific'), + pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Pacific')]) self.assertTrue(s.dtype == 'datetime64[ns, US/Pacific]') self.assertTrue(lib.infer_dtype(s) == 'datetime64') - s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'),pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Eastern')]) + s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'), + pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Eastern')]) self.assertTrue(s.dtype == 'object') self.assertTrue(lib.infer_dtype(s) == 'datetime') @@ -1208,7 +1246,7 @@ def test_constructor_periodindex(self): # GH7932 # converting a PeriodIndex when put in a Series - pi = period_range('20130101',periods=5,freq='D') + pi = period_range('20130101', periods=5, freq='D') s = Series(pi) expected = Series(pi.asobject) assert_series_equal(s, expected) @@ -1229,8 +1267,7 @@ def test_constructor_dict(self): def test_constructor_dict_multiindex(self): check = lambda result, expected: tm.assert_series_equal( - result, expected, check_dtype=True, - check_series_type=True) + result, expected, check_dtype=True, check_series_type=True) d = {('a', 'a'): 0., ('b', 'a'): 1., ('b', 'c'): 2.} _d = sorted(d.items()) ser = Series(d) @@ -1241,9 +1278,8 @@ def test_constructor_dict_multiindex(self): d['z'] = 111. 
_d.insert(0, ('z', d['z'])) ser = Series(d) - expected = Series( - [x[1] for x in _d], - index=Index([x[0] for x in _d], tupleize_cols=False)) + expected = Series([x[1] for x in _d], index=Index( + [x[0] for x in _d], tupleize_cols=False)) ser = ser.reindex(index=expected.index) check(ser, expected) @@ -1291,6 +1327,7 @@ def test_orderedDict_subclass_ctor(self): class A(OrderedDict): pass + data = A([('col%s' % i, random.random()) for i in range(12)]) s = pandas.Series(data) self.assertTrue(all(s.values == list(data.values()))) @@ -1349,11 +1386,7 @@ def test_array_finalize(self): def test_pop(self): # GH 6600 - df = DataFrame({ - 'A': 0, - 'B': np.arange(5,dtype='int64'), - 'C': 0, - }) + df = DataFrame({'A': 0, 'B': np.arange(5, dtype='int64'), 'C': 0, }) k = df.iloc[4] result = k.pop('B') @@ -1517,7 +1550,7 @@ def test_getitem_boolean_empty(self): # GH5877 # indexing with empty series s = Series(['A', 'B']) - expected = Series(np.nan,index=['C'],dtype=object) + expected = Series(np.nan, index=['C'], dtype=object) result = s[Series(['C'], dtype=object)] assert_series_equal(result, expected) @@ -1530,10 +1563,12 @@ def test_getitem_boolean_empty(self): # that's empty or not-aligned def f(): s[Series([], dtype=bool)] + self.assertRaises(IndexingError, f) def f(): s[Series([True], dtype=bool)] + self.assertRaises(IndexingError, f) def test_getitem_generator(self): @@ -1557,7 +1592,7 @@ def test_getitem_boolean_object(self): assert_series_equal(result, expected) # setitem - s2 = s.copy() + s2 = s.copy() cop = s.copy() cop[omask] = 5 s2[mask] = 5 @@ -1576,13 +1611,13 @@ def test_getitem_setitem_boolean_corner(self): self.assertRaises(Exception, ts.__getitem__, mask_shifted) self.assertRaises(Exception, ts.__setitem__, mask_shifted, 1) - #ts[mask_shifted] - #ts[mask_shifted] = 1 + # ts[mask_shifted] + # ts[mask_shifted] = 1 self.assertRaises(Exception, ts.ix.__getitem__, mask_shifted) self.assertRaises(Exception, ts.ix.__setitem__, mask_shifted, 1) - 
#ts.ix[mask_shifted] - #ts.ix[mask_shifted] = 2 + # ts.ix[mask_shifted] + # ts.ix[mask_shifted] = 2 def test_getitem_setitem_slice_integers(self): s = Series(np.random.randn(8), index=[2, 4, 6, 8, 10, 12, 14, 16]) @@ -1635,16 +1670,16 @@ def test_getitem_dups_with_missing(self): assert_series_equal(result, expected) def test_getitem_dups(self): - s = Series(range(5),index=['A','A','B','C','C'],dtype=np.int64) - expected = Series([3,4],index=['C','C'],dtype=np.int64) + s = Series(range(5), index=['A', 'A', 'B', 'C', 'C'], dtype=np.int64) + expected = Series([3, 4], index=['C', 'C'], dtype=np.int64) result = s['C'] assert_series_equal(result, expected) def test_getitem_dataframe(self): rng = list(range(10)) - s = pd.Series(10, index=rng) - df = pd.DataFrame(rng, index=rng) - self.assertRaises(TypeError, s.__getitem__, df>5) + s = pd.Series(10, index=rng) + df = pd.DataFrame(rng, index=rng) + self.assertRaises(TypeError, s.__getitem__, df > 5) def test_setitem_ambiguous_keyerror(self): s = Series(lrange(10), index=lrange(0, 20, 2)) @@ -1652,13 +1687,13 @@ def test_setitem_ambiguous_keyerror(self): # equivalent of an append s2 = s.copy() s2[1] = 5 - expected = s.append(Series([5],index=[1])) - assert_series_equal(s2,expected) + expected = s.append(Series([5], index=[1])) + assert_series_equal(s2, expected) s2 = s.copy() s2.ix[1] = 5 - expected = s.append(Series([5],index=[1])) - assert_series_equal(s2,expected) + expected = s.append(Series([5], index=[1])) + assert_series_equal(s2, expected) def test_setitem_float_labels(self): # note labels are floats @@ -1684,8 +1719,8 @@ def test_slice(self): self.assertEqual(numSlice.index[1], self.series.index[11]) - self.assertTrue(tm.equalContents(numSliceEnd, - np.array(self.series)[-10:])) + self.assertTrue(tm.equalContents(numSliceEnd, np.array(self.series)[ + -10:])) # test return view sl = self.series[10:20] @@ -1694,13 +1729,15 @@ def test_slice(self): def test_slice_can_reorder_not_uniquely_indexed(self): s = Series(1, 
index=['a', 'a', 'b', 'b', 'c']) - result = s[::-1] # it works! + s[::-1] # it works! def test_slice_float_get_set(self): - self.assertRaises(TypeError, lambda : self.ts[4.0:10.0]) + self.assertRaises(TypeError, lambda: self.ts[4.0:10.0]) + def f(): self.ts[4.0:10.0] = 0 + self.assertRaises(TypeError, f) self.assertRaises(TypeError, self.ts.__getitem__, slice(4.5, 10.0)) @@ -1783,27 +1820,27 @@ def test_setitem_dtypes(self): # change dtypes # GH 4463 - expected = Series([np.nan,2,3]) + expected = Series([np.nan, 2, 3]) - s = Series([1,2,3]) + s = Series([1, 2, 3]) s.iloc[0] = np.nan - assert_series_equal(s,expected) + assert_series_equal(s, expected) - s = Series([1,2,3]) + s = Series([1, 2, 3]) s.loc[0] = np.nan - assert_series_equal(s,expected) + assert_series_equal(s, expected) - s = Series([1,2,3]) + s = Series([1, 2, 3]) s[0] = np.nan - assert_series_equal(s,expected) + assert_series_equal(s, expected) s = Series([False]) s.loc[0] = np.nan - assert_series_equal(s,Series([np.nan])) + assert_series_equal(s, Series([np.nan])) - s = Series([False,True]) + s = Series([False, True]) s.loc[0] = np.nan - assert_series_equal(s,Series([np.nan,1.0])) + assert_series_equal(s, Series([np.nan, 1.0])) def test_set_value(self): idx = self.ts.index[10] @@ -1849,7 +1886,7 @@ def test_basic_getitem_setitem_corner(self): def test_reshape_non_2d(self): # GH 4554 x = Series(np.random.random(201), name='x') - self.assertTrue(x.reshape(x.shape,) is x) + self.assertTrue(x.reshape(x.shape, ) is x) # GH 2719 a = Series([1, 2, 3, 4]) @@ -1999,11 +2036,11 @@ def test_where(self): assert_series_equal(rs, s.abs()) rs = s.where(cond) - assert(s.shape == rs.shape) - assert(rs is not s) + assert (s.shape == rs.shape) + assert (rs is not s) # test alignment - cond = Series([True,False,False,True,False],index=s.index) + cond = Series([True, False, False, True, False], index=s.index) s2 = -(s.abs()) expected = s2[cond].reindex(s2.index[:3]).reindex(s2.index) @@ -2025,13 +2062,14 @@ def 
test_where(self): assert_series_equal(s, expected) # failures - self.assertRaises( - ValueError, s.__setitem__, tuple([[[True, False]]]), [0, 2, 3]) - self.assertRaises( - ValueError, s.__setitem__, tuple([[[True, False]]]), []) + self.assertRaises(ValueError, s.__setitem__, tuple([[[True, False]]]), + [0, 2, 3]) + self.assertRaises(ValueError, s.__setitem__, tuple([[[True, False]]]), + []) # unsafe dtype changes - for dtype in [np.int8, np.int16, np.int32, np.int64, np.float16, np.float32, np.float64]: + for dtype in [np.int8, np.int16, np.int32, np.int64, np.float16, + np.float32, np.float64]: s = Series(np.arange(10), dtype=dtype) mask = s < 5 s[mask] = lrange(2, 7) @@ -2081,17 +2119,21 @@ def test_where(self): s = Series(np.arange(10)) mask = s > 5 + def f(): - s[mask] = [5,4,3,2,1] + s[mask] = [5, 4, 3, 2, 1] + self.assertRaises(ValueError, f) + def f(): s[mask] = [0] * 5 + self.assertRaises(ValueError, f) # dtype changes - s = Series([1,2,3,4]) - result = s.where(s>2,np.nan) - expected = Series([np.nan,np.nan,3,4]) + s = Series([1, 2, 3, 4]) + result = s.where(s > 2, np.nan) + expected = Series([np.nan, np.nan, 3, 4]) assert_series_equal(result, expected) # GH 4667 @@ -2104,7 +2146,7 @@ def f(): s = Series(range(10)).astype(float) s[s > 8] = None result = s[isnull(s)] - expected = Series(np.nan,index=[9]) + expected = Series(np.nan, index=[9]) assert_series_equal(result, expected) def test_where_setitem_invalid(self): @@ -2114,72 +2156,87 @@ def test_where_setitem_invalid(self): # slice s = Series(list('abc')) + def f(): s[0:3] = list(range(27)) + self.assertRaises(ValueError, f) s[0:3] = list(range(3)) - expected = Series([0,1,2]) + expected = Series([0, 1, 2]) assert_series_equal(s.astype(np.int64), expected, ) # slice with step s = Series(list('abcdef')) + def f(): s[0:4:2] = list(range(27)) + self.assertRaises(ValueError, f) s = Series(list('abcdef')) s[0:4:2] = list(range(2)) - expected = Series([0,'b',1,'d','e','f']) + expected = Series([0, 'b', 1, 'd', 
'e', 'f']) assert_series_equal(s, expected) # neg slices s = Series(list('abcdef')) + def f(): s[:-1] = list(range(27)) + self.assertRaises(ValueError, f) s[-3:-1] = list(range(2)) - expected = Series(['a','b','c',0,1,'f']) + expected = Series(['a', 'b', 'c', 0, 1, 'f']) assert_series_equal(s, expected) # list s = Series(list('abc')) + def f(): - s[[0,1,2]] = list(range(27)) + s[[0, 1, 2]] = list(range(27)) + self.assertRaises(ValueError, f) s = Series(list('abc')) + def f(): - s[[0,1,2]] = list(range(2)) + s[[0, 1, 2]] = list(range(2)) + self.assertRaises(ValueError, f) # scalar s = Series(list('abc')) s[0] = list(range(10)) - expected = Series([list(range(10)),'b','c']) + expected = Series([list(range(10)), 'b', 'c']) assert_series_equal(s, expected) def test_where_broadcast(self): # Test a variety of differently sized series for size in range(2, 6): # Test a variety of boolean indices - for selection in [np.resize([True, False, False, False, False], size), # First element should be set - # Set alternating elements] - np.resize([True, False], size), - np.resize([False], size)]: # No element should be set + for selection in [ + # First element should be set + np.resize([True, False, False, False, False], size), + # Set alternating elements] + np.resize([True, False], size), + # No element should be set + np.resize([False], size)]: + # Test a variety of different numbers as content - for item in [2.0, np.nan, np.finfo(np.float).max, np.finfo(np.float).min]: + for item in [2.0, np.nan, np.finfo(np.float).max, + np.finfo(np.float).min]: # Test numpy arrays, lists and tuples as the input to be # broadcast - for arr in [np.array([item]), [item], (item,)]: + for arr in [np.array([item]), [item], (item, )]: data = np.arange(size, dtype=float) s = Series(data) s[selection] = arr # Construct the expected series by taking the source # data or item based on the selection - expected = Series([item if use_item else data[i] - for i, use_item in enumerate(selection)]) + expected 
= Series([item if use_item else data[ + i] for i, use_item in enumerate(selection)]) assert_series_equal(s, expected) s = Series(data) @@ -2205,19 +2262,20 @@ def test_where_dups(self): # where crashes with dups in index s1 = Series(list(range(3))) s2 = Series(list(range(3))) - comb = pd.concat([s1,s2]) + comb = pd.concat([s1, s2]) result = comb.where(comb < 2) - expected = Series([0,1,np.nan,0,1,np.nan],index=[0,1,2,0,1,2]) + expected = Series([0, 1, np.nan, 0, 1, np.nan], + index=[0, 1, 2, 0, 1, 2]) assert_series_equal(result, expected) # GH 4548 # inplace updating not working with dups - comb[comb<1] = 5 - expected = Series([5,1,2,5,1,2],index=[0,1,2,0,1,2]) + comb[comb < 1] = 5 + expected = Series([5, 1, 2, 5, 1, 2], index=[0, 1, 2, 0, 1, 2]) assert_series_equal(comb, expected) - comb[comb<2] += 10 - expected = Series([5,11,2,5,11,2],index=[0,1,2,0,1,2]) + comb[comb < 2] += 10 + expected = Series([5, 11, 2, 5, 11, 2], index=[0, 1, 2, 0, 1, 2]) assert_series_equal(comb, expected) def test_where_datetime(self): @@ -2292,8 +2350,8 @@ def test_mask(self): self.assertRaises(ValueError, s.mask, cond[:3].values, -s) # dtype changes - s = Series([1,2,3,4]) - result = s.mask(s>2, np.nan) + s = Series([1, 2, 3, 4]) + result = s.mask(s > 2, np.nan) expected = Series([1, 2, np.nan, np.nan]) assert_series_equal(result, expected) @@ -2301,17 +2359,21 @@ def test_mask_broadcast(self): # GH 8801 # copied from test_where_broadcast for size in range(2, 6): - for selection in [np.resize([True, False, False, False, False], size), # First element should be set - # Set alternating elements] - np.resize([True, False], size), - np.resize([False], size)]: # No element should be set - for item in [2.0, np.nan, np.finfo(np.float).max, np.finfo(np.float).min]: - for arr in [np.array([item]), [item], (item,)]: + for selection in [ + # First element should be set + np.resize([True, False, False, False, False], size), + # Set alternating elements] + np.resize([True, False], size), + # No 
element should be set + np.resize([False], size)]: + for item in [2.0, np.nan, np.finfo(np.float).max, + np.finfo(np.float).min]: + for arr in [np.array([item]), [item], (item, )]: data = np.arange(size, dtype=float) s = Series(data) result = s.mask(selection, arr) - expected = Series([item if use_item else data[i] - for i, use_item in enumerate(selection)]) + expected = Series([item if use_item else data[ + i] for i, use_item in enumerate(selection)]) assert_series_equal(result, expected) def test_mask_inplace(self): @@ -2330,35 +2392,35 @@ def test_mask_inplace(self): def test_drop(self): # unique - s = Series([1,2],index=['one','two']) - expected = Series([1],index=['one']) + s = Series([1, 2], index=['one', 'two']) + expected = Series([1], index=['one']) result = s.drop(['two']) - assert_series_equal(result,expected) + assert_series_equal(result, expected) result = s.drop('two', axis='rows') - assert_series_equal(result,expected) + assert_series_equal(result, expected) # non-unique # GH 5248 - s = Series([1,1,2],index=['one','two','one']) - expected = Series([1,2],index=['one','one']) + s = Series([1, 1, 2], index=['one', 'two', 'one']) + expected = Series([1, 2], index=['one', 'one']) result = s.drop(['two'], axis=0) - assert_series_equal(result,expected) + assert_series_equal(result, expected) result = s.drop('two') - assert_series_equal(result,expected) + assert_series_equal(result, expected) - expected = Series([1],index=['two']) + expected = Series([1], index=['two']) result = s.drop(['one']) - assert_series_equal(result,expected) + assert_series_equal(result, expected) result = s.drop('one') - assert_series_equal(result,expected) + assert_series_equal(result, expected) # single string/tuple-like - s = Series(range(3),index=list('abc')) + s = Series(range(3), index=list('abc')) self.assertRaises(ValueError, s.drop, 'bc') - self.assertRaises(ValueError, s.drop, ('a',)) + self.assertRaises(ValueError, s.drop, ('a', )) # errors='ignore' - s = 
Series(range(3),index=list('abc')) + s = Series(range(3), index=list('abc')) result = s.drop('bc', errors='ignore') assert_series_equal(result, s) result = s.drop(['a', 'd'], errors='ignore') @@ -2369,11 +2431,11 @@ def test_drop(self): self.assertRaises(ValueError, s.drop, 'one', axis='columns') # GH 8522 - s = Series([2,3], index=[True, False]) + s = Series([2, 3], index=[True, False]) self.assertTrue(s.index.is_object()) result = s.drop(True) - expected = Series([3],index=[False]) - assert_series_equal(result,expected) + expected = Series([3], index=[False]) + assert_series_equal(result, expected) def test_ix_setitem(self): inds = self.series.index[[3, 4, 7]] @@ -2404,7 +2466,7 @@ def test_ix_setitem(self): def test_where_numeric_with_string(self): # GH 9280 s = pd.Series([1, 2, 3]) - w = s.where(s>1, 'X') + w = s.where(s > 1, 'X') self.assertFalse(com.is_integer(w[0])) self.assertTrue(com.is_integer(w[1])) @@ -2412,14 +2474,14 @@ def test_where_numeric_with_string(self): self.assertTrue(isinstance(w[0], str)) self.assertTrue(w.dtype == 'object') - w = s.where(s>1, ['X', 'Y', 'Z']) + w = s.where(s > 1, ['X', 'Y', 'Z']) self.assertFalse(com.is_integer(w[0])) self.assertTrue(com.is_integer(w[1])) self.assertTrue(com.is_integer(w[2])) self.assertTrue(isinstance(w[0], str)) self.assertTrue(w.dtype == 'object') - w = s.where(s>1, np.array(['X', 'Y', 'Z'])) + w = s.where(s > 1, np.array(['X', 'Y', 'Z'])) self.assertFalse(com.is_integer(w[0])) self.assertTrue(com.is_integer(w[1])) self.assertTrue(com.is_integer(w[2])) @@ -2498,9 +2560,7 @@ def test_repr(self): # various names for name in ['', 1, 1.2, 'foo', u('\u03B1\u03B2\u03B3'), 'loooooooooooooooooooooooooooooooooooooooooooooooooooong', - ('foo', 'bar', 'baz'), - (1, 2), - ('foo', 1, 2.3), + ('foo', 'bar', 'baz'), (1, 2), ('foo', 1, 2.3), (u('\u03B1'), u('\u03B2'), u('\u03B3')), (u('\u03B1'), 'bar')]: self.series.name = name @@ -2535,7 +2595,7 @@ def test_repr(self): def test_tidy_repr(self): a = 
Series([u("\u05d0")] * 1000) a.name = 'title1' - repr(a) # should not raise exception + repr(a) # should not raise exception def test_repr_bool_fails(self): s = Series([DataFrame(np.random.randn(2, 2)) for i in range(5)]) @@ -2546,7 +2606,7 @@ def test_repr_bool_fails(self): tmp = sys.stderr sys.stderr = buf try: - # it works (with no Cython exception barf)! + # it works (with no Cython exception barf)! repr(s) finally: sys.stderr = tmp @@ -2558,7 +2618,7 @@ def test_repr_name_iterable_indexable(self): # it works! repr(s) - s.name = (u("\u05d0"),) * 2 + s.name = (u("\u05d0"), ) * 2 repr(s) def test_repr_should_return_str(self): @@ -2576,7 +2636,7 @@ def test_repr_should_return_str(self): def test_repr_max_rows(self): # GH 6863 with pd.option_context('max_rows', None): - str(Series(range(1001))) # should not raise exception + str(Series(range(1001))) # should not raise exception def test_unicode_string_with_unicode(self): df = Series([u("\u05d0")], name=u("\u05d1")) @@ -2676,7 +2736,9 @@ def test_mode(self): exp = Series([11, 12]) assert_series_equal(s.mode(), exp) - assert_series_equal(Series([1, 2, 3]).mode(), Series([], dtype='int64')) + assert_series_equal( + Series([1, 2, 3]).mode(), Series( + [], dtype='int64')) lst = [5] * 20 + [1] * 10 + [6] * 25 np.random.shuffle(lst) @@ -2733,11 +2795,12 @@ def test_var_std(self): self.assertTrue(isnull(result)) def test_sem(self): - alt = lambda x: np.std(x, ddof=1)/np.sqrt(len(x)) + alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x)) self._check_stat_op('sem', alt) result = self.ts.sem(ddof=4) - expected = np.std(self.ts.values, ddof=4)/np.sqrt(len(self.ts.values)) + expected = np.std(self.ts.values, + ddof=4) / np.sqrt(len(self.ts.values)) assert_almost_equal(result, expected) # 1 - element series with ddof=1 @@ -2752,7 +2815,8 @@ def test_skew(self): alt = lambda x: skew(x, bias=False) self._check_stat_op('skew', alt) - # test corner cases, skew() returns NaN unless there's at least 3 values + # test corner cases, 
skew() returns NaN unless there's at least 3 + # values min_N = 3 for i in range(1, min_N + 1): s = Series(np.ones(i)) @@ -2772,13 +2836,13 @@ def test_kurt(self): self._check_stat_op('kurt', alt) index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], - labels=[[0, 0, 0, 0, 0, 0], - [0, 1, 2, 0, 1, 2], + labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]]) s = Series(np.random.randn(6), index=index) self.assertAlmostEqual(s.kurt(), s.kurt(level=0)['bar']) - # test corner cases, kurt() returns NaN unless there's at least 4 values + # test corner cases, kurt() returns NaN unless there's at least 4 + # values min_N = 4 for i in range(1, min_N + 1): s = Series(np.ones(i)) @@ -2824,8 +2888,7 @@ def test_argsort_stable(self): def test_reorder_levels(self): index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], - labels=[[0, 0, 0, 0, 0, 0], - [0, 1, 2, 0, 1, 2], + labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]], names=['L0', 'L1', 'L2']) s = Series(np.arange(6), index=index) @@ -2841,8 +2904,7 @@ def test_reorder_levels(self): # rotate, position result = s.reorder_levels([1, 2, 0]) e_idx = MultiIndex(levels=[['one', 'two', 'three'], [0, 1], ['bar']], - labels=[[0, 1, 2, 0, 1, 2], - [0, 1, 0, 1, 0, 1], + labels=[[0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1], [0, 0, 0, 0, 0, 0]], names=['L1', 'L2', 'L0']) expected = Series(np.arange(6), index=e_idx) @@ -2850,8 +2912,7 @@ def test_reorder_levels(self): result = s.reorder_levels([0, 0, 0]) e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']], - labels=[[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], + labels=[[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]], names=['L0', 'L0', 'L0']) expected = Series(range(6), index=e_idx) @@ -2887,58 +2948,84 @@ def test_cummax(self): self.assert_numpy_array_equal(result, expected) def test_cummin_datetime64(self): - s = pd.Series(pd.to_datetime( - ['NaT', '2000-1-2', 'NaT', '2000-1-1', 'NaT', '2000-1-3'])) + s = 
pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', '2000-1-1', + 'NaT', '2000-1-3'])) - expected = pd.Series(pd.to_datetime( - ['NaT', '2000-1-2', 'NaT', '2000-1-1', 'NaT', '2000-1-1'])) + expected = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', + '2000-1-1', 'NaT', '2000-1-1'])) result = s.cummin(skipna=True) self.assert_series_equal(expected, result) expected = pd.Series(pd.to_datetime( - ['NaT', '2000-1-2', '2000-1-2', '2000-1-1', '2000-1-1', '2000-1-1'])) + ['NaT', '2000-1-2', '2000-1-2', '2000-1-1', '2000-1-1', '2000-1-1' + ])) result = s.cummin(skipna=False) self.assert_series_equal(expected, result) def test_cummax_datetime64(self): - s = pd.Series(pd.to_datetime( - ['NaT', '2000-1-2', 'NaT', '2000-1-1', 'NaT', '2000-1-3'])) + s = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', '2000-1-1', + 'NaT', '2000-1-3'])) - expected = pd.Series(pd.to_datetime( - ['NaT', '2000-1-2', 'NaT', '2000-1-2', 'NaT', '2000-1-3'])) + expected = pd.Series(pd.to_datetime(['NaT', '2000-1-2', 'NaT', + '2000-1-2', 'NaT', '2000-1-3'])) result = s.cummax(skipna=True) self.assert_series_equal(expected, result) expected = pd.Series(pd.to_datetime( - ['NaT', '2000-1-2', '2000-1-2', '2000-1-2', '2000-1-2', '2000-1-3'])) + ['NaT', '2000-1-2', '2000-1-2', '2000-1-2', '2000-1-2', '2000-1-3' + ])) result = s.cummax(skipna=False) self.assert_series_equal(expected, result) def test_cummin_timedelta64(self): - s = pd.Series(pd.to_timedelta( - ['NaT', '2 min', 'NaT', '1 min', 'NaT', '3 min', ])) - - expected = pd.Series(pd.to_timedelta( - ['NaT', '2 min', 'NaT', '1 min', 'NaT', '1 min', ])) + s = pd.Series(pd.to_timedelta(['NaT', + '2 min', + 'NaT', + '1 min', + 'NaT', + '3 min', ])) + + expected = pd.Series(pd.to_timedelta(['NaT', + '2 min', + 'NaT', + '1 min', + 'NaT', + '1 min', ])) result = s.cummin(skipna=True) self.assert_series_equal(expected, result) - expected = pd.Series(pd.to_timedelta( - ['NaT', '2 min', '2 min', '1 min', '1 min', '1 min', ])) + expected = 
pd.Series(pd.to_timedelta(['NaT', + '2 min', + '2 min', + '1 min', + '1 min', + '1 min', ])) result = s.cummin(skipna=False) self.assert_series_equal(expected, result) def test_cummax_timedelta64(self): - s = pd.Series(pd.to_timedelta( - ['NaT', '2 min', 'NaT', '1 min', 'NaT', '3 min', ])) - - expected = pd.Series(pd.to_timedelta( - ['NaT', '2 min', 'NaT', '2 min', 'NaT', '3 min', ])) + s = pd.Series(pd.to_timedelta(['NaT', + '2 min', + 'NaT', + '1 min', + 'NaT', + '3 min', ])) + + expected = pd.Series(pd.to_timedelta(['NaT', + '2 min', + 'NaT', + '2 min', + 'NaT', + '3 min', ])) result = s.cummax(skipna=True) self.assert_series_equal(expected, result) - expected = pd.Series(pd.to_timedelta( - ['NaT', '2 min', '2 min', '2 min', '2 min', '3 min', ])) + expected = pd.Series(pd.to_timedelta(['NaT', + '2 min', + '2 min', + '2 min', + '2 min', + '3 min', ])) result = s.cummax(skipna=False) self.assert_series_equal(expected, result) @@ -2952,7 +3039,8 @@ def test_npdiff(self): r = np.diff(s) assert_series_equal(Series([nan, 0, 0, 0, nan]), r) - def _check_stat_op(self, name, alternate, check_objects=False, check_allna=False): + def _check_stat_op(self, name, alternate, check_objects=False, + check_allna=False): import pandas.core.nanops as nanops def testit(): @@ -2962,7 +3050,7 @@ def testit(): self.series[5:15] = np.NaN # idxmax, idxmin, min, and max are valid for dates - if name not in ['max','min']: + if name not in ['max', 'min']: ds = Series(date_range('1/1/2001', periods=10)) self.assertRaises(TypeError, f, ds) @@ -2982,9 +3070,9 @@ def testit(): # bottleneck >= 1.0 give 0.0 for an allna Series sum try: self.assertTrue(nanops._USE_BOTTLENECK) - import bottleneck as bn + import bottleneck as bn # noqa self.assertTrue(bn.__version__ >= LooseVersion('1.0')) - self.assertEqual(f(allna),0.0) + self.assertEqual(f(allna), 0.0) except: self.assertTrue(np.isnan(f(allna))) @@ -2994,7 +3082,7 @@ def testit(): # 2888 l = [0] - l.extend(lrange(2 ** 40, 2 ** 40+1000)) + 
l.extend(lrange(2 ** 40, 2 ** 40 + 1000)) s = Series(l, dtype='int64') assert_almost_equal(float(f(s)), float(alternate(s.values))) @@ -3006,7 +3094,7 @@ def testit(): self.assertEqual(res, exp) # check on string data - if name not in ['sum','min','max']: + if name not in ['sum', 'min', 'max']: self.assertRaises(TypeError, f, Series(list('abc'))) # Invalid axis. @@ -3020,7 +3108,7 @@ def testit(): testit() try: - import bottleneck as bn + import bottleneck as bn # noqa nanops._USE_BOTTLENECK = False testit() nanops._USE_BOTTLENECK = True @@ -3052,7 +3140,8 @@ def test_round(self): def test_built_in_round(self): if not compat.PY3: - raise nose.SkipTest('build in round cannot be overriden prior to Python 3') + raise nose.SkipTest( + 'built-in round cannot be overridden prior to Python 3') s = Series([1.123, 2.123, 3.123], index=lrange(3)) result = round(s) @@ -3064,7 +3153,6 @@ def test_built_in_round(self): result = round(s, decimals) self.assert_series_equal(result, expected_rounded) - def test_prod_numpy16_bug(self): s = Series([1., 1., 1.], index=lrange(3)) result = s.prod() @@ -3080,7 +3168,7 @@ def test_quantile(self): self.assertEqual(q, percentile(self.ts.valid(), 90)) # object dtype - q = Series(self.ts,dtype=object).quantile(0.9) + q = Series(self.ts, dtype=object).quantile(0.9) self.assertEqual(q, percentile(self.ts.valid(), 90)) # datetime64[ns] dtype @@ -3121,8 +3209,8 @@ def test_quantile_multi(self): assert_series_equal(result, expected) result = self.ts.quantile([]) - expected = pd.Series([], name=self.ts.name, - index=Index([], dtype=float)) + expected = pd.Series([], name=self.ts.name, index=Index( + [], dtype=float)) assert_series_equal(result, expected) def test_quantile_interpolation(self): @@ -3255,8 +3343,8 @@ def test_modulo(self): # GH3590, modulo as ints p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]}) result = p['first'] % p['second'] - expected = Series(p['first'].values % - p['second'].values, dtype='float64') + expected = 
Series(p['first'].values % p['second'].values, + dtype='float64') expected.iloc[0:3] = np.nan assert_series_equal(result, expected) @@ -3306,20 +3394,21 @@ def test_div(self): p = DataFrame({'first': [3, 4, 5, 8], 'second': [1, 1, 1, 1]}) result = p['first'] / p['second'] - assert_series_equal(result, p['first'].astype('float64'), check_names=False) + assert_series_equal(result, p['first'].astype('float64'), + check_names=False) self.assertTrue(result.name is None) self.assertFalse(np.array_equal(result, p['second'] / p['first'])) # inf signing - s = Series([np.nan,1.,-1.]) + s = Series([np.nan, 1., -1.]) result = s / 0 - expected = Series([np.nan,np.inf,-np.inf]) + expected = Series([np.nan, np.inf, -np.inf]) assert_series_equal(result, expected) # float/integer issue # GH 7785 - p = DataFrame({'first': (1,0), 'second': (-0.01,-0.02)}) - expected = Series([-0.01,-np.inf]) + p = DataFrame({'first': (1, 0), 'second': (-0.01, -0.02)}) + expected = Series([-0.01, -np.inf]) result = p['second'].div(p['first']) assert_series_equal(result, expected, check_names=False) @@ -3343,7 +3432,6 @@ def test_div(self): assert_series_equal(result, expected) def test_operators(self): - def _check_op(series, other, op, pos_only=False): left = np.abs(series) if pos_only else series right = np.abs(other) if pos_only else other @@ -3398,18 +3486,19 @@ def test_constructor_dtype_timedelta64(self): td = Series([timedelta(days=1)]) self.assertEqual(td.dtype, 'timedelta64[ns]') - td = Series([timedelta(days=1),timedelta(days=2),np.timedelta64(1,'s')]) + td = Series([timedelta(days=1), timedelta(days=2), np.timedelta64( + 1, 's')]) self.assertEqual(td.dtype, 'timedelta64[ns]') # mixed with NaT from pandas import tslib - td = Series([timedelta(days=1),tslib.NaT ], dtype='m8[ns]' ) + td = Series([timedelta(days=1), tslib.NaT], dtype='m8[ns]') self.assertEqual(td.dtype, 'timedelta64[ns]') - td = Series([timedelta(days=1),np.nan ], dtype='m8[ns]' ) + td = Series([timedelta(days=1), np.nan], 
dtype='m8[ns]') self.assertEqual(td.dtype, 'timedelta64[ns]') - td = Series([np.timedelta64(300000000), pd.NaT],dtype='m8[ns]') + td = Series([np.timedelta64(300000000), pd.NaT], dtype='m8[ns]') self.assertEqual(td.dtype, 'timedelta64[ns]') # improved inference @@ -3426,11 +3515,11 @@ def test_constructor_dtype_timedelta64(self): td = Series([pd.NaT, np.timedelta64(300000000)]) self.assertEqual(td.dtype, 'timedelta64[ns]') - td = Series([np.timedelta64(1,'s')]) + td = Series([np.timedelta64(1, 's')]) self.assertEqual(td.dtype, 'timedelta64[ns]') # these are frequency conversion astypes - #for t in ['s', 'D', 'us', 'ms']: + # for t in ['s', 'D', 'us', 'ms']: # self.assertRaises(TypeError, td.astype, 'm8[%s]' % t) # valid astype @@ -3441,7 +3530,8 @@ def test_constructor_dtype_timedelta64(self): # this is an invalid casting def f(): - Series([timedelta(days=1), 'foo'],dtype='m8[ns]') + Series([timedelta(days=1), 'foo'], dtype='m8[ns]') + self.assertRaises(Exception, f) # leave as object here @@ -3450,30 +3540,30 @@ def f(): # these will correctly infer a timedelta s = Series([None, pd.NaT, '1 Day']) - self.assertEqual(s.dtype,'timedelta64[ns]') + self.assertEqual(s.dtype, 'timedelta64[ns]') s = Series([np.nan, pd.NaT, '1 Day']) - self.assertEqual(s.dtype,'timedelta64[ns]') + self.assertEqual(s.dtype, 'timedelta64[ns]') s = Series([pd.NaT, None, '1 Day']) - self.assertEqual(s.dtype,'timedelta64[ns]') + self.assertEqual(s.dtype, 'timedelta64[ns]') s = Series([pd.NaT, np.nan, '1 Day']) - self.assertEqual(s.dtype,'timedelta64[ns]') + self.assertEqual(s.dtype, 'timedelta64[ns]') def test_operators_timedelta64(self): # invalid ops self.assertRaises(Exception, self.objSeries.__add__, 1) - self.assertRaises( - Exception, self.objSeries.__add__, np.array(1, dtype=np.int64)) + self.assertRaises(Exception, self.objSeries.__add__, + np.array(1, dtype=np.int64)) self.assertRaises(Exception, self.objSeries.__sub__, 1) - self.assertRaises( - Exception, self.objSeries.__sub__, 
np.array(1, dtype=np.int64)) + self.assertRaises(Exception, self.objSeries.__sub__, + np.array(1, dtype=np.int64)) # seriese ops v1 = date_range('2012-1-1', periods=3, freq='D') v2 = date_range('2012-1-2', periods=3, freq='D') rs = Series(v2) - Series(v1) - xp = Series(1e9 * 3600 * 24, rs.index).astype( - 'int64').astype('timedelta64[ns]') + xp = Series(1e9 * 3600 * 24, + rs.index).astype('int64').astype('timedelta64[ns]') assert_series_equal(rs, xp) self.assertEqual(rs.dtype, 'timedelta64[ns]') @@ -3497,13 +3587,15 @@ def test_operators_timedelta64(self): # timestamp on lhs result = resultb + df['A'] - values = [Timestamp('20111230'), Timestamp('20120101'), Timestamp('20120103')] + values = [Timestamp('20111230'), Timestamp('20120101'), + Timestamp('20120103')] expected = Series(values, name='A') assert_series_equal(result, expected) # datetimes on rhs result = df['A'] - datetime(2001, 1, 1) - expected = Series([timedelta(days=4017 + i) for i in range(3)], name='A') + expected = Series( + [timedelta(days=4017 + i) for i in range(3)], name='A') assert_series_equal(result, expected) self.assertEqual(result.dtype, 'm8[ns]') @@ -3530,8 +3622,8 @@ def test_operators_timedelta64(self): self.assertEqual(resultb.dtype, 'M8[ns]') # inplace - value = rs[2] + np.timedelta64(timedelta(minutes=5,seconds=1)) - rs[2] += np.timedelta64(timedelta(minutes=5,seconds=1)) + value = rs[2] + np.timedelta64(timedelta(minutes=5, seconds=1)) + rs[2] += np.timedelta64(timedelta(minutes=5, seconds=1)) self.assertEqual(rs[2], value) def test_timedeltas_with_DateOffset(self): @@ -3542,54 +3634,54 @@ def test_timedeltas_with_DateOffset(self): result = s + pd.offsets.Second(5) result2 = pd.offsets.Second(5) + s - expected = Series( - [Timestamp('20130101 9:01:05'), Timestamp('20130101 9:02:05')]) + expected = Series([Timestamp('20130101 9:01:05'), Timestamp( + '20130101 9:02:05')]) assert_series_equal(result, expected) assert_series_equal(result2, expected) result = s - pd.offsets.Second(5) 
result2 = -pd.offsets.Second(5) + s - expected = Series( - [Timestamp('20130101 9:00:55'), Timestamp('20130101 9:01:55')]) + expected = Series([Timestamp('20130101 9:00:55'), Timestamp( + '20130101 9:01:55')]) assert_series_equal(result, expected) assert_series_equal(result2, expected) result = s + pd.offsets.Milli(5) result2 = pd.offsets.Milli(5) + s - expected = Series( - [Timestamp('20130101 9:01:00.005'), Timestamp('20130101 9:02:00.005')]) + expected = Series([Timestamp('20130101 9:01:00.005'), Timestamp( + '20130101 9:02:00.005')]) assert_series_equal(result, expected) assert_series_equal(result2, expected) result = s + pd.offsets.Minute(5) + pd.offsets.Milli(5) - expected = Series( - [Timestamp('20130101 9:06:00.005'), Timestamp('20130101 9:07:00.005')]) + expected = Series([Timestamp('20130101 9:06:00.005'), Timestamp( + '20130101 9:07:00.005')]) assert_series_equal(result, expected) # operate with np.timedelta64 correctly result = s + np.timedelta64(1, 's') result2 = np.timedelta64(1, 's') + s - expected = Series( - [Timestamp('20130101 9:01:01'), Timestamp('20130101 9:02:01')]) + expected = Series([Timestamp('20130101 9:01:01'), Timestamp( + '20130101 9:02:01')]) assert_series_equal(result, expected) assert_series_equal(result2, expected) result = s + np.timedelta64(5, 'ms') result2 = np.timedelta64(5, 'ms') + s - expected = Series( - [Timestamp('20130101 9:01:00.005'), Timestamp('20130101 9:02:00.005')]) + expected = Series([Timestamp('20130101 9:01:00.005'), Timestamp( + '20130101 9:02:00.005')]) assert_series_equal(result, expected) assert_series_equal(result2, expected) # valid DateOffsets - for do in [ 'Hour', 'Minute', 'Second', 'Day', 'Micro', - 'Milli', 'Nano' ]: - op = getattr(pd.offsets,do) + for do in ['Hour', 'Minute', 'Second', 'Day', 'Micro', 'Milli', + 'Nano']: + op = getattr(pd.offsets, do) s + op(5) op(5) + s def test_timedelta_series_ops(self): - #GH11925 + # GH11925 s = Series(timedelta_range('1 day', periods=3)) ts = 
Timestamp('2012-01-01') @@ -3601,7 +3693,6 @@ def test_timedelta_series_ops(self): assert_series_equal(ts - s, expected2) assert_series_equal(ts + (-s), expected2) - def test_timedelta64_operations_with_DateOffset(self): # GH 10699 td = Series([timedelta(minutes=5, seconds=3)] * 3) @@ -3615,9 +3706,8 @@ def test_timedelta64_operations_with_DateOffset(self): result = td + Series([pd.offsets.Minute(1), pd.offsets.Second(3), pd.offsets.Hour(2)]) - expected = Series([timedelta(minutes=6, seconds=3), - timedelta(minutes=5, seconds=6), - timedelta(hours=2, minutes=5, seconds=3)]) + expected = Series([timedelta(minutes=6, seconds=3), timedelta( + minutes=5, seconds=6), timedelta(hours=2, minutes=5, seconds=3)]) assert_series_equal(result, expected) result = td + pd.offsets.Minute(1) + pd.offsets.Second(12) @@ -3625,9 +3715,9 @@ def test_timedelta64_operations_with_DateOffset(self): assert_series_equal(result, expected) # valid DateOffsets - for do in [ 'Hour', 'Minute', 'Second', 'Day', 'Micro', - 'Milli', 'Nano' ]: - op = getattr(pd.offsets,do) + for do in ['Hour', 'Minute', 'Second', 'Day', 'Micro', 'Milli', + 'Nano']: + op = getattr(pd.offsets, do) td + op(5) op(5) + td td - op(5) @@ -3639,36 +3729,36 @@ def test_timedelta64_operations_with_timedeltas(self): td1 = Series([timedelta(minutes=5, seconds=3)] * 3) td2 = timedelta(minutes=5, seconds=4) result = td1 - td2 - expected = Series([timedelta(seconds=0)] * 3) -Series( - [timedelta(seconds=1)] * 3) + expected = Series([timedelta(seconds=0)] * 3) - Series([timedelta( + seconds=1)] * 3) self.assertEqual(result.dtype, 'm8[ns]') assert_series_equal(result, expected) result2 = td2 - td1 - expected = (Series([timedelta(seconds=1)] * 3) - - Series([timedelta(seconds=0)] * 3)) + expected = (Series([timedelta(seconds=1)] * 3) - Series([timedelta( + seconds=0)] * 3)) assert_series_equal(result2, expected) # roundtrip - assert_series_equal(result + td2,td1) + assert_series_equal(result + td2, td1) # Now again, using 
pd.to_timedelta, which should build # a Series or a scalar, depending on input. td1 = Series(pd.to_timedelta(['00:05:03'] * 3)) td2 = pd.to_timedelta('00:05:04') result = td1 - td2 - expected = Series([timedelta(seconds=0)] * 3) -Series( - [timedelta(seconds=1)] * 3) + expected = Series([timedelta(seconds=0)] * 3) - Series([timedelta( + seconds=1)] * 3) self.assertEqual(result.dtype, 'm8[ns]') assert_series_equal(result, expected) result2 = td2 - td1 - expected = (Series([timedelta(seconds=1)] * 3) - - Series([timedelta(seconds=0)] * 3)) + expected = (Series([timedelta(seconds=1)] * 3) - Series([timedelta( + seconds=0)] * 3)) assert_series_equal(result2, expected) # roundtrip - assert_series_equal(result + td2,td1) + assert_series_equal(result + td2, td1) def test_timedelta64_operations_with_integers(self): @@ -3683,54 +3773,55 @@ def test_timedelta64_operations_with_integers(self): expected = Series(s1.values.astype(np.int64) / s2, dtype='m8[ns]') expected[2] = np.nan result = s1 / s2 - assert_series_equal(result,expected) + assert_series_equal(result, expected) s2 = Series([20, 30, 40]) expected = Series(s1.values.astype(np.int64) / s2, dtype='m8[ns]') expected[2] = np.nan result = s1 / s2 - assert_series_equal(result,expected) + assert_series_equal(result, expected) result = s1 / 2 expected = Series(s1.values.astype(np.int64) / 2, dtype='m8[ns]') expected[2] = np.nan - assert_series_equal(result,expected) + assert_series_equal(result, expected) s2 = Series([20, 30, 40]) expected = Series(s1.values.astype(np.int64) * s2, dtype='m8[ns]') expected[2] = np.nan result = s1 * s2 - assert_series_equal(result,expected) + assert_series_equal(result, expected) - for dtype in ['int32','int16','uint32','uint64','uint32','uint16','uint8']: - s2 = Series([20, 30, 40],dtype=dtype) - expected = Series(s1.values.astype(np.int64) * s2.astype(np.int64), dtype='m8[ns]') + for dtype in ['int32', 'int16', 'uint32', 'uint64', 'uint32', 'uint16', + 'uint8']: + s2 = Series([20, 30, 40], 
dtype=dtype) + expected = Series( + s1.values.astype(np.int64) * s2.astype(np.int64), + dtype='m8[ns]') expected[2] = np.nan result = s1 * s2 - assert_series_equal(result,expected) + assert_series_equal(result, expected) result = s1 * 2 expected = Series(s1.values.astype(np.int64) * 2, dtype='m8[ns]') expected[2] = np.nan - assert_series_equal(result,expected) + assert_series_equal(result, expected) result = s1 * -1 expected = Series(s1.values.astype(np.int64) * -1, dtype='m8[ns]') expected[2] = np.nan - assert_series_equal(result,expected) + assert_series_equal(result, expected) # invalid ops assert_series_equal(s1 / s2.astype(float), - Series([Timedelta('2 days 22:48:00'), - Timedelta('1 days 23:12:00'), - Timedelta('NaT')])) + Series([Timedelta('2 days 22:48:00'), Timedelta( + '1 days 23:12:00'), Timedelta('NaT')])) assert_series_equal(s1 / 2.0, - Series([Timedelta('29 days 12:00:00'), - Timedelta('29 days 12:00:00'), - Timedelta('NaT')])) + Series([Timedelta('29 days 12:00:00'), Timedelta( + '29 days 12:00:00'), Timedelta('NaT')])) - for op in ['__add__','__sub__']: - sop = getattr(s1,op,None) + for op in ['__add__', '__sub__']: + sop = getattr(s1, op, None) if sop is not None: self.assertRaises(TypeError, sop, 1) self.assertRaises(TypeError, sop, s2.values) @@ -3743,11 +3834,11 @@ def test_timedelta64_conversions(self): s1[2] = np.nan for m in [1, 3, 10]: - for unit in ['D','h','m','s','ms','us','ns']: + for unit in ['D', 'h', 'm', 's', 'ms', 'us', 'ns']: # op - expected = s1.apply(lambda x: x / np.timedelta64(m,unit)) - result = s1 / np.timedelta64(m,unit) + expected = s1.apply(lambda x: x / np.timedelta64(m, unit)) + result = s1 / np.timedelta64(m, unit) assert_series_equal(result, expected) if m == 1 and unit != 'ns': @@ -3757,27 +3848,33 @@ def test_timedelta64_conversions(self): assert_series_equal(result, expected) # reverse op - expected = s1.apply(lambda x: Timedelta(np.timedelta64(m,unit)) / x) - result = np.timedelta64(m,unit) / s1 + expected = 
s1.apply( + lambda x: Timedelta(np.timedelta64(m, unit)) / x) + result = np.timedelta64(m, unit) / s1 # astype - s = Series(date_range('20130101',periods=3)) + s = Series(date_range('20130101', periods=3)) result = s.astype(object) - self.assertIsInstance(result.iloc[0],datetime) + self.assertIsInstance(result.iloc[0], datetime) self.assertTrue(result.dtype == np.object_) result = s1.astype(object) - self.assertIsInstance(result.iloc[0],timedelta) + self.assertIsInstance(result.iloc[0], timedelta) self.assertTrue(result.dtype == np.object_) def test_timedelta64_equal_timedelta_supported_ops(self): ser = Series([Timestamp('20130301'), Timestamp('20130228 23:00:00'), - Timestamp('20130228 22:00:00'), - Timestamp('20130228 21:00:00')]) + Timestamp('20130228 22:00:00'), Timestamp( + '20130228 21:00:00')]) intervals = 'D', 'h', 'm', 's', 'us' - npy16_mappings = {'D': 24 * 60 * 60 * 1000000, 'h': 60 * 60 * 1000000, - 'm': 60 * 1000000, 's': 1000000, 'us': 1} + + # TODO: unused + # npy16_mappings = {'D': 24 * 60 * 60 * 1000000, + # 'h': 60 * 60 * 1000000, + # 'm': 60 * 1000000, + # 's': 1000000, + # 'us': 1} def timedelta64(*args): return sum(starmap(np.timedelta64, zip(args, intervals))) @@ -3794,41 +3891,44 @@ def timedelta64(*args): assert_series_equal(lhs, rhs) except: raise AssertionError( - "invalid comparsion [op->{0},d->{1},h->{2},m->{3},s->{4},us->{5}]\n{6}\n{7}\n".format(op, d, h, m, s, us, lhs, rhs)) + "invalid comparison [op->{0},d->{1},h->{2},m->{3}," + "s->{4},us->{5}]\n{6}\n{7}\n".format(op, d, h, m, s, + us, lhs, rhs)) def test_timedelta_assignment(self): # GH 8209 s = Series([]) s.loc['B'] = timedelta(1) - tm.assert_series_equal(s,Series(Timedelta('1 days'),index=['B'])) + tm.assert_series_equal(s, Series(Timedelta('1 days'), index=['B'])) s = s.reindex(s.index.insert(0, 'A')) - tm.assert_series_equal(s,Series([np.nan,Timedelta('1 days')],index=['A','B'])) + tm.assert_series_equal(s, Series( + [np.nan, Timedelta('1 days')], index=['A', 'B'])) result = 
s.fillna(timedelta(1)) - expected = Series(Timedelta('1 days'),index=['A','B']) + expected = Series(Timedelta('1 days'), index=['A', 'B']) tm.assert_series_equal(result, expected) s.loc['A'] = timedelta(1) tm.assert_series_equal(s, expected) def test_operators_datetimelike(self): - def run_ops(ops, get_ser, test_ser): # check that we are getting a TypeError - # with 'operate' (from core/ops.py) for the ops that are not defined + # with 'operate' (from core/ops.py) for the ops that are not + # defined for op_str in ops: op = getattr(get_ser, op_str, None) with tm.assertRaisesRegexp(TypeError, 'operate'): op(test_ser) - ### timedelta64 ### - td1 = Series([timedelta(minutes=5,seconds=3)]*3) + # ## timedelta64 ### + td1 = Series([timedelta(minutes=5, seconds=3)] * 3) td1.iloc[2] = np.nan - td2 = timedelta(minutes=5,seconds=4) - ops = ['__mul__','__floordiv__','__pow__', - '__rmul__','__rfloordiv__','__rpow__'] + td2 = timedelta(minutes=5, seconds=4) + ops = ['__mul__', '__floordiv__', '__pow__', '__rmul__', + '__rfloordiv__', '__rpow__'] run_ops(ops, td1, td2) td1 + td2 td2 + td1 @@ -3837,12 +3937,12 @@ def run_ops(ops, get_ser, test_ser): td1 / td2 td2 / td1 - ### datetime64 ### - dt1 = Series([Timestamp('20111230'), Timestamp('20120101'), - Timestamp('20120103')]) + # ## datetime64 ### + dt1 = Series([Timestamp('20111230'), Timestamp('20120101'), Timestamp( + '20120103')]) dt1.iloc[2] = np.nan - dt2 = Series([Timestamp('20111231'), Timestamp('20120102'), - Timestamp('20120104')]) + dt2 = Series([Timestamp('20111231'), Timestamp('20120102'), Timestamp( + '20120104')]) ops = ['__add__', '__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__', '__radd__', '__rmul__', '__rfloordiv__', '__rtruediv__', '__rdiv__', '__rpow__'] @@ -3850,7 +3950,7 @@ def run_ops(ops, get_ser, test_ser): dt1 - dt2 dt2 - dt1 - ### datetime64 with timetimedelta ### + # ## datetime64 with timetimedelta ### ops = ['__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__', '__rmul__', 
'__rfloordiv__', '__rtruediv__', '__rdiv__', '__rpow__'] @@ -3861,10 +3961,10 @@ def run_ops(ops, get_ser, test_ser): # TODO: Decide if this ought to work. # td1 - dt1 - ### timetimedelta with datetime64 ### + # ## timetimedelta with datetime64 ### ops = ['__sub__', '__mul__', '__floordiv__', '__truediv__', '__div__', - '__pow__', '__rmul__', '__rfloordiv__', - '__rtruediv__', '__rdiv__', '__rpow__'] + '__pow__', '__rmul__', '__rfloordiv__', '__rtruediv__', + '__rdiv__', '__rpow__'] run_ops(ops, td1, dt1) td1 + dt1 dt1 + td1 @@ -3874,56 +3974,68 @@ def run_ops(ops, get_ser, test_ser): ops = ['__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__', '__rmul__', '__rfloordiv__', '__rtruediv__', '__rdiv__', '__rpow__'] - dt1 = Series(date_range('2000-01-01 09:00:00',periods=5,tz='US/Eastern'),name='foo') + dt1 = Series( + date_range('2000-01-01 09:00:00', periods=5, + tz='US/Eastern'), name='foo') dt2 = dt1.copy() dt2.iloc[2] = np.nan - td1 = Series(timedelta_range('1 days 1 min',periods=5, freq='H')) + td1 = Series(timedelta_range('1 days 1 min', periods=5, freq='H')) td2 = td1.copy() td2.iloc[1] = np.nan run_ops(ops, dt1, td1) result = dt1 + td1[0] - expected = (dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize('US/Eastern') + expected = ( + dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) result = dt2 + td2[0] - expected = (dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize('US/Eastern') + expected = ( + dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) # odd numpy behavior with scalar timedeltas if not _np_version_under1p8: result = td1[0] + dt1 - expected = (dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize('US/Eastern') + expected = ( + dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) result = td2[0] + dt2 - expected = (dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize('US/Eastern') + expected = ( + 
dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) result = dt1 - td1[0] - expected = (dt1.dt.tz_localize(None) - td1[0]).dt.tz_localize('US/Eastern') + expected = ( + dt1.dt.tz_localize(None) - td1[0]).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) self.assertRaises(TypeError, lambda: td1[0] - dt1) result = dt2 - td2[0] - expected = (dt2.dt.tz_localize(None) - td2[0]).dt.tz_localize('US/Eastern') + expected = ( + dt2.dt.tz_localize(None) - td2[0]).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) self.assertRaises(TypeError, lambda: td2[0] - dt2) result = dt1 + td1 - expected = (dt1.dt.tz_localize(None) + td1).dt.tz_localize('US/Eastern') + expected = ( + dt1.dt.tz_localize(None) + td1).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) result = dt2 + td2 - expected = (dt2.dt.tz_localize(None) + td2).dt.tz_localize('US/Eastern') + expected = ( + dt2.dt.tz_localize(None) + td2).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) result = dt1 - td1 - expected = (dt1.dt.tz_localize(None) - td1).dt.tz_localize('US/Eastern') + expected = ( + dt1.dt.tz_localize(None) - td1).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) result = dt2 - td2 - expected = (dt2.dt.tz_localize(None) - td2).dt.tz_localize('US/Eastern') + expected = ( + dt2.dt.tz_localize(None) - td2).dt.tz_localize('US/Eastern') assert_series_equal(result, expected) self.assertRaises(TypeError, lambda: td1 - dt1) @@ -3933,14 +4045,16 @@ def test_ops_nat(self): # GH 11349 timedelta_series = Series([NaT, Timedelta('1s')]) datetime_series = Series([NaT, Timestamp('19900315')]) - nat_series_dtype_timedelta = Series([NaT, NaT], dtype='timedelta64[ns]') + nat_series_dtype_timedelta = Series( + [NaT, NaT], dtype='timedelta64[ns]') nat_series_dtype_timestamp = Series([NaT, NaT], dtype='datetime64[ns]') single_nat_dtype_datetime = Series([NaT], dtype='datetime64[ns]') 
         single_nat_dtype_timedelta = Series([NaT], dtype='timedelta64[ns]')

         # subtraction
         assert_series_equal(timedelta_series - NaT, nat_series_dtype_timedelta)
-        assert_series_equal(-NaT + timedelta_series, nat_series_dtype_timedelta)
+        assert_series_equal(-NaT + timedelta_series,
+                            nat_series_dtype_timedelta)

         assert_series_equal(timedelta_series - single_nat_dtype_timedelta,
                             nat_series_dtype_timedelta)
@@ -3957,7 +4071,7 @@ def test_ops_nat(self):

         assert_series_equal(datetime_series - single_nat_dtype_timedelta,
                             nat_series_dtype_timestamp)
-        assert_series_equal(-single_nat_dtype_timedelta + datetime_series ,
+        assert_series_equal(-single_nat_dtype_timedelta + datetime_series,
                             nat_series_dtype_timestamp)

         # without a Series wrapping the NaT, it is ambiguous
@@ -3968,14 +4082,17 @@ def test_ops_nat(self):
         assert_series_equal(-NaT + nat_series_dtype_timestamp,
                             nat_series_dtype_timestamp)

-        assert_series_equal(nat_series_dtype_timestamp - single_nat_dtype_datetime,
+        assert_series_equal(nat_series_dtype_timestamp -
+                            single_nat_dtype_datetime,
                             nat_series_dtype_timedelta)
         with tm.assertRaises(TypeError):
             -single_nat_dtype_datetime + nat_series_dtype_timestamp

-        assert_series_equal(nat_series_dtype_timestamp - single_nat_dtype_timedelta,
+        assert_series_equal(nat_series_dtype_timestamp -
+                            single_nat_dtype_timedelta,
                             nat_series_dtype_timestamp)
-        assert_series_equal(-single_nat_dtype_timedelta + nat_series_dtype_timestamp,
+        assert_series_equal(-single_nat_dtype_timedelta +
+                            nat_series_dtype_timestamp,
                             nat_series_dtype_timestamp)

         with tm.assertRaises(TypeError):
@@ -3987,9 +4104,11 @@ def test_ops_nat(self):
         assert_series_equal(NaT + nat_series_dtype_timestamp,
                             nat_series_dtype_timestamp)

-        assert_series_equal(nat_series_dtype_timestamp + single_nat_dtype_timedelta,
+        assert_series_equal(nat_series_dtype_timestamp +
+                            single_nat_dtype_timedelta,
                             nat_series_dtype_timestamp)
-        assert_series_equal(single_nat_dtype_timedelta + nat_series_dtype_timestamp,
+        assert_series_equal(single_nat_dtype_timedelta +
+                            nat_series_dtype_timestamp,
                             nat_series_dtype_timestamp)

         assert_series_equal(nat_series_dtype_timedelta + NaT,
@@ -3997,9 +4116,11 @@ def test_ops_nat(self):
         assert_series_equal(NaT + nat_series_dtype_timedelta,
                             nat_series_dtype_timedelta)

-        assert_series_equal(nat_series_dtype_timedelta + single_nat_dtype_timedelta,
+        assert_series_equal(nat_series_dtype_timedelta +
+                            single_nat_dtype_timedelta,
                             nat_series_dtype_timedelta)
-        assert_series_equal(single_nat_dtype_timedelta + nat_series_dtype_timedelta,
+        assert_series_equal(single_nat_dtype_timedelta +
+                            nat_series_dtype_timedelta,
                             nat_series_dtype_timedelta)

         assert_series_equal(timedelta_series + NaT, nat_series_dtype_timedelta)
@@ -4015,9 +4136,11 @@ def test_ops_nat(self):
         assert_series_equal(NaT + nat_series_dtype_timestamp,
                             nat_series_dtype_timestamp)

-        assert_series_equal(nat_series_dtype_timestamp + single_nat_dtype_timedelta,
+        assert_series_equal(nat_series_dtype_timestamp +
+                            single_nat_dtype_timedelta,
                             nat_series_dtype_timestamp)
-        assert_series_equal(single_nat_dtype_timedelta + nat_series_dtype_timestamp,
+        assert_series_equal(single_nat_dtype_timedelta +
+                            nat_series_dtype_timestamp,
                             nat_series_dtype_timestamp)

         assert_series_equal(nat_series_dtype_timedelta + NaT,
@@ -4025,14 +4148,18 @@ def test_ops_nat(self):
         assert_series_equal(NaT + nat_series_dtype_timedelta,
                             nat_series_dtype_timedelta)

-        assert_series_equal(nat_series_dtype_timedelta + single_nat_dtype_timedelta,
+        assert_series_equal(nat_series_dtype_timedelta +
+                            single_nat_dtype_timedelta,
                             nat_series_dtype_timedelta)
-        assert_series_equal(single_nat_dtype_timedelta + nat_series_dtype_timedelta,
+        assert_series_equal(single_nat_dtype_timedelta +
+                            nat_series_dtype_timedelta,
                             nat_series_dtype_timedelta)

-        assert_series_equal(nat_series_dtype_timedelta + single_nat_dtype_datetime,
+        assert_series_equal(nat_series_dtype_timedelta +
+                            single_nat_dtype_datetime,
                             nat_series_dtype_timestamp)
-        assert_series_equal(single_nat_dtype_datetime + nat_series_dtype_timedelta,
+        assert_series_equal(single_nat_dtype_datetime +
+                            nat_series_dtype_timedelta,
                             nat_series_dtype_timestamp)

         # multiplication
@@ -4066,14 +4193,12 @@ def test_ops_nat(self):
                             Series([NaT, Timedelta('0.5s')]))
         assert_series_equal(timedelta_series / 2.0,
                             Series([NaT, Timedelta('0.5s')]))
-        assert_series_equal(timedelta_series / nan,
-                            nat_series_dtype_timedelta)
+        assert_series_equal(timedelta_series / nan, nat_series_dtype_timedelta)
         with tm.assertRaises(TypeError):
             nat_series_dtype_timestamp / 1.0
         with tm.assertRaises(TypeError):
             nat_series_dtype_timestamp / 1
-
     def test_ops_datetimelike_align(self):
         # GH 7500
         # datetimelike ops need to align
@@ -4091,13 +4216,11 @@ def test_ops_datetimelike_align(self):
         assert_series_equal(result, expected)

     def test_timedelta64_functions(self):
-
-        from datetime import timedelta
         from pandas import date_range

         # index min/max
         td = Series(date_range('2012-1-1', periods=3, freq='D')) - \
-             Timestamp('20120101')
+            Timestamp('20120101')

         result = td.idxmin()
         self.assertEqual(result, 0)
@@ -4121,7 +4244,7 @@ def test_timedelta64_functions(self):
         expected = Series(s2 - s1)

         # this fails as numpy returns timedelta64[us]
-        #result = np.abs(s1-s2)
+        # result = np.abs(s1-s2)
         # assert_frame_equal(result,expected)

         result = (s1 - s2).abs()
@@ -4143,7 +4266,7 @@ def test_ops_consistency_on_empty(self):

         # float
         result = Series(dtype=float).sum()
-        self.assertEqual(result,0)
+        self.assertEqual(result, 0)

         result = Series(dtype=float).mean()
         self.assertTrue(isnull(result))
@@ -4162,37 +4285,38 @@ def test_ops_consistency_on_empty(self):
         self.assertTrue(result is pd.NaT)

     def test_timedelta_fillna(self):
-        #GH 3371
-        s = Series([Timestamp('20130101'), Timestamp('20130101'),
-                    Timestamp('20130102'), Timestamp('20130103 9:01:01')])
+        # GH 3371
+        s = Series([Timestamp('20130101'), Timestamp('20130101'), Timestamp(
+            '20130102'), Timestamp('20130103 9:01:01')])
         td = s.diff()

         # reg fillna
         result = td.fillna(0)
-        expected = Series([timedelta(0), timedelta(0), timedelta(1),
-                           timedelta(days=1, seconds=9*3600+60+1)])
+        expected = Series([timedelta(0), timedelta(0), timedelta(1), timedelta(
+            days=1, seconds=9 * 3600 + 60 + 1)])
         assert_series_equal(result, expected)

         # interprested as seconds
         result = td.fillna(1)
-        expected = Series([timedelta(seconds=1), timedelta(0),
-                           timedelta(1), timedelta(days=1, seconds=9*3600+60+1)])
+        expected = Series([timedelta(seconds=1), timedelta(0), timedelta(1),
+                           timedelta(days=1, seconds=9 * 3600 + 60 + 1)])
         assert_series_equal(result, expected)

         result = td.fillna(timedelta(days=1, seconds=1))
-        expected = Series([timedelta(days=1, seconds=1), timedelta(0),
-                           timedelta(1), timedelta(days=1, seconds=9*3600+60+1)])
+        expected = Series([timedelta(days=1, seconds=1), timedelta(
+            0), timedelta(1), timedelta(days=1, seconds=9 * 3600 + 60 + 1)])
         assert_series_equal(result, expected)

         result = td.fillna(np.timedelta64(int(1e9)))
         expected = Series([timedelta(seconds=1), timedelta(0), timedelta(1),
-                           timedelta(days=1, seconds=9*3600+60+1)])
+                           timedelta(days=1, seconds=9 * 3600 + 60 + 1)])
         assert_series_equal(result, expected)

         from pandas import tslib
         result = td.fillna(tslib.NaT)
         expected = Series([tslib.NaT, timedelta(0), timedelta(1),
-                           timedelta(days=1, seconds=9*3600+60+1)], dtype='m8[ns]')
+                           timedelta(days=1, seconds=9 * 3600 + 60 + 1)],
+                          dtype='m8[ns]')
         assert_series_equal(result, expected)

         # ffill
@@ -4206,19 +4330,19 @@ def test_timedelta_fillna(self):
         td[2] = np.nan
         result = td.bfill()
         expected = td.fillna(0)
-        expected[2] = timedelta(days=1, seconds=9*3600+60+1)
+        expected[2] = timedelta(days=1, seconds=9 * 3600 + 60 + 1)
         assert_series_equal(result, expected)

     def test_datetime64_fillna(self):

-        s = Series([Timestamp('20130101'), Timestamp('20130101'),
-                    Timestamp('20130102'), Timestamp('20130103 9:01:01')])
+        s = Series([Timestamp('20130101'), Timestamp('20130101'), Timestamp(
+            '20130102'), Timestamp('20130103 9:01:01')])
         s[2] = np.nan

         # reg fillna
         result = s.fillna(Timestamp('20130104'))
-        expected = Series([Timestamp('20130101'), Timestamp('20130101'),
-                           Timestamp('20130104'), Timestamp('20130103 9:01:01')])
+        expected = Series([Timestamp('20130101'), Timestamp(
+            '20130101'), Timestamp('20130104'), Timestamp('20130103 9:01:01')])
         assert_series_equal(result, expected)

         from pandas import tslib
@@ -4228,59 +4352,62 @@ def test_datetime64_fillna(self):

         # ffill
         result = s.ffill()
-        expected = Series([Timestamp('20130101'), Timestamp('20130101'),
-                           Timestamp('20130101'), Timestamp('20130103 9:01:01')])
+        expected = Series([Timestamp('20130101'), Timestamp(
+            '20130101'), Timestamp('20130101'), Timestamp('20130103 9:01:01')])
         assert_series_equal(result, expected)

         # bfill
         result = s.bfill()
         expected = Series([Timestamp('20130101'), Timestamp('20130101'),
-                           Timestamp('20130103 9:01:01'),
-                           Timestamp('20130103 9:01:01')])
+                           Timestamp('20130103 9:01:01'), Timestamp(
+                               '20130103 9:01:01')])
         assert_series_equal(result, expected)

         # GH 6587
         # make sure that we are treating as integer when filling
         # this also tests inference of a datetime-like with NaT's
         s = Series([pd.NaT, pd.NaT, '2013-08-05 15:30:00.000001'])
-        expected = Series(['2013-08-05 15:30:00.000001', '2013-08-05 15:30:00.000001', '2013-08-05 15:30:00.000001'], dtype='M8[ns]')
+        expected = Series(
+            ['2013-08-05 15:30:00.000001', '2013-08-05 15:30:00.000001',
+             '2013-08-05 15:30:00.000001'], dtype='M8[ns]')
         result = s.fillna(method='backfill')
         assert_series_equal(result, expected)

     def test_datetime64_tz_fillna(self):
         for tz in ['US/Eastern', 'Asia/Tokyo']:
             # DatetimeBlock
-            s = Series([Timestamp('2011-01-01 10:00'), pd.NaT,
-                        Timestamp('2011-01-03 10:00'), pd.NaT])
+            s = Series([Timestamp('2011-01-01 10:00'), pd.NaT, Timestamp(
+                '2011-01-03 10:00'), pd.NaT])
             result = s.fillna(pd.Timestamp('2011-01-02 10:00'))
-            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00'),
-                               Timestamp('2011-01-03 10:00'), Timestamp('2011-01-02 10:00')])
+            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp(
+                '2011-01-02 10:00'), Timestamp('2011-01-03 10:00'), Timestamp(
+                '2011-01-02 10:00')])
             self.assert_series_equal(expected, result)

             result = s.fillna(pd.Timestamp('2011-01-02 10:00', tz=tz))
-            expected = Series([Timestamp('2011-01-01 10:00'),
-                               Timestamp('2011-01-02 10:00', tz=tz),
-                               Timestamp('2011-01-03 10:00'),
-                               Timestamp('2011-01-02 10:00', tz=tz)])
+            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp(
+                '2011-01-02 10:00', tz=tz), Timestamp('2011-01-03 10:00'),
+                Timestamp('2011-01-02 10:00', tz=tz)])
             self.assert_series_equal(expected, result)

             result = s.fillna('AAA')
             expected = Series([Timestamp('2011-01-01 10:00'), 'AAA',
-                               Timestamp('2011-01-03 10:00'), 'AAA'], dtype=object)
+                               Timestamp('2011-01-03 10:00'), 'AAA'],
+                              dtype=object)
             self.assert_series_equal(expected, result)

             result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz),
                                3: pd.Timestamp('2011-01-04 10:00')})
-            expected = Series([Timestamp('2011-01-01 10:00'),
-                               Timestamp('2011-01-02 10:00', tz=tz),
-                               Timestamp('2011-01-03 10:00'),
-                               Timestamp('2011-01-04 10:00')])
+            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp(
+                '2011-01-02 10:00', tz=tz), Timestamp('2011-01-03 10:00'),
+                Timestamp('2011-01-04 10:00')])
             self.assert_series_equal(expected, result)

             result = s.fillna({1: pd.Timestamp('2011-01-02 10:00'),
                                3: pd.Timestamp('2011-01-04 10:00')})
-            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp('2011-01-02 10:00'),
-                               Timestamp('2011-01-03 10:00'), Timestamp('2011-01-04 10:00')])
+            expected = Series([Timestamp('2011-01-01 10:00'), Timestamp(
+                '2011-01-02 10:00'), Timestamp('2011-01-03 10:00'), Timestamp(
+                '2011-01-04 10:00')])
             self.assert_series_equal(expected, result)

             # DatetimeBlockTZ
@@ -4288,10 +4415,9 @@ def test_datetime64_tz_fillna(self):
                                    '2011-01-03 10:00', pd.NaT], tz=tz)
             s = pd.Series(idx)
             result = s.fillna(pd.Timestamp('2011-01-02 10:00'))
-            expected = Series([Timestamp('2011-01-01 10:00', tz=tz),
-                               Timestamp('2011-01-02 10:00'),
-                               Timestamp('2011-01-03 10:00', tz=tz),
-                               Timestamp('2011-01-02 10:00')])
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp(
+                '2011-01-02 10:00'), Timestamp('2011-01-03 10:00', tz=tz),
+                Timestamp('2011-01-02 10:00')])
             self.assert_series_equal(expected, result)

             result = s.fillna(pd.Timestamp('2011-01-02 10:00', tz=tz))
@@ -4301,7 +4427,8 @@ def test_datetime64_tz_fillna(self):
             expected = Series(idx)
             self.assert_series_equal(expected, result)

-            result = s.fillna(pd.Timestamp('2011-01-02 10:00', tz=tz).to_pydatetime())
+            result = s.fillna(pd.Timestamp(
+                '2011-01-02 10:00', tz=tz).to_pydatetime())
             idx = pd.DatetimeIndex(['2011-01-01 10:00', '2011-01-02 10:00',
                                     '2011-01-03 10:00', '2011-01-02 10:00'],
                                    tz=tz)
@@ -4316,33 +4443,31 @@ def test_datetime64_tz_fillna(self):

             result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz),
                                3: pd.Timestamp('2011-01-04 10:00')})
-            expected = Series([Timestamp('2011-01-01 10:00', tz=tz),
-                               Timestamp('2011-01-02 10:00', tz=tz),
-                               Timestamp('2011-01-03 10:00', tz=tz),
-                               Timestamp('2011-01-04 10:00')])
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp(
+                '2011-01-02 10:00', tz=tz), Timestamp(
+                '2011-01-03 10:00', tz=tz), Timestamp('2011-01-04 10:00')])
             self.assert_series_equal(expected, result)

             result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz),
                                3: pd.Timestamp('2011-01-04 10:00', tz=tz)})
-            expected = Series([Timestamp('2011-01-01 10:00', tz=tz),
-                               Timestamp('2011-01-02 10:00', tz=tz),
-                               Timestamp('2011-01-03 10:00', tz=tz),
-                               Timestamp('2011-01-04 10:00', tz=tz)])
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp(
+                '2011-01-02 10:00', tz=tz), Timestamp(
+                '2011-01-03 10:00', tz=tz), Timestamp('2011-01-04 10:00',
+                                                      tz=tz)])
             self.assert_series_equal(expected, result)

             # filling with a naive/other zone, coerce to object
             result = s.fillna(Timestamp('20130101'))
-            expected = Series([Timestamp('2011-01-01 10:00', tz=tz),
-                               Timestamp('2013-01-01'),
-                               Timestamp('2011-01-03 10:00', tz=tz),
-                               Timestamp('2013-01-01')])
+            expected = Series([Timestamp('2011-01-01 10:00', tz=tz), Timestamp(
+                '2013-01-01'), Timestamp('2011-01-03 10:00', tz=tz), Timestamp(
+                '2013-01-01')])
             self.assert_series_equal(expected, result)

-            result = s.fillna(Timestamp('20130101',tz='US/Pacific'))
+            result = s.fillna(Timestamp('20130101', tz='US/Pacific'))
             expected = Series([Timestamp('2011-01-01 10:00', tz=tz),
-                               Timestamp('2013-01-01',tz='US/Pacific'),
+                               Timestamp('2013-01-01', tz='US/Pacific'),
                                Timestamp('2011-01-03 10:00', tz=tz),
-                               Timestamp('2013-01-01',tz='US/Pacific')])
+                               Timestamp('2013-01-01', tz='US/Pacific')])
             self.assert_series_equal(expected, result)

     def test_fillna_int(self):
@@ -4370,7 +4495,6 @@ def test_isnull_for_inf(self):
         tm.assert_series_equal(r, e)
         tm.assert_series_equal(dr, de)

-
     # TimeSeries-specific

     def test_fillna(self):
@@ -4395,39 +4519,39 @@ def test_fillna(self):
         s2 = Series([1])
         result = s1.fillna(s2)
         expected = Series([1.])
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)
         result = s1.fillna({})
-        assert_series_equal(result,s1)
+        assert_series_equal(result, s1)
         result = s1.fillna(Series(()))
-        assert_series_equal(result,s1)
+        assert_series_equal(result, s1)
         result = s2.fillna(s1)
-        assert_series_equal(result,s2)
-        result = s1.fillna({ 0 : 1})
-        assert_series_equal(result,expected)
-        result = s1.fillna({ 1 : 1})
-        assert_series_equal(result,Series([np.nan]))
-        result = s1.fillna({ 0 : 1, 1 : 1})
-        assert_series_equal(result,expected)
-        result = s1.fillna(Series({ 0 : 1, 1 : 1}))
-        assert_series_equal(result,expected)
-        result = s1.fillna(Series({ 0 : 1, 1 : 1},index=[4,5]))
-        assert_series_equal(result,s1)
+        assert_series_equal(result, s2)
+        result = s1.fillna({0: 1})
+        assert_series_equal(result, expected)
+        result = s1.fillna({1: 1})
+        assert_series_equal(result, Series([np.nan]))
+        result = s1.fillna({0: 1, 1: 1})
+        assert_series_equal(result, expected)
+        result = s1.fillna(Series({0: 1, 1: 1}))
+        assert_series_equal(result, expected)
+        result = s1.fillna(Series({0: 1, 1: 1}, index=[4, 5]))
+        assert_series_equal(result, s1)

         s1 = Series([0, 1, 2], list('abc'))
         s2 = Series([0, np.nan, 2], list('bac'))
         result = s2.fillna(s1)
-        expected = Series([0,0,2.], list('bac'))
-        assert_series_equal(result,expected)
+        expected = Series([0, 0, 2.], list('bac'))
+        assert_series_equal(result, expected)

         # limit
-        s = Series(np.nan,index=[0,1,2])
-        result = s.fillna(999,limit=1)
-        expected = Series([999,np.nan,np.nan],index=[0,1,2])
-        assert_series_equal(result,expected)
+        s = Series(np.nan, index=[0, 1, 2])
+        result = s.fillna(999, limit=1)
+        expected = Series([999, np.nan, np.nan], index=[0, 1, 2])
+        assert_series_equal(result, expected)

-        result = s.fillna(999,limit=2)
-        expected = Series([999,999,np.nan],index=[0,1,2])
-        assert_series_equal(result,expected)
+        result = s.fillna(999, limit=2)
+        expected = Series([999, 999, np.nan], index=[0, 1, 2])
+        assert_series_equal(result, expected)

         # GH 9043
         # make sure a string representation of int/float values can be filled
@@ -4502,8 +4626,8 @@ def test_datetime64_with_index(self):
         result = s - s.index.to_period()
         assert_series_equal(result, expected)

-        df = DataFrame(np.random.randn(5,2),
-                       index=date_range('20130101', periods=5))
+        df = DataFrame(np.random.randn(5, 2),
+                       index=date_range('20130101', periods=5))
         df['date'] = Timestamp('20130102')
         df['expected'] = df['date'] - df.index.to_series()
         df['result'] = df['date'] - df.index
@@ -4536,16 +4660,17 @@ def test_timedelta64_nan(self):

         # boolean setting
         # this doesn't work, not sure numpy even supports it
-        #result = td[(td>np.timedelta64(timedelta(days=3))) & (td<np.timedelta64(timedelta(days=7)))] = np.nan
-        #self.assertEqual(isnull(result).sum(), 7)
+        # result = td[(td>np.timedelta64(timedelta(days=3))) &
+        # td<np.timedelta64(timedelta(days=7)))] = np.nan
+        # self.assertEqual(isnull(result).sum(), 7)

-    # NumPy limitiation =(
+    # NumPy limitiation =(

-    # def test_logical_range_select(self):
-    #     np.random.seed(12345)
-    #     selector = -0.5 <= self.ts <= 0.5
-    #     expected = (self.ts >= -0.5) & (self.ts <= 0.5)
-    #     assert_series_equal(selector, expected)
+    # def test_logical_range_select(self):
+    #     np.random.seed(12345)
+    #     selector = -0.5 <= self.ts <= 0.5
+    #     expected = (self.ts >= -0.5) & (self.ts <= 0.5)
+    #     assert_series_equal(selector, expected)

     def test_operators_na_handling(self):
         from decimal import Decimal
@@ -4585,35 +4710,35 @@ def test_object_comparisons(self):
     def test_comparison_tuples(self):
         # GH11339
         # comparisons vs tuple
-        s = Series([(1,1),(1,2)])
+        s = Series([(1, 1), (1, 2)])

-        result = s == (1,2)
-        expected = Series([False,True])
+        result = s == (1, 2)
+        expected = Series([False, True])
         assert_series_equal(result, expected)

-        result = s != (1,2)
+        result = s != (1, 2)
         expected = Series([True, False])
         assert_series_equal(result, expected)

-        result = s == (0,0)
+        result = s == (0, 0)
         expected = Series([False, False])
         assert_series_equal(result, expected)

-        result = s != (0,0)
+        result = s != (0, 0)
         expected = Series([True, True])
         assert_series_equal(result, expected)

-        s = Series([(1,1),(1,1)])
+        s = Series([(1, 1), (1, 1)])

-        result = s == (1,1)
+        result = s == (1, 1)
         expected = Series([True, True])
         assert_series_equal(result, expected)

-        result = s != (1,1)
+        result = s != (1, 1)
         expected = Series([False, False])
         assert_series_equal(result, expected)

-        s = Series([frozenset([1]),frozenset([1,2])])
+        s = Series([frozenset([1]), frozenset([1, 2])])

         result = s == frozenset([1])
         expected = Series([True, False])
@@ -4645,7 +4770,7 @@ def test_comparison_operators_with_nas(self):
         #     expected = f(val, s.dropna()).reindex(s.index)
         #     assert_series_equal(result, expected)

-        # boolean &, |, ^ should work with object arrays and propagate NAs
+        # boolean &, |, ^ should work with object arrays and propagate NAs
         ops = ['and_', 'or_', 'xor']
         mask = s.isnull()
@@ -4679,13 +4804,13 @@ def test_comparison_invalid(self):
         s = Series(range(5))
         s2 = Series(date_range('20010101', periods=5))

-        for (x, y) in [(s,s2),(s2,s)]:
-            self.assertRaises(TypeError, lambda : x == y)
-            self.assertRaises(TypeError, lambda : x != y)
-            self.assertRaises(TypeError, lambda : x >= y)
-            self.assertRaises(TypeError, lambda : x > y)
-            self.assertRaises(TypeError, lambda : x < y)
-            self.assertRaises(TypeError, lambda : x <= y)
+        for (x, y) in [(s, s2), (s2, s)]:
+            self.assertRaises(TypeError, lambda: x == y)
+            self.assertRaises(TypeError, lambda: x != y)
+            self.assertRaises(TypeError, lambda: x >= y)
+            self.assertRaises(TypeError, lambda: x > y)
+            self.assertRaises(TypeError, lambda: x < y)
+            self.assertRaises(TypeError, lambda: x <= y)

     def test_more_na_comparisons(self):
         left = Series(['a', np.nan, 'c'])
@@ -4726,15 +4851,15 @@ def test_comparison_label_based(self):

         expected = Series([True, False, False], list('bca'))
         result = a & b
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         expected = Series([True, False, True], list('bca'))
         result = a | b
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         expected = Series([False, False, True], list('bca'))
         result = a ^ b
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         # rhs is bigger
         a = Series([True, False, True], list('bca'))
@@ -4742,66 +4867,67 @@ def test_comparison_label_based(self):

         expected = Series([True, False, False], list('bca'))
         result = a & b
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         expected = Series([True, False, True], list('bca'))
         result = a | b
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         # filling

         # vs empty
         result = a & Series([])
         expected = Series([False, False, False], list('bca'))
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         result = a | Series([])
         expected = Series([True, False, True], list('bca'))
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         # vs non-matching
-        result = a & Series([1],['z'])
+        result = a & Series([1], ['z'])
         expected = Series([False, False, False], list('bca'))
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

-        result = a | Series([1],['z'])
+        result = a | Series([1], ['z'])
         expected = Series([True, False, True], list('bca'))
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         # identity
         # we would like s[s|e] == s to hold for any e, whether empty or not
-        for e in [Series([]),Series([1],['z']),Series(['z']),Series(np.nan,b.index),Series(np.nan,a.index)]:
+        for e in [Series([]), Series([1], ['z']), Series(['z']),
+                  Series(np.nan, b.index), Series(np.nan, a.index)]:
             result = a[a | e]
-            assert_series_equal(result,a[a])
+            assert_series_equal(result, a[a])

         # vs scalars
         index = list('bca')
-        t = Series([True,False,True])
-
-        for v in [True,1,2]:
-            result = Series([True,False,True],index=index) | v
-            expected = Series([True,True,True],index=index)
-            assert_series_equal(result,expected)
-
-        for v in [np.nan,'foo']:
-            self.assertRaises(TypeError, lambda : t | v)
-
-        for v in [False,0]:
-            result = Series([True,False,True],index=index) | v
-            expected = Series([True,False,True],index=index)
-            assert_series_equal(result,expected)
-
-        for v in [True,1]:
-            result = Series([True,False,True],index=index) & v
-            expected = Series([True,False,True],index=index)
-            assert_series_equal(result,expected)
-
-        for v in [False,0]:
-            result = Series([True,False,True],index=index) & v
-            expected = Series([False,False,False],index=index)
-            assert_series_equal(result,expected)
+        t = Series([True, False, True])
+
+        for v in [True, 1, 2]:
+            result = Series([True, False, True], index=index) | v
+            expected = Series([True, True, True], index=index)
+            assert_series_equal(result, expected)
+
+        for v in [np.nan, 'foo']:
+            self.assertRaises(TypeError, lambda: t | v)
+
+        for v in [False, 0]:
+            result = Series([True, False, True], index=index) | v
+            expected = Series([True, False, True], index=index)
+            assert_series_equal(result, expected)
+
+        for v in [True, 1]:
+            result = Series([True, False, True], index=index) & v
+            expected = Series([True, False, True], index=index)
+            assert_series_equal(result, expected)
+
+        for v in [False, 0]:
+            result = Series([True, False, True], index=index) & v
+            expected = Series([False, False, False], index=index)
+            assert_series_equal(result, expected)

         for v in [np.nan]:
-            self.assertRaises(TypeError, lambda : t & v)
+            self.assertRaises(TypeError, lambda: t & v)

     def test_operators_bitwise(self):
         # GH 9016: support bitwise op for integer types
@@ -4811,8 +4937,11 @@ def test_operators_bitwise(self):
         s_fff = Series([False, False, False], index=index)
         s_tff = Series([True, False, False], index=index)
         s_empty = Series([])
-        s_0101 = Series([0,1,0,1])
-        s_0123 = Series(range(4),dtype='int64')
+
+        # TODO: unused
+        # s_0101 = Series([0, 1, 0, 1])
+
+        s_0123 = Series(range(4), dtype='int64')
         s_3333 = Series([3] * 4)
         s_4444 = Series([4] * 4)

@@ -4825,11 +4954,11 @@ def test_operators_bitwise(self):
         assert_series_equal(res, expected)

         res = s_0123 & s_3333
-        expected = Series(range(4),dtype='int64')
+        expected = Series(range(4), dtype='int64')
         assert_series_equal(res, expected)

         res = s_0123 | s_4444
-        expected = Series(range(4, 8),dtype='int64')
+        expected = Series(range(4, 8), dtype='int64')
         assert_series_equal(res, expected)

         s_a0b1c0 = Series([1], list('b'))
@@ -4860,7 +4989,7 @@ def test_operators_bitwise(self):
         expected = Series([0, 1, 0, 1])
         assert_series_equal(res, expected)

-        s_1111 = Series([1]*4, dtype='int8')
+        s_1111 = Series([1] * 4, dtype='int8')
         res = s_0123 & s_1111
         expected = Series([0, 1, 0, 1], dtype='int64')
         assert_series_equal(res, expected)
@@ -4870,7 +4999,7 @@ def test_operators_bitwise(self):
         assert_series_equal(res, expected)

         self.assertRaises(TypeError, lambda: s_1111 & 'a')
-        self.assertRaises(TypeError, lambda: s_1111 & ['a','b','c','d'])
+        self.assertRaises(TypeError, lambda: s_1111 & ['a', 'b', 'c', 'd'])
         self.assertRaises(TypeError, lambda: s_0123 & np.NaN)
         self.assertRaises(TypeError, lambda: s_0123 & 3.14)
         self.assertRaises(TypeError, lambda: s_0123 & [0.1, 4, 3.14, 2])
@@ -4883,12 +5012,13 @@ def test_operators_bitwise(self):
         assert_series_equal(s_0123 ^ False, Series([False, True, True, True]))
         assert_series_equal(s_0123 & [False], Series([False] * 4))
         assert_series_equal(s_0123 & (False), Series([False] * 4))
-        assert_series_equal(s_0123 & Series([False, np.NaN, False, False]), Series([False] * 4))
+        assert_series_equal(s_0123 & Series([False, np.NaN, False, False]),
+                            Series([False] * 4))

         s_ftft = Series([False, True, False, True])
         assert_series_equal(s_0123 & Series([0.1, 4, -3.14, 2]), s_ftft)

-        s_abNd = Series(['a','b',np.NaN,'d'])
+        s_abNd = Series(['a', 'b', np.NaN, 'd'])
         res = s_0123 & s_abNd
         expected = s_ftft
         assert_series_equal(res, expected)
@@ -4918,7 +5048,8 @@ def test_setitem_na(self):
         s[::2] = np.nan
         assert_series_equal(s, expected)

-        expected = Series([np.nan, np.nan, np.nan, np.nan, np.nan, 5, 6, 7, 8, 9])
+        expected = Series([np.nan, np.nan, np.nan, np.nan, np.nan, 5, 6, 7, 8,
+                           9])
         s = Series(np.arange(10))
         s[:5] = np.nan
         assert_series_equal(s, expected)
@@ -4934,7 +5065,7 @@ def tester(a, b):
         s = Series([2, 3, 4, 5, 6, 7, 8, 9, datetime(2005, 1, 1)])
         s[::2] = np.nan

-        expected = Series(True,index=s.index)
+        expected = Series(True, index=s.index)
         expected[::2] = False
         assert_series_equal(tester(s, list(s)), expected)

@@ -5007,7 +5138,7 @@ def test_idxmax(self):

         # Float64Index
         # GH 5914
-        s = pd.Series([1,2,3],[1.1,2.1,3.1])
+        s = pd.Series([1, 2, 3], [1.1, 2.1, 3.1])
         result = s.idxmax()
         self.assertEqual(result, 3.1)
         result = s.idxmin()
@@ -5027,9 +5158,10 @@ def test_ndarray_compat(self):

         def f(x):
             return x[x.argmax()]
+
         result = tsdf.apply(f)
         expected = tsdf.max()
-        assert_series_equal(result,expected)
+        assert_series_equal(result, expected)

         # .item()
         s = Series([1])
@@ -5040,12 +5172,12 @@ def f(x):
         # using an ndarray like function
         s = Series(np.random.randn(10))
         result = np.ones_like(s)
-        expected = Series(1,index=range(10),dtype='float64')
-        #assert_series_equal(result,expected)
+        expected = Series(1, index=range(10), dtype='float64')
+        # assert_series_equal(result,expected)

         # ravel
         s = Series(np.random.randn(10))
-        tm.assert_almost_equal(s.ravel(order='F'),s.values.ravel(order='F'))
+        tm.assert_almost_equal(s.ravel(order='F'), s.values.ravel(order='F'))

         # compress
         # GH 6658
@@ -5072,47 +5204,53 @@ def test_complexx(self):
         # GH4819
         # complex access for ndarray compat
         a = np.arange(5)
-        b = Series(a + 4j*a)
-        tm.assert_almost_equal(a,b.real)
-        tm.assert_almost_equal(4*a,b.imag)
+        b = Series(a + 4j * a)
+        tm.assert_almost_equal(a, b.real)
+        tm.assert_almost_equal(4 * a, b.imag)

-        b.real = np.arange(5)+5
-        tm.assert_almost_equal(a+5,b.real)
-        tm.assert_almost_equal(4*a,b.imag)
+        b.real = np.arange(5) + 5
+        tm.assert_almost_equal(a + 5, b.real)
+        tm.assert_almost_equal(4 * a, b.imag)

     def test_underlying_data_conversion(self):

         # GH 4080
-        df = DataFrame(dict((c, [1,2,3]) for c in ['a', 'b', 'c']))
+        df = DataFrame(dict((c, [1, 2, 3]) for c in ['a', 'b', 'c']))
         df.set_index(['a', 'b', 'c'], inplace=True)
-        s = Series([1], index=[(2,2,2)])
+        s = Series([1], index=[(2, 2, 2)])
         df['val'] = 0
         df
         df['val'].update(s)

-        expected = DataFrame(dict(a = [1,2,3], b = [1,2,3], c = [1,2,3], val = [0,1,0]))
+        expected = DataFrame(
+            dict(a=[1, 2, 3], b=[1, 2, 3], c=[1, 2, 3], val=[0, 1, 0]))
         expected.set_index(['a', 'b', 'c'], inplace=True)
-        tm.assert_frame_equal(df,expected)
+        tm.assert_frame_equal(df, expected)

         # GH 3970
         # these are chained assignments as well
-        pd.set_option('chained_assignment',None)
-        df = DataFrame({ "aa":range(5), "bb":[2.2]*5})
+        pd.set_option('chained_assignment', None)
+        df = DataFrame({"aa": range(5), "bb": [2.2] * 5})
         df["cc"] = 0.0
-        ck = [True]*len(df)
+
+        ck = [True] * len(df)
+
         df["bb"].iloc[0] = .13
-        df_tmp = df.iloc[ck]
+
+        # TODO: unused
+        df_tmp = df.iloc[ck]  # noqa
+
         df["bb"].iloc[0] = .15
         self.assertEqual(df['bb'].iloc[0], 0.15)
-        pd.set_option('chained_assignment','raise')
+        pd.set_option('chained_assignment', 'raise')

         # GH 3217
-        df = DataFrame(dict(a = [1,3], b = [np.nan, 2]))
+        df = DataFrame(dict(a=[1, 3], b=[np.nan, 2]))
         df['c'] = np.nan
-        df['c'].update(pd.Series(['foo'],index=[0]))
+        df['c'].update(pd.Series(['foo'], index=[0]))

-        expected = DataFrame(dict(a = [1,3], b = [np.nan, 2], c = ['foo',np.nan]))
-        tm.assert_frame_equal(df,expected)
+        expected = DataFrame(dict(a=[1, 3], b=[np.nan, 2], c=['foo', np.nan]))
+        tm.assert_frame_equal(df, expected)

     def test_operators_corner(self):
         series = self.ts
@@ -5139,8 +5277,7 @@ def test_operators_corner(self):

     def test_operators_reverse_object(self):
         # GH 56
-        arr = Series(np.random.randn(10), index=np.arange(10),
-                     dtype=object)
+        arr = Series(np.random.randn(10), index=np.arange(10), dtype=object)

         def _check_op(arr, op):
             result = op(1., arr)
@@ -5223,7 +5360,8 @@ def _check_fill(meth, op, a, b, fill_value=0):

         if compat.PY3:
             pairings.append((Series.div, operator.truediv, 1))
-            pairings.append((Series.rdiv, lambda x, y: operator.truediv(y, x), 1))
+            pairings.append((Series.rdiv, lambda x, y: operator.truediv(y, x),
+                             1))
         else:
             pairings.append((Series.div, operator.div, 1))
             pairings.append((Series.rdiv, lambda x, y: operator.div(y, x), 1))
@@ -5340,12 +5478,12 @@ def test_corr_rank(self):
                                 "{0}".format(scipy.__version__))

         # results from R
-        A = Series([-0.89926396, 0.94209606, -1.03289164, -0.95445587,
-                    0.76910310, -0.06430576, -2.09704447, 0.40660407,
-                    -0.89926396, 0.94209606])
-        B = Series([-1.01270225, -0.62210117, -1.56895827, 0.59592943,
-                    -0.01680292, 1.17258718, -1.06009347, -0.10222060,
-                    -0.89076239, 0.89372375])
+        A = Series(
+            [-0.89926396, 0.94209606, -1.03289164, -0.95445587, 0.76910310, -
+             0.06430576, -2.09704447, 0.40660407, -0.89926396, 0.94209606])
+        B = Series(
+            [-1.01270225, -0.62210117, -1.56895827, 0.59592943, -0.01680292,
+             1.17258718, -1.06009347, -0.10222060, -0.89076239, 0.89372375])
         kexp = 0.4319297
         sexp = 0.5853767
         self.assertAlmostEqual(A.corr(B, method='kendall'), kexp)
@@ -5356,8 +5494,8 @@ def test_cov(self):
         self.assertAlmostEqual(self.ts.cov(self.ts), self.ts.std() ** 2)

         # partial overlap
-        self.assertAlmostEqual(
-            self.ts[:15].cov(self.ts[5:]), self.ts[5:15].std() ** 2)
+        self.assertAlmostEqual(self.ts[:15].cov(self.ts[5:]),
+                               self.ts[5:15].std() ** 2)

         # No overlap
         self.assertTrue(np.isnan(self.ts[::2].cov(self.ts[1::2])))
@@ -5377,7 +5515,7 @@ def test_cov(self):
     def test_copy(self):

         for deep in [None, False, True]:
-            s = Series(np.arange(10),dtype='float64')
+            s = Series(np.arange(10), dtype='float64')

             # default deep is True
             if deep is None:
@@ -5443,8 +5581,9 @@ def test_dtype(self):
         self.assertEqual(self.ts.dtypes, np.dtype('float64'))
         self.assertEqual(self.ts.ftype, 'float64:dense')
         self.assertEqual(self.ts.ftypes, 'float64:dense')
-        assert_series_equal(self.ts.get_dtype_counts(),Series(1,['float64']))
-        assert_series_equal(self.ts.get_ftype_counts(),Series(1,['float64:dense']))
+        assert_series_equal(self.ts.get_dtype_counts(), Series(1, ['float64']))
+        assert_series_equal(self.ts.get_ftype_counts(), Series(
+            1, ['float64:dense']))

     def test_dot(self):
         a = Series(np.random.randn(4), index=['p', 'q', 'r', 's'])
@@ -5452,8 +5591,7 @@ def test_dot(self):
                       columns=['p', 'q', 'r', 's']).T

         result = a.dot(b)
-        expected = Series(np.dot(a.values, b.values),
-                          index=['1', '2', '3'])
+        expected = Series(np.dot(a.values, b.values), index=['1', '2', '3'])
         assert_series_equal(result, expected)

         # Check index alignment
@@ -5478,7 +5616,7 @@ def test_value_counts_nunique(self):
         # basics.rst doc example
         series = Series(np.random.randn(500))
         series[20:500] = np.nan
-        series[10:20]  = 5000
+        series[10:20] = 5000
         result = series.nunique()
         self.assertEqual(result, 11)
@@ -5518,8 +5656,8 @@ def test_dropna_empty(self):

     def test_datetime64_tz_dropna(self):
         # DatetimeBlock
-        s = Series([Timestamp('2011-01-01 10:00'), pd.NaT,
-                    Timestamp('2011-01-03 10:00'), pd.NaT])
+        s = Series([Timestamp('2011-01-01 10:00'), pd.NaT, Timestamp(
+            '2011-01-03 10:00'), pd.NaT])
         result = s.dropna()
         expected = Series([Timestamp('2011-01-01 10:00'),
                            Timestamp('2011-01-03 10:00')], index=[0, 2])
@@ -5539,8 +5677,8 @@ def test_datetime64_tz_dropna(self):
         self.assert_series_equal(result, expected)

     def test_dropna_no_nan(self):
-        for s in [Series([1, 2, 3], name='x'),
-                  Series([False, True, False], name='x')]:
+        for s in [Series([1, 2, 3], name='x'), Series(
+                [False, True, False], name='x')]:

             result = s.dropna()
             self.assert_series_equal(result, s)
@@ -5578,7 +5716,8 @@ def test_drop_duplicates(self):
         with tm.assert_produces_warning(FutureWarning):
             assert_series_equal(s.duplicated(take_last=True), expected)
         with tm.assert_produces_warning(FutureWarning):
-            assert_series_equal(s.drop_duplicates(take_last=True), s[~expected])
+            assert_series_equal(
+                s.drop_duplicates(take_last=True), s[~expected])
         sc = s.copy()
         with tm.assert_produces_warning(FutureWarning):
             sc.drop_duplicates(take_last=True, inplace=True)
@@ -5611,7 +5750,8 @@ def test_drop_duplicates(self):
         with tm.assert_produces_warning(FutureWarning):
             assert_series_equal(s.duplicated(take_last=True), expected)
         with tm.assert_produces_warning(FutureWarning):
-            assert_series_equal(s.drop_duplicates(take_last=True), s[~expected])
+            assert_series_equal(
+                s.drop_duplicates(take_last=True), s[~expected])
         sc = s.copy()
         with tm.assert_produces_warning(FutureWarning):
             sc.drop_duplicates(take_last=True, inplace=True)
@@ -5637,15 +5777,17 @@ def test_sort_values(self):

         ts.sort_values(ascending=False, inplace=True)
         self.assert_numpy_array_equal(ts, self.ts.sort_values(ascending=False))
-        self.assert_numpy_array_equal(ts.index,
-                                      self.ts.sort_values(ascending=False).index)
+        self.assert_numpy_array_equal(ts.index, self.ts.sort_values(
+            ascending=False).index)

         # GH 5856/5853
         # Series.sort_values operating on a view
-        df = DataFrame(np.random.randn(10,4))
-        s = df.iloc[:,0]
+        df = DataFrame(np.random.randn(10, 4))
+        s = df.iloc[:, 0]
+
         def f():
             s.sort_values(inplace=True)
+
         self.assertRaises(ValueError, f)

         # test order/sort inplace
@@ -5654,13 +5796,13 @@ def f():
         ts1.sort_values(ascending=False, inplace=True)
         ts2 = self.ts.copy()
         ts2.sort_values(ascending=False, inplace=True)
-        assert_series_equal(ts1,ts2)
+        assert_series_equal(ts1, ts2)

         ts1 = self.ts.copy()
         ts1 = ts1.sort_values(ascending=False, inplace=False)
         ts2 = self.ts.copy()
         ts2 = ts.sort_values(ascending=False)
-        assert_series_equal(ts1,ts2)
+        assert_series_equal(ts1, ts2)

     def test_sort_index(self):
         rindex = list(self.ts.index)
@@ -5686,8 +5828,7 @@ def test_sort_index_inplace(self):
         result = random_order.sort_index(ascending=False, inplace=True)
         self.assertIs(result, None,
                       msg='sort_index() inplace should return None')
-        assert_series_equal(random_order,
-                            self.ts.reindex(self.ts.index[::-1]))
+        assert_series_equal(random_order, self.ts.reindex(self.ts.index[::-1]))

         # ascending
         random_order = self.ts.reindex(rindex)
@@ -5720,12 +5861,13 @@ def test_sort_API(self):
         sorted_series = random_order.sort_index(axis=0)
         assert_series_equal(sorted_series, self.ts)

-        self.assertRaises(ValueError, lambda : random_order.sort_values(axis=1))
+        self.assertRaises(ValueError, lambda: random_order.sort_values(axis=1))

         sorted_series = random_order.sort_index(level=0, axis=0)
         assert_series_equal(sorted_series, self.ts)

-        self.assertRaises(ValueError, lambda : random_order.sort_index(level=0, axis=1))
+        self.assertRaises(ValueError,
+                          lambda: random_order.sort_index(level=0, axis=1))

     def test_order(self):

@@ -5801,13 +5943,15 @@ def test_nsmallest_nlargest(self):
         assert_series_equal(s.nsmallest(2, keep='last'), s.iloc[[2, 3]])
         with tm.assert_produces_warning(FutureWarning):
-            assert_series_equal(s.nsmallest(2, take_last=True), s.iloc[[2, 3]])
+            assert_series_equal(
+                s.nsmallest(2, take_last=True), s.iloc[[2, 3]])

         assert_series_equal(s.nlargest(3), s.iloc[[4, 0, 1]])

         assert_series_equal(s.nlargest(3, keep='last'), s.iloc[[4, 0, 3]])
         with tm.assert_produces_warning(FutureWarning):
-            assert_series_equal(s.nlargest(3, take_last=True), s.iloc[[4, 0, 3]])
+            assert_series_equal(
+                s.nlargest(3, take_last=True), s.iloc[[4, 0, 3]])

         empty = s.iloc[0:0]
         assert_series_equal(s.nsmallest(0), empty)
@@ -5847,7 +5991,7 @@ def test_rank(self):
         filled = self.ts.fillna(np.inf)

         # rankdata returns a ndarray
-        exp = Series(rankdata(filled),index=filled.index)
+        exp = Series(rankdata(filled), index=filled.index)
         exp[mask] = np.nan
         assert_almost_equal(ranks, exp)

@@ -5870,7 +6014,7 @@ def test_rank(self):

         iseries[1] = np.nan
         exp = Series(np.repeat(50.0 / 99.0, 100))
-        exp[1]  = np.nan
+        exp[1] = np.nan
         iranks = iseries.rank(pct=True)
         assert_series_equal(iranks, exp)

@@ -5898,12 +6042,14 @@ def test_rank(self):
         iranks = iseries.rank(pct=True)
         assert_series_equal(iranks, exp)

-        iseries = Series([1e-50, 1e-100, 1e-20, 1e-2, 1e-20+1e-30, 1e-1])
+        iseries = Series([1e-50, 1e-100, 1e-20, 1e-2, 1e-20 + 1e-30, 1e-1])
         exp = Series([2, 1, 3, 5, 4, 6.0])
         iranks = iseries.rank()
         assert_series_equal(iranks, exp)

-        values = np.array([-50, -1, -1e-20, -1e-25, -1e-50, 0, 1e-40, 1e-20, 1e-10, 2, 40], dtype='float64')
+        values = np.array(
+            [-50, -1, -1e-20, -1e-25, -1e-50, 0, 1e-40, 1e-20, 1e-10, 2, 40
+             ], dtype='float64')
         random_order = np.random.permutation(len(values))
         iseries = Series(values[random_order])
         exp = Series(random_order + 1.0, dtype='float64')
@@ -5911,16 +6057,18 @@ def test_rank(self):
         assert_series_equal(iranks, exp)

     def test_rank_inf(self):
-        raise nose.SkipTest('DataFrame.rank does not currently rank np.inf and -np.inf properly')
+        raise nose.SkipTest('DataFrame.rank does not currently rank '
+                            'np.inf and -np.inf properly')

-        values = np.array([-np.inf,
-50, -1, -1e-20, -1e-25, -1e-50, 0, 1e-40, 1e-20, 1e-10, 2, 40, np.inf], dtype='float64') + values = np.array( + [-np.inf, -50, -1, -1e-20, -1e-25, -1e-50, 0, 1e-40, 1e-20, 1e-10, + 2, 40, np.inf], dtype='float64') random_order = np.random.permutation(len(values)) iseries = Series(values[random_order]) exp = Series(random_order + 1.0, dtype='float64') iranks = iseries.rank() assert_series_equal(iranks, exp) - def test_from_csv(self): with ensure_clean() as path: @@ -5951,8 +6099,8 @@ def test_from_csv(self): outfile.write('1998-01-01|1.0\n1999-01-01|2.0') outfile.close() series = Series.from_csv(path, sep='|') - checkseries = Series( - {datetime(1998, 1, 1): 1.0, datetime(1999, 1, 1): 2.0}) + checkseries = Series({datetime(1998, 1, 1): 1.0, + datetime(1999, 1, 1): 2.0}) assert_series_equal(checkseries, series) series = Series.from_csv(path, sep='|', parse_dates=False) @@ -5966,7 +6114,7 @@ def test_to_csv(self): self.ts.to_csv(path) lines = io.open(path, newline=None).readlines() - assert(lines[1] != '\n') + assert (lines[1] != '\n') self.ts.to_csv(path, index=False) arr = np.loadtxt(path) @@ -6005,7 +6153,8 @@ def test_to_frame(self): assert_frame_equal(rs, xp) rs = self.ts.to_frame(name='testdifferent') - xp = pd.DataFrame(dict(testdifferent=self.ts.values), index=self.ts.index) + xp = pd.DataFrame( + dict(testdifferent=self.ts.values), index=self.ts.index) assert_frame_equal(rs, xp) def test_to_dict(self): @@ -6066,9 +6215,9 @@ def test_clip(self): def test_clip_types_and_nulls(self): - sers = [Series([np.nan, 1.0, 2.0, 3.0]), - Series([None, 'a', 'b', 'c']), - Series(pd.to_datetime([np.nan, 1, 2, 3], unit='D'))] + sers = [Series([np.nan, 1.0, 2.0, 3.0]), Series([None, 'a', 'b', 'c']), + Series(pd.to_datetime( + [np.nan, 1, 2, 3], unit='D'))] for s in sers: thresh = s[2] @@ -6093,22 +6242,25 @@ def test_clip_against_series(self): assert_series_equal(s.clip(lower, upper), Series([1.0, 2.0, 3.5])) assert_series_equal(s.clip(1.5, upper), Series([1.5, 1.5, 3.5])) - 
def test_clip_with_datetimes(self): # GH 11838 # naive and tz-aware datetimes t = Timestamp('2015-12-01 09:30:30') - s = Series([ Timestamp('2015-12-01 09:30:00'), Timestamp('2015-12-01 09:31:00') ]) + s = Series([Timestamp('2015-12-01 09:30:00'), Timestamp( + '2015-12-01 09:31:00')]) result = s.clip(upper=t) - expected = Series([ Timestamp('2015-12-01 09:30:00'), Timestamp('2015-12-01 09:30:30') ]) + expected = Series([Timestamp('2015-12-01 09:30:00'), Timestamp( + '2015-12-01 09:30:30')]) assert_series_equal(result, expected) t = Timestamp('2015-12-01 09:30:30', tz='US/Eastern') - s = Series([ Timestamp('2015-12-01 09:30:00', tz='US/Eastern'), Timestamp('2015-12-01 09:31:00', tz='US/Eastern') ]) + s = Series([Timestamp('2015-12-01 09:30:00', tz='US/Eastern'), + Timestamp('2015-12-01 09:31:00', tz='US/Eastern')]) result = s.clip(upper=t) - expected = Series([ Timestamp('2015-12-01 09:30:00', tz='US/Eastern'), Timestamp('2015-12-01 09:30:30', tz='US/Eastern') ]) + expected = Series([Timestamp('2015-12-01 09:30:00', tz='US/Eastern'), + Timestamp('2015-12-01 09:30:30', tz='US/Eastern')]) assert_series_equal(result, expected) def test_valid(self): @@ -6122,15 +6274,15 @@ def test_valid(self): def test_isnull(self): ser = Series([0, 5.4, 3, nan, -0.001]) - np.array_equal( - ser.isnull(), Series([False, False, False, True, False]).values) + np.array_equal(ser.isnull(), + Series([False, False, False, True, False]).values) ser = Series(["hi", "", nan]) np.array_equal(ser.isnull(), Series([False, False, True]).values) def test_notnull(self): ser = Series([0, 5.4, 3, nan, -0.001]) - np.array_equal( - ser.notnull(), Series([True, True, True, False, True]).values) + np.array_equal(ser.notnull(), + Series([True, True, True, False, True]).values) ser = Series(["hi", "", nan]) np.array_equal(ser.notnull(), Series([True, True, False]).values) @@ -6180,23 +6332,27 @@ def test_shift(self): # 32-bit taking # GH 8129 - index=date_range('2000-01-01',periods=5) - for dtype in 
['int32','int64']: - s1 = Series(np.arange(5,dtype=dtype),index=index) + index = date_range('2000-01-01', periods=5) + for dtype in ['int32', 'int64']: + s1 = Series(np.arange(5, dtype=dtype), index=index) p = s1.iloc[1] result = s1.shift(periods=p) - expected = Series([np.nan,0,1,2,3],index=index) - assert_series_equal(result,expected) + expected = Series([np.nan, 0, 1, 2, 3], index=index) + assert_series_equal(result, expected) # xref 8260 # with tz - s = Series(date_range('2000-01-01 09:00:00',periods=5,tz='US/Eastern'),name='foo') - result = s-s.shift() - assert_series_equal(result,Series(TimedeltaIndex(['NaT'] + ['1 days']*4),name='foo')) + s = Series( + date_range('2000-01-01 09:00:00', periods=5, + tz='US/Eastern'), name='foo') + result = s - s.shift() + assert_series_equal(result, Series( + TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo')) # incompat tz - s2 = Series(date_range('2000-01-01 09:00:00',periods=5,tz='CET'),name='foo') - self.assertRaises(ValueError, lambda : s-s2) + s2 = Series( + date_range('2000-01-01 09:00:00', periods=5, tz='CET'), name='foo') + self.assertRaises(ValueError, lambda: s - s2) def test_tshift(self): # PeriodIndex @@ -6299,10 +6455,10 @@ def test_truncate(self): # corner case, empty series returned truncated = ts.truncate(after=self.ts.index[0] - offset) - assert(len(truncated) == 0) + assert (len(truncated) == 0) truncated = ts.truncate(before=self.ts.index[-1] + offset) - assert(len(truncated) == 0) + assert (len(truncated) == 0) self.assertRaises(ValueError, ts.truncate, before=self.ts.index[-1] + offset, @@ -6319,7 +6475,7 @@ def test_ptp(self): self.assertEqual(s.ptp(), 13) self.assertTrue(pd.isnull(s.ptp(skipna=False))) - mi = pd.MultiIndex.from_product([['a','b'], [1,2,3]]) + mi = pd.MultiIndex.from_product([['a', 'b'], [1, 2, 3]]) s = pd.Series([1, np.nan, 7, 3, 5, np.nan], index=mi) expected = pd.Series([6, 2], index=['a', 'b'], dtype=np.float64) @@ -6338,7 +6494,6 @@ def test_ptp(self): with 
self.assertRaises(NotImplementedError): s.ptp(numeric_only=True) - def test_asof(self): # array or list or dates N = 50 @@ -6498,13 +6653,13 @@ def test_getitem_setitem_datetime_tz_pytz(self): result[date] = ts[4] assert_series_equal(result, ts) - def test_getitem_setitem_datetime_tz_dateutil(self): tm._skip_if_no_dateutil() from dateutil.tz import tzutc from pandas.tslib import _dateutil_gettz as gettz - tz = lambda x: tzutc() if x == 'UTC' else gettz(x) # handle special case for utc in dateutil + tz = lambda x: tzutc() if x == 'UTC' else gettz( + x) # handle special case for utc in dateutil from pandas import date_range N = 50 @@ -6653,23 +6808,23 @@ def test_cast_on_putmask(self): def test_type_promote_putmask(self): # GH8387: test that changing types does not break alignment - ts = Series(np.random.randn(100), index=np.arange(100,0,-1)).round(5) + ts = Series(np.random.randn(100), index=np.arange(100, 0, -1)).round(5) left, mask = ts.copy(), ts > 0 right = ts[mask].copy().map(str) left[mask] = right assert_series_equal(left, ts.map(lambda t: str(t) if t > 0 else t)) - s = Series([0, 1, 2, 0 ]) + s = Series([0, 1, 2, 0]) mask = s > 0 - s2 = s[ mask ].map( str ) + s2 = s[mask].map(str) s[mask] = s2 assert_series_equal(s, Series([0, '1', '2', 0])) - s = Series([0, 'foo', 'bar', 0 ]) + s = Series([0, 'foo', 'bar', 0]) mask = Series([False, True, True, False]) - s2 = s[ mask ] + s2 = s[mask] s[mask] = s2 - assert_series_equal(s, Series([0, 'foo','bar', 0])) + assert_series_equal(s, Series([0, 'foo', 'bar', 0])) def test_astype_cast_nan_int(self): df = Series([1.0, 2.0, 3.0, np.nan]) @@ -6706,8 +6861,7 @@ def test_astype_datetimes(self): def test_astype_str(self): # GH4405 digits = string.digits - s1 = Series([digits * 10, tm.rands(63), tm.rands(64), - tm.rands(1000)]) + s1 = Series([digits * 10, tm.rands(63), tm.rands(64), tm.rands(1000)]) s2 = Series([digits * 10, tm.rands(63), tm.rands(64), nan, 1.0]) types = (compat.text_type, np.str_) for typ in types: @@ 
-6747,23 +6901,22 @@ def test_astype_unicode(self): former_encoding = None if not compat.PY3: - # in python we can force the default encoding - # for this test + # in python we can force the default encoding for this test former_encoding = sys.getdefaultencoding() - reload(sys) + reload(sys) # noqa sys.setdefaultencoding("utf-8") if sys.getdefaultencoding() == "utf-8": - test_series.append(Series([u('野菜食べないとやばい').encode("utf-8")])) + test_series.append(Series([u('野菜食べないとやばい') + .encode("utf-8")])) for s in test_series: res = s.astype("unicode") expec = s.map(compat.text_type) assert_series_equal(res, expec) # restore the former encoding if former_encoding is not None and former_encoding != "utf-8": - reload(sys) + reload(sys) # noqa sys.setdefaultencoding(former_encoding) - def test_map(self): index, data = tm.getMixedTypeDict() @@ -6796,7 +6949,8 @@ def test_map(self): self.assert_series_equal(a.map(c), exp) a = Series(['a', 'b', 'c', 'd']) - b = Series([1, 2, 3, 4], index=pd.CategoricalIndex(['b', 'c', 'd', 'e'])) + b = Series([1, 2, 3, 4], + index=pd.CategoricalIndex(['b', 'c', 'd', 'e'])) c = Series([1, 2, 3, 4], index=Index(['b', 'c', 'd', 'e'])) exp = Series([np.nan, 1, 2, 3]) @@ -6816,10 +6970,10 @@ def test_map(self): def test_map_compat(self): # related GH 8024 - s = Series([True,True,False],index=[1,2,3]) - result = s.map({ True : 'foo', False : 'bar' }) - expected = Series(['foo','foo','bar'],index=[1,2,3]) - assert_series_equal(result,expected) + s = Series([True, True, False], index=[1, 2, 3]) + result = s.map({True: 'foo', False: 'bar'}) + expected = Series(['foo', 'foo', 'bar'], index=[1, 2, 3]) + assert_series_equal(result, expected) def test_map_int(self): left = Series({'a': 1., 'b': 2., 'c': 3., 'd': 4}) @@ -6844,13 +6998,13 @@ def test_divide_decimal(self): expected = Series([Decimal(5)]) - s = Series([Decimal(10)]) - s = s/Decimal(2) + s = Series([Decimal(10)]) + s = s / Decimal(2) tm.assert_series_equal(expected, s) - s = Series([Decimal(10)]) 
- s = s//Decimal(2) + s = Series([Decimal(10)]) + s = s // Decimal(2) tm.assert_series_equal(expected, s) @@ -6875,17 +7029,13 @@ def test_map_dict_with_tuple_keys(self): converted to a multi-index, preventing tuple values from being mapped properly. ''' - df = pd.DataFrame({'a': [(1,), (2,), (3, 4), (5, 6)]}) - label_mappings = { - (1,): 'A', - (2,): 'B', - (3, 4): 'A', - (5, 6): 'B' - } + df = pd.DataFrame({'a': [(1, ), (2, ), (3, 4), (5, 6)]}) + label_mappings = {(1, ): 'A', (2, ): 'B', (3, 4): 'A', (5, 6): 'B'} df['labels'] = df['a'].map(label_mappings) df['expected_labels'] = pd.Series(['A', 'B', 'A', 'B'], index=df.index) # All labels should be filled now - tm.assert_series_equal(df['labels'], df['expected_labels'], check_names=False) + tm.assert_series_equal(df['labels'], df['expected_labels'], + check_names=False) def test_apply(self): assert_series_equal(self.ts.apply(np.sqrt), np.sqrt(self.ts)) @@ -6895,8 +7045,8 @@ def test_apply(self): assert_series_equal(self.ts.apply(math.exp), np.exp(self.ts)) # how to handle Series result, #2316 - result = self.ts.apply(lambda x: Series([x, x ** 2], - index=['x', 'x^2'])) + result = self.ts.apply(lambda x: Series( + [x, x ** 2], index=['x', 'x^2'])) expected = DataFrame({'x': self.ts, 'x^2': self.ts ** 2}) tm.assert_frame_equal(result, expected) @@ -6939,20 +7089,23 @@ def test_convert_objects(self): s = Series([1., 2, 3], index=['a', 'b', 'c']) with tm.assert_produces_warning(FutureWarning): - result = s.convert_objects(convert_dates=False, convert_numeric=True) + result = s.convert_objects(convert_dates=False, + convert_numeric=True) assert_series_equal(result, s) # force numeric conversion r = s.copy().astype('O') r['a'] = '1' with tm.assert_produces_warning(FutureWarning): - result = r.convert_objects(convert_dates=False, convert_numeric=True) + result = r.convert_objects(convert_dates=False, + convert_numeric=True) assert_series_equal(result, s) r = s.copy().astype('O') r['a'] = '1.' 
with tm.assert_produces_warning(FutureWarning): - result = r.convert_objects(convert_dates=False, convert_numeric=True) + result = r.convert_objects(convert_dates=False, + convert_numeric=True) assert_series_equal(result, s) r = s.copy().astype('O') @@ -6960,7 +7113,8 @@ def test_convert_objects(self): expected = s.copy() expected['a'] = np.nan with tm.assert_produces_warning(FutureWarning): - result = r.convert_objects(convert_dates=False, convert_numeric=True) + result = r.convert_objects(convert_dates=False, + convert_numeric=True) assert_series_equal(result, expected) # GH 4119, not converting a mixed type (e.g.floats and object) @@ -6977,14 +7131,17 @@ def test_convert_objects(self): assert_series_equal(result, expected) # dates - s = Series( - [datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), datetime(2001, 1, 3, 0, 0)]) - s2 = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), datetime( - 2001, 1, 3, 0, 0), 'foo', 1.0, 1, Timestamp('20010104'), '20010105'], dtype='O') + s = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), + datetime(2001, 1, 3, 0, 0)]) + s2 = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), + datetime(2001, 1, 3, 0, 0), 'foo', 1.0, 1, + Timestamp('20010104'), '20010105'], + dtype='O') with tm.assert_produces_warning(FutureWarning): - result = s.convert_objects(convert_dates=True, convert_numeric=False) - expected = Series( - [Timestamp('20010101'), Timestamp('20010102'), Timestamp('20010103')], dtype='M8[ns]') + result = s.convert_objects(convert_dates=True, + convert_numeric=False) + expected = Series([Timestamp('20010101'), Timestamp('20010102'), + Timestamp('20010103')], dtype='M8[ns]') assert_series_equal(result, expected) with tm.assert_produces_warning(FutureWarning): @@ -6995,10 +7152,10 @@ def test_convert_objects(self): convert_numeric=True) assert_series_equal(result, expected) - expected = Series( - [Timestamp( - '20010101'), Timestamp('20010102'), Timestamp('20010103'), - lib.NaT, 
lib.NaT, lib.NaT, Timestamp('20010104'), Timestamp('20010105')], dtype='M8[ns]') + expected = Series([Timestamp('20010101'), Timestamp('20010102'), + Timestamp('20010103'), + lib.NaT, lib.NaT, lib.NaT, Timestamp('20010104'), + Timestamp('20010105')], dtype='M8[ns]') with tm.assert_produces_warning(FutureWarning): result = s2.convert_objects(convert_dates='coerce', convert_numeric=False) @@ -7022,10 +7179,10 @@ def test_convert_objects(self): convert_numeric=False) assert_series_equal(result, s) - #r = s.copy() - #r[0] = np.nan - #result = r.convert_objects(convert_dates=True,convert_numeric=False) - #self.assertEqual(result.dtype, 'M8[ns]') + # r = s.copy() + # r[0] = np.nan + # result = r.convert_objects(convert_dates=True,convert_numeric=False) + # self.assertEqual(result.dtype, 'M8[ns]') # dateutil parses some single letters into today's value as a date for x in 'abcdefghijklmnopqrstuvwxyz': @@ -7097,19 +7254,20 @@ def test_convert(self): assert_series_equal(results, s) # test pass-through and non-conversion when other types selected - s = Series(['1.0','2.0','3.0']) + s = Series(['1.0', '2.0', '3.0']) results = s._convert(datetime=True, numeric=True, timedelta=True) - expected = Series([1.0,2.0,3.0]) + expected = Series([1.0, 2.0, 3.0]) assert_series_equal(results, expected) - results = s._convert(True,False,True) + results = s._convert(True, False, True) assert_series_equal(results, s) - s = Series([datetime(2001, 1, 1, 0, 0),datetime(2001, 1, 1, 0, 0)], + s = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)], dtype='O') results = s._convert(datetime=True, numeric=True, timedelta=True) - expected = Series([datetime(2001, 1, 1, 0, 0),datetime(2001, 1, 1, 0, 0)]) + expected = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, + 0)]) assert_series_equal(results, expected) - results = s._convert(datetime=False,numeric=True,timedelta=True) + results = s._convert(datetime=False, numeric=True, timedelta=True) assert_series_equal(results, s) td 
= datetime(2001, 1, 1, 0, 0) - datetime(2000, 1, 1, 0, 0) @@ -7117,10 +7275,9 @@ def test_convert(self): results = s._convert(datetime=True, numeric=True, timedelta=True) expected = Series([td, td]) assert_series_equal(results, expected) - results = s._convert(True,True,False) + results = s._convert(True, True, False) assert_series_equal(results, s) - s = Series([1., 2, 3], index=['a', 'b', 'c']) result = s._convert(numeric=True) assert_series_equal(result, s) @@ -7154,34 +7311,33 @@ def test_convert(self): assert_series_equal(result, expected) # dates - s = Series( - [datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), datetime(2001, 1, 3, 0, 0)]) - s2 = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), datetime( - 2001, 1, 3, 0, 0), 'foo', 1.0, 1, Timestamp('20010104'), '20010105'], dtype='O') + s = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), + datetime(2001, 1, 3, 0, 0)]) + s2 = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 2, 0, 0), + datetime(2001, 1, 3, 0, 0), 'foo', 1.0, 1, + Timestamp('20010104'), '20010105'], dtype='O') result = s._convert(datetime=True) - expected = Series( - [Timestamp('20010101'), Timestamp('20010102'), Timestamp('20010103')], dtype='M8[ns]') + expected = Series([Timestamp('20010101'), Timestamp('20010102'), + Timestamp('20010103')], dtype='M8[ns]') assert_series_equal(result, expected) result = s._convert(datetime=True, coerce=True) assert_series_equal(result, expected) - expected = Series( - [Timestamp( - '20010101'), Timestamp('20010102'), Timestamp('20010103'), - lib.NaT, lib.NaT, lib.NaT, Timestamp('20010104'), Timestamp('20010105')], dtype='M8[ns]') - result = s2._convert(datetime=True, - numeric=False, - timedelta=False, - coerce=True) + expected = Series([Timestamp('20010101'), Timestamp('20010102'), + Timestamp('20010103'), lib.NaT, lib.NaT, lib.NaT, + Timestamp('20010104'), Timestamp('20010105')], + dtype='M8[ns]') + result = s2._convert(datetime=True, numeric=False, 
timedelta=False, + coerce=True) assert_series_equal(result, expected) result = s2._convert(datetime=True, coerce=True) assert_series_equal(result, expected) s = Series(['foo', 'bar', 1, 1.0], dtype='O') result = s._convert(datetime=True, coerce=True) - expected = Series([lib.NaT]*4) + expected = Series([lib.NaT] * 4) assert_series_equal(result, expected) # preserver if non-object @@ -7189,10 +7345,10 @@ def test_convert(self): result = s._convert(datetime=True, coerce=True) assert_series_equal(result, s) - #r = s.copy() - #r[0] = np.nan - #result = r._convert(convert_dates=True,convert_numeric=False) - #self.assertEqual(result.dtype, 'M8[ns]') + # r = s.copy() + # r[0] = np.nan + # result = r._convert(convert_dates=True,convert_numeric=False) + # self.assertEqual(result.dtype, 'M8[ns]') # dateutil parses some single letters into today's value as a date expected = Series([lib.NaT]) @@ -7205,7 +7361,7 @@ def test_convert(self): assert_series_equal(result, expected) def test_convert_no_arg_error(self): - s = Series(['1.0','2']) + s = Series(['1.0', '2']) self.assertRaises(ValueError, s._convert) def test_convert_preserve_bool(self): @@ -7223,7 +7379,7 @@ def test_convert_preserve_all_bool(self): def test_apply_args(self): s = Series(['foo,bar']) - result = s.apply(str.split, args=(',',)) + result = s.apply(str.split, args=(',', )) self.assertEqual(result[0], ['foo', 'bar']) tm.assertIsInstance(result[0], list) @@ -7287,8 +7443,8 @@ def _check_align(a, b, how='left', method='pad', limit=None): for kind in JOIN_TYPES: for meth in ['pad', 'bfill']: _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth) - _check_align(self.ts[2:], self.ts[:-5], how=kind, - method=meth, limit=1) + _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth, + limit=1) # empty left _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth) @@ -7347,10 +7503,10 @@ def test_align_multiindex(self): # GH 10665 midx = pd.MultiIndex.from_product([range(2), range(3), range(2)], - 
names=('a', 'b', 'c')) + names=('a', 'b', 'c')) idx = pd.Index(range(2), name='b') - s1 = pd.Series(np.arange(12,dtype='int64'), index=midx) - s2 = pd.Series(np.arange(2,dtype='int64'), index=idx) + s1 = pd.Series(np.arange(12, dtype='int64'), index=midx) + s2 = pd.Series(np.arange(2, dtype='int64'), index=idx) # these must be the same results (but flipped) res1l, res1r = s1.align(s2, join='left') @@ -7382,7 +7538,8 @@ def test_reindex(self): # __array_interface__ is not defined for older numpies # and on some pythons try: - self.assertTrue(np.may_share_memory(self.series.index, identity.index)) + self.assertTrue(np.may_share_memory(self.series.index, + identity.index)) except (AttributeError): pass @@ -7427,7 +7584,7 @@ def test_reindex_nan(self): def test_reindex_corner(self): # (don't forget to fix this) I think it's fixed - reindexed_dep = self.empty.reindex(self.ts.index, method='pad') + self.empty.reindex(self.ts.index, method='pad') # it works # corner case: pad empty series reindexed = self.empty.reindex(self.ts.index, method='pad') @@ -7442,7 +7599,7 @@ def test_reindex_corner(self): def test_reindex_pad(self): - s = Series(np.arange(10),dtype='int64') + s = Series(np.arange(10), dtype='int64') s2 = s[::2] reindexed = s2.reindex(s.index, method='pad') @@ -7453,9 +7610,9 @@ def test_reindex_pad(self): assert_series_equal(reindexed, expected) # GH4604 - s = Series([1,2,3,4,5], index=['a', 'b', 'c', 'd', 'e']) - new_index = ['a','g','c','f'] - expected = Series([1,1,3,3],index=new_index) + s = Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e']) + new_index = ['a', 'g', 'c', 'f'] + expected = Series([1, 1, 3, 3], index=new_index) # this changes dtype because the ffill happens after result = s.reindex(new_index).ffill() @@ -7469,16 +7626,16 @@ def test_reindex_pad(self): assert_series_equal(result, expected) # inferrence of new dtype - s = Series([True,False,False,True],index=list('abcd')) - new_index='agc' + s = Series([True, False, False, True], 
index=list('abcd')) + new_index = 'agc' result = s.reindex(list(new_index)).ffill() - expected = Series([True,True,False],index=list(new_index)) + expected = Series([True, True, False], index=list(new_index)) assert_series_equal(result, expected) # GH4618 shifted series downcasting - s = Series(False,index=lrange(0,5)) + s = Series(False, index=lrange(0, 5)) result = s.shift(1).fillna(method='bfill') - expected = Series(False,index=lrange(0,5)) + expected = Series(False, index=lrange(0, 5)) assert_series_equal(result, expected) def test_reindex_nearest(self): @@ -7544,11 +7701,11 @@ def test_reindex_like(self): self.ts.reindex_like(other)) # GH 7179 - day1 = datetime(2013,3,5) - day2 = datetime(2013,5,5) - day3 = datetime(2014,3,5) + day1 = datetime(2013, 3, 5) + day2 = datetime(2013, 5, 5) + day3 = datetime(2014, 3, 5) - series1 = Series([5, None, None],[day1, day2, day3]) + series1 = Series([5, None, None], [day1, day2, day3]) series2 = Series([None, None], [day1, day3]) result = series1.reindex_like(series2, method='pad') @@ -7556,7 +7713,7 @@ def test_reindex_like(self): assert_series_equal(result, expected) def test_reindex_fill_value(self): - #------------------------------------------------------------ + # ----------------------------------------------------------- # floats floats = Series([1., 2., 3.]) result = floats.reindex([1, 2, 3]) @@ -7567,7 +7724,7 @@ def test_reindex_fill_value(self): expected = Series([2., 3., 0], index=[1, 2, 3]) assert_series_equal(result, expected) - #------------------------------------------------------------ + # ----------------------------------------------------------- # ints ints = Series([1, 2, 3]) @@ -7581,7 +7738,7 @@ def test_reindex_fill_value(self): self.assertTrue(issubclass(result.dtype.type, np.integer)) assert_series_equal(result, expected) - #------------------------------------------------------------ + # ----------------------------------------------------------- # objects objects = Series([1, 2, 3], 
dtype=object) @@ -7593,7 +7750,7 @@ def test_reindex_fill_value(self): expected = Series([2, 3, 'foo'], index=[1, 2, 3], dtype=object) assert_series_equal(result, expected) - #------------------------------------------------------------ + # ------------------------------------------------------------ # bools bools = Series([True, False, True]) @@ -7621,8 +7778,9 @@ def test_rename(self): self.assert_numpy_array_equal(renamed.index, ['a', 'foo', 'c', 'bar']) # index with name - renamer = Series( - np.arange(4), index=Index(['a', 'b', 'c', 'd'], name='name'), dtype='int64') + renamer = Series(np.arange(4), + index=Index(['a', 'b', 'c', 'd'], name='name'), + dtype='int64') renamed = renamer.rename({}) self.assertEqual(renamed.index.name, renamer.index.name) @@ -7645,8 +7803,8 @@ def test_ne(self): self.assertTrue(tm.equalContents(~(ts.index == 5), expected)) def test_pad_nan(self): - x = Series([np.nan, 1., np.nan, 3., np.nan], - ['z', 'a', 'b', 'c', 'd'], dtype=float) + x = Series([np.nan, 1., np.nan, 3., np.nan], ['z', 'a', 'b', 'c', 'd'], + dtype=float) x.fillna(method='pad', inplace=True) @@ -7675,20 +7833,18 @@ def test_unstack(self): assert_frame_equal(unstacked, expected.T) index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], - labels=[[0, 0, 0, 0, 0, 0], - [0, 1, 2, 0, 1, 2], + labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]]) s = Series(np.random.randn(6), index=index) exp_index = MultiIndex(levels=[['one', 'two', 'three'], [0, 1]], - labels=[[0, 1, 2, 0, 1, 2], - [0, 1, 0, 1, 0, 1]]) + labels=[[0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]]) expected = DataFrame({'bar': s.values}, index=exp_index).sortlevel(0) unstacked = s.unstack(0) assert_frame_equal(unstacked, expected) # GH5873 idx = pd.MultiIndex.from_arrays([[101, 102], [3.5, np.nan]]) - ts = pd.Series([1,2], index=idx) + ts = pd.Series([1, 2], index=idx) left = ts.unstack() right = DataFrame([[nan, 1], [2, nan]], index=[101, 102], columns=[nan, 3.5]) @@ -7696,8 +7852,9 @@ 
def test_unstack(self): print(right) assert_frame_equal(left, right) - idx = pd.MultiIndex.from_arrays([['cat', 'cat', 'cat', 'dog', 'dog'], - ['a', 'a', 'b', 'a', 'b'], [1, 2, 1, 1, np.nan]]) + idx = pd.MultiIndex.from_arrays([['cat', 'cat', 'cat', 'dog', 'dog' + ], ['a', 'a', 'b', 'a', 'b'], + [1, 2, 1, 1, np.nan]]) ts = pd.Series([1.0, 1.1, 1.2, 1.3, 1.4], index=idx) right = DataFrame([[1.0, 1.3], [1.1, nan], [nan, 1.4], [1.2, nan]], columns=['cat', 'dog']) @@ -7727,6 +7884,7 @@ def test_head_tail(self): assert_series_equal(self.series.head(0), self.series[0:0]) assert_series_equal(self.series.tail(), self.series[-5:]) assert_series_equal(self.series.tail(0), self.series[0:0]) + def test_isin(self): s = Series(['A', 'B', 'C', 'a', 'B', 'B', 'A', 'C']) @@ -7747,11 +7905,11 @@ def test_isin_with_string_scalar(self): def test_isin_with_i8(self): # GH 5021 - expected = Series([True,True,False,False,False]) - expected2 = Series([False,True,False,False,False]) + expected = Series([True, True, False, False, False]) + expected2 = Series([False, True, False, False, False]) # datetime64[ns] - s = Series(date_range('jan-01-2013','jan-05-2013')) + s = Series(date_range('jan-01-2013', 'jan-05-2013')) result = s.isin(s[0:2]) assert_series_equal(result, expected) @@ -7770,12 +7928,13 @@ def test_isin_with_i8(self): assert_series_equal(result, expected2) # timedelta64[ns] - s = Series(pd.to_timedelta(lrange(5),unit='d')) + s = Series(pd.to_timedelta(lrange(5), unit='d')) result = s.isin(s[0:2]) assert_series_equal(result, expected) -#------------------------------------------------------------------------------ -# TimeSeries-specific +# ----------------------------------------------------------------------------- +# timeseries-specific + def test_cummethods_bool(self): # GH 6270 # looks like a buggy np.maximum.accumulate for numpy 1.6.1, py 3.2 @@ -7789,8 +7948,10 @@ def cummax(x): b = ~a c = pd.Series([False] * len(b)) d = ~c - methods = {'cumsum': np.cumsum, 'cumprod': 
np.cumprod, - 'cummin': cummin, 'cummax': cummax} + methods = {'cumsum': np.cumsum, + 'cumprod': np.cumprod, + 'cummin': cummin, + 'cummax': cummax} args = product((a, b, c, d), methods) for s, method in args: expected = Series(methods[method](s.values)) @@ -7802,7 +7963,9 @@ def cummax(x): cpe = pd.Series([False, 0, nan, 0]) cmin = pd.Series([False, False, nan, False]) cmax = pd.Series([False, True, nan, True]) - expecteds = {'cumsum': cse, 'cumprod': cpe, 'cummin': cmin, + expecteds = {'cumsum': cse, + 'cumprod': cpe, + 'cummin': cmin, 'cummax': cmax} for method in methods: @@ -7893,33 +8056,32 @@ def test_replace(self): expected = ser.ffill() result = ser.replace(np.nan) assert_series_equal(result, expected) - #GH 5797 + # GH 5797 ser = Series(date_range('20130101', periods=5)) expected = ser.copy() expected.loc[2] = Timestamp('20120101') - result = ser.replace({Timestamp('20130103'): - Timestamp('20120101')}) + result = ser.replace({Timestamp('20130103'): Timestamp('20120101')}) assert_series_equal(result, expected) result = ser.replace(Timestamp('20130103'), Timestamp('20120101')) assert_series_equal(result, expected) def test_replace_with_single_list(self): ser = Series([0, 1, 2, 3, 4]) - result = ser.replace([1,2,3]) - assert_series_equal(result, Series([0,0,0,0,4])) + result = ser.replace([1, 2, 3]) + assert_series_equal(result, Series([0, 0, 0, 0, 4])) s = ser.copy() - s.replace([1,2,3],inplace=True) - assert_series_equal(s, Series([0,0,0,0,4])) + s.replace([1, 2, 3], inplace=True) + assert_series_equal(s, Series([0, 0, 0, 0, 4])) # make sure things don't get corrupted when fillna call fails s = ser.copy() with tm.assertRaises(ValueError): - s.replace([1,2,3],inplace=True,method='crash_cymbal') + s.replace([1, 2, 3], inplace=True, method='crash_cymbal') assert_series_equal(s, ser) def test_replace_mixed_types(self): - s = Series(np.arange(5),dtype='int64') + s = Series(np.arange(5), dtype='int64') def check_replace(to_rep, val, expected): sc = s.copy() @@ 
-7929,35 +8091,36 @@ def check_replace(to_rep, val, expected): assert_series_equal(expected, sc) # should NOT upcast to float - e = Series([0,1,2,3,4]) + e = Series([0, 1, 2, 3, 4]) tr, v = [3], [3.0] check_replace(tr, v, e) # MUST upcast to float - e = Series([0,1,2,3.5,4]) + e = Series([0, 1, 2, 3.5, 4]) tr, v = [3], [3.5] check_replace(tr, v, e) # casts to object - e = Series([0,1,2,3.5,'a']) - tr, v = [3,4], [3.5,'a'] + e = Series([0, 1, 2, 3.5, 'a']) + tr, v = [3, 4], [3.5, 'a'] check_replace(tr, v, e) # again casts to object - e = Series([0,1,2,3.5,Timestamp('20130101')]) - tr, v = [3,4],[3.5,Timestamp('20130101')] + e = Series([0, 1, 2, 3.5, Timestamp('20130101')]) + tr, v = [3, 4], [3.5, Timestamp('20130101')] check_replace(tr, v, e) # casts to float - e = Series([0,1,2,3.5,1]) - tr, v = [3,4],[3.5,True] + e = Series([0, 1, 2, 3.5, 1]) + tr, v = [3, 4], [3.5, True] check_replace(tr, v, e) # test an object with dates + floats + integers + strings dr = date_range('1/1/2001', '1/10/2001', freq='D').to_series().reset_index(drop=True) - result = dr.astype(object).replace([dr[0],dr[1],dr[2]], [1.0,2,'a']) - expected = Series([1.0,2,'a'] + dr[3:].tolist(),dtype=object) + result = dr.astype(object).replace( + [dr[0], dr[1], dr[2]], [1.0, 2, 'a']) + expected = Series([1.0, 2, 'a'] + dr[3:].tolist(), dtype=object) assert_series_equal(result, expected) def test_replace_bool_with_string_no_op(self): @@ -7984,9 +8147,8 @@ def test_replace_with_dict_with_bool_keys(self): s.replace({'asdf': 'asdb', True: 'yes'}) def test_asfreq(self): - ts = Series([0., 1., 2.], index=[datetime(2009, 10, 30), - datetime(2009, 11, 30), - datetime(2009, 12, 31)]) + ts = Series([0., 1., 2.], index=[datetime(2009, 10, 30), datetime( + 2009, 11, 30), datetime(2009, 12, 31)]) daily_ts = ts.asfreq('B') monthly_ts = daily_ts.asfreq('BM') @@ -8038,9 +8200,12 @@ def test_diff(self): assert_series_equal(nrs, nxp) # with tz - s = Series(date_range('2000-01-01 09:00:00',periods=5,tz='US/Eastern'), 
name='foo') + s = Series( + date_range('2000-01-01 09:00:00', periods=5, + tz='US/Eastern'), name='foo') result = s.diff() - assert_series_equal(result,Series(TimedeltaIndex(['NaT'] + ['1 days']*4),name='foo')) + assert_series_equal(result, Series( + TimedeltaIndex(['NaT'] + ['1 days'] * 4), name='foo')) def test_pct_change(self): rs = self.ts.pct_change(fill_method=None) @@ -8116,7 +8281,7 @@ def test_mpl_compat_hack(self): expected = self.ts.values[:, np.newaxis] assert_almost_equal(result, expected) -#------------------------------------------------------------------------------ +# ----------------------------------------------------------------------------- # GroupBy def test_select(self): @@ -8129,7 +8294,7 @@ def test_select(self): expected = self.ts[self.ts.index.weekday == 2] assert_series_equal(result, expected) -#------------------------------------------------------------------------------ +# ----------------------------------------------------------------------------- # Misc not safe for sparse def test_dropna_preserve_name(self): @@ -8143,12 +8308,13 @@ def test_dropna_preserve_name(self): def test_numpy_unique(self): # it works! 
- result = np.unique(self.ts) + np.unique(self.ts) def test_concat_empty_series_dtypes_roundtrips(self): # round-tripping with self & like self - dtypes = map(np.dtype,['float64','int8','uint8','bool','m8[ns]','M8[ns]']) + dtypes = map(np.dtype, ['float64', 'int8', 'uint8', 'bool', 'm8[ns]', + 'M8[ns]']) for dtype in dtypes: self.assertEqual(pd.concat([Series(dtype=dtype)]).dtype, dtype) @@ -8156,16 +8322,19 @@ def test_concat_empty_series_dtypes_roundtrips(self): Series(dtype=dtype)]).dtype, dtype) def int_result_type(dtype, dtype2): - typs = set([dtype.kind,dtype2.kind]) - if not len(typs-set(['i','u','b'])) and (dtype.kind == 'i' or dtype2.kind == 'i'): + typs = set([dtype.kind, dtype2.kind]) + if not len(typs - set(['i', 'u', 'b'])) and (dtype.kind == 'i' or + dtype2.kind == 'i'): return 'i' - elif not len(typs-set(['u','b'])) and (dtype.kind == 'u' or dtype2.kind == 'u'): - return 'u' + elif not len(typs - set(['u', 'b'])) and (dtype.kind == 'u' or + dtype2.kind == 'u'): + return 'u' return None def float_result_type(dtype, dtype2): - typs = set([dtype.kind,dtype2.kind]) - if not len(typs-set(['f','i','u'])) and (dtype.kind == 'f' or dtype2.kind == 'f'): + typs = set([dtype.kind, dtype2.kind]) + if not len(typs - set(['f', 'i', 'u'])) and (dtype.kind == 'f' or + dtype2.kind == 'f'): return 'f' return None @@ -8184,8 +8353,8 @@ def get_result_type(dtype, dtype2): continue expected = get_result_type(dtype, dtype2) - result = pd.concat([Series(dtype=dtype), - Series(dtype=dtype2)]).dtype + result = pd.concat([Series(dtype=dtype), Series(dtype=dtype2) + ]).dtype self.assertEqual(result.kind, expected) def test_concat_empty_series_dtypes(self): @@ -8194,7 +8363,8 @@ def test_concat_empty_series_dtypes(self): self.assertEqual(pd.concat([Series(dtype=np.bool_), Series(dtype=np.int32)]).dtype, np.int32) self.assertEqual(pd.concat([Series(dtype=np.bool_), - Series(dtype=np.float32)]).dtype, np.object_) + Series(dtype=np.float32)]).dtype, + np.object_) # datetimelike 
self.assertEqual(pd.concat([Series(dtype='m8[ns]'), @@ -8211,27 +8381,29 @@ def test_concat_empty_series_dtypes(self): # categorical self.assertEqual(pd.concat([Series(dtype='category'), - Series(dtype='category')]).dtype, 'category') + Series(dtype='category')]).dtype, + 'category') self.assertEqual(pd.concat([Series(dtype='category'), - Series(dtype='float64')]).dtype, np.object_) + Series(dtype='float64')]).dtype, + np.object_) self.assertEqual(pd.concat([Series(dtype='category'), Series(dtype='object')]).dtype, 'category') # sparse - result = pd.concat([Series(dtype='float64').to_sparse(), - Series(dtype='float64').to_sparse()]) - self.assertEqual(result.dtype,np.float64) - self.assertEqual(result.ftype,'float64:sparse') - - result = pd.concat([Series(dtype='float64').to_sparse(), - Series(dtype='float64')]) - self.assertEqual(result.dtype,np.float64) - self.assertEqual(result.ftype,'float64:sparse') - - result = pd.concat([Series(dtype='float64').to_sparse(), - Series(dtype='object')]) - self.assertEqual(result.dtype,np.object_) - self.assertEqual(result.ftype,'object:dense') + result = pd.concat([Series(dtype='float64').to_sparse(), Series( + dtype='float64').to_sparse()]) + self.assertEqual(result.dtype, np.float64) + self.assertEqual(result.ftype, 'float64:sparse') + + result = pd.concat([Series(dtype='float64').to_sparse(), Series( + dtype='float64')]) + self.assertEqual(result.dtype, np.float64) + self.assertEqual(result.ftype, 'float64:sparse') + + result = pd.concat([Series(dtype='float64').to_sparse(), Series( + dtype='object')]) + self.assertEqual(result.dtype, np.object_) + self.assertEqual(result.ftype, 'object:dense') def test_searchsorted_numeric_dtypes_scalar(self): s = Series([1, 2, 90, 1000, 3e9]) @@ -8274,6 +8446,7 @@ def test_to_frame_expanddim(self): # GH 9762 class SubclassedSeries(Series): + @property def _constructor_expanddim(self): return SubclassedFrame @@ -8315,7 +8488,6 @@ def test_basic_indexing(self): self.assertRaises(IndexError, 
s.__getitem__, 5) self.assertRaises(IndexError, s.__setitem__, 5, 0) - def test_int_indexing(self): s = Series(np.random.randn(6), index=[0, 0, 1, 1, 2, 2]) @@ -8371,8 +8543,7 @@ def test_reset_index(self): # level index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], - labels=[[0, 0, 0, 0, 0, 0], - [0, 1, 2, 0, 1, 2], + labels=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]]) s = Series(np.random.randn(6), index=index) rs = s.reset_index(level=1) @@ -8389,8 +8560,8 @@ def test_set_index_makes_timeseries(self): s.index = idx with tm.assert_produces_warning(FutureWarning): - self.assertTrue(s.is_time_series == True) - self.assertTrue(s.index.is_all_dates == True) + self.assertTrue(s.is_time_series) + self.assertTrue(s.index.is_all_dates) def test_timeseries_coercion(self): idx = tm.makeDateIndex(10000) @@ -8453,8 +8624,8 @@ def test_unique_data_ownership(self): def test_datetime_timedelta_quantiles(self): # covers #9694 - self.assertTrue(pd.isnull(Series([],dtype='M8[ns]').quantile(.5))) - self.assertTrue(pd.isnull(Series([],dtype='m8[ns]').quantile(.5))) + self.assertTrue(pd.isnull(Series([], dtype='M8[ns]').quantile(.5))) + self.assertTrue(pd.isnull(Series([], dtype='m8[ns]').quantile(.5))) def test_empty_timeseries_redections_return_nat(self): # covers #11245 @@ -8462,6 +8633,7 @@ def test_empty_timeseries_redections_return_nat(self): self.assertIs(Series([], dtype=dtype).min(), pd.NaT) self.assertIs(Series([], dtype=dtype).max(), pd.NaT) + if __name__ == '__main__': nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], exit=False) diff --git a/pandas/tests/test_stats.py b/pandas/tests/test_stats.py index 9acd7c2233b7b..ef1bd734de776 100644 --- a/pandas/tests/test_stats.py +++ b/pandas/tests/test_stats.py @@ -13,6 +13,7 @@ assert_almost_equal) import pandas.util.testing as tm + class TestRank(tm.TestCase): _multiprocess_can_split_ = True s = Series([1, 3, 4, 2, nan, 2, 1, 5, nan, 3]) @@ -49,7 +50,7 @@ def 
test_rank_methods_series(self): from scipy.stats import rankdata xs = np.random.randn(9) - xs = np.concatenate([xs[i:] for i in range(0, 9, 2)]) # add duplicates + xs = np.concatenate([xs[i:] for i in range(0, 9, 2)]) # add duplicates np.random.shuffle(xs) index = [chr(ord('a') + i) for i in range(len(xs))] @@ -76,8 +77,9 @@ def test_rank_methods_frame(self): for ax in [0, 1]: for m in ['average', 'min', 'max', 'first', 'dense']: result = df.rank(axis=ax, method=m) - sprank = np.apply_along_axis(rankdata, ax, vals, - m if m != 'first' else 'ordinal') + sprank = np.apply_along_axis( + rankdata, ax, vals, + m if m != 'first' else 'ordinal') expected = DataFrame(sprank, columns=cols) tm.assert_frame_equal(result, expected) @@ -86,11 +88,11 @@ def test_rank_dense_method(self): in_out = [([1], [1]), ([2], [1]), ([0], [1]), - ([2,2], [1,1]), - ([1,2,3], [1,2,3]), - ([4,2,1], [3,2,1],), - ([1,1,5,5,3], [1,1,3,3,2]), - ([-5,-4,-3,-2,-1], [1,2,3,4,5])] + ([2, 2], [1, 1]), + ([1, 2, 3], [1, 2, 3]), + ([4, 2, 1], [3, 2, 1],), + ([1, 1, 5, 5, 3], [1, 1, 3, 3, 2]), + ([-5, -4, -3, -2, -1], [1, 2, 3, 4, 5])] for ser, exp in in_out: for dtype in dtypes: @@ -137,7 +139,6 @@ def test_rank_descending(self): assert_frame_equal(res3, expected) def test_rank_2d_tie_methods(self): - s = self.s df = self.df def _check2d(df, expected, method='average', axis=0): diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py index 269d272525ce6..f8255c4b4a410 100644 --- a/pandas/tests/test_strings.py +++ b/pandas/tests/test_strings.py @@ -1,11 +1,8 @@ # -*- coding: utf-8 -*- # pylint: disable-msg=E1101,W0612 -from datetime import datetime, timedelta, date -import os -import operator +from datetime import datetime, timedelta import re -import warnings import nose @@ -13,13 +10,12 @@ import numpy as np from numpy.random import randint -from pandas.compat import range, lrange, u, unichr +from pandas.compat import range, u import pandas.compat as compat -from pandas import (Index, 
Series, DataFrame, isnull, notnull, - bdate_range, date_range, MultiIndex) +from pandas import (Index, Series, DataFrame, isnull, MultiIndex) import pandas.core.common as com -from pandas.util.testing import assert_series_equal, assert_almost_equal +from pandas.util.testing import assert_series_equal import pandas.util.testing as tm import pandas.core.strings as strings @@ -56,7 +52,8 @@ def test_iter(self): for el in s: # each element of the series is either a basestring/str or nan - self.assertTrue(isinstance(el, compat.string_types) or isnull(el)) + self.assertTrue(isinstance(el, compat.string_types) or isnull( + el)) # desired behavior is to iterate until everything would be nan on the # next iter so make sure the last element of the iterator was 'l' in @@ -86,8 +83,8 @@ def test_iter_single_element(self): assert_series_equal(ds, s) def test_iter_object_try_string(self): - ds = Series([slice(None, randint(10), randint(10, 20)) - for _ in range(4)]) + ds = Series([slice(None, randint(10), randint(10, 20)) for _ in range( + 4)]) i, s = 100, 'h' @@ -216,9 +213,9 @@ def test_contains(self): tm.assert_almost_equal(result, expected) # na - values = Series(['om', 'foo',np.nan]) + values = Series(['om', 'foo', np.nan]) res = values.str.contains('foo', na="foo") - self.assertEqual (res.ix[2], "foo") + self.assertEqual(res.ix[2], "foo") def test_startswith(self): values = Series(['om', NA, 'foo_nom', 'nom', 'bar_foo', NA, 'foo']) @@ -284,8 +281,8 @@ def test_title(self): tm.assert_series_equal(result, exp) # mixed - mixed = Series(["FOO", NA, "bar", True, datetime.today(), - "blah", None, 1, 2.]) + mixed = Series(["FOO", NA, "bar", True, datetime.today(), "blah", None, + 1, 2.]) mixed = mixed.str.title() exp = Series(["Foo", NA, "Bar", NA, NA, "Blah", NA, NA, NA]) tm.assert_almost_equal(mixed, exp) @@ -309,8 +306,8 @@ def test_lower_upper(self): tm.assert_series_equal(result, values) # mixed - mixed = Series(['a', NA, 'b', True, datetime.today(), 'foo', None, - 1, 2.]) + 
mixed = Series(['a', NA, 'b', True, datetime.today(), 'foo', None, 1, + 2.]) mixed = mixed.str.upper() rs = Series(mixed).str.lower() xp = ['a', NA, 'b', NA, NA, 'foo', NA, NA, NA] @@ -334,8 +331,8 @@ def test_capitalize(self): tm.assert_series_equal(result, exp) # mixed - mixed = Series(["FOO", NA, "bar", True, datetime.today(), - "blah", None, 1, 2.]) + mixed = Series(["FOO", NA, "bar", True, datetime.today(), "blah", None, + 1, 2.]) mixed = mixed.str.capitalize() exp = Series(["Foo", NA, "Bar", NA, NA, "Blah", NA, NA, NA]) tm.assert_almost_equal(mixed, exp) @@ -353,8 +350,8 @@ def test_swapcase(self): tm.assert_series_equal(result, exp) # mixed - mixed = Series(["FOO", NA, "bar", True, datetime.today(), - "Blah", None, 1, 2.]) + mixed = Series(["FOO", NA, "bar", True, datetime.today(), "Blah", None, + 1, 2.]) mixed = mixed.str.swapcase() exp = Series(["foo", NA, "BAR", NA, NA, "bLAH", NA, NA, NA]) tm.assert_almost_equal(mixed, exp) @@ -371,8 +368,10 @@ def test_casemethods(self): self.assertEqual(s.str.lower().tolist(), [v.lower() for v in values]) self.assertEqual(s.str.upper().tolist(), [v.upper() for v in values]) self.assertEqual(s.str.title().tolist(), [v.title() for v in values]) - self.assertEqual(s.str.capitalize().tolist(), [v.capitalize() for v in values]) - self.assertEqual(s.str.swapcase().tolist(), [v.swapcase() for v in values]) + self.assertEqual(s.str.capitalize().tolist(), [ + v.capitalize() for v in values]) + self.assertEqual(s.str.swapcase().tolist(), [ + v.swapcase() for v in values]) def test_replace(self): values = Series(['fooBAD__barBAD', NA]) @@ -405,7 +404,7 @@ def test_replace(self): exp = Series([u('foobarBAD'), NA]) tm.assert_series_equal(result, exp) - #flags + unicode + # flags + unicode values = Series([b"abcd,\xc3\xa0".decode("utf-8")]) exp = Series([b"abcd, \xc3\xa0".decode("utf-8")]) result = values.str.replace("(?<=\w),(?=\w)", ", ", flags=re.UNICODE) @@ -423,8 +422,8 @@ def test_repeat(self): tm.assert_series_equal(result, 
exp) # mixed - mixed = Series(['a', NA, 'b', True, datetime.today(), 'foo', - None, 1, 2.]) + mixed = Series(['a', NA, 'b', True, datetime.today(), 'foo', None, 1, + 2.]) rs = Series(mixed).str.repeat(3) xp = ['aaa', NA, 'bbb', NA, NA, 'foofoofoo', NA, NA, NA] @@ -432,17 +431,14 @@ def test_repeat(self): tm.assert_almost_equal(rs, xp) # unicode - values = Series([u('a'), u('b'), NA, u('c'), NA, - u('d')]) + values = Series([u('a'), u('b'), NA, u('c'), NA, u('d')]) result = values.str.repeat(3) - exp = Series([u('aaa'), u('bbb'), NA, u('ccc'), NA, - u('ddd')]) + exp = Series([u('aaa'), u('bbb'), NA, u('ccc'), NA, u('ddd')]) tm.assert_series_equal(result, exp) result = values.str.repeat([1, 2, 3, 4, 5, 6]) - exp = Series([u('a'), u('bb'), NA, u('cccc'), NA, - u('dddddd')]) + exp = Series([u('a'), u('bb'), NA, u('cccc'), NA, u('dddddd')]) tm.assert_series_equal(result, exp) def test_deprecated_match(self): @@ -527,8 +523,8 @@ def test_extract(self): 'foo', None, 1, 2.]) rs = Series(mixed).str.extract('.*(BAD[_]+).*(BAD)') - exp = DataFrame([['BAD_', 'BAD'], er, ['BAD_', 'BAD'], er, er, - er, er, er, er]) + exp = DataFrame([['BAD_', 'BAD'], er, ['BAD_', 'BAD'], er, er, er, er, + er, er]) tm.assert_frame_equal(rs, exp) # unicode @@ -590,12 +586,14 @@ def test_extract(self): # two named groups result = s.str.extract('(?P<letter>[AB])(?P<number>[123])') - exp = DataFrame([['A', '1'], ['B', '2'], [NA, NA]], columns=['letter', 'number']) + exp = DataFrame([['A', '1'], ['B', '2'], [NA, NA]], + columns=['letter', 'number']) tm.assert_frame_equal(result, exp) # mix named and unnamed groups result = s.str.extract('([AB])(?P<number>[123])') - exp = DataFrame([['A', '1'], ['B', '2'], [NA, NA]], columns=[0, 'number']) + exp = DataFrame([['A', '1'], ['B', '2'], [NA, NA]], + columns=[0, 'number']) tm.assert_frame_equal(result, exp) # one normal group, one non-capturing group @@ -604,18 +602,23 @@ def test_extract(self): tm.assert_series_equal(result, exp) # two normal groups, one 
non-capturing group - result = Series(['A11', 'B22', 'C33']).str.extract('([AB])([123])(?:[123])') + result = Series(['A11', 'B22', 'C33']).str.extract( + '([AB])([123])(?:[123])') exp = DataFrame([['A', '1'], ['B', '2'], [NA, NA]]) tm.assert_frame_equal(result, exp) # one optional group followed by one normal group - result = Series(['A1', 'B2', '3']).str.extract('(?P<letter>[AB])?(?P<number>[123])') - exp = DataFrame([['A', '1'], ['B', '2'], [NA, '3']], columns=['letter', 'number']) + result = Series(['A1', 'B2', '3']).str.extract( + '(?P<letter>[AB])?(?P<number>[123])') + exp = DataFrame([['A', '1'], ['B', '2'], [NA, '3']], + columns=['letter', 'number']) tm.assert_frame_equal(result, exp) # one normal group followed by one optional group - result = Series(['A1', 'B2', 'C']).str.extract('(?P<letter>[ABC])(?P<number>[123])?') - exp = DataFrame([['A', '1'], ['B', '2'], ['C', NA]], columns=['letter', 'number']) + result = Series(['A1', 'B2', 'C']).str.extract( + '(?P<letter>[ABC])(?P<number>[123])?') + exp = DataFrame([['A', '1'], ['B', '2'], ['C', NA]], + columns=['letter', 'number']) tm.assert_frame_equal(result, exp) # GH6348 @@ -627,12 +630,15 @@ def check_index(index): exp = Series(['1', '2', NA], index=index) tm.assert_series_equal(result, exp) - result = Series(data, index=index).str.extract('(?P<letter>\D)(?P<number>\d)?') - exp = DataFrame([['A', '1'], ['B', '2'], ['C', NA]], columns=['letter', 'number'], index=index) + result = Series( + data, index=index).str.extract('(?P<letter>\D)(?P<number>\d)?') + exp = DataFrame([['A', '1'], ['B', '2'], ['C', NA]], columns=[ + 'letter', 'number' + ], index=index) tm.assert_frame_equal(result, exp) - for index in [ tm.makeStringIndex, tm.makeUnicodeIndex, tm.makeIntIndex, - tm.makeDateIndex, tm.makePeriodIndex ]: + for index in [tm.makeStringIndex, tm.makeUnicodeIndex, tm.makeIntIndex, + tm.makeDateIndex, tm.makePeriodIndex]: check_index(index()) def test_extract_single_series_name_is_preserved(self): @@ -661,11 
+667,12 @@ def test_empty_str_methods(self): tm.assert_series_equal(empty_bool, empty.str.endswith('a')) tm.assert_series_equal(empty_str, empty.str.lower()) tm.assert_series_equal(empty_str, empty.str.upper()) - tm.assert_series_equal(empty_str, empty.str.replace('a','b')) + tm.assert_series_equal(empty_str, empty.str.replace('a', 'b')) tm.assert_series_equal(empty_str, empty.str.repeat(3)) tm.assert_series_equal(empty_bool, empty.str.match('^a')) tm.assert_series_equal(empty_str, empty.str.extract('()')) - tm.assert_frame_equal(DataFrame(columns=[0,1], dtype=str), empty.str.extract('()()')) + tm.assert_frame_equal( + DataFrame(columns=[0, 1], dtype=str), empty.str.extract('()()')) tm.assert_frame_equal(DataFrame(dtype=str), empty.str.get_dummies()) tm.assert_series_equal(empty_str, empty_list.str.join('')) tm.assert_series_equal(empty_int, empty.str.len()) @@ -676,8 +683,10 @@ def test_empty_str_methods(self): tm.assert_series_equal(empty_str, empty.str.center(42)) tm.assert_series_equal(empty_list, empty.str.split('a')) tm.assert_series_equal(empty_list, empty.str.rsplit('a')) - tm.assert_series_equal(empty_list, empty.str.partition('a', expand=False)) - tm.assert_series_equal(empty_list, empty.str.rpartition('a', expand=False)) + tm.assert_series_equal(empty_list, + empty.str.partition('a', expand=False)) + tm.assert_series_equal(empty_list, + empty.str.rpartition('a', expand=False)) tm.assert_series_equal(empty_str, empty.str.slice(stop=1)) tm.assert_series_equal(empty_str, empty.str.slice(step=1)) tm.assert_series_equal(empty_str, empty.str.strip()) @@ -708,7 +717,7 @@ def test_empty_str_methods(self): tm.assert_series_equal(empty_str, empty.str.translate(table)) def test_empty_str_methods_to_frame(self): - empty_str = empty = Series(dtype=str) + empty = Series(dtype=str) empty_df = DataFrame([]) tm.assert_frame_equal(empty_df, empty.str.partition('a')) tm.assert_frame_equal(empty_df, empty.str.rpartition('a')) @@ -716,14 +725,25 @@ def 
test_empty_str_methods_to_frame(self): def test_ismethods(self): values = ['A', 'b', 'Xy', '4', '3A', '', 'TT', '55', '-', ' '] str_s = Series(values) - alnum_e = [True, True, True, True, True, False, True, True, False, False] - alpha_e = [True, True, True, False, False, False, True, False, False, False] - digit_e = [False, False, False, True, False, False, False, True, False, False] - num_e = [False, False, False, True, False, False, False, True, False, False] - space_e = [False, False, False, False, False, False, False, False, False, True] - lower_e = [False, True, False, False, False, False, False, False, False, False] - upper_e = [True, False, False, False, True, False, True, False, False, False] - title_e = [True, False, True, False, True, False, False, False, False, False] + alnum_e = [True, True, True, True, True, False, True, True, False, + False] + alpha_e = [True, True, True, False, False, False, True, False, False, + False] + digit_e = [False, False, False, True, False, False, False, True, False, + False] + + # TODO: unused + num_e = [False, False, False, True, False, False, # noqa + False, True, False, False] + + space_e = [False, False, False, False, False, False, False, False, + False, True] + lower_e = [False, True, False, False, False, False, False, False, + False, False] + upper_e = [True, False, False, False, True, False, True, False, False, + False] + title_e = [True, False, True, False, True, False, False, False, False, + False] tm.assert_series_equal(str_s.str.isalnum(), Series(alnum_e)) tm.assert_series_equal(str_s.str.isalpha(), Series(alpha_e)) @@ -733,13 +753,20 @@ def test_ismethods(self): tm.assert_series_equal(str_s.str.isupper(), Series(upper_e)) tm.assert_series_equal(str_s.str.istitle(), Series(title_e)) - self.assertEqual(str_s.str.isalnum().tolist(), [v.isalnum() for v in values]) - self.assertEqual(str_s.str.isalpha().tolist(), [v.isalpha() for v in values]) - self.assertEqual(str_s.str.isdigit().tolist(), [v.isdigit() for v in 
values]) - self.assertEqual(str_s.str.isspace().tolist(), [v.isspace() for v in values]) - self.assertEqual(str_s.str.islower().tolist(), [v.islower() for v in values]) - self.assertEqual(str_s.str.isupper().tolist(), [v.isupper() for v in values]) - self.assertEqual(str_s.str.istitle().tolist(), [v.istitle() for v in values]) + self.assertEqual(str_s.str.isalnum().tolist(), [v.isalnum() + for v in values]) + self.assertEqual(str_s.str.isalpha().tolist(), [v.isalpha() + for v in values]) + self.assertEqual(str_s.str.isdigit().tolist(), [v.isdigit() + for v in values]) + self.assertEqual(str_s.str.isspace().tolist(), [v.isspace() + for v in values]) + self.assertEqual(str_s.str.islower().tolist(), [v.islower() + for v in values]) + self.assertEqual(str_s.str.isupper().tolist(), [v.isupper() + for v in values]) + self.assertEqual(str_s.str.istitle().tolist(), [v.istitle() + for v in values]) def test_isnumeric(self): # 0x00bc: ¼ VULGAR FRACTION ONE QUARTER @@ -754,8 +781,10 @@ def test_isnumeric(self): tm.assert_series_equal(s.str.isdecimal(), Series(decimal_e)) unicodes = [u'A', u'3', u'¼', u'★', u'፸', u'3', u'four'] - self.assertEqual(s.str.isnumeric().tolist(), [v.isnumeric() for v in unicodes]) - self.assertEqual(s.str.isdecimal().tolist(), [v.isdecimal() for v in unicodes]) + self.assertEqual(s.str.isnumeric().tolist(), [ + v.isnumeric() for v in unicodes]) + self.assertEqual(s.str.isdecimal().tolist(), [ + v.isdecimal() for v in unicodes]) values = ['A', np.nan, u'¼', u'★', np.nan, u'3', 'four'] s = Series(values) @@ -799,8 +828,7 @@ def test_join(self): tm.assert_almost_equal(rs, xp) # unicode - values = Series([u('a_b_c'), u('c_d_e'), np.nan, - u('f_g_h')]) + values = Series([u('a_b_c'), u('c_d_e'), np.nan, u('f_g_h')]) result = values.str.split('_').str.join('_') tm.assert_series_equal(values, result) @@ -822,8 +850,8 @@ def test_len(self): tm.assert_almost_equal(rs, xp) # unicode - values = Series([u('foo'), u('fooo'), u('fooooo'), np.nan, - u('fooooooo')]) 
+ values = Series([u('foo'), u('fooo'), u('fooooo'), np.nan, u( + 'fooooooo')]) result = values.str.len() exp = values.map(lambda x: len(x) if com.notnull(x) else NA) @@ -847,8 +875,7 @@ def test_findall(self): tm.assert_almost_equal(rs, xp) # unicode - values = Series([u('fooBAD__barBAD'), NA, u('foo'), - u('BAD')]) + values = Series([u('fooBAD__barBAD'), NA, u('foo'), u('BAD')]) result = values.str.findall('BAD[_]*') exp = Series([[u('BAD__'), u('BAD')], NA, [], [u('BAD')]]) @@ -886,10 +913,12 @@ def test_find(self): expected = np.array([v.rfind('EF', 3, 6) for v in values.values]) tm.assert_numpy_array_equal(result.values, expected) - with tm.assertRaisesRegexp(TypeError, "expected a string object, not int"): + with tm.assertRaisesRegexp(TypeError, + "expected a string object, not int"): result = values.str.find(0) - with tm.assertRaisesRegexp(TypeError, "expected a string object, not int"): + with tm.assertRaisesRegexp(TypeError, + "expected a string object, not int"): result = values.str.rfind(0) def test_find_nan(self): @@ -949,7 +978,8 @@ def test_index(self): with tm.assertRaisesRegexp(ValueError, "substring not found"): result = s.str.index('DE') - with tm.assertRaisesRegexp(TypeError, "expected a string object, not int"): + with tm.assertRaisesRegexp(TypeError, + "expected a string object, not int"): result = s.str.index(0) # test with nan @@ -975,8 +1005,8 @@ def test_pad(self): tm.assert_almost_equal(result, exp) # mixed - mixed = Series(['a', NA, 'b', True, datetime.today(), - 'ee', None, 1, 2.]) + mixed = Series(['a', NA, 'b', True, datetime.today(), 'ee', None, 1, 2. + ]) rs = Series(mixed).str.pad(5, side='left') xp = Series([' a', NA, ' b', NA, NA, ' ee', NA, NA, NA]) @@ -984,8 +1014,8 @@ def test_pad(self): tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) - mixed = Series(['a', NA, 'b', True, datetime.today(), - 'ee', None, 1, 2.]) + mixed = Series(['a', NA, 'b', True, datetime.today(), 'ee', None, 1, 2. 
+ ]) rs = Series(mixed).str.pad(5, side='right') xp = Series(['a ', NA, 'b ', NA, NA, 'ee ', NA, NA, NA]) @@ -993,8 +1023,8 @@ def test_pad(self): tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) - mixed = Series(['a', NA, 'b', True, datetime.today(), - 'ee', None, 1, 2.]) + mixed = Series(['a', NA, 'b', True, datetime.today(), 'ee', None, 1, 2. + ]) rs = Series(mixed).str.pad(5, side='both') xp = Series([' a ', NA, ' b ', NA, NA, ' ee ', NA, NA, NA]) @@ -1003,22 +1033,18 @@ def test_pad(self): tm.assert_almost_equal(rs, xp) # unicode - values = Series([u('a'), u('b'), NA, u('c'), NA, - u('eeeeee')]) + values = Series([u('a'), u('b'), NA, u('c'), NA, u('eeeeee')]) result = values.str.pad(5, side='left') - exp = Series([u(' a'), u(' b'), NA, u(' c'), NA, - u('eeeeee')]) + exp = Series([u(' a'), u(' b'), NA, u(' c'), NA, u('eeeeee')]) tm.assert_almost_equal(result, exp) result = values.str.pad(5, side='right') - exp = Series([u('a '), u('b '), NA, u('c '), NA, - u('eeeeee')]) + exp = Series([u('a '), u('b '), NA, u('c '), NA, u('eeeeee')]) tm.assert_almost_equal(result, exp) result = values.str.pad(5, side='both') - exp = Series([u(' a '), u(' b '), NA, u(' c '), NA, - u('eeeeee')]) + exp = Series([u(' a '), u(' b '), NA, u(' c '), NA, u('eeeeee')]) tm.assert_almost_equal(result, exp) def test_pad_fillchar(self): @@ -1037,10 +1063,12 @@ def test_pad_fillchar(self): exp = Series(['XXaXX', 'XXbXX', NA, 'XXcXX', NA, 'eeeeee']) tm.assert_almost_equal(result, exp) - with tm.assertRaisesRegexp(TypeError, "fillchar must be a character, not str"): + with tm.assertRaisesRegexp(TypeError, + "fillchar must be a character, not str"): result = values.str.pad(5, fillchar='XY') - with tm.assertRaisesRegexp(TypeError, "fillchar must be a character, not int"): + with tm.assertRaisesRegexp(TypeError, + "fillchar must be a character, not int"): result = values.str.pad(5, fillchar=5) def test_translate(self): @@ -1065,7 +1093,8 @@ def test_translate(self): expected = 
klass(['abcde', 'abcc', 'cddd', 'cde']) tm.assert_numpy_array_equal(result, expected) else: - with tm.assertRaisesRegexp(ValueError, "deletechars is not a valid argument"): + with tm.assertRaisesRegexp( + ValueError, "deletechars is not a valid argument"): result = s.str.translate(table, deletechars='fg') # Series with non-string values @@ -1090,44 +1119,40 @@ def test_center_ljust_rjust(self): tm.assert_almost_equal(result, exp) # mixed - mixed = Series(['a', NA, 'b', True, datetime.today(), - 'c', 'eee', None, 1, 2.]) + mixed = Series(['a', NA, 'b', True, datetime.today(), 'c', 'eee', None, + 1, 2.]) rs = Series(mixed).str.center(5) - xp = Series([' a ', NA, ' b ', NA, NA, ' c ', ' eee ', NA, NA, - NA]) + xp = Series([' a ', NA, ' b ', NA, NA, ' c ', ' eee ', NA, NA, NA + ]) tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) rs = Series(mixed).str.ljust(5) - xp = Series(['a ', NA, 'b ', NA, NA, 'c ', 'eee ', NA, NA, - NA]) + xp = Series(['a ', NA, 'b ', NA, NA, 'c ', 'eee ', NA, NA, NA + ]) tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) rs = Series(mixed).str.rjust(5) - xp = Series([' a', NA, ' b', NA, NA, ' c', ' eee', NA, NA, - NA]) + xp = Series([' a', NA, ' b', NA, NA, ' c', ' eee', NA, NA, NA + ]) tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) # unicode - values = Series([u('a'), u('b'), NA, u('c'), NA, - u('eeeeee')]) + values = Series([u('a'), u('b'), NA, u('c'), NA, u('eeeeee')]) result = values.str.center(5) - exp = Series([u(' a '), u(' b '), NA, u(' c '), NA, - u('eeeeee')]) + exp = Series([u(' a '), u(' b '), NA, u(' c '), NA, u('eeeeee')]) tm.assert_almost_equal(result, exp) result = values.str.ljust(5) - exp = Series([u('a '), u('b '), NA, u('c '), NA, - u('eeeeee')]) + exp = Series([u('a '), u('b '), NA, u('c '), NA, u('eeeeee')]) tm.assert_almost_equal(result, exp) result = values.str.rjust(5) - exp = Series([u(' a'), u(' b'), NA, u(' c'), NA, - u('eeeeee')]) + exp = Series([u(' a'), u(' b'), NA, u(' 
c'), NA, u('eeeeee')]) tm.assert_almost_equal(result, exp) def test_center_ljust_rjust_fillchar(self): @@ -1154,22 +1179,28 @@ def test_center_ljust_rjust_fillchar(self): # If fillchar is not a charatter, normal str raises TypeError # 'aaa'.ljust(5, 'XY') # TypeError: must be char, not str - with tm.assertRaisesRegexp(TypeError, "fillchar must be a character, not str"): + with tm.assertRaisesRegexp(TypeError, + "fillchar must be a character, not str"): result = values.str.center(5, fillchar='XY') - with tm.assertRaisesRegexp(TypeError, "fillchar must be a character, not str"): + with tm.assertRaisesRegexp(TypeError, + "fillchar must be a character, not str"): result = values.str.ljust(5, fillchar='XY') - with tm.assertRaisesRegexp(TypeError, "fillchar must be a character, not str"): + with tm.assertRaisesRegexp(TypeError, + "fillchar must be a character, not str"): result = values.str.rjust(5, fillchar='XY') - with tm.assertRaisesRegexp(TypeError, "fillchar must be a character, not int"): + with tm.assertRaisesRegexp(TypeError, + "fillchar must be a character, not int"): result = values.str.center(5, fillchar=1) - with tm.assertRaisesRegexp(TypeError, "fillchar must be a character, not int"): + with tm.assertRaisesRegexp(TypeError, + "fillchar must be a character, not int"): result = values.str.ljust(5, fillchar=1) - with tm.assertRaisesRegexp(TypeError, "fillchar must be a character, not int"): + with tm.assertRaisesRegexp(TypeError, + "fillchar must be a character, not int"): result = values.str.rjust(5, fillchar=1) def test_zfill(self): @@ -1208,11 +1239,11 @@ def test_split(self): tm.assert_series_equal(result, exp) # mixed - mixed = Series(['a_b_c', NA, 'd_e_f', True, datetime.today(), - None, 1, 2.]) + mixed = Series(['a_b_c', NA, 'd_e_f', True, datetime.today(), None, 1, + 2.]) result = mixed.str.split('_') - exp = Series([['a', 'b', 'c'], NA, ['d', 'e', 'f'], NA, NA, - NA, NA, NA]) + exp = Series([['a', 'b', 'c'], NA, ['d', 'e', 'f'], NA, NA, NA, NA, NA + 
]) tm.assertIsInstance(result, Series) tm.assert_almost_equal(result, exp) @@ -1224,8 +1255,7 @@ def test_split(self): values = Series([u('a_b_c'), u('c_d_e'), NA, u('f_g_h')]) result = values.str.split('_') - exp = Series([[u('a'), u('b'), u('c')], - [u('c'), u('d'), u('e')], NA, + exp = Series([[u('a'), u('b'), u('c')], [u('c'), u('d'), u('e')], NA, [u('f'), u('g'), u('h')]]) tm.assert_series_equal(result, exp) @@ -1235,8 +1265,7 @@ def test_split(self): # regex split values = Series([u('a,b_c'), u('c_d,e'), NA, u('f,g,h')]) result = values.str.split('[,_]') - exp = Series([[u('a'), u('b'), u('c')], - [u('c'), u('d'), u('e')], NA, + exp = Series([[u('a'), u('b'), u('c')], [u('c'), u('d'), u('e')], NA, [u('f'), u('g'), u('h')]]) tm.assert_series_equal(result, exp) @@ -1255,11 +1284,11 @@ def test_rsplit(self): tm.assert_series_equal(result, exp) # mixed - mixed = Series(['a_b_c', NA, 'd_e_f', True, datetime.today(), - None, 1, 2.]) + mixed = Series(['a_b_c', NA, 'd_e_f', True, datetime.today(), None, 1, + 2.]) result = mixed.str.rsplit('_') - exp = Series([['a', 'b', 'c'], NA, ['d', 'e', 'f'], NA, NA, - NA, NA, NA]) + exp = Series([['a', 'b', 'c'], NA, ['d', 'e', 'f'], NA, NA, NA, NA, NA + ]) tm.assertIsInstance(result, Series) tm.assert_almost_equal(result, exp) @@ -1270,8 +1299,7 @@ def test_rsplit(self): # unicode values = Series([u('a_b_c'), u('c_d_e'), NA, u('f_g_h')]) result = values.str.rsplit('_') - exp = Series([[u('a'), u('b'), u('c')], - [u('c'), u('d'), u('e')], NA, + exp = Series([[u('a'), u('b'), u('c')], [u('c'), u('d'), u('e')], NA, [u('f'), u('g'), u('h')]]) tm.assert_series_equal(result, exp) @@ -1281,10 +1309,7 @@ def test_rsplit(self): # regex split is not supported by rsplit values = Series([u('a,b_c'), u('c_d,e'), NA, u('f,g,h')]) result = values.str.rsplit('[,_]') - exp = Series([[u('a,b_c')], - [u('c_d,e')], - NA, - [u('f,g,h')]]) + exp = Series([[u('a,b_c')], [u('c_d,e')], NA, [u('f,g,h')]]) tm.assert_series_equal(result, exp) # setting 
max number of splits, make sure it's from reverse @@ -1338,16 +1363,20 @@ def test_split_to_dataframe(self): s = Series(['some_equal_splits', 'with_no_nans']) with tm.assert_produces_warning(FutureWarning): result = s.str.split('_', return_type='frame') - exp = DataFrame({0: ['some', 'with'], 1: ['equal', 'no'], + exp = DataFrame({0: ['some', 'with'], + 1: ['equal', 'no'], 2: ['splits', 'nans']}) tm.assert_frame_equal(result, exp) s = Series(['some_unequal_splits', 'one_of_these_things_is_not']) with tm.assert_produces_warning(FutureWarning): result = s.str.split('_', return_type='frame') - exp = DataFrame({0: ['some', 'one'], 1: ['unequal', 'of'], - 2: ['splits', 'these'], 3: [NA, 'things'], - 4: [NA, 'is'], 5: [NA, 'not']}) + exp = DataFrame({0: ['some', 'one'], + 1: ['unequal', 'of'], + 2: ['splits', 'these'], + 3: [NA, 'things'], + 4: [NA, 'is'], + 5: [NA, 'not']}) tm.assert_frame_equal(result, exp) s = Series(['some_splits', 'with_index'], index=['preserve', 'me']) @@ -1369,15 +1398,19 @@ def test_split_to_dataframe_expand(self): s = Series(['some_equal_splits', 'with_no_nans']) result = s.str.split('_', expand=True) - exp = DataFrame({0: ['some', 'with'], 1: ['equal', 'no'], + exp = DataFrame({0: ['some', 'with'], + 1: ['equal', 'no'], 2: ['splits', 'nans']}) tm.assert_frame_equal(result, exp) s = Series(['some_unequal_splits', 'one_of_these_things_is_not']) result = s.str.split('_', expand=True) - exp = DataFrame({0: ['some', 'one'], 1: ['unequal', 'of'], - 2: ['splits', 'these'], 3: [NA, 'things'], - 4: [NA, 'is'], 5: [NA, 'not']}) + exp = DataFrame({0: ['some', 'one'], + 1: ['unequal', 'of'], + 2: ['splits', 'these'], + 3: [NA, 'things'], + 4: [NA, 'is'], + 5: [NA, 'not']}) tm.assert_frame_equal(result, exp) s = Series(['some_splits', 'with_index'], index=['preserve', 'me']) @@ -1399,15 +1432,16 @@ def test_split_to_multiindex_expand(self): idx = Index(['some_equal_splits', 'with_no_nans']) result = idx.str.split('_', expand=True) - exp = 
MultiIndex.from_tuples([('some', 'equal', 'splits'), - ('with', 'no', 'nans')]) + exp = MultiIndex.from_tuples([('some', 'equal', 'splits'), ( + 'with', 'no', 'nans')]) tm.assert_index_equal(result, exp) self.assertEqual(result.nlevels, 3) idx = Index(['some_unequal_splits', 'one_of_these_things_is_not']) result = idx.str.split('_', expand=True) - exp = MultiIndex.from_tuples([('some', 'unequal', 'splits', NA, NA, NA), - ('one', 'of', 'these', 'things', 'is', 'not')]) + exp = MultiIndex.from_tuples([('some', 'unequal', 'splits', NA, NA, NA + ), ('one', 'of', 'these', 'things', + 'is', 'not')]) tm.assert_index_equal(result, exp) self.assertEqual(result.nlevels, 6) @@ -1423,18 +1457,19 @@ def test_rsplit_to_dataframe_expand(self): s = Series(['some_equal_splits', 'with_no_nans']) result = s.str.rsplit('_', expand=True) - exp = DataFrame({0: ['some', 'with'], 1: ['equal', 'no'], + exp = DataFrame({0: ['some', 'with'], + 1: ['equal', 'no'], 2: ['splits', 'nans']}) tm.assert_frame_equal(result, exp) result = s.str.rsplit('_', expand=True, n=2) - exp = DataFrame({0: ['some', 'with'], 1: ['equal', 'no'], + exp = DataFrame({0: ['some', 'with'], + 1: ['equal', 'no'], 2: ['splits', 'nans']}) tm.assert_frame_equal(result, exp) result = s.str.rsplit('_', expand=True, n=1) - exp = DataFrame({0: ['some_equal', 'with_no'], - 1: ['splits', 'nans']}) + exp = DataFrame({0: ['some_equal', 'with_no'], 1: ['splits', 'nans']}) tm.assert_frame_equal(result, exp) s = Series(['some_splits', 'with_index'], index=['preserve', 'me']) @@ -1452,15 +1487,15 @@ def test_rsplit_to_multiindex_expand(self): idx = Index(['some_equal_splits', 'with_no_nans']) result = idx.str.rsplit('_', expand=True) - exp = MultiIndex.from_tuples([('some', 'equal', 'splits'), - ('with', 'no', 'nans')]) + exp = MultiIndex.from_tuples([('some', 'equal', 'splits'), ( + 'with', 'no', 'nans')]) tm.assert_index_equal(result, exp) self.assertEqual(result.nlevels, 3) idx = Index(['some_equal_splits', 'with_no_nans']) result 
= idx.str.rsplit('_', expand=True, n=1) - exp = MultiIndex.from_tuples([('some_equal', 'splits'), - ('with_no', 'nans')]) + exp = MultiIndex.from_tuples([('some_equal', 'splits'), ('with_no', + 'nans')]) tm.assert_index_equal(result, exp) self.assertEqual(result.nlevels, 2) @@ -1468,31 +1503,37 @@ def test_partition_series(self): values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h']) result = values.str.partition('_', expand=False) - exp = Series([['a', '_', 'b_c'], ['c', '_', 'd_e'], NA, ['f', '_', 'g_h']]) + exp = Series([['a', '_', 'b_c'], ['c', '_', 'd_e'], NA, ['f', '_', + 'g_h']]) tm.assert_series_equal(result, exp) result = values.str.rpartition('_', expand=False) - exp = Series([['a_b', '_', 'c'], ['c_d', '_', 'e'], NA, ['f_g', '_', 'h']]) + exp = Series([['a_b', '_', 'c'], ['c_d', '_', 'e'], NA, ['f_g', '_', + 'h']]) tm.assert_series_equal(result, exp) # more than one char values = Series(['a__b__c', 'c__d__e', NA, 'f__g__h']) result = values.str.partition('__', expand=False) - exp = Series([['a', '__', 'b__c'], ['c', '__', 'd__e'], NA, ['f', '__', 'g__h']]) + exp = Series([['a', '__', 'b__c'], ['c', '__', 'd__e'], NA, ['f', '__', + 'g__h']]) tm.assert_series_equal(result, exp) result = values.str.rpartition('__', expand=False) - exp = Series([['a__b', '__', 'c'], ['c__d', '__', 'e'], NA, ['f__g', '__', 'h']]) + exp = Series([['a__b', '__', 'c'], ['c__d', '__', 'e'], NA, + ['f__g', '__', 'h']]) tm.assert_series_equal(result, exp) # None values = Series(['a b c', 'c d e', NA, 'f g h']) result = values.str.partition(expand=False) - exp = Series([['a', ' ', 'b c'], ['c', ' ', 'd e'], NA, ['f', ' ', 'g h']]) + exp = Series([['a', ' ', 'b c'], ['c', ' ', 'd e'], NA, ['f', ' ', + 'g h']]) tm.assert_series_equal(result, exp) result = values.str.rpartition(expand=False) - exp = Series([['a b', ' ', 'c'], ['c d', ' ', 'e'], NA, ['f g', ' ', 'h']]) + exp = Series([['a b', ' ', 'c'], ['c d', ' ', 'e'], NA, ['f g', ' ', + 'h']]) tm.assert_series_equal(result, exp) # Not 
splited @@ -1529,12 +1570,14 @@ def test_partition_index(self): values = Index(['a_b_c', 'c_d_e', 'f_g_h']) result = values.str.partition('_', expand=False) - exp = Index(np.array([('a', '_', 'b_c'), ('c', '_', 'd_e'), ('f', '_', 'g_h')])) + exp = Index(np.array([('a', '_', 'b_c'), ('c', '_', 'd_e'), ('f', '_', + 'g_h')])) tm.assert_index_equal(result, exp) self.assertEqual(result.nlevels, 1) result = values.str.rpartition('_', expand=False) - exp = Index(np.array([('a_b', '_', 'c'), ('c_d', '_', 'e'), ('f_g', '_', 'h')])) + exp = Index(np.array([('a_b', '_', 'c'), ('c_d', '_', 'e'), ( + 'f_g', '_', 'h')])) tm.assert_index_equal(result, exp) self.assertEqual(result.nlevels, 1) @@ -1598,12 +1641,12 @@ def test_slice(self): exp = Series(['foo', 'bar', NA, 'baz']) tm.assert_series_equal(result, exp) - for start, stop, step in [(0, 3, -1), (None, None, -1), - (3, 10, 2), (3, 0, -1)]: + for start, stop, step in [(0, 3, -1), (None, None, -1), (3, 10, 2), + (3, 0, -1)]: try: result = values.str.slice(start, stop, step) - expected = Series([s[start:stop:step] if not isnull(s) else NA for s in - values]) + expected = Series([s[start:stop:step] if not isnull(s) else NA + for s in values]) tm.assert_series_equal(result, expected) except: print('failed on %s:%s:%s' % (start, stop, step)) @@ -1614,19 +1657,16 @@ def test_slice(self): None, 1, 2.]) rs = Series(mixed).str.slice(2, 5) - xp = Series(['foo', NA, 'bar', NA, NA, - NA, NA, NA]) + xp = Series(['foo', NA, 'bar', NA, NA, NA, NA, NA]) tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) rs = Series(mixed).str.slice(2, 5, -1) - xp = Series(['oof', NA, 'rab', NA, NA, - NA, NA, NA]) + xp = Series(['oof', NA, 'rab', NA, NA, NA, NA, NA]) # unicode - values = Series([u('aafootwo'), u('aabartwo'), NA, - u('aabazqux')]) + values = Series([u('aafootwo'), u('aabartwo'), NA, u('aabazqux')]) result = values.str.slice(2, 5) exp = Series([u('foo'), u('bar'), NA, u('baz')]) @@ -1637,7 +1677,8 @@ def test_slice(self): 
tm.assert_series_equal(result, exp) def test_slice_replace(self): - values = Series(['short', 'a bit longer', 'evenlongerthanthat', '', NA]) + values = Series(['short', 'a bit longer', 'evenlongerthanthat', '', NA + ]) exp = Series(['shrt', 'a it longer', 'evnlongerthanthat', '', NA]) result = values.str.slice_replace(2, 3) @@ -1647,11 +1688,13 @@ def test_slice_replace(self): result = values.str.slice_replace(2, 3, 'z') tm.assert_series_equal(result, exp) - exp = Series(['shzort', 'a zbit longer', 'evzenlongerthanthat', 'z', NA]) + exp = Series(['shzort', 'a zbit longer', 'evzenlongerthanthat', 'z', NA + ]) result = values.str.slice_replace(2, 2, 'z') tm.assert_series_equal(result, exp) - exp = Series(['shzort', 'a zbit longer', 'evzenlongerthanthat', 'z', NA]) + exp = Series(['shzort', 'a zbit longer', 'evzenlongerthanthat', 'z', NA + ]) result = values.str.slice_replace(2, 1, 'z') tm.assert_series_equal(result, exp) @@ -1688,34 +1731,30 @@ def test_strip_lstrip_rstrip(self): def test_strip_lstrip_rstrip_mixed(self): # mixed - mixed = Series([' aa ', NA, ' bb \t\n', True, datetime.today(), - None, 1, 2.]) + mixed = Series([' aa ', NA, ' bb \t\n', True, datetime.today(), None, + 1, 2.]) rs = Series(mixed).str.strip() - xp = Series(['aa', NA, 'bb', NA, NA, - NA, NA, NA]) + xp = Series(['aa', NA, 'bb', NA, NA, NA, NA, NA]) tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) rs = Series(mixed).str.lstrip() - xp = Series(['aa ', NA, 'bb \t\n', NA, NA, - NA, NA, NA]) + xp = Series(['aa ', NA, 'bb \t\n', NA, NA, NA, NA, NA]) tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) rs = Series(mixed).str.rstrip() - xp = Series([' aa', NA, ' bb', NA, NA, - NA, NA, NA]) + xp = Series([' aa', NA, ' bb', NA, NA, NA, NA, NA]) tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) def test_strip_lstrip_rstrip_unicode(self): # unicode - values = Series([u(' aa '), u(' bb \n'), NA, - u('cc ')]) + values = Series([u(' aa '), u(' bb \n'), NA, u('cc ')]) 
result = values.str.strip() exp = Series([u('aa'), u('bb'), NA, u('cc')]) @@ -1745,8 +1784,7 @@ def test_strip_lstrip_rstrip_args(self): assert_series_equal(rs, xp) def test_strip_lstrip_rstrip_args_unicode(self): - values = Series([u('xxABCxx'), u('xx BNSD'), - u('LDFJH xx')]) + values = Series([u('xxABCxx'), u('xx BNSD'), u('LDFJH xx')]) rs = values.str.strip(u('x')) xp = Series(['ABC', ' BNSD', 'LDFJH ']) @@ -1763,26 +1801,25 @@ def test_strip_lstrip_rstrip_args_unicode(self): def test_wrap(self): # test values are: two words less than width, two words equal to width, # two words greater than width, one word less than width, one word - # equal to width, one word greater than width, multiple tokens with trailing - # whitespace equal to width - values = Series([u('hello world'), u('hello world!'), - u('hello world!!'), u('abcdefabcde'), - u('abcdefabcdef'), u('abcdefabcdefa'), - u('ab ab ab ab '), u('ab ab ab ab a'), - u('\t')]) + # equal to width, one word greater than width, multiple tokens with + # trailing whitespace equal to width + values = Series([u('hello world'), u('hello world!'), u( + 'hello world!!'), u('abcdefabcde'), u('abcdefabcdef'), u( + 'abcdefabcdefa'), u('ab ab ab ab '), u('ab ab ab ab a'), u( + '\t')]) # expected values - xp = Series([u('hello world'), u('hello world!'), - u('hello\nworld!!'), u('abcdefabcde'), - u('abcdefabcdef'), u('abcdefabcdef\na'), - u('ab ab ab ab'), u('ab ab ab ab\na'), - u('')]) + xp = Series([u('hello world'), u('hello world!'), u('hello\nworld!!'), + u('abcdefabcde'), u('abcdefabcdef'), u('abcdefabcdef\na'), + u('ab ab ab ab'), u('ab ab ab ab\na'), u('')]) rs = values.str.wrap(12, break_long_words=True) assert_series_equal(rs, xp) - # test with pre and post whitespace (non-unicode), NaN, and non-ascii Unicode - values = Series([' pre ', np.nan, u('\xac\u20ac\U00008000 abadcafe')]) + # test with pre and post whitespace (non-unicode), NaN, and non-ascii + # Unicode + values = Series([' pre ', np.nan, 
u('\xac\u20ac\U00008000 abadcafe') + ]) xp = Series([' pre', NA, u('\xac\u20ac\U00008000 ab\nadcafe')]) rs = values.str.wrap(6) assert_series_equal(rs, xp) @@ -1795,19 +1832,17 @@ def test_get(self): tm.assert_series_equal(result, expected) # mixed - mixed = Series(['a_b_c', NA, 'c_d_e', True, datetime.today(), - None, 1, 2.]) + mixed = Series(['a_b_c', NA, 'c_d_e', True, datetime.today(), None, 1, + 2.]) rs = Series(mixed).str.split('_').str.get(1) - xp = Series(['b', NA, 'd', NA, NA, - NA, NA, NA]) + xp = Series(['b', NA, 'd', NA, NA, NA, NA, NA]) tm.assertIsInstance(rs, Series) tm.assert_almost_equal(rs, xp) # unicode - values = Series([u('a_b_c'), u('c_d_e'), np.nan, - u('f_g_h')]) + values = Series([u('a_b_c'), u('c_d_e'), np.nan, u('f_g_h')]) result = values.str.split('_').str.get(1) expected = Series([u('b'), u('d'), np.nan, u('g')]) @@ -1815,8 +1850,6 @@ def test_get(self): def test_more_contains(self): # PR #1179 - import re - s = Series(['A', 'B', 'C', 'Aaba', 'Baca', '', NA, 'CABA', 'dog', 'cat']) @@ -1826,8 +1859,8 @@ def test_more_contains(self): assert_series_equal(result, expected) result = s.str.contains('a', case=False) - expected = Series([True, False, False, True, True, False, np.nan, - True, False, True]) + expected = Series([True, False, False, True, True, False, np.nan, True, + False, True]) assert_series_equal(result, expected) result = s.str.contains('Aa') @@ -1847,9 +1880,8 @@ def test_more_contains(self): def test_more_replace(self): # PR #1179 - import re - s = Series(['A', 'B', 'C', 'Aaba', 'Baca', - '', NA, 'CABA', 'dog', 'cat']) + s = Series(['A', 'B', 'C', 'Aaba', 'Baca', '', NA, 'CABA', + 'dog', 'cat']) result = s.str.replace('A', 'YYY') expected = Series(['YYY', 'B', 'C', 'YYYaba', 'Baca', '', NA, @@ -1867,8 +1899,8 @@ def test_more_replace(self): assert_series_equal(result, expected) def test_string_slice_get_syntax(self): - s = Series(['YYY', 'B', 'C', 'YYYYYYbYYY', 'BYYYcYYY', NA, - 'CYYYBYYY', 'dog', 'cYYYt']) + s = 
Series(['YYY', 'B', 'C', 'YYYYYYbYYY', 'BYYYcYYY', NA, 'CYYYBYYY', + 'dog', 'cYYYt']) result = s.str[0] expected = s.str.get(0) @@ -1883,7 +1915,7 @@ def test_string_slice_get_syntax(self): assert_series_equal(result, expected) def test_string_slice_out_of_bounds(self): - s = Series([(1, 2), (1,), (3,4,5)]) + s = Series([(1, 2), (1, ), (3, 4, 5)]) result = s.str[1] expected = Series([2, np.nan, 4]) @@ -1896,14 +1928,17 @@ def test_string_slice_out_of_bounds(self): assert_series_equal(result, expected) def test_match_findall_flags(self): - data = {'Dave': 'dave@google.com', 'Steve': 'steve@gmail.com', - 'Rob': 'rob@gmail.com', 'Wes': np.nan} + data = {'Dave': 'dave@google.com', + 'Steve': 'steve@gmail.com', + 'Rob': 'rob@gmail.com', + 'Wes': np.nan} data = Series(data) - pat = pattern = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})' + pat = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})' with tm.assert_produces_warning(FutureWarning): result = data.str.match(pat, flags=re.IGNORECASE) + self.assertEqual(result[0], ('dave', 'google', 'com')) result = data.str.findall(pat, flags=re.IGNORECASE) @@ -1929,8 +1964,7 @@ def test_encode_decode(self): def test_encode_decode_errors(self): encodeBase = Series([u('a'), u('b'), u('a\x9d')]) - self.assertRaises(UnicodeEncodeError, - encodeBase.str.encode, 'cp1252') + self.assertRaises(UnicodeEncodeError, encodeBase.str.encode, 'cp1252') f = lambda x: x.encode('cp1252', 'ignore') result = encodeBase.str.encode('cp1252', 'ignore') @@ -1939,8 +1973,7 @@ def test_encode_decode_errors(self): decodeBase = Series([b'a', b'b', b'a\x9d']) - self.assertRaises(UnicodeDecodeError, - decodeBase.str.decode, 'cp1252') + self.assertRaises(UnicodeDecodeError, decodeBase.str.decode, 'cp1252') f = lambda x: x.decode('cp1252', 'ignore') result = decodeBase.str.decode('cp1252', 'ignore') @@ -1973,8 +2006,8 @@ def test_normalize(self): tm.assert_index_equal(result, expected) def test_cat_on_filtered_index(self): - df = 
DataFrame(index=MultiIndex.from_product([[2011, 2012], [1,2,3]], - names=['year', 'month'])) + df = DataFrame(index=MultiIndex.from_product( + [[2011, 2012], [1, 2, 3]], names=['year', 'month'])) df = df.reset_index() df = df[df.month > 1] @@ -1989,21 +2022,18 @@ def test_cat_on_filtered_index(self): self.assertEqual(str_multiple.loc[1], '2011 2 2') - def test_index_str_accessor_visibility(self): from pandas.core.strings import StringMethods if not compat.PY3: - cases = [(['a', 'b'], 'string'), - (['a', u('b')], 'mixed'), + cases = [(['a', 'b'], 'string'), (['a', u('b')], 'mixed'), ([u('a'), u('b')], 'unicode'), (['a', 'b', 1], 'mixed-integer'), (['a', 'b', 1.3], 'mixed'), (['a', 'b', 1.3, 1], 'mixed-integer'), (['aa', datetime(2011, 1, 1)], 'mixed')] else: - cases = [(['a', 'b'], 'string'), - (['a', u('b')], 'string'), + cases = [(['a', 'b'], 'string'), (['a', u('b')], 'string'), ([u('a'), u('b')], 'string'), (['a', 'b', 1], 'mixed-integer'), (['a', 'b', 1.3], 'mixed'), @@ -2043,7 +2073,8 @@ def test_index_str_accessor_visibility(self): def test_str_accessor_no_new_attributes(self): # https://github.com/pydata/pandas/issues/10673 s = Series(list('aabbcde')) - with tm.assertRaisesRegexp(AttributeError, "You cannot add any new attribute"): + with tm.assertRaisesRegexp(AttributeError, + "You cannot add any new attribute"): s.str.xlabel = "a" def test_method_on_bytes(self): @@ -2053,8 +2084,8 @@ def test_method_on_bytes(self): self.assertRaises(TypeError, lhs.str.cat, rhs) else: result = lhs.str.cat(rhs) - expected = Series(np.array(['ad', 'be', 'cf'], - 'S2').astype(object)) + expected = Series(np.array( + ['ad', 'be', 'cf'], 'S2').astype(object)) tm.assert_series_equal(result, expected) diff --git a/pandas/tests/test_style.py b/pandas/tests/test_style.py index 486f997f9a7c8..fd8540fdf9c0a 100644 --- a/pandas/tests/test_style.py +++ b/pandas/tests/test_style.py @@ -230,39 +230,53 @@ def test_bar(self): def test_bar_0points(self): df = pd.DataFrame([[1, 2, 3], [4, 5, 
6], [7, 8, 9]]) result = df.style.bar()._compute().ctx - expected = {(0, 0): ['width: 10em', ' height: 80%'], - (0, 1): ['width: 10em', ' height: 80%'], - (0, 2): ['width: 10em', ' height: 80%'], - (1, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, transparent 0%)'], - (1, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, transparent 0%)'], - (1, 2): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, transparent 0%)'], - (2, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, transparent 0%)'], - (2, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, transparent 0%)'], - (2, 2): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, transparent 0%)']} + expected = { + (0, 0): ['width: 10em', ' height: 80%'], + (0, 1): ['width: 10em', ' height: 80%'], + (0, 2): ['width: 10em', ' height: 80%'], + (1, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%, ' + 'transparent 0%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%, ' + 'transparent 0%)'], + (1, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%, ' + 'transparent 0%)'], + (2, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%, ' + 'transparent 0%)'], + (2, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%, ' + 'transparent 0%)'], + (2, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%, ' + 'transparent 0%)']} self.assertEqual(result, expected) result = df.style.bar(axis=1)._compute().ctx - expected = {(0, 0): ['width: 10em', ' height: 80%'], - (0, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, transparent 0%)'], - (0, 2): ['width: 
10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, transparent 0%)'], - (1, 0): ['width: 10em', ' height: 80%'], - (1, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, transparent 0%)'], - (1, 2): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, transparent 0%)'], - (2, 0): ['width: 10em', ' height: 80%'], - (2, 1): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 50.0%, transparent 0%)'], - (2, 2): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg,#d65f5f 100.0%, transparent 0%)']} + expected = { + (0, 0): ['width: 10em', ' height: 80%'], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%, ' + 'transparent 0%)'], + (0, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%, ' + 'transparent 0%)'], + (1, 0): ['width: 10em', ' height: 80%'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%, ' + 'transparent 0%)'], + (1, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%, ' + 'transparent 0%)'], + (2, 0): ['width: 10em', ' height: 80%'], + (2, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 50.0%, ' + 'transparent 0%)'], + (2, 2): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,#d65f5f 100.0%, ' + 'transparent 0%)']} self.assertEqual(result, expected) def test_highlight_null(self, null_color='red'): diff --git a/pandas/tests/test_testing.py b/pandas/tests/test_testing.py index 58c4285b8394e..7c3ba2ee8b556 100644 --- a/pandas/tests/test_testing.py +++ b/pandas/tests/test_testing.py @@ -2,22 +2,22 @@ # -*- coding: utf-8 -*- import pandas as pd import unittest -import warnings import nose import numpy as np import sys from pandas import Series, DataFrame import pandas.util.testing as tm -from pandas.util.testing import 
( - assert_almost_equal, assertRaisesRegexp, raise_with_traceback, - assert_index_equal, assert_series_equal, assert_frame_equal, - assert_numpy_array_equal, assert_isinstance, RNGContext, - assertRaises, skip_if_no_package_deco -) +from pandas.util.testing import (assert_almost_equal, assertRaisesRegexp, + raise_with_traceback, assert_index_equal, + assert_series_equal, assert_frame_equal, + assert_numpy_array_equal, + RNGContext, assertRaises, + skip_if_no_package_deco) from pandas.compat import is_platform_windows # let's get meta. + class TestAssertAlmostEqual(tm.TestCase): _multiprocess_can_split_ = True @@ -50,7 +50,7 @@ def test_assert_almost_equal_numbers_with_zeros(self): def test_assert_almost_equal_numbers_with_mixed(self): self._assert_not_almost_equal_both(1, 'abc') - self._assert_not_almost_equal_both(1, [1,]) + self._assert_not_almost_equal_both(1, [1, ]) self._assert_not_almost_equal_both(1, object()) def test_assert_almost_equal_edge_case_ndarrays(self): @@ -68,12 +68,13 @@ def test_assert_almost_equal_dicts(self): ) self._assert_not_almost_equal_both({'a': 1}, 1) self._assert_not_almost_equal_both({'a': 1}, 'abc') - self._assert_not_almost_equal_both({'a': 1}, [1,]) + self._assert_not_almost_equal_both({'a': 1}, [1, ]) def test_assert_almost_equal_dict_like_object(self): class DictLikeObj(object): + def keys(self): - return ('a',) + return ('a', ) def __getitem__(self, item): if item == 'a': @@ -89,7 +90,7 @@ def test_assert_almost_equal_strings(self): self._assert_not_almost_equal_both('abc', 'abcd') self._assert_not_almost_equal_both('abc', 'abd') self._assert_not_almost_equal_both('abc', 1) - self._assert_not_almost_equal_both('abc', [1,]) + self._assert_not_almost_equal_both('abc', [1, ]) def test_assert_almost_equal_iterables(self): self._assert_almost_equal_both([1, 2, 3], [1, 2, 3]) @@ -140,13 +141,15 @@ class TestAssertNumpyArrayEqual(tm.TestCase): def test_numpy_array_equal_message(self): if is_platform_windows(): - raise 
nose.SkipTest("windows has incomparable line-endings and uses L on the shape") + raise nose.SkipTest("windows has incomparable line-endings " + "and uses L on the shape") expected = """numpy array are different numpy array shapes are different \\[left\\]: \\(2,\\) \\[right\\]: \\(3,\\)""" + with assertRaisesRegexp(AssertionError, expected): assert_numpy_array_equal(np.array([1, 2]), np.array([3, 4, 5])) @@ -167,6 +170,7 @@ def test_numpy_array_equal_message(self): First object is iterable, second isn't \\[left\\]: \\[1\\] \\[right\\]: 1""" + with assertRaisesRegexp(AssertionError, expected): assert_numpy_array_equal(np.array([1]), 1) with assertRaisesRegexp(AssertionError, expected): @@ -178,6 +182,7 @@ def test_numpy_array_equal_message(self): Second object is iterable, first isn't \\[left\\]: 1 \\[right\\]: \\[1\\]""" + with assertRaisesRegexp(AssertionError, expected): assert_numpy_array_equal(1, np.array([1])) with assertRaisesRegexp(AssertionError, expected): @@ -188,29 +193,34 @@ def test_numpy_array_equal_message(self): numpy array values are different \\(66\\.66667 %\\) \\[left\\]: \\[nan, 2\\.0, 3\\.0\\] \\[right\\]: \\[1\\.0, nan, 3\\.0\\]""" + with assertRaisesRegexp(AssertionError, expected): - assert_numpy_array_equal(np.array([np.nan, 2, 3]), np.array([1, np.nan, 3])) + assert_numpy_array_equal( + np.array([np.nan, 2, 3]), np.array([1, np.nan, 3])) with assertRaisesRegexp(AssertionError, expected): - assert_almost_equal(np.array([np.nan, 2, 3]), np.array([1, np.nan, 3])) + assert_almost_equal( + np.array([np.nan, 2, 3]), np.array([1, np.nan, 3])) expected = """numpy array are different numpy array values are different \\(50\\.0 %\\) \\[left\\]: \\[1, 2\\] \\[right\\]: \\[1, 3\\]""" + with assertRaisesRegexp(AssertionError, expected): assert_numpy_array_equal(np.array([1, 2]), np.array([1, 3])) with assertRaisesRegexp(AssertionError, expected): assert_almost_equal(np.array([1, 2]), np.array([1, 3])) - expected = """numpy array are different numpy array 
values are different \\(50\\.0 %\\) \\[left\\]: \\[1\\.1, 2\\.000001\\] \\[right\\]: \\[1\\.1, 2.0\\]""" + with assertRaisesRegexp(AssertionError, expected): - assert_numpy_array_equal(np.array([1.1, 2.000001]), np.array([1.1, 2.0])) + assert_numpy_array_equal( + np.array([1.1, 2.000001]), np.array([1.1, 2.0])) # must pass assert_almost_equal(np.array([1.1, 2.000001]), np.array([1.1, 2.0])) @@ -220,6 +230,7 @@ def test_numpy_array_equal_message(self): numpy array values are different \\(16\\.66667 %\\) \\[left\\]: \\[\\[1, 2\\], \\[3, 4\\], \\[5, 6\\]\\] \\[right\\]: \\[\\[1, 3\\], \\[3, 4\\], \\[5, 6\\]\\]""" + with assertRaisesRegexp(AssertionError, expected): assert_numpy_array_equal(np.array([[1, 2], [3, 4], [5, 6]]), np.array([[1, 3], [3, 4], [5, 6]])) @@ -232,6 +243,7 @@ def test_numpy_array_equal_message(self): numpy array values are different \\(25\\.0 %\\) \\[left\\]: \\[\\[1, 2\\], \\[3, 4\\]\\] \\[right\\]: \\[\\[1, 3\\], \\[3, 4\\]\\]""" + with assertRaisesRegexp(AssertionError, expected): assert_numpy_array_equal(np.array([[1, 2], [3, 4]]), np.array([[1, 3], [3, 4]])) @@ -245,6 +257,7 @@ def test_numpy_array_equal_message(self): Index shapes are different \\[left\\]: \\(2,\\) \\[right\\]: \\(3,\\)""" + with assertRaisesRegexp(AssertionError, expected): assert_numpy_array_equal(np.array([1, 2]), np.array([3, 4, 5]), obj='Index') @@ -259,6 +272,7 @@ def test_assert_almost_equal_iterable_message(self): Iterable length are different \\[left\\]: 2 \\[right\\]: 3""" + with assertRaisesRegexp(AssertionError, expected): assert_almost_equal([1, 2], [3, 4, 5]) @@ -267,6 +281,7 @@ def test_assert_almost_equal_iterable_message(self): Iterable values are different \\(50\\.0 %\\) \\[left\\]: \\[1, 2\\] \\[right\\]: \\[1, 3\\]""" + with assertRaisesRegexp(AssertionError, expected): assert_almost_equal([1, 2], [1, 3]) @@ -282,20 +297,23 @@ def test_index_equal_message(self): \\[left\\]: 1, Int64Index\\(\\[1, 2, 3\\], dtype='int64'\\) \\[right\\]: 2, 
MultiIndex\\(levels=\\[\\[u?'A', u?'B'\\], \\[1, 2, 3, 4\\]\\], labels=\\[\\[0, 0, 1, 1\\], \\[0, 1, 2, 3\\]\\]\\)""" + idx1 = pd.Index([1, 2, 3]) - idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), - ('B', 3), ('B', 4)]) + idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), ('B', 3), ('B', 4 + )]) with assertRaisesRegexp(AssertionError, expected): assert_index_equal(idx1, idx2, exact=False) - expected = """MultiIndex level \\[1\\] are different MultiIndex level \\[1\\] values are different \\(25\\.0 %\\) \\[left\\]: Int64Index\\(\\[2, 2, 3, 4\\], dtype='int64'\\) \\[right\\]: Int64Index\\(\\[1, 2, 3, 4\\], dtype='int64'\\)""" - idx1 = pd.MultiIndex.from_tuples([('A', 2), ('A', 2), ('B', 3), ('B', 4)]) - idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), ('B', 3), ('B', 4)]) + + idx1 = pd.MultiIndex.from_tuples([('A', 2), ('A', 2), ('B', 3), ('B', 4 + )]) + idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), ('B', 3), ('B', 4 + )]) with assertRaisesRegexp(AssertionError, expected): assert_index_equal(idx1, idx2) with assertRaisesRegexp(AssertionError, expected): @@ -306,6 +324,7 @@ def test_index_equal_message(self): Index length are different \\[left\\]: 3, Int64Index\\(\\[1, 2, 3\\], dtype='int64'\\) \\[right\\]: 4, Int64Index\\(\\[1, 2, 3, 4\\], dtype='int64'\\)""" + idx1 = pd.Index([1, 2, 3]) idx2 = pd.Index([1, 2, 3, 4]) with assertRaisesRegexp(AssertionError, expected): @@ -318,6 +337,7 @@ def test_index_equal_message(self): Index classes are different \\[left\\]: Int64Index\\(\\[1, 2, 3\\], dtype='int64'\\) \\[right\\]: Float64Index\\(\\[1\\.0, 2\\.0, 3\\.0\\], dtype='float64'\\)""" + idx1 = pd.Index([1, 2, 3]) idx2 = pd.Index([1, 2, 3.0]) with assertRaisesRegexp(AssertionError, expected): @@ -330,6 +350,7 @@ def test_index_equal_message(self): Index values are different \\(33\\.33333 %\\) \\[left\\]: Float64Index\\(\\[1.0, 2.0, 3.0], dtype='float64'\\) \\[right\\]: Float64Index\\(\\[1.0, 2.0, 3.0000000001\\], dtype='float64'\\)""" + idx1 = 
pd.Index([1, 2, 3.]) idx2 = pd.Index([1, 2, 3.0000000001]) with assertRaisesRegexp(AssertionError, expected): @@ -343,6 +364,7 @@ def test_index_equal_message(self): Index values are different \\(33\\.33333 %\\) \\[left\\]: Float64Index\\(\\[1.0, 2.0, 3.0], dtype='float64'\\) \\[right\\]: Float64Index\\(\\[1.0, 2.0, 3.0001\\], dtype='float64'\\)""" + idx1 = pd.Index([1, 2, 3.]) idx2 = pd.Index([1, 2, 3.0001]) with assertRaisesRegexp(AssertionError, expected): @@ -350,13 +372,15 @@ def test_index_equal_message(self): with assertRaisesRegexp(AssertionError, expected): assert_index_equal(idx1, idx2, check_exact=False) # must success - assert_index_equal(idx1, idx2, check_exact=False, check_less_precise=True) + assert_index_equal(idx1, idx2, check_exact=False, + check_less_precise=True) expected = """Index are different Index values are different \\(33\\.33333 %\\) \\[left\\]: Int64Index\\(\\[1, 2, 3\\], dtype='int64'\\) \\[right\\]: Int64Index\\(\\[1, 2, 4\\], dtype='int64'\\)""" + idx1 = pd.Index([1, 2, 3]) idx2 = pd.Index([1, 2, 4]) with assertRaisesRegexp(AssertionError, expected): @@ -369,8 +393,11 @@ def test_index_equal_message(self): MultiIndex level \\[1\\] values are different \\(25\\.0 %\\) \\[left\\]: Int64Index\\(\\[2, 2, 3, 4\\], dtype='int64'\\) \\[right\\]: Int64Index\\(\\[1, 2, 3, 4\\], dtype='int64'\\)""" - idx1 = pd.MultiIndex.from_tuples([('A', 2), ('A', 2), ('B', 3), ('B', 4)]) - idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), ('B', 3), ('B', 4)]) + + idx1 = pd.MultiIndex.from_tuples([('A', 2), ('A', 2), ('B', 3), ('B', 4 + )]) + idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2), ('B', 3), ('B', 4 + )]) with assertRaisesRegexp(AssertionError, expected): assert_index_equal(idx1, idx2) with assertRaisesRegexp(AssertionError, expected): @@ -383,6 +410,7 @@ def test_index_equal_metadata_message(self): Attribute "names" are different \\[left\\]: \\[None\\] \\[right\\]: \\[u?'x'\\]""" + idx1 = pd.Index([1, 2, 3]) idx2 = pd.Index([1, 2, 3], 
name='x') with assertRaisesRegexp(AssertionError, expected): @@ -390,16 +418,16 @@ def test_index_equal_metadata_message(self): # same name, should pass assert_index_equal(pd.Index([1, 2, 3], name=np.nan), - pd.Index([1, 2, 3], name=np.nan)) + pd.Index([1, 2, 3], name=np.nan)) assert_index_equal(pd.Index([1, 2, 3], name=pd.NaT), - pd.Index([1, 2, 3], name=pd.NaT)) - + pd.Index([1, 2, 3], name=pd.NaT)) expected = """Index are different Attribute "names" are different \\[left\\]: \\[nan\\] \\[right\\]: \\[NaT\\]""" + idx1 = pd.Index([1, 2, 3], name=np.nan) idx2 = pd.Index([1, 2, 3], name=pd.NaT) with assertRaisesRegexp(AssertionError, expected): @@ -410,59 +438,65 @@ class TestAssertSeriesEqual(tm.TestCase): _multiprocess_can_split_ = True def _assert_equal(self, x, y, **kwargs): - assert_series_equal(x,y,**kwargs) - assert_series_equal(y,x,**kwargs) + assert_series_equal(x, y, **kwargs) + assert_series_equal(y, x, **kwargs) def _assert_not_equal(self, a, b, **kwargs): self.assertRaises(AssertionError, assert_series_equal, a, b, **kwargs) self.assertRaises(AssertionError, assert_series_equal, b, a, **kwargs) def test_equal(self): - self._assert_equal(Series(range(3)),Series(range(3))) - self._assert_equal(Series(list('abc')),Series(list('abc'))) + self._assert_equal(Series(range(3)), Series(range(3))) + self._assert_equal(Series(list('abc')), Series(list('abc'))) def test_not_equal(self): - self._assert_not_equal(Series(range(3)),Series(range(3))+1) - self._assert_not_equal(Series(list('abc')),Series(list('xyz'))) - self._assert_not_equal(Series(range(3)),Series(range(4))) - self._assert_not_equal(Series(range(3)),Series(range(3),dtype='float64')) - self._assert_not_equal(Series(range(3)),Series(range(3),index=[1,2,4])) + self._assert_not_equal(Series(range(3)), Series(range(3)) + 1) + self._assert_not_equal(Series(list('abc')), Series(list('xyz'))) + self._assert_not_equal(Series(range(3)), Series(range(4))) + self._assert_not_equal( + Series(range(3)), Series( + 
range(3), dtype='float64')) + self._assert_not_equal( + Series(range(3)), Series( + range(3), index=[1, 2, 4])) # ATM meta data is not checked in assert_series_equal # self._assert_not_equal(Series(range(3)),Series(range(3),name='foo'),check_names=True) def test_less_precise(self): - s1 = Series([0.12345],dtype='float64') - s2 = Series([0.12346],dtype='float64') + s1 = Series([0.12345], dtype='float64') + s2 = Series([0.12346], dtype='float64') self.assertRaises(AssertionError, assert_series_equal, s1, s2) - self._assert_equal(s1,s2,check_less_precise=True) + self._assert_equal(s1, s2, check_less_precise=True) - s1 = Series([0.12345],dtype='float32') - s2 = Series([0.12346],dtype='float32') + s1 = Series([0.12345], dtype='float32') + s2 = Series([0.12346], dtype='float32') self.assertRaises(AssertionError, assert_series_equal, s1, s2) - self._assert_equal(s1,s2,check_less_precise=True) + self._assert_equal(s1, s2, check_less_precise=True) # even less than less precise - s1 = Series([0.1235],dtype='float32') - s2 = Series([0.1236],dtype='float32') + s1 = Series([0.1235], dtype='float32') + s2 = Series([0.1236], dtype='float32') self.assertRaises(AssertionError, assert_series_equal, s1, s2) self.assertRaises(AssertionError, assert_series_equal, s1, s2, True) def test_index_dtype(self): df1 = DataFrame.from_records( - {'a':[1,2],'c':['l1','l2']}, index=['a']) + {'a': [1, 2], 'c': ['l1', 'l2']}, index=['a']) df2 = DataFrame.from_records( - {'a':[1.0,2.0],'c':['l1','l2']}, index=['a']) + {'a': [1.0, 2.0], 'c': ['l1', 'l2']}, index=['a']) self._assert_not_equal(df1.c, df2.c, check_index_type=True) def test_multiindex_dtype(self): df1 = DataFrame.from_records( - {'a':[1,2],'b':[2.1,1.5],'c':['l1','l2']}, index=['a','b']) + {'a': [1, 2], 'b': [2.1, 1.5], + 'c': ['l1', 'l2']}, index=['a', 'b']) df2 = DataFrame.from_records( - {'a':[1.0,2.0],'b':[2.1,1.5],'c':['l1','l2']}, index=['a','b']) + {'a': [1.0, 2.0], 'b': [2.1, 1.5], + 'c': ['l1', 'l2']}, index=['a', 'b']) 
self._assert_not_equal(df1.c, df2.c, check_index_type=True) def test_series_equal_message(self): @@ -472,28 +506,29 @@ def test_series_equal_message(self): Series length are different \\[left\\]: 3, RangeIndex\\(start=0, stop=3, step=1\\) \\[right\\]: 4, RangeIndex\\(start=0, stop=4, step=1\\)""" + with assertRaisesRegexp(AssertionError, expected): assert_series_equal(pd.Series([1, 2, 3]), pd.Series([1, 2, 3, 4])) - expected = """Series are different Series values are different \\(33\\.33333 %\\) \\[left\\]: \\[1, 2, 3\\] \\[right\\]: \\[1, 2, 4\\]""" + with assertRaisesRegexp(AssertionError, expected): assert_series_equal(pd.Series([1, 2, 3]), pd.Series([1, 2, 4])) with assertRaisesRegexp(AssertionError, expected): assert_series_equal(pd.Series([1, 2, 3]), pd.Series([1, 2, 4]), - check_less_precise=True) + check_less_precise=True) class TestAssertFrameEqual(tm.TestCase): _multiprocess_can_split_ = True def _assert_equal(self, x, y, **kwargs): - assert_frame_equal(x,y,**kwargs) - assert_frame_equal(y,x,**kwargs) + assert_frame_equal(x, y, **kwargs) + assert_frame_equal(y, x, **kwargs) def _assert_not_equal(self, a, b, **kwargs): self.assertRaises(AssertionError, assert_frame_equal, a, b, **kwargs) @@ -501,22 +536,24 @@ def _assert_not_equal(self, a, b, **kwargs): def test_index_dtype(self): df1 = DataFrame.from_records( - {'a':[1,2],'c':['l1','l2']}, index=['a']) + {'a': [1, 2], 'c': ['l1', 'l2']}, index=['a']) df2 = DataFrame.from_records( - {'a':[1.0,2.0],'c':['l1','l2']}, index=['a']) + {'a': [1.0, 2.0], 'c': ['l1', 'l2']}, index=['a']) self._assert_not_equal(df1, df2, check_index_type=True) def test_multiindex_dtype(self): df1 = DataFrame.from_records( - {'a':[1,2],'b':[2.1,1.5],'c':['l1','l2']}, index=['a','b']) + {'a': [1, 2], 'b': [2.1, 1.5], + 'c': ['l1', 'l2']}, index=['a', 'b']) df2 = DataFrame.from_records( - {'a':[1.0,2.0],'b':[2.1,1.5],'c':['l1','l2']}, index=['a','b']) + {'a': [1.0, 2.0], 'b': [2.1, 1.5], + 'c': ['l1', 'l2']}, index=['a', 'b']) 
self._assert_not_equal(df1, df2, check_index_type=True) def test_empty_dtypes(self): - df1=pd.DataFrame(columns=["col1","col2"]) + df1 = pd.DataFrame(columns=["col1", "col2"]) df1["col1"] = df1["col1"].astype('int64') - df2=pd.DataFrame(columns=["col1","col2"]) + df2 = pd.DataFrame(columns=["col1", "col2"]) self._assert_equal(df1, df2, check_dtype=False) self._assert_not_equal(df1, df2, check_dtype=True) @@ -527,6 +564,7 @@ def test_frame_equal_message(self): DataFrame shape \\(number of rows\\) are different \\[left\\]: 3, RangeIndex\\(start=0, stop=3, step=1\\) \\[right\\]: 4, RangeIndex\\(start=0, stop=4, step=1\\)""" + with assertRaisesRegexp(AssertionError, expected): assert_frame_equal(pd.DataFrame({'A': [1, 2, 3]}), pd.DataFrame({'A': [1, 2, 3, 4]})) @@ -536,6 +574,7 @@ def test_frame_equal_message(self): DataFrame shape \\(number of columns\\) are different \\[left\\]: 2, Index\\(\\[u?'A', u?'B'\\], dtype='object'\\) \\[right\\]: 1, Index\\(\\[u?'A'\\], dtype='object'\\)""" + with assertRaisesRegexp(AssertionError, expected): assert_frame_equal(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}), pd.DataFrame({'A': [1, 2, 3]})) @@ -545,6 +584,7 @@ def test_frame_equal_message(self): DataFrame\\.index values are different \\(33\\.33333 %\\) \\[left\\]: Index\\(\\[u?'a', u?'b', u?'c'\\], dtype='object'\\) \\[right\\]: Index\\(\\[u?'a', u?'b', u?'d'\\], dtype='object'\\)""" + with assertRaisesRegexp(AssertionError, expected): assert_frame_equal(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=['a', 'b', 'c']), @@ -556,6 +596,7 @@ def test_frame_equal_message(self): DataFrame\\.columns values are different \\(50\\.0 %\\) \\[left\\]: Index\\(\\[u?'A', u?'B'\\], dtype='object'\\) \\[right\\]: Index\\(\\[u?'A', u?'b'\\], dtype='object'\\)""" + with assertRaisesRegexp(AssertionError, expected): assert_frame_equal(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=['a', 'b', 'c']), @@ -567,14 +608,15 @@ def test_frame_equal_message(self): DataFrame\\.iloc\\[:, 1\\] 
values are different \\(33\\.33333 %\\) \\[left\\]: \\[4, 5, 6\\] \\[right\\]: \\[4, 5, 7\\]""" + with assertRaisesRegexp(AssertionError, expected): - assert_frame_equal(pd.DataFrame({'A':[1, 2, 3], 'B':[4, 5, 6]}), - pd.DataFrame({'A':[1, 2, 3], 'B':[4, 5, 7]})) + assert_frame_equal(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}), + pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 7]})) with assertRaisesRegexp(AssertionError, expected): - assert_frame_equal(pd.DataFrame({'A':[1, 2, 3], 'B':[4, 5, 6]}), - pd.DataFrame({'A':[1, 2, 3], 'B':[4, 5, 7]}), - by_blocks=True) + assert_frame_equal(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}), + pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 7]}), + by_blocks=True) class TestRNGContext(unittest.TestCase): @@ -589,7 +631,6 @@ def test_RNGContext(self): self.assertEqual(np.random.randn(), expected0) - class TestDeprecatedTests(tm.TestCase): def test_warning(self): @@ -617,15 +658,17 @@ class TestLocale(tm.TestCase): def test_locale(self): if sys.platform == 'win32': - raise nose.SkipTest("skipping on win platforms as locale not available") + raise nose.SkipTest( + "skipping on win platforms as locale not available") - #GH9744 + # GH9744 locales = tm.get_locales() self.assertTrue(len(locales) >= 1) def test_skiptest_deco(): from nose import SkipTest + @skip_if_no_package_deco("fakepackagename") def f(): pass diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py index 72318f8073595..8422759192cc3 100644 --- a/pandas/tests/test_tseries.py +++ b/pandas/tests/test_tseries.py @@ -1,5 +1,4 @@ # -*- coding: utf-8 -*- -import nose from numpy import nan import numpy as np from pandas import Index, isnull, Timestamp @@ -11,7 +10,6 @@ import pandas.algos as algos from pandas.core import common as com import datetime -from pandas import DateOffset class TestTseriesUtil(tm.TestCase): @@ -72,7 +70,7 @@ def test_left_join_indexer_unique(): result = algos.left_join_indexer_unique_int64(b, a) expected = np.array([1, 1, 2, 3, 3], 
dtype=np.int64) - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) def test_left_outer_join_bug(): @@ -93,8 +91,8 @@ def test_left_outer_join_bug(): exp_ridx[left == 1] = 1 exp_ridx[left == 3] = 0 - assert(np.array_equal(lidx, exp_lidx)) - assert(np.array_equal(ridx, exp_ridx)) + assert (np.array_equal(lidx, exp_lidx)) + assert (np.array_equal(ridx, exp_ridx)) def test_inner_join_indexer(): @@ -218,22 +216,29 @@ def test_is_lexsorted(): np.array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, - 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, - 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2, 2, + 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, + 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, + 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), np.array([30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, - 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, + 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, + 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25, - 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, + 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, + 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 30, 29, 28, 27, 26, 25, 24, 23, 22, - 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, + 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, + 6, 5, 4, 3, 2, 1, 0])] - assert(not algos.is_lexsorted(failure)) + assert (not algos.is_lexsorted(failure)) # def test_get_group_index(): # a = np.array([0, 1, 2, 0, 2, 1, 0, 0], 
dtype=np.int64) @@ -253,20 +258,20 @@ def test_groupsort_indexer(): # need to use a stable sort expected = np.argsort(a, kind='mergesort') - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) # compare with lexsort key = a * 1000 + b result = algos.groupsort_indexer(key, 1000000)[0] expected = np.lexsort((b, a)) - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) def test_ensure_platform_int(): arr = np.arange(100) result = algos.ensure_platform_int(arr) - assert(result is arr) + assert (result is arr) def test_duplicated_with_nas(): @@ -274,19 +279,19 @@ def test_duplicated_with_nas(): result = lib.duplicated(keys) expected = [False, False, False, True, False, True] - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) result = lib.duplicated(keys, keep='first') expected = [False, False, False, True, False, True] - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) result = lib.duplicated(keys, keep='last') expected = [True, False, True, False, False, False] - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) result = lib.duplicated(keys, keep=False) expected = [True, False, True, True, False, True] - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) keys = np.empty(8, dtype=object) for i, t in enumerate(zip([0, 0, nan, nan] * 2, [0, nan, 0, nan] * 2)): @@ -296,40 +301,40 @@ def test_duplicated_with_nas(): falses = [False] * 4 trues = [True] * 4 expected = falses + trues - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) result = lib.duplicated(keys, keep='last') expected = trues + falses - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) result = lib.duplicated(keys, keep=False) expected = trues + trues - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) 
def test_maybe_booleans_to_slice(): arr = np.array([0, 0, 1, 1, 1, 0, 1], dtype=np.uint8) result = lib.maybe_booleans_to_slice(arr) - assert(result.dtype == np.bool_) + assert (result.dtype == np.bool_) result = lib.maybe_booleans_to_slice(arr[:0]) - assert(result == slice(0, 0)) + assert (result == slice(0, 0)) def test_convert_objects(): arr = np.array(['a', 'b', nan, nan, 'd', 'e', 'f'], dtype='O') result = lib.maybe_convert_objects(arr) - assert(result.dtype == np.object_) + assert (result.dtype == np.object_) def test_convert_infs(): arr = np.array(['inf', 'inf', 'inf'], dtype='O') result = lib.maybe_convert_numeric(arr, set(), False) - assert(result.dtype == np.float64) + assert (result.dtype == np.float64) arr = np.array(['-inf', '-inf', '-inf'], dtype='O') result = lib.maybe_convert_numeric(arr, set(), False) - assert(result.dtype == np.float64) + assert (result.dtype == np.float64) def test_convert_objects_ints(): @@ -338,17 +343,17 @@ def test_convert_objects_ints(): for dtype_str in dtypes: arr = np.array(list(np.arange(20, dtype=dtype_str)), dtype='O') - assert(arr[0].dtype == np.dtype(dtype_str)) + assert (arr[0].dtype == np.dtype(dtype_str)) result = lib.maybe_convert_objects(arr) - assert(issubclass(result.dtype.type, np.integer)) + assert (issubclass(result.dtype.type, np.integer)) def test_convert_objects_complex_number(): for dtype in np.sctypes['complex']: arr = np.array(list(1j * np.arange(20, dtype=dtype)), dtype='O') - assert(arr[0].dtype == np.dtype(dtype)) + assert (arr[0].dtype == np.dtype(dtype)) result = lib.maybe_convert_objects(arr) - assert(issubclass(result.dtype.type, np.complexfloating)) + assert (issubclass(result.dtype.type, np.complexfloating)) def test_rank(): @@ -372,7 +377,7 @@ def test_get_reverse_indexer(): indexer = np.array([-1, -1, 1, 2, 0, -1, 3, 4], dtype=np.int64) result = lib.get_reverse_indexer(indexer, 5) expected = np.array([4, 2, 3, 6, 7], dtype=np.int64) - assert(np.array_equal(result, expected)) + assert 
(np.array_equal(result, expected)) def test_pad_backfill_object_segfault(): @@ -382,25 +387,25 @@ def test_pad_backfill_object_segfault(): result = algos.pad_object(old, new) expected = np.array([-1], dtype=np.int64) - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) result = algos.pad_object(new, old) expected = np.array([], dtype=np.int64) - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) result = algos.backfill_object(old, new) expected = np.array([-1], dtype=np.int64) - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) result = algos.backfill_object(new, old) expected = np.array([], dtype=np.int64) - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) def test_arrmap(): values = np.array(['foo', 'foo', 'bar', 'bar', 'baz', 'qux'], dtype='O') result = algos.arrmap_object(values, lambda x: x in ['foo', 'bar']) - assert(result.dtype == np.bool_) + assert (result.dtype == np.bool_) def test_series_grouper(): @@ -452,40 +457,40 @@ def test_generate_bins(self): for func in [lib.generate_bins_dt64, generate_bins_generic]: bins = func(values, binner, closed='left') - assert((bins == np.array([2, 5, 6])).all()) + assert ((bins == np.array([2, 5, 6])).all()) bins = func(values, binner, closed='right') - assert((bins == np.array([3, 6, 6])).all()) + assert ((bins == np.array([3, 6, 6])).all()) for func in [lib.generate_bins_dt64, generate_bins_generic]: values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64) binner = np.array([0, 3, 6], dtype=np.int64) bins = func(values, binner, closed='right') - assert((bins == np.array([3, 6])).all()) + assert ((bins == np.array([3, 6])).all()) self.assertRaises(ValueError, generate_bins_generic, values, [], 'right') self.assertRaises(ValueError, generate_bins_generic, values[:0], binner, 'right') - self.assertRaises(ValueError, generate_bins_generic, - values, [4], 'right') - 
self.assertRaises(ValueError, generate_bins_generic, - values, [-3, -1], 'right') + self.assertRaises(ValueError, generate_bins_generic, values, [4], + 'right') + self.assertRaises(ValueError, generate_bins_generic, values, [-3, -1], + 'right') def test_group_ohlc(): - def _check(dtype): - obj = np.array(np.random.randn(20),dtype=dtype) + obj = np.array(np.random.randn(20), dtype=dtype) bins = np.array([6, 12, 20]) out = np.zeros((3, 4), dtype) counts = np.zeros(len(out), dtype=np.int64) - labels = com._ensure_int64(np.repeat(np.arange(3), np.diff(np.r_[0, bins]))) + labels = com._ensure_int64(np.repeat( + np.arange(3), np.diff(np.r_[0, bins]))) - func = getattr(algos,'group_ohlc_%s' % dtype) + func = getattr(algos, 'group_ohlc_%s' % dtype) func(out, counts, obj[:, None], labels) def _ohlc(group): @@ -493,8 +498,8 @@ def _ohlc(group): return np.repeat(nan, 4) return [group[0], group.max(), group.min(), group[-1]] - expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]), - _ohlc(obj[12:])]) + expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]), _ohlc(obj[12:]) + ]) assert_almost_equal(out, expected) assert_almost_equal(counts, [6, 6, 8]) @@ -507,6 +512,7 @@ def _ohlc(group): _check('float32') _check('float64') + def test_try_parse_dates(): from dateutil.parser import parse @@ -514,7 +520,7 @@ def test_try_parse_dates(): result = lib.try_parse_dates(arr, dayfirst=True) expected = [parse(d, dayfirst=True) for d in arr] - assert(np.array_equal(result, expected)) + assert (np.array_equal(result, expected)) class TestTypeInference(tm.TestCase): @@ -532,8 +538,7 @@ def test_integers(self): result = lib.infer_dtype(arr) self.assertEqual(result, 'integer') - arr = np.array([1, 2, 3, np.int64(4), np.int32(5), 'foo'], - dtype='O') + arr = np.array([1, 2, 3, np.int64(4), np.int32(5), 'foo'], dtype='O') result = lib.infer_dtype(arr) self.assertEqual(result, 'mixed-integer') @@ -605,7 +610,7 @@ def test_to_object_array_tuples(self): record = namedtuple('record', 'x y') r = 
record(5, 6) values = [r] - result = lib.to_object_array_tuples(values) + result = lib.to_object_array_tuples(values) # noqa except ImportError: pass @@ -613,7 +618,7 @@ def test_object(self): # GH 7431 # cannot infer more than this as only a single element - arr = np.array([None],dtype='O') + arr = np.array([None], dtype='O') result = lib.infer_dtype(arr) self.assertEqual(result, 'mixed') @@ -628,19 +633,19 @@ def test_categorical(self): result = lib.infer_dtype(Series(arr)) self.assertEqual(result, 'categorical') - arr = Categorical(list('abc'),categories=['cegfab'],ordered=True) + arr = Categorical(list('abc'), categories=['cegfab'], ordered=True) result = lib.infer_dtype(arr) self.assertEqual(result, 'categorical') result = lib.infer_dtype(Series(arr)) self.assertEqual(result, 'categorical') + class TestMoments(tm.TestCase): pass class TestReducer(tm.TestCase): - def test_int_index(self): from pandas.core.series import Series @@ -654,19 +659,19 @@ def test_int_index(self): assert_almost_equal(result, expected) dummy = Series(0., index=np.arange(100)) - result = lib.reduce( - arr, np.sum, dummy=dummy, labels=Index(np.arange(4))) + result = lib.reduce(arr, np.sum, dummy=dummy, + labels=Index(np.arange(4))) expected = arr.sum(0) assert_almost_equal(result, expected) dummy = Series(0., index=np.arange(4)) - result = lib.reduce(arr, np.sum, axis=1, - dummy=dummy, labels=Index(np.arange(100))) + result = lib.reduce(arr, np.sum, axis=1, dummy=dummy, + labels=Index(np.arange(100))) expected = arr.sum(1) assert_almost_equal(result, expected) - result = lib.reduce(arr, np.sum, axis=1, - dummy=dummy, labels=Index(np.arange(100))) + result = lib.reduce(arr, np.sum, axis=1, dummy=dummy, + labels=Index(np.arange(100))) assert_almost_equal(result, expected) @@ -682,18 +687,18 @@ def test_max_valid(self): def test_to_datetime_bijective(self): # Ensure that converting to datetime and back only loses precision # by going from nanoseconds to microseconds. 
- self.assertEqual(Timestamp(Timestamp.max.to_pydatetime()).value/1000, Timestamp.max.value/1000) - self.assertEqual(Timestamp(Timestamp.min.to_pydatetime()).value/1000, Timestamp.min.value/1000) + self.assertEqual( + Timestamp(Timestamp.max.to_pydatetime()).value / 1000, + Timestamp.max.value / 1000) + self.assertEqual( + Timestamp(Timestamp.min.to_pydatetime()).value / 1000, + Timestamp.min.value / 1000) -class TestPeriodField(tm.TestCase): +class TestPeriodField(tm.TestCase): def test_get_period_field_raises_on_out_of_range(self): self.assertRaises(ValueError, period.get_period_field, -1, 0, 0) def test_get_period_field_array_raises_on_out_of_range(self): - self.assertRaises(ValueError, period.get_period_field_arr, -1, np.empty(1), 0) - -if __name__ == '__main__': - import nose - nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], - exit=False) + self.assertRaises(ValueError, period.get_period_field_arr, -1, + np.empty(1), 0) diff --git a/pandas/tests/test_util.py b/pandas/tests/test_util.py index 427c96a839c26..e27e45a96432f 100644 --- a/pandas/tests/test_util.py +++ b/pandas/tests/test_util.py @@ -5,8 +5,8 @@ import pandas.util.testing as tm - class TestDecorators(tm.TestCase): + def setUp(self): @deprecate_kwarg('old', 'new') def _f1(new=False): @@ -16,7 +16,7 @@ def _f1(new=False): def _f2(new=False): return new - @deprecate_kwarg('old', 'new', lambda x: x+1) + @deprecate_kwarg('old', 'new', lambda x: x + 1) def _f3(new=0): return new @@ -48,7 +48,7 @@ def test_callable_deprecate_kwarg(self): x = 5 with tm.assert_produces_warning(FutureWarning): result = self.f3(old=x) - self.assertEqual(result, x+1) + self.assertEqual(result, x + 1) with tm.assertRaises(TypeError): self.f3(old='hello') diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py index 4d7f9292705ad..d3e8320fd282d 100644 --- a/pandas/tests/test_window.py +++ b/pandas/tests/test_window.py @@ -1,6 +1,6 @@ +from itertools import product import nose import sys 
-import functools import warnings from datetime import datetime @@ -10,18 +10,20 @@ from distutils.version import LooseVersion import pandas as pd -from pandas import Series, DataFrame, Panel, bdate_range, isnull, notnull, concat -from pandas.util.testing import ( - assert_almost_equal, assert_series_equal, assert_frame_equal, assert_panel_equal, assert_index_equal -) +from pandas import (Series, DataFrame, Panel, bdate_range, isnull, + notnull, concat) +from pandas.util.testing import (assert_almost_equal, assert_series_equal, + assert_frame_equal, assert_panel_equal, + assert_index_equal) import pandas.core.datetools as datetools import pandas.stats.moments as mom import pandas.core.window as rwindow import pandas.util.testing as tm -from pandas.compat import range, zip, PY3, StringIO +from pandas.compat import range, zip, PY3 N, K = 100, 10 + class Base(tm.TestCase): _multiprocess_can_split_ = True @@ -39,6 +41,7 @@ def _create_data(self): self.frame = DataFrame(randn(N, K), index=self.rng, columns=np.arange(K)) + class TestApi(Base): def setUp(self): @@ -47,17 +50,19 @@ def setUp(self): def test_getitem(self): r = self.frame.rolling(window=5) - tm.assert_index_equal(r._selected_obj.columns,self.frame.columns) + tm.assert_index_equal(r._selected_obj.columns, self.frame.columns) r = self.frame.rolling(window=5)[1] - self.assertEqual(r._selected_obj.name,self.frame.columns[1]) + self.assertEqual(r._selected_obj.name, self.frame.columns[1]) # technically this is allowed - r = self.frame.rolling(window=5)[1,3] - tm.assert_index_equal(r._selected_obj.columns,self.frame.columns[[1,3]]) + r = self.frame.rolling(window=5)[1, 3] + tm.assert_index_equal(r._selected_obj.columns, + self.frame.columns[[1, 3]]) - r = self.frame.rolling(window=5)[[1,3]] - tm.assert_index_equal(r._selected_obj.columns,self.frame.columns[[1,3]]) + r = self.frame.rolling(window=5)[[1, 3]] + tm.assert_index_equal(r._selected_obj.columns, + self.frame.columns[[1, 3]]) def 
test_select_bad_cols(self): df = DataFrame([[1, 2]], columns=['A', 'B']) @@ -74,37 +79,39 @@ def test_attribute_access(self): df = DataFrame([[1, 2]], columns=['A', 'B']) r = df.rolling(window=5) - tm.assert_series_equal(r.A.sum(),r['A'].sum()) - self.assertRaises(AttributeError, lambda : r.F) + tm.assert_series_equal(r.A.sum(), r['A'].sum()) + self.assertRaises(AttributeError, lambda: r.F) def tests_skip_nuisance(self): - df = DataFrame({'A' : range(5), 'B' : range(5,10), 'C' : 'foo'}) + df = DataFrame({'A': range(5), 'B': range(5, 10), 'C': 'foo'}) r = df.rolling(window=3) - result = r[['A','B']].sum() - expected = DataFrame({'A' : [np.nan,np.nan,3,6,9], - 'B' : [np.nan,np.nan,18,21,24]}, + result = r[['A', 'B']].sum() + expected = DataFrame({'A': [np.nan, np.nan, 3, 6, 9], + 'B': [np.nan, np.nan, 18, 21, 24]}, columns=list('AB')) assert_frame_equal(result, expected) - expected = pd.concat([r[['A','B']].sum(),df[['C']]],axis=1) + expected = pd.concat([r[['A', 'B']].sum(), df[['C']]], axis=1) result = r.sum() assert_frame_equal(result, expected) def test_timedeltas(self): - df = DataFrame({'A' : range(5), 'B' : pd.timedelta_range('1 day',periods=5)}) + df = DataFrame({'A': range(5), + 'B': pd.timedelta_range('1 day', periods=5)}) r = df.rolling(window=3) result = r.sum() - expected = DataFrame({'A' : [np.nan,np.nan,3,6,9], - 'B' : pd.to_timedelta([pd.NaT,pd.NaT,'6 days','9 days','12 days'])}, + expected = DataFrame({'A': [np.nan, np.nan, 3, 6, 9], + 'B': pd.to_timedelta([pd.NaT, pd.NaT, + '6 days', '9 days', + '12 days'])}, columns=list('AB')) assert_frame_equal(result, expected) def test_agg(self): - df = DataFrame({'A' : range(5), - 'B' : range(0,10,2)}) + df = DataFrame({'A': range(5), 'B': range(0, 10, 2)}) r = df.rolling(window=3) a_mean = r['A'].mean() @@ -119,101 +126,105 @@ def compare(result, expected): assert_frame_equal(result.reindex_like(expected), expected) result = r.aggregate([np.mean, np.std]) - expected = 
pd.concat([a_mean,a_std,b_mean,b_std],axis=1) - expected.columns = pd.MultiIndex.from_product([['A','B'],['mean','std']]) + expected = pd.concat([a_mean, a_std, b_mean, b_std], axis=1) + expected.columns = pd.MultiIndex.from_product([['A', 'B'], ['mean', + 'std']]) assert_frame_equal(result, expected) - result = r.aggregate({'A': np.mean, - 'B': np.std}) - expected = pd.concat([a_mean,b_std],axis=1) + result = r.aggregate({'A': np.mean, 'B': np.std}) + expected = pd.concat([a_mean, b_std], axis=1) compare(result, expected) - result = r.aggregate({'A': ['mean','std']}) - expected = pd.concat([a_mean,a_std],axis=1) - expected.columns = pd.MultiIndex.from_tuples([('A','mean'),('A','std')]) + result = r.aggregate({'A': ['mean', 'std']}) + expected = pd.concat([a_mean, a_std], axis=1) + expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), ('A', + 'std')]) assert_frame_equal(result, expected) - result = r['A'].aggregate(['mean','sum']) - expected = pd.concat([a_mean,a_sum],axis=1) - expected.columns = ['mean','sum'] + result = r['A'].aggregate(['mean', 'sum']) + expected = pd.concat([a_mean, a_sum], axis=1) + expected.columns = ['mean', 'sum'] assert_frame_equal(result, expected) - result = r.aggregate({'A': { 'mean' : 'mean', 'sum' : 'sum' } }) - expected = pd.concat([a_mean,a_sum],axis=1) - expected.columns = pd.MultiIndex.from_tuples([('A','mean'),('A','sum')]) + result = r.aggregate({'A': {'mean': 'mean', 'sum': 'sum'}}) + expected = pd.concat([a_mean, a_sum], axis=1) + expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), ('A', + 'sum')]) compare(result, expected) - result = r.aggregate({'A': { 'mean' : 'mean', 'sum' : 'sum' }, - 'B': { 'mean2' : 'mean', 'sum2' : 'sum' }}) - expected = pd.concat([a_mean,a_sum,b_mean,b_sum],axis=1) - expected.columns = pd.MultiIndex.from_tuples([('A','mean'),('A','sum'), - ('B','mean2'),('B','sum2')]) + result = r.aggregate({'A': {'mean': 'mean', + 'sum': 'sum'}, + 'B': {'mean2': 'mean', + 'sum2': 'sum'}}) + expected = 
pd.concat([a_mean, a_sum, b_mean, b_sum], axis=1) + expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), ( + 'A', 'sum'), ('B', 'mean2'), ('B', 'sum2')]) compare(result, expected) - result = r.aggregate({'A': ['mean','std'], - 'B': ['mean','std']}) - expected = pd.concat([a_mean,a_std,b_mean,b_std],axis=1) - expected.columns = pd.MultiIndex.from_tuples([('A','mean'),('A','std'), - ('B','mean'),('B','std')]) + result = r.aggregate({'A': ['mean', 'std'], 'B': ['mean', 'std']}) + expected = pd.concat([a_mean, a_std, b_mean, b_std], axis=1) + expected.columns = pd.MultiIndex.from_tuples([('A', 'mean'), ( + 'A', 'std'), ('B', 'mean'), ('B', 'std')]) compare(result, expected) - result = r.aggregate({'r1' : { 'A' : ['mean','sum'] }, - 'r2' : { 'B' : ['mean','sum'] }}) - expected = pd.concat([a_mean,a_sum,b_mean,b_sum],axis=1) - expected.columns = pd.MultiIndex.from_tuples([('r1','A','mean'),('r1','A','sum'), - ('r2','B','mean'),('r2','B','sum')]) + result = r.aggregate({'r1': {'A': ['mean', 'sum']}, + 'r2': {'B': ['mean', 'sum']}}) + expected = pd.concat([a_mean, a_sum, b_mean, b_sum], axis=1) + expected.columns = pd.MultiIndex.from_tuples([('r1', 'A', 'mean'), ( + 'r1', 'A', 'sum'), ('r2', 'B', 'mean'), ('r2', 'B', 'sum')]) compare(result, expected) - result = r.agg({'A' : {'ra' : ['mean','std']}, - 'B' : {'rb' : ['mean','std']}}) - expected = pd.concat([a_mean,a_std,b_mean,b_std],axis=1) - expected.columns = pd.MultiIndex.from_tuples([('A','ra','mean'),('A','ra','std'), - ('B','rb','mean'),('B','rb','std')]) + result = r.agg({'A': {'ra': ['mean', 'std']}, + 'B': {'rb': ['mean', 'std']}}) + expected = pd.concat([a_mean, a_std, b_mean, b_std], axis=1) + expected.columns = pd.MultiIndex.from_tuples([('A', 'ra', 'mean'), ( + 'A', 'ra', 'std'), ('B', 'rb', 'mean'), ('B', 'rb', 'std')]) compare(result, expected) - # passed lambda - result = r.agg({'A' : np.sum, - 'B' : lambda x: np.std(x, ddof=1)}) - rcustom = r['B'].apply(lambda x: np.std(x,ddof=1)) - expected = 
pd.concat([a_sum,rcustom],axis=1) + result = r.agg({'A': np.sum, 'B': lambda x: np.std(x, ddof=1)}) + rcustom = r['B'].apply(lambda x: np.std(x, ddof=1)) + expected = pd.concat([a_sum, rcustom], axis=1) compare(result, expected) def test_agg_consistency(self): - df = DataFrame({'A' : range(5), - 'B' : range(0,10,2)}) + df = DataFrame({'A': range(5), 'B': range(0, 10, 2)}) r = df.rolling(window=3) result = r.agg([np.sum, np.mean]).columns - expected = pd.MultiIndex.from_product([list('AB'),['sum','mean']]) + expected = pd.MultiIndex.from_product([list('AB'), ['sum', 'mean']]) tm.assert_index_equal(result, expected) result = r['A'].agg([np.sum, np.mean]).columns - expected = pd.Index(['sum','mean']) + expected = pd.Index(['sum', 'mean']) tm.assert_index_equal(result, expected) - result = r.agg({'A' : [np.sum, np.mean]}).columns - expected = pd.MultiIndex.from_tuples([('A','sum'),('A','mean')]) + result = r.agg({'A': [np.sum, np.mean]}).columns + expected = pd.MultiIndex.from_tuples([('A', 'sum'), ('A', 'mean')]) tm.assert_index_equal(result, expected) def test_window_with_args(self): tm._skip_if_no_scipy() # make sure that we are aggregating window functions correctly with arg - r = Series(np.random.randn(100)).rolling(window=10,min_periods=1,win_type='gaussian') - expected = pd.concat([r.mean(std=10),r.mean(std=.01)],axis=1) - expected.columns = ['<lambda>','<lambda>'] - result = r.aggregate([lambda x: x.mean(std=10), lambda x: x.mean(std=.01)]) + r = Series(np.random.randn(100)).rolling(window=10, min_periods=1, + win_type='gaussian') + expected = pd.concat([r.mean(std=10), r.mean(std=.01)], axis=1) + expected.columns = ['<lambda>', '<lambda>'] + result = r.aggregate([lambda x: x.mean(std=10), + lambda x: x.mean(std=.01)]) assert_frame_equal(result, expected) def a(x): return x.mean(std=10) + def b(x): return x.mean(std=0.01) - expected = pd.concat([r.mean(std=10),r.mean(std=.01)],axis=1) - expected.columns = ['a','b'] - result = r.aggregate([a,b]) + + expected = 
pd.concat([r.mean(std=10), r.mean(std=.01)], axis=1) + expected.columns = ['a', 'b'] + result = r.aggregate([a, b]) assert_frame_equal(result, expected) def test_preserve_metadata(self): @@ -229,39 +240,44 @@ def test_how_compat(self): # in prior versions, we would allow how to be used in the resample # now that its deprecated, we need to handle this in the actual # aggregation functions - s = pd.Series(np.random.randn(20), index=pd.date_range('1/1/2000', periods=20, freq='12H')) + s = pd.Series( + np.random.randn(20), + index=pd.date_range('1/1/2000', periods=20, freq='12H')) - for how in ['min','max','median']: - for op in ['mean','sum','std','var','kurt','skew']: - for t in ['rolling','expanding']: + for how in ['min', 'max', 'median']: + for op in ['mean', 'sum', 'std', 'var', 'kurt', 'skew']: + for t in ['rolling', 'expanding']: - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): - dfunc = getattr(pd,"{0}_{1}".format(t,op)) + dfunc = getattr(pd, "{0}_{1}".format(t, op)) if dfunc is None: continue if t == 'rolling': - kwargs = {'window' : 5} + kwargs = {'window': 5} else: kwargs = {} result = dfunc(s, freq='D', how=how, **kwargs) - expected = getattr(getattr(s,t)(freq='D', **kwargs),op)(how=how) + expected = getattr( + getattr(s, t)(freq='D', **kwargs), op)(how=how) assert_series_equal(result, expected) + class TestDeprecations(Base): """ test that we are catching deprecation warnings """ def setUp(self): self._create_data() - def test_deprecations(self): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - mom.rolling_mean(np.ones(10),3,center=True ,axis=0) - mom.rolling_mean(Series(np.ones(10)),3,center=True ,axis=0) + mom.rolling_mean(np.ones(10), 3, center=True, axis=0) + mom.rolling_mean(Series(np.ones(10)), 3, center=True, axis=0) + class TestMoments(Base): @@ -271,27 +287,30 @@ def setUp(self): def test_centered_axis_validation(self): # 
ok - Series(np.ones(10)).rolling(window=3,center=True ,axis=0).mean() + Series(np.ones(10)).rolling(window=3, center=True, axis=0).mean() # bad axis - self.assertRaises(ValueError, lambda : Series(np.ones(10)).rolling(window=3,center=True ,axis=1).mean()) + with self.assertRaises(ValueError): + Series(np.ones(10)).rolling(window=3, center=True, axis=1).mean() # ok ok - DataFrame(np.ones((10,10))).rolling(window=3,center=True ,axis=0).mean() - DataFrame(np.ones((10,10))).rolling(window=3,center=True ,axis=1).mean() + DataFrame(np.ones((10, 10))).rolling(window=3, center=True, + axis=0).mean() + DataFrame(np.ones((10, 10))).rolling(window=3, center=True, + axis=1).mean() # bad axis - self.assertRaises(ValueError, lambda : DataFrame(np.ones((10,10))).rolling(window=3,center=True ,axis=2).mean()) + with self.assertRaises(ValueError): + (DataFrame(np.ones((10, 10))) + .rolling(window=3, center=True, axis=2).mean()) def test_rolling_sum(self): self._check_moment_func(mom.rolling_sum, np.sum, name='sum') def test_rolling_count(self): counter = lambda x: np.isfinite(x).astype(float).sum() - self._check_moment_func(mom.rolling_count, counter, - name='count', - has_min_periods=False, - preserve_nan=False, + self._check_moment_func(mom.rolling_count, counter, name='count', + has_min_periods=False, preserve_nan=False, fill_value=0) def test_rolling_mean(self): @@ -301,10 +320,10 @@ def test_cmov_mean(self): # GH 8238 tm._skip_if_no_scipy() - vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, - 16.68, 9.48, 10.63, 14.48]) - xp = np.array([np.nan, np.nan, 9.962, 11.27 , 11.564, 12.516, - 12.818, 12.952, np.nan, np.nan]) + vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, 9.48, + 10.63, 14.48]) + xp = np.array([np.nan, np.nan, 9.962, 11.27, 11.564, 12.516, 12.818, + 12.952, np.nan, np.nan]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): rs = mom.rolling_mean(vals, 5, center=True) @@ -318,10 +337,10 @@ def test_cmov_window(self): # GH 
8238 tm._skip_if_no_scipy() - vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, - 13.49, 16.68, 9.48, 10.63, 14.48]) - xp = np.array([np.nan, np.nan, 9.962, 11.27 , 11.564, 12.516, - 12.818, 12.952, np.nan, np.nan]) + vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, 9.48, + 10.63, 14.48]) + xp = np.array([np.nan, np.nan, 9.962, 11.27, 11.564, 12.516, 12.818, + 12.952, np.nan, np.nan]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): rs = mom.rolling_window(vals, 5, 'boxcar', center=True) @@ -359,46 +378,30 @@ def test_cmov_window_frame(self): # Gh 8238 tm._skip_if_no_scipy() - vals = np.array([[ 12.18, 3.64], - [ 10.18, 9.16], - [ 13.24, 14.61], - [ 4.51, 8.11], - [ 6.15, 11.44], - [ 9.14, 6.21], - [ 11.31, 10.67], - [ 2.94, 6.51], - [ 9.42, 8.39], - [ 12.44, 7.34 ]]) - - xp = np.array([[ np.nan, np.nan], - [ np.nan, np.nan], - [ 9.252, 9.392], - [ 8.644, 9.906], - [ 8.87 , 10.208], - [ 6.81 , 8.588], - [ 7.792, 8.644], - [ 9.05 , 7.824], - [ np.nan, np.nan], - [ np.nan, np.nan]]) + vals = np.array([[12.18, 3.64], [10.18, 9.16], [13.24, 14.61], + [4.51, 8.11], [6.15, 11.44], [9.14, 6.21], + [11.31, 10.67], [2.94, 6.51], [9.42, 8.39], [12.44, + 7.34]]) + + xp = np.array([[np.nan, np.nan], [np.nan, np.nan], [9.252, 9.392], + [8.644, 9.906], [8.87, 10.208], [6.81, 8.588], + [7.792, 8.644], [9.05, 7.824], [np.nan, np.nan + ], [np.nan, np.nan]]) # DataFrame rs = DataFrame(vals).rolling(5, win_type='boxcar', center=True).mean() assert_frame_equal(DataFrame(xp), rs) # invalid method - self.assertRaises(AttributeError, lambda : DataFrame(vals).rolling(5, win_type='boxcar', center=True).std()) + with self.assertRaises(AttributeError): + (DataFrame(vals).rolling(5, win_type='boxcar', center=True) + .std()) # sum - xp = np.array([[ np.nan, np.nan], - [ np.nan, np.nan], - [ 46.26, 46.96], - [ 43.22, 49.53], - [ 44.35, 51.04], - [ 34.05, 42.94], - [ 38.96, 43.22], - [ 45.25, 39.12], - [ np.nan, np.nan], - [ np.nan, np.nan]]) + xp = 
np.array([[np.nan, np.nan], [np.nan, np.nan], [46.26, 46.96], + [43.22, 49.53], [44.35, 51.04], [34.05, 42.94], + [38.96, 43.22], [45.25, 39.12], [np.nan, np.nan + ], [np.nan, np.nan]]) rs = DataFrame(vals).rolling(5, win_type='boxcar', center=True).sum() assert_frame_equal(DataFrame(xp), rs) @@ -412,7 +415,8 @@ def test_cmov_window_na_min_periods(self): vals[8] = np.nan xp = vals.rolling(5, min_periods=4, center=True).mean() - rs = vals.rolling(5, win_type='boxcar', min_periods=4, center=True).mean() + rs = vals.rolling(5, win_type='boxcar', min_periods=4, + center=True).mean() assert_series_equal(xp, rs) def test_cmov_window_regular(self): @@ -422,25 +426,26 @@ def test_cmov_window_regular(self): win_types = ['triang', 'blackman', 'hamming', 'bartlett', 'bohman', 'blackmanharris', 'nuttall', 'barthann'] - vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, - 13.49, 16.68, 9.48, 10.63, 14.48]) + vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, 9.48, + 10.63, 14.48]) xps = { - 'hamming': [np.nan, np.nan, 8.71384, 9.56348, 12.38009, - 14.03687, 13.8567, 11.81473, np.nan, np.nan], - 'triang': [np.nan, np.nan, 9.28667, 10.34667, 12.00556, - 13.33889, 13.38, 12.33667, np.nan, np.nan], - 'barthann': [np.nan, np.nan, 8.4425, 9.1925, 12.5575, - 14.3675, 14.0825, 11.5675, np.nan, np.nan], - 'bohman': [np.nan, np.nan, 7.61599, 9.1764, 12.83559, - 14.17267, 14.65923, 11.10401, np.nan, np.nan], + 'hamming': [np.nan, np.nan, 8.71384, 9.56348, 12.38009, 14.03687, + 13.8567, 11.81473, np.nan, np.nan], + 'triang': [np.nan, np.nan, 9.28667, 10.34667, 12.00556, 13.33889, + 13.38, 12.33667, np.nan, np.nan], + 'barthann': [np.nan, np.nan, 8.4425, 9.1925, 12.5575, 14.3675, + 14.0825, 11.5675, np.nan, np.nan], + 'bohman': [np.nan, np.nan, 7.61599, 9.1764, 12.83559, 14.17267, + 14.65923, 11.10401, np.nan, np.nan], 'blackmanharris': [np.nan, np.nan, 6.97691, 9.16438, 13.05052, 14.02156, 15.10512, 10.74574, np.nan, np.nan], - 'nuttall': [np.nan, np.nan, 7.04618, 9.16786, 
13.02671, - 14.03559, 15.05657, 10.78514, np.nan, np.nan], - 'blackman': [np.nan, np.nan, 7.73345, 9.17869, 12.79607, - 14.20036, 14.57726, 11.16988, np.nan, np.nan], - 'bartlett': [np.nan, np.nan, 8.4425, 9.1925, 12.5575, - 14.3675, 14.0825, 11.5675, np.nan, np.nan]} + 'nuttall': [np.nan, np.nan, 7.04618, 9.16786, 13.02671, 14.03559, + 15.05657, 10.78514, np.nan, np.nan], + 'blackman': [np.nan, np.nan, 7.73345, 9.17869, 12.79607, 14.20036, + 14.57726, 11.16988, np.nan, np.nan], + 'bartlett': [np.nan, np.nan, 8.4425, 9.1925, 12.5575, 14.3675, + 14.0825, 11.5675, np.nan, np.nan] + } for wt in win_types: xp = Series(xps[wt]) @@ -471,27 +476,26 @@ def test_cmov_window_regular_missing_data(self): win_types = ['triang', 'blackman', 'hamming', 'bartlett', 'bohman', 'blackmanharris', 'nuttall', 'barthann'] - vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, - 13.49, 16.68, np.nan, 10.63, 14.48]) + vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, np.nan, + 10.63, 14.48]) xps = { - 'bartlett': [np.nan, np.nan, 9.70333, 10.5225, 8.4425, - 9.1925, 12.5575, 14.3675, 15.61667, 13.655], - 'blackman': [np.nan, np.nan, 9.04582, 11.41536, 7.73345, - 9.17869, 12.79607, 14.20036, 15.8706, 13.655], - 'barthann': [np.nan, np.nan, 9.70333, 10.5225, 8.4425, - 9.1925, 12.5575, 14.3675, 15.61667, 13.655], - 'bohman': [np.nan, np.nan, 8.9444, 11.56327, 7.61599, - 9.1764, 12.83559, 14.17267, 15.90976, 13.655], - 'hamming': [np.nan, np.nan, 9.59321, 10.29694, 8.71384, - 9.56348, 12.38009, 14.20565, 15.24694, 13.69758], - 'nuttall': [np.nan, np.nan, 8.47693, 12.2821, 7.04618, - 9.16786, 13.02671, 14.03673, 16.08759, 13.65553], - 'triang': [np.nan, np.nan, 9.33167, 9.76125, 9.28667, - 10.34667, 12.00556, 13.82125, 14.49429, 13.765], + 'bartlett': [np.nan, np.nan, 9.70333, 10.5225, 8.4425, 9.1925, + 12.5575, 14.3675, 15.61667, 13.655], + 'blackman': [np.nan, np.nan, 9.04582, 11.41536, 7.73345, 9.17869, + 12.79607, 14.20036, 15.8706, 13.655], + 'barthann': [np.nan, np.nan, 9.70333, 
10.5225, 8.4425, 9.1925, + 12.5575, 14.3675, 15.61667, 13.655], + 'bohman': [np.nan, np.nan, 8.9444, 11.56327, 7.61599, 9.1764, + 12.83559, 14.17267, 15.90976, 13.655], + 'hamming': [np.nan, np.nan, 9.59321, 10.29694, 8.71384, 9.56348, + 12.38009, 14.20565, 15.24694, 13.69758], + 'nuttall': [np.nan, np.nan, 8.47693, 12.2821, 7.04618, 9.16786, + 13.02671, 14.03673, 16.08759, 13.65553], + 'triang': [np.nan, np.nan, 9.33167, 9.76125, 9.28667, 10.34667, + 12.00556, 13.82125, 14.49429, 13.765], 'blackmanharris': [np.nan, np.nan, 8.42526, 12.36824, 6.97691, - 9.16438, 13.05052, 14.02175, 16.1098, - 13.65509] - } + 9.16438, 13.05052, 14.02175, 16.1098, 13.65509] + } for wt in win_types: xp = Series(xps[wt]) @@ -503,22 +507,21 @@ def test_cmov_window_special(self): tm._skip_if_no_scipy() win_types = ['kaiser', 'gaussian', 'general_gaussian', 'slepian'] - kwds = [{'beta': 1.}, {'std': 1.}, {'power': 2., 'width': 2.}, - {'width': 0.5}] + kwds = [{'beta': 1.}, {'std': 1.}, {'power': 2., + 'width': 2.}, {'width': 0.5}] - vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, - 13.49, 16.68, 9.48, 10.63, 14.48]) + vals = np.array([6.95, 15.21, 4.72, 9.12, 13.81, 13.49, 16.68, 9.48, + 10.63, 14.48]) xps = { - 'gaussian': [np.nan, np.nan, 8.97297, 9.76077, 12.24763, - 13.89053, 13.65671, 12.01002, np.nan, np.nan], - 'general_gaussian': [np.nan, np.nan, 9.85011, 10.71589, - 11.73161, 13.08516, 12.95111, 12.74577, - np.nan, np.nan], - 'slepian': [np.nan, np.nan, 9.81073, 10.89359, 11.70284, - 12.88331, 12.96079, 12.77008, np.nan, np.nan], - 'kaiser': [np.nan, np.nan, 9.86851, 11.02969, 11.65161, - 12.75129, 12.90702, 12.83757, np.nan, np.nan] + 'gaussian': [np.nan, np.nan, 8.97297, 9.76077, 12.24763, 13.89053, + 13.65671, 12.01002, np.nan, np.nan], + 'general_gaussian': [np.nan, np.nan, 9.85011, 10.71589, 11.73161, + 13.08516, 12.95111, 12.74577, np.nan, np.nan], + 'slepian': [np.nan, np.nan, 9.81073, 10.89359, 11.70284, 12.88331, + 12.96079, 12.77008, np.nan, np.nan], + 'kaiser': 
[np.nan, np.nan, 9.86851, 11.02969, 11.65161, 12.75129, + 12.90702, 12.83757, np.nan, np.nan] } for wt, k in zip(win_types, kwds): @@ -531,8 +534,8 @@ def test_cmov_window_special_linear_range(self): tm._skip_if_no_scipy() win_types = ['kaiser', 'gaussian', 'general_gaussian', 'slepian'] - kwds = [{'beta': 1.}, {'std': 1.}, {'power': 2., 'width': 2.}, - {'width': 0.5}] + kwds = [{'beta': 1.}, {'std': 1.}, {'power': 2., + 'width': 2.}, {'width': 0.5}] vals = np.array(range(10), dtype=np.float) xp = vals.copy() @@ -546,7 +549,8 @@ def test_cmov_window_special_linear_range(self): def test_rolling_median(self): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - self._check_moment_func(mom.rolling_median, np.median, name='median') + self._check_moment_func(mom.rolling_median, np.median, + name='median') def test_rolling_min(self): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): @@ -557,8 +561,8 @@ def test_rolling_min(self): b = mom.rolling_min(a, window=100, min_periods=1) assert_almost_equal(b, np.ones(len(a))) - self.assertRaises(ValueError, mom.rolling_min, - np.array([1,2, 3]), window=3, min_periods=5) + self.assertRaises(ValueError, mom.rolling_min, np.array([1, 2, 3]), + window=3, min_periods=5) def test_rolling_max(self): with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): @@ -569,7 +573,7 @@ def test_rolling_max(self): b = mom.rolling_max(a, window=100, min_periods=1) assert_almost_equal(a, b) - self.assertRaises(ValueError, mom.rolling_max, np.array([1,2, 3]), + self.assertRaises(ValueError, mom.rolling_max, np.array([1, 2, 3]), window=3, min_periods=5) def test_rolling_quantile(self): @@ -582,10 +586,11 @@ def scoreatpercentile(a, per): return values[int(idx)] for q in qs: - def f(x, window, quantile, min_periods=None, freq=None, center=False): + + def f(x, window, quantile, min_periods=None, freq=None, + center=False): return mom.rolling_quantile(x, window, quantile, - 
min_periods=min_periods, - freq=freq, + min_periods=min_periods, freq=freq, center=center) def alt(x): @@ -594,26 +599,29 @@ def alt(x): self._check_moment_func(f, alt, name='quantile', quantile=q) def test_rolling_apply(self): - # suppress warnings about empty slices, as we are deliberately testing with a 0-length Series + # suppress warnings about empty slices, as we are deliberately testing + # with a 0-length Series with warnings.catch_warnings(): - warnings.filterwarnings("ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning) + warnings.filterwarnings("ignore", + message=".*(empty slice|0 for slice).*", + category=RuntimeWarning) ser = Series([]) assert_series_equal(ser, ser.rolling(10).apply(lambda x: x.mean())) f = lambda x: x[np.isfinite(x)].mean() - def roll_mean(x, window, min_periods=None, freq=None, center=False, **kwargs): - return mom.rolling_apply(x, - window, - func=f, - min_periods=min_periods, - freq=freq, + + def roll_mean(x, window, min_periods=None, freq=None, center=False, + **kwargs): + return mom.rolling_apply(x, window, func=f, + min_periods=min_periods, freq=freq, center=center) + self._check_moment_func(roll_mean, np.mean, name='apply', func=f) # GH 8080 s = Series([None, None, None]) - result = s.rolling(2,min_periods=0).apply(lambda x: len(x)) + result = s.rolling(2, min_periods=0).apply(lambda x: len(x)) expected = Series([1., 2., 2.]) assert_series_equal(result, expected) @@ -634,13 +642,10 @@ def test_rolling_apply_out_of_bounds(self): assert_almost_equal(result, result) def test_rolling_std(self): - self._check_moment_func(mom.rolling_std, - lambda x: np.std(x, ddof=1), + self._check_moment_func(mom.rolling_std, lambda x: np.std(x, ddof=1), name='std') - self._check_moment_func(mom.rolling_std, - lambda x: np.std(x, ddof=0), - name='std', - ddof=0) + self._check_moment_func(mom.rolling_std, lambda x: np.std(x, ddof=0), + name='std', ddof=0) def test_rolling_std_1obs(self): with 
tm.assert_produces_warning(FutureWarning, check_stacklevel=False): @@ -665,10 +670,8 @@ def test_rolling_std_neg_sqrt(self): # Test move_nanstd for neg sqrt. - a = np.array([0.0011448196318903589, - 0.00028718669878572767, - 0.00028718669878572767, - 0.00028718669878572767, + a = np.array([0.0011448196318903589, 0.00028718669878572767, + 0.00028718669878572767, 0.00028718669878572767, 0.00028718669878572767]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): b = mom.rolling_std(a, window=3) @@ -679,14 +682,10 @@ def test_rolling_std_neg_sqrt(self): self.assertTrue(np.isfinite(b[2:]).all()) def test_rolling_var(self): - self._check_moment_func(mom.rolling_var, - lambda x: np.var(x, ddof=1), - test_stable=True, - name='var') - self._check_moment_func(mom.rolling_var, - lambda x: np.var(x, ddof=0), - name='var', - ddof=0) + self._check_moment_func(mom.rolling_var, lambda x: np.var(x, ddof=1), + test_stable=True, name='var') + self._check_moment_func(mom.rolling_var, lambda x: np.var(x, ddof=0), + name='var', ddof=0) def test_rolling_skew(self): try: @@ -694,8 +693,7 @@ def test_rolling_skew(self): except ImportError: raise nose.SkipTest('no scipy') self._check_moment_func(mom.rolling_skew, - lambda x: skew(x, bias=False), - name='skew') + lambda x: skew(x, bias=False), name='skew') def test_rolling_kurt(self): try: @@ -703,8 +701,7 @@ def test_rolling_kurt(self): except ImportError: raise nose.SkipTest('no scipy') self._check_moment_func(mom.rolling_kurt, - lambda x: kurtosis(x, bias=False), - name='kurt') + lambda x: kurtosis(x, bias=False), name='kurt') def test_fperr_robustness(self): # TODO: remove this once python 2.5 out of picture @@ -712,7 +709,7 @@ def test_fperr_robustness(self): raise nose.SkipTest("doesn't work on python 3") # #2114 - data = 
'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1a@\xaa\xaa\xaa\xaa\xaa\xaa\x02@8\x8e\xe38\x8e\xe3\xe8?z\t\xed%\xb4\x97\xd0?\xa2\x0c<\xdd\x9a\x1f\xb6?\x82\xbb\xfa&y\x7f\x9d?\xac\'\xa7\xc4P\xaa\x83?\x90\xdf\xde\xb0k8j?`\xea\xe9u\xf2zQ?*\xe37\x9d\x98N7?\xe2.\xf5&v\x13\x1f?\xec\xc9\xf8\x19\xa4\xb7\x04?\x90b\xf6w\x85\x9f\xeb>\xb5A\xa4\xfaXj\xd2>F\x02\xdb\xf8\xcb\x8d\xb8>.\xac<\xfb\x87^\xa0>\xe8:\xa6\xf9_\xd3\x85>\xfb?\xe2cUU\xfd?\xfc\x7fA\xed8\x8e\xe3?\xa5\xaa\xac\x91\xf6\x12\xca?n\x1cs\xb6\xf9a\xb1?\xe8%D\xf3L-\x97?5\xddZD\x11\xe7~?#>\xe7\x82\x0b\x9ad?\xd9R4Y\x0fxK?;7x;\nP2?N\xf4JO\xb8j\x18?4\xf81\x8a%G\x00?\x9a\xf5\x97\r2\xb4\xe5>\xcd\x9c\xca\xbcB\xf0\xcc>3\x13\x87(\xd7J\xb3>\x99\x19\xb4\xe0\x1e\xb9\x99>ff\xcd\x95\x14&\x81>\x88\x88\xbc\xc7p\xddf>`\x0b\xa6_\x96|N>@\xb2n\xea\x0eS4>U\x98\x938i\x19\x1b>\x8eeb\xd0\xf0\x10\x02>\xbd\xdc-k\x96\x16\xe8=(\x93\x1e\xf2\x0e\x0f\xd0=\xe0n\xd3Bii\xb5=*\xe9\x19Y\x8c\x8c\x9c=\xc6\xf0\xbb\x90]\x08\x83=]\x96\xfa\xc0|`i=>d\xfc\xd5\xfd\xeaP=R0\xfb\xc7\xa7\x8e6=\xc2\x95\xf9_\x8a\x13\x1e=\xd6c\xa6\xea\x06\r\x04=r\xda\xdd8\t\xbc\xea<\xf6\xe6\x93\xd0\xb0\xd2\xd1<\x9d\xdeok\x96\xc3\xb7<&~\xea9s\xaf\x9f<UUUUUU\x13@q\x1c\xc7q\x1c\xc7\xf9?\xf6\x12\xdaKh/\xe1?\xf2\xc3"e\xe0\xe9\xc6?\xed\xaf\x831+\x8d\xae?\xf3\x1f\xad\xcb\x1c^\x94?\x15\x1e\xdd\xbd>\xb8\x02@\xc6\xd2&\xfd\xa8\xf5\xe8?\xd9\xe1\x19\xfe\xc5\xa3\xd0?v\x82"\xa8\xb2/\xb6?\x9dX\x835\xee\x94\x9d?h\x90W\xce\x9e\xb8\x83?\x8a\xc0th~Kj?\\\x80\xf8\x9a\xa9\x87Q?%\xab\xa0\xce\x8c_7?1\xe4\x80\x13\x11*\x1f? 
\x98\x00\r\xb6\xc6\x04?\x80u\xabf\x9d\xb3\xeb>UNrD\xbew\xd2>\x1c\x13C[\xa8\x9f\xb8>\x12b\xd7<pj\xa0>m-\x1fQ@\xe3\x85>\xe6\x91)l\x00/m>Da\xc6\xf2\xaatS>\x05\xd7]\xee\xe3\xf09>' + data = '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1a@\xaa\xaa\xaa\xaa\xaa\xaa\x02@8\x8e\xe38\x8e\xe3\xe8?z\t\xed%\xb4\x97\xd0?\xa2\x0c<\xdd\x9a\x1f\xb6?\x82\xbb\xfa&y\x7f\x9d?\xac\'\xa7\xc4P\xaa\x83?\x90\xdf\xde\xb0k8j?`\xea\xe9u\xf2zQ?*\xe37\x9d\x98N7?\xe2.\xf5&v\x13\x1f?\xec\xc9\xf8\x19\xa4\xb7\x04?\x90b\xf6w\x85\x9f\xeb>\xb5A\xa4\xfaXj\xd2>F\x02\xdb\xf8\xcb\x8d\xb8>.\xac<\xfb\x87^\xa0>\xe8:\xa6\xf9_\xd3\x85>\xfb?\xe2cUU\xfd?\xfc\x7fA\xed8\x8e\xe3?\xa5\xaa\xac\x91\xf6\x12\xca?n\x1cs\xb6\xf9a\xb1?\xe8%D\xf3L-\x97?5\xddZD\x11\xe7~?#>\xe7\x82\x0b\x9ad?\xd9R4Y\x0fxK?;7x;\nP2?N\xf4JO\xb8j\x18?4\xf81\x8a%G\x00?\x9a\xf5\x97\r2\xb4\xe5>\xcd\x9c\xca\xbcB\xf0\xcc>3\x13\x87(\xd7J\xb3>\x99\x19\xb4\xe0\x1e\xb9\x99>ff\xcd\x95\x14&\x81>\x88\x88\xbc\xc7p\xddf>`\x0b\xa6_\x96|N>@\xb2n\xea\x0eS4>U\x98\x938i\x19\x1b>\x8eeb\xd0\xf0\x10\x02>\xbd\xdc-k\x96\x16\xe8=(\x93\x1e\xf2\x0e\x0f\xd0=\xe0n\xd3Bii\xb5=*\xe9\x19Y\x8c\x8c\x9c=\xc6\xf0\xbb\x90]\x08\x83=]\x96\xfa\xc0|`i=>d\xfc\xd5\xfd\xeaP=R0\xfb\xc7\xa7\x8e6=\xc2\x95\xf9_\x8a\x13\x1e=\xd6c\xa6\xea\x06\r\x04=r\xda\xdd8\t\xbc\xea<\xf6\xe6\x93\xd0\xb0\xd2\xd1<\x9d\xdeok\x96\xc3\xb7<&~\xea9s\xaf\x9f<UUUUUU\x13@q\x1c\xc7q\x1c\xc7\xf9?\xf6\x12\xdaKh/\xe1?\xf2\xc3"e\xe0\xe9\xc6?\xed\xaf\x831+\x8d\xae?\xf3\x1f\xad\xcb\x1c^\x94?\x15\x1e\xdd\xbd>\xb8\x02@\xc6\xd2&\xfd\xa8\xf5\xe8?\xd9\xe1\x19\xfe\xc5\xa3\xd0?v\x82"\xa8\xb2/\xb6?\x9dX\x835\xee\x94\x9d?h\x90W\xce\x9e\xb8\x83?\x8a\xc0th~Kj?\\\x80\xf8\x9a\xa9\x87Q?%\xab\xa0\xce\x8c_7?1\xe4\x80\x13\x11*\x1f? 
\x98\x00\r\xb6\xc6\x04?\x80u\xabf\x9d\xb3\xeb>UNrD\xbew\xd2>\x1c\x13C[\xa8\x9f\xb8>\x12b\xd7<pj\xa0>m-\x1fQ@\xe3\x85>\xe6\x91)l\x00/m>Da\xc6\xf2\xaatS>\x05\xd7]\xee\xe3\xf09>' # noqa arr = np.frombuffer(data, dtype='<f8') if sys.byteorder != "little": @@ -740,62 +737,45 @@ def test_fperr_robustness(self): result = mom.rolling_mean(-arr, 1) self.assertTrue(result[-1] <= 0) - def _check_moment_func(self, f, static_comp, - name=None, - window=50, - has_min_periods=True, - has_center=True, - has_time_rule=True, - preserve_nan=True, - fill_value=None, - test_stable=False, - **kwargs): + def _check_moment_func(self, f, static_comp, name=None, window=50, + has_min_periods=True, has_center=True, + has_time_rule=True, preserve_nan=True, + fill_value=None, test_stable=False, **kwargs): with warnings.catch_warnings(record=True): self._check_ndarray(f, static_comp, window=window, has_min_periods=has_min_periods, preserve_nan=preserve_nan, - has_center=has_center, - fill_value=fill_value, - test_stable=test_stable, - **kwargs) + has_center=has_center, fill_value=fill_value, + test_stable=test_stable, **kwargs) with warnings.catch_warnings(record=True): self._check_structures(f, static_comp, has_min_periods=has_min_periods, has_time_rule=has_time_rule, fill_value=fill_value, - has_center=has_center, - **kwargs) + has_center=has_center, **kwargs) # new API if name is not None: - self._check_structures(f, static_comp, - name=name, + self._check_structures(f, static_comp, name=name, has_min_periods=has_min_periods, has_time_rule=has_time_rule, fill_value=fill_value, - has_center=has_center, - **kwargs) - - def _check_ndarray(self, f, static_comp, window=50, - has_min_periods=True, - preserve_nan=True, - has_center=True, - fill_value=None, - test_stable=False, - test_window=True, - **kwargs): + has_center=has_center, **kwargs) + def _check_ndarray(self, f, static_comp, window=50, has_min_periods=True, + preserve_nan=True, has_center=True, fill_value=None, + test_stable=False, 
test_window=True, **kwargs): def get_result(arr, window, min_periods=None, center=False): - return f(arr, window, min_periods=min_periods, center=center, **kwargs) + return f(arr, window, min_periods=min_periods, center=center, ** + kwargs) result = get_result(self.arr, window) - assert_almost_equal(result[-1], - static_comp(self.arr[-50:])) + assert_almost_equal(result[-1], static_comp(self.arr[-50:])) if preserve_nan: - assert(np.isnan(result[self._nan_locs]).all()) + assert (np.isnan(result[self._nan_locs]).all()) # excluding NaNs correctly arr = randn(50) @@ -831,69 +811,62 @@ def get_result(arr, window, min_periods=None, center=False): if has_center: if has_min_periods: result = get_result(arr, 20, min_periods=15, center=True) - expected = get_result(np.concatenate((arr, np.array([np.NaN] * 9))), 20, min_periods=15)[9:] + expected = get_result( + np.concatenate((arr, np.array([np.NaN] * 9))), 20, + min_periods=15)[9:] else: result = get_result(arr, 20, center=True) - expected = get_result(np.concatenate((arr, np.array([np.NaN] * 9))), 20)[9:] + expected = get_result( + np.concatenate((arr, np.array([np.NaN] * 9))), 20)[9:] self.assert_numpy_array_equal(result, expected) if test_stable: result = get_result(self.arr + 1e9, window) - assert_almost_equal(result[-1], - static_comp(self.arr[-50:] + 1e9)) + assert_almost_equal(result[-1], static_comp(self.arr[-50:] + 1e9)) # Test window larger than array, #7297 if test_window: if has_min_periods: - for minp in (0, len(self.arr)-1, len(self.arr)): - result = get_result(self.arr, len(self.arr)+1, min_periods=minp) - expected = get_result(self.arr, len(self.arr), min_periods=minp) + for minp in (0, len(self.arr) - 1, len(self.arr)): + result = get_result(self.arr, len(self.arr) + 1, + min_periods=minp) + expected = get_result(self.arr, len(self.arr), + min_periods=minp) nan_mask = np.isnan(result) - self.assertTrue(np.array_equal(nan_mask, - np.isnan(expected))) + self.assertTrue(np.array_equal(nan_mask, np.isnan( + 
expected))) nan_mask = ~nan_mask assert_almost_equal(result[nan_mask], expected[nan_mask]) else: - result = get_result(self.arr, len(self.arr)+1) + result = get_result(self.arr, len(self.arr) + 1) expected = get_result(self.arr, len(self.arr)) nan_mask = np.isnan(result) self.assertTrue(np.array_equal(nan_mask, np.isnan(expected))) nan_mask = ~nan_mask assert_almost_equal(result[nan_mask], expected[nan_mask]) - - - - def _check_structures(self, f, static_comp, - name=None, + def _check_structures(self, f, static_comp, name=None, has_min_periods=True, has_time_rule=True, - has_center=True, - fill_value=None, - **kwargs): - + has_center=True, fill_value=None, **kwargs): def get_result(obj, window, min_periods=None, freq=None, center=False): # check via the API calls if name is provided if name is not None: - # catch a freq deprecation warning if freq is provided and not None + # catch a freq deprecation warning if freq is provided and not + # None w = FutureWarning if freq is not None else None with tm.assert_produces_warning(w, check_stacklevel=False): - r = obj.rolling(window=window, - min_periods=min_periods, - freq=freq, - center=center) - return getattr(r,name)(**kwargs) + r = obj.rolling(window=window, min_periods=min_periods, + freq=freq, center=center) + return getattr(r, name)(**kwargs) # check via the moments API - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - return f(obj, - window=window, - min_periods=min_periods, - freq=freq, - center=center, - **kwargs) + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): + return f(obj, window=window, min_periods=min_periods, + freq=freq, center=center, **kwargs) series_result = get_result(self.series, window=50) frame_result = get_result(self.frame, window=50) @@ -907,11 +880,15 @@ def get_result(obj, window, min_periods=None, freq=None, center=False): minp = 10 if has_min_periods: - series_result = get_result(self.series[::2], window=win, min_periods=minp, freq='B') 
- frame_result = get_result(self.frame[::2], window=win, min_periods=minp, freq='B') + series_result = get_result(self.series[::2], window=win, + min_periods=minp, freq='B') + frame_result = get_result(self.frame[::2], window=win, + min_periods=minp, freq='B') else: - series_result = get_result(self.series[::2], window=win, freq='B') - frame_result = get_result(self.frame[::2], window=win, freq='B') + series_result = get_result(self.series[::2], window=win, + freq='B') + frame_result = get_result(self.frame[::2], window=win, + freq='B') last_date = series_result.index[-1] prev_date = last_date - 24 * datetools.bday @@ -928,39 +905,35 @@ def get_result(obj, window, min_periods=None, freq=None, center=False): if has_center: # shifter index - s = ['x%d'%x for x in range(12)] + s = ['x%d' % x for x in range(12)] if has_min_periods: minp = 10 - series_xp = get_result(self.series.reindex(list(self.series.index)+s), - window=25, - min_periods=minp).shift(-12).reindex(self.series.index) - frame_xp = get_result(self.frame.reindex(list(self.frame.index)+s), - window=25, - min_periods=minp).shift(-12).reindex(self.frame.index) - - series_rs = get_result(self.series, - window=25, - min_periods=minp, - center=True) - frame_rs = get_result(self.frame, - window=25, - min_periods=minp, + series_xp = get_result( + self.series.reindex(list(self.series.index) + s), + window=25, + min_periods=minp).shift(-12).reindex(self.series.index) + frame_xp = get_result( + self.frame.reindex(list(self.frame.index) + s), + window=25, + min_periods=minp).shift(-12).reindex(self.frame.index) + + series_rs = get_result(self.series, window=25, + min_periods=minp, center=True) + frame_rs = get_result(self.frame, window=25, min_periods=minp, center=True) else: - series_xp = get_result(self.series.reindex(list(self.series.index)+s), - window=25).shift(-12).reindex(self.series.index) - frame_xp = get_result(self.frame.reindex(list(self.frame.index)+s), - window=25).shift(-12).reindex(self.frame.index) - 
- series_rs = get_result(self.series, - window=25, - center=True) - frame_rs = get_result(self.frame, - window=25, - center=True) + series_xp = get_result( + self.series.reindex(list(self.series.index) + s), + window=25).shift(-12).reindex(self.series.index) + frame_xp = get_result( + self.frame.reindex(list(self.frame.index) + s), + window=25).shift(-12).reindex(self.frame.index) + + series_rs = get_result(self.series, window=25, center=True) + frame_rs = get_result(self.frame, window=25, center=True) if fill_value is not None: series_xp = series_xp.fillna(fill_value) @@ -969,7 +942,7 @@ def get_result(obj, window, min_periods=None, freq=None, center=False): assert_frame_equal(frame_xp, frame_rs) def test_ewma(self): - self._check_ew(mom.ewma,name='mean') + self._check_ew(mom.ewma, name='mean') arr = np.zeros(1000) arr[5] = 1 @@ -981,7 +954,8 @@ def test_ewma(self): expected = Series([1.0, 1.6, 2.736842, 4.923077]) for f in [lambda s: s.ewm(com=2.0, adjust=True).mean(), - lambda s: s.ewm(com=2.0, adjust=True, ignore_na=False).mean(), + lambda s: s.ewm(com=2.0, adjust=True, + ignore_na=False).mean(), lambda s: s.ewm(com=2.0, adjust=True, ignore_na=True).mean(), ]: result = f(s) @@ -989,9 +963,11 @@ def test_ewma(self): expected = Series([1.0, 1.333333, 2.222222, 4.148148]) for f in [lambda s: s.ewm(com=2.0, adjust=False).mean(), - lambda s: s.ewm(com=2.0, adjust=False, ignore_na=False).mean(), - lambda s: s.ewm(com=2.0, adjust=False, ignore_na=True).mean(), - ]: + lambda s: s.ewm(com=2.0, adjust=False, + ignore_na=False).mean(), + lambda s: s.ewm(com=2.0, adjust=False, + ignore_na=True).mean(), + ]: result = f(s) assert_series_equal(result, expected) @@ -1020,19 +996,28 @@ def simple_wma(s, w): (s0, True, True, [np.nan, (1. - alpha), 1.]), (s0, False, False, [np.nan, (1. - alpha), alpha]), (s0, False, True, [np.nan, (1. - alpha), alpha]), - (s1, True, False, [(1. - alpha)**2, np.nan, 1.]), + (s1, True, False, [(1. - alpha) ** 2, np.nan, 1.]), (s1, True, True, [(1. 
- alpha), np.nan, 1.]),
-                (s1, False, False, [(1. - alpha)**2, np.nan, alpha]),
+                (s1, False, False, [(1. - alpha) ** 2, np.nan, alpha]),
                 (s1, False, True, [(1. - alpha), np.nan, alpha]),
-                (s2, True, False, [np.nan, (1. - alpha)**3, np.nan, np.nan, 1., np.nan]),
-                (s2, True, True, [np.nan, (1. - alpha), np.nan, np.nan, 1., np.nan]),
-                (s2, False, False, [np.nan, (1. - alpha)**3, np.nan, np.nan, alpha, np.nan]),
-                (s2, False, True, [np.nan, (1. - alpha), np.nan, np.nan, alpha, np.nan]),
-                (s3, True, False, [(1. - alpha)**3, np.nan, (1. - alpha), 1.]),
-                (s3, True, True, [(1. - alpha)**2, np.nan, (1. - alpha), 1.]),
-                (s3, False, False, [(1. - alpha)**3, np.nan, (1. - alpha) * alpha, alpha * ((1. - alpha)**2 + alpha)]),
-                (s3, False, True, [(1. - alpha)**2, np.nan, (1. - alpha) * alpha, alpha]),
-            ]:
+                (s2, True, False, [np.nan, (1. - alpha)
+                                   ** 3, np.nan, np.nan, 1., np.nan]),
+                (s2, True, True, [np.nan, (1. - alpha),
+                                  np.nan, np.nan, 1., np.nan]),
+                (s2, False, False, [np.nan, (1. - alpha)
+                                    ** 3, np.nan, np.nan, alpha, np.nan]),
+                (s2, False, True, [np.nan, (1. - alpha),
+                                   np.nan, np.nan, alpha, np.nan]),
+                (s3, True, False, [(1. - alpha)
+                                   ** 3, np.nan, (1. - alpha), 1.]),
+                (s3, True, True, [(1. - alpha) **
+                                  2, np.nan, (1. - alpha), 1.]),
+                (s3, False, False, [(1. - alpha) ** 3, np.nan,
+                                    (1. - alpha) * alpha,
+                                    alpha * ((1. - alpha) ** 2 + alpha)]),
+                (s3, False, True, [(1. - alpha) ** 2,
+                                   np.nan, (1. - alpha) * alpha, alpha]),
+        ]:
             expected = simple_wma(s, Series(w))
             result = s.ewm(com=com, adjust=adjust, ignore_na=ignore_na).mean()
@@ -1063,9 +1048,12 @@ def test_ewma_halflife_arg(self):
         B = mom.ewma(self.arr, halflife=10.0)
         assert_almost_equal(A, B)

-        self.assertRaises(Exception, mom.ewma, self.arr, span=20, halflife=50)
-        self.assertRaises(Exception, mom.ewma, self.arr, com=9.5, halflife=50)
-        self.assertRaises(Exception, mom.ewma, self.arr, com=9.5, span=20, halflife=50)
+        self.assertRaises(Exception, mom.ewma, self.arr, span=20,
+                          halflife=50)
+        self.assertRaises(Exception, mom.ewma, self.arr, com=9.5,
+                          halflife=50)
+        self.assertRaises(Exception, mom.ewma, self.arr, com=9.5, span=20,
+                          halflife=50)
         self.assertRaises(Exception, mom.ewma, self.arr)

     def test_ew_empty_arrays(self):
@@ -1073,7 +1061,8 @@ def test_ew_empty_arrays(self):
         funcs = [mom.ewma, mom.ewmvol, mom.ewmvar]

         for f in funcs:
-            with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+            with tm.assert_produces_warning(FutureWarning,
+                                            check_stacklevel=False):
                 result = f(arr, 3)
                 assert_almost_equal(result, arr)

@@ -1085,7 +1074,7 @@ def _check_ew(self, func, name=None):
     def _check_ew_ndarray(self, func, preserve_nan=False, name=None):
         result = func(self.arr, com=10)
         if preserve_nan:
-            assert(np.isnan(result[self._nan_locs]).all())
+            assert (np.isnan(result[self._nan_locs]).all())

         # excluding NaNs correctly
         arr = randn(50)
@@ -1105,7 +1094,8 @@ def _check_ew_ndarray(self, func, preserve_nan=False, name=None):
             self.assertTrue(np.isnan(result.values[:10]).all())
             self.assertFalse(np.isnan(result.values[10:]).any())
         else:
-            # ewmstd, ewmvol, ewmvar (with bias=False) require at least two values
+            # ewmstd, ewmvol, ewmvar (with bias=False) require at least two
+            # values
             self.assertTrue(np.isnan(result.values[:11]).all())
             self.assertFalse(np.isnan(result.values[11:]).any())

@@ -1118,7 +1108,8 @@ def _check_ew_ndarray(self, func, preserve_nan=False, name=None):
             if func == mom.ewma:
                 assert_series_equal(result, Series([1.]))
             else:
-                # ewmstd, ewmvol, ewmvar with bias=False require at least two values
+                # ewmstd, ewmvol, ewmvar with bias=False require at least two
+                # values
                 assert_series_equal(result, Series([np.NaN]))

         # pass in ints
@@ -1126,45 +1117,52 @@ def _check_ew_ndarray(self, func, preserve_nan=False, name=None):
         self.assertEqual(result2.dtype, np.float_)

     def _check_ew_structures(self, func, name):
-        series_result = getattr(self.series.ewm(com=10),name)()
+        series_result = getattr(self.series.ewm(com=10), name)()
         tm.assertIsInstance(series_result, Series)
-        frame_result = getattr(self.frame.ewm(com=10),name)()
+        frame_result = getattr(self.frame.ewm(com=10), name)()
         self.assertEqual(type(frame_result), DataFrame)

+# create the data only once as we are not setting it
 def _create_consistency_data():
-    def create_series():
-        return [Series(),
-                Series([np.nan]),
-                Series([np.nan, np.nan]),
-                Series([3.]),
-                Series([np.nan, 3.]),
-                Series([3., np.nan]),
-                Series([1., 3.]),
-                Series([2., 2.]),
-                Series([3., 1.]),
-                Series([5., 5., 5., 5., np.nan, np.nan, np.nan, 5., 5., np.nan, np.nan]),
-                Series([np.nan, 5., 5., 5., np.nan, np.nan, np.nan, 5., 5., np.nan, np.nan]),
-                Series([np.nan, np.nan, 5., 5., np.nan, np.nan, np.nan, 5., 5., np.nan, np.nan]),
-                Series([np.nan, 3., np.nan, 3., 4., 5., 6., np.nan, np.nan, 7., 12., 13., 14., 15.]),
-                Series([np.nan, 5., np.nan, 2., 4., 0., 9., np.nan, np.nan, 3., 12., 13., 14., 15.]),
-                Series([2., 3., np.nan, 3., 4., 5., 6., np.nan, np.nan, 7., 12., 13., 14., 15.]),
-                Series([2., 5., np.nan, 2., 4., 0., 9., np.nan, np.nan, 3., 12., 13., 14., 15.]),
-                Series(range(10)),
-                Series(range(20, 0, -2)),
-                ]
+    return [Series(),
+            Series([np.nan]),
+            Series([np.nan, np.nan]),
+            Series([3.]),
+            Series([np.nan, 3.]),
+            Series([3., np.nan]),
+            Series([1., 3.]),
+            Series([2., 2.]),
+            Series([3., 1.]),
+            Series([5., 5., 5., 5., np.nan, np.nan, np.nan, 5., 5., np.nan,
+                    np.nan]),
+            Series([np.nan, 5., 5., 5., np.nan, np.nan, np.nan, 5., 5.,
+                    np.nan, np.nan]),
+            Series([np.nan, np.nan, 5., 5., np.nan, np.nan, np.nan, 5., 5.,
+                    np.nan, np.nan]),
+            Series([np.nan, 3., np.nan, 3., 4., 5., 6., np.nan, np.nan, 7.,
+                    12., 13., 14., 15.]),
+            Series([np.nan, 5., np.nan, 2., 4., 0., 9., np.nan, np.nan, 3.,
+                    12., 13., 14., 15.]),
+            Series([2., 3., np.nan, 3., 4., 5., 6., np.nan, np.nan, 7.,
+                    12., 13., 14., 15.]),
+            Series([2., 5., np.nan, 2., 4., 0., 9., np.nan, np.nan, 3.,
+                    12., 13., 14., 15.]),
+            Series(range(10)),
+            Series(range(20, 0, -2)), ]

     def create_dataframes():
-        return [DataFrame(),
-                DataFrame(columns=['a']),
-                DataFrame(columns=['a', 'a']),
-                DataFrame(columns=['a', 'b']),
-                DataFrame(np.arange(10).reshape((5, 2))),
-                DataFrame(np.arange(25).reshape((5, 5))),
-                DataFrame(np.arange(25).reshape((5, 5)), columns=['a', 'b', 99, 'd', 'd']),
-                ] + [DataFrame(s) for s in create_series()]
+        return ([DataFrame(),
+                 DataFrame(columns=['a']),
+                 DataFrame(columns=['a', 'a']),
+                 DataFrame(columns=['a', 'b']),
+                 DataFrame(np.arange(10).reshape((5, 2))),
+                 DataFrame(np.arange(25).reshape((5, 5))),
+                 DataFrame(np.arange(25).reshape((5, 5)),
+                           columns=['a', 'b', 99, 'd', 'd'])] +
+                [DataFrame(s) for s in create_series()])

     def is_constant(x):
         values = x.values.ravel()
@@ -1176,9 +1174,12 @@ def no_nans(x):

     # data is a tuple(object, is_contant, no_nans)
     data = create_series() + create_dataframes()

-    return [ (x, is_constant(x), no_nans(x)) for x in data ]
+    return [(x, is_constant(x), no_nans(x)) for x in data]
+

 _consistency_data = _create_consistency_data()

+
 class TestMomentsConsistency(Base):
     base_functions = [
         (lambda v: Series(v).count(), None, 'count'),
@@ -1190,32 +1191,37 @@ class TestMomentsConsistency(Base):
         (lambda v: Series(v).cov(Series(v)), None, 'cov'),
         (lambda v: Series(v).corr(Series(v)), None, 'corr'),
         (lambda v: Series(v).var(), 1, 'var'),
-        #(lambda v: Series(v).skew(), 3, 'skew'), # restore once GH 8086 is fixed
-        #(lambda v: Series(v).kurt(), 4, 'kurt'), # restore once GH 8086 is fixed
-        #(lambda x, min_periods: mom.expanding_quantile(x, 0.3, min_periods=min_periods, 'quantile'),
-        # lambda v: Series(v).quantile(0.3), None, 'quantile'), # restore once GH 8084 is fixed
-        (lambda v: Series(v).median(), None ,'median'),
+
+        # restore once GH 8086 is fixed
+        # lambda v: Series(v).skew(), 3, 'skew'),
+        # (lambda v: Series(v).kurt(), 4, 'kurt'),

+        # (lambda x, min_periods: mom.expanding_quantile(x, 0.3,
+        # min_periods=min_periods, 'quantile'),
+
+        # restore once GH 8084 is fixed
+        # lambda v: Series(v).quantile(0.3), None, 'quantile'),
+
+        (lambda v: Series(v).median(), None, 'median'),
         (np.nanmax, 1, 'max'),
         (np.nanmin, 1, 'min'),
         (np.nansum, 1, 'sum'),
-        ]
+    ]
     if np.__version__ >= LooseVersion('1.8.0'):
         base_functions += [
             (np.nanmean, 1, 'mean'),
-            (lambda v: np.nanstd(v, ddof=1), 1 ,'std'),
-            (lambda v: np.nanvar(v, ddof=1), 1 ,'var'),
+            (lambda v: np.nanstd(v, ddof=1), 1, 'std'),
+            (lambda v: np.nanvar(v, ddof=1), 1, 'var'),
         ]
     if np.__version__ >= LooseVersion('1.9.0'):
-        base_functions += [
-            (np.nanmedian, 1, 'median'),
-        ]
+        base_functions += [(np.nanmedian, 1, 'median'), ]
     no_nan_functions = [
         (np.max, None, 'max'),
         (np.min, None, 'min'),
         (np.sum, None, 'sum'),
         (np.mean, None, 'mean'),
-        (lambda v: np.std(v, ddof=1), 1 ,'std'),
-        (lambda v: np.var(v, ddof=1), 1 ,'var'),
+        (lambda v: np.std(v, ddof=1), 1, 'std'),
+        (lambda v: np.var(v, ddof=1), 1, 'var'),
         (np.median, None, 'median'),
     ]

@@ -1226,19 +1232,18 @@ def _create_data(self):
     def setUp(self):
         self._create_data()

-    def _test_moments_consistency(self,
-                                  min_periods,
-                                  count, mean, mock_mean, corr,
-                                  var_unbiased=None, std_unbiased=None, cov_unbiased=None,
-                                  var_biased=None, std_biased=None, cov_biased=None,
+    def _test_moments_consistency(self, min_periods, count, mean, mock_mean,
+                                  corr, var_unbiased=None, std_unbiased=None,
+                                  cov_unbiased=None, var_biased=None,
+                                  std_biased=None, cov_biased=None,
                                   var_debiasing_factors=None):
-
         def _non_null_values(x):
             values = x.values.ravel()
             return set(values[notnull(values)].tolist())

         for (x, is_constant, no_nans) in self.data:
-            assert_equal = assert_series_equal if isinstance(x, Series) else assert_frame_equal
+            assert_equal = assert_series_equal if isinstance(
+                x, Series) else assert_frame_equal
             count_x = count(x)
             mean_x = mean(x)
@@ -1249,7 +1254,8 @@ def _non_null_values(x):

             # check that correlation of a series with itself is either 1 or NaN
             corr_x_x = corr(x, x)
-            # self.assertTrue(_non_null_values(corr_x_x).issubset(set([1.]))) # restore once rolling_cov(x, x) is identically equal to var(x)
+            # self.assertTrue(_non_null_values(corr_x_x).issubset(set([1.]))) #
+            # restore once rolling_cov(x, x) is identically equal to var(x)

             if is_constant:
                 exp = x.max() if isinstance(x, Series) else x.max().max()
@@ -1268,10 +1274,12 @@ def _non_null_values(x):
                 var_unbiased_x = var_unbiased(x)
                 var_biased_x = var_biased(x)
                 var_debiasing_factors_x = var_debiasing_factors(x)
-                assert_equal(var_unbiased_x, var_biased_x * var_debiasing_factors_x)
+                assert_equal(var_unbiased_x, var_biased_x *
+                             var_debiasing_factors_x)

             for (std, var, cov) in [(std_biased, var_biased, cov_biased),
-                                    (std_unbiased, var_unbiased, cov_unbiased)]:
+                                    (std_unbiased, var_unbiased, cov_unbiased)
+                                    ]:

                 # check that var(x), std(x), and cov(x) are all >= 0
                 var_x = var(x)
@@ -1305,7 +1313,8 @@ def _non_null_values(x):
                 if isinstance(x, Series):
                     for (y, is_constant, no_nans) in self.data:
                         if not x.isnull().equals(y.isnull()):
-                            # can only easily test two Series with similar structure
+                            # can only easily test two Series with similar
+                            # structure
                             continue

                         # check that cor(x, y) is symmetric
@@ -1319,41 +1328,45 @@ def _non_null_values(x):
                             cov_y_x = cov(y, x)
                             assert_equal(cov_x_y, cov_y_x)

-                            # check that cov(x, y) == (var(x+y) - var(x) - var(y)) / 2
+                            # check that cov(x, y) == (var(x+y) - var(x) -
+                            # var(y)) / 2
                             var_x_plus_y = var(x + y)
                             var_y = var(y)
-                            assert_equal(cov_x_y, 0.5 * (var_x_plus_y - var_x - var_y))
+                            assert_equal(cov_x_y, 0.5 *
+                                         (var_x_plus_y - var_x - var_y))

-                            # check that corr(x, y) == cov(x, y) / (std(x) * std(y))
+                            # check that corr(x, y) == cov(x, y) / (std(x) *
+                            # std(y))
                             std_y = std(y)
                             assert_equal(corr_x_y, cov_x_y / (std_x * std_y))

                             if cov is cov_biased:
-                                # check that biased cov(x, y) == mean(x*y) - mean(x)*mean(y)
+                                # check that biased cov(x, y) == mean(x*y) -
+                                # mean(x)*mean(y)
                                 mean_y = mean(y)
                                 mean_x_times_y = mean(x * y)
-                                assert_equal(cov_x_y, mean_x_times_y - (mean_x * mean_y))
+                                assert_equal(cov_x_y, mean_x_times_y -
+                                             (mean_x * mean_y))

     @slow
     def test_ewm_consistency(self):
-
         def _weights(s, com, adjust, ignore_na):
             if isinstance(s, DataFrame):
                 if not len(s.columns):
                     return DataFrame(index=s.index, columns=s.columns)
-                w = concat([ _weights(s.iloc[:, i],
-                                      com=com,
-                                      adjust=adjust,
-                                      ignore_na=ignore_na) for i, _ in enumerate(s.columns) ],
-                           axis=1)
-                w.index=s.index
-                w.columns=s.columns
+                w = concat([
+                    _weights(s.iloc[:, i], com=com, adjust=adjust,
+                             ignore_na=ignore_na)
+                    for i, _ in enumerate(s.columns)], axis=1)
+                w.index = s.index
+                w.columns = s.columns
                 return w

             w = Series(np.nan, index=s.index)
             alpha = 1. / (1. + com)
             if ignore_na:
-                w[s.notnull()] = _weights(s[s.notnull()], com=com, adjust=adjust, ignore_na=False)
+                w[s.notnull()] = _weights(s[s.notnull()], com=com,
+                                          adjust=adjust, ignore_na=False)
             elif adjust:
                 for i in range(len(s)):
                     if s.iat[i] == s.iat[i]:
@@ -1366,7 +1379,8 @@ def _weights(s, com, adjust, ignore_na):
                         if prev_i == -1:
                             w.iat[i] = 1.
                         else:
-                            w.iat[i] = alpha * sum_wts / pow(1. - alpha, i - prev_i)
+                            w.iat[i] = alpha * sum_wts / pow(1. - alpha,
+                                                             i - prev_i)
                         sum_wts += w.iat[i]
                         prev_i = i
             return w
@@ -1382,35 +1396,66 @@ def _variance_debiasing_factors(s, com, adjust, ignore_na):

         def _ewma(s, com, min_periods, adjust, ignore_na):
             weights = _weights(s, com=com, adjust=adjust, ignore_na=ignore_na)
-            result = s.multiply(weights).cumsum().divide(weights.cumsum()).fillna(method='ffill')
-            result[s.expanding().count() < (max(min_periods, 1) if min_periods else 1)] = np.nan
+            result = s.multiply(weights).cumsum().divide(weights.cumsum(
+            )).fillna(method='ffill')
+            result[s.expanding().count() < (max(min_periods, 1) if min_periods
+                                            else 1)] = np.nan
             return result

         com = 3.
-        for min_periods in [0, 1, 2, 3, 4]:
-            for adjust in [True, False]:
-                for ignore_na in [False, True]:
-                    # test consistency between different ewm* moments
-                    self._test_moments_consistency(
-                        min_periods=min_periods,
-                        count=lambda x: x.expanding().count(),
-                        mean=lambda x: x.ewm(com=com, min_periods=min_periods, adjust=adjust, ignore_na=ignore_na).mean(),
-                        mock_mean=lambda x: _ewma(x, com=com, min_periods=min_periods, adjust=adjust, ignore_na=ignore_na),
-                        corr=lambda x, y: x.ewm(com=com, min_periods=min_periods, adjust=adjust, ignore_na=ignore_na).corr(y),
-                        var_unbiased=lambda x: x.ewm(com=com, min_periods=min_periods, adjust=adjust, ignore_na=ignore_na).var(bias=False),
-                        std_unbiased=lambda x: x.ewm(com=com, min_periods=min_periods, adjust=adjust, ignore_na=ignore_na).std(bias=False),
-                        cov_unbiased=lambda x, y: x.ewm(com=com, min_periods=min_periods, adjust=adjust, ignore_na=ignore_na).cov(y, bias=False),
-                        var_biased=lambda x: x.ewm(com=com, min_periods=min_periods, adjust=adjust, ignore_na=ignore_na).var(bias=True),
-                        std_biased=lambda x: x.ewm(com=com, min_periods=min_periods, adjust=adjust, ignore_na=ignore_na).std(bias=True),
-                        cov_biased=lambda x, y: x.ewm(com=com, min_periods=min_periods, adjust=adjust, ignore_na=ignore_na).cov(y, bias=True),
-                        var_debiasing_factors=lambda x: _variance_debiasing_factors(x, com=com, adjust=adjust, ignore_na=ignore_na))
+        for min_periods, adjust, ignore_na in product([0, 1, 2, 3, 4],
+                                                      [True, False],
+                                                      [False, True]):
+            # test consistency between different ewm* moments
+            self._test_moments_consistency(
+                min_periods=min_periods,
+                count=lambda x: x.expanding().count(),
+                mean=lambda x: x.ewm(com=com, min_periods=min_periods,
+                                     adjust=adjust,
+                                     ignore_na=ignore_na).mean(),
+                mock_mean=lambda x: _ewma(x, com=com,
+                                          min_periods=min_periods,
+                                          adjust=adjust,
+                                          ignore_na=ignore_na),
+                corr=lambda x, y: x.ewm(com=com, min_periods=min_periods,
+                                        adjust=adjust,
+                                        ignore_na=ignore_na).corr(y),
+                var_unbiased=lambda x: (
+                    x.ewm(com=com, min_periods=min_periods,
+                          adjust=adjust,
+                          ignore_na=ignore_na).var(bias=False)),
+                std_unbiased=lambda x: (
+                    x.ewm(com=com, min_periods=min_periods,
+                          adjust=adjust, ignore_na=ignore_na)
+                    .std(bias=False)),
+                cov_unbiased=lambda x, y: (
+                    x.ewm(com=com, min_periods=min_periods,
+                          adjust=adjust, ignore_na=ignore_na)
+                    .cov(y, bias=False)),
+                var_biased=lambda x: (
+                    x.ewm(com=com, min_periods=min_periods,
+                          adjust=adjust, ignore_na=ignore_na)
+                    .var(bias=True)),
+                std_biased=lambda x: x.ewm(com=com, min_periods=min_periods,
+                                           adjust=adjust,
+                                           ignore_na=ignore_na).std(bias=True),
+                cov_biased=lambda x, y: (
+                    x.ewm(com=com, min_periods=min_periods,
+                          adjust=adjust, ignore_na=ignore_na)
+                    .cov(y, bias=True)),
+                var_debiasing_factors=lambda x: (
+                    _variance_debiasing_factors(x, com=com, adjust=adjust,
                                                 ignore_na=ignore_na)))

     @slow
     def test_expanding_consistency(self):

-        # suppress warnings about empty slices, as we are deliberately testing with empty/0-length Series/DataFrames
+        # suppress warnings about empty slices, as we are deliberately testing
+        # with empty/0-length Series/DataFrames
         with warnings.catch_warnings():
-            warnings.filterwarnings("ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning)
+            warnings.filterwarnings("ignore",
+                                    message=".*(empty slice|0 for slice).*",
+                                    category=RuntimeWarning)

             for min_periods in [0, 1, 2, 3, 4]:
@@ -1418,125 +1463,208 @@ def test_expanding_consistency(self):
                 self._test_moments_consistency(
                     min_periods=min_periods,
                     count=lambda x: x.expanding().count(),
-                    mean=lambda x: x.expanding(min_periods=min_periods).mean(),
-                    mock_mean=lambda x: x.expanding(min_periods=min_periods).sum() / x.expanding().count(),
-                    corr=lambda x, y: x.expanding(min_periods=min_periods).corr(y),
-                    var_unbiased=lambda x: x.expanding(min_periods=min_periods).var(),
-                    std_unbiased=lambda x: x.expanding(min_periods=min_periods).std(),
-                    cov_unbiased=lambda x, y: x.expanding(min_periods=min_periods).cov(y),
-                    var_biased=lambda x: x.expanding(min_periods=min_periods).var(ddof=0),
-                    std_biased=lambda x: x.expanding(min_periods=min_periods).std(ddof=0),
-                    cov_biased=lambda x, y: x.expanding(min_periods=min_periods).cov(y, ddof=0),
-                    var_debiasing_factors=lambda x: x.expanding().count() / (x.expanding().count() - 1.).replace(0., np.nan)
-                    )
-
-                # test consistency between expanding_xyz() and either (a) expanding_apply of Series.xyz(),
-                # or (b) expanding_apply of np.nanxyz()
+                    mean=lambda x: x.expanding(
+                        min_periods=min_periods).mean(),
+                    mock_mean=lambda x: x.expanding(
+                        min_periods=min_periods).sum() / x.expanding().count(),
+                    corr=lambda x, y: x.expanding(
+                        min_periods=min_periods).corr(y),
+                    var_unbiased=lambda x: x.expanding(
+                        min_periods=min_periods).var(),
+                    std_unbiased=lambda x: x.expanding(
+                        min_periods=min_periods).std(),
+                    cov_unbiased=lambda x, y: x.expanding(
+                        min_periods=min_periods).cov(y),
+                    var_biased=lambda x: x.expanding(
+                        min_periods=min_periods).var(ddof=0),
+                    std_biased=lambda x: x.expanding(
+                        min_periods=min_periods).std(ddof=0),
+                    cov_biased=lambda x, y: x.expanding(
+                        min_periods=min_periods).cov(y, ddof=0),
+                    var_debiasing_factors=lambda x: (
+                        x.expanding().count() /
+                        (x.expanding().count() - 1.)
+                        .replace(0., np.nan)))
+
+                # test consistency between expanding_xyz() and either (a)
+                # expanding_apply of Series.xyz(), or (b) expanding_apply of
+                # np.nanxyz()
                 for (x, is_constant, no_nans) in self.data:
-                    assert_equal = assert_series_equal if isinstance(x, Series) else assert_frame_equal
+                    assert_equal = assert_series_equal if isinstance(
+                        x, Series) else assert_frame_equal
                     functions = self.base_functions

                     # GH 8269
                     if no_nans:
                         functions = self.base_functions + self.no_nan_functions
                     for (f, require_min_periods, name) in functions:
-                        expanding_f = getattr(x.expanding(min_periods=min_periods),name)
+                        expanding_f = getattr(
+                            x.expanding(min_periods=min_periods), name)

-                        if require_min_periods and (min_periods is not None) and (min_periods < require_min_periods):
+                        if (require_min_periods and
+                                (min_periods is not None) and
+                                (min_periods < require_min_periods)):
                             continue

                         if name == 'count':
                             expanding_f_result = expanding_f()
-                            expanding_apply_f_result = x.expanding(min_periods=0).apply(func=f)
+                            expanding_apply_f_result = x.expanding(
+                                min_periods=0).apply(func=f)
                         else:
-                            if name in ['cov','corr']:
-                                expanding_f_result = expanding_f(pairwise=False)
+                            if name in ['cov', 'corr']:
+                                expanding_f_result = expanding_f(
+                                    pairwise=False)
                             else:
                                 expanding_f_result = expanding_f()
-                            expanding_apply_f_result = x.expanding(min_periods=min_periods).apply(func=f)
+                            expanding_apply_f_result = x.expanding(
+                                min_periods=min_periods).apply(func=f)
                         if not tm._incompat_bottleneck_version(name):
-                            assert_equal(expanding_f_result, expanding_apply_f_result)
+                            assert_equal(expanding_f_result,
+                                         expanding_apply_f_result)

-                        if (name in ['cov','corr']) and isinstance(x, DataFrame):
+                        if (name in ['cov', 'corr']) and isinstance(x,
+                                                                    DataFrame):
                             # test pairwise=True
                             expanding_f_result = expanding_f(x, pairwise=True)
-                            expected = Panel(items=x.index, major_axis=x.columns, minor_axis=x.columns)
+                            expected = Panel(items=x.index,
+                                             major_axis=x.columns,
+                                             minor_axis=x.columns)
                             for i, _ in enumerate(x.columns):
                                 for j, _ in enumerate(x.columns):
-                                    expected.iloc[:, i, j] = getattr(x.iloc[:, i].expanding(min_periods=min_periods),name)(x.iloc[:, j])
+                                    expected.iloc[:, i, j] = getattr(
+                                        x.iloc[:, i].expanding(
+                                            min_periods=min_periods),
+                                        name)(x.iloc[:, j])
                             assert_panel_equal(expanding_f_result, expected)

     @slow
     def test_rolling_consistency(self):

-        # suppress warnings about empty slices, as we are deliberately testing with empty/0-length Series/DataFrames
+        # suppress warnings about empty slices, as we are deliberately testing
+        # with empty/0-length Series/DataFrames
         with warnings.catch_warnings():
-            warnings.filterwarnings("ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning)
-
-        for window in [1, 2, 3, 10, 20]:
-            for min_periods in set([0, 1, 2, 3, 4, window]):
-                if min_periods and (min_periods > window):
-                    continue
-                for center in [False, True]:
-
-                    # test consistency between different rolling_* moments
-                    self._test_moments_consistency(
-                        min_periods=min_periods,
-                        count=lambda x: x.rolling(window=window, center=center).count(),
-                        mean=lambda x: x.rolling(window=window, min_periods=min_periods, center=center).mean(),
-                        mock_mean=lambda x: x.rolling(window=window, min_periods=min_periods, center=center).sum().divide(
-                            x.rolling(window=window, min_periods=min_periods, center=center).count()),
-                        corr=lambda x, y: x.rolling(window=window, min_periods=min_periods, center=center).corr(y),
-                        var_unbiased=lambda x: x.rolling(window=window, min_periods=min_periods, center=center).var(),
-                        std_unbiased=lambda x: x.rolling(window=window, min_periods=min_periods, center=center).std(),
-                        cov_unbiased=lambda x, y: x.rolling(window=window, min_periods=min_periods, center=center).cov(y),
-                        var_biased=lambda x: x.rolling(window=window, min_periods=min_periods, center=center).var(ddof=0),
-                        std_biased=lambda x: x.rolling(window=window, min_periods=min_periods, center=center).std(ddof=0),
-                        cov_biased=lambda x, y: x.rolling(window=window, min_periods=min_periods, center=center).cov(y, ddof=0),
-                        var_debiasing_factors=lambda x: x.rolling(window=window, center=center).count().divide(
-                            (x.rolling(window=window, center=center).count() - 1.).replace(0., np.nan)),
-                        )
-
-                    # test consistency between rolling_xyz() and either (a) rolling_apply of Series.xyz(),
-                    # or (b) rolling_apply of np.nanxyz()
-                    for (x, is_constant, no_nans) in self.data:
-
-                        assert_equal = assert_series_equal if isinstance(x, Series) else assert_frame_equal
-                        functions = self.base_functions
-
-                        # GH 8269
-                        if no_nans:
-                            functions = self.base_functions + self.no_nan_functions
-                        for (f, require_min_periods, name) in functions:
-                            rolling_f = getattr(x.rolling(window=window, center=center, min_periods=min_periods),name)
-
-                            if require_min_periods and (min_periods is not None) and (min_periods < require_min_periods):
-                                continue
-
-                            if name == 'count':
-                                rolling_f_result = rolling_f()
-                                rolling_apply_f_result = x.rolling(window=window,
-                                    min_periods=0, center=center).apply(func=f)
-                            else:
-                                if name in ['cov','corr']:
-                                    rolling_f_result = rolling_f(pairwise=False)
-                                else:
-                                    rolling_f_result = rolling_f()
-                                rolling_apply_f_result = x.rolling(window=window,
-                                    min_periods=min_periods, center=center).apply(func=f)
-                            if not tm._incompat_bottleneck_version(name):
-                                assert_equal(rolling_f_result, rolling_apply_f_result)
-
-                            if (name in ['cov','corr']) and isinstance(x, DataFrame):
-                                # test pairwise=True
-                                rolling_f_result = rolling_f(x, pairwise=True)
-                                expected = Panel(items=x.index, major_axis=x.columns, minor_axis=x.columns)
-                                for i, _ in enumerate(x.columns):
-                                    for j, _ in enumerate(x.columns):
-                                        expected.iloc[:, i, j] = getattr(x.iloc[:, i].rolling(
-                                            window=window, min_periods=min_periods, center=center),name)(x.iloc[:, j])
-                                assert_panel_equal(rolling_f_result, expected)
+            warnings.filterwarnings("ignore",
+                                    message=".*(empty slice|0 for slice).*",
+                                    category=RuntimeWarning)
+
+            def cases():
+                for window in [1, 2, 3, 10, 20]:
+                    for min_periods in set([0, 1, 2, 3, 4, window]):
+                        if min_periods and (min_periods > window):
+                            continue
+                        for center in [False, True]:
+                            yield window, min_periods, center
+
+            for window, min_periods, center in cases():
+                # test consistency between different rolling_* moments
+                self._test_moments_consistency(
+                    min_periods=min_periods,
+                    count=lambda x: (
+                        x.rolling(window=window, center=center)
+                        .count()),
+                    mean=lambda x: (
+                        x.rolling(window=window, min_periods=min_periods,
+                                  center=center).mean()),
+                    mock_mean=lambda x: (
+                        x.rolling(window=window,
+                                  min_periods=min_periods,
+                                  center=center).sum()
+                        .divide(x.rolling(window=window,
+                                          min_periods=min_periods,
+                                          center=center).count())),
+                    corr=lambda x, y: (
+                        x.rolling(window=window, min_periods=min_periods,
+                                  center=center).corr(y)),
+
+                    var_unbiased=lambda x: (
+                        x.rolling(window=window, min_periods=min_periods,
+                                  center=center).var()),
+
+                    std_unbiased=lambda x: (
+                        x.rolling(window=window, min_periods=min_periods,
+                                  center=center).std()),
+
+                    cov_unbiased=lambda x, y: (
+                        x.rolling(window=window, min_periods=min_periods,
+                                  center=center).cov(y)),
+
+                    var_biased=lambda x: (
+                        x.rolling(window=window, min_periods=min_periods,
+                                  center=center).var(ddof=0)),
+
+                    std_biased=lambda x: (
+                        x.rolling(window=window, min_periods=min_periods,
+                                  center=center).std(ddof=0)),
+
+                    cov_biased=lambda x, y: (
+                        x.rolling(window=window, min_periods=min_periods,
+                                  center=center).cov(y, ddof=0)),
+                    var_debiasing_factors=lambda x: (
+                        x.rolling(window=window, center=center).count()
+                        .divide((x.rolling(window=window, center=center)
+                                 .count() - 1.)
+                                .replace(0., np.nan))))
+
+                # test consistency between rolling_xyz() and either (a)
+                # rolling_apply of Series.xyz(), or (b) rolling_apply of
+                # np.nanxyz()
+                for (x, is_constant, no_nans) in self.data:
+
+                    assert_equal = (assert_series_equal
+                                    if isinstance(x, Series) else
+                                    assert_frame_equal)
+                    functions = self.base_functions
+
+                    # GH 8269
+                    if no_nans:
+                        functions = self.base_functions + self.no_nan_functions
+                    for (f, require_min_periods, name) in functions:
+                        rolling_f = getattr(
+                            x.rolling(window=window, center=center,
+                                      min_periods=min_periods), name)
+
+                        if require_min_periods and (
+                                min_periods is not None) and (
+                                    min_periods < require_min_periods):
+                            continue
+
+                        if name == 'count':
+                            rolling_f_result = rolling_f()
+                            rolling_apply_f_result = x.rolling(
+                                window=window, min_periods=0,
+                                center=center).apply(func=f)
+                        else:
+                            if name in ['cov', 'corr']:
+                                rolling_f_result = rolling_f(
+                                    pairwise=False)
+                            else:
+                                rolling_f_result = rolling_f()
+                            rolling_apply_f_result = x.rolling(
+                                window=window, min_periods=min_periods,
+                                center=center).apply(func=f)
+                        if not tm._incompat_bottleneck_version(name):
+                            assert_equal(rolling_f_result,
+                                         rolling_apply_f_result)
+
+                        if (name in ['cov', 'corr']) and isinstance(
+                                x, DataFrame):
+                            # test pairwise=True
+                            rolling_f_result = rolling_f(x,
+                                                         pairwise=True)
+                            expected = Panel(items=x.index,
+                                             major_axis=x.columns,
+                                             minor_axis=x.columns)
+                            for i, _ in enumerate(x.columns):
+                                for j, _ in enumerate(x.columns):
+                                    expected.iloc[:, i, j] = (
+                                        getattr(
+                                            x.iloc[:, i]
+                                            .rolling(window=window,
                                                      min_periods=min_periods,
+                                                     center=center),
+                                            name)(x.iloc[:, j]))
+                            assert_panel_equal(rolling_f_result, expected)

     # binary moments
     def test_rolling_cov(self):
@@ -1547,7 +1675,7 @@ def test_rolling_cov(self):
         assert_almost_equal(result[-1], np.cov(A[-50:], B[-50:])[0, 1])

     def test_rolling_cov_pairwise(self):
-        self._check_pairwise_moment('rolling','cov', window=10, min_periods=5)
+        self._check_pairwise_moment('rolling', 'cov', window=10, min_periods=5)

     def test_rolling_corr(self):
         A = self.series
@@ -1566,12 +1694,12 @@ def test_rolling_corr(self):
         assert_almost_equal(result[-1], a.corr(b))

     def test_rolling_corr_pairwise(self):
-        self._check_pairwise_moment('rolling', 'corr', window=10, min_periods=5)
+        self._check_pairwise_moment('rolling', 'corr', window=10,
+                                    min_periods=5)

     def _check_pairwise_moment(self, dispatch, name, **kwargs):
-
         def get_result(obj, obj2=None):
-            return getattr(getattr(obj,dispatch)(**kwargs),name)(obj2)
+            return getattr(getattr(obj, dispatch)(**kwargs), name)(obj2)

         panel = get_result(self.frame)
         actual = panel.ix[:, 1, 5]
@@ -1582,40 +1710,36 @@ def get_result(obj, obj2=None):
     def test_flex_binary_moment(self):
         # GH3155
         # don't blow the stack
-        self.assertRaises(TypeError, rwindow._flex_binary_moment,5,6,None)
+        self.assertRaises(TypeError, rwindow._flex_binary_moment, 5, 6, None)

     def test_corr_sanity(self):
-        #GH 3155
-        df = DataFrame(
-            np.array(
-                [[ 0.87024726, 0.18505595],
-                 [ 0.64355431, 0.3091617 ],
-                 [ 0.92372966, 0.50552513],
-                 [ 0.00203756, 0.04520709],
-                 [ 0.84780328, 0.33394331],
-                 [ 0.78369152, 0.63919667]])
-            )
-
-        res = df[0].rolling(5,center=True).corr(df[1])
-        self.assertTrue(all([np.abs(np.nan_to_num(x)) <=1 for x in res]))
+        # GH 3155
+        df = DataFrame(np.array(
+            [[0.87024726, 0.18505595], [0.64355431, 0.3091617],
+             [0.92372966, 0.50552513], [0.00203756, 0.04520709],
+             [0.84780328, 0.33394331], [0.78369152, 0.63919667]]))
+
+        res = df[0].rolling(5, center=True).corr(df[1])
+        self.assertTrue(all([np.abs(np.nan_to_num(x)) <= 1 for x in res]))

         # and some fuzzing
         for i in range(10):
-            df = DataFrame(np.random.rand(30,2))
-            res = df[0].rolling(5,center=True).corr(df[1])
+            df = DataFrame(np.random.rand(30, 2))
+            res = df[0].rolling(5, center=True).corr(df[1])
             try:
-                self.assertTrue(all([np.abs(np.nan_to_num(x)) <=1 for x in res]))
+                self.assertTrue(all([np.abs(np.nan_to_num(x)) <= 1 for x in res
+                                     ]))
             except:
                 print(res)

-
     def test_flex_binary_frame(self):
         def _check(method):
             series = self.frame[1]
-            res = getattr(series.rolling(window=10),method)(self.frame)
-            res2 = getattr(self.frame.rolling(window=10),method)(series)
-            exp = self.frame.apply(lambda x: getattr(series.rolling(window=10),method)(x))
+            res = getattr(series.rolling(window=10), method)(self.frame)
+            res2 = getattr(self.frame.rolling(window=10), method)(series)
+            exp = self.frame.apply(lambda x: getattr(
+                series.rolling(window=10), method)(x))

             tm.assert_frame_equal(res, exp)
             tm.assert_frame_equal(res2, exp)
@@ -1623,12 +1747,12 @@ def _check(method):
             frame2 = self.frame.copy()
             frame2.values[:] = np.random.randn(*frame2.shape)

-            res3 = getattr(self.frame.rolling(window=10),method)(frame2)
-            exp = DataFrame(dict((k, getattr(self.frame[k].rolling(window=10),method)(frame2[k]))
-                                 for k in self.frame))
+            res3 = getattr(self.frame.rolling(window=10), method)(frame2)
+            exp = DataFrame(dict((k, getattr(self.frame[k].rolling(
+                window=10), method)(frame2[k])) for k in self.frame))
             tm.assert_frame_equal(res3, exp)

-        methods = ['corr','cov']
+        methods = ['corr', 'cov']
         for meth in methods:
             _check(meth)

@@ -1636,18 +1760,17 @@ def test_ewmcov(self):
         self._check_binary_ew('cov')

     def test_ewmcov_pairwise(self):
-        self._check_pairwise_moment('ewm','cov', span=10, min_periods=5)
+        self._check_pairwise_moment('ewm', 'cov', span=10, min_periods=5)

     def test_ewmcorr(self):
         self._check_binary_ew('corr')

     def test_ewmcorr_pairwise(self):
-        self._check_pairwise_moment('ewm','corr', span=10, min_periods=5)
+        self._check_pairwise_moment('ewm', 'corr', span=10, min_periods=5)

     def _check_binary_ew(self, name):
-
         def func(A, B, com, **kwargs):
-            return getattr(A.ewm(com, **kwargs),name)(B)
+            return getattr(A.ewm(com, **kwargs), name)(B)

         A = Series(randn(50), index=np.arange(50))
         B = A[2:] + randn(48)
@@ -1662,7 +1785,8 @@ def func(A, B, com, **kwargs):
         # GH 7898
         for min_periods in (0, 1, 2):
             result = func(A, B, 20, min_periods=min_periods)
-            # binary functions (ewmcov, ewmcorr) with bias=False require at least two values
+            # binary functions (ewmcov, ewmcorr) with bias=False require at
+            # least two values
             self.assertTrue(np.isnan(result.values[:11]).all())
             self.assertFalse(np.isnan(result.values[11:]).any())

@@ -1671,7 +1795,8 @@ def func(A, B, com, **kwargs):
             assert_series_equal(result, Series([]))

             # check series of length 1
-            result = func(Series([1.]), Series([1.]), 50, min_periods=min_periods)
+            result = func(
+                Series([1.]), Series([1.]), 50, min_periods=min_periods)
             assert_series_equal(result, Series([np.NaN]))

         self.assertRaises(Exception, func, A, randn(50), 20, min_periods=5)
@@ -1681,10 +1806,9 @@ def test_expanding_apply(self):
         assert_series_equal(ser, ser.expanding().apply(lambda x: x.mean()))

         def expanding_mean(x, min_periods=1, freq=None):
-            return mom.expanding_apply(x,
-                                       lambda x: x.mean(),
-                                       min_periods=min_periods,
-                                       freq=freq)
+            return mom.expanding_apply(x, lambda x: x.mean(),
+                                       min_periods=min_periods, freq=freq)
+
         self._check_expanding(expanding_mean, np.mean)

         # GH 8080
@@ -1701,32 +1825,32 @@ def mean_w_arg(x, const):

         expected = df.expanding().apply(np.mean) + 20.

-        assert_frame_equal(df.expanding().apply(mean_w_arg, args=(20,)),
-                           expected)
+        assert_frame_equal(df.expanding().apply(mean_w_arg, args=(20, )),
+                           expected)
         assert_frame_equal(df.expanding().apply(mean_w_arg,
-                                                kwargs={'const' : 20}),
+                                                kwargs={'const': 20}),
                            expected)

-
     def test_expanding_corr(self):
         A = self.series.dropna()
         B = (A + randn(len(A)))[:-5]

         result = A.expanding().corr(B)

-        rolling_result = A.rolling(window=len(A),min_periods=1).corr(B)
+        rolling_result = A.rolling(window=len(A), min_periods=1).corr(B)

         assert_almost_equal(rolling_result, result)

     def test_expanding_count(self):
         result = self.series.expanding().count()
-        assert_almost_equal(result, self.series.rolling(window=len(self.series)).count())
+        assert_almost_equal(result, self.series.rolling(
+            window=len(self.series)).count())

     def test_expanding_quantile(self):
         result = self.series.expanding().quantile(0.5)

-        rolling_result = self.series.rolling(
-            window=len(self.series),min_periods=1).quantile(0.5)
+        rolling_result = self.series.rolling(window=len(self.series),
+                                             min_periods=1).quantile(0.5)

         assert_almost_equal(result, rolling_result)

@@ -1746,7 +1870,8 @@ def test_expanding_max(self):
     def test_expanding_cov_pairwise(self):
         result = self.frame.expanding().corr()

-        rolling_result = self.frame.rolling(window=len(self.frame),min_periods=1).corr()
+        rolling_result = self.frame.rolling(window=len(self.frame),
+                                            min_periods=1).corr()

         for i in result.items:
             assert_almost_equal(result[i], rolling_result[i])

@@ -1754,7 +1879,8 @@ def test_expanding_cov_pairwise(self):
     def test_expanding_corr_pairwise(self):
         result = self.frame.expanding().corr()

-        rolling_result = self.frame.rolling(window=len(self.frame), min_periods=1).corr()
+        rolling_result = self.frame.rolling(window=len(self.frame),
+                                            min_periods=1).corr()

         for i in result.items:
             assert_almost_equal(result[i], rolling_result[i])

@@ -1823,12 +1949,15 @@ def test_rolling_functions_window_non_shrinkage(self):
         # GH 7764
         s = Series(range(4))
         s_expected = Series(np.nan, index=s.index)
-        df = DataFrame([[1,5], [3, 2], [3,9], [-1,0]], columns=['A','B'])
+        df = DataFrame([[1, 5], [3, 2], [3, 9], [-1, 0]], columns=['A', 'B'])
         df_expected = DataFrame(np.nan, index=df.index, columns=df.columns)
-        df_expected_panel = Panel(items=df.index, major_axis=df.columns, minor_axis=df.columns)
+        df_expected_panel = Panel(items=df.index, major_axis=df.columns,
+                                  minor_axis=df.columns)

-        functions = [lambda x: x.rolling(window=10, min_periods=5).cov(x, pairwise=False),
-                     lambda x: x.rolling(window=10, min_periods=5).corr(x, pairwise=False),
+        functions = [lambda x: (x.rolling(window=10, min_periods=5)
+                                .cov(x, pairwise=False)),
+                     lambda x: (x.rolling(window=10, min_periods=5)
+                                .corr(x, pairwise=False)),
                      lambda x: x.rolling(window=10, min_periods=5).max(),
                      lambda x: x.rolling(window=10, min_periods=5).min(),
                      lambda x: x.rolling(window=10, min_periods=5).sum(),
@@ -1837,11 +1966,12 @@ def test_rolling_functions_window_non_shrinkage(self):
                     lambda x: x.rolling(window=10, min_periods=5).var(),
                     lambda x: x.rolling(window=10, min_periods=5).skew(),
                     lambda x: x.rolling(window=10, min_periods=5).kurt(),
-                     lambda x: x.rolling(window=10, min_periods=5).quantile(quantile=0.5),
+                     lambda x: x.rolling(
+                         window=10, min_periods=5).quantile(quantile=0.5),
                      lambda x: x.rolling(window=10, min_periods=5).median(),
                      lambda x: x.rolling(window=10, min_periods=5).apply(sum),
-                     lambda x: x.rolling(win_type='boxcar', window=10, min_periods=5).mean(),
-                     ]
+                     lambda x: x.rolling(win_type='boxcar',
+                                         window=10, min_periods=5).mean()]
         for f in functions:
             try:
                 s_result = f(s)
@@ -1854,9 +1984,10 @@ def test_rolling_functions_window_non_shrinkage(self):
                 # scipy needed for rolling_window
                 continue

-        functions = [lambda x: x.rolling(window=10, min_periods=5).cov(x, pairwise=True),
-                     lambda x: x.rolling(window=10, min_periods=5).corr(x, pairwise=True),
-                     ]
+        functions = [lambda x: (x.rolling(window=10, min_periods=5)
+                                .cov(x, pairwise=True)),
+                     lambda x: (x.rolling(window=10, min_periods=5)
+                                .corr(x, pairwise=True))]
         for f in functions:
             df_result_panel = f(df)
             assert_panel_equal(df_result_panel, df_expected_panel)

@@ -1867,15 +1998,19 @@ def test_moment_functions_zero_length(self):
         s_expected = s
         df1 = DataFrame()
         df1_expected = df1
-        df1_expected_panel = Panel(items=df1.index, major_axis=df1.columns, minor_axis=df1.columns)
+        df1_expected_panel = Panel(items=df1.index, major_axis=df1.columns,
+                                   minor_axis=df1.columns)
         df2 = DataFrame(columns=['a'])
         df2['a'] = df2['a'].astype('float64')
         df2_expected = df2
-        df2_expected_panel = Panel(items=df2.index, major_axis=df2.columns, minor_axis=df2.columns)
+        df2_expected_panel = Panel(items=df2.index, major_axis=df2.columns,
+                                   minor_axis=df2.columns)

         functions = [lambda x: x.expanding().count(),
-                     lambda x: x.expanding(min_periods=5).cov(x, pairwise=False),
-                     lambda x: x.expanding(min_periods=5).corr(x, pairwise=False),
+                     lambda x: x.expanding(min_periods=5).cov(
+                         x, pairwise=False),
+                     lambda x: x.expanding(min_periods=5).corr(
+                         x, pairwise=False),
                      lambda x: x.expanding(min_periods=5).max(),
                      lambda x: x.expanding(min_periods=5).min(),
                      lambda x: x.expanding(min_periods=5).sum(),
@@ -1888,8 +2023,10 @@ def test_moment_functions_zero_length(self):
                      lambda x: x.expanding(min_periods=5).median(),
                      lambda x: x.expanding(min_periods=5).apply(sum),
                      lambda x: x.rolling(window=10).count(),
-                     lambda x: x.rolling(window=10, min_periods=5).cov(x, pairwise=False),
-                     lambda x: x.rolling(window=10, min_periods=5).corr(x, pairwise=False),
+                     lambda x: x.rolling(window=10, min_periods=5).cov(
+                         x, pairwise=False),
+                     lambda x: x.rolling(window=10, min_periods=5).corr(
+                         x, pairwise=False),
                      lambda x: x.rolling(window=10, min_periods=5).max(),
                      lambda x: x.rolling(window=10, min_periods=5).min(),
                      lambda x: x.rolling(window=10, min_periods=5).sum(),
@@ -1898,11 +2035,13 @@ def test_moment_functions_zero_length(self):
                     lambda x: x.rolling(window=10, min_periods=5).var(),
                     lambda x:
x.rolling(window=10, min_periods=5).skew(), lambda x: x.rolling(window=10, min_periods=5).kurt(), - lambda x: x.rolling(window=10, min_periods=5).quantile(0.5), + lambda x: x.rolling( + window=10, min_periods=5).quantile(0.5), lambda x: x.rolling(window=10, min_periods=5).median(), lambda x: x.rolling(window=10, min_periods=5).apply(sum), - lambda x: x.rolling(win_type='boxcar', window=10, min_periods=5).mean(), - ] + lambda x: x.rolling(win_type='boxcar', + window=10, min_periods=5).mean(), + ] for f in functions: try: s_result = f(s) @@ -1918,11 +2057,15 @@ def test_moment_functions_zero_length(self): # scipy needed for rolling_window continue - functions = [lambda x: x.expanding(min_periods=5).cov(x, pairwise=True), - lambda x: x.expanding(min_periods=5).corr(x, pairwise=True), - lambda x: x.rolling(window=10, min_periods=5).cov(x, pairwise=True), - lambda x: x.rolling(window=10, min_periods=5).corr(x, pairwise=True), - ] + functions = [lambda x: (x.expanding(min_periods=5) + .cov(x, pairwise=True)), + lambda x: (x.expanding(min_periods=5) + .corr(x, pairwise=True)), + lambda x: (x.rolling(window=10, min_periods=5) + .cov(x, pairwise=True)), + lambda x: (x.rolling(window=10, min_periods=5) + .corr(x, pairwise=True)), + ] for f in functions: df1_result_panel = f(df1) assert_panel_equal(df1_result_panel, df1_expected_panel) @@ -1932,15 +2075,16 @@ def test_moment_functions_zero_length(self): def test_expanding_cov_pairwise_diff_length(self): # GH 7512 - df1 = DataFrame([[1,5], [3, 2], [3,9]], columns=['A','B']) - df1a = DataFrame([[1,5], [3,9]], index=[0,2], columns=['A','B']) - df2 = DataFrame([[5,6], [None,None], [2,1]], columns=['X','Y']) - df2a = DataFrame([[5,6], [2,1]], index=[0,2], columns=['X','Y']) + df1 = DataFrame([[1, 5], [3, 2], [3, 9]], columns=['A', 'B']) + df1a = DataFrame([[1, 5], [3, 9]], index=[0, 2], columns=['A', 'B']) + df2 = DataFrame([[5, 6], [None, None], [2, 1]], columns=['X', 'Y']) + df2a = DataFrame([[5, 6], [2, 1]], index=[0, 2], 
columns=['X', 'Y']) result1 = df1.expanding().cov(df2a, pairwise=True)[2] result2 = df1.expanding().cov(df2a, pairwise=True)[2] result3 = df1a.expanding().cov(df2, pairwise=True)[2] result4 = df1a.expanding().cov(df2a, pairwise=True)[2] - expected = DataFrame([[-3., -5.], [-6., -10.]], index=['A','B'], columns=['X','Y']) + expected = DataFrame([[-3., -5.], [-6., -10.]], index=['A', 'B'], + columns=['X', 'Y']) assert_frame_equal(result1, expected) assert_frame_equal(result2, expected) assert_frame_equal(result3, expected) @@ -1948,15 +2092,16 @@ def test_expanding_cov_pairwise_diff_length(self): def test_expanding_corr_pairwise_diff_length(self): # GH 7512 - df1 = DataFrame([[1,2], [3, 2], [3,4]], columns=['A','B']) - df1a = DataFrame([[1,2], [3,4]], index=[0,2], columns=['A','B']) - df2 = DataFrame([[5,6], [None,None], [2,1]], columns=['X','Y']) - df2a = DataFrame([[5,6], [2,1]], index=[0,2], columns=['X','Y']) + df1 = DataFrame([[1, 2], [3, 2], [3, 4]], columns=['A', 'B']) + df1a = DataFrame([[1, 2], [3, 4]], index=[0, 2], columns=['A', 'B']) + df2 = DataFrame([[5, 6], [None, None], [2, 1]], columns=['X', 'Y']) + df2a = DataFrame([[5, 6], [2, 1]], index=[0, 2], columns=['X', 'Y']) result1 = df1.expanding().corr(df2, pairwise=True)[2] result2 = df1.expanding().corr(df2a, pairwise=True)[2] result3 = df1a.expanding().corr(df2, pairwise=True)[2] result4 = df1a.expanding().corr(df2a, pairwise=True)[2] - expected = DataFrame([[-1.0, -1.0], [-1.0, -1.0]], index=['A','B'], columns=['X','Y']) + expected = DataFrame([[-1.0, -1.0], [-1.0, -1.0]], index=['A', 'B'], + columns=['X', 'Y']) assert_frame_equal(result1, expected) assert_frame_equal(result2, expected) assert_frame_equal(result3, expected) @@ -1964,28 +2109,39 @@ def test_expanding_corr_pairwise_diff_length(self): def test_pairwise_stats_column_names_order(self): # GH 7738 - df1s = [DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[0,1]), - DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[1,0]), - 
DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[1,1]), - DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=['C','C']), - DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[1.,0]), - DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[0.,1]), - DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=['C',1]), - DataFrame([[2.,4.],[1.,2.],[5.,2.],[8.,1.]], columns=[1,0.]), - DataFrame([[2,4.],[1,2.],[5,2.],[8,1.]], columns=[0,1.]), - DataFrame([[2,4],[1,2],[5,2],[8,1.]], columns=[1.,'X']), - ] - df2 = DataFrame([[None,1,1],[None,1,2],[None,3,2],[None,8,1]], columns=['Y','Z','X']) - s = Series([1,1,3,8]) - - # suppress warnings about incomparable objects, as we are deliberately testing with such column labels + df1s = [DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[0, 1]), + DataFrame( + [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 0]), + DataFrame( + [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 1]), + DataFrame( + [[2, 4], [1, 2], [5, 2], [8, 1]], columns=['C', 'C']), + DataFrame( + [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1., 0]), + DataFrame( + [[2, 4], [1, 2], [5, 2], [8, 1]], columns=[0., 1]), + DataFrame( + [[2, 4], [1, 2], [5, 2], [8, 1]], columns=['C', 1]), + DataFrame( + [[2., 4.], [1., 2.], [5., 2.], [8., 1.]], columns=[1, 0.]), + DataFrame( + [[2, 4.], [1, 2.], [5, 2.], [8, 1.]], columns=[0, 1.]), + DataFrame( + [[2, 4], [1, 2], [5, 2], [8, 1.]], columns=[1., 'X']), ] + df2 = DataFrame( + [[None, 1, 1], [None, 1, 2], [None, 3, 2], [None, 8, 1] + ], columns=['Y', 'Z', 'X']) + s = Series([1, 1, 3, 8]) + + # suppress warnings about incomparable objects, as we are deliberately + # testing with such column labels with warnings.catch_warnings(): - warnings.filterwarnings("ignore", message=".*incomparable objects.*", category=RuntimeWarning) + warnings.filterwarnings("ignore", + message=".*incomparable objects.*", + category=RuntimeWarning) # DataFrame methods (which do not call _flex_binary_moment()) - for f in [lambda x: x.cov(), - lambda x: x.corr(), - ]: + for f in [lambda x: 
x.cov(), lambda x: x.corr(), ]: results = [f(df) for df in df1s] for (df, result) in zip(df1s, results): assert_index_equal(result.index, df.columns) @@ -2000,8 +2156,7 @@ def test_pairwise_stats_column_names_order(self): lambda x: x.rolling(window=3).cov(pairwise=True), lambda x: x.rolling(window=3).corr(pairwise=True), lambda x: x.ewm(com=3).cov(pairwise=True), - lambda x: x.ewm(com=3).corr(pairwise=True), - ]: + lambda x: x.ewm(com=3).corr(pairwise=True), ]: results = [f(df) for df in df1s] for (df, result) in zip(df1s, results): assert_index_equal(result.items, df.index) @@ -2017,8 +2172,7 @@ def test_pairwise_stats_column_names_order(self): lambda x: x.rolling(window=3).cov(pairwise=False), lambda x: x.rolling(window=3).corr(pairwise=False), lambda x: x.ewm(com=3).cov(pairwise=False), - lambda x: x.ewm(com=3).corr(pairwise=False), - ]: + lambda x: x.ewm(com=3).corr(pairwise=False), ]: results = [f(df) for df in df1s] for (df, result) in zip(df1s, results): assert_index_equal(result.index, df.index) @@ -2033,8 +2187,7 @@ def test_pairwise_stats_column_names_order(self): lambda x, y: x.rolling(window=3).cov(y, pairwise=True), lambda x, y: x.rolling(window=3).corr(y, pairwise=True), lambda x, y: x.ewm(com=3).cov(y, pairwise=True), - lambda x, y: x.ewm(com=3).corr(y, pairwise=True), - ]: + lambda x, y: x.ewm(com=3).corr(y, pairwise=True), ]: results = [f(df, df2) for df in df1s] for (df, result) in zip(df1s, results): assert_index_equal(result.items, df.index) @@ -2050,9 +2203,9 @@ def test_pairwise_stats_column_names_order(self): lambda x, y: x.rolling(window=3).cov(y, pairwise=False), lambda x, y: x.rolling(window=3).corr(y, pairwise=False), lambda x, y: x.ewm(com=3).cov(y, pairwise=False), - lambda x, y: x.ewm(com=3).corr(y, pairwise=False), - ]: - results = [f(df, df2) if df.columns.is_unique else None for df in df1s] + lambda x, y: x.ewm(com=3).corr(y, pairwise=False), ]: + results = [f(df, df2) if df.columns.is_unique else None + for df in df1s] for (df, 
result) in zip(df1s, results): if result is not None: expected_index = df.index.union(df2.index) @@ -2060,8 +2213,12 @@ def test_pairwise_stats_column_names_order(self): assert_index_equal(result.index, expected_index) assert_index_equal(result.columns, expected_columns) else: - tm.assertRaisesRegexp(ValueError, "'arg1' columns are not unique", f, df, df2) - tm.assertRaisesRegexp(ValueError, "'arg2' columns are not unique", f, df2, df) + tm.assertRaisesRegexp( + ValueError, "'arg1' columns are not unique", f, df, + df2) + tm.assertRaisesRegexp( + ValueError, "'arg2' columns are not unique", f, + df2, df) # DataFrame with a Series for f in [lambda x, y: x.expanding().cov(y), @@ -2069,8 +2226,7 @@ def test_pairwise_stats_column_names_order(self): lambda x, y: x.rolling(window=3).cov(y), lambda x, y: x.rolling(window=3).corr(y), lambda x, y: x.ewm(com=3).cov(y), - lambda x, y: x.ewm(com=3).corr(y), - ]: + lambda x, y: x.ewm(com=3).corr(y), ]: results = [f(df, s) for df in df1s] + [f(s, df) for df in df1s] for (df, result) in zip(df1s, results): assert_index_equal(result.index, df.index) @@ -2094,10 +2250,9 @@ def test_rolling_skew_edge_cases(self): assert_series_equal(all_nan, x) # yields [NaN, NaN, NaN, 0.177994, 1.548824] - d = Series([-1.50837035, -0.1297039 , 0.19501095, - 1.73508164, 0.41941401]) - expected = Series([np.NaN, np.NaN, np.NaN, - 0.177994, 1.548824]) + d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401 + ]) + expected = Series([np.NaN, np.NaN, np.NaN, 0.177994, 1.548824]) x = d.rolling(window=4).skew() assert_series_equal(expected, x) @@ -2116,10 +2271,9 @@ def test_rolling_kurt_edge_cases(self): assert_series_equal(all_nan, x) # yields [NaN, NaN, NaN, 1.224307, 2.671499] - d = Series([-1.50837035, -0.1297039 , 0.19501095, - 1.73508164, 0.41941401]) - expected = Series([np.NaN, np.NaN, np.NaN, - 1.224307, 2.671499]) + d = Series([-1.50837035, -0.1297039, 0.19501095, 1.73508164, 0.41941401 + ]) + expected = Series([np.NaN, np.NaN, 
np.NaN, 1.224307, 2.671499]) x = d.rolling(window=4).kurt() assert_series_equal(expected, x) @@ -2127,17 +2281,16 @@ def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True, has_time_rule=True, preserve_nan=True): result = func(self.arr) - assert_almost_equal(result[10], - static_comp(self.arr[:11])) + assert_almost_equal(result[10], static_comp(self.arr[:11])) if preserve_nan: - assert(np.isnan(result[self._nan_locs]).all()) + assert (np.isnan(result[self._nan_locs]).all()) arr = randn(50) if has_min_periods: result = func(arr, min_periods=30) - assert(np.isnan(result[:29]).all()) + assert (np.isnan(result[:29]).all()) assert_almost_equal(result[-1], static_comp(arr[:50])) # min_periods is working correctly @@ -2165,8 +2318,7 @@ def _check_expanding_structures(self, func): self.assertEqual(type(frame_result), DataFrame) def _check_expanding(self, func, static_comp, has_min_periods=True, - has_time_rule=True, - preserve_nan=True): + has_time_rule=True, preserve_nan=True): with warnings.catch_warnings(record=True): self._check_expanding_ndarray(func, static_comp, has_min_periods=has_min_periods, @@ -2188,8 +2340,7 @@ def test_rolling_max_gh6297(self): series = series.sort_index() expected = Series([1.0, 2.0, 6.0, 4.0, 5.0], - index=[datetime(1975, 1, i, 0) - for i in range(1, 6)]) + index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').max() assert_series_equal(expected, x) @@ -2208,30 +2359,26 @@ def test_rolling_max_how_resample(self): # Default how should be max expected = Series([0.0, 1.0, 2.0, 3.0, 20.0], - index=[datetime(1975, 1, i, 0) - for i in range(1, 6)]) + index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').max() assert_series_equal(expected, x) # Now specify median (10.0) expected = Series([0.0, 1.0, 2.0, 3.0, 
10.0], - index=[datetime(1975, 1, i, 0) - for i in range(1, 6)]) + index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').max(how='median') assert_series_equal(expected, x) # Now specify mean (4+10+20)/3 - v = (4.0+10.0+20.0)/3.0 + v = (4.0 + 10.0 + 20.0) / 3.0 expected = Series([0.0, 1.0, 2.0, 3.0, v], - index=[datetime(1975, 1, i, 0) - for i in range(1, 6)]) + index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').max(how='mean') assert_series_equal(expected, x) - def test_rolling_min_how_resample(self): indices = [datetime(1975, 1, i) for i in range(1, 6)] @@ -2246,8 +2393,7 @@ def test_rolling_min_how_resample(self): # Default how should be min expected = Series([0.0, 1.0, 2.0, 3.0, 4.0], - index=[datetime(1975, 1, i, 0) - for i in range(1, 6)]) + index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): r = series.rolling(window=1, freq='D') assert_series_equal(expected, r.min()) @@ -2266,8 +2412,7 @@ def test_rolling_median_how_resample(self): # Default how should be median expected = Series([0.0, 1.0, 2.0, 3.0, 10], - index=[datetime(1975, 1, i, 0) - for i in range(1, 6)]) + index=[datetime(1975, 1, i, 0) for i in range(1, 6)]) with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): x = series.rolling(window=1, freq='D').median() assert_series_equal(expected, x) @@ -2277,8 +2422,3 @@ def test_rolling_median_memory_error(self): n = 20000 Series(np.random.randn(n)).rolling(window=2, center=False).median() Series(np.random.randn(n)).rolling(window=2, center=False).median() - -if __name__ == '__main__': - import nose - nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], - exit=False)
Ouch my wrists. I will rebase and fix the post-rebase entropy tomorrow.
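The test refactor above repeatedly asserts that an expanding aggregation matches a full-window rolling aggregation with `min_periods=1` (e.g. `test_expanding_corr`, `test_expanding_quantile`). A minimal sketch of that equivalence, with made-up data:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# expanding() aggregates over an ever-growing prefix of the series...
exp = s.expanding().mean()

# ...which is the same as rolling with a window covering the whole
# series, as long as min_periods=1 allows partial windows.
roll = s.rolling(window=len(s), min_periods=1).mean()

assert exp.equals(roll)  # both are [1.0, 1.5, 2.0, 2.5]
```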
https://api.github.com/repos/pandas-dev/pandas/pulls/12069
2016-01-17T06:54:09Z
2016-01-17T16:46:00Z
null
2016-01-18T00:32:51Z
CI: use wheels on all numpy dev builds
diff --git a/.travis.yml b/.travis.yml index 959a1f7e11e41..049a5c056928c 100644 --- a/.travis.yml +++ b/.travis.yml @@ -42,7 +42,6 @@ matrix: - NOSE_ARGS="not slow and not disabled" - FULL_DEPS=true - CLIPBOARD_GUI=gtk2 - - BUILD_TYPE=conda - DOC_BUILD=true # if rst files were changed, build docs in parallel with tests - python: 3.4 env: @@ -57,14 +56,12 @@ matrix: - NOSE_ARGS="not slow and not network and not disabled" - FULL_DEPS=true - CLIPBOARD=xsel - - BUILD_TYPE=conda - python: 2.7 env: - JOB_NAME: "27_slow" - JOB_TAG=_SLOW - NOSE_ARGS="slow and not network and not disabled" - FULL_DEPS=true - - BUILD_TYPE=conda - python: 3.4 env: - JOB_NAME: "34_slow" @@ -72,37 +69,24 @@ matrix: - NOSE_ARGS="slow and not network and not disabled" - FULL_DEPS=true - CLIPBOARD=xsel - - BUILD_TYPE=conda - python: 2.7 env: - JOB_NAME: "27_build_test_conda" - JOB_TAG=_BUILD_TEST - NOSE_ARGS="not slow and not disabled" - FULL_DEPS=true - - BUILD_TYPE=conda - - BUILD_TEST=true - - python: 2.7 - env: - - JOB_NAME: "27_build_test_pydata" - - JOB_TAG=_BUILD_TEST - - NOSE_ARGS="not slow and not disabled" - - FULL_DEPS=true - - BUILD_TYPE=pydata - BUILD_TEST=true - python: 2.7 env: - - JOB_NAME: "27_numpy_master" - - JOB_TAG=_NUMPY_DEV_master + - JOB_NAME: "27_numpy_dev" + - JOB_TAG=_NUMPY_DEV - NOSE_ARGS="not slow and not network and not disabled" - - NUMPY_BUILD=master - - BUILD_TYPE=pydata - PANDAS_TESTING_MODE="deprecate" - python: 3.5 env: - JOB_NAME: "35_numpy_dev" - JOB_TAG=_NUMPY_DEV - NOSE_ARGS="not slow and not network and not disabled" - - BUILD_TYPE=conda - PANDAS_TESTING_MODE="deprecate" allow_failures: - python: 2.7 @@ -111,7 +95,6 @@ matrix: - JOB_TAG=_SLOW - NOSE_ARGS="slow and not network and not disabled" - FULL_DEPS=true - - BUILD_TYPE=conda - python: 3.4 env: - JOB_NAME: "34_slow" @@ -119,14 +102,11 @@ matrix: - NOSE_ARGS="slow and not network and not disabled" - FULL_DEPS=true - CLIPBOARD=xsel - - BUILD_TYPE=conda - python: 2.7 env: - - JOB_NAME: "27_numpy_master" 
- - JOB_TAG=_NUMPY_DEV_master + - JOB_NAME: "27_numpy_dev" + - JOB_TAG=_NUMPY_DEV - NOSE_ARGS="not slow and not network and not disabled" - - NUMPY_BUILD=master - - BUILD_TYPE=pydata - PANDAS_TESTING_MODE="deprecate" - python: 2.7 env: @@ -134,22 +114,12 @@ matrix: - JOB_TAG=_BUILD_TEST - NOSE_ARGS="not slow and not disabled" - FULL_DEPS=true - - BUILD_TYPE=conda - - BUILD_TEST=true - - python: 2.7 - env: - - JOB_NAME: "27_build_test_pydata" - - JOB_TAG=_BUILD_TEST - - NOSE_ARGS="not slow and not disabled" - - FULL_DEPS=true - - BUILD_TYPE=pydata - BUILD_TEST=true - python: 3.5 env: - JOB_NAME: "35_numpy_dev" - JOB_TAG=_NUMPY_DEV - NOSE_ARGS="not slow and not network and not disabled" - - BUILD_TYPE=conda - PANDAS_TESTING_MODE="deprecate" before_install: @@ -162,7 +132,7 @@ before_install: - pwd - uname -a - python -V - - ci/before_install.sh + - ci/before_install_travis.sh # Xvfb stuff for clipboard functionality; see the travis-ci documentation - export DISPLAY=:99.0 - sh -e /etc/init.d/xvfb start @@ -170,7 +140,7 @@ before_install: install: - echo "install" - ci/prep_ccache.sh - - ci/install_${BUILD_TYPE}.sh + - ci/install_travis.sh - ci/submit_ccache.sh before_script: diff --git a/ci/before_install.sh b/ci/before_install_travis.sh similarity index 100% rename from ci/before_install.sh rename to ci/before_install_travis.sh diff --git a/ci/install-2.7_NUMPY_DEV.sh b/ci/install-2.7_NUMPY_DEV.sh new file mode 100644 index 0000000000000..00b6255daf70f --- /dev/null +++ b/ci/install-2.7_NUMPY_DEV.sh @@ -0,0 +1,20 @@ +#!/bin/bash + +source activate pandas + +echo "install numpy master wheel" + +# remove the system installed numpy +pip uninstall numpy -y + +# we need these for numpy + +# these wheels don't play nice with the conda libgfortran / openblas +# time conda install -n pandas libgfortran openblas || exit 1 + +time sudo apt-get $APT_ARGS install libatlas-base-dev gfortran + +# install numpy wheel from master +pip install --pre --upgrade --no-index --timeout=60 
--trusted-host travis-dev-wheels.scipy.org -f http://travis-dev-wheels.scipy.org/ numpy + +true diff --git a/ci/install_pydata.sh b/ci/install_pydata.sh deleted file mode 100755 index 667b57897be7e..0000000000000 --- a/ci/install_pydata.sh +++ /dev/null @@ -1,159 +0,0 @@ -#!/bin/bash - -# There are 2 distinct pieces that get zipped and cached -# - The venv site-packages dir including the installed dependencies -# - The pandas build artifacts, using the build cache support via -# scripts/use_build_cache.py -# -# if the user opted in to use the cache and we're on a whitelisted fork -# - if the server doesn't hold a cached version of venv/pandas build, -# do things the slow way, and put the results on the cache server -# for the next time. -# - if the cache files are available, instal some necessaries via apt -# (no compiling needed), then directly goto script and collect 200$. -# - -function edit_init() -{ - if [ -n "$LOCALE_OVERRIDE" ]; then - echo "Adding locale to the first line of pandas/__init__.py" - rm -f pandas/__init__.pyc - sedc="3iimport locale\nlocale.setlocale(locale.LC_ALL, '$LOCALE_OVERRIDE')\n" - sed -i "$sedc" pandas/__init__.py - echo "head -4 pandas/__init__.py" - head -4 pandas/__init__.py - echo - fi -} - -edit_init - -python_major_version="${TRAVIS_PYTHON_VERSION:0:1}" -[ "$python_major_version" == "2" ] && python_major_version="" - -home_dir=$(pwd) -echo "home_dir: [$home_dir]" - -# known working -# pip==1.5.1 -# setuptools==2.2 -# wheel==0.22 -# nose==1.3.3 - -pip install -I -U pip -pip install -I -U setuptools -pip install wheel==0.22 -#pip install nose==1.3.3 -pip install nose==1.3.4 - -# comment this line to disable the fetching of wheel files -base_url=http://pandas.pydata.org/pandas-build/dev/wheels - -wheel_box=${TRAVIS_PYTHON_VERSION}${JOB_TAG} -PIP_ARGS+=" -I --use-wheel --find-links=$base_url/$wheel_box/ --allow-external --allow-insecure" - -if [ -n "$LOCALE_OVERRIDE" ]; then - # make sure the locale is available - # probably useless, 
since you would need to relogin - time sudo locale-gen "$LOCALE_OVERRIDE" -fi - -# we need these for numpy -time sudo apt-get $APT_ARGS install libatlas-base-dev gfortran - -if [ -n "$NUMPY_BUILD" ]; then - # building numpy - - cd $home_dir - echo "cloning numpy" - - rm -Rf /tmp/numpy - cd /tmp - - # remove the system installed numpy - pip uninstall numpy -y - - # install cython - pip install --find-links http://wheels.astropy.org/ --find-links http://wheels2.astropy.org/ --use-wheel Cython - - # clone & install - git clone --branch $NUMPY_BUILD https://github.com/numpy/numpy.git numpy - cd numpy - time pip install . - pip uninstall cython -y - - cd $home_dir - numpy_version=$(python -c 'import numpy; print(numpy.__version__)') - echo "[$home_dir] numpy current: $numpy_version" -fi - -# Force virtualenv to accept system_site_packages -rm -f $VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/no-global-site-packages.txt - -# build deps -time pip install $PIP_ARGS -r ci/requirements-${wheel_box}.build - -# Need to enable for locale testing. The location of the locale file(s) is -# distro specific. 
For example, on Arch Linux all of the locales are in a -# commented file--/etc/locale.gen--that must be commented in to be used -# whereas Ubuntu looks in /var/lib/locales/supported.d/* and generates locales -# based on what's in the files in that folder -time echo 'it_CH.UTF-8 UTF-8' | sudo tee -a /var/lib/locales/supported.d/it -time sudo locale-gen - - -# install gui for clipboard testing -if [ -n "$CLIPBOARD_GUI" ]; then - echo "Using CLIPBOARD_GUI: $CLIPBOARD_GUI" - [ -n "$python_major_version" ] && py="py" - python_cb_gui_pkg=python${python_major_version}-${py}${CLIPBOARD_GUI} - time sudo apt-get $APT_ARGS install $python_cb_gui_pkg -fi - - -# install a clipboard if $CLIPBOARD is not empty -if [ -n "$CLIPBOARD" ]; then - echo "Using clipboard: $CLIPBOARD" - time sudo apt-get $APT_ARGS install $CLIPBOARD -fi - - -# Optional Deps -if [ -n "$FULL_DEPS" ]; then - echo "Installing FULL_DEPS" - - # need libhdf5 for PyTables - time sudo apt-get $APT_ARGS install libhdf5-serial-dev -fi - - -# set the compiler cache to work -if [ "$IRON_TOKEN" ]; then - export PATH=/usr/lib/ccache:/usr/lib64/ccache:$PATH - gcc=$(which gcc) - echo "gcc: $gcc" - ccache=$(which ccache) - echo "ccache: $ccache" - export CC='ccache gcc' -fi - -# build pandas -if [ "$BUILD_TEST" ]; then - pip uninstall --yes cython - pip install cython==0.15.1 - ( python setup.py build_ext --inplace ) || true - ( python setup.py develop ) || true -else - python setup.py build_ext --inplace - python setup.py develop -fi - -# install the run libs -time pip install $PIP_ARGS -r ci/requirements-${wheel_box}.run - -# restore cython (if not numpy building) -if [ -z "$NUMPY_BUILD" ]; then - time pip install $PIP_ARGS $(cat ci/requirements-${wheel_box}.txt | grep -i cython) -fi - -true diff --git a/ci/install_conda.sh b/ci/install_travis.sh similarity index 100% rename from ci/install_conda.sh rename to ci/install_travis.sh diff --git a/ci/requirements-2.7_NUMPY_DEV_master.build 
b/ci/requirements-2.7_NUMPY_DEV.build similarity index 58% rename from ci/requirements-2.7_NUMPY_DEV_master.build rename to ci/requirements-2.7_NUMPY_DEV.build index 7d1d11daf9eeb..d15edbfa3d2c1 100644 --- a/ci/requirements-2.7_NUMPY_DEV_master.build +++ b/ci/requirements-2.7_NUMPY_DEV.build @@ -1,3 +1,3 @@ python-dateutil pytz -cython==0.19.1 +cython diff --git a/ci/requirements-2.7_NUMPY_DEV.run b/ci/requirements-2.7_NUMPY_DEV.run new file mode 100644 index 0000000000000..0aa987baefb1d --- /dev/null +++ b/ci/requirements-2.7_NUMPY_DEV.run @@ -0,0 +1,2 @@ +python-dateutil +pytz diff --git a/ci/requirements-2.7_NUMPY_DEV_master.run b/ci/requirements-2.7_NUMPY_DEV_master.run deleted file mode 100644 index e69de29bb2d1d..0000000000000
- remove pydata 2.7 test build
- change 2.7 numpy_dev build to use wheels
- remove need for BUILD_TYPE, all builds are now conda

closes #12057
https://api.github.com/repos/pandas-dev/pandas/pulls/12068
2016-01-17T03:55:02Z
2016-01-17T16:21:48Z
2016-01-17T16:21:48Z
2016-01-17T16:21:48Z
CI: add 3.5 build with numpy_dev installed by wheel from master, #12057
diff --git a/.travis.yml b/.travis.yml index 9fdb98c0124b8..959a1f7e11e41 100644 --- a/.travis.yml +++ b/.travis.yml @@ -97,6 +97,13 @@ matrix: - NUMPY_BUILD=master - BUILD_TYPE=pydata - PANDAS_TESTING_MODE="deprecate" + - python: 3.5 + env: + - JOB_NAME: "35_numpy_dev" + - JOB_TAG=_NUMPY_DEV + - NOSE_ARGS="not slow and not network and not disabled" + - BUILD_TYPE=conda + - PANDAS_TESTING_MODE="deprecate" allow_failures: - python: 2.7 env: @@ -137,6 +144,13 @@ matrix: - FULL_DEPS=true - BUILD_TYPE=pydata - BUILD_TEST=true + - python: 3.5 + env: + - JOB_NAME: "35_numpy_dev" + - JOB_TAG=_NUMPY_DEV + - NOSE_ARGS="not slow and not network and not disabled" + - BUILD_TYPE=conda + - PANDAS_TESTING_MODE="deprecate" before_install: - echo "before_install" diff --git a/ci/install-3.5_NUMPY_DEV.sh b/ci/install-3.5_NUMPY_DEV.sh new file mode 100644 index 0000000000000..00b6255daf70f --- /dev/null +++ b/ci/install-3.5_NUMPY_DEV.sh @@ -0,0 +1,20 @@ +#!/bin/bash + +source activate pandas + +echo "install numpy master wheel" + +# remove the system installed numpy +pip uninstall numpy -y + +# we need these for numpy + +# these wheels don't play nice with the conda libgfortran / openblas +# time conda install -n pandas libgfortran openblas || exit 1 + +time sudo apt-get $APT_ARGS install libatlas-base-dev gfortran + +# install numpy wheel from master +pip install --pre --upgrade --no-index --timeout=60 --trusted-host travis-dev-wheels.scipy.org -f http://travis-dev-wheels.scipy.org/ numpy + +true diff --git a/ci/install_conda.sh b/ci/install_conda.sh index 465a4e3f63142..335286d7d1676 100755 --- a/ci/install_conda.sh +++ b/ci/install_conda.sh @@ -81,6 +81,14 @@ conda info -a || exit 1 # build deps REQ="ci/requirements-${TRAVIS_PYTHON_VERSION}${JOB_TAG}.build" time conda create -n pandas python=$TRAVIS_PYTHON_VERSION nose flake8 || exit 1 + +# may have additional installation instructions for this build +INSTALL="ci/install-${TRAVIS_PYTHON_VERSION}${JOB_TAG}.sh" +if [ -e ${INSTALL} 
]; then + time bash $INSTALL || exit 1 +fi + +# install deps time conda install -n pandas --file=${REQ} || exit 1 source activate pandas diff --git a/ci/requirements-3.5.run b/ci/requirements-3.5.run index 64ed11b744ffd..2401a0fc11673 100644 --- a/ci/requirements-3.5.run +++ b/ci/requirements-3.5.run @@ -13,12 +13,10 @@ html5lib lxml matplotlib jinja2 +bottleneck +sqlalchemy +pymysql +psycopg2 -# currently causing some warnings -#sqlalchemy -#pymysql -#psycopg2 - -# not available from conda -#beautiful-soup -#bottleneck +# incompat with conda ATM +# beautiful-soup diff --git a/ci/requirements-3.5_NUMPY_DEV.build b/ci/requirements-3.5_NUMPY_DEV.build new file mode 100644 index 0000000000000..d15edbfa3d2c1 --- /dev/null +++ b/ci/requirements-3.5_NUMPY_DEV.build @@ -0,0 +1,3 @@ +python-dateutil +pytz +cython diff --git a/ci/requirements-3.5_NUMPY_DEV.run b/ci/requirements-3.5_NUMPY_DEV.run new file mode 100644 index 0000000000000..0aa987baefb1d --- /dev/null +++ b/ci/requirements-3.5_NUMPY_DEV.run @@ -0,0 +1,2 @@ +python-dateutil +pytz
- adds back some 3.5 deps previously missing in conda
- closes #12057 (adds a 3.5 build with the numpy dev wheel)
https://api.github.com/repos/pandas-dev/pandas/pulls/12065
2016-01-16T19:37:45Z
2016-01-17T03:20:24Z
2016-01-17T03:20:24Z
2016-01-17T03:20:24Z
BUG in .groupby for single-row DF
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index ce324e8a2dab1..d34c51eea2800 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -497,7 +497,7 @@ Bug Fixes of columns didn't match the number of series provided (:issue:`12039`). - +- Bug in ``.groupby`` where a ``KeyError`` was not raised for a wrong column if there was only one row in the dataframe (:issue:`11741`) - Removed ``millisecond`` property of ``DatetimeIndex``. This would always raise a ``ValueError`` (:issue:`12019`). diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py index 31ed2c38cd29f..27d1c60e0547a 100644 --- a/pandas/core/groupby.py +++ b/pandas/core/groupby.py @@ -2207,11 +2207,12 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True): if not isinstance(key, (tuple, list)): keys = [key] + match_axis_length = False else: keys = key + match_axis_length = len(keys) == len(group_axis) # what are we after, exactly? - match_axis_length = len(keys) == len(group_axis) any_callable = any(callable(g) or isinstance(g, dict) for g in keys) any_groupers = any(isinstance(g, Grouper) for g in keys) any_arraylike = any(isinstance(g, (list, tuple, Series, Index, np.ndarray)) diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py index d067b2fd7b969..5eb8606f4c30c 100644 --- a/pandas/tests/test_groupby.py +++ b/pandas/tests/test_groupby.py @@ -3085,6 +3085,13 @@ def test_groupby_keys_same_size_as_index(self): assert_frame_equal(result, expected) + def test_groupby_one_row(self): + # GH 11741 + df1 = pd.DataFrame(np.random.randn(1, 4), columns=list('ABCD')) + self.assertRaises(KeyError, df1.groupby, 'Z') + df2 = pd.DataFrame(np.random.randn(2, 4), columns=list('ABCD')) + self.assertRaises(KeyError, df2.groupby, 'Z') + def test_groupby_nat_exclude(self): # GH 6992 df = pd.DataFrame({'values': np.random.randn(8),
No KeyError was raised when grouping by a non-existent column. Fixes #11741. Xref issue #11640, PR #11717
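The regression can be exercised with a short sketch mirroring the `test_groupby_one_row` test added in the diff above: before the fix, a single grouping key on a one-row DataFrame was compared against the axis length (1 == 1) and treated as an axis-length match, so the bad key was never validated. With the fix, the one-row case raises `KeyError` just like the multi-row case. (Column names and shapes below are taken from the PR's test; nothing else is assumed.)

```python
import numpy as np
import pandas as pd

# One-row frame: the case that previously slipped past key validation (GH 11741).
df1 = pd.DataFrame(np.random.randn(1, 4), columns=list('ABCD'))
# Multi-row frame: always raised correctly, kept here as the reference behavior.
df2 = pd.DataFrame(np.random.randn(2, 4), columns=list('ABCD'))

def raises_keyerror(frame, key):
    """Return True if grouping ``frame`` by ``key`` raises KeyError."""
    try:
        frame.groupby(key)
        return False
    except KeyError:
        return True

# With the fix applied, both shapes reject the unknown column 'Z'.
print(raises_keyerror(df1, 'Z'), raises_keyerror(df2, 'Z'))
```

The key detail in the patch is that `match_axis_length` is now only computed when a list of keys is passed, so a single scalar key can never be mistaken for a per-row grouper.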
https://api.github.com/repos/pandas-dev/pandas/pulls/12063
2016-01-16T19:18:40Z
2016-01-17T03:43:47Z
null
2016-01-17T09:04:52Z
PEP: pandas/core round 3 (generic, groupby, index)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 958571fdc2218..b970923ff0fe3 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -32,11 +32,10 @@ # goal is to be able to define the docs close to function, while still being # able to share _shared_docs = dict() -_shared_doc_kwargs = dict(axes='keywords for axes', - klass='NDFrame', +_shared_doc_kwargs = dict(axes='keywords for axes', klass='NDFrame', axes_single_arg='int or labels for object', args_transpose='axes to permute (int or label for' - ' object)') + ' object)') def is_dictlike(x): @@ -69,7 +68,6 @@ def _single_replace(self, to_replace, method, inplace, limit): class NDFrame(PandasObject): - """ N-dimensional analogue of DataFrame. Store multi-dimensional in a size-mutable, labeled data structure @@ -80,10 +78,10 @@ class NDFrame(PandasObject): axes : list copy : boolean, default False """ - _internal_names = ['_data', '_cacher', '_item_cache', '_cache', - 'is_copy', '_subtyp', '_index', - '_default_kind', '_default_fill_value', '_metadata', - '__array_struct__', '__array_interface__'] + _internal_names = ['_data', '_cacher', '_item_cache', '_cache', 'is_copy', + '_subtyp', '_index', '_default_kind', + '_default_fill_value', '_metadata', '__array_struct__', + '__array_interface__'] _internal_names_set = set(_internal_names) _accessors = frozenset([]) _metadata = [] @@ -123,8 +121,9 @@ def _init_mgr(self, mgr, axes=None, dtype=None, copy=False): """ passed a manager and a axes dict """ for a, axe in axes.items(): if axe is not None: - mgr = mgr.reindex_axis( - axe, axis=self._get_block_manager_axis(a), copy=False) + mgr = mgr.reindex_axis(axe, + axis=self._get_block_manager_axis(a), + copy=False) # make a copy if explicitly requested if copy: @@ -135,7 +134,7 @@ def _init_mgr(self, mgr, axes=None, dtype=None, copy=False): mgr = mgr.astype(dtype=dtype) return mgr - #---------------------------------------------------------------------- + # 
---------------------------------------------------------------------- # Construction @property @@ -154,7 +153,7 @@ def __unicode__(self): def _dir_additions(self): """ add the string-like attributes from the info_axis """ return set([c for c in self._info_axis - if isinstance(c, string_types) and isidentifier(c)]) + if isinstance(c, string_types) and isidentifier(c)]) @property def _constructor_sliced(self): @@ -170,31 +169,32 @@ def _constructor_expanddim(self): """ raise NotImplementedError - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Axis @classmethod - def _setup_axes( - cls, axes, info_axis=None, stat_axis=None, aliases=None, slicers=None, - axes_are_reversed=False, build_axes=True, ns=None): - """ provide axes setup for the major PandasObjects - - axes : the names of the axes in order (lowest to highest) - info_axis_num : the axis of the selector dimension (int) - stat_axis_num : the number of axis for the default stats (int) - aliases : other names for a single axis (dict) - slicers : how axes slice to others (dict) - axes_are_reversed : boolean whether to treat passed axes as - reversed (DataFrame) - build_axes : setup the axis properties (default True) - """ + def _setup_axes(cls, axes, info_axis=None, stat_axis=None, aliases=None, + slicers=None, axes_are_reversed=False, build_axes=True, + ns=None): + """Provide axes setup for the major PandasObjects. 
+ + Parameters + ---------- + axes : the names of the axes in order (lowest to highest) + info_axis_num : the axis of the selector dimension (int) + stat_axis_num : the number of axis for the default stats (int) + aliases : other names for a single axis (dict) + slicers : how axes slice to others (dict) + axes_are_reversed : boolean whether to treat passed axes as + reversed (DataFrame) + build_axes : setup the axis properties (default True) + """ cls._AXIS_ORDERS = axes cls._AXIS_NUMBERS = dict((a, i) for i, a in enumerate(axes)) cls._AXIS_LEN = len(axes) cls._AXIS_ALIASES = aliases or dict() - cls._AXIS_IALIASES = dict((v, k) - for k, v in cls._AXIS_ALIASES.items()) + cls._AXIS_IALIASES = dict((v, k) for k, v in cls._AXIS_ALIASES.items()) cls._AXIS_NAMES = dict(enumerate(axes)) cls._AXIS_SLICEMAP = slicers or None cls._AXIS_REVERSED = axes_are_reversed @@ -234,29 +234,31 @@ def set_axis(a, i): setattr(cls, k, v) def _construct_axes_dict(self, axes=None, **kwargs): - """ return an axes dictionary for myself """ + """Return an axes dictionary for myself.""" d = dict([(a, self._get_axis(a)) for a in (axes or self._AXIS_ORDERS)]) d.update(kwargs) return d @staticmethod def _construct_axes_dict_from(self, axes, **kwargs): - """ return an axes dictionary for the passed axes """ + """Return an axes dictionary for the passed axes.""" d = dict([(a, ax) for a, ax in zip(self._AXIS_ORDERS, axes)]) d.update(kwargs) return d def _construct_axes_dict_for_slice(self, axes=None, **kwargs): - """ return an axes dictionary for myself """ + """Return an axes dictionary for myself.""" d = dict([(self._AXIS_SLICEMAP[a], self._get_axis(a)) - for a in (axes or self._AXIS_ORDERS)]) + for a in (axes or self._AXIS_ORDERS)]) d.update(kwargs) return d def _construct_axes_from_arguments(self, args, kwargs, require_all=False): - """ construct and returns axes if supplied in args/kwargs - if require_all, raise if all axis arguments are not supplied - return a tuple of (axes, kwargs) """ + 
"""Construct and returns axes if supplied in args/kwargs. + + If require_all, raise if all axis arguments are not supplied + return a tuple of (axes, kwargs). + """ # construct the args args = list(args) @@ -267,10 +269,8 @@ def _construct_axes_from_arguments(self, args, kwargs, require_all=False): if alias is not None: if a in kwargs: if alias in kwargs: - raise TypeError( - "arguments are mutually exclusive for [%s,%s]" % - (a, alias) - ) + raise TypeError("arguments are mutually exclusive " + "for [%s,%s]" % (a, alias)) continue if alias in kwargs: kwargs[a] = kwargs.pop(alias) @@ -280,10 +280,10 @@ def _construct_axes_from_arguments(self, args, kwargs, require_all=False): if a not in kwargs: try: kwargs[a] = args.pop(0) - except (IndexError): + except IndexError: if require_all: - raise TypeError( - "not enough/duplicate arguments specified!") + raise TypeError("not enough/duplicate arguments " + "specified!") axes = dict([(a, kwargs.pop(a, None)) for a in self._AXIS_ORDERS]) return axes, kwargs @@ -331,7 +331,7 @@ def _get_axis(self, axis): return getattr(self, name) def _get_block_manager_axis(self, axis): - """ map the axis to the block_manager axis """ + """Map the axis to the block_manager axis.""" axis = self._get_axis_number(axis) if self._AXIS_REVERSED: m = self._AXIS_LEN - 1 @@ -384,24 +384,24 @@ def _stat_axis(self): @property def shape(self): - "Return a tuple of axis dimensions" + """Return a tuple of axis dimensions""" return tuple(len(self._get_axis(a)) for a in self._AXIS_ORDERS) @property def axes(self): - "Return index label(s) of the internal NDFrame" + """Return index label(s) of the internal NDFrame""" # we do it this way because if we have reversed axes, then # the block manager shows then reversed return [self._get_axis(a) for a in self._AXIS_ORDERS] @property def ndim(self): - "Number of axes / array dimensions" + """Number of axes / array dimensions""" return self._data.ndim @property def size(self): - "number of elements in the NDFrame" 
+ """number of elements in the NDFrame""" return np.prod(self.shape) def _expand_axes(self, key): @@ -418,7 +418,7 @@ def _expand_axes(self, key): def set_axis(self, axis, labels): """ public verson of axis assignment """ - setattr(self,self._get_axis_name(axis),labels) + setattr(self, self._get_axis_name(axis), labels) def _set_axis(self, axis, labels): self._data.set_axis(axis, labels) @@ -448,26 +448,26 @@ def _set_axis(self, axis, labels): def transpose(self, *args, **kwargs): # construct the args - axes, kwargs = self._construct_axes_from_arguments( - args, kwargs, require_all=True) + axes, kwargs = self._construct_axes_from_arguments(args, kwargs, + require_all=True) axes_names = tuple([self._get_axis_name(axes[a]) for a in self._AXIS_ORDERS]) axes_numbers = tuple([self._get_axis_number(axes[a]) - for a in self._AXIS_ORDERS]) + for a in self._AXIS_ORDERS]) # we must have unique axes if len(axes) != len(set(axes)): raise ValueError('Must specify %s unique axes' % self._AXIS_LEN) - new_axes = self._construct_axes_dict_from( - self, [self._get_axis(x) for x in axes_names]) + new_axes = self._construct_axes_dict_from(self, [self._get_axis(x) + for x in axes_names]) new_values = self.values.transpose(axes_numbers) if kwargs.pop('copy', None) or (len(args) and args[-1]): new_values = new_values.copy() if kwargs: raise TypeError('transpose() got an unexpected keyword ' - 'argument "{0}"'.format(list(kwargs.keys())[0])) + 'argument "{0}"'.format(list(kwargs.keys())[0])) return self._constructor(new_values, **new_axes).__finalize__(self) @@ -511,10 +511,10 @@ def pop(self, item): return result def squeeze(self): - """ squeeze length 1 dimensions """ + """Squeeze length 1 dimensions.""" try: return self.iloc[tuple([0 if len(a) == 1 else slice(None) - for a in self.axes])] + for a in self.axes])] except: return self @@ -537,7 +537,7 @@ def swaplevel(self, i, j, axis=0): result._data.set_axis(axis, labels.swaplevel(i, j)) return result - 
#---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Rename # TODO: define separate funcs for DataFrame, Series and Panel so you can @@ -573,14 +573,15 @@ def rename(self, *args, **kwargs): if kwargs: raise TypeError('rename() got an unexpected keyword ' - 'argument "{0}"'.format(list(kwargs.keys())[0])) + 'argument "{0}"'.format(list(kwargs.keys())[0])) - if (com._count_not_none(*axes.values()) == 0): + if com._count_not_none(*axes.values()) == 0: raise TypeError('must pass an index to rename') # renamer function if passed a dict def _get_rename_function(mapper): if isinstance(mapper, (dict, ABCSeries)): + def f(x): if x in mapper: return mapper[x] @@ -635,7 +636,7 @@ def rename_axis(self, mapper, axis=0, copy=True, inplace=False): d[axis] = mapper return self.rename(**d) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Comparisons def _indexed_same(self, other): @@ -664,14 +665,14 @@ def __invert__(self): def equals(self, other): """ - Determines if two NDFrame objects contain the same elements. NaNs in the - same location are considered equal. + Determines if two NDFrame objects contain the same elements. NaNs in + the same location are considered equal. """ if not isinstance(other, self._constructor): return False return self._data.equals(other._data) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Iteration def __hash__(self): @@ -679,9 +680,7 @@ def __hash__(self): ' hashed'.format(self.__class__.__name__)) def __iter__(self): - """ - Iterate over infor axis - """ + """Iterate over infor axis""" return iter(self._info_axis) # can we get a better explanation of this? 
@@ -689,7 +688,8 @@ def keys(self): """Get the 'info axis' (see Indexing for more) This is index for Series, columns for DataFrame and major_axis for - Panel.""" + Panel. + """ return self._info_axis def iteritems(self): @@ -707,21 +707,21 @@ def iteritems(self): def iterkv(self, *args, **kwargs): "iteritems alias used to get around 2to3. Deprecated" warnings.warn("iterkv is deprecated and will be removed in a future " - "release, use ``iteritems`` instead.", - FutureWarning, stacklevel=2) + "release, use ``iteritems`` instead.", FutureWarning, + stacklevel=2) return self.iteritems(*args, **kwargs) def __len__(self): - """Returns length of info axis """ + """Returns length of info axis""" return len(self._info_axis) def __contains__(self, key): - """True if the key is in the info axis """ + """True if the key is in the info axis""" return key in self._info_axis @property def empty(self): - "True if NDFrame is entirely empty [no items]" + """True if NDFrame is entirely empty [no items]""" return not all(len(self._get_axis(a)) > 0 for a in self._AXIS_ORDERS) def __nonzero__(self): @@ -732,11 +732,12 @@ def __nonzero__(self): __bool__ = __nonzero__ def bool(self): - """ Return the bool of a single element PandasObject - This must be a boolean scalar value, either True or False + """Return the bool of a single element PandasObject. - Raise a ValueError if the PandasObject does not have exactly - 1 element, or that element is not boolean """ + This must be a boolean scalar value, either True or False. 
Raise a + ValueError if the PandasObject does not have exactly 1 element, or that + element is not boolean + """ v = self.squeeze() if isinstance(v, (bool, np.bool_)): return bool(v) @@ -749,10 +750,10 @@ def bool(self): def __abs__(self): return self.abs() - def __round__(self,decimals=0): + def __round__(self, decimals=0): return self.round(decimals) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Array Interface def __array__(self, dtype=None): @@ -764,24 +765,24 @@ def __array_wrap__(self, result, context=None): # ideally we would define this to avoid the getattr checks, but # is slower - #@property - #def __array_interface__(self): + # @property + # def __array_interface__(self): # """ provide numpy array interface method """ # values = self.values # return dict(typestr=values.dtype.str,shape=values.shape,data=values) def to_dense(self): - "Return dense representation of NDFrame (as opposed to sparse)" + """Return dense representation of NDFrame (as opposed to sparse)""" # compat return self - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Picklability def __getstate__(self): meta = dict((k, getattr(self, k, None)) for k in self._metadata) - return dict(_data=self._data, _typ=self._typ, - _metadata=self._metadata, **meta) + return dict(_data=self._data, _typ=self._typ, _metadata=self._metadata, + **meta) def __setstate__(self, state): @@ -822,10 +823,10 @@ def __setstate__(self, state): self._item_cache = {} - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # IO - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # I/O Methods def to_json(self, path_or_buf=None, 
orient=None, date_format='epoch', @@ -886,17 +887,14 @@ def to_json(self, path_or_buf=None, orient=None, date_format='epoch', """ from pandas.io import json - return json.to_json( - path_or_buf=path_or_buf, - obj=self, orient=orient, - date_format=date_format, - double_precision=double_precision, - force_ascii=force_ascii, - date_unit=date_unit, - default_handler=default_handler) + return json.to_json(path_or_buf=path_or_buf, obj=self, orient=orient, + date_format=date_format, + double_precision=double_precision, + force_ascii=force_ascii, date_unit=date_unit, + default_handler=default_handler) def to_hdf(self, path_or_buf, key, **kwargs): - """ activate the HDFStore + """Activate the HDFStore. Parameters ---------- @@ -975,8 +973,8 @@ def to_sql(self, name, con, flavor='sqlite', schema=None, if_exists='fail', If a DBAPI2 object, only sqlite3 is supported. flavor : {'sqlite', 'mysql'}, default 'sqlite' The flavor of SQL to use. Ignored when using SQLAlchemy engine. - 'mysql' is deprecated and will be removed in future versions, but it - will be further supported through SQLAlchemy engines. + 'mysql' is deprecated and will be removed in future versions, but + it will be further supported through SQLAlchemy engines. schema : string, default None Specify the schema (if database flavor supports this). If None, use default schema. @@ -999,14 +997,13 @@ def to_sql(self, name, con, flavor='sqlite', schema=None, if_exists='fail', """ from pandas.io import sql - sql.to_sql( - self, name, con, flavor=flavor, schema=schema, if_exists=if_exists, - index=index, index_label=index_label, chunksize=chunksize, - dtype=dtype) + sql.to_sql(self, name, con, flavor=flavor, schema=schema, + if_exists=if_exists, index=index, index_label=index_label, + chunksize=chunksize, dtype=dtype) def to_pickle(self, path): """ - Pickle (serialize) object to input file path + Pickle (serialize) object to input file path. 
Parameters ---------- @@ -1041,12 +1038,12 @@ def to_clipboard(self, excel=None, sep=None, **kwargs): from pandas.io import clipboard clipboard.to_clipboard(self, excel=excel, sep=sep, **kwargs) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Fancy Indexing @classmethod def _create_indexer(cls, name, indexer): - """ create an indexer like _name in the class """ + """Create an indexer like _name in the class.""" if getattr(cls, name, None) is None: iname = '_%s' % name @@ -1067,7 +1064,7 @@ def _indexer(self): def get(self, key, default=None): """ Get item from object for given key (DataFrame column, Panel slice, - etc.). Returns default value if not found + etc.). Returns default value if not found. Parameters ---------- @@ -1086,7 +1083,7 @@ def __getitem__(self, item): return self._get_item_cache(item) def _get_item_cache(self, item): - """ return the cached item, item represents a label indexer """ + """Return the cached item, item represents a label indexer.""" cache = self._item_cache res = cache.get(item) if res is None: @@ -1100,17 +1097,18 @@ def _get_item_cache(self, item): return res def _set_as_cached(self, item, cacher): - """ set the _cacher attribute on the calling object with - a weakref to cacher """ + """Set the _cacher attribute on the calling object with a weakref to + cacher. 
+ """ self._cacher = (item, weakref.ref(cacher)) def _reset_cacher(self): - """ reset the cacher """ - if hasattr(self,'_cacher'): + """Reset the cacher.""" + if hasattr(self, '_cacher'): del self._cacher def _iget_item_cache(self, item): - """ return the cached item, item represents a positional indexer """ + """Return the cached item, item represents a positional indexer.""" ax = self._info_axis if ax.is_unique: lower = self._get_item_cache(ax[item]) @@ -1122,9 +1120,7 @@ def _box_item_values(self, key, values): raise AbstractMethodError(self) def _maybe_cache_changed(self, item, value): - """ - the object has called back to us saying - maybe it has changed + """The object has called back to us saying maybe it has changed. numpy < 1.8 has an issue with object arrays and aliasing GH6026 @@ -1133,11 +1129,11 @@ def _maybe_cache_changed(self, item, value): @property def _is_cached(self): - """ boolean : return if I am cached """ + """Return boolean indicating if self is cached or not.""" return getattr(self, '_cacher', None) is not None def _get_cacher(self): - """ return my cacher or None """ + """return my cacher or None""" cacher = getattr(self, '_cacher', None) if cacher is not None: cacher = cacher[1]() @@ -1145,14 +1141,13 @@ def _get_cacher(self): @property def _is_view(self): - """ boolean : return if I am a view of another array """ + """Return boolean indicating if self is view of another array """ return self._data.is_view def _maybe_update_cacher(self, clear=False, verify_is_copy=True): """ - - see if we need to update our parent cacher - if clear, then clear our cache + See if we need to update our parent cacher if clear, then clear our + cache. Parameters ---------- @@ -1194,7 +1189,6 @@ def _slice(self, slobj, axis=0, kind=None): Construct a slice of this container. kind parameter is maintained for compatibility with Series slicing. 
- """ axis = self._get_block_manager_axis(axis) result = self._constructor(self._data.get_slice(slobj, axis=axis)) @@ -1202,7 +1196,7 @@ def _slice(self, slobj, axis=0, kind=None): # this could be a view # but only in a single-dtyped view slicable case - is_copy = axis!=0 or result._is_view + is_copy = axis != 0 or result._is_view result._set_is_copy(self, copy=is_copy) return result @@ -1221,18 +1215,20 @@ def _set_is_copy(self, ref=None, copy=True): def _check_is_chained_assignment_possible(self): """ - check if we are a view, have a cacher, and are of mixed type - if so, then force a setitem_copy check + Check if we are a view, have a cacher, and are of mixed type. + If so, then force a setitem_copy check. - should be called just near setting a value + Should be called just near setting a value - will return a boolean if it we are a view and are cached, but a single-dtype - meaning that the cacher should be updated following setting + Will return a boolean if it we are a view and are cached, but a + single-dtype meaning that the cacher should be updated following + setting. """ if self._is_view and self._is_cached: ref = self._get_cacher() if ref is not None and ref._is_mixed_type: - self._check_setitem_copy(stacklevel=4, t='referant', force=True) + self._check_setitem_copy(stacklevel=4, t='referant', + force=True) return True elif self.is_copy: self._check_setitem_copy(stacklevel=4, t='referant') @@ -1255,16 +1251,16 @@ def _check_setitem_copy(self, stacklevel=4, t='setting', force=False): user will see the error *at the level of setting* It is technically possible to figure out that we are setting on - a copy even WITH a multi-dtyped pandas object. In other words, some blocks - may be views while other are not. Currently _is_view will ALWAYS return False - for multi-blocks to avoid having to handle this case. + a copy even WITH a multi-dtyped pandas object. In other words, some + blocks may be views while other are not. 
Currently _is_view will ALWAYS + return False for multi-blocks to avoid having to handle this case. df = DataFrame(np.arange(0,9), columns=['count']) df['group'] = 'b' - # this technically need not raise SettingWithCopy if both are view (which is not - # generally guaranteed but is usually True - # however, this is in general not a good practice and we recommend using .loc + # This technically need not raise SettingWithCopy if both are view + # (which is not # generally guaranteed but is usually True. However, + # this is in general not a good practice and we recommend using .loc. df.iloc[0:5]['group'] = 'a' """ @@ -1302,15 +1298,19 @@ def _check_setitem_copy(self, stacklevel=4, t='setting', force=False): "A value is trying to be set on a copy of a slice from a " "DataFrame\n\n" "See the caveats in the documentation: " - "http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy") + "http://pandas.pydata.org/pandas-docs/stable/" + "indexing.html#indexing-view-versus-copy" + ) else: t = ("\n" "A value is trying to be set on a copy of a slice from a " "DataFrame.\n" - "Try using .loc[row_indexer,col_indexer] = value instead\n\n" - "See the caveats in the documentation: " - "http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy") + "Try using .loc[row_indexer,col_indexer] = value " + "instead\n\nSee the caveats in the documentation: " + "http://pandas.pydata.org/pandas-docs/stable/" + "indexing.html#indexing-view-versus-copy" + ) if value == 'raise': raise SettingWithCopyError(t) @@ -1334,7 +1334,7 @@ def __delitem__(self, key): # Allow shorthand to delete all columns whose first len(key) # elements match key: if not isinstance(key, tuple): - key = (key,) + key = (key, ) for col in self.columns: if isinstance(col, tuple) and col[:len(key)] == key: del self[col] @@ -1382,8 +1382,8 @@ def take(self, indices, axis=0, convert=True, is_copy=True): def xs(self, key, axis=0, level=None, copy=None, drop_level=True): """ 
- Returns a cross-section (row(s) or column(s)) from the Series/DataFrame. - Defaults to cross-section on the rows (axis=0). + Returns a cross-section (row(s) or column(s)) from the + Series/DataFrame. Defaults to cross-section on the rows (axis=0). Parameters ---------- @@ -1446,8 +1446,9 @@ def xs(self, key, axis=0, level=None, copy=None, drop_level=True): ----- xs is only for getting, not setting values. - MultiIndex Slicers is a generic way to get/set values on any level or levels - it is a superset of xs functionality, see :ref:`MultiIndex Slicers <advanced.mi_slicers>` + MultiIndex Slicers is a generic way to get/set values on any level or + levels. It is a superset of xs functionality, see + :ref:`MultiIndex Slicers <advanced.mi_slicers>` """ if copy is not None: @@ -1509,10 +1510,8 @@ def xs(self, key, axis=0, level=None, copy=None, drop_level=True): if not is_list_like(new_values) or self.ndim == 1: return _maybe_box_datetimelike(new_values) - result = Series(new_values, - index=self.columns, - name=self.index[loc], - copy=copy, + result = Series(new_values, index=self.columns, + name=self.index[loc], copy=copy, dtype=new_values.dtype) else: @@ -1555,7 +1554,7 @@ def select(self, crit, axis=0): def reindex_like(self, other, method=None, copy=True, limit=None, tolerance=None): - """ return an object with matching indicies to myself + """Return an object with matching indices to myself. 
Parameters ---------- @@ -1579,15 +1578,15 @@ def reindex_like(self, other, method=None, copy=True, limit=None, ------- reindexed : same as input """ - d = other._construct_axes_dict(axes=self._AXIS_ORDERS, - method=method, copy=copy, limit=limit, - tolerance=tolerance) + d = other._construct_axes_dict(axes=self._AXIS_ORDERS, method=method, + copy=copy, limit=limit, + tolerance=tolerance) return self.reindex(**d) def drop(self, labels, axis=0, level=None, inplace=False, errors='raise'): """ - Return new object with labels in requested axis removed + Return new object with labels in requested axis removed. Parameters ---------- @@ -1629,8 +1628,8 @@ def drop(self, labels, axis=0, level=None, inplace=False, errors='raise'): if level is not None: if not isinstance(axis, MultiIndex): raise AssertionError('axis must be a MultiIndex') - indexer = ~lib.ismember(axis.get_level_values(level).values, - set(labels)) + indexer = ~lib.ismember( + axis.get_level_values(level).values, set(labels)) else: indexer = ~axis.isin(labels) @@ -1646,7 +1645,7 @@ def drop(self, labels, axis=0, level=None, inplace=False, errors='raise'): def _update_inplace(self, result, verify_is_copy=True): """ - replace self internals with result. + Replace self internals with result. Parameters ---------- @@ -1659,7 +1658,7 @@ def _update_inplace(self, result, verify_is_copy=True): self._reset_cache() self._clear_item_cache() - self._data = getattr(result,'_data',result) + self._data = getattr(result, '_data', result) self._maybe_update_cacher(verify_is_copy=verify_is_copy) def add_prefix(self, prefix): @@ -1679,7 +1678,7 @@ def add_prefix(self, prefix): def add_suffix(self, suffix): """ - Concatenate suffix string with panel items names + Concatenate suffix string with panel items names. 
Parameters ---------- @@ -1702,14 +1701,16 @@ def add_suffix(self, suffix): by : string name or list of names which refer to the axis items axis : %(axes)s to direct sorting ascending : bool or list of bool - Sort ascending vs. descending. Specify list for multiple sort orders. - If this is a list of bools, must match the length of the by + Sort ascending vs. descending. Specify list for multiple sort + orders. If this is a list of bools, must match the length of + the by. inplace : bool if True, perform operation in-place kind : {`quicksort`, `mergesort`, `heapsort`} - Choice of sorting algorithm. See also ndarray.np.sort for more information. - `mergesort` is the only stable algorithm. For DataFrames, this option is - only applied when sorting on a single column or label. + Choice of sorting algorithm. See also ndarray.np.sort for more + information. `mergesort` is the only stable algorithm. For + DataFrames, this option is only applied when sorting on a single + column or label. na_position : {'first', 'last'} `first` puts NaNs at the beginning, `last` puts NaNs at the end @@ -1717,6 +1718,7 @@ def add_suffix(self, suffix): ------- sorted_obj : %(klass)s """ + def sort_values(self, by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last'): raise AbstractMethodError(self) @@ -1734,14 +1736,15 @@ def sort_values(self, by, axis=0, ascending=True, inplace=False, inplace : bool if True, perform operation in-place kind : {`quicksort`, `mergesort`, `heapsort`} - Choice of sorting algorithm. See also ndarray.np.sort for more information. - `mergesort` is the only stable algorithm. For DataFrames, this option is - only applied when sorting on a single column or label. + Choice of sorting algorithm. See also ndarray.np.sort for more + information. `mergesort` is the only stable algorithm. For + DataFrames, this option is only applied when sorting on a single + column or label. 
na_position : {'first', 'last'} `first` puts NaNs at the beginning, `last` puts NaNs at the end sort_remaining : bool - if true and sorting by level and index is multilevel, sort by other levels - too (in order) after sorting by specified level + if true and sorting by level and index is multilevel, sort by other + levels too (in order) after sorting by specified level Returns ------- @@ -1784,7 +1787,8 @@ def sort_index(self, axis=0, level=None, ascending=True, inplace=False, Please note: this is only applicable to DataFrames/Series with a monotonically increasing/decreasing index. * default: don't fill gaps - * pad / ffill: propagate last valid observation forward to next valid + * pad / ffill: propagate last valid observation forward to next + valid * backfill / bfill: use next valid observation to fill gap * nearest: use nearest valid observations to fill gap copy : boolean, default True @@ -1923,6 +1927,7 @@ def sort_index(self, axis=0, level=None, ascending=True, inplace=False, ------- reindexed : %(klass)s """ + # TODO: Decide if we care about having different examples for different # kinds @@ -1940,7 +1945,7 @@ def reindex(self, *args, **kwargs): if kwargs: raise TypeError('reindex() got an unexpected keyword ' - 'argument "{0}"'.format(list(kwargs.keys())[0])) + 'argument "{0}"'.format(list(kwargs.keys())[0])) self._consolidate_inplace() @@ -1960,12 +1965,12 @@ def reindex(self, *args, **kwargs): pass # perform the reindex on the axes - return self._reindex_axes(axes, level, limit, tolerance, - method, fill_value, copy).__finalize__(self) + return self._reindex_axes(axes, level, limit, tolerance, method, + fill_value, copy).__finalize__(self) - def _reindex_axes(self, axes, level, limit, tolerance, method, - fill_value, copy): - """ perform the reinxed for all the axes """ + def _reindex_axes(self, axes, level, limit, tolerance, method, fill_value, + copy): + """Perform the reindex for all the axes.""" obj = self for a in self._AXIS_ORDERS: labels = 
axes[a] @@ -1973,30 +1978,29 @@ def _reindex_axes(self, axes, level, limit, tolerance, method, continue ax = self._get_axis(a) - new_index, indexer = ax.reindex( - labels, level=level, limit=limit, tolerance=tolerance, - method=method) + new_index, indexer = ax.reindex(labels, level=level, limit=limit, + tolerance=tolerance, method=method) axis = self._get_axis_number(a) - obj = obj._reindex_with_indexers( - {axis: [new_index, indexer]}, - fill_value=fill_value, copy=copy, allow_dups=False) + obj = obj._reindex_with_indexers({axis: [new_index, indexer]}, + fill_value=fill_value, + copy=copy, allow_dups=False) return obj def _needs_reindex_multi(self, axes, method, level): - """ check if we do need a multi reindex """ + """Check if we do need a multi reindex.""" return ((com._count_not_none(*axes.values()) == self._AXIS_LEN) and method is None and level is None and not self._is_mixed_type) def _reindex_multi(self, axes, copy, fill_value): return NotImplemented - _shared_docs['reindex_axis'] = ( - """Conform input object to new index with optional filling logic, - placing NA/NaN in locations having no value in the previous index. A - new object is produced unless the new index is equivalent to the - current one and copy=False + _shared_docs[ + 'reindex_axis'] = ("""Conform input object to new index with optional + filling logic, placing NA/NaN in locations having no value in the + previous index. 
A new object is produced unless the new index is + equivalent to the current one and copy=False Parameters ---------- @@ -2007,7 +2011,8 @@ def _reindex_multi(self, axes, copy, fill_value): method : {None, 'backfill'/'bfill', 'pad'/'ffill', 'nearest'}, optional Method to use for filling holes in reindexed DataFrame: * default: don't fill gaps - * pad / ffill: propagate last valid observation forward to next valid + * pad / ffill: propagate last valid observation forward to next + valid * backfill / bfill: use next valid observation to fill gap * nearest: use nearest valid observations to fill gap copy : boolean, default True @@ -2047,15 +2052,14 @@ def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True, method = mis._clean_reindex_fill_method(method) new_index, indexer = axis_values.reindex(labels, method, level, limit=limit) - return self._reindex_with_indexers( - {axis: [new_index, indexer]}, fill_value=fill_value, copy=copy) + return self._reindex_with_indexers({axis: [new_index, indexer]}, + fill_value=fill_value, copy=copy) - def _reindex_with_indexers(self, reindexers, - fill_value=np.nan, copy=False, + def _reindex_with_indexers(self, reindexers, fill_value=np.nan, copy=False, allow_dups=False): - """ allow_dups indicates an internal call here """ + """allow_dups indicates an internal call here """ - # reindex doing multiple operations on different axes if indiciated + # reindex doing multiple operations on different axes if indicated new_data = self._data for axis in sorted(reindexers.keys()): index, indexer = reindexers[axis] @@ -2119,11 +2123,11 @@ def filter(self, items=None, like=None, regex=None, axis=None): axis_values = self._get_axis(axis_name) if items is not None: - return self.reindex(**{axis_name: [r for r in items - if r in axis_values]}) + return self.reindex(**{axis_name: + [r for r in items if r in axis_values]}) elif like: - matchf = lambda x: (like in x if isinstance(x, string_types) - else like in str(x)) + matchf = 
lambda x: (like in x if isinstance(x, string_types) else + like in str(x)) return self.select(matchf, axis=axis_name) elif regex: matcher = re.compile(regex) @@ -2146,8 +2150,8 @@ def tail(self, n=5): return self.iloc[0:0] return self.iloc[-n:] - - def sample(self, n=None, frac=None, replace=False, weights=None, random_state=None, axis=None): + def sample(self, n=None, frac=None, replace=False, weights=None, + random_state=None, axis=None): """ Returns a random sample of items from an axis of object. @@ -2252,22 +2256,28 @@ def sample(self, n=None, frac=None, replace=False, weights=None, random_state=No try: weights = self[weights] except KeyError: - raise KeyError("String passed to weights not a valid column") + raise KeyError("String passed to weights not a " + "valid column") else: - raise ValueError("Strings can only be passed to weights when sampling from rows on a DataFrame") + raise ValueError("Strings can only be passed to " + "weights when sampling from rows on " + "a DataFrame") else: - raise ValueError("Strings cannot be passed as weights when sampling from a Series or Panel.") + raise ValueError("Strings cannot be passed as weights " + "when sampling from a Series or Panel.") weights = pd.Series(weights, dtype='float64') if len(weights) != axis_length: - raise ValueError("Weights and axis to be sampled must be of same length") + raise ValueError("Weights and axis to be sampled must be of " + "same length") if (weights == np.inf).any() or (weights == -np.inf).any(): raise ValueError("weight vector may not include `inf` values") if (weights < 0).any(): - raise ValueError("weight vector many not include negative values") + raise ValueError("weight vector may not include negative " + "values") # If has nan, set to zero.
weights = weights.fillna(0) @@ -2289,16 +2299,17 @@ def sample(self, n=None, frac=None, replace=False, weights=None, random_state=No elif n is None and frac is not None: n = int(round(frac * axis_length)) elif n is not None and frac is not None: - raise ValueError('Please enter a value for `frac` OR `n`, not both') + raise ValueError('Please enter a value for `frac` OR `n`, not ' + 'both') # Check for negative sizes if n < 0: - raise ValueError("A negative number of rows requested. Please provide positive value.") + raise ValueError("A negative number of rows requested. Please " + "provide positive value.") locs = rs.choice(axis_length, size=n, replace=replace, p=weights) return self.take(locs, axis=axis, is_copy=False) - _shared_docs['pipe'] = (""" Apply func(self, \*args, \*\*kwargs) @@ -2348,26 +2359,26 @@ def sample(self, n=None, frac=None, replace=False, weights=None, random_state=No pandas.DataFrame.apply pandas.DataFrame.applymap pandas.Series.map - """ - ) + """) + @Appender(_shared_docs['pipe'] % _shared_doc_kwargs) def pipe(self, func, *args, **kwargs): if isinstance(func, tuple): func, target = func if target in kwargs: - msg = '%s is both the pipe target and a keyword argument' % target - raise ValueError(msg) + raise ValueError('%s is both the pipe target and a keyword ' + 'argument' % target) kwargs[target] = self return func(*args, **kwargs) else: return func(self, *args, **kwargs) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Attribute access def __finalize__(self, other, method=None, **kwargs): """ - propagate metadata from other to self + Propagate metadata from other to self. Parameters ---------- @@ -2386,12 +2397,12 @@ def __getattr__(self, name): """After regular attribute access, try looking up the name This allows simpler access to columns for interactive use. 
""" + # Note: obj.x will always call obj.__getattribute__('x') prior to # calling obj.__getattr__('x'). - if (name in self._internal_names_set - or name in self._metadata - or name in self._accessors): + if (name in self._internal_names_set or name in self._metadata or + name in self._accessors): return object.__getattribute__(self, name) else: if name in self._info_axis: @@ -2400,7 +2411,9 @@ def __getattr__(self, name): def __setattr__(self, name, value): """After regular attribute access, try setting the name - This allows simpler access to columns for interactive use.""" + This allows simpler access to columns for interactive use. + """ + # first try regular attribute access via __getattribute__, so that # e.g. ``obj.x`` and ``obj.x = 4`` will always reference/modify # the same attribute. @@ -2429,14 +2442,16 @@ def __setattr__(self, name, value): except (AttributeError, TypeError): object.__setattr__(self, name, value) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Getting and setting elements - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Consolidation of internals def _protect_consolidate(self, f): - """ consolidate _data. 
if the blocks have changed, then clear the cache """ + """Consolidate _data -- if the blocks have changed, then clear the + cache + """ blocks_before = len(self._data.blocks) result = f() if len(self._data.blocks) != blocks_before: @@ -2444,9 +2459,11 @@ def _protect_consolidate(self, f): return result def _consolidate_inplace(self): - """ we are inplace consolidating; return None """ + """Consolidate data in place and return None""" + def f(): self._data = self._data.consolidate() + self._protect_consolidate(f) def consolidate(self, inplace=False): @@ -2499,8 +2516,8 @@ def _check_inplace_setting(self, value): except: pass - raise TypeError( - 'Cannot do inplace boolean setting on mixed-types with a non np.nan value') + raise TypeError('Cannot do inplace boolean setting on ' + 'mixed-types with a non np.nan value') return True @@ -2511,7 +2528,7 @@ def _get_numeric_data(self): def _get_bool_data(self): return self._constructor(self._data.get_bool_data()).__finalize__(self) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Internal Interface Methods def as_matrix(self, columns=None): @@ -2574,7 +2591,7 @@ def values(self): @property def _values(self): - """ internal implementation """ + """internal implementation""" return self.values @property @@ -2583,22 +2600,22 @@ def _get_values(self): return self.as_matrix() def get_values(self): - """ same as values (but handles sparseness conversions) """ + """same as values (but handles sparseness conversions)""" return self.as_matrix() def get_dtype_counts(self): - """ Return the counts of dtypes in this object """ + """Return the counts of dtypes in this object.""" from pandas import Series return Series(self._data.get_dtype_counts()) def get_ftype_counts(self): - """ Return the counts of ftypes in this object """ + """Return the counts of ftypes in this object.""" from pandas import Series return 
Series(self._data.get_ftype_counts()) @property def dtypes(self): - """ Return the dtypes in this object """ + """Return the dtypes in this object.""" from pandas import Series return Series(self._data.get_dtypes(), index=self._info_axis, dtype=np.object_) @@ -2648,7 +2665,7 @@ def as_blocks(self, copy=True): @property def blocks(self): - "Internal property, property synonym for as_blocks()" + """Internal property, property synonym for as_blocks()""" return self.as_blocks() def astype(self, dtype, copy=True, raise_on_error=True, **kwargs): @@ -2667,8 +2684,8 @@ def astype(self, dtype, copy=True, raise_on_error=True, **kwargs): casted : type of caller """ - mgr = self._data.astype( - dtype=dtype, copy=copy, raise_on_error=raise_on_error, **kwargs) + mgr = self._data.astype(dtype=dtype, copy=copy, + raise_on_error=raise_on_error, **kwargs) return self._constructor(mgr).__finalize__(self) def copy(self, deep=True): @@ -2714,11 +2731,9 @@ def _convert(self, datetime=False, numeric=False, timedelta=False, converted : same as input object """ return self._constructor( - self._data.convert(datetime=datetime, - numeric=numeric, - timedelta=timedelta, - coerce=coerce, - copy=copy)).__finalize__(self) + self._data.convert(datetime=datetime, numeric=numeric, + timedelta=timedelta, coerce=coerce, + copy=copy)).__finalize__(self) # TODO: Remove in 0.18 or 2017, which ever is sooner def convert_objects(self, convert_dates=True, convert_numeric=False, @@ -2757,20 +2772,20 @@ def convert_objects(self, convert_dates=True, convert_numeric=False, convert_timedeltas=convert_timedeltas, copy=copy)).__finalize__(self) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Filling NA's - _shared_docs['fillna'] = ( - """ + _shared_docs['fillna'] = (""" Fill NA/NaN values using the specified method Parameters ---------- value : scalar, dict, Series, or DataFrame - Value to use to fill holes 
(e.g. 0), alternately a dict/Series/DataFrame of - values specifying which value to use for each index (for a Series) or - column (for a DataFrame). (values not in the dict/Series/DataFrame will not be - filled). This value cannot be a list. + Value to use to fill holes (e.g. 0), alternately a + dict/Series/DataFrame of values specifying which value to use for + each index (for a Series) or column (for a DataFrame). (values not + in the dict/Series/DataFrame will not be filled). This value cannot + be a list. method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None Method to use for filling holes in reindexed Series pad / ffill: propagate last valid observation forward to next valid @@ -2799,8 +2814,7 @@ def convert_objects(self, convert_dates=True, convert_numeric=False, Returns ------- filled : %(klass)s - """ - ) + """) @Appender(_shared_docs['fillna'] % _shared_doc_kwargs) def fillna(self, value=None, method=None, axis=None, inplace=False, @@ -2833,9 +2847,8 @@ def fillna(self, value=None, method=None, axis=None, inplace=False, # > 3d if self.ndim > 3: - raise NotImplementedError( - 'Cannot fillna with a method for > 3dims' - ) + raise NotImplementedError('Cannot fillna with a method for > ' + '3dims') # 3d elif self.ndim == 3: @@ -2847,12 +2860,9 @@ def fillna(self, value=None, method=None, axis=None, inplace=False, # 2d or less method = mis._clean_fill_method(method) - new_data = self._data.interpolate(method=method, - axis=axis, - limit=limit, - inplace=inplace, - coerce=True, - downcast=downcast) + new_data = self._data.interpolate(method=method, axis=axis, + limit=limit, inplace=inplace, + coerce=True, downcast=downcast) else: if method is not None: raise ValueError('cannot specify both a fill method and value') @@ -2867,10 +2877,10 @@ def fillna(self, value=None, method=None, axis=None, inplace=False, elif not com.is_list_like(value): pass else: - raise ValueError("invalid fill value with a %s" % type(value)) + raise ValueError("invalid fill 
value with a %s" % + type(value)) - new_data = self._data.fillna(value=value, - limit=limit, + new_data = self._data.fillna(value=value, limit=limit, inplace=inplace, downcast=downcast) @@ -2888,8 +2898,7 @@ def fillna(self, value=None, method=None, axis=None, inplace=False, obj.fillna(v, limit=limit, inplace=True) return result elif not com.is_list_like(value): - new_data = self._data.fillna(value=value, - limit=limit, + new_data = self._data.fillna(value=value, limit=limit, inplace=inplace, downcast=downcast) elif isinstance(value, DataFrame) and self.ndim == 2: @@ -2903,12 +2912,12 @@ def fillna(self, value=None, method=None, axis=None, inplace=False, return self._constructor(new_data).__finalize__(self) def ffill(self, axis=None, inplace=False, limit=None, downcast=None): - "Synonym for NDFrame.fillna(method='ffill')" + """Synonym for NDFrame.fillna(method='ffill')""" return self.fillna(method='ffill', axis=axis, inplace=inplace, limit=limit, downcast=downcast) def bfill(self, axis=None, inplace=False, limit=None, downcast=None): - "Synonym for NDFrame.fillna(method='bfill')" + """Synonym for NDFrame.fillna(method='bfill')""" return self.fillna(method='bfill', axis=axis, inplace=inplace, limit=limit, downcast=downcast) @@ -3085,8 +3094,7 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None, if c in value and c in self: res[c] = res[c].replace(to_replace=src, value=value[c], - inplace=False, - regex=regex) + inplace=False, regex=regex) return None if inplace else res # {'A': NA} -> 0 @@ -3116,13 +3124,11 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None, else: # [NA, ''] -> 0 new_data = self._data.replace(to_replace=to_replace, - value=value, - inplace=inplace, + value=value, inplace=inplace, regex=regex) elif to_replace is None: if not (com.is_re_compilable(regex) or - com.is_list_like(regex) or - is_dictlike(regex)): + com.is_list_like(regex) or is_dictlike(regex)): raise TypeError("'regex' must be a string or a 
compiled " "regular expression or a list or dict of " "strings or regular expressions, you " @@ -3139,14 +3145,14 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None, for k, v in compat.iteritems(value): if k in self: new_data = new_data.replace(to_replace=to_replace, - value=v, - filter=[k], + value=v, filter=[k], inplace=inplace, regex=regex) elif not com.is_list_like(value): # NA -> 0 - new_data = self._data.replace(to_replace=to_replace, value=value, - inplace=inplace, regex=regex) + new_data = self._data.replace(to_replace=to_replace, + value=value, inplace=inplace, + regex=regex) else: msg = ('Invalid "to_replace" type: ' '{0!r}').format(type(to_replace).__name__) @@ -3162,8 +3168,8 @@ def interpolate(self, method='linear', axis=0, limit=None, inplace=False, """ Interpolate values according to different methods. - Please note that only ``method='linear'`` is supported for DataFrames/Series - with a MultiIndex. + Please note that only ``method='linear'`` is supported for + DataFrames/Series with a MultiIndex. Parameters ---------- @@ -3187,8 +3193,8 @@ def interpolate(self, method='linear', axis=0, limit=None, inplace=False, wrappers around the scipy interpolation methods of similar names. These use the actual numerical values of the index. 
See the scipy documentation for more on their behavior - `here <http://docs.scipy.org/doc/scipy/reference/interpolate.html#univariate-interpolation>`__ - `and here <http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html>`__ + `here <http://docs.scipy.org/doc/scipy/reference/interpolate.html#univariate-interpolation>`__ # noqa + `and here <http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html>`__ # noqa axis : {0, 1}, default 0 * 0: fill column-by-column @@ -3248,16 +3254,19 @@ def interpolate(self, method='linear', axis=0, limit=None, inplace=False, else: alt_ax = ax - if isinstance(_maybe_transposed_self.index, MultiIndex) and method != 'linear': + if (isinstance(_maybe_transposed_self.index, MultiIndex) and + method != 'linear'): raise ValueError("Only `method=linear` interpolation is supported " "on MultiIndexes.") - if _maybe_transposed_self._data.get_dtype_counts().get('object') == len(_maybe_transposed_self.T): + if _maybe_transposed_self._data.get_dtype_counts().get( + 'object') == len(_maybe_transposed_self.T): raise TypeError("Cannot interpolate with all NaNs.") # create/use the index if method == 'linear': - index = np.arange(len(_maybe_transposed_self._get_axis(alt_ax))) # prior default + # prior default + index = np.arange(len(_maybe_transposed_self._get_axis(alt_ax))) else: index = _maybe_transposed_self._get_axis(alt_ax) @@ -3265,17 +3274,13 @@ def interpolate(self, method='linear', axis=0, limit=None, inplace=False, raise NotImplementedError("Interpolation with NaNs in the index " "has not been implemented. 
Try filling " "those NaNs before interpolating.") - new_data = _maybe_transposed_self._data.interpolate( - method=method, - axis=ax, - index=index, - values=_maybe_transposed_self, - limit=limit, - limit_direction=limit_direction, - inplace=inplace, - downcast=downcast, - **kwargs - ) + data = _maybe_transposed_self._data + new_data = data.interpolate(method=method, axis=ax, index=index, + values=_maybe_transposed_self, limit=limit, + limit_direction=limit_direction, + inplace=inplace, downcast=downcast, + **kwargs) + if inplace: if axis == 1: new_data = self._constructor(new_data).T._data @@ -3286,12 +3291,12 @@ def interpolate(self, method='linear', axis=0, limit=None, inplace=False, res = res.T return res - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Action Methods def isnull(self): """ - Return a boolean same-sized object indicating if the values are null + Return a boolean same-sized object indicating if the values are null. See also -------- @@ -3301,7 +3306,7 @@ def isnull(self): def notnull(self): """Return a boolean same-sized object indicating if the values are - not null + not null. See also -------- @@ -3311,7 +3316,7 @@ def notnull(self): def clip(self, lower=None, upper=None, out=None, axis=None): """ - Trim values at input threshold(s) + Trim values at input threshold(s). Parameters ---------- @@ -3373,7 +3378,7 @@ def clip(self, lower=None, upper=None, out=None, axis=None): def clip_upper(self, threshold, axis=None): """ - Return copy of input with values above given value(s) truncated + Return copy of input with values above given value(s) truncated. Parameters ---------- @@ -3397,7 +3402,7 @@ def clip_upper(self, threshold, axis=None): def clip_lower(self, threshold, axis=None): """ - Return copy of the input with values below given value(s) truncated + Return copy of the input with values below given value(s) truncated. 
Parameters ---------- @@ -3423,7 +3428,7 @@ def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True, group_keys=True, squeeze=False): """ Group series using mapper (dict or key function, apply given function - to group, return result as series) or by a series of columns + to group, return result as series) or by a series of columns. Parameters ---------- @@ -3442,8 +3447,8 @@ def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True, effectively "SQL-style" grouped output sort : boolean, default True Sort group keys. Get better performance by turning this off. - Note this does not influence the order of observations within each group. - groupby preserves the order of rows within each group. + Note this does not influence the order of observations within each + group. groupby preserves the order of rows within each group. group_keys : boolean, default True When calling apply, add group keys to index to identify pieces squeeze : boolean, default False @@ -3496,12 +3501,11 @@ def asfreq(self, freq, method=None, how=None, normalize=False): converted : type of caller """ from pandas.tseries.resample import asfreq - return asfreq(self, freq, method=method, how=how, - normalize=normalize) + return asfreq(self, freq, method=method, how=how, normalize=normalize) def at_time(self, time, asof=False): """ - Select values at particular time of day (e.g. 9:30AM) + Select values at particular time of day (e.g. 9:30AM). Parameters ---------- @@ -3520,7 +3524,7 @@ def at_time(self, time, asof=False): def between_time(self, start_time, end_time, include_start=True, include_end=True): """ - Select values between particular times of the day (e.g., 9:00-9:30 AM) + Select values between particular times of the day (e.g., 9:00-9:30 AM). 
Parameters ---------- @@ -3541,9 +3545,9 @@ def between_time(self, start_time, end_time, include_start=True, except AttributeError: raise TypeError('Index must be DatetimeIndex') - def resample(self, rule, how=None, axis=0, fill_method=None, - closed=None, label=None, convention='start', - kind=None, loffset=None, limit=None, base=0): + def resample(self, rule, how=None, axis=0, fill_method=None, closed=None, + label=None, convention='start', kind=None, loffset=None, + limit=None, base=0): """ Convenience method for frequency conversion and resampling of regular time-series data. @@ -3684,7 +3688,7 @@ def resample(self, rule, how=None, axis=0, fill_method=None, def first(self, offset): """ Convenience method for subsetting initial periods of time series data - based on a date offset + based on a date offset. Parameters ---------- @@ -3719,7 +3723,7 @@ def first(self, offset): def last(self, offset): """ Convenience method for subsetting final periods of time series data - based on a date offset + based on a date offset. 
Parameters ---------- @@ -3747,8 +3751,7 @@ def last(self, offset): start = self.index.searchsorted(start_date, side='right') return self.ix[start:] - _shared_docs['align'] = ( - """ + _shared_docs['align'] = (""" Align two object on their axes with the specified join method for each axis Index @@ -3781,8 +3784,7 @@ def last(self, offset): ------- (left, right) : (%(klass)s, type of other) Aligned objects - """ - ) + """) @Appender(_shared_docs['align'] % _shared_doc_kwargs) def align(self, other, join='outer', axis=None, level=None, copy=True, @@ -3793,15 +3795,18 @@ def align(self, other, join='outer', axis=None, level=None, copy=True, if broadcast_axis == 1 and self.ndim != other.ndim: if isinstance(self, Series): - # this means other is a DataFrame, and we need to broadcast self - df = DataFrame(dict((c, self) for c in other.columns), - **other._construct_axes_dict()) - return df._align_frame(other, join=join, axis=axis, level=level, - copy=copy, fill_value=fill_value, - method=method, limit=limit, - fill_axis=fill_axis) + # this means other is a DataFrame, and we need to broadcast + # self + df = DataFrame( + dict((c, self) for c in other.columns), + **other._construct_axes_dict()) + return df._align_frame(other, join=join, axis=axis, + level=level, copy=copy, + fill_value=fill_value, method=method, + limit=limit, fill_axis=fill_axis) elif isinstance(other, Series): - # this means self is a DataFrame, and we need to broadcast other + # this means self is a DataFrame, and we need to broadcast + # other df = DataFrame(dict((c, other) for c in self.columns), **self._construct_axes_dict()) return self._align_frame(df, join=join, axis=axis, level=level, @@ -3834,15 +3839,13 @@ def _align_frame(self, other, join='outer', axis=None, level=None, if axis is None or axis == 0: if not self.index.equals(other.index): - join_index, ilidx, iridx = \ - self.index.join(other.index, how=join, level=level, - return_indexers=True) + join_index, ilidx, iridx = self.index.join( + 
other.index, how=join, level=level, return_indexers=True) if axis is None or axis == 1: if not self.columns.equals(other.columns): - join_columns, clidx, cridx = \ - self.columns.join(other.columns, how=join, level=level, - return_indexers=True) + join_columns, clidx, cridx = self.columns.join( + other.columns, how=join, level=level, return_indexers=True) left = self._reindex_with_indexers({0: [join_index, ilidx], 1: [join_columns, clidx]}, @@ -3871,7 +3874,7 @@ def _align_series(self, other, join='outer', axis=None, level=None, 'axis 0') # equal - if self.index.equals(other.index): + if self.index.equals(other.index): join_index, lidx, ridx = None, None, None else: join_index, lidx, ridx = self.index.join(other.index, how=join, @@ -3888,9 +3891,9 @@ def _align_series(self, other, join='outer', axis=None, level=None, join_index = self.index lidx, ridx = None, None if not self.index.equals(other.index): - join_index, lidx, ridx = \ - self.index.join(other.index, how=join, level=level, - return_indexers=True) + join_index, lidx, ridx = self.index.join( + other.index, how=join, level=level, + return_indexers=True) if lidx is not None: fdata = fdata.reindex_indexer(join_index, lidx, axis=1) @@ -3899,9 +3902,9 @@ def _align_series(self, other, join='outer', axis=None, level=None, join_index = self.columns lidx, ridx = None, None if not self.columns.equals(other.index): - join_index, lidx, ridx = \ - self.columns.join(other.index, how=join, level=level, - return_indexers=True) + join_index, lidx, ridx = self.columns.join( + other.index, how=join, level=level, + return_indexers=True) if lidx is not None: fdata = fdata.reindex_indexer(join_index, lidx, axis=0) @@ -3921,13 +3924,15 @@ def _align_series(self, other, join='outer', axis=None, level=None, # fill fill_na = notnull(fill_value) or (method is not None) if fill_na: - left = left.fillna(fill_value, method=method, limit=limit, axis=fill_axis) + left = left.fillna(fill_value, method=method, limit=limit, + 
axis=fill_axis) right = right.fillna(fill_value, method=method, limit=limit) - return (left.__finalize__(self), right.__finalize__(other)) + return left.__finalize__(self), right.__finalize__(other) _shared_docs['where'] = (""" Return an object of same shape as self and whose corresponding - entries are from self where cond is %(cond)s and otherwise are from other. + entries are from self where cond is %(cond)s and otherwise are from + other. Parameters ---------- @@ -3947,6 +3952,7 @@ def _align_series(self, other, join='outer', axis=None, level=None, ------- wh : same type as caller """) + @Appender(_shared_docs['where'] % dict(_shared_doc_kwargs, cond="True")) def where(self, cond, other=np.nan, inplace=False, axis=None, level=None, try_cast=False, raise_on_error=True): @@ -3958,8 +3964,8 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None, raise ValueError('where requires an ndarray like object for ' 'its condition') if cond.shape != self.shape: - raise ValueError( - 'Array conditional must be same shape as self') + raise ValueError('Array conditional must be same shape as ' + 'self') cond = self._constructor(cond, **self._construct_axes_dict()) if inplace: @@ -3974,9 +3980,8 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None, # align with me if other.ndim <= self.ndim: - _, other = self.align(other, join='left', - axis=axis, level=level, - fill_value=np.nan) + _, other = self.align(other, join='left', axis=axis, + level=level, fill_value=np.nan) # if we are NOT aligned, raise as we cannot where index if (axis is None and @@ -3986,9 +3991,8 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None, # slice me out of the other else: - raise NotImplemented( - "cannot align with a higher dimensional NDFrame" - ) + raise NotImplementedError("cannot align with a higher " + "dimensional NDFrame") elif is_list_like(other): @@ -4018,7 +4022,9 @@ def where(self, cond, other=np.nan, inplace=False, axis=None,
level=None, other = np.array(other) else: other = np.asarray(other) - other = np.asarray(other, dtype=np.common_type(other, new_other)) + other = np.asarray(other, + dtype=np.common_type(other, + new_other)) # we need to use the new dtype try_quick = False @@ -4066,8 +4072,8 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None, other = new_other else: - raise ValueError( - 'Length of replacements must equal series length') + raise ValueError('Length of replacements must equal ' + 'series length') else: raise ValueError('other must be the same shape as self ' @@ -4109,7 +4115,8 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None, def mask(self, cond, other=np.nan, inplace=False, axis=None, level=None, try_cast=False, raise_on_error=True): return self.where(~cond, other=other, inplace=inplace, axis=axis, - level=level, try_cast=try_cast, raise_on_error=raise_on_error) + level=level, try_cast=try_cast, + raise_on_error=raise_on_error) _shared_docs['shift'] = (""" Shift index by desired number of periods with an optional time freq @@ -4133,6 +4140,7 @@ def mask(self, cond, other=np.nan, inplace=False, axis=None, level=None, ------- shifted : %(klass)s """) + @Appender(_shared_docs['shift'] % _shared_doc_kwargs) def shift(self, periods=1, freq=None, axis=0): if periods == 0: @@ -4184,7 +4192,7 @@ def slice_shift(self, periods=1, axis=0): def tshift(self, periods=1, freq=None, axis=0): """ - Shift the time index, using the index's frequency if available + Shift the time index, using the index's frequency if available. 
Parameters ---------- @@ -4317,10 +4325,10 @@ def _tz_convert(ax, tz): if not hasattr(ax, 'tz_convert'): if len(ax) > 0: ax_name = self._get_axis_name(axis) - raise TypeError('%s is not a valid DatetimeIndex or PeriodIndex' % - ax_name) + raise TypeError('%s is not a valid DatetimeIndex or ' + 'PeriodIndex' % ax_name) else: - ax = DatetimeIndex([],tz=tz) + ax = DatetimeIndex([], tz=tz) else: ax = ax.tz_convert(tz) return ax @@ -4334,18 +4342,19 @@ def _tz_convert(ax, tz): else: if level not in (None, 0, ax.name): raise ValueError("The level {0} is not valid".format(level)) - ax = _tz_convert(ax, tz) + ax = _tz_convert(ax, tz) result = self._constructor(self._data, copy=copy) - result.set_axis(axis,ax) + result.set_axis(axis, ax) return result.__finalize__(self) @deprecate_kwarg(old_arg_name='infer_dst', new_arg_name='ambiguous', - mapping={True: 'infer', False: 'raise'}) + mapping={True: 'infer', + False: 'raise'}) def tz_localize(self, tz, axis=0, level=None, copy=True, ambiguous='raise'): """ - Localize tz-naive TimeSeries to target time zone + Localize tz-naive TimeSeries to target time zone. 
Parameters ---------- @@ -4357,11 +4366,14 @@ def tz_localize(self, tz, axis=0, level=None, copy=True, copy : boolean, default True Also make a copy of the underlying data ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise' - - 'infer' will attempt to infer fall dst-transition hours based on order + - 'infer' will attempt to infer fall dst-transition hours based on + order - bool-ndarray where True signifies a DST time, False designates - a non-DST time (note that this flag is only applicable for ambiguous times) + a non-DST time (note that this flag is only applicable for + ambiguous times) - 'NaT' will return NaT where there are ambiguous times - - 'raise' will raise an AmbiguousTimeError if there are ambiguous times + - 'raise' will raise an AmbiguousTimeError if there are ambiguous + times infer_dst : boolean, default False (DEPRECATED) Attempt to infer fall dst-transition hours based on order @@ -4380,10 +4392,10 @@ def _tz_localize(ax, tz, ambiguous): if not hasattr(ax, 'tz_localize'): if len(ax) > 0: ax_name = self._get_axis_name(axis) - raise TypeError('%s is not a valid DatetimeIndex or PeriodIndex' % - ax_name) + raise TypeError('%s is not a valid DatetimeIndex or ' + 'PeriodIndex' % ax_name) else: - ax = DatetimeIndex([],tz=tz) + ax = DatetimeIndex([], tz=tz) else: ax = ax.tz_localize(tz, ambiguous=ambiguous) return ax @@ -4397,18 +4409,18 @@ def _tz_localize(ax, tz, ambiguous): else: if level not in (None, 0, ax.name): raise ValueError("The level {0} is not valid".format(level)) - ax = _tz_localize(ax, tz, ambiguous) + ax = _tz_localize(ax, tz, ambiguous) result = self._constructor(self._data, copy=copy) - result.set_axis(axis,ax) + result.set_axis(axis, ax) return result.__finalize__(self) - #---------------------------------------------------------------------- + # ---------------------------------------------------------------------- # Numeric Methods def abs(self): """ - Return an object with absolute value taken. 
Only applicable to objects - that are all numeric + Return an object with absolute value taken--only applicable to objects + that are all numeric. Returns ------- @@ -4428,8 +4440,8 @@ def abs(self): include, exclude : list-like, 'all', or None (default) Specify the form of the returned result. Either: - - None to both (default). The result will include only numeric-typed - columns or, if none are, only categorical columns. + - None to both (default). The result will include only + numeric-typed columns or, if none are, only categorical columns. - A list of dtypes or strings to be included/excluded. To select all numeric types use numpy.number. To select categorical objects use type object. See also the select_dtypes @@ -4469,7 +4481,7 @@ def abs(self): """ @Appender(_shared_docs['describe'] % _shared_doc_kwargs) - def describe(self, percentiles=None, include=None, exclude=None ): + def describe(self, percentiles=None, include=None, exclude=None): if self.ndim >= 3: msg = "describe is not implemented on Panel or PanelND objects."
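The `describe` and `_check_percentile` code being cleaned up in this patch validates custom percentiles and pretty-prints them as row labels. A behavior sketch (as in released pandas, not part of the patch):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# Custom percentiles must lie in [0, 1]; they appear as '10%'/'90%'
# rows in the result (the median is always included).
d = s.describe(percentiles=[0.1, 0.9])
print(d.index.tolist())

# Out-of-range percentiles trigger the _check_percentile error path.
try:
    s.describe(percentiles=[1.5])
except ValueError as exc:
    print(exc)
```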
raise NotImplementedError(msg) @@ -4496,20 +4508,20 @@ def pretty_name(x): def describe_numeric_1d(series, percentiles): stat_index = (['count', 'mean', 'std', 'min'] + - [pretty_name(x) for x in percentiles] + ['max']) + [pretty_name(x) for x in percentiles] + ['max']) d = ([series.count(), series.mean(), series.std(), series.min()] + [series.quantile(x) for x in percentiles] + [series.max()]) return pd.Series(d, index=stat_index, name=series.name) - def describe_categorical_1d(data): names = ['count', 'unique'] objcounts = data.value_counts() - result = [data.count(), len(objcounts[objcounts!=0])] + result = [data.count(), len(objcounts[objcounts != 0])] if result[1] > 0: top, freq = objcounts.index[0], objcounts.iloc[0] - if data.dtype == object or com.is_categorical_dtype(data.dtype): + if (data.dtype == object or + com.is_categorical_dtype(data.dtype)): names += ['top', 'freq'] result += [top, freq] @@ -4559,7 +4571,7 @@ def describe_1d(data, percentiles): return d def _check_percentile(self, q): - """ Validate percentiles. Used by describe and quantile """ + """Validate percentiles (used by describe and quantile).""" msg = ("percentiles should all be in the interval [0, 1]. 
" "Try {0} instead.") @@ -4608,8 +4620,8 @@ def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None, else: data = self.fillna(method=fill_method, limit=limit, axis=axis) - rs = (data.div(data.shift(periods=periods, freq=freq, - axis=axis, **kwargs)) - 1) + rs = (data.div(data.shift(periods=periods, freq=freq, axis=axis, + **kwargs)) - 1) if freq is None: mask = com.isnull(_values_from_object(self)) np.putmask(rs.values, mask, np.nan) @@ -4626,7 +4638,7 @@ def _agg_by_level(self, name, axis=0, level=0, skipna=True, **kwargs): @classmethod def _add_numeric_operations(cls): - """ add the operations to the cls; evaluate the doc strings again """ + """Add the operations to the cls; evaluate the doc strings again""" axis_descr, name, name2 = _doc_parms(cls) @@ -4642,11 +4654,9 @@ def _add_numeric_operations(cls): @Substitution(outname='mad', desc="Return the mean absolute deviation of the values " "for the requested axis", - name1=name, - name2=name2, - axis_descr=axis_descr) + name1=name, name2=name2, axis_descr=axis_descr) @Appender(_num_doc) - def mad(self, axis=None, skipna=None, level=None): + def mad(self, axis=None, skipna=None, level=None): if skipna is None: skipna = True if axis is None: @@ -4661,58 +4671,51 @@ def mad(self, axis=None, skipna=None, level=None): else: demeaned = data.sub(data.mean(axis=1), axis=0) return np.abs(demeaned).mean(axis=axis, skipna=skipna) + cls.mad = mad cls.sem = _make_stat_function_ddof( 'sem', name, name2, axis_descr, - "Return unbiased standard error of the mean over " - "requested axis.\n\nNormalized by N-1 by default. " - "This can be changed using the ddof argument", + "Return unbiased standard error of the mean over requested " + "axis.\n\nNormalized by N-1 by default. This can be changed " + "using the ddof argument", nanops.nansem) cls.var = _make_stat_function_ddof( 'var', name, name2, axis_descr, - "Return unbiased variance over requested " - "axis.\n\nNormalized by N-1 by default. 
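The `pct_change` hunk reflowed above only rewraps the core computation, which is `data.div(data.shift(periods)) - 1`. A quick sketch of that equivalence (not part of the patch):

```python
import pandas as pd

s = pd.Series([100.0, 110.0, 121.0])

# pct_change is dividing by the shifted series, minus 1:
# 110/100 - 1 = 0.10, 121/110 - 1 = 0.10; the first entry is NaN.
by_hand = s.div(s.shift(1)) - 1
print(s.pct_change())
print(by_hand)
```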
" - "This can be changed using the ddof argument", + "Return unbiased variance over requested axis.\n\nNormalized by " + "N-1 by default. This can be changed using the ddof argument", nanops.nanvar) cls.std = _make_stat_function_ddof( 'std', name, name2, axis_descr, - "Return unbiased standard deviation over requested " - "axis.\n\nNormalized by N-1 by default. " - "This can be changed using the ddof argument", + "Return unbiased standard deviation over requested axis." + "\n\nNormalized by N-1 by default. This can be changed using the " + "ddof argument", nanops.nanstd) @Substitution(outname='compounded', desc="Return the compound percentage of the values for " - "the requested axis", - name1=name, - name2=name2, + "the requested axis", name1=name, name2=name2, axis_descr=axis_descr) @Appender(_num_doc) def compound(self, axis=None, skipna=None, level=None): if skipna is None: skipna = True return (1 + self).prod(axis=axis, skipna=skipna, level=level) - 1 + cls.compound = compound cls.cummin = _make_cum_function( - 'min', name, name2, axis_descr, - "cumulative minimum", - lambda y, axis: np.minimum.accumulate(y, axis), - np.inf, np.nan) + 'min', name, name2, axis_descr, "cumulative minimum", + lambda y, axis: np.minimum.accumulate(y, axis), np.inf, np.nan) cls.cumsum = _make_cum_function( - 'sum', name, name2, axis_descr, - "cumulative sum", + 'sum', name, name2, axis_descr, "cumulative sum", lambda y, axis: y.cumsum(axis), 0., np.nan) cls.cumprod = _make_cum_function( - 'prod', name, name2, axis_descr, - "cumulative product", + 'prod', name, name2, axis_descr, "cumulative product", lambda y, axis: y.cumprod(axis), 1., np.nan) cls.cummax = _make_cum_function( - 'max', name, name2, axis_descr, - "cumulative max", - lambda y, axis: np.maximum.accumulate(y, axis), - -np.inf, np.nan) + 'max', name, name2, axis_descr, "cumulative max", + lambda y, axis: np.maximum.accumulate(y, axis), -np.inf, np.nan) cls.sum = _make_stat_function( 'sum', name, name2, axis_descr, @@ 
-4728,9 +4731,9 @@ def compound(self, axis=None, skipna=None, level=None): nanops.nanskew) cls.kurt = _make_stat_function( 'kurt', name, name2, axis_descr, - 'Return unbiased kurtosis over requested axis using Fisher''s ' - 'definition of\nkurtosis (kurtosis of normal == 0.0). Normalized ' - 'by N-1\n', + "Return unbiased kurtosis over requested axis using Fisher's " + "definition of\nkurtosis (kurtosis of normal == 0.0). Normalized " + "by N-1\n", nanops.nankurt) cls.kurtosis = cls.kurt cls.prod = _make_stat_function( @@ -4742,20 +4745,24 @@ def compound(self, axis=None, skipna=None, level=None): 'median', name, name2, axis_descr, 'Return the median of the values for the requested axis', nanops.nanmedian) - cls.max = _make_stat_function('max', name, name2, axis_descr, - """This method returns the maximum of the values in the object. If you - want the *index* of the maximum, use ``idxmax``. This is the - equivalent of the ``numpy.ndarray`` method ``argmax``.""", - nanops.nanmax) - cls.min = _make_stat_function('min', name, name2, axis_descr, - """This method returns the minimum of the values in the object. If you - want the *index* of the minimum, use ``idxmin``. This is the - equivalent of the ``numpy.ndarray`` method ``argmin``.""", - nanops.nanmin) + cls.max = _make_stat_function( + 'max', name, name2, axis_descr, + """This method returns the maximum of the values in the object. + If you want the *index* of the maximum, use ``idxmax``. This is + the equivalent of the ``numpy.ndarray`` method ``argmax``.""", + nanops.nanmax) + cls.min = _make_stat_function( + 'min', name, name2, axis_descr, + """This method returns the minimum of the values in the object. + If you want the *index* of the minimum, use ``idxmin``. 
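The `_make_cum_function` calls being reflowed in this patch wire up `cummin`/`cumsum`/`cumprod`/`cummax` with per-function accumulators and NaN mask values (`mask_a`/`mask_b`). A behavior sketch (as in released pandas):

```python
import numpy as np
import pandas as pd

s = pd.Series([2.0, np.nan, 1.0, 3.0])

# With skipna=True (the default), NaN positions stay NaN and the
# accumulation continues past them.
print(s.cummin().tolist())   # [2.0, nan, 1.0, 1.0]
print(s.cummax().tolist())   # [2.0, nan, 2.0, 3.0]
```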
This is + the equivalent of the ``numpy.ndarray`` method ``argmin``.""", + nanops.nanmin) @classmethod def _add_series_only_operations(cls): - """ add the series only operations to the cls; evaluate the doc strings again """ + """Add the series only operations to the cls; evaluate the doc + strings again. + """ axis_descr, name, name2 = _doc_parms(cls) @@ -4764,16 +4771,18 @@ def nanptp(values, axis=0, skipna=True): nmin = nanops.nanmin(values, axis, skipna) return nmax - nmin - cls.ptp = _make_stat_function('ptp', name, name2, axis_descr, - """ - Returns the difference between the maximum value and the minimum - value in the object. This is the equivalent of the ``numpy.ndarray`` - method ``ptp``.""", nanptp) - + cls.ptp = _make_stat_function( + 'ptp', name, name2, axis_descr, + """Returns the difference between the maximum value and the + minimum value in the object. This is the equivalent of the + ``numpy.ndarray`` method ``ptp``.""", + nanptp) @classmethod def _add_series_or_dataframe_operations(cls): - """ add the series or dataframe only operations to the cls; evaluate the doc strings again """ + """Add the series or dataframe only operations to the cls; evaluate + the doc strings again. 
+ """ from pandas.core import window as rwindow @@ -4781,35 +4790,41 @@ def _add_series_or_dataframe_operations(cls): def rolling(self, window, min_periods=None, freq=None, center=False, win_type=None, axis=0): axis = self._get_axis_number(axis) - return rwindow.rolling(self, window=window, min_periods=min_periods, freq=freq, center=center, - win_type=win_type, axis=axis) + return rwindow.rolling(self, window=window, + min_periods=min_periods, freq=freq, + center=center, win_type=win_type, axis=axis) + cls.rolling = rolling @Appender(rwindow.expanding.__doc__) def expanding(self, min_periods=1, freq=None, center=False, axis=0): axis = self._get_axis_number(axis) - return rwindow.expanding(self, min_periods=min_periods, freq=freq, center=center, - axis=axis) + return rwindow.expanding(self, min_periods=min_periods, freq=freq, + center=center, axis=axis) + cls.expanding = expanding @Appender(rwindow.ewm.__doc__) - def ewm(self, com=None, span=None, halflife=None, min_periods=0, freq=None, - adjust=True, ignore_na=False, axis=0): + def ewm(self, com=None, span=None, halflife=None, min_periods=0, + freq=None, adjust=True, ignore_na=False, axis=0): axis = self._get_axis_number(axis) - return rwindow.ewm(self, com=com, span=span, halflife=halflife, min_periods=min_periods, - freq=freq, adjust=adjust, ignore_na=ignore_na, axis=axis) + return rwindow.ewm(self, com=com, span=span, halflife=halflife, + min_periods=min_periods, freq=freq, + adjust=adjust, ignore_na=ignore_na, axis=axis) + cls.ewm = ewm + def _doc_parms(cls): - """ return a tuple of the doc parms """ - axis_descr = "{%s}" % ', '.join([ - "{0} ({1})".format(a, i) for i, a in enumerate(cls._AXIS_ORDERS) - ]) + """Return a tuple of the doc parms.""" + axis_descr = "{%s}" % ', '.join(["{0} ({1})".format(a, i) + for i, a in enumerate(cls._AXIS_ORDERS)]) name = (cls._constructor_sliced.__name__ if cls._AXIS_LEN > 1 else 'scalar') name2 = cls.__name__ return axis_descr, name, name2 + _num_doc = """ %(desc)s @@ 
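The `rolling`/`expanding`/`ewm` wrappers added in `_add_series_or_dataframe_operations` just forward to `pandas.core.window`. A usage sketch (not part of the patch; `freq` and `win_type` left at their defaults):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# window=2 needs two observations, so the first result is NaN.
print(s.rolling(window=2).mean().tolist())        # [nan, 1.5, 2.5, 3.5]

# expanding aggregates everything seen so far.
print(s.expanding(min_periods=1).sum().tolist())  # [1.0, 3.0, 6.0, 10.0]

# ewm takes one of com/span/halflife; adjust=True is the default.
print(s.ewm(com=0.5).mean())
```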
-4888,12 +4903,13 @@ def _doc_parms(cls): ------- %(outname)s : %(name1)s\n""" -def _make_stat_function(name, name1, name2, axis_descr, desc, f): - @Substitution(outname=name, desc=desc, name1=name1, name2=name2, axis_descr=axis_descr) +def _make_stat_function(name, name1, name2, axis_descr, desc, f): + @Substitution(outname=name, desc=desc, name1=name1, name2=name2, + axis_descr=axis_descr) @Appender(_num_doc) - def stat_func(self, axis=None, skipna=None, level=None, - numeric_only=None, **kwargs): + def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None, + **kwargs): if skipna is None: skipna = True if axis is None: @@ -4901,14 +4917,16 @@ def stat_func(self, axis=None, skipna=None, level=None, if level is not None: return self._agg_by_level(name, axis=axis, level=level, skipna=skipna) - return self._reduce(f, name, axis=axis, - skipna=skipna, numeric_only=numeric_only) + return self._reduce(f, name, axis=axis, skipna=skipna, + numeric_only=numeric_only) + stat_func.__name__ = name return stat_func -def _make_stat_function_ddof(name, name1, name2, axis_descr, desc, f): - @Substitution(outname=name, desc=desc, name1=name1, name2=name2, axis_descr=axis_descr) +def _make_stat_function_ddof(name, name1, name2, axis_descr, desc, f): + @Substitution(outname=name, desc=desc, name1=name1, name2=name2, + axis_descr=axis_descr) @Appender(_num_ddof_doc) def stat_func(self, axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs): @@ -4919,19 +4937,20 @@ def stat_func(self, axis=None, skipna=None, level=None, ddof=1, if level is not None: return self._agg_by_level(name, axis=axis, level=level, skipna=skipna, ddof=ddof) - return self._reduce(f, name, axis=axis, - numeric_only=numeric_only, + return self._reduce(f, name, axis=axis, numeric_only=numeric_only, skipna=skipna, ddof=ddof) + stat_func.__name__ = name return stat_func -def _make_cum_function(name, name1, name2, axis_descr, desc, accum_func, mask_a, mask_b): - 
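`_make_stat_function` and its siblings above are method factories: each builds a closure around a `nanops` reducer, fills a shared docstring template, and sets `__name__`. A stripped-down, pandas-free sketch of the same pattern (the names here are illustrative, not pandas APIs; `str.format` stands in for the `Substitution`/`Appender` decorators):

```python
_num_doc = "{desc}\n\nReturns\n-------\n{outname} : scalar"

def make_stat_function(name, desc, f):
    # Build a reduction method around f, mirroring how pandas wraps
    # nanops functions into Series/DataFrame methods.
    def stat_func(self, skipna=True):
        values = self.values
        if skipna:
            values = [v for v in values if v is not None]
        return f(values)
    stat_func.__name__ = name
    stat_func.__doc__ = _num_doc.format(outname=name, desc=desc)
    return stat_func

class MiniSeries:
    def __init__(self, values):
        self.values = values

MiniSeries.total = make_stat_function(
    'total', 'Return the sum of the values', sum)
MiniSeries.smallest = make_stat_function(
    'smallest', 'Return the minimum of the values', min)

m = MiniSeries([3, 1, None, 2])
print(m.total(), m.smallest())  # 6 1
```

The payoff is one definition site for many near-identical methods, which is exactly why the patch can reindent them all mechanically.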
@Substitution(outname=name, desc=desc, name1=name1, name2=name2, axis_descr=axis_descr) - @Appender("Return cumulative {0} over requested axis.".format(name) - + _cnum_doc) - def func(self, axis=None, dtype=None, out=None, skipna=True, - **kwargs): +def _make_cum_function(name, name1, name2, axis_descr, desc, accum_func, + mask_a, mask_b): + @Substitution(outname=name, desc=desc, name1=name1, name2=name2, + axis_descr=axis_descr) + @Appender("Return cumulative {0} over requested axis.".format(name) + + _cnum_doc) + def func(self, axis=None, dtype=None, out=None, skipna=True, **kwargs): if axis is None: axis = self._stat_axis_number else: @@ -4939,8 +4958,8 @@ def func(self, axis=None, dtype=None, out=None, skipna=True, y = _values_from_object(self).copy() - if skipna and issubclass(y.dtype.type, - (np.datetime64, np.timedelta64)): + if (skipna and + issubclass(y.dtype.type, (np.datetime64, np.timedelta64))): result = accum_func(y, axis) mask = isnull(self) np.putmask(result, mask, pd.tslib.iNaT) @@ -4959,26 +4978,27 @@ def func(self, axis=None, dtype=None, out=None, skipna=True, func.__name__ = name return func -def _make_logical_function(name, name1, name2, axis_descr, desc, f): - @Substitution(outname=name, desc=desc, name1=name1, name2=name2, axis_descr=axis_descr) +def _make_logical_function(name, name1, name2, axis_descr, desc, f): + @Substitution(outname=name, desc=desc, name1=name1, name2=name2, + axis_descr=axis_descr) @Appender(_bool_doc) - def logical_func(self, axis=None, bool_only=None, skipna=None, - level=None, **kwargs): + def logical_func(self, axis=None, bool_only=None, skipna=None, level=None, + **kwargs): if skipna is None: skipna = True if axis is None: axis = self._stat_axis_number if level is not None: if bool_only is not None: - raise NotImplementedError( - "Option bool_only is not implemented with option " - "level.") + raise NotImplementedError("Option bool_only is not " + "implemented with option level.") return self._agg_by_level(name, 
axis=axis, level=level, - skipna=skipna) + skipna=skipna) return self._reduce(f, axis=axis, skipna=skipna, numeric_only=bool_only, filter_type='bool', name=name) + logical_func.__name__ = name return logical_func diff --git a/pandas/core/index.py b/pandas/core/index.py index 63b748ada6afa..e4a56f7a5f0bd 100644 --- a/pandas/core/index.py +++ b/pandas/core/index.py @@ -37,21 +37,16 @@ from pandas.core.config import get_option from pandas.io.common import PerformanceWarning - # simplify -default_pprint = lambda x, max_seq_items=None: com.pprint_thing(x, - escape_chars=('\t', '\r', '\n'), - quote_strings=True, - max_seq_items=max_seq_items) - +default_pprint = lambda x, max_seq_items=None: \ + com.pprint_thing(x, escape_chars=('\t', '\r', '\n'), quote_strings=True, + max_seq_items=max_seq_items) __all__ = ['Index'] - _unsortable_types = frozenset(('mixed', 'mixed-integer')) -_index_doc_kwargs = dict(klass='Index', inplace='', - duplicated='np.array') +_index_doc_kwargs = dict(klass='Index', inplace='', duplicated='np.array') _index_shared_docs = dict() @@ -61,19 +56,23 @@ def _try_get_item(x): except AttributeError: return x + class InvalidIndexError(Exception): pass + _o_dtype = np.dtype(object) _Identity = object + def _new_Index(cls, d): - """ This is called upon unpickling, rather than the default which doesn't have arguments - and breaks __new__ """ + """ This is called upon unpickling, rather than the default which doesn't + have arguments and breaks __new__ + """ return cls.__new__(cls, **d) -class Index(IndexOpsMixin, StringAccessorMixin, PandasObject): +class Index(IndexOpsMixin, StringAccessorMixin, PandasObject): """ Immutable ndarray implementing an ordered, sliceable set. 
The basic object storing axis labels for all pandas objects @@ -124,8 +123,8 @@ class Index(IndexOpsMixin, StringAccessorMixin, PandasObject): _engine_type = _index.ObjectEngine - def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, - tupleize_cols=True, **kwargs): + def __new__(cls, data=None, dtype=None, copy=False, name=None, + fastpath=False, tupleize_cols=True, **kwargs): if name is None and hasattr(data, 'name'): name = data.name @@ -147,9 +146,8 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, # index-like elif isinstance(data, (np.ndarray, Index, ABCSeries)): - if issubclass(data.dtype.type, - np.datetime64) or is_datetimetz(data): - + if (issubclass(data.dtype.type, np.datetime64) or + is_datetimetz(data)): from pandas.tseries.index import DatetimeIndex result = DatetimeIndex(data, copy=copy, name=name, **kwargs) if dtype is not None and _o_dtype == dtype: @@ -192,7 +190,8 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, if dtype is None: inferred = lib.infer_dtype(subarr) if inferred == 'integer': - return Int64Index(subarr.astype('i8'), copy=copy, name=name) + return Int64Index(subarr.astype('i8'), copy=copy, + name=name) elif inferred in ['floating', 'mixed-integer-float']: return Float64Index(subarr, copy=copy, name=name) elif inferred == 'boolean': @@ -200,18 +199,20 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, pass elif inferred != 'string': if (inferred.startswith('datetime') or - tslib.is_timestamp_array(subarr)): + tslib.is_timestamp_array(subarr)): if (lib.is_datetime_with_singletz_array(subarr) or - 'tz' in kwargs): + 'tz' in kwargs): # only when subarr has the same tz from pandas.tseries.index import DatetimeIndex - return DatetimeIndex(subarr, copy=copy, name=name, **kwargs) + return DatetimeIndex(subarr, copy=copy, name=name, + **kwargs) elif (inferred.startswith('timedelta') or lib.is_timedelta_array(subarr)): from 
pandas.tseries.tdi import TimedeltaIndex - return TimedeltaIndex(subarr, copy=copy, name=name, **kwargs) + return TimedeltaIndex(subarr, copy=copy, name=name, + **kwargs) elif inferred == 'period': return PeriodIndex(subarr, name=name, **kwargs) return cls._simple_new(subarr, name) @@ -222,17 +223,18 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, elif data is None or np.isscalar(data): cls._scalar_data_error(data) else: - if tupleize_cols and isinstance(data, list) and data and isinstance(data[0], tuple): + if (tupleize_cols and isinstance(data, list) and data and + isinstance(data[0], tuple)): # we must be all tuples, otherwise don't construct # 10697 - if all( isinstance(e, tuple) for e in data ): + if all(isinstance(e, tuple) for e in data): try: # must be orderable in py3 if compat.PY3: sorted(data) - return MultiIndex.from_tuples( - data, names=name or kwargs.get('names')) + return MultiIndex.from_tuples(data, names=name or + kwargs.get('names')) except (TypeError, KeyError): # python2 - MultiIndex fails on mixed types pass @@ -245,11 +247,12 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, - _simple_new: It returns new Index with the same type as the caller. All metadata (such as name) must be provided by caller's responsibility. - Using _shallow_copy is recommended because it fills these metadata otherwise specified. + Using _shallow_copy is recommended because it fills these metadata + otherwise specified. - - _shallow_copy: It returns new Index with the same type (using _simple_new), - but fills caller's metadata otherwise specified. Passed kwargs will - overwrite corresponding metadata. + - _shallow_copy: It returns new Index with the same type (using + _simple_new), but fills caller's metadata otherwise specified. Passed + kwargs will overwrite corresponding metadata. - _shallow_copy_with_infer: It returns new Index inferring its type from passed values. 
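The `Index.__new__` hunks above (the `tupleize_cols` branch and dtype inference) decide what `Index(...)` actually returns. A behavior sketch with current pandas (note `Int64Index` was later removed, so only the tuple and scalar cases are shown):

```python
import pandas as pd

# A list of tuples is promoted to a MultiIndex (the tupleize_cols branch).
mi = pd.Index([('a', 1), ('b', 2)])
print(type(mi).__name__)  # MultiIndex

# Scalars hit _scalar_data_error and raise TypeError.
try:
    pd.Index(3)
except TypeError as exc:
    print(exc)
```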
It fills caller's metadata otherwise specified as the @@ -270,22 +273,24 @@ def _simple_new(cls, values, name=None, dtype=None, **kwargs): if values is None and dtype is not None: values = np.empty(0, dtype=dtype) else: - values = np.array(values,copy=False) + values = np.array(values, copy=False) if is_object_dtype(values): - values = cls(values, name=name, dtype=dtype, **kwargs)._values + values = cls(values, name=name, dtype=dtype, + **kwargs)._values result = object.__new__(cls) result._data = values result.name = name for k, v in compat.iteritems(kwargs): - setattr(result,k,v) + setattr(result, k, v) result._reset_identity() return result def _shallow_copy(self, values=None, **kwargs): """ - create a new Index with the same class as the caller, don't copy the data, - use the same object attributes with passed in attributes taking precedence + create a new Index with the same class as the caller, don't copy the + data, use the same object attributes with passed in attributes taking + precedence *this is an internal non-public method* @@ -302,8 +307,9 @@ def _shallow_copy(self, values=None, **kwargs): def _shallow_copy_with_infer(self, values=None, **kwargs): """ - create a new Index inferring the class with passed value, don't copy the data, - use the same object attributes with passed in attributes taking precedence + create a new Index inferring the class with passed value, don't copy + the data, use the same object attributes with passed in attributes + taking precedence *this is an internal non-public method* @@ -320,7 +326,7 @@ def _shallow_copy_with_infer(self, values=None, **kwargs): if self._infer_as_myclass: try: return self._constructor(values, **attributes) - except (TypeError, ValueError) as e: + except (TypeError, ValueError): pass return Index(values, **attributes) @@ -423,10 +429,9 @@ def ravel(self, order='C'): # construction helpers @classmethod def _scalar_data_error(cls, data): - raise TypeError( - '{0}(...) 
must be called with a collection of some kind, {1} was ' - 'passed'.format(cls.__name__, repr(data)) - ) + raise TypeError('{0}(...) must be called with a collection of some ' + 'kind, {1} was passed'.format(cls.__name__, + repr(data))) @classmethod def _string_data_error(cls, data): @@ -436,7 +441,8 @@ def _string_data_error(cls, data): @classmethod def _coerce_to_ndarray(cls, data): """coerces data to ndarray, raises on scalar data. Converts other - iterables to list first and then to array. Does not touch ndarrays.""" + iterables to list first and then to array. Does not touch ndarrays. + """ if not isinstance(data, (np.ndarray, Index)): if data is None or np.isscalar(data): @@ -450,13 +456,13 @@ def _coerce_to_ndarray(cls, data): def _get_attributes_dict(self): """ return an attributes dict for my class """ - return dict([ (k,getattr(self,k,None)) for k in self._attributes]) + return dict([(k, getattr(self, k, None)) for k in self._attributes]) def view(self, cls=None): # we need to see if we are subclassing an # index type here - if cls is not None and not hasattr(cls,'_typ'): + if cls is not None and not hasattr(cls, '_typ'): result = self._data.view(cls) else: result = self._shallow_copy() @@ -528,16 +534,14 @@ def __unicode__(self): attrs = self._format_attrs() space = self._format_space() - prepr = (u(",%s") % space).join([u("%s=%s") % (k, v) - for k, v in attrs]) + prepr = (u(",%s") % + space).join([u("%s=%s") % (k, v) for k, v in attrs]) # no data provided, just attributes if data is None: data = '' - res = u("%s(%s%s)") % (klass, - data, - prepr) + res = u("%s(%s%s)") % (klass, data, prepr) return res @@ -546,8 +550,8 @@ def _format_space(self): # using space here controls if the attributes # are line separated or not (the default) - #max_seq_items = get_option('display.max_seq_items') - #if len(self) > max_seq_items: + # max_seq_items = get_option('display.max_seq_items') + # if len(self) > max_seq_items: # space = "\n%s" % (' ' * (len(klass) + 1)) 
return " " @@ -588,7 +592,8 @@ def _format_data(self): def _extend_line(s, line, value, display_width, next_line_prefix): - if adj.len(line.rstrip()) + adj.len(value.rstrip()) >= display_width: + if (adj.len(line.rstrip()) + adj.len(value.rstrip()) >= + display_width): s += line.rstrip() line = next_line_prefix line += value @@ -612,18 +617,21 @@ def best_len(values): else: if n > max_seq_items: - n = min(max_seq_items//2,10) - head = [ formatter(x) for x in self[:n] ] - tail = [ formatter(x) for x in self[-n:] ] + n = min(max_seq_items // 2, 10) + head = [formatter(x) for x in self[:n]] + tail = [formatter(x) for x in self[-n:]] else: head = [] - tail = [ formatter(x) for x in self ] + tail = [formatter(x) for x in self] # adjust all values to max length if needed if is_justify: - # however, if we are not truncated and we are only a single line, then don't justify - if is_truncated or not (len(', '.join(head)) < display_width and len(', '.join(tail)) < display_width): + # however, if we are not truncated and we are only a single + # line, then don't justify + if (is_truncated or + not (len(', '.join(head)) < display_width and + len(', '.join(tail)) < display_width)): max_len = max(best_len(head), best_len(tail)) head = [x.rjust(max_len) for x in head] tail = [x.rjust(max_len) for x in tail] @@ -641,7 +649,7 @@ def best_len(values): summary += line.rstrip() + space2 + '...' 
line = space2 - for i in range(len(tail)-1): + for i in range(len(tail) - 1): word = tail[i] + sep + ' ' summary, line = _extend_line(summary, line, word, display_width, space2) @@ -667,12 +675,12 @@ def _format_attrs(self): Return a list of tuples of the (attr,formatted_value) """ attrs = [] - attrs.append(('dtype',"'%s'" % self.dtype)) + attrs.append(('dtype', "'%s'" % self.dtype)) if self.name is not None: - attrs.append(('name',default_pprint(self.name))) + attrs.append(('name', default_pprint(self.name))) max_seq_items = get_option('display.max_seq_items') or len(self) if len(self) > max_seq_items: - attrs.append(('length',len(self))) + attrs.append(('length', len(self))) return attrs def to_series(self, **kwargs): @@ -698,8 +706,7 @@ def _to_embed(self, keep_tz=False): return self.values.copy() def astype(self, dtype): - return Index(self.values.astype(dtype), name=self.name, - dtype=dtype) + return Index(self.values.astype(dtype), name=self.name, dtype=dtype) def _to_safe_for_reshape(self): """ convert to object if we are a categorical """ @@ -737,12 +744,12 @@ def nlevels(self): return 1 def _get_names(self): - return FrozenList((self.name,)) + return FrozenList((self.name, )) def _set_names(self, values, level=None): if len(values) != 1: - raise ValueError('Length of new names must be 1, got %d' - % len(values)) + raise ValueError('Length of new names must be 1, got %d' % + len(values)) self.name = values[0] names = property(fset=_set_names, fget=_get_names) @@ -755,9 +762,9 @@ def set_names(self, names, level=None, inplace=False): ---------- names : str or sequence name(s) to set - level : int or level name, or sequence of int / level names (default None) - If the index is a MultiIndex (hierarchical), level(s) to set (None for all levels) - Otherwise level must be None + level : int, level name, or sequence of int/level names (default None) + If the index is a MultiIndex (hierarchical), level(s) to set (None + for all levels). 
Otherwise level must be None inplace : bool if True, mutates in place @@ -786,7 +793,8 @@ def set_names(self, names, level=None, inplace=False): if level is not None and self.nlevels == 1: raise ValueError('Level must be None for non-MultiIndex') - if level is not None and not is_list_like(level) and is_list_like(names): + if level is not None and not is_list_like(level) and is_list_like( + names): raise TypeError("Names must be a string") if not is_list_like(names) and level is None and self.nlevels > 1: @@ -830,12 +838,12 @@ def _has_complex_internals(self): def summary(self, name=None): if len(self) > 0: head = self[0] - if hasattr(head, 'format') and\ - not isinstance(head, compat.string_types): + if (hasattr(head, 'format') and + not isinstance(head, compat.string_types)): head = head.format() tail = self[-1] - if hasattr(tail, 'format') and\ - not isinstance(tail, compat.string_types): + if (hasattr(tail, 'format') and + not isinstance(tail, compat.string_types)): tail = tail.format() index_summary = ', %s to %s' % (com.pprint_thing(head), com.pprint_thing(tail)) @@ -934,16 +942,20 @@ def to_int(): return key elif is_float(key): key = to_int() - warnings.warn("scalar indexers for index type {0} should be integers and not floating point".format( - type(self).__name__), FutureWarning, stacklevel=5) + warnings.warn("scalar indexers for index type {0} should be " + "integers and not floating point".format( + type(self).__name__), + FutureWarning, stacklevel=5) return key return self._invalid_indexer('label', key) if is_float(key): if isnull(key): return self._invalid_indexer('label', key) - warnings.warn("scalar indexers for index type {0} should be integers and not floating point".format( - type(self).__name__), FutureWarning, stacklevel=3) + warnings.warn("scalar indexers for index type {0} should be " + "integers and not floating point".format( + type(self).__name__), + FutureWarning, stacklevel=3) return to_int() return key @@ -974,19 +986,20 @@ def 
_convert_slice_indexer(self, key, kind=None): # need to coerce to_int if needed def f(c): - v = getattr(key,c) + v = getattr(key, c) if v is None or is_integer(v): return v # warn if it's a convertible float if v == int(v): - warnings.warn("slice indexers when using iloc should be integers " - "and not floating point", FutureWarning, stacklevel=7) + warnings.warn("slice indexers when using iloc should be " + "integers and not floating point", + FutureWarning, stacklevel=7) return int(v) self._invalid_indexer('slice {0} value'.format(c), v) - return slice(*[ f(c) for c in ['start','stop','step']]) + return slice(*[f(c) for c in ['start', 'stop', 'step']]) # validate slicers def validate(v): @@ -1001,8 +1014,9 @@ def validate(v): return False return True - for c in ['start','stop','step']: - v = getattr(key,c) + + for c in ['start', 'stop', 'step']: + v = getattr(key, c) if not validate(v): self._invalid_indexer('slice {0} value'.format(c), v) @@ -1025,10 +1039,11 @@ def is_int(v): # if we are mixed and have integers try: if is_positional and self.is_mixed(): + # TODO: i, j are not used anywhere if start is not None: - i = self.get_loc(start) + i = self.get_loc(start) # noqa if stop is not None: - j = self.get_loc(stop) + j = self.get_loc(stop) # noqa is_positional = False except KeyError: if self.inferred_type == 'mixed-integer-float': @@ -1058,23 +1073,25 @@ def _convert_list_indexer(self, keyarr, kind=None): and we have a mixed index (e.g. number/labels). figure out the indexer. 
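Among the Index hunks above, the reflowed `set_names`/`_set_names` code enforces that a flat (non-Multi) Index takes exactly one name and rejects extra `level` arguments. A quick sketch (as in released pandas):

```python
import pandas as pd

idx = pd.Index([1, 2, 3])

# set_names returns a renamed copy by default (inplace=False).
named = idx.set_names('id')
print(named.name)  # id

# A flat Index has one level, so passing two names is an error.
try:
    idx.set_names(['a', 'b'])
except ValueError as exc:
    print(exc)
```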
return None if we can't help """ - if kind in [None, 'iloc', 'ix'] and is_integer_dtype(keyarr) \ - and not self.is_floating() and not isinstance(keyarr, ABCPeriodIndex): + if (kind in [None, 'iloc', 'ix'] and + is_integer_dtype(keyarr) and not self.is_floating() and + not isinstance(keyarr, ABCPeriodIndex)): if self.inferred_type == 'mixed-integer': indexer = self.get_indexer(keyarr) if (indexer >= 0).all(): return indexer - # missing values are flagged as -1 by get_indexer and negative indices are already - # converted to positive indices in the above if-statement, so the negative flags are changed to - # values outside the range of indices so as to trigger an IndexError in maybe_convert_indices + # missing values are flagged as -1 by get_indexer and negative + # indices are already converted to positive indices in the + # above if-statement, so the negative flags are changed to + # values outside the range of indices so as to trigger an + # IndexError in maybe_convert_indices indexer[indexer < 0] = len(self) from pandas.core.indexing import maybe_convert_indices return maybe_convert_indices(indexer, len(self)) elif not self.inferred_type == 'integer': - keyarr = np.where(keyarr < 0, - len(self) + keyarr, keyarr) + keyarr = np.where(keyarr < 0, len(self) + keyarr, keyarr) return keyarr return None @@ -1082,10 +1099,9 @@ def _convert_list_indexer(self, keyarr, kind=None): def _invalid_indexer(self, form, key): """ consistent invalid indexer message """ raise TypeError("cannot do {form} indexing on {klass} with these " - "indexers [{key}] of {kind}".format(form=form, - klass=type(self), - key=key, - kind=type(key))) + "indexers [{key}] of {kind}".format( + form=form, klass=type(self), key=key, + kind=type(key))) def get_duplicates(self): from collections import defaultdict @@ -1119,14 +1135,14 @@ def _validate_index_level(self, level): if isinstance(level, int): if level < 0 and level != -1: raise IndexError("Too many levels: Index has only 1 level," - " %d is not 
a valid level number" % (level,)) + " %d is not a valid level number" % (level, )) elif level > 0: raise IndexError("Too many levels:" " Index has only 1 level, not %d" % (level + 1)) elif level != self.name: - raise KeyError('Level %s must be same as name (%s)' - % (level, self.name)) + raise KeyError('Level %s must be same as name (%s)' % + (level, self.name)) def _get_level_number(self, level): self._validate_index_level(level) @@ -1178,11 +1194,12 @@ def __setstate__(self, state): self._reset_identity() else: raise Exception("invalid pickle state") + _unpickle_compat = __setstate__ def __deepcopy__(self, memo=None): if memo is None: - memo = {} + memo = {} return self.copy(deep=True) def __nonzero__(self): @@ -1257,9 +1274,8 @@ def _ensure_compat_append(self, other): to_concat.append(other) for obj in to_concat: - if (isinstance(obj, Index) and - obj.name != name and - obj.name is not None): + if (isinstance(obj, Index) and obj.name != name and + obj.name is not None): name = None break @@ -1283,11 +1299,13 @@ def append(self, other): to_concat, name = self._ensure_compat_append(other) attribs = self._get_attributes_dict() attribs['name'] = name - return self._shallow_copy_with_infer(np.concatenate(to_concat), **attribs) + return self._shallow_copy_with_infer( + np.concatenate(to_concat), **attribs) @staticmethod def _ensure_compat_concat(indexes): - from pandas.tseries.api import DatetimeIndex, PeriodIndex, TimedeltaIndex + from pandas.tseries.api import (DatetimeIndex, PeriodIndex, + TimedeltaIndex) klasses = DatetimeIndex, PeriodIndex, TimedeltaIndex is_ts = [isinstance(idx, klasses) for idx in indexes] @@ -1375,8 +1393,8 @@ def format(self, name=False, formatter=None, **kwargs): header = [] if name: header.append(com.pprint_thing(self.name, - escape_chars=('\t', '\r', '\n')) - if self.name is not None else '') + escape_chars=('\t', '\r', '\n')) if + self.name is not None else '') if formatter is not None: return header + list(self.map(formatter)) @@ -1436,7 
+1454,8 @@ def equals(self, other): if not isinstance(other, Index): return False - return array_equivalent(_values_from_object(self), _values_from_object(other)) + return array_equivalent(_values_from_object(self), + _values_from_object(other)) def identical(self, other): """Similar to equals, but check that other comparable attributes are @@ -1504,10 +1523,12 @@ def order(self, return_indexer=False, ascending=True): """ warnings.warn("order is deprecated, use sort_values(...)", FutureWarning, stacklevel=2) - return self.sort_values(return_indexer=return_indexer, ascending=ascending) + return self.sort_values(return_indexer=return_indexer, + ascending=ascending) def sort(self, *args, **kwargs): - raise TypeError("cannot sort an Index object in-place, use sort_values instead") + raise TypeError("cannot sort an Index object in-place, use " + "sort_values instead") def sortlevel(self, level=None, ascending=True, sort_remaining=None): """ @@ -1538,7 +1559,8 @@ def shift(self, periods=1, freq=None): ------- shifted : Index """ - raise NotImplementedError("Not supported for type %s" % type(self).__name__) + raise NotImplementedError("Not supported for type %s" % + type(self).__name__) def argsort(self, *args, **kwargs): """ @@ -1555,23 +1577,26 @@ def argsort(self, *args, **kwargs): def __add__(self, other): if com.is_list_like(other): - warnings.warn("using '+' to provide set union with Indexes is deprecated, " - "use '|' or .union()", FutureWarning, stacklevel=2) + warnings.warn("using '+' to provide set union with Indexes is " + "deprecated, use '|' or .union()", FutureWarning, + stacklevel=2) if isinstance(other, Index): return self.union(other) return Index(np.array(self) + other) def __radd__(self, other): if is_list_like(other): - warnings.warn("using '+' to provide set union with Indexes is deprecated, " - "use '|' or .union()", FutureWarning, stacklevel=2) + warnings.warn("using '+' to provide set union with Indexes is " + "deprecated, use '|' or .union()", 
FutureWarning, + stacklevel=2) return Index(other + np.array(self)) __iadd__ = __add__ def __sub__(self, other): - warnings.warn("using '-' to provide set differences with Indexes is deprecated, " - "use .difference()",FutureWarning, stacklevel=2) + warnings.warn("using '-' to provide set differences with Indexes is " + "deprecated, use .difference()", FutureWarning, + stacklevel=2) return self.difference(other) def __and__(self, other): @@ -1613,7 +1638,7 @@ def union(self, other): if len(self) == 0: return other - if not is_dtype_equal(self.dtype,other.dtype): + if not is_dtype_equal(self.dtype, other.dtype): this = self.astype('O') other = other.astype('O') return this.union(other) @@ -1641,8 +1666,7 @@ def union(self, other): self.values[0] < other_diff[0] except TypeError as e: warnings.warn("%s, sort order is undefined for " - "incomparable objects" % e, - RuntimeWarning, + "incomparable objects" % e, RuntimeWarning, stacklevel=3) else: types = frozenset((self.inferred_type, @@ -1657,8 +1681,7 @@ def union(self, other): result = np.sort(result) except TypeError as e: warnings.warn("%s, sort order is undefined for " - "incomparable objects" % e, - RuntimeWarning, + "incomparable objects" % e, RuntimeWarning, stacklevel=3) # for subclasses @@ -1698,7 +1721,7 @@ def intersection(self, other): if self.equals(other): return self - if not is_dtype_equal(self.dtype,other.dtype): + if not is_dtype_equal(self.dtype, other.dtype): this = self.astype('O') other = other.astype('O') return this.intersection(other) @@ -1715,7 +1738,8 @@ def intersection(self, other): indexer = indexer.take((indexer != -1).nonzero()[0]) except: # duplicates - indexer = Index(self.values).get_indexer_non_unique(other._values)[0].unique() + indexer = Index(self.values).get_indexer_non_unique( + other._values)[0].unique() indexer = indexer[indexer != -1] taken = self.take(indexer) @@ -1725,7 +1749,8 @@ def intersection(self, other): def difference(self, other): """ - Return a new Index with 
elements from the index that are not in `other`. + Return a new Index with elements from the index that are not in + `other`. This is the sorted set difference of two Index objects. @@ -1797,7 +1822,8 @@ def sym_diff(self, other, result_name=None): if result_name is None: result_name = result_name_update - the_diff = sorted(set((self.difference(other)).union(other.difference(self)))) + the_diff = sorted(set((self.difference(other)). + union(other.difference(self)))) attribs = self._get_attributes_dict() attribs['name'] = result_name if 'freq' in attribs: @@ -1835,8 +1861,7 @@ def get_loc(self, key, method=None, tolerance=None): key = _values_from_object(key) return self._engine.get_loc(key) - indexer = self.get_indexer([key], method=method, - tolerance=tolerance) + indexer = self.get_indexer([key], method=method, tolerance=tolerance) if indexer.ndim > 1 or indexer.size > 1: raise TypeError('get_loc requires scalar valued input') loc = indexer.item() @@ -1852,7 +1877,7 @@ def get_value(self, series, key): # if we have something that is Index-like, then # use this, e.g. DatetimeIndex - s = getattr(series,'_values',None) + s = getattr(series, '_values', None) if isinstance(s, Index) and lib.isscalar(key): return s[key] @@ -1866,7 +1891,7 @@ def get_value(self, series, key): try: return self._engine.get_value(s, k) except KeyError as e1: - if len(self) > 0 and self.inferred_type in ['integer','boolean']: + if len(self) > 0 and self.inferred_type in ['integer', 'boolean']: raise try: @@ -1892,8 +1917,8 @@ def set_value(self, arr, key, value): Fast lookup of value from 1-dimensional ndarray. 
Only use this if you know what you're doing """ - self._engine.set_value( - _values_from_object(arr), _values_from_object(key), value) + self._engine.set_value(_values_from_object(arr), + _values_from_object(key), value) def get_level_values(self, level): """ @@ -1991,14 +2016,15 @@ def _convert_tolerance(self, tolerance): def _get_fill_indexer(self, target, method, limit=None, tolerance=None): if self.is_monotonic_increasing and target.is_monotonic_increasing: - method = (self._engine.get_pad_indexer if method == 'pad' - else self._engine.get_backfill_indexer) + method = (self._engine.get_pad_indexer if method == 'pad' else + self._engine.get_backfill_indexer) indexer = method(target._values, limit) else: - indexer = self._get_fill_indexer_searchsorted(target, method, limit) + indexer = self._get_fill_indexer_searchsorted(target, method, + limit) if tolerance is not None: - indexer = self._filter_indexer_tolerance( - target._values, indexer, tolerance) + indexer = self._filter_indexer_tolerance(target._values, indexer, + tolerance) return indexer def _get_fill_indexer_searchsorted(self, target, method, limit=None): @@ -2016,7 +2042,8 @@ def _get_fill_indexer_searchsorted(self, target, method, limit=None): # find exact matches first (this simplifies the algorithm) indexer = self.get_indexer(target) nonexact = (indexer == -1) - indexer[nonexact] = self._searchsorted_monotonic(target[nonexact], side) + indexer[nonexact] = self._searchsorted_monotonic(target[nonexact], + side) if side == 'left': # searchsorted returns "indices into a sorted array such that, # if the corresponding elements in v were inserted before the @@ -2045,12 +2072,11 @@ def _get_nearest_indexer(self, target, limit, tolerance): right_distances = abs(self.values[right_indexer] - target) op = operator.lt if self.is_monotonic_increasing else operator.le - indexer = np.where(op(left_distances, right_distances) - | (right_indexer == -1), - left_indexer, right_indexer) + indexer = 
np.where(op(left_distances, right_distances) | + (right_indexer == -1), left_indexer, right_indexer) if tolerance is not None: - indexer = self._filter_indexer_tolerance( - target, indexer, tolerance) + indexer = self._filter_indexer_tolerance(target, indexer, + tolerance) return indexer def _filter_indexer_tolerance(self, target, indexer, tolerance): @@ -2222,8 +2248,8 @@ def _reindex_non_unique(self, target): """ *this is an internal non-public method* - Create a new index with target's values (move/add/delete values as necessary) - use with non-unique Index and a possibly non-unique target + Create a new index with target's values (move/add/delete values as + necessary) use with non-unique Index and a possibly non-unique target Parameters ---------- @@ -2303,16 +2329,17 @@ def join(self, other, how='left', level=None, return_indexers=False): # try to figure out the join level # GH3662 - if (level is None and (self_is_mi or other_is_mi)): + if level is None and (self_is_mi or other_is_mi): # have the same levels/names so a simple join if self.names == other.names: pass else: - return self._join_multi(other, how=how, return_indexers=return_indexers) + return self._join_multi(other, how=how, + return_indexers=return_indexers) # join on the level - if (level is not None and (self_is_mi or other_is_mi)): + if level is not None and (self_is_mi or other_is_mi): return self._join_level(other, level, how=how, return_indexers=return_indexers) @@ -2343,11 +2370,10 @@ def join(self, other, how='left', level=None, return_indexers=False): result = x, z, y return result - if not is_dtype_equal(self.dtype,other.dtype): + if not is_dtype_equal(self.dtype, other.dtype): this = self.astype('O') other = other.astype('O') - return this.join(other, how=how, - return_indexers=return_indexers) + return this.join(other, how=how, return_indexers=return_indexers) _validate_join_method(how) @@ -2396,15 +2422,18 @@ def _join_multi(self, other, how, return_indexers=True): other_is_mi = 
isinstance(other, MultiIndex) # figure out join names - self_names = [ n for n in self.names if n is not None ] - other_names = [ n for n in other.names if n is not None ] + self_names = [n for n in self.names if n is not None] + other_names = [n for n in other.names if n is not None] overlap = list(set(self_names) & set(other_names)) # need at least 1 in common, but not more than 1 if not len(overlap): - raise ValueError("cannot join with no level specified and no overlapping names") + raise ValueError("cannot join with no level specified and no " + "overlapping names") if len(overlap) > 1: - raise NotImplementedError("merging with more than one level overlap on a multi-index is not implemented") + raise NotImplementedError("merging with more than one level " + "overlap on a multi-index is not " + "implemented") jl = overlap[0] # make the indices into mi's that match @@ -2427,13 +2456,15 @@ def _join_multi(self, other, how, return_indexers=True): return result # 2 multi-indexes - raise NotImplementedError("merging with both multi-indexes is not implemented") + raise NotImplementedError("merging with both multi-indexes is not " + "implemented") def _join_non_unique(self, other, how='left', return_indexers=False): from pandas.tools.merge import _get_join_indexers - left_idx, right_idx = _get_join_indexers([self.values], [other._values], - how=how, sort=True) + left_idx, right_idx = _get_join_indexers([self.values], + [other._values], how=how, + sort=True) left_idx = com._ensure_platform_int(left_idx) right_idx = com._ensure_platform_int(right_idx) @@ -2449,8 +2480,7 @@ def _join_non_unique(self, other, how='left', return_indexers=False): else: return join_index - def _join_level(self, other, level, how='left', - return_indexers=False, + def _join_level(self, other, level, how='left', return_indexers=False, keep_order=True): """ The join method *only* affects the level of the resulting @@ -2557,10 +2587,8 @@ def _get_leaf_sorter(labels): if not mask_all: left_indexer 
= mask.nonzero()[0][left_indexer] - join_index = MultiIndex(levels=new_levels, - labels=new_labels, - names=left.names, - verify_integrity=False) + join_index = MultiIndex(levels=new_levels, labels=new_labels, + names=left.names, verify_integrity=False) if right_lev_indexer is not None: right_indexer = com.take_nd(right_lev_indexer, @@ -2646,7 +2674,8 @@ def slice_indexer(self, start=None, end=None, step=None, kind=None): ----- This function assumes that the data is sorted, so use at your own peril """ - start_slice, end_slice = self.slice_locs(start, end, step=step, kind=kind) + start_slice, end_slice = self.slice_locs(start, end, step=step, + kind=kind) # return a slice if not lib.isscalar(start_slice): @@ -2683,12 +2712,12 @@ def _maybe_cast_slice_bound(self, label, side, kind): # datetimelike Indexes # reject them if is_float(label): - self._invalid_indexer('slice',label) + self._invalid_indexer('slice', label) # we are trying to find integer bounds on a non-integer based index # this is rejected (generally .loc gets you here) elif is_integer(label): - self._invalid_indexer('slice',label) + self._invalid_indexer('slice', label) return label @@ -2699,8 +2728,8 @@ def _searchsorted_monotonic(self, label, side='left'): # np.searchsorted expects ascending sort order, have to reverse # everything for it to work (element ordering, search side and # resulting value). 
- pos = self[::-1].searchsorted( - label, side='right' if side == 'left' else 'right') + pos = self[::-1].searchsorted(label, side='right' if side == 'left' + else 'right') return len(self) - pos raise ValueError('index must be monotonic increasing or decreasing') @@ -2720,9 +2749,9 @@ def get_slice_bound(self, label, side, kind): """ if side not in ('left', 'right'): - raise ValueError( - "Invalid value for side kwarg," - " must be either 'left' or 'right': %s" % (side,)) + raise ValueError("Invalid value for side kwarg," + " must be either 'left' or 'right': %s" % + (side, )) original_label = label @@ -2748,9 +2777,8 @@ def get_slice_bound(self, label, side, kind): else: slc = lib.maybe_indices_to_slice(slc.astype('i8'), len(self)) if isinstance(slc, np.ndarray): - raise KeyError( - "Cannot get %s slice bound for non-unique label:" - " %r" % (side, original_label)) + raise KeyError("Cannot get %s slice bound for non-unique " + "label: %r" % (side, original_label)) if isinstance(slc, slice): if side == 'left': @@ -2854,8 +2882,7 @@ def insert(self, loc, item): _self = np.asarray(self) item = self._coerce_scalar_to_index(item)._values - idx = np.concatenate( - (_self[:loc], item, _self[loc:])) + idx = np.concatenate((_self[:loc], item, _self[loc:])) return self._shallow_copy_with_infer(idx) def drop(self, labels, errors='raise'): @@ -2877,16 +2904,19 @@ def drop(self, labels, errors='raise'): mask = indexer == -1 if mask.any(): if errors != 'ignore': - raise ValueError('labels %s not contained in axis' % labels[mask]) + raise ValueError('labels %s not contained in axis' % + labels[mask]) indexer = indexer[~mask] return self.delete(indexer) - @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'}) + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) @Appender(base._shared_docs['drop_duplicates'] % _index_doc_kwargs) def drop_duplicates(self, keep='first'): return super(Index, self).drop_duplicates(keep=keep) - 
@deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'}) + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) @Appender(base._shared_docs['duplicated'] % _index_doc_kwargs) def duplicated(self, keep='first'): return super(Index, self).duplicated(keep=keep) @@ -2931,7 +2961,6 @@ def _add_comparison_methods(cls): """ add in comparison methods """ def _make_compare(op): - def _evaluate_compare(self, other): if isinstance(other, (np.ndarray, Index, ABCSeries)): if other.ndim > 0 and len(self) != len(other): @@ -2962,25 +2991,25 @@ def _add_numericlike_set_methods_disabled(cls): """ add in the numeric set-like methods to disable """ def _make_invalid_op(name): - def invalid_op(self, other=None): - raise TypeError("cannot perform {name} with this index type: {typ}".format(name=name, - typ=type(self))) + raise TypeError("cannot perform {name} with this index type: " + "{typ}".format(name=name, typ=type(self))) + invalid_op.__name__ = name return invalid_op - cls.__add__ = cls.__radd__ = __iadd__ = _make_invalid_op('__add__') - cls.__sub__ = __isub__ = _make_invalid_op('__sub__') + cls.__add__ = cls.__radd__ = __iadd__ = _make_invalid_op('__add__') # noqa + cls.__sub__ = __isub__ = _make_invalid_op('__sub__') # noqa @classmethod def _add_numeric_methods_disabled(cls): """ add in numeric methods to disable """ def _make_invalid_op(name): - def invalid_op(self, other=None): - raise TypeError("cannot perform {name} with this index type: {typ}".format(name=name, - typ=type(self))) + raise TypeError("cannot perform {name} with this index type: " + "{typ}".format(name=name, typ=type(self))) + invalid_op.__name__ = name return invalid_op @@ -3063,7 +3092,6 @@ def _add_numeric_methods_binary(cls): """ add in numeric methods """ def _make_evaluate_binop(op, opstr, reversed=False): - def _evaluate_numeric_binop(self, other): from pandas.tseries.offsets import DateOffset @@ -3156,34 +3184,36 @@ def _add_logical_methods(cls): A single 
element array_like may be converted to bool.""" def _make_logical_function(name, desc, f): - @Substitution(outname=name, desc=desc) @Appender(_doc) def logical_func(self, *args, **kwargs): result = f(self.values) - if isinstance(result, (np.ndarray, ABCSeries, Index)) \ - and result.ndim == 0: + if (isinstance(result, (np.ndarray, ABCSeries, Index)) and + result.ndim == 0): # return NumPy type return result.dtype.type(result.item()) else: # pragma: no cover return result + logical_func.__name__ = name return logical_func - cls.all = _make_logical_function( - 'all', 'Return whether all elements are True', np.all) - cls.any = _make_logical_function( - 'any', 'Return whether any element is True', np.any) + cls.all = _make_logical_function('all', 'Return whether all elements ' + 'are True', + np.all) + cls.any = _make_logical_function('any', + 'Return whether any element is True', + np.any) @classmethod def _add_logical_methods_disabled(cls): """ add in logical methods to disable """ def _make_invalid_op(name): - def invalid_op(self, other=None): - raise TypeError("cannot perform {name} with this index type: {typ}".format(name=name, - typ=type(self))) + raise TypeError("cannot perform {name} with this index type: " + "{typ}".format(name=name, typ=type(self))) + invalid_op.__name__ = name return invalid_op @@ -3195,6 +3225,7 @@ def invalid_op(self, other=None): Index._add_logical_methods() Index._add_comparison_methods() + class CategoricalIndex(Index, PandasDelegate): """ @@ -3221,7 +3252,8 @@ class CategoricalIndex(Index, PandasDelegate): _engine_type = _index.Int64Engine _attributes = ['name'] - def __new__(cls, data=None, categories=None, ordered=None, dtype=None, copy=False, name=None, fastpath=False, **kwargs): + def __new__(cls, data=None, categories=None, ordered=None, dtype=None, + copy=False, name=None, fastpath=False, **kwargs): if fastpath: return cls._simple_new(data, name=name) @@ -3246,7 +3278,8 @@ def __new__(cls, data=None, categories=None, 
ordered=None, dtype=None, copy=Fals return cls._simple_new(data, name=name) - def _create_from_codes(self, codes, categories=None, ordered=None, name=None): + def _create_from_codes(self, codes, categories=None, ordered=None, + name=None): """ *this is an internal non-public method* @@ -3271,7 +3304,8 @@ def _create_from_codes(self, codes, categories=None, ordered=None, name=None): ordered = self.ordered if name is None: name = self.name - cat = Categorical.from_codes(codes, categories=categories, ordered=self.ordered) + cat = Categorical.from_codes(codes, categories=categories, + ordered=self.ordered) return CategoricalIndex(cat, name=name) @staticmethod @@ -3303,14 +3337,15 @@ def _create_categorical(self, data, categories=None, ordered=None): return data @classmethod - def _simple_new(cls, values, name=None, categories=None, ordered=None, **kwargs): + def _simple_new(cls, values, name=None, categories=None, ordered=None, + **kwargs): result = object.__new__(cls) values = cls._create_categorical(cls, values, categories, ordered) result._data = values result.name = name for k, v in compat.iteritems(kwargs): - setattr(result,k,v) + setattr(result, k, v) result._reset_identity() return result @@ -3319,7 +3354,8 @@ def _is_dtype_compat(self, other): """ *this is an internal non-public method* - provide a comparison between the dtype of self and other (coercing if needed) + provide a comparison between the dtype of self and other (coercing if + needed) Raises ------ @@ -3330,14 +3366,17 @@ def _is_dtype_compat(self, other): if isinstance(other, CategoricalIndex): other = other._values if not other.is_dtype_equal(self): - raise TypeError("categories must match existing categories when appending") + raise TypeError("categories must match existing categories " + "when appending") else: values = other if not is_list_like(values): - values = [ values ] - other = CategoricalIndex(self._create_categorical(self, other, categories=self.categories, ordered=self.ordered)) + 
values = [values] + other = CategoricalIndex(self._create_categorical( + self, other, categories=self.categories, ordered=self.ordered)) if not other.isin(values).all(): - raise TypeError("cannot append a non-category item to a CategoricalIndex") + raise TypeError("cannot append a non-category item to a " + "CategoricalIndex") return other @@ -3364,16 +3403,17 @@ def _format_attrs(self): """ Return a list of tuples of the (attr,formatted_value) """ - max_categories = (10 if get_option("display.max_categories") == 0 - else get_option("display.max_categories")) - attrs = [('categories', default_pprint(self.categories, max_seq_items=max_categories)), - ('ordered',self.ordered)] + max_categories = (10 if get_option("display.max_categories") == 0 else + get_option("display.max_categories")) + attrs = [('categories', default_pprint(self.categories, + max_seq_items=max_categories)), + ('ordered', self.ordered)] if self.name is not None: - attrs.append(('name',default_pprint(self.name))) - attrs.append(('dtype',"'%s'" % self.dtype)) + attrs.append(('name', default_pprint(self.name))) + attrs.append(('dtype', "'%s'" % self.dtype)) max_seq_items = get_option('display.max_seq_items') or len(self) if len(self) > max_seq_items: - attrs.append(('length',len(self))) + attrs.append(('length', len(self))) return attrs @property @@ -3432,7 +3472,8 @@ def _engine(self): def is_unique(self): return not self.duplicated().any() - @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'}) + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) @Appender(base._shared_docs['duplicated'] % _index_doc_kwargs) def duplicated(self, keep='first'): from pandas.hashtable import duplicated_int64 @@ -3460,7 +3501,7 @@ def get_loc(self, key, method=None): if (codes == -1): raise KeyError(key) indexer, _ = self._engine.get_indexer_non_unique(np.array([codes])) - if (indexer==-1).any(): + if (indexer == -1).any(): raise KeyError(key) return indexer @@ 
-3484,11 +3525,14 @@ def reindex(self, target, method=None, level=None, limit=None, """ if method is not None: - raise NotImplementedError("argument method is not implemented for CategoricalIndex.reindex") + raise NotImplementedError("argument method is not implemented for " + "CategoricalIndex.reindex") if level is not None: - raise NotImplementedError("argument level is not implemented for CategoricalIndex.reindex") + raise NotImplementedError("argument level is not implemented for " + "CategoricalIndex.reindex") if limit is not None: - raise NotImplementedError("argument limit is not implemented for CategoricalIndex.reindex") + raise NotImplementedError("argument limit is not implemented for " + "CategoricalIndex.reindex") target = _ensure_index(target) @@ -3498,25 +3542,25 @@ def reindex(self, target, method=None, level=None, limit=None, indexer, missing = self.get_indexer_non_unique(np.array(target)) new_target = self.take(indexer) - # filling in missing if needed if len(missing): cats = self.categories.get_indexer(target) - if (cats==-1).any(): + if (cats == -1).any(): # coerce to a regular index here! - result = Index(np.array(self),name=self.name) - new_target, indexer, _ = result._reindex_non_unique(np.array(target)) + result = Index(np.array(self), name=self.name) + new_target, indexer, _ = result._reindex_non_unique( + np.array(target)) else: codes = new_target.codes.copy() - codes[indexer==-1] = cats[missing] + codes[indexer == -1] = cats[missing] new_target = self._create_from_codes(codes) # we always want to return an Index type here - # to be consistent with .reindex for other index types (e.g. they don't coerce - # based on the actual values, only on the dtype) + # to be consistent with .reindex for other index types (e.g. 
they don't + # coerce based on the actual values, only on the dtype) # unless we had an inital Categorical to begin with # in which case we are going to conform to the passed Categorical new_target = np.asarray(new_target) @@ -3528,11 +3572,13 @@ def reindex(self, target, method=None, level=None, limit=None, return new_target, indexer def _reindex_non_unique(self, target): - """ reindex from a non-unique; which CategoricalIndex's are almost always """ + """ reindex from a non-unique; which CategoricalIndex's are almost + always + """ new_target, indexer = self.reindex(target) new_indexer = None - check = indexer==-1 + check = indexer == -1 if check.any(): new_indexer = np.arange(len(self.take(indexer))) new_indexer[check] = -1 @@ -3580,8 +3626,8 @@ def get_indexer(self, target, method=None, limit=None, tolerance=None): target = target.categories if method == 'pad' or method == 'backfill': - raise NotImplementedError("method='pad' and method='backfill' not implemented yet " - 'for CategoricalIndex') + raise NotImplementedError("method='pad' and method='backfill' not " + "implemented yet for CategoricalIndex") elif method == 'nearest': raise NotImplementedError("method='nearest' not implemented yet " 'for CategoricalIndex') @@ -3593,7 +3639,9 @@ def get_indexer(self, target, method=None, limit=None, tolerance=None): return com._ensure_platform_int(indexer) def get_indexer_non_unique(self, target): - """ this is the same for a CategoricalIndex for get_indexer; the API returns the missing values as well """ + """ this is the same for a CategoricalIndex for get_indexer; the API + returns the missing values as well + """ target = _ensure_index(target) if isinstance(target, CategoricalIndex): @@ -3605,11 +3653,13 @@ def get_indexer_non_unique(self, target): def _convert_list_indexer(self, keyarr, kind=None): """ we are passed a list indexer. 
- Return our indexer or raise if all of the values are not included in the categories + Return our indexer or raise if all of the values are not included in + the categories """ codes = self.categories.get_indexer(keyarr) - if (codes==-1).any(): - raise KeyError("a list-indexer must only include values that are in the categories") + if (codes == -1).any(): + raise KeyError("a list-indexer must only include values that are " + "in the categories") return None @@ -3661,11 +3711,11 @@ def insert(self, loc, item): """ code = self.categories.get_indexer([item]) if (code == -1): - raise TypeError("cannot insert an item into a CategoricalIndex that is not already an existing category") + raise TypeError("cannot insert an item into a CategoricalIndex " + "that is not already an existing category") codes = self.codes - codes = np.concatenate( - (codes[:loc], code, codes[loc:])) + codes = np.concatenate((codes[:loc], code, codes[loc:])) return self._create_from_codes(codes) def append(self, other): @@ -3685,8 +3735,8 @@ def append(self, other): ValueError if other is not in the categories """ to_concat, name = self._ensure_compat_append(other) - to_concat = [ self._is_dtype_compat(c) for c in to_concat ] - codes = np.concatenate([ c.codes for c in to_concat ]) + to_concat = [self._is_dtype_compat(c) for c in to_concat] + codes = np.concatenate([c.codes for c in to_concat]) return self._create_from_codes(codes, name=name) @classmethod @@ -3694,14 +3744,16 @@ def _add_comparison_methods(cls): """ add in comparison methods """ def _make_compare(op): - def _evaluate_compare(self, other): - # if we have a Categorical type, then must have the same categories + # if we have a Categorical type, then must have the same + # categories if isinstance(other, CategoricalIndex): other = other._values elif isinstance(other, Index): - other = self._create_categorical(self, other._values, categories=self.categories, ordered=self.ordered) + other = self._create_categorical( + self, 
other._values, categories=self.categories, + ordered=self.ordered) if isinstance(other, (ABCCategorical, np.ndarray, ABCSeries)): if len(self.values) != len(other): @@ -3709,7 +3761,9 @@ def _evaluate_compare(self, other): if isinstance(other, ABCCategorical): if not self.values.is_dtype_equal(other): - raise TypeError("categorical index comparisions must have the same categories and ordered attributes") + raise TypeError("categorical index comparisons must " + "have the same categories and ordered " + "attributes") return getattr(self.values, op)(other) @@ -3722,7 +3776,6 @@ def _evaluate_compare(self, other): cls.__le__ = _make_compare('__le__') cls.__ge__ = _make_compare('__ge__') - def _delegate_method(self, name, *args, **kwargs): """ method delegation to the ._values """ method = getattr(self._values, name) @@ -3738,19 +3791,16 @@ def _add_accessors(cls): """ add in Categorical accessor methods """ from pandas.core.categorical import Categorical - CategoricalIndex._add_delegate_accessors(delegate=Categorical, - accessors=["rename_categories", - "reorder_categories", - "add_categories", - "remove_categories", - "remove_unused_categories", - "set_categories", - "as_ordered", - "as_unordered", - "min", - "max"], - typ='method', - overwrite=True) + CategoricalIndex._add_delegate_accessors( + delegate=Categorical, accessors=["rename_categories", + "reorder_categories", + "add_categories", + "remove_categories", + "remove_unused_categories", + "set_categories", + "as_ordered", "as_unordered", + "min", "max"], + typ='method', overwrite=True) CategoricalIndex._add_numericlike_set_methods_disabled() @@ -3794,7 +3844,7 @@ def _maybe_cast_slice_bound(self, label, side, kind): # we are a numeric index, so we accept # integer/floats directly if not (is_integer(label) or is_float(label)): - self._invalid_indexer('slice',label) + self._invalid_indexer('slice', label) return label @@ -3802,12 +3852,11 @@ def _convert_tolerance(self, tolerance): try: return float(tolerance)
except ValueError: - raise ValueError('tolerance argument for %s must be numeric: %r' - % (type(self).__name__, tolerance)) + raise ValueError('tolerance argument for %s must be numeric: %r' % + (type(self).__name__, tolerance)) class Int64Index(NumericIndex): - """ Immutable ndarray implementing an ordered, sliceable set. The basic object storing axis labels for all pandas objects. Int64Index is a special case @@ -3841,7 +3890,8 @@ class Int64Index(NumericIndex): _engine_type = _index.Int64Engine - def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, **kwargs): + def __new__(cls, data=None, dtype=None, copy=False, name=None, + fastpath=False, **kwargs): if fastpath: return cls._simple_new(data, name=name) @@ -3855,8 +3905,8 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, * elif issubclass(data.dtype.type, np.integer): # don't force the upcast as we may be dealing # with a platform int - if dtype is None or not issubclass(np.dtype(dtype).type, - np.integer): + if (dtype is None or + not issubclass(np.dtype(dtype).type, np.integer)): dtype = np.int64 subarr = np.array(data, dtype=dtype, copy=copy) @@ -3896,7 +3946,8 @@ def equals(self, other): # return False try: - return array_equivalent(_values_from_object(self), _values_from_object(other)) + return array_equivalent(_values_from_object(self), + _values_from_object(other)) except TypeError: # e.g. fails in numpy 1.6 with DatetimeIndex #1681 return False @@ -4465,7 +4516,6 @@ def _evaluate_numeric_binop(self, other): class Float64Index(NumericIndex): - """ Immutable ndarray implementing an ordered, sliceable set. The basic object storing axis labels for all pandas objects. 
Float64Index is a special case @@ -4494,7 +4544,8 @@ class Float64Index(NumericIndex): _inner_indexer = _algos.inner_join_indexer_float64 _outer_indexer = _algos.outer_join_indexer_float64 - def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, **kwargs): + def __new__(cls, data=None, dtype=None, copy=False, name=None, + fastpath=False, **kwargs): if fastpath: return cls._simple_new(data, name) @@ -4510,8 +4561,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, * try: subarr = np.array(data, dtype=dtype, copy=copy) except: - raise TypeError('Unsafe NumPy casting, you must ' - 'explicitly cast') + raise TypeError('Unsafe NumPy casting, you must explicitly cast') # coerce to float64 for storage if subarr.dtype != np.float64: @@ -4546,7 +4596,8 @@ def _convert_scalar_indexer(self, key, kind=None): if kind == 'iloc': if is_integer(key): return key - return super(Float64Index, self)._convert_scalar_indexer(key, kind=kind) + return super(Float64Index, self)._convert_scalar_indexer(key, + kind=kind) return key @@ -4572,8 +4623,8 @@ def _convert_slice_indexer(self, key, kind=None): # translate to locations return self.slice_indexer(key.start, key.stop, key.step) - def _format_native_types(self, na_rep='', float_format=None, - decimal='.', quoting=None, **kwargs): + def _format_native_types(self, na_rep='', float_format=None, decimal='.', + quoting=None, **kwargs): from pandas.core.format import FloatArrayFormatter formatter = FloatArrayFormatter(self.values, na_rep=na_rep, float_format=float_format, @@ -4611,7 +4662,8 @@ def equals(self, other): try: if not isinstance(other, Float64Index): other = self._constructor(other) - if not is_dtype_equal(self.dtype,other.dtype) or self.shape != other.shape: + if (not is_dtype_equal(self.dtype, other.dtype) or + self.shape != other.shape): return False left, right = self._values, other._values return ((left == right) | (self._isnan & other._isnan)).all() @@ -4674,7 +4726,6 @@ 
def isin(self, values, level=None): class MultiIndex(Index): - """ A multi-level, or hierarchical, index object for pandas objects @@ -4704,7 +4755,8 @@ class MultiIndex(Index): rename = Index.set_names def __new__(cls, levels=None, labels=None, sortorder=None, names=None, - copy=False, verify_integrity=True, _set_identity=True, name=None, **kwargs): + copy=False, verify_integrity=True, _set_identity=True, + name=None, **kwargs): # compat with Index if name is not None: @@ -4756,8 +4808,8 @@ def _verify_integrity(self): label_length = len(self.labels[0]) for i, (level, label) in enumerate(zip(levels, labels)): if len(label) != label_length: - raise ValueError("Unequal label lengths: %s" % ( - [len(lab) for lab in labels])) + raise ValueError("Unequal label lengths: %s" % + ([len(lab) for lab in labels])) if len(label) and label.max() >= len(level): raise ValueError("On level %d, label max (%d) >= length of" " level (%d). NOTE: this index is in an" @@ -4780,8 +4832,9 @@ def _set_levels(self, levels, level=None, copy=False, validate=True, raise ValueError('Length of levels must match length of level.') if level is None: - new_levels = FrozenList(_ensure_index(lev, copy=copy)._shallow_copy() - for lev in levels) + new_levels = FrozenList( + _ensure_index(lev, copy=copy)._shallow_copy() + for lev in levels) else: level = [self._get_level_number(l) for l in level] new_levels = list(self._levels) @@ -4800,7 +4853,8 @@ def _set_levels(self, levels, level=None, copy=False, validate=True, if verify_integrity: self._verify_integrity() - def set_levels(self, levels, level=None, inplace=False, verify_integrity=True): + def set_levels(self, levels, level=None, inplace=False, + verify_integrity=True): """ Set new levels on MultiIndex. Defaults to returning new index. 
@@ -4809,7 +4863,7 @@ def set_levels(self, levels, level=None, inplace=False, verify_integrity=True): ---------- levels : sequence or list of sequence new level(s) to apply - level : int or level name, or sequence of int / level names (default None) + level : int, level name, or sequence of int/level names (default None) level(s) to set (None for all levels) inplace : bool if True, mutates in place @@ -4883,13 +4937,15 @@ def _set_labels(self, labels, level=None, copy=False, validate=True, raise ValueError('Length of labels must match length of levels.') if level is None: - new_labels = FrozenList(_ensure_frozen(lab, lev, copy=copy)._shallow_copy() - for lev, lab in zip(self.levels, labels)) + new_labels = FrozenList( + _ensure_frozen(lab, lev, copy=copy)._shallow_copy() + for lev, lab in zip(self.levels, labels)) else: level = [self._get_level_number(l) for l in level] new_labels = list(self._labels) for l, lev, lab in zip(level, self.levels, labels): - new_labels[l] = _ensure_frozen(lab, lev, copy=copy)._shallow_copy() + new_labels[l] = _ensure_frozen( + lab, lev, copy=copy)._shallow_copy() new_labels = FrozenList(new_labels) self._labels = new_labels @@ -4899,7 +4955,8 @@ def _set_labels(self, labels, level=None, copy=False, validate=True, if verify_integrity: self._verify_integrity() - def set_labels(self, labels, level=None, inplace=False, verify_integrity=True): + def set_labels(self, labels, level=None, inplace=False, + verify_integrity=True): """ Set new labels on MultiIndex. Defaults to returning new index. 
@@ -4908,7 +4965,7 @@ def set_labels(self, labels, level=None, inplace=False, verify_integrity=True): ---------- labels : sequence or list of sequence new labels to apply - level : int or level name, or sequence of int / level names (default None) + level : int, level name, or sequence of int/level names (default None) level(s) to set (None for all levels) inplace : bool if True, mutates in place @@ -5000,11 +5057,8 @@ def copy(self, names=None, dtype=None, levels=None, labels=None, levels = self.levels labels = self.labels names = self.names - return MultiIndex(levels=levels, - labels=labels, - names=names, - sortorder=self.sortorder, - verify_integrity=False, + return MultiIndex(levels=levels, labels=labels, names=names, + sortorder=self.sortorder, verify_integrity=False, _set_identity=_set_identity) def __array__(self, dtype=None): @@ -5023,7 +5077,7 @@ def _shallow_copy_with_infer(self, values=None, **kwargs): def _shallow_copy(self, values=None, **kwargs): if values is not None: if 'name' in kwargs: - kwargs['names'] = kwargs.pop('name',None) + kwargs['names'] = kwargs.pop('name', None) # discards freq kwargs.pop('freq', None) return MultiIndex.from_tuples(values, **kwargs) @@ -5036,9 +5090,9 @@ def dtype(self): @cache_readonly def nbytes(self): """ return the number of bytes in the underlying data """ - level_nbytes = sum(( i.nbytes for i in self.levels )) - label_nbytes = sum(( i.nbytes for i in self.labels )) - names_nbytes = sum(( getsizeof(i) for i in self.names )) + level_nbytes = sum((i.nbytes for i in self.levels)) + label_nbytes = sum((i.nbytes for i in self.labels)) + names_nbytes = sum((getsizeof(i) for i in self.names)) return level_nbytes + label_nbytes + names_nbytes def _format_attrs(self): @@ -5079,8 +5133,8 @@ def _set_names(self, names, level=None, validate=True): if validate and level is not None and len(names) != len(level): raise ValueError('Length of names must match length of level.') if validate and level is None and len(names) != 
self.nlevels: - raise ValueError( - 'Length of names must match number of levels in MultiIndex.') + raise ValueError('Length of names must match number of levels in ' + 'MultiIndex.') if level is None: level = range(self.nlevels) @@ -5091,8 +5145,8 @@ def _set_names(self, names, level=None, validate=True): for l, name in zip(level, names): self.levels[l].rename(name, inplace=True) - names = property( - fset=_set_names, fget=_get_names, doc="Names of levels in MultiIndex") + names = property(fset=_set_names, fget=_get_names, + doc="Names of levels in MultiIndex") def _reference_duplicate_name(self, name): """ @@ -5151,10 +5205,9 @@ def _get_level_number(self, level): level += self.nlevels if level < 0: orig_level = level - self.nlevels - raise IndexError( - 'Too many levels: Index has only %d levels, ' - '%d is not a valid level number' % (self.nlevels, orig_level) - ) + raise IndexError('Too many levels: Index has only %d ' + 'levels, %d is not a valid level number' % + (self.nlevels, orig_level)) # Note: levels are zero-based elif level >= self.nlevels: raise IndexError('Too many levels: Index has only %d levels, ' @@ -5203,7 +5256,8 @@ def _has_complex_internals(self): def is_unique(self): return not self.duplicated().any() - @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', False: 'first'}) + @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', + False: 'first'}) @Appender(base._shared_docs['duplicated'] % _index_doc_kwargs) def duplicated(self, keep='first'): from pandas.core.groupby import get_group_index @@ -5262,8 +5316,8 @@ def _try_mi(k): # rather than a KeyError, try it here # note that a string that 'looks' like a Timestamp will raise # a KeyError! 
(GH5725) - if isinstance(key, (datetime.datetime, np.datetime64)) or ( - compat.PY3 and isinstance(key, compat.string_types)): + if (isinstance(key, (datetime.datetime, np.datetime64)) or + (compat.PY3 and isinstance(key, compat.string_types))): try: return _try_mi(key) except (KeyError): @@ -5352,13 +5406,11 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False, if sparsify not in [True, 1]: sentinel = sparsify # little bit of a kludge job for #1217 - result_levels = _sparsify(result_levels, - start=int(names), + result_levels = _sparsify(result_levels, start=int(names), sentinel=sentinel) - if adjoin: - from pandas.core.format import _get_adjustment + from pandas.core.format import _get_adjustment adj = _get_adjustment() return adj.adjoin(space, *result_levels).split('\n') else: @@ -5366,7 +5418,7 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False, def _to_safe_for_reshape(self): """ convert to object if we are a categorical """ - return self.set_levels([ i._to_safe_for_reshape() for i in self.levels ]) + return self.set_levels([i._to_safe_for_reshape() for i in self.levels]) def to_hierarchical(self, n_repeat, n_shuffle=1): """ @@ -5478,9 +5530,8 @@ def from_arrays(cls, arrays, sortorder=None, names=None): if names is None: names = [getattr(arr, "name", None) for arr in arrays] - return MultiIndex(levels=levels, labels=labels, - sortorder=sortorder, names=names, - verify_integrity=False) + return MultiIndex(levels=levels, labels=labels, sortorder=sortorder, + names=names, verify_integrity=False) @classmethod def from_tuples(cls, tuples, sortorder=None, names=None): @@ -5525,8 +5576,7 @@ def from_tuples(cls, tuples, sortorder=None, names=None): else: arrays = lzip(*tuples) - return MultiIndex.from_arrays(arrays, sortorder=sortorder, - names=names) + return MultiIndex.from_arrays(arrays, sortorder=sortorder, names=names) @classmethod def from_product(cls, iterables, sortorder=None, names=None): @@ -5565,7 +5615,8 @@ def 
from_product(cls, iterables, sortorder=None, names=None): from pandas.core.categorical import Categorical from pandas.tools.util import cartesian_product - categoricals = [Categorical.from_array(it, ordered=True) for it in iterables] + categoricals = [Categorical.from_array(it, ordered=True) + for it in iterables] labels = cartesian_product([c.codes for c in categoricals]) return MultiIndex(levels=[c.categories for c in categoricals], @@ -5590,10 +5641,9 @@ def __contains__(self, key): def __reduce__(self): """Necessary for making this object picklable""" - d = dict(levels = [lev for lev in self.levels], - labels = [label for label in self.labels], - sortorder = self.sortorder, - names = list(self.names)) + d = dict(levels=[lev for lev in self.levels], + labels=[label for label in self.labels], + sortorder=self.sortorder, names=list(self.names)) return _new_Index, (self.__class__, d), None def __setstate__(self, state): @@ -5637,10 +5687,8 @@ def __getitem__(self, key): new_labels = [lab[key] for lab in self.labels] - return MultiIndex(levels=self.levels, - labels=new_labels, - names=self.names, - sortorder=sortorder, + return MultiIndex(levels=self.levels, labels=new_labels, + names=self.names, sortorder=sortorder, verify_integrity=False) def take(self, indexer, axis=None): @@ -5664,7 +5712,8 @@ def append(self, other): if not isinstance(other, (list, tuple)): other = [other] - if all((isinstance(o, MultiIndex) and o.nlevels >= self.nlevels) for o in other): + if all((isinstance(o, MultiIndex) and o.nlevels >= self.nlevels) + for o in other): arrays = [] for i in range(self.nlevels): label = self.get_level_values(i) @@ -5672,7 +5721,7 @@ def append(self, other): arrays.append(label.append(appended)) return MultiIndex.from_arrays(arrays, names=self.names) - to_concat = (self.values,) + tuple(k._values for k in other) + to_concat = (self.values, ) + tuple(k._values for k in other) new_tuples = np.concatenate(to_concat) # if all(isinstance(x, MultiIndex) for x in 
other): @@ -5686,10 +5735,9 @@ def argsort(self, *args, **kwargs): def repeat(self, n): return MultiIndex(levels=self.levels, - labels=[label.view(np.ndarray).repeat(n) for label in self.labels], - names=self.names, - sortorder=self.sortorder, - verify_integrity=False) + labels=[label.view(np.ndarray).repeat(n) + for label in self.labels], names=self.names, + sortorder=self.sortorder, verify_integrity=False) def drop(self, labels, level=None, errors='raise'): """ @@ -5715,8 +5763,8 @@ def drop(self, labels, level=None, errors='raise'): mask = indexer == -1 if mask.any(): if errors != 'ignore': - raise ValueError('labels %s not contained in axis' - % labels[mask]) + raise ValueError('labels %s not contained in axis' % + labels[mask]) indexer = indexer[~mask] except Exception: pass @@ -5827,9 +5875,9 @@ def reorder_levels(self, order): """ order = [self._get_level_number(i) for i in order] if len(order) != self.nlevels: - raise AssertionError(('Length of order must be same as ' - 'number of levels (%d), got %d') - % (self.nlevels, len(order))) + raise AssertionError('Length of order must be same as ' + 'number of levels (%d), got %d' % + (self.nlevels, len(order))) new_levels = [self.levels[i] for i in order] new_labels = [self.labels[i] for i in order] new_names = [self.names[i] for i in order] @@ -5890,8 +5938,7 @@ def sortlevel(self, level=0, ascending=True, sort_remaining=True): else: sortorder = level[0] - indexer = _indexer_from_factorized(primary, - primshp, + indexer = _indexer_from_factorized(primary, primshp, compress=False) if not ascending: @@ -6007,8 +6054,7 @@ def reindex(self, target, method=None, level=None, limit=None, limit=limit, tolerance=tolerance) else: - raise Exception( - "cannot handle a non-unique multi-index!") + raise Exception("cannot handle a non-unique multi-index!") if not isinstance(target, MultiIndex): if indexer is None: @@ -6020,7 +6066,7 @@ def reindex(self, target, method=None, level=None, limit=None, target = 
MultiIndex.from_tuples(target) if (preserve_names and target.nlevels == self.nlevels and - target.names != self.names): + target.names != self.names): target = target.copy(deep=False) target.names = self.names @@ -6102,9 +6148,9 @@ def _partial_tup_index(self, tup, side='left'): def get_loc(self, key, method=None): """ - Get integer location, slice or boolean mask for requested label or tuple - If the key is past the lexsort depth, the return may be a boolean mask - array, otherwise it is always a slice or int. + Get integer location, slice or boolean mask for requested label or + tuple. If the key is past the lexsort depth, the return may be a + boolean mask array, otherwise it is always a slice or int. Parameters ---------- @@ -6140,9 +6186,10 @@ def _maybe_to_slice(loc): keylen = len(key) if self.nlevels < keylen: raise KeyError('Key length ({0}) exceeds index depth ({1})' - ''.format(keylen, self.nlevels)) + ''.format(keylen, self.nlevels)) if keylen == self.nlevels and self.is_unique: + def _maybe_str_to_time_stamp(key, lev): if lev.is_all_dates and not isinstance(key, Timestamp): try: @@ -6150,6 +6197,7 @@ def _maybe_str_to_time_stamp(key, lev): except Exception: pass return key + key = _values_from_object(key) key = tuple(map(_maybe_str_to_time_stamp, key, self.levels)) return self._engine.get_loc(key) @@ -6160,8 +6208,8 @@ def _maybe_str_to_time_stamp(key, lev): # needs linear search within the slice i = self.lexsort_depth lead_key, follow_key = key[:i], key[i:] - start, stop = self.slice_locs(lead_key, lead_key) \ - if lead_key else (0, len(self)) + start, stop = (self.slice_locs(lead_key, lead_key) + if lead_key else (0, len(self))) if start == stop: raise KeyError(key) @@ -6181,9 +6229,8 @@ def _maybe_str_to_time_stamp(key, lev): if not len(loc): raise KeyError(key) - return _maybe_to_slice(loc) \ - if len(loc) != stop - start \ - else slice(start, stop) + return (_maybe_to_slice(loc) if len(loc) != stop - start else + slice(start, stop)) def 
get_loc_level(self, key, level=0, drop_level=True): """ @@ -6198,6 +6245,7 @@ def get_loc_level(self, key, level=0, drop_level=True): ------- loc : int or slice object """ + def maybe_droplevels(indexer, levels, drop_level): if not drop_level: return self[indexer] @@ -6264,19 +6312,18 @@ def partial_selection(key, indexer=None): # here we have a completely specified key, but are # using some partial string matching here # GH4758 - can_index_exactly = any([ - (l.is_all_dates and - not isinstance(k, compat.string_types)) - for k, l in zip(key, self.levels) - ]) - if any([ - l.is_all_dates for k, l in zip(key, self.levels) - ]) and not can_index_exactly: + all_dates = [(l.is_all_dates and + not isinstance(k, compat.string_types)) + for k, l in zip(key, self.levels)] + can_index_exactly = any(all_dates) + if (any([l.is_all_dates + for k, l in zip(key, self.levels)]) and + not can_index_exactly): indexer = self.get_loc(key) # we have a multiple selection here - if not isinstance(indexer, slice) \ - or indexer.stop - indexer.start != 1: + if (not isinstance(indexer, slice) or + indexer.stop - indexer.start != 1): return partial_selection(key, indexer) key = tuple(self[indexer].tolist()[0]) @@ -6313,8 +6360,7 @@ def partial_selection(key, indexer=None): indexer = slice(None, None) ilevels = [i for i in range(len(key)) if key[i] != slice(None, None)] - return indexer, maybe_droplevels(indexer, ilevels, - drop_level) + return indexer, maybe_droplevels(indexer, ilevels, drop_level) else: indexer = self._get_level_indexer(key, level=level) return indexer, maybe_droplevels(indexer, [level], drop_level) @@ -6332,15 +6378,15 @@ def convert_indexer(start, stop, step, indexer=indexer, labels=labels): # if we have a provided indexer, then this need not consider # the entire labels set - r = np.arange(start,stop,step) + r = np.arange(start, stop, step) if indexer is not None and len(indexer) != len(labels): - # we have an indexer which maps the locations in the labels that we - # 
have already selected (and is not an indexer for the entire set) - # otherwise this is wasteful - # so we only need to examine locations that are in this set - # the only magic here is that the result are the mappings to the - # set that we have selected + # we have an indexer which maps the locations in the labels + # that we have already selected (and is not an indexer for the + # entire set) otherwise this is wasteful so we only need to + # examine locations that are in this set the only magic here is + # that the result are the mappings to the set that we have + # selected from pandas import Series mapper = Series(indexer) indexer = labels.take(com._ensure_platform_int(indexer)) @@ -6348,8 +6394,8 @@ def convert_indexer(start, stop, step, indexer=indexer, labels=labels): m = result.map(mapper)._values else: - m = np.zeros(len(labels),dtype=bool) - m[np.in1d(labels,r,assume_unique=True)] = True + m = np.zeros(len(labels), dtype=bool) + m[np.in1d(labels, r, assume_unique=True)] = True return m @@ -6363,17 +6409,19 @@ def convert_indexer(start, stop, step, indexer=indexer, labels=labels): else: start = 0 if key.stop is not None: - stop = level_index.get_loc(key.stop) + stop = level_index.get_loc(key.stop) else: - stop = len(level_index)-1 + stop = len(level_index) - 1 step = key.step - except (KeyError): + except KeyError: - # we have a partial slice (like looking up a partial date string) - start = stop = level_index.slice_indexer(key.start, key.stop, key.step) + # we have a partial slice (like looking up a partial date + # string) + start = stop = level_index.slice_indexer(key.start, key.stop, + key.step) step = start.step - if isinstance(start,slice) or isinstance(stop,slice): + if isinstance(start, slice) or isinstance(stop, slice): # we have a slice for start and/or stop # a partial date slicer on a DatetimeIndex generates a slice # note that the stop ALREADY includes the stopped point (if @@ -6384,7 +6432,7 @@ def convert_indexer(start, stop, step, 
indexer=indexer, labels=labels): # need to have like semantics here to right # searching as when we are using a slice # so include the stop+1 (so we include stop) - return convert_indexer(start,stop+1,step) + return convert_indexer(start, stop + 1, step) else: # sorted, so can return slice object -> view i = labels.searchsorted(start, side='left') @@ -6395,7 +6443,7 @@ def convert_indexer(start, stop, step, indexer=indexer, labels=labels): loc = level_index.get_loc(key) if level > 0 or self.lexsort_depth == 0: - return np.array(labels == loc,dtype=bool) + return np.array(labels == loc, dtype=bool) else: # sorted, so can return slice object -> view i = labels.searchsorted(loc, side='left') @@ -6404,8 +6452,8 @@ def convert_indexer(start, stop, step, indexer=indexer, labels=labels): def get_locs(self, tup): """ - Given a tuple of slices/lists/labels/boolean indexer to a level-wise spec - produce an indexer to extract those locations + Given a tuple of slices/lists/labels/boolean indexer to a level-wise + spec produce an indexer to extract those locations Parameters ---------- @@ -6419,8 +6467,9 @@ def get_locs(self, tup): # must be lexsorted to at least as many levels if not self.is_lexsorted_for_tuple(tup): - raise KeyError('MultiIndex Slicing requires the index to be fully lexsorted' - ' tuple len ({0}), lexsort depth ({1})'.format(len(tup), self.lexsort_depth)) + raise KeyError('MultiIndex Slicing requires the index to be fully ' + 'lexsorted tuple len ({0}), lexsort depth ' + '({1})'.format(len(tup), self.lexsort_depth)) # indexer # this is the list of all values that we want to select @@ -6430,13 +6479,14 @@ def get_locs(self, tup): def _convert_to_indexer(r): # return an indexer if isinstance(r, slice): - m = np.zeros(n,dtype=bool) + m = np.zeros(n, dtype=bool) m[r] = True r = m.nonzero()[0] elif is_bool_indexer(r): if len(r) != n: - raise ValueError("cannot index with a boolean indexer that is" - " not the same length as the index") + raise ValueError("cannot 
index with a boolean indexer " + "that is not the same length as the " + "index") r = r.nonzero()[0] return Int64Index(r) @@ -6447,21 +6497,26 @@ def _update_indexer(idxr, indexer=indexer): return indexer return indexer & idxr - for i,k in enumerate(tup): + for i, k in enumerate(tup): if is_bool_indexer(k): # a boolean indexer, must be the same length! k = np.asarray(k) - indexer = _update_indexer(_convert_to_indexer(k), indexer=indexer) + indexer = _update_indexer(_convert_to_indexer(k), + indexer=indexer) elif is_list_like(k): - # a collection of labels to include from this level (these are or'd) + # a collection of labels to include from this level (these + # are or'd) indexers = None for x in k: try: - idxrs = _convert_to_indexer(self._get_level_indexer(x, level=i, indexer=indexer)) - indexers = idxrs if indexers is None else indexers | idxrs - except (KeyError): + idxrs = _convert_to_indexer( + self._get_level_indexer(x, level=i, + indexer=indexer)) + indexers = (idxrs if indexers is None + else indexers | idxrs) + except KeyError: # ignore not founds continue @@ -6477,13 +6532,17 @@ def _update_indexer(idxr, indexer=indexer): # empty slice indexer = _update_indexer(None, indexer=indexer) - elif isinstance(k,slice): + elif isinstance(k, slice): # a slice, include BOTH of the labels - indexer = _update_indexer(_convert_to_indexer(self._get_level_indexer(k,level=i,indexer=indexer)), indexer=indexer) + indexer = _update_indexer(_convert_to_indexer( + self._get_level_indexer(k, level=i, indexer=indexer)), + indexer=indexer) else: # a single label - indexer = _update_indexer(_convert_to_indexer(self.get_loc_level(k,level=i,drop_level=False)[0]), indexer=indexer) + indexer = _update_indexer(_convert_to_indexer( + self.get_loc_level(k, level=i, drop_level=False)[0]), + indexer=indexer) # empty indexer if indexer is None: @@ -6543,10 +6602,10 @@ def equals(self, other): return False for i in range(self.nlevels): - svalues = 
com.take_nd(np.asarray(self.levels[i]._values), self.labels[i], - allow_fill=False) - ovalues = com.take_nd(np.asarray(other.levels[i]._values), other.labels[i], - allow_fill=False) + svalues = com.take_nd(np.asarray(self.levels[i]._values), + self.labels[i], allow_fill=False) + ovalues = com.take_nd(np.asarray(other.levels[i]._values), + other.labels[i], allow_fill=False) if not array_equivalent(svalues, ovalues): return False @@ -6630,7 +6689,7 @@ def difference(self, other): other, result_names = self._convert_can_do_setop(other) if len(other) == 0: - return self + return self if self.equals(other): return MultiIndex(levels=[[]] * self.nlevels, @@ -6688,10 +6747,10 @@ def insert(self, loc, item): # Pad the key with empty strings if lower levels of the key # aren't specified: if not isinstance(item, tuple): - item = (item,) + ('',) * (self.nlevels - 1) + item = (item, ) + ('', ) * (self.nlevels - 1) elif len(item) != self.nlevels: - raise ValueError( - 'Item must have length equal to number of levels.') + raise ValueError('Item must have length equal to number of ' + 'levels.') new_levels = [] new_labels = [] @@ -6762,9 +6821,9 @@ def isin(self, values, level=None): MultiIndex._add_numeric_methods_disabled() MultiIndex._add_logical_methods_disabled() - # For utility purposes + def _sparsify(label_list, start=0, sentinel=''): pivoted = lzip(*label_list) k = len(label_list) @@ -6814,8 +6873,8 @@ def _ensure_index(index_like, copy=False): else: index_like = converted else: - # clean_index_list does the equivalent of copying - # so only need to do this if not list instance + # clean_index_list does the equivalent of copying + # so only need to do this if not list instance if copy: from copy import copy index_like = copy(index_like) @@ -6866,12 +6925,14 @@ def _union_indexes(indexes): return result indexes, kind = _sanitize_and_check(indexes) + def _unique_indices(inds): def conv(i): if isinstance(i, Index): i = i.tolist() return i - return 
Index(lib.fast_unique_multiple_list([ conv(i) for i in inds ])) + + return Index(lib.fast_unique_multiple_list([conv(i) for i in inds])) if kind == 'special': result = indexes[0] @@ -6908,9 +6969,8 @@ def _sanitize_and_check(indexes): if list in kinds: if len(kinds) > 1: - indexes = [Index(com._try_sort(x)) - if not isinstance(x, Index) else x - for x in indexes] + indexes = [Index(com._try_sort(x)) if not isinstance(x, Index) else + x for x in indexes] kinds.remove(list) else: return indexes, 'list' @@ -6925,9 +6985,8 @@ def _get_consensus_names(indexes): # find the non-none names, need to tupleify to make # the set hashable, then reverse on return - consensus_names = set([ - tuple(i.names) for i in indexes if all(n is not None for n in i.names) - ]) + consensus_names = set([tuple(i.names) for i in indexes + if all(n is not None for n in i.names)]) if len(consensus_names) == 1: return list(list(consensus_names)[0]) return [None] * indexes[0].nlevels @@ -6955,8 +7014,8 @@ def _get_na_rep(dtype): def _get_na_value(dtype): - return {np.datetime64: tslib.NaT, np.timedelta64: tslib.NaT}.get(dtype, - np.nan) + return {np.datetime64: tslib.NaT, + np.timedelta64: tslib.NaT}.get(dtype, np.nan) def _ensure_has_len(seq):
Ran yapf, with manual modifications for error strings and for spots where yapf produced bad formatting.
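The long error-message wraps throughout the diff work because Python joins adjacent string literals at compile time; a minimal sketch with a hypothetical message (not a string taken from the patch):

```python
# Adjacent string literals inside parentheses are concatenated at
# compile time, so wrapping a long message across lines changes the
# source layout but not the resulting value.
wrapped = ("tolerance argument must be numeric: "
           "got a non-numeric value")
single = "tolerance argument must be numeric: got a non-numeric value"
assert wrapped == single
```

This is why yapf can reflow such strings freely: the runtime value is identical either way, which is also what makes the manual tweaks to error strings safe.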
https://api.github.com/repos/pandas-dev/pandas/pulls/12062
2016-01-16T19:16:32Z
2016-01-20T03:18:13Z
null
2016-01-20T03:18:31Z
PERF: more flexible iso8601 parsing
diff --git a/asv_bench/benchmarks/timeseries.py b/asv_bench/benchmarks/timeseries.py index db0c526f25c7b..bdf193cd1f3d3 100644 --- a/asv_bench/benchmarks/timeseries.py +++ b/asv_bench/benchmarks/timeseries.py @@ -1059,33 +1059,27 @@ class timeseries_to_datetime_iso8601(object): goal_time = 0.2 def setup(self): - self.N = 100000 - self.rng = date_range(start='1/1/2000', periods=self.N, freq='T') - if hasattr(Series, 'convert'): - Series.resample = Series.convert - self.ts = Series(np.random.randn(self.N), index=self.rng) self.rng = date_range(start='1/1/2000', periods=20000, freq='H') self.strings = [x.strftime('%Y-%m-%d %H:%M:%S') for x in self.rng] + self.strings_nosep = [x.strftime('%Y%m%d %H:%M:%S') for x in self.rng] + self.strings_tz_space = [x.strftime('%Y-%m-%d %H:%M:%S') + ' -0800' + for x in self.rng] def time_timeseries_to_datetime_iso8601(self): to_datetime(self.strings) - -class timeseries_to_datetime_iso8601_format(object): - goal_time = 0.2 - - def setup(self): - self.N = 100000 - self.rng = date_range(start='1/1/2000', periods=self.N, freq='T') - if hasattr(Series, 'convert'): - Series.resample = Series.convert - self.ts = Series(np.random.randn(self.N), index=self.rng) - self.rng = date_range(start='1/1/2000', periods=20000, freq='H') - self.strings = [x.strftime('%Y-%m-%d %H:%M:%S') for x in self.rng] + def time_timeseries_to_datetime_iso8601_nosep(self): + to_datetime(self.strings_nosep) def time_timeseries_to_datetime_iso8601_format(self): to_datetime(self.strings, format='%Y-%m-%d %H:%M:%S') + def time_timeseries_to_datetime_iso8601_format_no_sep(self): + to_datetime(self.strings_nosep, format='%Y%m%d %H:%M:%S') + + def time_timeseries_to_datetime_iso8601_tz_spaceformat(self): + to_datetime(self.strings_tz_space) + class timeseries_with_format_no_exact(object): goal_time = 0.2 @@ -1160,4 +1154,4 @@ def setup(self): self.cdayh = pd.offsets.CustomBusinessDay(calendar=self.hcal) def time_timeseries_year_incr(self): - (self.date + self.year) \ No 
newline at end of file + (self.date + self.year) diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index b2eb7d9d97d58..f9ae5e1245551 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -461,7 +461,7 @@ Performance Improvements - +- Improved performance of ISO 8601 date parsing for dates without separators (:issue:`11899`), leading zeros (:issue:`11871`) and with whitespace preceding the time zone (:issue:`9714`) diff --git a/pandas/src/datetime/np_datetime_strings.c b/pandas/src/datetime/np_datetime_strings.c index bc4ef1b3c8184..1e59b31da1e65 100644 --- a/pandas/src/datetime/np_datetime_strings.c +++ b/pandas/src/datetime/np_datetime_strings.c @@ -346,8 +346,6 @@ convert_datetimestruct_local_to_utc(pandas_datetimestruct *out_dts_utc, /* * Parses (almost) standard ISO 8601 date strings. The differences are: * - * + The date "20100312" is parsed as the year 20100312, not as - * equivalent to "2010-03-12". The '-' in the dates are not optional. * + Only seconds may have a decimal point, with up to 18 digits after it * (maximum attoseconds precision). * + Either a 'T' as in ISO 8601 or a ' ' may be used to separate @@ -396,6 +394,16 @@ parse_iso_8601_datetime(char *str, int len, char *substr, sublen; PANDAS_DATETIMEUNIT bestunit; + /* if date components in are separated by one of valid separators + * months/days without leadings 0s will be parsed + * (though not iso8601). 
If the components aren't separated, + * an error code will be returned because the date is ambiguous */ + int has_sep = 0; + char sep; + char valid_sep[] = {'-', '.', '/', '\\', ' '}; + int valid_sep_len = 5; + /* Initialize the output to all zeros */ memset(out, 0, sizeof(pandas_datetimestruct)); out->month = 1; @@ -523,12 +531,16 @@ parse_iso_8601_datetime(char *str, int len, goto parse_error; } - /* PARSE THE YEAR (digits until the '-' character) */ + /* PARSE THE YEAR (4 digits) */ out->year = 0; - while (sublen > 0 && isdigit(*substr)) { - out->year = 10 * out->year + (*substr - '0'); - ++substr; - --sublen; + if (sublen >= 4 && isdigit(substr[0]) && isdigit(substr[1]) && + isdigit(substr[2]) && isdigit(substr[3])) { + + out->year = 1000 * (substr[0] - '0') + 100 * (substr[1] - '0') + + 10 * (substr[2] - '0') + (substr[3] - '0'); + + substr += 4; + sublen -= 4;; } /* Negate the year if necessary */ @@ -538,7 +550,7 @@ parse_iso_8601_datetime(char *str, int len, /* Check whether it's a leap-year */ year_leap = is_leapyear(out->year); - /* Next character must be a '-' or the end of the string */ + /* Next character must be a separator, start of month or end */ if (sublen == 0) { if (out_local != NULL) { *out_local = 0; @@ -546,21 +558,41 @@ parse_iso_8601_datetime(char *str, int len, bestunit = PANDAS_FR_Y; goto finish; } - else if (*substr == '-') { - ++substr; - --sublen; - } - else { - goto parse_error; + else if (!isdigit(*substr)) { + for (i = 0; i < valid_sep_len; ++i) { + if (*substr == valid_sep[i]) { + has_sep = 1; + sep = valid_sep[i]; + ++substr; + --sublen; + break; + } + } + if (i == valid_sep_len) { + goto parse_error; + } } - /* Can't have a trailing '-' */ + /* Can't have a trailing sep */ if (sublen == 0) { goto parse_error; } + /* PARSE THE MONTH (2 digits) */ - if (sublen >= 2 && isdigit(substr[0]) && isdigit(substr[1])) { + if (has_sep && ((sublen >= 2 && isdigit(substr[0]) && !isdigit(substr[1])) + || (sublen == 1 && isdigit(substr[0])))) {
+ out->month = (substr[0] - '0'); + + if (out->month < 1) { + PyErr_Format(PyExc_ValueError, + "Month out of range in datetime string \"%s\"", str); + goto error; + } + ++substr; + --sublen; + } + else if (sublen >= 2 && isdigit(substr[0]) && isdigit(substr[1])) { out->month = 10 * (substr[0] - '0') + (substr[1] - '0'); if (out->month < 1 || out->month > 12) { @@ -577,18 +609,22 @@ parse_iso_8601_datetime(char *str, int len, /* Next character must be a '-' or the end of the string */ if (sublen == 0) { + /* dates of form YYYYMM are not valid */ + if (!has_sep) { + goto parse_error; + } if (out_local != NULL) { *out_local = 0; } bestunit = PANDAS_FR_M; goto finish; } - else if (*substr == '-') { + else if (has_sep && *substr == sep) { ++substr; --sublen; } - else { - goto parse_error; + else if (!isdigit(*substr)) { + goto parse_error; } /* Can't have a trailing '-' */ @@ -597,7 +633,19 @@ parse_iso_8601_datetime(char *str, int len, } /* PARSE THE DAY (2 digits) */ - if (sublen >= 2 && isdigit(substr[0]) && isdigit(substr[1])) { + if (has_sep && ((sublen >= 2 && isdigit(substr[0]) && !isdigit(substr[1])) + || (sublen == 1 && isdigit(substr[0])))) { + out->day = (substr[0] - '0'); + + if (out->day < 1) { + PyErr_Format(PyExc_ValueError, + "Day out of range in datetime string \"%s\"", str); + goto error; + } + ++substr; + --sublen; + } + else if (sublen >= 2 && isdigit(substr[0]) && isdigit(substr[1])) { out->day = 10 * (substr[0] - '0') + (substr[1] - '0'); if (out->day < 1 || @@ -633,7 +681,7 @@ parse_iso_8601_datetime(char *str, int len, if (sublen >= 2 && isdigit(substr[0]) && isdigit(substr[1])) { out->hour = 10 * (substr[0] - '0') + (substr[1] - '0'); - if (out->hour < 0 || out->hour >= 24) { + if (out->hour >= 24) { PyErr_Format(PyExc_ValueError, "Hours out of range in datetime string \"%s\"", str); goto error; @@ -641,6 +689,11 @@ parse_iso_8601_datetime(char *str, int len, substr += 2; sublen -= 2; } + else if (sublen >= 1 && isdigit(substr[0])) { + out->hour 
= substr[0] - '0'; + ++substr; + --sublen; + } else { goto parse_error; } @@ -664,7 +717,7 @@ parse_iso_8601_datetime(char *str, int len, if (sublen >= 2 && isdigit(substr[0]) && isdigit(substr[1])) { out->min = 10 * (substr[0] - '0') + (substr[1] - '0'); - if (out->hour < 0 || out->min >= 60) { + if (out->min >= 60) { PyErr_Format(PyExc_ValueError, "Minutes out of range in datetime string \"%s\"", str); goto error; @@ -672,6 +725,11 @@ parse_iso_8601_datetime(char *str, int len, substr += 2; sublen -= 2; } + else if (sublen >= 1 && isdigit(substr[0])) { + out->min = substr[0] - '0'; + ++substr; + --sublen; + } else { goto parse_error; } @@ -695,7 +753,7 @@ parse_iso_8601_datetime(char *str, int len, if (sublen >= 2 && isdigit(substr[0]) && isdigit(substr[1])) { out->sec = 10 * (substr[0] - '0') + (substr[1] - '0'); - if (out->sec < 0 || out->sec >= 60) { + if (out->sec >= 60) { PyErr_Format(PyExc_ValueError, "Seconds out of range in datetime string \"%s\"", str); goto error; @@ -703,6 +761,11 @@ parse_iso_8601_datetime(char *str, int len, substr += 2; sublen -= 2; } + else if (sublen >= 1 && isdigit(substr[0])) { + out->sec = substr[0] - '0'; + ++substr; + --sublen; + } else { goto parse_error; } @@ -781,6 +844,12 @@ parse_iso_8601_datetime(char *str, int len, } parse_timezone: + /* trim any whitespace between time/timezone */ + while (sublen > 0 && isspace(*substr)) { + ++substr; + --sublen; + } + if (sublen == 0) { // Unlike NumPy, treating no time zone as naive goto finish; @@ -832,6 +901,11 @@ parse_iso_8601_datetime(char *str, int len, goto error; } } + else if (sublen >= 1 && isdigit(substr[0])) { + offset_hour = substr[0] - '0'; + ++substr; + --sublen; + } else { goto parse_error; } @@ -856,6 +930,11 @@ parse_iso_8601_datetime(char *str, int len, goto error; } } + else if (sublen >= 1 && isdigit(substr[0])) { + offset_minute = substr[0] - '0'; + ++substr; + --sublen; + } else { goto parse_error; } diff --git a/pandas/tseries/tests/test_timeseries.py
b/pandas/tseries/tests/test_timeseries.py index 84065c0340aad..8d7b5a31a5ab3 100644 --- a/pandas/tseries/tests/test_timeseries.py +++ b/pandas/tseries/tests/test_timeseries.py @@ -2454,7 +2454,7 @@ def test_constructor_datetime64_tzformat(self): idx = date_range('2013/1/1 0:00:00-5:00', '2016/1/1 23:59:59-5:00', freq=freq) expected = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59', - freq=freq, tz=tzoffset(None, -18000)) + freq=freq, tz=pytz.FixedOffset(-300)) tm.assert_index_equal(idx, expected) # Unable to use `US/Eastern` because of DST expected_i8 = date_range('2013-01-01T00:00:00', @@ -2465,7 +2465,7 @@ def test_constructor_datetime64_tzformat(self): idx = date_range('2013/1/1 0:00:00+9:00', '2016/1/1 23:59:59+09:00', freq=freq) expected = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59', - freq=freq, tz=tzoffset(None, 32400)) + freq=freq, tz=pytz.FixedOffset(540)) tm.assert_index_equal(idx, expected) expected_i8 = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59', freq=freq, @@ -4833,6 +4833,15 @@ def test_to_datetime_infer_datetime_format_series_starting_with_nans(self): pd.to_datetime(test_series, infer_datetime_format=True) ) + def test_to_datetime_iso8601_noleading_0s(self): + # GH 11871 + test_series = pd.Series(['2014-1-1', '2014-2-2', '2015-3-3']) + expected = pd.Series([pd.Timestamp('2014-01-01'), + pd.Timestamp('2014-02-02'), + pd.Timestamp('2015-03-03')]) + tm.assert_series_equal(pd.to_datetime(test_series), expected) + tm.assert_series_equal(pd.to_datetime(test_series, format='%Y-%m-%d'), + expected) class TestGuessDatetimeFormat(tm.TestCase): def test_guess_datetime_format_with_parseable_formats(self): diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py index 123b91d8bbf82..27dbdcdd1d993 100644 --- a/pandas/tseries/tests/test_tslib.py +++ b/pandas/tseries/tests/test_tslib.py @@ -688,6 +688,32 @@ def test_parsers_timezone_minute_offsets_roundtrip(self): converted_time = 
dt_time.tz_localize('UTC').tz_convert(tz) self.assertEqual(dt_string_repr, repr(converted_time)) + def test_parsers_iso8601(self): + # GH 12060 + # test only the iso parser - flexibility to different + # separators and leading 0s + # Timestamp construction falls back to dateutil + cases = {'2011-01-02': datetime.datetime(2011, 1, 2), + '2011-1-2': datetime.datetime(2011, 1, 2), + '2011-01': datetime.datetime(2011, 1, 1), + '2011-1': datetime.datetime(2011, 1, 1), + '2011 01 02': datetime.datetime(2011, 1, 2), + '2011.01.02': datetime.datetime(2011, 1, 2), + '2011/01/02': datetime.datetime(2011, 1, 2), + '2011\\01\\02': datetime.datetime(2011, 1, 2), + '2013-01-01 05:30:00': datetime.datetime(2013, 1, 1, 5, 30), + '2013-1-1 5:30:00': datetime.datetime(2013, 1, 1, 5, 30)} + for date_str, exp in compat.iteritems(cases): + actual = tslib._test_parse_iso8601(date_str) + self.assertEqual(actual, exp) + + # separators must all match - YYYYMM not valid + invalid_cases = ['2011-01/02', '2011^11^11', '201401', + '201111', '200101'] + for date_str in invalid_cases: + with tm.assertRaises(ValueError): + tslib._test_parse_iso8601(date_str) + class TestArrayToDatetime(tm.TestCase): def test_parsing_valid_dates(self): diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py index 734857c6d724d..85e36ba1df20e 100644 --- a/pandas/tseries/tools.py +++ b/pandas/tseries/tools.py @@ -334,10 +334,7 @@ def _convert_listlike(arg, box, format, name=None): # datetime strings, so in those cases don't use the inferred # format because this path makes process slower in this # special case - format_is_iso8601 = ( - ('%Y-%m-%dT%H:%M:%S.%f'.startswith(format) or - '%Y-%m-%d %H:%M:%S.%f'.startswith(format)) and - format != '%Y') + format_is_iso8601 = _format_is_iso(format) if format_is_iso8601: require_iso8601 = not infer_datetime_format format = None @@ -461,6 +458,21 @@ def calc_with_mask(carg, mask): return None +def _format_is_iso(f): + """ + Does format match the iso8601 set that can
be handled by the C parser? + Generally of form YYYY-MM-DDTHH:MM:SS - date separator can be different + but must be consistent. Leading 0s in dates and times are optional. + """ + iso_template = '%Y{date_sep}%m{date_sep}%d{time_sep}%H:%M:%S.%f'.format + excluded_formats = ['%Y%m%d','%Y%m', '%Y'] + + for date_sep in [' ', '/', '\\', '-', '.', '']: + for time_sep in [' ', 'T']: + if (iso_template(date_sep=date_sep, time_sep=time_sep).startswith(f) + and f not in excluded_formats): + return True + return False def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None): """ diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx index f737ac8178a68..be1c5af74a95d 100644 --- a/pandas/tslib.pyx +++ b/pandas/tslib.pyx @@ -1388,6 +1388,26 @@ cpdef convert_str_to_tsobject(object ts, object tz, object unit, return convert_to_tsobject(ts, tz, unit) +def _test_parse_iso8601(object ts): + ''' + TESTING ONLY: Parse string into Timestamp using iso8601 parser. Used + only for testing, actual construction uses `convert_str_to_tsobject` + ''' + cdef: + _TSObject obj + int out_local = 0, out_tzoffset = 0 + + obj = _TSObject() + + _string_to_dts(ts, &obj.dts, &out_local, &out_tzoffset) + obj.value = pandas_datetimestruct_to_datetime(PANDAS_FR_ns, &obj.dts) + _check_dts_bounds(&obj.dts) + if out_local == 1: + obj.tzinfo = pytz.FixedOffset(out_tzoffset) + obj.value = tz_convert_single(obj.value, obj.tzinfo, 'UTC') + return Timestamp(obj.value, tz=obj.tzinfo) + else: + return Timestamp(obj.value) cdef inline void _localize_tso(_TSObject obj, object tz): '''
closes #9714 closes #11899 closes #11871 makes ISO parser in C handle the following. I think 2 & 3 aren't actually iso8601 anymore, but close and unambiguous. 1. dates without '-' separator 2. dates without space before tz 3. dates without leading 0s in month/day (this ONLY works if the date has separators, eg. `"2015-1-1"` parses, but `"201511"` doesn't because it's ambiguous) asv results - adds a small amount of overhead to the standard case (`'2014-01-01'`) ``` before after ratio [1ae6384 ] [e922b05 ] 5.17ms 5.34ms 1.03 timeseries.timeseries_to_datetime_iso8601.time_timeseries_to_datetime_iso8601 5.15ms 5.50ms 1.07 timeseries.timeseries_to_datetime_iso8601.time_timeseries_to_datetime_iso8601_format - 111.89ms 5.27ms 0.05 timeseries.timeseries_to_datetime_iso8601.time_timeseries_to_datetime_iso8601_format_no_sep - 2.02s 5.33ms 0.00 timeseries.timeseries_to_datetime_iso8601.time_timeseries_to_datetime_iso8601_nosep - 2.97s 218.25ms 0.07 timeseries.timeseries_to_datetime_iso8601.time_timeseries_to_datetime_iso8601_tz_spaceformat ```
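The three behaviors above can be illustrated with a short sketch (assuming a pandas build that includes this patch; these calls all go through `pd.to_datetime`, which dispatches to the C ISO parser):

```python
import pandas as pd

# 1. dates without a '-' separator now parse as dates, not as huge years
assert pd.to_datetime("20150312") == pd.Timestamp("2015-03-12")

# 2. whitespace between the time and the tz offset is tolerated
ts = pd.to_datetime("2015-03-12 00:00:00 -0800")
assert ts.utcoffset().total_seconds() == -8 * 3600

# 3. leading 0s in month/day are optional, but only with separators present
assert pd.to_datetime("2015-3-12") == pd.Timestamp("2015-03-12")
```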
https://api.github.com/repos/pandas-dev/pandas/pulls/12060
2016-01-16T02:24:42Z
2016-01-26T14:32:58Z
null
2016-09-24T14:39:58Z
DOC: read_csv() ignores quotes when a regex is used in sep
diff --git a/doc/source/io.rst b/doc/source/io.rst index 041daaeb3b12f..e301e353071d9 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -87,7 +87,7 @@ They can take a number of arguments: on. With ``sep=None``, ``read_csv`` will try to infer the delimiter automatically in some cases by "sniffing". The separator may be specified as a regular expression; for instance - you may use '\|\\s*' to indicate a pipe plus arbitrary whitespace. + you may use '\|\\s*' to indicate a pipe plus arbitrary whitespace; note that quotes in the data are ignored when a regular expression is used as the separator. - ``delim_whitespace``: Parse whitespace-delimited (spaces or tabs) file (much faster than using a regular expression) - ``compression``: decompress ``'gzip'`` and ``'bz2'`` formats on the fly.
made doc changes regarding #11989
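A minimal sketch of the behavior this doc change records: with a plain one-character separator the quoted field is kept intact, while a regex separator forces the python engine, which splits on the pipe even inside the quotes (the exact resulting frame shape with the regex sep is the python engine's field-overflow handling, not asserted here):

```python
from io import StringIO
import pandas as pd

data = 'a|b\n"1|2"|3\n'

# plain separator: the C parser honors the default quotechar, so the
# quoted pipe does not split the field
df = pd.read_csv(StringIO(data), sep='|')
assert df.loc[0, 'a'] == '1|2'
assert df.loc[0, 'b'] == 3

# regex separator: python engine, quotes in the data are ignored and the
# row is split into three fields
df_re = pd.read_csv(StringIO(data), sep=r'\|\s*', engine='python')
```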
https://api.github.com/repos/pandas-dev/pandas/pulls/12059
2016-01-16T00:25:30Z
2016-01-16T17:33:32Z
2016-01-16T17:33:32Z
2016-01-16T17:33:36Z
COMPAT: numpy compat with NaT != NaT, #12049
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index c1f14ce6703a0..ce324e8a2dab1 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -445,7 +445,7 @@ Bug Fixes - Accept unicode in ``Timedelta`` constructor (:issue:`11995`) - Bug in value label reading for ``StataReader`` when reading incrementally (:issue:`12014`) - Bug in vectorized ``DateOffset`` when ``n`` parameter is ``0`` (:issue:`11370`) - +- Compat for numpy 1.11 w.r.t. ``NaT`` comparison changes (:issue:`12049`) diff --git a/pandas/core/common.py b/pandas/core/common.py index b80b7eecaeb11..0326352ef3444 100644 --- a/pandas/core/common.py +++ b/pandas/core/common.py @@ -379,12 +379,13 @@ def array_equivalent(left, right, strict_nan=False): """ left, right = np.asarray(left), np.asarray(right) + + # shape compat if left.shape != right.shape: return False # Object arrays can contain None, NaN and NaT. - if (issubclass(left.dtype.type, np.object_) or - issubclass(right.dtype.type, np.object_)): + if is_object_dtype(left) or is_object_dtype(right): if not strict_nan: # pd.isnull considers NaN and None to be equivalent. @@ -405,13 +406,21 @@ def array_equivalent(left, right, strict_nan=False): return True # NaNs can occur in float and complex arrays. - if issubclass(left.dtype.type, (np.floating, np.complexfloating)): + if is_float_dtype(left) or is_complex_dtype(left): return ((left == right) | (np.isnan(left) & np.isnan(right))).all() # numpy will will not allow this type of datetimelike vs integer comparison elif is_datetimelike_v_numeric(left, right): return False + # M8/m8 + elif needs_i8_conversion(left) and needs_i8_conversion(right): + if not is_dtype_equal(left.dtype, right.dtype): + return False + + left = left.view('i8') + right = right.view('i8') + # NaNs cannot occur otherwise. 
return np.array_equal(left, right) diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index bc204740567de..a22d8f11c9a75 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -8,7 +8,9 @@ import numpy as np import pandas as pd from pandas.tslib import iNaT, NaT -from pandas import Series, DataFrame, date_range, DatetimeIndex, Timestamp, Float64Index +from pandas import (Series, DataFrame, date_range, + DatetimeIndex, TimedeltaIndex, + Timestamp, Float64Index) from pandas import compat from pandas.compat import range, long, lrange, lmap, u from pandas.core.common import notnull, isnull, array_equivalent @@ -322,20 +324,40 @@ def test_array_equivalent(): np.array([np.nan, 1, np.nan])) assert array_equivalent(np.array([np.nan, None], dtype='object'), np.array([np.nan, None], dtype='object')) - assert array_equivalent(np.array([np.nan, 1+1j], dtype='complex'), - np.array([np.nan, 1+1j], dtype='complex')) - assert not array_equivalent(np.array([np.nan, 1+1j], dtype='complex'), - np.array([np.nan, 1+2j], dtype='complex')) + assert array_equivalent(np.array([np.nan, 1 + 1j], dtype='complex'), + np.array([np.nan, 1 + 1j], dtype='complex')) + assert not array_equivalent(np.array([np.nan, 1 + 1j], dtype='complex'), + np.array([np.nan, 1 + 2j], dtype='complex')) assert not array_equivalent(np.array([np.nan, 1, np.nan]), np.array([np.nan, 2, np.nan])) - assert not array_equivalent(np.array(['a', 'b', 'c', 'd']), np.array(['e', 'e'])) - assert array_equivalent(Float64Index([0, np.nan]), Float64Index([0, np.nan])) - assert not array_equivalent(Float64Index([0, np.nan]), Float64Index([1, np.nan])) - assert array_equivalent(DatetimeIndex([0, np.nan]), DatetimeIndex([0, np.nan])) - assert not array_equivalent(DatetimeIndex([0, np.nan]), DatetimeIndex([1, np.nan])) + assert not array_equivalent(np.array(['a', 'b', 'c', 'd']), + np.array(['e', 'e'])) + assert array_equivalent(Float64Index([0, np.nan]), + Float64Index([0, np.nan])) + assert 
not array_equivalent(Float64Index([0, np.nan]), + Float64Index([1, np.nan])) + assert array_equivalent(DatetimeIndex([0, np.nan]), + DatetimeIndex([0, np.nan])) + assert not array_equivalent(DatetimeIndex([0, np.nan]), + DatetimeIndex([1, np.nan])) + assert array_equivalent(TimedeltaIndex([0, np.nan]), + TimedeltaIndex([0, np.nan])) + assert not array_equivalent(TimedeltaIndex([0, np.nan]), + TimedeltaIndex([1, np.nan])) + assert array_equivalent(DatetimeIndex([0, np.nan], tz='US/Eastern'), + DatetimeIndex([0, np.nan], tz='US/Eastern')) + assert not array_equivalent(DatetimeIndex([0, np.nan], tz='US/Eastern'), + DatetimeIndex([1, np.nan], tz='US/Eastern')) + assert not array_equivalent(DatetimeIndex([0, np.nan]), + DatetimeIndex([0, np.nan], tz='US/Eastern')) + assert not array_equivalent(DatetimeIndex([0, np.nan], tz='CET'), + DatetimeIndex([0, np.nan], tz='US/Eastern')) + assert not array_equivalent(DatetimeIndex([0, np.nan]), + TimedeltaIndex([0, np.nan])) + def test_datetimeindex_from_empty_datetime64_array(): - for unit in [ 'ms', 'us', 'ns' ]: + for unit in ['ms', 'us', 'ns']: idx = DatetimeIndex(np.array([], dtype='datetime64[%s]' % unit)) assert(len(idx) == 0) diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py index ba43537ee609b..55a6bf6f13b63 100644 --- a/pandas/tseries/tests/test_tslib.py +++ b/pandas/tseries/tests/test_tslib.py @@ -399,8 +399,10 @@ def test_asm8(self): 1000, ] for n in ns: - self.assertEqual(Timestamp(n).asm8, np.datetime64(n, 'ns'), n) - self.assertEqual(Timestamp('nat').asm8, np.datetime64('nat', 'ns')) + self.assertEqual(Timestamp(n).asm8.view('i8'), + np.datetime64(n, 'ns').view('i8'), n) + self.assertEqual(Timestamp('nat').asm8.view('i8'), + np.datetime64('nat', 'ns').view('i8')) def test_fields(self): @@ -752,13 +754,11 @@ def test_coercing_dates_outside_of_datetime64_ns_bounds(self): np.array([invalid_date], dtype='object'), errors='raise', ) - self.assertTrue( - np.array_equal( - 
tslib.array_to_datetime( - np.array([invalid_date], dtype='object'), errors='coerce', - ), - np.array([tslib.iNaT], dtype='M8[ns]') - ) + self.assert_numpy_array_equal( + tslib.array_to_datetime( + np.array([invalid_date], dtype='object'), + errors='coerce'), + np.array([tslib.iNaT], dtype='M8[ns]') ) arr = np.array(['1/1/1000', '1/1/2000'], dtype=object)
closes #12049
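The numpy change behind #12049: elementwise comparisons involving `NaT` now evaluate to False, so `array_equivalent` can no longer rely on `==` for datetimelike arrays and instead compares the underlying i8 values, as the patch does. A minimal sketch of the two behaviors:

```python
import numpy as np

a = np.array(['2000-01-01', 'NaT'], dtype='M8[ns]')
b = np.array(['2000-01-01', 'NaT'], dtype='M8[ns]')

# NaT != NaT elementwise (numpy >= 1.11), so a naive comparison
# reports the equal NaT slots as unequal
assert not (a == b).all()

# viewing as i8 compares the raw representations, treating the
# two NaTs (both iNaT) as equal
assert np.array_equal(a.view('i8'), b.view('i8'))
```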
https://api.github.com/repos/pandas-dev/pandas/pulls/12058
2016-01-15T20:59:07Z
2016-01-16T16:30:52Z
2016-01-16T16:30:52Z
2016-01-16T16:30:52Z
BUG: GH12050 Setting values on Series using .loc with a TZ-aware DatetimeIndex fails
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 58cc0fd647511..db9cf5ae86d39 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -536,3 +536,5 @@ of columns didn't match the number of series provided (:issue:`12039`). - Removed ``millisecond`` property of ``DatetimeIndex``. This would always raise a ``ValueError`` (:issue:`12019`). - Bug in ``Series`` constructor with read-only data (:issue:`11502`) + +- Bug in ``.loc`` setitem indexer preventing the use of a TZ-aware DatetimeIndex (:issue:`12050`) diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 9df72053fb0af..492b97a4349f1 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -1114,7 +1114,8 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False): return inds else: if isinstance(obj, Index): - objarr = obj.values + # want Index objects to pass through untouched + objarr = obj else: objarr = _asarray_tuplesafe(obj) diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py index fc7a57ae2f179..c6216fb89f18e 100644 --- a/pandas/tests/test_indexing.py +++ b/pandas/tests/test_indexing.py @@ -970,6 +970,55 @@ def f(): df.loc[df.new_col == 'new', 'time'] = v assert_series_equal(df.loc[df.new_col == 'new', 'time'], v) + def test_indexing_with_datetimeindex_tz(self): + + # GH 12050 + # indexing on a series with a datetimeindex with tz + index = pd.date_range('2015-01-01', periods=2, tz='utc') + + ser = pd.Series(range(2), index=index) + + # list-like indexing + + for sel in (index, list(index)): + # getitem + assert_series_equal(ser[sel], ser) + + # setitem + result = ser.copy() + result[sel] = 1 + expected = pd.Series(1, index=index) + assert_series_equal(result, expected) + + # .loc getitem + assert_series_equal(ser.loc[sel], ser) + + # .loc setitem + result = ser.copy() + result.loc[sel] = 1 + expected = pd.Series(1, index=index) + assert_series_equal(result, expected) + + # single element 
indexing + + # getitem + self.assertEqual(ser[index[1]], 1) + + # setitem + result = ser.copy() + result[index[1]] = 5 + expected = pd.Series([0, 5], index=index) + assert_series_equal(result, expected) + + # .loc getitem + self.assertEqual(ser.loc[index[1]], 1) + + # .loc setitem + result = ser.copy() + result.loc[index[1]] = 5 + expected = pd.Series([0, 5], index=index) + assert_series_equal(result, expected) + def test_loc_setitem_dups(self): # GH 6541
Fixes https://github.com/pydata/pandas/issues/12050. Taking just the values of an index object loses the timezone information.
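The fixed behavior can be sketched as follows (assuming a pandas that includes this change; previously the `.loc` indexer went through `Index.values`, which dropped the tz and broke the lookup):

```python
import pandas as pd

idx = pd.date_range('2015-01-01', periods=2, tz='UTC')
ser = pd.Series(range(2), index=idx)

# setting values via .loc with a tz-aware DatetimeIndex indexer
ser.loc[idx] = 1
assert (ser == 1).all()

# the timezone survives on the index itself
assert ser.index.tz is not None
```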
https://api.github.com/repos/pandas-dev/pandas/pulls/12054
2016-01-15T18:28:11Z
2016-01-19T12:35:00Z
null
2016-01-19T12:35:00Z
ENH : Allow to_sql to recognize single sql type #11886
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 4ce2ce5b69cb4..081e84c57c0ac 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -118,6 +118,7 @@ Other enhancements - ``Series`` gained an ``is_unique`` attribute (:issue:`11946`) - ``DataFrame.quantile`` and ``Series.quantile`` now accept ``interpolation`` keyword (:issue:`10174`). - ``DataFrame.select_dtypes`` now allows the ``np.float16`` typecode (:issue:`11990`) +- ``DataFrame.to_sql`` now allows a single value as the SQL type for all columns (:issue:`11886`). .. _whatsnew_0180.enhancements.rounding: @@ -303,6 +304,9 @@ Other API Changes - ``.memory_usage`` now includes values in the index, as does memory_usage in ``.info`` (:issue:`11597`) +- ``DataFrame.to_latex()`` now supports non-ascii encodings (e.g. utf-8) in Python 2 with the parameter ``encoding`` (:issue:`7061`) + + Changes to eval ^^^^^^^^^^^^^^^ @@ -463,6 +467,7 @@ Bug Fixes - Bug in ``pd.read_clipboard`` and ``pd.to_clipboard`` functions not supporting Unicode; upgrade included ``pyperclip`` to v1.5.15 (:issue:`9263`) - Bug in ``DataFrame.query`` containing an assignment (:issue:`8664`) +- Bug in ``from_msgpack`` where ``__contains__()`` fails for columns of the unpacked ``DataFrame``, if the ``DataFrame`` has object columns. (:issue:`11880`) - Bug in timezone info lost when broadcasting scalar datetime to ``DataFrame`` (:issue:`11682`) diff --git a/pandas/core/format.py b/pandas/core/format.py index 86d39c139fb51..a50edd9462431 100644 --- a/pandas/core/format.py +++ b/pandas/core/format.py @@ -619,105 +619,20 @@ def _join_multiline(self, *strcols): st = ed return '\n\n'.join(str_lst) - def to_latex(self, column_format=None, longtable=False): + def to_latex(self, column_format=None, longtable=False, encoding=None): """ Render a DataFrame to a LaTeX tabular/longtable environment output.
""" - self.escape = self.kwds.get('escape', True) - def get_col_type(dtype): - if issubclass(dtype.type, np.number): - return 'r' - else: - return 'l' - - frame = self.frame - - if len(frame.columns) == 0 or len(frame.index) == 0: - info_line = (u('Empty %s\nColumns: %s\nIndex: %s') - % (type(self.frame).__name__, - frame.columns, frame.index)) - strcols = [[info_line]] - else: - strcols = self._to_str_columns() - - if self.index and isinstance(self.frame.index, MultiIndex): - clevels = self.frame.columns.nlevels - strcols.pop(0) - name = any(self.frame.index.names) - for i, lev in enumerate(self.frame.index.levels): - lev2 = lev.format() - blank = ' ' * len(lev2[0]) - lev3 = [blank] * clevels - if name: - lev3.append(lev.name) - for level_idx, group in itertools.groupby( - self.frame.index.labels[i]): - count = len(list(group)) - lev3.extend([lev2[level_idx]] + [blank] * (count - 1)) - strcols.insert(i, lev3) - - if column_format is None: - dtypes = self.frame.dtypes._values - column_format = ''.join(map(get_col_type, dtypes)) - if self.index: - index_format = 'l' * self.frame.index.nlevels - column_format = index_format + column_format - elif not isinstance(column_format, - compat.string_types): # pragma: no cover - raise AssertionError('column_format must be str or unicode, not %s' - % type(column_format)) - - def write(buf, frame, column_format, strcols, longtable=False): - if not longtable: - buf.write('\\begin{tabular}{%s}\n' % column_format) - buf.write('\\toprule\n') - else: - buf.write('\\begin{longtable}{%s}\n' % column_format) - buf.write('\\toprule\n') - - nlevels = frame.columns.nlevels - if any(frame.index.names): - nlevels += 1 - for i, row in enumerate(zip(*strcols)): - if i == nlevels and self.header: - buf.write('\\midrule\n') # End of header - if longtable: - buf.write('\\endhead\n') - buf.write('\\midrule\n') - buf.write('\\multicolumn{3}{r}{{Continued on next ' - 'page}} \\\\\n') - buf.write('\midrule\n') - buf.write('\endfoot\n\n') - 
buf.write('\\bottomrule\n') - buf.write('\\endlastfoot\n') - if self.escape: - crow = [(x.replace('\\', '\\textbackslash') # escape backslashes first - .replace('_', '\\_') - .replace('%', '\\%') - .replace('$', '\\$') - .replace('#', '\\#') - .replace('{', '\\{') - .replace('}', '\\}') - .replace('~', '\\textasciitilde') - .replace('^', '\\textasciicircum') - .replace('&', '\\&') if x else '{}') for x in row] - else: - crow = [x if x else '{}' for x in row] - buf.write(' & '.join(crow)) - buf.write(' \\\\\n') - - if not longtable: - buf.write('\\bottomrule\n') - buf.write('\\end{tabular}\n') - else: - buf.write('\\end{longtable}\n') + latex_renderer = LatexFormatter(self, column_format=column_format, + longtable=longtable) if hasattr(self.buf, 'write'): - write(self.buf, frame, column_format, strcols, longtable) + latex_renderer.write_result(self.buf) elif isinstance(self.buf, compat.string_types): - with open(self.buf, 'w') as f: - write(f, frame, column_format, strcols, longtable) + import codecs + with codecs.open(self.buf, 'w', encoding=encoding) as f: + latex_renderer.write_result(f) else: raise TypeError('buf is not a file name and it has no write ' 'method') @@ -851,6 +766,124 @@ def _get_column_name_list(self): return names +class LatexFormatter(TableFormatter): + """ Used to render a DataFrame to a LaTeX tabular/longtable environment + output. + + Parameters + ---------- + formatter : `DataFrameFormatter` + column_format : str, default None + The columns format as specified in `LaTeX table format + <https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl' for 3 columns + longtable : boolean, default False + Use a longtable environment instead of tabular. 
+ + See also + -------- + HTMLFormatter + """ + + def __init__(self, formatter, column_format=None, longtable=False): + self.fmt = formatter + self.frame = self.fmt.frame + self.column_format = column_format + self.longtable = longtable + + def write_result(self, buf): + """ + Render a DataFrame to a LaTeX tabular/longtable environment output. + """ + + # string representation of the columns + if len(self.frame.columns) == 0 or len(self.frame.index) == 0: + info_line = (u('Empty %s\nColumns: %s\nIndex: %s') + % (type(self.frame).__name__, + self.frame.columns, self.frame.index)) + strcols = [[info_line]] + else: + strcols = self.fmt._to_str_columns() + + def get_col_type(dtype): + if issubclass(dtype.type, np.number): + return 'r' + else: + return 'l' + + if self.fmt.index and isinstance(self.frame.index, MultiIndex): + clevels = self.frame.columns.nlevels + strcols.pop(0) + name = any(self.frame.index.names) + for i, lev in enumerate(self.frame.index.levels): + lev2 = lev.format() + blank = ' ' * len(lev2[0]) + lev3 = [blank] * clevels + if name: + lev3.append(lev.name) + for level_idx, group in itertools.groupby( + self.frame.index.labels[i]): + count = len(list(group)) + lev3.extend([lev2[level_idx]] + [blank] * (count - 1)) + strcols.insert(i, lev3) + + column_format = self.column_format + if column_format is None: + dtypes = self.frame.dtypes._values + column_format = ''.join(map(get_col_type, dtypes)) + if self.fmt.index: + index_format = 'l' * self.frame.index.nlevels + column_format = index_format + column_format + elif not isinstance(column_format, + compat.string_types): # pragma: no cover + raise AssertionError('column_format must be str or unicode, not %s' + % type(column_format)) + + if not self.longtable: + buf.write('\\begin{tabular}{%s}\n' % column_format) + buf.write('\\toprule\n') + else: + buf.write('\\begin{longtable}{%s}\n' % column_format) + buf.write('\\toprule\n') + + nlevels = self.frame.columns.nlevels + if any(self.frame.index.names): + 
nlevels += 1 + for i, row in enumerate(zip(*strcols)): + if i == nlevels and self.fmt.header: + buf.write('\\midrule\n') # End of header + if self.longtable: + buf.write('\\endhead\n') + buf.write('\\midrule\n') + buf.write('\\multicolumn{3}{r}{{Continued on next ' + 'page}} \\\\\n') + buf.write('\\midrule\n') + buf.write('\\endfoot\n\n') + buf.write('\\bottomrule\n') + buf.write('\\endlastfoot\n') + if self.fmt.kwds.get('escape', True): + # escape backslashes first + crow = [(x.replace('\\', '\\textbackslash') + .replace('_', '\\_') + .replace('%', '\\%') + .replace('$', '\\$') + .replace('#', '\\#') + .replace('{', '\\{') + .replace('}', '\\}') + .replace('~', '\\textasciitilde') + .replace('^', '\\textasciicircum') + .replace('&', '\\&') if x else '{}') for x in row] + else: + crow = [x if x else '{}' for x in row] + buf.write(' & '.join(crow)) + buf.write(' \\\\\n') + + if not self.longtable: + buf.write('\\bottomrule\n') + buf.write('\\end{tabular}\n') + else: + buf.write('\\end{longtable}\n') + + class HTMLFormatter(TableFormatter): indent_delta = 2 diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 7220b25daf318..b27c4268796dd 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1547,7 +1547,7 @@ def to_latex(self, buf=None, columns=None, col_space=None, colSpace=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, bold_rows=True, column_format=None, - longtable=None, escape=None): + longtable=None, escape=None, encoding=None): """ Render a DataFrame to a tabular environment table. You can splice this into a LaTeX document. Requires \\usepackage{booktabs}. @@ -1567,7 +1567,8 @@ def to_latex(self, buf=None, columns=None, col_space=None, colSpace=None, default: True When set to False prevents from escaping latex special characters in column names. 
- + encoding : str, default None + Default encoding is ascii in Python 2 and utf-8 in Python 3 """ if colSpace is not None: # pragma: no cover @@ -1589,7 +1590,8 @@ def to_latex(self, buf=None, columns=None, col_space=None, colSpace=None, sparsify=sparsify, index_names=index_names, escape=escape) - formatter.to_latex(column_format=column_format, longtable=longtable) + formatter.to_latex(column_format=column_format, longtable=longtable, + encoding=encoding) if buf is None: return formatter.buf.getvalue() diff --git a/pandas/core/window.py b/pandas/core/window.py index 1e5816e898baa..ce8fda9e932bc 100644 --- a/pandas/core/window.py +++ b/pandas/core/window.py @@ -965,6 +965,7 @@ def corr(self, other=None, pairwise=None, **kwargs): Use a standard estimation bias correction """ + class EWM(_Rolling): r""" Provides exponential weighted functions diff --git a/pandas/hashtable.pyx b/pandas/hashtable.pyx index 58e9d64921e0d..a5fcbd3f2d0f1 100644 --- a/pandas/hashtable.pyx +++ b/pandas/hashtable.pyx @@ -342,7 +342,7 @@ cdef class Int64HashTable(HashTable): self.table.vals[k] = <Py_ssize_t> values[i] @cython.boundscheck(False) - def map_locations(self, int64_t[:] values): + def map_locations(self, ndarray[int64_t, ndim=1] values): cdef: Py_ssize_t i, n = len(values) int ret = 0 @@ -570,7 +570,7 @@ cdef class Float64HashTable(HashTable): return np.asarray(labels) @cython.boundscheck(False) - def map_locations(self, float64_t[:] values): + def map_locations(self, ndarray[float64_t, ndim=1] values): cdef: Py_ssize_t i, n = len(values) int ret = 0 diff --git a/pandas/io/sql.py b/pandas/io/sql.py index 95a6d02b1ccb6..8cf7e0eb15b48 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -19,6 +19,7 @@ from pandas.core.common import isnull from pandas.core.base import PandasObject from pandas.core.dtypes import DatetimeTZDtype +from pandas.core.generic import is_dictlike from pandas.tseries.tools import to_datetime from pandas.util.decorators import Appender @@ -548,9 +549,11 @@ 
def to_sql(frame, name, con, flavor='sqlite', schema=None, if_exists='fail', chunksize : int, default None If not None, then rows will be written in batches of this size at a time. If None, all rows will be written at once. - dtype : dict of column name to SQL type, default None + dtype : single SQL type or dict of column name to SQL type, default None Optional specifying the datatype for columns. The SQL type should - be a SQLAlchemy type, or a string for sqlite3 fallback connection. + be a SQLAlchemy type, or a string for sqlite3 fallback connection. + If all columns are of the same type, one single value can be + used. """ if if_exists not in ('fail', 'replace', 'append'): @@ -563,7 +566,7 @@ def to_sql(frame, name, con, flavor='sqlite', schema=None, if_exists='fail', elif not isinstance(frame, DataFrame): raise NotImplementedError("'frame' argument should be either a " "Series or a DataFrame") - + pandas_sql.to_sql(frame, name, if_exists=if_exists, index=index, index_label=index_label, schema=schema, chunksize=chunksize, dtype=dtype) @@ -1222,11 +1225,15 @@ def to_sql(self, frame, name, if_exists='fail', index=True, chunksize : int, default None If not None, then rows will be written in batches of this size at a time. If None, all rows will be written at once. - dtype : dict of column name to SQL type, default None + dtype : single SQL type or dict of column name to SQL type, default None Optional specifying the datatype for columns. The SQL type should - be a SQLAlchemy type. + be a SQLAlchemy type. If all columns are of the same type, one + single value can be used. + """ + if dtype and not is_dictlike(dtype): + dtype = { col_name : dtype for col_name in frame } if dtype is not None: from sqlalchemy.types import to_instance, TypeEngine for col, my_type in dtype.items(): @@ -1618,11 +1625,15 @@ def to_sql(self, frame, name, if_exists='fail', index=True, chunksize : int, default None If not None, then rows will be written in batches of this size at a time. 
If None, all rows will be written at once. - dtype : dict of column name to SQL type, default None + dtype : single SQL type or dict of column name to SQL type, default None Optional specifying the datatype for columns. The SQL type should - be a string. + be a string. If all columns are of the same type, one single + value can be used. """ + if dtype and not is_dictlike(dtype): + dtype = { col_name : dtype for col_name in frame } + if dtype is not None: for col, my_type in dtype.items(): if not isinstance(my_type, str): diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py index d6a9feb1bd8f4..61b24c858b60d 100644 --- a/pandas/io/tests/test_packers.py +++ b/pandas/io/tests/test_packers.py @@ -9,8 +9,8 @@ from pandas import compat from pandas.compat import u from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range, - date_range, period_range, Index, SparseSeries, SparseDataFrame, - SparsePanel) + date_range, period_range, Index) +from pandas.io.packers import to_msgpack, read_msgpack import pandas.util.testing as tm from pandas.util.testing import (ensure_clean, assert_index_equal, assert_series_equal, @@ -23,7 +23,19 @@ nan = np.nan -from pandas.io.packers import to_msgpack, read_msgpack +try: + import blosc # NOQA +except ImportError: + _BLOSC_INSTALLED = False +else: + _BLOSC_INSTALLED = True + +try: + import zlib # NOQA +except ImportError: + _ZLIB_INSTALLED = False +else: + _ZLIB_INSTALLED = True _multiprocess_can_split_ = False @@ -483,6 +495,14 @@ class TestCompression(TestPackers): """ def setUp(self): + try: + from sqlalchemy import create_engine + self._create_sql_engine = create_engine + except ImportError: + self._SQLALCHEMY_INSTALLED = False + else: + self._SQLALCHEMY_INSTALLED = True + super(TestCompression, self).setUp() data = { 'A': np.arange(1000, dtype=np.float64), @@ -508,14 +528,56 @@ def test_compression_zlib(self): assert_frame_equal(self.frame[k], i_rec[k]) def test_compression_blosc(self): - try: - 
import blosc - except ImportError: + if not _BLOSC_INSTALLED: raise nose.SkipTest('no blosc') i_rec = self.encode_decode(self.frame, compress='blosc') for k in self.frame.keys(): assert_frame_equal(self.frame[k], i_rec[k]) + def test_readonly_axis_blosc(self): + # GH11880 + if not _BLOSC_INSTALLED: + raise nose.SkipTest('no blosc') + df1 = DataFrame({'A': list('abcd')}) + df2 = DataFrame(df1, index=[1., 2., 3., 4.]) + self.assertTrue(1 in self.encode_decode(df1['A'], compress='blosc')) + self.assertTrue(1. in self.encode_decode(df2['A'], compress='blosc')) + + def test_readonly_axis_zlib(self): + # GH11880 + df1 = DataFrame({'A': list('abcd')}) + df2 = DataFrame(df1, index=[1., 2., 3., 4.]) + self.assertTrue(1 in self.encode_decode(df1['A'], compress='zlib')) + self.assertTrue(1. in self.encode_decode(df2['A'], compress='zlib')) + + def test_readonly_axis_blosc_to_sql(self): + # GH11880 + if not _BLOSC_INSTALLED: + raise nose.SkipTest('no blosc') + if not self._SQLALCHEMY_INSTALLED: + raise nose.SkipTest('no sqlalchemy') + expected = DataFrame({'A': list('abcd')}) + df = self.encode_decode(expected, compress='blosc') + eng = self._create_sql_engine("sqlite:///:memory:") + df.to_sql('test', eng, if_exists='append') + result = pandas.read_sql_table('test', eng, index_col='index') + result.index.names = [None] + assert_frame_equal(expected, result) + + def test_readonly_axis_zlib_to_sql(self): + # GH11880 + if not _ZLIB_INSTALLED: + raise nose.SkipTest('no zlib') + if not self._SQLALCHEMY_INSTALLED: + raise nose.SkipTest('no sqlalchemy') + expected = DataFrame({'A': list('abcd')}) + df = self.encode_decode(expected, compress='zlib') + eng = self._create_sql_engine("sqlite:///:memory:") + df.to_sql('test', eng, if_exists='append') + result = pandas.read_sql_table('test', eng, index_col='index') + result.index.names = [None] + assert_frame_equal(expected, result) + class TestEncoding(TestPackers): def setUp(self): diff --git a/pandas/io/tests/test_sql.py 
b/pandas/io/tests/test_sql.py index bfd1ac3f08ee8..909713d50a1ab 100644 --- a/pandas/io/tests/test_sql.py +++ b/pandas/io/tests/test_sql.py @@ -1509,6 +1509,21 @@ def test_dtype(self): self.assertTrue(isinstance(sqltype, sqlalchemy.String)) self.assertEqual(sqltype.length, 10) + def test_to_sql_single_dtype(self): + self.drop_table('single_dtype_test') + cols = ['A','B'] + data = [('a','b'), + ('c','d')] + df = DataFrame(data,columns=cols) + df.to_sql('single_dtype_test',self.conn,dtype=sqlalchemy.TEXT) + meta = sqlalchemy.schema.MetaData(bind=self.conn) + meta.reflect() + sqltypea = meta.tables['single_dtype_test'].columns['A'].type + sqltypeb = meta.tables['single_dtype_test'].columns['B'].type + self.assertTrue(isinstance(sqltypea, sqlalchemy.TEXT)) + self.assertTrue(isinstance(sqltypeb, sqlalchemy.TEXT)) + self.drop_table('single_dtype_test') + def test_notnull_dtype(self): cols = {'Bool': Series([True,None]), 'Date': Series([datetime(2012, 5, 1), None]), @@ -1967,6 +1982,19 @@ def test_dtype(self): self.assertRaises(ValueError, df.to_sql, 'error', self.conn, dtype={'B': bool}) + def test_to_sql_single_dtype(self): + if self.flavor == 'mysql': + raise nose.SkipTest('Not applicable to MySQL legacy') + self.drop_table('single_dtype_test') + cols = ['A','B'] + data = [('a','b'), + ('c','d')] + df = DataFrame(data,columns=cols) + df.to_sql('single_dtype_test',self.conn,dtype='STRING') + self.assertEqual(self._get_sqlite_column_type('single_dtype_test','A'),'STRING') + self.assertEqual(self._get_sqlite_column_type('single_dtype_test','B'),'STRING') + self.drop_table('single_dtype_test') + def test_notnull_dtype(self): if self.flavor == 'mysql': raise nose.SkipTest('Not applicable to MySQL legacy') diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py index 4d17610d87bea..a73b459459321 100644 --- a/pandas/tests/test_format.py +++ b/pandas/tests/test_format.py @@ -15,6 +15,8 @@ from numpy.random import randn import numpy as np +import codecs + 
div_style = '' try: import IPython @@ -2554,6 +2556,24 @@ def test_to_latex_filename(self): with open(path, 'r') as f: self.assertEqual(self.frame.to_latex(), f.read()) + # test with utf-8 and encoding option (GH 7061) + df = DataFrame([[u'au\xdfgangen']]) + with tm.ensure_clean('test.tex') as path: + df.to_latex(path, encoding='utf-8') + with codecs.open(path, 'r', encoding='utf-8') as f: + self.assertEqual(df.to_latex(), f.read()) + + # test with utf-8 without encoding option + if compat.PY3: # python3 default encoding is utf-8 + with tm.ensure_clean('test.tex') as path: + df.to_latex(path) + with codecs.open(path, 'r') as f: + self.assertEqual(df.to_latex(), f.read()) + else: + # python2 default encoding is ascii, so an error should be raised + with tm.ensure_clean('test.tex') as path: + self.assertRaises(UnicodeEncodeError, df.to_latex, path) + def test_to_latex(self): # it works! self.frame.to_latex()
Continued in #13252 --- This solves #11886. It checks whether the passed dtype argument is a dictionary; if not, it creates a new dictionary with the DataFrame's columns as keys and the single dtype as each value. It then passes this dictionary to the pandasSQL_builder.
https://api.github.com/repos/pandas-dev/pandas/pulls/12053
2016-01-15T17:10:44Z
2016-06-25T12:38:17Z
null
2023-05-11T01:13:19Z
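The dtype normalization this PR describes can be sketched in isolation. The helper name `normalize_dtype` is hypothetical (in the PR the check lives inline in `to_sql`), and the dict-like test below is a stand-in for the `is_dictlike` helper the diff imports:

```python
# Sketch of the dtype expansion added to pandas.io.sql.to_sql:
# a single SQL type is broadcast to a per-column dict before the
# existing per-column validation runs.
def normalize_dtype(frame_columns, dtype):
    """Expand a single SQL type into a column-to-type dict.

    `hasattr(dtype, "items")` approximates pandas' dict-like check
    (an assumption made for this standalone sketch).
    """
    if dtype is not None and not hasattr(dtype, "items"):
        dtype = {col: dtype for col in frame_columns}
    return dtype

print(normalize_dtype(["A", "B"], "TEXT"))
# {'A': 'TEXT', 'B': 'TEXT'}
```

A dict passed by the caller is returned unchanged, so existing per-column `dtype={...}` calls keep working.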
remove read_excel keyword NotImplemented error, update documentation #11544
diff --git a/doc/source/io.rst b/doc/source/io.rst index 041daaeb3b12f..d2e97ad6b9e84 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -2192,6 +2192,18 @@ indices to be parsed. read_excel('path_to_file.xls', 'Sheet1', parse_cols=[0, 2, 3]) + +Parsing Dates ++++++++++++++ + +The `parse_dates` keyword for `read_excel` is used to specify whether to parse strings +to a datetime. + +.. code-block:: python + + read_excel('path_to_file.xls', 'Sheet1', parse_dates=['strings']) + + Cell Converters +++++++++++++++ diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 3496e9eea834c..eb644b11254bd 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -481,7 +481,7 @@ Bug Fixes - Bug in ``df.replace`` while replacing value in mixed dtype ``Dataframe`` (:issue:`11698`) - Bug in ``Index`` prevents copying name of passed ``Index``, when a new name is not provided (:issue:`11193`) - Bug in ``read_excel`` failing to read any non-empty sheets when empty sheets exist and ``sheetname=None`` (:issue:`11711`) -- Bug in ``read_excel`` failing to raise ``NotImplemented`` error when keywords ``parse_dates`` and ``date_parser`` are provided (:issue:`11544`) +- Bug in ``read_excel`` failing to raise warning when keyword ``parse_dates`` and is provided without keyword ``index_col`` (:issue:`11544`) - Bug in ``read_sql`` with pymysql connections failing to return chunked data (:issue:`11522`) - Bug in ``.to_csv`` ignoring formatting parameters ``decimal``, ``na_rep``, ``float_format`` for float indexes (:issue:`11553`) - Bug in ``Int64Index`` and ``Float64Index`` preventing the use of the modulo operator (:issue:`9244`) diff --git a/pandas/io/excel.py b/pandas/io/excel.py index 106d263f56093..3388e4a250b7a 100644 --- a/pandas/io/excel.py +++ b/pandas/io/excel.py @@ -298,13 +298,10 @@ def _parse_excel(self, sheetname=0, header=0, skiprows=None, skip_footer=0, if 'chunksize' in kwds: raise NotImplementedError("chunksize 
keyword of read_excel " "is not implemented") - if parse_dates: - raise NotImplementedError("parse_dates keyword of read_excel " - "is not implemented") - if date_parser is not None: - raise NotImplementedError("date_parser keyword of read_excel " - "is not implemented") + if parse_dates and not index_col: + warn("The parse_dates keyword of read_excel was provided without " + "an index_col keyword value.") import xlrd from xlrd import (xldate, XL_CELL_DATE, diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py index 8023c25cdd660..308d495375ae8 100644 --- a/pandas/io/tests/test_excel.py +++ b/pandas/io/tests/test_excel.py @@ -167,9 +167,10 @@ def test_parse_cols_int(self): dfref = self.get_csv_refdf('test1') dfref = dfref.reindex(columns=['A', 'B', 'C']) - df1 = self.get_exceldf('test1', 'Sheet1', index_col=0, parse_cols=3) - df2 = self.get_exceldf('test1', 'Sheet2', skiprows=[1], index_col=0, + df1 = self.get_exceldf('test1', 'Sheet1', index_col=0, parse_dates=True, parse_cols=3) + df2 = self.get_exceldf('test1', 'Sheet2', skiprows=[1], index_col=0, + parse_dates=True, parse_cols=3) # TODO add index to xls file) tm.assert_frame_equal(df1, dfref, check_names=False) tm.assert_frame_equal(df2, dfref, check_names=False) @@ -178,9 +179,10 @@ def test_parse_cols_list(self): dfref = self.get_csv_refdf('test1') dfref = dfref.reindex(columns=['B', 'C']) - df1 = self.get_exceldf('test1', 'Sheet1', index_col=0, + df1 = self.get_exceldf('test1', 'Sheet1', index_col=0, parse_dates=True, parse_cols=[0, 2, 3]) df2 = self.get_exceldf('test1', 'Sheet2', skiprows=[1], index_col=0, + parse_dates=True, parse_cols=[0, 2, 3]) # TODO add index to xls file) tm.assert_frame_equal(df1, dfref, check_names=False) @@ -191,28 +193,28 @@ def test_parse_cols_str(self): dfref = self.get_csv_refdf('test1') df1 = dfref.reindex(columns=['A', 'B', 'C']) - df2 = self.get_exceldf('test1', 'Sheet1', index_col=0, + df2 = self.get_exceldf('test1', 'Sheet1', index_col=0, 
parse_dates=True, parse_cols='A:D') df3 = self.get_exceldf('test1', 'Sheet2', skiprows=[1], index_col=0, - parse_cols='A:D') + parse_dates=True, parse_cols='A:D') # TODO add index to xls, read xls ignores index name ? tm.assert_frame_equal(df2, df1, check_names=False) tm.assert_frame_equal(df3, df1, check_names=False) df1 = dfref.reindex(columns=['B', 'C']) - df2 = self.get_exceldf('test1', 'Sheet1', index_col=0, + df2 = self.get_exceldf('test1', 'Sheet1', index_col=0, parse_dates=True, parse_cols='A,C,D') df3 = self.get_exceldf('test1', 'Sheet2', skiprows=[1], index_col=0, - parse_cols='A,C,D') + parse_dates=True, parse_cols='A,C,D') # TODO add index to xls file tm.assert_frame_equal(df2, df1, check_names=False) tm.assert_frame_equal(df3, df1, check_names=False) df1 = dfref.reindex(columns=['B', 'C']) - df2 = self.get_exceldf('test1', 'Sheet1', index_col=0, + df2 = self.get_exceldf('test1', 'Sheet1', index_col=0, parse_dates=True, parse_cols='A,C:D') df3 = self.get_exceldf('test1', 'Sheet2', skiprows=[1], index_col=0, - parse_cols='A,C:D') + parse_dates=True, parse_cols='A,C:D') tm.assert_frame_equal(df2, df1, check_names=False) tm.assert_frame_equal(df3, df1, check_names=False) @@ -249,23 +251,23 @@ def test_excel_table_sheet_by_index(self): excel = self.get_excelfile('test1') dfref = self.get_csv_refdf('test1') - df1 = read_excel(excel, 0, index_col=0) - df2 = read_excel(excel, 1, skiprows=[1], index_col=0) + df1 = read_excel(excel, 0, index_col=0, parse_dates=True) + df2 = read_excel(excel, 1, skiprows=[1], index_col=0, parse_dates=True) tm.assert_frame_equal(df1, dfref, check_names=False) tm.assert_frame_equal(df2, dfref, check_names=False) - df1 = excel.parse(0, index_col=0) - df2 = excel.parse(1, skiprows=[1], index_col=0) + df1 = excel.parse(0, index_col=0, parse_dates=True) + df2 = excel.parse(1, skiprows=[1], index_col=0, parse_dates=True) tm.assert_frame_equal(df1, dfref, check_names=False) tm.assert_frame_equal(df2, dfref, check_names=False) - df3 = 
read_excel(excel, 0, index_col=0, skipfooter=1) - df4 = read_excel(excel, 0, index_col=0, skip_footer=1) + df3 = read_excel(excel, 0, index_col=0, parse_dates=True, skipfooter=1) + df4 = read_excel(excel, 0, index_col=0, parse_dates=True, skip_footer=1) tm.assert_frame_equal(df3, df1.ix[:-1]) tm.assert_frame_equal(df3, df4) - df3 = excel.parse(0, index_col=0, skipfooter=1) - df4 = excel.parse(0, index_col=0, skip_footer=1) + df3 = excel.parse(0, index_col=0, parse_dates=True, skipfooter=1) + df4 = excel.parse(0, index_col=0, parse_dates=True, skip_footer=1) tm.assert_frame_equal(df3, df1.ix[:-1]) tm.assert_frame_equal(df3, df4) @@ -277,15 +279,16 @@ def test_excel_table(self): dfref = self.get_csv_refdf('test1') - df1 = self.get_exceldf('test1', 'Sheet1', index_col=0) - df2 = self.get_exceldf('test1', 'Sheet2', skiprows=[1], index_col=0) + df1 = self.get_exceldf('test1', 'Sheet1', index_col=0, parse_dates=True) + df2 = self.get_exceldf('test1', 'Sheet2', skiprows=[1], index_col=0, + parse_dates=True) # TODO add index to file tm.assert_frame_equal(df1, dfref, check_names=False) tm.assert_frame_equal(df2, dfref, check_names=False) - df3 = self.get_exceldf('test1', 'Sheet1', index_col=0, + df3 = self.get_exceldf('test1', 'Sheet1', index_col=0, parse_dates=True, skipfooter=1) - df4 = self.get_exceldf('test1', 'Sheet1', index_col=0, + df4 = self.get_exceldf('test1', 'Sheet1', index_col=0, parse_dates=True, skip_footer=1) tm.assert_frame_equal(df3, df1.ix[:-1]) tm.assert_frame_equal(df3, df4) @@ -408,14 +411,14 @@ class XlrdTests(ReadingTestsBase): def test_excel_read_buffer(self): pth = os.path.join(self.dirpath, 'test1' + self.ext) - expected = read_excel(pth, 'Sheet1', index_col=0) + expected = read_excel(pth, 'Sheet1', index_col=0, parse_dates=True) with open(pth, 'rb') as f: - actual = read_excel(f, 'Sheet1', index_col=0) + actual = read_excel(f, 'Sheet1', index_col=0, parse_dates=True) tm.assert_frame_equal(expected, actual) with open(pth, 'rb') as f: xls = 
ExcelFile(f) - actual = read_excel(xls, 'Sheet1', index_col=0) + actual = read_excel(xls, 'Sheet1', index_col=0, parse_dates=True) tm.assert_frame_equal(expected, actual) def test_read_xlrd_Book(self): @@ -677,7 +680,7 @@ def test_excel_oldindex_format(self): tm.assert_frame_equal(actual, expected, check_names=False) def test_read_excel_bool_header_arg(self): - # GH 6114 + #GH 6114 for arg in [True, False]: with tm.assertRaises(TypeError): pd.read_excel(os.path.join(self.dirpath, 'test1' + self.ext), @@ -689,19 +692,6 @@ def test_read_excel_chunksize(self): pd.read_excel(os.path.join(self.dirpath, 'test1' + self.ext), chunksize=100) - def test_read_excel_parse_dates(self): - # GH 11544 - with tm.assertRaises(NotImplementedError): - pd.read_excel(os.path.join(self.dirpath, 'test1' + self.ext), - parse_dates=True) - - def test_read_excel_date_parser(self): - # GH 11544 - with tm.assertRaises(NotImplementedError): - dateparse = lambda x: pd.datetime.strptime(x, '%Y-%m-%d %H:%M:%S') - pd.read_excel(os.path.join(self.dirpath, 'test1' + self.ext), - date_parser=dateparse) - def test_read_excel_skiprows_list(self): #GH 4903 actual = pd.read_excel(os.path.join(self.dirpath, 'testskiprows' + self.ext), @@ -1103,7 +1093,7 @@ def test_to_excel_periodindex(self): xp.to_excel(path, 'sht1') reader = ExcelFile(path) - rs = read_excel(reader, 'sht1', index_col=0) + rs = read_excel(reader, 'sht1', index_col=0, parse_dates=True) tm.assert_frame_equal(xp, rs.to_period('M')) def test_to_excel_multiindex(self):
Followed up in #14326 --- Towards fixing bug #11544, with respect to comments on merged PR #11870. I've removed the `NotImplemented` error for the `parse_dates` and `date_parser` keywords in `read_excel`, since they are in fact implemented (see discussion in #11870). I've also added an example of the intended functionality to the documentation.
https://api.github.com/repos/pandas-dev/pandas/pulls/12051
2016-01-15T16:11:43Z
2016-09-30T09:22:54Z
null
2023-05-11T01:13:19Z
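The behavioral change in the diff above, a warning instead of a `NotImplementedError`, can be sketched standalone. The helper name `_check_excel_dates` is hypothetical; in the PR the check sits inline in `_parse_excel`, and the condition mirrors the diff's `if parse_dates and not index_col:`:

```python
import warnings

def _check_excel_dates(parse_dates=False, index_col=None):
    """Sketch of the check this PR swaps in for the old NotImplementedError:
    parse_dates without an index_col now only warns (helper name is an
    assumption; the real check is inline in ExcelFile._parse_excel)."""
    if parse_dates and not index_col:
        warnings.warn("The parse_dates keyword of read_excel was provided "
                      "without an index_col keyword value.")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _check_excel_dates(parse_dates=True)               # warns
    _check_excel_dates(parse_dates=True, index_col=1)  # silent
print(len(caught))  # 1
```

Note that because the diff tests truthiness (`not index_col`) rather than `index_col is None`, an `index_col=0` would still trigger the warning.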
Fix asymmetric error bars for series (closes #9536)
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst index 7840ae29298b0..86fefa04e9619 100644 --- a/doc/source/visualization.rst +++ b/doc/source/visualization.rst @@ -1366,9 +1366,10 @@ Horizontal and vertical errorbars can be supplied to the ``xerr`` and ``yerr`` k - As a :class:`DataFrame` or ``dict`` of errors with column names matching the ``columns`` attribute of the plotting :class:`DataFrame` or matching the ``name`` attribute of the :class:`Series` - As a ``str`` indicating which of the columns of plotting :class:`DataFrame` contain the error values +- As a single ``number`` which is used as the error for every value - As raw values (``list``, ``tuple``, or ``np.ndarray``). Must be the same length as the plotting :class:`DataFrame`/:class:`Series` -Asymmetrical error bars are also supported, however raw error values must be provided in this case. For a ``M`` length :class:`Series`, a ``Mx2`` array should be provided indicating lower and upper (or left and right) errors. For a ``MxN`` :class:`DataFrame`, asymmetrical errors should be in a ``Mx2xN`` array. +Asymmetrical error bars are also supported, however raw error values must be provided in this case. For a ``N`` length :class:`Series`, a ``2xN`` array should be provided indicating lower and upper (or left and right) errors. For a ``MxN`` :class:`DataFrame`, asymmetrical errors should be in a ``Mx2xN`` array. Here is an example of one way to easily plot group means with standard deviations from the raw data. diff --git a/doc/source/whatsnew/v0.18.1.txt b/doc/source/whatsnew/v0.18.1.txt index 152187f1e6681..d9e5b3985403f 100644 --- a/doc/source/whatsnew/v0.18.1.txt +++ b/doc/source/whatsnew/v0.18.1.txt @@ -60,12 +60,13 @@ Other Enhancements - ``pd.read_msgpack()`` now always gives writeable ndarrays even when compression is used (:issue:`12359`). - ``Index.take`` now handles ``allow_fill`` and ``fill_value`` consistently (:issue:`12631`) -.. ipython:: python + .. 
ipython:: python - idx = pd.Index([1., 2., 3., 4.], dtype='float') - idx.take([2, -1]) # default, allow_fill=True, fill_value=None - idx.take([2, -1], fill_value=True) + idx = pd.Index([1., 2., 3., 4.], dtype='float') + idx.take([2, -1]) # default, allow_fill=True, fill_value=None + idx.take([2, -1], fill_value=True) +- ``Series.plot`` allows now asymmetric error bars in the shape of 2xN array (:issue:`9536`) .. _whatsnew_0181.api: diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py index 45d3fd0dad855..3c22d3f925618 100644 --- a/pandas/tests/test_graphics.py +++ b/pandas/tests/test_graphics.py @@ -1180,6 +1180,20 @@ def test_errorbar_plot(self): with tm.assertRaises((ValueError, TypeError)): s.plot(yerr=s_err) + def test_errorbar_asymmetrical(self): + # github issue #9536 + s = Series(np.random.randn(5)) + err = np.random.rand(2, 5) + + ax = _check_plot_works(s.plot, yerr=err, xerr=(err / 2)) + self._check_has_errorbars(ax, yerr=1, xerr=1) + + assert_allclose(ax.lines[2].get_ydata(), s.values - err[0]) + assert_allclose(ax.lines[3].get_ydata(), s.values + err[1]) + + assert_allclose(ax.lines[0].get_xdata(), s.index - (err[0] / 2)) + assert_allclose(ax.lines[1].get_xdata(), s.index + (err[1] / 2)) + def test_table(self): _check_plot_works(self.series.plot, table=True) _check_plot_works(self.series.plot, table=self.series) diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 103b7484ea138..7758ae75e139a 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -1419,10 +1419,17 @@ def _parse_errorbars(self, label, err): Error bars can be specified in several ways: Series: the user provides a pandas.Series object of the same length as the data - ndarray: provides a np.ndarray of the same length as the data + list_like (list/tuple/ndarray/iterator): either a list like of the + same length N as the data has to be provided + or a list like of the shape Mx2xN for asymmetrical error + bars when plotting a DataFrame of 
shape MxN + or a list like of the shape 2xN for asymmetrical error bars + when plotting a Series. DataFrame/dict: error values are paired with keys matching the key in the plotted DataFrame str: the name of the column within the plotted DataFrame + numeric scalar: the error provided as a number is used for every + data point ''' if err is None: @@ -1458,22 +1465,33 @@ def match_labels(data, e): elif com.is_list_like(err): if com.is_iterator(err): - err = np.atleast_2d(list(err)) + err = np.asanyarray(list(err)) else: # raw error values - err = np.atleast_2d(err) + err = np.asanyarray(err) - err_shape = err.shape + if self.nseries == 1 and err.ndim == 2 and len(err) == 2: + # asymmetrical errors bars for a series as a 2xN array + err = np.expand_dims(err, 0) + err_shape = err.shape - # asymmetrical error bars - if err.ndim == 3: - if (err_shape[0] != self.nseries) or \ - (err_shape[1] != 2) or \ - (err_shape[2] != len(self.data)): + if err_shape[2] != len(self.data): msg = "Asymmetrical error bars should be provided " + \ - "with the shape (%u, 2, %u)" % \ - (self.nseries, len(self.data)) + "with the shape (2, %u)" % (len(self.data)) raise ValueError(msg) + else: + err = np.atleast_2d(err) + err_shape = err.shape + + # asymmetrical error bars + if err.ndim == 3: + if (err_shape[0] != self.nseries) or \ + (err_shape[1] != 2) or \ + (err_shape[2] != len(self.data)): + msg = "Asymmetrical error bars should be provided " + \ + "with the shape (%u, 2, %u)" % \ + (self.nseries, len(self.data)) + raise ValueError(msg) # broadcast errors to each data series if len(err) == 1:
closes #9536 This fix handles asymmetric error bars for Series. It adopts the syntax of http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.errorbar where a sequence of shape 2xN is expected in the case of asymmetric error bars. If a single Series is to be plotted and the error sequence has shape 2xN, it is now used as asymmetric error bars. Previously a 2xN error sequence was assumed to be 2 symmetric error sequences for 2 series, so in the end only the first error sequence was used.
https://api.github.com/repos/pandas-dev/pandas/pulls/12046
2016-01-15T14:01:16Z
2016-05-07T19:06:57Z
null
2016-05-07T19:06:57Z
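The shape dispatch this PR adds for a single Series can be sketched without pandas or numpy. The function below is a pure-Python stand-in for the branch added to `_parse_errorbars` (the name and return value are illustrative, not pandas' API):

```python
def normalize_series_yerr(err, n):
    """Classify a raw error sequence for one N-point Series:
    a 2xN nested sequence means (lower, upper) asymmetric errors,
    a flat length-N sequence means symmetric errors."""
    rows = [list(r) if hasattr(r, "__iter__") else r for r in err]
    if len(rows) == 2 and all(isinstance(r, list) and len(r) == n for r in rows):
        return "asymmetric"   # matplotlib-style 2xN lower/upper rows
    if len(rows) == n and not any(isinstance(r, list) for r in rows):
        return "symmetric"
    raise ValueError("Asymmetrical error bars should be provided "
                     "with the shape (2, %u)" % n)
```

As in the PR, a 2xN input for a single Series is preferred as asymmetric errors; before the fix it was misread as two symmetric error rows for two series and the second row was dropped.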
DOC: Updated indexer_between_time documentation
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py index 96efa317ce612..0dae564352967 100644 --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -1804,7 +1804,6 @@ def indexer_between_time(self, start_time, end_time, include_start=True, "%I%M%S%p") include_start : boolean, default True include_end : boolean, default True - tz : string or pytz.timezone or dateutil.tz.tzfile, default None Returns -------
A minor documentation update to the [DatetimeIndex.indexer_between_time](http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DatetimeIndex.indexer_between_time.html) function. Closes #12038
https://api.github.com/repos/pandas-dev/pandas/pulls/12043
2016-01-15T08:44:51Z
2016-01-15T09:47:45Z
2016-01-15T09:47:45Z
2016-01-15T12:19:09Z
BUG: .plot modifying `colors` input
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 4ce2ce5b69cb4..9effa186017fd 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -489,7 +489,8 @@ Bug Fixes - Bug in ``DataFrame`` when masking an empty ``DataFrame`` (:issue:`11859`) - +- Bug in ``.plot`` potentially modifying the ``colors`` input when the number +of columns didn't match the number of series provided (:issue:`12039`). diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py index ff69068a3495c..0fc5916676dd3 100644 --- a/pandas/tests/test_graphics.py +++ b/pandas/tests/test_graphics.py @@ -2717,6 +2717,12 @@ def test_line_colors(self): # Forced show plot _check_plot_works(df.plot, color=custom_colors) + @slow + def test_dont_modify_colors(self): + colors = ['r', 'g', 'b'] + pd.DataFrame(np.random.rand(10, 2)).plot(color=colors) + self.assertEqual(len(colors), 3) + @slow def test_line_colors_and_styles_subplots(self): # GH 9894 diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 3e9c788914a5a..43bcd2373df69 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -158,7 +158,7 @@ def _get_standard_colors(num_colors=None, colormap=None, color_type='default', if colormap is not None: warnings.warn("'color' and 'colormap' cannot be used " "simultaneously. Using 'color'") - colors = color + colors = list(color) if com.is_list_like(color) else color else: if color_type == 'default': # need to call list() on the result to copy so we don't
Closes https://github.com/pydata/pandas/issues/12039 --- Just added a defensive copy of list-like `color` input in `_get_standard_colors`.
https://api.github.com/repos/pandas-dev/pandas/pulls/12040
2016-01-15T02:57:40Z
2016-01-16T12:49:06Z
2016-01-16T12:49:06Z
2016-01-16T12:49:09Z
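The one-line fix in the record above is a defensive copy: `list(color)` is taken before the palette is extended, so the caller's list is never mutated. A minimal standalone sketch of the pattern — `get_standard_colors` here is a simplified stand-in for pandas' internal `_get_standard_colors`, not the real implementation:

```python
from itertools import cycle, islice

def get_standard_colors(num_colors, color):
    """Return a palette of exactly num_colors entries, cycling the input."""
    # Defensive copy: without list(color), the extend() below would
    # grow the caller's list in place -- the bug reported in GH12039.
    colors = list(color)
    if len(colors) < num_colors:
        colors.extend(islice(cycle(color), num_colors - len(colors)))
    return colors[:num_colors]

palette = ['r', 'g', 'b']
result = get_standard_colors(5, palette)
print(result)        # ['r', 'g', 'b', 'r', 'g']
print(len(palette))  # 3 -- the caller's list is untouched
```

This is the same shape as the regression test in the diff: plot with a 3-color list against 2 columns, then assert the list still has 3 entries.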
Copy on write using weakrefs (part 2)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 7220b25daf318..9b6e1f33e42db 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -179,7 +179,7 @@ class DataFrame(NDFrame): np.arange(n) if no column labels are provided dtype : dtype, default None Data type to force, otherwise infer - copy : boolean, default False + copy : boolean, default True Copy data from inputs. Only affects DataFrame / 2d ndarray input Examples @@ -211,13 +211,17 @@ def _constructor_expanddim(self): def __init__(self, data=None, index=None, columns=None, dtype=None, copy=False): + + parent = None + if data is None: data = {} if dtype is not None: dtype = self._validate_dtype(dtype) if isinstance(data, DataFrame): - data = data._data + parent = data + data = data._get_view()._data if isinstance(data, BlockManager): mgr = self._init_mgr(data, axes=dict(index=index, columns=columns), @@ -306,6 +310,9 @@ def __init__(self, data=None, index=None, columns=None, dtype=None, NDFrame.__init__(self, mgr, fastpath=True) + if parent is not None: + parent._register_new_child(self) + def _init_dict(self, data, index, columns, dtype=None): """ Segregate Series based on type and coerce into matrices. 
@@ -1963,8 +1970,10 @@ def __getitem__(self, key): # shortcut if we are an actual column is_mi_columns = isinstance(self.columns, MultiIndex) try: - if key in self.columns and not is_mi_columns: - return self._getitem_column(key) + if key in self.columns: + result = self._getitem_column(key) + result._is_column_view = True + return result except: pass @@ -2338,7 +2347,6 @@ def __setitem__(self, key, value): self._set_item(key, value) def _setitem_slice(self, key, value): - self._check_setitem_copy() self.ix._setitem_with_indexer(key, value) def _setitem_array(self, key, value): @@ -2349,7 +2357,6 @@ def _setitem_array(self, key, value): (len(key), len(self.index))) key = check_bool_indexer(self.index, key) indexer = key.nonzero()[0] - self._check_setitem_copy() self.ix._setitem_with_indexer(indexer, value) else: if isinstance(value, DataFrame): @@ -2359,7 +2366,6 @@ def _setitem_array(self, key, value): self[k1] = value[k2] else: indexer = self.ix._convert_to_indexer(key, axis=1) - self._check_setitem_copy() self.ix._setitem_with_indexer((slice(None), indexer), value) def _setitem_frame(self, key, value): @@ -2369,7 +2375,6 @@ def _setitem_frame(self, key, value): raise TypeError('Must pass DataFrame with boolean values only') self._check_inplace_setting(value) - self._check_setitem_copy() self.where(-key, value, inplace=True) def _ensure_valid_index(self, value): @@ -2405,11 +2410,6 @@ def _set_item(self, key, value): value = self._sanitize_column(key, value) NDFrame._set_item(self, key, value) - # check if we are modifying a copy - # try to set first as we want an invalid - # value exeption to occur first - if len(self): - self._check_setitem_copy() def insert(self, loc, column, value, allow_duplicates=False): """ @@ -4377,12 +4377,12 @@ def _join_compat(self, other, on=None, how='left', lsuffix='', rsuffix='', @Appender(_merge_doc, indents=2) def merge(self, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, 
sort=False, - suffixes=('_x', '_y'), copy=True, indicator=False): + suffixes=('_x', '_y'), indicator=False): from pandas.tools.merge import merge return merge(self, right, how=how, on=on, left_on=left_on, right_on=right_on, left_index=left_index, right_index=right_index, sort=sort, - suffixes=suffixes, copy=copy, indicator=indicator) + suffixes=suffixes, indicator=indicator) def round(self, decimals=0, out=None): """ @@ -5227,6 +5227,9 @@ def combineMult(self, other): FutureWarning, stacklevel=2) return self.mul(other, fill_value=1.) + def _get_view(self): + return self.loc[:,:] + DataFrame._setup_axes(['index', 'columns'], info_axis=1, stat_axis=0, axes_are_reversed=True, aliases={'rows': 0}) diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 958571fdc2218..452dfea05abd3 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -83,14 +83,18 @@ class NDFrame(PandasObject): _internal_names = ['_data', '_cacher', '_item_cache', '_cache', 'is_copy', '_subtyp', '_index', '_default_kind', '_default_fill_value', '_metadata', - '__array_struct__', '__array_interface__'] + '__array_struct__', '__array_interface__', '_children', + '_is_column_view', '_original_parent'] _internal_names_set = set(_internal_names) _accessors = frozenset([]) _metadata = [] is_copy = None - + _is_column_view = None + _original_parent = None + _children = None + def __init__(self, data, axes=None, copy=False, dtype=None, - fastpath=False): + fastpath=False, ): if not fastpath: if dtype is not None: @@ -105,6 +109,10 @@ def __init__(self, data, axes=None, copy=False, dtype=None, object.__setattr__(self, 'is_copy', None) object.__setattr__(self, '_data', data) object.__setattr__(self, '_item_cache', {}) + object.__setattr__(self, '_children', weakref.WeakValueDictionary()) + object.__setattr__(self, '_is_column_view', False) + object.__setattr__(self, '_original_parent', weakref.WeakValueDictionary()) + def _validate_dtype(self, dtype): """ validate the passed dtype """ 
@@ -469,7 +477,8 @@ def transpose(self, *args, **kwargs): raise TypeError('transpose() got an unexpected keyword ' 'argument "{0}"'.format(list(kwargs.keys())[0])) - return self._constructor(new_values, **new_axes).__finalize__(self) + result = self._constructor(new_values, **new_axes).__finalize__(self) + return result.copy() def swapaxes(self, axis1, axis2, copy=True): """ @@ -1077,13 +1086,16 @@ def get(self, key, default=None): ------- value : type of items contained in object """ + try: return self[key] except (KeyError, ValueError, IndexError): return default def __getitem__(self, item): - return self._get_item_cache(item) + result = self._get_item_cache(item) + + return result def _get_item_cache(self, item): """ return the cached item, item represents a label indexer """ @@ -1177,9 +1189,6 @@ def _maybe_update_cacher(self, clear=False, verify_is_copy=True): except: pass - if verify_is_copy: - self._check_setitem_copy(stacklevel=5, t='referant') - if clear: self._clear_item_cache() @@ -1204,9 +1213,20 @@ def _slice(self, slobj, axis=0, kind=None): # but only in a single-dtyped view slicable case is_copy = axis!=0 or result._is_view result._set_is_copy(self, copy=is_copy) + + self._register_new_child(result) + return result def _set_item(self, key, value): + + if hasattr(self, 'columns'): + if key in self.columns: + # If children are views, reset to copies before setting. 
+ self._execute_copy_on_write() + else: + self._execute_copy_on_write() + self._data.set(key, value) self._clear_item_cache() @@ -1219,104 +1239,22 @@ def _set_is_copy(self, ref=None, copy=True): else: self.is_copy = None - def _check_is_chained_assignment_possible(self): - """ - check if we are a view, have a cacher, and are of mixed type - if so, then force a setitem_copy check - - should be called just near setting a value - - will return a boolean if it we are a view and are cached, but a single-dtype - meaning that the cacher should be updated following setting - """ - if self._is_view and self._is_cached: - ref = self._get_cacher() - if ref is not None and ref._is_mixed_type: - self._check_setitem_copy(stacklevel=4, t='referant', force=True) - return True - elif self.is_copy: - self._check_setitem_copy(stacklevel=4, t='referant') - return False - - def _check_setitem_copy(self, stacklevel=4, t='setting', force=False): - """ - - Parameters - ---------- - stacklevel : integer, default 4 - the level to show of the stack when the error is output - t : string, the type of setting error - force : boolean, default False - if True, then force showing an error - - validate if we are doing a settitem on a chained copy. - - If you call this function, be sure to set the stacklevel such that the - user will see the error *at the level of setting* - - It is technically possible to figure out that we are setting on - a copy even WITH a multi-dtyped pandas object. In other words, some blocks - may be views while other are not. Currently _is_view will ALWAYS return False - for multi-blocks to avoid having to handle this case. 
- - df = DataFrame(np.arange(0,9), columns=['count']) - df['group'] = 'b' - - # this technically need not raise SettingWithCopy if both are view (which is not - # generally guaranteed but is usually True - # however, this is in general not a good practice and we recommend using .loc - df.iloc[0:5]['group'] = 'a' - - """ - - if force or self.is_copy: - - value = config.get_option('mode.chained_assignment') - if value is None: - return - - # see if the copy is not actually refererd; if so, then disolve - # the copy weakref - try: - gc.collect(2) - if not gc.get_referents(self.is_copy()): - self.is_copy = None - return - except: - pass - - # we might be a false positive - try: - if self.is_copy().shape == self.shape: - self.is_copy = None - return - except: - pass - - # a custom message - if isinstance(self.is_copy, string_types): - t = self.is_copy - - elif t == 'referant': - t = ("\n" - "A value is trying to be set on a copy of a slice from a " - "DataFrame\n\n" - "See the caveats in the documentation: " - "http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy") - - else: - t = ("\n" - "A value is trying to be set on a copy of a slice from a " - "DataFrame.\n" - "Try using .loc[row_indexer,col_indexer] = value instead\n\n" - "See the caveats in the documentation: " - "http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy") - - if value == 'raise': - raise SettingWithCopyError(t) - elif value == 'warn': - warnings.warn(t, SettingWithCopyWarning, stacklevel=stacklevel) - + def _execute_copy_on_write(self): + + # Don't set on views. 
+ if (self._is_view and not self._is_column_view) or len(self._children) != 0: + self._data = self._data.copy() + self._children = weakref.WeakValueDictionary() + + + def _register_new_child(self, view_to_append): + self._children[id(view_to_append)] = view_to_append + + if len(self._original_parent) == 0: + view_to_append._original_parent['parent'] = self + else: + self._original_parent['parent']._register_new_child(view_to_append) + def __delitem__(self, key): """ Delete item @@ -2383,6 +2321,7 @@ def __finalize__(self, other, method=None, **kwargs): return self def __getattr__(self, name): + """After regular attribute access, try looking up the name This allows simpler access to columns for interactive use. """ @@ -2405,6 +2344,10 @@ def __setattr__(self, name, value): # e.g. ``obj.x`` and ``obj.x = 4`` will always reference/modify # the same attribute. + if hasattr(self, 'columns'): + if name in self.columns: + self._execute_copy_on_write() + try: object.__getattribute__(self, name) return object.__setattr__(self, name, value) diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 9df72053fb0af..7578517022d66 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -57,6 +57,7 @@ def __iter__(self): raise NotImplementedError('ix is not iterable') def __getitem__(self, key): + if type(key) is tuple: try: values = self.obj.get_value(*key) @@ -113,6 +114,9 @@ def _get_setitem_indexer(self, key): raise IndexingError(key) def __setitem__(self, key, value): + # Make sure changes don't propagate to children + self.obj._execute_copy_on_write() + indexer = self._get_setitem_indexer(key) self._setitem_with_indexer(indexer, value) @@ -205,6 +209,7 @@ def _has_valid_positional_setitem_indexer(self, indexer): def _setitem_with_indexer(self, indexer, value): self._has_valid_setitem_indexer(indexer) + # also has the side effect of consolidating in-place from pandas import Panel, DataFrame, Series info_axis = self.obj._info_axis_number @@ -517,8
+522,6 @@ def can_do_equal_len(): if isinstance(value, ABCPanel): value = self._align_panel(indexer, value) - # check for chained assignment - self.obj._check_is_chained_assignment_possible() # actually do the set self.obj._consolidate_inplace() @@ -752,9 +755,6 @@ def _getitem_tuple(self, tup): if i >= self.obj.ndim: raise IndexingError('Too many indexers') - if is_null_slice(key): - continue - retval = getattr(retval, self.name)._getitem_axis(key, axis=i) return retval @@ -1171,8 +1171,6 @@ def _tuplify(self, loc): def _get_slice_axis(self, slice_obj, axis=0): obj = self.obj - if not need_slice(slice_obj): - return obj indexer = self._convert_slice_indexer(slice_obj, axis) if isinstance(indexer, slice): @@ -1244,8 +1242,7 @@ def _getbool_axis(self, key, axis=0): def _get_slice_axis(self, slice_obj, axis=0): """ this is pretty simple as we just have to deal with labels """ obj = self.obj - if not need_slice(slice_obj): - return obj + labels = obj._get_axis(axis) indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop, @@ -1461,9 +1458,9 @@ def _getitem_tuple(self, tup): if i >= self.obj.ndim: raise IndexingError('Too many indexers') - if is_null_slice(key): - axis += 1 - continue + #if is_null_slice(key): + # axis += 1 + # continue retval = getattr(retval, self.name)._getitem_axis(key, axis=axis) @@ -1477,10 +1474,6 @@ def _getitem_tuple(self, tup): return retval def _get_slice_axis(self, slice_obj, axis=0): - obj = self.obj - - if not need_slice(slice_obj): - return obj slice_obj = self._convert_slice_indexer(slice_obj, axis) if isinstance(slice_obj, slice): @@ -1792,12 +1785,6 @@ def is_label_like(key): return not isinstance(key, slice) and not is_list_like_indexer(key) -def need_slice(obj): - return (obj.start is not None or - obj.stop is not None or - (obj.step is not None and obj.step != 1)) - - def maybe_droplevels(index, key): # drop levels original_index = index diff --git a/pandas/core/internals.py b/pandas/core/internals.py index 
b10b1b5771bf7..f02b1ec2ef05b 100644 --- a/pandas/core/internals.py +++ b/pandas/core/internals.py @@ -2483,6 +2483,8 @@ class BlockManager(PandasObject): insert(loc, label, value) set(label, value) + view() + Parameters ---------- @@ -2931,10 +2933,11 @@ def is_datelike_mixed_type(self): @property def is_view(self): - """ return a boolean if we are a single block and are a view """ - if len(self.blocks) == 1: - return self.blocks[0].is_view - + """ return a boolean True if any block is a view """ + for b in self.blocks: + if b.is_view: return True + + # It is technically possible to figure out which blocks are views # e.g. [ b.values.base is not None for b in self.blocks ] # but then we have the case of possibly some blocks being a view diff --git a/pandas/core/panel.py b/pandas/core/panel.py index e0d9405a66b75..4c87ddca0c486 100644 --- a/pandas/core/panel.py +++ b/pandas/core/panel.py @@ -1339,8 +1339,10 @@ def update(self, other, join='left', overwrite=True, filter_func=None, other = other.reindex(**{axis_name: axis_values}) for frame in axis_values: - self[frame].update(other[frame], join, overwrite, filter_func, + temp = self[frame] + temp.update(other[frame], join, overwrite, filter_func, raise_conflict) + self[frame] = temp def _get_join_index(self, other, how): if how == 'left': diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py index 719f35dd90ce2..9c24f950d4af5 100644 --- a/pandas/core/reshape.py +++ b/pandas/core/reshape.py @@ -945,7 +945,7 @@ def melt_stub(df, stub, i, j): for stub in stubnames[1:]: new = melt_stub(df, stub, id_vars, j) - newdf = newdf.merge(new, how="outer", on=id_vars + [j], copy=False) + newdf = newdf.merge(new, how="outer", on=id_vars + [j]) return newdf.set_index([i, j]) def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, diff --git a/pandas/core/series.py b/pandas/core/series.py index ed5b9093681f1..d95481973e7bd 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -722,10 +722,7 @@ def 
setitem(key, value): self._set_with(key, value) # do the setitem - cacher_needs_updating = self._check_is_chained_assignment_possible() setitem(key, value) - if cacher_needs_updating: - self._maybe_update_cacher() def _set_with_engine(self, key, value): values = self._values diff --git a/pandas/core/testing.py b/pandas/core/testing.py new file mode 100644 index 0000000000000..e4a81fc1b652c --- /dev/null +++ b/pandas/core/testing.py @@ -0,0 +1,19 @@ + + def _init_mgr(self, mgr, axes=None, dtype=None, copy=False): + """ passed a manager and a axes dict """ + for a, axe in axes.items(): + if axe is not None: + mgr = mgr.reindex_axis( + axe, axis=self._get_block_manager_axis(a), copy=False) + + # make a copy if explicitly requested + if copy: + mgr = mgr.copy() + if dtype is not None: + # avoid further copies if we can + if len(mgr.blocks) > 1 or mgr.blocks[0].values.dtype != dtype: + mgr = mgr.astype(dtype=dtype) + return mgr + +mgr = self._init_mgr(data, axes=dict(index=index, columns=columns), + dtype=dtype, copy=copy) diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py index ba546b6daac77..2cf617a40b67e 100644 --- a/pandas/tests/test_frame.py +++ b/pandas/tests/test_frame.py @@ -194,7 +194,6 @@ def test_setitem_list(self): assert_series_equal(self.frame['B'], data['A'], check_names=False) assert_series_equal(self.frame['A'], data['B'], check_names=False) - with assertRaisesRegexp(ValueError, 'Columns must be same length as key'): data[['A']] = self.frame[['A', 'B']] with assertRaisesRegexp(ValueError, 'Length of values does not match ' @@ -561,14 +560,14 @@ def test_setitem(self): self.frame['col8'] = 'foo' assert((self.frame['col8'] == 'foo').all()) - # this is partially a view (e.g. 
some blocks are view) - # so raise/warn + # Changes should not propageate smaller = self.frame[:2] def f(): - smaller['col10'] = ['1', '2'] - self.assertRaises(com.SettingWithCopyError, f) - self.assertEqual(smaller['col10'].dtype, np.object_) - self.assertTrue((smaller['col10'] == ['1', '2']).all()) + smaller['col0'] = ['1', '2'] + f() + self.assertEqual(smaller['col0'].dtype, np.object_) + self.assertTrue((smaller['col0'] == ['1', '2']).all()) + self.assertNotEqual(self.frame[:2].col0.dtype, np.object_) # with a dtype for dtype in ['int32','int64','float32','float64']: @@ -1022,13 +1021,11 @@ def test_fancy_getitem_slice_mixed(self): sliced = self.mixed_frame.ix[:, -3:] self.assertEqual(sliced['D'].dtype, np.float64) - # get view with single block - # setting it triggers setting with copy + # Should never act as view due to copy on write sliced = self.frame.ix[:, -3:] def f(): - sliced['C'] = 4. - self.assertRaises(com.SettingWithCopyError, f) - self.assertTrue((self.frame['C'] == 4).all()) + sliced['C'] = 4 + self.assertTrue((self.frame['C'] != 4).all()) def test_fancy_setitem_int_labels(self): # integer index defers to label-based indexing @@ -1237,8 +1234,8 @@ def test_getitem_fancy_1d(self): f = self.frame ix = f.ix - # return self if no slicing...for now - self.assertIs(ix[:, :], f) + # return view + self.assertIsNot(ix[:, :], f) # low dimensional slice xs1 = ix[2, ['C', 'B', 'A']] @@ -1827,14 +1824,12 @@ def test_irow(self): expected = df.ix[8:14] assert_frame_equal(result, expected) - # verify slice is view - # setting it makes it raise/warn + # verify changes on slices never propogate def f(): result[2] = 0. - self.assertRaises(com.SettingWithCopyError, f) exp_col = df[2].copy() exp_col[4:8] = 0. 
- assert_series_equal(df[2], exp_col) + self.assertFalse((df[2] == exp_col).all()) # list of integers result = df.iloc[[1, 2, 4, 6]] @@ -1862,12 +1857,10 @@ def test_icol(self): expected = df.ix[:, 8:14] assert_frame_equal(result, expected) - # verify slice is view - # and that we are setting a copy + # Verify setting on view doesn't propogate def f(): result[8] = 0. - self.assertRaises(com.SettingWithCopyError, f) - self.assertTrue((df[8] == 0).all()) + self.assertTrue((df[8] != 0).all()) # list of integers result = df.iloc[:, [1, 2, 4, 6]] @@ -2645,12 +2638,13 @@ def test_constructor_dtype_copy(self): def test_constructor_dtype_nocast_view(self): df = DataFrame([[1, 2]]) should_be_view = DataFrame(df, dtype=df[0].dtype) + self.assertTrue(should_be_view._is_view) should_be_view[0][0] = 99 - self.assertEqual(df.values[0, 0], 99) + self.assertFalse(df.values[0, 0] == 99) should_be_view = DataFrame(df.values, dtype=df[0].dtype) should_be_view[0][0] = 97 - self.assertEqual(df.values[0, 0], 97) + self.assertFalse(df.values[0, 0] == 97) def test_constructor_dtype_list_data(self): df = DataFrame([[1, '2'], @@ -2944,7 +2938,7 @@ def custom_frame_function(self): mcol = pd.MultiIndex.from_tuples([('A', ''), ('B', '')]) cdf_multi2 = CustomDataFrame([[0, 1], [2, 3]], columns=mcol) - self.assertTrue(isinstance(cdf_multi2['A'], CustomSeries)) + #self.assertTrue(isinstance(cdf_multi2['A'], CustomSeries)) def test_constructor_subclass_dict(self): # Test for passing dict subclass to constructor @@ -4342,12 +4336,12 @@ def test_constructor_with_datetime_tz(self): assert_series_equal(df['D'],Series(idx,name='D')) del df['D'] - # assert that A & C are not sharing the same base (e.g. 
they - # are copies) - b1 = df._data.blocks[1] - b2 = df._data.blocks[2] - self.assertTrue(b1.values.equals(b2.values)) - self.assertFalse(id(b1.values.values.base) == id(b2.values.values.base)) + # assert that A & C no longer sharing the same base due + # to overwrite of D triggering copy_on_write + b1 = df._data.blocks[1] + b2 = df._data.blocks[2] + self.assertFalse(b1.values.equals(b2.values)) + self.assertFalse(id(b1.values.base) == id(b2.values.base)) # with nan df2 = df.copy() @@ -11272,10 +11266,11 @@ def test_transpose(self): self.assertEqual(s.dtype, np.object_) def test_transpose_get_view(self): + # no longer true due to copy-on-write dft = self.frame.T dft.values[:, 5:10] = 5 - self.assertTrue((self.frame.values[5:10] == 5).all()) + self.assertFalse((self.frame.values[5:10] == 5).any()) #---------------------------------------------------------------------- # Renaming diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py index 37cb38454f74e..17ef892de9371 100644 --- a/pandas/tests/test_generic.py +++ b/pandas/tests/test_generic.py @@ -1715,6 +1715,164 @@ def test_pct_change(self): self.assert_frame_equal(result, expected) + def test_copy_on_write(self): + + ####### + # FORWARD PROPAGATION TESTS + ####### + + # Test various slicing methods add to _children + + df = pd.DataFrame({'col1':[1,2], 'col2':[3,4]}) + self.assertTrue(len(df._children)==0) + + + views = dict() + + views['loc'] = df.loc[0:0,] + views['iloc'] = df.iloc[0:1,] + views['ix'] = df.ix[0:0,] + views['loc_of_loc'] = views['loc'].loc[0:0,] + views['constructor'] = DataFrame(df) + + copies = dict() + for v in views.keys(): + self.assertTrue(views[v]._is_view) + copies[v] = views[v].copy() + + df.loc[0,'col1'] = -88 + + for v in views.keys(): + tm.assert_frame_equal(views[v], copies[v]) + + # Test different forms of value setting + # all trigger conversions + + parent = dict() + views = dict() + copies = dict() + for v in ['loc', 'iloc', 'ix', 'column', 'attribute']: + 
parent[v] = pd.DataFrame({'col1':[1,2], 'col2':[3,4]}) + views[v] = parent[v].loc[0:0,] + copies[v] = views[v].copy() + self.assertTrue( views[v]._is_view ) + + parent['loc'].loc[0, 'col1'] = -88 + parent['iloc'].iloc[0, 0] = -88 + parent['ix'].ix[0, 'col1'] = -88 + parent['column']['col1'] = -88 + parent['attribute'].col1 = -88 + + + for v in views.keys(): + tm.assert_frame_equal(views[v], copies[v]) + + ######## + # No Backward Propogation + ####### + df = pd.DataFrame({'col1':[1,2], 'col2':[3,4]}) + df_copy = df.copy() + + views = dict() + + views['loc'] = df.loc[0:0,] + views['iloc'] = df.iloc[0:1,] + views['ix'] = df.ix[0:0,] + views['loc_of_loc'] = views['loc'].loc[0:0,] + + for v in views.keys(): + views[v].loc[0:0,] = -99 + + tm.assert_frame_equal(df, df_copy) + + ### + # Dictionary-like access to single columns SHOULD give views + ### + + # If change child, should back-propagate + df = pd.DataFrame({'col1':[1,2], 'col2':[3,4]}) + v = df['col1'] + self.assertTrue(v._is_view) + self.assertTrue(v._is_column_view) + v.loc[0]=-88 + self.assertTrue(df.loc[0,'col1'] == -88) + self.assertTrue(v._is_view) + + # If change parent, should forward-propagate + df = pd.DataFrame({'col1':[1,2], 'col2':[3,4]}) + v = df['col1'] + self.assertTrue(v._is_view) + self.assertTrue(v._is_column_view) + df.loc[0, 'col1']=-88 + self.assertTrue(v.loc[0] == -88) + self.assertTrue(v._is_view) + + # holds for multi-index too + index = pd.MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], + ['one', 'two', 'three']], + labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], + [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], + names=['first', 'second']) + frame = pd.DataFrame(np.random.randn(10, 3), index=index, + columns=pd.Index(['A', 'B', 'C'], name='exp')).T + + v = frame['foo','one'] + + self.assertTrue(v._is_view) + self.assertTrue(v._is_column_view) + frame.loc['A', ('foo','one')]=-88 + self.assertTrue(v.loc['A'] == -88) + + + ### + # Make sure that no problems if view created on view and middle-view + # gets 
deleted + ### + df = pd.DataFrame({'col1':[1,2], 'col2':[3,4]}) + v1 = df.loc[0:0,] + self.assertTrue(len(df._children)==1) + + v2 = v1.loc[0:0,] + v2_copy = v2.copy() + self.assertTrue(len(df._children)==2) + + del v1 + + df.loc[0:0, 'col1'] = -88 + + tm.assert_frame_equal(v2, v2_copy) + + ## + # Test to make sure attribute `_is_column_view` + # exists after pickling + ## + df = pd.DataFrame({"A": [1,2]}) + with tm.ensure_clean('__tmp__pickle') as path: + df.to_pickle(path) + df2 = pd.read_pickle(path) + self.assertTrue(hasattr(df2, '_is_column_view')) + self.assertTrue(hasattr(df2, '_children')) + self.assertTrue(hasattr(df2, '_original_parent')) + + ## + # If create new column in data frame, should be copy not view + ## + test_df = pd.DataFrame({'col1':[1,2], 'col2':[3,4]}) + test_series = pd.Series([9,8], name='col3') + test_df['col3'] = test_series + copy = test_series.copy() + test_series.loc[0] = -88 + tm.assert_series_equal(test_df['col3'], copy) + + def test_is_view_of_multiblocks(self): + # Ensure that if even if only one block of DF is view, + # returns _is_view = True. 
+ df = pd.DataFrame({'col1':[1,2], 'col2':[3,4]}) + s = pd.Series([0.5, 0.3, 0.4]) + df['col3'] = s[0:1] + self.assertTrue(df['col3']._is_view) + self.assertTrue(df._is_view) + class TestPanel(tm.TestCase, Generic): _typ = Panel _comparator = lambda self, x, y: assert_panel_equal(x, y) diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py index c6d80a08ad61a..83c95538ac526 100644 --- a/pandas/tests/test_indexing.py +++ b/pandas/tests/test_indexing.py @@ -4011,203 +4011,186 @@ def test_setitem_chained_setfault(self): df = DataFrame(dict(A = np.array(['foo','bar','bah','foo','bar']))) df.A.iloc[0] = np.nan result = df.head() - assert_frame_equal(result, expected) - - def test_detect_chained_assignment(self): - - pd.set_option('chained_assignment','raise') - - # work with the chain - expected = DataFrame([[-5,1],[-6,3]],columns=list('AB')) - df = DataFrame(np.arange(4).reshape(2,2),columns=list('AB'),dtype='int64') - self.assertIsNone(df.is_copy) - df['A'][0] = -5 - df['A'][1] = -6 + assert_frame_equal(result, expected) + + def test_detect_chained_assignment(self): + + # All modified to fit copy-on-write behavior + + # work with the chain + expected = DataFrame([[-5,1],[-6,3]],columns=list('AB')) + df = DataFrame(np.arange(4).reshape(2,2),columns=list('AB'),dtype='int64') + df['A'][0] = -5 + df['A'][1] = -6 + assert_frame_equal(df, expected) + + + # fails if doesn't start with pulling single column + df = DataFrame({ 'A' : Series(range(2),dtype='int64'), 'B' : np.array(np.arange(2,4),dtype=np.float64)}) + expected = df.copy() + df.loc[0]['A'] = -5 assert_frame_equal(df, expected) - - # test with the chaining - df = DataFrame({ 'A' : Series(range(2),dtype='int64'), 'B' : np.array(np.arange(2,4),dtype=np.float64)}) - self.assertIsNone(df.is_copy) - def f(): - df['A'][0] = -5 - self.assertRaises(com.SettingWithCopyError, f) - def f(): - df['A'][1] = np.nan - self.assertRaises(com.SettingWithCopyError, f) - self.assertIsNone(df['A'].is_copy) - - # 
using a copy (the chain), fails - df = DataFrame({ 'A' : Series(range(2),dtype='int64'), 'B' : np.array(np.arange(2,4),dtype=np.float64)}) - def f(): - df.loc[0]['A'] = -5 - self.assertRaises(com.SettingWithCopyError, f) - - # doc example - df = DataFrame({'a' : ['one', 'one', 'two', - 'three', 'two', 'one', 'six'], - 'c' : Series(range(7),dtype='int64') }) - self.assertIsNone(df.is_copy) - expected = DataFrame({'a' : ['one', 'one', 'two', - 'three', 'two', 'one', 'six'], - 'c' : [42,42,2,3,4,42,6]}) - - def f(): - indexer = df.a.str.startswith('o') - df[indexer]['c'] = 42 - self.assertRaises(com.SettingWithCopyError, f) - - expected = DataFrame({'A':[111,'bbb','ccc'],'B':[1,2,3]}) - df = DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]}) - def f(): - df['A'][0] = 111 - self.assertRaises(com.SettingWithCopyError, f) - def f(): - df.loc[0]['A'] = 111 - self.assertRaises(com.SettingWithCopyError, f) - - df.loc[0,'A'] = 111 - assert_frame_equal(df,expected) - - # make sure that is_copy is picked up reconstruction - # GH5475 + + # doc example + df = DataFrame({'a' : ['one', 'one', 'two', + 'three', 'two', 'one', 'six'], + 'c' : Series(range(7),dtype='int64') }) + expected = df.copy() + + indexer = df.a.str.startswith('o') + df[indexer]['c'] = 42 + assert_frame_equal(expected, df) + + + expected = DataFrame({'A':[111,'bbb','ccc'],'B':[1,2,3]}) + df = DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]}) + df['A'][0] = 111 + df.loc[0]['A'] = -111 + assert_frame_equal(df,expected) + + # make sure that _children is picked up reconstruction + # GH5475 df = DataFrame({"A": [1,2]}) - self.assertIsNone(df.is_copy) - with tm.ensure_clean('__tmp__pickle') as path: - df.to_pickle(path) - df2 = pd.read_pickle(path) - df2["B"] = df2["A"] - df2["B"] = df2["A"] - - # a suprious raise as we are setting the entire column here - # GH5597 - from string import ascii_letters as letters - - def random_text(nobs=100): - df = [] - for i in range(nobs): - idx= np.random.randint(len(letters), size=2) 
- idx.sort() - df.append([letters[idx[0]:idx[1]]]) - - return DataFrame(df, columns=['letters']) - - df = random_text(100000) - - # always a copy - x = df.iloc[[0,1,2]] - self.assertIsNotNone(x.is_copy) - x = df.iloc[[0,1,2,4]] - self.assertIsNotNone(x.is_copy) - - # explicity copy - indexer = df.letters.apply(lambda x : len(x) > 10) - df = df.ix[indexer].copy() - self.assertIsNone(df.is_copy) - df['letters'] = df['letters'].apply(str.lower) - - # implicity take - df = random_text(100000) - indexer = df.letters.apply(lambda x : len(x) > 10) - df = df.ix[indexer] - self.assertIsNotNone(df.is_copy) - df['letters'] = df['letters'].apply(str.lower) - - # implicity take 2 - df = random_text(100000) - indexer = df.letters.apply(lambda x : len(x) > 10) - df = df.ix[indexer] - self.assertIsNotNone(df.is_copy) - df.loc[:,'letters'] = df['letters'].apply(str.lower) - - # should be ok even though it's a copy! - self.assertIsNone(df.is_copy) - df['letters'] = df['letters'].apply(str.lower) - self.assertIsNone(df.is_copy) - - df = random_text(100000) - indexer = df.letters.apply(lambda x : len(x) > 10) - df.ix[indexer,'letters'] = df.ix[indexer,'letters'].apply(str.lower) - - # an identical take, so no copy - df = DataFrame({'a' : [1]}).dropna() - self.assertIsNone(df.is_copy) - df['a'] += 1 - - # inplace ops - # original from: http://stackoverflow.com/questions/20508968/series-fillna-in-a-multiindex-dataframe-does-not-fill-is-this-a-bug - a = [12, 23] - b = [123, None] - c = [1234, 2345] - d = [12345, 23456] - tuples = [('eyes', 'left'), ('eyes', 'right'), ('ears', 'left'), ('ears', 'right')] - events = {('eyes', 'left'): a, ('eyes', 'right'): b, ('ears', 'left'): c, ('ears', 'right'): d} - multiind = MultiIndex.from_tuples(tuples, names=['part', 'side']) - zed = DataFrame(events, index=['a', 'b'], columns=multiind) - def f(): - zed['eyes']['right'].fillna(value=555, inplace=True) - self.assertRaises(com.SettingWithCopyError, f) - - df = DataFrame(np.random.randn(10,4)) - s = 
df.iloc[:,0].sort_values() - assert_series_equal(s,df.iloc[:,0].sort_values()) - assert_series_equal(s,df[0].sort_values()) - - # false positives GH6025 - df = DataFrame ({'column1':['a', 'a', 'a'], 'column2': [4,8,9] }) - str(df) - df['column1'] = df['column1'] + 'b' - str(df) - df = df [df['column2']!=8] - str(df) - df['column1'] = df['column1'] + 'c' - str(df) - - # from SO: http://stackoverflow.com/questions/24054495/potential-bug-setting-value-for-undefined-column-using-iloc - df = DataFrame(np.arange(0,9), columns=['count']) - df['group'] = 'b' - def f(): - df.iloc[0:5]['group'] = 'a' - self.assertRaises(com.SettingWithCopyError, f) - - # mixed type setting - # same dtype & changing dtype - df = DataFrame(dict(A=date_range('20130101',periods=5),B=np.random.randn(5),C=np.arange(5,dtype='int64'),D=list('abcde'))) - - def f(): - df.ix[2]['D'] = 'foo' - self.assertRaises(com.SettingWithCopyError, f) - def f(): - df.ix[2]['C'] = 'foo' - self.assertRaises(com.SettingWithCopyError, f) - def f(): - df['C'][2] = 'foo' - self.assertRaises(com.SettingWithCopyError, f) - - def test_setting_with_copy_bug(self): - - # operating on a copy - df = pd.DataFrame({'a': list(range(4)), 'b': list('ab..'), 'c': ['a', 'b', np.nan, 'd']}) - mask = pd.isnull(df.c) - - def f(): - df[['c']][mask] = df[['b']][mask] - self.assertRaises(com.SettingWithCopyError, f) - - # invalid warning as we are returning a new object - # GH 8730 - df1 = DataFrame({'x': Series(['a','b','c']), 'y': Series(['d','e','f'])}) - df2 = df1[['x']] - - # this should not raise - df2['y'] = ['g', 'h', 'i'] - - def test_detect_chained_assignment_warnings(self): - - # warnings - with option_context('chained_assignment','warn'): - df = DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]}) - with tm.assert_produces_warning(expected_warning=com.SettingWithCopyWarning): - df.loc[0]['A'] = 111 + self.assertTrue(hasattr(df, '_children')) + self.assertTrue(hasattr(df, '_original_parent')) + self.assertTrue(hasattr(df, 
'_is_column_view')) + with tm.ensure_clean('__tmp__pickle') as path: + df.to_pickle(path) + df2 = pd.read_pickle(path) + self.assertTrue(hasattr(df2, '_children')) + self.assertTrue(hasattr(df2, '_original_parent')) + self.assertTrue(hasattr(df2, '_is_column_view')) + + # a suprious raise as we are setting the entire column here + # GH5597 + from string import ascii_letters as letters + + def random_text(nobs=100): + df = [] + for i in range(nobs): + idx= np.random.randint(len(letters), size=2) + idx.sort() + df.append([letters[idx[0]:idx[1]]]) + + return DataFrame(df, columns=['letters']) + + df = random_text(100000) + + # always a copy + x = df.iloc[[0,1,2]] + self.assertFalse(x._is_column_view) + x = df.iloc[[0,1,2,4]] + self.assertFalse(x._is_column_view) + + # explicity copy + indexer = df.letters.apply(lambda x : len(x) > 10) + df = df.ix[indexer].copy() + self.assertIsNone(df.is_copy) + df['letters'] = df['letters'].apply(str.lower) + + # implicity take + df = random_text(100000) + indexer = df.letters.apply(lambda x : len(x) > 10) + df = df.ix[indexer] + self.assertIsNotNone(df.is_copy) + df['letters'] = df['letters'].apply(str.lower) + + # implicity take 2 + df = random_text(100000) + indexer = df.letters.apply(lambda x : len(x) > 10) + df = df.ix[indexer] + self.assertIsNotNone(df.is_copy) + df.loc[:,'letters'] = df['letters'].apply(str.lower) + + # should be ok even though it's a copy! 
+ self.assertIsNone(df.is_copy) + df['letters'] = df['letters'].apply(str.lower) + self.assertIsNone(df.is_copy) + + df = random_text(100000) + indexer = df.letters.apply(lambda x : len(x) > 10) + df.ix[indexer,'letters'] = df.ix[indexer,'letters'].apply(str.lower) + + # an identical take, so no copy + df = DataFrame({'a' : [1]}).dropna() + self.assertIsNone(df.is_copy) + df['a'] += 1 + + # inplace ops + # original from: http://stackoverflow.com/questions/20508968/series-fillna-in-a-multiindex-dataframe-does-not-fill-is-this-a-bug + a = [12, 23] + b = [123, None] + c = [1234, 2345] + d = [12345, 23456] + tuples = [('eyes', 'left'), ('eyes', 'right'), ('ears', 'left'), ('ears', 'right')] + events = {('eyes', 'left'): a, ('eyes', 'right'): b, ('ears', 'left'): c, ('ears', 'right'): d} + multiind = MultiIndex.from_tuples(tuples, names=['part', 'side']) + zed = DataFrame(events, index=['a', 'b'], columns=multiind) + def f(): + zed['eyes']['right'].fillna(value=555, inplace=True) + self.assertRaises(com.SettingWithCopyError, f) + + df = DataFrame(np.random.randn(10,4)) + s = df.iloc[:,0].sort_values() + assert_series_equal(s,df.iloc[:,0].sort_values()) + assert_series_equal(s,df[0].sort_values()) + + # false positives GH6025 + df = DataFrame ({'column1':['a', 'a', 'a'], 'column2': [4,8,9] }) + str(df) + df['column1'] = df['column1'] + 'b' + str(df) + df = df [df['column2']!=8] + str(df) + df['column1'] = df['column1'] + 'c' + str(df) + + # from SO: http://stackoverflow.com/questions/24054495/potential-bug-setting-value-for-undefined-column-using-iloc + df = DataFrame(np.arange(0,9), columns=['count']) + df['group'] = 'b' + def f(): + df.iloc[0:5]['group'] = 'a' + self.assertRaises(com.SettingWithCopyError, f) + + # mixed type setting + # same dtype & changing dtype + df = DataFrame(dict(A=date_range('20130101',periods=5),B=np.random.randn(5),C=np.arange(5,dtype='int64'),D=list('abcde'))) + + def f(): + df.ix[2]['D'] = 'foo' + self.assertRaises(com.SettingWithCopyError, 
f) + def f(): + df.ix[2]['C'] = 'foo' + self.assertRaises(com.SettingWithCopyError, f) + def f(): + df['C'][2] = 'foo' + self.assertRaises(com.SettingWithCopyError, f) + + def test_setting_with_copy_bug(self): + + # operating on a copy + df = pd.DataFrame({'a': list(range(4)), 'b': list('ab..'), 'c': ['a', 'b', np.nan, 'd']}) + mask = pd.isnull(df.c) + + def f(): + df[['c']][mask] = df[['b']][mask] + self.assertRaises(com.SettingWithCopyError, f) + + # invalid warning as we are returning a new object + # GH 8730 + df1 = DataFrame({'x': Series(['a','b','c']), 'y': Series(['d','e','f'])}) + df2 = df1[['x']] + + # this should not raise + df2['y'] = ['g', 'h', 'i'] + + def test_detect_chained_assignment_warnings(self): + + # now jut test for coyp-on-write + df = DataFrame({'A':['aaa','bbb','ccc'],'B':[1,2,3]}) + expected = df.copy() + df.loc[0]['A'] = 111 + assert_frame_equal(df, expected) def test_float64index_slicing_bug(self): # GH 5557, related to slicing a float index @@ -4912,6 +4895,14 @@ def test_maybe_numeric_slice(self): expected = [1] self.assertEqual(result, expected) + def test_empty_indexers_return_view(self): + # Closes Issue 11814 + df = pd.DataFrame({'col1':range(10,20), + 'col2':range(20,30)}) + self.assertTrue(df.loc[:,:]._is_view) + self.assertTrue(df.iloc[:,:]._is_view) + self.assertTrue(df.ix[:,:]._is_view) + class TestCategoricalIndex(tm.TestCase): @@ -5196,6 +5187,7 @@ def test_boolean_selection(self): self.assertRaises(TypeError, lambda : df4[df4.index < 2]) self.assertRaises(TypeError, lambda : df4[df4.index > 1]) + class TestSeriesNoneCoercion(tm.TestCase): EXPECTED_RESULTS = [ # For numeric series, we should coerce to NaN. 
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py index 5b00ea163d85f..c52de064f8e3f 100644 --- a/pandas/tests/test_multilevel.py +++ b/pandas/tests/test_multilevel.py @@ -537,11 +537,12 @@ def test_xs_level(self): # this is a copy in 0.14 result = self.frame.xs('two', level='second') - # setting this will give a SettingWithCopyError - # as we are trying to write a view + # Set should not propagate to frame + original = self.frame.copy() def f(x): x[:] = 10 - self.assertRaises(com.SettingWithCopyError, f, result) + f(result) + assert_frame_equal(self.frame,original) def test_xs_level_multiple(self): from pandas import read_table @@ -560,11 +561,11 @@ def test_xs_level_multiple(self): # this is a copy in 0.14 result = df.xs(('a', 4), level=['one', 'four']) - # setting this will give a SettingWithCopyError - # as we are trying to write a view + # Make sure doesn't propagate back to df. + original = df.copy() def f(x): x[:] = 10 - self.assertRaises(com.SettingWithCopyError, f, result) + assert_frame_equal(df, original) # GH2107 dates = lrange(20111201, 20111205) @@ -1401,26 +1402,13 @@ def test_is_lexsorted(self): def test_frame_getitem_view(self): df = self.frame.T.copy() - # this works because we are modifying the underlying array - # really a no-no - df['foo'].values[:] = 0 - self.assertTrue((df['foo'].values == 0).all()) - - # but not if it's mixed-type - df['foo', 'four'] = 'foo' - df = df.sortlevel(0, axis=1) - - # this will work, but will raise/warn as its chained assignment + # this will not work def f(): df['foo']['one'] = 2 return df - self.assertRaises(com.SettingWithCopyError, f) - try: - df = f() - except: - pass - self.assertTrue((df['foo', 'one'] == 0).all()) + df = f() + self.assertTrue((df['foo', 'one'] != 2).all()) def test_frame_getitem_not_sorted(self): df = self.frame.T diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py index f12d851a6772d..65a4ea81dbf5d 100644 --- a/pandas/tests/test_panel.py +++ 
b/pandas/tests/test_panel.py @@ -1631,9 +1631,8 @@ def test_to_frame_multi_major(self): result = wp.to_frame() assert_frame_equal(result, expected) - wp.iloc[0, 0].iloc[0] = np.nan # BUG on setting. GH #5773 result = wp.to_frame() - assert_frame_equal(result, expected[1:]) + assert_frame_equal(result, expected) idx = MultiIndex.from_tuples([(1, 'two'), (1, 'one'), (2, 'one'), (np.nan, 'two')]) diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py index d37ac530d02e8..1669e0916739a 100644 --- a/pandas/tests/test_series.py +++ b/pandas/tests/test_series.py @@ -289,11 +289,9 @@ def get_dir(s): with tm.assertRaisesRegexp(ValueError, "modifications"): s.dt.hour = 5 - # trying to set a copy - with pd.option_context('chained_assignment','raise'): - def f(): - s.dt.hour[0] = 5 - self.assertRaises(com.SettingWithCopyError, f) + # trying to set a copy + s.dt.hour[0] = 5 + def test_dt_accessor_no_new_attributes(self): # https://github.com/pydata/pandas/issues/10673 diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py index 9211ffb5cfde5..fd2227ea8a44e 100644 --- a/pandas/tools/merge.py +++ b/pandas/tools/merge.py @@ -27,11 +27,11 @@ @Appender(_merge_doc, indents=0) def merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, - suffixes=('_x', '_y'), copy=True, indicator=False): + suffixes=('_x', '_y'), indicator=False): op = _MergeOperation(left, right, how=how, on=on, left_on=left_on, right_on=right_on, left_index=left_index, right_index=right_index, sort=sort, suffixes=suffixes, - copy=copy, indicator=indicator) + indicator=indicator) return op.get_result() if __debug__: merge.__doc__ = _merge_doc % '\nleft : DataFrame' @@ -157,7 +157,7 @@ class _MergeOperation(object): def __init__(self, left, right, how='inner', on=None, left_on=None, right_on=None, axis=1, left_index=False, right_index=False, sort=True, - suffixes=('_x', '_y'), copy=True, indicator=False): + suffixes=('_x', '_y'), 
indicator=False): self.left = self.orig_left = left self.right = self.orig_right = right self.how = how @@ -167,7 +167,6 @@ def __init__(self, left, right, how='inner', on=None, self.left_on = com._maybe_make_list(left_on) self.right_on = com._maybe_make_list(right_on) - self.copy = copy self.suffixes = suffixes self.sort = sort @@ -207,7 +206,7 @@ def get_result(self): result_data = concatenate_block_managers( [(ldata, lindexers), (rdata, rindexers)], axes=[llabels.append(rlabels), join_index], - concat_axis=0, copy=self.copy) + concat_axis=0, copy=True) typ = self.left._constructor result = typ(result_data).__finalize__(self, method='merge') @@ -569,7 +568,7 @@ def get_result(self): result_data = concatenate_block_managers( [(ldata, lindexers), (rdata, rindexers)], axes=[llabels.append(rlabels), join_index], - concat_axis=0, copy=self.copy) + concat_axis=0, copy=True) typ = self.left._constructor result = typ(result_data).__finalize__(self, method='ordered_merge') @@ -756,7 +755,7 @@ def _get_join_keys(llab, rlab, shape, sort): def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False, - keys=None, levels=None, names=None, verify_integrity=False, copy=True): + keys=None, levels=None, names=None, verify_integrity=False): """ Concatenate pandas objects along a particular axis with optional set logic along the other axes. Can also add a layer of hierarchical indexing on the @@ -794,8 +793,6 @@ def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False, concatenating objects where the concatenation axis does not have meaningful indexing information. Note the index values on the other axes are still respected in the join. 
- copy : boolean, default True - If False, do not copy data unnecessarily Notes ----- @@ -808,8 +805,7 @@ def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False, op = _Concatenator(objs, axis=axis, join_axes=join_axes, ignore_index=ignore_index, join=join, keys=keys, levels=levels, names=names, - verify_integrity=verify_integrity, - copy=copy) + verify_integrity=verify_integrity) return op.get_result() @@ -820,7 +816,7 @@ class _Concatenator(object): def __init__(self, objs, axis=0, join='outer', join_axes=None, keys=None, levels=None, names=None, - ignore_index=False, verify_integrity=False, copy=True): + ignore_index=False, verify_integrity=False): if isinstance(objs, (NDFrame, compat.string_types)): raise TypeError('first argument must be an iterable of pandas ' 'objects, you passed an object of type ' @@ -944,7 +940,6 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None, self.ignore_index = ignore_index self.verify_integrity = verify_integrity - self.copy = copy self.new_axes = self._get_new_axes() @@ -992,9 +987,7 @@ def get_result(self): mgrs_indexers.append((obj._data, indexers)) new_data = concatenate_block_managers( - mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=self.copy) - if not self.copy: - new_data._consolidate_inplace() + mgrs_indexers, self.new_axes, concat_axis=self.axis, copy=True) return self.objs[0]._from_axes(new_data, self.new_axes).__finalize__(self, method='concat') diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py index 6db2d2e15f699..53cdc4720d738 100644 --- a/pandas/tools/tests/test_merge.py +++ b/pandas/tools/tests/test_merge.py @@ -609,7 +609,7 @@ def test_merge_copy(self): right = DataFrame({'c': 'foo', 'd': 'bar'}, index=lrange(10)) merged = merge(left, right, left_index=True, - right_index=True, copy=True) + right_index=True) merged['a'] = 6 self.assertTrue((left['a'] == 0).all()) @@ -617,19 +617,16 @@ def test_merge_copy(self): merged['d'] = 'peekaboo' 
self.assertTrue((right['d'] == 'bar').all()) - def test_merge_nocopy(self): - left = DataFrame({'a': 0, 'b': 1}, index=lrange(10)) - right = DataFrame({'c': 'foo', 'd': 'bar'}, index=lrange(10)) - - merged = merge(left, right, left_index=True, - right_index=True, copy=False) - - merged['a'] = 6 - self.assertTrue((left['a'] == 6).all()) - - merged['d'] = 'peekaboo' - self.assertTrue((right['d'] == 'peekaboo').all()) + def test_merge_nocopy(self): + # disabled in copy-on-write paradigm + left = DataFrame({'a': 0, 'b': 1}, index=lrange(10)) + right = DataFrame({'c': 'foo', 'd': 'bar'}, index=lrange(10)) + + with tm.assertRaises(TypeError): + merge(left, right, left_index=True, + right_index=True, copy=False) + def test_join_sort(self): left = DataFrame({'key': ['foo', 'bar', 'baz', 'foo'], 'value': [1, 2, 3, 4]}) @@ -1942,30 +1939,17 @@ def test_concat_copy(self): df3 = DataFrame({5 : 'foo'},index=range(4)) # these are actual copies - result = concat([df,df2,df3],axis=1,copy=True) + result = concat([df,df2,df3],axis=1) for b in result._data.blocks: self.assertIsNone(b.values.base) - # these are the same - result = concat([df,df2,df3],axis=1,copy=False) - for b in result._data.blocks: - if b.is_float: - self.assertTrue(b.values.base is df._data.blocks[0].values.base) - elif b.is_integer: - self.assertTrue(b.values.base is df2._data.blocks[0].values.base) - elif b.is_object: - self.assertIsNotNone(b.values.base) - - # float block was consolidated - df4 = DataFrame(np.random.randn(4,1)) - result = concat([df,df2,df3,df4],axis=1,copy=False) - for b in result._data.blocks: - if b.is_float: - self.assertIsNone(b.values.base) - elif b.is_integer: - self.assertTrue(b.values.base is df2._data.blocks[0].values.base) - elif b.is_object: - self.assertIsNotNone(b.values.base) + # Concat copy argument removed in copy-on-write + with tm.assertRaises(TypeError): + result = concat([df,df2,df3],axis=1,copy=False) + + df4 = DataFrame(np.random.randn(4,1)) + with 
tm.assertRaises(TypeError): + result = concat([df,df2,df3,df4],axis=1,copy=False) def test_concat_with_group_keys(self): df = DataFrame(np.random.randn(4, 3))
Continuation of #11500
https://api.github.com/repos/pandas-dev/pandas/pulls/12036
2016-01-14T04:50:39Z
2016-01-20T03:13:49Z
null
2020-09-06T18:48:21Z
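The tests in the diff above all hinge on whether an indexing operation returns a view or a copy of the underlying data, which is exactly the ambiguity pandas' SettingWithCopy machinery (and the copy-on-write experiment this PR continues) tries to resolve. A minimal NumPy sketch of the underlying behavior the tests exercise, assuming only NumPy itself:

```python
import numpy as np

a = np.arange(10)

# Basic slicing returns a view: writes propagate back to the parent array.
view = a[2:5]
view[0] = 99
assert a[2] == 99

# Fancy (list) indexing returns a copy: writes do NOT propagate.
# Chained assignment like df[mask]['col'] = x hits this same distinction,
# which is why pandas raises SettingWithCopyError in the tests above.
copy = a[[2, 3, 4]]
copy[0] = -1
assert a[2] == 99  # parent unchanged
```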
CI: lint for rest of pandas
diff --git a/.travis.yml b/.travis.yml index 087d7f1565707..9fdb98c0124b8 100644 --- a/.travis.yml +++ b/.travis.yml @@ -175,3 +175,4 @@ after_script: - source activate pandas && ci/print_versions.py - ci/print_skipped.py /tmp/nosetests.xml - ci/lint.sh + - ci/lint_ok_for_now.sh diff --git a/ci/lint.sh b/ci/lint.sh index 1795451f7ace4..97d318b48469e 100755 --- a/ci/lint.sh +++ b/ci/lint.sh @@ -4,8 +4,11 @@ echo "inside $0" source activate pandas -echo flake8 pandas/core --statistics -flake8 pandas/core --statistics +for path in 'core' +do + echo "linting -> pandas/$path" + flake8 pandas/$path --filename '*.py' --statistics -q +done RET="$?" diff --git a/ci/lint_ok_for_now.sh b/ci/lint_ok_for_now.sh new file mode 100755 index 0000000000000..eba667fadde06 --- /dev/null +++ b/ci/lint_ok_for_now.sh @@ -0,0 +1,20 @@ +#!/bin/bash + +echo "inside $0" + +source activate pandas + +for path in 'io' 'stats' 'computation' 'tseries' 'util' 'compat' 'tools' 'sparse' 'tests' +do + echo "linting [ok_for_now] -> pandas/$path" + flake8 pandas/$path --filename '*.py' --statistics -q +done + +RET="$?" + +# we are disabling the return code for now +# to have Travis-CI pass. When the code +# passes linting, re-enable +#exit "$RET" + +exit 0
See output at the end: https://travis-ci.org/jreback/pandas/jobs/102213682. This only shows the summary stats; once the error counts are much lower we can remove the `-q` flag and show the actual errors. As more files are fixed, their checks can be moved to `lint.sh`.
https://api.github.com/repos/pandas-dev/pandas/pulls/12035
2016-01-13T23:35:30Z
2016-01-14T01:20:02Z
2016-01-14T01:20:02Z
2016-01-14T01:20:02Z
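One detail worth noting about the scripts above: `RET="$?"` is read after the `for` loop finishes, so it only reflects the flake8 status of the last directory linted (harmless in `lint_ok_for_now.sh`, which forces `exit 0` anyway). A hedged sketch of one way to accumulate a failure across all directories instead — the directory names are illustrative, and `true` stands in for the real `flake8` invocation:

```shell
#!/bin/bash
# Accumulate a nonzero status across every linted path,
# rather than keeping only the final iteration's $?.
RET=0
for path in core io tools; do
    echo "linting -> pandas/$path"
    # stand-in for: flake8 "pandas/$path" --filename '*.py' --statistics -q
    true || RET=1
done
echo "overall lint status: $RET"
```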
MAINT: Make take_1d accept readonly buffers.
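The fix below works around a Cython limitation: a typed memoryview (`int8_t[:] values`) cannot be constructed over a read-only NumPy buffer, so the generated functions first check `values.flags.writeable` and fall back to a plain `ndarray`-typed path. A small Python sketch of what "read-only buffer" means here and of the `take_1d` gather semantics being dispatched (the pure-Python loop is illustrative only, not the generated Cython):

```python
import numpy as np

values = np.arange(5.0)
values.flags.writeable = False  # simulate a read-only buffer (cf. GH 11502)
assert not values.flags.writeable

# take_1d semantics: gather by position, writing fill_value where idx == -1.
indexer = np.array([2, -1, 0], dtype=np.int64)
out = np.empty(len(indexer), dtype=np.float64)
for i, idx in enumerate(indexer):
    out[i] = np.nan if idx == -1 else values[idx]

assert out[0] == 2.0 and np.isnan(out[1]) and out[2] == 0.0
```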
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 9a023ce78bb6f..a59b38c40b03c 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -508,3 +508,5 @@ Bug Fixes - Removed ``millisecond`` property of ``DatetimeIndex``. This would always raise a ``ValueError`` (:issue:`12019`). + +- Bug in Series constructor with read-only data (:issue:`11502`) diff --git a/pandas/src/generate_code.py b/pandas/src/generate_code.py index d137ce732e005..cf42279c89508 100644 --- a/pandas/src/generate_code.py +++ b/pandas/src/generate_code.py @@ -88,13 +88,7 @@ """ -take_1d_template = """ -@cython.wraparound(False) -@cython.boundscheck(False) -def take_1d_%(name)s_%(dest)s(%(c_type_in)s[:] values, - int64_t[:] indexer, - %(c_type_out)s[:] out, - fill_value=np.nan): +inner_take_1d_template = """\ cdef: Py_ssize_t i, n, idx %(c_type_out)s fv @@ -112,6 +106,33 @@ def take_1d_%(name)s_%(dest)s(%(c_type_in)s[:] values, %(tab)s out[i] = %(preval)svalues[idx]%(postval)s """ +take_1d_template = """\ +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_%(name)s_%(dest)s_memview(%(c_type_in)s[:] values, + int64_t[:] indexer, + %(c_type_out)s[:] out, + fill_value=np.nan): +""" + inner_take_1d_template + """ + +@cython.wraparound(False) +@cython.boundscheck(False) +def take_1d_%(name)s_%(dest)s(ndarray[%(c_type_in)s, ndim=1] values, + int64_t[:] indexer, + %(c_type_out)s[:] out, + fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_%(name)s_%(dest)s_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
+""" + inner_take_1d_template + inner_take_2d_axis0_template = """\ cdef: Py_ssize_t i, j, k, n, idx diff --git a/pandas/src/generated.pyx b/pandas/src/generated.pyx index 738f695a6ce9f..99031da48dd20 100644 --- a/pandas/src/generated.pyx +++ b/pandas/src/generated.pyx @@ -2403,13 +2403,45 @@ def arrmap_bool(ndarray[uint8_t] index, object func): return maybe_convert_objects(result) +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_bool_bool_memview(uint8_t[:] values, + int64_t[:] indexer, + uint8_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + uint8_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_bool_bool(uint8_t[:] values, +def take_1d_bool_bool(ndarray[uint8_t, ndim=1] values, int64_t[:] indexer, uint8_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_bool_bool_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx uint8_t fv @@ -2426,13 +2458,45 @@ def take_1d_bool_bool(uint8_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_bool_object_memview(uint8_t[:] values, + int64_t[:] indexer, + object[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + object fv + + n = indexer.shape[0] + + fv = fill_value + + + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = True if values[idx] > 0 else False + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_bool_object(uint8_t[:] values, +def take_1d_bool_object(ndarray[uint8_t, ndim=1] values, int64_t[:] indexer, object[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_bool_object_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx object fv @@ -2449,13 +2513,45 @@ def take_1d_bool_object(uint8_t[:] values, else: out[i] = True if values[idx] > 0 else False +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int8_int8_memview(int8_t[:] values, + int64_t[:] indexer, + int8_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + int8_t fv + + n = indexer.shape[0] + + fv = fill_value + + + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int8_int8(int8_t[:] values, +def take_1d_int8_int8(ndarray[int8_t, ndim=1] values, int64_t[:] indexer, int8_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int8_int8_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx int8_t fv @@ -2472,13 +2568,45 @@ def take_1d_int8_int8(int8_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int8_int32_memview(int8_t[:] values, + int64_t[:] indexer, + int32_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + int32_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int8_int32(int8_t[:] values, +def take_1d_int8_int32(ndarray[int8_t, ndim=1] values, int64_t[:] indexer, int32_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int8_int32_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx int32_t fv @@ -2495,13 +2623,45 @@ def take_1d_int8_int32(int8_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int8_int64_memview(int8_t[:] values, + int64_t[:] indexer, + int64_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + int64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int8_int64(int8_t[:] values, +def take_1d_int8_int64(ndarray[int8_t, ndim=1] values, int64_t[:] indexer, int64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int8_int64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx int64_t fv @@ -2518,13 +2678,45 @@ def take_1d_int8_int64(int8_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int8_float64_memview(int8_t[:] values, + int64_t[:] indexer, + float64_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + float64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int8_float64(int8_t[:] values, +def take_1d_int8_float64(ndarray[int8_t, ndim=1] values, int64_t[:] indexer, float64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int8_float64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx float64_t fv @@ -2541,13 +2733,45 @@ def take_1d_int8_float64(int8_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int16_int16_memview(int16_t[:] values, + int64_t[:] indexer, + int16_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + int16_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int16_int16(int16_t[:] values, +def take_1d_int16_int16(ndarray[int16_t, ndim=1] values, int64_t[:] indexer, int16_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int16_int16_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx int16_t fv @@ -2564,13 +2788,45 @@ def take_1d_int16_int16(int16_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int16_int32_memview(int16_t[:] values, + int64_t[:] indexer, + int32_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + int32_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int16_int32(int16_t[:] values, +def take_1d_int16_int32(ndarray[int16_t, ndim=1] values, int64_t[:] indexer, int32_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int16_int32_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx int32_t fv @@ -2587,13 +2843,45 @@ def take_1d_int16_int32(int16_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int16_int64_memview(int16_t[:] values, + int64_t[:] indexer, + int64_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + int64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int16_int64(int16_t[:] values, +def take_1d_int16_int64(ndarray[int16_t, ndim=1] values, int64_t[:] indexer, int64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int16_int64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx int64_t fv @@ -2610,13 +2898,45 @@ def take_1d_int16_int64(int16_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int16_float64_memview(int16_t[:] values, + int64_t[:] indexer, + float64_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + float64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int16_float64(int16_t[:] values, +def take_1d_int16_float64(ndarray[int16_t, ndim=1] values, int64_t[:] indexer, float64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int16_float64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx float64_t fv @@ -2633,13 +2953,45 @@ def take_1d_int16_float64(int16_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int32_int32_memview(int32_t[:] values, + int64_t[:] indexer, + int32_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + int32_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int32_int32(int32_t[:] values, +def take_1d_int32_int32(ndarray[int32_t, ndim=1] values, int64_t[:] indexer, int32_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int32_int32_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx int32_t fv @@ -2656,13 +3008,45 @@ def take_1d_int32_int32(int32_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int32_int64_memview(int32_t[:] values, + int64_t[:] indexer, + int64_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + int64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int32_int64(int32_t[:] values, +def take_1d_int32_int64(ndarray[int32_t, ndim=1] values, int64_t[:] indexer, int64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int32_int64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx int64_t fv @@ -2679,13 +3063,45 @@ def take_1d_int32_int64(int32_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int32_float64_memview(int32_t[:] values, + int64_t[:] indexer, + float64_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + float64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int32_float64(int32_t[:] values, +def take_1d_int32_float64(ndarray[int32_t, ndim=1] values, int64_t[:] indexer, float64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int32_float64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx float64_t fv @@ -2702,13 +3118,45 @@ def take_1d_int32_float64(int32_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int64_int64_memview(int64_t[:] values, + int64_t[:] indexer, + int64_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + int64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int64_int64(int64_t[:] values, +def take_1d_int64_int64(ndarray[int64_t, ndim=1] values, int64_t[:] indexer, int64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int64_int64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx int64_t fv @@ -2725,13 +3173,45 @@ def take_1d_int64_int64(int64_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_int64_float64_memview(int64_t[:] values, + int64_t[:] indexer, + float64_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + float64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_int64_float64(int64_t[:] values, +def take_1d_int64_float64(ndarray[int64_t, ndim=1] values, int64_t[:] indexer, float64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_int64_float64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx float64_t fv @@ -2748,13 +3228,45 @@ def take_1d_int64_float64(int64_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_float32_float32_memview(float32_t[:] values, + int64_t[:] indexer, + float32_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + float32_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_float32_float32(float32_t[:] values, +def take_1d_float32_float32(ndarray[float32_t, ndim=1] values, int64_t[:] indexer, float32_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_float32_float32_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx float32_t fv @@ -2771,13 +3283,67 @@ def take_1d_float32_float32(float32_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_float32_float64_memview(float32_t[:] values, + int64_t[:] indexer, + float64_t[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + float64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_float32_float64(float32_t[:] values, +def take_1d_float32_float64(ndarray[float32_t, ndim=1] values, int64_t[:] indexer, float64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_float32_float64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
+ cdef: + Py_ssize_t i, n, idx + float64_t fv + + n = indexer.shape[0] + + fv = fill_value + + with nogil: + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_float64_float64_memview(float64_t[:] values, + int64_t[:] indexer, + float64_t[:] out, + fill_value=np.nan): cdef: Py_ssize_t i, n, idx float64_t fv @@ -2797,10 +3363,20 @@ def take_1d_float32_float64(float32_t[:] values, @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_float64_float64(float64_t[:] values, +def take_1d_float64_float64(ndarray[float64_t, ndim=1] values, int64_t[:] indexer, float64_t[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_float64_float64_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx float64_t fv @@ -2817,13 +3393,45 @@ def take_1d_float64_float64(float64_t[:] values, else: out[i] = values[idx] +@cython.wraparound(False) +@cython.boundscheck(False) +cdef inline take_1d_object_object_memview(object[:] values, + int64_t[:] indexer, + object[:] out, + fill_value=np.nan): + cdef: + Py_ssize_t i, n, idx + object fv + + n = indexer.shape[0] + + fv = fill_value + + + for i from 0 <= i < n: + idx = indexer[i] + if idx == -1: + out[i] = fv + else: + out[i] = values[idx] + @cython.wraparound(False) @cython.boundscheck(False) -def take_1d_object_object(object[:] values, +def take_1d_object_object(ndarray[object, ndim=1] values, int64_t[:] indexer, object[:] out, fill_value=np.nan): + + if values.flags.writeable: + # We can call the memoryview version of the code + take_1d_object_object_memview(values, indexer, out, + fill_value=fill_value) + return + + # We cannot use the memoryview version on readonly-buffers due to + # a limitation of Cython's typed memoryviews. Instead we can use + # the slightly slower Cython ndarray type directly. 
cdef: Py_ssize_t i, n, idx object fv diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index 664e4a4e078fe..bc204740567de 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -774,8 +774,9 @@ class TestTake(tm.TestCase): _multiprocess_can_split_ = True def test_1d_with_out(self): - def _test_dtype(dtype, can_hold_na): + def _test_dtype(dtype, can_hold_na, writeable=True): data = np.random.randint(0, 2, 4).astype(dtype) + data.flags.writeable = writeable indexer = [2, 1, 0, 1] out = np.empty(4, dtype=dtype) @@ -796,18 +797,22 @@ def _test_dtype(dtype, can_hold_na): # no exception o/w data.take(indexer, out=out) - _test_dtype(np.float64, True) - _test_dtype(np.float32, True) - _test_dtype(np.uint64, False) - _test_dtype(np.uint32, False) - _test_dtype(np.uint16, False) - _test_dtype(np.uint8, False) - _test_dtype(np.int64, False) - _test_dtype(np.int32, False) - _test_dtype(np.int16, False) - _test_dtype(np.int8, False) - _test_dtype(np.object_, True) - _test_dtype(np.bool, False) + for writeable in [True, False]: + # Check that take_nd works both with writeable arrays (in which + # case fast typed memoryviews implementation) and read-only + # arrays alike. + _test_dtype(np.float64, True, writeable=writeable) + _test_dtype(np.float32, True, writeable=writeable) + _test_dtype(np.uint64, False, writeable=writeable) + _test_dtype(np.uint32, False, writeable=writeable) + _test_dtype(np.uint16, False, writeable=writeable) + _test_dtype(np.uint8, False, writeable=writeable) + _test_dtype(np.int64, False, writeable=writeable) + _test_dtype(np.int32, False, writeable=writeable) + _test_dtype(np.int16, False, writeable=writeable) + _test_dtype(np.int8, False, writeable=writeable) + _test_dtype(np.object_, True, writeable=writeable) + _test_dtype(np.bool, False, writeable=writeable) def test_1d_fill_nonna(self): def _test_dtype(dtype, fill_value, out_dtype):
This is a port of @ogrisel's fix in #10070 to deal with 1d arrays. There's absolutely nothing new here beyond what's in the original patch. closes #11502
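The patch works around a Cython limitation: typed memoryviews cannot accept read-only buffers, so each generated `take_1d_*` function checks `values.flags.writeable` and only takes the memoryview fast path when the buffer is writeable. Reduced to a pure-Python sketch (the single loop below stands in for both branches — in the actual patch both are Cython-compiled, and only there does the dispatch matter):

```python
import numpy as np

def take_1d(values, indexer, out, fill_value=np.nan):
    # Mirrors the logic of the generated take_1d_* functions:
    # gather values[indexer] into out, writing fill_value wherever
    # the indexer holds -1 (the sentinel pandas uses for a missing
    # row). In pure Python the writeable and read-only cases run
    # the exact same loop.
    for i, idx in enumerate(indexer):
        out[i] = fill_value if idx == -1 else values[idx]
    return out

values = np.array([10.0, 20.0, 30.0])
values.flags.writeable = False          # simulate a read-only buffer
indexer = np.array([2, -1, 0], dtype=np.int64)
out = np.empty(3, dtype=np.float64)
take_1d(values, indexer, out)           # out becomes [30.0, nan, 10.0]
```

Before the patch, passing such a read-only array to the memoryview-typed signature would typically fail with a "buffer source array is read-only" error from Cython; dispatching on the flag keeps the fast path for the common writeable case and routes read-only inputs through the slightly slower `ndarray`-typed fallback.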
https://api.github.com/repos/pandas-dev/pandas/pulls/12033
2016-01-13T15:40:31Z
2016-01-14T12:42:31Z
null
2016-01-14T12:42:31Z
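The accompanying `test_common.py` change exercises both dispatch branches by flipping the NumPy writeable flag before running each dtype case. The same pattern in a standalone sketch (the helper name is illustrative, not the pandas test helper):

```python
import numpy as np

def check_take(dtype, writeable):
    # Toggling flags.writeable is enough to steer a Cython-backed
    # take between its memoryview fast path and the ndarray
    # fallback, so one test body covers both code paths.
    data = np.arange(4).astype(dtype)
    data.flags.writeable = writeable
    indexer = np.array([2, 1, 0, 1])
    result = data.take(indexer)
    assert list(result) == [2, 1, 0, 1]

for writeable in (True, False):
    for dtype in (np.float64, np.int64, np.uint8):
        check_take(dtype, writeable)
```

This is why the test loops `for writeable in [True, False]` over every dtype rather than adding separate read-only test cases: the fixture differs only in the flag, and the expected output is identical on both paths.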
Break apart test_frame.py and fix all flake8 warnings
diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py index 3434afc4129c4..d6a9feb1bd8f4 100644 --- a/pandas/io/tests/test_packers.py +++ b/pandas/io/tests/test_packers.py @@ -12,9 +12,9 @@ date_range, period_range, Index, SparseSeries, SparseDataFrame, SparsePanel) import pandas.util.testing as tm -from pandas.util.testing import ensure_clean, assert_index_equal -from pandas.tests.test_series import assert_series_equal -from pandas.tests.test_frame import assert_frame_equal +from pandas.util.testing import (ensure_clean, assert_index_equal, + assert_series_equal, + assert_frame_equal) from pandas.tests.test_panel import assert_panel_equal import pandas diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py index 2f148c89ccbe9..64ffd7482ee34 100644 --- a/pandas/sparse/tests/test_sparse.py +++ b/pandas/sparse/tests/test_sparse.py @@ -33,7 +33,9 @@ from pandas.sparse.api import (SparseSeries, SparseDataFrame, SparsePanel, SparseArray) -import pandas.tests.test_frame as test_frame +from pandas.tests.frame.test_misc_api import ( + SafeForSparse as SparseFrameTests) + import pandas.tests.test_panel as test_panel import pandas.tests.test_series as test_series @@ -922,7 +924,7 @@ class TestSparseTimeSeries(tm.TestCase): pass -class TestSparseDataFrame(tm.TestCase, test_frame.SafeForSparse): +class TestSparseDataFrame(tm.TestCase, SparseFrameTests): klass = SparseDataFrame _multiprocess_can_split_ = True diff --git a/pandas/tests/frame/__init__.py b/pandas/tests/frame/__init__.py new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/pandas/tests/frame/common.py b/pandas/tests/frame/common.py new file mode 100644 index 0000000000000..37f67712e1b58 --- /dev/null +++ b/pandas/tests/frame/common.py @@ -0,0 +1,141 @@ +import numpy as np + +from pandas import compat +from pandas.util.decorators import cache_readonly +import pandas.util.testing as tm +import pandas as pd + +_seriesd = tm.getSeriesData() +_tsd = 
tm.getTimeSeriesData() + +_frame = pd.DataFrame(_seriesd) +_frame2 = pd.DataFrame(_seriesd, columns=['D', 'C', 'B', 'A']) +_intframe = pd.DataFrame(dict((k, v.astype(int)) + for k, v in compat.iteritems(_seriesd))) + +_tsframe = pd.DataFrame(_tsd) + +_mixed_frame = _frame.copy() +_mixed_frame['foo'] = 'bar' + + +class TestData(object): + + @cache_readonly + def frame(self): + return _frame.copy() + + @cache_readonly + def frame2(self): + return _frame2.copy() + + @cache_readonly + def intframe(self): + # force these all to int64 to avoid platform testing issues + return pd.DataFrame(dict([(c, s) for c, s in + compat.iteritems(_intframe)]), + dtype=np.int64) + + @cache_readonly + def tsframe(self): + return _tsframe.copy() + + @cache_readonly + def mixed_frame(self): + return _mixed_frame.copy() + + @cache_readonly + def mixed_float(self): + return pd.DataFrame({'A': _frame['A'].copy().astype('float32'), + 'B': _frame['B'].copy().astype('float32'), + 'C': _frame['C'].copy().astype('float16'), + 'D': _frame['D'].copy().astype('float64')}) + + @cache_readonly + def mixed_float2(self): + return pd.DataFrame({'A': _frame2['A'].copy().astype('float32'), + 'B': _frame2['B'].copy().astype('float32'), + 'C': _frame2['C'].copy().astype('float16'), + 'D': _frame2['D'].copy().astype('float64')}) + + @cache_readonly + def mixed_int(self): + return pd.DataFrame({'A': _intframe['A'].copy().astype('int32'), + 'B': np.ones(len(_intframe['B']), dtype='uint64'), + 'C': _intframe['C'].copy().astype('uint8'), + 'D': _intframe['D'].copy().astype('int64')}) + + @cache_readonly + def all_mixed(self): + return pd.DataFrame({'a': 1., 'b': 2, 'c': 'foo', + 'float32': np.array([1.] 
* 10, dtype='float32'), + 'int32': np.array([1] * 10, dtype='int32')}, + index=np.arange(10)) + + @cache_readonly + def tzframe(self): + result = pd.DataFrame({'A': pd.date_range('20130101', periods=3), + 'B': pd.date_range('20130101', periods=3, + tz='US/Eastern'), + 'C': pd.date_range('20130101', periods=3, + tz='CET')}) + result.iloc[1, 1] = pd.NaT + result.iloc[1, 2] = pd.NaT + return result + + @cache_readonly + def empty(self): + return pd.DataFrame({}) + + @cache_readonly + def ts1(self): + return tm.makeTimeSeries() + + @cache_readonly + def ts2(self): + return tm.makeTimeSeries()[5:] + + @cache_readonly + def simple(self): + arr = np.array([[1., 2., 3.], + [4., 5., 6.], + [7., 8., 9.]]) + + return pd.DataFrame(arr, columns=['one', 'two', 'three'], + index=['a', 'b', 'c']) + +# self.ts3 = tm.makeTimeSeries()[-5:] +# self.ts4 = tm.makeTimeSeries()[1:-1] + + +def _check_mixed_float(df, dtype=None): + # float16 are most likely to be upcasted to float32 + dtypes = dict(A='float32', B='float32', C='float16', D='float64') + if isinstance(dtype, compat.string_types): + dtypes = dict([(k, dtype) for k, v in dtypes.items()]) + elif isinstance(dtype, dict): + dtypes.update(dtype) + if dtypes.get('A'): + assert(df.dtypes['A'] == dtypes['A']) + if dtypes.get('B'): + assert(df.dtypes['B'] == dtypes['B']) + if dtypes.get('C'): + assert(df.dtypes['C'] == dtypes['C']) + if dtypes.get('D'): + assert(df.dtypes['D'] == dtypes['D']) + + +def _check_mixed_int(df, dtype=None): + dtypes = dict(A='int32', B='uint64', C='uint8', D='int64') + if isinstance(dtype, compat.string_types): + dtypes = dict([(k, dtype) for k, v in dtypes.items()]) + elif isinstance(dtype, dict): + dtypes.update(dtype) + if dtypes.get('A'): + assert(df.dtypes['A'] == dtypes['A']) + if dtypes.get('B'): + assert(df.dtypes['B'] == dtypes['B']) + if dtypes.get('C'): + assert(df.dtypes['C'] == dtypes['C']) + if dtypes.get('D'): + assert(df.dtypes['D'] == dtypes['D']) diff --git 
a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py new file mode 100644 index 0000000000000..bea8eab932be3 --- /dev/null +++ b/pandas/tests/frame/test_alter_axes.py @@ -0,0 +1,620 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, timedelta + +import numpy as np + +from pandas.compat import lrange +from pandas import DataFrame, Series, Index, MultiIndex +import pandas as pd + +from pandas.util.testing import (assert_almost_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameAlterAxes(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_set_index(self): + idx = Index(np.arange(len(self.mixed_frame))) + + # cache it + _ = self.mixed_frame['foo'] # noqa + self.mixed_frame.index = idx + self.assertIs(self.mixed_frame['foo'].index, idx) + with assertRaisesRegexp(ValueError, 'Length mismatch'): + self.mixed_frame.index = idx[::2] + + def test_set_index_cast(self): + + # issue casting an index then set_index + df = DataFrame({'A': [1.1, 2.2, 3.3], 'B': [5.0, 6.1, 7.2]}, + index=[2010, 2011, 2012]) + expected = df.ix[2010] + new_index = df.index.astype(np.int32) + df.index = new_index + result = df.ix[2010] + assert_series_equal(result, expected) + + def test_set_index2(self): + df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'], + 'B': ['one', 'two', 'three', 'one', 'two'], + 'C': ['a', 'b', 'c', 'd', 'e'], + 'D': np.random.randn(5), + 'E': np.random.randn(5)}) + + # new object, single-column + result = df.set_index('C') + result_nodrop = df.set_index('C', drop=False) + + index = Index(df['C'], name='C') + + expected = df.ix[:, ['A', 'B', 'D', 'E']] + expected.index = index + + expected_nodrop = df.copy() + expected_nodrop.index = index + + assert_frame_equal(result, expected) + assert_frame_equal(result_nodrop, expected_nodrop) + 
self.assertEqual(result.index.name, index.name) + + # inplace, single + df2 = df.copy() + + df2.set_index('C', inplace=True) + + assert_frame_equal(df2, expected) + + df3 = df.copy() + df3.set_index('C', drop=False, inplace=True) + + assert_frame_equal(df3, expected_nodrop) + + # create new object, multi-column + result = df.set_index(['A', 'B']) + result_nodrop = df.set_index(['A', 'B'], drop=False) + + index = MultiIndex.from_arrays([df['A'], df['B']], names=['A', 'B']) + + expected = df.ix[:, ['C', 'D', 'E']] + expected.index = index + + expected_nodrop = df.copy() + expected_nodrop.index = index + + assert_frame_equal(result, expected) + assert_frame_equal(result_nodrop, expected_nodrop) + self.assertEqual(result.index.names, index.names) + + # inplace + df2 = df.copy() + df2.set_index(['A', 'B'], inplace=True) + assert_frame_equal(df2, expected) + + df3 = df.copy() + df3.set_index(['A', 'B'], drop=False, inplace=True) + assert_frame_equal(df3, expected_nodrop) + + # corner case + with assertRaisesRegexp(ValueError, 'Index has duplicate keys'): + df.set_index('A', verify_integrity=True) + + # append + result = df.set_index(['A', 'B'], append=True) + xp = df.reset_index().set_index(['index', 'A', 'B']) + xp.index.names = [None, 'A', 'B'] + assert_frame_equal(result, xp) + + # append to existing multiindex + rdf = df.set_index(['A'], append=True) + rdf = rdf.set_index(['B', 'C'], append=True) + expected = df.set_index(['A', 'B', 'C'], append=True) + assert_frame_equal(rdf, expected) + + # Series + result = df.set_index(df.C) + self.assertEqual(result.index.name, 'C') + + def test_set_index_nonuniq(self): + df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'], + 'B': ['one', 'two', 'three', 'one', 'two'], + 'C': ['a', 'b', 'c', 'd', 'e'], + 'D': np.random.randn(5), + 'E': np.random.randn(5)}) + with assertRaisesRegexp(ValueError, 'Index has duplicate keys'): + df.set_index('A', verify_integrity=True, inplace=True) + self.assertIn('A', df) + + def 
test_set_index_bug(self): + # GH1590 + df = DataFrame({'val': [0, 1, 2], 'key': ['a', 'b', 'c']}) + df2 = df.select(lambda indx: indx >= 1) + rs = df2.set_index('key') + xp = DataFrame({'val': [1, 2]}, + Index(['b', 'c'], name='key')) + assert_frame_equal(rs, xp) + + def test_set_index_pass_arrays(self): + df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'foo', 'foo'], + 'B': ['one', 'one', 'two', 'three', + 'two', 'two', 'one', 'three'], + 'C': np.random.randn(8), + 'D': np.random.randn(8)}) + + # multiple columns + result = df.set_index(['A', df['B'].values], drop=False) + expected = df.set_index(['A', 'B'], drop=False) + + # TODO should set_index check_names ? + assert_frame_equal(result, expected, check_names=False) + + def test_construction_with_categorical_index(self): + + ci = tm.makeCategoricalIndex(10) + + # with Categorical + df = DataFrame({'A': np.random.randn(10), + 'B': ci.values}) + idf = df.set_index('B') + str(idf) + tm.assert_index_equal(idf.index, ci, check_names=False) + self.assertEqual(idf.index.name, 'B') + + # from a CategoricalIndex + df = DataFrame({'A': np.random.randn(10), + 'B': ci}) + idf = df.set_index('B') + str(idf) + tm.assert_index_equal(idf.index, ci, check_names=False) + self.assertEqual(idf.index.name, 'B') + + idf = df.set_index('B').reset_index().set_index('B') + str(idf) + tm.assert_index_equal(idf.index, ci, check_names=False) + self.assertEqual(idf.index.name, 'B') + + new_df = idf.reset_index() + new_df.index = df.B + tm.assert_index_equal(new_df.index, ci, check_names=False) + self.assertEqual(idf.index.name, 'B') + + def test_set_index_cast_datetimeindex(self): + df = DataFrame({'A': [datetime(2000, 1, 1) + timedelta(i) + for i in range(1000)], + 'B': np.random.randn(1000)}) + + idf = df.set_index('A') + tm.assertIsInstance(idf.index, pd.DatetimeIndex) + + # don't cast a DatetimeIndex WITH a tz, leave as object + # GH 6032 + i = (pd.DatetimeIndex( + pd.tseries.tools.to_datetime(['2013-1-1 13:00', + 
'2013-1-2 14:00'], errors="raise")) + .tz_localize('US/Pacific')) + df = DataFrame(np.random.randn(2, 1), columns=['A']) + + expected = Series(np.array([pd.Timestamp('2013-01-01 13:00:00-0800', + tz='US/Pacific'), + pd.Timestamp('2013-01-02 14:00:00-0800', + tz='US/Pacific')], + dtype="object")) + + # convert index to series + result = Series(i) + assert_series_equal(result, expected) + + # assignt to frame + df['B'] = i + result = df['B'] + assert_series_equal(result, expected, check_names=False) + self.assertEqual(result.name, 'B') + + # keep the timezone + result = i.to_series(keep_tz=True) + assert_series_equal(result.reset_index(drop=True), expected) + + # convert to utc + df['C'] = i.to_series().reset_index(drop=True) + result = df['C'] + comp = pd.DatetimeIndex(expected.values).copy() + comp.tz = None + self.assert_numpy_array_equal(result.values, comp.values) + + # list of datetimes with a tz + df['D'] = i.to_pydatetime() + result = df['D'] + assert_series_equal(result, expected, check_names=False) + self.assertEqual(result.name, 'D') + + # GH 6785 + # set the index manually + import pytz + df = DataFrame( + [{'ts': datetime(2014, 4, 1, tzinfo=pytz.utc), 'foo': 1}]) + expected = df.set_index('ts') + df.index = df['ts'] + df.pop('ts') + assert_frame_equal(df, expected) + + # GH 3950 + # reset_index with single level + for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern']: + idx = pd.date_range('1/1/2011', periods=5, + freq='D', tz=tz, name='idx') + df = pd.DataFrame( + {'a': range(5), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx) + + expected = pd.DataFrame({'idx': [datetime(2011, 1, 1), + datetime(2011, 1, 2), + datetime(2011, 1, 3), + datetime(2011, 1, 4), + datetime(2011, 1, 5)], + 'a': range(5), + 'b': ['A', 'B', 'C', 'D', 'E']}, + columns=['idx', 'a', 'b']) + expected['idx'] = expected['idx'].apply( + lambda d: pd.Timestamp(d, tz=tz)) + assert_frame_equal(df.reset_index(), expected) + + def test_set_index_multiindexcolumns(self): + columns = 
MultiIndex.from_tuples([('foo', 1), ('foo', 2), ('bar', 1)]) + df = DataFrame(np.random.randn(3, 3), columns=columns) + rs = df.set_index(df.columns[0]) + xp = df.ix[:, 1:] + xp.index = df.ix[:, 0].values + xp.index.names = [df.columns[0]] + assert_frame_equal(rs, xp) + + def test_set_index_empty_column(self): + # #1971 + df = DataFrame([ + dict(a=1, p=0), + dict(a=2, m=10), + dict(a=3, m=11, p=20), + dict(a=4, m=12, p=21) + ], columns=('a', 'm', 'p', 'x')) + + # it works! + result = df.set_index(['a', 'x']) + repr(result) + + def test_set_columns(self): + cols = Index(np.arange(len(self.mixed_frame.columns))) + self.mixed_frame.columns = cols + with assertRaisesRegexp(ValueError, 'Length mismatch'): + self.mixed_frame.columns = cols[::2] + + # Renaming + + def test_rename(self): + mapping = { + 'A': 'a', + 'B': 'b', + 'C': 'c', + 'D': 'd' + } + + renamed = self.frame.rename(columns=mapping) + renamed2 = self.frame.rename(columns=str.lower) + + assert_frame_equal(renamed, renamed2) + assert_frame_equal(renamed2.rename(columns=str.upper), + self.frame, check_names=False) + + # index + data = { + 'A': {'foo': 0, 'bar': 1} + } + + # gets sorted alphabetical + df = DataFrame(data) + renamed = df.rename(index={'foo': 'bar', 'bar': 'foo'}) + self.assert_numpy_array_equal(renamed.index, ['foo', 'bar']) + + renamed = df.rename(index=str.upper) + self.assert_numpy_array_equal(renamed.index, ['BAR', 'FOO']) + + # have to pass something + self.assertRaises(TypeError, self.frame.rename) + + # partial columns + renamed = self.frame.rename(columns={'C': 'foo', 'D': 'bar'}) + self.assert_numpy_array_equal( + renamed.columns, ['A', 'B', 'foo', 'bar']) + + # other axis + renamed = self.frame.T.rename(index={'C': 'foo', 'D': 'bar'}) + self.assert_numpy_array_equal(renamed.index, ['A', 'B', 'foo', 'bar']) + + # index with name + index = Index(['foo', 'bar'], name='name') + renamer = DataFrame(data, index=index) + renamed = renamer.rename(index={'foo': 'bar', 'bar': 'foo'}) + 
self.assert_numpy_array_equal(renamed.index, ['bar', 'foo']) + self.assertEqual(renamed.index.name, renamer.index.name) + + # MultiIndex + tuples_index = [('foo1', 'bar1'), ('foo2', 'bar2')] + tuples_columns = [('fizz1', 'buzz1'), ('fizz2', 'buzz2')] + index = MultiIndex.from_tuples(tuples_index, names=['foo', 'bar']) + columns = MultiIndex.from_tuples( + tuples_columns, names=['fizz', 'buzz']) + renamer = DataFrame([(0, 0), (1, 1)], index=index, columns=columns) + renamed = renamer.rename(index={'foo1': 'foo3', 'bar2': 'bar3'}, + columns={'fizz1': 'fizz3', 'buzz2': 'buzz3'}) + new_index = MultiIndex.from_tuples( + [('foo3', 'bar1'), ('foo2', 'bar3')]) + new_columns = MultiIndex.from_tuples( + [('fizz3', 'buzz1'), ('fizz2', 'buzz3')]) + self.assert_numpy_array_equal(renamed.index, new_index) + self.assert_numpy_array_equal(renamed.columns, new_columns) + self.assertEqual(renamed.index.names, renamer.index.names) + self.assertEqual(renamed.columns.names, renamer.columns.names) + + def test_rename_nocopy(self): + renamed = self.frame.rename(columns={'C': 'foo'}, copy=False) + renamed['foo'] = 1. 
+ self.assertTrue((self.frame['C'] == 1.).all()) + + def test_rename_inplace(self): + self.frame.rename(columns={'C': 'foo'}) + self.assertIn('C', self.frame) + self.assertNotIn('foo', self.frame) + + c_id = id(self.frame['C']) + frame = self.frame.copy() + frame.rename(columns={'C': 'foo'}, inplace=True) + + self.assertNotIn('C', frame) + self.assertIn('foo', frame) + self.assertNotEqual(id(frame['foo']), c_id) + + def test_rename_bug(self): + # GH 5344 + # rename set ref_locs, and set_index was not resetting + df = DataFrame({0: ['foo', 'bar'], 1: ['bah', 'bas'], 2: [1, 2]}) + df = df.rename(columns={0: 'a'}) + df = df.rename(columns={1: 'b'}) + df = df.set_index(['a', 'b']) + df.columns = ['2001-01-01'] + expected = DataFrame([[1], [2]], + index=MultiIndex.from_tuples( + [('foo', 'bah'), ('bar', 'bas')], + names=['a', 'b']), + columns=['2001-01-01']) + assert_frame_equal(df, expected) + + def test_reorder_levels(self): + index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], + labels=[[0, 0, 0, 0, 0, 0], + [0, 1, 2, 0, 1, 2], + [0, 1, 0, 1, 0, 1]], + names=['L0', 'L1', 'L2']) + df = DataFrame({'A': np.arange(6), 'B': np.arange(6)}, index=index) + + # no change, position + result = df.reorder_levels([0, 1, 2]) + assert_frame_equal(df, result) + + # no change, labels + result = df.reorder_levels(['L0', 'L1', 'L2']) + assert_frame_equal(df, result) + + # rotate, position + result = df.reorder_levels([1, 2, 0]) + e_idx = MultiIndex(levels=[['one', 'two', 'three'], [0, 1], ['bar']], + labels=[[0, 1, 2, 0, 1, 2], + [0, 1, 0, 1, 0, 1], + [0, 0, 0, 0, 0, 0]], + names=['L1', 'L2', 'L0']) + expected = DataFrame({'A': np.arange(6), 'B': np.arange(6)}, + index=e_idx) + assert_frame_equal(result, expected) + + result = df.reorder_levels([0, 0, 0]) + e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']], + labels=[[0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0]], + names=['L0', 'L0', 'L0']) + expected = DataFrame({'A': np.arange(6), 'B': 
np.arange(6)}, + index=e_idx) + assert_frame_equal(result, expected) + + result = df.reorder_levels(['L0', 'L0', 'L0']) + assert_frame_equal(result, expected) + + def test_reset_index(self): + stacked = self.frame.stack()[::2] + stacked = DataFrame({'foo': stacked, 'bar': stacked}) + + names = ['first', 'second'] + stacked.index.names = names + deleveled = stacked.reset_index() + for i, (lev, lab) in enumerate(zip(stacked.index.levels, + stacked.index.labels)): + values = lev.take(lab) + name = names[i] + assert_almost_equal(values, deleveled[name]) + + stacked.index.names = [None, None] + deleveled2 = stacked.reset_index() + self.assert_numpy_array_equal(deleveled['first'], + deleveled2['level_0']) + self.assert_numpy_array_equal(deleveled['second'], + deleveled2['level_1']) + + # default name assigned + rdf = self.frame.reset_index() + self.assert_numpy_array_equal(rdf['index'], self.frame.index.values) + + # default name assigned, corner case + df = self.frame.copy() + df['index'] = 'foo' + rdf = df.reset_index() + self.assert_numpy_array_equal(rdf['level_0'], self.frame.index.values) + + # but this is ok + self.frame.index.name = 'index' + deleveled = self.frame.reset_index() + self.assert_numpy_array_equal(deleveled['index'], + self.frame.index.values) + self.assert_numpy_array_equal(deleveled.index, + np.arange(len(deleveled))) + + # preserve column names + self.frame.columns.name = 'columns' + resetted = self.frame.reset_index() + self.assertEqual(resetted.columns.name, 'columns') + + # only remove certain columns + frame = self.frame.reset_index().set_index(['index', 'A', 'B']) + rs = frame.reset_index(['A', 'B']) + + # TODO should reset_index check_names ? 
+ assert_frame_equal(rs, self.frame, check_names=False) + + rs = frame.reset_index(['index', 'A', 'B']) + assert_frame_equal(rs, self.frame.reset_index(), check_names=False) + + rs = frame.reset_index(['index', 'A', 'B']) + assert_frame_equal(rs, self.frame.reset_index(), check_names=False) + + rs = frame.reset_index('A') + xp = self.frame.reset_index().set_index(['index', 'B']) + assert_frame_equal(rs, xp, check_names=False) + + # test resetting in place + df = self.frame.copy() + resetted = self.frame.reset_index() + df.reset_index(inplace=True) + assert_frame_equal(df, resetted, check_names=False) + + frame = self.frame.reset_index().set_index(['index', 'A', 'B']) + rs = frame.reset_index('A', drop=True) + xp = self.frame.copy() + del xp['A'] + xp = xp.set_index(['B'], append=True) + assert_frame_equal(rs, xp, check_names=False) + + def test_reset_index_right_dtype(self): + time = np.arange(0.0, 10, np.sqrt(2) / 2) + s1 = Series((9.81 * time ** 2) / 2, + index=Index(time, name='time'), + name='speed') + df = DataFrame(s1) + + resetted = s1.reset_index() + self.assertEqual(resetted['time'].dtype, np.float64) + + resetted = df.reset_index() + self.assertEqual(resetted['time'].dtype, np.float64) + + def test_reset_index_multiindex_col(self): + vals = np.random.randn(3, 3).astype(object) + idx = ['x', 'y', 'z'] + full = np.hstack(([[x] for x in idx], vals)) + df = DataFrame(vals, Index(idx, name='a'), + columns=[['b', 'b', 'c'], ['mean', 'median', 'mean']]) + rs = df.reset_index() + xp = DataFrame(full, columns=[['a', 'b', 'b', 'c'], + ['', 'mean', 'median', 'mean']]) + assert_frame_equal(rs, xp) + + rs = df.reset_index(col_fill=None) + xp = DataFrame(full, columns=[['a', 'b', 'b', 'c'], + ['a', 'mean', 'median', 'mean']]) + assert_frame_equal(rs, xp) + + rs = df.reset_index(col_level=1, col_fill='blah') + xp = DataFrame(full, columns=[['blah', 'b', 'b', 'c'], + ['a', 'mean', 'median', 'mean']]) + assert_frame_equal(rs, xp) + + df = DataFrame(vals, + 
MultiIndex.from_arrays([[0, 1, 2], ['x', 'y', 'z']], + names=['d', 'a']), + columns=[['b', 'b', 'c'], ['mean', 'median', 'mean']]) + rs = df.reset_index('a', ) + xp = DataFrame(full, Index([0, 1, 2], name='d'), + columns=[['a', 'b', 'b', 'c'], + ['', 'mean', 'median', 'mean']]) + assert_frame_equal(rs, xp) + + rs = df.reset_index('a', col_fill=None) + xp = DataFrame(full, Index(lrange(3), name='d'), + columns=[['a', 'b', 'b', 'c'], + ['a', 'mean', 'median', 'mean']]) + assert_frame_equal(rs, xp) + + rs = df.reset_index('a', col_fill='blah', col_level=1) + xp = DataFrame(full, Index(lrange(3), name='d'), + columns=[['blah', 'b', 'b', 'c'], + ['a', 'mean', 'median', 'mean']]) + assert_frame_equal(rs, xp) + + def test_reset_index_with_datetimeindex_cols(self): + # GH5818 + # + df = pd.DataFrame([[1, 2], [3, 4]], + columns=pd.date_range('1/1/2013', '1/2/2013'), + index=['A', 'B']) + + result = df.reset_index() + expected = pd.DataFrame([['A', 1, 2], ['B', 3, 4]], + columns=['index', datetime(2013, 1, 1), + datetime(2013, 1, 2)]) + assert_frame_equal(result, expected) + + def test_set_index_names(self): + df = pd.util.testing.makeDataFrame() + df.index.name = 'name' + + self.assertEqual(df.set_index(df.index).index.names, ['name']) + + mi = MultiIndex.from_arrays(df[['A', 'B']].T.values, names=['A', 'B']) + mi2 = MultiIndex.from_arrays(df[['A', 'B', 'A', 'B']].T.values, + names=['A', 'B', 'A', 'B']) + + df = df.set_index(['A', 'B']) + + self.assertEqual(df.set_index(df.index).index.names, ['A', 'B']) + + # Check that set_index isn't converting a MultiIndex into an Index + self.assertTrue(isinstance(df.set_index(df.index).index, MultiIndex)) + + # Check actual equality + tm.assert_index_equal(df.set_index(df.index).index, mi) + + # Check that [MultiIndex, MultiIndex] yields a MultiIndex rather + # than a pair of tuples + self.assertTrue(isinstance(df.set_index( + [df.index, df.index]).index, MultiIndex)) + + # Check equality + 
tm.assert_index_equal(df.set_index([df.index, df.index]).index, mi2)
+
+    def test_rename_objects(self):
+        renamed = self.mixed_frame.rename(columns=str.upper)
+        self.assertIn('FOO', renamed)
+        self.assertNotIn('foo', renamed)
+
+    def test_assign_columns(self):
+        self.frame['hi'] = 'there'
+
+        frame = self.frame.copy()
+        frame.columns = ['foo', 'bar', 'baz', 'quux', 'foo2']
+        assert_series_equal(self.frame['C'], frame['baz'], check_names=False)
+        assert_series_equal(self.frame['hi'], frame['foo2'], check_names=False)
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
new file mode 100644
index 0000000000000..f68faf99d3143
--- /dev/null
+++ b/pandas/tests/frame/test_analytics.py
@@ -0,0 +1,2268 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+
+from datetime import timedelta, datetime
+from distutils.version import LooseVersion
+import sys
+import nose
+
+from numpy import nan
+from numpy.random import randn
+import numpy as np
+
+from pandas.compat import lrange
+from pandas import (compat, isnull, notnull, DataFrame, Series,
+                    MultiIndex, date_range, Timestamp)
+import pandas as pd
+import pandas.core.common as com
+import pandas.core.nanops as nanops
+
+from pandas.util.testing import (assert_almost_equal,
+                                 assert_equal,
+                                 assert_series_equal,
+                                 assert_frame_equal,
+                                 assertRaisesRegexp)
+
+import pandas.util.testing as tm
+from pandas import _np_version_under1p9
+
+from pandas.tests.frame.common import TestData
+
+
+class TestDataFrameAnalytics(tm.TestCase, TestData):
+
+    _multiprocess_can_split_ = True
+
+    # ----------------------------------------------------------------------
+    # Correlation and covariance
+
+    def test_corr_pearson(self):
+        tm._skip_if_no_scipy()
+        self.frame['A'][:5] = nan
+        self.frame['B'][5:10] = nan
+
+        self._check_method('pearson')
+
+    def test_corr_kendall(self):
+        tm._skip_if_no_scipy()
+        self.frame['A'][:5] = nan
+        self.frame['B'][5:10] = nan
+
+        self._check_method('kendall')
+
+ def test_corr_spearman(self): + tm._skip_if_no_scipy() + self.frame['A'][:5] = nan + self.frame['B'][5:10] = nan + + self._check_method('spearman') + + def _check_method(self, method='pearson', check_minp=False): + if not check_minp: + correls = self.frame.corr(method=method) + exp = self.frame['A'].corr(self.frame['C'], method=method) + assert_almost_equal(correls['A']['C'], exp) + else: + result = self.frame.corr(min_periods=len(self.frame) - 8) + expected = self.frame.corr() + expected.ix['A', 'B'] = expected.ix['B', 'A'] = nan + assert_frame_equal(result, expected) + + def test_corr_non_numeric(self): + tm._skip_if_no_scipy() + self.frame['A'][:5] = nan + self.frame['B'][5:10] = nan + + # exclude non-numeric types + result = self.mixed_frame.corr() + expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].corr() + assert_frame_equal(result, expected) + + def test_corr_nooverlap(self): + tm._skip_if_no_scipy() + + # nothing in common + for meth in ['pearson', 'kendall', 'spearman']: + df = DataFrame({'A': [1, 1.5, 1, np.nan, np.nan, np.nan], + 'B': [np.nan, np.nan, np.nan, 1, 1.5, 1], + 'C': [np.nan, np.nan, np.nan, np.nan, + np.nan, np.nan]}) + rs = df.corr(meth) + self.assertTrue(isnull(rs.ix['A', 'B'])) + self.assertTrue(isnull(rs.ix['B', 'A'])) + self.assertEqual(rs.ix['A', 'A'], 1) + self.assertEqual(rs.ix['B', 'B'], 1) + self.assertTrue(isnull(rs.ix['C', 'C'])) + + def test_corr_constant(self): + tm._skip_if_no_scipy() + + # constant --> all NA + + for meth in ['pearson', 'spearman']: + df = DataFrame({'A': [1, 1, 1, np.nan, np.nan, np.nan], + 'B': [np.nan, np.nan, np.nan, 1, 1, 1]}) + rs = df.corr(meth) + self.assertTrue(isnull(rs.values).all()) + + def test_corr_int(self): + # dtypes other than float64 #1761 + df3 = DataFrame({"a": [1, 2, 3, 4], "b": [1, 2, 3, 4]}) + + # it works! 
+
+        df3.cov()
+        df3.corr()
+
+    def test_corr_int_and_boolean(self):
+        tm._skip_if_no_scipy()
+
+        # when dtypes of pandas series are different
+        # then ndarray will have dtype=object,
+        # so it needs to be properly handled
+        df = DataFrame({"a": [True, False], "b": [1, 0]})
+
+        expected = DataFrame(np.ones((2, 2)), index=['a', 'b'],
+                             columns=['a', 'b'])
+        for meth in ['pearson', 'kendall', 'spearman']:
+            assert_frame_equal(df.corr(meth), expected)
+
+    def test_cov(self):
+        # min_periods no NAs (corner case)
+        expected = self.frame.cov()
+        result = self.frame.cov(min_periods=len(self.frame))
+
+        assert_frame_equal(expected, result)
+
+        result = self.frame.cov(min_periods=len(self.frame) + 1)
+        self.assertTrue(isnull(result.values).all())
+
+        # with NAs
+        frame = self.frame.copy()
+        frame['A'][:5] = nan
+        frame['B'][5:10] = nan
+        result = frame.cov(min_periods=len(frame) - 8)
+        expected = frame.cov()
+        expected.ix['A', 'B'] = np.nan
+        expected.ix['B', 'A'] = np.nan
+        assert_frame_equal(result, expected)
+
+        # regular
+        self.frame['A'][:5] = nan
+        self.frame['B'][:10] = nan
+        cov = self.frame.cov()
+
+        assert_almost_equal(cov['A']['C'],
+                            self.frame['A'].cov(self.frame['C']))
+
+        # exclude non-numeric types
+        result = self.mixed_frame.cov()
+        expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].cov()
+        assert_frame_equal(result, expected)
+
+        # Single column frame
+        df = DataFrame(np.linspace(0.0, 1.0, 10))
+        result = df.cov()
+        expected = DataFrame(np.cov(df.values.T).reshape((1, 1)),
+                             index=df.columns, columns=df.columns)
+        assert_frame_equal(result, expected)
+        df.ix[0] = np.nan
+        result = df.cov()
+        expected = DataFrame(np.cov(df.values[1:].T).reshape((1, 1)),
+                             index=df.columns, columns=df.columns)
+        assert_frame_equal(result, expected)
+
+    def test_corrwith(self):
+        a = self.tsframe
+        noise = Series(randn(len(a)), index=a.index)
+
+        b = self.tsframe + noise
+
+        # make sure order does not matter
+        b = b.reindex(columns=b.columns[::-1], index=b.index[::-1][10:])
+        del b['B']
+
+
colcorr = a.corrwith(b, axis=0) + assert_almost_equal(colcorr['A'], a['A'].corr(b['A'])) + + rowcorr = a.corrwith(b, axis=1) + assert_series_equal(rowcorr, a.T.corrwith(b.T, axis=0)) + + dropped = a.corrwith(b, axis=0, drop=True) + assert_almost_equal(dropped['A'], a['A'].corr(b['A'])) + self.assertNotIn('B', dropped) + + dropped = a.corrwith(b, axis=1, drop=True) + self.assertNotIn(a.index[-1], dropped.index) + + # non time-series data + index = ['a', 'b', 'c', 'd', 'e'] + columns = ['one', 'two', 'three', 'four'] + df1 = DataFrame(randn(5, 4), index=index, columns=columns) + df2 = DataFrame(randn(4, 4), index=index[:4], columns=columns) + correls = df1.corrwith(df2, axis=1) + for row in index[:4]: + assert_almost_equal(correls[row], df1.ix[row].corr(df2.ix[row])) + + def test_corrwith_with_objects(self): + df1 = tm.makeTimeDataFrame() + df2 = tm.makeTimeDataFrame() + cols = ['A', 'B', 'C', 'D'] + + df1['obj'] = 'foo' + df2['obj'] = 'bar' + + result = df1.corrwith(df2) + expected = df1.ix[:, cols].corrwith(df2.ix[:, cols]) + assert_series_equal(result, expected) + + result = df1.corrwith(df2, axis=1) + expected = df1.ix[:, cols].corrwith(df2.ix[:, cols], axis=1) + assert_series_equal(result, expected) + + def test_corrwith_series(self): + result = self.tsframe.corrwith(self.tsframe['A']) + expected = self.tsframe.apply(self.tsframe['A'].corr) + + assert_series_equal(result, expected) + + def test_corrwith_matches_corrcoef(self): + df1 = DataFrame(np.arange(10000), columns=['a']) + df2 = DataFrame(np.arange(10000) ** 2, columns=['a']) + c1 = df1.corrwith(df2)['a'] + c2 = np.corrcoef(df1['a'], df2['a'])[0][1] + + assert_almost_equal(c1, c2) + self.assertTrue(c1 < 1) + + def test_bool_describe_in_mixed_frame(self): + df = DataFrame({ + 'string_data': ['a', 'b', 'c', 'd', 'e'], + 'bool_data': [True, True, False, False, False], + 'int_data': [10, 20, 30, 40, 50], + }) + + # Boolean data and integer data is included in .describe() output, + # string data isn't + 
self.assert_numpy_array_equal(df.describe().columns, [ + 'bool_data', 'int_data']) + + bool_describe = df.describe()['bool_data'] + + # Both the min and the max values should stay booleans + self.assertEqual(bool_describe['min'].dtype, np.bool_) + self.assertEqual(bool_describe['max'].dtype, np.bool_) + + self.assertFalse(bool_describe['min']) + self.assertTrue(bool_describe['max']) + + # For numeric operations, like mean or median, the values True/False + # are cast to the integer values 1 and 0 + assert_almost_equal(bool_describe['mean'], 0.4) + assert_almost_equal(bool_describe['50%'], 0) + + def test_reduce_mixed_frame(self): + # GH 6806 + df = DataFrame({ + 'bool_data': [True, True, False, False, False], + 'int_data': [10, 20, 30, 40, 50], + 'string_data': ['a', 'b', 'c', 'd', 'e'], + }) + df.reindex(columns=['bool_data', 'int_data', 'string_data']) + test = df.sum(axis=0) + assert_almost_equal(test.values, [2, 150, 'abcde']) + assert_series_equal(test, df.T.sum(axis=1)) + + def test_count(self): + f = lambda s: notnull(s).sum() + self._check_stat_op('count', f, + has_skipna=False, + has_numeric_only=True, + check_dtype=False, + check_dates=True) + + # corner case + frame = DataFrame() + ct1 = frame.count(1) + tm.assertIsInstance(ct1, Series) + + ct2 = frame.count(0) + tm.assertIsInstance(ct2, Series) + + # GH #423 + df = DataFrame(index=lrange(10)) + result = df.count(1) + expected = Series(0, index=df.index) + assert_series_equal(result, expected) + + df = DataFrame(columns=lrange(10)) + result = df.count(0) + expected = Series(0, index=df.columns) + assert_series_equal(result, expected) + + df = DataFrame() + result = df.count() + expected = Series(0, index=[]) + assert_series_equal(result, expected) + + def test_sum(self): + self._check_stat_op('sum', np.sum, has_numeric_only=True) + + # mixed types (with upcasting happening) + self._check_stat_op('sum', np.sum, + frame=self.mixed_float.astype('float32'), + has_numeric_only=True, check_dtype=False, + 
check_less_precise=True) + + def test_stat_operators_attempt_obj_array(self): + data = { + 'a': [-0.00049987540199591344, -0.0016467257772919831, + 0.00067695870775883013], + 'b': [-0, -0, 0.0], + 'c': [0.00031111847529610595, 0.0014902627951905339, + -0.00094099200035979691] + } + df1 = DataFrame(data, index=['foo', 'bar', 'baz'], + dtype='O') + methods = ['sum', 'mean', 'prod', 'var', 'std', 'skew', 'min', 'max'] + + # GH #676 + df2 = DataFrame({0: [np.nan, 2], 1: [np.nan, 3], + 2: [np.nan, 4]}, dtype=object) + + for df in [df1, df2]: + for meth in methods: + self.assertEqual(df.values.dtype, np.object_) + result = getattr(df, meth)(1) + expected = getattr(df.astype('f8'), meth)(1) + + if not tm._incompat_bottleneck_version(meth): + assert_series_equal(result, expected) + + def test_mean(self): + self._check_stat_op('mean', np.mean, check_dates=True) + + def test_product(self): + self._check_stat_op('product', np.prod) + + def test_median(self): + def wrapper(x): + if isnull(x).any(): + return np.nan + return np.median(x) + + self._check_stat_op('median', wrapper, check_dates=True) + + def test_min(self): + self._check_stat_op('min', np.min, check_dates=True) + self._check_stat_op('min', np.min, frame=self.intframe) + + def test_cummin(self): + self.tsframe.ix[5:10, 0] = nan + self.tsframe.ix[10:15, 1] = nan + self.tsframe.ix[15:, 2] = nan + + # axis = 0 + cummin = self.tsframe.cummin() + expected = self.tsframe.apply(Series.cummin) + assert_frame_equal(cummin, expected) + + # axis = 1 + cummin = self.tsframe.cummin(axis=1) + expected = self.tsframe.apply(Series.cummin, axis=1) + assert_frame_equal(cummin, expected) + + # it works + df = DataFrame({'A': np.arange(20)}, index=np.arange(20)) + result = df.cummin() # noqa + + # fix issue + cummin_xs = self.tsframe.cummin(axis=1) + self.assertEqual(np.shape(cummin_xs), np.shape(self.tsframe)) + + def test_cummax(self): + self.tsframe.ix[5:10, 0] = nan + self.tsframe.ix[10:15, 1] = nan + self.tsframe.ix[15:, 2] = nan 
+ + # axis = 0 + cummax = self.tsframe.cummax() + expected = self.tsframe.apply(Series.cummax) + assert_frame_equal(cummax, expected) + + # axis = 1 + cummax = self.tsframe.cummax(axis=1) + expected = self.tsframe.apply(Series.cummax, axis=1) + assert_frame_equal(cummax, expected) + + # it works + df = DataFrame({'A': np.arange(20)}, index=np.arange(20)) + result = df.cummax() # noqa + + # fix issue + cummax_xs = self.tsframe.cummax(axis=1) + self.assertEqual(np.shape(cummax_xs), np.shape(self.tsframe)) + + def test_max(self): + self._check_stat_op('max', np.max, check_dates=True) + self._check_stat_op('max', np.max, frame=self.intframe) + + def test_mad(self): + f = lambda x: np.abs(x - x.mean()).mean() + self._check_stat_op('mad', f) + + def test_var_std(self): + alt = lambda x: np.var(x, ddof=1) + self._check_stat_op('var', alt) + + alt = lambda x: np.std(x, ddof=1) + self._check_stat_op('std', alt) + + result = self.tsframe.std(ddof=4) + expected = self.tsframe.apply(lambda x: x.std(ddof=4)) + assert_almost_equal(result, expected) + + result = self.tsframe.var(ddof=4) + expected = self.tsframe.apply(lambda x: x.var(ddof=4)) + assert_almost_equal(result, expected) + + arr = np.repeat(np.random.random((1, 1000)), 1000, 0) + result = nanops.nanvar(arr, axis=0) + self.assertFalse((result < 0).any()) + if nanops._USE_BOTTLENECK: + nanops._USE_BOTTLENECK = False + result = nanops.nanvar(arr, axis=0) + self.assertFalse((result < 0).any()) + nanops._USE_BOTTLENECK = True + + def test_numeric_only_flag(self): + # GH #9201 + methods = ['sem', 'var', 'std'] + df1 = DataFrame(np.random.randn(5, 3), columns=['foo', 'bar', 'baz']) + # set one entry to a number in str format + df1.ix[0, 'foo'] = '100' + + df2 = DataFrame(np.random.randn(5, 3), columns=['foo', 'bar', 'baz']) + # set one entry to a non-number str + df2.ix[0, 'foo'] = 'a' + + for meth in methods: + result = getattr(df1, meth)(axis=1, numeric_only=True) + expected = getattr(df1[['bar', 'baz']], meth)(axis=1) + 
assert_series_equal(expected, result) + + result = getattr(df2, meth)(axis=1, numeric_only=True) + expected = getattr(df2[['bar', 'baz']], meth)(axis=1) + assert_series_equal(expected, result) + + # df1 has all numbers, df2 has a letter inside + self.assertRaises(TypeError, lambda: getattr(df1, meth) + (axis=1, numeric_only=False)) + self.assertRaises(TypeError, lambda: getattr(df2, meth) + (axis=1, numeric_only=False)) + + def test_quantile(self): + from numpy import percentile + + q = self.tsframe.quantile(0.1, axis=0) + self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) + q = self.tsframe.quantile(0.9, axis=1) + q = self.intframe.quantile(0.1) + self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) + + # test degenerate case + q = DataFrame({'x': [], 'y': []}).quantile(0.1, axis=0) + assert(np.isnan(q['x']) and np.isnan(q['y'])) + + # non-numeric exclusion + df = DataFrame({'col1': ['A', 'A', 'B', 'B'], 'col2': [1, 2, 3, 4]}) + rs = df.quantile(0.5) + xp = df.median() + assert_series_equal(rs, xp) + + # axis + df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) + result = df.quantile(.5, axis=1) + expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3]) + assert_series_equal(result, expected) + + result = df.quantile([.5, .75], axis=1) + expected = DataFrame({1: [1.5, 1.75], 2: [2.5, 2.75], + 3: [3.5, 3.75]}, index=[0.5, 0.75]) + assert_frame_equal(result, expected, check_index_type=True) + + # We may want to break API in the future to change this + # so that we exclude non-numeric along the same axis + # See GH #7312 + df = DataFrame([[1, 2, 3], + ['a', 'b', 4]]) + result = df.quantile(.5, axis=1) + expected = Series([3., 4.], index=[0, 1]) + assert_series_equal(result, expected) + + def test_quantile_axis_parameter(self): + # GH 9543/9544 + + df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) + + result = df.quantile(.5, axis=0) + + expected = Series([2., 3.], index=["A", "B"]) + assert_series_equal(result, expected) + + 
expected = df.quantile(.5, axis="index") + assert_series_equal(result, expected) + + result = df.quantile(.5, axis=1) + + expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3]) + assert_series_equal(result, expected) + + result = df.quantile(.5, axis="columns") + assert_series_equal(result, expected) + + self.assertRaises(ValueError, df.quantile, 0.1, axis=-1) + self.assertRaises(ValueError, df.quantile, 0.1, axis="column") + + def test_quantile_interpolation(self): + # GH #10174 + if _np_version_under1p9: + raise nose.SkipTest("Numpy version under 1.9") + + from numpy import percentile + + # interpolation = linear (default case) + q = self.tsframe.quantile(0.1, axis=0, interpolation='linear') + self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) + q = self.intframe.quantile(0.1) + self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) + + # test with and without interpolation keyword + q1 = self.intframe.quantile(0.1) + self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10)) + assert_series_equal(q, q1) + + # interpolation method other than default linear + df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) + result = df.quantile(.5, axis=1, interpolation='nearest') + expected = Series([1., 2., 3.], index=[1, 2, 3]) + assert_series_equal(result, expected) + + # axis + result = df.quantile([.5, .75], axis=1, interpolation='lower') + expected = DataFrame({1: [1., 1.], 2: [2., 2.], + 3: [3., 3.]}, index=[0.5, 0.75]) + assert_frame_equal(result, expected) + + # test degenerate case + df = DataFrame({'x': [], 'y': []}) + q = df.quantile(0.1, axis=0, interpolation='higher') + assert(np.isnan(q['x']) and np.isnan(q['y'])) + + # multi + df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], + columns=['a', 'b', 'c']) + result = df.quantile([.25, .5], interpolation='midpoint') + expected = DataFrame([[1.5, 1.5, 1.5], [2.5, 2.5, 2.5]], + index=[.25, .5], columns=['a', 'b', 'c']) + assert_frame_equal(result, expected) + + def 
test_quantile_interpolation_np_lt_1p9(self): + # GH #10174 + if not _np_version_under1p9: + raise nose.SkipTest("Numpy version is greater than 1.9") + + from numpy import percentile + + # interpolation = linear (default case) + q = self.tsframe.quantile(0.1, axis=0, interpolation='linear') + self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) + q = self.intframe.quantile(0.1) + self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) + + # test with and without interpolation keyword + q1 = self.intframe.quantile(0.1) + self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10)) + assert_series_equal(q, q1) + + # interpolation method other than default linear + expErrMsg = "Interpolation methods other than linear" + df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) + with assertRaisesRegexp(ValueError, expErrMsg): + df.quantile(.5, axis=1, interpolation='nearest') + + with assertRaisesRegexp(ValueError, expErrMsg): + df.quantile([.5, .75], axis=1, interpolation='lower') + + # test degenerate case + df = DataFrame({'x': [], 'y': []}) + with assertRaisesRegexp(ValueError, expErrMsg): + q = df.quantile(0.1, axis=0, interpolation='higher') + + # multi + df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], + columns=['a', 'b', 'c']) + with assertRaisesRegexp(ValueError, expErrMsg): + df.quantile([.25, .5], interpolation='midpoint') + + def test_quantile_multi(self): + df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], + columns=['a', 'b', 'c']) + result = df.quantile([.25, .5]) + expected = DataFrame([[1.5, 1.5, 1.5], [2., 2., 2.]], + index=[.25, .5], columns=['a', 'b', 'c']) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.quantile([.25, .5], axis=1) + expected = DataFrame([[1.5, 1.5, 1.5], [2., 2., 2.]], + index=[.25, .5], columns=[0, 1, 2]) + + # empty + result = DataFrame({'x': [], 'y': []}).quantile([0.1, .9], axis=0) + expected = DataFrame({'x': [np.nan, np.nan], 'y': [np.nan, np.nan]}, + index=[.1, .9]) + 
assert_frame_equal(result, expected) + + def test_quantile_datetime(self): + df = DataFrame({'a': pd.to_datetime(['2010', '2011']), 'b': [0, 5]}) + + # exclude datetime + result = df.quantile(.5) + expected = Series([2.5], index=['b']) + + # datetime + result = df.quantile(.5, numeric_only=False) + expected = Series([Timestamp('2010-07-02 12:00:00'), 2.5], + index=['a', 'b']) + assert_series_equal(result, expected) + + # datetime w/ multi + result = df.quantile([.5], numeric_only=False) + expected = DataFrame([[Timestamp('2010-07-02 12:00:00'), 2.5]], + index=[.5], columns=['a', 'b']) + assert_frame_equal(result, expected) + + # axis = 1 + df['c'] = pd.to_datetime(['2011', '2012']) + result = df[['a', 'c']].quantile(.5, axis=1, numeric_only=False) + expected = Series([Timestamp('2010-07-02 12:00:00'), + Timestamp('2011-07-02 12:00:00')], + index=[0, 1]) + assert_series_equal(result, expected) + + result = df[['a', 'c']].quantile([.5], axis=1, numeric_only=False) + expected = DataFrame([[Timestamp('2010-07-02 12:00:00'), + Timestamp('2011-07-02 12:00:00')]], + index=[0.5], columns=[0, 1]) + assert_frame_equal(result, expected) + + def test_quantile_invalid(self): + msg = 'percentiles should all be in the interval \\[0, 1\\]' + for invalid in [-1, 2, [0.5, -1], [0.5, 2]]: + with tm.assertRaisesRegexp(ValueError, msg): + self.tsframe.quantile(invalid) + + def test_cumsum(self): + self.tsframe.ix[5:10, 0] = nan + self.tsframe.ix[10:15, 1] = nan + self.tsframe.ix[15:, 2] = nan + + # axis = 0 + cumsum = self.tsframe.cumsum() + expected = self.tsframe.apply(Series.cumsum) + assert_frame_equal(cumsum, expected) + + # axis = 1 + cumsum = self.tsframe.cumsum(axis=1) + expected = self.tsframe.apply(Series.cumsum, axis=1) + assert_frame_equal(cumsum, expected) + + # works + df = DataFrame({'A': np.arange(20)}, index=np.arange(20)) + result = df.cumsum() # noqa + + # fix issue + cumsum_xs = self.tsframe.cumsum(axis=1) + self.assertEqual(np.shape(cumsum_xs), 
np.shape(self.tsframe)) + + def test_cumprod(self): + self.tsframe.ix[5:10, 0] = nan + self.tsframe.ix[10:15, 1] = nan + self.tsframe.ix[15:, 2] = nan + + # axis = 0 + cumprod = self.tsframe.cumprod() + expected = self.tsframe.apply(Series.cumprod) + assert_frame_equal(cumprod, expected) + + # axis = 1 + cumprod = self.tsframe.cumprod(axis=1) + expected = self.tsframe.apply(Series.cumprod, axis=1) + assert_frame_equal(cumprod, expected) + + # fix issue + cumprod_xs = self.tsframe.cumprod(axis=1) + self.assertEqual(np.shape(cumprod_xs), np.shape(self.tsframe)) + + # ints + df = self.tsframe.fillna(0).astype(int) + df.cumprod(0) + df.cumprod(1) + + # ints32 + df = self.tsframe.fillna(0).astype(np.int32) + df.cumprod(0) + df.cumprod(1) + + def test_rank(self): + tm._skip_if_no_scipy() + from scipy.stats import rankdata + + self.frame['A'][::2] = np.nan + self.frame['B'][::3] = np.nan + self.frame['C'][::4] = np.nan + self.frame['D'][::5] = np.nan + + ranks0 = self.frame.rank() + ranks1 = self.frame.rank(1) + mask = np.isnan(self.frame.values) + + fvals = self.frame.fillna(np.inf).values + + exp0 = np.apply_along_axis(rankdata, 0, fvals) + exp0[mask] = np.nan + + exp1 = np.apply_along_axis(rankdata, 1, fvals) + exp1[mask] = np.nan + + assert_almost_equal(ranks0.values, exp0) + assert_almost_equal(ranks1.values, exp1) + + # integers + df = DataFrame(np.random.randint(0, 5, size=40).reshape((10, 4))) + + result = df.rank() + exp = df.astype(float).rank() + assert_frame_equal(result, exp) + + result = df.rank(1) + exp = df.astype(float).rank(1) + assert_frame_equal(result, exp) + + def test_rank2(self): + df = DataFrame([[1, 3, 2], [1, 2, 3]]) + expected = DataFrame([[1.0, 3.0, 2.0], [1, 2, 3]]) / 3.0 + result = df.rank(1, pct=True) + assert_frame_equal(result, expected) + + df = DataFrame([[1, 3, 2], [1, 2, 3]]) + expected = df.rank(0) / 2.0 + result = df.rank(0, pct=True) + assert_frame_equal(result, expected) + + df = DataFrame([['b', 'c', 'a'], ['a', 'c', 'b']]) + 
expected = DataFrame([[2.0, 3.0, 1.0], [1, 3, 2]]) + result = df.rank(1, numeric_only=False) + assert_frame_equal(result, expected) + + expected = DataFrame([[2.0, 1.5, 1.0], [1, 1.5, 2]]) + result = df.rank(0, numeric_only=False) + assert_frame_equal(result, expected) + + df = DataFrame([['b', np.nan, 'a'], ['a', 'c', 'b']]) + expected = DataFrame([[2.0, nan, 1.0], [1.0, 3.0, 2.0]]) + result = df.rank(1, numeric_only=False) + assert_frame_equal(result, expected) + + expected = DataFrame([[2.0, nan, 1.0], [1.0, 1.0, 2.0]]) + result = df.rank(0, numeric_only=False) + assert_frame_equal(result, expected) + + # f7u12, this does not work without extensive workaround + data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)], + [datetime(2000, 1, 2), datetime(2000, 1, 3), + datetime(2000, 1, 1)]] + df = DataFrame(data) + + # check the rank + expected = DataFrame([[2., nan, 1.], + [2., 3., 1.]]) + result = df.rank(1, numeric_only=False) + assert_frame_equal(result, expected) + + # mixed-type frames + self.mixed_frame['datetime'] = datetime.now() + self.mixed_frame['timedelta'] = timedelta(days=1, seconds=1) + + result = self.mixed_frame.rank(1) + expected = self.mixed_frame.rank(1, numeric_only=True) + assert_frame_equal(result, expected) + + df = DataFrame({"a": [1e-20, -5, 1e-20 + 1e-40, 10, + 1e60, 1e80, 1e-30]}) + exp = DataFrame({"a": [3.5, 1., 3.5, 5., 6., 7., 2.]}) + assert_frame_equal(df.rank(), exp) + + def test_rank_na_option(self): + tm._skip_if_no_scipy() + from scipy.stats import rankdata + + self.frame['A'][::2] = np.nan + self.frame['B'][::3] = np.nan + self.frame['C'][::4] = np.nan + self.frame['D'][::5] = np.nan + + # bottom + ranks0 = self.frame.rank(na_option='bottom') + ranks1 = self.frame.rank(1, na_option='bottom') + + fvals = self.frame.fillna(np.inf).values + + exp0 = np.apply_along_axis(rankdata, 0, fvals) + exp1 = np.apply_along_axis(rankdata, 1, fvals) + + assert_almost_equal(ranks0.values, exp0) + assert_almost_equal(ranks1.values, exp1) + + 
# top + ranks0 = self.frame.rank(na_option='top') + ranks1 = self.frame.rank(1, na_option='top') + + fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values + fval1 = self.frame.T + fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T + fval1 = fval1.fillna(np.inf).values + + exp0 = np.apply_along_axis(rankdata, 0, fval0) + exp1 = np.apply_along_axis(rankdata, 1, fval1) + + assert_almost_equal(ranks0.values, exp0) + assert_almost_equal(ranks1.values, exp1) + + # descending + + # bottom + ranks0 = self.frame.rank(na_option='top', ascending=False) + ranks1 = self.frame.rank(1, na_option='top', ascending=False) + + fvals = self.frame.fillna(np.inf).values + + exp0 = np.apply_along_axis(rankdata, 0, -fvals) + exp1 = np.apply_along_axis(rankdata, 1, -fvals) + + assert_almost_equal(ranks0.values, exp0) + assert_almost_equal(ranks1.values, exp1) + + # descending + + # top + ranks0 = self.frame.rank(na_option='bottom', ascending=False) + ranks1 = self.frame.rank(1, na_option='bottom', ascending=False) + + fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values + fval1 = self.frame.T + fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T + fval1 = fval1.fillna(np.inf).values + + exp0 = np.apply_along_axis(rankdata, 0, -fval0) + exp1 = np.apply_along_axis(rankdata, 1, -fval1) + + assert_almost_equal(ranks0.values, exp0) + assert_almost_equal(ranks1.values, exp1) + + def test_sem(self): + alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x)) + self._check_stat_op('sem', alt) + + result = self.tsframe.sem(ddof=4) + expected = self.tsframe.apply( + lambda x: x.std(ddof=4) / np.sqrt(len(x))) + assert_almost_equal(result, expected) + + arr = np.repeat(np.random.random((1, 1000)), 1000, 0) + result = nanops.nansem(arr, axis=0) + self.assertFalse((result < 0).any()) + if nanops._USE_BOTTLENECK: + nanops._USE_BOTTLENECK = False + result = nanops.nansem(arr, axis=0) + self.assertFalse((result < 0).any()) + nanops._USE_BOTTLENECK = True + + def test_skew(self): + 
tm._skip_if_no_scipy() + from scipy.stats import skew + + def alt(x): + if len(x) < 3: + return np.nan + return skew(x, bias=False) + + self._check_stat_op('skew', alt) + + def test_kurt(self): + tm._skip_if_no_scipy() + + from scipy.stats import kurtosis + + def alt(x): + if len(x) < 4: + return np.nan + return kurtosis(x, bias=False) + + self._check_stat_op('kurt', alt) + + index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], + labels=[[0, 0, 0, 0, 0, 0], + [0, 1, 2, 0, 1, 2], + [0, 1, 0, 1, 0, 1]]) + df = DataFrame(np.random.randn(6, 3), index=index) + + kurt = df.kurt() + kurt2 = df.kurt(level=0).xs('bar') + assert_series_equal(kurt, kurt2, check_names=False) + self.assertTrue(kurt.name is None) + self.assertEqual(kurt2.name, 'bar') + + def _check_stat_op(self, name, alternative, frame=None, has_skipna=True, + has_numeric_only=False, check_dtype=True, + check_dates=False, check_less_precise=False): + if frame is None: + frame = self.frame + # set some NAs + frame.ix[5:10] = np.nan + frame.ix[15:20, -2:] = np.nan + + f = getattr(frame, name) + + if check_dates: + df = DataFrame({'b': date_range('1/1/2001', periods=2)}) + _f = getattr(df, name) + result = _f() + self.assertIsInstance(result, Series) + + df['a'] = lrange(len(df)) + result = getattr(df, name)() + self.assertIsInstance(result, Series) + self.assertTrue(len(result)) + + if has_skipna: + def skipna_wrapper(x): + nona = x.dropna() + if len(nona) == 0: + return np.nan + return alternative(nona) + + def wrapper(x): + return alternative(x.values) + + result0 = f(axis=0, skipna=False) + result1 = f(axis=1, skipna=False) + assert_series_equal(result0, frame.apply(wrapper), + check_dtype=check_dtype, + check_less_precise=check_less_precise) + # HACK: win32 + assert_series_equal(result1, frame.apply(wrapper, axis=1), + check_dtype=False, + check_less_precise=check_less_precise) + else: + skipna_wrapper = alternative + wrapper = alternative + + result0 = f(axis=0) + result1 = f(axis=1) + 
assert_series_equal(result0, frame.apply(skipna_wrapper), + check_dtype=check_dtype, + check_less_precise=check_less_precise) + if not tm._incompat_bottleneck_version(name): + assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1), + check_dtype=False, + check_less_precise=check_less_precise) + + # check dtypes + if check_dtype: + lcd_dtype = frame.values.dtype + self.assertEqual(lcd_dtype, result0.dtype) + self.assertEqual(lcd_dtype, result1.dtype) + + # result = f(axis=1) + # comp = frame.apply(alternative, axis=1).reindex(result.index) + # assert_series_equal(result, comp) + + # bad axis + assertRaisesRegexp(ValueError, 'No axis named 2', f, axis=2) + # make sure works on mixed-type frame + getattr(self.mixed_frame, name)(axis=0) + getattr(self.mixed_frame, name)(axis=1) + + if has_numeric_only: + getattr(self.mixed_frame, name)(axis=0, numeric_only=True) + getattr(self.mixed_frame, name)(axis=1, numeric_only=True) + getattr(self.frame, name)(axis=0, numeric_only=False) + getattr(self.frame, name)(axis=1, numeric_only=False) + + # all NA case + if has_skipna: + all_na = self.frame * np.NaN + r0 = getattr(all_na, name)(axis=0) + r1 = getattr(all_na, name)(axis=1) + if not tm._incompat_bottleneck_version(name): + self.assertTrue(np.isnan(r0).all()) + self.assertTrue(np.isnan(r1).all()) + + def test_mode(self): + df = pd.DataFrame({"A": [12, 12, 11, 12, 19, 11], + "B": [10, 10, 10, np.nan, 3, 4], + "C": [8, 8, 8, 9, 9, 9], + "D": np.arange(6, dtype='int64'), + "E": [8, 8, 1, 1, 3, 3]}) + assert_frame_equal(df[["A"]].mode(), + pd.DataFrame({"A": [12]})) + expected = pd.Series([], dtype='int64', name='D').to_frame() + assert_frame_equal(df[["D"]].mode(), expected) + expected = pd.Series([1, 3, 8], dtype='int64', name='E').to_frame() + assert_frame_equal(df[["E"]].mode(), expected) + assert_frame_equal(df[["A", "B"]].mode(), + pd.DataFrame({"A": [12], "B": [10.]})) + assert_frame_equal(df.mode(), + pd.DataFrame({"A": [12, np.nan, np.nan], + "B": [10, np.nan, 
np.nan], + "C": [8, 9, np.nan], + "D": [np.nan, np.nan, np.nan], + "E": [1, 3, 8]})) + + # outputs in sorted order + df["C"] = list(reversed(df["C"])) + com.pprint_thing(df["C"]) + com.pprint_thing(df["C"].mode()) + a, b = (df[["A", "B", "C"]].mode(), + pd.DataFrame({"A": [12, np.nan], + "B": [10, np.nan], + "C": [8, 9]})) + com.pprint_thing(a) + com.pprint_thing(b) + assert_frame_equal(a, b) + # should work with heterogeneous types + df = pd.DataFrame({"A": np.arange(6, dtype='int64'), + "B": pd.date_range('2011', periods=6), + "C": list('abcdef')}) + exp = pd.DataFrame({"A": pd.Series([], dtype=df["A"].dtype), + "B": pd.Series([], dtype=df["B"].dtype), + "C": pd.Series([], dtype=df["C"].dtype)}) + assert_frame_equal(df.mode(), exp) + + # and also when not empty + df.loc[1, "A"] = 0 + df.loc[4, "B"] = df.loc[3, "B"] + df.loc[5, "C"] = 'e' + exp = pd.DataFrame({"A": pd.Series([0], dtype=df["A"].dtype), + "B": pd.Series([df.loc[3, "B"]], + dtype=df["B"].dtype), + "C": pd.Series(['e'], dtype=df["C"].dtype)}) + + assert_frame_equal(df.mode(), exp) + + def test_operators_timedelta64(self): + from datetime import timedelta + df = DataFrame(dict(A=date_range('2012-1-1', periods=3, freq='D'), + B=date_range('2012-1-2', periods=3, freq='D'), + C=Timestamp('20120101') - + timedelta(minutes=5, seconds=5))) + + diffs = DataFrame(dict(A=df['A'] - df['C'], + B=df['A'] - df['B'])) + + # min + result = diffs.min() + self.assertEqual(result[0], diffs.ix[0, 'A']) + self.assertEqual(result[1], diffs.ix[0, 'B']) + + result = diffs.min(axis=1) + self.assertTrue((result == diffs.ix[0, 'B']).all()) + + # max + result = diffs.max() + self.assertEqual(result[0], diffs.ix[2, 'A']) + self.assertEqual(result[1], diffs.ix[2, 'B']) + + result = diffs.max(axis=1) + self.assertTrue((result == diffs['A']).all()) + + # abs + result = diffs.abs() + result2 = abs(diffs) + expected = DataFrame(dict(A=df['A'] - df['C'], + B=df['B'] - df['A'])) + assert_frame_equal(result, expected) + 
assert_frame_equal(result2, expected) + + # mixed frame + mixed = diffs.copy() + mixed['C'] = 'foo' + mixed['D'] = 1 + mixed['E'] = 1. + mixed['F'] = Timestamp('20130101') + + # results in an object array + from pandas.tseries.timedeltas import ( + _coerce_scalar_to_timedelta_type as _coerce) + + result = mixed.min() + expected = Series([_coerce(timedelta(seconds=5 * 60 + 5)), + _coerce(timedelta(days=-1)), + 'foo', 1, 1.0, + Timestamp('20130101')], + index=mixed.columns) + assert_series_equal(result, expected) + + # excludes numeric + result = mixed.min(axis=1) + expected = Series([1, 1, 1.], index=[0, 1, 2]) + assert_series_equal(result, expected) + + # works when only those columns are selected + result = mixed[['A', 'B']].min(1) + expected = Series([timedelta(days=-1)] * 3) + assert_series_equal(result, expected) + + result = mixed[['A', 'B']].min() + expected = Series([timedelta(seconds=5 * 60 + 5), + timedelta(days=-1)], index=['A', 'B']) + assert_series_equal(result, expected) + + # GH 3106 + df = DataFrame({'time': date_range('20130102', periods=5), + 'time2': date_range('20130105', periods=5)}) + df['off1'] = df['time2'] - df['time'] + self.assertEqual(df['off1'].dtype, 'timedelta64[ns]') + + df['off2'] = df['time'] - df['time2'] + df._consolidate_inplace() + self.assertTrue(df['off1'].dtype == 'timedelta64[ns]') + self.assertTrue(df['off2'].dtype == 'timedelta64[ns]') + + def test_sum_corner(self): + axis0 = self.empty.sum(0) + axis1 = self.empty.sum(1) + tm.assertIsInstance(axis0, Series) + tm.assertIsInstance(axis1, Series) + self.assertEqual(len(axis0), 0) + self.assertEqual(len(axis1), 0) + + def test_sum_object(self): + values = self.frame.values.astype(int) + frame = DataFrame(values, index=self.frame.index, + columns=self.frame.columns) + deltas = frame * timedelta(1) + deltas.sum() + + def test_sum_bool(self): + # ensure this works, bug report + bools = np.isnan(self.frame) + bools.sum(1) + bools.sum(0) + + def test_mean_corner(self): + # unit 
test when have object data + the_mean = self.mixed_frame.mean(axis=0) + the_sum = self.mixed_frame.sum(axis=0, numeric_only=True) + self.assertTrue(the_sum.index.equals(the_mean.index)) + self.assertTrue(len(the_mean.index) < len(self.mixed_frame.columns)) + + # xs sum mixed type, just want to know it works... + the_mean = self.mixed_frame.mean(axis=1) + the_sum = self.mixed_frame.sum(axis=1, numeric_only=True) + self.assertTrue(the_sum.index.equals(the_mean.index)) + + # take mean of boolean column + self.frame['bool'] = self.frame['A'] > 0 + means = self.frame.mean(0) + self.assertEqual(means['bool'], self.frame['bool'].values.mean()) + + def test_stats_mixed_type(self): + # don't blow up + self.mixed_frame.std(1) + self.mixed_frame.var(1) + self.mixed_frame.mean(1) + self.mixed_frame.skew(1) + + def test_median_corner(self): + def wrapper(x): + if isnull(x).any(): + return np.nan + return np.median(x) + + self._check_stat_op('median', wrapper, frame=self.intframe, + check_dtype=False, check_dates=True) + + # Miscellanea + + def test_count_objects(self): + dm = DataFrame(self.mixed_frame._series) + df = DataFrame(self.mixed_frame._series) + + assert_series_equal(dm.count(), df.count()) + assert_series_equal(dm.count(1), df.count(1)) + + def test_cumsum_corner(self): + dm = DataFrame(np.arange(20).reshape(4, 5), + index=lrange(4), columns=lrange(5)) + # ?(wesm) + result = dm.cumsum() # noqa + + def test_sum_bools(self): + df = DataFrame(index=lrange(1), columns=lrange(10)) + bools = isnull(df) + self.assertEqual(bools.sum(axis=1)[0], 10) + + # Index of max / min + + def test_idxmin(self): + frame = self.frame + frame.ix[5:10] = np.nan + frame.ix[15:20, -2:] = np.nan + for skipna in [True, False]: + for axis in [0, 1]: + for df in [frame, self.intframe]: + result = df.idxmin(axis=axis, skipna=skipna) + expected = df.apply( + Series.idxmin, axis=axis, skipna=skipna) + assert_series_equal(result, expected) + + self.assertRaises(ValueError, frame.idxmin, axis=2) + + 
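The `test_idxmin`/`test_idxmax` pair above checks `DataFrame.idxmin`/`idxmax` against a column-by-column `Series.idxmin`/`idxmax` apply. As a quick standalone sketch of the semantics being verified (using a small hypothetical frame, not the test fixtures):

```python
import numpy as np
import pandas as pd

# A small frame with some missing values
df = pd.DataFrame({"A": [1.0, np.nan, 3.0], "B": [9.0, 2.0, np.nan]})

# idxmin returns the *row label* of each column's minimum;
# NaNs are skipped by default (skipna=True)
col_mins = df.idxmin()          # A -> 0, B -> 1

# With axis=1 it returns the *column label* of each row's extremum
row_maxs = df.idxmax(axis=1)    # 0 -> 'B', 1 -> 'B', 2 -> 'A'
```

An out-of-range axis (e.g. `axis=2`) raises `ValueError`, which is the error path the tests above assert with `self.assertRaises`.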
def test_idxmax(self): + frame = self.frame + frame.ix[5:10] = np.nan + frame.ix[15:20, -2:] = np.nan + for skipna in [True, False]: + for axis in [0, 1]: + for df in [frame, self.intframe]: + result = df.idxmax(axis=axis, skipna=skipna) + expected = df.apply( + Series.idxmax, axis=axis, skipna=skipna) + assert_series_equal(result, expected) + + self.assertRaises(ValueError, frame.idxmax, axis=2) + + # ---------------------------------------------------------------------- + # Logical reductions + + def test_any_all(self): + self._check_bool_op('any', np.any, has_skipna=True, has_bool_only=True) + self._check_bool_op('all', np.all, has_skipna=True, has_bool_only=True) + + df = DataFrame(randn(10, 4)) > 0 + df.any(1) + df.all(1) + df.any(1, bool_only=True) + df.all(1, bool_only=True) + + # skip pathological failure cases + # class CantNonzero(object): + + # def __nonzero__(self): + # raise ValueError + + # df[4] = CantNonzero() + + # it works! + # df.any(1) + # df.all(1) + # df.any(1, bool_only=True) + # df.all(1, bool_only=True) + + # df[4][4] = np.nan + # df.any(1) + # df.all(1) + # df.any(1, bool_only=True) + # df.all(1, bool_only=True) + + def _check_bool_op(self, name, alternative, frame=None, has_skipna=True, + has_bool_only=False): + if frame is None: + frame = self.frame > 0 + # set some NAs + frame = DataFrame(frame.values.astype(object), frame.index, + frame.columns) + frame.ix[5:10] = np.nan + frame.ix[15:20, -2:] = np.nan + + f = getattr(frame, name) + + if has_skipna: + def skipna_wrapper(x): + nona = x.dropna().values + return alternative(nona) + + def wrapper(x): + return alternative(x.values) + + result0 = f(axis=0, skipna=False) + result1 = f(axis=1, skipna=False) + assert_series_equal(result0, frame.apply(wrapper)) + assert_series_equal(result1, frame.apply(wrapper, axis=1), + check_dtype=False) # HACK: win32 + else: + skipna_wrapper = alternative + wrapper = alternative + + result0 = f(axis=0) + result1 = f(axis=1) + assert_series_equal(result0, 
frame.apply(skipna_wrapper)) + assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1), + check_dtype=False) + + # result = f(axis=1) + # comp = frame.apply(alternative, axis=1).reindex(result.index) + # assert_series_equal(result, comp) + + # bad axis + self.assertRaises(ValueError, f, axis=2) + + # make sure works on mixed-type frame + mixed = self.mixed_frame + mixed['_bool_'] = np.random.randn(len(mixed)) > 0 + getattr(mixed, name)(axis=0) + getattr(mixed, name)(axis=1) + + class NonzeroFail: + + def __nonzero__(self): + raise ValueError + + mixed['_nonzero_fail_'] = NonzeroFail() + + if has_bool_only: + getattr(mixed, name)(axis=0, bool_only=True) + getattr(mixed, name)(axis=1, bool_only=True) + getattr(frame, name)(axis=0, bool_only=False) + getattr(frame, name)(axis=1, bool_only=False) + + # all NA case + if has_skipna: + all_na = frame * np.NaN + r0 = getattr(all_na, name)(axis=0) + r1 = getattr(all_na, name)(axis=1) + if name == 'any': + self.assertFalse(r0.any()) + self.assertFalse(r1.any()) + else: + self.assertTrue(r0.all()) + self.assertTrue(r1.all()) + + # ---------------------------------------------------------------------- + # Top / bottom + + def test_nlargest(self): + # GH10393 + from string import ascii_lowercase + df = pd.DataFrame({'a': np.random.permutation(10), + 'b': list(ascii_lowercase[:10])}) + result = df.nlargest(5, 'a') + expected = df.sort_values('a', ascending=False).head(5) + assert_frame_equal(result, expected) + + def test_nlargest_multiple_columns(self): + from string import ascii_lowercase + df = pd.DataFrame({'a': np.random.permutation(10), + 'b': list(ascii_lowercase[:10]), + 'c': np.random.permutation(10).astype('float64')}) + result = df.nlargest(5, ['a', 'b']) + expected = df.sort_values(['a', 'b'], ascending=False).head(5) + assert_frame_equal(result, expected) + + def test_nsmallest(self): + from string import ascii_lowercase + df = pd.DataFrame({'a': np.random.permutation(10), + 'b': 
list(ascii_lowercase[:10])}) + result = df.nsmallest(5, 'a') + expected = df.sort_values('a').head(5) + assert_frame_equal(result, expected) + + def test_nsmallest_multiple_columns(self): + from string import ascii_lowercase + df = pd.DataFrame({'a': np.random.permutation(10), + 'b': list(ascii_lowercase[:10]), + 'c': np.random.permutation(10).astype('float64')}) + result = df.nsmallest(5, ['a', 'c']) + expected = df.sort_values(['a', 'c']).head(5) + assert_frame_equal(result, expected) + + # ---------------------------------------------------------------------- + # Isin + + def test_isin(self): + # GH #4211 + df = DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'], + 'ids2': ['a', 'n', 'c', 'n']}, + index=['foo', 'bar', 'baz', 'qux']) + other = ['a', 'b', 'c'] + + result = df.isin(other) + expected = DataFrame([df.loc[s].isin(other) for s in df.index]) + assert_frame_equal(result, expected) + + def test_isin_empty(self): + df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']}) + result = df.isin([]) + expected = pd.DataFrame(False, df.index, df.columns) + assert_frame_equal(result, expected) + + def test_isin_dict(self): + df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']}) + d = {'A': ['a']} + + expected = DataFrame(False, df.index, df.columns) + expected.loc[0, 'A'] = True + + result = df.isin(d) + assert_frame_equal(result, expected) + + # non unique columns + df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']}) + df.columns = ['A', 'A'] + expected = DataFrame(False, df.index, df.columns) + expected.loc[0, 'A'] = True + result = df.isin(d) + assert_frame_equal(result, expected) + + def test_isin_with_string_scalar(self): + # GH4763 + df = DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'], + 'ids2': ['a', 'n', 'c', 'n']}, + index=['foo', 'bar', 'baz', 'qux']) + with tm.assertRaises(TypeError): + df.isin('a') + + with tm.assertRaises(TypeError): + df.isin('aaa') + + def test_isin_df(self): + df1 = DataFrame({'A': [1, 2, 
3, 4], 'B': [2, np.nan, 4, 4]}) + df2 = DataFrame({'A': [0, 2, 12, 4], 'B': [2, np.nan, 4, 5]}) + expected = DataFrame(False, df1.index, df1.columns) + result = df1.isin(df2) + expected['A'].loc[[1, 3]] = True + expected['B'].loc[[0, 2]] = True + assert_frame_equal(result, expected) + + # partial overlapping columns + df2.columns = ['A', 'C'] + result = df1.isin(df2) + expected['B'] = False + assert_frame_equal(result, expected) + + def test_isin_df_dupe_values(self): + df1 = DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}) + # just cols duped + df2 = DataFrame([[0, 2], [12, 4], [2, np.nan], [4, 5]], + columns=['B', 'B']) + with tm.assertRaises(ValueError): + df1.isin(df2) + + # just index duped + df2 = DataFrame([[0, 2], [12, 4], [2, np.nan], [4, 5]], + columns=['A', 'B'], index=[0, 0, 1, 1]) + with tm.assertRaises(ValueError): + df1.isin(df2) + + # cols and index: + df2.columns = ['B', 'B'] + with tm.assertRaises(ValueError): + df1.isin(df2) + + def test_isin_dupe_self(self): + other = DataFrame({'A': [1, 0, 1, 0], 'B': [1, 1, 0, 0]}) + df = DataFrame([[1, 1], [1, 0], [0, 0]], columns=['A', 'A']) + result = df.isin(other) + expected = DataFrame(False, index=df.index, columns=df.columns) + expected.loc[0] = True + expected.iloc[1, 1] = True + assert_frame_equal(result, expected) + + def test_isin_against_series(self): + df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}, + index=['a', 'b', 'c', 'd']) + s = pd.Series([1, 3, 11, 4], index=['a', 'b', 'c', 'd']) + expected = DataFrame(False, index=df.index, columns=df.columns) + expected['A'].loc['a'] = True + expected.loc['d'] = True + result = df.isin(s) + assert_frame_equal(result, expected) + + def test_isin_multiIndex(self): + idx = MultiIndex.from_tuples([(0, 'a', 'foo'), (0, 'a', 'bar'), + (0, 'b', 'bar'), (0, 'b', 'baz'), + (2, 'a', 'foo'), (2, 'a', 'bar'), + (2, 'c', 'bar'), (2, 'c', 'baz'), + (1, 'b', 'foo'), (1, 'b', 'bar'), + (1, 'c', 'bar'), (1, 'c', 'baz')]) + df1 = DataFrame({'A': 
np.ones(12), + 'B': np.zeros(12)}, index=idx) + df2 = DataFrame({'A': [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1], + 'B': [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1]}) + # against regular index + expected = DataFrame(False, index=df1.index, columns=df1.columns) + result = df1.isin(df2) + assert_frame_equal(result, expected) + + df2.index = idx + expected = df2.values.astype(np.bool) + expected[:, 1] = ~expected[:, 1] + expected = DataFrame(expected, columns=['A', 'B'], index=idx) + + result = df1.isin(df2) + assert_frame_equal(result, expected) + + # ---------------------------------------------------------------------- + # Row deduplication + + def test_drop_duplicates(self): + df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates('AAA') + expected = df[:2] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('AAA', keep='last') + expected = df.ix[[6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('AAA', keep=False) + expected = df.ix[[]] + assert_frame_equal(result, expected) + self.assertEqual(len(result), 0) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates('AAA', take_last=True) + expected = df.ix[[6, 7]] + assert_frame_equal(result, expected) + + # multi column + expected = df.ix[[0, 1, 2, 3]] + result = df.drop_duplicates(np.array(['AAA', 'B'])) + assert_frame_equal(result, expected) + result = df.drop_duplicates(['AAA', 'B']) + assert_frame_equal(result, expected) + + result = df.drop_duplicates(('AAA', 'B'), keep='last') + expected = df.ix[[0, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(('AAA', 'B'), keep=False) + expected = df.ix[[0]] + assert_frame_equal(result, expected) + + # deprecate take_last + with 
tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(('AAA', 'B'), take_last=True) + expected = df.ix[[0, 5, 6, 7]] + assert_frame_equal(result, expected) + + # consider everything + df2 = df.ix[:, ['AAA', 'B', 'C']] + + result = df2.drop_duplicates() + # in this case only + expected = df2.drop_duplicates(['AAA', 'B']) + assert_frame_equal(result, expected) + + result = df2.drop_duplicates(keep='last') + expected = df2.drop_duplicates(['AAA', 'B'], keep='last') + assert_frame_equal(result, expected) + + result = df2.drop_duplicates(keep=False) + expected = df2.drop_duplicates(['AAA', 'B'], keep=False) + assert_frame_equal(result, expected) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df2.drop_duplicates(take_last=True) + with tm.assert_produces_warning(FutureWarning): + expected = df2.drop_duplicates(['AAA', 'B'], take_last=True) + assert_frame_equal(result, expected) + + # integers + result = df.drop_duplicates('C') + expected = df.iloc[[0, 2]] + assert_frame_equal(result, expected) + result = df.drop_duplicates('C', keep='last') + expected = df.iloc[[-2, -1]] + assert_frame_equal(result, expected) + + df['E'] = df['C'].astype('int8') + result = df.drop_duplicates('E') + expected = df.iloc[[0, 2]] + assert_frame_equal(result, expected) + result = df.drop_duplicates('E', keep='last') + expected = df.iloc[[-2, -1]] + assert_frame_equal(result, expected) + + # GH 11376 + df = pd.DataFrame({'x': [7, 6, 3, 3, 4, 8, 0], + 'y': [0, 6, 5, 5, 9, 1, 2]}) + expected = df.loc[df.index != 3] + assert_frame_equal(df.drop_duplicates(), expected) + + df = pd.DataFrame([[1, 0], [0, 2]]) + assert_frame_equal(df.drop_duplicates(), df) + + df = pd.DataFrame([[-2, 0], [0, -4]]) + assert_frame_equal(df.drop_duplicates(), df) + + x = np.iinfo(np.int64).max / 3 * 2 + df = pd.DataFrame([[-x, x], [0, x + 4]]) + assert_frame_equal(df.drop_duplicates(), df) + + df = pd.DataFrame([[-x, x], [x, x + 4]]) + 
assert_frame_equal(df.drop_duplicates(), df) + + # GH 11864 + df = pd.DataFrame([i] * 9 for i in range(16)) + df = df.append([[1] + [0] * 8], ignore_index=True) + + for keep in ['first', 'last', False]: + assert_equal(df.duplicated(keep=keep).sum(), 0) + + def test_drop_duplicates_for_take_all(self): + df = DataFrame({'AAA': ['foo', 'bar', 'baz', 'bar', + 'foo', 'bar', 'qux', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates('AAA') + expected = df.iloc[[0, 1, 2, 6]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('AAA', keep='last') + expected = df.iloc[[2, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('AAA', keep=False) + expected = df.iloc[[2, 6]] + assert_frame_equal(result, expected) + + # multiple columns + result = df.drop_duplicates(['AAA', 'B']) + expected = df.iloc[[0, 1, 2, 3, 4, 6]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['AAA', 'B'], keep='last') + expected = df.iloc[[0, 1, 2, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['AAA', 'B'], keep=False) + expected = df.iloc[[0, 1, 2, 6]] + assert_frame_equal(result, expected) + + def test_drop_duplicates_deprecated_warning(self): + df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + expected = df[:2] + + # Raises warning + with tm.assert_produces_warning(False): + result = df.drop_duplicates(subset='AAA') + assert_frame_equal(result, expected) + + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(cols='AAA') + assert_frame_equal(result, expected) + + # Does not allow both subset and cols + self.assertRaises(TypeError, df.drop_duplicates, + kwargs={'cols': 'AAA', 'subset': 'B'}) + + # 
Does not allow unknown kwargs + self.assertRaises(TypeError, df.drop_duplicates, + kwargs={'subset': 'AAA', 'bad_arg': True}) + + # deprecate take_last + # Raises warning + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(take_last=False, subset='AAA') + assert_frame_equal(result, expected) + + self.assertRaises(ValueError, df.drop_duplicates, keep='invalid_name') + + def test_drop_duplicates_tuple(self): + df = DataFrame({('AA', 'AB'): ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates(('AA', 'AB')) + expected = df[:2] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(('AA', 'AB'), keep='last') + expected = df.ix[[6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(('AA', 'AB'), keep=False) + expected = df.ix[[]] # empty df + self.assertEqual(len(result), 0) + assert_frame_equal(result, expected) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(('AA', 'AB'), take_last=True) + expected = df.ix[[6, 7]] + assert_frame_equal(result, expected) + + # multi column + expected = df.ix[[0, 1, 2, 3]] + result = df.drop_duplicates((('AA', 'AB'), 'B')) + assert_frame_equal(result, expected) + + def test_drop_duplicates_NA(self): + # none + df = DataFrame({'A': [None, None, 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates('A') + expected = df.ix[[0, 2, 3]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('A', keep='last') + expected = df.ix[[1, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('A', keep=False) + expected = df.ix[[]] # empty 
df + assert_frame_equal(result, expected) + self.assertEqual(len(result), 0) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates('A', take_last=True) + expected = df.ix[[1, 6, 7]] + assert_frame_equal(result, expected) + + # multi column + result = df.drop_duplicates(['A', 'B']) + expected = df.ix[[0, 2, 3, 6]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['A', 'B'], keep='last') + expected = df.ix[[1, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['A', 'B'], keep=False) + expected = df.ix[[6]] + assert_frame_equal(result, expected) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(['A', 'B'], take_last=True) + expected = df.ix[[1, 5, 6, 7]] + assert_frame_equal(result, expected) + + # nan + df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates('C') + expected = df[:2] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('C', keep='last') + expected = df.ix[[3, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('C', keep=False) + expected = df.ix[[]] # empty df + assert_frame_equal(result, expected) + self.assertEqual(len(result), 0) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates('C', take_last=True) + expected = df.ix[[3, 7]] + assert_frame_equal(result, expected) + + # multi column + result = df.drop_duplicates(['C', 'B']) + expected = df.ix[[0, 1, 2, 4]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['C', 'B'], keep='last') + expected = df.ix[[1, 3, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['C', 'B'], keep=False) + 
expected = df.ix[[1]] + assert_frame_equal(result, expected) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(['C', 'B'], take_last=True) + expected = df.ix[[1, 3, 6, 7]] + assert_frame_equal(result, expected) + + def test_drop_duplicates_NA_for_take_all(self): + # none + df = DataFrame({'A': [None, None, 'foo', 'bar', + 'foo', 'baz', 'bar', 'qux'], + 'C': [1.0, np.nan, np.nan, np.nan, 1., 2., 3, 1.]}) + + # single column + result = df.drop_duplicates('A') + expected = df.iloc[[0, 2, 3, 5, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('A', keep='last') + expected = df.iloc[[1, 4, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('A', keep=False) + expected = df.iloc[[5, 7]] + assert_frame_equal(result, expected) + + # nan + + # single column + result = df.drop_duplicates('C') + expected = df.iloc[[0, 1, 5, 6]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('C', keep='last') + expected = df.iloc[[3, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('C', keep=False) + expected = df.iloc[[5, 6]] + assert_frame_equal(result, expected) + + def test_drop_duplicates_inplace(self): + orig = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + + # single column + df = orig.copy() + df.drop_duplicates('A', inplace=True) + expected = orig[:2] + result = df + assert_frame_equal(result, expected) + + df = orig.copy() + df.drop_duplicates('A', keep='last', inplace=True) + expected = orig.ix[[6, 7]] + result = df + assert_frame_equal(result, expected) + + df = orig.copy() + df.drop_duplicates('A', keep=False, inplace=True) + expected = orig.ix[[]] + result = df + assert_frame_equal(result, expected) + self.assertEqual(len(df), 0) + + # deprecate take_last + df = 
orig.copy() + with tm.assert_produces_warning(FutureWarning): + df.drop_duplicates('A', take_last=True, inplace=True) + expected = orig.ix[[6, 7]] + result = df + assert_frame_equal(result, expected) + + # multi column + df = orig.copy() + df.drop_duplicates(['A', 'B'], inplace=True) + expected = orig.ix[[0, 1, 2, 3]] + result = df + assert_frame_equal(result, expected) + + df = orig.copy() + df.drop_duplicates(['A', 'B'], keep='last', inplace=True) + expected = orig.ix[[0, 5, 6, 7]] + result = df + assert_frame_equal(result, expected) + + df = orig.copy() + df.drop_duplicates(['A', 'B'], keep=False, inplace=True) + expected = orig.ix[[0]] + result = df + assert_frame_equal(result, expected) + + # deprecate take_last + df = orig.copy() + with tm.assert_produces_warning(FutureWarning): + df.drop_duplicates(['A', 'B'], take_last=True, inplace=True) + expected = orig.ix[[0, 5, 6, 7]] + result = df + assert_frame_equal(result, expected) + + # consider everything + orig2 = orig.ix[:, ['A', 'B', 'C']].copy() + + df2 = orig2.copy() + df2.drop_duplicates(inplace=True) + # in this case only + expected = orig2.drop_duplicates(['A', 'B']) + result = df2 + assert_frame_equal(result, expected) + + df2 = orig2.copy() + df2.drop_duplicates(keep='last', inplace=True) + expected = orig2.drop_duplicates(['A', 'B'], keep='last') + result = df2 + assert_frame_equal(result, expected) + + df2 = orig2.copy() + df2.drop_duplicates(keep=False, inplace=True) + expected = orig2.drop_duplicates(['A', 'B'], keep=False) + result = df2 + assert_frame_equal(result, expected) + + # deprecate take_last + df2 = orig2.copy() + with tm.assert_produces_warning(FutureWarning): + df2.drop_duplicates(take_last=True, inplace=True) + with tm.assert_produces_warning(FutureWarning): + expected = orig2.drop_duplicates(['A', 'B'], take_last=True) + result = df2 + assert_frame_equal(result, expected) + + def test_duplicated_deprecated_warning(self): + df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar', + 'foo', 
'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + + # Raises warning + with tm.assert_produces_warning(False): + result = df.duplicated(subset='AAA') + + with tm.assert_produces_warning(FutureWarning): + result = df.duplicated(cols='AAA') # noqa + + # Does not allow both subset and cols + self.assertRaises(TypeError, df.duplicated, + kwargs={'cols': 'AAA', 'subset': 'B'}) + + # Does not allow unknown kwargs + self.assertRaises(TypeError, df.duplicated, + kwargs={'subset': 'AAA', 'bad_arg': True}) + + # Rounding + + def test_round(self): + # GH 2665 + + # Test that rounding an empty DataFrame does nothing + df = DataFrame() + assert_frame_equal(df, df.round()) + + # Here's the test frame we'll be working with + df = DataFrame( + {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]}) + + # Default round to integer (i.e. decimals=0) + expected_rounded = DataFrame( + {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]}) + assert_frame_equal(df.round(), expected_rounded) + + # Round with an integer + decimals = 2 + expected_rounded = DataFrame( + {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]}) + assert_frame_equal(df.round(decimals), expected_rounded) + + # This should also work with np.round (since np.round dispatches to + # df.round) + assert_frame_equal(np.round(df, decimals), expected_rounded) + + # Round with a list + round_list = [1, 2] + with self.assertRaises(TypeError): + df.round(round_list) + + # Round with a dictionary + expected_rounded = DataFrame( + {'col1': [1.1, 2.1, 3.1], 'col2': [1.23, 2.23, 3.23]}) + round_dict = {'col1': 1, 'col2': 2} + assert_frame_equal(df.round(round_dict), expected_rounded) + + # Incomplete dict + expected_partially_rounded = DataFrame( + {'col1': [1.123, 2.123, 3.123], 'col2': [1.2, 2.2, 3.2]}) + partial_round_dict = {'col2': 1} + assert_frame_equal( + df.round(partial_round_dict), expected_partially_rounded) + + # Dict with 
unknown elements + wrong_round_dict = {'col3': 2, 'col2': 1} + assert_frame_equal( + df.round(wrong_round_dict), expected_partially_rounded) + + # float input to `decimals` + non_int_round_dict = {'col1': 1, 'col2': 0.5} + with self.assertRaises(TypeError): + df.round(non_int_round_dict) + + # String input + non_int_round_dict = {'col1': 1, 'col2': 'foo'} + with self.assertRaises(TypeError): + df.round(non_int_round_dict) + + non_int_round_Series = Series(non_int_round_dict) + with self.assertRaises(TypeError): + df.round(non_int_round_Series) + + # List input + non_int_round_dict = {'col1': 1, 'col2': [1, 2]} + with self.assertRaises(TypeError): + df.round(non_int_round_dict) + + non_int_round_Series = Series(non_int_round_dict) + with self.assertRaises(TypeError): + df.round(non_int_round_Series) + + # Non integer Series inputs + non_int_round_Series = Series(non_int_round_dict) + with self.assertRaises(TypeError): + df.round(non_int_round_Series) + + non_int_round_Series = Series(non_int_round_dict) + with self.assertRaises(TypeError): + df.round(non_int_round_Series) + + # Negative numbers + negative_round_dict = {'col1': -1, 'col2': -2} + big_df = df * 100 + expected_neg_rounded = DataFrame( + {'col1': [110., 210, 310], 'col2': [100., 200, 300]}) + assert_frame_equal( + big_df.round(negative_round_dict), expected_neg_rounded) + + # nan in Series round + nan_round_Series = Series({'col1': nan, 'col2': 1}) + + # TODO(wesm): unused? 
+ expected_nan_round = DataFrame({ # noqa + 'col1': [1.123, 2.123, 3.123], + 'col2': [1.2, 2.2, 3.2]}) + + if sys.version < LooseVersion('2.7'): + # Rounding with decimal is a ValueError in Python < 2.7 + with self.assertRaises(ValueError): + df.round(nan_round_Series) + else: + with self.assertRaises(TypeError): + df.round(nan_round_Series) + + # Make sure this doesn't break existing Series.round + assert_series_equal(df['col1'].round(1), expected_rounded['col1']) + + # named columns + # GH 11986 + decimals = 2 + expected_rounded = DataFrame( + {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]}) + df.columns.name = "cols" + expected_rounded.columns.name = "cols" + assert_frame_equal(df.round(decimals), expected_rounded) + + # interaction of named columns & series + assert_series_equal(df['col1'].round(decimals), + expected_rounded['col1']) + assert_series_equal(df.round(decimals)['col1'], + expected_rounded['col1']) + + def test_round_mixed_type(self): + # GH11885 + df = DataFrame({'col1': [1.1, 2.2, 3.3, 4.4], + 'col2': ['1', 'a', 'c', 'f'], + 'col3': date_range('20111111', periods=4)}) + round_0 = DataFrame({'col1': [1., 2., 3., 4.], + 'col2': ['1', 'a', 'c', 'f'], + 'col3': date_range('20111111', periods=4)}) + assert_frame_equal(df.round(), round_0) + assert_frame_equal(df.round(1), df) + assert_frame_equal(df.round({'col1': 1}), df) + assert_frame_equal(df.round({'col1': 0}), round_0) + assert_frame_equal(df.round({'col1': 0, 'col2': 1}), round_0) + assert_frame_equal(df.round({'col3': 1}), df) + + def test_round_issue(self): + # GH11611 + + df = pd.DataFrame(np.random.random([3, 3]), columns=['A', 'B', 'C'], + index=['first', 'second', 'third']) + + dfs = pd.concat((df, df), axis=1) + rounded = dfs.round() + self.assertTrue(rounded.index.equals(dfs.index)) + + decimals = pd.Series([1, 0, 2], index=['A', 'B', 'A']) + self.assertRaises(ValueError, df.round, decimals) + + def test_built_in_round(self): + if not compat.PY3: + raise nose.SkipTest("built-in 
round cannot be overridden " + "prior to Python 3") + + # GH11763 + # Here's the test frame we'll be working with + df = DataFrame( + {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]}) + + # Default round to integer (i.e. decimals=0) + expected_rounded = DataFrame( + {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]}) + assert_frame_equal(round(df), expected_rounded) + + # Clip + + def test_clip(self): + median = self.frame.median().median() + + capped = self.frame.clip_upper(median) + self.assertFalse((capped.values > median).any()) + + floored = self.frame.clip_lower(median) + self.assertFalse((floored.values < median).any()) + + double = self.frame.clip(upper=median, lower=median) + self.assertFalse((double.values != median).any()) + + def test_dataframe_clip(self): + # GH #2747 + df = DataFrame(np.random.randn(1000, 2)) + + for lb, ub in [(-1, 1), (1, -1)]: + clipped_df = df.clip(lb, ub) + + lb, ub = min(lb, ub), max(ub, lb) + lb_mask = df.values <= lb + ub_mask = df.values >= ub + mask = ~lb_mask & ~ub_mask + self.assertTrue((clipped_df.values[lb_mask] == lb).all()) + self.assertTrue((clipped_df.values[ub_mask] == ub).all()) + self.assertTrue((clipped_df.values[mask] == + df.values[mask]).all()) + + def test_clip_against_series(self): + # GH #6966 + + df = DataFrame(np.random.randn(1000, 2)) + lb = Series(np.random.randn(1000)) + ub = lb + 1 + + clipped_df = df.clip(lb, ub, axis=0) + + for i in range(2): + lb_mask = df.iloc[:, i] <= lb + ub_mask = df.iloc[:, i] >= ub + mask = ~lb_mask & ~ub_mask + + result = clipped_df.loc[lb_mask, i] + assert_series_equal(result, lb[lb_mask], check_names=False) + self.assertEqual(result.name, i) + + result = clipped_df.loc[ub_mask, i] + assert_series_equal(result, ub[ub_mask], check_names=False) + self.assertEqual(result.name, i) + + assert_series_equal(clipped_df.loc[mask, i], df.loc[mask, i]) + + def test_clip_against_frame(self): + df = DataFrame(np.random.randn(1000, 2)) + lb = DataFrame(np.random.randn(1000, 2)) + 
+        ub = lb + 1
+
+        clipped_df = df.clip(lb, ub)
+
+        lb_mask = df <= lb
+        ub_mask = df >= ub
+        mask = ~lb_mask & ~ub_mask
+
+        assert_frame_equal(clipped_df[lb_mask], lb[lb_mask])
+        assert_frame_equal(clipped_df[ub_mask], ub[ub_mask])
+        assert_frame_equal(clipped_df[mask], df[mask])
+
+    # Matrix-like
+
+    def test_dot(self):
+        a = DataFrame(np.random.randn(3, 4), index=['a', 'b', 'c'],
+                      columns=['p', 'q', 'r', 's'])
+        b = DataFrame(np.random.randn(4, 2), index=['p', 'q', 'r', 's'],
+                      columns=['one', 'two'])
+
+        result = a.dot(b)
+        expected = DataFrame(np.dot(a.values, b.values),
+                             index=['a', 'b', 'c'],
+                             columns=['one', 'two'])
+        # Check alignment: b1 has a reversed index, so dotting it should
+        # align back to the same result
+        b1 = b.reindex(index=reversed(b.index))
+        result = a.dot(b1)
+        assert_frame_equal(result, expected)
+
+        # Check series argument
+        result = a.dot(b['one'])
+        assert_series_equal(result, expected['one'], check_names=False)
+        self.assertTrue(result.name is None)
+
+        result = a.dot(b1['one'])
+        assert_series_equal(result, expected['one'], check_names=False)
+        self.assertTrue(result.name is None)
+
+        # can pass correct-length arrays
+        row = a.ix[0].values
+
+        result = a.dot(row)
+        exp = a.dot(a.ix[0])
+        assert_series_equal(result, exp)
+
+        with assertRaisesRegexp(ValueError, 'Dot product shape mismatch'):
+            a.dot(row[:-1])
+
+        a = np.random.rand(1, 5)
+        b = np.random.rand(5, 1)
+        A = DataFrame(a)
+
+        # TODO(wesm): unused
+        B = DataFrame(b)  # noqa
+
+        # it works
+        result = A.dot(b)
+
+        # unaligned
+        df = DataFrame(randn(3, 4), index=[1, 2, 3], columns=lrange(4))
+        df2 = DataFrame(randn(5, 3), index=lrange(5), columns=[1, 2, 3])
+
+        assertRaisesRegexp(ValueError, 'aligned', df.dot, df2)
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
new file mode 100644
index 0000000000000..818e2fb89008d
--- /dev/null
+++ b/pandas/tests/frame/test_apply.py
@@ -0,0 +1,402 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+
+from datetime import datetime
+
+import numpy as np
+
+from pandas import (notnull, DataFrame, Series, MultiIndex, date_range,
+                    Timestamp, compat)
+import pandas as pd
+import pandas.core.common as com
+from pandas.util.testing import (assert_series_equal,
+                                 assert_frame_equal)
+import pandas.util.testing as tm
+from pandas.tests.frame.common import TestData
+
+
+class TestDataFrameApply(tm.TestCase, TestData):
+
+    _multiprocess_can_split_ = True
+
+    def test_apply(self):
+        # ufunc
+        applied = self.frame.apply(np.sqrt)
+        assert_series_equal(np.sqrt(self.frame['A']), applied['A'])
+
+        # aggregator
+        applied = self.frame.apply(np.mean)
+        self.assertEqual(applied['A'], np.mean(self.frame['A']))
+
+        d = self.frame.index[0]
+        applied = self.frame.apply(np.mean, axis=1)
+        self.assertEqual(applied[d], np.mean(self.frame.xs(d)))
+        self.assertIs(applied.index, self.frame.index)  # want this
+
+        # invalid axis
+        df = DataFrame(
+            [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c'])
+        self.assertRaises(ValueError, df.apply, lambda x: x, 2)
+
+        # GH9573
+        df = DataFrame({'c0': ['A', 'A', 'B', 'B'],
+                        'c1': ['C', 'C', 'D', 'D']})
+        df = df.apply(lambda ts: ts.astype('category'))
+        self.assertEqual(df.shape, (4, 2))
+        self.assertTrue(isinstance(df['c0'].dtype, com.CategoricalDtype))
+        self.assertTrue(isinstance(df['c1'].dtype, com.CategoricalDtype))
+
+    def test_apply_mixed_datetimelike(self):
+        # mixed datetimelike
+        # GH 7778
+        df = DataFrame({'A': date_range('20130101', periods=3),
+                        'B': pd.to_timedelta(np.arange(3), unit='s')})
+        result = df.apply(lambda x: x, axis=1)
+        assert_frame_equal(result, df)
+
+    def test_apply_empty(self):
+        # empty
+        applied = self.empty.apply(np.sqrt)
+        self.assertTrue(applied.empty)
+
+        applied = self.empty.apply(np.mean)
+        self.assertTrue(applied.empty)
+
+        no_rows = self.frame[:0]
+        result = no_rows.apply(lambda x: x.mean())
+        expected = Series(np.nan, index=self.frame.columns)
+        assert_series_equal(result, expected)
+
+        no_cols = self.frame.ix[:, []]
+        result = no_cols.apply(lambda x: x.mean(), axis=1)
+        expected = Series(np.nan, index=self.frame.index)
+        assert_series_equal(result, expected)
+
+        # 2476
+        xp = DataFrame(index=['a'])
+        rs = xp.apply(lambda x: x['a'], axis=1)
+        assert_frame_equal(xp, rs)
+
+        # reduce with an empty DataFrame
+        x = []
+        result = self.empty.apply(x.append, axis=1, reduce=False)
+        assert_frame_equal(result, self.empty)
+        result = self.empty.apply(x.append, axis=1, reduce=True)
+        assert_series_equal(result, Series(
+            [], index=pd.Index([], dtype=object)))
+
+        empty_with_cols = DataFrame(columns=['a', 'b', 'c'])
+        result = empty_with_cols.apply(x.append, axis=1, reduce=False)
+        assert_frame_equal(result, empty_with_cols)
+        result = empty_with_cols.apply(x.append, axis=1, reduce=True)
+        assert_series_equal(result, Series(
+            [], index=pd.Index([], dtype=object)))
+
+        # Ensure that x.append hasn't been called
+        self.assertEqual(x, [])
+
+    def test_apply_standard_nonunique(self):
+        df = DataFrame(
+            [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c'])
+        rs = df.apply(lambda s: s[0], axis=1)
+        xp = Series([1, 4, 7], ['a', 'a', 'c'])
+        assert_series_equal(rs, xp)
+
+        rs = df.T.apply(lambda s: s[0], axis=0)
+        assert_series_equal(rs, xp)
+
+    def test_apply_broadcast(self):
+        broadcasted = self.frame.apply(np.mean, broadcast=True)
+        agged = self.frame.apply(np.mean)
+
+        for col, ts in compat.iteritems(broadcasted):
+            self.assertTrue((ts == agged[col]).all())
+
+        broadcasted = self.frame.apply(np.mean, axis=1, broadcast=True)
+        agged = self.frame.apply(np.mean, axis=1)
+        for idx in broadcasted.index:
+            self.assertTrue((broadcasted.xs(idx) == agged[idx]).all())
+
+    def test_apply_raw(self):
+        result0 = self.frame.apply(np.mean, raw=True)
+        result1 = self.frame.apply(np.mean, axis=1, raw=True)
+
+        expected0 = self.frame.apply(lambda x: x.values.mean())
+        expected1 = self.frame.apply(lambda x: x.values.mean(), axis=1)
+
+        assert_series_equal(result0, expected0)
+        assert_series_equal(result1, expected1)
+
+        # no reduction
+        result = self.frame.apply(lambda x: x * 2, raw=True)
+        expected = self.frame * 2
+        assert_frame_equal(result, expected)
+
+    def test_apply_axis1(self):
+        d = self.frame.index[0]
+        tapplied = self.frame.apply(np.mean, axis=1)
+        self.assertEqual(tapplied[d], np.mean(self.frame.xs(d)))
+
+    def test_apply_ignore_failures(self):
+        result = self.mixed_frame._apply_standard(np.mean, 0,
+                                                  ignore_failures=True)
+        expected = self.mixed_frame._get_numeric_data().apply(np.mean)
+        assert_series_equal(result, expected)
+
+    def test_apply_mixed_dtype_corner(self):
+        df = DataFrame({'A': ['foo'],
+                        'B': [1.]})
+        result = df[:0].apply(np.mean, axis=1)
+        # the result here is actually kind of ambiguous, should it be a Series
+        # or a DataFrame?
+        expected = Series(np.nan, index=pd.Index([], dtype='int64'))
+        assert_series_equal(result, expected)
+
+        df = DataFrame({'A': ['foo'],
+                        'B': [1.]})
+        result = df.apply(lambda x: x['A'], axis=1)
+        expected = Series(['foo'], index=[0])
+        assert_series_equal(result, expected)
+
+        result = df.apply(lambda x: x['B'], axis=1)
+        expected = Series([1.], index=[0])
+        assert_series_equal(result, expected)
+
+    def test_apply_empty_infer_type(self):
+        no_cols = DataFrame(index=['a', 'b', 'c'])
+        no_index = DataFrame(columns=['a', 'b', 'c'])
+
+        def _check(df, f):
+            test_res = f(np.array([], dtype='f8'))
+            is_reduction = not isinstance(test_res, np.ndarray)
+
+            def _checkit(axis=0, raw=False):
+                res = df.apply(f, axis=axis, raw=raw)
+                if is_reduction:
+                    agg_axis = df._get_agg_axis(axis)
+                    tm.assertIsInstance(res, Series)
+                    self.assertIs(res.index, agg_axis)
+                else:
+                    tm.assertIsInstance(res, DataFrame)
+
+            _checkit()
+            _checkit(axis=1)
+            _checkit(raw=True)
+            _checkit(axis=0, raw=True)
+
+        _check(no_cols, lambda x: x)
+        _check(no_cols, lambda x: x.mean())
+        _check(no_index, lambda x: x)
+        _check(no_index, lambda x: x.mean())
+
+        result = no_cols.apply(lambda x: x.mean(), broadcast=True)
+        tm.assertIsInstance(result, DataFrame)
+
+    def test_apply_with_args_kwds(self):
+        def add_some(x, howmuch=0):
+            return x + howmuch
+
+        def agg_and_add(x, howmuch=0):
+            return x.mean() + howmuch
+
+        def subtract_and_divide(x, sub, divide=1):
+            return (x - sub) / divide
+
+        result = self.frame.apply(add_some, howmuch=2)
+        exp = self.frame.apply(lambda x: x + 2)
+        assert_frame_equal(result, exp)
+
+        result = self.frame.apply(agg_and_add, howmuch=2)
+        exp = self.frame.apply(lambda x: x.mean() + 2)
+        assert_series_equal(result, exp)
+
+        res = self.frame.apply(subtract_and_divide, args=(2,), divide=2)
+        exp = self.frame.apply(lambda x: (x - 2.) / 2.)
+        assert_frame_equal(res, exp)
+
+    def test_apply_yield_list(self):
+        result = self.frame.apply(list)
+        assert_frame_equal(result, self.frame)
+
+    def test_apply_reduce_Series(self):
+        self.frame.ix[::2, 'A'] = np.nan
+        expected = self.frame.mean(1)
+        result = self.frame.apply(np.mean, axis=1)
+        assert_series_equal(result, expected)
+
+    def test_apply_differently_indexed(self):
+        df = DataFrame(np.random.randn(20, 10))
+
+        result0 = df.apply(Series.describe, axis=0)
+        expected0 = DataFrame(dict((i, v.describe())
+                                   for i, v in compat.iteritems(df)),
+                              columns=df.columns)
+        assert_frame_equal(result0, expected0)
+
+        result1 = df.apply(Series.describe, axis=1)
+        expected1 = DataFrame(dict((i, v.describe())
+                                   for i, v in compat.iteritems(df.T)),
+                              columns=df.index).T
+        assert_frame_equal(result1, expected1)
+
+    def test_apply_modify_traceback(self):
+        data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+                                'bar', 'bar', 'bar', 'bar',
+                                'foo', 'foo', 'foo'],
+                          'B': ['one', 'one', 'one', 'two',
+                                'one', 'one', 'one', 'two',
+                                'two', 'two', 'one'],
+                          'C': ['dull', 'dull', 'shiny', 'dull',
+                                'dull', 'shiny', 'shiny', 'dull',
+                                'shiny', 'shiny', 'shiny'],
+                          'D': np.random.randn(11),
+                          'E': np.random.randn(11),
+                          'F': np.random.randn(11)})
+
+        data.loc[4, 'C'] = np.nan
+
+        def transform(row):
+            if row['C'].startswith('shin') and row['A'] == 'foo':
+                row['D'] = 7
+            return row
+
+        def transform2(row):
+            if (notnull(row['C']) and row['C'].startswith('shin')
+                    and row['A'] == 'foo'):
+                row['D'] = 7
+            return row
+
+        try:
+            transformed = data.apply(transform, axis=1)  # noqa
+        except AttributeError as e:
+            self.assertEqual(len(e.args), 2)
+            self.assertEqual(e.args[1], 'occurred at index 4')
+            self.assertEqual(
+                e.args[0], "'float' object has no attribute 'startswith'")
+
+    def test_apply_bug(self):
+
+        # GH 6125
+        positions = pd.DataFrame([[1, 'ABC0', 50], [1, 'YUM0', 20],
+                                  [1, 'DEF0', 20], [2, 'ABC1', 50],
+                                  [2, 'YUM1', 20], [2, 'DEF1', 20]],
+                                 columns=['a', 'market', 'position'])
+
+        def f(r):
+            return r['market']
+        expected = positions.apply(f, axis=1)
+
+        positions = DataFrame([[datetime(2013, 1, 1), 'ABC0', 50],
+                               [datetime(2013, 1, 2), 'YUM0', 20],
+                               [datetime(2013, 1, 3), 'DEF0', 20],
+                               [datetime(2013, 1, 4), 'ABC1', 50],
+                               [datetime(2013, 1, 5), 'YUM1', 20],
+                               [datetime(2013, 1, 6), 'DEF1', 20]],
+                              columns=['a', 'market', 'position'])
+        result = positions.apply(f, axis=1)
+        assert_series_equal(result, expected)
+
+    def test_apply_convert_objects(self):
+        data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+                                'bar', 'bar', 'bar', 'bar',
+                                'foo', 'foo', 'foo'],
+                          'B': ['one', 'one', 'one', 'two',
+                                'one', 'one', 'one', 'two',
+                                'two', 'two', 'one'],
+                          'C': ['dull', 'dull', 'shiny', 'dull',
+                                'dull', 'shiny', 'shiny', 'dull',
+                                'shiny', 'shiny', 'shiny'],
+                          'D': np.random.randn(11),
+                          'E': np.random.randn(11),
+                          'F': np.random.randn(11)})
+
+        result = data.apply(lambda x: x, axis=1)
+        assert_frame_equal(result._convert(datetime=True), data)
+
+    def test_apply_attach_name(self):
+        result = self.frame.apply(lambda x: x.name)
+        expected = Series(self.frame.columns, index=self.frame.columns)
+        assert_series_equal(result, expected)
+
+        result = self.frame.apply(lambda x: x.name, axis=1)
+        expected = Series(self.frame.index, index=self.frame.index)
+        assert_series_equal(result, expected)
+
+        # non-reductions
+        result = self.frame.apply(lambda x: np.repeat(x.name, len(x)))
+        expected = DataFrame(np.tile(self.frame.columns,
+                                     (len(self.frame.index), 1)),
+                             index=self.frame.index,
+                             columns=self.frame.columns)
+        assert_frame_equal(result, expected)
+
+        result = self.frame.apply(lambda x: np.repeat(x.name, len(x)),
+                                  axis=1)
+        expected = DataFrame(np.tile(self.frame.index,
+                                     (len(self.frame.columns), 1)).T,
+                             index=self.frame.index,
+                             columns=self.frame.columns)
+        assert_frame_equal(result, expected)
+
+    def test_apply_multi_index(self):
+        s = DataFrame([[1, 2], [3, 4], [5, 6]])
+        s.index = MultiIndex.from_arrays([['a', 'a', 'b'], ['c', 'd', 'd']])
+        s.columns = ['col1', 'col2']
+        res = s.apply(lambda x: Series({'min': min(x), 'max': max(x)}), 1)
+        tm.assertIsInstance(res.index, MultiIndex)
+
+    def test_apply_dict(self):
+
+        # GH 8735
+        A = DataFrame([['foo', 'bar'], ['spam', 'eggs']])
+        A_dicts = pd.Series([dict([(0, 'foo'), (1, 'spam')]),
+                             dict([(0, 'bar'), (1, 'eggs')])])
+        B = DataFrame([[0, 1], [2, 3]])
+        B_dicts = pd.Series([dict([(0, 0), (1, 2)]), dict([(0, 1), (1, 3)])])
+        fn = lambda x: x.to_dict()
+
+        for df, dicts in [(A, A_dicts), (B, B_dicts)]:
+            reduce_true = df.apply(fn, reduce=True)
+            reduce_false = df.apply(fn, reduce=False)
+            reduce_none = df.apply(fn, reduce=None)
+
+            assert_series_equal(reduce_true, dicts)
+            assert_frame_equal(reduce_false, df)
+            assert_series_equal(reduce_none, dicts)
+
+    def test_applymap(self):
+        applied = self.frame.applymap(lambda x: x * 2)
+        assert_frame_equal(applied, self.frame * 2)
+        result = self.frame.applymap(type)
+
+        # GH #465, function returning tuples
+        result = self.frame.applymap(lambda x: (x, x))
+        tm.assertIsInstance(result['A'][0], tuple)
+
+        # GH 2909, object conversion to float in constructor?
+        df = DataFrame(data=[1, 'a'])
+        result = df.applymap(lambda x: x)
+        self.assertEqual(result.dtypes[0], object)
+
+        df = DataFrame(data=[1., 'a'])
+        result = df.applymap(lambda x: x)
+        self.assertEqual(result.dtypes[0], object)
+
+        # GH2786
+        df = DataFrame(np.random.random((3, 4)))
+        df2 = df.copy()
+        cols = ['a', 'a', 'a', 'a']
+        df.columns = cols
+
+        expected = df2.applymap(str)
+        expected.columns = cols
+        result = df.applymap(str)
+        assert_frame_equal(result, expected)
+
+        # datetime/timedelta
+        df['datetime'] = Timestamp('20130101')
+        df['timedelta'] = pd.Timedelta('1 min')
+        result = df.applymap(str)
+        for f in ['datetime', 'timedelta']:
+            self.assertEqual(result.loc[0, f], str(df.loc[0, f]))
diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py
new file mode 100644
index 0000000000000..32df9fac42550
--- /dev/null
+++ b/pandas/tests/frame/test_axis_select_reindex.py
@@ -0,0 +1,808 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+
+from datetime import datetime
+
+from numpy import random
+import numpy as np
+
+from pandas.compat import lrange, lzip, u
+from pandas import (compat, DataFrame, Series, Index, MultiIndex,
+                    date_range, isnull)
+import pandas as pd
+
+from pandas.util.testing import (assert_series_equal,
+                                 assert_frame_equal,
+                                 assertRaisesRegexp)
+
+import pandas.util.testing as tm
+
+from pandas.tests.frame.common import TestData
+
+
+class TestDataFrameSelectReindex(tm.TestCase, TestData):
+    # These are specific reindex-based tests; other indexing tests should go in
+    # test_indexing
+
+    _multiprocess_can_split_ = True
+
+    def test_drop_names(self):
+        df = DataFrame([[1, 2, 3], [3, 4, 5], [5, 6, 7]],
+                       index=['a', 'b', 'c'],
+                       columns=['d', 'e', 'f'])
+        df.index.name, df.columns.name = 'first', 'second'
+        df_dropped_b = df.drop('b')
+        df_dropped_e = df.drop('e', axis=1)
+        df_inplace_b, df_inplace_e = df.copy(), df.copy()
+        df_inplace_b.drop('b', inplace=True)
+        df_inplace_e.drop('e', axis=1, inplace=True)
+        for obj in (df_dropped_b, df_dropped_e, df_inplace_b, df_inplace_e):
+            self.assertEqual(obj.index.name, 'first')
+            self.assertEqual(obj.columns.name, 'second')
+        self.assertEqual(list(df.columns), ['d', 'e', 'f'])
+
+        self.assertRaises(ValueError, df.drop, ['g'])
+        self.assertRaises(ValueError, df.drop, ['g'], 1)
+
+        # errors = 'ignore'
+        dropped = df.drop(['g'], errors='ignore')
+        expected = Index(['a', 'b', 'c'], name='first')
+        self.assert_index_equal(dropped.index, expected)
+
+        dropped = df.drop(['b', 'g'], errors='ignore')
+        expected = Index(['a', 'c'], name='first')
+        self.assert_index_equal(dropped.index, expected)
+
+        dropped = df.drop(['g'], axis=1, errors='ignore')
+        expected = Index(['d', 'e', 'f'], name='second')
+        self.assert_index_equal(dropped.columns, expected)
+
+        dropped = df.drop(['d', 'g'], axis=1, errors='ignore')
+        expected = Index(['e', 'f'], name='second')
+        self.assert_index_equal(dropped.columns, expected)
+
+    def test_drop_col_still_multiindex(self):
+        arrays = [['a', 'b', 'c', 'top'],
+                  ['', '', '', 'OD'],
+                  ['', '', '', 'wx']]
+
+        tuples = sorted(zip(*arrays))
+        index = MultiIndex.from_tuples(tuples)
+
+        df = DataFrame(np.random.randn(3, 4), columns=index)
+        del df[('a', '', '')]
+        assert(isinstance(df.columns, MultiIndex))
+
+    def test_drop(self):
+        simple = DataFrame({"A": [1, 2, 3, 4], "B": [0, 1, 2, 3]})
+        assert_frame_equal(simple.drop("A", axis=1), simple[['B']])
+        assert_frame_equal(simple.drop(["A", "B"], axis='columns'),
+                           simple[[]])
+        assert_frame_equal(simple.drop([0, 1, 3], axis=0), simple.ix[[2], :])
+        assert_frame_equal(simple.drop(
+            [0, 3], axis='index'), simple.ix[[1, 2], :])
+
+        self.assertRaises(ValueError, simple.drop, 5)
+        self.assertRaises(ValueError, simple.drop, 'C', 1)
+        self.assertRaises(ValueError, simple.drop, [1, 5])
+        self.assertRaises(ValueError, simple.drop, ['A', 'C'], 1)
+
+        # errors = 'ignore'
+        assert_frame_equal(simple.drop(5, errors='ignore'), simple)
+        assert_frame_equal(simple.drop([0, 5], errors='ignore'),
+                           simple.ix[[1, 2, 3], :])
+        assert_frame_equal(simple.drop('C', axis=1, errors='ignore'), simple)
+        assert_frame_equal(simple.drop(['A', 'C'], axis=1, errors='ignore'),
+                           simple[['B']])
+
+        # non-unique - wheee!
+        nu_df = DataFrame(lzip(range(3), range(-3, 1), list('abc')),
+                          columns=['a', 'a', 'b'])
+        assert_frame_equal(nu_df.drop('a', axis=1), nu_df[['b']])
+        assert_frame_equal(nu_df.drop('b', axis='columns'), nu_df['a'])
+
+        nu_df = nu_df.set_index(pd.Index(['X', 'Y', 'X']))
+        nu_df.columns = list('abc')
+        assert_frame_equal(nu_df.drop('X', axis='rows'), nu_df.ix[["Y"], :])
+        assert_frame_equal(nu_df.drop(['X', 'Y'], axis=0), nu_df.ix[[], :])
+
+        # inplace cache issue
+        # GH 5628
+        df = pd.DataFrame(np.random.randn(10, 3), columns=list('abc'))
+        expected = df[~(df.b > 0)]
+        df.drop(labels=df[df.b > 0].index, inplace=True)
+        assert_frame_equal(df, expected)
+
+    def test_reindex(self):
+        newFrame = self.frame.reindex(self.ts1.index)
+
+        for col in newFrame.columns:
+            for idx, val in compat.iteritems(newFrame[col]):
+                if idx in self.frame.index:
+                    if np.isnan(val):
+                        self.assertTrue(np.isnan(self.frame[col][idx]))
+                    else:
+                        self.assertEqual(val, self.frame[col][idx])
+                else:
+                    self.assertTrue(np.isnan(val))
+
+        for col, series in compat.iteritems(newFrame):
+            self.assertTrue(tm.equalContents(series.index, newFrame.index))
+        emptyFrame = self.frame.reindex(Index([]))
+        self.assertEqual(len(emptyFrame.index), 0)
+
+        # Cython code should be unit-tested directly
+        nonContigFrame = self.frame.reindex(self.ts1.index[::2])
+
+        for col in nonContigFrame.columns:
+            for idx, val in compat.iteritems(nonContigFrame[col]):
+                if idx in self.frame.index:
+                    if np.isnan(val):
+                        self.assertTrue(np.isnan(self.frame[col][idx]))
+                    else:
+                        self.assertEqual(val, self.frame[col][idx])
+                else:
+                    self.assertTrue(np.isnan(val))
+
+        for col, series in compat.iteritems(nonContigFrame):
+            self.assertTrue(tm.equalContents(series.index,
+                                             nonContigFrame.index))
+
+        # corner cases
+
+        # Same index, copies values but not index if copy=False
+        newFrame = self.frame.reindex(self.frame.index, copy=False)
+        self.assertIs(newFrame.index, self.frame.index)
+
+        # length zero
+        newFrame = self.frame.reindex([])
+        self.assertTrue(newFrame.empty)
+        self.assertEqual(len(newFrame.columns), len(self.frame.columns))
+
+        # length zero with columns reindexed with non-empty index
+        newFrame = self.frame.reindex([])
+        newFrame = newFrame.reindex(self.frame.index)
+        self.assertEqual(len(newFrame.index), len(self.frame.index))
+        self.assertEqual(len(newFrame.columns), len(self.frame.columns))
+
+        # pass non-Index
+        newFrame = self.frame.reindex(list(self.ts1.index))
+        self.assertTrue(newFrame.index.equals(self.ts1.index))
+
+        # copy with no axes
+        result = self.frame.reindex()
+        assert_frame_equal(result, self.frame)
+        self.assertFalse(result is self.frame)
+
+    def test_reindex_nan(self):
+        df = pd.DataFrame([[1, 2], [3, 5], [7, 11], [9, 23]],
+                          index=[2, np.nan, 1, 5],
+                          columns=['joe', 'jim'])
+
+        i, j = [np.nan, 5, 5, np.nan, 1, 2, np.nan], [1, 3, 3, 1, 2, 0, 1]
+        assert_frame_equal(df.reindex(i), df.iloc[j])
+
+        df.index = df.index.astype('object')
+        assert_frame_equal(df.reindex(i), df.iloc[j], check_index_type=False)
+
+        # GH10388
+        df = pd.DataFrame({'other': ['a', 'b', np.nan, 'c'],
+                           'date': ['2015-03-22', np.nan,
+                                    '2012-01-08', np.nan],
+                           'amount': [2, 3, 4, 5]})
+
+        df['date'] = pd.to_datetime(df.date)
+        df['delta'] = (pd.to_datetime('2015-06-18') - df['date']).shift(1)
+
+        left = df.set_index(['delta', 'other', 'date']).reset_index()
+        right = df.reindex(columns=['delta', 'other', 'date', 'amount'])
+        assert_frame_equal(left, right)
+
+    def test_reindex_name_remains(self):
+        s = Series(random.rand(10))
+        df = DataFrame(s, index=np.arange(len(s)))
+        i = Series(np.arange(10), name='iname')
+
+        df = df.reindex(i)
+        self.assertEqual(df.index.name, 'iname')
+
+        df = df.reindex(Index(np.arange(10), name='tmpname'))
+        self.assertEqual(df.index.name, 'tmpname')
+
+        s = Series(random.rand(10))
+        df = DataFrame(s.T, index=np.arange(len(s)))
+        i = Series(np.arange(10), name='iname')
+        df = df.reindex(columns=i)
+        self.assertEqual(df.columns.name, 'iname')
+
+    def test_reindex_int(self):
+        smaller = self.intframe.reindex(self.intframe.index[::2])
+
+        self.assertEqual(smaller['A'].dtype, np.int64)
+
+        bigger = smaller.reindex(self.intframe.index)
+        self.assertEqual(bigger['A'].dtype, np.float64)
+
+        smaller = self.intframe.reindex(columns=['A', 'B'])
+        self.assertEqual(smaller['A'].dtype, np.int64)
+
+    def test_reindex_like(self):
+        other = self.frame.reindex(index=self.frame.index[:10],
+                                   columns=['C', 'B'])
+
+        assert_frame_equal(other, self.frame.reindex_like(other))
+
+    def test_reindex_columns(self):
+        newFrame = self.frame.reindex(columns=['A', 'B', 'E'])
+
+        assert_series_equal(newFrame['B'], self.frame['B'])
+        self.assertTrue(np.isnan(newFrame['E']).all())
+        self.assertNotIn('C', newFrame)
+
+        # length zero
+        newFrame = self.frame.reindex(columns=[])
+        self.assertTrue(newFrame.empty)
+
+    def test_reindex_axes(self):
+        # GH 3317, reindexing by both axes loses freq of the index
+        df = DataFrame(np.ones((3, 3)),
+                       index=[datetime(2012, 1, 1),
+                              datetime(2012, 1, 2),
+                              datetime(2012, 1, 3)],
+                       columns=['a', 'b', 'c'])
+        time_freq = date_range('2012-01-01', '2012-01-03', freq='d')
+        some_cols = ['a', 'b']
+
+        index_freq = df.reindex(index=time_freq).index.freq
+        both_freq = df.reindex(index=time_freq, columns=some_cols).index.freq
+        seq_freq = df.reindex(index=time_freq).reindex(
+            columns=some_cols).index.freq
+        self.assertEqual(index_freq, both_freq)
+        self.assertEqual(index_freq, seq_freq)
+
+    def test_reindex_fill_value(self):
+        df = DataFrame(np.random.randn(10, 4))
+
+        # axis=0
+        result = df.reindex(lrange(15))
+        self.assertTrue(np.isnan(result.values[-5:]).all())
+
+        result = df.reindex(lrange(15), fill_value=0)
+        expected = df.reindex(lrange(15)).fillna(0)
+        assert_frame_equal(result, expected)
+
+        # axis=1
+        result = df.reindex(columns=lrange(5), fill_value=0.)
+        expected = df.copy()
+        expected[4] = 0.
+        assert_frame_equal(result, expected)
+
+        result = df.reindex(columns=lrange(5), fill_value=0)
+        expected = df.copy()
+        expected[4] = 0
+        assert_frame_equal(result, expected)
+
+        result = df.reindex(columns=lrange(5), fill_value='foo')
+        expected = df.copy()
+        expected[4] = 'foo'
+        assert_frame_equal(result, expected)
+
+        # reindex_axis
+        result = df.reindex_axis(lrange(15), fill_value=0., axis=0)
+        expected = df.reindex(lrange(15)).fillna(0)
+        assert_frame_equal(result, expected)
+
+        result = df.reindex_axis(lrange(5), fill_value=0., axis=1)
+        expected = df.reindex(columns=lrange(5)).fillna(0)
+        assert_frame_equal(result, expected)
+
+        # other dtypes
+        df['foo'] = 'foo'
+        result = df.reindex(lrange(15), fill_value=0)
+        expected = df.reindex(lrange(15)).fillna(0)
+        assert_frame_equal(result, expected)
+
+    def test_reindex_dups(self):
+
+        # GH4746, reindex on duplicate index error messages
+        arr = np.random.randn(10)
+        df = DataFrame(arr, index=[1, 2, 3, 4, 5, 1, 2, 3, 4, 5])
+
+        # set index is ok
+        result = df.copy()
+        result.index = list(range(len(df)))
+        expected = DataFrame(arr, index=list(range(len(df))))
+        assert_frame_equal(result, expected)
+
+        # reindex fails
+        self.assertRaises(ValueError, df.reindex, index=list(range(len(df))))
+
+    def test_align(self):
+        af, bf = self.frame.align(self.frame)
+        self.assertIsNot(af._data, self.frame._data)
+
+        af, bf = self.frame.align(self.frame, copy=False)
+        self.assertIs(af._data, self.frame._data)
+
+        # axis = 0
+        other = self.frame.ix[:-5, :3]
+        af, bf = self.frame.align(other, axis=0, fill_value=-1)
+        self.assertTrue(bf.columns.equals(other.columns))
+        # test fill value
+        join_idx = self.frame.index.join(other.index)
+        diff_a = self.frame.index.difference(join_idx)
+        diff_b = other.index.difference(join_idx)
+        diff_a_vals = af.reindex(diff_a).values
+        diff_b_vals = bf.reindex(diff_b).values
+        self.assertTrue((diff_a_vals == -1).all())
+
+        af, bf = self.frame.align(other, join='right', axis=0)
+        self.assertTrue(bf.columns.equals(other.columns))
+        self.assertTrue(bf.index.equals(other.index))
+        self.assertTrue(af.index.equals(other.index))
+
+        # axis = 1
+        other = self.frame.ix[:-5, :3].copy()
+        af, bf = self.frame.align(other, axis=1)
+        self.assertTrue(bf.columns.equals(self.frame.columns))
+        self.assertTrue(bf.index.equals(other.index))
+
+        # test fill value
+        join_idx = self.frame.index.join(other.index)
+        diff_a = self.frame.index.difference(join_idx)
+        diff_b = other.index.difference(join_idx)
+        diff_a_vals = af.reindex(diff_a).values
+
+        # TODO(wesm): unused?
+        diff_b_vals = bf.reindex(diff_b).values  # noqa
+
+        self.assertTrue((diff_a_vals == -1).all())
+
+        af, bf = self.frame.align(other, join='inner', axis=1)
+        self.assertTrue(bf.columns.equals(other.columns))
+
+        af, bf = self.frame.align(other, join='inner', axis=1, method='pad')
+        self.assertTrue(bf.columns.equals(other.columns))
+
+        # test other non-float types
+        af, bf = self.intframe.align(other, join='inner', axis=1, method='pad')
+        self.assertTrue(bf.columns.equals(other.columns))
+
+        af, bf = self.mixed_frame.align(self.mixed_frame,
+                                        join='inner', axis=1, method='pad')
+        self.assertTrue(bf.columns.equals(self.mixed_frame.columns))
+
+        af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
+                                  method=None, fill_value=None)
+        self.assertTrue(bf.index.equals(Index([])))
+
+        af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
+                                  method=None, fill_value=0)
+        self.assertTrue(bf.index.equals(Index([])))
+
+        # mixed floats/ints
+        af, bf = self.mixed_float.align(other.ix[:, 0], join='inner', axis=1,
+                                        method=None, fill_value=0)
+        self.assertTrue(bf.index.equals(Index([])))
+
+        af, bf = self.mixed_int.align(other.ix[:, 0], join='inner', axis=1,
+                                      method=None, fill_value=0)
+        self.assertTrue(bf.index.equals(Index([])))
+
+        # try to align dataframe to series along bad axis
+        self.assertRaises(ValueError, self.frame.align, af.ix[0, :3],
+                          join='inner', axis=2)
+
+        # align dataframe to series with broadcast or not
+        idx = self.frame.index
+        s = Series(range(len(idx)), index=idx)
+
+        left, right = self.frame.align(s, axis=0)
+        tm.assert_index_equal(left.index, self.frame.index)
+        tm.assert_index_equal(right.index, self.frame.index)
+        self.assertTrue(isinstance(right, Series))
+
+        left, right = self.frame.align(s, broadcast_axis=1)
+        tm.assert_index_equal(left.index, self.frame.index)
+        expected = {}
+        for c in self.frame.columns:
+            expected[c] = s
+        expected = DataFrame(expected, index=self.frame.index,
+                             columns=self.frame.columns)
+        assert_frame_equal(right, expected)
+
+        # GH 9558
+        df = DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
+        result = df[df['a'] == 2]
+        expected = DataFrame([[2, 5]], index=[1], columns=['a', 'b'])
+        assert_frame_equal(result, expected)
+
+        result = df.where(df['a'] == 2, 0)
+        expected = DataFrame({'a': [0, 2, 0], 'b': [0, 5, 0]})
+        assert_frame_equal(result, expected)
+
+    def _check_align(self, a, b, axis, fill_axis, how, method, limit=None):
+        aa, ab = a.align(b, axis=axis, join=how, method=method, limit=limit,
+                         fill_axis=fill_axis)
+
+        join_index, join_columns = None, None
+
+        ea, eb = a, b
+        if axis is None or axis == 0:
+            join_index = a.index.join(b.index, how=how)
+            ea = ea.reindex(index=join_index)
+            eb = eb.reindex(index=join_index)
+
+        if axis is None or axis == 1:
+            join_columns = a.columns.join(b.columns, how=how)
+            ea = ea.reindex(columns=join_columns)
+            eb = eb.reindex(columns=join_columns)
+
+        ea = ea.fillna(axis=fill_axis, method=method, limit=limit)
+        eb = eb.fillna(axis=fill_axis, method=method, limit=limit)
+
+        assert_frame_equal(aa, ea)
+        assert_frame_equal(ab, eb)
+
+    def test_align_fill_method_inner(self):
+        for meth in ['pad', 'bfill']:
+            for ax in [0, 1, None]:
+                for fax in [0, 1]:
+                    self._check_align_fill('inner', meth, ax, fax)
+
+    def test_align_fill_method_outer(self):
+        for meth in ['pad', 'bfill']:
+            for ax in [0, 1, None]:
+                for fax in [0, 1]:
+                    self._check_align_fill('outer', meth, ax, fax)
+
+    def test_align_fill_method_left(self):
+        for meth in ['pad', 'bfill']:
+            for ax in [0, 1, None]:
+                for fax in [0, 1]:
+                    self._check_align_fill('left', meth, ax, fax)
+
+    def test_align_fill_method_right(self):
+        for meth in ['pad', 'bfill']:
+            for ax in [0, 1, None]:
+                for fax in [0, 1]:
+                    self._check_align_fill('right', meth, ax, fax)
+
+    def _check_align_fill(self, kind, meth, ax, fax):
+        left = self.frame.ix[0:4, :10]
+        right = self.frame.ix[2:, 6:]
+        empty = self.frame.ix[:0, :0]
+
+        self._check_align(left, right, axis=ax, fill_axis=fax,
+                          how=kind, method=meth)
+        self._check_align(left, right, axis=ax, fill_axis=fax,
+                          how=kind, method=meth, limit=1)
+
+        # empty left
+        self._check_align(empty, right, axis=ax, fill_axis=fax,
+                          how=kind, method=meth)
+        self._check_align(empty, right, axis=ax, fill_axis=fax,
+                          how=kind, method=meth, limit=1)
+
+        # empty right
+        self._check_align(left, empty, axis=ax, fill_axis=fax,
+                          how=kind, method=meth)
+        self._check_align(left, empty, axis=ax, fill_axis=fax,
+                          how=kind, method=meth, limit=1)
+
+        # both empty
+        self._check_align(empty, empty, axis=ax, fill_axis=fax,
+                          how=kind, method=meth)
+        self._check_align(empty, empty, axis=ax, fill_axis=fax,
+                          how=kind, method=meth, limit=1)
+
+    def test_align_int_fill_bug(self):
+        # GH #910
+        X = np.arange(10 * 10, dtype='float64').reshape(10, 10)
+        Y = np.ones((10, 1), dtype=int)
+
+        df1 = DataFrame(X)
+        df1['0.X'] = Y.squeeze()
+
+        df2 = df1.astype(float)
+
+        result = df1 - df1.mean()
+        expected = df2 - df2.mean()
+        assert_frame_equal(result, expected)
+
+    def test_align_multiindex(self):
+        # GH 10665
+        # same test cases as test_align_multiindex in test_series.py
+
+        midx = pd.MultiIndex.from_product([range(2), range(3), range(2)],
+                                          names=('a', 'b', 'c'))
+        idx = pd.Index(range(2), name='b')
+        df1 = pd.DataFrame(np.arange(12, dtype='int64'), index=midx)
+        df2 = pd.DataFrame(np.arange(2, dtype='int64'), index=idx)
+
+        # these must be the same results (but flipped)
+        res1l, res1r = df1.align(df2, join='left')
+        res2l, res2r = df2.align(df1, join='right')
+
+        expl = df1
+        assert_frame_equal(expl, res1l)
+        assert_frame_equal(expl, res2r)
+        expr = pd.DataFrame([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx)
+        assert_frame_equal(expr, res1r)
+        assert_frame_equal(expr, res2l)
+
+        res1l, res1r = df1.align(df2, join='right')
+        res2l, res2r = df2.align(df1, join='left')
+
+        exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)],
+                                             names=('a', 'b', 'c'))
+        expl = pd.DataFrame([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx)
+        assert_frame_equal(expl, res1l)
+        assert_frame_equal(expl, res2r)
+        expr = pd.DataFrame([0, 0, 1, 1] * 2, index=exp_idx)
+        assert_frame_equal(expr, res1r)
+        assert_frame_equal(expr, res2l)
+
+    def test_filter(self):
+        # items
+        filtered = self.frame.filter(['A', 'B', 'E'])
+        self.assertEqual(len(filtered.columns), 2)
+        self.assertNotIn('E', filtered)
+
+        filtered = self.frame.filter(['A', 'B', 'E'], axis='columns')
+        self.assertEqual(len(filtered.columns), 2)
+        self.assertNotIn('E', filtered)
+
+        # other axis
+        idx = self.frame.index[0:4]
+        filtered = self.frame.filter(idx, axis='index')
+        expected = self.frame.reindex(index=idx)
+        assert_frame_equal(filtered, expected)
+
+        # like
+        fcopy = self.frame.copy()
+        fcopy['AA'] = 1
+
+        filtered = fcopy.filter(like='A')
+        self.assertEqual(len(filtered.columns), 2)
+        self.assertIn('AA', filtered)
+
+        # like with ints in column names
+        df = DataFrame(0., index=[0, 1, 2], columns=[0, 1, '_A', '_B'])
+        filtered = df.filter(like='_')
+        self.assertEqual(len(filtered.columns), 2)
+
+        # regex with ints in column names
+        # from PR #10384
+        df = DataFrame(0., index=[0, 1, 2], columns=['A1', 1, 'B', 2, 'C'])
+        expected = DataFrame(
+            0., index=[0, 1, 2], columns=pd.Index([1, 2], dtype=object))
+        filtered = df.filter(regex='^[0-9]+$')
+        assert_frame_equal(filtered, expected)
+
+        expected = DataFrame(0., index=[0, 1, 2], columns=[0, '0', 1, '1'])
+        # shouldn't remove anything
+        filtered = expected.filter(regex='^[0-9]+$')
+        assert_frame_equal(filtered, expected)
+
+        # pass in None
+        with assertRaisesRegexp(TypeError, 'Must pass'):
+            self.frame.filter(items=None)
+
+        # objects
+        filtered = self.mixed_frame.filter(like='foo')
+        self.assertIn('foo', filtered)
+
+        # unicode columns, won't ascii-encode
+        df = self.frame.rename(columns={'B': u('\u2202')})
+        filtered = df.filter(like='C')
+        self.assertTrue('C' in filtered)
+
+    def test_filter_regex_search(self):
+        fcopy = self.frame.copy()
+        fcopy['AA'] = 1
+
+        # regex
+        filtered = fcopy.filter(regex='[A]+')
+        self.assertEqual(len(filtered.columns), 2)
+        self.assertIn('AA', filtered)
+
+        # doesn't have to be at beginning
+        df = DataFrame({'aBBa': [1, 2],
+                        'BBaBB': [1, 2],
+                        'aCCa': [1, 2],
+                        'aCCaBB': [1, 2]})
+
+        result = df.filter(regex='BB')
+        exp = df[[x for x in df.columns if 'BB' in x]]
+        assert_frame_equal(result, exp)
+
+    def test_filter_corner(self):
+        empty = DataFrame()
+
+        result = empty.filter([])
+        assert_frame_equal(result, empty)
+
+        result = empty.filter(like='foo')
+        assert_frame_equal(result, empty)
+
+    def test_select(self):
+        f = lambda x: x.weekday() == 2
+        result = self.tsframe.select(f, axis=0)
+        expected = self.tsframe.reindex(
+            index=self.tsframe.index[[f(x) for x in self.tsframe.index]])
+        assert_frame_equal(result, expected)
+
+        result = self.frame.select(lambda x: x in ('B', 'D'), axis=1)
+        expected = self.frame.reindex(columns=['B', 'D'])
+
+        # TODO should reindex check_names?
+ assert_frame_equal(result, expected, check_names=False) + + def test_take(self): + # homogeneous + order = [3, 1, 2, 0] + for df in [self.frame]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['D', 'B', 'C', 'A']] + assert_frame_equal(result, expected, check_names=False) + + # neg indices + order = [2, 1, -1] + for df in [self.frame]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['C', 'B', 'D']] + assert_frame_equal(result, expected, check_names=False) + + # illegal indices + self.assertRaises(IndexError, df.take, [3, 1, 2, 30], axis=0) + self.assertRaises(IndexError, df.take, [3, 1, 2, -31], axis=0) + self.assertRaises(IndexError, df.take, [3, 1, 2, 5], axis=1) + self.assertRaises(IndexError, df.take, [3, 1, 2, -5], axis=1) + + # mixed-dtype + order = [4, 1, 2, 0, 3] + for df in [self.mixed_frame]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['foo', 'B', 'C', 'A', 'D']] + assert_frame_equal(result, expected) + + # neg indices + order = [4, 1, -2] + for df in [self.mixed_frame]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['foo', 'B', 'D']] + assert_frame_equal(result, expected) + + # by dtype + order = [1, 2, 0, 3] + for df in [self.mixed_float, self.mixed_int]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['B', 'C', 'A', 'D']] + 
assert_frame_equal(result, expected) + + def test_reindex_boolean(self): + frame = DataFrame(np.ones((10, 2), dtype=bool), + index=np.arange(0, 20, 2), + columns=[0, 2]) + + reindexed = frame.reindex(np.arange(10)) + self.assertEqual(reindexed.values.dtype, np.object_) + self.assertTrue(isnull(reindexed[0][1])) + + reindexed = frame.reindex(columns=lrange(3)) + self.assertEqual(reindexed.values.dtype, np.object_) + self.assertTrue(isnull(reindexed[1]).all()) + + def test_reindex_objects(self): + reindexed = self.mixed_frame.reindex(columns=['foo', 'A', 'B']) + self.assertIn('foo', reindexed) + + reindexed = self.mixed_frame.reindex(columns=['A', 'B']) + self.assertNotIn('foo', reindexed) + + def test_reindex_corner(self): + index = Index(['a', 'b', 'c']) + dm = self.empty.reindex(index=[1, 2, 3]) + reindexed = dm.reindex(columns=index) + self.assertTrue(reindexed.columns.equals(index)) + + # ints are weird + + smaller = self.intframe.reindex(columns=['A', 'B', 'E']) + self.assertEqual(smaller['E'].dtype, np.float64) + + def test_reindex_axis(self): + cols = ['A', 'B', 'E'] + reindexed1 = self.intframe.reindex_axis(cols, axis=1) + reindexed2 = self.intframe.reindex(columns=cols) + assert_frame_equal(reindexed1, reindexed2) + + rows = self.intframe.index[0:5] + reindexed1 = self.intframe.reindex_axis(rows, axis=0) + reindexed2 = self.intframe.reindex(index=rows) + assert_frame_equal(reindexed1, reindexed2) + + self.assertRaises(ValueError, self.intframe.reindex_axis, rows, axis=2) + + # no-op case + cols = self.frame.columns.copy() + newFrame = self.frame.reindex_axis(cols, axis=1) + assert_frame_equal(newFrame, self.frame) + + def test_reindex_with_nans(self): + df = DataFrame([[1, 2], [3, 4], [np.nan, np.nan], [7, 8], [9, 10]], + columns=['a', 'b'], + index=[100.0, 101.0, np.nan, 102.0, 103.0]) + + result = df.reindex(index=[101.0, 102.0, 103.0]) + expected = df.iloc[[1, 3, 4]] + assert_frame_equal(result, expected) + + result = df.reindex(index=[103.0]) + expected 
= df.iloc[[4]] + assert_frame_equal(result, expected) + + result = df.reindex(index=[101.0]) + expected = df.iloc[[1]] + assert_frame_equal(result, expected) + + def test_reindex_multi(self): + df = DataFrame(np.random.randn(3, 3)) + + result = df.reindex(lrange(4), lrange(4)) + expected = df.reindex(lrange(4)).reindex(columns=lrange(4)) + + assert_frame_equal(result, expected) + + df = DataFrame(np.random.randint(0, 10, (3, 3))) + + result = df.reindex(lrange(4), lrange(4)) + expected = df.reindex(lrange(4)).reindex(columns=lrange(4)) + + assert_frame_equal(result, expected) + + df = DataFrame(np.random.randint(0, 10, (3, 3))) + + result = df.reindex(lrange(2), lrange(2)) + expected = df.reindex(lrange(2)).reindex(columns=lrange(2)) + + assert_frame_equal(result, expected) + + df = DataFrame(np.random.randn(5, 3) + 1j, columns=['a', 'b', 'c']) + + result = df.reindex(index=[0, 1], columns=['a', 'b']) + expected = df.reindex([0, 1]).reindex(columns=['a', 'b']) + + assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py new file mode 100644 index 0000000000000..f337bf48c05ee --- /dev/null +++ b/pandas/tests/frame/test_block_internals.py @@ -0,0 +1,532 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, timedelta +import itertools + +from numpy import nan +import numpy as np + +from pandas import (DataFrame, Series, Timestamp, date_range, compat, + option_context) +from pandas.compat import StringIO +import pandas as pd + +from pandas.util.testing import (assert_almost_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +# Segregated collection of methods that require the BlockManager internal data +# structure + + +class TestDataFrameBlockInternals(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def 
test_cast_internals(self): + casted = DataFrame(self.frame._data, dtype=int) + expected = DataFrame(self.frame._series, dtype=int) + assert_frame_equal(casted, expected) + + casted = DataFrame(self.frame._data, dtype=np.int32) + expected = DataFrame(self.frame._series, dtype=np.int32) + assert_frame_equal(casted, expected) + + def test_consolidate(self): + self.frame['E'] = 7. + consolidated = self.frame.consolidate() + self.assertEqual(len(consolidated._data.blocks), 1) + + # Ensure copy, do I want this? + recons = consolidated.consolidate() + self.assertIsNot(recons, consolidated) + assert_frame_equal(recons, consolidated) + + self.frame['F'] = 8. + self.assertEqual(len(self.frame._data.blocks), 3) + self.frame.consolidate(inplace=True) + self.assertEqual(len(self.frame._data.blocks), 1) + + def test_consolidate_inplace(self): + frame = self.frame.copy() # noqa + + # triggers in-place consolidation + for letter in range(ord('A'), ord('Z')): + self.frame[chr(letter)] = chr(letter) + + def test_as_matrix_consolidate(self): + self.frame['E'] = 7. + self.assertFalse(self.frame._data.is_consolidated()) + _ = self.frame.as_matrix() # noqa + self.assertTrue(self.frame._data.is_consolidated()) + + def test_modify_values(self): + self.frame.values[5] = 5 + self.assertTrue((self.frame.values[5] == 5).all()) + + # unconsolidated + self.frame['E'] = 7. + self.frame.values[6] = 6 + self.assertTrue((self.frame.values[6] == 6).all()) + + def test_boolean_set_uncons(self): + self.frame['E'] = 7. 
+ + expected = self.frame.values.copy() + expected[expected > 1] = 2 + + self.frame[self.frame > 1] = 2 + assert_almost_equal(expected, self.frame.values) + + def test_as_matrix_numeric_cols(self): + self.frame['foo'] = 'bar' + + values = self.frame.as_matrix(['A', 'B', 'C', 'D']) + self.assertEqual(values.dtype, np.float64) + + def test_as_matrix_lcd(self): + + # mixed lcd + values = self.mixed_float.as_matrix(['A', 'B', 'C', 'D']) + self.assertEqual(values.dtype, np.float64) + + values = self.mixed_float.as_matrix(['A', 'B', 'C']) + self.assertEqual(values.dtype, np.float32) + + values = self.mixed_float.as_matrix(['C']) + self.assertEqual(values.dtype, np.float16) + + values = self.mixed_int.as_matrix(['A', 'B', 'C', 'D']) + self.assertEqual(values.dtype, np.int64) + + values = self.mixed_int.as_matrix(['A', 'D']) + self.assertEqual(values.dtype, np.int64) + + # guess all ints are cast to uints.... + values = self.mixed_int.as_matrix(['A', 'B', 'C']) + self.assertEqual(values.dtype, np.int64) + + values = self.mixed_int.as_matrix(['A', 'C']) + self.assertEqual(values.dtype, np.int32) + + values = self.mixed_int.as_matrix(['C', 'D']) + self.assertEqual(values.dtype, np.int64) + + values = self.mixed_int.as_matrix(['A']) + self.assertEqual(values.dtype, np.int32) + + values = self.mixed_int.as_matrix(['C']) + self.assertEqual(values.dtype, np.uint8) + + def test_constructor_with_convert(self): + # this is actually mostly a test of lib.maybe_convert_objects + # #2845 + df = DataFrame({'A': [2 ** 63 - 1]}) + result = df['A'] + expected = Series(np.asarray([2 ** 63 - 1], np.int64), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [2 ** 63]}) + result = df['A'] + expected = Series(np.asarray([2 ** 63], np.object_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [datetime(2005, 1, 1), True]}) + result = df['A'] + expected = Series(np.asarray([datetime(2005, 1, 1), True], np.object_), + name='A') + 
assert_series_equal(result, expected) + + df = DataFrame({'A': [None, 1]}) + result = df['A'] + expected = Series(np.asarray([np.nan, 1], np.float_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0, 2]}) + result = df['A'] + expected = Series(np.asarray([1.0, 2], np.float_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0 + 2.0j, 3]}) + result = df['A'] + expected = Series(np.asarray([1.0 + 2.0j, 3], np.complex_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0 + 2.0j, 3.0]}) + result = df['A'] + expected = Series(np.asarray([1.0 + 2.0j, 3.0], np.complex_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0 + 2.0j, True]}) + result = df['A'] + expected = Series(np.asarray([1.0 + 2.0j, True], np.object_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0, None]}) + result = df['A'] + expected = Series(np.asarray([1.0, np.nan], np.float_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0 + 2.0j, None]}) + result = df['A'] + expected = Series(np.asarray( + [1.0 + 2.0j, np.nan], np.complex_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [2.0, 1, True, None]}) + result = df['A'] + expected = Series(np.asarray( + [2.0, 1, True, None], np.object_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [2.0, 1, datetime(2006, 1, 1), None]}) + result = df['A'] + expected = Series(np.asarray([2.0, 1, datetime(2006, 1, 1), + None], np.object_), name='A') + assert_series_equal(result, expected) + + def test_construction_with_mixed(self): + # test construction edge cases with mixed types + + # f7u12, this does not work without extensive workaround + data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)], + [datetime(2000, 1, 2), datetime(2000, 1, 3), + datetime(2000, 1, 1)]] + df = DataFrame(data) + + # check dtypes + result = 
df.get_dtype_counts().sort_values() + expected = Series({'datetime64[ns]': 3}) + assert_series_equal(result, expected) + + # mixed-type frames + self.mixed_frame['datetime'] = datetime.now() + self.mixed_frame['timedelta'] = timedelta(days=1, seconds=1) + self.assertEqual(self.mixed_frame['datetime'].dtype, 'M8[ns]') + self.assertEqual(self.mixed_frame['timedelta'].dtype, 'm8[ns]') + result = self.mixed_frame.get_dtype_counts().sort_values() + expected = Series({'float64': 4, + 'object': 1, + 'datetime64[ns]': 1, + 'timedelta64[ns]': 1}).sort_values() + assert_series_equal(result, expected) + + def test_construction_with_conversions(self): + + # convert from a numpy array of non-ns timedelta64 + arr = np.array([1, 2, 3], dtype='timedelta64[s]') + s = Series(arr) + expected = Series(pd.timedelta_range('00:00:01', periods=3, freq='s')) + assert_series_equal(s, expected) + + df = DataFrame(index=range(3)) + df['A'] = arr + expected = DataFrame({'A': pd.timedelta_range('00:00:01', periods=3, + freq='s')}, + index=range(3)) + assert_frame_equal(df, expected) + + # convert from a numpy array of non-ns datetime64 + # note that creating a numpy datetime64 is in LOCAL time!!!! 
+ # seems to work for M8[D], but not for M8[s] + + s = Series(np.array(['2013-01-01', '2013-01-02', + '2013-01-03'], dtype='datetime64[D]')) + assert_series_equal(s, Series(date_range('20130101', periods=3, + freq='D'))) + + # s = Series(np.array(['2013-01-01 00:00:01','2013-01-01 + # 00:00:02','2013-01-01 00:00:03'],dtype='datetime64[s]')) + + # assert_series_equal(s,date_range('20130101 + # 00:00:01',period=3,freq='s')) + + expected = DataFrame({ + 'dt1': Timestamp('20130101'), + 'dt2': date_range('20130101', periods=3), + # 'dt3' : date_range('20130101 00:00:01',periods=3,freq='s'), + }, index=range(3)) + + df = DataFrame(index=range(3)) + df['dt1'] = np.datetime64('2013-01-01') + df['dt2'] = np.array(['2013-01-01', '2013-01-02', '2013-01-03'], + dtype='datetime64[D]') + + # df['dt3'] = np.array(['2013-01-01 00:00:01','2013-01-01 + # 00:00:02','2013-01-01 00:00:03'],dtype='datetime64[s]') + + assert_frame_equal(df, expected) + + def test_constructor_compound_dtypes(self): + # GH 5191 + # compound dtypes should raise NotImplementedError + + def f(dtype): + data = list(itertools.repeat((datetime(2001, 1, 1), + "aa", 20), 9)) + return DataFrame(data=data, + columns=["A", "B", "C"], + dtype=dtype) + + self.assertRaises(NotImplementedError, f, + [("A", "datetime64[h]"), + ("B", "str"), + ("C", "int32")]) + + # these work (though results may be unexpected) + f('int64') + f('float64') + + # 10822 + # invalid error message on dt inference + if not compat.is_platform_windows(): + f('M8[ns]') + + def test_equals_different_blocks(self): + # GH 9330 + df0 = pd.DataFrame({"A": ["x", "y"], "B": [1, 2], + "C": ["w", "z"]}) + df1 = df0.reset_index()[["A", "B", "C"]] + # this assert verifies that the above operations have + # induced a block rearrangement + self.assertTrue(df0._data.blocks[0].dtype != + df1._data.blocks[0].dtype) + # do the real tests + assert_frame_equal(df0, df1) + self.assertTrue(df0.equals(df1)) + self.assertTrue(df1.equals(df0)) + + def 
test_copy_blocks(self): + # API/ENH 9607 + df = DataFrame(self.frame, copy=True) + column = df.columns[0] + + # use the default copy=True, change a column + blocks = df.as_blocks() + for dtype, _df in blocks.items(): + if column in _df: + _df.ix[:, column] = _df[column] + 1 + + # make sure we did not change the original DataFrame + self.assertFalse(_df[column].equals(df[column])) + + def test_no_copy_blocks(self): + # API/ENH 9607 + df = DataFrame(self.frame, copy=True) + column = df.columns[0] + + # use the copy=False, change a column + blocks = df.as_blocks(copy=False) + for dtype, _df in blocks.items(): + if column in _df: + _df.ix[:, column] = _df[column] + 1 + + # make sure we did change the original DataFrame + self.assertTrue(_df[column].equals(df[column])) + + def test_copy(self): + cop = self.frame.copy() + cop['E'] = cop['A'] + self.assertNotIn('E', self.frame) + + # copy objects + copy = self.mixed_frame.copy() + self.assertIsNot(copy._data, self.mixed_frame._data) + + def test_pickle(self): + unpickled = self.round_trip_pickle(self.mixed_frame) + assert_frame_equal(self.mixed_frame, unpickled) + + # buglet + self.mixed_frame._data.ndim + + # empty + unpickled = self.round_trip_pickle(self.empty) + repr(unpickled) + + # tz frame + unpickled = self.round_trip_pickle(self.tzframe) + assert_frame_equal(self.tzframe, unpickled) + + def test_consolidate_datetime64(self): + # numpy vstack bug + + data = """\ +starting,ending,measure +2012-06-21 00:00,2012-06-23 07:00,77 +2012-06-23 07:00,2012-06-23 16:30,65 +2012-06-23 16:30,2012-06-25 08:00,77 +2012-06-25 08:00,2012-06-26 12:00,0 +2012-06-26 12:00,2012-06-27 08:00,77 +""" + df = pd.read_csv(StringIO(data), parse_dates=[0, 1]) + + ser_starting = df.starting + ser_starting.index = ser_starting.values + ser_starting = ser_starting.tz_localize('US/Eastern') + ser_starting = ser_starting.tz_convert('UTC') + + ser_ending = df.ending + ser_ending.index = ser_ending.values + ser_ending = 
ser_ending.tz_localize('US/Eastern') + ser_ending = ser_ending.tz_convert('UTC') + + df.starting = ser_starting.index + df.ending = ser_ending.index + + tm.assert_index_equal(pd.DatetimeIndex( + df.starting), ser_starting.index) + tm.assert_index_equal(pd.DatetimeIndex(df.ending), ser_ending.index) + + def test_is_mixed_type(self): + self.assertFalse(self.frame._is_mixed_type) + self.assertTrue(self.mixed_frame._is_mixed_type) + + def test_get_numeric_data(self): + # TODO(wesm): unused? + intname = np.dtype(np.int_).name # noqa + floatname = np.dtype(np.float_).name # noqa + + datetime64name = np.dtype('M8[ns]').name + objectname = np.dtype(np.object_).name + + df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', + 'f': Timestamp('20010102')}, + index=np.arange(10)) + result = df.get_dtype_counts() + expected = Series({'int64': 1, 'float64': 1, + datetime64name: 1, objectname: 1}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', + 'd': np.array([1.] 
* 10, dtype='float32'), + 'e': np.array([1] * 10, dtype='int32'), + 'f': np.array([1] * 10, dtype='int16'), + 'g': Timestamp('20010102')}, + index=np.arange(10)) + + result = df._get_numeric_data() + expected = df.ix[:, ['a', 'b', 'd', 'e', 'f']] + assert_frame_equal(result, expected) + + only_obj = df.ix[:, ['c', 'g']] + result = only_obj._get_numeric_data() + expected = df.ix[:, []] + assert_frame_equal(result, expected) + + df = DataFrame.from_dict( + {'a': [1, 2], 'b': ['foo', 'bar'], 'c': [np.pi, np.e]}) + result = df._get_numeric_data() + expected = DataFrame.from_dict({'a': [1, 2], 'c': [np.pi, np.e]}) + assert_frame_equal(result, expected) + + df = result.copy() + result = df._get_numeric_data() + expected = df + assert_frame_equal(result, expected) + + def test_convert_objects(self): + + oops = self.mixed_frame.T.T + converted = oops._convert(datetime=True) + assert_frame_equal(converted, self.mixed_frame) + self.assertEqual(converted['A'].dtype, np.float64) + + # force numeric conversion + self.mixed_frame['H'] = '1.' + self.mixed_frame['I'] = '1' + + # add in some items that will be nan + l = len(self.mixed_frame) + self.mixed_frame['J'] = '1.' 
+ self.mixed_frame['K'] = '1' + self.mixed_frame.ix[0:5, ['J', 'K']] = 'garbled' + converted = self.mixed_frame._convert(datetime=True, numeric=True) + self.assertEqual(converted['H'].dtype, 'float64') + self.assertEqual(converted['I'].dtype, 'int64') + self.assertEqual(converted['J'].dtype, 'float64') + self.assertEqual(converted['K'].dtype, 'float64') + self.assertEqual(len(converted['J'].dropna()), l - 5) + self.assertEqual(len(converted['K'].dropna()), l - 5) + + # via astype + converted = self.mixed_frame.copy() + converted['H'] = converted['H'].astype('float64') + converted['I'] = converted['I'].astype('int64') + self.assertEqual(converted['H'].dtype, 'float64') + self.assertEqual(converted['I'].dtype, 'int64') + + # via astype, but errors + converted = self.mixed_frame.copy() + with assertRaisesRegexp(ValueError, 'invalid literal'): + converted['H'].astype('int32') + + # mixed in a single column + df = DataFrame(dict(s=Series([1, 'na', 3, 4]))) + result = df._convert(datetime=True, numeric=True) + expected = DataFrame(dict(s=Series([1, np.nan, 3, 4]))) + assert_frame_equal(result, expected) + + def test_convert_objects_no_conversion(self): + mixed1 = DataFrame( + {'a': [1, 2, 3], 'b': [4.0, 5, 6], 'c': ['x', 'y', 'z']}) + mixed2 = mixed1._convert(datetime=True) + assert_frame_equal(mixed1, mixed2) + + def test_stale_cached_series_bug_473(self): + + # this is chained, but ok + with option_context('chained_assignment', None): + Y = DataFrame(np.random.random((4, 4)), index=('a', 'b', 'c', 'd'), + columns=('e', 'f', 'g', 'h')) + repr(Y) + Y['e'] = Y['e'].astype('object') + Y['g']['c'] = np.NaN + repr(Y) + result = Y.sum() # noqa + exp = Y['g'].sum() # noqa + self.assertTrue(pd.isnull(Y['g']['c'])) + + def test_get_X_columns(self): + # numeric and object columns + + df = DataFrame({'a': [1, 2, 3], + 'b': [True, False, True], + 'c': ['foo', 'bar', 'baz'], + 'd': [None, None, None], + 'e': [3.14, 0.577, 2.773]}) + + 
self.assert_numpy_array_equal(df._get_numeric_data().columns, + ['a', 'b', 'e']) + + def test_strange_column_corruption_issue(self): + # (wesm) Unclear how exactly this is related to internal matters + df = DataFrame(index=[0, 1]) + df[0] = nan + wasCol = {} + # uncommenting these makes the results match + # for col in xrange(100, 200): + # wasCol[col] = 1 + # df[col] = nan + + for i, dt in enumerate(df.index): + for col in range(100, 200): + if col not in wasCol: + wasCol[col] = 1 + df[col] = nan + df[col][dt] = i + + myid = 100 + + first = len(df.ix[pd.isnull(df[myid]), [myid]]) + second = len(df.ix[pd.isnull(df[myid]), [myid]]) + self.assertTrue(first == second == 0) diff --git a/pandas/tests/frame/test_combine_concat.py b/pandas/tests/frame/test_combine_concat.py new file mode 100644 index 0000000000000..77077440ea301 --- /dev/null +++ b/pandas/tests/frame/test_combine_concat.py @@ -0,0 +1,455 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime + +from numpy import nan +import numpy as np + +from pandas.compat import lrange +from pandas import DataFrame, Series, Index, Timestamp +import pandas as pd + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameCombineConcat(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_combine_first_mixed(self): + a = Series(['a', 'b'], index=lrange(2)) + b = Series(lrange(2), index=lrange(2)) + f = DataFrame({'A': a, 'B': b}) + + a = Series(['a', 'b'], index=lrange(5, 7)) + b = Series(lrange(2), index=lrange(5, 7)) + g = DataFrame({'A': a, 'B': b}) + + # TODO(wesm): no verification? 
+ combined = f.combine_first(g) # noqa + + def test_combine_multiple_frames_dtypes(self): + + # GH 2759 + A = DataFrame(data=np.ones((10, 2)), columns=[ + 'foo', 'bar'], dtype=np.float64) + B = DataFrame(data=np.ones((10, 2)), dtype=np.float32) + results = pd.concat((A, B), axis=1).get_dtype_counts() + expected = Series(dict(float64=2, float32=2)) + assert_series_equal(results, expected) + + def test_append_series_dict(self): + df = DataFrame(np.random.randn(5, 4), + columns=['foo', 'bar', 'baz', 'qux']) + + series = df.ix[4] + with assertRaisesRegexp(ValueError, 'Indexes have overlapping values'): + df.append(series, verify_integrity=True) + series.name = None + with assertRaisesRegexp(TypeError, 'Can only append a Series if ' + 'ignore_index=True'): + df.append(series, verify_integrity=True) + + result = df.append(series[::-1], ignore_index=True) + expected = df.append(DataFrame({0: series[::-1]}, index=df.columns).T, + ignore_index=True) + assert_frame_equal(result, expected) + + # dict + result = df.append(series.to_dict(), ignore_index=True) + assert_frame_equal(result, expected) + + result = df.append(series[::-1][:3], ignore_index=True) + expected = df.append(DataFrame({0: series[::-1][:3]}).T, + ignore_index=True) + assert_frame_equal(result, expected.ix[:, result.columns]) + + # can append when name set + row = df.ix[4] + row.name = 5 + result = df.append(row) + expected = df.append(df[-1:], ignore_index=True) + assert_frame_equal(result, expected) + + def test_append_list_of_series_dicts(self): + df = DataFrame(np.random.randn(5, 4), + columns=['foo', 'bar', 'baz', 'qux']) + + dicts = [x.to_dict() for idx, x in df.iterrows()] + + result = df.append(dicts, ignore_index=True) + expected = df.append(df, ignore_index=True) + assert_frame_equal(result, expected) + + # different columns + dicts = [{'foo': 1, 'bar': 2, 'baz': 3, 'peekaboo': 4}, + {'foo': 5, 'bar': 6, 'baz': 7, 'peekaboo': 8}] + result = df.append(dicts, ignore_index=True) + expected = 
df.append(DataFrame(dicts), ignore_index=True) + assert_frame_equal(result, expected) + + def test_append_empty_dataframe(self): + + # Empty df append empty df + df1 = DataFrame([]) + df2 = DataFrame([]) + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + # Non-empty df append empty df + df1 = DataFrame(np.random.randn(5, 2)) + df2 = DataFrame() + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + # Empty df with columns append empty df + df1 = DataFrame(columns=['bar', 'foo']) + df2 = DataFrame() + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + # Non-Empty df with columns append empty df + df1 = DataFrame(np.random.randn(5, 2), columns=['bar', 'foo']) + df2 = DataFrame() + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + def test_append_dtypes(self): + + # GH 5754 + # row appends of different dtypes (so need to do by-item) + # can sometimes infer the correct type + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(5)) + df2 = DataFrame() + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1)) + df2 = DataFrame({'bar': 'foo'}, index=lrange(1, 2)) + result = df1.append(df2) + expected = DataFrame({'bar': [Timestamp('20130101'), 'foo']}) + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1)) + df2 = DataFrame({'bar': np.nan}, index=lrange(1, 2)) + result = df1.append(df2) + expected = DataFrame( + {'bar': Series([Timestamp('20130101'), np.nan], dtype='M8[ns]')}) + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1)) + df2 = DataFrame({'bar': np.nan}, index=lrange(1, 2), dtype=object) + result = df1.append(df2) + expected = DataFrame( + {'bar': Series([Timestamp('20130101'), 
np.nan], dtype='M8[ns]')}) + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': np.nan}, index=lrange(1)) + df2 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1, 2)) + result = df1.append(df2) + expected = DataFrame( + {'bar': Series([np.nan, Timestamp('20130101')], dtype='M8[ns]')}) + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1)) + df2 = DataFrame({'bar': 1}, index=lrange(1, 2), dtype=object) + result = df1.append(df2) + expected = DataFrame({'bar': Series([Timestamp('20130101'), 1])}) + assert_frame_equal(result, expected) + + def test_combine_first(self): + # disjoint + head, tail = self.frame[:5], self.frame[5:] + + combined = head.combine_first(tail) + reordered_frame = self.frame.reindex(combined.index) + assert_frame_equal(combined, reordered_frame) + self.assertTrue(tm.equalContents(combined.columns, self.frame.columns)) + assert_series_equal(combined['A'], reordered_frame['A']) + + # same index + fcopy = self.frame.copy() + fcopy['A'] = 1 + del fcopy['C'] + + fcopy2 = self.frame.copy() + fcopy2['B'] = 0 + del fcopy2['D'] + + combined = fcopy.combine_first(fcopy2) + + self.assertTrue((combined['A'] == 1).all()) + assert_series_equal(combined['B'], fcopy['B']) + assert_series_equal(combined['C'], fcopy2['C']) + assert_series_equal(combined['D'], fcopy['D']) + + # overlap + head, tail = reordered_frame[:10].copy(), reordered_frame + head['A'] = 1 + + combined = head.combine_first(tail) + self.assertTrue((combined['A'][:10] == 1).all()) + + # reverse overlap + tail['A'][:10] = 0 + combined = tail.combine_first(head) + self.assertTrue((combined['A'][:10] == 0).all()) + + # no overlap + f = self.frame[:10] + g = self.frame[10:] + combined = f.combine_first(g) + assert_series_equal(combined['A'].reindex(f.index), f['A']) + assert_series_equal(combined['A'].reindex(g.index), g['A']) + + # corner cases + comb = self.frame.combine_first(self.empty) + assert_frame_equal(comb, 
self.frame) + + comb = self.empty.combine_first(self.frame) + assert_frame_equal(comb, self.frame) + + comb = self.frame.combine_first(DataFrame(index=["faz", "boo"])) + self.assertTrue("faz" in comb.index) + + # #2525 + df = DataFrame({'a': [1]}, index=[datetime(2012, 1, 1)]) + df2 = DataFrame({}, columns=['b']) + result = df.combine_first(df2) + self.assertTrue('b' in result) + + def test_combine_first_mixed_bug(self): + idx = Index(['a', 'b', 'c', 'e']) + ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx) + ser2 = Series(['a', 'b', 'c', 'e'], index=idx) + ser3 = Series([12, 4, 5, 97], index=idx) + + frame1 = DataFrame({"col0": ser1, + "col2": ser2, + "col3": ser3}) + + idx = Index(['a', 'b', 'c', 'f']) + ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx) + ser2 = Series(['a', 'b', 'c', 'f'], index=idx) + ser3 = Series([12, 4, 5, 97], index=idx) + + frame2 = DataFrame({"col1": ser1, + "col2": ser2, + "col5": ser3}) + + combined = frame1.combine_first(frame2) + self.assertEqual(len(combined.columns), 5) + + # gh 3016 (same as in update) + df = DataFrame([[1., 2., False, True], [4., 5., True, False]], + columns=['A', 'B', 'bool1', 'bool2']) + + other = DataFrame([[45, 45]], index=[0], columns=['A', 'B']) + result = df.combine_first(other) + assert_frame_equal(result, df) + + df.ix[0, 'A'] = np.nan + result = df.combine_first(other) + df.ix[0, 'A'] = 45 + assert_frame_equal(result, df) + + # doc example + df1 = DataFrame({'A': [1., np.nan, 3., 5., np.nan], + 'B': [np.nan, 2., 3., np.nan, 6.]}) + + df2 = DataFrame({'A': [5., 2., 4., np.nan, 3., 7.], + 'B': [np.nan, np.nan, 3., 4., 6., 8.]}) + + result = df1.combine_first(df2) + expected = DataFrame( + {'A': [1, 2, 3, 5, 3, 7.], 'B': [np.nan, 2, 3, 4, 6, 8]}) + assert_frame_equal(result, expected) + + # GH3552, return object dtype with bools + df1 = DataFrame( + [[np.nan, 3., True], [-4.6, np.nan, True], [np.nan, 7., False]]) + df2 = DataFrame( + [[-42.6, np.nan, True], [-5., 1.6, False]], index=[1, 2]) + + result = 
df1.combine_first(df2)[2] + expected = Series([True, True, False], name=2) + assert_series_equal(result, expected) + + # GH 3593, converting datetime64[ns] incorrectly + df0 = DataFrame({"a": [datetime(2000, 1, 1), + datetime(2000, 1, 2), + datetime(2000, 1, 3)]}) + df1 = DataFrame({"a": [None, None, None]}) + df2 = df1.combine_first(df0) + assert_frame_equal(df2, df0) + + df2 = df0.combine_first(df1) + assert_frame_equal(df2, df0) + + df0 = DataFrame({"a": [datetime(2000, 1, 1), + datetime(2000, 1, 2), + datetime(2000, 1, 3)]}) + df1 = DataFrame({"a": [datetime(2000, 1, 2), None, None]}) + df2 = df1.combine_first(df0) + result = df0.copy() + result.iloc[0, :] = df1.iloc[0, :] + assert_frame_equal(df2, result) + + df2 = df0.combine_first(df1) + assert_frame_equal(df2, df0) + + def test_update(self): + df = DataFrame([[1.5, nan, 3.], + [1.5, nan, 3.], + [1.5, nan, 3], + [1.5, nan, 3]]) + + other = DataFrame([[3.6, 2., np.nan], + [np.nan, np.nan, 7]], index=[1, 3]) + + df.update(other) + + expected = DataFrame([[1.5, nan, 3], + [3.6, 2, 3], + [1.5, nan, 3], + [1.5, nan, 7.]]) + assert_frame_equal(df, expected) + + def test_update_dtypes(self): + + # gh 3016 + df = DataFrame([[1., 2., False, True], [4., 5., True, False]], + columns=['A', 'B', 'bool1', 'bool2']) + + other = DataFrame([[45, 45]], index=[0], columns=['A', 'B']) + df.update(other) + + expected = DataFrame([[45., 45., False, True], [4., 5., True, False]], + columns=['A', 'B', 'bool1', 'bool2']) + assert_frame_equal(df, expected) + + def test_update_nooverwrite(self): + df = DataFrame([[1.5, nan, 3.], + [1.5, nan, 3.], + [1.5, nan, 3], + [1.5, nan, 3]]) + + other = DataFrame([[3.6, 2., np.nan], + [np.nan, np.nan, 7]], index=[1, 3]) + + df.update(other, overwrite=False) + + expected = DataFrame([[1.5, nan, 3], + [1.5, 2, 3], + [1.5, nan, 3], + [1.5, nan, 3.]]) + assert_frame_equal(df, expected) + + def test_update_filtered(self): + df = DataFrame([[1.5, nan, 3.], + [1.5, nan, 3.], + [1.5, nan, 3], + [1.5,
nan, 3]]) + + other = DataFrame([[3.6, 2., np.nan], + [np.nan, np.nan, 7]], index=[1, 3]) + + df.update(other, filter_func=lambda x: x > 2) + + expected = DataFrame([[1.5, nan, 3], + [1.5, nan, 3], + [1.5, nan, 3], + [1.5, nan, 7.]]) + assert_frame_equal(df, expected) + + def test_update_raise(self): + df = DataFrame([[1.5, 1, 3.], + [1.5, nan, 3.], + [1.5, nan, 3], + [1.5, nan, 3]]) + + other = DataFrame([[2., nan], + [nan, 7]], index=[1, 3], columns=[1, 2]) + with assertRaisesRegexp(ValueError, "Data overlaps"): + df.update(other, raise_conflict=True) + + def test_update_from_non_df(self): + d = {'a': Series([1, 2, 3, 4]), 'b': Series([5, 6, 7, 8])} + df = DataFrame(d) + + d['a'] = Series([5, 6, 7, 8]) + df.update(d) + + expected = DataFrame(d) + + assert_frame_equal(df, expected) + + d = {'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]} + df = DataFrame(d) + + d['a'] = [5, 6, 7, 8] + df.update(d) + + expected = DataFrame(d) + + assert_frame_equal(df, expected) + + def test_join_str_datetime(self): + str_dates = ['20120209', '20120222'] + dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] + + A = DataFrame(str_dates, index=lrange(2), columns=['aa']) + C = DataFrame([[1, 2], [3, 4]], index=str_dates, columns=dt_dates) + + tst = A.join(C, on='aa') + + self.assertEqual(len(tst.columns), 3) + + def test_join_multiindex_leftright(self): + # GH 10741 + df1 = (pd.DataFrame([['a', 'x', 0.471780], ['a', 'y', 0.774908], + ['a', 'z', 0.563634], ['b', 'x', -0.353756], + ['b', 'y', 0.368062], ['b', 'z', -1.721840], + ['c', 'x', 1], ['c', 'y', 2], ['c', 'z', 3]], + columns=['first', 'second', 'value1']) + .set_index(['first', 'second'])) + + df2 = (pd.DataFrame([['a', 10], ['b', 20]], + columns=['first', 'value2']) + .set_index(['first'])) + + exp = pd.DataFrame([[0.471780, 10], [0.774908, 10], [0.563634, 10], + [-0.353756, 20], [0.368062, 20], + [-1.721840, 20], + [1.000000, np.nan], [2.000000, np.nan], + [3.000000, np.nan]], + index=df1.index, columns=['value1', 'value2']) + + # 
these must be the same results (but columns are flipped) + assert_frame_equal(df1.join(df2, how='left'), exp) + assert_frame_equal(df2.join(df1, how='right'), + exp[['value2', 'value1']]) + + exp_idx = pd.MultiIndex.from_product([['a', 'b'], ['x', 'y', 'z']], + names=['first', 'second']) + exp = pd.DataFrame([[0.471780, 10], [0.774908, 10], [0.563634, 10], + [-0.353756, 20], [0.368062, 20], [-1.721840, 20]], + index=exp_idx, columns=['value1', 'value2']) + + assert_frame_equal(df1.join(df2, how='right'), exp) + assert_frame_equal(df2.join(df1, how='left'), + exp[['value2', 'value1']]) diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py new file mode 100644 index 0000000000000..87c263e129361 --- /dev/null +++ b/pandas/tests/frame/test_constructors.py @@ -0,0 +1,1997 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, timedelta +import functools +import itertools + +import nose + +from numpy.random import randn + +import numpy as np +import numpy.ma as ma +import numpy.ma.mrecords as mrecords + +from pandas.compat import (lmap, long, zip, range, lrange, lzip, + OrderedDict) +from pandas import compat +from pandas import (DataFrame, Index, Series, notnull, isnull, + MultiIndex, Timedelta, Timestamp, + date_range) +from pandas.util.misc import is_little_endian +from pandas.core.common import PandasError +import pandas as pd +import pandas.core.common as com +import pandas.lib as lib + +from pandas.core.dtypes import DatetimeTZDtype + +from pandas.util.testing import (assert_almost_equal, + assert_numpy_array_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +MIXED_FLOAT_DTYPES = ['float16', 'float32', 'float64'] +MIXED_INT_DTYPES = ['uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16', + 'int32', 'int64'] + + +class TestDataFrameConstructors(tm.TestCase, 
TestData): + + _multiprocess_can_split_ = True + + def test_constructor(self): + df = DataFrame() + self.assertEqual(len(df.index), 0) + + df = DataFrame(data={}) + self.assertEqual(len(df.index), 0) + + def test_constructor_mixed(self): + index, data = tm.getMixedTypeDict() + + # TODO(wesm), incomplete test? + indexed_frame = DataFrame(data, index=index) # noqa + unindexed_frame = DataFrame(data) # noqa + + self.assertEqual(self.mixed_frame['foo'].dtype, np.object_) + + def test_constructor_cast_failure(self): + foo = DataFrame({'a': ['a', 'b', 'c']}, dtype=np.float64) + self.assertEqual(foo['a'].dtype, object) + + # GH 3010, constructing with odd arrays + df = DataFrame(np.ones((4, 2))) + + # this is ok + df['foo'] = np.ones((4, 2)).tolist() + + # this is not ok + self.assertRaises(ValueError, df.__setitem__, tuple(['test']), + np.ones((4, 2))) + + # this is ok + df['foo2'] = np.ones((4, 2)).tolist() + + def test_constructor_dtype_copy(self): + orig_df = DataFrame({ + 'col1': [1.], + 'col2': [2.], + 'col3': [3.]}) + + new_df = pd.DataFrame(orig_df, dtype=float, copy=True) + + new_df['col1'] = 200. + self.assertEqual(orig_df['col1'][0], 1.) 
+ + def test_constructor_dtype_nocast_view(self): + df = DataFrame([[1, 2]]) + should_be_view = DataFrame(df, dtype=df[0].dtype) + should_be_view[0][0] = 99 + self.assertEqual(df.values[0, 0], 99) + + should_be_view = DataFrame(df.values, dtype=df[0].dtype) + should_be_view[0][0] = 97 + self.assertEqual(df.values[0, 0], 97) + + def test_constructor_dtype_list_data(self): + df = DataFrame([[1, '2'], + [None, 'a']], dtype=object) + self.assertIsNone(df.ix[1, 0]) + self.assertEqual(df.ix[0, 1], '2') + + def test_constructor_list_frames(self): + + # GH 3243 + result = DataFrame([DataFrame([])]) + self.assertEqual(result.shape, (1, 0)) + + result = DataFrame([DataFrame(dict(A=lrange(5)))]) + tm.assertIsInstance(result.iloc[0, 0], DataFrame) + + def test_constructor_mixed_dtypes(self): + + def _make_mixed_dtypes_df(typ, ad=None): + + if typ == 'int': + dtypes = MIXED_INT_DTYPES + arrays = [np.array(np.random.rand(10), dtype=d) + for d in dtypes] + elif typ == 'float': + dtypes = MIXED_FLOAT_DTYPES + arrays = [np.array(np.random.randint( + 10, size=10), dtype=d) for d in dtypes] + + zipper = lzip(dtypes, arrays) + for d, a in zipper: + assert(a.dtype == d) + if ad is None: + ad = dict() + ad.update(dict([(d, a) for d, a in zipper])) + return DataFrame(ad) + + def _check_mixed_dtypes(df, dtypes=None): + if dtypes is None: + dtypes = MIXED_FLOAT_DTYPES + MIXED_INT_DTYPES + for d in dtypes: + if d in df: + assert(df.dtypes[d] == d) + + # mixed floating and integer coexist in the same frame + df = _make_mixed_dtypes_df('float') + _check_mixed_dtypes(df) + + # add lots of types + df = _make_mixed_dtypes_df('float', dict(A=1, B='foo', C='bar')) + _check_mixed_dtypes(df) + + # GH 622 + df = _make_mixed_dtypes_df('int') + _check_mixed_dtypes(df) + + def test_constructor_complex_dtypes(self): + # GH10952 + a = np.random.rand(10).astype(np.complex64) + b = np.random.rand(10).astype(np.complex128) + + df = DataFrame({'a': a, 'b': b}) + self.assertEqual(a.dtype, df.a.dtype) +
self.assertEqual(b.dtype, df.b.dtype) + + def test_constructor_rec(self): + rec = self.frame.to_records(index=False) + + # Assigning causes segfault in NumPy < 1.5.1 + # rec.dtype.names = list(rec.dtype.names)[::-1] + + index = self.frame.index + + df = DataFrame(rec) + self.assert_numpy_array_equal(df.columns, rec.dtype.names) + + df2 = DataFrame(rec, index=index) + self.assert_numpy_array_equal(df2.columns, rec.dtype.names) + self.assertTrue(df2.index.equals(index)) + + rng = np.arange(len(rec))[::-1] + df3 = DataFrame(rec, index=rng, columns=['C', 'B']) + expected = DataFrame(rec, index=rng).reindex(columns=['C', 'B']) + assert_frame_equal(df3, expected) + + def test_constructor_bool(self): + df = DataFrame({0: np.ones(10, dtype=bool), + 1: np.zeros(10, dtype=bool)}) + self.assertEqual(df.values.dtype, np.bool_) + + def test_constructor_overflow_int64(self): + values = np.array([2 ** 64 - i for i in range(1, 10)], + dtype=np.uint64) + + result = DataFrame({'a': values}) + self.assertEqual(result['a'].dtype, object) + + # #2355 + data_scores = [(6311132704823138710, 273), (2685045978526272070, 23), + (8921811264899370420, 45), + (long(17019687244989530680), 270), + (long(9930107427299601010), 273)] + dtype = [('uid', 'u8'), ('score', 'u8')] + data = np.zeros((len(data_scores),), dtype=dtype) + data[:] = data_scores + df_crawls = DataFrame(data) + self.assertEqual(df_crawls['uid'].dtype, object) + + def test_constructor_ordereddict(self): + import random + nitems = 100 + nums = lrange(nitems) + random.shuffle(nums) + expected = ['A%d' % i for i in nums] + df = DataFrame(OrderedDict(zip(expected, [[0]] * nitems))) + self.assertEqual(expected, list(df.columns)) + + def test_constructor_dict(self): + frame = DataFrame({'col1': self.ts1, + 'col2': self.ts2}) + + tm.assert_dict_equal(self.ts1, frame['col1'], compare_keys=False) + tm.assert_dict_equal(self.ts2, frame['col2'], compare_keys=False) + + frame = DataFrame({'col1': self.ts1, + 'col2': self.ts2}, + 
columns=['col2', 'col3', 'col4']) + + self.assertEqual(len(frame), len(self.ts2)) + self.assertNotIn('col1', frame) + self.assertTrue(isnull(frame['col3']).all()) + + # Corner cases + self.assertEqual(len(DataFrame({})), 0) + + # mix dict and array, wrong size - no spec for which error should raise + # first + with tm.assertRaises(ValueError): + DataFrame({'A': {'a': 'a', 'b': 'b'}, 'B': ['a', 'b', 'c']}) + + # Length-one dict micro-optimization + frame = DataFrame({'A': {'1': 1, '2': 2}}) + self.assert_numpy_array_equal(frame.index, ['1', '2']) + + # empty dict plus index + idx = Index([0, 1, 2]) + frame = DataFrame({}, index=idx) + self.assertIs(frame.index, idx) + + # empty with index and columns + idx = Index([0, 1, 2]) + frame = DataFrame({}, index=idx, columns=idx) + self.assertIs(frame.index, idx) + self.assertIs(frame.columns, idx) + self.assertEqual(len(frame._series), 3) + + # with dict of empty list and Series + frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B']) + self.assertTrue(frame.index.equals(Index([]))) + + # GH10856 + # dict with scalar values should raise error, even if columns passed + with tm.assertRaises(ValueError): + DataFrame({'a': 0.7}) + + with tm.assertRaises(ValueError): + DataFrame({'a': 0.7}, columns=['a']) + + with tm.assertRaises(ValueError): + DataFrame({'a': 0.7}, columns=['b']) + + def test_constructor_multi_index(self): + # GH 4078 + # construction error with mi and all-nan frame + tuples = [(2, 3), (3, 3), (3, 3)] + mi = MultiIndex.from_tuples(tuples) + df = DataFrame(index=mi, columns=mi) + self.assertTrue(pd.isnull(df).values.ravel().all()) + + tuples = [(3, 3), (2, 3), (3, 3)] + mi = MultiIndex.from_tuples(tuples) + df = DataFrame(index=mi, columns=mi) + self.assertTrue(pd.isnull(df).values.ravel().all()) + + def test_constructor_error_msgs(self): + msg = "Mixing dicts with non-Series may lead to ambiguous ordering." 
+ # mix dict and array, wrong size + with assertRaisesRegexp(ValueError, msg): + DataFrame({'A': {'a': 'a', 'b': 'b'}, + 'B': ['a', 'b', 'c']}) + + # wrong size ndarray, GH 3105 + msg = "Shape of passed values is \(3, 4\), indices imply \(3, 3\)" + with assertRaisesRegexp(ValueError, msg): + DataFrame(np.arange(12).reshape((4, 3)), + columns=['foo', 'bar', 'baz'], + index=pd.date_range('2000-01-01', periods=3)) + + # higher dim raise exception + with assertRaisesRegexp(ValueError, 'Must pass 2-d input'): + DataFrame(np.zeros((3, 3, 3)), columns=['A', 'B', 'C'], index=[1]) + + # wrong size axis labels + with assertRaisesRegexp(ValueError, "Shape of passed values is " + "\(3, 2\), indices imply \(3, 1\)"): + DataFrame(np.random.rand(2, 3), columns=['A', 'B', 'C'], index=[1]) + + with assertRaisesRegexp(ValueError, "Shape of passed values is " + "\(3, 2\), indices imply \(2, 2\)"): + DataFrame(np.random.rand(2, 3), columns=['A', 'B'], index=[1, 2]) + + with assertRaisesRegexp(ValueError, 'If using all scalar values, you ' + 'must pass an index'): + DataFrame({'a': False, 'b': True}) + + def test_constructor_with_embedded_frames(self): + + # embedded data frames + df1 = DataFrame({'a': [1, 2, 3], 'b': [3, 4, 5]}) + df2 = DataFrame([df1, df1 + 10]) + + df2.dtypes + str(df2) + + result = df2.loc[0, 0] + assert_frame_equal(result, df1) + + result = df2.loc[1, 0] + assert_frame_equal(result, df1 + 10) + + def test_constructor_subclass_dict(self): + # Test for passing dict subclass to constructor + data = {'col1': tm.TestSubDict((x, 10.0 * x) for x in range(10)), + 'col2': tm.TestSubDict((x, 20.0 * x) for x in range(10))} + df = DataFrame(data) + refdf = DataFrame(dict((col, dict(compat.iteritems(val))) + for col, val in compat.iteritems(data))) + assert_frame_equal(refdf, df) + + data = tm.TestSubDict(compat.iteritems(data)) + df = DataFrame(data) + assert_frame_equal(refdf, df) + + # try with defaultdict + from collections import defaultdict + data = {} + 
self.frame['B'][:10] = np.nan + for k, v in compat.iteritems(self.frame): + dct = defaultdict(dict) + dct.update(v.to_dict()) + data[k] = dct + frame = DataFrame(data) + assert_frame_equal(self.frame.sort_index(), frame) + + def test_constructor_dict_block(self): + expected = [[4., 3., 2., 1.]] + df = DataFrame({'d': [4.], 'c': [3.], 'b': [2.], 'a': [1.]}, + columns=['d', 'c', 'b', 'a']) + assert_almost_equal(df.values, expected) + + def test_constructor_dict_cast(self): + # cast float tests + test_data = { + 'A': {'1': 1, '2': 2}, + 'B': {'1': '1', '2': '2', '3': '3'}, + } + frame = DataFrame(test_data, dtype=float) + self.assertEqual(len(frame), 3) + self.assertEqual(frame['B'].dtype, np.float64) + self.assertEqual(frame['A'].dtype, np.float64) + + frame = DataFrame(test_data) + self.assertEqual(len(frame), 3) + self.assertEqual(frame['B'].dtype, np.object_) + self.assertEqual(frame['A'].dtype, np.float64) + + # can't cast to float + test_data = { + 'A': dict(zip(range(20), tm.makeStringIndex(20))), + 'B': dict(zip(range(15), randn(15))) + } + frame = DataFrame(test_data, dtype=float) + self.assertEqual(len(frame), 20) + self.assertEqual(frame['A'].dtype, np.object_) + self.assertEqual(frame['B'].dtype, np.float64) + + def test_constructor_dict_dont_upcast(self): + d = {'Col1': {'Row1': 'A String', 'Row2': np.nan}} + df = DataFrame(d) + tm.assertIsInstance(df['Col1']['Row2'], float) + + dm = DataFrame([[1, 2], ['a', 'b']], index=[1, 2], columns=[1, 2]) + tm.assertIsInstance(dm[1][1], int) + + def test_constructor_dict_of_tuples(self): + # GH #1491 + data = {'a': (1, 2, 3), 'b': (4, 5, 6)} + + result = DataFrame(data) + expected = DataFrame(dict((k, list(v)) + for k, v in compat.iteritems(data))) + assert_frame_equal(result, expected, check_dtype=False) + + def test_constructor_dict_multiindex(self): + check = lambda result, expected: assert_frame_equal( + result, expected, check_dtype=True, check_index_type=True, + check_column_type=True, check_names=True) + d = 
{('a', 'a'): {('i', 'i'): 0, ('i', 'j'): 1, ('j', 'i'): 2}, + ('b', 'a'): {('i', 'i'): 6, ('i', 'j'): 5, ('j', 'i'): 4}, + ('b', 'c'): {('i', 'i'): 7, ('i', 'j'): 8, ('j', 'i'): 9}} + _d = sorted(d.items()) + df = DataFrame(d) + expected = DataFrame( + [x[1] for x in _d], + index=MultiIndex.from_tuples([x[0] for x in _d])).T + expected.index = MultiIndex.from_tuples(expected.index) + check(df, expected) + + d['z'] = {'y': 123., ('i', 'i'): 111, ('i', 'j'): 111, ('j', 'i'): 111} + _d.insert(0, ('z', d['z'])) + expected = DataFrame( + [x[1] for x in _d], + index=Index([x[0] for x in _d], tupleize_cols=False)).T + expected.index = Index(expected.index, tupleize_cols=False) + df = DataFrame(d) + df = df.reindex(columns=expected.columns, index=expected.index) + check(df, expected) + + def test_constructor_dict_datetime64_index(self): + # GH 10160 + dates_as_str = ['1984-02-19', '1988-11-06', '1989-12-03', '1990-03-15'] + + def create_data(constructor): + return dict((i, {constructor(s): 2 * i}) + for i, s in enumerate(dates_as_str)) + + data_datetime64 = create_data(np.datetime64) + data_datetime = create_data(lambda x: datetime.strptime(x, '%Y-%m-%d')) + data_Timestamp = create_data(Timestamp) + + expected = DataFrame([{0: 0, 1: None, 2: None, 3: None}, + {0: None, 1: 2, 2: None, 3: None}, + {0: None, 1: None, 2: 4, 3: None}, + {0: None, 1: None, 2: None, 3: 6}], + index=[Timestamp(dt) for dt in dates_as_str]) + + result_datetime64 = DataFrame(data_datetime64) + result_datetime = DataFrame(data_datetime) + result_Timestamp = DataFrame(data_Timestamp) + assert_frame_equal(result_datetime64, expected) + assert_frame_equal(result_datetime, expected) + assert_frame_equal(result_Timestamp, expected) + + def test_constructor_dict_timedelta64_index(self): + # GH 10160 + td_as_int = [1, 2, 3, 4] + + def create_data(constructor): + return dict((i, {constructor(s): 2 * i}) + for i, s in enumerate(td_as_int)) + + data_timedelta64 = create_data(lambda x: np.timedelta64(x, 'D')) + 
data_timedelta = create_data(lambda x: timedelta(days=x)) + data_Timedelta = create_data(lambda x: Timedelta(x, 'D')) + + expected = DataFrame([{0: 0, 1: None, 2: None, 3: None}, + {0: None, 1: 2, 2: None, 3: None}, + {0: None, 1: None, 2: 4, 3: None}, + {0: None, 1: None, 2: None, 3: 6}], + index=[Timedelta(td, 'D') for td in td_as_int]) + + result_timedelta64 = DataFrame(data_timedelta64) + result_timedelta = DataFrame(data_timedelta) + result_Timedelta = DataFrame(data_Timedelta) + assert_frame_equal(result_timedelta64, expected) + assert_frame_equal(result_timedelta, expected) + assert_frame_equal(result_Timedelta, expected) + + def test_nested_dict_frame_constructor(self): + rng = pd.period_range('1/1/2000', periods=5) + df = DataFrame(randn(10, 5), columns=rng) + + data = {} + for col in df.columns: + for row in df.index: + data.setdefault(col, {})[row] = df.get_value(row, col) + + result = DataFrame(data, columns=rng) + assert_frame_equal(result, df) + + data = {} + for col in df.columns: + for row in df.index: + data.setdefault(row, {})[col] = df.get_value(row, col) + + result = DataFrame(data, index=rng).T + assert_frame_equal(result, df) + + def _check_basic_constructor(self, empty): + # mat: 2d matrix with shape (2, 3) as input.
empty - makes sized + # objects + mat = empty((2, 3), dtype=float) + # 2-D input + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + + self.assertEqual(len(frame.index), 2) + self.assertEqual(len(frame.columns), 3) + + # 1-D input + frame = DataFrame(empty((3,)), columns=['A'], index=[1, 2, 3]) + self.assertEqual(len(frame.index), 3) + self.assertEqual(len(frame.columns), 1) + + # cast type + frame = DataFrame(mat, columns=['A', 'B', 'C'], + index=[1, 2], dtype=np.int64) + self.assertEqual(frame.values.dtype, np.int64) + + # wrong size axis labels + msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)' + with assertRaisesRegexp(ValueError, msg): + DataFrame(mat, columns=['A', 'B', 'C'], index=[1]) + msg = r'Shape of passed values is \(3, 2\), indices imply \(2, 2\)' + with assertRaisesRegexp(ValueError, msg): + DataFrame(mat, columns=['A', 'B'], index=[1, 2]) + + # higher dim raise exception + with assertRaisesRegexp(ValueError, 'Must pass 2-d input'): + DataFrame(empty((3, 3, 3)), columns=['A', 'B', 'C'], + index=[1]) + + # automatic labeling + frame = DataFrame(mat) + self.assert_numpy_array_equal(frame.index, lrange(2)) + self.assert_numpy_array_equal(frame.columns, lrange(3)) + + frame = DataFrame(mat, index=[1, 2]) + self.assert_numpy_array_equal(frame.columns, lrange(3)) + + frame = DataFrame(mat, columns=['A', 'B', 'C']) + self.assert_numpy_array_equal(frame.index, lrange(2)) + + # 0-length axis + frame = DataFrame(empty((0, 3))) + self.assertEqual(len(frame.index), 0) + + frame = DataFrame(empty((3, 0))) + self.assertEqual(len(frame.columns), 0) + + def test_constructor_ndarray(self): + self._check_basic_constructor(np.ones) + + frame = DataFrame(['foo', 'bar'], index=[0, 1], columns=['A']) + self.assertEqual(len(frame), 2) + + def test_constructor_maskedarray(self): + self._check_basic_constructor(ma.masked_all) + + # Check non-masked values + mat = ma.masked_all((2, 3), dtype=float) + mat[0, 0] = 1.0 + mat[1, 2] = 2.0 + frame = 
DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertEqual(1.0, frame['A'][1]) + self.assertEqual(2.0, frame['C'][2]) + + # what is this even checking?? + mat = ma.masked_all((2, 3), dtype=float) + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertTrue(np.all(~np.asarray(frame == frame))) + + def test_constructor_maskedarray_nonfloat(self): + # masked int promoted to float + mat = ma.masked_all((2, 3), dtype=int) + # 2-D input + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + + self.assertEqual(len(frame.index), 2) + self.assertEqual(len(frame.columns), 3) + self.assertTrue(np.all(~np.asarray(frame == frame))) + + # cast type + frame = DataFrame(mat, columns=['A', 'B', 'C'], + index=[1, 2], dtype=np.float64) + self.assertEqual(frame.values.dtype, np.float64) + + # Check non-masked values + mat2 = ma.copy(mat) + mat2[0, 0] = 1 + mat2[1, 2] = 2 + frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertEqual(1, frame['A'][1]) + self.assertEqual(2, frame['C'][2]) + + # masked np.datetime64 stays (use lib.NaT as null) + mat = ma.masked_all((2, 3), dtype='M8[ns]') + # 2-D input + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + + self.assertEqual(len(frame.index), 2) + self.assertEqual(len(frame.columns), 3) + self.assertTrue(isnull(frame).values.all()) + + # cast type + frame = DataFrame(mat, columns=['A', 'B', 'C'], + index=[1, 2], dtype=np.int64) + self.assertEqual(frame.values.dtype, np.int64) + + # Check non-masked values + mat2 = ma.copy(mat) + mat2[0, 0] = 1 + mat2[1, 2] = 2 + frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertEqual(1, frame['A'].view('i8')[1]) + self.assertEqual(2, frame['C'].view('i8')[2]) + + # masked bool promoted to object + mat = ma.masked_all((2, 3), dtype=bool) + # 2-D input + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + + self.assertEqual(len(frame.index), 2) + self.assertEqual(len(frame.columns), 3) + 
self.assertTrue(np.all(~np.asarray(frame == frame))) + + # cast type + frame = DataFrame(mat, columns=['A', 'B', 'C'], + index=[1, 2], dtype=object) + self.assertEqual(frame.values.dtype, object) + + # Check non-masked values + mat2 = ma.copy(mat) + mat2[0, 0] = True + mat2[1, 2] = False + frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertEqual(True, frame['A'][1]) + self.assertEqual(False, frame['C'][2]) + + def test_constructor_mrecarray(self): + # Ensure mrecarray produces frame identical to dict of masked arrays + # from GH3479 + + assert_fr_equal = functools.partial(assert_frame_equal, + check_index_type=True, + check_column_type=True, + check_frame_type=True) + arrays = [ + ('float', np.array([1.5, 2.0])), + ('int', np.array([1, 2])), + ('str', np.array(['abc', 'def'])), + ] + for name, arr in arrays[:]: + arrays.append(('masked1_' + name, + np.ma.masked_array(arr, mask=[False, True]))) + arrays.append(('masked_all', np.ma.masked_all((2,)))) + arrays.append(('masked_none', + np.ma.masked_array([1.0, 2.5], mask=False))) + + # call assert_frame_equal for all selections of 3 arrays + for comb in itertools.combinations(arrays, 3): + names, data = zip(*comb) + mrecs = mrecords.fromarrays(data, names=names) + + # fill the comb + comb = dict([(k, v.filled()) if hasattr( + v, 'filled') else (k, v) for k, v in comb]) + + expected = DataFrame(comb, columns=names) + result = DataFrame(mrecs) + assert_fr_equal(result, expected) + + # specify columns + expected = DataFrame(comb, columns=names[::-1]) + result = DataFrame(mrecs, columns=names[::-1]) + assert_fr_equal(result, expected) + + # specify index + expected = DataFrame(comb, columns=names, index=[1, 2]) + result = DataFrame(mrecs, index=[1, 2]) + assert_fr_equal(result, expected) + + def test_constructor_corner(self): + df = DataFrame(index=[]) + self.assertEqual(df.values.shape, (0, 0)) + + # empty but with specified dtype + df = DataFrame(index=lrange(10), columns=['a', 'b'], 
dtype=object) + self.assertEqual(df.values.dtype, np.object_) + + # does not error but ends up float + df = DataFrame(index=lrange(10), columns=['a', 'b'], dtype=int) + self.assertEqual(df.values.dtype, np.object_) + + # #1783 empty dtype object + df = DataFrame({}, columns=['foo', 'bar']) + self.assertEqual(df.values.dtype, np.object_) + + df = DataFrame({'b': 1}, index=lrange(10), columns=list('abc'), + dtype=int) + self.assertEqual(df.values.dtype, np.object_) + + def test_constructor_scalar_inference(self): + data = {'int': 1, 'bool': True, + 'float': 3., 'complex': 4j, 'object': 'foo'} + df = DataFrame(data, index=np.arange(10)) + + self.assertEqual(df['int'].dtype, np.int64) + self.assertEqual(df['bool'].dtype, np.bool_) + self.assertEqual(df['float'].dtype, np.float64) + self.assertEqual(df['complex'].dtype, np.complex128) + self.assertEqual(df['object'].dtype, np.object_) + + def test_constructor_arrays_and_scalars(self): + df = DataFrame({'a': randn(10), 'b': True}) + exp = DataFrame({'a': df['a'].values, 'b': [True] * 10}) + + assert_frame_equal(df, exp) + with tm.assertRaisesRegexp(ValueError, 'must pass an index'): + DataFrame({'a': False, 'b': True}) + + def test_constructor_DataFrame(self): + df = DataFrame(self.frame) + assert_frame_equal(df, self.frame) + + df_casted = DataFrame(self.frame, dtype=np.int64) + self.assertEqual(df_casted.values.dtype, np.int64) + + def test_constructor_more(self): + # used to be in test_matrix.py + arr = randn(10) + dm = DataFrame(arr, columns=['A'], index=np.arange(10)) + self.assertEqual(dm.values.ndim, 2) + + arr = randn(0) + dm = DataFrame(arr) + self.assertEqual(dm.values.ndim, 2) + self.assertEqual(dm.values.ndim, 2) + + # no data specified + dm = DataFrame(columns=['A', 'B'], index=np.arange(10)) + self.assertEqual(dm.values.shape, (10, 2)) + + dm = DataFrame(columns=['A', 'B']) + self.assertEqual(dm.values.shape, (0, 2)) + + dm = DataFrame(index=np.arange(10)) + self.assertEqual(dm.values.shape, (10, 0)) + + # 
corner, silly + # TODO: Fix this Exception to be better... + with assertRaisesRegexp(PandasError, 'constructor not ' + 'properly called'): + DataFrame((1, 2, 3)) + + # can't cast + mat = np.array(['foo', 'bar'], dtype=object).reshape(2, 1) + with assertRaisesRegexp(ValueError, 'cast'): + DataFrame(mat, index=[0, 1], columns=[0], dtype=float) + + dm = DataFrame(DataFrame(self.frame._series)) + assert_frame_equal(dm, self.frame) + + # int cast + dm = DataFrame({'A': np.ones(10, dtype=int), + 'B': np.ones(10, dtype=np.float64)}, + index=np.arange(10)) + + self.assertEqual(len(dm.columns), 2) + self.assertEqual(dm.values.dtype, np.float64) + + def test_constructor_empty_list(self): + df = DataFrame([], index=[]) + expected = DataFrame(index=[]) + assert_frame_equal(df, expected) + + # GH 9939 + df = DataFrame([], columns=['A', 'B']) + expected = DataFrame({}, columns=['A', 'B']) + assert_frame_equal(df, expected) + + # Empty generator: list(empty_gen()) == [] + def empty_gen(): + return + yield + + df = DataFrame(empty_gen(), columns=['A', 'B']) + assert_frame_equal(df, expected) + + def test_constructor_list_of_lists(self): + # GH #484 + l = [[1, 'a'], [2, 'b']] + df = DataFrame(data=l, columns=["num", "str"]) + self.assertTrue(com.is_integer_dtype(df['num'])) + self.assertEqual(df['str'].dtype, np.object_) + + # GH 4851 + # list of 0-dim ndarrays + expected = DataFrame({0: range(10)}) + data = [np.array(x) for x in range(10)] + result = DataFrame(data) + assert_frame_equal(result, expected) + + def test_constructor_sequence_like(self): + # GH 3783 + # collections.Sequence like + import collections + + class DummyContainer(collections.Sequence): + + def __init__(self, lst): + self._lst = lst + + def __getitem__(self, n): + return self._lst.__getitem__(n) + + def __len__(self): + return self._lst.__len__() + + l = [DummyContainer([1, 'a']), DummyContainer([2, 'b'])] + columns = ["num", "str"] + result = DataFrame(l, columns=columns) + expected = DataFrame([[1, 'a'],
[2, 'b']], columns=columns) + assert_frame_equal(result, expected, check_dtype=False) + + # GH 4297 + # support Array + import array + result = DataFrame.from_items([('A', array.array('i', range(10)))]) + expected = DataFrame({'A': list(range(10))}) + assert_frame_equal(result, expected, check_dtype=False) + + expected = DataFrame([list(range(10)), list(range(10))]) + result = DataFrame([array.array('i', range(10)), + array.array('i', range(10))]) + assert_frame_equal(result, expected, check_dtype=False) + + def test_constructor_iterator(self): + + expected = DataFrame([list(range(10)), list(range(10))]) + result = DataFrame([range(10), range(10)]) + assert_frame_equal(result, expected) + + def test_constructor_generator(self): + # related #2305 + + gen1 = (i for i in range(10)) + gen2 = (i for i in range(10)) + + expected = DataFrame([list(range(10)), list(range(10))]) + result = DataFrame([gen1, gen2]) + assert_frame_equal(result, expected) + + gen = ([i, 'a'] for i in range(10)) + result = DataFrame(gen) + expected = DataFrame({0: range(10), 1: 'a'}) + assert_frame_equal(result, expected, check_dtype=False) + + def test_constructor_list_of_dicts(self): + data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]), + OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]), + OrderedDict([['a', 1.5], ['d', 6]]), + OrderedDict(), + OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]), + OrderedDict([['b', 3], ['c', 4], ['d', 6]])] + + result = DataFrame(data) + expected = DataFrame.from_dict(dict(zip(range(len(data)), data)), + orient='index') + assert_frame_equal(result, expected.reindex(result.index)) + + result = DataFrame([{}]) + expected = DataFrame(index=[0]) + assert_frame_equal(result, expected) + + def test_constructor_list_of_series(self): + data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]), + OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])] + sdict = OrderedDict(zip(['x', 'y'], data)) + idx = Index(['a', 'b', 'c']) + + # all named + data2 = [Series([1.5, 3, 
4], idx, dtype='O', name='x'), + Series([1.5, 3, 6], idx, name='y')] + result = DataFrame(data2) + expected = DataFrame.from_dict(sdict, orient='index') + assert_frame_equal(result, expected) + + # some unnamed + data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'), + Series([1.5, 3, 6], idx)] + result = DataFrame(data2) + + sdict = OrderedDict(zip(['x', 'Unnamed 0'], data)) + expected = DataFrame.from_dict(sdict, orient='index') + assert_frame_equal(result.sort_index(), expected) + + # none named + data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]), + OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]), + OrderedDict([['a', 1.5], ['d', 6]]), + OrderedDict(), + OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]), + OrderedDict([['b', 3], ['c', 4], ['d', 6]])] + data = [Series(d) for d in data] + + result = DataFrame(data) + sdict = OrderedDict(zip(range(len(data)), data)) + expected = DataFrame.from_dict(sdict, orient='index') + assert_frame_equal(result, expected.reindex(result.index)) + + result2 = DataFrame(data, index=np.arange(6)) + assert_frame_equal(result, result2) + + result = DataFrame([Series({})]) + expected = DataFrame(index=[0]) + assert_frame_equal(result, expected) + + data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]), + OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])] + sdict = OrderedDict(zip(range(len(data)), data)) + + idx = Index(['a', 'b', 'c']) + data2 = [Series([1.5, 3, 4], idx, dtype='O'), + Series([1.5, 3, 6], idx)] + result = DataFrame(data2) + expected = DataFrame.from_dict(sdict, orient='index') + assert_frame_equal(result, expected) + + def test_constructor_list_of_derived_dicts(self): + class CustomDict(dict): + pass + d = {'a': 1.5, 'b': 3} + + data_custom = [CustomDict(d)] + data = [d] + + result_custom = DataFrame(data_custom) + result = DataFrame(data) + assert_frame_equal(result, result_custom) + + def test_constructor_ragged(self): + data = {'A': randn(10), + 'B': randn(8)} + with assertRaisesRegexp(ValueError, 
'arrays must all be same length'): + DataFrame(data) + + def test_constructor_scalar(self): + idx = Index(lrange(3)) + df = DataFrame({"a": 0}, index=idx) + expected = DataFrame({"a": [0, 0, 0]}, index=idx) + assert_frame_equal(df, expected, check_dtype=False) + + def test_constructor_Series_copy_bug(self): + df = DataFrame(self.frame['A'], index=self.frame.index, columns=['A']) + df.copy() + + def test_constructor_mixed_dict_and_Series(self): + data = {} + data['A'] = {'foo': 1, 'bar': 2, 'baz': 3} + data['B'] = Series([4, 3, 2, 1], index=['bar', 'qux', 'baz', 'foo']) + + result = DataFrame(data) + self.assertTrue(result.index.is_monotonic) + + # ordering ambiguous, raise exception + with assertRaisesRegexp(ValueError, 'ambiguous ordering'): + DataFrame({'A': ['a', 'b'], 'B': {'a': 'a', 'b': 'b'}}) + + # this is OK though + result = DataFrame({'A': ['a', 'b'], + 'B': Series(['a', 'b'], index=['a', 'b'])}) + expected = DataFrame({'A': ['a', 'b'], 'B': ['a', 'b']}, + index=['a', 'b']) + assert_frame_equal(result, expected) + + def test_constructor_tuples(self): + result = DataFrame({'A': [(1, 2), (3, 4)]}) + expected = DataFrame({'A': Series([(1, 2), (3, 4)])}) + assert_frame_equal(result, expected) + + def test_constructor_namedtuples(self): + # GH11181 + from collections import namedtuple + named_tuple = namedtuple("Pandas", list('ab')) + tuples = [named_tuple(1, 3), named_tuple(2, 4)] + expected = DataFrame({'a': [1, 2], 'b': [3, 4]}) + result = DataFrame(tuples) + assert_frame_equal(result, expected) + + # with columns + expected = DataFrame({'y': [1, 2], 'z': [3, 4]}) + result = DataFrame(tuples, columns=['y', 'z']) + assert_frame_equal(result, expected) + + def test_constructor_orient(self): + data_dict = self.mixed_frame.T._series + recons = DataFrame.from_dict(data_dict, orient='index') + expected = self.mixed_frame.sort_index() + assert_frame_equal(recons, expected) + + # dict of sequence + a = {'hi': [32, 3, 3], + 'there': [3, 5, 3]} + rs = 
DataFrame.from_dict(a, orient='index') + xp = DataFrame.from_dict(a).T.reindex(list(a.keys())) + assert_frame_equal(rs, xp) + + def test_constructor_Series_named(self): + a = Series([1, 2, 3], index=['a', 'b', 'c'], name='x') + df = DataFrame(a) + self.assertEqual(df.columns[0], 'x') + self.assertTrue(df.index.equals(a.index)) + + # ndarray like + arr = np.random.randn(10) + s = Series(arr, name='x') + df = DataFrame(s) + expected = DataFrame(dict(x=s)) + assert_frame_equal(df, expected) + + s = Series(arr, index=range(3, 13)) + df = DataFrame(s) + expected = DataFrame({0: s}) + assert_frame_equal(df, expected) + + self.assertRaises(ValueError, DataFrame, s, columns=[1, 2]) + + # #2234 + a = Series([], name='x') + df = DataFrame(a) + self.assertEqual(df.columns[0], 'x') + + # series with name and w/o + s1 = Series(arr, name='x') + df = DataFrame([s1, arr]).T + expected = DataFrame({'x': s1, 'Unnamed 0': arr}, + columns=['x', 'Unnamed 0']) + assert_frame_equal(df, expected) + + # this is a bit non-intuitive here; the series collapse down to arrays + df = DataFrame([arr, s1]).T + expected = DataFrame({1: s1, 0: arr}, columns=[0, 1]) + assert_frame_equal(df, expected) + + def test_constructor_Series_differently_indexed(self): + # name + s1 = Series([1, 2, 3], index=['a', 'b', 'c'], name='x') + + # no name + s2 = Series([1, 2, 3], index=['a', 'b', 'c']) + + other_index = Index(['a', 'b']) + + df1 = DataFrame(s1, index=other_index) + exp1 = DataFrame(s1.reindex(other_index)) + self.assertEqual(df1.columns[0], 'x') + assert_frame_equal(df1, exp1) + + df2 = DataFrame(s2, index=other_index) + exp2 = DataFrame(s2.reindex(other_index)) + self.assertEqual(df2.columns[0], 0) + self.assertTrue(df2.index.equals(other_index)) + assert_frame_equal(df2, exp2) + + def test_constructor_manager_resize(self): + index = list(self.frame.index[:5]) + columns = list(self.frame.columns[:3]) + + result = DataFrame(self.frame._data, index=index, + columns=columns) + 
self.assert_numpy_array_equal(result.index, index) + self.assert_numpy_array_equal(result.columns, columns) + + def test_constructor_from_items(self): + items = [(c, self.frame[c]) for c in self.frame.columns] + recons = DataFrame.from_items(items) + assert_frame_equal(recons, self.frame) + + # pass some columns + recons = DataFrame.from_items(items, columns=['C', 'B', 'A']) + assert_frame_equal(recons, self.frame.ix[:, ['C', 'B', 'A']]) + + # orient='index' + + row_items = [(idx, self.mixed_frame.xs(idx)) + for idx in self.mixed_frame.index] + + recons = DataFrame.from_items(row_items, + columns=self.mixed_frame.columns, + orient='index') + assert_frame_equal(recons, self.mixed_frame) + self.assertEqual(recons['A'].dtype, np.float64) + + with tm.assertRaisesRegexp(TypeError, + "Must pass columns with orient='index'"): + DataFrame.from_items(row_items, orient='index') + + # orient='index', but thar be tuples + arr = lib.list_to_object_array( + [('bar', 'baz')] * len(self.mixed_frame)) + self.mixed_frame['foo'] = arr + row_items = [(idx, list(self.mixed_frame.xs(idx))) + for idx in self.mixed_frame.index] + recons = DataFrame.from_items(row_items, + columns=self.mixed_frame.columns, + orient='index') + assert_frame_equal(recons, self.mixed_frame) + tm.assertIsInstance(recons['foo'][0], tuple) + + rs = DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])], + orient='index', + columns=['one', 'two', 'three']) + xp = DataFrame([[1, 2, 3], [4, 5, 6]], index=['A', 'B'], + columns=['one', 'two', 'three']) + assert_frame_equal(rs, xp) + + def test_constructor_mix_series_nonseries(self): + df = DataFrame({'A': self.frame['A'], + 'B': list(self.frame['B'])}, columns=['A', 'B']) + assert_frame_equal(df, self.frame.ix[:, ['A', 'B']]) + + with tm.assertRaisesRegexp(ValueError, 'does not match index length'): + DataFrame({'A': self.frame['A'], 'B': list(self.frame['B'])[:-2]}) + + def test_constructor_miscast_na_int_dtype(self): + df = DataFrame([[np.nan, 1], [1, 0]], 
dtype=np.int64) + expected = DataFrame([[np.nan, 1], [1, 0]]) + assert_frame_equal(df, expected) + + def test_constructor_iterator_failure(self): + with assertRaisesRegexp(TypeError, 'iterator'): + df = DataFrame(iter([1, 2, 3])) # noqa + + def test_constructor_column_duplicates(self): + # it works! #2079 + df = DataFrame([[8, 5]], columns=['a', 'a']) + edf = DataFrame([[8, 5]]) + edf.columns = ['a', 'a'] + + assert_frame_equal(df, edf) + + idf = DataFrame.from_items( + [('a', [8]), ('a', [5])], columns=['a', 'a']) + assert_frame_equal(idf, edf) + + self.assertRaises(ValueError, DataFrame.from_items, + [('a', [8]), ('a', [5]), ('b', [6])], + columns=['b', 'a', 'a']) + + def test_constructor_empty_with_string_dtype(self): + # GH 9428 + expected = DataFrame(index=[0, 1], columns=[0, 1], dtype=object) + + df = DataFrame(index=[0, 1], columns=[0, 1], dtype=str) + assert_frame_equal(df, expected) + df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.str_) + assert_frame_equal(df, expected) + df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.unicode_) + assert_frame_equal(df, expected) + df = DataFrame(index=[0, 1], columns=[0, 1], dtype='U5') + assert_frame_equal(df, expected) + + def test_constructor_single_value(self): + # expecting single value upcasting here + df = DataFrame(0., index=[1, 2, 3], columns=['a', 'b', 'c']) + assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('float64'), + df.index, df.columns)) + + df = DataFrame(0, index=[1, 2, 3], columns=['a', 'b', 'c']) + assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('int64'), + df.index, df.columns)) + + df = DataFrame('a', index=[1, 2], columns=['a', 'c']) + assert_frame_equal(df, DataFrame(np.array([['a', 'a'], + ['a', 'a']], + dtype=object), + index=[1, 2], + columns=['a', 'c'])) + + self.assertRaises(com.PandasError, DataFrame, 'a', [1, 2]) + self.assertRaises(com.PandasError, DataFrame, 'a', columns=['a', 'c']) + with tm.assertRaisesRegexp(TypeError, 'incompatible data and dtype'): 
+ DataFrame('a', [1, 2], ['a', 'c'], float) + + def test_constructor_with_datetimes(self): + intname = np.dtype(np.int_).name + floatname = np.dtype(np.float_).name + datetime64name = np.dtype('M8[ns]').name + objectname = np.dtype(np.object_).name + + # single item + df = DataFrame({'A': 1, 'B': 'foo', 'C': 'bar', + 'D': Timestamp("20010101"), + 'E': datetime(2001, 1, 2, 0, 0)}, + index=np.arange(10)) + result = df.get_dtype_counts() + expected = Series({'int64': 1, datetime64name: 2, objectname: 2}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + # check with ndarray construction ndim==0 (e.g. we are passing a ndim 0 + # ndarray with a dtype specified) + df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', + floatname: np.array(1., dtype=floatname), + intname: np.array(1, dtype=intname)}, + index=np.arange(10)) + result = df.get_dtype_counts() + expected = {objectname: 1} + if intname == 'int64': + expected['int64'] = 2 + else: + expected['int64'] = 1 + expected[intname] = 1 + if floatname == 'float64': + expected['float64'] = 2 + else: + expected['float64'] = 1 + expected[floatname] = 1 + + result.sort_index() + expected = Series(expected) + expected.sort_index() + assert_series_equal(result, expected) + + # check with ndarray construction ndim>0 + df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', + floatname: np.array([1.] 
* 10, dtype=floatname), + intname: np.array([1] * 10, dtype=intname)}, + index=np.arange(10)) + result = df.get_dtype_counts() + result.sort_index() + assert_series_equal(result, expected) + + # GH 2809 + ind = date_range(start="2000-01-01", freq="D", periods=10) + datetimes = [ts.to_pydatetime() for ts in ind] + datetime_s = Series(datetimes) + self.assertEqual(datetime_s.dtype, 'M8[ns]') + df = DataFrame({'datetime_s': datetime_s}) + result = df.get_dtype_counts() + expected = Series({datetime64name: 1}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + # GH 2810 + ind = date_range(start="2000-01-01", freq="D", periods=10) + datetimes = [ts.to_pydatetime() for ts in ind] + dates = [ts.date() for ts in ind] + df = DataFrame({'datetimes': datetimes, 'dates': dates}) + result = df.get_dtype_counts() + expected = Series({datetime64name: 1, objectname: 1}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + # GH 7594 + # don't coerce tz-aware + import pytz + tz = pytz.timezone('US/Eastern') + dt = tz.localize(datetime(2012, 1, 1)) + + df = DataFrame({'End Date': dt}, index=[0]) + self.assertEqual(df.iat[0, 0], dt) + assert_series_equal(df.dtypes, Series( + {'End Date': 'datetime64[ns, US/Eastern]'})) + + df = DataFrame([{'End Date': dt}]) + self.assertEqual(df.iat[0, 0], dt) + assert_series_equal(df.dtypes, Series( + {'End Date': 'datetime64[ns, US/Eastern]'})) + + # tz-aware (UTC and other tz's) + # GH 8411 + dr = date_range('20130101', periods=3) + df = DataFrame({'value': dr}) + self.assertTrue(df.iat[0, 0].tz is None) + dr = date_range('20130101', periods=3, tz='UTC') + df = DataFrame({'value': dr}) + self.assertTrue(str(df.iat[0, 0].tz) == 'UTC') + dr = date_range('20130101', periods=3, tz='US/Eastern') + df = DataFrame({'value': dr}) + self.assertTrue(str(df.iat[0, 0].tz) == 'US/Eastern') + + # GH 7822 + # preserve an index with a tz on dict construction + i = date_range('1/1/2011',
periods=5, freq='10s', tz='US/Eastern') + + expected = DataFrame( + {'a': i.to_series(keep_tz=True).reset_index(drop=True)}) + df = DataFrame() + df['a'] = i + assert_frame_equal(df, expected) + + df = DataFrame({'a': i}) + assert_frame_equal(df, expected) + + # multiples + i_no_tz = date_range('1/1/2011', periods=5, freq='10s') + df = DataFrame({'a': i, 'b': i_no_tz}) + expected = DataFrame({'a': i.to_series(keep_tz=True) + .reset_index(drop=True), 'b': i_no_tz}) + assert_frame_equal(df, expected) + + def test_constructor_with_datetime_tz(self): + + # 8260 + # support datetime64 with tz + + idx = Index(date_range('20130101', periods=3, tz='US/Eastern'), + name='foo') + dr = date_range('20130110', periods=3) + + # construction + df = DataFrame({'A': idx, 'B': dr}) + self.assertEqual(df['A'].dtype, 'datetime64[ns, US/Eastern]') + self.assertTrue(df['A'].name == 'A') + assert_series_equal(df['A'], Series(idx, name='A')) + assert_series_equal(df['B'], Series(dr, name='B')) + + # construction from dict + df2 = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'), + B=Timestamp('20130603', tz='CET')), + index=range(5)) + assert_series_equal(df2.dtypes, Series(['datetime64[ns, US/Eastern]', + 'datetime64[ns, CET]'], + index=['A', 'B'])) + + # dtypes + tzframe = DataFrame({'A': date_range('20130101', periods=3), + 'B': date_range('20130101', periods=3, + tz='US/Eastern'), + 'C': date_range('20130101', periods=3, tz='CET')}) + tzframe.iloc[1, 1] = pd.NaT + tzframe.iloc[1, 2] = pd.NaT + result = tzframe.dtypes.sort_index() + expected = Series([np.dtype('datetime64[ns]'), + DatetimeTZDtype('datetime64[ns, US/Eastern]'), + DatetimeTZDtype('datetime64[ns, CET]')], + ['A', 'B', 'C']) + assert_series_equal(result, expected) + + # concat + df3 = pd.concat([df2.A.to_frame(), df2.B.to_frame()], axis=1) + assert_frame_equal(df2, df3) + + # select_dtypes + result = df3.select_dtypes(include=['datetime64[ns]']) + expected = df3.reindex(columns=[]) + assert_frame_equal(result, expected) + + # this will select based on issubclass,
and these are the same class + result = df3.select_dtypes(include=['datetime64[ns, CET]']) + expected = df3 + assert_frame_equal(result, expected) + + # from index + idx2 = date_range('20130101', periods=3, tz='US/Eastern', name='foo') + df2 = DataFrame(idx2) + assert_series_equal(df2['foo'], Series(idx2, name='foo')) + df2 = DataFrame(Series(idx2)) + assert_series_equal(df2['foo'], Series(idx2, name='foo')) + + idx2 = date_range('20130101', periods=3, tz='US/Eastern') + df2 = DataFrame(idx2) + assert_series_equal(df2[0], Series(idx2, name=0)) + df2 = DataFrame(Series(idx2)) + assert_series_equal(df2[0], Series(idx2, name=0)) + + # interleave with object + result = self.tzframe.assign(D='foo').values + expected = np.array([[Timestamp('2013-01-01 00:00:00'), + Timestamp('2013-01-02 00:00:00'), + Timestamp('2013-01-03 00:00:00')], + [Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern'), + pd.NaT, + Timestamp('2013-01-03 00:00:00-0500', + tz='US/Eastern')], + [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), + pd.NaT, + Timestamp('2013-01-03 00:00:00+0100', tz='CET')], + ['foo', 'foo', 'foo']], dtype=object).T + self.assert_numpy_array_equal(result, expected) + + # interleave with only datetime64[ns] + result = self.tzframe.values + expected = np.array([[Timestamp('2013-01-01 00:00:00'), + Timestamp('2013-01-02 00:00:00'), + Timestamp('2013-01-03 00:00:00')], + [Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern'), + pd.NaT, + Timestamp('2013-01-03 00:00:00-0500', + tz='US/Eastern')], + [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), + pd.NaT, + Timestamp('2013-01-03 00:00:00+0100', + tz='CET')]], dtype=object).T + self.assert_numpy_array_equal(result, expected) + + # astype + expected = np.array([[Timestamp('2013-01-01 00:00:00'), + Timestamp('2013-01-02 00:00:00'), + Timestamp('2013-01-03 00:00:00')], + [Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern'), + pd.NaT, + Timestamp('2013-01-03 00:00:00-0500', + tz='US/Eastern')], + [Timestamp('2013-01-01 
00:00:00+0100', tz='CET'), + pd.NaT, + Timestamp('2013-01-03 00:00:00+0100', + tz='CET')]], + dtype=object).T + result = self.tzframe.astype(object) + assert_frame_equal(result, DataFrame( + expected, index=self.tzframe.index, columns=self.tzframe.columns)) + + result = self.tzframe.astype('datetime64[ns]') + expected = DataFrame({'A': date_range('20130101', periods=3), + 'B': (date_range('20130101', periods=3, + tz='US/Eastern') + .tz_convert('UTC') + .tz_localize(None)), + 'C': (date_range('20130101', periods=3, + tz='CET') + .tz_convert('UTC') + .tz_localize(None))}) + expected.iloc[1, 1] = pd.NaT + expected.iloc[1, 2] = pd.NaT + assert_frame_equal(result, expected) + + # str formatting + result = self.tzframe.astype(str) + expected = np.array([['2013-01-01', '2013-01-01 00:00:00-05:00', + '2013-01-01 00:00:00+01:00'], + ['2013-01-02', 'NaT', 'NaT'], + ['2013-01-03', '2013-01-03 00:00:00-05:00', + '2013-01-03 00:00:00+01:00']], dtype=object) + self.assert_numpy_array_equal(result, expected) + + result = str(self.tzframe) + self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 ' + '2013-01-01 00:00:00+01:00' in result) + self.assertTrue('1 2013-01-02 ' + 'NaT NaT' in result) + self.assertTrue('2 2013-01-03 2013-01-03 00:00:00-05:00 ' + '2013-01-03 00:00:00+01:00' in result) + + # setitem + df['C'] = idx + assert_series_equal(df['C'], Series(idx, name='C')) + + df['D'] = 'foo' + df['D'] = idx + assert_series_equal(df['D'], Series(idx, name='D')) + del df['D'] + + # assert that A & C are not sharing the same base (e.g. 
they + # are copies) + b1 = df._data.blocks[1] + b2 = df._data.blocks[2] + self.assertTrue(b1.values.equals(b2.values)) + self.assertFalse(id(b1.values.values.base) == + id(b2.values.values.base)) + + # with nan + df2 = df.copy() + df2.iloc[1, 1] = pd.NaT + df2.iloc[1, 2] = pd.NaT + result = df2['B'] + assert_series_equal(notnull(result), Series( + [True, False, True], name='B')) + assert_series_equal(df2.dtypes, df.dtypes) + + # set/reset + df = DataFrame({'A': [0, 1, 2]}, index=idx) + result = df.reset_index() + self.assertEqual(result['foo'].dtype, 'datetime64[ns, US/Eastern]') + + result = result.set_index('foo') + tm.assert_index_equal(result.index, idx) + + def test_constructor_for_list_with_dtypes(self): + # TODO(wesm): unused + intname = np.dtype(np.int_).name # noqa + floatname = np.dtype(np.float_).name # noqa + datetime64name = np.dtype('M8[ns]').name + objectname = np.dtype(np.object_).name + + # test list of lists/ndarrays + df = DataFrame([np.arange(5) for x in range(5)]) + result = df.get_dtype_counts() + expected = Series({'int64': 5}) + + df = DataFrame([np.array(np.arange(5), dtype='int32') + for x in range(5)]) + result = df.get_dtype_counts() + expected = Series({'int32': 5}) + + # overflow issue?
(we always expect int64 upcasting here) + df = DataFrame({'a': [2 ** 31, 2 ** 31 + 1]}) + result = df.get_dtype_counts() + expected = Series({'int64': 1}) + assert_series_equal(result, expected) + + # GH #2751 (construction with no index specified), make sure we cast to + # platform values + df = DataFrame([1, 2]) + result = df.get_dtype_counts() + expected = Series({'int64': 1}) + assert_series_equal(result, expected) + + df = DataFrame([1., 2.]) + result = df.get_dtype_counts() + expected = Series({'float64': 1}) + assert_series_equal(result, expected) + + df = DataFrame({'a': [1, 2]}) + result = df.get_dtype_counts() + expected = Series({'int64': 1}) + assert_series_equal(result, expected) + + df = DataFrame({'a': [1., 2.]}) + result = df.get_dtype_counts() + expected = Series({'float64': 1}) + assert_series_equal(result, expected) + + df = DataFrame({'a': 1}, index=lrange(3)) + result = df.get_dtype_counts() + expected = Series({'int64': 1}) + assert_series_equal(result, expected) + + df = DataFrame({'a': 1.}, index=lrange(3)) + result = df.get_dtype_counts() + expected = Series({'float64': 1}) + assert_series_equal(result, expected) + + # with object list + df = DataFrame({'a': [1, 2, 4, 7], 'b': [1.2, 2.3, 5.1, 6.3], + 'c': list('abcd'), + 'd': [datetime(2000, 1, 1) for i in range(4)], + 'e': [1., 2, 4., 7]}) + result = df.get_dtype_counts() + expected = Series( + {'int64': 1, 'float64': 2, datetime64name: 1, objectname: 1}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + def test_constructor_frame_copy(self): + cop = DataFrame(self.frame, copy=True) + cop['A'] = 5 + self.assertTrue((cop['A'] == 5).all()) + self.assertFalse((self.frame['A'] == 5).all()) + + def test_constructor_ndarray_copy(self): + df = DataFrame(self.frame.values) + + self.frame.values[5] = 5 + self.assertTrue((df.values[5] == 5).all()) + + df = DataFrame(self.frame.values, copy=True) + self.frame.values[6] = 6 + self.assertFalse((df.values[6] ==
6).all()) + + def test_constructor_series_copy(self): + series = self.frame._series + + df = DataFrame({'A': series['A']}) + df['A'][:] = 5 + + self.assertFalse((series['A'] == 5).all()) + + def test_constructor_with_nas(self): + # GH 5016 + # na's in indices + + def check(df): + for i in range(len(df.columns)): + df.iloc[:, i] + + # allow single nans to succeed + indexer = np.arange(len(df.columns))[isnull(df.columns)] + + if len(indexer) == 1: + assert_series_equal(df.iloc[:, indexer[0]], df.loc[:, np.nan]) + + # multiple nans should fail + else: + + def f(): + df.loc[:, np.nan] + self.assertRaises(TypeError, f) + + df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[1, np.nan]) + check(df) + + df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[1.1, 2.2, np.nan]) + check(df) + + df = DataFrame([[0, 1, 2, 3], [4, 5, 6, 7]], + columns=[np.nan, 1.1, 2.2, np.nan]) + check(df) + + df = DataFrame([[0.0, 1, 2, 3.0], [4, 5, 6, 7]], + columns=[np.nan, 1.1, 2.2, np.nan]) + check(df) + + def test_constructor_lists_to_object_dtype(self): + # from #1074 + d = DataFrame({'a': [np.nan, False]}) + self.assertEqual(d['a'].dtype, np.object_) + self.assertFalse(d['a'][1]) + + def test_from_records_to_records(self): + # from numpy documentation + arr = np.zeros((2,), dtype=('i4,f4,a10')) + arr[:] = [(1, 2., 'Hello'), (2, 3., "World")] + + # TODO(wesm): unused + frame = DataFrame.from_records(arr) # noqa + + index = np.arange(len(arr))[::-1] + indexed_frame = DataFrame.from_records(arr, index=index) + self.assert_numpy_array_equal(indexed_frame.index, index) + + # without names, it should go to last ditch + arr2 = np.zeros((2, 3)) + assert_frame_equal(DataFrame.from_records(arr2), DataFrame(arr2)) + + # wrong length + msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)' + with assertRaisesRegexp(ValueError, msg): + DataFrame.from_records(arr, index=index[:-1]) + + indexed_frame = DataFrame.from_records(arr, index='f1') + + # what to do?
+ records = indexed_frame.to_records() + self.assertEqual(len(records.dtype.names), 3) + + records = indexed_frame.to_records(index=False) + self.assertEqual(len(records.dtype.names), 2) + self.assertNotIn('index', records.dtype.names) + + def test_from_records_nones(self): + tuples = [(1, 2, None, 3), + (1, 2, None, 3), + (None, 2, 5, 3)] + + df = DataFrame.from_records(tuples, columns=['a', 'b', 'c', 'd']) + self.assertTrue(np.isnan(df['c'][0])) + + def test_from_records_iterator(self): + arr = np.array([(1.0, 1.0, 2, 2), (3.0, 3.0, 4, 4), (5., 5., 6, 6), + (7., 7., 8, 8)], + dtype=[('x', np.float64), ('u', np.float32), + ('y', np.int64), ('z', np.int32)]) + df = DataFrame.from_records(iter(arr), nrows=2) + xp = DataFrame({'x': np.array([1.0, 3.0], dtype=np.float64), + 'u': np.array([1.0, 3.0], dtype=np.float32), + 'y': np.array([2, 4], dtype=np.int64), + 'z': np.array([2, 4], dtype=np.int32)}) + assert_frame_equal(df.reindex_like(xp), xp) + + # no dtypes specified here, so just compare with the default + arr = [(1.0, 2), (3.0, 4), (5., 6), (7., 8)] + df = DataFrame.from_records(iter(arr), columns=['x', 'y'], + nrows=2) + assert_frame_equal(df, xp.reindex( + columns=['x', 'y']), check_dtype=False) + + def test_from_records_tuples_generator(self): + def tuple_generator(length): + for i in range(length): + letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' + yield (i, letters[i % len(letters)], i / length) + + columns_names = ['Integer', 'String', 'Float'] + columns = [[i[j] for i in tuple_generator( + 10)] for j in range(len(columns_names))] + data = {'Integer': columns[0], + 'String': columns[1], 'Float': columns[2]} + expected = DataFrame(data, columns=columns_names) + + generator = tuple_generator(10) + result = DataFrame.from_records(generator, columns=columns_names) + assert_frame_equal(result, expected) + + def test_from_records_lists_generator(self): + def list_generator(length): + for i in range(length): + letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' + yield [i, letters[i % 
len(letters)], i / length] + + columns_names = ['Integer', 'String', 'Float'] + columns = [[i[j] for i in list_generator( + 10)] for j in range(len(columns_names))] + data = {'Integer': columns[0], + 'String': columns[1], 'Float': columns[2]} + expected = DataFrame(data, columns=columns_names) + + generator = list_generator(10) + result = DataFrame.from_records(generator, columns=columns_names) + assert_frame_equal(result, expected) + + def test_from_records_columns_not_modified(self): + tuples = [(1, 2, 3), + (1, 2, 3), + (2, 5, 3)] + + columns = ['a', 'b', 'c'] + original_columns = list(columns) + + df = DataFrame.from_records(tuples, columns=columns, index='a') # noqa + + self.assertEqual(columns, original_columns) + + def test_from_records_decimal(self): + from decimal import Decimal + + tuples = [(Decimal('1.5'),), (Decimal('2.5'),), (None,)] + + df = DataFrame.from_records(tuples, columns=['a']) + self.assertEqual(df['a'].dtype, object) + + df = DataFrame.from_records(tuples, columns=['a'], coerce_float=True) + self.assertEqual(df['a'].dtype, np.float64) + self.assertTrue(np.isnan(df['a'].values[-1])) + + def test_from_records_duplicates(self): + result = DataFrame.from_records([(1, 2, 3), (4, 5, 6)], + columns=['a', 'b', 'a']) + + expected = DataFrame([(1, 2, 3), (4, 5, 6)], + columns=['a', 'b', 'a']) + + assert_frame_equal(result, expected) + + def test_from_records_set_index_name(self): + def create_dict(order_id): + return {'order_id': order_id, 'quantity': np.random.randint(1, 10), + 'price': np.random.randint(1, 10)} + documents = [create_dict(i) for i in range(10)] + # demo missing data + documents.append({'order_id': 10, 'quantity': 5}) + + result = DataFrame.from_records(documents, index='order_id') + self.assertEqual(result.index.name, 'order_id') + + # MultiIndex + result = DataFrame.from_records(documents, + index=['order_id', 'quantity']) + self.assertEqual(result.index.names, ('order_id', 'quantity')) + + def 
test_from_records_misc_brokenness(self): + # #2179 + + data = {1: ['foo'], 2: ['bar']} + + result = DataFrame.from_records(data, columns=['a', 'b']) + exp = DataFrame(data, columns=['a', 'b']) + assert_frame_equal(result, exp) + + # overlap in index/index_names + + data = {'a': [1, 2, 3], 'b': [4, 5, 6]} + + result = DataFrame.from_records(data, index=['a', 'b', 'c']) + exp = DataFrame(data, index=['a', 'b', 'c']) + assert_frame_equal(result, exp) + + # GH 2623 + rows = [] + rows.append([datetime(2010, 1, 1), 1]) + rows.append([datetime(2010, 1, 2), 'hi']) # test col upconverts to obj + df2_obj = DataFrame.from_records(rows, columns=['date', 'test']) + results = df2_obj.get_dtype_counts() + expected = Series({'datetime64[ns]': 1, 'object': 1}) + assert_series_equal(results, expected) + + rows = [] + rows.append([datetime(2010, 1, 1), 1]) + rows.append([datetime(2010, 1, 2), 1]) + df2_obj = DataFrame.from_records(rows, columns=['date', 'test']) + results = df2_obj.get_dtype_counts() + expected = Series({'datetime64[ns]': 1, 'int64': 1}) + assert_series_equal(results, expected) + + def test_from_records_empty(self): + # 3562 + result = DataFrame.from_records([], columns=['a', 'b', 'c']) + expected = DataFrame(columns=['a', 'b', 'c']) + assert_frame_equal(result, expected) + + result = DataFrame.from_records([], columns=['a', 'b', 'b']) + expected = DataFrame(columns=['a', 'b', 'b']) + assert_frame_equal(result, expected) + + def test_from_records_empty_with_nonempty_fields_gh3682(self): + a = np.array([(1, 2)], dtype=[('id', np.int64), ('value', np.int64)]) + df = DataFrame.from_records(a, index='id') + assert_numpy_array_equal(df.index, Index([1], name='id')) + self.assertEqual(df.index.name, 'id') + assert_numpy_array_equal(df.columns, Index(['value'])) + + b = np.array([], dtype=[('id', np.int64), ('value', np.int64)]) + df = DataFrame.from_records(b, index='id') + assert_numpy_array_equal(df.index, Index([], name='id')) + self.assertEqual(df.index.name, 'id') + + def
test_from_records_with_datetimes(self): + + # this may fail on certain platforms because of a numpy issue + # related GH6140 + if not is_little_endian(): + raise nose.SkipTest("known failure of test on non-little endian") + + # construction with a null in a recarray + # GH 6140 + expected = DataFrame({'EXPIRY': [datetime(2005, 3, 1, 0, 0), None]}) + + arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])] + dtypes = [('EXPIRY', '<M8[ns]')] + + try: + recarray = np.core.records.fromarrays(arrdata, dtype=dtypes) + except (ValueError): + raise nose.SkipTest("known failure of numpy rec array creation") + + result = DataFrame.from_records(recarray) + assert_frame_equal(result, expected) + + # coercion should work too + arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])] + dtypes = [('EXPIRY', '<M8[m]')] + recarray = np.core.records.fromarrays(arrdata, dtype=dtypes) + result = DataFrame.from_records(recarray) + assert_frame_equal(result, expected) + + def test_from_records_sequencelike(self): + df = DataFrame({'A': np.array(np.random.randn(6), dtype=np.float64), + 'A1': np.array(np.random.randn(6), dtype=np.float64), + 'B': np.array(np.arange(6), dtype=np.int64), + 'C': ['foo'] * 6, + 'D': np.array([True, False] * 3, dtype=bool), + 'E': np.array(np.random.randn(6), dtype=np.float32), + 'E1': np.array(np.random.randn(6), dtype=np.float32), + 'F': np.array(np.arange(6), dtype=np.int32)}) + + # this is actually tricky to create the recordlike arrays and + # have the dtypes be intact + blocks = df.blocks + tuples = [] + columns = [] + dtypes = [] + for dtype, b in compat.iteritems(blocks): + columns.extend(b.columns) + dtypes.extend([(c, np.dtype(dtype).descr[0][1]) + for c in b.columns]) + for i in range(len(df.index)): + tup = [] + for _, b in compat.iteritems(blocks): + tup.extend(b.iloc[i].values) + tuples.append(tuple(tup)) + + recarray = np.array(tuples, dtype=dtypes).view(np.recarray) + recarray2 = df.to_records() + lists = [list(x) for x in tuples] + + # 
tuples (lose the dtype info)
+        result = (DataFrame.from_records(tuples, columns=columns)
+                  .reindex(columns=df.columns))
+
+        # created recarray and with to_records recarray (have dtype info)
+        result2 = (DataFrame.from_records(recarray, columns=columns)
+                   .reindex(columns=df.columns))
+        result3 = (DataFrame.from_records(recarray2, columns=columns)
+                   .reindex(columns=df.columns))
+
+        # list of tuples (no dtype info)
+        result4 = (DataFrame.from_records(lists, columns=columns)
+                   .reindex(columns=df.columns))
+
+        assert_frame_equal(result, df, check_dtype=False)
+        assert_frame_equal(result2, df)
+        assert_frame_equal(result3, df)
+        assert_frame_equal(result4, df, check_dtype=False)
+
+        # tuples is in the order of the columns
+        result = DataFrame.from_records(tuples)
+        self.assert_numpy_array_equal(result.columns, lrange(8))
+
+        # test exclude parameter & we are casting the results here (as we don't
+        # have dtype info to recover)
+        columns_to_test = [columns.index('C'), columns.index('E1')]
+
+        exclude = list(set(range(8)) - set(columns_to_test))
+        result = DataFrame.from_records(tuples, exclude=exclude)
+        result.columns = [columns[i] for i in sorted(columns_to_test)]
+        assert_series_equal(result['C'], df['C'])
+        assert_series_equal(result['E1'], df['E1'].astype('float64'))
+
+        # empty case
+        result = DataFrame.from_records([], columns=['foo', 'bar', 'baz'])
+        self.assertEqual(len(result), 0)
+        self.assert_numpy_array_equal(result.columns, ['foo', 'bar', 'baz'])
+
+        result = DataFrame.from_records([])
+        self.assertEqual(len(result), 0)
+        self.assertEqual(len(result.columns), 0)
+
+    def test_from_records_dictlike(self):
+
+        # test the dict methods
+        df = DataFrame({'A': np.array(np.random.randn(6), dtype=np.float64),
+                        'A1': np.array(np.random.randn(6), dtype=np.float64),
+                        'B': np.array(np.arange(6), dtype=np.int64),
+                        'C': ['foo'] * 6,
+                        'D': np.array([True, False] * 3, dtype=bool),
+                        'E': np.array(np.random.randn(6), dtype=np.float32),
+                        'E1':
np.array(np.random.randn(6), dtype=np.float32), + 'F': np.array(np.arange(6), dtype=np.int32)}) + + # columns is in a different order here than the actual items iterated + # from the dict + columns = [] + for dtype, b in compat.iteritems(df.blocks): + columns.extend(b.columns) + + asdict = dict((x, y) for x, y in compat.iteritems(df)) + asdict2 = dict((x, y.values) for x, y in compat.iteritems(df)) + + # dict of series & dict of ndarrays (have dtype info) + results = [] + results.append(DataFrame.from_records( + asdict).reindex(columns=df.columns)) + results.append(DataFrame.from_records(asdict, columns=columns) + .reindex(columns=df.columns)) + results.append(DataFrame.from_records(asdict2, columns=columns) + .reindex(columns=df.columns)) + + for r in results: + assert_frame_equal(r, df) + + def test_from_records_with_index_data(self): + df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) + + data = np.random.randn(10) + df1 = DataFrame.from_records(df, index=data) + assert(df1.index.equals(Index(data))) + + def test_from_records_bad_index_column(self): + df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) + + # should pass + df1 = DataFrame.from_records(df, index=['C']) + assert(df1.index.equals(Index(df.C))) + + df1 = DataFrame.from_records(df, index='C') + assert(df1.index.equals(Index(df.C))) + + # should fail + self.assertRaises(ValueError, DataFrame.from_records, df, index=[2]) + self.assertRaises(KeyError, DataFrame.from_records, df, index=2) + + def test_from_records_non_tuple(self): + class Record(object): + + def __init__(self, *args): + self.args = args + + def __getitem__(self, i): + return self.args[i] + + def __iter__(self): + return iter(self.args) + + recs = [Record(1, 2, 3), Record(4, 5, 6), Record(7, 8, 9)] + tups = lmap(tuple, recs) + + result = DataFrame.from_records(recs) + expected = DataFrame.from_records(tups) + assert_frame_equal(result, expected) + + def test_from_records_len0_with_columns(self): + # #2633 + result 
= DataFrame.from_records([], index='foo', + columns=['foo', 'bar']) + + self.assertTrue(np.array_equal(result.columns, ['bar'])) + self.assertEqual(len(result), 0) + self.assertEqual(result.index.name, 'foo') diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py new file mode 100644 index 0000000000000..8bb253e17fd06 --- /dev/null +++ b/pandas/tests/frame/test_convert_to.py @@ -0,0 +1,174 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from numpy import nan +import numpy as np + +from pandas import compat +from pandas import (DataFrame, Series, MultiIndex, Timestamp, + date_range) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameConvertTo(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_to_dict(self): + test_data = { + 'A': {'1': 1, '2': 2}, + 'B': {'1': '1', '2': '2', '3': '3'}, + } + recons_data = DataFrame(test_data).to_dict() + + for k, v in compat.iteritems(test_data): + for k2, v2 in compat.iteritems(v): + self.assertEqual(v2, recons_data[k][k2]) + + recons_data = DataFrame(test_data).to_dict("l") + + for k, v in compat.iteritems(test_data): + for k2, v2 in compat.iteritems(v): + self.assertEqual(v2, recons_data[k][int(k2) - 1]) + + recons_data = DataFrame(test_data).to_dict("s") + + for k, v in compat.iteritems(test_data): + for k2, v2 in compat.iteritems(v): + self.assertEqual(v2, recons_data[k][k2]) + + recons_data = DataFrame(test_data).to_dict("sp") + + expected_split = {'columns': ['A', 'B'], 'index': ['1', '2', '3'], + 'data': [[1.0, '1'], [2.0, '2'], [nan, '3']]} + + tm.assert_almost_equal(recons_data, expected_split) + + recons_data = DataFrame(test_data).to_dict("r") + + expected_records = [{'A': 1.0, 'B': '1'}, + {'A': 2.0, 'B': '2'}, + {'A': nan, 'B': '3'}] + + tm.assert_almost_equal(recons_data, expected_records) + + # GH10844 + recons_data = DataFrame(test_data).to_dict("i") + + for k, v in 
compat.iteritems(test_data): + for k2, v2 in compat.iteritems(v): + self.assertEqual(v2, recons_data[k2][k]) + + def test_to_dict_timestamp(self): + + # GH11247 + # split/records producing np.datetime64 rather than Timestamps + # on datetime64[ns] dtypes only + + tsmp = Timestamp('20130101') + test_data = DataFrame({'A': [tsmp, tsmp], 'B': [tsmp, tsmp]}) + test_data_mixed = DataFrame({'A': [tsmp, tsmp], 'B': [1, 2]}) + + expected_records = [{'A': tsmp, 'B': tsmp}, + {'A': tsmp, 'B': tsmp}] + expected_records_mixed = [{'A': tsmp, 'B': 1}, + {'A': tsmp, 'B': 2}] + + tm.assert_almost_equal(test_data.to_dict( + orient='records'), expected_records) + tm.assert_almost_equal(test_data_mixed.to_dict( + orient='records'), expected_records_mixed) + + expected_series = { + 'A': Series([tsmp, tsmp]), + 'B': Series([tsmp, tsmp]), + } + expected_series_mixed = { + 'A': Series([tsmp, tsmp]), + 'B': Series([1, 2]), + } + + tm.assert_almost_equal(test_data.to_dict( + orient='series'), expected_series) + tm.assert_almost_equal(test_data_mixed.to_dict( + orient='series'), expected_series_mixed) + + expected_split = { + 'index': [0, 1], + 'data': [[tsmp, tsmp], + [tsmp, tsmp]], + 'columns': ['A', 'B'] + } + expected_split_mixed = { + 'index': [0, 1], + 'data': [[tsmp, 1], + [tsmp, 2]], + 'columns': ['A', 'B'] + } + + tm.assert_almost_equal(test_data.to_dict( + orient='split'), expected_split) + tm.assert_almost_equal(test_data_mixed.to_dict( + orient='split'), expected_split_mixed) + + def test_to_dict_invalid_orient(self): + df = DataFrame({'A': [0, 1]}) + self.assertRaises(ValueError, df.to_dict, orient='xinvalid') + + def test_to_records_dt64(self): + df = DataFrame([["one", "two", "three"], + ["four", "five", "six"]], + index=date_range("2012-01-01", "2012-01-02")) + self.assertEqual(df.to_records()['index'][0], df.index[0]) + + rs = df.to_records(convert_datetime64=False) + self.assertEqual(rs['index'][0], df.index.values[0]) + + def test_to_records_with_multindex(self): + # 
GH3189 + index = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], + ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']] + data = np.zeros((8, 4)) + df = DataFrame(data, index=index) + r = df.to_records(index=True)['level_0'] + self.assertTrue('bar' in r) + self.assertTrue('one' not in r) + + def test_to_records_with_Mapping_type(self): + import email + from email.parser import Parser + import collections + + collections.Mapping.register(email.message.Message) + + headers = Parser().parsestr('From: <user@example.com>\n' + 'To: <someone_else@example.com>\n' + 'Subject: Test message\n' + '\n' + 'Body would go here\n') + + frame = DataFrame.from_records([headers]) + all(x in frame for x in ['Type', 'Subject', 'From']) + + def test_to_records_floats(self): + df = DataFrame(np.random.rand(10, 10)) + df.to_records() + + def test_to_records_index_name(self): + df = DataFrame(np.random.randn(3, 3)) + df.index.name = 'X' + rs = df.to_records() + self.assertIn('X', rs.dtype.fields) + + df = DataFrame(np.random.randn(3, 3)) + rs = df.to_records() + self.assertIn('index', rs.dtype.fields) + + df.index = MultiIndex.from_tuples([('a', 'x'), ('a', 'y'), ('b', 'z')]) + df.index.names = ['A', None] + rs = df.to_records() + self.assertIn('level_0', rs.dtype.fields) diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py new file mode 100644 index 0000000000000..97ca8238b78f9 --- /dev/null +++ b/pandas/tests/frame/test_dtypes.py @@ -0,0 +1,396 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import timedelta + +import numpy as np + +from pandas import (DataFrame, Series, date_range, Timedelta, Timestamp, + compat, option_context) +from pandas.compat import u +from pandas.tests.frame.common import TestData +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + makeCustomDataframe as mkdf) +import pandas.util.testing as tm +import pandas as pd + + +class 
TestDataFrameDataTypes(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_concat_empty_dataframe_dtypes(self): + df = DataFrame(columns=list("abc")) + df['a'] = df['a'].astype(np.bool_) + df['b'] = df['b'].astype(np.int32) + df['c'] = df['c'].astype(np.float64) + + result = pd.concat([df, df]) + self.assertEqual(result['a'].dtype, np.bool_) + self.assertEqual(result['b'].dtype, np.int32) + self.assertEqual(result['c'].dtype, np.float64) + + result = pd.concat([df, df.astype(np.float64)]) + self.assertEqual(result['a'].dtype, np.object_) + self.assertEqual(result['b'].dtype, np.float64) + self.assertEqual(result['c'].dtype, np.float64) + + def test_empty_frame_dtypes_ftypes(self): + empty_df = pd.DataFrame() + assert_series_equal(empty_df.dtypes, pd.Series(dtype=np.object)) + assert_series_equal(empty_df.ftypes, pd.Series(dtype=np.object)) + + nocols_df = pd.DataFrame(index=[1, 2, 3]) + assert_series_equal(nocols_df.dtypes, pd.Series(dtype=np.object)) + assert_series_equal(nocols_df.ftypes, pd.Series(dtype=np.object)) + + norows_df = pd.DataFrame(columns=list("abc")) + assert_series_equal(norows_df.dtypes, pd.Series( + np.object, index=list("abc"))) + assert_series_equal(norows_df.ftypes, pd.Series( + 'object:dense', index=list("abc"))) + + norows_int_df = pd.DataFrame(columns=list("abc")).astype(np.int32) + assert_series_equal(norows_int_df.dtypes, pd.Series( + np.dtype('int32'), index=list("abc"))) + assert_series_equal(norows_int_df.ftypes, pd.Series( + 'int32:dense', index=list("abc"))) + + odict = compat.OrderedDict + df = pd.DataFrame(odict([('a', 1), ('b', True), ('c', 1.0)]), + index=[1, 2, 3]) + ex_dtypes = pd.Series(odict([('a', np.int64), + ('b', np.bool), + ('c', np.float64)])) + ex_ftypes = pd.Series(odict([('a', 'int64:dense'), + ('b', 'bool:dense'), + ('c', 'float64:dense')])) + assert_series_equal(df.dtypes, ex_dtypes) + assert_series_equal(df.ftypes, ex_ftypes) + + # same but for empty slice of df + 
assert_series_equal(df[:0].dtypes, ex_dtypes) + assert_series_equal(df[:0].ftypes, ex_ftypes) + + def test_dtypes_are_correct_after_column_slice(self): + # GH6525 + df = pd.DataFrame(index=range(5), columns=list("abc"), dtype=np.float_) + odict = compat.OrderedDict + assert_series_equal(df.dtypes, + pd.Series(odict([('a', np.float_), + ('b', np.float_), + ('c', np.float_)]))) + assert_series_equal(df.iloc[:, 2:].dtypes, + pd.Series(odict([('c', np.float_)]))) + assert_series_equal(df.dtypes, + pd.Series(odict([('a', np.float_), + ('b', np.float_), + ('c', np.float_)]))) + + def test_select_dtypes_include(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.Categorical(list('abc'))}) + ri = df.select_dtypes(include=[np.number]) + ei = df[['b', 'c', 'd']] + assert_frame_equal(ri, ei) + + ri = df.select_dtypes(include=[np.number, 'category']) + ei = df[['b', 'c', 'd', 'f']] + assert_frame_equal(ri, ei) + + def test_select_dtypes_exclude(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True]}) + re = df.select_dtypes(exclude=[np.number]) + ee = df[['a', 'e']] + assert_frame_equal(re, ee) + + def test_select_dtypes_exclude_include(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.date_range('now', periods=3).values}) + exclude = np.datetime64, + include = np.bool_, 'integer' + r = df.select_dtypes(include=include, exclude=exclude) + e = df[['b', 'c', 'e']] + assert_frame_equal(r, e) + + exclude = 'datetime', + include = 'bool', 'int64', 'int32' + r = df.select_dtypes(include=include, exclude=exclude) + e = df[['b', 'e']] + assert_frame_equal(r, e) + + def 
test_select_dtypes_not_an_attr_but_still_valid_dtype(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.date_range('now', periods=3).values}) + df['g'] = df.f.diff() + assert not hasattr(np, 'u8') + r = df.select_dtypes(include=['i8', 'O'], exclude=['timedelta']) + e = df[['a', 'b']] + assert_frame_equal(r, e) + + r = df.select_dtypes(include=['i8', 'O', 'timedelta64[ns]']) + e = df[['a', 'b', 'g']] + assert_frame_equal(r, e) + + def test_select_dtypes_empty(self): + df = DataFrame({'a': list('abc'), 'b': list(range(1, 4))}) + with tm.assertRaisesRegexp(ValueError, 'at least one of include or ' + 'exclude must be nonempty'): + df.select_dtypes() + + def test_select_dtypes_raises_on_string(self): + df = DataFrame({'a': list('abc'), 'b': list(range(1, 4))}) + with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): + df.select_dtypes(include='object') + with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): + df.select_dtypes(exclude='object') + with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): + df.select_dtypes(include=int, exclude='object') + + def test_select_dtypes_bad_datetime64(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.date_range('now', periods=3).values}) + with tm.assertRaisesRegexp(ValueError, '.+ is too specific'): + df.select_dtypes(include=['datetime64[D]']) + + with tm.assertRaisesRegexp(ValueError, '.+ is too specific'): + df.select_dtypes(exclude=['datetime64[as]']) + + def test_select_dtypes_str_raises(self): + df = DataFrame({'a': list('abc'), + 'g': list(u('abc')), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': 
pd.date_range('now', periods=3).values}) + string_dtypes = set((str, 'str', np.string_, 'S1', + 'unicode', np.unicode_, 'U1')) + try: + string_dtypes.add(unicode) + except NameError: + pass + for dt in string_dtypes: + with tm.assertRaisesRegexp(TypeError, + 'string dtypes are not allowed'): + df.select_dtypes(include=[dt]) + with tm.assertRaisesRegexp(TypeError, + 'string dtypes are not allowed'): + df.select_dtypes(exclude=[dt]) + + def test_select_dtypes_bad_arg_raises(self): + df = DataFrame({'a': list('abc'), + 'g': list(u('abc')), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.date_range('now', periods=3).values}) + with tm.assertRaisesRegexp(TypeError, 'data type.*not understood'): + df.select_dtypes(['blargy, blarg, blarg']) + + def test_select_dtypes_typecodes(self): + # GH 11990 + df = mkdf(30, 3, data_gen_f=lambda x, y: np.random.random()) + expected = df + FLOAT_TYPES = list(np.typecodes['AllFloat']) + assert_frame_equal(df.select_dtypes(FLOAT_TYPES), expected) + + def test_dtypes_gh8722(self): + self.mixed_frame['bool'] = self.mixed_frame['A'] > 0 + result = self.mixed_frame.dtypes + expected = Series(dict((k, v.dtype) + for k, v in compat.iteritems(self.mixed_frame)), + index=result.index) + assert_series_equal(result, expected) + + # compat, GH 8722 + with option_context('use_inf_as_null', True): + df = DataFrame([[1]]) + result = df.dtypes + assert_series_equal(result, Series({0: np.dtype('int64')})) + + def test_ftypes(self): + frame = self.mixed_float + expected = Series(dict(A='float32:dense', + B='float32:dense', + C='float16:dense', + D='float64:dense')).sort_values() + result = frame.ftypes.sort_values() + assert_series_equal(result, expected) + + def test_astype(self): + casted = self.frame.astype(int) + expected = DataFrame(self.frame.values.astype(int), + index=self.frame.index, + columns=self.frame.columns) + assert_frame_equal(casted, 
expected) + + casted = self.frame.astype(np.int32) + expected = DataFrame(self.frame.values.astype(np.int32), + index=self.frame.index, + columns=self.frame.columns) + assert_frame_equal(casted, expected) + + self.frame['foo'] = '5' + casted = self.frame.astype(int) + expected = DataFrame(self.frame.values.astype(int), + index=self.frame.index, + columns=self.frame.columns) + assert_frame_equal(casted, expected) + + # mixed casting + def _check_cast(df, v): + self.assertEqual( + list(set([s.dtype.name + for _, s in compat.iteritems(df)]))[0], v) + + mn = self.all_mixed._get_numeric_data().copy() + mn['little_float'] = np.array(12345., dtype='float16') + mn['big_float'] = np.array(123456789101112., dtype='float64') + + casted = mn.astype('float64') + _check_cast(casted, 'float64') + + casted = mn.astype('int64') + _check_cast(casted, 'int64') + + casted = self.mixed_float.reindex(columns=['A', 'B']).astype('float32') + _check_cast(casted, 'float32') + + casted = mn.reindex(columns=['little_float']).astype('float16') + _check_cast(casted, 'float16') + + casted = self.mixed_float.reindex(columns=['A', 'B']).astype('float16') + _check_cast(casted, 'float16') + + casted = mn.astype('float32') + _check_cast(casted, 'float32') + + casted = mn.astype('int32') + _check_cast(casted, 'int32') + + # to object + casted = mn.astype('O') + _check_cast(casted, 'object') + + def test_astype_with_exclude_string(self): + df = self.frame.copy() + expected = self.frame.astype(int) + df['string'] = 'foo' + casted = df.astype(int, raise_on_error=False) + + expected['string'] = 'foo' + assert_frame_equal(casted, expected) + + df = self.frame.copy() + expected = self.frame.astype(np.int32) + df['string'] = 'foo' + casted = df.astype(np.int32, raise_on_error=False) + + expected['string'] = 'foo' + assert_frame_equal(casted, expected) + + def test_astype_with_view(self): + + tf = self.mixed_float.reindex(columns=['A', 'B', 'C']) + + casted = tf.astype(np.int64) + + casted = 
tf.astype(np.float32) + + # this is the only real reason to do it this way + tf = np.round(self.frame).astype(np.int32) + casted = tf.astype(np.float32, copy=False) + + # TODO(wesm): verification? + tf = self.frame.astype(np.float64) + casted = tf.astype(np.int64, copy=False) # noqa + + def test_astype_cast_nan_int(self): + df = DataFrame(data={"Values": [1.0, 2.0, 3.0, np.nan]}) + self.assertRaises(ValueError, df.astype, np.int64) + + def test_astype_str(self): + # GH9757 + a = Series(date_range('2010-01-04', periods=5)) + b = Series(date_range('3/6/2012 00:00', periods=5, tz='US/Eastern')) + c = Series([Timedelta(x, unit='d') for x in range(5)]) + d = Series(range(5)) + e = Series([0.0, 0.2, 0.4, 0.6, 0.8]) + + df = DataFrame({'a': a, 'b': b, 'c': c, 'd': d, 'e': e}) + + # datetimelike + # Test str and unicode on python 2.x and just str on python 3.x + for tt in set([str, compat.text_type]): + result = df.astype(tt) + + expected = DataFrame({ + 'a': list(map(tt, map(lambda x: Timestamp(x)._date_repr, + a._values))), + 'b': list(map(tt, map(Timestamp, b._values))), + 'c': list(map(tt, map(lambda x: Timedelta(x) + ._repr_base(format='all'), c._values))), + 'd': list(map(tt, d._values)), + 'e': list(map(tt, e._values)), + }) + + assert_frame_equal(result, expected) + + # float/nan + # 11302 + # consistency in astype(str) + for tt in set([str, compat.text_type]): + result = DataFrame([np.NaN]).astype(tt) + expected = DataFrame(['nan']) + assert_frame_equal(result, expected) + + result = DataFrame([1.12345678901234567890]).astype(tt) + expected = DataFrame(['1.12345678901']) + assert_frame_equal(result, expected) + + def test_timedeltas(self): + df = DataFrame(dict(A=Series(date_range('2012-1-1', periods=3, + freq='D')), + B=Series([timedelta(days=i) for i in range(3)]))) + result = df.get_dtype_counts().sort_values() + expected = Series( + {'datetime64[ns]': 1, 'timedelta64[ns]': 1}).sort_values() + assert_series_equal(result, expected) + + df['C'] = df['A'] + 
df['B'] + expected = Series( + {'datetime64[ns]': 2, 'timedelta64[ns]': 1}).sort_values() + result = df.get_dtype_counts().sort_values() + assert_series_equal(result, expected) + + # mixed int types + df['D'] = 1 + expected = Series({'datetime64[ns]': 2, + 'timedelta64[ns]': 1, + 'int64': 1}).sort_values() + result = df.get_dtype_counts().sort_values() + assert_series_equal(result, expected) diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py new file mode 100644 index 0000000000000..ff9567c8a40b1 --- /dev/null +++ b/pandas/tests/frame/test_indexing.py @@ -0,0 +1,2600 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, date, timedelta, time + +from pandas.compat import map, zip, range, lrange, lzip, long +from pandas import compat + +from numpy import nan +from numpy.random import randn +import numpy as np + +import pandas.core.common as com +from pandas import (DataFrame, Index, Series, notnull, isnull, + MultiIndex, DatetimeIndex, Timestamp, + date_range) +import pandas as pd + +from pandas.util.testing import (assert_almost_equal, + assert_numpy_array_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp, + assertRaises) +from pandas.core.indexing import IndexingError + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameIndexing(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_getitem(self): + # slicing + sl = self.frame[:20] + self.assertEqual(20, len(sl.index)) + + # column access + + for _, series in compat.iteritems(sl): + self.assertEqual(20, len(series.index)) + self.assertTrue(tm.equalContents(series.index, sl.index)) + + for key, _ in compat.iteritems(self.frame._series): + self.assertIsNotNone(self.frame[key]) + + self.assertNotIn('random', self.frame) + with assertRaisesRegexp(KeyError, 'random'): + self.frame['random'] + + df = self.frame.copy() + df['$10'] = 
randn(len(df)) + ad = randn(len(df)) + df['@awesome_domain'] = ad + self.assertRaises(KeyError, df.__getitem__, 'df["$10"]') + res = df['@awesome_domain'] + assert_numpy_array_equal(ad, res.values) + + def test_getitem_dupe_cols(self): + df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b']) + try: + df[['baf']] + except KeyError: + pass + else: + self.fail("Dataframe failed to raise KeyError") + + def test_get(self): + b = self.frame.get('B') + assert_series_equal(b, self.frame['B']) + + self.assertIsNone(self.frame.get('foo')) + assert_series_equal(self.frame.get('foo', self.frame['B']), + self.frame['B']) + # None + # GH 5652 + for df in [DataFrame(), DataFrame(columns=list('AB')), + DataFrame(columns=list('AB'), index=range(3))]: + result = df.get(None) + self.assertIsNone(result) + + def test_getitem_iterator(self): + idx = iter(['A', 'B', 'C']) + result = self.frame.ix[:, idx] + expected = self.frame.ix[:, ['A', 'B', 'C']] + assert_frame_equal(result, expected) + + def test_getitem_list(self): + self.frame.columns.name = 'foo' + + result = self.frame[['B', 'A']] + result2 = self.frame[Index(['B', 'A'])] + + expected = self.frame.ix[:, ['B', 'A']] + expected.columns.name = 'foo' + + assert_frame_equal(result, expected) + assert_frame_equal(result2, expected) + + self.assertEqual(result.columns.name, 'foo') + + with assertRaisesRegexp(KeyError, 'not in index'): + self.frame[['B', 'A', 'food']] + with assertRaisesRegexp(KeyError, 'not in index'): + self.frame[Index(['B', 'A', 'foo'])] + + # tuples + df = DataFrame(randn(8, 3), + columns=Index([('foo', 'bar'), ('baz', 'qux'), + ('peek', 'aboo')], name=['sth', 'sth2'])) + + result = df[[('foo', 'bar'), ('baz', 'qux')]] + expected = df.ix[:, :2] + assert_frame_equal(result, expected) + self.assertEqual(result.columns.names, ['sth', 'sth2']) + + def test_setitem_list(self): + + self.frame['E'] = 'foo' + data = self.frame[['A', 'B']] + self.frame[['B', 'A']] = data + + assert_series_equal(self.frame['B'], 
data['A'], check_names=False)
+        assert_series_equal(self.frame['A'], data['B'], check_names=False)
+
+        with assertRaisesRegexp(ValueError,
+                                'Columns must be same length as key'):
+            data[['A']] = self.frame[['A', 'B']]
+
+        with assertRaisesRegexp(ValueError, 'Length of values does not match '
+                                'length of index'):
+            data['A'] = range(len(data.index) - 1)
+
+        df = DataFrame(0, lrange(3), ['tt1', 'tt2'], dtype=np.int_)
+        df.ix[1, ['tt1', 'tt2']] = [1, 2]
+
+        result = df.ix[1, ['tt1', 'tt2']]
+        expected = Series([1, 2], df.columns, dtype=np.int_, name=1)
+        assert_series_equal(result, expected)
+
+        df['tt1'] = df['tt2'] = '0'
+        df.ix[1, ['tt1', 'tt2']] = ['1', '2']
+        result = df.ix[1, ['tt1', 'tt2']]
+        expected = Series(['1', '2'], df.columns, name=1)
+        assert_series_equal(result, expected)
+
+    def test_setitem_list_not_dataframe(self):
+        data = np.random.randn(len(self.frame), 2)
+        self.frame[['A', 'B']] = data
+        assert_almost_equal(self.frame[['A', 'B']].values, data)
+
+    def test_setitem_list_of_tuples(self):
+        tuples = lzip(self.frame['A'], self.frame['B'])
+        self.frame['tuples'] = tuples
+
+        result = self.frame['tuples']
+        expected = Series(tuples, index=self.frame.index, name='tuples')
+        assert_series_equal(result, expected)
+
+    def test_setitem_multi_index(self):
+        # GH7655, test that assigning to a sub-frame of a frame
+        # with multi-index columns aligns both rows and columns
+        it = ['jim', 'joe', 'jolie'], ['first', 'last'], \
+            ['left', 'center', 'right']
+
+        cols = MultiIndex.from_product(it)
+        index = pd.date_range('20141006', periods=20)
+        vals = np.random.randint(1, 1000, (len(index), len(cols)))
+        df = pd.DataFrame(vals, columns=cols, index=index)
+
+        i, j = df.index.values.copy(), it[-1][:]
+
+        np.random.shuffle(i)
+        df['jim'] = df['jolie'].loc[i, ::-1]
+        assert_frame_equal(df['jim'], df['jolie'])
+
+        np.random.shuffle(j)
+        df[('joe', 'first')] = df[('jolie', 'last')].loc[i, j]
+        assert_frame_equal(df[('joe', 'first')], df[('jolie', 'last')])
+
+
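Editor's note: the sub-frame assignments exercised above work because pandas aligns on labels, not on position, when setting into a frame. A minimal sketch of that principle with a plain index (illustrative only, not part of this diff):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]}, index=['x', 'y', 'z'])
s = pd.Series([30, 10, 20], index=['z', 'x', 'y'])  # deliberately shuffled

# Assignment aligns s on df's index, so the shuffled order is undone
df['b'] = s
assert df['b'].tolist() == [10, 20, 30]
```

The same alignment applies row- and column-wise when the right-hand side is a DataFrame, which is what the multi-index columns test above relies on.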
np.random.shuffle(j) + df[('joe', 'last')] = df[('jolie', 'first')].loc[i, j] + assert_frame_equal(df[('joe', 'last')], df[('jolie', 'first')]) + + def test_getitem_boolean(self): + # boolean indexing + d = self.tsframe.index[10] + indexer = self.tsframe.index > d + indexer_obj = indexer.astype(object) + + subindex = self.tsframe.index[indexer] + subframe = self.tsframe[indexer] + + self.assert_numpy_array_equal(subindex, subframe.index) + with assertRaisesRegexp(ValueError, 'Item wrong length'): + self.tsframe[indexer[:-1]] + + subframe_obj = self.tsframe[indexer_obj] + assert_frame_equal(subframe_obj, subframe) + + with tm.assertRaisesRegexp(ValueError, 'boolean values only'): + self.tsframe[self.tsframe] + + # test that Series work + indexer_obj = Series(indexer_obj, self.tsframe.index) + + subframe_obj = self.tsframe[indexer_obj] + assert_frame_equal(subframe_obj, subframe) + + # test that Series indexers reindex + with tm.assert_produces_warning(UserWarning): + indexer_obj = indexer_obj.reindex(self.tsframe.index[::-1]) + + subframe_obj = self.tsframe[indexer_obj] + assert_frame_equal(subframe_obj, subframe) + + # test df[df > 0] + for df in [self.tsframe, self.mixed_frame, + self.mixed_float, self.mixed_int]: + + data = df._get_numeric_data() + bif = df[df > 0] + bifw = DataFrame(dict([(c, np.where(data[c] > 0, data[c], np.nan)) + for c in data.columns]), + index=data.index, columns=data.columns) + + # add back other columns to compare + for c in df.columns: + if c not in bifw: + bifw[c] = df[c] + bifw = bifw.reindex(columns=df.columns) + + assert_frame_equal(bif, bifw, check_dtype=False) + for c in df.columns: + if bif[c].dtype != bifw[c].dtype: + self.assertEqual(bif[c].dtype, df[c].dtype) + + def test_getitem_boolean_casting(self): + + # don't upcast if we don't need to + df = self.tsframe.copy() + df['E'] = 1 + df['E'] = df['E'].astype('int32') + df['E1'] = df['E'].copy() + df['F'] = 1 + df['F'] = df['F'].astype('int64') + df['F1'] = df['F'].copy() + + 
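Editor's note: the dtype bookkeeping checked above (int32/int64 blocks upcasting to float64 under a frame-shaped boolean mask) follows from `df[df > 0]` having `where()` semantics: cells failing the mask become NaN, so integer columns must upcast to hold them. A small sketch of that behavior (editor's illustration, not part of this diff):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'i': np.array([1, -2, 3], dtype='int64'),
                   'f': [0.5, -0.5, 1.5]})

masked = df[df > 0]  # frame-shaped mask: failing cells become NaN

assert masked['i'].dtype == np.float64  # int64 upcast to accommodate NaN
assert np.isnan(masked.loc[1, 'i']) and np.isnan(masked.loc[1, 'f'])
assert masked.loc[0, 'i'] == 1.0
```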
casted = df[df > 0] + result = casted.get_dtype_counts() + expected = Series({'float64': 4, 'int32': 2, 'int64': 2}) + assert_series_equal(result, expected) + + # int block splitting + df.ix[1:3, ['E1', 'F1']] = 0 + casted = df[df > 0] + result = casted.get_dtype_counts() + expected = Series({'float64': 6, 'int32': 1, 'int64': 1}) + assert_series_equal(result, expected) + + # where dtype conversions + # GH 3733 + df = DataFrame(data=np.random.randn(100, 50)) + df = df.where(df > 0) # create nans + bools = df > 0 + mask = isnull(df) + expected = bools.astype(float).mask(mask) + result = bools.mask(mask) + assert_frame_equal(result, expected) + + def test_getitem_boolean_list(self): + df = DataFrame(np.arange(12).reshape(3, 4)) + + def _checkit(lst): + result = df[lst] + expected = df.ix[df.index[lst]] + assert_frame_equal(result, expected) + + _checkit([True, False, True]) + _checkit([True, True, True]) + _checkit([False, False, False]) + + def test_getitem_boolean_iadd(self): + arr = randn(5, 5) + + df = DataFrame(arr.copy(), columns=['A', 'B', 'C', 'D', 'E']) + + df[df < 0] += 1 + arr[arr < 0] += 1 + + assert_almost_equal(df.values, arr) + + def test_boolean_index_empty_corner(self): + # #2096 + blah = DataFrame(np.empty([0, 1]), columns=['A'], + index=DatetimeIndex([])) + + # both of these should succeed trivially + k = np.array([], bool) + + blah[k] + blah[k] = 0 + + def test_getitem_ix_mixed_integer(self): + df = DataFrame(np.random.randn(4, 3), + index=[1, 10, 'C', 'E'], columns=[1, 2, 3]) + + result = df.ix[:-1] + expected = df.ix[df.index[:-1]] + assert_frame_equal(result, expected) + + result = df.ix[[1, 10]] + expected = df.ix[Index([1, 10], dtype=object)] + assert_frame_equal(result, expected) + + # 11320 + df = pd.DataFrame({"rna": (1.5, 2.2, 3.2, 4.5), + -1000: [11, 21, 36, 40], + 0: [10, 22, 43, 34], + 1000: [0, 10, 20, 30]}, + columns=['rna', -1000, 0, 1000]) + result = df[[1000]] + expected = df.iloc[:, [3]] + assert_frame_equal(result, expected) + 
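Editor's note: the GH11320 checks above hinge on integer column labels being treated as labels, never positions, under `df[[...]]`. A minimal sketch (illustrative only, not part of this diff):

```python
import pandas as pd

df = pd.DataFrame({"rna": [1.5, 2.2], -1000: [11, 21], 1000: [0, 10]})

# With integer column labels, df[[1000]] selects by label, not position
result = df[[1000]]
assert list(result.columns) == [1000]
assert result[1000].tolist() == [0, 10]
```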
+        result = df[[-1000]]
+        expected = df.iloc[:, [1]]
+        assert_frame_equal(result, expected)
+
+    def test_getitem_setitem_ix_negative_integers(self):
+        result = self.frame.ix[:, -1]
+        assert_series_equal(result, self.frame['D'])
+
+        result = self.frame.ix[:, [-1]]
+        assert_frame_equal(result, self.frame[['D']])
+
+        result = self.frame.ix[:, [-1, -2]]
+        assert_frame_equal(result, self.frame[['D', 'C']])
+
+        self.frame.ix[:, [-1]] = 0
+        self.assertTrue((self.frame['D'] == 0).all())
+
+        df = DataFrame(np.random.randn(8, 4))
+        self.assertTrue(isnull(df.ix[:, [-1]].values).all())
+
+        # #1942
+        a = DataFrame(randn(20, 2), index=[chr(x + 65) for x in range(20)])
+        a.ix[-1] = a.ix[-2]
+
+        assert_series_equal(a.ix[-1], a.ix[-2], check_names=False)
+        self.assertEqual(a.ix[-1].name, 'T')
+        self.assertEqual(a.ix[-2].name, 'S')
+
+    def test_getattr(self):
+        assert_series_equal(self.frame.A, self.frame['A'])
+        self.assertRaises(AttributeError, getattr, self.frame,
+                          'NONEXISTENT_NAME')
+
+    def test_setattr_column(self):
+        df = DataFrame({'foobar': 1}, index=lrange(10))
+
+        df.foobar = 5
+        self.assertTrue((df.foobar == 5).all())
+
+    def test_setitem(self):
+        # not sure what else to do here
+        series = self.frame['A'][::2]
+        self.frame['col5'] = series
+        self.assertIn('col5', self.frame)
+        tm.assert_dict_equal(series, self.frame['col5'],
+                             compare_keys=False)
+
+        series = self.frame['A']
+        self.frame['col6'] = series
+        tm.assert_dict_equal(series, self.frame['col6'],
+                             compare_keys=False)
+
+        with tm.assertRaises(KeyError):
+            self.frame[randn(len(self.frame) + 1)] = 1
+
+        # set ndarray
+        arr = randn(len(self.frame))
+        self.frame['col9'] = arr
+        self.assertTrue((self.frame['col9'] == arr).all())
+
+        self.frame['col7'] = 5
+        assert((self.frame['col7'] == 5).all())
+
+        self.frame['col0'] = 3.14
+        assert((self.frame['col0'] == 3.14).all())
+
+        self.frame['col8'] = 'foo'
+        assert((self.frame['col8'] == 'foo').all())
+
+        # this is partially a view (e.g. some blocks are view)
+        # so raise/warn
+        smaller = self.frame[:2]
+
+        def f():
+            smaller['col10'] = ['1', '2']
+        self.assertRaises(com.SettingWithCopyError, f)
+        self.assertEqual(smaller['col10'].dtype, np.object_)
+        self.assertTrue((smaller['col10'] == ['1', '2']).all())
+
+        # with a dtype
+        for dtype in ['int32', 'int64', 'float32', 'float64']:
+            self.frame[dtype] = np.array(arr, dtype=dtype)
+            self.assertEqual(self.frame[dtype].dtype.name, dtype)
+
+        # dtype changing GH4204
+        df = DataFrame([[0, 0]])
+        df.iloc[0] = np.nan
+        expected = DataFrame([[np.nan, np.nan]])
+        assert_frame_equal(df, expected)
+
+        df = DataFrame([[0, 0]])
+        df.loc[0] = np.nan
+        assert_frame_equal(df, expected)
+
+    def test_setitem_tuple(self):
+        self.frame['A', 'B'] = self.frame['A']
+        assert_series_equal(self.frame['A', 'B'], self.frame[
+            'A'], check_names=False)
+
+    def test_setitem_always_copy(self):
+        s = self.frame['A'].copy()
+        self.frame['E'] = s
+
+        self.frame['E'][5:10] = nan
+        self.assertTrue(notnull(s[5:10]).all())
+
+    def test_setitem_boolean(self):
+        df = self.frame.copy()
+        values = self.frame.values
+
+        df[df['A'] > 0] = 4
+        values[values[:, 0] > 0] = 4
+        assert_almost_equal(df.values, values)
+
+        # test that column reindexing works
+        series = df['A'] == 4
+        series = series.reindex(df.index[::-1])
+        df[series] = 1
+        values[values[:, 0] == 4] = 1
+        assert_almost_equal(df.values, values)
+
+        df[df > 0] = 5
+        values[values > 0] = 5
+        assert_almost_equal(df.values, values)
+
+        df[df == 5] = 0
+        values[values == 5] = 0
+        assert_almost_equal(df.values, values)
+
+        # a df that needs alignment first
+        df[df[:-1] < 0] = 2
+        np.putmask(values[:-1], values[:-1] < 0, 2)
+        assert_almost_equal(df.values, values)
+
+        # indexed with same shape but rows-reversed df
+        df[df[::-1] == 2] = 3
+        values[values == 2] = 3
+        assert_almost_equal(df.values, values)
+
+        with assertRaisesRegexp(TypeError, 'Must pass DataFrame with boolean '
+                                'values only'):
+            df[df * 0] = 2
+
+        # index with DataFrame
+        mask = df > np.abs(df)
+        expected = df.copy()
+        df[df > np.abs(df)] = nan
+        expected.values[mask.values] = nan
+        assert_frame_equal(df, expected)
+
+        # set from DataFrame
+        expected = df.copy()
+        df[df > np.abs(df)] = df * 2
+        np.putmask(expected.values, mask.values, df.values * 2)
+        assert_frame_equal(df, expected)
+
+    def test_setitem_cast(self):
+        self.frame['D'] = self.frame['D'].astype('i8')
+        self.assertEqual(self.frame['D'].dtype, np.int64)
+
+        # #669, should not cast?
+        # this is now set to int64, which means a replacement of the column to
+        # the value dtype (and nothing to do with the existing dtype)
+        self.frame['B'] = 0
+        self.assertEqual(self.frame['B'].dtype, np.int64)
+
+        # cast if pass array of course
+        self.frame['B'] = np.arange(len(self.frame))
+        self.assertTrue(issubclass(self.frame['B'].dtype.type, np.integer))
+
+        self.frame['foo'] = 'bar'
+        self.frame['foo'] = 0
+        self.assertEqual(self.frame['foo'].dtype, np.int64)
+
+        self.frame['foo'] = 'bar'
+        self.frame['foo'] = 2.5
+        self.assertEqual(self.frame['foo'].dtype, np.float64)
+
+        self.frame['something'] = 0
+        self.assertEqual(self.frame['something'].dtype, np.int64)
+        self.frame['something'] = 2
+        self.assertEqual(self.frame['something'].dtype, np.int64)
+        self.frame['something'] = 2.5
+        self.assertEqual(self.frame['something'].dtype, np.float64)
+
+        # GH 7704
+        # dtype conversion on setting
+        df = DataFrame(np.random.rand(30, 3), columns=tuple('ABC'))
+        df['event'] = np.nan
+        df.loc[10, 'event'] = 'foo'
+        result = df.get_dtype_counts().sort_values()
+        expected = Series({'float64': 3, 'object': 1}).sort_values()
+        assert_series_equal(result, expected)
+
+        # Test that data type is preserved. #5782
+        df = DataFrame({'one': np.arange(6, dtype=np.int8)})
+        df.loc[1, 'one'] = 6
+        self.assertEqual(df.dtypes.one, np.dtype(np.int8))
+        df.one = np.int8(7)
+        self.assertEqual(df.dtypes.one, np.dtype(np.int8))
+
+    def test_setitem_boolean_column(self):
+        expected = self.frame.copy()
+        mask = self.frame['A'] > 0
+
+        self.frame.ix[mask, 'B'] = 0
+        expected.values[mask.values, 1] = 0
+
+        assert_frame_equal(self.frame, expected)
+
+    def test_setitem_corner(self):
+        # corner case
+        df = DataFrame({'B': [1., 2., 3.],
+                        'C': ['a', 'b', 'c']},
+                       index=np.arange(3))
+        del df['B']
+        df['B'] = [1., 2., 3.]
+        self.assertIn('B', df)
+        self.assertEqual(len(df.columns), 2)
+
+        df['A'] = 'beginning'
+        df['E'] = 'foo'
+        df['D'] = 'bar'
+        df[datetime.now()] = 'date'
+        df[datetime.now()] = 5.
+
+        # what to do when empty frame with index
+        dm = DataFrame(index=self.frame.index)
+        dm['A'] = 'foo'
+        dm['B'] = 'bar'
+        self.assertEqual(len(dm.columns), 2)
+        self.assertEqual(dm.values.dtype, np.object_)
+
+        # upcast
+        dm['C'] = 1
+        self.assertEqual(dm['C'].dtype, np.int64)
+
+        dm['E'] = 1.
+        self.assertEqual(dm['E'].dtype, np.float64)
+
+        # set existing column
+        dm['A'] = 'bar'
+        self.assertEqual('bar', dm['A'][0])
+
+        dm = DataFrame(index=np.arange(3))
+        dm['A'] = 1
+        dm['foo'] = 'bar'
+        del dm['foo']
+        dm['foo'] = 'bar'
+        self.assertEqual(dm['foo'].dtype, np.object_)
+
+        dm['coercable'] = ['1', '2', '3']
+        self.assertEqual(dm['coercable'].dtype, np.object_)
+
+    def test_setitem_corner2(self):
+        data = {"title": ['foobar', 'bar', 'foobar'] + ['foobar'] * 17,
+                "cruft": np.random.random(20)}
+
+        df = DataFrame(data)
+        ix = df[df['title'] == 'bar'].index
+
+        df.ix[ix, ['title']] = 'foobar'
+        df.ix[ix, ['cruft']] = 0
+
+        assert(df.ix[1, 'title'] == 'foobar')
+        assert(df.ix[1, 'cruft'] == 0)
+
+    def test_setitem_ambig(self):
+        # difficulties with mixed-type data
+        from decimal import Decimal
+
+        # created as float type
+        dm = DataFrame(index=lrange(3), columns=lrange(3))
+
+        coercable_series = Series([Decimal(1) for _ in range(3)],
+                                  index=lrange(3))
+        uncoercable_series = Series(['foo', 'bzr', 'baz'], index=lrange(3))
+
+        dm[0] = np.ones(3)
+        self.assertEqual(len(dm.columns), 3)
+        # self.assertIsNone(dm.objects)
+
+        dm[1] = coercable_series
+        self.assertEqual(len(dm.columns), 3)
+        # self.assertIsNone(dm.objects)
+
+        dm[2] = uncoercable_series
+        self.assertEqual(len(dm.columns), 3)
+        # self.assertIsNotNone(dm.objects)
+        self.assertEqual(dm[2].dtype, np.object_)
+
+    def test_setitem_clear_caches(self):
+        # GH #304
+        df = DataFrame({'x': [1.1, 2.1, 3.1, 4.1], 'y': [5.1, 6.1, 7.1, 8.1]},
+                       index=[0, 1, 2, 3])
+        df.insert(2, 'z', np.nan)
+
+        # cache it
+        foo = df['z']
+
+        df.ix[2:, 'z'] = 42
+
+        expected = Series([np.nan, np.nan, 42, 42], index=df.index, name='z')
+        self.assertIsNot(df['z'], foo)
+        assert_series_equal(df['z'], expected)
+
+    def test_setitem_None(self):
+        # GH #766
+        self.frame[None] = self.frame['A']
+        assert_series_equal(
+            self.frame.iloc[:, -1], self.frame['A'], check_names=False)
+        assert_series_equal(self.frame.loc[:, None], self.frame[
+            'A'], check_names=False)
+        assert_series_equal(self.frame[None], self.frame[
+            'A'], check_names=False)
+        repr(self.frame)
+
+    def test_setitem_empty(self):
+        # GH 9596
+        df = pd.DataFrame({'a': ['1', '2', '3'],
+                           'b': ['11', '22', '33'],
+                           'c': ['111', '222', '333']})
+
+        result = df.copy()
+        result.loc[result.b.isnull(), 'a'] = result.a
+        assert_frame_equal(result, df)
+
+    def test_setitem_empty_frame_with_boolean(self):
+        # Test for issue #10126
+
+        for dtype in ('float', 'int64'):
+            for df in [
+                    pd.DataFrame(dtype=dtype),
+                    pd.DataFrame(dtype=dtype, index=[1]),
+                    pd.DataFrame(dtype=dtype, columns=['A']),
+            ]:
+                df2 = df.copy()
+                df[df > df2] = 47
+                assert_frame_equal(df, df2)
+
+    def test_getitem_empty_frame_with_boolean(self):
+        # Test for issue #11859
+
+        df = pd.DataFrame()
+        df2 = df[df > 0]
+        assert_frame_equal(df, df2)
+
+    def test_delitem_corner(self):
+        f = self.frame.copy()
+        del f['D']
+        self.assertEqual(len(f.columns), 3)
+        self.assertRaises(KeyError, f.__delitem__, 'D')
+        del f['B']
+        self.assertEqual(len(f.columns), 2)
+
+    def test_getitem_fancy_2d(self):
+        f = self.frame
+        ix = f.ix
+
+        assert_frame_equal(ix[:, ['B', 'A']], f.reindex(columns=['B', 'A']))
+
+        subidx = self.frame.index[[5, 4, 1]]
+        assert_frame_equal(ix[subidx, ['B', 'A']],
+                           f.reindex(index=subidx, columns=['B', 'A']))
+
+        # slicing rows, etc.
+        assert_frame_equal(ix[5:10], f[5:10])
+        assert_frame_equal(ix[5:10, :], f[5:10])
+        assert_frame_equal(ix[:5, ['A', 'B']],
+                           f.reindex(index=f.index[:5], columns=['A', 'B']))
+
+        # slice rows with labels, inclusive!
+        expected = ix[5:11]
+        result = ix[f.index[5]:f.index[10]]
+        assert_frame_equal(expected, result)
+
+        # slice columns
+        assert_frame_equal(ix[:, :2], f.reindex(columns=['A', 'B']))
+
+        # get view
+        exp = f.copy()
+        ix[5:10].values[:] = 5
+        exp.values[5:10] = 5
+        assert_frame_equal(f, exp)
+
+        self.assertRaises(ValueError, ix.__getitem__, f > 0.5)
+
+    def test_slice_floats(self):
+        index = [52195.504153, 52196.303147, 52198.369883]
+        df = DataFrame(np.random.rand(3, 2), index=index)
+
+        s1 = df.ix[52195.1:52196.5]
+        self.assertEqual(len(s1), 2)
+
+        s1 = df.ix[52195.1:52196.6]
+        self.assertEqual(len(s1), 2)
+
+        s1 = df.ix[52195.1:52198.9]
+        self.assertEqual(len(s1), 3)
+
+    def test_getitem_fancy_slice_integers_step(self):
+        df = DataFrame(np.random.randn(10, 5))
+
+        # this is OK
+        result = df.ix[:8:2]  # noqa
+        df.ix[:8:2] = np.nan
+        self.assertTrue(isnull(df.ix[:8:2]).values.all())
+
+    def test_getitem_setitem_integer_slice_keyerrors(self):
+        df = DataFrame(np.random.randn(10, 5), index=lrange(0, 20, 2))
+
+        # this is OK
+        cp = df.copy()
+        cp.ix[4:10] = 0
+        self.assertTrue((cp.ix[4:10] == 0).values.all())
+
+        # so is this
+        cp = df.copy()
+        cp.ix[3:11] = 0
+        self.assertTrue((cp.ix[3:11] == 0).values.all())
+
+        result = df.ix[4:10]
+        result2 = df.ix[3:11]
+        expected = df.reindex([4, 6, 8, 10])
+
+        assert_frame_equal(result, expected)
+        assert_frame_equal(result2, expected)
+
+        # non-monotonic, raise KeyError
+        df2 = df.iloc[lrange(5) + lrange(5, 10)[::-1]]
+        self.assertRaises(KeyError, df2.ix.__getitem__, slice(3, 11))
+        self.assertRaises(KeyError, df2.ix.__setitem__, slice(3, 11), 0)
+
+    def test_setitem_fancy_2d(self):
+        f = self.frame
+
+        ix = f.ix  # noqa
+
+        # case 1
+        frame = self.frame.copy()
+        expected = frame.copy()
+        frame.ix[:, ['B', 'A']] = 1
+        expected['B'] = 1.
+        expected['A'] = 1.
+        assert_frame_equal(frame, expected)
+
+        # case 2
+        frame = self.frame.copy()
+        frame2 = self.frame.copy()
+
+        expected = frame.copy()
+
+        subidx = self.frame.index[[5, 4, 1]]
+        values = randn(3, 2)
+
+        frame.ix[subidx, ['B', 'A']] = values
+        frame2.ix[[5, 4, 1], ['B', 'A']] = values
+
+        expected['B'].ix[subidx] = values[:, 0]
+        expected['A'].ix[subidx] = values[:, 1]
+
+        assert_frame_equal(frame, expected)
+        assert_frame_equal(frame2, expected)
+
+        # case 3: slicing rows, etc.
+        frame = self.frame.copy()
+
+        expected1 = self.frame.copy()
+        frame.ix[5:10] = 1.
+        expected1.values[5:10] = 1.
+        assert_frame_equal(frame, expected1)
+
+        expected2 = self.frame.copy()
+        arr = randn(5, len(frame.columns))
+        frame.ix[5:10] = arr
+        expected2.values[5:10] = arr
+        assert_frame_equal(frame, expected2)
+
+        # case 4
+        frame = self.frame.copy()
+        frame.ix[5:10, :] = 1.
+        assert_frame_equal(frame, expected1)
+        frame.ix[5:10, :] = arr
+        assert_frame_equal(frame, expected2)
+
+        # case 5
+        frame = self.frame.copy()
+        frame2 = self.frame.copy()
+
+        expected = self.frame.copy()
+        values = randn(5, 2)
+
+        frame.ix[:5, ['A', 'B']] = values
+        expected['A'][:5] = values[:, 0]
+        expected['B'][:5] = values[:, 1]
+        assert_frame_equal(frame, expected)
+
+        frame2.ix[:5, [0, 1]] = values
+        assert_frame_equal(frame2, expected)
+
+        # case 6: slice rows with labels, inclusive!
+        frame = self.frame.copy()
+        expected = self.frame.copy()
+
+        frame.ix[frame.index[5]:frame.index[10]] = 5.
+        expected.values[5:11] = 5
+        assert_frame_equal(frame, expected)
+
+        # case 7: slice columns
+        frame = self.frame.copy()
+        frame2 = self.frame.copy()
+        expected = self.frame.copy()
+
+        # slice indices
+        frame.ix[:, 1:3] = 4.
+        expected.values[:, 1:3] = 4.
+        assert_frame_equal(frame, expected)
+
+        # slice with labels
+        frame.ix[:, 'B':'C'] = 4.
+        assert_frame_equal(frame, expected)
+
+        # new corner case of boolean slicing / setting
+        frame = DataFrame(lzip([2, 3, 9, 6, 7], [np.nan] * 5),
+                          columns=['a', 'b'])
+        lst = [100]
+        lst.extend([np.nan] * 4)
+        expected = DataFrame(lzip([100, 3, 9, 6, 7], lst),
+                             columns=['a', 'b'])
+        frame[frame['a'] == 2] = 100
+        assert_frame_equal(frame, expected)
+
+    def test_fancy_getitem_slice_mixed(self):
+        sliced = self.mixed_frame.ix[:, -3:]
+        self.assertEqual(sliced['D'].dtype, np.float64)
+
+        # get view with single block
+        # setting it triggers setting with copy
+        sliced = self.frame.ix[:, -3:]
+
+        def f():
+            sliced['C'] = 4.
+        self.assertRaises(com.SettingWithCopyError, f)
+        self.assertTrue((self.frame['C'] == 4).all())
+
+    def test_fancy_setitem_int_labels(self):
+        # integer index defers to label-based indexing
+
+        df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2))
+
+        tmp = df.copy()
+        exp = df.copy()
+        tmp.ix[[0, 2, 4]] = 5
+        exp.values[:3] = 5
+        assert_frame_equal(tmp, exp)
+
+        tmp = df.copy()
+        exp = df.copy()
+        tmp.ix[6] = 5
+        exp.values[3] = 5
+        assert_frame_equal(tmp, exp)
+
+        tmp = df.copy()
+        exp = df.copy()
+        tmp.ix[:, 2] = 5
+
+        # tmp correctly sets the dtype
+        # so match the exp way
+        exp[2] = 5
+        assert_frame_equal(tmp, exp)
+
+    def test_fancy_getitem_int_labels(self):
+        df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2))
+
+        result = df.ix[[4, 2, 0], [2, 0]]
+        expected = df.reindex(index=[4, 2, 0], columns=[2, 0])
+        assert_frame_equal(result, expected)
+
+        result = df.ix[[4, 2, 0]]
+        expected = df.reindex(index=[4, 2, 0])
+        assert_frame_equal(result, expected)
+
+        result = df.ix[4]
+        expected = df.xs(4)
+        assert_series_equal(result, expected)
+
+        result = df.ix[:, 3]
+        expected = df[3]
+        assert_series_equal(result, expected)
+
+    def test_fancy_index_int_labels_exceptions(self):
+        df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2))
+
+        # labels that aren't contained
+        self.assertRaises(KeyError, df.ix.__setitem__,
+                          ([0, 1, 2], [2, 3, 4]), 5)
+
+        # try to set indices not contained in frame
+        self.assertRaises(KeyError,
+                          self.frame.ix.__setitem__,
+                          ['foo', 'bar', 'baz'], 1)
+        self.assertRaises(KeyError,
+                          self.frame.ix.__setitem__,
+                          (slice(None, None), ['E']), 1)
+
+        # partial setting now allows this GH2578
+        # self.assertRaises(KeyError,
+        #                   self.frame.ix.__setitem__,
+        #                   (slice(None, None), 'E'), 1)
+
+    def test_setitem_fancy_mixed_2d(self):
+        self.mixed_frame.ix[:5, ['C', 'B', 'A']] = 5
+        result = self.mixed_frame.ix[:5, ['C', 'B', 'A']]
+        self.assertTrue((result.values == 5).all())
+
+        self.mixed_frame.ix[5] = np.nan
+        self.assertTrue(isnull(self.mixed_frame.ix[5]).all())
+
+        self.mixed_frame.ix[5] = self.mixed_frame.ix[6]
+        assert_series_equal(self.mixed_frame.ix[5], self.mixed_frame.ix[6],
+                            check_names=False)
+
+        # #1432
+        df = DataFrame({1: [1., 2., 3.],
+                        2: [3, 4, 5]})
+        self.assertTrue(df._is_mixed_type)
+
+        df.ix[1] = [5, 10]
+
+        expected = DataFrame({1: [1., 5., 3.],
+                              2: [3, 10, 5]})
+
+        assert_frame_equal(df, expected)
+
+    def test_ix_align(self):
+        b = Series(randn(10), name=0).sort_values()
+        df_orig = DataFrame(randn(10, 4))
+        df = df_orig.copy()
+
+        df.ix[:, 0] = b
+        assert_series_equal(df.ix[:, 0].reindex(b.index), b)
+
+        dft = df_orig.T
+        dft.ix[0, :] = b
+        assert_series_equal(dft.ix[0, :].reindex(b.index), b)
+
+        df = df_orig.copy()
+        df.ix[:5, 0] = b
+        s = df.ix[:5, 0]
+        assert_series_equal(s, b.reindex(s.index))
+
+        dft = df_orig.T
+        dft.ix[0, :5] = b
+        s = dft.ix[0, :5]
+        assert_series_equal(s, b.reindex(s.index))
+
+        df = df_orig.copy()
+        idx = [0, 1, 3, 5]
+        df.ix[idx, 0] = b
+        s = df.ix[idx, 0]
+        assert_series_equal(s, b.reindex(s.index))
+
+        dft = df_orig.T
+        dft.ix[0, idx] = b
+        s = dft.ix[0, idx]
+        assert_series_equal(s, b.reindex(s.index))
+
+    def test_ix_frame_align(self):
+        b = DataFrame(np.random.randn(3, 4))
+        df_orig = DataFrame(randn(10, 4))
+        df = df_orig.copy()
+
+        df.ix[:3] = b
+        out = b.ix[:3]
+        assert_frame_equal(out, b)
+
+        b.sort_index(inplace=True)
+
+        df = df_orig.copy()
+        df.ix[[0, 1, 2]] = b
+        out = df.ix[[0, 1, 2]].reindex(b.index)
+        assert_frame_equal(out, b)
+
+        df = df_orig.copy()
+        df.ix[:3] = b
+        out = df.ix[:3]
+        assert_frame_equal(out, b.reindex(out.index))
+
+    def test_getitem_setitem_non_ix_labels(self):
+        df = tm.makeTimeDataFrame()
+
+        start, end = df.index[[5, 10]]
+
+        result = df.ix[start:end]
+        result2 = df[start:end]
+        expected = df[5:11]
+        assert_frame_equal(result, expected)
+        assert_frame_equal(result2, expected)
+
+        result = df.copy()
+        result.ix[start:end] = 0
+        result2 = df.copy()
+        result2[start:end] = 0
+        expected = df.copy()
+        expected[5:11] = 0
+        assert_frame_equal(result, expected)
+        assert_frame_equal(result2, expected)
+
+    def test_ix_multi_take(self):
+        df = DataFrame(np.random.randn(3, 2))
+        rs = df.ix[df.index == 0, :]
+        xp = df.reindex([0])
+        assert_frame_equal(rs, xp)
+
+        """ #1321
+        df = DataFrame(np.random.randn(3, 2))
+        rs = df.ix[df.index==0, df.columns==1]
+        xp = df.reindex([0], [1])
+        assert_frame_equal(rs, xp)
+        """
+
+    def test_ix_multi_take_nonint_index(self):
+        df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'],
+                       columns=['a', 'b'])
+        rs = df.ix[[0], [0]]
+        xp = df.reindex(['x'], columns=['a'])
+        assert_frame_equal(rs, xp)
+
+    def test_ix_multi_take_multiindex(self):
+        df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'],
+                       columns=[['a', 'b'], ['1', '2']])
+        rs = df.ix[[0], [0]]
+        xp = df.reindex(['x'], columns=[('a', '1')])
+        assert_frame_equal(rs, xp)
+
+    def test_ix_dup(self):
+        idx = Index(['a', 'a', 'b', 'c', 'd', 'd'])
+        df = DataFrame(np.random.randn(len(idx), 3), idx)
+
+        sub = df.ix[:'d']
+        assert_frame_equal(sub, df)
+
+        sub = df.ix['a':'c']
+        assert_frame_equal(sub, df.ix[0:4])
+
+        sub = df.ix['b':'d']
+        assert_frame_equal(sub, df.ix[2:])
+
+    def test_getitem_fancy_1d(self):
+        f = self.frame
+        ix = f.ix
+
+        # return self if no slicing...for now
+        self.assertIs(ix[:, :], f)
+
+        # low dimensional slice
+        xs1 = ix[2, ['C', 'B', 'A']]
+        xs2 = f.xs(f.index[2]).reindex(['C', 'B', 'A'])
+        assert_series_equal(xs1, xs2)
+
+        ts1 = ix[5:10, 2]
+        ts2 = f[f.columns[2]][5:10]
+        assert_series_equal(ts1, ts2)
+
+        # positional xs
+        xs1 = ix[0]
+        xs2 = f.xs(f.index[0])
+        assert_series_equal(xs1, xs2)
+
+        xs1 = ix[f.index[5]]
+        xs2 = f.xs(f.index[5])
+        assert_series_equal(xs1, xs2)
+
+        # single column
+        assert_series_equal(ix[:, 'A'], f['A'])
+
+        # return view
+        exp = f.copy()
+        exp.values[5] = 4
+        ix[5][:] = 4
+        assert_frame_equal(exp, f)
+
+        exp.values[:, 1] = 6
+        ix[:, 1][:] = 6
+        assert_frame_equal(exp, f)
+
+        # slice of mixed-frame
+        xs = self.mixed_frame.ix[5]
+        exp = self.mixed_frame.xs(self.mixed_frame.index[5])
+        assert_series_equal(xs, exp)
+
+    def test_setitem_fancy_1d(self):
+
+        # case 1: set cross-section for indices
+        frame = self.frame.copy()
+        expected = self.frame.copy()
+
+        frame.ix[2, ['C', 'B', 'A']] = [1., 2., 3.]
+        expected['C'][2] = 1.
+        expected['B'][2] = 2.
+        expected['A'][2] = 3.
+        assert_frame_equal(frame, expected)
+
+        frame2 = self.frame.copy()
+        frame2.ix[2, [3, 2, 1]] = [1., 2., 3.]
+        assert_frame_equal(frame, expected)
+
+        # case 2, set a section of a column
+        frame = self.frame.copy()
+        expected = self.frame.copy()
+
+        vals = randn(5)
+        expected.values[5:10, 2] = vals
+        frame.ix[5:10, 2] = vals
+        assert_frame_equal(frame, expected)
+
+        frame2 = self.frame.copy()
+        frame2.ix[5:10, 'B'] = vals
+        assert_frame_equal(frame, expected)
+
+        # case 3: full xs
+        frame = self.frame.copy()
+        expected = self.frame.copy()
+
+        frame.ix[4] = 5.
+        expected.values[4] = 5.
+        assert_frame_equal(frame, expected)
+
+        frame.ix[frame.index[4]] = 6.
+        expected.values[4] = 6.
+        assert_frame_equal(frame, expected)
+
+        # single column
+        frame = self.frame.copy()
+        expected = self.frame.copy()
+
+        frame.ix[:, 'A'] = 7.
+        expected['A'] = 7.
+        assert_frame_equal(frame, expected)
+
+    def test_getitem_fancy_scalar(self):
+        f = self.frame
+        ix = f.ix
+        # individual value
+        for col in f.columns:
+            ts = f[col]
+            for idx in f.index[::5]:
+                assert_almost_equal(ix[idx, col], ts[idx])
+
+    def test_setitem_fancy_scalar(self):
+        f = self.frame
+        expected = self.frame.copy()
+        ix = f.ix
+        # individual value
+        for j, col in enumerate(f.columns):
+            ts = f[col]  # noqa
+            for idx in f.index[::5]:
+                i = f.index.get_loc(idx)
+                val = randn()
+                expected.values[i, j] = val
+                ix[idx, col] = val
+        assert_frame_equal(f, expected)
+
+    def test_getitem_fancy_boolean(self):
+        f = self.frame
+        ix = f.ix
+
+        expected = f.reindex(columns=['B', 'D'])
+        result = ix[:, [False, True, False, True]]
+        assert_frame_equal(result, expected)
+
+        expected = f.reindex(index=f.index[5:10], columns=['B', 'D'])
+        result = ix[5:10, [False, True, False, True]]
+        assert_frame_equal(result, expected)
+
+        boolvec = f.index > f.index[7]
+        expected = f.reindex(index=f.index[boolvec])
+        result = ix[boolvec]
+        assert_frame_equal(result, expected)
+        result = ix[boolvec, :]
+        assert_frame_equal(result, expected)
+
+        result = ix[boolvec, 2:]
+        expected = f.reindex(index=f.index[boolvec],
+                             columns=['C', 'D'])
+        assert_frame_equal(result, expected)
+
+    def test_setitem_fancy_boolean(self):
+        # from 2d, set with booleans
+        frame = self.frame.copy()
+        expected = self.frame.copy()
+
+        mask = frame['A'] > 0
+        frame.ix[mask] = 0.
+        expected.values[mask.values] = 0.
+        assert_frame_equal(frame, expected)
+
+        frame = self.frame.copy()
+        expected = self.frame.copy()
+        frame.ix[mask, ['A', 'B']] = 0.
+        expected.values[mask.values, :2] = 0.
+        assert_frame_equal(frame, expected)
+
+    def test_getitem_fancy_ints(self):
+        result = self.frame.ix[[1, 4, 7]]
+        expected = self.frame.ix[self.frame.index[[1, 4, 7]]]
+        assert_frame_equal(result, expected)
+
+        result = self.frame.ix[:, [2, 0, 1]]
+        expected = self.frame.ix[:, self.frame.columns[[2, 0, 1]]]
+        assert_frame_equal(result, expected)
+
+    def test_getitem_setitem_fancy_exceptions(self):
+        ix = self.frame.ix
+        with assertRaisesRegexp(IndexingError, 'Too many indexers'):
+            ix[:, :, :]
+
+        with assertRaises(IndexingError):
+            ix[:, :, :] = 1
+
+    def test_getitem_setitem_boolean_misaligned(self):
+        # boolean index misaligned labels
+        mask = self.frame['A'][::-1] > 1
+
+        result = self.frame.ix[mask]
+        expected = self.frame.ix[mask[::-1]]
+        assert_frame_equal(result, expected)
+
+        cp = self.frame.copy()
+        expected = self.frame.copy()
+        cp.ix[mask] = 0
+        expected.ix[mask] = 0
+        assert_frame_equal(cp, expected)
+
+    def test_getitem_setitem_boolean_multi(self):
+        df = DataFrame(np.random.randn(3, 2))
+
+        # get
+        k1 = np.array([True, False, True])
+        k2 = np.array([False, True])
+        result = df.ix[k1, k2]
+        expected = df.ix[[0, 2], [1]]
+        assert_frame_equal(result, expected)
+
+        expected = df.copy()
+        df.ix[np.array([True, False, True]),
+              np.array([False, True])] = 5
+        expected.ix[[0, 2], [1]] = 5
+        assert_frame_equal(df, expected)
+
+    def test_getitem_setitem_float_labels(self):
+        index = Index([1.5, 2, 3, 4, 5])
+        df = DataFrame(np.random.randn(5, 5), index=index)
+
+        result = df.ix[1.5:4]
+        expected = df.reindex([1.5, 2, 3, 4])
+        assert_frame_equal(result, expected)
+        self.assertEqual(len(result), 4)
+
+        result = df.ix[4:5]
+        expected = df.reindex([4, 5])  # reindex with int
+        assert_frame_equal(result, expected, check_index_type=False)
+        self.assertEqual(len(result), 2)
+
+        result = df.ix[4:5]
+        expected = df.reindex([4.0, 5.0])  # reindex with float
+        assert_frame_equal(result, expected)
+        self.assertEqual(len(result), 2)
+
+        # loc_float changes this to work properly
+        result = df.ix[1:2]
+        expected = df.iloc[0:2]
+        assert_frame_equal(result, expected)
+
+        df.ix[1:2] = 0
+        result = df[1:2]
+        self.assertTrue((result == 0).all().all())
+
+        # #2727
+        index = Index([1.0, 2.5, 3.5, 4.5, 5.0])
+        df = DataFrame(np.random.randn(5, 5), index=index)
+
+        # positional slicing only via iloc!
+        # stacklevel=False -> needed stacklevel depends on index type
+        with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+            result = df.iloc[1.0:5]
+
+        expected = df.reindex([2.5, 3.5, 4.5, 5.0])
+        assert_frame_equal(result, expected)
+        self.assertEqual(len(result), 4)
+
+        result = df.iloc[4:5]
+        expected = df.reindex([5.0])
+        assert_frame_equal(result, expected)
+        self.assertEqual(len(result), 1)
+
+        # GH 4892, float indexers in iloc are deprecated
+        import warnings
+        warnings.filterwarnings(action='error', category=FutureWarning)
+
+        cp = df.copy()
+
+        def f():
+            cp.iloc[1.0:5] = 0
+        self.assertRaises(FutureWarning, f)
+
+        def f():
+            result = cp.iloc[1.0:5] == 0  # noqa
+
+        self.assertRaises(FutureWarning, f)
+        self.assertTrue(result.values.all())
+        self.assertTrue((cp.iloc[0:1] == df.iloc[0:1]).values.all())
+
+        warnings.filterwarnings(action='default', category=FutureWarning)
+
+        cp = df.copy()
+        cp.iloc[4:5] = 0
+        self.assertTrue((cp.iloc[4:5] == 0).values.all())
+        self.assertTrue((cp.iloc[0:4] == df.iloc[0:4]).values.all())
+
+        # float slicing
+        result = df.ix[1.0:5]
+        expected = df
+        assert_frame_equal(result, expected)
+        self.assertEqual(len(result), 5)
+
+        result = df.ix[1.1:5]
+        expected = df.reindex([2.5, 3.5, 4.5, 5.0])
+        assert_frame_equal(result, expected)
+        self.assertEqual(len(result), 4)
+
+        result = df.ix[4.51:5]
+        expected = df.reindex([5.0])
+        assert_frame_equal(result, expected)
+        self.assertEqual(len(result), 1)
+
+        result = df.ix[1.0:5.0]
+        expected = df.reindex([1.0, 2.5, 3.5, 4.5, 5.0])
+        assert_frame_equal(result, expected)
+        self.assertEqual(len(result), 5)
+
+        cp = df.copy()
+        cp.ix[1.0:5.0] = 0
+
+        result = cp.ix[1.0:5.0]
+        self.assertTrue((result == 0).values.all())
+
+    def test_setitem_single_column_mixed(self):
+        df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'],
+                       columns=['foo', 'bar', 'baz'])
+        df['str'] = 'qux'
+        df.ix[::2, 'str'] = nan
+        expected = [nan, 'qux', nan, 'qux', nan]
+        assert_almost_equal(df['str'].values, expected)
+
+    def test_setitem_single_column_mixed_datetime(self):
+        df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'],
+                       columns=['foo', 'bar', 'baz'])
+
+        df['timestamp'] = Timestamp('20010102')
+
+        # check our dtypes
+        result = df.get_dtype_counts()
+        expected = Series({'float64': 3, 'datetime64[ns]': 1})
+        assert_series_equal(result, expected)
+
+        # set an allowable datetime64 type
+        from pandas import tslib
+        df.ix['b', 'timestamp'] = tslib.iNaT
+        self.assertTrue(com.isnull(df.ix['b', 'timestamp']))
+
+        # allow this syntax
+        df.ix['c', 'timestamp'] = nan
+        self.assertTrue(com.isnull(df.ix['c', 'timestamp']))
+
+        # allow this syntax
+        df.ix['d', :] = nan
+        self.assertTrue(com.isnull(df.ix['c', :]).all() == False)  # noqa
+
+        # as of GH 3216 this will now work!
+        # try to set with a list like item
+        # self.assertRaises(
+        #     Exception, df.ix.__setitem__, ('d', 'timestamp'), [nan])
+
+    def test_setitem_frame(self):
+        piece = self.frame.ix[:2, ['A', 'B']]
+        self.frame.ix[-2:, ['A', 'B']] = piece.values
+        assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values,
+                            piece.values)
+
+        # GH 3216
+
+        # already aligned
+        f = self.mixed_frame.copy()
+        piece = DataFrame([[1, 2], [3, 4]], index=f.index[
+            0:2], columns=['A', 'B'])
+        key = (slice(None, 2), ['A', 'B'])
+        f.ix[key] = piece
+        assert_almost_equal(f.ix[0:2, ['A', 'B']].values,
+                            piece.values)
+
+        # rows unaligned
+        f = self.mixed_frame.copy()
+        piece = DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], index=list(
+            f.index[0:2]) + ['foo', 'bar'], columns=['A', 'B'])
+        key = (slice(None, 2), ['A', 'B'])
+        f.ix[key] = piece
+        assert_almost_equal(f.ix[0:2:, ['A', 'B']].values,
+                            piece.values[0:2])
+
+        # key is unaligned with values
+        f = self.mixed_frame.copy()
+        piece = f.ix[:2, ['A']]
+        piece.index = f.index[-2:]
+        key = (slice(-2, None), ['A', 'B'])
+        f.ix[key] = piece
+        piece['B'] = np.nan
+        assert_almost_equal(f.ix[-2:, ['A', 'B']].values,
+                            piece.values)
+
+        # ndarray
+        f = self.mixed_frame.copy()
+        piece = self.mixed_frame.ix[:2, ['A', 'B']]
+        key = (slice(-2, None), ['A', 'B'])
+        f.ix[key] = piece.values
+        assert_almost_equal(f.ix[-2:, ['A', 'B']].values,
+                            piece.values)
+
+        # needs upcasting
+        df = DataFrame([[1, 2, 'foo'], [3, 4, 'bar']], columns=['A', 'B', 'C'])
+        df2 = df.copy()
+        df2.ix[:, ['A', 'B']] = df.ix[:, ['A', 'B']] + 0.5
+        expected = df.reindex(columns=['A', 'B'])
+        expected += 0.5
+        expected['C'] = df['C']
+        assert_frame_equal(df2, expected)
+
+    def test_setitem_frame_align(self):
+        piece = self.frame.ix[:2, ['A', 'B']]
+        piece.index = self.frame.index[-2:]
+        piece.columns = ['A', 'B']
+        self.frame.ix[-2:, ['A', 'B']] = piece
+        assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values,
+                            piece.values)
+
+    def test_getitem_setitem_ix_duplicates(self):
+        # #1201
+        df = DataFrame(np.random.randn(5, 3),
+                       index=['foo', 'foo', 'bar', 'baz', 'bar'])
+
+        result = df.ix['foo']
+        expected = df[:2]
+        assert_frame_equal(result, expected)
+
+        result = df.ix['bar']
+        expected = df.ix[[2, 4]]
+        assert_frame_equal(result, expected)
+
+        result = df.ix['baz']
+        expected = df.ix[3]
+        assert_series_equal(result, expected)
+
+    def test_getitem_ix_boolean_duplicates_multiple(self):
+        # #1201
+        df = DataFrame(np.random.randn(5, 3),
+                       index=['foo', 'foo', 'bar', 'baz', 'bar'])
+
+        result = df.ix[['bar']]
+        exp = df.ix[[2, 4]]
+        assert_frame_equal(result, exp)
+
+        result = df.ix[df[1] > 0]
+        exp = df[df[1] > 0]
+        assert_frame_equal(result, exp)
+
+        result = df.ix[df[0] > 0]
+        exp = df[df[0] > 0]
+        assert_frame_equal(result, exp)
+
+    def test_getitem_setitem_ix_bool_keyerror(self):
+        # #2199
+        df = DataFrame({'a': [1, 2, 3]})
+
+        self.assertRaises(KeyError, df.ix.__getitem__, False)
+        self.assertRaises(KeyError, df.ix.__getitem__, True)
+
+        self.assertRaises(KeyError, df.ix.__setitem__, False, 0)
+        self.assertRaises(KeyError, df.ix.__setitem__, True, 0)
+
+    def test_getitem_list_duplicates(self):
+        # #1943
+        df = DataFrame(np.random.randn(4, 4), columns=list('AABC'))
+        df.columns.name = 'foo'
+
+        result = df[['B', 'C']]
+        self.assertEqual(result.columns.name, 'foo')
+
+        expected = df.ix[:, 2:]
+        assert_frame_equal(result, expected)
+
+    def test_get_value(self):
+        for idx in self.frame.index:
+            for col in self.frame.columns:
+                result = self.frame.get_value(idx, col)
+                expected = self.frame[col][idx]
+                assert_almost_equal(result, expected)
+
+    def test_lookup(self):
+        def alt(df, rows, cols):
+            result = []
+            for r, c in zip(rows, cols):
+                result.append(df.get_value(r, c))
+            return result
+
+        def testit(df):
+            rows = list(df.index) * len(df.columns)
+            cols = list(df.columns) * len(df.index)
+            result = df.lookup(rows, cols)
+            expected = alt(df, rows, cols)
+            assert_almost_equal(result, expected)
+
+        testit(self.mixed_frame)
+        testit(self.frame)
+
+        df = DataFrame({'label': ['a', 'b', 'a', 'c'],
+                        'mask_a': [True, True, False, True],
+                        'mask_b': [True, False, False, False],
+                        'mask_c': [False, True, False, True]})
+        df['mask'] = df.lookup(df.index, 'mask_' + df['label'])
+        exp_mask = alt(df, df.index, 'mask_' + df['label'])
+        assert_almost_equal(df['mask'], exp_mask)
+        self.assertEqual(df['mask'].dtype, np.bool_)
+
+        with tm.assertRaises(KeyError):
+            self.frame.lookup(['xyz'], ['A'])
+
+        with tm.assertRaises(KeyError):
+            self.frame.lookup([self.frame.index[0]], ['xyz'])
+
+        with tm.assertRaisesRegexp(ValueError, 'same size'):
+            self.frame.lookup(['a', 'b', 'c'], ['a'])
+
+    def test_set_value(self):
+        for idx in self.frame.index:
+            for col in self.frame.columns:
+                self.frame.set_value(idx, col, 1)
+                assert_almost_equal(self.frame[col][idx], 1)
+
+    def test_set_value_resize(self):
+
+        res = self.frame.set_value('foobar', 'B', 0)
+        self.assertIs(res, self.frame)
+        self.assertEqual(res.index[-1], 'foobar')
+        self.assertEqual(res.get_value('foobar', 'B'), 0)
+
+        self.frame.loc['foobar', 'qux'] = 0
+        self.assertEqual(self.frame.get_value('foobar', 'qux'), 0)
+
+        res = self.frame.copy()
+        res3 = res.set_value('foobar', 'baz', 'sam')
+        self.assertEqual(res3['baz'].dtype, np.object_)
+
+        res = self.frame.copy()
+        res3 = res.set_value('foobar', 'baz', True)
+        self.assertEqual(res3['baz'].dtype, np.object_)
+
+        res = self.frame.copy()
+        res3 = res.set_value('foobar', 'baz', 5)
+        self.assertTrue(com.is_float_dtype(res3['baz']))
+        self.assertTrue(isnull(res3['baz'].drop(['foobar'])).all())
+        self.assertRaises(ValueError, res3.set_value, 'foobar', 'baz', 'sam')
+
+    def test_set_value_with_index_dtype_change(self):
+        df_orig = DataFrame(randn(3, 3), index=lrange(3), columns=list('ABC'))
+
+        # this is actually ambiguous as the 2 is interpreted as a positional
+        # so column is not created
+        df = df_orig.copy()
+        df.set_value('C', 2, 1.0)
+        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
+        # self.assertEqual(list(df.columns), list(df_orig.columns) + [2])
+
+        df = df_orig.copy()
+        df.loc['C', 2] = 1.0
+        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
+        # self.assertEqual(list(df.columns), list(df_orig.columns) + [2])
+
+        # create both new
+        df = df_orig.copy()
+        df.set_value('C', 'D', 1.0)
+        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
+        self.assertEqual(list(df.columns), list(df_orig.columns) + ['D'])
+
+        df = df_orig.copy()
+        df.loc['C', 'D'] = 1.0
+        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
+        self.assertEqual(list(df.columns), list(df_orig.columns) + ['D'])
+
+    def test_get_set_value_no_partial_indexing(self):
+        # partial w/ MultiIndex raise exception
+        index = MultiIndex.from_tuples([(0, 1), (0, 2), (1, 1), (1, 2)])
+        df = DataFrame(index=index, columns=lrange(4))
+        self.assertRaises(KeyError, df.get_value, 0, 1)
+        # self.assertRaises(KeyError, df.set_value, 0, 1, 0)
+
+    def test_single_element_ix_dont_upcast(self):
+        self.frame['E'] = 1
+        self.assertTrue(issubclass(self.frame['E'].dtype.type,
+                                   (int, np.integer)))
+
+        result = self.frame.ix[self.frame.index[5], 'E']
+        self.assertTrue(com.is_integer(result))
+
+    def test_irow(self):
+        df = DataFrame(np.random.randn(10, 4), index=lrange(0, 20, 2))
+
+        # 10711, deprecated
+        with tm.assert_produces_warning(FutureWarning):
+            df.irow(1)
+
+        result = df.iloc[1]
+        exp = df.ix[2]
+        assert_series_equal(result, exp)
+
+        result = df.iloc[2]
+        exp = df.ix[4]
+        assert_series_equal(result, exp)
+
+        # slice
+        result = df.iloc[slice(4, 8)]
+        expected = df.ix[8:14]
+        assert_frame_equal(result, expected)
+
+        # verify slice is view
+        # setting it makes it raise/warn
+        def f():
+            result[2] = 0.
+        self.assertRaises(com.SettingWithCopyError, f)
+        exp_col = df[2].copy()
+        exp_col[4:8] = 0.
+ assert_series_equal(df[2], exp_col) + + # list of integers + result = df.iloc[[1, 2, 4, 6]] + expected = df.reindex(df.index[[1, 2, 4, 6]]) + assert_frame_equal(result, expected) + + def test_icol(self): + + df = DataFrame(np.random.randn(4, 10), columns=lrange(0, 20, 2)) + + # 10711, deprecated + with tm.assert_produces_warning(FutureWarning): + df.icol(1) + + result = df.iloc[:, 1] + exp = df.ix[:, 2] + assert_series_equal(result, exp) + + result = df.iloc[:, 2] + exp = df.ix[:, 4] + assert_series_equal(result, exp) + + # slice + result = df.iloc[:, slice(4, 8)] + expected = df.ix[:, 8:14] + assert_frame_equal(result, expected) + + # verify slice is view + # and that we are setting a copy + def f(): + result[8] = 0. + self.assertRaises(com.SettingWithCopyError, f) + self.assertTrue((df[8] == 0).all()) + + # list of integers + result = df.iloc[:, [1, 2, 4, 6]] + expected = df.reindex(columns=df.columns[[1, 2, 4, 6]]) + assert_frame_equal(result, expected) + + def test_irow_icol_duplicates(self): + # 10711, deprecated + + df = DataFrame(np.random.rand(3, 3), columns=list('ABC'), + index=list('aab')) + + result = df.iloc[0] + result2 = df.ix[0] + tm.assertIsInstance(result, Series) + assert_almost_equal(result.values, df.values[0]) + assert_series_equal(result, result2) + + result = df.T.iloc[:, 0] + result2 = df.T.ix[:, 0] + tm.assertIsInstance(result, Series) + assert_almost_equal(result.values, df.values[0]) + assert_series_equal(result, result2) + + # multiindex + df = DataFrame(np.random.randn(3, 3), + columns=[['i', 'i', 'j'], ['A', 'A', 'B']], + index=[['i', 'i', 'j'], ['X', 'X', 'Y']]) + + rs = df.iloc[0] + xp = df.ix[0] + assert_series_equal(rs, xp) + + rs = df.iloc[:, 0] + xp = df.T.ix[0] + assert_series_equal(rs, xp) + + rs = df.iloc[:, [0]] + xp = df.ix[:, [0]] + assert_frame_equal(rs, xp) + + # #2259 + df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[1, 1, 2]) + result = df.iloc[:, [0]] + expected = df.take([0], axis=1) + assert_frame_equal(result, 
expected)
+
+    def test_icol_sparse_propagate_fill_value(self):
+        from pandas.sparse.api import SparseDataFrame
+        df = SparseDataFrame({'A': [999, 1]}, default_fill_value=999)
+        self.assertTrue(len(df['A'].sp_values) == len(df.iloc[:, 0].sp_values))
+
+    def test_iget_value(self):
+        # 10711 deprecated
+
+        with tm.assert_produces_warning(FutureWarning):
+            self.frame.iget_value(0, 0)
+
+        for i, row in enumerate(self.frame.index):
+            for j, col in enumerate(self.frame.columns):
+                result = self.frame.iat[i, j]
+                expected = self.frame.at[row, col]
+                assert_almost_equal(result, expected)
+
+    def test_nested_exception(self):
+        # Ignore the strange way of triggering the problem
+        # (which may get fixed), it's just a way to trigger
+        # the issue of reraising an outer exception without
+        # a named argument
+        df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6],
+                        "c": [7, 8, 9]}).set_index(["a", "b"])
+        l = list(df.index)
+        l[0] = ["a", "b"]
+        df.index = l
+
+        try:
+            repr(df)
+        except Exception as e:
+            self.assertNotEqual(type(e), UnboundLocalError)
+
+    def test_reindex_methods(self):
+        df = pd.DataFrame({'x': list(range(5))})
+        target = np.array([-0.1, 0.9, 1.1, 1.5])
+
+        for method, expected_values in [('nearest', [0, 1, 1, 2]),
+                                        ('pad', [np.nan, 0, 1, 1]),
+                                        ('backfill', [0, 1, 2, 2])]:
+            expected = pd.DataFrame({'x': expected_values}, index=target)
+            actual = df.reindex(target, method=method)
+            assert_frame_equal(expected, actual)
+
+            actual = df.reindex_like(df, method=method, tolerance=0)
+            assert_frame_equal(df, actual)
+
+            actual = df.reindex(target, method=method, tolerance=1)
+            assert_frame_equal(expected, actual)
+
+            e2 = expected[::-1]
+            actual = df.reindex(target[::-1], method=method)
+            assert_frame_equal(e2, actual)
+
+            new_order = [3, 0, 2, 1]
+            e2 = expected.iloc[new_order]
+            actual = df.reindex(target[new_order], method=method)
+            assert_frame_equal(e2, actual)
+
+            switched_method = ('pad' if method == 'backfill'
+                               else 'backfill' if method == 'pad'
+                               else method)
+            
actual = df[::-1].reindex(target, method=switched_method) + assert_frame_equal(expected, actual) + + expected = pd.DataFrame({'x': [0, 1, 1, np.nan]}, index=target) + actual = df.reindex(target, method='nearest', tolerance=0.2) + assert_frame_equal(expected, actual) + + def test_non_monotonic_reindex_methods(self): + dr = pd.date_range('2013-08-01', periods=6, freq='B') + data = np.random.randn(6, 1) + df = pd.DataFrame(data, index=dr, columns=list('A')) + df_rev = pd.DataFrame(data, index=dr[[3, 4, 5] + [0, 1, 2]], + columns=list('A')) + # index is not monotonic increasing or decreasing + self.assertRaises(ValueError, df_rev.reindex, df.index, method='pad') + self.assertRaises(ValueError, df_rev.reindex, df.index, method='ffill') + self.assertRaises(ValueError, df_rev.reindex, df.index, method='bfill') + self.assertRaises(ValueError, df_rev.reindex, + df.index, method='nearest') + + def test_reindex_level(self): + from itertools import permutations + icol = ['jim', 'joe', 'jolie'] + + def verify_first_level(df, level, idx, check_index_type=True): + f = lambda val: np.nonzero(df[level] == val)[0] + i = np.concatenate(list(map(f, idx))) + left = df.set_index(icol).reindex(idx, level=level) + right = df.iloc[i].set_index(icol) + assert_frame_equal(left, right, check_index_type=check_index_type) + + def verify(df, level, idx, indexer, check_index_type=True): + left = df.set_index(icol).reindex(idx, level=level) + right = df.iloc[indexer].set_index(icol) + assert_frame_equal(left, right, check_index_type=check_index_type) + + df = pd.DataFrame({'jim': list('B' * 4 + 'A' * 2 + 'C' * 3), + 'joe': list('abcdeabcd')[::-1], + 'jolie': [10, 20, 30] * 3, + 'joline': np.random.randint(0, 1000, 9)}) + + target = [['C', 'B', 'A'], ['F', 'C', 'A', 'D'], ['A'], + ['A', 'B', 'C'], ['C', 'A', 'B'], ['C', 'B'], ['C', 'A'], + ['A', 'B'], ['B', 'A', 'C']] + + for idx in target: + verify_first_level(df, 'jim', idx) + + # reindex by these causes different MultiIndex levels + for idx in 
[['D', 'F'], ['A', 'C', 'B']]: + verify_first_level(df, 'jim', idx, check_index_type=False) + + verify(df, 'joe', list('abcde'), [3, 2, 1, 0, 5, 4, 8, 7, 6]) + verify(df, 'joe', list('abcd'), [3, 2, 1, 0, 5, 8, 7, 6]) + verify(df, 'joe', list('abc'), [3, 2, 1, 8, 7, 6]) + verify(df, 'joe', list('eca'), [1, 3, 4, 6, 8]) + verify(df, 'joe', list('edc'), [0, 1, 4, 5, 6]) + verify(df, 'joe', list('eadbc'), [3, 0, 2, 1, 4, 5, 8, 7, 6]) + verify(df, 'joe', list('edwq'), [0, 4, 5]) + verify(df, 'joe', list('wq'), [], check_index_type=False) + + df = DataFrame({'jim': ['mid'] * 5 + ['btm'] * 8 + ['top'] * 7, + 'joe': ['3rd'] * 2 + ['1st'] * 3 + ['2nd'] * 3 + + ['1st'] * 2 + ['3rd'] * 3 + ['1st'] * 2 + + ['3rd'] * 3 + ['2nd'] * 2, + # this needs to be jointly unique with jim and joe or + # reindexing will fail ~1.5% of the time, this works + # out to needing unique groups of same size as joe + 'jolie': np.concatenate([ + np.random.choice(1000, x, replace=False) + for x in [2, 3, 3, 2, 3, 2, 3, 2]]), + 'joline': np.random.randn(20).round(3) * 10}) + + for idx in permutations(df['jim'].unique()): + for i in range(3): + verify_first_level(df, 'jim', idx[:i + 1]) + + i = [2, 3, 4, 0, 1, 8, 9, 5, 6, 7, 10, + 11, 12, 13, 14, 18, 19, 15, 16, 17] + verify(df, 'joe', ['1st', '2nd', '3rd'], i) + + i = [0, 1, 2, 3, 4, 10, 11, 12, 5, 6, + 7, 8, 9, 15, 16, 17, 18, 19, 13, 14] + verify(df, 'joe', ['3rd', '2nd', '1st'], i) + + i = [0, 1, 5, 6, 7, 10, 11, 12, 18, 19, 15, 16, 17] + verify(df, 'joe', ['2nd', '3rd'], i) + + i = [0, 1, 2, 3, 4, 10, 11, 12, 8, 9, 15, 16, 17, 13, 14] + verify(df, 'joe', ['3rd', '1st'], i) + + def test_getitem_ix_float_duplicates(self): + df = pd.DataFrame(np.random.randn(3, 3), + index=[0.1, 0.2, 0.2], columns=list('abc')) + expect = df.iloc[1:] + assert_frame_equal(df.loc[0.2], expect) + assert_frame_equal(df.ix[0.2], expect) + + expect = df.iloc[1:, 0] + assert_series_equal(df.loc[0.2, 'a'], expect) + + df.index = [1, 0.2, 0.2] + expect = df.iloc[1:] + 
assert_frame_equal(df.loc[0.2], expect) + assert_frame_equal(df.ix[0.2], expect) + + expect = df.iloc[1:, 0] + assert_series_equal(df.loc[0.2, 'a'], expect) + + df = pd.DataFrame(np.random.randn(4, 3), + index=[1, 0.2, 0.2, 1], columns=list('abc')) + expect = df.iloc[1:-1] + assert_frame_equal(df.loc[0.2], expect) + assert_frame_equal(df.ix[0.2], expect) + + expect = df.iloc[1:-1, 0] + assert_series_equal(df.loc[0.2, 'a'], expect) + + df.index = [0.1, 0.2, 2, 0.2] + expect = df.iloc[[1, -1]] + assert_frame_equal(df.loc[0.2], expect) + assert_frame_equal(df.ix[0.2], expect) + + expect = df.iloc[[1, -1], 0] + assert_series_equal(df.loc[0.2, 'a'], expect) + + def test_setitem_with_sparse_value(self): + # GH8131 + df = pd.DataFrame({'c_1': ['a', 'b', 'c'], 'n_1': [1., 2., 3.]}) + sp_series = pd.Series([0, 0, 1]).to_sparse(fill_value=0) + df['new_column'] = sp_series + assert_series_equal(df['new_column'], sp_series, check_names=False) + + def test_setitem_with_unaligned_sparse_value(self): + df = pd.DataFrame({'c_1': ['a', 'b', 'c'], 'n_1': [1., 2., 3.]}) + sp_series = (pd.Series([0, 0, 1], index=[2, 1, 0]) + .to_sparse(fill_value=0)) + df['new_column'] = sp_series + exp = pd.Series([1, 0, 0], name='new_column') + assert_series_equal(df['new_column'], exp) + + def test_setitem_datetime_coercion(self): + # GH 1048 + df = pd.DataFrame({'c': [pd.Timestamp('2010-10-01')] * 3}) + df.loc[0:1, 'c'] = np.datetime64('2008-08-08') + self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[0, 'c']) + self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[1, 'c']) + df.loc[2, 'c'] = date(2005, 5, 5) + self.assertEqual(pd.Timestamp('2005-05-05'), df.loc[2, 'c']) + + def test_datetimelike_setitem_with_inference(self): + # GH 7592 + # assignment of timedeltas with NaT + + one_hour = timedelta(hours=1) + df = DataFrame(index=date_range('20130101', periods=4)) + df['A'] = np.array([1 * one_hour] * 4, dtype='m8[ns]') + df.loc[:, 'B'] = np.array([2 * one_hour] * 4, dtype='m8[ns]') + df.loc[:3, 
'C'] = np.array([3 * one_hour] * 3, dtype='m8[ns]') + df.ix[:, 'D'] = np.array([4 * one_hour] * 4, dtype='m8[ns]') + df.ix[:3, 'E'] = np.array([5 * one_hour] * 3, dtype='m8[ns]') + df['F'] = np.timedelta64('NaT') + df.ix[:-1, 'F'] = np.array([6 * one_hour] * 3, dtype='m8[ns]') + df.ix[-3:, 'G'] = date_range('20130101', periods=3) + df['H'] = np.datetime64('NaT') + result = df.dtypes + expected = Series([np.dtype('timedelta64[ns]')] * 6 + + [np.dtype('datetime64[ns]')] * 2, + index=list('ABCDEFGH')) + assert_series_equal(result, expected) + + def test_at_time_between_time_datetimeindex(self): + index = date_range("2012-01-01", "2012-01-05", freq='30min') + df = DataFrame(randn(len(index), 5), index=index) + akey = time(12, 0, 0) + bkey = slice(time(13, 0, 0), time(14, 0, 0)) + ainds = [24, 72, 120, 168] + binds = [26, 27, 28, 74, 75, 76, 122, 123, 124, 170, 171, 172] + + result = df.at_time(akey) + expected = df.ix[akey] + expected2 = df.ix[ainds] + assert_frame_equal(result, expected) + assert_frame_equal(result, expected2) + self.assertEqual(len(result), 4) + + result = df.between_time(bkey.start, bkey.stop) + expected = df.ix[bkey] + expected2 = df.ix[binds] + assert_frame_equal(result, expected) + assert_frame_equal(result, expected2) + self.assertEqual(len(result), 12) + + result = df.copy() + result.ix[akey] = 0 + result = result.ix[akey] + expected = df.ix[akey].copy() + expected.ix[:] = 0 + assert_frame_equal(result, expected) + + result = df.copy() + result.ix[akey] = 0 + result.ix[akey] = df.ix[ainds] + assert_frame_equal(result, df) + + result = df.copy() + result.ix[bkey] = 0 + result = result.ix[bkey] + expected = df.ix[bkey].copy() + expected.ix[:] = 0 + assert_frame_equal(result, expected) + + result = df.copy() + result.ix[bkey] = 0 + result.ix[bkey] = df.ix[binds] + assert_frame_equal(result, df) + + def test_xs(self): + from pandas.core.datetools import bday + + idx = self.frame.index[5] + xs = self.frame.xs(idx) + for item, value in 
compat.iteritems(xs): + if np.isnan(value): + self.assertTrue(np.isnan(self.frame[item][idx])) + else: + self.assertEqual(value, self.frame[item][idx]) + + # mixed-type xs + test_data = { + 'A': {'1': 1, '2': 2}, + 'B': {'1': '1', '2': '2', '3': '3'}, + } + frame = DataFrame(test_data) + xs = frame.xs('1') + self.assertEqual(xs.dtype, np.object_) + self.assertEqual(xs['A'], 1) + self.assertEqual(xs['B'], '1') + + with tm.assertRaises(KeyError): + self.tsframe.xs(self.tsframe.index[0] - bday) + + # xs get column + series = self.frame.xs('A', axis=1) + expected = self.frame['A'] + assert_series_equal(series, expected) + + # view is returned if possible + series = self.frame.xs('A', axis=1) + series[:] = 5 + self.assertTrue((expected == 5).all()) + + def test_xs_corner(self): + # pathological mixed-type reordering case + df = DataFrame(index=[0]) + df['A'] = 1. + df['B'] = 'foo' + df['C'] = 2. + df['D'] = 'bar' + df['E'] = 3. + + xs = df.xs(0) + assert_almost_equal(xs, [1., 'foo', 2., 'bar', 3.]) + + # no columns but Index(dtype=object) + df = DataFrame(index=['a', 'b', 'c']) + result = df.xs('a') + expected = Series([], name='a', index=pd.Index([], dtype=object)) + assert_series_equal(result, expected) + + def test_xs_duplicates(self): + df = DataFrame(randn(5, 2), index=['b', 'b', 'c', 'b', 'a']) + + cross = df.xs('c') + exp = df.iloc[2] + assert_series_equal(cross, exp) + + def test_xs_keep_level(self): + df = (DataFrame({'day': {0: 'sat', 1: 'sun'}, + 'flavour': {0: 'strawberry', 1: 'strawberry'}, + 'sales': {0: 10, 1: 12}, + 'year': {0: 2008, 1: 2008}}) + .set_index(['year', 'flavour', 'day'])) + result = df.xs('sat', level='day', drop_level=False) + expected = df[:1] + assert_frame_equal(result, expected) + + result = df.xs([2008, 'sat'], level=['year', 'day'], drop_level=False) + assert_frame_equal(result, expected) + + def test_xs_view(self): + # in 0.14 this will return a view if possible a copy otherwise, but + # this is numpy dependent + + dm = 
DataFrame(np.arange(20.).reshape(4, 5), + index=lrange(4), columns=lrange(5)) + + dm.xs(2)[:] = 10 + self.assertTrue((dm.xs(2) == 10).all()) + + def test_index_namedtuple(self): + from collections import namedtuple + IndexType = namedtuple("IndexType", ["a", "b"]) + idx1 = IndexType("foo", "bar") + idx2 = IndexType("baz", "bof") + index = Index([idx1, idx2], + name="composite_index", tupleize_cols=False) + df = DataFrame([(1, 2), (3, 4)], index=index, columns=["A", "B"]) + result = df.ix[IndexType("foo", "bar")]["A"] + self.assertEqual(result, 1) + + def test_boolean_indexing(self): + idx = lrange(3) + cols = ['A', 'B', 'C'] + df1 = DataFrame(index=idx, columns=cols, + data=np.array([[0.0, 0.5, 1.0], + [1.5, 2.0, 2.5], + [3.0, 3.5, 4.0]], + dtype=float)) + df2 = DataFrame(index=idx, columns=cols, + data=np.ones((len(idx), len(cols)))) + + expected = DataFrame(index=idx, columns=cols, + data=np.array([[0.0, 0.5, 1.0], + [1.5, 2.0, -1], + [-1, -1, -1]], dtype=float)) + + df1[df1 > 2.0 * df2] = -1 + assert_frame_equal(df1, expected) + with assertRaisesRegexp(ValueError, 'Item wrong length'): + df1[df1.index[:-1] > 2] = -1 + + def test_boolean_indexing_mixed(self): + df = DataFrame({ + long(0): {35: np.nan, 40: np.nan, 43: np.nan, + 49: np.nan, 50: np.nan}, + long(1): {35: np.nan, + 40: 0.32632316859446198, + 43: np.nan, + 49: 0.32632316859446198, + 50: 0.39114724480578139}, + long(2): {35: np.nan, 40: np.nan, 43: 0.29012581014105987, + 49: np.nan, 50: np.nan}, + long(3): {35: np.nan, 40: np.nan, 43: np.nan, 49: np.nan, + 50: np.nan}, + long(4): {35: 0.34215328467153283, 40: np.nan, 43: np.nan, + 49: np.nan, 50: np.nan}, + 'y': {35: 0, 40: 0, 43: 0, 49: 0, 50: 1}}) + + # mixed int/float ok + df2 = df.copy() + df2[df2 > 0.3] = 1 + expected = df.copy() + expected.loc[40, 1] = 1 + expected.loc[49, 1] = 1 + expected.loc[50, 1] = 1 + expected.loc[35, 4] = 1 + assert_frame_equal(df2, expected) + + df['foo'] = 'test' + with tm.assertRaisesRegexp(TypeError, 'boolean setting on 
mixed-type'): + df[df > 0.3] = 1 + + def test_where(self): + default_frame = DataFrame(np.random.randn(5, 3), + columns=['A', 'B', 'C']) + + def _safe_add(df): + # only add to the numeric items + def is_ok(s): + return (issubclass(s.dtype.type, (np.integer, np.floating)) + and s.dtype != 'uint8') + + return DataFrame(dict([(c, s + 1) if is_ok(s) else (c, s) + for c, s in compat.iteritems(df)])) + + def _check_get(df, cond, check_dtypes=True): + other1 = _safe_add(df) + rs = df.where(cond, other1) + rs2 = df.where(cond.values, other1) + for k, v in rs.iteritems(): + exp = Series( + np.where(cond[k], df[k], other1[k]), index=v.index) + assert_series_equal(v, exp, check_names=False) + assert_frame_equal(rs, rs2) + + # dtypes + if check_dtypes: + self.assertTrue((rs.dtypes == df.dtypes).all()) + + # check getting + for df in [default_frame, self.mixed_frame, + self.mixed_float, self.mixed_int]: + cond = df > 0 + _check_get(df, cond) + + # upcasting case (GH # 2794) + df = DataFrame(dict([(c, Series([1] * 3, dtype=c)) + for c in ['int64', 'int32', + 'float32', 'float64']])) + df.ix[1, :] = 0 + result = df.where(df >= 0).get_dtype_counts() + + # when we don't preserve boolean casts + # + # expected = Series({ 'float32' : 1, 'float64' : 3 }) + + expected = Series({'float32': 1, 'float64': 1, 'int32': 1, 'int64': 1}) + assert_series_equal(result, expected) + + # aligning + def _check_align(df, cond, other, check_dtypes=True): + rs = df.where(cond, other) + for i, k in enumerate(rs.columns): + result = rs[k] + d = df[k].values + c = cond[k].reindex(df[k].index).fillna(False).values + + if np.isscalar(other): + o = other + else: + if isinstance(other, np.ndarray): + o = Series(other[:, i], index=result.index).values + else: + o = other[k].values + + new_values = d if c.all() else np.where(c, d, o) + expected = Series(new_values, index=result.index, name=k) + + # since we can't always have the correct numpy dtype + # as numpy doesn't know how to downcast, don't check + 
assert_series_equal(result, expected, check_dtype=False)
+
+                # dtypes
+                # can't check dtype when other is an ndarray
+
+                if check_dtypes and not isinstance(other, np.ndarray):
+                    self.assertTrue((rs.dtypes == df.dtypes).all())
+
+        for df in [self.mixed_frame, self.mixed_float, self.mixed_int]:
+
+            # other is a frame
+            cond = (df > 0)[1:]
+            _check_align(df, cond, _safe_add(df))
+
+            # check other is ndarray
+            cond = df > 0
+            _check_align(df, cond, (_safe_add(df).values))
+
+            # integers are upcast, so don't check the dtypes
+            cond = df > 0
+            check_dtypes = all([not issubclass(s.type, np.integer)
+                                for s in df.dtypes])
+            _check_align(df, cond, np.nan, check_dtypes=check_dtypes)
+
+        # invalid conditions
+        df = default_frame
+        err1 = (df + 1).values[0:2, :]
+        self.assertRaises(ValueError, df.where, cond, err1)
+
+        err2 = cond.ix[:2, :].values
+        other1 = _safe_add(df)
+        self.assertRaises(ValueError, df.where, err2, other1)
+
+        self.assertRaises(ValueError, df.mask, True)
+        self.assertRaises(ValueError, df.mask, 0)
+
+        # where inplace
+        def _check_set(df, cond, check_dtypes=True):
+            dfi = df.copy()
+            econd = cond.reindex_like(df).fillna(True)
+            expected = dfi.mask(~econd)
+
+            dfi.where(cond, np.nan, inplace=True)
+            assert_frame_equal(dfi, expected)
+
+            # dtypes (and confirm upcasts)
+            if check_dtypes:
+                for k, v in compat.iteritems(df.dtypes):
+                    if issubclass(v.type, np.integer) and not cond[k].all():
+                        v = np.dtype('float64')
+                    self.assertEqual(dfi[k].dtype, v)
+
+        for df in [default_frame, self.mixed_frame, self.mixed_float,
+                   self.mixed_int]:
+
+            cond = df > 0
+            _check_set(df, cond)
+
+            cond = df >= 0
+            _check_set(df, cond)
+
+            # aligning
+            cond = (df >= 0)[1:]
+            _check_set(df, cond)
+
+        # GH 10218
+        # test DataFrame.where with Series slicing
+        df = DataFrame({'a': range(3), 'b': range(4, 7)})
+        result = df.where(df['a'] == 1)
+        expected = df[df['a'] == 1].reindex(df.index)
+        assert_frame_equal(result, expected)
+
+    def test_where_bug(self):
+
+        # GH 2793
+
+        df = 
DataFrame({'a': [1.0, 2.0, 3.0, 4.0], 'b': [ + 4.0, 3.0, 2.0, 1.0]}, dtype='float64') + expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0], 'b': [ + 4.0, 3.0, np.nan, np.nan]}, dtype='float64') + result = df.where(df > 2, np.nan) + assert_frame_equal(result, expected) + + result = df.copy() + result.where(result > 2, np.nan, inplace=True) + assert_frame_equal(result, expected) + + # mixed + for dtype in ['int16', 'int8', 'int32', 'int64']: + df = DataFrame({'a': np.array([1, 2, 3, 4], dtype=dtype), + 'b': np.array([4.0, 3.0, 2.0, 1.0], + dtype='float64')}) + + expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0], + 'b': [4.0, 3.0, np.nan, np.nan]}, + dtype='float64') + + result = df.where(df > 2, np.nan) + assert_frame_equal(result, expected) + + result = df.copy() + result.where(result > 2, np.nan, inplace=True) + assert_frame_equal(result, expected) + + # transpositional issue + # GH7506 + a = DataFrame({0: [1, 2], 1: [3, 4], 2: [5, 6]}) + b = DataFrame({0: [np.nan, 8], 1: [9, np.nan], 2: [np.nan, np.nan]}) + do_not_replace = b.isnull() | (a > b) + + expected = a.copy() + expected[~do_not_replace] = b + + result = a.where(do_not_replace, b) + assert_frame_equal(result, expected) + + a = DataFrame({0: [4, 6], 1: [1, 0]}) + b = DataFrame({0: [np.nan, 3], 1: [3, np.nan]}) + do_not_replace = b.isnull() | (a > b) + + expected = a.copy() + expected[~do_not_replace] = b + + result = a.where(do_not_replace, b) + assert_frame_equal(result, expected) + + def test_where_datetime(self): + + # GH 3311 + df = DataFrame(dict(A=date_range('20130102', periods=5), + B=date_range('20130104', periods=5), + C=np.random.randn(5))) + + stamp = datetime(2013, 1, 3) + result = df[df > stamp] + expected = df.copy() + expected.loc[[0, 1], 'A'] = np.nan + assert_frame_equal(result, expected) + + def test_where_none(self): + # GH 4667 + # setting with None changes dtype + df = DataFrame({'series': Series(range(10))}).astype(float) + df[df > 7] = None + expected = DataFrame( + {'series': 
Series([0, 1, 2, 3, 4, 5, 6, 7, np.nan, np.nan])}) + assert_frame_equal(df, expected) + + # GH 7656 + df = DataFrame([{'A': 1, 'B': np.nan, 'C': 'Test'}, { + 'A': np.nan, 'B': 'Test', 'C': np.nan}]) + expected = df.where(~isnull(df), None) + with tm.assertRaisesRegexp(TypeError, 'boolean setting on mixed-type'): + df.where(~isnull(df), None, inplace=True) + + def test_where_align(self): + + def create(): + df = DataFrame(np.random.randn(10, 3)) + df.iloc[3:5, 0] = np.nan + df.iloc[4:6, 1] = np.nan + df.iloc[5:8, 2] = np.nan + return df + + # series + df = create() + expected = df.fillna(df.mean()) + result = df.where(pd.notnull(df), df.mean(), axis='columns') + assert_frame_equal(result, expected) + + df.where(pd.notnull(df), df.mean(), inplace=True, axis='columns') + assert_frame_equal(df, expected) + + df = create().fillna(0) + expected = df.apply(lambda x, y: x.where(x > 0, y), y=df[0]) + result = df.where(df > 0, df[0], axis='index') + assert_frame_equal(result, expected) + result = df.where(df > 0, df[0], axis='rows') + assert_frame_equal(result, expected) + + # frame + df = create() + expected = df.fillna(1) + result = df.where(pd.notnull(df), DataFrame( + 1, index=df.index, columns=df.columns)) + assert_frame_equal(result, expected) + + def test_where_complex(self): + # GH 6345 + expected = DataFrame( + [[1 + 1j, 2], [np.nan, 4 + 1j]], columns=['a', 'b']) + df = DataFrame([[1 + 1j, 2], [5 + 1j, 4 + 1j]], columns=['a', 'b']) + df[df.abs() >= 5] = np.nan + assert_frame_equal(df, expected) + + def test_where_axis(self): + # GH 9736 + df = DataFrame(np.random.randn(2, 2)) + mask = DataFrame([[False, False], [False, False]]) + s = Series([0, 1]) + + expected = DataFrame([[0, 0], [1, 1]], dtype='float64') + result = df.where(mask, s, axis='index') + assert_frame_equal(result, expected) + + result = df.copy() + result.where(mask, s, axis='index', inplace=True) + assert_frame_equal(result, expected) + + expected = DataFrame([[0, 1], [0, 1]], dtype='float64') + 
result = df.where(mask, s, axis='columns') + assert_frame_equal(result, expected) + + result = df.copy() + result.where(mask, s, axis='columns', inplace=True) + assert_frame_equal(result, expected) + + # Upcast needed + df = DataFrame([[1, 2], [3, 4]], dtype='int64') + mask = DataFrame([[False, False], [False, False]]) + s = Series([0, np.nan]) + + expected = DataFrame([[0, 0], [np.nan, np.nan]], dtype='float64') + result = df.where(mask, s, axis='index') + assert_frame_equal(result, expected) + + result = df.copy() + result.where(mask, s, axis='index', inplace=True) + assert_frame_equal(result, expected) + + expected = DataFrame([[0, np.nan], [0, np.nan]], dtype='float64') + result = df.where(mask, s, axis='columns') + assert_frame_equal(result, expected) + + expected = DataFrame({0: np.array([0, 0], dtype='int64'), + 1: np.array([np.nan, np.nan], dtype='float64')}) + result = df.copy() + result.where(mask, s, axis='columns', inplace=True) + assert_frame_equal(result, expected) + + # Multiple dtypes (=> multiple Blocks) + df = pd.concat([DataFrame(np.random.randn(10, 2)), + DataFrame(np.random.randint(0, 10, size=(10, 2)))], + ignore_index=True, axis=1) + mask = DataFrame(False, columns=df.columns, index=df.index) + s1 = Series(1, index=df.columns) + s2 = Series(2, index=df.index) + + result = df.where(mask, s1, axis='columns') + expected = DataFrame(1.0, columns=df.columns, index=df.index) + expected[2] = expected[2].astype(int) + expected[3] = expected[3].astype(int) + assert_frame_equal(result, expected) + + result = df.copy() + result.where(mask, s1, axis='columns', inplace=True) + assert_frame_equal(result, expected) + + result = df.where(mask, s2, axis='index') + expected = DataFrame(2.0, columns=df.columns, index=df.index) + expected[2] = expected[2].astype(int) + expected[3] = expected[3].astype(int) + assert_frame_equal(result, expected) + + result = df.copy() + result.where(mask, s2, axis='index', inplace=True) + assert_frame_equal(result, expected) + + 
# DataFrame vs DataFrame + d1 = df.copy().drop(1, axis=0) + expected = df.copy() + expected.loc[1, :] = np.nan + + result = df.where(mask, d1) + assert_frame_equal(result, expected) + result = df.where(mask, d1, axis='index') + assert_frame_equal(result, expected) + result = df.copy() + result.where(mask, d1, inplace=True) + assert_frame_equal(result, expected) + result = df.copy() + result.where(mask, d1, inplace=True, axis='index') + assert_frame_equal(result, expected) + + d2 = df.copy().drop(1, axis=1) + expected = df.copy() + expected.loc[:, 1] = np.nan + + result = df.where(mask, d2) + assert_frame_equal(result, expected) + result = df.where(mask, d2, axis='columns') + assert_frame_equal(result, expected) + result = df.copy() + result.where(mask, d2, inplace=True) + assert_frame_equal(result, expected) + result = df.copy() + result.where(mask, d2, inplace=True, axis='columns') + assert_frame_equal(result, expected) + + def test_mask(self): + df = DataFrame(np.random.randn(5, 3)) + cond = df > 0 + + rs = df.where(cond, np.nan) + assert_frame_equal(rs, df.mask(df <= 0)) + assert_frame_equal(rs, df.mask(~cond)) + + other = DataFrame(np.random.randn(5, 3)) + rs = df.where(cond, other) + assert_frame_equal(rs, df.mask(df <= 0, other)) + assert_frame_equal(rs, df.mask(~cond, other)) + + def test_mask_inplace(self): + # GH8801 + df = DataFrame(np.random.randn(5, 3)) + cond = df > 0 + + rdf = df.copy() + + rdf.where(cond, inplace=True) + assert_frame_equal(rdf, df.where(cond)) + assert_frame_equal(rdf, df.mask(~cond)) + + rdf = df.copy() + rdf.where(cond, -df, inplace=True) + assert_frame_equal(rdf, df.where(cond, -df)) + assert_frame_equal(rdf, df.mask(~cond, -df)) + + def test_mask_edge_case_1xN_frame(self): + # GH4071 + df = DataFrame([[1, 2]]) + res = df.mask(DataFrame([[True, False]])) + expec = DataFrame([[nan, 2]]) + assert_frame_equal(res, expec) + + def test_head_tail(self): + assert_frame_equal(self.frame.head(), self.frame[:5]) + 
assert_frame_equal(self.frame.tail(), self.frame[-5:]) + + assert_frame_equal(self.frame.head(0), self.frame[0:0]) + assert_frame_equal(self.frame.tail(0), self.frame[0:0]) + + assert_frame_equal(self.frame.head(-1), self.frame[:-1]) + assert_frame_equal(self.frame.tail(-1), self.frame[1:]) + assert_frame_equal(self.frame.head(1), self.frame[:1]) + assert_frame_equal(self.frame.tail(1), self.frame[-1:]) + # with a float index + df = self.frame.copy() + df.index = np.arange(len(self.frame)) + 0.1 + assert_frame_equal(df.head(), df.iloc[:5]) + assert_frame_equal(df.tail(), df.iloc[-5:]) + assert_frame_equal(df.head(0), df[0:0]) + assert_frame_equal(df.tail(0), df[0:0]) + assert_frame_equal(df.head(-1), df.iloc[:-1]) + assert_frame_equal(df.tail(-1), df.iloc[1:]) + # test empty dataframe + empty_df = DataFrame() + assert_frame_equal(empty_df.tail(), empty_df) + assert_frame_equal(empty_df.head(), empty_df) diff --git a/pandas/tests/frame/test_misc_api.py b/pandas/tests/frame/test_misc_api.py new file mode 100644 index 0000000000000..ade1895ece14f --- /dev/null +++ b/pandas/tests/frame/test_misc_api.py @@ -0,0 +1,487 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function +# pylint: disable-msg=W0612,E1101 +from copy import deepcopy +import sys +import nose +from distutils.version import LooseVersion + +from pandas.compat import range, lrange +from pandas import compat + +from numpy.random import randn +import numpy as np + +from pandas import DataFrame, Series +import pandas as pd + +from pandas.util.testing import (assert_almost_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class SafeForSparse(object): + + _multiprocess_can_split_ = True + + def test_copy_index_name_checking(self): + # don't want to be able to modify the index stored elsewhere after + # making a copy + for attr in ('index', 'columns'): + ind = getattr(self.frame, attr) + 
ind.name = None + cp = self.frame.copy() + getattr(cp, attr).name = 'foo' + self.assertIsNone(getattr(self.frame, attr).name) + + def test_getitem_pop_assign_name(self): + s = self.frame['A'] + self.assertEqual(s.name, 'A') + + s = self.frame.pop('A') + self.assertEqual(s.name, 'A') + + s = self.frame.ix[:, 'B'] + self.assertEqual(s.name, 'B') + + s2 = s.ix[:] + self.assertEqual(s2.name, 'B') + + def test_get_value(self): + for idx in self.frame.index: + for col in self.frame.columns: + result = self.frame.get_value(idx, col) + expected = self.frame[col][idx] + assert_almost_equal(result, expected) + + def test_join_index(self): + # left / right + + f = self.frame.reindex(columns=['A', 'B'])[:10] + f2 = self.frame.reindex(columns=['C', 'D']) + + joined = f.join(f2) + self.assertTrue(f.index.equals(joined.index)) + self.assertEqual(len(joined.columns), 4) + + joined = f.join(f2, how='left') + self.assertTrue(joined.index.equals(f.index)) + self.assertEqual(len(joined.columns), 4) + + joined = f.join(f2, how='right') + self.assertTrue(joined.index.equals(f2.index)) + self.assertEqual(len(joined.columns), 4) + + # inner + + f = self.frame.reindex(columns=['A', 'B'])[:10] + f2 = self.frame.reindex(columns=['C', 'D']) + + joined = f.join(f2, how='inner') + self.assertTrue(joined.index.equals(f.index.intersection(f2.index))) + self.assertEqual(len(joined.columns), 4) + + # outer + + f = self.frame.reindex(columns=['A', 'B'])[:10] + f2 = self.frame.reindex(columns=['C', 'D']) + + joined = f.join(f2, how='outer') + self.assertTrue(tm.equalContents(self.frame.index, joined.index)) + self.assertEqual(len(joined.columns), 4) + + assertRaisesRegexp(ValueError, 'join method', f.join, f2, how='foo') + + # corner case - overlapping columns + for how in ('outer', 'left', 'inner'): + with assertRaisesRegexp(ValueError, 'columns overlap but ' + 'no suffix'): + self.frame.join(self.frame, how=how) + + def test_join_index_more(self): + af = self.frame.ix[:, ['A', 'B']] + bf = 
self.frame.ix[::2, ['C', 'D']] + + expected = af.copy() + expected['C'] = self.frame['C'][::2] + expected['D'] = self.frame['D'][::2] + + result = af.join(bf) + assert_frame_equal(result, expected) + + result = af.join(bf, how='right') + assert_frame_equal(result, expected[::2]) + + result = bf.join(af, how='right') + assert_frame_equal(result, expected.ix[:, result.columns]) + + def test_join_index_series(self): + df = self.frame.copy() + s = df.pop(self.frame.columns[-1]) + joined = df.join(s) + + # TODO should this check_names ? + assert_frame_equal(joined, self.frame, check_names=False) + + s.name = None + assertRaisesRegexp(ValueError, 'must have a name', df.join, s) + + def test_join_overlap(self): + df1 = self.frame.ix[:, ['A', 'B', 'C']] + df2 = self.frame.ix[:, ['B', 'C', 'D']] + + joined = df1.join(df2, lsuffix='_df1', rsuffix='_df2') + df1_suf = df1.ix[:, ['B', 'C']].add_suffix('_df1') + df2_suf = df2.ix[:, ['B', 'C']].add_suffix('_df2') + + no_overlap = self.frame.ix[:, ['A', 'D']] + expected = df1_suf.join(df2_suf).join(no_overlap) + + # column order not necessarily sorted + assert_frame_equal(joined, expected.ix[:, joined.columns]) + + def test_add_prefix_suffix(self): + with_prefix = self.frame.add_prefix('foo#') + expected = ['foo#%s' % c for c in self.frame.columns] + self.assert_numpy_array_equal(with_prefix.columns, expected) + + with_suffix = self.frame.add_suffix('#foo') + expected = ['%s#foo' % c for c in self.frame.columns] + self.assert_numpy_array_equal(with_suffix.columns, expected) + + +class TestDataFrameMisc(tm.TestCase, SafeForSparse, TestData): + + klass = DataFrame + + _multiprocess_can_split_ = True + + def test_get_axis(self): + f = self.frame + self.assertEqual(f._get_axis_number(0), 0) + self.assertEqual(f._get_axis_number(1), 1) + self.assertEqual(f._get_axis_number('index'), 0) + self.assertEqual(f._get_axis_number('rows'), 0) + self.assertEqual(f._get_axis_number('columns'), 1) + + self.assertEqual(f._get_axis_name(0), 
'index') + self.assertEqual(f._get_axis_name(1), 'columns') + self.assertEqual(f._get_axis_name('index'), 'index') + self.assertEqual(f._get_axis_name('rows'), 'index') + self.assertEqual(f._get_axis_name('columns'), 'columns') + + self.assertIs(f._get_axis(0), f.index) + self.assertIs(f._get_axis(1), f.columns) + + assertRaisesRegexp(ValueError, 'No axis named', f._get_axis_number, 2) + assertRaisesRegexp(ValueError, 'No axis.*foo', f._get_axis_name, 'foo') + assertRaisesRegexp(ValueError, 'No axis.*None', f._get_axis_name, None) + assertRaisesRegexp(ValueError, 'No axis named', f._get_axis_number, + None) + + def test_keys(self): + getkeys = self.frame.keys + self.assertIs(getkeys(), self.frame.columns) + + def test_column_contains_typeerror(self): + try: + self.frame.columns in self.frame + except TypeError: + pass + + def test_not_hashable(self): + df = pd.DataFrame([1]) + self.assertRaises(TypeError, hash, df) + self.assertRaises(TypeError, hash, self.empty) + + def test_new_empty_index(self): + df1 = DataFrame(randn(0, 3)) + df2 = DataFrame(randn(0, 3)) + df1.index.name = 'foo' + self.assertIsNone(df2.index.name) + + def test_array_interface(self): + result = np.sqrt(self.frame) + tm.assertIsInstance(result, type(self.frame)) + self.assertIs(result.index, self.frame.index) + self.assertIs(result.columns, self.frame.columns) + + assert_frame_equal(result, self.frame.apply(np.sqrt)) + + def test_get_agg_axis(self): + cols = self.frame._get_agg_axis(0) + self.assertIs(cols, self.frame.columns) + + idx = self.frame._get_agg_axis(1) + self.assertIs(idx, self.frame.index) + + self.assertRaises(ValueError, self.frame._get_agg_axis, 2) + + def test_nonzero(self): + self.assertTrue(self.empty.empty) + + self.assertFalse(self.frame.empty) + self.assertFalse(self.mixed_frame.empty) + + # corner case + df = DataFrame({'A': [1., 2., 3.], + 'B': ['a', 'b', 'c']}, + index=np.arange(3)) + del df['A'] + self.assertFalse(df.empty) + + def test_iteritems(self): + df = 
DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b']) + for k, v in compat.iteritems(df): + self.assertEqual(type(v), Series) + + def test_iter(self): + self.assertTrue(tm.equalContents(list(self.frame), self.frame.columns)) + + def test_iterrows(self): + for i, (k, v) in enumerate(self.frame.iterrows()): + exp = self.frame.xs(self.frame.index[i]) + assert_series_equal(v, exp) + + for i, (k, v) in enumerate(self.mixed_frame.iterrows()): + exp = self.mixed_frame.xs(self.mixed_frame.index[i]) + assert_series_equal(v, exp) + + def test_itertuples(self): + for i, tup in enumerate(self.frame.itertuples()): + s = Series(tup[1:]) + s.name = tup[0] + expected = self.frame.ix[i, :].reset_index(drop=True) + assert_series_equal(s, expected) + + df = DataFrame({'floats': np.random.randn(5), + 'ints': lrange(5)}, columns=['floats', 'ints']) + + for tup in df.itertuples(index=False): + tm.assertIsInstance(tup[1], np.integer) + + df = DataFrame(data={"a": [1, 2, 3], "b": [4, 5, 6]}) + dfaa = df[['a', 'a']] + self.assertEqual(list(dfaa.itertuples()), [ + (0, 1, 1), (1, 2, 2), (2, 3, 3)]) + + self.assertEqual(repr(list(df.itertuples(name=None))), + '[(0, 1, 4), (1, 2, 5), (2, 3, 6)]') + + tup = next(df.itertuples(name='TestName')) + + # no support for field renaming in Python 2.6, regular tuples are + # returned + if sys.version >= LooseVersion('2.7'): + self.assertEqual(tup._fields, ('Index', 'a', 'b')) + self.assertEqual((tup.Index, tup.a, tup.b), tup) + self.assertEqual(type(tup).__name__, 'TestName') + + df.columns = ['def', 'return'] + tup2 = next(df.itertuples(name='TestName')) + self.assertEqual(tup2, (0, 1, 4)) + + if sys.version >= LooseVersion('2.7'): + self.assertEqual(tup2._fields, ('Index', '_1', '_2')) + + df3 = DataFrame(dict(('f' + str(i), [i]) for i in range(1024))) + # will raise SyntaxError if trying to create namedtuple + tup3 = next(df3.itertuples()) + self.assertFalse(hasattr(tup3, '_fields')) + self.assertIsInstance(tup3, tuple) + + def test_len(self): + 
self.assertEqual(len(self.frame), len(self.frame.index)) + + def test_as_matrix(self): + frame = self.frame + mat = frame.as_matrix() + + frameCols = frame.columns + for i, row in enumerate(mat): + for j, value in enumerate(row): + col = frameCols[j] + if np.isnan(value): + self.assertTrue(np.isnan(frame[col][i])) + else: + self.assertEqual(value, frame[col][i]) + + # mixed type + mat = self.mixed_frame.as_matrix(['foo', 'A']) + self.assertEqual(mat[0, 0], 'bar') + + df = DataFrame({'real': [1, 2, 3], 'complex': [1j, 2j, 3j]}) + mat = df.as_matrix() + self.assertEqual(mat[0, 0], 1j) + + # single block corner case + mat = self.frame.as_matrix(['A', 'B']) + expected = self.frame.reindex(columns=['A', 'B']).values + assert_almost_equal(mat, expected) + + def test_values(self): + self.frame.values[:, 0] = 5. + self.assertTrue((self.frame.values[:, 0] == 5).all()) + + def test_deepcopy(self): + cp = deepcopy(self.frame) + series = cp['A'] + series[:] = 10 + for idx, value in compat.iteritems(series): + self.assertNotEqual(self.frame['A'][idx], value) + + # --------------------------------------------------------------------- + # Transposing + + def test_transpose(self): + frame = self.frame + dft = frame.T + for idx, series in compat.iteritems(dft): + for col, value in compat.iteritems(series): + if np.isnan(value): + self.assertTrue(np.isnan(frame[col][idx])) + else: + self.assertEqual(value, frame[col][idx]) + + # mixed type + index, data = tm.getMixedTypeDict() + mixed = DataFrame(data, index=index) + + mixed_T = mixed.T + for col, s in compat.iteritems(mixed_T): + self.assertEqual(s.dtype, np.object_) + + def test_transpose_get_view(self): + dft = self.frame.T + dft.values[:, 5:10] = 5 + + self.assertTrue((self.frame.values[5:10] == 5).all()) + + def test_swapaxes(self): + df = DataFrame(np.random.randn(10, 5)) + assert_frame_equal(df.T, df.swapaxes(0, 1)) + assert_frame_equal(df.T, df.swapaxes(1, 0)) + assert_frame_equal(df, df.swapaxes(0, 0)) + 
self.assertRaises(ValueError, df.swapaxes, 2, 5) + + def test_axis_aliases(self): + f = self.frame + + # reg name + expected = f.sum(axis=0) + result = f.sum(axis='index') + assert_series_equal(result, expected) + + expected = f.sum(axis=1) + result = f.sum(axis='columns') + assert_series_equal(result, expected) + + def test_more_asMatrix(self): + values = self.mixed_frame.as_matrix() + self.assertEqual(values.shape[1], len(self.mixed_frame.columns)) + + def test_repr_with_mi_nat(self): + df = DataFrame({'X': [1, 2]}, + index=[[pd.NaT, pd.Timestamp('20130101')], ['a', 'b']]) + res = repr(df) + exp = ' X\nNaT a 1\n2013-01-01 b 2' + nose.tools.assert_equal(res, exp) + + def test_iterkv_deprecation(self): + with tm.assert_produces_warning(FutureWarning): + self.mixed_float.iterkv() + + def test_iterkv_names(self): + for k, v in compat.iteritems(self.mixed_frame): + self.assertEqual(v.name, k) + + def test_series_put_names(self): + series = self.mixed_frame._series + for k, v in compat.iteritems(series): + self.assertEqual(v.name, k) + + def test_empty_nonzero(self): + df = DataFrame([1, 2, 3]) + self.assertFalse(df.empty) + df = DataFrame(index=['a', 'b'], columns=['c', 'd']).dropna() + self.assertTrue(df.empty) + self.assertTrue(df.T.empty) + + def test_inplace_return_self(self): + # re #1893 + + data = DataFrame({'a': ['foo', 'bar', 'baz', 'qux'], + 'b': [0, 0, 1, 1], + 'c': [1, 2, 3, 4]}) + + def _check_f(base, f): + result = f(base) + self.assertTrue(result is None) + + # -----DataFrame----- + + # set_index + f = lambda x: x.set_index('a', inplace=True) + _check_f(data.copy(), f) + + # reset_index + f = lambda x: x.reset_index(inplace=True) + _check_f(data.set_index('a'), f) + + # drop_duplicates + f = lambda x: x.drop_duplicates(inplace=True) + _check_f(data.copy(), f) + + # sort + f = lambda x: x.sort_values('b', inplace=True) + _check_f(data.copy(), f) + + # sort_index + f = lambda x: x.sort_index(inplace=True) + _check_f(data.copy(), f) + + # sortlevel + f = 
lambda x: x.sortlevel(0, inplace=True) + _check_f(data.set_index(['a', 'b']), f) + + # fillna + f = lambda x: x.fillna(0, inplace=True) + _check_f(data.copy(), f) + + # replace + f = lambda x: x.replace(1, 0, inplace=True) + _check_f(data.copy(), f) + + # rename + f = lambda x: x.rename({1: 'foo'}, inplace=True) + _check_f(data.copy(), f) + + # -----Series----- + d = data.copy()['c'] + + # reset_index + f = lambda x: x.reset_index(inplace=True, drop=True) + _check_f(data.set_index('a')['c'], f) + + # fillna + f = lambda x: x.fillna(0, inplace=True) + _check_f(d.copy(), f) + + # replace + f = lambda x: x.replace(1, 0, inplace=True) + _check_f(d.copy(), f) + + # rename + f = lambda x: x.rename({1: 'foo'}, inplace=True) + _check_f(d.copy(), f) + + +if __name__ == '__main__': + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], + exit=False) diff --git a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py new file mode 100644 index 0000000000000..fd212664b5b9b --- /dev/null +++ b/pandas/tests/frame/test_missing.py @@ -0,0 +1,427 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from numpy import nan, random +import numpy as np + +from pandas.compat import lrange +from pandas import (DataFrame, Series, Timestamp, + date_range) +import pandas as pd + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm +from pandas.tests.frame.common import TestData, _check_mixed_float + + +class TestDataFrameMissingData(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_dropEmptyRows(self): + N = len(self.frame.index) + mat = random.randn(N) + mat[:5] = nan + + frame = DataFrame({'foo': mat}, index=self.frame.index) + original = Series(mat, index=self.frame.index, name='foo') + expected = original.dropna() + inplace_frame1, inplace_frame2 = frame.copy(), frame.copy() + + smaller_frame = frame.dropna(how='all') + # check 
that original was preserved + assert_series_equal(frame['foo'], original) + inplace_frame1.dropna(how='all', inplace=True) + assert_series_equal(smaller_frame['foo'], expected) + assert_series_equal(inplace_frame1['foo'], expected) + + smaller_frame = frame.dropna(how='all', subset=['foo']) + inplace_frame2.dropna(how='all', subset=['foo'], inplace=True) + assert_series_equal(smaller_frame['foo'], expected) + assert_series_equal(inplace_frame2['foo'], expected) + + def test_dropIncompleteRows(self): + N = len(self.frame.index) + mat = random.randn(N) + mat[:5] = nan + + frame = DataFrame({'foo': mat}, index=self.frame.index) + frame['bar'] = 5 + original = Series(mat, index=self.frame.index, name='foo') + inp_frame1, inp_frame2 = frame.copy(), frame.copy() + + smaller_frame = frame.dropna() + assert_series_equal(frame['foo'], original) + inp_frame1.dropna(inplace=True) + self.assert_numpy_array_equal(smaller_frame['foo'], mat[5:]) + self.assert_numpy_array_equal(inp_frame1['foo'], mat[5:]) + + samesize_frame = frame.dropna(subset=['bar']) + assert_series_equal(frame['foo'], original) + self.assertTrue((frame['bar'] == 5).all()) + inp_frame2.dropna(subset=['bar'], inplace=True) + self.assertTrue(samesize_frame.index.equals(self.frame.index)) + self.assertTrue(inp_frame2.index.equals(self.frame.index)) + + def test_dropna(self): + df = DataFrame(np.random.randn(6, 4)) + df[2][:2] = nan + + dropped = df.dropna(axis=1) + expected = df.ix[:, [0, 1, 3]] + inp = df.copy() + inp.dropna(axis=1, inplace=True) + assert_frame_equal(dropped, expected) + assert_frame_equal(inp, expected) + + dropped = df.dropna(axis=0) + expected = df.ix[lrange(2, 6)] + inp = df.copy() + inp.dropna(axis=0, inplace=True) + assert_frame_equal(dropped, expected) + assert_frame_equal(inp, expected) + + # threshold + dropped = df.dropna(axis=1, thresh=5) + expected = df.ix[:, [0, 1, 3]] + inp = df.copy() + inp.dropna(axis=1, thresh=5, inplace=True) + assert_frame_equal(dropped, expected) + 
assert_frame_equal(inp, expected) + + dropped = df.dropna(axis=0, thresh=4) + expected = df.ix[lrange(2, 6)] + inp = df.copy() + inp.dropna(axis=0, thresh=4, inplace=True) + assert_frame_equal(dropped, expected) + assert_frame_equal(inp, expected) + + dropped = df.dropna(axis=1, thresh=4) + assert_frame_equal(dropped, df) + + dropped = df.dropna(axis=1, thresh=3) + assert_frame_equal(dropped, df) + + # subset + dropped = df.dropna(axis=0, subset=[0, 1, 3]) + inp = df.copy() + inp.dropna(axis=0, subset=[0, 1, 3], inplace=True) + assert_frame_equal(dropped, df) + assert_frame_equal(inp, df) + + # all + dropped = df.dropna(axis=1, how='all') + assert_frame_equal(dropped, df) + + df[2] = nan + dropped = df.dropna(axis=1, how='all') + expected = df.ix[:, [0, 1, 3]] + assert_frame_equal(dropped, expected) + + # bad input + self.assertRaises(ValueError, df.dropna, axis=3) + + def test_drop_and_dropna_caching(self): + # test that the cacher updates + original = Series([1, 2, np.nan], name='A') + expected = Series([1, 2], dtype=original.dtype, name='A') + df = pd.DataFrame({'A': original.values.copy()}) + df2 = df.copy() + df['A'].dropna() + assert_series_equal(df['A'], original) + df['A'].dropna(inplace=True) + assert_series_equal(df['A'], expected) + df2['A'].drop([1]) + assert_series_equal(df2['A'], original) + df2['A'].drop([1], inplace=True) + assert_series_equal(df2['A'], original.drop([1])) + + def test_dropna_corner(self): + # bad input + self.assertRaises(ValueError, self.frame.dropna, how='foo') + self.assertRaises(TypeError, self.frame.dropna, how=None) + # non-existent column - 8303 + self.assertRaises(KeyError, self.frame.dropna, subset=['A', 'X']) + + def test_dropna_multiple_axes(self): + df = DataFrame([[1, np.nan, 2, 3], + [4, np.nan, 5, 6], + [np.nan, np.nan, np.nan, np.nan], + [7, np.nan, 8, 9]]) + cp = df.copy() + result = df.dropna(how='all', axis=[0, 1]) + result2 = df.dropna(how='all', axis=(0, 1)) + expected = df.dropna(how='all').dropna(how='all', 
axis=1) + + assert_frame_equal(result, expected) + assert_frame_equal(result2, expected) + assert_frame_equal(df, cp) + + inp = df.copy() + inp.dropna(how='all', axis=(0, 1), inplace=True) + assert_frame_equal(inp, expected) + + def test_fillna(self): + self.tsframe.ix[:5, 'A'] = nan + self.tsframe.ix[-5:, 'A'] = nan + + zero_filled = self.tsframe.fillna(0) + self.assertTrue((zero_filled.ix[:5, 'A'] == 0).all()) + + padded = self.tsframe.fillna(method='pad') + self.assertTrue(np.isnan(padded.ix[:5, 'A']).all()) + self.assertTrue((padded.ix[-5:, 'A'] == padded.ix[-5, 'A']).all()) + + # mixed type + self.mixed_frame.ix[5:20, 'foo'] = nan + self.mixed_frame.ix[-10:, 'A'] = nan + result = self.mixed_frame.fillna(value=0) + result = self.mixed_frame.fillna(method='pad') + + self.assertRaises(ValueError, self.tsframe.fillna) + self.assertRaises(ValueError, self.tsframe.fillna, 5, method='ffill') + + # mixed numeric (but no float16) + mf = self.mixed_float.reindex(columns=['A', 'B', 'D']) + mf.ix[-10:, 'A'] = nan + result = mf.fillna(value=0) + _check_mixed_float(result, dtype=dict(C=None)) + + result = mf.fillna(method='pad') + _check_mixed_float(result, dtype=dict(C=None)) + + # empty frame (GH #2778) + df = DataFrame(columns=['x']) + for m in ['pad', 'backfill']: + df.x.fillna(method=m, inplace=1) + df.x.fillna(method=m) + + # with different dtype (GH3386) + df = DataFrame([['a', 'a', np.nan, 'a'], [ + 'b', 'b', np.nan, 'b'], ['c', 'c', np.nan, 'c']]) + + result = df.fillna({2: 'foo'}) + expected = DataFrame([['a', 'a', 'foo', 'a'], + ['b', 'b', 'foo', 'b'], + ['c', 'c', 'foo', 'c']]) + assert_frame_equal(result, expected) + + df.fillna({2: 'foo'}, inplace=True) + assert_frame_equal(df, expected) + + # limit and value + df = DataFrame(np.random.randn(10, 3)) + df.iloc[2:7, 0] = np.nan + df.iloc[3:5, 2] = np.nan + + expected = df.copy() + expected.iloc[2, 0] = 999 + expected.iloc[3, 2] = 999 + result = df.fillna(999, limit=1) + assert_frame_equal(result, expected) + + # 
with datelike + # GH 6344 + df = DataFrame({ + 'Date': [pd.NaT, Timestamp("2014-1-1")], + 'Date2': [Timestamp("2013-1-1"), pd.NaT] + }) + + expected = df.copy() + expected['Date'] = expected['Date'].fillna(df.ix[0, 'Date2']) + result = df.fillna(value={'Date': df['Date2']}) + assert_frame_equal(result, expected) + + def test_fillna_dtype_conversion(self): + # make sure that fillna on an empty frame works + df = DataFrame(index=["A", "B", "C"], columns=[1, 2, 3, 4, 5]) + result = df.get_dtype_counts().sort_values() + expected = Series({'object': 5}) + assert_series_equal(result, expected) + + result = df.fillna(1) + expected = DataFrame(1, index=["A", "B", "C"], columns=[1, 2, 3, 4, 5]) + result = result.get_dtype_counts().sort_values() + expected = Series({'int64': 5}) + assert_series_equal(result, expected) + + # empty block + df = DataFrame(index=lrange(3), columns=['A', 'B'], dtype='float64') + result = df.fillna('nan') + expected = DataFrame('nan', index=lrange(3), columns=['A', 'B']) + assert_frame_equal(result, expected) + + # equiv of replace + df = DataFrame(dict(A=[1, np.nan], B=[1., 2.])) + for v in ['', 1, np.nan, 1.0]: + expected = df.replace(np.nan, v) + result = df.fillna(v) + assert_frame_equal(result, expected) + + def test_fillna_datetime_columns(self): + # GH 7095 + df = pd.DataFrame({'A': [-1, -2, np.nan], + 'B': date_range('20130101', periods=3), + 'C': ['foo', 'bar', None], + 'D': ['foo2', 'bar2', None]}, + index=date_range('20130110', periods=3)) + result = df.fillna('?') + expected = pd.DataFrame({'A': [-1, -2, '?'], + 'B': date_range('20130101', periods=3), + 'C': ['foo', 'bar', '?'], + 'D': ['foo2', 'bar2', '?']}, + index=date_range('20130110', periods=3)) + self.assert_frame_equal(result, expected) + + df = pd.DataFrame({'A': [-1, -2, np.nan], + 'B': [pd.Timestamp('2013-01-01'), + pd.Timestamp('2013-01-02'), pd.NaT], + 'C': ['foo', 'bar', None], + 'D': ['foo2', 'bar2', None]}, + index=date_range('20130110', periods=3)) + result = 
df.fillna('?') + expected = pd.DataFrame({'A': [-1, -2, '?'], + 'B': [pd.Timestamp('2013-01-01'), + pd.Timestamp('2013-01-02'), '?'], + 'C': ['foo', 'bar', '?'], + 'D': ['foo2', 'bar2', '?']}, + index=pd.date_range('20130110', periods=3)) + self.assert_frame_equal(result, expected) + + def test_ffill(self): + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + + assert_frame_equal(self.tsframe.ffill(), + self.tsframe.fillna(method='ffill')) + + def test_bfill(self): + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + + assert_frame_equal(self.tsframe.bfill(), + self.tsframe.fillna(method='bfill')) + + def test_fillna_skip_certain_blocks(self): + # don't try to fill boolean, int blocks + + df = DataFrame(np.random.randn(10, 4).astype(int)) + + # it works! + df.fillna(np.nan) + + def test_fillna_inplace(self): + df = DataFrame(np.random.randn(10, 4)) + df[1][:4] = np.nan + df[3][-4:] = np.nan + + expected = df.fillna(value=0) + self.assertIsNot(expected, df) + + df.fillna(value=0, inplace=True) + assert_frame_equal(df, expected) + + df[1][:4] = np.nan + df[3][-4:] = np.nan + expected = df.fillna(method='ffill') + self.assertIsNot(expected, df) + + df.fillna(method='ffill', inplace=True) + assert_frame_equal(df, expected) + + def test_fillna_dict_series(self): + df = DataFrame({'a': [nan, 1, 2, nan, nan], + 'b': [1, 2, 3, nan, nan], + 'c': [nan, 1, 2, 3, 4]}) + + result = df.fillna({'a': 0, 'b': 5}) + + expected = df.copy() + expected['a'] = expected['a'].fillna(0) + expected['b'] = expected['b'].fillna(5) + assert_frame_equal(result, expected) + + # it works + result = df.fillna({'a': 0, 'b': 5, 'd': 7}) + + # Series treated same as dict + result = df.fillna(df.max()) + expected = df.fillna(df.max().to_dict()) + assert_frame_equal(result, expected) + + # disable this for now + with assertRaisesRegexp(NotImplementedError, 'column by column'): + df.fillna(df.max(1), axis=1) + + def test_fillna_dataframe(self): + # GH 8377 + df = DataFrame({'a': 
[nan, 1, 2, nan, nan], + 'b': [1, 2, 3, nan, nan], + 'c': [nan, 1, 2, 3, 4]}, + index=list('VWXYZ')) + + # df2 may have different index and columns + df2 = DataFrame({'a': [nan, 10, 20, 30, 40], + 'b': [50, 60, 70, 80, 90], + 'foo': ['bar'] * 5}, + index=list('VWXuZ')) + + result = df.fillna(df2) + + # only those columns and indices which are shared get filled + expected = DataFrame({'a': [nan, 1, 2, nan, 40], + 'b': [1, 2, 3, nan, 90], + 'c': [nan, 1, 2, 3, 4]}, + index=list('VWXYZ')) + + assert_frame_equal(result, expected) + + def test_fillna_columns(self): + df = DataFrame(np.random.randn(10, 10)) + df.values[:, ::2] = np.nan + + result = df.fillna(method='ffill', axis=1) + expected = df.T.fillna(method='pad').T + assert_frame_equal(result, expected) + + df.insert(6, 'foo', 5) + result = df.fillna(method='ffill', axis=1) + expected = df.astype(float).fillna(method='ffill', axis=1) + assert_frame_equal(result, expected) + + def test_fillna_invalid_method(self): + with assertRaisesRegexp(ValueError, 'ffil'): + self.frame.fillna(method='ffil') + + def test_fillna_invalid_value(self): + # list + self.assertRaises(TypeError, self.frame.fillna, [1, 2]) + # tuple + self.assertRaises(TypeError, self.frame.fillna, (1, 2)) + # frame with series + self.assertRaises(ValueError, self.frame.iloc[:, 0].fillna, + self.frame) + + def test_fillna_col_reordering(self): + cols = ["COL." + str(i) for i in range(5, 0, -1)] + data = np.random.rand(20, 5) + df = DataFrame(index=lrange(20), columns=cols, data=data) + filled = df.fillna(method='ffill') + self.assertEqual(df.columns.tolist(), filled.columns.tolist()) + + def test_fill_corner(self): + self.mixed_frame.ix[5:20, 'foo'] = nan + self.mixed_frame.ix[-10:, 'A'] = nan + + filled = self.mixed_frame.fillna(value=0) + self.assertTrue((filled.ix[5:20, 'foo'] == 0).all()) + del self.mixed_frame['foo'] + + empty_float = self.frame.reindex(columns=[]) + + # TODO(wesm): unused? 
+ result = empty_float.fillna(value=0) # noqa diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py new file mode 100644 index 0000000000000..1546d18a224cd --- /dev/null +++ b/pandas/tests/frame/test_mutate_columns.py @@ -0,0 +1,221 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from pandas.compat import range, lrange +import numpy as np + +from pandas import DataFrame, Series + +from pandas.util.testing import (assert_almost_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +# Column add, remove, delete. + + +class TestDataFrameMutateColumns(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_assign(self): + df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) + original = df.copy() + result = df.assign(C=df.B / df.A) + expected = df.copy() + expected['C'] = [4, 2.5, 2] + assert_frame_equal(result, expected) + + # lambda syntax + result = df.assign(C=lambda x: x.B / x.A) + assert_frame_equal(result, expected) + + # original is unmodified + assert_frame_equal(df, original) + + # Non-Series array-like + result = df.assign(C=[4, 2.5, 2]) + assert_frame_equal(result, expected) + # original is unmodified + assert_frame_equal(df, original) + + result = df.assign(B=df.B / df.A) + expected = expected.drop('B', axis=1).rename(columns={'C': 'B'}) + assert_frame_equal(result, expected) + + # overwrite + result = df.assign(A=df.A + df.B) + expected = df.copy() + expected['A'] = [5, 7, 9] + assert_frame_equal(result, expected) + + # lambda + result = df.assign(A=lambda x: x.A + x.B) + assert_frame_equal(result, expected) + + def test_assign_multiple(self): + df = DataFrame([[1, 4], [2, 5], [3, 6]], columns=['A', 'B']) + result = df.assign(C=[7, 8, 9], D=df.A, E=lambda x: x.B) + expected = DataFrame([[1, 4, 7, 1, 4], [2, 5, 8, 2, 5], + [3, 6, 9, 3, 6]], columns=list('ABCDE')) + 
assert_frame_equal(result, expected) + + def test_assign_alphabetical(self): + # GH 9818 + df = DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) + result = df.assign(D=df.A + df.B, C=df.A - df.B) + expected = DataFrame([[1, 2, -1, 3], [3, 4, -1, 7]], + columns=list('ABCD')) + assert_frame_equal(result, expected) + result = df.assign(C=df.A - df.B, D=df.A + df.B) + assert_frame_equal(result, expected) + + def test_assign_bad(self): + df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) + # non-keyword argument + with tm.assertRaises(TypeError): + df.assign(lambda x: x.A) + with tm.assertRaises(AttributeError): + df.assign(C=df.A, D=df.A + df.C) + with tm.assertRaises(KeyError): + df.assign(C=lambda df: df.A, D=lambda df: df['A'] + df['C']) + with tm.assertRaises(KeyError): + df.assign(C=df.A, D=lambda x: x['A'] + x['C']) + + def test_insert_error_msmgs(self): + + # GH 7432 + df = DataFrame({'foo': ['a', 'b', 'c'], 'bar': [ + 1, 2, 3], 'baz': ['d', 'e', 'f']}).set_index('foo') + s = DataFrame({'foo': ['a', 'b', 'c', 'a'], 'fiz': [ + 'g', 'h', 'i', 'j']}).set_index('foo') + msg = 'cannot reindex from a duplicate axis' + with assertRaisesRegexp(ValueError, msg): + df['newcol'] = s + + # GH 4107, more descriptive error message + df = DataFrame(np.random.randint(0, 2, (4, 4)), + columns=['a', 'b', 'c', 'd']) + + msg = 'incompatible index of inserted column with frame index' + with assertRaisesRegexp(TypeError, msg): + df['gr'] = df.groupby(['b', 'c']).count() + + def test_insert_benchmark(self): + # from the vb_suite/frame_methods/frame_insert_columns + N = 10 + K = 5 + df = DataFrame(index=lrange(N)) + new_col = np.random.randn(N) + for i in range(K): + df[i] = new_col + expected = DataFrame(np.repeat(new_col, K).reshape(N, K), + index=lrange(N)) + assert_frame_equal(df, expected) + + def test_insert(self): + df = DataFrame(np.random.randn(5, 3), index=np.arange(5), + columns=['c', 'b', 'a']) + + df.insert(0, 'foo', df['a']) + self.assert_numpy_array_equal(df.columns, 
['foo', 'c', 'b', 'a']) + assert_almost_equal(df['a'], df['foo']) + + df.insert(2, 'bar', df['c']) + self.assert_numpy_array_equal(df.columns, + ['foo', 'c', 'bar', 'b', 'a']) + assert_almost_equal(df['c'], df['bar']) + + # diff dtype + + # new item + df['x'] = df['a'].astype('float32') + result = Series(dict(float64=5, float32=1)) + self.assertTrue((df.get_dtype_counts() == result).all()) + + # replacing current (in different block) + df['a'] = df['a'].astype('float32') + result = Series(dict(float64=4, float32=2)) + self.assertTrue((df.get_dtype_counts() == result).all()) + + df['y'] = df['a'].astype('int32') + result = Series(dict(float64=4, float32=2, int32=1)) + self.assertTrue((df.get_dtype_counts() == result).all()) + + with assertRaisesRegexp(ValueError, 'already exists'): + df.insert(1, 'a', df['b']) + self.assertRaises(ValueError, df.insert, 1, 'c', df['b']) + + df.columns.name = 'some_name' + # preserve columns name field + df.insert(0, 'baz', df['c']) + self.assertEqual(df.columns.name, 'some_name') + + def test_delitem(self): + del self.frame['A'] + self.assertNotIn('A', self.frame) + + def test_pop(self): + self.frame.columns.name = 'baz' + + self.frame.pop('A') + self.assertNotIn('A', self.frame) + + self.frame['foo'] = 'bar' + self.frame.pop('foo') + self.assertNotIn('foo', self.frame) + # TODO self.assertEqual(self.frame.columns.name, 'baz') + + # 10912 + # inplace ops cause caching issue + a = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[ + 'A', 'B', 'C'], index=['X', 'Y']) + b = a.pop('B') + b += 1 + + # original frame + expected = DataFrame([[1, 3], [4, 6]], columns=[ + 'A', 'C'], index=['X', 'Y']) + assert_frame_equal(a, expected) + + # result + expected = Series([2, 5], index=['X', 'Y'], name='B') + 1 + assert_series_equal(b, expected) + + def test_pop_non_unique_cols(self): + df = DataFrame({0: [0, 1], 1: [0, 1], 2: [4, 5]}) + df.columns = ["a", "b", "a"] + + res = df.pop("a") + self.assertEqual(type(res), DataFrame) + 
self.assertEqual(len(res), 2) + self.assertEqual(len(df.columns), 1) + self.assertTrue("b" in df.columns) + self.assertFalse("a" in df.columns) + self.assertEqual(len(df.index), 2) + + def test_insert_column_bug_4032(self): + + # GH4032, inserting a column and renaming causing errors + df = DataFrame({'b': [1.1, 2.2]}) + df = df.rename(columns={}) + df.insert(0, 'a', [1, 2]) + + result = df.rename(columns={}) + str(result) + expected = DataFrame([[1, 1.1], [2, 2.2]], columns=['a', 'b']) + assert_frame_equal(result, expected) + df.insert(0, 'c', [1.3, 2.3]) + + result = df.rename(columns={}) + str(result) + + expected = DataFrame([[1.3, 1, 1.1], [2.3, 2, 2.2]], + columns=['c', 'a', 'b']) + assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_nonunique_indexes.py b/pandas/tests/frame/test_nonunique_indexes.py new file mode 100644 index 0000000000000..1b24e829088f2 --- /dev/null +++ b/pandas/tests/frame/test_nonunique_indexes.py @@ -0,0 +1,454 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +import numpy as np + +from pandas.compat import lrange, u +from pandas import DataFrame, Series, MultiIndex, date_range +import pandas as pd + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameNonuniqueIndexes(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_column_dups_operations(self): + + def check(result, expected=None): + if expected is not None: + assert_frame_equal(result, expected) + result.dtypes + str(result) + + # assignment + # GH 3687 + arr = np.random.randn(3, 2) + idx = lrange(2) + df = DataFrame(arr, columns=['A', 'A']) + df.columns = idx + expected = DataFrame(arr, columns=idx) + check(df, expected) + + idx = date_range('20130101', periods=4, freq='Q-NOV') + df = DataFrame([[1, 1, 1, 5], [1, 1, 2, 5], [2, 1, 3, 5]], + columns=['a', 'a', 'a', 
'a']) + df.columns = idx + expected = DataFrame( + [[1, 1, 1, 5], [1, 1, 2, 5], [2, 1, 3, 5]], columns=idx) + check(df, expected) + + # insert + df = DataFrame([[1, 1, 1, 5], [1, 1, 2, 5], [2, 1, 3, 5]], + columns=['foo', 'bar', 'foo', 'hello']) + df['string'] = 'bah' + expected = DataFrame([[1, 1, 1, 5, 'bah'], [1, 1, 2, 5, 'bah'], + [2, 1, 3, 5, 'bah']], + columns=['foo', 'bar', 'foo', 'hello', 'string']) + check(df, expected) + with assertRaisesRegexp(ValueError, 'Length of value'): + df.insert(0, 'AnotherColumn', range(len(df.index) - 1)) + + # insert same dtype + df['foo2'] = 3 + expected = DataFrame([[1, 1, 1, 5, 'bah', 3], [1, 1, 2, 5, 'bah', 3], + [2, 1, 3, 5, 'bah', 3]], + columns=['foo', 'bar', 'foo', 'hello', + 'string', 'foo2']) + check(df, expected) + + # set (non-dup) + df['foo2'] = 4 + expected = DataFrame([[1, 1, 1, 5, 'bah', 4], [1, 1, 2, 5, 'bah', 4], + [2, 1, 3, 5, 'bah', 4]], + columns=['foo', 'bar', 'foo', 'hello', + 'string', 'foo2']) + check(df, expected) + df['foo2'] = 3 + + # delete (non dup) + del df['bar'] + expected = DataFrame([[1, 1, 5, 'bah', 3], [1, 2, 5, 'bah', 3], + [2, 3, 5, 'bah', 3]], + columns=['foo', 'foo', 'hello', 'string', 'foo2']) + check(df, expected) + + # try to delete again (it's not consolidated) + del df['hello'] + expected = DataFrame([[1, 1, 'bah', 3], [1, 2, 'bah', 3], + [2, 3, 'bah', 3]], + columns=['foo', 'foo', 'string', 'foo2']) + check(df, expected) + + # consolidate + df = df.consolidate() + expected = DataFrame([[1, 1, 'bah', 3], [1, 2, 'bah', 3], + [2, 3, 'bah', 3]], + columns=['foo', 'foo', 'string', 'foo2']) + check(df, expected) + + # insert + df.insert(2, 'new_col', 5.) + expected = DataFrame([[1, 1, 5., 'bah', 3], [1, 2, 5., 'bah', 3], + [2, 3, 5., 'bah', 3]], + columns=['foo', 'foo', 'new_col', 'string', + 'foo2']) + check(df, expected) + + # insert a dup + assertRaisesRegexp(ValueError, 'cannot insert', + df.insert, 2, 'new_col', 4.) 
+ df.insert(2, 'new_col', 4., allow_duplicates=True) + expected = DataFrame([[1, 1, 4., 5., 'bah', 3], + [1, 2, 4., 5., 'bah', 3], + [2, 3, 4., 5., 'bah', 3]], + columns=['foo', 'foo', 'new_col', + 'new_col', 'string', 'foo2']) + check(df, expected) + + # delete (dup) + del df['foo'] + expected = DataFrame([[4., 5., 'bah', 3], [4., 5., 'bah', 3], + [4., 5., 'bah', 3]], + columns=['new_col', 'new_col', 'string', 'foo2']) + assert_frame_equal(df, expected) + + # dup across dtypes + df = DataFrame([[1, 1, 1., 5], [1, 1, 2., 5], [2, 1, 3., 5]], + columns=['foo', 'bar', 'foo', 'hello']) + check(df) + + df['foo2'] = 7. + expected = DataFrame([[1, 1, 1., 5, 7.], [1, 1, 2., 5, 7.], + [2, 1, 3., 5, 7.]], + columns=['foo', 'bar', 'foo', 'hello', 'foo2']) + check(df, expected) + + result = df['foo'] + expected = DataFrame([[1, 1.], [1, 2.], [2, 3.]], + columns=['foo', 'foo']) + check(result, expected) + + # multiple replacements + df['foo'] = 'string' + expected = DataFrame([['string', 1, 'string', 5, 7.], + ['string', 1, 'string', 5, 7.], + ['string', 1, 'string', 5, 7.]], + columns=['foo', 'bar', 'foo', 'hello', 'foo2']) + check(df, expected) + + del df['foo'] + expected = DataFrame([[1, 5, 7.], [1, 5, 7.], [1, 5, 7.]], columns=[ + 'bar', 'hello', 'foo2']) + check(df, expected) + + # values + df = DataFrame([[1, 2.5], [3, 4.5]], index=[1, 2], columns=['x', 'x']) + result = df.values + expected = np.array([[1, 2.5], [3, 4.5]]) + self.assertTrue((result == expected).all().all()) + + # rename, GH 4403 + df4 = DataFrame( + {'TClose': [22.02], + 'RT': [0.0454], + 'TExg': [0.0422]}, + index=MultiIndex.from_tuples([(600809, 20130331)], + names=['STK_ID', 'RPT_Date'])) + + df5 = DataFrame({'STK_ID': [600809] * 3, + 'RPT_Date': [20120930, 20121231, 20130331], + 'STK_Name': [u('饡驦'), u('饡驦'), u('饡驦')], + 'TClose': [38.05, 41.66, 30.01]}, + index=MultiIndex.from_tuples( + [(600809, 20120930), + (600809, 20121231), + (600809, 20130331)], + names=['STK_ID', 'RPT_Date'])) + + k = 
pd.merge(df4, df5, how='inner', left_index=True, right_index=True) + result = k.rename( + columns={'TClose_x': 'TClose', 'TClose_y': 'QT_Close'}) + str(result) + result.dtypes + + expected = (DataFrame([[0.0454, 22.02, 0.0422, 20130331, 600809, + u('饡驦'), 30.01]], + columns=['RT', 'TClose', 'TExg', + 'RPT_Date', 'STK_ID', 'STK_Name', + 'QT_Close']) + .set_index(['STK_ID', 'RPT_Date'], drop=False)) + assert_frame_equal(result, expected) + + # reindex is invalid! + df = DataFrame([[1, 5, 7.], [1, 5, 7.], [1, 5, 7.]], + columns=['bar', 'a', 'a']) + self.assertRaises(ValueError, df.reindex, columns=['bar']) + self.assertRaises(ValueError, df.reindex, columns=['bar', 'foo']) + + # drop + df = DataFrame([[1, 5, 7.], [1, 5, 7.], [1, 5, 7.]], + columns=['bar', 'a', 'a']) + result = df.drop(['a'], axis=1) + expected = DataFrame([[1], [1], [1]], columns=['bar']) + check(result, expected) + result = df.drop('a', axis=1) + check(result, expected) + + # describe + df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], + columns=['bar', 'a', 'a'], dtype='float64') + result = df.describe() + s = df.iloc[:, 0].describe() + expected = pd.concat([s, s, s], keys=df.columns, axis=1) + check(result, expected) + + # check column dups with index equal and not equal to df's index + df = DataFrame(np.random.randn(5, 3), index=['a', 'b', 'c', 'd', 'e'], + columns=['A', 'B', 'A']) + for index in [df.index, pd.Index(list('edcba'))]: + this_df = df.copy() + expected_ser = pd.Series(index.values, index=this_df.index) + expected_df = DataFrame.from_items([('A', expected_ser), + ('B', this_df['B']), + ('A', expected_ser)]) + this_df['A'] = index + check(this_df, expected_df) + + # operations + for op in ['__add__', '__mul__', '__sub__', '__truediv__']: + df = DataFrame(dict(A=np.arange(10), B=np.random.rand(10))) + expected = getattr(df, op)(df) + expected.columns = ['A', 'A'] + df.columns = ['A', 'A'] + result = getattr(df, op)(df) + check(result, expected) + + # multiple assignments that change 
dtypes + # the location indexer is a slice + # GH 6120 + df = DataFrame(np.random.randn(5, 2), columns=['that', 'that']) + expected = DataFrame(1.0, index=range(5), columns=['that', 'that']) + + df['that'] = 1.0 + check(df, expected) + + df = DataFrame(np.random.rand(5, 2), columns=['that', 'that']) + expected = DataFrame(1, index=range(5), columns=['that', 'that']) + + df['that'] = 1 + check(df, expected) + + def test_column_dups2(self): + + # drop buggy GH 6240 + df = DataFrame({'A': np.random.randn(5), + 'B': np.random.randn(5), + 'C': np.random.randn(5), + 'D': ['a', 'b', 'c', 'd', 'e']}) + + expected = df.take([0, 1, 1], axis=1) + df2 = df.take([2, 0, 1, 2, 1], axis=1) + result = df2.drop('C', axis=1) + assert_frame_equal(result, expected) + + # dropna + df = DataFrame({'A': np.random.randn(5), + 'B': np.random.randn(5), + 'C': np.random.randn(5), + 'D': ['a', 'b', 'c', 'd', 'e']}) + df.iloc[2, [0, 1, 2]] = np.nan + df.iloc[0, 0] = np.nan + df.iloc[1, 1] = np.nan + df.iloc[:, 3] = np.nan + expected = df.dropna(subset=['A', 'B', 'C'], how='all') + expected.columns = ['A', 'A', 'B', 'C'] + + df.columns = ['A', 'A', 'B', 'C'] + + result = df.dropna(subset=['A', 'C'], how='all') + assert_frame_equal(result, expected) + + def test_column_dups_indexing(self): + def check(result, expected=None): + if expected is not None: + assert_frame_equal(result, expected) + result.dtypes + str(result) + + # boolean indexing + # GH 4879 + dups = ['A', 'A', 'C', 'D'] + df = DataFrame(np.arange(12).reshape(3, 4), columns=[ + 'A', 'B', 'C', 'D'], dtype='float64') + expected = df[df.C > 6] + expected.columns = dups + df = DataFrame(np.arange(12).reshape(3, 4), + columns=dups, dtype='float64') + result = df[df.C > 6] + check(result, expected) + + # where + df = DataFrame(np.arange(12).reshape(3, 4), columns=[ + 'A', 'B', 'C', 'D'], dtype='float64') + expected = df[df > 6] + expected.columns = dups + df = DataFrame(np.arange(12).reshape(3, 4), + columns=dups, dtype='float64') + result 
= df[df > 6] + check(result, expected) + + # boolean with the duplicate raises + df = DataFrame(np.arange(12).reshape(3, 4), + columns=dups, dtype='float64') + self.assertRaises(ValueError, lambda: df[df.A > 6]) + + # dup aligning operations should work + # GH 5185 + df1 = DataFrame([1, 2, 3, 4, 5], index=[1, 2, 1, 2, 3]) + df2 = DataFrame([1, 2, 3], index=[1, 2, 3]) + expected = DataFrame([0, 2, 0, 2, 2], index=[1, 1, 2, 2, 3]) + result = df1.sub(df2) + assert_frame_equal(result, expected) + + # equality + df1 = DataFrame([[1, 2], [2, np.nan], [3, 4], [4, 4]], + columns=['A', 'B']) + df2 = DataFrame([[0, 1], [2, 4], [2, np.nan], [4, 5]], + columns=['A', 'A']) + + # not-comparing like-labelled + self.assertRaises(ValueError, lambda: df1 == df2) + + df1r = df1.reindex_like(df2) + result = df1r == df2 + expected = DataFrame([[False, True], [True, False], [False, False], [ + True, False]], columns=['A', 'A']) + assert_frame_equal(result, expected) + + # mixed column selection + # GH 5639 + dfbool = DataFrame({'one': Series([True, True, False], + index=['a', 'b', 'c']), + 'two': Series([False, False, True, False], + index=['a', 'b', 'c', 'd']), + 'three': Series([False, True, True, True], + index=['a', 'b', 'c', 'd'])}) + expected = pd.concat( + [dfbool['one'], dfbool['three'], dfbool['one']], axis=1) + result = dfbool[['one', 'three', 'one']] + check(result, expected) + + # multi-axis dups + # GH 6121 + df = DataFrame(np.arange(25.).reshape(5, 5), + index=['a', 'b', 'c', 'd', 'e'], + columns=['A', 'B', 'C', 'D', 'E']) + z = df[['A', 'C', 'A']].copy() + expected = z.ix[['a', 'c', 'a']] + + df = DataFrame(np.arange(25.).reshape(5, 5), + index=['a', 'b', 'c', 'd', 'e'], + columns=['A', 'B', 'C', 'D', 'E']) + z = df[['A', 'C', 'A']] + result = z.ix[['a', 'c', 'a']] + check(result, expected) + + def test_column_dups_indexing2(self): + + # GH 8363 + # datetime ops with a non-unique index + df = DataFrame({'A': np.arange(5, dtype='int64'), + 'B': np.arange(1, 6,
dtype='int64')}, + index=[2, 2, 3, 3, 4]) + result = df.B - df.A + expected = Series(1, index=[2, 2, 3, 3, 4]) + assert_series_equal(result, expected) + + df = DataFrame({'A': date_range('20130101', periods=5), + 'B': date_range('20130101 09:00:00', periods=5)}, + index=[2, 2, 3, 3, 4]) + result = df.B - df.A + expected = Series(pd.Timedelta('9 hours'), index=[2, 2, 3, 3, 4]) + assert_series_equal(result, expected) + + def test_columns_with_dups(self): + # GH 3468 related + + # basic + df = DataFrame([[1, 2]], columns=['a', 'a']) + df.columns = ['a', 'a.1'] + str(df) + expected = DataFrame([[1, 2]], columns=['a', 'a.1']) + assert_frame_equal(df, expected) + + df = DataFrame([[1, 2, 3]], columns=['b', 'a', 'a']) + df.columns = ['b', 'a', 'a.1'] + str(df) + expected = DataFrame([[1, 2, 3]], columns=['b', 'a', 'a.1']) + assert_frame_equal(df, expected) + + # with a dup index + df = DataFrame([[1, 2]], columns=['a', 'a']) + df.columns = ['b', 'b'] + str(df) + expected = DataFrame([[1, 2]], columns=['b', 'b']) + assert_frame_equal(df, expected) + + # multi-dtype + df = DataFrame([[1, 2, 1., 2., 3., 'foo', 'bar']], + columns=['a', 'a', 'b', 'b', 'd', 'c', 'c']) + df.columns = list('ABCDEFG') + str(df) + expected = DataFrame( + [[1, 2, 1., 2., 3., 'foo', 'bar']], columns=list('ABCDEFG')) + assert_frame_equal(df, expected) + + # this is an error because we cannot disambiguate the dup columns + self.assertRaises(Exception, lambda x: DataFrame( + [[1, 2, 'foo', 'bar']], columns=['a', 'a', 'a', 'a'])) + + # dups across blocks + df_float = DataFrame(np.random.randn(10, 3), dtype='float64') + df_int = DataFrame(np.random.randn(10, 3), dtype='int64') + df_bool = DataFrame(True, index=df_float.index, + columns=df_float.columns) + df_object = DataFrame('foo', index=df_float.index, + columns=df_float.columns) + df_dt = DataFrame(pd.Timestamp('20010101'), + index=df_float.index, + columns=df_float.columns) + df = pd.concat([df_float, df_int, df_bool, df_object, df_dt], axis=1) + + 
self.assertEqual(len(df._data._blknos), len(df.columns)) + self.assertEqual(len(df._data._blklocs), len(df.columns)) + + # testing iget + for i in range(len(df.columns)): + df.iloc[:, i] + + # dup columns across dtype GH 2079/2194 + vals = [[1, -1, 2.], [2, -2, 3.]] + rs = DataFrame(vals, columns=['A', 'A', 'B']) + xp = DataFrame(vals) + xp.columns = ['A', 'A', 'B'] + assert_frame_equal(rs, xp) + + def test_as_matrix_duplicates(self): + df = DataFrame([[1, 2, 'a', 'b'], + [1, 2, 'a', 'b']], + columns=['one', 'one', 'two', 'two']) + + result = df.values + expected = np.array([[1, 2, 'a', 'b'], [1, 2, 'a', 'b']], + dtype=object) + + self.assertTrue(np.array_equal(result, expected)) diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py new file mode 100644 index 0000000000000..9e48702ad2b0a --- /dev/null +++ b/pandas/tests/frame/test_operators.py @@ -0,0 +1,1171 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime +import operator + +import nose + +from numpy import nan, random +import numpy as np + +from pandas.compat import lrange +from pandas import compat +from pandas import (DataFrame, Series, MultiIndex, Timestamp, + date_range) +import pandas.core.common as com +import pandas as pd + +from pandas.util.testing import (assert_numpy_array_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import (TestData, _check_mixed_float, + _check_mixed_int) + + +class TestDataFrameOperators(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_operators(self): + garbage = random.random(4) + colSeries = Series(garbage, index=np.array(self.frame.columns)) + + idSum = self.frame + self.frame + seriesSum = self.frame + colSeries + + for col, series in compat.iteritems(idSum): + for idx, val in compat.iteritems(series): + origVal = self.frame[col][idx] * 2 + if not np.isnan(val): + 
self.assertEqual(val, origVal) + else: + self.assertTrue(np.isnan(origVal)) + + for col, series in compat.iteritems(seriesSum): + for idx, val in compat.iteritems(series): + origVal = self.frame[col][idx] + colSeries[col] + if not np.isnan(val): + self.assertEqual(val, origVal) + else: + self.assertTrue(np.isnan(origVal)) + + added = self.frame2 + self.frame2 + expected = self.frame2 * 2 + assert_frame_equal(added, expected) + + df = DataFrame({'a': ['a', None, 'b']}) + assert_frame_equal(df + df, DataFrame({'a': ['aa', np.nan, 'bb']})) + + # Test for issue #10181 + for dtype in ('float', 'int64'): + frames = [ + DataFrame(dtype=dtype), + DataFrame(columns=['A'], dtype=dtype), + DataFrame(index=[0], dtype=dtype), + ] + for df in frames: + self.assertTrue((df + df).equals(df)) + assert_frame_equal(df + df, df) + + def test_ops_np_scalar(self): + vals, xs = np.random.rand(5, 3), [nan, 7, -23, 2.718, -3.14, np.inf] + f = lambda x: DataFrame(x, index=list('ABCDE'), + columns=['jim', 'joe', 'jolie']) + + df = f(vals) + + for x in xs: + assert_frame_equal(df / np.array(x), f(vals / x)) + assert_frame_equal(np.array(x) * df, f(vals * x)) + assert_frame_equal(df + np.array(x), f(vals + x)) + assert_frame_equal(np.array(x) - df, f(x - vals)) + + def test_operators_boolean(self): + + # GH 5808 + # empty frames, non-mixed dtype + + result = DataFrame(index=[1]) & DataFrame(index=[1]) + assert_frame_equal(result, DataFrame(index=[1])) + + result = DataFrame(index=[1]) | DataFrame(index=[1]) + assert_frame_equal(result, DataFrame(index=[1])) + + result = DataFrame(index=[1]) & DataFrame(index=[1, 2]) + assert_frame_equal(result, DataFrame(index=[1, 2])) + + result = DataFrame(index=[1], columns=['A']) & DataFrame( + index=[1], columns=['A']) + assert_frame_equal(result, DataFrame(index=[1], columns=['A'])) + + result = DataFrame(True, index=[1], columns=['A']) & DataFrame( + True, index=[1], columns=['A']) + assert_frame_equal(result, DataFrame(True, index=[1], columns=['A'])) 
+ + result = DataFrame(True, index=[1], columns=['A']) | DataFrame( + True, index=[1], columns=['A']) + assert_frame_equal(result, DataFrame(True, index=[1], columns=['A'])) + + # boolean ops + result = DataFrame(1, index=[1], columns=['A']) | DataFrame( + True, index=[1], columns=['A']) + assert_frame_equal(result, DataFrame(1, index=[1], columns=['A'])) + + def f(): + DataFrame(1.0, index=[1], columns=['A']) | DataFrame( + True, index=[1], columns=['A']) + self.assertRaises(TypeError, f) + + def f(): + DataFrame('foo', index=[1], columns=['A']) | DataFrame( + True, index=[1], columns=['A']) + self.assertRaises(TypeError, f) + + def test_operators_none_as_na(self): + df = DataFrame({"col1": [2, 5.0, 123, None], + "col2": [1, 2, 3, 4]}, dtype=object) + + ops = [operator.add, operator.sub, operator.mul, operator.truediv] + + # since filling converts dtypes from object, changed expected to be + # object + for op in ops: + filled = df.fillna(np.nan) + result = op(df, 3) + expected = op(filled, 3).astype(object) + expected[com.isnull(expected)] = None + assert_frame_equal(result, expected) + + result = op(df, df) + expected = op(filled, filled).astype(object) + expected[com.isnull(expected)] = None + assert_frame_equal(result, expected) + + result = op(df, df.fillna(7)) + assert_frame_equal(result, expected) + + result = op(df.fillna(7), df) + assert_frame_equal(result, expected, check_dtype=False) + + def test_comparison_invalid(self): + + def check(df, df2): + + for (x, y) in [(df, df2), (df2, df)]: + self.assertRaises(TypeError, lambda: x == y) + self.assertRaises(TypeError, lambda: x != y) + self.assertRaises(TypeError, lambda: x >= y) + self.assertRaises(TypeError, lambda: x > y) + self.assertRaises(TypeError, lambda: x < y) + self.assertRaises(TypeError, lambda: x <= y) + + # GH4968 + # invalid date/int comparisons + df = DataFrame(np.random.randint(10, size=(10, 1)), columns=['a']) + df['dates'] = date_range('20010101', periods=len(df)) + + df2 = df.copy() + 
df2['dates'] = df['a'] + check(df, df2) + + df = DataFrame(np.random.randint(10, size=(10, 2)), columns=['a', 'b']) + df2 = DataFrame({'a': date_range('20010101', periods=len( + df)), 'b': date_range('20100101', periods=len(df))}) + check(df, df2) + + def test_timestamp_compare(self): + # make sure we can compare Timestamps on the right AND left hand side + # GH4982 + df = DataFrame({'dates1': date_range('20010101', periods=10), + 'dates2': date_range('20010102', periods=10), + 'intcol': np.random.randint(1000000000, size=10), + 'floatcol': np.random.randn(10), + 'stringcol': list(tm.rands(10))}) + df.loc[np.random.rand(len(df)) > 0.5, 'dates2'] = pd.NaT + ops = {'gt': 'lt', 'lt': 'gt', 'ge': 'le', 'le': 'ge', 'eq': 'eq', + 'ne': 'ne'} + for left, right in ops.items(): + left_f = getattr(operator, left) + right_f = getattr(operator, right) + + # no nats + expected = left_f(df, Timestamp('20010109')) + result = right_f(Timestamp('20010109'), df) + assert_frame_equal(result, expected) + + # nats + expected = left_f(df, Timestamp('nat')) + result = right_f(Timestamp('nat'), df) + assert_frame_equal(result, expected) + + def test_modulo(self): + # GH3590, modulo as ints + p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]}) + + # this is technically wrong as the integer portion is coerced to float + # ### + expected = DataFrame({'first': Series([0, 0, 0, 0], dtype='float64'), + 'second': Series([np.nan, np.nan, np.nan, 0])}) + result = p % p + assert_frame_equal(result, expected) + + # numpy has a slightly different (wrong) treatment + result2 = DataFrame(p.values % p.values, index=p.index, + columns=p.columns, dtype='float64') + result2.iloc[0:3, 1] = np.nan + assert_frame_equal(result2, expected) + + result = p % 0 + expected = DataFrame(np.nan, index=p.index, columns=p.columns) + assert_frame_equal(result, expected) + + # numpy has a slightly different (wrong) treatment + result2 = DataFrame(p.values.astype('float64') % + 0, index=p.index,
columns=p.columns) + assert_frame_equal(result2, expected) + + # not commutative with series + p = DataFrame(np.random.randn(10, 5)) + s = p[0] + res = s % p + res2 = p % s + self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0))) + + def test_div(self): + + # integer div, but deal with the 0's (GH 9144) + p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]}) + result = p / p + + expected = DataFrame({'first': Series([1.0, 1.0, 1.0, 1.0]), + 'second': Series([nan, nan, nan, 1])}) + assert_frame_equal(result, expected) + + result2 = DataFrame(p.values.astype('float') / p.values, index=p.index, + columns=p.columns) + assert_frame_equal(result2, expected) + + result = p / 0 + expected = DataFrame(np.inf, index=p.index, columns=p.columns) + expected.iloc[0:3, 1] = nan + assert_frame_equal(result, expected) + + # numpy has a slightly different (wrong) treatment + result2 = DataFrame(p.values.astype('float64') / 0, index=p.index, + columns=p.columns) + assert_frame_equal(result2, expected) + + p = DataFrame(np.random.randn(10, 5)) + s = p[0] + res = s / p + res2 = p / s + self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0))) + + def test_logical_operators(self): + + def _check_bin_op(op): + result = op(df1, df2) + expected = DataFrame(op(df1.values, df2.values), index=df1.index, + columns=df1.columns) + self.assertEqual(result.values.dtype, np.bool_) + assert_frame_equal(result, expected) + + def _check_unary_op(op): + result = op(df1) + expected = DataFrame(op(df1.values), index=df1.index, + columns=df1.columns) + self.assertEqual(result.values.dtype, np.bool_) + assert_frame_equal(result, expected) + + df1 = {'a': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}, + 'b': {'a': False, 'b': True, 'c': False, + 'd': False, 'e': False}, + 'c': {'a': False, 'b': False, 'c': True, + 'd': False, 'e': False}, + 'd': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}, + 'e': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}} +
+ df2 = {'a': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False}, + 'b': {'a': False, 'b': True, 'c': False, + 'd': False, 'e': False}, + 'c': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False}, + 'd': {'a': False, 'b': False, 'c': False, + 'd': True, 'e': False}, + 'e': {'a': False, 'b': False, 'c': False, + 'd': False, 'e': True}} + + df1 = DataFrame(df1) + df2 = DataFrame(df2) + + _check_bin_op(operator.and_) + _check_bin_op(operator.or_) + _check_bin_op(operator.xor) + + # operator.neg is deprecated in numpy >= 1.9 + _check_unary_op(operator.inv) + + def test_logical_typeerror(self): + if not compat.PY3: + self.assertRaises(TypeError, self.frame.__eq__, 'foo') + self.assertRaises(TypeError, self.frame.__lt__, 'foo') + self.assertRaises(TypeError, self.frame.__gt__, 'foo') + self.assertRaises(TypeError, self.frame.__ne__, 'foo') + else: + raise nose.SkipTest('test_logical_typeerror not tested on PY3') + + def test_logical_with_nas(self): + d = DataFrame({'a': [np.nan, False], 'b': [True, True]}) + + # GH4947 + # bool comparisons should return bool + result = d['a'] | d['b'] + expected = Series([False, True]) + assert_series_equal(result, expected) + + # GH4604, automatic casting here + result = d['a'].fillna(False) | d['b'] + expected = Series([True, True]) + assert_series_equal(result, expected) + + result = d['a'].fillna(False, downcast=False) | d['b'] + expected = Series([True, True]) + assert_series_equal(result, expected) + + def test_neg(self): + # what to do? 
+ assert_frame_equal(-self.frame, -1 * self.frame) + + def test_invert(self): + assert_frame_equal(-(self.frame < 0), ~(self.frame < 0)) + + def test_arith_flex_frame(self): + ops = ['add', 'sub', 'mul', 'div', 'truediv', 'pow', 'floordiv', 'mod'] + if not compat.PY3: + aliases = {} + else: + aliases = {'div': 'truediv'} + + for op in ops: + try: + alias = aliases.get(op, op) + f = getattr(operator, alias) + result = getattr(self.frame, op)(2 * self.frame) + exp = f(self.frame, 2 * self.frame) + assert_frame_equal(result, exp) + + # vs mix float + result = getattr(self.mixed_float, op)(2 * self.mixed_float) + exp = f(self.mixed_float, 2 * self.mixed_float) + assert_frame_equal(result, exp) + _check_mixed_float(result, dtype=dict(C=None)) + + # vs mix int + if op in ['add', 'sub', 'mul']: + result = getattr(self.mixed_int, op)(2 + self.mixed_int) + exp = f(self.mixed_int, 2 + self.mixed_int) + + # overflow in the uint + dtype = None + if op in ['sub']: + dtype = dict(B='object', C=None) + elif op in ['add', 'mul']: + dtype = dict(C=None) + assert_frame_equal(result, exp) + _check_mixed_int(result, dtype=dtype) + + # rops + r_f = lambda x, y: f(y, x) + result = getattr(self.frame, 'r' + op)(2 * self.frame) + exp = r_f(self.frame, 2 * self.frame) + assert_frame_equal(result, exp) + + # vs mix float + result = getattr(self.mixed_float, op)( + 2 * self.mixed_float) + exp = f(self.mixed_float, 2 * self.mixed_float) + assert_frame_equal(result, exp) + _check_mixed_float(result, dtype=dict(C=None)) + + result = getattr(self.intframe, op)(2 * self.intframe) + exp = f(self.intframe, 2 * self.intframe) + assert_frame_equal(result, exp) + + # vs mix int + if op in ['add', 'sub', 'mul']: + result = getattr(self.mixed_int, op)( + 2 + self.mixed_int) + exp = f(self.mixed_int, 2 + self.mixed_int) + + # overflow in the uint + dtype = None + if op in ['sub']: + dtype = dict(B='object', C=None) + elif op in ['add', 'mul']: + dtype = dict(C=None) + assert_frame_equal(result, exp) + 
_check_mixed_int(result, dtype=dtype) + except: + com.pprint_thing("Failing operation %r" % op) + raise + + # ndim >= 3 + ndim_5 = np.ones(self.frame.shape + (3, 4, 5)) + with assertRaisesRegexp(ValueError, 'shape'): + f(self.frame, ndim_5) + + with assertRaisesRegexp(ValueError, 'shape'): + getattr(self.frame, op)(ndim_5) + + # res_add = self.frame.add(self.frame) + # res_sub = self.frame.sub(self.frame) + # res_mul = self.frame.mul(self.frame) + # res_div = self.frame.div(2 * self.frame) + + # assert_frame_equal(res_add, self.frame + self.frame) + # assert_frame_equal(res_sub, self.frame - self.frame) + # assert_frame_equal(res_mul, self.frame * self.frame) + # assert_frame_equal(res_div, self.frame / (2 * self.frame)) + + const_add = self.frame.add(1) + assert_frame_equal(const_add, self.frame + 1) + + # corner cases + result = self.frame.add(self.frame[:0]) + assert_frame_equal(result, self.frame * np.nan) + + result = self.frame[:0].add(self.frame) + assert_frame_equal(result, self.frame * np.nan) + with assertRaisesRegexp(NotImplementedError, 'fill_value'): + self.frame.add(self.frame.iloc[0], fill_value=3) + with assertRaisesRegexp(NotImplementedError, 'fill_value'): + self.frame.add(self.frame.iloc[0], axis='index', fill_value=3) + + def test_binary_ops_align(self): + + # test aligning binary ops + + # GH 6681 + index = MultiIndex.from_product([list('abc'), + ['one', 'two', 'three'], + [1, 2, 3]], + names=['first', 'second', 'third']) + + df = DataFrame(np.arange(27 * 3).reshape(27, 3), + index=index, + columns=['value1', 'value2', 'value3']).sortlevel() + + idx = pd.IndexSlice + for op in ['add', 'sub', 'mul', 'div', 'truediv']: + opa = getattr(operator, op, None) + if opa is None: + continue + + x = Series([1.0, 10.0, 100.0], [1, 2, 3]) + result = getattr(df, op)(x, level='third', axis=0) + + expected = pd.concat([opa(df.loc[idx[:, :, i], :], v) + for i, v in x.iteritems()]).sortlevel() + assert_frame_equal(result, expected) + + x = Series([1.0, 10.0], 
['two', 'three']) + result = getattr(df, op)(x, level='second', axis=0) + + expected = (pd.concat([opa(df.loc[idx[:, i], :], v) + for i, v in x.iteritems()]) + .reindex_like(df).sortlevel()) + assert_frame_equal(result, expected) + + # GH9463 (alignment level of dataframe with series) + + midx = MultiIndex.from_product([['A', 'B'], ['a', 'b']]) + df = DataFrame(np.ones((2, 4), dtype='int64'), columns=midx) + s = pd.Series({'a': 1, 'b': 2}) + + df2 = df.copy() + df2.columns.names = ['lvl0', 'lvl1'] + s2 = s.copy() + s2.index.name = 'lvl1' + + # different cases of integer/string level names: + res1 = df.mul(s, axis=1, level=1) + res2 = df.mul(s2, axis=1, level=1) + res3 = df2.mul(s, axis=1, level=1) + res4 = df2.mul(s2, axis=1, level=1) + res5 = df2.mul(s, axis=1, level='lvl1') + res6 = df2.mul(s2, axis=1, level='lvl1') + + exp = DataFrame(np.array([[1, 2, 1, 2], [1, 2, 1, 2]], dtype='int64'), + columns=midx) + + for res in [res1, res2]: + assert_frame_equal(res, exp) + + exp.columns.names = ['lvl0', 'lvl1'] + for res in [res3, res4, res5, res6]: + assert_frame_equal(res, exp) + + def test_arith_mixed(self): + + left = DataFrame({'A': ['a', 'b', 'c'], + 'B': [1, 2, 3]}) + + result = left + left + expected = DataFrame({'A': ['aa', 'bb', 'cc'], + 'B': [2, 4, 6]}) + assert_frame_equal(result, expected) + + def test_arith_getitem_commute(self): + df = DataFrame({'A': [1.1, 3.3], 'B': [2.5, -3.9]}) + + self._test_op(df, operator.add) + self._test_op(df, operator.sub) + self._test_op(df, operator.mul) + self._test_op(df, operator.truediv) + self._test_op(df, operator.floordiv) + self._test_op(df, operator.pow) + + self._test_op(df, lambda x, y: y + x) + self._test_op(df, lambda x, y: y - x) + self._test_op(df, lambda x, y: y * x) + self._test_op(df, lambda x, y: y / x) + self._test_op(df, lambda x, y: y ** x) + + self._test_op(df, lambda x, y: x + y) + self._test_op(df, lambda x, y: x - y) + self._test_op(df, lambda x, y: x * y) + self._test_op(df, lambda x, y: x / y) + 
self._test_op(df, lambda x, y: x ** y) + + @staticmethod + def _test_op(df, op): + result = op(df, 1) + + if not df.columns.is_unique: + raise ValueError("Only unique columns supported by this test") + + for col in result.columns: + assert_series_equal(result[col], op(df[col], 1)) + + def test_bool_flex_frame(self): + data = np.random.randn(5, 3) + other_data = np.random.randn(5, 3) + df = DataFrame(data) + other = DataFrame(other_data) + ndim_5 = np.ones(df.shape + (1, 3)) + + # Unaligned + def _check_unaligned_frame(meth, op, df, other): + part_o = other.ix[3:, 1:].copy() + rs = meth(part_o) + xp = op(df, part_o.reindex(index=df.index, columns=df.columns)) + assert_frame_equal(rs, xp) + + # DataFrame + self.assertTrue(df.eq(df).values.all()) + self.assertFalse(df.ne(df).values.any()) + for op in ['eq', 'ne', 'gt', 'lt', 'ge', 'le']: + f = getattr(df, op) + o = getattr(operator, op) + # No NAs + assert_frame_equal(f(other), o(df, other)) + _check_unaligned_frame(f, o, df, other) + # ndarray + assert_frame_equal(f(other.values), o(df, other.values)) + # scalar + assert_frame_equal(f(0), o(df, 0)) + # NAs + assert_frame_equal(f(np.nan), o(df, np.nan)) + with assertRaisesRegexp(ValueError, 'shape'): + f(ndim_5) + + # Series + def _test_seq(df, idx_ser, col_ser): + idx_eq = df.eq(idx_ser, axis=0) + col_eq = df.eq(col_ser) + idx_ne = df.ne(idx_ser, axis=0) + col_ne = df.ne(col_ser) + assert_frame_equal(col_eq, df == Series(col_ser)) + assert_frame_equal(col_eq, -col_ne) + assert_frame_equal(idx_eq, -idx_ne) + assert_frame_equal(idx_eq, df.T.eq(idx_ser).T) + assert_frame_equal(col_eq, df.eq(list(col_ser))) + assert_frame_equal(idx_eq, df.eq(Series(idx_ser), axis=0)) + assert_frame_equal(idx_eq, df.eq(list(idx_ser), axis=0)) + + idx_gt = df.gt(idx_ser, axis=0) + col_gt = df.gt(col_ser) + idx_le = df.le(idx_ser, axis=0) + col_le = df.le(col_ser) + + assert_frame_equal(col_gt, df > Series(col_ser)) + assert_frame_equal(col_gt, -col_le) + assert_frame_equal(idx_gt, -idx_le) 
+ assert_frame_equal(idx_gt, df.T.gt(idx_ser).T) + + idx_ge = df.ge(idx_ser, axis=0) + col_ge = df.ge(col_ser) + idx_lt = df.lt(idx_ser, axis=0) + col_lt = df.lt(col_ser) + assert_frame_equal(col_ge, df >= Series(col_ser)) + assert_frame_equal(col_ge, -col_lt) + assert_frame_equal(idx_ge, -idx_lt) + assert_frame_equal(idx_ge, df.T.ge(idx_ser).T) + + idx_ser = Series(np.random.randn(5)) + col_ser = Series(np.random.randn(3)) + _test_seq(df, idx_ser, col_ser) + + # list/tuple + _test_seq(df, idx_ser.values, col_ser.values) + + # NA + df.ix[0, 0] = np.nan + rs = df.eq(df) + self.assertFalse(rs.ix[0, 0]) + rs = df.ne(df) + self.assertTrue(rs.ix[0, 0]) + rs = df.gt(df) + self.assertFalse(rs.ix[0, 0]) + rs = df.lt(df) + self.assertFalse(rs.ix[0, 0]) + rs = df.ge(df) + self.assertFalse(rs.ix[0, 0]) + rs = df.le(df) + self.assertFalse(rs.ix[0, 0]) + + # complex + arr = np.array([np.nan, 1, 6, np.nan]) + arr2 = np.array([2j, np.nan, 7, None]) + df = DataFrame({'a': arr}) + df2 = DataFrame({'a': arr2}) + rs = df.gt(df2) + self.assertFalse(rs.values.any()) + rs = df.ne(df2) + self.assertTrue(rs.values.all()) + + arr3 = np.array([2j, np.nan, None]) + df3 = DataFrame({'a': arr3}) + rs = df3.gt(2j) + self.assertFalse(rs.values.any()) + + # corner, dtype=object + df1 = DataFrame({'col': ['foo', np.nan, 'bar']}) + df2 = DataFrame({'col': ['foo', datetime.now(), 'bar']}) + result = df1.ne(df2) + exp = DataFrame({'col': [False, True, False]}) + assert_frame_equal(result, exp) + + def test_arith_flex_series(self): + df = self.simple + + row = df.xs('a') + col = df['two'] + # after arithmetic refactor, add truediv here + ops = ['add', 'sub', 'mul', 'mod'] + for op in ops: + f = getattr(df, op) + op = getattr(operator, op) + assert_frame_equal(f(row), op(df, row)) + assert_frame_equal(f(col, axis=0), op(df.T, col).T) + + # special case for some reason + assert_frame_equal(df.add(row, axis=None), df + row) + + # cases which will be refactored after big arithmetic refactor + 
assert_frame_equal(df.div(row), df / row) + assert_frame_equal(df.div(col, axis=0), (df.T / col).T) + + # broadcasting issue in GH7325 + df = DataFrame(np.arange(3 * 2).reshape((3, 2)), dtype='int64') + expected = DataFrame([[nan, np.inf], [1.0, 1.5], [1.0, 1.25]]) + result = df.div(df[0], axis='index') + assert_frame_equal(result, expected) + + df = DataFrame(np.arange(3 * 2).reshape((3, 2)), dtype='float64') + expected = DataFrame([[np.nan, np.inf], [1.0, 1.5], [1.0, 1.25]]) + result = df.div(df[0], axis='index') + assert_frame_equal(result, expected) + + def test_arith_non_pandas_object(self): + df = self.simple + + val1 = df.xs('a').values + added = DataFrame(df.values + val1, index=df.index, columns=df.columns) + assert_frame_equal(df + val1, added) + + added = DataFrame((df.values.T + val1).T, + index=df.index, columns=df.columns) + assert_frame_equal(df.add(val1, axis=0), added) + + val2 = list(df['two']) + + added = DataFrame(df.values + val2, index=df.index, columns=df.columns) + assert_frame_equal(df + val2, added) + + added = DataFrame((df.values.T + val2).T, index=df.index, + columns=df.columns) + assert_frame_equal(df.add(val2, axis='index'), added) + + val3 = np.random.rand(*df.shape) + added = DataFrame(df.values + val3, index=df.index, columns=df.columns) + assert_frame_equal(df.add(val3), added) + + def test_combineFrame(self): + frame_copy = self.frame.reindex(self.frame.index[::2]) + + del frame_copy['D'] + frame_copy['C'][:5] = nan + + added = self.frame + frame_copy + tm.assert_dict_equal(added['A'].valid(), + self.frame['A'] * 2, + compare_keys=False) + + self.assertTrue( + np.isnan(added['C'].reindex(frame_copy.index)[:5]).all()) + + # assert(False) + + self.assertTrue(np.isnan(added['D']).all()) + + self_added = self.frame + self.frame + self.assertTrue(self_added.index.equals(self.frame.index)) + + added_rev = frame_copy + self.frame + self.assertTrue(np.isnan(added['D']).all()) + self.assertTrue(np.isnan(added_rev['D']).all()) + + # corner 
cases + + # empty + plus_empty = self.frame + self.empty + self.assertTrue(np.isnan(plus_empty.values).all()) + + empty_plus = self.empty + self.frame + self.assertTrue(np.isnan(empty_plus.values).all()) + + empty_empty = self.empty + self.empty + self.assertTrue(empty_empty.empty) + + # out of order + reverse = self.frame.reindex(columns=self.frame.columns[::-1]) + + assert_frame_equal(reverse + self.frame, self.frame * 2) + + # mix vs float64, upcast + added = self.frame + self.mixed_float + _check_mixed_float(added, dtype='float64') + added = self.mixed_float + self.frame + _check_mixed_float(added, dtype='float64') + + # mix vs mix + added = self.mixed_float + self.mixed_float2 + _check_mixed_float(added, dtype=dict(C=None)) + added = self.mixed_float2 + self.mixed_float + _check_mixed_float(added, dtype=dict(C=None)) + + # with int + added = self.frame + self.mixed_int + _check_mixed_float(added, dtype='float64') + + def test_combineSeries(self): + + # Series + series = self.frame.xs(self.frame.index[0]) + + added = self.frame + series + + for key, s in compat.iteritems(added): + assert_series_equal(s, self.frame[key] + series[key]) + + larger_series = series.to_dict() + larger_series['E'] = 1 + larger_series = Series(larger_series) + larger_added = self.frame + larger_series + + for key, s in compat.iteritems(self.frame): + assert_series_equal(larger_added[key], s + series[key]) + self.assertIn('E', larger_added) + self.assertTrue(np.isnan(larger_added['E']).all()) + + # vs mix (upcast) as needed + added = self.mixed_float + series + _check_mixed_float(added, dtype='float64') + added = self.mixed_float + series.astype('float32') + _check_mixed_float(added, dtype=dict(C=None)) + added = self.mixed_float + series.astype('float16') + _check_mixed_float(added, dtype=dict(C=None)) + + # these raise with numexpr.....as we are adding an int64 to an + # uint64....weird vs int + + # added = self.mixed_int + (100*series).astype('int64') + # _check_mixed_int(added, 
dtype = dict(A = 'int64', B = 'float64', C = + # 'int64', D = 'int64')) + # added = self.mixed_int + (100*series).astype('int32') + # _check_mixed_int(added, dtype = dict(A = 'int32', B = 'float64', C = + # 'int32', D = 'int64')) + + # TimeSeries + ts = self.tsframe['A'] + + # 10890 + # we no longer allow auto timeseries broadcasting + # and require explicit broadcasting + added = self.tsframe.add(ts, axis='index') + + for key, col in compat.iteritems(self.tsframe): + result = col + ts + assert_series_equal(added[key], result, check_names=False) + self.assertEqual(added[key].name, key) + if col.name == ts.name: + self.assertEqual(result.name, 'A') + else: + self.assertTrue(result.name is None) + + smaller_frame = self.tsframe[:-5] + smaller_added = smaller_frame.add(ts, axis='index') + + self.assertTrue(smaller_added.index.equals(self.tsframe.index)) + + smaller_ts = ts[:-5] + smaller_added2 = self.tsframe.add(smaller_ts, axis='index') + assert_frame_equal(smaller_added, smaller_added2) + + # length 0, result is all-nan + result = self.tsframe.add(ts[:0], axis='index') + expected = DataFrame(np.nan, index=self.tsframe.index, + columns=self.tsframe.columns) + assert_frame_equal(result, expected) + + # Frame is all-nan + result = self.tsframe[:0].add(ts, axis='index') + expected = DataFrame(np.nan, index=self.tsframe.index, + columns=self.tsframe.columns) + assert_frame_equal(result, expected) + + # empty but with non-empty index + frame = self.tsframe[:1].reindex(columns=[]) + result = frame.mul(ts, axis='index') + self.assertEqual(len(result), len(ts)) + + def test_combineFunc(self): + result = self.frame * 2 + self.assert_numpy_array_equal(result.values, self.frame.values * 2) + + # vs mix + result = self.mixed_float * 2 + for c, s in compat.iteritems(result): + self.assert_numpy_array_equal( + s.values, self.mixed_float[c].values * 2) + _check_mixed_float(result, dtype=dict(C=None)) + + result = self.empty * 2 + self.assertIs(result.index, self.empty.index) + 
self.assertEqual(len(result.columns), 0) + + def test_comparisons(self): + df1 = tm.makeTimeDataFrame() + df2 = tm.makeTimeDataFrame() + + row = self.simple.xs('a') + ndim_5 = np.ones(df1.shape + (1, 1, 1)) + + def test_comp(func): + result = func(df1, df2) + self.assert_numpy_array_equal(result.values, + func(df1.values, df2.values)) + with assertRaisesRegexp(ValueError, 'Wrong number of dimensions'): + func(df1, ndim_5) + + result2 = func(self.simple, row) + self.assert_numpy_array_equal(result2.values, + func(self.simple.values, row.values)) + + result3 = func(self.frame, 0) + self.assert_numpy_array_equal(result3.values, + func(self.frame.values, 0)) + + with assertRaisesRegexp(ValueError, 'Can only compare ' + 'identically-labeled DataFrame'): + func(self.simple, self.simple[:2]) + + test_comp(operator.eq) + test_comp(operator.ne) + test_comp(operator.lt) + test_comp(operator.gt) + test_comp(operator.ge) + test_comp(operator.le) + + def test_string_comparison(self): + df = DataFrame([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}]) + mask_a = df.a > 1 + assert_frame_equal(df[mask_a], df.ix[1:1, :]) + assert_frame_equal(df[-mask_a], df.ix[0:0, :]) + + mask_b = df.b == "foo" + assert_frame_equal(df[mask_b], df.ix[0:0, :]) + assert_frame_equal(df[-mask_b], df.ix[1:1, :]) + + def test_float_none_comparison(self): + df = DataFrame(np.random.randn(8, 3), index=lrange(8), + columns=['A', 'B', 'C']) + + self.assertRaises(TypeError, df.__eq__, None) + + def test_boolean_comparison(self): + + # GH 4576 + # boolean comparisons with a tuple/list give unexpected results + df = DataFrame(np.arange(6).reshape((3, 2))) + b = np.array([2, 2]) + b_r = np.atleast_2d([2, 2]) + b_c = b_r.T + l = (2, 2, 2) + tup = tuple(l) + + # gt + expected = DataFrame([[False, False], [False, True], [True, True]]) + result = df > b + assert_frame_equal(result, expected) + + result = df.values > b + assert_numpy_array_equal(result, expected.values) + + result = df > l + assert_frame_equal(result, 
expected) + + result = df > tup + assert_frame_equal(result, expected) + + result = df > b_r + assert_frame_equal(result, expected) + + result = df.values > b_r + assert_numpy_array_equal(result, expected.values) + + self.assertRaises(ValueError, df.__gt__, b_c) + self.assertRaises(ValueError, df.values.__gt__, b_c) + + # == + expected = DataFrame([[False, False], [True, False], [False, False]]) + result = df == b + assert_frame_equal(result, expected) + + result = df == l + assert_frame_equal(result, expected) + + result = df == tup + assert_frame_equal(result, expected) + + result = df == b_r + assert_frame_equal(result, expected) + + result = df.values == b_r + assert_numpy_array_equal(result, expected.values) + + self.assertRaises(ValueError, lambda: df == b_c) + self.assertFalse((df.values == b_c)) + + # with alignment + df = DataFrame(np.arange(6).reshape((3, 2)), + columns=list('AB'), index=list('abc')) + expected.index = df.index + expected.columns = df.columns + + result = df == l + assert_frame_equal(result, expected) + + result = df == tup + assert_frame_equal(result, expected) + + # not shape compatible + self.assertRaises(ValueError, lambda: df == (2, 2)) + self.assertRaises(ValueError, lambda: df == [2, 2]) + + def test_combineAdd(self): + + with tm.assert_produces_warning(FutureWarning): + # trivial + comb = self.frame.combineAdd(self.frame) + assert_frame_equal(comb, self.frame * 2) + + # more rigorous + a = DataFrame([[1., nan, nan, 2., nan]], + columns=np.arange(5)) + b = DataFrame([[2., 3., nan, 2., 6., nan]], + columns=np.arange(6)) + expected = DataFrame([[3., 3., nan, 4., 6., nan]], + columns=np.arange(6)) + + result = a.combineAdd(b) + assert_frame_equal(result, expected) + result2 = a.T.combineAdd(b.T) + assert_frame_equal(result2, expected.T) + + expected2 = a.combine(b, operator.add, fill_value=0.) 
+ assert_frame_equal(expected, expected2) + + # corner cases + comb = self.frame.combineAdd(self.empty) + assert_frame_equal(comb, self.frame) + + comb = self.empty.combineAdd(self.frame) + assert_frame_equal(comb, self.frame) + + # integer corner case + df1 = DataFrame({'x': [5]}) + df2 = DataFrame({'x': [1]}) + df3 = DataFrame({'x': [6]}) + comb = df1.combineAdd(df2) + assert_frame_equal(comb, df3) + + # mixed type GH2191 + df1 = DataFrame({'A': [1, 2], 'B': [3, 4]}) + df2 = DataFrame({'A': [1, 2], 'C': [5, 6]}) + rs = df1.combineAdd(df2) + xp = DataFrame({'A': [2, 4], 'B': [3, 4.], 'C': [5, 6.]}) + assert_frame_equal(xp, rs) + + # TODO: test integer fill corner? + + def test_combineMult(self): + with tm.assert_produces_warning(FutureWarning): + # trivial + comb = self.frame.combineMult(self.frame) + + assert_frame_equal(comb, self.frame ** 2) + + # corner cases + comb = self.frame.combineMult(self.empty) + assert_frame_equal(comb, self.frame) + + comb = self.empty.combineMult(self.frame) + assert_frame_equal(comb, self.frame) + + def test_combine_generic(self): + df1 = self.frame + df2 = self.frame.ix[:-5, ['A', 'B', 'C']] + + combined = df1.combine(df2, np.add) + combined2 = df2.combine(df1, np.add) + self.assertTrue(combined['D'].isnull().all()) + self.assertTrue(combined2['D'].isnull().all()) + + chunk = combined.ix[:-5, ['A', 'B', 'C']] + chunk2 = combined2.ix[:-5, ['A', 'B', 'C']] + + exp = self.frame.ix[:-5, ['A', 'B', 'C']].reindex_like(chunk) * 2 + assert_frame_equal(chunk, exp) + assert_frame_equal(chunk2, exp) + + def test_inplace_ops_alignment(self): + + # inplace ops / ops alignment + # GH 8511 + + columns = list('abcdefg') + X_orig = DataFrame(np.arange(10 * len(columns)) + .reshape(-1, len(columns)), + columns=columns, index=range(10)) + Z = 100 * X_orig.iloc[:, 1:-1].copy() + block1 = list('bedcf') + subs = list('bcdef') + + # add + X = X_orig.copy() + result1 = (X[block1] + Z).reindex(columns=subs) + + X[block1] += Z + result2 = 
X.reindex(columns=subs) + + X = X_orig.copy() + result3 = (X[block1] + Z[block1]).reindex(columns=subs) + + X[block1] += Z[block1] + result4 = X.reindex(columns=subs) + + assert_frame_equal(result1, result2) + assert_frame_equal(result1, result3) + assert_frame_equal(result1, result4) + + # sub + X = X_orig.copy() + result1 = (X[block1] - Z).reindex(columns=subs) + + X[block1] -= Z + result2 = X.reindex(columns=subs) + + X = X_orig.copy() + result3 = (X[block1] - Z[block1]).reindex(columns=subs) + + X[block1] -= Z[block1] + result4 = X.reindex(columns=subs) + + assert_frame_equal(result1, result2) + assert_frame_equal(result1, result3) + assert_frame_equal(result1, result4) + + def test_inplace_ops_identity(self): + + # GH 5104 + # make sure that we are actually changing the object + s_orig = Series([1, 2, 3]) + df_orig = DataFrame(np.random.randint(0, 5, size=10).reshape(-1, 5)) + + # no dtype change + s = s_orig.copy() + s2 = s + s += 1 + assert_series_equal(s, s2) + assert_series_equal(s_orig + 1, s) + self.assertIs(s, s2) + self.assertIs(s._data, s2._data) + + df = df_orig.copy() + df2 = df + df += 1 + assert_frame_equal(df, df2) + assert_frame_equal(df_orig + 1, df) + self.assertIs(df, df2) + self.assertIs(df._data, df2._data) + + # dtype change + s = s_orig.copy() + s2 = s + s += 1.5 + assert_series_equal(s, s2) + assert_series_equal(s_orig + 1.5, s) + + df = df_orig.copy() + df2 = df + df += 1.5 + assert_frame_equal(df, df2) + assert_frame_equal(df_orig + 1.5, df) + self.assertIs(df, df2) + self.assertIs(df._data, df2._data) + + # mixed dtype + arr = np.random.randint(0, 10, size=5) + df_orig = DataFrame({'A': arr.copy(), 'B': 'foo'}) + df = df_orig.copy() + df2 = df + df['A'] += 1 + expected = DataFrame({'A': arr.copy() + 1, 'B': 'foo'}) + assert_frame_equal(df, expected) + assert_frame_equal(df2, expected) + self.assertIs(df._data, df2._data) + + df = df_orig.copy() + df2 = df + df['A'] += 1.5 + expected = DataFrame({'A': arr.copy() + 1.5, 'B': 'foo'}) + 
assert_frame_equal(df, expected) + assert_frame_equal(df2, expected) + self.assertIs(df._data, df2._data) diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py new file mode 100644 index 0000000000000..52594b982a0d0 --- /dev/null +++ b/pandas/tests/frame/test_query_eval.py @@ -0,0 +1,1087 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +import operator +import nose +from itertools import product + +from pandas.compat import (zip, range, lrange, StringIO) +from pandas import DataFrame, Series, Index, MultiIndex, date_range +import pandas as pd +import numpy as np + +from numpy.random import randn + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaises, + makeCustomDataframe as mkdf) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +PARSERS = 'python', 'pandas' +ENGINES = 'python', 'numexpr' + + +def skip_if_no_pandas_parser(parser): + if parser != 'pandas': + raise nose.SkipTest("cannot evaluate with parser {0!r}".format(parser)) + + +def skip_if_no_ne(engine='numexpr'): + if engine == 'numexpr': + try: + import numexpr as ne # noqa + except ImportError: + raise nose.SkipTest("cannot query engine numexpr when numexpr not " + "installed") + + +class TestDataFrameEval(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_ops(self): + + # test ops and reversed ops in evaluation + # GH7198 + + # smaller hits python, larger hits numexpr + for n in [4, 4000]: + + df = DataFrame(1, index=range(n), columns=list('abcd')) + df.iloc[0] = 2 + m = df.mean() + + for op_str, op, rop in [('+', '__add__', '__radd__'), + ('-', '__sub__', '__rsub__'), + ('*', '__mul__', '__rmul__'), + ('/', '__truediv__', '__rtruediv__')]: + + base = (DataFrame(np.tile(m.values, n) # noqa + .reshape(n, -1), + columns=list('abcd'))) + + expected = eval("base{op}df".format(op=op_str)) + + # ops as strings + result = eval("m{op}df".format(op=op_str)) 
+ assert_frame_equal(result, expected) + + # these are commutative; compare against op_str, not the + # dunder name, so the branches actually execute + if op_str in ['+', '*']: + result = getattr(df, op)(m) + assert_frame_equal(result, expected) + + # these are not + elif op_str in ['-', '/']: + result = getattr(df, rop)(m) + assert_frame_equal(result, expected) + + # GH7192 + df = DataFrame(dict(A=np.random.randn(25000))) + df.iloc[0:5] = np.nan + expected = (1 - np.isnan(df.iloc[0:25])) + result = (1 - np.isnan(df)).iloc[0:25] + assert_frame_equal(result, expected) + + +class TestDataFrameQueryWithMultiIndex(tm.TestCase): + + _multiprocess_can_split_ = True + + def check_query_with_named_multiindex(self, parser, engine): + tm.skip_if_no_ne(engine) + a = tm.choice(['red', 'green'], size=10) + b = tm.choice(['eggs', 'ham'], size=10) + index = MultiIndex.from_arrays([a, b], names=['color', 'food']) + df = DataFrame(randn(10, 2), index=index) + ind = Series(df.index.get_level_values('color').values, index=index, + name='color') + + # equality + res1 = df.query('color == "red"', parser=parser, engine=engine) + res2 = df.query('"red" == color', parser=parser, engine=engine) + exp = df[ind == 'red'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # inequality + res1 = df.query('color != "red"', parser=parser, engine=engine) + res2 = df.query('"red" != color', parser=parser, engine=engine) + exp = df[ind != 'red'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # list equality (really just set membership) + res1 = df.query('color == ["red"]', parser=parser, engine=engine) + res2 = df.query('["red"] == color', parser=parser, engine=engine) + exp = df[ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('color != ["red"]', parser=parser, engine=engine) + res2 = df.query('["red"] != color', parser=parser, engine=engine) + exp = df[~ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # in/not in ops + res1 = df.query('["red"] in color', 
parser=parser, engine=engine) + res2 = df.query('"red" in color', parser=parser, engine=engine) + exp = df[ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('["red"] not in color', parser=parser, engine=engine) + res2 = df.query('"red" not in color', parser=parser, engine=engine) + exp = df[~ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + def test_query_with_named_multiindex(self): + for parser, engine in product(['pandas'], ENGINES): + yield self.check_query_with_named_multiindex, parser, engine + + def check_query_with_unnamed_multiindex(self, parser, engine): + tm.skip_if_no_ne(engine) + a = tm.choice(['red', 'green'], size=10) + b = tm.choice(['eggs', 'ham'], size=10) + index = MultiIndex.from_arrays([a, b]) + df = DataFrame(randn(10, 2), index=index) + ind = Series(df.index.get_level_values(0).values, index=index) + + res1 = df.query('ilevel_0 == "red"', parser=parser, engine=engine) + res2 = df.query('"red" == ilevel_0', parser=parser, engine=engine) + exp = df[ind == 'red'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # inequality + res1 = df.query('ilevel_0 != "red"', parser=parser, engine=engine) + res2 = df.query('"red" != ilevel_0', parser=parser, engine=engine) + exp = df[ind != 'red'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # list equality (really just set membership) + res1 = df.query('ilevel_0 == ["red"]', parser=parser, engine=engine) + res2 = df.query('["red"] == ilevel_0', parser=parser, engine=engine) + exp = df[ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('ilevel_0 != ["red"]', parser=parser, engine=engine) + res2 = df.query('["red"] != ilevel_0', parser=parser, engine=engine) + exp = df[~ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # in/not in ops + res1 = df.query('["red"] in ilevel_0', parser=parser, 
engine=engine) + res2 = df.query('"red" in ilevel_0', parser=parser, engine=engine) + exp = df[ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('["red"] not in ilevel_0', parser=parser, + engine=engine) + res2 = df.query('"red" not in ilevel_0', parser=parser, engine=engine) + exp = df[~ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # ## LEVEL 1 + ind = Series(df.index.get_level_values(1).values, index=index) + res1 = df.query('ilevel_1 == "eggs"', parser=parser, engine=engine) + res2 = df.query('"eggs" == ilevel_1', parser=parser, engine=engine) + exp = df[ind == 'eggs'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # inequality + res1 = df.query('ilevel_1 != "eggs"', parser=parser, engine=engine) + res2 = df.query('"eggs" != ilevel_1', parser=parser, engine=engine) + exp = df[ind != 'eggs'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # list equality (really just set membership) + res1 = df.query('ilevel_1 == ["eggs"]', parser=parser, engine=engine) + res2 = df.query('["eggs"] == ilevel_1', parser=parser, engine=engine) + exp = df[ind.isin(['eggs'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('ilevel_1 != ["eggs"]', parser=parser, engine=engine) + res2 = df.query('["eggs"] != ilevel_1', parser=parser, engine=engine) + exp = df[~ind.isin(['eggs'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # in/not in ops + res1 = df.query('["eggs"] in ilevel_1', parser=parser, engine=engine) + res2 = df.query('"eggs" in ilevel_1', parser=parser, engine=engine) + exp = df[ind.isin(['eggs'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('["eggs"] not in ilevel_1', parser=parser, + engine=engine) + res2 = df.query('"eggs" not in ilevel_1', parser=parser, engine=engine) + exp = df[~ind.isin(['eggs'])] + assert_frame_equal(res1, exp) + 
assert_frame_equal(res2, exp) + + def test_query_with_unnamed_multiindex(self): + for parser, engine in product(['pandas'], ENGINES): + yield self.check_query_with_unnamed_multiindex, parser, engine + + def check_query_with_partially_named_multiindex(self, parser, engine): + tm.skip_if_no_ne(engine) + a = tm.choice(['red', 'green'], size=10) + b = np.arange(10) + index = MultiIndex.from_arrays([a, b]) + index.names = [None, 'rating'] + df = DataFrame(randn(10, 2), index=index) + res = df.query('rating == 1', parser=parser, engine=engine) + ind = Series(df.index.get_level_values('rating').values, index=index, + name='rating') + exp = df[ind == 1] + assert_frame_equal(res, exp) + + res = df.query('rating != 1', parser=parser, engine=engine) + ind = Series(df.index.get_level_values('rating').values, index=index, + name='rating') + exp = df[ind != 1] + assert_frame_equal(res, exp) + + res = df.query('ilevel_0 == "red"', parser=parser, engine=engine) + ind = Series(df.index.get_level_values(0).values, index=index) + exp = df[ind == "red"] + assert_frame_equal(res, exp) + + res = df.query('ilevel_0 != "red"', parser=parser, engine=engine) + ind = Series(df.index.get_level_values(0).values, index=index) + exp = df[ind != "red"] + assert_frame_equal(res, exp) + + def test_query_with_partially_named_multiindex(self): + for parser, engine in product(['pandas'], ENGINES): + yield (self.check_query_with_partially_named_multiindex, + parser, engine) + + def test_query_multiindex_get_index_resolvers(self): + for parser, engine in product(['pandas'], ENGINES): + yield (self.check_query_multiindex_get_index_resolvers, parser, + engine) + + def check_query_multiindex_get_index_resolvers(self, parser, engine): + df = mkdf(10, 3, r_idx_nlevels=2, r_idx_names=['spam', 'eggs']) + resolvers = df._get_index_resolvers() + + def to_series(mi, level): + level_values = mi.get_level_values(level) + s = level_values.to_series() + s.index = mi + return s + + col_series = df.columns.to_series() 
+ expected = {'index': df.index, + 'columns': col_series, + 'spam': to_series(df.index, 'spam'), + 'eggs': to_series(df.index, 'eggs'), + 'C0': col_series} + for k, v in resolvers.items(): + if isinstance(v, Index): + assert v.is_(expected[k]) + elif isinstance(v, Series): + assert_series_equal(v, expected[k]) + else: + raise AssertionError("object must be a Series or Index") + + def test_raise_on_panel_with_multiindex(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_raise_on_panel_with_multiindex, parser, engine + + def check_raise_on_panel_with_multiindex(self, parser, engine): + tm.skip_if_no_ne() + p = tm.makePanel(7) + p.items = tm.makeCustomIndex(len(p.items), nlevels=2) + with tm.assertRaises(NotImplementedError): + pd.eval('p + 1', parser=parser, engine=engine) + + def test_raise_on_panel4d_with_multiindex(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_raise_on_panel4d_with_multiindex, parser, engine + + def check_raise_on_panel4d_with_multiindex(self, parser, engine): + tm.skip_if_no_ne() + p4d = tm.makePanel4D(7) + p4d.items = tm.makeCustomIndex(len(p4d.items), nlevels=2) + with tm.assertRaises(NotImplementedError): + pd.eval('p4d + 1', parser=parser, engine=engine) + + +class TestDataFrameQueryNumExprPandas(tm.TestCase): + + @classmethod + def setUpClass(cls): + super(TestDataFrameQueryNumExprPandas, cls).setUpClass() + cls.engine = 'numexpr' + cls.parser = 'pandas' + tm.skip_if_no_ne(cls.engine) + + @classmethod + def tearDownClass(cls): + super(TestDataFrameQueryNumExprPandas, cls).tearDownClass() + del cls.engine, cls.parser + + def test_date_query_with_attribute_access(self): + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + df = DataFrame(randn(5, 3)) + df['dates1'] = date_range('1/1/2012', periods=5) + df['dates2'] = date_range('1/1/2013', periods=5) + df['dates3'] = date_range('1/1/2014', periods=5) + res = df.query('@df.dates1 < 20130101 < @df.dates3', 
engine=engine, + parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_query_no_attribute_access(self): + engine, parser = self.engine, self.parser + df = DataFrame(randn(5, 3)) + df['dates1'] = date_range('1/1/2012', periods=5) + df['dates2'] = date_range('1/1/2013', periods=5) + df['dates3'] = date_range('1/1/2014', periods=5) + res = df.query('dates1 < 20130101 < dates3', engine=engine, + parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_query_with_NaT(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates2'] = date_range('1/1/2013', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT + df.loc[np.random.rand(n) > 0.5, 'dates3'] = pd.NaT + res = df.query('dates1 < 20130101 < dates3', engine=engine, + parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.set_index('dates1', inplace=True, drop=True) + res = df.query('index < 20130101 < dates3', engine=engine, + parser=parser) + expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query_with_NaT(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.iloc[0, 0] = pd.NaT + df.set_index('dates1', inplace=True, drop=True) + res = df.query('index < 20130101 < dates3', engine=engine, + parser=parser) + expec = 
df[(df.index < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query_with_NaT_duplicates(self): + engine, parser = self.engine, self.parser + n = 10 + d = {} + d['dates1'] = date_range('1/1/2012', periods=n) + d['dates3'] = date_range('1/1/2014', periods=n) + df = DataFrame(d) + df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT + df.set_index('dates1', inplace=True, drop=True) + res = df.query('index < 20130101 < dates3', engine=engine, + parser=parser) + expec = df[(df.index.to_series() < '20130101') & + ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_query_with_non_date(self): + engine, parser = self.engine, self.parser + + n = 10 + df = DataFrame({'dates': date_range('1/1/2012', periods=n), + 'nondate': np.arange(n)}) + + ops = '==', '!=', '<', '>', '<=', '>=' + + for op in ops: + with tm.assertRaises(TypeError): + df.query('dates %s nondate' % op, parser=parser, engine=engine) + + def test_query_syntax_error(self): + engine, parser = self.engine, self.parser + df = DataFrame({"i": lrange(10), "+": lrange(3, 13), + "r": lrange(4, 14)}) + with tm.assertRaises(SyntaxError): + df.query('i - +', engine=engine, parser=parser) + + def test_query_scope(self): + from pandas.computation.ops import UndefinedVariableError + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + + df = DataFrame(np.random.randn(20, 2), columns=list('ab')) + + a, b = 1, 2 # noqa + res = df.query('a > b', engine=engine, parser=parser) + expected = df[df.a > df.b] + assert_frame_equal(res, expected) + + res = df.query('@a > b', engine=engine, parser=parser) + expected = df[a > df.b] + assert_frame_equal(res, expected) + + # no local variable c + with tm.assertRaises(UndefinedVariableError): + df.query('@a > b > @c', engine=engine, parser=parser) + + # no column named 'c' + with tm.assertRaises(UndefinedVariableError): + df.query('@a > b > c', engine=engine, parser=parser) + + def 
test_query_doesnt_pickup_local(self): + from pandas.computation.ops import UndefinedVariableError + + engine, parser = self.engine, self.parser + n = m = 10 + df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) + + # we don't pick up the local 'sin' + with tm.assertRaises(UndefinedVariableError): + df.query('sin > 5', engine=engine, parser=parser) + + def test_query_builtin(self): + from pandas.computation.engines import NumExprClobberingError + engine, parser = self.engine, self.parser + + n = m = 10 + df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) + + df.index.name = 'sin' + with tm.assertRaisesRegexp(NumExprClobberingError, + 'Variables in expression.+'): + df.query('sin > 5', engine=engine, parser=parser) + + def test_query(self): + engine, parser = self.engine, self.parser + df = DataFrame(np.random.randn(10, 3), columns=['a', 'b', 'c']) + + assert_frame_equal(df.query('a < b', engine=engine, parser=parser), + df[df.a < df.b]) + assert_frame_equal(df.query('a + b > b * c', engine=engine, + parser=parser), + df[df.a + df.b > df.b * df.c]) + + def test_query_index_with_name(self): + engine, parser = self.engine, self.parser + df = DataFrame(np.random.randint(10, size=(10, 3)), + index=Index(range(10), name='blob'), + columns=['a', 'b', 'c']) + res = df.query('(blob < 5) & (a < b)', engine=engine, parser=parser) + expec = df[(df.index < 5) & (df.a < df.b)] + assert_frame_equal(res, expec) + + res = df.query('blob < b', engine=engine, parser=parser) + expec = df[df.index < df.b] + + assert_frame_equal(res, expec) + + def test_query_index_without_name(self): + engine, parser = self.engine, self.parser + df = DataFrame(np.random.randint(10, size=(10, 3)), + index=range(10), columns=['a', 'b', 'c']) + + # "index" should refer to the index + res = df.query('index < b', engine=engine, parser=parser) + expec = df[df.index < df.b] + assert_frame_equal(res, expec) + + # test against a scalar + res = df.query('index < 5', 
engine=engine, parser=parser) + expec = df[df.index < 5] + assert_frame_equal(res, expec) + + def test_nested_scope(self): + engine = self.engine + parser = self.parser + + skip_if_no_pandas_parser(parser) + + df = DataFrame(np.random.randn(5, 3)) + df2 = DataFrame(np.random.randn(5, 3)) + expected = df[(df > 0) & (df2 > 0)] + + result = df.query('(@df > 0) & (@df2 > 0)', engine=engine, + parser=parser) + assert_frame_equal(result, expected) + + result = pd.eval('df[df > 0 and df2 > 0]', engine=engine, + parser=parser) + assert_frame_equal(result, expected) + + result = pd.eval('df[df > 0 and df2 > 0 and df[df > 0] > 0]', + engine=engine, parser=parser) + expected = df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)] + assert_frame_equal(result, expected) + + result = pd.eval('df[(df>0) & (df2>0)]', engine=engine, parser=parser) + expected = df.query('(@df>0) & (@df2>0)', engine=engine, parser=parser) + assert_frame_equal(result, expected) + + def test_nested_raises_on_local_self_reference(self): + from pandas.computation.ops import UndefinedVariableError + + df = DataFrame(np.random.randn(5, 3)) + + # can't reference ourself b/c we're a local so @ is necessary + with tm.assertRaises(UndefinedVariableError): + df.query('df > 0', engine=self.engine, parser=self.parser) + + def test_local_syntax(self): + skip_if_no_pandas_parser(self.parser) + + engine, parser = self.engine, self.parser + df = DataFrame(randn(100, 10), columns=list('abcdefghij')) + b = 1 + expect = df[df.a < b] + result = df.query('a < @b', engine=engine, parser=parser) + assert_frame_equal(result, expect) + + expect = df[df.a < df.b] + result = df.query('a < b', engine=engine, parser=parser) + assert_frame_equal(result, expect) + + def test_chained_cmp_and_in(self): + skip_if_no_pandas_parser(self.parser) + engine, parser = self.engine, self.parser + cols = list('abc') + df = DataFrame(randn(100, len(cols)), columns=cols) + res = df.query('a < b < c and a not in b not in c', engine=engine, + parser=parser) 
+ ind = ((df.a < df.b) & (df.b < df.c) & ~df.b.isin(df.a) & + ~df.c.isin(df.b)) + expec = df[ind] + assert_frame_equal(res, expec) + + def test_local_variable_with_in(self): + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + a = Series(np.random.randint(3, size=15), name='a') + b = Series(np.random.randint(10, size=15), name='b') + df = DataFrame({'a': a, 'b': b}) + + expected = df.loc[(df.b - 1).isin(a)] + result = df.query('b - 1 in a', engine=engine, parser=parser) + assert_frame_equal(expected, result) + + b = Series(np.random.randint(10, size=15), name='b') + expected = df.loc[(b - 1).isin(a)] + result = df.query('@b - 1 in a', engine=engine, parser=parser) + assert_frame_equal(expected, result) + + def test_at_inside_string(self): + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + c = 1 # noqa + df = DataFrame({'a': ['a', 'a', 'b', 'b', '@c', '@c']}) + result = df.query('a == "@c"', engine=engine, parser=parser) + expected = df[df.a == "@c"] + assert_frame_equal(result, expected) + + def test_query_undefined_local(self): + from pandas.computation.ops import UndefinedVariableError + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + df = DataFrame(np.random.rand(10, 2), columns=list('ab')) + with tm.assertRaisesRegexp(UndefinedVariableError, + "local variable 'c' is not defined"): + df.query('a == @c', engine=engine, parser=parser) + + def test_index_resolvers_come_after_columns_with_the_same_name(self): + n = 1 # noqa + a = np.r_[20:101:20] + + df = DataFrame({'index': a, 'b': np.random.randn(a.size)}) + df.index.name = 'index' + result = df.query('index > 5', engine=self.engine, parser=self.parser) + expected = df[df['index'] > 5] + assert_frame_equal(result, expected) + + df = DataFrame({'index': a, + 'b': np.random.randn(a.size)}) + result = df.query('ilevel_0 > 5', engine=self.engine, + parser=self.parser) + expected = df.loc[df.index[df.index > 5]] + 
assert_frame_equal(result, expected) + + df = DataFrame({'a': a, 'b': np.random.randn(a.size)}) + df.index.name = 'a' + result = df.query('a > 5', engine=self.engine, parser=self.parser) + expected = df[df.a > 5] + assert_frame_equal(result, expected) + + result = df.query('index > 5', engine=self.engine, parser=self.parser) + expected = df.loc[df.index[df.index > 5]] + assert_frame_equal(result, expected) + + def test_inf(self): + n = 10 + df = DataFrame({'a': np.random.rand(n), 'b': np.random.rand(n)}) + df.loc[::2, 0] = np.inf + ops = '==', '!=' + d = dict(zip(ops, (operator.eq, operator.ne))) + for op, f in d.items(): + q = 'a %s inf' % op + expected = df[f(df.a, np.inf)] + result = df.query(q, engine=self.engine, parser=self.parser) + assert_frame_equal(result, expected) + + +class TestDataFrameQueryNumExprPython(TestDataFrameQueryNumExprPandas): + + @classmethod + def setUpClass(cls): + super(TestDataFrameQueryNumExprPython, cls).setUpClass() + cls.engine = 'numexpr' + cls.parser = 'python' + tm.skip_if_no_ne(cls.engine) + cls.frame = TestData().frame + + def test_date_query_no_attribute_access(self): + engine, parser = self.engine, self.parser + df = DataFrame(randn(5, 3)) + df['dates1'] = date_range('1/1/2012', periods=5) + df['dates2'] = date_range('1/1/2013', periods=5) + df['dates3'] = date_range('1/1/2014', periods=5) + res = df.query('(dates1 < 20130101) & (20130101 < dates3)', + engine=engine, parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_query_with_NaT(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates2'] = date_range('1/1/2013', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT + df.loc[np.random.rand(n) > 0.5, 'dates3'] = pd.NaT + res = df.query('(dates1 < 20130101) & (20130101 < dates3)', + 
engine=engine, parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.set_index('dates1', inplace=True, drop=True) + res = df.query('(index < 20130101) & (20130101 < dates3)', + engine=engine, parser=parser) + expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query_with_NaT(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.iloc[0, 0] = pd.NaT + df.set_index('dates1', inplace=True, drop=True) + res = df.query('(index < 20130101) & (20130101 < dates3)', + engine=engine, parser=parser) + expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query_with_NaT_duplicates(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT + df.set_index('dates1', inplace=True, drop=True) + with tm.assertRaises(NotImplementedError): + df.query('index < 20130101 < dates3', engine=engine, parser=parser) + + def test_nested_scope(self): + from pandas.computation.ops import UndefinedVariableError + engine = self.engine + parser = self.parser + # smoke test + x = 1 # noqa + result = pd.eval('x + 1', engine=engine, parser=parser) + self.assertEqual(result, 2) + + df = DataFrame(np.random.randn(5, 3)) + df2 = DataFrame(np.random.randn(5, 3)) + + # don't have the pandas parser + with tm.assertRaises(SyntaxError): + df.query('(@df>0) & (@df2>0)', 
engine=engine, parser=parser) + + with tm.assertRaises(UndefinedVariableError): + df.query('(df>0) & (df2>0)', engine=engine, parser=parser) + + expected = df[(df > 0) & (df2 > 0)] + result = pd.eval('df[(df > 0) & (df2 > 0)]', engine=engine, + parser=parser) + assert_frame_equal(expected, result) + + expected = df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)] + result = pd.eval('df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)]', + engine=engine, parser=parser) + assert_frame_equal(expected, result) + + +class TestDataFrameQueryPythonPandas(TestDataFrameQueryNumExprPandas): + + @classmethod + def setUpClass(cls): + super(TestDataFrameQueryPythonPandas, cls).setUpClass() + cls.engine = 'python' + cls.parser = 'pandas' + cls.frame = TestData().frame + + def test_query_builtin(self): + engine, parser = self.engine, self.parser + + n = m = 10 + df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) + + df.index.name = 'sin' + expected = df[df.index > 5] + result = df.query('sin > 5', engine=engine, parser=parser) + assert_frame_equal(expected, result) + + +class TestDataFrameQueryPythonPython(TestDataFrameQueryNumExprPython): + + @classmethod + def setUpClass(cls): + super(TestDataFrameQueryPythonPython, cls).setUpClass() + cls.engine = cls.parser = 'python' + cls.frame = TestData().frame + + def test_query_builtin(self): + engine, parser = self.engine, self.parser + + n = m = 10 + df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) + + df.index.name = 'sin' + expected = df[df.index > 5] + result = df.query('sin > 5', engine=engine, parser=parser) + assert_frame_equal(expected, result) + + +class TestDataFrameQueryStrings(tm.TestCase): + + def check_str_query_method(self, parser, engine): + tm.skip_if_no_ne(engine) + df = DataFrame(randn(10, 1), columns=['b']) + df['strings'] = Series(list('aabbccddee')) + expect = df[df.strings == 'a'] + + if parser != 'pandas': + col = 'strings' + lst = '"a"' + + lhs = [col] * 2 + [lst] * 2 + rhs = lhs[::-1] 
+ + eq, ne = '==', '!=' + ops = 2 * ([eq] + [ne]) + + for lhs, op, rhs in zip(lhs, ops, rhs): + ex = '{lhs} {op} {rhs}'.format(lhs=lhs, op=op, rhs=rhs) + assertRaises(NotImplementedError, df.query, ex, engine=engine, + parser=parser, local_dict={'strings': df.strings}) + else: + res = df.query('"a" == strings', engine=engine, parser=parser) + assert_frame_equal(res, expect) + + res = df.query('strings == "a"', engine=engine, parser=parser) + assert_frame_equal(res, expect) + assert_frame_equal(res, df[df.strings.isin(['a'])]) + + expect = df[df.strings != 'a'] + res = df.query('strings != "a"', engine=engine, parser=parser) + assert_frame_equal(res, expect) + + res = df.query('"a" != strings', engine=engine, parser=parser) + assert_frame_equal(res, expect) + assert_frame_equal(res, df[~df.strings.isin(['a'])]) + + def test_str_query_method(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_str_query_method, parser, engine + + def test_str_list_query_method(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_str_list_query_method, parser, engine + + def check_str_list_query_method(self, parser, engine): + tm.skip_if_no_ne(engine) + df = DataFrame(randn(10, 1), columns=['b']) + df['strings'] = Series(list('aabbccddee')) + expect = df[df.strings.isin(['a', 'b'])] + + if parser != 'pandas': + col = 'strings' + lst = '["a", "b"]' + + lhs = [col] * 2 + [lst] * 2 + rhs = lhs[::-1] + + eq, ne = '==', '!=' + ops = 2 * ([eq] + [ne]) + + for lhs, op, rhs in zip(lhs, ops, rhs): + ex = '{lhs} {op} {rhs}'.format(lhs=lhs, op=op, rhs=rhs) + with tm.assertRaises(NotImplementedError): + df.query(ex, engine=engine, parser=parser) + else: + res = df.query('strings == ["a", "b"]', engine=engine, + parser=parser) + assert_frame_equal(res, expect) + + res = df.query('["a", "b"] == strings', engine=engine, + parser=parser) + assert_frame_equal(res, expect) + + expect = df[~df.strings.isin(['a', 'b'])] + + res = df.query('strings != ["a", 
"b"]', engine=engine, + parser=parser) + assert_frame_equal(res, expect) + + res = df.query('["a", "b"] != strings', engine=engine, + parser=parser) + assert_frame_equal(res, expect) + + def check_query_with_string_columns(self, parser, engine): + tm.skip_if_no_ne(engine) + df = DataFrame({'a': list('aaaabbbbcccc'), + 'b': list('aabbccddeeff'), + 'c': np.random.randint(5, size=12), + 'd': np.random.randint(9, size=12)}) + if parser == 'pandas': + res = df.query('a in b', parser=parser, engine=engine) + expec = df[df.a.isin(df.b)] + assert_frame_equal(res, expec) + + res = df.query('a in b and c < d', parser=parser, engine=engine) + expec = df[df.a.isin(df.b) & (df.c < df.d)] + assert_frame_equal(res, expec) + else: + with assertRaises(NotImplementedError): + df.query('a in b', parser=parser, engine=engine) + + with assertRaises(NotImplementedError): + df.query('a in b and c < d', parser=parser, engine=engine) + + def test_query_with_string_columns(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_query_with_string_columns, parser, engine + + def check_object_array_eq_ne(self, parser, engine): + tm.skip_if_no_ne(engine) + df = DataFrame({'a': list('aaaabbbbcccc'), + 'b': list('aabbccddeeff'), + 'c': np.random.randint(5, size=12), + 'd': np.random.randint(9, size=12)}) + res = df.query('a == b', parser=parser, engine=engine) + exp = df[df.a == df.b] + assert_frame_equal(res, exp) + + res = df.query('a != b', parser=parser, engine=engine) + exp = df[df.a != df.b] + assert_frame_equal(res, exp) + + def test_object_array_eq_ne(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_object_array_eq_ne, parser, engine + + def check_query_with_nested_strings(self, parser, engine): + tm.skip_if_no_ne(engine) + skip_if_no_pandas_parser(parser) + raw = """id event timestamp + 1 "page 1 load" 1/1/2014 0:00:01 + 1 "page 1 exit" 1/1/2014 0:00:31 + 2 "page 2 load" 1/1/2014 0:01:01 + 2 "page 2 exit" 1/1/2014 0:01:31 + 3 "page 3 load" 
1/1/2014 0:02:01 + 3 "page 3 exit" 1/1/2014 0:02:31 + 4 "page 1 load" 2/1/2014 1:00:01 + 4 "page 1 exit" 2/1/2014 1:00:31 + 5 "page 2 load" 2/1/2014 1:01:01 + 5 "page 2 exit" 2/1/2014 1:01:31 + 6 "page 3 load" 2/1/2014 1:02:01 + 6 "page 3 exit" 2/1/2014 1:02:31 + """ + df = pd.read_csv(StringIO(raw), sep=r'\s{2,}', engine='python', + parse_dates=['timestamp']) + expected = df[df.event == '"page 1 load"'] + res = df.query("""'"page 1 load"' in event""", parser=parser, + engine=engine) + assert_frame_equal(expected, res) + + def test_query_with_nested_string(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_query_with_nested_strings, parser, engine + + def check_query_with_nested_special_character(self, parser, engine): + skip_if_no_pandas_parser(parser) + tm.skip_if_no_ne(engine) + df = DataFrame({'a': ['a', 'b', 'test & test'], + 'b': [1, 2, 3]}) + res = df.query('a == "test & test"', parser=parser, engine=engine) + expec = df[df.a == 'test & test'] + assert_frame_equal(res, expec) + + def test_query_with_nested_special_character(self): + for parser, engine in product(PARSERS, ENGINES): + yield (self.check_query_with_nested_special_character, + parser, engine) + + def check_query_lex_compare_strings(self, parser, engine): + tm.skip_if_no_ne(engine=engine) + import operator as opr + + a = Series(tm.choice(list('abcde'), 20)) + b = Series(np.arange(a.size)) + df = DataFrame({'X': a, 'Y': b}) + + ops = {'<': opr.lt, '>': opr.gt, '<=': opr.le, '>=': opr.ge} + + for op, func in ops.items(): + res = df.query('X %s "d"' % op, engine=engine, parser=parser) + expected = df[func(df.X, 'd')] + assert_frame_equal(res, expected) + + def test_query_lex_compare_strings(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_query_lex_compare_strings, parser, engine + + def check_query_single_element_booleans(self, parser, engine): + tm.skip_if_no_ne(engine) + columns = 'bid', 'bidsize', 'ask', 'asksize' + data = np.random.randint(2, 
size=(1, len(columns))).astype(bool) + df = DataFrame(data, columns=columns) + res = df.query('bid & ask', engine=engine, parser=parser) + expected = df[df.bid & df.ask] + assert_frame_equal(res, expected) + + def test_query_single_element_booleans(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_query_single_element_booleans, parser, engine + + def check_query_string_scalar_variable(self, parser, engine): + tm.skip_if_no_ne(engine) + df = pd.DataFrame({'Symbol': ['BUD US', 'BUD US', 'IBM US', 'IBM US'], + 'Price': [109.70, 109.72, 183.30, 183.35]}) + e = df[df.Symbol == 'BUD US'] + symb = 'BUD US' # noqa + r = df.query('Symbol == @symb', parser=parser, engine=engine) + assert_frame_equal(e, r) + + def test_query_string_scalar_variable(self): + for parser, engine in product(['pandas'], ENGINES): + yield self.check_query_string_scalar_variable, parser, engine + + +class TestDataFrameEvalNumExprPandas(tm.TestCase): + + @classmethod + def setUpClass(cls): + super(TestDataFrameEvalNumExprPandas, cls).setUpClass() + cls.engine = 'numexpr' + cls.parser = 'pandas' + tm.skip_if_no_ne() + + def setUp(self): + self.frame = DataFrame(randn(10, 3), columns=list('abc')) + + def tearDown(self): + del self.frame + + def test_simple_expr(self): + res = self.frame.eval('a + b', engine=self.engine, parser=self.parser) + expect = self.frame.a + self.frame.b + assert_series_equal(res, expect) + + def test_bool_arith_expr(self): + res = self.frame.eval('a[a < 1] + b', engine=self.engine, + parser=self.parser) + expect = self.frame.a[self.frame.a < 1] + self.frame.b + assert_series_equal(res, expect) + + def test_invalid_type_for_operator_raises(self): + df = DataFrame({'a': [1, 2], 'b': ['c', 'd']}) + ops = '+', '-', '*', '/' + for op in ops: + with tm.assertRaisesRegexp(TypeError, + "unsupported operand type\(s\) for " + ".+: '.+' and '.+'"): + df.eval('a {0} b'.format(op), engine=self.engine, + parser=self.parser) + + +class 
TestDataFrameEvalNumExprPython(TestDataFrameEvalNumExprPandas): + + @classmethod + def setUpClass(cls): + super(TestDataFrameEvalNumExprPython, cls).setUpClass() + cls.engine = 'numexpr' + cls.parser = 'python' + tm.skip_if_no_ne(cls.engine) + + +class TestDataFrameEvalPythonPandas(TestDataFrameEvalNumExprPandas): + + @classmethod + def setUpClass(cls): + super(TestDataFrameEvalPythonPandas, cls).setUpClass() + cls.engine = 'python' + cls.parser = 'pandas' + + +class TestDataFrameEvalPythonPython(TestDataFrameEvalNumExprPython): + + @classmethod + def setUpClass(cls): + super(TestDataFrameEvalPythonPython, cls).setUpClass() + cls.engine = cls.parser = 'python' + + +if __name__ == '__main__': + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], + exit=False) diff --git a/pandas/tests/frame/test_replace.py b/pandas/tests/frame/test_replace.py new file mode 100644 index 0000000000000..bed0e0623ace0 --- /dev/null +++ b/pandas/tests/frame/test_replace.py @@ -0,0 +1,1055 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime +import re + +from pandas.compat import (zip, range, lrange, StringIO) +from pandas import (DataFrame, Series, Index, date_range, compat, + Timestamp) +import pandas as pd + +from numpy import nan +import numpy as np + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameReplace(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_replace_inplace(self): + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + + tsframe = self.tsframe.copy() + tsframe.replace(nan, 0, inplace=True) + assert_frame_equal(tsframe, self.tsframe.fillna(0)) + + self.assertRaises(TypeError, self.tsframe.replace, nan, inplace=True) + self.assertRaises(TypeError, self.tsframe.replace, nan) + + # mixed type + self.mixed_frame.ix[5:20, 'foo'] = nan + 
self.mixed_frame.ix[-10:, 'A'] = nan + + result = self.mixed_frame.replace(np.nan, 0) + expected = self.mixed_frame.fillna(value=0) + assert_frame_equal(result, expected) + + tsframe = self.tsframe.copy() + tsframe.replace([nan], [0], inplace=True) + assert_frame_equal(tsframe, self.tsframe.fillna(0)) + + def test_regex_replace_scalar(self): + obj = {'a': list('ab..'), 'b': list('efgh')} + dfobj = DataFrame(obj) + mix = {'a': lrange(4), 'b': list('ab..')} + dfmix = DataFrame(mix) + + # simplest cases + # regex -> value + # obj frame + res = dfobj.replace(r'\s*\.\s*', nan, regex=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.replace(r'\s*\.\s*', nan, regex=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + # everything with compiled regexs as well + res = dfobj.replace(re.compile(r'\s*\.\s*'), nan, regex=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.replace(re.compile(r'\s*\.\s*'), nan, regex=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1') + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1') + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + res = dfmix.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1') + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + 
assert_frame_equal(res, expec) + + res = dfmix.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1') + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + def test_regex_replace_scalar_inplace(self): + obj = {'a': list('ab..'), 'b': list('efgh')} + dfobj = DataFrame(obj) + mix = {'a': lrange(4), 'b': list('ab..')} + dfmix = DataFrame(mix) + + # simplest cases + # regex -> value + # obj frame + res = dfobj.copy() + res.replace(r'\s*\.\s*', nan, regex=True, inplace=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.copy() + res.replace(r'\s*\.\s*', nan, regex=True, inplace=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.copy() + res.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True, inplace=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + res.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True, inplace=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + # everything with compiled regexs as well + res = dfobj.copy() + res.replace(re.compile(r'\s*\.\s*'), nan, regex=True, inplace=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.copy() + res.replace(re.compile(r'\s*\.\s*'), nan, regex=True, inplace=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.copy() + res.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1', regex=True, + inplace=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + res.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1', regex=True, + inplace=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + 
+ res = dfobj.copy() + res.replace(regex=r'\s*\.\s*', value=nan, inplace=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.copy() + res.replace(regex=r'\s*\.\s*', value=nan, inplace=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.copy() + res.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1', inplace=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + res.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1', inplace=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + # everything with compiled regexs as well + res = dfobj.copy() + res.replace(regex=re.compile(r'\s*\.\s*'), value=nan, inplace=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.copy() + res.replace(regex=re.compile(r'\s*\.\s*'), value=nan, inplace=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.copy() + res.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1', + inplace=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + res.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1', + inplace=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + def test_regex_replace_list_obj(self): + obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} + dfobj = DataFrame(obj) + + # lists of regexes and values + # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] + to_replace_res = [r'\s*\.\s*', r'e|f|g'] + values = [nan, 'crap'] + res = dfobj.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': ['a', 'b', nan, nan], 'b': ['crap'] * 3 + + ['h'], 'c': ['h', 'crap', 'l', 'o']}) + 
assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] + to_replace_res = [r'\s*(\.)\s*', r'(e|f|g)'] + values = [r'\1\1', r'\1_crap'] + res = dfobj.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e_crap', + 'f_crap', + 'g_crap', 'h'], + 'c': ['h', 'e_crap', 'l', 'o']}) + + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN + # or vN)] + to_replace_res = [r'\s*(\.)\s*', r'e'] + values = [r'\1\1', r'crap'] + res = dfobj.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', + 'h'], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + to_replace_res = [r'\s*(\.)\s*', r'e'] + values = [r'\1\1', r'crap'] + res = dfobj.replace(value=values, regex=to_replace_res) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', + 'h'], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + def test_regex_replace_list_obj_inplace(self): + # same as above with inplace=True + # lists of regexes and values + obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} + dfobj = DataFrame(obj) + + # lists of regexes and values + # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] + to_replace_res = [r'\s*\.\s*', r'e|f|g'] + values = [nan, 'crap'] + res = dfobj.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': ['a', 'b', nan, nan], 'b': ['crap'] * 3 + + ['h'], 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] + to_replace_res = [r'\s*(\.)\s*', r'(e|f|g)'] + values = [r'\1\1', r'\1_crap'] + res = dfobj.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e_crap', + 'f_crap', + 'g_crap', 'h'], + 'c': ['h', 'e_crap', 'l', 'o']}) + + 
assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN + # or vN)] + to_replace_res = [r'\s*(\.)\s*', r'e'] + values = [r'\1\1', r'crap'] + res = dfobj.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', + 'h'], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + to_replace_res = [r'\s*(\.)\s*', r'e'] + values = [r'\1\1', r'crap'] + res = dfobj.copy() + res.replace(value=values, regex=to_replace_res, inplace=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', + 'h'], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + def test_regex_replace_list_mixed(self): + # mixed frame to make sure this doesn't break things + mix = {'a': lrange(4), 'b': list('ab..')} + dfmix = DataFrame(mix) + + # lists of regexes and values + # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] + to_replace_res = [r'\s*\.\s*', r'a'] + values = [nan, 'crap'] + mix2 = {'a': lrange(4), 'b': list('ab..'), 'c': list('halo')} + dfmix2 = DataFrame(mix2) + res = dfmix2.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': mix2['a'], 'b': ['crap', 'b', nan, nan], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] + to_replace_res = [r'\s*(\.)\s*', r'(a|b)'] + values = [r'\1\1', r'\1_crap'] + res = dfmix.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['a_crap', 'b_crap', '..', + '..']}) + + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN + # or vN)] + to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] + values = [r'\1\1', r'crap', r'\1_crap'] + res = dfmix.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) + assert_frame_equal(res, expec) + + 
to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] + values = [r'\1\1', r'crap', r'\1_crap'] + res = dfmix.replace(regex=to_replace_res, value=values) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) + assert_frame_equal(res, expec) + + def test_regex_replace_list_mixed_inplace(self): + mix = {'a': lrange(4), 'b': list('ab..')} + dfmix = DataFrame(mix) + # the same inplace + # lists of regexes and values + # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] + to_replace_res = [r'\s*\.\s*', r'a'] + values = [nan, 'crap'] + res = dfmix.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b', nan, nan]}) + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] + to_replace_res = [r'\s*(\.)\s*', r'(a|b)'] + values = [r'\1\1', r'\1_crap'] + res = dfmix.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['a_crap', 'b_crap', '..', + '..']}) + + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN + # or vN)] + to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] + values = [r'\1\1', r'crap', r'\1_crap'] + res = dfmix.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) + assert_frame_equal(res, expec) + + to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] + values = [r'\1\1', r'crap', r'\1_crap'] + res = dfmix.copy() + res.replace(regex=to_replace_res, value=values, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) + assert_frame_equal(res, expec) + + def test_regex_replace_dict_mixed(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + dfmix = DataFrame(mix) + + # dicts + # single dict {re1: v1}, search the whole frame + # need test for this... 
+ + # list of dicts {re1: v1, re2: v2, ..., re3: v3}, search the whole + # frame + res = dfmix.replace({'b': r'\s*\.\s*'}, {'b': nan}, regex=True) + res2 = dfmix.copy() + res2.replace({'b': r'\s*\.\s*'}, {'b': nan}, inplace=True, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + # list of dicts {re1: re11, re2: re12, ..., reN: re1N}, search the + # whole frame + res = dfmix.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, regex=True) + res2 = dfmix.copy() + res2.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, inplace=True, + regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', '.ty', '.ty'], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + res = dfmix.replace(regex={'b': r'\s*(\.)\s*'}, value={'b': r'\1ty'}) + res2 = dfmix.copy() + res2.replace(regex={'b': r'\s*(\.)\s*'}, value={'b': r'\1ty'}, + inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', '.ty', '.ty'], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + # scalar -> dict + # to_replace regex, {value: value} + expec = DataFrame({'a': mix['a'], 'b': [nan, 'b', '.', '.'], 'c': + mix['c']}) + res = dfmix.replace('a', {'b': nan}, regex=True) + res2 = dfmix.copy() + res2.replace('a', {'b': nan}, regex=True, inplace=True) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + res = dfmix.replace('a', {'b': nan}, regex=True) + res2 = dfmix.copy() + res2.replace(regex='a', value={'b': nan}, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': [nan, 'b', '.', '.'], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + def test_regex_replace_dict_nested(self): + # nested dicts will not work until this is implemented for Series + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + dfmix = DataFrame(mix) + res = dfmix.replace({'b': 
{r'\s*\.\s*': nan}}, regex=True) + res2 = dfmix.copy() + res4 = dfmix.copy() + res2.replace({'b': {r'\s*\.\s*': nan}}, inplace=True, regex=True) + res3 = dfmix.replace(regex={'b': {r'\s*\.\s*': nan}}) + res4.replace(regex={'b': {r'\s*\.\s*': nan}}, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + assert_frame_equal(res4, expec) + + def test_regex_replace_dict_nested_gh4115(self): + df = pd.DataFrame({'Type': ['Q', 'T', 'Q', 'Q', 'T'], 'tmp': 2}) + expected = DataFrame({'Type': [0, 1, 0, 0, 1], 'tmp': 2}) + result = df.replace({'Type': {'Q': 0, 'T': 1}}) + assert_frame_equal(result, expected) + + def test_regex_replace_list_to_scalar(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + expec = DataFrame({'a': mix['a'], 'b': np.array([nan] * 4), + 'c': [nan, nan, nan, 'd']}) + + res = df.replace([r'\s*\.\s*', 'a|b'], nan, regex=True) + res2 = df.copy() + res3 = df.copy() + res2.replace([r'\s*\.\s*', 'a|b'], nan, regex=True, inplace=True) + res3.replace(regex=[r'\s*\.\s*', 'a|b'], value=nan, inplace=True) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + + def test_regex_replace_str_to_numeric(self): + # what happens when you try to replace a numeric value with a regex? 
+ mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + res = df.replace(r'\s*\.\s*', 0, regex=True) + res2 = df.copy() + res2.replace(r'\s*\.\s*', 0, inplace=True, regex=True) + res3 = df.copy() + res3.replace(regex=r'\s*\.\s*', value=0, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', 0, 0], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + + def test_regex_replace_regex_list_to_numeric(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + res = df.replace([r'\s*\.\s*', 'b'], 0, regex=True) + res2 = df.copy() + res2.replace([r'\s*\.\s*', 'b'], 0, regex=True, inplace=True) + res3 = df.copy() + res3.replace(regex=[r'\s*\.\s*', 'b'], value=0, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 0, 0, 0], 'c': ['a', 0, + nan, + 'd']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + + def test_regex_replace_series_of_regexes(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + s1 = Series({'b': r'\s*\.\s*'}) + s2 = Series({'b': nan}) + res = df.replace(s1, s2, regex=True) + res2 = df.copy() + res2.replace(s1, s2, inplace=True, regex=True) + res3 = df.copy() + res3.replace(regex=s1, value=s2, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + + def test_regex_replace_numeric_to_object_conversion(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + expec = DataFrame({'a': ['a', 1, 2, 3], 'b': mix['b'], 'c': mix['c']}) + res = df.replace(0, 'a') + assert_frame_equal(res, expec) + self.assertEqual(res.a.dtype, np.object_) + + def test_replace_regex_metachar(self): + metachars = '[]', '()', '\d', 
'\w', '\s' + + for metachar in metachars: + df = DataFrame({'a': [metachar, 'else']}) + result = df.replace({'a': {metachar: 'paren'}}) + expected = DataFrame({'a': ['paren', 'else']}) + assert_frame_equal(result, expected) + + def test_replace(self): + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + + zero_filled = self.tsframe.replace(nan, -1e8) + assert_frame_equal(zero_filled, self.tsframe.fillna(-1e8)) + assert_frame_equal(zero_filled.replace(-1e8, nan), self.tsframe) + + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + self.tsframe['B'][:5] = -1e8 + + # empty + df = DataFrame(index=['a', 'b']) + assert_frame_equal(df, df.replace(5, 7)) + + # GH 11698 + # test for mixed data types. + df = pd.DataFrame([('-', pd.to_datetime('20150101')), + ('a', pd.to_datetime('20150102'))]) + df1 = df.replace('-', np.nan) + expected_df = pd.DataFrame([(np.nan, pd.to_datetime('20150101')), + ('a', pd.to_datetime('20150102'))]) + assert_frame_equal(df1, expected_df) + + def test_replace_list(self): + obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} + dfobj = DataFrame(obj) + + # lists of regexes and values + # list of [v1, v2, ..., vN] -> [v1, v2, ..., vN] + to_replace_res = [r'.', r'e'] + values = [nan, 'crap'] + res = dfobj.replace(to_replace_res, values) + expec = DataFrame({'a': ['a', 'b', nan, nan], + 'b': ['crap', 'f', 'g', 'h'], 'c': ['h', 'crap', + 'l', 'o']}) + assert_frame_equal(res, expec) + + # list of [v1, v2, ..., vN] -> [v1, v2, .., vN] + to_replace_res = [r'.', r'f'] + values = [r'..', r'crap'] + res = dfobj.replace(to_replace_res, values) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e', 'crap', 'g', + 'h'], + 'c': ['h', 'e', 'l', 'o']}) + + assert_frame_equal(res, expec) + + def test_replace_series_dict(self): + # from GH 3064 + df = DataFrame({'zero': {'a': 0.0, 'b': 1}, 'one': {'a': 2.0, 'b': 0}}) + result = df.replace(0, {'zero': 0.5, 'one': 1.0}) + expected = DataFrame( + {'zero': {'a': 0.5, 'b': 1}, 'one': 
{'a': 2.0, 'b': 1.0}}) + assert_frame_equal(result, expected) + + result = df.replace(0, df.mean()) + assert_frame_equal(result, expected) + + # series to series/dict + df = DataFrame({'zero': {'a': 0.0, 'b': 1}, 'one': {'a': 2.0, 'b': 0}}) + s = Series({'zero': 0.0, 'one': 2.0}) + result = df.replace(s, {'zero': 0.5, 'one': 1.0}) + expected = DataFrame( + {'zero': {'a': 0.5, 'b': 1}, 'one': {'a': 1.0, 'b': 0.0}}) + assert_frame_equal(result, expected) + + result = df.replace(s, df.mean()) + assert_frame_equal(result, expected) + + def test_replace_convert(self): + # gh 3907 + df = DataFrame([['foo', 'bar', 'bah'], ['bar', 'foo', 'bah']]) + m = {'foo': 1, 'bar': 2, 'bah': 3} + rep = df.replace(m) + expec = Series([np.int64] * 3) + res = rep.dtypes + assert_series_equal(expec, res) + + def test_replace_mixed(self): + self.mixed_frame.ix[5:20, 'foo'] = nan + self.mixed_frame.ix[-10:, 'A'] = nan + + result = self.mixed_frame.replace(np.nan, -18) + expected = self.mixed_frame.fillna(value=-18) + assert_frame_equal(result, expected) + assert_frame_equal(result.replace(-18, nan), self.mixed_frame) + + result = self.mixed_frame.replace(np.nan, -1e8) + expected = self.mixed_frame.fillna(value=-1e8) + assert_frame_equal(result, expected) + assert_frame_equal(result.replace(-1e8, nan), self.mixed_frame) + + # int block upcasting + df = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0, 1], dtype='int64')}) + expected = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0.5, 1], dtype='float64')}) + result = df.replace(0, 0.5) + assert_frame_equal(result, expected) + + df.replace(0, 0.5, inplace=True) + assert_frame_equal(df, expected) + + # int block splitting + df = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0, 1], dtype='int64'), + 'C': Series([1, 2], dtype='int64')}) + expected = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0.5, 1], dtype='float64'), + 'C': Series([1, 2], dtype='int64')}) 
+ result = df.replace(0, 0.5) + assert_frame_equal(result, expected) + + # to object block upcasting + df = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0, 1], dtype='int64')}) + expected = DataFrame({'A': Series([1, 'foo'], dtype='object'), + 'B': Series([0, 1], dtype='int64')}) + result = df.replace(2, 'foo') + assert_frame_equal(result, expected) + + expected = DataFrame({'A': Series(['foo', 'bar'], dtype='object'), + 'B': Series([0, 'foo'], dtype='object')}) + result = df.replace([1, 2], ['foo', 'bar']) + assert_frame_equal(result, expected) + + # test case from + df = DataFrame({'A': Series([3, 0], dtype='int64'), + 'B': Series([0, 3], dtype='int64')}) + result = df.replace(3, df.mean().to_dict()) + expected = df.copy().astype('float64') + m = df.mean() + expected.iloc[0, 0] = m[0] + expected.iloc[1, 1] = m[1] + assert_frame_equal(result, expected) + + def test_replace_simple_nested_dict(self): + df = DataFrame({'col': range(1, 5)}) + expected = DataFrame({'col': ['a', 2, 3, 'b']}) + + result = df.replace({'col': {1: 'a', 4: 'b'}}) + assert_frame_equal(expected, result) + + # in this case, should be the same as the not nested version + result = df.replace({1: 'a', 4: 'b'}) + assert_frame_equal(expected, result) + + def test_replace_simple_nested_dict_with_nonexistent_value(self): + df = DataFrame({'col': range(1, 5)}) + expected = DataFrame({'col': ['a', 2, 3, 'b']}) + + result = df.replace({-1: '-', 1: 'a', 4: 'b'}) + assert_frame_equal(expected, result) + + result = df.replace({'col': {-1: '-', 1: 'a', 4: 'b'}}) + assert_frame_equal(expected, result) + + def test_replace_value_is_none(self): + self.assertRaises(TypeError, self.tsframe.replace, nan) + orig_value = self.tsframe.iloc[0, 0] + orig2 = self.tsframe.iloc[1, 0] + + self.tsframe.iloc[0, 0] = nan + self.tsframe.iloc[1, 0] = 1 + + result = self.tsframe.replace(to_replace={nan: 0}) + expected = self.tsframe.T.replace(to_replace={nan: 0}).T + assert_frame_equal(result, expected) + 
+ result = self.tsframe.replace(to_replace={nan: 0, 1: -1e8}) + tsframe = self.tsframe.copy() + tsframe.iloc[0, 0] = 0 + tsframe.iloc[1, 0] = -1e8 + expected = tsframe + assert_frame_equal(expected, result) + self.tsframe.iloc[0, 0] = orig_value + self.tsframe.iloc[1, 0] = orig2 + + def test_replace_for_new_dtypes(self): + + # dtypes + tsframe = self.tsframe.copy().astype(np.float32) + tsframe['A'][:5] = nan + tsframe['A'][-5:] = nan + + zero_filled = tsframe.replace(nan, -1e8) + assert_frame_equal(zero_filled, tsframe.fillna(-1e8)) + assert_frame_equal(zero_filled.replace(-1e8, nan), tsframe) + + tsframe['A'][:5] = nan + tsframe['A'][-5:] = nan + tsframe['B'][:5] = -1e8 + + b = tsframe['B'] + b[b == -1e8] = nan + tsframe['B'] = b + result = tsframe.fillna(method='bfill') + assert_frame_equal(result, tsframe.fillna(method='bfill')) + + def test_replace_dtypes(self): + # int + df = DataFrame({'ints': [1, 2, 3]}) + result = df.replace(1, 0) + expected = DataFrame({'ints': [0, 2, 3]}) + assert_frame_equal(result, expected) + + df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int32) + result = df.replace(1, 0) + expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int32) + assert_frame_equal(result, expected) + + df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int16) + result = df.replace(1, 0) + expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int16) + assert_frame_equal(result, expected) + + # bools + df = DataFrame({'bools': [True, False, True]}) + result = df.replace(False, True) + self.assertTrue(result.values.all()) + + # complex blocks + df = DataFrame({'complex': [1j, 2j, 3j]}) + result = df.replace(1j, 0j) + expected = DataFrame({'complex': [0j, 2j, 3j]}) + assert_frame_equal(result, expected) + + # datetime blocks + prev = datetime.today() + now = datetime.today() + df = DataFrame({'datetime64': Index([prev, now, prev])}) + result = df.replace(prev, now) + expected = DataFrame({'datetime64': Index([now] * 3)}) + assert_frame_equal(result, expected) + + def 
test_replace_input_formats(self): + # both dicts + to_rep = {'A': np.nan, 'B': 0, 'C': ''} + values = {'A': 0, 'B': -1, 'C': 'missing'} + df = DataFrame({'A': [np.nan, 0, np.inf], 'B': [0, 2, 5], + 'C': ['', 'asdf', 'fd']}) + filled = df.replace(to_rep, values) + expected = {} + for k, v in compat.iteritems(df): + expected[k] = v.replace(to_rep[k], values[k]) + assert_frame_equal(filled, DataFrame(expected)) + + result = df.replace([0, 2, 5], [5, 2, 0]) + expected = DataFrame({'A': [np.nan, 5, np.inf], 'B': [5, 2, 0], + 'C': ['', 'asdf', 'fd']}) + assert_frame_equal(result, expected) + + # dict to scalar + filled = df.replace(to_rep, 0) + expected = {} + for k, v in compat.iteritems(df): + expected[k] = v.replace(to_rep[k], 0) + assert_frame_equal(filled, DataFrame(expected)) + + self.assertRaises(TypeError, df.replace, to_rep, [np.nan, 0, '']) + + # scalar to dict + values = {'A': 0, 'B': -1, 'C': 'missing'} + df = DataFrame({'A': [np.nan, 0, np.nan], 'B': [0, 2, 5], + 'C': ['', 'asdf', 'fd']}) + filled = df.replace(np.nan, values) + expected = {} + for k, v in compat.iteritems(df): + expected[k] = v.replace(np.nan, values[k]) + assert_frame_equal(filled, DataFrame(expected)) + + # list to list + to_rep = [np.nan, 0, ''] + values = [-2, -1, 'missing'] + result = df.replace(to_rep, values) + expected = df.copy() + for i in range(len(to_rep)): + expected.replace(to_rep[i], values[i], inplace=True) + assert_frame_equal(result, expected) + + self.assertRaises(ValueError, df.replace, to_rep, values[1:]) + + # list to scalar + to_rep = [np.nan, 0, ''] + result = df.replace(to_rep, -1) + expected = df.copy() + for i in range(len(to_rep)): + expected.replace(to_rep[i], -1, inplace=True) + assert_frame_equal(result, expected) + + def test_replace_limit(self): + pass + + def test_replace_dict_no_regex(self): + answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3: + 'Disagree', 4: 'Strongly Disagree'}) + weights = {'Agree': 4, 'Disagree': 2, 'Neutral': 3, 
'Strongly Agree': + 5, 'Strongly Disagree': 1} + expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1}) + result = answer.replace(weights) + assert_series_equal(result, expected) + + def test_replace_series_no_regex(self): + answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3: + 'Disagree', 4: 'Strongly Disagree'}) + weights = Series({'Agree': 4, 'Disagree': 2, 'Neutral': 3, + 'Strongly Agree': 5, 'Strongly Disagree': 1}) + expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1}) + result = answer.replace(weights) + assert_series_equal(result, expected) + + def test_replace_dict_tuple_list_ordering_remains_the_same(self): + df = DataFrame(dict(A=[nan, 1])) + res1 = df.replace(to_replace={nan: 0, 1: -1e8}) + res2 = df.replace(to_replace=(1, nan), value=[-1e8, 0]) + res3 = df.replace(to_replace=[1, nan], value=[-1e8, 0]) + + expected = DataFrame({'A': [0, -1e8]}) + assert_frame_equal(res1, res2) + assert_frame_equal(res2, res3) + assert_frame_equal(res3, expected) + + def test_replace_doesnt_replace_without_regex(self): + raw = """fol T_opp T_Dir T_Enh + 0 1 0 0 vo + 1 2 vr 0 0 + 2 2 0 0 0 + 3 3 0 bt 0""" + df = pd.read_csv(StringIO(raw), sep=r'\s+') + res = df.replace({'\D': 1}) + assert_frame_equal(df, res) + + def test_replace_bool_with_string(self): + df = DataFrame({'a': [True, False], 'b': list('ab')}) + result = df.replace(True, 'a') + expected = DataFrame({'a': ['a', False], 'b': df.b}) + assert_frame_equal(result, expected) + + def test_replace_pure_bool_with_string_no_op(self): + df = DataFrame(np.random.rand(2, 2) > 0.5) + result = df.replace('asdf', 'fdsa') + assert_frame_equal(df, result) + + def test_replace_bool_with_bool(self): + df = DataFrame(np.random.rand(2, 2) > 0.5) + result = df.replace(False, True) + expected = DataFrame(np.ones((2, 2), dtype=bool)) + assert_frame_equal(result, expected) + + def test_replace_with_dict_with_bool_keys(self): + df = DataFrame({0: [True, False], 1: [False, True]}) + with tm.assertRaisesRegexp(TypeError, 'Cannot 
compare types .+'): + df.replace({'asdf': 'asdb', True: 'yes'}) + + def test_replace_truthy(self): + df = DataFrame({'a': [True, True]}) + r = df.replace([np.inf, -np.inf], np.nan) + e = df + assert_frame_equal(r, e) + + def test_replace_int_to_int_chain(self): + df = DataFrame({'a': lrange(1, 5)}) + with tm.assertRaisesRegexp(ValueError, "Replacement not allowed .+"): + df.replace({'a': dict(zip(range(1, 5), range(2, 6)))}) + + def test_replace_str_to_str_chain(self): + a = np.arange(1, 5) + astr = a.astype(str) + bstr = np.arange(2, 6).astype(str) + df = DataFrame({'a': astr}) + with tm.assertRaisesRegexp(ValueError, "Replacement not allowed .+"): + df.replace({'a': dict(zip(astr, bstr))}) + + def test_replace_swapping_bug(self): + df = pd.DataFrame({'a': [True, False, True]}) + res = df.replace({'a': {True: 'Y', False: 'N'}}) + expect = pd.DataFrame({'a': ['Y', 'N', 'Y']}) + assert_frame_equal(res, expect) + + df = pd.DataFrame({'a': [0, 1, 0]}) + res = df.replace({'a': {0: 'Y', 1: 'N'}}) + expect = pd.DataFrame({'a': ['Y', 'N', 'Y']}) + assert_frame_equal(res, expect) + + def test_replace_period(self): + d = { + 'fname': { + 'out_augmented_AUG_2011.json': + pd.Period(year=2011, month=8, freq='M'), + 'out_augmented_JAN_2011.json': + pd.Period(year=2011, month=1, freq='M'), + 'out_augmented_MAY_2012.json': + pd.Period(year=2012, month=5, freq='M'), + 'out_augmented_SUBSIDY_WEEK.json': + pd.Period(year=2011, month=4, freq='M'), + 'out_augmented_AUG_2012.json': + pd.Period(year=2012, month=8, freq='M'), + 'out_augmented_MAY_2011.json': + pd.Period(year=2011, month=5, freq='M'), + 'out_augmented_SEP_2013.json': + pd.Period(year=2013, month=9, freq='M')}} + + df = pd.DataFrame(['out_augmented_AUG_2012.json', + 'out_augmented_SEP_2013.json', + 'out_augmented_SUBSIDY_WEEK.json', + 'out_augmented_MAY_2012.json', + 'out_augmented_MAY_2011.json', + 'out_augmented_AUG_2011.json', + 'out_augmented_JAN_2011.json'], columns=['fname']) + tm.assert_equal(set(df.fname.values), 
set(d['fname'].keys())) + expected = DataFrame({'fname': [d['fname'][k] + for k in df.fname.values]}) + result = df.replace(d) + assert_frame_equal(result, expected) + + def test_replace_datetime(self): + d = {'fname': + {'out_augmented_AUG_2011.json': pd.Timestamp('2011-08'), + 'out_augmented_JAN_2011.json': pd.Timestamp('2011-01'), + 'out_augmented_MAY_2012.json': pd.Timestamp('2012-05'), + 'out_augmented_SUBSIDY_WEEK.json': pd.Timestamp('2011-04'), + 'out_augmented_AUG_2012.json': pd.Timestamp('2012-08'), + 'out_augmented_MAY_2011.json': pd.Timestamp('2011-05'), + 'out_augmented_SEP_2013.json': pd.Timestamp('2013-09')}} + + df = pd.DataFrame(['out_augmented_AUG_2012.json', + 'out_augmented_SEP_2013.json', + 'out_augmented_SUBSIDY_WEEK.json', + 'out_augmented_MAY_2012.json', + 'out_augmented_MAY_2011.json', + 'out_augmented_AUG_2011.json', + 'out_augmented_JAN_2011.json'], columns=['fname']) + tm.assert_equal(set(df.fname.values), set(d['fname'].keys())) + expected = DataFrame({'fname': [d['fname'][k] + for k in df.fname.values]}) + result = df.replace(d) + assert_frame_equal(result, expected) + + def test_replace_datetimetz(self): + + # GH 11326 + # behaving poorly when presented with a datetime64[ns, tz] + df = DataFrame({'A': date_range('20130101', periods=3, + tz='US/Eastern'), + 'B': [0, np.nan, 2]}) + result = df.replace(np.nan, 1) + expected = DataFrame({'A': date_range('20130101', periods=3, + tz='US/Eastern'), + 'B': Series([0, 1, 2], dtype='float64')}) + assert_frame_equal(result, expected) + + result = df.fillna(1) + assert_frame_equal(result, expected) + + result = df.replace(0, np.nan) + expected = DataFrame({'A': date_range('20130101', periods=3, + tz='US/Eastern'), + 'B': [np.nan, np.nan, 2]}) + assert_frame_equal(result, expected) + + result = df.replace(Timestamp('20130102', tz='US/Eastern'), + Timestamp('20130104', tz='US/Eastern')) + expected = DataFrame({'A': [Timestamp('20130101', tz='US/Eastern'), + Timestamp('20130104', tz='US/Eastern'), + 
Timestamp('20130103', tz='US/Eastern')], + 'B': [0, np.nan, 2]}) + assert_frame_equal(result, expected) + + result = df.copy() + result.iloc[1, 0] = np.nan + result = result.replace( + {'A': pd.NaT}, Timestamp('20130104', tz='US/Eastern')) + assert_frame_equal(result, expected) + + # coerce to object + result = df.copy() + result.iloc[1, 0] = np.nan + result = result.replace( + {'A': pd.NaT}, Timestamp('20130104', tz='US/Pacific')) + expected = DataFrame({'A': [Timestamp('20130101', tz='US/Eastern'), + Timestamp('20130104', tz='US/Pacific'), + Timestamp('20130103', tz='US/Eastern')], + 'B': [0, np.nan, 2]}) + assert_frame_equal(result, expected) + + result = df.copy() + result.iloc[1, 0] = np.nan + result = result.replace({'A': np.nan}, Timestamp('20130104')) + expected = DataFrame({'A': [Timestamp('20130101', tz='US/Eastern'), + Timestamp('20130104'), + Timestamp('20130103', tz='US/Eastern')], + 'B': [0, np.nan, 2]}) + assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py new file mode 100644 index 0000000000000..a458445081be5 --- /dev/null +++ b/pandas/tests/frame/test_repr_info.py @@ -0,0 +1,377 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, timedelta +import re +import sys + +from numpy import nan +import numpy as np + +from pandas import (DataFrame, compat, option_context) +from pandas.compat import StringIO, lrange, u +import pandas.core.format as fmt +import pandas as pd + +from numpy.testing.decorators import slow +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +# Segregated collection of methods that require the BlockManager internal data +# structure + + +class TestDataFrameReprInfoEtc(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_repr_empty(self): + # empty + foo = repr(self.empty) # noqa + + # empty with index + frame = DataFrame(index=np.arange(1000)) + foo = 
repr(frame) # noqa + + def test_repr_mixed(self): + buf = StringIO() + + # mixed + foo = repr(self.mixed_frame) # noqa + self.mixed_frame.info(verbose=False, buf=buf) + + @slow + def test_repr_mixed_big(self): + # big mixed + biggie = DataFrame({'A': np.random.randn(200), + 'B': tm.makeStringIndex(200)}, + index=lrange(200)) + biggie.loc[:20, 'A'] = nan + biggie.loc[:20, 'B'] = nan + + foo = repr(biggie) # noqa + + def test_repr(self): + buf = StringIO() + + # small one + foo = repr(self.frame) + self.frame.info(verbose=False, buf=buf) + + # even smaller + self.frame.reindex(columns=['A']).info(verbose=False, buf=buf) + self.frame.reindex(columns=['A', 'B']).info(verbose=False, buf=buf) + + # exhausting cases in DataFrame.info + + # columns but no index + no_index = DataFrame(columns=[0, 1, 3]) + foo = repr(no_index) # noqa + + # no columns or index + self.empty.info(buf=buf) + + df = DataFrame(["a\n\r\tb"], columns=["a\n\r\td"], index=["a\n\r\tf"]) + self.assertFalse("\t" in repr(df)) + self.assertFalse("\r" in repr(df)) + self.assertFalse("a\n" in repr(df)) + + def test_repr_dimensions(self): + df = DataFrame([[1, 2, ], [3, 4]]) + with option_context('display.show_dimensions', True): + self.assertTrue("2 rows x 2 columns" in repr(df)) + + with option_context('display.show_dimensions', False): + self.assertFalse("2 rows x 2 columns" in repr(df)) + + with option_context('display.show_dimensions', 'truncate'): + self.assertFalse("2 rows x 2 columns" in repr(df)) + + @slow + def test_repr_big(self): + # big one + biggie = DataFrame(np.zeros((200, 4)), columns=lrange(4), + index=lrange(200)) + repr(biggie) + + def test_repr_unsortable(self): + # columns are not sortable + import warnings + warn_filters = warnings.filters + warnings.filterwarnings('ignore', + category=FutureWarning, + module=".*format") + + unsortable = DataFrame({'foo': [1] * 50, + datetime.today(): [1] * 50, + 'bar': ['bar'] * 50, + datetime.today() + timedelta(1): ['bar'] * 50}, + 
index=np.arange(50)) + repr(unsortable) + + fmt.set_option('display.precision', 3, 'display.column_space', 10) + repr(self.frame) + + fmt.set_option('display.max_rows', 10, 'display.max_columns', 2) + repr(self.frame) + + fmt.set_option('display.max_rows', 1000, 'display.max_columns', 1000) + repr(self.frame) + + self.reset_display_options() + + warnings.filters = warn_filters + + def test_repr_unicode(self): + uval = u('\u03c3\u03c3\u03c3\u03c3') + + # TODO(wesm): is this supposed to be used? + bval = uval.encode('utf-8') # noqa + + df = DataFrame({'A': [uval, uval]}) + + result = repr(df) + ex_top = ' A' + self.assertEqual(result.split('\n')[0].rstrip(), ex_top) + + df = DataFrame({'A': [uval, uval]}) + result = repr(df) + self.assertEqual(result.split('\n')[0].rstrip(), ex_top) + + def test_unicode_string_with_unicode(self): + df = DataFrame({'A': [u("\u05d0")]}) + + if compat.PY3: + str(df) + else: + compat.text_type(df) + + def test_bytestring_with_unicode(self): + df = DataFrame({'A': [u("\u05d0")]}) + if compat.PY3: + bytes(df) + else: + str(df) + + def test_very_wide_info_repr(self): + df = DataFrame(np.random.randn(10, 20), + columns=tm.rands_array(10, 20)) + repr(df) + + def test_repr_column_name_unicode_truncation_bug(self): + # #1906 + df = DataFrame({'Id': [7117434], + 'StringCol': ('Is it possible to modify drop plot code' + ' so that the output graph is displayed ' + 'in iphone simulator, Is it possible to ' + 'modify drop plot code so that the ' + 'output graph is \xe2\x80\xa8displayed ' + 'in iphone simulator.Now we are adding ' + 'the CSV file externally. 
I want to Call' + ' the File through the code..')}) + + result = repr(df) + self.assertIn('StringCol', result) + + def test_latex_repr(self): + result = r"""\begin{tabular}{llll} +\toprule +{} & 0 & 1 & 2 \\ +\midrule +0 & $\alpha$ & b & c \\ +1 & 1 & 2 & 3 \\ +\bottomrule +\end{tabular} +""" + with option_context("display.latex.escape", False): + df = DataFrame([[r'$\alpha$', 'b', 'c'], [1, 2, 3]]) + self.assertEqual(result, df._repr_latex_()) + + def test_info(self): + io = StringIO() + self.frame.info(buf=io) + self.tsframe.info(buf=io) + + frame = DataFrame(np.random.randn(5, 3)) + + import sys + sys.stdout = StringIO() + frame.info() + frame.info(verbose=False) + sys.stdout = sys.__stdout__ + + def test_info_wide(self): + from pandas import set_option, reset_option + io = StringIO() + df = DataFrame(np.random.randn(5, 101)) + df.info(buf=io) + + io = StringIO() + df.info(buf=io, max_cols=101) + rs = io.getvalue() + self.assertTrue(len(rs.splitlines()) > 100) + xp = rs + + set_option('display.max_info_columns', 101) + io = StringIO() + df.info(buf=io) + self.assertEqual(rs, xp) + reset_option('display.max_info_columns') + + def test_info_duplicate_columns(self): + io = StringIO() + + # it works! 
+ frame = DataFrame(np.random.randn(1500, 4), + columns=['a', 'a', 'b', 'b']) + frame.info(buf=io) + + def test_info_duplicate_columns_shows_correct_dtypes(self): + # GH11761 + io = StringIO() + + frame = DataFrame([[1, 2.0]], + columns=['a', 'a']) + frame.info(buf=io) + io.seek(0) + lines = io.readlines() + self.assertEqual('a 1 non-null int64\n', lines[3]) + self.assertEqual('a 1 non-null float64\n', lines[4]) + + def test_info_shows_column_dtypes(self): + dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]', + 'complex128', 'object', 'bool'] + data = {} + n = 10 + for i, dtype in enumerate(dtypes): + data[i] = np.random.randint(2, size=n).astype(dtype) + df = DataFrame(data) + buf = StringIO() + df.info(buf=buf) + res = buf.getvalue() + for i, dtype in enumerate(dtypes): + name = '%d %d non-null %s' % (i, n, dtype) + assert name in res + + def test_info_max_cols(self): + df = DataFrame(np.random.randn(10, 5)) + for len_, verbose in [(5, None), (5, False), (10, True)]: + # For verbose always ^ setting ^ summarize ^ full output + with option_context('max_info_columns', 4): + buf = StringIO() + df.info(buf=buf, verbose=verbose) + res = buf.getvalue() + self.assertEqual(len(res.strip().split('\n')), len_) + + for len_, verbose in [(10, None), (5, False), (10, True)]: + + # max_cols not exceeded + with option_context('max_info_columns', 5): + buf = StringIO() + df.info(buf=buf, verbose=verbose) + res = buf.getvalue() + self.assertEqual(len(res.strip().split('\n')), len_) + + for len_, max_cols in [(10, 5), (5, 4)]: + # setting truncates + with option_context('max_info_columns', 4): + buf = StringIO() + df.info(buf=buf, max_cols=max_cols) + res = buf.getvalue() + self.assertEqual(len(res.strip().split('\n')), len_) + + # setting wouldn't truncate + with option_context('max_info_columns', 5): + buf = StringIO() + df.info(buf=buf, max_cols=max_cols) + res = buf.getvalue() + self.assertEqual(len(res.strip().split('\n')), len_) + + def
test_info_memory_usage(self): + # Ensure memory usage is displayed, when asserted, on the last line + dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]', + 'complex128', 'object', 'bool'] + data = {} + n = 10 + for i, dtype in enumerate(dtypes): + data[i] = np.random.randint(2, size=n).astype(dtype) + df = DataFrame(data) + buf = StringIO() + # display memory usage case + df.info(buf=buf, memory_usage=True) + res = buf.getvalue().splitlines() + self.assertTrue("memory usage: " in res[-1]) + # do not display memory usage case + df.info(buf=buf, memory_usage=False) + res = buf.getvalue().splitlines() + self.assertTrue("memory usage: " not in res[-1]) + + df.info(buf=buf, memory_usage=True) + res = buf.getvalue().splitlines() + # memory usage is a lower bound, so print it as XYZ+ MB + self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1])) + + df.iloc[:, :5].info(buf=buf, memory_usage=True) + res = buf.getvalue().splitlines() + # excluded column with object dtype, so estimate is accurate + self.assertFalse(re.match(r"memory usage: [^+]+\+", res[-1])) + + df_with_object_index = pd.DataFrame({'a': [1]}, index=['foo']) + df_with_object_index.info(buf=buf, memory_usage=True) + res = buf.getvalue().splitlines() + self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1])) + + df_with_object_index.info(buf=buf, memory_usage='deep') + res = buf.getvalue().splitlines() + self.assertTrue(re.match(r"memory usage: [^+]+$", res[-1])) + + self.assertTrue(df_with_object_index.memory_usage(index=True, + deep=True).sum() + > df_with_object_index.memory_usage(index=True).sum()) + + df_object = pd.DataFrame({'a': ['a']}) + self.assertTrue(df_object.memory_usage(deep=True).sum() + > df_object.memory_usage().sum()) + + # Test a DataFrame with duplicate columns + dtypes = ['int64', 'int64', 'int64', 'float64'] + data = {} + n = 100 + for i, dtype in enumerate(dtypes): + data[i] = np.random.randint(2, size=n).astype(dtype) + df = DataFrame(data) + df.columns = dtypes +
# Ensure df size is as expected + df_size = df.memory_usage().sum() + exp_size = (len(dtypes) + 1) * n * 8 # (cols + index) * rows * bytes + self.assertEqual(df_size, exp_size) + # Ensure number of cols in memory_usage is the same as df + size_df = np.size(df.columns.values) + 1 # index=True; default + self.assertEqual(size_df, np.size(df.memory_usage())) + + # assert deep works only on object + self.assertEqual(df.memory_usage().sum(), + df.memory_usage(deep=True).sum()) + + # test for validity + DataFrame(1, index=['a'], columns=['A'] + ).memory_usage(index=True) + DataFrame(1, index=['a'], columns=['A'] + ).index.nbytes + df = DataFrame( + data=1, + index=pd.MultiIndex.from_product( + [['a'], range(1000)]), + columns=['A'] + ) + df.index.nbytes + df.memory_usage(index=True) + df.index.values.nbytes + + # sys.getsizeof will call the .memory_usage with + # deep=True, and add on some GC overhead + diff = df.memory_usage(deep=True).sum() - sys.getsizeof(df) + self.assertTrue(abs(diff) < 100) diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py new file mode 100644 index 0000000000000..c030a6a71c7b8 --- /dev/null +++ b/pandas/tests/frame/test_reshape.py @@ -0,0 +1,570 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime +import itertools + +from numpy.random import randn +from numpy import nan +import numpy as np + +from pandas.compat import u +from pandas import DataFrame, Index, Series, MultiIndex, date_range +import pandas as pd + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameReshape(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_pivot(self): + data = { + 'index': ['A', 'B', 'C', 'C', 'B', 'A'], + 'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'], + 'values': [1., 2., 3., 3., 2., 1.] 
+ } + + frame = DataFrame(data) + pivoted = frame.pivot( + index='index', columns='columns', values='values') + + expected = DataFrame({ + 'One': {'A': 1., 'B': 2., 'C': 3.}, + 'Two': {'A': 1., 'B': 2., 'C': 3.} + }) + expected.index.name, expected.columns.name = 'index', 'columns' + + assert_frame_equal(pivoted, expected) + + # name tracking + self.assertEqual(pivoted.index.name, 'index') + self.assertEqual(pivoted.columns.name, 'columns') + + # don't specify values + pivoted = frame.pivot(index='index', columns='columns') + self.assertEqual(pivoted.index.name, 'index') + self.assertEqual(pivoted.columns.names, (None, 'columns')) + + # pivot multiple columns + wp = tm.makePanel() + lp = wp.to_frame() + df = lp.reset_index() + assert_frame_equal(df.pivot('major', 'minor'), lp.unstack()) + + def test_pivot_duplicates(self): + data = DataFrame({'a': ['bar', 'bar', 'foo', 'foo', 'foo'], + 'b': ['one', 'two', 'one', 'one', 'two'], + 'c': [1., 2., 3., 3., 4.]}) + with assertRaisesRegexp(ValueError, 'duplicate entries'): + data.pivot('a', 'b', 'c') + + def test_pivot_empty(self): + df = DataFrame({}, columns=['a', 'b', 'c']) + result = df.pivot('a', 'b', 'c') + expected = DataFrame({}) + assert_frame_equal(result, expected, check_names=False) + + def test_pivot_integer_bug(self): + df = DataFrame(data=[("A", "1", "A1"), ("B", "2", "B2")]) + + result = df.pivot(index=1, columns=0, values=2) + repr(result) + self.assert_numpy_array_equal(result.columns, ['A', 'B']) + + def test_pivot_index_none(self): + # gh-3962 + data = { + 'index': ['A', 'B', 'C', 'C', 'B', 'A'], + 'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'], + 'values': [1., 2., 3., 3., 2., 1.] 
+ } + + frame = DataFrame(data).set_index('index') + result = frame.pivot(columns='columns', values='values') + expected = DataFrame({ + 'One': {'A': 1., 'B': 2., 'C': 3.}, + 'Two': {'A': 1., 'B': 2., 'C': 3.} + }) + + expected.index.name, expected.columns.name = 'index', 'columns' + assert_frame_equal(result, expected) + + # omit values + result = frame.pivot(columns='columns') + + expected.columns = pd.MultiIndex.from_tuples([('values', 'One'), + ('values', 'Two')], + names=[None, 'columns']) + expected.index.name = 'index' + assert_frame_equal(result, expected, check_names=False) + self.assertEqual(result.index.name, 'index',) + self.assertEqual(result.columns.names, (None, 'columns')) + expected.columns = expected.columns.droplevel(0) + + data = { + 'index': range(7), + 'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'], + 'values': [1., 2., 3., 3., 2., 1.] + } + + result = frame.pivot(columns='columns', values='values') + + expected.columns.name = 'columns' + assert_frame_equal(result, expected) + + def test_stack_unstack(self): + stacked = self.frame.stack() + stacked_df = DataFrame({'foo': stacked, 'bar': stacked}) + + unstacked = stacked.unstack() + unstacked_df = stacked_df.unstack() + + assert_frame_equal(unstacked, self.frame) + assert_frame_equal(unstacked_df['bar'], self.frame) + + unstacked_cols = stacked.unstack(0) + unstacked_cols_df = stacked_df.unstack(0) + assert_frame_equal(unstacked_cols.T, self.frame) + assert_frame_equal(unstacked_cols_df['bar'].T, self.frame) + + def test_stack_ints(self): + df = DataFrame( + np.random.randn(30, 27), + columns=MultiIndex.from_tuples( + list(itertools.product(range(3), repeat=3)) + ) + ) + assert_frame_equal( + df.stack(level=[1, 2]), + df.stack(level=1).stack(level=1) + ) + assert_frame_equal( + df.stack(level=[-2, -1]), + df.stack(level=1).stack(level=1) + ) + + df_named = df.copy() + df_named.columns.set_names(range(3), inplace=True) + assert_frame_equal( + df_named.stack(level=[1, 2]), + 
df_named.stack(level=1).stack(level=1) + ) + + def test_stack_mixed_levels(self): + columns = MultiIndex.from_tuples( + [('A', 'cat', 'long'), ('B', 'cat', 'long'), + ('A', 'dog', 'short'), ('B', 'dog', 'short')], + names=['exp', 'animal', 'hair_length'] + ) + df = DataFrame(randn(4, 4), columns=columns) + + animal_hair_stacked = df.stack(level=['animal', 'hair_length']) + exp_hair_stacked = df.stack(level=['exp', 'hair_length']) + + # GH #8584: Need to check that stacking works when a number + # is passed that is both a level name and in the range of + # the level numbers + df2 = df.copy() + df2.columns.names = ['exp', 'animal', 1] + assert_frame_equal(df2.stack(level=['animal', 1]), + animal_hair_stacked, check_names=False) + assert_frame_equal(df2.stack(level=['exp', 1]), + exp_hair_stacked, check_names=False) + + # When mixed types are passed and the ints are not level + # names, raise + self.assertRaises(ValueError, df2.stack, level=['animal', 0]) + + # GH #8584: Having 0 in the level names could raise a + # strange error about lexsort depth + df3 = df.copy() + df3.columns.names = ['exp', 'animal', 0] + assert_frame_equal(df3.stack(level=['animal', 0]), + animal_hair_stacked, check_names=False) + + def test_stack_int_level_names(self): + columns = MultiIndex.from_tuples( + [('A', 'cat', 'long'), ('B', 'cat', 'long'), + ('A', 'dog', 'short'), ('B', 'dog', 'short')], + names=['exp', 'animal', 'hair_length'] + ) + df = DataFrame(randn(4, 4), columns=columns) + + exp_animal_stacked = df.stack(level=['exp', 'animal']) + animal_hair_stacked = df.stack(level=['animal', 'hair_length']) + exp_hair_stacked = df.stack(level=['exp', 'hair_length']) + + df2 = df.copy() + df2.columns.names = [0, 1, 2] + assert_frame_equal(df2.stack(level=[1, 2]), animal_hair_stacked, + check_names=False) + assert_frame_equal(df2.stack(level=[0, 1]), exp_animal_stacked, + check_names=False) + assert_frame_equal(df2.stack(level=[0, 2]), exp_hair_stacked, + check_names=False) + + # 
Out-of-order int column names + df3 = df.copy() + df3.columns.names = [2, 0, 1] + assert_frame_equal(df3.stack(level=[0, 1]), animal_hair_stacked, + check_names=False) + assert_frame_equal(df3.stack(level=[2, 0]), exp_animal_stacked, + check_names=False) + assert_frame_equal(df3.stack(level=[2, 1]), exp_hair_stacked, + check_names=False) + + def test_unstack_bool(self): + df = DataFrame([False, False], + index=MultiIndex.from_arrays([['a', 'b'], ['c', 'l']]), + columns=['col']) + rs = df.unstack() + xp = DataFrame(np.array([[False, np.nan], [np.nan, False]], + dtype=object), + index=['a', 'b'], + columns=MultiIndex.from_arrays([['col', 'col'], + ['c', 'l']])) + assert_frame_equal(rs, xp) + + def test_unstack_level_binding(self): + # GH9856 + mi = pd.MultiIndex( + levels=[[u('foo'), u('bar')], [u('one'), u('two')], + [u('a'), u('b')]], + labels=[[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 1, 0]], + names=[u('first'), u('second'), u('third')]) + s = pd.Series(0, index=mi) + result = s.unstack([1, 2]).stack(0) + + expected_mi = pd.MultiIndex( + levels=[['foo', 'bar'], ['one', 'two']], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], + names=['first', 'second']) + + expected = pd.DataFrame(np.array([[np.nan, 0], + [0, np.nan], + [np.nan, 0], + [0, np.nan]], + dtype=np.float64), + index=expected_mi, + columns=pd.Index(['a', 'b'], name='third')) + + assert_frame_equal(result, expected) + + def test_unstack_to_series(self): + # check reversibility + data = self.frame.unstack() + + self.assertTrue(isinstance(data, Series)) + undo = data.unstack().T + assert_frame_equal(undo, self.frame) + + # check NA handling + data = DataFrame({'x': [1, 2, np.NaN], 'y': [3.0, 4, np.NaN]}) + data.index = Index(['a', 'b', 'c']) + result = data.unstack() + + midx = MultiIndex(levels=[['x', 'y'], ['a', 'b', 'c']], + labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]]) + expected = Series([1, 2, np.NaN, 3, 4, np.NaN], index=midx) + + assert_series_equal(result, expected) + + # check composability of unstack + 
old_data = data.copy() + for _ in range(4): + data = data.unstack() + assert_frame_equal(old_data, data) + + def test_unstack_dtypes(self): + + # GH 2929 + rows = [[1, 1, 3, 4], + [1, 2, 3, 4], + [2, 1, 3, 4], + [2, 2, 3, 4]] + + df = DataFrame(rows, columns=list('ABCD')) + result = df.get_dtype_counts() + expected = Series({'int64': 4}) + assert_series_equal(result, expected) + + # single dtype + df2 = df.set_index(['A', 'B']) + df3 = df2.unstack('B') + result = df3.get_dtype_counts() + expected = Series({'int64': 4}) + assert_series_equal(result, expected) + + # mixed + df2 = df.set_index(['A', 'B']) + df2['C'] = 3. + df3 = df2.unstack('B') + result = df3.get_dtype_counts() + expected = Series({'int64': 2, 'float64': 2}) + assert_series_equal(result, expected) + + df2['D'] = 'foo' + df3 = df2.unstack('B') + result = df3.get_dtype_counts() + expected = Series({'float64': 2, 'object': 2}) + assert_series_equal(result, expected) + + # GH7405 + for c, d in (np.zeros(5), np.zeros(5)), \ + (np.arange(5, dtype='f8'), np.arange(5, 10, dtype='f8')): + + df = DataFrame({'A': ['a'] * 5, 'C': c, 'D': d, + 'B': pd.date_range('2012-01-01', periods=5)}) + + right = df.iloc[:3].copy(deep=True) + + df = df.set_index(['A', 'B']) + df['D'] = df['D'].astype('int64') + + left = df.iloc[:3].unstack(0) + right = right.set_index(['A', 'B']).unstack(0) + right[('D', 'a')] = right[('D', 'a')].astype('int64') + + self.assertEqual(left.shape, (3, 2)) + assert_frame_equal(left, right) + + def test_unstack_non_unique_index_names(self): + idx = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')], + names=['c1', 'c1']) + df = DataFrame([1, 2], index=idx) + with tm.assertRaises(ValueError): + df.unstack('c1') + + with tm.assertRaises(ValueError): + df.T.stack('c1') + + def test_unstack_nan_index(self): # GH7466 + cast = lambda val: '{0:1}'.format('' if val != val else val) + nan = np.nan + + def verify(df): + mk_list = lambda a: list(a) if isinstance(a, tuple) else [a] + rows, cols = 
df.notnull().values.nonzero() + for i, j in zip(rows, cols): + left = sorted(df.iloc[i, j].split('.')) + right = mk_list(df.index[i]) + mk_list(df.columns[j]) + right = sorted(list(map(cast, right))) + self.assertEqual(left, right) + + df = DataFrame({'jim': ['a', 'b', nan, 'd'], + 'joe': ['w', 'x', 'y', 'z'], + 'jolie': ['a.w', 'b.x', ' .y', 'd.z']}) + + left = df.set_index(['jim', 'joe']).unstack()['jolie'] + right = df.set_index(['joe', 'jim']).unstack()['jolie'].T + assert_frame_equal(left, right) + + for idx in itertools.permutations(df.columns[:2]): + mi = df.set_index(list(idx)) + for lev in range(2): + udf = mi.unstack(level=lev) + self.assertEqual(udf.notnull().values.sum(), len(df)) + verify(udf['jolie']) + + df = DataFrame({'1st': ['d'] * 3 + [nan] * 5 + ['a'] * 2 + + ['c'] * 3 + ['e'] * 2 + ['b'] * 5, + '2nd': ['y'] * 2 + ['w'] * 3 + [nan] * 3 + + ['z'] * 4 + [nan] * 3 + ['x'] * 3 + [nan] * 2, + '3rd': [67, 39, 53, 72, 57, 80, 31, 18, 11, 30, 59, + 50, 62, 59, 76, 52, 14, 53, 60, 51]}) + + df['4th'], df['5th'] = \ + df.apply(lambda r: '.'.join(map(cast, r)), axis=1), \ + df.apply(lambda r: '.'.join(map(cast, r.iloc[::-1])), axis=1) + + for idx in itertools.permutations(['1st', '2nd', '3rd']): + mi = df.set_index(list(idx)) + for lev in range(3): + udf = mi.unstack(level=lev) + self.assertEqual(udf.notnull().values.sum(), 2 * len(df)) + for col in ['4th', '5th']: + verify(udf[col]) + + # GH7403 + df = pd.DataFrame( + {'A': list('aaaabbbb'), 'B': range(8), 'C': range(8)}) + df.iloc[3, 1] = np.NaN + left = df.set_index(['A', 'B']).unstack(0) + + vals = [[3, 0, 1, 2, nan, nan, nan, nan], + [nan, nan, nan, nan, 4, 5, 6, 7]] + vals = list(map(list, zip(*vals))) + idx = Index([nan, 0, 1, 2, 4, 5, 6, 7], name='B') + cols = MultiIndex(levels=[['C'], ['a', 'b']], + labels=[[0, 0], [0, 1]], + names=[None, 'A']) + + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + df = DataFrame({'A': list('aaaabbbb'), 'B': list(range(4)) * 2, 
+ 'C': range(8)}) + df.iloc[2, 1] = np.NaN + left = df.set_index(['A', 'B']).unstack(0) + + vals = [[2, nan], [0, 4], [1, 5], [nan, 6], [3, 7]] + cols = MultiIndex(levels=[['C'], ['a', 'b']], + labels=[[0, 0], [0, 1]], + names=[None, 'A']) + idx = Index([nan, 0, 1, 2, 3], name='B') + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + df = pd.DataFrame({'A': list('aaaabbbb'), 'B': list(range(4)) * 2, + 'C': range(8)}) + df.iloc[3, 1] = np.NaN + left = df.set_index(['A', 'B']).unstack(0) + + vals = [[3, nan], [0, 4], [1, 5], [2, 6], [nan, 7]] + cols = MultiIndex(levels=[['C'], ['a', 'b']], + labels=[[0, 0], [0, 1]], + names=[None, 'A']) + idx = Index([nan, 0, 1, 2, 3], name='B') + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + # GH7401 + df = pd.DataFrame({'A': list('aaaaabbbbb'), 'C': np.arange(10), + 'B': (date_range('2012-01-01', periods=5) + .tolist() * 2)}) + + df.iloc[3, 1] = np.NaN + left = df.set_index(['A', 'B']).unstack() + + vals = np.array([[3, 0, 1, 2, nan, 4], [nan, 5, 6, 7, 8, 9]]) + idx = Index(['a', 'b'], name='A') + cols = MultiIndex(levels=[['C'], date_range('2012-01-01', periods=5)], + labels=[[0, 0, 0, 0, 0, 0], [-1, 0, 1, 2, 3, 4]], + names=[None, 'B']) + + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + # GH4862 + vals = [['Hg', nan, nan, 680585148], + ['U', 0.0, nan, 680585148], + ['Pb', 7.07e-06, nan, 680585148], + ['Sn', 2.3614e-05, 0.0133, 680607017], + ['Ag', 0.0, 0.0133, 680607017], + ['Hg', -0.00015, 0.0133, 680607017]] + df = DataFrame(vals, columns=['agent', 'change', 'dosage', 's_id'], + index=[17263, 17264, 17265, 17266, 17267, 17268]) + + left = df.copy().set_index(['s_id', 'dosage', 'agent']).unstack() + + vals = [[nan, nan, 7.07e-06, nan, 0.0], + [0.0, -0.00015, nan, 2.3614e-05, nan]] + + idx = MultiIndex(levels=[[680585148, 680607017], [0.0133]], + labels=[[0, 1], [-1, 0]], + names=['s_id', 'dosage']) + + cols = 
MultiIndex(levels=[['change'], ['Ag', 'Hg', 'Pb', 'Sn', 'U']], + labels=[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4]], + names=[None, 'agent']) + + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + left = df.ix[17264:].copy().set_index(['s_id', 'dosage', 'agent']) + assert_frame_equal(left.unstack(), right) + + # GH9497 - multiple unstack with nulls + df = DataFrame({'1st': [1, 2, 1, 2, 1, 2], + '2nd': pd.date_range('2014-02-01', periods=6, + freq='D'), + 'jim': 100 + np.arange(6), + 'joe': (np.random.randn(6) * 10).round(2)}) + + df['3rd'] = df['2nd'] - pd.Timestamp('2014-02-02') + df.loc[1, '2nd'] = df.loc[3, '2nd'] = nan + df.loc[1, '3rd'] = df.loc[4, '3rd'] = nan + + left = df.set_index(['1st', '2nd', '3rd']).unstack(['2nd', '3rd']) + self.assertEqual(left.notnull().values.sum(), 2 * len(df)) + + for col in ['jim', 'joe']: + for _, r in df.iterrows(): + key = r['1st'], (col, r['2nd'], r['3rd']) + self.assertEqual(r[col], left.loc[key]) + + def test_stack_datetime_column_multiIndex(self): + # GH 8039 + t = datetime(2014, 1, 1) + df = DataFrame( + [1, 2, 3, 4], columns=MultiIndex.from_tuples([(t, 'A', 'B')])) + result = df.stack() + + eidx = MultiIndex.from_product([(0, 1, 2, 3), ('B',)]) + ecols = MultiIndex.from_tuples([(t, 'A')]) + expected = DataFrame([1, 2, 3, 4], index=eidx, columns=ecols) + assert_frame_equal(result, expected) + + def test_stack_partial_multiIndex(self): + # GH 8844 + def _test_stack_with_multiindex(multiindex): + df = DataFrame(np.arange(3 * len(multiindex)) + .reshape(3, len(multiindex)), + columns=multiindex) + for level in (-1, 0, 1, [0, 1], [1, 0]): + result = df.stack(level=level, dropna=False) + + if isinstance(level, int): + # Stacking a single level should not make any all-NaN rows, + # so df.stack(level=level, dropna=False) should be the same + # as df.stack(level=level, dropna=True). 
+ expected = df.stack(level=level, dropna=True) + if isinstance(expected, Series): + assert_series_equal(result, expected) + else: + assert_frame_equal(result, expected) + + df.columns = MultiIndex.from_tuples(df.columns.get_values(), + names=df.columns.names) + expected = df.stack(level=level, dropna=False) + if isinstance(expected, Series): + assert_series_equal(result, expected) + else: + assert_frame_equal(result, expected) + + full_multiindex = MultiIndex.from_tuples([('B', 'x'), ('B', 'z'), + ('A', 'y'), + ('C', 'x'), ('C', 'u')], + names=['Upper', 'Lower']) + for multiindex_columns in ([0, 1, 2, 3, 4], + [0, 1, 2, 3], [0, 1, 2, 4], + [0, 1, 2], [1, 2, 3], [2, 3, 4], + [0, 1], [0, 2], [0, 3], + [0], [2], [4]): + _test_stack_with_multiindex(full_multiindex[multiindex_columns]) + if len(multiindex_columns) > 1: + multiindex_columns.reverse() + _test_stack_with_multiindex( + full_multiindex[multiindex_columns]) + + df = DataFrame(np.arange(6).reshape(2, 3), + columns=full_multiindex[[0, 1, 3]]) + result = df.stack(dropna=False) + expected = DataFrame([[0, 2], [1, nan], [3, 5], [4, nan]], + index=MultiIndex( + levels=[[0, 1], ['u', 'x', 'y', 'z']], + labels=[[0, 0, 1, 1], + [1, 3, 1, 3]], + names=[None, 'Lower']), + columns=Index(['B', 'C'], name='Upper'), + dtype=df.dtypes[0]) + assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_sorting.py b/pandas/tests/frame/test_sorting.py new file mode 100644 index 0000000000000..ff2159f8b6f40 --- /dev/null +++ b/pandas/tests/frame/test_sorting.py @@ -0,0 +1,473 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +import numpy as np + +from pandas.compat import lrange +from pandas import (DataFrame, Series, MultiIndex, Timestamp, + date_range) + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameSorting(tm.TestCase, TestData): + 
+ + _multiprocess_can_split_ = True + + def test_sort_values(self): + # API for 9816 + + # sort_index + frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4], + columns=['A', 'B', 'C', 'D']) + + # 9816 deprecated + with tm.assert_produces_warning(FutureWarning): + frame.sort(columns='A') + with tm.assert_produces_warning(FutureWarning): + frame.sort() + + unordered = frame.ix[[3, 2, 4, 1]] + expected = unordered.sort_index() + + result = unordered.sort_index(axis=0) + assert_frame_equal(result, expected) + + unordered = frame.ix[:, [2, 1, 3, 0]] + expected = unordered.sort_index(axis=1) + + result = unordered.sort_index(axis=1) + assert_frame_equal(result, expected) + + # sortlevel + mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) + df = DataFrame([[1, 2], [3, 4]], mi) + + result = df.sort_index(level='A', sort_remaining=False) + expected = df.sortlevel('A', sort_remaining=False) + assert_frame_equal(result, expected) + + df = df.T + result = df.sort_index(level='A', axis=1, sort_remaining=False) + expected = df.sortlevel('A', axis=1, sort_remaining=False) + assert_frame_equal(result, expected) + + # MI sort, but no by + mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) + df = DataFrame([[1, 2], [3, 4]], mi) + result = df.sort_index(sort_remaining=False) + expected = df.sort_index() + assert_frame_equal(result, expected) + + def test_sort_index(self): + frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4], + columns=['A', 'B', 'C', 'D']) + + # axis=0 + unordered = frame.ix[[3, 2, 4, 1]] + sorted_df = unordered.sort_index(axis=0) + expected = frame + assert_frame_equal(sorted_df, expected) + + sorted_df = unordered.sort_index(ascending=False) + expected = frame[::-1] + assert_frame_equal(sorted_df, expected) + + # axis=1 + unordered = frame.ix[:, ['D', 'B', 'C', 'A']] + sorted_df = unordered.sort_index(axis=1) + expected = frame + assert_frame_equal(sorted_df, 
expected) + + sorted_df = unordered.sort_index(axis=1, ascending=False) + expected = frame.ix[:, ::-1] + assert_frame_equal(sorted_df, expected) + + # by column + sorted_df = frame.sort_values(by='A') + indexer = frame['A'].argsort().values + expected = frame.ix[frame.index[indexer]] + assert_frame_equal(sorted_df, expected) + + sorted_df = frame.sort_values(by='A', ascending=False) + indexer = indexer[::-1] + expected = frame.ix[frame.index[indexer]] + assert_frame_equal(sorted_df, expected) + + sorted_df = frame.sort_values(by='A', ascending=False) + assert_frame_equal(sorted_df, expected) + + # GH4839 + sorted_df = frame.sort_values(by=['A'], ascending=[False]) + assert_frame_equal(sorted_df, expected) + + # check for now + sorted_df = frame.sort_values(by='A') + assert_frame_equal(sorted_df, expected[::-1]) + expected = frame.sort_values(by='A') + assert_frame_equal(sorted_df, expected) + + expected = frame.sort_values(by=['A', 'B'], ascending=False) + sorted_df = frame.sort_values(by=['A', 'B']) + assert_frame_equal(sorted_df, expected[::-1]) + + self.assertRaises(ValueError, lambda: frame.sort_values( + by=['A', 'B'], axis=2, inplace=True)) + + msg = 'When sorting by column, axis must be 0' + with assertRaisesRegexp(ValueError, msg): + frame.sort_values(by='A', axis=1) + + msg = r'Length of ascending \(5\) != length of by \(2\)' + with assertRaisesRegexp(ValueError, msg): + frame.sort_values(by=['A', 'B'], axis=0, ascending=[True] * 5) + + def test_sort_index_categorical_index(self): + + df = (DataFrame({'A': np.arange(6, dtype='int64'), + 'B': Series(list('aabbca')) + .astype('category', categories=list('cab'))}) + .set_index('B')) + + result = df.sort_index() + expected = df.iloc[[4, 0, 1, 5, 2, 3]] + assert_frame_equal(result, expected) + + result = df.sort_index(ascending=False) + expected = df.iloc[[3, 2, 5, 1, 0, 4]] + assert_frame_equal(result, expected) + + def test_sort_nan(self): + # GH3917 + nan = np.nan + df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 
4], + 'B': [9, nan, 5, 2, 5, 4, 5]}) + + # sort one column only + expected = DataFrame( + {'A': [nan, 1, 1, 2, 4, 6, 8], + 'B': [5, 9, 2, nan, 5, 5, 4]}, + index=[2, 0, 3, 1, 6, 4, 5]) + sorted_df = df.sort_values(['A'], na_position='first') + assert_frame_equal(sorted_df, expected) + + expected = DataFrame( + {'A': [nan, 8, 6, 4, 2, 1, 1], + 'B': [5, 4, 5, 5, nan, 9, 2]}, + index=[2, 5, 4, 6, 1, 0, 3]) + sorted_df = df.sort_values(['A'], na_position='first', ascending=False) + assert_frame_equal(sorted_df, expected) + + # na_position='last', order + expected = DataFrame( + {'A': [1, 1, 2, 4, 6, 8, nan], + 'B': [2, 9, nan, 5, 5, 4, 5]}, + index=[3, 0, 1, 6, 4, 5, 2]) + sorted_df = df.sort_values(['A', 'B']) + assert_frame_equal(sorted_df, expected) + + # na_position='first', order + expected = DataFrame( + {'A': [nan, 1, 1, 2, 4, 6, 8], + 'B': [5, 2, 9, nan, 5, 5, 4]}, + index=[2, 3, 0, 1, 6, 4, 5]) + sorted_df = df.sort_values(['A', 'B'], na_position='first') + assert_frame_equal(sorted_df, expected) + + # na_position='first', not order + expected = DataFrame( + {'A': [nan, 1, 1, 2, 4, 6, 8], + 'B': [5, 9, 2, nan, 5, 5, 4]}, + index=[2, 0, 3, 1, 6, 4, 5]) + sorted_df = df.sort_values(['A', 'B'], ascending=[ + 1, 0], na_position='first') + assert_frame_equal(sorted_df, expected) + + # na_position='last', not order + expected = DataFrame( + {'A': [8, 6, 4, 2, 1, 1, nan], + 'B': [4, 5, 5, nan, 2, 9, 5]}, + index=[5, 4, 6, 1, 3, 0, 2]) + sorted_df = df.sort_values(['A', 'B'], ascending=[ + 0, 1], na_position='last') + assert_frame_equal(sorted_df, expected) + + # Test DataFrame with nan label + df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], + 'B': [9, nan, 5, 2, 5, 4, 5]}, + index=[1, 2, 3, 4, 5, 6, nan]) + + # NaN label, ascending=True, na_position='last' + sorted_df = df.sort_index( + kind='quicksort', ascending=True, na_position='last') + expected = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], + 'B': [9, nan, 5, 2, 5, 4, 5]}, + index=[1, 2, 3, 4, 5, 6, nan]) + 
assert_frame_equal(sorted_df, expected) + + # NaN label, ascending=True, na_position='first' + sorted_df = df.sort_index(na_position='first') + expected = DataFrame({'A': [4, 1, 2, nan, 1, 6, 8], + 'B': [5, 9, nan, 5, 2, 5, 4]}, + index=[nan, 1, 2, 3, 4, 5, 6]) + assert_frame_equal(sorted_df, expected) + + # NaN label, ascending=False, na_position='last' + sorted_df = df.sort_index(kind='quicksort', ascending=False) + expected = DataFrame({'A': [8, 6, 1, nan, 2, 1, 4], + 'B': [4, 5, 2, 5, nan, 9, 5]}, + index=[6, 5, 4, 3, 2, 1, nan]) + assert_frame_equal(sorted_df, expected) + + # NaN label, ascending=False, na_position='first' + sorted_df = df.sort_index( + kind='quicksort', ascending=False, na_position='first') + expected = DataFrame({'A': [4, 8, 6, 1, nan, 2, 1], + 'B': [5, 4, 5, 2, 5, nan, 9]}, + index=[nan, 6, 5, 4, 3, 2, 1]) + assert_frame_equal(sorted_df, expected) + + def test_stable_descending_sort(self): + # GH #6399 + df = DataFrame([[2, 'first'], [2, 'second'], [1, 'a'], [1, 'b']], + columns=['sort_col', 'order']) + sorted_df = df.sort_values(by='sort_col', kind='mergesort', + ascending=False) + assert_frame_equal(df, sorted_df) + + def test_stable_descending_multicolumn_sort(self): + nan = np.nan + df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], + 'B': [9, nan, 5, 2, 5, 4, 5]}) + # test stable mergesort + expected = DataFrame( + {'A': [nan, 8, 6, 4, 2, 1, 1], + 'B': [5, 4, 5, 5, nan, 2, 9]}, + index=[2, 5, 4, 6, 1, 3, 0]) + sorted_df = df.sort_values(['A', 'B'], ascending=[0, 1], + na_position='first', + kind='mergesort') + assert_frame_equal(sorted_df, expected) + + expected = DataFrame( + {'A': [nan, 8, 6, 4, 2, 1, 1], + 'B': [5, 4, 5, 5, nan, 9, 2]}, + index=[2, 5, 4, 6, 1, 0, 3]) + sorted_df = df.sort_values(['A', 'B'], ascending=[0, 0], + na_position='first', + kind='mergesort') + assert_frame_equal(sorted_df, expected) + + def test_sort_index_multicolumn(self): + import random + A = np.arange(5).repeat(20) + B = np.tile(np.arange(5), 20) + 
random.shuffle(A) + random.shuffle(B) + frame = DataFrame({'A': A, 'B': B, + 'C': np.random.randn(100)}) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + frame.sort_index(by=['A', 'B']) + result = frame.sort_values(by=['A', 'B']) + indexer = np.lexsort((frame['B'], frame['A'])) + expected = frame.take(indexer) + assert_frame_equal(result, expected) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + frame.sort_index(by=['A', 'B'], ascending=False) + result = frame.sort_values(by=['A', 'B'], ascending=False) + indexer = np.lexsort((frame['B'].rank(ascending=False), + frame['A'].rank(ascending=False))) + expected = frame.take(indexer) + assert_frame_equal(result, expected) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + frame.sort_index(by=['B', 'A']) + result = frame.sort_values(by=['B', 'A']) + indexer = np.lexsort((frame['A'], frame['B'])) + expected = frame.take(indexer) + assert_frame_equal(result, expected) + + def test_sort_index_inplace(self): + frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4], + columns=['A', 'B', 'C', 'D']) + + # axis=0 + unordered = frame.ix[[3, 2, 4, 1]] + a_id = id(unordered['A']) + df = unordered.copy() + df.sort_index(inplace=True) + expected = frame + assert_frame_equal(df, expected) + self.assertNotEqual(a_id, id(df['A'])) + + df = unordered.copy() + df.sort_index(ascending=False, inplace=True) + expected = frame[::-1] + assert_frame_equal(df, expected) + + # axis=1 + unordered = frame.ix[:, ['D', 'B', 'C', 'A']] + df = unordered.copy() + df.sort_index(axis=1, inplace=True) + expected = frame + assert_frame_equal(df, expected) + + df = unordered.copy() + df.sort_index(axis=1, ascending=False, inplace=True) + expected = frame.ix[:, ::-1] + assert_frame_equal(df, expected) + + def test_sort_index_different_sortorder(self): + A = np.arange(20).repeat(5) + B = np.tile(np.arange(5), 20) + + indexer = np.random.permutation(100) + A = 
A.take(indexer) + B = B.take(indexer) + + df = DataFrame({'A': A, 'B': B, + 'C': np.random.randn(100)}) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by=['A', 'B'], ascending=[1, 0]) + result = df.sort_values(by=['A', 'B'], ascending=[1, 0]) + + ex_indexer = np.lexsort((df.B.max() - df.B, df.A)) + expected = df.take(ex_indexer) + assert_frame_equal(result, expected) + + # test with multiindex, too + idf = df.set_index(['A', 'B']) + + result = idf.sort_index(ascending=[1, 0]) + expected = idf.take(ex_indexer) + assert_frame_equal(result, expected) + + # also, Series! + result = idf['C'].sort_index(ascending=[1, 0]) + assert_series_equal(result, expected['C']) + + def test_sort_inplace(self): + frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4], + columns=['A', 'B', 'C', 'D']) + + sorted_df = frame.copy() + sorted_df.sort_values(by='A', inplace=True) + expected = frame.sort_values(by='A') + assert_frame_equal(sorted_df, expected) + + sorted_df = frame.copy() + sorted_df.sort_values(by='A', ascending=False, inplace=True) + expected = frame.sort_values(by='A', ascending=False) + assert_frame_equal(sorted_df, expected) + + sorted_df = frame.copy() + sorted_df.sort_values(by=['A', 'B'], ascending=False, inplace=True) + expected = frame.sort_values(by=['A', 'B'], ascending=False) + assert_frame_equal(sorted_df, expected) + + def test_sort_index_duplicates(self): + + # with 9816, these are all translated to .sort_values + + df = DataFrame([lrange(5, 9), lrange(4)], + columns=['a', 'a', 'b', 'b']) + + with assertRaisesRegexp(ValueError, 'duplicate'): + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by='a') + with assertRaisesRegexp(ValueError, 'duplicate'): + df.sort_values(by='a') + + with assertRaisesRegexp(ValueError, 'duplicate'): + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by=['a']) + with assertRaisesRegexp(ValueError, 
'duplicate'): + df.sort_values(by=['a']) + + with assertRaisesRegexp(ValueError, 'duplicate'): + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + # multi-column 'by' is separate codepath + df.sort_index(by=['a', 'b']) + with assertRaisesRegexp(ValueError, 'duplicate'): + # multi-column 'by' is separate codepath + df.sort_values(by=['a', 'b']) + + # with multi-index + # GH4370 + df = DataFrame(np.random.randn(4, 2), + columns=MultiIndex.from_tuples([('a', 0), ('a', 1)])) + with assertRaisesRegexp(ValueError, 'levels'): + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by='a') + with assertRaisesRegexp(ValueError, 'levels'): + df.sort_values(by='a') + + # convert tuples to a list of tuples + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by=[('a', 1)]) + expected = df.sort_values(by=[('a', 1)]) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by=('a', 1)) + result = df.sort_values(by=('a', 1)) + assert_frame_equal(result, expected) + + def test_sortlevel(self): + mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) + df = DataFrame([[1, 2], [3, 4]], mi) + res = df.sortlevel('A', sort_remaining=False) + assert_frame_equal(df, res) + + res = df.sortlevel(['A', 'B'], sort_remaining=False) + assert_frame_equal(df, res) + + def test_sort_datetimes(self): + + # GH 3461, argsort / lexsort differences for a datetime column + df = DataFrame(['a', 'a', 'a', 'b', 'c', 'd', 'e', 'f', 'g'], + columns=['A'], + index=date_range('20130101', periods=9)) + dts = [Timestamp(x) + for x in ['2004-02-11', '2004-01-21', '2004-01-26', + '2005-09-20', '2010-10-04', '2009-05-12', + '2008-11-12', '2010-09-28', '2010-09-28']] + df['B'] = dts[::2] + dts[1::2] + df['C'] = 2. + df['A1'] = 3. 
+ + df1 = df.sort_values(by='A') + df2 = df.sort_values(by=['A']) + assert_frame_equal(df1, df2) + + df1 = df.sort_values(by='B') + df2 = df.sort_values(by=['B']) + assert_frame_equal(df1, df2) + + def test_frame_column_inplace_sort_exception(self): + s = self.frame['A'] + with assertRaisesRegexp(ValueError, "This Series is a view"): + s.sort_values(inplace=True) + + cp = s.copy() + cp.sort_values() # it works! diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py new file mode 100644 index 0000000000000..a7458f5335ec4 --- /dev/null +++ b/pandas/tests/frame/test_subclass.py @@ -0,0 +1,126 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from pandas import DataFrame, Series, MultiIndex, Panel +import pandas as pd + +from pandas.util.testing import (assert_frame_equal, + SubclassedDataFrame) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameSubclassing(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_frame_subclassing_and_slicing(self): + # Subclass frame and ensure it returns the right class on slicing it + # In reference to PR 9632 + + class CustomSeries(Series): + + @property + def _constructor(self): + return CustomSeries + + def custom_series_function(self): + return 'OK' + + class CustomDataFrame(DataFrame): + """ + Subclasses pandas DF, fills DF with simulation results, adds some + custom plotting functions. + """ + + def __init__(self, *args, **kw): + super(CustomDataFrame, self).__init__(*args, **kw) + + @property + def _constructor(self): + return CustomDataFrame + + _constructor_sliced = CustomSeries + + def custom_frame_function(self): + return 'OK' + + data = {'col1': range(10), + 'col2': range(10)} + cdf = CustomDataFrame(data) + + # Did we get back our own DF class? + self.assertTrue(isinstance(cdf, CustomDataFrame)) + + # Do we get back our own Series class after selecting a column? 
+ cdf_series = cdf.col1 + self.assertTrue(isinstance(cdf_series, CustomSeries)) + self.assertEqual(cdf_series.custom_series_function(), 'OK') + + # Do we get back our own DF class after slicing row-wise? + cdf_rows = cdf[1:5] + self.assertTrue(isinstance(cdf_rows, CustomDataFrame)) + self.assertEqual(cdf_rows.custom_frame_function(), 'OK') + + # Make sure sliced part of multi-index frame is custom class + mcol = pd.MultiIndex.from_tuples([('A', 'A'), ('A', 'B')]) + cdf_multi = CustomDataFrame([[0, 1], [2, 3]], columns=mcol) + self.assertTrue(isinstance(cdf_multi['A'], CustomDataFrame)) + + mcol = pd.MultiIndex.from_tuples([('A', ''), ('B', '')]) + cdf_multi2 = CustomDataFrame([[0, 1], [2, 3]], columns=mcol) + self.assertTrue(isinstance(cdf_multi2['A'], CustomSeries)) + + def test_dataframe_metadata(self): + df = SubclassedDataFrame({'X': [1, 2, 3], 'Y': [1, 2, 3]}, + index=['a', 'b', 'c']) + df.testattr = 'XXX' + + self.assertEqual(df.testattr, 'XXX') + self.assertEqual(df[['X']].testattr, 'XXX') + self.assertEqual(df.loc[['a', 'b'], :].testattr, 'XXX') + self.assertEqual(df.iloc[[0, 1], :].testattr, 'XXX') + + # GH9776 + self.assertEqual(df.iloc[0:1, :].testattr, 'XXX') + + # GH10553 + unpickled = self.round_trip_pickle(df) + assert_frame_equal(df, unpickled) + self.assertEqual(df._metadata, unpickled._metadata) + self.assertEqual(df.testattr, unpickled.testattr) + + def test_to_panel_expanddim(self): + # GH 9762 + + class SubclassedFrame(DataFrame): + + @property + def _constructor_expanddim(self): + return SubclassedPanel + + class SubclassedPanel(Panel): + pass + + index = MultiIndex.from_tuples([(0, 0), (0, 1), (0, 2)]) + df = SubclassedFrame({'X': [1, 2, 3], 'Y': [4, 5, 6]}, index=index) + result = df.to_panel() + self.assertTrue(isinstance(result, SubclassedPanel)) + expected = SubclassedPanel([[[1, 2, 3]], [[4, 5, 6]]], + items=['X', 'Y'], major_axis=[0], + minor_axis=[0, 1, 2], + dtype='int64') + tm.assert_panel_equal(result, expected) + + def 
test_subclass_attr_err_propagation(self): + # GH 11808 + class A(DataFrame): + + @property + def bar(self): + return self.i_dont_exist + with tm.assertRaisesRegexp(AttributeError, '.*i_dont_exist.*'): + A().bar diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py new file mode 100644 index 0000000000000..115e942dceb0f --- /dev/null +++ b/pandas/tests/frame/test_timeseries.py @@ -0,0 +1,338 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime + +from numpy import nan +from numpy.random import randn +import numpy as np + +from pandas import DataFrame, Series, Index, Timestamp, DatetimeIndex +import pandas as pd +import pandas.core.datetools as datetools + +from pandas.util.testing import (assert_almost_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameTimeSeriesMethods(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_diff(self): + the_diff = self.tsframe.diff(1) + + assert_series_equal(the_diff['A'], + self.tsframe['A'] - self.tsframe['A'].shift(1)) + + # int dtype + a = 10000000000000000 + b = a + 1 + s = Series([a, b]) + + rs = DataFrame({'s': s}).diff() + self.assertEqual(rs.s[1], 1) + + # mixed numeric + tf = self.tsframe.astype('float32') + the_diff = tf.diff(1) + assert_series_equal(the_diff['A'], + tf['A'] - tf['A'].shift(1)) + + # issue 10907 + df = pd.DataFrame({'y': pd.Series([2]), 'z': pd.Series([3])}) + df.insert(0, 'x', 1) + result = df.diff(axis=1) + expected = pd.DataFrame({'x': np.nan, 'y': pd.Series( + 1), 'z': pd.Series(1)}).astype('float64') + assert_frame_equal(result, expected) + + def test_diff_timedelta(self): + # GH 4533 + df = DataFrame(dict(time=[Timestamp('20130101 9:01'), + Timestamp('20130101 9:02')], + value=[1.0, 2.0])) + + res = df.diff() + exp = DataFrame([[pd.NaT, np.nan], + 
[pd.Timedelta('00:01:00'), 1]], + columns=['time', 'value']) + assert_frame_equal(res, exp) + + def test_diff_mixed_dtype(self): + df = DataFrame(np.random.randn(5, 3)) + df['A'] = np.array([1, 2, 3, 4, 5], dtype=object) + + result = df.diff() + self.assertEqual(result[0].dtype, np.float64) + + def test_diff_neg_n(self): + rs = self.tsframe.diff(-1) + xp = self.tsframe - self.tsframe.shift(-1) + assert_frame_equal(rs, xp) + + def test_diff_float_n(self): + rs = self.tsframe.diff(1.) + xp = self.tsframe.diff(1) + assert_frame_equal(rs, xp) + + def test_diff_axis(self): + # GH 9727 + df = DataFrame([[1., 2.], [3., 4.]]) + assert_frame_equal(df.diff(axis=1), DataFrame( + [[np.nan, 1.], [np.nan, 1.]])) + assert_frame_equal(df.diff(axis=0), DataFrame( + [[np.nan, np.nan], [2., 2.]])) + + def test_pct_change(self): + rs = self.tsframe.pct_change(fill_method=None) + assert_frame_equal(rs, self.tsframe / self.tsframe.shift(1) - 1) + + rs = self.tsframe.pct_change(2) + filled = self.tsframe.fillna(method='pad') + assert_frame_equal(rs, filled / filled.shift(2) - 1) + + rs = self.tsframe.pct_change(fill_method='bfill', limit=1) + filled = self.tsframe.fillna(method='bfill', limit=1) + assert_frame_equal(rs, filled / filled.shift(1) - 1) + + rs = self.tsframe.pct_change(freq='5D') + filled = self.tsframe.fillna(method='pad') + assert_frame_equal(rs, filled / filled.shift(freq='5D') - 1) + + def test_pct_change_shift_over_nas(self): + s = Series([1., 1.5, np.nan, 2.5, 3.]) + + df = DataFrame({'a': s, 'b': s}) + + chg = df.pct_change() + expected = Series([np.nan, 0.5, np.nan, 2.5 / 1.5 - 1, .2]) + edf = DataFrame({'a': expected, 'b': expected}) + assert_frame_equal(chg, edf) + + def test_shift(self): + # naive shift + shiftedFrame = self.tsframe.shift(5) + self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) + + shiftedSeries = self.tsframe['A'].shift(5) + assert_series_equal(shiftedFrame['A'], shiftedSeries) + + shiftedFrame = self.tsframe.shift(-5) + 
self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) + + shiftedSeries = self.tsframe['A'].shift(-5) + assert_series_equal(shiftedFrame['A'], shiftedSeries) + + # shift by 0 + unshifted = self.tsframe.shift(0) + assert_frame_equal(unshifted, self.tsframe) + + # shift by DateOffset + shiftedFrame = self.tsframe.shift(5, freq=datetools.BDay()) + self.assertEqual(len(shiftedFrame), len(self.tsframe)) + + shiftedFrame2 = self.tsframe.shift(5, freq='B') + assert_frame_equal(shiftedFrame, shiftedFrame2) + + d = self.tsframe.index[0] + shifted_d = d + datetools.BDay(5) + assert_series_equal(self.tsframe.xs(d), + shiftedFrame.xs(shifted_d), check_names=False) + + # shift int frame + int_shifted = self.intframe.shift(1) # noqa + + # Shifting with PeriodIndex + ps = tm.makePeriodFrame() + shifted = ps.shift(1) + unshifted = shifted.shift(-1) + self.assertTrue(shifted.index.equals(ps.index)) + + tm.assert_dict_equal(unshifted.ix[:, 0].valid(), ps.ix[:, 0], + compare_keys=False) + + shifted2 = ps.shift(1, 'B') + shifted3 = ps.shift(1, datetools.bday) + assert_frame_equal(shifted2, shifted3) + assert_frame_equal(ps, shifted2.shift(-1, 'B')) + + assertRaisesRegexp(ValueError, 'does not match PeriodIndex freq', + ps.shift, freq='D') + + # shift other axis + # GH 6371 + df = DataFrame(np.random.rand(10, 5)) + expected = pd.concat([DataFrame(np.nan, index=df.index, + columns=[0]), + df.iloc[:, 0:-1]], + ignore_index=True, axis=1) + result = df.shift(1, axis=1) + assert_frame_equal(result, expected) + + # shift named axis + df = DataFrame(np.random.rand(10, 5)) + expected = pd.concat([DataFrame(np.nan, index=df.index, + columns=[0]), + df.iloc[:, 0:-1]], + ignore_index=True, axis=1) + result = df.shift(1, axis='columns') + assert_frame_equal(result, expected) + + def test_shift_bool(self): + df = DataFrame({'high': [True, False], + 'low': [False, False]}) + rs = df.shift(1) + xp = DataFrame(np.array([[np.nan, np.nan], + [True, False]], dtype=object), + columns=['high', 
'low']) + assert_frame_equal(rs, xp) + + def test_shift_categorical(self): + # GH 9416 + s1 = pd.Series(['a', 'b', 'c'], dtype='category') + s2 = pd.Series(['A', 'B', 'C'], dtype='category') + df = DataFrame({'one': s1, 'two': s2}) + rs = df.shift(1) + xp = DataFrame({'one': s1.shift(1), 'two': s2.shift(1)}) + assert_frame_equal(rs, xp) + + def test_shift_empty(self): + # Regression test for #8019 + df = DataFrame({'foo': []}) + rs = df.shift(-1) + + assert_frame_equal(df, rs) + + def test_tshift(self): + # PeriodIndex + ps = tm.makePeriodFrame() + shifted = ps.tshift(1) + unshifted = shifted.tshift(-1) + + assert_frame_equal(unshifted, ps) + + shifted2 = ps.tshift(freq='B') + assert_frame_equal(shifted, shifted2) + + shifted3 = ps.tshift(freq=datetools.bday) + assert_frame_equal(shifted, shifted3) + + assertRaisesRegexp(ValueError, 'does not match', ps.tshift, freq='M') + + # DatetimeIndex + shifted = self.tsframe.tshift(1) + unshifted = shifted.tshift(-1) + + assert_frame_equal(self.tsframe, unshifted) + + shifted2 = self.tsframe.tshift(freq=self.tsframe.index.freq) + assert_frame_equal(shifted, shifted2) + + inferred_ts = DataFrame(self.tsframe.values, + Index(np.asarray(self.tsframe.index)), + columns=self.tsframe.columns) + shifted = inferred_ts.tshift(1) + unshifted = shifted.tshift(-1) + assert_frame_equal(shifted, self.tsframe.tshift(1)) + assert_frame_equal(unshifted, inferred_ts) + + no_freq = self.tsframe.ix[[0, 5, 7], :] + self.assertRaises(ValueError, no_freq.tshift) + + def test_truncate(self): + ts = self.tsframe[::3] + + start, end = self.tsframe.index[3], self.tsframe.index[6] + + start_missing = self.tsframe.index[2] + end_missing = self.tsframe.index[7] + + # neither specified + truncated = ts.truncate() + assert_frame_equal(truncated, ts) + + # both specified + expected = ts[1:3] + + truncated = ts.truncate(start, end) + assert_frame_equal(truncated, expected) + + truncated = ts.truncate(start_missing, end_missing) + 
assert_frame_equal(truncated, expected) + + # start specified + expected = ts[1:] + + truncated = ts.truncate(before=start) + assert_frame_equal(truncated, expected) + + truncated = ts.truncate(before=start_missing) + assert_frame_equal(truncated, expected) + + # end specified + expected = ts[:3] + + truncated = ts.truncate(after=end) + assert_frame_equal(truncated, expected) + + truncated = ts.truncate(after=end_missing) + assert_frame_equal(truncated, expected) + + self.assertRaises(ValueError, ts.truncate, + before=ts.index[-1] - 1, + after=ts.index[0] + 1) + + def test_truncate_copy(self): + index = self.tsframe.index + truncated = self.tsframe.truncate(index[5], index[10]) + truncated.values[:] = 5. + self.assertFalse((self.tsframe.values[5:11] == 5).any()) + + def test_asfreq(self): + offset_monthly = self.tsframe.asfreq(datetools.bmonthEnd) + rule_monthly = self.tsframe.asfreq('BM') + + assert_almost_equal(offset_monthly['A'], rule_monthly['A']) + + filled = rule_monthly.asfreq('B', method='pad') # noqa + # TODO: actually check that this worked. + + # don't forget! 
+ filled_dep = rule_monthly.asfreq('B', method='pad') # noqa + + # test does not blow up on length-0 DataFrame + zero_length = self.tsframe.reindex([]) + result = zero_length.asfreq('BM') + self.assertIsNot(result, zero_length) + + def test_asfreq_datetimeindex(self): + df = DataFrame({'A': [1, 2, 3]}, + index=[datetime(2011, 11, 1), datetime(2011, 11, 2), + datetime(2011, 11, 3)]) + df = df.asfreq('B') + tm.assertIsInstance(df.index, DatetimeIndex) + + ts = df['A'].asfreq('B') + tm.assertIsInstance(ts.index, DatetimeIndex) + + def test_first_last_valid(self): + N = len(self.frame.index) + mat = randn(N) + mat[:5] = nan + mat[-5:] = nan + + frame = DataFrame({'foo': mat}, index=self.frame.index) + index = frame.first_valid_index() + + self.assertEqual(index, frame.index[5]) + + index = frame.last_valid_index() + self.assertEqual(index, frame.index[-6]) diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py new file mode 100644 index 0000000000000..a5b86b35d330e --- /dev/null +++ b/pandas/tests/frame/test_to_csv.py @@ -0,0 +1,1110 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +import csv + +from numpy import nan +import numpy as np + +from pandas.compat import (lmap, range, lrange, StringIO, u) +from pandas.parser import CParserError +from pandas import (DataFrame, Index, Series, MultiIndex, Timestamp, + date_range, read_csv, compat) +import pandas as pd + +from pandas.util.testing import (assert_almost_equal, + assert_equal, + assert_series_equal, + assert_frame_equal, + ensure_clean, + makeCustomDataframe as mkdf, + assertRaisesRegexp) + +from numpy.testing.decorators import slow +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +MIXED_FLOAT_DTYPES = ['float16', 'float32', 'float64'] +MIXED_INT_DTYPES = ['uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16', + 'int32', 'int64'] + + +class TestDataFrameToCSV(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def 
test_to_csv_from_csv(self): + + pname = '__tmp_to_csv_from_csv__' + with ensure_clean(pname) as path: + self.frame['A'][:5] = nan + + self.frame.to_csv(path) + self.frame.to_csv(path, columns=['A', 'B']) + self.frame.to_csv(path, header=False) + self.frame.to_csv(path, index=False) + + # test roundtrip + self.tsframe.to_csv(path) + recons = DataFrame.from_csv(path) + + assert_frame_equal(self.tsframe, recons) + + self.tsframe.to_csv(path, index_label='index') + recons = DataFrame.from_csv(path, index_col=None) + assert(len(recons.columns) == len(self.tsframe.columns) + 1) + + # no index + self.tsframe.to_csv(path, index=False) + recons = DataFrame.from_csv(path, index_col=None) + assert_almost_equal(self.tsframe.values, recons.values) + + # corner case + dm = DataFrame({'s1': Series(lrange(3), lrange(3)), + 's2': Series(lrange(2), lrange(2))}) + dm.to_csv(path) + recons = DataFrame.from_csv(path) + assert_frame_equal(dm, recons) + + with ensure_clean(pname) as path: + + # duplicate index + df = DataFrame(np.random.randn(3, 3), index=['a', 'a', 'b'], + columns=['x', 'y', 'z']) + df.to_csv(path) + result = DataFrame.from_csv(path) + assert_frame_equal(result, df) + + midx = MultiIndex.from_tuples( + [('A', 1, 2), ('A', 1, 2), ('B', 1, 2)]) + df = DataFrame(np.random.randn(3, 3), index=midx, + columns=['x', 'y', 'z']) + df.to_csv(path) + result = DataFrame.from_csv(path, index_col=[0, 1, 2], + parse_dates=False) + # TODO from_csv names index ['Unnamed: 1', 'Unnamed: 2'] should it + # ? 
+ assert_frame_equal(result, df, check_names=False) + + # column aliases + col_aliases = Index(['AA', 'X', 'Y', 'Z']) + self.frame2.to_csv(path, header=col_aliases) + rs = DataFrame.from_csv(path) + xp = self.frame2.copy() + xp.columns = col_aliases + + assert_frame_equal(xp, rs) + + self.assertRaises(ValueError, self.frame2.to_csv, path, + header=['AA', 'X']) + + with ensure_clean(pname) as path: + df1 = DataFrame(np.random.randn(3, 1)) + df2 = DataFrame(np.random.randn(3, 1)) + + df1.to_csv(path) + df2.to_csv(path, mode='a', header=False) + xp = pd.concat([df1, df2]) + rs = pd.read_csv(path, index_col=0) + rs.columns = lmap(int, rs.columns) + xp.columns = lmap(int, xp.columns) + assert_frame_equal(xp, rs) + + with ensure_clean() as path: + # GH 10833 (TimedeltaIndex formatting) + dt = pd.Timedelta(seconds=1) + df = pd.DataFrame({'dt_data': [i * dt for i in range(3)]}, + index=pd.Index([i * dt for i in range(3)], + name='dt_index')) + df.to_csv(path) + + result = pd.read_csv(path, index_col='dt_index') + result.index = pd.to_timedelta(result.index) + # TODO: remove renaming when GH 10875 is solved + result.index = result.index.rename('dt_index') + result['dt_data'] = pd.to_timedelta(result['dt_data']) + + assert_frame_equal(df, result, check_index_type=True) + + # tz, 8260 + with ensure_clean(pname) as path: + + self.tzframe.to_csv(path) + result = pd.read_csv(path, index_col=0, parse_dates=['A']) + + converter = lambda c: pd.to_datetime(result[c]).dt.tz_localize( + 'UTC').dt.tz_convert(self.tzframe[c].dt.tz) + result['B'] = converter('B') + result['C'] = converter('C') + assert_frame_equal(result, self.tzframe) + + def test_to_csv_cols_reordering(self): + # GH3454 + import pandas as pd + + chunksize = 5 + N = int(chunksize * 2.5) + + df = mkdf(N, 3) + cs = df.columns + cols = [cs[2], cs[0]] + + with ensure_clean() as path: + df.to_csv(path, columns=cols, chunksize=chunksize) + rs_c = pd.read_csv(path, index_col=0) + + assert_frame_equal(df[cols], rs_c, 
check_names=False) + + def test_to_csv_legacy_raises_on_dupe_cols(self): + df = mkdf(10, 3) + df.columns = ['a', 'a', 'b'] + with ensure_clean() as path: + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): + self.assertRaises(NotImplementedError, + df.to_csv, path, engine='python') + + def test_to_csv_new_dupe_cols(self): + import pandas as pd + + def _check_df(df, cols=None): + with ensure_clean() as path: + df.to_csv(path, columns=cols, chunksize=chunksize) + rs_c = pd.read_csv(path, index_col=0) + + # we wrote them in a different order + # so compare them in that order + if cols is not None: + + if df.columns.is_unique: + rs_c.columns = cols + else: + indexer, missing = df.columns.get_indexer_non_unique( + cols) + rs_c.columns = df.columns.take(indexer) + + for c in cols: + obj_df = df[c] + obj_rs = rs_c[c] + if isinstance(obj_df, Series): + assert_series_equal(obj_df, obj_rs) + else: + assert_frame_equal( + obj_df, obj_rs, check_names=False) + + # wrote in the same order + else: + rs_c.columns = df.columns + assert_frame_equal(df, rs_c, check_names=False) + + chunksize = 5 + N = int(chunksize * 2.5) + + # dupe cols + df = mkdf(N, 3) + df.columns = ['a', 'a', 'b'] + _check_df(df, None) + + # dupe cols with selection + cols = ['b', 'a'] + _check_df(df, cols) + + @slow + def test_to_csv_moar(self): + path = '__tmp_to_csv_moar__' + + def _do_test(df, path, r_dtype=None, c_dtype=None, + rnlvl=None, cnlvl=None, dupe_col=False): + + kwargs = dict(parse_dates=False) + if cnlvl: + if rnlvl is not None: + kwargs['index_col'] = lrange(rnlvl) + kwargs['header'] = lrange(cnlvl) + with ensure_clean(path) as path: + df.to_csv(path, encoding='utf8', + chunksize=chunksize, tupleize_cols=False) + recons = DataFrame.from_csv( + path, tupleize_cols=False, **kwargs) + else: + kwargs['header'] = 0 + with ensure_clean(path) as path: + df.to_csv(path, encoding='utf8', chunksize=chunksize) + recons = DataFrame.from_csv(path, **kwargs) + + def _to_uni(x): + if 
not isinstance(x, compat.text_type):
+                    return x.decode('utf8')
+                return x
+            if dupe_col:
+                # read_csv disambiguates the columns by
+                # labeling them dupe.1, dupe.2, etc. Monkey-patch the columns
+                recons.columns = df.columns
+            if rnlvl and not cnlvl:
+                delta_lvl = [recons.iloc[
+                    :, i].values for i in range(rnlvl - 1)]
+                ix = MultiIndex.from_arrays([list(recons.index)] + delta_lvl)
+                recons.index = ix
+                recons = recons.iloc[:, rnlvl - 1:]
+
+            type_map = dict(i='i', f='f', s='O', u='O', dt='O', p='O')
+            if r_dtype:
+                if r_dtype == 'u':  # unicode
+                    r_dtype = 'O'
+                    recons.index = np.array(lmap(_to_uni, recons.index),
+                                            dtype=r_dtype)
+                    df.index = np.array(lmap(_to_uni, df.index), dtype=r_dtype)
+                elif r_dtype == 'dt':  # datetime
+                    r_dtype = 'O'
+                    recons.index = np.array(lmap(Timestamp, recons.index),
+                                            dtype=r_dtype)
+                    df.index = np.array(
+                        lmap(Timestamp, df.index), dtype=r_dtype)
+                elif r_dtype == 'p':
+                    r_dtype = 'O'
+                    recons.index = np.array(
+                        list(map(Timestamp, recons.index.to_datetime())),
+                        dtype=r_dtype)
+                    df.index = np.array(
+                        list(map(Timestamp, df.index.to_datetime())),
+                        dtype=r_dtype)
+                else:
+                    r_dtype = type_map.get(r_dtype)
+                    recons.index = np.array(recons.index, dtype=r_dtype)
+                    df.index = np.array(df.index, dtype=r_dtype)
+            if c_dtype:
+                if c_dtype == 'u':
+                    c_dtype = 'O'
+                    recons.columns = np.array(lmap(_to_uni, recons.columns),
+                                              dtype=c_dtype)
+                    df.columns = np.array(
+                        lmap(_to_uni, df.columns), dtype=c_dtype)
+                elif c_dtype == 'dt':
+                    c_dtype = 'O'
+                    recons.columns = np.array(lmap(Timestamp, recons.columns),
+                                              dtype=c_dtype)
+                    df.columns = np.array(
+                        lmap(Timestamp, df.columns), dtype=c_dtype)
+                elif c_dtype == 'p':
+                    c_dtype = 'O'
+                    recons.columns = np.array(
+                        lmap(Timestamp, recons.columns.to_datetime()),
+                        dtype=c_dtype)
+                    df.columns = np.array(
+                        lmap(Timestamp, df.columns.to_datetime()),
+                        dtype=c_dtype)
+                else:
+                    c_dtype = type_map.get(c_dtype)
+                    recons.columns = np.array(recons.columns, dtype=c_dtype)
+                    df.columns = np.array(df.columns,
dtype=c_dtype) + + assert_frame_equal(df, recons, check_names=False, + check_less_precise=True) + + N = 100 + chunksize = 1000 + + # GH3437 + from pandas import NaT + + def make_dtnat_arr(n, nnat=None): + if nnat is None: + nnat = int(n * 0.1) # 10% + s = list(date_range('2000', freq='5min', periods=n)) + if nnat: + for i in np.random.randint(0, len(s), nnat): + s[i] = NaT + i = np.random.randint(100) + s[-i] = NaT + s[i] = NaT + return s + + # N=35000 + s1 = make_dtnat_arr(chunksize + 5) + s2 = make_dtnat_arr(chunksize + 5, 0) + path = '1.csv' + + # s3=make_dtnjat_arr(chunksize+5,0) + with ensure_clean('.csv') as pth: + df = DataFrame(dict(a=s1, b=s2)) + df.to_csv(pth, chunksize=chunksize) + recons = DataFrame.from_csv(pth)._convert(datetime=True, + coerce=True) + assert_frame_equal(df, recons, check_names=False, + check_less_precise=True) + + for ncols in [4]: + base = int((chunksize // ncols or 1) or 1) + for nrows in [2, 10, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, + base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols, r_idx_type='dt', + c_idx_type='s'), path, 'dt', 's') + + for ncols in [4]: + base = int((chunksize // ncols or 1) or 1) + for nrows in [2, 10, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, + base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols, r_idx_type='dt', + c_idx_type='s'), path, 'dt', 's') + pass + + for r_idx_type, c_idx_type in [('i', 'i'), ('s', 's'), ('u', 'dt'), + ('p', 'p')]: + for ncols in [1, 2, 3, 4]: + base = int((chunksize // ncols or 1) or 1) + for nrows in [2, 10, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, + base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols, r_idx_type=r_idx_type, + c_idx_type=c_idx_type), + path, r_idx_type, c_idx_type) + + for ncols in [1, 2, 3, 4]: + base = int((chunksize // ncols or 1) or 1) + for nrows in [10, N - 2, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, 
+ base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols), path) + + for nrows in [10, N - 2, N - 1, N, N + 1, N + 2]: + df = mkdf(nrows, 3) + cols = list(df.columns) + cols[:2] = ["dupe", "dupe"] + cols[-2:] = ["dupe", "dupe"] + ix = list(df.index) + ix[:2] = ["rdupe", "rdupe"] + ix[-2:] = ["rdupe", "rdupe"] + df.index = ix + df.columns = cols + _do_test(df, path, dupe_col=True) + + _do_test(DataFrame(index=lrange(10)), path) + _do_test(mkdf(chunksize // 2 + 1, 2, r_idx_nlevels=2), path, rnlvl=2) + for ncols in [2, 3, 4]: + base = int(chunksize // ncols) + for nrows in [10, N - 2, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, + base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols, r_idx_nlevels=2), path, rnlvl=2) + _do_test(mkdf(nrows, ncols, c_idx_nlevels=2), path, cnlvl=2) + _do_test(mkdf(nrows, ncols, r_idx_nlevels=2, c_idx_nlevels=2), + path, rnlvl=2, cnlvl=2) + + def test_to_csv_from_csv_w_some_infs(self): + + # test roundtrip with inf, -inf, nan, as full columns and mix + self.frame['G'] = np.nan + f = lambda x: [np.inf, np.nan][np.random.rand() < .5] + self.frame['H'] = self.frame.index.map(f) + + with ensure_clean() as path: + self.frame.to_csv(path) + recons = DataFrame.from_csv(path) + + # TODO to_csv drops column name + assert_frame_equal(self.frame, recons, check_names=False) + assert_frame_equal(np.isinf(self.frame), + np.isinf(recons), check_names=False) + + def test_to_csv_from_csv_w_all_infs(self): + + # test roundtrip with inf, -inf, nan, as full columns and mix + self.frame['E'] = np.inf + self.frame['F'] = -np.inf + + with ensure_clean() as path: + self.frame.to_csv(path) + recons = DataFrame.from_csv(path) + + # TODO to_csv drops column name + assert_frame_equal(self.frame, recons, check_names=False) + assert_frame_equal(np.isinf(self.frame), + np.isinf(recons), check_names=False) + + def test_to_csv_no_index(self): + # GH 3624, after appending columns, to_csv fails + pname = '__tmp_to_csv_no_index__' + with 
ensure_clean(pname) as path: + df = DataFrame({'c1': [1, 2, 3], 'c2': [4, 5, 6]}) + df.to_csv(path, index=False) + result = read_csv(path) + assert_frame_equal(df, result) + df['c3'] = Series([7, 8, 9], dtype='int64') + df.to_csv(path, index=False) + result = read_csv(path) + assert_frame_equal(df, result) + + def test_to_csv_with_mix_columns(self): + # GH11637, incorrect output when a mix of integer and string column + # names passed as columns parameter in to_csv + + df = DataFrame({0: ['a', 'b', 'c'], + 1: ['aa', 'bb', 'cc']}) + df['test'] = 'txt' + assert_equal(df.to_csv(), df.to_csv(columns=[0, 1, 'test'])) + + def test_to_csv_headers(self): + # GH6186, the presence or absence of `index` incorrectly + # causes to_csv to have different header semantics. + pname = '__tmp_to_csv_headers__' + from_df = DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) + to_df = DataFrame([[1, 2], [3, 4]], columns=['X', 'Y']) + with ensure_clean(pname) as path: + from_df.to_csv(path, header=['X', 'Y']) + recons = DataFrame.from_csv(path) + assert_frame_equal(to_df, recons) + + from_df.to_csv(path, index=False, header=['X', 'Y']) + recons = DataFrame.from_csv(path) + recons.reset_index(inplace=True) + assert_frame_equal(to_df, recons) + + def test_to_csv_multiindex(self): + + pname = '__tmp_to_csv_multiindex__' + frame = self.frame + old_index = frame.index + arrays = np.arange(len(old_index) * 2).reshape(2, -1) + new_index = MultiIndex.from_arrays(arrays, names=['first', 'second']) + frame.index = new_index + + with ensure_clean(pname) as path: + + frame.to_csv(path, header=False) + frame.to_csv(path, columns=['A', 'B']) + + # round trip + frame.to_csv(path) + df = DataFrame.from_csv(path, index_col=[0, 1], parse_dates=False) + + # TODO to_csv drops column name + assert_frame_equal(frame, df, check_names=False) + self.assertEqual(frame.index.names, df.index.names) + + # needed if setUP becomes a classmethod + self.frame.index = old_index + + # try multiindex with dates + tsframe = 
self.tsframe
+            old_index = tsframe.index
+            new_index = [old_index, np.arange(len(old_index))]
+            tsframe.index = MultiIndex.from_arrays(new_index)
+
+            tsframe.to_csv(path, index_label=['time', 'foo'])
+            recons = DataFrame.from_csv(path, index_col=[0, 1])
+            # TODO to_csv drops column name
+            assert_frame_equal(tsframe, recons, check_names=False)
+
+            # do not load index
+            tsframe.to_csv(path)
+            recons = DataFrame.from_csv(path, index_col=None)
+            np.testing.assert_equal(
+                len(recons.columns), len(tsframe.columns) + 2)
+
+            # no index
+            tsframe.to_csv(path, index=False)
+            recons = DataFrame.from_csv(path, index_col=None)
+            assert_almost_equal(recons.values, self.tsframe.values)
+
+            # needed if setUp becomes a classmethod
+            self.tsframe.index = old_index
+
+        with ensure_clean(pname) as path:
+            # GH3571, GH1651, GH3141
+
+            def _make_frame(names=None):
+                if names is True:
+                    names = ['first', 'second']
+                return DataFrame(np.random.randint(0, 10, size=(3, 3)),
+                                 columns=MultiIndex.from_tuples(
+                                     [('bah', 'foo'),
+                                      ('bah', 'bar'),
+                                      ('ban', 'baz')], names=names),
+                                 dtype='int64')
+
+            # column & index are multi-index
+            df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)
+            df.to_csv(path, tupleize_cols=False)
+            result = read_csv(path, header=[0, 1, 2, 3], index_col=[
+                0, 1], tupleize_cols=False)
+            assert_frame_equal(df, result)
+
+            # column is mi
+            df = mkdf(5, 3, r_idx_nlevels=1, c_idx_nlevels=4)
+            df.to_csv(path, tupleize_cols=False)
+            result = read_csv(
+                path, header=[0, 1, 2, 3], index_col=0, tupleize_cols=False)
+            assert_frame_equal(df, result)
+
+            # dup column names?
+ df = mkdf(5, 3, r_idx_nlevels=3, c_idx_nlevels=4) + df.to_csv(path, tupleize_cols=False) + result = read_csv(path, header=[0, 1, 2, 3], index_col=[ + 0, 1, 2], tupleize_cols=False) + assert_frame_equal(df, result) + + # writing with no index + df = _make_frame() + df.to_csv(path, tupleize_cols=False, index=False) + result = read_csv(path, header=[0, 1], tupleize_cols=False) + assert_frame_equal(df, result) + + # we lose the names here + df = _make_frame(True) + df.to_csv(path, tupleize_cols=False, index=False) + result = read_csv(path, header=[0, 1], tupleize_cols=False) + self.assertTrue(all([x is None for x in result.columns.names])) + result.columns.names = df.columns.names + assert_frame_equal(df, result) + + # tupleize_cols=True and index=False + df = _make_frame(True) + df.to_csv(path, tupleize_cols=True, index=False) + result = read_csv( + path, header=0, tupleize_cols=True, index_col=None) + result.columns = df.columns + assert_frame_equal(df, result) + + # whatsnew example + df = _make_frame() + df.to_csv(path, tupleize_cols=False) + result = read_csv(path, header=[0, 1], index_col=[ + 0], tupleize_cols=False) + assert_frame_equal(df, result) + + df = _make_frame(True) + df.to_csv(path, tupleize_cols=False) + result = read_csv(path, header=[0, 1], index_col=[ + 0], tupleize_cols=False) + assert_frame_equal(df, result) + + # column & index are multi-index (compatibility) + df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4) + df.to_csv(path, tupleize_cols=True) + result = read_csv(path, header=0, index_col=[ + 0, 1], tupleize_cols=True) + result.columns = df.columns + assert_frame_equal(df, result) + + # invalid options + df = _make_frame(True) + df.to_csv(path, tupleize_cols=False) + + # catch invalid headers + with assertRaisesRegexp(CParserError, + 'Passed header=\[0,1,2\] are too many ' + 'rows for this multi_index of columns'): + read_csv(path, tupleize_cols=False, + header=lrange(3), index_col=0) + + with assertRaisesRegexp(CParserError, + 'Passed 
header=\[0,1,2,3,4,5,6\], len of ' + '7, but only 6 lines in file'): + read_csv(path, tupleize_cols=False, + header=lrange(7), index_col=0) + + for i in [4, 5, 6]: + with tm.assertRaises(CParserError): + read_csv(path, tupleize_cols=False, + header=lrange(i), index_col=0) + + # write with cols + with assertRaisesRegexp(TypeError, 'cannot specify cols with a ' + 'MultiIndex'): + df.to_csv(path, tupleize_cols=False, columns=['foo', 'bar']) + + with ensure_clean(pname) as path: + # empty + tsframe[:0].to_csv(path) + recons = DataFrame.from_csv(path) + exp = tsframe[:0] + exp.index = [] + + self.assertTrue(recons.columns.equals(exp.columns)) + self.assertEqual(len(recons), 0) + + def test_to_csv_float32_nanrep(self): + df = DataFrame(np.random.randn(1, 4).astype(np.float32)) + df[1] = np.nan + + with ensure_clean('__tmp_to_csv_float32_nanrep__.csv') as path: + df.to_csv(path, na_rep=999) + + with open(path) as f: + lines = f.readlines() + self.assertEqual(lines[1].split(',')[2], '999') + + def test_to_csv_withcommas(self): + + # Commas inside fields should be correctly escaped when saving as CSV. 
+ df = DataFrame({'A': [1, 2, 3], 'B': ['5,6', '7,8', '9,0']}) + + with ensure_clean('__tmp_to_csv_withcommas__.csv') as path: + df.to_csv(path) + df2 = DataFrame.from_csv(path) + assert_frame_equal(df2, df) + + def test_to_csv_mixed(self): + + def create_cols(name): + return ["%s%03d" % (name, i) for i in range(5)] + + df_float = DataFrame(np.random.randn( + 100, 5), dtype='float64', columns=create_cols('float')) + df_int = DataFrame(np.random.randn(100, 5), + dtype='int64', columns=create_cols('int')) + df_bool = DataFrame(True, index=df_float.index, + columns=create_cols('bool')) + df_object = DataFrame('foo', index=df_float.index, + columns=create_cols('object')) + df_dt = DataFrame(Timestamp('20010101'), + index=df_float.index, columns=create_cols('date')) + + # add in some nans + df_float.ix[30:50, 1:3] = np.nan + + # ## this is a bug in read_csv right now #### + # df_dt.ix[30:50,1:3] = np.nan + + df = pd.concat([df_float, df_int, df_bool, df_object, df_dt], axis=1) + + # dtype + dtypes = dict() + for n, dtype in [('float', np.float64), ('int', np.int64), + ('bool', np.bool), ('object', np.object)]: + for c in create_cols(n): + dtypes[c] = dtype + + with ensure_clean() as filename: + df.to_csv(filename) + rs = read_csv(filename, index_col=0, dtype=dtypes, + parse_dates=create_cols('date')) + assert_frame_equal(rs, df) + + def test_to_csv_dups_cols(self): + + df = DataFrame(np.random.randn(1000, 30), columns=lrange( + 15) + lrange(15), dtype='float64') + + with ensure_clean() as filename: + df.to_csv(filename) # single dtype, fine + result = read_csv(filename, index_col=0) + result.columns = df.columns + assert_frame_equal(result, df) + + df_float = DataFrame(np.random.randn(1000, 3), dtype='float64') + df_int = DataFrame(np.random.randn(1000, 3), dtype='int64') + df_bool = DataFrame(True, index=df_float.index, columns=lrange(3)) + df_object = DataFrame('foo', index=df_float.index, columns=lrange(3)) + df_dt = DataFrame(Timestamp('20010101'), + 
index=df_float.index, columns=lrange(3))
+        df = pd.concat([df_float, df_int, df_bool, df_object,
+                        df_dt], axis=1, ignore_index=True)
+
+        cols = []
+        for i in range(5):
+            cols.extend([0, 1, 2])
+        df.columns = cols
+
+        from pandas import to_datetime
+        with ensure_clean() as filename:
+            df.to_csv(filename)
+            result = read_csv(filename, index_col=0)
+
+            # date cols
+            for i in ['0.4', '1.4', '2.4']:
+                result[i] = to_datetime(result[i])
+
+            result.columns = df.columns
+            assert_frame_equal(result, df)
+
+        # GH3457
+        from pandas.util.testing import makeCustomDataframe as mkdf
+
+        N = 10
+        df = mkdf(N, 3)
+        df.columns = ['a', 'a', 'b']
+
+        with ensure_clean() as filename:
+            df.to_csv(filename)
+
+            # read_csv will rename the dups columns
+            result = read_csv(filename, index_col=0)
+            result = result.rename(columns={'a.1': 'a'})
+            assert_frame_equal(result, df)
+
+    def test_to_csv_chunking(self):
+
+        aa = DataFrame({'A': lrange(100000)})
+        aa['B'] = aa.A + 1.0
+        aa['C'] = aa.A + 2.0
+        aa['D'] = aa.A + 3.0
+
+        for chunksize in [10000, 50000, 100000]:
+            with ensure_clean() as filename:
+                aa.to_csv(filename, chunksize=chunksize)
+                rs = read_csv(filename, index_col=0)
+                assert_frame_equal(rs, aa)
+
+    @slow
+    def test_to_csv_wide_frame_formatting(self):
+        # Issue #8621
+        df = DataFrame(np.random.randn(1, 100010), columns=None, index=None)
+        with ensure_clean() as filename:
+            df.to_csv(filename, header=False, index=False)
+            rs = read_csv(filename, header=None)
+            assert_frame_equal(rs, df)
+
+    def test_to_csv_bug(self):
+        f1 = StringIO('a,1.0\nb,2.0')
+        df = DataFrame.from_csv(f1, header=None)
+        newdf = DataFrame({'t': df[df.columns[0]]})
+
+        with ensure_clean() as path:
+            newdf.to_csv(path)
+
+            recons = read_csv(path, index_col=0)
+            # don't check_names as t != 1
+            assert_frame_equal(recons, newdf, check_names=False)
+
+    def test_to_csv_unicode(self):
+
+        df = DataFrame({u('c/\u03c3'): [1, 2, 3]})
+        with ensure_clean() as path:
+
+            df.to_csv(path, encoding='UTF-8')
+            df2 = read_csv(path, index_col=0, encoding='UTF-8')
+            assert_frame_equal(df, df2)
+
+            df.to_csv(path, encoding='UTF-8', index=False)
+            df2 = read_csv(path, index_col=None, encoding='UTF-8')
+            assert_frame_equal(df, df2)
+
+    def test_to_csv_unicode_index_col(self):
+        buf = StringIO('')
+        df = DataFrame(
+            [[u("\u05d0"), "d2", "d3", "d4"], ["a1", "a2", "a3", "a4"]],
+            columns=[u("\u05d0"),
+                     u("\u05d1"), u("\u05d2"), u("\u05d3")],
+            index=[u("\u05d0"), u("\u05d1")])
+
+        df.to_csv(buf, encoding='UTF-8')
+        buf.seek(0)
+
+        df2 = read_csv(buf, index_col=0, encoding='UTF-8')
+        assert_frame_equal(df, df2)
+
+    def test_to_csv_stringio(self):
+        buf = StringIO()
+        self.frame.to_csv(buf)
+        buf.seek(0)
+        recons = read_csv(buf, index_col=0)
+        # TODO to_csv drops column name
+        assert_frame_equal(recons, self.frame, check_names=False)
+
+    def test_to_csv_float_format(self):
+
+        df = DataFrame([[0.123456, 0.234567, 0.567567],
+                        [12.32112, 123123.2, 321321.2]],
+                       index=['A', 'B'], columns=['X', 'Y', 'Z'])
+
+        with ensure_clean() as filename:
+
+            df.to_csv(filename, float_format='%.2f')
+
+            rs = read_csv(filename, index_col=0)
+            xp = DataFrame([[0.12, 0.23, 0.57],
+                            [12.32, 123123.20, 321321.20]],
+                           index=['A', 'B'], columns=['X', 'Y', 'Z'])
+            assert_frame_equal(rs, xp)
+
+    def test_to_csv_quoting(self):
+        df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']})
+
+        buf = StringIO()
+        df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC)
+
+        result = buf.getvalue()
+        expected = ('"A","B"\n'
+                    '1,"foo"\n'
+                    '2,"bar"\n'
+                    '3,"baz"\n')
+
+        self.assertEqual(result, expected)
+
+        # quoting windows line terminators, presents with encoding?
+        # #3503
+        text = 'a,b,c\n1,"test \r\n",3\n'
+        df = pd.read_csv(StringIO(text))
+        buf = StringIO()
+        df.to_csv(buf, encoding='utf-8', index=False)
+        self.assertEqual(buf.getvalue(), text)
+
+        # testing if quoting parameter is passed through with multi-indexes
+        # related to issue #7791
+        df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})
+        df = df.set_index(['a', 'b'])
+        expected = '"a","b","c"\n"1","3","5"\n"2","4","6"\n'
+        self.assertEqual(df.to_csv(quoting=csv.QUOTE_ALL), expected)
+
+    def test_to_csv_unicodewriter_quoting(self):
+        df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']})
+
+        buf = StringIO()
+        df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC,
+                  encoding='utf-8')
+
+        result = buf.getvalue()
+        expected = ('"A","B"\n'
+                    '1,"foo"\n'
+                    '2,"bar"\n'
+                    '3,"baz"\n')
+
+        self.assertEqual(result, expected)
+
+    def test_to_csv_quote_none(self):
+        # GH4328
+        df = DataFrame({'A': ['hello', '{"hello"}']})
+        for encoding in (None, 'utf-8'):
+            buf = StringIO()
+            df.to_csv(buf, quoting=csv.QUOTE_NONE,
+                      encoding=encoding, index=False)
+            result = buf.getvalue()
+            expected = 'A\nhello\n{"hello"}\n'
+            self.assertEqual(result, expected)
+
+    def test_to_csv_index_no_leading_comma(self):
+        df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
+                       index=['one', 'two', 'three'])
+
+        buf = StringIO()
+        df.to_csv(buf, index_label=False)
+        expected = ('A,B\n'
+                    'one,1,4\n'
+                    'two,2,5\n'
+                    'three,3,6\n')
+        self.assertEqual(buf.getvalue(), expected)
+
+    def test_to_csv_line_terminators(self):
+        df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]},
+                       index=['one', 'two', 'three'])
+
+        buf = StringIO()
+        df.to_csv(buf, line_terminator='\r\n')
+        expected = (',A,B\r\n'
+                    'one,1,4\r\n'
+                    'two,2,5\r\n'
+                    'three,3,6\r\n')
+        self.assertEqual(buf.getvalue(), expected)
+
+        buf = StringIO()
+        df.to_csv(buf)  # The default line terminator remains \n
+        expected = (',A,B\n'
+                    'one,1,4\n'
+                    'two,2,5\n'
+                    'three,3,6\n')
+        self.assertEqual(buf.getvalue(), expected)
+
+    def test_to_csv_from_csv_categorical(self):
+
+        # CSV with categoricals should result in the same output as when one
+        # would add a "normal" Series/DataFrame.
+        s = Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c']))
+        s2 = Series(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'])
+        res = StringIO()
+        s.to_csv(res)
+        exp = StringIO()
+        s2.to_csv(exp)
+        self.assertEqual(res.getvalue(), exp.getvalue())
+
+        df = DataFrame({"s": s})
+        df2 = DataFrame({"s": s2})
+        res = StringIO()
+        df.to_csv(res)
+        exp = StringIO()
+        df2.to_csv(exp)
+        self.assertEqual(res.getvalue(), exp.getvalue())
+
+    def test_to_csv_path_is_none(self):
+        # GH 8215
+        # Make sure we return string for consistency with
+        # Series.to_csv()
+        csv_str = self.frame.to_csv(path=None)
+        self.assertIsInstance(csv_str, str)
+        recons = pd.read_csv(StringIO(csv_str), index_col=0)
+        assert_frame_equal(self.frame, recons)
+
+    def test_to_csv_compression_gzip(self):
+        # GH7615
+        # use the compression kw in to_csv
+        df = DataFrame([[0.123456, 0.234567, 0.567567],
+                        [12.32112, 123123.2, 321321.2]],
+                       index=['A', 'B'], columns=['X', 'Y', 'Z'])
+
+        with ensure_clean() as filename:
+
+            df.to_csv(filename, compression="gzip")
+
+            # test the round trip - to_csv -> read_csv
+            rs = read_csv(filename, compression="gzip", index_col=0)
+            assert_frame_equal(df, rs)
+
+            # explicitly make sure file is gziped
+            import gzip
+            f = gzip.open(filename, 'rb')
+            text = f.read().decode('utf8')
+            f.close()
+            for col in df.columns:
+                self.assertIn(col, text)
+
+    def test_to_csv_compression_bz2(self):
+        # GH7615
+        # use the compression kw in to_csv
+        df = DataFrame([[0.123456, 0.234567, 0.567567],
+                        [12.32112, 123123.2, 321321.2]],
+                       index=['A', 'B'], columns=['X', 'Y', 'Z'])
+
+        with ensure_clean() as filename:
+
+            df.to_csv(filename, compression="bz2")
+
+            # test the round trip - to_csv -> read_csv
+            rs = read_csv(filename, compression="bz2", index_col=0)
+            assert_frame_equal(df, rs)
+
+            # explicitly make sure file is bz2ed
+            import bz2
+            f = bz2.BZ2File(filename, 'rb')
+            text = f.read().decode('utf8')
+            f.close()
+            for col in df.columns:
+                self.assertIn(col, text)
+
+    def test_to_csv_compression_value_error(self):
+        # GH7615
+        # use the compression kw in to_csv
+        df = DataFrame([[0.123456, 0.234567, 0.567567],
+                        [12.32112, 123123.2, 321321.2]],
+                       index=['A', 'B'], columns=['X', 'Y', 'Z'])
+
+        with ensure_clean() as filename:
+            # zip compression is not supported and should raise ValueError
+            self.assertRaises(ValueError, df.to_csv,
+                              filename, compression="zip")
+
+    def test_to_csv_date_format(self):
+        from pandas import to_datetime
+        pname = '__tmp_to_csv_date_format__'
+        with ensure_clean(pname) as path:
+            for engine in [None, 'python']:
+                w = FutureWarning if engine == 'python' else None
+
+                dt_index = self.tsframe.index
+                datetime_frame = DataFrame(
+                    {'A': dt_index, 'B': dt_index.shift(1)}, index=dt_index)
+
+                with tm.assert_produces_warning(w, check_stacklevel=False):
+                    datetime_frame.to_csv(
+                        path, date_format='%Y%m%d', engine=engine)
+
+                # Check that the data was put in the specified format
+                test = read_csv(path, index_col=0)
+
+                datetime_frame_int = datetime_frame.applymap(
+                    lambda x: int(x.strftime('%Y%m%d')))
+                datetime_frame_int.index = datetime_frame_int.index.map(
+                    lambda x: int(x.strftime('%Y%m%d')))
+
+                assert_frame_equal(test, datetime_frame_int)
+
+                with tm.assert_produces_warning(w, check_stacklevel=False):
+                    datetime_frame.to_csv(
+                        path, date_format='%Y-%m-%d', engine=engine)
+
+                # Check that the data was put in the specified format
+                test = read_csv(path, index_col=0)
+                datetime_frame_str = datetime_frame.applymap(
+                    lambda x: x.strftime('%Y-%m-%d'))
+                datetime_frame_str.index = datetime_frame_str.index.map(
+                    lambda x: x.strftime('%Y-%m-%d'))
+
+                assert_frame_equal(test, datetime_frame_str)
+
+                # Check that columns get converted
+                datetime_frame_columns = datetime_frame.T
+
+                with tm.assert_produces_warning(w, check_stacklevel=False):
+                    datetime_frame_columns.to_csv(
+                        path, date_format='%Y%m%d', engine=engine)
+
+                test = read_csv(path, index_col=0)
+
+                datetime_frame_columns = datetime_frame_columns.applymap(
+                    lambda x: int(x.strftime('%Y%m%d')))
+                # Columns don't get converted to ints by read_csv
+                datetime_frame_columns.columns = (
+                    datetime_frame_columns.columns
+                    .map(lambda x: x.strftime('%Y%m%d')))
+
+                assert_frame_equal(test, datetime_frame_columns)
+
+                # test NaTs
+                nat_index = to_datetime(
+                    ['NaT'] * 10 + ['2000-01-01', '1/1/2000', '1-1-2000'])
+                nat_frame = DataFrame({'A': nat_index}, index=nat_index)
+
+                with tm.assert_produces_warning(w, check_stacklevel=False):
+                    nat_frame.to_csv(
+                        path, date_format='%Y-%m-%d', engine=engine)
+
+                test = read_csv(path, parse_dates=[0, 1], index_col=0)
+
+                assert_frame_equal(test, nat_frame)
+
+    def test_to_csv_with_dst_transitions(self):
+
+        with ensure_clean('csv_date_format_with_dst') as path:
+            # make sure we are not failing on transitions
+            times = pd.date_range("2013-10-26 23:00", "2013-10-27 01:00",
+                                  tz="Europe/London",
+                                  freq="H",
+                                  ambiguous='infer')
+
+            for i in [times, times + pd.Timedelta('10s')]:
+                time_range = np.array(range(len(i)), dtype='int64')
+                df = DataFrame({'A': time_range}, index=i)
+                df.to_csv(path, index=True)
+
+                # we have to reconvert the index as we
+                # don't parse the tz's
+                result = read_csv(path, index_col=0)
+                result.index = pd.to_datetime(result.index).tz_localize(
+                    'UTC').tz_convert('Europe/London')
+                assert_frame_equal(result, df)
+
+        # GH11619
+        idx = pd.date_range('2015-01-01', '2015-12-31',
+                            freq='H', tz='Europe/Paris')
+        df = DataFrame({'values': 1, 'idx': idx},
+                       index=idx)
+        with ensure_clean('csv_date_format_with_dst') as path:
+            df.to_csv(path, index=True)
+            result = read_csv(path, index_col=0)
+            result.index = pd.to_datetime(result.index).tz_localize(
+                'UTC').tz_convert('Europe/Paris')
+            result['idx'] = pd.to_datetime(result['idx']).astype(
+                'datetime64[ns, Europe/Paris]')
+            assert_frame_equal(result, df)
+
+        # assert working
+        df.astype(str)
+
+        with ensure_clean('csv_date_format_with_dst') as path:
+            df.to_pickle(path)
+            result = pd.read_pickle(path)
+            assert_frame_equal(result, df)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
deleted file mode 100644
index ba546b6daac77..0000000000000
--- a/pandas/tests/test_frame.py
+++ /dev/null
@@ -1,17190 +0,0 @@
-# -*- coding: utf-8 -*-
-
-from __future__ import print_function
-# pylint: disable-msg=W0612,E1101
-from copy import deepcopy
-from datetime import datetime, timedelta, time, date
-import sys
-import operator
-import re
-import csv
-import nose
-import functools
-import itertools
-from itertools import product, permutations
-from distutils.version import LooseVersion
-
-from pandas.compat import(
-    map, zip, range, long, lrange, lmap, lzip,
-    OrderedDict, u, StringIO, is_platform_windows
-)
-from pandas import compat
-
-from numpy import random, nan, inf
-from numpy.random import randn
-import numpy as np
-import numpy.ma as ma
-import numpy.ma.mrecords as mrecords
-
-import pandas.core.nanops as nanops
-import pandas.core.common as com
-import pandas.core.format as fmt
-import pandas.core.datetools as datetools
-from pandas import (DataFrame, Index, Series, Panel, notnull, isnull,
-                    MultiIndex, DatetimeIndex, Timestamp, date_range,
-                    read_csv, timedelta_range, Timedelta, option_context, period_range)
-from pandas.core.dtypes import DatetimeTZDtype
-import pandas as pd
-from pandas.parser import CParserError
-from pandas.util.misc import is_little_endian
-
-from pandas.util.testing import (assert_almost_equal,
-                                 assert_equal,
-                                 assert_numpy_array_equal,
-                                 assert_series_equal,
-                                 assert_frame_equal,
-                                 assertRaisesRegexp,
-                                 assertRaises,
-                                 makeCustomDataframe as mkdf,
-                                 ensure_clean,
-                                 SubclassedDataFrame)
-from pandas.core.indexing import IndexingError
-from pandas.core.common import PandasError
-
-import pandas.util.testing as tm
-import pandas.lib as lib
-
-from numpy.testing.decorators import slow
-from pandas import _np_version_under1p9
-
-# ---------------------------------------------------------------------
-# DataFrame test cases
-
-JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-MIXED_FLOAT_DTYPES = ['float16','float32','float64']
-MIXED_INT_DTYPES = ['uint8','uint16','uint32','uint64','int8','int16',
-                    'int32','int64']
-
-def _check_mixed_float(df, dtype = None):
-
-    # float16 are most likely to be upcasted to float32
-    dtypes = dict(A = 'float32', B = 'float32', C = 'float16', D = 'float64')
-    if isinstance(dtype, compat.string_types):
-        dtypes = dict([ (k,dtype) for k, v in dtypes.items() ])
-    elif isinstance(dtype, dict):
-        dtypes.update(dtype)
-    if dtypes.get('A'):
-        assert(df.dtypes['A'] == dtypes['A'])
-    if dtypes.get('B'):
-        assert(df.dtypes['B'] == dtypes['B'])
-    if dtypes.get('C'):
-        assert(df.dtypes['C'] == dtypes['C'])
-    if dtypes.get('D'):
-        assert(df.dtypes['D'] == dtypes['D'])
-
-
-def _check_mixed_int(df, dtype = None):
-    dtypes = dict(A = 'int32', B = 'uint64', C = 'uint8', D = 'int64')
-    if isinstance(dtype, compat.string_types):
-        dtypes = dict([ (k,dtype) for k, v in dtypes.items() ])
-    elif isinstance(dtype, dict):
-        dtypes.update(dtype)
-    if dtypes.get('A'):
-        assert(df.dtypes['A'] == dtypes['A'])
-    if dtypes.get('B'):
-        assert(df.dtypes['B'] == dtypes['B'])
-    if dtypes.get('C'):
-        assert(df.dtypes['C'] == dtypes['C'])
-    if dtypes.get('D'):
-        assert(df.dtypes['D'] == dtypes['D'])
-
-
-class CheckIndexing(object):
-
-    _multiprocess_can_split_ = True
-
-    def test_getitem(self):
-        # slicing
-        sl = self.frame[:20]
-        self.assertEqual(20, len(sl.index))
-
-        # column access
-
-        for _, series in compat.iteritems(sl):
-            self.assertEqual(20, len(series.index))
-            self.assertTrue(tm.equalContents(series.index, sl.index))
-
-        for key, _ in compat.iteritems(self.frame._series):
-            self.assertIsNotNone(self.frame[key])
-
-        self.assertNotIn('random', self.frame)
-        with assertRaisesRegexp(KeyError, 'random'):
-            self.frame['random']
-
-        df = self.frame.copy()
-        df['$10'] = randn(len(df))
-        ad = randn(len(df))
-        df['@awesome_domain'] = ad
-        self.assertRaises(KeyError, df.__getitem__, 'df["$10"]')
-        res = df['@awesome_domain']
-        assert_numpy_array_equal(ad, res.values)
-
-    def test_getitem_dupe_cols(self):
-        df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b'])
-        try:
-            df[['baf']]
-        except KeyError:
-            pass
-        else:
-            self.fail("Dataframe failed to raise KeyError")
-
-    def test_get(self):
-        b = self.frame.get('B')
-        assert_series_equal(b, self.frame['B'])
-
-        self.assertIsNone(self.frame.get('foo'))
-        assert_series_equal(self.frame.get('foo', self.frame['B']),
-                            self.frame['B'])
-        # None
-        # GH 5652
-        for df in [DataFrame(), DataFrame(columns=list('AB')), DataFrame(columns=list('AB'),index=range(3)) ]:
-            result = df.get(None)
-            self.assertIsNone(result)
-
-    def test_getitem_iterator(self):
-        idx = iter(['A', 'B', 'C'])
-        result = self.frame.ix[:, idx]
-        expected = self.frame.ix[:, ['A', 'B', 'C']]
-        assert_frame_equal(result, expected)
-
-    def test_getitem_list(self):
-        self.frame.columns.name = 'foo'
-
-        result = self.frame[['B', 'A']]
-        result2 = self.frame[Index(['B', 'A'])]
-
-        expected = self.frame.ix[:, ['B', 'A']]
-        expected.columns.name = 'foo'
-
-        assert_frame_equal(result, expected)
-        assert_frame_equal(result2, expected)
-
-        self.assertEqual(result.columns.name, 'foo')
-
-        with assertRaisesRegexp(KeyError, 'not in index'):
-            self.frame[['B', 'A', 'food']]
-        with assertRaisesRegexp(KeyError, 'not in index'):
-            self.frame[Index(['B', 'A', 'foo'])]
-
-        # tuples
-        df = DataFrame(randn(8, 3),
-                       columns=Index([('foo', 'bar'), ('baz', 'qux'),
-                                      ('peek', 'aboo')], name=['sth', 'sth2']))
-
-        result = df[[('foo', 'bar'), ('baz', 'qux')]]
-        expected = df.ix[:, :2]
-        assert_frame_equal(result, expected)
-        self.assertEqual(result.columns.names, ['sth', 'sth2'])
-
-    def test_setitem_list(self):
-
-        self.frame['E'] = 'foo'
-        data = self.frame[['A', 'B']]
-        self.frame[['B', 'A']] = data
-
-        assert_series_equal(self.frame['B'], data['A'], check_names=False)
-        assert_series_equal(self.frame['A'], data['B'], check_names=False)
-
-        with assertRaisesRegexp(ValueError, 'Columns must be same length as key'):
-            data[['A']] = self.frame[['A', 'B']]
-        with assertRaisesRegexp(ValueError, 'Length of values does not match '
-                                'length of index'):
-            data['A'] = range(len(data.index) - 1)
-
-        df = DataFrame(0, lrange(3), ['tt1', 'tt2'], dtype=np.int_)
-        df.ix[1, ['tt1', 'tt2']] = [1, 2]
-
-        result = df.ix[1, ['tt1', 'tt2']]
-        expected = Series([1, 2], df.columns, dtype=np.int_, name=1)
-        assert_series_equal(result, expected)
-
-        df['tt1'] = df['tt2'] = '0'
-        df.ix[1, ['tt1', 'tt2']] = ['1', '2']
-        result = df.ix[1, ['tt1', 'tt2']]
-        expected = Series(['1', '2'], df.columns, name=1)
-        assert_series_equal(result, expected)
-
-    def test_setitem_list_not_dataframe(self):
-        data = np.random.randn(len(self.frame), 2)
-        self.frame[['A', 'B']] = data
-        assert_almost_equal(self.frame[['A', 'B']].values, data)
-
-    def test_setitem_list_of_tuples(self):
-        tuples = lzip(self.frame['A'], self.frame['B'])
-        self.frame['tuples'] = tuples
-
-        result = self.frame['tuples']
-        expected = Series(tuples, index=self.frame.index, name='tuples')
-        assert_series_equal(result, expected)
-
-    def test_setitem_mulit_index(self):
-        # GH7655, test that assigning to a sub-frame of a frame
-        # with multi-index columns aligns both rows and columns
-        it = ['jim', 'joe', 'jolie'], ['first', 'last'], \
-             ['left', 'center', 'right']
-
-        cols = MultiIndex.from_product(it)
-        index = pd.date_range('20141006',periods=20)
-        vals = np.random.randint(1, 1000, (len(index), len(cols)))
-        df = pd.DataFrame(vals, columns=cols, index=index)
-
-        i, j = df.index.values.copy(), it[-1][:]
-
-        np.random.shuffle(i)
-        df['jim'] = df['jolie'].loc[i, ::-1]
-        assert_frame_equal(df['jim'], df['jolie'])
-
-        np.random.shuffle(j)
-        df[('joe', 'first')] = df[('jolie', 'last')].loc[i, j]
-        assert_frame_equal(df[('joe', 'first')], df[('jolie', 'last')])
-
-        np.random.shuffle(j)
-        df[('joe', 'last')] = df[('jolie', 'first')].loc[i, j]
-        assert_frame_equal(df[('joe', 'last')], df[('jolie', 'first')])
-
-    def test_inplace_ops_alignment(self):
-
-        # inplace ops / ops alignment
-        # GH 8511
-
-        columns = list('abcdefg')
-        X_orig = DataFrame(np.arange(10*len(columns)).reshape(-1,len(columns)), columns=columns, index=range(10))
-        Z = 100*X_orig.iloc[:,1:-1].copy()
-        block1 = list('bedcf')
-        subs = list('bcdef')
-
-        # add
-        X = X_orig.copy()
-        result1 = (X[block1] + Z).reindex(columns=subs)
-
-        X[block1] += Z
-        result2 = X.reindex(columns=subs)
-
-        X = X_orig.copy()
-        result3 = (X[block1] + Z[block1]).reindex(columns=subs)
-
-        X[block1] += Z[block1]
-        result4 = X.reindex(columns=subs)
-
-        assert_frame_equal(result1, result2)
-        assert_frame_equal(result1, result3)
-        assert_frame_equal(result1, result4)
-
-        # sub
-        X = X_orig.copy()
-        result1 = (X[block1] - Z).reindex(columns=subs)
-
-        X[block1] -= Z
-        result2 = X.reindex(columns=subs)
-
-        X = X_orig.copy()
-        result3 = (X[block1] - Z[block1]).reindex(columns=subs)
-
-        X[block1] -= Z[block1]
-        result4 = X.reindex(columns=subs)
-
-        assert_frame_equal(result1, result2)
-        assert_frame_equal(result1, result3)
-        assert_frame_equal(result1, result4)
-
-    def test_inplace_ops_identity(self):
-
-        # GH 5104
-        # make sure that we are actually changing the object
-        s_orig = Series([1, 2, 3])
-        df_orig = DataFrame(np.random.randint(0,5,size=10).reshape(-1,5))
-
-        # no dtype change
-        s = s_orig.copy()
-        s2 = s
-        s += 1
-        assert_series_equal(s,s2)
-        assert_series_equal(s_orig+1,s)
-        self.assertIs(s,s2)
-        self.assertIs(s._data,s2._data)
-
-        df = df_orig.copy()
-        df2 = df
-        df += 1
-        assert_frame_equal(df,df2)
-        assert_frame_equal(df_orig+1,df)
-        self.assertIs(df,df2)
-        self.assertIs(df._data,df2._data)
-
-        # dtype change
-        s = s_orig.copy()
-        s2 = s
-        s += 1.5
-        assert_series_equal(s,s2)
-        assert_series_equal(s_orig+1.5,s)
-
-        df = df_orig.copy()
-        df2 = df
-        df += 1.5
-        assert_frame_equal(df,df2)
-        assert_frame_equal(df_orig+1.5,df)
-        self.assertIs(df,df2)
-        self.assertIs(df._data,df2._data)
-
-        # mixed dtype
-        arr = np.random.randint(0,10,size=5)
-        df_orig = DataFrame({'A' : arr.copy(), 'B' : 'foo'})
-        df = df_orig.copy()
-        df2 = df
-        df['A'] += 1
-        expected = DataFrame({'A' : arr.copy()+1, 'B' : 'foo'})
-        assert_frame_equal(df,expected)
-        assert_frame_equal(df2,expected)
-        self.assertIs(df._data,df2._data)
-
-        df = df_orig.copy()
-        df2 = df
-        df['A'] += 1.5
-        expected = DataFrame({'A' : arr.copy()+1.5, 'B' : 'foo'})
-        assert_frame_equal(df,expected)
-        assert_frame_equal(df2,expected)
-        self.assertIs(df._data,df2._data)
-
-    def test_getitem_boolean(self):
-        # boolean indexing
-        d = self.tsframe.index[10]
-        indexer = self.tsframe.index > d
-        indexer_obj = indexer.astype(object)
-
-        subindex = self.tsframe.index[indexer]
-        subframe = self.tsframe[indexer]
-
-        self.assert_numpy_array_equal(subindex, subframe.index)
-        with assertRaisesRegexp(ValueError, 'Item wrong length'):
-            self.tsframe[indexer[:-1]]
-
-        subframe_obj = self.tsframe[indexer_obj]
-        assert_frame_equal(subframe_obj, subframe)
-
-        with tm.assertRaisesRegexp(ValueError, 'boolean values only'):
-            self.tsframe[self.tsframe]
-
-        # test that Series work
-        indexer_obj = Series(indexer_obj, self.tsframe.index)
-
-        subframe_obj = self.tsframe[indexer_obj]
-        assert_frame_equal(subframe_obj, subframe)
-
-        # test that Series indexers reindex
-        with tm.assert_produces_warning(UserWarning):
-            indexer_obj = indexer_obj.reindex(self.tsframe.index[::-1])
-
-        subframe_obj = self.tsframe[indexer_obj]
-        assert_frame_equal(subframe_obj, subframe)
-
-        # test df[df > 0]
-        for df in [ self.tsframe, self.mixed_frame, self.mixed_float, self.mixed_int ]:
-
-            data = df._get_numeric_data()
-            bif = df[df > 0]
-            bifw = DataFrame(dict([ (c,np.where(data[c] > 0, data[c], np.nan)) for c in data.columns ]),
-                             index=data.index, columns=data.columns)
-
-            # add back other columns to compare
-            for c in df.columns:
-                if c not in bifw:
-                    bifw[c] = df[c]
-            bifw = bifw.reindex(columns = df.columns)
-
-            assert_frame_equal(bif, bifw, check_dtype=False)
-            for c in df.columns:
-                if bif[c].dtype != bifw[c].dtype:
-                    self.assertEqual(bif[c].dtype, df[c].dtype)
-
-    def test_getitem_boolean_casting(self):
-
-        # don't upcast if we don't need to
-        df = self.tsframe.copy()
-        df['E'] = 1
-        df['E'] = df['E'].astype('int32')
-        df['E1'] = df['E'].copy()
-        df['F'] = 1
-        df['F'] = df['F'].astype('int64')
-        df['F1'] = df['F'].copy()
-
-        casted = df[df>0]
-        result = casted.get_dtype_counts()
-        expected = Series({'float64': 4, 'int32' : 2, 'int64' : 2})
-        assert_series_equal(result, expected)
-
-        # int block splitting
-        df.ix[1:3,['E1','F1']] = 0
-        casted = df[df>0]
-        result = casted.get_dtype_counts()
-        expected = Series({'float64': 6, 'int32' : 1, 'int64' : 1})
-        assert_series_equal(result, expected)
-
-        # where dtype conversions
-        # GH 3733
-        df = DataFrame(data = np.random.randn(100, 50))
-        df = df.where(df > 0) # create nans
-        bools = df > 0
-        mask = isnull(df)
-        expected = bools.astype(float).mask(mask)
-        result = bools.mask(mask)
-        assert_frame_equal(result,expected)
-
-    def test_getitem_boolean_list(self):
-        df = DataFrame(np.arange(12).reshape(3, 4))
-
-        def _checkit(lst):
-            result = df[lst]
-            expected = df.ix[df.index[lst]]
-            assert_frame_equal(result, expected)
-
-        _checkit([True, False, True])
-        _checkit([True, True, True])
-        _checkit([False, False, False])
-
-    def test_getitem_boolean_iadd(self):
-        arr = randn(5, 5)
-
-        df = DataFrame(arr.copy(), columns = ['A','B','C','D','E'])
-
-        df[df < 0] += 1
-        arr[arr < 0] += 1
-
-        assert_almost_equal(df.values, arr)
-
-    def test_boolean_index_empty_corner(self):
-        # #2096
-        blah = DataFrame(np.empty([0, 1]), columns=['A'],
-                         index=DatetimeIndex([]))
-
-        # both of these should succeed trivially
-        k = np.array([], bool)
-
-        blah[k]
-        blah[k] = 0
-
-    def test_getitem_ix_mixed_integer(self):
-        df = DataFrame(np.random.randn(4, 3),
-                       index=[1, 10, 'C', 'E'], columns=[1, 2, 3])
-
-        result = df.ix[:-1]
-        expected = df.ix[df.index[:-1]]
-        assert_frame_equal(result, expected)
-
-        result = df.ix[[1, 10]]
-        expected = df.ix[Index([1, 10], dtype=object)]
-        assert_frame_equal(result, expected)
-
-        # 11320
-        df = pd.DataFrame({ "rna": (1.5,2.2,3.2,4.5),
-                            -1000: [11,21,36,40],
-                            0: [10,22,43,34],
-                            1000:[0, 10, 20, 30] },columns=['rna',-1000,0,1000])
-        result = df[[1000]]
-        expected = df.iloc[:,[3]]
-        assert_frame_equal(result, expected)
-        result = df[[-1000]]
-        expected = df.iloc[:,[1]]
-        assert_frame_equal(result, expected)
-
-    def test_getitem_setitem_ix_negative_integers(self):
-        result = self.frame.ix[:, -1]
-        assert_series_equal(result, self.frame['D'])
-
-        result = self.frame.ix[:, [-1]]
-        assert_frame_equal(result, self.frame[['D']])
-
-        result = self.frame.ix[:, [-1, -2]]
-        assert_frame_equal(result, self.frame[['D', 'C']])
-
-        self.frame.ix[:, [-1]] = 0
-        self.assertTrue((self.frame['D'] == 0).all())
-
-        df = DataFrame(np.random.randn(8, 4))
-        self.assertTrue(isnull(df.ix[:, [-1]].values).all())
-
-        # #1942
-        a = DataFrame(randn(20, 2), index=[chr(x + 65) for x in range(20)])
-        a.ix[-1] = a.ix[-2]
-
-        assert_series_equal(a.ix[-1], a.ix[-2], check_names=False)
-        self.assertEqual(a.ix[-1].name, 'T')
-        self.assertEqual(a.ix[-2].name, 'S')
-
-    def test_getattr(self):
-        assert_series_equal(self.frame.A, self.frame['A'])
-        self.assertRaises(AttributeError, getattr, self.frame,
-                          'NONEXISTENT_NAME')
-
-    def test_setattr_column(self):
-        df = DataFrame({'foobar': 1}, index=lrange(10))
-
-        df.foobar = 5
-        self.assertTrue((df.foobar == 5).all())
-
-    def test_setitem(self):
-        # not sure what else to do here
-        series = self.frame['A'][::2]
-        self.frame['col5'] = series
-        self.assertIn('col5', self.frame)
-        tm.assert_dict_equal(series, self.frame['col5'],
-                             compare_keys=False)
-
-        series = self.frame['A']
-        self.frame['col6'] = series
-        tm.assert_dict_equal(series, self.frame['col6'],
-                             compare_keys=False)
-
-        with tm.assertRaises(KeyError):
-            self.frame[randn(len(self.frame) + 1)] = 1
-
-        # set ndarray
-        arr = randn(len(self.frame))
-        self.frame['col9'] = arr
-        self.assertTrue((self.frame['col9'] == arr).all())
-
-        self.frame['col7'] = 5
-        assert((self.frame['col7'] == 5).all())
-
-        self.frame['col0'] = 3.14
-        assert((self.frame['col0'] == 3.14).all())
-
-        self.frame['col8'] = 'foo'
-        assert((self.frame['col8'] == 'foo').all())
-
-        # this is partially a view (e.g. some blocks are view)
-        # so raise/warn
-        smaller = self.frame[:2]
-        def f():
-            smaller['col10'] = ['1', '2']
-        self.assertRaises(com.SettingWithCopyError, f)
-        self.assertEqual(smaller['col10'].dtype, np.object_)
-        self.assertTrue((smaller['col10'] == ['1', '2']).all())
-
-        # with a dtype
-        for dtype in ['int32','int64','float32','float64']:
-            self.frame[dtype] = np.array(arr,dtype=dtype)
-            self.assertEqual(self.frame[dtype].dtype.name, dtype)
-
-        # dtype changing GH4204
-        df = DataFrame([[0,0]])
-        df.iloc[0] = np.nan
-        expected = DataFrame([[np.nan,np.nan]])
-        assert_frame_equal(df,expected)
-
-        df = DataFrame([[0,0]])
-        df.loc[0] = np.nan
-        assert_frame_equal(df,expected)
-
-    def test_setitem_tuple(self):
-        self.frame['A', 'B'] = self.frame['A']
-        assert_series_equal(self.frame['A', 'B'], self.frame['A'], check_names=False)
-
-    def test_setitem_always_copy(self):
-        s = self.frame['A'].copy()
-        self.frame['E'] = s
-
-        self.frame['E'][5:10] = nan
-        self.assertTrue(notnull(s[5:10]).all())
-
-    def test_setitem_boolean(self):
-        df = self.frame.copy()
-        values = self.frame.values
-
-        df[df['A'] > 0] = 4
-        values[values[:, 0] > 0] = 4
-        assert_almost_equal(df.values, values)
-
-        # test that column reindexing works
-        series = df['A'] == 4
-        series = series.reindex(df.index[::-1])
-        df[series] = 1
-        values[values[:, 0] == 4] = 1
-        assert_almost_equal(df.values, values)
-
-        df[df > 0] = 5
-        values[values > 0] = 5
-        assert_almost_equal(df.values, values)
-
-        df[df == 5] = 0
-        values[values == 5] = 0
-        assert_almost_equal(df.values, values)
-
-        # a df that needs alignment first
-        df[df[:-1] < 0] = 2
-        np.putmask(values[:-1], values[:-1] < 0, 2)
-        assert_almost_equal(df.values, values)
-
-        # indexed with same shape but rows-reversed df
-        df[df[::-1] == 2] = 3
-        values[values == 2] = 3
-        assert_almost_equal(df.values, values)
-
-        with assertRaisesRegexp(TypeError, 'Must pass DataFrame with boolean '
-                                'values only'):
-            df[df * 0] = 2
-
-        # index with DataFrame
-        mask = df > np.abs(df)
-        expected = df.copy()
-        df[df > np.abs(df)] = nan
-        expected.values[mask.values] = nan
-        assert_frame_equal(df, expected)
-
-        # set from DataFrame
-        expected = df.copy()
-        df[df > np.abs(df)] = df * 2
-        np.putmask(expected.values, mask.values, df.values * 2)
-        assert_frame_equal(df, expected)
-
-    def test_setitem_cast(self):
-        self.frame['D'] = self.frame['D'].astype('i8')
-        self.assertEqual(self.frame['D'].dtype, np.int64)
-
-        # #669, should not cast?
-        # this is now set to int64, which means a replacement of the column to
-        # the value dtype (and nothing to do with the existing dtype)
-        self.frame['B'] = 0
-        self.assertEqual(self.frame['B'].dtype, np.int64)
-
-        # cast if pass array of course
-        self.frame['B'] = np.arange(len(self.frame))
-        self.assertTrue(issubclass(self.frame['B'].dtype.type, np.integer))
-
-        self.frame['foo'] = 'bar'
-        self.frame['foo'] = 0
-        self.assertEqual(self.frame['foo'].dtype, np.int64)
-
-        self.frame['foo'] = 'bar'
-        self.frame['foo'] = 2.5
-        self.assertEqual(self.frame['foo'].dtype, np.float64)
-
-        self.frame['something'] = 0
-        self.assertEqual(self.frame['something'].dtype, np.int64)
-        self.frame['something'] = 2
-        self.assertEqual(self.frame['something'].dtype, np.int64)
-        self.frame['something'] = 2.5
-        self.assertEqual(self.frame['something'].dtype, np.float64)
-
-        # GH 7704
-        # dtype conversion on setting
-        df = DataFrame(np.random.rand(30, 3), columns=tuple('ABC'))
-        df['event'] = np.nan
-        df.loc[10,'event'] = 'foo'
-        result = df.get_dtype_counts().sort_values()
-        expected = Series({'float64' : 3, 'object' : 1 }).sort_values()
-        assert_series_equal(result, expected)
-
-        # Test that data type is preserved . #5782
-        df = DataFrame({'one': np.arange(6, dtype=np.int8)})
-        df.loc[1, 'one'] = 6
-        self.assertEqual(df.dtypes.one, np.dtype(np.int8))
-        df.one = np.int8(7)
-        self.assertEqual(df.dtypes.one, np.dtype(np.int8))
-
-    def test_setitem_boolean_column(self):
-        expected = self.frame.copy()
-        mask = self.frame['A'] > 0
-
-        self.frame.ix[mask, 'B'] = 0
-        expected.values[mask.values, 1] = 0
-
-        assert_frame_equal(self.frame, expected)
-
-    def test_setitem_corner(self):
-        # corner case
-        df = DataFrame({'B': [1., 2., 3.],
-                        'C': ['a', 'b', 'c']},
-                       index=np.arange(3))
-        del df['B']
-        df['B'] = [1., 2., 3.]
-        self.assertIn('B', df)
-        self.assertEqual(len(df.columns), 2)
-
-        df['A'] = 'beginning'
-        df['E'] = 'foo'
-        df['D'] = 'bar'
-        df[datetime.now()] = 'date'
-        df[datetime.now()] = 5.
-
-        # what to do when empty frame with index
-        dm = DataFrame(index=self.frame.index)
-        dm['A'] = 'foo'
-        dm['B'] = 'bar'
-        self.assertEqual(len(dm.columns), 2)
-        self.assertEqual(dm.values.dtype, np.object_)
-
-        # upcast
-        dm['C'] = 1
-        self.assertEqual(dm['C'].dtype, np.int64)
-
-        dm['E'] = 1.
-        self.assertEqual(dm['E'].dtype, np.float64)
-
-        # set existing column
-        dm['A'] = 'bar'
-        self.assertEqual('bar', dm['A'][0])
-
-        dm = DataFrame(index=np.arange(3))
-        dm['A'] = 1
-        dm['foo'] = 'bar'
-        del dm['foo']
-        dm['foo'] = 'bar'
-        self.assertEqual(dm['foo'].dtype, np.object_)
-
-        dm['coercable'] = ['1', '2', '3']
-        self.assertEqual(dm['coercable'].dtype, np.object_)
-
-    def test_setitem_corner2(self):
-        data = {"title": ['foobar', 'bar', 'foobar'] + ['foobar'] * 17,
-                "cruft": np.random.random(20)}
-
-        df = DataFrame(data)
-        ix = df[df['title'] == 'bar'].index
-
-        df.ix[ix, ['title']] = 'foobar'
-        df.ix[ix, ['cruft']] = 0
-
-        assert(df.ix[1, 'title'] == 'foobar')
-        assert(df.ix[1, 'cruft'] == 0)
-
-    def test_setitem_ambig(self):
-        # difficulties with mixed-type data
-        from decimal import Decimal
-
-        # created as float type
-        dm = DataFrame(index=lrange(3), columns=lrange(3))
-
-        coercable_series = Series([Decimal(1) for _ in range(3)],
-                                  index=lrange(3))
-        uncoercable_series = Series(['foo', 'bzr', 'baz'], index=lrange(3))
-
-        dm[0] = np.ones(3)
-        self.assertEqual(len(dm.columns), 3)
-        # self.assertIsNone(dm.objects)
-
-        dm[1] = coercable_series
-        self.assertEqual(len(dm.columns), 3)
-        # self.assertIsNone(dm.objects)
-
-        dm[2] = uncoercable_series
-        self.assertEqual(len(dm.columns), 3)
-        # self.assertIsNotNone(dm.objects)
-        self.assertEqual(dm[2].dtype, np.object_)
-
-    def test_setitem_clear_caches(self):
-        # GH #304
-        df = DataFrame({'x': [1.1, 2.1, 3.1, 4.1], 'y': [5.1, 6.1, 7.1, 8.1]},
-                       index=[0, 1, 2, 3])
-        df.insert(2, 'z', np.nan)
-
-        # cache it
-        foo = df['z']
-
-        df.ix[2:, 'z'] = 42
-
-        expected = Series([np.nan, np.nan, 42, 42], index=df.index, name='z')
-        self.assertIsNot(df['z'], foo)
-        assert_series_equal(df['z'], expected)
-
-    def test_setitem_None(self):
-        # GH #766
-        self.frame[None] = self.frame['A']
-        assert_series_equal(self.frame.iloc[:,-1], self.frame['A'], check_names=False)
-        assert_series_equal(self.frame.loc[:,None], self.frame['A'], check_names=False)
-        assert_series_equal(self.frame[None], self.frame['A'], check_names=False)
-        repr(self.frame)
-
-    def test_setitem_empty(self):
-        # GH 9596
-        df = pd.DataFrame({'a': ['1', '2', '3'],
-                           'b': ['11', '22', '33'],
-                           'c': ['111', '222', '333']})
-
-        result = df.copy()
-        result.loc[result.b.isnull(), 'a'] = result.a
-        assert_frame_equal(result, df)
-
-    def test_setitem_empty_frame_with_boolean(self):
-        # Test for issue #10126
-
-        for dtype in ('float', 'int64'):
-            for df in [
-                    pd.DataFrame(dtype=dtype),
-                    pd.DataFrame(dtype=dtype, index=[1]),
-                    pd.DataFrame(dtype=dtype, columns=['A']),
-            ]:
-                df2 = df.copy()
-                df[df > df2] = 47
-                assert_frame_equal(df, df2)
-
-    def test_getitem_empty_frame_with_boolean(self):
-        # Test for issue #11859
-
-        df = pd.DataFrame()
-        df2 = df[df>0]
-        assert_frame_equal(df, df2)
-
-    def test_delitem_corner(self):
-        f = self.frame.copy()
-        del f['D']
-        self.assertEqual(len(f.columns), 3)
-        self.assertRaises(KeyError, f.__delitem__, 'D')
-        del f['B']
-        self.assertEqual(len(f.columns), 2)
-
-    def test_getitem_fancy_2d(self):
-        f = self.frame
-        ix = f.ix
-
-        assert_frame_equal(ix[:, ['B', 'A']], f.reindex(columns=['B', 'A']))
-
-        subidx = self.frame.index[[5, 4, 1]]
-        assert_frame_equal(ix[subidx, ['B', 'A']],
-                           f.reindex(index=subidx, columns=['B', 'A']))
-
-        # slicing rows, etc.
-        assert_frame_equal(ix[5:10], f[5:10])
-        assert_frame_equal(ix[5:10, :], f[5:10])
-        assert_frame_equal(ix[:5, ['A', 'B']],
-                           f.reindex(index=f.index[:5], columns=['A', 'B']))
-
-        # slice rows with labels, inclusive!
-        expected = ix[5:11]
-        result = ix[f.index[5]:f.index[10]]
-        assert_frame_equal(expected, result)
-
-        # slice columns
-        assert_frame_equal(ix[:, :2], f.reindex(columns=['A', 'B']))
-
-        # get view
-        exp = f.copy()
-        ix[5:10].values[:] = 5
-        exp.values[5:10] = 5
-        assert_frame_equal(f, exp)
-
-        self.assertRaises(ValueError, ix.__getitem__, f > 0.5)
-
-    def test_slice_floats(self):
-        index = [52195.504153, 52196.303147, 52198.369883]
-        df = DataFrame(np.random.rand(3, 2), index=index)
-
-        s1 = df.ix[52195.1:52196.5]
-        self.assertEqual(len(s1), 2)
-
-        s1 = df.ix[52195.1:52196.6]
-        self.assertEqual(len(s1), 2)
-
-        s1 = df.ix[52195.1:52198.9]
-        self.assertEqual(len(s1), 3)
-
-    def test_getitem_fancy_slice_integers_step(self):
-        df = DataFrame(np.random.randn(10, 5))
-
-        # this is OK
-        result = df.ix[:8:2]
-        df.ix[:8:2] = np.nan
-        self.assertTrue(isnull(df.ix[:8:2]).values.all())
-
-    def test_getitem_setitem_integer_slice_keyerrors(self):
-        df = DataFrame(np.random.randn(10, 5), index=lrange(0, 20, 2))
-
-        # this is OK
-        cp = df.copy()
-        cp.ix[4:10] = 0
-        self.assertTrue((cp.ix[4:10] == 0).values.all())
-
-        # so is this
-        cp = df.copy()
-        cp.ix[3:11] = 0
-        self.assertTrue((cp.ix[3:11] == 0).values.all())
-
-        result = df.ix[4:10]
-        result2 = df.ix[3:11]
-        expected = df.reindex([4, 6, 8, 10])
-
-        assert_frame_equal(result, expected)
-        assert_frame_equal(result2, expected)
-
-        # non-monotonic, raise KeyError
-        df2 = df.iloc[lrange(5) + lrange(5, 10)[::-1]]
-        self.assertRaises(KeyError, df2.ix.__getitem__, slice(3, 11))
-        self.assertRaises(KeyError, df2.ix.__setitem__, slice(3, 11), 0)
-
-    def test_setitem_fancy_2d(self):
-        f = self.frame
-        ix = f.ix
-
-        # case 1
-        frame = self.frame.copy()
-        expected = frame.copy()
-        frame.ix[:, ['B', 'A']] = 1
-        expected['B'] = 1.
-        expected['A'] = 1.
-        assert_frame_equal(frame, expected)
-
-        # case 2
-        frame = self.frame.copy()
-        frame2 = self.frame.copy()
-
-        expected = frame.copy()
-
-        subidx = self.frame.index[[5, 4, 1]]
-        values = randn(3, 2)
-
-        frame.ix[subidx, ['B', 'A']] = values
-        frame2.ix[[5, 4, 1], ['B', 'A']] = values
-
-        expected['B'].ix[subidx] = values[:, 0]
-        expected['A'].ix[subidx] = values[:, 1]
-
-        assert_frame_equal(frame, expected)
-        assert_frame_equal(frame2, expected)
-
-        # case 3: slicing rows, etc.
-        frame = self.frame.copy()
-
-        expected1 = self.frame.copy()
-        frame.ix[5:10] = 1.
-        expected1.values[5:10] = 1.
-        assert_frame_equal(frame, expected1)
-
-        expected2 = self.frame.copy()
-        arr = randn(5, len(frame.columns))
-        frame.ix[5:10] = arr
-        expected2.values[5:10] = arr
-        assert_frame_equal(frame, expected2)
-
-        # case 4
-        frame = self.frame.copy()
-        frame.ix[5:10, :] = 1.
-        assert_frame_equal(frame, expected1)
-        frame.ix[5:10, :] = arr
-        assert_frame_equal(frame, expected2)
-
-        # case 5
-        frame = self.frame.copy()
-        frame2 = self.frame.copy()
-
-        expected = self.frame.copy()
-        values = randn(5, 2)
-
-        frame.ix[:5, ['A', 'B']] = values
-        expected['A'][:5] = values[:, 0]
-        expected['B'][:5] = values[:, 1]
-        assert_frame_equal(frame, expected)
-
-        frame2.ix[:5, [0, 1]] = values
-        assert_frame_equal(frame2, expected)
-
-        # case 6: slice rows with labels, inclusive!
-        frame = self.frame.copy()
-        expected = self.frame.copy()
-
-        frame.ix[frame.index[5]:frame.index[10]] = 5.
-        expected.values[5:11] = 5
-        assert_frame_equal(frame, expected)
-
-        # case 7: slice columns
-        frame = self.frame.copy()
-        frame2 = self.frame.copy()
-        expected = self.frame.copy()
-
-        # slice indices
-        frame.ix[:, 1:3] = 4.
-        expected.values[:, 1:3] = 4.
-        assert_frame_equal(frame, expected)
-
-        # slice with labels
-        frame.ix[:, 'B':'C'] = 4.
- assert_frame_equal(frame, expected) - - # new corner case of boolean slicing / setting - frame = DataFrame(lzip([2, 3, 9, 6, 7], [np.nan] * 5), - columns=['a', 'b']) - lst = [100] - lst.extend([np.nan] * 4) - expected = DataFrame(lzip([100, 3, 9, 6, 7], lst), - columns=['a', 'b']) - frame[frame['a'] == 2] = 100 - assert_frame_equal(frame, expected) - - def test_fancy_getitem_slice_mixed(self): - sliced = self.mixed_frame.ix[:, -3:] - self.assertEqual(sliced['D'].dtype, np.float64) - - # get view with single block - # setting it triggers setting with copy - sliced = self.frame.ix[:, -3:] - def f(): - sliced['C'] = 4. - self.assertRaises(com.SettingWithCopyError, f) - self.assertTrue((self.frame['C'] == 4).all()) - - def test_fancy_setitem_int_labels(self): - # integer index defers to label-based indexing - - df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2)) - - tmp = df.copy() - exp = df.copy() - tmp.ix[[0, 2, 4]] = 5 - exp.values[:3] = 5 - assert_frame_equal(tmp, exp) - - tmp = df.copy() - exp = df.copy() - tmp.ix[6] = 5 - exp.values[3] = 5 - assert_frame_equal(tmp, exp) - - tmp = df.copy() - exp = df.copy() - tmp.ix[:, 2] = 5 - - # tmp correctly sets the dtype - # so match the exp way - exp[2] = 5 - assert_frame_equal(tmp, exp) - - def test_fancy_getitem_int_labels(self): - df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2)) - - result = df.ix[[4, 2, 0], [2, 0]] - expected = df.reindex(index=[4, 2, 0], columns=[2, 0]) - assert_frame_equal(result, expected) - - result = df.ix[[4, 2, 0]] - expected = df.reindex(index=[4, 2, 0]) - assert_frame_equal(result, expected) - - result = df.ix[4] - expected = df.xs(4) - assert_series_equal(result, expected) - - result = df.ix[:, 3] - expected = df[3] - assert_series_equal(result, expected) - - def test_fancy_index_int_labels_exceptions(self): - df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2)) - - # labels that aren't contained - self.assertRaises(KeyError, df.ix.__setitem__, - 
([0, 1, 2], [2, 3, 4]), 5) - - # try to set indices not contained in frame - self.assertRaises(KeyError, - self.frame.ix.__setitem__, - ['foo', 'bar', 'baz'], 1) - self.assertRaises(KeyError, - self.frame.ix.__setitem__, - (slice(None, None), ['E']), 1) - - # partial setting now allows this GH2578 - #self.assertRaises(KeyError, - # self.frame.ix.__setitem__, - # (slice(None, None), 'E'), 1) - - def test_setitem_fancy_mixed_2d(self): - self.mixed_frame.ix[:5, ['C', 'B', 'A']] = 5 - result = self.mixed_frame.ix[:5, ['C', 'B', 'A']] - self.assertTrue((result.values == 5).all()) - - self.mixed_frame.ix[5] = np.nan - self.assertTrue(isnull(self.mixed_frame.ix[5]).all()) - - self.mixed_frame.ix[5] = self.mixed_frame.ix[6] - assert_series_equal(self.mixed_frame.ix[5], self.mixed_frame.ix[6], - check_names=False) - - # #1432 - df = DataFrame({1: [1., 2., 3.], - 2: [3, 4, 5]}) - self.assertTrue(df._is_mixed_type) - - df.ix[1] = [5, 10] - - expected = DataFrame({1: [1., 5., 3.], - 2: [3, 10, 5]}) - - assert_frame_equal(df, expected) - - def test_ix_align(self): - b = Series(randn(10), name=0).sort_values() - df_orig = DataFrame(randn(10, 4)) - df = df_orig.copy() - - df.ix[:, 0] = b - assert_series_equal(df.ix[:, 0].reindex(b.index), b) - - dft = df_orig.T - dft.ix[0, :] = b - assert_series_equal(dft.ix[0, :].reindex(b.index), b) - - df = df_orig.copy() - df.ix[:5, 0] = b - s = df.ix[:5, 0] - assert_series_equal(s, b.reindex(s.index)) - - dft = df_orig.T - dft.ix[0, :5] = b - s = dft.ix[0, :5] - assert_series_equal(s, b.reindex(s.index)) - - df = df_orig.copy() - idx = [0, 1, 3, 5] - df.ix[idx, 0] = b - s = df.ix[idx, 0] - assert_series_equal(s, b.reindex(s.index)) - - dft = df_orig.T - dft.ix[0, idx] = b - s = dft.ix[0, idx] - assert_series_equal(s, b.reindex(s.index)) - - def test_ix_frame_align(self): - b = DataFrame(np.random.randn(3, 4)) - df_orig = DataFrame(randn(10, 4)) - df = df_orig.copy() - - df.ix[:3] = b - out = b.ix[:3] - assert_frame_equal(out, b) - - 
b.sort_index(inplace=True) - - df = df_orig.copy() - df.ix[[0, 1, 2]] = b - out = df.ix[[0, 1, 2]].reindex(b.index) - assert_frame_equal(out, b) - - df = df_orig.copy() - df.ix[:3] = b - out = df.ix[:3] - assert_frame_equal(out, b.reindex(out.index)) - - def test_getitem_setitem_non_ix_labels(self): - df = tm.makeTimeDataFrame() - - start, end = df.index[[5, 10]] - - result = df.ix[start:end] - result2 = df[start:end] - expected = df[5:11] - assert_frame_equal(result, expected) - assert_frame_equal(result2, expected) - - result = df.copy() - result.ix[start:end] = 0 - result2 = df.copy() - result2[start:end] = 0 - expected = df.copy() - expected[5:11] = 0 - assert_frame_equal(result, expected) - assert_frame_equal(result2, expected) - - def test_ix_multi_take(self): - df = DataFrame(np.random.randn(3, 2)) - rs = df.ix[df.index == 0, :] - xp = df.reindex([0]) - assert_frame_equal(rs, xp) - - """ #1321 - df = DataFrame(np.random.randn(3, 2)) - rs = df.ix[df.index==0, df.columns==1] - xp = df.reindex([0], [1]) - assert_frame_equal(rs, xp) - """ - - def test_ix_multi_take_nonint_index(self): - df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'], - columns=['a', 'b']) - rs = df.ix[[0], [0]] - xp = df.reindex(['x'], columns=['a']) - assert_frame_equal(rs, xp) - - def test_ix_multi_take_multiindex(self): - df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'], - columns=[['a', 'b'], ['1', '2']]) - rs = df.ix[[0], [0]] - xp = df.reindex(['x'], columns=[('a', '1')]) - assert_frame_equal(rs, xp) - - def test_ix_dup(self): - idx = Index(['a', 'a', 'b', 'c', 'd', 'd']) - df = DataFrame(np.random.randn(len(idx), 3), idx) - - sub = df.ix[:'d'] - assert_frame_equal(sub, df) - - sub = df.ix['a':'c'] - assert_frame_equal(sub, df.ix[0:4]) - - sub = df.ix['b':'d'] - assert_frame_equal(sub, df.ix[2:]) - - def test_getitem_fancy_1d(self): - f = self.frame - ix = f.ix - - # return self if no slicing...for now - self.assertIs(ix[:, :], f) - - # low dimensional slice - xs1 = 
ix[2, ['C', 'B', 'A']] - xs2 = f.xs(f.index[2]).reindex(['C', 'B', 'A']) - assert_series_equal(xs1, xs2) - - ts1 = ix[5:10, 2] - ts2 = f[f.columns[2]][5:10] - assert_series_equal(ts1, ts2) - - # positional xs - xs1 = ix[0] - xs2 = f.xs(f.index[0]) - assert_series_equal(xs1, xs2) - - xs1 = ix[f.index[5]] - xs2 = f.xs(f.index[5]) - assert_series_equal(xs1, xs2) - - # single column - assert_series_equal(ix[:, 'A'], f['A']) - - # return view - exp = f.copy() - exp.values[5] = 4 - ix[5][:] = 4 - assert_frame_equal(exp, f) - - exp.values[:, 1] = 6 - ix[:, 1][:] = 6 - assert_frame_equal(exp, f) - - # slice of mixed-frame - xs = self.mixed_frame.ix[5] - exp = self.mixed_frame.xs(self.mixed_frame.index[5]) - assert_series_equal(xs, exp) - - def test_setitem_fancy_1d(self): - - # case 1: set cross-section for indices - frame = self.frame.copy() - expected = self.frame.copy() - - frame.ix[2, ['C', 'B', 'A']] = [1., 2., 3.] - expected['C'][2] = 1. - expected['B'][2] = 2. - expected['A'][2] = 3. - assert_frame_equal(frame, expected) - - frame2 = self.frame.copy() - frame2.ix[2, [3, 2, 1]] = [1., 2., 3.] - assert_frame_equal(frame, expected) - - # case 2, set a section of a column - frame = self.frame.copy() - expected = self.frame.copy() - - vals = randn(5) - expected.values[5:10, 2] = vals - frame.ix[5:10, 2] = vals - assert_frame_equal(frame, expected) - - frame2 = self.frame.copy() - frame2.ix[5:10, 'B'] = vals - assert_frame_equal(frame, expected) - - # case 3: full xs - frame = self.frame.copy() - expected = self.frame.copy() - - frame.ix[4] = 5. - expected.values[4] = 5. - assert_frame_equal(frame, expected) - - frame.ix[frame.index[4]] = 6. - expected.values[4] = 6. - assert_frame_equal(frame, expected) - - # single column - frame = self.frame.copy() - expected = self.frame.copy() - - frame.ix[:, 'A'] = 7. - expected['A'] = 7. 
- assert_frame_equal(frame, expected) - - def test_getitem_fancy_scalar(self): - f = self.frame - ix = f.ix - # individual value - for col in f.columns: - ts = f[col] - for idx in f.index[::5]: - assert_almost_equal(ix[idx, col], ts[idx]) - - def test_setitem_fancy_scalar(self): - f = self.frame - expected = self.frame.copy() - ix = f.ix - # individual value - for j, col in enumerate(f.columns): - ts = f[col] - for idx in f.index[::5]: - i = f.index.get_loc(idx) - val = randn() - expected.values[i, j] = val - ix[idx, col] = val - assert_frame_equal(f, expected) - - def test_getitem_fancy_boolean(self): - f = self.frame - ix = f.ix - - expected = f.reindex(columns=['B', 'D']) - result = ix[:, [False, True, False, True]] - assert_frame_equal(result, expected) - - expected = f.reindex(index=f.index[5:10], columns=['B', 'D']) - result = ix[5:10, [False, True, False, True]] - assert_frame_equal(result, expected) - - boolvec = f.index > f.index[7] - expected = f.reindex(index=f.index[boolvec]) - result = ix[boolvec] - assert_frame_equal(result, expected) - result = ix[boolvec, :] - assert_frame_equal(result, expected) - - result = ix[boolvec, 2:] - expected = f.reindex(index=f.index[boolvec], - columns=['C', 'D']) - assert_frame_equal(result, expected) - - def test_setitem_fancy_boolean(self): - # from 2d, set with booleans - frame = self.frame.copy() - expected = self.frame.copy() - - mask = frame['A'] > 0 - frame.ix[mask] = 0. - expected.values[mask.values] = 0. - assert_frame_equal(frame, expected) - - frame = self.frame.copy() - expected = self.frame.copy() - frame.ix[mask, ['A', 'B']] = 0. - expected.values[mask.values, :2] = 0. 
- assert_frame_equal(frame, expected) - - def test_getitem_fancy_ints(self): - result = self.frame.ix[[1, 4, 7]] - expected = self.frame.ix[self.frame.index[[1, 4, 7]]] - assert_frame_equal(result, expected) - - result = self.frame.ix[:, [2, 0, 1]] - expected = self.frame.ix[:, self.frame.columns[[2, 0, 1]]] - assert_frame_equal(result, expected) - - def test_getitem_setitem_fancy_exceptions(self): - ix = self.frame.ix - with assertRaisesRegexp(IndexingError, 'Too many indexers'): - ix[:, :, :] - - with assertRaises(IndexingError): - ix[:, :, :] = 1 - - def test_getitem_setitem_boolean_misaligned(self): - # boolean index misaligned labels - mask = self.frame['A'][::-1] > 1 - - result = self.frame.ix[mask] - expected = self.frame.ix[mask[::-1]] - assert_frame_equal(result, expected) - - cp = self.frame.copy() - expected = self.frame.copy() - cp.ix[mask] = 0 - expected.ix[mask] = 0 - assert_frame_equal(cp, expected) - - def test_getitem_setitem_boolean_multi(self): - df = DataFrame(np.random.randn(3, 2)) - - # get - k1 = np.array([True, False, True]) - k2 = np.array([False, True]) - result = df.ix[k1, k2] - expected = df.ix[[0, 2], [1]] - assert_frame_equal(result, expected) - - expected = df.copy() - df.ix[np.array([True, False, True]), - np.array([False, True])] = 5 - expected.ix[[0, 2], [1]] = 5 - assert_frame_equal(df, expected) - - def test_getitem_setitem_float_labels(self): - index = Index([1.5, 2, 3, 4, 5]) - df = DataFrame(np.random.randn(5, 5), index=index) - - result = df.ix[1.5:4] - expected = df.reindex([1.5, 2, 3, 4]) - assert_frame_equal(result, expected) - self.assertEqual(len(result), 4) - - result = df.ix[4:5] - expected = df.reindex([4, 5]) # reindex with int - assert_frame_equal(result, expected, check_index_type=False) - self.assertEqual(len(result), 2) - - result = df.ix[4:5] - expected = df.reindex([4.0, 5.0]) # reindex with float - assert_frame_equal(result, expected) - self.assertEqual(len(result), 2) - - # loc_float changes this to work 
properly - result = df.ix[1:2] - expected = df.iloc[0:2] - assert_frame_equal(result, expected) - - df.ix[1:2] = 0 - result = df[1:2] - self.assertTrue((result==0).all().all()) - - # #2727 - index = Index([1.0, 2.5, 3.5, 4.5, 5.0]) - df = DataFrame(np.random.randn(5, 5), index=index) - - # positional slicing only via iloc! - # stacklevel=False -> needed stacklevel depends on index type - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - result = df.iloc[1.0:5] - - expected = df.reindex([2.5, 3.5, 4.5, 5.0]) - assert_frame_equal(result, expected) - self.assertEqual(len(result), 4) - - result = df.iloc[4:5] - expected = df.reindex([5.0]) - assert_frame_equal(result, expected) - self.assertEqual(len(result), 1) - - # GH 4892, float indexers in iloc are deprecated - import warnings - warnings.filterwarnings(action='error', category=FutureWarning) - - cp = df.copy() - def f(): - cp.iloc[1.0:5] = 0 - self.assertRaises(FutureWarning, f) - def f(): - result = cp.iloc[1.0:5] == 0 - self.assertRaises(FutureWarning, f) - self.assertTrue(result.values.all()) - self.assertTrue((cp.iloc[0:1] == df.iloc[0:1]).values.all()) - - warnings.filterwarnings(action='default', category=FutureWarning) - - cp = df.copy() - cp.iloc[4:5] = 0 - self.assertTrue((cp.iloc[4:5] == 0).values.all()) - self.assertTrue((cp.iloc[0:4] == df.iloc[0:4]).values.all()) - - # float slicing - result = df.ix[1.0:5] - expected = df - assert_frame_equal(result, expected) - self.assertEqual(len(result), 5) - - result = df.ix[1.1:5] - expected = df.reindex([2.5, 3.5, 4.5, 5.0]) - assert_frame_equal(result, expected) - self.assertEqual(len(result), 4) - - result = df.ix[4.51:5] - expected = df.reindex([5.0]) - assert_frame_equal(result, expected) - self.assertEqual(len(result), 1) - - result = df.ix[1.0:5.0] - expected = df.reindex([1.0, 2.5, 3.5, 4.5, 5.0]) - assert_frame_equal(result, expected) - self.assertEqual(len(result), 5) - - cp = df.copy() - cp.ix[1.0:5.0] = 0 - result = 
cp.ix[1.0:5.0] - self.assertTrue((result == 0).values.all()) - - def test_setitem_single_column_mixed(self): - df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'], - columns=['foo', 'bar', 'baz']) - df['str'] = 'qux' - df.ix[::2, 'str'] = nan - expected = [nan, 'qux', nan, 'qux', nan] - assert_almost_equal(df['str'].values, expected) - - def test_setitem_single_column_mixed_datetime(self): - df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'], - columns=['foo', 'bar', 'baz']) - - df['timestamp'] = Timestamp('20010102') - - # check our dtypes - result = df.get_dtype_counts() - expected = Series({'float64': 3, 'datetime64[ns]': 1}) - assert_series_equal(result, expected) - - # set an allowable datetime64 type - from pandas import tslib - df.ix['b', 'timestamp'] = tslib.iNaT - self.assertTrue(com.isnull(df.ix['b', 'timestamp'])) - - # allow this syntax - df.ix['c', 'timestamp'] = nan - self.assertTrue(com.isnull(df.ix['c', 'timestamp'])) - - # allow this syntax - df.ix['d', :] = nan - self.assertTrue(com.isnull(df.ix['c', :]).all() == False) - - # as of GH 3216 this will now work! 
- # try to set with a list like item - #self.assertRaises( - # Exception, df.ix.__setitem__, ('d', 'timestamp'), [nan]) - - def test_setitem_frame(self): - piece = self.frame.ix[:2, ['A', 'B']] - self.frame.ix[-2:, ['A', 'B']] = piece.values - assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values, - piece.values) - - # GH 3216 - - # already aligned - f = self.mixed_frame.copy() - piece = DataFrame([[ 1, 2], [3, 4]], index=f.index[0:2],columns=['A', 'B']) - key = (slice(None,2), ['A', 'B']) - f.ix[key] = piece - assert_almost_equal(f.ix[0:2, ['A', 'B']].values, - piece.values) - - # rows unaligned - f = self.mixed_frame.copy() - piece = DataFrame([[ 1, 2 ], [3, 4], [5, 6], [7, 8]], index=list(f.index[0:2]) + ['foo','bar'],columns=['A', 'B']) - key = (slice(None,2), ['A', 'B']) - f.ix[key] = piece - assert_almost_equal(f.ix[0:2:, ['A', 'B']].values, - piece.values[0:2]) - - # key is unaligned with values - f = self.mixed_frame.copy() - piece = f.ix[:2, ['A']] - piece.index = f.index[-2:] - key = (slice(-2, None), ['A', 'B']) - f.ix[key] = piece - piece['B'] = np.nan - assert_almost_equal(f.ix[-2:, ['A', 'B']].values, - piece.values) - - # ndarray - f = self.mixed_frame.copy() - piece = self.mixed_frame.ix[:2, ['A', 'B']] - key = (slice(-2, None), ['A', 'B']) - f.ix[key] = piece.values - assert_almost_equal(f.ix[-2:, ['A', 'B']].values, - piece.values) - - - # needs upcasting - df = DataFrame([[1,2,'foo'],[3,4,'bar']],columns=['A','B','C']) - df2 = df.copy() - df2.ix[:,['A','B']] = df.ix[:,['A','B']]+0.5 - expected = df.reindex(columns=['A','B']) - expected += 0.5 - expected['C'] = df['C'] - assert_frame_equal(df2, expected) - - def test_setitem_frame_align(self): - piece = self.frame.ix[:2, ['A', 'B']] - piece.index = self.frame.index[-2:] - piece.columns = ['A', 'B'] - self.frame.ix[-2:, ['A', 'B']] = piece - assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values, - piece.values) - - def test_setitem_fancy_exceptions(self): - pass - - def 
test_getitem_boolean_missing(self): - pass - - def test_setitem_boolean_missing(self): - pass - - def test_getitem_setitem_ix_duplicates(self): - # #1201 - df = DataFrame(np.random.randn(5, 3), - index=['foo', 'foo', 'bar', 'baz', 'bar']) - - result = df.ix['foo'] - expected = df[:2] - assert_frame_equal(result, expected) - - result = df.ix['bar'] - expected = df.ix[[2, 4]] - assert_frame_equal(result, expected) - - result = df.ix['baz'] - expected = df.ix[3] - assert_series_equal(result, expected) - - def test_getitem_ix_boolean_duplicates_multiple(self): - # #1201 - df = DataFrame(np.random.randn(5, 3), - index=['foo', 'foo', 'bar', 'baz', 'bar']) - - result = df.ix[['bar']] - exp = df.ix[[2, 4]] - assert_frame_equal(result, exp) - - result = df.ix[df[1] > 0] - exp = df[df[1] > 0] - assert_frame_equal(result, exp) - - result = df.ix[df[0] > 0] - exp = df[df[0] > 0] - assert_frame_equal(result, exp) - - def test_getitem_setitem_ix_bool_keyerror(self): - # #2199 - df = DataFrame({'a': [1, 2, 3]}) - - self.assertRaises(KeyError, df.ix.__getitem__, False) - self.assertRaises(KeyError, df.ix.__getitem__, True) - - self.assertRaises(KeyError, df.ix.__setitem__, False, 0) - self.assertRaises(KeyError, df.ix.__setitem__, True, 0) - - def test_getitem_list_duplicates(self): - # #1943 - df = DataFrame(np.random.randn(4, 4), columns=list('AABC')) - df.columns.name = 'foo' - - result = df[['B', 'C']] - self.assertEqual(result.columns.name, 'foo') - - expected = df.ix[:, 2:] - assert_frame_equal(result, expected) - - def test_get_value(self): - for idx in self.frame.index: - for col in self.frame.columns: - result = self.frame.get_value(idx, col) - expected = self.frame[col][idx] - assert_almost_equal(result, expected) - - def test_iteritems(self): - df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b']) - for k, v in compat.iteritems(df): - self.assertEqual(type(v), Series) - - def test_lookup(self): - def alt(df, rows, cols): - result = [] - for r, c in zip(rows, 
cols): - result.append(df.get_value(r, c)) - return result - - def testit(df): - rows = list(df.index) * len(df.columns) - cols = list(df.columns) * len(df.index) - result = df.lookup(rows, cols) - expected = alt(df, rows, cols) - assert_almost_equal(result, expected) - - testit(self.mixed_frame) - testit(self.frame) - - df = DataFrame({'label': ['a', 'b', 'a', 'c'], - 'mask_a': [True, True, False, True], - 'mask_b': [True, False, False, False], - 'mask_c': [False, True, False, True]}) - df['mask'] = df.lookup(df.index, 'mask_' + df['label']) - exp_mask = alt(df, df.index, 'mask_' + df['label']) - assert_almost_equal(df['mask'], exp_mask) - self.assertEqual(df['mask'].dtype, np.bool_) - - with tm.assertRaises(KeyError): - self.frame.lookup(['xyz'], ['A']) - - with tm.assertRaises(KeyError): - self.frame.lookup([self.frame.index[0]], ['xyz']) - - with tm.assertRaisesRegexp(ValueError, 'same size'): - self.frame.lookup(['a', 'b', 'c'], ['a']) - - def test_set_value(self): - for idx in self.frame.index: - for col in self.frame.columns: - self.frame.set_value(idx, col, 1) - assert_almost_equal(self.frame[col][idx], 1) - - def test_set_value_resize(self): - - res = self.frame.set_value('foobar', 'B', 0) - self.assertIs(res, self.frame) - self.assertEqual(res.index[-1], 'foobar') - self.assertEqual(res.get_value('foobar', 'B'), 0) - - self.frame.loc['foobar','qux'] = 0 - self.assertEqual(self.frame.get_value('foobar', 'qux'), 0) - - res = self.frame.copy() - res3 = res.set_value('foobar', 'baz', 'sam') - self.assertEqual(res3['baz'].dtype, np.object_) - - res = self.frame.copy() - res3 = res.set_value('foobar', 'baz', True) - self.assertEqual(res3['baz'].dtype, np.object_) - - res = self.frame.copy() - res3 = res.set_value('foobar', 'baz', 5) - self.assertTrue(com.is_float_dtype(res3['baz'])) - self.assertTrue(isnull(res3['baz'].drop(['foobar'])).all()) - self.assertRaises(ValueError, res3.set_value, 'foobar', 'baz', 'sam') - - def 
test_set_value_with_index_dtype_change(self): - df_orig = DataFrame(randn(3, 3), index=lrange(3), columns=list('ABC')) - - # this is actually ambiguous as the 2 is interpreted as a positional - # so column is not created - df = df_orig.copy() - df.set_value('C', 2, 1.0) - self.assertEqual(list(df.index), list(df_orig.index) + ['C']) - #self.assertEqual(list(df.columns), list(df_orig.columns) + [2]) - - df = df_orig.copy() - df.loc['C', 2] = 1.0 - self.assertEqual(list(df.index), list(df_orig.index) + ['C']) - #self.assertEqual(list(df.columns), list(df_orig.columns) + [2]) - - # create both new - df = df_orig.copy() - df.set_value('C', 'D', 1.0) - self.assertEqual(list(df.index), list(df_orig.index) + ['C']) - self.assertEqual(list(df.columns), list(df_orig.columns) + ['D']) - - df = df_orig.copy() - df.loc['C', 'D'] = 1.0 - self.assertEqual(list(df.index), list(df_orig.index) + ['C']) - self.assertEqual(list(df.columns), list(df_orig.columns) + ['D']) - - def test_get_set_value_no_partial_indexing(self): - # partial w/ MultiIndex raise exception - index = MultiIndex.from_tuples([(0, 1), (0, 2), (1, 1), (1, 2)]) - df = DataFrame(index=index, columns=lrange(4)) - self.assertRaises(KeyError, df.get_value, 0, 1) - # self.assertRaises(KeyError, df.set_value, 0, 1, 0) - - def test_single_element_ix_dont_upcast(self): - self.frame['E'] = 1 - self.assertTrue(issubclass(self.frame['E'].dtype.type, - (int, np.integer))) - - result = self.frame.ix[self.frame.index[5], 'E'] - self.assertTrue(com.is_integer(result)) - - def test_irow(self): - df = DataFrame(np.random.randn(10, 4), index=lrange(0, 20, 2)) - - # 10711, deprecated - with tm.assert_produces_warning(FutureWarning): - df.irow(1) - - result = df.iloc[1] - exp = df.ix[2] - assert_series_equal(result, exp) - - result = df.iloc[2] - exp = df.ix[4] - assert_series_equal(result, exp) - - # slice - result = df.iloc[slice(4, 8)] - expected = df.ix[8:14] - assert_frame_equal(result, expected) - - # verify slice is view - # 
setting it makes it raise/warn - def f(): - result[2] = 0. - self.assertRaises(com.SettingWithCopyError, f) - exp_col = df[2].copy() - exp_col[4:8] = 0. - assert_series_equal(df[2], exp_col) - - # list of integers - result = df.iloc[[1, 2, 4, 6]] - expected = df.reindex(df.index[[1, 2, 4, 6]]) - assert_frame_equal(result, expected) - - def test_icol(self): - - df = DataFrame(np.random.randn(4, 10), columns=lrange(0, 20, 2)) - - # 10711, deprecated - with tm.assert_produces_warning(FutureWarning): - df.icol(1) - - result = df.iloc[:, 1] - exp = df.ix[:, 2] - assert_series_equal(result, exp) - - result = df.iloc[:, 2] - exp = df.ix[:, 4] - assert_series_equal(result, exp) - - # slice - result = df.iloc[:, slice(4, 8)] - expected = df.ix[:, 8:14] - assert_frame_equal(result, expected) - - # verify slice is view - # and that we are setting a copy - def f(): - result[8] = 0. - self.assertRaises(com.SettingWithCopyError, f) - self.assertTrue((df[8] == 0).all()) - - # list of integers - result = df.iloc[:, [1, 2, 4, 6]] - expected = df.reindex(columns=df.columns[[1, 2, 4, 6]]) - assert_frame_equal(result, expected) - - def test_irow_icol_duplicates(self): - # 10711, deprecated - - df = DataFrame(np.random.rand(3, 3), columns=list('ABC'), - index=list('aab')) - - result = df.iloc[0] - result2 = df.ix[0] - tm.assertIsInstance(result, Series) - assert_almost_equal(result.values, df.values[0]) - assert_series_equal(result, result2) - - result = df.T.iloc[:, 0] - result2 = df.T.ix[:, 0] - tm.assertIsInstance(result, Series) - assert_almost_equal(result.values, df.values[0]) - assert_series_equal(result, result2) - - # multiindex - df = DataFrame(np.random.randn(3, 3), columns=[['i', 'i', 'j'], - ['A', 'A', 'B']], - index=[['i', 'i', 'j'], ['X', 'X', 'Y']]) - rs = df.iloc[0] - xp = df.ix[0] - assert_series_equal(rs, xp) - - rs = df.iloc[:, 0] - xp = df.T.ix[0] - assert_series_equal(rs, xp) - - rs = df.iloc[:, [0]] - xp = df.ix[:, [0]] - assert_frame_equal(rs, xp) - - # #2259 - 
df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[1, 1, 2]) - result = df.iloc[:, [0]] - expected = df.take([0], axis=1) - assert_frame_equal(result, expected) - - def test_icol_sparse_propegate_fill_value(self): - from pandas.sparse.api import SparseDataFrame - df = SparseDataFrame({'A': [999, 1]}, default_fill_value=999) - self.assertTrue(len(df['A'].sp_values) == len(df.iloc[:, 0].sp_values)) - - def test_iget_value(self): - # 10711 deprecated - - with tm.assert_produces_warning(FutureWarning): - self.frame.iget_value(0,0) - - for i, row in enumerate(self.frame.index): - for j, col in enumerate(self.frame.columns): - result = self.frame.iat[i,j] - expected = self.frame.at[row, col] - assert_almost_equal(result, expected) - - def test_nested_exception(self): - # Ignore the strange way of triggering the problem - # (which may get fixed), it's just a way to trigger - # the issue or reraising an outer exception without - # a named argument - df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, - 9]}).set_index(["a", "b"]) - l = list(df.index) - l[0] = ["a", "b"] - df.index = l - - try: - repr(df) - except Exception as e: - self.assertNotEqual(type(e), UnboundLocalError) - - def test_reindex_methods(self): - df = pd.DataFrame({'x': list(range(5))}) - target = np.array([-0.1, 0.9, 1.1, 1.5]) - - for method, expected_values in [('nearest', [0, 1, 1, 2]), - ('pad', [np.nan, 0, 1, 1]), - ('backfill', [0, 1, 2, 2])]: - expected = pd.DataFrame({'x': expected_values}, index=target) - actual = df.reindex(target, method=method) - assert_frame_equal(expected, actual) - - actual = df.reindex_like(df, method=method, tolerance=0) - assert_frame_equal(df, actual) - - actual = df.reindex(target, method=method, tolerance=1) - assert_frame_equal(expected, actual) - - e2 = expected[::-1] - actual = df.reindex(target[::-1], method=method) - assert_frame_equal(e2, actual) - - new_order = [3, 0, 2, 1] - e2 = expected.iloc[new_order] - actual = df.reindex(target[new_order], 
method=method) - assert_frame_equal(e2, actual) - - switched_method = ('pad' if method == 'backfill' - else 'backfill' if method == 'pad' - else method) - actual = df[::-1].reindex(target, method=switched_method) - assert_frame_equal(expected, actual) - - expected = pd.DataFrame({'x': [0, 1, 1, np.nan]}, index=target) - actual = df.reindex(target, method='nearest', tolerance=0.2) - assert_frame_equal(expected, actual) - - def test_non_monotonic_reindex_methods(self): - dr = pd.date_range('2013-08-01', periods=6, freq='B') - data = np.random.randn(6,1) - df = pd.DataFrame(data, index=dr, columns=list('A')) - df_rev = pd.DataFrame(data, index=dr[[3, 4, 5] + [0, 1, 2]], - columns=list('A')) - # index is not monotonic increasing or decreasing - self.assertRaises(ValueError, df_rev.reindex, df.index, method='pad') - self.assertRaises(ValueError, df_rev.reindex, df.index, method='ffill') - self.assertRaises(ValueError, df_rev.reindex, df.index, method='bfill') - self.assertRaises(ValueError, df_rev.reindex, df.index, method='nearest') - - def test_reindex_level(self): - from itertools import permutations - icol = ['jim', 'joe', 'jolie'] - - def verify_first_level(df, level, idx, check_index_type=True): - f = lambda val: np.nonzero(df[level] == val)[0] - i = np.concatenate(list(map(f, idx))) - left = df.set_index(icol).reindex(idx, level=level) - right = df.iloc[i].set_index(icol) - assert_frame_equal(left, right, check_index_type=check_index_type) - - def verify(df, level, idx, indexer, check_index_type=True): - left = df.set_index(icol).reindex(idx, level=level) - right = df.iloc[indexer].set_index(icol) - assert_frame_equal(left, right, check_index_type=check_index_type) - - df = pd.DataFrame({'jim':list('B' * 4 + 'A' * 2 + 'C' * 3), - 'joe':list('abcdeabcd')[::-1], - 'jolie':[10, 20, 30] * 3, - 'joline': np.random.randint(0, 1000, 9)}) - - target = [['C', 'B', 'A'], ['F', 'C', 'A', 'D'], ['A'], - ['A', 'B', 'C'], ['C', 'A', 'B'], ['C', 'B'], ['C', 'A'], - ['A', 'B'], 
['B', 'A', 'C']] - - for idx in target: - verify_first_level(df, 'jim', idx) - - # reindex by these causes different MultiIndex levels - for idx in [['D', 'F'], ['A', 'C', 'B']]: - verify_first_level(df, 'jim', idx, check_index_type=False) - - verify(df, 'joe', list('abcde'), [3, 2, 1, 0, 5, 4, 8, 7, 6]) - verify(df, 'joe', list('abcd'), [3, 2, 1, 0, 5, 8, 7, 6]) - verify(df, 'joe', list('abc'), [3, 2, 1, 8, 7, 6]) - verify(df, 'joe', list('eca'), [1, 3, 4, 6, 8]) - verify(df, 'joe', list('edc'), [0, 1, 4, 5, 6]) - verify(df, 'joe', list('eadbc'), [3, 0, 2, 1, 4, 5, 8, 7, 6]) - verify(df, 'joe', list('edwq'), [0, 4, 5]) - verify(df, 'joe', list('wq'), [], check_index_type=False) - - df = DataFrame({'jim':['mid'] * 5 + ['btm'] * 8 + ['top'] * 7, - 'joe':['3rd'] * 2 + ['1st'] * 3 + ['2nd'] * 3 + - ['1st'] * 2 + ['3rd'] * 3 + ['1st'] * 2 + - ['3rd'] * 3 + ['2nd'] * 2, - # this needs to be jointly unique with jim and joe or - # reindexing will fail ~1.5% of the time, this works - # out to needing unique groups of same size as joe - 'jolie': np.concatenate([np.random.choice(1000, x, replace=False) - for x in [2, 3, 3, 2, 3, 2, 3, 2]]), - 'joline': np.random.randn(20).round(3) * 10}) - - for idx in permutations(df['jim'].unique()): - for i in range(3): - verify_first_level(df, 'jim', idx[:i+1]) - - i = [2,3,4,0,1,8,9,5,6,7,10,11,12,13,14,18,19,15,16,17] - verify(df, 'joe', ['1st', '2nd', '3rd'], i) - - i = [0,1,2,3,4,10,11,12,5,6,7,8,9,15,16,17,18,19,13,14] - verify(df, 'joe', ['3rd', '2nd', '1st'], i) - - i = [0,1,5,6,7,10,11,12,18,19,15,16,17] - verify(df, 'joe', ['2nd', '3rd'], i) - - i = [0,1,2,3,4,10,11,12,8,9,15,16,17,13,14] - verify(df, 'joe', ['3rd', '1st'], i) - - def test_getitem_ix_float_duplicates(self): - df = pd.DataFrame(np.random.randn(3, 3), - index=[0.1, 0.2, 0.2], columns=list('abc')) - expect = df.iloc[1:] - assert_frame_equal(df.loc[0.2], expect) - assert_frame_equal(df.ix[0.2], expect) - - expect = df.iloc[1:, 0] - assert_series_equal(df.loc[0.2, 
'a'], expect) - - df.index = [1, 0.2, 0.2] - expect = df.iloc[1:] - assert_frame_equal(df.loc[0.2], expect) - assert_frame_equal(df.ix[0.2], expect) - - expect = df.iloc[1:, 0] - assert_series_equal(df.loc[0.2, 'a'], expect) - - df = pd.DataFrame(np.random.randn(4, 3), - index=[1, 0.2, 0.2, 1], columns=list('abc')) - expect = df.iloc[1:-1] - assert_frame_equal(df.loc[0.2], expect) - assert_frame_equal(df.ix[0.2], expect) - - expect = df.iloc[1:-1, 0] - assert_series_equal(df.loc[0.2, 'a'], expect) - - df.index = [0.1, 0.2, 2, 0.2] - expect = df.iloc[[1, -1]] - assert_frame_equal(df.loc[0.2], expect) - assert_frame_equal(df.ix[0.2], expect) - - expect = df.iloc[[1, -1], 0] - assert_series_equal(df.loc[0.2, 'a'], expect) - - def test_setitem_with_sparse_value(self): - # GH8131 - df = pd.DataFrame({'c_1': ['a', 'b', 'c'], 'n_1': [1., 2., 3.]}) - sp_series = pd.Series([0, 0, 1]).to_sparse(fill_value=0) - df['new_column'] = sp_series - assert_series_equal(df['new_column'], sp_series, check_names=False) - - def test_setitem_with_unaligned_sparse_value(self): - df = pd.DataFrame({'c_1': ['a', 'b', 'c'], 'n_1': [1., 2., 3.]}) - sp_series = (pd.Series([0, 0, 1], index=[2, 1, 0]) - .to_sparse(fill_value=0)) - df['new_column'] = sp_series - exp = pd.Series([1, 0, 0], name='new_column') - assert_series_equal(df['new_column'], exp) - - -_seriesd = tm.getSeriesData() -_tsd = tm.getTimeSeriesData() - -_frame = DataFrame(_seriesd) -_frame2 = DataFrame(_seriesd, columns=['D', 'C', 'B', 'A']) -_intframe = DataFrame(dict((k, v.astype(int)) - for k, v in compat.iteritems(_seriesd))) - -_tsframe = DataFrame(_tsd) - -_mixed_frame = _frame.copy() -_mixed_frame['foo'] = 'bar' - - -class SafeForSparse(object): - - _multiprocess_can_split_ = True - - def test_copy_index_name_checking(self): - # don't want to be able to modify the index stored elsewhere after - # making a copy - for attr in ('index', 'columns'): - ind = getattr(self.frame, attr) - ind.name = None - cp = self.frame.copy() - 
getattr(cp, attr).name = 'foo' - self.assertIsNone(getattr(self.frame, attr).name) - - def test_getitem_pop_assign_name(self): - s = self.frame['A'] - self.assertEqual(s.name, 'A') - - s = self.frame.pop('A') - self.assertEqual(s.name, 'A') - - s = self.frame.ix[:, 'B'] - self.assertEqual(s.name, 'B') - - s2 = s.ix[:] - self.assertEqual(s2.name, 'B') - - def test_get_value(self): - for idx in self.frame.index: - for col in self.frame.columns: - result = self.frame.get_value(idx, col) - expected = self.frame[col][idx] - assert_almost_equal(result, expected) - - def test_join_index(self): - # left / right - - f = self.frame.reindex(columns=['A', 'B'])[:10] - f2 = self.frame.reindex(columns=['C', 'D']) - - joined = f.join(f2) - self.assertTrue(f.index.equals(joined.index)) - self.assertEqual(len(joined.columns), 4) - - joined = f.join(f2, how='left') - self.assertTrue(joined.index.equals(f.index)) - self.assertEqual(len(joined.columns), 4) - - joined = f.join(f2, how='right') - self.assertTrue(joined.index.equals(f2.index)) - self.assertEqual(len(joined.columns), 4) - - # inner - - f = self.frame.reindex(columns=['A', 'B'])[:10] - f2 = self.frame.reindex(columns=['C', 'D']) - - joined = f.join(f2, how='inner') - self.assertTrue(joined.index.equals(f.index.intersection(f2.index))) - self.assertEqual(len(joined.columns), 4) - - # outer - - f = self.frame.reindex(columns=['A', 'B'])[:10] - f2 = self.frame.reindex(columns=['C', 'D']) - - joined = f.join(f2, how='outer') - self.assertTrue(tm.equalContents(self.frame.index, joined.index)) - self.assertEqual(len(joined.columns), 4) - - assertRaisesRegexp(ValueError, 'join method', f.join, f2, how='foo') - - # corner case - overlapping columns - for how in ('outer', 'left', 'inner'): - with assertRaisesRegexp(ValueError, 'columns overlap but no suffix'): - self.frame.join(self.frame, how=how) - - def test_join_index_more(self): - af = self.frame.ix[:, ['A', 'B']] - bf = self.frame.ix[::2, ['C', 'D']] - - expected = af.copy() 
- expected['C'] = self.frame['C'][::2] - expected['D'] = self.frame['D'][::2] - - result = af.join(bf) - assert_frame_equal(result, expected) - - result = af.join(bf, how='right') - assert_frame_equal(result, expected[::2]) - - result = bf.join(af, how='right') - assert_frame_equal(result, expected.ix[:, result.columns]) - - def test_join_index_series(self): - df = self.frame.copy() - s = df.pop(self.frame.columns[-1]) - joined = df.join(s) - - assert_frame_equal(joined, self.frame, check_names=False) # TODO should this check_names ? - - s.name = None - assertRaisesRegexp(ValueError, 'must have a name', df.join, s) - - def test_join_overlap(self): - df1 = self.frame.ix[:, ['A', 'B', 'C']] - df2 = self.frame.ix[:, ['B', 'C', 'D']] - - joined = df1.join(df2, lsuffix='_df1', rsuffix='_df2') - df1_suf = df1.ix[:, ['B', 'C']].add_suffix('_df1') - df2_suf = df2.ix[:, ['B', 'C']].add_suffix('_df2') - no_overlap = self.frame.ix[:, ['A', 'D']] - expected = df1_suf.join(df2_suf).join(no_overlap) - - # column order not necessarily sorted - assert_frame_equal(joined, expected.ix[:, joined.columns]) - - def test_add_prefix_suffix(self): - with_prefix = self.frame.add_prefix('foo#') - expected = ['foo#%s' % c for c in self.frame.columns] - self.assert_numpy_array_equal(with_prefix.columns, expected) - - with_suffix = self.frame.add_suffix('#foo') - expected = ['%s#foo' % c for c in self.frame.columns] - self.assert_numpy_array_equal(with_suffix.columns, expected) - - -class TestDataFrame(tm.TestCase, CheckIndexing, - SafeForSparse): - klass = DataFrame - - _multiprocess_can_split_ = True - - def setUp(self): - - self.frame = _frame.copy() - self.frame2 = _frame2.copy() - - # force these all to int64 to avoid platform testing issues - self.intframe = DataFrame(dict([ (c,s) for c,s in compat.iteritems(_intframe) ]), dtype = np.int64) - self.tsframe = _tsframe.copy() - self.mixed_frame = _mixed_frame.copy() - self.mixed_float = DataFrame({ 'A': _frame['A'].copy().astype('float32'), 
- 'B': _frame['B'].copy().astype('float32'), - 'C': _frame['C'].copy().astype('float16'), - 'D': _frame['D'].copy().astype('float64') }) - self.mixed_float2 = DataFrame({ 'A': _frame2['A'].copy().astype('float32'), - 'B': _frame2['B'].copy().astype('float32'), - 'C': _frame2['C'].copy().astype('float16'), - 'D': _frame2['D'].copy().astype('float64') }) - self.mixed_int = DataFrame({ 'A': _intframe['A'].copy().astype('int32'), - 'B': np.ones(len(_intframe['B']),dtype='uint64'), - 'C': _intframe['C'].copy().astype('uint8'), - 'D': _intframe['D'].copy().astype('int64') }) - self.all_mixed = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'float32' : np.array([1.]*10,dtype='float32'), - 'int32' : np.array([1]*10,dtype='int32'), - }, index=np.arange(10)) - self.tzframe = DataFrame({'A' : date_range('20130101',periods=3), - 'B' : date_range('20130101',periods=3,tz='US/Eastern'), - 'C' : date_range('20130101',periods=3,tz='CET')}) - self.tzframe.iloc[1,1] = pd.NaT - self.tzframe.iloc[1,2] = pd.NaT - - self.ts1 = tm.makeTimeSeries() - self.ts2 = tm.makeTimeSeries()[5:] - self.ts3 = tm.makeTimeSeries()[-5:] - self.ts4 = tm.makeTimeSeries()[1:-1] - - self.ts_dict = { - 'col1': self.ts1, - 'col2': self.ts2, - 'col3': self.ts3, - 'col4': self.ts4, - } - self.empty = DataFrame({}) - - arr = np.array([[1., 2., 3.], - [4., 5., 6.], - [7., 8., 9.]]) - - self.simple = DataFrame(arr, columns=['one', 'two', 'three'], - index=['a', 'b', 'c']) - - def test_get_axis(self): - f = self.frame - self.assertEqual(f._get_axis_number(0), 0) - self.assertEqual(f._get_axis_number(1), 1) - self.assertEqual(f._get_axis_number('index'), 0) - self.assertEqual(f._get_axis_number('rows'), 0) - self.assertEqual(f._get_axis_number('columns'), 1) - - self.assertEqual(f._get_axis_name(0), 'index') - self.assertEqual(f._get_axis_name(1), 'columns') - self.assertEqual(f._get_axis_name('index'), 'index') - self.assertEqual(f._get_axis_name('rows'), 'index') - self.assertEqual(f._get_axis_name('columns'), 'columns') 
- - self.assertIs(f._get_axis(0), f.index) - self.assertIs(f._get_axis(1), f.columns) - - assertRaisesRegexp(ValueError, 'No axis named', f._get_axis_number, 2) - assertRaisesRegexp(ValueError, 'No axis.*foo', f._get_axis_name, 'foo') - assertRaisesRegexp(ValueError, 'No axis.*None', f._get_axis_name, None) - assertRaisesRegexp(ValueError, 'No axis named', f._get_axis_number, None) - - def test_set_index(self): - idx = Index(np.arange(len(self.mixed_frame))) - - # cache it - _ = self.mixed_frame['foo'] - self.mixed_frame.index = idx - self.assertIs(self.mixed_frame['foo'].index, idx) - with assertRaisesRegexp(ValueError, 'Length mismatch'): - self.mixed_frame.index = idx[::2] - - def test_set_index_cast(self): - - # issue casting an index then set_index - df = DataFrame({'A' : [1.1,2.2,3.3], 'B' : [5.0,6.1,7.2]}, - index = [2010,2011,2012]) - expected = df.ix[2010] - new_index = df.index.astype(np.int32) - df.index = new_index - result = df.ix[2010] - assert_series_equal(result,expected) - - def test_set_index2(self): - df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'], - 'B': ['one', 'two', 'three', 'one', 'two'], - 'C': ['a', 'b', 'c', 'd', 'e'], - 'D': np.random.randn(5), - 'E': np.random.randn(5)}) - - # new object, single-column - result = df.set_index('C') - result_nodrop = df.set_index('C', drop=False) - - index = Index(df['C'], name='C') - - expected = df.ix[:, ['A', 'B', 'D', 'E']] - expected.index = index - - expected_nodrop = df.copy() - expected_nodrop.index = index - - assert_frame_equal(result, expected) - assert_frame_equal(result_nodrop, expected_nodrop) - self.assertEqual(result.index.name, index.name) - - # inplace, single - df2 = df.copy() - - df2.set_index('C', inplace=True) - - assert_frame_equal(df2, expected) - - df3 = df.copy() - df3.set_index('C', drop=False, inplace=True) - - assert_frame_equal(df3, expected_nodrop) - - # create new object, multi-column - result = df.set_index(['A', 'B']) - result_nodrop = df.set_index(['A', 'B'], 
drop=False) - - index = MultiIndex.from_arrays([df['A'], df['B']], names=['A', 'B']) - - expected = df.ix[:, ['C', 'D', 'E']] - expected.index = index - - expected_nodrop = df.copy() - expected_nodrop.index = index - - assert_frame_equal(result, expected) - assert_frame_equal(result_nodrop, expected_nodrop) - self.assertEqual(result.index.names, index.names) - - # inplace - df2 = df.copy() - df2.set_index(['A', 'B'], inplace=True) - assert_frame_equal(df2, expected) - - df3 = df.copy() - df3.set_index(['A', 'B'], drop=False, inplace=True) - assert_frame_equal(df3, expected_nodrop) - - # corner case - with assertRaisesRegexp(ValueError, 'Index has duplicate keys'): - df.set_index('A', verify_integrity=True) - - # append - result = df.set_index(['A', 'B'], append=True) - xp = df.reset_index().set_index(['index', 'A', 'B']) - xp.index.names = [None, 'A', 'B'] - assert_frame_equal(result, xp) - - # append to existing multiindex - rdf = df.set_index(['A'], append=True) - rdf = rdf.set_index(['B', 'C'], append=True) - expected = df.set_index(['A', 'B', 'C'], append=True) - assert_frame_equal(rdf, expected) - - # Series - result = df.set_index(df.C) - self.assertEqual(result.index.name, 'C') - - def test_set_index_nonuniq(self): - df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'], - 'B': ['one', 'two', 'three', 'one', 'two'], - 'C': ['a', 'b', 'c', 'd', 'e'], - 'D': np.random.randn(5), - 'E': np.random.randn(5)}) - with assertRaisesRegexp(ValueError, 'Index has duplicate keys'): - df.set_index('A', verify_integrity=True, inplace=True) - self.assertIn('A', df) - - def test_set_index_bug(self): - # GH1590 - df = DataFrame({'val': [0, 1, 2], 'key': ['a', 'b', 'c']}) - df2 = df.select(lambda indx: indx >= 1) - rs = df2.set_index('key') - xp = DataFrame({'val': [1, 2]}, - Index(['b', 'c'], name='key')) - assert_frame_equal(rs, xp) - - def test_set_index_pass_arrays(self): - df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', - 'foo', 'bar', 'foo', 'foo'], - 'B': ['one', 
'one', 'two', 'three', - 'two', 'two', 'one', 'three'], - 'C': np.random.randn(8), - 'D': np.random.randn(8)}) - - # multiple columns - result = df.set_index(['A', df['B'].values], drop=False) - expected = df.set_index(['A', 'B'], drop=False) - assert_frame_equal(result, expected, check_names=False) # TODO should set_index check_names ? - - def test_construction_with_categorical_index(self): - - ci = tm.makeCategoricalIndex(10) - - # with Categorical - df = DataFrame({'A' : np.random.randn(10), - 'B' : ci.values }) - idf = df.set_index('B') - str(idf) - tm.assert_index_equal(idf.index, ci, check_names=False) - self.assertEqual(idf.index.name, 'B') - - # from a CategoricalIndex - df = DataFrame({'A' : np.random.randn(10), - 'B' : ci }) - idf = df.set_index('B') - str(idf) - tm.assert_index_equal(idf.index, ci, check_names=False) - self.assertEqual(idf.index.name, 'B') - - idf = df.set_index('B').reset_index().set_index('B') - str(idf) - tm.assert_index_equal(idf.index, ci, check_names=False) - self.assertEqual(idf.index.name, 'B') - - new_df = idf.reset_index() - new_df.index = df.B - tm.assert_index_equal(new_df.index, ci, check_names=False) - self.assertEqual(idf.index.name, 'B') - - def test_set_index_cast_datetimeindex(self): - df = DataFrame({'A': [datetime(2000, 1, 1) + timedelta(i) - for i in range(1000)], - 'B': np.random.randn(1000)}) - - idf = df.set_index('A') - tm.assertIsInstance(idf.index, DatetimeIndex) - - # don't cast a DatetimeIndex WITH a tz, leave as object - # GH 6032 - i = pd.DatetimeIndex(pd.tseries.tools.to_datetime(['2013-1-1 13:00','2013-1-2 14:00'], errors="raise")).tz_localize('US/Pacific') - df = DataFrame(np.random.randn(2,1),columns=['A']) - - expected = Series(np.array([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'), - pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Pacific')], dtype="object")) - - # convert index to series - result = Series(i) - assert_series_equal(result, expected) - - # assignt to frame - df['B'] = i - 
result = df['B'] - assert_series_equal(result, expected, check_names=False) - self.assertEqual(result.name, 'B') - - # keep the timezone - result = i.to_series(keep_tz=True) - assert_series_equal(result.reset_index(drop=True), expected) - - # convert to utc - df['C'] = i.to_series().reset_index(drop=True) - result = df['C'] - comp = DatetimeIndex(expected.values).copy() - comp.tz = None - self.assert_numpy_array_equal(result.values, comp.values) - - # list of datetimes with a tz - df['D'] = i.to_pydatetime() - result = df['D'] - assert_series_equal(result, expected, check_names=False) - self.assertEqual(result.name, 'D') - - # GH 6785 - # set the index manually - import pytz - df = DataFrame([{'ts':datetime(2014, 4, 1, tzinfo=pytz.utc), 'foo':1}]) - expected = df.set_index('ts') - df.index = df['ts'] - df.pop('ts') - assert_frame_equal(df, expected) - - # GH 3950 - # reset_index with single level - for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern']: - idx = pd.date_range('1/1/2011', periods=5, freq='D', tz=tz, name='idx') - df = pd.DataFrame({'a': range(5), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx) - - expected = pd.DataFrame({'idx': [datetime(2011, 1, 1), datetime(2011, 1, 2), - datetime(2011, 1, 3), datetime(2011, 1, 4), - datetime(2011, 1, 5)], - 'a': range(5), 'b': ['A', 'B', 'C', 'D', 'E']}, - columns=['idx', 'a', 'b']) - expected['idx'] = expected['idx'].apply(lambda d: pd.Timestamp(d, tz=tz)) - assert_frame_equal(df.reset_index(), expected) - - def test_set_index_multiindexcolumns(self): - columns = MultiIndex.from_tuples([('foo', 1), ('foo', 2), ('bar', 1)]) - df = DataFrame(np.random.randn(3, 3), columns=columns) - rs = df.set_index(df.columns[0]) - xp = df.ix[:, 1:] - xp.index = df.ix[:, 0].values - xp.index.names = [df.columns[0]] - assert_frame_equal(rs, xp) - - def test_set_index_empty_column(self): - # #1971 - df = DataFrame([ - dict(a=1, p=0), - dict(a=2, m=10), - dict(a=3, m=11, p=20), - dict(a=4, m=12, p=21) - ], columns=('a', 'm', 'p', 'x')) - - # it 
works! - result = df.set_index(['a', 'x']) - repr(result) - - def test_set_columns(self): - cols = Index(np.arange(len(self.mixed_frame.columns))) - self.mixed_frame.columns = cols - with assertRaisesRegexp(ValueError, 'Length mismatch'): - self.mixed_frame.columns = cols[::2] - - def test_keys(self): - getkeys = self.frame.keys - self.assertIs(getkeys(), self.frame.columns) - - def test_column_contains_typeerror(self): - try: - self.frame.columns in self.frame - except TypeError: - pass - - def test_constructor(self): - df = DataFrame() - self.assertEqual(len(df.index), 0) - - df = DataFrame(data={}) - self.assertEqual(len(df.index), 0) - - def test_constructor_mixed(self): - index, data = tm.getMixedTypeDict() - - indexed_frame = DataFrame(data, index=index) - unindexed_frame = DataFrame(data) - - self.assertEqual(self.mixed_frame['foo'].dtype, np.object_) - - def test_constructor_cast_failure(self): - foo = DataFrame({'a': ['a', 'b', 'c']}, dtype=np.float64) - self.assertEqual(foo['a'].dtype, object) - - # GH 3010, constructing with odd arrays - df = DataFrame(np.ones((4,2))) - - # this is ok - df['foo'] = np.ones((4,2)).tolist() - - # this is not ok - self.assertRaises(ValueError, df.__setitem__, tuple(['test']), np.ones((4,2))) - - # this is ok - df['foo2'] = np.ones((4,2)).tolist() - - def test_constructor_dtype_copy(self): - orig_df = DataFrame({ - 'col1': [1.], - 'col2': [2.], - 'col3': [3.]}) - - new_df = pd.DataFrame(orig_df, dtype=float, copy=True) - - new_df['col1'] = 200. - self.assertEqual(orig_df['col1'][0], 1.) 
- - def test_constructor_dtype_nocast_view(self): - df = DataFrame([[1, 2]]) - should_be_view = DataFrame(df, dtype=df[0].dtype) - should_be_view[0][0] = 99 - self.assertEqual(df.values[0, 0], 99) - - should_be_view = DataFrame(df.values, dtype=df[0].dtype) - should_be_view[0][0] = 97 - self.assertEqual(df.values[0, 0], 97) - - def test_constructor_dtype_list_data(self): - df = DataFrame([[1, '2'], - [None, 'a']], dtype=object) - self.assertIsNone(df.ix[1, 0]) - self.assertEqual(df.ix[0, 1], '2') - - def test_constructor_list_frames(self): - - # GH 3243 - result = DataFrame([DataFrame([])]) - self.assertEqual(result.shape, (1,0)) - - result = DataFrame([DataFrame(dict(A = lrange(5)))]) - tm.assertIsInstance(result.iloc[0,0], DataFrame) - - def test_constructor_mixed_dtypes(self): - - def _make_mixed_dtypes_df(typ, ad = None): - - if typ == 'int': - dtypes = MIXED_INT_DTYPES - arrays = [ np.array(np.random.rand(10), dtype = d) for d in dtypes ] - elif typ == 'float': - dtypes = MIXED_FLOAT_DTYPES - arrays = [ np.array(np.random.randint(10, size=10), dtype = d) for d in dtypes ] - - zipper = lzip(dtypes,arrays) - for d,a in zipper: - assert(a.dtype == d) - if ad is None: - ad = dict() - ad.update(dict([ (d,a) for d,a in zipper ])) - return DataFrame(ad) - - def _check_mixed_dtypes(df, dtypes = None): - if dtypes is None: - dtypes = MIXED_FLOAT_DTYPES + MIXED_INT_DTYPES - for d in dtypes: - if d in df: - assert(df.dtypes[d] == d) - - # mixed floating and integer coexinst in the same frame - df = _make_mixed_dtypes_df('float') - _check_mixed_dtypes(df) - - # add lots of types - df = _make_mixed_dtypes_df('float', dict(A = 1, B = 'foo', C = 'bar')) - _check_mixed_dtypes(df) - - # GH 622 - df = _make_mixed_dtypes_df('int') - _check_mixed_dtypes(df) - - def test_constructor_complex_dtypes(self): - # GH10952 - a = np.random.rand(10).astype(np.complex64) - b = np.random.rand(10).astype(np.complex128) - - df = DataFrame({'a': a, 'b': b}) - self.assertEqual(a.dtype, 
df.a.dtype) - self.assertEqual(b.dtype, df.b.dtype) - - def test_constructor_rec(self): - rec = self.frame.to_records(index=False) - - # Assigning causes segfault in NumPy < 1.5.1 - # rec.dtype.names = list(rec.dtype.names)[::-1] - - index = self.frame.index - - df = DataFrame(rec) - self.assert_numpy_array_equal(df.columns, rec.dtype.names) - - df2 = DataFrame(rec, index=index) - self.assert_numpy_array_equal(df2.columns, rec.dtype.names) - self.assertTrue(df2.index.equals(index)) - - rng = np.arange(len(rec))[::-1] - df3 = DataFrame(rec, index=rng, columns=['C', 'B']) - expected = DataFrame(rec, index=rng).reindex(columns=['C', 'B']) - assert_frame_equal(df3, expected) - - def test_constructor_bool(self): - df = DataFrame({0: np.ones(10, dtype=bool), - 1: np.zeros(10, dtype=bool)}) - self.assertEqual(df.values.dtype, np.bool_) - - def test_constructor_overflow_int64(self): - values = np.array([2 ** 64 - i for i in range(1, 10)], - dtype=np.uint64) - - result = DataFrame({'a': values}) - self.assertEqual(result['a'].dtype, object) - - # #2355 - data_scores = [(6311132704823138710, 273), (2685045978526272070, 23), - (8921811264899370420, 45), (long(17019687244989530680), 270), - (long(9930107427299601010), 273)] - dtype = [('uid', 'u8'), ('score', 'u8')] - data = np.zeros((len(data_scores),), dtype=dtype) - data[:] = data_scores - df_crawls = DataFrame(data) - self.assertEqual(df_crawls['uid'].dtype, object) - - def test_constructor_ordereddict(self): - import random - nitems = 100 - nums = lrange(nitems) - random.shuffle(nums) - expected = ['A%d' % i for i in nums] - df = DataFrame(OrderedDict(zip(expected, [[0]] * nitems))) - self.assertEqual(expected, list(df.columns)) - - def test_constructor_dict(self): - frame = DataFrame({'col1': self.ts1, - 'col2': self.ts2}) - - tm.assert_dict_equal(self.ts1, frame['col1'], compare_keys=False) - tm.assert_dict_equal(self.ts2, frame['col2'], compare_keys=False) - - frame = DataFrame({'col1': self.ts1, - 'col2': self.ts2}, - 
columns=['col2', 'col3', 'col4']) - - self.assertEqual(len(frame), len(self.ts2)) - self.assertNotIn('col1', frame) - self.assertTrue(isnull(frame['col3']).all()) - - # Corner cases - self.assertEqual(len(DataFrame({})), 0) - - # mix dict and array, wrong size - no spec for which error should raise - # first - with tm.assertRaises(ValueError): - DataFrame({'A': {'a': 'a', 'b': 'b'}, 'B': ['a', 'b', 'c']}) - - # Length-one dict micro-optimization - frame = DataFrame({'A': {'1': 1, '2': 2}}) - self.assert_numpy_array_equal(frame.index, ['1', '2']) - - # empty dict plus index - idx = Index([0, 1, 2]) - frame = DataFrame({}, index=idx) - self.assertIs(frame.index, idx) - - # empty with index and columns - idx = Index([0, 1, 2]) - frame = DataFrame({}, index=idx, columns=idx) - self.assertIs(frame.index, idx) - self.assertIs(frame.columns, idx) - self.assertEqual(len(frame._series), 3) - - # with dict of empty list and Series - frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B']) - self.assertTrue(frame.index.equals(Index([]))) - - # GH10856 - # dict with scalar values should raise error, even if columns passed - with tm.assertRaises(ValueError): - DataFrame({'a': 0.7}) - - with tm.assertRaises(ValueError): - DataFrame({'a': 0.7}, columns=['a']) - - with tm.assertRaises(ValueError): - DataFrame({'a': 0.7}, columns=['b']) - - def test_constructor_multi_index(self): - # GH 4078 - # construction error with mi and all-nan frame - tuples = [(2, 3), (3, 3), (3, 3)] - mi = MultiIndex.from_tuples(tuples) - df = DataFrame(index=mi,columns=mi) - self.assertTrue(pd.isnull(df).values.ravel().all()) - - tuples = [(3, 3), (2, 3), (3, 3)] - mi = MultiIndex.from_tuples(tuples) - df = DataFrame(index=mi,columns=mi) - self.assertTrue(pd.isnull(df).values.ravel().all()) - - def test_constructor_error_msgs(self): - msg = "Mixing dicts with non-Series may lead to ambiguous ordering." 
- # mix dict and array, wrong size - with assertRaisesRegexp(ValueError, msg): - DataFrame({'A': {'a': 'a', 'b': 'b'}, - 'B': ['a', 'b', 'c']}) - - # wrong size ndarray, GH 3105 - msg = "Shape of passed values is \(3, 4\), indices imply \(3, 3\)" - with assertRaisesRegexp(ValueError, msg): - DataFrame(np.arange(12).reshape((4, 3)), - columns=['foo', 'bar', 'baz'], - index=date_range('2000-01-01', periods=3)) - - - # higher dim raise exception - with assertRaisesRegexp(ValueError, 'Must pass 2-d input'): - DataFrame(np.zeros((3, 3, 3)), columns=['A', 'B', 'C'], index=[1]) - - # wrong size axis labels - with assertRaisesRegexp(ValueError, "Shape of passed values is \(3, 2\), indices imply \(3, 1\)"): - DataFrame(np.random.rand(2,3), columns=['A', 'B', 'C'], index=[1]) - - with assertRaisesRegexp(ValueError, "Shape of passed values is \(3, 2\), indices imply \(2, 2\)"): - DataFrame(np.random.rand(2,3), columns=['A', 'B'], index=[1, 2]) - - with assertRaisesRegexp(ValueError, 'If using all scalar values, you must pass an index'): - DataFrame({'a': False, 'b': True}) - - def test_constructor_with_embedded_frames(self): - - # embedded data frames - df1 = DataFrame({'a':[1, 2, 3], 'b':[3, 4, 5]}) - df2 = DataFrame([df1, df1+10]) - - df2.dtypes - str(df2) - - result = df2.loc[0,0] - assert_frame_equal(result,df1) - - result = df2.loc[1,0] - assert_frame_equal(result,df1+10) - - def test_insert_error_msmgs(self): - - # GH 7432 - df = DataFrame({'foo':['a', 'b', 'c'], 'bar':[1,2,3], 'baz':['d','e','f']}).set_index('foo') - s = DataFrame({'foo':['a', 'b', 'c', 'a'], 'fiz':['g','h','i','j']}).set_index('foo') - msg = 'cannot reindex from a duplicate axis' - with assertRaisesRegexp(ValueError, msg): - df['newcol'] = s - - # GH 4107, more descriptive error message - df = DataFrame(np.random.randint(0,2,(4,4)), - columns=['a', 'b', 'c', 'd']) - - msg = 'incompatible index of inserted column with frame index' - with assertRaisesRegexp(TypeError, msg): - df['gr'] = df.groupby(['b', 
'c']).count() - - def test_frame_subclassing_and_slicing(self): - # Subclass frame and ensure it returns the right class on slicing it - # In reference to PR 9632 - - class CustomSeries(Series): - @property - def _constructor(self): - return CustomSeries - - def custom_series_function(self): - return 'OK' - - class CustomDataFrame(DataFrame): - "Subclasses pandas DF, fills DF with simulation results, adds some custom plotting functions." - - def __init__(self, *args, **kw): - super(CustomDataFrame, self).__init__(*args, **kw) - - @property - def _constructor(self): - return CustomDataFrame - - _constructor_sliced = CustomSeries - - def custom_frame_function(self): - return 'OK' - - data = {'col1': range(10), - 'col2': range(10)} - cdf = CustomDataFrame(data) - - # Did we get back our own DF class? - self.assertTrue(isinstance(cdf, CustomDataFrame)) - - # Do we get back our own Series class after selecting a column? - cdf_series = cdf.col1 - self.assertTrue(isinstance(cdf_series, CustomSeries)) - self.assertEqual(cdf_series.custom_series_function(), 'OK') - - # Do we get back our own DF class after slicing row-wise? 
- cdf_rows = cdf[1:5] - self.assertTrue(isinstance(cdf_rows, CustomDataFrame)) - self.assertEqual(cdf_rows.custom_frame_function(), 'OK') - - # Make sure sliced part of multi-index frame is custom class - mcol = pd.MultiIndex.from_tuples([('A', 'A'), ('A', 'B')]) - cdf_multi = CustomDataFrame([[0, 1], [2, 3]], columns=mcol) - self.assertTrue(isinstance(cdf_multi['A'], CustomDataFrame)) - - mcol = pd.MultiIndex.from_tuples([('A', ''), ('B', '')]) - cdf_multi2 = CustomDataFrame([[0, 1], [2, 3]], columns=mcol) - self.assertTrue(isinstance(cdf_multi2['A'], CustomSeries)) - - def test_constructor_subclass_dict(self): - # Test for passing dict subclass to constructor - data = {'col1': tm.TestSubDict((x, 10.0 * x) for x in range(10)), - 'col2': tm.TestSubDict((x, 20.0 * x) for x in range(10))} - df = DataFrame(data) - refdf = DataFrame(dict((col, dict(compat.iteritems(val))) - for col, val in compat.iteritems(data))) - assert_frame_equal(refdf, df) - - data = tm.TestSubDict(compat.iteritems(data)) - df = DataFrame(data) - assert_frame_equal(refdf, df) - - # try with defaultdict - from collections import defaultdict - data = {} - self.frame['B'][:10] = np.nan - for k, v in compat.iteritems(self.frame): - dct = defaultdict(dict) - dct.update(v.to_dict()) - data[k] = dct - frame = DataFrame(data) - assert_frame_equal(self.frame.sort_index(), frame) - - def test_constructor_dict_block(self): - expected = [[4., 3., 2., 1.]] - df = DataFrame({'d': [4.], 'c': [3.], 'b': [2.], 'a': [1.]}, - columns=['d', 'c', 'b', 'a']) - assert_almost_equal(df.values, expected) - - def test_constructor_dict_cast(self): - # cast float tests - test_data = { - 'A': {'1': 1, '2': 2}, - 'B': {'1': '1', '2': '2', '3': '3'}, - } - frame = DataFrame(test_data, dtype=float) - self.assertEqual(len(frame), 3) - self.assertEqual(frame['B'].dtype, np.float64) - self.assertEqual(frame['A'].dtype, np.float64) - - frame = DataFrame(test_data) - self.assertEqual(len(frame), 3) - 
self.assertEqual(frame['B'].dtype, np.object_) - self.assertEqual(frame['A'].dtype, np.float64) - - # can't cast to float - test_data = { - 'A': dict(zip(range(20), tm.makeStringIndex(20))), - 'B': dict(zip(range(15), randn(15))) - } - frame = DataFrame(test_data, dtype=float) - self.assertEqual(len(frame), 20) - self.assertEqual(frame['A'].dtype, np.object_) - self.assertEqual(frame['B'].dtype, np.float64) - - def test_constructor_dict_dont_upcast(self): - d = {'Col1': {'Row1': 'A String', 'Row2': np.nan}} - df = DataFrame(d) - tm.assertIsInstance(df['Col1']['Row2'], float) - - dm = DataFrame([[1, 2], ['a', 'b']], index=[1, 2], columns=[1, 2]) - tm.assertIsInstance(dm[1][1], int) - - def test_constructor_dict_of_tuples(self): - # GH #1491 - data = {'a': (1, 2, 3), 'b': (4, 5, 6)} - - result = DataFrame(data) - expected = DataFrame(dict((k, list(v)) for k, v in compat.iteritems(data))) - assert_frame_equal(result, expected, check_dtype=False) - - def test_constructor_dict_multiindex(self): - check = lambda result, expected: assert_frame_equal( - result, expected, check_dtype=True, check_index_type=True, - check_column_type=True, check_names=True) - d = {('a', 'a'): {('i', 'i'): 0, ('i', 'j'): 1, ('j', 'i'): 2}, - ('b', 'a'): {('i', 'i'): 6, ('i', 'j'): 5, ('j', 'i'): 4}, - ('b', 'c'): {('i', 'i'): 7, ('i', 'j'): 8, ('j', 'i'): 9}} - _d = sorted(d.items()) - df = DataFrame(d) - expected = DataFrame( - [x[1] for x in _d], - index=MultiIndex.from_tuples([x[0] for x in _d])).T - expected.index = MultiIndex.from_tuples(expected.index) - check(df, expected) - - d['z'] = {'y': 123., ('i', 'i'): 111, ('i', 'j'): 111, ('j', 'i'): 111} - _d.insert(0, ('z', d['z'])) - expected = DataFrame( - [x[1] for x in _d], - index=Index([x[0] for x in _d], tupleize_cols=False)).T - expected.index = Index(expected.index, tupleize_cols=False) - df = DataFrame(d) - df = df.reindex(columns=expected.columns, index=expected.index) - check(df, expected) - - def 
test_constructor_dict_datetime64_index(self): - # GH 10160 - dates_as_str = ['1984-02-19', '1988-11-06', '1989-12-03', '1990-03-15'] - - def create_data(constructor): - return dict((i, {constructor(s): 2*i}) for i, s in enumerate(dates_as_str)) - - data_datetime64 = create_data(np.datetime64) - data_datetime = create_data(lambda x: datetime.strptime(x, '%Y-%m-%d')) - data_Timestamp = create_data(Timestamp) - - expected = DataFrame([{0: 0, 1: None, 2: None, 3: None}, - {0: None, 1: 2, 2: None, 3: None}, - {0: None, 1: None, 2: 4, 3: None}, - {0: None, 1: None, 2: None, 3: 6}], - index=[Timestamp(dt) for dt in dates_as_str]) - - result_datetime64 = DataFrame(data_datetime64) - result_datetime = DataFrame(data_datetime) - result_Timestamp = DataFrame(data_Timestamp) - assert_frame_equal(result_datetime64, expected) - assert_frame_equal(result_datetime, expected) - assert_frame_equal(result_Timestamp, expected) - - def test_constructor_dict_timedelta64_index(self): - # GH 10160 - td_as_int = [1, 2, 3, 4] - - def create_data(constructor): - return dict((i, {constructor(s): 2*i}) for i, s in enumerate(td_as_int)) - - data_timedelta64 = create_data(lambda x: np.timedelta64(x, 'D')) - data_timedelta = create_data(lambda x: timedelta(days=x)) - data_Timedelta = create_data(lambda x: Timedelta(x, 'D')) - - expected = DataFrame([{0: 0, 1: None, 2: None, 3: None}, - {0: None, 1: 2, 2: None, 3: None}, - {0: None, 1: None, 2: 4, 3: None}, - {0: None, 1: None, 2: None, 3: 6}], - index=[Timedelta(td, 'D') for td in td_as_int]) - - result_timedelta64 = DataFrame(data_timedelta64) - result_timedelta = DataFrame(data_timedelta) - result_Timedelta = DataFrame(data_Timedelta) - assert_frame_equal(result_timedelta64, expected) - assert_frame_equal(result_timedelta, expected) - assert_frame_equal(result_Timedelta, expected) - - def test_nested_dict_frame_constructor(self): - rng = period_range('1/1/2000', periods=5) - df = DataFrame(randn(10, 5), columns=rng) - - data = {} - for col in 
df.columns: - for row in df.index: - data.setdefault(col, {})[row] = df.get_value(row, col) - - result = DataFrame(data, columns=rng) - assert_frame_equal(result, df) - - data = {} - for col in df.columns: - for row in df.index: - data.setdefault(row, {})[col] = df.get_value(row, col) - - result = DataFrame(data, index=rng).T - assert_frame_equal(result, df) - - def _check_basic_constructor(self, empty): - "mat: 2d matrix with shape (2, 3) to input. empty - makes sized objects" - mat = empty((2, 3), dtype=float) - # 2-D input - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - - self.assertEqual(len(frame.index), 2) - self.assertEqual(len(frame.columns), 3) - - # 1-D input - frame = DataFrame(empty((3,)), columns=['A'], index=[1, 2, 3]) - self.assertEqual(len(frame.index), 3) - self.assertEqual(len(frame.columns), 1) - - - # cast type - frame = DataFrame(mat, columns=['A', 'B', 'C'], - index=[1, 2], dtype=np.int64) - self.assertEqual(frame.values.dtype, np.int64) - - # wrong size axis labels - msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)' - with assertRaisesRegexp(ValueError, msg): - DataFrame(mat, columns=['A', 'B', 'C'], index=[1]) - msg = r'Shape of passed values is \(3, 2\), indices imply \(2, 2\)' - with assertRaisesRegexp(ValueError, msg): - DataFrame(mat, columns=['A', 'B'], index=[1, 2]) - - # higher dim raise exception - with assertRaisesRegexp(ValueError, 'Must pass 2-d input'): - DataFrame(empty((3, 3, 3)), columns=['A', 'B', 'C'], - index=[1]) - - # automatic labeling - frame = DataFrame(mat) - self.assert_numpy_array_equal(frame.index, lrange(2)) - self.assert_numpy_array_equal(frame.columns, lrange(3)) - - frame = DataFrame(mat, index=[1, 2]) - self.assert_numpy_array_equal(frame.columns, lrange(3)) - - frame = DataFrame(mat, columns=['A', 'B', 'C']) - self.assert_numpy_array_equal(frame.index, lrange(2)) - - # 0-length axis - frame = DataFrame(empty((0, 3))) - self.assertEqual(len(frame.index), 0) - - frame = 
DataFrame(empty((3, 0))) - self.assertEqual(len(frame.columns), 0) - - def test_constructor_ndarray(self): - mat = np.zeros((2, 3), dtype=float) - self._check_basic_constructor(np.ones) - - frame = DataFrame(['foo', 'bar'], index=[0, 1], columns=['A']) - self.assertEqual(len(frame), 2) - - def test_constructor_maskedarray(self): - self._check_basic_constructor(ma.masked_all) - - # Check non-masked values - mat = ma.masked_all((2, 3), dtype=float) - mat[0, 0] = 1.0 - mat[1, 2] = 2.0 - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertEqual(1.0, frame['A'][1]) - self.assertEqual(2.0, frame['C'][2]) - - # what is this even checking?? - mat = ma.masked_all((2, 3), dtype=float) - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertTrue(np.all(~np.asarray(frame == frame))) - - def test_constructor_maskedarray_nonfloat(self): - # masked int promoted to float - mat = ma.masked_all((2, 3), dtype=int) - # 2-D input - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - - self.assertEqual(len(frame.index), 2) - self.assertEqual(len(frame.columns), 3) - self.assertTrue(np.all(~np.asarray(frame == frame))) - - # cast type - frame = DataFrame(mat, columns=['A', 'B', 'C'], - index=[1, 2], dtype=np.float64) - self.assertEqual(frame.values.dtype, np.float64) - - # Check non-masked values - mat2 = ma.copy(mat) - mat2[0, 0] = 1 - mat2[1, 2] = 2 - frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertEqual(1, frame['A'][1]) - self.assertEqual(2, frame['C'][2]) - - # masked np.datetime64 stays (use lib.NaT as null) - mat = ma.masked_all((2, 3), dtype='M8[ns]') - # 2-D input - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - - self.assertEqual(len(frame.index), 2) - self.assertEqual(len(frame.columns), 3) - self.assertTrue(isnull(frame).values.all()) - - # cast type - frame = DataFrame(mat, columns=['A', 'B', 'C'], - index=[1, 2], dtype=np.int64) - self.assertEqual(frame.values.dtype, 
np.int64) - - # Check non-masked values - mat2 = ma.copy(mat) - mat2[0, 0] = 1 - mat2[1, 2] = 2 - frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertEqual(1, frame['A'].view('i8')[1]) - self.assertEqual(2, frame['C'].view('i8')[2]) - - # masked bool promoted to object - mat = ma.masked_all((2, 3), dtype=bool) - # 2-D input - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - - self.assertEqual(len(frame.index), 2) - self.assertEqual(len(frame.columns), 3) - self.assertTrue(np.all(~np.asarray(frame == frame))) - - # cast type - frame = DataFrame(mat, columns=['A', 'B', 'C'], - index=[1, 2], dtype=object) - self.assertEqual(frame.values.dtype, object) - - # Check non-masked values - mat2 = ma.copy(mat) - mat2[0, 0] = True - mat2[1, 2] = False - frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertEqual(True, frame['A'][1]) - self.assertEqual(False, frame['C'][2]) - - def test_constructor_mrecarray(self): - # Ensure mrecarray produces frame identical to dict of masked arrays - # from GH3479 - - assert_fr_equal = functools.partial(assert_frame_equal, - check_index_type=True, - check_column_type=True, - check_frame_type=True) - arrays = [ - ('float', np.array([1.5, 2.0])), - ('int', np.array([1, 2])), - ('str', np.array(['abc', 'def'])), - ] - for name, arr in arrays[:]: - arrays.append(('masked1_' + name, - np.ma.masked_array(arr, mask=[False, True]))) - arrays.append(('masked_all', np.ma.masked_all((2,)))) - arrays.append(('masked_none', - np.ma.masked_array([1.0, 2.5], mask=False))) - - # call assert_frame_equal for all selections of 3 arrays - for comb in itertools.combinations(arrays, 3): - names, data = zip(*comb) - mrecs = mrecords.fromarrays(data, names=names) - - # fill the comb - comb = dict([ (k, v.filled()) if hasattr(v,'filled') else (k, v) for k, v in comb ]) - - expected = DataFrame(comb,columns=names) - result = DataFrame(mrecs) - assert_fr_equal(result,expected) - - # specify columns - 
expected = DataFrame(comb,columns=names[::-1]) - result = DataFrame(mrecs, columns=names[::-1]) - assert_fr_equal(result,expected) - - # specify index - expected = DataFrame(comb,columns=names,index=[1,2]) - result = DataFrame(mrecs, index=[1,2]) - assert_fr_equal(result,expected) - - def test_constructor_corner(self): - df = DataFrame(index=[]) - self.assertEqual(df.values.shape, (0, 0)) - - # empty but with specified dtype - df = DataFrame(index=lrange(10), columns=['a', 'b'], dtype=object) - self.assertEqual(df.values.dtype, np.object_) - - # does not error but ends up object - df = DataFrame(index=lrange(10), columns=['a', 'b'], dtype=int) - self.assertEqual(df.values.dtype, np.object_) - - # #1783 empty dtype object - df = DataFrame({}, columns=['foo', 'bar']) - self.assertEqual(df.values.dtype, np.object_) - - df = DataFrame({'b': 1}, index=lrange(10), columns=list('abc'), - dtype=int) - self.assertEqual(df.values.dtype, np.object_) - - - def test_constructor_scalar_inference(self): - data = {'int': 1, 'bool': True, - 'float': 3., 'complex': 4j, 'object': 'foo'} - df = DataFrame(data, index=np.arange(10)) - - self.assertEqual(df['int'].dtype, np.int64) - self.assertEqual(df['bool'].dtype, np.bool_) - self.assertEqual(df['float'].dtype, np.float64) - self.assertEqual(df['complex'].dtype, np.complex128) - self.assertEqual(df['object'].dtype, np.object_) - - def test_constructor_arrays_and_scalars(self): - df = DataFrame({'a': randn(10), 'b': True}) - exp = DataFrame({'a': df['a'].values, 'b': [True] * 10}) - - assert_frame_equal(df, exp) - with tm.assertRaisesRegexp(ValueError, 'must pass an index'): - DataFrame({'a': False, 'b': True}) - - def test_constructor_DataFrame(self): - df = DataFrame(self.frame) - assert_frame_equal(df, self.frame) - - df_casted = DataFrame(self.frame, dtype=np.int64) - self.assertEqual(df_casted.values.dtype, np.int64) - - def test_constructor_more(self): - # used to be in test_matrix.py - arr = randn(10) - dm = DataFrame(arr, 
columns=['A'], index=np.arange(10)) - self.assertEqual(dm.values.ndim, 2) - - arr = randn(0) - dm = DataFrame(arr) - self.assertEqual(dm.values.ndim, 2) - self.assertEqual(dm.values.ndim, 2) - - # no data specified - dm = DataFrame(columns=['A', 'B'], index=np.arange(10)) - self.assertEqual(dm.values.shape, (10, 2)) - - dm = DataFrame(columns=['A', 'B']) - self.assertEqual(dm.values.shape, (0, 2)) - - dm = DataFrame(index=np.arange(10)) - self.assertEqual(dm.values.shape, (10, 0)) - - # corner, silly - # TODO: Fix this Exception to be better... - with assertRaisesRegexp(PandasError, 'constructor not properly called'): - DataFrame((1, 2, 3)) - - # can't cast - mat = np.array(['foo', 'bar'], dtype=object).reshape(2, 1) - with assertRaisesRegexp(ValueError, 'cast'): - DataFrame(mat, index=[0, 1], columns=[0], dtype=float) - - dm = DataFrame(DataFrame(self.frame._series)) - assert_frame_equal(dm, self.frame) - - # int cast - dm = DataFrame({'A': np.ones(10, dtype=int), - 'B': np.ones(10, dtype=np.float64)}, - index=np.arange(10)) - - self.assertEqual(len(dm.columns), 2) - self.assertEqual(dm.values.dtype, np.float64) - - def test_constructor_empty_list(self): - df = DataFrame([], index=[]) - expected = DataFrame(index=[]) - assert_frame_equal(df, expected) - - # GH 9939 - df = DataFrame([], columns=['A', 'B']) - expected = DataFrame({}, columns=['A', 'B']) - assert_frame_equal(df, expected) - - # Empty generator: list(empty_gen()) == [] - def empty_gen(): - return - yield - - df = DataFrame(empty_gen(), columns=['A', 'B']) - assert_frame_equal(df, expected) - - def test_constructor_list_of_lists(self): - # GH #484 - l = [[1, 'a'], [2, 'b']] - df = DataFrame(data=l, columns=["num", "str"]) - self.assertTrue(com.is_integer_dtype(df['num'])) - self.assertEqual(df['str'].dtype, np.object_) - - # GH 4851 - # list of 0-dim ndarrays - expected = DataFrame({ 0: range(10) }) - data = [np.array(x) for x in range(10)] - result = DataFrame(data) - assert_frame_equal(result, 
expected) - - def test_constructor_sequence_like(self): - # GH 3783 - # collections.Sequence like - import collections - - class DummyContainer(collections.Sequence): - def __init__(self, lst): - self._lst = lst - def __getitem__(self, n): - return self._lst.__getitem__(n) - def __len__(self): - return self._lst.__len__() - - l = [DummyContainer([1, 'a']), DummyContainer([2, 'b'])] - columns = ["num", "str"] - result = DataFrame(l, columns=columns) - expected = DataFrame([[1,'a'],[2,'b']],columns=columns) - assert_frame_equal(result, expected, check_dtype=False) - - # GH 4297 - # support Array - import array - result = DataFrame.from_items([('A', array.array('i', range(10)))]) - expected = DataFrame({ 'A' : list(range(10)) }) - assert_frame_equal(result, expected, check_dtype=False) - - expected = DataFrame([ list(range(10)), list(range(10)) ]) - result = DataFrame([ array.array('i', range(10)), array.array('i',range(10)) ]) - assert_frame_equal(result, expected, check_dtype=False) - - def test_constructor_iterator(self): - - expected = DataFrame([ list(range(10)), list(range(10)) ]) - result = DataFrame([ range(10), range(10) ]) - assert_frame_equal(result, expected) - - def test_constructor_generator(self): - #related #2305 - - gen1 = (i for i in range(10)) - gen2 = (i for i in range(10)) - - expected = DataFrame([ list(range(10)), list(range(10)) ]) - result = DataFrame([ gen1, gen2 ]) - assert_frame_equal(result, expected) - - gen = ([ i, 'a'] for i in range(10)) - result = DataFrame(gen) - expected = DataFrame({ 0 : range(10), 1 : 'a' }) - assert_frame_equal(result, expected, check_dtype=False) - - def test_constructor_list_of_dicts(self): - data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]), - OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]), - OrderedDict([['a', 1.5], ['d', 6]]), - OrderedDict(), - OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]), - OrderedDict([['b', 3], ['c', 4], ['d', 6]])] - - result = DataFrame(data) - expected = 
DataFrame.from_dict(dict(zip(range(len(data)), data)), - orient='index') - assert_frame_equal(result, expected.reindex(result.index)) - - result = DataFrame([{}]) - expected = DataFrame(index=[0]) - assert_frame_equal(result, expected) - - def test_constructor_list_of_series(self): - data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]), - OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])] - sdict = OrderedDict(zip(['x', 'y'], data)) - idx = Index(['a', 'b', 'c']) - - # all named - data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'), - Series([1.5, 3, 6], idx, name='y')] - result = DataFrame(data2) - expected = DataFrame.from_dict(sdict, orient='index') - assert_frame_equal(result, expected) - - # some unnamed - data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'), - Series([1.5, 3, 6], idx)] - result = DataFrame(data2) - - sdict = OrderedDict(zip(['x', 'Unnamed 0'], data)) - expected = DataFrame.from_dict(sdict, orient='index') - assert_frame_equal(result.sort_index(), expected) - - # none named - data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]), - OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]), - OrderedDict([['a', 1.5], ['d', 6]]), - OrderedDict(), - OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]), - OrderedDict([['b', 3], ['c', 4], ['d', 6]])] - data = [Series(d) for d in data] - - result = DataFrame(data) - sdict = OrderedDict(zip(range(len(data)), data)) - expected = DataFrame.from_dict(sdict, orient='index') - assert_frame_equal(result, expected.reindex(result.index)) - - result2 = DataFrame(data, index=np.arange(6)) - assert_frame_equal(result, result2) - - result = DataFrame([Series({})]) - expected = DataFrame(index=[0]) - assert_frame_equal(result, expected) - - data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]), - OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])] - sdict = OrderedDict(zip(range(len(data)), data)) - - idx = Index(['a', 'b', 'c']) - data2 = [Series([1.5, 3, 4], idx, dtype='O'), - Series([1.5, 3, 6], idx)] - 
result = DataFrame(data2) - expected = DataFrame.from_dict(sdict, orient='index') - assert_frame_equal(result, expected) - - def test_constructor_list_of_derived_dicts(self): - class CustomDict(dict): - pass - d = {'a': 1.5, 'b': 3} - - data_custom = [CustomDict(d)] - data = [d] - - result_custom = DataFrame(data_custom) - result = DataFrame(data) - assert_frame_equal(result, result_custom) - - def test_constructor_ragged(self): - data = {'A': randn(10), - 'B': randn(8)} - with assertRaisesRegexp(ValueError, 'arrays must all be same length'): - DataFrame(data) - - def test_constructor_scalar(self): - idx = Index(lrange(3)) - df = DataFrame({"a": 0}, index=idx) - expected = DataFrame({"a": [0, 0, 0]}, index=idx) - assert_frame_equal(df, expected, check_dtype=False) - - def test_constructor_Series_copy_bug(self): - df = DataFrame(self.frame['A'], index=self.frame.index, columns=['A']) - df.copy() - - def test_constructor_mixed_dict_and_Series(self): - data = {} - data['A'] = {'foo': 1, 'bar': 2, 'baz': 3} - data['B'] = Series([4, 3, 2, 1], index=['bar', 'qux', 'baz', 'foo']) - - result = DataFrame(data) - self.assertTrue(result.index.is_monotonic) - - # ordering ambiguous, raise exception - with assertRaisesRegexp(ValueError, 'ambiguous ordering'): - DataFrame({'A': ['a', 'b'], 'B': {'a': 'a', 'b': 'b'}}) - - # this is OK though - result = DataFrame({'A': ['a', 'b'], - 'B': Series(['a', 'b'], index=['a', 'b'])}) - expected = DataFrame({'A': ['a', 'b'], 'B': ['a', 'b']}, - index=['a', 'b']) - assert_frame_equal(result, expected) - - def test_constructor_tuples(self): - result = DataFrame({'A': [(1, 2), (3, 4)]}) - expected = DataFrame({'A': Series([(1, 2), (3, 4)])}) - assert_frame_equal(result, expected) - - def test_constructor_namedtuples(self): - # GH11181 - from collections import namedtuple - named_tuple = namedtuple("Pandas", list('ab')) - tuples = [named_tuple(1, 3), named_tuple(2, 4)] - expected = DataFrame({'a': [1, 2], 'b': [3, 4]}) - result = 
DataFrame(tuples) - assert_frame_equal(result, expected) - - # with columns - expected = DataFrame({'y': [1, 2], 'z': [3, 4]}) - result = DataFrame(tuples, columns=['y', 'z']) - assert_frame_equal(result, expected) - - def test_constructor_orient(self): - data_dict = self.mixed_frame.T._series - recons = DataFrame.from_dict(data_dict, orient='index') - expected = self.mixed_frame.sort_index() - assert_frame_equal(recons, expected) - - # dict of sequence - a = {'hi': [32, 3, 3], - 'there': [3, 5, 3]} - rs = DataFrame.from_dict(a, orient='index') - xp = DataFrame.from_dict(a).T.reindex(list(a.keys())) - assert_frame_equal(rs, xp) - - def test_constructor_Series_named(self): - a = Series([1, 2, 3], index=['a', 'b', 'c'], name='x') - df = DataFrame(a) - self.assertEqual(df.columns[0], 'x') - self.assertTrue(df.index.equals(a.index)) - - # ndarray like - arr = np.random.randn(10) - s = Series(arr,name='x') - df = DataFrame(s) - expected = DataFrame(dict(x = s)) - assert_frame_equal(df,expected) - - s = Series(arr,index=range(3,13)) - df = DataFrame(s) - expected = DataFrame({ 0 : s }) - assert_frame_equal(df,expected) - - self.assertRaises(ValueError, DataFrame, s, columns=[1,2]) - - # #2234 - a = Series([], name='x') - df = DataFrame(a) - self.assertEqual(df.columns[0], 'x') - - # series with name and w/o - s1 = Series(arr,name='x') - df = DataFrame([s1, arr]).T - expected = DataFrame({ 'x' : s1, 'Unnamed 0' : arr },columns=['x','Unnamed 0']) - assert_frame_equal(df,expected) - - # this is a bit non-intuitive here; the series collapse down to arrays - df = DataFrame([arr, s1]).T - expected = DataFrame({ 1 : s1, 0 : arr },columns=[0,1]) - assert_frame_equal(df,expected) - - def test_constructor_Series_differently_indexed(self): - # name - s1 = Series([1, 2, 3], index=['a', 'b', 'c'], name='x') - - # no name - s2 = Series([1, 2, 3], index=['a', 'b', 'c']) - - other_index = Index(['a', 'b']) - - df1 = DataFrame(s1, index=other_index) - exp1 = 
DataFrame(s1.reindex(other_index)) - self.assertEqual(df1.columns[0], 'x') - assert_frame_equal(df1, exp1) - - df2 = DataFrame(s2, index=other_index) - exp2 = DataFrame(s2.reindex(other_index)) - self.assertEqual(df2.columns[0], 0) - self.assertTrue(df2.index.equals(other_index)) - assert_frame_equal(df2, exp2) - - def test_constructor_manager_resize(self): - index = list(self.frame.index[:5]) - columns = list(self.frame.columns[:3]) - - result = DataFrame(self.frame._data, index=index, - columns=columns) - self.assert_numpy_array_equal(result.index, index) - self.assert_numpy_array_equal(result.columns, columns) - - def test_constructor_from_items(self): - items = [(c, self.frame[c]) for c in self.frame.columns] - recons = DataFrame.from_items(items) - assert_frame_equal(recons, self.frame) - - # pass some columns - recons = DataFrame.from_items(items, columns=['C', 'B', 'A']) - assert_frame_equal(recons, self.frame.ix[:, ['C', 'B', 'A']]) - - # orient='index' - - row_items = [(idx, self.mixed_frame.xs(idx)) - for idx in self.mixed_frame.index] - - recons = DataFrame.from_items(row_items, - columns=self.mixed_frame.columns, - orient='index') - assert_frame_equal(recons, self.mixed_frame) - self.assertEqual(recons['A'].dtype, np.float64) - - with tm.assertRaisesRegexp(TypeError, - "Must pass columns with orient='index'"): - DataFrame.from_items(row_items, orient='index') - - # orient='index', but thar be tuples - arr = lib.list_to_object_array( - [('bar', 'baz')] * len(self.mixed_frame)) - self.mixed_frame['foo'] = arr - row_items = [(idx, list(self.mixed_frame.xs(idx))) - for idx in self.mixed_frame.index] - recons = DataFrame.from_items(row_items, - columns=self.mixed_frame.columns, - orient='index') - assert_frame_equal(recons, self.mixed_frame) - tm.assertIsInstance(recons['foo'][0], tuple) - - rs = DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])], - orient='index', columns=['one', 'two', 'three']) - xp = DataFrame([[1, 2, 3], [4, 5, 6]], index=['A', 
'B'], - columns=['one', 'two', 'three']) - assert_frame_equal(rs, xp) - - def test_constructor_mix_series_nonseries(self): - df = DataFrame({'A': self.frame['A'], - 'B': list(self.frame['B'])}, columns=['A', 'B']) - assert_frame_equal(df, self.frame.ix[:, ['A', 'B']]) - - with tm.assertRaisesRegexp(ValueError, 'does not match index length'): - DataFrame({'A': self.frame['A'], 'B': list(self.frame['B'])[:-2]}) - - def test_constructor_miscast_na_int_dtype(self): - df = DataFrame([[np.nan, 1], [1, 0]], dtype=np.int64) - expected = DataFrame([[np.nan, 1], [1, 0]]) - assert_frame_equal(df, expected) - - def test_constructor_iterator_failure(self): - with assertRaisesRegexp(TypeError, 'iterator'): - df = DataFrame(iter([1, 2, 3])) - - def test_constructor_column_duplicates(self): - # it works! #2079 - df = DataFrame([[8, 5]], columns=['a', 'a']) - edf = DataFrame([[8, 5]]) - edf.columns = ['a', 'a'] - - assert_frame_equal(df, edf) - - idf = DataFrame.from_items( - [('a', [8]), ('a', [5])], columns=['a', 'a']) - assert_frame_equal(idf, edf) - - self.assertRaises(ValueError, DataFrame.from_items, - [('a', [8]), ('a', [5]), ('b', [6])], - columns=['b', 'a', 'a']) - - def test_constructor_empty_with_string_dtype(self): - # GH 9428 - expected = DataFrame(index=[0, 1], columns=[0, 1], dtype=object) - - df = DataFrame(index=[0, 1], columns=[0, 1], dtype=str) - assert_frame_equal(df, expected) - df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.str_) - assert_frame_equal(df, expected) - df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.unicode_) - assert_frame_equal(df, expected) - df = DataFrame(index=[0, 1], columns=[0, 1], dtype='U5') - assert_frame_equal(df, expected) - - - def test_column_dups_operations(self): - - def check(result, expected=None): - if expected is not None: - assert_frame_equal(result,expected) - result.dtypes - str(result) - - # assignment - # GH 3687 - arr = np.random.randn(3, 2) - idx = lrange(2) - df = DataFrame(arr, columns=['A', 'A']) - 
df.columns = idx - expected = DataFrame(arr,columns=idx) - check(df,expected) - - idx = date_range('20130101',periods=4,freq='Q-NOV') - df = DataFrame([[1,1,1,5],[1,1,2,5],[2,1,3,5]],columns=['a','a','a','a']) - df.columns = idx - expected = DataFrame([[1,1,1,5],[1,1,2,5],[2,1,3,5]],columns=idx) - check(df,expected) - - # insert - df = DataFrame([[1,1,1,5],[1,1,2,5],[2,1,3,5]],columns=['foo','bar','foo','hello']) - df['string'] = 'bah' - expected = DataFrame([[1,1,1,5,'bah'],[1,1,2,5,'bah'],[2,1,3,5,'bah']],columns=['foo','bar','foo','hello','string']) - check(df,expected) - with assertRaisesRegexp(ValueError, 'Length of value'): - df.insert(0, 'AnotherColumn', range(len(df.index) - 1)) - - # insert same dtype - df['foo2'] = 3 - expected = DataFrame([[1,1,1,5,'bah',3],[1,1,2,5,'bah',3],[2,1,3,5,'bah',3]],columns=['foo','bar','foo','hello','string','foo2']) - check(df,expected) - - # set (non-dup) - df['foo2'] = 4 - expected = DataFrame([[1,1,1,5,'bah',4],[1,1,2,5,'bah',4],[2,1,3,5,'bah',4]],columns=['foo','bar','foo','hello','string','foo2']) - check(df,expected) - df['foo2'] = 3 - - # delete (non dup) - del df['bar'] - expected = DataFrame([[1,1,5,'bah',3],[1,2,5,'bah',3],[2,3,5,'bah',3]],columns=['foo','foo','hello','string','foo2']) - check(df,expected) - - # try to delete again (its not consolidated) - del df['hello'] - expected = DataFrame([[1,1,'bah',3],[1,2,'bah',3],[2,3,'bah',3]],columns=['foo','foo','string','foo2']) - check(df,expected) - - # consolidate - df = df.consolidate() - expected = DataFrame([[1,1,'bah',3],[1,2,'bah',3],[2,3,'bah',3]],columns=['foo','foo','string','foo2']) - check(df,expected) - - # insert - df.insert(2,'new_col',5.) - expected = DataFrame([[1,1,5.,'bah',3],[1,2,5.,'bah',3],[2,3,5.,'bah',3]],columns=['foo','foo','new_col','string','foo2']) - check(df,expected) - - # insert a dup - assertRaisesRegexp(ValueError, 'cannot insert', df.insert, 2, 'new_col', 4.) 
- df.insert(2,'new_col',4.,allow_duplicates=True) - expected = DataFrame([[1,1,4.,5.,'bah',3],[1,2,4.,5.,'bah',3],[2,3,4.,5.,'bah',3]],columns=['foo','foo','new_col','new_col','string','foo2']) - check(df,expected) - - # delete (dup) - del df['foo'] - expected = DataFrame([[4.,5.,'bah',3],[4.,5.,'bah',3],[4.,5.,'bah',3]],columns=['new_col','new_col','string','foo2']) - assert_frame_equal(df,expected) - - # dup across dtypes - df = DataFrame([[1,1,1.,5],[1,1,2.,5],[2,1,3.,5]],columns=['foo','bar','foo','hello']) - check(df) - - df['foo2'] = 7. - expected = DataFrame([[1,1,1.,5,7.],[1,1,2.,5,7.],[2,1,3.,5,7.]],columns=['foo','bar','foo','hello','foo2']) - check(df,expected) - - result = df['foo'] - expected = DataFrame([[1,1.],[1,2.],[2,3.]],columns=['foo','foo']) - check(result,expected) - - # multiple replacements - df['foo'] = 'string' - expected = DataFrame([['string',1,'string',5,7.],['string',1,'string',5,7.],['string',1,'string',5,7.]],columns=['foo','bar','foo','hello','foo2']) - check(df,expected) - - del df['foo'] - expected = DataFrame([[1,5,7.],[1,5,7.],[1,5,7.]],columns=['bar','hello','foo2']) - check(df,expected) - - # values - df = DataFrame([[1,2.5],[3,4.5]], index=[1,2], columns=['x','x']) - result = df.values - expected = np.array([[1,2.5],[3,4.5]]) - self.assertTrue((result == expected).all().all()) - - # rename, GH 4403 - df4 = DataFrame({'TClose': [22.02], - 'RT': [0.0454], - 'TExg': [0.0422]}, - index=MultiIndex.from_tuples([(600809, 20130331)], names=['STK_ID', 'RPT_Date'])) - - df5 = DataFrame({'STK_ID': [600809] * 3, - 'RPT_Date': [20120930,20121231,20130331], - 'STK_Name': [u('饡驦'), u('饡驦'), u('饡驦')], - 'TClose': [38.05, 41.66, 30.01]}, - index=MultiIndex.from_tuples([(600809, 20120930), (600809, 20121231),(600809,20130331)], names=['STK_ID', 'RPT_Date'])) - - k = pd.merge(df4,df5,how='inner',left_index=True,right_index=True) - result = k.rename(columns={'TClose_x':'TClose', 'TClose_y':'QT_Close'}) - str(result) - result.dtypes - - expected 
= DataFrame([[0.0454, 22.02, 0.0422, 20130331, 600809, u('饡驦'), 30.01 ]], - columns=['RT','TClose','TExg','RPT_Date','STK_ID','STK_Name','QT_Close']).set_index(['STK_ID','RPT_Date'],drop=False) - assert_frame_equal(result,expected) - - # reindex is invalid! - df = DataFrame([[1,5,7.],[1,5,7.],[1,5,7.]],columns=['bar','a','a']) - self.assertRaises(ValueError, df.reindex, columns=['bar']) - self.assertRaises(ValueError, df.reindex, columns=['bar','foo']) - - # drop - df = DataFrame([[1,5,7.],[1,5,7.],[1,5,7.]],columns=['bar','a','a']) - result = df.drop(['a'],axis=1) - expected = DataFrame([[1],[1],[1]],columns=['bar']) - check(result,expected) - result = df.drop('a',axis=1) - check(result,expected) - - # describe - df = DataFrame([[1,1,1],[2,2,2],[3,3,3]],columns=['bar','a','a'],dtype='float64') - result = df.describe() - s = df.iloc[:,0].describe() - expected = pd.concat([ s, s, s],keys=df.columns,axis=1) - check(result,expected) - - # check column dups with index equal and not equal to df's index - df = DataFrame(np.random.randn(5, 3), index=['a', 'b', 'c', 'd', 'e'], - columns=['A', 'B', 'A']) - for index in [df.index, pd.Index(list('edcba'))]: - this_df = df.copy() - expected_ser = pd.Series(index.values, index=this_df.index) - expected_df = DataFrame.from_items([('A', expected_ser), - ('B', this_df['B']), - ('A', expected_ser)]) - this_df['A'] = index - check(this_df, expected_df) - - # operations - for op in ['__add__','__mul__','__sub__','__truediv__']: - df = DataFrame(dict(A = np.arange(10), B = np.random.rand(10))) - expected = getattr(df,op)(df) - expected.columns = ['A','A'] - df.columns = ['A','A'] - result = getattr(df,op)(df) - check(result,expected) - - # multiple assignments that change dtypes - # the location indexer is a slice - # GH 6120 - df = DataFrame(np.random.randn(5,2), columns=['that', 'that']) - expected = DataFrame(1.0, index=range(5), columns=['that', 'that']) - - df['that'] = 1.0 - check(df, expected) - - df = 
DataFrame(np.random.rand(5,2), columns=['that', 'that']) - expected = DataFrame(1, index=range(5), columns=['that', 'that']) - - df['that'] = 1 - check(df, expected) - - def test_column_dups2(self): - - # drop buggy GH 6240 - df = DataFrame({'A' : np.random.randn(5), - 'B' : np.random.randn(5), - 'C' : np.random.randn(5), - 'D' : ['a','b','c','d','e'] }) - - expected = df.take([0,1,1], axis=1) - df2 = df.take([2,0,1,2,1], axis=1) - result = df2.drop('C',axis=1) - assert_frame_equal(result, expected) - - # dropna - df = DataFrame({'A' : np.random.randn(5), - 'B' : np.random.randn(5), - 'C' : np.random.randn(5), - 'D' : ['a','b','c','d','e'] }) - df.iloc[2,[0,1,2]] = np.nan - df.iloc[0,0] = np.nan - df.iloc[1,1] = np.nan - df.iloc[:,3] = np.nan - expected = df.dropna(subset=['A','B','C'],how='all') - expected.columns = ['A','A','B','C'] - - df.columns = ['A','A','B','C'] - - result = df.dropna(subset=['A','C'],how='all') - assert_frame_equal(result, expected) - - def test_column_dups_indexing(self): - def check(result, expected=None): - if expected is not None: - assert_frame_equal(result,expected) - result.dtypes - str(result) - - # boolean indexing - # GH 4879 - dups = ['A', 'A', 'C', 'D'] - df = DataFrame(np.arange(12).reshape(3,4), columns=['A', 'B', 'C', 'D'],dtype='float64') - expected = df[df.C > 6] - expected.columns = dups - df = DataFrame(np.arange(12).reshape(3,4), columns=dups,dtype='float64') - result = df[df.C > 6] - check(result,expected) - - # where - df = DataFrame(np.arange(12).reshape(3,4), columns=['A', 'B', 'C', 'D'],dtype='float64') - expected = df[df > 6] - expected.columns = dups - df = DataFrame(np.arange(12).reshape(3,4), columns=dups,dtype='float64') - result = df[df > 6] - check(result,expected) - - # boolean with the duplicate raises - df = DataFrame(np.arange(12).reshape(3,4), columns=dups,dtype='float64') - self.assertRaises(ValueError, lambda : df[df.A > 6]) - - # dup aligning operations should work - # GH 5185 - df1 = DataFrame([1, 
2, 3, 4, 5], index=[1, 2, 1, 2, 3]) - df2 = DataFrame([1, 2, 3], index=[1, 2, 3]) - expected = DataFrame([0,2,0,2,2],index=[1,1,2,2,3]) - result = df1.sub(df2) - assert_frame_equal(result,expected) - - # equality - df1 = DataFrame([[1,2],[2,np.nan],[3,4],[4,4]],columns=['A','B']) - df2 = DataFrame([[0,1],[2,4],[2,np.nan],[4,5]],columns=['A','A']) - - # not-comparing like-labelled - self.assertRaises(ValueError, lambda : df1 == df2) - - df1r = df1.reindex_like(df2) - result = df1r == df2 - expected = DataFrame([[False,True],[True,False],[False,False],[True,False]],columns=['A','A']) - assert_frame_equal(result,expected) - - # mixed column selection - # GH 5639 - dfbool = DataFrame({'one' : Series([True, True, False], index=['a', 'b', 'c']), - 'two' : Series([False, False, True, False], index=['a', 'b', 'c', 'd']), - 'three': Series([False, True, True, True], index=['a', 'b', 'c', 'd'])}) - expected = pd.concat([dfbool['one'],dfbool['three'],dfbool['one']],axis=1) - result = dfbool[['one', 'three', 'one']] - check(result,expected) - - # multi-axis dups - # GH 6121 - df = DataFrame(np.arange(25.).reshape(5,5), - index=['a', 'b', 'c', 'd', 'e'], - columns=['A', 'B', 'C', 'D', 'E']) - z = df[['A', 'C', 'A']].copy() - expected = z.ix[['a', 'c', 'a']] - - df = DataFrame(np.arange(25.).reshape(5,5), - index=['a', 'b', 'c', 'd', 'e'], - columns=['A', 'B', 'C', 'D', 'E']) - z = df[['A', 'C', 'A']] - result = z.ix[['a', 'c', 'a']] - check(result,expected) - - - def test_column_dups_indexing2(self): - - # GH 8363 - # datetime ops with a non-unique index - df = DataFrame({'A' : np.arange(5,dtype='int64'), - 'B' : np.arange(1,6,dtype='int64')}, - index=[2,2,3,3,4]) - result = df.B-df.A - expected = Series(1,index=[2,2,3,3,4]) - assert_series_equal(result,expected) - - df = DataFrame({'A' : date_range('20130101',periods=5), 'B' : date_range('20130101 09:00:00', periods=5)},index=[2,2,3,3,4]) - result = df.B-df.A - expected = Series(Timedelta('9 hours'),index=[2,2,3,3,4]) - 
assert_series_equal(result,expected) - - def test_insert_benchmark(self): - # from the vb_suite/frame_methods/frame_insert_columns - N = 10 - K = 5 - df = DataFrame(index=lrange(N)) - new_col = np.random.randn(N) - for i in range(K): - df[i] = new_col - expected = DataFrame(np.repeat(new_col,K).reshape(N,K),index=lrange(N)) - assert_frame_equal(df,expected) - - def test_constructor_single_value(self): - - # expecting single value upcasting here - df = DataFrame(0., index=[1, 2, 3], columns=['a', 'b', 'c']) - assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('float64'), df.index, - df.columns)) - - df = DataFrame(0, index=[1, 2, 3], columns=['a', 'b', 'c']) - assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('int64'), df.index, - df.columns)) - - - df = DataFrame('a', index=[1, 2], columns=['a', 'c']) - assert_frame_equal(df, DataFrame(np.array([['a', 'a'], - ['a', 'a']], - dtype=object), - index=[1, 2], - columns=['a', 'c'])) - - self.assertRaises(com.PandasError, DataFrame, 'a', [1, 2]) - self.assertRaises(com.PandasError, DataFrame, 'a', columns=['a', 'c']) - with tm.assertRaisesRegexp(TypeError, 'incompatible data and dtype'): - DataFrame('a', [1, 2], ['a', 'c'], float) - - def test_constructor_with_datetimes(self): - intname = np.dtype(np.int_).name - floatname = np.dtype(np.float_).name - datetime64name = np.dtype('M8[ns]').name - objectname = np.dtype(np.object_).name - - # single item - df = DataFrame({'A' : 1, 'B' : 'foo', 'C' : 'bar', 'D' : Timestamp("20010101"), 'E' : datetime(2001,1,2,0,0) }, - index=np.arange(10)) - result = df.get_dtype_counts() - expected = Series({'int64': 1, datetime64name: 2, objectname : 2}) - result.sort_index() - expected.sort_index() - assert_series_equal(result, expected) - - # check with ndarray construction ndim==0 (e.g. 
we are passing a ndim 0 ndarray with a dtype specified) - df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', floatname : np.array(1.,dtype=floatname), - intname : np.array(1,dtype=intname)}, index=np.arange(10)) - result = df.get_dtype_counts() - expected = { objectname : 1 } - if intname == 'int64': - expected['int64'] = 2 - else: - expected['int64'] = 1 - expected[intname] = 1 - if floatname == 'float64': - expected['float64'] = 2 - else: - expected['float64'] = 1 - expected[floatname] = 1 - - result.sort_index() - expected = Series(expected) - expected.sort_index() - assert_series_equal(result, expected) - - # check with ndarray construction ndim>0 - df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', floatname : np.array([1.]*10,dtype=floatname), - intname : np.array([1]*10,dtype=intname)}, index=np.arange(10)) - result = df.get_dtype_counts() - result.sort_index() - assert_series_equal(result, expected) - - # GH 2809 - ind = date_range(start="2000-01-01", freq="D", periods=10) - datetimes = [ts.to_pydatetime() for ts in ind] - datetime_s = Series(datetimes) - self.assertEqual(datetime_s.dtype, 'M8[ns]') - df = DataFrame({'datetime_s':datetime_s}) - result = df.get_dtype_counts() - expected = Series({ datetime64name : 1 }) - result.sort_index() - expected.sort_index() - assert_series_equal(result, expected) - - # GH 2810 - ind = date_range(start="2000-01-01", freq="D", periods=10) - datetimes = [ts.to_pydatetime() for ts in ind] - dates = [ts.date() for ts in ind] - df = DataFrame({'datetimes': datetimes, 'dates':dates}) - result = df.get_dtype_counts() - expected = Series({ datetime64name : 1, objectname : 1 }) - result.sort_index() - expected.sort_index() - assert_series_equal(result, expected) - - # GH 7594 - # don't coerce tz-aware - import pytz - tz = pytz.timezone('US/Eastern') - dt = tz.localize(datetime(2012, 1, 1)) - - df = DataFrame({'End Date': dt}, index=[0]) - self.assertEqual(df.iat[0,0],dt) - assert_series_equal(df.dtypes,Series({'End Date' : 'datetime64[ns, 
US/Eastern]' })) - - df = DataFrame([{'End Date': dt}]) - self.assertEqual(df.iat[0,0],dt) - assert_series_equal(df.dtypes,Series({'End Date' : 'datetime64[ns, US/Eastern]' })) - - # tz-aware (UTC and other tz's) - # GH 8411 - dr = date_range('20130101',periods=3) - df = DataFrame({ 'value' : dr}) - self.assertTrue(df.iat[0,0].tz is None) - dr = date_range('20130101',periods=3,tz='UTC') - df = DataFrame({ 'value' : dr}) - self.assertTrue(str(df.iat[0,0].tz) == 'UTC') - dr = date_range('20130101',periods=3,tz='US/Eastern') - df = DataFrame({ 'value' : dr}) - self.assertTrue(str(df.iat[0,0].tz) == 'US/Eastern') - - # GH 7822 - # preserver an index with a tz on dict construction - i = date_range('1/1/2011', periods=5, freq='10s', tz = 'US/Eastern') - - expected = DataFrame( {'a' : i.to_series(keep_tz=True).reset_index(drop=True) }) - df = DataFrame() - df['a'] = i - assert_frame_equal(df, expected) - - df = DataFrame( {'a' : i } ) - assert_frame_equal(df, expected) - - # multiples - i_no_tz = date_range('1/1/2011', periods=5, freq='10s') - df = DataFrame( {'a' : i, 'b' : i_no_tz } ) - expected = DataFrame( {'a' : i.to_series(keep_tz=True).reset_index(drop=True), 'b': i_no_tz }) - assert_frame_equal(df, expected) - - def test_constructor_with_datetime_tz(self): - - # 8260 - # support datetime64 with tz - - idx = Index(date_range('20130101',periods=3,tz='US/Eastern'), - name='foo') - dr = date_range('20130110',periods=3) - - # construction - df = DataFrame({'A' : idx, 'B' : dr}) - self.assertTrue(df['A'].dtype,'M8[ns, US/Eastern') - self.assertTrue(df['A'].name == 'A') - assert_series_equal(df['A'],Series(idx,name='A')) - assert_series_equal(df['B'],Series(dr,name='B')) - - # construction from dict - df2 = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'), B=Timestamp('20130603', tz='CET')), index=range(5)) - assert_series_equal(df2.dtypes, Series(['datetime64[ns, US/Eastern]', 'datetime64[ns, CET]'], index=['A','B'])) - - # dtypes - tzframe = DataFrame({'A' : 
date_range('20130101',periods=3), - 'B' : date_range('20130101',periods=3,tz='US/Eastern'), - 'C' : date_range('20130101',periods=3,tz='CET')}) - tzframe.iloc[1,1] = pd.NaT - tzframe.iloc[1,2] = pd.NaT - result = tzframe.dtypes.sort_index() - expected = Series([ np.dtype('datetime64[ns]'), - DatetimeTZDtype('datetime64[ns, US/Eastern]'), - DatetimeTZDtype('datetime64[ns, CET]') ], - ['A','B','C']) - - # concat - df3 = pd.concat([df2.A.to_frame(),df2.B.to_frame()],axis=1) - assert_frame_equal(df2, df3) - - # select_dtypes - result = df3.select_dtypes(include=['datetime64[ns]']) - expected = df3.reindex(columns=[]) - assert_frame_equal(result, expected) - - # this will select based on issubclass, and these are the same class - result = df3.select_dtypes(include=['datetime64[ns, CET]']) - expected = df3 - assert_frame_equal(result, expected) - - # from index - idx2 = date_range('20130101',periods=3,tz='US/Eastern',name='foo') - df2 = DataFrame(idx2) - assert_series_equal(df2['foo'],Series(idx2,name='foo')) - df2 = DataFrame(Series(idx2)) - assert_series_equal(df2['foo'],Series(idx2,name='foo')) - - idx2 = date_range('20130101',periods=3,tz='US/Eastern') - df2 = DataFrame(idx2) - assert_series_equal(df2[0],Series(idx2,name=0)) - df2 = DataFrame(Series(idx2)) - assert_series_equal(df2[0],Series(idx2,name=0)) - - # interleave with object - result = self.tzframe.assign(D = 'foo').values - expected = np.array([[Timestamp('2013-01-01 00:00:00'), - Timestamp('2013-01-02 00:00:00'), - Timestamp('2013-01-03 00:00:00')], - [Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'), - pd.NaT, - Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')], - [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), - pd.NaT, - Timestamp('2013-01-03 00:00:00+0100', tz='CET')], - ['foo','foo','foo']], dtype=object).T - self.assert_numpy_array_equal(result, expected) - - # interleave with only datetime64[ns] - result = self.tzframe.values - expected = np.array([[Timestamp('2013-01-01 00:00:00'), - 
Timestamp('2013-01-02 00:00:00'), - Timestamp('2013-01-03 00:00:00')], - [Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'), - pd.NaT, - Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')], - [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), - pd.NaT, - Timestamp('2013-01-03 00:00:00+0100', tz='CET')]], dtype=object).T - self.assert_numpy_array_equal(result, expected) - - # astype - expected = np.array([[Timestamp('2013-01-01 00:00:00'), - Timestamp('2013-01-02 00:00:00'), - Timestamp('2013-01-03 00:00:00')], - [Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'), - pd.NaT, - Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')], - [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), - pd.NaT, - Timestamp('2013-01-03 00:00:00+0100', tz='CET')]], dtype=object).T - result = self.tzframe.astype(object) - assert_frame_equal(result, DataFrame(expected, index=self.tzframe.index, columns=self.tzframe.columns)) - - result = self.tzframe.astype('datetime64[ns]') - expected = DataFrame({'A' : date_range('20130101',periods=3), - 'B' : date_range('20130101',periods=3,tz='US/Eastern').tz_convert('UTC').tz_localize(None), - 'C' : date_range('20130101',periods=3,tz='CET').tz_convert('UTC').tz_localize(None)}) - expected.iloc[1,1] = pd.NaT - expected.iloc[1,2] = pd.NaT - assert_frame_equal(result, expected) - - # str formatting - result = self.tzframe.astype(str) - expected = np.array([['2013-01-01', '2013-01-01 00:00:00-05:00', - '2013-01-01 00:00:00+01:00'], - ['2013-01-02', 'NaT', 'NaT'], - ['2013-01-03', '2013-01-03 00:00:00-05:00', - '2013-01-03 00:00:00+01:00']], dtype=object) - self.assert_numpy_array_equal(result, expected) - - result = str(self.tzframe) - self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00+01:00' in result) - self.assertTrue('1 2013-01-02 NaT NaT' in result) - self.assertTrue('2 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-03 00:00:00+01:00' in result) - - # setitem - df['C'] = idx - 
assert_series_equal(df['C'],Series(idx,name='C')) - - df['D'] = 'foo' - df['D'] = idx - assert_series_equal(df['D'],Series(idx,name='D')) - del df['D'] - - # assert that A & C are not sharing the same base (e.g. they - # are copies) - b1 = df._data.blocks[1] - b2 = df._data.blocks[2] - self.assertTrue(b1.values.equals(b2.values)) - self.assertFalse(id(b1.values.values.base) == id(b2.values.values.base)) - - # with nan - df2 = df.copy() - df2.iloc[1,1] = pd.NaT - df2.iloc[1,2] = pd.NaT - result = df2['B'] - assert_series_equal(notnull(result), Series([True,False,True],name='B')) - assert_series_equal(df2.dtypes, df.dtypes) - - # set/reset - df = DataFrame({'A' : [0,1,2] }, index=idx) - result = df.reset_index() - self.assertTrue(result['foo'].dtype,'M8[ns, US/Eastern') - - result = result.set_index('foo') - tm.assert_index_equal(df.index,idx) - - def test_constructor_for_list_with_dtypes(self): - intname = np.dtype(np.int_).name - floatname = np.dtype(np.float_).name - datetime64name = np.dtype('M8[ns]').name - objectname = np.dtype(np.object_).name - - # test list of lists/ndarrays - df = DataFrame([np.arange(5) for x in range(5)]) - result = df.get_dtype_counts() - expected = Series({'int64' : 5}) - - df = DataFrame([np.array(np.arange(5),dtype='int32') for x in range(5)]) - result = df.get_dtype_counts() - expected = Series({'int32' : 5}) - - # overflow issue? 
(we always expecte int64 upcasting here) - df = DataFrame({'a' : [2**31,2**31+1]}) - result = df.get_dtype_counts() - expected = Series({'int64' : 1 }) - assert_series_equal(result, expected) - - # GH #2751 (construction with no index specified), make sure we cast to platform values - df = DataFrame([1, 2]) - result = df.get_dtype_counts() - expected = Series({'int64': 1 }) - assert_series_equal(result, expected) - - df = DataFrame([1.,2.]) - result = df.get_dtype_counts() - expected = Series({'float64' : 1 }) - assert_series_equal(result, expected) - - df = DataFrame({'a' : [1, 2]}) - result = df.get_dtype_counts() - expected = Series({'int64' : 1}) - assert_series_equal(result, expected) - - df = DataFrame({'a' : [1., 2.]}) - result = df.get_dtype_counts() - expected = Series({'float64' : 1}) - assert_series_equal(result, expected) - - df = DataFrame({'a' : 1 }, index=lrange(3)) - result = df.get_dtype_counts() - expected = Series({'int64': 1}) - assert_series_equal(result, expected) - - df = DataFrame({'a' : 1. 
}, index=lrange(3)) - result = df.get_dtype_counts() - expected = Series({'float64': 1 }) - assert_series_equal(result, expected) - - # with object list - df = DataFrame({'a':[1,2,4,7], 'b':[1.2, 2.3, 5.1, 6.3], - 'c':list('abcd'), 'd':[datetime(2000,1,1) for i in range(4)], - 'e' : [1.,2,4.,7]}) - result = df.get_dtype_counts() - expected = Series({'int64': 1, 'float64' : 2, datetime64name: 1, objectname : 1}) - result.sort_index() - expected.sort_index() - assert_series_equal(result, expected) - - def test_not_hashable(self): - df = pd.DataFrame([1]) - self.assertRaises(TypeError, hash, df) - self.assertRaises(TypeError, hash, self.empty) - - def test_timedeltas(self): - - df = DataFrame(dict(A = Series(date_range('2012-1-1', periods=3, freq='D')), - B = Series([ timedelta(days=i) for i in range(3) ]))) - result = df.get_dtype_counts().sort_values() - expected = Series({'datetime64[ns]': 1, 'timedelta64[ns]' : 1 }).sort_values() - assert_series_equal(result, expected) - - df['C'] = df['A'] + df['B'] - expected = Series({'datetime64[ns]': 2, 'timedelta64[ns]' : 1 }).sort_values() - result = df.get_dtype_counts().sort_values() - assert_series_equal(result, expected) - - # mixed int types - df['D'] = 1 - expected = Series({'datetime64[ns]': 2, 'timedelta64[ns]' : 1, 'int64' : 1 }).sort_values() - result = df.get_dtype_counts().sort_values() - assert_series_equal(result, expected) - - def test_operators_timedelta64(self): - - from datetime import timedelta - df = DataFrame(dict(A = date_range('2012-1-1', periods=3, freq='D'), - B = date_range('2012-1-2', periods=3, freq='D'), - C = Timestamp('20120101')-timedelta(minutes=5,seconds=5))) - - diffs = DataFrame(dict(A = df['A']-df['C'], - B = df['A']-df['B'])) - - - # min - result = diffs.min() - self.assertEqual(result[0], diffs.ix[0,'A']) - self.assertEqual(result[1], diffs.ix[0,'B']) - - result = diffs.min(axis=1) - self.assertTrue((result == diffs.ix[0,'B']).all() == True) - - # max - result = diffs.max() - 
self.assertEqual(result[0], diffs.ix[2,'A']) - self.assertEqual(result[1], diffs.ix[2,'B']) - - result = diffs.max(axis=1) - self.assertTrue((result == diffs['A']).all() == True) - - # abs - result = diffs.abs() - result2 = abs(diffs) - expected = DataFrame(dict(A = df['A']-df['C'], - B = df['B']-df['A'])) - assert_frame_equal(result,expected) - assert_frame_equal(result2, expected) - - # mixed frame - mixed = diffs.copy() - mixed['C'] = 'foo' - mixed['D'] = 1 - mixed['E'] = 1. - mixed['F'] = Timestamp('20130101') - - # results in an object array - from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type - result = mixed.min() - expected = Series([_coerce_scalar_to_timedelta_type(timedelta(seconds=5*60+5)), - _coerce_scalar_to_timedelta_type(timedelta(days=-1)), - 'foo', - 1, - 1.0, - Timestamp('20130101')], - index=mixed.columns) - assert_series_equal(result,expected) - - # excludes numeric - result = mixed.min(axis=1) - expected = Series([1, 1, 1.],index=[0, 1, 2]) - assert_series_equal(result,expected) - - # works when only those columns are selected - result = mixed[['A','B']].min(1) - expected = Series([ timedelta(days=-1) ] * 3) - assert_series_equal(result,expected) - - result = mixed[['A','B']].min() - expected = Series([ timedelta(seconds=5*60+5), timedelta(days=-1) ],index=['A','B']) - assert_series_equal(result,expected) - - # GH 3106 - df = DataFrame({'time' : date_range('20130102',periods=5), - 'time2' : date_range('20130105',periods=5) }) - df['off1'] = df['time2']-df['time'] - self.assertEqual(df['off1'].dtype, 'timedelta64[ns]') - - df['off2'] = df['time']-df['time2'] - df._consolidate_inplace() - self.assertTrue(df['off1'].dtype == 'timedelta64[ns]') - self.assertTrue(df['off2'].dtype == 'timedelta64[ns]') - - def test_datetimelike_setitem_with_inference(self): - # GH 7592 - # assignment of timedeltas with NaT - - one_hour = timedelta(hours=1) - df = DataFrame(index=date_range('20130101',periods=4)) - df['A'] = 
np.array([1*one_hour]*4, dtype='m8[ns]') - df.loc[:,'B'] = np.array([2*one_hour]*4, dtype='m8[ns]') - df.loc[:3,'C'] = np.array([3*one_hour]*3, dtype='m8[ns]') - df.ix[:,'D'] = np.array([4*one_hour]*4, dtype='m8[ns]') - df.ix[:3,'E'] = np.array([5*one_hour]*3, dtype='m8[ns]') - df['F'] = np.timedelta64('NaT') - df.ix[:-1,'F'] = np.array([6*one_hour]*3, dtype='m8[ns]') - df.ix[-3:,'G'] = date_range('20130101',periods=3) - df['H'] = np.datetime64('NaT') - result = df.dtypes - expected = Series([np.dtype('timedelta64[ns]')]*6+[np.dtype('datetime64[ns]')]*2,index=list('ABCDEFGH')) - assert_series_equal(result,expected) - - def test_setitem_datetime_coercion(self): - # GH 1048 - df = pd.DataFrame({'c': [pd.Timestamp('2010-10-01')]*3}) - df.loc[0:1, 'c'] = np.datetime64('2008-08-08') - self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[0, 'c']) - self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[1, 'c']) - df.loc[2, 'c'] = date(2005, 5, 5) - self.assertEqual(pd.Timestamp('2005-05-05'), df.loc[2, 'c']) - - - def test_new_empty_index(self): - df1 = DataFrame(randn(0, 3)) - df2 = DataFrame(randn(0, 3)) - df1.index.name = 'foo' - self.assertIsNone(df2.index.name) - - def test_astype(self): - casted = self.frame.astype(int) - expected = DataFrame(self.frame.values.astype(int), - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(casted, expected) - - casted = self.frame.astype(np.int32) - expected = DataFrame(self.frame.values.astype(np.int32), - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(casted, expected) - - self.frame['foo'] = '5' - casted = self.frame.astype(int) - expected = DataFrame(self.frame.values.astype(int), - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(casted, expected) - - # mixed casting - def _check_cast(df, v): - self.assertEqual(list(set([ s.dtype.name for _, s in compat.iteritems(df) ]))[0], v) - - mn = self.all_mixed._get_numeric_data().copy() - mn['little_float'] = 
np.array(12345.,dtype='float16') - mn['big_float'] = np.array(123456789101112.,dtype='float64') - - casted = mn.astype('float64') - _check_cast(casted, 'float64') - - casted = mn.astype('int64') - _check_cast(casted, 'int64') - - casted = self.mixed_float.reindex(columns = ['A','B']).astype('float32') - _check_cast(casted, 'float32') - - casted = mn.reindex(columns = ['little_float']).astype('float16') - _check_cast(casted, 'float16') - - casted = self.mixed_float.reindex(columns = ['A','B']).astype('float16') - _check_cast(casted, 'float16') - - casted = mn.astype('float32') - _check_cast(casted, 'float32') - - casted = mn.astype('int32') - _check_cast(casted, 'int32') - - # to object - casted = mn.astype('O') - _check_cast(casted, 'object') - - def test_astype_with_exclude_string(self): - df = self.frame.copy() - expected = self.frame.astype(int) - df['string'] = 'foo' - casted = df.astype(int, raise_on_error = False) - - expected['string'] = 'foo' - assert_frame_equal(casted, expected) - - df = self.frame.copy() - expected = self.frame.astype(np.int32) - df['string'] = 'foo' - casted = df.astype(np.int32, raise_on_error = False) - - expected['string'] = 'foo' - assert_frame_equal(casted, expected) - - def test_astype_with_view(self): - - tf = self.mixed_float.reindex(columns = ['A','B','C']) - - casted = tf.astype(np.int64) - - casted = tf.astype(np.float32) - - # this is the only real reason to do it this way - tf = np.round(self.frame).astype(np.int32) - casted = tf.astype(np.float32, copy = False) - - tf = self.frame.astype(np.float64) - casted = tf.astype(np.int64, copy = False) - - def test_astype_cast_nan_int(self): - df = DataFrame(data={"Values": [1.0, 2.0, 3.0, np.nan]}) - self.assertRaises(ValueError, df.astype, np.int64) - - def test_astype_str(self): - # GH9757 - a = Series(date_range('2010-01-04', periods=5)) - b = Series(date_range('3/6/2012 00:00', periods=5, tz='US/Eastern')) - c = Series([Timedelta(x, unit='d') for x in range(5)]) - d = 
Series(range(5)) - e = Series([0.0, 0.2, 0.4, 0.6, 0.8]) - - df = DataFrame({'a' : a, 'b' : b, 'c' : c, 'd' : d, 'e' : e}) - - # datetimelike - # Test str and unicode on python 2.x and just str on python 3.x - for tt in set([str, compat.text_type]): - result = df.astype(tt) - - expected = DataFrame({ - 'a' : list(map(tt, map(lambda x: Timestamp(x)._date_repr, a._values))), - 'b' : list(map(tt, map(Timestamp, b._values))), - 'c' : list(map(tt, map(lambda x: Timedelta(x)._repr_base(format='all'), c._values))), - 'd' : list(map(tt, d._values)), - 'e' : list(map(tt, e._values)), - }) - - assert_frame_equal(result, expected) - - # float/nan - # 11302 - # consistency in astype(str) - for tt in set([str, compat.text_type]): - result = DataFrame([np.NaN]).astype(tt) - expected = DataFrame(['nan']) - assert_frame_equal(result, expected) - - result = DataFrame([1.12345678901234567890]).astype(tt) - expected = DataFrame(['1.12345678901']) - assert_frame_equal(result, expected) - - def test_array_interface(self): - result = np.sqrt(self.frame) - tm.assertIsInstance(result, type(self.frame)) - self.assertIs(result.index, self.frame.index) - self.assertIs(result.columns, self.frame.columns) - - assert_frame_equal(result, self.frame.apply(np.sqrt)) - - def test_pickle(self): - unpickled = self.round_trip_pickle(self.mixed_frame) - assert_frame_equal(self.mixed_frame, unpickled) - - # buglet - self.mixed_frame._data.ndim - - # empty - unpickled = self.round_trip_pickle(self.empty) - repr(unpickled) - - # tz frame - unpickled = self.round_trip_pickle(self.tzframe) - assert_frame_equal(self.tzframe, unpickled) - - def test_to_dict(self): - test_data = { - 'A': {'1': 1, '2': 2}, - 'B': {'1': '1', '2': '2', '3': '3'}, - } - recons_data = DataFrame(test_data).to_dict() - - for k, v in compat.iteritems(test_data): - for k2, v2 in compat.iteritems(v): - self.assertEqual(v2, recons_data[k][k2]) - - recons_data = DataFrame(test_data).to_dict("l") - - for k, v in 
compat.iteritems(test_data): - for k2, v2 in compat.iteritems(v): - self.assertEqual(v2, recons_data[k][int(k2) - 1]) - - recons_data = DataFrame(test_data).to_dict("s") - - for k, v in compat.iteritems(test_data): - for k2, v2 in compat.iteritems(v): - self.assertEqual(v2, recons_data[k][k2]) - - recons_data = DataFrame(test_data).to_dict("sp") - - expected_split = {'columns': ['A', 'B'], 'index': ['1', '2', '3'], - 'data': [[1.0, '1'], [2.0, '2'], [nan, '3']]} - - tm.assert_almost_equal(recons_data, expected_split) - - recons_data = DataFrame(test_data).to_dict("r") - - expected_records = [{'A': 1.0, 'B': '1'}, - {'A': 2.0, 'B': '2'}, - {'A': nan, 'B': '3'}] - - tm.assert_almost_equal(recons_data, expected_records) - - # GH10844 - recons_data = DataFrame(test_data).to_dict("i") - - for k, v in compat.iteritems(test_data): - for k2, v2 in compat.iteritems(v): - self.assertEqual(v2, recons_data[k2][k]) - - def test_latex_repr(self): - result=r"""\begin{tabular}{llll} -\toprule -{} & 0 & 1 & 2 \\ -\midrule -0 & $\alpha$ & b & c \\ -1 & 1 & 2 & 3 \\ -\bottomrule -\end{tabular} -""" - with option_context("display.latex.escape",False): - df=DataFrame([[r'$\alpha$','b','c'],[1,2,3]]) - self.assertEqual(result,df._repr_latex_()) - - - def test_to_dict_timestamp(self): - - # GH11247 - # split/records producing np.datetime64 rather than Timestamps - # on datetime64[ns] dtypes only - - tsmp = Timestamp('20130101') - test_data = DataFrame({'A': [tsmp, tsmp], 'B': [tsmp, tsmp]}) - test_data_mixed = DataFrame({'A': [tsmp, tsmp], 'B': [1, 2]}) - - expected_records = [{'A': tsmp, 'B': tsmp}, - {'A': tsmp, 'B': tsmp}] - expected_records_mixed = [{'A': tsmp, 'B': 1}, - {'A': tsmp, 'B': 2}] - - tm.assert_almost_equal(test_data.to_dict( - orient='records'), expected_records) - tm.assert_almost_equal(test_data_mixed.to_dict( - orient='records'), expected_records_mixed) - - expected_series = { - 'A': Series([tsmp, tsmp]), - 'B': Series([tsmp, tsmp]), - } - expected_series_mixed = { - 
'A': Series([tsmp, tsmp]), - 'B': Series([1, 2]), - } - - tm.assert_almost_equal(test_data.to_dict( - orient='series'), expected_series) - tm.assert_almost_equal(test_data_mixed.to_dict( - orient='series'), expected_series_mixed) - - expected_split = { - 'index': [0, 1], - 'data': [[tsmp, tsmp], - [tsmp, tsmp]], - 'columns': ['A', 'B'] - } - expected_split_mixed = { - 'index': [0, 1], - 'data': [[tsmp, 1], - [tsmp, 2]], - 'columns': ['A', 'B'] - } - - tm.assert_almost_equal(test_data.to_dict( - orient='split'), expected_split) - tm.assert_almost_equal(test_data_mixed.to_dict( - orient='split'), expected_split_mixed) - - def test_to_dict_invalid_orient(self): - df = DataFrame({'A':[0, 1]}) - self.assertRaises(ValueError, df.to_dict, orient='xinvalid') - - def test_to_records_dt64(self): - df = DataFrame([["one", "two", "three"], - ["four", "five", "six"]], - index=date_range("2012-01-01", "2012-01-02")) - self.assertEqual(df.to_records()['index'][0], df.index[0]) - - rs = df.to_records(convert_datetime64=False) - self.assertEqual(rs['index'][0], df.index.values[0]) - - def test_to_records_with_multindex(self): - # GH3189 - index = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], - ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']] - data = np.zeros((8, 4)) - df = DataFrame(data, index=index) - r = df.to_records(index=True)['level_0'] - self.assertTrue('bar' in r) - self.assertTrue('one' not in r) - - def test_to_records_with_Mapping_type(self): - import email - from email.parser import Parser - import collections - - collections.Mapping.register(email.message.Message) - - headers = Parser().parsestr('From: <user@example.com>\n' - 'To: <someone_else@example.com>\n' - 'Subject: Test message\n' - '\n' - 'Body would go here\n') - - frame = DataFrame.from_records([headers]) - all( x in frame for x in ['Type','Subject','From']) - - def test_from_records_to_records(self): - # from numpy documentation - arr = np.zeros((2,), dtype=('i4,f4,a10')) - arr[:] = 
[(1, 2., 'Hello'), (2, 3., "World")] - - frame = DataFrame.from_records(arr) - - index = np.arange(len(arr))[::-1] - indexed_frame = DataFrame.from_records(arr, index=index) - self.assert_numpy_array_equal(indexed_frame.index, index) - - # without names, it should go to last ditch - arr2 = np.zeros((2, 3)) - assert_frame_equal(DataFrame.from_records(arr2), DataFrame(arr2)) - - # wrong length - msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)' - with assertRaisesRegexp(ValueError, msg): - DataFrame.from_records(arr, index=index[:-1]) - - indexed_frame = DataFrame.from_records(arr, index='f1') - - # what to do? - records = indexed_frame.to_records() - self.assertEqual(len(records.dtype.names), 3) - - records = indexed_frame.to_records(index=False) - self.assertEqual(len(records.dtype.names), 2) - self.assertNotIn('index', records.dtype.names) - - def test_from_records_nones(self): - tuples = [(1, 2, None, 3), - (1, 2, None, 3), - (None, 2, 5, 3)] - - df = DataFrame.from_records(tuples, columns=['a', 'b', 'c', 'd']) - self.assertTrue(np.isnan(df['c'][0])) - - def test_from_records_iterator(self): - arr = np.array([(1.0, 1.0, 2, 2), (3.0, 3.0, 4, 4), (5., 5., 6, 6), (7., 7., 8, 8)], - dtype=[('x', np.float64), ('u', np.float32), ('y', np.int64), ('z', np.int32) ]) - df = DataFrame.from_records(iter(arr), nrows=2) - xp = DataFrame({'x': np.array([1.0, 3.0], dtype=np.float64), - 'u': np.array([1.0, 3.0], dtype=np.float32), - 'y': np.array([2, 4], dtype=np.int64), - 'z': np.array([2, 4], dtype=np.int32)}) - assert_frame_equal(df.reindex_like(xp), xp) - - # no dtypes specified here, so just compare with the default - arr = [(1.0, 2), (3.0, 4), (5., 6), (7., 8)] - df = DataFrame.from_records(iter(arr), columns=['x', 'y'], - nrows=2) - assert_frame_equal(df, xp.reindex(columns=['x','y']), check_dtype=False) - - def test_from_records_tuples_generator(self): - def tuple_generator(length): - for i in range(length): - letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' - 
yield (i, letters[i % len(letters)], i/length) - - columns_names = ['Integer', 'String', 'Float'] - columns = [[i[j] for i in tuple_generator(10)] for j in range(len(columns_names))] - data = {'Integer': columns[0], 'String': columns[1], 'Float': columns[2]} - expected = DataFrame(data, columns=columns_names) - - generator = tuple_generator(10) - result = DataFrame.from_records(generator, columns=columns_names) - assert_frame_equal(result, expected) - - def test_from_records_lists_generator(self): - def list_generator(length): - for i in range(length): - letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' - yield [i, letters[i % len(letters)], i/length] - - columns_names = ['Integer', 'String', 'Float'] - columns = [[i[j] for i in list_generator(10)] for j in range(len(columns_names))] - data = {'Integer': columns[0], 'String': columns[1], 'Float': columns[2]} - expected = DataFrame(data, columns=columns_names) - - generator = list_generator(10) - result = DataFrame.from_records(generator, columns=columns_names) - assert_frame_equal(result, expected) - - def test_from_records_columns_not_modified(self): - tuples = [(1, 2, 3), - (1, 2, 3), - (2, 5, 3)] - - columns = ['a', 'b', 'c'] - original_columns = list(columns) - df = DataFrame.from_records(tuples, columns=columns, index='a') - self.assertEqual(columns, original_columns) - - def test_from_records_decimal(self): - from decimal import Decimal - - tuples = [(Decimal('1.5'),), (Decimal('2.5'),), (None,)] - - df = DataFrame.from_records(tuples, columns=['a']) - self.assertEqual(df['a'].dtype, object) - - df = DataFrame.from_records(tuples, columns=['a'], coerce_float=True) - self.assertEqual(df['a'].dtype, np.float64) - self.assertTrue(np.isnan(df['a'].values[-1])) - - def test_from_records_duplicates(self): - result = DataFrame.from_records([(1, 2, 3), (4, 5, 6)], - columns=['a', 'b', 'a']) - - expected = DataFrame([(1, 2, 3), (4, 5, 6)], - columns=['a', 'b', 'a']) - - assert_frame_equal(result, expected) - - def 
test_from_records_set_index_name(self): - def create_dict(order_id): - return {'order_id': order_id, 'quantity': np.random.randint(1, 10), - 'price': np.random.randint(1, 10)} - documents = [create_dict(i) for i in range(10)] - # demo missing data - documents.append({'order_id': 10, 'quantity': 5}) - - result = DataFrame.from_records(documents, index='order_id') - self.assertEqual(result.index.name, 'order_id') - - # MultiIndex - result = DataFrame.from_records(documents, - index=['order_id', 'quantity']) - self.assertEqual(result.index.names, ('order_id', 'quantity')) - - def test_from_records_misc_brokenness(self): - # #2179 - - data = {1: ['foo'], 2: ['bar']} - - result = DataFrame.from_records(data, columns=['a', 'b']) - exp = DataFrame(data, columns=['a', 'b']) - assert_frame_equal(result, exp) - - # overlap in index/index_names - - data = {'a': [1, 2, 3], 'b': [4, 5, 6]} - - result = DataFrame.from_records(data, index=['a', 'b', 'c']) - exp = DataFrame(data, index=['a', 'b', 'c']) - assert_frame_equal(result, exp) - - - # GH 2623 - rows = [] - rows.append([datetime(2010, 1, 1), 1]) - rows.append([datetime(2010, 1, 2), 'hi']) # test col upconverts to obj - df2_obj = DataFrame.from_records(rows, columns=['date', 'test']) - results = df2_obj.get_dtype_counts() - expected = Series({ 'datetime64[ns]' : 1, 'object' : 1 }) - - rows = [] - rows.append([datetime(2010, 1, 1), 1]) - rows.append([datetime(2010, 1, 2), 1]) - df2_obj = DataFrame.from_records(rows, columns=['date', 'test']) - results = df2_obj.get_dtype_counts() - expected = Series({ 'datetime64[ns]' : 1, 'int64' : 1 }) - - def test_from_records_empty(self): - # 3562 - result = DataFrame.from_records([], columns=['a','b','c']) - expected = DataFrame(columns=['a','b','c']) - assert_frame_equal(result, expected) - - result = DataFrame.from_records([], columns=['a','b','b']) - expected = DataFrame(columns=['a','b','b']) - assert_frame_equal(result, expected) - - def 
test_from_records_empty_with_nonempty_fields_gh3682(self): - a = np.array([(1, 2)], dtype=[('id', np.int64), ('value', np.int64)]) - df = DataFrame.from_records(a, index='id') - assert_numpy_array_equal(df.index, Index([1], name='id')) - self.assertEqual(df.index.name, 'id') - assert_numpy_array_equal(df.columns, Index(['value'])) - - b = np.array([], dtype=[('id', np.int64), ('value', np.int64)]) - df = DataFrame.from_records(b, index='id') - assert_numpy_array_equal(df.index, Index([], name='id')) - self.assertEqual(df.index.name, 'id') - - def test_from_records_with_datetimes(self): - - # this may fail on certain platforms because of a numpy issue - # related GH6140 - if not is_little_endian(): - raise nose.SkipTest("known failure of test on non-little endian") - - # construction with a null in a recarray - # GH 6140 - expected = DataFrame({ 'EXPIRY' : [datetime(2005, 3, 1, 0, 0), None ]}) - - arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])] - dtypes = [('EXPIRY', '<M8[ns]')] - - try: - recarray = np.core.records.fromarrays(arrdata, dtype=dtypes) - except (ValueError): - raise nose.SkipTest("known failure of numpy rec array creation") - - result = DataFrame.from_records(recarray) - assert_frame_equal(result,expected) - - # coercion should work too - arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])] - dtypes = [('EXPIRY', '<M8[m]')] - recarray = np.core.records.fromarrays(arrdata, dtype=dtypes) - result = DataFrame.from_records(recarray) - assert_frame_equal(result,expected) - - def test_to_records_floats(self): - df = DataFrame(np.random.rand(10, 10)) - df.to_records() - - def test_to_recods_index_name(self): - df = DataFrame(np.random.randn(3, 3)) - df.index.name = 'X' - rs = df.to_records() - self.assertIn('X', rs.dtype.fields) - - df = DataFrame(np.random.randn(3, 3)) - rs = df.to_records() - self.assertIn('index', rs.dtype.fields) - - df.index = MultiIndex.from_tuples([('a', 'x'), ('a', 'y'), ('b', 'z')]) - df.index.names = ['A', None] - rs = 
df.to_records() - self.assertIn('level_0', rs.dtype.fields) - - def test_join_str_datetime(self): - str_dates = ['20120209', '20120222'] - dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] - - A = DataFrame(str_dates, index=lrange(2), columns=['aa']) - C = DataFrame([[1, 2], [3, 4]], index=str_dates, columns=dt_dates) - - tst = A.join(C, on='aa') - - self.assertEqual(len(tst.columns), 3) - - def test_join_multiindex_leftright(self): - # GH 10741 - df1 = pd.DataFrame([['a', 'x', 0.471780], ['a','y', 0.774908], - ['a', 'z', 0.563634], ['b', 'x', -0.353756], - ['b', 'y', 0.368062], ['b', 'z', -1.721840], - ['c', 'x', 1], ['c', 'y', 2], ['c', 'z', 3]], - columns=['first', 'second', 'value1']).set_index(['first', 'second']) - df2 = pd.DataFrame([['a', 10], ['b', 20]], columns=['first', 'value2']).set_index(['first']) - - exp = pd.DataFrame([[0.471780, 10], [0.774908, 10], [0.563634, 10], - [-0.353756, 20], [0.368062, 20], [-1.721840, 20], - [1.000000, np.nan], [2.000000, np.nan], [3.000000, np.nan]], - index=df1.index, columns=['value1', 'value2']) - - # these must be the same results (but columns are flipped) - assert_frame_equal(df1.join(df2, how='left'), exp) - assert_frame_equal(df2.join(df1, how='right'), - exp[['value2', 'value1']]) - - exp_idx = pd.MultiIndex.from_product([['a', 'b'], ['x', 'y', 'z']], - names=['first', 'second']) - exp = pd.DataFrame([[0.471780, 10], [0.774908, 10], [0.563634, 10], - [-0.353756, 20], [0.368062, 20], [-1.721840, 20]], - index=exp_idx, columns=['value1', 'value2']) - - assert_frame_equal(df1.join(df2, how='right'), exp) - assert_frame_equal(df2.join(df1, how='left'), - exp[['value2', 'value1']]) - - def test_from_records_sequencelike(self): - df = DataFrame({'A': np.array(np.random.randn(6), dtype=np.float64), - 'A1': np.array(np.random.randn(6), dtype=np.float64), - 'B': np.array(np.arange(6), dtype=np.int64), - 'C': ['foo'] * 6, - 'D': np.array([True, False] * 3, dtype=bool), - 'E': np.array(np.random.randn(6), 
dtype=np.float32),
-                        'E1': np.array(np.random.randn(6), dtype=np.float32),
-                        'F': np.array(np.arange(6), dtype=np.int32)})
-
-        # this is actually tricky to create the recordlike arrays and
-        # have the dtypes be intact
-        blocks = df.blocks
-        tuples = []
-        columns = []
-        dtypes = []
-        for dtype, b in compat.iteritems(blocks):
-            columns.extend(b.columns)
-            dtypes.extend([ (c,np.dtype(dtype).descr[0][1]) for c in b.columns ])
-        for i in range(len(df.index)):
-            tup = []
-            for _, b in compat.iteritems(blocks):
-                tup.extend(b.iloc[i].values)
-            tuples.append(tuple(tup))
-
-        recarray = np.array(tuples, dtype=dtypes).view(np.recarray)
-        recarray2 = df.to_records()
-        lists = [list(x) for x in tuples]
-
-        # tuples (lose the dtype info)
-        result = DataFrame.from_records(tuples, columns=columns).reindex(columns=df.columns)
-
-        # created recarray and with to_records recarray (have dtype info)
-        result2 = DataFrame.from_records(recarray, columns=columns).reindex(columns=df.columns)
-        result3 = DataFrame.from_records(recarray2, columns=columns).reindex(columns=df.columns)
-
-        # list of tuples (no dtype info)
-        result4 = DataFrame.from_records(lists, columns=columns).reindex(columns=df.columns)
-
-        assert_frame_equal(result, df, check_dtype=False)
-        assert_frame_equal(result2, df)
-        assert_frame_equal(result3, df)
-        assert_frame_equal(result4, df, check_dtype=False)
-
-        # tuples is in the order of the columns
-        result = DataFrame.from_records(tuples)
-        self.assert_numpy_array_equal(result.columns, lrange(8))
-
-        # test exclude parameter & we are casting the results here (as we don't have dtype info to recover)
-        columns_to_test = [ columns.index('C'), columns.index('E1') ]
-
-        exclude = list(set(range(8))-set(columns_to_test))
-        result = DataFrame.from_records(tuples, exclude=exclude)
-        result.columns = [ columns[i] for i in sorted(columns_to_test) ]
-        assert_series_equal(result['C'], df['C'])
-        assert_series_equal(result['E1'], df['E1'].astype('float64'))
-
-        # empty case
-        result
= DataFrame.from_records([], columns=['foo', 'bar', 'baz']) - self.assertEqual(len(result), 0) - self.assert_numpy_array_equal(result.columns, ['foo', 'bar', 'baz']) - - result = DataFrame.from_records([]) - self.assertEqual(len(result), 0) - self.assertEqual(len(result.columns), 0) - - def test_from_records_dictlike(self): - - # test the dict methods - df = DataFrame({'A' : np.array(np.random.randn(6), dtype = np.float64), - 'A1': np.array(np.random.randn(6), dtype = np.float64), - 'B' : np.array(np.arange(6), dtype = np.int64), - 'C' : ['foo'] * 6, - 'D' : np.array([True, False] * 3, dtype=bool), - 'E' : np.array(np.random.randn(6), dtype = np.float32), - 'E1': np.array(np.random.randn(6), dtype = np.float32), - 'F' : np.array(np.arange(6), dtype = np.int32) }) - - # columns is in a different order here than the actual items iterated from the dict - columns = [] - for dtype, b in compat.iteritems(df.blocks): - columns.extend(b.columns) - - asdict = dict((x, y) for x, y in compat.iteritems(df)) - asdict2 = dict((x, y.values) for x, y in compat.iteritems(df)) - - # dict of series & dict of ndarrays (have dtype info) - results = [] - results.append(DataFrame.from_records(asdict).reindex(columns=df.columns)) - results.append(DataFrame.from_records(asdict, columns=columns).reindex(columns=df.columns)) - results.append(DataFrame.from_records(asdict2, columns=columns).reindex(columns=df.columns)) - - for r in results: - assert_frame_equal(r, df) - - def test_from_records_with_index_data(self): - df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) - - data = np.random.randn(10) - df1 = DataFrame.from_records(df, index=data) - assert(df1.index.equals(Index(data))) - - def test_from_records_bad_index_column(self): - df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) - - # should pass - df1 = DataFrame.from_records(df, index=['C']) - assert(df1.index.equals(Index(df.C))) - - df1 = DataFrame.from_records(df, index='C') - 
assert(df1.index.equals(Index(df.C))) - - # should fail - self.assertRaises(ValueError, DataFrame.from_records, df, index=[2]) - self.assertRaises(KeyError, DataFrame.from_records, df, index=2) - - def test_from_records_non_tuple(self): - class Record(object): - - def __init__(self, *args): - self.args = args - - def __getitem__(self, i): - return self.args[i] - - def __iter__(self): - return iter(self.args) - - recs = [Record(1, 2, 3), Record(4, 5, 6), Record(7, 8, 9)] - tups = lmap(tuple, recs) - - result = DataFrame.from_records(recs) - expected = DataFrame.from_records(tups) - assert_frame_equal(result, expected) - - def test_from_records_len0_with_columns(self): - # #2633 - result = DataFrame.from_records([], index='foo', - columns=['foo', 'bar']) - - self.assertTrue(np.array_equal(result.columns, ['bar'])) - self.assertEqual(len(result), 0) - self.assertEqual(result.index.name, 'foo') - - def test_get_agg_axis(self): - cols = self.frame._get_agg_axis(0) - self.assertIs(cols, self.frame.columns) - - idx = self.frame._get_agg_axis(1) - self.assertIs(idx, self.frame.index) - - self.assertRaises(ValueError, self.frame._get_agg_axis, 2) - - def test_nonzero(self): - self.assertTrue(self.empty.empty) - - self.assertFalse(self.frame.empty) - self.assertFalse(self.mixed_frame.empty) - - # corner case - df = DataFrame({'A': [1., 2., 3.], - 'B': ['a', 'b', 'c']}, - index=np.arange(3)) - del df['A'] - self.assertFalse(df.empty) - - def test_repr_empty(self): - buf = StringIO() - - # empty - foo = repr(self.empty) - - # empty with index - frame = DataFrame(index=np.arange(1000)) - foo = repr(frame) - - def test_repr_mixed(self): - buf = StringIO() - - # mixed - foo = repr(self.mixed_frame) - self.mixed_frame.info(verbose=False, buf=buf) - - @slow - def test_repr_mixed_big(self): - # big mixed - biggie = DataFrame({'A': randn(200), - 'B': tm.makeStringIndex(200)}, - index=lrange(200)) - biggie.loc[:20,'A'] = nan - biggie.loc[:20,'B'] = nan - - foo = repr(biggie) - - def 
test_repr(self): - buf = StringIO() - - # small one - foo = repr(self.frame) - self.frame.info(verbose=False, buf=buf) - - # even smaller - self.frame.reindex(columns=['A']).info(verbose=False, buf=buf) - self.frame.reindex(columns=['A', 'B']).info(verbose=False, buf=buf) - - # exhausting cases in DataFrame.info - - # columns but no index - no_index = DataFrame(columns=[0, 1, 3]) - foo = repr(no_index) - - # no columns or index - self.empty.info(buf=buf) - - df = DataFrame(["a\n\r\tb"], columns=["a\n\r\td"], index=["a\n\r\tf"]) - self.assertFalse("\t" in repr(df)) - self.assertFalse("\r" in repr(df)) - self.assertFalse("a\n" in repr(df)) - - def test_repr_dimensions(self): - df = DataFrame([[1, 2,], [3, 4]]) - with option_context('display.show_dimensions', True): - self.assertTrue("2 rows x 2 columns" in repr(df)) - - with option_context('display.show_dimensions', False): - self.assertFalse("2 rows x 2 columns" in repr(df)) - - with option_context('display.show_dimensions', 'truncate'): - self.assertFalse("2 rows x 2 columns" in repr(df)) - - @slow - def test_repr_big(self): - buf = StringIO() - - # big one - biggie = DataFrame(np.zeros((200, 4)), columns=lrange(4), - index=lrange(200)) - foo = repr(biggie) - - def test_repr_unsortable(self): - # columns are not sortable - import warnings - warn_filters = warnings.filters - warnings.filterwarnings('ignore', - category=FutureWarning, - module=".*format") - - unsortable = DataFrame({'foo': [1] * 50, - datetime.today(): [1] * 50, - 'bar': ['bar'] * 50, - datetime.today( - ) + timedelta(1): ['bar'] * 50}, - index=np.arange(50)) - foo = repr(unsortable) - - fmt.set_option('display.precision', 3, 'display.column_space', 10) - repr(self.frame) - - fmt.set_option('display.max_rows', 10, 'display.max_columns', 2) - repr(self.frame) - - fmt.set_option('display.max_rows', 1000, 'display.max_columns', 1000) - repr(self.frame) - - self.reset_display_options() - - warnings.filters = warn_filters - - def test_repr_unicode(self): 
- uval = u('\u03c3\u03c3\u03c3\u03c3') - bval = uval.encode('utf-8') - df = DataFrame({'A': [uval, uval]}) - - result = repr(df) - ex_top = ' A' - self.assertEqual(result.split('\n')[0].rstrip(), ex_top) - - df = DataFrame({'A': [uval, uval]}) - result = repr(df) - self.assertEqual(result.split('\n')[0].rstrip(), ex_top) - - def test_unicode_string_with_unicode(self): - df = DataFrame({'A': [u("\u05d0")]}) - - if compat.PY3: - str(df) - else: - compat.text_type(df) - - def test_bytestring_with_unicode(self): - df = DataFrame({'A': [u("\u05d0")]}) - if compat.PY3: - bytes(df) - else: - str(df) - - def test_very_wide_info_repr(self): - df = DataFrame(np.random.randn(10, 20), - columns=tm.rands_array(10, 20)) - repr(df) - - def test_repr_column_name_unicode_truncation_bug(self): - # #1906 - df = DataFrame({'Id': [7117434], - 'StringCol': ('Is it possible to modify drop plot code' - ' so that the output graph is displayed ' - 'in iphone simulator, Is it possible to ' - 'modify drop plot code so that the ' - 'output graph is \xe2\x80\xa8displayed ' - 'in iphone simulator.Now we are adding ' - 'the CSV file externally. 
I want to Call' - ' the File through the code..')}) - - result = repr(df) - self.assertIn('StringCol', result) - - def test_head_tail(self): - assert_frame_equal(self.frame.head(), self.frame[:5]) - assert_frame_equal(self.frame.tail(), self.frame[-5:]) - - assert_frame_equal(self.frame.head(0), self.frame[0:0]) - assert_frame_equal(self.frame.tail(0), self.frame[0:0]) - - assert_frame_equal(self.frame.head(-1), self.frame[:-1]) - assert_frame_equal(self.frame.tail(-1), self.frame[1:]) - assert_frame_equal(self.frame.head(1), self.frame[:1]) - assert_frame_equal(self.frame.tail(1), self.frame[-1:]) - # with a float index - df = self.frame.copy() - df.index = np.arange(len(self.frame)) + 0.1 - assert_frame_equal(df.head(), df.iloc[:5]) - assert_frame_equal(df.tail(), df.iloc[-5:]) - assert_frame_equal(df.head(0), df[0:0]) - assert_frame_equal(df.tail(0), df[0:0]) - assert_frame_equal(df.head(-1), df.iloc[:-1]) - assert_frame_equal(df.tail(-1), df.iloc[1:]) - #test empty dataframe - empty_df = DataFrame() - assert_frame_equal(empty_df.tail(), empty_df) - assert_frame_equal(empty_df.head(), empty_df) - - def test_insert(self): - df = DataFrame(np.random.randn(5, 3), index=np.arange(5), - columns=['c', 'b', 'a']) - - df.insert(0, 'foo', df['a']) - self.assert_numpy_array_equal(df.columns, ['foo', 'c', 'b', 'a']) - assert_almost_equal(df['a'], df['foo']) - - df.insert(2, 'bar', df['c']) - self.assert_numpy_array_equal(df.columns, ['foo', 'c', 'bar', 'b', 'a']) - assert_almost_equal(df['c'], df['bar']) - - # diff dtype - - # new item - df['x'] = df['a'].astype('float32') - result = Series(dict(float64 = 5, float32 = 1)) - self.assertTrue((df.get_dtype_counts() == result).all()) - - # replacing current (in different block) - df['a'] = df['a'].astype('float32') - result = Series(dict(float64 = 4, float32 = 2)) - self.assertTrue((df.get_dtype_counts() == result).all()) - - df['y'] = df['a'].astype('int32') - result = Series(dict(float64 = 4, float32 = 2, int32 = 1)) - 
self.assertTrue((df.get_dtype_counts() == result).all()) - - with assertRaisesRegexp(ValueError, 'already exists'): - df.insert(1, 'a', df['b']) - self.assertRaises(ValueError, df.insert, 1, 'c', df['b']) - - df.columns.name = 'some_name' - # preserve columns name field - df.insert(0, 'baz', df['c']) - self.assertEqual(df.columns.name, 'some_name') - - def test_delitem(self): - del self.frame['A'] - self.assertNotIn('A', self.frame) - - def test_pop(self): - self.frame.columns.name = 'baz' - - A = self.frame.pop('A') - self.assertNotIn('A', self.frame) - - self.frame['foo'] = 'bar' - foo = self.frame.pop('foo') - self.assertNotIn('foo', self.frame) - # TODO self.assertEqual(self.frame.columns.name, 'baz') - - # 10912 - # inplace ops cause caching issue - a = DataFrame([[1,2,3],[4,5,6]], columns=['A','B','C'], index=['X','Y']) - b = a.pop('B') - b += 1 - - # original frame - expected = DataFrame([[1,3],[4,6]], columns=['A','C'], index=['X','Y']) - assert_frame_equal(a, expected) - - # result - expected = Series([2,5],index=['X','Y'],name='B')+1 - assert_series_equal(b, expected) - - def test_pop_non_unique_cols(self): - df = DataFrame({0: [0, 1], 1: [0, 1], 2: [4, 5]}) - df.columns = ["a", "b", "a"] - - res = df.pop("a") - self.assertEqual(type(res), DataFrame) - self.assertEqual(len(res), 2) - self.assertEqual(len(df.columns), 1) - self.assertTrue("b" in df.columns) - self.assertFalse("a" in df.columns) - self.assertEqual(len(df.index), 2) - - def test_iter(self): - self.assertTrue(tm.equalContents(list(self.frame), self.frame.columns)) - - def test_iterrows(self): - for i, (k, v) in enumerate(self.frame.iterrows()): - exp = self.frame.xs(self.frame.index[i]) - assert_series_equal(v, exp) - - for i, (k, v) in enumerate(self.mixed_frame.iterrows()): - exp = self.mixed_frame.xs(self.mixed_frame.index[i]) - assert_series_equal(v, exp) - - def test_itertuples(self): - for i, tup in enumerate(self.frame.itertuples()): - s = Series(tup[1:]) - s.name = tup[0] - expected = 
self.frame.ix[i, :].reset_index(drop=True) - assert_series_equal(s, expected) - - df = DataFrame({'floats': np.random.randn(5), - 'ints': lrange(5)}, columns=['floats', 'ints']) - - for tup in df.itertuples(index=False): - tm.assertIsInstance(tup[1], np.integer) - - df = DataFrame(data={"a": [1, 2, 3], "b": [4, 5, 6]}) - dfaa = df[['a', 'a']] - self.assertEqual(list(dfaa.itertuples()), [(0, 1, 1), (1, 2, 2), (2, 3, 3)]) - - self.assertEqual(repr(list(df.itertuples(name=None))), '[(0, 1, 4), (1, 2, 5), (2, 3, 6)]') - - tup = next(df.itertuples(name='TestName')) - - # no support for field renaming in Python 2.6, regular tuples are returned - if sys.version >= LooseVersion('2.7'): - self.assertEqual(tup._fields, ('Index', 'a', 'b')) - self.assertEqual((tup.Index, tup.a, tup.b), tup) - self.assertEqual(type(tup).__name__, 'TestName') - - df.columns = ['def', 'return'] - tup2 = next(df.itertuples(name='TestName')) - self.assertEqual(tup2, (0, 1, 4)) - - if sys.version >= LooseVersion('2.7'): - self.assertEqual(tup2._fields, ('Index', '_1', '_2')) - - df3 = DataFrame(dict(('f'+str(i), [i]) for i in range(1024))) - # will raise SyntaxError if trying to create namedtuple - tup3 = next(df3.itertuples()) - self.assertFalse(hasattr(tup3, '_fields')) - self.assertIsInstance(tup3, tuple) - - def test_len(self): - self.assertEqual(len(self.frame), len(self.frame.index)) - - def test_operators(self): - garbage = random.random(4) - colSeries = Series(garbage, index=np.array(self.frame.columns)) - - idSum = self.frame + self.frame - seriesSum = self.frame + colSeries - - for col, series in compat.iteritems(idSum): - for idx, val in compat.iteritems(series): - origVal = self.frame[col][idx] * 2 - if not np.isnan(val): - self.assertEqual(val, origVal) - else: - self.assertTrue(np.isnan(origVal)) - - for col, series in compat.iteritems(seriesSum): - for idx, val in compat.iteritems(series): - origVal = self.frame[col][idx] + colSeries[col] - if not np.isnan(val): - 
self.assertEqual(val, origVal) - else: - self.assertTrue(np.isnan(origVal)) - - added = self.frame2 + self.frame2 - expected = self.frame2 * 2 - assert_frame_equal(added, expected) - - df = DataFrame({'a': ['a', None, 'b']}) - assert_frame_equal(df + df, DataFrame({'a': ['aa', np.nan, 'bb']})) - - # Test for issue #10181 - for dtype in ('float', 'int64'): - frames = [ - DataFrame(dtype=dtype), - DataFrame(columns=['A'], dtype=dtype), - DataFrame(index=[0], dtype=dtype), - ] - for df in frames: - self.assertTrue((df + df).equals(df)) - assert_frame_equal(df + df, df) - - def test_ops_np_scalar(self): - vals, xs = np.random.rand(5, 3), [nan, 7, -23, 2.718, -3.14, np.inf] - f = lambda x: DataFrame(x, index=list('ABCDE'), - columns=['jim', 'joe', 'jolie']) - - df = f(vals) - - for x in xs: - assert_frame_equal(df / np.array(x), f(vals / x)) - assert_frame_equal(np.array(x) * df, f(vals * x)) - assert_frame_equal(df + np.array(x), f(vals + x)) - assert_frame_equal(np.array(x) - df, f(x - vals)) - - def test_operators_boolean(self): - - # GH 5808 - # empty frames, non-mixed dtype - - result = DataFrame(index=[1]) & DataFrame(index=[1]) - assert_frame_equal(result,DataFrame(index=[1])) - - result = DataFrame(index=[1]) | DataFrame(index=[1]) - assert_frame_equal(result,DataFrame(index=[1])) - - result = DataFrame(index=[1]) & DataFrame(index=[1,2]) - assert_frame_equal(result,DataFrame(index=[1,2])) - - result = DataFrame(index=[1],columns=['A']) & DataFrame(index=[1],columns=['A']) - assert_frame_equal(result,DataFrame(index=[1],columns=['A'])) - - result = DataFrame(True,index=[1],columns=['A']) & DataFrame(True,index=[1],columns=['A']) - assert_frame_equal(result,DataFrame(True,index=[1],columns=['A'])) - - result = DataFrame(True,index=[1],columns=['A']) | DataFrame(True,index=[1],columns=['A']) - assert_frame_equal(result,DataFrame(True,index=[1],columns=['A'])) - - # boolean ops - result = DataFrame(1,index=[1],columns=['A']) | 
DataFrame(True,index=[1],columns=['A']) - assert_frame_equal(result,DataFrame(1,index=[1],columns=['A'])) - - def f(): - DataFrame(1.0,index=[1],columns=['A']) | DataFrame(True,index=[1],columns=['A']) - self.assertRaises(TypeError, f) - - def f(): - DataFrame('foo',index=[1],columns=['A']) | DataFrame(True,index=[1],columns=['A']) - self.assertRaises(TypeError, f) - - def test_operators_none_as_na(self): - df = DataFrame({"col1": [2, 5.0, 123, None], - "col2": [1, 2, 3, 4]}, dtype=object) - - ops = [operator.add, operator.sub, operator.mul, operator.truediv] - - # since filling converts dtypes from object, changed expected to be object - for op in ops: - filled = df.fillna(np.nan) - result = op(df, 3) - expected = op(filled, 3).astype(object) - expected[com.isnull(expected)] = None - assert_frame_equal(result, expected) - - result = op(df, df) - expected = op(filled, filled).astype(object) - expected[com.isnull(expected)] = None - assert_frame_equal(result, expected) - - result = op(df, df.fillna(7)) - assert_frame_equal(result, expected) - - result = op(df.fillna(7), df) - assert_frame_equal(result, expected, check_dtype=False) - - def test_comparison_invalid(self): - - def check(df,df2): - - for (x, y) in [(df,df2),(df2,df)]: - self.assertRaises(TypeError, lambda : x == y) - self.assertRaises(TypeError, lambda : x != y) - self.assertRaises(TypeError, lambda : x >= y) - self.assertRaises(TypeError, lambda : x > y) - self.assertRaises(TypeError, lambda : x < y) - self.assertRaises(TypeError, lambda : x <= y) - - # GH4968 - # invalid date/int comparisons - df = DataFrame(np.random.randint(10, size=(10, 1)), columns=['a']) - df['dates'] = date_range('20010101', periods=len(df)) - - df2 = df.copy() - df2['dates'] = df['a'] - check(df,df2) - - df = DataFrame(np.random.randint(10, size=(10, 2)), columns=['a', 'b']) - df2 = DataFrame({'a': date_range('20010101', periods=len(df)), 'b': date_range('20100101', periods=len(df))}) - check(df,df2) - - def 
test_timestamp_compare(self):
-        # make sure we can compare Timestamps on the right AND left hand side
-        # GH4982
-        df = DataFrame({'dates1': date_range('20010101', periods=10),
-                        'dates2': date_range('20010102', periods=10),
-                        'intcol': np.random.randint(1000000000, size=10),
-                        'floatcol': np.random.randn(10),
-                        'stringcol': list(tm.rands(10))})
-        df.loc[np.random.rand(len(df)) > 0.5, 'dates2'] = pd.NaT
-        ops = {'gt': 'lt', 'lt': 'gt', 'ge': 'le', 'le': 'ge', 'eq': 'eq',
-               'ne': 'ne'}
-        for left, right in ops.items():
-            left_f = getattr(operator, left)
-            right_f = getattr(operator, right)
-
-            # no nats
-            expected = left_f(df, Timestamp('20010109'))
-            result = right_f(Timestamp('20010109'), df)
-            assert_frame_equal(result, expected)
-
-            # nats
-            expected = left_f(df, Timestamp('nat'))
-            result = right_f(Timestamp('nat'), df)
-            assert_frame_equal(result, expected)
-
-    def test_modulo(self):
-
-        # GH3590, modulo as ints
-        p = DataFrame({ 'first' : [3,4,5,8], 'second' : [0,0,0,3] })
-
-        ### this is technically wrong as the integer portion is coerced to float ###
-        expected = DataFrame({ 'first' : Series([0,0,0,0],dtype='float64'), 'second' : Series([np.nan,np.nan,np.nan,0]) })
-        result = p % p
-        assert_frame_equal(result,expected)
-
-        # numpy has a slightly different (wrong) treatment
-        result2 = DataFrame(p.values % p.values,index=p.index,columns=p.columns,dtype='float64')
-        result2.iloc[0:3,1] = np.nan
-        assert_frame_equal(result2,expected)
-
-        result = p % 0
-        expected = DataFrame(np.nan,index=p.index,columns=p.columns)
-        assert_frame_equal(result,expected)
-
-        # numpy has a slightly different (wrong) treatment
-        result2 = DataFrame(p.values.astype('float64') % 0,index=p.index,columns=p.columns)
-        assert_frame_equal(result2,expected)
-
-        # not commutative with series
-        p = DataFrame(np.random.randn(10, 5))
-        s = p[0]
-        res = s % p
-        res2 = p % s
-        self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0)))
-
-    def test_div(self):
-
-        # integer div, but deal
with the 0's (GH 9144)
-        p = DataFrame({ 'first' : [3,4,5,8], 'second' : [0,0,0,3] })
-        result = p / p
-
-        expected = DataFrame({'first': Series([1.0, 1.0, 1.0, 1.0]),
-                              'second': Series([nan, nan, nan, 1])})
-        assert_frame_equal(result,expected)
-
-        result2 = DataFrame(p.values.astype('float') / p.values, index=p.index,
-                            columns=p.columns)
-        assert_frame_equal(result2,expected)
-
-        result = p / 0
-        expected = DataFrame(inf, index=p.index, columns=p.columns)
-        expected.iloc[0:3, 1] = nan
-        assert_frame_equal(result,expected)
-
-        # numpy has a slightly different (wrong) treatment
-        result2 = DataFrame(p.values.astype('float64') / 0, index=p.index,
-                            columns=p.columns)
-        assert_frame_equal(result2,expected)
-
-        p = DataFrame(np.random.randn(10, 5))
-        s = p[0]
-        res = s / p
-        res2 = p / s
-        self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0)))
-
-    def test_logical_operators(self):
-
-        def _check_bin_op(op):
-            result = op(df1, df2)
-            expected = DataFrame(op(df1.values, df2.values), index=df1.index,
-                                 columns=df1.columns)
-            self.assertEqual(result.values.dtype, np.bool_)
-            assert_frame_equal(result, expected)
-
-        def _check_unary_op(op):
-            result = op(df1)
-            expected = DataFrame(op(df1.values), index=df1.index,
-                                 columns=df1.columns)
-            self.assertEqual(result.values.dtype, np.bool_)
-            assert_frame_equal(result, expected)
-
-        df1 = {'a': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True},
-               'b': {'a': False, 'b': True, 'c': False,
-                     'd': False, 'e': False},
-               'c': {'a': False, 'b': False, 'c': True,
-                     'd': False, 'e': False},
-               'd': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True},
-               'e': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}}
-
-        df2 = {'a': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False},
-               'b': {'a': False, 'b': True, 'c': False,
-                     'd': False, 'e': False},
-               'c': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False},
-               'd': {'a': False, 'b': False, 'c': False,
-                     'd': True, 'e': False},
-               'e': {'a': False,
'b': False, 'c': False,
-                     'd': False, 'e': True}}
-
-        df1 = DataFrame(df1)
-        df2 = DataFrame(df2)
-
-        _check_bin_op(operator.and_)
-        _check_bin_op(operator.or_)
-        _check_bin_op(operator.xor)
-
-        # operator.neg is deprecated in numpy >= 1.9
-        _check_unary_op(operator.inv)
-
-    def test_logical_typeerror(self):
-        if not compat.PY3:
-            self.assertRaises(TypeError, self.frame.__eq__, 'foo')
-            self.assertRaises(TypeError, self.frame.__lt__, 'foo')
-            self.assertRaises(TypeError, self.frame.__gt__, 'foo')
-            self.assertRaises(TypeError, self.frame.__ne__, 'foo')
-        else:
-            raise nose.SkipTest('test_logical_typeerror not tested on PY3')
-
-    def test_constructor_lists_to_object_dtype(self):
-        # from #1074
-        d = DataFrame({'a': [np.nan, False]})
-        self.assertEqual(d['a'].dtype, np.object_)
-        self.assertFalse(d['a'][1])
-
-    def test_constructor_with_nas(self):
-        # GH 5016
-        # na's in indices
-
-        def check(df):
-            for i in range(len(df.columns)):
-                df.iloc[:,i]
-
-            # allow single nans to succeed
-            indexer = np.arange(len(df.columns))[isnull(df.columns)]
-
-            if len(indexer) == 1:
-                assert_series_equal(df.iloc[:,indexer[0]],df.loc[:,np.nan])
-
-            # multiple nans should fail
-            else:
-
-                def f():
-                    df.loc[:,np.nan]
-                self.assertRaises(TypeError, f)
-
-        df = DataFrame([[1,2,3],[4,5,6]], index=[1,np.nan])
-        check(df)
-
-        df = DataFrame([[1,2,3],[4,5,6]], columns=[1.1,2.2,np.nan])
-        check(df)
-
-        df = DataFrame([[0,1,2,3],[4,5,6,7]], columns=[np.nan,1.1,2.2,np.nan])
-        check(df)
-
-        df = DataFrame([[0.0,1,2,3.0],[4,5,6,7]], columns=[np.nan,1.1,2.2,np.nan])
-        check(df)
-
-    def test_logical_with_nas(self):
-        d = DataFrame({'a': [np.nan, False], 'b': [True, True]})
-
-        # GH4947
-        # bool comparisons should return bool
-        result = d['a'] | d['b']
-        expected = Series([False, True])
-        assert_series_equal(result, expected)
-
-        # GH4604, automatic casting here
-        result = d['a'].fillna(False) | d['b']
-        expected = Series([True, True])
-        assert_series_equal(result, expected)
-
-        result =
d['a'].fillna(False,downcast=False) | d['b']
-        expected = Series([True, True])
-        assert_series_equal(result, expected)
-
-    def test_neg(self):
-        # what to do?
-        assert_frame_equal(-self.frame, -1 * self.frame)
-
-    def test_invert(self):
-        assert_frame_equal(-(self.frame < 0), ~(self.frame < 0))
-
-    def test_first_last_valid(self):
-        N = len(self.frame.index)
-        mat = randn(N)
-        mat[:5] = nan
-        mat[-5:] = nan
-
-        frame = DataFrame({'foo': mat}, index=self.frame.index)
-        index = frame.first_valid_index()
-
-        self.assertEqual(index, frame.index[5])
-
-        index = frame.last_valid_index()
-        self.assertEqual(index, frame.index[-6])
-
-    def test_arith_flex_frame(self):
-        ops = ['add', 'sub', 'mul', 'div', 'truediv', 'pow', 'floordiv', 'mod']
-        if not compat.PY3:
-            aliases = {}
-        else:
-            aliases = {'div': 'truediv'}
-
-        for op in ops:
-            try:
-                alias = aliases.get(op, op)
-                f = getattr(operator, alias)
-                result = getattr(self.frame, op)(2 * self.frame)
-                exp = f(self.frame, 2 * self.frame)
-                assert_frame_equal(result, exp)
-
-                # vs mix float
-                result = getattr(self.mixed_float, op)(2 * self.mixed_float)
-                exp = f(self.mixed_float, 2 * self.mixed_float)
-                assert_frame_equal(result, exp)
-                _check_mixed_float(result, dtype = dict(C = None))
-
-                # vs mix int
-                if op in ['add','sub','mul']:
-                    result = getattr(self.mixed_int, op)(2 + self.mixed_int)
-                    exp = f(self.mixed_int, 2 + self.mixed_int)
-
-                    # overflow in the uint
-                    dtype = None
-                    if op in ['sub']:
-                        dtype = dict(B = 'object', C = None)
-                    elif op in ['add','mul']:
-                        dtype = dict(C = None)
-                    assert_frame_equal(result, exp)
-                    _check_mixed_int(result, dtype = dtype)
-
-                # rops
-                r_f = lambda x, y: f(y, x)
-                result = getattr(self.frame, 'r' + op)(2 * self.frame)
-                exp = r_f(self.frame, 2 * self.frame)
-                assert_frame_equal(result, exp)
-
-                # vs mix float
-                result = getattr(self.mixed_float, op)(2 * self.mixed_float)
-                exp = f(self.mixed_float, 2 * self.mixed_float)
-                assert_frame_equal(result, exp)
-                _check_mixed_float(result, dtype = dict(C = None))
-
-                result = getattr(self.intframe, op)(2 * self.intframe)
-                exp = f(self.intframe, 2 * self.intframe)
-                assert_frame_equal(result, exp)
-
-                # vs mix int
-                if op in ['add','sub','mul']:
-                    result = getattr(self.mixed_int, op)(2 + self.mixed_int)
-                    exp = f(self.mixed_int, 2 + self.mixed_int)
-
-                    # overflow in the uint
-                    dtype = None
-                    if op in ['sub']:
-                        dtype = dict(B = 'object', C = None)
-                    elif op in ['add','mul']:
-                        dtype = dict(C = None)
-                    assert_frame_equal(result, exp)
-                    _check_mixed_int(result, dtype = dtype)
-            except:
-                com.pprint_thing("Failing operation %r" % op)
-                raise
-
-        # ndim >= 3
-        ndim_5 = np.ones(self.frame.shape + (3, 4, 5))
-        with assertRaisesRegexp(ValueError, 'shape'):
-            f(self.frame, ndim_5)
-
-        with assertRaisesRegexp(ValueError, 'shape'):
-            getattr(self.frame, op)(ndim_5)
-
-
-        # res_add = self.frame.add(self.frame)
-        # res_sub = self.frame.sub(self.frame)
-        # res_mul = self.frame.mul(self.frame)
-        # res_div = self.frame.div(2 * self.frame)
-
-        # assert_frame_equal(res_add, self.frame + self.frame)
-        # assert_frame_equal(res_sub, self.frame - self.frame)
-        # assert_frame_equal(res_mul, self.frame * self.frame)
-        # assert_frame_equal(res_div, self.frame / (2 * self.frame))
-
-        const_add = self.frame.add(1)
-        assert_frame_equal(const_add, self.frame + 1)
-
-        # corner cases
-        result = self.frame.add(self.frame[:0])
-        assert_frame_equal(result, self.frame * np.nan)
-
-        result = self.frame[:0].add(self.frame)
-        assert_frame_equal(result, self.frame * np.nan)
-        with assertRaisesRegexp(NotImplementedError, 'fill_value'):
-            self.frame.add(self.frame.iloc[0], fill_value=3)
-        with assertRaisesRegexp(NotImplementedError, 'fill_value'):
-            self.frame.add(self.frame.iloc[0], axis='index', fill_value=3)
-
-    def test_binary_ops_align(self):
-
-        # test aligning binary ops
-
-        # GH 6681
-        index=MultiIndex.from_product([list('abc'),
-                                       ['one','two','three'],
-                                       [1,2,3]],
-                                      names=['first','second','third'])
-
-        df = DataFrame(np.arange(27*3).reshape(27,3),
-                       index=index,
-                       columns=['value1','value2','value3']).sortlevel()
-
-        idx = pd.IndexSlice
-        for op in ['add','sub','mul','div','truediv']:
-            opa = getattr(operator,op,None)
-            if opa is None:
-                continue
-
-            x = Series([ 1.0, 10.0, 100.0], [1,2,3])
-            result = getattr(df,op)(x,level='third',axis=0)
-
-            expected = pd.concat([ opa(df.loc[idx[:,:,i],:],v) for i, v in x.iteritems() ]).sortlevel()
-            assert_frame_equal(result, expected)
-
-            x = Series([ 1.0, 10.0], ['two','three'])
-            result = getattr(df,op)(x,level='second',axis=0)
-
-            expected = pd.concat([ opa(df.loc[idx[:,i],:],v) for i, v in x.iteritems() ]).reindex_like(df).sortlevel()
-            assert_frame_equal(result, expected)
-
-        ## GH9463 (alignment level of dataframe with series)
-
-        midx = MultiIndex.from_product([['A', 'B'],['a', 'b']])
-        df = DataFrame(np.ones((2,4), dtype='int64'), columns=midx)
-        s = pd.Series({'a':1, 'b':2})
-
-        df2 = df.copy()
-        df2.columns.names = ['lvl0', 'lvl1']
-        s2 = s.copy()
-        s2.index.name = 'lvl1'
-
-        # different cases of integer/string level names:
-        res1 = df.mul(s, axis=1, level=1)
-        res2 = df.mul(s2, axis=1, level=1)
-        res3 = df2.mul(s, axis=1, level=1)
-        res4 = df2.mul(s2, axis=1, level=1)
-        res5 = df2.mul(s, axis=1, level='lvl1')
-        res6 = df2.mul(s2, axis=1, level='lvl1')
-
-        exp = DataFrame(np.array([[1, 2, 1, 2], [1, 2, 1, 2]], dtype='int64'),
-                        columns=midx)
-
-        for res in [res1, res2]:
-            assert_frame_equal(res, exp)
-
-        exp.columns.names = ['lvl0', 'lvl1']
-        for res in [res3, res4, res5, res6]:
-            assert_frame_equal(res, exp)
-
-    def test_arith_mixed(self):
-
-        left = DataFrame({'A': ['a', 'b', 'c'],
-                          'B': [1, 2, 3]})
-
-        result = left + left
-        expected = DataFrame({'A': ['aa', 'bb', 'cc'],
-                              'B': [2, 4, 6]})
-        assert_frame_equal(result, expected)
-
-    def test_arith_getitem_commute(self):
-        df = DataFrame({'A': [1.1, 3.3], 'B': [2.5, -3.9]})
-
-        self._test_op(df, operator.add)
-        self._test_op(df, operator.sub)
-        self._test_op(df, operator.mul)
-        self._test_op(df, operator.truediv)
-        self._test_op(df, operator.floordiv)
-        self._test_op(df, operator.pow)
-
-        self._test_op(df, lambda x, y: y + x)
-        self._test_op(df, lambda x, y: y - x)
-        self._test_op(df, lambda x, y: y * x)
-        self._test_op(df, lambda x, y: y / x)
-        self._test_op(df, lambda x, y: y ** x)
-
-        self._test_op(df, lambda x, y: x + y)
-        self._test_op(df, lambda x, y: x - y)
-        self._test_op(df, lambda x, y: x * y)
-        self._test_op(df, lambda x, y: x / y)
-        self._test_op(df, lambda x, y: x ** y)
-
-    @staticmethod
-    def _test_op(df, op):
-        result = op(df, 1)
-
-        if not df.columns.is_unique:
-            raise ValueError("Only unique columns supported by this test")
-
-        for col in result.columns:
-            assert_series_equal(result[col], op(df[col], 1))
-
-    def test_bool_flex_frame(self):
-        data = np.random.randn(5, 3)
-        other_data = np.random.randn(5, 3)
-        df = DataFrame(data)
-        other = DataFrame(other_data)
-        ndim_5 = np.ones(df.shape + (1, 3))
-
-        # Unaligned
-        def _check_unaligned_frame(meth, op, df, other):
-            part_o = other.ix[3:, 1:].copy()
-            rs = meth(part_o)
-            xp = op(df, part_o.reindex(index=df.index, columns=df.columns))
-            assert_frame_equal(rs, xp)
-
-        # DataFrame
-        self.assertTrue(df.eq(df).values.all())
-        self.assertFalse(df.ne(df).values.any())
-        for op in ['eq', 'ne', 'gt', 'lt', 'ge', 'le']:
-            f = getattr(df, op)
-            o = getattr(operator, op)
-            # No NAs
-            assert_frame_equal(f(other), o(df, other))
-            _check_unaligned_frame(f, o, df, other)
-            # ndarray
-            assert_frame_equal(f(other.values), o(df, other.values))
-            # scalar
-            assert_frame_equal(f(0), o(df, 0))
-            # NAs
-            assert_frame_equal(f(np.nan), o(df, np.nan))
-            with assertRaisesRegexp(ValueError, 'shape'):
-                f(ndim_5)
-
-        # Series
-        def _test_seq(df, idx_ser, col_ser):
-            idx_eq = df.eq(idx_ser, axis=0)
-            col_eq = df.eq(col_ser)
-            idx_ne = df.ne(idx_ser, axis=0)
-            col_ne = df.ne(col_ser)
-            assert_frame_equal(col_eq, df == Series(col_ser))
-            assert_frame_equal(col_eq, -col_ne)
-            assert_frame_equal(idx_eq, -idx_ne)
-            assert_frame_equal(idx_eq, df.T.eq(idx_ser).T)
-            assert_frame_equal(col_eq, df.eq(list(col_ser)))
-            assert_frame_equal(idx_eq, df.eq(Series(idx_ser), axis=0))
-            assert_frame_equal(idx_eq, df.eq(list(idx_ser), axis=0))
-
-            idx_gt = df.gt(idx_ser, axis=0)
-            col_gt = df.gt(col_ser)
-            idx_le = df.le(idx_ser, axis=0)
-            col_le = df.le(col_ser)
-
-            assert_frame_equal(col_gt, df > Series(col_ser))
-            assert_frame_equal(col_gt, -col_le)
-            assert_frame_equal(idx_gt, -idx_le)
-            assert_frame_equal(idx_gt, df.T.gt(idx_ser).T)
-
-            idx_ge = df.ge(idx_ser, axis=0)
-            col_ge = df.ge(col_ser)
-            idx_lt = df.lt(idx_ser, axis=0)
-            col_lt = df.lt(col_ser)
-            assert_frame_equal(col_ge, df >= Series(col_ser))
-            assert_frame_equal(col_ge, -col_lt)
-            assert_frame_equal(idx_ge, -idx_lt)
-            assert_frame_equal(idx_ge, df.T.ge(idx_ser).T)
-
-        idx_ser = Series(np.random.randn(5))
-        col_ser = Series(np.random.randn(3))
-        _test_seq(df, idx_ser, col_ser)
-
-
-        # list/tuple
-        _test_seq(df, idx_ser.values, col_ser.values)
-
-        # NA
-        df.ix[0, 0] = np.nan
-        rs = df.eq(df)
-        self.assertFalse(rs.ix[0, 0])
-        rs = df.ne(df)
-        self.assertTrue(rs.ix[0, 0])
-        rs = df.gt(df)
-        self.assertFalse(rs.ix[0, 0])
-        rs = df.lt(df)
-        self.assertFalse(rs.ix[0, 0])
-        rs = df.ge(df)
-        self.assertFalse(rs.ix[0, 0])
-        rs = df.le(df)
-        self.assertFalse(rs.ix[0, 0])
-
-
-
-        # complex
-        arr = np.array([np.nan, 1, 6, np.nan])
-        arr2 = np.array([2j, np.nan, 7, None])
-        df = DataFrame({'a': arr})
-        df2 = DataFrame({'a': arr2})
-        rs = df.gt(df2)
-        self.assertFalse(rs.values.any())
-        rs = df.ne(df2)
-        self.assertTrue(rs.values.all())
-
-        arr3 = np.array([2j, np.nan, None])
-        df3 = DataFrame({'a': arr3})
-        rs = df3.gt(2j)
-        self.assertFalse(rs.values.any())
-
-        # corner, dtype=object
-        df1 = DataFrame({'col': ['foo', np.nan, 'bar']})
-        df2 = DataFrame({'col': ['foo', datetime.now(), 'bar']})
-        result = df1.ne(df2)
-        exp = DataFrame({'col': [False, True, False]})
-        assert_frame_equal(result, exp)
-
-    def test_arith_flex_series(self):
-        df = self.simple
-
-        row = df.xs('a')
-        col = df['two']
-        # after arithmetic refactor, add truediv here
-        ops = ['add', 'sub', 'mul', 'mod']
-        for op in ops:
-            f = getattr(df, op)
-            op = getattr(operator, op)
-            assert_frame_equal(f(row), op(df, row))
-            assert_frame_equal(f(col, axis=0), op(df.T, col).T)
-
-        # special case for some reason
-        assert_frame_equal(df.add(row, axis=None), df + row)
-
-        # cases which will be refactored after big arithmetic refactor
-        assert_frame_equal(df.div(row), df / row)
-        assert_frame_equal(df.div(col, axis=0), (df.T / col).T)
-
-        # broadcasting issue in GH7325
-        df = DataFrame(np.arange(3*2).reshape((3,2)),dtype='int64')
-        expected = DataFrame([[nan, inf], [1.0, 1.5], [1.0, 1.25]])
-        result = df.div(df[0],axis='index')
-        assert_frame_equal(result,expected)
-
-        df = DataFrame(np.arange(3*2).reshape((3,2)),dtype='float64')
-        expected = DataFrame([[np.nan,np.inf],[1.0,1.5],[1.0,1.25]])
-        result = df.div(df[0],axis='index')
-        assert_frame_equal(result,expected)
-
-    def test_arith_non_pandas_object(self):
-        df = self.simple
-
-        val1 = df.xs('a').values
-        added = DataFrame(df.values + val1, index=df.index, columns=df.columns)
-        assert_frame_equal(df + val1, added)
-
-        added = DataFrame((df.values.T + val1).T,
-                          index=df.index, columns=df.columns)
-        assert_frame_equal(df.add(val1, axis=0), added)
-
-        val2 = list(df['two'])
-
-        added = DataFrame(df.values + val2, index=df.index, columns=df.columns)
-        assert_frame_equal(df + val2, added)
-
-        added = DataFrame((df.values.T + val2).T, index=df.index,
-                          columns=df.columns)
-        assert_frame_equal(df.add(val2, axis='index'), added)
-
-        val3 = np.random.rand(*df.shape)
-        added = DataFrame(df.values + val3, index=df.index, columns=df.columns)
-        assert_frame_equal(df.add(val3), added)
-
-    def test_combineFrame(self):
-        frame_copy = self.frame.reindex(self.frame.index[::2])
-
-        del frame_copy['D']
-        frame_copy['C'][:5] = nan
-
-        added = self.frame + frame_copy
-        tm.assert_dict_equal(added['A'].valid(),
-                             self.frame['A'] * 2,
-                             compare_keys=False)
-
-        self.assertTrue(np.isnan(added['C'].reindex(frame_copy.index)[:5]).all())
-
-        # assert(False)
-
-        self.assertTrue(np.isnan(added['D']).all())
-
-        self_added = self.frame + self.frame
-        self.assertTrue(self_added.index.equals(self.frame.index))
-
-        added_rev = frame_copy + self.frame
-        self.assertTrue(np.isnan(added['D']).all())
-
-        # corner cases
-
-        # empty
-        plus_empty = self.frame + self.empty
-        self.assertTrue(np.isnan(plus_empty.values).all())
-
-        empty_plus = self.empty + self.frame
-        self.assertTrue(np.isnan(empty_plus.values).all())
-
-        empty_empty = self.empty + self.empty
-        self.assertTrue(empty_empty.empty)
-
-        # out of order
-        reverse = self.frame.reindex(columns=self.frame.columns[::-1])
-
-        assert_frame_equal(reverse + self.frame, self.frame * 2)
-
-        # mix vs float64, upcast
-        added = self.frame + self.mixed_float
-        _check_mixed_float(added, dtype = 'float64')
-        added = self.mixed_float + self.frame
-        _check_mixed_float(added, dtype = 'float64')
-
-        # mix vs mix
-        added = self.mixed_float + self.mixed_float2
-        _check_mixed_float(added, dtype = dict(C = None))
-        added = self.mixed_float2 + self.mixed_float
-        _check_mixed_float(added, dtype = dict(C = None))
-
-        # with int
-        added = self.frame + self.mixed_int
-        _check_mixed_float(added, dtype = 'float64')
-
-    def test_combineSeries(self):
-
-        # Series
-        series = self.frame.xs(self.frame.index[0])
-
-        added = self.frame + series
-
-        for key, s in compat.iteritems(added):
-            assert_series_equal(s, self.frame[key] + series[key])
-
-        larger_series = series.to_dict()
-        larger_series['E'] = 1
-        larger_series = Series(larger_series)
-        larger_added = self.frame + larger_series
-
-        for key, s in compat.iteritems(self.frame):
-            assert_series_equal(larger_added[key], s + series[key])
-        self.assertIn('E', larger_added)
-        self.assertTrue(np.isnan(larger_added['E']).all())
-
-        # vs mix (upcast) as needed
-        added = self.mixed_float + series
-        _check_mixed_float(added, dtype = 'float64')
-        added = self.mixed_float + series.astype('float32')
-        _check_mixed_float(added, dtype = dict(C = None))
-        added = self.mixed_float + series.astype('float16')
-        _check_mixed_float(added, dtype = dict(C = None))
-
-        #### these raise with numexpr.....as we are adding an int64 to an uint64....weird
-        # vs int
-        #added = self.mixed_int + (100*series).astype('int64')
-        #_check_mixed_int(added, dtype = dict(A = 'int64', B = 'float64', C = 'int64', D = 'int64'))
-        #added = self.mixed_int + (100*series).astype('int32')
-        #_check_mixed_int(added, dtype = dict(A = 'int32', B = 'float64', C = 'int32', D = 'int64'))
-
-
-        # TimeSeries
-        ts = self.tsframe['A']
-
-        # 10890
-        # we no longer allow auto timeseries broadcasting
-        # and require explict broadcasting
-        added = self.tsframe.add(ts, axis='index')
-
-        for key, col in compat.iteritems(self.tsframe):
-            result = col + ts
-            assert_series_equal(added[key], result, check_names=False)
-            self.assertEqual(added[key].name, key)
-            if col.name == ts.name:
-                self.assertEqual(result.name, 'A')
-            else:
-                self.assertTrue(result.name is None)
-
-        smaller_frame = self.tsframe[:-5]
-        smaller_added = smaller_frame.add(ts, axis='index')
-
-        self.assertTrue(smaller_added.index.equals(self.tsframe.index))
-
-        smaller_ts = ts[:-5]
-        smaller_added2 = self.tsframe.add(smaller_ts, axis='index')
-        assert_frame_equal(smaller_added, smaller_added2)
-
-        # length 0, result is all-nan
-        result = self.tsframe.add(ts[:0], axis='index')
-        expected = DataFrame(np.nan,index=self.tsframe.index,columns=self.tsframe.columns)
-        assert_frame_equal(result, expected)
-
-        # Frame is all-nan
-        result = self.tsframe[:0].add(ts, axis='index')
-        expected = DataFrame(np.nan,index=self.tsframe.index,columns=self.tsframe.columns)
-        assert_frame_equal(result, expected)
-
-        # empty but with non-empty index
-        frame = self.tsframe[:1].reindex(columns=[])
-        result = frame.mul(ts,axis='index')
-        self.assertEqual(len(result), len(ts))
-
-    def test_combineFunc(self):
-        result = self.frame * 2
-        self.assert_numpy_array_equal(result.values, self.frame.values * 2)
-
-        # vs mix
-        result = self.mixed_float * 2
-        for c, s in compat.iteritems(result):
-            self.assert_numpy_array_equal(s.values, self.mixed_float[c].values * 2)
-        _check_mixed_float(result, dtype = dict(C = None))
-
-        result = self.empty * 2
-        self.assertIs(result.index, self.empty.index)
-        self.assertEqual(len(result.columns), 0)
-
-    def test_comparisons(self):
-        df1 = tm.makeTimeDataFrame()
-        df2 = tm.makeTimeDataFrame()
-
-        row = self.simple.xs('a')
-        ndim_5 = np.ones(df1.shape + (1, 1, 1))
-
-        def test_comp(func):
-            result = func(df1, df2)
-            self.assert_numpy_array_equal(result.values,
-                                          func(df1.values, df2.values))
-            with assertRaisesRegexp(ValueError, 'Wrong number of dimensions'):
-                func(df1, ndim_5)
-
-            result2 = func(self.simple, row)
-            self.assert_numpy_array_equal(result2.values,
-                                          func(self.simple.values, row.values))
-
-            result3 = func(self.frame, 0)
-            self.assert_numpy_array_equal(result3.values,
-                                          func(self.frame.values, 0))
-
-
-            with assertRaisesRegexp(ValueError, 'Can only compare '
-                                    'identically-labeled DataFrame'):
-                func(self.simple, self.simple[:2])
-
-        test_comp(operator.eq)
-        test_comp(operator.ne)
-        test_comp(operator.lt)
-        test_comp(operator.gt)
-        test_comp(operator.ge)
-        test_comp(operator.le)
-
-    def test_string_comparison(self):
-        df = DataFrame([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}])
-        mask_a = df.a > 1
-        assert_frame_equal(df[mask_a], df.ix[1:1, :])
-        assert_frame_equal(df[-mask_a], df.ix[0:0, :])
-
-        mask_b = df.b == "foo"
-        assert_frame_equal(df[mask_b], df.ix[0:0, :])
-        assert_frame_equal(df[-mask_b], df.ix[1:1, :])
-
-    def test_float_none_comparison(self):
-        df = DataFrame(np.random.randn(8, 3), index=lrange(8),
-                       columns=['A', 'B', 'C'])
-
-        self.assertRaises(TypeError, df.__eq__, None)
-
-    def test_boolean_comparison(self):
-
-        # GH 4576
-        # boolean comparisons with a tuple/list give unexpected results
-        df = DataFrame(np.arange(6).reshape((3,2)))
-        b = np.array([2, 2])
-        b_r = np.atleast_2d([2,2])
-        b_c = b_r.T
-        l = (2,2,2)
-        tup = tuple(l)
-
-        # gt
-        expected = DataFrame([[False,False],[False,True],[True,True]])
-        result = df>b
-        assert_frame_equal(result,expected)
-
-        result = df.values>b
-        assert_numpy_array_equal(result,expected.values)
-
-        result = df>l
-        assert_frame_equal(result,expected)
-
-        result = df>tup
-        assert_frame_equal(result,expected)
-
-        result = df>b_r
-        assert_frame_equal(result,expected)
-
-        result = df.values>b_r
-        assert_numpy_array_equal(result,expected.values)
-
-        self.assertRaises(ValueError, df.__gt__, b_c)
-        self.assertRaises(ValueError, df.values.__gt__, b_c)
-
-        # ==
-        expected = DataFrame([[False,False],[True,False],[False,False]])
-        result = df == b
-        assert_frame_equal(result,expected)
-
-        result = df==l
-        assert_frame_equal(result,expected)
-
-        result = df==tup
-        assert_frame_equal(result,expected)
-
-        result = df == b_r
-        assert_frame_equal(result,expected)
-
-        result = df.values == b_r
-        assert_numpy_array_equal(result,expected.values)
-
-        self.assertRaises(ValueError, lambda : df == b_c)
-        self.assertFalse((df.values == b_c))
-
-        # with alignment
-        df = DataFrame(np.arange(6).reshape((3,2)),columns=list('AB'),index=list('abc'))
-        expected.index=df.index
-        expected.columns=df.columns
-
-        result = df==l
-        assert_frame_equal(result,expected)
-
-        result = df==tup
-        assert_frame_equal(result,expected)
-
-        # not shape compatible
-        self.assertRaises(ValueError, lambda : df == (2,2))
-        self.assertRaises(ValueError, lambda : df == [2,2])
-
-    def test_equals_different_blocks(self):
-        # GH 9330
-        df0 = pd.DataFrame({"A": ["x","y"], "B": [1,2],
-                            "C": ["w","z"]})
-        df1 = df0.reset_index()[["A","B","C"]]
-        # this assert verifies that the above operations have
-        # induced a block rearrangement
-        self.assertTrue(df0._data.blocks[0].dtype !=
-                        df1._data.blocks[0].dtype)
-        # do the real tests
-        assert_frame_equal(df0, df1)
-        self.assertTrue(df0.equals(df1))
-        self.assertTrue(df1.equals(df0))
-
-    def test_copy_blocks(self):
-        # API/ENH 9607
-        df = DataFrame(self.frame, copy=True)
-        column = df.columns[0]
-
-        # use the default copy=True, change a column
-        blocks = df.as_blocks()
-        for dtype, _df in blocks.items():
-            if column in _df:
-                _df.ix[:, column] = _df[column] + 1
-
-        # make sure we did not change the original DataFrame
-        self.assertFalse(_df[column].equals(df[column]))
-
-    def test_no_copy_blocks(self):
-        # API/ENH 9607
-        df = DataFrame(self.frame, copy=True)
-        column = df.columns[0]
-
-        # use the copy=False, change a column
-        blocks = df.as_blocks(copy=False)
-        for dtype, _df in blocks.items():
-            if column in _df:
-                _df.ix[:, column] = _df[column] + 1
-
-        # make sure we did change the original DataFrame
-        self.assertTrue(_df[column].equals(df[column]))
-
-    def test_to_csv_from_csv(self):
-
-        pname = '__tmp_to_csv_from_csv__'
-        with ensure_clean(pname) as path:
-
-            self.frame['A'][:5] = nan
-
-            self.frame.to_csv(path)
-            self.frame.to_csv(path, columns=['A', 'B'])
-            self.frame.to_csv(path, header=False)
-            self.frame.to_csv(path, index=False)
-
-            # test roundtrip
-            self.tsframe.to_csv(path)
-            recons = DataFrame.from_csv(path)
-
-            assert_frame_equal(self.tsframe, recons)
-
-            self.tsframe.to_csv(path, index_label='index')
-            recons = DataFrame.from_csv(path, index_col=None)
-            assert(len(recons.columns) == len(self.tsframe.columns) + 1)
-
-            # no index
-            self.tsframe.to_csv(path, index=False)
-            recons = DataFrame.from_csv(path, index_col=None)
-            assert_almost_equal(self.tsframe.values, recons.values)
-
-            # corner case
-            dm = DataFrame({'s1': Series(lrange(3), lrange(3)),
-                            's2': Series(lrange(2), lrange(2))})
-            dm.to_csv(path)
-            recons = DataFrame.from_csv(path)
-            assert_frame_equal(dm, recons)
-
-        with ensure_clean(pname) as path:
-
-            # duplicate index
-            df = DataFrame(np.random.randn(3, 3), index=['a', 'a', 'b'],
-                           columns=['x', 'y', 'z'])
-            df.to_csv(path)
-            result = DataFrame.from_csv(path)
-            assert_frame_equal(result, df)
-
-            midx = MultiIndex.from_tuples([('A', 1, 2), ('A', 1, 2), ('B', 1, 2)])
-            df = DataFrame(np.random.randn(3, 3), index=midx,
-                           columns=['x', 'y', 'z'])
-            df.to_csv(path)
-            result = DataFrame.from_csv(path, index_col=[0, 1, 2],
-                                        parse_dates=False)
-            assert_frame_equal(result, df, check_names=False)  # TODO from_csv names index ['Unnamed: 1', 'Unnamed: 2'] should it ?
-
-            # column aliases
-            col_aliases = Index(['AA', 'X', 'Y', 'Z'])
-            self.frame2.to_csv(path, header=col_aliases)
-            rs = DataFrame.from_csv(path)
-            xp = self.frame2.copy()
-            xp.columns = col_aliases
-
-            assert_frame_equal(xp, rs)
-
-            self.assertRaises(ValueError, self.frame2.to_csv, path,
-                              header=['AA', 'X'])
-
-        with ensure_clean(pname) as path:
-            df1 = DataFrame(np.random.randn(3, 1))
-            df2 = DataFrame(np.random.randn(3, 1))
-
-            df1.to_csv(path)
-            df2.to_csv(path,mode='a',header=False)
-            xp = pd.concat([df1,df2])
-            rs = pd.read_csv(path,index_col=0)
-            rs.columns = lmap(int,rs.columns)
-            xp.columns = lmap(int,xp.columns)
-            assert_frame_equal(xp,rs)
-
-        with ensure_clean() as path:
-            # GH 10833 (TimedeltaIndex formatting)
-            dt = pd.Timedelta(seconds=1)
-            df = pd.DataFrame({'dt_data': [i*dt for i in range(3)]},
-                              index=pd.Index([i*dt for i in range(3)],
-                                             name='dt_index'))
-            df.to_csv(path)
-
-            result = pd.read_csv(path, index_col='dt_index')
-            result.index = pd.to_timedelta(result.index)
-            # TODO: remove renaming when GH 10875 is solved
-            result.index = result.index.rename('dt_index')
-            result['dt_data'] = pd.to_timedelta(result['dt_data'])
-
-            assert_frame_equal(df, result, check_index_type=True)
-
-        # tz, 8260
-        with ensure_clean(pname) as path:
-
-            self.tzframe.to_csv(path)
-            result = pd.read_csv(path, index_col=0, parse_dates=['A'])
-
-            converter = lambda c: pd.to_datetime(result[c]).dt.tz_localize('UTC').dt.tz_convert(self.tzframe[c].dt.tz)
-            result['B'] = converter('B')
-            result['C'] = converter('C')
-            assert_frame_equal(result, self.tzframe)
-
-    def test_to_csv_cols_reordering(self):
-        # GH3454
-        import pandas as pd
-
-        chunksize=5
-        N = int(chunksize*2.5)
-
-        df= mkdf(N, 3)
-        cs = df.columns
-        cols = [cs[2],cs[0]]
-
-        with ensure_clean() as path:
-            df.to_csv(path,columns = cols,chunksize=chunksize)
-            rs_c = pd.read_csv(path,index_col=0)
-
-        assert_frame_equal(df[cols],rs_c,check_names=False)
-
-    def test_to_csv_legacy_raises_on_dupe_cols(self):
-        df= mkdf(10, 3)
-        df.columns = ['a','a','b']
-        with ensure_clean() as path:
-            with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
-                self.assertRaises(NotImplementedError,df.to_csv,path,engine='python')
-
-    def test_to_csv_new_dupe_cols(self):
-        import pandas as pd
-        def _check_df(df,cols=None):
-            with ensure_clean() as path:
-                df.to_csv(path,columns = cols,chunksize=chunksize)
-                rs_c = pd.read_csv(path,index_col=0)
-
-                # we wrote them in a different order
-                # so compare them in that order
-                if cols is not None:
-
-                    if df.columns.is_unique:
-                        rs_c.columns = cols
-                    else:
-                        indexer, missing = df.columns.get_indexer_non_unique(cols)
-                        rs_c.columns = df.columns.take(indexer)
-
-                    for c in cols:
-                        obj_df = df[c]
-                        obj_rs = rs_c[c]
-                        if isinstance(obj_df,Series):
-                            assert_series_equal(obj_df,obj_rs)
-                        else:
-                            assert_frame_equal(obj_df,obj_rs,check_names=False)
-
-                # wrote in the same order
-                else:
-                    rs_c.columns = df.columns
-                    assert_frame_equal(df,rs_c,check_names=False)
-
-        chunksize=5
-        N = int(chunksize*2.5)
-
-        # dupe cols
-        df= mkdf(N, 3)
-        df.columns = ['a','a','b']
-        _check_df(df,None)
-
-        # dupe cols with selection
-        cols = ['b','a']
-        _check_df(df,cols)
-
-    @slow
-    def test_to_csv_moar(self):
-        path = '__tmp_to_csv_moar__'
-
-        def _do_test(df,path,r_dtype=None,c_dtype=None,rnlvl=None,cnlvl=None,
-                     dupe_col=False):
-
-            kwargs = dict(parse_dates=False)
-            if cnlvl:
-                if rnlvl is not None:
-                    kwargs['index_col'] = lrange(rnlvl)
-                kwargs['header'] = lrange(cnlvl)
-                with ensure_clean(path) as path:
-                    df.to_csv(path,encoding='utf8',chunksize=chunksize,tupleize_cols=False)
-                    recons = DataFrame.from_csv(path,tupleize_cols=False,**kwargs)
-            else:
-                kwargs['header'] = 0
-                with ensure_clean(path) as path:
-                    df.to_csv(path,encoding='utf8',chunksize=chunksize)
-                    recons = DataFrame.from_csv(path,**kwargs)
-
-            def _to_uni(x):
-                if not isinstance(x, compat.text_type):
-                    return x.decode('utf8')
-                return x
-            if dupe_col:
-                # read_Csv disambiguates the columns by
-                # labeling them dupe.1,dupe.2, etc'. monkey patch columns
-                recons.columns = df.columns
-            if rnlvl and not cnlvl:
-                delta_lvl = [recons.iloc[:, i].values for i in range(rnlvl-1)]
-                ix=MultiIndex.from_arrays([list(recons.index)]+delta_lvl)
-                recons.index = ix
-                recons = recons.iloc[:,rnlvl-1:]
-
-            type_map = dict(i='i',f='f',s='O',u='O',dt='O',p='O')
-            if r_dtype:
-                if r_dtype == 'u': # unicode
-                    r_dtype='O'
-                    recons.index = np.array(lmap(_to_uni,recons.index),
-                                            dtype=r_dtype)
-                    df.index = np.array(lmap(_to_uni,df.index),dtype=r_dtype)
-                elif r_dtype == 'dt': # unicode
-                    r_dtype='O'
-                    recons.index = np.array(lmap(Timestamp,recons.index),
-                                            dtype=r_dtype)
-                    df.index = np.array(lmap(Timestamp,df.index),dtype=r_dtype)
-                elif r_dtype == 'p':
-                    r_dtype='O'
-                    recons.index = np.array(list(map(Timestamp,
-                                                     recons.index.to_datetime())),
-                                            dtype=r_dtype)
-                    df.index = np.array(list(map(Timestamp,
-                                                 df.index.to_datetime())),
-                                        dtype=r_dtype)
-                else:
-                    r_dtype= type_map.get(r_dtype)
-                    recons.index = np.array(recons.index,dtype=r_dtype )
-                    df.index = np.array(df.index,dtype=r_dtype )
-            if c_dtype:
-                if c_dtype == 'u':
-                    c_dtype='O'
-                    recons.columns = np.array(lmap(_to_uni,recons.columns),
-                                              dtype=c_dtype)
-                    df.columns = np.array(lmap(_to_uni,df.columns),dtype=c_dtype )
-                elif c_dtype == 'dt':
-                    c_dtype='O'
-                    recons.columns = np.array(lmap(Timestamp,recons.columns),
-                                              dtype=c_dtype )
-                    df.columns = np.array(lmap(Timestamp,df.columns),dtype=c_dtype)
-                elif c_dtype == 'p':
-                    c_dtype='O'
-                    recons.columns = np.array(lmap(Timestamp,recons.columns.to_datetime()),
-                                              dtype=c_dtype)
-                    df.columns = np.array(lmap(Timestamp,df.columns.to_datetime()),dtype=c_dtype )
-                else:
-                    c_dtype= type_map.get(c_dtype)
-                    recons.columns = np.array(recons.columns,dtype=c_dtype )
-                    df.columns = np.array(df.columns,dtype=c_dtype )
-
-            assert_frame_equal(df,recons,check_names=False,check_less_precise=True)
-
-        N = 100
-        chunksize=1000
-
-        # GH3437
-        from pandas import NaT
-        def make_dtnat_arr(n,nnat=None):
-            if nnat is None:
-                nnat= int(n*0.1) # 10%
-            s=list(date_range('2000',freq='5min',periods=n))
-            if nnat:
-                for i in np.random.randint(0,len(s),nnat):
-                    s[i] = NaT
-                i = np.random.randint(100)
-                s[-i] = NaT
-                s[i] = NaT
-            return s
-
-        # N=35000
-        s1=make_dtnat_arr(chunksize+5)
-        s2=make_dtnat_arr(chunksize+5,0)
-        path = '1.csv'
-
-        # s3=make_dtnjat_arr(chunksize+5,0)
-        with ensure_clean('.csv') as pth:
-            df=DataFrame(dict(a=s1,b=s2))
-            df.to_csv(pth,chunksize=chunksize)
-            recons = DataFrame.from_csv(pth)._convert(datetime=True,
-                                                      coerce=True)
-            assert_frame_equal(df, recons,check_names=False,check_less_precise=True)
-
-        for ncols in [4]:
-            base = int((chunksize// ncols or 1) or 1)
-            for nrows in [2,10,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                          base-1,base,base+1]:
-                _do_test(mkdf(nrows, ncols,r_idx_type='dt',
-                              c_idx_type='s'),path, 'dt','s')
-
-
-        for ncols in [4]:
-            base = int((chunksize// ncols or 1) or 1)
-            for nrows in [2,10,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                          base-1,base,base+1]:
-                _do_test(mkdf(nrows, ncols,r_idx_type='dt',
-                              c_idx_type='s'),path, 'dt','s')
-                pass
-
-        for r_idx_type,c_idx_type in [('i','i'),('s','s'),('u','dt'),('p','p')]:
-            for ncols in [1,2,3,4]:
-                base = int((chunksize// ncols or 1) or 1)
-                for nrows in [2,10,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                              base-1,base,base+1]:
-                    _do_test(mkdf(nrows, ncols,r_idx_type=r_idx_type,
-                                  c_idx_type=c_idx_type),path,r_idx_type,c_idx_type)
-
-        for ncols in [1,2,3,4]:
-            base = int((chunksize// ncols or 1) or 1)
-            for nrows in [10,N-2,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                          base-1,base,base+1]:
-                _do_test(mkdf(nrows, ncols),path)
-
-        for nrows in [10,N-2,N-1,N,N+1,N+2]:
-            df = mkdf(nrows, 3)
-            cols = list(df.columns)
-            cols[:2] = ["dupe","dupe"]
-            cols[-2:] = ["dupe","dupe"]
-            ix = list(df.index)
-            ix[:2] = ["rdupe","rdupe"]
-            ix[-2:] = ["rdupe","rdupe"]
-            df.index=ix
-            df.columns=cols
-            _do_test(df,path,dupe_col=True)
-
-
-        _do_test(DataFrame(index=lrange(10)),path)
-        _do_test(mkdf(chunksize//2+1, 2,r_idx_nlevels=2),path,rnlvl=2)
-        for ncols in [2,3,4]:
-            base = int(chunksize//ncols)
-            for nrows in [10,N-2,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                          base-1,base,base+1]:
-                _do_test(mkdf(nrows, ncols,r_idx_nlevels=2),path,rnlvl=2)
-                _do_test(mkdf(nrows, ncols,c_idx_nlevels=2),path,cnlvl=2)
-                _do_test(mkdf(nrows, ncols,r_idx_nlevels=2,c_idx_nlevels=2),
-                         path,rnlvl=2,cnlvl=2)
-
-    def test_to_csv_from_csv_w_some_infs(self):
-
-        # test roundtrip with inf, -inf, nan, as full columns and mix
-        self.frame['G'] = np.nan
-        f = lambda x: [np.inf, np.nan][np.random.rand() < .5]
-        self.frame['H'] = self.frame.index.map(f)
-
-        with ensure_clean() as path:
-            self.frame.to_csv(path)
-            recons = DataFrame.from_csv(path)
-
-            assert_frame_equal(self.frame, recons, check_names=False)  # TODO to_csv drops column name
-            assert_frame_equal(np.isinf(self.frame), np.isinf(recons), check_names=False)
-
-    def test_to_csv_from_csv_w_all_infs(self):
-
-        # test roundtrip with inf, -inf, nan, as full columns and mix
-        self.frame['E'] = np.inf
-        self.frame['F'] = -np.inf
-
-        with ensure_clean() as path:
-            self.frame.to_csv(path)
-            recons = DataFrame.from_csv(path)
-
-            assert_frame_equal(self.frame, recons, check_names=False)  # TODO to_csv drops column name
-            assert_frame_equal(np.isinf(self.frame), np.isinf(recons), check_names=False)
-
-    def test_to_csv_no_index(self):
-        # GH 3624, after appending columns, to_csv fails
-        pname = '__tmp_to_csv_no_index__'
-        with ensure_clean(pname) as path:
-            df = DataFrame({'c1':[1,2,3], 'c2':[4,5,6]})
-            df.to_csv(path, index=False)
-            result = read_csv(path)
-            assert_frame_equal(df,result)
-            df['c3'] = Series([7,8,9],dtype='int64')
-            df.to_csv(path, index=False)
-            result = read_csv(path)
-            assert_frame_equal(df,result)
-
-    def test_to_csv_with_mix_columns(self):
-        # GH11637, incorrect output when a mix of integer and string column
-        # names passed as columns parameter in to_csv
-
-        df = DataFrame({0: ['a', 'b', 'c'],
-                        1: ['aa', 'bb', 'cc']})
-        df['test'] = 'txt'
-        assert_equal(df.to_csv(), df.to_csv(columns=[0, 1, 'test']))
-
-    def test_to_csv_headers(self):
-        # GH6186, the presence or absence of `index` incorrectly
-        # causes to_csv to have different header semantics.
-        pname = '__tmp_to_csv_headers__'
-        from_df = DataFrame([[1, 2], [3, 4]], columns=['A', 'B'])
-        to_df = DataFrame([[1, 2], [3, 4]], columns=['X', 'Y'])
-        with ensure_clean(pname) as path:
-            from_df.to_csv(path, header=['X', 'Y'])
-            recons = DataFrame.from_csv(path)
-            assert_frame_equal(to_df, recons)
-
-            from_df.to_csv(path, index=False, header=['X', 'Y'])
-            recons = DataFrame.from_csv(path)
-            recons.reset_index(inplace=True)
-            assert_frame_equal(to_df, recons)
-
-    def test_to_csv_multiindex(self):
-
-        pname = '__tmp_to_csv_multiindex__'
-        frame = self.frame
-        old_index = frame.index
-        arrays = np.arange(len(old_index) * 2).reshape(2, -1)
-        new_index = MultiIndex.from_arrays(arrays, names=['first', 'second'])
-        frame.index = new_index
-
-        with ensure_clean(pname) as path:
-
-            frame.to_csv(path, header=False)
-            frame.to_csv(path, columns=['A', 'B'])
-
-            # round trip
-            frame.to_csv(path)
-            df = DataFrame.from_csv(path, index_col=[0, 1], parse_dates=False)
-
-            assert_frame_equal(frame, df, check_names=False)  # TODO to_csv drops column name
-            self.assertEqual(frame.index.names, df.index.names)
-            self.frame.index = old_index  # needed if setUP becomes a classmethod
-
-            # try multiindex with dates
-            tsframe = self.tsframe
-            old_index = tsframe.index
-            new_index = [old_index, np.arange(len(old_index))]
-            tsframe.index = MultiIndex.from_arrays(new_index)
-
-            tsframe.to_csv(path, index_label=['time', 'foo'])
-            recons = DataFrame.from_csv(path, index_col=[0, 1])
-            assert_frame_equal(tsframe, recons, check_names=False)  # TODO to_csv drops column name
-
-            # do not load index
-            tsframe.to_csv(path)
-            recons = DataFrame.from_csv(path, index_col=None)
-            np.testing.assert_equal(len(recons.columns), len(tsframe.columns) + 2)
-
-            # no index
-            tsframe.to_csv(path, index=False)
-            recons = DataFrame.from_csv(path, index_col=None)
-            assert_almost_equal(recons.values, self.tsframe.values)
-            self.tsframe.index = old_index  # needed if setUP becomes classmethod
-
-        with ensure_clean(pname) as path:
-            # GH3571, GH1651, GH3141
-
-            def _make_frame(names=None):
-                if names is True:
-                    names = ['first','second']
-                return DataFrame(np.random.randint(0,10,size=(3,3)),
-                                 columns=MultiIndex.from_tuples([('bah', 'foo'),
-                                                                 ('bah', 'bar'),
-                                                                 ('ban', 'baz')],
-                                                                names=names),
-                                 dtype='int64')
-
-            # column & index are multi-index
-            df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1,2,3],index_col=[0,1],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # column is mi
-            df = mkdf(5,3,r_idx_nlevels=1,c_idx_nlevels=4)
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1,2,3],index_col=0,tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # dup column names?
-            df = mkdf(5,3,r_idx_nlevels=3,c_idx_nlevels=4)
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1,2,3],index_col=[0,1,2],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # writing with no index
-            df = _make_frame()
-            df.to_csv(path,tupleize_cols=False,index=False)
-            result = read_csv(path,header=[0,1],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # we lose the names here
-            df = _make_frame(True)
-            df.to_csv(path,tupleize_cols=False,index=False)
-            result = read_csv(path,header=[0,1],tupleize_cols=False)
-            self.assertTrue(all([ x is None for x in result.columns.names ]))
-            result.columns.names = df.columns.names
-            assert_frame_equal(df,result)
-
-            # tupleize_cols=True and index=False
-            df = _make_frame(True)
-            df.to_csv(path,tupleize_cols=True,index=False)
-            result = read_csv(path,header=0,tupleize_cols=True,index_col=None)
-            result.columns = df.columns
-            assert_frame_equal(df,result)
-
-            # whatsnew example
-            df = _make_frame()
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1],index_col=[0],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            df = _make_frame(True)
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1],index_col=[0],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # column & index are multi-index (compatibility)
-            df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
-            df.to_csv(path,tupleize_cols=True)
-            result = read_csv(path,header=0,index_col=[0,1],tupleize_cols=True)
-            result.columns = df.columns
-            assert_frame_equal(df,result)
-
-            # invalid options
-            df = _make_frame(True)
-            df.to_csv(path,tupleize_cols=False)
-
-            # catch invalid headers
-            with assertRaisesRegexp(CParserError, 'Passed header=\[0,1,2\] are too many rows for this multi_index of columns'):
-                read_csv(path,tupleize_cols=False,header=lrange(3),index_col=0)
-
-            with assertRaisesRegexp(CParserError, 'Passed header=\[0,1,2,3,4,5,6\], len of 7, but only 6 lines in file'):
-                read_csv(path,tupleize_cols=False,header=lrange(7),index_col=0)
-
-            for i in [4,5,6]:
-                with tm.assertRaises(CParserError):
-                    read_csv(path, tupleize_cols=False, header=lrange(i), index_col=0)
-
-            # write with cols
-            with assertRaisesRegexp(TypeError, 'cannot specify cols with a MultiIndex'):
-                df.to_csv(path, tupleize_cols=False, columns=['foo', 'bar'])
-
-        with ensure_clean(pname) as path:
-            # empty
-            tsframe[:0].to_csv(path)
-            recons = DataFrame.from_csv(path)
-            exp = tsframe[:0]
-            exp.index = []
-
-            self.assertTrue(recons.columns.equals(exp.columns))
-            self.assertEqual(len(recons), 0)
-
-    def test_to_csv_float32_nanrep(self):
-        df = DataFrame(np.random.randn(1, 4).astype(np.float32))
-        df[1] = np.nan
-
-        with ensure_clean('__tmp_to_csv_float32_nanrep__.csv') as path:
-            df.to_csv(path, na_rep=999)
-
-            with open(path) as f:
-                lines = f.readlines()
-                self.assertEqual(lines[1].split(',')[2], '999')
-
-    def test_to_csv_withcommas(self):
-
-        # Commas inside fields should be correctly escaped when saving as CSV.
-        df = DataFrame({'A': [1, 2, 3], 'B': ['5,6', '7,8', '9,0']})
-
-        with ensure_clean('__tmp_to_csv_withcommas__.csv') as path:
-            df.to_csv(path)
-            df2 = DataFrame.from_csv(path)
-            assert_frame_equal(df2, df)
-
-    def test_to_csv_mixed(self):
-
-        def create_cols(name):
-            return [ "%s%03d" % (name,i) for i in range(5) ]
-
-        df_float = DataFrame(np.random.randn(100, 5),dtype='float64',columns=create_cols('float'))
-        df_int = DataFrame(np.random.randn(100, 5),dtype='int64',columns=create_cols('int'))
-        df_bool = DataFrame(True,index=df_float.index,columns=create_cols('bool'))
-        df_object = DataFrame('foo',index=df_float.index,columns=create_cols('object'))
-        df_dt = DataFrame(Timestamp('20010101'),index=df_float.index,columns=create_cols('date'))
-
-        # add in some nans
-        df_float.ix[30:50,1:3] = np.nan
-
-        #### this is a bug in read_csv right now ####
-        #df_dt.ix[30:50,1:3] = np.nan
-
-        df = pd.concat([ df_float, df_int, df_bool, df_object, df_dt ], axis=1)
-
-        # dtype
-        dtypes = dict()
-        for n,dtype in [('float',np.float64),('int',np.int64),('bool',np.bool),('object',np.object)]:
-            for c in create_cols(n):
-                dtypes[c] = dtype
-
-        with ensure_clean() as filename:
-            df.to_csv(filename)
-            rs = read_csv(filename, index_col=0, dtype=dtypes, parse_dates=create_cols('date'))
-            assert_frame_equal(rs, df)
-
-    def test_to_csv_dups_cols(self):
-
-        df = DataFrame(np.random.randn(1000, 30),columns=lrange(15)+lrange(15),dtype='float64')
-
-        with ensure_clean() as filename:
-            df.to_csv(filename)  # single dtype, fine
-            result = read_csv(filename,index_col=0)
-            result.columns = df.columns
-            assert_frame_equal(result,df)
-
-        df_float = DataFrame(np.random.randn(1000, 3),dtype='float64')
-        df_int = DataFrame(np.random.randn(1000, 3),dtype='int64')
-        df_bool = DataFrame(True,index=df_float.index,columns=lrange(3))
-        df_object = DataFrame('foo',index=df_float.index,columns=lrange(3))
-        df_dt = DataFrame(Timestamp('20010101'),index=df_float.index,columns=lrange(3))
-        df = pd.concat([
df_float, df_int, df_bool, df_object, df_dt ], axis=1, ignore_index=True) - - cols = [] - for i in range(5): - cols.extend([0,1,2]) - df.columns = cols - - from pandas import to_datetime - with ensure_clean() as filename: - df.to_csv(filename) - result = read_csv(filename,index_col=0) - - # date cols - for i in ['0.4','1.4','2.4']: - result[i] = to_datetime(result[i]) - - result.columns = df.columns - assert_frame_equal(result,df) - - # GH3457 - from pandas.util.testing import makeCustomDataframe as mkdf - - N=10 - df= mkdf(N, 3) - df.columns = ['a','a','b'] - - with ensure_clean() as filename: - df.to_csv(filename) - - # read_csv will rename the dups columns - result = read_csv(filename,index_col=0) - result = result.rename(columns={ 'a.1' : 'a' }) - assert_frame_equal(result,df) - - def test_to_csv_chunking(self): - - aa=DataFrame({'A':lrange(100000)}) - aa['B'] = aa.A + 1.0 - aa['C'] = aa.A + 2.0 - aa['D'] = aa.A + 3.0 - - for chunksize in [10000,50000,100000]: - with ensure_clean() as filename: - aa.to_csv(filename,chunksize=chunksize) - rs = read_csv(filename,index_col=0) - assert_frame_equal(rs, aa) - - @slow - def test_to_csv_wide_frame_formatting(self): - # Issue #8621 - df = DataFrame(np.random.randn(1, 100010), columns=None, index=None) - with ensure_clean() as filename: - df.to_csv(filename, header=False, index=False) - rs = read_csv(filename, header=None) - assert_frame_equal(rs, df) - - def test_to_csv_bug(self): - f1 = StringIO('a,1.0\nb,2.0') - df = DataFrame.from_csv(f1, header=None) - newdf = DataFrame({'t': df[df.columns[0]]}) - - with ensure_clean() as path: - newdf.to_csv(path) - - recons = read_csv(path, index_col=0) - assert_frame_equal(recons, newdf, check_names=False) # don't check_names as t != 1 - - def test_to_csv_unicode(self): - - df = DataFrame({u('c/\u03c3'): [1, 2, 3]}) - with ensure_clean() as path: - - df.to_csv(path, encoding='UTF-8') - df2 = read_csv(path, index_col=0, encoding='UTF-8') - assert_frame_equal(df, df2) - - 
df.to_csv(path, encoding='UTF-8', index=False) - df2 = read_csv(path, index_col=None, encoding='UTF-8') - assert_frame_equal(df, df2) - - def test_to_csv_unicode_index_col(self): - buf = StringIO('') - df = DataFrame( - [[u("\u05d0"), "d2", "d3", "d4"], ["a1", "a2", "a3", "a4"]], - columns=[u("\u05d0"), - u("\u05d1"), u("\u05d2"), u("\u05d3")], - index=[u("\u05d0"), u("\u05d1")]) - - df.to_csv(buf, encoding='UTF-8') - buf.seek(0) - - df2 = read_csv(buf, index_col=0, encoding='UTF-8') - assert_frame_equal(df, df2) - - def test_to_csv_stringio(self): - buf = StringIO() - self.frame.to_csv(buf) - buf.seek(0) - recons = read_csv(buf, index_col=0) - assert_frame_equal(recons, self.frame, check_names=False) # TODO to_csv drops column name - - def test_to_csv_float_format(self): - - df = DataFrame([[0.123456, 0.234567, 0.567567], - [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - - with ensure_clean() as filename: - - df.to_csv(filename, float_format='%.2f') - - rs = read_csv(filename, index_col=0) - xp = DataFrame([[0.12, 0.23, 0.57], - [12.32, 123123.20, 321321.20]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - assert_frame_equal(rs, xp) - - def test_to_csv_quoting(self): - df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']}) - - buf = StringIO() - df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC) - - result = buf.getvalue() - expected = ('"A","B"\n' - '1,"foo"\n' - '2,"bar"\n' - '3,"baz"\n') - - self.assertEqual(result, expected) - - # quoting windows line terminators, presents with encoding? 
- # #3503 - text = 'a,b,c\n1,"test \r\n",3\n' - df = pd.read_csv(StringIO(text)) - buf = StringIO() - df.to_csv(buf, encoding='utf-8', index=False) - self.assertEqual(buf.getvalue(), text) - - # testing if quoting parameter is passed through with multi-indexes - # related to issue #7791 - df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]}) - df = df.set_index(['a', 'b']) - expected = '"a","b","c"\n"1","3","5"\n"2","4","6"\n' - self.assertEqual(df.to_csv(quoting=csv.QUOTE_ALL), expected) - - def test_to_csv_unicodewriter_quoting(self): - df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']}) - - buf = StringIO() - df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC, - encoding='utf-8') - - result = buf.getvalue() - expected = ('"A","B"\n' - '1,"foo"\n' - '2,"bar"\n' - '3,"baz"\n') - - self.assertEqual(result, expected) - - def test_to_csv_quote_none(self): - # GH4328 - df = DataFrame({'A': ['hello', '{"hello"}']}) - for encoding in (None, 'utf-8'): - buf = StringIO() - df.to_csv(buf, quoting=csv.QUOTE_NONE, - encoding=encoding, index=False) - result = buf.getvalue() - expected = 'A\nhello\n{"hello"}\n' - self.assertEqual(result, expected) - - def test_to_csv_index_no_leading_comma(self): - df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, - index=['one', 'two', 'three']) - - buf = StringIO() - df.to_csv(buf, index_label=False) - expected = ('A,B\n' - 'one,1,4\n' - 'two,2,5\n' - 'three,3,6\n') - self.assertEqual(buf.getvalue(), expected) - - def test_to_csv_line_terminators(self): - df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, - index=['one', 'two', 'three']) - - buf = StringIO() - df.to_csv(buf, line_terminator='\r\n') - expected = (',A,B\r\n' - 'one,1,4\r\n' - 'two,2,5\r\n' - 'three,3,6\r\n') - self.assertEqual(buf.getvalue(), expected) - - buf = StringIO() - df.to_csv(buf) # The default line terminator remains \n - expected = (',A,B\n' - 'one,1,4\n' - 'two,2,5\n' - 'three,3,6\n') - self.assertEqual(buf.getvalue(), expected) - - def 
test_to_csv_from_csv_categorical(self): - - # CSV with categoricals should result in the same output as when one would add a "normal" - # Series/DataFrame. - s = Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'])) - s2 = Series(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c']) - res = StringIO() - s.to_csv(res) - exp = StringIO() - s2.to_csv(exp) - self.assertEqual(res.getvalue(), exp.getvalue()) - - df = DataFrame({"s":s}) - df2 = DataFrame({"s":s2}) - res = StringIO() - df.to_csv(res) - exp = StringIO() - df2.to_csv(exp) - self.assertEqual(res.getvalue(), exp.getvalue()) - - def test_to_csv_path_is_none(self): - # GH 8215 - # Make sure we return string for consistency with - # Series.to_csv() - csv_str = self.frame.to_csv(path=None) - self.assertIsInstance(csv_str, str) - recons = pd.read_csv(StringIO(csv_str), index_col=0) - assert_frame_equal(self.frame, recons) - - def test_to_csv_compression_gzip(self): - ## GH7615 - ## use the compression kw in to_csv - df = DataFrame([[0.123456, 0.234567, 0.567567], - [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - - with ensure_clean() as filename: - - df.to_csv(filename, compression="gzip") - - # test the round trip - to_csv -> read_csv - rs = read_csv(filename, compression="gzip", index_col=0) - assert_frame_equal(df, rs) - - # explicitly make sure file is gziped - import gzip - f = gzip.open(filename, 'rb') - text = f.read().decode('utf8') - f.close() - for col in df.columns: - self.assertIn(col, text) - - def test_to_csv_compression_bz2(self): - ## GH7615 - ## use the compression kw in to_csv - df = DataFrame([[0.123456, 0.234567, 0.567567], - [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - - with ensure_clean() as filename: - - df.to_csv(filename, compression="bz2") - - # test the round trip - to_csv -> read_csv - rs = read_csv(filename, compression="bz2", index_col=0) - assert_frame_equal(df, rs) - - # explicitly make sure file is bz2ed - import bz2 
- f = bz2.BZ2File(filename, 'rb') - text = f.read().decode('utf8') - f.close() - for col in df.columns: - self.assertIn(col, text) - - def test_to_csv_compression_value_error(self): - ## GH7615 - ## use the compression kw in to_csv - df = DataFrame([[0.123456, 0.234567, 0.567567], - [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - - with ensure_clean() as filename: - # zip compression is not supported and should raise ValueError - self.assertRaises(ValueError, df.to_csv, filename, compression="zip") - - def test_info(self): - io = StringIO() - self.frame.info(buf=io) - self.tsframe.info(buf=io) - - frame = DataFrame(np.random.randn(5, 3)) - - import sys - sys.stdout = StringIO() - frame.info() - frame.info(verbose=False) - sys.stdout = sys.__stdout__ - - def test_info_wide(self): - from pandas import set_option, reset_option - io = StringIO() - df = DataFrame(np.random.randn(5, 101)) - df.info(buf=io) - - io = StringIO() - df.info(buf=io, max_cols=101) - rs = io.getvalue() - self.assertTrue(len(rs.splitlines()) > 100) - xp = rs - - set_option('display.max_info_columns', 101) - io = StringIO() - df.info(buf=io) - self.assertEqual(rs, xp) - reset_option('display.max_info_columns') - - def test_info_duplicate_columns(self): - io = StringIO() - - # it works! 
- frame = DataFrame(np.random.randn(1500, 4), - columns=['a', 'a', 'b', 'b']) - frame.info(buf=io) - - def test_info_duplicate_columns_shows_correct_dtypes(self): - # GH11761 - io = StringIO() - - frame = DataFrame([[1, 2.0]], - columns=['a', 'a']) - frame.info(buf=io) - io.seek(0) - lines = io.readlines() - self.assertEqual('a 1 non-null int64\n', lines[3]) - self.assertEqual('a 1 non-null float64\n', lines[4]) - - def test_info_shows_column_dtypes(self): - dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]', - 'complex128', 'object', 'bool'] - data = {} - n = 10 - for i, dtype in enumerate(dtypes): - data[i] = np.random.randint(2, size=n).astype(dtype) - df = DataFrame(data) - buf = StringIO() - df.info(buf=buf) - res = buf.getvalue() - for i, dtype in enumerate(dtypes): - name = '%d %d non-null %s' % (i, n, dtype) - assert name in res - - def test_info_max_cols(self): - df = DataFrame(np.random.randn(10, 5)) - for len_, verbose in [(5, None), (5, False), (10, True)]: - # For verbose always ^ setting ^ summarize ^ full output - with option_context('max_info_columns', 4): - buf = StringIO() - df.info(buf=buf, verbose=verbose) - res = buf.getvalue() - self.assertEqual(len(res.strip().split('\n')), len_) - - for len_, verbose in [(10, None), (5, False), (10, True)]: - - # max_cols no exceeded - with option_context('max_info_columns', 5): - buf = StringIO() - df.info(buf=buf, verbose=verbose) - res = buf.getvalue() - self.assertEqual(len(res.strip().split('\n')), len_) - - for len_, max_cols in [(10, 5), (5, 4)]: - # setting truncates - with option_context('max_info_columns', 4): - buf = StringIO() - df.info(buf=buf, max_cols=max_cols) - res = buf.getvalue() - self.assertEqual(len(res.strip().split('\n')), len_) - - # setting wouldn't truncate - with option_context('max_info_columns', 5): - buf = StringIO() - df.info(buf=buf, max_cols=max_cols) - res = buf.getvalue() - self.assertEqual(len(res.strip().split('\n')), len_) - - def 
test_info_memory_usage(self): - # Ensure memory usage is displayed, when asserted, on the last line - dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]', - 'complex128', 'object', 'bool'] - data = {} - n = 10 - for i, dtype in enumerate(dtypes): - data[i] = np.random.randint(2, size=n).astype(dtype) - df = DataFrame(data) - buf = StringIO() - # display memory usage case - df.info(buf=buf, memory_usage=True) - res = buf.getvalue().splitlines() - self.assertTrue("memory usage: " in res[-1]) - # do not display memory usage cas - df.info(buf=buf, memory_usage=False) - res = buf.getvalue().splitlines() - self.assertTrue("memory usage: " not in res[-1]) - - df.info(buf=buf, memory_usage=True) - res = buf.getvalue().splitlines() - # memory usage is a lower bound, so print it as XYZ+ MB - self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1])) - - df.iloc[:, :5].info(buf=buf, memory_usage=True) - res = buf.getvalue().splitlines() - # excluded column with object dtype, so estimate is accurate - self.assertFalse(re.match(r"memory usage: [^+]+\+", res[-1])) - - df_with_object_index = pd.DataFrame({'a': [1]}, index=['foo']) - df_with_object_index.info(buf=buf, memory_usage=True) - res = buf.getvalue().splitlines() - self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1])) - - df_with_object_index.info(buf=buf, memory_usage='deep') - res = buf.getvalue().splitlines() - self.assertTrue(re.match(r"memory usage: [^+]+$", res[-1])) - - self.assertTrue(df_with_object_index.memory_usage(index=True, deep=True).sum() \ - > df_with_object_index.memory_usage(index=True).sum()) - - df_object = pd.DataFrame({'a': ['a']}) - self.assertTrue(df_object.memory_usage(deep=True).sum() \ - > df_object.memory_usage().sum()) - - # Test a DataFrame with duplicate columns - dtypes = ['int64', 'int64', 'int64', 'float64'] - data = {} - n = 100 - for i, dtype in enumerate(dtypes): - data[i] = np.random.randint(2, size=n).astype(dtype) - df = DataFrame(data) - df.columns = dtypes 
- # Ensure df size is as expected - df_size = df.memory_usage().sum() - exp_size = (len(dtypes) + 1) * n * 8 # (cols + index) * rows * bytes - self.assertEqual(df_size, exp_size) - # Ensure number of cols in memory_usage is the same as df - size_df = np.size(df.columns.values) + 1 # index=True; default - self.assertEqual(size_df, np.size(df.memory_usage())) - - # assert deep works only on object - self.assertEqual(df.memory_usage().sum(), - df.memory_usage(deep=True).sum()) - - # test for validity - DataFrame(1, index=['a'], columns=['A'] - ).memory_usage(index=True) - DataFrame(1, index=['a'], columns=['A'] - ).index.nbytes - df = DataFrame( - data=1, - index=pd.MultiIndex.from_product( - [['a'], range(1000)]), - columns=['A'] - ) - df.index.nbytes - df.memory_usage(index=True) - df.index.values.nbytes - - # sys.getsizeof will call the .memory_usage with - # deep=True, and add on some GC overhead - diff = df.memory_usage(deep=True).sum() - sys.getsizeof(df) - self.assertTrue(abs(diff) < 100) - - def test_dtypes(self): - self.mixed_frame['bool'] = self.mixed_frame['A'] > 0 - result = self.mixed_frame.dtypes - expected = Series(dict((k, v.dtype) - for k, v in compat.iteritems(self.mixed_frame)), - index=result.index) - assert_series_equal(result, expected) - - # compat, GH 8722 - with option_context('use_inf_as_null',True): - df = DataFrame([[1]]) - result = df.dtypes - assert_series_equal(result,Series({0:np.dtype('int64')})) - - def test_convert_objects(self): - - oops = self.mixed_frame.T.T - converted = oops._convert(datetime=True) - assert_frame_equal(converted, self.mixed_frame) - self.assertEqual(converted['A'].dtype, np.float64) - - # force numeric conversion - self.mixed_frame['H'] = '1.' - self.mixed_frame['I'] = '1' - - # add in some items that will be nan - l = len(self.mixed_frame) - self.mixed_frame['J'] = '1.' 
- self.mixed_frame['K'] = '1' - self.mixed_frame.ix[0:5,['J','K']] = 'garbled' - converted = self.mixed_frame._convert(datetime=True, numeric=True) - self.assertEqual(converted['H'].dtype, 'float64') - self.assertEqual(converted['I'].dtype, 'int64') - self.assertEqual(converted['J'].dtype, 'float64') - self.assertEqual(converted['K'].dtype, 'float64') - self.assertEqual(len(converted['J'].dropna()), l-5) - self.assertEqual(len(converted['K'].dropna()), l-5) - - # via astype - converted = self.mixed_frame.copy() - converted['H'] = converted['H'].astype('float64') - converted['I'] = converted['I'].astype('int64') - self.assertEqual(converted['H'].dtype, 'float64') - self.assertEqual(converted['I'].dtype, 'int64') - - # via astype, but errors - converted = self.mixed_frame.copy() - with assertRaisesRegexp(ValueError, 'invalid literal'): - converted['H'].astype('int32') - - # mixed in a single column - df = DataFrame(dict(s = Series([1, 'na', 3 ,4]))) - result = df._convert(datetime=True, numeric=True) - expected = DataFrame(dict(s = Series([1, np.nan, 3 ,4]))) - assert_frame_equal(result, expected) - - def test_convert_objects_no_conversion(self): - mixed1 = DataFrame( - {'a': [1, 2, 3], 'b': [4.0, 5, 6], 'c': ['x', 'y', 'z']}) - mixed2 = mixed1._convert(datetime=True) - assert_frame_equal(mixed1, mixed2) - - def test_append_series_dict(self): - df = DataFrame(np.random.randn(5, 4), - columns=['foo', 'bar', 'baz', 'qux']) - - series = df.ix[4] - with assertRaisesRegexp(ValueError, 'Indexes have overlapping values'): - df.append(series, verify_integrity=True) - series.name = None - with assertRaisesRegexp(TypeError, 'Can only append a Series if ' - 'ignore_index=True'): - df.append(series, verify_integrity=True) - - result = df.append(series[::-1], ignore_index=True) - expected = df.append(DataFrame({0: series[::-1]}, index=df.columns).T, - ignore_index=True) - assert_frame_equal(result, expected) - - # dict - result = df.append(series.to_dict(), ignore_index=True) - 
assert_frame_equal(result, expected) - - result = df.append(series[::-1][:3], ignore_index=True) - expected = df.append(DataFrame({0: series[::-1][:3]}).T, - ignore_index=True) - assert_frame_equal(result, expected.ix[:, result.columns]) - - # can append when name set - row = df.ix[4] - row.name = 5 - result = df.append(row) - expected = df.append(df[-1:], ignore_index=True) - assert_frame_equal(result, expected) - - def test_append_list_of_series_dicts(self): - df = DataFrame(np.random.randn(5, 4), - columns=['foo', 'bar', 'baz', 'qux']) - - dicts = [x.to_dict() for idx, x in df.iterrows()] - - result = df.append(dicts, ignore_index=True) - expected = df.append(df, ignore_index=True) - assert_frame_equal(result, expected) - - # different columns - dicts = [{'foo': 1, 'bar': 2, 'baz': 3, 'peekaboo': 4}, - {'foo': 5, 'bar': 6, 'baz': 7, 'peekaboo': 8}] - result = df.append(dicts, ignore_index=True) - expected = df.append(DataFrame(dicts), ignore_index=True) - assert_frame_equal(result, expected) - - def test_append_empty_dataframe(self): - - # Empty df append empty df - df1 = DataFrame([]) - df2 = DataFrame([]) - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - # Non-empty df append empty df - df1 = DataFrame(np.random.randn(5, 2)) - df2 = DataFrame() - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - # Empty df with columns append empty df - df1 = DataFrame(columns=['bar', 'foo']) - df2 = DataFrame() - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - # Non-Empty df with columns append empty df - df1 = DataFrame(np.random.randn(5, 2), columns=['bar', 'foo']) - df2 = DataFrame() - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - def test_append_dtypes(self): - - # GH 5754 - # row appends of different dtypes (so need to do by-item) - # can sometimes infer the correct type - - df1 = DataFrame({ 'bar' 
: Timestamp('20130101') }, index=lrange(5)) - df2 = DataFrame() - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1)) - df2 = DataFrame({ 'bar' : 'foo' }, index=lrange(1,2)) - result = df1.append(df2) - expected = DataFrame({ 'bar' : [ Timestamp('20130101'), 'foo' ]}) - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1)) - df2 = DataFrame({ 'bar' : np.nan }, index=lrange(1,2)) - result = df1.append(df2) - expected = DataFrame({ 'bar' : Series([ Timestamp('20130101'), np.nan ],dtype='M8[ns]') }) - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1)) - df2 = DataFrame({ 'bar' : np.nan }, index=lrange(1,2), dtype=object) - result = df1.append(df2) - expected = DataFrame({ 'bar' : Series([ Timestamp('20130101'), np.nan ],dtype='M8[ns]') }) - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : np.nan }, index=lrange(1)) - df2 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1,2)) - result = df1.append(df2) - expected = DataFrame({ 'bar' : Series([ np.nan, Timestamp('20130101')] ,dtype='M8[ns]') }) - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1)) - df2 = DataFrame({ 'bar' : 1 }, index=lrange(1,2), dtype=object) - result = df1.append(df2) - expected = DataFrame({ 'bar' : Series([ Timestamp('20130101'), 1 ]) }) - assert_frame_equal(result, expected) - - def test_asfreq(self): - offset_monthly = self.tsframe.asfreq(datetools.bmonthEnd) - rule_monthly = self.tsframe.asfreq('BM') - - assert_almost_equal(offset_monthly['A'], rule_monthly['A']) - - filled = rule_monthly.asfreq('B', method='pad') - # TODO: actually check that this worked. - - # don't forget! 
- filled_dep = rule_monthly.asfreq('B', method='pad') - - # test does not blow up on length-0 DataFrame - zero_length = self.tsframe.reindex([]) - result = zero_length.asfreq('BM') - self.assertIsNot(result, zero_length) - - def test_asfreq_datetimeindex(self): - df = DataFrame({'A': [1, 2, 3]}, - index=[datetime(2011, 11, 1), datetime(2011, 11, 2), - datetime(2011, 11, 3)]) - df = df.asfreq('B') - tm.assertIsInstance(df.index, DatetimeIndex) - - ts = df['A'].asfreq('B') - tm.assertIsInstance(ts.index, DatetimeIndex) - - def test_at_time_between_time_datetimeindex(self): - index = date_range("2012-01-01", "2012-01-05", freq='30min') - df = DataFrame(randn(len(index), 5), index=index) - akey = time(12, 0, 0) - bkey = slice(time(13, 0, 0), time(14, 0, 0)) - ainds = [24, 72, 120, 168] - binds = [26, 27, 28, 74, 75, 76, 122, 123, 124, 170, 171, 172] - - result = df.at_time(akey) - expected = df.ix[akey] - expected2 = df.ix[ainds] - assert_frame_equal(result, expected) - assert_frame_equal(result, expected2) - self.assertEqual(len(result), 4) - - result = df.between_time(bkey.start, bkey.stop) - expected = df.ix[bkey] - expected2 = df.ix[binds] - assert_frame_equal(result, expected) - assert_frame_equal(result, expected2) - self.assertEqual(len(result), 12) - - result = df.copy() - result.ix[akey] = 0 - result = result.ix[akey] - expected = df.ix[akey].copy() - expected.ix[:] = 0 - assert_frame_equal(result, expected) - - result = df.copy() - result.ix[akey] = 0 - result.ix[akey] = df.ix[ainds] - assert_frame_equal(result, df) - - result = df.copy() - result.ix[bkey] = 0 - result = result.ix[bkey] - expected = df.ix[bkey].copy() - expected.ix[:] = 0 - assert_frame_equal(result, expected) - - result = df.copy() - result.ix[bkey] = 0 - result.ix[bkey] = df.ix[binds] - assert_frame_equal(result, df) - - def test_as_matrix(self): - frame = self.frame - mat = frame.as_matrix() - - frameCols = frame.columns - for i, row in enumerate(mat): - for j, value in enumerate(row): - 
col = frameCols[j] - if np.isnan(value): - self.assertTrue(np.isnan(frame[col][i])) - else: - self.assertEqual(value, frame[col][i]) - - # mixed type - mat = self.mixed_frame.as_matrix(['foo', 'A']) - self.assertEqual(mat[0, 0], 'bar') - - df = DataFrame({'real': [1, 2, 3], 'complex': [1j, 2j, 3j]}) - mat = df.as_matrix() - self.assertEqual(mat[0, 0], 1j) - - # single block corner case - mat = self.frame.as_matrix(['A', 'B']) - expected = self.frame.reindex(columns=['A', 'B']).values - assert_almost_equal(mat, expected) - - def test_as_matrix_duplicates(self): - df = DataFrame([[1, 2, 'a', 'b'], - [1, 2, 'a', 'b']], - columns=['one', 'one', 'two', 'two']) - - result = df.values - expected = np.array([[1, 2, 'a', 'b'], [1, 2, 'a', 'b']], - dtype=object) - - self.assertTrue(np.array_equal(result, expected)) - - def test_ftypes(self): - frame = self.mixed_float - expected = Series(dict(A = 'float32:dense', - B = 'float32:dense', - C = 'float16:dense', - D = 'float64:dense')).sort_values() - result = frame.ftypes.sort_values() - assert_series_equal(result,expected) - - def test_values(self): - self.frame.values[:, 0] = 5. 
- self.assertTrue((self.frame.values[:, 0] == 5).all()) - - def test_deepcopy(self): - cp = deepcopy(self.frame) - series = cp['A'] - series[:] = 10 - for idx, value in compat.iteritems(series): - self.assertNotEqual(self.frame['A'][idx], value) - - def test_copy(self): - cop = self.frame.copy() - cop['E'] = cop['A'] - self.assertNotIn('E', self.frame) - - # copy objects - copy = self.mixed_frame.copy() - self.assertIsNot(copy._data, self.mixed_frame._data) - - def _check_method(self, method='pearson', check_minp=False): - if not check_minp: - correls = self.frame.corr(method=method) - exp = self.frame['A'].corr(self.frame['C'], method=method) - assert_almost_equal(correls['A']['C'], exp) - else: - result = self.frame.corr(min_periods=len(self.frame) - 8) - expected = self.frame.corr() - expected.ix['A', 'B'] = expected.ix['B', 'A'] = nan - - def test_corr_pearson(self): - tm._skip_if_no_scipy() - self.frame['A'][:5] = nan - self.frame['B'][5:10] = nan - - self._check_method('pearson') - - def test_corr_kendall(self): - tm._skip_if_no_scipy() - self.frame['A'][:5] = nan - self.frame['B'][5:10] = nan - - self._check_method('kendall') - - def test_corr_spearman(self): - tm._skip_if_no_scipy() - self.frame['A'][:5] = nan - self.frame['B'][5:10] = nan - - self._check_method('spearman') - - def test_corr_non_numeric(self): - tm._skip_if_no_scipy() - self.frame['A'][:5] = nan - self.frame['B'][5:10] = nan - - # exclude non-numeric types - result = self.mixed_frame.corr() - expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].corr() - assert_frame_equal(result, expected) - - def test_corr_nooverlap(self): - tm._skip_if_no_scipy() - - # nothing in common - for meth in ['pearson', 'kendall', 'spearman']: - df = DataFrame({'A': [1, 1.5, 1, np.nan, np.nan, np.nan], - 'B': [np.nan, np.nan, np.nan, 1, 1.5, 1], - 'C': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]}) - rs = df.corr(meth) - self.assertTrue(isnull(rs.ix['A', 'B'])) - self.assertTrue(isnull(rs.ix['B', 'A'])) - 
-        self.assertEqual(rs.ix['A', 'A'], 1)
-        self.assertEqual(rs.ix['B', 'B'], 1)
-        self.assertTrue(isnull(rs.ix['C', 'C']))
-
-    def test_corr_constant(self):
-        tm._skip_if_no_scipy()
-
-        # constant --> all NA
-
-        for meth in ['pearson', 'spearman']:
-            df = DataFrame({'A': [1, 1, 1, np.nan, np.nan, np.nan],
-                            'B': [np.nan, np.nan, np.nan, 1, 1, 1]})
-            rs = df.corr(meth)
-            self.assertTrue(isnull(rs.values).all())
-
-    def test_corr_int(self):
-        # dtypes other than float64 #1761
-        df3 = DataFrame({"a": [1, 2, 3, 4], "b": [1, 2, 3, 4]})
-
-        # it works!
-        df3.cov()
-        df3.corr()
-
-    def test_corr_int_and_boolean(self):
-        tm._skip_if_no_scipy()
-
-        # when dtypes of pandas series are different
-        # then ndarray will have dtype=object,
-        # so it need to be properly handled
-        df = DataFrame({"a": [True, False], "b": [1, 0]})
-
-        expected = DataFrame(np.ones((2, 2)), index=['a', 'b'], columns=['a', 'b'])
-        for meth in ['pearson', 'kendall', 'spearman']:
-            assert_frame_equal(df.corr(meth), expected)
-
-    def test_cov(self):
-        # min_periods no NAs (corner case)
-        expected = self.frame.cov()
-        result = self.frame.cov(min_periods=len(self.frame))
-
-        assert_frame_equal(expected, result)
-
-        result = self.frame.cov(min_periods=len(self.frame) + 1)
-        self.assertTrue(isnull(result.values).all())
-
-        # with NAs
-        frame = self.frame.copy()
-        frame['A'][:5] = nan
-        frame['B'][5:10] = nan
-        result = self.frame.cov(min_periods=len(self.frame) - 8)
-        expected = self.frame.cov()
-        expected.ix['A', 'B'] = np.nan
-        expected.ix['B', 'A'] = np.nan
-
-        # regular
-        self.frame['A'][:5] = nan
-        self.frame['B'][:10] = nan
-        cov = self.frame.cov()
-
-        assert_almost_equal(cov['A']['C'],
-                            self.frame['A'].cov(self.frame['C']))
-
-        # exclude non-numeric types
-        result = self.mixed_frame.cov()
-        expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].cov()
-        assert_frame_equal(result, expected)
-
-        # Single column frame
-        df = DataFrame(np.linspace(0.0,1.0,10))
-        result = df.cov()
-        expected = DataFrame(np.cov(df.values.T).reshape((1,1)),
-                             index=df.columns,columns=df.columns)
-        assert_frame_equal(result, expected)
-        df.ix[0] = np.nan
-        result = df.cov()
-        expected = DataFrame(np.cov(df.values[1:].T).reshape((1,1)),
-                             index=df.columns,columns=df.columns)
-        assert_frame_equal(result, expected)
-
-    def test_corrwith(self):
-        a = self.tsframe
-        noise = Series(randn(len(a)), index=a.index)
-
-        b = self.tsframe + noise
-
-        # make sure order does not matter
-        b = b.reindex(columns=b.columns[::-1], index=b.index[::-1][10:])
-        del b['B']
-
-        colcorr = a.corrwith(b, axis=0)
-        assert_almost_equal(colcorr['A'], a['A'].corr(b['A']))
-
-        rowcorr = a.corrwith(b, axis=1)
-        assert_series_equal(rowcorr, a.T.corrwith(b.T, axis=0))
-
-        dropped = a.corrwith(b, axis=0, drop=True)
-        assert_almost_equal(dropped['A'], a['A'].corr(b['A']))
-        self.assertNotIn('B', dropped)
-
-        dropped = a.corrwith(b, axis=1, drop=True)
-        self.assertNotIn(a.index[-1], dropped.index)
-
-        # non time-series data
-        index = ['a', 'b', 'c', 'd', 'e']
-        columns = ['one', 'two', 'three', 'four']
-        df1 = DataFrame(randn(5, 4), index=index, columns=columns)
-        df2 = DataFrame(randn(4, 4), index=index[:4], columns=columns)
-        correls = df1.corrwith(df2, axis=1)
-        for row in index[:4]:
-            assert_almost_equal(correls[row], df1.ix[row].corr(df2.ix[row]))
-
-    def test_corrwith_with_objects(self):
-        df1 = tm.makeTimeDataFrame()
-        df2 = tm.makeTimeDataFrame()
-        cols = ['A', 'B', 'C', 'D']
-
-        df1['obj'] = 'foo'
-        df2['obj'] = 'bar'
-
-        result = df1.corrwith(df2)
-        expected = df1.ix[:, cols].corrwith(df2.ix[:, cols])
-        assert_series_equal(result, expected)
-
-        result = df1.corrwith(df2, axis=1)
-        expected = df1.ix[:, cols].corrwith(df2.ix[:, cols], axis=1)
-        assert_series_equal(result, expected)
-
-    def test_corrwith_series(self):
-        result = self.tsframe.corrwith(self.tsframe['A'])
-        expected = self.tsframe.apply(self.tsframe['A'].corr)
-
-        assert_series_equal(result, expected)
-
-    def test_corrwith_matches_corrcoef(self):
-        df1 = DataFrame(np.arange(10000), columns=['a'])
-        df2 = DataFrame(np.arange(10000)**2, columns=['a'])
-        c1 = df1.corrwith(df2)['a']
-        c2 = np.corrcoef(df1['a'],df2['a'])[0][1]
-
-        assert_almost_equal(c1, c2)
-        self.assertTrue(c1 < 1)
-
-    def test_drop_names(self):
-        df = DataFrame([[1, 2, 3],[3, 4, 5],[5, 6, 7]], index=['a', 'b', 'c'],
-                       columns=['d', 'e', 'f'])
-        df.index.name, df.columns.name = 'first', 'second'
-        df_dropped_b = df.drop('b')
-        df_dropped_e = df.drop('e', axis=1)
-        df_inplace_b, df_inplace_e = df.copy(), df.copy()
-        df_inplace_b.drop('b', inplace=True)
-        df_inplace_e.drop('e', axis=1, inplace=True)
-        for obj in (df_dropped_b, df_dropped_e, df_inplace_b, df_inplace_e):
-            self.assertEqual(obj.index.name, 'first')
-            self.assertEqual(obj.columns.name, 'second')
-        self.assertEqual(list(df.columns), ['d', 'e', 'f'])
-
-        self.assertRaises(ValueError, df.drop, ['g'])
-        self.assertRaises(ValueError, df.drop, ['g'], 1)
-
-        # errors = 'ignore'
-        dropped = df.drop(['g'], errors='ignore')
-        expected = Index(['a', 'b', 'c'], name='first')
-        self.assert_index_equal(dropped.index, expected)
-
-        dropped = df.drop(['b', 'g'], errors='ignore')
-        expected = Index(['a', 'c'], name='first')
-        self.assert_index_equal(dropped.index, expected)
-
-        dropped = df.drop(['g'], axis=1, errors='ignore')
-        expected = Index(['d', 'e', 'f'], name='second')
-        self.assert_index_equal(dropped.columns, expected)
-
-        dropped = df.drop(['d', 'g'], axis=1, errors='ignore')
-        expected = Index(['e', 'f'], name='second')
-        self.assert_index_equal(dropped.columns, expected)
-
-    def test_dropEmptyRows(self):
-        N = len(self.frame.index)
-        mat = randn(N)
-        mat[:5] = nan
-
-        frame = DataFrame({'foo': mat}, index=self.frame.index)
-        original = Series(mat, index=self.frame.index, name='foo')
-        expected = original.dropna()
-        inplace_frame1, inplace_frame2 = frame.copy(), frame.copy()
-
-        smaller_frame = frame.dropna(how='all')
-        # check that original was preserved
-        assert_series_equal(frame['foo'], original)
-        inplace_frame1.dropna(how='all', inplace=True)
-        assert_series_equal(smaller_frame['foo'], expected)
-        assert_series_equal(inplace_frame1['foo'], expected)
-
-        smaller_frame = frame.dropna(how='all', subset=['foo'])
-        inplace_frame2.dropna(how='all', subset=['foo'], inplace=True)
-        assert_series_equal(smaller_frame['foo'], expected)
-        assert_series_equal(inplace_frame2['foo'], expected)
-
-    def test_dropIncompleteRows(self):
-        N = len(self.frame.index)
-        mat = randn(N)
-        mat[:5] = nan
-
-        frame = DataFrame({'foo': mat}, index=self.frame.index)
-        frame['bar'] = 5
-        original = Series(mat, index=self.frame.index, name='foo')
-        inp_frame1, inp_frame2 = frame.copy(), frame.copy()
-
-        smaller_frame = frame.dropna()
-        assert_series_equal(frame['foo'], original)
-        inp_frame1.dropna(inplace=True)
-        self.assert_numpy_array_equal(smaller_frame['foo'], mat[5:])
-        self.assert_numpy_array_equal(inp_frame1['foo'], mat[5:])
-
-        samesize_frame = frame.dropna(subset=['bar'])
-        assert_series_equal(frame['foo'], original)
-        self.assertTrue((frame['bar'] == 5).all())
-        inp_frame2.dropna(subset=['bar'], inplace=True)
-        self.assertTrue(samesize_frame.index.equals(self.frame.index))
-        self.assertTrue(inp_frame2.index.equals(self.frame.index))
-
-    def test_dropna(self):
-        df = DataFrame(np.random.randn(6, 4))
-        df[2][:2] = nan
-
-        dropped = df.dropna(axis=1)
-        expected = df.ix[:, [0, 1, 3]]
-        inp = df.copy()
-        inp.dropna(axis=1, inplace=True)
-        assert_frame_equal(dropped, expected)
-        assert_frame_equal(inp, expected)
-
-        dropped = df.dropna(axis=0)
-        expected = df.ix[lrange(2, 6)]
-        inp = df.copy()
-        inp.dropna(axis=0, inplace=True)
-        assert_frame_equal(dropped, expected)
-        assert_frame_equal(inp, expected)
-
-        # threshold
-        dropped = df.dropna(axis=1, thresh=5)
-        expected = df.ix[:, [0, 1, 3]]
-        inp = df.copy()
-        inp.dropna(axis=1, thresh=5, inplace=True)
-        assert_frame_equal(dropped, expected)
-        assert_frame_equal(inp, expected)
-
-        dropped = df.dropna(axis=0, thresh=4)
-        expected = df.ix[lrange(2, 6)]
-        inp = df.copy()
-        inp.dropna(axis=0, thresh=4, inplace=True)
-        assert_frame_equal(dropped, expected)
-        assert_frame_equal(inp, expected)
-
-        dropped = df.dropna(axis=1, thresh=4)
-        assert_frame_equal(dropped, df)
-
-        dropped = df.dropna(axis=1, thresh=3)
-        assert_frame_equal(dropped, df)
-
-        # subset
-        dropped = df.dropna(axis=0, subset=[0, 1, 3])
-        inp = df.copy()
-        inp.dropna(axis=0, subset=[0, 1, 3], inplace=True)
-        assert_frame_equal(dropped, df)
-        assert_frame_equal(inp, df)
-
-        # all
-        dropped = df.dropna(axis=1, how='all')
-        assert_frame_equal(dropped, df)
-
-        df[2] = nan
-        dropped = df.dropna(axis=1, how='all')
-        expected = df.ix[:, [0, 1, 3]]
-        assert_frame_equal(dropped, expected)
-
-        # bad input
-        self.assertRaises(ValueError, df.dropna, axis=3)
-
-
-    def test_drop_and_dropna_caching(self):
-        # tst that cacher updates
-        original = Series([1, 2, np.nan], name='A')
-        expected = Series([1, 2], dtype=original.dtype, name='A')
-        df = pd.DataFrame({'A': original.values.copy()})
-        df2 = df.copy()
-        df['A'].dropna()
-        assert_series_equal(df['A'], original)
-        df['A'].dropna(inplace=True)
-        assert_series_equal(df['A'], expected)
-        df2['A'].drop([1])
-        assert_series_equal(df2['A'], original)
-        df2['A'].drop([1], inplace=True)
-        assert_series_equal(df2['A'], original.drop([1]))
-
-    def test_dropna_corner(self):
-        # bad input
-        self.assertRaises(ValueError, self.frame.dropna, how='foo')
-        self.assertRaises(TypeError, self.frame.dropna, how=None)
-        # non-existent column - 8303
-        self.assertRaises(KeyError, self.frame.dropna, subset=['A','X'])
-
-    def test_dropna_multiple_axes(self):
-        df = DataFrame([[1, np.nan, 2, 3],
-                        [4, np.nan, 5, 6],
-                        [np.nan, np.nan, np.nan, np.nan],
-                        [7, np.nan, 8, 9]])
-        cp = df.copy()
-        result = df.dropna(how='all', axis=[0, 1])
-        result2 = df.dropna(how='all', axis=(0, 1))
-        expected = df.dropna(how='all').dropna(how='all', axis=1)
-
-        assert_frame_equal(result, expected)
-        assert_frame_equal(result2, expected)
-        assert_frame_equal(df, cp)
-
-        inp = df.copy()
-        inp.dropna(how='all', axis=(0, 1), inplace=True)
-        assert_frame_equal(inp, expected)
-
-    def test_drop_duplicates(self):
-        df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar',
-                                'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates('AAA')
-        expected = df[:2]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('AAA', keep='last')
-        expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('AAA', keep=False)
-        expected = df.ix[[]]
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 0)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates('AAA', take_last=True)
-        expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
-
-        # multi column
-        expected = df.ix[[0, 1, 2, 3]]
-        result = df.drop_duplicates(np.array(['AAA', 'B']))
-        assert_frame_equal(result, expected)
-        result = df.drop_duplicates(['AAA', 'B'])
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(('AAA', 'B'), keep='last')
-        expected = df.ix[[0, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(('AAA', 'B'), keep=False)
-        expected = df.ix[[0]]
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(('AAA', 'B'), take_last=True)
-        expected = df.ix[[0, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        # consider everything
-        df2 = df.ix[:, ['AAA', 'B', 'C']]
-
-        result = df2.drop_duplicates()
-        # in this case only
-        expected = df2.drop_duplicates(['AAA', 'B'])
-        assert_frame_equal(result, expected)
-
-        result = df2.drop_duplicates(keep='last')
-        expected = df2.drop_duplicates(['AAA', 'B'], keep='last')
-        assert_frame_equal(result, expected)
-
-        result = df2.drop_duplicates(keep=False)
-        expected = df2.drop_duplicates(['AAA', 'B'], keep=False)
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df2.drop_duplicates(take_last=True)
-        with tm.assert_produces_warning(FutureWarning):
-            expected = df2.drop_duplicates(['AAA', 'B'], take_last=True)
-        assert_frame_equal(result, expected)
-
-        # integers
-        result = df.drop_duplicates('C')
-        expected = df.iloc[[0,2]]
-        assert_frame_equal(result, expected)
-        result = df.drop_duplicates('C',keep='last')
-        expected = df.iloc[[-2,-1]]
-        assert_frame_equal(result, expected)
-
-        df['E'] = df['C'].astype('int8')
-        result = df.drop_duplicates('E')
-        expected = df.iloc[[0,2]]
-        assert_frame_equal(result, expected)
-        result = df.drop_duplicates('E',keep='last')
-        expected = df.iloc[[-2,-1]]
-        assert_frame_equal(result, expected)
-
-        # GH 11376
-        df = pd.DataFrame({'x': [7, 6, 3, 3, 4, 8, 0],
-                           'y': [0, 6, 5, 5, 9, 1, 2]})
-        expected = df.loc[df.index != 3]
-        assert_frame_equal(df.drop_duplicates(), expected)
-
-        df = pd.DataFrame([[1 , 0], [0, 2]])
-        assert_frame_equal(df.drop_duplicates(), df)
-
-        df = pd.DataFrame([[-2, 0], [0, -4]])
-        assert_frame_equal(df.drop_duplicates(), df)
-
-        x = np.iinfo(np.int64).max / 3 * 2
-        df = pd.DataFrame([[-x, x], [0, x + 4]])
-        assert_frame_equal(df.drop_duplicates(), df)
-
-        df = pd.DataFrame([[-x, x], [x, x + 4]])
-        assert_frame_equal(df.drop_duplicates(), df)
-
-        # GH 11864
-        df = pd.DataFrame([i] * 9 for i in range(16))
-        df = df.append([[1] + [0] * 8], ignore_index=True)
-
-        for keep in ['first', 'last', False]:
-            assert_equal(df.duplicated(keep=keep).sum(), 0)
-
-    def test_drop_duplicates_for_take_all(self):
-        df = DataFrame({'AAA': ['foo', 'bar', 'baz', 'bar',
-                                'foo', 'bar', 'qux', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates('AAA')
-        expected = df.iloc[[0, 1, 2, 6]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('AAA', keep='last')
-        expected = df.iloc[[2, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('AAA', keep=False)
-        expected = df.iloc[[2, 6]]
-        assert_frame_equal(result, expected)
-
-        # multiple columns
-        result = df.drop_duplicates(['AAA', 'B'])
-        expected = df.iloc[[0, 1, 2, 3, 4, 6]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['AAA', 'B'], keep='last')
-        expected = df.iloc[[0, 1, 2, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['AAA', 'B'], keep=False)
-        expected = df.iloc[[0, 1, 2, 6]]
-        assert_frame_equal(result, expected)
-
-    def test_drop_duplicates_deprecated_warning(self):
-        df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar',
-                                'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-        expected = df[:2]
-
-        # Raises warning
-        with tm.assert_produces_warning(False):
-            result = df.drop_duplicates(subset='AAA')
-        assert_frame_equal(result, expected)
-
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(cols='AAA')
-        assert_frame_equal(result, expected)
-
-        # Does not allow both subset and cols
-        self.assertRaises(TypeError, df.drop_duplicates,
-                          kwargs={'cols': 'AAA', 'subset': 'B'})
-
-        # Does not allow unknown kwargs
-        self.assertRaises(TypeError, df.drop_duplicates,
-                          kwargs={'subset': 'AAA', 'bad_arg': True})
-
-        # deprecate take_last
-        # Raises warning
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(take_last=False, subset='AAA')
-        assert_frame_equal(result, expected)
-
-        self.assertRaises(ValueError, df.drop_duplicates, keep='invalid_name')
-
-    def test_drop_duplicates_tuple(self):
-        df = DataFrame({('AA', 'AB'): ['foo', 'bar', 'foo', 'bar',
-                                       'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates(('AA', 'AB'))
-        expected = df[:2]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(('AA', 'AB'), keep='last')
-        expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(('AA', 'AB'), keep=False)
-        expected = df.ix[[]]  # empty df
-        self.assertEqual(len(result), 0)
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(('AA', 'AB'), take_last=True)
-        expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
-
-        # multi column
-        expected = df.ix[[0, 1, 2, 3]]
-        result = df.drop_duplicates((('AA', 'AB'), 'B'))
-        assert_frame_equal(result, expected)
-
-    def test_drop_duplicates_NA(self):
-        # none
-        df = DataFrame({'A': [None, None, 'foo', 'bar',
-                              'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates('A')
-        expected = df.ix[[0, 2, 3]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('A', keep='last')
-        expected = df.ix[[1, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('A', keep=False)
-        expected = df.ix[[]]  # empty df
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 0)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates('A', take_last=True)
-        expected = df.ix[[1, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        # multi column
-        result = df.drop_duplicates(['A', 'B'])
-        expected = df.ix[[0, 2, 3, 6]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['A', 'B'], keep='last')
-        expected = df.ix[[1, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['A', 'B'], keep=False)
-        expected = df.ix[[6]]
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(['A', 'B'], take_last=True)
-        expected = df.ix[[1, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        # nan
-        df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
-                              'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates('C')
-        expected = df[:2]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('C', keep='last')
-        expected = df.ix[[3, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('C', keep=False)
-        expected = df.ix[[]]  # empty df
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 0)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates('C', take_last=True)
-        expected = df.ix[[3, 7]]
-        assert_frame_equal(result, expected)
-
-        # multi column
-        result = df.drop_duplicates(['C', 'B'])
-        expected = df.ix[[0, 1, 2, 4]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['C', 'B'], keep='last')
-        expected = df.ix[[1, 3, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['C', 'B'], keep=False)
-        expected = df.ix[[1]]
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(['C', 'B'], take_last=True)
-        expected = df.ix[[1, 3, 6, 7]]
-        assert_frame_equal(result, expected)
-
-    def test_drop_duplicates_NA_for_take_all(self):
-        # none
-        df = DataFrame({'A': [None, None, 'foo', 'bar',
-                              'foo', 'baz', 'bar', 'qux'],
-                        'C': [1.0, np.nan, np.nan, np.nan, 1., 2., 3, 1.]})
-
-        # single column
-        result = df.drop_duplicates('A')
-        expected = df.iloc[[0, 2, 3, 5, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('A', keep='last')
-        expected = df.iloc[[1, 4, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('A', keep=False)
-        expected = df.iloc[[5, 7]]
-        assert_frame_equal(result, expected)
-
-        # nan
-
-        # single column
-        result = df.drop_duplicates('C')
-        expected = df.iloc[[0, 1, 5, 6]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('C', keep='last')
-        expected = df.iloc[[3, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('C', keep=False)
-        expected = df.iloc[[5, 6]]
-        assert_frame_equal(result, expected)
-
-    def test_drop_duplicates_inplace(self):
-        orig = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
-                                'foo', 'bar', 'bar', 'foo'],
-                          'B': ['one', 'one', 'two', 'two',
-                                'two', 'two', 'one', 'two'],
-                          'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                          'D': lrange(8)})
-
-        # single column
-        df = orig.copy()
-        df.drop_duplicates('A', inplace=True)
-        expected = orig[:2]
-        result = df
-        assert_frame_equal(result, expected)
-
-        df = orig.copy()
-        df.drop_duplicates('A', keep='last', inplace=True)
-        expected = orig.ix[[6, 7]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        df = orig.copy()
-        df.drop_duplicates('A', keep=False, inplace=True)
-        expected = orig.ix[[]]
-        result = df
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(df), 0)
-
-        # deprecate take_last
-        df = orig.copy()
-        with tm.assert_produces_warning(FutureWarning):
-            df.drop_duplicates('A', take_last=True, inplace=True)
-        expected = orig.ix[[6, 7]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        # multi column
-        df = orig.copy()
-        df.drop_duplicates(['A', 'B'], inplace=True)
-        expected = orig.ix[[0, 1, 2, 3]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        df = orig.copy()
-        df.drop_duplicates(['A', 'B'], keep='last', inplace=True)
-        expected = orig.ix[[0, 5, 6, 7]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        df = orig.copy()
-        df.drop_duplicates(['A', 'B'], keep=False, inplace=True)
-        expected = orig.ix[[0]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        df = orig.copy()
-        with tm.assert_produces_warning(FutureWarning):
-            df.drop_duplicates(['A', 'B'], take_last=True, inplace=True)
-        expected = orig.ix[[0, 5, 6, 7]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        # consider everything
-        orig2 = orig.ix[:, ['A', 'B', 'C']].copy()
-
-        df2 = orig2.copy()
-        df2.drop_duplicates(inplace=True)
-        # in this case only
-        expected = orig2.drop_duplicates(['A', 'B'])
-        result = df2
-        assert_frame_equal(result, expected)
-
-        df2 = orig2.copy()
-        df2.drop_duplicates(keep='last', inplace=True)
-        expected = orig2.drop_duplicates(['A', 'B'], keep='last')
-        result = df2
-        assert_frame_equal(result, expected)
-
-        df2 = orig2.copy()
-        df2.drop_duplicates(keep=False, inplace=True)
-        expected = orig2.drop_duplicates(['A', 'B'], keep=False)
-        result = df2
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        df2 = orig2.copy()
-        with tm.assert_produces_warning(FutureWarning):
-            df2.drop_duplicates(take_last=True, inplace=True)
-        with tm.assert_produces_warning(FutureWarning):
-            expected = orig2.drop_duplicates(['A', 'B'], take_last=True)
-        result = df2
-        assert_frame_equal(result, expected)
-
-    def test_duplicated_deprecated_warning(self):
-        df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar',
-                                'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-
-        # Raises warning
-        with tm.assert_produces_warning(False):
-            result = df.duplicated(subset='AAA')
-
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.duplicated(cols='AAA')
-
-        # Does not allow both subset and cols
-        self.assertRaises(TypeError, df.duplicated,
-                          kwargs={'cols': 'AAA', 'subset': 'B'})
-
-        # Does not allow unknown kwargs
-        self.assertRaises(TypeError, df.duplicated,
-                          kwargs={'subset': 'AAA', 'bad_arg': True})
-
-    def test_drop_col_still_multiindex(self):
-        arrays = [['a', 'b', 'c', 'top'],
-                  ['', '', '', 'OD'],
-                  ['', '', '', 'wx']]
-
-        tuples = sorted(zip(*arrays))
-        index = MultiIndex.from_tuples(tuples)
-
-        df = DataFrame(randn(3, 4), columns=index)
-        del df[('a', '', '')]
-        assert(isinstance(df.columns, MultiIndex))
-
-    def test_drop(self):
-        simple = DataFrame({"A": [1, 2, 3, 4], "B": [0, 1, 2, 3]})
-        assert_frame_equal(simple.drop("A", axis=1), simple[['B']])
-        assert_frame_equal(simple.drop(["A", "B"], axis='columns'),
-                           simple[[]])
-        assert_frame_equal(simple.drop([0, 1, 3], axis=0), simple.ix[[2], :])
-        assert_frame_equal(simple.drop([0, 3], axis='index'), simple.ix[[1, 2], :])
-
-        self.assertRaises(ValueError, simple.drop, 5)
-        self.assertRaises(ValueError, simple.drop, 'C', 1)
-        self.assertRaises(ValueError, simple.drop, [1, 5])
-        self.assertRaises(ValueError, simple.drop, ['A', 'C'], 1)
-
-        # errors = 'ignore'
-        assert_frame_equal(simple.drop(5, errors='ignore'), simple)
-        assert_frame_equal(simple.drop([0, 5], errors='ignore'),
-                           simple.ix[[1, 2, 3], :])
-        assert_frame_equal(simple.drop('C', axis=1, errors='ignore'), simple)
-        assert_frame_equal(simple.drop(['A', 'C'], axis=1, errors='ignore'),
-                           simple[['B']])
-
-        #non-unique - wheee!
-        nu_df = DataFrame(lzip(range(3), range(-3, 1), list('abc')),
-                          columns=['a', 'a', 'b'])
-        assert_frame_equal(nu_df.drop('a', axis=1), nu_df[['b']])
-        assert_frame_equal(nu_df.drop('b', axis='columns'), nu_df['a'])
-
-        nu_df = nu_df.set_index(pd.Index(['X', 'Y', 'X']))
-        nu_df.columns = list('abc')
-        assert_frame_equal(nu_df.drop('X', axis='rows'), nu_df.ix[["Y"], :])
-        assert_frame_equal(nu_df.drop(['X', 'Y'], axis=0), nu_df.ix[[], :])
-
-        # inplace cache issue
-        # GH 5628
-        df = pd.DataFrame(np.random.randn(10,3), columns=list('abc'))
-        expected = df[~(df.b>0)]
-        df.drop(labels=df[df.b>0].index, inplace=True)
-        assert_frame_equal(df,expected)
-
-    def test_fillna(self):
-        self.tsframe.ix[:5,'A'] = nan
-        self.tsframe.ix[-5:,'A'] = nan
-
-        zero_filled = self.tsframe.fillna(0)
-        self.assertTrue((zero_filled.ix[:5,'A'] == 0).all())
-
-        padded = self.tsframe.fillna(method='pad')
-        self.assertTrue(np.isnan(padded.ix[:5,'A']).all())
-        self.assertTrue((padded.ix[-5:,'A'] == padded.ix[-5,'A']).all())
-
-        # mixed type
-        self.mixed_frame.ix[5:20,'foo'] = nan
-        self.mixed_frame.ix[-10:,'A'] = nan
-        result = self.mixed_frame.fillna(value=0)
-        result = self.mixed_frame.fillna(method='pad')
-
-        self.assertRaises(ValueError, self.tsframe.fillna)
-        self.assertRaises(ValueError, self.tsframe.fillna, 5, method='ffill')
-
-        # mixed numeric (but no float16)
-        mf = self.mixed_float.reindex(columns=['A','B','D'])
-        mf.ix[-10:,'A'] = nan
-        result = mf.fillna(value=0)
-        _check_mixed_float(result, dtype = dict(C = None))
-
-        result = mf.fillna(method='pad')
-        _check_mixed_float(result, dtype = dict(C = None))
-
-        # empty frame (GH #2778)
-        df = DataFrame(columns=['x'])
-        for m in ['pad','backfill']:
-            df.x.fillna(method=m,inplace=1)
-            df.x.fillna(method=m)
-
-        # with different dtype (GH3386)
-        df = DataFrame([['a','a',np.nan,'a'],['b','b',np.nan,'b'],['c','c',np.nan,'c']])
-
-        result = df.fillna({ 2: 'foo' })
-        expected = DataFrame([['a','a','foo','a'],['b','b','foo','b'],['c','c','foo','c']])
-        assert_frame_equal(result, expected)
-
-        df.fillna({ 2: 'foo' }, inplace=True)
-        assert_frame_equal(df, expected)
-
-        # limit and value
-        df = DataFrame(np.random.randn(10,3))
-        df.iloc[2:7,0] = np.nan
-        df.iloc[3:5,2] = np.nan
-
-        expected = df.copy()
-        expected.iloc[2,0] = 999
-        expected.iloc[3,2] = 999
-        result = df.fillna(999,limit=1)
-        assert_frame_equal(result, expected)
-
-        # with datelike
-        # GH 6344
-        df = DataFrame({
-            'Date':[pd.NaT, Timestamp("2014-1-1")],
-            'Date2':[ Timestamp("2013-1-1"), pd.NaT]
-            })
-
-        expected = df.copy()
-        expected['Date'] = expected['Date'].fillna(df.ix[0,'Date2'])
-        result = df.fillna(value={'Date':df['Date2']})
-        assert_frame_equal(result, expected)
-
-    def test_fillna_dtype_conversion(self):
-        # make sure that fillna on an empty frame works
-        df = DataFrame(index=["A","B","C"], columns = [1,2,3,4,5])
-        result = df.get_dtype_counts().sort_values()
-        expected = Series({ 'object' : 5 })
-        assert_series_equal(result, expected)
-
-        result = df.fillna(1)
-        expected = DataFrame(1, index=["A","B","C"], columns = [1,2,3,4,5])
-        result = result.get_dtype_counts().sort_values()
-        expected = Series({ 'int64' : 5 })
-        assert_series_equal(result, expected)
-
-        # empty block
-        df = DataFrame(index=lrange(3),columns=['A','B'],dtype='float64')
-        result = df.fillna('nan')
-        expected = DataFrame('nan',index=lrange(3),columns=['A','B'])
-        assert_frame_equal(result, expected)
-
-        # equiv of replace
-        df = DataFrame(dict(A = [1,np.nan], B = [1.,2.]))
-        for v in ['',1,np.nan,1.0]:
-            expected = df.replace(np.nan,v)
-            result = df.fillna(v)
-            assert_frame_equal(result, expected)
-
-    def test_fillna_datetime_columns(self):
-        # GH 7095
-        df = pd.DataFrame({'A': [-1, -2, np.nan],
-                           'B': date_range('20130101', periods=3),
-                           'C': ['foo', 'bar', None],
-                           'D': ['foo2', 'bar2', None]},
-                          index=date_range('20130110', periods=3))
-        result = df.fillna('?')
-        expected = pd.DataFrame({'A': [-1, -2, '?'],
-                                 'B': date_range('20130101', periods=3),
-                                 'C': ['foo', 'bar', '?'],
-                                 'D': ['foo2', 'bar2', '?']},
-                                index=date_range('20130110', periods=3))
-        self.assert_frame_equal(result, expected)
-
-        df = pd.DataFrame({'A': [-1, -2, np.nan],
-                           'B': [pd.Timestamp('2013-01-01'), pd.Timestamp('2013-01-02'), pd.NaT],
-                           'C': ['foo', 'bar', None],
-                           'D': ['foo2', 'bar2', None]},
-                          index=date_range('20130110', periods=3))
-        result = df.fillna('?')
-        expected = pd.DataFrame({'A': [-1, -2, '?'],
-                                 'B': [pd.Timestamp('2013-01-01'), pd.Timestamp('2013-01-02'), '?'],
-                                 'C': ['foo', 'bar', '?'],
-                                 'D': ['foo2', 'bar2', '?']},
-                                index=date_range('20130110', periods=3))
-        self.assert_frame_equal(result, expected)
-
-    def test_ffill(self):
-        self.tsframe['A'][:5] = nan
-        self.tsframe['A'][-5:] = nan
-
-        assert_frame_equal(self.tsframe.ffill(),
-                           self.tsframe.fillna(method='ffill'))
-
-    def test_bfill(self):
-        self.tsframe['A'][:5] = nan
-        self.tsframe['A'][-5:] = nan
-
-        assert_frame_equal(self.tsframe.bfill(),
-                           self.tsframe.fillna(method='bfill'))
-
-    def test_fillna_skip_certain_blocks(self):
-        # don't try to fill boolean, int blocks
-
-        df = DataFrame(np.random.randn(10, 4).astype(int))
-
-        # it works!
-        df.fillna(np.nan)
-
-    def test_fillna_inplace(self):
-        df = DataFrame(np.random.randn(10, 4))
-        df[1][:4] = np.nan
-        df[3][-4:] = np.nan
-
-        expected = df.fillna(value=0)
-        self.assertIsNot(expected, df)
-
-        df.fillna(value=0, inplace=True)
-        assert_frame_equal(df, expected)
-
-        df[1][:4] = np.nan
-        df[3][-4:] = np.nan
-        expected = df.fillna(method='ffill')
-        self.assertIsNot(expected, df)
-
-        df.fillna(method='ffill', inplace=True)
-        assert_frame_equal(df, expected)
-
-    def test_fillna_dict_series(self):
-        df = DataFrame({'a': [nan, 1, 2, nan, nan],
-                        'b': [1, 2, 3, nan, nan],
-                        'c': [nan, 1, 2, 3, 4]})
-
-        result = df.fillna({'a': 0, 'b': 5})
-
-        expected = df.copy()
-        expected['a'] = expected['a'].fillna(0)
-        expected['b'] = expected['b'].fillna(5)
-        assert_frame_equal(result, expected)
-
-        # it works
-        result = df.fillna({'a': 0, 'b': 5, 'd': 7})
-
-        # Series treated same as dict
-        result = df.fillna(df.max())
-        expected = df.fillna(df.max().to_dict())
-        assert_frame_equal(result, expected)
-
-        # disable this for now
-        with assertRaisesRegexp(NotImplementedError, 'column by column'):
-            df.fillna(df.max(1), axis=1)
-
-    def test_fillna_dataframe(self):
-        # GH 8377
-        df = DataFrame({'a': [nan, 1, 2, nan, nan],
-                        'b': [1, 2, 3, nan, nan],
-                        'c': [nan, 1, 2, 3, 4]},
-                       index = list('VWXYZ'))
-
-        # df2 may have different index and columns
-        df2 = DataFrame({'a': [nan, 10, 20, 30, 40],
-                         'b': [50, 60, 70, 80, 90],
-                         'foo': ['bar']*5},
-                        index = list('VWXuZ'))
-
-        result = df.fillna(df2)
-
-        # only those columns and indices which are shared get filled
-        expected = DataFrame({'a': [nan, 1, 2, nan, 40],
-                              'b': [1, 2, 3, nan, 90],
-                              'c': [nan, 1, 2, 3, 4]},
-                             index = list('VWXYZ'))
-
-        assert_frame_equal(result, expected)
-
-    def test_fillna_columns(self):
-        df = DataFrame(np.random.randn(10, 10))
-        df.values[:, ::2] = np.nan
-
-        result = df.fillna(method='ffill', axis=1)
-        expected = df.T.fillna(method='pad').T
-        assert_frame_equal(result, expected)
-
-        df.insert(6, 'foo', 5)
-        result = df.fillna(method='ffill', axis=1)
-        expected = df.astype(float).fillna(method='ffill', axis=1)
-        assert_frame_equal(result, expected)
-
-
-    def test_fillna_invalid_method(self):
-        with assertRaisesRegexp(ValueError, 'ffil'):
-            self.frame.fillna(method='ffil')
-
-    def test_fillna_invalid_value(self):
-        # list
-        self.assertRaises(TypeError, self.frame.fillna, [1, 2])
-        # tuple
-        self.assertRaises(TypeError, self.frame.fillna, (1, 2))
-        # frame with series
-        self.assertRaises(ValueError, self.frame.iloc[:,0].fillna, self.frame)
-
-    def test_replace_inplace(self):
-        self.tsframe['A'][:5] = nan
-        self.tsframe['A'][-5:] = nan
-
-        tsframe = self.tsframe.copy()
-        tsframe.replace(nan, 0, inplace=True)
-        assert_frame_equal(tsframe, self.tsframe.fillna(0))
-
-        self.assertRaises(TypeError, self.tsframe.replace, nan, inplace=True)
-        self.assertRaises(TypeError, self.tsframe.replace, nan)
-
-        # mixed type
-        self.mixed_frame.ix[5:20,'foo'] = nan
-        self.mixed_frame.ix[-10:,'A'] = nan
-
-        result = self.mixed_frame.replace(np.nan, 0)
-        expected = self.mixed_frame.fillna(value=0)
-        assert_frame_equal(result, expected)
-
-        tsframe = self.tsframe.copy()
-        tsframe.replace([nan], [0], inplace=True)
-        assert_frame_equal(tsframe, self.tsframe.fillna(0))
-
-    def test_regex_replace_scalar(self):
-        obj = {'a': list('ab..'), 'b': list('efgh')}
-        dfobj = DataFrame(obj)
-        mix = {'a': lrange(4), 'b': list('ab..')}
-        dfmix = DataFrame(mix)
-
-        ### simplest cases
-        ## regex -> value
-        # obj frame
-        res = dfobj.replace(r'\s*\.\s*', nan, regex=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.replace(r'\s*\.\s*', nan, regex=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True)
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True)
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        # everything with compiled regexs as well
-        res = dfobj.replace(re.compile(r'\s*\.\s*'), nan, regex=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.replace(re.compile(r'\s*\.\s*'), nan, regex=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1')
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1')
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        res = dfmix.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1')
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        res = dfmix.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1')
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-    def test_regex_replace_scalar_inplace(self):
-        obj = {'a': list('ab..'), 'b': list('efgh')}
-        dfobj = DataFrame(obj)
-        mix = {'a': lrange(4), 'b': list('ab..')}
-        dfmix = DataFrame(mix)
-
-        ### simplest cases
-        ## regex -> value
-        # obj frame
-        res = dfobj.copy()
-        res.replace(r'\s*\.\s*', nan, regex=True, inplace=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.copy()
-        res.replace(r'\s*\.\s*', nan, regex=True, inplace=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.copy()
-        res.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True, inplace=True)
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.copy()
-        res.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True, inplace=True)
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        # everything with compiled regexs as well
-        res = dfobj.copy()
-        res.replace(re.compile(r'\s*\.\s*'), nan, regex=True, inplace=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.copy()
-        res.replace(re.compile(r'\s*\.\s*'), nan, regex=True, inplace=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.copy()
-        res.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1', regex=True,
-                    inplace=True)
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.copy()
-        res.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1', regex=True,
-                    inplace=True)
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        res = dfobj.copy()
-        res.replace(regex=r'\s*\.\s*', value=nan, inplace=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.copy()
-        res.replace(regex=r'\s*\.\s*', value=nan, inplace=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.copy()
-        res.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1', inplace=True)
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.copy()
-        res.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1', inplace=True)
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        # everything with compiled regexs as well
-        res = dfobj.copy()
-        res.replace(regex=re.compile(r'\s*\.\s*'), value=nan, inplace=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.copy()
-        res.replace(regex=re.compile(r'\s*\.\s*'), value=nan,
inplace=True) - assert_frame_equal(dfmix, res.fillna('.')) - - ## regex -> regex - # obj frame - res = dfobj.copy() - res.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1', - inplace=True) - objc = obj.copy() - objc['a'] = ['a', 'b', '...', '...'] - expec = DataFrame(objc) - assert_frame_equal(res, expec) - - # with mixed - res = dfmix.copy() - res.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1', - inplace=True) - mixc = mix.copy() - mixc['b'] = ['a', 'b', '...', '...'] - expec = DataFrame(mixc) - assert_frame_equal(res, expec) - - def test_regex_replace_list_obj(self): - obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} - dfobj = DataFrame(obj) - - ## lists of regexes and values - # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] - to_replace_res = [r'\s*\.\s*', r'e|f|g'] - values = [nan, 'crap'] - res = dfobj.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': ['a', 'b', nan, nan], 'b': ['crap'] * 3 + - ['h'], 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] - to_replace_res = [r'\s*(\.)\s*', r'(e|f|g)'] - values = [r'\1\1', r'\1_crap'] - res = dfobj.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e_crap', - 'f_crap', - 'g_crap', 'h'], - 'c': ['h', 'e_crap', 'l', 'o']}) - - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN - # or vN)] - to_replace_res = [r'\s*(\.)\s*', r'e'] - values = [r'\1\1', r'crap'] - res = dfobj.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', - 'h'], - 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - to_replace_res = [r'\s*(\.)\s*', r'e'] - values = [r'\1\1', r'crap'] - res = dfobj.replace(value=values, regex=to_replace_res) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', - 'h'], - 'c': ['h', 'crap', 
'l', 'o']}) - assert_frame_equal(res, expec) - - def test_regex_replace_list_obj_inplace(self): - ### same as above with inplace=True - ## lists of regexes and values - obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} - dfobj = DataFrame(obj) - - ## lists of regexes and values - # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] - to_replace_res = [r'\s*\.\s*', r'e|f|g'] - values = [nan, 'crap'] - res = dfobj.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': ['a', 'b', nan, nan], 'b': ['crap'] * 3 + - ['h'], 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] - to_replace_res = [r'\s*(\.)\s*', r'(e|f|g)'] - values = [r'\1\1', r'\1_crap'] - res = dfobj.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e_crap', - 'f_crap', - 'g_crap', 'h'], - 'c': ['h', 'e_crap', 'l', 'o']}) - - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN - # or vN)] - to_replace_res = [r'\s*(\.)\s*', r'e'] - values = [r'\1\1', r'crap'] - res = dfobj.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', - 'h'], - 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - to_replace_res = [r'\s*(\.)\s*', r'e'] - values = [r'\1\1', r'crap'] - res = dfobj.copy() - res.replace(value=values, regex=to_replace_res, inplace=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', - 'h'], - 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - def test_regex_replace_list_mixed(self): - ## mixed frame to make sure this doesn't break things - mix = {'a': lrange(4), 'b': list('ab..')} - dfmix = DataFrame(mix) - - ## lists of regexes and values - # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] - 
to_replace_res = [r'\s*\.\s*', r'a'] - values = [nan, 'crap'] - mix2 = {'a': lrange(4), 'b': list('ab..'), 'c': list('halo')} - dfmix2 = DataFrame(mix2) - res = dfmix2.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': mix2['a'], 'b': ['crap', 'b', nan, nan], - 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] - to_replace_res = [r'\s*(\.)\s*', r'(a|b)'] - values = [r'\1\1', r'\1_crap'] - res = dfmix.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['a_crap', 'b_crap', '..', - '..']}) - - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN - # or vN)] - to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] - values = [r'\1\1', r'crap', r'\1_crap'] - res = dfmix.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) - assert_frame_equal(res, expec) - - to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] - values = [r'\1\1', r'crap', r'\1_crap'] - res = dfmix.replace(regex=to_replace_res, value=values) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) - assert_frame_equal(res, expec) - - def test_regex_replace_list_mixed_inplace(self): - mix = {'a': lrange(4), 'b': list('ab..')} - dfmix = DataFrame(mix) - # the same inplace - ## lists of regexes and values - # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] - to_replace_res = [r'\s*\.\s*', r'a'] - values = [nan, 'crap'] - res = dfmix.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b', nan, nan]}) - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] - to_replace_res = [r'\s*(\.)\s*', r'(a|b)'] - values = [r'\1\1', r'\1_crap'] - res = dfmix.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': mix['a'], 'b': 
['a_crap', 'b_crap', '..', - '..']}) - - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN - # or vN)] - to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] - values = [r'\1\1', r'crap', r'\1_crap'] - res = dfmix.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) - assert_frame_equal(res, expec) - - to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] - values = [r'\1\1', r'crap', r'\1_crap'] - res = dfmix.copy() - res.replace(regex=to_replace_res, value=values, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) - assert_frame_equal(res, expec) - - def test_regex_replace_dict_mixed(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - dfmix = DataFrame(mix) - - ## dicts - # single dict {re1: v1}, search the whole frame - # need test for this... - - # list of dicts {re1: v1, re2: v2, ..., re3: v3}, search the whole - # frame - res = dfmix.replace({'b': r'\s*\.\s*'}, {'b': nan}, regex=True) - res2 = dfmix.copy() - res2.replace({'b': r'\s*\.\s*'}, {'b': nan}, inplace=True, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - # list of dicts {re1: re11, re2: re12, ..., reN: re1N}, search the - # whole frame - res = dfmix.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, regex=True) - res2 = dfmix.copy() - res2.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, inplace=True, - regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', '.ty', '.ty'], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - res = dfmix.replace(regex={'b': r'\s*(\.)\s*'}, value={'b': r'\1ty'}) - res2 = dfmix.copy() - res2.replace(regex={'b': r'\s*(\.)\s*'}, value={'b': r'\1ty'}, - inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', '.ty', 
'.ty'], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - # scalar -> dict - # to_replace regex, {value: value} - expec = DataFrame({'a': mix['a'], 'b': [nan, 'b', '.', '.'], 'c': - mix['c']}) - res = dfmix.replace('a', {'b': nan}, regex=True) - res2 = dfmix.copy() - res2.replace('a', {'b': nan}, regex=True, inplace=True) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - res = dfmix.replace('a', {'b': nan}, regex=True) - res2 = dfmix.copy() - res2.replace(regex='a', value={'b': nan}, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': [nan, 'b', '.', '.'], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - def test_regex_replace_dict_nested(self): - # nested dicts will not work until this is implemented for Series - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - dfmix = DataFrame(mix) - res = dfmix.replace({'b': {r'\s*\.\s*': nan}}, regex=True) - res2 = dfmix.copy() - res4 = dfmix.copy() - res2.replace({'b': {r'\s*\.\s*': nan}}, inplace=True, regex=True) - res3 = dfmix.replace(regex={'b': {r'\s*\.\s*': nan}}) - res4.replace(regex={'b': {r'\s*\.\s*': nan}}, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - assert_frame_equal(res4, expec) - - def test_regex_replace_dict_nested_gh4115(self): - df = pd.DataFrame({'Type':['Q','T','Q','Q','T'], 'tmp':2}) - expected = DataFrame({'Type': [0,1,0,0,1], 'tmp': 2}) - result = df.replace({'Type': {'Q':0,'T':1}}) - assert_frame_equal(result, expected) - - def test_regex_replace_list_to_scalar(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - expec = DataFrame({'a': mix['a'], 'b': np.array([nan] * 4), - 'c': [nan, nan, nan, 'd']}) - - res = df.replace([r'\s*\.\s*', 'a|b'], nan, regex=True) - res2 = df.copy() - 
res3 = df.copy() - res2.replace([r'\s*\.\s*', 'a|b'], nan, regex=True, inplace=True) - res3.replace(regex=[r'\s*\.\s*', 'a|b'], value=nan, inplace=True) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - - def test_regex_replace_str_to_numeric(self): - # what happens when you try to replace a numeric value with a regex? - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - res = df.replace(r'\s*\.\s*', 0, regex=True) - res2 = df.copy() - res2.replace(r'\s*\.\s*', 0, inplace=True, regex=True) - res3 = df.copy() - res3.replace(regex=r'\s*\.\s*', value=0, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', 0, 0], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - - def test_regex_replace_regex_list_to_numeric(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - res = df.replace([r'\s*\.\s*', 'b'], 0, regex=True) - res2 = df.copy() - res2.replace([r'\s*\.\s*', 'b'], 0, regex=True, inplace=True) - res3 = df.copy() - res3.replace(regex=[r'\s*\.\s*', 'b'], value=0, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 0, 0, 0], 'c': ['a', 0, - nan, - 'd']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - - def test_regex_replace_series_of_regexes(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - s1 = Series({'b': r'\s*\.\s*'}) - s2 = Series({'b': nan}) - res = df.replace(s1, s2, regex=True) - res2 = df.copy() - res2.replace(s1, s2, inplace=True, regex=True) - res3 = df.copy() - res3.replace(regex=s1, value=s2, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - - def 
test_regex_replace_numeric_to_object_conversion(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - expec = DataFrame({'a': ['a', 1, 2, 3], 'b': mix['b'], 'c': mix['c']}) - res = df.replace(0, 'a') - assert_frame_equal(res, expec) - self.assertEqual(res.a.dtype, np.object_) - - def test_replace_regex_metachar(self): - metachars = '[]', '()', '\d', '\w', '\s' - - for metachar in metachars: - df = DataFrame({'a': [metachar, 'else']}) - result = df.replace({'a': {metachar: 'paren'}}) - expected = DataFrame({'a': ['paren', 'else']}) - assert_frame_equal(result, expected) - - def test_replace(self): - self.tsframe['A'][:5] = nan - self.tsframe['A'][-5:] = nan - - zero_filled = self.tsframe.replace(nan, -1e8) - assert_frame_equal(zero_filled, self.tsframe.fillna(-1e8)) - assert_frame_equal(zero_filled.replace(-1e8, nan), self.tsframe) - - self.tsframe['A'][:5] = nan - self.tsframe['A'][-5:] = nan - self.tsframe['B'][:5] = -1e8 - - # empty - df = DataFrame(index=['a', 'b']) - assert_frame_equal(df, df.replace(5, 7)) - - # GH 11698 - # test for mixed data types. 
- df = pd.DataFrame([('-', pd.to_datetime('20150101')), ('a', pd.to_datetime('20150102'))]) - df1 = df.replace('-', np.nan) - expected_df = pd.DataFrame([(np.nan, pd.to_datetime('20150101')), ('a', pd.to_datetime('20150102'))]) - assert_frame_equal(df1, expected_df) - - def test_replace_list(self): - obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} - dfobj = DataFrame(obj) - - ## lists of regexes and values - # list of [v1, v2, ..., vN] -> [v1, v2, ..., vN] - to_replace_res = [r'.', r'e'] - values = [nan, 'crap'] - res = dfobj.replace(to_replace_res, values) - expec = DataFrame({'a': ['a', 'b', nan, nan], - 'b': ['crap', 'f', 'g', 'h'], 'c': ['h', 'crap', - 'l', 'o']}) - assert_frame_equal(res, expec) - - # list of [v1, v2, ..., vN] -> [v1, v2, .., vN] - to_replace_res = [r'.', r'f'] - values = [r'..', r'crap'] - res = dfobj.replace(to_replace_res, values) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e', 'crap', 'g', - 'h'], - 'c': ['h', 'e', 'l', 'o']}) - - assert_frame_equal(res, expec) - - def test_replace_series_dict(self): - # from GH 3064 - df = DataFrame({'zero': {'a': 0.0, 'b': 1}, 'one': {'a': 2.0, 'b': 0}}) - result = df.replace(0, {'zero': 0.5, 'one': 1.0}) - expected = DataFrame({'zero': {'a': 0.5, 'b': 1}, 'one': {'a': 2.0, 'b': 1.0}}) - assert_frame_equal(result, expected) - - result = df.replace(0, df.mean()) - assert_frame_equal(result, expected) - - # series to series/dict - df = DataFrame({'zero': {'a': 0.0, 'b': 1}, 'one': {'a': 2.0, 'b': 0}}) - s = Series({'zero': 0.0, 'one': 2.0}) - result = df.replace(s, {'zero': 0.5, 'one': 1.0}) - expected = DataFrame({'zero': {'a': 0.5, 'b': 1}, 'one': {'a': 1.0, 'b': 0.0}}) - assert_frame_equal(result, expected) - - result = df.replace(s, df.mean()) - assert_frame_equal(result, expected) - - def test_replace_convert(self): - # gh 3907 - df = DataFrame([['foo', 'bar', 'bah'], ['bar', 'foo', 'bah']]) - m = {'foo': 1, 'bar': 2, 'bah': 3} - rep = df.replace(m) - expec = Series([ 
np.int64] * 3) - res = rep.dtypes - assert_series_equal(expec, res) - - def test_replace_mixed(self): - self.mixed_frame.ix[5:20,'foo'] = nan - self.mixed_frame.ix[-10:,'A'] = nan - - result = self.mixed_frame.replace(np.nan, -18) - expected = self.mixed_frame.fillna(value=-18) - assert_frame_equal(result, expected) - assert_frame_equal(result.replace(-18, nan), self.mixed_frame) - - result = self.mixed_frame.replace(np.nan, -1e8) - expected = self.mixed_frame.fillna(value=-1e8) - assert_frame_equal(result, expected) - assert_frame_equal(result.replace(-1e8, nan), self.mixed_frame) - - # int block upcasting - df = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0,1],dtype='int64') }) - expected = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0.5,1],dtype='float64') }) - result = df.replace(0, 0.5) - assert_frame_equal(result,expected) - - df.replace(0, 0.5, inplace=True) - assert_frame_equal(df,expected) - - # int block splitting - df = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0,1],dtype='int64'), 'C' : Series([1,2],dtype='int64') }) - expected = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0.5,1],dtype='float64'), 'C' : Series([1,2],dtype='int64') }) - result = df.replace(0, 0.5) - assert_frame_equal(result,expected) - - # to object block upcasting - df = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0,1],dtype='int64') }) - expected = DataFrame({ 'A' : Series([1,'foo'],dtype='object'), 'B' : Series([0,1],dtype='int64') }) - result = df.replace(2, 'foo') - assert_frame_equal(result,expected) - - expected = DataFrame({ 'A' : Series(['foo','bar'],dtype='object'), 'B' : Series([0,'foo'],dtype='object') }) - result = df.replace([1,2], ['foo','bar']) - assert_frame_equal(result,expected) - - # test case from - df = DataFrame({'A' : Series([3,0],dtype='int64'), 'B' : Series([0,3],dtype='int64') }) - result = df.replace(3, df.mean().to_dict()) - expected = 
df.copy().astype('float64') - m = df.mean() - expected.iloc[0,0] = m[0] - expected.iloc[1,1] = m[1] - assert_frame_equal(result,expected) - - def test_replace_simple_nested_dict(self): - df = DataFrame({'col': range(1, 5)}) - expected = DataFrame({'col': ['a', 2, 3, 'b']}) - - result = df.replace({'col': {1: 'a', 4: 'b'}}) - assert_frame_equal(expected, result) - - # in this case, should be the same as the not nested version - result = df.replace({1: 'a', 4: 'b'}) - assert_frame_equal(expected, result) - - def test_replace_simple_nested_dict_with_nonexistent_value(self): - df = DataFrame({'col': range(1, 5)}) - expected = DataFrame({'col': ['a', 2, 3, 'b']}) - - result = df.replace({-1: '-', 1: 'a', 4: 'b'}) - assert_frame_equal(expected, result) - - result = df.replace({'col': {-1: '-', 1: 'a', 4: 'b'}}) - assert_frame_equal(expected, result) - - def test_interpolate(self): - pass - - def test_replace_value_is_none(self): - self.assertRaises(TypeError, self.tsframe.replace, nan) - orig_value = self.tsframe.iloc[0, 0] - orig2 = self.tsframe.iloc[1, 0] - - self.tsframe.iloc[0, 0] = nan - self.tsframe.iloc[1, 0] = 1 - - result = self.tsframe.replace(to_replace={nan: 0}) - expected = self.tsframe.T.replace(to_replace={nan: 0}).T - assert_frame_equal(result, expected) - - result = self.tsframe.replace(to_replace={nan: 0, 1: -1e8}) - tsframe = self.tsframe.copy() - tsframe.iloc[0, 0] = 0 - tsframe.iloc[1, 0] = -1e8 - expected = tsframe - assert_frame_equal(expected, result) - self.tsframe.iloc[0, 0] = orig_value - self.tsframe.iloc[1, 0] = orig2 - - def test_replace_for_new_dtypes(self): - - # dtypes - tsframe = self.tsframe.copy().astype(np.float32) - tsframe['A'][:5] = nan - tsframe['A'][-5:] = nan - - zero_filled = tsframe.replace(nan, -1e8) - assert_frame_equal(zero_filled, tsframe.fillna(-1e8)) - assert_frame_equal(zero_filled.replace(-1e8, nan), tsframe) - - tsframe['A'][:5] = nan - tsframe['A'][-5:] = nan - tsframe['B'][:5] = -1e8 - - b = tsframe['B'] - b[b == 
-1e8] = nan - tsframe['B'] = b - result = tsframe.fillna(method='bfill') - assert_frame_equal(result, tsframe.fillna(method='bfill')) - - def test_replace_dtypes(self): - # int - df = DataFrame({'ints': [1, 2, 3]}) - result = df.replace(1, 0) - expected = DataFrame({'ints': [0, 2, 3]}) - assert_frame_equal(result, expected) - - df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int32) - result = df.replace(1, 0) - expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int32) - assert_frame_equal(result, expected) - - df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int16) - result = df.replace(1, 0) - expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int16) - assert_frame_equal(result, expected) - - # bools - df = DataFrame({'bools': [True, False, True]}) - result = df.replace(False, True) - self.assertTrue(result.values.all()) - - # complex blocks - df = DataFrame({'complex': [1j, 2j, 3j]}) - result = df.replace(1j, 0j) - expected = DataFrame({'complex': [0j, 2j, 3j]}) - assert_frame_equal(result, expected) - - # datetime blocks - prev = datetime.today() - now = datetime.today() - df = DataFrame({'datetime64': Index([prev, now, prev])}) - result = df.replace(prev, now) - expected = DataFrame({'datetime64': Index([now] * 3)}) - assert_frame_equal(result, expected) - - def test_replace_input_formats(self): - # both dicts - to_rep = {'A': np.nan, 'B': 0, 'C': ''} - values = {'A': 0, 'B': -1, 'C': 'missing'} - df = DataFrame({'A': [np.nan, 0, np.inf], 'B': [0, 2, 5], - 'C': ['', 'asdf', 'fd']}) - filled = df.replace(to_rep, values) - expected = {} - for k, v in compat.iteritems(df): - expected[k] = v.replace(to_rep[k], values[k]) - assert_frame_equal(filled, DataFrame(expected)) - - result = df.replace([0, 2, 5], [5, 2, 0]) - expected = DataFrame({'A': [np.nan, 5, np.inf], 'B': [5, 2, 0], - 'C': ['', 'asdf', 'fd']}) - assert_frame_equal(result, expected) - - # dict to scalar - filled = df.replace(to_rep, 0) - expected = {} - for k, v in compat.iteritems(df): - expected[k] = 
v.replace(to_rep[k], 0) - assert_frame_equal(filled, DataFrame(expected)) - - self.assertRaises(TypeError, df.replace, to_rep, [np.nan, 0, '']) - - # scalar to dict - values = {'A': 0, 'B': -1, 'C': 'missing'} - df = DataFrame({'A': [np.nan, 0, np.nan], 'B': [0, 2, 5], - 'C': ['', 'asdf', 'fd']}) - filled = df.replace(np.nan, values) - expected = {} - for k, v in compat.iteritems(df): - expected[k] = v.replace(np.nan, values[k]) - assert_frame_equal(filled, DataFrame(expected)) - - # list to list - to_rep = [np.nan, 0, ''] - values = [-2, -1, 'missing'] - result = df.replace(to_rep, values) - expected = df.copy() - for i in range(len(to_rep)): - expected.replace(to_rep[i], values[i], inplace=True) - assert_frame_equal(result, expected) - - self.assertRaises(ValueError, df.replace, to_rep, values[1:]) - - # list to scalar - to_rep = [np.nan, 0, ''] - result = df.replace(to_rep, -1) - expected = df.copy() - for i in range(len(to_rep)): - expected.replace(to_rep[i], -1, inplace=True) - assert_frame_equal(result, expected) - - def test_replace_limit(self): - pass - - def test_replace_dict_no_regex(self): - answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3: - 'Disagree', 4: 'Strongly Disagree'}) - weights = {'Agree': 4, 'Disagree': 2, 'Neutral': 3, 'Strongly Agree': - 5, 'Strongly Disagree': 1} - expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1}) - result = answer.replace(weights) - assert_series_equal(result, expected) - - def test_replace_series_no_regex(self): - answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3: - 'Disagree', 4: 'Strongly Disagree'}) - weights = Series({'Agree': 4, 'Disagree': 2, 'Neutral': 3, - 'Strongly Agree': 5, 'Strongly Disagree': 1}) - expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1}) - result = answer.replace(weights) - assert_series_equal(result, expected) - - def test_replace_dict_tuple_list_ordering_remains_the_same(self): - df = DataFrame(dict(A=[nan, 1])) - res1 = df.replace(to_replace={nan: 0, 1: -1e8}) - 
res2 = df.replace(to_replace=(1, nan), value=[-1e8, 0]) - res3 = df.replace(to_replace=[1, nan], value=[-1e8, 0]) - - expected = DataFrame({'A': [0, -1e8]}) - assert_frame_equal(res1, res2) - assert_frame_equal(res2, res3) - assert_frame_equal(res3, expected) - - def test_replace_doesnt_replace_without_regex(self): - from pandas.compat import StringIO - raw = """fol T_opp T_Dir T_Enh - 0 1 0 0 vo - 1 2 vr 0 0 - 2 2 0 0 0 - 3 3 0 bt 0""" - df = read_csv(StringIO(raw), sep=r'\s+') - res = df.replace({'\D': 1}) - assert_frame_equal(df, res) - - def test_replace_bool_with_string(self): - df = DataFrame({'a': [True, False], 'b': list('ab')}) - result = df.replace(True, 'a') - expected = DataFrame({'a': ['a', False], 'b': df.b}) - assert_frame_equal(result, expected) - - def test_replace_pure_bool_with_string_no_op(self): - df = DataFrame(np.random.rand(2, 2) > 0.5) - result = df.replace('asdf', 'fdsa') - assert_frame_equal(df, result) - - def test_replace_bool_with_bool(self): - df = DataFrame(np.random.rand(2, 2) > 0.5) - result = df.replace(False, True) - expected = DataFrame(np.ones((2, 2), dtype=bool)) - assert_frame_equal(result, expected) - - def test_replace_with_dict_with_bool_keys(self): - df = DataFrame({0: [True, False], 1: [False, True]}) - with tm.assertRaisesRegexp(TypeError, 'Cannot compare types .+'): - df.replace({'asdf': 'asdb', True: 'yes'}) - - def test_replace_truthy(self): - df = DataFrame({'a': [True, True]}) - r = df.replace([np.inf, -np.inf], np.nan) - e = df - assert_frame_equal(r, e) - - def test_replace_int_to_int_chain(self): - df = DataFrame({'a': lrange(1, 5)}) - with tm.assertRaisesRegexp(ValueError, "Replacement not allowed .+"): - df.replace({'a': dict(zip(range(1, 5), range(2, 6)))}) - - def test_replace_str_to_str_chain(self): - a = np.arange(1, 5) - astr = a.astype(str) - bstr = np.arange(2, 6).astype(str) - df = DataFrame({'a': astr}) - with tm.assertRaisesRegexp(ValueError, "Replacement not allowed .+"): - df.replace({'a': 
dict(zip(astr, bstr))}) - - def test_replace_swapping_bug(self): - df = pd.DataFrame({'a': [True, False, True]}) - res = df.replace({'a': {True: 'Y', False: 'N'}}) - expect = pd.DataFrame({'a': ['Y', 'N', 'Y']}) - assert_frame_equal(res, expect) - - df = pd.DataFrame({'a': [0, 1, 0]}) - res = df.replace({'a': {0: 'Y', 1: 'N'}}) - expect = pd.DataFrame({'a': ['Y', 'N', 'Y']}) - assert_frame_equal(res, expect) - - def test_replace_period(self): - d = {'fname': - {'out_augmented_AUG_2011.json': pd.Period(year=2011, month=8, freq='M'), - 'out_augmented_JAN_2011.json': pd.Period(year=2011, month=1, freq='M'), - 'out_augmented_MAY_2012.json': pd.Period(year=2012, month=5, freq='M'), - 'out_augmented_SUBSIDY_WEEK.json': pd.Period(year=2011, month=4, freq='M'), - 'out_augmented_AUG_2012.json': pd.Period(year=2012, month=8, freq='M'), - 'out_augmented_MAY_2011.json': pd.Period(year=2011, month=5, freq='M'), - 'out_augmented_SEP_2013.json': pd.Period(year=2013, month=9, freq='M')}} - - df = pd.DataFrame(['out_augmented_AUG_2012.json', - 'out_augmented_SEP_2013.json', - 'out_augmented_SUBSIDY_WEEK.json', - 'out_augmented_MAY_2012.json', - 'out_augmented_MAY_2011.json', - 'out_augmented_AUG_2011.json', - 'out_augmented_JAN_2011.json'], columns=['fname']) - tm.assert_equal(set(df.fname.values), set(d['fname'].keys())) - expected = DataFrame({'fname': [d['fname'][k] - for k in df.fname.values]}) - result = df.replace(d) - assert_frame_equal(result, expected) - - def test_replace_datetime(self): - d = {'fname': - {'out_augmented_AUG_2011.json': pd.Timestamp('2011-08'), - 'out_augmented_JAN_2011.json': pd.Timestamp('2011-01'), - 'out_augmented_MAY_2012.json': pd.Timestamp('2012-05'), - 'out_augmented_SUBSIDY_WEEK.json': pd.Timestamp('2011-04'), - 'out_augmented_AUG_2012.json': pd.Timestamp('2012-08'), - 'out_augmented_MAY_2011.json': pd.Timestamp('2011-05'), - 'out_augmented_SEP_2013.json': pd.Timestamp('2013-09')}} - - df = pd.DataFrame(['out_augmented_AUG_2012.json', - 
'out_augmented_SEP_2013.json', - 'out_augmented_SUBSIDY_WEEK.json', - 'out_augmented_MAY_2012.json', - 'out_augmented_MAY_2011.json', - 'out_augmented_AUG_2011.json', - 'out_augmented_JAN_2011.json'], columns=['fname']) - tm.assert_equal(set(df.fname.values), set(d['fname'].keys())) - expected = DataFrame({'fname': [d['fname'][k] - for k in df.fname.values]}) - result = df.replace(d) - assert_frame_equal(result, expected) - - def test_replace_datetimetz(self): - - # GH 11326 - # behaving poorly when presented with a datetime64[ns, tz] - df = DataFrame({'A' : date_range('20130101',periods=3,tz='US/Eastern'), - 'B' : [0, np.nan, 2]}) - result = df.replace(np.nan,1) - expected = DataFrame({'A' : date_range('20130101',periods=3,tz='US/Eastern'), - 'B' : Series([0, 1, 2],dtype='float64')}) - assert_frame_equal(result, expected) - - result = df.fillna(1) - assert_frame_equal(result, expected) - - result = df.replace(0,np.nan) - expected = DataFrame({'A' : date_range('20130101',periods=3,tz='US/Eastern'), - 'B' : [np.nan, np.nan, 2]}) - assert_frame_equal(result, expected) - - result = df.replace(Timestamp('20130102',tz='US/Eastern'),Timestamp('20130104',tz='US/Eastern')) - expected = DataFrame({'A' : [Timestamp('20130101',tz='US/Eastern'), - Timestamp('20130104',tz='US/Eastern'), - Timestamp('20130103',tz='US/Eastern')], - 'B' : [0, np.nan, 2]}) - assert_frame_equal(result, expected) - - result = df.copy() - result.iloc[1,0] = np.nan - result = result.replace({'A' : pd.NaT }, Timestamp('20130104',tz='US/Eastern')) - assert_frame_equal(result, expected) - - # coerce to object - result = df.copy() - result.iloc[1,0] = np.nan - result = result.replace({'A' : pd.NaT }, Timestamp('20130104',tz='US/Pacific')) - expected = DataFrame({'A' : [Timestamp('20130101',tz='US/Eastern'), - Timestamp('20130104',tz='US/Pacific'), - Timestamp('20130103',tz='US/Eastern')], - 'B' : [0, np.nan, 2]}) - assert_frame_equal(result, expected) - - result = df.copy() - result.iloc[1,0] = np.nan - 
-        result = result.replace({'A' : np.nan }, Timestamp('20130104'))
-        expected = DataFrame({'A' : [Timestamp('20130101',tz='US/Eastern'),
-                                     Timestamp('20130104'),
-                                     Timestamp('20130103',tz='US/Eastern')],
-                              'B' : [0, np.nan, 2]})
-        assert_frame_equal(result, expected)
-
-    def test_combine_multiple_frames_dtypes(self):
-
-        # GH 2759
-        A = DataFrame(data=np.ones((10, 2)), columns=['foo', 'bar'], dtype=np.float64)
-        B = DataFrame(data=np.ones((10, 2)), dtype=np.float32)
-        results = pd.concat((A, B), axis=1).get_dtype_counts()
-        expected = Series(dict( float64 = 2, float32 = 2 ))
-        assert_series_equal(results,expected)
-
-    def test_ops(self):
-
-        # tst ops and reversed ops in evaluation
-        # GH7198
-
-        # smaller hits python, larger hits numexpr
-        for n in [ 4, 4000 ]:
-
-            df = DataFrame(1,index=range(n),columns=list('abcd'))
-            df.iloc[0] = 2
-            m = df.mean()
-
-            for op_str, op, rop in [('+','__add__','__radd__'),
-                                    ('-','__sub__','__rsub__'),
-                                    ('*','__mul__','__rmul__'),
-                                    ('/','__truediv__','__rtruediv__')]:
-
-                base = DataFrame(np.tile(m.values,n).reshape(n,-1),columns=list('abcd'))
-                expected = eval("base{op}df".format(op=op_str))
-
-                # ops as strings
-                result = eval("m{op}df".format(op=op_str))
-                assert_frame_equal(result,expected)
-
-                # these are commutative
-                if op in ['+','*']:
-                    result = getattr(df,op)(m)
-                    assert_frame_equal(result,expected)
-
-                # these are not
-                elif op in ['-','/']:
-                    result = getattr(df,rop)(m)
-                    assert_frame_equal(result,expected)
-
-        # GH7192
-        df = DataFrame(dict(A=np.random.randn(25000)))
-        df.iloc[0:5] = np.nan
-        expected = (1-np.isnan(df.iloc[0:25]))
-        result = (1-np.isnan(df)).iloc[0:25]
-        assert_frame_equal(result,expected)
-
-    def test_truncate(self):
-        offset = datetools.bday
-
-        ts = self.tsframe[::3]
-
-        start, end = self.tsframe.index[3], self.tsframe.index[6]
-
-        start_missing = self.tsframe.index[2]
-        end_missing = self.tsframe.index[7]
-
-        # neither specified
-        truncated = ts.truncate()
-        assert_frame_equal(truncated, ts)
-
-        # both specified
-        expected = ts[1:3]
-
-        truncated = ts.truncate(start, end)
-        assert_frame_equal(truncated, expected)
-
-        truncated = ts.truncate(start_missing, end_missing)
-        assert_frame_equal(truncated, expected)
-
-        # start specified
-        expected = ts[1:]
-
-        truncated = ts.truncate(before=start)
-        assert_frame_equal(truncated, expected)
-
-        truncated = ts.truncate(before=start_missing)
-        assert_frame_equal(truncated, expected)
-
-        # end specified
-        expected = ts[:3]
-
-        truncated = ts.truncate(after=end)
-        assert_frame_equal(truncated, expected)
-
-        truncated = ts.truncate(after=end_missing)
-        assert_frame_equal(truncated, expected)
-
-        self.assertRaises(ValueError, ts.truncate,
-                          before=ts.index[-1] - 1,
-                          after=ts.index[0] +1)
-
-    def test_truncate_copy(self):
-        index = self.tsframe.index
-        truncated = self.tsframe.truncate(index[5], index[10])
-        truncated.values[:] = 5.
-        self.assertFalse((self.tsframe.values[5:11] == 5).any())
-
-    def test_xs(self):
-        idx = self.frame.index[5]
-        xs = self.frame.xs(idx)
-        for item, value in compat.iteritems(xs):
-            if np.isnan(value):
-                self.assertTrue(np.isnan(self.frame[item][idx]))
-            else:
-                self.assertEqual(value, self.frame[item][idx])
-
-        # mixed-type xs
-        test_data = {
-            'A': {'1': 1, '2': 2},
-            'B': {'1': '1', '2': '2', '3': '3'},
-        }
-        frame = DataFrame(test_data)
-        xs = frame.xs('1')
-        self.assertEqual(xs.dtype, np.object_)
-        self.assertEqual(xs['A'], 1)
-        self.assertEqual(xs['B'], '1')
-
-        with tm.assertRaises(KeyError):
-            self.tsframe.xs(self.tsframe.index[0] - datetools.bday)
-
-        # xs get column
-        series = self.frame.xs('A', axis=1)
-        expected = self.frame['A']
-        assert_series_equal(series, expected)
-
-        # view is returned if possible
-        series = self.frame.xs('A', axis=1)
-        series[:] = 5
-        self.assertTrue((expected == 5).all())
-
-    def test_xs_corner(self):
-        # pathological mixed-type reordering case
-        df = DataFrame(index=[0])
-        df['A'] = 1.
-        df['B'] = 'foo'
-        df['C'] = 2.
-        df['D'] = 'bar'
-        df['E'] = 3.
-
-        xs = df.xs(0)
-        assert_almost_equal(xs, [1., 'foo', 2., 'bar', 3.])
-
-        # no columns but Index(dtype=object)
-        df = DataFrame(index=['a', 'b', 'c'])
-        result = df.xs('a')
-        expected = Series([], name='a', index=pd.Index([], dtype=object))
-        assert_series_equal(result, expected)
-
-    def test_xs_duplicates(self):
-        df = DataFrame(randn(5, 2), index=['b', 'b', 'c', 'b', 'a'])
-
-        cross = df.xs('c')
-        exp = df.iloc[2]
-        assert_series_equal(cross, exp)
-
-    def test_xs_keep_level(self):
-        df = DataFrame({'day': {0: 'sat', 1: 'sun'},
-                        'flavour': {0: 'strawberry', 1: 'strawberry'},
-                        'sales': {0: 10, 1: 12},
-                        'year': {0: 2008, 1: 2008}}).set_index(['year','flavour','day'])
-        result = df.xs('sat', level='day', drop_level=False)
-        expected = df[:1]
-        assert_frame_equal(result, expected)
-
-        result = df.xs([2008, 'sat'], level=['year', 'day'], drop_level=False)
-        assert_frame_equal(result, expected)
-
-    def test_pivot(self):
-        data = {
-            'index': ['A', 'B', 'C', 'C', 'B', 'A'],
-            'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'],
-            'values': [1., 2., 3., 3., 2., 1.]
-        }
-
-        frame = DataFrame(data)
-        pivoted = frame.pivot(
-            index='index', columns='columns', values='values')
-
-        expected = DataFrame({
-            'One': {'A': 1., 'B': 2., 'C': 3.},
-            'Two': {'A': 1., 'B': 2., 'C': 3.}
-        })
-        expected.index.name, expected.columns.name = 'index', 'columns'
-
-        assert_frame_equal(pivoted, expected)
-
-        # name tracking
-        self.assertEqual(pivoted.index.name, 'index')
-        self.assertEqual(pivoted.columns.name, 'columns')
-
-        # don't specify values
-        pivoted = frame.pivot(index='index', columns='columns')
-        self.assertEqual(pivoted.index.name, 'index')
-        self.assertEqual(pivoted.columns.names, (None, 'columns'))
-
-        # pivot multiple columns
-        wp = tm.makePanel()
-        lp = wp.to_frame()
-        df = lp.reset_index()
-        assert_frame_equal(df.pivot('major', 'minor'), lp.unstack())
-
-    def test_pivot_duplicates(self):
-        data = DataFrame({'a': ['bar', 'bar', 'foo', 'foo', 'foo'],
-                          'b': ['one', 'two', 'one', 'one', 'two'],
-                          'c': [1., 2., 3., 3., 4.]})
-        with assertRaisesRegexp(ValueError, 'duplicate entries'):
-            data.pivot('a', 'b', 'c')
-
-    def test_pivot_empty(self):
-        df = DataFrame({}, columns=['a', 'b', 'c'])
-        result = df.pivot('a', 'b', 'c')
-        expected = DataFrame({})
-        assert_frame_equal(result, expected, check_names=False)
-
-    def test_pivot_integer_bug(self):
-        df = DataFrame(data=[("A", "1", "A1"), ("B", "2", "B2")])
-
-        result = df.pivot(index=1, columns=0, values=2)
-        repr(result)
-        self.assert_numpy_array_equal(result.columns, ['A', 'B'])
-
-    def test_pivot_index_none(self):
-        # gh-3962
-        data = {
-            'index': ['A', 'B', 'C', 'C', 'B', 'A'],
-            'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'],
-            'values': [1., 2., 3., 3., 2., 1.]
-        }
-
-        frame = DataFrame(data).set_index('index')
-        result = frame.pivot(columns='columns', values='values')
-        expected = DataFrame({
-            'One': {'A': 1., 'B': 2., 'C': 3.},
-            'Two': {'A': 1., 'B': 2., 'C': 3.}
-        })
-
-        expected.index.name, expected.columns.name = 'index', 'columns'
-        assert_frame_equal(result, expected)
-
-        # omit values
-        result = frame.pivot(columns='columns')
-
-        expected.columns = pd.MultiIndex.from_tuples([('values', 'One'),
-                                                      ('values', 'Two')],
-                                                     names=[None, 'columns'])
-        expected.index.name = 'index'
-        assert_frame_equal(result, expected, check_names=False)
-        self.assertEqual(result.index.name, 'index',)
-        self.assertEqual(result.columns.names, (None, 'columns'))
-        expected.columns = expected.columns.droplevel(0)
-
-        data = {
-            'index': range(7),
-            'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'],
-            'values': [1., 2., 3., 3., 2., 1.]
-        }
-
-        result = frame.pivot(columns='columns', values='values')
-
-        expected.columns.name = 'columns'
-        assert_frame_equal(result, expected)
-
-    def test_reindex(self):
-        newFrame = self.frame.reindex(self.ts1.index)
-
-        for col in newFrame.columns:
-            for idx, val in compat.iteritems(newFrame[col]):
-                if idx in self.frame.index:
-                    if np.isnan(val):
-                        self.assertTrue(np.isnan(self.frame[col][idx]))
-                    else:
-                        self.assertEqual(val, self.frame[col][idx])
-                else:
-                    self.assertTrue(np.isnan(val))
-
-        for col, series in compat.iteritems(newFrame):
-            self.assertTrue(tm.equalContents(series.index, newFrame.index))
-        emptyFrame = self.frame.reindex(Index([]))
-        self.assertEqual(len(emptyFrame.index), 0)
-
-        # Cython code should be unit-tested directly
-        nonContigFrame = self.frame.reindex(self.ts1.index[::2])
-
-        for col in nonContigFrame.columns:
-            for idx, val in compat.iteritems(nonContigFrame[col]):
-                if idx in self.frame.index:
-                    if np.isnan(val):
-                        self.assertTrue(np.isnan(self.frame[col][idx]))
-                    else:
-                        self.assertEqual(val, self.frame[col][idx])
-                else:
-                    self.assertTrue(np.isnan(val))
-
-        for col, series in compat.iteritems(nonContigFrame):
-            self.assertTrue(tm.equalContents(series.index,
-                                             nonContigFrame.index))
-
-        # corner cases
-
-        # Same index, copies values but not index if copy=False
-        newFrame = self.frame.reindex(self.frame.index, copy=False)
-        self.assertIs(newFrame.index, self.frame.index)
-
-        # length zero
-        newFrame = self.frame.reindex([])
-        self.assertTrue(newFrame.empty)
-        self.assertEqual(len(newFrame.columns), len(self.frame.columns))
-
-        # length zero with columns reindexed with non-empty index
-        newFrame = self.frame.reindex([])
-        newFrame = newFrame.reindex(self.frame.index)
-        self.assertEqual(len(newFrame.index), len(self.frame.index))
-        self.assertEqual(len(newFrame.columns), len(self.frame.columns))
-
-        # pass non-Index
-        newFrame = self.frame.reindex(list(self.ts1.index))
-        self.assertTrue(newFrame.index.equals(self.ts1.index))
-
-        # copy with no axes
-        result = self.frame.reindex()
-        assert_frame_equal(result,self.frame)
-        self.assertFalse(result is self.frame)
-
-    def test_reindex_nan(self):
-        df = pd.DataFrame([[1, 2], [3, 5], [7, 11], [9, 23]],
-                          index=[2, np.nan, 1, 5],
-                          columns=['joe', 'jim'])
-
-        i, j = [np.nan, 5, 5, np.nan, 1, 2, np.nan], [1, 3, 3, 1, 2, 0, 1]
-        assert_frame_equal(df.reindex(i), df.iloc[j])
-
-        df.index = df.index.astype('object')
-        assert_frame_equal(df.reindex(i), df.iloc[j], check_index_type=False)
-
-        # GH10388
-        df = pd.DataFrame({'other': ['a', 'b', np.nan, 'c'],
-                           'date': ['2015-03-22', np.nan,
-                                    '2012-01-08', np.nan],
-                           'amount': [2, 3, 4, 5]})
-
-        df['date'] = pd.to_datetime(df.date)
-        df['delta'] = (pd.to_datetime('2015-06-18') - df['date']).shift(1)
-
-        left = df.set_index(['delta', 'other', 'date']).reset_index()
-        right = df.reindex(columns=['delta', 'other', 'date', 'amount'])
-        assert_frame_equal(left, right)
-
-    def test_reindex_name_remains(self):
-        s = Series(random.rand(10))
-        df = DataFrame(s, index=np.arange(len(s)))
-        i = Series(np.arange(10), name='iname')
-
-        df = df.reindex(i)
-        self.assertEqual(df.index.name, 'iname')
-
-        df = df.reindex(Index(np.arange(10), name='tmpname'))
-        self.assertEqual(df.index.name, 'tmpname')
-
-        s = Series(random.rand(10))
-        df = DataFrame(s.T, index=np.arange(len(s)))
-        i = Series(np.arange(10), name='iname')
-        df = df.reindex(columns=i)
-        self.assertEqual(df.columns.name, 'iname')
-
-    def test_reindex_int(self):
-        smaller = self.intframe.reindex(self.intframe.index[::2])
-
-        self.assertEqual(smaller['A'].dtype, np.int64)
-
-        bigger = smaller.reindex(self.intframe.index)
-        self.assertEqual(bigger['A'].dtype, np.float64)
-
-        smaller = self.intframe.reindex(columns=['A', 'B'])
-        self.assertEqual(smaller['A'].dtype, np.int64)
-
-    def test_reindex_like(self):
-        other = self.frame.reindex(index=self.frame.index[:10],
-                                   columns=['C', 'B'])
-
-        assert_frame_equal(other, self.frame.reindex_like(other))
-
-    def test_reindex_columns(self):
-        newFrame = self.frame.reindex(columns=['A', 'B', 'E'])
-
-        assert_series_equal(newFrame['B'], self.frame['B'])
-        self.assertTrue(np.isnan(newFrame['E']).all())
-        self.assertNotIn('C', newFrame)
-
-        # length zero
-        newFrame = self.frame.reindex(columns=[])
-        self.assertTrue(newFrame.empty)
-
-    def test_reindex_axes(self):
-
-        # GH 3317, reindexing by both axes loses freq of the index
-        from datetime import datetime
-        df = DataFrame(np.ones((3, 3)), index=[datetime(2012, 1, 1), datetime(2012, 1, 2), datetime(2012, 1, 3)], columns=['a', 'b', 'c'])
-        time_freq = date_range('2012-01-01', '2012-01-03', freq='d')
-        some_cols = ['a', 'b']
-
-        index_freq = df.reindex(index=time_freq).index.freq
-        both_freq = df.reindex(index=time_freq, columns=some_cols).index.freq
-        seq_freq = df.reindex(index=time_freq).reindex(columns=some_cols).index.freq
-        self.assertEqual(index_freq, both_freq)
-        self.assertEqual(index_freq, seq_freq)
-
-    def test_reindex_fill_value(self):
-        df = DataFrame(np.random.randn(10, 4))
-
-        # axis=0
-        result = df.reindex(lrange(15))
-        self.assertTrue(np.isnan(result.values[-5:]).all())
-
-        result = df.reindex(lrange(15), fill_value=0)
-        expected = df.reindex(lrange(15)).fillna(0)
-        assert_frame_equal(result, expected)
-
-        # axis=1
-        result = df.reindex(columns=lrange(5), fill_value=0.)
-        expected = df.copy()
-        expected[4] = 0.
-        assert_frame_equal(result, expected)
-
-        result = df.reindex(columns=lrange(5), fill_value=0)
-        expected = df.copy()
-        expected[4] = 0
-        assert_frame_equal(result, expected)
-
-        result = df.reindex(columns=lrange(5), fill_value='foo')
-        expected = df.copy()
-        expected[4] = 'foo'
-        assert_frame_equal(result, expected)
-
-        # reindex_axis
-        result = df.reindex_axis(lrange(15), fill_value=0., axis=0)
-        expected = df.reindex(lrange(15)).fillna(0)
-        assert_frame_equal(result, expected)
-
-        result = df.reindex_axis(lrange(5), fill_value=0., axis=1)
-        expected = df.reindex(columns=lrange(5)).fillna(0)
-        assert_frame_equal(result, expected)
-
-        # other dtypes
-        df['foo'] = 'foo'
-        result = df.reindex(lrange(15), fill_value=0)
-        expected = df.reindex(lrange(15)).fillna(0)
-        assert_frame_equal(result, expected)
-
-    def test_reindex_dups(self):
-
-        # GH4746, reindex on duplicate index error messages
-        arr = np.random.randn(10)
-        df = DataFrame(arr,index=[1,2,3,4,5,1,2,3,4,5])
-
-        # set index is ok
-        result = df.copy()
-        result.index = list(range(len(df)))
-        expected = DataFrame(arr,index=list(range(len(df))))
-        assert_frame_equal(result,expected)
-
-        # reindex fails
-        self.assertRaises(ValueError, df.reindex, index=list(range(len(df))))
-
-    def test_align(self):
-        af, bf = self.frame.align(self.frame)
-        self.assertIsNot(af._data, self.frame._data)
-
-        af, bf = self.frame.align(self.frame, copy=False)
-        self.assertIs(af._data, self.frame._data)
-
-        # axis = 0
-        other = self.frame.ix[:-5, :3]
-        af, bf = self.frame.align(other, axis=0, fill_value=-1)
-        self.assertTrue(bf.columns.equals(other.columns))
-        # test fill value
-        join_idx = self.frame.index.join(other.index)
-        diff_a = self.frame.index.difference(join_idx)
-        diff_b = other.index.difference(join_idx)
-        diff_a_vals = af.reindex(diff_a).values
-        diff_b_vals = bf.reindex(diff_b).values
-        self.assertTrue((diff_a_vals == -1).all())
-
-        af, bf = self.frame.align(other, join='right', axis=0)
-        self.assertTrue(bf.columns.equals(other.columns))
-        self.assertTrue(bf.index.equals(other.index))
-        self.assertTrue(af.index.equals(other.index))
-
-        # axis = 1
-        other = self.frame.ix[:-5, :3].copy()
-        af, bf = self.frame.align(other, axis=1)
-        self.assertTrue(bf.columns.equals(self.frame.columns))
-        self.assertTrue(bf.index.equals(other.index))
-
-        # test fill value
-        join_idx = self.frame.index.join(other.index)
-        diff_a = self.frame.index.difference(join_idx)
-        diff_b = other.index.difference(join_idx)
-        diff_a_vals = af.reindex(diff_a).values
-        diff_b_vals = bf.reindex(diff_b).values
-        self.assertTrue((diff_a_vals == -1).all())
-
-        af, bf = self.frame.align(other, join='inner', axis=1)
-        self.assertTrue(bf.columns.equals(other.columns))
-
-        af, bf = self.frame.align(other, join='inner', axis=1, method='pad')
-        self.assertTrue(bf.columns.equals(other.columns))
-
-        # test other non-float types
-        af, bf = self.intframe.align(other, join='inner', axis=1, method='pad')
-        self.assertTrue(bf.columns.equals(other.columns))
-
-        af, bf = self.mixed_frame.align(self.mixed_frame,
-                                        join='inner', axis=1, method='pad')
-        self.assertTrue(bf.columns.equals(self.mixed_frame.columns))
-
-        af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
-                                  method=None, fill_value=None)
-        self.assertTrue(bf.index.equals(Index([])))
-
-        af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
-                                  method=None, fill_value=0)
-        self.assertTrue(bf.index.equals(Index([])))
-
-        # mixed floats/ints
-        af, bf = self.mixed_float.align(other.ix[:, 0], join='inner', axis=1,
-                                        method=None, fill_value=0)
-        self.assertTrue(bf.index.equals(Index([])))
-
-        af, bf = self.mixed_int.align(other.ix[:, 0], join='inner', axis=1,
-                                      method=None, fill_value=0)
-        self.assertTrue(bf.index.equals(Index([])))
-
-        # try to align dataframe to series along bad axis
-        self.assertRaises(ValueError, self.frame.align, af.ix[0, :3],
-                          join='inner', axis=2)
-
-        # align dataframe to series with broadcast or not
-        idx = self.frame.index
-        s = Series(range(len(idx)), index=idx)
-
-        left, right = self.frame.align(s, axis=0)
-        tm.assert_index_equal(left.index, self.frame.index)
-        tm.assert_index_equal(right.index, self.frame.index)
-        self.assertTrue(isinstance(right, Series))
-
-        left, right = self.frame.align(s, broadcast_axis=1)
-        tm.assert_index_equal(left.index, self.frame.index)
-        expected = {}
-        for c in self.frame.columns:
-            expected[c] = s
-        expected = DataFrame(expected, index=self.frame.index,
-                             columns=self.frame.columns)
-        assert_frame_equal(right, expected)
-
-        # GH 9558
-        df = DataFrame({'a':[1,2,3], 'b':[4,5,6]})
-        result = df[df['a'] == 2]
-        expected = DataFrame([[2, 5]], index=[1], columns=['a', 'b'])
-        assert_frame_equal(result, expected)
-
-        result = df.where(df['a'] == 2, 0)
-        expected = DataFrame({'a':[0, 2, 0], 'b':[0, 5, 0]})
-        assert_frame_equal(result, expected)
-
-    def _check_align(self, a, b, axis, fill_axis, how, method, limit=None):
-        aa, ab = a.align(b, axis=axis, join=how, method=method, limit=limit,
-                         fill_axis=fill_axis)
-
-        join_index, join_columns = None, None
-
-        ea, eb = a, b
-        if axis is None or axis == 0:
-            join_index = a.index.join(b.index, how=how)
-            ea = ea.reindex(index=join_index)
-            eb = eb.reindex(index=join_index)
-
-        if axis is None or axis == 1:
-            join_columns = a.columns.join(b.columns, how=how)
-            ea = ea.reindex(columns=join_columns)
-            eb = eb.reindex(columns=join_columns)
-
-        ea = ea.fillna(axis=fill_axis, method=method, limit=limit)
-        eb = eb.fillna(axis=fill_axis, method=method, limit=limit)
-
-        assert_frame_equal(aa, ea)
-        assert_frame_equal(ab, eb)
-
-    def test_align_fill_method_inner(self):
-        for meth in ['pad', 'bfill']:
-            for ax in [0, 1, None]:
-                for fax in [0, 1]:
-                    self._check_align_fill('inner', meth, ax, fax)
-
-    def test_align_fill_method_outer(self):
-        for meth in ['pad', 'bfill']:
-            for ax in [0, 1, None]:
-                for fax in [0, 1]:
-                    self._check_align_fill('outer', meth, ax, fax)
-
-    def test_align_fill_method_left(self):
-        for meth in ['pad', 'bfill']:
-            for ax in [0, 1, None]:
-                for fax in [0, 1]:
-                    self._check_align_fill('left', meth, ax, fax)
-
-    def test_align_fill_method_right(self):
-        for meth in ['pad', 'bfill']:
-            for ax in [0, 1, None]:
-                for fax in [0, 1]:
-                    self._check_align_fill('right', meth, ax, fax)
-
-    def _check_align_fill(self, kind, meth, ax, fax):
-        left = self.frame.ix[0:4, :10]
-        right = self.frame.ix[2:, 6:]
-        empty = self.frame.ix[:0, :0]
-
-        self._check_align(left, right, axis=ax, fill_axis=fax,
-                          how=kind, method=meth)
-        self._check_align(left, right, axis=ax, fill_axis=fax,
-                          how=kind, method=meth, limit=1)
-
-        # empty left
-        self._check_align(empty, right, axis=ax, fill_axis=fax,
-                          how=kind, method=meth)
-        self._check_align(empty, right, axis=ax, fill_axis=fax,
-                          how=kind, method=meth, limit=1)
-
-        # empty right
-        self._check_align(left, empty, axis=ax, fill_axis=fax,
-                          how=kind, method=meth)
-        self._check_align(left, empty, axis=ax, fill_axis=fax,
-                          how=kind, method=meth, limit=1)
-
-        # both empty
-        self._check_align(empty, empty, axis=ax, fill_axis=fax,
-                          how=kind, method=meth)
-        self._check_align(empty, empty, axis=ax, fill_axis=fax,
-                          how=kind, method=meth, limit=1)
-
-    def test_align_int_fill_bug(self):
-        # GH #910
-        X = np.arange(10*10, dtype='float64').reshape(10, 10)
-        Y = np.ones((10, 1), dtype=int)
-
-        df1 = DataFrame(X)
-        df1['0.X'] = Y.squeeze()
-
-        df2 = df1.astype(float)
-
-        result = df1 - df1.mean()
-        expected = df2 - df2.mean()
-        assert_frame_equal(result, expected)
-
-    def test_align_multiindex(self):
-        # GH 10665
-        # same test cases as test_align_multiindex in test_series.py
-
-        midx = pd.MultiIndex.from_product([range(2), range(3), range(2)],
-                                          names=('a', 'b', 'c'))
-        idx = pd.Index(range(2), name='b')
-        df1 = pd.DataFrame(np.arange(12,dtype='int64'), index=midx)
-        df2 = pd.DataFrame(np.arange(2,dtype='int64'), index=idx)
-
-        # these must be the same results (but flipped)
-        res1l, res1r = df1.align(df2, join='left')
-        res2l, res2r = df2.align(df1, join='right')
-
-        expl = df1
-        assert_frame_equal(expl, res1l)
-        assert_frame_equal(expl, res2r)
-        expr = pd.DataFrame([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx)
-        assert_frame_equal(expr, res1r)
-        assert_frame_equal(expr, res2l)
-
-        res1l, res1r = df1.align(df2, join='right')
-        res2l, res2r = df2.align(df1, join='left')
-
-        exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)],
-                                             names=('a', 'b', 'c'))
-        expl = pd.DataFrame([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx)
-        assert_frame_equal(expl, res1l)
-        assert_frame_equal(expl, res2r)
-        expr = pd.DataFrame([0, 0, 1, 1] * 2, index=exp_idx)
-        assert_frame_equal(expr, res1r)
-        assert_frame_equal(expr, res2l)
-
-    def test_where(self):
-        default_frame = DataFrame(np.random.randn(5, 3),
-                                  columns=['A', 'B', 'C'])
-
-        def _safe_add(df):
-            # only add to the numeric items
-            def is_ok(s):
-                return issubclass(s.dtype.type, (np.integer,np.floating)) and s.dtype != 'uint8'
-            return DataFrame(dict([ (c,s+1) if is_ok(s) else (c,s) for c, s in compat.iteritems(df) ]))
-
-        def _check_get(df, cond, check_dtypes = True):
-            other1 = _safe_add(df)
-            rs = df.where(cond, other1)
-            rs2 = df.where(cond.values, other1)
-            for k, v in rs.iteritems():
-                exp = Series(np.where(cond[k], df[k], other1[k]),index=v.index)
-                assert_series_equal(v, exp, check_names=False)
-            assert_frame_equal(rs, rs2)
-
-            # dtypes
-            if check_dtypes:
-                self.assertTrue((rs.dtypes == df.dtypes).all() == True)
-
-        # check getting
-        for df in [ default_frame, self.mixed_frame, self.mixed_float, self.mixed_int ]:
-            cond = df > 0
-            _check_get(df, cond)
-
-
-        # upcasting case (GH # 2794)
-        df = DataFrame(dict([ (c,Series([1]*3,dtype=c)) for c in ['int64','int32','float32','float64'] ]))
-        df.ix[1,:] = 0
-        result = df.where(df>=0).get_dtype_counts()
-
-        #### when we don't preserve boolean casts ####
-        #expected = Series({ 'float32' : 1, 'float64' : 3 })
-
-        expected = Series({ 'float32' : 1, 'float64' : 1, 'int32' : 1, 'int64' : 1 })
-        assert_series_equal(result, expected)
-
-        # aligning
-        def _check_align(df, cond, other, check_dtypes = True):
-            rs = df.where(cond, other)
-            for i, k in enumerate(rs.columns):
-                result = rs[k]
-                d = df[k].values
-                c = cond[k].reindex(df[k].index).fillna(False).values
-
-                if np.isscalar(other):
-                    o = other
-                else:
-                    if isinstance(other,np.ndarray):
-                        o = Series(other[:,i],index=result.index).values
-                    else:
-                        o = other[k].values
-
-                new_values = d if c.all() else np.where(c, d, o)
-                expected = Series(new_values, index=result.index, name=k)
-
-                # since we can't always have the correct numpy dtype
-                # as numpy doesn't know how to downcast, don't check
-                assert_series_equal(result, expected, check_dtype=False)
-
-            # dtypes
-            # can't check dtype when other is an ndarray
-
-            if check_dtypes and not isinstance(other,np.ndarray):
-                self.assertTrue((rs.dtypes == df.dtypes).all() == True)
-
-        for df in [ self.mixed_frame, self.mixed_float, self.mixed_int ]:
-
-            # other is a frame
-            cond = (df > 0)[1:]
-            _check_align(df, cond, _safe_add(df))
-
-            # check other is ndarray
-            cond = df > 0
-            _check_align(df, cond, (_safe_add(df).values))
-
-            # integers are upcast, so don't check the dtypes
-            cond = df > 0
-            check_dtypes = all([ not issubclass(s.type,np.integer) for s in df.dtypes ])
-            _check_align(df, cond, np.nan, check_dtypes = check_dtypes)
-
-        # invalid conditions
-        df = default_frame
-        err1 = (df + 1).values[0:2, :]
-        self.assertRaises(ValueError, df.where, cond, err1)
-
-        err2 = cond.ix[:2, :].values
-        other1 = _safe_add(df)
-        self.assertRaises(ValueError, df.where, err2, other1)
-
-        self.assertRaises(ValueError, df.mask, True)
-        self.assertRaises(ValueError, df.mask, 0)
-
-        # where inplace
-        def _check_set(df, cond, check_dtypes = True):
-            dfi = df.copy()
-            econd = cond.reindex_like(df).fillna(True)
-            expected = dfi.mask(~econd)
-
-            dfi.where(cond, np.nan, inplace=True)
-            assert_frame_equal(dfi, expected)
-
-            # dtypes (and confirm upcasts)x
-            if check_dtypes:
-                for k, v in compat.iteritems(df.dtypes):
-                    if issubclass(v.type,np.integer) and not cond[k].all():
-                        v = np.dtype('float64')
-                    self.assertEqual(dfi[k].dtype, v)
-
-        for df in [ default_frame, self.mixed_frame, self.mixed_float, self.mixed_int ]:
-
-            cond = df > 0
-            _check_set(df, cond)
-
-            cond = df >= 0
-            _check_set(df, cond)
-
-            # aligining
-            cond = (df >= 0)[1:]
-            _check_set(df, cond)
-
-        # GH 10218
-        # test DataFrame.where with Series slicing
-        df = DataFrame({'a': range(3), 'b': range(4, 7)})
-        result = df.where(df['a'] == 1)
-        expected = df[df['a'] == 1].reindex(df.index)
-        assert_frame_equal(result, expected)
-
-    def test_where_bug(self):
-
-        # GH 2793
-
-        df = DataFrame({'a': [1.0, 2.0, 3.0, 4.0], 'b': [4.0, 3.0, 2.0, 1.0]}, dtype = 'float64')
-        expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0], 'b': [4.0, 3.0, np.nan, np.nan]}, dtype = 'float64')
-        result = df.where(df > 2, np.nan)
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(result > 2, np.nan, inplace=True)
-        assert_frame_equal(result, expected)
-
-        # mixed
-        for dtype in ['int16','int8','int32','int64']:
-            df = DataFrame({'a': np.array([1, 2, 3, 4],dtype=dtype), 'b': np.array([4.0, 3.0, 2.0, 1.0], dtype = 'float64') })
-            expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0], 'b': [4.0, 3.0, np.nan, np.nan]}, dtype = 'float64')
-            result = df.where(df > 2, np.nan)
-            assert_frame_equal(result, expected)
-
-            result = df.copy()
-            result.where(result > 2, np.nan, inplace=True)
-            assert_frame_equal(result, expected)
-
-        # transpositional issue
-        # GH7506
-        a = DataFrame({ 0 : [1,2], 1 : [3,4], 2 : [5,6]})
-        b = DataFrame({ 0 : [np.nan,8], 1:[9,np.nan], 2:[np.nan,np.nan]})
-        do_not_replace = b.isnull() | (a > b)
-
-        expected = a.copy()
-        expected[~do_not_replace] = b
-
-        result = a.where(do_not_replace,b)
-        assert_frame_equal(result,expected)
-
-        a = DataFrame({ 0 : [4,6], 1 : [1,0]})
-        b = DataFrame({ 0 : [np.nan,3],1:[3,np.nan]})
-        do_not_replace = b.isnull() | (a > b)
-
-        expected = a.copy()
-        expected[~do_not_replace] = b
-
-        result = a.where(do_not_replace,b)
-        assert_frame_equal(result,expected)
-
-    def test_where_datetime(self):
-
-        # GH 3311
-        df = DataFrame(dict(A = date_range('20130102',periods=5),
-                            B = date_range('20130104',periods=5),
-                            C = np.random.randn(5)))
-
-        stamp = datetime(2013,1,3)
-        result = df[df>stamp]
-        expected = df.copy()
-        expected.loc[[0,1],'A'] = np.nan
-        assert_frame_equal(result,expected)
-
-    def test_where_none(self):
-        # GH 4667
-        # setting with None changes dtype
-        df = DataFrame({'series': Series(range(10))}).astype(float)
-        df[df > 7] = None
-        expected = DataFrame({'series': Series([0,1,2,3,4,5,6,7,np.nan,np.nan]) })
-        assert_frame_equal(df, expected)
-
-        # GH 7656
-        df = DataFrame([{'A': 1, 'B': np.nan, 'C': 'Test'}, {'A': np.nan, 'B': 'Test', 'C': np.nan}])
-        expected = df.where(~isnull(df), None)
-        with tm.assertRaisesRegexp(TypeError, 'boolean setting on mixed-type'):
-            df.where(~isnull(df), None, inplace=True)
-
-    def test_where_align(self):
-
-        def create():
-            df = DataFrame(np.random.randn(10,3))
-            df.iloc[3:5,0] = np.nan
-            df.iloc[4:6,1] = np.nan
-            df.iloc[5:8,2] = np.nan
-            return df
-
-        # series
-        df = create()
-        expected = df.fillna(df.mean())
-        result = df.where(pd.notnull(df),df.mean(),axis='columns')
-        assert_frame_equal(result, expected)
-
-        df.where(pd.notnull(df),df.mean(),inplace=True,axis='columns')
-        assert_frame_equal(df, expected)
-
-        df = create().fillna(0)
-        expected = df.apply(lambda x, y: x.where(x>0,y), y=df[0])
-        result = df.where(df>0,df[0],axis='index')
-        assert_frame_equal(result, expected)
-        result = df.where(df>0,df[0],axis='rows')
-        assert_frame_equal(result, expected)
-
-        # frame
-        df = create()
-        expected = df.fillna(1)
-        result = df.where(pd.notnull(df),DataFrame(1,index=df.index,columns=df.columns))
-        assert_frame_equal(result, expected)
-
-    def test_where_complex(self):
-        # GH 6345
-        expected = DataFrame([[1+1j, 2], [np.nan, 4+1j]], columns=['a', 'b'])
-        df = DataFrame([[1+1j, 2], [5+1j, 4+1j]], columns=['a', 'b'])
-        df[df.abs() >= 5] = np.nan
-        assert_frame_equal(df,expected)
-
-    def test_where_axis(self):
-        # GH 9736
-        df = DataFrame(np.random.randn(2, 2))
-        mask = DataFrame([[False, False], [False, False]])
-        s = Series([0, 1])
-
-        expected = DataFrame([[0, 0], [1, 1]], dtype='float64')
-        result = df.where(mask, s, axis='index')
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s, axis='index', inplace=True)
-        assert_frame_equal(result, expected)
-
-        expected = DataFrame([[0, 1], [0, 1]], dtype='float64')
-        result = df.where(mask, s, axis='columns')
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s, axis='columns', inplace=True)
-        assert_frame_equal(result, expected)
-
-        # Upcast needed
-        df = DataFrame([[1, 2], [3, 4]], dtype='int64')
-        mask = DataFrame([[False, False], [False, False]])
-        s = Series([0, np.nan])
-
-        expected = DataFrame([[0, 0], [np.nan, np.nan]], dtype='float64')
-        result = df.where(mask, s, axis='index')
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s, axis='index', inplace=True)
-        assert_frame_equal(result, expected)
-
-        expected = DataFrame([[0, np.nan], [0, np.nan]], dtype='float64')
-        result = df.where(mask, s, axis='columns')
-        assert_frame_equal(result, expected)
-
-        expected = DataFrame({0 : np.array([0, 0], dtype='int64'),
-                              1 : np.array([np.nan, np.nan], dtype='float64')})
-        result = df.copy()
-        result.where(mask, s, axis='columns', inplace=True)
-        assert_frame_equal(result, expected)
-
-        # Multiple dtypes (=> multiple Blocks)
-        df = pd.concat([DataFrame(np.random.randn(10, 2)),
-                        DataFrame(np.random.randint(0, 10, size=(10, 2)))],
-                       ignore_index=True, axis=1)
-        mask = DataFrame(False, columns=df.columns, index=df.index)
-        s1 = Series(1, index=df.columns)
-        s2 = Series(2, index=df.index)
-
-        result = df.where(mask, s1, axis='columns')
-        expected = DataFrame(1.0, columns=df.columns, index=df.index)
-        expected[2] = expected[2].astype(int)
-        expected[3] = expected[3].astype(int)
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s1, axis='columns', inplace=True)
-        assert_frame_equal(result, expected)
-
-        result = df.where(mask, s2, axis='index')
-        expected = DataFrame(2.0, columns=df.columns, index=df.index)
-        expected[2] = expected[2].astype(int)
-        expected[3] = expected[3].astype(int)
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s2, axis='index', inplace=True)
-        assert_frame_equal(result, expected)
-
-        # DataFrame vs DataFrame
-        d1 = df.copy().drop(1, axis=0)
-        expected = df.copy()
-        expected.loc[1, :] = np.nan
-
-        result = df.where(mask, d1)
-        assert_frame_equal(result, expected)
-        result = df.where(mask, d1, axis='index')
-        assert_frame_equal(result, expected)
-        result = df.copy()
-        result.where(mask, d1, inplace=True)
-        assert_frame_equal(result, expected)
-        result = df.copy()
-        result.where(mask, d1, inplace=True, axis='index')
-        assert_frame_equal(result, expected)
-
-        d2 = df.copy().drop(1, axis=1)
-        expected = df.copy()
-        expected.loc[:, 1] = np.nan
-
-        result = df.where(mask, d2)
-        assert_frame_equal(result, expected)
-        result = df.where(mask, d2, axis='columns')
-        assert_frame_equal(result, expected)
-        result = df.copy()
-        result.where(mask, d2, inplace=True)
-        assert_frame_equal(result, expected)
-        result = df.copy()
-        result.where(mask, d2, inplace=True, axis='columns')
-        assert_frame_equal(result, expected)
-
-    def test_mask(self):
-        df = DataFrame(np.random.randn(5, 3))
-        cond = df > 0
-
-        rs = df.where(cond, np.nan)
-        assert_frame_equal(rs, df.mask(df <= 0))
-        assert_frame_equal(rs, df.mask(~cond))
-
-        other = DataFrame(np.random.randn(5, 3))
-        rs = df.where(cond, other)
-        assert_frame_equal(rs, df.mask(df <= 0, other))
-        assert_frame_equal(rs, df.mask(~cond, other))
-
-    def test_mask_inplace(self):
-        # GH8801
-        df = DataFrame(np.random.randn(5, 3))
-        cond = df > 0
-
-        rdf = df.copy()
-
-        rdf.where(cond, inplace=True)
-        assert_frame_equal(rdf, df.where(cond))
-        assert_frame_equal(rdf, df.mask(~cond))
-
-        rdf = df.copy()
-        rdf.where(cond, -df, inplace=True)
-        assert_frame_equal(rdf, df.where(cond, -df))
-        assert_frame_equal(rdf, df.mask(~cond, -df))
-
-    def test_mask_edge_case_1xN_frame(self):
-        # GH4071
-        df = DataFrame([[1, 2]])
-        res = df.mask(DataFrame([[True, False]]))
-        expec = DataFrame([[nan, 2]])
-        assert_frame_equal(res, expec)
-
-    #----------------------------------------------------------------------
-    # Transposing
-
-    def test_transpose(self):
-        frame = self.frame
-        dft = frame.T
-        for idx, series in compat.iteritems(dft):
-            for col, value in compat.iteritems(series):
-                if np.isnan(value):
-                    self.assertTrue(np.isnan(frame[col][idx]))
-                else:
-                    self.assertEqual(value, frame[col][idx])
-
-        # mixed type
-        index, data = tm.getMixedTypeDict()
-        mixed = DataFrame(data, index=index)
-
-        mixed_T = mixed.T
-        for col, s in compat.iteritems(mixed_T):
-            self.assertEqual(s.dtype, np.object_)
-
-    def test_transpose_get_view(self):
-        dft = self.frame.T
-        dft.values[:, 5:10] = 5
-
-        self.assertTrue((self.frame.values[5:10] == 5).all())
-
-    #----------------------------------------------------------------------
-    # Renaming
-
-    def test_rename(self):
-        mapping = {
-            'A': 'a',
-            'B': 'b',
-            'C': 'c',
-            'D': 'd'
-        }
-
-        renamed = self.frame.rename(columns=mapping)
-        renamed2 = self.frame.rename(columns=str.lower)
-
-        assert_frame_equal(renamed, renamed2)
-        assert_frame_equal(renamed2.rename(columns=str.upper),
-                           self.frame, check_names=False)
-
-        # index
-        data = {
-            'A': {'foo': 0, 'bar': 1}
-        }
-
-        # gets sorted alphabetical
-        df = DataFrame(data)
-        renamed = df.rename(index={'foo': 'bar', 'bar': 'foo'})
-        self.assert_numpy_array_equal(renamed.index, ['foo', 'bar'])
-
-        renamed = df.rename(index=str.upper)
-        self.assert_numpy_array_equal(renamed.index, ['BAR', 'FOO'])
-
-        # have to pass something
-        self.assertRaises(TypeError, self.frame.rename)
-
-        # partial columns
-        renamed = self.frame.rename(columns={'C': 'foo', 'D': 'bar'})
-        self.assert_numpy_array_equal(renamed.columns, ['A', 'B', 'foo', 'bar'])
-
-        # other axis
-        renamed = self.frame.T.rename(index={'C': 'foo', 'D': 'bar'})
-        self.assert_numpy_array_equal(renamed.index, ['A', 'B', 'foo', 'bar'])
-
-        # index with name
-        index = Index(['foo', 'bar'], name='name')
-        renamer = DataFrame(data, index=index)
-        renamed = renamer.rename(index={'foo': 'bar', 'bar': 'foo'})
-        self.assert_numpy_array_equal(renamed.index, ['bar', 'foo'])
-        self.assertEqual(renamed.index.name, renamer.index.name)
-
-        # MultiIndex
-        tuples_index = [('foo1', 'bar1'), ('foo2', 'bar2')]
-        tuples_columns = [('fizz1', 'buzz1'), ('fizz2', 'buzz2')]
-        index = MultiIndex.from_tuples(tuples_index, names=['foo', 'bar'])
-        columns = MultiIndex.from_tuples(tuples_columns, names=['fizz', 'buzz'])
-        renamer = DataFrame([(0,0),(1,1)], index=index, columns=columns)
-        renamed = renamer.rename(index={'foo1': 'foo3', 'bar2': 'bar3'},
-                                 columns={'fizz1': 'fizz3', 'buzz2': 'buzz3'})
-        new_index = MultiIndex.from_tuples([('foo3', 'bar1'), ('foo2', 'bar3')])
-        new_columns = MultiIndex.from_tuples([('fizz3', 'buzz1'), ('fizz2', 'buzz3')])
-        self.assert_numpy_array_equal(renamed.index, new_index)
-        self.assert_numpy_array_equal(renamed.columns, new_columns)
-        self.assertEqual(renamed.index.names, renamer.index.names)
-        self.assertEqual(renamed.columns.names, renamer.columns.names)
-
-    def test_rename_nocopy(self):
-        renamed = self.frame.rename(columns={'C': 'foo'}, copy=False)
-        renamed['foo'] = 1.
-        self.assertTrue((self.frame['C'] == 1.).all())
-
-    def test_rename_inplace(self):
-        self.frame.rename(columns={'C': 'foo'})
-        self.assertIn('C', self.frame)
-        self.assertNotIn('foo', self.frame)
-
-        c_id = id(self.frame['C'])
-        frame = self.frame.copy()
-        frame.rename(columns={'C': 'foo'}, inplace=True)
-
-        self.assertNotIn('C', frame)
-        self.assertIn('foo', frame)
-        self.assertNotEqual(id(frame['foo']), c_id)
-
-    def test_rename_bug(self):
-        # GH 5344
-        # rename set ref_locs, and set_index was not resetting
-        df = DataFrame({ 0 : ['foo','bar'], 1 : ['bah','bas'], 2 : [1,2]})
-        df = df.rename(columns={0 : 'a'})
-        df = df.rename(columns={1 : 'b'})
-        df = df.set_index(['a','b'])
-        df.columns = ['2001-01-01']
-        expected = DataFrame([[1],[2]],index=MultiIndex.from_tuples([('foo','bah'),('bar','bas')],
-                                                                    names=['a','b']),
-                             columns=['2001-01-01'])
-        assert_frame_equal(df,expected)
-
-    #----------------------------------------------------------------------
-    # Time series related
-    def test_diff(self):
-        the_diff = self.tsframe.diff(1)
-
-        assert_series_equal(the_diff['A'],
-                            self.tsframe['A'] - self.tsframe['A'].shift(1))
-
-        # int dtype
-        a = 10000000000000000
-        b = a + 1
-        s = Series([a, b])
-
-        rs = DataFrame({'s': s}).diff()
-        self.assertEqual(rs.s[1], 1)
-
-        # mixed numeric
-        tf = self.tsframe.astype('float32')
-        the_diff = tf.diff(1)
-        assert_series_equal(the_diff['A'],
-                            tf['A'] - tf['A'].shift(1))
-
-        # issue 10907
-        df = pd.DataFrame({'y': pd.Series([2]), 'z': pd.Series([3])})
-        df.insert(0, 'x', 1)
-        result = df.diff(axis=1)
-        expected = pd.DataFrame({'x':np.nan, 'y':pd.Series(1), 'z':pd.Series(1)}).astype('float64')
-        assert_frame_equal(result, expected)
-
-    def test_diff_timedelta(self):
-        # GH 4533
-        df = DataFrame(dict(time=[Timestamp('20130101 9:01'),
-                                  Timestamp('20130101 9:02')],
-                            value=[1.0,2.0]))
-
-        res = df.diff()
-        exp = DataFrame([[pd.NaT, np.nan],
-                         [Timedelta('00:01:00'), 1]],
-                        columns=['time', 'value'])
-        assert_frame_equal(res, exp)
-
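The removed tests above exercise `DataFrame.diff`: the first row has no predecessor and becomes NaN, and on datetime columns the differences come back as timedeltas rather than floats (the GH 4533 case). A minimal standalone sketch of that behavior, outside the test suite:

```python
import numpy as np
import pandas as pd

# Row-to-row differences: the first row has nothing to subtract from, so NaN.
df = pd.DataFrame({"value": [1.0, 2.0, 4.0]})
diffed = df.diff()

# On a datetime column, diff() yields timedeltas (NaT in the first row).
times = pd.DataFrame({"time": [pd.Timestamp("2013-01-01 09:01"),
                               pd.Timestamp("2013-01-01 09:02")]})
delta = times.diff()["time"].iloc[1]  # a one-minute Timedelta
```
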
def test_diff_mixed_dtype(self): - df = DataFrame(np.random.randn(5, 3)) - df['A'] = np.array([1, 2, 3, 4, 5], dtype=object) - - result = df.diff() - self.assertEqual(result[0].dtype, np.float64) - - def test_diff_neg_n(self): - rs = self.tsframe.diff(-1) - xp = self.tsframe - self.tsframe.shift(-1) - assert_frame_equal(rs, xp) - - def test_diff_float_n(self): - rs = self.tsframe.diff(1.) - xp = self.tsframe.diff(1) - assert_frame_equal(rs, xp) - - def test_diff_axis(self): - # GH 9727 - df = DataFrame([[1., 2.], [3., 4.]]) - assert_frame_equal(df.diff(axis=1), DataFrame([[np.nan, 1.], [np.nan, 1.]])) - assert_frame_equal(df.diff(axis=0), DataFrame([[np.nan, np.nan], [2., 2.]])) - - def test_pct_change(self): - rs = self.tsframe.pct_change(fill_method=None) - assert_frame_equal(rs, self.tsframe / self.tsframe.shift(1) - 1) - - rs = self.tsframe.pct_change(2) - filled = self.tsframe.fillna(method='pad') - assert_frame_equal(rs, filled / filled.shift(2) - 1) - - rs = self.tsframe.pct_change(fill_method='bfill', limit=1) - filled = self.tsframe.fillna(method='bfill', limit=1) - assert_frame_equal(rs, filled / filled.shift(1) - 1) - - rs = self.tsframe.pct_change(freq='5D') - filled = self.tsframe.fillna(method='pad') - assert_frame_equal(rs, filled / filled.shift(freq='5D') - 1) - - def test_pct_change_shift_over_nas(self): - s = Series([1., 1.5, np.nan, 2.5, 3.]) - - df = DataFrame({'a': s, 'b': s}) - - chg = df.pct_change() - expected = Series([np.nan, 0.5, np.nan, 2.5 / 1.5 - 1, .2]) - edf = DataFrame({'a': expected, 'b': expected}) - assert_frame_equal(chg, edf) - - def test_shift(self): - # naive shift - shiftedFrame = self.tsframe.shift(5) - self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) - - shiftedSeries = self.tsframe['A'].shift(5) - assert_series_equal(shiftedFrame['A'], shiftedSeries) - - shiftedFrame = self.tsframe.shift(-5) - self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) - - shiftedSeries = self.tsframe['A'].shift(-5) - 
assert_series_equal(shiftedFrame['A'], shiftedSeries) - - # shift by 0 - unshifted = self.tsframe.shift(0) - assert_frame_equal(unshifted, self.tsframe) - - # shift by DateOffset - shiftedFrame = self.tsframe.shift(5, freq=datetools.BDay()) - self.assertEqual(len(shiftedFrame), len(self.tsframe)) - - shiftedFrame2 = self.tsframe.shift(5, freq='B') - assert_frame_equal(shiftedFrame, shiftedFrame2) - - d = self.tsframe.index[0] - shifted_d = d + datetools.BDay(5) - assert_series_equal(self.tsframe.xs(d), - shiftedFrame.xs(shifted_d), check_names=False) - - # shift int frame - int_shifted = self.intframe.shift(1) - - # Shifting with PeriodIndex - ps = tm.makePeriodFrame() - shifted = ps.shift(1) - unshifted = shifted.shift(-1) - self.assertTrue(shifted.index.equals(ps.index)) - - tm.assert_dict_equal(unshifted.ix[:, 0].valid(), ps.ix[:, 0], - compare_keys=False) - - shifted2 = ps.shift(1, 'B') - shifted3 = ps.shift(1, datetools.bday) - assert_frame_equal(shifted2, shifted3) - assert_frame_equal(ps, shifted2.shift(-1, 'B')) - - assertRaisesRegexp(ValueError, 'does not match PeriodIndex freq', - ps.shift, freq='D') - - - # shift other axis - # GH 6371 - df = DataFrame(np.random.rand(10,5)) - expected = pd.concat([DataFrame(np.nan,index=df.index,columns=[0]),df.iloc[:,0:-1]],ignore_index=True,axis=1) - result = df.shift(1,axis=1) - assert_frame_equal(result,expected) - - # shift named axis - df = DataFrame(np.random.rand(10,5)) - expected = pd.concat([DataFrame(np.nan,index=df.index,columns=[0]),df.iloc[:,0:-1]],ignore_index=True,axis=1) - result = df.shift(1,axis='columns') - assert_frame_equal(result,expected) - - def test_shift_bool(self): - df = DataFrame({'high': [True, False], - 'low': [False, False]}) - rs = df.shift(1) - xp = DataFrame(np.array([[np.nan, np.nan], - [True, False]], dtype=object), - columns=['high', 'low']) - assert_frame_equal(rs, xp) - - def test_shift_categorical(self): - # GH 9416 - s1 = pd.Series(['a', 'b', 'c'], dtype='category') - s2 = 
pd.Series(['A', 'B', 'C'], dtype='category') - df = DataFrame({'one': s1, 'two': s2}) - rs = df.shift(1) - xp = DataFrame({'one': s1.shift(1), 'two': s2.shift(1)}) - assert_frame_equal(rs, xp) - - def test_shift_empty(self): - # Regression test for #8019 - df = DataFrame({'foo': []}) - rs = df.shift(-1) - - assert_frame_equal(df, rs) - - def test_tshift(self): - # PeriodIndex - ps = tm.makePeriodFrame() - shifted = ps.tshift(1) - unshifted = shifted.tshift(-1) - - assert_frame_equal(unshifted, ps) - - shifted2 = ps.tshift(freq='B') - assert_frame_equal(shifted, shifted2) - - shifted3 = ps.tshift(freq=datetools.bday) - assert_frame_equal(shifted, shifted3) - - assertRaisesRegexp(ValueError, 'does not match', ps.tshift, freq='M') - - # DatetimeIndex - shifted = self.tsframe.tshift(1) - unshifted = shifted.tshift(-1) - - assert_frame_equal(self.tsframe, unshifted) - - shifted2 = self.tsframe.tshift(freq=self.tsframe.index.freq) - assert_frame_equal(shifted, shifted2) - - inferred_ts = DataFrame(self.tsframe.values, - Index(np.asarray(self.tsframe.index)), - columns=self.tsframe.columns) - shifted = inferred_ts.tshift(1) - unshifted = shifted.tshift(-1) - assert_frame_equal(shifted, self.tsframe.tshift(1)) - assert_frame_equal(unshifted, inferred_ts) - - no_freq = self.tsframe.ix[[0, 5, 7], :] - self.assertRaises(ValueError, no_freq.tshift) - - def test_apply(self): - # ufunc - applied = self.frame.apply(np.sqrt) - assert_series_equal(np.sqrt(self.frame['A']), applied['A']) - - # aggregator - applied = self.frame.apply(np.mean) - self.assertEqual(applied['A'], np.mean(self.frame['A'])) - - d = self.frame.index[0] - applied = self.frame.apply(np.mean, axis=1) - self.assertEqual(applied[d], np.mean(self.frame.xs(d))) - self.assertIs(applied.index, self.frame.index) # want this - - # invalid axis - df = DataFrame( - [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c']) - self.assertRaises(ValueError, df.apply, lambda x: x, 2) - - # GH9573 - df = 
DataFrame({'c0':['A','A','B','B'], 'c1':['C','C','D','D']}) - df = df.apply(lambda ts: ts.astype('category')) - self.assertEqual(df.shape, (4, 2)) - self.assertTrue(isinstance(df['c0'].dtype, com.CategoricalDtype)) - self.assertTrue(isinstance(df['c1'].dtype, com.CategoricalDtype)) - - def test_apply_mixed_datetimelike(self): - # mixed datetimelike - # GH 7778 - df = DataFrame({ 'A' : date_range('20130101',periods=3), 'B' : pd.to_timedelta(np.arange(3),unit='s') }) - result = df.apply(lambda x: x, axis=1) - assert_frame_equal(result, df) - - def test_apply_empty(self): - # empty - applied = self.empty.apply(np.sqrt) - self.assertTrue(applied.empty) - - applied = self.empty.apply(np.mean) - self.assertTrue(applied.empty) - - no_rows = self.frame[:0] - result = no_rows.apply(lambda x: x.mean()) - expected = Series(np.nan, index=self.frame.columns) - assert_series_equal(result, expected) - - no_cols = self.frame.ix[:, []] - result = no_cols.apply(lambda x: x.mean(), axis=1) - expected = Series(np.nan, index=self.frame.index) - assert_series_equal(result, expected) - - # 2476 - xp = DataFrame(index=['a']) - rs = xp.apply(lambda x: x['a'], axis=1) - assert_frame_equal(xp, rs) - - # reduce with an empty DataFrame - x = [] - result = self.empty.apply(x.append, axis=1, reduce=False) - assert_frame_equal(result, self.empty) - result = self.empty.apply(x.append, axis=1, reduce=True) - assert_series_equal(result, Series([], index=pd.Index([], dtype=object))) - - empty_with_cols = DataFrame(columns=['a', 'b', 'c']) - result = empty_with_cols.apply(x.append, axis=1, reduce=False) - assert_frame_equal(result, empty_with_cols) - result = empty_with_cols.apply(x.append, axis=1, reduce=True) - assert_series_equal(result, Series([], index=pd.Index([], dtype=object))) - - # Ensure that x.append hasn't been called - self.assertEqual(x, []) - - def test_apply_standard_nonunique(self): - df = DataFrame( - [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c']) - rs = df.apply(lambda 
s: s[0], axis=1) - xp = Series([1, 4, 7], ['a', 'a', 'c']) - assert_series_equal(rs, xp) - - rs = df.T.apply(lambda s: s[0], axis=0) - assert_series_equal(rs, xp) - - def test_apply_broadcast(self): - broadcasted = self.frame.apply(np.mean, broadcast=True) - agged = self.frame.apply(np.mean) - - for col, ts in compat.iteritems(broadcasted): - self.assertTrue((ts == agged[col]).all()) - - broadcasted = self.frame.apply(np.mean, axis=1, broadcast=True) - agged = self.frame.apply(np.mean, axis=1) - for idx in broadcasted.index: - self.assertTrue((broadcasted.xs(idx) == agged[idx]).all()) - - def test_apply_raw(self): - result0 = self.frame.apply(np.mean, raw=True) - result1 = self.frame.apply(np.mean, axis=1, raw=True) - - expected0 = self.frame.apply(lambda x: x.values.mean()) - expected1 = self.frame.apply(lambda x: x.values.mean(), axis=1) - - assert_series_equal(result0, expected0) - assert_series_equal(result1, expected1) - - # no reduction - result = self.frame.apply(lambda x: x * 2, raw=True) - expected = self.frame * 2 - assert_frame_equal(result, expected) - - def test_apply_axis1(self): - d = self.frame.index[0] - tapplied = self.frame.apply(np.mean, axis=1) - self.assertEqual(tapplied[d], np.mean(self.frame.xs(d))) - - def test_apply_ignore_failures(self): - result = self.mixed_frame._apply_standard(np.mean, 0, - ignore_failures=True) - expected = self.mixed_frame._get_numeric_data().apply(np.mean) - assert_series_equal(result, expected) - - def test_apply_mixed_dtype_corner(self): - df = DataFrame({'A': ['foo'], - 'B': [1.]}) - result = df[:0].apply(np.mean, axis=1) - # the result here is actually kind of ambiguous, should it be a Series - # or a DataFrame? 
- expected = Series(np.nan, index=pd.Index([], dtype='int64')) - assert_series_equal(result, expected) - - df = DataFrame({'A': ['foo'], - 'B': [1.]}) - result = df.apply(lambda x: x['A'], axis=1) - expected = Series(['foo'],index=[0]) - assert_series_equal(result, expected) - - result = df.apply(lambda x: x['B'], axis=1) - expected = Series([1.],index=[0]) - assert_series_equal(result, expected) - - def test_apply_empty_infer_type(self): - no_cols = DataFrame(index=['a', 'b', 'c']) - no_index = DataFrame(columns=['a', 'b', 'c']) - - def _check(df, f): - test_res = f(np.array([], dtype='f8')) - is_reduction = not isinstance(test_res, np.ndarray) - - def _checkit(axis=0, raw=False): - res = df.apply(f, axis=axis, raw=raw) - if is_reduction: - agg_axis = df._get_agg_axis(axis) - tm.assertIsInstance(res, Series) - self.assertIs(res.index, agg_axis) - else: - tm.assertIsInstance(res, DataFrame) - - _checkit() - _checkit(axis=1) - _checkit(raw=True) - _checkit(axis=0, raw=True) - - _check(no_cols, lambda x: x) - _check(no_cols, lambda x: x.mean()) - _check(no_index, lambda x: x) - _check(no_index, lambda x: x.mean()) - - result = no_cols.apply(lambda x: x.mean(), broadcast=True) - tm.assertIsInstance(result, DataFrame) - - def test_apply_with_args_kwds(self): - def add_some(x, howmuch=0): - return x + howmuch - - def agg_and_add(x, howmuch=0): - return x.mean() + howmuch - - def subtract_and_divide(x, sub, divide=1): - return (x - sub) / divide - - result = self.frame.apply(add_some, howmuch=2) - exp = self.frame.apply(lambda x: x + 2) - assert_frame_equal(result, exp) - - result = self.frame.apply(agg_and_add, howmuch=2) - exp = self.frame.apply(lambda x: x.mean() + 2) - assert_series_equal(result, exp) - - res = self.frame.apply(subtract_and_divide, args=(2,), divide=2) - exp = self.frame.apply(lambda x: (x - 2.) / 2.) 
- assert_frame_equal(res, exp) - - def test_apply_yield_list(self): - result = self.frame.apply(list) - assert_frame_equal(result, self.frame) - - def test_apply_reduce_Series(self): - self.frame.ix[::2, 'A'] = np.nan - expected = self.frame.mean(1) - result = self.frame.apply(np.mean, axis=1) - assert_series_equal(result, expected) - - def test_apply_differently_indexed(self): - df = DataFrame(np.random.randn(20, 10)) - - result0 = df.apply(Series.describe, axis=0) - expected0 = DataFrame(dict((i, v.describe()) - for i, v in compat.iteritems(df)), - columns=df.columns) - assert_frame_equal(result0, expected0) - - result1 = df.apply(Series.describe, axis=1) - expected1 = DataFrame(dict((i, v.describe()) - for i, v in compat.iteritems(df.T)), - columns=df.index).T - assert_frame_equal(result1, expected1) - - def test_apply_modify_traceback(self): - data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo', - 'bar', 'bar', 'bar', 'bar', - 'foo', 'foo', 'foo'], - 'B': ['one', 'one', 'one', 'two', - 'one', 'one', 'one', 'two', - 'two', 'two', 'one'], - 'C': ['dull', 'dull', 'shiny', 'dull', - 'dull', 'shiny', 'shiny', 'dull', - 'shiny', 'shiny', 'shiny'], - 'D': np.random.randn(11), - 'E': np.random.randn(11), - 'F': np.random.randn(11)}) - - data.loc[4,'C'] = np.nan - - def transform(row): - if row['C'].startswith('shin') and row['A'] == 'foo': - row['D'] = 7 - return row - - def transform2(row): - if (notnull(row['C']) and row['C'].startswith('shin') - and row['A'] == 'foo'): - row['D'] = 7 - return row - - try: - transformed = data.apply(transform, axis=1) - except AttributeError as e: - self.assertEqual(len(e.args), 2) - self.assertEqual(e.args[1], 'occurred at index 4') - self.assertEqual(e.args[0], "'float' object has no attribute 'startswith'") - - def test_apply_bug(self): - - # GH 6125 - import datetime - positions = pd.DataFrame([[1, 'ABC0', 50], [1, 'YUM0', 20], - [1, 'DEF0', 20], [2, 'ABC1', 50], - [2, 'YUM1', 20], [2, 'DEF1', 20]], - columns=['a', 'market', 
'position']) - def f(r): - return r['market'] - expected = positions.apply(f, axis=1) - - positions = DataFrame([[datetime.datetime(2013, 1, 1), 'ABC0', 50], - [datetime.datetime(2013, 1, 2), 'YUM0', 20], - [datetime.datetime(2013, 1, 3), 'DEF0', 20], - [datetime.datetime(2013, 1, 4), 'ABC1', 50], - [datetime.datetime(2013, 1, 5), 'YUM1', 20], - [datetime.datetime(2013, 1, 6), 'DEF1', 20]], - columns=['a', 'market', 'position']) - result = positions.apply(f, axis=1) - assert_series_equal(result,expected) - - def test_swapaxes(self): - df = DataFrame(np.random.randn(10, 5)) - assert_frame_equal(df.T, df.swapaxes(0, 1)) - assert_frame_equal(df.T, df.swapaxes(1, 0)) - assert_frame_equal(df, df.swapaxes(0, 0)) - self.assertRaises(ValueError, df.swapaxes, 2, 5) - - def test_apply_convert_objects(self): - data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo', - 'bar', 'bar', 'bar', 'bar', - 'foo', 'foo', 'foo'], - 'B': ['one', 'one', 'one', 'two', - 'one', 'one', 'one', 'two', - 'two', 'two', 'one'], - 'C': ['dull', 'dull', 'shiny', 'dull', - 'dull', 'shiny', 'shiny', 'dull', - 'shiny', 'shiny', 'shiny'], - 'D': np.random.randn(11), - 'E': np.random.randn(11), - 'F': np.random.randn(11)}) - - result = data.apply(lambda x: x, axis=1) - assert_frame_equal(result._convert(datetime=True), data) - - def test_apply_attach_name(self): - result = self.frame.apply(lambda x: x.name) - expected = Series(self.frame.columns, index=self.frame.columns) - assert_series_equal(result, expected) - - result = self.frame.apply(lambda x: x.name, axis=1) - expected = Series(self.frame.index, index=self.frame.index) - assert_series_equal(result, expected) - - # non-reductions - result = self.frame.apply(lambda x: np.repeat(x.name, len(x))) - expected = DataFrame(np.tile(self.frame.columns, - (len(self.frame.index), 1)), - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(result, expected) - - result = self.frame.apply(lambda x: np.repeat(x.name, len(x)), - axis=1) - 
expected = DataFrame(np.tile(self.frame.index, - (len(self.frame.columns), 1)).T, - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(result, expected) - - def test_apply_multi_index(self): - s = DataFrame([[1,2], [3,4], [5,6]]) - s.index = MultiIndex.from_arrays([['a','a','b'], ['c','d','d']]) - s.columns = ['col1','col2'] - res = s.apply(lambda x: Series({'min': min(x), 'max': max(x)}), 1) - tm.assertIsInstance(res.index, MultiIndex) - - def test_apply_dict(self): - - # GH 8735 - A = DataFrame([['foo', 'bar'], ['spam', 'eggs']]) - A_dicts = pd.Series([dict([(0, 'foo'), (1, 'spam')]), - dict([(0, 'bar'), (1, 'eggs')])]) - B = DataFrame([[0, 1], [2, 3]]) - B_dicts = pd.Series([dict([(0, 0), (1, 2)]), dict([(0, 1), (1, 3)])]) - fn = lambda x: x.to_dict() - - for df, dicts in [(A, A_dicts), (B, B_dicts)]: - reduce_true = df.apply(fn, reduce=True) - reduce_false = df.apply(fn, reduce=False) - reduce_none = df.apply(fn, reduce=None) - - assert_series_equal(reduce_true, dicts) - assert_frame_equal(reduce_false, df) - assert_series_equal(reduce_none, dicts) - - def test_applymap(self): - applied = self.frame.applymap(lambda x: x * 2) - assert_frame_equal(applied, self.frame * 2) - result = self.frame.applymap(type) - - # GH #465, function returning tuples - result = self.frame.applymap(lambda x: (x, x)) - tm.assertIsInstance(result['A'][0], tuple) - - # GH 2909, object conversion to float in constructor? 
-        df = DataFrame(data=[1,'a'])
-        result = df.applymap(lambda x: x)
-        self.assertEqual(result.dtypes[0], object)
-
-        df = DataFrame(data=[1.,'a'])
-        result = df.applymap(lambda x: x)
-        self.assertEqual(result.dtypes[0], object)
-
-        # GH2786
-        df = DataFrame(np.random.random((3,4)))
-        df2 = df.copy()
-        cols = ['a','a','a','a']
-        df.columns = cols
-
-        expected = df2.applymap(str)
-        expected.columns = cols
-        result = df.applymap(str)
-        assert_frame_equal(result,expected)
-
-        # datetime/timedelta
-        df['datetime'] = Timestamp('20130101')
-        df['timedelta'] = Timedelta('1 min')
-        result = df.applymap(str)
-        for f in ['datetime','timedelta']:
-            self.assertEqual(result.loc[0,f],str(df.loc[0,f]))
-
-    def test_filter(self):
-        # items
-        filtered = self.frame.filter(['A', 'B', 'E'])
-        self.assertEqual(len(filtered.columns), 2)
-        self.assertNotIn('E', filtered)
-
-        filtered = self.frame.filter(['A', 'B', 'E'], axis='columns')
-        self.assertEqual(len(filtered.columns), 2)
-        self.assertNotIn('E', filtered)
-
-        # other axis
-        idx = self.frame.index[0:4]
-        filtered = self.frame.filter(idx, axis='index')
-        expected = self.frame.reindex(index=idx)
-        assert_frame_equal(filtered, expected)
-
-        # like
-        fcopy = self.frame.copy()
-        fcopy['AA'] = 1
-
-        filtered = fcopy.filter(like='A')
-        self.assertEqual(len(filtered.columns), 2)
-        self.assertIn('AA', filtered)
-
-        # like with ints in column names
-        df = DataFrame(0., index=[0, 1, 2], columns=[0, 1, '_A', '_B'])
-        filtered = df.filter(like='_')
-        self.assertEqual(len(filtered.columns), 2)
-
-        # regex with ints in column names
-        # from PR #10384
-        df = DataFrame(0., index=[0, 1, 2], columns=['A1', 1, 'B', 2, 'C'])
-        expected = DataFrame(0., index=[0, 1, 2], columns=pd.Index([1, 2], dtype=object))
-        filtered = df.filter(regex='^[0-9]+$')
-        assert_frame_equal(filtered, expected)
-
-        expected = DataFrame(0., index=[0, 1, 2], columns=[0, '0', 1, '1'])
-        filtered = expected.filter(regex='^[0-9]+$')  # shouldn't remove anything
-        assert_frame_equal(filtered, expected)
-
-        # pass in None
-        with assertRaisesRegexp(TypeError, 'Must pass'):
-            self.frame.filter(items=None)
-
-        # objects
-        filtered = self.mixed_frame.filter(like='foo')
-        self.assertIn('foo', filtered)
-
-        # unicode columns, won't ascii-encode
-        df = self.frame.rename(columns={'B': u('\u2202')})
-        filtered = df.filter(like='C')
-        self.assertTrue('C' in filtered)
-
-    def test_filter_regex_search(self):
-        fcopy = self.frame.copy()
-        fcopy['AA'] = 1
-
-        # regex
-        filtered = fcopy.filter(regex='[A]+')
-        self.assertEqual(len(filtered.columns), 2)
-        self.assertIn('AA', filtered)
-
-        # doesn't have to be at beginning
-        df = DataFrame({'aBBa': [1, 2],
-                        'BBaBB': [1, 2],
-                        'aCCa': [1, 2],
-                        'aCCaBB': [1, 2]})
-
-        result = df.filter(regex='BB')
-        exp = df[[x for x in df.columns if 'BB' in x]]
-        assert_frame_equal(result, exp)
-
-    def test_filter_corner(self):
-        empty = DataFrame()
-
-        result = empty.filter([])
-        assert_frame_equal(result, empty)
-
-        result = empty.filter(like='foo')
-        assert_frame_equal(result, empty)
-
-    def test_select(self):
-        f = lambda x: x.weekday() == 2
-        result = self.tsframe.select(f, axis=0)
-        expected = self.tsframe.reindex(
-            index=self.tsframe.index[[f(x) for x in self.tsframe.index]])
-        assert_frame_equal(result, expected)
-
-        result = self.frame.select(lambda x: x in ('B', 'D'), axis=1)
-        expected = self.frame.reindex(columns=['B', 'D'])
-
-        assert_frame_equal(result, expected, check_names=False)  # TODO should reindex check_names?
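The filter tests being removed above cover the three mutually exclusive `DataFrame.filter` keywords: `items` (exact labels), `like` (substring match), and `regex` (pattern match). A small standalone sketch of each:

```python
import pandas as pd

df = pd.DataFrame(0.0, index=[0, 1, 2], columns=["AA", "AB", "BB", "C"])

# items: keep exactly the listed labels; missing ones ('E') are silently dropped
by_items = df.filter(items=["AA", "C", "E"])

# like: substring match against the column labels
by_like = df.filter(like="A")

# regex: regular-expression match against the column labels
by_regex = df.filter(regex="^B")
```
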
- - def test_reorder_levels(self): - index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], - labels=[[0, 0, 0, 0, 0, 0], - [0, 1, 2, 0, 1, 2], - [0, 1, 0, 1, 0, 1]], - names=['L0', 'L1', 'L2']) - df = DataFrame({'A': np.arange(6), 'B': np.arange(6)}, index=index) - - # no change, position - result = df.reorder_levels([0, 1, 2]) - assert_frame_equal(df, result) - - # no change, labels - result = df.reorder_levels(['L0', 'L1', 'L2']) - assert_frame_equal(df, result) - - # rotate, position - result = df.reorder_levels([1, 2, 0]) - e_idx = MultiIndex(levels=[['one', 'two', 'three'], [0, 1], ['bar']], - labels=[[0, 1, 2, 0, 1, 2], - [0, 1, 0, 1, 0, 1], - [0, 0, 0, 0, 0, 0]], - names=['L1', 'L2', 'L0']) - expected = DataFrame({'A': np.arange(6), 'B': np.arange(6)}, - index=e_idx) - assert_frame_equal(result, expected) - - result = df.reorder_levels([0, 0, 0]) - e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']], - labels=[[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]], - names=['L0', 'L0', 'L0']) - expected = DataFrame({'A': np.arange(6), 'B': np.arange(6)}, - index=e_idx) - assert_frame_equal(result, expected) - - result = df.reorder_levels(['L0', 'L0', 'L0']) - assert_frame_equal(result, expected) - - def test_sort_values(self): - - # API for 9816 - - # sort_index - frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4], - columns=['A', 'B', 'C', 'D']) - - # 9816 deprecated - with tm.assert_produces_warning(FutureWarning): - frame.sort(columns='A') - with tm.assert_produces_warning(FutureWarning): - frame.sort() - - unordered = frame.ix[[3, 2, 4, 1]] - expected = unordered.sort_index() - - result = unordered.sort_index(axis=0) - assert_frame_equal(result, expected) - - unordered = frame.ix[:, [2, 1, 3, 0]] - expected = unordered.sort_index(axis=1) - - result = unordered.sort_index(axis=1) - assert_frame_equal(result, expected) - assert_frame_equal(result, expected) - - # sortlevel - mi = MultiIndex.from_tuples([[1, 1, 3], 
[1, 1, 1]], names=list('ABC')) - df = DataFrame([[1, 2], [3, 4]], mi) - - result = df.sort_index(level='A', sort_remaining=False) - expected = df.sortlevel('A', sort_remaining=False) - assert_frame_equal(result, expected) - - df = df.T - result = df.sort_index(level='A', axis=1, sort_remaining=False) - expected = df.sortlevel('A', axis=1, sort_remaining=False) - assert_frame_equal(result, expected) - - # MI sort, but no by - mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) - df = DataFrame([[1, 2], [3, 4]], mi) - result = df.sort_index(sort_remaining=False) - expected = df.sort_index() - assert_frame_equal(result, expected) - - def test_sort_index(self): - frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4], - columns=['A', 'B', 'C', 'D']) - - # axis=0 - unordered = frame.ix[[3, 2, 4, 1]] - sorted_df = unordered.sort_index(axis=0) - expected = frame - assert_frame_equal(sorted_df, expected) - - sorted_df = unordered.sort_index(ascending=False) - expected = frame[::-1] - assert_frame_equal(sorted_df, expected) - - # axis=1 - unordered = frame.ix[:, ['D', 'B', 'C', 'A']] - sorted_df = unordered.sort_index(axis=1) - expected = frame - assert_frame_equal(sorted_df, expected) - - sorted_df = unordered.sort_index(axis=1, ascending=False) - expected = frame.ix[:, ::-1] - assert_frame_equal(sorted_df, expected) - - # by column - sorted_df = frame.sort_values(by='A') - indexer = frame['A'].argsort().values - expected = frame.ix[frame.index[indexer]] - assert_frame_equal(sorted_df, expected) - - sorted_df = frame.sort_values(by='A', ascending=False) - indexer = indexer[::-1] - expected = frame.ix[frame.index[indexer]] - assert_frame_equal(sorted_df, expected) - - sorted_df = frame.sort_values(by='A', ascending=False) - assert_frame_equal(sorted_df, expected) - - # GH4839 - sorted_df = frame.sort_values(by=['A'], ascending=[False]) - assert_frame_equal(sorted_df, expected) - - # check for now - sorted_df = frame.sort_values(by='A') - 
assert_frame_equal(sorted_df, expected[::-1]) - expected = frame.sort_values(by='A') - assert_frame_equal(sorted_df, expected) - - expected = frame.sort_values(by=['A', 'B'], ascending=False) - sorted_df = frame.sort_values(by=['A', 'B']) - assert_frame_equal(sorted_df, expected[::-1]) - - self.assertRaises(ValueError, lambda : frame.sort_values(by=['A','B'], axis=2, inplace=True)) - - msg = 'When sorting by column, axis must be 0' - with assertRaisesRegexp(ValueError, msg): - frame.sort_values(by='A', axis=1) - - msg = r'Length of ascending \(5\) != length of by \(2\)' - with assertRaisesRegexp(ValueError, msg): - frame.sort_values(by=['A', 'B'], axis=0, ascending=[True] * 5) - - def test_sort_index_categorical_index(self): - - df = DataFrame({'A' : np.arange(6,dtype='int64'), - 'B' : Series(list('aabbca')).astype('category',categories=list('cab')) }).set_index('B') - - result = df.sort_index() - expected = df.iloc[[4,0,1,5,2,3]] - assert_frame_equal(result, expected) - - result = df.sort_index(ascending=False) - expected = df.iloc[[3,2,5,1,0,4]] - assert_frame_equal(result, expected) - - def test_sort_nan(self): - # GH3917 - nan = np.nan - df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], - 'B': [9, nan, 5, 2, 5, 4, 5]}) - - # sort one column only - expected = DataFrame( - {'A': [nan, 1, 1, 2, 4, 6, 8], - 'B': [5, 9, 2, nan, 5, 5, 4]}, - index=[2, 0, 3, 1, 6, 4, 5]) - sorted_df = df.sort_values(['A'], na_position='first') - assert_frame_equal(sorted_df, expected) - - expected = DataFrame( - {'A': [nan, 8, 6, 4, 2, 1, 1], - 'B': [5, 4, 5, 5, nan, 9, 2]}, - index=[2, 5, 4, 6, 1, 0, 3]) - sorted_df = df.sort_values(['A'], na_position='first', ascending=False) - assert_frame_equal(sorted_df, expected) - - # na_position='last', order - expected = DataFrame( - {'A': [1, 1, 2, 4, 6, 8, nan], - 'B': [2, 9, nan, 5, 5, 4, 5]}, - index=[3, 0, 1, 6, 4, 5, 2]) - sorted_df = df.sort_values(['A','B']) - assert_frame_equal(sorted_df, expected) - - # na_position='first', order - 
expected = DataFrame(
-            {'A': [nan, 1, 1, 2, 4, 6, 8],
-             'B': [5, 2, 9, nan, 5, 5, 4]},
-            index=[2, 3, 0, 1, 6, 4, 5])
-        sorted_df = df.sort_values(['A','B'], na_position='first')
-        assert_frame_equal(sorted_df, expected)
-
-        # na_position='first', not order
-        expected = DataFrame(
-            {'A': [nan, 1, 1, 2, 4, 6, 8],
-             'B': [5, 9, 2, nan, 5, 5, 4]},
-            index=[2, 0, 3, 1, 6, 4, 5])
-        sorted_df = df.sort_values(['A','B'], ascending=[1,0], na_position='first')
-        assert_frame_equal(sorted_df, expected)
-
-        # na_position='last', not order
-        expected = DataFrame(
-            {'A': [8, 6, 4, 2, 1, 1, nan],
-             'B': [4, 5, 5, nan, 2, 9, 5]},
-            index=[5, 4, 6, 1, 3, 0, 2])
-        sorted_df = df.sort_values(['A','B'], ascending=[0,1], na_position='last')
-        assert_frame_equal(sorted_df, expected)
-
-        # Test DataFrame with nan label
-        df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4],
-                        'B': [9, nan, 5, 2, 5, 4, 5]},
-                       index = [1, 2, 3, 4, 5, 6, nan])
-
-        # NaN label, ascending=True, na_position='last'
-        sorted_df = df.sort_index(kind='quicksort', ascending=True, na_position='last')
-        expected = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4],
-                              'B': [9, nan, 5, 2, 5, 4, 5]},
-                             index = [1, 2, 3, 4, 5, 6, nan])
-        assert_frame_equal(sorted_df, expected)
-
-        # NaN label, ascending=True, na_position='first'
-        sorted_df = df.sort_index(na_position='first')
-        expected = DataFrame({'A': [4, 1, 2, nan, 1, 6, 8],
-                              'B': [5, 9, nan, 5, 2, 5, 4]},
-                             index = [nan, 1, 2, 3, 4, 5, 6])
-        assert_frame_equal(sorted_df, expected)
-
-        # NaN label, ascending=False, na_position='last'
-        sorted_df = df.sort_index(kind='quicksort', ascending=False)
-        expected = DataFrame({'A': [8, 6, 1, nan, 2, 1, 4],
-                              'B': [4, 5, 2, 5, nan, 9, 5]},
-                             index = [6, 5, 4, 3, 2, 1, nan])
-        assert_frame_equal(sorted_df, expected)
-
-        # NaN label, ascending=False, na_position='first'
-        sorted_df = df.sort_index(kind='quicksort', ascending=False, na_position='first')
-        expected = DataFrame({'A': [4, 8, 6, 1, nan, 2, 1],
-                              'B': [5, 4, 5, 2, 5, nan, 9]},
-                             index = [nan, 6, 5, 4, 3, 2, 1])
-        assert_frame_equal(sorted_df, expected)
-
-    def test_stable_descending_sort(self):
-        # GH #6399
-        df = DataFrame([[2, 'first'], [2, 'second'], [1, 'a'], [1, 'b']],
-                       columns=['sort_col', 'order'])
-        sorted_df = df.sort_values(by='sort_col', kind='mergesort',
-                                   ascending=False)
-        assert_frame_equal(df, sorted_df)
-
-    def test_stable_descending_multicolumn_sort(self):
-        nan = np.nan
-        df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4],
-                        'B': [9, nan, 5, 2, 5, 4, 5]})
-        # test stable mergesort
-        expected = DataFrame(
-            {'A': [nan, 8, 6, 4, 2, 1, 1],
-             'B': [5, 4, 5, 5, nan, 2, 9]},
-            index=[2, 5, 4, 6, 1, 3, 0])
-        sorted_df = df.sort_values(['A','B'], ascending=[0,1], na_position='first',
-                                   kind='mergesort')
-        assert_frame_equal(sorted_df, expected)
-
-        expected = DataFrame(
-            {'A': [nan, 8, 6, 4, 2, 1, 1],
-             'B': [5, 4, 5, 5, nan, 9, 2]},
-            index=[2, 5, 4, 6, 1, 0, 3])
-        sorted_df = df.sort_values(['A','B'], ascending=[0,0], na_position='first',
-                                   kind='mergesort')
-        assert_frame_equal(sorted_df, expected)
-
-    def test_sort_index_multicolumn(self):
-        import random
-        A = np.arange(5).repeat(20)
-        B = np.tile(np.arange(5), 20)
-        random.shuffle(A)
-        random.shuffle(B)
-        frame = DataFrame({'A': A, 'B': B,
-                           'C': np.random.randn(100)})
-
-        # use .sort_values #9816
-        with tm.assert_produces_warning(FutureWarning):
-            frame.sort_index(by=['A', 'B'])
-        result = frame.sort_values(by=['A', 'B'])
-        indexer = np.lexsort((frame['B'], frame['A']))
-        expected = frame.take(indexer)
-        assert_frame_equal(result, expected)
-
-        # use .sort_values #9816
-        with tm.assert_produces_warning(FutureWarning):
-            frame.sort_index(by=['A', 'B'], ascending=False)
-        result = frame.sort_values(by=['A', 'B'], ascending=False)
-        indexer = np.lexsort((frame['B'].rank(ascending=False),
-                              frame['A'].rank(ascending=False)))
-        expected = frame.take(indexer)
-        assert_frame_equal(result, expected)
-
-        # use .sort_values #9816
-        with tm.assert_produces_warning(FutureWarning):
-            frame.sort_index(by=['B', 'A'])
-        result = frame.sort_values(by=['B', 'A'])
-        indexer = np.lexsort((frame['A'], frame['B']))
-        expected = frame.take(indexer)
-        assert_frame_equal(result, expected)
-
-    def test_sort_index_inplace(self):
-        frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4],
-                          columns=['A', 'B', 'C', 'D'])
-
-        # axis=0
-        unordered = frame.ix[[3, 2, 4, 1]]
-        a_id = id(unordered['A'])
-        df = unordered.copy()
-        df.sort_index(inplace=True)
-        expected = frame
-        assert_frame_equal(df, expected)
-        self.assertNotEqual(a_id, id(df['A']))
-
-        df = unordered.copy()
-        df.sort_index(ascending=False, inplace=True)
-        expected = frame[::-1]
-        assert_frame_equal(df, expected)
-
-        # axis=1
-        unordered = frame.ix[:, ['D', 'B', 'C', 'A']]
-        df = unordered.copy()
-        df.sort_index(axis=1, inplace=True)
-        expected = frame
-        assert_frame_equal(df, expected)
-
-        df = unordered.copy()
-        df.sort_index(axis=1, ascending=False, inplace=True)
-        expected = frame.ix[:, ::-1]
-        assert_frame_equal(df, expected)
-
-    def test_sort_index_different_sortorder(self):
-        A = np.arange(20).repeat(5)
-        B = np.tile(np.arange(5), 20)
-
-        indexer = np.random.permutation(100)
-        A = A.take(indexer)
-        B = B.take(indexer)
-
-        df = DataFrame({'A': A, 'B': B,
-                        'C': np.random.randn(100)})
-
-        # use .sort_values #9816
-        with tm.assert_produces_warning(FutureWarning):
-            df.sort_index(by=['A', 'B'], ascending=[1, 0])
-        result = df.sort_values(by=['A', 'B'], ascending=[1, 0])
-
-        ex_indexer = np.lexsort((df.B.max() - df.B, df.A))
-        expected = df.take(ex_indexer)
-        assert_frame_equal(result, expected)
-
-        # test with multiindex, too
-        idf = df.set_index(['A', 'B'])
-
-        result = idf.sort_index(ascending=[1, 0])
-        expected = idf.take(ex_indexer)
-        assert_frame_equal(result, expected)
-
-        # also, Series!
-        result = idf['C'].sort_index(ascending=[1, 0])
-        assert_series_equal(result, expected['C'])
-
-    def test_sort_inplace(self):
-        frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4],
-                          columns=['A', 'B', 'C', 'D'])
-
-        sorted_df = frame.copy()
-        sorted_df.sort_values(by='A', inplace=True)
-        expected = frame.sort_values(by='A')
-        assert_frame_equal(sorted_df, expected)
-
-        sorted_df = frame.copy()
-        sorted_df.sort_values(by='A', ascending=False, inplace=True)
-        expected = frame.sort_values(by='A', ascending=False)
-        assert_frame_equal(sorted_df, expected)
-
-        sorted_df = frame.copy()
-        sorted_df.sort_values(by=['A', 'B'], ascending=False, inplace=True)
-        expected = frame.sort_values(by=['A', 'B'], ascending=False)
-        assert_frame_equal(sorted_df, expected)
-
-    def test_sort_index_duplicates(self):
-
-        ### with 9816, these are all translated to .sort_values
-
-        df = DataFrame([lrange(5,9), lrange(4)],
-                       columns=['a', 'a', 'b', 'b'])
-
-        with assertRaisesRegexp(ValueError, 'duplicate'):
-            # use .sort_values #9816
-            with tm.assert_produces_warning(FutureWarning):
-                df.sort_index(by='a')
-        with assertRaisesRegexp(ValueError, 'duplicate'):
-            df.sort_values(by='a')
-
-        with assertRaisesRegexp(ValueError, 'duplicate'):
-            # use .sort_values #9816
-            with tm.assert_produces_warning(FutureWarning):
-                df.sort_index(by=['a'])
-        with assertRaisesRegexp(ValueError, 'duplicate'):
-            df.sort_values(by=['a'])
-
-        with assertRaisesRegexp(ValueError, 'duplicate'):
-            # use .sort_values #9816
-            with tm.assert_produces_warning(FutureWarning):
-                # multi-column 'by' is separate codepath
-                df.sort_index(by=['a', 'b'])
-        with assertRaisesRegexp(ValueError, 'duplicate'):
-            # multi-column 'by' is separate codepath
-            df.sort_values(by=['a', 'b'])
-
-        # with multi-index
-        # GH4370
-        df = DataFrame(np.random.randn(4,2),columns=MultiIndex.from_tuples([('a',0),('a',1)]))
-        with assertRaisesRegexp(ValueError, 'levels'):
-            # use .sort_values #9816
-            with tm.assert_produces_warning(FutureWarning):
-                df.sort_index(by='a')
-        with assertRaisesRegexp(ValueError, 'levels'):
-            df.sort_values(by='a')
-
-        # convert tuples to a list of tuples
-        # use .sort_values #9816
-        with tm.assert_produces_warning(FutureWarning):
-            df.sort_index(by=[('a',1)])
-        expected = df.sort_values(by=[('a',1)])
-
-        # use .sort_values #9816
-        with tm.assert_produces_warning(FutureWarning):
-            df.sort_index(by=('a',1))
-        result = df.sort_values(by=('a',1))
-        assert_frame_equal(result, expected)
-
-    def test_sortlevel(self):
-        mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
-        df = DataFrame([[1, 2], [3, 4]], mi)
-        res = df.sortlevel('A', sort_remaining=False)
-        assert_frame_equal(df, res)
-
-        res = df.sortlevel(['A', 'B'], sort_remaining=False)
-        assert_frame_equal(df, res)
-
-    def test_sort_datetimes(self):
-
-        # GH 3461, argsort / lexsort differences for a datetime column
-        df = DataFrame(['a','a','a','b','c','d','e','f','g'],
-                       columns=['A'],
-                       index=date_range('20130101',periods=9))
-        dts = [Timestamp(x)
-               for x in ['2004-02-11','2004-01-21','2004-01-26',
-                         '2005-09-20','2010-10-04','2009-05-12',
-                         '2008-11-12','2010-09-28','2010-09-28']]
-        df['B'] = dts[::2] + dts[1::2]
-        df['C'] = 2.
-        df['A1'] = 3.
-
-        df1 = df.sort_values(by='A')
-        df2 = df.sort_values(by=['A'])
-        assert_frame_equal(df1,df2)
-
-        df1 = df.sort_values(by='B')
-        df2 = df.sort_values(by=['B'])
-        assert_frame_equal(df1,df2)
-
-    def test_frame_column_inplace_sort_exception(self):
-        s = self.frame['A']
-        with assertRaisesRegexp(ValueError, "This Series is a view"):
-            s.sort_values(inplace=True)
-
-        cp = s.copy()
-        cp.sort_values() # it works!
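The deleted sorting tests above pin down `DataFrame.sort_values` behavior: `na_position` controls where NaNs land and `ascending` may be a per-column list. A minimal standalone sketch of those semantics (using the modern `sort_values` API; the frame below is illustrative, not from the test suite):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, np.nan, 1, 6, 8, 4],
                   'B': [9, np.nan, 5, 2, 5, 4, 5]})

# NaNs sort to the front when na_position='first'
first = df.sort_values(['A', 'B'], na_position='first')
assert np.isnan(first['A'].iloc[0])

# NaNs sort to the back by default (na_position='last')
last = df.sort_values(['A', 'B'])
assert np.isnan(last['A'].iloc[-1])

# per-column sort direction via a list of booleans
mixed = df.sort_values(['A', 'B'], ascending=[True, False])
assert mixed['A'].dropna().is_monotonic_increasing
```

`kind='mergesort'` (as exercised in `test_stable_descending_sort`) is the only stable option, which is what makes the tie-ordering assertions in these tests deterministic.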
-
-    def test_combine_first(self):
-        # disjoint
-        head, tail = self.frame[:5], self.frame[5:]
-
-        combined = head.combine_first(tail)
-        reordered_frame = self.frame.reindex(combined.index)
-        assert_frame_equal(combined, reordered_frame)
-        self.assertTrue(tm.equalContents(combined.columns, self.frame.columns))
-        assert_series_equal(combined['A'], reordered_frame['A'])
-
-        # same index
-        fcopy = self.frame.copy()
-        fcopy['A'] = 1
-        del fcopy['C']
-
-        fcopy2 = self.frame.copy()
-        fcopy2['B'] = 0
-        del fcopy2['D']
-
-        combined = fcopy.combine_first(fcopy2)
-
-        self.assertTrue((combined['A'] == 1).all())
-        assert_series_equal(combined['B'], fcopy['B'])
-        assert_series_equal(combined['C'], fcopy2['C'])
-        assert_series_equal(combined['D'], fcopy['D'])
-
-        # overlap
-        head, tail = reordered_frame[:10].copy(), reordered_frame
-        head['A'] = 1
-
-        combined = head.combine_first(tail)
-        self.assertTrue((combined['A'][:10] == 1).all())
-
-        # reverse overlap
-        tail['A'][:10] = 0
-        combined = tail.combine_first(head)
-        self.assertTrue((combined['A'][:10] == 0).all())
-
-        # no overlap
-        f = self.frame[:10]
-        g = self.frame[10:]
-        combined = f.combine_first(g)
-        assert_series_equal(combined['A'].reindex(f.index), f['A'])
-        assert_series_equal(combined['A'].reindex(g.index), g['A'])
-
-        # corner cases
-        comb = self.frame.combine_first(self.empty)
-        assert_frame_equal(comb, self.frame)
-
-        comb = self.empty.combine_first(self.frame)
-        assert_frame_equal(comb, self.frame)
-
-        comb = self.frame.combine_first(DataFrame(index=["faz", "boo"]))
-        self.assertTrue("faz" in comb.index)
-
-        # #2525
-        df = DataFrame({'a': [1]}, index=[datetime(2012, 1, 1)])
-        df2 = DataFrame({}, columns=['b'])
-        result = df.combine_first(df2)
-        self.assertTrue('b' in result)
-
-    def test_combine_first_mixed_bug(self):
-        idx = Index(['a', 'b', 'c', 'e'])
-        ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx)
-        ser2 = Series(['a', 'b', 'c', 'e'], index=idx)
-        ser3 = Series([12, 4, 5, 97], index=idx)
-
-        frame1 = DataFrame({"col0": ser1,
-                            "col2": ser2,
-                            "col3": ser3})
-
-        idx = Index(['a', 'b', 'c', 'f'])
-        ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx)
-        ser2 = Series(['a', 'b', 'c', 'f'], index=idx)
-        ser3 = Series([12, 4, 5, 97], index=idx)
-
-        frame2 = DataFrame({"col1": ser1,
-                            "col2": ser2,
-                            "col5": ser3})
-
-        combined = frame1.combine_first(frame2)
-        self.assertEqual(len(combined.columns), 5)
-
-        # gh 3016 (same as in update)
-        df = DataFrame([[1.,2.,False, True],[4.,5.,True,False]],
-                       columns=['A','B','bool1','bool2'])
-
-        other = DataFrame([[45,45]],index=[0],columns=['A','B'])
-        result = df.combine_first(other)
-        assert_frame_equal(result, df)
-
-        df.ix[0,'A'] = np.nan
-        result = df.combine_first(other)
-        df.ix[0,'A'] = 45
-        assert_frame_equal(result, df)
-
-        # doc example
-        df1 = DataFrame({'A' : [1., np.nan, 3., 5., np.nan],
-                         'B' : [np.nan, 2., 3., np.nan, 6.]})
-
-        df2 = DataFrame({'A' : [5., 2., 4., np.nan, 3., 7.],
-                         'B' : [np.nan, np.nan, 3., 4., 6., 8.]})
-
-        result = df1.combine_first(df2)
-        expected = DataFrame({ 'A' : [1,2,3,5,3,7.], 'B' : [np.nan,2,3,4,6,8] })
-        assert_frame_equal(result,expected)
-
-        # GH3552, return object dtype with bools
-        df1 = DataFrame([[np.nan, 3.,True], [-4.6, np.nan, True], [np.nan, 7., False]])
-        df2 = DataFrame([[-42.6, np.nan, True], [-5., 1.6, False]], index=[1, 2])
-
-        result = df1.combine_first(df2)[2]
-        expected = Series([True, True, False], name=2)
-        assert_series_equal(result, expected)
-
-        # GH 3593, converting datetime64[ns] incorrecly
-        df0 = DataFrame({"a":[datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)]})
-        df1 = DataFrame({"a":[None, None, None]})
-        df2 = df1.combine_first(df0)
-        assert_frame_equal(df2, df0)
-
-        df2 = df0.combine_first(df1)
-        assert_frame_equal(df2, df0)
-
-        df0 = DataFrame({"a":[datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)]})
-        df1 = DataFrame({"a":[datetime(2000, 1, 2), None, None]})
-        df2 = df1.combine_first(df0)
-        result = df0.copy()
-        result.iloc[0,:] = df1.iloc[0,:]
-        assert_frame_equal(df2, result)
-
-        df2 = df0.combine_first(df1)
-        assert_frame_equal(df2, df0)
-
-    def test_update(self):
-        df = DataFrame([[1.5, nan, 3.],
-                        [1.5, nan, 3.],
-                        [1.5, nan, 3],
-                        [1.5, nan, 3]])
-
-        other = DataFrame([[3.6, 2., np.nan],
-                           [np.nan, np.nan, 7]], index=[1, 3])
-
-        df.update(other)
-
-        expected = DataFrame([[1.5, nan, 3],
-                              [3.6, 2, 3],
-                              [1.5, nan, 3],
-                              [1.5, nan, 7.]])
-        assert_frame_equal(df, expected)
-
-    def test_update_dtypes(self):
-
-        # gh 3016
-        df = DataFrame([[1.,2.,False, True],[4.,5.,True,False]],
-                       columns=['A','B','bool1','bool2'])
-
-        other = DataFrame([[45,45]],index=[0],columns=['A','B'])
-        df.update(other)
-
-        expected = DataFrame([[45.,45.,False, True],[4.,5.,True,False]],
-                             columns=['A','B','bool1','bool2'])
-        assert_frame_equal(df, expected)
-
-    def test_update_nooverwrite(self):
-        df = DataFrame([[1.5, nan, 3.],
-                        [1.5, nan, 3.],
-                        [1.5, nan, 3],
-                        [1.5, nan, 3]])
-
-        other = DataFrame([[3.6, 2., np.nan],
-                           [np.nan, np.nan, 7]], index=[1, 3])
-
-        df.update(other, overwrite=False)
-
-        expected = DataFrame([[1.5, nan, 3],
-                              [1.5, 2, 3],
-                              [1.5, nan, 3],
-                              [1.5, nan, 3.]])
-        assert_frame_equal(df, expected)
-
-    def test_update_filtered(self):
-        df = DataFrame([[1.5, nan, 3.],
-                        [1.5, nan, 3.],
-                        [1.5, nan, 3],
-                        [1.5, nan, 3]])
-
-        other = DataFrame([[3.6, 2., np.nan],
-                           [np.nan, np.nan, 7]], index=[1, 3])
-
-        df.update(other, filter_func=lambda x: x > 2)
-
-        expected = DataFrame([[1.5, nan, 3],
-                              [1.5, nan, 3],
-                              [1.5, nan, 3],
-                              [1.5, nan, 7.]])
-        assert_frame_equal(df, expected)
-
-    def test_update_raise(self):
-        df = DataFrame([[1.5, 1, 3.],
-                        [1.5, nan, 3.],
-                        [1.5, nan, 3],
-                        [1.5, nan, 3]])
-
-        other = DataFrame([[2., nan],
-                           [nan, 7]], index=[1, 3], columns=[1, 2])
-        with assertRaisesRegexp(ValueError, "Data overlaps"):
-            df.update(other, raise_conflict=True)
-
-    def test_update_from_non_df(self):
-        d = {'a': Series([1, 2, 3, 4]), 'b': Series([5, 6, 7, 8])}
-        df = DataFrame(d)
-
-        d['a'] = Series([5, 6, 7, 8])
-        df.update(d)
-
-        expected = DataFrame(d)
-
-        assert_frame_equal(df, expected)
-
-        d = {'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]}
-        df = DataFrame(d)
-
-        d['a'] = [5, 6, 7, 8]
-        df.update(d)
-
-        expected = DataFrame(d)
-
-        assert_frame_equal(df, expected)
-
-    def test_combineAdd(self):
-
-        with tm.assert_produces_warning(FutureWarning):
-            # trivial
-            comb = self.frame.combineAdd(self.frame)
-            assert_frame_equal(comb, self.frame * 2)
-
-            # more rigorous
-            a = DataFrame([[1., nan, nan, 2., nan]],
-                          columns=np.arange(5))
-            b = DataFrame([[2., 3., nan, 2., 6., nan]],
-                          columns=np.arange(6))
-            expected = DataFrame([[3., 3., nan, 4., 6., nan]],
-                                 columns=np.arange(6))
-
-            result = a.combineAdd(b)
-            assert_frame_equal(result, expected)
-            result2 = a.T.combineAdd(b.T)
-            assert_frame_equal(result2, expected.T)
-
-            expected2 = a.combine(b, operator.add, fill_value=0.)
-            assert_frame_equal(expected, expected2)
-
-            # corner cases
-            comb = self.frame.combineAdd(self.empty)
-            assert_frame_equal(comb, self.frame)
-
-            comb = self.empty.combineAdd(self.frame)
-            assert_frame_equal(comb, self.frame)
-
-            # integer corner case
-            df1 = DataFrame({'x': [5]})
-            df2 = DataFrame({'x': [1]})
-            df3 = DataFrame({'x': [6]})
-            comb = df1.combineAdd(df2)
-            assert_frame_equal(comb, df3)
-
-            # mixed type GH2191
-            df1 = DataFrame({'A': [1, 2], 'B': [3, 4]})
-            df2 = DataFrame({'A': [1, 2], 'C': [5, 6]})
-            rs = df1.combineAdd(df2)
-            xp = DataFrame({'A': [2, 4], 'B': [3, 4.], 'C': [5, 6.]})
-            assert_frame_equal(xp, rs)
-
-        # TODO: test integer fill corner?
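The `combine_first` and `update` tests above assert complementary semantics: `combine_first` returns a new frame with the caller's values taking precedence and holes filled from the other frame, while `update` mutates in place and skips NaN entries in the source. A minimal sketch of both (the toy frames are illustrative, not from the test suite):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'A': [1.0, np.nan], 'B': [np.nan, 2.0]})
df2 = pd.DataFrame({'A': [3.0, 4.0], 'B': [3.0, np.nan]})

# combine_first fills holes in df1 from df2, preferring df1's values
combined = df1.combine_first(df2)
assert combined.loc[0, 'A'] == 1.0   # df1's value wins
assert combined.loc[1, 'A'] == 4.0   # hole filled from df2
assert combined.loc[0, 'B'] == 3.0

# update mutates df1 in place, overwriting with non-NaN values from df2
df1.update(df2)
assert df1.loc[0, 'A'] == 3.0        # overwritten
assert df1.loc[1, 'B'] == 2.0        # df2 had NaN there, so the value is kept
```

This NaN-skipping rule is exactly what `test_update_nooverwrite` and `test_update_filtered` probe via the `overwrite` and `filter_func` arguments.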
-
-    def test_combineMult(self):
-
-        with tm.assert_produces_warning(FutureWarning):
-            # trivial
-            comb = self.frame.combineMult(self.frame)
-
-            assert_frame_equal(comb, self.frame ** 2)
-
-            # corner cases
-            comb = self.frame.combineMult(self.empty)
-            assert_frame_equal(comb, self.frame)
-
-            comb = self.empty.combineMult(self.frame)
-            assert_frame_equal(comb, self.frame)
-
-    def test_combine_generic(self):
-        df1 = self.frame
-        df2 = self.frame.ix[:-5, ['A', 'B', 'C']]
-
-        combined = df1.combine(df2, np.add)
-        combined2 = df2.combine(df1, np.add)
-        self.assertTrue(combined['D'].isnull().all())
-        self.assertTrue(combined2['D'].isnull().all())
-
-        chunk = combined.ix[:-5, ['A', 'B', 'C']]
-        chunk2 = combined2.ix[:-5, ['A', 'B', 'C']]
-
-        exp = self.frame.ix[:-5, ['A', 'B', 'C']].reindex_like(chunk) * 2
-        assert_frame_equal(chunk, exp)
-        assert_frame_equal(chunk2, exp)
-
-    def test_clip(self):
-        median = self.frame.median().median()
-
-        capped = self.frame.clip_upper(median)
-        self.assertFalse((capped.values > median).any())
-
-        floored = self.frame.clip_lower(median)
-        self.assertFalse((floored.values < median).any())
-
-        double = self.frame.clip(upper=median, lower=median)
-        self.assertFalse((double.values != median).any())
-
-    def test_dataframe_clip(self):
-
-        # GH #2747
-        df = DataFrame(np.random.randn(1000,2))
-
-        for lb, ub in [(-1,1),(1,-1)]:
-            clipped_df = df.clip(lb, ub)
-
-            lb, ub = min(lb,ub), max(ub,lb)
-            lb_mask = df.values <= lb
-            ub_mask = df.values >= ub
-            mask = ~lb_mask & ~ub_mask
-            self.assertTrue((clipped_df.values[lb_mask] == lb).all() == True)
-            self.assertTrue((clipped_df.values[ub_mask] == ub).all() == True)
-            self.assertTrue((clipped_df.values[mask] == df.values[mask]).all() == True)
-
-    def test_clip_against_series(self):
-        # GH #6966
-
-        df = DataFrame(np.random.randn(1000, 2))
-        lb = Series(np.random.randn(1000))
-        ub = lb + 1
-
-        clipped_df = df.clip(lb, ub, axis=0)
-
-        for i in range(2):
-            lb_mask = df.iloc[:, i] <= lb
-            ub_mask = df.iloc[:, i] >= ub
-            mask = ~lb_mask & ~ub_mask
-
-            result = clipped_df.loc[lb_mask, i]
-            assert_series_equal(result, lb[lb_mask], check_names=False)
-            self.assertEqual(result.name, i)
-
-            result = clipped_df.loc[ub_mask, i]
-            assert_series_equal(result, ub[ub_mask], check_names=False)
-            self.assertEqual(result.name, i)
-
-            assert_series_equal(clipped_df.loc[mask, i], df.loc[mask, i])
-
-    def test_clip_against_frame(self):
-        df = DataFrame(np.random.randn(1000, 2))
-        lb = DataFrame(np.random.randn(1000, 2))
-        ub = lb + 1
-
-        clipped_df = df.clip(lb, ub)
-
-        lb_mask = df <= lb
-        ub_mask = df >= ub
-        mask = ~lb_mask & ~ub_mask
-
-        assert_frame_equal(clipped_df[lb_mask], lb[lb_mask])
-        assert_frame_equal(clipped_df[ub_mask], ub[ub_mask])
-        assert_frame_equal(clipped_df[mask], df[mask])
-
-    def test_get_X_columns(self):
-        # numeric and object columns
-
-        df = DataFrame({'a': [1, 2, 3],
-                        'b' : [True, False, True],
-                        'c': ['foo', 'bar', 'baz'],
-                        'd': [None, None, None],
-                        'e': [3.14, 0.577, 2.773]})
-
-        self.assert_numpy_array_equal(df._get_numeric_data().columns,
-                                      ['a', 'b', 'e'])
-
-    def test_is_mixed_type(self):
-        self.assertFalse(self.frame._is_mixed_type)
-        self.assertTrue(self.mixed_frame._is_mixed_type)
-
-    def test_get_numeric_data(self):
-        intname = np.dtype(np.int_).name
-        floatname = np.dtype(np.float_).name
-        datetime64name = np.dtype('M8[ns]').name
-        objectname = np.dtype(np.object_).name
-
-        df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'f' : Timestamp('20010102')},
-                       index=np.arange(10))
-        result = df.get_dtype_counts()
-        expected = Series({'int64': 1, 'float64' : 1, datetime64name: 1, objectname : 1})
-        result.sort_index()
-        expected.sort_index()
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'a': 1., 'b': 2, 'c': 'foo',
-                        'd' : np.array([1.]*10,dtype='float32'),
-                        'e' : np.array([1]*10,dtype='int32'),
-                        'f' : np.array([1]*10,dtype='int16'),
-                        'g' : Timestamp('20010102')},
-                       index=np.arange(10))
-
-        result = df._get_numeric_data()
-        expected = df.ix[:, ['a', 'b','d','e','f']]
-        assert_frame_equal(result, expected)
-
-        only_obj = df.ix[:, ['c','g']]
-        result = only_obj._get_numeric_data()
-        expected = df.ix[:, []]
-        assert_frame_equal(result, expected)
-
-        df = DataFrame.from_dict({'a':[1,2], 'b':['foo','bar'],'c':[np.pi,np.e]})
-        result = df._get_numeric_data()
-        expected = DataFrame.from_dict({'a':[1,2], 'c':[np.pi,np.e]})
-        assert_frame_equal(result, expected)
-
-        df = result.copy()
-        result = df._get_numeric_data()
-        expected = df
-        assert_frame_equal(result, expected)
-
-    def test_bool_describe_in_mixed_frame(self):
-        df = DataFrame({
-            'string_data': ['a', 'b', 'c', 'd', 'e'],
-            'bool_data': [True, True, False, False, False],
-            'int_data': [10, 20, 30, 40, 50],
-        })
-
-        # Boolean data and integer data is included in .describe() output, string data isn't
-        self.assert_numpy_array_equal(df.describe().columns, ['bool_data', 'int_data'])
-
-        bool_describe = df.describe()['bool_data']
-
-        # Both the min and the max values should stay booleans
-        self.assertEqual(bool_describe['min'].dtype, np.bool_)
-        self.assertEqual(bool_describe['max'].dtype, np.bool_)
-
-        self.assertFalse(bool_describe['min'])
-        self.assertTrue(bool_describe['max'])
-
-        # For numeric operations, like mean or median, the values True/False are cast to
-        # the integer values 1 and 0
-        assert_almost_equal(bool_describe['mean'], 0.4)
-        assert_almost_equal(bool_describe['50%'], 0)
-
-    def test_reduce_mixed_frame(self):
-        # GH 6806
-        df = DataFrame({
-            'bool_data': [True, True, False, False, False],
-            'int_data': [10, 20, 30, 40, 50],
-            'string_data': ['a', 'b', 'c', 'd', 'e'],
-        })
-        df.reindex(columns=['bool_data', 'int_data', 'string_data'])
-        test = df.sum(axis=0)
-        assert_almost_equal(test.values, [2, 150, 'abcde'])
-        assert_series_equal(test, df.T.sum(axis=1))
-
-    def test_count(self):
-        f = lambda s: notnull(s).sum()
-        self._check_stat_op('count', f,
-                            has_skipna=False,
-                            has_numeric_only=True,
-                            check_dtype=False,
-                            check_dates=True)
-
-        # corner case
-        frame = DataFrame()
-        ct1 = frame.count(1)
-        tm.assertIsInstance(ct1, Series)
-
-        ct2 = frame.count(0)
-        tm.assertIsInstance(ct2, Series)
-
-        # GH #423
-        df = DataFrame(index=lrange(10))
-        result = df.count(1)
-        expected = Series(0, index=df.index)
-        assert_series_equal(result, expected)
-
-        df = DataFrame(columns=lrange(10))
-        result = df.count(0)
-        expected = Series(0, index=df.columns)
-        assert_series_equal(result, expected)
-
-        df = DataFrame()
-        result = df.count()
-        expected = Series(0, index=[])
-        assert_series_equal(result, expected)
-
-    def test_sum(self):
-        self._check_stat_op('sum', np.sum, has_numeric_only=True)
-
-        # mixed types (with upcasting happening)
-        self._check_stat_op('sum', np.sum, frame=self.mixed_float.astype('float32'),
-                            has_numeric_only=True, check_dtype=False, check_less_precise=True)
-
-    def test_stat_operators_attempt_obj_array(self):
-        data = {
-            'a': [-0.00049987540199591344, -0.0016467257772919831,
-                  0.00067695870775883013],
-            'b': [-0, -0, 0.0],
-            'c': [0.00031111847529610595, 0.0014902627951905339,
-                  -0.00094099200035979691]
-        }
-        df1 = DataFrame(data, index=['foo', 'bar', 'baz'],
-                        dtype='O')
-        methods = ['sum', 'mean', 'prod', 'var', 'std', 'skew', 'min', 'max']
-
-        # GH #676
-        df2 = DataFrame({0: [np.nan, 2], 1: [np.nan, 3],
-                         2: [np.nan, 4]}, dtype=object)
-
-        for df in [df1, df2]:
-            for meth in methods:
-                self.assertEqual(df.values.dtype, np.object_)
-                result = getattr(df, meth)(1)
-                expected = getattr(df.astype('f8'), meth)(1)
-
-                if not tm._incompat_bottleneck_version(meth):
-                    assert_series_equal(result, expected)
-
-    def test_mean(self):
-        self._check_stat_op('mean', np.mean, check_dates=True)
-
-    def test_product(self):
-        self._check_stat_op('product', np.prod)
-
-    def test_median(self):
-        def wrapper(x):
-            if isnull(x).any():
-                return np.nan
-            return np.median(x)
-
-        self._check_stat_op('median', wrapper, check_dates=True)
-
-    def test_min(self):
-        self._check_stat_op('min', np.min, check_dates=True)
-        self._check_stat_op('min', np.min, frame=self.intframe)
-
-    def test_cummin(self):
-        self.tsframe.ix[5:10, 0] = nan
-        self.tsframe.ix[10:15, 1] = nan
-        self.tsframe.ix[15:, 2] = nan
-
-        # axis = 0
-        cummin = self.tsframe.cummin()
-        expected = self.tsframe.apply(Series.cummin)
-        assert_frame_equal(cummin, expected)
-
-        # axis = 1
-        cummin = self.tsframe.cummin(axis=1)
-        expected = self.tsframe.apply(Series.cummin, axis=1)
-        assert_frame_equal(cummin, expected)
-
-        # works
-        df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
-        result = df.cummin()
-
-        # fix issue
-        cummin_xs = self.tsframe.cummin(axis=1)
-        self.assertEqual(np.shape(cummin_xs), np.shape(self.tsframe))
-
-    def test_cummax(self):
-        self.tsframe.ix[5:10, 0] = nan
-        self.tsframe.ix[10:15, 1] = nan
-        self.tsframe.ix[15:, 2] = nan
-
-        # axis = 0
-        cummax = self.tsframe.cummax()
-        expected = self.tsframe.apply(Series.cummax)
-        assert_frame_equal(cummax, expected)
-
-        # axis = 1
-        cummax = self.tsframe.cummax(axis=1)
-        expected = self.tsframe.apply(Series.cummax, axis=1)
-        assert_frame_equal(cummax, expected)
-
-        # works
-        df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
-        result = df.cummax()
-
-        # fix issue
-        cummax_xs = self.tsframe.cummax(axis=1)
-        self.assertEqual(np.shape(cummax_xs), np.shape(self.tsframe))
-
-    def test_max(self):
-        self._check_stat_op('max', np.max, check_dates=True)
-        self._check_stat_op('max', np.max, frame=self.intframe)
-
-    def test_mad(self):
-        f = lambda x: np.abs(x - x.mean()).mean()
-        self._check_stat_op('mad', f)
-
-    def test_var_std(self):
-        alt = lambda x: np.var(x, ddof=1)
-        self._check_stat_op('var', alt)
-
-        alt = lambda x: np.std(x, ddof=1)
-        self._check_stat_op('std', alt)
-
-        result = self.tsframe.std(ddof=4)
-        expected = self.tsframe.apply(lambda x: x.std(ddof=4))
-        assert_almost_equal(result, expected)
-
-        result = self.tsframe.var(ddof=4)
-        expected = self.tsframe.apply(lambda x: x.var(ddof=4))
-        assert_almost_equal(result, expected)
-
-        arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
-        result = nanops.nanvar(arr, axis=0)
-        self.assertFalse((result < 0).any())
-        if nanops._USE_BOTTLENECK:
-            nanops._USE_BOTTLENECK = False
-            result = nanops.nanvar(arr, axis=0)
-            self.assertFalse((result < 0).any())
-            nanops._USE_BOTTLENECK = True
-
-    def test_numeric_only_flag(self):
-        # GH #9201
-        methods = ['sem', 'var', 'std']
-        df1 = DataFrame(np.random.randn(5, 3), columns=['foo', 'bar', 'baz'])
-        # set one entry to a number in str format
-        df1.ix[0, 'foo'] = '100'
-
-        df2 = DataFrame(np.random.randn(5, 3), columns=['foo', 'bar', 'baz'])
-        # set one entry to a non-number str
-        df2.ix[0, 'foo'] = 'a'
-
-        for meth in methods:
-            result = getattr(df1, meth)(axis=1, numeric_only=True)
-            expected = getattr(df1[['bar', 'baz']], meth)(axis=1)
-            assert_series_equal(expected, result)
-
-            result = getattr(df2, meth)(axis=1, numeric_only=True)
-            expected = getattr(df2[['bar', 'baz']], meth)(axis=1)
-            assert_series_equal(expected, result)
-
-            # df1 has all numbers, df2 has a letter inside
-            self.assertRaises(TypeError, lambda : getattr(df1, meth)(axis=1, numeric_only=False))
-            self.assertRaises(TypeError, lambda : getattr(df2, meth)(axis=1, numeric_only=False))
-
-    def test_sem(self):
-        alt = lambda x: np.std(x, ddof=1)/np.sqrt(len(x))
-        self._check_stat_op('sem', alt)
-
-        result = self.tsframe.sem(ddof=4)
-        expected = self.tsframe.apply(lambda x: x.std(ddof=4)/np.sqrt(len(x)))
-        assert_almost_equal(result, expected)
-
-        arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
-        result = nanops.nansem(arr, axis=0)
-        self.assertFalse((result < 0).any())
-        if nanops._USE_BOTTLENECK:
-            nanops._USE_BOTTLENECK = False
-            result = nanops.nansem(arr, axis=0)
-            self.assertFalse((result < 0).any())
-            nanops._USE_BOTTLENECK = True
-
-    def test_skew(self):
-        tm._skip_if_no_scipy()
-        from scipy.stats import skew
-
-        def alt(x):
-            if len(x) < 3:
-                return np.nan
-            return skew(x, bias=False)
-
-        self._check_stat_op('skew', alt)
-
-    def test_kurt(self):
-        tm._skip_if_no_scipy()
-
-        from scipy.stats import kurtosis
-
-        def alt(x):
-            if len(x) < 4:
-                return np.nan
-            return kurtosis(x, bias=False)
-
-        self._check_stat_op('kurt', alt)
-
-        index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
-                           labels=[[0, 0, 0, 0, 0, 0],
-                                   [0, 1, 2, 0, 1, 2],
-                                   [0, 1, 0, 1, 0, 1]])
-        df = DataFrame(np.random.randn(6, 3), index=index)
-
-        kurt = df.kurt()
-        kurt2 = df.kurt(level=0).xs('bar')
-        assert_series_equal(kurt, kurt2, check_names=False)
-        self.assertTrue(kurt.name is None)
-        self.assertEqual(kurt2.name, 'bar')
-
-    def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
-                       has_numeric_only=False, check_dtype=True, check_dates=False,
-                       check_less_precise=False):
-        if frame is None:
-            frame = self.frame
-            # set some NAs
-            frame.ix[5:10] = np.nan
-            frame.ix[15:20, -2:] = np.nan
-
-        f = getattr(frame, name)
-
-        if check_dates:
-            df = DataFrame({'b': date_range('1/1/2001', periods=2)})
-            _f = getattr(df, name)
-            result = _f()
-            self.assertIsInstance(result, Series)
-
-            df['a'] = lrange(len(df))
-            result = getattr(df, name)()
-            self.assertIsInstance(result, Series)
-            self.assertTrue(len(result))
-
-        if has_skipna:
-            def skipna_wrapper(x):
-                nona = x.dropna()
-                if len(nona) == 0:
-                    return np.nan
-                return alternative(nona)
-
-            def wrapper(x):
-                return alternative(x.values)
-
-            result0 = f(axis=0, skipna=False)
-            result1 = f(axis=1, skipna=False)
-            assert_series_equal(result0, frame.apply(wrapper),
-                                check_dtype=check_dtype,
-                                check_less_precise=check_less_precise)
-            assert_series_equal(result1, frame.apply(wrapper, axis=1),
-                                check_dtype=False,
-                                check_less_precise=check_less_precise) # HACK: win32
-        else:
-            skipna_wrapper = alternative
-            wrapper = alternative
-
-        result0 = f(axis=0)
-        result1 = f(axis=1)
-        assert_series_equal(result0, frame.apply(skipna_wrapper),
-                            check_dtype=check_dtype,
-                            check_less_precise=check_less_precise)
-        if not tm._incompat_bottleneck_version(name):
-            assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
-                                check_dtype=False,
-                                check_less_precise=check_less_precise)
-
-        # check dtypes
-        if check_dtype:
-            lcd_dtype = frame.values.dtype
-            self.assertEqual(lcd_dtype, result0.dtype)
-            self.assertEqual(lcd_dtype, result1.dtype)
-
-        # result = f(axis=1)
-        # comp = frame.apply(alternative, axis=1).reindex(result.index)
-        # assert_series_equal(result, comp)
-
-        # bad axis
-        assertRaisesRegexp(ValueError, 'No axis named 2', f, axis=2)
-        # make sure works on mixed-type frame
-        getattr(self.mixed_frame, name)(axis=0)
-        getattr(self.mixed_frame, name)(axis=1)
-
-        if has_numeric_only:
-            getattr(self.mixed_frame, name)(axis=0, numeric_only=True)
-            getattr(self.mixed_frame, name)(axis=1, numeric_only=True)
-            getattr(self.frame, name)(axis=0, numeric_only=False)
-            getattr(self.frame, name)(axis=1, numeric_only=False)
-
-        # all NA case
-        if has_skipna:
-            all_na = self.frame * np.NaN
-            r0 = getattr(all_na, name)(axis=0)
-            r1 = getattr(all_na, name)(axis=1)
-            if not tm._incompat_bottleneck_version(name):
-                self.assertTrue(np.isnan(r0).all())
-                self.assertTrue(np.isnan(r1).all())
-
-    def test_mode(self):
-        df = pd.DataFrame({"A": [12, 12, 11, 12, 19, 11],
-                           "B": [10, 10, 10, np.nan, 3, 4],
-                           "C": [8, 8, 8, 9, 9, 9],
-                           "D": np.arange(6,dtype='int64'),
-                           "E": [8, 8, 1, 1, 3, 3]})
-        assert_frame_equal(df[["A"]].mode(),
-                           pd.DataFrame({"A": [12]}))
-        expected = pd.Series([], dtype='int64', name='D').to_frame()
-        assert_frame_equal(df[["D"]].mode(), expected)
-        expected = pd.Series([1, 3, 8], dtype='int64', name='E').to_frame()
-        assert_frame_equal(df[["E"]].mode(), expected)
-        assert_frame_equal(df[["A", "B"]].mode(),
-                           pd.DataFrame({"A": [12], "B": [10.]}))
-        assert_frame_equal(df.mode(),
-                           pd.DataFrame({"A": [12, np.nan, np.nan],
-                                         "B": [10, np.nan, np.nan],
-                                         "C": [8, 9, np.nan],
-                                         "D": [np.nan, np.nan, np.nan],
-                                         "E": [1, 3, 8]}))
-
-        # outputs in sorted order
-        df["C"] = list(reversed(df["C"]))
-        com.pprint_thing(df["C"])
-        com.pprint_thing(df["C"].mode())
-        a, b = (df[["A", "B", "C"]].mode(),
-                pd.DataFrame({"A": [12, np.nan],
-                              "B": [10, np.nan],
-                              "C": [8, 9]}))
-        com.pprint_thing(a)
-        com.pprint_thing(b)
-        assert_frame_equal(a, b)
-        # should work with heterogeneous types
-        df = pd.DataFrame({"A": np.arange(6,dtype='int64'),
-                           "B": pd.date_range('2011', periods=6),
-                           "C": list('abcdef')})
-        exp = pd.DataFrame({"A": pd.Series([], dtype=df["A"].dtype),
-                            "B": pd.Series([], dtype=df["B"].dtype),
-                            "C": pd.Series([], dtype=df["C"].dtype)})
-        assert_frame_equal(df.mode(), exp)
-
-        # and also when not empty
-        df.loc[1, "A"] = 0
-        df.loc[4, "B"] = df.loc[3, "B"]
-        df.loc[5, "C"] = 'e'
-        exp = pd.DataFrame({"A": pd.Series([0], dtype=df["A"].dtype),
-                            "B": pd.Series([df.loc[3, "B"]], dtype=df["B"].dtype),
-                            "C": pd.Series(['e'], dtype=df["C"].dtype)})
-
-        assert_frame_equal(df.mode(), exp)
-
-    def test_sum_corner(self):
-        axis0 = self.empty.sum(0)
-        axis1 = self.empty.sum(1)
-        tm.assertIsInstance(axis0, Series)
-        tm.assertIsInstance(axis1, Series)
-        self.assertEqual(len(axis0), 0)
-        self.assertEqual(len(axis1), 0)
-
-    def test_sum_object(self):
-        values = self.frame.values.astype(int)
-        frame = DataFrame(values, index=self.frame.index,
-                          columns=self.frame.columns)
-        deltas = frame * timedelta(1)
-        deltas.sum()
-
-    def test_sum_bool(self):
-        # ensure this works, bug report
-        bools = np.isnan(self.frame)
-        bools.sum(1)
-        bools.sum(0)
-
-    def test_mean_corner(self):
-        # unit test when have object data
-        the_mean = self.mixed_frame.mean(axis=0)
-        the_sum = self.mixed_frame.sum(axis=0, numeric_only=True)
-        self.assertTrue(the_sum.index.equals(the_mean.index))
-        self.assertTrue(len(the_mean.index) < len(self.mixed_frame.columns))
-
-        # xs sum mixed type, just want to know it works...
- the_mean = self.mixed_frame.mean(axis=1) - the_sum = self.mixed_frame.sum(axis=1, numeric_only=True) - self.assertTrue(the_sum.index.equals(the_mean.index)) - - # take mean of boolean column - self.frame['bool'] = self.frame['A'] > 0 - means = self.frame.mean(0) - self.assertEqual(means['bool'], self.frame['bool'].values.mean()) - - def test_stats_mixed_type(self): - # don't blow up - self.mixed_frame.std(1) - self.mixed_frame.var(1) - self.mixed_frame.mean(1) - self.mixed_frame.skew(1) - - def test_median_corner(self): - def wrapper(x): - if isnull(x).any(): - return np.nan - return np.median(x) - - self._check_stat_op('median', wrapper, frame=self.intframe, - check_dtype=False, check_dates=True) - - def test_round(self): - - # GH 2665 - - # Test that rounding an empty DataFrame does nothing - df = DataFrame() - assert_frame_equal(df, df.round()) - - # Here's the test frame we'll be working with - df = DataFrame( - {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]}) - - # Default round to integer (i.e. 
decimals=0) - expected_rounded = DataFrame( - {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]}) - assert_frame_equal(df.round(), expected_rounded) - - # Round with an integer - decimals = 2 - expected_rounded = DataFrame( - {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]}) - assert_frame_equal(df.round(decimals), expected_rounded) - - # This should also work with np.round (since np.round dispatches to - # df.round) - assert_frame_equal(np.round(df, decimals), expected_rounded) - - # Round with a list - round_list = [1, 2] - with self.assertRaises(TypeError): - df.round(round_list) - - # Round with a dictionary - expected_rounded = DataFrame( - {'col1': [1.1, 2.1, 3.1], 'col2': [1.23, 2.23, 3.23]}) - round_dict = {'col1': 1, 'col2': 2} - assert_frame_equal(df.round(round_dict), expected_rounded) - - # Incomplete dict - expected_partially_rounded = DataFrame( - {'col1': [1.123, 2.123, 3.123], 'col2': [1.2, 2.2, 3.2]}) - partial_round_dict = {'col2': 1} - assert_frame_equal( - df.round(partial_round_dict), expected_partially_rounded) - - # Dict with unknown elements - wrong_round_dict = {'col3': 2, 'col2': 1} - assert_frame_equal( - df.round(wrong_round_dict), expected_partially_rounded) - - # float input to `decimals` - non_int_round_dict = {'col1': 1, 'col2': 0.5} - with self.assertRaises(TypeError): - df.round(non_int_round_dict) - - # String input - non_int_round_dict = {'col1': 1, 'col2': 'foo'} - with self.assertRaises(TypeError): - df.round(non_int_round_dict) - - non_int_round_Series = Series(non_int_round_dict) - with self.assertRaises(TypeError): - df.round(non_int_round_Series) - - # List input - non_int_round_dict = {'col1': 1, 'col2': [1, 2]} - with self.assertRaises(TypeError): - df.round(non_int_round_dict) - - non_int_round_Series = Series(non_int_round_dict) - with self.assertRaises(TypeError): - df.round(non_int_round_Series) - - # Non integer Series inputs - non_int_round_Series = Series(non_int_round_dict) - with self.assertRaises(TypeError): - 
df.round(non_int_round_Series) - - non_int_round_Series = Series(non_int_round_dict) - with self.assertRaises(TypeError): - df.round(non_int_round_Series) - - # Negative numbers - negative_round_dict = {'col1': -1, 'col2': -2} - big_df = df * 100 - expected_neg_rounded = DataFrame( - {'col1': [110., 210, 310], 'col2': [100., 200, 300]}) - assert_frame_equal( - big_df.round(negative_round_dict), expected_neg_rounded) - - # nan in Series round - nan_round_Series = Series({'col1': nan, 'col2':1}) - expected_nan_round = DataFrame( - {'col1': [1.123, 2.123, 3.123], 'col2': [1.2, 2.2, 3.2]}) - if sys.version < LooseVersion('2.7'): - # Rounding with decimal is a ValueError in Python < 2.7 - with self.assertRaises(ValueError): - df.round(nan_round_Series) - else: - with self.assertRaises(TypeError): - df.round(nan_round_Series) - - # Make sure this doesn't break existing Series.round - assert_series_equal(df['col1'].round(1), expected_rounded['col1']) - - # named columns - # GH 11986 - decimals = 2 - expected_rounded = DataFrame( - {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]}) - df.columns.name = "cols" - expected_rounded.columns.name = "cols" - assert_frame_equal(df.round(decimals), expected_rounded) - - # interaction of named columns & series - assert_series_equal(df['col1'].round(decimals), - expected_rounded['col1']) - assert_series_equal(df.round(decimals)['col1'], - expected_rounded['col1']) - - def test_round_mixed_type(self): - # GH11885 - df = DataFrame({'col1': [1.1, 2.2, 3.3, 4.4], - 'col2': ['1', 'a', 'c', 'f'], - 'col3': date_range('20111111', periods=4)}) - round_0 = DataFrame({'col1': [1., 2., 3., 4.], - 'col2': ['1', 'a', 'c', 'f'], - 'col3': date_range('20111111', periods=4)}) - assert_frame_equal(df.round(), round_0) - assert_frame_equal(df.round(1), df) - assert_frame_equal(df.round({'col1': 1}), df) - assert_frame_equal(df.round({'col1': 0}), round_0) - assert_frame_equal(df.round({'col1': 0, 'col2': 1}), round_0) - 
assert_frame_equal(df.round({'col3': 1}), df) - - def test_round_issue(self): - # GH11611 - - df = pd.DataFrame(np.random.random([3, 3]), columns=['A', 'B', 'C'], - index=['first', 'second', 'third']) - - dfs = pd.concat((df, df), axis=1) - rounded = dfs.round() - self.assertTrue(rounded.index.equals(dfs.index)) - - decimals = pd.Series([1, 0, 2], index=['A', 'B', 'A']) - self.assertRaises(ValueError, df.round, decimals) - - def test_built_in_round(self): - if not compat.PY3: - raise nose.SkipTest("build in round cannot be overriden " - "prior to Python 3") - - # GH11763 - # Here's the test frame we'll be working with - df = DataFrame( - {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]}) - - # Default round to integer (i.e. decimals=0) - expected_rounded = DataFrame( - {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]}) - assert_frame_equal(round(df), expected_rounded) - - def test_quantile(self): - from numpy import percentile - - q = self.tsframe.quantile(0.1, axis=0) - self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) - q = self.tsframe.quantile(0.9, axis=1) - q = self.intframe.quantile(0.1) - self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) - - # test degenerate case - q = DataFrame({'x': [], 'y': []}).quantile(0.1, axis=0) - assert(np.isnan(q['x']) and np.isnan(q['y'])) - - # non-numeric exclusion - df = DataFrame({'col1':['A','A','B','B'], 'col2':[1,2,3,4]}) - rs = df.quantile(0.5) - xp = df.median() - assert_series_equal(rs, xp) - - # axis - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - result = df.quantile(.5, axis=1) - expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3]) - assert_series_equal(result, expected) - - result = df.quantile([.5, .75], axis=1) - expected = DataFrame({1: [1.5, 1.75], 2: [2.5, 2.75], - 3: [3.5, 3.75]}, index=[0.5, 0.75]) - assert_frame_equal(result, expected, check_index_type=True) - - # We may want to break API in the future to change this - # so that we exclude non-numeric along 
the same axis - # See GH #7312 - df = DataFrame([[1, 2, 3], - ['a', 'b', 4]]) - result = df.quantile(.5, axis=1) - expected = Series([3., 4.], index=[0, 1]) - assert_series_equal(result, expected) - - def test_quantile_axis_parameter(self): - # GH 9543/9544 - - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - - result = df.quantile(.5, axis=0) - - expected = Series([2., 3.], index=["A", "B"]) - assert_series_equal(result, expected) - - expected = df.quantile(.5, axis="index") - assert_series_equal(result, expected) - - result = df.quantile(.5, axis=1) - - expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3]) - assert_series_equal(result, expected) - - result = df.quantile(.5, axis="columns") - assert_series_equal(result, expected) - - self.assertRaises(ValueError, df.quantile, 0.1, axis=-1) - self.assertRaises(ValueError, df.quantile, 0.1, axis="column") - - def test_quantile_interpolation(self): - # GH #10174 - if _np_version_under1p9: - raise nose.SkipTest("Numpy version under 1.9") - - from numpy import percentile - - # interpolation = linear (default case) - q = self.tsframe.quantile(0.1, axis=0, interpolation='linear') - self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) - q = self.intframe.quantile(0.1) - self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) - - # test with and without interpolation keyword - q1 = self.intframe.quantile(0.1) - self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10)) - assert_series_equal(q, q1) - - # interpolation method other than default linear - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - result = df.quantile(.5, axis=1, interpolation='nearest') - expected = Series([1., 2., 3.], index=[1, 2, 3]) - assert_series_equal(result, expected) - - # axis - result = df.quantile([.5, .75], axis=1, interpolation='lower') - expected = DataFrame({1: [1., 1.], 2: [2., 2.], - 3: [3., 3.]}, index=[0.5, 0.75]) - assert_frame_equal(result, expected) - - # test degenerate case - 
df = DataFrame({'x': [], 'y': []}) - q = df.quantile(0.1, axis=0, interpolation='higher') - assert(np.isnan(q['x']) and np.isnan(q['y'])) - - # multi - df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], - columns=['a', 'b', 'c']) - result = df.quantile([.25, .5], interpolation='midpoint') - expected = DataFrame([[1.5, 1.5, 1.5], [2.5, 2.5, 2.5]], - index=[.25, .5], columns=['a', 'b', 'c']) - assert_frame_equal(result, expected) - - def test_quantile_interpolation_np_lt_1p9(self): - # GH #10174 - if not _np_version_under1p9: - raise nose.SkipTest("Numpy version is greater than 1.9") - - from numpy import percentile - - # interpolation = linear (default case) - q = self.tsframe.quantile(0.1, axis=0, interpolation='linear') - self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) - q = self.intframe.quantile(0.1) - self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) - - # test with and without interpolation keyword - q1 = self.intframe.quantile(0.1) - self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10)) - assert_series_equal(q, q1) - - # interpolation method other than default linear - expErrMsg = "Interpolation methods other than linear" - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - with assertRaisesRegexp(ValueError, expErrMsg): - df.quantile(.5, axis=1, interpolation='nearest') - - with assertRaisesRegexp(ValueError, expErrMsg): - df.quantile([.5, .75], axis=1, interpolation='lower') - - # test degenerate case - df = DataFrame({'x': [], 'y': []}) - with assertRaisesRegexp(ValueError, expErrMsg): - q = df.quantile(0.1, axis=0, interpolation='higher') - - # multi - df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], - columns=['a', 'b', 'c']) - with assertRaisesRegexp(ValueError, expErrMsg): - df.quantile([.25, .5], interpolation='midpoint') - - def test_quantile_multi(self): - df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], - columns=['a', 'b', 'c']) - result = df.quantile([.25, .5]) - expected = DataFrame([[1.5, 1.5, 
1.5], [2., 2., 2.]], - index=[.25, .5], columns=['a', 'b', 'c']) - assert_frame_equal(result, expected) - - # axis = 1 - result = df.quantile([.25, .5], axis=1) - expected = DataFrame([[1.5, 1.5, 1.5], [2., 2., 2.]], - index=[.25, .5], columns=[0, 1, 2]) - - # empty - result = DataFrame({'x': [], 'y': []}).quantile([0.1, .9], axis=0) - expected = DataFrame({'x': [np.nan, np.nan], 'y': [np.nan, np.nan]}, - index=[.1, .9]) - assert_frame_equal(result, expected) - - def test_quantile_datetime(self): - df = DataFrame({'a': pd.to_datetime(['2010', '2011']), 'b': [0, 5]}) - - # exclude datetime - result = df.quantile(.5) - expected = Series([2.5], index=['b']) - - # datetime - result = df.quantile(.5, numeric_only=False) - expected = Series([Timestamp('2010-07-02 12:00:00'), 2.5], - index=['a', 'b']) - assert_series_equal(result, expected) - - # datetime w/ multi - result = df.quantile([.5], numeric_only=False) - expected = DataFrame([[Timestamp('2010-07-02 12:00:00'), 2.5]], - index=[.5], columns=['a', 'b']) - assert_frame_equal(result, expected) - - # axis = 1 - df['c'] = pd.to_datetime(['2011', '2012']) - result = df[['a', 'c']].quantile(.5, axis=1, numeric_only=False) - expected = Series([Timestamp('2010-07-02 12:00:00'), - Timestamp('2011-07-02 12:00:00')], - index=[0, 1]) - assert_series_equal(result, expected) - - result = df[['a', 'c']].quantile([.5], axis=1, numeric_only=False) - expected = DataFrame([[Timestamp('2010-07-02 12:00:00'), - Timestamp('2011-07-02 12:00:00')]], - index=[0.5], columns=[0, 1]) - assert_frame_equal(result, expected) - - def test_quantile_invalid(self): - msg = 'percentiles should all be in the interval \\[0, 1\\]' - for invalid in [-1, 2, [0.5, -1], [0.5, 2]]: - with tm.assertRaisesRegexp(ValueError, msg): - self.tsframe.quantile(invalid) - - def test_cumsum(self): - self.tsframe.ix[5:10, 0] = nan - self.tsframe.ix[10:15, 1] = nan - self.tsframe.ix[15:, 2] = nan - - # axis = 0 - cumsum = self.tsframe.cumsum() - expected = 
self.tsframe.apply(Series.cumsum) - assert_frame_equal(cumsum, expected) - - # axis = 1 - cumsum = self.tsframe.cumsum(axis=1) - expected = self.tsframe.apply(Series.cumsum, axis=1) - assert_frame_equal(cumsum, expected) - - # works - df = DataFrame({'A': np.arange(20)}, index=np.arange(20)) - result = df.cumsum() - - # fix issue - cumsum_xs = self.tsframe.cumsum(axis=1) - self.assertEqual(np.shape(cumsum_xs), np.shape(self.tsframe)) - - def test_cumprod(self): - self.tsframe.ix[5:10, 0] = nan - self.tsframe.ix[10:15, 1] = nan - self.tsframe.ix[15:, 2] = nan - - # axis = 0 - cumprod = self.tsframe.cumprod() - expected = self.tsframe.apply(Series.cumprod) - assert_frame_equal(cumprod, expected) - - # axis = 1 - cumprod = self.tsframe.cumprod(axis=1) - expected = self.tsframe.apply(Series.cumprod, axis=1) - assert_frame_equal(cumprod, expected) - - # fix issue - cumprod_xs = self.tsframe.cumprod(axis=1) - self.assertEqual(np.shape(cumprod_xs), np.shape(self.tsframe)) - - # ints - df = self.tsframe.fillna(0).astype(int) - df.cumprod(0) - df.cumprod(1) - - # ints32 - df = self.tsframe.fillna(0).astype(np.int32) - df.cumprod(0) - df.cumprod(1) - - def test_rank(self): - tm._skip_if_no_scipy() - from scipy.stats import rankdata - - self.frame['A'][::2] = np.nan - self.frame['B'][::3] = np.nan - self.frame['C'][::4] = np.nan - self.frame['D'][::5] = np.nan - - ranks0 = self.frame.rank() - ranks1 = self.frame.rank(1) - mask = np.isnan(self.frame.values) - - fvals = self.frame.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, fvals) - exp0[mask] = np.nan - - exp1 = np.apply_along_axis(rankdata, 1, fvals) - exp1[mask] = np.nan - - assert_almost_equal(ranks0.values, exp0) - assert_almost_equal(ranks1.values, exp1) - - # integers - df = DataFrame(np.random.randint(0, 5, size=40).reshape((10, 4))) - - result = df.rank() - exp = df.astype(float).rank() - assert_frame_equal(result, exp) - - result = df.rank(1) - exp = df.astype(float).rank(1) - 
assert_frame_equal(result, exp) - - def test_rank2(self): - from datetime import datetime - df = DataFrame([[1, 3, 2], [1, 2, 3]]) - expected = DataFrame([[1.0, 3.0, 2.0], [1, 2, 3]]) / 3.0 - result = df.rank(1, pct=True) - assert_frame_equal(result, expected) - - df = DataFrame([[1, 3, 2], [1, 2, 3]]) - expected = df.rank(0) / 2.0 - result = df.rank(0, pct=True) - assert_frame_equal(result, expected) - - - - df = DataFrame([['b', 'c', 'a'], ['a', 'c', 'b']]) - expected = DataFrame([[2.0, 3.0, 1.0], [1, 3, 2]]) - result = df.rank(1, numeric_only=False) - assert_frame_equal(result, expected) - - - expected = DataFrame([[2.0, 1.5, 1.0], [1, 1.5, 2]]) - result = df.rank(0, numeric_only=False) - assert_frame_equal(result, expected) - - df = DataFrame([['b', np.nan, 'a'], ['a', 'c', 'b']]) - expected = DataFrame([[2.0, nan, 1.0], [1.0, 3.0, 2.0]]) - result = df.rank(1, numeric_only=False) - assert_frame_equal(result, expected) - - expected = DataFrame([[2.0, nan, 1.0], [1.0, 1.0, 2.0]]) - result = df.rank(0, numeric_only=False) - assert_frame_equal(result, expected) - - # f7u12, this does not work without extensive workaround - data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)], - [datetime(2000, 1, 2), datetime(2000, 1, 3), - datetime(2000, 1, 1)]] - df = DataFrame(data) - - # check the rank - expected = DataFrame([[2., nan, 1.], - [2., 3., 1.]]) - result = df.rank(1, numeric_only=False) - assert_frame_equal(result, expected) - - # mixed-type frames - self.mixed_frame['datetime'] = datetime.now() - self.mixed_frame['timedelta'] = timedelta(days=1,seconds=1) - - result = self.mixed_frame.rank(1) - expected = self.mixed_frame.rank(1, numeric_only=True) - assert_frame_equal(result, expected) - - df = DataFrame({"a":[1e-20, -5, 1e-20+1e-40, 10, 1e60, 1e80, 1e-30]}) - exp = DataFrame({"a":[ 3.5, 1. , 3.5, 5. , 6. , 7. , 2. 
]}) - assert_frame_equal(df.rank(), exp) - - def test_rank_na_option(self): - tm._skip_if_no_scipy() - from scipy.stats import rankdata - - self.frame['A'][::2] = np.nan - self.frame['B'][::3] = np.nan - self.frame['C'][::4] = np.nan - self.frame['D'][::5] = np.nan - - # bottom - ranks0 = self.frame.rank(na_option='bottom') - ranks1 = self.frame.rank(1, na_option='bottom') - - fvals = self.frame.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, fvals) - exp1 = np.apply_along_axis(rankdata, 1, fvals) - - assert_almost_equal(ranks0.values, exp0) - assert_almost_equal(ranks1.values, exp1) - - # top - ranks0 = self.frame.rank(na_option='top') - ranks1 = self.frame.rank(1, na_option='top') - - fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values - fval1 = self.frame.T - fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T - fval1 = fval1.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, fval0) - exp1 = np.apply_along_axis(rankdata, 1, fval1) - - assert_almost_equal(ranks0.values, exp0) - assert_almost_equal(ranks1.values, exp1) - - # descending - - # bottom - ranks0 = self.frame.rank(na_option='top', ascending=False) - ranks1 = self.frame.rank(1, na_option='top', ascending=False) - - fvals = self.frame.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, -fvals) - exp1 = np.apply_along_axis(rankdata, 1, -fvals) - - assert_almost_equal(ranks0.values, exp0) - assert_almost_equal(ranks1.values, exp1) - - # descending - - # top - ranks0 = self.frame.rank(na_option='bottom', ascending=False) - ranks1 = self.frame.rank(1, na_option='bottom', ascending=False) - - fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values - fval1 = self.frame.T - fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T - fval1 = fval1.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, -fval0) - exp1 = np.apply_along_axis(rankdata, 1, -fval1) - - assert_almost_equal(ranks0.values, exp0) - 
assert_almost_equal(ranks1.values, exp1) - - def test_axis_aliases(self): - - f = self.frame - - # reg name - expected = f.sum(axis=0) - result = f.sum(axis='index') - assert_series_equal(result, expected) - - expected = f.sum(axis=1) - result = f.sum(axis='columns') - assert_series_equal(result, expected) - - def test_combine_first_mixed(self): - a = Series(['a', 'b'], index=lrange(2)) - b = Series(lrange(2), index=lrange(2)) - f = DataFrame({'A': a, 'B': b}) - - a = Series(['a', 'b'], index=lrange(5, 7)) - b = Series(lrange(2), index=lrange(5, 7)) - g = DataFrame({'A': a, 'B': b}) - - combined = f.combine_first(g) - - def test_more_asMatrix(self): - values = self.mixed_frame.as_matrix() - self.assertEqual(values.shape[1], len(self.mixed_frame.columns)) - - def test_reindex_boolean(self): - frame = DataFrame(np.ones((10, 2), dtype=bool), - index=np.arange(0, 20, 2), - columns=[0, 2]) - - reindexed = frame.reindex(np.arange(10)) - self.assertEqual(reindexed.values.dtype, np.object_) - self.assertTrue(isnull(reindexed[0][1])) - - reindexed = frame.reindex(columns=lrange(3)) - self.assertEqual(reindexed.values.dtype, np.object_) - self.assertTrue(isnull(reindexed[1]).all()) - - def test_reindex_objects(self): - reindexed = self.mixed_frame.reindex(columns=['foo', 'A', 'B']) - self.assertIn('foo', reindexed) - - reindexed = self.mixed_frame.reindex(columns=['A', 'B']) - self.assertNotIn('foo', reindexed) - - def test_reindex_corner(self): - index = Index(['a', 'b', 'c']) - dm = self.empty.reindex(index=[1, 2, 3]) - reindexed = dm.reindex(columns=index) - self.assertTrue(reindexed.columns.equals(index)) - - # ints are weird - - smaller = self.intframe.reindex(columns=['A', 'B', 'E']) - self.assertEqual(smaller['E'].dtype, np.float64) - - def test_reindex_axis(self): - cols = ['A', 'B', 'E'] - reindexed1 = self.intframe.reindex_axis(cols, axis=1) - reindexed2 = self.intframe.reindex(columns=cols) - assert_frame_equal(reindexed1, reindexed2) - - rows = 
self.intframe.index[0:5] - reindexed1 = self.intframe.reindex_axis(rows, axis=0) - reindexed2 = self.intframe.reindex(index=rows) - assert_frame_equal(reindexed1, reindexed2) - - self.assertRaises(ValueError, self.intframe.reindex_axis, rows, axis=2) - - # no-op case - cols = self.frame.columns.copy() - newFrame = self.frame.reindex_axis(cols, axis=1) - assert_frame_equal(newFrame, self.frame) - - def test_reindex_with_nans(self): - df = DataFrame([[1, 2], [3, 4], [np.nan, np.nan], [7, 8], [9, 10]], - columns=['a', 'b'], - index=[100.0, 101.0, np.nan, 102.0, 103.0]) - - result = df.reindex(index=[101.0, 102.0, 103.0]) - expected = df.iloc[[1, 3, 4]] - assert_frame_equal(result, expected) - - result = df.reindex(index=[103.0]) - expected = df.iloc[[4]] - assert_frame_equal(result, expected) - - result = df.reindex(index=[101.0]) - expected = df.iloc[[1]] - assert_frame_equal(result, expected) - - def test_reindex_multi(self): - df = DataFrame(np.random.randn(3, 3)) - - result = df.reindex(lrange(4), lrange(4)) - expected = df.reindex(lrange(4)).reindex(columns=lrange(4)) - - assert_frame_equal(result, expected) - - df = DataFrame(np.random.randint(0, 10, (3, 3))) - - result = df.reindex(lrange(4), lrange(4)) - expected = df.reindex(lrange(4)).reindex(columns=lrange(4)) - - assert_frame_equal(result, expected) - - df = DataFrame(np.random.randint(0, 10, (3, 3))) - - result = df.reindex(lrange(2), lrange(2)) - expected = df.reindex(lrange(2)).reindex(columns=lrange(2)) - - assert_frame_equal(result, expected) - - df = DataFrame(np.random.randn(5, 3) + 1j, columns=['a', 'b', 'c']) - - result = df.reindex(index=[0, 1], columns=['a', 'b']) - expected = df.reindex([0, 1]).reindex(columns=['a', 'b']) - - assert_frame_equal(result, expected) - - def test_rename_objects(self): - renamed = self.mixed_frame.rename(columns=str.upper) - self.assertIn('FOO', renamed) - self.assertNotIn('foo', renamed) - - def test_fill_corner(self): - self.mixed_frame.ix[5:20,'foo'] = nan - 
self.mixed_frame.ix[-10:,'A'] = nan - - filled = self.mixed_frame.fillna(value=0) - self.assertTrue((filled.ix[5:20,'foo'] == 0).all()) - del self.mixed_frame['foo'] - - empty_float = self.frame.reindex(columns=[]) - result = empty_float.fillna(value=0) - - def test_count_objects(self): - dm = DataFrame(self.mixed_frame._series) - df = DataFrame(self.mixed_frame._series) - - assert_series_equal(dm.count(), df.count()) - assert_series_equal(dm.count(1), df.count(1)) - - def test_cumsum_corner(self): - dm = DataFrame(np.arange(20).reshape(4, 5), - index=lrange(4), columns=lrange(5)) - result = dm.cumsum() - - #---------------------------------------------------------------------- - # Stacking / unstacking - - def test_stack_unstack(self): - stacked = self.frame.stack() - stacked_df = DataFrame({'foo': stacked, 'bar': stacked}) - - unstacked = stacked.unstack() - unstacked_df = stacked_df.unstack() - - assert_frame_equal(unstacked, self.frame) - assert_frame_equal(unstacked_df['bar'], self.frame) - - unstacked_cols = stacked.unstack(0) - unstacked_cols_df = stacked_df.unstack(0) - assert_frame_equal(unstacked_cols.T, self.frame) - assert_frame_equal(unstacked_cols_df['bar'].T, self.frame) - - def test_stack_ints(self): - df = DataFrame( - np.random.randn(30, 27), - columns=MultiIndex.from_tuples( - list(itertools.product(range(3), repeat=3)) - ) - ) - assert_frame_equal( - df.stack(level=[1, 2]), - df.stack(level=1).stack(level=1) - ) - assert_frame_equal( - df.stack(level=[-2, -1]), - df.stack(level=1).stack(level=1) - ) - - df_named = df.copy() - df_named.columns.set_names(range(3), inplace=True) - assert_frame_equal( - df_named.stack(level=[1, 2]), - df_named.stack(level=1).stack(level=1) - ) - - def test_stack_mixed_levels(self): - columns = MultiIndex.from_tuples( - [('A', 'cat', 'long'), ('B', 'cat', 'long'), - ('A', 'dog', 'short'), ('B', 'dog', 'short')], - names=['exp', 'animal', 'hair_length'] - ) - df = DataFrame(randn(4, 4), columns=columns) - - 
animal_hair_stacked = df.stack(level=['animal', 'hair_length']) - exp_hair_stacked = df.stack(level=['exp', 'hair_length']) - - # GH #8584: Need to check that stacking works when a number - # is passed that is both a level name and in the range of - # the level numbers - df2 = df.copy() - df2.columns.names = ['exp', 'animal', 1] - assert_frame_equal(df2.stack(level=['animal', 1]), - animal_hair_stacked, check_names=False) - assert_frame_equal(df2.stack(level=['exp', 1]), - exp_hair_stacked, check_names=False) - - # When mixed types are passed and the ints are not level - # names, raise - self.assertRaises(ValueError, df2.stack, level=['animal', 0]) - - # GH #8584: Having 0 in the level names could raise a - # strange error about lexsort depth - df3 = df.copy() - df3.columns.names = ['exp', 'animal', 0] - assert_frame_equal(df3.stack(level=['animal', 0]), - animal_hair_stacked, check_names=False) - - def test_stack_int_level_names(self): - columns = MultiIndex.from_tuples( - [('A', 'cat', 'long'), ('B', 'cat', 'long'), - ('A', 'dog', 'short'), ('B', 'dog', 'short')], - names=['exp', 'animal', 'hair_length'] - ) - df = DataFrame(randn(4, 4), columns=columns) - - exp_animal_stacked = df.stack(level=['exp', 'animal']) - animal_hair_stacked = df.stack(level=['animal', 'hair_length']) - exp_hair_stacked = df.stack(level=['exp', 'hair_length']) - - df2 = df.copy() - df2.columns.names = [0, 1, 2] - assert_frame_equal(df2.stack(level=[1, 2]), animal_hair_stacked, - check_names=False ) - assert_frame_equal(df2.stack(level=[0, 1]), exp_animal_stacked, - check_names=False) - assert_frame_equal(df2.stack(level=[0, 2]), exp_hair_stacked, - check_names=False) - - # Out-of-order int column names - df3 = df.copy() - df3.columns.names = [2, 0, 1] - assert_frame_equal(df3.stack(level=[0, 1]), animal_hair_stacked, - check_names=False) - assert_frame_equal(df3.stack(level=[2, 0]), exp_animal_stacked, - check_names=False) - assert_frame_equal(df3.stack(level=[2, 1]), exp_hair_stacked, - 
check_names=False) - - - def test_unstack_bool(self): - df = DataFrame([False, False], - index=MultiIndex.from_arrays([['a', 'b'], ['c', 'l']]), - columns=['col']) - rs = df.unstack() - xp = DataFrame(np.array([[False, np.nan], [np.nan, False]], - dtype=object), - index=['a', 'b'], - columns=MultiIndex.from_arrays([['col', 'col'], - ['c', 'l']])) - assert_frame_equal(rs, xp) - - def test_unstack_level_binding(self): - # GH9856 - mi = pd.MultiIndex( - levels=[[u('foo'), u('bar')], [u('one'), u('two')], - [u('a'), u('b')]], - labels=[[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 1, 0]], - names=[u('first'), u('second'), u('third')]) - s = pd.Series(0, index=mi) - result = s.unstack([1, 2]).stack(0) - - expected_mi = pd.MultiIndex( - levels=[['foo', 'bar'], ['one', 'two']], - labels=[[0, 0, 1, 1], [0, 1, 0, 1]], - names=['first', 'second']) - - expected = pd.DataFrame(np.array([[np.nan, 0], - [0, np.nan], - [np.nan, 0], - [0, np.nan]], - dtype=np.float64), - index=expected_mi, - columns=pd.Index(['a', 'b'], name='third')) - - assert_frame_equal(result, expected) - - def test_unstack_to_series(self): - # check reversibility - data = self.frame.unstack() - - self.assertTrue(isinstance(data, Series)) - undo = data.unstack().T - assert_frame_equal(undo, self.frame) - - # check NA handling - data = DataFrame({'x': [1, 2, np.NaN], 'y': [3.0, 4, np.NaN]}) - data.index = Index(['a', 'b', 'c']) - result = data.unstack() - - midx = MultiIndex(levels=[['x', 'y'], ['a', 'b', 'c']], - labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]]) - expected = Series([1, 2, np.NaN, 3, 4, np.NaN], index=midx) - - assert_series_equal(result, expected) - - # check composability of unstack - old_data = data.copy() - for _ in range(4): - data = data.unstack() - assert_frame_equal(old_data, data) - - def test_unstack_dtypes(self): - - # GH 2929 - rows = [[1, 1, 3, 4], - [1, 2, 3, 4], - [2, 1, 3, 4], - [2, 2, 3, 4]] - - df = DataFrame(rows, columns=list('ABCD')) - result = df.get_dtype_counts() - expected = 
Series({'int64' : 4})
-        assert_series_equal(result, expected)
-
-        # single dtype
-        df2 = df.set_index(['A','B'])
-        df3 = df2.unstack('B')
-        result = df3.get_dtype_counts()
-        expected = Series({'int64' : 4})
-        assert_series_equal(result, expected)
-
-        # mixed
-        df2 = df.set_index(['A','B'])
-        df2['C'] = 3.
-        df3 = df2.unstack('B')
-        result = df3.get_dtype_counts()
-        expected = Series({'int64' : 2, 'float64' : 2})
-        assert_series_equal(result, expected)
-
-        df2['D'] = 'foo'
-        df3 = df2.unstack('B')
-        result = df3.get_dtype_counts()
-        expected = Series({'float64' : 2, 'object' : 2})
-        assert_series_equal(result, expected)
-
-        # GH7405
-        for c, d in (np.zeros(5), np.zeros(5)), \
-                    (np.arange(5, dtype='f8'), np.arange(5, 10, dtype='f8')):
-
-            df = DataFrame({'A': ['a']*5, 'C':c, 'D':d,
-                            'B':pd.date_range('2012-01-01', periods=5)})
-
-            right = df.iloc[:3].copy(deep=True)
-
-            df = df.set_index(['A', 'B'])
-            df['D'] = df['D'].astype('int64')
-
-            left = df.iloc[:3].unstack(0)
-            right = right.set_index(['A', 'B']).unstack(0)
-            right[('D', 'a')] = right[('D', 'a')].astype('int64')
-
-            self.assertEqual(left.shape, (3, 2))
-            assert_frame_equal(left, right)
-
-    def test_unstack_non_unique_index_names(self):
-        idx = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')],
-                                     names=['c1', 'c1'])
-        df = DataFrame([1, 2], index=idx)
-        with tm.assertRaises(ValueError):
-            df.unstack('c1')
-
-        with tm.assertRaises(ValueError):
-            df.T.stack('c1')
-
-    def test_unstack_nan_index(self):  # GH7466
-        cast = lambda val: '{0:1}'.format('' if val != val else val)
-        nan = np.nan
-
-        def verify(df):
-            mk_list = lambda a: list(a) if isinstance(a, tuple) else [a]
-            rows, cols = df.notnull().values.nonzero()
-            for i, j in zip(rows, cols):
-                left = sorted(df.iloc[i, j].split('.'))
-                right = mk_list(df.index[i]) + mk_list(df.columns[j])
-                right = sorted(list(map(cast, right)))
-                self.assertEqual(left, right)
-
-        df = DataFrame({'jim':['a', 'b', nan, 'd'],
-                        'joe':['w', 'x', 'y', 'z'],
-                        'jolie':['a.w', 'b.x', ' .y', 'd.z']})
-
-        left = df.set_index(['jim', 'joe']).unstack()['jolie']
-        right = df.set_index(['joe', 'jim']).unstack()['jolie'].T
-        assert_frame_equal(left, right)
-
-        for idx in permutations(df.columns[:2]):
-            mi = df.set_index(list(idx))
-            for lev in range(2):
-                udf = mi.unstack(level=lev)
-                self.assertEqual(udf.notnull().values.sum(), len(df))
-                verify(udf['jolie'])
-
-        df = DataFrame({'1st':['d'] * 3 + [nan] * 5 + ['a'] * 2 +
-                              ['c'] * 3 + ['e'] * 2 + ['b'] * 5,
-                        '2nd':['y'] * 2 + ['w'] * 3 + [nan] * 3 +
-                              ['z'] * 4 + [nan] * 3 + ['x'] * 3 + [nan] * 2,
-                        '3rd':[67,39,53,72,57,80,31,18,11,30,59,
-                               50,62,59,76,52,14,53,60,51]})
-
-        df['4th'], df['5th'] = \
-            df.apply(lambda r: '.'.join(map(cast, r)), axis=1), \
-            df.apply(lambda r: '.'.join(map(cast, r.iloc[::-1])), axis=1)
-
-        for idx in permutations(['1st', '2nd', '3rd']):
-            mi = df.set_index(list(idx))
-            for lev in range(3):
-                udf = mi.unstack(level=lev)
-                self.assertEqual(udf.notnull().values.sum(), 2 * len(df))
-                for col in ['4th', '5th']:
-                    verify(udf[col])
-
-        # GH7403
-        df = pd.DataFrame({'A': list('aaaabbbb'),'B':range(8), 'C':range(8)})
-        df.iloc[3, 1] = np.NaN
-        left = df.set_index(['A', 'B']).unstack(0)
-
-        vals = [[3, 0, 1, 2, nan, nan, nan, nan],
-                [nan, nan, nan, nan, 4, 5, 6, 7]]
-        vals = list(map(list, zip(*vals)))
-        idx = Index([nan, 0, 1, 2, 4, 5, 6, 7], name='B')
-        cols = MultiIndex(levels=[['C'], ['a', 'b']],
-                          labels=[[0, 0], [0, 1]],
-                          names=[None, 'A'])
-
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        df = DataFrame({'A': list('aaaabbbb'), 'B':list(range(4))*2,
-                        'C':range(8)})
-        df.iloc[2,1] = np.NaN
-        left = df.set_index(['A', 'B']).unstack(0)
-
-        vals = [[2, nan], [0, 4], [1, 5], [nan, 6], [3, 7]]
-        cols = MultiIndex(levels=[['C'], ['a', 'b']],
-                          labels=[[0, 0], [0, 1]],
-                          names=[None, 'A'])
-        idx = Index([nan, 0, 1, 2, 3], name='B')
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        df = pd.DataFrame({'A': list('aaaabbbb'),'B':list(range(4))*2,
-                           'C':range(8)})
-        df.iloc[3,1] = np.NaN
-        left = df.set_index(['A', 'B']).unstack(0)
-
-        vals = [[3, nan], [0, 4], [1, 5], [2, 6], [nan, 7]]
-        cols = MultiIndex(levels=[['C'], ['a', 'b']],
-                          labels=[[0, 0], [0, 1]],
-                          names=[None, 'A'])
-        idx = Index([nan, 0, 1, 2, 3], name='B')
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        # GH7401
-        df = pd.DataFrame({'A': list('aaaaabbbbb'), 'C':np.arange(10),
-                           'B':date_range('2012-01-01', periods=5).tolist()*2 })
-
-        df.iloc[3,1] = np.NaN
-        left = df.set_index(['A', 'B']).unstack()
-
-        vals = np.array([[3, 0, 1, 2, nan, 4], [nan, 5, 6, 7, 8, 9]])
-        idx = Index(['a', 'b'], name='A')
-        cols = MultiIndex(levels=[['C'], date_range('2012-01-01', periods=5)],
-                          labels=[[0, 0, 0, 0, 0, 0], [-1, 0, 1, 2, 3, 4]],
-                          names=[None, 'B'])
-
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        # GH4862
-        vals = [['Hg', nan, nan, 680585148],
-                ['U', 0.0, nan, 680585148],
-                ['Pb', 7.07e-06, nan, 680585148],
-                ['Sn', 2.3614e-05, 0.0133, 680607017],
-                ['Ag', 0.0, 0.0133, 680607017],
-                ['Hg', -0.00015, 0.0133, 680607017]]
-        df = DataFrame(vals, columns=['agent', 'change', 'dosage', 's_id'],
-                       index=[17263, 17264, 17265, 17266, 17267, 17268])
-
-        left = df.copy().set_index(['s_id','dosage','agent']).unstack()
-
-        vals = [[nan, nan, 7.07e-06, nan, 0.0],
-                [0.0, -0.00015, nan, 2.3614e-05, nan]]
-
-        idx = MultiIndex(levels=[[680585148, 680607017], [0.0133]],
-                         labels=[[0, 1], [-1, 0]],
-                         names=['s_id', 'dosage'])
-
-        cols = MultiIndex(levels=[['change'], ['Ag', 'Hg', 'Pb', 'Sn', 'U']],
-                          labels=[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4]],
-                          names=[None, 'agent'])
-
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        left = df.ix[17264:].copy().set_index(['s_id','dosage','agent'])
-        assert_frame_equal(left.unstack(), right)
-
-        # GH9497 - multiple unstack with nulls
-        df = DataFrame({'1st':[1, 2, 1, 2, 1, 2],
-                        '2nd':pd.date_range('2014-02-01', periods=6, freq='D'),
-                        'jim':100 + np.arange(6),
-                        'joe':(np.random.randn(6) * 10).round(2)})
-
-        df['3rd'] = df['2nd'] - pd.Timestamp('2014-02-02')
-        df.loc[1, '2nd'] = df.loc[3, '2nd'] = nan
-        df.loc[1, '3rd'] = df.loc[4, '3rd'] = nan
-
-        left = df.set_index(['1st', '2nd', '3rd']).unstack(['2nd', '3rd'])
-        self.assertEqual(left.notnull().values.sum(), 2 * len(df))
-
-        for col in ['jim', 'joe']:
-            for _, r in df.iterrows():
-                key = r['1st'], (col, r['2nd'], r['3rd'])
-                self.assertEqual(r[col], left.loc[key])
-
-    def test_stack_datetime_column_multiIndex(self):
-        # GH 8039
-        t = datetime(2014, 1, 1)
-        df = DataFrame([1, 2, 3, 4], columns=MultiIndex.from_tuples([(t, 'A', 'B')]))
-        result = df.stack()
-
-        eidx = MultiIndex.from_product([(0, 1, 2, 3), ('B',)])
-        ecols = MultiIndex.from_tuples([(t, 'A')])
-        expected = DataFrame([1, 2, 3, 4], index=eidx, columns=ecols)
-        assert_frame_equal(result, expected)
-
-    def test_stack_partial_multiIndex(self):
-        # GH 8844
-        def _test_stack_with_multiindex(multiindex):
-            df = DataFrame(np.arange(3 * len(multiindex)).reshape(3, len(multiindex)),
-                           columns=multiindex)
-            for level in (-1, 0, 1, [0, 1], [1, 0]):
-                result = df.stack(level=level, dropna=False)
-
-                if isinstance(level, int):
-                    # Stacking a single level should not make any all-NaN rows,
-                    # so df.stack(level=level, dropna=False) should be the same
-                    # as df.stack(level=level, dropna=True).
-                    expected = df.stack(level=level, dropna=True)
-                    if isinstance(expected, Series):
-                        assert_series_equal(result, expected)
-                    else:
-                        assert_frame_equal(result, expected)
-
-                df.columns = MultiIndex.from_tuples(df.columns.get_values(),
-                                                    names=df.columns.names)
-                expected = df.stack(level=level, dropna=False)
-                if isinstance(expected, Series):
-                    assert_series_equal(result, expected)
-                else:
-                    assert_frame_equal(result, expected)
-
-        full_multiindex = MultiIndex.from_tuples([('B', 'x'), ('B', 'z'),
-                                                  ('A', 'y'),
-                                                  ('C', 'x'), ('C', 'u')],
-                                                 names=['Upper', 'Lower'])
-        for multiindex_columns in ([0, 1, 2, 3, 4],
-                                   [0, 1, 2, 3], [0, 1, 2, 4],
-                                   [0, 1, 2], [1, 2, 3], [2, 3, 4],
-                                   [0, 1], [0, 2], [0, 3],
-                                   [0], [2], [4]):
-            _test_stack_with_multiindex(full_multiindex[multiindex_columns])
-            if len(multiindex_columns) > 1:
-                multiindex_columns.reverse()
-                _test_stack_with_multiindex(full_multiindex[multiindex_columns])
-
-        df = DataFrame(np.arange(6).reshape(2, 3), columns=full_multiindex[[0, 1, 3]])
-        result = df.stack(dropna=False)
-        expected = DataFrame([[0, 2], [1, nan], [3, 5], [4, nan]],
-                             index=MultiIndex(levels=[[0, 1], ['u', 'x', 'y', 'z']],
-                                              labels=[[0, 0, 1, 1], [1, 3, 1, 3]],
-                                              names=[None, 'Lower']),
-                             columns=Index(['B', 'C'], name='Upper'),
-                             dtype=df.dtypes[0])
-        assert_frame_equal(result, expected)
-
-    def test_repr_with_mi_nat(self):
-        df = DataFrame({'X': [1, 2]},
-                       index=[[pd.NaT, pd.Timestamp('20130101')], ['a', 'b']])
-        res = repr(df)
-        exp = ' X\nNaT a 1\n2013-01-01 b 2'
-        nose.tools.assert_equal(res, exp)
-
-    def test_reset_index(self):
-        stacked = self.frame.stack()[::2]
-        stacked = DataFrame({'foo': stacked, 'bar': stacked})
-
-        names = ['first', 'second']
-        stacked.index.names = names
-        deleveled = stacked.reset_index()
-        for i, (lev, lab) in enumerate(zip(stacked.index.levels,
-                                           stacked.index.labels)):
-            values = lev.take(lab)
-            name = names[i]
-            assert_almost_equal(values, deleveled[name])
-
-        stacked.index.names = [None, None]
-        deleveled2 = stacked.reset_index()
-        self.assert_numpy_array_equal(deleveled['first'],
-                                      deleveled2['level_0'])
-        self.assert_numpy_array_equal(deleveled['second'],
-                                      deleveled2['level_1'])
-
-        # default name assigned
-        rdf = self.frame.reset_index()
-        self.assert_numpy_array_equal(rdf['index'], self.frame.index.values)
-
-        # default name assigned, corner case
-        df = self.frame.copy()
-        df['index'] = 'foo'
-        rdf = df.reset_index()
-        self.assert_numpy_array_equal(rdf['level_0'], self.frame.index.values)
-
-        # but this is ok
-        self.frame.index.name = 'index'
-        deleveled = self.frame.reset_index()
-        self.assert_numpy_array_equal(deleveled['index'],
-                                      self.frame.index.values)
-        self.assert_numpy_array_equal(deleveled.index,
-                                      np.arange(len(deleveled)))
-
-        # preserve column names
-        self.frame.columns.name = 'columns'
-        resetted = self.frame.reset_index()
-        self.assertEqual(resetted.columns.name, 'columns')
-
-        # only remove certain columns
-        frame = self.frame.reset_index().set_index(['index', 'A', 'B'])
-        rs = frame.reset_index(['A', 'B'])
-
-        assert_frame_equal(rs, self.frame, check_names=False)  # TODO should reset_index check_names ?
-
-        rs = frame.reset_index(['index', 'A', 'B'])
-        assert_frame_equal(rs, self.frame.reset_index(), check_names=False)
-
-        rs = frame.reset_index(['index', 'A', 'B'])
-        assert_frame_equal(rs, self.frame.reset_index(), check_names=False)
-
-        rs = frame.reset_index('A')
-        xp = self.frame.reset_index().set_index(['index', 'B'])
-        assert_frame_equal(rs, xp, check_names=False)
-
-        # test resetting in place
-        df = self.frame.copy()
-        resetted = self.frame.reset_index()
-        df.reset_index(inplace=True)
-        assert_frame_equal(df, resetted, check_names=False)
-
-        frame = self.frame.reset_index().set_index(['index', 'A', 'B'])
-        rs = frame.reset_index('A', drop=True)
-        xp = self.frame.copy()
-        del xp['A']
-        xp = xp.set_index(['B'], append=True)
-        assert_frame_equal(rs, xp, check_names=False)
-
-    def test_reset_index_right_dtype(self):
-        time = np.arange(0.0, 10, np.sqrt(2) / 2)
-        s1 = Series((9.81 * time ** 2) / 2,
-                    index=Index(time, name='time'),
-                    name='speed')
-        df = DataFrame(s1)
-
-        resetted = s1.reset_index()
-        self.assertEqual(resetted['time'].dtype, np.float64)
-
-        resetted = df.reset_index()
-        self.assertEqual(resetted['time'].dtype, np.float64)
-
-    def test_reset_index_multiindex_col(self):
-        vals = np.random.randn(3, 3).astype(object)
-        idx = ['x', 'y', 'z']
-        full = np.hstack(([[x] for x in idx], vals))
-        df = DataFrame(vals, Index(idx, name='a'),
-                       columns=[['b', 'b', 'c'], ['mean', 'median', 'mean']])
-        rs = df.reset_index()
-        xp = DataFrame(full, columns=[['a', 'b', 'b', 'c'],
-                                      ['', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        rs = df.reset_index(col_fill=None)
-        xp = DataFrame(full, columns=[['a', 'b', 'b', 'c'],
-                                      ['a', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        rs = df.reset_index(col_level=1, col_fill='blah')
-        xp = DataFrame(full, columns=[['blah', 'b', 'b', 'c'],
-                                      ['a', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        df = DataFrame(vals,
-                       MultiIndex.from_arrays([[0, 1, 2], ['x', 'y', 'z']],
-                                              names=['d', 'a']),
-                       columns=[['b', 'b', 'c'], ['mean', 'median', 'mean']])
-        rs = df.reset_index('a', )
-        xp = DataFrame(full, Index([0, 1, 2], name='d'),
-                       columns=[['a', 'b', 'b', 'c'],
-                                ['', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        rs = df.reset_index('a', col_fill=None)
-        xp = DataFrame(full, Index(lrange(3), name='d'),
-                       columns=[['a', 'b', 'b', 'c'],
-                                ['a', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        rs = df.reset_index('a', col_fill='blah', col_level=1)
-        xp = DataFrame(full, Index(lrange(3), name='d'),
-                       columns=[['blah', 'b', 'b', 'c'],
-                                ['a', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-    def test_reset_index_with_datetimeindex_cols(self):
-        # GH5818
-        #
-        df = pd.DataFrame([[1, 2], [3, 4]],
-                          columns=pd.date_range('1/1/2013', '1/2/2013'),
-                          index=['A', 'B'])
-
-        result = df.reset_index()
-        expected = pd.DataFrame([['A', 1, 2], ['B', 3, 4]],
-                                columns=['index', datetime(2013, 1, 1),
-                                         datetime(2013, 1, 2)])
-        assert_frame_equal(result, expected)
-
-    #----------------------------------------------------------------------
-    # Tests to cope with refactored internals
-    def test_as_matrix_numeric_cols(self):
-        self.frame['foo'] = 'bar'
-
-        values = self.frame.as_matrix(['A', 'B', 'C', 'D'])
-        self.assertEqual(values.dtype, np.float64)
-
-    def test_as_matrix_lcd(self):
-
-        # mixed lcd
-        values = self.mixed_float.as_matrix(['A', 'B', 'C', 'D'])
-        self.assertEqual(values.dtype, np.float64)
-
-        values = self.mixed_float.as_matrix(['A', 'B', 'C' ])
-        self.assertEqual(values.dtype, np.float32)
-
-        values = self.mixed_float.as_matrix(['C'])
-        self.assertEqual(values.dtype, np.float16)
-
-        values = self.mixed_int.as_matrix(['A','B','C','D'])
-        self.assertEqual(values.dtype, np.int64)
-
-        values = self.mixed_int.as_matrix(['A','D'])
-        self.assertEqual(values.dtype, np.int64)
-
-        # guess all ints are cast to uints....
-        values = self.mixed_int.as_matrix(['A','B','C'])
-        self.assertEqual(values.dtype, np.int64)
-
-        values = self.mixed_int.as_matrix(['A','C'])
-        self.assertEqual(values.dtype, np.int32)
-
-        values = self.mixed_int.as_matrix(['C','D'])
-        self.assertEqual(values.dtype, np.int64)
-
-        values = self.mixed_int.as_matrix(['A'])
-        self.assertEqual(values.dtype, np.int32)
-
-        values = self.mixed_int.as_matrix(['C'])
-        self.assertEqual(values.dtype, np.uint8)
-
-    def test_constructor_with_convert(self):
-        # this is actually mostly a test of lib.maybe_convert_objects
-        # #2845
-        df = DataFrame({'A' : [2**63-1] })
-        result = df['A']
-        expected = Series(np.asarray([2**63-1], np.int64), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [2**63] })
-        result = df['A']
-        expected = Series(np.asarray([2**63], np.object_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [datetime(2005, 1, 1), True] })
-        result = df['A']
-        expected = Series(np.asarray([datetime(2005, 1, 1), True], np.object_),
-                          name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [None, 1] })
-        result = df['A']
-        expected = Series(np.asarray([np.nan, 1], np.float_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0, 2] })
-        result = df['A']
-        expected = Series(np.asarray([1.0, 2], np.float_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0+2.0j, 3] })
-        result = df['A']
-        expected = Series(np.asarray([1.0+2.0j, 3], np.complex_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0+2.0j, 3.0] })
-        result = df['A']
-        expected = Series(np.asarray([1.0+2.0j, 3.0], np.complex_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0+2.0j, True] })
-        result = df['A']
-        expected = Series(np.asarray([1.0+2.0j, True], np.object_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0, None] })
-        result = df['A']
-        expected = Series(np.asarray([1.0, np.nan], np.float_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0+2.0j, None] })
-        result = df['A']
-        expected = Series(np.asarray([1.0+2.0j, np.nan], np.complex_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [2.0, 1, True, None] })
-        result = df['A']
-        expected = Series(np.asarray([2.0, 1, True, None], np.object_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [2.0, 1, datetime(2006, 1, 1), None] })
-        result = df['A']
-        expected = Series(np.asarray([2.0, 1, datetime(2006, 1, 1),
-                                      None], np.object_), name='A')
-        assert_series_equal(result, expected)
-
-    def test_construction_with_mixed(self):
-        # test construction edge cases with mixed types
-
-        # f7u12, this does not work without extensive workaround
-        data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)],
-                [datetime(2000, 1, 2), datetime(2000, 1, 3),
-                 datetime(2000, 1, 1)]]
-        df = DataFrame(data)
-
-        # check dtypes
-        result = df.get_dtype_counts().sort_values()
-        expected = Series({ 'datetime64[ns]' : 3 })
-
-        # mixed-type frames
-        self.mixed_frame['datetime'] = datetime.now()
-        self.mixed_frame['timedelta'] = timedelta(days=1,seconds=1)
-        self.assertEqual(self.mixed_frame['datetime'].dtype, 'M8[ns]')
-        self.assertEqual(self.mixed_frame['timedelta'].dtype, 'm8[ns]')
-        result = self.mixed_frame.get_dtype_counts().sort_values()
-        expected = Series({ 'float64' : 4,
-                            'object' : 1,
-                            'datetime64[ns]' : 1,
-                            'timedelta64[ns]' : 1}).sort_values()
-        assert_series_equal(result,expected)
-
-    def test_construction_with_conversions(self):
-
-        # convert from a numpy array of non-ns timedelta64
-        arr = np.array([1,2,3],dtype='timedelta64[s]')
-        s = Series(arr)
-        expected = Series(timedelta_range('00:00:01',periods=3,freq='s'))
-        assert_series_equal(s,expected)
-
-        df = DataFrame(index=range(3))
-        df['A'] = arr
-        expected = DataFrame({'A' : timedelta_range('00:00:01',periods=3,freq='s')},
-                             index=range(3))
-        assert_frame_equal(df,expected)
-
-        # convert from a numpy array of non-ns datetime64
-        #### note that creating a numpy datetime64 is in LOCAL time!!!!
-        #### seems to work for M8[D], but not for M8[s]
-
-        s = Series(np.array(['2013-01-01','2013-01-02','2013-01-03'],dtype='datetime64[D]'))
-        assert_series_equal(s,Series(date_range('20130101',periods=3,freq='D')))
-        #s = Series(np.array(['2013-01-01 00:00:01','2013-01-01 00:00:02','2013-01-01 00:00:03'],dtype='datetime64[s]'))
-        #assert_series_equal(s,date_range('20130101 00:00:01',period=3,freq='s'))
-
-        expected = DataFrame({
-            'dt1' : Timestamp('20130101'),
-            'dt2' : date_range('20130101',periods=3),
-            #'dt3' : date_range('20130101 00:00:01',periods=3,freq='s'),
-            },index=range(3))
-
-
-        df = DataFrame(index=range(3))
-        df['dt1'] = np.datetime64('2013-01-01')
-        df['dt2'] = np.array(['2013-01-01','2013-01-02','2013-01-03'],dtype='datetime64[D]')
-        #df['dt3'] = np.array(['2013-01-01 00:00:01','2013-01-01 00:00:02','2013-01-01 00:00:03'],dtype='datetime64[s]')
-        assert_frame_equal(df, expected)
-
-    def test_constructor_frame_copy(self):
-        cop = DataFrame(self.frame, copy=True)
-        cop['A'] = 5
-        self.assertTrue((cop['A'] == 5).all())
-        self.assertFalse((self.frame['A'] == 5).all())
-
-    def test_constructor_ndarray_copy(self):
-        df = DataFrame(self.frame.values)
-
-        self.frame.values[5] = 5
-        self.assertTrue((df.values[5] == 5).all())
-
-        df = DataFrame(self.frame.values, copy=True)
-        self.frame.values[6] = 6
-        self.assertFalse((df.values[6] == 6).all())
-
-    def test_constructor_series_copy(self):
-        series = self.frame._series
-
-        df = DataFrame({'A': series['A']})
-        df['A'][:] = 5
-
-        self.assertFalse((series['A'] == 5).all())
-
-    def test_constructor_compound_dtypes(self):
-        # GH 5191
-        # compound dtypes should raise not-implementederror
-
-        def f(dtype):
-            return DataFrame(data = list(itertools.repeat((datetime(2001, 1, 1), "aa", 20), 9)),
-                             columns=["A", "B", "C"], dtype=dtype)
-
-        self.assertRaises(NotImplementedError, f, [("A","datetime64[h]"), ("B","str"), ("C","int32")])
-
-        # these work (though results may be unexpected)
-        f('int64')
-        f('float64')
-
-        # 10822
-        # invalid error message on dt inference
-        if not is_platform_windows():
-            f('M8[ns]')
-
-    def test_assign_columns(self):
-        self.frame['hi'] = 'there'
-
-        frame = self.frame.copy()
-        frame.columns = ['foo', 'bar', 'baz', 'quux', 'foo2']
-        assert_series_equal(self.frame['C'], frame['baz'], check_names=False)
-        assert_series_equal(self.frame['hi'], frame['foo2'], check_names=False)
-
-    def test_columns_with_dups(self):
-
-        # GH 3468 related
-
-        # basic
-        df = DataFrame([[1,2]], columns=['a','a'])
-        df.columns = ['a','a.1']
-        str(df)
-        expected = DataFrame([[1,2]], columns=['a','a.1'])
-        assert_frame_equal(df, expected)
-
-        df = DataFrame([[1,2,3]], columns=['b','a','a'])
-        df.columns = ['b','a','a.1']
-        str(df)
-        expected = DataFrame([[1,2,3]], columns=['b','a','a.1'])
-        assert_frame_equal(df, expected)
-
-        # with a dup index
-        df = DataFrame([[1,2]], columns=['a','a'])
-        df.columns = ['b','b']
-        str(df)
-        expected = DataFrame([[1,2]], columns=['b','b'])
-        assert_frame_equal(df, expected)
-
-        # multi-dtype
-        df = DataFrame([[1,2,1.,2.,3.,'foo','bar']], columns=['a','a','b','b','d','c','c'])
-        df.columns = list('ABCDEFG')
-        str(df)
-        expected = DataFrame([[1,2,1.,2.,3.,'foo','bar']], columns=list('ABCDEFG'))
-        assert_frame_equal(df, expected)
-
-        # this is an error because we cannot disambiguate the dup columns
-        self.assertRaises(Exception, lambda x: DataFrame([[1,2,'foo','bar']], columns=['a','a','a','a']))
-
-        # dups across blocks
-        df_float = DataFrame(np.random.randn(10, 3),dtype='float64')
-        df_int = DataFrame(np.random.randn(10, 3),dtype='int64')
-        df_bool = DataFrame(True,index=df_float.index,columns=df_float.columns)
-        df_object = DataFrame('foo',index=df_float.index,columns=df_float.columns)
-        df_dt = DataFrame(Timestamp('20010101'),index=df_float.index,columns=df_float.columns)
-        df = pd.concat([ df_float, df_int, df_bool, df_object, df_dt ], axis=1)
-
-        self.assertEqual(len(df._data._blknos), len(df.columns))
-        self.assertEqual(len(df._data._blklocs), len(df.columns))
-
-        # testing iget
-        for i in range(len(df.columns)):
-            df.iloc[:,i]
-
-        # dup columns across dtype GH 2079/2194
-        vals = [[1, -1, 2.], [2, -2, 3.]]
-        rs = DataFrame(vals, columns=['A', 'A', 'B'])
-        xp = DataFrame(vals)
-        xp.columns = ['A', 'A', 'B']
-        assert_frame_equal(rs, xp)
-
-    def test_insert_column_bug_4032(self):
-
-        # GH4032, inserting a column and renaming causing errors
-        df = DataFrame({'b': [1.1, 2.2]})
-        df = df.rename(columns={})
-        df.insert(0, 'a', [1, 2])
-
-        result = df.rename(columns={})
-        str(result)
-        expected = DataFrame([[1,1.1],[2, 2.2]],columns=['a','b'])
-        assert_frame_equal(result,expected)
-        df.insert(0, 'c', [1.3, 2.3])
-
-        result = df.rename(columns={})
-        str(result)
-
-        expected = DataFrame([[1.3,1,1.1],[2.3,2, 2.2]],columns=['c','a','b'])
-        assert_frame_equal(result,expected)
-
-    def test_cast_internals(self):
-        casted = DataFrame(self.frame._data, dtype=int)
-        expected = DataFrame(self.frame._series, dtype=int)
-        assert_frame_equal(casted, expected)
-
-        casted = DataFrame(self.frame._data, dtype=np.int32)
-        expected = DataFrame(self.frame._series, dtype=np.int32)
-        assert_frame_equal(casted, expected)
-
-    def test_consolidate(self):
-        self.frame['E'] = 7.
-        consolidated = self.frame.consolidate()
-        self.assertEqual(len(consolidated._data.blocks), 1)
-
-        # Ensure copy, do I want this?
-        recons = consolidated.consolidate()
-        self.assertIsNot(recons, consolidated)
-        assert_frame_equal(recons, consolidated)
-
-        self.frame['F'] = 8.
-        self.assertEqual(len(self.frame._data.blocks), 3)
-        self.frame.consolidate(inplace=True)
-        self.assertEqual(len(self.frame._data.blocks), 1)
-
-    def test_consolidate_inplace(self):
-        frame = self.frame.copy()
-
-        # triggers in-place consolidation
-        for letter in range(ord('A'), ord('Z')):
-            self.frame[chr(letter)] = chr(letter)
-
-    def test_as_matrix_consolidate(self):
-        self.frame['E'] = 7.
-        self.assertFalse(self.frame._data.is_consolidated())
-        _ = self.frame.as_matrix()
-        self.assertTrue(self.frame._data.is_consolidated())
-
-    def test_modify_values(self):
-        self.frame.values[5] = 5
-        self.assertTrue((self.frame.values[5] == 5).all())
-
-        # unconsolidated
-        self.frame['E'] = 7.
-        self.frame.values[6] = 6
-        self.assertTrue((self.frame.values[6] == 6).all())
-
-    def test_boolean_set_uncons(self):
-        self.frame['E'] = 7.
-
-        expected = self.frame.values.copy()
-        expected[expected > 1] = 2
-
-        self.frame[self.frame > 1] = 2
-        assert_almost_equal(expected, self.frame.values)
-
-    def test_xs_view(self):
-        """
-        in 0.14 this will return a view if possible
-        a copy otherwise, but this is numpy dependent
-        """
-
-        dm = DataFrame(np.arange(20.).reshape(4, 5),
-                       index=lrange(4), columns=lrange(5))
-
-        dm.xs(2)[:] = 10
-        self.assertTrue((dm.xs(2) == 10).all())
-
-    def test_boolean_indexing(self):
-        idx = lrange(3)
-        cols = ['A','B','C']
-        df1 = DataFrame(index=idx, columns=cols,
-                        data=np.array([[0.0, 0.5, 1.0],
-                                       [1.5, 2.0, 2.5],
-                                       [3.0, 3.5, 4.0]],
-                                      dtype=float))
-        df2 = DataFrame(index=idx, columns=cols,
-                        data=np.ones((len(idx), len(cols))))
-
-        expected = DataFrame(index=idx, columns=cols,
-                             data=np.array([[0.0, 0.5, 1.0],
-                                            [1.5, 2.0, -1],
-                                            [-1, -1, -1]], dtype=float))
-
-        df1[df1 > 2.0 * df2] = -1
-        assert_frame_equal(df1, expected)
-        with assertRaisesRegexp(ValueError, 'Item wrong length'):
-            df1[df1.index[:-1] > 2] = -1
-
-    def test_boolean_indexing_mixed(self):
-        df = DataFrame(
-            {long(0): {35: np.nan, 40: np.nan, 43: np.nan, 49: np.nan, 50: np.nan},
-             long(1): {35: np.nan,
-                       40: 0.32632316859446198,
-                       43: np.nan,
-                       49: 0.32632316859446198,
-                       50: 0.39114724480578139},
-             long(2): {35: np.nan, 40: np.nan, 43: 0.29012581014105987, 49: np.nan, 50: np.nan},
-             long(3): {35: np.nan, 40: np.nan, 43: np.nan, 49: np.nan, 50: np.nan},
-             long(4): {35: 0.34215328467153283, 40: np.nan, 43: np.nan, 49: np.nan, 50: np.nan},
-             'y': {35: 0, 40: 0, 43: 0, 49: 0, 50: 1}})
-
-        # mixed int/float ok
-        df2 = df.copy()
-        df2[df2>0.3] = 1
-        expected = df.copy()
-        expected.loc[40,1] = 1
-        expected.loc[49,1] = 1
-        expected.loc[50,1] = 1
-        expected.loc[35,4] = 1
-        assert_frame_equal(df2,expected)
-
-        df['foo'] = 'test'
-        with tm.assertRaisesRegexp(TypeError, 'boolean setting on mixed-type'):
-            df[df > 0.3] = 1
-
-    def test_sum_bools(self):
-        df = DataFrame(index=lrange(1), columns=lrange(10))
-        bools = isnull(df)
-        self.assertEqual(bools.sum(axis=1)[0], 10)
-
-    def test_fillna_col_reordering(self):
-        idx = lrange(20)
-        cols = ["COL." + str(i) for i in range(5, 0, -1)]
-        data = np.random.rand(20, 5)
-        df = DataFrame(index=lrange(20), columns=cols, data=data)
-        filled = df.fillna(method='ffill')
-        self.assertEqual(df.columns.tolist(), filled.columns.tolist())
-
-    def test_take(self):
-
-        # homogeneous
-        #----------------------------------------
-        order = [3, 1, 2, 0]
-        for df in [self.frame]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['D', 'B', 'C', 'A']]
-            assert_frame_equal(result, expected, check_names=False)
-
-        # neg indicies
-        order = [2,1,-1]
-        for df in [self.frame]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['C', 'B', 'D']]
-            assert_frame_equal(result, expected, check_names=False)
-
-        # illegal indices
-        self.assertRaises(IndexError, df.take, [3,1,2,30], axis=0)
-        self.assertRaises(IndexError, df.take, [3,1,2,-31], axis=0)
-        self.assertRaises(IndexError, df.take, [3,1,2,5], axis=1)
-        self.assertRaises(IndexError, df.take, [3,1,2,-5], axis=1)
-
-        # mixed-dtype
-        #----------------------------------------
-        order = [4, 1, 2, 0, 3]
-        for df in [self.mixed_frame]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['foo', 'B', 'C', 'A', 'D']]
-            assert_frame_equal(result, expected)
-
-        # neg indicies
-        order = [4,1,-2]
-        for df in [self.mixed_frame]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['foo', 'B', 'D']]
-            assert_frame_equal(result, expected)
-
-        # by dtype
-        order = [1, 2, 0, 3]
-        for df in [self.mixed_float,self.mixed_int]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['B', 'C', 'A', 'D']]
-            assert_frame_equal(result, expected)
-
-    def test_iterkv_deprecation(self):
-        with tm.assert_produces_warning(FutureWarning):
-            self.mixed_float.iterkv()
-
-    def test_iterkv_names(self):
-        for k, v in compat.iteritems(self.mixed_frame):
-            self.assertEqual(v.name, k)
-
-    def test_series_put_names(self):
-        series = self.mixed_frame._series
-        for k, v in compat.iteritems(series):
-            self.assertEqual(v.name, k)
-
-    def test_dot(self):
-        a = DataFrame(np.random.randn(3, 4), index=['a', 'b', 'c'],
-                      columns=['p', 'q', 'r', 's'])
-        b = DataFrame(np.random.randn(4, 2), index=['p', 'q', 'r', 's'],
-                      columns=['one', 'two'])
-
-        result = a.dot(b)
-        expected = DataFrame(np.dot(a.values, b.values),
-                             index=['a', 'b', 'c'],
-                             columns=['one', 'two'])
-        # Check alignment
-        b1 = b.reindex(index=reversed(b.index))
-        result = a.dot(b)
-        assert_frame_equal(result, expected)
-
-        # Check series argument
-        result = a.dot(b['one'])
-        assert_series_equal(result, expected['one'], check_names=False)
-        self.assertTrue(result.name is None)
-
-        result = a.dot(b1['one'])
-        assert_series_equal(result, expected['one'], check_names=False)
-        self.assertTrue(result.name is None)
-
-        # can pass correct-length arrays
-        row = a.ix[0].values
-
-        result = a.dot(row)
-        exp = a.dot(a.ix[0])
-        assert_series_equal(result, exp)
-
-        with assertRaisesRegexp(ValueError, 'Dot product shape mismatch'):
-            a.dot(row[:-1])
-
-        a = np.random.rand(1, 5)
-        b = np.random.rand(5, 1)
-        A = DataFrame(a)
-        B = DataFrame(b)
-
-        # it works
-        result = A.dot(b)
-
-        # unaligned
-        df = DataFrame(randn(3, 4), index=[1, 2, 3], columns=lrange(4))
-        df2 = DataFrame(randn(5, 3), index=lrange(5), columns=[1, 2, 3])
-
-        assertRaisesRegexp(ValueError, 'aligned', df.dot, df2)
-
-    def test_idxmin(self):
-        frame = self.frame
-        frame.ix[5:10] = np.nan
-        frame.ix[15:20, -2:] = np.nan
-        for skipna in [True, False]:
-            for axis in [0, 1]:
-                for df in [frame, self.intframe]:
-                    result = df.idxmin(axis=axis, skipna=skipna)
-                    expected = df.apply(
-                        Series.idxmin, axis=axis, skipna=skipna)
-                    assert_series_equal(result, expected)
-
-        self.assertRaises(ValueError, frame.idxmin, axis=2)
-
-    def test_idxmax(self):
-        frame = self.frame
-        frame.ix[5:10] = np.nan
-        frame.ix[15:20, -2:] = np.nan
-        for skipna in [True, False]:
-            for axis in [0, 1]:
-                for df in [frame, self.intframe]:
-                    result = df.idxmax(axis=axis, skipna=skipna)
-                    expected = df.apply(
-                        Series.idxmax, axis=axis, skipna=skipna)
-                    assert_series_equal(result, expected)
-
-        self.assertRaises(ValueError, frame.idxmax, axis=2)
-
-    def test_stale_cached_series_bug_473(self):
-
-        # this is chained, but ok
-        with option_context('chained_assignment',None):
-            Y = DataFrame(np.random.random((4, 4)), index=('a', 'b', 'c', 'd'),
-                          columns=('e', 'f', 'g', 'h'))
-            repr(Y)
-            Y['e'] = Y['e'].astype('object')
-            Y['g']['c'] = np.NaN
-            repr(Y)
-            result = Y.sum()
-            exp = Y['g'].sum()
-            self.assertTrue(isnull(Y['g']['c']))
-
-    def test_index_namedtuple(self):
-        from collections import namedtuple
-        IndexType = namedtuple("IndexType", ["a", "b"])
-        idx1 = IndexType("foo", "bar")
-        idx2 = IndexType("baz", "bof")
-        index = Index([idx1, idx2],
-                      name="composite_index", tupleize_cols=False)
-        df = DataFrame([(1, 2), (3, 4)], index=index, columns=["A", "B"])
-        result = df.ix[IndexType("foo", "bar")]["A"]
-        self.assertEqual(result, 1)
-
-    def test_empty_nonzero(self):
-        df = DataFrame([1, 2, 3])
-        self.assertFalse(df.empty)
-        df = DataFrame(index=['a', 'b'], columns=['c', 'd']).dropna()
-        self.assertTrue(df.empty)
-        self.assertTrue(df.T.empty)
-
-    def test_any_all(self):
-
-        self._check_bool_op('any', np.any, has_skipna=True, has_bool_only=True)
-        self._check_bool_op('all', np.all, has_skipna=True, has_bool_only=True)
-
-        df = DataFrame(randn(10, 4)) > 0
-        df.any(1)
-        df.all(1)
-        df.any(1, bool_only=True)
-        df.all(1, bool_only=True)
-
-        # skip pathological failure cases
-        # class CantNonzero(object):
-
-        #     def __nonzero__(self):
-        #         raise ValueError
-
-        # df[4] = CantNonzero()
-
-        # it works!
-        # df.any(1)
-        # df.all(1)
-        # df.any(1, bool_only=True)
-        # df.all(1, bool_only=True)
-
-        # df[4][4] = np.nan
-        # df.any(1)
-        # df.all(1)
-        # df.any(1, bool_only=True)
-        # df.all(1, bool_only=True)
-
-    def test_consolidate_datetime64(self):
-        # numpy vstack bug
-
-        data = """\
-starting,ending,measure
-2012-06-21 00:00,2012-06-23 07:00,77
-2012-06-23 07:00,2012-06-23 16:30,65
-2012-06-23 16:30,2012-06-25 08:00,77
-2012-06-25 08:00,2012-06-26 12:00,0
-2012-06-26 12:00,2012-06-27 08:00,77
-"""
-        df = read_csv(StringIO(data), parse_dates=[0, 1])
-
-        ser_starting = df.starting
-        ser_starting.index = ser_starting.values
-        ser_starting = ser_starting.tz_localize('US/Eastern')
-        ser_starting = ser_starting.tz_convert('UTC')
-
-        ser_ending = df.ending
-        ser_ending.index = ser_ending.values
-        ser_ending = ser_ending.tz_localize('US/Eastern')
-        ser_ending = ser_ending.tz_convert('UTC')
-
-        df.starting = ser_starting.index
-        df.ending = ser_ending.index
-
-        tm.assert_index_equal(pd.DatetimeIndex(df.starting), ser_starting.index)
-        tm.assert_index_equal(pd.DatetimeIndex(df.ending), ser_ending.index)
-
-    def _check_bool_op(self, name, alternative, frame=None, has_skipna=True,
-                       has_bool_only=False):
-        if frame is None:
-            frame = self.frame > 0
-            # set some NAs
-            frame = DataFrame(frame.values.astype(object), frame.index,
-                              frame.columns)
-            frame.ix[5:10] = np.nan
-            frame.ix[15:20, -2:] = np.nan
-
-        f = getattr(frame, name)
-
-        if has_skipna:
-            def skipna_wrapper(x):
-                nona = x.dropna().values
-                return alternative(nona)
-
-            def wrapper(x):
-                return alternative(x.values)
-
-            result0 = f(axis=0, skipna=False)
-            result1 = f(axis=1, skipna=False)
-            assert_series_equal(result0, frame.apply(wrapper))
-            assert_series_equal(result1, frame.apply(wrapper, axis=1),
-                                check_dtype=False)  # HACK: win32
-        else:
-            skipna_wrapper = alternative
-            wrapper = alternative
-
-        result0 = f(axis=0)
-        result1 = f(axis=1)
-        assert_series_equal(result0, frame.apply(skipna_wrapper))
-        assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
-                            check_dtype=False)
-
-        # result = f(axis=1)
-        # comp = frame.apply(alternative, axis=1).reindex(result.index)
-        # assert_series_equal(result, comp)
-
-        # bad axis
-        self.assertRaises(ValueError, f, axis=2)
-
-        # make sure works on mixed-type frame
-        mixed = self.mixed_frame
-        mixed['_bool_'] = np.random.randn(len(mixed)) > 0
-        getattr(mixed, name)(axis=0)
-        getattr(mixed, name)(axis=1)
-
-        class NonzeroFail:
-
-            def __nonzero__(self):
-                raise ValueError
-
-        mixed['_nonzero_fail_'] = NonzeroFail()
-
-        if has_bool_only:
-            getattr(mixed, name)(axis=0, bool_only=True)
-            getattr(mixed, name)(axis=1, bool_only=True)
-            getattr(frame, name)(axis=0, bool_only=False)
-            getattr(frame, name)(axis=1, bool_only=False)
-
-        # all NA case
-        if has_skipna:
-            all_na = frame * np.NaN
-            r0 = getattr(all_na, name)(axis=0)
-            r1 = getattr(all_na, name)(axis=1)
-            if name == 'any':
-                self.assertFalse(r0.any())
-                self.assertFalse(r1.any())
-            else:
-                self.assertTrue(r0.all())
-                self.assertTrue(r1.all())
-
-    def test_strange_column_corruption_issue(self):
-
-        df = DataFrame(index=[0, 1])
-        df[0] = nan
-        wasCol = {}
-        # uncommenting these makes the results match
-        # for col in xrange(100, 200):
-        #     wasCol[col] = 1
-        #     df[col] = nan
-
-        for i, dt in enumerate(df.index):
-            for col in range(100, 200):
-                if not col in wasCol:
-                    wasCol[col] = 1
-                    df[col] = nan
-                df[col][dt] = i
-
-        myid = 100
-
-        first = len(df.ix[isnull(df[myid]), [myid]])
-        second = len(df.ix[isnull(df[myid]), [myid]])
-        self.assertTrue(first == second == 0)
-
-    def test_inplace_return_self(self):
-        # re #1893
-
-        data = DataFrame({'a': ['foo', 'bar', 'baz', 'qux'],
-                          'b': [0, 0, 1, 1],
-                          'c': [1, 2, 3, 4]})
-
-        def _check_f(base, f):
-            result = f(base)
-            self.assertTrue(result is None)
-
-        # -----DataFrame-----
-
-        # set_index
-        f = lambda x: x.set_index('a', inplace=True)
-        _check_f(data.copy(), f)
-
-        # reset_index
-        f = lambda x: x.reset_index(inplace=True)
-        _check_f(data.set_index('a'), f)
-
-        # drop_duplicates
-        f = lambda x: x.drop_duplicates(inplace=True)
-        _check_f(data.copy(), f)
-
-        # sort
-        f = lambda x: x.sort_values('b', inplace=True)
-        _check_f(data.copy(), f)
-
-        # sort_index
-        f = lambda x: x.sort_index(inplace=True)
-        _check_f(data.copy(), f)
-
-        # sortlevel
-        f = lambda x: x.sortlevel(0, inplace=True)
-        _check_f(data.set_index(['a', 'b']), f)
-
-        # fillna
-        f = lambda x: x.fillna(0, inplace=True)
-        _check_f(data.copy(), f)
-
-        # replace
-        f = lambda x: x.replace(1, 0, inplace=True)
-        _check_f(data.copy(), f)
-
-        # rename
-        f = lambda x: x.rename({1: 'foo'}, inplace=True)
-        _check_f(data.copy(), f)
-
-        # -----Series-----
-        d = data.copy()['c']
-
-        # reset_index
-        f = lambda x: x.reset_index(inplace=True, drop=True)
-        _check_f(data.set_index('a')['c'], f)
-
-        # fillna
-        f = lambda x: x.fillna(0, inplace=True)
-        _check_f(d.copy(), f)
-
-        # replace
-        f = lambda x: x.replace(1, 0, inplace=True)
-        _check_f(d.copy(), f)
-
-        # rename
-        f = lambda x: x.rename({1: 'foo'}, inplace=True)
-        _check_f(d.copy(), f)
-
-    def test_isin(self):
-        # GH #4211
-        df = DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
-                        'ids2': ['a', 'n', 'c', 'n']},
-                       index=['foo', 'bar', 'baz', 'qux'])
-        other = ['a', 'b', 'c']
-
-        result = df.isin(other)
-        expected = DataFrame([df.loc[s].isin(other) for s in df.index])
-        assert_frame_equal(result, expected)
-
-    def test_isin_empty(self):
-        df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
-        result = df.isin([])
-        expected = pd.DataFrame(False, df.index, df.columns)
-        assert_frame_equal(result, expected)
-
-    def test_isin_dict(self):
-        df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
-        d = {'A': ['a']}
-
-        expected = DataFrame(False, df.index, df.columns)
-        expected.loc[0, 'A'] = True
-
-        result = df.isin(d)
-        assert_frame_equal(result, expected)
-
-        # non unique columns
-        df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
-        df.columns = ['A', 'A']
-
expected = DataFrame(False, df.index, df.columns) - expected.loc[0, 'A'] = True - result = df.isin(d) - assert_frame_equal(result, expected) - - def test_isin_with_string_scalar(self): - #GH4763 - df = DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'], - 'ids2': ['a', 'n', 'c', 'n']}, - index=['foo', 'bar', 'baz', 'qux']) - with tm.assertRaises(TypeError): - df.isin('a') - - with tm.assertRaises(TypeError): - df.isin('aaa') - - def test_isin_df(self): - df1 = DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}) - df2 = DataFrame({'A': [0, 2, 12, 4], 'B': [2, np.nan, 4, 5]}) - expected = DataFrame(False, df1.index, df1.columns) - result = df1.isin(df2) - expected['A'].loc[[1, 3]] = True - expected['B'].loc[[0, 2]] = True - assert_frame_equal(result, expected) - - # partial overlapping columns - df2.columns = ['A', 'C'] - result = df1.isin(df2) - expected['B'] = False - assert_frame_equal(result, expected) - - def test_isin_df_dupe_values(self): - df1 = DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}) - # just cols duped - df2 = DataFrame([[0, 2], [12, 4], [2, np.nan], [4, 5]], - columns=['B', 'B']) - with tm.assertRaises(ValueError): - df1.isin(df2) - - # just index duped - df2 = DataFrame([[0, 2], [12, 4], [2, np.nan], [4, 5]], - columns=['A', 'B'], index=[0, 0, 1, 1]) - with tm.assertRaises(ValueError): - df1.isin(df2) - - # cols and index: - df2.columns = ['B', 'B'] - with tm.assertRaises(ValueError): - df1.isin(df2) - - def test_isin_dupe_self(self): - other = DataFrame({'A': [1, 0, 1, 0], 'B': [1, 1, 0, 0]}) - df = DataFrame([[1, 1], [1, 0], [0, 0]], columns=['A','A']) - result = df.isin(other) - expected = DataFrame(False, index=df.index, columns=df.columns) - expected.loc[0] = True - expected.iloc[1, 1] = True - assert_frame_equal(result, expected) - - def test_isin_against_series(self): - df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}, - index=['a', 'b', 'c', 'd']) - s = pd.Series([1, 3, 11, 4], index=['a', 'b', 'c', 'd']) - 
expected = DataFrame(False, index=df.index, columns=df.columns) - expected['A'].loc['a'] = True - expected.loc['d'] = True - result = df.isin(s) - assert_frame_equal(result, expected) - - def test_isin_multiIndex(self): - idx = MultiIndex.from_tuples([(0, 'a', 'foo'), (0, 'a', 'bar'), - (0, 'b', 'bar'), (0, 'b', 'baz'), - (2, 'a', 'foo'), (2, 'a', 'bar'), - (2, 'c', 'bar'), (2, 'c', 'baz'), - (1, 'b', 'foo'), (1, 'b', 'bar'), - (1, 'c', 'bar'), (1, 'c', 'baz')]) - df1 = DataFrame({'A': np.ones(12), - 'B': np.zeros(12)}, index=idx) - df2 = DataFrame({'A': [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1], - 'B': [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1]}) - # against regular index - expected = DataFrame(False, index=df1.index, columns=df1.columns) - result = df1.isin(df2) - assert_frame_equal(result, expected) - - df2.index = idx - expected = df2.values.astype(np.bool) - expected[:, 1] = ~expected[:, 1] - expected = DataFrame(expected, columns=['A', 'B'], index=idx) - - result = df1.isin(df2) - assert_frame_equal(result, expected) - - def test_to_csv_date_format(self): - from pandas import to_datetime - pname = '__tmp_to_csv_date_format__' - with ensure_clean(pname) as path: - for engine in [None, 'python']: - w = FutureWarning if engine == 'python' else None - - dt_index = self.tsframe.index - datetime_frame = DataFrame({'A': dt_index, 'B': dt_index.shift(1)}, index=dt_index) - - with tm.assert_produces_warning(w, check_stacklevel=False): - datetime_frame.to_csv(path, date_format='%Y%m%d', engine=engine) - - # Check that the data was put in the specified format - test = read_csv(path, index_col=0) - - datetime_frame_int = datetime_frame.applymap(lambda x: int(x.strftime('%Y%m%d'))) - datetime_frame_int.index = datetime_frame_int.index.map(lambda x: int(x.strftime('%Y%m%d'))) - - assert_frame_equal(test, datetime_frame_int) - - with tm.assert_produces_warning(w, check_stacklevel=False): - datetime_frame.to_csv(path, date_format='%Y-%m-%d', engine=engine) - - # Check that the data was 
put in the specified format - test = read_csv(path, index_col=0) - datetime_frame_str = datetime_frame.applymap(lambda x: x.strftime('%Y-%m-%d')) - datetime_frame_str.index = datetime_frame_str.index.map(lambda x: x.strftime('%Y-%m-%d')) - - assert_frame_equal(test, datetime_frame_str) - - # Check that columns get converted - datetime_frame_columns = datetime_frame.T - - with tm.assert_produces_warning(w, check_stacklevel=False): - datetime_frame_columns.to_csv(path, date_format='%Y%m%d', engine=engine) - - test = read_csv(path, index_col=0) - - datetime_frame_columns = datetime_frame_columns.applymap(lambda x: int(x.strftime('%Y%m%d'))) - # Columns don't get converted to ints by read_csv - datetime_frame_columns.columns = datetime_frame_columns.columns.map(lambda x: x.strftime('%Y%m%d')) - - assert_frame_equal(test, datetime_frame_columns) - - # test NaTs - nat_index = to_datetime(['NaT'] * 10 + ['2000-01-01', '1/1/2000', '1-1-2000']) - nat_frame = DataFrame({'A': nat_index}, index=nat_index) - - with tm.assert_produces_warning(w, check_stacklevel=False): - nat_frame.to_csv(path, date_format='%Y-%m-%d', engine=engine) - - test = read_csv(path, parse_dates=[0, 1], index_col=0) - - assert_frame_equal(test, nat_frame) - - def test_to_csv_with_dst_transitions(self): - - with ensure_clean('csv_date_format_with_dst') as path: - # make sure we are not failing on transitions - times = pd.date_range("2013-10-26 23:00", "2013-10-27 01:00", - tz="Europe/London", - freq="H", - ambiguous='infer') - - for i in [times, times+pd.Timedelta('10s')]: - time_range = np.array(range(len(i)), dtype='int64') - df = DataFrame({'A' : time_range}, index=i) - df.to_csv(path,index=True) - - # we have to reconvert the index as we - # don't parse the tz's - result = read_csv(path,index_col=0) - result.index = pd.to_datetime(result.index).tz_localize('UTC').tz_convert('Europe/London') - assert_frame_equal(result,df) - - # GH11619 - idx = pd.date_range('2015-01-01', '2015-12-31', freq = 'H', 
tz='Europe/Paris') - df = DataFrame({'values' : 1, 'idx' : idx}, - index=idx) - with ensure_clean('csv_date_format_with_dst') as path: - df.to_csv(path,index=True) - result = read_csv(path,index_col=0) - result.index = pd.to_datetime(result.index).tz_localize('UTC').tz_convert('Europe/Paris') - result['idx'] = pd.to_datetime(result['idx']).astype('datetime64[ns, Europe/Paris]') - assert_frame_equal(result,df) - - # assert working - df.astype(str) - - with ensure_clean('csv_date_format_with_dst') as path: - df.to_pickle(path) - result = pd.read_pickle(path) - assert_frame_equal(result,df) - - - def test_concat_empty_dataframe_dtypes(self): - df = DataFrame(columns=list("abc")) - df['a'] = df['a'].astype(np.bool_) - df['b'] = df['b'].astype(np.int32) - df['c'] = df['c'].astype(np.float64) - - result = pd.concat([df, df]) - self.assertEqual(result['a'].dtype, np.bool_) - self.assertEqual(result['b'].dtype, np.int32) - self.assertEqual(result['c'].dtype, np.float64) - - result = pd.concat([df, df.astype(np.float64)]) - self.assertEqual(result['a'].dtype, np.object_) - self.assertEqual(result['b'].dtype, np.float64) - self.assertEqual(result['c'].dtype, np.float64) - - def test_empty_frame_dtypes_ftypes(self): - empty_df = pd.DataFrame() - assert_series_equal(empty_df.dtypes, pd.Series(dtype=np.object)) - assert_series_equal(empty_df.ftypes, pd.Series(dtype=np.object)) - - nocols_df = pd.DataFrame(index=[1,2,3]) - assert_series_equal(nocols_df.dtypes, pd.Series(dtype=np.object)) - assert_series_equal(nocols_df.ftypes, pd.Series(dtype=np.object)) - - norows_df = pd.DataFrame(columns=list("abc")) - assert_series_equal(norows_df.dtypes, pd.Series(np.object, index=list("abc"))) - assert_series_equal(norows_df.ftypes, pd.Series('object:dense', index=list("abc"))) - - norows_int_df = pd.DataFrame(columns=list("abc")).astype(np.int32) - assert_series_equal(norows_int_df.dtypes, pd.Series(np.dtype('int32'), index=list("abc"))) - assert_series_equal(norows_int_df.ftypes, 
pd.Series('int32:dense', index=list("abc"))) - - odict = OrderedDict - df = pd.DataFrame(odict([('a', 1), ('b', True), ('c', 1.0)]), index=[1, 2, 3]) - assert_series_equal(df.dtypes, pd.Series(odict([('a', np.int64), - ('b', np.bool), - ('c', np.float64)]))) - assert_series_equal(df.ftypes, pd.Series(odict([('a', 'int64:dense'), - ('b', 'bool:dense'), - ('c', 'float64:dense')]))) - - # same but for empty slice of df - assert_series_equal(df[:0].dtypes, pd.Series(odict([('a', np.int64), - ('b', np.bool), - ('c', np.float64)]))) - assert_series_equal(df[:0].ftypes, pd.Series(odict([('a', 'int64:dense'), - ('b', 'bool:dense'), - ('c', 'float64:dense')]))) - - def test_dtypes_are_correct_after_column_slice(self): - # GH6525 - df = pd.DataFrame(index=range(5), columns=list("abc"), dtype=np.float_) - odict = OrderedDict - assert_series_equal(df.dtypes, - pd.Series(odict([('a', np.float_), ('b', np.float_), - ('c', np.float_),]))) - assert_series_equal(df.iloc[:,2:].dtypes, - pd.Series(odict([('c', np.float_)]))) - assert_series_equal(df.dtypes, - pd.Series(odict([('a', np.float_), ('b', np.float_), - ('c', np.float_),]))) - - def test_set_index_names(self): - df = pd.util.testing.makeDataFrame() - df.index.name = 'name' - - self.assertEqual(df.set_index(df.index).index.names, ['name']) - - mi = MultiIndex.from_arrays(df[['A', 'B']].T.values, names=['A', 'B']) - mi2 = MultiIndex.from_arrays(df[['A', 'B', 'A', 'B']].T.values, - names=['A', 'B', 'A', 'B']) - - df = df.set_index(['A', 'B']) - - self.assertEqual(df.set_index(df.index).index.names, ['A', 'B']) - - # Check that set_index isn't converting a MultiIndex into an Index - self.assertTrue(isinstance(df.set_index(df.index).index, MultiIndex)) - - # Check actual equality - tm.assert_index_equal(df.set_index(df.index).index, mi) - - # Check that [MultiIndex, MultiIndex] yields a MultiIndex rather - # than a pair of tuples - self.assertTrue(isinstance(df.set_index([df.index, df.index]).index, MultiIndex)) - - # Check 
equality - tm.assert_index_equal(df.set_index([df.index, df.index]).index, mi2) - - def test_select_dtypes_include(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.Categorical(list('abc'))}) - ri = df.select_dtypes(include=[np.number]) - ei = df[['b', 'c', 'd']] - assert_frame_equal(ri, ei) - - ri = df.select_dtypes(include=[np.number, 'category']) - ei = df[['b', 'c', 'd', 'f']] - assert_frame_equal(ri, ei) - - def test_select_dtypes_exclude(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True]}) - re = df.select_dtypes(exclude=[np.number]) - ee = df[['a', 'e']] - assert_frame_equal(re, ee) - - def test_select_dtypes_exclude_include(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - exclude = np.datetime64, - include = np.bool_, 'integer' - r = df.select_dtypes(include=include, exclude=exclude) - e = df[['b', 'c', 'e']] - assert_frame_equal(r, e) - - exclude = 'datetime', - include = 'bool', 'int64', 'int32' - r = df.select_dtypes(include=include, exclude=exclude) - e = df[['b', 'e']] - assert_frame_equal(r, e) - - def test_select_dtypes_not_an_attr_but_still_valid_dtype(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - df['g'] = df.f.diff() - assert not hasattr(np, 'u8') - r = df.select_dtypes(include=['i8', 'O'], exclude=['timedelta']) - e = df[['a', 'b']] - assert_frame_equal(r, e) - - r = df.select_dtypes(include=['i8', 'O', 'timedelta64[ns]']) - e = 
df[['a', 'b', 'g']] - assert_frame_equal(r, e) - - def test_select_dtypes_empty(self): - df = DataFrame({'a': list('abc'), 'b': list(range(1, 4))}) - with tm.assertRaisesRegexp(ValueError, 'at least one of include or ' - 'exclude must be nonempty'): - df.select_dtypes() - - def test_select_dtypes_raises_on_string(self): - df = DataFrame({'a': list('abc'), 'b': list(range(1, 4))}) - with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): - df.select_dtypes(include='object') - with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): - df.select_dtypes(exclude='object') - with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): - df.select_dtypes(include=int, exclude='object') - - def test_select_dtypes_bad_datetime64(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - with tm.assertRaisesRegexp(ValueError, '.+ is too specific'): - df.select_dtypes(include=['datetime64[D]']) - - with tm.assertRaisesRegexp(ValueError, '.+ is too specific'): - df.select_dtypes(exclude=['datetime64[as]']) - - def test_select_dtypes_str_raises(self): - df = DataFrame({'a': list('abc'), - 'g': list(u('abc')), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - string_dtypes = set((str, 'str', np.string_, 'S1', - 'unicode', np.unicode_, 'U1')) - try: - string_dtypes.add(unicode) - except NameError: - pass - for dt in string_dtypes: - with tm.assertRaisesRegexp(TypeError, - 'string dtypes are not allowed'): - df.select_dtypes(include=[dt]) - with tm.assertRaisesRegexp(TypeError, - 'string dtypes are not allowed'): - df.select_dtypes(exclude=[dt]) - - def test_select_dtypes_bad_arg_raises(self): - df = DataFrame({'a': list('abc'), - 'g': 
list(u('abc')), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - with tm.assertRaisesRegexp(TypeError, 'data type.*not understood'): - df.select_dtypes(['blargy, blarg, blarg']) - - def test_select_dtypes_typecodes(self): - # GH 11990 - df = mkdf(30, 3, data_gen_f=lambda x, y: np.random.random()) - expected = df - FLOAT_TYPES = list(np.typecodes['AllFloat']) - assert_frame_equal(df.select_dtypes(FLOAT_TYPES), expected) - - def test_assign(self): - df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) - original = df.copy() - result = df.assign(C=df.B / df.A) - expected = df.copy() - expected['C'] = [4, 2.5, 2] - assert_frame_equal(result, expected) - - # lambda syntax - result = df.assign(C=lambda x: x.B / x.A) - assert_frame_equal(result, expected) - - # original is unmodified - assert_frame_equal(df, original) - - # Non-Series array-like - result = df.assign(C=[4, 2.5, 2]) - assert_frame_equal(result, expected) - # original is unmodified - assert_frame_equal(df, original) - - result = df.assign(B=df.B / df.A) - expected = expected.drop('B', axis=1).rename(columns={'C': 'B'}) - assert_frame_equal(result, expected) - - # overwrite - result = df.assign(A=df.A + df.B) - expected = df.copy() - expected['A'] = [5, 7, 9] - assert_frame_equal(result, expected) - - # lambda - result = df.assign(A=lambda x: x.A + x.B) - assert_frame_equal(result, expected) - - def test_assign_multiple(self): - df = DataFrame([[1, 4], [2, 5], [3, 6]], columns=['A', 'B']) - result = df.assign(C=[7, 8, 9], D=df.A, E=lambda x: x.B) - expected = DataFrame([[1, 4, 7, 1, 4], [2, 5, 8, 2, 5], - [3, 6, 9, 3, 6]], columns=list('ABCDE')) - assert_frame_equal(result, expected) - - def test_assign_alphabetical(self): - # GH 9818 - df = DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) - result = df.assign(D=df.A + df.B, C=df.A - df.B) - expected = DataFrame([[1, 2, -1, 3], 
[3, 4, -1, 7]], - columns=list('ABCD')) - assert_frame_equal(result, expected) - result = df.assign(C=df.A - df.B, D=df.A + df.B) - assert_frame_equal(result, expected) - - def test_assign_bad(self): - df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) - # non-keyword argument - with tm.assertRaises(TypeError): - df.assign(lambda x: x.A) - with tm.assertRaises(AttributeError): - df.assign(C=df.A, D=df.A + df.C) - with tm.assertRaises(KeyError): - df.assign(C=lambda df: df.A, D=lambda df: df['A'] + df['C']) - with tm.assertRaises(KeyError): - df.assign(C=df.A, D=lambda x: x['A'] + x['C']) - - def test_dataframe_metadata(self): - - df = SubclassedDataFrame({'X': [1, 2, 3], 'Y': [1, 2, 3]}, - index=['a', 'b', 'c']) - df.testattr = 'XXX' - - self.assertEqual(df.testattr, 'XXX') - self.assertEqual(df[['X']].testattr, 'XXX') - self.assertEqual(df.loc[['a', 'b'], :].testattr, 'XXX') - self.assertEqual(df.iloc[[0, 1], :].testattr, 'XXX') - - # GH9776 - self.assertEqual(df.iloc[0:1, :].testattr, 'XXX') - - # GH10553 - unpickled = self.round_trip_pickle(df) - assert_frame_equal(df, unpickled) - self.assertEqual(df._metadata, unpickled._metadata) - self.assertEqual(df.testattr, unpickled.testattr) - - def test_nlargest(self): - # GH10393 - from string import ascii_lowercase - df = pd.DataFrame({'a': np.random.permutation(10), - 'b': list(ascii_lowercase[:10])}) - result = df.nlargest(5, 'a') - expected = df.sort_values('a', ascending=False).head(5) - assert_frame_equal(result, expected) - - def test_nlargest_multiple_columns(self): - from string import ascii_lowercase - df = pd.DataFrame({'a': np.random.permutation(10), - 'b': list(ascii_lowercase[:10]), - 'c': np.random.permutation(10).astype('float64')}) - result = df.nlargest(5, ['a', 'b']) - expected = df.sort_values(['a', 'b'], ascending=False).head(5) - assert_frame_equal(result, expected) - - def test_nsmallest(self): - from string import ascii_lowercase - df = pd.DataFrame({'a': np.random.permutation(10), - 'b': 
list(ascii_lowercase[:10])}) - result = df.nsmallest(5, 'a') - expected = df.sort_values('a').head(5) - assert_frame_equal(result, expected) - - def test_nsmallest_multiple_columns(self): - from string import ascii_lowercase - df = pd.DataFrame({'a': np.random.permutation(10), - 'b': list(ascii_lowercase[:10]), - 'c': np.random.permutation(10).astype('float64')}) - result = df.nsmallest(5, ['a', 'c']) - expected = df.sort_values(['a', 'c']).head(5) - assert_frame_equal(result, expected) - - def test_to_panel_expanddim(self): - # GH 9762 - - class SubclassedFrame(DataFrame): - @property - def _constructor_expanddim(self): - return SubclassedPanel - - class SubclassedPanel(Panel): - pass - - index = MultiIndex.from_tuples([(0, 0), (0, 1), (0, 2)]) - df = SubclassedFrame({'X':[1, 2, 3], 'Y': [4, 5, 6]}, index=index) - result = df.to_panel() - self.assertTrue(isinstance(result, SubclassedPanel)) - expected = SubclassedPanel([[[1, 2, 3]], [[4, 5, 6]]], - items=['X', 'Y'], major_axis=[0], - minor_axis=[0, 1, 2], - dtype='int64') - tm.assert_panel_equal(result, expected) - - def test_subclass_attr_err_propagation(self): - # GH 11808 - class A(DataFrame): - - @property - def bar(self): - return self.i_dont_exist - with tm.assertRaisesRegexp(AttributeError, '.*i_dont_exist.*'): - A().bar - - -def skip_if_no_ne(engine='numexpr'): - if engine == 'numexpr': - try: - import numexpr as ne - except ImportError: - raise nose.SkipTest("cannot query engine numexpr when numexpr not " - "installed") - - -def skip_if_no_pandas_parser(parser): - if parser != 'pandas': - raise nose.SkipTest("cannot evaluate with parser {0!r}".format(parser)) - - -class TestDataFrameQueryWithMultiIndex(object): - def check_query_with_named_multiindex(self, parser, engine): - tm.skip_if_no_ne(engine) - a = tm.choice(['red', 'green'], size=10) - b = tm.choice(['eggs', 'ham'], size=10) - index = MultiIndex.from_arrays([a, b], names=['color', 'food']) - df = DataFrame(randn(10, 2), index=index) - ind = 
Series(df.index.get_level_values('color').values, index=index, - name='color') - - # equality - res1 = df.query('color == "red"', parser=parser, engine=engine) - res2 = df.query('"red" == color', parser=parser, engine=engine) - exp = df[ind == 'red'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # inequality - res1 = df.query('color != "red"', parser=parser, engine=engine) - res2 = df.query('"red" != color', parser=parser, engine=engine) - exp = df[ind != 'red'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # list equality (really just set membership) - res1 = df.query('color == ["red"]', parser=parser, engine=engine) - res2 = df.query('["red"] == color', parser=parser, engine=engine) - exp = df[ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('color != ["red"]', parser=parser, engine=engine) - res2 = df.query('["red"] != color', parser=parser, engine=engine) - exp = df[~ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # in/not in ops - res1 = df.query('["red"] in color', parser=parser, engine=engine) - res2 = df.query('"red" in color', parser=parser, engine=engine) - exp = df[ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('["red"] not in color', parser=parser, engine=engine) - res2 = df.query('"red" not in color', parser=parser, engine=engine) - exp = df[~ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - def test_query_with_named_multiindex(self): - for parser, engine in product(['pandas'], ENGINES): - yield self.check_query_with_named_multiindex, parser, engine - - def check_query_with_unnamed_multiindex(self, parser, engine): - tm.skip_if_no_ne(engine) - a = tm.choice(['red', 'green'], size=10) - b = tm.choice(['eggs', 'ham'], size=10) - index = MultiIndex.from_arrays([a, b]) - df = DataFrame(randn(10, 2), index=index) - ind = 
Series(df.index.get_level_values(0).values, index=index) - - res1 = df.query('ilevel_0 == "red"', parser=parser, engine=engine) - res2 = df.query('"red" == ilevel_0', parser=parser, engine=engine) - exp = df[ind == 'red'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # inequality - res1 = df.query('ilevel_0 != "red"', parser=parser, engine=engine) - res2 = df.query('"red" != ilevel_0', parser=parser, engine=engine) - exp = df[ind != 'red'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # list equality (really just set membership) - res1 = df.query('ilevel_0 == ["red"]', parser=parser, engine=engine) - res2 = df.query('["red"] == ilevel_0', parser=parser, engine=engine) - exp = df[ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('ilevel_0 != ["red"]', parser=parser, engine=engine) - res2 = df.query('["red"] != ilevel_0', parser=parser, engine=engine) - exp = df[~ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # in/not in ops - res1 = df.query('["red"] in ilevel_0', parser=parser, engine=engine) - res2 = df.query('"red" in ilevel_0', parser=parser, engine=engine) - exp = df[ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('["red"] not in ilevel_0', parser=parser, engine=engine) - res2 = df.query('"red" not in ilevel_0', parser=parser, engine=engine) - exp = df[~ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - #### LEVEL 1 #### - ind = Series(df.index.get_level_values(1).values, index=index) - res1 = df.query('ilevel_1 == "eggs"', parser=parser, engine=engine) - res2 = df.query('"eggs" == ilevel_1', parser=parser, engine=engine) - exp = df[ind == 'eggs'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # inequality - res1 = df.query('ilevel_1 != "eggs"', parser=parser, engine=engine) - res2 = df.query('"eggs" != ilevel_1', 
parser=parser, engine=engine) - exp = df[ind != 'eggs'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # list equality (really just set membership) - res1 = df.query('ilevel_1 == ["eggs"]', parser=parser, engine=engine) - res2 = df.query('["eggs"] == ilevel_1', parser=parser, engine=engine) - exp = df[ind.isin(['eggs'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('ilevel_1 != ["eggs"]', parser=parser, engine=engine) - res2 = df.query('["eggs"] != ilevel_1', parser=parser, engine=engine) - exp = df[~ind.isin(['eggs'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # in/not in ops - res1 = df.query('["eggs"] in ilevel_1', parser=parser, engine=engine) - res2 = df.query('"eggs" in ilevel_1', parser=parser, engine=engine) - exp = df[ind.isin(['eggs'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('["eggs"] not in ilevel_1', parser=parser, engine=engine) - res2 = df.query('"eggs" not in ilevel_1', parser=parser, engine=engine) - exp = df[~ind.isin(['eggs'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - def test_query_with_unnamed_multiindex(self): - for parser, engine in product(['pandas'], ENGINES): - yield self.check_query_with_unnamed_multiindex, parser, engine - - def check_query_with_partially_named_multiindex(self, parser, engine): - tm.skip_if_no_ne(engine) - a = tm.choice(['red', 'green'], size=10) - b = np.arange(10) - index = MultiIndex.from_arrays([a, b]) - index.names = [None, 'rating'] - df = DataFrame(randn(10, 2), index=index) - res = df.query('rating == 1', parser=parser, engine=engine) - ind = Series(df.index.get_level_values('rating').values, index=index, - name='rating') - exp = df[ind == 1] - assert_frame_equal(res, exp) - - res = df.query('rating != 1', parser=parser, engine=engine) - ind = Series(df.index.get_level_values('rating').values, index=index, - name='rating') - exp = df[ind != 1] - 
assert_frame_equal(res, exp) - - res = df.query('ilevel_0 == "red"', parser=parser, engine=engine) - ind = Series(df.index.get_level_values(0).values, index=index) - exp = df[ind == "red"] - assert_frame_equal(res, exp) - - res = df.query('ilevel_0 != "red"', parser=parser, engine=engine) - ind = Series(df.index.get_level_values(0).values, index=index) - exp = df[ind != "red"] - assert_frame_equal(res, exp) - - def test_query_with_partially_named_multiindex(self): - for parser, engine in product(['pandas'], ENGINES): - yield self.check_query_with_partially_named_multiindex, parser, engine - - def test_query_multiindex_get_index_resolvers(self): - for parser, engine in product(['pandas'], ENGINES): - yield self.check_query_multiindex_get_index_resolvers, parser, engine - - def check_query_multiindex_get_index_resolvers(self, parser, engine): - df = mkdf(10, 3, r_idx_nlevels=2, r_idx_names=['spam', 'eggs']) - resolvers = df._get_index_resolvers() - - def to_series(mi, level): - level_values = mi.get_level_values(level) - s = level_values.to_series() - s.index = mi - return s - - col_series = df.columns.to_series() - expected = {'index': df.index, - 'columns': col_series, - 'spam': to_series(df.index, 'spam'), - 'eggs': to_series(df.index, 'eggs'), - 'C0': col_series} - for k, v in resolvers.items(): - if isinstance(v, Index): - assert v.is_(expected[k]) - elif isinstance(v, Series): - assert_series_equal(v, expected[k]) - else: - raise AssertionError("object must be a Series or Index") - - def test_raise_on_panel_with_multiindex(self): - for parser, engine in product(PARSERS, ENGINES): - yield self.check_raise_on_panel_with_multiindex, parser, engine - - def check_raise_on_panel_with_multiindex(self, parser, engine): - tm.skip_if_no_ne() - p = tm.makePanel(7) - p.items = tm.makeCustomIndex(len(p.items), nlevels=2) - with tm.assertRaises(NotImplementedError): - pd.eval('p + 1', parser=parser, engine=engine) - - def test_raise_on_panel4d_with_multiindex(self): - for 
parser, engine in product(PARSERS, ENGINES): - yield self.check_raise_on_panel4d_with_multiindex, parser, engine - - def check_raise_on_panel4d_with_multiindex(self, parser, engine): - tm.skip_if_no_ne() - p4d = tm.makePanel4D(7) - p4d.items = tm.makeCustomIndex(len(p4d.items), nlevels=2) - with tm.assertRaises(NotImplementedError): - pd.eval('p4d + 1', parser=parser, engine=engine) - - -class TestDataFrameQueryNumExprPandas(tm.TestCase): - - @classmethod - def setUpClass(cls): - super(TestDataFrameQueryNumExprPandas, cls).setUpClass() - cls.engine = 'numexpr' - cls.parser = 'pandas' - tm.skip_if_no_ne(cls.engine) - - @classmethod - def tearDownClass(cls): - super(TestDataFrameQueryNumExprPandas, cls).tearDownClass() - del cls.engine, cls.parser - - def test_date_query_with_attribute_access(self): - engine, parser = self.engine, self.parser - skip_if_no_pandas_parser(parser) - df = DataFrame(randn(5, 3)) - df['dates1'] = date_range('1/1/2012', periods=5) - df['dates2'] = date_range('1/1/2013', periods=5) - df['dates3'] = date_range('1/1/2014', periods=5) - res = df.query('@df.dates1 < 20130101 < @df.dates3', engine=engine, - parser=parser) - expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_query_no_attribute_access(self): - engine, parser = self.engine, self.parser - df = DataFrame(randn(5, 3)) - df['dates1'] = date_range('1/1/2012', periods=5) - df['dates2'] = date_range('1/1/2013', periods=5) - df['dates3'] = date_range('1/1/2014', periods=5) - res = df.query('dates1 < 20130101 < dates3', engine=engine, - parser=parser) - expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_query_with_NaT(self): - engine, parser = self.engine, self.parser - n = 10 - df = DataFrame(randn(n, 3)) - df['dates1'] = date_range('1/1/2012', periods=n) - df['dates2'] = date_range('1/1/2013', periods=n) - df['dates3'] = date_range('1/1/2014', periods=n) - 
-        df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT
-        df.loc[np.random.rand(n) > 0.5, 'dates3'] = pd.NaT
-        res = df.query('dates1 < 20130101 < dates3', engine=engine,
-                       parser=parser)
-        expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)]
-        assert_frame_equal(res, expec)
-
-    def test_date_index_query(self):
-        engine, parser = self.engine, self.parser
-        n = 10
-        df = DataFrame(randn(n, 3))
-        df['dates1'] = date_range('1/1/2012', periods=n)
-        df['dates3'] = date_range('1/1/2014', periods=n)
-        df.set_index('dates1', inplace=True, drop=True)
-        res = df.query('index < 20130101 < dates3', engine=engine,
-                       parser=parser)
-        expec = df[(df.index < '20130101') & ('20130101' < df.dates3)]
-        assert_frame_equal(res, expec)
-
-    def test_date_index_query_with_NaT(self):
-        engine, parser = self.engine, self.parser
-        n = 10
-        df = DataFrame(randn(n, 3))
-        df['dates1'] = date_range('1/1/2012', periods=n)
-        df['dates3'] = date_range('1/1/2014', periods=n)
-        df.iloc[0, 0] = pd.NaT
-        df.set_index('dates1', inplace=True, drop=True)
-        res = df.query('index < 20130101 < dates3', engine=engine,
-                       parser=parser)
-        expec = df[(df.index < '20130101') & ('20130101' < df.dates3)]
-        assert_frame_equal(res, expec)
-
-    def test_date_index_query_with_NaT_duplicates(self):
-        engine, parser = self.engine, self.parser
-        n = 10
-        d = {}
-        d['dates1'] = date_range('1/1/2012', periods=n)
-        d['dates3'] = date_range('1/1/2014', periods=n)
-        df = DataFrame(d)
-        df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT
-        df.set_index('dates1', inplace=True, drop=True)
-        res = df.query('index < 20130101 < dates3', engine=engine, parser=parser)
-        expec = df[(df.index.to_series() < '20130101') & ('20130101' < df.dates3)]
-        assert_frame_equal(res, expec)
-
-    def test_date_query_with_non_date(self):
-        engine, parser = self.engine, self.parser
-
-        n = 10
-        df = DataFrame({'dates': date_range('1/1/2012', periods=n),
-                        'nondate': np.arange(n)})
-
-        ops = '==', '!=', '<', '>', '<=', '>='
-
-        for op in ops:
-            with tm.assertRaises(TypeError):
-                df.query('dates %s nondate' % op, parser=parser, engine=engine)
-
-    def test_query_syntax_error(self):
-        engine, parser = self.engine, self.parser
-        df = DataFrame({"i": lrange(10), "+": lrange(3, 13),
-                        "r": lrange(4, 14)})
-        with tm.assertRaises(SyntaxError):
-            df.query('i - +', engine=engine, parser=parser)
-
-    def test_query_scope(self):
-        from pandas.computation.ops import UndefinedVariableError
-        engine, parser = self.engine, self.parser
-        skip_if_no_pandas_parser(parser)
-
-        df = DataFrame(np.random.randn(20, 2), columns=list('ab'))
-
-        a, b = 1, 2  # noqa
-        res = df.query('a > b', engine=engine, parser=parser)
-        expected = df[df.a > df.b]
-        assert_frame_equal(res, expected)
-
-        res = df.query('@a > b', engine=engine, parser=parser)
-        expected = df[a > df.b]
-        assert_frame_equal(res, expected)
-
-        # no local variable c
-        with tm.assertRaises(UndefinedVariableError):
-            df.query('@a > b > @c', engine=engine, parser=parser)
-
-        # no column named 'c'
-        with tm.assertRaises(UndefinedVariableError):
-            df.query('@a > b > c', engine=engine, parser=parser)
-
-    def test_query_doesnt_pickup_local(self):
-        from pandas.computation.ops import UndefinedVariableError
-
-        engine, parser = self.engine, self.parser
-        n = m = 10
-        df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc'))
-
-        # we don't pick up the local 'sin'
-        with tm.assertRaises(UndefinedVariableError):
-            df.query('sin > 5', engine=engine, parser=parser)
-
-    def test_query_builtin(self):
-        from pandas.computation.engines import NumExprClobberingError
-        engine, parser = self.engine, self.parser
-
-        n = m = 10
-        df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc'))
-
-        df.index.name = 'sin'
-        with tm.assertRaisesRegexp(NumExprClobberingError,
-                                   'Variables in expression.+'):
-            df.query('sin > 5', engine=engine, parser=parser)
-
-    def test_query(self):
-        engine, parser = self.engine, self.parser
-        df = DataFrame(np.random.randn(10, 3), columns=['a', 'b', 'c'])
-
-        assert_frame_equal(df.query('a < b', engine=engine, parser=parser),
-                           df[df.a < df.b])
-        assert_frame_equal(df.query('a + b > b * c', engine=engine,
-                                    parser=parser),
-                           df[df.a + df.b > df.b * df.c])
-
-    def test_query_index_with_name(self):
-        engine, parser = self.engine, self.parser
-        df = DataFrame(np.random.randint(10, size=(10, 3)),
-                       index=Index(range(10), name='blob'),
-                       columns=['a', 'b', 'c'])
-        res = df.query('(blob < 5) & (a < b)', engine=engine, parser=parser)
-        expec = df[(df.index < 5) & (df.a < df.b)]
-        assert_frame_equal(res, expec)
-
-        res = df.query('blob < b', engine=engine, parser=parser)
-        expec = df[df.index < df.b]
-
-        assert_frame_equal(res, expec)
-
-    def test_query_index_without_name(self):
-        engine, parser = self.engine, self.parser
-        df = DataFrame(np.random.randint(10, size=(10, 3)),
-                       index=range(10), columns=['a', 'b', 'c'])
-
-        # "index" should refer to the index
-        res = df.query('index < b', engine=engine, parser=parser)
-        expec = df[df.index < df.b]
-        assert_frame_equal(res, expec)
-
-        # test against a scalar
-        res = df.query('index < 5', engine=engine, parser=parser)
-        expec = df[df.index < 5]
-        assert_frame_equal(res, expec)
-
-    def test_nested_scope(self):
-        engine = self.engine
-        parser = self.parser
-
-        skip_if_no_pandas_parser(parser)
-
-        df = DataFrame(np.random.randn(5, 3))
-        df2 = DataFrame(np.random.randn(5, 3))
-        expected = df[(df > 0) & (df2 > 0)]
-
-        result = df.query('(@df > 0) & (@df2 > 0)', engine=engine, parser=parser)
-        assert_frame_equal(result, expected)
-
-        result = pd.eval('df[df > 0 and df2 > 0]', engine=engine,
-                         parser=parser)
-        assert_frame_equal(result, expected)
-
-        result = pd.eval('df[df > 0 and df2 > 0 and df[df > 0] > 0]',
-                         engine=engine, parser=parser)
-        expected = df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)]
-        assert_frame_equal(result, expected)
-
-        result = pd.eval('df[(df>0) & (df2>0)]', engine=engine, parser=parser)
-        expected = df.query('(@df>0) & (@df2>0)', engine=engine,
-                            parser=parser)
-        assert_frame_equal(result, expected)
-
-    def test_nested_raises_on_local_self_reference(self):
-        from pandas.computation.ops import UndefinedVariableError
-
-        df = DataFrame(np.random.randn(5, 3))
-
-        # can't reference ourself b/c we're a local so @ is necessary
-        with tm.assertRaises(UndefinedVariableError):
-            df.query('df > 0', engine=self.engine, parser=self.parser)
-
-    def test_local_syntax(self):
-        skip_if_no_pandas_parser(self.parser)
-
-        engine, parser = self.engine, self.parser
-        df = DataFrame(randn(100, 10), columns=list('abcdefghij'))
-        b = 1
-        expect = df[df.a < b]
-        result = df.query('a < @b', engine=engine, parser=parser)
-        assert_frame_equal(result, expect)
-
-        expect = df[df.a < df.b]
-        result = df.query('a < b', engine=engine, parser=parser)
-        assert_frame_equal(result, expect)
-
-    def test_chained_cmp_and_in(self):
-        skip_if_no_pandas_parser(self.parser)
-        engine, parser = self.engine, self.parser
-        cols = list('abc')
-        df = DataFrame(randn(100, len(cols)), columns=cols)
-        res = df.query('a < b < c and a not in b not in c', engine=engine,
-                       parser=parser)
-        ind = (df.a < df.b) & (df.b < df.c) & ~df.b.isin(df.a) & ~df.c.isin(df.b)
-        expec = df[ind]
-        assert_frame_equal(res, expec)
-
-    def test_local_variable_with_in(self):
-        engine, parser = self.engine, self.parser
-        skip_if_no_pandas_parser(parser)
-        a = Series(np.random.randint(3, size=15), name='a')
-        b = Series(np.random.randint(10, size=15), name='b')
-        df = DataFrame({'a': a, 'b': b})
-
-        expected = df.loc[(df.b - 1).isin(a)]
-        result = df.query('b - 1 in a', engine=engine, parser=parser)
-        assert_frame_equal(expected, result)
-
-        b = Series(np.random.randint(10, size=15), name='b')
-        expected = df.loc[(b - 1).isin(a)]
-        result = df.query('@b - 1 in a', engine=engine, parser=parser)
-        assert_frame_equal(expected, result)
-
-    def test_at_inside_string(self):
-        engine, parser = self.engine, self.parser
-        skip_if_no_pandas_parser(parser)
-        c = 1
-        df = DataFrame({'a': ['a', 'a', 'b', 'b', '@c', '@c']})
-        result = df.query('a == "@c"', engine=engine, parser=parser)
-        expected = df[df.a == "@c"]
-        assert_frame_equal(result, expected)
-
-    def test_query_undefined_local(self):
-        from pandas.computation.ops import UndefinedVariableError
-        engine, parser = self.engine, self.parser
-        skip_if_no_pandas_parser(parser)
-        df = DataFrame(np.random.rand(10, 2), columns=list('ab'))
-        with tm.assertRaisesRegexp(UndefinedVariableError,
-                                   "local variable 'c' is not defined"):
-            df.query('a == @c', engine=engine, parser=parser)
-
-    def test_index_resolvers_come_after_columns_with_the_same_name(self):
-        n = 1
-        a = np.r_[20:101:20]
-
-        df = DataFrame({'index': a, 'b': np.random.randn(a.size)})
-        df.index.name = 'index'
-        result = df.query('index > 5', engine=self.engine, parser=self.parser)
-        expected = df[df['index'] > 5]
-        assert_frame_equal(result, expected)
-
-        df = DataFrame({'index': a,
-                        'b': np.random.randn(a.size)})
-        result = df.query('ilevel_0 > 5', engine=self.engine,
-                          parser=self.parser)
-        expected = df.loc[df.index[df.index > 5]]
-        assert_frame_equal(result, expected)
-
-        df = DataFrame({'a': a, 'b': np.random.randn(a.size)})
-        df.index.name = 'a'
-        result = df.query('a > 5', engine=self.engine, parser=self.parser)
-        expected = df[df.a > 5]
-        assert_frame_equal(result, expected)
-
-        result = df.query('index > 5', engine=self.engine, parser=self.parser)
-        expected = df.loc[df.index[df.index > 5]]
-        assert_frame_equal(result, expected)
-
-    def test_inf(self):
-        n = 10
-        df = DataFrame({'a': np.random.rand(n), 'b': np.random.rand(n)})
-        df.loc[::2, 0] = np.inf
-        ops = '==', '!='
-        d = dict(zip(ops, (operator.eq, operator.ne)))
-        for op, f in d.items():
-            q = 'a %s inf' % op
-            expected = df[f(df.a, np.inf)]
-            result = df.query(q, engine=self.engine, parser=self.parser)
-            assert_frame_equal(result, expected)
-
-
-class TestDataFrameQueryNumExprPython(TestDataFrameQueryNumExprPandas):
-
-    @classmethod
-    def setUpClass(cls):
-        super(TestDataFrameQueryNumExprPython, cls).setUpClass()
-        cls.engine = 'numexpr'
-        cls.parser = 'python'
-        tm.skip_if_no_ne(cls.engine)
-        cls.frame = _frame.copy()
-
-    def test_date_query_no_attribute_access(self):
-        engine, parser = self.engine, self.parser
-        df = DataFrame(randn(5, 3))
-        df['dates1'] = date_range('1/1/2012', periods=5)
-        df['dates2'] = date_range('1/1/2013', periods=5)
-        df['dates3'] = date_range('1/1/2014', periods=5)
-        res = df.query('(dates1 < 20130101) & (20130101 < dates3)',
-                       engine=engine, parser=parser)
-        expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)]
-        assert_frame_equal(res, expec)
-
-    def test_date_query_with_NaT(self):
-        engine, parser = self.engine, self.parser
-        n = 10
-        df = DataFrame(randn(n, 3))
-        df['dates1'] = date_range('1/1/2012', periods=n)
-        df['dates2'] = date_range('1/1/2013', periods=n)
-        df['dates3'] = date_range('1/1/2014', periods=n)
-        df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT
-        df.loc[np.random.rand(n) > 0.5, 'dates3'] = pd.NaT
-        res = df.query('(dates1 < 20130101) & (20130101 < dates3)',
-                       engine=engine, parser=parser)
-        expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)]
-        assert_frame_equal(res, expec)
-
-    def test_date_index_query(self):
-        engine, parser = self.engine, self.parser
-        n = 10
-        df = DataFrame(randn(n, 3))
-        df['dates1'] = date_range('1/1/2012', periods=n)
-        df['dates3'] = date_range('1/1/2014', periods=n)
-        df.set_index('dates1', inplace=True, drop=True)
-        res = df.query('(index < 20130101) & (20130101 < dates3)',
-                       engine=engine, parser=parser)
-        expec = df[(df.index < '20130101') & ('20130101' < df.dates3)]
-        assert_frame_equal(res, expec)
-
-    def test_date_index_query_with_NaT(self):
-        engine, parser = self.engine, self.parser
-        n = 10
-        df = DataFrame(randn(n, 3))
-        df['dates1'] = date_range('1/1/2012', periods=n)
-        df['dates3'] = date_range('1/1/2014', periods=n)
-        df.iloc[0, 0] = pd.NaT
-        df.set_index('dates1', inplace=True, drop=True)
-        res = df.query('(index < 20130101) & (20130101 < dates3)',
-                       engine=engine, parser=parser)
-        expec = df[(df.index < '20130101') & ('20130101' < df.dates3)]
-        assert_frame_equal(res, expec)
-
-    def test_date_index_query_with_NaT_duplicates(self):
-        engine, parser = self.engine, self.parser
-        n = 10
-        df = DataFrame(randn(n, 3))
-        df['dates1'] = date_range('1/1/2012', periods=n)
-        df['dates3'] = date_range('1/1/2014', periods=n)
-        df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT
-        df.set_index('dates1', inplace=True, drop=True)
-        with tm.assertRaises(NotImplementedError):
-            df.query('index < 20130101 < dates3', engine=engine, parser=parser)
-
-    def test_nested_scope(self):
-        from pandas.computation.ops import UndefinedVariableError
-        engine = self.engine
-        parser = self.parser
-        # smoke test
-        x = 1
-        result = pd.eval('x + 1', engine=engine, parser=parser)
-        self.assertEqual(result, 2)
-
-        df = DataFrame(np.random.randn(5, 3))
-        df2 = DataFrame(np.random.randn(5, 3))
-
-        # don't have the pandas parser
-        with tm.assertRaises(SyntaxError):
-            df.query('(@df>0) & (@df2>0)', engine=engine, parser=parser)
-
-        with tm.assertRaises(UndefinedVariableError):
-            df.query('(df>0) & (df2>0)', engine=engine, parser=parser)
-
-        expected = df[(df > 0) & (df2 > 0)]
-        result = pd.eval('df[(df > 0) & (df2 > 0)]', engine=engine,
-                         parser=parser)
-        assert_frame_equal(expected, result)
-
-        expected = df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)]
-        result = pd.eval('df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)]',
-                         engine=engine, parser=parser)
-        assert_frame_equal(expected, result)
-
-
-class TestDataFrameQueryPythonPandas(TestDataFrameQueryNumExprPandas):
-
-    @classmethod
-    def setUpClass(cls):
-        super(TestDataFrameQueryPythonPandas, cls).setUpClass()
-        cls.engine = 'python'
-        cls.parser = 'pandas'
-        cls.frame = _frame.copy()
-
-    def test_query_builtin(self):
-        engine, parser = self.engine, self.parser
-
-        n = m = 10
-        df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc'))
-
-        df.index.name = 'sin'
-        expected = df[df.index > 5]
-        result = df.query('sin > 5', engine=engine, parser=parser)
-        assert_frame_equal(expected, result)
-
-
-class TestDataFrameQueryPythonPython(TestDataFrameQueryNumExprPython):
-
-    @classmethod
-    def setUpClass(cls):
-        super(TestDataFrameQueryPythonPython, cls).setUpClass()
-        cls.engine = cls.parser = 'python'
-        cls.frame = _frame.copy()
-
-    def test_query_builtin(self):
-        engine, parser = self.engine, self.parser
-
-        n = m = 10
-        df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc'))
-
-        df.index.name = 'sin'
-        expected = df[df.index > 5]
-        result = df.query('sin > 5', engine=engine, parser=parser)
-        assert_frame_equal(expected, result)
-
-
-PARSERS = 'python', 'pandas'
-ENGINES = 'python', 'numexpr'
-
-
-class TestDataFrameQueryStrings(object):
-    def check_str_query_method(self, parser, engine):
-        tm.skip_if_no_ne(engine)
-        df = DataFrame(randn(10, 1), columns=['b'])
-        df['strings'] = Series(list('aabbccddee'))
-        expect = df[df.strings == 'a']
-
-        if parser != 'pandas':
-            col = 'strings'
-            lst = '"a"'
-
-            lhs = [col] * 2 + [lst] * 2
-            rhs = lhs[::-1]
-
-            eq, ne = '==', '!='
-            ops = 2 * ([eq] + [ne])
-
-            for lhs, op, rhs in zip(lhs, ops, rhs):
-                ex = '{lhs} {op} {rhs}'.format(lhs=lhs, op=op, rhs=rhs)
-                assertRaises(NotImplementedError, df.query, ex, engine=engine,
-                             parser=parser, local_dict={'strings': df.strings})
-        else:
-            res = df.query('"a" == strings', engine=engine, parser=parser)
-            assert_frame_equal(res, expect)
-
-            res = df.query('strings == "a"', engine=engine, parser=parser)
-            assert_frame_equal(res, expect)
-            assert_frame_equal(res, df[df.strings.isin(['a'])])
-
-            expect = df[df.strings != 'a']
-            res = df.query('strings != "a"', engine=engine, parser=parser)
-            assert_frame_equal(res, expect)
-
-            res = df.query('"a" != strings', engine=engine, parser=parser)
-            assert_frame_equal(res, expect)
-            assert_frame_equal(res, df[~df.strings.isin(['a'])])
-
-    def test_str_query_method(self):
-        for parser, engine in product(PARSERS, ENGINES):
-            yield self.check_str_query_method, parser, engine
-
-    def test_str_list_query_method(self):
-        for parser, engine in product(PARSERS, ENGINES):
-            yield self.check_str_list_query_method, parser, engine
-
-    def check_str_list_query_method(self, parser, engine):
-        tm.skip_if_no_ne(engine)
-        df = DataFrame(randn(10, 1), columns=['b'])
-        df['strings'] = Series(list('aabbccddee'))
-        expect = df[df.strings.isin(['a', 'b'])]
-
-        if parser != 'pandas':
-            col = 'strings'
-            lst = '["a", "b"]'
-
-            lhs = [col] * 2 + [lst] * 2
-            rhs = lhs[::-1]
-
-            eq, ne = '==', '!='
-            ops = 2 * ([eq] + [ne])
-
-            for lhs, op, rhs in zip(lhs, ops, rhs):
-                ex = '{lhs} {op} {rhs}'.format(lhs=lhs, op=op, rhs=rhs)
-                with tm.assertRaises(NotImplementedError):
-                    df.query(ex, engine=engine, parser=parser)
-        else:
-            res = df.query('strings == ["a", "b"]', engine=engine,
-                           parser=parser)
-            assert_frame_equal(res, expect)
-
-            res = df.query('["a", "b"] == strings', engine=engine,
-                           parser=parser)
-            assert_frame_equal(res, expect)
-
-            expect = df[~df.strings.isin(['a', 'b'])]
-
-            res = df.query('strings != ["a", "b"]', engine=engine,
-                           parser=parser)
-            assert_frame_equal(res, expect)
-
-            res = df.query('["a", "b"] != strings', engine=engine,
-                           parser=parser)
-            assert_frame_equal(res, expect)
-
-    def check_query_with_string_columns(self, parser, engine):
-        tm.skip_if_no_ne(engine)
-        df = DataFrame({'a': list('aaaabbbbcccc'),
-                        'b': list('aabbccddeeff'),
-                        'c': np.random.randint(5, size=12),
-                        'd': np.random.randint(9, size=12)})
-        if parser == 'pandas':
-            res = df.query('a in b', parser=parser, engine=engine)
-            expec = df[df.a.isin(df.b)]
-            assert_frame_equal(res, expec)
-
-            res = df.query('a in b and c < d', parser=parser, engine=engine)
-            expec = df[df.a.isin(df.b) & (df.c < df.d)]
-            assert_frame_equal(res, expec)
-        else:
-            with assertRaises(NotImplementedError):
-                df.query('a in b', parser=parser, engine=engine)
-
-            with assertRaises(NotImplementedError):
-                df.query('a in b and c < d', parser=parser, engine=engine)
-
-    def test_query_with_string_columns(self):
-        for parser, engine in product(PARSERS, ENGINES):
-            yield self.check_query_with_string_columns, parser, engine
-
-    def check_object_array_eq_ne(self, parser, engine):
-        tm.skip_if_no_ne(engine)
-        df = DataFrame({'a': list('aaaabbbbcccc'),
-                        'b': list('aabbccddeeff'),
-                        'c': np.random.randint(5, size=12),
-                        'd': np.random.randint(9, size=12)})
-        res = df.query('a == b', parser=parser, engine=engine)
-        exp = df[df.a == df.b]
-        assert_frame_equal(res, exp)
-
-        res = df.query('a != b', parser=parser, engine=engine)
-        exp = df[df.a != df.b]
-        assert_frame_equal(res, exp)
-
-    def test_object_array_eq_ne(self):
-        for parser, engine in product(PARSERS, ENGINES):
-            yield self.check_object_array_eq_ne, parser, engine
-
-    def check_query_with_nested_strings(self, parser, engine):
-        tm.skip_if_no_ne(engine)
-        skip_if_no_pandas_parser(parser)
-        from pandas.compat import StringIO
-        raw = """id          event          timestamp
-        1   "page 1 load"   1/1/2014 0:00:01
-        1   "page 1 exit"   1/1/2014 0:00:31
-        2   "page 2 load"   1/1/2014 0:01:01
-        2   "page 2 exit"   1/1/2014 0:01:31
-        3   "page 3 load"   1/1/2014 0:02:01
-        3   "page 3 exit"   1/1/2014 0:02:31
-        4   "page 1 load"   2/1/2014 1:00:01
-        4   "page 1 exit"   2/1/2014 1:00:31
-        5   "page 2 load"   2/1/2014 1:01:01
-        5   "page 2 exit"   2/1/2014 1:01:31
-        6   "page 3 load"   2/1/2014 1:02:01
-        6   "page 3 exit"   2/1/2014 1:02:31
-        """
-        df = pd.read_csv(StringIO(raw), sep=r'\s{2,}', engine='python',
-                         parse_dates=['timestamp'])
-        expected = df[df.event == '"page 1 load"']
-        res = df.query("""'"page 1 load"' in event""", parser=parser,
-                       engine=engine)
-        assert_frame_equal(expected, res)
-
-    def test_query_with_nested_string(self):
-        for parser, engine in product(PARSERS, ENGINES):
-            yield self.check_query_with_nested_strings, parser, engine
-
-    def check_query_with_nested_special_character(self, parser, engine):
-        skip_if_no_pandas_parser(parser)
-        tm.skip_if_no_ne(engine)
-        df = DataFrame({'a': ['a', 'b', 'test & test'],
-                        'b': [1, 2, 3]})
-        res = df.query('a == "test & test"', parser=parser, engine=engine)
-        expec = df[df.a == 'test & test']
-        assert_frame_equal(res, expec)
-
-    def test_query_with_nested_special_character(self):
-        for parser, engine in product(PARSERS, ENGINES):
-            yield self.check_query_with_nested_special_character, parser, engine
-
-    def check_query_lex_compare_strings(self, parser, engine):
-        tm.skip_if_no_ne(engine=engine)
-        import operator as opr
-
-        a = Series(tm.choice(list('abcde'), 20))
-        b = Series(np.arange(a.size))
-        df = DataFrame({'X': a, 'Y': b})
-
-        ops = {'<': opr.lt, '>': opr.gt, '<=': opr.le, '>=': opr.ge}
-
-        for op, func in ops.items():
-            res = df.query('X %s "d"' % op, engine=engine, parser=parser)
-            expected = df[func(df.X, 'd')]
-            assert_frame_equal(res, expected)
-
-    def test_query_lex_compare_strings(self):
-        for parser, engine in product(PARSERS, ENGINES):
-            yield self.check_query_lex_compare_strings, parser, engine
-
-    def check_query_single_element_booleans(self, parser, engine):
-        tm.skip_if_no_ne(engine)
-        columns = 'bid', 'bidsize', 'ask', 'asksize'
-        data = np.random.randint(2, size=(1, len(columns))).astype(bool)
-        df = DataFrame(data, columns=columns)
-        res = df.query('bid & ask', engine=engine, parser=parser)
-        expected = df[df.bid & df.ask]
-        assert_frame_equal(res, expected)
-
-    def test_query_single_element_booleans(self):
-        for parser, engine in product(PARSERS, ENGINES):
-            yield self.check_query_single_element_booleans, parser, engine
-
-    def check_query_string_scalar_variable(self, parser, engine):
-        tm.skip_if_no_ne(engine)
-        df = pd.DataFrame({'Symbol': ['BUD US', 'BUD US', 'IBM US', 'IBM US'],
-                           'Price': [109.70, 109.72, 183.30, 183.35]})
-        e = df[df.Symbol == 'BUD US']
-        symb = 'BUD US'  # noqa
-        r = df.query('Symbol == @symb', parser=parser, engine=engine)
-        assert_frame_equal(e, r)
-
-    def test_query_string_scalar_variable(self):
-        for parser, engine in product(['pandas'], ENGINES):
-            yield self.check_query_string_scalar_variable, parser, engine
-
-
-class TestDataFrameEvalNumExprPandas(tm.TestCase):
-
-    @classmethod
-    def setUpClass(cls):
-        super(TestDataFrameEvalNumExprPandas, cls).setUpClass()
-        cls.engine = 'numexpr'
-        cls.parser = 'pandas'
-        tm.skip_if_no_ne()
-
-    def setUp(self):
-        self.frame = DataFrame(randn(10, 3), columns=list('abc'))
-
-    def tearDown(self):
-        del self.frame
-
-    def test_simple_expr(self):
-        res = self.frame.eval('a + b', engine=self.engine, parser=self.parser)
-        expect = self.frame.a + self.frame.b
-        assert_series_equal(res, expect)
-
-    def test_bool_arith_expr(self):
-        res = self.frame.eval('a[a < 1] + b', engine=self.engine,
-                              parser=self.parser)
-        expect = self.frame.a[self.frame.a < 1] + self.frame.b
-        assert_series_equal(res, expect)
-
-    def test_invalid_type_for_operator_raises(self):
-        df = DataFrame({'a': [1, 2], 'b': ['c', 'd']})
-        ops = '+', '-', '*', '/'
-        for op in ops:
-            with tm.assertRaisesRegexp(TypeError,
-                                       "unsupported operand type\(s\) for "
-                                       ".+: '.+' and '.+'"):
-                df.eval('a {0} b'.format(op), engine=self.engine,
-                        parser=self.parser)
-
-
-class TestDataFrameEvalNumExprPython(TestDataFrameEvalNumExprPandas):
-
-    @classmethod
-    def setUpClass(cls):
-        super(TestDataFrameEvalNumExprPython, cls).setUpClass()
-        cls.engine = 'numexpr'
-        cls.parser = 'python'
-        tm.skip_if_no_ne(cls.engine)
-
-
-class TestDataFrameEvalPythonPandas(TestDataFrameEvalNumExprPandas):
-
-    @classmethod
-    def setUpClass(cls):
-        super(TestDataFrameEvalPythonPandas, cls).setUpClass()
-        cls.engine = 'python'
-        cls.parser = 'pandas'
-
-
-class TestDataFrameEvalPythonPython(TestDataFrameEvalNumExprPython):
-
-    @classmethod
-    def setUpClass(cls):
-        super(TestDataFrameEvalPythonPython, cls).tearDownClass()
-        cls.engine = cls.parser = 'python'
-
-
-if __name__ == '__main__':
-    nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
-                   exit=False)
diff --git a/setup.cfg b/setup.cfg
index 3e13665f0ee22..5c07a44ff4f7f 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -12,4 +12,4 @@ tag_prefix = v
 parentdir_prefix = pandas-
 
 [flake8]
-ignore = F401,E731
+ignore = E731
diff --git a/setup.py b/setup.py
index 0f4492d9821ee..62d9062de1155 100755
--- a/setup.py
+++ b/setup.py
@@ -552,6 +552,7 @@ def pxd(name):
         'pandas.stats',
         'pandas.util',
         'pandas.tests',
+        'pandas.tests.frame',
         'pandas.tests.test_msgpack',
         'pandas.tools',
         'pandas.tools.tests',
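The relocated tests in the diff above exercise `DataFrame.query` across engine/parser combinations. As a standalone sketch (not part of the PR itself), the core behavior they check looks like this; the toy column values are invented for illustration:

```python
import pandas as pd

# Small frame with invented values; 'a < b' holds only in the first two rows.
df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [4, 3, 2, 1]})

# The 'python' engine needs no optional numexpr install; the 'pandas'
# parser additionally allows syntax like '@local_var' and chained 'in'.
res = df.query('a < b', engine='python', parser='pandas')

# The result should match ordinary boolean indexing.
expected = df[df.a < df.b]
assert res.equals(expected)
print(res)
```

With the optional numexpr dependency installed, passing `engine='numexpr'` should select the same rows, which is exactly the equivalence the test classes above parametrize over.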
@jreback I re-enabled flake8 checking for unused imports. We should be explicit about what imports are part of a module API (and those warnings explicitly suppressed with `# noqa`) and delete all unused imports.
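As a hypothetical illustration of that convention (module and function names are invented, not from the pandas codebase), an import that exists only to be re-exported gets a `# noqa` marker so flake8's F401 check stays enabled for genuinely dead imports elsewhere:

```python
# Hypothetical module sketch. OrderedDict is "unused" by flake8's
# reckoning if it only exists to be re-exported as part of the module's
# API; the bare "# noqa" comment suppresses warnings on that line only.
from collections import OrderedDict  # noqa (re-exported for callers)


def make_record(**fields):
    # Also use it locally: build a record with deterministically
    # sorted keys so output ordering is stable.
    return OrderedDict(sorted(fields.items()))


print(make_record(b=2, a=1))
```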
https://api.github.com/repos/pandas-dev/pandas/pulls/12032
2016-01-13T15:35:36Z
2016-01-14T14:11:00Z
2016-01-14T14:11:00Z
2016-01-16T04:14:26Z