Move tz cleanup whatsnew entries to v0.24
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index d163ad8564efb..18cd36205648a 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -99,11 +99,7 @@ Bug Fixes - Bug in :class:`Timestamp` and :class:`DatetimeIndex` where passing a :class:`Timestamp` localized after a DST transition would return a datetime before the DST transition (:issue:`20854`) - Bug in comparing :class:`DataFrame`s with tz-aware :class:`DatetimeIndex` columns with a DST transition that raised a ``KeyError`` (:issue:`19970`) -- Bug in :meth:`DatetimeIndex.shift` where an ``AssertionError`` would raise when shifting across DST (:issue:`8616`) -- Bug in :class:`Timestamp` constructor where passing an invalid timezone offset designator (``Z``) would not raise a ``ValueError``(:issue:`8910`) -- Bug in :meth:`Timestamp.replace` where replacing at a DST boundary would retain an incorrect offset (:issue:`7825`) -- Bug in :meth:`DatetimeIndex.reindex` when reindexing a tz-naive and tz-aware :class:`DatetimeIndex` (:issue:`8306`) -- Bug in :meth:`DatetimeIndex.resample` when downsampling across a DST boundary (:issue:`8531`) + **Other** diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index a63276efc5b7c..8e38171e93bc2 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -159,7 +159,11 @@ Datetimelike - Fixed bug where two :class:`DateOffset` objects with different ``normalize`` attributes could evaluate as equal (:issue:`21404`) - Bug in :class:`Index` with ``datetime64[ns, tz]`` dtype that did not localize integer data correctly (:issue:`20964`) -- +- Bug in :meth:`DatetimeIndex.shift` where an ``AssertionError`` would raise when shifting across DST (:issue:`8616`) +- Bug in :class:`Timestamp` constructor where passing an invalid timezone offset designator (``Z``) would not raise a ``ValueError``(:issue:`8910`) +- Bug in :meth:`Timestamp.replace` where replacing at a DST 
boundary would retain an incorrect offset (:issue:`7825`) +- Bug in :meth:`DatetimeIndex.reindex` when reindexing a tz-naive and tz-aware :class:`DatetimeIndex` (:issue:`8306`) +- Bug in :meth:`DatetimeIndex.resample` when downsampling across a DST boundary (:issue:`8531`) Timedelta ^^^^^^^^^
Pre-req for #21612. https://github.com/pandas-dev/pandas/pull/21612#pullrequestreview-131551269 Moving tz cleanup whatsnew entries from 0.23.2 to 0.24.0 added in #21491 cc @jreback
https://api.github.com/repos/pandas-dev/pandas/pulls/21631
2018-06-26T02:28:40Z
2018-06-26T07:44:06Z
2018-06-26T07:44:06Z
2018-06-26T15:17:55Z
Update to_gbq and read_gbq to pandas-gbq 0.5.0
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 72e7373d0dd33..60c3e4df8d129 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -19,6 +19,11 @@ Other Enhancements - :func:`to_csv` now supports ``compression`` keyword when a file handle is passed. (:issue:`21227`) - :meth:`Index.droplevel` is now implemented also for flat indexes, for compatibility with :class:`MultiIndex` (:issue:`21115`) - Added support for reading from Google Cloud Storage via the ``gcsfs`` library (:issue:`19454`) +- :func:`to_gbq` and :func:`read_gbq` signature and documentation updated to + reflect changes from the `Pandas-GBQ library version 0.5.0 + <https://pandas-gbq.readthedocs.io/en/latest/changelog.html#changelog-0-5-0>`__. + (:issue:`21627`) + .. _whatsnew_0240.api_breaking: diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 0bf5acf14294a..b553cfdc72c92 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1102,37 +1102,27 @@ def to_dict(self, orient='dict', into=dict): else: raise ValueError("orient '{o}' not understood".format(o=orient)) - def to_gbq(self, destination_table, project_id, chunksize=None, - verbose=None, reauth=False, if_exists='fail', private_key=None, - auth_local_webserver=False, table_schema=None): + def to_gbq(self, destination_table, project_id=None, chunksize=None, + reauth=False, if_exists='fail', private_key=None, + auth_local_webserver=False, table_schema=None, location=None, + progress_bar=True, verbose=None): """ Write a DataFrame to a Google BigQuery table. This function requires the `pandas-gbq package <https://pandas-gbq.readthedocs.io>`__. - Authentication to the Google BigQuery service is via OAuth 2.0. - - - If ``private_key`` is provided, the library loads the JSON service - account credentials and uses those to authenticate. - - - If no ``private_key`` is provided, the library tries `application - default credentials`_. - - .. 
_application default credentials: - https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application - - - If application default credentials are not found or cannot be used - with BigQuery, the library authenticates with user account - credentials. In this case, you will be asked to grant permissions - for product name 'pandas GBQ'. + See the `How to authenticate with Google BigQuery + <https://pandas-gbq.readthedocs.io/en/latest/howto/authentication.html>`__ + guide for authentication instructions. Parameters ---------- destination_table : str - Name of table to be written, in the form 'dataset.tablename'. - project_id : str - Google BigQuery Account project ID. + Name of table to be written, in the form ``dataset.tablename``. + project_id : str, optional + Google BigQuery Account project ID. Optional when available from + the environment. chunksize : int, optional Number of rows to be inserted in each chunk from the dataframe. Set to ``None`` to load the whole dataframe at once. @@ -1170,8 +1160,21 @@ def to_gbq(self, destination_table, project_id, chunksize=None, BigQuery API documentation on available names of a field. *New in version 0.3.1 of pandas-gbq*. - verbose : boolean, deprecated - *Deprecated in Pandas-GBQ 0.4.0.* Use the `logging module + location : str, optional + Location where the load job should run. See the `BigQuery locations + documentation + <https://cloud.google.com/bigquery/docs/dataset-locations>`__ for a + list of available locations. The location must match that of the + target dataset. + + *New in version 0.5.0 of pandas-gbq*. + progress_bar : bool, default True + Use the library `tqdm` to show the progress bar for the upload, + chunk by chunk. + + *New in version 0.5.0 of pandas-gbq*. + verbose : bool, deprecated + Deprecated in Pandas-GBQ 0.4.0. Use the `logging module to adjust verbosity instead <https://pandas-gbq.readthedocs.io/en/latest/intro.html#logging>`__. 
@@ -1182,10 +1185,12 @@ def to_gbq(self, destination_table, project_id, chunksize=None, """ from pandas.io import gbq return gbq.to_gbq( - self, destination_table, project_id, chunksize=chunksize, - verbose=verbose, reauth=reauth, if_exists=if_exists, - private_key=private_key, auth_local_webserver=auth_local_webserver, - table_schema=table_schema) + self, destination_table, project_id=project_id, + chunksize=chunksize, reauth=reauth, + if_exists=if_exists, private_key=private_key, + auth_local_webserver=auth_local_webserver, + table_schema=table_schema, location=location, + progress_bar=progress_bar, verbose=verbose) @classmethod def from_records(cls, data, index=None, exclude=None, columns=None, diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py index c7c16598ee432..87a0e4d5d1747 100644 --- a/pandas/io/gbq.py +++ b/pandas/io/gbq.py @@ -22,34 +22,26 @@ def _try_import(): def read_gbq(query, project_id=None, index_col=None, col_order=None, - reauth=False, verbose=None, private_key=None, dialect='legacy', - **kwargs): + reauth=False, private_key=None, auth_local_webserver=False, + dialect='legacy', location=None, configuration=None, + verbose=None): """ Load data from Google BigQuery. This function requires the `pandas-gbq package <https://pandas-gbq.readthedocs.io>`__. - Authentication to the Google BigQuery service is via OAuth 2.0. - - - If "private_key" is not provided: - - By default "application default credentials" are used. - - If default application credentials are not found or are restrictive, - user account credentials are used. In this case, you will be asked to - grant permissions for product name 'pandas GBQ'. - - - If "private_key" is provided: - - Service account credentials will be used to authenticate. + See the `How to authenticate with Google BigQuery + <https://pandas-gbq.readthedocs.io/en/latest/howto/authentication.html>`__ + guide for authentication instructions. Parameters ---------- query : str SQL-Like Query to return data values. 
- project_id : str - Google BigQuery Account project ID. + project_id : str, optional + Google BigQuery Account project ID. Optional when available from + the environment. index_col : str, optional Name of result column to use for index in results DataFrame. col_order : list(str), optional @@ -62,6 +54,16 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None, Service account private key in JSON format. Can be file path or string contents. This is useful for remote server authentication (eg. Jupyter/IPython notebook on remote host). + auth_local_webserver : boolean, default False + Use the `local webserver flow`_ instead of the `console flow`_ + when getting user credentials. + + .. _local webserver flow: + http://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_local_server + .. _console flow: + http://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_console + + *New in version 0.2.0 of pandas-gbq*. dialect : str, default 'legacy' SQL syntax dialect to use. Value can be one of: @@ -74,19 +76,26 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None, compliant with the SQL 2011 standard. For more information see `BigQuery Standard SQL Reference <https://cloud.google.com/bigquery/docs/reference/standard-sql/>`__. - verbose : boolean, deprecated - *Deprecated in Pandas-GBQ 0.4.0.* Use the `logging module - to adjust verbosity instead - <https://pandas-gbq.readthedocs.io/en/latest/intro.html#logging>`__. - kwargs : dict - Arbitrary keyword arguments. - configuration (dict): query config parameters for job processing. + location : str, optional + Location where the query job should run. See the `BigQuery locations + documentation + <https://cloud.google.com/bigquery/docs/dataset-locations>`__ for a + list of available locations. 
The location must match that of any + datasets used in the query. + + *New in version 0.5.0 of pandas-gbq*. + configuration : dict, optional + Query config parameters for job processing. For example: configuration = {'query': {'useQueryCache': False}} - For more information see `BigQuery SQL Reference - <https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query>`__ + For more information see `BigQuery REST API Reference + <https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query>`__. + verbose : None, deprecated + Deprecated in Pandas-GBQ 0.4.0. Use the `logging module + to adjust verbosity instead + <https://pandas-gbq.readthedocs.io/en/latest/intro.html#logging>`__. Returns ------- @@ -100,20 +109,21 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None, """ pandas_gbq = _try_import() return pandas_gbq.read_gbq( - query, project_id=project_id, - index_col=index_col, col_order=col_order, - reauth=reauth, verbose=verbose, - private_key=private_key, - dialect=dialect, - **kwargs) + query, project_id=project_id, index_col=index_col, + col_order=col_order, reauth=reauth, verbose=verbose, + private_key=private_key, auth_local_webserver=auth_local_webserver, + dialect=dialect, location=location, configuration=configuration) -def to_gbq(dataframe, destination_table, project_id, chunksize=None, +def to_gbq(dataframe, destination_table, project_id=None, chunksize=None, verbose=None, reauth=False, if_exists='fail', private_key=None, - auth_local_webserver=False, table_schema=None): + auth_local_webserver=False, table_schema=None, location=None, + progress_bar=True): pandas_gbq = _try_import() return pandas_gbq.to_gbq( - dataframe, destination_table, project_id, chunksize=chunksize, - verbose=verbose, reauth=reauth, if_exists=if_exists, - private_key=private_key, auth_local_webserver=auth_local_webserver, - table_schema=table_schema) + dataframe, destination_table, project_id=project_id, + chunksize=chunksize, 
verbose=verbose, reauth=reauth, + if_exists=if_exists, private_key=private_key, + auth_local_webserver=auth_local_webserver, + table_schema=table_schema, location=location, + progress_bar=progress_bar) diff --git a/pandas/tests/io/test_gbq.py b/pandas/tests/io/test_gbq.py index 58a84ad4d47f8..dc6c319bb3366 100644 --- a/pandas/tests/io/test_gbq.py +++ b/pandas/tests/io/test_gbq.py @@ -2,7 +2,6 @@ from datetime import datetime import pytz import platform -from time import sleep import os import numpy as np @@ -48,16 +47,18 @@ def _in_travis_environment(): def _get_project_id(): if _in_travis_environment(): return os.environ.get('GBQ_PROJECT_ID') - else: - return PROJECT_ID + return PROJECT_ID or os.environ.get('GBQ_PROJECT_ID') def _get_private_key_path(): if _in_travis_environment(): return os.path.join(*[os.environ.get('TRAVIS_BUILD_DIR'), 'ci', 'travis_gbq.json']) - else: - return PRIVATE_KEY_JSON_PATH + + private_key_path = PRIVATE_KEY_JSON_PATH + if not private_key_path: + private_key_path = os.environ.get('GBQ_GOOGLE_APPLICATION_CREDENTIALS') + return private_key_path def clean_gbq_environment(private_key=None): @@ -123,11 +124,9 @@ def test_roundtrip(self): test_size = 20001 df = make_mixed_dataframe_v2(test_size) - df.to_gbq(destination_table, _get_project_id(), chunksize=10000, + df.to_gbq(destination_table, _get_project_id(), chunksize=None, private_key=_get_private_key_path()) - sleep(30) # <- Curses Google!!! - result = pd.read_gbq("SELECT COUNT(*) AS num_rows FROM {0}" .format(destination_table), project_id=_get_project_id(),
Closes https://github.com/pydata/pandas-gbq/issues/177 Closes #21627 - [x] closes #xxxx - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry I've also verified that the docs build and render well with ``` python doc/make.py --single read_gbq python doc/make.py --single DataFrame.to_gbq ``` Output from `scripts/validate_docstrings.py pandas.read_gbq`: <details> ``` ################################################################################ ######################### Docstring (pandas.read_gbq) ######################### ################################################################################ Load data from Google BigQuery. This function requires the `pandas-gbq package <https://pandas-gbq.readthedocs.io>`__. See the `How to authenticate with Google BigQuery <https://pandas-gbq.readthedocs.io/en/latest/howto/authentication.html>`__ guide for authentication instructions. Parameters ---------- query : str SQL-Like Query to return data values. project_id : str, optional Google BigQuery Account project ID. Optional when available from the environment. index_col : str, optional Name of result column to use for index in results DataFrame. col_order : list(str), optional List of BigQuery column names in the desired order for results DataFrame. reauth : boolean, default False Force Google BigQuery to re-authenticate the user. This is useful if multiple accounts are used. private_key : str, optional Service account private key in JSON format. Can be file path or string contents. This is useful for remote server authentication (eg. Jupyter/IPython notebook on remote host). auth_local_webserver : boolean, default False Use the `local webserver flow`_ instead of the `console flow`_ when getting user credentials. .. _local webserver flow: http://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_local_server .. 
_console flow: http://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_console *New in version 0.2.0 of pandas-gbq*. dialect : str, default 'legacy' SQL syntax dialect to use. Value can be one of: ``'legacy'`` Use BigQuery's legacy SQL dialect. For more information see `BigQuery Legacy SQL Reference <https://cloud.google.com/bigquery/docs/reference/legacy-sql>`__. ``'standard'`` Use BigQuery's standard SQL, which is compliant with the SQL 2011 standard. For more information see `BigQuery Standard SQL Reference <https://cloud.google.com/bigquery/docs/reference/standard-sql/>`__. location : str, optional Location where the query job should run. See the `BigQuery locations documentation <https://cloud.google.com/bigquery/docs/dataset-locations>`__ for a list of available locations. The location must match that of any datasets used in the query. *New in version 0.5.0 of pandas-gbq*. configuration : dict, optional Query config parameters for job processing. For example: configuration = {'query': {'useQueryCache': False}} For more information see `BigQuery REST API Reference <https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query>`__. verbose : None, deprecated Deprecated in Pandas-GBQ 0.4.0. Use the `logging module to adjust verbosity instead <https://pandas-gbq.readthedocs.io/en/latest/intro.html#logging>`__. Returns ------- df: DataFrame DataFrame representing results of query. See Also -------- pandas_gbq.read_gbq : This function in the pandas-gbq library. pandas.DataFrame.to_gbq : Write a DataFrame to Google BigQuery. 
################################################################################ ################################## Validation ################################## ################################################################################ Errors found: No examples section found ``` </details> Output from `scripts/validate_docstrings.py pandas.DataFrame.to_gbq`: <details> ``` ################################################################################ ##################### Docstring (pandas.DataFrame.to_gbq) ##################### ################################################################################ Write a DataFrame to a Google BigQuery table. This function requires the `pandas-gbq package <https://pandas-gbq.readthedocs.io>`__. See the `How to authenticate with Google BigQuery <https://pandas-gbq.readthedocs.io/en/latest/howto/authentication.html>`__ guide for authentication instructions. Parameters ---------- destination_table : str Name of table to be written, in the form ``dataset.tablename``. project_id : str, optional Google BigQuery Account project ID. Optional when available from the environment. chunksize : int, optional Number of rows to be inserted in each chunk from the dataframe. Use ``None`` to load the dataframe in a single chunk. reauth : bool, default False Force Google BigQuery to re-authenticate the user. This is useful if multiple accounts are used. if_exists : str, default 'fail' Behavior when the destination table exists. Value can be one of: ``'fail'`` If table exists, do nothing. ``'replace'`` If table exists, drop it, recreate it, and insert data. ``'append'`` If table exists, insert data. Create if does not exist. private_key : str, optional Service account private key in JSON format. Can be file path or string contents. This is useful for remote server authentication (eg. Jupyter/IPython notebook on remote host). 
auth_local_webserver : bool, default False Use the `local webserver flow`_ instead of the `console flow`_ when getting user credentials. .. _local webserver flow: http://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_local_server .. _console flow: http://google-auth-oauthlib.readthedocs.io/en/latest/reference/google_auth_oauthlib.flow.html#google_auth_oauthlib.flow.InstalledAppFlow.run_console *New in version 0.2.0 of pandas-gbq*. table_schema : list of dicts, optional List of BigQuery table fields to which according DataFrame columns conform to, e.g. ``[{'name': 'col1', 'type': 'STRING'},...]``. If schema is not provided, it will be generated according to dtypes of DataFrame columns. See BigQuery API documentation on available names of a field. *New in version 0.3.1 of pandas-gbq*. location : str, optional Location where the load job should run. See the `BigQuery locations documentation <https://cloud.google.com/bigquery/docs/dataset-locations>`__ for a list of available locations. The location must match that of the target dataset. *New in version 0.5.0 of pandas-gbq*. progress_bar : bool, default True Use the library `tqdm` to show the progress bar for the upload, chunk by chunk. *New in version 0.5.0 of pandas-gbq*. verbose : bool, deprecated Deprecated in Pandas-GBQ 0.4.0. Use the `logging module to adjust verbosity instead <https://pandas-gbq.readthedocs.io/en/latest/intro.html#logging>`__. See Also -------- pandas_gbq.to_gbq : This function in the pandas-gbq library. pandas.read_gbq : Read a DataFrame from Google BigQuery. ################################################################################ ################################## Validation ################################## ################################################################################ Errors found: No returns section found No examples section found ``` </details>
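The signature changes described above can be sketched with a small illustrative example. The project and table names below are placeholders, and a real call needs valid Google Cloud credentials plus the optional pandas-gbq package, so the calls are wrapped in functions rather than executed:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

def upload_example():
    # location and progress_bar are new in pandas-gbq 0.5.0;
    # project_id is now optional when available from the environment.
    df.to_gbq("my_dataset.my_table", project_id="my-project",
              location="US", progress_bar=True, if_exists="replace")

def query_example():
    # configuration replaces the old **kwargs pass-through.
    return pd.read_gbq(
        "SELECT COUNT(*) AS num_rows FROM my_dataset.my_table",
        project_id="my-project", dialect="standard", location="US",
        configuration={"query": {"useQueryCache": False}})
```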
https://api.github.com/repos/pandas-dev/pandas/pulls/21628
2018-06-25T18:03:15Z
2018-06-26T22:25:58Z
2018-06-26T22:25:57Z
2018-06-26T22:26:05Z
DOC: Do not use 'type' as first word when specifying a return type (#21622)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 74bb2abc27c4b..34d3eb0a6db73 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -4675,7 +4675,7 @@ def swaplevel(self, i=-2, j=-1, axis=0): Returns ------- - swapped : type of caller (new object) + swapped : same type as caller (new object) .. versionchanged:: 0.18.1 diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 04ba0b5de3f7f..4efdd3812accd 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -800,7 +800,7 @@ def swaplevel(self, i=-2, j=-1, axis=0): Returns ------- - swapped : type of caller (new object) + swapped : same type as caller (new object) .. versionchanged:: 0.18.1 @@ -1073,7 +1073,7 @@ def _set_axis_name(self, name, axis=0, inplace=False): Returns ------- - renamed : type of caller or None if inplace=True + renamed : same type as caller or None if inplace=True See Also -------- @@ -2468,7 +2468,7 @@ def get(self, key, default=None): Returns ------- - value : type of items contained in object + value : same type as items contained in object """ try: return self[key] @@ -2768,7 +2768,7 @@ def __delitem__(self, key): Returns ------- - taken : type of caller + taken : same type as caller An array-like containing the elements taken from the object. See Also @@ -2824,7 +2824,7 @@ def _take(self, indices, axis=0, is_copy=True): Returns ------- - taken : type of caller + taken : same type as caller An array-like containing the elements taken from the object. See Also @@ -3033,7 +3033,7 @@ def select(self, crit, axis=0): Returns ------- - selection : type of caller + selection : same type as caller """ warnings.warn("'select' is deprecated and will be removed in a " "future release. You can use " @@ -3924,7 +3924,7 @@ def head(self, n=5): Returns ------- - obj_head : type of caller + obj_head : same type as caller The first `n` rows of the caller object. 
See Also @@ -4447,7 +4447,7 @@ def _consolidate(self, inplace=False): Returns ------- - consolidated : type of caller + consolidated : same type as caller """ inplace = validate_bool_kwarg(inplace, 'inplace') if inplace: @@ -4916,7 +4916,7 @@ def astype(self, dtype, copy=True, errors='raise', **kwargs): Returns ------- - casted : type of caller + casted : same type as caller Examples -------- @@ -6691,7 +6691,7 @@ def asfreq(self, freq, method=None, how=None, normalize=False, Returns ------- - converted : type of caller + converted : same type as caller Examples -------- @@ -6772,7 +6772,7 @@ def at_time(self, time, asof=False): Returns ------- - values_at_time : type of caller + values_at_time : same type as caller Examples -------- @@ -6826,7 +6826,7 @@ def between_time(self, start_time, end_time, include_start=True, Returns ------- - values_between_time : type of caller + values_between_time : same type as caller Examples -------- @@ -7145,7 +7145,7 @@ def first(self, offset): Returns ------- - subset : type of caller + subset : same type as caller See Also -------- @@ -7209,7 +7209,7 @@ def last(self, offset): Returns ------- - subset : type of caller + subset : same type as caller See Also -------- diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 0bbdfbbe52ac4..c69d7f43de8ea 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -867,7 +867,7 @@ def get_group(self, name, obj=None): Returns ------- - group : type of obj + group : same type as obj """ if obj is None: obj = self._selected_obj diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py index 714cd09a27294..09d958059d355 100644 --- a/pandas/core/sparse/series.py +++ b/pandas/core/sparse/series.py @@ -398,7 +398,7 @@ def abs(self): Returns ------- - abs: type of caller + abs: same type as caller """ return self._constructor(np.abs(self.values), index=self.index).__finalize__(self) diff --git a/pandas/core/window.py 
b/pandas/core/window.py index 9d0f9dc4f75f9..f089e402261db 100644 --- a/pandas/core/window.py +++ b/pandas/core/window.py @@ -665,7 +665,7 @@ def _apply_window(self, mean=True, **kwargs): Returns ------- - y : type of input argument + y : same type as input argument """ window = self._prep_window(**kwargs) @@ -2139,7 +2139,7 @@ def _apply(self, func, **kwargs): Returns ------- - y : type of input argument + y : same type as input argument """ blocks, obj, index = self._create_blocks() diff --git a/pandas/io/packers.py b/pandas/io/packers.py index f9b1d1574d45c..03a5e8528f72d 100644 --- a/pandas/io/packers.py +++ b/pandas/io/packers.py @@ -178,7 +178,7 @@ def read_msgpack(path_or_buf, encoding='utf-8', iterator=False, **kwargs): Returns ------- - obj : type of object stored in file + obj : same type as object stored in file """ path_or_buf, _, _, should_close = get_filepath_or_buffer(path_or_buf) diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py index d27735fbca318..d347d76c33e0f 100644 --- a/pandas/io/pickle.py +++ b/pandas/io/pickle.py @@ -103,7 +103,7 @@ def read_pickle(path, compression='infer'): Returns ------- - unpickled : type of object stored in file + unpickled : same type as object stored in file See Also -------- diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index aad387e0cdd58..580c7923017e5 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -687,7 +687,7 @@ def get(self, key): Returns ------- - obj : type of object stored in file + obj : same type as object stored in file """ group = self.get_node(key) if group is None:
PyCharm is otherwise confused and expects objects of type 'type' to be returned. - [x] closes #21622 - [x] tests added / passed (no code changes) - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry (not needed I guess)
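The convention the patch applies can be sketched with a minimal numpydoc-style docstring (the `head` helper here is hypothetical, not pandas code). Writing ``same type as caller`` instead of ``type of caller`` keeps tools from parsing the annotation as a literal return type named ``type``:

```python
def head(obj, n=5):
    """Return the first `n` elements of `obj`.

    Returns
    -------
    obj_head : same type as caller
        The first `n` elements. Writing ``type of caller`` here instead
        would lead IDEs such as PyCharm to infer a return type of ``type``.
    """
    return obj[:n]
```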
https://api.github.com/repos/pandas-dev/pandas/pulls/21623
2018-06-25T14:14:16Z
2018-06-25T22:22:24Z
2018-06-25T22:22:24Z
2018-06-25T22:22:32Z
remove unused cimport
diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in index b92eb0e651276..4d2b6f845eb71 100644 --- a/pandas/_libs/hashtable_class_helper.pxi.in +++ b/pandas/_libs/hashtable_class_helper.pxi.in @@ -4,8 +4,6 @@ Template for each `dtype` helper function for hashtable WARNING: DO NOT edit .pxi FILE directly, .pxi is generated from .pxi.in """ -from missing cimport is_null_datetimelike - #---------------------------------------------------------------------- # VectorData
- [ ] closes #xxxx - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21619
2018-06-25T03:24:22Z
2018-06-25T10:56:55Z
2018-06-25T10:56:55Z
2018-06-25T18:07:50Z
CLN: make CategoricalIndex._create_categorical a classmethod
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 4f140a6e77b2f..122f8662abb61 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -1130,7 +1130,8 @@ def to_frame(self, index=True): """ from pandas import DataFrame - result = DataFrame(self._shallow_copy(), columns=[self.name or 0]) + name = self.name or 0 + result = DataFrame({name: self.values.copy()}) if index: result.index = self diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py index fc669074758da..a2efe2c49c747 100644 --- a/pandas/core/indexes/category.py +++ b/pandas/core/indexes/category.py @@ -85,11 +85,11 @@ def __new__(cls, data=None, categories=None, ordered=None, dtype=None, name = data.name if isinstance(data, ABCCategorical): - data = cls._create_categorical(cls, data, categories, ordered, + data = cls._create_categorical(data, categories, ordered, dtype) elif isinstance(data, CategoricalIndex): data = data._data - data = cls._create_categorical(cls, data, categories, ordered, + data = cls._create_categorical(data, categories, ordered, dtype) else: @@ -99,7 +99,7 @@ def __new__(cls, data=None, categories=None, ordered=None, dtype=None, if data is not None or categories is None: cls._scalar_data_error(data) data = [] - data = cls._create_categorical(cls, data, categories, ordered, + data = cls._create_categorical(data, categories, ordered, dtype) if copy: @@ -136,8 +136,8 @@ def _create_from_codes(self, codes, categories=None, ordered=None, ordered=self.ordered) return CategoricalIndex(cat, name=name) - @staticmethod - def _create_categorical(self, data, categories=None, ordered=None, + @classmethod + def _create_categorical(cls, data, categories=None, ordered=None, dtype=None): """ *this is an internal non-public method* @@ -155,7 +155,7 @@ def _create_categorical(self, data, categories=None, ordered=None, ------- Categorical """ - if (isinstance(data, (ABCSeries, type(self))) and + if (isinstance(data, (cls, 
ABCSeries)) and is_categorical_dtype(data)): data = data.values @@ -179,7 +179,7 @@ def _simple_new(cls, values, name=None, categories=None, ordered=None, dtype=None, **kwargs): result = object.__new__(cls) - values = cls._create_categorical(cls, values, categories, ordered, + values = cls._create_categorical(values, categories, ordered, dtype=dtype) result._data = values result.name = name @@ -236,7 +236,7 @@ def _is_dtype_compat(self, other): if not is_list_like(values): values = [values] other = CategoricalIndex(self._create_categorical( - self, other, categories=self.categories, ordered=self.ordered)) + other, categories=self.categories, ordered=self.ordered)) if not other.isin(values).all(): raise TypeError("cannot append a non-category item to a " "CategoricalIndex") @@ -798,7 +798,7 @@ def _evaluate_compare(self, other): other = other._values elif isinstance(other, Index): other = self._create_categorical( - self, other._values, categories=self.categories, + other._values, categories=self.categories, ordered=self.ordered) if isinstance(other, (ABCCategorical, np.ndarray,
Currently, ``CategoricalIndex._create_categorical`` is a staticmethod, and is being called internally using *either* instances or classes as its first argument, e.g.: * in ``_is_dtype_compat`` *an instance* is supplied as the first argument, * in ``__new__`` *a class* is supplied as the first argument. This is confusing and makes the code paths differ depending on how the method is called, which makes it difficult to reason about the precise output of the method. This PR cleans this up by making ``_create_categorical`` a classmethod. This simplifies the code, and we can also drop the method's explicit first argument when calling it. Calling ``_create_categorical`` unnecessarily is one reason for the slowness of #20395. After this cleanup PR I will do another that should get #20395 down to 1.6 ms as well as give some other related performance improvements.
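The pattern this PR removes can be illustrated with a minimal sketch (not pandas code): a staticmethod whose first parameter is named ``self`` must be passed an instance *or* a class explicitly, whereas a classmethod always receives the class automatically.

```python
class Before:
    @staticmethod
    def _create(self, data):
        # `self` may be an instance or the class itself, depending on
        # the caller -- two code paths for the same method.
        return list(data)

class After:
    @classmethod
    def _create(cls, data):
        # `cls` is always the class; callers drop the explicit argument.
        return list(data)

Before._create(Before, [1, 2])   # class passed explicitly
Before._create(Before(), [1, 2]) # or an instance
After._create([1, 2])            # single, unambiguous call
```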
https://api.github.com/repos/pandas-dev/pandas/pulls/21618
2018-06-24T21:36:42Z
2018-06-25T22:24:31Z
2018-06-25T22:24:31Z
2018-10-27T08:17:10Z
DEPR: Series.ptp()
diff --git a/doc/source/api.rst b/doc/source/api.rst index f2c00d5d12031..f1e9d236c0028 100644 --- a/doc/source/api.rst +++ b/doc/source/api.rst @@ -434,7 +434,6 @@ Computations / Descriptive Stats Series.value_counts Series.compound Series.nonzero - Series.ptp Reindexing / Selection / Label manipulation diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 72e7373d0dd33..ef741de7cf873 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -111,6 +111,7 @@ Deprecations - :meth:`DataFrame.to_stata`, :meth:`read_stata`, :class:`StataReader` and :class:`StataWriter` have deprecated the ``encoding`` argument. The encoding of a Stata dta file is determined by the file type and cannot be changed (:issue:`21244`). - :meth:`MultiIndex.to_hierarchical` is deprecated and will be removed in a future version (:issue:`21613`) +- :meth:`Series.ptp` is deprecated. Use ``numpy.ptp`` instead (:issue:`21614`) - .. _whatsnew_0240.prior_deprecations: diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 8fa79a130d1f8..8c384e3eeea58 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -8875,13 +8875,21 @@ def _add_series_only_operations(cls): def nanptp(values, axis=0, skipna=True): nmax = nanops.nanmax(values, axis, skipna) nmin = nanops.nanmin(values, axis, skipna) + warnings.warn("Method .ptp is deprecated and will be removed " + "in a future version. Use numpy.ptp instead.", + FutureWarning, stacklevel=4) return nmax - nmin cls.ptp = _make_stat_function( cls, 'ptp', name, name2, axis_descr, - """Returns the difference between the maximum value and the + """ + Returns the difference between the maximum value and the minimum value in the object. This is the equivalent of the - ``numpy.ndarray`` method ``ptp``.""", + ``numpy.ndarray`` method ``ptp``. + + .. 
deprecated:: 0.24.0 + Use numpy.ptp instead + """, nanptp) @classmethod diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py index 36342b5ba4ee1..a14944fde9b36 100644 --- a/pandas/tests/series/test_analytics.py +++ b/pandas/tests/series/test_analytics.py @@ -1395,6 +1395,7 @@ def test_numpy_argmax_deprecated(self): s, out=data) def test_ptp(self): + # GH21614 N = 1000 arr = np.random.randn(N) ser = Series(arr) @@ -1402,27 +1403,36 @@ def test_ptp(self): # GH11163 s = Series([3, 5, np.nan, -3, 10]) - assert s.ptp() == 13 - assert pd.isna(s.ptp(skipna=False)) + with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + assert s.ptp() == 13 + assert pd.isna(s.ptp(skipna=False)) mi = pd.MultiIndex.from_product([['a', 'b'], [1, 2, 3]]) s = pd.Series([1, np.nan, 7, 3, 5, np.nan], index=mi) expected = pd.Series([6, 2], index=['a', 'b'], dtype=np.float64) - tm.assert_series_equal(s.ptp(level=0), expected) + with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + tm.assert_series_equal(s.ptp(level=0), expected) expected = pd.Series([np.nan, np.nan], index=['a', 'b']) - tm.assert_series_equal(s.ptp(level=0, skipna=False), expected) + with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + tm.assert_series_equal(s.ptp(level=0, skipna=False), expected) with pytest.raises(ValueError): - s.ptp(axis=1) + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): + s.ptp(axis=1) s = pd.Series(['a', 'b', 'c', 'd', 'e']) with pytest.raises(TypeError): - s.ptp() + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): + s.ptp() with pytest.raises(NotImplementedError): - s.ptp(numeric_only=True) + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): + s.ptp(numeric_only=True) def test_empty_timeseries_redections_return_nat(self): # covers #11245
xref #18262 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21614
2018-06-24T15:23:53Z
2018-07-06T13:56:03Z
2018-07-06T13:56:03Z
2018-10-25T18:22:34Z
DEPR: MultiIndex.to_hierarchical
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index c23ed006ff637..dbe5c481160a3 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -83,7 +83,7 @@ Deprecations ~~~~~~~~~~~~ - :meth:`DataFrame.to_stata`, :meth:`read_stata`, :class:`StataReader` and :class:`StataWriter` have deprecated the ``encoding`` argument. The encoding of a Stata dta file is determined by the file type and cannot be changed (:issue:`21244`). -- +- :meth:`MultiIndex.to_hierarchical` is deprecated and will be removed in a future version (:issue:`21613`) - .. _whatsnew_0240.prior_deprecations: diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py index ab23a80acdaae..8339e27651082 100644 --- a/pandas/core/indexes/multi.py +++ b/pandas/core/indexes/multi.py @@ -189,7 +189,6 @@ class MultiIndex(Index): from_product set_levels set_labels - to_hierarchical to_frame is_lexsorted sortlevel @@ -1182,6 +1181,8 @@ def to_frame(self, index=True): def to_hierarchical(self, n_repeat, n_shuffle=1): """ + .. deprecated:: 0.24.0 + Return a MultiIndex reshaped to conform to the shapes given by n_repeat and n_shuffle. 
@@ -1216,6 +1217,9 @@ def to_hierarchical(self, n_repeat, n_shuffle=1): # Assumes that each label is divisible by n_shuffle labels = [x.reshape(n_shuffle, -1).ravel(order='F') for x in labels] names = self.names + warnings.warn("Method .to_hierarchical is deprecated and will " + "be removed in a future version", + FutureWarning, stacklevel=2) return MultiIndex(levels=levels, labels=labels, names=names) @property diff --git a/pandas/core/panel.py b/pandas/core/panel.py index c4aa471b8b944..c8797f14e1cc8 100644 --- a/pandas/core/panel.py +++ b/pandas/core/panel.py @@ -948,10 +948,14 @@ def to_frame(self, filter_observations=True): data[item] = self[item].values.ravel()[selector] def construct_multi_parts(idx, n_repeat, n_shuffle=1): - axis_idx = idx.to_hierarchical(n_repeat, n_shuffle) - labels = [x[selector] for x in axis_idx.labels] - levels = axis_idx.levels - names = axis_idx.names + # Replicates and shuffles MultiIndex, returns individual attributes + labels = [np.repeat(x, n_repeat) for x in idx.labels] + # Assumes that each label is divisible by n_shuffle + labels = [x.reshape(n_shuffle, -1).ravel(order='F') + for x in labels] + labels = [x[selector] for x in labels] + levels = idx.levels + names = idx.names return labels, levels, names def construct_index_parts(idx, major=True): diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py index ab53002ee1587..362f917e74972 100644 --- a/pandas/tests/indexes/test_multi.py +++ b/pandas/tests/indexes/test_multi.py @@ -1673,9 +1673,11 @@ def test_to_frame(self): tm.assert_frame_equal(result, expected) def test_to_hierarchical(self): + # GH21613 index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), ( 2, 'two')]) - result = index.to_hierarchical(3) + with tm.assert_produces_warning(FutureWarning): + result = index.to_hierarchical(3) expected = MultiIndex(levels=[[1, 2], ['one', 'two']], labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]]) @@ 
-1683,7 +1685,8 @@ def test_to_hierarchical(self): assert result.names == index.names # K > 1 - result = index.to_hierarchical(3, 2) + with tm.assert_produces_warning(FutureWarning): + result = index.to_hierarchical(3, 2) expected = MultiIndex(levels=[[1, 2], ['one', 'two']], labels=[[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1], [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]]) @@ -1694,8 +1697,8 @@ def test_to_hierarchical(self): index = MultiIndex.from_tuples([(2, 'c'), (1, 'b'), (2, 'a'), (2, 'b')], names=['N1', 'N2']) - - result = index.to_hierarchical(2) + with tm.assert_produces_warning(FutureWarning): + result = index.to_hierarchical(2) expected = MultiIndex.from_tuples([(2, 'c'), (2, 'c'), (1, 'b'), (1, 'b'), (2, 'a'), (2, 'a'),
xref #18262 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21613
2018-06-24T15:17:37Z
2018-06-26T10:07:55Z
2018-06-26T10:07:55Z
2018-06-26T11:03:09Z
TST: Clean old timezone issues PT2
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 406ca9ba045c9..1105acda067d3 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -164,12 +164,6 @@ Datetimelike ^^^^^^^^^^^^ - Fixed bug where two :class:`DateOffset` objects with different ``normalize`` attributes could evaluate as equal (:issue:`21404`) -- Bug in :class:`Index` with ``datetime64[ns, tz]`` dtype that did not localize integer data correctly (:issue:`20964`) -- Bug in :meth:`DatetimeIndex.shift` where an ``AssertionError`` would raise when shifting across DST (:issue:`8616`) -- Bug in :class:`Timestamp` constructor where passing an invalid timezone offset designator (``Z``) would not raise a ``ValueError``(:issue:`8910`) -- Bug in :meth:`Timestamp.replace` where replacing at a DST boundary would retain an incorrect offset (:issue:`7825`) -- Bug in :meth:`DatetimeIndex.reindex` when reindexing a tz-naive and tz-aware :class:`DatetimeIndex` (:issue:`8306`) -- Bug in :meth:`DatetimeIndex.resample` when downsampling across a DST boundary (:issue:`8531`) Timedelta ^^^^^^^^^ @@ -181,9 +175,15 @@ Timedelta Timezones ^^^^^^^^^ -- -- -- +- Bug in :meth:`DatetimeIndex.shift` where an ``AssertionError`` would raise when shifting across DST (:issue:`8616`) +- Bug in :class:`Timestamp` constructor where passing an invalid timezone offset designator (``Z``) would not raise a ``ValueError``(:issue:`8910`) +- Bug in :meth:`Timestamp.replace` where replacing at a DST boundary would retain an incorrect offset (:issue:`7825`) +- Bug in :meth:`Series.replace` with ``datetime64[ns, tz]`` data when replacing ``NaT`` (:issue:`11792`) +- Bug in :class:`Timestamp` when passing different string date formats with a timezone offset would produce different timezone offsets (:issue:`12064`) +- Bug when comparing a tz-naive :class:`Timestamp` to a tz-aware :class:`DatetimeIndex` which would coerce the :class:`DatetimeIndex` to tz-naive (:issue:`12601`) +- Bug in 
:meth:`Series.truncate` with a tz-aware :class:`DatetimeIndex` which would cause a core dump (:issue:`9243`) +- Bug in :class:`Series` constructor which would coerce tz-aware and tz-naive :class:`Timestamp`s to tz-aware (:issue:`13051`) +- Bug in :class:`Index` with ``datetime64[ns, tz]`` dtype that did not localize integer data correctly (:issue:`20964`) Offsets ^^^^^^^ @@ -217,7 +217,10 @@ Indexing - The traceback from a ``KeyError`` when asking ``.loc`` for a single missing label is now shorter and more clear (:issue:`21557`) - When ``.ix`` is asked for a missing integer label in a :class:`MultiIndex` with a first level of integer type, it now raises a ``KeyError`` - consistently with the case of a flat :class:`Int64Index` - rather than falling back to positional indexing (:issue:`21593`) -- +- Bug in :meth:`DatetimeIndex.reindex` when reindexing a tz-naive and tz-aware :class:`DatetimeIndex` (:issue:`8306`) +- Bug in :class:`DataFrame` when setting values with ``.loc`` and a timezone aware :class:`DatetimeIndex` (:issue:`11365`) +- Bug when indexing :class:`DatetimeIndex` with nanosecond resolution dates and timezones (:issue:`11679`) + - MultiIndex @@ -245,6 +248,7 @@ Groupby/Resample/Rolling ^^^^^^^^^^^^^^^^^^^^^^^^ - Bug in :func:`pandas.core.groupby.GroupBy.first` and :func:`pandas.core.groupby.GroupBy.last` with ``as_index=False`` leading to the loss of timezone information (:issue:`15884`) +- Bug in :meth:`DatetimeIndex.resample` when downsampling across a DST boundary (:issue:`8531`) - - diff --git a/pandas/conftest.py b/pandas/conftest.py index 803b3add97052..ae08e0817de29 100644 --- a/pandas/conftest.py +++ b/pandas/conftest.py @@ -320,3 +320,20 @@ def mock(): return importlib.import_module("unittest.mock") else: return pytest.importorskip("mock") + + +@pytest.fixture(params=['__eq__', '__ne__', '__le__', + '__lt__', '__ge__', '__gt__']) +def all_compare_operators(request): + """ + Fixture for dunder names for common compare operations + + * >= + * > + 
* == + * != + * < + * <= + """ + + return request.param diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py index be37e696ea0a3..c7aaf900b17fa 100644 --- a/pandas/tests/frame/test_indexing.py +++ b/pandas/tests/frame/test_indexing.py @@ -2248,6 +2248,16 @@ def test_setitem_datetimelike_with_inference(self): index=list('ABCDEFGH')) assert_series_equal(result, expected) + @pytest.mark.parametrize('idxer', ['var', ['var']]) + def test_setitem_datetimeindex_tz(self, idxer, tz_naive_fixture): + # GH 11365 + tz = tz_naive_fixture + idx = date_range(start='2015-07-12', periods=3, freq='H', tz=tz) + expected = DataFrame(1.2, index=idx, columns=['var']) + result = DataFrame(index=idx, columns=['var']) + result.loc[:, idxer] = expected + tm.assert_frame_equal(result, expected) + def test_at_time_between_time_datetimeindex(self): index = date_range("2012-01-01", "2012-01-05", freq='30min') df = DataFrame(randn(len(index), 5), index=index) diff --git a/pandas/tests/indexes/datetimes/test_arithmetic.py b/pandas/tests/indexes/datetimes/test_arithmetic.py index 0649083a440df..ff31ffee13217 100644 --- a/pandas/tests/indexes/datetimes/test_arithmetic.py +++ b/pandas/tests/indexes/datetimes/test_arithmetic.py @@ -276,6 +276,10 @@ def test_comparison_tzawareness_compat(self, op): with pytest.raises(TypeError): op(dz, ts) + # GH 12601: Check comparison against Timestamps and DatetimeIndex + with pytest.raises(TypeError): + op(ts, dz) + @pytest.mark.parametrize('op', [operator.eq, operator.ne, operator.gt, operator.ge, operator.lt, operator.le]) diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py index ec37bbbcb6c02..47d4d15420f1d 100644 --- a/pandas/tests/indexes/datetimes/test_date_range.py +++ b/pandas/tests/indexes/datetimes/test_date_range.py @@ -292,6 +292,15 @@ def test_construct_over_dst(self): freq='H', tz='US/Pacific') tm.assert_index_equal(result, expected) + def 
test_construct_with_different_start_end_string_format(self): + # GH 12064 + result = date_range('2013-01-01 00:00:00+09:00', + '2013/01/01 02:00:00+09:00', freq='H') + expected = DatetimeIndex([Timestamp('2013-01-01 00:00:00+09:00'), + Timestamp('2013-01-01 01:00:00+09:00'), + Timestamp('2013-01-01 02:00:00+09:00')]) + tm.assert_index_equal(result, expected) + class TestGenRangeGeneration(object): diff --git a/pandas/tests/indexing/test_datetime.py b/pandas/tests/indexing/test_datetime.py index a5c12e4152c90..751372380d262 100644 --- a/pandas/tests/indexing/test_datetime.py +++ b/pandas/tests/indexing/test_datetime.py @@ -252,3 +252,17 @@ def test_series_partial_set_period(self): check_stacklevel=False): result = ser.loc[keys] tm.assert_series_equal(result, exp) + + def test_nanosecond_getitem_setitem_with_tz(self): + # GH 11679 + data = ['2016-06-28 08:30:00.123456789'] + index = pd.DatetimeIndex(data, dtype='datetime64[ns, America/Chicago]') + df = DataFrame({'a': [10]}, index=index) + result = df.loc[df.index[0]] + expected = Series(10, index=['a'], name=df.index[0]) + tm.assert_series_equal(result, expected) + + result = df.copy() + result.loc[df.index[0], 'a'] = -1 + expected = DataFrame(-1, index=index, columns=['a']) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py index 8dc9903b7356d..5272059163a07 100644 --- a/pandas/tests/scalar/timestamp/test_timestamp.py +++ b/pandas/tests/scalar/timestamp/test_timestamp.py @@ -542,6 +542,14 @@ def test_construct_timestamp_near_dst(self, offset): result = Timestamp(expected, tz='Europe/Helsinki') assert result == expected + @pytest.mark.parametrize('arg', [ + '2013/01/01 00:00:00+09:00', '2013-01-01 00:00:00+09:00']) + def test_construct_with_different_string_format(self, arg): + # GH 12064 + result = Timestamp(arg) + expected = Timestamp(datetime(2013, 1, 1), tz=pytz.FixedOffset(540)) + assert result == expected + 
class TestTimestamp(object): diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 27cfec0dbf20d..fe224436c52e6 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -1185,3 +1185,11 @@ def test_constructor_range_dtype(self, dtype): expected = Series([0, 1, 2, 3, 4], dtype=dtype or 'int64') result = Series(range(5), dtype=dtype) tm.assert_series_equal(result, expected) + + def test_constructor_tz_mixed_data(self): + # GH 13051 + dt_list = [Timestamp('2016-05-01 02:03:37'), + Timestamp('2016-04-30 19:03:37-0700', tz='US/Pacific')] + result = Series(dt_list) + expected = Series(dt_list, dtype=object) + tm.assert_series_equal(result, expected) diff --git a/pandas/tests/series/test_replace.py b/pandas/tests/series/test_replace.py index 2c07d87865f53..a3b92798879f5 100644 --- a/pandas/tests/series/test_replace.py +++ b/pandas/tests/series/test_replace.py @@ -108,6 +108,13 @@ def test_replace_gh5319(self): pd.Timestamp('20120101')) tm.assert_series_equal(result, expected) + # GH 11792: Test with replacing NaT in a list with tz data + ts = pd.Timestamp('2015/01/01', tz='UTC') + s = pd.Series([pd.NaT, pd.Timestamp('2015/01/01', tz='UTC')]) + result = s.replace([np.nan, pd.NaT], pd.Timestamp.min) + expected = pd.Series([pd.Timestamp.min, ts], dtype=object) + tm.assert_series_equal(expected, result) + def test_replace_with_single_list(self): ser = pd.Series([0, 1, 2, 3, 4]) result = ser.replace([1, 2, 3]) diff --git a/pandas/tests/series/test_timezones.py b/pandas/tests/series/test_timezones.py index b54645d04bd1a..f2433163352ac 100644 --- a/pandas/tests/series/test_timezones.py +++ b/pandas/tests/series/test_timezones.py @@ -300,3 +300,11 @@ def test_getitem_pydatetime_tz(self, tzstr): dt = datetime(2012, 12, 24, 17, 0) time_datetime = tslib._localize_pydatetime(dt, tz) assert ts[time_pandas] == ts[time_datetime] + + def test_series_truncate_datetimeindex_tz(self): + # GH 9243 
+ idx = date_range('4/1/2005', '4/30/2005', freq='D', tz='US/Pacific') + s = Series(range(len(idx)), index=idx) + result = s.truncate(datetime(2005, 4, 2), datetime(2005, 4, 4)) + expected = Series([1, 2, 3], index=idx[1:4]) + tm.assert_series_equal(result, expected)
- [x] closes #11679 - [x] closes #11365 - [x] closes #12064 - [x] closes #12601 - [x] closes #9243 - [x] closes #11792 - [x] closes #13051 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry xref #21491, cleaning up older timezone issues and moved the whatsnew entries in the first cleanup PR from v0.23.2 to v0.24.0 as discussed https://github.com/pandas-dev/pandas/pull/21491#discussion_r197077733
https://api.github.com/repos/pandas-dev/pandas/pulls/21612
2018-06-23T17:32:45Z
2018-06-28T10:23:02Z
2018-06-28T10:23:00Z
2018-06-28T15:06:24Z
BUG: Fix json_normalize throwing AttributeError (#21608)
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index ff872cfc6b3ef..ffb6316991673 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -72,6 +72,7 @@ Bug Fixes - Bug in :func:`read_csv` that caused it to incorrectly raise an error when ``nrows=0``, ``low_memory=True``, and ``index_col`` was not ``None`` (:issue:`21141`) - Bug in :func:`json_normalize` when formatting the ``record_prefix`` with integer columns (:issue:`21536`) +- Bug in :func:`json_normalize` when flattening an array of values (:issue:`21608`) - **Plotting** diff --git a/pandas/io/json/normalize.py b/pandas/io/json/normalize.py index 2004a24c2ec5a..7c6da8d5f505c 100644 --- a/pandas/io/json/normalize.py +++ b/pandas/io/json/normalize.py @@ -186,8 +186,12 @@ def _pull_field(js, spec): return result - if isinstance(data, list) and not data: - return DataFrame() + if isinstance(data, list): + if not data: + return DataFrame() + elif any([not isinstance(x, list) and not isinstance(x, dict) + for x in data]): + return DataFrame(data, columns=['0']) # A bit of a hackjob if isinstance(data, dict): diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py index 200a853c48900..232a9fa371a1f 100644 --- a/pandas/tests/io/json/test_normalize.py +++ b/pandas/tests/io/json/test_normalize.py @@ -129,6 +129,12 @@ def test_value_array_record_prefix(self): expected = DataFrame([[1], [2]], columns=['Prefix.0']) tm.assert_frame_equal(result, expected) + def test_value_array(self): + # GH 21608 + result = json_normalize([1, 2]) + expected = DataFrame([[1], [2]], columns=['0']) + tm.assert_frame_equal(result, expected) + def test_more_deeply_nested(self, deep_nested): result = json_normalize(deep_nested, ['states', 'cities'],
This PR fixes the bug in `json_normalize` that causes it to throw `AttributeError` when flattening an array of values. - [ ] closes #21608 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
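The gist of the dispatch the fix adds can be sketched without pandas (a hypothetical `normalize` helper, not the actual implementation): a list containing scalar (non-list, non-dict) values becomes single-column rows instead of reaching the record-flattening code, which assumed dicts and so raised `AttributeError`.

```python
# Hypothetical sketch of the new dispatch in json_normalize; rows are
# represented as plain dicts instead of a DataFrame.


def normalize(data):
    if isinstance(data, list):
        if not data:
            return []  # empty input -> empty result (empty DataFrame)
        if any(not isinstance(x, (list, dict)) for x in data):
            # array of scalar values -> one column named '0'
            return [{'0': x} for x in data]
    # ... otherwise fall through to the usual record flattening ...
    return data


assert normalize([1, 2]) == [{'0': 1}, {'0': 2}]
assert normalize([]) == []
assert normalize([{'a': 1}]) == [{'a': 1}]
```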
https://api.github.com/repos/pandas-dev/pandas/pulls/21611
2018-06-23T13:36:27Z
2018-10-11T01:52:56Z
null
2018-10-11T01:52:56Z
BUG: Fix `json_normalize` when calling with list `record_path` (#21605)
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index ff872cfc6b3ef..b6ffa6909c5bf 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -72,6 +72,7 @@ Bug Fixes - Bug in :func:`read_csv` that caused it to incorrectly raise an error when ``nrows=0``, ``low_memory=True``, and ``index_col`` was not ``None`` (:issue:`21141`) - Bug in :func:`json_normalize` when formatting the ``record_prefix`` with integer columns (:issue:`21536`) +- Bug in :func:`json_normalize` when calling with list ``record_path`` (:issue:`21605`) - **Plotting** diff --git a/pandas/io/json/normalize.py b/pandas/io/json/normalize.py index 2004a24c2ec5a..c82b4f57b062e 100644 --- a/pandas/io/json/normalize.py +++ b/pandas/io/json/normalize.py @@ -224,6 +224,31 @@ def _pull_field(js, spec): sep = str(sep) meta_keys = [sep.join(val) for val in meta] + def _extract(obj, key, seen_meta, level): + recs = _pull_field(obj, key) + + # For repeating the metadata later + lengths.append(len(recs)) + + for val, key in zip(meta, meta_keys): + if level + 1 > len(val): + meta_val = seen_meta[key] + else: + try: + meta_val = _pull_field(obj, val[level:]) + except KeyError as e: + if errors == 'ignore': + meta_val = np.nan + else: + raise \ + KeyError("Try running with " + "errors='ignore' as key " + "{err} is not always present" + .format(err=e)) + meta_vals[key].append(meta_val) + + records.extend(recs) + def _recursive_extract(data, path, seen_meta, level=0): if len(path) > 1: for obj in data: @@ -233,31 +258,11 @@ def _recursive_extract(data, path, seen_meta, level=0): _recursive_extract(obj[path[0]], path[1:], seen_meta, level=level + 1) - else: + elif isinstance(data, list): for obj in data: - recs = _pull_field(obj, path[0]) - - # For repeating the metadata later - lengths.append(len(recs)) - - for val, key in zip(meta, meta_keys): - if level + 1 > len(val): - meta_val = seen_meta[key] - else: - try: - meta_val = _pull_field(obj, val[level:]) - 
except KeyError as e: - if errors == 'ignore': - meta_val = np.nan - else: - raise \ - KeyError("Try running with " - "errors='ignore' as key " - "{err} is not always present" - .format(err=e)) - meta_vals[key].append(meta_val) - - records.extend(recs) + _extract(obj, path[0], seen_meta, level) + else: + _extract(data, path[0], seen_meta, level) _recursive_extract(data, record_path, {}, level=0) diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py index 200a853c48900..353cfce2fcfc6 100644 --- a/pandas/tests/io/json/test_normalize.py +++ b/pandas/tests/io/json/test_normalize.py @@ -129,6 +129,13 @@ def test_value_array_record_prefix(self): expected = DataFrame([[1], [2]], columns=['Prefix.0']) tm.assert_frame_equal(result, expected) + def test_list_record_path(self): + # GH 21605 + result = json_normalize( + {'A': {'B': [{'X': 1, 'Y': 2}, {'X': 3, 'Y': 4}]}}, ['A', 'B']) + expected = DataFrame([[1, 2], [3, 4]], columns=['X', 'Y']) + tm.assert_frame_equal(result, expected) + def test_more_deeply_nested(self, deep_nested): result = json_normalize(deep_nested, ['states', 'cities'],
This PR fixes the bug that caused `json_normalize` to throw `TypeError` when called with a list `record_path`. - [x] closes #21605 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
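The core of the refactor, sketched with hypothetical helper names (not the pandas source): the leaf-level extraction now accepts a bare dict as well as a list of dicts, which is what descending a list `record_path` like `['A', 'B']` produces.

```python
# Simplified sketch of the refactored _recursive_extract: the final
# `else` branch is the new case for a single dict at the leaf level.


def recursive_extract(data, path, records):
    if len(path) > 1:
        objs = [data] if isinstance(data, dict) else data
        for obj in objs:
            recursive_extract(obj[path[0]], path[1:], records)
    elif isinstance(data, list):
        for obj in data:
            records.extend(obj[path[0]])
    else:
        # NEW branch: `data` is a single dict, not a list of dicts
        records.extend(data[path[0]])


out = []
recursive_extract({'A': {'B': [{'X': 1, 'Y': 2}, {'X': 3, 'Y': 4}]}},
                  ['A', 'B'], out)
assert out == [{'X': 1, 'Y': 2}, {'X': 3, 'Y': 4}]
```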
https://api.github.com/repos/pandas-dev/pandas/pulls/21607
2018-06-23T04:10:50Z
2018-10-11T01:53:11Z
null
2018-10-11T01:53:11Z
More speedups for Period comparisons
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 72e7373d0dd33..379221478e203 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -130,7 +130,7 @@ Performance Improvements - Improved performance of :func:`Series.describe` in case of numeric dtpyes (:issue:`21274`) - Improved performance of :func:`pandas.core.groupby.GroupBy.rank` when dealing with tied rankings (:issue:`21237`) -- Improved performance of :func:`DataFrame.set_index` with columns consisting of :class:`Period` objects (:issue:`21582`) +- Improved performance of :func:`DataFrame.set_index` with columns consisting of :class:`Period` objects (:issue:`21582`,:issue:`21606`) - .. _whatsnew_0240.docs: diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index 63add06db17b4..b4b27b99bdb30 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -88,6 +88,15 @@ for _d in DAYS: # --------------------------------------------------------------------- # Misc Helpers +cdef to_offset(object obj): + """ + Wrap pandas.tseries.frequencies.to_offset to keep centralize runtime + imports + """ + from pandas.tseries.frequencies import to_offset + return to_offset(obj) + + def as_datetime(obj): f = getattr(obj, 'to_pydatetime', None) if f is not None: @@ -313,6 +322,41 @@ class _BaseOffset(object): def __setattr__(self, name, value): raise AttributeError("DateOffset objects are immutable.") + def __eq__(self, other): + if is_string_object(other): + other = to_offset(other) + + try: + return self._params == other._params + except AttributeError: + # other is not a DateOffset object + return False + + return self._params == other._params + + def __ne__(self, other): + return not self == other + + def __hash__(self): + return hash(self._params) + + @property + def _params(self): + """ + Returns a tuple containing all of the attributes needed to evaluate + equality between two DateOffset objects. 
+ """ + # NB: non-cython subclasses override property with cache_readonly + all_paras = self.__dict__.copy() + if 'holidays' in all_paras and not all_paras['holidays']: + all_paras.pop('holidays') + exclude = ['kwds', 'name', 'calendar'] + attrs = [(k, v) for k, v in all_paras.items() + if (k not in exclude) and (k[0] != '_')] + attrs = sorted(set(attrs)) + params = tuple([str(self.__class__)] + attrs) + return params + @property def kwds(self): # for backwards-compatibility diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py index a3f82c1a0902e..1cfd3f476f8ab 100644 --- a/pandas/tseries/offsets.py +++ b/pandas/tseries/offsets.py @@ -182,6 +182,7 @@ def __add__(date): Since 0 is a bit weird, we suggest avoiding its use. """ + _params = cache_readonly(BaseOffset._params.fget) _use_relativedelta = False _adjust_dst = False _attributes = frozenset(['n', 'normalize'] + @@ -288,18 +289,6 @@ def isAnchored(self): # if there were a canonical docstring for what isAnchored means. return (self.n == 1) - @cache_readonly - def _params(self): - all_paras = self.__dict__.copy() - if 'holidays' in all_paras and not all_paras['holidays']: - all_paras.pop('holidays') - exclude = ['kwds', 'name', 'calendar'] - attrs = [(k, v) for k, v in all_paras.items() - if (k not in exclude) and (k[0] != '_')] - attrs = sorted(set(attrs)) - params = tuple([str(self.__class__)] + attrs) - return params - # TODO: Combine this with BusinessMixin version by defining a whitelisted # set of attributes on each object rather than the existing behavior of # iterating over internal ``__dict__`` @@ -322,24 +311,6 @@ def _repr_attrs(self): def name(self): return self.rule_code - def __eq__(self, other): - - if isinstance(other, compat.string_types): - from pandas.tseries.frequencies import to_offset - - other = to_offset(other) - - if not isinstance(other, DateOffset): - return False - - return self._params == other._params - - def __ne__(self, other): - return not self == other - - def 
__hash__(self): - return hash(self._params) - def __add__(self, other): if isinstance(other, (ABCDatetimeIndex, ABCSeries)): return other + self
Following #21582, the biggest avoidable overheads in Period comparisons are in a) `isinstance` calls that can be short-circuited and b) `__ne__` call overhead. This PR moves `__eq__` and `__ne__` to the cython file, removing the `__ne__` overhead, and changes an `isinstance` check to a try/except. Using the same profile code from #21582, this brings the `set_index` runtime from 6.603 seconds down to 2.165 seconds.
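The isinstance-to-try/except change follows the usual EAFP pattern; a minimal sketch of the comparison shape (a made-up `Offset` class, not the actual DateOffset code):

```python
# EAFP sketch: attempt the attribute access and catch the failure,
# instead of paying for an isinstance check on every comparison.


class Offset:
    def __init__(self, n):
        self.n = n

    @property
    def _params(self):
        return (type(self).__name__, self.n)

    def __eq__(self, other):
        try:
            return self._params == other._params
        except AttributeError:
            # `other` is not an offset-like object
            return False

    def __ne__(self, other):
        # Defining __ne__ explicitly avoids the fallback machinery;
        # in the PR, moving it to Cython cuts the call overhead too.
        return not self == other


assert Offset(2) == Offset(2)
assert Offset(2) != Offset(3)
assert Offset(2) != "not an offset"
```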
https://api.github.com/repos/pandas-dev/pandas/pulls/21606
2018-06-23T04:09:00Z
2018-06-26T22:27:30Z
2018-06-26T22:27:29Z
2018-07-01T01:27:32Z
CI: Test against Python 3.7
diff --git a/.travis.yml b/.travis.yml index 4e25380a7d941..2d2a0bc019c80 100644 --- a/.travis.yml +++ b/.travis.yml @@ -35,6 +35,11 @@ matrix: language: generic env: - JOB="3.5, OSX" ENV_FILE="ci/travis-35-osx.yaml" TEST_ARGS="--skip-slow --skip-network" + + - dist: trusty + env: + - JOB="3.7" ENV_FILE="ci/travis-37.yaml" TEST_ARGS="--skip-slow --skip-network" + - dist: trusty env: - JOB="2.7, locale, slow, old NumPy" ENV_FILE="ci/travis-27-locale.yaml" LOCALE_OVERRIDE="zh_CN.UTF-8" SLOW=true diff --git a/ci/travis-37.yaml b/ci/travis-37.yaml new file mode 100644 index 0000000000000..8b255c9e6ec72 --- /dev/null +++ b/ci/travis-37.yaml @@ -0,0 +1,14 @@ +name: pandas +channels: + - defaults + - conda-forge + - c3i_test +dependencies: + - python=3.7 + - cython + - numpy + - python-dateutil + - nomkl + - pytz + - pytest + - pytest-xdist diff --git a/doc/source/install.rst b/doc/source/install.rst index 87d1b63914635..fa6b9f4fc7f4d 100644 --- a/doc/source/install.rst +++ b/doc/source/install.rst @@ -43,7 +43,7 @@ For more information, see the `Python 3 statement`_ and the `Porting to Python 3 Python version support ---------------------- -Officially Python 2.7, 3.5, and 3.6. +Officially Python 2.7, 3.5, 3.6, and 3.7. Installing pandas ----------------- diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index c781f45715bd4..494c7bacac9aa 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -6,6 +6,12 @@ v0.23.2 This is a minor bug-fix release in the 0.23.x series and includes some small regression fixes and bug fixes. We recommend that all users upgrade to this version. +.. note:: + + Pandas 0.23.2 is first pandas release that's compatible with + Python 3.7 (:issue:`20552`) + + .. 
contents:: What's new in v0.23.2 :local: :backlinks: none diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py index 5ae22694d0da7..28a55133e68aa 100644 --- a/pandas/compat/__init__.py +++ b/pandas/compat/__init__.py @@ -40,10 +40,11 @@ from collections import namedtuple PY2 = sys.version_info[0] == 2 -PY3 = (sys.version_info[0] >= 3) -PY35 = (sys.version_info >= (3, 5)) -PY36 = (sys.version_info >= (3, 6)) -PYPY = (platform.python_implementation() == 'PyPy') +PY3 = sys.version_info[0] >= 3 +PY35 = sys.version_info >= (3, 5) +PY36 = sys.version_info >= (3, 6) +PY37 = sys.version_info >= (3, 7) +PYPY = platform.python_implementation() == 'PyPy' try: import __builtin__ as builtins diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py index 66cb9baeb9357..74bc08ee9649b 100644 --- a/pandas/tests/tseries/offsets/test_offsets.py +++ b/pandas/tests/tseries/offsets/test_offsets.py @@ -591,7 +591,10 @@ def test_repr(self): assert repr(self.offset) == '<BusinessDay>' assert repr(self.offset2) == '<2 * BusinessDays>' - expected = '<BusinessDay: offset=datetime.timedelta(1)>' + if compat.PY37: + expected = '<BusinessDay: offset=datetime.timedelta(days=1)>' + else: + expected = '<BusinessDay: offset=datetime.timedelta(1)>' assert repr(self.offset + timedelta(1)) == expected def test_with_offset(self): @@ -1651,7 +1654,10 @@ def test_repr(self): assert repr(self.offset) == '<CustomBusinessDay>' assert repr(self.offset2) == '<2 * CustomBusinessDays>' - expected = '<BusinessDay: offset=datetime.timedelta(1)>' + if compat.PY37: + expected = '<BusinessDay: offset=datetime.timedelta(days=1)>' + else: + expected = '<BusinessDay: offset=datetime.timedelta(1)>' assert repr(self.offset + timedelta(1)) == expected def test_with_offset(self): diff --git a/setup.py b/setup.py index d6890a08b09d0..dd026bd611727 100755 --- a/setup.py +++ b/setup.py @@ -217,6 +217,7 @@ def build_extensions(self): 'Programming Language :: Python 
:: 2.7', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', + 'Programming Language :: Python :: 3.7', 'Programming Language :: Cython', 'Topic :: Scientific/Engineering']
- [x] closes #20552 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
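The test changes in this diff exist because Python 3.7 changed `datetime.timedelta.__repr__` to spell out keyword arguments. A minimal sketch of the version gate the patch adds to `pandas/compat` (the `PY37` flag) and how the offset tests use it:

```python
import sys
from datetime import timedelta

# Version flag in the style of pandas.compat; PY37 is what this PR adds.
PY37 = sys.version_info >= (3, 7)

# Python 3.7 changed timedelta.__repr__ to show keyword arguments,
# which is why test_repr above needs two expected strings.
if PY37:
    expected = 'datetime.timedelta(days=1)'
else:
    expected = 'datetime.timedelta(1)'

print(repr(timedelta(days=1)))  # → 'datetime.timedelta(days=1)' on Python 3.7+
```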
https://api.github.com/repos/pandas-dev/pandas/pulls/21604
2018-06-23T02:54:14Z
2018-06-25T14:57:44Z
2018-06-25T14:57:44Z
2018-07-02T15:31:40Z
TST: Use multiple instances of parametrize instead of product in tests
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py index cc833af03ae66..eee53a2fcac6a 100644 --- a/pandas/tests/dtypes/test_dtypes.py +++ b/pandas/tests/dtypes/test_dtypes.py @@ -2,8 +2,6 @@ import re import pytest -from itertools import product - import numpy as np import pandas as pd from pandas import ( @@ -233,12 +231,14 @@ def test_dst(self): assert is_datetimetz(s2) assert s1.dtype == s2.dtype - def test_parser(self): + @pytest.mark.parametrize('tz', ['UTC', 'US/Eastern']) + @pytest.mark.parametrize('constructor', ['M8', 'datetime64']) + def test_parser(self, tz, constructor): # pr #11245 - for tz, constructor in product(('UTC', 'US/Eastern'), - ('M8', 'datetime64')): - assert (DatetimeTZDtype('%s[ns, %s]' % (constructor, tz)) == - DatetimeTZDtype('ns', tz)) + dtz_str = '{con}[ns, {tz}]'.format(con=constructor, tz=tz) + result = DatetimeTZDtype(dtz_str) + expected = DatetimeTZDtype('ns', tz) + assert result == expected def test_empty(self): dt = DatetimeTZDtype() diff --git a/pandas/tests/frame/test_rank.py b/pandas/tests/frame/test_rank.py index b8ba408b54715..a1210f1ed54e4 100644 --- a/pandas/tests/frame/test_rank.py +++ b/pandas/tests/frame/test_rank.py @@ -10,7 +10,6 @@ from pandas.util.testing import assert_frame_equal from pandas.tests.frame.common import TestData from pandas import Series, DataFrame -from pandas.compat import product class TestRank(TestData): @@ -26,6 +25,13 @@ class TestRank(TestData): 'dense': np.array([1, 3, 4, 2, nan, 2, 1, 5, nan, 3]), } + @pytest.fixture(params=['average', 'min', 'max', 'first', 'dense']) + def method(self, request): + """ + Fixture for trying all rank methods + """ + return request.param + def test_rank(self): rankdata = pytest.importorskip('scipy.stats.rankdata') @@ -217,34 +223,35 @@ def test_rank_methods_frame(self): expected = expected.astype('float64') tm.assert_frame_equal(result, expected) - def test_rank_descending(self): - dtypes = ['O', 'f8', 'i8'] + 
@pytest.mark.parametrize('dtype', ['O', 'f8', 'i8']) + def test_rank_descending(self, method, dtype): - for dtype, method in product(dtypes, self.results): - if 'i' in dtype: - df = self.df.dropna() - else: - df = self.df.astype(dtype) + if 'i' in dtype: + df = self.df.dropna() + else: + df = self.df.astype(dtype) - res = df.rank(ascending=False) - expected = (df.max() - df).rank() - assert_frame_equal(res, expected) + res = df.rank(ascending=False) + expected = (df.max() - df).rank() + assert_frame_equal(res, expected) - if method == 'first' and dtype == 'O': - continue + if method == 'first' and dtype == 'O': + return - expected = (df.max() - df).rank(method=method) + expected = (df.max() - df).rank(method=method) - if dtype != 'O': - res2 = df.rank(method=method, ascending=False, - numeric_only=True) - assert_frame_equal(res2, expected) + if dtype != 'O': + res2 = df.rank(method=method, ascending=False, + numeric_only=True) + assert_frame_equal(res2, expected) - res3 = df.rank(method=method, ascending=False, - numeric_only=False) - assert_frame_equal(res3, expected) + res3 = df.rank(method=method, ascending=False, + numeric_only=False) + assert_frame_equal(res3, expected) - def test_rank_2d_tie_methods(self): + @pytest.mark.parametrize('axis', [0, 1]) + @pytest.mark.parametrize('dtype', [None, object]) + def test_rank_2d_tie_methods(self, method, axis, dtype): df = self.df def _check2d(df, expected, method='average', axis=0): @@ -257,43 +264,38 @@ def _check2d(df, expected, method='average', axis=0): result = df.rank(method=method, axis=axis) assert_frame_equal(result, exp_df) - dtypes = [None, object] disabled = set([(object, 'first')]) - results = self.results - - for method, axis, dtype in product(results, [0, 1], dtypes): - if (dtype, method) in disabled: - continue - frame = df if dtype is None else df.astype(dtype) - _check2d(frame, results[method], method=method, axis=axis) - - -@pytest.mark.parametrize( - "method,exp", [("dense", - [[1., 1., 1.], - [1., 
0.5, 2. / 3], - [1., 0.5, 1. / 3]]), - ("min", - [[1. / 3, 1., 1.], - [1. / 3, 1. / 3, 2. / 3], - [1. / 3, 1. / 3, 1. / 3]]), - ("max", - [[1., 1., 1.], - [1., 2. / 3, 2. / 3], - [1., 2. / 3, 1. / 3]]), - ("average", - [[2. / 3, 1., 1.], - [2. / 3, 0.5, 2. / 3], - [2. / 3, 0.5, 1. / 3]]), - ("first", - [[1. / 3, 1., 1.], - [2. / 3, 1. / 3, 2. / 3], - [3. / 3, 2. / 3, 1. / 3]])]) -def test_rank_pct_true(method, exp): - # see gh-15630. - - df = DataFrame([[2012, 66, 3], [2012, 65, 2], [2012, 65, 1]]) - result = df.rank(method=method, pct=True) - - expected = DataFrame(exp) - tm.assert_frame_equal(result, expected) + if (dtype, method) in disabled: + return + frame = df if dtype is None else df.astype(dtype) + _check2d(frame, self.results[method], method=method, axis=axis) + + @pytest.mark.parametrize( + "method,exp", [("dense", + [[1., 1., 1.], + [1., 0.5, 2. / 3], + [1., 0.5, 1. / 3]]), + ("min", + [[1. / 3, 1., 1.], + [1. / 3, 1. / 3, 2. / 3], + [1. / 3, 1. / 3, 1. / 3]]), + ("max", + [[1., 1., 1.], + [1., 2. / 3, 2. / 3], + [1., 2. / 3, 1. / 3]]), + ("average", + [[2. / 3, 1., 1.], + [2. / 3, 0.5, 2. / 3], + [2. / 3, 0.5, 1. / 3]]), + ("first", + [[1. / 3, 1., 1.], + [2. / 3, 1. / 3, 2. / 3], + [3. / 3, 2. / 3, 1. / 3]])]) + def test_rank_pct_true(self, method, exp): + # see gh-15630. 
+ + df = DataFrame([[2012, 66, 3], [2012, 65, 2], [2012, 65, 1]]) + result = df.rank(method=method, pct=True) + + expected = DataFrame(exp) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py index f1d678db4ff7f..9df362a8e132f 100644 --- a/pandas/tests/groupby/test_function.py +++ b/pandas/tests/groupby/test_function.py @@ -778,9 +778,10 @@ def test_frame_describe_unstacked_format(): # nunique # -------------------------------- -@pytest.mark.parametrize("n, m", cart_product(10 ** np.arange(2, 6), - (10, 100, 1000))) -@pytest.mark.parametrize("sort, dropna", cart_product((False, True), repeat=2)) +@pytest.mark.parametrize('n', 10 ** np.arange(2, 6)) +@pytest.mark.parametrize('m', [10, 100, 1000]) +@pytest.mark.parametrize('sort', [False, True]) +@pytest.mark.parametrize('dropna', [False, True]) def test_series_groupby_nunique(n, m, sort, dropna): def check_nunique(df, keys, as_index=True): diff --git a/pandas/tests/groupby/test_whitelist.py b/pandas/tests/groupby/test_whitelist.py index 8d6e074881cbb..f4a58b9cbe61b 100644 --- a/pandas/tests/groupby/test_whitelist.py +++ b/pandas/tests/groupby/test_whitelist.py @@ -8,7 +8,6 @@ import numpy as np from pandas import DataFrame, Series, compat, date_range, Index, MultiIndex from pandas.util import testing as tm -from pandas.compat import lrange, product AGG_FUNCTIONS = ['sum', 'prod', 'min', 'max', 'median', 'mean', 'skew', 'mad', 'std', 'var', 'sem'] @@ -175,12 +174,11 @@ def raw_frame(): return raw_frame -@pytest.mark.parametrize( - "op, level, axis, skipna, sort", - product(AGG_FUNCTIONS, - lrange(2), lrange(2), - [True, False], - [True, False])) +@pytest.mark.parametrize('op', AGG_FUNCTIONS) +@pytest.mark.parametrize('level', [0, 1]) +@pytest.mark.parametrize('axis', [0, 1]) +@pytest.mark.parametrize('skipna', [True, False]) +@pytest.mark.parametrize('sort', [True, False]) def test_regression_whitelist_methods( raw_frame, op, level, 
axis, skipna, sort): diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py index dea305d4b3fee..8d819f9926abb 100644 --- a/pandas/tests/reshape/test_concat.py +++ b/pandas/tests/reshape/test_concat.py @@ -1,5 +1,5 @@ from warnings import catch_warnings -from itertools import combinations, product +from itertools import combinations import datetime as dt import dateutil @@ -941,10 +941,11 @@ def test_append_different_columns_types(self, df_columns, series_index): columns=combined_columns) assert_frame_equal(result, expected) - @pytest.mark.parametrize( - "index_can_append, index_cannot_append_with_other", - product(indexes_can_append, indexes_cannot_append_with_other), - ids=lambda x: x.__class__.__name__) + @pytest.mark.parametrize('index_can_append', indexes_can_append, + ids=lambda x: x.__class__.__name__) + @pytest.mark.parametrize('index_cannot_append_with_other', + indexes_cannot_append_with_other, + ids=lambda x: x.__class__.__name__) def test_append_different_columns_types_raises( self, index_can_append, index_cannot_append_with_other): # GH18359 diff --git a/pandas/tests/sparse/series/test_series.py b/pandas/tests/sparse/series/test_series.py index eb63c87820070..921c30234660f 100644 --- a/pandas/tests/sparse/series/test_series.py +++ b/pandas/tests/sparse/series/test_series.py @@ -23,8 +23,6 @@ from pandas.core.sparse.api import SparseSeries from pandas.tests.series.test_api import SharedWithSparse -from itertools import product - def _test_data1(): # nan-based @@ -985,16 +983,16 @@ def test_combine_first(self): tm.assert_sp_series_equal(result, result2) tm.assert_sp_series_equal(result, expected) - @pytest.mark.parametrize('deep,fill_values', [([True, False], - [0, 1, np.nan, None])]) - def test_memory_usage_deep(self, deep, fill_values): - for deep, fill_value in product(deep, fill_values): - sparse_series = SparseSeries(fill_values, fill_value=fill_value) - dense_series = Series(fill_values) - sparse_usage = 
sparse_series.memory_usage(deep=deep) - dense_usage = dense_series.memory_usage(deep=deep) + @pytest.mark.parametrize('deep', [True, False]) + @pytest.mark.parametrize('fill_value', [0, 1, np.nan, None]) + def test_memory_usage_deep(self, deep, fill_value): + values = [0, 1, np.nan, None] + sparse_series = SparseSeries(values, fill_value=fill_value) + dense_series = Series(values) + sparse_usage = sparse_series.memory_usage(deep=deep) + dense_usage = dense_series.memory_usage(deep=deep) - assert sparse_usage < dense_usage + assert sparse_usage < dense_usage class TestSparseHandlingMultiIndexes(object): diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py index 79e05c90a21b0..3caee2b44c579 100644 --- a/pandas/tests/test_multilevel.py +++ b/pandas/tests/test_multilevel.py @@ -20,6 +20,9 @@ import pandas as pd import pandas._libs.index as _index +AGG_FUNCTIONS = ['sum', 'prod', 'min', 'max', 'median', 'mean', 'skew', 'mad', + 'std', 'var', 'sem'] + class Base(object): @@ -1389,60 +1392,57 @@ def test_count(self): pytest.raises(KeyError, series.count, 'x') pytest.raises(KeyError, frame.count, level='x') - AGG_FUNCTIONS = ['sum', 'prod', 'min', 'max', 'median', 'mean', 'skew', - 'mad', 'std', 'var', 'sem'] - + @pytest.mark.parametrize('op', AGG_FUNCTIONS) + @pytest.mark.parametrize('level', [0, 1]) + @pytest.mark.parametrize('skipna', [True, False]) @pytest.mark.parametrize('sort', [True, False]) - def test_series_group_min_max(self, sort): + def test_series_group_min_max(self, op, level, skipna, sort): # GH 17537 - for op, level, skipna in cart_product(self.AGG_FUNCTIONS, lrange(2), - [False, True]): - grouped = self.series.groupby(level=level, sort=sort) - aggf = lambda x: getattr(x, op)(skipna=skipna) - # skipna=True - leftside = grouped.agg(aggf) - rightside = getattr(self.series, op)(level=level, skipna=skipna) - if sort: - rightside = rightside.sort_index(level=level) - tm.assert_series_equal(leftside, rightside) - + grouped = 
self.series.groupby(level=level, sort=sort) + # skipna=True + leftside = grouped.agg(lambda x: getattr(x, op)(skipna=skipna)) + rightside = getattr(self.series, op)(level=level, skipna=skipna) + if sort: + rightside = rightside.sort_index(level=level) + tm.assert_series_equal(leftside, rightside) + + @pytest.mark.parametrize('op', AGG_FUNCTIONS) + @pytest.mark.parametrize('level', [0, 1]) + @pytest.mark.parametrize('axis', [0, 1]) + @pytest.mark.parametrize('skipna', [True, False]) @pytest.mark.parametrize('sort', [True, False]) - def test_frame_group_ops(self, sort): + def test_frame_group_ops(self, op, level, axis, skipna, sort): # GH 17537 self.frame.iloc[1, [1, 2]] = np.nan self.frame.iloc[7, [0, 1]] = np.nan - for op, level, axis, skipna in cart_product(self.AGG_FUNCTIONS, - lrange(2), lrange(2), - [False, True]): - - if axis == 0: - frame = self.frame - else: - frame = self.frame.T + if axis == 0: + frame = self.frame + else: + frame = self.frame.T - grouped = frame.groupby(level=level, axis=axis, sort=sort) + grouped = frame.groupby(level=level, axis=axis, sort=sort) - pieces = [] + pieces = [] - def aggf(x): - pieces.append(x) - return getattr(x, op)(skipna=skipna, axis=axis) + def aggf(x): + pieces.append(x) + return getattr(x, op)(skipna=skipna, axis=axis) - leftside = grouped.agg(aggf) - rightside = getattr(frame, op)(level=level, axis=axis, - skipna=skipna) - if sort: - rightside = rightside.sort_index(level=level, axis=axis) - frame = frame.sort_index(level=level, axis=axis) + leftside = grouped.agg(aggf) + rightside = getattr(frame, op)(level=level, axis=axis, + skipna=skipna) + if sort: + rightside = rightside.sort_index(level=level, axis=axis) + frame = frame.sort_index(level=level, axis=axis) - # for good measure, groupby detail - level_index = frame._get_axis(axis).levels[level] + # for good measure, groupby detail + level_index = frame._get_axis(axis).levels[level] - tm.assert_index_equal(leftside._get_axis(axis), level_index) - 
tm.assert_index_equal(rightside._get_axis(axis), level_index) + tm.assert_index_equal(leftside._get_axis(axis), level_index) + tm.assert_index_equal(rightside._get_axis(axis), level_index) - tm.assert_frame_equal(leftside, rightside) + tm.assert_frame_equal(leftside, rightside) def test_stat_op_corner(self): obj = Series([10.0], index=MultiIndex.from_tuples([(2, 3)])) diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py index 6f0ad0535c6b4..60f23309b11d9 100644 --- a/pandas/tests/test_resample.py +++ b/pandas/tests/test_resample.py @@ -17,7 +17,7 @@ from pandas import (Series, DataFrame, Panel, Index, isna, notna, Timestamp) -from pandas.compat import range, lrange, zip, product, OrderedDict +from pandas.compat import range, lrange, zip, OrderedDict from pandas.errors import UnsupportedFunctionCall from pandas.core.groupby.groupby import DataError import pandas.core.common as com @@ -1951,30 +1951,32 @@ def test_resample_nunique_with_date_gap(self): assert_series_equal(results[0], results[2]) assert_series_equal(results[0], results[3]) - def test_resample_group_info(self): # GH10914 - for n, k in product((10000, 100000), (10, 100, 1000)): - dr = date_range(start='2015-08-27', periods=n // 10, freq='T') - ts = Series(np.random.randint(0, n // k, n).astype('int64'), - index=np.random.choice(dr, n)) + @pytest.mark.parametrize('n', [10000, 100000]) + @pytest.mark.parametrize('k', [10, 100, 1000]) + def test_resample_group_info(self, n, k): + # GH10914 + dr = date_range(start='2015-08-27', periods=n // 10, freq='T') + ts = Series(np.random.randint(0, n // k, n).astype('int64'), + index=np.random.choice(dr, n)) - left = ts.resample('30T').nunique() - ix = date_range(start=ts.index.min(), end=ts.index.max(), - freq='30T') + left = ts.resample('30T').nunique() + ix = date_range(start=ts.index.min(), end=ts.index.max(), + freq='30T') - vals = ts.values - bins = np.searchsorted(ix.values, ts.index, side='right') + vals = ts.values + bins = 
np.searchsorted(ix.values, ts.index, side='right') - sorter = np.lexsort((vals, bins)) - vals, bins = vals[sorter], bins[sorter] + sorter = np.lexsort((vals, bins)) + vals, bins = vals[sorter], bins[sorter] - mask = np.r_[True, vals[1:] != vals[:-1]] - mask |= np.r_[True, bins[1:] != bins[:-1]] + mask = np.r_[True, vals[1:] != vals[:-1]] + mask |= np.r_[True, bins[1:] != bins[:-1]] - arr = np.bincount(bins[mask] - 1, - minlength=len(ix)).astype('int64', copy=False) - right = Series(arr, index=ix) + arr = np.bincount(bins[mask] - 1, + minlength=len(ix)).astype('int64', copy=False) + right = Series(arr, index=ix) - assert_series_equal(left, right) + assert_series_equal(left, right) def test_resample_size(self): n = 10000 @@ -2323,28 +2325,25 @@ def test_annual_upsample(self): method='ffill') assert_series_equal(result, expected) - def test_quarterly_upsample(self): - targets = ['D', 'B', 'M'] - - for month in MONTHS: - ts = _simple_pts('1/1/1990', '12/31/1995', freq='Q-%s' % month) - - for targ, conv in product(targets, ['start', 'end']): - result = ts.resample(targ, convention=conv).ffill() - expected = result.to_timestamp(targ, how=conv) - expected = expected.asfreq(targ, 'ffill').to_period() - assert_series_equal(result, expected) - - def test_monthly_upsample(self): - targets = ['D', 'B'] + @pytest.mark.parametrize('month', MONTHS) + @pytest.mark.parametrize('target', ['D', 'B', 'M']) + @pytest.mark.parametrize('convention', ['start', 'end']) + def test_quarterly_upsample(self, month, target, convention): + freq = 'Q-{month}'.format(month=month) + ts = _simple_pts('1/1/1990', '12/31/1995', freq=freq) + result = ts.resample(target, convention=convention).ffill() + expected = result.to_timestamp(target, how=convention) + expected = expected.asfreq(target, 'ffill').to_period() + assert_series_equal(result, expected) + @pytest.mark.parametrize('target', ['D', 'B']) + @pytest.mark.parametrize('convention', ['start', 'end']) + def test_monthly_upsample(self, target, 
convention): ts = _simple_pts('1/1/1990', '12/31/1995', freq='M') - - for targ, conv in product(targets, ['start', 'end']): - result = ts.resample(targ, convention=conv).ffill() - expected = result.to_timestamp(targ, how=conv) - expected = expected.asfreq(targ, 'ffill').to_period() - assert_series_equal(result, expected) + result = ts.resample(target, convention=convention).ffill() + expected = result.to_timestamp(target, how=convention) + expected = expected.asfreq(target, 'ffill').to_period() + assert_series_equal(result, expected) def test_resample_basic(self): # GH3609 @@ -2455,17 +2454,16 @@ def test_fill_method_and_how_upsample(self): both = s.resample('M').ffill().resample('M').last().astype('int64') assert_series_equal(last, both) - def test_weekly_upsample(self): - targets = ['D', 'B'] - - for day in DAYS: - ts = _simple_pts('1/1/1990', '12/31/1995', freq='W-%s' % day) - - for targ, conv in product(targets, ['start', 'end']): - result = ts.resample(targ, convention=conv).ffill() - expected = result.to_timestamp(targ, how=conv) - expected = expected.asfreq(targ, 'ffill').to_period() - assert_series_equal(result, expected) + @pytest.mark.parametrize('day', DAYS) + @pytest.mark.parametrize('target', ['D', 'B']) + @pytest.mark.parametrize('convention', ['start', 'end']) + def test_weekly_upsample(self, day, target, convention): + freq = 'W-{day}'.format(day=day) + ts = _simple_pts('1/1/1990', '12/31/1995', freq=freq) + result = ts.resample(target, convention=convention).ffill() + expected = result.to_timestamp(target, how=convention) + expected = expected.asfreq(target, 'ffill').to_period() + assert_series_equal(result, expected) def test_resample_to_timestamps(self): ts = _simple_pts('1/1/1990', '12/31/1995', freq='M') diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py index cfd88f41f855e..78d1fa84cc5db 100644 --- a/pandas/tests/test_window.py +++ b/pandas/tests/test_window.py @@ -2105,10 +2105,9 @@ def _non_null_values(x): (mean_x * 
mean_y)) @pytest.mark.slow - @pytest.mark.parametrize( - 'min_periods, adjust, ignore_na', product([0, 1, 2, 3, 4], - [True, False], - [False, True])) + @pytest.mark.parametrize('min_periods', [0, 1, 2, 3, 4]) + @pytest.mark.parametrize('adjust', [True, False]) + @pytest.mark.parametrize('ignore_na', [True, False]) def test_ewm_consistency(self, min_periods, adjust, ignore_na): def _weights(s, com, adjust, ignore_na): if isinstance(s, DataFrame):
- All instances of `product` being used inside a `@pytest.mark.parametrize` have been converted to separate instances of `@pytest.mark.parametrize` - Extracted `product` from within tests where it was straightforward - There are still multiple instances of `product` being used within tests for looping purposes - Most of the ones left do not appear to be trivial to extract - Some instances we might not want to remove/are clearer as-is - Could maybe open an issue for some of the more obvious ones?
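The refactor works because stacked `@pytest.mark.parametrize` decorators generate the full cartesian product of their parameter lists, so they cover exactly the same cases as a single `parametrize` over `itertools.product` (while giving one readable test ID per axis). A quick sketch using the parameter lists from `test_series_groupby_nunique` above:

```python
from itertools import product

# Parameter lists from the stacked decorators on test_series_groupby_nunique.
ns = [100, 1000, 10000, 100000]
ms = [10, 100, 1000]
sorts = [False, True]
dropnas = [False, True]

# Stacking one parametrize per list yields the same case set as one
# parametrize over the explicit cartesian product.
cases = list(product(ns, ms, sorts, dropnas))
print(len(cases))  # → 48 (4 * 3 * 2 * 2)
```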
https://api.github.com/repos/pandas-dev/pandas/pulls/21602
2018-06-23T00:39:55Z
2018-06-23T11:27:59Z
2018-06-23T11:27:59Z
2018-06-23T11:28:20Z
Updating fork
diff --git a/pandas/core/strings.py b/pandas/core/strings.py index 9632df46d3bbf..d54deb5409297 100644 --- a/pandas/core/strings.py +++ b/pandas/core/strings.py @@ -839,7 +839,7 @@ def _str_extract_frame(arr, pat, flags=0): def str_extract(arr, pat, flags=0, expand=True): - r""" + """ For each subject string in the Series, extract groups from the first match of regular expression pat. @@ -926,7 +926,7 @@ def str_extract(arr, pat, flags=0, expand=True): def str_extractall(arr, pat, flags=0): - r""" + """ For each subject string in the Series, extract groups from all matches of regular expression pat. When each subject string in the Series has exactly one match, extractall(pat).xs(0, level='match') @@ -1343,108 +1343,7 @@ def str_pad(arr, width, side='left', fillchar=' '): def str_split(arr, pat=None, n=None): - """ - Split strings around given separator/delimiter. - - Split each string in the caller's values by given - pattern, propagating NaN values. Equivalent to :meth:`str.split`. - - Parameters - ---------- - pat : str, optional - String or regular expression to split on. - If not specified, split on whitespace. - n : int, default -1 (all) - Limit number of splits in output. - ``None``, 0 and -1 will be interpreted as return all splits. - expand : bool, default False - Expand the split strings into separate columns. - - * If ``True``, return DataFrame/MultiIndex expanding dimensionality. - * If ``False``, return Series/Index, containing lists of strings. - - Returns - ------- - Series, Index, DataFrame or MultiIndex - Type matches caller unless ``expand=True`` (see Notes). 
- - Notes - ----- - The handling of the `n` keyword depends on the number of found splits: - - - If found splits > `n`, make first `n` splits only - - If found splits <= `n`, make all splits - - If for a certain row the number of found splits < `n`, - append `None` for padding up to `n` if ``expand=True`` - - If using ``expand=True``, Series and Index callers return DataFrame and - MultiIndex objects, respectively. - - See Also - -------- - str.split : Standard library version of this method. - Series.str.get_dummies : Split each string into dummy variables. - Series.str.partition : Split string on a separator, returning - the before, separator, and after components. - - Examples - -------- - >>> s = pd.Series(["this is good text", "but this is even better"]) - - By default, split will return an object of the same size - having lists containing the split elements - - >>> s.str.split() - 0 [this, is, good, text] - 1 [but, this, is, even, better] - dtype: object - >>> s.str.split("random") - 0 [this is good text] - 1 [but this is even better] - dtype: object - - When using ``expand=True``, the split elements will expand out into - separate columns. - - For Series object, output return type is DataFrame. - - >>> s.str.split(expand=True) - 0 1 2 3 4 - 0 this is good text None - 1 but this is even better - >>> s.str.split(" is ", expand=True) - 0 1 - 0 this good text - 1 but this even better - - For Index object, output return type is MultiIndex. - - >>> i = pd.Index(["ba 100 001", "ba 101 002", "ba 102 003"]) - >>> i.str.split(expand=True) - MultiIndex(levels=[['ba'], ['100', '101', '102'], ['001', '002', '003']], - labels=[[0, 0, 0], [0, 1, 2], [0, 1, 2]]) - - Parameter `n` can be used to limit the number of splits in the output. 
- - >>> s.str.split("is", n=1) - 0 [th, is good text] - 1 [but th, is even better] - dtype: object - >>> s.str.split("is", n=1, expand=True) - 0 1 - 0 th is good text - 1 but th is even better - - If NaN is present, it is propagated throughout the columns - during the split. - >>> s = pd.Series(["this is good text", "but this is even better", np.nan]) - >>> s.str.split(n=3, expand=True) - 0 1 2 3 - 0 this is good text - 1 but this is even better - 2 NaN NaN NaN NaN - """ if pat is None: if n is None or n == 0: n = -1 @@ -1464,25 +1363,7 @@ def str_split(arr, pat=None, n=None): def str_rsplit(arr, pat=None, n=None): - """ - Split each string in the Series/Index by the given delimiter - string, starting at the end of the string and working to the front. - Equivalent to :meth:`str.rsplit`. - - Parameters - ---------- - pat : string, default None - Separator to split on. If None, splits on whitespace - n : int, default -1 (all) - None, 0 and -1 will be interpreted as return all splits - expand : bool, default False - * If True, return DataFrame/MultiIndex expanding dimensionality. - * If False, return Series/Index. - - Returns - ------- - split : Series/Index or DataFrame/MultiIndex of objects - """ + if n is None or n == 0: n = -1 f = lambda x: x.rsplit(pat, n) @@ -2325,12 +2206,133 @@ def cat(self, others=None, sep=None, na_rep=None, join=None): res = Series(res, index=data.index, name=self._orig.name) return res - @copy(str_split) + _shared_docs['str_split'] = (""" + Split strings around given separator/delimiter. + + Splits the string in the Series/Index from the %(side)s, + at the specified delimiter string.Equivalent to :meth:`str.%(method)s`. + + Parameters + ---------- + pat : str, optional + String or regular expression to split on. + If not specified, split on whitespace. + n : int, default -1 (all) + Limit number of splits in output. + ``None``, 0 and -1 will be interpreted as return all splits. 
+ expand : bool, default False + Expand the splitted strings into separate columns. + + * If ``True``, return DataFrame/MultiIndex expanding dimensionality. + * If ``False``, return Series/Index, containing lists of strings. + + Returns + ------- + Series, Index, DataFrame or MultiIndex + Type matches caller unless ``expand=True`` (see Notes). + + Notes + ----- + The handling of the `n` keyword depends on the number of found splits: + + - If found splits > `n`, make first `n` splits only + - If found splits <= `n`, make all splits + - If for a certain row the number of found splits < `n`, + append `None` for padding up to `n` if ``expand=True`` + + If using ``expand=True``, Series and Index callers return DataFrame and + MultiIndex objects, respectively. + + See Also + -------- + %(also)s + + Examples + -------- + >>> s = pd.Series(["this is good text", "but this is even better"]) + + By default, split and rsplit will return an object of the same size + having lists containing the split elements + + >>> s.str.split() + 0 [this, is, good, text] + 1 [but, this, is, even, better] + dtype: object + + >>> s.str.rsplit() + 0 [this, is, good, text] + 1 [but, this, is, even, better] + dtype: object + + >>> s.str.split("random") + 0 [this is good text] + 1 [but this is even better] + dtype: object + + >>> s.str.rsplit("random") + 0 [this is good text] + 1 [but this is even better] + dtype: object + + When using ``expand=True``, the split and rsplit elements will + expand out into separate columns. + + For Series object, output return type is DataFrame. + + >>> s.str.split(expand=True) + 0 1 2 3 4 + 0 this is good text None + 1 but this is even better + + >>> s.str.split(" is ", expand=True) + 0 1 + 0 this good text + 1 but this even better + + Parameter `n` can be used to limit the number of splits in the output. 
+ + >>> s.str.split("is", n=1) + 0 [th, is good text] + 1 [but th, is even better] + dtype: object + + >>> s.str.rsplit("is", n=1) + 0 [this , good text] + 1 [but this , even better] + dtype: object + + If NaN is present, it is propagated throughout the columns + during the split. + + >>> s = pd.Series(["this is good text", "but this is even better", np.nan]) + + >>> s.str.split(n=3, expand=True) + 0 1 2 3 + 0 this is good text + 1 but this is even better + 2 NaN NaN NaN NaN + + >>> s.str.rsplit(n=3, expand=True) + 0 1 2 3 + 0 this is good text + 1 but this is even better + 2 NaN NaN NaN NaN + """) + + @Appender(_shared_docs['str_split'] % { + 'side': 'beginning', + 'method': 'split', + 'also': 'rsplit : Splits string at the last occurrence of delimiter' + }) def split(self, pat=None, n=-1, expand=False): result = str_split(self._data, pat, n=n) return self._wrap_result(result, expand=expand) - @copy(str_rsplit) + @Appender(_shared_docs['str_split'] % { + 'side': 'end', + 'method': 'rsplit', + 'also': 'split : Splits string at the first occurrence of delimiter' + }) def rsplit(self, pat=None, n=-1, expand=False): result = str_rsplit(self._data, pat, n=n) return self._wrap_result(result, expand=expand)
- [ ] closes #xxxx - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
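The diff's core change is sharing one docstring template between `split` and `rsplit` via `_shared_docs` plus `%`-substitution applied by the `Appender` decorator. A simplified, self-contained sketch of that pattern (the `append_doc` helper and the example functions here are illustrative stand-ins, not pandas' actual implementation):

```python
# One shared template, parametrized by %-substitution keys, in the
# style of pandas' _shared_docs + @Appender pattern used in the diff.
_shared_doc = """Split strings around a given separator.

Splits the string from the %(side)s. Equivalent to :meth:`str.%(method)s`.
"""

def append_doc(template, **kwargs):
    """Append the substituted template to the decorated function's docstring."""
    def decorator(func):
        func.__doc__ = (func.__doc__ or '') + template % kwargs
        return func
    return decorator

@append_doc(_shared_doc, side='beginning', method='split')
def split(text, pat=None, n=-1):
    return text.split(pat, n)

@append_doc(_shared_doc, side='end', method='rsplit')
def rsplit(text, pat=None, n=-1):
    return text.rsplit(pat, n)
```

Each function then carries a docstring customized per side, e.g. `split.__doc__` mentions `str.split` and "beginning", while `rsplit.__doc__` mentions `str.rsplit` and "end".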
https://api.github.com/repos/pandas-dev/pandas/pulls/21601
2018-06-23T00:38:22Z
2018-06-23T00:39:47Z
null
2018-06-23T00:43:41Z
TST: Refactor test_maybe_match_name and test_hash_pandas_object
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index ef5f13bfa504a..61f838eeeeb30 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -25,7 +25,6 @@ def test_mut_exclusive(): def test_get_callable_name(): - from functools import partial getname = com._get_callable_name def fn(x): @@ -154,8 +153,7 @@ def test_random_state(): # Check with random state object state2 = npr.RandomState(10) - assert (com._random_state(state2).uniform() == - npr.RandomState(10).uniform()) + assert com._random_state(state2).uniform() == npr.RandomState(10).uniform() # check with no arg random state assert com._random_state() is np.random @@ -168,29 +166,15 @@ def test_random_state(): com._random_state(5.5) -def test_maybe_match_name(): - - matched = ops._maybe_match_name( - Series([1], name='x'), Series( - [2], name='x')) - assert (matched == 'x') - - matched = ops._maybe_match_name( - Series([1], name='x'), Series( - [2], name='y')) - assert (matched is None) - - matched = ops._maybe_match_name(Series([1]), Series([2], name='x')) - assert (matched is None) - - matched = ops._maybe_match_name(Series([1], name='x'), Series([2])) - assert (matched is None) - - matched = ops._maybe_match_name(Series([1], name='x'), [2]) - assert (matched == 'x') - - matched = ops._maybe_match_name([1], Series([2], name='y')) - assert (matched == 'y') +@pytest.mark.parametrize('left, right, expected', [ + (Series([1], name='x'), Series([2], name='x'), 'x'), + (Series([1], name='x'), Series([2], name='y'), None), + (Series([1]), Series([2], name='x'), None), + (Series([1], name='x'), Series([2]), None), + (Series([1], name='x'), [2], 'x'), + ([1], Series([2], name='y'), 'y')]) +def test_maybe_match_name(left, right, expected): + assert ops._maybe_match_name(left, right) == expected def test_dict_compat(): diff --git a/pandas/tests/util/test_hashing.py b/pandas/tests/util/test_hashing.py index fe8d75539879e..82b870c156cc8 100644 --- 
a/pandas/tests/util/test_hashing.py +++ b/pandas/tests/util/test_hashing.py @@ -142,39 +142,35 @@ def test_multiindex_objects(self): tm.assert_numpy_array_equal(np.sort(result), np.sort(expected)) - def test_hash_pandas_object(self): - - for obj in [Series([1, 2, 3]), - Series([1.0, 1.5, 3.2]), - Series([1.0, 1.5, np.nan]), - Series([1.0, 1.5, 3.2], index=[1.5, 1.1, 3.3]), - Series(['a', 'b', 'c']), - Series(['a', np.nan, 'c']), - Series(['a', None, 'c']), - Series([True, False, True]), - Series(), - Index([1, 2, 3]), - Index([True, False, True]), - DataFrame({'x': ['a', 'b', 'c'], 'y': [1, 2, 3]}), - DataFrame(), - tm.makeMissingDataframe(), - tm.makeMixedDataFrame(), - tm.makeTimeDataFrame(), - tm.makeTimeSeries(), - tm.makeTimedeltaIndex(), - tm.makePeriodIndex(), - Series(tm.makePeriodIndex()), - Series(pd.date_range('20130101', - periods=3, tz='US/Eastern')), - MultiIndex.from_product( - [range(5), - ['foo', 'bar', 'baz'], - pd.date_range('20130101', periods=2)]), - MultiIndex.from_product( - [pd.CategoricalIndex(list('aabc')), - range(3)])]: - self.check_equal(obj) - self.check_not_equal_with_index(obj) + @pytest.mark.parametrize('obj', [ + Series([1, 2, 3]), + Series([1.0, 1.5, 3.2]), + Series([1.0, 1.5, np.nan]), + Series([1.0, 1.5, 3.2], index=[1.5, 1.1, 3.3]), + Series(['a', 'b', 'c']), + Series(['a', np.nan, 'c']), + Series(['a', None, 'c']), + Series([True, False, True]), + Series(), + Index([1, 2, 3]), + Index([True, False, True]), + DataFrame({'x': ['a', 'b', 'c'], 'y': [1, 2, 3]}), + DataFrame(), + tm.makeMissingDataframe(), + tm.makeMixedDataFrame(), + tm.makeTimeDataFrame(), + tm.makeTimeSeries(), + tm.makeTimedeltaIndex(), + tm.makePeriodIndex(), + Series(tm.makePeriodIndex()), + Series(pd.date_range('20130101', periods=3, tz='US/Eastern')), + MultiIndex.from_product([range(5), ['foo', 'bar', 'baz'], + pd.date_range('20130101', periods=2)]), + MultiIndex.from_product([pd.CategoricalIndex(list('aabc')), range(3)]) + ]) + def 
test_hash_pandas_object(self, obj): + self.check_equal(obj) + self.check_not_equal_with_index(obj) def test_hash_pandas_object2(self): for name, s in self.df.iteritems():
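The refactor in the diff above replaces assertion loops with `pytest.mark.parametrize`, so each case reports as its own test. A minimal standalone sketch of the same pattern, using a hypothetical `maybe_match_name` re-implementation and a tiny `Named` stand-in for a pandas Series (both are illustrative assumptions, not pandas' actual code):

```python
import pytest


class Named:
    """Minimal stand-in for a pandas Series: only carries a ``name``."""
    def __init__(self, name=None):
        self.name = name


def maybe_match_name(left, right):
    # Hypothetical re-implementation of the rule the parametrized test
    # above exercises: the result keeps a name only when both operands
    # agree; an operand without a ``name`` attribute (e.g. a plain list)
    # defers to the other side.
    sentinel = object()
    a = getattr(left, "name", sentinel)
    b = getattr(right, "name", sentinel)
    if a is sentinel:
        return None if b is sentinel else b
    if b is sentinel:
        return a
    return a if a == b else None


# One decorator line per case; pytest generates a separate test for each.
@pytest.mark.parametrize("left, right, expected", [
    (Named("x"), Named("x"), "x"),
    (Named("x"), Named("y"), None),
    (Named(), Named("x"), None),
    (Named("x"), [2], "x"),
    ([1], Named("y"), "y"),
])
def test_maybe_match_name(left, right, expected):
    assert maybe_match_name(left, right) == expected
```

Compared with the original loop-style test, a single failing case no longer masks the cases after it, and the failure message names the offending parameters.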
- [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
https://api.github.com/repos/pandas-dev/pandas/pulls/21600
2018-06-22T21:49:22Z
2018-06-25T22:31:09Z
2018-06-25T22:31:08Z
2018-06-26T08:26:31Z
DOC: Change release and whatsnew
diff --git a/.gitignore b/.gitignore index a59f2843c365a..f912fedb199c0 100644 --- a/.gitignore +++ b/.gitignore @@ -109,6 +109,5 @@ doc/build/html/index.html # Windows specific leftover: doc/tmp.sv doc/source/styled.xlsx -doc/source/templates/ env/ doc/source/savefig/ diff --git a/ci/build_docs.sh b/ci/build_docs.sh index f445447e3565c..33340a1c038dc 100755 --- a/ci/build_docs.sh +++ b/ci/build_docs.sh @@ -5,7 +5,7 @@ if [ "${TRAVIS_OS_NAME}" != "linux" ]; then exit 0 fi -cd "$TRAVIS_BUILD_DIR" +cd "$TRAVIS_BUILD_DIR"/doc echo "inside $0" if [ "$DOC" ]; then @@ -14,10 +14,6 @@ if [ "$DOC" ]; then source activate pandas - mv "$TRAVIS_BUILD_DIR"/doc /tmp - mv "$TRAVIS_BUILD_DIR/LICENSE" /tmp # included in the docs. - cd /tmp/doc - echo ############################### echo # Log file for the doc build # echo ############################### @@ -29,7 +25,7 @@ if [ "$DOC" ]; then echo # Create and send docs # echo ######################## - cd /tmp/doc/build/html + cd build/html git config --global user.email "pandas-docs-bot@localhost.foo" git config --global user.name "pandas-docs-bot" diff --git a/ci/deps/travis-36-doc.yaml b/ci/deps/travis-36-doc.yaml index 6bf8cb38e0b7c..f79fcb11c179f 100644 --- a/ci/deps/travis-36-doc.yaml +++ b/ci/deps/travis-36-doc.yaml @@ -8,6 +8,7 @@ dependencies: - bottleneck - cython>=0.28.2 - fastparquet + - gitpython - html5lib - hypothesis>=3.58.0 - ipykernel diff --git a/doc/make.py b/doc/make.py index cab5fa0ed4c52..0a3a7483fcc91 100755 --- a/doc/make.py +++ b/doc/make.py @@ -126,7 +126,12 @@ def _process_single_doc(self, single_doc): self.single_doc = 'api' elif os.path.exists(os.path.join(SOURCE_PATH, single_doc)): self.single_doc_type = 'rst' - self.single_doc = os.path.splitext(os.path.basename(single_doc))[0] + + if 'whatsnew' in single_doc: + basename = single_doc + else: + basename = os.path.basename(single_doc) + self.single_doc = os.path.splitext(basename)[0] elif os.path.exists( os.path.join(SOURCE_PATH, 
'{}.rst'.format(single_doc))): self.single_doc_type = 'rst' diff --git a/doc/source/conf.py b/doc/source/conf.py index 3b0b51dd0d648..47adc80204fcc 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -40,7 +40,6 @@ # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.append(os.path.abspath('.')) sys.path.insert(0, os.path.abspath('../sphinxext')) - sys.path.extend([ # numpy standard doc extensions @@ -75,6 +74,7 @@ 'sphinx.ext.ifconfig', 'sphinx.ext.linkcode', 'nbsphinx', + 'contributors', # custom pandas extension ] try: @@ -120,7 +120,9 @@ templates_path = ['../_templates'] # The suffix of source filenames. -source_suffix = '.rst' +source_suffix = [ + '.rst', +] # The encoding of source files. source_encoding = 'utf-8' @@ -298,8 +300,26 @@ for page in moved_api_pages } + +common_imports = """\ +.. currentmodule:: pandas + +.. ipython:: python + :suppress: + + import numpy as np + from pandas import * + import pandas as pd + randn = np.random.randn + np.set_printoptions(precision=4, suppress=True) + options.display.max_rows = 15 + from pandas.compat import StringIO +""" + + html_context = { - 'redirects': {old: new for old, new in moved_api_pages} + 'redirects': {old: new for old, new in moved_api_pages}, + 'common_imports': common_imports, } # If false, no module index is generated. @@ -654,7 +674,23 @@ def process_class_docstrings(app, what, name, obj, options, lines): ] +def rstjinja(app, docname, source): + """ + Render our pages as a jinja template for fancy templating goodness. 
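The `doc/make.py` hunk above keeps the directory component for `whatsnew/` pages so a single-doc build resolves them under their subfolder. A standalone sketch of that name-resolution branch (the function name `resolve_single_doc` is an assumption for illustration; the real logic lives inside `_process_single_doc`):

```python
import os.path


def resolve_single_doc(single_doc):
    # Mirror the diff: whatsnew pages keep their directory prefix,
    # everything else is reduced to its basename, then the .rst
    # extension is stripped in both cases.
    if 'whatsnew' in single_doc:
        basename = single_doc
    else:
        basename = os.path.basename(single_doc)
    return os.path.splitext(basename)[0]


print(resolve_single_doc("whatsnew/v0.24.0.rst"))  # whatsnew/v0.24.0
print(resolve_single_doc("io/excel.rst"))          # excel
```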
+ """ + # http://ericholscher.com/blog/2016/jul/25/integrating-jinja-rst-sphinx/ + # Make sure we're outputting HTML + if app.builder.format != 'html': + return + src = source[0] + rendered = app.builder.templates.render_string( + src, app.config.html_context + ) + source[0] = rendered + + def setup(app): + app.connect("source-read", rstjinja) app.connect("autodoc-process-docstring", remove_flags_docstring) app.connect("autodoc-process-docstring", process_class_docstrings) app.add_autodocumenter(AccessorDocumenter) diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst index 514a58456bcd9..7eb9a6cf815ba 100644 --- a/doc/source/contributing.rst +++ b/doc/source/contributing.rst @@ -1103,7 +1103,7 @@ Information on how to write a benchmark and how to use asv can be found in the Documenting your code --------------------- -Changes should be reflected in the release notes located in ``doc/source/whatsnew/vx.y.z.txt``. +Changes should be reflected in the release notes located in ``doc/source/whatsnew/vx.y.z.rst``. This file contains an ongoing change log for each release. Add an entry to this file to document your fix, enhancement or (unavoidable) breaking change. Make sure to include the GitHub issue number when adding your entry (using ``:issue:`1234``` where ``1234`` is the diff --git a/doc/source/index.rst.template b/doc/source/index.rst.template index d2b88e794e51e..38f73f8617ced 100644 --- a/doc/source/index.rst.template +++ b/doc/source/index.rst.template @@ -118,7 +118,7 @@ See the package overview for more detail about what's in the library. {{ single_doc }} {% endif -%} {% if not single_doc -%} - whatsnew + What's New <whatsnew/v0.24.0> install contributing overview @@ -159,5 +159,5 @@ See the package overview for more detail about what's in the library. 
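The `rstjinja` hook registered on `source-read` above renders each page's source through Jinja before Sphinx parses it, which is what lets the renamed whatsnew files substitute `{{ common_imports }}` from `html_context`. A standalone sketch of that idea, assuming plain `jinja2` (Sphinx itself goes through `app.builder.templates.render_string`, and `render_page` here is an illustrative name):

```python
from jinja2 import Environment


def render_page(source, context):
    # Expand Jinja placeholders in a page's raw source, as the
    # source-read hook does before reST parsing.
    return Environment().from_string(source).render(**context)


context = {"common_imports": ".. currentmodule:: pandas"}
page = "v0.10.0 (December 17, 2012)\n{{ common_imports }}\n"
print(render_page(page, context))
```

The hook in the diff additionally checks `app.builder.format != 'html'` so non-HTML builders see the source untouched.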
developer internals extending - release + releases {% endif -%} diff --git a/doc/source/releases.rst b/doc/source/releases.rst new file mode 100644 index 0000000000000..0167903cce8bc --- /dev/null +++ b/doc/source/releases.rst @@ -0,0 +1,203 @@ +.. _release: + +************* +Release Notes +************* + +This is the list of changes to pandas between each release. For full details, +see the commit logs at http://github.com/pandas-dev/pandas. For install and +upgrade instructions, see :ref:`install`. + +Version 0.24 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.24.0 + +Version 0.23 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.23.4 + whatsnew/v0.23.3 + whatsnew/v0.23.2 + whatsnew/v0.23.1 + whatsnew/v0.23.0 + +Version 0.22 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.22.0 + +Version 0.21 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.21.0 + whatsnew/v0.21.1 + +Version 0.20 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.20.0 + whatsnew/v0.20.2 + whatsnew/v0.20.3 + +Version 0.19 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.19.0 + whatsnew/v0.19.1 + whatsnew/v0.19.2 + +Version 0.18 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.18.0 + whatsnew/v0.18.1 + +Version 0.17 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.17.0 + whatsnew/v0.17.1 + +Version 0.16 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.16.0 + whatsnew/v0.16.1 + whatsnew/v0.16.2 + +Version 0.15 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.15.0 + whatsnew/v0.15.1 + whatsnew/v0.15.2 + +Version 0.14 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.14.0 + whatsnew/v0.14.1 + +Version 0.13 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.13.0 + whatsnew/v0.13.1 + +Version 0.12 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.12.0 + +Version 0.11 +------------ + +.. 
toctree:: + :maxdepth: 2 + + whatsnew/v0.11.0 + +Version 0.10 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.10.0 + whatsnew/v0.10.1 + +Version 0.9 +----------- + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.9.0 + whatsnew/v0.9.1 + +Version 0.8 +------------ + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.8.0 + whatsnew/v0.8.1 + +Version 0.7 +----------- + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.7.0 + whatsnew/v0.7.1 + whatsnew/v0.7.2 + whatsnew/v0.7.3 + +Version 0.6 +----------- + +.. toctree:: + :maxdepth: 2 + + + whatsnew/v0.6.0 + whatsnew/v0.6.1 + +Version 0.5 +----------- + +.. toctree:: + :maxdepth: 2 + + + whatsnew/v0.5.0 + +Version 0.4 +----------- + +.. toctree:: + :maxdepth: 2 + + whatsnew/v0.4.x diff --git a/doc/source/style.ipynb b/doc/source/style.ipynb index 6f66c1a9bf7f9..792fe5120f6e8 100644 --- a/doc/source/style.ipynb +++ b/doc/source/style.ipynb @@ -2,9 +2,7 @@ "cells": [ { "cell_type": "markdown", - "metadata": { - "collapsed": true - }, + "metadata": {}, "source": [ "# Styling\n", "\n", @@ -51,7 +49,6 @@ "cell_type": "code", "execution_count": null, "metadata": { - "collapsed": true, "nbsphinx": "hidden" }, "outputs": [], @@ -64,9 +61,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", @@ -132,9 +127,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "def color_negative_red(val):\n", @@ -188,9 +181,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "def highlight_max(s):\n", @@ -253,9 +244,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "def highlight_max(data, color='yellow'):\n", @@ -908,9 +897,7 @@ { "cell_type": "code", "execution_count": null, - 
"metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "from IPython.html import widgets\n", @@ -925,9 +912,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "def magnify():\n", @@ -946,9 +931,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "np.random.seed(25)\n", @@ -985,18 +968,16 @@ "- `vertical-align`\n", "- `white-space: nowrap`\n", "\n", - "Only CSS2 named colors and hex colors of the form `#rgb` or `#rrggbb` are currently supported.\n", "\n", - "The following pseudo CSS properties are also available to set excel specific style properties:\n", - "- `number-format`\n" + "- Only CSS2 named colors and hex colors of the form `#rgb` or `#rrggbb` are currently supported.\n", + "- The following pseudo CSS properties are also available to set excel specific style properties:\n", + " - `number-format`\n" ] }, { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "df.style.\\\n", @@ -1037,9 +1018,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "from jinja2 import Environment, ChoiceLoader, FileSystemLoader\n", @@ -1047,39 +1026,21 @@ "from pandas.io.formats.style import Styler" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": true - }, - "outputs": [], - "source": [ - "%mkdir templates" - ] - }, { "cell_type": "markdown", "metadata": {}, "source": [ - "This next cell writes the custom template.\n", - "We extend the template `html.tpl`, which comes with pandas." 
+ "We'll use the following template:" ] }, { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ - "%%file templates/myhtml.tpl\n", - "{% extends \"html.tpl\" %}\n", - "{% block table %}\n", - "<h1>{{ table_title|default(\"My Table\") }}</h1>\n", - "{{ super() }}\n", - "{% endblock table %}" + "with open(\"templates/myhtml.tpl\") as f:\n", + " print(f.read())" ] }, { @@ -1093,9 +1054,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "class MyStyler(Styler):\n", @@ -1122,9 +1081,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "MyStyler(df)" @@ -1140,9 +1097,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "HTML(MyStyler(df).render(table_title=\"Extending Example\"))" @@ -1158,9 +1113,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "EasyStyler = Styler.from_custom_template(\"templates\", \"myhtml.tpl\")\n", @@ -1177,9 +1130,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "collapsed": true - }, + "metadata": {}, "outputs": [], "source": [ "with open(\"template_structure.html\") as f:\n", @@ -1199,7 +1150,6 @@ "cell_type": "code", "execution_count": null, "metadata": { - "collapsed": true, "nbsphinx": "hidden" }, "outputs": [], @@ -1216,7 +1166,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python [default]", + "display_name": "Python 3", "language": "python", "name": "python3" }, @@ -1230,14 +1180,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.5.3" - }, - "widgets": { - "application/vnd.jupyter.widget-state+json": { - "state": {}, - "version_major": 
1, - "version_minor": 0 - } + "version": "3.7.0" } }, "nbformat": 4, diff --git a/doc/source/templates/myhtml.tpl b/doc/source/templates/myhtml.tpl new file mode 100644 index 0000000000000..1170fd3def653 --- /dev/null +++ b/doc/source/templates/myhtml.tpl @@ -0,0 +1,5 @@ +{% extends "html.tpl" %} +{% block table %} +<h1>{{ table_title|default("My Table") }}</h1> +{{ super() }} +{% endblock table %} diff --git a/doc/source/whatsnew.rst b/doc/source/whatsnew.rst deleted file mode 100644 index 8672685b3ebb4..0000000000000 --- a/doc/source/whatsnew.rst +++ /dev/null @@ -1,109 +0,0 @@ -.. _whatsnew: - -.. currentmodule:: pandas - -.. ipython:: python - :suppress: - - import numpy as np - from pandas import * - import pandas as pd - randn = np.random.randn - np.set_printoptions(precision=4, suppress=True) - options.display.max_rows = 15 - -********** -What's New -********** - -These are new features and improvements of note in each release. - -.. include:: whatsnew/v0.24.0.txt - -.. include:: whatsnew/v0.23.4.txt - -.. include:: whatsnew/v0.23.3.txt - -.. include:: whatsnew/v0.23.2.txt - -.. include:: whatsnew/v0.23.1.txt - -.. include:: whatsnew/v0.23.0.txt - -.. include:: whatsnew/v0.22.0.txt - -.. include:: whatsnew/v0.21.1.txt - -.. include:: whatsnew/v0.21.0.txt - -.. include:: whatsnew/v0.20.3.txt - -.. include:: whatsnew/v0.20.2.txt - -.. include:: whatsnew/v0.20.0.txt - -.. include:: whatsnew/v0.19.2.txt - -.. include:: whatsnew/v0.19.1.txt - -.. include:: whatsnew/v0.19.0.txt - -.. include:: whatsnew/v0.18.1.txt - -.. include:: whatsnew/v0.18.0.txt - -.. include:: whatsnew/v0.17.1.txt - -.. include:: whatsnew/v0.17.0.txt - -.. include:: whatsnew/v0.16.2.txt - -.. include:: whatsnew/v0.16.1.txt - -.. include:: whatsnew/v0.16.0.txt - -.. include:: whatsnew/v0.15.2.txt - -.. include:: whatsnew/v0.15.1.txt - -.. include:: whatsnew/v0.15.0.txt - -.. include:: whatsnew/v0.14.1.txt - -.. include:: whatsnew/v0.14.0.txt - -.. include:: whatsnew/v0.13.1.txt - -.. 
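The new `doc/source/templates/myhtml.tpl` file above works by Jinja block inheritance: it extends a parent template, overrides the `table` block to inject a heading, and calls `super()` to keep the parent's body. A self-contained sketch of that mechanism using `DictLoader` in place of pandas' on-disk `html.tpl` (the parent template body here is a placeholder, not pandas' real one):

```python
from jinja2 import Environment, DictLoader

# In-memory stand-ins: "html.tpl" plays the role of pandas' shipped
# parent template; "myhtml.tpl" matches the file added in the diff.
env = Environment(loader=DictLoader({
    "html.tpl": "{% block table %}<table>...</table>{% endblock %}",
    "myhtml.tpl": ('{% extends "html.tpl" %}{% block table %}'
                   '<h1>{{ table_title|default("My Table") }}</h1>'
                   '{{ super() }}{% endblock %}'),
}))

# The child's block replaces the parent's, and super() splices the
# parent's table markup back in after the injected heading.
print(env.get_template("myhtml.tpl").render(table_title="Extending Example"))
```

Omitting `table_title` falls back to the `default("My Table")` filter, which is why the notebook can render the same template with or without the keyword.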
include:: whatsnew/v0.13.0.txt - -.. include:: whatsnew/v0.12.0.txt - -.. include:: whatsnew/v0.11.0.txt - -.. include:: whatsnew/v0.10.1.txt - -.. include:: whatsnew/v0.10.0.txt - -.. include:: whatsnew/v0.9.1.txt - -.. include:: whatsnew/v0.9.0.txt - -.. include:: whatsnew/v0.8.1.txt - -.. include:: whatsnew/v0.8.0.txt - -.. include:: whatsnew/v0.7.3.txt - -.. include:: whatsnew/v0.7.2.txt - -.. include:: whatsnew/v0.7.1.txt - -.. include:: whatsnew/v0.7.0.txt - -.. include:: whatsnew/v0.6.1.txt - -.. include:: whatsnew/v0.6.0.txt - -.. include:: whatsnew/v0.5.0.txt - -.. include:: whatsnew/v0.4.x.txt diff --git a/doc/source/whatsnew/v0.10.0.txt b/doc/source/whatsnew/v0.10.0.rst similarity index 99% rename from doc/source/whatsnew/v0.10.0.txt rename to doc/source/whatsnew/v0.10.0.rst index 298088a4f96b3..27f20111dbf96 100644 --- a/doc/source/whatsnew/v0.10.0.txt +++ b/doc/source/whatsnew/v0.10.0.rst @@ -1,13 +1,10 @@ .. _whatsnew_0100: -.. ipython:: python - :suppress: - - from pandas.compat import StringIO - v0.10.0 (December 17, 2012) --------------------------- +{{ common_imports }} + This is a major release from 0.9.1 and includes many new features and enhancements along with a large number of bug fixes. There are also a number of important API changes that long-time pandas users should pay close attention @@ -431,3 +428,11 @@ Here is a taste of what to expect. See the :ref:`full release notes <release>` or issue tracker on GitHub for a complete list. + + +.. _whatsnew_0.10.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.9.0..v0.10.0 diff --git a/doc/source/whatsnew/v0.10.1.txt b/doc/source/whatsnew/v0.10.1.rst similarity index 98% rename from doc/source/whatsnew/v0.10.1.txt rename to doc/source/whatsnew/v0.10.1.rst index f1a32440c6950..5679babf07b73 100644 --- a/doc/source/whatsnew/v0.10.1.txt +++ b/doc/source/whatsnew/v0.10.1.rst @@ -3,6 +3,8 @@ v0.10.1 (January 22, 2013) --------------------------- +{{ common_imports }} + This is a minor release from 0.10.0 and includes new features, enhancements, and bug fixes. In particular, there is substantial new HDFStore functionality contributed by Jeff Reback. @@ -208,3 +210,11 @@ combined result, by using ``where`` on a selector table. See the :ref:`full release notes <release>` or issue tracker on GitHub for a complete list. + + +.. _whatsnew_0.10.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.10.0..v0.10.1 diff --git a/doc/source/whatsnew/v0.11.0.txt b/doc/source/whatsnew/v0.11.0.rst similarity index 98% rename from doc/source/whatsnew/v0.11.0.txt rename to doc/source/whatsnew/v0.11.0.rst index f39e6c9ff459b..051d735e539aa 100644 --- a/doc/source/whatsnew/v0.11.0.txt +++ b/doc/source/whatsnew/v0.11.0.rst @@ -3,6 +3,8 @@ v0.11.0 (April 22, 2013) ------------------------ +{{ common_imports }} + This is a major release from 0.10.1 and includes many new features and enhancements along with a large number of bug fixes. The methods of Selecting Data have had quite a number of additions, and Dtype support is now full-fledged. @@ -330,3 +332,11 @@ Enhancements See the :ref:`full release notes <release>` or issue tracker on GitHub for a complete list. + + +.. _whatsnew_0.11.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.10.1..v0.11.0 diff --git a/doc/source/whatsnew/v0.12.0.txt b/doc/source/whatsnew/v0.12.0.rst similarity index 99% rename from doc/source/whatsnew/v0.12.0.txt rename to doc/source/whatsnew/v0.12.0.rst index f66f6c0f72d5d..a462359b6e3c0 100644 --- a/doc/source/whatsnew/v0.12.0.txt +++ b/doc/source/whatsnew/v0.12.0.rst @@ -3,6 +3,8 @@ v0.12.0 (July 24, 2013) ------------------------ +{{ common_imports }} + This is a major release from 0.11.0 and includes several new features and enhancements along with a large number of bug fixes. @@ -504,3 +506,11 @@ Bug Fixes See the :ref:`full release notes <release>` or issue tracker on GitHub for a complete list. + + +.. _whatsnew_0.12.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.11.0..v0.12.0 diff --git a/doc/source/whatsnew/v0.13.0.txt b/doc/source/whatsnew/v0.13.0.rst similarity index 66% rename from doc/source/whatsnew/v0.13.0.txt rename to doc/source/whatsnew/v0.13.0.rst index 94cd451196ead..037347afb1d59 100644 --- a/doc/source/whatsnew/v0.13.0.txt +++ b/doc/source/whatsnew/v0.13.0.rst @@ -3,6 +3,8 @@ v0.13.0 (January 3, 2014) --------------------------- +{{ common_imports }} + This is a major release from 0.12.0 and includes a number of API changes, several new features and enhancements along with a large number of bug fixes. @@ -425,7 +427,7 @@ than switching to the short info view (:issue:`4886`, :issue:`5550`). This makes the representation more consistent as small DataFrames get larger. -.. image:: _static/df_repr_truncated.png +.. image:: ../_static/df_repr_truncated.png :alt: Truncated HTML representation of a DataFrame To get the info view, call :meth:`DataFrame.info`. If you prefer the @@ -976,11 +978,308 @@ to unify methods and behaviors. Series formerly subclassed directly from s.a = 5 s +.. _release.bug_fixes-0.13.0: + Bug Fixes ~~~~~~~~~ -See :ref:`V0.13.0 Bug Fixes<release.bug_fixes-0.13.0>` for an extensive list of bugs that have been fixed in 0.13.0. 
+- ``HDFStore`` + + - raising an invalid ``TypeError`` rather than ``ValueError`` when + appending with a different block ordering (:issue:`4096`) + - ``read_hdf`` was not respecting as passed ``mode`` (:issue:`4504`) + - appending a 0-len table will work correctly (:issue:`4273`) + - ``to_hdf`` was raising when passing both arguments ``append`` and + ``table`` (:issue:`4584`) + - reading from a store with duplicate columns across dtypes would raise + (:issue:`4767`) + - Fixed a bug where ``ValueError`` wasn't correctly raised when column + names weren't strings (:issue:`4956`) + - A zero length series written in Fixed format not deserializing properly. + (:issue:`4708`) + - Fixed decoding perf issue on pyt3 (:issue:`5441`) + - Validate levels in a MultiIndex before storing (:issue:`5527`) + - Correctly handle ``data_columns`` with a Panel (:issue:`5717`) +- Fixed bug in tslib.tz_convert(vals, tz1, tz2): it could raise IndexError + exception while trying to access trans[pos + 1] (:issue:`4496`) +- The ``by`` argument now works correctly with the ``layout`` argument + (:issue:`4102`, :issue:`4014`) in ``*.hist`` plotting methods +- Fixed bug in ``PeriodIndex.map`` where using ``str`` would return the str + representation of the index (:issue:`4136`) +- Fixed test failure ``test_time_series_plot_color_with_empty_kwargs`` when + using custom matplotlib default colors (:issue:`4345`) +- Fix running of stata IO tests. 
Now uses temporary files to write + (:issue:`4353`) +- Fixed an issue where ``DataFrame.sum`` was slower than ``DataFrame.mean`` + for integer valued frames (:issue:`4365`) +- ``read_html`` tests now work with Python 2.6 (:issue:`4351`) +- Fixed bug where ``network`` testing was throwing ``NameError`` because a + local variable was undefined (:issue:`4381`) +- In ``to_json``, raise if a passed ``orient`` would cause loss of data + because of a duplicate index (:issue:`4359`) +- In ``to_json``, fix date handling so milliseconds are the default timestamp + as the docstring says (:issue:`4362`). +- ``as_index`` is no longer ignored when doing groupby apply (:issue:`4648`, + :issue:`3417`) +- JSON NaT handling fixed, NaTs are now serialized to `null` (:issue:`4498`) +- Fixed JSON handling of escapable characters in JSON object keys + (:issue:`4593`) +- Fixed passing ``keep_default_na=False`` when ``na_values=None`` + (:issue:`4318`) +- Fixed bug with ``values`` raising an error on a DataFrame with duplicate + columns and mixed dtypes, surfaced in (:issue:`4377`) +- Fixed bug with duplicate columns and type conversion in ``read_json`` when + ``orient='split'`` (:issue:`4377`) +- Fixed JSON bug where locales with decimal separators other than '.' threw + exceptions when encoding / decoding certain values. 
(:issue:`4918`) +- Fix ``.iat`` indexing with a ``PeriodIndex`` (:issue:`4390`) +- Fixed an issue where ``PeriodIndex`` joining with self was returning a new + instance rather than the same instance (:issue:`4379`); also adds a test + for this for the other index types +- Fixed a bug with all the dtypes being converted to object when using the + CSV cparser with the usecols parameter (:issue:`3192`) +- Fix an issue in merging blocks where the resulting DataFrame had partially + set _ref_locs (:issue:`4403`) +- Fixed an issue where hist subplots were being overwritten when they were + called using the top level matplotlib API (:issue:`4408`) +- Fixed a bug where calling ``Series.astype(str)`` would truncate the string + (:issue:`4405`, :issue:`4437`) +- Fixed a py3 compat issue where bytes were being repr'd as tuples + (:issue:`4455`) +- Fixed Panel attribute naming conflict if item is named 'a' + (:issue:`3440`) +- Fixed an issue where duplicate indexes were raising when plotting + (:issue:`4486`) +- Fixed an issue where cumsum and cumprod didn't work with bool dtypes + (:issue:`4170`, :issue:`4440`) +- Fixed Panel slicing issued in ``xs`` that was returning an incorrect dimmed + object (:issue:`4016`) +- Fix resampling bug where custom reduce function not used if only one group + (:issue:`3849`, :issue:`4494`) +- Fixed Panel assignment with a transposed frame (:issue:`3830`) +- Raise on set indexing with a Panel and a Panel as a value which needs + alignment (:issue:`3777`) +- frozenset objects now raise in the ``Series`` constructor (:issue:`4482`, + :issue:`4480`) +- Fixed issue with sorting a duplicate MultiIndex that has multiple dtypes + (:issue:`4516`) +- Fixed bug in ``DataFrame.set_values`` which was causing name attributes to + be lost when expanding the index. 
(:issue:`3742`, :issue:`4039`) +- Fixed issue where individual ``names``, ``levels`` and ``labels`` could be + set on ``MultiIndex`` without validation (:issue:`3714`, :issue:`4039`) +- Fixed (:issue:`3334`) in pivot_table. Margins did not compute if values is + the index. +- Fix bug in having a rhs of ``np.timedelta64`` or ``np.offsets.DateOffset`` + when operating with datetimes (:issue:`4532`) +- Fix arithmetic with series/datetimeindex and ``np.timedelta64`` not working + the same (:issue:`4134`) and buggy timedelta in NumPy 1.6 (:issue:`4135`) +- Fix bug in ``pd.read_clipboard`` on windows with PY3 (:issue:`4561`); not + decoding properly +- ``tslib.get_period_field()`` and ``tslib.get_period_field_arr()`` now raise + if code argument out of range (:issue:`4519`, :issue:`4520`) +- Fix boolean indexing on an empty series loses index names (:issue:`4235`), + infer_dtype works with empty arrays. +- Fix reindexing with multiple axes; if an axes match was not replacing the + current axes, leading to a possible lazy frequency inference issue + (:issue:`3317`) +- Fixed issue where ``DataFrame.apply`` was reraising exceptions incorrectly + (causing the original stack trace to be truncated). 
+- Fix selection with ``ix/loc`` and non_unique selectors (:issue:`4619`) +- Fix assignment with iloc/loc involving a dtype change in an existing column + (:issue:`4312`, :issue:`5702`) have internal setitem_with_indexer in core/indexing + to use Block.setitem +- Fixed bug where thousands operator was not handled correctly for floating + point numbers in csv_import (:issue:`4322`) +- Fix an issue with CacheableOffset not properly being used by many + DateOffset; this prevented the DateOffset from being cached (:issue:`4609`) +- Fix boolean comparison with a DataFrame on the lhs, and a list/tuple on the + rhs (:issue:`4576`) +- Fix error/dtype conversion with setitem of ``None`` on ``Series/DataFrame`` + (:issue:`4667`) +- Fix decoding based on a passed in non-default encoding in ``pd.read_stata`` + (:issue:`4626`) +- Fix ``DataFrame.from_records`` with a plain-vanilla ``ndarray``. + (:issue:`4727`) +- Fix some inconsistencies with ``Index.rename`` and ``MultiIndex.rename``, + etc. (:issue:`4718`, :issue:`4628`) +- Bug in using ``iloc/loc`` with a cross-sectional and duplicate indices + (:issue:`4726`) +- Bug with using ``QUOTE_NONE`` with ``to_csv`` causing ``Exception``. + (:issue:`4328`) +- Bug with Series indexing not raising an error when the right-hand-side has + an incorrect length (:issue:`2702`) +- Bug in MultiIndexing with a partial string selection as one part of a + MultIndex (:issue:`4758`) +- Bug with reindexing on the index with a non-unique index will now raise + ``ValueError`` (:issue:`4746`) +- Bug in setting with ``loc/ix`` a single indexer with a MultiIndex axis and + a NumPy array, related to (:issue:`3777`) +- Bug in concatenation with duplicate columns across dtypes not merging with + axis=0 (:issue:`4771`, :issue:`4975`) +- Bug in ``iloc`` with a slice index failing (:issue:`4771`) +- Incorrect error message with no colspecs or width in ``read_fwf``. 
+ (:issue:`4774`) +- Fix bugs in indexing in a Series with a duplicate index (:issue:`4548`, + :issue:`4550`) +- Fixed bug with reading compressed files with ``read_fwf`` in Python 3. + (:issue:`3963`) +- Fixed an issue with a duplicate index and assignment with a dtype change + (:issue:`4686`) +- Fixed bug with reading compressed files in as ``bytes`` rather than ``str`` + in Python 3. Simplifies bytes-producing file-handling in Python 3 + (:issue:`3963`, :issue:`4785`). +- Fixed an issue related to ticklocs/ticklabels with log scale bar plots + across different versions of matplotlib (:issue:`4789`) +- Suppressed DeprecationWarning associated with internal calls issued by + repr() (:issue:`4391`) +- Fixed an issue with a duplicate index and duplicate selector with ``.loc`` + (:issue:`4825`) +- Fixed an issue with ``DataFrame.sort_index`` where, when sorting by a + single column and passing a list for ``ascending``, the argument for + ``ascending`` was being interpreted as ``True`` (:issue:`4839`, + :issue:`4846`) +- Fixed ``Panel.tshift`` not working. Added `freq` support to ``Panel.shift`` + (:issue:`4853`) +- Fix an issue in TextFileReader w/ Python engine (i.e. PythonParser) + with thousands != "," (:issue:`4596`) +- Bug in getitem with a duplicate index when using where (:issue:`4879`) +- Fix Type inference code coerces float column into datetime (:issue:`4601`) +- Fixed ``_ensure_numeric`` does not check for complex numbers + (:issue:`4902`) +- Fixed a bug in ``Series.hist`` where two figures were being created when + the ``by`` argument was passed (:issue:`4112`, :issue:`4113`). 
+- Fixed a bug in ``convert_objects`` for > 2 ndims (:issue:`4937`) +- Fixed a bug in DataFrame/Panel cache insertion and subsequent indexing + (:issue:`4939`, :issue:`5424`) +- Fixed string methods for ``FrozenNDArray`` and ``FrozenList`` + (:issue:`4929`) +- Fixed a bug with setting invalid or out-of-range values in indexing + enlargement scenarios (:issue:`4940`) +- Tests for fillna on empty Series (:issue:`4346`), thanks @immerrr +- Fixed ``copy()`` to shallow copy axes/indices as well and thereby keep + separate metadata. (:issue:`4202`, :issue:`4830`) +- Fixed skiprows option in Python parser for read_csv (:issue:`4382`) +- Fixed bug preventing ``cut`` from working with ``np.inf`` levels without + explicitly passing labels (:issue:`3415`) +- Fixed wrong check for overlapping in ``DatetimeIndex.union`` + (:issue:`4564`) +- Fixed conflict between thousands separator and date parser in csv_parser + (:issue:`4678`) +- Fix appending when dtypes are not the same (error showing mixing + float/np.datetime64) (:issue:`4993`) +- Fix repr for DateOffset. No longer show duplicate entries in kwds. + Removed unused offset fields. (:issue:`4638`) +- Fixed wrong index name during read_csv if using usecols. Applies to c + parser only. (:issue:`4201`) +- ``Timestamp`` objects can now appear in the left hand side of a comparison + operation with a ``Series`` or ``DataFrame`` object (:issue:`4982`). +- Fix a bug when indexing with ``np.nan`` via ``iloc/loc`` (:issue:`5016`) +- Fixed a bug where low memory c parser could create different types in + different chunks of the same file. Now coerces to numerical type or raises + warning. (:issue:`3866`) +- Fix a bug where reshaping a ``Series`` to its own shape raised + ``TypeError`` (:issue:`4554`) and other reshaping issues. 
+- Bug in setting with ``ix/loc`` and a mixed int/string index (:issue:`4544`) +- Make sure series-series boolean comparisons are label based (:issue:`4947`) +- Bug in multi-level indexing with a Timestamp partial indexer + (:issue:`4294`) +- Tests/fix for MultiIndex construction of an all-nan frame (:issue:`4078`) +- Fixed a bug where :func:`~pandas.read_html` wasn't correctly inferring + values of tables with commas (:issue:`5029`) +- Fixed a bug where :func:`~pandas.read_html` wasn't providing a stable + ordering of returned tables (:issue:`4770`, :issue:`5029`). +- Fixed a bug where :func:`~pandas.read_html` was incorrectly parsing when + passed ``index_col=0`` (:issue:`5066`). +- Fixed a bug where :func:`~pandas.read_html` was incorrectly inferring the + type of headers (:issue:`5048`). +- Fixed a bug where ``DatetimeIndex`` joins with ``PeriodIndex`` caused a + stack overflow (:issue:`3899`). +- Fixed a bug where ``groupby`` objects didn't allow plots (:issue:`5102`). +- Fixed a bug where ``groupby`` objects weren't tab-completing column names + (:issue:`5102`). +- Fixed a bug where ``groupby.plot()`` and friends were duplicating figures + multiple times (:issue:`5102`). +- Provide automatic conversion of ``object`` dtypes on fillna, related + (:issue:`5103`) +- Fixed a bug where default options were being overwritten in the option + parser cleaning (:issue:`5121`). +- Treat a list/ndarray identically for ``iloc`` indexing with list-like + (:issue:`5006`) +- Fix ``MultiIndex.get_level_values()`` with missing values (:issue:`5074`) +- Fix bound checking for Timestamp() with datetime64 input (:issue:`4065`) +- Fix a bug where ``TestReadHtml`` wasn't calling the correct ``read_html()`` + function (:issue:`5150`). +- Fix a bug with ``NDFrame.replace()`` which made replacement appear as + though it was (incorrectly) using regular expressions (:issue:`5143`). 
+- Improved the error message for ``to_datetime`` (:issue:`4928`) +- Made sure different locales are tested on travis-ci (:issue:`4918`). Also + adds a couple of utilities for getting locales and setting locales with a + context manager. +- Fixed segfault on ``isnull(MultiIndex)`` (now raises an error instead) + (:issue:`5123`, :issue:`5125`) +- Allow duplicate indices when performing operations that align + (:issue:`5185`, :issue:`5639`) +- Compound dtypes in a constructor raise ``NotImplementedError`` + (:issue:`5191`) +- Bug in comparing duplicate frames (:issue:`4421`) and related issues +- Bug in ``describe`` on duplicate frames +- Bug in ``to_datetime`` with a format and ``coerce=True`` not raising + (:issue:`5195`) +- Bug in ``loc`` setting with multiple indexers and a rhs of a Series that + needs broadcasting (:issue:`5206`) +- Fixed bug where inplace setting of levels or labels on ``MultiIndex`` would + not clear cached ``values`` property and therefore return wrong ``values``. + (:issue:`5215`) +- Fixed bug where filtering a grouped DataFrame or Series did not maintain + the original ordering (:issue:`4621`). +- Fixed ``Period`` with a business date freq to always roll-forward if on a + non-business date. (:issue:`5203`) +- Fixed bug in Excel writers where frames with duplicate column names weren't + written correctly. (:issue:`5235`) +- Fixed issue with ``drop`` and a non-unique index on Series (:issue:`5248`) +- Fixed segfault in C parser caused by passing more names than columns in + the file.
(:issue:`5156`) +- Fix ``Series.isin`` with date/time-like dtypes (:issue:`5021`) +- C and Python Parser can now handle the more common MultiIndex column + format which doesn't have a row for index names (:issue:`4702`) +- Bug when trying to use an out-of-bounds date as an object dtype + (:issue:`5312`) +- Bug when trying to display an embedded PandasObject (:issue:`5324`) +- Allow operations on Timestamps to return a datetime if the result is out-of-bounds, + related (:issue:`5312`) +- Fix return value/type signature of ``initObjToJSON()`` to be compatible + with numpy's ``import_array()`` (:issue:`5334`, :issue:`5326`) +- Bug when renaming then set_index on a DataFrame (:issue:`5344`) +- Test suite no longer leaves around temporary files when testing graphics. (:issue:`5347`) + (thanks for catching this @yarikoptic!) +- Fixed html tests on win32. (:issue:`4580`) +- Make sure that ``head/tail`` are ``iloc`` based (:issue:`5370`) +- Fixed bug for ``PeriodIndex`` string representation if there are 1 or 2 + elements. (:issue:`5372`) +- The GroupBy methods ``transform`` and ``filter`` can be used on Series + and DataFrames that have repeated (non-unique) indices. (:issue:`4620`) +- Fix empty series not printing name in repr (:issue:`4651`) +- Make tests create temp files in temp directory by default.
(:issue:`5419`) +- ``pd.to_timedelta`` of a scalar returns a scalar (:issue:`5410`) +- ``pd.to_timedelta`` accepts ``NaN`` and ``NaT``, returning ``NaT`` instead of raising (:issue:`5437`) +- Performance improvements in ``isnull`` on larger size pandas objects +- Fixed various setitem with 1d ndarray that does not have a matching + length to the indexer (:issue:`5508`) +- Bug in getitem with a MultiIndex and ``iloc`` (:issue:`5528`) +- Bug in delitem on a Series (:issue:`5542`) +- Bug fix in ``apply`` when using a custom function and objects are not mutated (:issue:`5545`) +- Bug in selecting from a non-unique index with ``loc`` (:issue:`5553`) +- Bug in groupby returning non-consistent types when user function returns a ``None`` (:issue:`5592`) +- Work around regression in numpy 1.7.0 which erroneously raises IndexError from ``ndarray.item`` (:issue:`5666`) +- Bug in repeated indexing of object with resultant non-unique index (:issue:`5678`) +- Bug in fillna with Series and a passed series/dict (:issue:`5703`) +- Bug in groupby transform with a datetime-like grouper (:issue:`5712`) +- Bug in MultiIndex selection in PY3 when using certain keys (:issue:`5725`) +- Row-wise concat of differing dtypes failing in certain cases (:issue:`5754`) + +.. _whatsnew_0.13.0.contributors: + +Contributors +~~~~~~~~~~~~ -See the :ref:`full release notes -<release>` or issue tracker -on GitHub for a complete list of all API changes, Enhancements and Bug Fixes. +..
contributors:: v0.12.0..v0.13.0 diff --git a/doc/source/whatsnew/v0.13.1.txt b/doc/source/whatsnew/v0.13.1.rst similarity index 64% rename from doc/source/whatsnew/v0.13.1.txt rename to doc/source/whatsnew/v0.13.1.rst index a4807a6d61b76..6a1b578cc08fb 100644 --- a/doc/source/whatsnew/v0.13.1.txt +++ b/doc/source/whatsnew/v0.13.1.rst @@ -3,6 +3,8 @@ v0.13.1 (February 3, 2014) -------------------------- +{{ common_imports }} + This is a minor release from 0.13.0 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -126,10 +128,6 @@ API changes df.equals(df2) df.equals(df2.sort_index()) - import pandas.core.common as com - com.array_equivalent(np.array([0, np.nan]), np.array([0, np.nan])) - np.array_equal(np.array([0, np.nan]), np.array([0, np.nan])) - - ``DataFrame.apply`` will use the ``reduce`` argument to determine whether a ``Series`` or a ``DataFrame`` should be returned when the ``DataFrame`` is empty (:issue:`6007`). @@ -296,11 +294,86 @@ Experimental There are no experimental changes in 0.13.1 +.. _release.bug_fixes-0.13.1: + Bug Fixes ~~~~~~~~~ -See :ref:`V0.13.1 Bug Fixes<release.bug_fixes-0.13.1>` for an extensive list of bugs that have been fixed in 0.13.1. +- Bug in ``io.wb.get_countries`` not including all countries (:issue:`6008`) +- Bug in Series replace with timestamp dict (:issue:`5797`) +- read_csv/read_table now respects the `prefix` kwarg (:issue:`5732`). 
+- Bug in selection with missing values via ``.ix`` from a duplicate indexed DataFrame failing (:issue:`5835`) +- Fix issue of boolean comparison on empty DataFrames (:issue:`5808`) +- Bug in isnull handling ``NaT`` in an object array (:issue:`5443`) +- Bug in ``to_datetime`` when passed a ``np.nan`` or integer datelike and a format string (:issue:`5863`) +- Bug in groupby dtype conversion with datetimelike (:issue:`5869`) +- Regression in handling of empty Series as indexers to Series (:issue:`5877`) +- Bug in internal caching, related to (:issue:`5727`) +- Testing bug in reading JSON/msgpack from a non-filepath on windows under py3 (:issue:`5874`) +- Bug when assigning to .ix[tuple(...)] (:issue:`5896`) +- Bug in fully reindexing a Panel (:issue:`5905`) +- Bug in idxmin/max with object dtypes (:issue:`5914`) +- Bug in ``BusinessDay`` when adding n days to a date not on offset when n>5 and n%5==0 (:issue:`5890`) +- Bug in assigning to chained series with a series via ix (:issue:`5928`) +- Bug in creating an empty DataFrame, copying, then assigning (:issue:`5932`) +- Bug in DataFrame.tail with empty frame (:issue:`5846`) +- Bug in propagating metadata on ``resample`` (:issue:`5862`) +- Fixed string-representation of ``NaT`` to be "NaT" (:issue:`5708`) +- Fixed string-representation for Timestamp to show nanoseconds if present (:issue:`5912`) +- ``pd.match`` not returning passed sentinel +- ``Panel.to_frame()`` no longer fails when ``major_axis`` is a + ``MultiIndex`` (:issue:`5402`). 
+- Bug in ``pd.read_msgpack`` with inferring a ``DateTimeIndex`` frequency + incorrectly (:issue:`5947`) +- Fixed ``to_datetime`` for array with both tz-aware datetimes and ``NaT``'s (:issue:`5961`) +- Bug in rolling skew/kurtosis when passed a Series with bad data (:issue:`5749`) +- Bug in scipy ``interpolate`` methods with a datetime index (:issue:`5975`) +- Bug in NaT comparison if a mixed datetime/np.datetime64 with NaT were passed (:issue:`5968`) +- Fixed bug with ``pd.concat`` losing dtype information if all inputs are empty (:issue:`5742`) +- Recent changes in IPython cause warnings to be emitted when using previous versions + of pandas in QTConsole, now fixed. If you're using an older version and + need to suppress the warnings, see (:issue:`5922`). +- Bug in merging ``timedelta`` dtypes (:issue:`5695`) +- Bug in plotting.scatter_matrix function. Wrong alignment between diagonal + and off-diagonal plots, see (:issue:`5497`). +- Regression in Series with a MultiIndex via ix (:issue:`6018`) +- Bug in Series.xs with a MultiIndex (:issue:`6018`) +- Bug in Series construction of mixed type with datelike and an integer (which should result in + object type and not automatic conversion) (:issue:`6028`) +- Possible segfault when chained indexing with an object array under NumPy 1.7.1 (:issue:`6026`, :issue:`6056`) +- Bug in setting using fancy indexing a single element with a non-scalar (e.g. a list) + (:issue:`6043`) +- ``to_sql`` did not respect ``if_exists`` (:issue:`4110`, :issue:`4304`) +- Regression in ``.get(None)`` indexing from 0.12 (:issue:`5652`) +- Subtle ``iloc`` indexing bug, surfaced in (:issue:`6059`) +- Bug with insert of strings into DatetimeIndex (:issue:`5818`) +- Fixed unicode bug in to_html/HTML repr (:issue:`6098`) +- Fixed missing arg validation in get_options_data (:issue:`6105`) +- Bug in assignment with duplicate columns in a frame where the locations + are a slice (e.g.
next to each other) (:issue:`6120`) +- Bug in propagating _ref_locs during construction of a DataFrame with dups + index/columns (:issue:`6121`) +- Bug in ``DataFrame.apply`` when using mixed datelike reductions (:issue:`6125`) +- Bug in ``DataFrame.append`` when appending a row with different columns (:issue:`6129`) +- Bug in DataFrame construction with recarray and non-ns datetime dtype (:issue:`6140`) +- Bug in ``.loc`` setitem indexing with a dataframe on rhs, multiple item setting, and + a datetimelike (:issue:`6152`) +- Fixed a bug in ``query``/``eval`` during lexicographic string comparisons (:issue:`6155`). +- Fixed a bug in ``query`` where the index of a single-element ``Series`` was + being thrown away (:issue:`6148`). +- Bug in ``HDFStore`` on appending a dataframe with MultiIndexed columns to + an existing table (:issue:`6167`) +- Consistency with dtypes in setting an empty DataFrame (:issue:`6171`) +- Bug in selecting on a MultiIndex ``HDFStore`` even in the presence of an + under-specified column spec (:issue:`6169`) +- Bug in ``nanops.var`` with ``ddof=1`` and 1 element would sometimes return ``inf`` + rather than ``nan`` on some platforms (:issue:`6136`) +- Bug in Series and DataFrame bar plots ignoring the ``use_index`` keyword (:issue:`6209`) +- Bug in groupby with mixed str/int under python3 fixed; ``argsort`` was failing (:issue:`6212`) + +.. _whatsnew_0.13.1.contributors: + +Contributors +~~~~~~~~~~~~ -See the :ref:`full release notes -<release>` or issue tracker -on GitHub for a complete list of all API changes, Enhancements and Bug Fixes. +..
contributors:: v0.13.0..v0.13.1 diff --git a/doc/source/whatsnew/v0.14.0.txt b/doc/source/whatsnew/v0.14.0.rst similarity index 99% rename from doc/source/whatsnew/v0.14.0.txt rename to doc/source/whatsnew/v0.14.0.rst index d4b7b09c054d6..9606bbac2a1b3 100644 --- a/doc/source/whatsnew/v0.14.0.txt +++ b/doc/source/whatsnew/v0.14.0.rst @@ -3,6 +3,8 @@ v0.14.0 (May 31, 2014) ----------------------- +{{ common_imports }} + This is a major release from 0.13.1 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -249,13 +251,13 @@ Display Changes constraints were reached and an ellipsis (...) signaled that part of the data was cut off. - .. image:: _static/trunc_before.png + .. image:: ../_static/trunc_before.png :alt: The previous look of truncate. In the current version, large DataFrames are centrally truncated, showing a preview of head and tail in both dimensions. - .. image:: _static/trunc_after.png + .. image:: ../_static/trunc_after.png :alt: The new look. - allow option ``'truncate'`` for ``display.show_dimensions`` to only show the dimensions if the @@ -1047,3 +1049,11 @@ Bug Fixes - Bug in expressions evaluation with reversed ops, showing in series-dataframe ops (:issue:`7198`, :issue:`7192`) - Bug in multi-axis indexing with > 2 ndim and a MultiIndex (:issue:`7199`) - Fix a bug where invalid eval/query operations would blow the stack (:issue:`5198`) + + +.. _whatsnew_0.14.0.contributors: + +Contributors +~~~~~~~~~~~~ + +..
contributors:: v0.13.1..v0.14.0 diff --git a/doc/source/whatsnew/v0.14.1.txt b/doc/source/whatsnew/v0.14.1.rst similarity index 99% rename from doc/source/whatsnew/v0.14.1.txt rename to doc/source/whatsnew/v0.14.1.rst index d019cf54086c6..3b0ff5650d90d 100644 --- a/doc/source/whatsnew/v0.14.1.txt +++ b/doc/source/whatsnew/v0.14.1.rst @@ -3,6 +3,8 @@ v0.14.1 (July 11, 2014) ----------------------- +{{ common_imports }} + This is a minor release from 0.14.0 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -269,3 +271,11 @@ Bug Fixes - Bug in grouped `hist` doesn't handle `rot` kw and `sharex` kw properly (:issue:`7234`) - Bug in ``.loc`` performing fallback integer indexing with ``object`` dtype indices (:issue:`7496`) - Bug (regression) in ``PeriodIndex`` constructor when passed ``Series`` objects (:issue:`7701`). + + +.. _whatsnew_0.14.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.14.0..v0.14.1 diff --git a/doc/source/whatsnew/v0.15.0.txt b/doc/source/whatsnew/v0.15.0.rst similarity index 99% rename from doc/source/whatsnew/v0.15.0.txt rename to doc/source/whatsnew/v0.15.0.rst index 4be6975958af5..00eda927a9c73 100644 --- a/doc/source/whatsnew/v0.15.0.txt +++ b/doc/source/whatsnew/v0.15.0.rst @@ -3,6 +3,8 @@ v0.15.0 (October 18, 2014) -------------------------- +{{ common_imports }} + This is a major release from 0.14.1 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -1216,3 +1218,11 @@ Bug Fixes - Suppress FutureWarning generated by NumPy when comparing object arrays containing NaN for equality (:issue:`7065`) - Bug in ``DataFrame.eval()`` where the dtype of the ``not`` operator (``~``) was not correctly inferred as ``bool``. + + +.. 
_whatsnew_0.15.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.14.1..v0.15.0 diff --git a/doc/source/whatsnew/v0.15.1.txt b/doc/source/whatsnew/v0.15.1.rst similarity index 98% rename from doc/source/whatsnew/v0.15.1.txt rename to doc/source/whatsnew/v0.15.1.rst index 8cbf239ea20d0..88127d4e1b8d8 100644 --- a/doc/source/whatsnew/v0.15.1.txt +++ b/doc/source/whatsnew/v0.15.1.rst @@ -3,6 +3,8 @@ v0.15.1 (November 9, 2014) -------------------------- +{{ common_imports }} + This is a minor bug-fix release from 0.15.0 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -304,3 +306,11 @@ Bug Fixes - Bug in Setting by indexer to a scalar value with a mixed-dtype `Panel4d` was failing (:issue:`8702`) - Bug where ``DataReader``'s would fail if one of the symbols passed was invalid. Now returns data for valid symbols and np.nan for invalid (:issue:`8494`) - Bug in ``get_quote_yahoo`` that wouldn't allow non-float return values (:issue:`5229`). + + +.. _whatsnew_0.15.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.15.0..v0.15.1 diff --git a/doc/source/whatsnew/v0.15.2.txt b/doc/source/whatsnew/v0.15.2.rst similarity index 99% rename from doc/source/whatsnew/v0.15.2.txt rename to doc/source/whatsnew/v0.15.2.rst index ee72fab7d23f2..dd988cde88145 100644 --- a/doc/source/whatsnew/v0.15.2.txt +++ b/doc/source/whatsnew/v0.15.2.rst @@ -3,6 +3,8 @@ v0.15.2 (December 12, 2014) --------------------------- +{{ common_imports }} + This is a minor release from 0.15.1 and includes a large number of bug fixes along with several new features, enhancements, and performance improvements. A small number of API changes were necessary to fix existing bugs. 
@@ -238,3 +240,11 @@ Bug Fixes - Bug in plotting if sharex was enabled and index was a timeseries, would show labels on multiple axes (:issue:`3964`). - Bug where passing a unit to the TimedeltaIndex constructor applied the nano-second conversion twice. (:issue:`9011`). - Bug in plotting of a period-like array (:issue:`9012`) + + +.. _whatsnew_0.15.2.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.15.1..v0.15.2 diff --git a/doc/source/whatsnew/v0.16.0.txt b/doc/source/whatsnew/v0.16.0.rst similarity index 99% rename from doc/source/whatsnew/v0.16.0.txt rename to doc/source/whatsnew/v0.16.0.rst index ce525bbb4c1d6..d394b43a7ec88 100644 --- a/doc/source/whatsnew/v0.16.0.txt +++ b/doc/source/whatsnew/v0.16.0.rst @@ -3,6 +3,8 @@ v0.16.0 (March 22, 2015) ------------------------ +{{ common_imports }} + This is a major release from 0.15.2 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -74,7 +76,7 @@ calculate the ratio, and plot PetalRatio = lambda x: x.PetalWidth / x.PetalLength) .plot(kind='scatter', x='SepalRatio', y='PetalRatio')) -.. image:: _static/whatsnew_assign.png +.. image:: ../_static/whatsnew_assign.png :scale: 50 % See the :ref:`documentation <dsintro.chained_assignment>` for more. (:issue:`9229`) @@ -675,3 +677,11 @@ Bug Fixes df1 = DataFrame({'x': Series(['a','b','c']), 'y': Series(['d','e','f'])}) df2 = df1[['x']] df2['y'] = ['g', 'h', 'i'] + + +.. _whatsnew_0.16.0.contributors: + +Contributors +~~~~~~~~~~~~ + +..
contributors:: v0.15.2..v0.16.0 diff --git a/doc/source/whatsnew/v0.16.1.txt b/doc/source/whatsnew/v0.16.1.rst similarity index 99% rename from doc/source/whatsnew/v0.16.1.txt rename to doc/source/whatsnew/v0.16.1.rst index d3a8064a0e786..aae96a5d63c14 100644 --- a/doc/source/whatsnew/v0.16.1.txt +++ b/doc/source/whatsnew/v0.16.1.rst @@ -3,6 +3,8 @@ v0.16.1 (May 11, 2015) ---------------------- +{{ common_imports }} + This is a minor bug-fix release from 0.16.0 and includes a large number of bug fixes along with several new features, enhancements, and performance improvements. We recommend that all users upgrade to this version. @@ -465,3 +467,11 @@ Bug Fixes - Bug in subclassed ``DataFrame``. It may not return the correct class, when slicing or subsetting it. (:issue:`9632`) - Bug in ``.median()`` where non-float null values are not handled correctly (:issue:`10040`) - Bug in Series.fillna() where it raises if a numerically convertible string is given (:issue:`10092`) + + +.. _whatsnew_0.16.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.16.0..v0.16.1 diff --git a/doc/source/whatsnew/v0.16.2.txt b/doc/source/whatsnew/v0.16.2.rst similarity index 98% rename from doc/source/whatsnew/v0.16.2.txt rename to doc/source/whatsnew/v0.16.2.rst index 047da4c94093b..acae3a55d5f78 100644 --- a/doc/source/whatsnew/v0.16.2.txt +++ b/doc/source/whatsnew/v0.16.2.rst @@ -3,6 +3,8 @@ v0.16.2 (June 12, 2015) ----------------------- +{{ common_imports }} + This is a minor bug-fix release from 0.16.1 and includes a large number of bug fixes along with some new features (:meth:`~DataFrame.pipe` method), enhancements, and performance improvements. @@ -165,3 +167,11 @@ Bug Fixes - Bug in ``read_hdf`` where open stores could not be used (:issue:`10330`). - Bug in adding empty ``DataFrames``, now results in a ``DataFrame`` that ``.equals`` an empty ``DataFrame`` (:issue:`10181`).
- Bug in ``to_hdf`` and ``HDFStore`` which did not check that complib choices were valid (:issue:`4582`, :issue:`8874`). + + +.. _whatsnew_0.16.2.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.16.1..v0.16.2 diff --git a/doc/source/whatsnew/v0.17.0.txt b/doc/source/whatsnew/v0.17.0.rst similarity index 99% rename from doc/source/whatsnew/v0.17.0.txt rename to doc/source/whatsnew/v0.17.0.rst index 404f2bf06e861..abde8d953f4df 100644 --- a/doc/source/whatsnew/v0.17.0.txt +++ b/doc/source/whatsnew/v0.17.0.rst @@ -3,6 +3,8 @@ v0.17.0 (October 9, 2015) ------------------------- +{{ common_imports }} + This is a major release from 0.16.2 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -160,7 +162,7 @@ To alleviate this issue, we have added a new, optional plotting interface, which In [14]: df.plot.bar() -.. image:: _static/whatsnew_plot_submethods.png +.. image:: ../_static/whatsnew_plot_submethods.png As a result of this change, these methods are now all discoverable via tab-completion: @@ -313,11 +315,11 @@ has been changed to make this keyword unnecessary - the change is shown below. **Old** -.. image:: _static/old-excel-index.png +.. image:: ../_static/old-excel-index.png **New** -.. image:: _static/new-excel-index.png +.. image:: ../_static/new-excel-index.png .. warning:: @@ -354,14 +356,14 @@ Some East Asian countries use Unicode characters its width is corresponding to 2 df = pd.DataFrame({u'国籍': ['UK', u'日本'], u'名前': ['Alice', u'しのぶ']}) df; -.. image:: _static/option_unicode01.png +.. image:: ../_static/option_unicode01.png .. ipython:: python pd.set_option('display.unicode.east_asian_width', True) df; -.. image:: _static/option_unicode02.png +.. 
image:: ../_static/option_unicode02.png For further details, see :ref:`here <options.east_asian_width>` @@ -1167,3 +1169,11 @@ Bug Fixes - Bug in ``.groupby`` when number of keys to group by is same as length of index (:issue:`11185`) - Bug in ``convert_objects`` where converted values might not be returned if all null and ``coerce`` (:issue:`9589`) - Bug in ``convert_objects`` where ``copy`` keyword was not respected (:issue:`9589`) + + +.. _whatsnew_0.17.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.16.2..v0.17.0 diff --git a/doc/source/whatsnew/v0.17.1.txt b/doc/source/whatsnew/v0.17.1.rst similarity index 98% rename from doc/source/whatsnew/v0.17.1.txt rename to doc/source/whatsnew/v0.17.1.rst index 328a8193c8b13..44554a88fba04 100644 --- a/doc/source/whatsnew/v0.17.1.txt +++ b/doc/source/whatsnew/v0.17.1.rst @@ -3,6 +3,8 @@ v0.17.1 (November 21, 2015) --------------------------- +{{ common_imports }} + .. note:: We are proud to announce that *pandas* has become a sponsored project of the (`NumFOCUS organization`_). This will help ensure the success of development of *pandas* as a world-class open-source project. @@ -202,3 +204,11 @@ Bug Fixes - Bug in ``DataFrame.to_sparse()`` loses column names for MultiIndexes (:issue:`11600`) - Bug in ``DataFrame.round()`` with non-unique column index producing a Fatal Python error (:issue:`11611`) - Bug in ``DataFrame.round()`` with ``decimals`` being a non-unique indexed Series producing extra columns (:issue:`11618`) + + +.. _whatsnew_0.17.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.17.0..v0.17.1 diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.rst similarity index 99% rename from doc/source/whatsnew/v0.18.0.txt rename to doc/source/whatsnew/v0.18.0.rst index e38ba54d4b058..5cd4163b1a7a5 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.rst @@ -3,6 +3,8 @@ v0.18.0 (March 13, 2016) ------------------------ +{{ common_imports }} + This is a major release from 0.17.1 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -1290,3 +1292,11 @@ Bug Fixes - Bug when specifying a UTC ``DatetimeIndex`` by setting ``utc=True`` in ``.to_datetime`` (:issue:`11934`) - Bug when increasing the buffer size of CSV reader in ``read_csv`` (:issue:`12494`) - Bug when setting columns of a ``DataFrame`` with duplicate column names (:issue:`12344`) + + +.. _whatsnew_0.18.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.17.1..v0.18.0 diff --git a/doc/source/whatsnew/v0.18.1.txt b/doc/source/whatsnew/v0.18.1.rst similarity index 99% rename from doc/source/whatsnew/v0.18.1.txt rename to doc/source/whatsnew/v0.18.1.rst index 2445daebb580a..1dc01d7f1f745 100644 --- a/doc/source/whatsnew/v0.18.1.txt +++ b/doc/source/whatsnew/v0.18.1.rst @@ -3,6 +3,8 @@ v0.18.1 (May 3, 2016) --------------------- +{{ common_imports }} + This is a minor bug-fix release from 0.18.0 and includes a large number of bug fixes along with several new features, enhancements, and performance improvements. We recommend that all users upgrade to this version. 
@@ -692,3 +694,11 @@ Bug Fixes - Bug in ``pd.to_numeric()`` with ``Index`` returns ``np.ndarray``, rather than ``Index`` (:issue:`12777`) - Bug in ``pd.to_numeric()`` with datetime-like may raise ``TypeError`` (:issue:`12777`) - Bug in ``pd.to_numeric()`` with scalar raises ``ValueError`` (:issue:`12777`) + + +.. _whatsnew_0.18.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.18.0..v0.18.1 diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.rst similarity index 99% rename from doc/source/whatsnew/v0.19.0.txt rename to doc/source/whatsnew/v0.19.0.rst index 73fb124afef87..467319a4527d1 100644 --- a/doc/source/whatsnew/v0.19.0.txt +++ b/doc/source/whatsnew/v0.19.0.rst @@ -3,6 +3,8 @@ v0.19.0 (October 2, 2016) ------------------------- +{{ common_imports }} + This is a major release from 0.18.1 and includes a number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -1564,3 +1566,11 @@ Bug Fixes - ``PeriodIndex`` can now accept ``list`` and ``array`` which contains ``pd.NaT`` (:issue:`13430`) - Bug in ``df.groupby`` where ``.median()`` returns arbitrary values if grouped dataframe contains empty bins (:issue:`13629`) - Bug in ``Index.copy()`` where ``name`` parameter was ignored (:issue:`14302`) + + +.. _whatsnew_0.19.0.contributors: + +Contributors +~~~~~~~~~~~~ + +..
contributors:: v0.18.1..v0.19.0 diff --git a/doc/source/whatsnew/v0.19.1.txt b/doc/source/whatsnew/v0.19.1.rst similarity index 97% rename from doc/source/whatsnew/v0.19.1.txt rename to doc/source/whatsnew/v0.19.1.rst index 1c577dddf1cd4..0c909fa4195d7 100644 --- a/doc/source/whatsnew/v0.19.1.txt +++ b/doc/source/whatsnew/v0.19.1.rst @@ -3,6 +3,8 @@ v0.19.1 (November 3, 2016) -------------------------- +{{ common_imports }} + This is a minor bug-fix release from 0.19.0 and includes some small regression fixes, bug fixes and performance improvements. We recommend that all users upgrade to this version. @@ -59,3 +61,11 @@ Bug Fixes - Bug in ``df.groupby`` where ``TypeError`` raised when ``pd.Grouper(key=...)`` is passed in a list (:issue:`14334`) - Bug in ``pd.pivot_table`` may raise ``TypeError`` or ``ValueError`` when ``index`` or ``columns`` is not scalar and ``values`` is not specified (:issue:`14380`) + + +.. _whatsnew_0.19.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.19.0..v0.19.1 diff --git a/doc/source/whatsnew/v0.19.2.txt b/doc/source/whatsnew/v0.19.2.rst similarity index 97% rename from doc/source/whatsnew/v0.19.2.txt rename to doc/source/whatsnew/v0.19.2.rst index 171d97b76de75..1cded6d2c94e2 100644 --- a/doc/source/whatsnew/v0.19.2.txt +++ b/doc/source/whatsnew/v0.19.2.rst @@ -3,6 +3,8 @@ v0.19.2 (December 24, 2016) --------------------------- +{{ common_imports }} + This is a minor bug-fix release in the 0.19.x series and includes some small regression fixes, bug fixes and performance improvements. We recommend that all users upgrade to this version. @@ -80,3 +82,11 @@ Bug Fixes - Explicit check in ``to_stata`` and ``StataWriter`` for out-of-range values when writing doubles (:issue:`14618`) - Bug in ``.plot(kind='kde')`` which did not drop missing values to generate the KDE Plot, instead generating an empty plot. 
(:issue:`14821`) - Bug in ``unstack()`` if called with a list of column(s) as an argument, regardless of the dtypes of all columns, they get coerced to ``object`` (:issue:`11847`) + + +.. _whatsnew_0.19.2.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.19.1..v0.19.2 diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.rst similarity index 99% rename from doc/source/whatsnew/v0.20.0.txt rename to doc/source/whatsnew/v0.20.0.rst index 9f5fbdc195f34..8456449ee4419 100644 --- a/doc/source/whatsnew/v0.20.0.txt +++ b/doc/source/whatsnew/v0.20.0.rst @@ -3,6 +3,8 @@ v0.20.1 (May 5, 2017) --------------------- +{{ common_imports }} + This is a major release from 0.19.2 and includes a number of API changes, deprecations, new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -381,7 +383,7 @@ For example, after running the following, ``styled.xlsx`` renders as below: highlight_max() styled.to_excel('styled.xlsx', engine='openpyxl') -.. image:: _static/style-excel.png +.. image:: ../_static/style-excel.png .. ipython:: python :suppress: @@ -1731,3 +1733,11 @@ Other - Compat for 32-bit platforms for ``.qcut/cut``; bins will now be ``int64`` dtype (:issue:`14866`) - Bug in interactions with ``Qt`` when a ``QtApplication`` already exists (:issue:`14372`) - Avoid use of ``np.finfo()`` during ``import pandas`` removed to mitigate deadlock on Python GIL misuse (:issue:`14641`) + + +.. _whatsnew_0.20.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.19.2..v0.20.0 diff --git a/doc/source/whatsnew/v0.20.2.txt b/doc/source/whatsnew/v0.20.2.rst similarity index 97% rename from doc/source/whatsnew/v0.20.2.txt rename to doc/source/whatsnew/v0.20.2.rst index 3de6fbc8afaf8..784cd09edff30 100644 --- a/doc/source/whatsnew/v0.20.2.txt +++ b/doc/source/whatsnew/v0.20.2.rst @@ -3,6 +3,8 @@ v0.20.2 (June 4, 2017) ---------------------- +{{ common_imports }} + This is a minor bug-fix release in the 0.20.x series and includes some small regression fixes, bug fixes and performance improvements. We recommend that all users upgrade to this version. @@ -125,3 +127,11 @@ Other ^^^^^ - Bug in ``DataFrame.drop()`` with an empty-list with non-unique indices (:issue:`16270`) + + +.. _whatsnew_0.20.2.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.20.0..v0.20.2 diff --git a/doc/source/whatsnew/v0.20.3.txt b/doc/source/whatsnew/v0.20.3.rst similarity index 95% rename from doc/source/whatsnew/v0.20.3.txt rename to doc/source/whatsnew/v0.20.3.rst index 582f975f81a7a..47bfcc761b088 100644 --- a/doc/source/whatsnew/v0.20.3.txt +++ b/doc/source/whatsnew/v0.20.3.rst @@ -3,6 +3,8 @@ v0.20.3 (July 7, 2017) ----------------------- +{{ common_imports }} + This is a minor bug-fix release in the 0.20.x series and includes some small regression fixes and bug fixes. We recommend that all users upgrade to this version. @@ -58,3 +60,11 @@ Categorical ^^^^^^^^^^^ - Bug in ``DataFrame.sort_values`` not respecting the ``kind`` parameter with categorical data (:issue:`16793`) + + +.. _whatsnew_0.20.3.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.20.2..v0.20.3 diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.rst similarity index 99% rename from doc/source/whatsnew/v0.21.0.txt rename to doc/source/whatsnew/v0.21.0.rst index 77ae5b92d0e70..c9a90f3ada7e5 100644 --- a/doc/source/whatsnew/v0.21.0.txt +++ b/doc/source/whatsnew/v0.21.0.rst @@ -3,6 +3,8 @@ v0.21.0 (October 27, 2017) -------------------------- +{{ common_imports }} + This is a major release from 0.20.3 and includes a number of API changes, deprecations, new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. @@ -1176,3 +1178,11 @@ Other - Bug where some inplace operators were not being wrapped and produced a copy when invoked (:issue:`12962`) - Bug in :func:`eval` where the ``inplace`` parameter was being incorrectly handled (:issue:`16732`) + + +.. _whatsnew_0.21.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.20.3..v0.21.0 diff --git a/doc/source/whatsnew/v0.21.1.txt b/doc/source/whatsnew/v0.21.1.rst similarity index 98% rename from doc/source/whatsnew/v0.21.1.txt rename to doc/source/whatsnew/v0.21.1.rst index 49e59c9ddf5a7..bf13d5d67ed63 100644 --- a/doc/source/whatsnew/v0.21.1.txt +++ b/doc/source/whatsnew/v0.21.1.rst @@ -3,6 +3,8 @@ v0.21.1 (December 12, 2017) --------------------------- +{{ common_imports }} + This is a minor bug-fix release in the 0.21.x series and includes some small regression fixes, bug fixes and performance improvements. We recommend that all users upgrade to this version. @@ -169,3 +171,11 @@ String ^^^^^^ - :meth:`Series.str.split()` will now propagate ``NaN`` values across all expanded columns instead of ``None`` (:issue:`18450`) + + +.. _whatsnew_0.21.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.21.0..v0.21.1 diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.rst similarity index 98% rename from doc/source/whatsnew/v0.22.0.txt rename to doc/source/whatsnew/v0.22.0.rst index d165339cb0de9..f05b84a9d8902 100644 --- a/doc/source/whatsnew/v0.22.0.txt +++ b/doc/source/whatsnew/v0.22.0.rst @@ -3,6 +3,8 @@ v0.22.0 (December 29, 2017) --------------------------- +{{ common_imports }} + This is a major release from 0.21.1 and includes a single, API-breaking change. We recommend that all users upgrade to this version after carefully reading the release note (singular!). @@ -241,3 +243,11 @@ With conda, use Note that the inconsistency in the return value for all-*NA* series is still there for pandas 0.20.3 and earlier. Avoiding pandas 0.21 will only help with the empty case. + + +.. _whatsnew_0.22.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.21.1..v0.22.0 diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.rst similarity index 99% rename from doc/source/whatsnew/v0.23.0.txt rename to doc/source/whatsnew/v0.23.0.rst index 473a4bb72e6d9..f84517a3e3b9c 100644 --- a/doc/source/whatsnew/v0.23.0.txt +++ b/doc/source/whatsnew/v0.23.0.rst @@ -1,7 +1,9 @@ .. _whatsnew_0230: -v0.23.0 (May 15, 2018) ----------------------- +What's new in 0.23.0 (May 15, 2018) +----------------------------------- + +{{ common_imports }} This is a major release from 0.22.0 and includes a number of API changes, deprecations, new features, enhancements, and performance improvements along @@ -908,7 +910,7 @@ frames would not fit within the terminal width, and pandas would introduce line breaks to display these 20 columns. This resulted in an output that was relatively difficult to read: -.. image:: _static/print_df_old.png +.. 
image:: ../_static/print_df_old.png If Python runs in a terminal, the maximum number of columns is now determined automatically so that the printed data frame fits within the current terminal @@ -918,7 +920,7 @@ well as in many IDEs), this value cannot be inferred automatically and is thus set to `20` as in previous versions. In a terminal, this results in a much nicer output: -.. image:: _static/print_df_new.png +.. image:: ../_static/print_df_new.png Note that if you don't like the new default, you can always set this option yourself. To revert to the old setting, you can run this line: @@ -1412,3 +1414,10 @@ Other - Improved error message when attempting to use a Python keyword as an identifier in a ``numexpr`` backed query (:issue:`18221`) - Bug in accessing a :func:`pandas.get_option`, which raised ``KeyError`` rather than ``OptionError`` when looking up a non-existent option key in some cases (:issue:`19789`) - Bug in :func:`testing.assert_series_equal` and :func:`testing.assert_frame_equal` for Series or DataFrames with differing unicode data (:issue:`20503`) + +.. _whatsnew_0.23.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.22.0..v0.23.0 diff --git a/doc/source/whatsnew/v0.23.1.txt b/doc/source/whatsnew/v0.23.1.rst similarity index 97% rename from doc/source/whatsnew/v0.23.1.txt rename to doc/source/whatsnew/v0.23.1.rst index 1a514ba627fcb..e8e0060c48337 100644 --- a/doc/source/whatsnew/v0.23.1.txt +++ b/doc/source/whatsnew/v0.23.1.rst @@ -1,7 +1,9 @@ .. _whatsnew_0231: -v0.23.1 (June 12, 2018) ------------------------ +What's New in 0.23.1 (June 12, 2018) +------------------------------------ + +{{ common_imports }} This is a minor bug-fix release in the 0.23.x series and includes some small regression fixes and bug fixes. We recommend that all users upgrade to this version. 
@@ -138,3 +140,10 @@ Bug Fixes - Tab completion on :class:`Index` in IPython no longer outputs deprecation warnings (:issue:`21125`) - Bug preventing pandas being used on Windows without C++ redistributable installed (:issue:`21106`) + +.. _whatsnew_0.23.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.23.0..v0.23.1 diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.rst similarity index 81% rename from doc/source/whatsnew/v0.23.2.txt rename to doc/source/whatsnew/v0.23.2.rst index 7ec6e2632e717..573a30f17846b 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.rst @@ -1,7 +1,9 @@ .. _whatsnew_0232: -v0.23.2 (July 5, 2018) ----------------------- +What's New in 0.23.2 (July 5, 2018) +----------------------------------- + +{{ common_imports }} This is a minor bug-fix release in the 0.23.x series and includes some small regression fixes and bug fixes. We recommend that all users upgrade to this version. @@ -101,8 +103,20 @@ Bug Fixes **Timezones** - Bug in :class:`Timestamp` and :class:`DatetimeIndex` where passing a :class:`Timestamp` localized after a DST transition would return a datetime before the DST transition (:issue:`20854`) -- Bug in comparing :class:`DataFrame`s with tz-aware :class:`DatetimeIndex` columns with a DST transition that raised a ``KeyError`` (:issue:`19970`) +- Bug in comparing :class:`DataFrame` with tz-aware :class:`DatetimeIndex` columns with a DST transition that raised a ``KeyError`` (:issue:`19970`) +- Bug in :meth:`DatetimeIndex.shift` where an ``AssertionError`` would raise when shifting across DST (:issue:`8616`) +- Bug in :class:`Timestamp` constructor where passing an invalid timezone offset designator (``Z``) would not raise a ``ValueError`` (:issue:`8910`) +- Bug in :meth:`Timestamp.replace` where replacing at a DST boundary would retain an incorrect offset (:issue:`7825`) +- Bug in :meth:`DatetimeIndex.reindex` when reindexing a tz-naive and tz-aware 
:class:`DatetimeIndex` (:issue:`8306`) +- Bug in :meth:`DatetimeIndex.resample` when downsampling across a DST boundary (:issue:`8531`) **Timedelta** - Bug in :class:`Timedelta` where non-zero timedeltas shorter than 1 microsecond were considered False (:issue:`21484`) + +.. _whatsnew_0.23.2.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.23.1..v0.23.2 diff --git a/doc/source/whatsnew/v0.23.3.rst b/doc/source/whatsnew/v0.23.3.rst new file mode 100644 index 0000000000000..29758e54b437b --- /dev/null +++ b/doc/source/whatsnew/v0.23.3.rst @@ -0,0 +1,16 @@ +.. _whatsnew_0233: + +What's New in 0.23.3 (July 7, 2018) +----------------------------------- + +{{ common_imports }} + +This release fixes a build issue with the sdist for Python 3.7 (:issue:`21785`) +There are no other changes. + +.. _whatsnew_0.23.3.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.23.2..v0.23.3 diff --git a/doc/source/whatsnew/v0.23.3.txt b/doc/source/whatsnew/v0.23.3.txt deleted file mode 100644 index b8adce27d2523..0000000000000 --- a/doc/source/whatsnew/v0.23.3.txt +++ /dev/null @@ -1,7 +0,0 @@ -.. _whatsnew_0233: - -v0.23.3 (July 7, 2018) ----------------------- - -This release fixes a build issue with the sdist for Python 3.7 (:issue:`21785`) -There are no other changes. diff --git a/doc/source/whatsnew/v0.23.4.txt b/doc/source/whatsnew/v0.23.4.rst similarity index 84% rename from doc/source/whatsnew/v0.23.4.txt rename to doc/source/whatsnew/v0.23.4.rst index 9a3ad3f61ee49..c8f08d0bb7091 100644 --- a/doc/source/whatsnew/v0.23.4.txt +++ b/doc/source/whatsnew/v0.23.4.rst @@ -1,7 +1,9 @@ .. _whatsnew_0234: -v0.23.4 (August 3, 2018) ------------------------- +What's New in 0.23.4 (August 3, 2018) +------------------------------------- + +{{ common_imports }} This is a minor bug-fix release in the 0.23.x series and includes some small regression fixes and bug fixes. We recommend that all users upgrade to this version. 
@@ -35,3 +37,10 @@ Bug Fixes **Missing** - Bug in :func:`Series.clip` and :func:`DataFrame.clip` cannot accept list-like threshold containing ``NaN`` (:issue:`19992`) + +.. _whatsnew_0.23.4.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.23.3..v0.23.4 diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.rst similarity index 99% rename from doc/source/whatsnew/v0.24.0.txt rename to doc/source/whatsnew/v0.24.0.rst index 3057e3f700eab..44c467795d1ed 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.rst @@ -1,13 +1,18 @@ .. _whatsnew_0240: -v0.24.0 (Month XX, 2018) ------------------------- +What's New in 0.24.0 (Month XX, 2018) +------------------------------------- .. warning:: Starting January 1, 2019, pandas feature releases will support Python 3 only. See :ref:`install.dropping-27` for more. +{{ common_imports }} + +These are the changes in pandas 0.24.0. See :ref:`release` for a full changelog +including other versions of pandas. + .. _whatsnew_0240.enhancements: New features @@ -205,6 +210,7 @@ See the :ref:`advanced docs on renaming<advanced.index_names>` for more details. Other Enhancements ^^^^^^^^^^^^^^^^^^ + - :func:`to_datetime` now supports the ``%Z`` and ``%z`` directive when passed into ``format`` (:issue:`13486`) - :func:`Series.mode` and :func:`DataFrame.mode` now support the ``dropna`` parameter which can be used to specify whether ``NaN``/``NaT`` values should be considered (:issue:`17534`) - :func:`to_csv` now supports ``compression`` keyword when a file handle is passed. 
(:issue:`21227`) @@ -1175,7 +1181,7 @@ Timezones - Bug in :class:`DatetimeIndex` comparisons failing to raise ``TypeError`` when comparing timezone-aware ``DatetimeIndex`` against ``np.datetime64`` (:issue:`22074`) - Bug in ``DataFrame`` assignment with a timezone-aware scalar (:issue:`19843`) - Bug in :func:`DataFrame.asof` that raised a ``TypeError`` when attempting to compare tz-naive and tz-aware timestamps (:issue:`21194`) -- Bug when constructing a :class:`DatetimeIndex` with :class:`Timestamp`s constructed with the ``replace`` method across DST (:issue:`18785`) +- Bug when constructing a :class:`DatetimeIndex` with :class:`Timestamp` constructed with the ``replace`` method across DST (:issue:`18785`) - Bug when setting a new value with :meth:`DataFrame.loc` with a :class:`DatetimeIndex` with a DST transition (:issue:`18308`, :issue:`20724`) - Bug in :meth:`DatetimeIndex.unique` that did not re-localize tz-aware dates correctly (:issue:`21737`) - Bug when indexing a :class:`Series` with a DST transition (:issue:`21846`) @@ -1260,7 +1266,7 @@ MultiIndex ^^^^^^^^^^ - Removed compatibility for :class:`MultiIndex` pickles prior to version 0.8.0; compatibility with :class:`MultiIndex` pickles from version 0.13 forward is maintained (:issue:`21654`) -- :meth:`MultiIndex.get_loc_level` (and as a consequence, ``.loc`` on a :class:`MultiIndex`ed object) will now raise a ``KeyError``, rather than returning an empty ``slice``, if asked a label which is present in the ``levels`` but is unused (:issue:`22221`) +- :meth:`MultiIndex.get_loc_level` (and as a consequence, ``.loc`` on a ``Series`` or ``DataFrame`` with a :class:`MultiIndex` index) will now raise a ``KeyError``, rather than returning an empty ``slice``, if asked a label which is present in the ``levels`` but is unused (:issue:`22221`) - Fix ``TypeError`` in Python 3 when creating :class:`MultiIndex` in which some levels have mixed types, e.g. 
when some labels are tuples (:issue:`15457`) I/O @@ -1363,9 +1369,9 @@ Reshaping - Bug in :func:`pandas.wide_to_long` when a string is passed to the stubnames argument and a column name is a substring of that stubname (:issue:`22468`) - Bug in :func:`merge` when merging ``datetime64[ns, tz]`` data that contained a DST transition (:issue:`18885`) - Bug in :func:`merge_asof` when merging on float values within defined tolerance (:issue:`22981`) -- Bug in :func:`pandas.concat` when concatenating a multicolumn DataFrame with tz-aware data against a DataFrame with a different number of columns (:issue`22796`) +- Bug in :func:`pandas.concat` when concatenating a multicolumn DataFrame with tz-aware data against a DataFrame with a different number of columns (:issue:`22796`) - Bug in :func:`merge_asof` where confusing error message raised when attempting to merge with missing values (:issue:`23189`) -- Bug in :meth:`DataFrame.nsmallest` and :meth:`DataFrame.nlargest` for dataframes that have :class:`MultiIndex`ed columns (:issue:`23033`). +- Bug in :meth:`DataFrame.nsmallest` and :meth:`DataFrame.nlargest` for dataframes that have a :class:`MultiIndex` for columns (:issue:`23033`). .. _whatsnew_0240.bug_fixes.sparse: @@ -1398,3 +1404,10 @@ Other - :meth:`~pandas.io.formats.style.Styler.bar` now also supports tablewise application (in addition to rowwise and columnwise) with ``axis=None`` and setting clipping range with ``vmin`` and ``vmax`` (:issue:`21548` and :issue:`21526`). ``NaN`` values are also handled properly. - Logical operations ``&, |, ^`` between :class:`Series` and :class:`Index` will no longer raise ``ValueError`` (:issue:`22092`) - Bug in :meth:`DataFrame.combine_first` in which column types were unexpectedly converted to float (:issue:`20699`) + +.. _whatsnew_0.24.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.23.4..HEAD diff --git a/doc/source/whatsnew/v0.4.x.txt b/doc/source/whatsnew/v0.4.x.rst similarity index 97% rename from doc/source/whatsnew/v0.4.x.txt rename to doc/source/whatsnew/v0.4.x.rst index ed9352059a6dc..e54614849c93b 100644 --- a/doc/source/whatsnew/v0.4.x.txt +++ b/doc/source/whatsnew/v0.4.x.rst @@ -3,6 +3,8 @@ v.0.4.3 through v0.4.1 (September 25 - October 9, 2011) ------------------------------------------------------- +{{ common_imports }} + New Features ~~~~~~~~~~~~ @@ -61,3 +63,7 @@ Performance Enhancements .. _ENHed: https://github.com/pandas-dev/pandas/commit/edd9f1945fc010a57fa0ae3b3444d1fffe592591 .. _ENH56: https://github.com/pandas-dev/pandas/commit/56e0c9ffafac79ce262b55a6a13e1b10a88fbe93 +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.4.1..v0.4.3 diff --git a/doc/source/whatsnew/v0.5.0.txt b/doc/source/whatsnew/v0.5.0.rst similarity index 96% rename from doc/source/whatsnew/v0.5.0.txt rename to doc/source/whatsnew/v0.5.0.rst index 6fe6a02b08f70..c6d17cb1e1290 100644 --- a/doc/source/whatsnew/v0.5.0.txt +++ b/doc/source/whatsnew/v0.5.0.rst @@ -4,6 +4,8 @@ v.0.5.0 (October 24, 2011) -------------------------- +{{ common_imports }} + New Features ~~~~~~~~~~~~ @@ -41,3 +43,11 @@ Performance Enhancements .. _ENH61: https://github.com/pandas-dev/pandas/commit/6141961 .. _ENH5c: https://github.com/pandas-dev/pandas/commit/5ca6ff5d822ee4ddef1ec0d87b6d83d8b4bbd3eb + + +.. _whatsnew_0.5.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.4.0..v0.5.0 diff --git a/doc/source/whatsnew/v0.6.0.txt b/doc/source/whatsnew/v0.6.0.rst similarity index 97% rename from doc/source/whatsnew/v0.6.0.txt rename to doc/source/whatsnew/v0.6.0.rst index bd01dd0a90a59..de45b3b383129 100644 --- a/doc/source/whatsnew/v0.6.0.txt +++ b/doc/source/whatsnew/v0.6.0.rst @@ -3,6 +3,8 @@ v.0.6.0 (November 25, 2011) --------------------------- +{{ common_imports }} + New Features ~~~~~~~~~~~~ - :ref:`Added <reshaping.melt>` ``melt`` function to ``pandas.core.reshape`` @@ -54,3 +56,11 @@ Performance Enhancements - VBENCH Significantly improved performance of ``Series.order``, which also makes np.unique called on a Series faster (:issue:`327`) - VBENCH Vastly improved performance of GroupBy on axes with a MultiIndex (:issue:`299`) + + +.. _whatsnew_0.6.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.5.0..v0.6.0 diff --git a/doc/source/whatsnew/v0.6.1.txt b/doc/source/whatsnew/v0.6.1.rst similarity index 96% rename from doc/source/whatsnew/v0.6.1.txt rename to doc/source/whatsnew/v0.6.1.rst index acd5b0774f2bb..d01757775d694 100644 --- a/doc/source/whatsnew/v0.6.1.txt +++ b/doc/source/whatsnew/v0.6.1.rst @@ -48,3 +48,11 @@ Performance improvements - Column deletion in DataFrame copies no data (computes views on blocks) (GH #158) + + +.. _whatsnew_0.6.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.6.0..v0.6.1 diff --git a/doc/source/whatsnew/v0.7.0.txt b/doc/source/whatsnew/v0.7.0.rst similarity index 98% rename from doc/source/whatsnew/v0.7.0.txt rename to doc/source/whatsnew/v0.7.0.rst index 21d91950e7b78..e278bc0738108 100644 --- a/doc/source/whatsnew/v0.7.0.txt +++ b/doc/source/whatsnew/v0.7.0.rst @@ -3,6 +3,8 @@ v.0.7.0 (February 9, 2012) -------------------------- +{{ common_imports }} + New features ~~~~~~~~~~~~ @@ -298,3 +300,11 @@ Performance improvements ``level`` parameter passed (:issue:`545`) - Ported skiplist data structure to C to speed up ``rolling_median`` by about 5-10x in most typical use cases (:issue:`374`) + + +.. _whatsnew_0.7.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.6.1..v0.7.0 diff --git a/doc/source/whatsnew/v0.7.1.txt b/doc/source/whatsnew/v0.7.1.rst similarity index 90% rename from doc/source/whatsnew/v0.7.1.txt rename to doc/source/whatsnew/v0.7.1.rst index bc12cb8d200cd..f1a133797fd59 100644 --- a/doc/source/whatsnew/v0.7.1.txt +++ b/doc/source/whatsnew/v0.7.1.rst @@ -3,6 +3,8 @@ v.0.7.1 (February 29, 2012) --------------------------- +{{ common_imports }} + This release includes a few new features and addresses over a dozen bugs in 0.7.0. @@ -28,3 +30,11 @@ Performance improvements - Improve performance and memory usage of fillna on DataFrame - Can concatenate a list of Series along axis=1 to obtain a DataFrame (:issue:`787`) + + +.. _whatsnew_0.7.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.7.0..v0.7.1 diff --git a/doc/source/whatsnew/v0.7.2.txt b/doc/source/whatsnew/v0.7.2.rst similarity index 89% rename from doc/source/whatsnew/v0.7.2.txt rename to doc/source/whatsnew/v0.7.2.rst index c711639354139..b870db956f4f1 100644 --- a/doc/source/whatsnew/v0.7.2.txt +++ b/doc/source/whatsnew/v0.7.2.rst @@ -3,6 +3,8 @@ v.0.7.2 (March 16, 2012) --------------------------- +{{ common_imports }} + This release targets bugs in 0.7.1, and adds a few minor features. 
New features @@ -25,3 +27,11 @@ Performance improvements - Use khash for Series.value_counts, add raw function to algorithms.py (:issue:`861`) - Intercept __builtin__.sum in groupby (:issue:`885`) + + +.. _whatsnew_0.7.2.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.7.1..v0.7.2 diff --git a/doc/source/whatsnew/v0.7.3.txt b/doc/source/whatsnew/v0.7.3.rst similarity index 92% rename from doc/source/whatsnew/v0.7.3.txt rename to doc/source/whatsnew/v0.7.3.rst index 77cc72d8707cf..30e22f105656c 100644 --- a/doc/source/whatsnew/v0.7.3.txt +++ b/doc/source/whatsnew/v0.7.3.rst @@ -3,6 +3,8 @@ v.0.7.3 (April 12, 2012) ------------------------ +{{ common_imports }} + This is a minor release from 0.7.2 and fixes many minor bugs and adds a number of nice new features. There are also a couple of API changes to note; these should not affect very many users, and we are inclined to call them "bug fixes" @@ -22,7 +24,7 @@ New features from pandas.tools.plotting import scatter_matrix scatter_matrix(df, alpha=0.2) -.. image:: savefig/scatter_matrix_kde.png +.. image:: ../savefig/scatter_matrix_kde.png :width: 5in - Add ``stacked`` argument to Series and DataFrame's ``plot`` method for @@ -32,14 +34,14 @@ New features df.plot(kind='bar', stacked=True) -.. image:: savefig/bar_plot_stacked_ex.png +.. image:: ../savefig/bar_plot_stacked_ex.png :width: 4in .. code-block:: python df.plot(kind='barh', stacked=True) -.. image:: savefig/barh_plot_stacked_ex.png +.. image:: ../savefig/barh_plot_stacked_ex.png :width: 4in - Add log x and y :ref:`scaling options <visualization.basic>` to @@ -94,3 +96,11 @@ Series, to be more consistent with the ``groupby`` behavior with DataFrame: grouped = df.groupby('A')['C'] grouped.describe() grouped.apply(lambda x: x.sort_values()[-2:]) # top 2 values + + +.. _whatsnew_0.7.3.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.7.2..v0.7.3 diff --git a/doc/source/whatsnew/v0.8.0.txt b/doc/source/whatsnew/v0.8.0.rst similarity index 99% rename from doc/source/whatsnew/v0.8.0.txt rename to doc/source/whatsnew/v0.8.0.rst index 28c043e772605..eedaaa3dfa8bd 100644 --- a/doc/source/whatsnew/v0.8.0.txt +++ b/doc/source/whatsnew/v0.8.0.rst @@ -3,6 +3,8 @@ v0.8.0 (June 29, 2012) ------------------------ +{{ common_imports }} + This is a major release from 0.7.3 and includes extensive work on the time series handling and processing infrastructure as well as a great deal of new functionality throughout the library. It includes over 700 commits from more @@ -269,3 +271,11 @@ unique. In many cases it will no longer fail (some method like ``append`` still check for uniqueness unless disabled). However, all is not lost: you can inspect ``index.is_unique`` and raise an exception explicitly if it is ``False`` or go to a different code branch. + + +.. _whatsnew_0.8.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.7.3..v0.8.0 diff --git a/doc/source/whatsnew/v0.8.1.txt b/doc/source/whatsnew/v0.8.1.rst similarity index 93% rename from doc/source/whatsnew/v0.8.1.txt rename to doc/source/whatsnew/v0.8.1.rst index add96bec9d1dd..468b99341163c 100644 --- a/doc/source/whatsnew/v0.8.1.txt +++ b/doc/source/whatsnew/v0.8.1.rst @@ -3,6 +3,8 @@ v0.8.1 (July 22, 2012) ---------------------- +{{ common_imports }} + This release includes a few new features, performance enhancements, and over 30 bug fixes from 0.8.0. New features include notably NA friendly string processing functionality and a series of new plot types and options. @@ -34,3 +36,11 @@ Performance improvements Categorical types - Significant datetime parsing performance improvements + + +.. _whatsnew_0.8.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.8.0..v0.8.1 diff --git a/doc/source/whatsnew/v0.9.0.txt b/doc/source/whatsnew/v0.9.0.rst similarity index 96% rename from doc/source/whatsnew/v0.9.0.txt rename to doc/source/whatsnew/v0.9.0.rst index b60fb9cc64f4a..ee4e8c338c984 100644 --- a/doc/source/whatsnew/v0.9.0.txt +++ b/doc/source/whatsnew/v0.9.0.rst @@ -1,9 +1,6 @@ .. _whatsnew_0900: -.. ipython:: python - :suppress: - - from pandas.compat import StringIO +{{ common_imports }} v0.9.0 (October 7, 2012) ------------------------ @@ -95,3 +92,11 @@ See the :ref:`full release notes <release>` or issue tracker on GitHub for a complete list. + + +.. _whatsnew_0.9.0.contributors: + +Contributors +~~~~~~~~~~~~ + +.. contributors:: v0.8.1..v0.9.0 diff --git a/doc/source/whatsnew/v0.9.1.txt b/doc/source/whatsnew/v0.9.1.rst similarity index 97% rename from doc/source/whatsnew/v0.9.1.txt rename to doc/source/whatsnew/v0.9.1.rst index 1f58170b30244..fe3de9be95a74 100644 --- a/doc/source/whatsnew/v0.9.1.txt +++ b/doc/source/whatsnew/v0.9.1.rst @@ -1,13 +1,10 @@ .. _whatsnew_0901: -.. ipython:: python - :suppress: - - from pandas.compat import StringIO - v0.9.1 (November 14, 2012) -------------------------- +{{ common_imports }} + This is a bug fix release from 0.9.0 and includes several new features and enhancements along with a large number of bug fixes. The new features include by-column sort order for DataFrame and Series, improved NA handling for the rank @@ -158,3 +155,11 @@ API changes See the :ref:`full release notes <release>` or issue tracker on GitHub for a complete list. + + +.. _whatsnew_0.9.1.contributors: + +Contributors +~~~~~~~~~~~~ + +.. 
contributors:: v0.9.0..v0.9.1 diff --git a/scripts/announce.py b/doc/sphinxext/announce.py similarity index 75% rename from scripts/announce.py rename to doc/sphinxext/announce.py index 7b7933eba54dd..6bc53d3e96d01 100755 --- a/scripts/announce.py +++ b/doc/sphinxext/announce.py @@ -33,19 +33,21 @@ $ ./scripts/announce.py $GITHUB v1.11.0..v1.11.1 > announce.rst """ -from __future__ import print_function, division +from __future__ import division, print_function +import codecs import os import re -import codecs +import textwrap + from git import Repo UTF8Writer = codecs.getwriter('utf8') -this_repo = Repo(os.path.join(os.path.dirname(__file__), "..")) +this_repo = Repo(os.path.join(os.path.dirname(__file__), "..", "..")) author_msg = """\ -A total of %d people contributed to this release. People with a "+" by their -names contributed a patch for the first time. +A total of %d people contributed patches to this release. People with a +"+" by their names contributed a patch for the first time. 
""" pull_request_msg = """\ @@ -98,19 +100,35 @@ def get_pull_requests(repo, revision_range): return prs -def main(revision_range, repo): +def build_components(revision_range, heading="Contributors"): lst_release, cur_release = [r.strip() for r in revision_range.split('..')] - - # document authors authors = get_authors(revision_range) - heading = u"Contributors" - print() - print(heading) - print(u"=" * len(heading)) - print(author_msg % len(authors)) - for s in authors: - print(u'* ' + s) + return { + 'heading': heading, + 'author_message': author_msg % len(authors), + 'authors': authors, + } + + +def build_string(revision_range, heading="Contributors"): + components = build_components(revision_range, heading=heading) + components['uline'] = '=' * len(components['heading']) + components['authors'] = "* " + "\n* ".join(components['authors']) + + tpl = textwrap.dedent("""\ + {heading} + {uline} + + {author_message} + {authors}""").format(**components) + return tpl + + +def main(revision_range): + # document authors + text = build_string(revision_range) + print(text) if __name__ == "__main__": @@ -118,7 +136,5 @@ def main(revision_range, repo): parser = ArgumentParser(description="Generate author lists for release") parser.add_argument('revision_range', help='<revision>..<revision>') - parser.add_argument('--repo', help="Github org/repository", - default="pandas-dev/pandas") args = parser.parse_args() - main(args.revision_range, args.repo) + main(args.revision_range) diff --git a/doc/sphinxext/contributors.py b/doc/sphinxext/contributors.py new file mode 100644 index 0000000000000..0f04d47435699 --- /dev/null +++ b/doc/sphinxext/contributors.py @@ -0,0 +1,40 @@ +"""Sphinx extension for listing code contributors to a release. + +Usage:: + + .. contributors:: v0.23.0..v0.23.1 + +This will be replaced with a message indicating the number of +code contributors and commits, and then list each contributor +individually. 
+""" +from docutils import nodes +from docutils.parsers.rst import Directive + +from announce import build_components + + +class ContributorsDirective(Directive): + required_arguments = 1 + name = 'contributors' + + def run(self): + components = build_components(self.arguments[0]) + + message = nodes.paragraph() + message += nodes.Text(components['author_message']) + + listnode = nodes.bullet_list() + + for author in components['authors']: + para = nodes.paragraph() + para += nodes.Text(author) + listnode += nodes.list_item('', para) + + return [message, listnode] + + +def setup(app): + app.add_directive('contributors', ContributorsDirective) + + return {'version': '0.1'} diff --git a/environment.yml b/environment.yml index 742b974566577..fc35f1290f1b1 100644 --- a/environment.yml +++ b/environment.yml @@ -14,6 +14,7 @@ dependencies: - flake8 - flake8-comprehensions - flake8-rst=0.4.2 + - gitpython - hypothesis>=3.58.0 - isort - moto diff --git a/requirements-dev.txt b/requirements-dev.txt index 9acfe243d22fb..6678d205aca6c 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -5,6 +5,7 @@ Cython>=0.28.2 flake8 flake8-comprehensions flake8-rst==0.4.2 +gitpython hypothesis>=3.58.0 isort moto diff --git a/setup.cfg b/setup.cfg index 9f5384170a245..7212833435997 100644 --- a/setup.cfg +++ b/setup.cfg @@ -34,7 +34,22 @@ exclude = ignore = F821, # undefined name W391, # blank line at end of file [Seems to be a bug (v0.4.1)] - +exclude = + doc/source/whatsnew/v0.7.0.rst + doc/source/whatsnew/v0.10.1.rst + doc/source/whatsnew/v0.12.0.rst + doc/source/whatsnew/v0.13.0.rst + doc/source/whatsnew/v0.13.1.rst + doc/source/whatsnew/v0.14.0.rst + doc/source/whatsnew/v0.15.0.rst + doc/source/whatsnew/v0.16.0.rst + doc/source/whatsnew/v0.16.2.rst + doc/source/whatsnew/v0.17.0.rst + doc/source/whatsnew/v0.18.0.rst + doc/source/whatsnew/v0.18.1.rst + doc/source/whatsnew/v0.20.0.rst + doc/source/whatsnew/v0.21.0.rst + doc/source/whatsnew/v0.23.0.rst [yapf] based_on_style = 
pep8 @@ -405,3 +420,4 @@ skip= pandas/types/common.py, pandas/plotting/_compat.py, pandas/tests/extension/arrow/test_bool.py + doc/source/conf.py
Some cleanup & changes to facilitate release automation * We will include the correct (latest on master or maintenance branch) whatsnew directly in the index.rst toctree * Contributors are included in the whatsnew for each version (automatically) * Removed release.rst * Added a new releases.rst which has toctrees for each release ![screen shot 2018-06-22 at 3 10 23 pm](https://user-images.githubusercontent.com/1312546/41797116-6bfe43aa-762e-11e8-840f-9639faf14e8b.png) Incidental changes * Updated style.ipynb. Writing the jinja template was confusing Sphinx; we now include it in the git source. * Fixing some inconsistent header levels (will do more) * Refactored announce.py to support auto-generated contributors TODO: - [x] Finish up the rest of the whatsnews cc @jorisvandenbossche, @datapythonista
https://api.github.com/repos/pandas-dev/pandas/pulls/21599
2018-06-22T20:12:12Z
2018-11-14T21:09:47Z
2018-11-14T21:09:47Z
2018-11-15T13:14:00Z
TST: Add interval closed fixture to top-level conftest
diff --git a/pandas/conftest.py b/pandas/conftest.py index 9d806a91f37f7..d6b18db4e71f2 100644 --- a/pandas/conftest.py +++ b/pandas/conftest.py @@ -137,6 +137,14 @@ def nselect_method(request): return request.param +@pytest.fixture(params=['left', 'right', 'both', 'neither']) +def closed(request): + """ + Fixture for trying all interval closed parameters + """ + return request.param + + @pytest.fixture(params=[None, np.nan, pd.NaT, float('nan'), np.float('NaN')]) def nulls_fixture(request): """ diff --git a/pandas/tests/indexes/interval/test_construction.py b/pandas/tests/indexes/interval/test_construction.py index b1711c3444586..ac946a3421e53 100644 --- a/pandas/tests/indexes/interval/test_construction.py +++ b/pandas/tests/indexes/interval/test_construction.py @@ -14,11 +14,6 @@ import pandas.util.testing as tm -@pytest.fixture(params=['left', 'right', 'both', 'neither']) -def closed(request): - return request.param - - @pytest.fixture(params=[None, 'foo']) def name(request): return request.param diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py index 9920809a18a24..6a7330f8cfb68 100644 --- a/pandas/tests/indexes/interval/test_interval.py +++ b/pandas/tests/indexes/interval/test_interval.py @@ -12,11 +12,6 @@ import pandas as pd -@pytest.fixture(scope='class', params=['left', 'right', 'both', 'neither']) -def closed(request): - return request.param - - @pytest.fixture(scope='class', params=[None, 'foo']) def name(request): return request.param diff --git a/pandas/tests/indexes/interval/test_interval_range.py b/pandas/tests/indexes/interval/test_interval_range.py index 29fe2b0185662..447856e7e9d51 100644 --- a/pandas/tests/indexes/interval/test_interval_range.py +++ b/pandas/tests/indexes/interval/test_interval_range.py @@ -11,11 +11,6 @@ import pandas.util.testing as tm -@pytest.fixture(scope='class', params=['left', 'right', 'both', 'neither']) -def closed(request): - return request.param - - 
@pytest.fixture(scope='class', params=[None, 'foo']) def name(request): return request.param diff --git a/pandas/tests/indexes/interval/test_interval_tree.py b/pandas/tests/indexes/interval/test_interval_tree.py index 056d3e1087a2e..5f248bf7725e5 100644 --- a/pandas/tests/indexes/interval/test_interval_tree.py +++ b/pandas/tests/indexes/interval/test_interval_tree.py @@ -7,11 +7,6 @@ import pandas.util.testing as tm -@pytest.fixture(scope='class', params=['left', 'right', 'both', 'neither']) -def closed(request): - return request.param - - @pytest.fixture( scope='class', params=['int32', 'int64', 'float32', 'float64', 'uint64']) def dtype(request): diff --git a/pandas/tests/indexing/interval/test_interval.py b/pandas/tests/indexing/interval/test_interval.py index 233fbd2c8d7be..f2f59159032a2 100644 --- a/pandas/tests/indexing/interval/test_interval.py +++ b/pandas/tests/indexing/interval/test_interval.py @@ -3,7 +3,6 @@ import pandas as pd from pandas import Series, DataFrame, IntervalIndex, Interval -from pandas.compat import product import pandas.util.testing as tm @@ -51,9 +50,7 @@ def test_getitem_with_scalar(self): tm.assert_series_equal(expected, s[s >= 2]) # TODO: check this behavior is consistent with test_interval_new.py - @pytest.mark.parametrize('direction, closed', - product(('increasing', 'decreasing'), - ('left', 'right', 'neither', 'both'))) + @pytest.mark.parametrize('direction', ['increasing', 'decreasing']) def test_nonoverlapping_monotonic(self, direction, closed): tpls = [(0, 1), (2, 3), (4, 5)] if direction == 'decreasing':
Noticed identical copies of this fixture in multiple places, so it seemed reasonable to dedupe them and move the fixture to the top-level `conftest.py`. I think I removed all instances where such a fixture/parametrize was being used, but I could have missed one. I imagine this fixture will be used in more places in the future as interval-related tests are expanded.
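The dedupe relies on standard pytest behavior: a `params` fixture defined in a shared `conftest.py` is injected by name into any test module below it, so the per-module copies can simply be deleted. A minimal sketch of the pattern (the test function here is illustrative, not taken from the PR):

```python
import pytest

# All four ways an Interval can be closed; defining this once at the top
# level replaces the per-module copies removed in the diff above.
CLOSED_PARAMS = ['left', 'right', 'both', 'neither']


@pytest.fixture(params=CLOSED_PARAMS)
def closed(request):
    """
    Fixture for trying all interval closed parameters
    """
    return request.param


# Any test that names `closed` as an argument is collected once per
# parameter, with no fixture definition needed in the test module itself:
def test_closed_is_valid(closed):
    assert closed in CLOSED_PARAMS
```

One subtlety the diff also smooths over: the per-module copies had inconsistent scopes (some were `scope='class'`), while the shared fixture uses the default function scope, which is the safer choice for a value-only fixture.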
https://api.github.com/repos/pandas-dev/pandas/pulls/21595
2018-06-22T14:18:40Z
2018-06-22T22:45:27Z
2018-06-22T22:45:27Z
2018-09-24T17:22:53Z
PERF: do not check for label presence preventively
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 90fc579ae69e5..a63276efc5b7c 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -17,7 +17,7 @@ Other Enhancements - :func:`to_datetime` now supports the ``%Z`` and ``%z`` directive when passed into ``format`` (:issue:`13486`) - :func:`Series.mode` and :func:`DataFrame.mode` now support the ``dropna`` parameter which can be used to specify whether NaN/NaT values should be considered (:issue:`17534`) - :func:`to_csv` now supports ``compression`` keyword when a file handle is passed. (:issue:`21227`) -- :meth:`Index.droplevel` is now implemented also for flat indexes, for compatibility with MultiIndex (:issue:`21115`) +- :meth:`Index.droplevel` is now implemented also for flat indexes, for compatibility with :class:`MultiIndex` (:issue:`21115`) .. _whatsnew_0240.api_breaking: @@ -199,6 +199,7 @@ Indexing ^^^^^^^^ - The traceback from a ``KeyError`` when asking ``.loc`` for a single missing label is now shorter and more clear (:issue:`21557`) +- When ``.ix`` is asked for a missing integer label in a :class:`MultiIndex` with a first level of integer type, it now raises a ``KeyError`` - consistently with the case of a flat :class:`Int64Index` - rather than falling back to positional indexing (:issue:`21593`) - - diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 1f9fe5f947d0c..a69313a2d4a43 100755 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -13,7 +13,6 @@ is_iterator, is_scalar, is_sparse, - _is_unorderable_exception, _ensure_platform_int) from pandas.core.dtypes.missing import isna, _infer_fill_value from pandas.errors import AbstractMethodError @@ -139,10 +138,7 @@ def _get_label(self, label, axis=None): # as its basically direct indexing # but will fail when the index is not present # see GH5667 - try: - return self.obj._xs(label, axis=axis) - except: - return self.obj[label] + return self.obj._xs(label, 
axis=axis) elif isinstance(label, tuple) and isinstance(label[axis], slice): raise IndexingError('no slices here, handle elsewhere') @@ -1797,9 +1793,8 @@ class _LocIndexer(_LocationIndexer): @Appender(_NDFrameIndexer._validate_key.__doc__) def _validate_key(self, key, axis): - ax = self.obj._get_axis(axis) - # valid for a label where all labels are in the index + # valid for a collection of labels (we check their presence later) # slice of labels (where start-end in labels) # slice of integers (only if in the labels) # boolean @@ -1807,32 +1802,11 @@ def _validate_key(self, key, axis): if isinstance(key, slice): return - elif com.is_bool_indexer(key): + if com.is_bool_indexer(key): return - elif not is_list_like_indexer(key): - - def error(): - if isna(key): - raise TypeError("cannot use label indexing with a null " - "key") - raise KeyError(u"the label [{key}] is not in the [{axis}]" - .format(key=key, - axis=self.obj._get_axis_name(axis))) - - try: - key = self._convert_scalar_indexer(key, axis) - except TypeError as e: - - # python 3 type errors should be raised - if _is_unorderable_exception(e): - error() - raise - except: - error() - - if not ax.contains(key): - error() + if not is_list_like_indexer(key): + self._convert_scalar_indexer(key, axis) def _is_scalar_access(self, key): # this is a shortcut accessor to both .loc and .iloc diff --git a/pandas/tests/indexes/datetimes/test_partial_slicing.py b/pandas/tests/indexes/datetimes/test_partial_slicing.py index 4580d9fff31d5..e1e80e50e31f0 100644 --- a/pandas/tests/indexes/datetimes/test_partial_slicing.py +++ b/pandas/tests/indexes/datetimes/test_partial_slicing.py @@ -11,6 +11,8 @@ date_range, Index, Timedelta, Timestamp) from pandas.util import testing as tm +from pandas.core.indexing import IndexingError + class TestSlicing(object): def test_dti_slicing(self): @@ -313,12 +315,12 @@ def test_partial_slicing_with_multiindex(self): result = df_multi.loc[('2013-06-19 09:30:00', 'ACCT1', 'ABC')] 
tm.assert_series_equal(result, expected) - # this is a KeyError as we don't do partial string selection on - # multi-levels + # this is an IndexingError as we don't do partial string selection on + # multi-levels. def f(): df_multi.loc[('2013-06-19', 'ACCT1', 'ABC')] - pytest.raises(KeyError, f) + pytest.raises(IndexingError, f) # GH 4294 # partial slice on a series mi diff --git a/pandas/tests/indexing/test_multiindex.py b/pandas/tests/indexing/test_multiindex.py index 43656a392e582..d2c4c8f5e149b 100644 --- a/pandas/tests/indexing/test_multiindex.py +++ b/pandas/tests/indexing/test_multiindex.py @@ -230,7 +230,8 @@ def test_iloc_getitem_multiindex(self): # corner column rs = mi_int.iloc[2, 2] with catch_warnings(record=True): - xp = mi_int.ix[:, 2].ix[2] + # First level is int - so use .loc rather than .ix (GH 21593) + xp = mi_int.loc[(8, 12), (4, 10)] assert rs == xp # this is basically regular indexing @@ -278,6 +279,12 @@ def test_loc_multiindex(self): xp = mi_int.ix[4] tm.assert_frame_equal(rs, xp) + # missing label + pytest.raises(KeyError, lambda: mi_int.loc[2]) + with catch_warnings(record=True): + # GH 21593 + pytest.raises(KeyError, lambda: mi_int.ix[2]) + def test_getitem_partial_int(self): # GH 12416 # with single item
- [x] closes #21593 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry ASV run: ``` before after ratio [f1ffc5fa] [0af738b0] - 433±7μs 390±5μs 0.90 indexing.MultiIndexing.time_series_ix - 79.0±0.9μs 53.1±0.8μs 0.67 indexing.IntervalIndexing.time_loc_scalar - 66.3±0.8μs 42.7±2μs 0.64 indexing.NumericSeriesIndexing.time_loc_scalar(<class 'pandas.core.indexes.numeric.Int64Index'>) - 86.9±0.4μs 54.7±0.2μs 0.63 indexing.NumericSeriesIndexing.time_loc_scalar(<class 'pandas.core.indexes.numeric.Float64Index'>) SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY. ``` We no longer check whether errors raised in indexing have to do with ordering different types (in Python 3)... but I can't think of any possible case (and none is present in the tests) in which looking up a single label could create a sorting problem.
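The shape of the speedup is the classic LBYL-to-EAFP conversion: the old code validated label presence up front (and wrapped the lookup in a bare `try`/`except` fallback), so every *successful* lookup paid for two traversals. An illustrative sketch with a plain dict standing in for the index (not pandas internals):

```python
# Hypothetical names; the real change is in _NDFrameIndexer._get_label and
# _LocIndexer._validate_key in pandas/core/indexing.py.
def get_label_checked(index, label):
    # Old style: preventive check, duplicated work on every hit.
    if label not in index:
        raise KeyError(label)
    return index[label]


def get_label_eafp(index, label):
    # New style: just look it up; a missing label raises KeyError by itself.
    return index[label]


idx = {'a': 1, 'b': 2}
assert get_label_checked(idx, 'a') == get_label_eafp(idx, 'a') == 1

# Both styles surface the same error for a missing label, so callers that
# catch KeyError are unaffected:
for getter in (get_label_checked, get_label_eafp):
    try:
        getter(idx, 'z')
    except KeyError:
        pass
```

The EAFP version also changes *where* the error originates, which is why the MultiIndex `.ix` fallback and the partial-string-selection exception type shift in the tests above.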
https://api.github.com/repos/pandas-dev/pandas/pulls/21594
2018-06-22T10:37:22Z
2018-06-25T22:29:58Z
2018-06-25T22:29:57Z
2018-07-08T08:22:46Z
TST: Clean up tests in test_take.py
diff --git a/pandas/tests/test_take.py b/pandas/tests/test_take.py index 9ab147edb8d1b..ade847923c083 100644 --- a/pandas/tests/test_take.py +++ b/pandas/tests/test_take.py @@ -10,315 +10,268 @@ from pandas._libs.tslib import iNaT +@pytest.fixture(params=[True, False]) +def writeable(request): + return request.param + + +# Check that take_nd works both with writeable arrays +# (in which case fast typed memory-views implementation) +# and read-only arrays alike. +@pytest.fixture(params=[ + (np.float64, True), + (np.float32, True), + (np.uint64, False), + (np.uint32, False), + (np.uint16, False), + (np.uint8, False), + (np.int64, False), + (np.int32, False), + (np.int16, False), + (np.int8, False), + (np.object_, True), + (np.bool, False), +]) +def dtype_can_hold_na(request): + return request.param + + +@pytest.fixture(params=[ + (np.int8, np.int16(127), np.int8), + (np.int8, np.int16(128), np.int16), + (np.int32, 1, np.int32), + (np.int32, 2.0, np.float64), + (np.int32, 3.0 + 4.0j, np.complex128), + (np.int32, True, np.object_), + (np.int32, "", np.object_), + (np.float64, 1, np.float64), + (np.float64, 2.0, np.float64), + (np.float64, 3.0 + 4.0j, np.complex128), + (np.float64, True, np.object_), + (np.float64, "", np.object_), + (np.complex128, 1, np.complex128), + (np.complex128, 2.0, np.complex128), + (np.complex128, 3.0 + 4.0j, np.complex128), + (np.complex128, True, np.object_), + (np.complex128, "", np.object_), + (np.bool_, 1, np.object_), + (np.bool_, 2.0, np.object_), + (np.bool_, 3.0 + 4.0j, np.object_), + (np.bool_, True, np.bool_), + (np.bool_, '', np.object_), +]) +def dtype_fill_out_dtype(request): + return request.param + + class TestTake(object): - # standard incompatible fill error + # Standard incompatible fill error. 
fill_error = re.compile("Incompatible type for fill_value") - def test_1d_with_out(self): - def _test_dtype(dtype, can_hold_na, writeable=True): - data = np.random.randint(0, 2, 4).astype(dtype) - data.flags.writeable = writeable + def test_1d_with_out(self, dtype_can_hold_na, writeable): + dtype, can_hold_na = dtype_can_hold_na + + data = np.random.randint(0, 2, 4).astype(dtype) + data.flags.writeable = writeable + + indexer = [2, 1, 0, 1] + out = np.empty(4, dtype=dtype) + algos.take_1d(data, indexer, out=out) - indexer = [2, 1, 0, 1] - out = np.empty(4, dtype=dtype) + expected = data.take(indexer) + tm.assert_almost_equal(out, expected) + + indexer = [2, 1, 0, -1] + out = np.empty(4, dtype=dtype) + + if can_hold_na: algos.take_1d(data, indexer, out=out) expected = data.take(indexer) + expected[3] = np.nan tm.assert_almost_equal(out, expected) - - indexer = [2, 1, 0, -1] - out = np.empty(4, dtype=dtype) - if can_hold_na: + else: + with tm.assert_raises_regex(TypeError, self.fill_error): algos.take_1d(data, indexer, out=out) - expected = data.take(indexer) - expected[3] = np.nan - tm.assert_almost_equal(out, expected) - else: - with tm.assert_raises_regex(TypeError, self.fill_error): - algos.take_1d(data, indexer, out=out) - # no exception o/w - data.take(indexer, out=out) - - for writeable in [True, False]: - # Check that take_nd works both with writeable arrays (in which - # case fast typed memoryviews implementation) and read-only - # arrays alike. 
- _test_dtype(np.float64, True, writeable=writeable) - _test_dtype(np.float32, True, writeable=writeable) - _test_dtype(np.uint64, False, writeable=writeable) - _test_dtype(np.uint32, False, writeable=writeable) - _test_dtype(np.uint16, False, writeable=writeable) - _test_dtype(np.uint8, False, writeable=writeable) - _test_dtype(np.int64, False, writeable=writeable) - _test_dtype(np.int32, False, writeable=writeable) - _test_dtype(np.int16, False, writeable=writeable) - _test_dtype(np.int8, False, writeable=writeable) - _test_dtype(np.object_, True, writeable=writeable) - _test_dtype(np.bool, False, writeable=writeable) - - def test_1d_fill_nonna(self): - def _test_dtype(dtype, fill_value, out_dtype): - data = np.random.randint(0, 2, 4).astype(dtype) - - indexer = [2, 1, 0, -1] - - result = algos.take_1d(data, indexer, fill_value=fill_value) - assert ((result[[0, 1, 2]] == data[[2, 1, 0]]).all()) - assert (result[3] == fill_value) - assert (result.dtype == out_dtype) - - indexer = [2, 1, 0, 1] - - result = algos.take_1d(data, indexer, fill_value=fill_value) - assert ((result[[0, 1, 2, 3]] == data[indexer]).all()) - assert (result.dtype == dtype) - - _test_dtype(np.int8, np.int16(127), np.int8) - _test_dtype(np.int8, np.int16(128), np.int16) - _test_dtype(np.int32, 1, np.int32) - _test_dtype(np.int32, 2.0, np.float64) - _test_dtype(np.int32, 3.0 + 4.0j, np.complex128) - _test_dtype(np.int32, True, np.object_) - _test_dtype(np.int32, '', np.object_) - _test_dtype(np.float64, 1, np.float64) - _test_dtype(np.float64, 2.0, np.float64) - _test_dtype(np.float64, 3.0 + 4.0j, np.complex128) - _test_dtype(np.float64, True, np.object_) - _test_dtype(np.float64, '', np.object_) - _test_dtype(np.complex128, 1, np.complex128) - _test_dtype(np.complex128, 2.0, np.complex128) - _test_dtype(np.complex128, 3.0 + 4.0j, np.complex128) - _test_dtype(np.complex128, True, np.object_) - _test_dtype(np.complex128, '', np.object_) - _test_dtype(np.bool_, 1, np.object_) - 
_test_dtype(np.bool_, 2.0, np.object_) - _test_dtype(np.bool_, 3.0 + 4.0j, np.object_) - _test_dtype(np.bool_, True, np.bool_) - _test_dtype(np.bool_, '', np.object_) - - def test_2d_with_out(self): - def _test_dtype(dtype, can_hold_na, writeable=True): - data = np.random.randint(0, 2, (5, 3)).astype(dtype) - data.flags.writeable = writeable - - indexer = [2, 1, 0, 1] - out0 = np.empty((4, 3), dtype=dtype) - out1 = np.empty((5, 4), dtype=dtype) + + # No Exception otherwise. + data.take(indexer, out=out) + + def test_1d_fill_nonna(self, dtype_fill_out_dtype): + dtype, fill_value, out_dtype = dtype_fill_out_dtype + data = np.random.randint(0, 2, 4).astype(dtype) + indexer = [2, 1, 0, -1] + + result = algos.take_1d(data, indexer, fill_value=fill_value) + assert ((result[[0, 1, 2]] == data[[2, 1, 0]]).all()) + assert (result[3] == fill_value) + assert (result.dtype == out_dtype) + + indexer = [2, 1, 0, 1] + + result = algos.take_1d(data, indexer, fill_value=fill_value) + assert ((result[[0, 1, 2, 3]] == data[indexer]).all()) + assert (result.dtype == dtype) + + def test_2d_with_out(self, dtype_can_hold_na, writeable): + dtype, can_hold_na = dtype_can_hold_na + + data = np.random.randint(0, 2, (5, 3)).astype(dtype) + data.flags.writeable = writeable + + indexer = [2, 1, 0, 1] + out0 = np.empty((4, 3), dtype=dtype) + out1 = np.empty((5, 4), dtype=dtype) + algos.take_nd(data, indexer, out=out0, axis=0) + algos.take_nd(data, indexer, out=out1, axis=1) + + expected0 = data.take(indexer, axis=0) + expected1 = data.take(indexer, axis=1) + tm.assert_almost_equal(out0, expected0) + tm.assert_almost_equal(out1, expected1) + + indexer = [2, 1, 0, -1] + out0 = np.empty((4, 3), dtype=dtype) + out1 = np.empty((5, 4), dtype=dtype) + + if can_hold_na: algos.take_nd(data, indexer, out=out0, axis=0) algos.take_nd(data, indexer, out=out1, axis=1) + expected0 = data.take(indexer, axis=0) expected1 = data.take(indexer, axis=1) + expected0[3, :] = np.nan + expected1[:, 3] = np.nan + 
tm.assert_almost_equal(out0, expected0) tm.assert_almost_equal(out1, expected1) - - indexer = [2, 1, 0, -1] - out0 = np.empty((4, 3), dtype=dtype) - out1 = np.empty((5, 4), dtype=dtype) - if can_hold_na: - algos.take_nd(data, indexer, out=out0, axis=0) - algos.take_nd(data, indexer, out=out1, axis=1) - expected0 = data.take(indexer, axis=0) - expected1 = data.take(indexer, axis=1) - expected0[3, :] = np.nan - expected1[:, 3] = np.nan - tm.assert_almost_equal(out0, expected0) - tm.assert_almost_equal(out1, expected1) - else: - for i, out in enumerate([out0, out1]): - with tm.assert_raises_regex(TypeError, - self.fill_error): - algos.take_nd(data, indexer, out=out, axis=i) - # no exception o/w - data.take(indexer, out=out, axis=i) - - for writeable in [True, False]: - # Check that take_nd works both with writeable arrays (in which - # case fast typed memoryviews implementation) and read-only - # arrays alike. - _test_dtype(np.float64, True, writeable=writeable) - _test_dtype(np.float32, True, writeable=writeable) - _test_dtype(np.uint64, False, writeable=writeable) - _test_dtype(np.uint32, False, writeable=writeable) - _test_dtype(np.uint16, False, writeable=writeable) - _test_dtype(np.uint8, False, writeable=writeable) - _test_dtype(np.int64, False, writeable=writeable) - _test_dtype(np.int32, False, writeable=writeable) - _test_dtype(np.int16, False, writeable=writeable) - _test_dtype(np.int8, False, writeable=writeable) - _test_dtype(np.object_, True, writeable=writeable) - _test_dtype(np.bool, False, writeable=writeable) - - def test_2d_fill_nonna(self): - def _test_dtype(dtype, fill_value, out_dtype): - data = np.random.randint(0, 2, (5, 3)).astype(dtype) - - indexer = [2, 1, 0, -1] - - result = algos.take_nd(data, indexer, axis=0, - fill_value=fill_value) - assert ((result[[0, 1, 2], :] == data[[2, 1, 0], :]).all()) - assert ((result[3, :] == fill_value).all()) - assert (result.dtype == out_dtype) - - result = algos.take_nd(data, indexer, axis=1, - 
fill_value=fill_value) - assert ((result[:, [0, 1, 2]] == data[:, [2, 1, 0]]).all()) - assert ((result[:, 3] == fill_value).all()) - assert (result.dtype == out_dtype) - - indexer = [2, 1, 0, 1] - - result = algos.take_nd(data, indexer, axis=0, - fill_value=fill_value) - assert ((result[[0, 1, 2, 3], :] == data[indexer, :]).all()) - assert (result.dtype == dtype) - - result = algos.take_nd(data, indexer, axis=1, - fill_value=fill_value) - assert ((result[:, [0, 1, 2, 3]] == data[:, indexer]).all()) - assert (result.dtype == dtype) - - _test_dtype(np.int8, np.int16(127), np.int8) - _test_dtype(np.int8, np.int16(128), np.int16) - _test_dtype(np.int32, 1, np.int32) - _test_dtype(np.int32, 2.0, np.float64) - _test_dtype(np.int32, 3.0 + 4.0j, np.complex128) - _test_dtype(np.int32, True, np.object_) - _test_dtype(np.int32, '', np.object_) - _test_dtype(np.float64, 1, np.float64) - _test_dtype(np.float64, 2.0, np.float64) - _test_dtype(np.float64, 3.0 + 4.0j, np.complex128) - _test_dtype(np.float64, True, np.object_) - _test_dtype(np.float64, '', np.object_) - _test_dtype(np.complex128, 1, np.complex128) - _test_dtype(np.complex128, 2.0, np.complex128) - _test_dtype(np.complex128, 3.0 + 4.0j, np.complex128) - _test_dtype(np.complex128, True, np.object_) - _test_dtype(np.complex128, '', np.object_) - _test_dtype(np.bool_, 1, np.object_) - _test_dtype(np.bool_, 2.0, np.object_) - _test_dtype(np.bool_, 3.0 + 4.0j, np.object_) - _test_dtype(np.bool_, True, np.bool_) - _test_dtype(np.bool_, '', np.object_) - - def test_3d_with_out(self): - def _test_dtype(dtype, can_hold_na): - data = np.random.randint(0, 2, (5, 4, 3)).astype(dtype) - - indexer = [2, 1, 0, 1] - out0 = np.empty((4, 4, 3), dtype=dtype) - out1 = np.empty((5, 4, 3), dtype=dtype) - out2 = np.empty((5, 4, 4), dtype=dtype) + else: + for i, out in enumerate([out0, out1]): + with tm.assert_raises_regex(TypeError, + self.fill_error): + algos.take_nd(data, indexer, out=out, axis=i) + + # No Exception otherwise. 
+ data.take(indexer, out=out, axis=i) + + def test_2d_fill_nonna(self, dtype_fill_out_dtype): + dtype, fill_value, out_dtype = dtype_fill_out_dtype + data = np.random.randint(0, 2, (5, 3)).astype(dtype) + indexer = [2, 1, 0, -1] + + result = algos.take_nd(data, indexer, axis=0, + fill_value=fill_value) + assert ((result[[0, 1, 2], :] == data[[2, 1, 0], :]).all()) + assert ((result[3, :] == fill_value).all()) + assert (result.dtype == out_dtype) + + result = algos.take_nd(data, indexer, axis=1, + fill_value=fill_value) + assert ((result[:, [0, 1, 2]] == data[:, [2, 1, 0]]).all()) + assert ((result[:, 3] == fill_value).all()) + assert (result.dtype == out_dtype) + + indexer = [2, 1, 0, 1] + result = algos.take_nd(data, indexer, axis=0, + fill_value=fill_value) + assert ((result[[0, 1, 2, 3], :] == data[indexer, :]).all()) + assert (result.dtype == dtype) + + result = algos.take_nd(data, indexer, axis=1, + fill_value=fill_value) + assert ((result[:, [0, 1, 2, 3]] == data[:, indexer]).all()) + assert (result.dtype == dtype) + + def test_3d_with_out(self, dtype_can_hold_na): + dtype, can_hold_na = dtype_can_hold_na + + data = np.random.randint(0, 2, (5, 4, 3)).astype(dtype) + indexer = [2, 1, 0, 1] + + out0 = np.empty((4, 4, 3), dtype=dtype) + out1 = np.empty((5, 4, 3), dtype=dtype) + out2 = np.empty((5, 4, 4), dtype=dtype) + + algos.take_nd(data, indexer, out=out0, axis=0) + algos.take_nd(data, indexer, out=out1, axis=1) + algos.take_nd(data, indexer, out=out2, axis=2) + + expected0 = data.take(indexer, axis=0) + expected1 = data.take(indexer, axis=1) + expected2 = data.take(indexer, axis=2) + + tm.assert_almost_equal(out0, expected0) + tm.assert_almost_equal(out1, expected1) + tm.assert_almost_equal(out2, expected2) + + indexer = [2, 1, 0, -1] + out0 = np.empty((4, 4, 3), dtype=dtype) + out1 = np.empty((5, 4, 3), dtype=dtype) + out2 = np.empty((5, 4, 4), dtype=dtype) + + if can_hold_na: algos.take_nd(data, indexer, out=out0, axis=0) algos.take_nd(data, indexer, 
out=out1, axis=1) algos.take_nd(data, indexer, out=out2, axis=2) + expected0 = data.take(indexer, axis=0) expected1 = data.take(indexer, axis=1) expected2 = data.take(indexer, axis=2) + + expected0[3, :, :] = np.nan + expected1[:, 3, :] = np.nan + expected2[:, :, 3] = np.nan + tm.assert_almost_equal(out0, expected0) tm.assert_almost_equal(out1, expected1) tm.assert_almost_equal(out2, expected2) - - indexer = [2, 1, 0, -1] - out0 = np.empty((4, 4, 3), dtype=dtype) - out1 = np.empty((5, 4, 3), dtype=dtype) - out2 = np.empty((5, 4, 4), dtype=dtype) - if can_hold_na: - algos.take_nd(data, indexer, out=out0, axis=0) - algos.take_nd(data, indexer, out=out1, axis=1) - algos.take_nd(data, indexer, out=out2, axis=2) - expected0 = data.take(indexer, axis=0) - expected1 = data.take(indexer, axis=1) - expected2 = data.take(indexer, axis=2) - expected0[3, :, :] = np.nan - expected1[:, 3, :] = np.nan - expected2[:, :, 3] = np.nan - tm.assert_almost_equal(out0, expected0) - tm.assert_almost_equal(out1, expected1) - tm.assert_almost_equal(out2, expected2) - else: - for i, out in enumerate([out0, out1, out2]): - with tm.assert_raises_regex(TypeError, - self.fill_error): - algos.take_nd(data, indexer, out=out, axis=i) - # no exception o/w - data.take(indexer, out=out, axis=i) - - _test_dtype(np.float64, True) - _test_dtype(np.float32, True) - _test_dtype(np.uint64, False) - _test_dtype(np.uint32, False) - _test_dtype(np.uint16, False) - _test_dtype(np.uint8, False) - _test_dtype(np.int64, False) - _test_dtype(np.int32, False) - _test_dtype(np.int16, False) - _test_dtype(np.int8, False) - _test_dtype(np.object_, True) - _test_dtype(np.bool, False) - - def test_3d_fill_nonna(self): - def _test_dtype(dtype, fill_value, out_dtype): - data = np.random.randint(0, 2, (5, 4, 3)).astype(dtype) - - indexer = [2, 1, 0, -1] - - result = algos.take_nd(data, indexer, axis=0, - fill_value=fill_value) - assert ((result[[0, 1, 2], :, :] == data[[2, 1, 0], :, :]).all()) - assert ((result[3, :, :] == 
fill_value).all()) - assert (result.dtype == out_dtype) - - result = algos.take_nd(data, indexer, axis=1, - fill_value=fill_value) - assert ((result[:, [0, 1, 2], :] == data[:, [2, 1, 0], :]).all()) - assert ((result[:, 3, :] == fill_value).all()) - assert (result.dtype == out_dtype) - - result = algos.take_nd(data, indexer, axis=2, - fill_value=fill_value) - assert ((result[:, :, [0, 1, 2]] == data[:, :, [2, 1, 0]]).all()) - assert ((result[:, :, 3] == fill_value).all()) - assert (result.dtype == out_dtype) - - indexer = [2, 1, 0, 1] - - result = algos.take_nd(data, indexer, axis=0, - fill_value=fill_value) - assert ((result[[0, 1, 2, 3], :, :] == data[indexer, :, :]).all()) - assert (result.dtype == dtype) - - result = algos.take_nd(data, indexer, axis=1, - fill_value=fill_value) - assert ((result[:, [0, 1, 2, 3], :] == data[:, indexer, :]).all()) - assert (result.dtype == dtype) - - result = algos.take_nd(data, indexer, axis=2, - fill_value=fill_value) - assert ((result[:, :, [0, 1, 2, 3]] == data[:, :, indexer]).all()) - assert (result.dtype == dtype) - - _test_dtype(np.int8, np.int16(127), np.int8) - _test_dtype(np.int8, np.int16(128), np.int16) - _test_dtype(np.int32, 1, np.int32) - _test_dtype(np.int32, 2.0, np.float64) - _test_dtype(np.int32, 3.0 + 4.0j, np.complex128) - _test_dtype(np.int32, True, np.object_) - _test_dtype(np.int32, '', np.object_) - _test_dtype(np.float64, 1, np.float64) - _test_dtype(np.float64, 2.0, np.float64) - _test_dtype(np.float64, 3.0 + 4.0j, np.complex128) - _test_dtype(np.float64, True, np.object_) - _test_dtype(np.float64, '', np.object_) - _test_dtype(np.complex128, 1, np.complex128) - _test_dtype(np.complex128, 2.0, np.complex128) - _test_dtype(np.complex128, 3.0 + 4.0j, np.complex128) - _test_dtype(np.complex128, True, np.object_) - _test_dtype(np.complex128, '', np.object_) - _test_dtype(np.bool_, 1, np.object_) - _test_dtype(np.bool_, 2.0, np.object_) - _test_dtype(np.bool_, 3.0 + 4.0j, np.object_) - _test_dtype(np.bool_, 
True, np.bool_) - _test_dtype(np.bool_, '', np.object_) + else: + for i, out in enumerate([out0, out1, out2]): + with tm.assert_raises_regex(TypeError, + self.fill_error): + algos.take_nd(data, indexer, out=out, axis=i) + + # No Exception otherwise. + data.take(indexer, out=out, axis=i) + + def test_3d_fill_nonna(self, dtype_fill_out_dtype): + dtype, fill_value, out_dtype = dtype_fill_out_dtype + + data = np.random.randint(0, 2, (5, 4, 3)).astype(dtype) + indexer = [2, 1, 0, -1] + + result = algos.take_nd(data, indexer, axis=0, + fill_value=fill_value) + assert ((result[[0, 1, 2], :, :] == data[[2, 1, 0], :, :]).all()) + assert ((result[3, :, :] == fill_value).all()) + assert (result.dtype == out_dtype) + + result = algos.take_nd(data, indexer, axis=1, + fill_value=fill_value) + assert ((result[:, [0, 1, 2], :] == data[:, [2, 1, 0], :]).all()) + assert ((result[:, 3, :] == fill_value).all()) + assert (result.dtype == out_dtype) + + result = algos.take_nd(data, indexer, axis=2, + fill_value=fill_value) + assert ((result[:, :, [0, 1, 2]] == data[:, :, [2, 1, 0]]).all()) + assert ((result[:, :, 3] == fill_value).all()) + assert (result.dtype == out_dtype) + + indexer = [2, 1, 0, 1] + result = algos.take_nd(data, indexer, axis=0, + fill_value=fill_value) + assert ((result[[0, 1, 2, 3], :, :] == data[indexer, :, :]).all()) + assert (result.dtype == dtype) + + result = algos.take_nd(data, indexer, axis=1, + fill_value=fill_value) + assert ((result[:, [0, 1, 2, 3], :] == data[:, indexer, :]).all()) + assert (result.dtype == dtype) + + result = algos.take_nd(data, indexer, axis=2, + fill_value=fill_value) + assert ((result[:, :, [0, 1, 2, 3]] == data[:, :, indexer]).all()) + assert (result.dtype == dtype) def test_1d_other_dtypes(self): arr = np.random.randn(10).astype(np.float32)
Utilizes `pytest` fixtures to clean up the code significantly. Initially I thought this could be a good place for #21500, but that didn't turn out to be the case. 🤷‍♂️
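The refactor follows a common pattern: an inner `_test_dtype` helper called in a long loop becomes a parametrized fixture that yields one tuple per case, which each test unpacks. A hypothetical minimal version of the shape (dtype names abbreviated to two cases for brevity):

```python
import pytest

# Each tuple is one former `_test_dtype(...)` call. pytest now reports each
# case as a separate test, so one failing dtype no longer hides the rest of
# the loop the way a failing helper call used to.
@pytest.fixture(params=[
    ('float64', True),
    ('int32', False),
])
def dtype_can_hold_na(request):
    return request.param


def test_take_case(dtype_can_hold_na):
    # The body of the old inner helper goes here, unindented by one level.
    dtype, can_hold_na = dtype_can_hold_na
    assert isinstance(dtype, str) and isinstance(can_hold_na, bool)
```

A nice side effect visible in the diff: `writeable` becomes its own `[True, False]` fixture, so the old hand-written `for writeable in [True, False]:` loop around each helper disappears and the cross-product is generated by pytest.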
https://api.github.com/repos/pandas-dev/pandas/pulls/21591
2018-06-22T08:30:45Z
2018-06-22T16:52:21Z
2018-06-22T16:52:21Z
2018-06-22T16:52:39Z
API/REGR: (re-)allow neg/pos unary operation on object dtype
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 8c36d51a5fd16..fac584e455e2a 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -54,6 +54,7 @@ Fixed Regressions - Fixed regression in :meth:`to_csv` when handling file-like object incorrectly (:issue:`21471`) - Bug in both :meth:`DataFrame.first_valid_index` and :meth:`Series.first_valid_index` raised for a row index having duplicate values (:issue:`21441`) +- Fixed regression in unary negative operations with object dtype (:issue:`21380`) - Bug in :meth:`Timestamp.ceil` and :meth:`Timestamp.floor` when timestamp is a multiple of the rounding frequency (:issue:`21262`) .. _whatsnew_0232.performance: diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 8fa79a130d1f8..26c23b84a9c04 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -27,6 +27,7 @@ is_dict_like, is_re_compilable, is_period_arraylike, + is_object_dtype, pandas_dtype) from pandas.core.dtypes.cast import maybe_promote, maybe_upcast_putmask from pandas.core.dtypes.inference import is_hashable @@ -1117,7 +1118,8 @@ def __neg__(self): values = com._values_from_object(self) if is_bool_dtype(values): arr = operator.inv(values) - elif (is_numeric_dtype(values) or is_timedelta64_dtype(values)): + elif (is_numeric_dtype(values) or is_timedelta64_dtype(values) + or is_object_dtype(values)): arr = operator.neg(values) else: raise TypeError("Unary negative expects numeric dtype, not {}" @@ -1128,7 +1130,8 @@ def __pos__(self): values = com._values_from_object(self) if (is_bool_dtype(values) or is_period_arraylike(values)): arr = values - elif (is_numeric_dtype(values) or is_timedelta64_dtype(values)): + elif (is_numeric_dtype(values) or is_timedelta64_dtype(values) + or is_object_dtype(values)): arr = operator.pos(values) else: raise TypeError("Unary plus expects numeric dtype, not {}" diff --git a/pandas/tests/frame/test_operators.py 
b/pandas/tests/frame/test_operators.py index 5df50f3d7835b..fdf50805ad818 100644 --- a/pandas/tests/frame/test_operators.py +++ b/pandas/tests/frame/test_operators.py @@ -3,6 +3,7 @@ from __future__ import print_function from collections import deque from datetime import datetime +from decimal import Decimal import operator import pytest @@ -282,6 +283,17 @@ def test_neg_numeric(self, df, expected): assert_frame_equal(-df, expected) assert_series_equal(-df['a'], expected['a']) + @pytest.mark.parametrize('df, expected', [ + (np.array([1, 2], dtype=object), np.array([-1, -2], dtype=object)), + ([Decimal('1.0'), Decimal('2.0')], [Decimal('-1.0'), Decimal('-2.0')]), + ]) + def test_neg_object(self, df, expected): + # GH 21380 + df = pd.DataFrame({'a': df}) + expected = pd.DataFrame({'a': expected}) + assert_frame_equal(-df, expected) + assert_series_equal(-df['a'], expected['a']) + @pytest.mark.parametrize('df', [ pd.DataFrame({'a': ['a', 'b']}), pd.DataFrame({'a': pd.to_datetime(['2017-01-22', '1970-01-01'])}), @@ -307,6 +319,15 @@ def test_pos_numeric(self, df): @pytest.mark.parametrize('df', [ pd.DataFrame({'a': ['a', 'b']}), + pd.DataFrame({'a': np.array([-1, 2], dtype=object)}), + pd.DataFrame({'a': [Decimal('-1.0'), Decimal('2.0')]}), + ]) + def test_pos_object(self, df): + # GH 21380 + assert_frame_equal(+df, df) + assert_series_equal(+df['a'], df['a']) + + @pytest.mark.parametrize('df', [ pd.DataFrame({'a': pd.to_datetime(['2017-01-22', '1970-01-01'])}), ]) def test_pos_raises(self, df):
closes #21380 This is an easy fix to simply be more forgiving and just try the operation: - For object dtype, this is IMO the better way: simply let the scalar values decide whether they can do those operations or not. The scalar values (e.g. strings) already raise an informative error message. E.g. for a series of strings you get `TypeError: bad operand type for unary -: 'str'` for the unary negative operation. - But with two caveats to discuss: - For datetime64 data, the error message is not as user friendly as our custom one: "TypeError: ufunc 'negative' did not contain a loop with signature matching types dtype('<M8[ns]') dtype('<M8[ns]')" - For the unary positive operation, numpy has apparently different rules: it does allow datetime64 data and returns itself (the same for object string data). Because of this, the current tests fail (it expects TypeError for `+s_string`) <details> <summary>Details with examples</summary> ``` In [14]: s_string = pd.Series(['a', 'b']) In [15]: -s_string --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-15-91308cf04ad2> in <module>() ----> 1 -s_string ~/scipy/pandas/pandas/core/generic.py in __neg__(self) 1119 arr = operator.inv(values) 1120 else: -> 1121 arr = operator.neg(values) 1122 return self.__array_wrap__(arr) 1123 TypeError: bad operand type for unary -: 'str' In [16]: +s_string Out[16]: 0 a 1 b dtype: object In [17]: s_datetime = pd.Series(pd.date_range('2012', periods=3)) In [18]: -s_datetime --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-18-b460860ba74c> in <module>() ----> 1 -s_datetime ~/scipy/pandas/pandas/core/generic.py in __neg__(self) 1119 arr = operator.inv(values) 1120 else: -> 1121 arr = operator.neg(values) 1122 return self.__array_wrap__(arr) 1123 TypeError: ufunc 'negative' did not contain a loop with signature matching types dtype('<M8[ns]')
dtype('<M8[ns]') In [19]: +s_datetime Out[19]: 0 2012-01-01 1 2012-01-02 2 2012-01-03 dtype: datetime64[ns] In [20]: In [20]: from decimal import Decimal ...: s_decimal = pd.Series([Decimal(1)]) In [21]: -s_decimal Out[21]: 0 -1 dtype: object In [22]: +s_decimal Out[22]: 0 1 dtype: object ``` </details>
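The "let the scalar values decide" behavior can be seen with stdlib types alone: `operator.neg` on an object array ultimately dispatches to each element's own `__neg__`, so types like `Decimal` work and `str` raises its own error:

```python
import operator
from decimal import Decimal

# Decimal defines __neg__/__pos__, so delegating the unary op just works:
assert operator.neg(Decimal('1.0')) == Decimal('-1.0')
assert operator.pos(Decimal('2.0')) == Decimal('2.0')

# str does not define __neg__ and raises its own informative TypeError;
# this is the exact message quoted in the PR description for a Series of
# strings ("bad operand type for unary -: 'str'"):
try:
    operator.neg('a')
except TypeError as exc:
    message = str(exc)
assert 'unary -' in message
```

This is why the new tests only need object arrays of ints and `Decimal`s on the happy path, while the string case stays in the `raises` parametrization.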
https://api.github.com/repos/pandas-dev/pandas/pulls/21590
2018-06-22T07:54:40Z
2018-06-29T00:38:40Z
2018-06-29T00:38:39Z
2018-07-02T15:43:33Z
clarifying regex pipe behavior
diff --git a/pandas/core/strings.py b/pandas/core/strings.py index 9632df46d3bbf..08239ae4dae20 100644 --- a/pandas/core/strings.py +++ b/pandas/core/strings.py @@ -335,11 +335,11 @@ def str_contains(arr, pat, case=True, flags=0, na=np.nan, regex=True): 4 False dtype: bool - Returning 'house' and 'parrot' within same string. + Returning 'house' or 'dog' when either expression occurs in a string. - >>> s1.str.contains('house|parrot', regex=True) + >>> s1.str.contains('house|dog', regex=True) 0 False - 1 False + 1 True 2 True 3 False 4 NaN
[Current documentation for `pandas.Series.str.contains()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html) states that the `|` acts as a regex **and** operator ("Returning ‘house’ and ‘parrot’ within same string."), while I believe it is actually an **or** operator. I've updated the phrase and example to demonstrate this behavior in what I think is a clearer way, though I'm definitely open to suggestions to improve it further. - [ ] closes #xxxx - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
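A quick check of the corrected claim — `|` is regex alternation, so a row matches when *either* pattern occurs in it (a minimal sketch assuming any recent pandas):

```python
import pandas as pd

s = pd.Series(['dog', 'house and parrot', 'fish'])
# Matches rows containing 'house' OR 'dog', not only rows with both.
mask = s.str.contains('house|dog', regex=True)
```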
https://api.github.com/repos/pandas-dev/pandas/pulls/21589
2018-06-22T06:55:14Z
2018-06-22T10:11:21Z
2018-06-22T10:11:21Z
2018-06-22T10:11:26Z
TST: Use int fixtures in test_construction.py
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py index f7682a965c038..ae98510951845 100644 --- a/pandas/tests/indexes/datetimes/test_construction.py +++ b/pandas/tests/indexes/datetimes/test_construction.py @@ -524,14 +524,13 @@ def test_dti_constructor_years_only(self, tz_naive_fixture): (rng3, expected3), (rng4, expected4)]: tm.assert_index_equal(rng, expected) - @pytest.mark.parametrize('dtype', [np.int64, np.int32, np.int16, np.int8]) - def test_dti_constructor_small_int(self, dtype): - # GH 13721 + def test_dti_constructor_small_int(self, any_int_dtype): + # see gh-13721 exp = DatetimeIndex(['1970-01-01 00:00:00.00000000', '1970-01-01 00:00:00.00000001', '1970-01-01 00:00:00.00000002']) - arr = np.array([0, 10, 20], dtype=dtype) + arr = np.array([0, 10, 20], dtype=any_int_dtype) tm.assert_index_equal(DatetimeIndex(arr), exp) def test_ctor_str_intraday(self):
Title is self-explanatory. Partially addresses #21500.
https://api.github.com/repos/pandas-dev/pandas/pulls/21588
2018-06-22T06:20:34Z
2018-06-22T09:58:41Z
2018-06-22T09:58:41Z
2018-06-22T10:01:35Z
[DOC]: To remove extra `` to match :class: rendering requirements
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index fd34424dedc52..4bfae7de01b8f 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -31,8 +31,8 @@ Backwards incompatible API changes Tick DateOffset Normalize Restrictions -------------------------------------- -Creating a ``Tick`` object (:class:``Day``, :class:``Hour``, :class:``Minute``, -:class:``Second``, :class:``Milli``, :class:``Micro``, :class:``Nano``) with +Creating a ``Tick`` object (:class:`Day`, :class:`Hour`, :class:`Minute`, +:class:`Second`, :class:`Milli`, :class:`Micro`, :class:`Nano`) with `normalize=True` is no longer supported. This prevents unexpected behavior where addition could fail to be monotone or associative. (:issue:`21427`)
Removing the extra backticks to solve rendering issues - [x] xref #21564 - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry @TomAugspurger , I am not very familiar with the :class: reference syntax, so do tell me if my inference about the issue is wrong. I guessed that the class names should be enclosed in single backticks rather than double backticks by looking at the following: https://github.com/pandas-dev/pandas/blob/f1ffc5fae06a7294dc831887b0d76177aec9b708/doc/source/whatsnew/v0.24.0.txt#L66-L70
https://api.github.com/repos/pandas-dev/pandas/pulls/21586
2018-06-22T05:16:36Z
2018-06-22T10:07:08Z
2018-06-22T10:07:07Z
2018-06-22T10:07:25Z
DOC: Adding clarification on return dtype of to_numeric
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py index c584e29f682dd..ebe135dfb184c 100644 --- a/pandas/core/tools/numeric.py +++ b/pandas/core/tools/numeric.py @@ -16,6 +16,10 @@ def to_numeric(arg, errors='raise', downcast=None): """ Convert argument to a numeric type. + The default return dtype is `float64` or `int64` + depending on the data supplied. Use the `downcast` parameter + to obtain other dtypes. + Parameters ---------- arg : list, tuple, 1-d array, or Series
Specifying the default return types for `to_numeric` in case of no downcast - [x] closes #21551 - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
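The documented defaults can be sketched briefly (assuming a recent pandas):

```python
import pandas as pd

# With no downcast, the result dtype is int64 or float64
# depending on the data supplied.
ints = pd.to_numeric(['1', '2', '3'])
floats = pd.to_numeric(['1.5', '2.5'])

# downcast requests the smallest dtype that can hold the values.
small = pd.to_numeric(['1', '2', '3'], downcast='integer')
```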
https://api.github.com/repos/pandas-dev/pandas/pulls/21585
2018-06-22T04:41:15Z
2018-06-22T10:05:46Z
2018-06-22T10:05:46Z
2018-06-22T10:05:59Z
BUG: Let IntervalIndex constructor override inferred closed
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index fd34424dedc52..85ce1bc567484 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -194,6 +194,13 @@ Strings - - +Interval +^^^^^^^^ + +- Bug in the :class:`IntervalIndex` constructor where the ``closed`` parameter did not always override the inferred ``closed`` (:issue:`19370`) +- +- + Indexing ^^^^^^^^ diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx index 5dbf509fda65e..fbb7265a17f8b 100644 --- a/pandas/_libs/interval.pyx +++ b/pandas/_libs/interval.pyx @@ -335,11 +335,17 @@ cdef class Interval(IntervalMixin): @cython.wraparound(False) @cython.boundscheck(False) -cpdef intervals_to_interval_bounds(ndarray intervals): +cpdef intervals_to_interval_bounds(ndarray intervals, + bint validate_closed=True): """ Parameters ---------- - intervals: ndarray object array of Intervals / nulls + intervals : ndarray + object array of Intervals / nulls + + validate_closed: boolean, default True + boolean indicating if all intervals must be closed on the same side. + Mismatching closed will raise if True, else return None for closed. 
Returns ------- @@ -353,6 +359,7 @@ cpdef intervals_to_interval_bounds(ndarray intervals): object closed = None, interval int64_t n = len(intervals) ndarray left, right + bint seen_closed = False left = np.empty(n, dtype=intervals.dtype) right = np.empty(n, dtype=intervals.dtype) @@ -370,10 +377,14 @@ cpdef intervals_to_interval_bounds(ndarray intervals): left[i] = interval.left right[i] = interval.right - if closed is None: + if not seen_closed: + seen_closed = True closed = interval.closed elif closed != interval.closed: - raise ValueError('intervals must all be closed on the same side') + closed = None + if validate_closed: + msg = 'intervals must all be closed on the same side' + raise ValueError(msg) return left, right, closed diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py index eb9d7efc06c27..23c0fb27a7553 100644 --- a/pandas/core/indexes/interval.py +++ b/pandas/core/indexes/interval.py @@ -233,7 +233,7 @@ def __new__(cls, data, closed=None, dtype=None, copy=False, if isinstance(data, IntervalIndex): left = data.left right = data.right - closed = data.closed + closed = closed or data.closed else: # don't allow scalars @@ -241,16 +241,8 @@ def __new__(cls, data, closed=None, dtype=None, copy=False, cls._scalar_data_error(data) data = maybe_convert_platform_interval(data) - left, right, infer_closed = intervals_to_interval_bounds(data) - - if (com._all_not_none(closed, infer_closed) and - closed != infer_closed): - # GH 18421 - msg = ("conflicting values for closed: constructor got " - "'{closed}', inferred from data '{infer_closed}'" - .format(closed=closed, infer_closed=infer_closed)) - raise ValueError(msg) - + left, right, infer_closed = intervals_to_interval_bounds( + data, validate_closed=closed is None) closed = closed or infer_closed return cls._simple_new(left, right, closed, name, copy=copy, diff --git a/pandas/tests/indexes/interval/test_construction.py b/pandas/tests/indexes/interval/test_construction.py index 
b1711c3444586..3145575804f0f 100644 --- a/pandas/tests/indexes/interval/test_construction.py +++ b/pandas/tests/indexes/interval/test_construction.py @@ -317,13 +317,7 @@ def test_generic_errors(self, constructor): pass def test_constructor_errors(self, constructor): - # mismatched closed inferred from intervals vs constructor. - ivs = [Interval(0, 1, closed='both'), Interval(1, 2, closed='both')] - msg = 'conflicting values for closed' - with tm.assert_raises_regex(ValueError, msg): - constructor(ivs, closed='neither') - - # mismatched closed within intervals + # mismatched closed within intervals with no constructor override ivs = [Interval(0, 1, closed='right'), Interval(2, 3, closed='left')] msg = 'intervals must all be closed on the same side' with tm.assert_raises_regex(ValueError, msg): @@ -341,6 +335,24 @@ def test_constructor_errors(self, constructor): with tm.assert_raises_regex(TypeError, msg): constructor([0, 1]) + @pytest.mark.parametrize('data, closed', [ + ([], 'both'), + ([np.nan, np.nan], 'neither'), + ([Interval(0, 3, closed='neither'), + Interval(2, 5, closed='neither')], 'left'), + ([Interval(0, 3, closed='left'), + Interval(2, 5, closed='right')], 'neither'), + (IntervalIndex.from_breaks(range(5), closed='both'), 'right')]) + def test_override_inferred_closed(self, constructor, data, closed): + # GH 19370 + if isinstance(data, IntervalIndex): + tuples = data.to_tuples() + else: + tuples = [(iv.left, iv.right) if notna(iv) else iv for iv in data] + expected = IntervalIndex.from_tuples(tuples, closed=closed) + result = constructor(data, closed=closed) + tm.assert_index_equal(result, expected) + class TestFromIntervals(TestClassConstructors): """
- [X] closes #19370 - [X] tests added / passed - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [X] whatsnew entry Makes `IntervalIndex` constructor behavior consistent: the `closed` parameter, if specified, takes priority over the inferred `closed`. Comments: - Modified the `intervals_to_interval_bounds` function to optionally raise if mixed values of `closed` are encountered instead of automatically raising. - Allow creating an `IntervalIndex` from mixed closed lists, e.g. `[Interval(0, 1, closed='left'), Interval(2, 3, closed='right')]`, only if `closed` is specified during construction. - The above will still raise if `closed` is not passed to the constructor. - This appears to only be called in the constructor currently. - Added an Interval subsection to the Bug Fixes section of the 0.24.0 whatsnew, since I anticipate that there will be a non-negligible number of interval-related fixes.
https://api.github.com/repos/pandas-dev/pandas/pulls/21584
2018-06-22T03:18:36Z
2018-06-27T15:29:17Z
2018-06-27T15:29:16Z
2018-09-24T17:22:59Z
cache DateOffset attrs now that they are immutable
diff --git a/asv_bench/benchmarks/period.py b/asv_bench/benchmarks/period.py index 897a3338c164c..c34f9a737473e 100644 --- a/asv_bench/benchmarks/period.py +++ b/asv_bench/benchmarks/period.py @@ -64,6 +64,11 @@ def setup(self): def time_setitem_period_column(self): self.df['col'] = self.rng + def time_set_index(self): + # GH#21582 limited by comparisons of Period objects + self.df['col2'] = self.rng + self.df.set_index('col2', append=True) + class Algorithms(object): diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 4bfae7de01b8f..5f05bbdfdb948 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -130,6 +130,7 @@ Performance Improvements - Improved performance of :func:`Series.describe` in case of numeric dtpyes (:issue:`21274`) - Improved performance of :func:`pandas.core.groupby.GroupBy.rank` when dealing with tied rankings (:issue:`21237`) +- Improved performance of :func:`DataFrame.set_index` with columns consisting of :class:`Period` objects (:issue:`21582`) - .. _whatsnew_0240.docs: diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index a9ef9166e4d33..63add06db17b4 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -404,6 +404,9 @@ class _BaseOffset(object): kwds = {key: odict[key] for key in odict if odict[key]} state.update(kwds) + if '_cache' not in state: + state['_cache'] = {} + self.__dict__.update(state) if 'weekmask' in state and 'holidays' in state: diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py index da8fdb4d79e34..a3f82c1a0902e 100644 --- a/pandas/tseries/offsets.py +++ b/pandas/tseries/offsets.py @@ -288,6 +288,7 @@ def isAnchored(self): # if there were a canonical docstring for what isAnchored means. 
return (self.n == 1) + @cache_readonly def _params(self): all_paras = self.__dict__.copy() if 'holidays' in all_paras and not all_paras['holidays']: @@ -322,8 +323,6 @@ def name(self): return self.rule_code def __eq__(self, other): - if other is None: - return False if isinstance(other, compat.string_types): from pandas.tseries.frequencies import to_offset @@ -333,13 +332,13 @@ def __eq__(self, other): if not isinstance(other, DateOffset): return False - return self._params() == other._params() + return self._params == other._params def __ne__(self, other): return not self == other def __hash__(self): - return hash(self._params()) + return hash(self._params) def __add__(self, other): if isinstance(other, (ABCDatetimeIndex, ABCSeries)): @@ -397,7 +396,7 @@ def _prefix(self): def rule_code(self): return self._prefix - @property + @cache_readonly def freqstr(self): try: code = self.rule_code @@ -601,7 +600,7 @@ def next_bday(self): else: return BusinessDay(n=nb_offset) - # TODO: Cache this once offsets are immutable + @cache_readonly def _get_daytime_flag(self): if self.start == self.end: raise ValueError('start and end must not be the same') @@ -643,12 +642,12 @@ def _prev_opening_time(self, other): return datetime(other.year, other.month, other.day, self.start.hour, self.start.minute) - # TODO: cache this once offsets are immutable + @cache_readonly def _get_business_hours_by_sec(self): """ Return business hours in a day by seconds. 
""" - if self._get_daytime_flag(): + if self._get_daytime_flag: # create dummy datetime to calculate businesshours in a day dtstart = datetime(2014, 4, 1, self.start.hour, self.start.minute) until = datetime(2014, 4, 1, self.end.hour, self.end.minute) @@ -662,7 +661,7 @@ def _get_business_hours_by_sec(self): def rollback(self, dt): """Roll provided date backward to next offset only if not on offset""" if not self.onOffset(dt): - businesshours = self._get_business_hours_by_sec() + businesshours = self._get_business_hours_by_sec if self.n >= 0: dt = self._prev_opening_time( dt) + timedelta(seconds=businesshours) @@ -683,9 +682,8 @@ def rollforward(self, dt): @apply_wraps def apply(self, other): - # calculate here because offset is not immutable - daytime = self._get_daytime_flag() - businesshours = self._get_business_hours_by_sec() + daytime = self._get_daytime_flag + businesshours = self._get_business_hours_by_sec bhdelta = timedelta(seconds=businesshours) if isinstance(other, datetime): @@ -766,7 +764,7 @@ def onOffset(self, dt): dt.minute, dt.second, dt.microsecond) # Valid BH can be on the different BusinessDay during midnight # Distinguish by the time spent from previous opening time - businesshours = self._get_business_hours_by_sec() + businesshours = self._get_business_hours_by_sec return self._onOffset(dt, businesshours) def _onOffset(self, dt, businesshours): @@ -2203,13 +2201,12 @@ def __eq__(self, other): if isinstance(other, Tick): return self.delta == other.delta else: - # TODO: Are there cases where this should raise TypeError? return False # This is identical to DateOffset.__hash__, but has to be redefined here # for Python 3, because we've redefined __eq__. 
def __hash__(self): - return hash(self._params()) + return hash(self._params) def __ne__(self, other): if isinstance(other, compat.string_types): @@ -2220,7 +2217,6 @@ def __ne__(self, other): if isinstance(other, Tick): return self.delta != other.delta else: - # TODO: Are there cases where this should raise TypeError? return True @property
TL;DR ~6x speedup in `set_index` for `PeriodIndex`-like column. Alright! Now that DateOffset objects are immutable (#21341), we can start caching stuff. This was pretty much the original motivation that brought me here, so I'm pretty psyched to finally make this happen. The motivating super-slow operation is `df.set_index`. Profiling before/after with: ``` idx = pd.period_range('May 1973', freq='M', periods=10**5) df = pd.DataFrame({"A": 1, "B": idx}) out = df.set_index("B", append=True) ``` Total Runtime Before: 32.708 seconds Total Runtime After: 5.340 seconds pstats output (truncated) before: ``` 1 0.000 0.000 31.903 31.903 pandas/core/frame.py:3807(set_index) 1 0.000 0.000 31.897 31.897 pandas/core/indexes/base.py:4823(_ensure_index_from_sequences) 1 0.000 0.000 31.896 31.896 pandas/core/indexes/multi.py:1246(from_arrays) 1 0.000 0.000 31.896 31.896 pandas/core/arrays/categorical.py:2590(_factorize_from_iterables) 2 0.001 0.000 31.896 15.948 pandas/core/arrays/categorical.py:2553(_factorize_from_iterable) 2 0.000 0.000 31.895 15.948 pandas/core/arrays/categorical.py:318(__init__) 2 0.000 0.000 31.512 15.756 pandas/util/_decorators.py:136(wrapper) 2 0.002 0.001 31.512 15.756 pandas/core/algorithms.py:576(factorize) 1600011 1.168 0.000 28.211 0.000 pandas/tseries/offsets.py:338(__ne__) 4 1.820 0.455 28.042 7.010 {method 'argsort' of 'numpy.ndarray' objects} 1600011 4.016 0.000 27.042 0.000 pandas/tseries/offsets.py:324(__eq__) 3200022 16.987 0.000 21.856 0.000 pandas/tseries/offsets.py:291(_params) 2 0.000 0.000 3.460 1.730 pandas/core/algorithms.py:449(_factorize_array) 1 0.617 0.617 3.445 3.445 {method 'get_labels' of 'pandas._libs.hashtable.PyObjectHashTable' objects} 3200023 3.200 0.000 3.200 0.000 {sorted} 3400729/3400727 1.235 0.000 1.235 0.000 {isinstance} 400004 0.984 0.000 1.060 0.000 pandas/tseries/offsets.py:400(freqstr) 3200022 0.840 0.000 0.840 0.000 {method 'copy' of 'dict' objects} 3200023 0.829 0.000 0.829 0.000 {method 'items' of 'dict'
objects} ``` pstats output (truncated) after: ``` 1 0.000 0.000 4.571 4.571 pandas/core/frame.py:3807(set_index) 1 0.000 0.000 4.561 4.561 pandas/core/indexes/base.py:4823(_ensure_index_from_sequences) 1 0.000 0.000 4.561 4.561 pandas/core/indexes/multi.py:1246(from_arrays) 1 0.000 0.000 4.561 4.561 pandas/core/arrays/categorical.py:2590(_factorize_from_iterables) 2 0.001 0.000 4.561 2.280 pandas/core/arrays/categorical.py:2553(_factorize_from_iterable) 2 0.000 0.000 4.560 2.280 pandas/core/arrays/categorical.py:318(__init__) 2 0.000 0.000 4.506 2.253 pandas/util/_decorators.py:136(wrapper) 2 0.003 0.001 4.506 2.253 pandas/core/algorithms.py:576(factorize) 4 1.170 0.292 4.090 1.022 {method 'argsort' of 'numpy.ndarray' objects} 1600011 0.870 0.000 3.138 0.000 pandas/tseries/offsets.py:337(__ne__) 1600011 1.475 0.000 2.267 0.000 pandas/tseries/offsets.py:325(__eq__) 3400729/3400727 0.845 0.000 0.846 0.000 {isinstance} ``` The `_params` calls that make up half of the runtime in the before version doesn't even make the cut for the pstats output in the after version. There is some more tweaking around the edges we can do for perf, but this is the big one. (Also another big one when columns can have PeriodDtype). - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
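The caching that immutability unlocks can be sketched with a minimal descriptor — a simplified stand-in for the `cache_readonly` decorator used in the diff, not pandas' actual implementation:

```python
class cached_readonly:
    """Descriptor that computes a value once and stores it on the instance."""
    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # Store results in a per-instance _cache dict, mirroring the
        # '_cache' state handled in the __setstate__ change above.
        cache = obj.__dict__.setdefault('_cache', {})
        if self.name not in cache:
            cache[self.name] = self.func(obj)
        return cache[self.name]


class Offset:
    """Toy immutable-ish offset whose _params is expensive to build."""
    def __init__(self, n):
        self.n = n
        self.calls = 0

    @cached_readonly
    def _params(self):
        self.calls += 1  # count how often the expensive path runs
        return ('Offset', self.n)


off = Offset(3)
first = off._params   # computed
second = off._params  # served from the cache
```

Because equality and hashing call `_params` once per comparison, memoizing it is exactly what collapses the `__eq__`/`__ne__` cost in the profiles above.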
https://api.github.com/repos/pandas-dev/pandas/pulls/21582
2018-06-21T23:55:20Z
2018-06-22T22:57:41Z
2018-06-22T22:57:41Z
2018-06-23T00:11:38Z
DOC: Note assert_almost_equal impl. detail
diff --git a/pandas/util/testing.py b/pandas/util/testing.py index b9e53dfc80020..675dd94d49750 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -224,9 +224,15 @@ def assert_almost_equal(left, right, check_exact=False, check_dtype: bool, default True check dtype if both a and b are the same type check_less_precise : bool or int, default False - Specify comparison precision. Only used when check_exact is False. + Specify comparison precision. Only used when `check_exact` is False. 5 digits (False) or 3 digits (True) after decimal points are compared. - If int, then specify the digits to compare + If int, then specify the digits to compare. + + When comparing two numbers, if the first number has magnitude less + than 1e-5, we compare the two numbers directly and check whether + they are equivalent within the specified precision. Otherwise, we + compare the **ratio** of the second number to the first number and + check whether it is equivalent to 1 within the specified precision. """ if isinstance(left, pd.Index): return assert_index_equal(left, right, check_exact=check_exact,
Note the hard-coded switch between absolute and relative tolerance during checking. Closes #21528.
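The documented switch can be illustrated in plain Python — `almost_equal` here is a hypothetical helper mirroring the stated rule, not pandas' actual code:

```python
def almost_equal(a, b, digits=5):
    # Mirror of the documented rule: below magnitude 1e-5 compare the
    # raw difference; otherwise compare the ratio b/a against 1.
    tol = 0.5 * 10 ** -digits
    if abs(a) < 1e-5:
        return abs(a - b) < tol
    return abs(b / a - 1) < tol


near_zero = almost_equal(1e-7, 2e-7)       # absolute comparison branch
relative = almost_equal(1000.0, 1000.001)  # relative comparison branch
```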
https://api.github.com/repos/pandas-dev/pandas/pulls/21580
2018-06-21T21:26:38Z
2018-06-22T10:21:14Z
2018-06-22T10:21:14Z
2018-06-22T16:52:08Z
DOC: update the Series.any / Dataframe.any docstring
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 9902da4094404..04ba0b5de3f7f 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -463,7 +463,7 @@ def ndim(self): See Also -------- - ndarray.ndim + ndarray.ndim : Number of array dimensions. Examples -------- @@ -487,7 +487,7 @@ def size(self): See Also -------- - ndarray.size + ndarray.size : Number of elements in the array. Examples -------- @@ -9420,7 +9420,11 @@ def _doc_parms(cls): _any_see_also = """\ See Also -------- -pandas.DataFrame.all : Return whether all elements are True. +numpy.any : Numpy version of this method. +Series.any : Return whether any element is True. +Series.all : Return whether all elements are True. +DataFrame.any : Return whether any element is True over requested axis. +DataFrame.all : Return whether all elements are True over requested axis. """ _any_desc = """\
- [ ] closes #xxxx - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
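The cross-referenced methods differ only in the quantifier they apply — a minimal illustration (assuming any recent pandas):

```python
import pandas as pd

s = pd.Series([False, True, True])

any_true = s.any()  # at least one element is truthy
all_true = s.all()  # every element is truthy
```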
https://api.github.com/repos/pandas-dev/pandas/pulls/21579
2018-06-21T19:57:05Z
2018-06-22T10:33:13Z
2018-06-22T10:33:13Z
2018-06-22T10:39:46Z
BUG: Series dot product __rmatmul__ doesn't allow matrix vector multiplication
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index fd34424dedc52..aadb380a816f4 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -183,7 +183,7 @@ Offsets Numeric ^^^^^^^ -- +- Bug in :class:`Series` ``__rmatmul__`` doesn't support matrix vector multiplication (:issue:`21530`) - - diff --git a/pandas/core/series.py b/pandas/core/series.py index 2f762dff4aeab..a608db806d20b 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -2066,7 +2066,7 @@ def __matmul__(self, other): def __rmatmul__(self, other): """ Matrix multiplication using binary `@` operator in Python>=3.5 """ - return self.dot(other) + return self.dot(np.transpose(other)) @Substitution(klass='Series') @Appender(base._shared_docs['searchsorted']) diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py index b9c7b837b8b81..36342b5ba4ee1 100644 --- a/pandas/tests/series/test_analytics.py +++ b/pandas/tests/series/test_analytics.py @@ -849,11 +849,30 @@ def test_matmul(self): expected = np.dot(a.values, a.values) assert_almost_equal(result, expected) - # np.array @ Series (__rmatmul__) + # GH 21530 + # vector (1D np.array) @ Series (__rmatmul__) result = operator.matmul(a.values, a) expected = np.dot(a.values, a.values) assert_almost_equal(result, expected) + # GH 21530 + # vector (1D list) @ Series (__rmatmul__) + result = operator.matmul(a.values.tolist(), a) + expected = np.dot(a.values, a.values) + assert_almost_equal(result, expected) + + # GH 21530 + # matrix (2D np.array) @ Series (__rmatmul__) + result = operator.matmul(b.T.values, a) + expected = np.dot(b.T.values, a.values) + assert_almost_equal(result, expected) + + # GH 21530 + # matrix (2D nested lists) @ Series (__rmatmul__) + result = operator.matmul(b.T.values.tolist(), a) + expected = np.dot(b.T.values, a.values) + assert_almost_equal(result, expected) + # mixed dtype DataFrame @ Series a['p'] = int(a.p) result = 
operator.matmul(b.T, a)
- [x] closes #21530 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry The new `__rmatmul__` implementation in `Series` was missing the matrix-vector multiplication case raised in the issue; only the inner product of two vectors was supported by `__rmatmul__`. This PR reuses the `DataFrame` implementation to add that support to `Series`, along with test cases. The matmul operator (`@`, `@=`) was added in Python 3.5 in https://www.python.org/dev/peps/pep-0465/ ```python class A(object): def __matmul__(self, other): print('__matmul__ is called in A.') class B(object): def __rmatmul__(self, other): print('__rmatmul__ is called in B.') A() @ B() B() @ A() del A.__matmul__ A() @ B() >>> A() @ B() __matmul__ is called in A. >>> B() @ A() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unsupported operand type(s) for @: 'B' and 'A' >>> del A.__matmul__ >>> A() @ B() __rmatmul__ is called in B. ```
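With the fix in place, a 2-D array on the left of `@` reaches `Series.__rmatmul__` and yields a matrix-vector product — a sketch assuming a pandas release containing this change:

```python
import numpy as np
import pandas as pd

s = pd.Series([3, 4])
m = np.array([[1, 0],
              [0, 2]])

# matrix @ Series: previously only the vector-vector inner product
# worked; this now computes the matrix-vector product of m and s.
result = m @ s
```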
https://api.github.com/repos/pandas-dev/pandas/pulls/21578
2018-06-21T19:52:07Z
2018-06-22T22:59:28Z
2018-06-22T22:59:28Z
2018-06-23T07:34:47Z
BUG: first/last lose timezone in groupby with as_index=False
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 4bfae7de01b8f..3c3f6358d6579 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -225,7 +225,7 @@ Plotting Groupby/Resample/Rolling ^^^^^^^^^^^^^^^^^^^^^^^^ -- +- Bug in :func:`pandas.core.groupby.GroupBy.first` and :func:`pandas.core.groupby.GroupBy.last` with ``as_index=False`` leading to the loss of timezone information (:issue:`15884`) - - diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 3bc59157055ce..0bbdfbbe52ac4 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -4740,7 +4740,7 @@ def _wrap_transformed_output(self, output, names=None): def _wrap_agged_blocks(self, items, blocks): if not self.as_index: - index = np.arange(blocks[0].values.shape[1]) + index = np.arange(blocks[0].values.shape[-1]) mgr = BlockManager(blocks, [items, index]) result = DataFrame(mgr) diff --git a/pandas/tests/groupby/test_nth.py b/pandas/tests/groupby/test_nth.py index a32ba9ad76f14..a1b748cd50e8f 100644 --- a/pandas/tests/groupby/test_nth.py +++ b/pandas/tests/groupby/test_nth.py @@ -1,11 +1,12 @@ import numpy as np import pandas as pd -from pandas import DataFrame, MultiIndex, Index, Series, isna +from pandas import DataFrame, MultiIndex, Index, Series, isna, Timestamp from pandas.compat import lrange from pandas.util.testing import ( assert_frame_equal, assert_produces_warning, assert_series_equal) +import pytest def test_first_last_nth(df): @@ -219,6 +220,64 @@ def test_nth_multi_index(three_group): assert_frame_equal(result, expected) +@pytest.mark.parametrize('data, expected_first, expected_last', [ + ({'id': ['A'], + 'time': Timestamp('2012-02-01 14:00:00', + tz='US/Central'), + 'foo': [1]}, + {'id': ['A'], + 'time': Timestamp('2012-02-01 14:00:00', + tz='US/Central'), + 'foo': [1]}, + {'id': ['A'], + 'time': Timestamp('2012-02-01 14:00:00', + tz='US/Central'), + 'foo': [1]}), + ({'id': 
['A', 'B', 'A'], + 'time': [Timestamp('2012-01-01 13:00:00', + tz='America/New_York'), + Timestamp('2012-02-01 14:00:00', + tz='US/Central'), + Timestamp('2012-03-01 12:00:00', + tz='Europe/London')], + 'foo': [1, 2, 3]}, + {'id': ['A', 'B'], + 'time': [Timestamp('2012-01-01 13:00:00', + tz='America/New_York'), + Timestamp('2012-02-01 14:00:00', + tz='US/Central')], + 'foo': [1, 2]}, + {'id': ['A', 'B'], + 'time': [Timestamp('2012-03-01 12:00:00', + tz='Europe/London'), + Timestamp('2012-02-01 14:00:00', + tz='US/Central')], + 'foo': [3, 2]}) +]) +def test_first_last_tz(data, expected_first, expected_last): + # GH15884 + # Test that the timezone is retained when calling first + # or last on groupby with as_index=False + + df = DataFrame(data) + + result = df.groupby('id', as_index=False).first() + expected = DataFrame(expected_first) + cols = ['id', 'time', 'foo'] + assert_frame_equal(result[cols], expected[cols]) + + result = df.groupby('id', as_index=False)['time'].first() + assert_frame_equal(result, expected[['id', 'time']]) + + result = df.groupby('id', as_index=False).last() + expected = DataFrame(expected_last) + cols = ['id', 'time', 'foo'] + assert_frame_equal(result[cols], expected[cols]) + + result = df.groupby('id', as_index=False)['time'].last() + assert_frame_equal(result, expected[['id', 'time']]) + + def test_nth_multi_index_as_expected(): # PR 9090, related to issue 8979 # test nth on MultiIndex
- [ ] closes #15884 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
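The reported bug can be exercised with a short sketch (assuming a pandas release containing this fix):

```python
import pandas as pd

df = pd.DataFrame({
    'id': ['A', 'B', 'A'],
    'time': pd.to_datetime(['2012-01-01 13:00',
                            '2012-02-01 14:00',
                            '2012-03-01 12:00']).tz_localize('US/Central'),
})

# With as_index=False, first()/last() used to drop the timezone
# from the 'time' column.
result = df.groupby('id', as_index=False).first()
```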
https://api.github.com/repos/pandas-dev/pandas/pulls/21573
2018-06-21T12:39:21Z
2018-06-22T23:01:40Z
2018-06-22T23:01:40Z
2018-06-23T22:13:58Z
add test case when to_csv argument is sys.stdout
diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py index dfa3751bff57a..36c4ae547ad4e 100644 --- a/pandas/tests/io/formats/test_to_csv.py +++ b/pandas/tests/io/formats/test_to_csv.py @@ -285,3 +285,18 @@ def test_to_csv_string_array_utf8(self): df.to_csv(path, encoding='utf-8') with open(path, 'r') as f: assert f.read() == expected_utf8 + + @tm.capture_stdout + def test_to_csv_stdout_file(self): + # GH 21561 + df = pd.DataFrame([['foo', 'bar'], ['baz', 'qux']], + columns=['name_1', 'name_2']) + expected_ascii = '''\ +,name_1,name_2 +0,foo,bar +1,baz,qux +''' + df.to_csv(sys.stdout, encoding='ascii') + output = sys.stdout.getvalue() + assert output == expected_ascii + assert not sys.stdout.closed
- [x] closes #21561 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry Adds a new test case for when the `to_csv` argument is `sys.stdout`.
https://api.github.com/repos/pandas-dev/pandas/pulls/21572
2018-06-21T11:49:16Z
2018-06-22T23:04:39Z
2018-06-22T23:04:39Z
2018-06-29T15:03:54Z
DOC: Fixing spaces around backticks, and linting
diff --git a/ci/lint.sh b/ci/lint.sh index 2cbf6f7ae52a9..9bcee55e1344c 100755 --- a/ci/lint.sh +++ b/ci/lint.sh @@ -174,6 +174,14 @@ if [ "$LINT" ]; then fi echo "Check for old-style classes DONE" + echo "Check for backticks incorrectly rendering because of missing spaces" + grep -R --include="*.rst" -E "[a-zA-Z0-9]\`\`?[a-zA-Z0-9]" doc/source/ + + if [ $? = "0" ]; then + RET=1 + fi + echo "Check for backticks incorrectly rendering because of missing spaces DONE" + else echo "NOT Linting" fi diff --git a/doc/source/merging.rst b/doc/source/merging.rst index b2cb388e3cd03..2eb5962ead986 100644 --- a/doc/source/merging.rst +++ b/doc/source/merging.rst @@ -279,9 +279,9 @@ need to be: Ignoring indexes on the concatenation axis ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -For ``DataFrame``s which don't have a meaningful index, you may wish to append -them and ignore the fact that they may have overlapping indexes. To do this, use -the ``ignore_index`` argument: +For ``DataFrame`` objects which don't have a meaningful index, you may wish +to append them and ignore the fact that they may have overlapping indexes. To +do this, use the ``ignore_index`` argument: .. ipython:: python @@ -314,7 +314,7 @@ This is also a valid argument to :meth:`DataFrame.append`: Concatenating with mixed ndims ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -You can concatenate a mix of ``Series`` and ``DataFrame``s. The +You can concatenate a mix of ``Series`` and ``DataFrame`` objects. The ``Series`` will be transformed to ``DataFrame`` with the column name as the name of the ``Series``. diff --git a/doc/source/release.rst b/doc/source/release.rst index 7bbd4ba43e66f..16fe896d9f58f 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -2641,7 +2641,7 @@ Improvements to existing features option it is no longer possible to round trip Excel files with merged MultiIndex and Hierarchical Rows. Set the ``merge_cells`` to ``False`` to restore the previous behaviour. 
(:issue:`5254`) -- The FRED DataReader now accepts multiple series (:issue`3413`) +- The FRED DataReader now accepts multiple series (:issue:`3413`) - StataWriter adjusts variable names to Stata's limitations (:issue:`5709`) API Changes @@ -2837,7 +2837,7 @@ API Changes copy through chained assignment is detected, settable via option ``mode.chained_assignment`` - test the list of ``NA`` values in the csv parser. add ``N/A``, ``#NA`` as independent default na values (:issue:`5521`) -- The refactoring involving``Series`` deriving from ``NDFrame`` breaks ``rpy2<=2.3.8``. an Issue +- The refactoring involving ``Series`` deriving from ``NDFrame`` breaks ``rpy2<=2.3.8``. an Issue has been opened against rpy2 and a workaround is detailed in :issue:`5698`. Thanks @JanSchulz. - ``Series.argmin`` and ``Series.argmax`` are now aliased to ``Series.idxmin`` and ``Series.idxmax``. These return the *index* of the min or max element respectively. Prior to 0.13.0 these would return diff --git a/doc/source/reshaping.rst b/doc/source/reshaping.rst index 88b7114cf4101..7d9925d800441 100644 --- a/doc/source/reshaping.rst +++ b/doc/source/reshaping.rst @@ -654,7 +654,7 @@ When a column contains only one level, it will be omitted in the result. pd.get_dummies(df, drop_first=True) By default new columns will have ``np.uint8`` dtype. -To choose another dtype, use the``dtype`` argument: +To choose another dtype, use the ``dtype`` argument: .. 
ipython:: python diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst index 11157264304b0..9e01296d9c9c7 100644 --- a/doc/source/timeseries.rst +++ b/doc/source/timeseries.rst @@ -2169,8 +2169,8 @@ still considered to be equal even if they are in different time zones: rng_berlin[5] rng_eastern[5] == rng_berlin[5] -Like ``Series``, ``DataFrame``, and ``DatetimeIndex``, ``Timestamp``s can be converted to other -time zones using ``tz_convert``: +Like ``Series``, ``DataFrame``, and ``DatetimeIndex``; ``Timestamp`` objects +can be converted to other time zones using ``tz_convert``: .. ipython:: python
- [X] closes #21562 - [ ] tests added / passed - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
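The grep pattern added to `ci/lint.sh` can be mirrored in plain Python to see exactly which strings it flags (a sketch for illustration, not part of the PR):

```python
import re

# Mirrors the lint pattern added to ci/lint.sh: an alphanumeric character,
# then one or two backticks, then another alphanumeric character -- i.e.
# backticks missing a surrounding space, which render incorrectly in reST.
PATTERN = re.compile(r"[a-zA-Z0-9]``?[a-zA-Z0-9]")

def flags(line):
    """Return True if the line would trip the lint check."""
    return PATTERN.search(line) is not None

# Missing space before the opening backticks: flagged.
assert flags("use the``dtype`` argument")
# Plural "s" glued to the closing backticks: flagged.
assert flags("``DataFrame``s which don't have a meaningful index")
# Properly spaced markup: passes.
assert not flags("use the ``dtype`` argument")
```

This is why the accompanying doc changes rewrite `` ``DataFrame``s `` as `` ``DataFrame`` objects``: the glued plural trips the same pattern.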
https://api.github.com/repos/pandas-dev/pandas/pulls/21570
2018-06-21T09:59:58Z
2018-06-21T10:10:32Z
2018-06-21T10:10:32Z
2018-06-21T10:10:51Z
REF: multi_take is now able to tackle all list-like (non-bool) cases
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index d5e81105dd323..cdc592ae253ac 100755 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -902,30 +902,45 @@ def _getitem_tuple(self, tup): return retval def _multi_take_opportunity(self, tup): - from pandas.core.generic import NDFrame + """ + Check whether there is the possibility to use ``_multi_take``. + Currently the limit is that all axes being indexed must be indexed with + list-likes. - # ugly hack for GH #836 - if not isinstance(self.obj, NDFrame): - return False + Parameters + ---------- + tup : tuple + Tuple of indexers, one per axis + Returns + ------- + boolean: Whether the current indexing can be passed through _multi_take + """ if not all(is_list_like_indexer(x) for x in tup): return False # just too complicated - for indexer, ax in zip(tup, self.obj._data.axes): - if isinstance(ax, MultiIndex): - return False - elif com.is_bool_indexer(indexer): - return False - elif not ax.is_unique: - return False + if any(com.is_bool_indexer(x) for x in tup): + return False return True def _multi_take(self, tup): - """ create the reindex map for our objects, raise the _exception if we - can't create the indexer """ + Create the indexers for the passed tuple of keys, and execute the take + operation. This allows the take operation to be executed all at once - + rather than once for each dimension - improving efficiency. + + Parameters + ---------- + tup : tuple + Tuple of indexers, one per axis + + Returns + ------- + values: same type as the object being indexed + """ + # GH 836 o = self.obj d = {axis: self._get_listlike_indexer(key, axis) for (key, axis) in zip(tup, o._AXIS_ORDERS)}
- [x] tests passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` This is basically a consequence of #21503 - the code path for indexing when ``key`` is a collection is now the same for multi_take and for single take. (Next step will be to merge the other code paths too, and to handle with multi_take _all_ cases in which multiple axes are being indexed, in whatever fashion)
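The simplified gate in `_multi_take_opportunity` now boils down to two checks: every indexer is list-like, and none of them is a boolean mask. A standalone sketch of that logic (the helper implementations here are simplified stand-ins for the pandas internals named in the diff):

```python
def is_list_like(x):
    """Simplified stand-in for pandas' is_list_like_indexer."""
    return isinstance(x, (list, tuple))

def is_bool_indexer(x):
    """Simplified stand-in for pandas' com.is_bool_indexer."""
    return (isinstance(x, (list, tuple)) and len(x) > 0
            and all(isinstance(v, bool) for v in x))

def multi_take_opportunity(tup):
    """Mirror of the new check: all axes indexed with non-boolean list-likes."""
    if not all(is_list_like(x) for x in tup):
        return False
    # Boolean masks are still too complicated for the multi-take path.
    return not any(is_bool_indexer(x) for x in tup)

assert multi_take_opportunity((["a", "b"], ["x"]))          # two label lists
assert not multi_take_opportunity(("a", ["x"]))             # scalar on one axis
assert not multi_take_opportunity(([True, False], ["x"]))   # boolean mask
```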
https://api.github.com/repos/pandas-dev/pandas/pulls/21569
2018-06-21T09:26:31Z
2018-06-21T13:05:44Z
2018-06-21T13:05:44Z
2018-06-21T13:06:13Z
[DOC]: Updating merge.rst to resolve rendering issues
diff --git a/doc/source/merging.rst b/doc/source/merging.rst index 1161656731f88..4d7cd0bdadef7 100644 --- a/doc/source/merging.rst +++ b/doc/source/merging.rst @@ -279,7 +279,7 @@ need to be: Ignoring indexes on the concatenation axis ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -For ``DataFrame``s which don't have a meaningful index, you may wish to append +For ``DataFrame`` s which don't have a meaningful index, you may wish to append them and ignore the fact that they may have overlapping indexes. To do this, use the ``ignore_index`` argument: @@ -314,7 +314,7 @@ This is also a valid argument to :meth:`DataFrame.append`: Concatenating with mixed ndims ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -You can concatenate a mix of ``Series`` and ``DataFrame``s. The +You can concatenate a mix of ``Series`` and ``DataFrame`` s. The ``Series`` will be transformed to ``DataFrame`` with the column name as the name of the ``Series``.
Fixing documentation so that backticks render correctly - [x] xref #21562 - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21568
2018-06-21T07:21:24Z
2018-06-21T09:39:18Z
2018-06-21T09:39:18Z
2018-06-21T09:39:22Z
DOC: Add documentation for freq='infer' option of DatetimeIndex and TimedeltaIndex constructors
diff --git a/doc/source/timedeltas.rst b/doc/source/timedeltas.rst index 745810704f665..e602e45784f4a 100644 --- a/doc/source/timedeltas.rst +++ b/doc/source/timedeltas.rst @@ -363,6 +363,13 @@ or ``np.timedelta64`` objects. Passing ``np.nan/pd.NaT/nat`` will represent miss pd.TimedeltaIndex(['1 days', '1 days, 00:00:05', np.timedelta64(2,'D'), datetime.timedelta(days=2,seconds=2)]) +The string 'infer' can be passed in order to set the frequency of the index as the +inferred frequency upon creation: + +.. ipython:: python + + pd.TimedeltaIndex(['0 days', '10 days', '20 days'], freq='infer') + Generating Ranges of Time Deltas ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst index 1b0cf86995a39..5262eedc23baa 100644 --- a/doc/source/timeseries.rst +++ b/doc/source/timeseries.rst @@ -185,6 +185,19 @@ options like ``dayfirst`` or ``format``, so use ``to_datetime`` if these are req pd.Timestamp('2010/11/12') +You can also use the ``DatetimeIndex`` constructor directly: + +.. ipython:: python + + pd.DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05']) + +The string 'infer' can be passed in order to set the frequency of the index as the +inferred frequency upon creation: + +.. ipython:: python + + pd.DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], freq='infer') + Providing a Format Argument ~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py index 83950f1d71633..963eb6dc053bf 100644 --- a/pandas/core/indexes/datetimes.py +++ b/pandas/core/indexes/datetimes.py @@ -186,7 +186,10 @@ class DatetimeIndex(DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin, copy : bool Make a copy of input ndarray freq : string or pandas offset object, optional - One of pandas date offset strings or corresponding objects + One of pandas date offset strings or corresponding objects. 
The string + 'infer' can be passed in order to set the frequency of the index as the + inferred frequency upon creation + start : starting value, datetime-like, optional If data is None, start is used as the start point in generating regular timestamp data. diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py index 9707d19953418..e90e1264638b0 100644 --- a/pandas/core/indexes/timedeltas.py +++ b/pandas/core/indexes/timedeltas.py @@ -107,7 +107,10 @@ class TimedeltaIndex(DatetimeIndexOpsMixin, TimelikeOps, Int64Index): Optional timedelta-like data to construct index with unit: unit of the arg (D,h,m,s,ms,us,ns) denote the unit, optional which is an integer/float number - freq: a frequency for the index, optional + freq : string or pandas offset object, optional + One of pandas date offset strings or corresponding objects. The string + 'infer' can be passed in order to set the frequency of the index as the + inferred frequency upon creation copy : bool Make a copy of input ndarray start : starting value, timedelta-like, optional
- [X] closes #21128 - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` Reboot of #21201. Should be good to merge on green based on comments in the previous PR, but will leave this open for a little bit to give anyone who missed the old PR a chance to review.
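Conceptually, `freq='infer'` checks whether consecutive index values share a single common step. A minimal stdlib sketch of that idea (not the pandas implementation, which also recognizes named offsets like business days):

```python
from datetime import date, timedelta

def infer_step(values):
    """Return the common step between consecutive values, or None if mixed."""
    steps = {b - a for a, b in zip(values, values[1:])}
    return steps.pop() if len(steps) == 1 else None

# Evenly spaced: a single step is found ('2D' in pandas offset terms).
dates = [date(2018, 1, 1), date(2018, 1, 3), date(2018, 1, 5)]
assert infer_step(dates) == timedelta(days=2)

# Irregular spacing: no frequency can be inferred.
assert infer_step([date(2018, 1, 1), date(2018, 1, 2), date(2018, 1, 5)]) is None
```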
https://api.github.com/repos/pandas-dev/pandas/pulls/21566
2018-06-20T23:28:07Z
2018-06-21T00:09:16Z
2018-06-21T00:09:16Z
2018-06-21T03:27:57Z
ERR: Raise a simpler backtrace for missing key
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 15c5cc97b8426..a9c49b7476fa6 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -197,7 +197,7 @@ Strings Indexing ^^^^^^^^ -- +- The traceback from a ``KeyError`` when asking ``.loc`` for a single missing label is now shorter and more clear (:issue:`21557`) - - diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index d5e81105dd323..38b6aaa2230fb 100755 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -1807,8 +1807,6 @@ def error(): try: key = self._convert_scalar_indexer(key, axis) - if not ax.contains(key): - error() except TypeError as e: # python 3 type errors should be raised @@ -1818,6 +1816,9 @@ def error(): except: error() + if not ax.contains(key): + error() + def _is_scalar_access(self, key): # this is a shortcut accessor to both .loc and .iloc # that provide the equivalent access of .at and .iat
- [x] closes #21557 - [x] tests passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
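The patch only reorders the validation: the `ax.contains(key)` check moves out of the `try` block, so a plain missing label raises one clean `KeyError` rather than an error raised while handling another exception. A stripped-down sketch of the pattern (the converter here is a toy stand-in for `_convert_scalar_indexer`):

```python
def convert(key):
    """Toy scalar converter standing in for _convert_scalar_indexer."""
    if isinstance(key, float) and key.is_integer():
        return int(key)
    return key

def validate_scalar_key(key, axis_labels):
    """Validate in two steps, keeping the KeyError traceback short."""
    try:
        key = convert(key)
    except Exception:
        raise KeyError(key)
    # Containment is checked *outside* the try/except, as in the patch, so
    # the KeyError for a missing label is not chained to another exception.
    if key not in axis_labels:
        raise KeyError(key)
    return key

assert validate_scalar_key("a", ["a", "b"]) == "a"
assert validate_scalar_key(2.0, [1, 2, 3]) == 2
```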
https://api.github.com/repos/pandas-dev/pandas/pulls/21558
2018-06-20T13:19:14Z
2018-06-21T09:42:42Z
2018-06-21T09:42:41Z
2018-06-22T10:41:15Z
Update tutorials.rst
diff --git a/doc/source/tutorials.rst b/doc/source/tutorials.rst index 895fe595de205..6ccb921014bd8 100644 --- a/doc/source/tutorials.rst +++ b/doc/source/tutorials.rst @@ -203,3 +203,4 @@ Various Tutorials - `Pandas Tutorial, by Mikhail Semeniuk <http://www.bearrelroll.com/2013/05/python-pandas-tutorial>`_ - `Pandas DataFrames Tutorial, by Karlijn Willems <http://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python>`_ - `A concise tutorial with real life examples <https://tutswiki.com/pandas-cookbook/chapter1>`_ +- `Data Analysis and Exploration with Pandas, by Theodore Petrou <https://www.packtpub.com/big-data-and-business-intelligence/data-analysis-and-exploration-pandas-video>`_
- [ ] closes #xxxx - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry Added a new video listing.
https://api.github.com/repos/pandas-dev/pandas/pulls/21553
2018-06-20T05:23:33Z
2018-06-20T09:46:45Z
null
2018-06-20T09:46:45Z
Update "See Also" section of pandas/core/generic.py
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 32f64b1d3e05c..555108a5d9349 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -5175,8 +5175,7 @@ def convert_objects(self, convert_dates=True, convert_numeric=False, -------- pandas.to_datetime : Convert argument to datetime. pandas.to_timedelta : Convert argument to timedelta. - pandas.to_numeric : Return a fixed frequency timedelta index, - with day as the default. + pandas.to_numeric : Convert argument to numeric type. Returns ------- @@ -5210,7 +5209,7 @@ def infer_objects(self): -------- pandas.to_datetime : Convert argument to datetime. pandas.to_timedelta : Convert argument to timedelta. - pandas.to_numeric : Convert argument to numeric typeR + pandas.to_numeric : Convert argument to numeric type. Returns -------
Fix some minor text errors in `infer_objects` and `convert_objects`.
https://api.github.com/repos/pandas-dev/pandas/pulls/21550
2018-06-20T00:51:53Z
2018-06-20T09:50:22Z
2018-06-20T09:50:22Z
2018-06-20T09:50:28Z
use ccalendar instead of np_datetime
diff --git a/pandas/_libs/tslibs/ccalendar.pxd b/pandas/_libs/tslibs/ccalendar.pxd index 42473a97a7150..04fb6eaf49c84 100644 --- a/pandas/_libs/tslibs/ccalendar.pxd +++ b/pandas/_libs/tslibs/ccalendar.pxd @@ -6,7 +6,7 @@ from cython cimport Py_ssize_t from numpy cimport int64_t, int32_t -cdef int dayofweek(int y, int m, int m) nogil +cdef int dayofweek(int y, int m, int d) nogil cdef bint is_leapyear(int64_t year) nogil cpdef int32_t get_days_in_month(int year, Py_ssize_t month) nogil cpdef int32_t get_week_of_year(int year, int month, int day) nogil diff --git a/pandas/_libs/tslibs/np_datetime.pxd b/pandas/_libs/tslibs/np_datetime.pxd index 33b8b32bcf2dc..1a0baa8271643 100644 --- a/pandas/_libs/tslibs/np_datetime.pxd +++ b/pandas/_libs/tslibs/np_datetime.pxd @@ -54,10 +54,6 @@ cdef extern from "../src/datetime/np_datetime.h": PANDAS_DATETIMEUNIT fr, pandas_datetimestruct *result) nogil - int days_per_month_table[2][12] - int dayofweek(int y, int m, int d) nogil - int is_leapyear(int64_t year) nogil - cdef int reverse_ops[6] diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index 63add06db17b4..2173588e348e7 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -18,12 +18,12 @@ cnp.import_array() from util cimport is_string_object, is_integer_object from ccalendar import MONTHS, DAYS +from ccalendar cimport get_days_in_month, dayofweek from conversion cimport tz_convert_single, pydt_to_i8 from frequencies cimport get_freq_code from nattype cimport NPY_NAT from np_datetime cimport (pandas_datetimestruct, - dtstruct_to_dt64, dt64_to_dtstruct, - is_leapyear, days_per_month_table, dayofweek) + dtstruct_to_dt64, dt64_to_dtstruct) # --------------------------------------------------------------------- # Constants @@ -450,12 +450,6 @@ class BaseOffset(_BaseOffset): # ---------------------------------------------------------------------- # RelativeDelta Arithmetic -@cython.wraparound(False) 
-@cython.boundscheck(False) -cdef inline int get_days_in_month(int year, int month) nogil: - return days_per_month_table[is_leapyear(year)][month - 1] - - cdef inline int year_add_months(pandas_datetimestruct dts, int months) nogil: """new year number after shifting pandas_datetimestruct number of months""" return dts.year + (dts.month + months - 1) / 12 diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx index cc2fb6e0617cb..49208056f88fe 100644 --- a/pandas/_libs/tslibs/period.pyx +++ b/pandas/_libs/tslibs/period.pyx @@ -45,9 +45,8 @@ from timezones cimport is_utc, is_tzlocal, get_utcoffset, get_dst_info from timedeltas cimport delta_to_nanoseconds cimport ccalendar -from ccalendar cimport dayofweek, get_day_of_year +from ccalendar cimport dayofweek, get_day_of_year, is_leapyear from ccalendar import MONTH_NUMBERS -from ccalendar cimport is_leapyear from conversion cimport tz_convert_utc_to_tzlocal from frequencies cimport (get_freq_code, get_base_alias, get_to_timestamp_base, get_freq_str, diff --git a/setup.py b/setup.py index d6890a08b09d0..b0b377ecc9e0f 100755 --- a/setup.py +++ b/setup.py @@ -591,6 +591,7 @@ def pxd(name): '_libs.tslibs.offsets': { 'pyxfile': '_libs/tslibs/offsets', 'pxdfiles': ['_libs/src/util', + '_libs/tslibs/ccalendar', '_libs/tslibs/conversion', '_libs/tslibs/frequencies', '_libs/tslibs/nattype'],
Do Not Merge. AFAICT the np_datetime.c versions of these functions are noticeably more performant than the Cython versions. I could use help verifying this observation since asv doesn't work well for me. My best guess is that lookups in np_datetime.c's `days_per_month_table[2][12]` are more efficient than lookups in `ccalendar.days_per_month_array`, as there does not appear to be a way to instantiate the C data structure in Cython (at least not as a module-global).
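The two lookups being compared are tiny; a plain-Python rendering of the `days_per_month_table` logic shared by both implementations (a sketch for illustration — the real code is C and Cython):

```python
# Row 0: common years, row 1: leap years -- mirrors days_per_month_table[2][12].
DAYS_PER_MONTH = [
    [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
    [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
]

def is_leapyear(year):
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def get_days_in_month(year, month):
    # Same table indexing as the C and Cython versions: leap flag selects
    # the row, (month - 1) selects the column.
    return DAYS_PER_MONTH[is_leapyear(year)][month - 1]

assert get_days_in_month(2000, 2) == 29   # divisible by 400: leap
assert get_days_in_month(1900, 2) == 28   # divisible by 100 only: common
assert get_days_in_month(2018, 2) == 28
```

The performance question in the PR is precisely about where this table lives (a C array vs. a Cython module attribute), not about the arithmetic itself.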
https://api.github.com/repos/pandas-dev/pandas/pulls/21549
2018-06-19T23:12:11Z
2018-06-26T22:29:48Z
2018-06-26T22:29:48Z
2018-07-01T01:27:52Z
style.bar: add support for axis=None (tablewise application instead of rowwise or columnwise)
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index d7feb6e547b22..50845ee697113 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -725,9 +725,10 @@ Build Changes Other ^^^^^ -- :meth: `~pandas.io.formats.style.Styler.background_gradient` now takes a ``text_color_threshold`` parameter to automatically lighten the text color based on the luminance of the background color. This improves readability with dark background colors without the need to limit the background colormap range. (:issue:`21258`) +- :meth:`~pandas.io.formats.style.Styler.background_gradient` now takes a ``text_color_threshold`` parameter to automatically lighten the text color based on the luminance of the background color. This improves readability with dark background colors without the need to limit the background colormap range. (:issue:`21258`) - Require at least 0.28.2 version of ``cython`` to support read-only memoryviews (:issue:`21688`) -- :meth: `~pandas.io.formats.style.Styler.background_gradient` now also supports tablewise application (in addition to rowwise and columnwise) with ``axis=None`` (:issue:`15204`) +- :meth:`~pandas.io.formats.style.Styler.background_gradient` now also supports tablewise application (in addition to rowwise and columnwise) with ``axis=None`` (:issue:`15204`) +- :meth:`~pandas.io.formats.style.Styler.bar` now also supports tablewise application (in addition to rowwise and columnwise) with ``axis=None`` and setting clipping range with ``vmin`` and ``vmax`` (:issue:`21548` and :issue:`21526`). ``NaN`` values are also handled properly. 
- - - diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py index 4d68971bf0ef6..6501717f715cb 100644 --- a/pandas/io/formats/style.py +++ b/pandas/io/formats/style.py @@ -30,6 +30,8 @@ import pandas.core.common as com from pandas.core.indexing import _maybe_numeric_slice, _non_reducing_slice from pandas.util._decorators import Appender +from pandas.core.dtypes.generic import ABCSeries + try: import matplotlib.pyplot as plt from matplotlib import colors @@ -993,174 +995,124 @@ def set_properties(self, subset=None, **kwargs): return self.applymap(f, subset=subset) @staticmethod - def _bar_left(s, color, width, base): - """ - The minimum value is aligned at the left of the cell - Parameters - ---------- - color: 2-tuple/list, of [``color_negative``, ``color_positive``] - width: float - A number between 0 or 100. The largest value will cover ``width`` - percent of the cell's width - base: str - The base css format of the cell, e.g.: - ``base = 'width: 10em; height: 80%;'`` - Returns - ------- - self : Styler - """ - normed = width * (s - s.min()) / (s.max() - s.min()) - zero_normed = width * (0 - s.min()) / (s.max() - s.min()) - attrs = (base + 'background: linear-gradient(90deg,{c} {w:.1f}%, ' - 'transparent 0%)') - - return [base if x == 0 else attrs.format(c=color[0], w=x) - if x < zero_normed - else attrs.format(c=color[1], w=x) if x >= zero_normed - else base for x in normed] - - @staticmethod - def _bar_center_zero(s, color, width, base): - """ - Creates a bar chart where the zero is centered in the cell - Parameters - ---------- - color: 2-tuple/list, of [``color_negative``, ``color_positive``] - width: float - A number between 0 or 100. 
The largest value will cover ``width`` - percent of the cell's width - base: str - The base css format of the cell, e.g.: - ``base = 'width: 10em; height: 80%;'`` - Returns - ------- - self : Styler - """ - - # Either the min or the max should reach the edge - # (50%, centered on zero) - m = max(abs(s.min()), abs(s.max())) - - normed = s * 50 * width / (100.0 * m) - - attrs_neg = (base + 'background: linear-gradient(90deg, transparent 0%' - ', transparent {w:.1f}%, {c} {w:.1f}%, ' - '{c} 50%, transparent 50%)') - - attrs_pos = (base + 'background: linear-gradient(90deg, transparent 0%' - ', transparent 50%, {c} 50%, {c} {w:.1f}%, ' - 'transparent {w:.1f}%)') - - return [attrs_pos.format(c=color[1], w=(50 + x)) if x >= 0 - else attrs_neg.format(c=color[0], w=(50 + x)) - for x in normed] + def _bar(s, align, colors, width=100, vmin=None, vmax=None): + """Draw bar chart in dataframe cells""" + + # Get input value range. + smin = s.min() if vmin is None else vmin + if isinstance(smin, ABCSeries): + smin = smin.min() + smax = s.max() if vmax is None else vmax + if isinstance(smax, ABCSeries): + smax = smax.max() + if align == 'mid': + smin = min(0, smin) + smax = max(0, smax) + elif align == 'zero': + # For "zero" mode, we want the range to be symmetrical around zero. 
+ smax = max(abs(smin), abs(smax)) + smin = -smax + # Transform to percent-range of linear-gradient + normed = width * (s.values - smin) / (smax - smin + 1e-12) + zero = -width * smin / (smax - smin + 1e-12) + + def css_bar(start, end, color): + """Generate CSS code to draw a bar from start to end.""" + css = 'width: 10em; height: 80%;' + if end > start: + css += 'background: linear-gradient(90deg,' + if start > 0: + css += ' transparent {s:.1f}%, {c} {s:.1f}%, '.format( + s=start, c=color + ) + css += '{c} {e:.1f}%, transparent {e:.1f}%)'.format( + e=min(end, width), c=color, + ) + return css - @staticmethod - def _bar_center_mid(s, color, width, base): - """ - Creates a bar chart where the midpoint is centered in the cell - Parameters - ---------- - color: 2-tuple/list, of [``color_negative``, ``color_positive``] - width: float - A number between 0 or 100. The largest value will cover ``width`` - percent of the cell's width - base: str - The base css format of the cell, e.g.: - ``base = 'width: 10em; height: 80%;'`` - Returns - ------- - self : Styler - """ + def css(x): + if pd.isna(x): + return '' + if align == 'left': + return css_bar(0, x, colors[x > zero]) + else: + return css_bar(min(x, zero), max(x, zero), colors[x > zero]) - if s.min() >= 0: - # In this case, we place the zero at the left, and the max() should - # be at width - zero = 0.0 - slope = width / s.max() - elif s.max() <= 0: - # In this case, we place the zero at the right, and the min() - # should be at 100-width - zero = 100.0 - slope = width / -s.min() + if s.ndim == 1: + return [css(x) for x in normed] else: - slope = width / (s.max() - s.min()) - zero = (100.0 + width) / 2.0 - slope * s.max() - - normed = zero + slope * s - - attrs_neg = (base + 'background: linear-gradient(90deg, transparent 0%' - ', transparent {w:.1f}%, {c} {w:.1f}%, ' - '{c} {zero:.1f}%, transparent {zero:.1f}%)') - - attrs_pos = (base + 'background: linear-gradient(90deg, transparent 0%' - ', transparent {zero:.1f}%, 
{c} {zero:.1f}%, ' - '{c} {w:.1f}%, transparent {w:.1f}%)') - - return [attrs_pos.format(c=color[1], zero=zero, w=x) if x > zero - else attrs_neg.format(c=color[0], zero=zero, w=x) - for x in normed] + return pd.DataFrame( + [[css(x) for x in row] for row in normed], + index=s.index, columns=s.columns + ) def bar(self, subset=None, axis=0, color='#d65f5f', width=100, - align='left'): + align='left', vmin=None, vmax=None): """ - Color the background ``color`` proportional to the values in each - column. - Excludes non-numeric data by default. + Draw bar chart in the cell backgrounds. Parameters ---------- - subset: IndexSlice, default None - a valid slice for ``data`` to limit the style application to - axis: int - color: str or 2-tuple/list + subset : IndexSlice, optional + A valid slice for `data` to limit the style application to. + axis : int, str or None, default 0 + Apply to each column (`axis=0` or `'index'`) + or to each row (`axis=1` or `'columns'`) or + to the entire DataFrame at once with `axis=None`. + color : str or 2-tuple/list If a str is passed, the color is the same for both negative and positive numbers. If 2-tuple/list is used, the first element is the color_negative and the second is the - color_positive (eg: ['#d65f5f', '#5fba7d']) - width: float - A number between 0 or 100. The largest value will cover ``width`` - percent of the cell's width + color_positive (eg: ['#d65f5f', '#5fba7d']). + width : float, default 100 + A number between 0 or 100. The largest value will cover `width` + percent of the cell's width. align : {'left', 'zero',' mid'}, default 'left' - - 'left' : the min value starts at the left of the cell - - 'zero' : a value of zero is located at the center of the cell + How to align the bars with the cells. + - 'left' : the min value starts at the left of the cell. + - 'zero' : a value of zero is located at the center of the cell. 
- 'mid' : the center of the cell is at (max-min)/2, or if values are all negative (positive) the zero is aligned - at the right (left) of the cell + at the right (left) of the cell. .. versionadded:: 0.20.0 + vmin : float, optional + Minimum bar value, defining the left hand limit + of the bar drawing range, lower values are clipped to `vmin`. + When None (default): the minimum value of the data will be used. + + .. versionadded:: 0.24.0 + + vmax : float, optional + Maximum bar value, defining the right hand limit + of the bar drawing range, higher values are clipped to `vmax`. + When None (default): the maximum value of the data will be used. + + .. versionadded:: 0.24.0 + + Returns ------- self : Styler """ - subset = _maybe_numeric_slice(self.data, subset) - subset = _non_reducing_slice(subset) + if align not in ('left', 'zero', 'mid'): + raise ValueError("`align` must be one of {'left', 'zero',' mid'}") - base = 'width: 10em; height: 80%;' - - if not(is_list_like(color)): + if not (is_list_like(color)): color = [color, color] elif len(color) == 1: color = [color[0], color[0]] elif len(color) > 2: - msg = ("Must pass `color` as string or a list-like" - " of length 2: [`color_negative`, `color_positive`]\n" - "(eg: color=['#d65f5f', '#5fba7d'])") - raise ValueError(msg) + raise ValueError("`color` must be string or a list-like" + " of length 2: [`color_neg`, `color_pos`]" + " (eg: color=['#d65f5f', '#5fba7d'])") - if align == 'left': - self.apply(self._bar_left, subset=subset, axis=axis, color=color, - width=width, base=base) - elif align == 'zero': - self.apply(self._bar_center_zero, subset=subset, axis=axis, - color=color, width=width, base=base) - elif align == 'mid': - self.apply(self._bar_center_mid, subset=subset, axis=axis, - color=color, width=width, base=base) - else: - msg = ("`align` must be one of {'left', 'zero',' mid'}") - raise ValueError(msg) + subset = _maybe_numeric_slice(self.data, subset) + subset = _non_reducing_slice(subset) + 
self.apply(self._bar, subset=subset, axis=axis, + align=align, colors=color, width=width, + vmin=vmin, vmax=vmax) return self diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py index bcfd3cbb739ff..5254ccc742ab8 100644 --- a/pandas/tests/io/formats/test_style.py +++ b/pandas/tests/io/formats/test_style.py @@ -349,10 +349,10 @@ def test_bar_align_left(self): (0, 0): ['width: 10em', ' height: 80%'], (1, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(' - '90deg,#d65f5f 50.0%, transparent 0%)'], + '90deg,#d65f5f 50.0%, transparent 50.0%)'], (2, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(' - '90deg,#d65f5f 100.0%, transparent 0%)'] + '90deg,#d65f5f 100.0%, transparent 100.0%)'] } assert result == expected @@ -361,10 +361,10 @@ def test_bar_align_left(self): (0, 0): ['width: 10em', ' height: 80%'], (1, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(' - '90deg,red 25.0%, transparent 0%)'], + '90deg,red 25.0%, transparent 25.0%)'], (2, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(' - '90deg,red 50.0%, transparent 0%)'] + '90deg,red 50.0%, transparent 50.0%)'] } assert result == expected @@ -383,46 +383,46 @@ def test_bar_align_left_0points(self): (0, 2): ['width: 10em', ' height: 80%'], (1, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 50.0%,' - ' transparent 0%)'], + ' transparent 50.0%)'], (1, 1): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 50.0%,' - ' transparent 0%)'], + ' transparent 50.0%)'], (1, 2): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 50.0%,' - ' transparent 0%)'], + ' transparent 50.0%)'], (2, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 100.0%' - ', transparent 0%)'], + ', transparent 100.0%)'], (2, 1): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 100.0%' - ', transparent 0%)'], + ', 
transparent 100.0%)'], (2, 2): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 100.0%' - ', transparent 0%)']} + ', transparent 100.0%)']} assert result == expected result = df.style.bar(axis=1)._compute().ctx expected = {(0, 0): ['width: 10em', ' height: 80%'], (0, 1): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 50.0%,' - ' transparent 0%)'], + ' transparent 50.0%)'], (0, 2): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 100.0%' - ', transparent 0%)'], + ', transparent 100.0%)'], (1, 0): ['width: 10em', ' height: 80%'], (1, 1): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 50.0%' - ', transparent 0%)'], + ', transparent 50.0%)'], (1, 2): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 100.0%' - ', transparent 0%)'], + ', transparent 100.0%)'], (2, 0): ['width: 10em', ' height: 80%'], (2, 1): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 50.0%' - ', transparent 0%)'], + ', transparent 50.0%)'], (2, 2): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg,#d65f5f 100.0%' - ', transparent 0%)']} + ', transparent 100.0%)']} assert result == expected def test_bar_align_mid_pos_and_neg(self): @@ -432,21 +432,16 @@ def test_bar_align_mid_pos_and_neg(self): '#d65f5f', '#5fba7d'])._compute().ctx expected = {(0, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 0.0%, #d65f5f 0.0%, ' + 'background: linear-gradient(90deg,' '#d65f5f 10.0%, transparent 10.0%)'], - (1, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 10.0%, ' - '#d65f5f 10.0%, #d65f5f 10.0%, ' - 'transparent 10.0%)'], + (1, 0): ['width: 10em', ' height: 80%', ], (2, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 10.0%, #5fba7d 10.0%' + 'transparent 10.0%, 
#5fba7d 10.0%' ', #5fba7d 30.0%, transparent 30.0%)'], (3, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 10.0%, ' + 'transparent 10.0%, ' '#5fba7d 10.0%, #5fba7d 100.0%, ' 'transparent 100.0%)']} @@ -459,20 +454,16 @@ def test_bar_align_mid_all_pos(self): '#d65f5f', '#5fba7d'])._compute().ctx expected = {(0, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 0.0%, #5fba7d 0.0%, ' + 'background: linear-gradient(90deg,' '#5fba7d 10.0%, transparent 10.0%)'], (1, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 0.0%, #5fba7d 0.0%, ' + 'background: linear-gradient(90deg,' '#5fba7d 20.0%, transparent 20.0%)'], (2, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 0.0%, #5fba7d 0.0%, ' + 'background: linear-gradient(90deg,' '#5fba7d 50.0%, transparent 50.0%)'], (3, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 0.0%, #5fba7d 0.0%, ' + 'background: linear-gradient(90deg,' '#5fba7d 100.0%, transparent 100.0%)']} assert result == expected @@ -484,23 +475,21 @@ def test_bar_align_mid_all_neg(self): '#d65f5f', '#5fba7d'])._compute().ctx expected = {(0, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 0.0%, ' - '#d65f5f 0.0%, #d65f5f 100.0%, ' - 'transparent 100.0%)'], + 'background: linear-gradient(90deg,' + '#d65f5f 100.0%, transparent 100.0%)'], (1, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 40.0%, ' + 'transparent 40.0%, ' '#d65f5f 40.0%, #d65f5f 100.0%, ' 'transparent 100.0%)'], (2, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 70.0%, ' + 'transparent 70.0%, ' '#d65f5f 70.0%, #d65f5f 100.0%, ' 'transparent 
100.0%)'], (3, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 80.0%, ' + 'transparent 80.0%, ' '#d65f5f 80.0%, #d65f5f 100.0%, ' 'transparent 100.0%)']} assert result == expected @@ -511,25 +500,194 @@ def test_bar_align_zero_pos_and_neg(self): result = df.style.bar(align='zero', color=[ '#d65f5f', '#5fba7d'], width=90)._compute().ctx - expected = {(0, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 45.0%, ' - '#d65f5f 45.0%, #d65f5f 50%, ' - 'transparent 50%)'], - (1, 0): ['width: 10em', ' height: 80%', - 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 50%, ' - '#5fba7d 50%, #5fba7d 50.0%, ' - 'transparent 50.0%)'], + 'transparent 40.0%, #d65f5f 40.0%, ' + '#d65f5f 45.0%, transparent 45.0%)'], + (1, 0): ['width: 10em', ' height: 80%'], (2, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 50%, #5fba7d 50%, ' - '#5fba7d 60.0%, transparent 60.0%)'], + 'transparent 45.0%, #5fba7d 45.0%, ' + '#5fba7d 55.0%, transparent 55.0%)'], (3, 0): ['width: 10em', ' height: 80%', 'background: linear-gradient(90deg, ' - 'transparent 0%, transparent 50%, #5fba7d 50%, ' - '#5fba7d 95.0%, transparent 95.0%)']} + 'transparent 45.0%, #5fba7d 45.0%, ' + '#5fba7d 90.0%, transparent 90.0%)']} + assert result == expected + + def test_bar_align_left_axis_none(self): + df = pd.DataFrame({'A': [0, 1], 'B': [2, 4]}) + result = df.style.bar(axis=None)._compute().ctx + expected = { + (0, 0): ['width: 10em', ' height: 80%'], + (1, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,' + '#d65f5f 25.0%, transparent 25.0%)'], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,' + '#d65f5f 50.0%, transparent 50.0%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,' + '#d65f5f 100.0%, transparent 100.0%)'] + } + assert result == 
expected + + def test_bar_align_zero_axis_none(self): + df = pd.DataFrame({'A': [0, 1], 'B': [-2, 4]}) + result = df.style.bar(align='zero', axis=None)._compute().ctx + expected = { + (0, 0): ['width: 10em', ' height: 80%'], + (1, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 50.0%, #d65f5f 50.0%, ' + '#d65f5f 62.5%, transparent 62.5%)'], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 25.0%, #d65f5f 25.0%, ' + '#d65f5f 50.0%, transparent 50.0%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 50.0%, #d65f5f 50.0%, ' + '#d65f5f 100.0%, transparent 100.0%)'] + } + assert result == expected + + def test_bar_align_mid_axis_none(self): + df = pd.DataFrame({'A': [0, 1], 'B': [-2, 4]}) + result = df.style.bar(align='mid', axis=None)._compute().ctx + expected = { + (0, 0): ['width: 10em', ' height: 80%'], + (1, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 33.3%, #d65f5f 33.3%, ' + '#d65f5f 50.0%, transparent 50.0%)'], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,' + '#d65f5f 33.3%, transparent 33.3%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 33.3%, #d65f5f 33.3%, ' + '#d65f5f 100.0%, transparent 100.0%)'] + } + assert result == expected + + def test_bar_align_mid_vmin(self): + df = pd.DataFrame({'A': [0, 1], 'B': [-2, 4]}) + result = df.style.bar(align='mid', axis=None, vmin=-6)._compute().ctx + expected = { + (0, 0): ['width: 10em', ' height: 80%'], + (1, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 60.0%, #d65f5f 60.0%, ' + '#d65f5f 70.0%, transparent 70.0%)'], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 40.0%, #d65f5f 40.0%, ' + '#d65f5f 60.0%, transparent 60.0%)'], + (1, 1): ['width: 10em', ' 
height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 60.0%, #d65f5f 60.0%, ' + '#d65f5f 100.0%, transparent 100.0%)'] + } + assert result == expected + + def test_bar_align_mid_vmax(self): + df = pd.DataFrame({'A': [0, 1], 'B': [-2, 4]}) + result = df.style.bar(align='mid', axis=None, vmax=8)._compute().ctx + expected = { + (0, 0): ['width: 10em', ' height: 80%'], + (1, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 20.0%, #d65f5f 20.0%, ' + '#d65f5f 30.0%, transparent 30.0%)'], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,' + '#d65f5f 20.0%, transparent 20.0%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 20.0%, #d65f5f 20.0%, ' + '#d65f5f 60.0%, transparent 60.0%)'] + } + assert result == expected + + def test_bar_align_mid_vmin_vmax_wide(self): + df = pd.DataFrame({'A': [0, 1], 'B': [-2, 4]}) + result = df.style.bar(align='mid', axis=None, + vmin=-3, vmax=7)._compute().ctx + expected = { + (0, 0): ['width: 10em', ' height: 80%'], + (1, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 30.0%, #d65f5f 30.0%, ' + '#d65f5f 40.0%, transparent 40.0%)'], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 10.0%, #d65f5f 10.0%, ' + '#d65f5f 30.0%, transparent 30.0%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 30.0%, #d65f5f 30.0%, ' + '#d65f5f 70.0%, transparent 70.0%)'] + } + assert result == expected + + def test_bar_align_mid_vmin_vmax_clipping(self): + df = pd.DataFrame({'A': [0, 1], 'B': [-2, 4]}) + result = df.style.bar(align='mid', axis=None, + vmin=-1, vmax=3)._compute().ctx + expected = { + (0, 0): ['width: 10em', ' height: 80%'], + (1, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 25.0%, #d65f5f 25.0%, ' + '#d65f5f 50.0%, 
transparent 50.0%)'], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,' + '#d65f5f 25.0%, transparent 25.0%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 25.0%, #d65f5f 25.0%, ' + '#d65f5f 100.0%, transparent 100.0%)'] + } + assert result == expected + + def test_bar_align_mid_nans(self): + df = pd.DataFrame({'A': [1, None], 'B': [-1, 3]}) + result = df.style.bar(align='mid', axis=None)._compute().ctx + expected = { + (0, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 25.0%, #d65f5f 25.0%, ' + '#d65f5f 50.0%, transparent 50.0%)'], + (1, 0): [''], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg,' + '#d65f5f 25.0%, transparent 25.0%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 25.0%, #d65f5f 25.0%, ' + '#d65f5f 100.0%, transparent 100.0%)'] + } + assert result == expected + + def test_bar_align_zero_nans(self): + df = pd.DataFrame({'A': [1, None], 'B': [-1, 2]}) + result = df.style.bar(align='zero', axis=None)._compute().ctx + expected = { + (0, 0): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 50.0%, #d65f5f 50.0%, ' + '#d65f5f 75.0%, transparent 75.0%)'], + (1, 0): [''], + (0, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 25.0%, #d65f5f 25.0%, ' + '#d65f5f 50.0%, transparent 50.0%)'], + (1, 1): ['width: 10em', ' height: 80%', + 'background: linear-gradient(90deg, ' + 'transparent 50.0%, #d65f5f 50.0%, ' + '#d65f5f 100.0%, transparent 100.0%)'] + } assert result == expected def test_bar_bad_align_raises(self):
- eliminate code duplication related to style.bar with different align modes - add support for axis=None - fix minor bug with align 'zero' and width < 100 - make generated CSS gradients more compact - [x] Closes #21525 - [x] tests added / passed - [x] passes `git diff origin/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
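The shared normalization the bullets above describe can be sketched in plain Python. This is a simplified illustration, not the PR's code: `mid_bar_extents` is a hypothetical name, and the real `Styler.bar` turns these two percentages into a CSS `linear-gradient` string per cell.

```python
def mid_bar_extents(value, vmin, vmax, width=100.0):
    """Start/end of an align='mid' bar as percentages of the cell width.

    Simplified sketch of the normalization consolidated in this PR;
    values outside [vmin, vmax] are clipped, and the bar range is
    extended so that it always contains zero.
    """
    value = max(vmin, min(value, vmax))    # clip into [vmin, vmax]
    lo, hi = min(vmin, 0), max(vmax, 0)    # bar range always contains 0
    span = hi - lo
    zero = -lo * width / span              # where the zero line sits
    start = zero + width * min(value, 0) / span
    end = zero + width * max(value, 0) / span
    return round(start, 1), round(end, 1)

# These match the expected ctx percentages in test_bar_align_mid_vmin:
assert mid_bar_extents(-2, vmin=-6, vmax=4) == (40.0, 60.0)
assert mid_bar_extents(4, vmin=-6, vmax=4) == (60.0, 100.0)
```

With all-positive data the zero line lands at 0% and `align='mid'` degenerates to a left-aligned bar, which is why the `test_bar_align_mid_all_pos` expectations start every gradient at the cell's left edge.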
https://api.github.com/repos/pandas-dev/pandas/pulls/21548
2018-06-19T23:08:25Z
2018-08-30T12:46:57Z
2018-08-30T12:46:57Z
2018-08-30T12:47:03Z
ENH: set accessor for Series (WIP)
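The semantics this WIP accessor proposes (elementwise set operations with NA propagation and an optional `fill_value`) can be sketched in plain Python, independent of pandas. `set_op` and the `NA` sentinel are illustrative names for this sketch only, not part of the PR:

```python
from operator import or_

NA = None  # stand-in for a missing value


def set_op(left, right, op=or_, fill_value=None):
    """Elementwise set operation over two equal-length sequences.

    Missing entries propagate into the result unless fill_value
    (a set) is given, mirroring the proposed behavior of
    Series.set.union / intersect / diff / xor.
    """
    out = []
    for a, b in zip(left, right):
        a = fill_value if a is NA else set(a)
        b = fill_value if b is NA else set(b)
        out.append(NA if a is NA or b is NA else op(a, b))
    return out


s = [{1, 2}, {2, 4}, {3, 1}]
t = [{2, 3}, {1, 2}, NA]
set_op(s, t)                    # -> [{1, 2, 3}, {1, 2, 4}, None]
set_op(s, t, fill_value=set())  # -> [{1, 2, 3}, {1, 2, 4}, {1, 3}]
```

The second call reproduces the `fill_value=set()` example from the PR's `union` docstring: the filled entry contributes an empty set, so the non-missing operand passes through unchanged. The actual diff adds alignment (`join`) and conversion (`errors`) handling on top of this core.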
diff --git a/doc/source/api.rst b/doc/source/api.rst index 4faec93490fde..9e1480144b2f5 100644 --- a/doc/source/api.rst +++ b/doc/source/api.rst @@ -686,6 +686,36 @@ strings and apply several methods to it. These can be accessed like Series.dt Index.str +Set handling +~~~~~~~~~~~~~~~ +``Series.set`` can be used to access the values of the series as +sets and apply several methods to it. These can be accessed like +``Series.set.<function/property>``. + +.. autosummary:: + :toctree: generated/ + :template: autosummary/accessor_method.rst + + Series.set.union + Series.set.intersect + Series.set.xor + Series.set.diff + Series.set.len + +.. + The following is needed to ensure the generated pages are created with the + correct template (otherwise they would be created in the Series/Index class page) + +.. + .. autosummary:: + :toctree: generated/ + :template: autosummary/accessor.rst + + Series.str + Series.cat + Series.dt + Series.set + .. _api.categorical: Categorical diff --git a/pandas/core/series.py b/pandas/core/series.py index 23c4bbe082f28..4428dcb3c376c 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -77,6 +77,7 @@ from pandas._libs import index as libindex, tslib as libts, lib, iNaT from pandas.core.config import get_option from pandas.core.strings import StringMethods +from pandas.core.sets import SetMethods import pandas.plotting._core as gfx @@ -158,7 +159,7 @@ class Series(base.IndexOpsMixin, generic.NDFrame): Copy input data """ _metadata = ['name'] - _accessors = set(['dt', 'cat', 'str']) + _accessors = set(['dt', 'cat', 'str', 'set']) _deprecations = generic.NDFrame._deprecations | frozenset( ['asobject', 'sortlevel', 'reshape', 'get_value', 'set_value', 'from_csv', 'valid']) @@ -3992,6 +3993,7 @@ def to_period(self, freq=None, copy=True): # Accessor Methods # ---------------------------------------------------------------------- str = CachedAccessor("str", StringMethods) + set = CachedAccessor("set", SetMethods) dt = 
CachedAccessor("dt", CombinedDatetimelikeProperties) cat = CachedAccessor("cat", CategoricalAccessor) plot = CachedAccessor("plot", gfx.SeriesPlotMethods) diff --git a/pandas/core/sets.py b/pandas/core/sets.py new file mode 100644 index 0000000000000..bf1ac427f279c --- /dev/null +++ b/pandas/core/sets.py @@ -0,0 +1,534 @@ +import numpy as np + +from functools import reduce +from operator import __or__, __xor__, __and__, __sub__ + +from pandas.core.dtypes.generic import ABCSeries +from pandas.core.dtypes.missing import isna +from pandas.core.dtypes.common import is_list_like + +from pandas.core.base import NoNewAttributesMixin +from pandas.util._decorators import Appender +import pandas.compat as compat + +_shared_docs = dict() + + +class SetMethods(NoNewAttributesMixin): + """ + Vectorized set functions for Series. NAs get turned to empty sets by + default - this behavior can be changed by using the 'fill_value'-parameter. + All methods have an 'errors'-parameter that determines how values are + converted to sets. + + Examples + -------- + >>> s.set.union() + >>> s.set.intersect() + """ + + def __init__(self, data): + self._data = data + self._freeze() + + @staticmethod + def _validate(data, errors='raise', fill_value=None): + """ + TODO + """ + + # signature following GH13877 + if not isinstance(data, ABCSeries): + raise ValueError("Must pass Series for validating inputs of set " + "accessor operations") + + if fill_value is not None and fill_value is not np.nan: + err_str = ("The parameter 'fill_value' must be list-like!") + if not is_list_like(fill_value): + raise ValueError(err_str) + fill_value = set(fill_value) + + data = data.copy() # avoid changing original input + na_mask = data.isna() + + if errors == 'raise': + forbidden = ~data.loc[~na_mask].map(is_list_like) + if forbidden.any(): + raise ValueError("By default, can only use .set accessor with " + "values that can be mapped to sets. 
For more " + "permissive error-handling, set 'errors'=" + "'ignore'|'coerce'|'wrap'.") + elif errors == 'ignore': + ignore = ~data.loc[~na_mask].map(is_list_like) + # everything that's not list-like gets set to na + na_mask.loc[~na_mask] = ignore + elif errors == 'coerce': + permitted = lambda x: (isinstance(x, compat.string_types) + or is_list_like(x)) + # everything that's not a string or container gets set to na + na_mask.loc[~na_mask] = ~data.loc[~na_mask].map(permitted) + elif errors == 'wrap': + singletons = ~na_mask & ~data.map(is_list_like) + data.loc[singletons] = data.loc[singletons].map(lambda x: [x]) + elif errors == 'skip': + pass + else: + raise ValueError("Received illegal value for parameter 'errors'; " + "allowed values are {'raise'|'ignore'|" + "'coerce'|'wrap'|'skip'}") + + if errors != 'skip': + data.loc[na_mask] = np.nan + # everything else gets mapped to sets + data.loc[~na_mask] = data.loc[~na_mask].map(set) + + # cannot use fillna due to GH21329 + if fill_value is not None and na_mask.any(): + data.loc[na_mask] = [fill_value] * na_mask.sum() + + return data + + def _wrap_result(self, result, name=None, index=None): + """ + TODO + """ + + from pandas import Series + + if name is None: + name = getattr(result, 'name', None) + if name is None: + name = self._data.name + + if not hasattr(result, 'ndim') or not hasattr(result, 'dtype'): + return result + assert result.ndim < 3 + + index = self._data.index if index is None else index + return Series(result, name=name, index=index) + + def _get_series_list(self, others): + """ + Auxiliary function for set-accessor functions. Turn potentially mixed + input into a list of Series. + + Parameters + ---------- + others : Series, DataFrame, np.ndarray, or list-like of objects that + are either Series or np.ndarray (1-dim). If it is a list-like that + *only* contains scalar values, this list-like object will be + broadcast to every element of a Series with the same index as the + calling Series. 
+ + Returns + ------- + list : others transformed into list of Series + """ + + from pandas import Series, DataFrame + + idx = self._data.index + + err_msg = ('others must be Series, DataFrame, np.ndarrary or ' + 'list-like (containing either only scalar values, or only ' + 'objects of type Series/np.ndarray)!') + + # np.ndarray inherits the index `idx` of the calling Series - i.e. must + # have matching length. Series/DataFrame keep their own index. + # List-likes must contain only Series or 1-dim np.ndarray + if isinstance(others, Series): + return [others] + elif isinstance(others, DataFrame): + return [others[x] for x in others] + elif isinstance(others, np.ndarray) and others.ndim == 1: + return [Series(others, index=idx)] + elif isinstance(others, np.ndarray) and others.ndim == 2: + others = DataFrame(others, index=idx) + return [others[x] for x in others] + elif is_list_like(others): + others = list(others) # ensure iterators do not get read twice etc + + # in case of list-like `others`, all elements must be either be + # scalar or Series/np.ndarray + if all(not is_list_like(x) for x in others): # True if empty + # in this case, we broadcast others to every element of a new + # Series with the same index as the caller + return [Series([others] * len(idx), index=idx)] + + check = lambda x: (isinstance(x, Series) + or (isinstance(x, np.ndarray) and x.ndim == 1)) + if all(check(x) for x in others): + los = [] + # iterate through list and append list of series for each + # element (which we check to be one-dimensional) + while others: + nxt = others.pop(0) # Series or np.ndarray by the above + los = los + self._get_series_list(nxt) + return los + raise TypeError(err_msg) + + def _apply_op(self, others, operator, errors, fill_value, join): + """ + TODO + """ + + from pandas import concat + + data = self._validate(self._data, errors, fill_value) + + # concatenate Series/Index with itself if no "others" + if others is None: + return reduce(operator, data.dropna()) 
+ + try: + # turn others into list of series -- necessary for concat/align + others = self._get_series_list(others) + except ValueError: # do not catch TypeError raised by _get_series_list + raise ValueError('If `others` contains arrays, these must all be ' + 'of the same length as the calling Series.') + # check if all series are legal for set ops; raise/convert otherwise + others = [self._validate(x, errors, fill_value) for x in others] + + # Need to add keys for uniqueness in case of duplicate columns + others = concat(others, axis=1, + join=(join if join == 'inner' else 'outer'), + keys=range(len(others))) + data, others = data.align(others, join=join) + allcols = [data] + [others[x] for x in others] # again list of Series + + # if alignment introduced NaNs anywhere, need to re-apply fill_value + if fill_value is not None and any(x.isna().any() for x in allcols): + allcols = [self._validate(x, 'skip', fill_value) for x in allcols] + + result = self._apply_op_core(allcols, operator) + index = others.index if join == 'right' else data.index + return self._wrap_result(result, index=index) + + def _apply_op_core(self, list_of_series, operator): + """ + TODO + """ + # list_of_series: must be aligned already! + masks = np.array([isna(x).values for x in list_of_series]) + na_mask = np.logical_or.reduce(masks, axis=0) + result = np.empty(len(na_mask), dtype=object) + np.putmask(result, na_mask, np.nan) + + # apply operator over columns left-to-right; everything aligned already + result[~na_mask] = reduce(operator, + [x.values[~na_mask] for x in list_of_series]) + return result + + _shared_docs['set_ops'] = (""" + Calculate %(op)s for Series. + + If `others` is specified, this method applies the %(op)s per element. If + `others` is not passed, then the %(op)s is applied to the elements in the + Series%(add)s. 
+ + Parameters + ---------- + others : Series, DataFrame, np.ndarray, or list-like, default None + np.ndarray (one- or two-dimensional) must have the same length as the + calling Series; Series and DataFrame get matched on index and therefore + do not have to match in length. + + If `others` is a list-like, it may contain either: + + - Only Series or np.ndarray. The latter must have the same length as + the calling Series + - Only scalars. In this case, this list-like object will be used as the + right-hand side of the %(op)s for all elements of the calling Series. + + If `others` is None, the method applies the %(op)s%(add)s to the + elements of the calling Series. + join : {'left', 'right', 'outer', 'inner'}, default 'left' + Determines the join-style between the calling Series and any Series or + DataFrame in `others` (np.ndarrays need to match the length of the + calling Series). To disable alignment, use `.values` on any Series or + DataFrame in `others`. + errors : {'raise', 'ignore', 'coerce', 'wrap', 'skip'}, default 'raise' + Determines how values that are not sets are treated, both in the + calling Series, as well as any column in `others`. All options ignore + missing values, and all options except 'raise' and 'skip' will set + elements that they cannot map to `np.nan` - these values can be further + processed using the `fill_value`-parameter. + + - 'raise': Raise error for any element that cannot be unambiguously + mapped to a set (including strings). + - 'ignore': Ignore all elements that cannot be unambiguously mapped to + a set (including strings). + - 'coerce': Forcefully map everything possible to a set. In particular, + strings will be mapped to the set of their characters. + - 'wrap': Maps list-likes to sets, and wraps all other elements + (including strings; except missing values) into singleton sets. + - 'skip': Do not run any checks or conversions to `set`, if + performance is critical (`fill_values` will work as usual). 
In this + case, it is up to the user that all non-null elements are compatible + with the respective `numpy` set-methods. + fill_value : list-like, default None + Value to use for missing values in the calling Series and any column in + `others`. + + Returns + ------- + result : set or Series/Index of objects + If `others` is None, `set` is returned, otherwise a `Series` of objects + is returned. + + See Also + -------- + %(also)s + Examples + -------- + If `others` is not specified, the operation will be applied%(add)s to all + elements of the Series. + + >>> s = pd.Series([{1, 2}, {2, 4}, {3, 1}]) + >>> s + 0 {1, 2} + 1 {2, 4} + 2 {1, 3} + dtype: object + %(ex_no_others)s + If `others` is a Series (or np.ndarray of the correct length), the + operation will be applied element-wise. + + >>> t = pd.Series([{2, 3}, {1, 2}, np.nan]) + >>> t + 0 {2, 3} + 1 {1, 2} + 2 NaN + dtype: object + %(ex_with_others)s + By default, missing values in any of the input columns will remain missing + in the result. To change this, use the `fill_value` parameter, which fills + the columns *before* applying the operation. + %(ex_with_fill)s + Finally, if `others` is a list-like containing only scalar values, this + list-like object will be used as the right-hand side of the %(op)s for all + elements of the calling Series. (as in the corresponding `numpy` methods). + + >>> s + 0 {1, 2} + 1 {2, 4} + 2 {1, 3} + dtype: object + %(ex_scalar)s + For more examples, see :ref:`here <set.accessor>`. + """) + + also = ''' + intersect : Calculate intersection + diff : Calculate set difference + xor : Calculate symmetric set difference + ''' + ex_no_others = '''>>> + >>> s.set.union() + {1, 2, 3, 4} + ''' + ex_with_others = '''>>> + >>> s.set.union(t) + 0 {1, 2, 3} + 1 {1, 2, 4} + 2 NaN + dtype: object + ''' + ex_with_fill = ''' + >>> s.set.union(t, fill_value=set()) # equivalent fill values: [], {}, ... 
+ 0 {1, 2, 3} + 1 {1, 2, 4} + 2 {1, 3} + dtype: object + >>> + >>> s.set.union(t, fill_value={1, 3, 5}) + 0 {1, 2, 3} + 1 {1, 2, 4} + 2 {1, 3, 5} + dtype: object + ''' + ex_scalar = '''>>> + >>> s.set.union({1}) + 0 {1, 2} + 1 {1, 2, 4} + 2 {1, 3} + dtype: object + ''' + + @Appender(_shared_docs['set_ops'] % { + 'op': 'union', + 'add': '', + 'also': also, + 'ex_no_others': ex_no_others, + 'ex_with_others': ex_with_others, + 'ex_with_fill': ex_with_fill, + 'ex_scalar': ex_scalar + }) + def union(self, others=None, join='left', errors='raise', fill_value=None): + return self._apply_op(others, __or__, errors, fill_value, join) + + also = ''' + union : Calculate union + diff : Calculate set difference + xor : Calculate symmetric set difference + ''' + ex_no_others = '''>>> + >>> s.set.intersect() + set() + ''' + ex_with_others = '''>>> + >>> s.set.intersect(t) + 0 {2} + 1 {2} + 2 NaN + dtype: object + ''' + ex_with_fill = ''' + >>> s.set.intersect(t, fill_value=set()) # equiv. fill values: [], {}, ... + 0 {2} + 1 {2} + 2 {} + dtype: object + >>> + >>> s.set.intersect(t, fill_value={1, 3, 5}) + 0 {2} + 1 {2} + 2 {1, 3} + dtype: object + ''' + ex_scalar = '''>>> + >>> s.set.intersect({1}) + 0 {1} + 1 {} + 2 {1} + dtype: object + ''' + + @Appender(_shared_docs['set_ops'] % { + 'op': 'intersection', + 'add': '', + 'also': also, + 'ex_no_others': ex_no_others, + 'ex_with_others': ex_with_others, + 'ex_with_fill': ex_with_fill, + 'ex_scalar': ex_scalar + }) + def intersect(self, others=None, join='left', + errors='raise', fill_value=None): + return self._apply_op(others, __and__, errors, fill_value, join) + + also = ''' + intersect : Calculate intersection + union : Calculate union + xor : Calculate symmetric set difference + ''' + ex_no_others = '''>>> + >>> s.set.diff() + set() + ''' + ex_with_others = '''>>> + >>> s.set.diff(t) + 0 {1} + 1 {4} + 2 NaN + dtype: object + ''' + ex_with_fill = ''' + >>> s.set.diff(t, fill_value=set()) # equivalent fill values: [], {}, ... 
+ 0 {1} + 1 {4} + 2 {1, 3} + dtype: object + >>> + >>> s.set.diff(t, fill_value={1, 3, 5}) + 0 {1} + 1 {4} + 2 {} + dtype: object + ''' + ex_scalar = '''>>> + >>> s.set.diff({1}) + 0 {2} + 1 {2, 4} + 2 {3} + dtype: object + ''' + + @Appender(_shared_docs['set_ops'] % { + 'op': 'set difference', + 'add': ' sequentially', + 'also': also, + 'ex_no_others': ex_no_others, + 'ex_with_others': ex_with_others, + 'ex_with_fill': ex_with_fill, + 'ex_scalar': ex_scalar + }) + def diff(self, others=None, join='left', errors='raise', fill_value=None): + return self._apply_op(others, __sub__, errors, fill_value, join) + + also = ''' + intersect : Calculate intersection + union : Calculate union + diff : Calculate symmetric set difference + ''' + ex_no_others = '''>>> + >>> s.set.xor() + {3, 4} + ''' + ex_with_others = '''>>> + >>> s.set.xor(t) + 0 {1, 3} + 1 {1, 4} + 2 NaN + dtype: object + ''' + ex_with_fill = ''' + >>> s.set.xor(t, fill_value=set()) # equivalent fill values: [], {}, ... + 0 {1, 3} + 1 {1, 4} + 2 {1, 3} + dtype: object + >>> + >>> s.set.xor(t, fill_value={1, 3, 5}) + 0 {1, 3} + 1 {1, 4} + 2 {5} + dtype: object + ''' + ex_scalar = '''>>> + >>> s.set.xor({1}) + 0 {2} + 1 {1, 2, 4} + 2 {3} + dtype: object + ''' + + @Appender(_shared_docs['set_ops'] % { + 'op': 'symmetric set difference', + 'add': ' sequentially', + 'also': also, + 'ex_no_others': ex_no_others, + 'ex_with_others': ex_with_others, + 'ex_with_fill': ex_with_fill, + 'ex_scalar': ex_scalar + }) + def xor(self, others=None, join='left', errors='raise', fill_value=None): + return self._apply_op(others, __xor__, errors, fill_value, join) + + def len(self, errors='raise', fill_value=None): + """ + TODO + """ + + from pandas import Series + + data = self._validate(self._data, errors, fill_value) + na_mask = data.isna() + result = Series(index=data.index) + result.loc[~na_mask] = data.dropna().map(len) + return result + + @classmethod + def _make_accessor(cls, data): + cls._validate(data) + return cls(data) 
diff --git a/pandas/tests/test_sets.py b/pandas/tests/test_sets.py new file mode 100644 index 0000000000000..cd8da53d7d87f --- /dev/null +++ b/pandas/tests/test_sets.py @@ -0,0 +1,352 @@ +# -*- coding: utf-8 -*- +# pylint: disable-msg=E1101,W0612 + +import pytest +import numpy as np +from functools import reduce +from collections import OrderedDict +from operator import __or__, __xor__, __and__, __sub__ + +from pandas import Series, concat +from pandas.util.testing import assert_series_equal +import pandas.util.testing as tm +import pandas.core.sets as sets + +ops = {'union': __or__, 'xor': __xor__, 'intersect': __and__, 'diff': __sub__} + + +class TestSetMethods(object): + + def test_api(self): + assert Series.set is sets.SetMethods + assert isinstance(Series([{1, 2}]).set, sets.SetMethods) + + @pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_self(self, opname): + s = Series([{1, 2}, {2, 4}, {3, 1}]) + exp = reduce(ops[opname], s.values) + assert getattr(s.set, opname)() == exp + + # with NaN + t = Series([{2, 3}, {1, 2}, np.nan]) + exp = reduce(ops[opname], t.dropna().values) + assert getattr(t.set, opname)() == exp + + @pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_broadcast(self, opname): + s = Series([{1, 2}, {2, 4}, {3, 1}]) + x = {3} + exp = Series(ops[opname](s.values, x), index=s.index) + assert_series_equal(getattr(s.set, opname)(x), exp) + + # with NaN + t = Series([{2, 3}, {1, 2}, np.nan]) + exp = Series(index=t.index) + exp.loc[t.notna()] = ops[opname](t.dropna().values, x) + assert_series_equal(getattr(t.set, opname)(x), exp) + + @pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_with_series(self, opname): + s = Series([{1, 2}, {2, 4}, {3, 1}]) + t = Series([{2, 3}, {1, 2}, np.nan]) + + exp = Series(index=s.index) + na_mask = s.isna() | t.isna() + exp.loc[~na_mask] = ops[opname](s.loc[~na_mask].values, + t.loc[~na_mask].values) + + assert_series_equal(getattr(s.set, opname)(t), exp) + + 
@pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_with_1darray(self, opname): + s = Series([{1, 2}, {2, 4}, {3, 1}]) + t = Series([{2, 3}, {1, 2}, np.nan]) + + exp = getattr(s.set, opname)(t) # tested in test_set_op_with_series + assert_series_equal(getattr(s.set, opname)(t.values), exp) + + # errors for incorrect lengths + rgx = 'If `others` contains arrays, these must all be of the same.*' + with tm.assert_raises_regex(ValueError, rgx): + getattr(s.set, opname)(t.iloc[:2].values) + + @pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_parameter_errors(self, opname): + s = Series([{1, 2}, {2, 4}, {3, 1}]) + + rgx = "The parameter 'fill_value' must be list-like!" + with tm.assert_raises_regex(ValueError, rgx): + getattr(s.set, opname)(fill_value=1) + + rgx = "Received illegal value for parameter 'errors'.*" + with tm.assert_raises_regex(ValueError, rgx): + getattr(s.set, opname)(errors='abcd') + + rgx = "Must pass Series for validating inputs of set accessor.*" + with tm.assert_raises_regex(ValueError, rgx): + s.set._validate(s.values) + + @pytest.mark.parametrize('fill_value', [None, set(), {5}]) + @pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_fill_value(self, opname, fill_value): + s = Series([{1, 2}, np.nan, {3, 1}]) + t = Series([{2, 3}, {1, 2}, np.nan]) + + # cannot use fillna with sets due to GH21329 + sf = s.copy() + sf.loc[sf.isna()] = [fill_value] * sf.isna().sum() + tf = t.copy() + tf.loc[tf.isna()] = [fill_value] * tf.isna().sum() + + exp = getattr(sf.set, opname)(tf) + assert_series_equal(getattr(s.set, opname)(t, fill_value=fill_value), + exp) + + @pytest.mark.parametrize('errors', + ['raise', 'ignore', 'coerce', 'wrap', 'skip']) + @pytest.mark.parametrize('opname', ops.keys()) + def test_conversion_caller(self, opname, errors): + u = Series([{1, 2}, [2, 4], 'abcd', 5, np.nan]) + ui = Series([{1, 2}, {2, 4}, np.nan, np.nan, np.nan]) # ignore + uc = Series([{1, 2}, {2, 4}, set('abcd'), np.nan, np.nan]) # 
coerce + uw = Series([{1, 2}, {2, 4}, {'abcd'}, {5}, np.nan]) # wrap + x = {3} + + if errors == 'raise': + rgx = '.*can only use .set accessor with values that.*' + with tm.assert_raises_regex(ValueError, rgx): + getattr(u.set, opname)(errors=errors) + elif errors == 'skip': # raw error from numpy + rgx = '.*unsupported operand type.*' + with tm.assert_raises_regex(TypeError, rgx): + getattr(u.set, opname)(errors=errors) + # but if series is already converted (like uw/uc here), skip works + exp = getattr(uc.set, opname)(uw) # default: errors='raise' + assert_series_equal(getattr(uc.set, opname)(uw, errors=errors), + exp) + else: # 'ignore', 'coerce', 'wrap' + if errors == 'ignore': + u_exp = ui + elif errors == 'coerce': + u_exp = uc + else: # 'wrap' + u_exp = uw + + # apply to self + exp = getattr(u_exp.set, opname)() + assert getattr(u.set, opname)(errors=errors) == exp + + # apply to single set with broadcasting + exp = getattr(u_exp.set, opname)(x) + assert_series_equal(getattr(u.set, opname)(x, errors=errors), exp) + + @pytest.mark.parametrize('errors', + ['raise', 'ignore', 'coerce', 'wrap', 'skip']) + @pytest.mark.parametrize('opname', ops.keys()) + def test_conversion_others(self, opname, errors): + s = Series([{1}, {2}, {3}, {4}, {5}]) + u = Series([{1, 2}, [2, 4], 'abcd', 5, np.nan]) + + if errors == 'raise': + rgx = '.*can only use .set accessor with values that.*' + with tm.assert_raises_regex(ValueError, rgx): + getattr(s.set, opname)(u, errors=errors) + elif errors == 'skip': # raw error from numpy + rgx = '.*unsupported operand type.*' + with tm.assert_raises_regex(TypeError, rgx): + getattr(s.set, opname)(u, errors=errors) + # but if series is already converted, skip works + u_conv = u.set.union(set(), errors='wrap') # explanation below + exp = getattr(s.set, opname)(u_conv) # default: errors='raise' + assert_series_equal(getattr(s.set, opname)(u_conv, errors=errors), + exp) + else: # 'ignore', 'coerce', 'wrap' + # union with set() does not change 
sets, only applies conversion + # correctness of this behavior tested in test_conversion_caller + u_exp = u.set.union(set(), errors=errors) + + exp = getattr(s.set, opname)(u_exp) + assert_series_equal(getattr(s.set, opname)(u, errors=errors), exp) + + @pytest.mark.parametrize('fill_value', [None, set(), {5}]) + @pytest.mark.parametrize('join', ['left', 'outer', 'inner', 'right']) + @pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_align(self, opname, join, fill_value): + s = Series([{1, 2}, {2, 4}, {3, 1}]) + t = Series([{2, 3}, {1, 2}, np.nan], index=[3, 2, 1]) + + sa, ta = s.align(t, join=join) + exp = getattr(sa.set, opname)(ta, fill_value=fill_value) + assert_series_equal(getattr(s.set, opname)(t, join=join, + fill_value=fill_value), exp) + + @pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_mixed_inputs(self, opname): + s = Series([{1, 2}, {2, 4}, {3, 1}]) + t = Series([None, {1, 2}, np.nan]) # test if NaN/None diff. matters + d = concat([t, s], axis=1) + + # all methods below are equivalent to sequential application + # (at least when indexes in rhs are all the same) + tmp = getattr(s.set, opname)(t) + exp = getattr(tmp.set, opname)(s) + + # Series with DataFrame + assert_series_equal(getattr(s.set, opname)(d), exp) + + # Series with two-dimensional ndarray + assert_series_equal(getattr(s.set, opname)(d.values), exp) + + # Series with list of Series + assert_series_equal(getattr(s.set, opname)([t, s]), exp) + + # Series with mixed list of Series/ndarray + assert_series_equal(getattr(s.set, opname)([t, s.values]), exp) + + # Series with iterator of Series + assert_series_equal(getattr(s.set, opname)(iter([t, s])), exp) + + # Series with dict_view of Series + dv = d.to_dict('series', into=OrderedDict).values() + assert_series_equal(getattr(s.set, opname)(dv), exp) + + # errors for incorrect lengths + rgx = 'If `others` contains arrays, these must all be' + + # two-dimensional ndarray + with tm.assert_raises_regex(ValueError, rgx): 
+ getattr(s.set, opname)(d.iloc[:2].values) + + # mixed list with Series/ndarray + with tm.assert_raises_regex(ValueError, rgx): + getattr(s.set, opname)([t, s.iloc[:2].values]) + + # errors for incorrect arguments in list-like + rgx = 'others must be Series, DataFrame, np.ndarrary or list-like.*' + + # mix of string and Series + with tm.assert_raises_regex(TypeError, rgx): + getattr(s.set, opname)([t, 't']) + + # DataFrame in list + with tm.assert_raises_regex(TypeError, rgx): + getattr(s.set, opname)([t, d]) + + # 2-dim ndarray in list + with tm.assert_raises_regex(TypeError, rgx): + getattr(s.set, opname)([t, d.values]) + + # nested lists + with tm.assert_raises_regex(TypeError, rgx): + getattr(s.set, opname)([t, [t, t]]) + + # forbidden input type, e.g. int + with tm.assert_raises_regex(TypeError, rgx): + getattr(s.set, opname)(1) + + @pytest.mark.parametrize('fill_value', [None, set(), {5}]) + @pytest.mark.parametrize('join', ['left', 'outer', 'inner', 'right']) + @pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_align_several(self, opname, join, fill_value): + # no differences in the indexes of the right-hand side yet! 
+ s = Series([{1, 2}, {2, 4}, {3, 1}]) + t = Series([{2, 3}, {1, 2}, np.nan], index=[3, 2, 1]) + d = concat([t, t], axis=1) + + sa, ta = s.align(t, join=join) + exp = getattr(sa.set, opname)([ta, ta], fill_value=fill_value) + + # list of Series + tm.assert_series_equal(getattr(s.set, opname)([t, t], join=join, + fill_value=fill_value), + exp) + + # DataFrame + tm.assert_series_equal(getattr(s.set, opname)(d, join=join, + fill_value=fill_value), + exp) + + @pytest.mark.parametrize('fill_value', [None, set(), {5}]) + @pytest.mark.parametrize('join', ['left', 'outer', 'inner', 'right']) + @pytest.mark.parametrize('opname', ops.keys()) + def test_set_op_align_mixed(self, opname, join, fill_value): + s = Series([{1, 2}, {2, 4}, {3, 1}]) + t = Series([{2, 3}, {1, 2}, np.nan], index=[3, 2, 1]) + u = Series([{5}, {3}, None], index=[2, 4, 1]) + + # the index of the right-hand side is the union of the rhs indexes, + # except for 'inner' - this is only really relevant for 'right', which + # would not have a well-defined index otherwise. + ta, ua = t.align(u, join=(join if join == 'inner' else 'outer')) + + # reuse case of same rhs-index; tested in test_set_op_align_several + exp = getattr(s.set, opname)([ta, ua], join=join, + fill_value=fill_value) + + # list of Series + tm.assert_series_equal(getattr(s.set, opname)([t, u], join=join, + fill_value=fill_value), + exp) + + # unindexed -> use index of caller + # reuses test directly above that differently-indexed series work + tu = Series(t.values, index=s.index) + exp = getattr(s.set, opname)([tu, u], join=join, + fill_value=fill_value) + + # mixed list of indexed/unindexed + tm.assert_series_equal(getattr(s.set, opname)([t.values, u], join=join, + fill_value=fill_value), + exp) + + @pytest.mark.parametrize('fill_value', [set(), {5}]) + def test_set_len(self, fill_value): + s = Series([{1, 2}, {2, 4}, {3, 1}]) + t = Series([{2, 3}, {1, 2}, np.nan]) + + rgx = "The parameter 'fill_value' must be list-like!" 
+ with tm.assert_raises_regex(ValueError, rgx): + s.set.len(fill_value=1) + + rgx = "Received illegal value for parameter 'errors'.*" + with tm.assert_raises_regex(ValueError, rgx): + s.set.len(errors='abcd') + + # no NaN + exp = s.map(len) + assert_series_equal(s.set.len(fill_value=fill_value), exp) + + # cannot use fillna with sets due to GH21329 + tf = t.copy() + tf.loc[tf.isna()] = [fill_value] * tf.isna().sum() + exp = tf.map(len) + assert_series_equal(t.set.len(fill_value=fill_value), exp) + + @pytest.mark.parametrize('errors', + ['raise', 'ignore', 'coerce', 'wrap', 'skip']) + def test_set_len_conversion(self, errors): + u = Series([{1, 2}, [2, 4], 'abcd', 5, np.nan]) + ui = Series([{1, 2}, {2, 4}, np.nan, np.nan, np.nan]) # ignore + uc = Series([{1, 2}, {2, 4}, set('abcd'), np.nan, np.nan]) # coerce + uw = Series([{1, 2}, {2, 4}, {'abcd'}, {5}, np.nan]) # wrap + + if errors == 'raise': + rgx = '.*can only use .set accessor with values that.*' + with tm.assert_raises_regex(ValueError, rgx): + u.set.len(errors=errors) + elif errors == 'skip': # raw error from trying len(5) + rgx = "object of type 'int' has no len()" + with tm.assert_raises_regex(TypeError, rgx): + u.set.len(errors=errors) + # but if series is already converted (like uc here), skip works + exp = uc.set.len() # default: errors='raise' + assert_series_equal(uc.set.len(errors=errors), exp) + else: # 'ignore', 'coerce', 'wrap' + if errors == 'ignore': + u_exp = ui + elif errors == 'coerce': + u_exp = uc + else: # 'wrap' + u_exp = uw + + assert_series_equal(u.set.len(errors=errors), u_exp.set.len())
Closes #4480, and a fair bit more that was discussed in comments. In particular, it makes the `numpy` methods NaN-safe, index aware, and adds a bit of wrapping convenience. This is still a WIP, so no tests yet. Also more work to do on docstrings, whatsnew, and some overview documentation similar to `pandas/doc/source/text.rst`. The implementation borrows heavily from `pandas/core/strings.py`. Before I write any tests etc., I'd like to have a discussion on the API - which functions to add (I only added union/intersection/difference/symmetric difference), how to name them, and what arguments they should have. I've followed the suggestions in #13877 (not yet realised for the `.str`-accessor) to have an `errors`- and a `fill_value`-parameter, as well as the `join`-parameter from #20347. Here's a sample API documentation: ![4](https://user-images.githubusercontent.com/33685575/41625235-661fbca6-7418-11e8-9b1d-2927e93b5522.png) Here's some more usage examples. The basic idea is to follow the numpy set methods, which work with `| & ^ - ` between arrays and arrays (as well as arrays and singletons), but are not NaN-safe. Basiscs: ``` s = pd.Series([{1, 2}, {2, 4}, {3, 1}]) s # 0 {1, 2} # 1 {2, 4} # 2 {1, 3} # dtype: object s.set.union() # apply (sequentially) to elements of calling Series # {1, 2, 3, 4} s.set.union({2}) # broadcast like "s.values | {2}" # 0 {1, 2} # 1 {2, 4} # 2 {1, 2, 3} # dtype: object ``` With another Series: ``` t = pd.Series([{2, 3}, {1, 2}, np.nan]) t # 0 {2, 3} # 1 {1, 2} # 2 NaN # dtype: object s.values | t.values # remember... 
# TypeError s.set.union(t) # 0 {1, 2, 3} # 1 {1, 2, 4} # 2 NaN # dtype: object s.set.union(t, fill_value={5}) # 0 {1, 2, 3} # 1 {1, 2, 4} # 2 {1, 3, 5} # dtype: object ``` With different indices (`fill_value` also works for NaNs created by alignment): ``` u = pd.Series(t.values, index=[1, 2, 3]) u # 1 {2, 3} # 2 {1, 2} # 3 NaN # dtype: object s.set.union(u) # 0 NaN # 1 {2, 3, 4} # 2 {1, 2, 3} # dtype: object s.set.union(u, join='outer') # 0 NaN # 1 {2, 3, 4} # 2 {1, 2, 3} # 3 NaN # dtype: object s.set.union(u, join='outer', fill_value={5}) # 0 {1, 2, 5} # 1 {2, 3, 4} # 2 {1, 2, 3} # 3 {5} # dtype: object ``` Finally, the behaviour of the `errors`-parameter. Since strings are not list-like, but *can* be coerced into a set, I made a distinction between `'coerce'` and `'wrap'`, which is the most permissive (but treats strings as singletons). ``` v = pd.Series([{1, 2}, [2, 3], 'abcd', 4, np.nan]) v # 0 {1, 2} # 1 [2, 3] # 2 abcd # 3 4 # 4 NaN # dtype: object v.set.union(set(), errors='raise') # default # ValueError v.set.union(set(), errors='ignore') # 0 {1, 2} # 1 {2, 3} # 2 NaN # 3 NaN # 4 NaN # dtype: object v.set.union(set(), errors='coerce') # 0 {1, 2} # 1 {2, 3} # 2 {a, c, b, d} # 3 NaN # 4 NaN # dtype: object v.set.union(set(), errors='wrap') # 0 {1, 2} # 1 {2, 3} # 2 {abcd} # 3 {4} # 4 NaN # dtype: object ```
https://api.github.com/repos/pandas-dev/pandas/pulls/21547
2018-06-19T21:33:23Z
2018-09-25T16:29:51Z
null
2018-12-26T16:58:23Z
BUG: Series.combine_first with datetime-tz dtype (#21469)
diff --git a/pandas/core/common.py b/pandas/core/common.py index 1de8269c9a0c6..ec516d9d80023 100644 --- a/pandas/core/common.py +++ b/pandas/core/common.py @@ -410,19 +410,6 @@ def _apply_if_callable(maybe_callable, obj, **kwargs): return maybe_callable -def _where_compat(mask, arr1, arr2): - if arr1.dtype == _NS_DTYPE and arr2.dtype == _NS_DTYPE: - new_vals = np.where(mask, arr1.view('i8'), arr2.view('i8')) - return new_vals.view(_NS_DTYPE) - - if arr1.dtype == _NS_DTYPE: - arr1 = tslib.ints_to_pydatetime(arr1.view('i8')) - if arr2.dtype == _NS_DTYPE: - arr2 = tslib.ints_to_pydatetime(arr2.view('i8')) - - return np.where(mask, arr1, arr2) - - def _dict_compat(d): """ Helper function to convert datetimelike-keyed dicts to Timestamp-keyed dict diff --git a/pandas/core/series.py b/pandas/core/series.py index 0450f28087f66..801a8ddeb67f7 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -29,13 +29,16 @@ is_hashable, is_iterator, is_dict_like, + is_dtype_equal, is_scalar, _is_unorderable_exception, _ensure_platform_int, - pandas_dtype) + pandas_dtype, + needs_i8_conversion) from pandas.core.dtypes.generic import ( ABCSparseArray, ABCDataFrame, ABCIndexClass) from pandas.core.dtypes.cast import ( + find_common_type, maybe_downcast_to_dtype, maybe_upcast, infer_dtype_from_scalar, maybe_convert_platform, maybe_cast_to_datetime, maybe_castable, @@ -2304,7 +2307,24 @@ def combine_first(self, other): other = other.reindex(new_index, copy=False) # TODO: do we need name? 
name = ops.get_op_result_name(self, other) # noqa - rs_vals = com._where_compat(isna(this), other._values, this._values) + if not is_dtype_equal(this.dtype, other.dtype): + new_dtype = find_common_type([this.dtype, other.dtype]) + if not is_dtype_equal(this.dtype, new_dtype): + this = this.astype(new_dtype) + if not is_dtype_equal(other.dtype, new_dtype): + other = other.astype(new_dtype) + + if needs_i8_conversion(this.dtype): + mask = isna(this) + this_values = this.values.view('i8') + other_values = other.values.view('i8') + else: + this_values = this.values + other_values = other.values + mask = isna(this_values) + + rs_vals = np.where(mask, other_values, this_values) + rs_vals = maybe_downcast_to_dtype(rs_vals, this.dtype) return self._constructor(rs_vals, index=new_index).__finalize__(self) def update(self, other): diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py index f35cce6ac9d71..d3e4720a756f1 100644 --- a/pandas/tests/series/test_combine_concat.py +++ b/pandas/tests/series/test_combine_concat.py @@ -170,6 +170,19 @@ def get_result_type(dtype, dtype2): ]).dtype assert result.kind == expected + def test_combine_first_dt_tz_values(self): + dts1 = pd.date_range('20150101', '20150105', tz='America/New_York') + df1 = pd.DataFrame({'date': dts1}) + dts2 = pd.date_range('20160514', '20160518', tz='America/New_York') + df2 = pd.DataFrame({'date': dts2}, index=range(3, 8)) + result = df1.date.combine_first(df2.date) + exp_vals = pd.DatetimeIndex(['20150101', '20150102', '20150103', + '20150104', '20150105', '20160516', + '20160517', '20160518'], + tz='America/New_York') + exp = pd.Series(exp_vals, name='date') + assert_series_equal(exp, result) + def test_concat_empty_series_dtypes(self): # booleans
- [x] closes #21469 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
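The essence of the new code path in the diff is a mask-based `np.where(mask, other_values, this_values)` after both operands are cast to a common dtype. As a hedged, plain-Python sketch of `combine_first`'s selection semantics (hypothetical helper, not the pandas implementation; `None` stands in for missing values):

```python
def combine_first(this, other):
    # Keep the caller's value at each position unless it is missing,
    # in which case fall back to the other series' value -- the same
    # selection np.where(isna(this), other, this) performs over arrays.
    return [o if t is None else t for t, o in zip(this, other)]

print(combine_first([1, None, 3], [10, 20, 30]))  # [1, 20, 3]
```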
https://api.github.com/repos/pandas-dev/pandas/pulls/21544
2018-06-19T15:38:23Z
2018-06-27T19:11:10Z
null
2018-06-27T19:11:46Z
Fixed HDFSTore.groups() performance.
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 5b3e607956f7a..41b18ea1c4634 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -18,7 +18,7 @@ Fixed Regressions - Fixed regression in :meth:`to_csv` when handling file-like object incorrectly (:issue:`21471`) - Bug in both :meth:`DataFrame.first_valid_index` and :meth:`Series.first_valid_index` raised for a row index having duplicate values (:issue:`21441`) -- +- .. _whatsnew_0232.performance: @@ -28,6 +28,9 @@ Performance Improvements - Improved performance of membership checks in :class:`CategoricalIndex` (i.e. ``x in ci``-style checks are much faster). :meth:`CategoricalIndex.contains` is likewise much faster (:issue:`21369`, :issue:`21508`) +- Improved performance of :meth:`HDFStore.groups` (and dependent functions like + :meth:`~HDFStore.keys`. (i.e. ``x in store`` checks are much faster) + (:issue:`21372`) - Improved performance of :meth:`MultiIndex.is_unique` (:issue:`21522`) - diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py index aa39e341792c7..aad387e0cdd58 100644 --- a/pandas/io/pytables.py +++ b/pandas/io/pytables.py @@ -1098,7 +1098,7 @@ def groups(self): _tables() self._check_if_open() return [ - g for g in self._handle.walk_nodes() + g for g in self._handle.walk_groups() if (not isinstance(g, _table_mod.link.Link) and (getattr(g._v_attrs, 'pandas_type', None) or getattr(g, 'table', None) or
No longer walks every node, but rather every group. - [x] closes #21372 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
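For intuition, here is a toy sketch (not PyTables code) of why iterating only groups beats iterating every node: in a typical HDF5 file each group contains many leaf nodes (tables/arrays), so the group walk touches far fewer objects. The `Node` class and the counts below are illustrative assumptions:

```python
class Node(object):
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.is_group = bool(self.children)  # leaves stand in for tables

def walk_nodes(node):
    # visits every node in the tree, leaves included
    yield node
    for child in node.children:
        yield from walk_nodes(child)

def walk_groups(node):
    # visits only group nodes, skipping the (numerous) leaves
    if node.is_group:
        yield node
        for child in node.children:
            yield from walk_groups(child)

root = Node('/', [Node('g1', [Node('t{}'.format(i)) for i in range(100)])])
print(sum(1 for _ in walk_nodes(root)))   # 102
print(sum(1 for _ in walk_groups(root)))  # 2
```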
https://api.github.com/repos/pandas-dev/pandas/pulls/21543
2018-06-19T14:51:55Z
2018-06-21T09:47:12Z
2018-06-21T09:47:12Z
2018-07-02T23:25:35Z
BUG: Fix group index calculation to prevent hitting maximum recursion depth (#21524)
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 0f2c9c4756987..a89a84a15bbdc 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -60,6 +60,7 @@ Bug Fixes - Bug in :meth:`Index.get_indexer_non_unique` with categorical key (:issue:`21448`) - Bug in comparison operations for :class:`MultiIndex` where error was raised on equality / inequality comparison involving a MultiIndex with ``nlevels == 1`` (:issue:`21149`) +- Bug in :func:`DataFrame.duplicated` with a large number of columns causing a 'maximum recursion depth exceeded' (:issue:`21524`). - **I/O** diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py index e550976d1deeb..212f44e55c489 100644 --- a/pandas/core/sorting.py +++ b/pandas/core/sorting.py @@ -52,7 +52,21 @@ def _int64_cut_off(shape): return i return len(shape) - def loop(labels, shape): + def maybe_lift(lab, size): + # promote nan values (assigned -1 label in lab array) + # so that all output values are non-negative + return (lab + 1, size + 1) if (lab == -1).any() else (lab, size) + + labels = map(_ensure_int64, labels) + if not xnull: + labels, shape = map(list, zip(*map(maybe_lift, labels, shape))) + + labels = list(labels) + shape = list(shape) + + # Iteratively process all the labels in chunks sized so less + # than _INT64_MAX unique int ids will be required for each chunk + while True: # how many levels can be done without overflow: nlev = _int64_cut_off(shape) @@ -74,7 +88,7 @@ def loop(labels, shape): out[mask] = -1 if nlev == len(shape): # all levels done! 
- return out + break # compress what has been done so far in order to avoid overflow # to retain lexical ranks, obs_ids should be sorted @@ -83,16 +97,7 @@ def loop(labels, shape): labels = [comp_ids] + labels[nlev:] shape = [len(obs_ids)] + shape[nlev:] - return loop(labels, shape) - - def maybe_lift(lab, size): # pormote nan values - return (lab + 1, size + 1) if (lab == -1).any() else (lab, size) - - labels = map(_ensure_int64, labels) - if not xnull: - labels, shape = map(list, zip(*map(maybe_lift, labels, shape))) - - return loop(list(labels), list(shape)) + return out def get_compressed_ids(labels, sizes): diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py index 6dc24ed856017..12ebdbe0fd3c7 100644 --- a/pandas/tests/frame/test_analytics.py +++ b/pandas/tests/frame/test_analytics.py @@ -1527,6 +1527,23 @@ def test_duplicated_with_misspelled_column_name(self, subset): with pytest.raises(KeyError): df.drop_duplicates(subset) + @pytest.mark.slow + def test_duplicated_do_not_fail_on_wide_dataframes(self): + # gh-21524 + # Given the wide dataframe with a lot of columns + # with different (important!) values + data = {'col_{0:02d}'.format(i): np.random.randint(0, 1000, 30000) + for i in range(100)} + df = pd.DataFrame(data).T + result = df.duplicated() + + # Then duplicates produce the bool pd.Series as a result + # and don't fail during calculation. + # Actual values doesn't matter here, though usually + # it's all False in this case + assert isinstance(result, pd.Series) + assert result.dtype == np.bool + def test_drop_duplicates_with_duplicate_column_names(self): # GH17836 df = DataFrame([
This just replaces a tail-recursive call with a simple loop. It should have no effect whatsoever on performance, but it prevents hitting the recursion limit on some input data (see the example in https://github.com/pandas-dev/pandas/issues/21524). - [x] closes #21524 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
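A minimal sketch of the transformation (toy functions, not the actual `get_group_index` code): the same chunked work is expressed first as a tail call and then as a loop, which keeps the stack depth constant no matter how many chunks there are:

```python
import sys

def reduce_recursive(chunks, acc=0):
    # pre-PR shape: handle one chunk, then recurse on the remainder;
    # each chunk costs one stack frame, so enough chunks overflow
    if not chunks:
        return acc
    return reduce_recursive(chunks[1:], acc + chunks[0])

def reduce_loop(chunks):
    # post-PR shape: identical work inside a plain loop,
    # constant stack depth regardless of input size
    acc = 0
    for chunk in chunks:
        acc += chunk
    return acc

many = [1] * (sys.getrecursionlimit() + 100)
print(reduce_loop(many))  # succeeds; reduce_recursive(many) would not
```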
https://api.github.com/repos/pandas-dev/pandas/pulls/21541
2018-06-19T09:12:43Z
2018-06-21T02:54:24Z
2018-06-21T02:54:24Z
2018-06-29T15:00:02Z
BUG: Fix json_normalize throwing TypeError (#21536)
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 0f2c9c4756987..56ce4300e4561 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -65,7 +65,7 @@ Bug Fixes **I/O** - Bug in :func:`read_csv` that caused it to incorrectly raise an error when ``nrows=0``, ``low_memory=True``, and ``index_col`` was not ``None`` (:issue:`21141`) -- +- Bug in :func:`json_normalize` when formatting the ``record_prefix`` with integer columns (:issue:`21536`) - **Plotting** diff --git a/pandas/io/json/normalize.py b/pandas/io/json/normalize.py index b845a43b9ca9e..2004a24c2ec5a 100644 --- a/pandas/io/json/normalize.py +++ b/pandas/io/json/normalize.py @@ -170,6 +170,11 @@ def json_normalize(data, record_path=None, meta=None, 3 Summit 1234 John Kasich Ohio OH 4 Cuyahoga 1337 John Kasich Ohio OH + >>> data = {'A': [1, 2]} + >>> json_normalize(data, 'A', record_prefix='Prefix.') + Prefix.0 + 0 1 + 1 2 """ def _pull_field(js, spec): result = js @@ -259,7 +264,8 @@ def _recursive_extract(data, path, seen_meta, level=0): result = DataFrame(records) if record_prefix is not None: - result.rename(columns=lambda x: record_prefix + x, inplace=True) + result = result.rename( + columns=lambda x: "{p}{c}".format(p=record_prefix, c=x)) # Data types, a problem for k, v in compat.iteritems(meta_vals): diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py index 395c2c90767d3..200a853c48900 100644 --- a/pandas/tests/io/json/test_normalize.py +++ b/pandas/tests/io/json/test_normalize.py @@ -123,6 +123,12 @@ def test_simple_normalize_with_separator(self, deep_nested): 'country', 'states_name']).sort_values() assert result.columns.sort_values().equals(expected) + def test_value_array_record_prefix(self): + # GH 21536 + result = json_normalize({'A': [1, 2]}, 'A', record_prefix='Prefix.') + expected = DataFrame([[1], [2]], columns=['Prefix.0']) + tm.assert_frame_equal(result, expected) + def 
test_more_deeply_nested(self, deep_nested): result = json_normalize(deep_nested, ['states', 'cities'],
Fix `json_normalize` throwing a `TypeError` when passed an array of values together with `record_prefix` (#21536) - [x] closes #21536 - [x] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
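The one-line fix is visible in the diff: the column-rename lambda switches from `+`-concatenation to `str.format`. A standalone sketch of why that matters (plain Python, no pandas needed) — when the records are a list of scalars, the resulting column labels are integers:

```python
record_prefix = 'Prefix.'
col = 0  # integer column label produced from a list of scalars

try:
    renamed = record_prefix + col  # the old rename lambda: TypeError
except TypeError:
    renamed = None

# the fixed rename lambda stringifies any label type
fixed = '{p}{c}'.format(p=record_prefix, c=col)
print(renamed, fixed)  # None Prefix.0
```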
https://api.github.com/repos/pandas-dev/pandas/pulls/21540
2018-06-19T09:09:28Z
2018-06-22T23:07:22Z
2018-06-22T23:07:22Z
2018-06-29T15:04:11Z
TST: Add test cases to confirm that .apply() works as expected
diff --git a/pandas/tests/sparse/frame/test_apply.py b/pandas/tests/sparse/frame/test_apply.py index 07e4b1bf7c913..c591d51c53341 100644 --- a/pandas/tests/sparse/frame/test_apply.py +++ b/pandas/tests/sparse/frame/test_apply.py @@ -1,5 +1,6 @@ import pytest import numpy as np +import pandas as pd from pandas import SparseDataFrame, DataFrame, Series, bdate_range from pandas.core import nanops from pandas.util import testing as tm @@ -90,3 +91,18 @@ def test_applymap(frame): # just test that it works result = frame.applymap(lambda x: x * 2) assert isinstance(result, SparseDataFrame) + + +def test_apply_toindex(): + # GH 21539 + tmp = np.array([[1, 1, 1], [1, 0, 0], [1, 1, 0], [1, 1, 0]]) + tmp_df = pd.DataFrame(tmp) + result = tmp_df.apply(lambda x: x[x == 1].index.tolist(), axis=1) + expected = pd.Series([[0, 1, 2], [0], [0, 1], [0, 1]]) + tm.assert_series_equal(result, expected) + + tmp2 = np.array([[5, 1, 3], [1, np.nan, 0], [1, 2, 0], [np.nan, 1, 0]]) + tmp2_df = pd.DataFrame(tmp2) + result = tmp2_df.apply(lambda x: x[x == 0].index.tolist()) + expected = pd.Series([[], [], [1, 2, 3]]) + tm.assert_series_equal(result, expected)
Adding test cases for closure of the issue: unexpected behaviour of the `apply` function. - [x] closes #21535 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21539
2018-06-19T09:00:49Z
2018-06-21T10:12:12Z
null
2018-06-21T10:12:12Z
DOC: remove grammar duplication in groupby docs
diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst index 1c4c3f93726a9..47d53c82b86f3 100644 --- a/doc/source/groupby.rst +++ b/doc/source/groupby.rst @@ -680,8 +680,7 @@ match the shape of the input array. data_range = lambda x: x.max() - x.min() ts.groupby(key).transform(data_range) -Alternatively the built-in methods can be could be used to produce the same -outputs +Alternatively, the built-in methods could be used to produce the same outputs. .. ipython:: python
"can be could be" -> "could be"
https://api.github.com/repos/pandas-dev/pandas/pulls/21534
2018-06-19T03:33:19Z
2018-06-19T08:30:48Z
2018-06-19T08:30:48Z
2018-06-19T12:20:22Z
Remove `daytime` attr, move `__getstate__` and `__setstate__` to base class
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx index 8caf9ea0e0389..3ca9bb307da9c 100644 --- a/pandas/_libs/tslibs/offsets.pyx +++ b/pandas/_libs/tslibs/offsets.pyx @@ -379,6 +379,45 @@ class _BaseOffset(object): 'got {n}'.format(n=n)) return nint + def __setstate__(self, state): + """Reconstruct an instance from a pickled state""" + if 'offset' in state: + # Older (<0.22.0) versions have offset attribute instead of _offset + if '_offset' in state: # pragma: no cover + raise AssertionError('Unexpected key `_offset`') + state['_offset'] = state.pop('offset') + state['kwds']['offset'] = state['_offset'] + + if '_offset' in state and not isinstance(state['_offset'], timedelta): + # relativedelta, we need to populate using its kwds + offset = state['_offset'] + odict = offset.__dict__ + kwds = {key: odict[key] for key in odict if odict[key]} + state.update(kwds) + + self.__dict__ = state + if 'weekmask' in state and 'holidays' in state: + calendar, holidays = _get_calendar(weekmask=self.weekmask, + holidays=self.holidays, + calendar=None) + self.calendar = calendar + self.holidays = holidays + + def __getstate__(self): + """Return a pickleable state""" + state = self.__dict__.copy() + + # we don't want to actually pickle the calendar object + # as its a np.busyday; we recreate on deserilization + if 'calendar' in state: + del state['calendar'] + try: + state['kwds'].pop('calendar') + except KeyError: + pass + + return state + class BaseOffset(_BaseOffset): # Here we add __rfoo__ methods that don't play well with cdef classes diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py index 2f4989f26b394..ffa2c0a5e3211 100644 --- a/pandas/tseries/offsets.py +++ b/pandas/tseries/offsets.py @@ -423,30 +423,6 @@ def _offset_str(self): def nanos(self): raise ValueError("{name} is a non-fixed frequency".format(name=self)) - def __setstate__(self, state): - """Reconstruct an instance from a pickled state""" - if 'offset' in state: - # 
Older (<0.22.0) versions have offset attribute instead of _offset - if '_offset' in state: # pragma: no cover - raise AssertionError('Unexpected key `_offset`') - state['_offset'] = state.pop('offset') - state['kwds']['offset'] = state['_offset'] - - if '_offset' in state and not isinstance(state['_offset'], timedelta): - # relativedelta, we need to populate using its kwds - offset = state['_offset'] - odict = offset.__dict__ - kwds = {key: odict[key] for key in odict if odict[key]} - state.update(kwds) - - self.__dict__ = state - if 'weekmask' in state and 'holidays' in state: - calendar, holidays = _get_calendar(weekmask=self.weekmask, - holidays=self.holidays, - calendar=None) - self.calendar = calendar - self.holidays = holidays - class SingleConstructorOffset(DateOffset): @classmethod @@ -494,21 +470,6 @@ def _repr_attrs(self): out += ': ' + ', '.join(attrs) return out - def __getstate__(self): - """Return a pickleable state""" - state = self.__dict__.copy() - - # we don't want to actually pickle the calendar object - # as its a np.busyday; we recreate on deserilization - if 'calendar' in state: - del state['calendar'] - try: - state['kwds'].pop('calendar') - except KeyError: - pass - - return state - class BusinessDay(BusinessMixin, SingleConstructorOffset): """ @@ -690,7 +651,6 @@ def _get_business_hours_by_sec(self): until = datetime(2014, 4, 1, self.end.hour, self.end.minute) return (until - dtstart).total_seconds() else: - self.daytime = False dtstart = datetime(2014, 4, 1, self.start.hour, self.start.minute) until = datetime(2014, 4, 2, self.end.hour, self.end.minute) return (until - dtstart).total_seconds()
If/when #18224 is revived to make `_BaseOffset` a `cdef class`, we'll need to move `__getstate__` and `__setstate__` into the base class (and then make some changes). This moves the two methods pre-emptively so that we'll have a smaller diff for the next steps. Also removes an attr `self.daytime` that is not used anywhere else.
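The pattern being relocated is the standard pickle hook pair: drop a non-picklable attribute in `__getstate__` and rebuild it in `__setstate__`. A self-contained sketch of that pattern (toy class, not the real offset machinery; the lambda stands in for the `np.busdaycalendar`, which is similarly recreated on deserialization):

```python
import pickle

class Offset(object):
    def __init__(self):
        self.n = 3
        self.calendar = lambda: 'busday stand-in'  # lambdas don't pickle

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop('calendar', None)  # exclude the unpicklable attribute
        return state

    def __setstate__(self, state):
        self.__dict__ = state
        self.calendar = lambda: 'busday stand-in'  # recreate on load

off = pickle.loads(pickle.dumps(Offset()))
print(off.n)  # 3
```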
https://api.github.com/repos/pandas-dev/pandas/pulls/21533
2018-06-19T01:04:42Z
2018-06-19T11:12:53Z
2018-06-19T11:12:53Z
2018-06-22T03:27:52Z
Fixed unicode issue (#21499) using the requests library
diff --git a/pandas/io/html.py b/pandas/io/html.py index 8fd876e85889f..3b43a136c5424 100644 --- a/pandas/io/html.py +++ b/pandas/io/html.py @@ -127,8 +127,8 @@ def _read(obj): raw_text : str """ if _is_url(obj): - with urlopen(obj) as url: - text = url.read() + import requests + text = requests.get(obj).content elif hasattr(obj, 'read'): text = obj.read() elif isinstance(obj, char_types): @@ -985,3 +985,4 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None, decimal=decimal, converters=converters, na_values=na_values, keep_default_na=keep_default_na, displayed_only=displayed_only) +
- [x] closes #21499 - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21532
2018-06-18T19:01:02Z
2018-06-18T23:12:51Z
null
2018-06-18T23:12:51Z
BUG: Allow IOErrors when attempting to retrieve default client encoding.
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 39ed5d968707b..49c63ac2e3f88 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -749,6 +749,7 @@ I/O - :func:`read_sas()` will parse numbers in sas7bdat-files that have width less than 8 bytes correctly. (:issue:`21616`) - :func:`read_sas()` will correctly parse sas7bdat files with many columns (:issue:`22628`) - :func:`read_sas()` will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (:issue:`16615`) +- Bug in :meth:`detect_client_encoding` where potential ``IOError`` goes unhandled when importing in a mod_wsgi process due to restricted access to stdout. (:issue:`21552`) Plotting ^^^^^^^^ diff --git a/pandas/io/formats/console.py b/pandas/io/formats/console.py index 45d50ea3fa073..b8b28a0b0c98c 100644 --- a/pandas/io/formats/console.py +++ b/pandas/io/formats/console.py @@ -21,7 +21,7 @@ def detect_console_encoding(): encoding = None try: encoding = sys.stdout.encoding or sys.stdin.encoding - except AttributeError: + except (AttributeError, IOError): pass # try again for something better diff --git a/pandas/tests/io/formats/test_console.py b/pandas/tests/io/formats/test_console.py new file mode 100644 index 0000000000000..055763bf62d6e --- /dev/null +++ b/pandas/tests/io/formats/test_console.py @@ -0,0 +1,74 @@ +import pytest + +from pandas.io.formats.console import detect_console_encoding + + +class MockEncoding(object): # TODO(py27): replace with mock + """ + Used to add a side effect when accessing the 'encoding' property. If the + side effect is a str in nature, the value will be returned. Otherwise, the + side effect should be an exception that will be raised. 
+ """ + def __init__(self, encoding): + super(MockEncoding, self).__init__() + self.val = encoding + + @property + def encoding(self): + return self.raise_or_return(self.val) + + @staticmethod + def raise_or_return(val): + if isinstance(val, str): + return val + else: + raise val + + +@pytest.mark.parametrize('empty,filled', [ + ['stdin', 'stdout'], + ['stdout', 'stdin'] +]) +def test_detect_console_encoding_from_stdout_stdin(monkeypatch, empty, filled): + # Ensures that when sys.stdout.encoding or sys.stdin.encoding is used when + # they have values filled. + # GH 21552 + with monkeypatch.context() as context: + context.setattr('sys.{}'.format(empty), MockEncoding('')) + context.setattr('sys.{}'.format(filled), MockEncoding(filled)) + assert detect_console_encoding() == filled + + +@pytest.mark.parametrize('encoding', [ + AttributeError, + IOError, + 'ascii' +]) +def test_detect_console_encoding_fallback_to_locale(monkeypatch, encoding): + # GH 21552 + with monkeypatch.context() as context: + context.setattr('locale.getpreferredencoding', lambda: 'foo') + context.setattr('sys.stdout', MockEncoding(encoding)) + assert detect_console_encoding() == 'foo' + + +@pytest.mark.parametrize('std,locale', [ + ['ascii', 'ascii'], + ['ascii', Exception], + [AttributeError, 'ascii'], + [AttributeError, Exception], + [IOError, 'ascii'], + [IOError, Exception] +]) +def test_detect_console_encoding_fallback_to_default(monkeypatch, std, locale): + # When both the stdout/stdin encoding and locale preferred encoding checks + # fail (or return 'ascii', we should default to the sys default encoding. + # GH 21552 + with monkeypatch.context() as context: + context.setattr( + 'locale.getpreferredencoding', + lambda: MockEncoding.raise_or_return(locale) + ) + context.setattr('sys.stdout', MockEncoding(std)) + context.setattr('sys.getdefaultencoding', lambda: 'sysDefaultEncoding') + assert detect_console_encoding() == 'sysDefaultEncoding'
- [x] closes #21552 When using mod_wsgi, access to `sys.stdout` is restricted by default. To handle this case, catch `IOError` in addition to the more specific `AttributeError`, so that the `IOError` thrown by mod_wsgi during encoding detection is handled.
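A self-contained sketch of the guarded lookup (toy stand-ins, not the pandas code): under mod_wsgi, merely touching `sys.stdout.encoding` raises `IOError`, so the except clause now covers it alongside `AttributeError`:

```python
class RestrictedStdout(object):
    # mimics mod_wsgi's restricted stdout: accessing .encoding raises
    @property
    def encoding(self):
        raise IOError('stdout access is restricted')

class PlainStdout(object):
    encoding = 'utf-8'

def detect_encoding(stdout):
    # minimal analogue of the fixed detect_console_encoding logic
    try:
        return stdout.encoding or 'fallback'
    except (AttributeError, IOError):
        return 'fallback'

print(detect_encoding(PlainStdout()))       # utf-8
print(detect_encoding(RestrictedStdout()))  # fallback
```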
https://api.github.com/repos/pandas-dev/pandas/pulls/21531
2018-06-18T18:26:25Z
2018-09-19T14:16:11Z
2018-09-19T14:16:11Z
2018-09-19T20:18:02Z
PERF: remove useless overrides
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index b8d865195cddd..f7e170cca039e 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -27,6 +27,7 @@ Performance Improvements - Improved performance of membership checks in :class:`CategoricalIndex` (i.e. ``x in ci``-style checks are much faster). :meth:`CategoricalIndex.contains` is likewise much faster (:issue:`21369`) +- Improved performance of :meth:`MultiIndex.is_unique` (:issue:`21522`) - Documentation Changes diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py index 75b6be96feb78..ab23a80acdaae 100644 --- a/pandas/core/indexes/multi.py +++ b/pandas/core/indexes/multi.py @@ -852,14 +852,6 @@ def _has_complex_internals(self): # to disable groupby tricks return True - @cache_readonly - def is_monotonic(self): - """ - return if the index is monotonic increasing (only equal or - increasing) values. - """ - return self.is_monotonic_increasing - @cache_readonly def is_monotonic_increasing(self): """ @@ -887,10 +879,6 @@ def is_monotonic_decreasing(self): # monotonic decreasing if and only if reverse is monotonic increasing return self[::-1].is_monotonic_increasing - @cache_readonly - def is_unique(self): - return not self.duplicated().any() - @cache_readonly def _have_mixed_levels(self): """ return a boolean list indicated if we have mixed levels """
- [x] closes #21522 - [x] tests passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry Asv run: ``` before after ratio [9e982e18] [705f0e3b] - 243±7ms 186±4ms 0.77 multiindex_object.GetLoc.time_large_get_loc_warm - 220±0.9ms 151±3ms 0.69 multiindex_object.GetLoc.time_large_get_loc - 173±1ms 101±2ms 0.59 multiindex_object.Integer.time_get_indexer SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY. ``` Notice that as of now the same _cannot_ be done for ``.is_monotonic_increasing`` and friends, because the sortedness of the ``MultiIndex`` corresponds to the sortedness of the underlying int index only if levels are themselves sorted.
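The caveat in the last paragraph, that sortedness of the underlying integer representation only tracks sortedness of the ``MultiIndex`` when the levels are themselves sorted, can be shown with a toy levels/codes pair. This is plain Python mimicking the storage scheme, not the actual pandas internals:

```python
# A MultiIndex stores each axis as (levels, codes): `levels` holds the
# distinct labels, `codes` holds integer positions into `levels`.
# Toy single-level example with *unsorted* levels:
levels = ["b", "a"]          # distinct labels, in order of appearance
codes = [0, 1]               # integer positions into `levels`

values = [levels[c] for c in codes]   # labels the index actually exposes

# The integer codes are monotonically increasing...
assert codes == sorted(codes)

# ...but the labels they decode to are not, so monotonicity of the codes
# says nothing about monotonicity of the index itself.
assert values != sorted(values)       # ['b', 'a'] is not sorted

# Uniqueness, by contrast, survives the decoding: distinct codes decode
# to distinct labels, which is why `is_unique` can safely be delegated
# to the underlying integer representation for the speedup above.
assert len(set(codes)) == len(codes) and len(set(values)) == len(values)
```

This is why the PR can drop the ``is_unique`` override (delegating to the faster engine-based check) but must keep the explicit ``is_monotonic_increasing`` implementation.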
https://api.github.com/repos/pandas-dev/pandas/pulls/21523
2018-06-18T13:07:25Z
2018-06-18T21:43:00Z
2018-06-18T21:42:59Z
2018-06-29T14:55:16Z
Fixing documentation lists indentation (#21518)
diff --git a/doc/source/api.rst b/doc/source/api.rst index 4faec93490fde..f2c00d5d12031 100644 --- a/doc/source/api.rst +++ b/doc/source/api.rst @@ -1200,9 +1200,9 @@ Attributes and underlying data ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ **Axes** - * **items**: axis 0; each item corresponds to a DataFrame contained inside - * **major_axis**: axis 1; the index (rows) of each of the DataFrames - * **minor_axis**: axis 2; the columns of each of the DataFrames +* **items**: axis 0; each item corresponds to a DataFrame contained inside +* **major_axis**: axis 1; the index (rows) of each of the DataFrames +* **minor_axis**: axis 2; the columns of each of the DataFrames .. autosummary:: :toctree: generated/ diff --git a/doc/source/basics.rst b/doc/source/basics.rst index 74f1d80c6fd3d..c460b19640f46 100644 --- a/doc/source/basics.rst +++ b/doc/source/basics.rst @@ -50,9 +50,8 @@ Attributes and the raw ndarray(s) pandas objects have a number of attributes enabling you to access the metadata - * **shape**: gives the axis dimensions of the object, consistent with ndarray - * Axis labels - +* **shape**: gives the axis dimensions of the object, consistent with ndarray +* Axis labels * **Series**: *index* (only axis) * **DataFrame**: *index* (rows) and *columns* * **Panel**: *items*, *major_axis*, and *minor_axis* @@ -131,9 +130,9 @@ Flexible binary operations With binary operations between pandas data structures, there are two key points of interest: - * Broadcasting behavior between higher- (e.g. DataFrame) and - lower-dimensional (e.g. Series) objects. - * Missing data in computations. +* Broadcasting behavior between higher- (e.g. DataFrame) and + lower-dimensional (e.g. Series) objects. +* Missing data in computations. We will demonstrate how to manage these issues independently, though they can be handled simultaneously. @@ -462,10 +461,10 @@ produce an object of the same size. 
Generally speaking, these methods take an **axis** argument, just like *ndarray.{sum, std, ...}*, but the axis can be specified by name or integer: - - **Series**: no axis argument needed - - **DataFrame**: "index" (axis=0, default), "columns" (axis=1) - - **Panel**: "items" (axis=0), "major" (axis=1, default), "minor" - (axis=2) +* **Series**: no axis argument needed +* **DataFrame**: "index" (axis=0, default), "columns" (axis=1) +* **Panel**: "items" (axis=0), "major" (axis=1, default), "minor" + (axis=2) For example: @@ -1187,11 +1186,11 @@ It is used to implement nearly all other features relying on label-alignment functionality. To *reindex* means to conform the data to match a given set of labels along a particular axis. This accomplishes several things: - * Reorders the existing data to match a new set of labels - * Inserts missing value (NA) markers in label locations where no data for - that label existed - * If specified, **fill** data for missing labels using logic (highly relevant - to working with time series data) +* Reorders the existing data to match a new set of labels +* Inserts missing value (NA) markers in label locations where no data for + that label existed +* If specified, **fill** data for missing labels using logic (highly relevant + to working with time series data) Here is a simple example: @@ -1911,10 +1910,10 @@ the axis indexes, since they are immutable) and returns a new object. Note that **it is seldom necessary to copy objects**. For example, there are only a handful of ways to alter a DataFrame *in-place*: - * Inserting, deleting, or modifying a column. - * Assigning to the ``index`` or ``columns`` attributes. - * For homogeneous data, directly modifying the values via the ``values`` - attribute or advanced indexing. +* Inserting, deleting, or modifying a column. +* Assigning to the ``index`` or ``columns`` attributes. +* For homogeneous data, directly modifying the values via the ``values`` + attribute or advanced indexing. 
To be clear, no pandas method has the side effect of modifying your data; almost every method returns a new object, leaving the original object @@ -2112,14 +2111,14 @@ Because the data was transposed the original inference stored all columns as obj The following functions are available for one dimensional object arrays or scalars to perform hard conversion of objects to a specified type: -- :meth:`~pandas.to_numeric` (conversion to numeric dtypes) +* :meth:`~pandas.to_numeric` (conversion to numeric dtypes) .. ipython:: python m = ['1.1', 2, 3] pd.to_numeric(m) -- :meth:`~pandas.to_datetime` (conversion to datetime objects) +* :meth:`~pandas.to_datetime` (conversion to datetime objects) .. ipython:: python @@ -2127,7 +2126,7 @@ hard conversion of objects to a specified type: m = ['2016-07-09', datetime.datetime(2016, 3, 2)] pd.to_datetime(m) -- :meth:`~pandas.to_timedelta` (conversion to timedelta objects) +* :meth:`~pandas.to_timedelta` (conversion to timedelta objects) .. ipython:: python diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst index c6827f67a390b..acab9de905540 100644 --- a/doc/source/categorical.rst +++ b/doc/source/categorical.rst @@ -542,11 +542,11 @@ Comparisons Comparing categorical data with other objects is possible in three cases: - * Comparing equality (``==`` and ``!=``) to a list-like object (list, Series, array, - ...) of the same length as the categorical data. - * All comparisons (``==``, ``!=``, ``>``, ``>=``, ``<``, and ``<=``) of categorical data to - another categorical Series, when ``ordered==True`` and the `categories` are the same. - * All comparisons of a categorical data to a scalar. +* Comparing equality (``==`` and ``!=``) to a list-like object (list, Series, array, + ...) of the same length as the categorical data. +* All comparisons (``==``, ``!=``, ``>``, ``>=``, ``<``, and ``<=``) of categorical data to + another categorical Series, when ``ordered==True`` and the `categories` are the same. 
+* All comparisons of a categorical data to a scalar. All other comparisons, especially "non-equality" comparisons of two categoricals with different categories or a categorical with any list-like object, will raise a ``TypeError``. diff --git a/doc/source/comparison_with_r.rst b/doc/source/comparison_with_r.rst index a7586f623a160..eecacde8ad14e 100644 --- a/doc/source/comparison_with_r.rst +++ b/doc/source/comparison_with_r.rst @@ -18,11 +18,11 @@ was started to provide a more detailed look at the `R language party libraries as they relate to ``pandas``. In comparisons with R and CRAN libraries, we care about the following things: - - **Functionality / flexibility**: what can/cannot be done with each tool - - **Performance**: how fast are operations. Hard numbers/benchmarks are - preferable - - **Ease-of-use**: Is one tool easier/harder to use (you may have to be - the judge of this, given side-by-side code comparisons) +* **Functionality / flexibility**: what can/cannot be done with each tool +* **Performance**: how fast are operations. Hard numbers/benchmarks are + preferable +* **Ease-of-use**: Is one tool easier/harder to use (you may have to be + the judge of this, given side-by-side code comparisons) This page is also here to offer a bit of a translation guide for users of these R packages. diff --git a/doc/source/computation.rst b/doc/source/computation.rst index ff06c369e1897..5e7b8be5f8af0 100644 --- a/doc/source/computation.rst +++ b/doc/source/computation.rst @@ -344,20 +344,20 @@ The weights used in the window are specified by the ``win_type`` keyword. 
The list of recognized types are the `scipy.signal window functions <https://docs.scipy.org/doc/scipy/reference/signal.html#window-functions>`__: -- ``boxcar`` -- ``triang`` -- ``blackman`` -- ``hamming`` -- ``bartlett`` -- ``parzen`` -- ``bohman`` -- ``blackmanharris`` -- ``nuttall`` -- ``barthann`` -- ``kaiser`` (needs beta) -- ``gaussian`` (needs std) -- ``general_gaussian`` (needs power, width) -- ``slepian`` (needs width). +* ``boxcar`` +* ``triang`` +* ``blackman`` +* ``hamming`` +* ``bartlett`` +* ``parzen`` +* ``bohman`` +* ``blackmanharris`` +* ``nuttall`` +* ``barthann`` +* ``kaiser`` (needs beta) +* ``gaussian`` (needs std) +* ``general_gaussian`` (needs power, width) +* ``slepian`` (needs width). .. ipython:: python @@ -537,10 +537,10 @@ Binary Window Functions two ``Series`` or any combination of ``DataFrame/Series`` or ``DataFrame/DataFrame``. Here is the behavior in each case: -- two ``Series``: compute the statistic for the pairing. -- ``DataFrame/Series``: compute the statistics for each column of the DataFrame +* two ``Series``: compute the statistic for the pairing. +* ``DataFrame/Series``: compute the statistics for each column of the DataFrame with the passed Series, thus returning a DataFrame. -- ``DataFrame/DataFrame``: by default compute the statistic for matching column +* ``DataFrame/DataFrame``: by default compute the statistic for matching column names, returning a DataFrame. If the keyword argument ``pairwise=True`` is passed then computes the statistic for each pair of columns, returning a ``MultiIndexed DataFrame`` whose ``index`` are the dates in question (see :ref:`the next section @@ -741,10 +741,10 @@ Aside from not having a ``window`` parameter, these functions have the same interfaces as their ``.rolling`` counterparts. Like above, the parameters they all accept are: -- ``min_periods``: threshold of non-null data points to require. Defaults to +* ``min_periods``: threshold of non-null data points to require. 
Defaults to minimum needed to compute statistic. No ``NaNs`` will be output once ``min_periods`` non-null data points have been seen. -- ``center``: boolean, whether to set the labels at the center (default is False). +* ``center``: boolean, whether to set the labels at the center (default is False). .. _stats.moments.expanding.note: .. note:: @@ -903,12 +903,12 @@ of an EW moment: One must specify precisely one of **span**, **center of mass**, **half-life** and **alpha** to the EW functions: -- **Span** corresponds to what is commonly called an "N-day EW moving average". -- **Center of mass** has a more physical interpretation and can be thought of +* **Span** corresponds to what is commonly called an "N-day EW moving average". +* **Center of mass** has a more physical interpretation and can be thought of in terms of span: :math:`c = (s - 1) / 2`. -- **Half-life** is the period of time for the exponential weight to reduce to +* **Half-life** is the period of time for the exponential weight to reduce to one half. -- **Alpha** specifies the smoothing factor directly. +* **Alpha** specifies the smoothing factor directly. Here is an example for a univariate time series: diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst index 6ae93ba46fa5c..ff06d024740bf 100644 --- a/doc/source/contributing.rst +++ b/doc/source/contributing.rst @@ -138,11 +138,11 @@ steps; you only need to install the compiler. For Windows developers, the following links may be helpful. 
-- https://blogs.msdn.microsoft.com/pythonengineering/2016/04/11/unable-to-find-vcvarsall-bat/ -- https://github.com/conda/conda-recipes/wiki/Building-from-Source-on-Windows-32-bit-and-64-bit -- https://cowboyprogrammer.org/building-python-wheels-for-windows/ -- https://blog.ionelmc.ro/2014/12/21/compiling-python-extensions-on-windows/ -- https://support.enthought.com/hc/en-us/articles/204469260-Building-Python-extensions-with-Canopy +* https://blogs.msdn.microsoft.com/pythonengineering/2016/04/11/unable-to-find-vcvarsall-bat/ +* https://github.com/conda/conda-recipes/wiki/Building-from-Source-on-Windows-32-bit-and-64-bit +* https://cowboyprogrammer.org/building-python-wheels-for-windows/ +* https://blog.ionelmc.ro/2014/12/21/compiling-python-extensions-on-windows/ +* https://support.enthought.com/hc/en-us/articles/204469260-Building-Python-extensions-with-Canopy Let us know if you have any difficulties by opening an issue or reaching out on `Gitter`_. @@ -155,11 +155,11 @@ Creating a Python Environment Now that you have a C compiler, create an isolated pandas development environment: -- Install either `Anaconda <https://www.anaconda.com/download/>`_ or `miniconda +* Install either `Anaconda <https://www.anaconda.com/download/>`_ or `miniconda <https://conda.io/miniconda.html>`_ -- Make sure your conda is up to date (``conda update conda``) -- Make sure that you have :ref:`cloned the repository <contributing.forking>` -- ``cd`` to the *pandas* source directory +* Make sure your conda is up to date (``conda update conda``) +* Make sure that you have :ref:`cloned the repository <contributing.forking>` +* ``cd`` to the *pandas* source directory We'll now kick off a three-step process: @@ -286,7 +286,7 @@ complex changes to the documentation as well. 
Some other important things to know about the docs: -- The *pandas* documentation consists of two parts: the docstrings in the code +* The *pandas* documentation consists of two parts: the docstrings in the code itself and the docs in this folder ``pandas/doc/``. The docstrings provide a clear explanation of the usage of the individual @@ -294,7 +294,7 @@ Some other important things to know about the docs: overviews per topic together with some other information (what's new, installation, etc). -- The docstrings follow a pandas convention, based on the **Numpy Docstring +* The docstrings follow a pandas convention, based on the **Numpy Docstring Standard**. Follow the :ref:`pandas docstring guide <docstring>` for detailed instructions on how to write a correct docstring. @@ -303,7 +303,7 @@ Some other important things to know about the docs: contributing_docstring.rst -- The tutorials make heavy use of the `ipython directive +* The tutorials make heavy use of the `ipython directive <http://matplotlib.org/sampledoc/ipython_directive.html>`_ sphinx extension. This directive lets you put code in the documentation which will be run during the doc build. For example:: @@ -324,7 +324,7 @@ Some other important things to know about the docs: doc build. This approach means that code examples will always be up to date, but it does make the doc building a bit more complex. -- Our API documentation in ``doc/source/api.rst`` houses the auto-generated +* Our API documentation in ``doc/source/api.rst`` houses the auto-generated documentation from the docstrings. For classes, there are a few subtleties around controlling which methods and attributes have pages auto-generated. @@ -488,8 +488,8 @@ standard. Google provides an open source style checker called ``cpplint``, but w use a fork of it that can be found `here <https://github.com/cpplint/cpplint>`__. 
Here are *some* of the more common ``cpplint`` issues: - - we restrict line-length to 80 characters to promote readability - - every header file must include a header guard to avoid name collisions if re-included +* we restrict line-length to 80 characters to promote readability +* every header file must include a header guard to avoid name collisions if re-included :ref:`Continuous Integration <contributing.ci>` will run the `cpplint <https://pypi.org/project/cpplint>`_ tool @@ -536,8 +536,8 @@ Python (PEP8) There are several tools to ensure you abide by this standard. Here are *some* of the more common ``PEP8`` issues: - - we restrict line-length to 79 characters to promote readability - - passing arguments should have spaces after commas, e.g. ``foo(arg1, arg2, kw1='bar')`` +* we restrict line-length to 79 characters to promote readability +* passing arguments should have spaces after commas, e.g. ``foo(arg1, arg2, kw1='bar')`` :ref:`Continuous Integration <contributing.ci>` will run the `flake8 <https://pypi.org/project/flake8>`_ tool @@ -715,14 +715,14 @@ Using ``pytest`` Here is an example of a self-contained set of tests that illustrate multiple features that we like to use. -- functional style: tests are like ``test_*`` and *only* take arguments that are either fixtures or parameters -- ``pytest.mark`` can be used to set metadata on test functions, e.g. ``skip`` or ``xfail``. -- using ``parametrize``: allow testing of multiple cases -- to set a mark on a parameter, ``pytest.param(..., marks=...)`` syntax should be used -- ``fixture``, code for object construction, on a per-test basis -- using bare ``assert`` for scalars and truth-testing -- ``tm.assert_series_equal`` (and its counter part ``tm.assert_frame_equal``), for pandas object comparisons. 
-- the typical pattern of constructing an ``expected`` and comparing versus the ``result`` +* functional style: tests are like ``test_*`` and *only* take arguments that are either fixtures or parameters +* ``pytest.mark`` can be used to set metadata on test functions, e.g. ``skip`` or ``xfail``. +* using ``parametrize``: allow testing of multiple cases +* to set a mark on a parameter, ``pytest.param(..., marks=...)`` syntax should be used +* ``fixture``, code for object construction, on a per-test basis +* using bare ``assert`` for scalars and truth-testing +* ``tm.assert_series_equal`` (and its counter part ``tm.assert_frame_equal``), for pandas object comparisons. +* the typical pattern of constructing an ``expected`` and comparing versus the ``result`` We would name this file ``test_cool_feature.py`` and put in an appropriate place in the ``pandas/tests/`` structure. @@ -969,21 +969,21 @@ Finally, commit your changes to your local repository with an explanatory messag uses a convention for commit message prefixes and layout. Here are some common prefixes along with general guidelines for when to use them: - * ENH: Enhancement, new functionality - * BUG: Bug fix - * DOC: Additions/updates to documentation - * TST: Additions/updates to tests - * BLD: Updates to the build process/scripts - * PERF: Performance improvement - * CLN: Code cleanup +* ENH: Enhancement, new functionality +* BUG: Bug fix +* DOC: Additions/updates to documentation +* TST: Additions/updates to tests +* BLD: Updates to the build process/scripts +* PERF: Performance improvement +* CLN: Code cleanup The following defines how a commit message should be structured. Please reference the relevant GitHub issues in your commit message using GH1234 or #1234. Either style is fine, but the former is generally preferred: - * a subject line with `< 80` chars. - * One blank line. - * Optionally, a commit message body. +* a subject line with `< 80` chars. +* One blank line. 
+* Optionally, a commit message body. Now you can commit your changes in your local repository:: diff --git a/doc/source/contributing_docstring.rst b/doc/source/contributing_docstring.rst index 4dec2a23facca..afb554aeffbc3 100644 --- a/doc/source/contributing_docstring.rst +++ b/doc/source/contributing_docstring.rst @@ -68,7 +68,7 @@ As PEP-257 is quite open, and some other standards exist on top of it. In the case of pandas, the numpy docstring convention is followed. The conventions is explained in this document: -- `numpydoc docstring guide <http://numpydoc.readthedocs.io/en/latest/format.html>`_ +* `numpydoc docstring guide <http://numpydoc.readthedocs.io/en/latest/format.html>`_ (which is based in the original `Guide to NumPy/SciPy documentation <https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt>`_) @@ -78,9 +78,9 @@ The standard uses reStructuredText (reST). reStructuredText is a markup language that allows encoding styles in plain text files. Documentation about reStructuredText can be found in: -- `Sphinx reStructuredText primer <http://www.sphinx-doc.org/en/stable/rest.html>`_ -- `Quick reStructuredText reference <http://docutils.sourceforge.net/docs/user/rst/quickref.html>`_ -- `Full reStructuredText specification <http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html>`_ +* `Sphinx reStructuredText primer <http://www.sphinx-doc.org/en/stable/rest.html>`_ +* `Quick reStructuredText reference <http://docutils.sourceforge.net/docs/user/rst/quickref.html>`_ +* `Full reStructuredText specification <http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html>`_ Pandas has some helpers for sharing docstrings between related classes, see :ref:`docstring.sharing`. @@ -107,12 +107,12 @@ In rare occasions reST styles like bold text or italics will be used in docstrings, but is it common to have inline code, which is presented between backticks. 
It is considered inline code: -- The name of a parameter -- Python code, a module, function, built-in, type, literal... (e.g. ``os``, +* The name of a parameter +* Python code, a module, function, built-in, type, literal... (e.g. ``os``, ``list``, ``numpy.abs``, ``datetime.date``, ``True``) -- A pandas class (in the form ``:class:`pandas.Series```) -- A pandas method (in the form ``:meth:`pandas.Series.sum```) -- A pandas function (in the form ``:func:`pandas.to_datetime```) +* A pandas class (in the form ``:class:`pandas.Series```) +* A pandas method (in the form ``:meth:`pandas.Series.sum```) +* A pandas function (in the form ``:func:`pandas.to_datetime```) .. note:: To display only the last component of the linked class, method or @@ -352,71 +352,71 @@ When specifying the parameter types, Python built-in data types can be used directly (the Python type is preferred to the more verbose string, integer, boolean, etc): -- int -- float -- str -- bool +* int +* float +* str +* bool For complex types, define the subtypes. For `dict` and `tuple`, as more than one type is present, we use the brackets to help read the type (curly brackets for `dict` and normal brackets for `tuple`): -- list of int -- dict of {str : int} -- tuple of (str, int, int) -- tuple of (str,) -- set of str +* list of int +* dict of {str : int} +* tuple of (str, int, int) +* tuple of (str,) +* set of str In case where there are just a set of values allowed, list them in curly brackets and separated by commas (followed by a space). If the values are ordinal and they have an order, list them in this order. 
Otherwise, list the default value first, if there is one: -- {0, 10, 25} -- {'simple', 'advanced'} -- {'low', 'medium', 'high'} -- {'cat', 'dog', 'bird'} +* {0, 10, 25} +* {'simple', 'advanced'} +* {'low', 'medium', 'high'} +* {'cat', 'dog', 'bird'} If the type is defined in a Python module, the module must be specified: -- datetime.date -- datetime.datetime -- decimal.Decimal +* datetime.date +* datetime.datetime +* decimal.Decimal If the type is in a package, the module must be also specified: -- numpy.ndarray -- scipy.sparse.coo_matrix +* numpy.ndarray +* scipy.sparse.coo_matrix If the type is a pandas type, also specify pandas except for Series and DataFrame: -- Series -- DataFrame -- pandas.Index -- pandas.Categorical -- pandas.SparseArray +* Series +* DataFrame +* pandas.Index +* pandas.Categorical +* pandas.SparseArray If the exact type is not relevant, but must be compatible with a numpy array, array-like can be specified. If Any type that can be iterated is accepted, iterable can be used: -- array-like -- iterable +* array-like +* iterable If more than one type is accepted, separate them by commas, except the last two types, that need to be separated by the word 'or': -- int or float -- float, decimal.Decimal or None -- str or list of str +* int or float +* float, decimal.Decimal or None +* str or list of str If ``None`` is one of the accepted values, it always needs to be the last in the list. For axis, the convention is to use something like: -- axis : {0 or 'index', 1 or 'columns', None}, default None +* axis : {0 or 'index', 1 or 'columns', None}, default None .. _docstring.returns: diff --git a/doc/source/developer.rst b/doc/source/developer.rst index b8bb2b2fcbe2f..f76af394abc48 100644 --- a/doc/source/developer.rst +++ b/doc/source/developer.rst @@ -81,20 +81,20 @@ The ``metadata`` field is ``None`` except for: omitted it is assumed to be nanoseconds. 
* ``categorical``: ``{'num_categories': K, 'ordered': is_ordered, 'type': $TYPE}`` - * Here ``'type'`` is optional, and can be a nested pandas type specification - here (but not categorical) + * Here ``'type'`` is optional, and can be a nested pandas type specification + here (but not categorical) * ``unicode``: ``{'encoding': encoding}`` - * The encoding is optional, and if not present is UTF-8 + * The encoding is optional, and if not present is UTF-8 * ``object``: ``{'encoding': encoding}``. Objects can be serialized and stored in ``BYTE_ARRAY`` Parquet columns. The encoding can be one of: - * ``'pickle'`` - * ``'msgpack'`` - * ``'bson'`` - * ``'json'`` + * ``'pickle'`` + * ``'msgpack'`` + * ``'bson'`` + * ``'json'`` * ``timedelta``: ``{'unit': 'ns'}``. The ``'unit'`` is optional, and if omitted it is assumed to be nanoseconds. This metadata is optional altogether diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst index 4d8e7979060f4..efa52a6f7cfe2 100644 --- a/doc/source/dsintro.rst +++ b/doc/source/dsintro.rst @@ -51,9 +51,9 @@ labels are collectively referred to as the **index**. The basic method to create Here, ``data`` can be many different things: - - a Python dict - - an ndarray - - a scalar value (like 5) +* a Python dict +* an ndarray +* a scalar value (like 5) The passed **index** is a list of axis labels. Thus, this separates into a few cases depending on what **data is**: @@ -246,12 +246,12 @@ potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object. 
Like Series, DataFrame accepts many different kinds of input: - - Dict of 1D ndarrays, lists, dicts, or Series - - 2-D numpy.ndarray - - `Structured or record - <http://docs.scipy.org/doc/numpy/user/basics.rec.html>`__ ndarray - - A ``Series`` - - Another ``DataFrame`` +* Dict of 1D ndarrays, lists, dicts, or Series +* 2-D numpy.ndarray +* `Structured or record + <http://docs.scipy.org/doc/numpy/user/basics.rec.html>`__ ndarray +* A ``Series`` +* Another ``DataFrame`` Along with the data, you can optionally pass **index** (row labels) and **columns** (column labels) arguments. If you pass an index and / or columns, @@ -842,10 +842,10 @@ econometric analysis of panel data. However, for the strict purposes of slicing and dicing a collection of DataFrame objects, you may find the axis names slightly arbitrary: - - **items**: axis 0, each item corresponds to a DataFrame contained inside - - **major_axis**: axis 1, it is the **index** (rows) of each of the - DataFrames - - **minor_axis**: axis 2, it is the **columns** of each of the DataFrames +* **items**: axis 0, each item corresponds to a DataFrame contained inside +* **major_axis**: axis 1, it is the **index** (rows) of each of the + DataFrames +* **minor_axis**: axis 2, it is the **columns** of each of the DataFrames Construction of Panels works about like you would expect: diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst index f683fd6892ea5..4e15f9069de67 100644 --- a/doc/source/ecosystem.rst +++ b/doc/source/ecosystem.rst @@ -159,14 +159,14 @@ See more in the `pandas-datareader docs <https://pandas-datareader.readthedocs. The following data feeds are available: - * Yahoo! Finance - * Google Finance - * FRED - * Fama/French - * World Bank - * OECD - * Eurostat - * EDGAR Index +* Yahoo! 
Finance +* Google Finance +* FRED +* Fama/French +* World Bank +* OECD +* Eurostat +* EDGAR Index `quandl/Python <https://github.com/quandl/Python>`__ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/source/enhancingperf.rst b/doc/source/enhancingperf.rst index 979d025111df1..8f8a9fe3e50e0 100644 --- a/doc/source/enhancingperf.rst +++ b/doc/source/enhancingperf.rst @@ -461,15 +461,15 @@ Supported Syntax These operations are supported by :func:`pandas.eval`: -- Arithmetic operations except for the left shift (``<<``) and right shift +* Arithmetic operations except for the left shift (``<<``) and right shift (``>>``) operators, e.g., ``df + 2 * pi / s ** 4 % 42 - the_golden_ratio`` -- Comparison operations, including chained comparisons, e.g., ``2 < df < df2`` -- Boolean operations, e.g., ``df < df2 and df3 < df4 or not df_bool`` -- ``list`` and ``tuple`` literals, e.g., ``[1, 2]`` or ``(1, 2)`` -- Attribute access, e.g., ``df.a`` -- Subscript expressions, e.g., ``df[0]`` -- Simple variable evaluation, e.g., ``pd.eval('df')`` (this is not very useful) -- Math functions: `sin`, `cos`, `exp`, `log`, `expm1`, `log1p`, +* Comparison operations, including chained comparisons, e.g., ``2 < df < df2`` +* Boolean operations, e.g., ``df < df2 and df3 < df4 or not df_bool`` +* ``list`` and ``tuple`` literals, e.g., ``[1, 2]`` or ``(1, 2)`` +* Attribute access, e.g., ``df.a`` +* Subscript expressions, e.g., ``df[0]`` +* Simple variable evaluation, e.g., ``pd.eval('df')`` (this is not very useful) +* Math functions: `sin`, `cos`, `exp`, `log`, `expm1`, `log1p`, `sqrt`, `sinh`, `cosh`, `tanh`, `arcsin`, `arccos`, `arctan`, `arccosh`, `arcsinh`, `arctanh`, `abs` and `arctan2`. @@ -477,22 +477,22 @@ This Python syntax is **not** allowed: * Expressions - - Function calls other than math functions. 
- - ``is``/``is not`` operations - - ``if`` expressions - - ``lambda`` expressions - - ``list``/``set``/``dict`` comprehensions - - Literal ``dict`` and ``set`` expressions - - ``yield`` expressions - - Generator expressions - - Boolean expressions consisting of only scalar values + * Function calls other than math functions. + * ``is``/``is not`` operations + * ``if`` expressions + * ``lambda`` expressions + * ``list``/``set``/``dict`` comprehensions + * Literal ``dict`` and ``set`` expressions + * ``yield`` expressions + * Generator expressions + * Boolean expressions consisting of only scalar values * Statements - - Neither `simple <https://docs.python.org/3/reference/simple_stmts.html>`__ - nor `compound <https://docs.python.org/3/reference/compound_stmts.html>`__ - statements are allowed. This includes things like ``for``, ``while``, and - ``if``. + * Neither `simple <https://docs.python.org/3/reference/simple_stmts.html>`__ + nor `compound <https://docs.python.org/3/reference/compound_stmts.html>`__ + statements are allowed. This includes things like ``for``, ``while``, and + ``if``. diff --git a/doc/source/extending.rst b/doc/source/extending.rst index 431c69bc0b6b5..8018d35770924 100644 --- a/doc/source/extending.rst +++ b/doc/source/extending.rst @@ -167,9 +167,9 @@ you can retain subclasses through ``pandas`` data manipulations. There are 3 constructor properties to be defined: -- ``_constructor``: Used when a manipulation result has the same dimensions as the original. -- ``_constructor_sliced``: Used when a manipulation result has one lower dimension(s) as the original, such as ``DataFrame`` single columns slicing. -- ``_constructor_expanddim``: Used when a manipulation result has one higher dimension as the original, such as ``Series.to_frame()`` and ``DataFrame.to_panel()``. +* ``_constructor``: Used when a manipulation result has the same dimensions as the original. 
+* ``_constructor_sliced``: Used when a manipulation result has one lower dimension(s) as the original, such as ``DataFrame`` single columns slicing. +* ``_constructor_expanddim``: Used when a manipulation result has one higher dimension as the original, such as ``Series.to_frame()`` and ``DataFrame.to_panel()``. Following table shows how ``pandas`` data structures define constructor properties by default. diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst index b7042ef390018..79e312ca12833 100644 --- a/doc/source/gotchas.rst +++ b/doc/source/gotchas.rst @@ -193,9 +193,9 @@ Choice of ``NA`` representation For lack of ``NA`` (missing) support from the ground up in NumPy and Python in general, we were given the difficult choice between either: -- A *masked array* solution: an array of data and an array of boolean values +* A *masked array* solution: an array of data and an array of boolean values indicating whether a value is there or is missing. -- Using a special sentinel value, bit pattern, or set of sentinel values to +* Using a special sentinel value, bit pattern, or set of sentinel values to denote ``NA`` across the dtypes. For many reasons we chose the latter. After years of production use it has diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst index 1c4c3f93726a9..299fbfd12baa8 100644 --- a/doc/source/groupby.rst +++ b/doc/source/groupby.rst @@ -22,36 +22,36 @@ Group By: split-apply-combine By "group by" we are referring to a process involving one or more of the following steps: - - **Splitting** the data into groups based on some criteria. - - **Applying** a function to each group independently. - - **Combining** the results into a data structure. +* **Splitting** the data into groups based on some criteria. +* **Applying** a function to each group independently. +* **Combining** the results into a data structure. Out of these, the split step is the most straightforward. 
In fact, in many situations we may wish to split the data set into groups and do something with those groups. In the apply step, we might wish to one of the following: - - **Aggregation**: compute a summary statistic (or statistics) for each - group. Some examples: +* **Aggregation**: compute a summary statistic (or statistics) for each + group. Some examples: - - Compute group sums or means. - - Compute group sizes / counts. + * Compute group sums or means. + * Compute group sizes / counts. - - **Transformation**: perform some group-specific computations and return a - like-indexed object. Some examples: +* **Transformation**: perform some group-specific computations and return a + like-indexed object. Some examples: - - Standardize data (zscore) within a group. - - Filling NAs within groups with a value derived from each group. + * Standardize data (zscore) within a group. + * Filling NAs within groups with a value derived from each group. - - **Filtration**: discard some groups, according to a group-wise computation - that evaluates True or False. Some examples: +* **Filtration**: discard some groups, according to a group-wise computation + that evaluates True or False. Some examples: - - Discard data that belongs to groups with only a few members. - - Filter out data based on the group sum or mean. + * Discard data that belongs to groups with only a few members. + * Filter out data based on the group sum or mean. - - Some combination of the above: GroupBy will examine the results of the apply - step and try to return a sensibly combined result if it doesn't fit into - either of the above two categories. +* Some combination of the above: GroupBy will examine the results of the apply + step and try to return a sensibly combined result if it doesn't fit into + either of the above two categories. 
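The aggregation / transformation / filtration steps listed above can be sketched with a small throwaway frame (the column names here are illustrative only, not part of the patch):

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "b", "b"],
                   "val": [1, 2, 3, 4, 5]})

# Aggregation: one summary statistic per group
sums = df.groupby("key")["val"].sum()

# Transformation: a like-indexed result (demean within each group)
demeaned = df.groupby("key")["val"].transform(lambda s: s - s.mean())

# Filtration: discard groups with fewer than three members
big = df.groupby("key").filter(lambda g: len(g) >= 3)
```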
Since the set of object instance methods on pandas data structures are generally rich and expressive, we often simply want to invoke, say, a DataFrame function @@ -88,15 +88,15 @@ object (more on what the GroupBy object is later), you may do the following: The mapping can be specified many different ways: - - A Python function, to be called on each of the axis labels. - - A list or NumPy array of the same length as the selected axis. - - A dict or ``Series``, providing a ``label -> group name`` mapping. - - For ``DataFrame`` objects, a string indicating a column to be used to group. - Of course ``df.groupby('A')`` is just syntactic sugar for - ``df.groupby(df['A'])``, but it makes life simpler. - - For ``DataFrame`` objects, a string indicating an index level to be used to - group. - - A list of any of the above things. +* A Python function, to be called on each of the axis labels. +* A list or NumPy array of the same length as the selected axis. +* A dict or ``Series``, providing a ``label -> group name`` mapping. +* For ``DataFrame`` objects, a string indicating a column to be used to group. + Of course ``df.groupby('A')`` is just syntactic sugar for + ``df.groupby(df['A'])``, but it makes life simpler. +* For ``DataFrame`` objects, a string indicating an index level to be used to + group. +* A list of any of the above things. Collectively we refer to the grouping objects as the **keys**. For example, consider the following ``DataFrame``: diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst index 2b9fcf874ef22..1c63acce6e3fa 100644 --- a/doc/source/indexing.rst +++ b/doc/source/indexing.rst @@ -17,10 +17,10 @@ Indexing and Selecting Data The axis labeling information in pandas objects serves many purposes: - - Identifies data (i.e. provides *metadata*) using known indicators, - important for analysis, visualization, and interactive console display. - - Enables automatic and explicit data alignment. 
- - Allows intuitive getting and setting of subsets of the data set. +* Identifies data (i.e. provides *metadata*) using known indicators, + important for analysis, visualization, and interactive console display. +* Enables automatic and explicit data alignment. +* Allows intuitive getting and setting of subsets of the data set. In this section, we will focus on the final point: namely, how to slice, dice, and generally get and set subsets of pandas objects. The primary focus will be @@ -62,37 +62,37 @@ Object selection has had a number of user-requested additions in order to support more explicit location based indexing. Pandas now supports three types of multi-axis indexing. -- ``.loc`` is primarily label based, but may also be used with a boolean array. ``.loc`` will raise ``KeyError`` when the items are not found. Allowed inputs are: +* ``.loc`` is primarily label based, but may also be used with a boolean array. ``.loc`` will raise ``KeyError`` when the items are not found. Allowed inputs are: - - A single label, e.g. ``5`` or ``'a'`` (Note that ``5`` is interpreted as a - *label* of the index. This use is **not** an integer position along the - index.). - - A list or array of labels ``['a', 'b', 'c']``. - - A slice object with labels ``'a':'f'`` (Note that contrary to usual python - slices, **both** the start and the stop are included, when present in the - index! See :ref:`Slicing with labels - <indexing.slicing_with_labels>`.). - - A boolean array - - A ``callable`` function with one argument (the calling Series, DataFrame or Panel) and - that returns valid output for indexing (one of the above). + * A single label, e.g. ``5`` or ``'a'`` (Note that ``5`` is interpreted as a + *label* of the index. This use is **not** an integer position along the + index.). + * A list or array of labels ``['a', 'b', 'c']``. 
+ * A slice object with labels ``'a':'f'`` (Note that contrary to usual python + slices, **both** the start and the stop are included, when present in the + index! See :ref:`Slicing with labels + <indexing.slicing_with_labels>`.). + * A boolean array + * A ``callable`` function with one argument (the calling Series, DataFrame or Panel) and + that returns valid output for indexing (one of the above). .. versionadded:: 0.18.1 See more at :ref:`Selection by Label <indexing.label>`. -- ``.iloc`` is primarily integer position based (from ``0`` to +* ``.iloc`` is primarily integer position based (from ``0`` to ``length-1`` of the axis), but may also be used with a boolean array. ``.iloc`` will raise ``IndexError`` if a requested indexer is out-of-bounds, except *slice* indexers which allow out-of-bounds indexing. (this conforms with Python/NumPy *slice* semantics). Allowed inputs are: - - An integer e.g. ``5``. - - A list or array of integers ``[4, 3, 0]``. - - A slice object with ints ``1:7``. - - A boolean array. - - A ``callable`` function with one argument (the calling Series, DataFrame or Panel) and - that returns valid output for indexing (one of the above). + * An integer e.g. ``5``. + * A list or array of integers ``[4, 3, 0]``. + * A slice object with ints ``1:7``. + * A boolean array. + * A ``callable`` function with one argument (the calling Series, DataFrame or Panel) and + that returns valid output for indexing (one of the above). .. versionadded:: 0.18.1 @@ -100,7 +100,7 @@ of multi-axis indexing. :ref:`Advanced Indexing <advanced>` and :ref:`Advanced Hierarchical <advanced.advanced_hierarchical>`. -- ``.loc``, ``.iloc``, and also ``[]`` indexing can accept a ``callable`` as indexer. See more at :ref:`Selection By Callable <indexing.callable>`. +* ``.loc``, ``.iloc``, and also ``[]`` indexing can accept a ``callable`` as indexer. See more at :ref:`Selection By Callable <indexing.callable>`. 
Getting values from an object with multi-axes selection uses the following notation (using ``.loc`` as an example, but the following applies to ``.iloc`` as @@ -343,14 +343,14 @@ Integers are valid labels, but they refer to the label **and not the position**. The ``.loc`` attribute is the primary access method. The following are valid inputs: -- A single label, e.g. ``5`` or ``'a'`` (Note that ``5`` is interpreted as a *label* of the index. This use is **not** an integer position along the index.). -- A list or array of labels ``['a', 'b', 'c']``. -- A slice object with labels ``'a':'f'`` (Note that contrary to usual python +* A single label, e.g. ``5`` or ``'a'`` (Note that ``5`` is interpreted as a *label* of the index. This use is **not** an integer position along the index.). +* A list or array of labels ``['a', 'b', 'c']``. +* A slice object with labels ``'a':'f'`` (Note that contrary to usual python slices, **both** the start and the stop are included, when present in the index! See :ref:`Slicing with labels <indexing.slicing_with_labels>`.). -- A boolean array. -- A ``callable``, see :ref:`Selection By Callable <indexing.callable>`. +* A boolean array. +* A ``callable``, see :ref:`Selection By Callable <indexing.callable>`. .. ipython:: python @@ -445,11 +445,11 @@ Pandas provides a suite of methods in order to get **purely integer based indexi The ``.iloc`` attribute is the primary access method. The following are valid inputs: -- An integer e.g. ``5``. -- A list or array of integers ``[4, 3, 0]``. -- A slice object with ints ``1:7``. -- A boolean array. -- A ``callable``, see :ref:`Selection By Callable <indexing.callable>`. +* An integer e.g. ``5``. +* A list or array of integers ``[4, 3, 0]``. +* A slice object with ints ``1:7``. +* A boolean array. +* A ``callable``, see :ref:`Selection By Callable <indexing.callable>`. .. ipython:: python @@ -599,8 +599,8 @@ bit of user confusion over the years. 
The recommended methods of indexing are: -- ``.loc`` if you want to *label* index. -- ``.iloc`` if you want to *positionally* index. +* ``.loc`` if you want to *label* index. +* ``.iloc`` if you want to *positionally* index. .. ipython:: python @@ -1455,15 +1455,15 @@ If you want to identify and remove duplicate rows in a DataFrame, there are two methods that will help: ``duplicated`` and ``drop_duplicates``. Each takes as an argument the columns to use to identify duplicated rows. -- ``duplicated`` returns a boolean vector whose length is the number of rows, and which indicates whether a row is duplicated. -- ``drop_duplicates`` removes duplicate rows. +* ``duplicated`` returns a boolean vector whose length is the number of rows, and which indicates whether a row is duplicated. +* ``drop_duplicates`` removes duplicate rows. By default, the first observed row of a duplicate set is considered unique, but each method has a ``keep`` parameter to specify targets to be kept. -- ``keep='first'`` (default): mark / drop duplicates except for the first occurrence. -- ``keep='last'``: mark / drop duplicates except for the last occurrence. -- ``keep=False``: mark / drop all duplicates. +* ``keep='first'`` (default): mark / drop duplicates except for the first occurrence. +* ``keep='last'``: mark / drop duplicates except for the last occurrence. +* ``keep=False``: mark / drop all duplicates. .. ipython:: python diff --git a/doc/source/install.rst b/doc/source/install.rst index e655136904920..87d1b63914635 100644 --- a/doc/source/install.rst +++ b/doc/source/install.rst @@ -261,17 +261,17 @@ Optional Dependencies * `Apache Parquet <https://parquet.apache.org/>`__, either `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.4.1) or `fastparquet <https://fastparquet.readthedocs.io/en/latest>`__ (>= 0.0.6) for parquet-based storage. The `snappy <https://pypi.org/project/python-snappy>`__ and `brotli <https://pypi.org/project/brotlipy>`__ are available for compression support. 
* `SQLAlchemy <http://www.sqlalchemy.org>`__: for SQL database support. Version 0.8.1 or higher recommended. Besides SQLAlchemy, you also need a database specific driver. You can find an overview of supported drivers for each SQL dialect in the `SQLAlchemy docs <http://docs.sqlalchemy.org/en/latest/dialects/index.html>`__. Some common drivers are:

-  * `psycopg2 <http://initd.org/psycopg/>`__: for PostgreSQL
-  * `pymysql <https://github.com/PyMySQL/PyMySQL>`__: for MySQL.
-  * `SQLite <https://docs.python.org/3/library/sqlite3.html>`__: for SQLite, this is included in Python's standard library by default.
+  * `psycopg2 <http://initd.org/psycopg/>`__: for PostgreSQL
+  * `pymysql <https://github.com/PyMySQL/PyMySQL>`__: for MySQL.
+  * `SQLite <https://docs.python.org/3/library/sqlite3.html>`__: for SQLite, this is included in Python's standard library by default.

* `matplotlib <http://matplotlib.org/>`__: for plotting, Version 1.4.3 or higher.
* For Excel I/O:

-  * `xlrd/xlwt <http://www.python-excel.org/>`__: Excel reading (xlrd) and writing (xlwt)
-  * `openpyxl <http://https://openpyxl.readthedocs.io/en/default/>`__: openpyxl version 2.4.0
-    for writing .xlsx files (xlrd >= 0.9.0)
-  * `XlsxWriter <https://pypi.org/project/XlsxWriter>`__: Alternative Excel writer
+  * `xlrd/xlwt <http://www.python-excel.org/>`__: Excel reading (xlrd) and writing (xlwt)
+  * `openpyxl <https://openpyxl.readthedocs.io/en/default/>`__: openpyxl version 2.4.0
+    for writing .xlsx files (xlrd >= 0.9.0)
+  * `XlsxWriter <https://pypi.org/project/XlsxWriter>`__: Alternative Excel writer

* `Jinja2 <http://jinja.pocoo.org/>`__: Template engine for conditional HTML formatting.
* `s3fs <http://s3fs.readthedocs.io/>`__: necessary for Amazon S3 access (s3fs >= 0.0.7).
diff --git a/doc/source/internals.rst b/doc/source/internals.rst
index caf5790fb24c6..fce99fc633440 100644
--- a/doc/source/internals.rst
+++ b/doc/source/internals.rst
@@ -24,23 +24,23 @@ Indexing
 In pandas there are a few objects implemented which can serve as valid
 containers for the axis labels:
 
-- ``Index``: the generic "ordered set" object, an ndarray of object dtype
+* ``Index``: the generic "ordered set" object, an ndarray of object dtype
   assuming nothing about its contents. The labels must be hashable (and
   likely immutable) and unique. Populates a dict of label to location in
   Cython to do ``O(1)`` lookups.
-- ``Int64Index``: a version of ``Index`` highly optimized for 64-bit integer
+* ``Int64Index``: a version of ``Index`` highly optimized for 64-bit integer
   data, such as time stamps
-- ``Float64Index``: a version of ``Index`` highly optimized for 64-bit float data
-- ``MultiIndex``: the standard hierarchical index object
-- ``DatetimeIndex``: An Index object with ``Timestamp`` boxed elements (impl are the int64 values)
-- ``TimedeltaIndex``: An Index object with ``Timedelta`` boxed elements (impl are the in64 values)
-- ``PeriodIndex``: An Index object with Period elements
+* ``Float64Index``: a version of ``Index`` highly optimized for 64-bit float data
+* ``MultiIndex``: the standard hierarchical index object
+* ``DatetimeIndex``: An Index object with ``Timestamp`` boxed elements (impl are the int64 values)
+* ``TimedeltaIndex``: An Index object with ``Timedelta`` boxed elements (impl are the int64 values)
+* ``PeriodIndex``: An Index object with Period elements
 
 There are functions that make the creation of a regular index easy:
 
-- ``date_range``: fixed frequency date range generated from a time rule or
+* ``date_range``: fixed frequency date range generated from a time rule or
   DateOffset.
An ndarray of Python datetime objects -- ``period_range``: fixed frequency date range generated from a time rule or +* ``period_range``: fixed frequency date range generated from a time rule or DateOffset. An ndarray of ``Period`` objects, representing timespans The motivation for having an ``Index`` class in the first place was to enable @@ -52,22 +52,22 @@ From an internal implementation point of view, the relevant methods that an ``Index`` must define are one or more of the following (depending on how incompatible the new object internals are with the ``Index`` functions): -- ``get_loc``: returns an "indexer" (an integer, or in some cases a +* ``get_loc``: returns an "indexer" (an integer, or in some cases a slice object) for a label -- ``slice_locs``: returns the "range" to slice between two labels -- ``get_indexer``: Computes the indexing vector for reindexing / data +* ``slice_locs``: returns the "range" to slice between two labels +* ``get_indexer``: Computes the indexing vector for reindexing / data alignment purposes. See the source / docstrings for more on this -- ``get_indexer_non_unique``: Computes the indexing vector for reindexing / data +* ``get_indexer_non_unique``: Computes the indexing vector for reindexing / data alignment purposes when the index is non-unique. 
See the source / docstrings for more on this -- ``reindex``: Does any pre-conversion of the input index then calls +* ``reindex``: Does any pre-conversion of the input index then calls ``get_indexer`` -- ``union``, ``intersection``: computes the union or intersection of two +* ``union``, ``intersection``: computes the union or intersection of two Index objects -- ``insert``: Inserts a new label into an Index, yielding a new object -- ``delete``: Delete a label, yielding a new object -- ``drop``: Deletes a set of labels -- ``take``: Analogous to ndarray.take +* ``insert``: Inserts a new label into an Index, yielding a new object +* ``delete``: Delete a label, yielding a new object +* ``drop``: Deletes a set of labels +* ``take``: Analogous to ndarray.take MultiIndex ~~~~~~~~~~ diff --git a/doc/source/io.rst b/doc/source/io.rst index 658b9ff15783d..ae6c4f12f04f7 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -252,12 +252,12 @@ Datetime Handling +++++++++++++++++ parse_dates : boolean or list of ints or names or list of lists or dict, default ``False``. - - If ``True`` -> try parsing the index. - - If ``[1, 2, 3]`` -> try parsing columns 1, 2, 3 each as a separate date + * If ``True`` -> try parsing the index. + * If ``[1, 2, 3]`` -> try parsing columns 1, 2, 3 each as a separate date column. - - If ``[[1, 3]]`` -> combine columns 1 and 3 and parse as a single date + * If ``[[1, 3]]`` -> combine columns 1 and 3 and parse as a single date column. - - If ``{'foo': [1, 3]}`` -> parse columns 1, 3 as date and call result 'foo'. + * If ``{'foo': [1, 3]}`` -> parse columns 1, 3 as date and call result 'foo'. A fast-path exists for iso8601-formatted dates. infer_datetime_format : boolean, default ``False`` If ``True`` and parse_dates is enabled for a column, attempt to infer the @@ -961,12 +961,12 @@ negative consequences if enabled. 
Here are some examples of datetime strings that can be guessed (All representing December 30th, 2011 at 00:00:00): -- "20111230" -- "2011/12/30" -- "20111230 00:00:00" -- "12/30/2011 00:00:00" -- "30/Dec/2011 00:00:00" -- "30/December/2011 00:00:00" +* "20111230" +* "2011/12/30" +* "20111230 00:00:00" +* "12/30/2011 00:00:00" +* "30/Dec/2011 00:00:00" +* "30/December/2011 00:00:00" Note that ``infer_datetime_format`` is sensitive to ``dayfirst``. With ``dayfirst=True``, it will guess "01/12/2011" to be December 1st. With @@ -1303,16 +1303,16 @@ with data files that have known and fixed column widths. The function parameters to ``read_fwf`` are largely the same as `read_csv` with two extra parameters, and a different usage of the ``delimiter`` parameter: - - ``colspecs``: A list of pairs (tuples) giving the extents of the - fixed-width fields of each line as half-open intervals (i.e., [from, to[ ). - String value 'infer' can be used to instruct the parser to try detecting - the column specifications from the first 100 rows of the data. Default - behavior, if not specified, is to infer. - - ``widths``: A list of field widths which can be used instead of 'colspecs' - if the intervals are contiguous. - - ``delimiter``: Characters to consider as filler characters in the fixed-width file. - Can be used to specify the filler character of the fields - if it is not spaces (e.g., '~'). +* ``colspecs``: A list of pairs (tuples) giving the extents of the + fixed-width fields of each line as half-open intervals (i.e., [from, to[ ). + String value 'infer' can be used to instruct the parser to try detecting + the column specifications from the first 100 rows of the data. Default + behavior, if not specified, is to infer. +* ``widths``: A list of field widths which can be used instead of 'colspecs' + if the intervals are contiguous. +* ``delimiter``: Characters to consider as filler characters in the fixed-width file. 
+ Can be used to specify the filler character of the fields + if it is not spaces (e.g., '~'). .. ipython:: python :suppress: @@ -1566,9 +1566,9 @@ possible pandas uses the C parser (specified as ``engine='c'``), but may fall back to Python if C-unsupported options are specified. Currently, C-unsupported options include: -- ``sep`` other than a single character (e.g. regex separators) -- ``skipfooter`` -- ``sep=None`` with ``delim_whitespace=False`` +* ``sep`` other than a single character (e.g. regex separators) +* ``skipfooter`` +* ``sep=None`` with ``delim_whitespace=False`` Specifying any of the above options will produce a ``ParserWarning`` unless the python engine is selected explicitly using ``engine='python'``. @@ -1602,29 +1602,29 @@ The ``Series`` and ``DataFrame`` objects have an instance method ``to_csv`` whic allows storing the contents of the object as a comma-separated-values file. The function takes a number of arguments. Only the first is required. - - ``path_or_buf``: A string path to the file to write or a StringIO - - ``sep`` : Field delimiter for the output file (default ",") - - ``na_rep``: A string representation of a missing value (default '') - - ``float_format``: Format string for floating point numbers - - ``cols``: Columns to write (default None) - - ``header``: Whether to write out the column names (default True) - - ``index``: whether to write row (index) names (default True) - - ``index_label``: Column label(s) for index column(s) if desired. If None - (default), and `header` and `index` are True, then the index names are - used. (A sequence should be given if the ``DataFrame`` uses MultiIndex). - - ``mode`` : Python write mode, default 'w' - - ``encoding``: a string representing the encoding to use if the contents are - non-ASCII, for Python versions prior to 3 - - ``line_terminator``: Character sequence denoting line end (default '\\n') - - ``quoting``: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). 
Note that if you have set a `float_format` then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric - - ``quotechar``: Character used to quote fields (default '"') - - ``doublequote``: Control quoting of ``quotechar`` in fields (default True) - - ``escapechar``: Character used to escape ``sep`` and ``quotechar`` when - appropriate (default None) - - ``chunksize``: Number of rows to write at a time - - ``tupleize_cols``: If False (default), write as a list of tuples, otherwise - write in an expanded line format suitable for ``read_csv`` - - ``date_format``: Format string for datetime objects +* ``path_or_buf``: A string path to the file to write or a StringIO +* ``sep`` : Field delimiter for the output file (default ",") +* ``na_rep``: A string representation of a missing value (default '') +* ``float_format``: Format string for floating point numbers +* ``cols``: Columns to write (default None) +* ``header``: Whether to write out the column names (default True) +* ``index``: whether to write row (index) names (default True) +* ``index_label``: Column label(s) for index column(s) if desired. If None + (default), and `header` and `index` are True, then the index names are + used. (A sequence should be given if the ``DataFrame`` uses MultiIndex). +* ``mode`` : Python write mode, default 'w' +* ``encoding``: a string representing the encoding to use if the contents are + non-ASCII, for Python versions prior to 3 +* ``line_terminator``: Character sequence denoting line end (default '\\n') +* ``quoting``: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). 
Note that if you have set a `float_format` then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric +* ``quotechar``: Character used to quote fields (default '"') +* ``doublequote``: Control quoting of ``quotechar`` in fields (default True) +* ``escapechar``: Character used to escape ``sep`` and ``quotechar`` when + appropriate (default None) +* ``chunksize``: Number of rows to write at a time +* ``tupleize_cols``: If False (default), write as a list of tuples, otherwise + write in an expanded line format suitable for ``read_csv`` +* ``date_format``: Format string for datetime objects Writing a formatted string ++++++++++++++++++++++++++ @@ -1634,22 +1634,22 @@ Writing a formatted string The ``DataFrame`` object has an instance method ``to_string`` which allows control over the string representation of the object. All arguments are optional: - - ``buf`` default None, for example a StringIO object - - ``columns`` default None, which columns to write - - ``col_space`` default None, minimum width of each column. - - ``na_rep`` default ``NaN``, representation of NA value - - ``formatters`` default None, a dictionary (by column) of functions each of - which takes a single argument and returns a formatted string - - ``float_format`` default None, a function which takes a single (float) - argument and returns a formatted string; to be applied to floats in the - ``DataFrame``. - - ``sparsify`` default True, set to False for a ``DataFrame`` with a hierarchical - index to print every MultiIndex key at each row. 
- - ``index_names`` default True, will print the names of the indices - - ``index`` default True, will print the index (ie, row labels) - - ``header`` default True, will print the column labels - - ``justify`` default ``left``, will print column headers left- or - right-justified +* ``buf`` default None, for example a StringIO object +* ``columns`` default None, which columns to write +* ``col_space`` default None, minimum width of each column. +* ``na_rep`` default ``NaN``, representation of NA value +* ``formatters`` default None, a dictionary (by column) of functions each of + which takes a single argument and returns a formatted string +* ``float_format`` default None, a function which takes a single (float) + argument and returns a formatted string; to be applied to floats in the + ``DataFrame``. +* ``sparsify`` default True, set to False for a ``DataFrame`` with a hierarchical + index to print every MultiIndex key at each row. +* ``index_names`` default True, will print the names of the indices +* ``index`` default True, will print the index (ie, row labels) +* ``header`` default True, will print the column labels +* ``justify`` default ``left``, will print column headers left- or + right-justified The ``Series`` object also has a ``to_string`` method, but with only the ``buf``, ``na_rep``, ``float_format`` arguments. There is also a ``length`` argument @@ -1670,17 +1670,17 @@ Writing JSON A ``Series`` or ``DataFrame`` can be converted to a valid JSON string. 
Use ``to_json`` with optional parameters: -- ``path_or_buf`` : the pathname or buffer to write the output +* ``path_or_buf`` : the pathname or buffer to write the output This can be ``None`` in which case a JSON string is returned -- ``orient`` : +* ``orient`` : ``Series``: - - default is ``index`` - - allowed values are {``split``, ``records``, ``index``} + * default is ``index`` + * allowed values are {``split``, ``records``, ``index``} ``DataFrame``: - - default is ``columns`` - - allowed values are {``split``, ``records``, ``index``, ``columns``, ``values``, ``table``} + * default is ``columns`` + * allowed values are {``split``, ``records``, ``index``, ``columns``, ``values``, ``table``} The format of the JSON string @@ -1694,12 +1694,12 @@ with optional parameters: ``columns``; dict like {column -> {index -> value}} ``values``; just the values array -- ``date_format`` : string, type of date conversion, 'epoch' for timestamp, 'iso' for ISO8601. -- ``double_precision`` : The number of decimal places to use when encoding floating point values, default 10. -- ``force_ascii`` : force encoded string to be ASCII, default True. -- ``date_unit`` : The time unit to encode to, governs timestamp and ISO8601 precision. One of 's', 'ms', 'us' or 'ns' for seconds, milliseconds, microseconds and nanoseconds respectively. Default 'ms'. -- ``default_handler`` : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object. -- ``lines`` : If ``records`` orient, then will write each record per line as json. +* ``date_format`` : string, type of date conversion, 'epoch' for timestamp, 'iso' for ISO8601. +* ``double_precision`` : The number of decimal places to use when encoding floating point values, default 10. +* ``force_ascii`` : force encoded string to be ASCII, default True. 
+* ``date_unit`` : The time unit to encode to, governs timestamp and ISO8601 precision. One of 's', 'ms', 'us' or 'ns' for seconds, milliseconds, microseconds and nanoseconds respectively. Default 'ms'. +* ``default_handler`` : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object. +* ``lines`` : If ``records`` orient, then will write each record per line as json. Note ``NaN``'s, ``NaT``'s and ``None`` will be converted to ``null`` and ``datetime`` objects will be converted based on the ``date_format`` and ``date_unit`` parameters. @@ -1818,19 +1818,19 @@ Fallback Behavior If the JSON serializer cannot handle the container contents directly it will fall back in the following manner: -- if the dtype is unsupported (e.g. ``np.complex``) then the ``default_handler``, if provided, will be called +* if the dtype is unsupported (e.g. ``np.complex``) then the ``default_handler``, if provided, will be called for each value, otherwise an exception is raised. -- if an object is unsupported it will attempt the following: +* if an object is unsupported it will attempt the following: - * check if the object has defined a ``toDict`` method and call it. - A ``toDict`` method should return a ``dict`` which will then be JSON serialized. + * check if the object has defined a ``toDict`` method and call it. + A ``toDict`` method should return a ``dict`` which will then be JSON serialized. - * invoke the ``default_handler`` if one was provided. + * invoke the ``default_handler`` if one was provided. - * convert the object to a ``dict`` by traversing its contents. However this will often fail - with an ``OverflowError`` or give unexpected results. + * convert the object to a ``dict`` by traversing its contents. However this will often fail + with an ``OverflowError`` or give unexpected results. 
In general the best approach for unsupported objects or dtypes is to provide a ``default_handler``. For example: @@ -1856,20 +1856,20 @@ Reading a JSON string to pandas object can take a number of parameters. The parser will try to parse a ``DataFrame`` if ``typ`` is not supplied or is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series`` -- ``filepath_or_buffer`` : a **VALID** JSON string or file handle / StringIO. The string could be +* ``filepath_or_buffer`` : a **VALID** JSON string or file handle / StringIO. The string could be a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host is expected. For instance, a local file could be file ://localhost/path/to/table.json -- ``typ`` : type of object to recover (series or frame), default 'frame' -- ``orient`` : +* ``typ`` : type of object to recover (series or frame), default 'frame' +* ``orient`` : Series : - - default is ``index`` - - allowed values are {``split``, ``records``, ``index``} + * default is ``index`` + * allowed values are {``split``, ``records``, ``index``} DataFrame - - default is ``columns`` - - allowed values are {``split``, ``records``, ``index``, ``columns``, ``values``, ``table``} + * default is ``columns`` + * allowed values are {``split``, ``records``, ``index``, ``columns``, ``values``, ``table``} The format of the JSON string @@ -1885,20 +1885,20 @@ is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series`` ``table``; adhering to the JSON `Table Schema`_ -- ``dtype`` : if True, infer dtypes, if a dict of column to dtype, then use those, if ``False``, then don't infer dtypes at all, default is True, apply only to the data. -- ``convert_axes`` : boolean, try to convert the axes to the proper dtypes, default is ``True`` -- ``convert_dates`` : a list of columns to parse for dates; If ``True``, then try to parse date-like columns, default is ``True``. -- ``keep_default_dates`` : boolean, default ``True``. 
If parsing dates, then parse the default date-like columns. -- ``numpy`` : direct decoding to NumPy arrays. default is ``False``; +* ``dtype`` : if True, infer dtypes, if a dict of column to dtype, then use those, if ``False``, then don't infer dtypes at all, default is True, apply only to the data. +* ``convert_axes`` : boolean, try to convert the axes to the proper dtypes, default is ``True`` +* ``convert_dates`` : a list of columns to parse for dates; If ``True``, then try to parse date-like columns, default is ``True``. +* ``keep_default_dates`` : boolean, default ``True``. If parsing dates, then parse the default date-like columns. +* ``numpy`` : direct decoding to NumPy arrays. default is ``False``; Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering **MUST** be the same for each term if ``numpy=True``. -- ``precise_float`` : boolean, default ``False``. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (``False``) is to use fast but less precise builtin functionality. -- ``date_unit`` : string, the timestamp unit to detect if converting dates. Default +* ``precise_float`` : boolean, default ``False``. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (``False``) is to use fast but less precise builtin functionality. +* ``date_unit`` : string, the timestamp unit to detect if converting dates. Default None. By default the timestamp precision will be detected, if this is not desired then pass one of 's', 'ms', 'us' or 'ns' to force timestamp precision to seconds, milliseconds, microseconds or nanoseconds respectively. -- ``lines`` : reads file as one json object per line. -- ``encoding`` : The encoding to use to decode py3 bytes. -- ``chunksize`` : when used in combination with ``lines=True``, return a JsonReader which reads in ``chunksize`` lines per iteration. 
+* ``lines`` : reads file as one json object per line. +* ``encoding`` : The encoding to use to decode py3 bytes. +* ``chunksize`` : when used in combination with ``lines=True``, return a JsonReader which reads in ``chunksize`` lines per iteration. The parser will raise one of ``ValueError/TypeError/AssertionError`` if the JSON is not parseable. @@ -2175,10 +2175,10 @@ object str A few notes on the generated table schema: -- The ``schema`` object contains a ``pandas_version`` field. This contains +* The ``schema`` object contains a ``pandas_version`` field. This contains the version of pandas' dialect of the schema, and will be incremented with each revision. -- All dates are converted to UTC when serializing. Even timezone naive values, +* All dates are converted to UTC when serializing. Even timezone naive values, which are treated as UTC with an offset of 0. .. ipython:: python @@ -2187,7 +2187,7 @@ A few notes on the generated table schema: s = pd.Series(pd.date_range('2016', periods=4)) build_table_schema(s) -- datetimes with a timezone (before serializing), include an additional field +* datetimes with a timezone (before serializing), include an additional field ``tz`` with the time zone name (e.g. ``'US/Central'``). .. ipython:: python @@ -2196,7 +2196,7 @@ A few notes on the generated table schema: tz='US/Central')) build_table_schema(s_tz) -- Periods are converted to timestamps before serialization, and so have the +* Periods are converted to timestamps before serialization, and so have the same behavior of being converted to UTC. In addition, periods will contain and additional field ``freq`` with the period's frequency, e.g. ``'A-DEC'``. @@ -2206,7 +2206,7 @@ A few notes on the generated table schema: periods=4)) build_table_schema(s_per) -- Categoricals use the ``any`` type and an ``enum`` constraint listing +* Categoricals use the ``any`` type and an ``enum`` constraint listing the set of possible values. 
Additionally, an ``ordered`` field is included: .. ipython:: python @@ -2214,7 +2214,7 @@ A few notes on the generated table schema: s_cat = pd.Series(pd.Categorical(['a', 'b', 'a'])) build_table_schema(s_cat) -- A ``primaryKey`` field, containing an array of labels, is included +* A ``primaryKey`` field, containing an array of labels, is included *if the index is unique*: .. ipython:: python @@ -2222,7 +2222,7 @@ A few notes on the generated table schema: s_dupe = pd.Series([1, 2], index=[1, 1]) build_table_schema(s_dupe) -- The ``primaryKey`` behavior is the same with MultiIndexes, but in this +* The ``primaryKey`` behavior is the same with MultiIndexes, but in this case the ``primaryKey`` is an array: .. ipython:: python @@ -2231,15 +2231,15 @@ A few notes on the generated table schema: (0, 1)])) build_table_schema(s_multi) -- The default naming roughly follows these rules: +* The default naming roughly follows these rules: - + For series, the ``object.name`` is used. If that's none, then the - name is ``values`` - + For ``DataFrames``, the stringified version of the column name is used - + For ``Index`` (not ``MultiIndex``), ``index.name`` is used, with a - fallback to ``index`` if that is None. - + For ``MultiIndex``, ``mi.names`` is used. If any level has no name, - then ``level_<i>`` is used. + * For series, the ``object.name`` is used. If that's none, then the + name is ``values`` + * For ``DataFrames``, the stringified version of the column name is used + * For ``Index`` (not ``MultiIndex``), ``index.name`` is used, with a + fallback to ``index`` if that is None. + * For ``MultiIndex``, ``mi.names`` is used. If any level has no name, + then ``level_<i>`` is used. .. versionadded:: 0.23.0 @@ -2601,55 +2601,55 @@ parse HTML tables in the top-level pandas io function ``read_html``. **Issues with** |lxml|_ - * Benefits +* Benefits - * |lxml|_ is very fast. + * |lxml|_ is very fast. - * |lxml|_ requires Cython to install correctly. 
+ * |lxml|_ requires Cython to install correctly. - * Drawbacks +* Drawbacks - * |lxml|_ does *not* make any guarantees about the results of its parse - *unless* it is given |svm|_. + * |lxml|_ does *not* make any guarantees about the results of its parse + *unless* it is given |svm|_. - * In light of the above, we have chosen to allow you, the user, to use the - |lxml|_ backend, but **this backend will use** |html5lib|_ if |lxml|_ - fails to parse + * In light of the above, we have chosen to allow you, the user, to use the + |lxml|_ backend, but **this backend will use** |html5lib|_ if |lxml|_ + fails to parse - * It is therefore *highly recommended* that you install both - |BeautifulSoup4|_ and |html5lib|_, so that you will still get a valid - result (provided everything else is valid) even if |lxml|_ fails. + * It is therefore *highly recommended* that you install both + |BeautifulSoup4|_ and |html5lib|_, so that you will still get a valid + result (provided everything else is valid) even if |lxml|_ fails. **Issues with** |BeautifulSoup4|_ **using** |lxml|_ **as a backend** - * The above issues hold here as well since |BeautifulSoup4|_ is essentially - just a wrapper around a parser backend. +* The above issues hold here as well since |BeautifulSoup4|_ is essentially + just a wrapper around a parser backend. **Issues with** |BeautifulSoup4|_ **using** |html5lib|_ **as a backend** - * Benefits +* Benefits - * |html5lib|_ is far more lenient than |lxml|_ and consequently deals - with *real-life markup* in a much saner way rather than just, e.g., - dropping an element without notifying you. + * |html5lib|_ is far more lenient than |lxml|_ and consequently deals + with *real-life markup* in a much saner way rather than just, e.g., + dropping an element without notifying you. - * |html5lib|_ *generates valid HTML5 markup from invalid markup - automatically*. This is extremely important for parsing HTML tables, - since it guarantees a valid document. 
However, that does NOT mean that - it is "correct", since the process of fixing markup does not have a - single definition. + * |html5lib|_ *generates valid HTML5 markup from invalid markup + automatically*. This is extremely important for parsing HTML tables, + since it guarantees a valid document. However, that does NOT mean that + it is "correct", since the process of fixing markup does not have a + single definition. - * |html5lib|_ is pure Python and requires no additional build steps beyond - its own installation. + * |html5lib|_ is pure Python and requires no additional build steps beyond + its own installation. - * Drawbacks +* Drawbacks - * The biggest drawback to using |html5lib|_ is that it is slow as - molasses. However consider the fact that many tables on the web are not - big enough for the parsing algorithm runtime to matter. It is more - likely that the bottleneck will be in the process of reading the raw - text from the URL over the web, i.e., IO (input-output). For very large - tables, this might not be true. + * The biggest drawback to using |html5lib|_ is that it is slow as + molasses. However consider the fact that many tables on the web are not + big enough for the parsing algorithm runtime to matter. It is more + likely that the bottleneck will be in the process of reading the raw + text from the URL over the web, i.e., IO (input-output). For very large + tables, this might not be true. .. |svm| replace:: **strictly valid markup** @@ -2753,13 +2753,13 @@ Specifying Sheets .. note :: An ExcelFile's attribute ``sheet_names`` provides access to a list of sheets. -- The arguments ``sheet_name`` allows specifying the sheet or sheets to read. -- The default value for ``sheet_name`` is 0, indicating to read the first sheet -- Pass a string to refer to the name of a particular sheet in the workbook. -- Pass an integer to refer to the index of a sheet. 
Indices follow Python
+* The argument ``sheet_name`` allows specifying the sheet or sheets to read.
+* The default value for ``sheet_name`` is 0, indicating to read the first sheet.
+* Pass a string to refer to the name of a particular sheet in the workbook.
+* Pass an integer to refer to the index of a sheet. Indices follow Python
  convention, beginning at 0.
-- Pass a list of either strings or integers, to return a dictionary of specified sheets.
-- Pass a ``None`` to return a dictionary of all available sheets.
+* Pass a list of either strings or integers, to return a dictionary of specified sheets.
+* Pass a ``None`` to return a dictionary of all available sheets.

.. code-block:: python

@@ -3030,9 +3030,9 @@ files if `Xlsxwriter`_ is not available.

To specify which writer you want to use, you can pass an engine keyword
argument to ``to_excel`` and to ``ExcelWriter``. The built-in engines are:

-- ``openpyxl``: version 2.4 or higher is required
-- ``xlsxwriter``
-- ``xlwt``
+* ``openpyxl``: version 2.4 or higher is required
+* ``xlsxwriter``
+* ``xlwt``

.. code-block:: python

@@ -3055,8 +3055,8 @@ Style and Formatting

The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the ``DataFrame``'s ``to_excel`` method.

-- ``float_format`` : Format string for floating point numbers (default ``None``).
-- ``freeze_panes`` : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default ``None``).
+* ``float_format`` : Format string for floating point numbers (default ``None``).
+* ``freeze_panes`` : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default ``None``).



@@ -3654,10 +3654,10 @@ data.
A query is specified using the ``Term`` class under the hood, as a boolean expression. -- ``index`` and ``columns`` are supported indexers of a ``DataFrames``. -- ``major_axis``, ``minor_axis``, and ``items`` are supported indexers of +* ``index`` and ``columns`` are supported indexers of a ``DataFrames``. +* ``major_axis``, ``minor_axis``, and ``items`` are supported indexers of the Panel. -- if ``data_columns`` are specified, these can be used as additional indexers. +* if ``data_columns`` are specified, these can be used as additional indexers. Valid comparison operators are: @@ -3665,9 +3665,9 @@ Valid comparison operators are: Valid boolean expressions are combined with: -- ``|`` : or -- ``&`` : and -- ``(`` and ``)`` : for grouping +* ``|`` : or +* ``&`` : and +* ``(`` and ``)`` : for grouping These rules are similar to how boolean expressions are used in pandas for indexing. @@ -3680,16 +3680,16 @@ These rules are similar to how boolean expressions are used in pandas for indexi The following are valid expressions: -- ``'index >= date'`` -- ``"columns = ['A', 'D']"`` -- ``"columns in ['A', 'D']"`` -- ``'columns = A'`` -- ``'columns == A'`` -- ``"~(columns = ['A', 'B'])"`` -- ``'index > df.index[3] & string = "bar"'`` -- ``'(index > df.index[3] & index <= df.index[6]) | string = "bar"'`` -- ``"ts >= Timestamp('2012-02-01')"`` -- ``"major_axis>=20130101"`` +* ``'index >= date'`` +* ``"columns = ['A', 'D']"`` +* ``"columns in ['A', 'D']"`` +* ``'columns = A'`` +* ``'columns == A'`` +* ``"~(columns = ['A', 'B'])"`` +* ``'index > df.index[3] & string = "bar"'`` +* ``'(index > df.index[3] & index <= df.index[6]) | string = "bar"'`` +* ``"ts >= Timestamp('2012-02-01')"`` +* ``"major_axis>=20130101"`` The ``indexers`` are on the left-hand side of the sub-expression: @@ -3697,11 +3697,11 @@ The ``indexers`` are on the left-hand side of the sub-expression: The right-hand side of the sub-expression (after a comparison operator) can be: -- functions that will be 
evaluated, e.g. ``Timestamp('2012-02-01')``
-- strings, e.g. ``"bar"``
-- date-like, e.g. ``20130101``, or ``"20130101"``
-- lists, e.g. ``"['A', 'B']"``
-- variables that are defined in the local names space, e.g. ``date``
+* functions that will be evaluated, e.g. ``Timestamp('2012-02-01')``
+* strings, e.g. ``"bar"``
+* date-like, e.g. ``20130101``, or ``"20130101"``
+* lists, e.g. ``"['A', 'B']"``
+* variables that are defined in the local namespace, e.g. ``date``

.. note::

@@ -4080,15 +4080,15 @@ simple use case.
You store panel-type data, with dates in the
``major_axis`` and ids in the ``minor_axis``. The data is then
interleaved like this:

-- date_1
-  - id_1
-  - id_2
-  - .
-  - id_n
-- date_2
-  - id_1
-  - .
-  - id_n
+* date_1
+  * id_1
+  * id_2
+  * .
+  * id_n
+* date_2
+  * id_1
+  * .
+  * id_n

It should be clear that a delete operation on the ``major_axis`` will be
fairly quick, as one chunk is removed, then the following data moved. On
@@ -4216,12 +4216,12 @@ Caveats
need to serialize these operations in a single thread in a single
process. You will corrupt your data otherwise. See the (:issue:`2397`) for more information.
-- If you use locks to manage write access between multiple processes, you
+* If you use locks to manage write access between multiple processes, you
  may want to use :py:func:`~os.fsync` before releasing write locks. For
  convenience you can use ``store.flush(fsync=True)`` to do this for you.
-- Once a ``table`` is created its items (Panel) / columns (DataFrame)
+* Once a ``table`` is created its items (Panel) / columns (DataFrame)
  are fixed; only exactly the same columns can be appended
-- Be aware that timezones (e.g., ``pytz.timezone('US/Eastern')``)
+* Be aware that timezones (e.g., ``pytz.timezone('US/Eastern')``)
  are not necessarily equal across timezone versions.
So if data is localized to a specific timezone in the HDFStore using one version of a timezone library and that data is updated with another version, the data @@ -4438,21 +4438,21 @@ Now you can import the ``DataFrame`` into R: Performance ''''''''''' -- ``tables`` format come with a writing performance penalty as compared to +* ``tables`` format come with a writing performance penalty as compared to ``fixed`` stores. The benefit is the ability to append/delete and query (potentially very large amounts of data). Write times are generally longer as compared with regular stores. Query times can be quite fast, especially on an indexed axis. -- You can pass ``chunksize=<int>`` to ``append``, specifying the +* You can pass ``chunksize=<int>`` to ``append``, specifying the write chunksize (default is 50000). This will significantly lower your memory usage on writing. -- You can pass ``expectedrows=<int>`` to the first ``append``, +* You can pass ``expectedrows=<int>`` to the first ``append``, to set the TOTAL number of expected rows that ``PyTables`` will expected. This will optimize read/write performance. -- Duplicate rows can be written to tables, but are filtered out in +* Duplicate rows can be written to tables, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs) -- A ``PerformanceWarning`` will be raised if you are attempting to +* A ``PerformanceWarning`` will be raised if you are attempting to store types that will be pickled by PyTables (rather than stored as endemic types). See `Here <http://stackoverflow.com/questions/14355151/how-to-make-pandas-hdfstore-put-operation-faster/14370190#14370190>`__ @@ -4482,14 +4482,14 @@ dtypes, including extension dtypes such as categorical and datetime with tz. Several caveats. 
-- This is a newer library, and the format, though stable, is not guaranteed to be backward compatible
+* This is a newer library, and the format, though stable, is not guaranteed to be backward compatible
  to the earlier versions.
-- The format will NOT write an ``Index``, or ``MultiIndex`` for the
+* The format will NOT write an ``Index``, or ``MultiIndex`` for the
  ``DataFrame`` and will raise an error if a non-default one is provided. You can
  ``.reset_index()`` to store the index or ``.reset_index(drop=True)`` to ignore it.
-- Duplicate column names and non-string columns names are not supported
-- Non supported types include ``Period`` and actual Python object types. These will raise a helpful error message
+* Duplicate column names and non-string column names are not supported.
+* Unsupported types include ``Period`` and actual Python object types. These will raise a helpful error message
  on an attempt at serialization.

See the `Full Documentation <https://github.com/wesm/feather>`__.

@@ -4550,10 +4550,10 @@ dtypes, including extension dtypes such as datetime with tz.

Several caveats.

-- Duplicate column names and non-string columns names are not supported.
-- Index level names, if specified, must be strings.
-- Categorical dtypes can be serialized to parquet, but will de-serialize as ``object`` dtype.
-- Non supported types include ``Period`` and actual Python object types. These will raise a helpful error message
+* Duplicate column names and non-string column names are not supported.
+* Index level names, if specified, must be strings.
+* Categorical dtypes can be serialized to parquet, but will de-serialize as ``object`` dtype.
+* Unsupported types include ``Period`` and actual Python object types. These will raise a helpful error message
  on an attempt at serialization.

You can specify an ``engine`` to direct the serialization.
This can be one of ``pyarrow``, or ``fastparquet``, or ``auto``.
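The non-default-index caveat documented above has an easy workaround. A sketch of just the reshaping step (the ``data.feather`` path is hypothetical, and the commented write would require the optional feather/pyarrow dependency):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]},
                  index=pd.Index([10, 20], name="key"))

# feather refuses to write a non-default index; .reset_index() moves it
# into a regular column, leaving a default RangeIndex behind
flat = df.reset_index()
print(list(flat.columns))  # the old index survives as the 'key' column
# flat.to_feather('data.feather')  # would now succeed (needs pyarrow)
```

Use ``.reset_index(drop=True)`` instead if the index carries no information worth keeping.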
diff --git a/doc/source/merging.rst b/doc/source/merging.rst index 45944ba56d4e7..b2cb388e3cd03 100644 --- a/doc/source/merging.rst +++ b/doc/source/merging.rst @@ -81,33 +81,33 @@ some configurable handling of "what to do with the other axes": keys=None, levels=None, names=None, verify_integrity=False, copy=True) -- ``objs`` : a sequence or mapping of Series, DataFrame, or Panel objects. If a +* ``objs`` : a sequence or mapping of Series, DataFrame, or Panel objects. If a dict is passed, the sorted keys will be used as the `keys` argument, unless it is passed, in which case the values will be selected (see below). Any None objects will be dropped silently unless they are all None in which case a ValueError will be raised. -- ``axis`` : {0, 1, ...}, default 0. The axis to concatenate along. -- ``join`` : {'inner', 'outer'}, default 'outer'. How to handle indexes on +* ``axis`` : {0, 1, ...}, default 0. The axis to concatenate along. +* ``join`` : {'inner', 'outer'}, default 'outer'. How to handle indexes on other axis(es). Outer for union and inner for intersection. -- ``ignore_index`` : boolean, default False. If True, do not use the index +* ``ignore_index`` : boolean, default False. If True, do not use the index values on the concatenation axis. The resulting axis will be labeled 0, ..., n - 1. This is useful if you are concatenating objects where the concatenation axis does not have meaningful indexing information. Note the index values on the other axes are still respected in the join. -- ``join_axes`` : list of Index objects. Specific indexes to use for the other +* ``join_axes`` : list of Index objects. Specific indexes to use for the other n - 1 axes instead of performing inner/outer set logic. -- ``keys`` : sequence, default None. Construct hierarchical index using the +* ``keys`` : sequence, default None. Construct hierarchical index using the passed keys as the outermost level. If multiple levels passed, should contain tuples. 
-- ``levels`` : list of sequences, default None. Specific levels (unique values) +* ``levels`` : list of sequences, default None. Specific levels (unique values) to use for constructing a MultiIndex. Otherwise they will be inferred from the keys. -- ``names`` : list, default None. Names for the levels in the resulting +* ``names`` : list, default None. Names for the levels in the resulting hierarchical index. -- ``verify_integrity`` : boolean, default False. Check whether the new +* ``verify_integrity`` : boolean, default False. Check whether the new concatenated axis contains duplicates. This can be very expensive relative to the actual data concatenation. -- ``copy`` : boolean, default True. If False, do not copy data unnecessarily. +* ``copy`` : boolean, default True. If False, do not copy data unnecessarily. Without a little bit of context many of these arguments don't make much sense. Let's revisit the above example. Suppose we wanted to associate specific keys @@ -156,10 +156,10 @@ When gluing together multiple DataFrames, you have a choice of how to handle the other axes (other than the one being concatenated). This can be done in the following three ways: -- Take the union of them all, ``join='outer'``. This is the default +* Take the union of them all, ``join='outer'``. This is the default option as it results in zero information loss. -- Take the intersection, ``join='inner'``. -- Use a specific index, as passed to the ``join_axes`` argument. +* Take the intersection, ``join='inner'``. +* Use a specific index, as passed to the ``join_axes`` argument. Here is an example of each of these methods. First, the default ``join='outer'`` behavior: @@ -531,52 +531,52 @@ all standard database join operations between ``DataFrame`` objects: suffixes=('_x', '_y'), copy=True, indicator=False, validate=None) -- ``left``: A DataFrame object. -- ``right``: Another DataFrame object. -- ``on``: Column or index level names to join on. 
Must be found in both the left +* ``left``: A DataFrame object. +* ``right``: Another DataFrame object. +* ``on``: Column or index level names to join on. Must be found in both the left and right DataFrame objects. If not passed and ``left_index`` and ``right_index`` are ``False``, the intersection of the columns in the DataFrames will be inferred to be the join keys. -- ``left_on``: Columns or index levels from the left DataFrame to use as +* ``left_on``: Columns or index levels from the left DataFrame to use as keys. Can either be column names, index level names, or arrays with length equal to the length of the DataFrame. -- ``right_on``: Columns or index levels from the right DataFrame to use as +* ``right_on``: Columns or index levels from the right DataFrame to use as keys. Can either be column names, index level names, or arrays with length equal to the length of the DataFrame. -- ``left_index``: If ``True``, use the index (row labels) from the left +* ``left_index``: If ``True``, use the index (row labels) from the left DataFrame as its join key(s). In the case of a DataFrame with a MultiIndex (hierarchical), the number of levels must match the number of join keys from the right DataFrame. -- ``right_index``: Same usage as ``left_index`` for the right DataFrame -- ``how``: One of ``'left'``, ``'right'``, ``'outer'``, ``'inner'``. Defaults +* ``right_index``: Same usage as ``left_index`` for the right DataFrame +* ``how``: One of ``'left'``, ``'right'``, ``'outer'``, ``'inner'``. Defaults to ``inner``. See below for more detailed description of each method. -- ``sort``: Sort the result DataFrame by the join keys in lexicographical +* ``sort``: Sort the result DataFrame by the join keys in lexicographical order. Defaults to ``True``, setting to ``False`` will improve performance substantially in many cases. -- ``suffixes``: A tuple of string suffixes to apply to overlapping +* ``suffixes``: A tuple of string suffixes to apply to overlapping columns. 
Defaults to ``('_x', '_y')``. -- ``copy``: Always copy data (default ``True``) from the passed DataFrame +* ``copy``: Always copy data (default ``True``) from the passed DataFrame objects, even when reindexing is not necessary. Cannot be avoided in many cases but may improve performance / memory usage. The cases where copying can be avoided are somewhat pathological but this option is provided nonetheless. -- ``indicator``: Add a column to the output DataFrame called ``_merge`` +* ``indicator``: Add a column to the output DataFrame called ``_merge`` with information on the source of each row. ``_merge`` is Categorical-type and takes on a value of ``left_only`` for observations whose merge key only appears in ``'left'`` DataFrame, ``right_only`` for observations whose merge key only appears in ``'right'`` DataFrame, and ``both`` if the observation's merge key is found in both. -- ``validate`` : string, default None. +* ``validate`` : string, default None. If specified, checks if merge is of specified type. - * "one_to_one" or "1:1": checks if merge keys are unique in both - left and right datasets. - * "one_to_many" or "1:m": checks if merge keys are unique in left - dataset. - * "many_to_one" or "m:1": checks if merge keys are unique in right - dataset. - * "many_to_many" or "m:m": allowed, but does not result in checks. + * "one_to_one" or "1:1": checks if merge keys are unique in both + left and right datasets. + * "one_to_many" or "1:m": checks if merge keys are unique in left + dataset. + * "many_to_one" or "m:1": checks if merge keys are unique in right + dataset. + * "many_to_many" or "m:m": allowed, but does not result in checks. .. versionadded:: 0.21.0 @@ -605,11 +605,11 @@ terminology used to describe join operations between two SQL-table like structures (``DataFrame`` objects). 
There are several cases to consider which are very important to understand: -- **one-to-one** joins: for example when joining two ``DataFrame`` objects on +* **one-to-one** joins: for example when joining two ``DataFrame`` objects on their indexes (which must contain unique values). -- **many-to-one** joins: for example when joining an index (unique) to one or +* **many-to-one** joins: for example when joining an index (unique) to one or more columns in a different ``DataFrame``. -- **many-to-many** joins: joining columns on columns. +* **many-to-many** joins: joining columns on columns. .. note:: diff --git a/doc/source/options.rst b/doc/source/options.rst index 697cc0682e39a..cbe0264f442bc 100644 --- a/doc/source/options.rst +++ b/doc/source/options.rst @@ -31,10 +31,10 @@ You can get/set options directly as attributes of the top-level ``options`` attr The API is composed of 5 relevant functions, available directly from the ``pandas`` namespace: -- :func:`~pandas.get_option` / :func:`~pandas.set_option` - get/set the value of a single option. -- :func:`~pandas.reset_option` - reset one or more options to their default value. -- :func:`~pandas.describe_option` - print the descriptions of one or more options. -- :func:`~pandas.option_context` - execute a codeblock with a set of options +* :func:`~pandas.get_option` / :func:`~pandas.set_option` - get/set the value of a single option. +* :func:`~pandas.reset_option` - reset one or more options to their default value. +* :func:`~pandas.describe_option` - print the descriptions of one or more options. +* :func:`~pandas.option_context` - execute a codeblock with a set of options that revert to prior settings after execution. **Note:** Developers can check out `pandas/core/config.py <https://github.com/pandas-dev/pandas/blob/master/pandas/core/config.py>`_ for more information. 
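The options functions listed in the diff above compose naturally. A small sketch of a get/set round-trip, a temporary override, and a reset, using only the documented API:

```python
import pandas as pd

pd.set_option("display.max_rows", 10)
print(pd.get_option("display.max_rows"))      # 10

# option_context restores the prior value when the block exits
with pd.option_context("display.max_rows", 5):
    print(pd.get_option("display.max_rows"))  # 5
print(pd.get_option("display.max_rows"))      # back to 10

pd.reset_option("display.max_rows")           # back to the default
```

``describe_option("display.max_rows")`` prints the option's documentation, including its default.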
diff --git a/doc/source/overview.rst b/doc/source/overview.rst index f86b1c67e6843..6ba9501ba0b5e 100644 --- a/doc/source/overview.rst +++ b/doc/source/overview.rst @@ -12,19 +12,19 @@ programming language. :mod:`pandas` consists of the following elements: - * A set of labeled array data structures, the primary of which are - Series and DataFrame. - * Index objects enabling both simple axis indexing and multi-level / - hierarchical axis indexing. - * An integrated group by engine for aggregating and transforming data sets. - * Date range generation (date_range) and custom date offsets enabling the - implementation of customized frequencies. - * Input/Output tools: loading tabular data from flat files (CSV, delimited, - Excel 2003), and saving and loading pandas objects from the fast and - efficient PyTables/HDF5 format. - * Memory-efficient "sparse" versions of the standard data structures for storing - data that is mostly missing or mostly constant (some fixed value). - * Moving window statistics (rolling mean, rolling standard deviation, etc.). +* A set of labeled array data structures, the primary of which are + Series and DataFrame. +* Index objects enabling both simple axis indexing and multi-level / + hierarchical axis indexing. +* An integrated group by engine for aggregating and transforming data sets. +* Date range generation (date_range) and custom date offsets enabling the + implementation of customized frequencies. +* Input/Output tools: loading tabular data from flat files (CSV, delimited, + Excel 2003), and saving and loading pandas objects from the fast and + efficient PyTables/HDF5 format. +* Memory-efficient "sparse" versions of the standard data structures for storing + data that is mostly missing or mostly constant (some fixed value). +* Moving window statistics (rolling mean, rolling standard deviation, etc.). 
Data Structures --------------- diff --git a/doc/source/reshaping.rst b/doc/source/reshaping.rst index 250a1808e496e..88b7114cf4101 100644 --- a/doc/source/reshaping.rst +++ b/doc/source/reshaping.rst @@ -106,12 +106,12 @@ Closely related to the :meth:`~DataFrame.pivot` method are the related ``MultiIndex`` objects (see the section on :ref:`hierarchical indexing <advanced.hierarchical>`). Here are essentially what these methods do: - - ``stack``: "pivot" a level of the (possibly hierarchical) column labels, - returning a ``DataFrame`` with an index with a new inner-most level of row - labels. - - ``unstack``: (inverse operation of ``stack``) "pivot" a level of the - (possibly hierarchical) row index to the column axis, producing a reshaped - ``DataFrame`` with a new inner-most level of column labels. +* ``stack``: "pivot" a level of the (possibly hierarchical) column labels, + returning a ``DataFrame`` with an index with a new inner-most level of row + labels. +* ``unstack``: (inverse operation of ``stack``) "pivot" a level of the + (possibly hierarchical) row index to the column axis, producing a reshaped + ``DataFrame`` with a new inner-most level of column labels. .. image:: _static/reshaping_unstack.png @@ -132,8 +132,8 @@ from the hierarchical indexing section: The ``stack`` function "compresses" a level in the ``DataFrame``'s columns to produce either: - - A ``Series``, in the case of a simple column Index. - - A ``DataFrame``, in the case of a ``MultiIndex`` in the columns. +* A ``Series``, in the case of a simple column Index. +* A ``DataFrame``, in the case of a ``MultiIndex`` in the columns. If the columns have a ``MultiIndex``, you can choose which level to stack. The stacked level becomes the new lowest level in a ``MultiIndex`` on the columns: @@ -351,13 +351,13 @@ strategies. It takes a number of arguments: -- ``data``: a DataFrame object. -- ``values``: a column or a list of columns to aggregate. 
-- ``index``: a column, Grouper, array which has the same length as data, or list of them. +* ``data``: a DataFrame object. +* ``values``: a column or a list of columns to aggregate. +* ``index``: a column, Grouper, array which has the same length as data, or list of them. Keys to group by on the pivot table index. If an array is passed, it is being used as the same manner as column values. -- ``columns``: a column, Grouper, array which has the same length as data, or list of them. +* ``columns``: a column, Grouper, array which has the same length as data, or list of them. Keys to group by on the pivot table column. If an array is passed, it is being used as the same manner as column values. -- ``aggfunc``: function to use for aggregation, defaulting to ``numpy.mean``. +* ``aggfunc``: function to use for aggregation, defaulting to ``numpy.mean``. Consider a data set like this: @@ -431,17 +431,17 @@ unless an array of values and an aggregation function are passed. It takes a number of arguments -- ``index``: array-like, values to group by in the rows. -- ``columns``: array-like, values to group by in the columns. -- ``values``: array-like, optional, array of values to aggregate according to +* ``index``: array-like, values to group by in the rows. +* ``columns``: array-like, values to group by in the columns. +* ``values``: array-like, optional, array of values to aggregate according to the factors. -- ``aggfunc``: function, optional, If no values array is passed, computes a +* ``aggfunc``: function, optional, If no values array is passed, computes a frequency table. -- ``rownames``: sequence, default ``None``, must match number of row arrays passed. -- ``colnames``: sequence, default ``None``, if passed, must match number of column +* ``rownames``: sequence, default ``None``, must match number of row arrays passed. +* ``colnames``: sequence, default ``None``, if passed, must match number of column arrays passed. 
-- ``margins``: boolean, default ``False``, Add row/column margins (subtotals) -- ``normalize``: boolean, {'all', 'index', 'columns'}, or {0,1}, default ``False``. +* ``margins``: boolean, default ``False``, Add row/column margins (subtotals) +* ``normalize``: boolean, {'all', 'index', 'columns'}, or {0,1}, default ``False``. Normalize by dividing all values by the sum of values. @@ -615,10 +615,10 @@ As with the ``Series`` version, you can pass values for the ``prefix`` and ``prefix_sep``. By default the column name is used as the prefix, and '_' as the prefix separator. You can specify ``prefix`` and ``prefix_sep`` in 3 ways: -- string: Use the same value for ``prefix`` or ``prefix_sep`` for each column +* string: Use the same value for ``prefix`` or ``prefix_sep`` for each column to be encoded. -- list: Must be the same length as the number of columns being encoded. -- dict: Mapping column name to prefix. +* list: Must be the same length as the number of columns being encoded. +* dict: Mapping column name to prefix. .. ipython:: python diff --git a/doc/source/sparse.rst b/doc/source/sparse.rst index 260d8aa32ef52..2bb99dd1822b6 100644 --- a/doc/source/sparse.rst +++ b/doc/source/sparse.rst @@ -104,9 +104,9 @@ Sparse data should have the same dtype as its dense representation. Currently, ``float64``, ``int64`` and ``bool`` dtypes are supported. Depending on the original dtype, ``fill_value`` default changes: -- ``float64``: ``np.nan`` -- ``int64``: ``0`` -- ``bool``: ``False`` +* ``float64``: ``np.nan`` +* ``int64``: ``0`` +* ``bool``: ``False`` .. ipython:: python diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst index ded54d2d355f1..ba58d65b00714 100644 --- a/doc/source/timeseries.rst +++ b/doc/source/timeseries.rst @@ -28,11 +28,11 @@ a tremendous amount of new functionality for manipulating time series data. 
In working with time series data, we will frequently seek to: - - generate sequences of fixed-frequency dates and time spans - - conform or convert time series to a particular frequency - - compute "relative" dates based on various non-standard time increments - (e.g. 5 business days before the last business day of the year), or "roll" - dates forward or backward +* generate sequences of fixed-frequency dates and time spans +* conform or convert time series to a particular frequency +* compute "relative" dates based on various non-standard time increments + (e.g. 5 business days before the last business day of the year), or "roll" + dates forward or backward pandas provides a relatively compact and self-contained set of tools for performing the above tasks. @@ -226,8 +226,8 @@ You can pass only the columns that you need to assemble. ``pd.to_datetime`` looks for standard designations of the datetime component in the column names, including: -- required: ``year``, ``month``, ``day`` -- optional: ``hour``, ``minute``, ``second``, ``millisecond``, ``microsecond``, ``nanosecond`` +* required: ``year``, ``month``, ``day`` +* optional: ``hour``, ``minute``, ``second``, ``millisecond``, ``microsecond``, ``nanosecond`` Invalid Data ~~~~~~~~~~~~ @@ -463,14 +463,14 @@ Indexing One of the main uses for ``DatetimeIndex`` is as an index for pandas objects. The ``DatetimeIndex`` class contains many time series related optimizations: - - A large range of dates for various offsets are pre-computed and cached - under the hood in order to make generating subsequent date ranges very fast - (just have to grab a slice). - - Fast shifting using the ``shift`` and ``tshift`` method on pandas objects. - - Unioning of overlapping ``DatetimeIndex`` objects with the same frequency is - very fast (important for fast data alignment). - - Quick access to date fields via properties such as ``year``, ``month``, etc. - - Regularization functions like ``snap`` and very fast ``asof`` logic. 
+* A large range of dates for various offsets are pre-computed and cached + under the hood in order to make generating subsequent date ranges very fast + (just have to grab a slice). +* Fast shifting using the ``shift`` and ``tshift`` method on pandas objects. +* Unioning of overlapping ``DatetimeIndex`` objects with the same frequency is + very fast (important for fast data alignment). +* Quick access to date fields via properties such as ``year``, ``month``, etc. +* Regularization functions like ``snap`` and very fast ``asof`` logic. ``DatetimeIndex`` objects have all the basic functionality of regular ``Index`` objects, and a smorgasbord of advanced time series specific methods for easy @@ -797,11 +797,11 @@ We could have done the same thing with ``DateOffset``: The key features of a ``DateOffset`` object are: -- It can be added / subtracted to/from a datetime object to obtain a +* It can be added / subtracted to/from a datetime object to obtain a shifted date. -- It can be multiplied by an integer (positive or negative) so that the +* It can be multiplied by an integer (positive or negative) so that the increment will be applied multiple times. -- It has :meth:`~pandas.DateOffset.rollforward` and +* It has :meth:`~pandas.DateOffset.rollforward` and :meth:`~pandas.DateOffset.rollback` methods for moving a date forward or backward to the next or previous "offset date". @@ -2064,9 +2064,9 @@ To supply the time zone, you can use the ``tz`` keyword to ``date_range`` and other functions. Dateutil time zone strings are distinguished from ``pytz`` time zones by starting with ``dateutil/``. -- In ``pytz`` you can find a list of common (and less common) time zones using +* In ``pytz`` you can find a list of common (and less common) time zones using ``from pytz import common_timezones, all_timezones``. -- ``dateutil`` uses the OS timezones so there isn't a fixed list available. For +* ``dateutil`` uses the OS timezones so there isn't a fixed list available. 
For common zones, the names are the same as ``pytz``. .. ipython:: python diff --git a/doc/source/tutorials.rst b/doc/source/tutorials.rst index 895fe595de205..381031fa128e6 100644 --- a/doc/source/tutorials.rst +++ b/doc/source/tutorials.rst @@ -28,33 +28,33 @@ repository <http://github.com/jvns/pandas-cookbook>`_. To run the examples in th clone the GitHub repository and get IPython Notebook running. See `How to use this cookbook <https://github.com/jvns/pandas-cookbook#how-to-use-this-cookbook>`_. -- `A quick tour of the IPython Notebook: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/A%20quick%20tour%20of%20IPython%20Notebook.ipynb>`_ +* `A quick tour of the IPython Notebook: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/A%20quick%20tour%20of%20IPython%20Notebook.ipynb>`_ Shows off IPython's awesome tab completion and magic functions. -- `Chapter 1: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb>`_ +* `Chapter 1: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb>`_ Reading your data into pandas is pretty much the easiest thing. Even when the encoding is wrong! -- `Chapter 2: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%202%20-%20Selecting%20data%20%26%20finding%20the%20most%20common%20complaint%20type.ipynb>`_ +* `Chapter 2: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%202%20-%20Selecting%20data%20%26%20finding%20the%20most%20common%20complaint%20type.ipynb>`_ It's not totally obvious how to select data from a pandas dataframe. 
Here we explain the basics (how to take slices and get columns) -- `Chapter 3: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%203%20-%20Which%20borough%20has%20the%20most%20noise%20complaints%20%28or%2C%20more%20selecting%20data%29.ipynb>`_ +* `Chapter 3: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%203%20-%20Which%20borough%20has%20the%20most%20noise%20complaints%20%28or%2C%20more%20selecting%20data%29.ipynb>`_ Here we get into serious slicing and dicing and learn how to filter dataframes in complicated ways, really fast. -- `Chapter 4: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%204%20-%20Find%20out%20on%20which%20weekday%20people%20bike%20the%20most%20with%20groupby%20and%20aggregate.ipynb>`_ +* `Chapter 4: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%204%20-%20Find%20out%20on%20which%20weekday%20people%20bike%20the%20most%20with%20groupby%20and%20aggregate.ipynb>`_ Groupby/aggregate is seriously my favorite thing about pandas and I use it all the time. You should probably read this. -- `Chapter 5: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%205%20-%20Combining%20dataframes%20and%20scraping%20Canadian%20weather%20data.ipynb>`_ +* `Chapter 5: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%205%20-%20Combining%20dataframes%20and%20scraping%20Canadian%20weather%20data.ipynb>`_ Here you get to find out if it's cold in Montreal in the winter (spoiler: yes). Web scraping with pandas is fun! Here we combine dataframes. 
-- `Chapter 6: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%206%20-%20String%20Operations-%20Which%20month%20was%20the%20snowiest.ipynb>`_ +* `Chapter 6: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%206%20-%20String%20Operations-%20Which%20month%20was%20the%20snowiest.ipynb>`_ Strings with pandas are great. It has all these vectorized string operations and they're the best. We will turn a bunch of strings containing "Snow" into vectors of numbers in a trice. -- `Chapter 7: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%207%20-%20Cleaning%20up%20messy%20data.ipynb>`_ +* `Chapter 7: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%207%20-%20Cleaning%20up%20messy%20data.ipynb>`_ Cleaning up messy data is never a joy, but with pandas it's easier. -- `Chapter 8: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%208%20-%20How%20to%20deal%20with%20timestamps.ipynb>`_ +* `Chapter 8: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%208%20-%20How%20to%20deal%20with%20timestamps.ipynb>`_ Parsing Unix timestamps is confusing at first but it turns out to be really easy. -- `Chapter 9: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%209%20-%20Loading%20data%20from%20SQL%20databases.ipynb>`_ +* `Chapter 9: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%209%20-%20Loading%20data%20from%20SQL%20databases.ipynb>`_ Reading data from SQL databases. @@ -63,54 +63,54 @@ Lessons for new pandas users For more resources, please visit the main `repository <https://bitbucket.org/hrojas/learn-pandas>`__. 
-- `01 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/01%20-%20Lesson.ipynb>`_ - - Importing libraries - - Creating data sets - - Creating data frames - - Reading from CSV - - Exporting to CSV - - Finding maximums - - Plotting data +* `01 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/01%20-%20Lesson.ipynb>`_ + * Importing libraries + * Creating data sets + * Creating data frames + * Reading from CSV + * Exporting to CSV + * Finding maximums + * Plotting data -- `02 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/02%20-%20Lesson.ipynb>`_ - - Reading from TXT - - Exporting to TXT - - Selecting top/bottom records - - Descriptive statistics - - Grouping/sorting data +* `02 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/02%20-%20Lesson.ipynb>`_ + * Reading from TXT + * Exporting to TXT + * Selecting top/bottom records + * Descriptive statistics + * Grouping/sorting data -- `03 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/03%20-%20Lesson.ipynb>`_ - - Creating functions - - Reading from EXCEL - - Exporting to EXCEL - - Outliers - - Lambda functions - - Slice and dice data +* `03 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/03%20-%20Lesson.ipynb>`_ + * Creating functions + * Reading from EXCEL + * Exporting to EXCEL + * Outliers + * Lambda functions + * Slice and dice data -- `04 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/04%20-%20Lesson.ipynb>`_ - - Adding/deleting columns - - Index operations +* `04 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/04%20-%20Lesson.ipynb>`_ + * Adding/deleting columns + * Index operations -- `05 - Lesson: 
<http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/05%20-%20Lesson.ipynb>`_ - - Stack/Unstack/Transpose functions +* `05 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/05%20-%20Lesson.ipynb>`_ + * Stack/Unstack/Transpose functions -- `06 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/06%20-%20Lesson.ipynb>`_ - - GroupBy function +* `06 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/06%20-%20Lesson.ipynb>`_ + * GroupBy function -- `07 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/07%20-%20Lesson.ipynb>`_ - - Ways to calculate outliers +* `07 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/07%20-%20Lesson.ipynb>`_ + * Ways to calculate outliers -- `08 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/08%20-%20Lesson.ipynb>`_ - - Read from Microsoft SQL databases +* `08 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/08%20-%20Lesson.ipynb>`_ + * Read from Microsoft SQL databases -- `09 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/09%20-%20Lesson.ipynb>`_ - - Export to CSV/EXCEL/TXT +* `09 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/09%20-%20Lesson.ipynb>`_ + * Export to CSV/EXCEL/TXT -- `10 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/10%20-%20Lesson.ipynb>`_ - - Converting between different kinds of formats +* `10 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/10%20-%20Lesson.ipynb>`_ + * Converting between different kinds of formats -- `11 - Lesson: 
<http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/11%20-%20Lesson.ipynb>`_ - - Combining data from various sources +* `11 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/11%20-%20Lesson.ipynb>`_ + * Combining data from various sources Practical data analysis with Python @@ -119,13 +119,13 @@ Practical data analysis with Python This `guide <http://wavedatalab.github.io/datawithpython>`_ is a comprehensive introduction to the data analysis process using the Python data ecosystem and an interesting open dataset. There are four sections covering selected topics as follows: -- `Munging Data <http://wavedatalab.github.io/datawithpython/munge.html>`_ +* `Munging Data <http://wavedatalab.github.io/datawithpython/munge.html>`_ -- `Aggregating Data <http://wavedatalab.github.io/datawithpython/aggregate.html>`_ +* `Aggregating Data <http://wavedatalab.github.io/datawithpython/aggregate.html>`_ -- `Visualizing Data <http://wavedatalab.github.io/datawithpython/visualize.html>`_ +* `Visualizing Data <http://wavedatalab.github.io/datawithpython/visualize.html>`_ -- `Time Series <http://wavedatalab.github.io/datawithpython/timeseries.html>`_ +* `Time Series <http://wavedatalab.github.io/datawithpython/timeseries.html>`_ .. _tutorial-exercises-new-users: @@ -134,25 +134,25 @@ Exercises for new users Practice your skills with real data sets and exercises. For more resources, please visit the main `repository <https://github.com/guipsamora/pandas_exercises>`__. 
-- `01 - Getting & Knowing Your Data <https://github.com/guipsamora/pandas_exercises/tree/master/01_Getting_%26_Knowing_Your_Data>`_ +* `01 - Getting & Knowing Your Data <https://github.com/guipsamora/pandas_exercises/tree/master/01_Getting_%26_Knowing_Your_Data>`_ -- `02 - Filtering & Sorting <https://github.com/guipsamora/pandas_exercises/tree/master/02_Filtering_%26_Sorting>`_ +* `02 - Filtering & Sorting <https://github.com/guipsamora/pandas_exercises/tree/master/02_Filtering_%26_Sorting>`_ -- `03 - Grouping <https://github.com/guipsamora/pandas_exercises/tree/master/03_Grouping>`_ +* `03 - Grouping <https://github.com/guipsamora/pandas_exercises/tree/master/03_Grouping>`_ -- `04 - Apply <https://github.com/guipsamora/pandas_exercises/tree/master/04_Apply>`_ +* `04 - Apply <https://github.com/guipsamora/pandas_exercises/tree/master/04_Apply>`_ -- `05 - Merge <https://github.com/guipsamora/pandas_exercises/tree/master/05_Merge>`_ +* `05 - Merge <https://github.com/guipsamora/pandas_exercises/tree/master/05_Merge>`_ -- `06 - Stats <https://github.com/guipsamora/pandas_exercises/tree/master/06_Stats>`_ +* `06 - Stats <https://github.com/guipsamora/pandas_exercises/tree/master/06_Stats>`_ -- `07 - Visualization <https://github.com/guipsamora/pandas_exercises/tree/master/07_Visualization>`_ +* `07 - Visualization <https://github.com/guipsamora/pandas_exercises/tree/master/07_Visualization>`_ -- `08 - Creating Series and DataFrames <https://github.com/guipsamora/pandas_exercises/tree/master/08_Creating_Series_and_DataFrames/Pokemon>`_ +* `08 - Creating Series and DataFrames <https://github.com/guipsamora/pandas_exercises/tree/master/08_Creating_Series_and_DataFrames/Pokemon>`_ -- `09 - Time Series <https://github.com/guipsamora/pandas_exercises/tree/master/09_Time_Series>`_ +* `09 - Time Series <https://github.com/guipsamora/pandas_exercises/tree/master/09_Time_Series>`_ -- `10 - Deleting <https://github.com/guipsamora/pandas_exercises/tree/master/10_Deleting>`_ +* 
`10 - Deleting <https://github.com/guipsamora/pandas_exercises/tree/master/10_Deleting>`_ .. _tutorial-modern: @@ -164,29 +164,29 @@ Tutorial series written in 2016 by The source may be found in the GitHub repository `TomAugspurger/effective-pandas <https://github.com/TomAugspurger/effective-pandas>`_. -- `Modern Pandas <http://tomaugspurger.github.io/modern-1-intro.html>`_ -- `Method Chaining <http://tomaugspurger.github.io/method-chaining.html>`_ -- `Indexes <http://tomaugspurger.github.io/modern-3-indexes.html>`_ -- `Performance <http://tomaugspurger.github.io/modern-4-performance.html>`_ -- `Tidy Data <http://tomaugspurger.github.io/modern-5-tidy.html>`_ -- `Visualization <http://tomaugspurger.github.io/modern-6-visualization.html>`_ -- `Timeseries <http://tomaugspurger.github.io/modern-7-timeseries.html>`_ +* `Modern Pandas <http://tomaugspurger.github.io/modern-1-intro.html>`_ +* `Method Chaining <http://tomaugspurger.github.io/method-chaining.html>`_ +* `Indexes <http://tomaugspurger.github.io/modern-3-indexes.html>`_ +* `Performance <http://tomaugspurger.github.io/modern-4-performance.html>`_ +* `Tidy Data <http://tomaugspurger.github.io/modern-5-tidy.html>`_ +* `Visualization <http://tomaugspurger.github.io/modern-6-visualization.html>`_ +* `Timeseries <http://tomaugspurger.github.io/modern-7-timeseries.html>`_ Excel charts with pandas, vincent and xlsxwriter ------------------------------------------------ -- `Using Pandas and XlsxWriter to create Excel charts <https://pandas-xlsxwriter-charts.readthedocs.io/>`_ +* `Using Pandas and XlsxWriter to create Excel charts <https://pandas-xlsxwriter-charts.readthedocs.io/>`_ Video Tutorials --------------- -- `Pandas From The Ground Up <https://www.youtube.com/watch?v=5JnMutdy6Fw>`_ +* `Pandas From The Ground Up <https://www.youtube.com/watch?v=5JnMutdy6Fw>`_ (2015) (2:24) `GitHub repo <https://github.com/brandon-rhodes/pycon-pandas-tutorial>`__ -- `Introduction Into Pandas 
<https://www.youtube.com/watch?v=-NR-ynQg0YM>`_ +* `Introduction Into Pandas <https://www.youtube.com/watch?v=-NR-ynQg0YM>`_ (2016) (1:28) `GitHub repo <https://github.com/chendaniely/2016-pydata-carolinas-pandas>`__ -- `Pandas: .head() to .tail() <https://www.youtube.com/watch?v=7vuO9QXDN50>`_ +* `Pandas: .head() to .tail() <https://www.youtube.com/watch?v=7vuO9QXDN50>`_ (2016) (1:26) `GitHub repo <https://github.com/TomAugspurger/pydata-chi-h2t>`__ @@ -194,12 +194,12 @@ Video Tutorials Various Tutorials ----------------- -- `Wes McKinney's (pandas BDFL) blog <http://blog.wesmckinney.com/>`_ -- `Statistical analysis made easy in Python with SciPy and pandas DataFrames, by Randal Olson <http://www.randalolson.com/2012/08/06/statistical-analysis-made-easy-in-python/>`_ -- `Statistical Data Analysis in Python, tutorial videos, by Christopher Fonnesbeck from SciPy 2013 <http://conference.scipy.org/scipy2013/tutorial_detail.php?id=109>`_ -- `Financial analysis in Python, by Thomas Wiecki <http://nbviewer.ipython.org/github/twiecki/financial-analysis-python-tutorial/blob/master/1.%20Pandas%20Basics.ipynb>`_ -- `Intro to pandas data structures, by Greg Reda <http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/>`_ -- `Pandas and Python: Top 10, by Manish Amde <http://manishamde.github.io/blog/2013/03/07/pandas-and-python-top-10/>`_ -- `Pandas Tutorial, by Mikhail Semeniuk <http://www.bearrelroll.com/2013/05/python-pandas-tutorial>`_ -- `Pandas DataFrames Tutorial, by Karlijn Willems <http://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python>`_ -- `A concise tutorial with real life examples <https://tutswiki.com/pandas-cookbook/chapter1>`_ +* `Wes McKinney's (pandas BDFL) blog <http://blog.wesmckinney.com/>`_ +* `Statistical analysis made easy in Python with SciPy and pandas DataFrames, by Randal Olson <http://www.randalolson.com/2012/08/06/statistical-analysis-made-easy-in-python/>`_ +* `Statistical Data Analysis in Python, tutorial 
videos, by Christopher Fonnesbeck from SciPy 2013 <http://conference.scipy.org/scipy2013/tutorial_detail.php?id=109>`_ +* `Financial analysis in Python, by Thomas Wiecki <http://nbviewer.ipython.org/github/twiecki/financial-analysis-python-tutorial/blob/master/1.%20Pandas%20Basics.ipynb>`_ +* `Intro to pandas data structures, by Greg Reda <http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/>`_ +* `Pandas and Python: Top 10, by Manish Amde <http://manishamde.github.io/blog/2013/03/07/pandas-and-python-top-10/>`_ +* `Pandas Tutorial, by Mikhail Semeniuk <http://www.bearrelroll.com/2013/05/python-pandas-tutorial>`_ +* `Pandas DataFrames Tutorial, by Karlijn Willems <http://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python>`_ +* `A concise tutorial with real life examples <https://tutswiki.com/pandas-cookbook/chapter1>`_ diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst index 17197b805e86a..569a6fb7b7a0d 100644 --- a/doc/source/visualization.rst +++ b/doc/source/visualization.rst @@ -1381,9 +1381,9 @@ Plotting with error bars is supported in :meth:`DataFrame.plot` and :meth:`Serie Horizontal and vertical error bars can be supplied to the ``xerr`` and ``yerr`` keyword arguments to :meth:`~DataFrame.plot()`. The error values can be specified using a variety of formats: -- As a :class:`DataFrame` or ``dict`` of errors with column names matching the ``columns`` attribute of the plotting :class:`DataFrame` or matching the ``name`` attribute of the :class:`Series`. -- As a ``str`` indicating which of the columns of plotting :class:`DataFrame` contain the error values. -- As raw values (``list``, ``tuple``, or ``np.ndarray``). Must be the same length as the plotting :class:`DataFrame`/:class:`Series`. +* As a :class:`DataFrame` or ``dict`` of errors with column names matching the ``columns`` attribute of the plotting :class:`DataFrame` or matching the ``name`` attribute of the :class:`Series`. 
+* As a ``str`` indicating which of the columns of plotting :class:`DataFrame` contain the error values. +* As raw values (``list``, ``tuple``, or ``np.ndarray``). Must be the same length as the plotting :class:`DataFrame`/:class:`Series`. Asymmetrical error bars are also supported, however raw error values must be provided in this case. For a ``M`` length :class:`Series`, a ``Mx2`` array should be provided indicating lower and upper (or left and right) errors. For a ``MxN`` :class:`DataFrame`, asymmetrical errors should be in a ``Mx2xN`` array.
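The ``pivot_table`` arguments enumerated in the reshaping.rst hunk above (``data``, ``values``, ``index``, ``columns``, ``aggfunc``) can be exercised with a minimal sketch. The column names here are invented for illustration; ``aggfunc`` is passed explicitly, though the docs note it defaults to ``numpy.mean``:

```python
import pandas as pd

# Hypothetical sales data; the column names are illustrative only.
df = pd.DataFrame({
    "city": ["NY", "NY", "LA", "LA"],
    "year": [2017, 2018, 2017, 2018],
    "sales": [10.0, 20.0, 30.0, 40.0],
})

# index/columns select the grouping keys, values the column to aggregate,
# and aggfunc the aggregation (the documented default is numpy.mean).
table = pd.pivot_table(df, values="sales", index="city",
                       columns="year", aggfunc="mean")
```

With one observation per (city, year) pair, the mean is just that observation, so ``table`` reproduces the input values in a city-by-year grid.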
- [X] closes #21518 - [ ] tests added / passed - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21519
2018-06-18T08:42:05Z
2018-06-20T10:14:26Z
2018-06-20T10:14:26Z
2018-06-21T09:58:45Z
Fix passing empty label to df drop
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 9271f58947f95..cae0d1a754d89 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -61,6 +61,7 @@ Bug Fixes - Bug in :meth:`Index.get_indexer_non_unique` with categorical key (:issue:`21448`) - Bug in comparison operations for :class:`MultiIndex` where error was raised on equality / inequality comparison involving a MultiIndex with ``nlevels == 1`` (:issue:`21149`) +- Bug in :meth:`DataFrame.drop` behaviour is not consistent for unique and non-unique indexes (:issue:`21494`) - Bug in :func:`DataFrame.duplicated` with a large number of columns causing a 'maximum recursion depth exceeded' (:issue:`21524`). - diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 1780e359164e2..9902da4094404 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -3129,7 +3129,7 @@ def _drop_axis(self, labels, axis, level=None, errors='raise'): """ axis = self._get_axis_number(axis) axis_name = self._get_axis_name(axis) - axis, axis_ = self._get_axis(axis), axis + axis = self._get_axis(axis) if axis.is_unique: if level is not None: @@ -3138,24 +3138,25 @@ def _drop_axis(self, labels, axis, level=None, errors='raise'): new_axis = axis.drop(labels, level=level, errors=errors) else: new_axis = axis.drop(labels, errors=errors) - dropped = self.reindex(**{axis_name: new_axis}) - try: - dropped.axes[axis_].set_names(axis.names, inplace=True) - except AttributeError: - pass - result = dropped + result = self.reindex(**{axis_name: new_axis}) + # Case for non-unique axis else: labels = _ensure_object(com._index_labels_to_array(labels)) if level is not None: if not isinstance(axis, MultiIndex): raise AssertionError('axis must be a MultiIndex') indexer = ~axis.get_level_values(level).isin(labels) + + # GH 18561 MultiIndex.drop should raise if label is absent + if errors == 'raise' and indexer.all(): + raise KeyError('{} not found in axis'.format(labels)) 
else: indexer = ~axis.isin(labels) - - if errors == 'raise' and indexer.all(): - raise KeyError('{} not found in axis'.format(labels)) + # Check if label doesn't exist along axis + labels_missing = (axis.get_indexer_for(labels) == -1).any() + if errors == 'raise' and labels_missing: + raise KeyError('{} not found in axis'.format(labels)) slicer = [slice(None)] * self.ndim slicer[self._get_axis_number(axis_name)] = indexer diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index ac33ffad762cd..4f140a6e77b2f 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -4341,7 +4341,7 @@ def drop(self, labels, errors='raise'): Raises ------ KeyError - If none of the labels are found in the selected axis + If not all of the labels are found in the selected axis """ arr_dtype = 'object' if self.dtype == 'object' else None labels = com._index_labels_to_array(labels, dtype=arr_dtype) @@ -4350,7 +4350,7 @@ def drop(self, labels, errors='raise'): if mask.any(): if errors != 'ignore': raise KeyError( - 'labels %s not contained in axis' % labels[mask]) + '{} not found in axis'.format(labels[mask])) indexer = indexer[~mask] return self.delete(indexer) diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py index ab23a80acdaae..61b50f139dd10 100644 --- a/pandas/core/indexes/multi.py +++ b/pandas/core/indexes/multi.py @@ -1707,7 +1707,6 @@ def drop(self, labels, level=None, errors='raise'): if errors != 'ignore': raise ValueError('labels %s not contained in axis' % labels[mask]) - indexer = indexer[~mask] except Exception: pass diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py index 28e82f7585850..0e0d6598f5101 100644 --- a/pandas/tests/frame/test_axis_select_reindex.py +++ b/pandas/tests/frame/test_axis_select_reindex.py @@ -1151,3 +1151,18 @@ def test_raise_on_drop_duplicate_index(self, actual): expected_no_err = actual.T.drop('c', axis=1, level=level, 
errors='ignore') assert_frame_equal(expected_no_err.T, actual) + + @pytest.mark.parametrize('index', [[1, 2, 3], [1, 1, 2]]) + @pytest.mark.parametrize('drop_labels', [[], [1], [2]]) + def test_drop_empty_list(self, index, drop_labels): + # GH 21494 + expected_index = [i for i in index if i not in drop_labels] + frame = pd.DataFrame(index=index).drop(drop_labels) + tm.assert_frame_equal(frame, pd.DataFrame(index=expected_index)) + + @pytest.mark.parametrize('index', [[1, 2, 3], [1, 2, 2]]) + @pytest.mark.parametrize('drop_labels', [[1, 4], [4, 5]]) + def test_drop_non_empty_list(self, index, drop_labels): + # GH 21494 + with tm.assert_raises_regex(KeyError, 'not found in axis'): + pd.DataFrame(index=index).drop(drop_labels) diff --git a/pandas/tests/series/indexing/test_alter_index.py b/pandas/tests/series/indexing/test_alter_index.py index bcd5a64402c33..561d6a9b42508 100644 --- a/pandas/tests/series/indexing/test_alter_index.py +++ b/pandas/tests/series/indexing/test_alter_index.py @@ -472,54 +472,86 @@ def test_rename(): assert result.name == expected.name -def test_drop(): - # unique - s = Series([1, 2], index=['one', 'two']) - expected = Series([1], index=['one']) - result = s.drop(['two']) - assert_series_equal(result, expected) - result = s.drop('two', axis='rows') - assert_series_equal(result, expected) - - # non-unique - # GH 5248 - s = Series([1, 1, 2], index=['one', 'two', 'one']) - expected = Series([1, 2], index=['one', 'one']) - result = s.drop(['two'], axis=0) - assert_series_equal(result, expected) - result = s.drop('two') - assert_series_equal(result, expected) - - expected = Series([1], index=['two']) - result = s.drop(['one']) - assert_series_equal(result, expected) - result = s.drop('one') - assert_series_equal(result, expected) +@pytest.mark.parametrize( + 'data, index, drop_labels,' + ' axis, expected_data, expected_index', + [ + # Unique Index + ([1, 2], ['one', 'two'], ['two'], + 0, [1], ['one']), + ([1, 2], ['one', 'two'], ['two'], + 
'rows', [1], ['one']), + ([1, 1, 2], ['one', 'two', 'one'], ['two'], + 0, [1, 2], ['one', 'one']), + + # GH 5248 Non-Unique Index + ([1, 1, 2], ['one', 'two', 'one'], 'two', + 0, [1, 2], ['one', 'one']), + ([1, 1, 2], ['one', 'two', 'one'], ['one'], + 0, [1], ['two']), + ([1, 1, 2], ['one', 'two', 'one'], 'one', + 0, [1], ['two'])]) +def test_drop_unique_and_non_unique_index(data, index, axis, drop_labels, + expected_data, expected_index): + + s = Series(data=data, index=index) + result = s.drop(drop_labels, axis=axis) + expected = Series(data=expected_data, index=expected_index) + tm.assert_series_equal(result, expected) - # single string/tuple-like - s = Series(range(3), index=list('abc')) - pytest.raises(KeyError, s.drop, 'bc') - pytest.raises(KeyError, s.drop, ('a',)) +@pytest.mark.parametrize( + 'data, index, drop_labels,' + ' axis, error_type, error_desc', + [ + # single string/tuple-like + (range(3), list('abc'), 'bc', + 0, KeyError, 'not found in axis'), + + # bad axis + (range(3), list('abc'), ('a',), + 0, KeyError, 'not found in axis'), + (range(3), list('abc'), 'one', + 'columns', ValueError, 'No axis named columns')]) +def test_drop_exception_raised(data, index, drop_labels, + axis, error_type, error_desc): + + with tm.assert_raises_regex(error_type, error_desc): + Series(data, index=index).drop(drop_labels, axis=axis) + + +def test_drop_with_ignore_errors(): # errors='ignore' s = Series(range(3), index=list('abc')) result = s.drop('bc', errors='ignore') - assert_series_equal(result, s) + tm.assert_series_equal(result, s) result = s.drop(['a', 'd'], errors='ignore') expected = s.iloc[1:] - assert_series_equal(result, expected) - - # bad axis - pytest.raises(ValueError, s.drop, 'one', axis='columns') + tm.assert_series_equal(result, expected) # GH 8522 s = Series([2, 3], index=[True, False]) assert s.index.is_object() result = s.drop(True) expected = Series([3], index=[False]) - assert_series_equal(result, expected) + tm.assert_series_equal(result, 
expected) + - # GH 16877 - s = Series([2, 3], index=[0, 1]) - with tm.assert_raises_regex(KeyError, 'not contained in axis'): - s.drop([False, True]) +@pytest.mark.parametrize('index', [[1, 2, 3], [1, 1, 3]]) +@pytest.mark.parametrize('drop_labels', [[], [1], [3]]) +def test_drop_empty_list(index, drop_labels): + # GH 21494 + expected_index = [i for i in index if i not in drop_labels] + series = pd.Series(index=index).drop(drop_labels) + tm.assert_series_equal(series, pd.Series(index=expected_index)) + + +@pytest.mark.parametrize('data, index, drop_labels', [ + (None, [1, 2, 3], [1, 4]), + (None, [1, 2, 2], [1, 4]), + ([2, 3], [0, 1], [False, True]) +]) +def test_drop_non_empty_list(data, index, drop_labels): + # GH 21494 and GH 16877 + with tm.assert_raises_regex(KeyError, 'not found in axis'): + pd.Series(data=data, index=index).drop(drop_labels)
- Closes #21494
- Tests added / passed
- `drop` method in `indexes/base.py`: the docs say a `KeyError` should only be raised if **none** of the labels are found in the selected axis. However, `pd.DataFrame(index=[1,2,3]).drop([1, 4])` throws.
- Makes behaviour consistent for `.drop()` across unique/non-unique indexes. Both of the below will now raise a `KeyError`:
  - `pd.DataFrame(index=[1,2,3]).drop([1, 4])`
  - `pd.DataFrame(index=[1,1,3]).drop([1, 4])`
- Removes unused vars `indexer` and `_axis`
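As a rough sketch of the behaviour this PR describes (assuming a pandas version that includes the change, i.e. one where `drop` raises when *any* requested label is missing), the examples from the body play out like this:

```python
import pandas as pd

# A frame with a non-unique index, as in the second example above.
df = pd.DataFrame(index=[1, 1, 3])

# Dropping a label that exists removes every occurrence of it.
print(list(df.drop([1]).index))  # [3]

# Dropping a mix of existing and missing labels raises KeyError...
try:
    df.drop([1, 4])
except KeyError as exc:
    print("KeyError:", exc)

# ...unless errors='ignore' is passed, which drops what it can.
print(list(df.drop([1, 4], errors="ignore").index))  # [3]
```

The same pattern applies to a unique index (`[1, 2, 3]`): before this change the unique and non-unique code paths disagreed on when to raise, and the PR unifies them behind the `get_indexer_for(...) == -1` check shown in the `generic.py` hunk.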
https://api.github.com/repos/pandas-dev/pandas/pulls/21515
2018-06-17T19:19:30Z
2018-06-21T08:13:02Z
2018-06-21T08:13:02Z
2018-06-29T15:00:18Z
split up pandas/tests/indexes/test_multi.py #18644
diff --git a/pandas/tests/indexes/multi/__init__.py b/pandas/tests/indexes/multi/__init__.py new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/pandas/tests/indexes/multi/conftest.py b/pandas/tests/indexes/multi/conftest.py new file mode 100644 index 0000000000000..6cf9003500b61 --- /dev/null +++ b/pandas/tests/indexes/multi/conftest.py @@ -0,0 +1,43 @@ +# -*- coding: utf-8 -*- + +import numpy as np +import pytest +from pandas import Index, MultiIndex + + +@pytest.fixture +def idx(): + # a MultiIndex used to test the general functionality of the + # general functionality of this object + major_axis = Index(['foo', 'bar', 'baz', 'qux']) + minor_axis = Index(['one', 'two']) + + major_labels = np.array([0, 0, 1, 2, 3, 3]) + minor_labels = np.array([0, 1, 0, 1, 0, 1]) + index_names = ['first', 'second'] + index = MultiIndex( + levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels], + names=index_names, + verify_integrity=False + ) + return index + + +@pytest.fixture +def index_names(): + # names that match those in the idx fixture for testing equality of + # names assigned to the idx + return ['first', 'second'] + + +@pytest.fixture +def holder(): + # the MultiIndex constructor used to base compatibility with pickle + return MultiIndex + + +@pytest.fixture +def compat_props(): + # a MultiIndex must have these properties associated with it + return ['shape', 'ndim', 'size'] diff --git a/pandas/tests/indexes/data/mindex_073.pickle b/pandas/tests/indexes/multi/data/mindex_073.pickle similarity index 100% rename from pandas/tests/indexes/data/mindex_073.pickle rename to pandas/tests/indexes/multi/data/mindex_073.pickle diff --git a/pandas/tests/indexes/data/multiindex_v1.pickle b/pandas/tests/indexes/multi/data/multiindex_v1.pickle similarity index 100% rename from pandas/tests/indexes/data/multiindex_v1.pickle rename to pandas/tests/indexes/multi/data/multiindex_v1.pickle diff --git a/pandas/tests/indexes/multi/test_analytics.py 
b/pandas/tests/indexes/multi/test_analytics.py new file mode 100644 index 0000000000000..072356e4923a6 --- /dev/null +++ b/pandas/tests/indexes/multi/test_analytics.py @@ -0,0 +1,8 @@ +import pytest + + +def test_shift(idx): + + # GH8083 test the base class for shift + pytest.raises(NotImplementedError, idx.shift, 1) + pytest.raises(NotImplementedError, idx.shift, 1, 2) diff --git a/pandas/tests/indexes/multi/test_compat.py b/pandas/tests/indexes/multi/test_compat.py new file mode 100644 index 0000000000000..0dfe322c2eef9 --- /dev/null +++ b/pandas/tests/indexes/multi/test_compat.py @@ -0,0 +1,122 @@ +# -*- coding: utf-8 -*- + + +import numpy as np +import pandas.util.testing as tm +import pytest +from pandas import MultiIndex +from pandas.compat import PY3, long + + +def test_numeric_compat(idx): + tm.assert_raises_regex(TypeError, "cannot perform __mul__", + lambda: idx * 1) + tm.assert_raises_regex(TypeError, "cannot perform __rmul__", + lambda: 1 * idx) + + div_err = "cannot perform __truediv__" if PY3 \ + else "cannot perform __div__" + tm.assert_raises_regex(TypeError, div_err, lambda: idx / 1) + div_err = div_err.replace(' __', ' __r') + tm.assert_raises_regex(TypeError, div_err, lambda: 1 / idx) + tm.assert_raises_regex(TypeError, "cannot perform __floordiv__", + lambda: idx // 1) + tm.assert_raises_regex(TypeError, "cannot perform __rfloordiv__", + lambda: 1 // idx) + + +def test_logical_compat(idx): + tm.assert_raises_regex(TypeError, 'cannot perform all', + lambda: idx.all()) + tm.assert_raises_regex(TypeError, 'cannot perform any', + lambda: idx.any()) + + +def test_boolean_context_compat(idx): + + with pytest.raises(ValueError): + bool(idx) + + +def test_boolean_context_compat2(): + + # boolean context compat + # GH7897 + i1 = MultiIndex.from_tuples([('A', 1), ('A', 2)]) + i2 = MultiIndex.from_tuples([('A', 1), ('A', 3)]) + common = i1.intersection(i2) + + with pytest.raises(ValueError): + bool(common) + + +def test_inplace_mutation_resets_values(): + 
levels = [['a', 'b', 'c'], [4]] + levels2 = [[1, 2, 3], ['a']] + labels = [[0, 1, 0, 2, 2, 0], [0, 0, 0, 0, 0, 0]] + + mi1 = MultiIndex(levels=levels, labels=labels) + mi2 = MultiIndex(levels=levels2, labels=labels) + vals = mi1.values.copy() + vals2 = mi2.values.copy() + + assert mi1._tuples is not None + + # Make sure level setting works + new_vals = mi1.set_levels(levels2).values + tm.assert_almost_equal(vals2, new_vals) + + # Non-inplace doesn't kill _tuples [implementation detail] + tm.assert_almost_equal(mi1._tuples, vals) + + # ...and values is still same too + tm.assert_almost_equal(mi1.values, vals) + + # Inplace should kill _tuples + mi1.set_levels(levels2, inplace=True) + tm.assert_almost_equal(mi1.values, vals2) + + # Make sure label setting works too + labels2 = [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]] + exp_values = np.empty((6,), dtype=object) + exp_values[:] = [(long(1), 'a')] * 6 + + # Must be 1d array of tuples + assert exp_values.shape == (6,) + new_values = mi2.set_labels(labels2).values + + # Not inplace shouldn't change + tm.assert_almost_equal(mi2._tuples, vals2) + + # Should have correct values + tm.assert_almost_equal(exp_values, new_values) + + # ...and again setting inplace should kill _tuples, etc + mi2.set_labels(labels2, inplace=True) + tm.assert_almost_equal(mi2.values, new_values) + + +def test_ndarray_compat_properties(idx, compat_props): + assert idx.T.equals(idx) + assert idx.transpose().equals(idx) + + values = idx.values + for prop in compat_props: + assert getattr(idx, prop) == getattr(values, prop) + + # test for validity + idx.nbytes + idx.values.nbytes + + +def test_compat(indices): + assert indices.tolist() == list(indices) + + +def test_pickle_compat_construction(holder): + # this is testing for pickle compat + if holder is None: + return + + # need an object to create with + pytest.raises(TypeError, holder) diff --git a/pandas/tests/indexes/multi/test_constructor.py b/pandas/tests/indexes/multi/test_constructor.py new 
file mode 100644 index 0000000000000..9577662bda366 --- /dev/null +++ b/pandas/tests/indexes/multi/test_constructor.py @@ -0,0 +1,434 @@ +# -*- coding: utf-8 -*- + +import re + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +import pytest +from pandas import Index, MultiIndex, date_range +from pandas._libs.tslib import Timestamp +from pandas.compat import lrange, range +from pandas.core.dtypes.cast import construct_1d_object_array_from_listlike + + +def test_constructor_single_level(): + result = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']], + labels=[[0, 1, 2, 3]], names=['first']) + assert isinstance(result, MultiIndex) + expected = Index(['foo', 'bar', 'baz', 'qux'], name='first') + tm.assert_index_equal(result.levels[0], expected) + assert result.names == ['first'] + + +def test_constructor_no_levels(): + tm.assert_raises_regex(ValueError, "non-zero number " + "of levels/labels", + MultiIndex, levels=[], labels=[]) + both_re = re.compile('Must pass both levels and labels') + with tm.assert_raises_regex(TypeError, both_re): + MultiIndex(levels=[]) + with tm.assert_raises_regex(TypeError, both_re): + MultiIndex(labels=[]) + + +def test_constructor_nonhashable_names(): + # GH 20527 + levels = [[1, 2], [u'one', u'two']] + labels = [[0, 0, 1, 1], [0, 1, 0, 1]] + names = ((['foo'], ['bar'])) + message = "MultiIndex.name must be a hashable type" + tm.assert_raises_regex(TypeError, message, + MultiIndex, levels=levels, + labels=labels, names=names) + + # With .rename() + mi = MultiIndex(levels=[[1, 2], [u'one', u'two']], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], + names=('foo', 'bar')) + renamed = [['foor'], ['barr']] + tm.assert_raises_regex(TypeError, message, mi.rename, names=renamed) + # With .set_names() + tm.assert_raises_regex(TypeError, message, mi.set_names, names=renamed) + + +def test_constructor_mismatched_label_levels(idx): + labels = [np.array([1]), np.array([2]), np.array([3])] + levels = ["a"] + 
tm.assert_raises_regex(ValueError, "Length of levels and labels " + "must be the same", MultiIndex, + levels=levels, labels=labels) + length_error = re.compile('>= length of level') + label_error = re.compile(r'Unequal label lengths: \[4, 2\]') + + # important to check that it's looking at the right thing. + with tm.assert_raises_regex(ValueError, length_error): + MultiIndex(levels=[['a'], ['b']], + labels=[[0, 1, 2, 3], [0, 3, 4, 1]]) + + with tm.assert_raises_regex(ValueError, label_error): + MultiIndex(levels=[['a'], ['b']], labels=[[0, 0, 0, 0], [0, 0]]) + + # external API + with tm.assert_raises_regex(ValueError, length_error): + idx.copy().set_levels([['a'], ['b']]) + + with tm.assert_raises_regex(ValueError, label_error): + idx.copy().set_labels([[0, 0, 0, 0], [0, 0]]) + + +def test_copy_in_constructor(): + levels = np.array(["a", "b", "c"]) + labels = np.array([1, 1, 2, 0, 0, 1, 1]) + val = labels[0] + mi = MultiIndex(levels=[levels, levels], labels=[labels, labels], + copy=True) + assert mi.labels[0][0] == val + labels[0] = 15 + assert mi.labels[0][0] == val + val = levels[0] + levels[0] = "PANDA" + assert mi.levels[0][0] == val + + +def test_from_arrays(idx): + arrays = [] + for lev, lab in zip(idx.levels, idx.labels): + arrays.append(np.asarray(lev).take(lab)) + + # list of arrays as input + result = MultiIndex.from_arrays(arrays, names=idx.names) + tm.assert_index_equal(result, idx) + + # infer correctly + result = MultiIndex.from_arrays([[pd.NaT, Timestamp('20130101')], + ['a', 'b']]) + assert result.levels[0].equals(Index([Timestamp('20130101')])) + assert result.levels[1].equals(Index(['a', 'b'])) + + +def test_from_arrays_iterator(idx): + # GH 18434 + arrays = [] + for lev, lab in zip(idx.levels, idx.labels): + arrays.append(np.asarray(lev).take(lab)) + + # iterator as input + result = MultiIndex.from_arrays(iter(arrays), names=idx.names) + tm.assert_index_equal(result, idx) + + # invalid iterator input + with tm.assert_raises_regex( + TypeError, 
"Input must be a list / sequence of array-likes."): + MultiIndex.from_arrays(0) + + +def test_from_arrays_index_series_datetimetz(): + idx1 = pd.date_range('2015-01-01 10:00', freq='D', periods=3, + tz='US/Eastern') + idx2 = pd.date_range('2015-01-01 10:00', freq='H', periods=3, + tz='Asia/Tokyo') + result = pd.MultiIndex.from_arrays([idx1, idx2]) + tm.assert_index_equal(result.get_level_values(0), idx1) + tm.assert_index_equal(result.get_level_values(1), idx2) + + result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)]) + tm.assert_index_equal(result2.get_level_values(0), idx1) + tm.assert_index_equal(result2.get_level_values(1), idx2) + + tm.assert_index_equal(result, result2) + + +def test_from_arrays_index_series_timedelta(): + idx1 = pd.timedelta_range('1 days', freq='D', periods=3) + idx2 = pd.timedelta_range('2 hours', freq='H', periods=3) + result = pd.MultiIndex.from_arrays([idx1, idx2]) + tm.assert_index_equal(result.get_level_values(0), idx1) + tm.assert_index_equal(result.get_level_values(1), idx2) + + result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)]) + tm.assert_index_equal(result2.get_level_values(0), idx1) + tm.assert_index_equal(result2.get_level_values(1), idx2) + + tm.assert_index_equal(result, result2) + + +def test_from_arrays_index_series_period(): + idx1 = pd.period_range('2011-01-01', freq='D', periods=3) + idx2 = pd.period_range('2015-01-01', freq='H', periods=3) + result = pd.MultiIndex.from_arrays([idx1, idx2]) + tm.assert_index_equal(result.get_level_values(0), idx1) + tm.assert_index_equal(result.get_level_values(1), idx2) + + result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)]) + tm.assert_index_equal(result2.get_level_values(0), idx1) + tm.assert_index_equal(result2.get_level_values(1), idx2) + + tm.assert_index_equal(result, result2) + + +def test_from_arrays_index_datetimelike_mixed(): + idx1 = pd.date_range('2015-01-01 10:00', freq='D', periods=3, + tz='US/Eastern') + idx2 = 
pd.date_range('2015-01-01 10:00', freq='H', periods=3) + idx3 = pd.timedelta_range('1 days', freq='D', periods=3) + idx4 = pd.period_range('2011-01-01', freq='D', periods=3) + + result = pd.MultiIndex.from_arrays([idx1, idx2, idx3, idx4]) + tm.assert_index_equal(result.get_level_values(0), idx1) + tm.assert_index_equal(result.get_level_values(1), idx2) + tm.assert_index_equal(result.get_level_values(2), idx3) + tm.assert_index_equal(result.get_level_values(3), idx4) + + result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), + pd.Series(idx2), + pd.Series(idx3), + pd.Series(idx4)]) + tm.assert_index_equal(result2.get_level_values(0), idx1) + tm.assert_index_equal(result2.get_level_values(1), idx2) + tm.assert_index_equal(result2.get_level_values(2), idx3) + tm.assert_index_equal(result2.get_level_values(3), idx4) + + tm.assert_index_equal(result, result2) + + +def test_from_arrays_index_series_categorical(): + # GH13743 + idx1 = pd.CategoricalIndex(list("abcaab"), categories=list("bac"), + ordered=False) + idx2 = pd.CategoricalIndex(list("abcaab"), categories=list("bac"), + ordered=True) + + result = pd.MultiIndex.from_arrays([idx1, idx2]) + tm.assert_index_equal(result.get_level_values(0), idx1) + tm.assert_index_equal(result.get_level_values(1), idx2) + + result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)]) + tm.assert_index_equal(result2.get_level_values(0), idx1) + tm.assert_index_equal(result2.get_level_values(1), idx2) + + result3 = pd.MultiIndex.from_arrays([idx1.values, idx2.values]) + tm.assert_index_equal(result3.get_level_values(0), idx1) + tm.assert_index_equal(result3.get_level_values(1), idx2) + + +def test_from_arrays_empty(): + # 0 levels + with tm.assert_raises_regex( + ValueError, "Must pass non-zero number of levels/labels"): + MultiIndex.from_arrays(arrays=[]) + + # 1 level + result = MultiIndex.from_arrays(arrays=[[]], names=['A']) + assert isinstance(result, MultiIndex) + expected = Index([], name='A') + 
tm.assert_index_equal(result.levels[0], expected) + + # N levels + for N in [2, 3]: + arrays = [[]] * N + names = list('ABC')[:N] + result = MultiIndex.from_arrays(arrays=arrays, names=names) + expected = MultiIndex(levels=[[]] * N, labels=[[]] * N, + names=names) + tm.assert_index_equal(result, expected) + + +def test_from_arrays_invalid_input(): + invalid_inputs = [1, [1], [1, 2], [[1], 2], + 'a', ['a'], ['a', 'b'], [['a'], 'b']] + for i in invalid_inputs: + pytest.raises(TypeError, MultiIndex.from_arrays, arrays=i) + + +def test_from_arrays_different_lengths(): + # see gh-13599 + idx1 = [1, 2, 3] + idx2 = ['a', 'b'] + tm.assert_raises_regex(ValueError, '^all arrays must ' + 'be same length$', + MultiIndex.from_arrays, [idx1, idx2]) + + idx1 = [] + idx2 = ['a', 'b'] + tm.assert_raises_regex(ValueError, '^all arrays must ' + 'be same length$', + MultiIndex.from_arrays, [idx1, idx2]) + + idx1 = [1, 2, 3] + idx2 = [] + tm.assert_raises_regex(ValueError, '^all arrays must ' + 'be same length$', + MultiIndex.from_arrays, [idx1, idx2]) + + +def test_from_tuples(): + tm.assert_raises_regex(TypeError, 'Cannot infer number of levels ' + 'from empty list', + MultiIndex.from_tuples, []) + + expected = MultiIndex(levels=[[1, 3], [2, 4]], + labels=[[0, 1], [0, 1]], + names=['a', 'b']) + + # input tuples + result = MultiIndex.from_tuples(((1, 2), (3, 4)), names=['a', 'b']) + tm.assert_index_equal(result, expected) + + +def test_from_tuples_iterator(): + # GH 18434 + # input iterator for tuples + expected = MultiIndex(levels=[[1, 3], [2, 4]], + labels=[[0, 1], [0, 1]], + names=['a', 'b']) + + result = MultiIndex.from_tuples(zip([1, 3], [2, 4]), names=['a', 'b']) + tm.assert_index_equal(result, expected) + + # input non-iterables + with tm.assert_raises_regex( + TypeError, 'Input must be a list / sequence of tuple-likes.'): + MultiIndex.from_tuples(0) + + +def test_from_tuples_empty(): + # GH 16777 + result = MultiIndex.from_tuples([], names=['a', 'b']) + expected = 
MultiIndex.from_arrays(arrays=[[], []], + names=['a', 'b']) + tm.assert_index_equal(result, expected) + + +def test_from_tuples_index_values(idx): + result = MultiIndex.from_tuples(idx) + assert (result.values == idx.values).all() + + +def test_from_product_empty(): + # 0 levels + with tm.assert_raises_regex( + ValueError, "Must pass non-zero number of levels/labels"): + MultiIndex.from_product([]) + + # 1 level + result = MultiIndex.from_product([[]], names=['A']) + expected = pd.Index([], name='A') + tm.assert_index_equal(result.levels[0], expected) + + # 2 levels + l1 = [[], ['foo', 'bar', 'baz'], []] + l2 = [[], [], ['a', 'b', 'c']] + names = ['A', 'B'] + for first, second in zip(l1, l2): + result = MultiIndex.from_product([first, second], names=names) + expected = MultiIndex(levels=[first, second], + labels=[[], []], names=names) + tm.assert_index_equal(result, expected) + + # GH12258 + names = ['A', 'B', 'C'] + for N in range(4): + lvl2 = lrange(N) + result = MultiIndex.from_product([[], lvl2, []], names=names) + expected = MultiIndex(levels=[[], lvl2, []], + labels=[[], [], []], names=names) + tm.assert_index_equal(result, expected) + + +def test_from_product_invalid_input(): + invalid_inputs = [1, [1], [1, 2], [[1], 2], + 'a', ['a'], ['a', 'b'], [['a'], 'b']] + for i in invalid_inputs: + pytest.raises(TypeError, MultiIndex.from_product, iterables=i) + + +def test_from_product_datetimeindex(): + dt_index = date_range('2000-01-01', periods=2) + mi = pd.MultiIndex.from_product([[1, 2], dt_index]) + etalon = construct_1d_object_array_from_listlike([(1, pd.Timestamp( + '2000-01-01')), (1, pd.Timestamp('2000-01-02')), (2, pd.Timestamp( + '2000-01-01')), (2, pd.Timestamp('2000-01-02'))]) + tm.assert_numpy_array_equal(mi.values, etalon) + + +def test_from_product_index_series_categorical(): + # GH13743 + first = ['foo', 'bar'] + for ordered in [False, True]: + idx = pd.CategoricalIndex(list("abcaab"), categories=list("bac"), + ordered=ordered) + expected = 
pd.CategoricalIndex(list("abcaab") + list("abcaab"), + categories=list("bac"), + ordered=ordered) + + for arr in [idx, pd.Series(idx), idx.values]: + result = pd.MultiIndex.from_product([first, arr]) + tm.assert_index_equal(result.get_level_values(1), expected) + + +def test_from_product(): + + first = ['foo', 'bar', 'buz'] + second = ['a', 'b', 'c'] + names = ['first', 'second'] + result = MultiIndex.from_product([first, second], names=names) + + tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), ('bar', 'a'), + ('bar', 'b'), ('bar', 'c'), ('buz', 'a'), ('buz', 'b'), + ('buz', 'c')] + expected = MultiIndex.from_tuples(tuples, names=names) + + tm.assert_index_equal(result, expected) + + +def test_from_product_iterator(): + # GH 18434 + first = ['foo', 'bar', 'buz'] + second = ['a', 'b', 'c'] + names = ['first', 'second'] + tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), ('bar', 'a'), + ('bar', 'b'), ('bar', 'c'), ('buz', 'a'), ('buz', 'b'), + ('buz', 'c')] + expected = MultiIndex.from_tuples(tuples, names=names) + + # iterator as input + result = MultiIndex.from_product(iter([first, second]), names=names) + tm.assert_index_equal(result, expected) + + # Invalid non-iterable input + with tm.assert_raises_regex( + TypeError, "Input must be a list / sequence of iterables."): + MultiIndex.from_product(0) + + +def test_create_index_existing_name(idx): + + # GH11193, when an existing index is passed, and a new name is not + # specified, the new index should inherit the previous object name + index = idx + index.names = ['foo', 'bar'] + result = pd.Index(index) + tm.assert_index_equal( + result, Index(Index([('foo', 'one'), ('foo', 'two'), + ('bar', 'one'), ('baz', 'two'), + ('qux', 'one'), ('qux', 'two')], + dtype='object'), + names=['foo', 'bar'])) + + result = pd.Index(index, names=['A', 'B']) + tm.assert_index_equal( + result, + Index(Index([('foo', 'one'), ('foo', 'two'), ('bar', 'one'), + ('baz', 'two'), ('qux', 'one'), ('qux', 'two')], + dtype='object'), 
names=['A', 'B'])) + + +def test_tuples_with_name_string(): + # GH 15110 and GH 14848 + + li = [(0, 0, 1), (0, 1, 0), (1, 0, 0)] + with pytest.raises(ValueError): + pd.Index(li, name='abc') + with pytest.raises(ValueError): + pd.Index(li, name='a') diff --git a/pandas/tests/indexes/multi/test_contains.py b/pandas/tests/indexes/multi/test_contains.py new file mode 100644 index 0000000000000..aaed4467816da --- /dev/null +++ b/pandas/tests/indexes/multi/test_contains.py @@ -0,0 +1,93 @@ +# -*- coding: utf-8 -*- + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +import pytest +from pandas import MultiIndex +from pandas.compat import PYPY + + +def test_contains_top_level(): + midx = MultiIndex.from_product([['A', 'B'], [1, 2]]) + assert 'A' in midx + assert 'A' not in midx._engine + + +def test_contains_with_nat(): + # MI with a NaT + mi = MultiIndex(levels=[['C'], + pd.date_range('2012-01-01', periods=5)], + labels=[[0, 0, 0, 0, 0, 0], [-1, 0, 1, 2, 3, 4]], + names=[None, 'B']) + assert ('C', pd.Timestamp('2012-01-01')) in mi + for val in mi.values: + assert val in mi + + +def test_contains(idx): + assert ('foo', 'two') in idx + assert ('bar', 'two') not in idx + assert None not in idx + + +@pytest.mark.skipif(not PYPY, reason="tuples cmp recursively on PyPy") +def test_isin_nan_pypy(): + idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]]) + tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]), + np.array([False, True])) + tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]), + np.array([False, True])) + + +def test_isin(): + values = [('foo', 2), ('bar', 3), ('quux', 4)] + + idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange( + 4)]) + result = idx.isin(values) + expected = np.array([False, False, True, True]) + tm.assert_numpy_array_equal(result, expected) + + # empty, return dtype bool + idx = MultiIndex.from_arrays([[], []]) + result = idx.isin(values) + assert len(result) == 0 + assert result.dtype 
== np.bool_ + + +@pytest.mark.skipif(PYPY, reason="tuples cmp recursively on PyPy") +def test_isin_nan_not_pypy(): + idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]]) + tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]), + np.array([False, False])) + tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]), + np.array([False, False])) + + +def test_isin_level_kwarg(): + idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange( + 4)]) + + vals_0 = ['foo', 'bar', 'quux'] + vals_1 = [2, 3, 10] + + expected = np.array([False, False, True, True]) + tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=0)) + tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=-2)) + + tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=1)) + tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=-1)) + + pytest.raises(IndexError, idx.isin, vals_0, level=5) + pytest.raises(IndexError, idx.isin, vals_0, level=-5) + + pytest.raises(KeyError, idx.isin, vals_0, level=1.0) + pytest.raises(KeyError, idx.isin, vals_1, level=-1.0) + pytest.raises(KeyError, idx.isin, vals_1, level='A') + + idx.names = ['A', 'B'] + tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level='A')) + tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level='B')) + + pytest.raises(KeyError, idx.isin, vals_1, level='C') diff --git a/pandas/tests/indexes/multi/test_conversion.py b/pandas/tests/indexes/multi/test_conversion.py new file mode 100644 index 0000000000000..ff99941ba9948 --- /dev/null +++ b/pandas/tests/indexes/multi/test_conversion.py @@ -0,0 +1,176 @@ +# -*- coding: utf-8 -*- + + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +import pytest +from pandas import DataFrame, MultiIndex, date_range +from pandas.compat import PY3, range +from pandas.util.testing import assert_almost_equal + + +def test_tolist(idx): + result = idx.tolist() + exp = list(idx.values) + assert result == exp + + +def test_to_frame(): + 
tuples = [(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two')] + + index = MultiIndex.from_tuples(tuples) + result = index.to_frame(index=False) + expected = DataFrame(tuples) + tm.assert_frame_equal(result, expected) + + result = index.to_frame() + expected.index = index + tm.assert_frame_equal(result, expected) + + tuples = [(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two')] + index = MultiIndex.from_tuples(tuples, names=['first', 'second']) + result = index.to_frame(index=False) + expected = DataFrame(tuples) + expected.columns = ['first', 'second'] + tm.assert_frame_equal(result, expected) + + result = index.to_frame() + expected.index = index + tm.assert_frame_equal(result, expected) + + index = MultiIndex.from_product([range(5), + pd.date_range('20130101', periods=3)]) + result = index.to_frame(index=False) + expected = DataFrame( + {0: np.repeat(np.arange(5, dtype='int64'), 3), + 1: np.tile(pd.date_range('20130101', periods=3), 5)}) + tm.assert_frame_equal(result, expected) + + index = MultiIndex.from_product([range(5), + pd.date_range('20130101', periods=3)]) + result = index.to_frame() + expected.index = index + tm.assert_frame_equal(result, expected) + + +def test_to_hierarchical(): + index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), ( + 2, 'two')]) + result = index.to_hierarchical(3) + expected = MultiIndex(levels=[[1, 2], ['one', 'two']], + labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1], + [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]]) + tm.assert_index_equal(result, expected) + assert result.names == index.names + + # K > 1 + result = index.to_hierarchical(3, 2) + expected = MultiIndex(levels=[[1, 2], ['one', 'two']], + labels=[[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1], + [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]]) + tm.assert_index_equal(result, expected) + assert result.names == index.names + + # non-sorted + index = MultiIndex.from_tuples([(2, 'c'), (1, 'b'), + (2, 'a'), (2, 'b')], + names=['N1', 'N2']) + + result = index.to_hierarchical(2) + expected = 
MultiIndex.from_tuples([(2, 'c'), (2, 'c'), (1, 'b'), + (1, 'b'), + (2, 'a'), (2, 'a'), + (2, 'b'), (2, 'b')], + names=['N1', 'N2']) + tm.assert_index_equal(result, expected) + assert result.names == index.names + + +@pytest.mark.skipif(PY3, reason="testing legacy pickles not support on py3") +def test_legacy_pickle(datapath): + + path = datapath('indexes', 'multi', 'data', 'multiindex_v1.pickle') + obj = pd.read_pickle(path) + + obj2 = MultiIndex.from_tuples(obj.values) + assert obj.equals(obj2) + + res = obj.get_indexer(obj) + exp = np.arange(len(obj), dtype=np.intp) + assert_almost_equal(res, exp) + + res = obj.get_indexer(obj2[::-1]) + exp = obj.get_indexer(obj[::-1]) + exp2 = obj2.get_indexer(obj2[::-1]) + assert_almost_equal(res, exp) + assert_almost_equal(exp, exp2) + + +def test_legacy_v2_unpickle(datapath): + + # 0.7.3 -> 0.8.0 format manage + path = datapath('indexes', 'multi', 'data', 'mindex_073.pickle') + obj = pd.read_pickle(path) + + obj2 = MultiIndex.from_tuples(obj.values) + assert obj.equals(obj2) + + res = obj.get_indexer(obj) + exp = np.arange(len(obj), dtype=np.intp) + assert_almost_equal(res, exp) + + res = obj.get_indexer(obj2[::-1]) + exp = obj.get_indexer(obj[::-1]) + exp2 = obj2.get_indexer(obj2[::-1]) + assert_almost_equal(res, exp) + assert_almost_equal(exp, exp2) + + +def test_roundtrip_pickle_with_tz(): + + # GH 8367 + # round-trip of timezone + index = MultiIndex.from_product( + [[1, 2], ['a', 'b'], date_range('20130101', periods=3, + tz='US/Eastern') + ], names=['one', 'two', 'three']) + unpickled = tm.round_trip_pickle(index) + assert index.equal_levels(unpickled) + + +def test_pickle(indices): + unpickled = tm.round_trip_pickle(indices) + assert indices.equals(unpickled) + original_name, indices.name = indices.name, 'foo' + unpickled = tm.round_trip_pickle(indices) + assert indices.equals(unpickled) + indices.name = original_name + + +def test_to_series(idx): + # assert that we are creating a copy of the index + + s = 
idx.to_series() + assert s.values is not idx.values + assert s.index is not idx + assert s.name == idx.name + + +def test_to_series_with_arguments(idx): + # GH18699 + + # index kwarg + s = idx.to_series(index=idx) + + assert s.values is not idx.values + assert s.index is idx + assert s.name == idx.name + + # name kwarg + idx = idx + s = idx.to_series(name='__test') + + assert s.values is not idx.values + assert s.index is not idx + assert s.name != idx.name diff --git a/pandas/tests/indexes/multi/test_copy.py b/pandas/tests/indexes/multi/test_copy.py new file mode 100644 index 0000000000000..282f2fa84efe0 --- /dev/null +++ b/pandas/tests/indexes/multi/test_copy.py @@ -0,0 +1,124 @@ +# -*- coding: utf-8 -*- + +from copy import copy, deepcopy + +import pandas.util.testing as tm +from pandas import (CategoricalIndex, IntervalIndex, MultiIndex, PeriodIndex, + RangeIndex, Series, compat) + + +def assert_multiindex_copied(copy, original): + # Levels should be (at least, shallow copied) + tm.assert_copy(copy.levels, original.levels) + tm.assert_almost_equal(copy.labels, original.labels) + + # Labels doesn't matter which way copied + tm.assert_almost_equal(copy.labels, original.labels) + assert copy.labels is not original.labels + + # Names doesn't matter which way copied + assert copy.names == original.names + assert copy.names is not original.names + + # Sort order should be copied + assert copy.sortorder == original.sortorder + + +def test_copy(idx): + i_copy = idx.copy() + + assert_multiindex_copied(i_copy, idx) + + +def test_shallow_copy(idx): + i_copy = idx._shallow_copy() + + assert_multiindex_copied(i_copy, idx) + + +def test_view(idx): + i_view = idx.view() + assert_multiindex_copied(i_view, idx) + + +def test_copy_name(idx): + # gh-12309: Check that the "name" argument + # passed at initialization is honored. + + # TODO: Remove or refactor MultiIndex not tested. 
+ for name, index in compat.iteritems({'idx': idx}): + if isinstance(index, MultiIndex): + continue + + first = index.__class__(index, copy=True, name='mario') + second = first.__class__(first, copy=False) + + # Even though "copy=False", we want a new object. + assert first is not second + + # Not using tm.assert_index_equal() since names differ. + assert index.equals(first) + + assert first.name == 'mario' + assert second.name == 'mario' + + s1 = Series(2, index=first) + s2 = Series(3, index=second[:-1]) + + if not isinstance(index, CategoricalIndex): + # See gh-13365 + s3 = s1 * s2 + assert s3.index.name == 'mario' + + +def test_ensure_copied_data(idx): + # Check the "copy" argument of each Index.__new__ is honoured + # GH12309 + # TODO: REMOVE THIS TEST. MultiIndex is tested separately as noted below. + + for name, index in compat.iteritems({'idx': idx}): + init_kwargs = {} + if isinstance(index, PeriodIndex): + # Needs "freq" specification: + init_kwargs['freq'] = index.freq + elif isinstance(index, (RangeIndex, MultiIndex, CategoricalIndex)): + # RangeIndex cannot be initialized from data + # MultiIndex and CategoricalIndex are tested separately + continue + + index_type = index.__class__ + result = index_type(index.values, copy=True, **init_kwargs) + tm.assert_index_equal(index, result) + tm.assert_numpy_array_equal(index.values, result.values, + check_same='copy') + + if isinstance(index, PeriodIndex): + # .values an object array of Period, thus copied + result = index_type(ordinal=index.asi8, copy=False, + **init_kwargs) + tm.assert_numpy_array_equal(index._ndarray_values, + result._ndarray_values, + check_same='same') + elif isinstance(index, IntervalIndex): + # checked in test_interval.py + pass + else: + result = index_type(index.values, copy=False, **init_kwargs) + tm.assert_numpy_array_equal(index.values, result.values, + check_same='same') + tm.assert_numpy_array_equal(index._ndarray_values, + result._ndarray_values, + check_same='same') + + +def
test_copy_and_deepcopy(indices): + + if isinstance(indices, MultiIndex): + return + for func in (copy, deepcopy): + idx_copy = func(indices) + assert idx_copy is not indices + assert idx_copy.equals(indices) + + new_copy = indices.copy(deep=True, name="banana") + assert new_copy.name == "banana" diff --git a/pandas/tests/indexes/multi/test_drop.py b/pandas/tests/indexes/multi/test_drop.py new file mode 100644 index 0000000000000..281db7fd2c8a7 --- /dev/null +++ b/pandas/tests/indexes/multi/test_drop.py @@ -0,0 +1,126 @@ +# -*- coding: utf-8 -*- + + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +import pytest +from pandas import Index, MultiIndex +from pandas.compat import lrange +from pandas.errors import PerformanceWarning + + +def test_drop(idx): + dropped = idx.drop([('foo', 'two'), ('qux', 'one')]) + + index = MultiIndex.from_tuples([('foo', 'two'), ('qux', 'one')]) + dropped2 = idx.drop(index) + + expected = idx[[0, 2, 3, 5]] + tm.assert_index_equal(dropped, expected) + tm.assert_index_equal(dropped2, expected) + + dropped = idx.drop(['bar']) + expected = idx[[0, 1, 3, 4, 5]] + tm.assert_index_equal(dropped, expected) + + dropped = idx.drop('foo') + expected = idx[[2, 3, 4, 5]] + tm.assert_index_equal(dropped, expected) + + index = MultiIndex.from_tuples([('bar', 'two')]) + pytest.raises(KeyError, idx.drop, [('bar', 'two')]) + pytest.raises(KeyError, idx.drop, index) + pytest.raises(KeyError, idx.drop, ['foo', 'two']) + + # partially correct argument + mixed_index = MultiIndex.from_tuples([('qux', 'one'), ('bar', 'two')]) + pytest.raises(KeyError, idx.drop, mixed_index) + + # error='ignore' + dropped = idx.drop(index, errors='ignore') + expected = idx[[0, 1, 2, 3, 4, 5]] + tm.assert_index_equal(dropped, expected) + + dropped = idx.drop(mixed_index, errors='ignore') + expected = idx[[0, 1, 2, 3, 5]] + tm.assert_index_equal(dropped, expected) + + dropped = idx.drop(['foo', 'two'], errors='ignore') + expected = idx[[2, 3, 4, 5]] + 
tm.assert_index_equal(dropped, expected) + + # mixed partial / full drop + dropped = idx.drop(['foo', ('qux', 'one')]) + expected = idx[[2, 3, 5]] + tm.assert_index_equal(dropped, expected) + + # mixed partial / full drop / error='ignore' + mixed_index = ['foo', ('qux', 'one'), 'two'] + pytest.raises(KeyError, idx.drop, mixed_index) + dropped = idx.drop(mixed_index, errors='ignore') + expected = idx[[2, 3, 5]] + tm.assert_index_equal(dropped, expected) + + +def test_droplevel_with_names(idx): + index = idx[idx.get_loc('foo')] + dropped = index.droplevel(0) + assert dropped.name == 'second' + + index = MultiIndex( + levels=[Index(lrange(4)), Index(lrange(4)), Index(lrange(4))], + labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])], + names=['one', 'two', 'three']) + dropped = index.droplevel(0) + assert dropped.names == ('two', 'three') + + dropped = index.droplevel('two') + expected = index.droplevel(1) + assert dropped.equals(expected) + + +def test_droplevel_list(): + index = MultiIndex( + levels=[Index(lrange(4)), Index(lrange(4)), Index(lrange(4))], + labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])], + names=['one', 'two', 'three']) + + dropped = index[:2].droplevel(['three', 'one']) + expected = index[:2].droplevel(2).droplevel(0) + assert dropped.equals(expected) + + dropped = index[:2].droplevel([]) + expected = index[:2] + assert dropped.equals(expected) + + with pytest.raises(ValueError): + index[:2].droplevel(['one', 'two', 'three']) + + with pytest.raises(KeyError): + index[:2].droplevel(['one', 'four']) + + +def test_drop_not_lexsorted(): + # GH 12078 + + # define the lexsorted version of the multi-index + tuples = [('a', ''), ('b1', 'c1'), ('b2', 'c2')] + lexsorted_mi = MultiIndex.from_tuples(tuples, names=['b', 'c']) + assert lexsorted_mi.is_lexsorted() + + # and the not-lexsorted version + df = 
pd.DataFrame(columns=['a', 'b', 'c', 'd'], + data=[[1, 'b1', 'c1', 3], [1, 'b2', 'c2', 4]]) + df = df.pivot_table(index='a', columns=['b', 'c'], values='d') + df = df.reset_index() + not_lexsorted_mi = df.columns + assert not not_lexsorted_mi.is_lexsorted() + + # compare the results + tm.assert_index_equal(lexsorted_mi, not_lexsorted_mi) + with tm.assert_produces_warning(PerformanceWarning): + tm.assert_index_equal(lexsorted_mi.drop('a'), + not_lexsorted_mi.drop('a')) diff --git a/pandas/tests/indexes/multi/test_equivalence.py b/pandas/tests/indexes/multi/test_equivalence.py new file mode 100644 index 0000000000000..0bebe3165e2e8 --- /dev/null +++ b/pandas/tests/indexes/multi/test_equivalence.py @@ -0,0 +1,223 @@ +# -*- coding: utf-8 -*- + + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +from pandas import Index, MultiIndex, RangeIndex, Series, compat +from pandas.compat import lrange, lzip, range + + +def test_equals(idx): + # TODO: Remove or Refactor. MultiIndex not tested. 
+ for name, idx in compat.iteritems({'idx': idx}): + assert idx.equals(idx) + assert idx.equals(idx.copy()) + assert idx.equals(idx.astype(object)) + + assert not idx.equals(list(idx)) + assert not idx.equals(np.array(idx)) + + # Cannot pass in non-int64 dtype to RangeIndex + if not isinstance(idx, RangeIndex): + same_values = Index(idx, dtype=object) + assert idx.equals(same_values) + assert same_values.equals(idx) + + if idx.nlevels == 1: + # do not test MultiIndex + assert not idx.equals(pd.Series(idx)) + + +def test_equals_op(idx): + # GH9947, GH10637 + index_a = idx + + n = len(index_a) + index_b = index_a[0:-1] + index_c = index_a[0:-1].append(index_a[-2:-1]) + index_d = index_a[0:1] + with tm.assert_raises_regex(ValueError, "Lengths must match"): + index_a == index_b + expected1 = np.array([True] * n) + expected2 = np.array([True] * (n - 1) + [False]) + tm.assert_numpy_array_equal(index_a == index_a, expected1) + tm.assert_numpy_array_equal(index_a == index_c, expected2) + + # test comparisons with numpy arrays + array_a = np.array(index_a) + array_b = np.array(index_a[0:-1]) + array_c = np.array(index_a[0:-1].append(index_a[-2:-1])) + array_d = np.array(index_a[0:1]) + with tm.assert_raises_regex(ValueError, "Lengths must match"): + index_a == array_b + tm.assert_numpy_array_equal(index_a == array_a, expected1) + tm.assert_numpy_array_equal(index_a == array_c, expected2) + + # test comparisons with Series + series_a = Series(array_a) + series_b = Series(array_b) + series_c = Series(array_c) + series_d = Series(array_d) + with tm.assert_raises_regex(ValueError, "Lengths must match"): + index_a == series_b + + tm.assert_numpy_array_equal(index_a == series_a, expected1) + tm.assert_numpy_array_equal(index_a == series_c, expected2) + + # cases where length is 1 for one of them + with tm.assert_raises_regex(ValueError, "Lengths must match"): + index_a == index_d + with tm.assert_raises_regex(ValueError, "Lengths must match"): + index_a == series_d + with 
tm.assert_raises_regex(ValueError, "Lengths must match"): + index_a == array_d + msg = "Can only compare identically-labeled Series objects" + with tm.assert_raises_regex(ValueError, msg): + series_a == series_d + with tm.assert_raises_regex(ValueError, "Lengths must match"): + series_a == array_d + + # comparing with a scalar should broadcast; note that we are excluding + # MultiIndex because in this case each item in the index is a tuple of + # length 2, and therefore is considered an array of length 2 in the + # comparison instead of a scalar + if not isinstance(index_a, MultiIndex): + expected3 = np.array([False] * (len(index_a) - 2) + [True, False]) + # assuming the 2nd to last item is unique in the data + item = index_a[-2] + tm.assert_numpy_array_equal(index_a == item, expected3) + tm.assert_series_equal(series_a == item, Series(expected3)) + + +def test_equals_multi(idx): + assert idx.equals(idx) + assert not idx.equals(idx.values) + assert idx.equals(Index(idx.values)) + + assert idx.equal_levels(idx) + assert not idx.equals(idx[:-1]) + assert not idx.equals(idx[-1]) + + # different number of levels + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + + index2 = MultiIndex(levels=index.levels[:-1], labels=index.labels[:-1]) + assert not index.equals(index2) + assert not index.equal_levels(index2) + + # levels are different + major_axis = Index(lrange(4)) + minor_axis = Index(lrange(2)) + + major_labels = np.array([0, 0, 1, 2, 2, 3]) + minor_labels = np.array([0, 1, 0, 0, 1, 0]) + + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + assert not idx.equals(index) + assert not idx.equal_levels(index) + + # some of the labels are different + major_axis = Index(['foo', 'bar', 'baz', 'qux']) + minor_axis = Index(['one', 'two']) + + major_labels = np.array([0, 0, 2, 2, 
3, 3]) + minor_labels = np.array([0, 1, 0, 1, 0, 1]) + + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + assert not idx.equals(index) + + +def test_identical(idx): + mi = idx.copy() + mi2 = idx.copy() + assert mi.identical(mi2) + + mi = mi.set_names(['new1', 'new2']) + assert mi.equals(mi2) + assert not mi.identical(mi2) + + mi2 = mi2.set_names(['new1', 'new2']) + assert mi.identical(mi2) + + mi3 = Index(mi.tolist(), names=mi.names) + mi4 = Index(mi.tolist(), names=mi.names, tupleize_cols=False) + assert mi.identical(mi3) + assert not mi.identical(mi4) + assert mi.equals(mi4) + + +def test_equals_operator(idx): + # GH9785 + assert (idx == idx).all() + + +def test_equals_missing_values(): + # make sure take is not using -1 + i = pd.MultiIndex.from_tuples([(0, pd.NaT), + (0, pd.Timestamp('20130101'))]) + result = i[0:1].equals(i[0]) + assert not result + result = i[1:2].equals(i[1]) + assert not result + + +def test_is_(): + mi = MultiIndex.from_tuples(lzip(range(10), range(10))) + assert mi.is_(mi) + assert mi.is_(mi.view()) + assert mi.is_(mi.view().view().view().view()) + mi2 = mi.view() + # names are metadata, they don't change id + mi2.names = ["A", "B"] + assert mi2.is_(mi) + assert mi.is_(mi2) + + assert mi.is_(mi.set_names(["C", "D"])) + mi2 = mi.view() + mi2.set_names(["E", "F"], inplace=True) + assert mi.is_(mi2) + # levels are inherent properties, they change identity + mi3 = mi2.set_levels([lrange(10), lrange(10)]) + assert not mi3.is_(mi2) + # shouldn't change + assert mi2.is_(mi) + mi4 = mi3.view() + + # GH 17464 - Remove duplicate MultiIndex levels + mi4.set_levels([lrange(10), lrange(10)], inplace=True) + assert not mi4.is_(mi3) + mi5 = mi.view() + mi5.set_levels(mi5.levels, inplace=True) + assert not mi5.is_(mi) + + +def test_is_all_dates(idx): + assert not idx.is_all_dates + + +def test_is_numeric(idx): + # MultiIndex is never numeric + assert not idx.is_numeric() + + +def test_multiindex_compare(): + # 
GH 21149 + # Ensure comparison operations for MultiIndex with nlevels == 1 + # behave consistently with those for MultiIndex with nlevels > 1 + + midx = pd.MultiIndex.from_product([[0, 1]]) + + # Equality self-test: MultiIndex object vs self + expected = pd.Series([True, True]) + result = pd.Series(midx == midx) + tm.assert_series_equal(result, expected) + + # Greater than comparison: MultiIndex object vs self + expected = pd.Series([False, False]) + result = pd.Series(midx > midx) + tm.assert_series_equal(result, expected) diff --git a/pandas/tests/indexes/multi/test_format.py b/pandas/tests/indexes/multi/test_format.py new file mode 100644 index 0000000000000..21e8a199cadd9 --- /dev/null +++ b/pandas/tests/indexes/multi/test_format.py @@ -0,0 +1,133 @@ +# -*- coding: utf-8 -*- + + +import warnings + +import pandas as pd +import pandas.util.testing as tm +from pandas import MultiIndex, compat +from pandas.compat import PY3, range, u + + +def test_dtype_str(indices): + dtype = indices.dtype_str + assert isinstance(dtype, compat.string_types) + assert dtype == str(indices.dtype) + + +def test_format(idx): + idx.format() + idx[:0].format() + + +def test_format_integer_names(): + index = MultiIndex(levels=[[0, 1], [0, 1]], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], names=[0, 1]) + index.format(names=True) + + +def test_format_sparse_config(idx): + warn_filters = warnings.filters + warnings.filterwarnings('ignore', category=FutureWarning, + module=".*format") + # GH1538 + pd.set_option('display.multi_sparse', False) + + result = idx.format() + assert result[1] == 'foo two' + + tm.reset_display_options() + + warnings.filters = warn_filters + + +def test_format_sparse_display(): + index = MultiIndex(levels=[[0, 1], [0, 1], [0, 1], [0]], + labels=[[0, 0, 0, 1, 1, 1], [0, 0, 1, 0, 0, 1], + [0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0]]) + + result = index.format() + assert result[3] == '1 0 0 0' + + +def test_repr_with_unicode_data(): + with 
pd.core.config.option_context("display.encoding", 'UTF-8'): + d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} + index = pd.DataFrame(d).set_index(["a", "b"]).index + assert "\\u" not in repr(index) # we don't want unicode-escaped + + +def test_repr_roundtrip(): + + mi = MultiIndex.from_product([list('ab'), range(3)], + names=['first', 'second']) + str(mi) + + if PY3: + tm.assert_index_equal(eval(repr(mi)), mi, exact=True) + else: + result = eval(repr(mi)) + # string coerces to unicode + tm.assert_index_equal(result, mi, exact=False) + assert mi.get_level_values('first').inferred_type == 'string' + assert result.get_level_values('first').inferred_type == 'unicode' + + mi_u = MultiIndex.from_product( + [list(u'ab'), range(3)], names=['first', 'second']) + result = eval(repr(mi_u)) + tm.assert_index_equal(result, mi_u, exact=True) + + # formatting + if PY3: + str(mi) + else: + compat.text_type(mi) + + # long format + mi = MultiIndex.from_product([list('abcdefg'), range(10)], + names=['first', 'second']) + + if PY3: + tm.assert_index_equal(eval(repr(mi)), mi, exact=True) + else: + result = eval(repr(mi)) + # string coerces to unicode + tm.assert_index_equal(result, mi, exact=False) + assert mi.get_level_values('first').inferred_type == 'string' + assert result.get_level_values('first').inferred_type == 'unicode' + + result = eval(repr(mi_u)) + tm.assert_index_equal(result, mi_u, exact=True) + + +def test_str(): + # tested elsewhere + pass + + +def test_unicode_string_with_unicode(): + d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} + idx = pd.DataFrame(d).set_index(["a", "b"]).index + + if PY3: + str(idx) + else: + compat.text_type(idx) + + +def test_bytestring_with_unicode(): + d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} + idx = pd.DataFrame(d).set_index(["a", "b"]).index + + if PY3: + bytes(idx) + else: + str(idx) + + +def test_repr_max_seq_item_setting(idx): + # GH10182 + idx = idx.repeat(50) + with 
pd.option_context("display.max_seq_items", None): + repr(idx) + assert '...' not in str(idx) diff --git a/pandas/tests/indexes/multi/test_get_set.py b/pandas/tests/indexes/multi/test_get_set.py new file mode 100644 index 0000000000000..56fd4c04cb96e --- /dev/null +++ b/pandas/tests/indexes/multi/test_get_set.py @@ -0,0 +1,423 @@ +# -*- coding: utf-8 -*- + + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +import pytest +from pandas import CategoricalIndex, Index, MultiIndex +from pandas.compat import range + + +def test_get_level_number_integer(idx): + idx.names = [1, 0] + assert idx._get_level_number(1) == 0 + assert idx._get_level_number(0) == 1 + pytest.raises(IndexError, idx._get_level_number, 2) + tm.assert_raises_regex(KeyError, 'Level fourth not found', + idx._get_level_number, 'fourth') + + +def test_get_level_values(idx): + result = idx.get_level_values(0) + expected = Index(['foo', 'foo', 'bar', 'baz', 'qux', 'qux'], + name='first') + tm.assert_index_equal(result, expected) + assert result.name == 'first' + + result = idx.get_level_values('first') + expected = idx.get_level_values(0) + tm.assert_index_equal(result, expected) + + # GH 10460 + index = MultiIndex( + levels=[CategoricalIndex(['A', 'B']), + CategoricalIndex([1, 2, 3])], + labels=[np.array([0, 0, 0, 1, 1, 1]), + np.array([0, 1, 2, 0, 1, 2])]) + + exp = CategoricalIndex(['A', 'A', 'A', 'B', 'B', 'B']) + tm.assert_index_equal(index.get_level_values(0), exp) + exp = CategoricalIndex([1, 2, 3, 1, 2, 3]) + tm.assert_index_equal(index.get_level_values(1), exp) + + +def test_get_value_duplicates(): + index = MultiIndex(levels=[['D', 'B', 'C'], + [0, 26, 27, 37, 57, 67, 75, 82]], + labels=[[0, 0, 0, 1, 2, 2, 2, 2, 2, 2], + [1, 3, 4, 6, 0, 2, 2, 3, 5, 7]], + names=['tag', 'day']) + + assert index.get_loc('D') == slice(0, 3) + with pytest.raises(KeyError): + index._engine.get_value(np.array([]), 'D') + + +def test_get_level_values_all_na(): + # GH 17924 when level entirely 
consists of nan + arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]] + index = pd.MultiIndex.from_arrays(arrays) + result = index.get_level_values(0) + expected = pd.Index([np.nan, np.nan, np.nan], dtype=np.float64) + tm.assert_index_equal(result, expected) + + result = index.get_level_values(1) + expected = pd.Index(['a', np.nan, 1], dtype=object) + tm.assert_index_equal(result, expected) + + +def test_get_level_values_int_with_na(): + # GH 17924 + arrays = [['a', 'b', 'b'], [1, np.nan, 2]] + index = pd.MultiIndex.from_arrays(arrays) + result = index.get_level_values(1) + expected = Index([1, np.nan, 2]) + tm.assert_index_equal(result, expected) + + arrays = [['a', 'b', 'b'], [np.nan, np.nan, 2]] + index = pd.MultiIndex.from_arrays(arrays) + result = index.get_level_values(1) + expected = Index([np.nan, np.nan, 2]) + tm.assert_index_equal(result, expected) + + +def test_get_level_values_na(): + arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]] + index = pd.MultiIndex.from_arrays(arrays) + result = index.get_level_values(0) + expected = pd.Index([np.nan, np.nan, np.nan]) + tm.assert_index_equal(result, expected) + + result = index.get_level_values(1) + expected = pd.Index(['a', np.nan, 1]) + tm.assert_index_equal(result, expected) + + arrays = [['a', 'b', 'b'], pd.DatetimeIndex([0, 1, pd.NaT])] + index = pd.MultiIndex.from_arrays(arrays) + result = index.get_level_values(1) + expected = pd.DatetimeIndex([0, 1, pd.NaT]) + tm.assert_index_equal(result, expected) + + arrays = [[], []] + index = pd.MultiIndex.from_arrays(arrays) + result = index.get_level_values(0) + expected = pd.Index([], dtype=object) + tm.assert_index_equal(result, expected) + + +def test_set_name_methods(idx, index_names): + # so long as these are synonyms, we don't need to test set_names + assert idx.rename == idx.set_names + new_names = [name + "SUFFIX" for name in index_names] + ind = idx.set_names(new_names) + assert idx.names == index_names + assert ind.names == new_names + with 
tm.assert_raises_regex(ValueError, "^Length"): + ind.set_names(new_names + new_names) + new_names2 = [name + "SUFFIX2" for name in new_names] + res = ind.set_names(new_names2, inplace=True) + assert res is None + assert ind.names == new_names2 + + # set names for specific level (# GH7792) + ind = idx.set_names(new_names[0], level=0) + assert idx.names == index_names + assert ind.names == [new_names[0], index_names[1]] + + res = ind.set_names(new_names2[0], level=0, inplace=True) + assert res is None + assert ind.names == [new_names2[0], index_names[1]] + + # set names for multiple levels + ind = idx.set_names(new_names, level=[0, 1]) + assert idx.names == index_names + assert ind.names == new_names + + res = ind.set_names(new_names2, level=[0, 1], inplace=True) + assert res is None + assert ind.names == new_names2 + + +def test_set_levels_labels_directly(idx): + # setting levels/labels directly raises AttributeError + + levels = idx.levels + new_levels = [[lev + 'a' for lev in level] for level in levels] + + labels = idx.labels + major_labels, minor_labels = labels + major_labels = [(x + 1) % 3 for x in major_labels] + minor_labels = [(x + 1) % 1 for x in minor_labels] + new_labels = [major_labels, minor_labels] + + with pytest.raises(AttributeError): + idx.levels = new_levels + + with pytest.raises(AttributeError): + idx.labels = new_labels + + +def test_set_levels(idx): + # side note - you probably wouldn't want to use levels and labels + # directly like this - but it is possible. 
+ levels = idx.levels + new_levels = [[lev + 'a' for lev in level] for level in levels] + + def assert_matching(actual, expected, check_dtype=False): + # avoid specifying internal representation + # as much as possible + assert len(actual) == len(expected) + for act, exp in zip(actual, expected): + act = np.asarray(act) + exp = np.asarray(exp) + tm.assert_numpy_array_equal(act, exp, check_dtype=check_dtype) + + # level changing [w/o mutation] + ind2 = idx.set_levels(new_levels) + assert_matching(ind2.levels, new_levels) + assert_matching(idx.levels, levels) + + # level changing [w/ mutation] + ind2 = idx.copy() + inplace_return = ind2.set_levels(new_levels, inplace=True) + assert inplace_return is None + assert_matching(ind2.levels, new_levels) + + # level changing specific level [w/o mutation] + ind2 = idx.set_levels(new_levels[0], level=0) + assert_matching(ind2.levels, [new_levels[0], levels[1]]) + assert_matching(idx.levels, levels) + + ind2 = idx.set_levels(new_levels[1], level=1) + assert_matching(ind2.levels, [levels[0], new_levels[1]]) + assert_matching(idx.levels, levels) + + # level changing multiple levels [w/o mutation] + ind2 = idx.set_levels(new_levels, level=[0, 1]) + assert_matching(ind2.levels, new_levels) + assert_matching(idx.levels, levels) + + # level changing specific level [w/ mutation] + ind2 = idx.copy() + inplace_return = ind2.set_levels(new_levels[0], level=0, inplace=True) + assert inplace_return is None + assert_matching(ind2.levels, [new_levels[0], levels[1]]) + assert_matching(idx.levels, levels) + + ind2 = idx.copy() + inplace_return = ind2.set_levels(new_levels[1], level=1, inplace=True) + assert inplace_return is None + assert_matching(ind2.levels, [levels[0], new_levels[1]]) + assert_matching(idx.levels, levels) + + # level changing multiple levels [w/ mutation] + ind2 = idx.copy() + inplace_return = ind2.set_levels(new_levels, level=[0, 1], + inplace=True) + assert inplace_return is None + assert_matching(ind2.levels, new_levels) 
+ assert_matching(idx.levels, levels) + + # illegal level changing should not change levels + # GH 13754 + original_index = idx.copy() + for inplace in [True, False]: + with tm.assert_raises_regex(ValueError, "^On"): + idx.set_levels(['c'], level=0, inplace=inplace) + assert_matching(idx.levels, original_index.levels, + check_dtype=True) + + with tm.assert_raises_regex(ValueError, "^On"): + idx.set_labels([0, 1, 2, 3, 4, 5], level=0, + inplace=inplace) + assert_matching(idx.labels, original_index.labels, + check_dtype=True) + + with tm.assert_raises_regex(TypeError, "^Levels"): + idx.set_levels('c', level=0, inplace=inplace) + assert_matching(idx.levels, original_index.levels, + check_dtype=True) + + with tm.assert_raises_regex(TypeError, "^Labels"): + idx.set_labels(1, level=0, inplace=inplace) + assert_matching(idx.labels, original_index.labels, + check_dtype=True) + + +def test_set_labels(idx): + # side note - you probably wouldn't want to use levels and labels + # directly like this - but it is possible. 
+ labels = idx.labels + major_labels, minor_labels = labels + major_labels = [(x + 1) % 3 for x in major_labels] + minor_labels = [(x + 1) % 1 for x in minor_labels] + new_labels = [major_labels, minor_labels] + + def assert_matching(actual, expected): + # avoid specifying internal representation + # as much as possible + assert len(actual) == len(expected) + for act, exp in zip(actual, expected): + act = np.asarray(act) + exp = np.asarray(exp, dtype=np.int8) + tm.assert_numpy_array_equal(act, exp) + + # label changing [w/o mutation] + ind2 = idx.set_labels(new_labels) + assert_matching(ind2.labels, new_labels) + assert_matching(idx.labels, labels) + + # label changing [w/ mutation] + ind2 = idx.copy() + inplace_return = ind2.set_labels(new_labels, inplace=True) + assert inplace_return is None + assert_matching(ind2.labels, new_labels) + + # label changing specific level [w/o mutation] + ind2 = idx.set_labels(new_labels[0], level=0) + assert_matching(ind2.labels, [new_labels[0], labels[1]]) + assert_matching(idx.labels, labels) + + ind2 = idx.set_labels(new_labels[1], level=1) + assert_matching(ind2.labels, [labels[0], new_labels[1]]) + assert_matching(idx.labels, labels) + + # label changing multiple levels [w/o mutation] + ind2 = idx.set_labels(new_labels, level=[0, 1]) + assert_matching(ind2.labels, new_labels) + assert_matching(idx.labels, labels) + + # label changing specific level [w/ mutation] + ind2 = idx.copy() + inplace_return = ind2.set_labels(new_labels[0], level=0, inplace=True) + assert inplace_return is None + assert_matching(ind2.labels, [new_labels[0], labels[1]]) + assert_matching(idx.labels, labels) + + ind2 = idx.copy() + inplace_return = ind2.set_labels(new_labels[1], level=1, inplace=True) + assert inplace_return is None + assert_matching(ind2.labels, [labels[0], new_labels[1]]) + assert_matching(idx.labels, labels) + + # label changing multiple levels [w/ mutation] + ind2 = idx.copy() + inplace_return = ind2.set_labels(new_labels, level=[0, 
1], + inplace=True) + assert inplace_return is None + assert_matching(ind2.labels, new_labels) + assert_matching(idx.labels, labels) + + # label changing for levels of different magnitude of categories + ind = pd.MultiIndex.from_tuples([(0, i) for i in range(130)]) + new_labels = range(129, -1, -1) + expected = pd.MultiIndex.from_tuples( + [(0, i) for i in new_labels]) + + # [w/o mutation] + result = ind.set_labels(labels=new_labels, level=1) + assert result.equals(expected) + + # [w/ mutation] + result = ind.copy() + result.set_labels(labels=new_labels, level=1, inplace=True) + assert result.equals(expected) + + +def test_set_levels_labels_names_bad_input(idx): + levels, labels = idx.levels, idx.labels + names = idx.names + + with tm.assert_raises_regex(ValueError, 'Length of levels'): + idx.set_levels([levels[0]]) + + with tm.assert_raises_regex(ValueError, 'Length of labels'): + idx.set_labels([labels[0]]) + + with tm.assert_raises_regex(ValueError, 'Length of names'): + idx.set_names([names[0]]) + + # shouldn't scalar data error, instead should demand list-like + with tm.assert_raises_regex(TypeError, 'list of lists-like'): + idx.set_levels(levels[0]) + + # shouldn't scalar data error, instead should demand list-like + with tm.assert_raises_regex(TypeError, 'list of lists-like'): + idx.set_labels(labels[0]) + + # shouldn't scalar data error, instead should demand list-like + with tm.assert_raises_regex(TypeError, 'list-like'): + idx.set_names(names[0]) + + # should have equal lengths + with tm.assert_raises_regex(TypeError, 'list of lists-like'): + idx.set_levels(levels[0], level=[0, 1]) + + with tm.assert_raises_regex(TypeError, 'list-like'): + idx.set_levels(levels, level=0) + + # should have equal lengths + with tm.assert_raises_regex(TypeError, 'list of lists-like'): + idx.set_labels(labels[0], level=[0, 1]) + + with tm.assert_raises_regex(TypeError, 'list-like'): + idx.set_labels(labels, level=0) + + # should have equal lengths + with 
tm.assert_raises_regex(ValueError, 'Length of names'): + idx.set_names(names[0], level=[0, 1]) + + with tm.assert_raises_regex(TypeError, 'string'): + idx.set_names(names, level=0) + + +@pytest.mark.parametrize('inplace', [True, False]) +def test_set_names_with_nlevel_1(inplace): + # GH 21149 + # Ensure that .set_names for MultiIndex with + # nlevels == 1 does not raise any errors + expected = pd.MultiIndex(levels=[[0, 1]], + labels=[[0, 1]], + names=['first']) + m = pd.MultiIndex.from_product([[0, 1]]) + result = m.set_names('first', level=0, inplace=inplace) + + if inplace: + result = m + + tm.assert_index_equal(result, expected) + + +def test_set_levels_categorical(): + # GH13854 + index = MultiIndex.from_arrays([list("xyzx"), [0, 1, 2, 3]]) + for ordered in [False, True]: + cidx = CategoricalIndex(list("bac"), ordered=ordered) + result = index.set_levels(cidx, 0) + expected = MultiIndex(levels=[cidx, [0, 1, 2, 3]], + labels=index.labels) + tm.assert_index_equal(result, expected) + + result_lvl = result.get_level_values(0) + expected_lvl = CategoricalIndex(list("bacb"), + categories=cidx.categories, + ordered=cidx.ordered) + tm.assert_index_equal(result_lvl, expected_lvl) + + +def test_set_value_keeps_names(): + # motivating example from #3742 + lev1 = ['hans', 'hans', 'hans', 'grethe', 'grethe', 'grethe'] + lev2 = ['1', '2', '3'] * 2 + idx = pd.MultiIndex.from_arrays([lev1, lev2], names=['Name', 'Number']) + df = pd.DataFrame( + np.random.randn(6, 4), + columns=['one', 'two', 'three', 'four'], + index=idx) + df = df.sort_index() + assert df._is_copy is None + assert df.index.names == ('Name', 'Number') + df.at[('grethe', '4'), 'one'] = 99.34 + assert df._is_copy is None + assert df.index.names == ('Name', 'Number') diff --git a/pandas/tests/indexes/multi/test_indexing.py b/pandas/tests/indexes/multi/test_indexing.py new file mode 100644 index 0000000000000..0b528541e5eb6 --- /dev/null +++ b/pandas/tests/indexes/multi/test_indexing.py @@ -0,0 +1,369 @@ +# -*- 
coding: utf-8 -*- + + +from datetime import timedelta + +import numpy as np +import pytest + +import pandas as pd +import pandas.util.testing as tm +from pandas import (Categorical, CategoricalIndex, Index, IntervalIndex, + MultiIndex, date_range) +from pandas.compat import lrange +from pandas.core.indexes.base import InvalidIndexError +from pandas.util.testing import assert_almost_equal + + +def test_slice_locs_partial(idx): + sorted_idx, _ = idx.sortlevel(0) + + result = sorted_idx.slice_locs(('foo', 'two'), ('qux', 'one')) + assert result == (1, 5) + + result = sorted_idx.slice_locs(None, ('qux', 'one')) + assert result == (0, 5) + + result = sorted_idx.slice_locs(('foo', 'two'), None) + assert result == (1, len(sorted_idx)) + + result = sorted_idx.slice_locs('bar', 'baz') + assert result == (2, 4) + + +def test_slice_locs(): + df = tm.makeTimeDataFrame() + stacked = df.stack() + idx = stacked.index + + slob = slice(*idx.slice_locs(df.index[5], df.index[15])) + sliced = stacked[slob] + expected = df[5:16].stack() + tm.assert_almost_equal(sliced.values, expected.values) + + slob = slice(*idx.slice_locs(df.index[5] + timedelta(seconds=30), + df.index[15] - timedelta(seconds=30))) + sliced = stacked[slob] + expected = df[6:15].stack() + tm.assert_almost_equal(sliced.values, expected.values) + + +def test_slice_locs_with_type_mismatch(): + df = tm.makeTimeDataFrame() + stacked = df.stack() + idx = stacked.index + tm.assert_raises_regex(TypeError, '^Level type mismatch', + idx.slice_locs, (1, 3)) + tm.assert_raises_regex(TypeError, '^Level type mismatch', + idx.slice_locs, + df.index[5] + timedelta( + seconds=30), (5, 2)) + df = tm.makeCustomDataframe(5, 5) + stacked = df.stack() + idx = stacked.index + with tm.assert_raises_regex(TypeError, '^Level type mismatch'): + idx.slice_locs(timedelta(seconds=30)) + # TODO: Try creating a UnicodeDecodeError in exception message + with tm.assert_raises_regex(TypeError, '^Level type mismatch'): + idx.slice_locs(df.index[1], 
(16, "a")) + + +def test_slice_locs_not_sorted(): + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + + tm.assert_raises_regex(KeyError, "[Kk]ey length.*greater than " + "MultiIndex lexsort depth", + index.slice_locs, (1, 0, 1), (2, 1, 0)) + + # works + sorted_index, _ = index.sortlevel(0) + # should there be a test case here??? + sorted_index.slice_locs((1, 0, 1), (2, 1, 0)) + + +def test_slice_locs_not_contained(): + # some searchsorted action + + index = MultiIndex(levels=[[0, 2, 4, 6], [0, 2, 4]], + labels=[[0, 0, 0, 1, 1, 2, 3, 3, 3], + [0, 1, 2, 1, 2, 2, 0, 1, 2]], sortorder=0) + + result = index.slice_locs((1, 0), (5, 2)) + assert result == (3, 6) + + result = index.slice_locs(1, 5) + assert result == (3, 6) + + result = index.slice_locs((2, 2), (5, 2)) + assert result == (3, 6) + + result = index.slice_locs(2, 5) + assert result == (3, 6) + + result = index.slice_locs((1, 0), (6, 3)) + assert result == (3, 8) + + result = index.slice_locs(-1, 10) + assert result == (0, len(index)) + + +def test_insert_base(idx): + + result = idx[1:4] + + # test 0th element + assert idx[0:4].equals(result.insert(0, idx[0])) + + +def test_delete_base(idx): + + expected = idx[1:] + result = idx.delete(0) + assert result.equals(expected) + assert result.name == expected.name + + expected = idx[:-1] + result = idx.delete(-1) + assert result.equals(expected) + assert result.name == expected.name + + with pytest.raises((IndexError, ValueError)): + # either depending on numpy version + result = idx.delete(len(idx)) + + +def test_putmask_with_wrong_mask(idx): + # GH18368 + + with pytest.raises(ValueError): + idx.putmask(np.ones(len(idx) + 1, np.bool), 1) + + with pytest.raises(ValueError): + idx.putmask(np.ones(len(idx) - 1, np.bool), 1) + + with pytest.raises(ValueError): + idx.putmask('foo', 1) + + +def test_get_indexer(): + 
major_axis = Index(lrange(4)) + minor_axis = Index(lrange(2)) + + major_labels = np.array([0, 0, 1, 2, 2, 3, 3], dtype=np.intp) + minor_labels = np.array([0, 1, 0, 0, 1, 0, 1], dtype=np.intp) + + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + idx1 = index[:5] + idx2 = index[[1, 3, 5]] + + r1 = idx1.get_indexer(idx2) + assert_almost_equal(r1, np.array([1, 3, -1], dtype=np.intp)) + + r1 = idx2.get_indexer(idx1, method='pad') + e1 = np.array([-1, 0, 0, 1, 1], dtype=np.intp) + assert_almost_equal(r1, e1) + + r2 = idx2.get_indexer(idx1[::-1], method='pad') + assert_almost_equal(r2, e1[::-1]) + + rffill1 = idx2.get_indexer(idx1, method='ffill') + assert_almost_equal(r1, rffill1) + + r1 = idx2.get_indexer(idx1, method='backfill') + e1 = np.array([0, 0, 1, 1, 2], dtype=np.intp) + assert_almost_equal(r1, e1) + + r2 = idx2.get_indexer(idx1[::-1], method='backfill') + assert_almost_equal(r2, e1[::-1]) + + rbfill1 = idx2.get_indexer(idx1, method='bfill') + assert_almost_equal(r1, rbfill1) + + # pass non-MultiIndex + r1 = idx1.get_indexer(idx2.values) + rexp1 = idx1.get_indexer(idx2) + assert_almost_equal(r1, rexp1) + + r1 = idx1.get_indexer([1, 2, 3]) + assert (r1 == [-1, -1, -1]).all() + + # create index with duplicates + idx1 = Index(lrange(10) + lrange(10)) + idx2 = Index(lrange(20)) + + msg = "Reindexing only valid with uniquely valued Index objects" + with tm.assert_raises_regex(InvalidIndexError, msg): + idx1.get_indexer(idx2) + + +def test_get_indexer_nearest(): + midx = MultiIndex.from_tuples([('a', 1), ('b', 2)]) + with pytest.raises(NotImplementedError): + midx.get_indexer(['a'], method='nearest') + with pytest.raises(NotImplementedError): + midx.get_indexer(['a'], method='pad', tolerance=2) + + +def test_getitem(idx): + # scalar + assert idx[2] == ('bar', 'one') + + # slice + result = idx[2:5] + expected = idx[[2, 3, 4]] + assert result.equals(expected) + + # boolean + result = idx[[True, False, True, False, True, True]] 
+ result2 = idx[np.array([True, False, True, False, True, True])] + expected = idx[[0, 2, 4, 5]] + assert result.equals(expected) + assert result2.equals(expected) + + +def test_getitem_group_select(idx): + sorted_idx, _ = idx.sortlevel(0) + assert sorted_idx.get_loc('baz') == slice(3, 4) + assert sorted_idx.get_loc('foo') == slice(0, 2) + + +def test_get_indexer_consistency(idx): + # See GH 16819 + if isinstance(idx, IntervalIndex): + pass + + if idx.is_unique or isinstance(idx, CategoricalIndex): + indexer = idx.get_indexer(idx[0:2]) + assert isinstance(indexer, np.ndarray) + assert indexer.dtype == np.intp + else: + e = "Reindexing only valid with uniquely valued Index objects" + with tm.assert_raises_regex(InvalidIndexError, e): + indexer = idx.get_indexer(idx[0:2]) + + indexer, _ = idx.get_indexer_non_unique(idx[0:2]) + assert isinstance(indexer, np.ndarray) + assert indexer.dtype == np.intp + + +def test_get_loc(idx): + assert idx.get_loc(('foo', 'two')) == 1 + assert idx.get_loc(('baz', 'two')) == 3 + pytest.raises(KeyError, idx.get_loc, ('bar', 'two')) + pytest.raises(KeyError, idx.get_loc, 'quux') + + pytest.raises(NotImplementedError, idx.get_loc, 'foo', + method='nearest') + + # 3 levels + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + pytest.raises(KeyError, index.get_loc, (1, 1)) + assert index.get_loc((2, 0)) == slice(3, 5) + + +def test_get_loc_duplicates(): + index = Index([2, 2, 2, 2]) + result = index.get_loc(2) + expected = slice(0, 4) + assert result == expected + # pytest.raises(Exception, index.get_loc, 2) + + index = Index(['c', 'a', 'a', 'b', 'b']) + rs = index.get_loc('c') + xp = 0 + assert rs == xp + + +def test_get_loc_level(): + index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( + lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( + [0, 1, 0, 0, 
0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) + + loc, new_index = index.get_loc_level((0, 1)) + expected = slice(1, 2) + exp_index = index[expected].droplevel(0).droplevel(0) + assert loc == expected + assert new_index.equals(exp_index) + + loc, new_index = index.get_loc_level((0, 1, 0)) + expected = 1 + assert loc == expected + assert new_index is None + + pytest.raises(KeyError, index.get_loc_level, (2, 2)) + + index = MultiIndex(levels=[[2000], lrange(4)], labels=[np.array( + [0, 0, 0, 0]), np.array([0, 1, 2, 3])]) + result, new_index = index.get_loc_level((2000, slice(None, None))) + expected = slice(None, None) + assert result == expected + assert new_index.equals(index.droplevel(0)) + + +@pytest.mark.parametrize('dtype1', [int, float, bool, str]) +@pytest.mark.parametrize('dtype2', [int, float, bool, str]) +def test_get_loc_multiple_dtypes(dtype1, dtype2): + # GH 18520 + levels = [np.array([0, 1]).astype(dtype1), + np.array([0, 1]).astype(dtype2)] + idx = pd.MultiIndex.from_product(levels) + assert idx.get_loc(idx[2]) == 2 + + +@pytest.mark.parametrize('level', [0, 1]) +@pytest.mark.parametrize('dtypes', [[int, float], [float, int]]) +def test_get_loc_implicit_cast(level, dtypes): + # GH 18818, GH 15994 : as flat index, cast int to float and vice-versa + levels = [['a', 'b'], ['c', 'd']] + key = ['b', 'd'] + lev_dtype, key_dtype = dtypes + levels[level] = np.array([0, 1], dtype=lev_dtype) + key[level] = key_dtype(1) + idx = MultiIndex.from_product(levels) + assert idx.get_loc(tuple(key)) == 3 + + +def test_get_loc_cast_bool(): + # GH 19086 : int is casted to bool, but not vice-versa + levels = [[False, True], np.arange(2, dtype='int64')] + idx = MultiIndex.from_product(levels) + + assert idx.get_loc((0, 1)) == 1 + assert idx.get_loc((1, 0)) == 2 + + pytest.raises(KeyError, idx.get_loc, (False, True)) + pytest.raises(KeyError, idx.get_loc, (True, False)) + + +@pytest.mark.parametrize('level', [0, 1]) +def test_get_loc_nan(level, nulls_fixture): + # GH 
18485 : NaN in MultiIndex + levels = [['a', 'b'], ['c', 'd']] + key = ['b', 'd'] + levels[level] = np.array([0, nulls_fixture], dtype=type(nulls_fixture)) + key[level] = nulls_fixture + idx = MultiIndex.from_product(levels) + assert idx.get_loc(tuple(key)) == 3 + + +def test_get_loc_missing_nan(): + # GH 8569 + idx = MultiIndex.from_arrays([[1.0, 2.0], [3.0, 4.0]]) + assert isinstance(idx.get_loc(1), slice) + pytest.raises(KeyError, idx.get_loc, 3) + pytest.raises(KeyError, idx.get_loc, np.nan) + pytest.raises(KeyError, idx.get_loc, [np.nan]) + + +def test_get_indexer_categorical_time(): + # https://github.com/pandas-dev/pandas/issues/21390 + midx = MultiIndex.from_product( + [Categorical(['a', 'b', 'c']), + Categorical(date_range("2012-01-01", periods=3, freq='H'))]) + result = midx.get_indexer(midx) + tm.assert_numpy_array_equal(result, np.arange(9, dtype=np.intp)) diff --git a/pandas/tests/indexes/multi/test_integrity.py b/pandas/tests/indexes/multi/test_integrity.py new file mode 100644 index 0000000000000..7a8f8b60d31ba --- /dev/null +++ b/pandas/tests/indexes/multi/test_integrity.py @@ -0,0 +1,288 @@ +# -*- coding: utf-8 -*- + +import re + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +import pytest +from pandas import IntervalIndex, MultiIndex, RangeIndex +from pandas.compat import lrange, range +from pandas.core.dtypes.cast import construct_1d_object_array_from_listlike + + +def test_labels_dtypes(): + + # GH 8456 + i = MultiIndex.from_tuples([('A', 1), ('A', 2)]) + assert i.labels[0].dtype == 'int8' + assert i.labels[1].dtype == 'int8' + + i = MultiIndex.from_product([['a'], range(40)]) + assert i.labels[1].dtype == 'int8' + i = MultiIndex.from_product([['a'], range(400)]) + assert i.labels[1].dtype == 'int16' + i = MultiIndex.from_product([['a'], range(40000)]) + assert i.labels[1].dtype == 'int32' + + i = pd.MultiIndex.from_product([['a'], range(1000)]) + assert (i.labels[0] >= 0).all() + assert (i.labels[1] >= 0).all() + + 
+def test_values_boxed(): + tuples = [(1, pd.Timestamp('2000-01-01')), (2, pd.NaT), + (3, pd.Timestamp('2000-01-03')), + (1, pd.Timestamp('2000-01-04')), + (2, pd.Timestamp('2000-01-02')), + (3, pd.Timestamp('2000-01-03'))] + result = pd.MultiIndex.from_tuples(tuples) + expected = construct_1d_object_array_from_listlike(tuples) + tm.assert_numpy_array_equal(result.values, expected) + # Check that code branches for boxed values produce identical results + tm.assert_numpy_array_equal(result.values[:4], result[:4].values) + + +def test_values_multiindex_datetimeindex(): + # Test to ensure we hit the boxing / nobox part of MI.values + ints = np.arange(10 ** 18, 10 ** 18 + 5) + naive = pd.DatetimeIndex(ints) + aware = pd.DatetimeIndex(ints, tz='US/Central') + + idx = pd.MultiIndex.from_arrays([naive, aware]) + result = idx.values + + outer = pd.DatetimeIndex([x[0] for x in result]) + tm.assert_index_equal(outer, naive) + + inner = pd.DatetimeIndex([x[1] for x in result]) + tm.assert_index_equal(inner, aware) + + # n_lev > n_lab + result = idx[:2].values + + outer = pd.DatetimeIndex([x[0] for x in result]) + tm.assert_index_equal(outer, naive[:2]) + + inner = pd.DatetimeIndex([x[1] for x in result]) + tm.assert_index_equal(inner, aware[:2]) + + +def test_values_multiindex_periodindex(): + # Test to ensure we hit the boxing / nobox part of MI.values + ints = np.arange(2007, 2012) + pidx = pd.PeriodIndex(ints, freq='D') + + idx = pd.MultiIndex.from_arrays([ints, pidx]) + result = idx.values + + outer = pd.Int64Index([x[0] for x in result]) + tm.assert_index_equal(outer, pd.Int64Index(ints)) + + inner = pd.PeriodIndex([x[1] for x in result]) + tm.assert_index_equal(inner, pidx) + + # n_lev > n_lab + result = idx[:2].values + + outer = pd.Int64Index([x[0] for x in result]) + tm.assert_index_equal(outer, pd.Int64Index(ints[:2])) + + inner = pd.PeriodIndex([x[1] for x in result]) + tm.assert_index_equal(inner, pidx[:2]) + + +def test_consistency(): + # need to construct an 
overflow + major_axis = lrange(70000) + minor_axis = lrange(10) + + major_labels = np.arange(70000) + minor_labels = np.repeat(lrange(10), 7000) + + # the fact that it works means it's consistent + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + + # inconsistent + major_labels = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3]) + minor_labels = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1]) + index = MultiIndex(levels=[major_axis, minor_axis], + labels=[major_labels, minor_labels]) + + assert not index.is_unique + + +def test_hash_collisions(): + # non-smoke test that we don't get hash collisions + + index = MultiIndex.from_product([np.arange(1000), np.arange(1000)], + names=['one', 'two']) + result = index.get_indexer(index.values) + tm.assert_numpy_array_equal(result, np.arange( + len(index), dtype='intp')) + + for i in [0, 1, len(index) - 2, len(index) - 1]: + result = index.get_loc(index[i]) + assert result == i + + +def test_dims(): + pass + + +def test_take_invalid_kwargs(): + vals = [['A', 'B'], + [pd.Timestamp('2011-01-01'), pd.Timestamp('2011-01-02')]] + idx = pd.MultiIndex.from_product(vals, names=['str', 'dt']) + indices = [1, 2] + + msg = r"take\(\) got an unexpected keyword argument 'foo'" + tm.assert_raises_regex(TypeError, msg, idx.take, + indices, foo=2) + + msg = "the 'out' parameter is not supported" + tm.assert_raises_regex(ValueError, msg, idx.take, + indices, out=indices) + + msg = "the 'mode' parameter is not supported" + tm.assert_raises_regex(ValueError, msg, idx.take, + indices, mode='clip') + + +def test_isna_behavior(idx): + # should not segfault GH5123 + # NOTE: if MI representation changes, may make sense to allow + # isna(MI) + with pytest.raises(NotImplementedError): + pd.isna(idx) + + +def test_large_multiindex_error(): + # GH12527 + df_below_1000000 = pd.DataFrame( + 1, index=pd.MultiIndex.from_product([[1, 2], range(499999)]), + columns=['dest']) + with pytest.raises(KeyError): + df_below_1000000.loc[(-1, 0),
'dest'] + with pytest.raises(KeyError): + df_below_1000000.loc[(3, 0), 'dest'] + df_above_1000000 = pd.DataFrame( + 1, index=pd.MultiIndex.from_product([[1, 2], range(500001)]), + columns=['dest']) + with pytest.raises(KeyError): + df_above_1000000.loc[(-1, 0), 'dest'] + with pytest.raises(KeyError): + df_above_1000000.loc[(3, 0), 'dest'] + + +def test_million_record_attribute_error(): + # GH 18165 + r = list(range(1000000)) + df = pd.DataFrame({'a': r, 'b': r}, + index=pd.MultiIndex.from_tuples([(x, x) for x in r])) + + with tm.assert_raises_regex(AttributeError, + "'Series' object has no attribute 'foo'"): + df['a'].foo() + + +def test_can_hold_identifiers(idx): + key = idx[0] + assert idx._can_hold_identifiers_and_holds_name(key) is True + + +def test_metadata_immutable(idx): + levels, labels = idx.levels, idx.labels + # shouldn't be able to set at either the top level or base level + mutable_regex = re.compile('does not support mutable operations') + with tm.assert_raises_regex(TypeError, mutable_regex): + levels[0] = levels[0] + with tm.assert_raises_regex(TypeError, mutable_regex): + levels[0][0] = levels[0][0] + # ditto for labels + with tm.assert_raises_regex(TypeError, mutable_regex): + labels[0] = labels[0] + with tm.assert_raises_regex(TypeError, mutable_regex): + labels[0][0] = labels[0][0] + # and for names + names = idx.names + with tm.assert_raises_regex(TypeError, mutable_regex): + names[0] = names[0] + + +def test_level_setting_resets_attributes(): + ind = pd.MultiIndex.from_arrays([ + ['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3] + ]) + assert ind.is_monotonic + ind.set_levels([['A', 'B'], [1, 3, 2]], inplace=True) + # if this fails, probably didn't reset the cache correctly. 
+ assert not ind.is_monotonic + + +def test_rangeindex_fallback_coercion_bug(): + # GH 12893 + foo = pd.DataFrame(np.arange(100).reshape((10, 10))) + bar = pd.DataFrame(np.arange(100).reshape((10, 10))) + df = pd.concat({'foo': foo.stack(), 'bar': bar.stack()}, axis=1) + df.index.names = ['fizz', 'buzz'] + + str(df) + expected = pd.DataFrame({'bar': np.arange(100), + 'foo': np.arange(100)}, + index=pd.MultiIndex.from_product( + [range(10), range(10)], + names=['fizz', 'buzz'])) + tm.assert_frame_equal(df, expected, check_like=True) + + result = df.index.get_level_values('fizz') + expected = pd.Int64Index(np.arange(10), name='fizz').repeat(10) + tm.assert_index_equal(result, expected) + + result = df.index.get_level_values('buzz') + expected = pd.Int64Index(np.tile(np.arange(10), 10), name='buzz') + tm.assert_index_equal(result, expected) + + +def test_hash_error(indices): + index = indices + tm.assert_raises_regex(TypeError, "unhashable type: %r" % + type(index).__name__, hash, indices) + + +def test_mutability(indices): + if not len(indices): + return + pytest.raises(TypeError, indices.__setitem__, 0, indices[0]) + + +def test_wrong_number_names(indices): + def testit(ind): + ind.names = ["apple", "banana", "carrot"] + tm.assert_raises_regex(ValueError, "^Length", testit, indices) + + +def test_memory_usage(idx): + result = idx.memory_usage() + if len(idx): + idx.get_loc(idx[0]) + result2 = idx.memory_usage() + result3 = idx.memory_usage(deep=True) + + # RangeIndex, IntervalIndex + # don't have engines + if not isinstance(idx, (RangeIndex, IntervalIndex)): + assert result2 > result + + if idx.inferred_type == 'object': + assert result3 > result2 + + else: + + # we report 0 for no-length + assert result == 0 + + +def test_nlevels(idx): + assert idx.nlevels == 2 diff --git a/pandas/tests/indexes/multi/test_join.py b/pandas/tests/indexes/multi/test_join.py new file mode 100644 index 0000000000000..4a386c6e8dbe4 --- /dev/null +++ 
b/pandas/tests/indexes/multi/test_join.py @@ -0,0 +1,94 @@ +# -*- coding: utf-8 -*- + + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +import pytest +from pandas import Index, MultiIndex + + +@pytest.mark.parametrize('other', + [Index(['three', 'one', 'two']), + Index(['one']), + Index(['one', 'three'])]) +def test_join_level(idx, other, join_type): + join_index, lidx, ridx = other.join(idx, how=join_type, + level='second', + return_indexers=True) + + exp_level = other.join(idx.levels[1], how=join_type) + assert join_index.levels[0].equals(idx.levels[0]) + assert join_index.levels[1].equals(exp_level) + + # pare down levels + mask = np.array( + [x[1] in exp_level for x in idx], dtype=bool) + exp_values = idx.values[mask] + tm.assert_numpy_array_equal(join_index.values, exp_values) + + if join_type in ('outer', 'inner'): + join_index2, ridx2, lidx2 = \ + idx.join(other, how=join_type, level='second', + return_indexers=True) + + assert join_index.equals(join_index2) + tm.assert_numpy_array_equal(lidx, lidx2) + tm.assert_numpy_array_equal(ridx, ridx2) + tm.assert_numpy_array_equal(join_index2.values, exp_values) + + +def test_join_level_corner_case(idx): + # some corner cases + index = Index(['three', 'one', 'two']) + result = index.join(idx, level='second') + assert isinstance(result, MultiIndex) + + tm.assert_raises_regex(TypeError, "Join.*MultiIndex.*ambiguous", + idx.join, idx, level=1) + + +def test_join_self(idx, join_type): + joined = idx.join(idx, how=join_type) + assert idx is joined + + +def test_join_multi(): + # GH 10665 + midx = pd.MultiIndex.from_product( + [np.arange(4), np.arange(4)], names=['a', 'b']) + idx = pd.Index([1, 2, 5], name='b') + + # inner + jidx, lidx, ridx = midx.join(idx, how='inner', return_indexers=True) + exp_idx = pd.MultiIndex.from_product( + [np.arange(4), [1, 2]], names=['a', 'b']) + exp_lidx = np.array([1, 2, 5, 6, 9, 10, 13, 14], dtype=np.intp) + exp_ridx = np.array([0, 1, 0, 1, 0, 1, 0, 1], 
dtype=np.intp) + tm.assert_index_equal(jidx, exp_idx) + tm.assert_numpy_array_equal(lidx, exp_lidx) + tm.assert_numpy_array_equal(ridx, exp_ridx) + # flip + jidx, ridx, lidx = idx.join(midx, how='inner', return_indexers=True) + tm.assert_index_equal(jidx, exp_idx) + tm.assert_numpy_array_equal(lidx, exp_lidx) + tm.assert_numpy_array_equal(ridx, exp_ridx) + + # keep MultiIndex + jidx, lidx, ridx = midx.join(idx, how='left', return_indexers=True) + exp_ridx = np.array([-1, 0, 1, -1, -1, 0, 1, -1, -1, 0, 1, -1, -1, 0, + 1, -1], dtype=np.intp) + tm.assert_index_equal(jidx, midx) + assert lidx is None + tm.assert_numpy_array_equal(ridx, exp_ridx) + # flip + jidx, ridx, lidx = idx.join(midx, how='right', return_indexers=True) + tm.assert_index_equal(jidx, midx) + assert lidx is None + tm.assert_numpy_array_equal(ridx, exp_ridx) + + +def test_join_self_unique(idx, join_type): + if idx.is_unique: + joined = idx.join(idx, how=join_type) + assert (idx == joined).all() diff --git a/pandas/tests/indexes/multi/test_missing.py b/pandas/tests/indexes/multi/test_missing.py new file mode 100644 index 0000000000000..01465ea4c2f3b --- /dev/null +++ b/pandas/tests/indexes/multi/test_missing.py @@ -0,0 +1,145 @@ +# -*- coding: utf-8 -*- + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +import pytest +from pandas import Int64Index, MultiIndex, PeriodIndex, UInt64Index, isna +from pandas._libs.tslib import iNaT +from pandas.core.indexes.datetimelike import DatetimeIndexOpsMixin + + +def test_fillna(idx): + # GH 11343 + + # TODO: Remove or Refactor. 
Not Implemented for MultiIndex + for name, index in [('idx', idx), ]: + if len(index) == 0: + pass + elif isinstance(index, MultiIndex): + idx = index.copy() + msg = "isna is not defined for MultiIndex" + with tm.assert_raises_regex(NotImplementedError, msg): + idx.fillna(idx[0]) + else: + idx = index.copy() + result = idx.fillna(idx[0]) + tm.assert_index_equal(result, idx) + assert result is not idx + + msg = "'value' must be a scalar, passed: " + with tm.assert_raises_regex(TypeError, msg): + idx.fillna([idx[0]]) + + idx = index.copy() + values = idx.values + + if isinstance(index, DatetimeIndexOpsMixin): + values[1] = iNaT + elif isinstance(index, (Int64Index, UInt64Index)): + continue + else: + values[1] = np.nan + + if isinstance(index, PeriodIndex): + idx = index.__class__(values, freq=index.freq) + else: + idx = index.__class__(values) + + expected = np.array([False] * len(idx), dtype=bool) + expected[1] = True + tm.assert_numpy_array_equal(idx._isnan, expected) + assert idx.hasnans + + +def test_dropna(): + # GH 6194 + idx = pd.MultiIndex.from_arrays([[1, np.nan, 3, np.nan, 5], + [1, 2, np.nan, np.nan, 5], + ['a', 'b', 'c', np.nan, 'e']]) + + exp = pd.MultiIndex.from_arrays([[1, 5], + [1, 5], + ['a', 'e']]) + tm.assert_index_equal(idx.dropna(), exp) + tm.assert_index_equal(idx.dropna(how='any'), exp) + + exp = pd.MultiIndex.from_arrays([[1, np.nan, 3, 5], + [1, 2, np.nan, 5], + ['a', 'b', 'c', 'e']]) + tm.assert_index_equal(idx.dropna(how='all'), exp) + + msg = "invalid how option: xxx" + with tm.assert_raises_regex(ValueError, msg): + idx.dropna(how='xxx') + + +def test_nulls(idx): + # this is really a smoke test for the methods + # as these are adequately tested for function elsewhere + + # TODO: Remove or Refactor. MultiIndex not Implemented.
+ for name, index in [('idx', idx), ]: + if len(index) == 0: + tm.assert_numpy_array_equal( + index.isna(), np.array([], dtype=bool)) + elif isinstance(index, MultiIndex): + idx = index.copy() + msg = "isna is not defined for MultiIndex" + with tm.assert_raises_regex(NotImplementedError, msg): + idx.isna() + else: + + if not index.hasnans: + tm.assert_numpy_array_equal( + index.isna(), np.zeros(len(index), dtype=bool)) + tm.assert_numpy_array_equal( + index.notna(), np.ones(len(index), dtype=bool)) + else: + result = isna(index) + tm.assert_numpy_array_equal(index.isna(), result) + tm.assert_numpy_array_equal(index.notna(), ~result) + + +@pytest.mark.xfail +def test_hasnans_isnans(idx): + # GH 11343, added tests for hasnans / isnans + index = idx.copy() + + # cases in indices doesn't include NaN + expected = np.array([False] * len(index), dtype=bool) + tm.assert_numpy_array_equal(index._isnan, expected) + assert not index.hasnans + + index = idx.copy() + values = index.values + values[1] = np.nan + + index = idx.__class__(values) + + expected = np.array([False] * len(index), dtype=bool) + expected[1] = True + tm.assert_numpy_array_equal(index._isnan, expected) + assert index.hasnans + + +def test_nan_stays_float(): + + # GH 7031 + idx0 = pd.MultiIndex(levels=[["A", "B"], []], + labels=[[1, 0], [-1, -1]], + names=[0, 1]) + idx1 = pd.MultiIndex(levels=[["C"], ["D"]], + labels=[[0], [0]], + names=[0, 1]) + idxm = idx0.join(idx1, how='outer') + assert pd.isna(idx0.get_level_values(1)).all() + # the following failed in 0.14.1 + assert pd.isna(idxm.get_level_values(1)[:-1]).all() + + df0 = pd.DataFrame([[1, 2]], index=idx0) + df1 = pd.DataFrame([[3, 4]], index=idx1) + dfm = df0 - df1 + assert pd.isna(df0.index.get_level_values(1)).all() + # the following failed in 0.14.1 + assert pd.isna(dfm.index.get_level_values(1)[:-1]).all() diff --git a/pandas/tests/indexes/multi/test_monotonic.py b/pandas/tests/indexes/multi/test_monotonic.py new file mode 100644 index 
0000000000000..f02447e27ab81 --- /dev/null +++ b/pandas/tests/indexes/multi/test_monotonic.py @@ -0,0 +1,205 @@ +# -*- coding: utf-8 -*- + +import numpy as np +import pandas as pd +import pytest +from pandas import Index, IntervalIndex, MultiIndex + + +def test_is_monotonic_increasing(): + i = MultiIndex.from_product([np.arange(10), + np.arange(10)], names=['one', 'two']) + assert i.is_monotonic + assert i._is_strictly_monotonic_increasing + assert Index(i.values).is_monotonic + assert i._is_strictly_monotonic_increasing + + i = MultiIndex.from_product([np.arange(10, 0, -1), + np.arange(10)], names=['one', 'two']) + assert not i.is_monotonic + assert not i._is_strictly_monotonic_increasing + assert not Index(i.values).is_monotonic + assert not Index(i.values)._is_strictly_monotonic_increasing + + i = MultiIndex.from_product([np.arange(10), + np.arange(10, 0, -1)], + names=['one', 'two']) + assert not i.is_monotonic + assert not i._is_strictly_monotonic_increasing + assert not Index(i.values).is_monotonic + assert not Index(i.values)._is_strictly_monotonic_increasing + + i = MultiIndex.from_product([[1.0, np.nan, 2.0], ['a', 'b', 'c']]) + assert not i.is_monotonic + assert not i._is_strictly_monotonic_increasing + assert not Index(i.values).is_monotonic + assert not Index(i.values)._is_strictly_monotonic_increasing + + # string ordering + i = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], + ['one', 'two', 'three']], + labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], + [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], + names=['first', 'second']) + assert not i.is_monotonic + assert not Index(i.values).is_monotonic + assert not i._is_strictly_monotonic_increasing + assert not Index(i.values)._is_strictly_monotonic_increasing + + i = MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], + ['mom', 'next', 'zenith']], + labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], + [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], + names=['first', 'second']) + assert i.is_monotonic + assert Index(i.values).is_monotonic + assert 
i._is_strictly_monotonic_increasing + assert Index(i.values)._is_strictly_monotonic_increasing + + # mixed levels, hits the TypeError + i = MultiIndex( + levels=[[1, 2, 3, 4], ['gb00b03mlx29', 'lu0197800237', + 'nl0000289783', + 'nl0000289965', 'nl0000301109']], + labels=[[0, 1, 1, 2, 2, 2, 3], [4, 2, 0, 0, 1, 3, -1]], + names=['household_id', 'asset_id']) + + assert not i.is_monotonic + assert not i._is_strictly_monotonic_increasing + + # empty + i = MultiIndex.from_arrays([[], []]) + assert i.is_monotonic + assert Index(i.values).is_monotonic + assert i._is_strictly_monotonic_increasing + assert Index(i.values)._is_strictly_monotonic_increasing + + +def test_is_monotonic_decreasing(): + i = MultiIndex.from_product([np.arange(9, -1, -1), + np.arange(9, -1, -1)], + names=['one', 'two']) + assert i.is_monotonic_decreasing + assert i._is_strictly_monotonic_decreasing + assert Index(i.values).is_monotonic_decreasing + assert i._is_strictly_monotonic_decreasing + + i = MultiIndex.from_product([np.arange(10), + np.arange(10, 0, -1)], + names=['one', 'two']) + assert not i.is_monotonic_decreasing + assert not i._is_strictly_monotonic_decreasing + assert not Index(i.values).is_monotonic_decreasing + assert not Index(i.values)._is_strictly_monotonic_decreasing + + i = MultiIndex.from_product([np.arange(10, 0, -1), + np.arange(10)], names=['one', 'two']) + assert not i.is_monotonic_decreasing + assert not i._is_strictly_monotonic_decreasing + assert not Index(i.values).is_monotonic_decreasing + assert not Index(i.values)._is_strictly_monotonic_decreasing + + i = MultiIndex.from_product([[2.0, np.nan, 1.0], ['c', 'b', 'a']]) + assert not i.is_monotonic_decreasing + assert not i._is_strictly_monotonic_decreasing + assert not Index(i.values).is_monotonic_decreasing + assert not Index(i.values)._is_strictly_monotonic_decreasing + + # string ordering + i = MultiIndex(levels=[['qux', 'foo', 'baz', 'bar'], + ['three', 'two', 'one']], + labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], + [0, 
1, 2, 0, 1, 1, 2, 0, 1, 2]], + names=['first', 'second']) + assert not i.is_monotonic_decreasing + assert not Index(i.values).is_monotonic_decreasing + assert not i._is_strictly_monotonic_decreasing + assert not Index(i.values)._is_strictly_monotonic_decreasing + + i = MultiIndex(levels=[['qux', 'foo', 'baz', 'bar'], + ['zenith', 'next', 'mom']], + labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], + [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], + names=['first', 'second']) + assert i.is_monotonic_decreasing + assert Index(i.values).is_monotonic_decreasing + assert i._is_strictly_monotonic_decreasing + assert Index(i.values)._is_strictly_monotonic_decreasing + + # mixed levels, hits the TypeError + i = MultiIndex( + levels=[[4, 3, 2, 1], ['nl0000301109', 'nl0000289965', + 'nl0000289783', 'lu0197800237', + 'gb00b03mlx29']], + labels=[[0, 1, 1, 2, 2, 2, 3], [4, 2, 0, 0, 1, 3, -1]], + names=['household_id', 'asset_id']) + + assert not i.is_monotonic_decreasing + assert not i._is_strictly_monotonic_decreasing + + # empty + i = MultiIndex.from_arrays([[], []]) + assert i.is_monotonic_decreasing + assert Index(i.values).is_monotonic_decreasing + assert i._is_strictly_monotonic_decreasing + assert Index(i.values)._is_strictly_monotonic_decreasing + + +def test_is_strictly_monotonic_increasing(): + idx = pd.MultiIndex(levels=[['bar', 'baz'], ['mom', 'next']], + labels=[[0, 0, 1, 1], [0, 0, 0, 1]]) + assert idx.is_monotonic_increasing + assert not idx._is_strictly_monotonic_increasing + + +def test_is_strictly_monotonic_decreasing(): + idx = pd.MultiIndex(levels=[['baz', 'bar'], ['next', 'mom']], + labels=[[0, 0, 1, 1], [0, 0, 0, 1]]) + assert idx.is_monotonic_decreasing + assert not idx._is_strictly_monotonic_decreasing + + +def test_searchsorted_monotonic(indices): + # GH17271 + # not implemented for tuple searches in MultiIndex + # or Intervals searches in IntervalIndex + if isinstance(indices, (MultiIndex, IntervalIndex)): + return + + # nothing to test if the index is empty + if 
indices.empty: + return + value = indices[0] + + # determine the expected results (handle dupes for 'right') + expected_left, expected_right = 0, (indices == value).argmin() + if expected_right == 0: + # all values are the same, expected_right should be length + expected_right = len(indices) + + # test _searchsorted_monotonic in all cases + # test searchsorted only for increasing + if indices.is_monotonic_increasing: + ssm_left = indices._searchsorted_monotonic(value, side='left') + assert expected_left == ssm_left + + ssm_right = indices._searchsorted_monotonic(value, side='right') + assert expected_right == ssm_right + + ss_left = indices.searchsorted(value, side='left') + assert expected_left == ss_left + + ss_right = indices.searchsorted(value, side='right') + assert expected_right == ss_right + + elif indices.is_monotonic_decreasing: + ssm_left = indices._searchsorted_monotonic(value, side='left') + assert expected_left == ssm_left + + ssm_right = indices._searchsorted_monotonic(value, side='right') + assert expected_right == ssm_right + + else: + # non-monotonic should raise. 
+        with pytest.raises(ValueError):
+            indices._searchsorted_monotonic(value, side='left')
diff --git a/pandas/tests/indexes/multi/test_names.py b/pandas/tests/indexes/multi/test_names.py
new file mode 100644
index 0000000000000..a9fbb55679173
--- /dev/null
+++ b/pandas/tests/indexes/multi/test_names.py
@@ -0,0 +1,117 @@
+# -*- coding: utf-8 -*-
+
+
+import pandas as pd
+import pandas.util.testing as tm
+from pandas import MultiIndex
+
+
+def check_level_names(index, names):
+    assert [level.name for level in index.levels] == list(names)
+
+
+def test_slice_keep_name():
+    x = MultiIndex.from_tuples([('a', 'b'), (1, 2), ('c', 'd')],
+                               names=['x', 'y'])
+    assert x[1:].names == x.names
+
+
+def test_index_name_retained():
+    # GH9857
+    result = pd.DataFrame({'x': [1, 2, 6],
+                           'y': [2, 2, 8],
+                           'z': [-5, 0, 5]})
+    result = result.set_index('z')
+    result.loc[10] = [9, 10]
+    df_expected = pd.DataFrame({'x': [1, 2, 6, 9],
+                                'y': [2, 2, 8, 10],
+                                'z': [-5, 0, 5, 10]})
+    df_expected = df_expected.set_index('z')
+    tm.assert_frame_equal(result, df_expected)
+
+
+def test_changing_names(idx):
+
+    # names should be applied to levels
+    level_names = [level.name for level in idx.levels]
+    check_level_names(idx, idx.names)
+
+    view = idx.view()
+    copy = idx.copy()
+    shallow_copy = idx._shallow_copy()
+
+    # changing names should change level names on object
+    new_names = [name + "a" for name in idx.names]
+    idx.names = new_names
+    check_level_names(idx, new_names)
+
+    # but not on copies
+    check_level_names(view, level_names)
+    check_level_names(copy, level_names)
+    check_level_names(shallow_copy, level_names)
+
+    # and copies shouldn't change original
+    shallow_copy.names = [name + "c" for name in shallow_copy.names]
+    check_level_names(idx, new_names)
+
+
+def test_take_preserve_name(idx):
+    taken = idx.take([3, 0, 1])
+    assert taken.names == idx.names
+
+
+def test_copy_names():
+    # Check that adding a "names" parameter to the copy is honored
+    # GH14302
+    multi_idx = pd.Index([(1, 2), (3, 4)], names=['MyName1', 'MyName2'])
+    multi_idx1 = multi_idx.copy()
+
+    assert multi_idx.equals(multi_idx1)
+    assert multi_idx.names == ['MyName1', 'MyName2']
+    assert multi_idx1.names == ['MyName1', 'MyName2']
+
+    multi_idx2 = multi_idx.copy(names=['NewName1', 'NewName2'])
+
+    assert multi_idx.equals(multi_idx2)
+    assert multi_idx.names == ['MyName1', 'MyName2']
+    assert multi_idx2.names == ['NewName1', 'NewName2']
+
+    multi_idx3 = multi_idx.copy(name=['NewName1', 'NewName2'])
+
+    assert multi_idx.equals(multi_idx3)
+    assert multi_idx.names == ['MyName1', 'MyName2']
+    assert multi_idx3.names == ['NewName1', 'NewName2']
+
+
+def test_names(idx, index_names):
+
+    # names are assigned in setup
+    names = index_names
+    level_names = [level.name for level in idx.levels]
+    assert names == level_names
+
+    # setting bad names on existing
+    index = idx
+    tm.assert_raises_regex(ValueError, "^Length of names",
+                           setattr, index, "names",
+                           list(index.names) + ["third"])
+    tm.assert_raises_regex(ValueError, "^Length of names",
+                           setattr, index, "names", [])
+
+    # initializing with bad names (should always be equivalent)
+    major_axis, minor_axis = idx.levels
+    major_labels, minor_labels = idx.labels
+    tm.assert_raises_regex(ValueError, "^Length of names", MultiIndex,
+                           levels=[major_axis, minor_axis],
+                           labels=[major_labels, minor_labels],
+                           names=['first'])
+    tm.assert_raises_regex(ValueError, "^Length of names", MultiIndex,
+                           levels=[major_axis, minor_axis],
+                           labels=[major_labels, minor_labels],
+                           names=['first', 'second', 'third'])
+
+    # names are assigned
+    index.names = ["a", "b"]
+    ind_names = list(index.names)
+    level_names = [level.name for level in index.levels]
+    assert ind_names == level_names
diff --git a/pandas/tests/indexes/multi/test_operations.py b/pandas/tests/indexes/multi/test_operations.py
new file mode 100644
index 0000000000000..d38cb28039595
--- /dev/null
+++ b/pandas/tests/indexes/multi/test_operations.py
@@ -0,0 +1,448 @@
+# -*- coding: utf-8 -*-
+
+import numpy as np
+import pandas as pd
+import pandas.util.testing as tm
+import pytest
+from pandas import (DatetimeIndex, Float64Index, Index, Int64Index, MultiIndex,
+                    PeriodIndex, TimedeltaIndex, UInt64Index, date_range,
+                    period_range)
+from pandas.compat import lrange, range
+from pandas.core.dtypes.dtypes import CategoricalDtype
+from pandas.core.indexes.datetimelike import DatetimeIndexOpsMixin
+from pandas.util.testing import assert_copy
+
+
+def check_level_names(index, names):
+    assert [level.name for level in index.levels] == list(names)
+
+
+def test_insert(idx):
+    # key contained in all levels
+    new_index = idx.insert(0, ('bar', 'two'))
+    assert new_index.equal_levels(idx)
+    assert new_index[0] == ('bar', 'two')
+
+    # key not contained in all levels
+    new_index = idx.insert(0, ('abc', 'three'))
+
+    exp0 = Index(list(idx.levels[0]) + ['abc'], name='first')
+    tm.assert_index_equal(new_index.levels[0], exp0)
+
+    exp1 = Index(list(idx.levels[1]) + ['three'], name='second')
+    tm.assert_index_equal(new_index.levels[1], exp1)
+    assert new_index[0] == ('abc', 'three')
+
+    # key wrong length
+    msg = "Item must have length equal to number of levels"
+    with tm.assert_raises_regex(ValueError, msg):
+        idx.insert(0, ('foo2',))
+
+    left = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1]],
+                        columns=['1st', '2nd', '3rd'])
+    left.set_index(['1st', '2nd'], inplace=True)
+    ts = left['3rd'].copy(deep=True)
+
+    left.loc[('b', 'x'), '3rd'] = 2
+    left.loc[('b', 'a'), '3rd'] = -1
+    left.loc[('b', 'b'), '3rd'] = 3
+    left.loc[('a', 'x'), '3rd'] = 4
+    left.loc[('a', 'w'), '3rd'] = 5
+    left.loc[('a', 'a'), '3rd'] = 6
+
+    ts.loc[('b', 'x')] = 2
+    ts.loc['b', 'a'] = -1
+    ts.loc[('b', 'b')] = 3
+    ts.loc['a', 'x'] = 4
+    ts.loc[('a', 'w')] = 5
+    ts.loc['a', 'a'] = 6
+
+    right = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1], ['b', 'x', 2],
+                          ['b', 'a', -1], ['b', 'b', 3], ['a', 'x', 4],
+                          ['a', 'w', 5], ['a', 'a', 6]],
+                         columns=['1st', '2nd', '3rd'])
+    right.set_index(['1st', '2nd'], inplace=True)
+    # FIXME data types changes to float because
+    # of intermediate nan insertion;
+    tm.assert_frame_equal(left, right, check_dtype=False)
+    tm.assert_series_equal(ts, right['3rd'])
+
+    # GH9250
+    idx = [('test1', i) for i in range(5)] + \
+        [('test2', i) for i in range(6)] + \
+        [('test', 17), ('test', 18)]
+
+    left = pd.Series(np.linspace(0, 10, 11),
+                     pd.MultiIndex.from_tuples(idx[:-2]))
+
+    left.loc[('test', 17)] = 11
+    left.loc[('test', 18)] = 12
+
+    right = pd.Series(np.linspace(0, 12, 13),
+                      pd.MultiIndex.from_tuples(idx))
+
+    tm.assert_series_equal(left, right)
+
+
+def test_bounds(idx):
+    idx._bounds
+
+
+def test_append(idx):
+    result = idx[:3].append(idx[3:])
+    assert result.equals(idx)
+
+    foos = [idx[:1], idx[1:3], idx[3:]]
+    result = foos[0].append(foos[1:])
+    assert result.equals(idx)
+
+    # empty
+    result = idx.append([])
+    assert result.equals(idx)
+
+
+def test_groupby(idx):
+    groups = idx.groupby(np.array([1, 1, 1, 2, 2, 2]))
+    labels = idx.get_values().tolist()
+    exp = {1: labels[:3], 2: labels[3:]}
+    tm.assert_dict_equal(groups, exp)
+
+    # GH5620
+    groups = idx.groupby(idx)
+    exp = {key: [key] for key in idx}
+    tm.assert_dict_equal(groups, exp)
+
+
+def test_truncate():
+    major_axis = Index(lrange(4))
+    minor_axis = Index(lrange(2))
+
+    major_labels = np.array([0, 0, 1, 2, 3, 3])
+    minor_labels = np.array([0, 1, 0, 1, 0, 1])
+
+    index = MultiIndex(levels=[major_axis, minor_axis],
+                       labels=[major_labels, minor_labels])
+
+    result = index.truncate(before=1)
+    assert 'foo' not in result.levels[0]
+    assert 1 in result.levels[0]
+
+    result = index.truncate(after=1)
+    assert 2 not in result.levels[0]
+    assert 1 in result.levels[0]
+
+    result = index.truncate(before=1, after=2)
+    assert len(result.levels[0]) == 2
+
+    # after < before
+    pytest.raises(ValueError, index.truncate, 3, 1)
+
+
+def test_where():
+    i = MultiIndex.from_tuples([('A', 1), ('A', 2)])
+
+    def f():
+        i.where(True)
+
+    pytest.raises(NotImplementedError, f)
+
+
+def test_where_array_like():
+    i = MultiIndex.from_tuples([('A', 1), ('A', 2)])
+    klasses = [list, tuple, np.array, pd.Series]
+    cond = [False, True]
+
+    for klass in klasses:
+        def f():
+            return i.where(klass(cond))
+        pytest.raises(NotImplementedError, f)
+
+
+def test_reorder_levels(idx):
+    # this blows up
+    tm.assert_raises_regex(IndexError, '^Too many levels',
+                           idx.reorder_levels, [2, 1, 0])
+
+
+def test_astype(idx):
+    expected = idx.copy()
+    actual = idx.astype('O')
+    assert_copy(actual.levels, expected.levels)
+    assert_copy(actual.labels, expected.labels)
+    check_level_names(actual, expected.names)
+
+    with tm.assert_raises_regex(TypeError, "^Setting.*dtype.*object"):
+        idx.astype(np.dtype(int))
+
+
+@pytest.mark.parametrize('ordered', [True, False])
+def test_astype_category(idx, ordered):
+    # GH 18630
+    msg = '> 1 ndim Categorical are not supported at this time'
+    with tm.assert_raises_regex(NotImplementedError, msg):
+        idx.astype(CategoricalDtype(ordered=ordered))
+
+    if ordered is False:
+        # dtype='category' defaults to ordered=False, so only test once
+        with tm.assert_raises_regex(NotImplementedError, msg):
+            idx.astype('category')
+
+
+def test_repeat():
+    reps = 2
+    numbers = [1, 2, 3]
+    names = np.array(['foo', 'bar'])
+
+    m = MultiIndex.from_product([
+        numbers, names], names=names)
+    expected = MultiIndex.from_product([
+        numbers, names.repeat(reps)], names=names)
+    tm.assert_index_equal(m.repeat(reps), expected)
+
+    with tm.assert_produces_warning(FutureWarning):
+        result = m.repeat(n=reps)
+    tm.assert_index_equal(result, expected)
+
+
+def test_numpy_repeat():
+    reps = 2
+    numbers = [1, 2, 3]
+    names = np.array(['foo', 'bar'])
+
+    m = MultiIndex.from_product([
+        numbers, names], names=names)
+    expected = MultiIndex.from_product([
+        numbers, names.repeat(reps)], names=names)
+    tm.assert_index_equal(np.repeat(m, reps), expected)
+
+    msg = "the 'axis' parameter is not supported"
+    tm.assert_raises_regex(
+        ValueError, msg, np.repeat, m, reps, axis=1)
+
+
+def test_append_mixed_dtypes():
+    # GH 13660
+    dti = date_range('2011-01-01', freq='M', periods=3, )
+    dti_tz = date_range('2011-01-01', freq='M', periods=3, tz='US/Eastern')
+    pi = period_range('2011-01', freq='M', periods=3)
+
+    mi = MultiIndex.from_arrays([[1, 2, 3],
+                                 [1.1, np.nan, 3.3],
+                                 ['a', 'b', 'c'],
+                                 dti, dti_tz, pi])
+    assert mi.nlevels == 6
+
+    res = mi.append(mi)
+    exp = MultiIndex.from_arrays([[1, 2, 3, 1, 2, 3],
+                                  [1.1, np.nan, 3.3, 1.1, np.nan, 3.3],
+                                  ['a', 'b', 'c', 'a', 'b', 'c'],
+                                  dti.append(dti),
+                                  dti_tz.append(dti_tz),
+                                  pi.append(pi)])
+    tm.assert_index_equal(res, exp)
+
+    other = MultiIndex.from_arrays([['x', 'y', 'z'], ['x', 'y', 'z'],
+                                    ['x', 'y', 'z'], ['x', 'y', 'z'],
+                                    ['x', 'y', 'z'], ['x', 'y', 'z']])
+
+    res = mi.append(other)
+    exp = MultiIndex.from_arrays([[1, 2, 3, 'x', 'y', 'z'],
+                                  [1.1, np.nan, 3.3, 'x', 'y', 'z'],
+                                  ['a', 'b', 'c', 'x', 'y', 'z'],
+                                  dti.append(pd.Index(['x', 'y', 'z'])),
+                                  dti_tz.append(pd.Index(['x', 'y', 'z'])),
+                                  pi.append(pd.Index(['x', 'y', 'z']))])
+    tm.assert_index_equal(res, exp)
+
+
+def test_take(idx):
+    indexer = [4, 3, 0, 2]
+    result = idx.take(indexer)
+    expected = idx[indexer]
+    assert result.equals(expected)
+
+    if not isinstance(idx,
+                      (DatetimeIndex, PeriodIndex, TimedeltaIndex)):
+        # GH 10791
+        with pytest.raises(AttributeError):
+            idx.freq
+
+
+def test_take_invalid_kwargs(idx):
+    idx = idx
+    indices = [1, 2]
+
+    msg = r"take\(\) got an unexpected keyword argument 'foo'"
+    tm.assert_raises_regex(TypeError, msg, idx.take,
+                           indices, foo=2)
+
+    msg = "the 'out' parameter is not supported"
+    tm.assert_raises_regex(ValueError, msg, idx.take,
+                           indices, out=indices)
+
+    msg = "the 'mode' parameter is not supported"
+    tm.assert_raises_regex(ValueError, msg, idx.take,
+                           indices, mode='clip')
+
+
+def test_take_fill_value():
+    # GH 12631
+    vals = [['A', 'B'],
+            [pd.Timestamp('2011-01-01'), pd.Timestamp('2011-01-02')]]
+    idx = pd.MultiIndex.from_product(vals, names=['str', 'dt'])
+
+    result = idx.take(np.array([1, 0, -1]))
+    exp_vals = [('A', pd.Timestamp('2011-01-02')),
+                ('A', pd.Timestamp('2011-01-01')),
+                ('B', pd.Timestamp('2011-01-02'))]
+    expected = pd.MultiIndex.from_tuples(exp_vals, names=['str', 'dt'])
+    tm.assert_index_equal(result, expected)
+
+    # fill_value
+    result = idx.take(np.array([1, 0, -1]), fill_value=True)
+    exp_vals = [('A', pd.Timestamp('2011-01-02')),
+                ('A', pd.Timestamp('2011-01-01')),
+                (np.nan, pd.NaT)]
+    expected = pd.MultiIndex.from_tuples(exp_vals, names=['str', 'dt'])
+    tm.assert_index_equal(result, expected)
+
+    # allow_fill=False
+    result = idx.take(np.array([1, 0, -1]), allow_fill=False,
+                      fill_value=True)
+    exp_vals = [('A', pd.Timestamp('2011-01-02')),
+                ('A', pd.Timestamp('2011-01-01')),
+                ('B', pd.Timestamp('2011-01-02'))]
+    expected = pd.MultiIndex.from_tuples(exp_vals, names=['str', 'dt'])
+    tm.assert_index_equal(result, expected)
+
+    msg = ('When allow_fill=True and fill_value is not None, '
+           'all indices must be >= -1')
+    with tm.assert_raises_regex(ValueError, msg):
+        idx.take(np.array([1, 0, -2]), fill_value=True)
+    with tm.assert_raises_regex(ValueError, msg):
+        idx.take(np.array([1, 0, -5]), fill_value=True)
+
+    with pytest.raises(IndexError):
+        idx.take(np.array([1, -5]))
+
+
+def test_iter(idx):
+    result = list(idx)
+    expected = [('foo', 'one'), ('foo', 'two'), ('bar', 'one'),
+                ('baz', 'two'), ('qux', 'one'), ('qux', 'two')]
+    assert result == expected
+
+
+def test_sub(idx):
+
+    first = idx
+
+    # - now raises (previously was set op difference)
+    with pytest.raises(TypeError):
+        first - idx[-3:]
+    with pytest.raises(TypeError):
+        idx[-3:] - first
+    with pytest.raises(TypeError):
+        idx[-3:] - first.tolist()
+    with pytest.raises(TypeError):
+        first.tolist() - idx[-3:]
+
+
+def test_argsort(idx):
+    result = idx.argsort()
+    expected = idx.values.argsort()
+    tm.assert_numpy_array_equal(result, expected)
+
+
+def test_map(idx):
+    # callable
+    index = idx
+
+    # we don't infer UInt64
+    if isinstance(index, pd.UInt64Index):
+        expected = index.astype('int64')
+    else:
+        expected = index
+
+    result = index.map(lambda x: x)
+    tm.assert_index_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+    "mapper",
+    [
+        lambda values, idx: {i: e for e, i in zip(values, idx)},
+        lambda values, idx: pd.Series(values, idx)])
+def test_map_dictlike(idx, mapper):
+
+    if isinstance(idx, (pd.CategoricalIndex, pd.IntervalIndex)):
+        pytest.skip("skipping tests for {}".format(type(idx)))
+
+    identity = mapper(idx.values, idx)
+
+    # we don't infer to UInt64 for a dict
+    if isinstance(idx, pd.UInt64Index) and isinstance(identity, dict):
+        expected = idx.astype('int64')
+    else:
+        expected = idx
+
+    result = idx.map(identity)
+    tm.assert_index_equal(result, expected)
+
+    # empty mappable
+    expected = pd.Index([np.nan] * len(idx))
+    result = idx.map(mapper(expected, idx))
+    tm.assert_index_equal(result, expected)
+
+
+def test_numpy_ufuncs(idx):
+    # test ufuncs of numpy 1.9.2. see:
+    # http://docs.scipy.org/doc/numpy/reference/ufuncs.html
+
+    # some functions are skipped because it may return different result
+    # for unicode input depending on numpy version
+
+    for func in [np.exp, np.exp2, np.expm1, np.log, np.log2, np.log10,
+                 np.log1p, np.sqrt, np.sin, np.cos, np.tan, np.arcsin,
+                 np.arccos, np.arctan, np.sinh, np.cosh, np.tanh,
+                 np.arcsinh, np.arccosh, np.arctanh, np.deg2rad,
+                 np.rad2deg]:
+        if isinstance(idx, DatetimeIndexOpsMixin):
+            # raise TypeError or ValueError (PeriodIndex)
+            # PeriodIndex behavior should be changed in future version
+            with pytest.raises(Exception):
+                with np.errstate(all='ignore'):
+                    func(idx)
+        elif isinstance(idx, (Float64Index, Int64Index, UInt64Index)):
+            # coerces to float (e.g. np.sin)
+            with np.errstate(all='ignore'):
+                result = func(idx)
+                exp = Index(func(idx.values), name=idx.name)
+
+                tm.assert_index_equal(result, exp)
+                assert isinstance(result, pd.Float64Index)
+        else:
+            # raise AttributeError or TypeError
+            if len(idx) == 0:
+                continue
+            else:
+                with pytest.raises(Exception):
+                    with np.errstate(all='ignore'):
+                        func(idx)
+
+    for func in [np.isfinite, np.isinf, np.isnan, np.signbit]:
+        if isinstance(idx, DatetimeIndexOpsMixin):
+            # raise TypeError or ValueError (PeriodIndex)
+            with pytest.raises(Exception):
+                func(idx)
+        elif isinstance(idx, (Float64Index, Int64Index, UInt64Index)):
+            # Results in bool array
+            result = func(idx)
+            assert isinstance(result, np.ndarray)
+            assert not isinstance(result, Index)
+        else:
+            if len(idx) == 0:
+                continue
+            else:
+                with pytest.raises(Exception):
+                    func(idx)
diff --git a/pandas/tests/indexes/multi/test_partial_indexing.py b/pandas/tests/indexes/multi/test_partial_indexing.py
new file mode 100644
index 0000000000000..40e5e26e9cb0f
--- /dev/null
+++ b/pandas/tests/indexes/multi/test_partial_indexing.py
@@ -0,0 +1,98 @@
+import numpy as np
+import pytest
+
+import pandas as pd
+import pandas.util.testing as tm
+from pandas import DataFrame, MultiIndex, date_range
+
+
+def test_partial_string_timestamp_multiindex():
+    # GH10331
+    dr = pd.date_range('2016-01-01', '2016-01-03', freq='12H')
+    abc = ['a', 'b', 'c']
+    ix = pd.MultiIndex.from_product([dr, abc])
+    df = pd.DataFrame({'c1': range(0, 15)}, index=ix)
+    idx = pd.IndexSlice
+
+    #                        c1
+    # 2016-01-01 00:00:00 a   0
+    #                     b   1
+    #                     c   2
+    # 2016-01-01 12:00:00 a   3
+    #                     b   4
+    #                     c   5
+    # 2016-01-02 00:00:00 a   6
+    #                     b   7
+    #                     c   8
+    # 2016-01-02 12:00:00 a   9
+    #                     b  10
+    #                     c  11
+    # 2016-01-03 00:00:00 a  12
+    #                     b  13
+    #                     c  14
+
+    # partial string matching on a single index
+    for df_swap in (df.swaplevel(),
+                    df.swaplevel(0),
+                    df.swaplevel(0, 1)):
+        df_swap = df_swap.sort_index()
+        just_a = df_swap.loc['a']
+        result = just_a.loc['2016-01-01']
+        expected = df.loc[idx[:, 'a'], :].iloc[0:2]
+        expected.index = expected.index.droplevel(1)
+        tm.assert_frame_equal(result, expected)
+
+    # indexing with IndexSlice
+    result = df.loc[idx['2016-01-01':'2016-02-01', :], :]
+    expected = df
+    tm.assert_frame_equal(result, expected)
+
+    # match on secondary index
+    result = df_swap.loc[idx[:, '2016-01-01':'2016-01-01'], :]
+    expected = df_swap.iloc[[0, 1, 5, 6, 10, 11]]
+    tm.assert_frame_equal(result, expected)
+
+    # Even though this syntax works on a single index, this is somewhat
+    # ambiguous and we don't want to extend this behavior forward to work
+    # in multi-indexes. This would amount to selecting a scalar from a
+    # column.
+    with pytest.raises(KeyError):
+        df['2016-01-01']
+
+    # partial string match on year only
+    result = df.loc['2016']
+    expected = df
+    tm.assert_frame_equal(result, expected)
+
+    # partial string match on date
+    result = df.loc['2016-01-01']
+    expected = df.iloc[0:6]
+    tm.assert_frame_equal(result, expected)
+
+    # partial string match on date and hour, from middle
+    result = df.loc['2016-01-02 12']
+    expected = df.iloc[9:12]
+    tm.assert_frame_equal(result, expected)
+
+    # partial string match on secondary index
+    result = df_swap.loc[idx[:, '2016-01-02'], :]
+    expected = df_swap.iloc[[2, 3, 7, 8, 12, 13]]
+    tm.assert_frame_equal(result, expected)
+
+    # tuple selector with partial string match on date
+    result = df.loc[('2016-01-01', 'a'), :]
+    expected = df.iloc[[0, 3]]
+    tm.assert_frame_equal(result, expected)
+
+    # Slicing date on first level should break (of course)
+    with pytest.raises(KeyError):
+        df_swap.loc['2016-01-01']
+
+    # GH12685 (partial string with daily resolution or below)
+    dr = date_range('2013-01-01', periods=100, freq='D')
+    ix = MultiIndex.from_product([dr, ['a', 'b']])
+    df = DataFrame(np.random.randn(200, 1), columns=['A'], index=ix)
+
+    result = df.loc[idx['2013-03':'2013-03', :], :]
+    expected = df.iloc[118:180]
+    tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/indexes/multi/test_reindex.py b/pandas/tests/indexes/multi/test_reindex.py
new file mode 100644
index 0000000000000..346b23fed7075
--- /dev/null
+++ b/pandas/tests/indexes/multi/test_reindex.py
@@ -0,0 +1,99 @@
+# -*- coding: utf-8 -*-
+
+
+import numpy as np
+import pandas as pd
+import pandas.util.testing as tm
+from pandas import Index, MultiIndex
+
+
+def check_level_names(index, names):
+    assert [level.name for level in index.levels] == list(names)
+
+
+def test_reindex(idx):
+    result, indexer = idx.reindex(list(idx[:4]))
+    assert isinstance(result, MultiIndex)
+    check_level_names(result, idx[:4].names)
+
+    result, indexer = idx.reindex(list(idx))
+    assert isinstance(result, MultiIndex)
+    assert indexer is None
+    check_level_names(result, idx.names)
+
+
+def test_reindex_level(idx):
+    index = Index(['one'])
+
+    target, indexer = idx.reindex(index, level='second')
+    target2, indexer2 = index.reindex(idx, level='second')
+
+    exp_index = idx.join(index, level='second', how='right')
+    exp_index2 = idx.join(index, level='second', how='left')
+
+    assert target.equals(exp_index)
+    exp_indexer = np.array([0, 2, 4])
+    tm.assert_numpy_array_equal(indexer, exp_indexer, check_dtype=False)
+
+    assert target2.equals(exp_index2)
+    exp_indexer2 = np.array([0, -1, 0, -1, 0, -1])
+    tm.assert_numpy_array_equal(indexer2, exp_indexer2, check_dtype=False)
+
+    tm.assert_raises_regex(TypeError, "Fill method not supported",
+                           idx.reindex, idx,
+                           method='pad', level='second')
+
+    tm.assert_raises_regex(TypeError, "Fill method not supported",
+                           index.reindex, index, method='bfill',
+                           level='first')
+
+
+def test_reindex_preserves_names_when_target_is_list_or_ndarray(idx):
+    # GH6552
+    idx = idx.copy()
+    target = idx.copy()
+    idx.names = target.names = [None, None]
+
+    other_dtype = pd.MultiIndex.from_product([[1, 2], [3, 4]])
+
+    # list & ndarray cases
+    assert idx.reindex([])[0].names == [None, None]
+    assert idx.reindex(np.array([]))[0].names == [None, None]
+    assert idx.reindex(target.tolist())[0].names == [None, None]
+    assert idx.reindex(target.values)[0].names == [None, None]
+    assert idx.reindex(other_dtype.tolist())[0].names == [None, None]
+    assert idx.reindex(other_dtype.values)[0].names == [None, None]
+
+    idx.names = ['foo', 'bar']
+    assert idx.reindex([])[0].names == ['foo', 'bar']
+    assert idx.reindex(np.array([]))[0].names == ['foo', 'bar']
+    assert idx.reindex(target.tolist())[0].names == ['foo', 'bar']
+    assert idx.reindex(target.values)[0].names == ['foo', 'bar']
+    assert idx.reindex(other_dtype.tolist())[0].names == ['foo', 'bar']
+    assert idx.reindex(other_dtype.values)[0].names == ['foo', 'bar']
+
+
+def test_reindex_lvl_preserves_names_when_target_is_list_or_array():
+    # GH7774
+    idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']],
+                                     names=['foo', 'bar'])
+    assert idx.reindex([], level=0)[0].names == ['foo', 'bar']
+    assert idx.reindex([], level=1)[0].names == ['foo', 'bar']
+
+
+def test_reindex_lvl_preserves_type_if_target_is_empty_list_or_array():
+    # GH7774
+    idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']])
+    assert idx.reindex([], level=0)[0].levels[0].dtype.type == np.int64
+    assert idx.reindex([], level=1)[0].levels[1].dtype.type == np.object_
+
+
+def test_reindex_base(idx):
+    idx = idx
+    expected = np.arange(idx.size, dtype=np.intp)
+
+    actual = idx.get_indexer(idx)
+    tm.assert_numpy_array_equal(expected, actual)
+
+    with tm.assert_raises_regex(ValueError, 'Invalid fill method'):
+        idx.get_indexer(idx, method='invalid')
diff --git a/pandas/tests/indexes/multi/test_set_ops.py b/pandas/tests/indexes/multi/test_set_ops.py
new file mode 100644
index 0000000000000..79a3837aac7f8
--- /dev/null
+++ b/pandas/tests/indexes/multi/test_set_ops.py
@@ -0,0 +1,269 @@
+# -*- coding: utf-8 -*-
+
+
+import numpy as np
+import pandas as pd
+import pandas.util.testing as tm
+from pandas import (CategoricalIndex, DatetimeIndex, MultiIndex, PeriodIndex,
+                    Series, TimedeltaIndex)
+
+
+def test_setops_errorcases(idx):
+    # # non-iterable input
+    cases = [0.5, 'xxx']
+    methods = [idx.intersection, idx.union, idx.difference,
+               idx.symmetric_difference]
+
+    for method in methods:
+        for case in cases:
+            tm.assert_raises_regex(TypeError,
+                                   "Input must be Index "
+                                   "or array-like",
+                                   method, case)
+
+
+def test_intersection_base(idx):
+    first = idx[:5]
+    second = idx[:3]
+    intersect = first.intersection(second)
+
+    if isinstance(idx, CategoricalIndex):
+        pass
+    else:
+        assert tm.equalContents(intersect, second)
+
+    # GH 10149
+    cases = [klass(second.values)
+             for klass in [np.array, Series, list]]
+    for case in cases:
+        if isinstance(idx, PeriodIndex):
+            msg = "can only call with other PeriodIndex-ed objects"
+            with tm.assert_raises_regex(ValueError, msg):
+                result = first.intersection(case)
+        elif isinstance(idx, CategoricalIndex):
+            pass
+        else:
+            result = first.intersection(case)
+            assert tm.equalContents(result, second)
+
+    if isinstance(idx, MultiIndex):
+        msg = "other must be a MultiIndex or a list of tuples"
+        with tm.assert_raises_regex(TypeError, msg):
+            result = first.intersection([1, 2, 3])
+
+
+def test_union_base(idx):
+    first = idx[3:]
+    second = idx[:5]
+    everything = idx
+    union = first.union(second)
+    assert tm.equalContents(union, everything)
+
+    # GH 10149
+    cases = [klass(second.values)
+             for klass in [np.array, Series, list]]
+    for case in cases:
+        if isinstance(idx, PeriodIndex):
+            msg = "can only call with other PeriodIndex-ed objects"
+            with tm.assert_raises_regex(ValueError, msg):
+                result = first.union(case)
+        elif isinstance(idx, CategoricalIndex):
+            pass
+        else:
+            result = first.union(case)
+            assert tm.equalContents(result, everything)
+
+    if isinstance(idx, MultiIndex):
+        msg = "other must be a MultiIndex or a list of tuples"
+        with tm.assert_raises_regex(TypeError, msg):
+            result = first.union([1, 2, 3])
+
+
+def test_difference_base(idx):
+    first = idx[2:]
+    second = idx[:4]
+    answer = idx[4:]
+    result = first.difference(second)
+
+    if isinstance(idx, CategoricalIndex):
+        pass
+    else:
+        assert tm.equalContents(result, answer)
+
+    # GH 10149
+    cases = [klass(second.values)
+             for klass in [np.array, Series, list]]
+    for case in cases:
+        if isinstance(idx, PeriodIndex):
+            msg = "can only call with other PeriodIndex-ed objects"
+            with tm.assert_raises_regex(ValueError, msg):
+                result = first.difference(case)
+        elif isinstance(idx, CategoricalIndex):
+            pass
+        elif isinstance(idx, (DatetimeIndex, TimedeltaIndex)):
+            assert result.__class__ == answer.__class__
+            tm.assert_numpy_array_equal(result.sort_values().asi8,
+                                        answer.sort_values().asi8)
+        else:
+            result = first.difference(case)
+            assert tm.equalContents(result, answer)
+
+    if isinstance(idx, MultiIndex):
+        msg = "other must be a MultiIndex or a list of tuples"
+        with tm.assert_raises_regex(TypeError, msg):
+            result = first.difference([1, 2, 3])
+
+
+def test_symmetric_difference(idx):
+    first = idx[1:]
+    second = idx[:-1]
+    if isinstance(idx, CategoricalIndex):
+        pass
+    else:
+        answer = idx[[0, -1]]
+        result = first.symmetric_difference(second)
+        assert tm.equalContents(result, answer)
+
+    # GH 10149
+    cases = [klass(second.values)
+             for klass in [np.array, Series, list]]
+    for case in cases:
+        if isinstance(idx, PeriodIndex):
+            msg = "can only call with other PeriodIndex-ed objects"
+            with tm.assert_raises_regex(ValueError, msg):
+                result = first.symmetric_difference(case)
+        elif isinstance(idx, CategoricalIndex):
+            pass
+        else:
+            result = first.symmetric_difference(case)
+            assert tm.equalContents(result, answer)
+
+    if isinstance(idx, MultiIndex):
+        msg = "other must be a MultiIndex or a list of tuples"
+        with tm.assert_raises_regex(TypeError, msg):
+            first.symmetric_difference([1, 2, 3])
+
+
+def test_empty(idx):
+    # GH 15270
+    assert not idx.empty
+    assert idx[:0].empty
+
+
+def test_difference(idx):
+
+    first = idx
+    result = first.difference(idx[-3:])
+    expected = MultiIndex.from_tuples(sorted(idx[:-3].values),
+                                      sortorder=0,
+                                      names=idx.names)
+
+    assert isinstance(result, MultiIndex)
+    assert result.equals(expected)
+    assert result.names == idx.names
+
+    # empty difference: reflexive
+    result = idx.difference(idx)
+    expected = idx[:0]
+    assert result.equals(expected)
+    assert result.names == idx.names
+
+    # empty difference: superset
+    result = idx[-3:].difference(idx)
+    expected = idx[:0]
+    assert result.equals(expected)
+    assert result.names == idx.names
+
+    # empty difference: degenerate
+    result = idx[:0].difference(idx)
+    expected = idx[:0]
+    assert result.equals(expected)
+    assert result.names == idx.names
+
+    # names not the same
+    chunklet = idx[-3:]
+    chunklet.names = ['foo', 'baz']
+    result = first.difference(chunklet)
+    assert result.names == (None, None)
+
+    # empty, but non-equal
+    result = idx.difference(idx.sortlevel(1)[0])
+    assert len(result) == 0
+
+    # raise Exception called with non-MultiIndex
+    result = first.difference(first.values)
+    assert result.equals(first[:0])
+
+    # name from empty array
+    result = first.difference([])
+    assert first.equals(result)
+    assert first.names == result.names
+
+    # name from non-empty array
+    result = first.difference([('foo', 'one')])
+    expected = pd.MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'), (
+        'foo', 'two'), ('qux', 'one'), ('qux', 'two')])
+    expected.names = first.names
+    assert first.names == result.names
+    tm.assert_raises_regex(TypeError, "other must be a MultiIndex "
+                           "or a list of tuples",
+                           first.difference, [1, 2, 3, 4, 5])
+
+
+def test_union(idx):
+    piece1 = idx[:5][::-1]
+    piece2 = idx[3:]
+
+    the_union = piece1 | piece2
+
+    tups = sorted(idx.values)
+    expected = MultiIndex.from_tuples(tups)
+
+    assert the_union.equals(expected)
+
+    # corner case, pass self or empty thing:
+    the_union = idx.union(idx)
+    assert the_union is idx
+
+    the_union = idx.union(idx[:0])
+    assert the_union is idx
+
+    # won't work in python 3
+    # tuples = _index.values
+    # result = _index[:4] | tuples[4:]
+    # assert result.equals(tuples)
+
+    # not valid for python 3
+    # def test_union_with_regular_index(self):
+    #     other = Index(['A', 'B', 'C'])
+
+    #     result = other.union(idx)
+    #     assert ('foo', 'one') in result
+    #     assert 'B' in result
+
+    #     result2 = _index.union(other)
+    #     assert result.equals(result2)
+
+
+def test_intersection(idx):
+    piece1 = idx[:5][::-1]
+    piece2 = idx[3:]
+
+    the_int = piece1 & piece2
+    tups = sorted(idx[3:5].values)
+    expected = MultiIndex.from_tuples(tups)
+    assert the_int.equals(expected)
+
+    # corner case, pass self
+    the_int = idx.intersection(idx)
+    assert the_int is idx
+
+    # empty intersection: disjoint
+    empty = idx[:2] & idx[2:]
+    expected = idx[:0]
+    assert empty.equals(expected)
+
+    # can't do in python 3
+    # tuples = _index.values
+    # result = _index & tuples
+    # assert result.equals(tuples)
diff --git a/pandas/tests/indexes/multi/test_sorting.py b/pandas/tests/indexes/multi/test_sorting.py
new file mode 100644
index 0000000000000..d6165c17c6717
--- /dev/null
+++ b/pandas/tests/indexes/multi/test_sorting.py
@@ -0,0 +1,256 @@
+# -*- coding: utf-8 -*-
+import numpy as np
+import pandas as pd
+import pandas.util.testing as tm
+import pytest
+from pandas import CategoricalIndex, DataFrame, Index, MultiIndex, RangeIndex
+from pandas.compat import lrange
+from pandas.errors import PerformanceWarning, UnsortedIndexError
+
+
+def test_sortlevel(idx):
+    import random
+
+    tuples = list(idx)
+    random.shuffle(tuples)
+
+    index = MultiIndex.from_tuples(tuples)
+
+    sorted_idx, _ = index.sortlevel(0)
+    expected = MultiIndex.from_tuples(sorted(tuples))
+    assert sorted_idx.equals(expected)
+
+    sorted_idx, _ = index.sortlevel(0, ascending=False)
+    assert sorted_idx.equals(expected[::-1])
+
+    sorted_idx, _ = index.sortlevel(1)
+    by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
+    expected = MultiIndex.from_tuples(by1)
+    assert sorted_idx.equals(expected)
+
+    sorted_idx, _ = index.sortlevel(1, ascending=False)
+    assert sorted_idx.equals(expected[::-1])
+
+
+def test_sortlevel_not_sort_remaining():
+    mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC'))
+    sorted_idx, _ = mi.sortlevel('A', sort_remaining=False)
+    assert sorted_idx.equals(mi)
+
+
+def test_sortlevel_deterministic():
+    tuples = [('bar', 'one'), ('foo', 'two'), ('qux', 'two'),
+              ('foo', 'one'), ('baz', 'two'), ('qux', 'one')]
+
+    index = MultiIndex.from_tuples(tuples)
+
+    sorted_idx, _ = index.sortlevel(0)
+    expected = MultiIndex.from_tuples(sorted(tuples))
+    assert sorted_idx.equals(expected)
+
+    sorted_idx, _ = index.sortlevel(0, ascending=False)
+    assert sorted_idx.equals(expected[::-1])
+
+    sorted_idx, _ = index.sortlevel(1)
+    by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
+    expected = MultiIndex.from_tuples(by1)
+    assert sorted_idx.equals(expected)
+
+    sorted_idx, _ = index.sortlevel(1, ascending=False)
+    assert sorted_idx.equals(expected[::-1])
+
+
+def test_sort(indices):
+    pytest.raises(TypeError, indices.sort)
+
+
+def test_numpy_argsort(idx):
+    result = np.argsort(idx)
+    expected = idx.argsort()
+    tm.assert_numpy_array_equal(result, expected)
+
+    # these are the only two types that perform
+    # pandas compatibility input validation - the
+    # rest already perform separate (or no) such
+    # validation via their 'values' attribute as
+    # defined in pandas.core.indexes/base.py - they
+    # cannot be changed at the moment due to
+    # backwards compatibility concerns
+    if isinstance(type(idx), (CategoricalIndex, RangeIndex)):
+        msg = "the 'axis' parameter is not supported"
+        tm.assert_raises_regex(ValueError, msg,
+                               np.argsort, idx, axis=1)
+
+        msg = "the 'kind' parameter is not supported"
+        tm.assert_raises_regex(ValueError, msg, np.argsort,
+                               idx, kind='mergesort')
+
+        msg = "the 'order' parameter is not supported"
+        tm.assert_raises_regex(ValueError, msg, np.argsort,
+                               idx, order=('a', 'b'))
+
+
+def test_unsortedindex():
+    # GH 11897
+    mi = pd.MultiIndex.from_tuples([('z', 'a'), ('x', 'a'), ('y', 'b'),
+                                    ('x', 'b'), ('y', 'a'), ('z', 'b')],
+                                   names=['one', 'two'])
+    df = pd.DataFrame([[i, 10 * i] for i in lrange(6)], index=mi,
+                      columns=['one', 'two'])
+
+    # GH 16734: not sorted, but no real slicing
+    result = df.loc(axis=0)['z', 'a']
+    expected = df.iloc[0]
+    tm.assert_series_equal(result, expected)
+
+    with pytest.raises(UnsortedIndexError):
+        df.loc(axis=0)['z', slice('a')]
+    df.sort_index(inplace=True)
+    assert len(df.loc(axis=0)['z', :]) == 2
+
+    with pytest.raises(KeyError):
+        df.loc(axis=0)['q', :]
+
+
+def test_unsortedindex_doc_examples():
+    # http://pandas.pydata.org/pandas-docs/stable/advanced.html#sorting-a-multiindex  # noqa
+    dfm = DataFrame({'jim': [0, 0, 1, 1],
+                     'joe': ['x', 'x', 'z', 'y'],
+                     'jolie': np.random.rand(4)})
+
+    dfm = dfm.set_index(['jim', 'joe'])
+    with tm.assert_produces_warning(PerformanceWarning):
+        dfm.loc[(1, 'z')]
+
+    with pytest.raises(UnsortedIndexError):
+        dfm.loc[(0, 'y'):(1, 'z')]
+
+    assert not dfm.index.is_lexsorted()
+    assert dfm.index.lexsort_depth == 1
+
+    # sort it
+    dfm = dfm.sort_index()
+    dfm.loc[(1, 'z')]
+    dfm.loc[(0, 'y'):(1, 'z')]
+
+    assert dfm.index.is_lexsorted()
+    assert dfm.index.lexsort_depth == 2
+
+
+def test_reconstruct_sort():
+
+    # starts off lexsorted & monotonic
+    mi = MultiIndex.from_arrays([
+        ['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3]
+    ])
+    assert mi.is_lexsorted()
+    assert mi.is_monotonic
+
+    recons = mi._sort_levels_monotonic()
+    assert recons.is_lexsorted()
+    assert recons.is_monotonic
+    assert mi is recons
+
+    assert mi.equals(recons)
+    assert Index(mi.values).equals(Index(recons.values))
+
+    # cannot convert to lexsorted
+    mi = pd.MultiIndex.from_tuples([('z', 'a'), ('x', 'a'), ('y', 'b'),
+                                    ('x', 'b'), ('y', 'a'), ('z', 'b')],
+                                   names=['one', 'two'])
+    assert not mi.is_lexsorted()
+    assert not mi.is_monotonic
+
+    recons = mi._sort_levels_monotonic()
+    assert not recons.is_lexsorted()
+    assert not recons.is_monotonic
+
+    assert mi.equals(recons)
+    assert Index(mi.values).equals(Index(recons.values))
+
+    # cannot convert to lexsorted
+    mi = MultiIndex(levels=[['b', 'd', 'a'], [1, 2, 3]],
+                    labels=[[0, 1, 0, 2], [2, 0, 0, 1]],
+                    names=['col1', 'col2'])
+    assert not mi.is_lexsorted()
+    assert not mi.is_monotonic
+
+    recons = mi._sort_levels_monotonic()
+    assert not recons.is_lexsorted()
+    assert not recons.is_monotonic
+
+    assert mi.equals(recons)
+    assert Index(mi.values).equals(Index(recons.values))
+
+
+def test_reconstruct_remove_unused():
+    # xref to GH 2770
+    df = DataFrame([['deleteMe', 1, 9],
+                    ['keepMe', 2, 9],
+                    ['keepMeToo', 3, 9]],
+                   columns=['first', 'second', 'third'])
+    df2 = df.set_index(['first', 'second'], drop=False)
+    df2 = df2[df2['first'] != 'deleteMe']
+
+    # removed levels are there
+    expected = MultiIndex(levels=[['deleteMe', 'keepMe', 'keepMeToo'],
+                                  [1, 2, 3]],
+                          labels=[[1, 2], [1, 2]],
+                          names=['first', 'second'])
+    result = df2.index
+    tm.assert_index_equal(result, expected)
+
+    expected = MultiIndex(levels=[['keepMe', 'keepMeToo'],
+                                  [2, 3]],
+                          labels=[[0, 1], [0, 1]],
+                          names=['first', 'second'])
+    result = df2.index.remove_unused_levels()
+    tm.assert_index_equal(result, expected)
+
+    # idempotent
+    result2 = result.remove_unused_levels()
+    tm.assert_index_equal(result2, expected)
+    assert result2.is_(result)
+
+
+@pytest.mark.parametrize('first_type,second_type', [
+    ('int64', 'int64'),
+    ('datetime64[D]', 'str')])
+def test_remove_unused_levels_large(first_type, second_type):
+    # GH16556
+
+    # because tests should be deterministic (and this test in particular
+    # checks that levels are removed, which is not the case for every
+    # random input):
+    rng = np.random.RandomState(4)  # seed is arbitrary value that works
+
+    size = 1 << 16
+    df = DataFrame(dict(
+        first=rng.randint(0, 1 << 13, size).astype(first_type),
+        second=rng.randint(0, 1 << 10, size).astype(second_type),
+        third=rng.rand(size)))
+    df = df.groupby(['first', 'second']).sum()
+    df = df[df.third < 0.1]
+
+    result =
df.index.remove_unused_levels() + assert len(result.levels[0]) < len(df.index.levels[0]) + assert len(result.levels[1]) < len(df.index.levels[1]) + assert result.equals(df.index) + + expected = df.reset_index().set_index(['first', 'second']).index + tm.assert_index_equal(result, expected) + + +@pytest.mark.parametrize('level0', [['a', 'd', 'b'], + ['a', 'd', 'b', 'unused']]) +@pytest.mark.parametrize('level1', [['w', 'x', 'y', 'z'], + ['w', 'x', 'y', 'z', 'unused']]) +def test_remove_unused_nan(level0, level1): + # GH 18417 + mi = pd.MultiIndex(levels=[level0, level1], + labels=[[0, 2, -1, 1, -1], [0, 1, 2, 3, 2]]) + + result = mi.remove_unused_levels() + tm.assert_index_equal(result, mi) + for level in 0, 1: + assert('unused' not in result.levels[level]) diff --git a/pandas/tests/indexes/multi/test_unique_and_duplicates.py b/pandas/tests/indexes/multi/test_unique_and_duplicates.py new file mode 100644 index 0000000000000..a97d84ace9602 --- /dev/null +++ b/pandas/tests/indexes/multi/test_unique_and_duplicates.py @@ -0,0 +1,259 @@ +# -*- coding: utf-8 -*- + +import warnings +from itertools import product + +import numpy as np +import pandas as pd +import pandas.util.testing as tm +import pytest +from pandas import MultiIndex +from pandas.compat import range, u + + +@pytest.mark.parametrize('names', [None, ['first', 'second']]) +def test_unique(names): + mi = pd.MultiIndex.from_arrays([[1, 2, 1, 2], [1, 1, 1, 2]], + names=names) + + res = mi.unique() + exp = pd.MultiIndex.from_arrays([[1, 2, 2], [1, 1, 2]], names=mi.names) + tm.assert_index_equal(res, exp) + + mi = pd.MultiIndex.from_arrays([list('aaaa'), list('abab')], + names=names) + res = mi.unique() + exp = pd.MultiIndex.from_arrays([list('aa'), list('ab')], + names=mi.names) + tm.assert_index_equal(res, exp) + + mi = pd.MultiIndex.from_arrays([list('aaaa'), list('aaaa')], + names=names) + res = mi.unique() + exp = pd.MultiIndex.from_arrays([['a'], ['a']], names=mi.names) + tm.assert_index_equal(res, exp) + + # 
GH #20568 - empty MI + mi = pd.MultiIndex.from_arrays([[], []], names=names) + res = mi.unique() + tm.assert_index_equal(mi, res) + + +def test_unique_datetimelike(): + idx1 = pd.DatetimeIndex(['2015-01-01', '2015-01-01', '2015-01-01', + '2015-01-01', 'NaT', 'NaT']) + idx2 = pd.DatetimeIndex(['2015-01-01', '2015-01-01', '2015-01-02', + '2015-01-02', 'NaT', '2015-01-01'], + tz='Asia/Tokyo') + result = pd.MultiIndex.from_arrays([idx1, idx2]).unique() + + eidx1 = pd.DatetimeIndex(['2015-01-01', '2015-01-01', 'NaT', 'NaT']) + eidx2 = pd.DatetimeIndex(['2015-01-01', '2015-01-02', + 'NaT', '2015-01-01'], + tz='Asia/Tokyo') + exp = pd.MultiIndex.from_arrays([eidx1, eidx2]) + tm.assert_index_equal(result, exp) + + +@pytest.mark.parametrize('level', [0, 'first', 1, 'second']) +def test_unique_level(idx, level): + # GH #17896 - with level= argument + result = idx.unique(level=level) + expected = idx.get_level_values(level).unique() + tm.assert_index_equal(result, expected) + + # With already unique level + mi = pd.MultiIndex.from_arrays([[1, 3, 2, 4], [1, 3, 2, 5]], + names=['first', 'second']) + result = mi.unique(level=level) + expected = mi.get_level_values(level) + tm.assert_index_equal(result, expected) + + # With empty MI + mi = pd.MultiIndex.from_arrays([[], []], names=['first', 'second']) + result = mi.unique(level=level) + expected = mi.get_level_values(level) + tm.assert_index_equal(result, expected) + + +def test_duplicate_multiindex_labels(): + # GH 17464 + # Make sure that a MultiIndex with duplicate levels throws a ValueError + with pytest.raises(ValueError): + ind = pd.MultiIndex([['A'] * 10, range(10)], [[0] * 10, range(10)]) + + # And that using set_levels with duplicate levels fails + ind = MultiIndex.from_arrays([['A', 'A', 'B', 'B', 'B'], + [1, 2, 1, 2, 3]]) + with pytest.raises(ValueError): + ind.set_levels([['A', 'B', 'A', 'A', 'B'], [2, 1, 3, -2, 5]], + inplace=True) + + +@pytest.mark.parametrize('names', [['a', 'b', 'a'], [1, 1, 2], + [1, 'a', 1]]) +def test_duplicate_level_names(names): + #
GH18872, GH19029 + mi = pd.MultiIndex.from_product([[0, 1]] * 3, names=names) + assert mi.names == names + + # With .rename() + mi = pd.MultiIndex.from_product([[0, 1]] * 3) + mi = mi.rename(names) + assert mi.names == names + + # With .rename(., level=) + mi.rename(names[1], level=1, inplace=True) + mi = mi.rename([names[0], names[2]], level=[0, 2]) + assert mi.names == names + + +def test_duplicate_meta_data(): + # GH 10115 + index = MultiIndex( + levels=[[0, 1], [0, 1, 2]], + labels=[[0, 0, 0, 0, 1, 1, 1], + [0, 1, 2, 0, 0, 1, 2]]) + + for idx in [index, + index.set_names([None, None]), + index.set_names([None, 'Num']), + index.set_names(['Upper', 'Num']), ]: + assert idx.has_duplicates + assert idx.drop_duplicates().names == idx.names + + +def test_duplicates(idx): + assert not idx.has_duplicates + assert idx.append(idx).has_duplicates + + index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[ + [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]]) + assert index.has_duplicates + + # GH 9075 + t = [(u('x'), u('out'), u('z'), 5, u('y'), u('in'), u('z'), 169), + (u('x'), u('out'), u('z'), 7, u('y'), u('in'), u('z'), 119), + (u('x'), u('out'), u('z'), 9, u('y'), u('in'), u('z'), 135), + (u('x'), u('out'), u('z'), 13, u('y'), u('in'), u('z'), 145), + (u('x'), u('out'), u('z'), 14, u('y'), u('in'), u('z'), 158), + (u('x'), u('out'), u('z'), 16, u('y'), u('in'), u('z'), 122), + (u('x'), u('out'), u('z'), 17, u('y'), u('in'), u('z'), 160), + (u('x'), u('out'), u('z'), 18, u('y'), u('in'), u('z'), 180), + (u('x'), u('out'), u('z'), 20, u('y'), u('in'), u('z'), 143), + (u('x'), u('out'), u('z'), 21, u('y'), u('in'), u('z'), 128), + (u('x'), u('out'), u('z'), 22, u('y'), u('in'), u('z'), 129), + (u('x'), u('out'), u('z'), 25, u('y'), u('in'), u('z'), 111), + (u('x'), u('out'), u('z'), 28, u('y'), u('in'), u('z'), 114), + (u('x'), u('out'), u('z'), 29, u('y'), u('in'), u('z'), 121), + (u('x'), u('out'), u('z'), 31, u('y'), u('in'), u('z'), 126), + (u('x'), u('out'), u('z'), 32, 
u('y'), u('in'), u('z'), 155), + (u('x'), u('out'), u('z'), 33, u('y'), u('in'), u('z'), 123), + (u('x'), u('out'), u('z'), 12, u('y'), u('in'), u('z'), 144)] + + index = pd.MultiIndex.from_tuples(t) + assert not index.has_duplicates + + # handle int64 overflow if possible + def check(nlevels, with_nulls): + labels = np.tile(np.arange(500), 2) + level = np.arange(500) + + if with_nulls: # inject some null values + labels[500] = -1 # common nan value + labels = [labels.copy() for i in range(nlevels)] + for i in range(nlevels): + labels[i][500 + i - nlevels // 2] = -1 + + labels += [np.array([-1, 1]).repeat(500)] + else: + labels = [labels] * nlevels + [np.arange(2).repeat(500)] + + levels = [level] * nlevels + [[0, 1]] + + # no dups + index = MultiIndex(levels=levels, labels=labels) + assert not index.has_duplicates + + # with a dup + if with_nulls: + def f(a): + return np.insert(a, 1000, a[0]) + labels = list(map(f, labels)) + index = MultiIndex(levels=levels, labels=labels) + else: + values = index.values.tolist() + index = MultiIndex.from_tuples(values + [values[0]]) + + assert index.has_duplicates + + # no overflow + check(4, False) + check(4, True) + + # overflow possible + check(8, False) + check(8, True) + + # GH 9125 + n, k = 200, 5000 + levels = [np.arange(n), tm.makeStringIndex(n), 1000 + np.arange(n)] + labels = [np.random.choice(n, k * n) for lev in levels] + mi = MultiIndex(levels=levels, labels=labels) + + for keep in ['first', 'last', False]: + left = mi.duplicated(keep=keep) + right = pd._libs.hashtable.duplicated_object(mi.values, keep=keep) + tm.assert_numpy_array_equal(left, right) + + # GH5873 + for a in [101, 102]: + mi = MultiIndex.from_arrays([[101, a], [3.5, np.nan]]) + assert not mi.has_duplicates + + with warnings.catch_warnings(record=True): + # Deprecated - see GH20239 + assert mi.get_duplicates().equals(MultiIndex.from_arrays( + [[], []])) + + tm.assert_numpy_array_equal(mi.duplicated(), np.zeros( + 2, dtype='bool')) + + for n in 
range(1, 6): # 1st level shape + for m in range(1, 5): # 2nd level shape + # all possible unique combinations, including nan + lab = product(range(-1, n), range(-1, m)) + mi = MultiIndex(levels=[list('abcde')[:n], list('WXYZ')[:m]], + labels=np.random.permutation(list(lab)).T) + assert len(mi) == (n + 1) * (m + 1) + assert not mi.has_duplicates + + with warnings.catch_warnings(record=True): + # Deprecated - see GH20239 + assert mi.get_duplicates().equals(MultiIndex.from_arrays( + [[], []])) + + tm.assert_numpy_array_equal(mi.duplicated(), np.zeros( + len(mi), dtype='bool')) + + +def test_get_unique_index(idx): + idx = idx[[0, 1, 0, 1, 1, 0, 0]] + expected = idx._shallow_copy(idx[[0, 1]]) + + for dropna in [False, True]: + result = idx._get_unique_index(dropna=dropna) + assert result.is_unique + tm.assert_index_equal(result, expected) + + +def test_unique_na(): + idx = pd.Index([2, np.nan, 2, 1], name='my_index') + expected = pd.Index([2, np.nan, 1], name='my_index') + result = idx.unique() + tm.assert_index_equal(result, expected) + + +def test_duplicate_level_names_access_raises(idx): + idx.names = ['foo', 'foo'] + tm.assert_raises_regex(KeyError, 'Level foo not found', + idx._get_level_number, 'foo') diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py deleted file mode 100644 index b1fb5f01862ae..0000000000000 --- a/pandas/tests/indexes/test_multi.py +++ /dev/null @@ -1,3342 +0,0 @@ -# -*- coding: utf-8 -*- - -import re -import warnings - -from datetime import timedelta -from itertools import product - -import pytest - -import numpy as np - -import pandas as pd - -from pandas import (CategoricalIndex, Categorical, DataFrame, Index, - MultiIndex, compat, date_range, period_range) -from pandas.compat import PY3, long, lrange, lzip, range, u, PYPY -from pandas.errors import PerformanceWarning, UnsortedIndexError -from pandas.core.dtypes.dtypes import CategoricalDtype -from pandas.core.indexes.base import InvalidIndexError -from
pandas.core.dtypes.cast import construct_1d_object_array_from_listlike -from pandas._libs.tslib import Timestamp - -import pandas.util.testing as tm - -from pandas.util.testing import assert_almost_equal, assert_copy - -from .common import Base - - -class TestMultiIndex(Base): - _holder = MultiIndex - _compat_props = ['shape', 'ndim', 'size'] - - def setup_method(self, method): - major_axis = Index(['foo', 'bar', 'baz', 'qux']) - minor_axis = Index(['one', 'two']) - - major_labels = np.array([0, 0, 1, 2, 3, 3]) - minor_labels = np.array([0, 1, 0, 1, 0, 1]) - self.index_names = ['first', 'second'] - self.indices = dict(index=MultiIndex(levels=[major_axis, minor_axis], - labels=[major_labels, minor_labels - ], names=self.index_names, - verify_integrity=False)) - self.setup_indices() - - def create_index(self): - return self.index - - def test_can_hold_identifiers(self): - idx = self.create_index() - key = idx[0] - assert idx._can_hold_identifiers_and_holds_name(key) is True - - def test_boolean_context_compat2(self): - - # boolean context compat - # GH7897 - i1 = MultiIndex.from_tuples([('A', 1), ('A', 2)]) - i2 = MultiIndex.from_tuples([('A', 1), ('A', 3)]) - common = i1.intersection(i2) - - def f(): - if common: - pass - - tm.assert_raises_regex(ValueError, 'The truth value of a', f) - - def test_labels_dtypes(self): - - # GH 8456 - i = MultiIndex.from_tuples([('A', 1), ('A', 2)]) - assert i.labels[0].dtype == 'int8' - assert i.labels[1].dtype == 'int8' - - i = MultiIndex.from_product([['a'], range(40)]) - assert i.labels[1].dtype == 'int8' - i = MultiIndex.from_product([['a'], range(400)]) - assert i.labels[1].dtype == 'int16' - i = MultiIndex.from_product([['a'], range(40000)]) - assert i.labels[1].dtype == 'int32' - - i = pd.MultiIndex.from_product([['a'], range(1000)]) - assert (i.labels[0] >= 0).all() - assert (i.labels[1] >= 0).all() - - def test_where(self): - i = MultiIndex.from_tuples([('A', 1), ('A', 2)]) - - def f(): - i.where(True) - - 
pytest.raises(NotImplementedError, f) - - def test_where_array_like(self): - i = MultiIndex.from_tuples([('A', 1), ('A', 2)]) - klasses = [list, tuple, np.array, pd.Series] - cond = [False, True] - - for klass in klasses: - def f(): - return i.where(klass(cond)) - pytest.raises(NotImplementedError, f) - - def test_repeat(self): - reps = 2 - numbers = [1, 2, 3] - names = np.array(['foo', 'bar']) - - m = MultiIndex.from_product([ - numbers, names], names=names) - expected = MultiIndex.from_product([ - numbers, names.repeat(reps)], names=names) - tm.assert_index_equal(m.repeat(reps), expected) - - with tm.assert_produces_warning(FutureWarning): - result = m.repeat(n=reps) - tm.assert_index_equal(result, expected) - - def test_numpy_repeat(self): - reps = 2 - numbers = [1, 2, 3] - names = np.array(['foo', 'bar']) - - m = MultiIndex.from_product([ - numbers, names], names=names) - expected = MultiIndex.from_product([ - numbers, names.repeat(reps)], names=names) - tm.assert_index_equal(np.repeat(m, reps), expected) - - msg = "the 'axis' parameter is not supported" - tm.assert_raises_regex( - ValueError, msg, np.repeat, m, reps, axis=1) - - def test_set_name_methods(self): - # so long as these are synonyms, we don't need to test set_names - assert self.index.rename == self.index.set_names - new_names = [name + "SUFFIX" for name in self.index_names] - ind = self.index.set_names(new_names) - assert self.index.names == self.index_names - assert ind.names == new_names - with tm.assert_raises_regex(ValueError, "^Length"): - ind.set_names(new_names + new_names) - new_names2 = [name + "SUFFIX2" for name in new_names] - res = ind.set_names(new_names2, inplace=True) - assert res is None - assert ind.names == new_names2 - - # set names for specific level (# GH7792) - ind = self.index.set_names(new_names[0], level=0) - assert self.index.names == self.index_names - assert ind.names == [new_names[0], self.index_names[1]] - - res = ind.set_names(new_names2[0], level=0, inplace=True) - 
assert res is None - assert ind.names == [new_names2[0], self.index_names[1]] - - # set names for multiple levels - ind = self.index.set_names(new_names, level=[0, 1]) - assert self.index.names == self.index_names - assert ind.names == new_names - - res = ind.set_names(new_names2, level=[0, 1], inplace=True) - assert res is None - assert ind.names == new_names2 - - @pytest.mark.parametrize('inplace', [True, False]) - def test_set_names_with_nlevel_1(self, inplace): - # GH 21149 - # Ensure that .set_names for MultiIndex with - # nlevels == 1 does not raise any errors - expected = pd.MultiIndex(levels=[[0, 1]], - labels=[[0, 1]], - names=['first']) - m = pd.MultiIndex.from_product([[0, 1]]) - result = m.set_names('first', level=0, inplace=inplace) - - if inplace: - result = m - - tm.assert_index_equal(result, expected) - - def test_set_levels_labels_directly(self): - # setting levels/labels directly raises AttributeError - - levels = self.index.levels - new_levels = [[lev + 'a' for lev in level] for level in levels] - - labels = self.index.labels - major_labels, minor_labels = labels - major_labels = [(x + 1) % 3 for x in major_labels] - minor_labels = [(x + 1) % 1 for x in minor_labels] - new_labels = [major_labels, minor_labels] - - with pytest.raises(AttributeError): - self.index.levels = new_levels - - with pytest.raises(AttributeError): - self.index.labels = new_labels - - def test_set_levels(self): - # side note - you probably wouldn't want to use levels and labels - # directly like this - but it is possible. 
- levels = self.index.levels - new_levels = [[lev + 'a' for lev in level] for level in levels] - - def assert_matching(actual, expected, check_dtype=False): - # avoid specifying internal representation - # as much as possible - assert len(actual) == len(expected) - for act, exp in zip(actual, expected): - act = np.asarray(act) - exp = np.asarray(exp) - tm.assert_numpy_array_equal(act, exp, check_dtype=check_dtype) - - # level changing [w/o mutation] - ind2 = self.index.set_levels(new_levels) - assert_matching(ind2.levels, new_levels) - assert_matching(self.index.levels, levels) - - # level changing [w/ mutation] - ind2 = self.index.copy() - inplace_return = ind2.set_levels(new_levels, inplace=True) - assert inplace_return is None - assert_matching(ind2.levels, new_levels) - - # level changing specific level [w/o mutation] - ind2 = self.index.set_levels(new_levels[0], level=0) - assert_matching(ind2.levels, [new_levels[0], levels[1]]) - assert_matching(self.index.levels, levels) - - ind2 = self.index.set_levels(new_levels[1], level=1) - assert_matching(ind2.levels, [levels[0], new_levels[1]]) - assert_matching(self.index.levels, levels) - - # level changing multiple levels [w/o mutation] - ind2 = self.index.set_levels(new_levels, level=[0, 1]) - assert_matching(ind2.levels, new_levels) - assert_matching(self.index.levels, levels) - - # level changing specific level [w/ mutation] - ind2 = self.index.copy() - inplace_return = ind2.set_levels(new_levels[0], level=0, inplace=True) - assert inplace_return is None - assert_matching(ind2.levels, [new_levels[0], levels[1]]) - assert_matching(self.index.levels, levels) - - ind2 = self.index.copy() - inplace_return = ind2.set_levels(new_levels[1], level=1, inplace=True) - assert inplace_return is None - assert_matching(ind2.levels, [levels[0], new_levels[1]]) - assert_matching(self.index.levels, levels) - - # level changing multiple levels [w/ mutation] - ind2 = self.index.copy() - inplace_return = ind2.set_levels(new_levels, 
level=[0, 1], - inplace=True) - assert inplace_return is None - assert_matching(ind2.levels, new_levels) - assert_matching(self.index.levels, levels) - - # illegal level changing should not change levels - # GH 13754 - original_index = self.index.copy() - for inplace in [True, False]: - with tm.assert_raises_regex(ValueError, "^On"): - self.index.set_levels(['c'], level=0, inplace=inplace) - assert_matching(self.index.levels, original_index.levels, - check_dtype=True) - - with tm.assert_raises_regex(ValueError, "^On"): - self.index.set_labels([0, 1, 2, 3, 4, 5], level=0, - inplace=inplace) - assert_matching(self.index.labels, original_index.labels, - check_dtype=True) - - with tm.assert_raises_regex(TypeError, "^Levels"): - self.index.set_levels('c', level=0, inplace=inplace) - assert_matching(self.index.levels, original_index.levels, - check_dtype=True) - - with tm.assert_raises_regex(TypeError, "^Labels"): - self.index.set_labels(1, level=0, inplace=inplace) - assert_matching(self.index.labels, original_index.labels, - check_dtype=True) - - def test_set_labels(self): - # side note - you probably wouldn't want to use levels and labels - # directly like this - but it is possible. 
- labels = self.index.labels - major_labels, minor_labels = labels - major_labels = [(x + 1) % 3 for x in major_labels] - minor_labels = [(x + 1) % 1 for x in minor_labels] - new_labels = [major_labels, minor_labels] - - def assert_matching(actual, expected): - # avoid specifying internal representation - # as much as possible - assert len(actual) == len(expected) - for act, exp in zip(actual, expected): - act = np.asarray(act) - exp = np.asarray(exp, dtype=np.int8) - tm.assert_numpy_array_equal(act, exp) - - # label changing [w/o mutation] - ind2 = self.index.set_labels(new_labels) - assert_matching(ind2.labels, new_labels) - assert_matching(self.index.labels, labels) - - # label changing [w/ mutation] - ind2 = self.index.copy() - inplace_return = ind2.set_labels(new_labels, inplace=True) - assert inplace_return is None - assert_matching(ind2.labels, new_labels) - - # label changing specific level [w/o mutation] - ind2 = self.index.set_labels(new_labels[0], level=0) - assert_matching(ind2.labels, [new_labels[0], labels[1]]) - assert_matching(self.index.labels, labels) - - ind2 = self.index.set_labels(new_labels[1], level=1) - assert_matching(ind2.labels, [labels[0], new_labels[1]]) - assert_matching(self.index.labels, labels) - - # label changing multiple levels [w/o mutation] - ind2 = self.index.set_labels(new_labels, level=[0, 1]) - assert_matching(ind2.labels, new_labels) - assert_matching(self.index.labels, labels) - - # label changing specific level [w/ mutation] - ind2 = self.index.copy() - inplace_return = ind2.set_labels(new_labels[0], level=0, inplace=True) - assert inplace_return is None - assert_matching(ind2.labels, [new_labels[0], labels[1]]) - assert_matching(self.index.labels, labels) - - ind2 = self.index.copy() - inplace_return = ind2.set_labels(new_labels[1], level=1, inplace=True) - assert inplace_return is None - assert_matching(ind2.labels, [labels[0], new_labels[1]]) - assert_matching(self.index.labels, labels) - - # label changing multiple 
levels [w/ mutation] - ind2 = self.index.copy() - inplace_return = ind2.set_labels(new_labels, level=[0, 1], - inplace=True) - assert inplace_return is None - assert_matching(ind2.labels, new_labels) - assert_matching(self.index.labels, labels) - - # label changing for levels of different magnitude of categories - ind = pd.MultiIndex.from_tuples([(0, i) for i in range(130)]) - new_labels = range(129, -1, -1) - expected = pd.MultiIndex.from_tuples( - [(0, i) for i in new_labels]) - - # [w/o mutation] - result = ind.set_labels(labels=new_labels, level=1) - assert result.equals(expected) - - # [w/ mutation] - result = ind.copy() - result.set_labels(labels=new_labels, level=1, inplace=True) - assert result.equals(expected) - - def test_set_levels_labels_names_bad_input(self): - levels, labels = self.index.levels, self.index.labels - names = self.index.names - - with tm.assert_raises_regex(ValueError, 'Length of levels'): - self.index.set_levels([levels[0]]) - - with tm.assert_raises_regex(ValueError, 'Length of labels'): - self.index.set_labels([labels[0]]) - - with tm.assert_raises_regex(ValueError, 'Length of names'): - self.index.set_names([names[0]]) - - # shouldn't scalar data error, instead should demand list-like - with tm.assert_raises_regex(TypeError, 'list of lists-like'): - self.index.set_levels(levels[0]) - - # shouldn't scalar data error, instead should demand list-like - with tm.assert_raises_regex(TypeError, 'list of lists-like'): - self.index.set_labels(labels[0]) - - # shouldn't scalar data error, instead should demand list-like - with tm.assert_raises_regex(TypeError, 'list-like'): - self.index.set_names(names[0]) - - # should have equal lengths - with tm.assert_raises_regex(TypeError, 'list of lists-like'): - self.index.set_levels(levels[0], level=[0, 1]) - - with tm.assert_raises_regex(TypeError, 'list-like'): - self.index.set_levels(levels, level=0) - - # should have equal lengths - with tm.assert_raises_regex(TypeError, 'list of lists-like'): - 
self.index.set_labels(labels[0], level=[0, 1]) - - with tm.assert_raises_regex(TypeError, 'list-like'): - self.index.set_labels(labels, level=0) - - # should have equal lengths - with tm.assert_raises_regex(ValueError, 'Length of names'): - self.index.set_names(names[0], level=[0, 1]) - - with tm.assert_raises_regex(TypeError, 'string'): - self.index.set_names(names, level=0) - - def test_set_levels_categorical(self): - # GH13854 - index = MultiIndex.from_arrays([list("xyzx"), [0, 1, 2, 3]]) - for ordered in [False, True]: - cidx = CategoricalIndex(list("bac"), ordered=ordered) - result = index.set_levels(cidx, 0) - expected = MultiIndex(levels=[cidx, [0, 1, 2, 3]], - labels=index.labels) - tm.assert_index_equal(result, expected) - - result_lvl = result.get_level_values(0) - expected_lvl = CategoricalIndex(list("bacb"), - categories=cidx.categories, - ordered=cidx.ordered) - tm.assert_index_equal(result_lvl, expected_lvl) - - def test_metadata_immutable(self): - levels, labels = self.index.levels, self.index.labels - # shouldn't be able to set at either the top level or base level - mutable_regex = re.compile('does not support mutable operations') - with tm.assert_raises_regex(TypeError, mutable_regex): - levels[0] = levels[0] - with tm.assert_raises_regex(TypeError, mutable_regex): - levels[0][0] = levels[0][0] - # ditto for labels - with tm.assert_raises_regex(TypeError, mutable_regex): - labels[0] = labels[0] - with tm.assert_raises_regex(TypeError, mutable_regex): - labels[0][0] = labels[0][0] - # and for names - names = self.index.names - with tm.assert_raises_regex(TypeError, mutable_regex): - names[0] = names[0] - - def test_inplace_mutation_resets_values(self): - levels = [['a', 'b', 'c'], [4]] - levels2 = [[1, 2, 3], ['a']] - labels = [[0, 1, 0, 2, 2, 0], [0, 0, 0, 0, 0, 0]] - - mi1 = MultiIndex(levels=levels, labels=labels) - mi2 = MultiIndex(levels=levels2, labels=labels) - vals = mi1.values.copy() - vals2 = mi2.values.copy() - - assert mi1._tuples is 
not None
-
-        # Make sure level setting works
-        new_vals = mi1.set_levels(levels2).values
-        tm.assert_almost_equal(vals2, new_vals)
-
-        # Non-inplace doesn't kill _tuples [implementation detail]
-        tm.assert_almost_equal(mi1._tuples, vals)
-
-        # ...and values is still same too
-        tm.assert_almost_equal(mi1.values, vals)
-
-        # Inplace should kill _tuples
-        mi1.set_levels(levels2, inplace=True)
-        tm.assert_almost_equal(mi1.values, vals2)
-
-        # Make sure label setting works too
-        labels2 = [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
-        exp_values = np.empty((6,), dtype=object)
-        exp_values[:] = [(long(1), 'a')] * 6
-
-        # Must be 1d array of tuples
-        assert exp_values.shape == (6,)
-        new_values = mi2.set_labels(labels2).values
-
-        # Not inplace shouldn't change
-        tm.assert_almost_equal(mi2._tuples, vals2)
-
-        # Should have correct values
-        tm.assert_almost_equal(exp_values, new_values)
-
-        # ...and again setting inplace should kill _tuples, etc
-        mi2.set_labels(labels2, inplace=True)
-        tm.assert_almost_equal(mi2.values, new_values)
-
-    def test_copy_in_constructor(self):
-        levels = np.array(["a", "b", "c"])
-        labels = np.array([1, 1, 2, 0, 0, 1, 1])
-        val = labels[0]
-        mi = MultiIndex(levels=[levels, levels], labels=[labels, labels],
-                        copy=True)
-        assert mi.labels[0][0] == val
-        labels[0] = 15
-        assert mi.labels[0][0] == val
-        val = levels[0]
-        levels[0] = "PANDA"
-        assert mi.levels[0][0] == val
-
-    def test_set_value_keeps_names(self):
-        # motivating example from #3742
-        lev1 = ['hans', 'hans', 'hans', 'grethe', 'grethe', 'grethe']
-        lev2 = ['1', '2', '3'] * 2
-        idx = pd.MultiIndex.from_arrays([lev1, lev2], names=['Name', 'Number'])
-        df = pd.DataFrame(
-            np.random.randn(6, 4),
-            columns=['one', 'two', 'three', 'four'],
-            index=idx)
-        df = df.sort_index()
-        assert df._is_copy is None
-        assert df.index.names == ('Name', 'Number')
-        df.at[('grethe', '4'), 'one'] = 99.34
-        assert df._is_copy is None
-        assert df.index.names == ('Name', 'Number')
-
-    def test_copy_names(self):
-        # Check that adding a "names" parameter to the copy is honored
-        # GH14302
-        multi_idx = pd.Index([(1, 2), (3, 4)], names=['MyName1', 'MyName2'])
-        multi_idx1 = multi_idx.copy()
-
-        assert multi_idx.equals(multi_idx1)
-        assert multi_idx.names == ['MyName1', 'MyName2']
-        assert multi_idx1.names == ['MyName1', 'MyName2']
-
-        multi_idx2 = multi_idx.copy(names=['NewName1', 'NewName2'])
-
-        assert multi_idx.equals(multi_idx2)
-        assert multi_idx.names == ['MyName1', 'MyName2']
-        assert multi_idx2.names == ['NewName1', 'NewName2']
-
-        multi_idx3 = multi_idx.copy(name=['NewName1', 'NewName2'])
-
-        assert multi_idx.equals(multi_idx3)
-        assert multi_idx.names == ['MyName1', 'MyName2']
-        assert multi_idx3.names == ['NewName1', 'NewName2']
-
-    def test_names(self):
-
-        # names are assigned in setup
-        names = self.index_names
-        level_names = [level.name for level in self.index.levels]
-        assert names == level_names
-
-        # setting bad names on existing
-        index = self.index
-        tm.assert_raises_regex(ValueError, "^Length of names",
-                               setattr, index, "names",
-                               list(index.names) + ["third"])
-        tm.assert_raises_regex(ValueError, "^Length of names",
-                               setattr, index, "names", [])
-
-        # initializing with bad names (should always be equivalent)
-        major_axis, minor_axis = self.index.levels
-        major_labels, minor_labels = self.index.labels
-        tm.assert_raises_regex(ValueError, "^Length of names", MultiIndex,
-                               levels=[major_axis, minor_axis],
-                               labels=[major_labels, minor_labels],
-                               names=['first'])
-        tm.assert_raises_regex(ValueError, "^Length of names", MultiIndex,
-                               levels=[major_axis, minor_axis],
-                               labels=[major_labels, minor_labels],
-                               names=['first', 'second', 'third'])
-
-        # names are assigned
-        index.names = ["a", "b"]
-        ind_names = list(index.names)
-        level_names = [level.name for level in index.levels]
-        assert ind_names == level_names
-
-    def test_astype(self):
-        expected = self.index.copy()
-        actual = self.index.astype('O')
-        assert_copy(actual.levels, expected.levels)
-        assert_copy(actual.labels, expected.labels)
-        self.check_level_names(actual, expected.names)
-
-        with tm.assert_raises_regex(TypeError, "^Setting.*dtype.*object"):
-            self.index.astype(np.dtype(int))
-
-    @pytest.mark.parametrize('ordered', [True, False])
-    def test_astype_category(self, ordered):
-        # GH 18630
-        msg = '> 1 ndim Categorical are not supported at this time'
-        with tm.assert_raises_regex(NotImplementedError, msg):
-            self.index.astype(CategoricalDtype(ordered=ordered))
-
-        if ordered is False:
-            # dtype='category' defaults to ordered=False, so only test once
-            with tm.assert_raises_regex(NotImplementedError, msg):
-                self.index.astype('category')
-
-    def test_constructor_single_level(self):
-        result = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']],
-                            labels=[[0, 1, 2, 3]], names=['first'])
-        assert isinstance(result, MultiIndex)
-        expected = Index(['foo', 'bar', 'baz', 'qux'], name='first')
-        tm.assert_index_equal(result.levels[0], expected)
-        assert result.names == ['first']
-
-    def test_constructor_no_levels(self):
-        tm.assert_raises_regex(ValueError, "non-zero number "
-                               "of levels/labels",
-                               MultiIndex, levels=[], labels=[])
-        both_re = re.compile('Must pass both levels and labels')
-        with tm.assert_raises_regex(TypeError, both_re):
-            MultiIndex(levels=[])
-        with tm.assert_raises_regex(TypeError, both_re):
-            MultiIndex(labels=[])
-
-    def test_constructor_mismatched_label_levels(self):
-        labels = [np.array([1]), np.array([2]), np.array([3])]
-        levels = ["a"]
-        tm.assert_raises_regex(ValueError, "Length of levels and labels "
-                               "must be the same", MultiIndex,
-                               levels=levels, labels=labels)
-        length_error = re.compile('>= length of level')
-        label_error = re.compile(r'Unequal label lengths: \[4, 2\]')
-
-        # important to check that it's looking at the right thing.
-        with tm.assert_raises_regex(ValueError, length_error):
-            MultiIndex(levels=[['a'], ['b']],
-                       labels=[[0, 1, 2, 3], [0, 3, 4, 1]])
-
-        with tm.assert_raises_regex(ValueError, label_error):
-            MultiIndex(levels=[['a'], ['b']], labels=[[0, 0, 0, 0], [0, 0]])
-
-        # external API
-        with tm.assert_raises_regex(ValueError, length_error):
-            self.index.copy().set_levels([['a'], ['b']])
-
-        with tm.assert_raises_regex(ValueError, label_error):
-            self.index.copy().set_labels([[0, 0, 0, 0], [0, 0]])
-
-    def test_constructor_nonhashable_names(self):
-        # GH 20527
-        levels = [[1, 2], [u'one', u'two']]
-        labels = [[0, 0, 1, 1], [0, 1, 0, 1]]
-        names = ((['foo'], ['bar']))
-        message = "MultiIndex.name must be a hashable type"
-        tm.assert_raises_regex(TypeError, message,
-                               MultiIndex, levels=levels,
-                               labels=labels, names=names)
-
-        # With .rename()
-        mi = MultiIndex(levels=[[1, 2], [u'one', u'two']],
-                        labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
-                        names=('foo', 'bar'))
-        renamed = [['foor'], ['barr']]
-        tm.assert_raises_regex(TypeError, message, mi.rename, names=renamed)
-        # With .set_names()
-        tm.assert_raises_regex(TypeError, message, mi.set_names, names=renamed)
-
-    @pytest.mark.parametrize('names', [['a', 'b', 'a'], [1, 1, 2],
-                                       [1, 'a', 1]])
-    def test_duplicate_level_names(self, names):
-        # GH18872, GH19029
-        mi = pd.MultiIndex.from_product([[0, 1]] * 3, names=names)
-        assert mi.names == names
-
-        # With .rename()
-        mi = pd.MultiIndex.from_product([[0, 1]] * 3)
-        mi = mi.rename(names)
-        assert mi.names == names
-
-        # With .rename(., level=)
-        mi.rename(names[1], level=1, inplace=True)
-        mi = mi.rename([names[0], names[2]], level=[0, 2])
-        assert mi.names == names
-
-    def test_duplicate_level_names_access_raises(self):
-        self.index.names = ['foo', 'foo']
-        tm.assert_raises_regex(KeyError, 'Level foo not found',
-                               self.index._get_level_number, 'foo')
-
-    def assert_multiindex_copied(self, copy, original):
-        # Levels should be (at least, shallow copied)
-        tm.assert_copy(copy.levels, original.levels)
-        tm.assert_almost_equal(copy.labels, original.labels)
-
-        # Labels doesn't matter which way copied
-        tm.assert_almost_equal(copy.labels, original.labels)
-        assert copy.labels is not original.labels
-
-        # Names doesn't matter which way copied
-        assert copy.names == original.names
-        assert copy.names is not original.names
-
-        # Sort order should be copied
-        assert copy.sortorder == original.sortorder
-
-    def test_copy(self):
-        i_copy = self.index.copy()
-
-        self.assert_multiindex_copied(i_copy, self.index)
-
-    def test_shallow_copy(self):
-        i_copy = self.index._shallow_copy()
-
-        self.assert_multiindex_copied(i_copy, self.index)
-
-    def test_view(self):
-        i_view = self.index.view()
-
-        self.assert_multiindex_copied(i_view, self.index)
-
-    def check_level_names(self, index, names):
-        assert [level.name for level in index.levels] == list(names)
-
-    def test_changing_names(self):
-
-        # names should be applied to levels
-        level_names = [level.name for level in self.index.levels]
-        self.check_level_names(self.index, self.index.names)
-
-        view = self.index.view()
-        copy = self.index.copy()
-        shallow_copy = self.index._shallow_copy()
-
-        # changing names should change level names on object
-        new_names = [name + "a" for name in self.index.names]
-        self.index.names = new_names
-        self.check_level_names(self.index, new_names)
-
-        # but not on copies
-        self.check_level_names(view, level_names)
-        self.check_level_names(copy, level_names)
-        self.check_level_names(shallow_copy, level_names)
-
-        # and copies shouldn't change original
-        shallow_copy.names = [name + "c" for name in shallow_copy.names]
-        self.check_level_names(self.index, new_names)
-
-    def test_get_level_number_integer(self):
-        self.index.names = [1, 0]
-        assert self.index._get_level_number(1) == 0
-        assert self.index._get_level_number(0) == 1
-        pytest.raises(IndexError, self.index._get_level_number, 2)
-        tm.assert_raises_regex(KeyError, 'Level fourth not found',
-                               self.index._get_level_number, 'fourth')
-
-    def test_from_arrays(self):
-        arrays = []
-        for lev, lab in zip(self.index.levels, self.index.labels):
-            arrays.append(np.asarray(lev).take(lab))
-
-        # list of arrays as input
-        result = MultiIndex.from_arrays(arrays, names=self.index.names)
-        tm.assert_index_equal(result, self.index)
-
-        # infer correctly
-        result = MultiIndex.from_arrays([[pd.NaT, Timestamp('20130101')],
-                                         ['a', 'b']])
-        assert result.levels[0].equals(Index([Timestamp('20130101')]))
-        assert result.levels[1].equals(Index(['a', 'b']))
-
-    def test_from_arrays_iterator(self):
-        # GH 18434
-        arrays = []
-        for lev, lab in zip(self.index.levels, self.index.labels):
-            arrays.append(np.asarray(lev).take(lab))
-
-        # iterator as input
-        result = MultiIndex.from_arrays(iter(arrays), names=self.index.names)
-        tm.assert_index_equal(result, self.index)
-
-        # invalid iterator input
-        with tm.assert_raises_regex(
-                TypeError, "Input must be a list / sequence of array-likes."):
-            MultiIndex.from_arrays(0)
-
-    def test_from_arrays_index_series_datetimetz(self):
-        idx1 = pd.date_range('2015-01-01 10:00', freq='D', periods=3,
-                             tz='US/Eastern')
-        idx2 = pd.date_range('2015-01-01 10:00', freq='H', periods=3,
-                             tz='Asia/Tokyo')
-        result = pd.MultiIndex.from_arrays([idx1, idx2])
-        tm.assert_index_equal(result.get_level_values(0), idx1)
-        tm.assert_index_equal(result.get_level_values(1), idx2)
-
-        result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)])
-        tm.assert_index_equal(result2.get_level_values(0), idx1)
-        tm.assert_index_equal(result2.get_level_values(1), idx2)
-
-        tm.assert_index_equal(result, result2)
-
-    def test_from_arrays_index_series_timedelta(self):
-        idx1 = pd.timedelta_range('1 days', freq='D', periods=3)
-        idx2 = pd.timedelta_range('2 hours', freq='H', periods=3)
-        result = pd.MultiIndex.from_arrays([idx1, idx2])
-        tm.assert_index_equal(result.get_level_values(0), idx1)
-        tm.assert_index_equal(result.get_level_values(1), idx2)
-
-        result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)])
-        tm.assert_index_equal(result2.get_level_values(0), idx1)
-        tm.assert_index_equal(result2.get_level_values(1), idx2)
-
-        tm.assert_index_equal(result, result2)
-
-    def test_from_arrays_index_series_period(self):
-        idx1 = pd.period_range('2011-01-01', freq='D', periods=3)
-        idx2 = pd.period_range('2015-01-01', freq='H', periods=3)
-        result = pd.MultiIndex.from_arrays([idx1, idx2])
-        tm.assert_index_equal(result.get_level_values(0), idx1)
-        tm.assert_index_equal(result.get_level_values(1), idx2)
-
-        result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)])
-        tm.assert_index_equal(result2.get_level_values(0), idx1)
-        tm.assert_index_equal(result2.get_level_values(1), idx2)
-
-        tm.assert_index_equal(result, result2)
-
-    def test_from_arrays_index_datetimelike_mixed(self):
-        idx1 = pd.date_range('2015-01-01 10:00', freq='D', periods=3,
-                             tz='US/Eastern')
-        idx2 = pd.date_range('2015-01-01 10:00', freq='H', periods=3)
-        idx3 = pd.timedelta_range('1 days', freq='D', periods=3)
-        idx4 = pd.period_range('2011-01-01', freq='D', periods=3)
-
-        result = pd.MultiIndex.from_arrays([idx1, idx2, idx3, idx4])
-        tm.assert_index_equal(result.get_level_values(0), idx1)
-        tm.assert_index_equal(result.get_level_values(1), idx2)
-        tm.assert_index_equal(result.get_level_values(2), idx3)
-        tm.assert_index_equal(result.get_level_values(3), idx4)
-
-        result2 = pd.MultiIndex.from_arrays([pd.Series(idx1),
-                                             pd.Series(idx2),
-                                             pd.Series(idx3),
-                                             pd.Series(idx4)])
-        tm.assert_index_equal(result2.get_level_values(0), idx1)
-        tm.assert_index_equal(result2.get_level_values(1), idx2)
-        tm.assert_index_equal(result2.get_level_values(2), idx3)
-        tm.assert_index_equal(result2.get_level_values(3), idx4)
-
-        tm.assert_index_equal(result, result2)
-
-    def test_from_arrays_index_series_categorical(self):
-        # GH13743
-        idx1 = pd.CategoricalIndex(list("abcaab"), categories=list("bac"),
-                                   ordered=False)
-        idx2 = pd.CategoricalIndex(list("abcaab"), categories=list("bac"),
-                                   ordered=True)
-
-        result = pd.MultiIndex.from_arrays([idx1, idx2])
-        tm.assert_index_equal(result.get_level_values(0), idx1)
-        tm.assert_index_equal(result.get_level_values(1), idx2)
-
-        result2 = pd.MultiIndex.from_arrays([pd.Series(idx1), pd.Series(idx2)])
-        tm.assert_index_equal(result2.get_level_values(0), idx1)
-        tm.assert_index_equal(result2.get_level_values(1), idx2)
-
-        result3 = pd.MultiIndex.from_arrays([idx1.values, idx2.values])
-        tm.assert_index_equal(result3.get_level_values(0), idx1)
-        tm.assert_index_equal(result3.get_level_values(1), idx2)
-
-    def test_from_arrays_empty(self):
-        # 0 levels
-        with tm.assert_raises_regex(
-                ValueError, "Must pass non-zero number of levels/labels"):
-            MultiIndex.from_arrays(arrays=[])
-
-        # 1 level
-        result = MultiIndex.from_arrays(arrays=[[]], names=['A'])
-        assert isinstance(result, MultiIndex)
-        expected = Index([], name='A')
-        tm.assert_index_equal(result.levels[0], expected)
-
-        # N levels
-        for N in [2, 3]:
-            arrays = [[]] * N
-            names = list('ABC')[:N]
-            result = MultiIndex.from_arrays(arrays=arrays, names=names)
-            expected = MultiIndex(levels=[[]] * N, labels=[[]] * N,
-                                  names=names)
-            tm.assert_index_equal(result, expected)
-
-    def test_from_arrays_invalid_input(self):
-        invalid_inputs = [1, [1], [1, 2], [[1], 2],
-                          'a', ['a'], ['a', 'b'], [['a'], 'b']]
-        for i in invalid_inputs:
-            pytest.raises(TypeError, MultiIndex.from_arrays, arrays=i)
-
-    def test_from_arrays_different_lengths(self):
-        # see gh-13599
-        idx1 = [1, 2, 3]
-        idx2 = ['a', 'b']
-        tm.assert_raises_regex(ValueError, '^all arrays must '
-                               'be same length$',
-                               MultiIndex.from_arrays, [idx1, idx2])
-
-        idx1 = []
-        idx2 = ['a', 'b']
-        tm.assert_raises_regex(ValueError, '^all arrays must '
-                               'be same length$',
-                               MultiIndex.from_arrays, [idx1, idx2])
-
-        idx1 = [1, 2, 3]
-        idx2 = []
-        tm.assert_raises_regex(ValueError, '^all arrays must '
-                               'be same length$',
-                               MultiIndex.from_arrays, [idx1, idx2])
-
-    def test_from_product(self):
-
-        first = ['foo', 'bar', 'buz']
-        second = ['a', 'b', 'c']
-        names = ['first', 'second']
-        result = MultiIndex.from_product([first, second], names=names)
-
-        tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), ('bar', 'a'),
-                  ('bar', 'b'), ('bar', 'c'), ('buz', 'a'), ('buz', 'b'),
-                  ('buz', 'c')]
-        expected = MultiIndex.from_tuples(tuples, names=names)
-
-        tm.assert_index_equal(result, expected)
-
-    def test_from_product_iterator(self):
-        # GH 18434
-        first = ['foo', 'bar', 'buz']
-        second = ['a', 'b', 'c']
-        names = ['first', 'second']
-        tuples = [('foo', 'a'), ('foo', 'b'), ('foo', 'c'), ('bar', 'a'),
-                  ('bar', 'b'), ('bar', 'c'), ('buz', 'a'), ('buz', 'b'),
-                  ('buz', 'c')]
-        expected = MultiIndex.from_tuples(tuples, names=names)
-
-        # iterator as input
-        result = MultiIndex.from_product(iter([first, second]), names=names)
-        tm.assert_index_equal(result, expected)
-
-        # Invalid non-iterable input
-        with tm.assert_raises_regex(
-                TypeError, "Input must be a list / sequence of iterables."):
-            MultiIndex.from_product(0)
-
-    def test_from_product_empty(self):
-        # 0 levels
-        with tm.assert_raises_regex(
-                ValueError, "Must pass non-zero number of levels/labels"):
-            MultiIndex.from_product([])
-
-        # 1 level
-        result = MultiIndex.from_product([[]], names=['A'])
-        expected = pd.Index([], name='A')
-        tm.assert_index_equal(result.levels[0], expected)
-
-        # 2 levels
-        l1 = [[], ['foo', 'bar', 'baz'], []]
-        l2 = [[], [], ['a', 'b', 'c']]
-        names = ['A', 'B']
-        for first, second in zip(l1, l2):
-            result = MultiIndex.from_product([first, second], names=names)
-            expected = MultiIndex(levels=[first, second],
-                                  labels=[[], []], names=names)
-            tm.assert_index_equal(result, expected)
-
-        # GH12258
-        names = ['A', 'B', 'C']
-        for N in range(4):
-            lvl2 = lrange(N)
-            result = MultiIndex.from_product([[], lvl2, []], names=names)
-            expected = MultiIndex(levels=[[], lvl2, []],
-                                  labels=[[], [], []], names=names)
-            tm.assert_index_equal(result, expected)
-
-    def test_from_product_invalid_input(self):
-        invalid_inputs = [1, [1], [1, 2], [[1], 2],
-                          'a', ['a'], ['a', 'b'], [['a'], 'b']]
-        for i in invalid_inputs:
-            pytest.raises(TypeError, MultiIndex.from_product, iterables=i)
-
-    def test_from_product_datetimeindex(self):
-        dt_index = date_range('2000-01-01', periods=2)
-        mi = pd.MultiIndex.from_product([[1, 2], dt_index])
-        etalon = construct_1d_object_array_from_listlike([(1, pd.Timestamp(
-            '2000-01-01')), (1, pd.Timestamp('2000-01-02')), (2, pd.Timestamp(
-                '2000-01-01')), (2, pd.Timestamp('2000-01-02'))])
-        tm.assert_numpy_array_equal(mi.values, etalon)
-
-    def test_from_product_index_series_categorical(self):
-        # GH13743
-        first = ['foo', 'bar']
-        for ordered in [False, True]:
-            idx = pd.CategoricalIndex(list("abcaab"), categories=list("bac"),
-                                      ordered=ordered)
-            expected = pd.CategoricalIndex(list("abcaab") + list("abcaab"),
-                                           categories=list("bac"),
-                                           ordered=ordered)
-
-            for arr in [idx, pd.Series(idx), idx.values]:
-                result = pd.MultiIndex.from_product([first, arr])
-                tm.assert_index_equal(result.get_level_values(1), expected)
-
-    def test_values_boxed(self):
-        tuples = [(1, pd.Timestamp('2000-01-01')), (2, pd.NaT),
-                  (3, pd.Timestamp('2000-01-03')),
-                  (1, pd.Timestamp('2000-01-04')),
-                  (2, pd.Timestamp('2000-01-02')),
-                  (3, pd.Timestamp('2000-01-03'))]
-        result = pd.MultiIndex.from_tuples(tuples)
-        expected = construct_1d_object_array_from_listlike(tuples)
-        tm.assert_numpy_array_equal(result.values, expected)
-        # Check that code branches for boxed values produce identical results
-        tm.assert_numpy_array_equal(result.values[:4], result[:4].values)
-
-    def test_values_multiindex_datetimeindex(self):
-        # Test to ensure we hit the boxing / nobox part of MI.values
-        ints = np.arange(10 ** 18, 10 ** 18 + 5)
-        naive = pd.DatetimeIndex(ints)
-        aware = pd.DatetimeIndex(ints, tz='US/Central')
-
-        idx = pd.MultiIndex.from_arrays([naive, aware])
-        result = idx.values
-
-        outer = pd.DatetimeIndex([x[0] for x in result])
-        tm.assert_index_equal(outer, naive)
-
-        inner = pd.DatetimeIndex([x[1] for x in result])
-        tm.assert_index_equal(inner, aware)
-
-        # n_lev > n_lab
-        result = idx[:2].values
-
-        outer = pd.DatetimeIndex([x[0] for x in result])
-        tm.assert_index_equal(outer, naive[:2])
-
-        inner = pd.DatetimeIndex([x[1] for x in result])
-        tm.assert_index_equal(inner, aware[:2])
-
-    def test_values_multiindex_periodindex(self):
-        # Test to ensure we hit the boxing / nobox part of MI.values
-        ints = np.arange(2007, 2012)
-        pidx = pd.PeriodIndex(ints, freq='D')
-
-        idx = pd.MultiIndex.from_arrays([ints, pidx])
-        result = idx.values
-
-        outer = pd.Int64Index([x[0] for x in result])
-        tm.assert_index_equal(outer, pd.Int64Index(ints))
-
-        inner = pd.PeriodIndex([x[1] for x in result])
-        tm.assert_index_equal(inner, pidx)
-
-        # n_lev > n_lab
-        result = idx[:2].values
-
-        outer = pd.Int64Index([x[0] for x in result])
-        tm.assert_index_equal(outer, pd.Int64Index(ints[:2]))
-
-        inner = pd.PeriodIndex([x[1] for x in result])
-        tm.assert_index_equal(inner, pidx[:2])
-
-    def test_append(self):
-        result = self.index[:3].append(self.index[3:])
-        assert result.equals(self.index)
-
-        foos = [self.index[:1], self.index[1:3], self.index[3:]]
-        result = foos[0].append(foos[1:])
-        assert result.equals(self.index)
-
-        # empty
-        result = self.index.append([])
-        assert result.equals(self.index)
-
-    def test_append_mixed_dtypes(self):
-        # GH 13660
-        dti = date_range('2011-01-01', freq='M', periods=3, )
-        dti_tz = date_range('2011-01-01', freq='M', periods=3, tz='US/Eastern')
-        pi = period_range('2011-01', freq='M', periods=3)
-
-        mi = MultiIndex.from_arrays([[1, 2, 3],
-                                     [1.1, np.nan, 3.3],
-                                     ['a', 'b', 'c'],
-                                     dti, dti_tz, pi])
-        assert mi.nlevels == 6
-
-        res = mi.append(mi)
-        exp = MultiIndex.from_arrays([[1, 2, 3, 1, 2, 3],
-                                      [1.1, np.nan, 3.3, 1.1, np.nan, 3.3],
-                                      ['a', 'b', 'c', 'a', 'b', 'c'],
-                                      dti.append(dti),
-                                      dti_tz.append(dti_tz),
-                                      pi.append(pi)])
-        tm.assert_index_equal(res, exp)
-
-        other = MultiIndex.from_arrays([['x', 'y', 'z'], ['x', 'y', 'z'],
-                                        ['x', 'y', 'z'], ['x', 'y', 'z'],
-                                        ['x', 'y', 'z'], ['x', 'y', 'z']])
-
-        res = mi.append(other)
-        exp = MultiIndex.from_arrays([[1, 2, 3, 'x', 'y', 'z'],
-                                      [1.1, np.nan, 3.3, 'x', 'y', 'z'],
-                                      ['a', 'b', 'c', 'x', 'y', 'z'],
-                                      dti.append(pd.Index(['x', 'y', 'z'])),
-                                      dti_tz.append(pd.Index(['x', 'y', 'z'])),
-                                      pi.append(pd.Index(['x', 'y', 'z']))])
-        tm.assert_index_equal(res, exp)
-
-    def test_get_level_values(self):
-        result = self.index.get_level_values(0)
-        expected = Index(['foo', 'foo', 'bar', 'baz', 'qux', 'qux'],
-                         name='first')
-        tm.assert_index_equal(result, expected)
-        assert result.name == 'first'
-
-        result = self.index.get_level_values('first')
-        expected = self.index.get_level_values(0)
-        tm.assert_index_equal(result, expected)
-
-        # GH 10460
-        index = MultiIndex(
-            levels=[CategoricalIndex(['A', 'B']),
-                    CategoricalIndex([1, 2, 3])],
-            labels=[np.array([0, 0, 0, 1, 1, 1]),
-                    np.array([0, 1, 2, 0, 1, 2])])
-
-        exp = CategoricalIndex(['A', 'A', 'A', 'B', 'B', 'B'])
-        tm.assert_index_equal(index.get_level_values(0), exp)
-        exp = CategoricalIndex([1, 2, 3, 1, 2, 3])
-        tm.assert_index_equal(index.get_level_values(1), exp)
-
-    def test_get_level_values_int_with_na(self):
-        # GH 17924
-        arrays = [['a', 'b', 'b'], [1, np.nan, 2]]
-        index = pd.MultiIndex.from_arrays(arrays)
-        result = index.get_level_values(1)
-        expected = Index([1, np.nan, 2])
-        tm.assert_index_equal(result, expected)
-
-        arrays = [['a', 'b', 'b'], [np.nan, np.nan, 2]]
-        index = pd.MultiIndex.from_arrays(arrays)
-        result = index.get_level_values(1)
-        expected = Index([np.nan, np.nan, 2])
-        tm.assert_index_equal(result, expected)
-
-    def test_get_level_values_na(self):
-        arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]]
-        index = pd.MultiIndex.from_arrays(arrays)
-        result = index.get_level_values(0)
-        expected = pd.Index([np.nan, np.nan, np.nan])
-        tm.assert_index_equal(result, expected)
-
-        result = index.get_level_values(1)
-        expected = pd.Index(['a', np.nan, 1])
-        tm.assert_index_equal(result, expected)
-
-        arrays = [['a', 'b', 'b'], pd.DatetimeIndex([0, 1, pd.NaT])]
-        index = pd.MultiIndex.from_arrays(arrays)
-        result = index.get_level_values(1)
-        expected = pd.DatetimeIndex([0, 1, pd.NaT])
-        tm.assert_index_equal(result, expected)
-
-        arrays = [[], []]
-        index = pd.MultiIndex.from_arrays(arrays)
-        result = index.get_level_values(0)
-        expected = pd.Index([], dtype=object)
-        tm.assert_index_equal(result, expected)
-
-    def test_get_level_values_all_na(self):
-        # GH 17924 when level entirely consists of nan
-        arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]]
-        index = pd.MultiIndex.from_arrays(arrays)
-        result = index.get_level_values(0)
-        expected = pd.Index([np.nan, np.nan, np.nan], dtype=np.float64)
-        tm.assert_index_equal(result, expected)
-
-        result = index.get_level_values(1)
-        expected = pd.Index(['a', np.nan, 1], dtype=object)
-        tm.assert_index_equal(result, expected)
-
-    def test_reorder_levels(self):
-        # this blows up
-        tm.assert_raises_regex(IndexError, '^Too many levels',
-                               self.index.reorder_levels, [2, 1, 0])
-
-    def test_nlevels(self):
-        assert self.index.nlevels == 2
-
-    def test_iter(self):
-        result = list(self.index)
-        expected = [('foo', 'one'), ('foo', 'two'), ('bar', 'one'),
-                    ('baz', 'two'), ('qux', 'one'), ('qux', 'two')]
-        assert result == expected
-
-    def test_legacy_pickle(self, datapath):
-        if PY3:
-            pytest.skip("testing for legacy pickles not "
-                        "support on py3")
-
-        path = datapath('indexes', 'data', 'multiindex_v1.pickle')
-        obj = pd.read_pickle(path)
-
-        obj2 = MultiIndex.from_tuples(obj.values)
-        assert obj.equals(obj2)
-
-        res = obj.get_indexer(obj)
-        exp = np.arange(len(obj), dtype=np.intp)
-        assert_almost_equal(res, exp)
-
-        res = obj.get_indexer(obj2[::-1])
-        exp = obj.get_indexer(obj[::-1])
-        exp2 = obj2.get_indexer(obj2[::-1])
-        assert_almost_equal(res, exp)
-        assert_almost_equal(exp, exp2)
-
-    def test_legacy_v2_unpickle(self, datapath):
-
-        # 0.7.3 -> 0.8.0 format manage
-        path = datapath('indexes', 'data', 'mindex_073.pickle')
-        obj = pd.read_pickle(path)
-
-        obj2 = MultiIndex.from_tuples(obj.values)
-        assert obj.equals(obj2)
-
-        res = obj.get_indexer(obj)
-        exp = np.arange(len(obj), dtype=np.intp)
-        assert_almost_equal(res, exp)
-
-        res = obj.get_indexer(obj2[::-1])
-        exp = obj.get_indexer(obj[::-1])
-        exp2 = obj2.get_indexer(obj2[::-1])
-        assert_almost_equal(res, exp)
-        assert_almost_equal(exp, exp2)
-
-    def test_roundtrip_pickle_with_tz(self):
-
-        # GH 8367
-        # round-trip of timezone
-        index = MultiIndex.from_product(
-            [[1, 2], ['a', 'b'], date_range('20130101', periods=3,
-                                            tz='US/Eastern')
-             ], names=['one', 'two', 'three'])
-        unpickled = tm.round_trip_pickle(index)
-        assert index.equal_levels(unpickled)
-
-    def test_from_tuples_index_values(self):
-        result = MultiIndex.from_tuples(self.index)
-        assert (result.values == self.index.values).all()
-
-    def test_contains(self):
-        assert ('foo', 'two') in self.index
-        assert ('bar', 'two') not in self.index
-        assert None not in self.index
-
-    def test_contains_top_level(self):
-        midx = MultiIndex.from_product([['A', 'B'], [1, 2]])
-        assert 'A' in midx
-        assert 'A' not in midx._engine
-
-    def test_contains_with_nat(self):
-        # MI with a NaT
-        mi = MultiIndex(levels=[['C'],
-                                pd.date_range('2012-01-01', periods=5)],
-                        labels=[[0, 0, 0, 0, 0, 0], [-1, 0, 1, 2, 3, 4]],
-                        names=[None, 'B'])
-        assert ('C', pd.Timestamp('2012-01-01')) in mi
-        for val in mi.values:
-            assert val in mi
-
-    def test_is_all_dates(self):
-        assert not self.index.is_all_dates
-
-    def test_is_numeric(self):
-        # MultiIndex is never numeric
-        assert not self.index.is_numeric()
-
-    def test_getitem(self):
-        # scalar
-        assert self.index[2] == ('bar', 'one')
-
-        # slice
-        result = self.index[2:5]
-        expected = self.index[[2, 3, 4]]
-        assert result.equals(expected)
-
-        # boolean
-        result = self.index[[True, False, True, False, True, True]]
-        result2 = self.index[np.array([True, False, True, False, True, True])]
-        expected = self.index[[0, 2, 4, 5]]
-        assert result.equals(expected)
-        assert result2.equals(expected)
-
-    def test_getitem_group_select(self):
-        sorted_idx, _ = self.index.sortlevel(0)
-        assert sorted_idx.get_loc('baz') == slice(3, 4)
-        assert sorted_idx.get_loc('foo') == slice(0, 2)
-
-    def test_get_loc(self):
-        assert self.index.get_loc(('foo', 'two')) == 1
-        assert self.index.get_loc(('baz', 'two')) == 3
-        pytest.raises(KeyError, self.index.get_loc, ('bar', 'two'))
-        pytest.raises(KeyError, self.index.get_loc, 'quux')
-
-        pytest.raises(NotImplementedError, self.index.get_loc, 'foo',
-                      method='nearest')
-
-        # 3 levels
-        index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
-            lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
-                [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
-        pytest.raises(KeyError, index.get_loc, (1, 1))
-        assert index.get_loc((2, 0)) == slice(3, 5)
-
-    def test_get_loc_duplicates(self):
-        index = Index([2, 2, 2, 2])
-        result = index.get_loc(2)
-        expected = slice(0, 4)
-        assert result == expected
-        # pytest.raises(Exception, index.get_loc, 2)
-
-        index = Index(['c', 'a', 'a', 'b', 'b'])
-        rs = index.get_loc('c')
-        xp = 0
-        assert rs == xp
-
-    def test_get_value_duplicates(self):
-        index = MultiIndex(levels=[['D', 'B', 'C'],
-                                   [0, 26, 27, 37, 57, 67, 75, 82]],
-                           labels=[[0, 0, 0, 1, 2, 2, 2, 2, 2, 2],
-                                   [1, 3, 4, 6, 0, 2, 2, 3, 5, 7]],
-                           names=['tag', 'day'])
-
-        assert index.get_loc('D') == slice(0, 3)
-        with pytest.raises(KeyError):
-            index._engine.get_value(np.array([]), 'D')
-
-    def test_get_loc_level(self):
-        index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
-            lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
-                [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
-
-        loc, new_index = index.get_loc_level((0, 1))
-        expected = slice(1, 2)
-        exp_index = index[expected].droplevel(0).droplevel(0)
-        assert loc == expected
-        assert new_index.equals(exp_index)
-
-        loc, new_index = index.get_loc_level((0, 1, 0))
-        expected = 1
-        assert loc == expected
-        assert new_index is None
-
-        pytest.raises(KeyError, index.get_loc_level, (2, 2))
-
-        index = MultiIndex(levels=[[2000], lrange(4)], labels=[np.array(
-            [0, 0, 0, 0]), np.array([0, 1, 2, 3])])
-        result, new_index = index.get_loc_level((2000, slice(None, None)))
-        expected = slice(None, None)
-        assert result == expected
-        assert new_index.equals(index.droplevel(0))
-
-    @pytest.mark.parametrize('level', [0, 1])
-    @pytest.mark.parametrize('null_val', [np.nan, pd.NaT, None])
-    def test_get_loc_nan(self, level, null_val):
-        # GH 18485 : NaN in MultiIndex
-        levels = [['a', 'b'], ['c', 'd']]
-        key = ['b', 'd']
-        levels[level] = np.array([0, null_val], dtype=type(null_val))
-        key[level] = null_val
-        idx = MultiIndex.from_product(levels)
-        assert idx.get_loc(tuple(key)) == 3
-
-    def test_get_loc_missing_nan(self):
-        # GH 8569
-        idx = MultiIndex.from_arrays([[1.0, 2.0], [3.0, 4.0]])
-        assert isinstance(idx.get_loc(1), slice)
-        pytest.raises(KeyError, idx.get_loc, 3)
-        pytest.raises(KeyError, idx.get_loc, np.nan)
-        pytest.raises(KeyError, idx.get_loc, [np.nan])
-
-    @pytest.mark.parametrize('dtype1', [int, float, bool, str])
-    @pytest.mark.parametrize('dtype2', [int, float, bool, str])
-    def test_get_loc_multiple_dtypes(self, dtype1, dtype2):
-        # GH 18520
-        levels = [np.array([0, 1]).astype(dtype1),
-                  np.array([0, 1]).astype(dtype2)]
-        idx = pd.MultiIndex.from_product(levels)
-        assert idx.get_loc(idx[2]) == 2
-
-    @pytest.mark.parametrize('level', [0, 1])
-    @pytest.mark.parametrize('dtypes', [[int, float], [float, int]])
-    def test_get_loc_implicit_cast(self, level, dtypes):
-        # GH 18818, GH 15994 : as flat index, cast int to float and vice-versa
-        levels = [['a', 'b'], ['c', 'd']]
-        key = ['b', 'd']
-        lev_dtype, key_dtype = dtypes
-        levels[level] = np.array([0, 1], dtype=lev_dtype)
-        key[level] = key_dtype(1)
-        idx = MultiIndex.from_product(levels)
-        assert idx.get_loc(tuple(key)) == 3
-
-    def test_get_loc_cast_bool(self):
-        # GH 19086 : int is casted to bool, but not vice-versa
-        levels = [[False, True], np.arange(2, dtype='int64')]
-        idx = MultiIndex.from_product(levels)
-
-        assert idx.get_loc((0, 1)) == 1
-        assert idx.get_loc((1, 0)) == 2
-
-        pytest.raises(KeyError, idx.get_loc, (False, True))
-        pytest.raises(KeyError, idx.get_loc, (True, False))
-
-    def test_slice_locs(self):
-        df = tm.makeTimeDataFrame()
-        stacked = df.stack()
-        idx = stacked.index
-
-        slob = slice(*idx.slice_locs(df.index[5], df.index[15]))
-        sliced = stacked[slob]
-        expected = df[5:16].stack()
-        tm.assert_almost_equal(sliced.values, expected.values)
-
-        slob = slice(*idx.slice_locs(df.index[5] + timedelta(seconds=30),
-                                     df.index[15] - timedelta(seconds=30)))
-        sliced = stacked[slob]
-        expected = df[6:15].stack()
-        tm.assert_almost_equal(sliced.values, expected.values)
-
-    def test_slice_locs_with_type_mismatch(self):
-        df = tm.makeTimeDataFrame()
-        stacked = df.stack()
-        idx = stacked.index
-        tm.assert_raises_regex(TypeError, '^Level type mismatch',
-                               idx.slice_locs, (1, 3))
-        tm.assert_raises_regex(TypeError, '^Level type mismatch',
-                               idx.slice_locs,
-                               df.index[5] + timedelta(
-                                   seconds=30), (5, 2))
-        df = tm.makeCustomDataframe(5, 5)
-        stacked = df.stack()
-        idx = stacked.index
-        with tm.assert_raises_regex(TypeError, '^Level type mismatch'):
-            idx.slice_locs(timedelta(seconds=30))
-        # TODO: Try creating a UnicodeDecodeError in exception message
-        with tm.assert_raises_regex(TypeError, '^Level type mismatch'):
-            idx.slice_locs(df.index[1], (16, "a"))
-
-    def test_slice_locs_not_sorted(self):
-        index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
-            lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
-                [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
-
-        tm.assert_raises_regex(KeyError, "[Kk]ey length.*greater than "
-                               "MultiIndex lexsort depth",
-                               index.slice_locs, (1, 0, 1), (2, 1, 0))
-
-        # works
-        sorted_index, _ = index.sortlevel(0)
-        # should there be a test case here???
-        sorted_index.slice_locs((1, 0, 1), (2, 1, 0))
-
-    def test_slice_locs_partial(self):
-        sorted_idx, _ = self.index.sortlevel(0)
-
-        result = sorted_idx.slice_locs(('foo', 'two'), ('qux', 'one'))
-        assert result == (1, 5)
-
-        result = sorted_idx.slice_locs(None, ('qux', 'one'))
-        assert result == (0, 5)
-
-        result = sorted_idx.slice_locs(('foo', 'two'), None)
-        assert result == (1, len(sorted_idx))
-
-        result = sorted_idx.slice_locs('bar', 'baz')
-        assert result == (2, 4)
-
-    def test_slice_locs_not_contained(self):
-        # some searchsorted action
-
-        index = MultiIndex(levels=[[0, 2, 4, 6], [0, 2, 4]],
-                           labels=[[0, 0, 0, 1, 1, 2, 3, 3, 3],
-                                   [0, 1, 2, 1, 2, 2, 0, 1, 2]], sortorder=0)
-
-        result = index.slice_locs((1, 0), (5, 2))
-        assert result == (3, 6)
-
-        result = index.slice_locs(1, 5)
-        assert result == (3, 6)
-
-        result = index.slice_locs((2, 2), (5, 2))
-        assert result == (3, 6)
-
-        result = index.slice_locs(2, 5)
-        assert result == (3, 6)
-
-        result = index.slice_locs((1, 0), (6, 3))
-        assert result == (3, 8)
-
-        result = index.slice_locs(-1, 10)
-        assert result == (0, len(index))
-
-    def test_consistency(self):
-        # need to construct an overflow
-        major_axis = lrange(70000)
-        minor_axis = lrange(10)
-
-        major_labels = np.arange(70000)
-        minor_labels = np.repeat(lrange(10), 7000)
-
-        # the fact that is works means it's consistent
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-
-        # inconsistent
-        major_labels = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3])
-        minor_labels = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1])
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-
-        assert not index.is_unique
-
-    def test_truncate(self):
-        major_axis = Index(lrange(4))
-        minor_axis = Index(lrange(2))
-
-        major_labels = np.array([0, 0, 1, 2, 3, 3])
-        minor_labels = np.array([0, 1, 0, 1, 0, 1])
-
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-
-        result = index.truncate(before=1)
-        assert 'foo' not in result.levels[0]
-        assert 1 in result.levels[0]
-
-        result = index.truncate(after=1)
-        assert 2 not in result.levels[0]
-        assert 1 in result.levels[0]
-
-        result = index.truncate(before=1, after=2)
-        assert len(result.levels[0]) == 2
-
-        # after < before
-        pytest.raises(ValueError, index.truncate, 3, 1)
-
-    def test_get_indexer(self):
-        major_axis = Index(lrange(4))
-        minor_axis = Index(lrange(2))
-
-        major_labels = np.array([0, 0, 1, 2, 2, 3, 3], dtype=np.intp)
-        minor_labels = np.array([0, 1, 0, 0, 1, 0, 1], dtype=np.intp)
-
-        index = MultiIndex(levels=[major_axis, minor_axis],
-                           labels=[major_labels, minor_labels])
-        idx1 = index[:5]
-        idx2 = index[[1, 3, 5]]
-
-        r1 = idx1.get_indexer(idx2)
-        assert_almost_equal(r1, np.array([1, 3, -1], dtype=np.intp))
-
-        r1 = idx2.get_indexer(idx1, method='pad')
-        e1 = np.array([-1, 0, 0, 1, 1], dtype=np.intp)
-        assert_almost_equal(r1, e1)
-
-        r2 = idx2.get_indexer(idx1[::-1], method='pad')
-        assert_almost_equal(r2, e1[::-1])
-
-        rffill1 = idx2.get_indexer(idx1, method='ffill')
-        assert_almost_equal(r1, rffill1)
-
-        r1 = idx2.get_indexer(idx1, method='backfill')
-        e1 = np.array([0, 0, 1, 1, 2], dtype=np.intp)
-        assert_almost_equal(r1, e1)
-
-        r2 = idx2.get_indexer(idx1[::-1], method='backfill')
-        assert_almost_equal(r2, e1[::-1])
-
-        rbfill1 = idx2.get_indexer(idx1, method='bfill')
-        assert_almost_equal(r1, rbfill1)
-
-        # pass non-MultiIndex
-        r1 = idx1.get_indexer(idx2.values)
-        rexp1 = idx1.get_indexer(idx2)
-        assert_almost_equal(r1, rexp1)
-
-        r1 = idx1.get_indexer([1, 2, 3])
-        assert (r1 == [-1, -1, -1]).all()
-
-        # create index with duplicates
-        idx1 = Index(lrange(10) + lrange(10))
-        idx2 = Index(lrange(20))
-
-        msg = "Reindexing only valid with uniquely valued Index objects"
-        with tm.assert_raises_regex(InvalidIndexError, msg):
-            idx1.get_indexer(idx2)
-
-    def test_get_indexer_nearest(self):
-        midx = MultiIndex.from_tuples([('a', 1), ('b', 2)])
-        with pytest.raises(NotImplementedError):
-            midx.get_indexer(['a'], method='nearest')
-        with pytest.raises(NotImplementedError):
-            midx.get_indexer(['a'], method='pad', tolerance=2)
-
-    def test_get_indexer_categorical_time(self):
-        # https://github.com/pandas-dev/pandas/issues/21390
-        midx = MultiIndex.from_product(
-            [Categorical(['a', 'b', 'c']),
-             Categorical(date_range("2012-01-01", periods=3, freq='H'))])
-        result = midx.get_indexer(midx)
-        tm.assert_numpy_array_equal(result, np.arange(9, dtype=np.intp))
-
-    def test_hash_collisions(self):
-        # non-smoke test that we don't get hash collisions
-
-        index = MultiIndex.from_product([np.arange(1000), np.arange(1000)],
-                                        names=['one', 'two'])
-        result = index.get_indexer(index.values)
-        tm.assert_numpy_array_equal(result, np.arange(
-            len(index), dtype='intp'))
-
-        for i in [0, 1, len(index) - 2, len(index) - 1]:
-            result = index.get_loc(index[i])
-            assert result == i
-
-    def test_format(self):
-        self.index.format()
-        self.index[:0].format()
-
-    def test_format_integer_names(self):
-        index = MultiIndex(levels=[[0, 1], [0, 1]],
-                           labels=[[0, 0, 1, 1], [0, 1, 0, 1]], names=[0, 1])
-        index.format(names=True)
-
-    def test_format_sparse_display(self):
-        index = MultiIndex(levels=[[0, 1], [0, 1], [0, 1], [0]],
-                           labels=[[0, 0, 0, 1, 1, 1], [0, 0, 1, 0, 0, 1],
-                                   [0, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0]])
-
-        result = index.format()
-        assert result[3] == '1  0  0  0'
-
-    def test_format_sparse_config(self):
-        warn_filters = warnings.filters
-        warnings.filterwarnings('ignore', category=FutureWarning,
-                                module=".*format")
-        # GH1538
-        pd.set_option('display.multi_sparse', False)
-
-        result = self.index.format()
-        assert result[1] == 'foo  two'
-
-        tm.reset_display_options()
-
-        warnings.filters = warn_filters
-
-    def test_to_frame(self):
-        tuples = [(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two')]
-
-        index = MultiIndex.from_tuples(tuples)
-        result = index.to_frame(index=False)
-        expected = DataFrame(tuples)
-        tm.assert_frame_equal(result, expected)
-
-        result = index.to_frame()
-        expected.index = index
-        tm.assert_frame_equal(result, expected)
-
-        tuples = [(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two')]
-        index = MultiIndex.from_tuples(tuples, names=['first', 'second'])
-        result = index.to_frame(index=False)
-        expected = DataFrame(tuples)
-        expected.columns = ['first', 'second']
-        tm.assert_frame_equal(result, expected)
-
-        result = index.to_frame()
-        expected.index = index
-        tm.assert_frame_equal(result, expected)
-
-        index = MultiIndex.from_product([range(5),
-                                         pd.date_range('20130101', periods=3)])
-        result = index.to_frame(index=False)
-        expected = DataFrame(
-            {0: np.repeat(np.arange(5, dtype='int64'), 3),
-             1: np.tile(pd.date_range('20130101', periods=3), 5)})
-        tm.assert_frame_equal(result, expected)
-
-        index = MultiIndex.from_product([range(5),
-                                         pd.date_range('20130101', periods=3)])
-        result = index.to_frame()
-        expected.index = index
-        tm.assert_frame_equal(result, expected)
-
-    def test_to_hierarchical(self):
-        # GH21613
-        index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), (
-            2, 'two')])
-        with tm.assert_produces_warning(FutureWarning):
-            result = index.to_hierarchical(3)
-        expected = MultiIndex(levels=[[1, 2], ['one', 'two']],
-                              labels=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
-                                      [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]])
-        tm.assert_index_equal(result, expected)
-        assert result.names == index.names
-
-        # K > 1
-        with tm.assert_produces_warning(FutureWarning):
-            result = index.to_hierarchical(3, 2)
-        expected = MultiIndex(levels=[[1, 2], ['one', 'two']],
-                              labels=[[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
-                                      [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]])
-        tm.assert_index_equal(result, expected)
-        assert result.names == index.names
-
-        # non-sorted
-        index = MultiIndex.from_tuples([(2, 'c'), (1, 'b'),
-                                        (2, 'a'),
(2, 'b')], - names=['N1', 'N2']) - with tm.assert_produces_warning(FutureWarning): - result = index.to_hierarchical(2) - expected = MultiIndex.from_tuples([(2, 'c'), (2, 'c'), (1, 'b'), - (1, 'b'), - (2, 'a'), (2, 'a'), - (2, 'b'), (2, 'b')], - names=['N1', 'N2']) - tm.assert_index_equal(result, expected) - assert result.names == index.names - - def test_bounds(self): - self.index._bounds - - def test_equals_multi(self): - assert self.index.equals(self.index) - assert not self.index.equals(self.index.values) - assert self.index.equals(Index(self.index.values)) - - assert self.index.equal_levels(self.index) - assert not self.index.equals(self.index[:-1]) - assert not self.index.equals(self.index[-1]) - - # different number of levels - index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index( - lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( - [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])]) - - index2 = MultiIndex(levels=index.levels[:-1], labels=index.labels[:-1]) - assert not index.equals(index2) - assert not index.equal_levels(index2) - - # levels are different - major_axis = Index(lrange(4)) - minor_axis = Index(lrange(2)) - - major_labels = np.array([0, 0, 1, 2, 2, 3]) - minor_labels = np.array([0, 1, 0, 0, 1, 0]) - - index = MultiIndex(levels=[major_axis, minor_axis], - labels=[major_labels, minor_labels]) - assert not self.index.equals(index) - assert not self.index.equal_levels(index) - - # some of the labels are different - major_axis = Index(['foo', 'bar', 'baz', 'qux']) - minor_axis = Index(['one', 'two']) - - major_labels = np.array([0, 0, 2, 2, 3, 3]) - minor_labels = np.array([0, 1, 0, 1, 0, 1]) - - index = MultiIndex(levels=[major_axis, minor_axis], - labels=[major_labels, minor_labels]) - assert not self.index.equals(index) - - def test_equals_missing_values(self): - # make sure take is not using -1 - i = pd.MultiIndex.from_tuples([(0, pd.NaT), - (0, pd.Timestamp('20130101'))]) - result = 
i[0:1].equals(i[0]) - assert not result - result = i[1:2].equals(i[1]) - assert not result - - def test_identical(self): - mi = self.index.copy() - mi2 = self.index.copy() - assert mi.identical(mi2) - - mi = mi.set_names(['new1', 'new2']) - assert mi.equals(mi2) - assert not mi.identical(mi2) - - mi2 = mi2.set_names(['new1', 'new2']) - assert mi.identical(mi2) - - mi3 = Index(mi.tolist(), names=mi.names) - mi4 = Index(mi.tolist(), names=mi.names, tupleize_cols=False) - assert mi.identical(mi3) - assert not mi.identical(mi4) - assert mi.equals(mi4) - - def test_is_(self): - mi = MultiIndex.from_tuples(lzip(range(10), range(10))) - assert mi.is_(mi) - assert mi.is_(mi.view()) - assert mi.is_(mi.view().view().view().view()) - mi2 = mi.view() - # names are metadata, they don't change id - mi2.names = ["A", "B"] - assert mi2.is_(mi) - assert mi.is_(mi2) - - assert mi.is_(mi.set_names(["C", "D"])) - mi2 = mi.view() - mi2.set_names(["E", "F"], inplace=True) - assert mi.is_(mi2) - # levels are inherent properties, they change identity - mi3 = mi2.set_levels([lrange(10), lrange(10)]) - assert not mi3.is_(mi2) - # shouldn't change - assert mi2.is_(mi) - mi4 = mi3.view() - - # GH 17464 - Remove duplicate MultiIndex levels - mi4.set_levels([lrange(10), lrange(10)], inplace=True) - assert not mi4.is_(mi3) - mi5 = mi.view() - mi5.set_levels(mi5.levels, inplace=True) - assert not mi5.is_(mi) - - def test_union(self): - piece1 = self.index[:5][::-1] - piece2 = self.index[3:] - - the_union = piece1 | piece2 - - tups = sorted(self.index.values) - expected = MultiIndex.from_tuples(tups) - - assert the_union.equals(expected) - - # corner case, pass self or empty thing: - the_union = self.index.union(self.index) - assert the_union is self.index - - the_union = self.index.union(self.index[:0]) - assert the_union is self.index - - # won't work in python 3 - # tuples = self.index.values - # result = self.index[:4] | tuples[4:] - # assert result.equals(tuples) - - # not valid for python 3 
- # def test_union_with_regular_index(self): - # other = Index(['A', 'B', 'C']) - - # result = other.union(self.index) - # assert ('foo', 'one') in result - # assert 'B' in result - - # result2 = self.index.union(other) - # assert result.equals(result2) - - def test_intersection(self): - piece1 = self.index[:5][::-1] - piece2 = self.index[3:] - - the_int = piece1 & piece2 - tups = sorted(self.index[3:5].values) - expected = MultiIndex.from_tuples(tups) - assert the_int.equals(expected) - - # corner case, pass self - the_int = self.index.intersection(self.index) - assert the_int is self.index - - # empty intersection: disjoint - empty = self.index[:2] & self.index[2:] - expected = self.index[:0] - assert empty.equals(expected) - - # can't do in python 3 - # tuples = self.index.values - # result = self.index & tuples - # assert result.equals(tuples) - - def test_sub(self): - - first = self.index - - # - now raises (previously was set op difference) - with pytest.raises(TypeError): - first - self.index[-3:] - with pytest.raises(TypeError): - self.index[-3:] - first - with pytest.raises(TypeError): - self.index[-3:] - first.tolist() - with pytest.raises(TypeError): - first.tolist() - self.index[-3:] - - def test_difference(self): - - first = self.index - result = first.difference(self.index[-3:]) - expected = MultiIndex.from_tuples(sorted(self.index[:-3].values), - sortorder=0, - names=self.index.names) - - assert isinstance(result, MultiIndex) - assert result.equals(expected) - assert result.names == self.index.names - - # empty difference: reflexive - result = self.index.difference(self.index) - expected = self.index[:0] - assert result.equals(expected) - assert result.names == self.index.names - - # empty difference: superset - result = self.index[-3:].difference(self.index) - expected = self.index[:0] - assert result.equals(expected) - assert result.names == self.index.names - - # empty difference: degenerate - result = self.index[:0].difference(self.index) - 
expected = self.index[:0] - assert result.equals(expected) - assert result.names == self.index.names - - # names not the same - chunklet = self.index[-3:] - chunklet.names = ['foo', 'baz'] - result = first.difference(chunklet) - assert result.names == (None, None) - - # empty, but non-equal - result = self.index.difference(self.index.sortlevel(1)[0]) - assert len(result) == 0 - - # raise Exception called with non-MultiIndex - result = first.difference(first.values) - assert result.equals(first[:0]) - - # name from empty array - result = first.difference([]) - assert first.equals(result) - assert first.names == result.names - - # name from non-empty array - result = first.difference([('foo', 'one')]) - expected = pd.MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'), ( - 'foo', 'two'), ('qux', 'one'), ('qux', 'two')]) - expected.names = first.names - assert first.names == result.names - tm.assert_raises_regex(TypeError, "other must be a MultiIndex " - "or a list of tuples", - first.difference, [1, 2, 3, 4, 5]) - - def test_from_tuples(self): - tm.assert_raises_regex(TypeError, 'Cannot infer number of levels ' - 'from empty list', - MultiIndex.from_tuples, []) - - expected = MultiIndex(levels=[[1, 3], [2, 4]], - labels=[[0, 1], [0, 1]], - names=['a', 'b']) - - # input tuples - result = MultiIndex.from_tuples(((1, 2), (3, 4)), names=['a', 'b']) - tm.assert_index_equal(result, expected) - - def test_from_tuples_iterator(self): - # GH 18434 - # input iterator for tuples - expected = MultiIndex(levels=[[1, 3], [2, 4]], - labels=[[0, 1], [0, 1]], - names=['a', 'b']) - - result = MultiIndex.from_tuples(zip([1, 3], [2, 4]), names=['a', 'b']) - tm.assert_index_equal(result, expected) - - # input non-iterables - with tm.assert_raises_regex( - TypeError, 'Input must be a list / sequence of tuple-likes.'): - MultiIndex.from_tuples(0) - - def test_from_tuples_empty(self): - # GH 16777 - result = MultiIndex.from_tuples([], names=['a', 'b']) - expected = 
MultiIndex.from_arrays(arrays=[[], []], - names=['a', 'b']) - tm.assert_index_equal(result, expected) - - def test_argsort(self): - result = self.index.argsort() - expected = self.index.values.argsort() - tm.assert_numpy_array_equal(result, expected) - - def test_sortlevel(self): - import random - - tuples = list(self.index) - random.shuffle(tuples) - - index = MultiIndex.from_tuples(tuples) - - sorted_idx, _ = index.sortlevel(0) - expected = MultiIndex.from_tuples(sorted(tuples)) - assert sorted_idx.equals(expected) - - sorted_idx, _ = index.sortlevel(0, ascending=False) - assert sorted_idx.equals(expected[::-1]) - - sorted_idx, _ = index.sortlevel(1) - by1 = sorted(tuples, key=lambda x: (x[1], x[0])) - expected = MultiIndex.from_tuples(by1) - assert sorted_idx.equals(expected) - - sorted_idx, _ = index.sortlevel(1, ascending=False) - assert sorted_idx.equals(expected[::-1]) - - def test_sortlevel_not_sort_remaining(self): - mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) - sorted_idx, _ = mi.sortlevel('A', sort_remaining=False) - assert sorted_idx.equals(mi) - - def test_sortlevel_deterministic(self): - tuples = [('bar', 'one'), ('foo', 'two'), ('qux', 'two'), - ('foo', 'one'), ('baz', 'two'), ('qux', 'one')] - - index = MultiIndex.from_tuples(tuples) - - sorted_idx, _ = index.sortlevel(0) - expected = MultiIndex.from_tuples(sorted(tuples)) - assert sorted_idx.equals(expected) - - sorted_idx, _ = index.sortlevel(0, ascending=False) - assert sorted_idx.equals(expected[::-1]) - - sorted_idx, _ = index.sortlevel(1) - by1 = sorted(tuples, key=lambda x: (x[1], x[0])) - expected = MultiIndex.from_tuples(by1) - assert sorted_idx.equals(expected) - - sorted_idx, _ = index.sortlevel(1, ascending=False) - assert sorted_idx.equals(expected[::-1]) - - def test_dims(self): - pass - - def test_drop(self): - dropped = self.index.drop([('foo', 'two'), ('qux', 'one')]) - - index = MultiIndex.from_tuples([('foo', 'two'), ('qux', 'one')]) - dropped2 = 
self.index.drop(index) - - expected = self.index[[0, 2, 3, 5]] - tm.assert_index_equal(dropped, expected) - tm.assert_index_equal(dropped2, expected) - - dropped = self.index.drop(['bar']) - expected = self.index[[0, 1, 3, 4, 5]] - tm.assert_index_equal(dropped, expected) - - dropped = self.index.drop('foo') - expected = self.index[[2, 3, 4, 5]] - tm.assert_index_equal(dropped, expected) - - index = MultiIndex.from_tuples([('bar', 'two')]) - pytest.raises(KeyError, self.index.drop, [('bar', 'two')]) - pytest.raises(KeyError, self.index.drop, index) - pytest.raises(KeyError, self.index.drop, ['foo', 'two']) - - # partially correct argument - mixed_index = MultiIndex.from_tuples([('qux', 'one'), ('bar', 'two')]) - pytest.raises(KeyError, self.index.drop, mixed_index) - - # error='ignore' - dropped = self.index.drop(index, errors='ignore') - expected = self.index[[0, 1, 2, 3, 4, 5]] - tm.assert_index_equal(dropped, expected) - - dropped = self.index.drop(mixed_index, errors='ignore') - expected = self.index[[0, 1, 2, 3, 5]] - tm.assert_index_equal(dropped, expected) - - dropped = self.index.drop(['foo', 'two'], errors='ignore') - expected = self.index[[2, 3, 4, 5]] - tm.assert_index_equal(dropped, expected) - - # mixed partial / full drop - dropped = self.index.drop(['foo', ('qux', 'one')]) - expected = self.index[[2, 3, 5]] - tm.assert_index_equal(dropped, expected) - - # mixed partial / full drop / error='ignore' - mixed_index = ['foo', ('qux', 'one'), 'two'] - pytest.raises(KeyError, self.index.drop, mixed_index) - dropped = self.index.drop(mixed_index, errors='ignore') - expected = self.index[[2, 3, 5]] - tm.assert_index_equal(dropped, expected) - - def test_droplevel_with_names(self): - index = self.index[self.index.get_loc('foo')] - dropped = index.droplevel(0) - assert dropped.name == 'second' - - index = MultiIndex( - levels=[Index(lrange(4)), Index(lrange(4)), Index(lrange(4))], - labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( - [0, 1, 0, 0, 0, 1, 0, 
1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])], - names=['one', 'two', 'three']) - dropped = index.droplevel(0) - assert dropped.names == ('two', 'three') - - dropped = index.droplevel('two') - expected = index.droplevel(1) - assert dropped.equals(expected) - - def test_droplevel_list(self): - index = MultiIndex( - levels=[Index(lrange(4)), Index(lrange(4)), Index(lrange(4))], - labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array( - [0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])], - names=['one', 'two', 'three']) - - dropped = index[:2].droplevel(['three', 'one']) - expected = index[:2].droplevel(2).droplevel(0) - assert dropped.equals(expected) - - dropped = index[:2].droplevel([]) - expected = index[:2] - assert dropped.equals(expected) - - with pytest.raises(ValueError): - index[:2].droplevel(['one', 'two', 'three']) - - with pytest.raises(KeyError): - index[:2].droplevel(['one', 'four']) - - def test_drop_not_lexsorted(self): - # GH 12078 - - # define the lexsorted version of the multi-index - tuples = [('a', ''), ('b1', 'c1'), ('b2', 'c2')] - lexsorted_mi = MultiIndex.from_tuples(tuples, names=['b', 'c']) - assert lexsorted_mi.is_lexsorted() - - # and the not-lexsorted version - df = pd.DataFrame(columns=['a', 'b', 'c', 'd'], - data=[[1, 'b1', 'c1', 3], [1, 'b2', 'c2', 4]]) - df = df.pivot_table(index='a', columns=['b', 'c'], values='d') - df = df.reset_index() - not_lexsorted_mi = df.columns - assert not not_lexsorted_mi.is_lexsorted() - - # compare the results - tm.assert_index_equal(lexsorted_mi, not_lexsorted_mi) - with tm.assert_produces_warning(PerformanceWarning): - tm.assert_index_equal(lexsorted_mi.drop('a'), - not_lexsorted_mi.drop('a')) - - def test_insert(self): - # key contained in all levels - new_index = self.index.insert(0, ('bar', 'two')) - assert new_index.equal_levels(self.index) - assert new_index[0] == ('bar', 'two') - - # key not contained in all levels - new_index = self.index.insert(0, ('abc', 'three')) - - exp0 = 
Index(list(self.index.levels[0]) + ['abc'], name='first') - tm.assert_index_equal(new_index.levels[0], exp0) - - exp1 = Index(list(self.index.levels[1]) + ['three'], name='second') - tm.assert_index_equal(new_index.levels[1], exp1) - assert new_index[0] == ('abc', 'three') - - # key wrong length - msg = "Item must have length equal to number of levels" - with tm.assert_raises_regex(ValueError, msg): - self.index.insert(0, ('foo2',)) - - left = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1]], - columns=['1st', '2nd', '3rd']) - left.set_index(['1st', '2nd'], inplace=True) - ts = left['3rd'].copy(deep=True) - - left.loc[('b', 'x'), '3rd'] = 2 - left.loc[('b', 'a'), '3rd'] = -1 - left.loc[('b', 'b'), '3rd'] = 3 - left.loc[('a', 'x'), '3rd'] = 4 - left.loc[('a', 'w'), '3rd'] = 5 - left.loc[('a', 'a'), '3rd'] = 6 - - ts.loc[('b', 'x')] = 2 - ts.loc['b', 'a'] = -1 - ts.loc[('b', 'b')] = 3 - ts.loc['a', 'x'] = 4 - ts.loc[('a', 'w')] = 5 - ts.loc['a', 'a'] = 6 - - right = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1], ['b', 'x', 2], - ['b', 'a', -1], ['b', 'b', 3], ['a', 'x', 4], - ['a', 'w', 5], ['a', 'a', 6]], - columns=['1st', '2nd', '3rd']) - right.set_index(['1st', '2nd'], inplace=True) - # FIXME data types changes to float because - # of intermediate nan insertion; - tm.assert_frame_equal(left, right, check_dtype=False) - tm.assert_series_equal(ts, right['3rd']) - - # GH9250 - idx = [('test1', i) for i in range(5)] + \ - [('test2', i) for i in range(6)] + \ - [('test', 17), ('test', 18)] - - left = pd.Series(np.linspace(0, 10, 11), - pd.MultiIndex.from_tuples(idx[:-2])) - - left.loc[('test', 17)] = 11 - left.loc[('test', 18)] = 12 - - right = pd.Series(np.linspace(0, 12, 13), - pd.MultiIndex.from_tuples(idx)) - - tm.assert_series_equal(left, right) - - def test_take_preserve_name(self): - taken = self.index.take([3, 0, 1]) - assert taken.names == self.index.names - - def test_take_fill_value(self): - # GH 12631 - vals = [['A', 'B'], - [pd.Timestamp('2011-01-01'), 
pd.Timestamp('2011-01-02')]] - idx = pd.MultiIndex.from_product(vals, names=['str', 'dt']) - - result = idx.take(np.array([1, 0, -1])) - exp_vals = [('A', pd.Timestamp('2011-01-02')), - ('A', pd.Timestamp('2011-01-01')), - ('B', pd.Timestamp('2011-01-02'))] - expected = pd.MultiIndex.from_tuples(exp_vals, names=['str', 'dt']) - tm.assert_index_equal(result, expected) - - # fill_value - result = idx.take(np.array([1, 0, -1]), fill_value=True) - exp_vals = [('A', pd.Timestamp('2011-01-02')), - ('A', pd.Timestamp('2011-01-01')), - (np.nan, pd.NaT)] - expected = pd.MultiIndex.from_tuples(exp_vals, names=['str', 'dt']) - tm.assert_index_equal(result, expected) - - # allow_fill=False - result = idx.take(np.array([1, 0, -1]), allow_fill=False, - fill_value=True) - exp_vals = [('A', pd.Timestamp('2011-01-02')), - ('A', pd.Timestamp('2011-01-01')), - ('B', pd.Timestamp('2011-01-02'))] - expected = pd.MultiIndex.from_tuples(exp_vals, names=['str', 'dt']) - tm.assert_index_equal(result, expected) - - msg = ('When allow_fill=True and fill_value is not None, ' - 'all indices must be >= -1') - with tm.assert_raises_regex(ValueError, msg): - idx.take(np.array([1, 0, -2]), fill_value=True) - with tm.assert_raises_regex(ValueError, msg): - idx.take(np.array([1, 0, -5]), fill_value=True) - - with pytest.raises(IndexError): - idx.take(np.array([1, -5])) - - def take_invalid_kwargs(self): - vals = [['A', 'B'], - [pd.Timestamp('2011-01-01'), pd.Timestamp('2011-01-02')]] - idx = pd.MultiIndex.from_product(vals, names=['str', 'dt']) - indices = [1, 2] - - msg = r"take\(\) got an unexpected keyword argument 'foo'" - tm.assert_raises_regex(TypeError, msg, idx.take, - indices, foo=2) - - msg = "the 'out' parameter is not supported" - tm.assert_raises_regex(ValueError, msg, idx.take, - indices, out=indices) - - msg = "the 'mode' parameter is not supported" - tm.assert_raises_regex(ValueError, msg, idx.take, - indices, mode='clip') - - @pytest.mark.parametrize('other', - [Index(['three', 
'one', 'two']), - Index(['one']), - Index(['one', 'three'])]) - def test_join_level(self, other, join_type): - join_index, lidx, ridx = other.join(self.index, how=join_type, - level='second', - return_indexers=True) - - exp_level = other.join(self.index.levels[1], how=join_type) - assert join_index.levels[0].equals(self.index.levels[0]) - assert join_index.levels[1].equals(exp_level) - - # pare down levels - mask = np.array( - [x[1] in exp_level for x in self.index], dtype=bool) - exp_values = self.index.values[mask] - tm.assert_numpy_array_equal(join_index.values, exp_values) - - if join_type in ('outer', 'inner'): - join_index2, ridx2, lidx2 = \ - self.index.join(other, how=join_type, level='second', - return_indexers=True) - - assert join_index.equals(join_index2) - tm.assert_numpy_array_equal(lidx, lidx2) - tm.assert_numpy_array_equal(ridx, ridx2) - tm.assert_numpy_array_equal(join_index2.values, exp_values) - - def test_join_level_corner_case(self): - # some corner cases - idx = Index(['three', 'one', 'two']) - result = idx.join(self.index, level='second') - assert isinstance(result, MultiIndex) - - tm.assert_raises_regex(TypeError, "Join.*MultiIndex.*ambiguous", - self.index.join, self.index, level=1) - - def test_join_self(self, join_type): - res = self.index - joined = res.join(res, how=join_type) - assert res is joined - - def test_join_multi(self): - # GH 10665 - midx = pd.MultiIndex.from_product( - [np.arange(4), np.arange(4)], names=['a', 'b']) - idx = pd.Index([1, 2, 5], name='b') - - # inner - jidx, lidx, ridx = midx.join(idx, how='inner', return_indexers=True) - exp_idx = pd.MultiIndex.from_product( - [np.arange(4), [1, 2]], names=['a', 'b']) - exp_lidx = np.array([1, 2, 5, 6, 9, 10, 13, 14], dtype=np.intp) - exp_ridx = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=np.intp) - tm.assert_index_equal(jidx, exp_idx) - tm.assert_numpy_array_equal(lidx, exp_lidx) - tm.assert_numpy_array_equal(ridx, exp_ridx) - # flip - jidx, ridx, lidx = idx.join(midx, 
how='inner', return_indexers=True) - tm.assert_index_equal(jidx, exp_idx) - tm.assert_numpy_array_equal(lidx, exp_lidx) - tm.assert_numpy_array_equal(ridx, exp_ridx) - - # keep MultiIndex - jidx, lidx, ridx = midx.join(idx, how='left', return_indexers=True) - exp_ridx = np.array([-1, 0, 1, -1, -1, 0, 1, -1, -1, 0, 1, -1, -1, 0, - 1, -1], dtype=np.intp) - tm.assert_index_equal(jidx, midx) - assert lidx is None - tm.assert_numpy_array_equal(ridx, exp_ridx) - # flip - jidx, ridx, lidx = idx.join(midx, how='right', return_indexers=True) - tm.assert_index_equal(jidx, midx) - assert lidx is None - tm.assert_numpy_array_equal(ridx, exp_ridx) - - def test_reindex(self): - result, indexer = self.index.reindex(list(self.index[:4])) - assert isinstance(result, MultiIndex) - self.check_level_names(result, self.index[:4].names) - - result, indexer = self.index.reindex(list(self.index)) - assert isinstance(result, MultiIndex) - assert indexer is None - self.check_level_names(result, self.index.names) - - def test_reindex_level(self): - idx = Index(['one']) - - target, indexer = self.index.reindex(idx, level='second') - target2, indexer2 = idx.reindex(self.index, level='second') - - exp_index = self.index.join(idx, level='second', how='right') - exp_index2 = self.index.join(idx, level='second', how='left') - - assert target.equals(exp_index) - exp_indexer = np.array([0, 2, 4]) - tm.assert_numpy_array_equal(indexer, exp_indexer, check_dtype=False) - - assert target2.equals(exp_index2) - exp_indexer2 = np.array([0, -1, 0, -1, 0, -1]) - tm.assert_numpy_array_equal(indexer2, exp_indexer2, check_dtype=False) - - tm.assert_raises_regex(TypeError, "Fill method not supported", - self.index.reindex, self.index, - method='pad', level='second') - - tm.assert_raises_regex(TypeError, "Fill method not supported", - idx.reindex, idx, method='bfill', - level='first') - - def test_duplicates(self): - assert not self.index.has_duplicates - assert self.index.append(self.index).has_duplicates - - 
index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[ - [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]]) - assert index.has_duplicates - - # GH 9075 - t = [(u('x'), u('out'), u('z'), 5, u('y'), u('in'), u('z'), 169), - (u('x'), u('out'), u('z'), 7, u('y'), u('in'), u('z'), 119), - (u('x'), u('out'), u('z'), 9, u('y'), u('in'), u('z'), 135), - (u('x'), u('out'), u('z'), 13, u('y'), u('in'), u('z'), 145), - (u('x'), u('out'), u('z'), 14, u('y'), u('in'), u('z'), 158), - (u('x'), u('out'), u('z'), 16, u('y'), u('in'), u('z'), 122), - (u('x'), u('out'), u('z'), 17, u('y'), u('in'), u('z'), 160), - (u('x'), u('out'), u('z'), 18, u('y'), u('in'), u('z'), 180), - (u('x'), u('out'), u('z'), 20, u('y'), u('in'), u('z'), 143), - (u('x'), u('out'), u('z'), 21, u('y'), u('in'), u('z'), 128), - (u('x'), u('out'), u('z'), 22, u('y'), u('in'), u('z'), 129), - (u('x'), u('out'), u('z'), 25, u('y'), u('in'), u('z'), 111), - (u('x'), u('out'), u('z'), 28, u('y'), u('in'), u('z'), 114), - (u('x'), u('out'), u('z'), 29, u('y'), u('in'), u('z'), 121), - (u('x'), u('out'), u('z'), 31, u('y'), u('in'), u('z'), 126), - (u('x'), u('out'), u('z'), 32, u('y'), u('in'), u('z'), 155), - (u('x'), u('out'), u('z'), 33, u('y'), u('in'), u('z'), 123), - (u('x'), u('out'), u('z'), 12, u('y'), u('in'), u('z'), 144)] - - index = pd.MultiIndex.from_tuples(t) - assert not index.has_duplicates - - # handle int64 overflow if possible - def check(nlevels, with_nulls): - labels = np.tile(np.arange(500), 2) - level = np.arange(500) - - if with_nulls: # inject some null values - labels[500] = -1 # common nan value - labels = [labels.copy() for i in range(nlevels)] - for i in range(nlevels): - labels[i][500 + i - nlevels // 2] = -1 - - labels += [np.array([-1, 1]).repeat(500)] - else: - labels = [labels] * nlevels + [np.arange(2).repeat(500)] - - levels = [level] * nlevels + [[0, 1]] - - # no dups - index = MultiIndex(levels=levels, labels=labels) - assert not index.has_duplicates - - # with a dup - if 
with_nulls: - def f(a): - return np.insert(a, 1000, a[0]) - labels = list(map(f, labels)) - index = MultiIndex(levels=levels, labels=labels) - else: - values = index.values.tolist() - index = MultiIndex.from_tuples(values + [values[0]]) - - assert index.has_duplicates - - # no overflow - check(4, False) - check(4, True) - - # overflow possible - check(8, False) - check(8, True) - - # GH 9125 - n, k = 200, 5000 - levels = [np.arange(n), tm.makeStringIndex(n), 1000 + np.arange(n)] - labels = [np.random.choice(n, k * n) for lev in levels] - mi = MultiIndex(levels=levels, labels=labels) - - for keep in ['first', 'last', False]: - left = mi.duplicated(keep=keep) - right = pd._libs.hashtable.duplicated_object(mi.values, keep=keep) - tm.assert_numpy_array_equal(left, right) - - # GH5873 - for a in [101, 102]: - mi = MultiIndex.from_arrays([[101, a], [3.5, np.nan]]) - assert not mi.has_duplicates - - with warnings.catch_warnings(record=True): - # Deprecated - see GH20239 - assert mi.get_duplicates().equals(MultiIndex.from_arrays( - [[], []])) - - tm.assert_numpy_array_equal(mi.duplicated(), np.zeros( - 2, dtype='bool')) - - for n in range(1, 6): # 1st level shape - for m in range(1, 5): # 2nd level shape - # all possible unique combinations, including nan - lab = product(range(-1, n), range(-1, m)) - mi = MultiIndex(levels=[list('abcde')[:n], list('WXYZ')[:m]], - labels=np.random.permutation(list(lab)).T) - assert len(mi) == (n + 1) * (m + 1) - assert not mi.has_duplicates - - with warnings.catch_warnings(record=True): - # Deprecated - see GH20239 - assert mi.get_duplicates().equals(MultiIndex.from_arrays( - [[], []])) - - tm.assert_numpy_array_equal(mi.duplicated(), np.zeros( - len(mi), dtype='bool')) - - def test_duplicate_meta_data(self): - # GH 10115 - index = MultiIndex( - levels=[[0, 1], [0, 1, 2]], - labels=[[0, 0, 0, 0, 1, 1, 1], - [0, 1, 2, 0, 0, 1, 2]]) - - for idx in [index, - index.set_names([None, None]), - index.set_names([None, 'Num']), - 
index.set_names(['Upper', 'Num']), ]: - assert idx.has_duplicates - assert idx.drop_duplicates().names == idx.names - - def test_get_unique_index(self): - idx = self.index[[0, 1, 0, 1, 1, 0, 0]] - expected = self.index._shallow_copy(idx[[0, 1]]) - - for dropna in [False, True]: - result = idx._get_unique_index(dropna=dropna) - assert result.unique - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize('names', [None, ['first', 'second']]) - def test_unique(self, names): - mi = pd.MultiIndex.from_arrays([[1, 2, 1, 2], [1, 1, 1, 2]], - names=names) - - res = mi.unique() - exp = pd.MultiIndex.from_arrays([[1, 2, 2], [1, 1, 2]], names=mi.names) - tm.assert_index_equal(res, exp) - - mi = pd.MultiIndex.from_arrays([list('aaaa'), list('abab')], - names=names) - res = mi.unique() - exp = pd.MultiIndex.from_arrays([list('aa'), list('ab')], - names=mi.names) - tm.assert_index_equal(res, exp) - - mi = pd.MultiIndex.from_arrays([list('aaaa'), list('aaaa')], - names=names) - res = mi.unique() - exp = pd.MultiIndex.from_arrays([['a'], ['a']], names=mi.names) - tm.assert_index_equal(res, exp) - - # GH #20568 - empty MI - mi = pd.MultiIndex.from_arrays([[], []], names=names) - res = mi.unique() - tm.assert_index_equal(mi, res) - - @pytest.mark.parametrize('level', [0, 'first', 1, 'second']) - def test_unique_level(self, level): - # GH #17896 - with level= argument - result = self.index.unique(level=level) - expected = self.index.get_level_values(level).unique() - tm.assert_index_equal(result, expected) - - # With already unique level - mi = pd.MultiIndex.from_arrays([[1, 3, 2, 4], [1, 3, 2, 5]], - names=['first', 'second']) - result = mi.unique(level=level) - expected = mi.get_level_values(level) - tm.assert_index_equal(result, expected) - - # With empty MI - mi = pd.MultiIndex.from_arrays([[], []], names=['first', 'second']) - result = mi.unique(level=level) - expected = mi.get_level_values(level) - - def test_unique_datetimelike(self): - idx1 = 
pd.DatetimeIndex(['2015-01-01', '2015-01-01', '2015-01-01', - '2015-01-01', 'NaT', 'NaT']) - idx2 = pd.DatetimeIndex(['2015-01-01', '2015-01-01', '2015-01-02', - '2015-01-02', 'NaT', '2015-01-01'], - tz='Asia/Tokyo') - result = pd.MultiIndex.from_arrays([idx1, idx2]).unique() - - eidx1 = pd.DatetimeIndex(['2015-01-01', '2015-01-01', 'NaT', 'NaT']) - eidx2 = pd.DatetimeIndex(['2015-01-01', '2015-01-02', - 'NaT', '2015-01-01'], - tz='Asia/Tokyo') - exp = pd.MultiIndex.from_arrays([eidx1, eidx2]) - tm.assert_index_equal(result, exp) - - def test_tolist(self): - result = self.index.tolist() - exp = list(self.index.values) - assert result == exp - - def test_repr_with_unicode_data(self): - with pd.core.config.option_context("display.encoding", 'UTF-8'): - d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} - index = pd.DataFrame(d).set_index(["a", "b"]).index - assert "\\u" not in repr(index) # we don't want unicode-escaped - - def test_repr_roundtrip(self): - - mi = MultiIndex.from_product([list('ab'), range(3)], - names=['first', 'second']) - str(mi) - - if PY3: - tm.assert_index_equal(eval(repr(mi)), mi, exact=True) - else: - result = eval(repr(mi)) - # string coerces to unicode - tm.assert_index_equal(result, mi, exact=False) - assert mi.get_level_values('first').inferred_type == 'string' - assert result.get_level_values('first').inferred_type == 'unicode' - - mi_u = MultiIndex.from_product( - [list(u'ab'), range(3)], names=['first', 'second']) - result = eval(repr(mi_u)) - tm.assert_index_equal(result, mi_u, exact=True) - - # formatting - if PY3: - str(mi) - else: - compat.text_type(mi) - - # long format - mi = MultiIndex.from_product([list('abcdefg'), range(10)], - names=['first', 'second']) - - if PY3: - tm.assert_index_equal(eval(repr(mi)), mi, exact=True) - else: - result = eval(repr(mi)) - # string coerces to unicode - tm.assert_index_equal(result, mi, exact=False) - assert mi.get_level_values('first').inferred_type == 'string' - assert 
result.get_level_values('first').inferred_type == 'unicode' - - result = eval(repr(mi_u)) - tm.assert_index_equal(result, mi_u, exact=True) - - def test_str(self): - # tested elsewhere - pass - - def test_unicode_string_with_unicode(self): - d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} - idx = pd.DataFrame(d).set_index(["a", "b"]).index - - if PY3: - str(idx) - else: - compat.text_type(idx) - - def test_bytestring_with_unicode(self): - d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]} - idx = pd.DataFrame(d).set_index(["a", "b"]).index - - if PY3: - bytes(idx) - else: - str(idx) - - def test_slice_keep_name(self): - x = MultiIndex.from_tuples([('a', 'b'), (1, 2), ('c', 'd')], - names=['x', 'y']) - assert x[1:].names == x.names - - def test_isna_behavior(self): - # should not segfault GH5123 - # NOTE: if MI representation changes, may make sense to allow - # isna(MI) - with pytest.raises(NotImplementedError): - pd.isna(self.index) - - def test_level_setting_resets_attributes(self): - ind = pd.MultiIndex.from_arrays([ - ['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3] - ]) - assert ind.is_monotonic - ind.set_levels([['A', 'B'], [1, 3, 2]], inplace=True) - # if this fails, probably didn't reset the cache correctly. 
- assert not ind.is_monotonic - - def test_is_monotonic_increasing(self): - i = MultiIndex.from_product([np.arange(10), - np.arange(10)], names=['one', 'two']) - assert i.is_monotonic - assert i._is_strictly_monotonic_increasing - assert Index(i.values).is_monotonic - assert i._is_strictly_monotonic_increasing - - i = MultiIndex.from_product([np.arange(10, 0, -1), - np.arange(10)], names=['one', 'two']) - assert not i.is_monotonic - assert not i._is_strictly_monotonic_increasing - assert not Index(i.values).is_monotonic - assert not Index(i.values)._is_strictly_monotonic_increasing - - i = MultiIndex.from_product([np.arange(10), - np.arange(10, 0, -1)], - names=['one', 'two']) - assert not i.is_monotonic - assert not i._is_strictly_monotonic_increasing - assert not Index(i.values).is_monotonic - assert not Index(i.values)._is_strictly_monotonic_increasing - - i = MultiIndex.from_product([[1.0, np.nan, 2.0], ['a', 'b', 'c']]) - assert not i.is_monotonic - assert not i._is_strictly_monotonic_increasing - assert not Index(i.values).is_monotonic - assert not Index(i.values)._is_strictly_monotonic_increasing - - # string ordering - i = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], - ['one', 'two', 'three']], - labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], - [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], - names=['first', 'second']) - assert not i.is_monotonic - assert not Index(i.values).is_monotonic - assert not i._is_strictly_monotonic_increasing - assert not Index(i.values)._is_strictly_monotonic_increasing - - i = MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], - ['mom', 'next', 'zenith']], - labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], - [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], - names=['first', 'second']) - assert i.is_monotonic - assert Index(i.values).is_monotonic - assert i._is_strictly_monotonic_increasing - assert Index(i.values)._is_strictly_monotonic_increasing - - # mixed levels, hits the TypeError - i = MultiIndex( - levels=[[1, 2, 3, 4], ['gb00b03mlx29', 'lu0197800237', - 
'nl0000289783', - 'nl0000289965', 'nl0000301109']], - labels=[[0, 1, 1, 2, 2, 2, 3], [4, 2, 0, 0, 1, 3, -1]], - names=['household_id', 'asset_id']) - - assert not i.is_monotonic - assert not i._is_strictly_monotonic_increasing - - # empty - i = MultiIndex.from_arrays([[], []]) - assert i.is_monotonic - assert Index(i.values).is_monotonic - assert i._is_strictly_monotonic_increasing - assert Index(i.values)._is_strictly_monotonic_increasing - - def test_is_monotonic_decreasing(self): - i = MultiIndex.from_product([np.arange(9, -1, -1), - np.arange(9, -1, -1)], - names=['one', 'two']) - assert i.is_monotonic_decreasing - assert i._is_strictly_monotonic_decreasing - assert Index(i.values).is_monotonic_decreasing - assert i._is_strictly_monotonic_decreasing - - i = MultiIndex.from_product([np.arange(10), - np.arange(10, 0, -1)], - names=['one', 'two']) - assert not i.is_monotonic_decreasing - assert not i._is_strictly_monotonic_decreasing - assert not Index(i.values).is_monotonic_decreasing - assert not Index(i.values)._is_strictly_monotonic_decreasing - - i = MultiIndex.from_product([np.arange(10, 0, -1), - np.arange(10)], names=['one', 'two']) - assert not i.is_monotonic_decreasing - assert not i._is_strictly_monotonic_decreasing - assert not Index(i.values).is_monotonic_decreasing - assert not Index(i.values)._is_strictly_monotonic_decreasing - - i = MultiIndex.from_product([[2.0, np.nan, 1.0], ['c', 'b', 'a']]) - assert not i.is_monotonic_decreasing - assert not i._is_strictly_monotonic_decreasing - assert not Index(i.values).is_monotonic_decreasing - assert not Index(i.values)._is_strictly_monotonic_decreasing - - # string ordering - i = MultiIndex(levels=[['qux', 'foo', 'baz', 'bar'], - ['three', 'two', 'one']], - labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], - [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], - names=['first', 'second']) - assert not i.is_monotonic_decreasing - assert not Index(i.values).is_monotonic_decreasing - assert not i._is_strictly_monotonic_decreasing - assert 
not Index(i.values)._is_strictly_monotonic_decreasing - - i = MultiIndex(levels=[['qux', 'foo', 'baz', 'bar'], - ['zenith', 'next', 'mom']], - labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], - [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], - names=['first', 'second']) - assert i.is_monotonic_decreasing - assert Index(i.values).is_monotonic_decreasing - assert i._is_strictly_monotonic_decreasing - assert Index(i.values)._is_strictly_monotonic_decreasing - - # mixed levels, hits the TypeError - i = MultiIndex( - levels=[[4, 3, 2, 1], ['nl0000301109', 'nl0000289965', - 'nl0000289783', 'lu0197800237', - 'gb00b03mlx29']], - labels=[[0, 1, 1, 2, 2, 2, 3], [4, 2, 0, 0, 1, 3, -1]], - names=['household_id', 'asset_id']) - - assert not i.is_monotonic_decreasing - assert not i._is_strictly_monotonic_decreasing - - # empty - i = MultiIndex.from_arrays([[], []]) - assert i.is_monotonic_decreasing - assert Index(i.values).is_monotonic_decreasing - assert i._is_strictly_monotonic_decreasing - assert Index(i.values)._is_strictly_monotonic_decreasing - - def test_is_strictly_monotonic_increasing(self): - idx = pd.MultiIndex(levels=[['bar', 'baz'], ['mom', 'next']], - labels=[[0, 0, 1, 1], [0, 0, 0, 1]]) - assert idx.is_monotonic_increasing - assert not idx._is_strictly_monotonic_increasing - - def test_is_strictly_monotonic_decreasing(self): - idx = pd.MultiIndex(levels=[['baz', 'bar'], ['next', 'mom']], - labels=[[0, 0, 1, 1], [0, 0, 0, 1]]) - assert idx.is_monotonic_decreasing - assert not idx._is_strictly_monotonic_decreasing - - def test_reconstruct_sort(self): - - # starts off lexsorted & monotonic - mi = MultiIndex.from_arrays([ - ['A', 'A', 'B', 'B', 'B'], [1, 2, 1, 2, 3] - ]) - assert mi.is_lexsorted() - assert mi.is_monotonic - - recons = mi._sort_levels_monotonic() - assert recons.is_lexsorted() - assert recons.is_monotonic - assert mi is recons - - assert mi.equals(recons) - assert Index(mi.values).equals(Index(recons.values)) - - # cannot convert to lexsorted - mi = 
pd.MultiIndex.from_tuples([('z', 'a'), ('x', 'a'), ('y', 'b'), - ('x', 'b'), ('y', 'a'), ('z', 'b')], - names=['one', 'two']) - assert not mi.is_lexsorted() - assert not mi.is_monotonic - - recons = mi._sort_levels_monotonic() - assert not recons.is_lexsorted() - assert not recons.is_monotonic - - assert mi.equals(recons) - assert Index(mi.values).equals(Index(recons.values)) - - # cannot convert to lexsorted - mi = MultiIndex(levels=[['b', 'd', 'a'], [1, 2, 3]], - labels=[[0, 1, 0, 2], [2, 0, 0, 1]], - names=['col1', 'col2']) - assert not mi.is_lexsorted() - assert not mi.is_monotonic - - recons = mi._sort_levels_monotonic() - assert not recons.is_lexsorted() - assert not recons.is_monotonic - - assert mi.equals(recons) - assert Index(mi.values).equals(Index(recons.values)) - - def test_reconstruct_remove_unused(self): - # xref to GH 2770 - df = DataFrame([['deleteMe', 1, 9], - ['keepMe', 2, 9], - ['keepMeToo', 3, 9]], - columns=['first', 'second', 'third']) - df2 = df.set_index(['first', 'second'], drop=False) - df2 = df2[df2['first'] != 'deleteMe'] - - # removed levels are there - expected = MultiIndex(levels=[['deleteMe', 'keepMe', 'keepMeToo'], - [1, 2, 3]], - labels=[[1, 2], [1, 2]], - names=['first', 'second']) - result = df2.index - tm.assert_index_equal(result, expected) - - expected = MultiIndex(levels=[['keepMe', 'keepMeToo'], - [2, 3]], - labels=[[0, 1], [0, 1]], - names=['first', 'second']) - result = df2.index.remove_unused_levels() - tm.assert_index_equal(result, expected) - - # idempotent - result2 = result.remove_unused_levels() - tm.assert_index_equal(result2, expected) - assert result2.is_(result) - - @pytest.mark.parametrize('level0', [['a', 'd', 'b'], - ['a', 'd', 'b', 'unused']]) - @pytest.mark.parametrize('level1', [['w', 'x', 'y', 'z'], - ['w', 'x', 'y', 'z', 'unused']]) - def test_remove_unused_nan(self, level0, level1): - # GH 18417 - mi = pd.MultiIndex(levels=[level0, level1], - labels=[[0, 2, -1, 1, -1], [0, 1, 2, 3, 2]]) - - result = 
mi.remove_unused_levels() - tm.assert_index_equal(result, mi) - for level in 0, 1: - assert('unused' not in result.levels[level]) - - @pytest.mark.parametrize('first_type,second_type', [ - ('int64', 'int64'), - ('datetime64[D]', 'str')]) - def test_remove_unused_levels_large(self, first_type, second_type): - # GH16556 - - # because tests should be deterministic (and this test in particular - # checks that levels are removed, which is not the case for every - # random input): - rng = np.random.RandomState(4) # seed is arbitrary value that works - - size = 1 << 16 - df = DataFrame(dict( - first=rng.randint(0, 1 << 13, size).astype(first_type), - second=rng.randint(0, 1 << 10, size).astype(second_type), - third=rng.rand(size))) - df = df.groupby(['first', 'second']).sum() - df = df[df.third < 0.1] - - result = df.index.remove_unused_levels() - assert len(result.levels[0]) < len(df.index.levels[0]) - assert len(result.levels[1]) < len(df.index.levels[1]) - assert result.equals(df.index) - - expected = df.reset_index().set_index(['first', 'second']).index - tm.assert_index_equal(result, expected) - - def test_isin(self): - values = [('foo', 2), ('bar', 3), ('quux', 4)] - - idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange( - 4)]) - result = idx.isin(values) - expected = np.array([False, False, True, True]) - tm.assert_numpy_array_equal(result, expected) - - # empty, return dtype bool - idx = MultiIndex.from_arrays([[], []]) - result = idx.isin(values) - assert len(result) == 0 - assert result.dtype == np.bool_ - - @pytest.mark.skipif(PYPY, reason="tuples cmp recursively on PyPy") - def test_isin_nan_not_pypy(self): - idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]]) - tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]), - np.array([False, False])) - tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]), - np.array([False, False])) - - @pytest.mark.skipif(not PYPY, reason="tuples cmp recursively on PyPy") - def 
test_isin_nan_pypy(self): - idx = MultiIndex.from_arrays([['foo', 'bar'], [1.0, np.nan]]) - tm.assert_numpy_array_equal(idx.isin([('bar', np.nan)]), - np.array([False, True])) - tm.assert_numpy_array_equal(idx.isin([('bar', float('nan'))]), - np.array([False, True])) - - def test_isin_level_kwarg(self): - idx = MultiIndex.from_arrays([['qux', 'baz', 'foo', 'bar'], np.arange( - 4)]) - - vals_0 = ['foo', 'bar', 'quux'] - vals_1 = [2, 3, 10] - - expected = np.array([False, False, True, True]) - tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=0)) - tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level=-2)) - - tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=1)) - tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level=-1)) - - pytest.raises(IndexError, idx.isin, vals_0, level=5) - pytest.raises(IndexError, idx.isin, vals_0, level=-5) - - pytest.raises(KeyError, idx.isin, vals_0, level=1.0) - pytest.raises(KeyError, idx.isin, vals_1, level=-1.0) - pytest.raises(KeyError, idx.isin, vals_1, level='A') - - idx.names = ['A', 'B'] - tm.assert_numpy_array_equal(expected, idx.isin(vals_0, level='A')) - tm.assert_numpy_array_equal(expected, idx.isin(vals_1, level='B')) - - pytest.raises(KeyError, idx.isin, vals_1, level='C') - - def test_reindex_preserves_names_when_target_is_list_or_ndarray(self): - # GH6552 - idx = self.index.copy() - target = idx.copy() - idx.names = target.names = [None, None] - - other_dtype = pd.MultiIndex.from_product([[1, 2], [3, 4]]) - - # list & ndarray cases - assert idx.reindex([])[0].names == [None, None] - assert idx.reindex(np.array([]))[0].names == [None, None] - assert idx.reindex(target.tolist())[0].names == [None, None] - assert idx.reindex(target.values)[0].names == [None, None] - assert idx.reindex(other_dtype.tolist())[0].names == [None, None] - assert idx.reindex(other_dtype.values)[0].names == [None, None] - - idx.names = ['foo', 'bar'] - assert idx.reindex([])[0].names == ['foo', 'bar'] - assert 
idx.reindex(np.array([]))[0].names == ['foo', 'bar'] - assert idx.reindex(target.tolist())[0].names == ['foo', 'bar'] - assert idx.reindex(target.values)[0].names == ['foo', 'bar'] - assert idx.reindex(other_dtype.tolist())[0].names == ['foo', 'bar'] - assert idx.reindex(other_dtype.values)[0].names == ['foo', 'bar'] - - def test_reindex_lvl_preserves_names_when_target_is_list_or_array(self): - # GH7774 - idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']], - names=['foo', 'bar']) - assert idx.reindex([], level=0)[0].names == ['foo', 'bar'] - assert idx.reindex([], level=1)[0].names == ['foo', 'bar'] - - def test_reindex_lvl_preserves_type_if_target_is_empty_list_or_array(self): - # GH7774 - idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b']]) - assert idx.reindex([], level=0)[0].levels[0].dtype.type == np.int64 - assert idx.reindex([], level=1)[0].levels[1].dtype.type == np.object_ - - def test_groupby(self): - groups = self.index.groupby(np.array([1, 1, 1, 2, 2, 2])) - labels = self.index.get_values().tolist() - exp = {1: labels[:3], 2: labels[3:]} - tm.assert_dict_equal(groups, exp) - - # GH5620 - groups = self.index.groupby(self.index) - exp = {key: [key] for key in self.index} - tm.assert_dict_equal(groups, exp) - - def test_index_name_retained(self): - # GH9857 - result = pd.DataFrame({'x': [1, 2, 6], - 'y': [2, 2, 8], - 'z': [-5, 0, 5]}) - result = result.set_index('z') - result.loc[10] = [9, 10] - df_expected = pd.DataFrame({'x': [1, 2, 6, 9], - 'y': [2, 2, 8, 10], - 'z': [-5, 0, 5, 10]}) - df_expected = df_expected.set_index('z') - tm.assert_frame_equal(result, df_expected) - - def test_equals_operator(self): - # GH9785 - assert (self.index == self.index).all() - - def test_large_multiindex_error(self): - # GH12527 - df_below_1000000 = pd.DataFrame( - 1, index=pd.MultiIndex.from_product([[1, 2], range(499999)]), - columns=['dest']) - with pytest.raises(KeyError): - df_below_1000000.loc[(-1, 0), 'dest'] - with pytest.raises(KeyError): - 
df_below_1000000.loc[(3, 0), 'dest'] - df_above_1000000 = pd.DataFrame( - 1, index=pd.MultiIndex.from_product([[1, 2], range(500001)]), - columns=['dest']) - with pytest.raises(KeyError): - df_above_1000000.loc[(-1, 0), 'dest'] - with pytest.raises(KeyError): - df_above_1000000.loc[(3, 0), 'dest'] - - def test_partial_string_timestamp_multiindex(self): - # GH10331 - dr = pd.date_range('2016-01-01', '2016-01-03', freq='12H') - abc = ['a', 'b', 'c'] - ix = pd.MultiIndex.from_product([dr, abc]) - df = pd.DataFrame({'c1': range(0, 15)}, index=ix) - idx = pd.IndexSlice - - # c1 - # 2016-01-01 00:00:00 a 0 - # b 1 - # c 2 - # 2016-01-01 12:00:00 a 3 - # b 4 - # c 5 - # 2016-01-02 00:00:00 a 6 - # b 7 - # c 8 - # 2016-01-02 12:00:00 a 9 - # b 10 - # c 11 - # 2016-01-03 00:00:00 a 12 - # b 13 - # c 14 - - # partial string matching on a single index - for df_swap in (df.swaplevel(), - df.swaplevel(0), - df.swaplevel(0, 1)): - df_swap = df_swap.sort_index() - just_a = df_swap.loc['a'] - result = just_a.loc['2016-01-01'] - expected = df.loc[idx[:, 'a'], :].iloc[0:2] - expected.index = expected.index.droplevel(1) - tm.assert_frame_equal(result, expected) - - # indexing with IndexSlice - result = df.loc[idx['2016-01-01':'2016-02-01', :], :] - expected = df - tm.assert_frame_equal(result, expected) - - # match on secondary index - result = df_swap.loc[idx[:, '2016-01-01':'2016-01-01'], :] - expected = df_swap.iloc[[0, 1, 5, 6, 10, 11]] - tm.assert_frame_equal(result, expected) - - # Even though this syntax works on a single index, this is somewhat - # ambiguous and we don't want to extend this behavior forward to work - # in multi-indexes. This would amount to selecting a scalar from a - # column. 
- with pytest.raises(KeyError): - df['2016-01-01'] - - # partial string match on year only - result = df.loc['2016'] - expected = df - tm.assert_frame_equal(result, expected) - - # partial string match on date - result = df.loc['2016-01-01'] - expected = df.iloc[0:6] - tm.assert_frame_equal(result, expected) - - # partial string match on date and hour, from middle - result = df.loc['2016-01-02 12'] - expected = df.iloc[9:12] - tm.assert_frame_equal(result, expected) - - # partial string match on secondary index - result = df_swap.loc[idx[:, '2016-01-02'], :] - expected = df_swap.iloc[[2, 3, 7, 8, 12, 13]] - tm.assert_frame_equal(result, expected) - - # tuple selector with partial string match on date - result = df.loc[('2016-01-01', 'a'), :] - expected = df.iloc[[0, 3]] - tm.assert_frame_equal(result, expected) - - # Slicing date on first level should break (of course) - with pytest.raises(KeyError): - df_swap.loc['2016-01-01'] - - # GH12685 (partial string with daily resolution or below) - dr = date_range('2013-01-01', periods=100, freq='D') - ix = MultiIndex.from_product([dr, ['a', 'b']]) - df = DataFrame(np.random.randn(200, 1), columns=['A'], index=ix) - - result = df.loc[idx['2013-03':'2013-03', :], :] - expected = df.iloc[118:180] - tm.assert_frame_equal(result, expected) - - def test_rangeindex_fallback_coercion_bug(self): - # GH 12893 - foo = pd.DataFrame(np.arange(100).reshape((10, 10))) - bar = pd.DataFrame(np.arange(100).reshape((10, 10))) - df = pd.concat({'foo': foo.stack(), 'bar': bar.stack()}, axis=1) - df.index.names = ['fizz', 'buzz'] - - str(df) - expected = pd.DataFrame({'bar': np.arange(100), - 'foo': np.arange(100)}, - index=pd.MultiIndex.from_product( - [range(10), range(10)], - names=['fizz', 'buzz'])) - tm.assert_frame_equal(df, expected, check_like=True) - - result = df.index.get_level_values('fizz') - expected = pd.Int64Index(np.arange(10), name='fizz').repeat(10) - tm.assert_index_equal(result, expected) - - result = 
df.index.get_level_values('buzz') - expected = pd.Int64Index(np.tile(np.arange(10), 10), name='buzz') - tm.assert_index_equal(result, expected) - - def test_dropna(self): - # GH 6194 - idx = pd.MultiIndex.from_arrays([[1, np.nan, 3, np.nan, 5], - [1, 2, np.nan, np.nan, 5], - ['a', 'b', 'c', np.nan, 'e']]) - - exp = pd.MultiIndex.from_arrays([[1, 5], - [1, 5], - ['a', 'e']]) - tm.assert_index_equal(idx.dropna(), exp) - tm.assert_index_equal(idx.dropna(how='any'), exp) - - exp = pd.MultiIndex.from_arrays([[1, np.nan, 3, 5], - [1, 2, np.nan, 5], - ['a', 'b', 'c', 'e']]) - tm.assert_index_equal(idx.dropna(how='all'), exp) - - msg = "invalid how option: xxx" - with tm.assert_raises_regex(ValueError, msg): - idx.dropna(how='xxx') - - def test_unsortedindex(self): - # GH 11897 - mi = pd.MultiIndex.from_tuples([('z', 'a'), ('x', 'a'), ('y', 'b'), - ('x', 'b'), ('y', 'a'), ('z', 'b')], - names=['one', 'two']) - df = pd.DataFrame([[i, 10 * i] for i in lrange(6)], index=mi, - columns=['one', 'two']) - - # GH 16734: not sorted, but no real slicing - result = df.loc(axis=0)['z', 'a'] - expected = df.iloc[0] - tm.assert_series_equal(result, expected) - - with pytest.raises(UnsortedIndexError): - df.loc(axis=0)['z', slice('a')] - df.sort_index(inplace=True) - assert len(df.loc(axis=0)['z', :]) == 2 - - with pytest.raises(KeyError): - df.loc(axis=0)['q', :] - - def test_unsortedindex_doc_examples(self): - # http://pandas.pydata.org/pandas-docs/stable/advanced.html#sorting-a-multiindex # noqa - dfm = DataFrame({'jim': [0, 0, 1, 1], - 'joe': ['x', 'x', 'z', 'y'], - 'jolie': np.random.rand(4)}) - - dfm = dfm.set_index(['jim', 'joe']) - with tm.assert_produces_warning(PerformanceWarning): - dfm.loc[(1, 'z')] - - with pytest.raises(UnsortedIndexError): - dfm.loc[(0, 'y'):(1, 'z')] - - assert not dfm.index.is_lexsorted() - assert dfm.index.lexsort_depth == 1 - - # sort it - dfm = dfm.sort_index() - dfm.loc[(1, 'z')] - dfm.loc[(0, 'y'):(1, 'z')] - - assert dfm.index.is_lexsorted() - 
assert dfm.index.lexsort_depth == 2 - - def test_tuples_with_name_string(self): - # GH 15110 and GH 14848 - - li = [(0, 0, 1), (0, 1, 0), (1, 0, 0)] - with pytest.raises(ValueError): - pd.Index(li, name='abc') - with pytest.raises(ValueError): - pd.Index(li, name='a') - - def test_nan_stays_float(self): - - # GH 7031 - idx0 = pd.MultiIndex(levels=[["A", "B"], []], - labels=[[1, 0], [-1, -1]], - names=[0, 1]) - idx1 = pd.MultiIndex(levels=[["C"], ["D"]], - labels=[[0], [0]], - names=[0, 1]) - idxm = idx0.join(idx1, how='outer') - assert pd.isna(idx0.get_level_values(1)).all() - # the following failed in 0.14.1 - assert pd.isna(idxm.get_level_values(1)[:-1]).all() - - df0 = pd.DataFrame([[1, 2]], index=idx0) - df1 = pd.DataFrame([[3, 4]], index=idx1) - dfm = df0 - df1 - assert pd.isna(df0.index.get_level_values(1)).all() - # the following failed in 0.14.1 - assert pd.isna(dfm.index.get_level_values(1)[:-1]).all() - - def test_million_record_attribute_error(self): - # GH 18165 - r = list(range(1000000)) - df = pd.DataFrame({'a': r, 'b': r}, - index=pd.MultiIndex.from_tuples([(x, x) for x in r])) - - with tm.assert_raises_regex(AttributeError, - "'Series' object has no attribute 'foo'"): - df['a'].foo() - - def test_duplicate_multiindex_labels(self): - # GH 17464 - # Make sure that a MultiIndex with duplicate levels throws a ValueError - with pytest.raises(ValueError): - ind = pd.MultiIndex([['A'] * 10, range(10)], [[0] * 10, range(10)]) - - # And that using set_levels with duplicate levels fails - ind = MultiIndex.from_arrays([['A', 'A', 'B', 'B', 'B'], - [1, 2, 1, 2, 3]]) - with pytest.raises(ValueError): - ind.set_levels([['A', 'B', 'A', 'A', 'B'], [2, 1, 3, -2, 5]], - inplace=True) - - def test_multiindex_compare(self): - # GH 21149 - # Ensure comparison operations for MultiIndex with nlevels == 1 - # behave consistently with those for MultiIndex with nlevels > 1 - - midx = pd.MultiIndex.from_product([[0, 1]]) - - # Equality self-test: MultiIndex object vs self - 
expected = pd.Series([True, True]) - result = pd.Series(midx == midx) - tm.assert_series_equal(result, expected) - - # Greater than comparison: MultiIndex object vs self - expected = pd.Series([False, False]) - result = pd.Series(midx > midx) - tm.assert_series_equal(result, expected)
- [x] closes #18644 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` The file is split out, the number of tests remains 377 (2 skips on my machine; all other test cases pass), the code uses fixtures in all instances, new fixtures are in a local conftest.py file, and there are no dependencies on classes. I'm happy to receive and address any feedback.
https://api.github.com/repos/pandas-dev/pandas/pulls/21514
2018-06-17T18:02:28Z
2018-07-03T23:36:54Z
2018-07-03T23:36:53Z
2018-07-15T17:19:13Z
Maintain Dict Ordering with Concat
diff --git a/.gitignore b/.gitignore index 96b1f945870de..82e2eb44e43e7 100644 --- a/.gitignore +++ b/.gitignore @@ -110,3 +110,4 @@ doc/source/styled.xlsx doc/source/templates/ env/ doc/source/savefig/ +*my-dev-test.py \ No newline at end of file diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 15c5cc97b8426..86cf9774e2add 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -255,6 +255,6 @@ Other ^^^^^ - :meth: `~pandas.io.formats.style.Styler.background_gradient` now takes a ``text_color_threshold`` parameter to automatically lighten the text color based on the luminance of the background color. This improves readability with dark background colors without the need to limit the background colormap range. (:issue:`21258`) -- +- Bug in :meth:`concat` should maintain dict order when :meth:`concat` is called (:issue:`2151`) - - diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py index b36e9b8d900fd..d311f07c144ce 100644 --- a/pandas/core/reshape/concat.py +++ b/pandas/core/reshape/concat.py @@ -1,7 +1,7 @@ """ concat routines """ - +from collections import OrderedDict import numpy as np from pandas import compat, DataFrame, Series, Index, MultiIndex from pandas.core.index import (_get_objs_combined_axis, @@ -250,7 +250,10 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None, if isinstance(objs, dict): if keys is None: - keys = sorted(objs) + if not isinstance(objs, OrderedDict): + keys = sorted(objs) + else: + keys = objs objs = [objs[k] for k in keys] else: objs = list(objs) diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py index dea305d4b3fee..b7b38618be0b3 100644 --- a/pandas/tests/reshape/test_concat.py +++ b/pandas/tests/reshape/test_concat.py @@ -1,3 +1,4 @@ +from collections import OrderedDict from warnings import catch_warnings from itertools import combinations, product @@ -1294,6 +1295,17 @@ def 
test_concat_rename_index(self): tm.assert_frame_equal(result, exp) assert result.index.names == exp.index.names + def test_concat_with_ordered_dict(self): + # GH 21510 + result = pd.concat(OrderedDict([('First', pd.Series(range(3))), + ('Another', pd.Series(range(4)))])) + index = MultiIndex(levels=[['First', 'Another'], [0, 1, 2, 3]], + labels=[[0, 0, 0, 1, 1, 1, 1], + [0, 1, 2, 0, 1, 2, 3]]) + data = list(range(3)) + list(range(4)) + expected = pd.Series(data, index=index) + tm.assert_series_equal(result, expected) + def test_crossed_dtypes_weird_corner(self): columns = ['A', 'B', 'C', 'D'] df1 = DataFrame({'A': np.array([1, 2, 3, 4], dtype='f8'),
- [x] closes #21510 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry Although the input is an instance of `dict`, its keys should not be sorted if it is also an instance of `OrderedDict`.
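The behavior change can be sketched with a minimal example mirroring the test added in the diff (assumes a pandas version that includes this fix):

```python
from collections import OrderedDict

import pandas as pd

# With the fix, concat keeps the OrderedDict's insertion order for the
# outer level of the resulting MultiIndex instead of sorting the keys.
result = pd.concat(OrderedDict([('First', pd.Series(range(3))),
                                ('Another', pd.Series(range(4)))]))

# Outer-level labels appear in insertion order, not alphabetical order.
print(result.index.get_level_values(0).unique().tolist())
# ['First', 'Another']
```

Before the fix, `keys = sorted(objs)` would have placed `'Another'` first.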
https://api.github.com/repos/pandas-dev/pandas/pulls/21512
2018-06-16T19:26:07Z
2018-10-11T01:52:22Z
null
2018-10-11T01:52:23Z
DOC: Improve code example for Index.get_indexer
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 6a56278b0da49..ffe8ea51a64a2 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -3146,17 +3146,22 @@ def droplevel(self, level=0): .. versionadded:: 0.21.0 (list-like tolerance) - Examples - -------- - >>> indexer = index.get_indexer(new_index) - >>> new_values = cur_values.take(indexer) - Returns ------- indexer : ndarray of int Integers from 0 to n - 1 indicating that the index at these positions matches the corresponding target values. Missing values in the target are marked by -1. + + Examples + -------- + >>> index = pd.Index(['c', 'a', 'b']) + >>> index.get_indexer(['a', 'b', 'x']) + array([ 1, 2, -1]) + + Notice that the return value is an array of locations in ``index`` + and ``x`` is marked by -1, as it is not in ``index``. + """ @Appender(_index_shared_docs['get_indexer'] % _index_doc_kwargs)
Make code example clearer for ``Index.get_indexer``
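The improved docstring boils down to the following runnable sketch, taken from the example added in the diff:

```python
import pandas as pd

index = pd.Index(['c', 'a', 'b'])

# Locations of the targets within `index`; values absent from the
# index ('x' here) are marked with -1.
indexer = index.get_indexer(['a', 'b', 'x'])
print(indexer)
# [ 1  2 -1]
```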
https://api.github.com/repos/pandas-dev/pandas/pulls/21511
2018-06-16T17:52:42Z
2018-06-19T08:20:13Z
2018-06-19T08:20:13Z
2018-06-19T08:39:34Z
PERF: add method Categorical.__contains__
diff --git a/asv_bench/benchmarks/categoricals.py b/asv_bench/benchmarks/categoricals.py index 48f42621d183d..73e3933122628 100644 --- a/asv_bench/benchmarks/categoricals.py +++ b/asv_bench/benchmarks/categoricals.py @@ -202,7 +202,11 @@ class Contains(object): def setup(self): N = 10**5 self.ci = tm.makeCategoricalIndex(N) - self.cat = self.ci.categories[0] + self.c = self.ci.values + self.key = self.ci.categories[0] - def time_contains(self): - self.cat in self.ci + def time_categorical_index_contains(self): + self.key in self.ci + + def time_categorical_contains(self): + self.key in self.c diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 0f2c9c4756987..5454dc9eca360 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -26,7 +26,7 @@ Performance Improvements - Improved performance of membership checks in :class:`CategoricalIndex` (i.e. ``x in ci``-style checks are much faster). :meth:`CategoricalIndex.contains` - is likewise much faster (:issue:`21369`) + is likewise much faster (:issue:`21369`, :issue:`21508`) - Improved performance of :meth:`MultiIndex.is_unique` (:issue:`21522`) - diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py index e22b0d626a218..7b3cce0f2585d 100644 --- a/pandas/core/arrays/categorical.py +++ b/pandas/core/arrays/categorical.py @@ -157,6 +157,57 @@ def _maybe_to_categorical(array): return array +def contains(cat, key, container): + """ + Helper for membership check for ``key`` in ``cat``. + + This is a helper method for :method:`__contains__` + and :class:`CategoricalIndex.__contains__`. + + Returns True if ``key`` is in ``cat.categories`` and the + location of ``key`` in ``categories`` is in ``container``. + + Parameters + ---------- + cat : :class:`Categorical`or :class:`categoricalIndex` + key : a hashable object + The key to check membership for. + container : Container (e.g. 
list-like or mapping) + The container to check for membership in. + + Returns + ------- + is_in : bool + True if ``key`` is in ``self.categories`` and location of + ``key`` in ``categories`` is in ``container``, else False. + + Notes + ----- + This method does not check for NaN values. Do that separately + before calling this method. + """ + hash(key) + + # get location of key in categories. + # If a KeyError, the key isn't in categories, so logically + # can't be in container either. + try: + loc = cat.categories.get_loc(key) + except KeyError: + return False + + # loc is the location of key in categories, but also the *value* + # for key in container. So, `key` may be in categories, + # but still not in `container`. Example ('b' in categories, + # but not in values): + # 'b' in Categorical(['a'], categories=['a', 'b']) # False + if is_scalar(loc): + return loc in container + else: + # if categories is an IntervalIndex, loc is an array. + return any(loc_ in container for loc_ in loc) + + _codes_doc = """The category codes of this categorical. Level codes are an array if integer which are the positions of the real @@ -1846,6 +1897,14 @@ def __iter__(self): """Returns an Iterator over the values of this Categorical.""" return iter(self.get_values().tolist()) + def __contains__(self, key): + """Returns True if `key` is in this Categorical.""" + # if key is a NaN, check if any NaN is in self. 
+ if isna(key): + return self.isna().any() + + return contains(self, key, container=self._codes) + def _tidy_repr(self, max_vals=10, footer=True): """ a short repr displaying only max_vals and an optional (but default footer) diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py index 0093d4940751e..fc669074758da 100644 --- a/pandas/core/indexes/category.py +++ b/pandas/core/indexes/category.py @@ -24,6 +24,7 @@ import pandas.core.common as com import pandas.core.missing as missing import pandas.core.indexes.base as ibase +from pandas.core.arrays.categorical import Categorical, contains _index_doc_kwargs = dict(ibase._index_doc_kwargs) _index_doc_kwargs.update(dict(target_klass='CategoricalIndex')) @@ -125,7 +126,6 @@ def _create_from_codes(self, codes, categories=None, ordered=None, CategoricalIndex """ - from pandas.core.arrays import Categorical if categories is None: categories = self.categories if ordered is None: @@ -162,7 +162,6 @@ def _create_categorical(self, data, categories=None, ordered=None, if not isinstance(data, ABCCategorical): if ordered is None and dtype is None: ordered = False - from pandas.core.arrays import Categorical data = Categorical(data, categories=categories, ordered=ordered, dtype=dtype) else: @@ -323,32 +322,14 @@ def _reverse_indexer(self): @Appender(_index_shared_docs['__contains__'] % _index_doc_kwargs) def __contains__(self, key): - hash(key) - - if isna(key): # if key is a NaN, check if any NaN is in self. + # if key is a NaN, check if any NaN is in self. + if isna(key): return self.hasnans - # is key in self.categories? Then get its location. - # If not (i.e. KeyError), it logically can't be in self either - try: - loc = self.categories.get_loc(key) - except KeyError: - return False - - # loc is the location of key in self.categories, but also the value - # for key in self.codes and in self._engine. key may be in categories, - # but still not in self, check this. 
Example: - # 'b' in CategoricalIndex(['a'], categories=['a', 'b']) # False - if is_scalar(loc): - return loc in self._engine - else: - # if self.categories is IntervalIndex, loc is an array - # check if any scalar of the array is in self._engine - return any(loc_ in self._engine for loc_ in loc) + return contains(self, key, container=self._engine) @Appender(_index_shared_docs['contains'] % _index_doc_kwargs) def contains(self, key): - hash(key) return key in self def __array__(self, dtype=None): @@ -479,7 +460,6 @@ def where(self, cond, other=None): other = self._na_value values = np.where(cond, self.values, other) - from pandas.core.arrays import Categorical cat = Categorical(values, categories=self.categories, ordered=self.ordered) @@ -862,7 +842,6 @@ def _delegate_method(self, name, *args, **kwargs): def _add_accessors(cls): """ add in Categorical accessor methods """ - from pandas.core.arrays import Categorical CategoricalIndex._add_delegate_accessors( delegate=Categorical, accessors=["rename_categories", "reorder_categories", diff --git a/pandas/tests/categorical/test_operators.py b/pandas/tests/categorical/test_operators.py index fa8bb817616e4..a26de32d7446c 100644 --- a/pandas/tests/categorical/test_operators.py +++ b/pandas/tests/categorical/test_operators.py @@ -291,3 +291,20 @@ def test_numeric_like_ops(self): # invalid ufunc pytest.raises(TypeError, lambda: np.log(s)) + + def test_contains(self): + # GH21508 + c = pd.Categorical(list('aabbca'), categories=list('cab')) + + assert 'b' in c + assert 'z' not in c + assert np.nan not in c + with pytest.raises(TypeError): + assert [1] in c + + # assert codes NOT in index + assert 0 not in c + assert 1 not in c + + c = pd.Categorical(list('aabbca') + [np.nan], categories=list('cab')) + assert np.nan in c
- [x] closes #21022 - [x] xref #21369 - [x] tests added / passed - [x] benchmark added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry Currently, membership checks in ``Categorical`` are very slow, as explained by @fjetter in #21022. This PR fixes the issue. See also #21369, which fixed a similar issue for ``CategoricalIndex``. Tests didn't exist beforehand and have been added. ASV: ``` before after ratio [9e982e18] [28461f0c] - 4.26±0.7ms 134±20μs 0.03 categoricals.Contains.time_categorical_contains SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY. ```
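The fixed semantics can be sketched as follows (works on pandas ≥ 0.24, where ``Categorical.__contains__`` checks category values rather than codes):

```python
import numpy as np
import pandas as pd

c = pd.Categorical(list('aabbca'), categories=list('cab'))

assert 'b' in c          # category value -> True
assert 'z' not in c      # unknown value -> False
assert 0 not in c        # integer codes are not treated as members
assert np.nan not in c   # no missing values in this categorical
```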
https://api.github.com/repos/pandas-dev/pandas/pulls/21508
2018-06-16T14:48:04Z
2018-06-20T10:29:50Z
2018-06-20T10:29:50Z
2018-07-02T23:24:04Z
Fix Timestamp rounding
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 9c4b408a1d24b..8c36d51a5fd16 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -54,7 +54,7 @@ Fixed Regressions - Fixed regression in :meth:`to_csv` when handling file-like object incorrectly (:issue:`21471`) - Bug in both :meth:`DataFrame.first_valid_index` and :meth:`Series.first_valid_index` raised for a row index having duplicate values (:issue:`21441`) -- +- Bug in :meth:`Timestamp.ceil` and :meth:`Timestamp.floor` when timestamp is a multiple of the rounding frequency (:issue:`21262`) .. _whatsnew_0232.performance: diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx index ba5ebdab82ddc..123ccebf83a56 100644 --- a/pandas/_libs/tslibs/timestamps.pyx +++ b/pandas/_libs/tslibs/timestamps.pyx @@ -59,42 +59,51 @@ cdef inline object create_timestamp_from_ts(int64_t value, def round_ns(values, rounder, freq): + """ Applies rounding function at given frequency Parameters ---------- - values : int, :obj:`ndarray` - rounder : function + values : :obj:`ndarray` + rounder : function, eg. 
'ceil', 'floor', 'round' freq : str, obj Returns ------- - int or :obj:`ndarray` + :obj:`ndarray` """ + from pandas.tseries.frequencies import to_offset unit = to_offset(freq).nanos + + # GH21262 If the Timestamp is multiple of the freq str + # don't apply any rounding + mask = values % unit == 0 + if mask.all(): + return values + r = values.copy() + if unit < 1000: # for nano rounding, work with the last 6 digits separately # due to float precision buff = 1000000 - r = (buff * (values // buff) + unit * - (rounder((values % buff) * (1 / float(unit)))).astype('i8')) + r[~mask] = (buff * (values[~mask] // buff) + + unit * (rounder((values[~mask] % buff) * + (1 / float(unit)))).astype('i8')) else: if unit % 1000 != 0: msg = 'Precision will be lost using frequency: {}' warnings.warn(msg.format(freq)) - # GH19206 # to deal with round-off when unit is large if unit >= 1e9: divisor = 10 ** int(np.log10(unit / 1e7)) else: divisor = 10 - - r = (unit * rounder((values * (divisor / float(unit))) / divisor) - .astype('i8')) - + r[~mask] = (unit * rounder((values[~mask] * + (divisor / float(unit))) / divisor) + .astype('i8')) return r @@ -649,7 +658,10 @@ class Timestamp(_Timestamp): else: value = self.value - r = round_ns(value, rounder, freq) + value = np.array([value], dtype=np.int64) + + # Will only ever contain 1 element for timestamp + r = round_ns(value, rounder, freq)[0] result = Timestamp(r, unit='ns') if self.tz is not None: result = result.tz_localize(self.tz) diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py index 9180bb0af3af3..801dcb91b124e 100644 --- a/pandas/tests/indexes/datetimes/test_scalar_compat.py +++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py @@ -134,6 +134,21 @@ def test_round(self, tz): ts = '2016-10-17 12:00:00.001501031' DatetimeIndex([ts]).round('1010ns') + def test_no_rounding_occurs(self, tz): + # GH 21262 + rng = date_range(start='2016-01-01', periods=5, + 
freq='2Min', tz=tz) + + expected_rng = DatetimeIndex([ + Timestamp('2016-01-01 00:00:00', tz=tz, freq='2T'), + Timestamp('2016-01-01 00:02:00', tz=tz, freq='2T'), + Timestamp('2016-01-01 00:04:00', tz=tz, freq='2T'), + Timestamp('2016-01-01 00:06:00', tz=tz, freq='2T'), + Timestamp('2016-01-01 00:08:00', tz=tz, freq='2T'), + ]) + + tm.assert_index_equal(rng.round(freq='2T'), expected_rng) + @pytest.mark.parametrize('test_input, rounder, freq, expected', [ (['2117-01-01 00:00:45'], 'floor', '15s', ['2117-01-01 00:00:45']), (['2117-01-01 00:00:45'], 'ceil', '15s', ['2117-01-01 00:00:45']), @@ -143,6 +158,10 @@ def test_round(self, tz): ['1823-01-01 00:00:01.000000020']), (['1823-01-01 00:00:01'], 'floor', '1s', ['1823-01-01 00:00:01']), (['1823-01-01 00:00:01'], 'ceil', '1s', ['1823-01-01 00:00:01']), + (['2018-01-01 00:15:00'], 'ceil', '15T', ['2018-01-01 00:15:00']), + (['2018-01-01 00:15:00'], 'floor', '15T', ['2018-01-01 00:15:00']), + (['1823-01-01 03:00:00'], 'ceil', '3H', ['1823-01-01 03:00:00']), + (['1823-01-01 03:00:00'], 'floor', '3H', ['1823-01-01 03:00:00']), (('NaT', '1823-01-01 00:00:01'), 'floor', '1s', ('NaT', '1823-01-01 00:00:01')), (('NaT', '1823-01-01 00:00:01'), 'ceil', '1s', diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py index 6f3b5ae6a20a3..b02fef707a6fe 100644 --- a/pandas/tests/scalar/timestamp/test_unary_ops.py +++ b/pandas/tests/scalar/timestamp/test_unary_ops.py @@ -118,6 +118,25 @@ def test_ceil_floor_edge(self, test_input, rounder, freq, expected): expected = Timestamp(expected) assert result == expected + @pytest.mark.parametrize('test_input, freq, expected', [ + ('2018-01-01 00:02:06', '2s', '2018-01-01 00:02:06'), + ('2018-01-01 00:02:00', '2T', '2018-01-01 00:02:00'), + ('2018-01-01 00:04:00', '4T', '2018-01-01 00:04:00'), + ('2018-01-01 00:15:00', '15T', '2018-01-01 00:15:00'), + ('2018-01-01 00:20:00', '20T', '2018-01-01 00:20:00'), + ('2018-01-01 03:00:00', '3H', 
'2018-01-01 03:00:00'), + ]) + @pytest.mark.parametrize('rounder', ['ceil', 'floor', 'round']) + def test_round_minute_freq(self, test_input, freq, expected, rounder): + # Ensure timestamps that shouldnt round dont! + # GH#21262 + + dt = Timestamp(test_input) + expected = Timestamp(expected) + func = getattr(dt, rounder) + result = func(freq) + assert result == expected + def test_ceil(self): dt = Timestamp('20130101 09:10:11') result = dt.ceil('D') @@ -264,7 +283,6 @@ def test_timestamp(self): if PY3: # datetime.timestamp() converts in the local timezone with tm.set_timezone('UTC'): - # should agree with datetime.timestamp method dt = ts.to_pydatetime() assert dt.timestamp() == ts.timestamp()
- Closes #21262 - Tests added (thanks @Safrone [PR](https://github.com/pandas-dev/pandas/pull/21265)) This change-set avoids rounding a timestamp when the timestamp is already a multiple of the frequency string passed in. The "values" param passed into round_ns can be either a NumPy array or an int, so relevant handling was added for both. FYI I haven't used Cython much before, so keen to get people's thoughts/feedback. Thanks
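The behavior being fixed can be checked with a quick sketch (using the spelled-out `'15min'` alias, which works across pandas versions):

```python
import pandas as pd

# a timestamp already on the rounding grid should be returned unchanged
ts = pd.Timestamp('2018-01-01 00:15:00')
assert ts.floor('15min') == ts
assert ts.ceil('15min') == ts
assert ts.round('15min') == ts

# off-grid values still round as before
assert (pd.Timestamp('2018-01-01 00:16:00').ceil('15min')
        == pd.Timestamp('2018-01-01 00:30:00'))
```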
https://api.github.com/repos/pandas-dev/pandas/pulls/21507
2018-06-15T23:39:29Z
2018-06-29T00:26:39Z
2018-06-29T00:26:39Z
2018-07-02T15:43:24Z
PERF: avoid unnecessary recoding in CategoricalIndex._create_categorical
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py index 7f2860a963423..93df0d377c4fe 100644 --- a/pandas/core/indexes/category.py +++ b/pandas/core/indexes/category.py @@ -172,7 +172,10 @@ def _create_categorical(self, data, categories=None, ordered=None, data = data.set_ordered(ordered) if isinstance(dtype, CategoricalDtype): # we want to silently ignore dtype='category' - data = data._set_dtype(dtype) + if dtype != data.dtype: + data = data._set_dtype(dtype) + else: + data = data.copy() return data @classmethod
- [x] progress towards #20395 - [x] xref #21369 - [ ] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` This issue was found when looking for a solution to #20395. I've found that ``CategoricalIndex._create_categorical`` makes an unnecessary call to ``Categorical._set_dtype`` when passed a dtype that is equal to self.dtype: ```python >>> n = 100_000 >>> ci = pd.CategoricalIndex(list('a'*n + 'b'*n + 'c'*n)) >>> %timeit ci._create_categorical(ci, ci) 197 µs # master and this PR >>> %timeit ci._create_categorical(ci, ci, dtype=ci.dtype) 1.92 ms # master 197 µs # this PR ``` Internally, some operations in Pandas pass self.dtype to ``CategoricalIndex._create_categorical`` and onwards to ``_set_dtype``, which is a very slow code path. By avoiding calling ``_set_dtype`` unnecessarily, some operations become faster. For example: ```python >>> df = pd.DataFrame(dict(A=range(n*3)), index=ci) >>> %timeit df.loc['b'] 3.55 ms # master 2.14 ms # this PR ``` As ``CategoricalIndex._create_categorical`` is called directly or indirectly by various methods (e.g. ``CategoricalIndex._shallow_copy``), there are probably other places where this speedup is relevant as well.
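The gist of the fix is a cheap dtype-equality check before the expensive recode; a minimal sketch of that short-circuit:

```python
import pandas as pd

ci = pd.CategoricalIndex(list('aabbc'))
target = pd.CategoricalDtype(ci.categories, ordered=ci.ordered)

# equal CategoricalDtypes compare equal, so the slow
# _set_dtype recode can be skipped entirely in that case
assert target == ci.dtype
```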
https://api.github.com/repos/pandas-dev/pandas/pulls/21506
2018-06-15T23:00:18Z
2018-06-16T14:50:29Z
null
2018-09-20T21:13:21Z
wrap urlopen with requests
diff --git a/pandas/io/common.py b/pandas/io/common.py index 3a67238a66450..22655ad86263e 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -30,14 +30,13 @@ if compat.PY3: from urllib.request import urlopen, pathname2url - _urlopen = urlopen from urllib.parse import urlparse as parse_url from urllib.parse import (uses_relative, uses_netloc, uses_params, urlencode, urljoin) from urllib.error import URLError from http.client import HTTPException # noqa else: - from urllib2 import urlopen as _urlopen + from urllib2 import urlopen as urlopen2 from urllib import urlencode, pathname2url # noqa from urlparse import urlparse as parse_url from urlparse import uses_relative, uses_netloc, uses_params, urljoin @@ -46,10 +45,10 @@ from contextlib import contextmanager, closing # noqa from functools import wraps # noqa - # @wraps(_urlopen) + # @wraps(urlopen2) @contextmanager def urlopen(*args, **kwargs): - with closing(_urlopen(*args, **kwargs)) as f: + with closing(urlopen2(*args, **kwargs)) as f: yield f @@ -91,6 +90,34 @@ def _is_url(url): return False +def _urlopen(url, session=None): + compression = None + content_encoding = None + try: + import requests + if session: + if not isinstance(session, requests.sessions.Session): + raise ValueError( + 'Expected a requests.sessions.Session object, ' + 'got {!r}'.format(session) + ) + r = session.get(url) + else: + r = requests.get(url) + r.raise_for_status() + content = r.content + r.close() + except ImportError: + with urlopen(url) as r: + content = r.read() + content_encoding = r.headers.get('Content-Encoding', None) + if content_encoding == 'gzip': + # Override compression based on Content-Encoding header. + compression = 'gzip' + reader = BytesIO(content) + return reader, compression + + def _expand_user(filepath_or_buffer): """Return the argument with an initial component of ~ or ~user replaced by that user's home directory. 
@@ -177,7 +204,7 @@ def is_gcs_url(url): def get_filepath_or_buffer(filepath_or_buffer, encoding=None, - compression=None, mode=None): + compression=None, mode=None, session=None): """ If the filepath_or_buffer is a url, translate and return the buffer. Otherwise passthrough. @@ -188,6 +215,14 @@ def get_filepath_or_buffer(filepath_or_buffer, encoding=None, or buffer encoding : the encoding to use to decode py3 bytes, default is 'utf-8' mode : str, optional + compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer' + For on-the-fly decompression of on-disk data. If 'infer' and + `filepath_or_buffer` is path-like, then detect compression from the + following extensions: '.gz', '.bz2', '.zip', or '.xz' (otherwise no + decompression). If using 'zip', the ZIP file must contain only one data + file to be read in. Set to None for no decompression. + + .. versionadded:: 0.18.1 support for 'zip' and 'xz' compression. Returns ------- @@ -199,13 +234,7 @@ def get_filepath_or_buffer(filepath_or_buffer, encoding=None, filepath_or_buffer = _stringify_path(filepath_or_buffer) if _is_url(filepath_or_buffer): - req = _urlopen(filepath_or_buffer) - content_encoding = req.headers.get('Content-Encoding', None) - if content_encoding == 'gzip': - # Override compression based on Content-Encoding header - compression = 'gzip' - reader = BytesIO(req.read()) - req.close() + reader, compression = _urlopen(filepath_or_buffer, session=session) return reader, encoding, compression, True if is_s3_url(filepath_or_buffer): diff --git a/pandas/io/excel.py b/pandas/io/excel.py index 1328713736b03..fe1e256205744 100644 --- a/pandas/io/excel.py +++ b/pandas/io/excel.py @@ -332,7 +332,8 @@ def read_excel(io, "`sheet`") if not isinstance(io, ExcelFile): - io = ExcelFile(io, engine=engine) + session = kwds.get('session', None) + io = ExcelFile(io, engine=engine, session=session) return io.parse( sheet_name=sheet_name, @@ -396,10 +397,11 @@ def __init__(self, io, **kwds): if engine is 
not None and engine != 'xlrd': raise ValueError("Unknown engine: {engine}".format(engine=engine)) + session = kwds.pop('session', None) # If io is a url, want to keep the data as bytes so can't pass # to get_filepath_or_buffer() if _is_url(self._io): - io = _urlopen(self._io) + io, _ = _urlopen(self._io, session=session) elif not isinstance(self.io, (ExcelFile, xlrd.Book)): io, _, _, _ = get_filepath_or_buffer(self._io) diff --git a/pandas/io/html.py b/pandas/io/html.py index c967bdd29df1f..8c31aac1f1563 100644 --- a/pandas/io/html.py +++ b/pandas/io/html.py @@ -15,10 +15,9 @@ from pandas.errors import AbstractMethodError, EmptyDataError from pandas.core.dtypes.common import is_list_like - from pandas import Series -from pandas.io.common import _is_url, _validate_header_arg, urlopen +from pandas.io.common import _is_url, _urlopen, _validate_header_arg, urlopen from pandas.io.formats.printing import pprint_thing from pandas.io.parsers import TextParser @@ -115,7 +114,7 @@ def _get_skiprows(skiprows): type(skiprows).__name__) -def _read(obj): +def _read(obj, session=None): """Try to read from a url, file or string. Parameters @@ -127,8 +126,7 @@ def _read(obj): raw_text : str """ if _is_url(obj): - with urlopen(obj) as url: - text = url.read() + text, _ = _urlopen(obj, session=session) elif hasattr(obj, 'read'): text = obj.read() elif isinstance(obj, char_types): @@ -203,12 +201,14 @@ class _HtmlFrameParser(object): functionality. 
""" - def __init__(self, io, match, attrs, encoding, displayed_only): + def __init__(self, io, match, attrs, encoding, displayed_only, + session=None): self.io = io self.match = match self.attrs = attrs self.encoding = encoding self.displayed_only = displayed_only + self.session = session def parse_tables(self): """ @@ -592,7 +592,7 @@ def _parse_tfoot_tr(self, table): return table.select('tfoot tr') def _setup_build_doc(self): - raw_text = _read(self.io) + raw_text = _read(self.io, self.session) if not raw_text: raise ValueError('No text parsed from document: {doc}' .format(doc=self.io)) @@ -715,7 +715,7 @@ def _build_doc(self): try: if _is_url(self.io): - with urlopen(self.io) as f: + with _urlopen(self.io) as f: r = parse(f, parser=parser) else: # try to parse the input in the simplest way @@ -890,9 +890,11 @@ def _parse(flavor, io, match, attrs, encoding, displayed_only, **kwargs): # hack around python 3 deleting the exception variable retained = None + session = kwargs.get('session', None) for flav in flavor: parser = _parser_dispatch(flav) - p = parser(io, compiled_match, attrs, encoding, displayed_only) + p = parser(io, compiled_match, attrs, encoding, displayed_only, + session) try: tables = p.parse_tables() @@ -928,7 +930,7 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None, skiprows=None, attrs=None, parse_dates=False, tupleize_cols=None, thousands=',', encoding=None, decimal='.', converters=None, na_values=None, - keep_default_na=True, displayed_only=True): + keep_default_na=True, displayed_only=True, session=None): r"""Read HTML tables into a ``list`` of ``DataFrame`` objects. 
Parameters @@ -1091,4 +1093,4 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None, thousands=thousands, attrs=attrs, encoding=encoding, decimal=decimal, converters=converters, na_values=na_values, keep_default_na=keep_default_na, - displayed_only=displayed_only) + displayed_only=displayed_only, session=session) diff --git a/pandas/io/json/json.py b/pandas/io/json/json.py index 38f8cd5412015..024af642a2222 100644 --- a/pandas/io/json/json.py +++ b/pandas/io/json/json.py @@ -228,7 +228,7 @@ def _write(self, obj, orient, double_precision, ensure_ascii, def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True, convert_axes=True, convert_dates=True, keep_default_dates=True, numpy=False, precise_float=False, date_unit=None, encoding=None, - lines=False, chunksize=None, compression='infer'): + lines=False, chunksize=None, compression='infer', session=None): """ Convert a JSON string to pandas object @@ -410,6 +410,7 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True, compression = _infer_compression(path_or_buf, compression) filepath_or_buffer, _, compression, should_close = get_filepath_or_buffer( path_or_buf, encoding=encoding, compression=compression, + session=session, ) json_reader = JsonReader( diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index acb9bca2545c0..1f0e175d649b2 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -319,6 +319,9 @@ values. The options are `None` for the ordinary converter, `high` for the high-precision converter, and `round_trip` for the round-trip converter. +session : requests.Session + object with the a requests session configuration for remote file. 
+ (requires the requests library) Returns ------- @@ -401,10 +404,11 @@ def _read(filepath_or_buffer, kwds): encoding = re.sub('_', '-', encoding).lower() kwds['encoding'] = encoding + session = kwds.get('session', None) compression = kwds.get('compression') compression = _infer_compression(filepath_or_buffer, compression) filepath_or_buffer, _, compression, should_close = get_filepath_or_buffer( - filepath_or_buffer, encoding, compression) + filepath_or_buffer, encoding, compression, session=session) kwds['compression'] = compression if kwds.get('date_parser', None) is not None: @@ -590,7 +594,8 @@ def parser_f(filepath_or_buffer, delim_whitespace=False, low_memory=_c_parser_defaults['low_memory'], memory_map=False, - float_precision=None): + float_precision=None, + session=None): # deprecate read_table GH21948 if name == "read_table": @@ -690,7 +695,8 @@ def parser_f(filepath_or_buffer, mangle_dupe_cols=mangle_dupe_cols, tupleize_cols=tupleize_cols, infer_datetime_format=infer_datetime_format, - skip_blank_lines=skip_blank_lines) + skip_blank_lines=skip_blank_lines, + session=session) return _read(filepath_or_buffer, kwds)
- [X] closes #16716 - [ ] tests added / passed - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry Alternative to https://github.com/pandas-dev/pandas/pull/17087 I'll write the whatsnew entry, tests, and the session option for `html`, `json`, and `excel` if this path is OK with the devs. Ping @skynss who is the original author of #17087 and @gfyoung who reviewed it. Note that the main difference in this approach is that I made the option slightly simpler by allowing only the `session` instead of `http_params`. You can see an example of this in action in: http://nbviewer.jupyter.org/urls/gist.githubusercontent.com/ocefpaf/d6e9ab2c3569ff8fa181fc7885b6524d/raw/5d868fd4cbe2b81f84ccab7760b80251ef6e4651/pandas_test.ipynb
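A rough sketch of the intended usage — note the ``session=`` keyword only exists with this PR applied, so the ``read_csv`` call below is illustrative and left commented out:

```python
import requests

# configure auth/headers/retries once on a Session...
session = requests.Session()
session.headers.update({'User-Agent': 'pandas-reader/0.1'})

# ...then hand it to the reader (hypothetical, per this PR):
# df = pd.read_csv('https://example.com/data.csv', session=session)
```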
https://api.github.com/repos/pandas-dev/pandas/pulls/21504
2018-06-15T21:46:27Z
2018-12-31T00:12:51Z
null
2019-02-11T16:13:18Z
De-duplicate code for indexing with list-likes of keys
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 02c86d2f4dcc8..383f129a713ed 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -2723,7 +2723,8 @@ def _getitem_array(self, key): indexer = key.nonzero()[0] return self._take(indexer, axis=0) else: - indexer = self.loc._convert_to_indexer(key, axis=1) + indexer = self.loc._convert_to_indexer(key, axis=1, + raise_missing=True) return self._take(indexer, axis=1) def _getitem_multilevel(self, key): diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 6a56278b0da49..ccecb6d4d0713 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -3627,7 +3627,7 @@ def _reindex_non_unique(self, target): else: # need to retake to have the same size as the indexer - indexer[~check] = 0 + indexer[~check] = -1 # reset the new indexer to account for the new size new_indexer = np.arange(len(self.take(indexer))) diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 0e4f040253560..d5e81105dd323 100755 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -688,7 +688,8 @@ def _align_series(self, indexer, ser, multiindex_indexer=False): if isinstance(indexer, tuple): # flatten np.ndarray indexers - ravel = lambda i: i.ravel() if isinstance(i, np.ndarray) else i + def ravel(i): + return i.ravel() if isinstance(i, np.ndarray) else i indexer = tuple(map(ravel, indexer)) aligners = [not com.is_null_slice(idx) for idx in indexer] @@ -925,33 +926,10 @@ def _multi_take(self, tup): """ create the reindex map for our objects, raise the _exception if we can't create the indexer """ - try: - o = self.obj - d = {} - for key, axis in zip(tup, o._AXIS_ORDERS): - ax = o._get_axis(axis) - # Have the index compute an indexer or return None - # if it cannot handle: - indexer, keyarr = ax._convert_listlike_indexer(key, - kind=self.name) - # We only act on all found values: - if indexer is not None and (indexer != -1).all(): - self._validate_read_indexer(key, 
indexer, axis) - d[axis] = (ax[indexer], indexer) - continue - - # If we are trying to get actual keys from empty Series, we - # patiently wait for a KeyError later on - otherwise, convert - if len(ax) or not len(key): - key = self._convert_for_reindex(key, axis) - indexer = ax.get_indexer_for(key) - keyarr = ax.reindex(keyarr)[0] - self._validate_read_indexer(keyarr, indexer, - o._get_axis_number(axis)) - d[axis] = (keyarr, indexer) - return o._reindex_with_indexers(d, copy=True, allow_dups=True) - except (KeyError, IndexingError) as detail: - raise self._exception(detail) + o = self.obj + d = {axis: self._get_listlike_indexer(key, axis) + for (key, axis) in zip(tup, o._AXIS_ORDERS)} + return o._reindex_with_indexers(d, copy=True, allow_dups=True) def _convert_for_reindex(self, key, axis=None): return key @@ -1124,7 +1102,88 @@ def _getitem_axis(self, key, axis=None): return self._get_label(key, axis=axis) + def _get_listlike_indexer(self, key, axis, raise_missing=False): + """ + Transform a list-like of keys into a new index and an indexer. + + Parameters + ---------- + key : list-like + Target labels + axis: int + Dimension on which the indexing is being made + raise_missing: bool + Whether to raise a KeyError if some labels are not found. Will be + removed in the future, and then this method will always behave as + if raise_missing=True. + + Raises + ------ + KeyError + If at least one key was requested but none was found, and + raise_missing=True. 
+ + Returns + ------- + keyarr: Index + New index (coinciding with 'key' if the axis is unique) + values : array-like + An indexer for the return object; -1 denotes keys not found + """ + o = self.obj + ax = o._get_axis(axis) + + # Have the index compute an indexer or return None + # if it cannot handle: + indexer, keyarr = ax._convert_listlike_indexer(key, + kind=self.name) + # We only act on all found values: + if indexer is not None and (indexer != -1).all(): + self._validate_read_indexer(key, indexer, axis, + raise_missing=raise_missing) + return ax[indexer], indexer + + if ax.is_unique: + # If we are trying to get actual keys from empty Series, we + # patiently wait for a KeyError later on - otherwise, convert + if len(ax) or not len(key): + key = self._convert_for_reindex(key, axis) + indexer = ax.get_indexer_for(key) + keyarr = ax.reindex(keyarr)[0] + else: + keyarr, indexer, new_indexer = ax._reindex_non_unique(keyarr) + + self._validate_read_indexer(keyarr, indexer, + o._get_axis_number(axis), + raise_missing=raise_missing) + return keyarr, indexer + def _getitem_iterable(self, key, axis=None): + """ + Index current object with an an iterable key (which can be a boolean + indexer, or a collection of keys). + + Parameters + ---------- + key : iterable + Target labels, or boolean indexer + axis: int, default None + Dimension on which the indexing is being made + + Raises + ------ + KeyError + If no key was found. Will change in the future to raise if not all + keys were found. + IndexingError + If the boolean indexer is unalignable with the object being + indexed. 
+ + Returns + ------- + scalar, DataFrame, or Series: indexed value(s), + """ + if axis is None: axis = self.axis or 0 @@ -1133,54 +1192,18 @@ def _getitem_iterable(self, key, axis=None): labels = self.obj._get_axis(axis) if com.is_bool_indexer(key): + # A boolean indexer key = check_bool_indexer(labels, key) inds, = key.nonzero() return self.obj._take(inds, axis=axis) else: - # Have the index compute an indexer or return None - # if it cannot handle; we only act on all found values - indexer, keyarr = labels._convert_listlike_indexer( - key, kind=self.name) - if indexer is not None and (indexer != -1).all(): - self._validate_read_indexer(key, indexer, axis) - return self.obj.take(indexer, axis=axis) - - ax = self.obj._get_axis(axis) - # existing labels are unique and indexer are unique - if labels.is_unique and Index(keyarr).is_unique: - indexer = ax.get_indexer_for(key) - self._validate_read_indexer(key, indexer, axis) - - d = {axis: [ax.reindex(keyarr)[0], indexer]} - return self.obj._reindex_with_indexers(d, copy=True, - allow_dups=True) - - # existing labels are non-unique - else: - - # reindex with the specified axis - if axis + 1 > self.obj.ndim: - raise AssertionError("invalid indexing error with " - "non-unique index") - - new_target, indexer, new_indexer = labels._reindex_non_unique( - keyarr) - - if new_indexer is not None: - result = self.obj._take(indexer[indexer != -1], axis=axis) - - self._validate_read_indexer(key, new_indexer, axis) - result = result._reindex_with_indexers( - {axis: [new_target, new_indexer]}, - copy=True, allow_dups=True) + # A collection of keys + keyarr, indexer = self._get_listlike_indexer(key, axis, + raise_missing=False) + return self.obj._reindex_with_indexers({axis: [keyarr, indexer]}, + copy=True, allow_dups=True) - else: - self._validate_read_indexer(key, indexer, axis) - result = self.obj._take(indexer, axis=axis) - - return result - - def _validate_read_indexer(self, key, indexer, axis): + def 
_validate_read_indexer(self, key, indexer, axis, raise_missing=False): """ Check that indexer can be used to return a result (e.g. at least one element was found, unless the list of keys was actually empty). @@ -1193,11 +1216,16 @@ def _validate_read_indexer(self, key, indexer, axis): Indices corresponding to the key (with -1 indicating not found) axis: int Dimension on which the indexing is being made + raise_missing: bool + Whether to raise a KeyError if some labels are not found. Will be + removed in the future, and then this method will always behave as + if raise_missing=True. Raises ------ KeyError - If at least one key was requested none was found. + If at least one key was requested but none was found, and + raise_missing=True. """ ax = self.obj._get_axis(axis) @@ -1214,6 +1242,12 @@ def _validate_read_indexer(self, key, indexer, axis): u"None of [{key}] are in the [{axis}]".format( key=key, axis=self.obj._get_axis_name(axis))) + # We (temporarily) allow for some missing keys with .loc, except in + # some cases (e.g. 
setting) in which "raise_missing" will be False + if not(self.name == 'loc' and not raise_missing): + not_found = list(set(key) - set(ax)) + raise KeyError("{} not in index".format(not_found)) + # we skip the warning on Categorical/Interval # as this check is actually done (check for # non-missing values), but a bit later in the @@ -1229,9 +1263,10 @@ def _validate_read_indexer(self, key, indexer, axis): if not (ax.is_categorical() or ax.is_interval()): warnings.warn(_missing_key_warning, - FutureWarning, stacklevel=5) + FutureWarning, stacklevel=6) - def _convert_to_indexer(self, obj, axis=None, is_setter=False): + def _convert_to_indexer(self, obj, axis=None, is_setter=False, + raise_missing=False): """ Convert indexing key into something we can use to do actual fancy indexing on an ndarray @@ -1310,33 +1345,10 @@ def _convert_to_indexer(self, obj, axis=None, is_setter=False): inds, = obj.nonzero() return inds else: - - # Have the index compute an indexer or return None - # if it cannot handle - indexer, objarr = labels._convert_listlike_indexer( - obj, kind=self.name) - if indexer is not None: - return indexer - - # unique index - if labels.is_unique: - indexer = check = labels.get_indexer(objarr) - - # non-unique (dups) - else: - (indexer, - missing) = labels.get_indexer_non_unique(objarr) - # 'indexer' has dupes, create 'check' using 'missing' - check = np.zeros(len(objarr), dtype=np.intp) - check[missing] = -1 - - mask = check == -1 - if mask.any(): - raise KeyError('{mask} not in index' - .format(mask=objarr[mask])) - - return com._values_from_object(indexer) - + # When setting, missing keys are not allowed, even with .loc: + kwargs = {'raise_missing': True if is_setter else + raise_missing} + return self._get_listlike_indexer(obj, axis, **kwargs)[1] else: try: return labels.get_loc(obj)
- [x] tests passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` Just refactoring, removing duplicated code (I don't think the bug in ``Index._reindex_non_unique`` was actually appearing anywhere)
https://api.github.com/repos/pandas-dev/pandas/pulls/21503
2018-06-15T21:00:39Z
2018-06-19T20:36:56Z
2018-06-19T20:36:56Z
2018-06-19T22:26:26Z
REGR: Fixes first_valid_index when DataFrame or Series has duplicate row index (GH21441)
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 0f2c9c4756987..2112b68c32bae 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -17,7 +17,8 @@ Fixed Regressions ~~~~~~~~~~~~~~~~~ - Fixed regression in :meth:`to_csv` when handling file-like object incorrectly (:issue:`21471`) -- +- Bug in both :meth:`DataFrame.first_valid_index` and :meth:`Series.first_valid_index` raised for a row index having duplicate values (:issue:`21441`) +- .. _whatsnew_0232.performance: diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 32f64b1d3e05c..c37516d478d84 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -8969,18 +8969,17 @@ def _find_valid_index(self, how): is_valid = is_valid.any(1) # reduce axis 1 if how == 'first': - # First valid value case - i = is_valid.idxmax() - if not is_valid[i]: - return None - return i - - elif how == 'last': - # Last valid value case - i = is_valid.values[::-1].argmax() - if not is_valid.iat[len(self) - i - 1]: - return None - return self.index[len(self) - i - 1] + idxpos = is_valid.values[::].argmax() + + if how == 'last': + idxpos = len(self) - 1 - is_valid.values[::-1].argmax() + + chk_notna = is_valid.iat[idxpos] + idx = self.index[idxpos] + + if not chk_notna: + return None + return idx @Appender(_shared_docs['valid_index'] % {'position': 'first', 'klass': 'NDFrame'}) diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py index 90fbc6e628369..fb9bd74d9876d 100644 --- a/pandas/tests/frame/test_timeseries.py +++ b/pandas/tests/frame/test_timeseries.py @@ -506,7 +506,15 @@ def test_asfreq_fillvalue(self): actual_series = ts.asfreq(freq='1S', fill_value=9.0) assert_series_equal(expected_series, actual_series) - def test_first_last_valid(self): + @pytest.mark.parametrize("data,idx,expected_first,expected_last", [ + ({'A': [1, 2, 3]}, [1, 1, 2], 1, 2), + ({'A': [1, 2, 3]}, [1, 2, 2], 1, 2), + ({'A': [1, 2, 3, 
4]}, ['d', 'd', 'd', 'd'], 'd', 'd'), + ({'A': [1, np.nan, 3]}, [1, 1, 2], 1, 2), + ({'A': [np.nan, np.nan, 3]}, [1, 1, 2], 2, 2), + ({'A': [1, np.nan, 3]}, [1, 2, 2], 1, 2)]) + def test_first_last_valid(self, data, idx, + expected_first, expected_last): N = len(self.frame.index) mat = randn(N) mat[:5] = nan @@ -539,6 +547,11 @@ def test_first_last_valid(self): assert frame.first_valid_index().freq == frame.index.freq assert frame.last_valid_index().freq == frame.index.freq + # GH 21441 + df = DataFrame(data, index=idx) + assert expected_first == df.first_valid_index() + assert expected_last == df.last_valid_index() + def test_first_subset(self): ts = tm.makeTimeDataFrame(freq='12h') result = ts.first('10d')
- [x] closes #21441 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21497
2018-06-15T12:15:45Z
2018-06-20T10:33:08Z
2018-06-20T10:33:07Z
2018-06-29T14:59:07Z
Tseries fixtures docstrings
diff --git a/pandas/conftest.py b/pandas/conftest.py index 255e0e165041b..a497bddaa3d09 100644 --- a/pandas/conftest.py +++ b/pandas/conftest.py @@ -218,7 +218,8 @@ def nulls_fixture(request): nulls_fixture2 = nulls_fixture # Generate cartesian product of nulls_fixture -TIMEZONES = [None, 'UTC', 'US/Eastern', 'Asia/Tokyo', 'dateutil/US/Pacific'] +TIMEZONES = [None, 'UTC', 'US/Eastern', 'Asia/Tokyo', 'dateutil/US/Pacific', + 'dateutil/Asia/Singapore'] @td.parametrize_fixture_doc(str(TIMEZONES)) diff --git a/pandas/tests/indexes/datetimes/test_arithmetic.py b/pandas/tests/indexes/datetimes/test_arithmetic.py index 555f804800588..4ce2b1dd4fd86 100644 --- a/pandas/tests/indexes/datetimes/test_arithmetic.py +++ b/pandas/tests/indexes/datetimes/test_arithmetic.py @@ -19,13 +19,6 @@ from pandas._libs.tslibs.offsets import shift_months -@pytest.fixture(params=[None, 'UTC', 'Asia/Tokyo', - 'US/Eastern', 'dateutil/Asia/Singapore', - 'dateutil/US/Pacific']) -def tz(request): - return request.param - - @pytest.fixture(params=[pd.offsets.Hour(2), timedelta(hours=2), np.timedelta64(2, 'h'), Timedelta(hours=2)], ids=str) @@ -50,7 +43,8 @@ class TestDatetimeIndexComparisons(object): @pytest.mark.parametrize('other', [datetime(2016, 1, 1), Timestamp('2016-01-01'), np.datetime64('2016-01-01')]) - def test_dti_cmp_datetimelike(self, other, tz): + def test_dti_cmp_datetimelike(self, other, tz_naive_fixture): + tz = tz_naive_fixture dti = pd.date_range('2016-01-01', periods=2, tz=tz) if tz is not None: if isinstance(other, np.datetime64): @@ -78,9 +72,10 @@ def test_dti_cmp_datetimelike(self, other, tz): expected = np.array([True, False]) tm.assert_numpy_array_equal(result, expected) - def dti_cmp_non_datetime(self, tz): + def dti_cmp_non_datetime(self, tz_naive_fixture): # GH#19301 by convention datetime.date is not considered comparable # to Timestamp or DatetimeIndex. This may change in the future. 
+ tz = tz_naive_fixture dti = pd.date_range('2016-01-01', periods=2, tz=tz) other = datetime(2016, 1, 1).date() @@ -96,20 +91,23 @@ def dti_cmp_non_datetime(self, tz): dti >= other @pytest.mark.parametrize('other', [None, np.nan, pd.NaT]) - def test_dti_eq_null_scalar(self, other, tz): + def test_dti_eq_null_scalar(self, other, tz_naive_fixture): # GH#19301 + tz = tz_naive_fixture dti = pd.date_range('2016-01-01', periods=2, tz=tz) assert not (dti == other).any() @pytest.mark.parametrize('other', [None, np.nan, pd.NaT]) - def test_dti_ne_null_scalar(self, other, tz): + def test_dti_ne_null_scalar(self, other, tz_naive_fixture): # GH#19301 + tz = tz_naive_fixture dti = pd.date_range('2016-01-01', periods=2, tz=tz) assert (dti != other).all() @pytest.mark.parametrize('other', [None, np.nan]) - def test_dti_cmp_null_scalar_inequality(self, tz, other): + def test_dti_cmp_null_scalar_inequality(self, tz_naive_fixture, other): # GH#19301 + tz = tz_naive_fixture dti = pd.date_range('2016-01-01', periods=2, tz=tz) with pytest.raises(TypeError): @@ -335,8 +333,9 @@ def test_dti_radd_timestamp_raises(self): # ------------------------------------------------------------- # Binary operations DatetimeIndex and int - def test_dti_add_int(self, tz, one): + def test_dti_add_int(self, tz_naive_fixture, one): # Variants of `one` for #19012 + tz = tz_naive_fixture rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, tz=tz) result = rng + one @@ -344,7 +343,8 @@ def test_dti_add_int(self, tz, one): periods=10, tz=tz) tm.assert_index_equal(result, expected) - def test_dti_iadd_int(self, tz, one): + def test_dti_iadd_int(self, tz_naive_fixture, one): + tz = tz_naive_fixture rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, tz=tz) expected = pd.date_range('2000-01-01 10:00', freq='H', @@ -352,7 +352,8 @@ def test_dti_iadd_int(self, tz, one): rng += one tm.assert_index_equal(rng, expected) - def test_dti_sub_int(self, tz, one): + def test_dti_sub_int(self, 
tz_naive_fixture, one): + tz = tz_naive_fixture rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, tz=tz) result = rng - one @@ -360,7 +361,8 @@ def test_dti_sub_int(self, tz, one): periods=10, tz=tz) tm.assert_index_equal(result, expected) - def test_dti_isub_int(self, tz, one): + def test_dti_isub_int(self, tz_naive_fixture, one): + tz = tz_naive_fixture rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, tz=tz) expected = pd.date_range('2000-01-01 08:00', freq='H', @@ -414,8 +416,9 @@ def test_dti_add_intarray_no_freq(self, box): # ------------------------------------------------------------- # DatetimeIndex.shift is used in integer addition - def test_dti_shift_tzaware(self, tz): + def test_dti_shift_tzaware(self, tz_naive_fixture): # GH#9903 + tz = tz_naive_fixture idx = pd.DatetimeIndex([], name='xxx', tz=tz) tm.assert_index_equal(idx.shift(0, freq='H'), idx) tm.assert_index_equal(idx.shift(3, freq='H'), idx) @@ -502,28 +505,32 @@ def test_dti_shift_near_midnight(self, shift, result_time): # ------------------------------------------------------------- # Binary operations DatetimeIndex and timedelta-like - def test_dti_add_timedeltalike(self, tz, delta): + def test_dti_add_timedeltalike(self, tz_naive_fixture, delta): + tz = tz_naive_fixture rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz) result = rng + delta expected = pd.date_range('2000-01-01 02:00', '2000-02-01 02:00', tz=tz) tm.assert_index_equal(result, expected) - def test_dti_iadd_timedeltalike(self, tz, delta): + def test_dti_iadd_timedeltalike(self, tz_naive_fixture, delta): + tz = tz_naive_fixture rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz) expected = pd.date_range('2000-01-01 02:00', '2000-02-01 02:00', tz=tz) rng += delta tm.assert_index_equal(rng, expected) - def test_dti_sub_timedeltalike(self, tz, delta): + def test_dti_sub_timedeltalike(self, tz_naive_fixture, delta): + tz = tz_naive_fixture rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz) expected 
= pd.date_range('1999-12-31 22:00', '2000-01-31 22:00', tz=tz) result = rng - delta tm.assert_index_equal(result, expected) - def test_dti_isub_timedeltalike(self, tz, delta): + def test_dti_isub_timedeltalike(self, tz_naive_fixture, delta): + tz = tz_naive_fixture rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz) expected = pd.date_range('1999-12-31 22:00', '2000-01-31 22:00', tz=tz) @@ -532,8 +539,9 @@ def test_dti_isub_timedeltalike(self, tz, delta): # ------------------------------------------------------------- # Binary operations DatetimeIndex and TimedeltaIndex/array - def test_dti_add_tdi(self, tz): + def test_dti_add_tdi(self, tz_naive_fixture): # GH 17558 + tz = tz_naive_fixture dti = DatetimeIndex([Timestamp('2017-01-01', tz=tz)] * 10) tdi = pd.timedelta_range('0 days', periods=10) expected = pd.date_range('2017-01-01', periods=10, tz=tz) @@ -552,8 +560,9 @@ def test_dti_add_tdi(self, tz): result = tdi.values + dti tm.assert_index_equal(result, expected) - def test_dti_iadd_tdi(self, tz): + def test_dti_iadd_tdi(self, tz_naive_fixture): # GH 17558 + tz = tz_naive_fixture dti = DatetimeIndex([Timestamp('2017-01-01', tz=tz)] * 10) tdi = pd.timedelta_range('0 days', periods=10) expected = pd.date_range('2017-01-01', periods=10, tz=tz) @@ -576,8 +585,9 @@ def test_dti_iadd_tdi(self, tz): result += dti tm.assert_index_equal(result, expected) - def test_dti_sub_tdi(self, tz): + def test_dti_sub_tdi(self, tz_naive_fixture): # GH 17558 + tz = tz_naive_fixture dti = DatetimeIndex([Timestamp('2017-01-01', tz=tz)] * 10) tdi = pd.timedelta_range('0 days', periods=10) expected = pd.date_range('2017-01-01', periods=10, tz=tz, freq='-1D') @@ -598,8 +608,9 @@ def test_dti_sub_tdi(self, tz): with tm.assert_raises_regex(TypeError, msg): tdi.values - dti - def test_dti_isub_tdi(self, tz): + def test_dti_isub_tdi(self, tz_naive_fixture): # GH 17558 + tz = tz_naive_fixture dti = DatetimeIndex([Timestamp('2017-01-01', tz=tz)] * 10) tdi = pd.timedelta_range('0 days', 
periods=10) expected = pd.date_range('2017-01-01', periods=10, tz=tz, freq='-1D') @@ -653,7 +664,8 @@ def test_add_datetimelike_and_dti_tz(self, addend): # ------------------------------------------------------------- # __add__/__sub__ with ndarray[datetime64] and ndarray[timedelta64] - def test_dti_add_dt64_array_raises(self, tz): + def test_dti_add_dt64_array_raises(self, tz_naive_fixture): + tz = tz_naive_fixture dti = pd.date_range('2016-01-01', periods=3, tz=tz) dtarr = dti.values @@ -672,7 +684,8 @@ def test_dti_sub_dt64_array_naive(self): result = dtarr - dti tm.assert_index_equal(result, expected) - def test_dti_sub_dt64_array_aware_raises(self, tz): + def test_dti_sub_dt64_array_aware_raises(self, tz_naive_fixture): + tz = tz_naive_fixture if tz is None: return dti = pd.date_range('2016-01-01', periods=3, tz=tz) @@ -683,7 +696,8 @@ def test_dti_sub_dt64_array_aware_raises(self, tz): with pytest.raises(TypeError): dtarr - dti - def test_dti_add_td64_array(self, tz): + def test_dti_add_td64_array(self, tz_naive_fixture): + tz = tz_naive_fixture dti = pd.date_range('2016-01-01', periods=3, tz=tz) tdi = dti - dti.shift(1) tdarr = tdi.values @@ -694,7 +708,8 @@ def test_dti_add_td64_array(self, tz): result = tdarr + dti tm.assert_index_equal(result, expected) - def test_dti_sub_td64_array(self, tz): + def test_dti_sub_td64_array(self, tz_naive_fixture): + tz = tz_naive_fixture dti = pd.date_range('2016-01-01', periods=3, tz=tz) tdi = dti - dti.shift(1) tdarr = tdi.values @@ -867,8 +882,9 @@ def test_dti_add_series(self, tz, names): result4 = index + ser.values tm.assert_index_equal(result4, expected) - def test_dti_add_offset_array(self, tz): + def test_dti_add_offset_array(self, tz_naive_fixture): # GH#18849 + tz = tz_naive_fixture dti = pd.date_range('2017-01-01', periods=2, tz=tz) other = np.array([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)]) @@ -885,8 +901,9 @@ def test_dti_add_offset_array(self, tz): @pytest.mark.parametrize('names', [(None, None, None), 
('foo', 'bar', None), ('foo', 'foo', 'foo')]) - def test_dti_add_offset_index(self, tz, names): + def test_dti_add_offset_index(self, tz_naive_fixture, names): # GH#18849, GH#19744 + tz = tz_naive_fixture dti = pd.date_range('2017-01-01', periods=2, tz=tz, name=names[0]) other = pd.Index([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)], name=names[1]) @@ -901,8 +918,9 @@ def test_dti_add_offset_index(self, tz, names): res2 = other + dti tm.assert_index_equal(res2, expected) - def test_dti_sub_offset_array(self, tz): + def test_dti_sub_offset_array(self, tz_naive_fixture): # GH#18824 + tz = tz_naive_fixture dti = pd.date_range('2017-01-01', periods=2, tz=tz) other = np.array([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)]) @@ -915,8 +933,9 @@ def test_dti_sub_offset_array(self, tz): @pytest.mark.parametrize('names', [(None, None, None), ('foo', 'bar', None), ('foo', 'foo', 'foo')]) - def test_dti_sub_offset_index(self, tz, names): + def test_dti_sub_offset_index(self, tz_naive_fixture, names): # GH#18824, GH#19744 + tz = tz_naive_fixture dti = pd.date_range('2017-01-01', periods=2, tz=tz, name=names[0]) other = pd.Index([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)], name=names[1]) @@ -930,8 +949,9 @@ def test_dti_sub_offset_index(self, tz, names): @pytest.mark.parametrize('names', [(None, None, None), ('foo', 'bar', None), ('foo', 'foo', 'foo')]) - def test_dti_with_offset_series(self, tz, names): + def test_dti_with_offset_series(self, tz_naive_fixture, names): # GH#18849 + tz = tz_naive_fixture dti = pd.date_range('2017-01-01', periods=2, tz=tz, name=names[0]) other = Series([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)], name=names[1]) diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py index c6334e70a1d2c..6ccd310f33bbd 100644 --- a/pandas/tests/indexes/datetimes/test_ops.py +++ b/pandas/tests/indexes/datetimes/test_ops.py @@ -14,13 +14,6 @@ from pandas.core.dtypes.generic import ABCDateOffset -@pytest.fixture(params=[None, 
'UTC', 'Asia/Tokyo', 'US/Eastern', - 'dateutil/Asia/Singapore', - 'dateutil/US/Pacific']) -def tz_fixture(request): - return request.param - - START, END = datetime(2009, 1, 1), datetime(2010, 1, 1) @@ -53,8 +46,8 @@ def test_ops_properties_basic(self): assert s.day == 10 pytest.raises(AttributeError, lambda: s.weekday) - def test_minmax_tz(self, tz_fixture): - tz = tz_fixture + def test_minmax_tz(self, tz_naive_fixture): + tz = tz_naive_fixture # monotonic idx1 = pd.DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], tz=tz) @@ -103,8 +96,8 @@ def test_numpy_minmax(self): tm.assert_raises_regex( ValueError, errmsg, np.argmax, dr, out=0) - def test_repeat_range(self, tz_fixture): - tz = tz_fixture + def test_repeat_range(self, tz_naive_fixture): + tz = tz_naive_fixture rng = date_range('1/1/2000', '1/1/2001') result = rng.repeat(5) @@ -135,8 +128,8 @@ def test_repeat_range(self, tz_fixture): tm.assert_index_equal(res, exp) assert res.freq is None - def test_repeat(self, tz_fixture): - tz = tz_fixture + def test_repeat(self, tz_naive_fixture): + tz = tz_naive_fixture reps = 2 msg = "the 'axis' parameter is not supported" @@ -158,8 +151,8 @@ def test_repeat(self, tz_fixture): tm.assert_raises_regex(ValueError, msg, np.repeat, rng, reps, axis=1) - def test_resolution(self, tz_fixture): - tz = tz_fixture + def test_resolution(self, tz_naive_fixture): + tz = tz_naive_fixture for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T', 'S', 'L', 'U'], ['day', 'day', 'day', 'day', 'hour', @@ -169,8 +162,8 @@ def test_resolution(self, tz_fixture): tz=tz) assert idx.resolution == expected - def test_value_counts_unique(self, tz_fixture): - tz = tz_fixture + def test_value_counts_unique(self, tz_naive_fixture): + tz = tz_naive_fixture # GH 7735 idx = pd.date_range('2011-01-01 09:00', freq='H', periods=10) # create repeated values, 'n'th element is repeated by n+1 times @@ -270,8 +263,9 @@ def test_order_with_freq(self, idx): [pd.NaT, pd.NaT, '2011-01-02', '2011-01-03', 
'2011-01-05']) ]) - def test_order_without_freq(self, index_dates, expected_dates, tz_fixture): - tz = tz_fixture + def test_order_without_freq(self, index_dates, expected_dates, + tz_naive_fixture): + tz = tz_naive_fixture # without freq index = DatetimeIndex(index_dates, tz=tz, name='idx') @@ -356,11 +350,11 @@ def test_nat_new(self): tm.assert_numpy_array_equal(result, exp) def test_nat(self, tz_naive_fixture): - timezone = tz_naive_fixture + tz = tz_naive_fixture assert pd.DatetimeIndex._na_value is pd.NaT assert pd.DatetimeIndex([])._na_value is pd.NaT - idx = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], tz=timezone) + idx = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], tz=tz) assert idx._can_hold_na tm.assert_numpy_array_equal(idx._isnan, np.array([False, False])) @@ -368,7 +362,7 @@ def test_nat(self, tz_naive_fixture): tm.assert_numpy_array_equal(idx._nan_idxs, np.array([], dtype=np.intp)) - idx = pd.DatetimeIndex(['2011-01-01', 'NaT'], tz=timezone) + idx = pd.DatetimeIndex(['2011-01-01', 'NaT'], tz=tz) assert idx._can_hold_na tm.assert_numpy_array_equal(idx._isnan, np.array([False, True])) diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py index f0442d9d40ef1..6f6f4eb8d24e3 100644 --- a/pandas/tests/indexes/datetimes/test_scalar_compat.py +++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py @@ -13,13 +13,6 @@ from pandas import date_range, Timestamp, DatetimeIndex -@pytest.fixture(params=[None, 'UTC', 'Asia/Tokyo', - 'US/Eastern', 'dateutil/Asia/Singapore', - 'dateutil/US/Pacific']) -def tz(request): - return request.param - - class TestDatetimeIndexOps(object): def test_dti_time(self): rng = date_range('1/1/2000', freq='12min', periods=10) @@ -84,7 +77,8 @@ def test_round_daily(self): for freq in ['Y', 'M', 'foobar']: pytest.raises(ValueError, lambda: dti.round(freq)) - def test_round(self, tz): + def test_round(self, tz_naive_fixture): + tz = tz_naive_fixture rng = 
date_range(start='2016-01-01', periods=5, freq='30Min', tz=tz) elt = rng[1] @@ -134,8 +128,9 @@ def test_round(self, tz): ts = '2016-10-17 12:00:00.001501031' DatetimeIndex([ts]).round('1010ns') - def test_no_rounding_occurs(self, tz): + def test_no_rounding_occurs(self, tz_naive_fixture): # GH 21262 + tz = tz_naive_fixture rng = date_range(start='2016-01-01', periods=5, freq='2Min', tz=tz) @@ -167,7 +162,7 @@ def test_no_rounding_occurs(self, tz): (('NaT', '1823-01-01 00:00:01'), 'ceil', '1s', ('NaT', '1823-01-01 00:00:01')) ]) - def test_ceil_floor_edge(self, tz, test_input, rounder, freq, expected): + def test_ceil_floor_edge(self, test_input, rounder, freq, expected): dt = DatetimeIndex(list(test_input)) func = getattr(dt, rounder) result = func(freq) diff --git a/pandas/tests/tseries/conftest.py b/pandas/tests/tseries/conftest.py deleted file mode 100644 index fc1ecf21c5446..0000000000000 --- a/pandas/tests/tseries/conftest.py +++ /dev/null @@ -1,7 +0,0 @@ -import pytest - - -@pytest.fixture(params=[None, 'UTC', 'Asia/Tokyo', 'US/Eastern', - 'dateutil/Asia/Tokyo', 'dateutil/US/Pacific']) -def tz(request): - return request.param diff --git a/pandas/tests/tseries/offsets/conftest.py b/pandas/tests/tseries/offsets/conftest.py index 76f24123ea0e1..4766e7e277b13 100644 --- a/pandas/tests/tseries/offsets/conftest.py +++ b/pandas/tests/tseries/offsets/conftest.py @@ -4,6 +4,9 @@ @pytest.fixture(params=[getattr(offsets, o) for o in offsets.__all__]) def offset_types(request): + """ + Fixture for all the datetime offsets available for a time series. + """ return request.param @@ -11,16 +14,16 @@ def offset_types(request): issubclass(getattr(offsets, o), offsets.MonthOffset) and o != 'MonthOffset']) def month_classes(request): + """ + Fixture for month based datetime offsets available for a time series. 
+ """ return request.param @pytest.fixture(params=[getattr(offsets, o) for o in offsets.__all__ if issubclass(getattr(offsets, o), offsets.Tick)]) def tick_classes(request): - return request.param - - -@pytest.fixture(params=[None, 'UTC', 'Asia/Tokyo', 'US/Eastern', - 'dateutil/Asia/Tokyo', 'dateutil/US/Pacific']) -def tz(request): + """ + Fixture for Tick based datetime offsets available for a time series. + """ return request.param diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py index a5cd839c1472f..db69bfadfcf49 100644 --- a/pandas/tests/tseries/offsets/test_offsets.py +++ b/pandas/tests/tseries/offsets/test_offsets.py @@ -40,7 +40,6 @@ from .common import assert_offset_equal, assert_onOffset - #### # Misc function tests #### @@ -107,7 +106,8 @@ def _get_offset(self, klass, value=1, normalize=False): klass = klass(normalize=normalize) return klass - def test_apply_out_of_range(self, tz): + def test_apply_out_of_range(self, tz_naive_fixture): + tz = tz_naive_fixture if self._offset is None: return @@ -479,7 +479,8 @@ def test_onOffset(self, offset_types): date = datetime(dt.year, dt.month, dt.day) assert offset_n.onOffset(date) - def test_add(self, offset_types, tz): + def test_add(self, offset_types, tz_naive_fixture): + tz = tz_naive_fixture dt = datetime(2011, 1, 1, 9, 0) offset_s = self._get_offset(offset_types)
- [ ] xref #19159 - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21496
2018-06-15T12:08:38Z
2018-07-08T22:03:52Z
2018-07-08T22:03:52Z
2018-07-09T19:34:42Z
PERF: improve speed of nans in CategoricalIndex
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py index 7f2860a963423..0093d4940751e 100644 --- a/pandas/core/indexes/category.py +++ b/pandas/core/indexes/category.py @@ -326,7 +326,7 @@ def __contains__(self, key): hash(key) if isna(key): # if key is a NaN, check if any NaN is in self. - return self.isna().any() + return self.hasnans # is key in self.categories? Then get its location. # If not (i.e. KeyError), it logically can't be in self either
This is a minor follow-up to #21369. ```python >>> n = 100_000 >>> ci = pd.CategoricalIndex(['a']*n + ['b']*n + ['c']*n + [np.nan]) >>> np.nan in ci 19.5 us # master 114 ns # this PR ``` Using ``self.hasnans`` to check for nans is faster than ``self.isna().any()`` because it's cached.
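The speedup the PR body attributes to caching can be illustrated with a minimal stdlib sketch. This is a hypothetical `ToyIndex` class, not pandas' actual implementation: a `functools.cached_property` performs the O(n) NaN scan once and memoizes the result, so repeated `in` checks for NaN skip the rescan — the same idea behind preferring `self.hasnans` over `self.isna().any()`.

```python
import math
from functools import cached_property

class ToyIndex:
    """Hypothetical stand-in for an index; not pandas' real class."""

    def __init__(self, values):
        self._values = list(values)
        self.scans = 0  # counts full passes over the data

    @cached_property
    def hasnans(self):
        # The O(n) scan runs only on first access; the result is memoized.
        self.scans += 1
        return any(isinstance(v, float) and math.isnan(v)
                   for v in self._values)

    def __contains__(self, key):
        if isinstance(key, float) and math.isnan(key):
            return self.hasnans  # cached after the first lookup
        return key in self._values

idx = ToyIndex(['a', 'b', float('nan')])
assert float('nan') in idx
assert float('nan') in idx  # second check reuses the cached result
assert idx.scans == 1       # the data was scanned exactly once
```

The first membership test pays the full scan; every later NaN check is a constant-time attribute read, which matches the microsecond-to-nanosecond drop shown in the benchmark above.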
https://api.github.com/repos/pandas-dev/pandas/pulls/21493
2018-06-15T07:26:33Z
2018-06-15T12:51:18Z
2018-06-15T12:51:18Z
2018-07-02T23:24:25Z
TST: Add unit tests for older timezone issues
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index b8d865195cddd..d5f3cfa477eca 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -79,6 +79,11 @@ Bug Fixes **Timezones** - Bug in :class:`Timestamp` and :class:`DatetimeIndex` where passing a :class:`Timestamp` localized after a DST transition would return a datetime before the DST transition (:issue:`20854`) - Bug in comparing :class:`DataFrame`s with tz-aware :class:`DatetimeIndex` columns with a DST transition that raised a ``KeyError`` (:issue:`19970`) +- Bug in :meth:`DatetimeIndex.shift` where an ``AssertionError`` would raise when shifting across DST (:issue:`8616`) +- Bug in :class:`Timestamp` constructor where passing an invalid timezone offset designator (``Z``) would not raise a ``ValueError``(:issue:`8910`) +- Bug in :meth:`Timestamp.replace` where replacing at a DST boundary would retain an incorrect offset (:issue:`7825`) +- Bug in :meth:`DatetimeIndex.reindex` when reindexing a tz-naive and tz-aware :class:`DatetimeIndex` (:issue:`8306`) +- Bug in :meth:`DatetimeIndex.resample` when downsampling across a DST boundary (:issue:`8531`) **Other** diff --git a/pandas/tests/indexes/datetimes/test_arithmetic.py b/pandas/tests/indexes/datetimes/test_arithmetic.py index eff2872a1cff3..0649083a440df 100644 --- a/pandas/tests/indexes/datetimes/test_arithmetic.py +++ b/pandas/tests/indexes/datetimes/test_arithmetic.py @@ -4,7 +4,7 @@ import operator import pytest - +import pytz import numpy as np import pandas as pd @@ -476,6 +476,28 @@ def test_dti_shift_localized(self, tzstr): result = dr_tz.shift(1, '10T') assert result.tz == dr_tz.tz + def test_dti_shift_across_dst(self): + # GH 8616 + idx = date_range('2013-11-03', tz='America/Chicago', + periods=7, freq='H') + s = Series(index=idx[:-1]) + result = s.shift(freq='H') + expected = Series(index=idx[1:]) + tm.assert_series_equal(result, expected) + + @pytest.mark.parametrize('shift, 
result_time', [ + [0, '2014-11-14 00:00:00'], + [-1, '2014-11-13 23:00:00'], + [1, '2014-11-14 01:00:00']]) + def test_dti_shift_near_midnight(self, shift, result_time): + # GH 8616 + dt = datetime(2014, 11, 14, 0) + dt_est = pytz.timezone('EST').localize(dt) + s = Series(data=[1], index=[dt_est]) + result = s.shift(shift, freq='H') + expected = Series(1, index=DatetimeIndex([result_time], tz='EST')) + tm.assert_series_equal(result, expected) + # ------------------------------------------------------------- # Binary operations DatetimeIndex and timedelta-like diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py index 4689c7bea626f..8dc9903b7356d 100644 --- a/pandas/tests/scalar/timestamp/test_timestamp.py +++ b/pandas/tests/scalar/timestamp/test_timestamp.py @@ -420,6 +420,12 @@ def test_constructor_nanosecond(self, result): expected = expected + Timedelta(nanoseconds=1) assert result == expected + @pytest.mark.parametrize('z', ['Z0', 'Z00']) + def test_constructor_invalid_Z0_isostring(self, z): + # GH 8910 + with pytest.raises(ValueError): + Timestamp('2014-11-02 01:00{}'.format(z)) + @pytest.mark.parametrize('arg', ['year', 'month', 'day', 'hour', 'minute', 'second', 'microsecond', 'nanosecond']) def test_invalid_date_kwarg_with_string_input(self, arg): diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py index aecddab8477fc..6f3b5ae6a20a3 100644 --- a/pandas/tests/scalar/timestamp/test_unary_ops.py +++ b/pandas/tests/scalar/timestamp/test_unary_ops.py @@ -238,6 +238,13 @@ def test_replace_across_dst(self, tz, normalize): ts2b = normalize(ts2) assert ts2 == ts2b + def test_replace_dst_border(self): + # Gh 7825 + t = Timestamp('2013-11-3', tz='America/Chicago') + result = t.replace(hour=3) + expected = Timestamp('2013-11-3 03:00:00', tz='America/Chicago') + assert result == expected + # -------------------------------------------------------------- 
@td.skip_if_windows diff --git a/pandas/tests/series/indexing/test_alter_index.py b/pandas/tests/series/indexing/test_alter_index.py index 999ed5f26daee..bcd5a64402c33 100644 --- a/pandas/tests/series/indexing/test_alter_index.py +++ b/pandas/tests/series/indexing/test_alter_index.py @@ -453,6 +453,15 @@ def test_reindex_fill_value(): assert_series_equal(result, expected) +def test_reindex_datetimeindexes_tz_naive_and_aware(): + # GH 8306 + idx = date_range('20131101', tz='America/Chicago', periods=7) + newidx = date_range('20131103', periods=10, freq='H') + s = Series(range(7), index=idx) + with pytest.raises(TypeError): + s.reindex(newidx, method='ffill') + + def test_rename(): # GH 17407 s = Series(range(1, 6), index=pd.Index(range(2, 7), name='IntIndex')) diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py index c1257cce9a9a4..6f0ad0535c6b4 100644 --- a/pandas/tests/test_resample.py +++ b/pandas/tests/test_resample.py @@ -2084,6 +2084,17 @@ def test_resample_dst_anchor(self): freq='D', tz='Europe/Paris')), 'D Frequency') + def test_downsample_across_dst(self): + # GH 8531 + tz = pytz.timezone('Europe/Berlin') + dt = datetime(2014, 10, 26) + dates = date_range(tz.localize(dt), periods=4, freq='2H') + result = Series(5, index=dates).resample('H').mean() + expected = Series([5., np.nan] * 3 + [5.], + index=date_range(tz.localize(dt), periods=7, + freq='H')) + tm.assert_series_equal(result, expected) + def test_resample_with_nat(self): # GH 13020 index = DatetimeIndex([pd.NaT,
- [x] closes #8616 - [x] closes #8910 - [x] closes #7825 - [x] closes #8306 - [x] closes #8531 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` In the spirit of your https://github.com/pandas-dev/pandas/pull/21407#issuecomment-396896678 @jreback, cleaned up some old timezone issues (but not necessarily covered by #21407) that appear to have been solved
https://api.github.com/repos/pandas-dev/pandas/pulls/21491
2018-06-15T06:25:05Z
2018-06-18T22:34:30Z
2018-06-18T22:34:28Z
2018-06-26T07:44:36Z
CLN: Index imports and 0.23.1 whatsnew
diff --git a/doc/source/whatsnew/v0.23.1.txt b/doc/source/whatsnew/v0.23.1.txt index db25bcf8113f5..af4eeffd87d01 100644 --- a/doc/source/whatsnew/v0.23.1.txt +++ b/doc/source/whatsnew/v0.23.1.txt @@ -97,8 +97,8 @@ Bug Fixes **Data-type specific** -- Bug in :meth:`Series.str.replace()` where the method throws `TypeError` on Python 3.5.2 (:issue: `21078`) -- Bug in :class:`Timedelta`: where passing a float with a unit would prematurely round the float precision (:issue: `14156`) +- Bug in :meth:`Series.str.replace()` where the method throws `TypeError` on Python 3.5.2 (:issue:`21078`) +- Bug in :class:`Timedelta` where passing a float with a unit would prematurely round the float precision (:issue:`14156`) - Bug in :func:`pandas.testing.assert_index_equal` which raised ``AssertionError`` incorrectly, when comparing two :class:`CategoricalIndex` objects with param ``check_categorical=False`` (:issue:`19776`) **Sparse** @@ -110,12 +110,12 @@ Bug Fixes - Bug in :meth:`Series.reset_index` where appropriate error was not raised with an invalid level name (:issue:`20925`) - Bug in :func:`interval_range` when ``start``/``periods`` or ``end``/``periods`` are specified with float ``start`` or ``end`` (:issue:`21161`) - Bug in :meth:`MultiIndex.set_names` where error raised for a ``MultiIndex`` with ``nlevels == 1`` (:issue:`21149`) -- Bug in :class:`IntervalIndex` constructors where creating an ``IntervalIndex`` from categorical data was not fully supported (:issue:`21243`, issue:`21253`) +- Bug in :class:`IntervalIndex` constructors where creating an ``IntervalIndex`` from categorical data was not fully supported (:issue:`21243`, :issue:`21253`) - Bug in :meth:`MultiIndex.sort_index` which was not guaranteed to sort correctly with ``level=1``; this was also causing data misalignment in particular :meth:`DataFrame.stack` operations (:issue:`20994`, :issue:`20945`, :issue:`21052`) **Plotting** -- New keywords (sharex, sharey) to turn on/off sharing of x/y-axis by subplots 
generated with pandas.DataFrame().groupby().boxplot() (:issue: `20968`) +- New keywords (sharex, sharey) to turn on/off sharing of x/y-axis by subplots generated with pandas.DataFrame().groupby().boxplot() (:issue:`20968`) **I/O** diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 4b32e5d4f5654..6a56278b0da49 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -283,7 +283,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, if (is_datetime64_any_dtype(data) or (dtype is not None and is_datetime64_any_dtype(dtype)) or 'tz' in kwargs): - from pandas.core.indexes.datetimes import DatetimeIndex + from pandas import DatetimeIndex result = DatetimeIndex(data, copy=copy, name=name, dtype=dtype, **kwargs) if dtype is not None and is_dtype_equal(_o_dtype, dtype): @@ -293,7 +293,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, elif (is_timedelta64_dtype(data) or (dtype is not None and is_timedelta64_dtype(dtype))): - from pandas.core.indexes.timedeltas import TimedeltaIndex + from pandas import TimedeltaIndex result = TimedeltaIndex(data, copy=copy, name=name, **kwargs) if dtype is not None and _o_dtype == dtype: return Index(result.to_pytimedelta(), dtype=_o_dtype) @@ -404,8 +404,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, if (lib.is_datetime_with_singletz_array(subarr) or 'tz' in kwargs): # only when subarr has the same tz - from pandas.core.indexes.datetimes import ( - DatetimeIndex) + from pandas import DatetimeIndex try: return DatetimeIndex(subarr, copy=copy, name=name, **kwargs) @@ -413,8 +412,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, pass elif inferred.startswith('timedelta'): - from pandas.core.indexes.timedeltas import ( - TimedeltaIndex) + from pandas import TimedeltaIndex return TimedeltaIndex(subarr, copy=copy, name=name, **kwargs) elif inferred == 'period': @@ -1177,7 +1175,7 @@ def astype(self, dtype, copy=True): copy=copy) 
try: if is_datetime64tz_dtype(dtype): - from pandas.core.indexes.datetimes import DatetimeIndex + from pandas import DatetimeIndex return DatetimeIndex(self.values, name=self.name, dtype=dtype, copy=copy) return Index(self.values.astype(dtype, copy=copy), name=self.name, @@ -3333,7 +3331,7 @@ def get_indexer_for(self, target, **kwargs): def _maybe_promote(self, other): # A hack, but it works - from pandas.core.indexes.datetimes import DatetimeIndex + from pandas import DatetimeIndex if self.inferred_type == 'date' and isinstance(other, DatetimeIndex): return DatetimeIndex(self), other elif self.inferred_type == 'boolean':
xref @jreback https://github.com/pandas-dev/pandas/pull/21216#discussion_r195367428 and cleaned up some formatting in the 0.23.1 whatsnew where GH issue references didn't hyperlink
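The whatsnew cleanups in the diff above all fix the same Sphinx pitfall: a space between the `:issue:` role and its backticked argument (``:issue: `21078```) stops Sphinx from rendering the role, so the entry shows literal text instead of a GitHub issue hyperlink. A quick illustrative check (my own sketch, not pandas code; the regex and helper name are assumptions for illustration):

```python
import re

# Sphinx only renders a role when the role name and its backticked
# argument are adjacent. ":issue: `21078`" (note the space) is left as
# literal text; ":issue:`21078`" becomes a hyperlink via pandas' custom
# :issue: role.
ISSUE_ROLE = re.compile(r":issue:`\d+`")

def renders_as_link(entry):
    """True if the whatsnew entry contains a well-formed :issue: role."""
    return bool(ISSUE_ROLE.search(entry))
```

This is the distinction the 0.23.1 whatsnew hunks above are correcting, entry by entry.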
https://api.github.com/repos/pandas-dev/pandas/pulls/21490
2018-06-15T05:36:59Z
2018-06-15T12:50:06Z
2018-06-15T12:50:06Z
2018-06-29T14:49:30Z
read_html: Handle colspan and rowspan
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 1b2033999d67d..d0b8f00150099 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -10,7 +10,7 @@ New features - ``ExcelWriter`` now accepts ``mode`` as a keyword argument, enabling append to existing workbooks when using the ``openpyxl`` engine (:issue:`3441`) -.. _whatsnew_0240.enhancements.extension_array_operators +.. _whatsnew_0240.enhancements.extension_array_operators: ``ExtensionArray`` operator support ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -26,6 +26,46 @@ See the :ref:`ExtensionArray Operator Support <extending.extension.operator>` documentation section for details on both ways of adding operator support. +.. _whatsnew_0240.enhancements.read_html: + +``read_html`` Enhancements +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +:func:`read_html` previously ignored ``colspan`` and ``rowspan`` attributes. +Now it understands them, treating them as sequences of cells with the same +value. (:issue:`17054`) + +.. ipython:: python + + result = pd.read_html(""" + <table> + <thead> + <tr> + <th>A</th><th>B</th><th>C</th> + </tr> + </thead> + <tbody> + <tr> + <td colspan="2">1</td><td>2</td> + </tr> + </tbody> + </table>""") + +Previous Behavior: + +.. code-block:: ipython + + In [13]: result + Out [13]: + [ A B C + 0 1 2 NaN] + +Current Behavior: + +.. ipython:: python + + result + .. _whatsnew_0240.enhancements.other: Other Enhancements @@ -40,6 +80,7 @@ Other Enhancements <https://pandas-gbq.readthedocs.io/en/latest/changelog.html#changelog-0-5-0>`__. 
(:issue:`21627`) - New method :meth:`HDFStore.walk` will recursively walk the group hierarchy of an HDF5 file (:issue:`10932`) +- :func:`read_html` copies cell data across ``colspan``s and ``rowspan``s, and it treats all-``th`` table rows as headers if ``header`` kwarg is not given and there is no ``thead`` (:issue:`17054`) - :meth:`Series.nlargest`, :meth:`Series.nsmallest`, :meth:`DataFrame.nlargest`, and :meth:`DataFrame.nsmallest` now accept the value ``"all"`` for the ``keep` argument. This keeps all ties for the nth largest/smallest value (:issue:`16818`) - :class:`IntervalIndex` has gained the :meth:`~IntervalIndex.set_closed` method to change the existing ``closed`` value (:issue:`21670`) - @@ -329,7 +370,7 @@ MultiIndex I/O ^^^ -- +- :func:`read_html()` no longer ignores all-whitespace ``<tr>`` within ``<thead>`` when considering the ``skiprows`` and ``header`` arguments. Previously, users had to decrease their ``header`` and ``skiprows`` values on such tables to work around the issue. (:issue:`21641`) - - diff --git a/pandas/io/html.py b/pandas/io/html.py index 8fd876e85889f..45fe3b017e4f6 100644 --- a/pandas/io/html.py +++ b/pandas/io/html.py @@ -10,8 +10,6 @@ from distutils.version import LooseVersion -import numpy as np - from pandas.core.dtypes.common import is_list_like from pandas.errors import EmptyDataError from pandas.io.common import _is_url, urlopen, _validate_header_arg @@ -191,13 +189,14 @@ class _HtmlFrameParser(object): ----- To subclass this class effectively you must override the following methods: * :func:`_build_doc` + * :func:`_attr_getter` * :func:`_text_getter` * :func:`_parse_td` + * :func:`_parse_thead_tr` + * :func:`_parse_tbody_tr` + * :func:`_parse_tfoot_tr` * :func:`_parse_tables` - * :func:`_parse_tr` - * :func:`_parse_thead` - * :func:`_parse_tbody` - * :func:`_parse_tfoot` + * :func:`_equals_tag` See each method's respective documentation for details on their functionality. 
""" @@ -210,35 +209,39 @@ def __init__(self, io, match, attrs, encoding, displayed_only): self.displayed_only = displayed_only def parse_tables(self): + """ + Parse and return all tables from the DOM. + + Returns + ------- + list of parsed (header, body, footer) tuples from tables. + """ tables = self._parse_tables(self._build_doc(), self.match, self.attrs) - return (self._build_table(table) for table in tables) + return (self._parse_thead_tbody_tfoot(table) for table in tables) - def _parse_raw_data(self, rows): - """Parse the raw data into a list of lists. + def _attr_getter(self, obj, attr): + """ + Return the attribute value of an individual DOM node. Parameters ---------- - rows : iterable of node-like - A list of row elements. - - text_getter : callable - A callable that gets the text from an individual node. This must be - defined by subclasses. + obj : node-like + A DOM node. - column_finder : callable - A callable that takes a row node as input and returns a list of the - column node in that row. This must be defined by subclasses. + attr : str or unicode + The attribute, such as "colspan" Returns ------- - data : list of list of strings + str or unicode + The attribute value. """ - data = [[_remove_whitespace(self._text_getter(col)) for col in - self._parse_td(row)] for row in rows] - return data + # Both lxml and BeautifulSoup have the same implementation: + return obj.get(attr) def _text_getter(self, obj): - """Return the text of an individual DOM node. + """ + Return the text of an individual DOM node. Parameters ---------- @@ -258,161 +261,257 @@ def _parse_td(self, obj): Parameters ---------- obj : node-like + A DOM <tr> node. Returns ------- - columns : list of node-like + list of node-like These are the elements of each row, i.e., the columns. """ raise com.AbstractMethodError(self) - def _parse_tables(self, doc, match, attrs): - """Return all tables from the parsed DOM. 
+ def _parse_thead_tr(self, table): + """ + Return the list of thead row elements from the parsed table element. Parameters ---------- - doc : tree-like - The DOM from which to parse the table element. - - match : str or regular expression - The text to search for in the DOM tree. - - attrs : dict - A dictionary of table attributes that can be used to disambiguate - multiple tables on a page. - - Raises - ------ - ValueError - * If `match` does not match any text in the document. + table : a table element that contains zero or more thead elements. Returns ------- - tables : list of node-like - A list of <table> elements to be parsed into raw data. + list of node-like + These are the <tr> row elements of a table. """ raise com.AbstractMethodError(self) - def _parse_tr(self, table): - """Return the list of row elements from the parsed table element. + def _parse_tbody_tr(self, table): + """ + Return the list of tbody row elements from the parsed table element. + + HTML5 table bodies consist of either 0 or more <tbody> elements (which + only contain <tr> elements) or 0 or more <tr> elements. This method + checks for both structures. Parameters ---------- - table : node-like - A table element that contains row elements. + table : a table element that contains row elements. Returns ------- - rows : list of node-like - A list row elements of a table, usually <tr> or <th> elements. + list of node-like + These are the <tr> row elements of a table. """ raise com.AbstractMethodError(self) - def _parse_thead(self, table): - """Return the header of a table. + def _parse_tfoot_tr(self, table): + """ + Return the list of tfoot row elements from the parsed table element. Parameters ---------- - table : node-like - A table element that contains row elements. + table : a table element that contains row elements. Returns ------- - thead : node-like - A <thead>...</thead> element. + list of node-like + These are the <tr> row elements of a table. 
""" raise com.AbstractMethodError(self) - def _parse_tbody(self, table): - """Return the list of tbody elements from the parsed table element. + def _parse_tables(self, doc, match, attrs): + """ + Return all tables from the parsed DOM. Parameters ---------- - table : node-like - A table element that contains row elements. + doc : the DOM from which to parse the table element. + + match : str or regular expression + The text to search for in the DOM tree. + + attrs : dict + A dictionary of table attributes that can be used to disambiguate + multiple tables on a page. + + Raises + ------ + ValueError : `match` does not match any text in the document. Returns ------- - tbodys : list of node-like - A list of <tbody>...</tbody> elements + list of node-like + HTML <table> elements to be parsed into raw data. """ raise com.AbstractMethodError(self) - def _parse_tfoot(self, table): - """Return the footer of the table if any. + def _equals_tag(self, obj, tag): + """ + Return whether an individual DOM node matches a tag Parameters ---------- - table : node-like - A table element that contains row elements. + obj : node-like + A DOM node. + + tag : str + Tag name to be checked for equality. Returns ------- - tfoot : node-like - A <tfoot>...</tfoot> element. + boolean + Whether `obj`'s tag name is `tag` """ raise com.AbstractMethodError(self) def _build_doc(self): - """Return a tree-like object that can be used to iterate over the DOM. + """ + Return a tree-like object that can be used to iterate over the DOM. Returns ------- - obj : tree-like + node-like + The DOM from which to parse the table element. """ raise com.AbstractMethodError(self) - def _build_table(self, table): - header = self._parse_raw_thead(table) - body = self._parse_raw_tbody(table) - footer = self._parse_raw_tfoot(table) + def _parse_thead_tbody_tfoot(self, table_html): + """ + Given a table, return parsed header, body, and foot. 
+ + Parameters + ---------- + table_html : node-like + + Returns + ------- + tuple of (header, body, footer), each a list of list-of-text rows. + + Notes + ----- + Header and body are lists-of-lists. Top level list is a list of + rows. Each row is a list of str text. + + Logic: Use <thead>, <tbody>, <tfoot> elements to identify + header, body, and footer, otherwise: + - Put all rows into body + - Move rows from top of body to header only if + all elements inside row are <th> + - Move rows from bottom of body to footer only if + all elements inside row are <th> + """ + + header_rows = self._parse_thead_tr(table_html) + body_rows = self._parse_tbody_tr(table_html) + footer_rows = self._parse_tfoot_tr(table_html) + + def row_is_all_th(row): + return all(self._equals_tag(t, 'th') for t in + self._parse_td(row)) + + if not header_rows: + # The table has no <thead>. Move the top all-<th> rows from + # body_rows to header_rows. (This is a common case because many + # tables in the wild have no <thead> or <tfoot> + while body_rows and row_is_all_th(body_rows[0]): + header_rows.append(body_rows.pop(0)) + + header = self._expand_colspan_rowspan(header_rows) + body = self._expand_colspan_rowspan(body_rows) + footer = self._expand_colspan_rowspan(footer_rows) + return header, body, footer - def _parse_raw_thead(self, table): - thead = self._parse_thead(table) - res = [] - if thead: - trs = self._parse_tr(thead[0]) - for tr in trs: - cols = lmap(self._text_getter, self._parse_td(tr)) - if any(col != '' for col in cols): - res.append(cols) - return res - - def _parse_raw_tfoot(self, table): - tfoot = self._parse_tfoot(table) - res = [] - if tfoot: - res = lmap(self._text_getter, self._parse_td(tfoot[0])) - return np.atleast_1d( - np.array(res).squeeze()) if res and len(res) == 1 else res - - def _parse_raw_tbody(self, table): - tbodies = self._parse_tbody(table) - - raw_data = [] - - if tbodies: - for tbody in tbodies: - raw_data.extend(self._parse_tr(tbody)) - else: - 
raw_data.extend(self._parse_tr(table)) + def _expand_colspan_rowspan(self, rows): + """ + Given a list of <tr>s, return a list of text rows. - return self._parse_raw_data(raw_data) + Parameters + ---------- + rows : list of node-like + List of <tr>s + + Returns + ------- + list of list + Each returned row is a list of str text. + + Notes + ----- + Any cell with ``rowspan`` or ``colspan`` will have its contents copied + to subsequent cells. + """ + + all_texts = [] # list of rows, each a list of str + remainder = [] # list of (index, text, nrows) + + for tr in rows: + texts = [] # the output for this row + next_remainder = [] + + index = 0 + tds = self._parse_td(tr) + for td in tds: + # Append texts from previous rows with rowspan>1 that come + # before this <td> + while remainder and remainder[0][0] <= index: + prev_i, prev_text, prev_rowspan = remainder.pop(0) + texts.append(prev_text) + if prev_rowspan > 1: + next_remainder.append((prev_i, prev_text, + prev_rowspan - 1)) + index += 1 + + # Append the text from this <td>, colspan times + text = _remove_whitespace(self._text_getter(td)) + rowspan = int(self._attr_getter(td, 'rowspan') or 1) + colspan = int(self._attr_getter(td, 'colspan') or 1) + + for _ in range(colspan): + texts.append(text) + if rowspan > 1: + next_remainder.append((index, text, rowspan - 1)) + index += 1 + + # Append texts from previous rows at the final position + for prev_i, prev_text, prev_rowspan in remainder: + texts.append(prev_text) + if prev_rowspan > 1: + next_remainder.append((prev_i, prev_text, + prev_rowspan - 1)) + + all_texts.append(texts) + remainder = next_remainder + + # Append rows that only appear because the previous row had non-1 + # rowspan + while remainder: + next_remainder = [] + texts = [] + for prev_i, prev_text, prev_rowspan in remainder: + texts.append(prev_text) + if prev_rowspan > 1: + next_remainder.append((prev_i, prev_text, + prev_rowspan - 1)) + all_texts.append(texts) + remainder = next_remainder + + return 
all_texts def _handle_hidden_tables(self, tbl_list, attr_name): - """Returns list of tables, potentially removing hidden elements + """ + Return list of tables, potentially removing hidden elements Parameters ---------- - tbl_list : list of Tag or list of Element + tbl_list : list of node-like Type of list elements will vary depending upon parser used attr_name : str Name of the accessor for retrieving HTML attributes Returns ------- - list of Tag or list of Element + list of node-like Return type matches `tbl_list` """ if not self.displayed_only: @@ -442,27 +541,6 @@ def __init__(self, *args, **kwargs): from bs4 import SoupStrainer self._strainer = SoupStrainer('table') - def _text_getter(self, obj): - return obj.text - - def _parse_td(self, row): - return row.find_all(('td', 'th')) - - def _parse_tr(self, element): - return element.find_all('tr') - - def _parse_th(self, element): - return element.find_all('th') - - def _parse_thead(self, table): - return table.find_all('thead') - - def _parse_tbody(self, table): - return table.find_all('tbody') - - def _parse_tfoot(self, table): - return table.find_all('tfoot') - def _parse_tables(self, doc, match, attrs): element_name = self._strainer.name tables = doc.find_all(element_name, attrs=attrs) @@ -490,6 +568,27 @@ def _parse_tables(self, doc, match, attrs): .format(patt=match.pattern)) return result + def _text_getter(self, obj): + return obj.text + + def _equals_tag(self, obj, tag): + return obj.name == tag + + def _parse_td(self, row): + return row.find_all(('td', 'th'), recursive=False) + + def _parse_thead_tr(self, table): + return table.select('thead tr') + + def _parse_tbody_tr(self, table): + from_tbody = table.select('tbody tr') + from_root = table.find_all('tr', recursive=False) + # HTML spec: at most one of these lists has content + return from_tbody + from_root + + def _parse_tfoot_tr(self, table): + return table.select('tfoot tr') + def _setup_build_doc(self): raw_text = _read(self.io) if not raw_text: @@ 
-554,10 +653,9 @@ def _text_getter(self, obj): return obj.text_content() def _parse_td(self, row): - return row.xpath('.//td|.//th') - - def _parse_tr(self, table): - return table.xpath('.//tr') + # Look for direct children only: the "row" element here may be a + # <thead> or <tfoot> (see _parse_thead_tr). + return row.xpath('./td|./th') def _parse_tables(self, doc, match, kwargs): pattern = match.pattern @@ -590,6 +688,9 @@ def _parse_tables(self, doc, match, kwargs): .format(patt=pattern)) return tables + def _equals_tag(self, obj, tag): + return obj.tag == tag + def _build_doc(self): """ Raises @@ -637,41 +738,32 @@ def _build_doc(self): raise XMLSyntaxError("no text parsed from document", 0, 0, 0) return r - def _parse_tbody(self, table): - return table.xpath('.//tbody') - - def _parse_thead(self, table): - return table.xpath('.//thead') - - def _parse_tfoot(self, table): - return table.xpath('.//tfoot') - - def _parse_raw_thead(self, table): - expr = './/thead' - thead = table.xpath(expr) - res = [] - if thead: - # Grab any directly descending table headers first - ths = thead[0].xpath('./th') - if ths: - cols = [_remove_whitespace(x.text_content()) for x in ths] - if any(col != '' for col in cols): - res.append(cols) - else: - trs = self._parse_tr(thead[0]) + def _parse_thead_tr(self, table): + rows = [] + + for thead in table.xpath('.//thead'): + rows.extend(thead.xpath('./tr')) + + # HACK: lxml does not clean up the clearly-erroneous + # <thead><th>foo</th><th>bar</th></thead>. (Missing <tr>). Add + # the <thead> and _pretend_ it's a <tr>; _parse_td() will find its + # children as though it's a <tr>. + # + # Better solution would be to use html5lib. 
+ elements_at_root = thead.xpath('./td|./th') + if elements_at_root: + rows.append(thead) - for tr in trs: - cols = [_remove_whitespace(x.text_content()) for x in - self._parse_td(tr)] + return rows - if any(col != '' for col in cols): - res.append(cols) - return res + def _parse_tbody_tr(self, table): + from_tbody = table.xpath('.//tbody//tr') + from_root = table.xpath('./tr') + # HTML spec: at most one of these lists has content + return from_tbody + from_root - def _parse_raw_tfoot(self, table): - expr = './/tfoot//th|//tfoot//td' - return [_remove_whitespace(x.text_content()) for x in - table.xpath(expr)] + def _parse_tfoot_tr(self, table): + return table.xpath('.//tfoot//tr') def _expand_elements(body): @@ -689,13 +781,19 @@ def _data_to_frame(**kwargs): header = kwargs.pop('header') kwargs['skiprows'] = _get_skiprows(kwargs['skiprows']) if head: - rows = lrange(len(head)) body = head + body - if header is None: # special case when a table has <th> elements - header = 0 if rows == [0] else rows + + # Infer header when there is a <thead> or top <th>-only rows + if header is None: + if len(head) == 1: + header = 0 + else: + # ignore all-empty-text rows + header = [i for i, row in enumerate(head) + if any(text for text in row)] if foot: - body += [foot] + body += foot # fill out elements of body that are "ragged" _expand_elements(body) @@ -953,7 +1051,13 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None, This function searches for ``<table>`` elements and only for ``<tr>`` and ``<th>`` rows and ``<td>`` elements within each ``<tr>`` or ``<th>`` - element in the table. ``<td>`` stands for "table data". + element in the table. ``<td>`` stands for "table data". This function + attempts to properly handle ``colspan`` and ``rowspan`` attributes. 
+ If the function has a ``<thead>`` argument, it is used to construct + the header, otherwise the function attempts to find the header within + the body (by putting rows with only ``<th>`` elements into the header). + + .. versionadded:: 0.21.0 Similar to :func:`~pandas.read_csv` the `header` argument is applied **after** `skiprows` is applied. diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py index 9c6a8de7ed446..b78c4f27d8c3f 100644 --- a/pandas/tests/io/test_html.py +++ b/pandas/tests/io/test_html.py @@ -15,10 +15,10 @@ date_range, Series) from pandas.compat import (map, zip, StringIO, BytesIO, is_platform_windows, PY3, reload) +from pandas.errors import ParserError from pandas.io.common import URLError, file_path_to_url import pandas.io.html from pandas.io.html import read_html -from pandas._libs.parsers import ParserError import pandas.util.testing as tm import pandas.util._test_decorators as td @@ -129,16 +129,7 @@ def test_banklist(self): assert_framelist_equal(df1, df2) - def test_spam_no_types(self): - - # infer_types removed in #10892 - df1 = self.read_html(self.spam_data, '.*Water.*') - df2 = self.read_html(self.spam_data, 'Unit') - assert_framelist_equal(df1, df2) - assert df1[0].iloc[0, 0] == 'Proximates' - assert df1[0].columns[0] == 'Nutrient' - - def test_spam_with_types(self): + def test_spam(self): df1 = self.read_html(self.spam_data, '.*Water.*') df2 = self.read_html(self.spam_data, 'Unit') assert_framelist_equal(df1, df2) @@ -157,7 +148,7 @@ def test_banklist_no_match(self): assert isinstance(df, DataFrame) def test_spam_header(self): - df = self.read_html(self.spam_data, '.*Water.*', header=1)[0] + df = self.read_html(self.spam_data, '.*Water.*', header=2)[0] assert df.columns[0] == 'Proximates' assert not df.empty @@ -387,32 +378,33 @@ def test_empty_tables(self): """ Make sure that read_html ignores empty tables. 
""" - data1 = '''<table> - <thead> - <tr> - <th>A</th> - <th>B</th> - </tr> - </thead> - <tbody> - <tr> - <td>1</td> - <td>2</td> - </tr> - </tbody> - </table>''' - data2 = data1 + '''<table> - <tbody> - </tbody> - </table>''' - res1 = self.read_html(StringIO(data1)) - res2 = self.read_html(StringIO(data2)) - assert_framelist_equal(res1, res2) + result = self.read_html(''' + <table> + <thead> + <tr> + <th>A</th> + <th>B</th> + </tr> + </thead> + <tbody> + <tr> + <td>1</td> + <td>2</td> + </tr> + </tbody> + </table> + <table> + <tbody> + </tbody> + </table> + ''') + + assert len(result) == 1 def test_multiple_tbody(self): # GH-20690 # Read all tbody tags within a single table. - data = '''<table> + result = self.read_html('''<table> <thead> <tr> <th>A</th> @@ -431,9 +423,10 @@ def test_multiple_tbody(self): <td>4</td> </tr> </tbody> - </table>''' - expected = DataFrame({'A': [1, 3], 'B': [2, 4]}) - result = self.read_html(StringIO(data))[0] + </table>''')[0] + + expected = DataFrame(data=[[1, 2], [3, 4]], columns=['A', 'B']) + tm.assert_frame_equal(result, expected) def test_header_and_one_column(self): @@ -441,9 +434,7 @@ def test_header_and_one_column(self): Don't fail with bs4 when there is a header and only one column as described in issue #9178 """ - data = StringIO('''<html> - <body> - <table> + result = self.read_html('''<table> <thead> <tr> <th>Header</th> @@ -454,11 +445,36 @@ def test_header_and_one_column(self): <td>first</td> </tr> </tbody> - </table> - </body> - </html>''') + </table>''')[0] + expected = DataFrame(data={'Header': 'first'}, index=[0]) - result = self.read_html(data)[0] + + tm.assert_frame_equal(result, expected) + + def test_thead_without_tr(self): + """ + Ensure parser adds <tr> within <thead> on malformed HTML. 
+ """ + result = self.read_html('''<table> + <thead> + <tr> + <th>Country</th> + <th>Municipality</th> + <th>Year</th> + </tr> + </thead> + <tbody> + <tr> + <td>Ukraine</td> + <th>Odessa</th> + <td>1944</td> + </tr> + </tbody> + </table>''')[0] + + expected = DataFrame(data=[['Ukraine', 'Odessa', 1944]], + columns=['Country', 'Municipality', 'Year']) + tm.assert_frame_equal(result, expected) def test_tfoot_read(self): @@ -484,63 +500,51 @@ def test_tfoot_read(self): </tfoot> </table>''' + expected1 = DataFrame(data=[['bodyA', 'bodyB']], columns=['A', 'B']) + + expected2 = DataFrame(data=[['bodyA', 'bodyB'], ['footA', 'footB']], + columns=['A', 'B']) + data1 = data_template.format(footer="") data2 = data_template.format( footer="<tr><td>footA</td><th>footB</th></tr>") - d1 = {'A': ['bodyA'], 'B': ['bodyB']} - d2 = {'A': ['bodyA', 'footA'], 'B': ['bodyB', 'footB']} + result1 = self.read_html(data1)[0] + result2 = self.read_html(data2)[0] - tm.assert_frame_equal(self.read_html(data1)[0], DataFrame(d1)) - tm.assert_frame_equal(self.read_html(data2)[0], DataFrame(d2)) + tm.assert_frame_equal(result1, expected1) + tm.assert_frame_equal(result2, expected2) - def test_countries_municipalities(self): - # GH5048 - data1 = StringIO('''<table> - <thead> - <tr> - <th>Country</th> - <th>Municipality</th> - <th>Year</th> - </tr> - </thead> - <tbody> - <tr> - <td>Ukraine</td> - <th>Odessa</th> - <td>1944</td> - </tr> - </tbody> - </table>''') - data2 = StringIO(''' - <table> - <tbody> + def test_parse_header_of_non_string_column(self): + # GH5048: if header is specified explicitly, an int column should be + # parsed as int while its header is parsed as str + result = self.read_html(''' + <table> <tr> - <th>Country</th> - <th>Municipality</th> - <th>Year</th> + <td>S</td> + <td>I</td> </tr> <tr> - <td>Ukraine</td> - <th>Odessa</th> + <td>text</td> <td>1944</td> </tr> - </tbody> - </table>''') - res1 = self.read_html(data1) - res2 = self.read_html(data2, header=0) - 
assert_framelist_equal(res1, res2) + </table> + ''', header=0)[0] + + expected = DataFrame([['text', 1944]], columns=('S', 'I')) + + tm.assert_frame_equal(result, expected) def test_nyse_wsj_commas_table(self, datapath): data = datapath('io', 'data', 'nyse_wsj.html') df = self.read_html(data, index_col=0, header=0, attrs={'class': 'mdcTable'})[0] - columns = Index(['Issue(Roll over for charts and headlines)', - 'Volume', 'Price', 'Chg', '% Chg']) + expected = Index(['Issue(Roll over for charts and headlines)', + 'Volume', 'Price', 'Chg', '% Chg']) nrows = 100 assert df.shape[0] == nrows - tm.assert_index_equal(df.columns, columns) + tm.assert_index_equal(df.columns, expected) @pytest.mark.slow def test_banklist_header(self, datapath): @@ -592,8 +596,8 @@ def test_gold_canyon(self): attrs={'id': 'table'})[0] assert gc in df.to_string() - def test_different_number_of_rows(self): - expected = """<table border="1" class="dataframe"> + def test_different_number_of_cols(self): + expected = self.read_html("""<table> <thead> <tr style="text-align: right;"> <th></th> @@ -622,8 +626,9 @@ def test_different_number_of_rows(self): <td> 0.222</td> </tr> </tbody> - </table>""" - out = """<table border="1" class="dataframe"> + </table>""", index_col=0)[0] + + result = self.read_html("""<table> <thead> <tr style="text-align: right;"> <th></th> @@ -649,10 +654,151 @@ def test_different_number_of_rows(self): <td> 0.222</td> </tr> </tbody> - </table>""" - expected = self.read_html(expected, index_col=0)[0] - res = self.read_html(out, index_col=0)[0] - tm.assert_frame_equal(expected, res) + </table>""", index_col=0)[0] + + tm.assert_frame_equal(result, expected) + + def test_colspan_rowspan_1(self): + # GH17054 + result = self.read_html(""" + <table> + <tr> + <th>A</th> + <th colspan="1">B</th> + <th rowspan="1">C</th> + </tr> + <tr> + <td>a</td> + <td>b</td> + <td>c</td> + </tr> + </table> + """)[0] + + expected = DataFrame([['a', 'b', 'c']], columns=['A', 'B', 'C']) + + 
tm.assert_frame_equal(result, expected) + + def test_colspan_rowspan_copy_values(self): + # GH17054 + + # In ASCII, with lowercase letters being copies: + # + # X x Y Z W + # A B b z C + + result = self.read_html(""" + <table> + <tr> + <td colspan="2">X</td> + <td>Y</td> + <td rowspan="2">Z</td> + <td>W</td> + </tr> + <tr> + <td>A</td> + <td colspan="2">B</td> + <td>C</td> + </tr> + </table> + """, header=0)[0] + + expected = DataFrame(data=[['A', 'B', 'B', 'Z', 'C']], + columns=['X', 'X.1', 'Y', 'Z', 'W']) + + tm.assert_frame_equal(result, expected) + + def test_colspan_rowspan_both_not_1(self): + # GH17054 + + # In ASCII, with lowercase letters being copies: + # + # A B b b C + # a b b b D + + result = self.read_html(""" + <table> + <tr> + <td rowspan="2">A</td> + <td rowspan="2" colspan="3">B</td> + <td>C</td> + </tr> + <tr> + <td>D</td> + </tr> + </table> + """, header=0)[0] + + expected = DataFrame(data=[['A', 'B', 'B', 'B', 'D']], + columns=['A', 'B', 'B.1', 'B.2', 'C']) + + tm.assert_frame_equal(result, expected) + + def test_rowspan_at_end_of_row(self): + # GH17054 + + # In ASCII, with lowercase letters being copies: + # + # A B + # C b + + result = self.read_html(""" + <table> + <tr> + <td>A</td> + <td rowspan="2">B</td> + </tr> + <tr> + <td>C</td> + </tr> + </table> + """, header=0)[0] + + expected = DataFrame(data=[['C', 'B']], columns=['A', 'B']) + + tm.assert_frame_equal(result, expected) + + def test_rowspan_only_rows(self): + # GH17054 + + result = self.read_html(""" + <table> + <tr> + <td rowspan="3">A</td> + <td rowspan="3">B</td> + </tr> + </table> + """, header=0)[0] + + expected = DataFrame(data=[['A', 'B'], ['A', 'B']], + columns=['A', 'B']) + + tm.assert_frame_equal(result, expected) + + def test_header_inferred_from_rows_with_only_th(self): + # GH17054 + result = self.read_html(""" + <table> + <tr> + <th>A</th> + <th>B</th> + </tr> + <tr> + <th>a</th> + <th>b</th> + </tr> + <tr> + <td>1</td> + <td>2</td> + </tr> + </table> + """)[0] + + 
columns = MultiIndex(levels=[['A', 'B'], ['a', 'b']], + labels=[[0, 1], [0, 1]]) + expected = DataFrame(data=[[1, 2]], columns=columns) + + tm.assert_frame_equal(result, expected) def test_parse_dates_list(self): df = DataFrame({'date': date_range('1/1/2001', periods=10)}) @@ -689,10 +835,26 @@ def test_wikipedia_states_table(self, datapath): result = self.read_html(data, 'Arizona', header=1)[0] assert result['sq mi'].dtype == np.dtype('float64') - def test_decimal_rows(self): + def test_parser_error_on_empty_header_row(self): + with tm.assert_raises_regex(ParserError, + r"Passed header=\[0,1\] are " + r"too many rows for this " + r"multi_index of columns"): + self.read_html(""" + <table> + <thead> + <tr><th></th><th></tr> + <tr><th>A</th><th>B</th></tr> + </thead> + <tbody> + <tr><td>a</td><td>b</td></tr> + </tbody> + </table> + """, header=[0, 1]) + def test_decimal_rows(self): # GH 12907 - data = StringIO('''<html> + result = self.read_html('''<html> <body> <table> <thead> @@ -707,9 +869,10 @@ def test_decimal_rows(self): </tbody> </table> </body> - </html>''') + </html>''', decimal='#')[0] + expected = DataFrame(data={'Header': 1100.101}, index=[0]) - result = self.read_html(data, decimal='#')[0] + assert result['Header'].dtype == np.dtype('float64') tm.assert_frame_equal(result, expected) @@ -717,53 +880,61 @@ def test_bool_header_arg(self): # GH 6114 for arg in [True, False]: with pytest.raises(TypeError): - read_html(self.spam_data, header=arg) + self.read_html(self.spam_data, header=arg) def test_converters(self): # GH 13461 - html_data = """<table> - <thead> - <th>a</th> - </tr> - </thead> - <tbody> - <tr> - <td> 0.763</td> - </tr> - <tr> - <td> 0.244</td> - </tr> - </tbody> - </table>""" + result = self.read_html( + """<table> + <thead> + <tr> + <th>a</th> + </tr> + </thead> + <tbody> + <tr> + <td> 0.763</td> + </tr> + <tr> + <td> 0.244</td> + </tr> + </tbody> + </table>""", + converters={'a': str} + )[0] + + expected = DataFrame({'a': ['0.763', 
'0.244']}) - expected_df = DataFrame({'a': ['0.763', '0.244']}) - html_df = read_html(html_data, converters={'a': str})[0] - tm.assert_frame_equal(expected_df, html_df) + tm.assert_frame_equal(result, expected) def test_na_values(self): # GH 13461 - html_data = """<table> - <thead> - <th>a</th> - </tr> - </thead> - <tbody> - <tr> - <td> 0.763</td> - </tr> - <tr> - <td> 0.244</td> - </tr> - </tbody> - </table>""" + result = self.read_html( + """<table> + <thead> + <tr> + <th>a</th> + </tr> + </thead> + <tbody> + <tr> + <td> 0.763</td> + </tr> + <tr> + <td> 0.244</td> + </tr> + </tbody> + </table>""", + na_values=[0.244])[0] + + expected = DataFrame({'a': [0.763, np.nan]}) - expected_df = DataFrame({'a': [0.763, np.nan]}) - html_df = read_html(html_data, na_values=[0.244])[0] - tm.assert_frame_equal(expected_df, html_df) + tm.assert_frame_equal(result, expected) def test_keep_default_na(self): html_data = """<table> <thead> + <tr> <th>a</th> </tr> </thead> @@ -778,13 +949,56 @@ def test_keep_default_na(self): </table>""" expected_df = DataFrame({'a': ['N/A', 'NA']}) - html_df = read_html(html_data, keep_default_na=False)[0] + html_df = self.read_html(html_data, keep_default_na=False)[0] tm.assert_frame_equal(expected_df, html_df) expected_df = DataFrame({'a': [np.nan, np.nan]}) - html_df = read_html(html_data, keep_default_na=True)[0] + html_df = self.read_html(html_data, keep_default_na=True)[0] tm.assert_frame_equal(expected_df, html_df) + def test_preserve_empty_rows(self): + result = self.read_html(""" + <table> + <tr> + <th>A</th> + <th>B</th> + </tr> + <tr> + <td>a</td> + <td>b</td> + </tr> + <tr> + <td></td> + <td></td> + </tr> + </table> + """)[0] + + expected = DataFrame(data=[['a', 'b'], [np.nan, np.nan]], + columns=['A', 'B']) + + tm.assert_frame_equal(result, expected) + + def test_ignore_empty_rows_when_inferring_header(self): + result = self.read_html(""" + <table> + <thead> + <tr><th></th><th></tr> + <tr><th>A</th><th>B</th></tr> + 
<tr><th>a</th><th>b</th></tr> + </thead> + <tbody> + <tr><td>1</td><td>2</td></tr> + </tbody> + </table> + """)[0] + + columns = MultiIndex(levels=[['A', 'B'], ['a', 'b']], + labels=[[0, 1], [0, 1]]) + expected = DataFrame(data=[[1, 2]], columns=columns) + + tm.assert_frame_equal(result, expected) + def test_multiple_header_rows(self): # Issue #13434 expected_df = DataFrame(data=[("Hillary", 68, "D"), @@ -794,7 +1008,7 @@ def test_multiple_header_rows(self): ["Name", "Unnamed: 1_level_1", "Unnamed: 2_level_1"]] html = expected_df.to_html(index=False) - html_df = read_html(html, )[0] + html_df = self.read_html(html, )[0] tm.assert_frame_equal(expected_df, html_df) def test_works_on_valid_markup(self, datapath):
This is essentially a rebased and squashed #17054 (mad props to @jowens for doing all the hard thinking). My tweaks: * test_computer_sales_page (see #17074) no longer tests for ParserError, because the ParserError was a bug caused by missing colspan support. Now, test that MultiIndex works as expected. * I respectfully removed the fill_rowspan argument from #17073. Instead, the virtual cells created by rowspan/colspan are always copies of the real cells' text. This prevents _infer_columns() from naming virtual cells as "Unnamed: ..." * I removed a small layer of abstraction to respect #20891 (multiple tbody support), which was implemented after @jowens' pull request. Now _HtmlFrameParser has _parse_thead_trs, _parse_tbody_trs and _parse_tfoot_trs, each returning a list of trs. That let me remove _parse_tr, Making All The Tests Pass. * That caused a snowball effect. lxml does not fix malformed thead, as tested by spam.html. The previous hacky workaround was in _parse_raw_thead, but the new _parse_thead_trs signature returns nodes instead of text. The new hacky solution: return the thead itself, pretending it's a tr. This works in all the tests. A better solution is to use html5lib with lxml; but that might belong in a separate pull request. - [x] closes #17054 - [x] closes #21641 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21487
2018-06-14T20:38:24Z
2018-07-05T17:48:25Z
2018-07-05T17:48:25Z
2018-07-16T12:47:49Z
API/COMPAT: support axis=None for logical reduction (reduce over all axes)
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 5b3e607956f7a..ca8d60051ff90 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -10,6 +10,36 @@ and bug fixes. We recommend that all users upgrade to this version. :local: :backlinks: none +.. _whatsnew_0232.enhancements: + +Logical Reductions over Entire DataFrame +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +:meth:`DataFrame.all` and :meth:`DataFrame.any` now accept ``axis=None`` to reduce over all axes to a scalar (:issue:`19976`) + +.. ipython:: python + + df = pd.DataFrame({"A": [1, 2], "B": [True, False]}) + df.all(axis=None) + + +This also provides compatibility with NumPy 1.15, which now dispatches to ``DataFrame.all``. +With NumPy 1.15 and pandas 0.23.1 or earlier, :func:`numpy.all` will no longer reduce over every axis: + +.. code-block:: python + + >>> # NumPy 1.15, pandas 0.23.1 + >>> np.any(pd.DataFrame({"A": [False], "B": [False]})) + A False + B False + dtype: bool + +With pandas 0.23.2, that will correctly return False, as it did with NumPy < 1.15. + +.. ipython:: python + + np.any(pd.DataFrame({"A": [False], "B": [False]})) + .. 
_whatsnew_0232.fixed_regressions: diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 74bb2abc27c4b..9884bf9a53478 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -6846,13 +6846,18 @@ def _count_level(self, level, axis=0, numeric_only=False): def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None, filter_type=None, **kwds): - axis = self._get_axis_number(axis) + if axis is None and filter_type == 'bool': + labels = None + constructor = None + else: + # TODO: Make other agg func handle axis=None properly + axis = self._get_axis_number(axis) + labels = self._get_agg_axis(axis) + constructor = self._constructor def f(x): return op(x, axis=axis, skipna=skipna, **kwds) - labels = self._get_agg_axis(axis) - # exclude timedelta/datetime unless we are uniform types if axis == 1 and self._is_mixed_type and self._is_datelike_mixed_type: numeric_only = True @@ -6861,6 +6866,13 @@ def f(x): try: values = self.values result = f(values) + + if (filter_type == 'bool' and is_object_dtype(values) and + axis is None): + # work around https://github.com/numpy/numpy/issues/10489 + # TODO: combine with hasattr(result, 'dtype') further down + # hard since we don't have `values` down there. 
+ result = np.bool_(result) except Exception as e: # try by-column first @@ -6927,7 +6939,9 @@ def f(x): if axis == 0: result = coerce_to_dtypes(result, self.dtypes) - return Series(result, index=labels) + if constructor is not None: + result = Series(result, index=labels) + return result def nunique(self, axis=0, dropna=True): """ diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 1780e359164e2..bdf2fe350b42d 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -8727,6 +8727,8 @@ def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None, return rs def _agg_by_level(self, name, axis=0, level=0, skipna=True, **kwargs): + if axis is None: + raise ValueError("Must specify 'axis' when aggregating by level.") grouped = self.groupby(level=level, axis=axis, sort=False) if hasattr(grouped, name) and skipna: return getattr(grouped, name)(**kwargs) @@ -9053,8 +9055,15 @@ def _doc_parms(cls): Parameters ---------- -axis : int, default 0 - Select the axis which can be 0 for indices and 1 for columns. +axis : {0 or 'index', 1 or 'columns', None}, default 0 + Indicate which axis or axes should be reduced. + + * 0 / 'index' : reduce the index, return a Series whose index is the + original column labels. + * 1 / 'columns' : reduce the columns, return a Series whose index is the + original index. + * None : reduce all axes, return a scalar. + skipna : boolean, default True Exclude NA/null values. If an entire row/column is NA, the result will be NA. @@ -9076,9 +9085,9 @@ def _doc_parms(cls): %(examples)s""" _all_doc = """\ -Return whether all elements are True over series or dataframe axis. +Return whether all elements are True, potentially over an axis. 
-Returns True if all elements within a series or along a dataframe +Returns True if all elements within a series or along a Dataframe axis are non-zero, not-empty or not-False.""" _all_examples = """\ @@ -9091,7 +9100,7 @@ def _doc_parms(cls): >>> pd.Series([True, False]).all() False -Dataframes +DataFrames Create a dataframe from a dictionary. @@ -9108,12 +9117,17 @@ def _doc_parms(cls): col2 False dtype: bool -Adding axis=1 argument will check if row-wise values all return True. +Specify ``axis='columns'`` to check if row-wise values all return True. ->>> df.all(axis=1) +>>> df.all(axis='columns') 0 True 1 False dtype: bool + +Or ``axis=None`` for whether every value is True. + +>>> df.all(axis=None) +False """ _all_see_also = """\ @@ -9479,6 +9493,11 @@ def _doc_parms(cls): 1 False dtype: bool +Aggregating over the entire DataFrame with ``axis=None``. + +>>> df.any(axis=None) +True + `any` for an empty DataFrame is an empty Series. >>> pd.DataFrame([]).any() @@ -9649,22 +9668,17 @@ def _make_logical_function(cls, name, name1, name2, axis_descr, desc, f, @Substitution(outname=name, desc=desc, name1=name1, name2=name2, axis_descr=axis_descr, examples=examples, see_also=see_also) @Appender(_bool_doc) - def logical_func(self, axis=None, bool_only=None, skipna=None, level=None, + def logical_func(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs): nv.validate_logical_func(tuple(), kwargs, fname=name) - if skipna is None: - skipna = True - if axis is None: - axis = self._stat_axis_number if level is not None: if bool_only is not None: raise NotImplementedError("Option bool_only is not " "implemented with option level.") return self._agg_by_level(name, axis=axis, level=level, skipna=skipna) - return self._reduce(f, axis=axis, skipna=skipna, - numeric_only=bool_only, filter_type='bool', - name=name) + return self._reduce(f, name, axis=axis, skipna=skipna, + numeric_only=bool_only, filter_type='bool') return set_function_name(logical_func, name, cls) diff 
--git a/pandas/core/panel.py b/pandas/core/panel.py index c4aa471b8b944..4f7400ad8388b 100644 --- a/pandas/core/panel.py +++ b/pandas/core/panel.py @@ -1143,13 +1143,26 @@ def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None, raise NotImplementedError('Panel.{0} does not implement ' 'numeric_only.'.format(name)) - axis_name = self._get_axis_name(axis) - axis_number = self._get_axis_number(axis_name) + if axis is None and filter_type == 'bool': + # labels = None + # constructor = None + axis_number = None + axis_name = None + else: + # TODO: Make other agg func handle axis=None properly + axis = self._get_axis_number(axis) + # labels = self._get_agg_axis(axis) + # constructor = self._constructor + axis_name = self._get_axis_name(axis) + axis_number = self._get_axis_number(axis_name) + f = lambda x: op(x, axis=axis_number, skipna=skipna, **kwds) with np.errstate(all='ignore'): result = f(self.values) + if axis is None and filter_type == 'bool': + return np.bool_(result) axes = self._get_plane_axes(axis_name) if result.ndim == 2 and axis_name != self._info_axis_name: result = result.T diff --git a/pandas/core/series.py b/pandas/core/series.py index 2f762dff4aeab..d374ddbf59ad2 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -3241,7 +3241,8 @@ def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None, delegate = self._values if isinstance(delegate, np.ndarray): # Validate that 'axis' is consistent with Series's single axis. 
- self._get_axis_number(axis) + if axis is not None: + self._get_axis_number(axis) if numeric_only: raise NotImplementedError('Series.{0} does not implement ' 'numeric_only.'.format(name)) diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py index 6dc24ed856017..5f6aec9d882b6 100644 --- a/pandas/tests/frame/test_analytics.py +++ b/pandas/tests/frame/test_analytics.py @@ -15,7 +15,7 @@ from pandas.compat import lrange, PY35 from pandas import (compat, isna, notna, DataFrame, Series, MultiIndex, date_range, Timestamp, Categorical, - _np_version_under1p12, _np_version_under1p15, + _np_version_under1p12, to_datetime, to_timedelta) import pandas as pd import pandas.core.nanops as nanops @@ -1159,11 +1159,35 @@ def test_any_all(self): self._check_bool_op('any', np.any, has_skipna=True, has_bool_only=True) self._check_bool_op('all', np.all, has_skipna=True, has_bool_only=True) - df = DataFrame(randn(10, 4)) > 0 - df.any(1) - df.all(1) - df.any(1, bool_only=True) - df.all(1, bool_only=True) + def test_any_all_extra(self): + df = DataFrame({ + 'A': [True, False, False], + 'B': [True, True, False], + 'C': [True, True, True], + }, index=['a', 'b', 'c']) + result = df[['A', 'B']].any(1) + expected = Series([True, True, False], index=['a', 'b', 'c']) + tm.assert_series_equal(result, expected) + + result = df[['A', 'B']].any(1, bool_only=True) + tm.assert_series_equal(result, expected) + + result = df.all(1) + expected = Series([True, False, False], index=['a', 'b', 'c']) + tm.assert_series_equal(result, expected) + + result = df.all(1, bool_only=True) + tm.assert_series_equal(result, expected) + + # Axis is None + result = df.all(axis=None).item() + assert result is False + + result = df.any(axis=None).item() + assert result is True + + result = df[['C']].all(axis=None).item() + assert result is True # skip pathological failure cases # class CantNonzero(object): @@ -1185,6 +1209,86 @@ def test_any_all(self): # df.any(1, bool_only=True) # 
df.all(1, bool_only=True) + @pytest.mark.parametrize('func, data, expected', [ + (np.any, {}, False), + (np.all, {}, True), + (np.any, {'A': []}, False), + (np.all, {'A': []}, True), + (np.any, {'A': [False, False]}, False), + (np.all, {'A': [False, False]}, False), + (np.any, {'A': [True, False]}, True), + (np.all, {'A': [True, False]}, False), + (np.any, {'A': [True, True]}, True), + (np.all, {'A': [True, True]}, True), + + (np.any, {'A': [False], 'B': [False]}, False), + (np.all, {'A': [False], 'B': [False]}, False), + + (np.any, {'A': [False, False], 'B': [False, True]}, True), + (np.all, {'A': [False, False], 'B': [False, True]}, False), + + # other types + (np.all, {'A': pd.Series([0.0, 1.0], dtype='float')}, False), + (np.any, {'A': pd.Series([0.0, 1.0], dtype='float')}, True), + (np.all, {'A': pd.Series([0, 1], dtype=int)}, False), + (np.any, {'A': pd.Series([0, 1], dtype=int)}, True), + pytest.param(np.all, {'A': pd.Series([0, 1], dtype='M8[ns]')}, False, + marks=[td.skip_if_np_lt_115]), + pytest.param(np.any, {'A': pd.Series([0, 1], dtype='M8[ns]')}, True, + marks=[td.skip_if_np_lt_115]), + pytest.param(np.all, {'A': pd.Series([1, 2], dtype='M8[ns]')}, True, + marks=[td.skip_if_np_lt_115]), + pytest.param(np.any, {'A': pd.Series([1, 2], dtype='M8[ns]')}, True, + marks=[td.skip_if_np_lt_115]), + pytest.param(np.all, {'A': pd.Series([0, 1], dtype='m8[ns]')}, False, + marks=[td.skip_if_np_lt_115]), + pytest.param(np.any, {'A': pd.Series([0, 1], dtype='m8[ns]')}, True, + marks=[td.skip_if_np_lt_115]), + pytest.param(np.all, {'A': pd.Series([1, 2], dtype='m8[ns]')}, True, + marks=[td.skip_if_np_lt_115]), + pytest.param(np.any, {'A': pd.Series([1, 2], dtype='m8[ns]')}, True, + marks=[td.skip_if_np_lt_115]), + (np.all, {'A': pd.Series([0, 1], dtype='category')}, False), + (np.any, {'A': pd.Series([0, 1], dtype='category')}, True), + (np.all, {'A': pd.Series([1, 2], dtype='category')}, True), + (np.any, {'A': pd.Series([1, 2], dtype='category')}, True), + + # # 
Mix + # GH-21484 + # (np.all, {'A': pd.Series([10, 20], dtype='M8[ns]'), + # 'B': pd.Series([10, 20], dtype='m8[ns]')}, True), + ]) + def test_any_all_np_func(self, func, data, expected): + # https://github.com/pandas-dev/pandas/issues/19976 + data = DataFrame(data) + result = func(data) + assert isinstance(result, np.bool_) + assert result.item() is expected + + # method version + result = getattr(DataFrame(data), func.__name__)(axis=None) + assert isinstance(result, np.bool_) + assert result.item() is expected + + def test_any_all_object(self): + # https://github.com/pandas-dev/pandas/issues/19976 + result = np.all(DataFrame(columns=['a', 'b'])).item() + assert result is True + + result = np.any(DataFrame(columns=['a', 'b'])).item() + assert result is False + + @pytest.mark.parametrize('method', ['any', 'all']) + def test_any_all_level_axis_none_raises(self, method): + df = DataFrame( + {"A": 1}, + index=MultiIndex.from_product([['A', 'B'], ['a', 'b']], + names=['out', 'in']) + ) + xpr = "Must specify 'axis' when aggregating by level." 
+ with tm.assert_raises_regex(ValueError, xpr): + getattr(df, method)(axis=None, level='out') + def _check_bool_op(self, name, alternative, frame=None, has_skipna=True, has_bool_only=False): if frame is None: @@ -2074,9 +2178,6 @@ def test_clip_against_list_like(self, inplace, lower, axis, res): result = original tm.assert_frame_equal(result, expected, check_exact=True) - @pytest.mark.xfail( - not _np_version_under1p15, - reason="failing under numpy-dev gh-19976") @pytest.mark.parametrize("axis", [0, 1, None]) def test_clip_against_frame(self, axis): df = DataFrame(np.random.randn(1000, 2)) diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py index d95a2ad2d7f76..2f8bc228cf86e 100644 --- a/pandas/tests/test_panel.py +++ b/pandas/tests/test_panel.py @@ -2707,3 +2707,10 @@ def test_panel_index(): np.repeat([1, 2, 3], 4)], names=['time', 'panel']) tm.assert_index_equal(index, expected) + + +def test_panel_np_all(): + with catch_warnings(record=True): + wp = Panel({"A": DataFrame({'b': [1, 2]})}) + result = np.all(wp) + assert result == np.bool_(True) diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py index 89d90258f58e0..27c24e3a68079 100644 --- a/pandas/util/_test_decorators.py +++ b/pandas/util/_test_decorators.py @@ -30,6 +30,7 @@ def test_foo(): from pandas.compat import (is_platform_windows, is_platform_32bit, PY3, import_lzma) +from pandas.compat.numpy import _np_version_under1p15 from pandas.core.computation.expressions import (_USE_NUMEXPR, _NUMEXPR_INSTALLED) @@ -160,6 +161,9 @@ def decorated_func(func): skip_if_no_mpl = pytest.mark.skipif(_skip_if_no_mpl(), reason="Missing matplotlib dependency") + +skip_if_np_lt_115 = pytest.mark.skipif(_np_version_under1p15, + reason="NumPy 1.15 or greater required") skip_if_mpl = pytest.mark.skipif(not _skip_if_no_mpl(), reason="matplotlib is present") skip_if_mpl_1_5 = pytest.mark.skipif(_skip_if_mpl_1_5(),
- [x] closes #19976 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry This is the minimal fix, just to get np.all / np.any working again. Some followup items: 1. Expand to all aggregations, not just logical ones 2. Do logical reductions blockwise: https://github.com/pandas-dev/pandas/issues/17667. Currently, we do `DataFrame.values`, which isn't necessary for logical reductions.
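A minimal sketch of the behavior this PR describes (assumes a pandas version containing the fix; the DataFrame contents are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [True, False], "B": [True, True]})

# axis=None reduces over both axes and returns a scalar instead of a Series
assert bool(df.all(axis=None)) is False  # one False in column "A"
assert bool(df.any(axis=None)) is True

# np.all / np.any dispatch to the DataFrame methods, so they again
# reduce over every axis, as they did with NumPy < 1.15
assert bool(np.all(pd.DataFrame({"A": [False], "B": [False]}))) is False
```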
https://api.github.com/repos/pandas-dev/pandas/pulls/21486
2018-06-14T19:56:00Z
2018-06-26T07:34:16Z
2018-06-26T07:34:16Z
2018-07-02T15:36:33Z
BUG: Timedelta.__bool__
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 70a5dd5817c3c..48efc02480e67 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -46,10 +46,13 @@ Bug Fixes - - -**Conversion** +**Timedelta** +- Bug in :class:`Timedelta` where non-zero timedeltas shorter than 1 microsecond were considered False (:issue:`21484`) -- +**Conversion** + +- Bug in :meth:`Series.nlargest` for signed and unsigned integer dtypes when the minimum value is present (:issue:`21426`) - **Indexing** @@ -78,6 +81,7 @@ Bug Fixes - **Timezones** + - Bug in :class:`Timestamp` and :class:`DatetimeIndex` where passing a :class:`Timestamp` localized after a DST transition would return a datetime before the DST transition (:issue:`20854`) - Bug in comparing :class:`DataFrame`s with tz-aware :class:`DatetimeIndex` columns with a DST transition that raised a ``KeyError`` (:issue:`19970`) - Bug in :meth:`DatetimeIndex.shift` where an ``AssertionError`` would raise when shifting across DST (:issue:`8616`) @@ -88,5 +92,4 @@ Bug Fixes **Other** -- Bug in :meth:`Series.nlargest` for signed and unsigned integer dtypes when the minimum value is present (:issue:`21426`) - diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx index 87dc371195b5b..f68dc421a1ee9 100644 --- a/pandas/_libs/tslibs/timedeltas.pyx +++ b/pandas/_libs/tslibs/timedeltas.pyx @@ -899,6 +899,9 @@ cdef class _Timedelta(timedelta): def __str__(self): return self._repr_base(format='long') + def __bool__(self): + return self.value != 0 + def isoformat(self): """ Format Timedelta as ISO 8601 Duration like diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py index 205fdf49d3e91..6472bd4245622 100644 --- a/pandas/tests/scalar/timedelta/test_timedelta.py +++ b/pandas/tests/scalar/timedelta/test_timedelta.py @@ -588,3 +588,17 @@ def test_components(self): result = s.dt.components assert not 
result.iloc[0].isna().all() assert result.iloc[1].isna().all() + + +@pytest.mark.parametrize('value, expected', [ + (Timedelta('10S'), True), + (Timedelta('-10S'), True), + (Timedelta(10, unit='ns'), True), + (Timedelta(0, unit='ns'), False), + (Timedelta(-10, unit='ns'), True), + (Timedelta(None), True), + (pd.NaT, True), +]) +def test_truthiness(value, expected): + # https://github.com/pandas-dev/pandas/issues/21484 + assert bool(value) is expected
Closes #21484
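A short sketch of the fixed truthiness (values are illustrative, assuming a pandas version containing this fix):

```python
import pandas as pd

# Non-zero sub-microsecond Timedeltas are now truthy; previously the
# inherited datetime.timedelta.__bool__ only saw whole-microsecond fields,
# so a 10 ns Timedelta evaluated as False.
assert bool(pd.Timedelta(10, unit="ns")) is True
assert bool(pd.Timedelta(-10, unit="ns")) is True
assert bool(pd.Timedelta(0)) is False
```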
https://api.github.com/repos/pandas-dev/pandas/pulls/21485
2018-06-14T19:22:11Z
2018-06-18T22:39:39Z
2018-06-18T22:39:39Z
2018-06-29T14:56:38Z
Removing SimpleMock test from pandas.util.testing
diff --git a/pandas/util/testing.py b/pandas/util/testing.py index 233eba6490937..d26a2116fb3ce 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -2263,59 +2263,6 @@ def wrapper(*args, **kwargs): with_connectivity_check = network -class SimpleMock(object): - - """ - Poor man's mocking object - - Note: only works for new-style classes, assumes __getattribute__ exists. - - >>> a = type("Duck",(),{}) - >>> a.attr1,a.attr2 ="fizz","buzz" - >>> b = SimpleMock(a,"attr1","bar") - >>> b.attr1 == "bar" and b.attr2 == "buzz" - True - >>> a.attr1 == "fizz" and a.attr2 == "buzz" - True - """ - - def __init__(self, obj, *args, **kwds): - assert(len(args) % 2 == 0) - attrs = kwds.get("attrs", {}) - for k, v in zip(args[::2], args[1::2]): - # dict comprehensions break 2.6 - attrs[k] = v - self.attrs = attrs - self.obj = obj - - def __getattribute__(self, name): - attrs = object.__getattribute__(self, "attrs") - obj = object.__getattribute__(self, "obj") - return attrs.get(name, type(obj).__getattribute__(obj, name)) - - -@contextmanager -def stdin_encoding(encoding=None): - """ - Context manager for running bits of code while emulating an arbitrary - stdin encoding. - - >>> import sys - >>> _encoding = sys.stdin.encoding - >>> with stdin_encoding('AES'): sys.stdin.encoding - 'AES' - >>> sys.stdin.encoding==_encoding - True - - """ - import sys - - _stdin = sys.stdin - sys.stdin = SimpleMock(sys.stdin, "encoding", encoding) - yield - sys.stdin = _stdin - - def assert_raises_regex(_exception, _regexp, _callable=None, *args, **kwargs): r"""
Removing SimpleMock test - [x] closes #21475 - [ ] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21482
2018-06-14T17:29:51Z
2018-06-15T17:27:20Z
2018-06-15T17:27:20Z
2018-06-15T17:29:21Z
BUG: Fix Index construction when given empty generator (#21470).
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 48efc02480e67..94669c5b02410 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -52,8 +52,9 @@ Bug Fixes **Conversion** +- Bug in constructing :class:`Index` with an iterator or generator (:issue:`21470`) - Bug in :meth:`Series.nlargest` for signed and unsigned integer dtypes when the minimum value is present (:issue:`21426`) -- + **Indexing** diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py index d466198b648ef..e22b0d626a218 100644 --- a/pandas/core/arrays/categorical.py +++ b/pandas/core/arrays/categorical.py @@ -3,7 +3,6 @@ import numpy as np from warnings import warn import textwrap -import types from pandas import compat from pandas.compat import u, lzip @@ -28,7 +27,7 @@ is_categorical, is_categorical_dtype, is_list_like, is_sequence, - is_scalar, + is_scalar, is_iterator, is_dict_like) from pandas.core.algorithms import factorize, take_1d, unique1d, take @@ -2483,7 +2482,7 @@ def _convert_to_list_like(list_like): if isinstance(list_like, list): return list_like if (is_sequence(list_like) or isinstance(list_like, tuple) or - isinstance(list_like, types.GeneratorType)): + is_iterator(list_like)): return list(list_like) elif is_scalar(list_like): return [list_like] diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 6a56278b0da49..27cc368a696e3 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -428,12 +428,14 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, elif data is None or is_scalar(data): cls._scalar_data_error(data) else: - if tupleize_cols and is_list_like(data) and data: + if tupleize_cols and is_list_like(data): + # GH21470: convert iterable to list before determining if empty if is_iterator(data): data = list(data) - # we must be all tuples, otherwise don't construct - # 10697 - if all(isinstance(e, tuple) for e in data): + + if 
data and all(isinstance(e, tuple) for e in data): + # we must be all tuples, otherwise don't construct + # 10697 from .multi import MultiIndex return MultiIndex.from_tuples( data, names=name or kwargs.get('names')) diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py index b8bd218ec25ab..1d8a958c3413f 100644 --- a/pandas/tests/indexes/test_base.py +++ b/pandas/tests/indexes/test_base.py @@ -445,21 +445,24 @@ def test_constructor_dtypes_timedelta(self, attr, klass): result = klass(list(values), dtype=dtype) tm.assert_index_equal(result, index) - def test_constructor_empty_gen(self): - skip_index_keys = ["repeats", "periodIndex", "rangeIndex", - "tuples"] - for key, index in self.generate_index_types(skip_index_keys): - empty = index.__class__([]) - assert isinstance(empty, index.__class__) - assert not len(empty) + @pytest.mark.parametrize("value", [[], iter([]), (x for x in [])]) + @pytest.mark.parametrize("klass", + [Index, Float64Index, Int64Index, UInt64Index, + CategoricalIndex, DatetimeIndex, TimedeltaIndex]) + def test_constructor_empty(self, value, klass): + empty = klass(value) + assert isinstance(empty, klass) + assert not len(empty) @pytest.mark.parametrize("empty,klass", [ (PeriodIndex([], freq='B'), PeriodIndex), + (PeriodIndex(iter([]), freq='B'), PeriodIndex), + (PeriodIndex((x for x in []), freq='B'), PeriodIndex), (RangeIndex(step=1), pd.RangeIndex), (MultiIndex(levels=[[1, 2], ['blue', 'red']], labels=[[], []]), MultiIndex) ]) - def test_constructor_empty(self, empty, klass): + def test_constructor_empty_special(self, empty, klass): assert isinstance(empty, klass) assert not len(empty)
- [ ] closes #21470 - [ ] tests added / passed - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21481
2018-06-14T17:26:40Z
2018-06-18T22:43:28Z
2018-06-18T22:43:27Z
2018-06-29T14:56:58Z
BUG/REG: file-handle object handled incorrectly in to_csv
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 3e4326dea2ecc..f562e782debda 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -16,7 +16,7 @@ and bug fixes. We recommend that all users upgrade to this version. Fixed Regressions ~~~~~~~~~~~~~~~~~ -- +- Fixed regression in :meth:`to_csv` when handling file-like object incorrectly (:issue:`21471`) - .. _whatsnew_0232.performance: diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 02c86d2f4dcc8..a5dfbcc2a3142 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1690,7 +1690,8 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None, defaults to 'ascii' on Python 2 and 'utf-8' on Python 3. compression : string, optional A string representing the compression to use in the output file. - Allowed values are 'gzip', 'bz2', 'zip', 'xz'. + Allowed values are 'gzip', 'bz2', 'zip', 'xz'. This input is only + used when the first argument is a filename. line_terminator : string, default ``'\n'`` The newline character or character sequence to use in the output file diff --git a/pandas/core/series.py b/pandas/core/series.py index 0450f28087f66..23c4bbe082f28 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -3790,7 +3790,8 @@ def to_csv(self, path=None, index=True, sep=",", na_rep='', non-ascii, for python versions prior to 3 compression : string, optional A string representing the compression to use in the output file. - Allowed values are 'gzip', 'bz2', 'zip', 'xz'. + Allowed values are 'gzip', 'bz2', 'zip', 'xz'. This input is only + used when the first argument is a filename. date_format: string, default None Format string for datetime objects. decimal: string, default '.' 
diff --git a/pandas/io/common.py b/pandas/io/common.py index a492b7c0b8e8e..ac9077f2db50e 100644 --- a/pandas/io/common.py +++ b/pandas/io/common.py @@ -445,6 +445,10 @@ def __init__(self, file, mode, compression=zipfile.ZIP_DEFLATED, **kwargs): def write(self, data): super(BytesZipFile, self).writestr(self.filename, data) + @property + def closed(self): + return self.fp is None + class MMapWrapper(BaseIterator): """ diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py index 7f660e2644fa4..60518f596e9af 100644 --- a/pandas/io/formats/csvs.py +++ b/pandas/io/formats/csvs.py @@ -5,11 +5,13 @@ from __future__ import print_function +import warnings + import csv as csvlib +from zipfile import ZipFile import numpy as np from pandas.core.dtypes.missing import notna -from pandas.core.dtypes.inference import is_file_like from pandas.core.index import Index, MultiIndex from pandas import compat from pandas.compat import (StringIO, range, zip) @@ -128,19 +130,31 @@ def save(self): else: encoding = self.encoding - # PR 21300 uses string buffer to receive csv writing and dump into - # file-like output with compression as option. GH 21241, 21118 - f = StringIO() - if not is_file_like(self.path_or_buf): - # path_or_buf is path - path_or_buf = self.path_or_buf - elif hasattr(self.path_or_buf, 'name'): - # path_or_buf is file handle - path_or_buf = self.path_or_buf.name - else: - # path_or_buf is file-like IO objects. + # GH 21227 internal compression is not used when file-like passed. + if self.compression and hasattr(self.path_or_buf, 'write'): + msg = ("compression has no effect when passing file-like " + "object as input.") + warnings.warn(msg, RuntimeWarning, stacklevel=2) + + # when zip compression is called. + is_zip = isinstance(self.path_or_buf, ZipFile) or ( + not hasattr(self.path_or_buf, 'write') + and self.compression == 'zip') + + if is_zip: + # zipfile doesn't support writing string to archive. 
uses string + # buffer to receive csv writing and dump into zip compression + # file handle. GH 21241, 21118 + f = StringIO() + close = False + elif hasattr(self.path_or_buf, 'write'): f = self.path_or_buf - path_or_buf = None + close = False + else: + f, handles = _get_handle(self.path_or_buf, self.mode, + encoding=encoding, + compression=self.compression) + close = True try: writer_kwargs = dict(lineterminator=self.line_terminator, @@ -157,13 +171,18 @@ def save(self): self._save() finally: - # GH 17778 handles zip compression for byte strings separately. - buf = f.getvalue() - if path_or_buf: - f, handles = _get_handle(path_or_buf, self.mode, - encoding=encoding, - compression=self.compression) - f.write(buf) + if is_zip: + # GH 17778 handles zip compression separately. + buf = f.getvalue() + if hasattr(self.path_or_buf, 'write'): + self.path_or_buf.write(buf) + else: + f, handles = _get_handle(self.path_or_buf, self.mode, + encoding=encoding, + compression=self.compression) + f.write(buf) + close = True + if close: f.close() for _fh in handles: _fh.close() diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py index 60dc336a85388..3ad25ae73109e 100644 --- a/pandas/tests/frame/test_to_csv.py +++ b/pandas/tests/frame/test_to_csv.py @@ -9,6 +9,7 @@ import numpy as np from pandas.compat import (lmap, range, lrange, StringIO, u) +from pandas.io.common import _get_handle import pandas.core.common as com from pandas.errors import ParserError from pandas import (DataFrame, Index, Series, MultiIndex, Timestamp, @@ -935,18 +936,19 @@ def test_to_csv_compression(self, df, encoding, compression): with ensure_clean() as filename: df.to_csv(filename, compression=compression, encoding=encoding) - # test the round trip - to_csv -> read_csv result = read_csv(filename, compression=compression, index_col=0, encoding=encoding) + assert_frame_equal(df, result) - with open(filename, 'w') as fh: - df.to_csv(fh, compression=compression, encoding=encoding) - 
- result_fh = read_csv(filename, compression=compression, - index_col=0, encoding=encoding) + # test the round trip using file handle - to_csv -> read_csv + f, _handles = _get_handle(filename, 'w', compression=compression, + encoding=encoding) + with f: + df.to_csv(f, encoding=encoding) + result = pd.read_csv(filename, compression=compression, + encoding=encoding, index_col=0, squeeze=True) assert_frame_equal(df, result) - assert_frame_equal(df, result_fh) # explicitly make sure file is compressed with tm.decompress_file(filename, compression) as fh: diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py index f98962685ad9a..814d794d45c18 100644 --- a/pandas/tests/series/test_io.py +++ b/pandas/tests/series/test_io.py @@ -11,6 +11,7 @@ from pandas import Series, DataFrame from pandas.compat import StringIO, u +from pandas.io.common import _get_handle from pandas.util.testing import (assert_series_equal, assert_almost_equal, assert_frame_equal, ensure_clean) import pandas.util.testing as tm @@ -151,20 +152,19 @@ def test_to_csv_compression(self, s, encoding, compression): s.to_csv(filename, compression=compression, encoding=encoding, header=True) - # test the round trip - to_csv -> read_csv result = pd.read_csv(filename, compression=compression, encoding=encoding, index_col=0, squeeze=True) + assert_series_equal(s, result) - with open(filename, 'w') as fh: - s.to_csv(fh, compression=compression, encoding=encoding, - header=True) - - result_fh = pd.read_csv(filename, compression=compression, - encoding=encoding, index_col=0, - squeeze=True) + # test the round trip using file handle - to_csv -> read_csv + f, _handles = _get_handle(filename, 'w', compression=compression, + encoding=encoding) + with f: + s.to_csv(f, encoding=encoding, header=True) + result = pd.read_csv(filename, compression=compression, + encoding=encoding, index_col=0, squeeze=True) assert_series_equal(s, result) - assert_series_equal(s, result_fh) # explicitly ensure file was 
compressed with tm.decompress_file(filename, compression) as fh: diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index 7034e9ac2e0c8..ef5f13bfa504a 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -11,6 +11,7 @@ from pandas.compat import range, lmap import pandas.core.common as com from pandas.core import ops +from pandas.io.common import _get_handle import pandas.util.testing as tm @@ -246,19 +247,34 @@ def test_compression_size(obj, method, compression_only): [12.32112, 123123.2, 321321.2]], columns=['X', 'Y', 'Z']), Series(100 * [0.123456, 0.234567, 0.567567], name='X')]) -@pytest.mark.parametrize('method', ['to_csv']) +@pytest.mark.parametrize('method', ['to_csv', 'to_json']) def test_compression_size_fh(obj, method, compression_only): with tm.ensure_clean() as filename: - with open(filename, 'w') as fh: - getattr(obj, method)(fh, compression=compression_only) - assert not fh.closed - assert fh.closed + f, _handles = _get_handle(filename, 'w', compression=compression_only) + with f: + getattr(obj, method)(f) + assert not f.closed + assert f.closed compressed = os.path.getsize(filename) with tm.ensure_clean() as filename: - with open(filename, 'w') as fh: - getattr(obj, method)(fh, compression=None) - assert not fh.closed - assert fh.closed + f, _handles = _get_handle(filename, 'w', compression=None) + with f: + getattr(obj, method)(f) + assert not f.closed + assert f.closed uncompressed = os.path.getsize(filename) assert uncompressed > compressed + + +# GH 21227 +def test_compression_warning(compression_only): + df = DataFrame(100 * [[0.123456, 0.234567, 0.567567], + [12.32112, 123123.2, 321321.2]], + columns=['X', 'Y', 'Z']) + with tm.ensure_clean() as filename: + f, _handles = _get_handle(filename, 'w', compression=compression_only) + with tm.assert_produces_warning(RuntimeWarning, + check_stacklevel=False): + with f: + df.to_csv(f, compression=compression_only)
- [x] closes #21471
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry

This error is related to PR #21249 and https://github.com/pandas-dev/pandas/issues/21227. This was never a supported use case; to use a file handle in `to_csv` with compression, the file object itself should be a compression archive, such as:

```
with gzip.open('test.txt.gz', 'wt') as f:
    pd.DataFrame([0, 1], index=['a', 'b'], columns=['c']).to_csv(f, sep='\t')
```

Restores the 0.22 behavior of `to_csv` with zipfile support. `zipfile` doesn't support writing csv strings to a zip archive through a file handle, so a buffer is used to catch the writes and dump them into the zip archive in one go. The other scenarios remain unchanged.
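The buffer-then-dump approach described above can be sketched outside of pandas internals. This is a rough illustration only; the archive name `test.zip` and member name `test.tsv` are made up for the example:

```python
import io
import zipfile

import pandas as pd

df = pd.DataFrame([0, 1], index=['a', 'b'], columns=['c'])

# zipfile offers no text-mode handle to stream csv into, so write the
# csv into an in-memory string buffer first...
buf = io.StringIO()
df.to_csv(buf, sep='\t')

# ...then dump the accumulated string into the archive in one go.
with zipfile.ZipFile('test.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.writestr('test.tsv', buf.getvalue())
```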
https://api.github.com/repos/pandas-dev/pandas/pulls/21478
2018-06-14T17:03:53Z
2018-06-18T22:45:26Z
2018-06-18T22:45:26Z
2018-06-29T14:57:18Z
BUG: inconsistency between replace dict using integers and using strings (#20656)
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index e28b15fcf621f..c340486c78a59 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -668,6 +668,7 @@ Reshaping - :func:`pandas.core.groupby.GroupBy.rank` now raises a ``ValueError`` when an invalid value is passed for argument ``na_option`` (:issue:`22124`) - Bug in :func:`get_dummies` with Unicode attributes in Python 2 (:issue:`22084`) - Bug in :meth:`DataFrame.replace` raises ``RecursionError`` when replacing empty lists (:issue:`22083`) +- Bug in :meth:`Series.replace` and meth:`DataFrame.replace` when dict is used as the `to_replace` value and one key in the dict is is another key's value, the results were inconsistent between using integer key and using string key (:issue:`20656`) - Build Changes diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index be80a605f08fd..f0635014b166b 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -1695,6 +1695,45 @@ def _nanpercentile(values, q, axis, **kw): placement=np.arange(len(result)), ndim=ndim) + def _replace_coerce(self, to_replace, value, inplace=True, regex=False, + convert=False, mgr=None, mask=None): + """ + Replace value corresponding to the given boolean array with another + value. + + Parameters + ---------- + to_replace : object or pattern + Scalar to replace or regular expression to match. + value : object + Replacement object. + inplace : bool, default False + Perform inplace modification. + regex : bool, default False + If true, perform regular expression substitution. + convert : bool, default True + If true, try to coerce any object types to better types. + mgr : BlockManager, optional + mask : array-like of bool, optional + True indicate corresponding element is ignored. + + Returns + ------- + A new block if there is anything to replace or the original block. 
+ """ + + if mask.any(): + if not regex: + self = self.coerce_to_target_dtype(value) + return self.putmask(mask, value, inplace=inplace) + else: + return self._replace_single(to_replace, value, inplace=inplace, + regex=regex, + convert=convert, + mask=mask, + mgr=mgr) + return self + class ScalarBlock(Block): """ @@ -2470,8 +2509,31 @@ def replace(self, to_replace, value, inplace=False, filter=None, regex=regex, mgr=mgr) def _replace_single(self, to_replace, value, inplace=False, filter=None, - regex=False, convert=True, mgr=None): + regex=False, convert=True, mgr=None, mask=None): + """ + Replace elements by the given value. + Parameters + ---------- + to_replace : object or pattern + Scalar to replace or regular expression to match. + value : object + Replacement object. + inplace : bool, default False + Perform inplace modification. + filter : list, optional + regex : bool, default False + If true, perform regular expression substitution. + convert : bool, default True + If true, try to coerce any object types to better types. + mgr : BlockManager, optional + mask : array-like of bool, optional + True indicate corresponding element is ignored. + + Returns + ------- + a new block, the result after replacing + """ inplace = validate_bool_kwarg(inplace, 'inplace') # to_replace is regex compilable @@ -2537,15 +2599,53 @@ def re_replacer(s): else: filt = self.mgr_locs.isin(filter).nonzero()[0] - new_values[filt] = f(new_values[filt]) + if mask is None: + new_values[filt] = f(new_values[filt]) + else: + new_values[filt][mask] = f(new_values[filt][mask]) # convert block = self.make_block(new_values) if convert: block = block.convert(by_item=True, numeric=False) - return block + def _replace_coerce(self, to_replace, value, inplace=True, regex=False, + convert=False, mgr=None, mask=None): + """ + Replace value corresponding to the given boolean array with another + value. 
+ + Parameters + ---------- + to_replace : object or pattern + Scalar to replace or regular expression to match. + value : object + Replacement object. + inplace : bool, default False + Perform inplace modification. + regex : bool, default False + If true, perform regular expression substitution. + convert : bool, default True + If true, try to coerce any object types to better types. + mgr : BlockManager, optional + mask : array-like of bool, optional + True indicate corresponding element is ignored. + + Returns + ------- + A new block if there is anything to replace or the original block. + """ + if mask.any(): + block = super(ObjectBlock, self)._replace_coerce( + to_replace=to_replace, value=value, inplace=inplace, + regex=regex, convert=convert, mgr=mgr, mask=mask) + if convert: + block = [b.convert(by_item=True, numeric=False, copy=True) + for b in block] + return block + return self + class CategoricalBlock(ExtensionBlock): __slots__ = () diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index 32e8372d5c6c9..e64ba44bb8a92 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -3,6 +3,7 @@ from functools import partial import itertools import operator +import re import numpy as np @@ -23,7 +24,8 @@ from pandas.core.dtypes.cast import ( maybe_promote, infer_dtype_from_scalar, - find_common_type) + find_common_type, + maybe_convert_objects) from pandas.core.dtypes.missing import isna import pandas.core.dtypes.concat as _concat from pandas.core.dtypes.generic import ABCSeries, ABCExtensionArray @@ -571,12 +573,19 @@ def replace_list(self, src_list, dest_list, inplace=False, regex=False, # figure out our mask a-priori to avoid repeated replacements values = self.as_array() - def comp(s): + def comp(s, regex=False): + """ + Generate a bool array by perform an equality check, or perform + an element-wise regular expression matching + """ if isna(s): return isna(values) - return 
_maybe_compare(values, getattr(s, 'asm8', s), operator.eq) + if hasattr(s, 'asm8'): + return _compare_or_regex_match(maybe_convert_objects(values), + getattr(s, 'asm8'), regex) + return _compare_or_regex_match(values, s, regex) - masks = [comp(s) for i, s in enumerate(src_list)] + masks = [comp(s, regex) for i, s in enumerate(src_list)] result_blocks = [] src_len = len(src_list) - 1 @@ -588,20 +597,16 @@ def comp(s): for i, (s, d) in enumerate(zip(src_list, dest_list)): new_rb = [] for b in rb: - if b.dtype == np.object_: - convert = i == src_len - result = b.replace(s, d, inplace=inplace, regex=regex, - mgr=mgr, convert=convert) + m = masks[i][b.mgr_locs.indexer] + convert = i == src_len + result = b._replace_coerce(mask=m, to_replace=s, value=d, + inplace=inplace, + convert=convert, regex=regex, + mgr=mgr) + if m.any(): new_rb = _extend_blocks(result, new_rb) else: - # get our mask for this element, sized to this - # particular block - m = masks[i][b.mgr_locs.indexer] - if m.any(): - b = b.coerce_to_target_dtype(d) - new_rb.extend(b.putmask(m, d, inplace=True)) - else: - new_rb.append(b) + new_rb.append(b) rb = new_rb result_blocks.extend(rb) @@ -1890,7 +1895,28 @@ def _consolidate(blocks): return new_blocks -def _maybe_compare(a, b, op): +def _compare_or_regex_match(a, b, regex=False): + """ + Compare two array_like inputs of the same shape or two scalar values + + Calls operator.eq or re.match, depending on regex argument. If regex is + True, perform an element-wise regex matching. 
+ + Parameters + ---------- + a : array_like or scalar + b : array_like or scalar + regex : bool, default False + + Returns + ------- + mask : array_like of bool + """ + if not regex: + op = lambda x: operator.eq(x, b) + else: + op = np.vectorize(lambda x: bool(re.match(b, x)) if isinstance(x, str) + else False) is_a_array = isinstance(a, np.ndarray) is_b_array = isinstance(b, np.ndarray) @@ -1902,9 +1928,8 @@ def _maybe_compare(a, b, op): # numpy deprecation warning if comparing numeric vs string-like elif is_numeric_v_string_like(a, b): result = False - else: - result = op(a, b) + result = op(a) if is_scalar(result) and (is_a_array or is_b_array): type_names = [type(a).__name__, type(b).__name__] diff --git a/pandas/tests/series/test_replace.py b/pandas/tests/series/test_replace.py index d495fd9c83c24..9e198d2854f24 100644 --- a/pandas/tests/series/test_replace.py +++ b/pandas/tests/series/test_replace.py @@ -256,6 +256,14 @@ def test_replace_string_with_number(self): expected = pd.Series([1, 2, 3]) tm.assert_series_equal(expected, result) + def test_replace_replacer_equals_replacement(self): + # GH 20656 + # make sure all replacers are matching against original values + s = pd.Series(['a', 'b']) + expected = pd.Series(['b', 'a']) + result = s.replace({'a': 'b', 'b': 'a'}) + tm.assert_series_equal(expected, result) + def test_replace_unicode_with_number(self): # GH 15743 s = pd.Series([1, 2, 3])
- [x] closes #20656 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
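As a quick check of the fixed behavior, the test case added in the diff can be reproduced interactively; with the patch, every replacer is matched against the *original* values, so two keys that are each other's values swap cleanly:

```python
import pandas as pd

s = pd.Series(['a', 'b'])
# 'a' -> 'b' and 'b' -> 'a' happen simultaneously instead of one
# replacement cascading into the next.
result = s.replace({'a': 'b', 'b': 'a'})
print(result.tolist())  # ['b', 'a']
```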
https://api.github.com/repos/pandas-dev/pandas/pulls/21477
2018-06-14T12:03:13Z
2018-08-09T10:51:54Z
2018-08-09T10:51:54Z
2023-03-06T06:04:02Z
BUG #21374: Fix division with complex numbers in eval() method
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py index ca0c4db4947c4..e9403a48dd157 100644 --- a/pandas/core/computation/ops.py +++ b/pandas/core/computation/ops.py @@ -466,7 +466,8 @@ def __init__(self, lhs, rhs, truediv, *args, **kwargs): if truediv or PY3: # do not upcast float32s to float64 un-necessarily - acceptable_dtypes = [np.float32, np.float_] + acceptable_dtypes = [np.float32, np.float_, + np.complex64, np.complex_] _cast_inplace(com.flatten(self), acceptable_dtypes, np.float_)
- [ ] closes #21374 - [ ] tests added / passed ```python data = {"a": [1 + 2j], "b": [1 + 1j]} df = pd.DataFrame(data = data) df.eval("a/b") ``` Expected output: ```python df["a"]/df["b"] 0 (1.5+0.5j) dtype: complex128 ``` New result: ```python df.eval("a/b") 0 (1.5+0.5j) dtype: complex128 ``` - [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21472
2018-06-14T02:07:36Z
2018-09-25T16:49:20Z
null
2018-09-25T16:49:20Z
Adding MultiIndex support to DataFrame pivot function (Fixes #21425)
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py index 3d9e84954a63b..b22d143544b7b 100644 --- a/pandas/core/reshape/reshape.py +++ b/pandas/core/reshape/reshape.py @@ -395,15 +395,29 @@ def pivot(self, index=None, columns=None, values=None): See DataFrame.pivot """ if values is None: - cols = [columns] if index is None else [index, columns] + if index is None: + cols = [columns] + else: + if is_list_like(index): + cols = [column for column in index] + else: + cols = [index] + cols.append(columns) append = index is None indexed = self.set_index(cols, append=append) + else: if index is None: index = self.index + index = MultiIndex.from_arrays([index, self[columns]]) + elif is_list_like(index): + # Iterating through the list of multiple columns of an index + indexes = [self[column] for column in index] + indexes.append(self[columns]) + index = MultiIndex.from_arrays(indexes) else: index = self[index] - index = MultiIndex.from_arrays([index, self[columns]]) + index = MultiIndex.from_arrays([index, self[columns]]) if is_list_like(values) and not isinstance(values, tuple): # Exclude tuple because it is seen as a single column name diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py index 7e7e081408534..4474c61dddfe3 100644 --- a/pandas/tests/reshape/test_pivot.py +++ b/pandas/tests/reshape/test_pivot.py @@ -301,6 +301,34 @@ def test_pivot_multi_functions(self): expected = concat([means, stds], keys=['mean', 'std'], axis=1) tm.assert_frame_equal(result, expected) + def test_pivot_multiple_columns_as_index(self): + # adding the test case for multiple columns as index (#21425) + df = DataFrame({'lev1': [1, 1, 1, 1, 2, 2, 2, 2], + 'lev2': [1, 1, 2, 2, 1, 1, 2, 2], + 'lev3': [1, 2, 1, 2, 1, 2, 1, 2], + 'values': [0, 1, 2, 3, 4, 5, 6, 7]}) + result = df.pivot(index=['lev1', 'lev2'], + columns='lev3', + values='values') + result_no_values = df.pivot(index=['lev1', 'lev2'], + columns='lev3') + data = [[0, 1], [2, 3], 
[4, 5], [6, 7]] + exp_index = pd.MultiIndex.from_product([[1, 2], [1, 2]], + names=['lev1', 'lev2']) + exp_columns_1 = Index([1, 2], name='lev3') + expected_1 = DataFrame(data=data, index=exp_index, + columns=exp_columns_1) + + exp_columns_2 = MultiIndex(levels=[['values'], [1, 2]], + labels=[[0, 0], [0, 1]], + names=[None, 'lev3']) + + expected_2 = DataFrame(data=data, index=exp_index, + columns=exp_columns_2) + + tm.assert_frame_equal(result, expected_1) + tm.assert_frame_equal(result_no_values, expected_2) + def test_pivot_index_with_nan(self): # GH 3588 nan = np.nan
- [x] closes #21425
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry: `DataFrame.pivot` can now take multiple columns as the index

Added an extra case to handle indexing on multiple columns in the `DataFrame.pivot` function.
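A minimal usage sketch of the list-like `index` support (the data values here are chosen arbitrarily, smaller than the frame in the added test):

```python
import pandas as pd

df = pd.DataFrame({'lev1': [1, 1, 2, 2],
                   'lev2': [1, 2, 1, 2],
                   'lev3': [1, 1, 2, 2],
                   'values': [0, 1, 2, 3]})

# Passing a list of columns as the index yields a MultiIndex result.
result = df.pivot(index=['lev1', 'lev2'], columns='lev3', values='values')
print(list(result.index.names))  # ['lev1', 'lev2']
```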
https://api.github.com/repos/pandas-dev/pandas/pulls/21467
2018-06-13T17:09:24Z
2019-02-06T03:30:55Z
null
2019-02-06T03:30:56Z
DOC: Update generic.py
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 32f64b1d3e05c..b38a9de7589f9 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -6882,6 +6882,8 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None, rule : string the offset string or object representing target conversion axis : int, optional, default 0 + how : string + method for down- or re-sampling, default to ‘mean’ for downsampling closed : {'right', 'left'} Which side of bin interval is closed. The default is 'left' for all frequency offsets except for 'M', 'A', 'Q', 'BM',
Included documentation for the `how` parameter in `DataFrame.resample`.

- [x] closes #21463
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21465
2018-06-13T16:15:12Z
2018-06-13T18:07:58Z
null
2018-06-13T18:07:59Z
Removing an un-needed conditional in np_datetime_strings.c
diff --git a/pandas/_libs/src/datetime/np_datetime_strings.c b/pandas/_libs/src/datetime/np_datetime_strings.c index 2ea69e2ac1636..f5c403858a641 100644 --- a/pandas/_libs/src/datetime/np_datetime_strings.c +++ b/pandas/_libs/src/datetime/np_datetime_strings.c @@ -252,13 +252,9 @@ int parse_iso_8601_datetime(char *str, int len, } /* Next character must be a ':' or the end of the string */ - if (sublen == 0) { - if (!hour_was_2_digits) { - goto parse_error; - } + if (sublen == 0) goto finish; - } - + if (*substr == ':') { has_hms_sep = 1; ++substr;
- [x] closes #21422
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry

PS: I was not able to build this properly on my setup, so I would wait for the test builds to run. Also, which version should I put the whatsnew entry in?
https://api.github.com/repos/pandas-dev/pandas/pulls/21462
2018-06-13T14:16:25Z
2018-11-23T03:26:42Z
null
2018-11-23T03:26:42Z
BUG: fix get_indexer_non_unique with CategoricalIndex key
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index ec2eddcfd4d41..611e5c4836c6f 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -55,7 +55,7 @@ Conversion Indexing ^^^^^^^^ -- +- Bug in :meth:`Index.get_indexer_non_unique` with categorical key (:issue:`21448`) - I/O diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index bf1051332ee19..d9e4ef7db1158 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -31,6 +31,7 @@ is_dtype_equal, is_dtype_union_equal, is_object_dtype, + is_categorical, is_categorical_dtype, is_interval_dtype, is_period_dtype, @@ -3300,6 +3301,8 @@ def _filter_indexer_tolerance(self, target, indexer, tolerance): @Appender(_index_shared_docs['get_indexer_non_unique'] % _index_doc_kwargs) def get_indexer_non_unique(self, target): target = _ensure_index(target) + if is_categorical(target): + target = target.astype(target.dtype.categories.dtype) pself, ptarget = self._maybe_promote(target) if pself is not self or ptarget is not target: return pself.get_indexer_non_unique(ptarget) diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py index 150eca32e229d..587090fa72def 100644 --- a/pandas/core/indexes/category.py +++ b/pandas/core/indexes/category.py @@ -598,7 +598,12 @@ def get_indexer_non_unique(self, target): target = ibase._ensure_index(target) if isinstance(target, CategoricalIndex): - target = target.categories + # Indexing on codes is more efficient if categories are the same: + if target.categories is self.categories: + target = target.codes + indexer, missing = self._engine.get_indexer_non_unique(target) + return _ensure_platform_int(indexer), missing + target = target.values codes = self.categories.get_indexer(target) indexer, missing = self._engine.get_indexer_non_unique(codes) diff --git a/pandas/tests/categorical/test_indexing.py b/pandas/tests/categorical/test_indexing.py index 
9c27b1101e5ca..cf7b5cfa55882 100644 --- a/pandas/tests/categorical/test_indexing.py +++ b/pandas/tests/categorical/test_indexing.py @@ -5,7 +5,7 @@ import numpy as np import pandas.util.testing as tm -from pandas import Categorical, Index, PeriodIndex +from pandas import Categorical, Index, CategoricalIndex, PeriodIndex from pandas.tests.categorical.common import TestCategorical @@ -103,3 +103,21 @@ def f(): s.categories = [1, 2] pytest.raises(ValueError, f) + + # Combinations of sorted/unique: + @pytest.mark.parametrize("idx_values", [[1, 2, 3, 4], [1, 3, 2, 4], + [1, 3, 3, 4], [1, 2, 2, 4]]) + # Combinations of missing/unique + @pytest.mark.parametrize("key_values", [[1, 2], [1, 5], [1, 1], [5, 5]]) + @pytest.mark.parametrize("key_class", [Categorical, CategoricalIndex]) + def test_get_indexer_non_unique(self, idx_values, key_values, key_class): + # GH 21448 + key = key_class(key_values, categories=range(1, 5)) + # Test for flat index and CategoricalIndex with same/different cats: + for dtype in None, 'category', key.dtype: + idx = Index(idx_values, dtype=dtype) + expected, exp_miss = idx.get_indexer_non_unique(key_values) + result, res_miss = idx.get_indexer_non_unique(key) + + tm.assert_numpy_array_equal(expected, result) + tm.assert_numpy_array_equal(exp_miss, res_miss)
- [x] closes #21448 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry
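The fix can be exercised directly with a non-unique flat index and a categorical key; a small sketch (index and key values chosen arbitrarily, mirroring the parametrized test in the diff):

```python
import pandas as pd

idx = pd.Index([1, 3, 3, 4])
key = pd.CategoricalIndex([1, 5], categories=range(1, 6))

# `indexer` holds positions of each key element in `idx` (-1 for a miss),
# `missing` holds the positions in the key that were not found.
indexer, missing = idx.get_indexer_non_unique(key)
print(list(indexer), list(missing))  # [0, -1] [1]
```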
https://api.github.com/repos/pandas-dev/pandas/pulls/21457
2018-06-13T08:34:17Z
2018-06-13T13:24:02Z
2018-06-13T13:24:02Z
2018-06-29T14:48:15Z
API/BUG: Raise when int-dtype coercions fail
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index c23ed006ff637..15c5cc97b8426 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -26,7 +26,7 @@ Other Enhancements Backwards incompatible API changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. _whatsnew_0240.api.datetimelike.normalize +.. _whatsnew_0240.api.datetimelike.normalize: Tick DateOffset Normalize Restrictions -------------------------------------- @@ -73,6 +73,32 @@ Datetimelike API Changes Other API Changes ^^^^^^^^^^^^^^^^^ +.. _whatsnew_0240.api.other.incompatibilities: + +Series and Index Data-Dtype Incompatibilities +--------------------------------------------- + +``Series`` and ``Index`` constructors now raise when the +data is incompatible with a passed ``dtype=`` (:issue:`15832`) + +Previous Behavior: + +.. code-block:: ipython + + In [4]: pd.Series([-1], dtype="uint64") + Out [4]: + 0 18446744073709551615 + dtype: uint64 + +Current Behavior: + +.. code-block:: ipython + + In [4]: pd.Series([-1], dtype="uint64") + Out [4]: + ... + OverflowError: Trying to coerce negative values to unsigned integers + - :class:`DatetimeIndex` now accepts :class:`Int64Index` arguments as epoch timestamps (:issue:`20997`) - - diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py index ebc7a13234a98..65328dfc7347e 100644 --- a/pandas/core/dtypes/cast.py +++ b/pandas/core/dtypes/cast.py @@ -20,6 +20,7 @@ is_dtype_equal, is_float_dtype, is_complex_dtype, is_integer_dtype, + is_unsigned_integer_dtype, is_datetime_or_timedelta_dtype, is_bool_dtype, is_scalar, is_string_dtype, _string_dtypes, @@ -1269,3 +1270,74 @@ def construct_1d_ndarray_preserving_na(values, dtype=None, copy=False): subarr = subarr2 return subarr + + +def maybe_cast_to_integer_array(arr, dtype, copy=False): + """ + Takes any dtype and returns the casted version, raising for when data is + incompatible with integer/unsigned integer dtypes. + + .. 
versionadded:: 0.24.0 + + Parameters + ---------- + arr : array-like + The array to cast. + dtype : str, np.dtype + The integer dtype to cast the array to. + copy: boolean, default False + Whether to make a copy of the array before returning. + + Returns + ------- + int_arr : ndarray + An array of integer or unsigned integer dtype + + Raises + ------ + OverflowError : the dtype is incompatible with the data + ValueError : loss of precision has occurred during casting + + Examples + -------- + If you try to coerce negative values to unsigned integers, it raises: + + >>> Series([-1], dtype="uint64") + Traceback (most recent call last): + ... + OverflowError: Trying to coerce negative values to unsigned integers + + Also, if you try to coerce float values to integers, it raises: + + >>> Series([1, 2, 3.5], dtype="int64") + Traceback (most recent call last): + ... + ValueError: Trying to coerce float values to integers + """ + + try: + if not hasattr(arr, "astype"): + casted = np.array(arr, dtype=dtype, copy=copy) + else: + casted = arr.astype(dtype, copy=copy) + except OverflowError: + raise OverflowError("The elements provided in the data cannot all be " + "casted to the dtype {dtype}".format(dtype=dtype)) + + if np.array_equal(arr, casted): + return casted + + # We do this casting to allow for proper + # data and dtype checking. + # + # We didn't do this earlier because NumPy + # doesn't handle `uint64` correctly. 
+ arr = np.asarray(arr) + + if is_unsigned_integer_dtype(dtype) and (arr < 0).any(): + raise OverflowError("Trying to coerce negative values " + "to unsigned integers") + + if is_integer_dtype(dtype) and (is_float_dtype(arr) or + is_object_dtype(arr)): + raise ValueError("Trying to coerce float values to integers") diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 490fd872125ff..4834f799b92eb 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -21,6 +21,7 @@ ABCPeriodIndex, ABCTimedeltaIndex, ABCDateOffset) from pandas.core.dtypes.missing import isna, array_equivalent +from pandas.core.dtypes.cast import maybe_cast_to_integer_array from pandas.core.dtypes.common import ( _ensure_int64, _ensure_object, @@ -311,19 +312,16 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, if is_integer_dtype(dtype): inferred = lib.infer_dtype(data) if inferred == 'integer': - try: - data = np.array(data, copy=copy, dtype=dtype) - except OverflowError: - # gh-15823: a more user-friendly error message - raise OverflowError( - "the elements provided in the data cannot " - "all be casted to the dtype {dtype}" - .format(dtype=dtype)) + data = maybe_cast_to_integer_array(data, dtype, + copy=copy) elif inferred in ['floating', 'mixed-integer-float']: if isna(data).any(): raise ValueError('cannot convert float ' 'NaN to integer') + if inferred == "mixed-integer-float": + data = maybe_cast_to_integer_array(data, dtype) + # If we are actually all equal to integers, # then coerce to integer. 
try: @@ -352,7 +350,8 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None, except (TypeError, ValueError) as e: msg = str(e) - if 'cannot convert float' in msg: + if ("cannot convert float" in msg or + "Trying to coerce float values to integer" in msg): raise # maybe coerce to a sub-class diff --git a/pandas/core/series.py b/pandas/core/series.py index 23c4bbe082f28..2f762dff4aeab 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -41,7 +41,8 @@ maybe_cast_to_datetime, maybe_castable, construct_1d_arraylike_from_scalar, construct_1d_ndarray_preserving_na, - construct_1d_object_array_from_listlike) + construct_1d_object_array_from_listlike, + maybe_cast_to_integer_array) from pandas.core.dtypes.missing import ( isna, notna, @@ -4068,6 +4069,11 @@ def _try_cast(arr, take_fast_path): return arr try: + # gh-15832: Check if we are requesting a numeric dype and + # that we can convert the data to the requested dtype. + if is_float_dtype(dtype) or is_integer_dtype(dtype): + subarr = maybe_cast_to_integer_array(arr, dtype) + subarr = maybe_cast_to_datetime(arr, dtype) # Take care in creating object arrays (but iterators are not # supported): diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py index 311c71f734945..533bff0384ad9 100644 --- a/pandas/tests/generic/test_generic.py +++ b/pandas/tests/generic/test_generic.py @@ -199,11 +199,11 @@ def test_downcast(self): self._compare(result, expected) def test_constructor_compound_dtypes(self): - # GH 5191 - # compound dtypes should raise not-implementederror + # see gh-5191 + # Compound dtypes should raise NotImplementedError. 
def f(dtype): - return self._construct(shape=3, dtype=dtype) + return self._construct(shape=3, value=1, dtype=dtype) pytest.raises(NotImplementedError, f, [("A", "datetime64[h]"), ("B", "str"), @@ -534,14 +534,14 @@ def test_truncate_out_of_bounds(self): # small shape = [int(2e3)] + ([1] * (self._ndim - 1)) - small = self._construct(shape, dtype='int8') + small = self._construct(shape, dtype='int8', value=1) self._compare(small.truncate(), small) self._compare(small.truncate(before=0, after=3e3), small) self._compare(small.truncate(before=-1, after=2e3), small) # big shape = [int(2e6)] + ([1] * (self._ndim - 1)) - big = self._construct(shape, dtype='int8') + big = self._construct(shape, dtype='int8', value=1) self._compare(big.truncate(), big) self._compare(big.truncate(before=0, after=3e6), big) self._compare(big.truncate(before=-1, after=2e6), big) diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py index 1d8a958c3413f..daba56e0c1e29 100644 --- a/pandas/tests/indexes/test_base.py +++ b/pandas/tests/indexes/test_base.py @@ -486,11 +486,18 @@ def test_constructor_nonhashable_name(self, indices): def test_constructor_overflow_int64(self): # see gh-15832 - msg = ("the elements provided in the data cannot " + msg = ("The elements provided in the data cannot " "all be casted to the dtype int64") with tm.assert_raises_regex(OverflowError, msg): Index([np.iinfo(np.uint64).max - 1], dtype="int64") + @pytest.mark.xfail(reason="see gh-21311: Index " + "doesn't enforce dtype argument") + def test_constructor_cast(self): + msg = "could not convert string to float" + with tm.assert_raises_regex(ValueError, msg): + Index(["a", "b", "c"], dtype=float) + def test_view_with_args(self): restricted = ['unicodeIndex', 'strIndex', 'catIndex', 'boolIndex', diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py index 49322d9b7abd6..166af4c89877d 100644 --- a/pandas/tests/indexes/test_numeric.py +++ 
b/pandas/tests/indexes/test_numeric.py @@ -451,6 +451,18 @@ def test_astype(self): i = Float64Index([0, 1.1, np.NAN]) pytest.raises(ValueError, lambda: i.astype(dtype)) + def test_type_coercion_fail(self, any_int_dtype): + # see gh-15832 + msg = "Trying to coerce float values to integers" + with tm.assert_raises_regex(ValueError, msg): + Index([1, 2, 3.5], dtype=any_int_dtype) + + def test_type_coercion_valid(self, float_dtype): + # There is no Float32Index, so we always + # generate Float64Index. + i = Index([1, 2, 3.5], dtype=float_dtype) + tm.assert_index_equal(i, Index([1, 2, 3.5])) + def test_equals_numeric(self): i = Float64Index([1.0, 2.0]) @@ -862,6 +874,14 @@ def test_constructor_corner(self): with tm.assert_raises_regex(TypeError, 'casting'): Int64Index(arr_with_floats) + def test_constructor_coercion_signed_to_unsigned(self, uint_dtype): + + # see gh-15832 + msg = "Trying to coerce negative values to unsigned integers" + + with tm.assert_raises_regex(OverflowError, msg): + Index([-1], dtype=uint_dtype) + def test_coerce_list(self): # coerce things arr = Index([1, 2, 3, 4]) diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py index d590cfd6b6c64..f96e7eeb40ea2 100644 --- a/pandas/tests/io/test_pytables.py +++ b/pandas/tests/io/test_pytables.py @@ -2047,7 +2047,7 @@ def test_table_values_dtypes_roundtrip(self): assert df1.dtypes[0] == 'float32' # check with mixed dtypes - df1 = DataFrame(dict((c, Series(np.random.randn(5), dtype=c)) + df1 = DataFrame(dict((c, Series(np.random.randint(5), dtype=c)) for c in ['float32', 'float64', 'int32', 'int64', 'int16', 'int8'])) df1['string'] = 'foo' diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 906d2aacd5586..27cfec0dbf20d 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -542,12 +542,30 @@ def test_constructor_pass_nan_nat(self): tm.assert_series_equal(Series(np.array([np.nan, 
pd.NaT])), exp) def test_constructor_cast(self): - pytest.raises(ValueError, Series, ['a', 'b', 'c'], dtype=float) + msg = "could not convert string to float" + with tm.assert_raises_regex(ValueError, msg): + Series(["a", "b", "c"], dtype=float) + + def test_constructor_unsigned_dtype_overflow(self, uint_dtype): + # see gh-15832 + msg = 'Trying to coerce negative values to unsigned integers' + with tm.assert_raises_regex(OverflowError, msg): + Series([-1], dtype=uint_dtype) + + def test_constructor_coerce_float_fail(self, any_int_dtype): + # see gh-15832 + msg = "Trying to coerce float values to integers" + with tm.assert_raises_regex(ValueError, msg): + Series([1, 2, 3.5], dtype=any_int_dtype) + + def test_constructor_coerce_float_valid(self, float_dtype): + s = Series([1, 2, 3.5], dtype=float_dtype) + expected = Series([1, 2, 3.5]).astype(float_dtype) + assert_series_equal(s, expected) - def test_constructor_dtype_nocast(self): - # 1572 + def test_constructor_dtype_no_cast(self): + # see gh-1572 s = Series([1, 2, 3]) - s2 = Series(s, dtype=np.int64) s2[1] = 5
Related to the Index and Series constructors. Closes #15832. cc @ucals (since this is mostly based off what you did in #15859)
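For context, the "could not convert string to float" message asserted in the new `test_constructor_cast` tests is CPython's own `float()` error, not a pandas-specific string; a minimal stdlib-only check:

```python
# CPython's float() raises ValueError with the exact prefix the new
# pandas tests match against via assert_raises_regex.
try:
    float("a")
except ValueError as exc:
    msg = str(exc)
print(msg)  # e.g. "could not convert string to float: 'a'"
```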
https://api.github.com/repos/pandas-dev/pandas/pulls/21456
2018-06-13T06:39:59Z
2018-06-20T10:35:11Z
2018-06-20T10:35:10Z
2020-12-20T17:34:52Z
Fix tests fragile to PATH
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py index 47cded19f5300..bb976a1e3e81c 100644 --- a/pandas/tests/plotting/test_converter.py +++ b/pandas/tests/plotting/test_converter.py @@ -1,4 +1,5 @@ import subprocess +import sys import pytest from datetime import datetime, date @@ -27,7 +28,7 @@ def test_register_by_default(self): "import pandas as pd; " "units = dict(matplotlib.units.registry); " "assert pd.Timestamp in units)'") - call = ['python', '-c', code] + call = [sys.executable, '-c', code] assert subprocess.check_call(call) == 0 def test_warns(self): diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py index afd7993fefc70..cf98cff97669a 100644 --- a/pandas/tests/test_downstream.py +++ b/pandas/tests/test_downstream.py @@ -3,6 +3,7 @@ Testing that we work in the downstream packages """ import subprocess +import sys import pytest import numpy as np # noqa @@ -57,7 +58,7 @@ def test_xarray(df): def test_oo_optimizable(): # GH 21071 - subprocess.check_call(["python", "-OO", "-c", "import pandas"]) + subprocess.check_call([sys.executable, "-OO", "-c", "import pandas"]) @tm.network
Closes #21450. Gets the path of the currently running Python interpreter, as opposed to the one found first on the user's PATH.
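A quick stand-alone illustration of the fix (hypothetical script, not part of the diff): `sys.executable` pins the child process to the interpreter running the tests, whereas a bare `'python'` resolves through PATH and can pick up a different installation:

```python
import subprocess
import sys

# sys.executable is the absolute path of the interpreter currently
# running this code, so the subprocess is guaranteed to see the same
# Python version and site-packages as the parent process.
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.version_info[0])"]
)
major = int(out.strip())
```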
https://api.github.com/repos/pandas-dev/pandas/pulls/21453
2018-06-12T20:40:42Z
2018-06-13T10:25:59Z
2018-06-13T10:25:59Z
2018-08-11T18:10:13Z
PERF: typing and cdefs for tslibs.resolution
diff --git a/pandas/_libs/tslibs/resolution.pyx b/pandas/_libs/tslibs/resolution.pyx index d0a9501afe566..7d6dcb9ecb831 100644 --- a/pandas/_libs/tslibs/resolution.pyx +++ b/pandas/_libs/tslibs/resolution.pyx @@ -5,7 +5,7 @@ from cython cimport Py_ssize_t import numpy as np cimport numpy as cnp -from numpy cimport ndarray, int64_t +from numpy cimport ndarray, int64_t, int32_t cnp.import_array() from util cimport is_string_object, get_nat @@ -44,12 +44,12 @@ cdef int RESO_MIN = 4 cdef int RESO_HR = 5 cdef int RESO_DAY = 6 -_ONE_MICRO = 1000L -_ONE_MILLI = _ONE_MICRO * 1000 -_ONE_SECOND = _ONE_MILLI * 1000 -_ONE_MINUTE = 60 * _ONE_SECOND -_ONE_HOUR = 60 * _ONE_MINUTE -_ONE_DAY = 24 * _ONE_HOUR +_ONE_MICRO = <int64_t>1000L +_ONE_MILLI = <int64_t>(_ONE_MICRO * 1000) +_ONE_SECOND = <int64_t>(_ONE_MILLI * 1000) +_ONE_MINUTE = <int64_t>(60 * _ONE_SECOND) +_ONE_HOUR = <int64_t>(60 * _ONE_MINUTE) +_ONE_DAY = <int64_t>(24 * _ONE_HOUR) # ---------------------------------------------------------------------- @@ -349,7 +349,7 @@ class Resolution(object): # TODO: this is non performant logic here (and duplicative) and this # simply should call unique_1d directly # plus no reason to depend on khash directly -cdef unique_deltas(ndarray[int64_t] arr): +cdef ndarray[int64_t, ndim=1] unique_deltas(ndarray[int64_t] arr): cdef: Py_ssize_t i, n = len(arr) int64_t val @@ -373,21 +373,27 @@ cdef unique_deltas(ndarray[int64_t] arr): return result -def _is_multiple(us, mult): +cdef inline bint _is_multiple(int64_t us, int64_t mult): return us % mult == 0 -def _maybe_add_count(base, count): +cdef inline str _maybe_add_count(str base, int64_t count): if count != 1: - return '{count}{base}'.format(count=int(count), base=base) + return '{count}{base}'.format(count=count, base=base) else: return base -class _FrequencyInferer(object): +cdef class _FrequencyInferer(object): """ Not sure if I can avoid the state machine here """ + cdef public: + object index + object values + bint warn + bint 
is_monotonic + dict _cache def __init__(self, index, warn=True): self.index = index @@ -475,16 +481,23 @@ class _FrequencyInferer(object): def rep_stamp(self): return Timestamp(self.values[0]) - def month_position_check(self): + cdef month_position_check(self): # TODO: cythonize this, very slow - calendar_end = True - business_end = True - calendar_start = True - business_start = True - - years = self.fields['Y'] - months = self.fields['M'] - days = self.fields['D'] + cdef: + int32_t daysinmonth, y, m, d + bint calendar_end = True + bint business_end = True + bint calendar_start = True + bint business_start = True + bint cal + int32_t[:] years + int32_t[:] months + int32_t[:] days + + fields = self.fields + years = fields['Y'] + months = fields['M'] + days = fields['D'] weekdays = self.index.dayofweek from calendar import monthrange @@ -525,7 +538,7 @@ class _FrequencyInferer(object): def ydiffs(self): return unique_deltas(self.fields['Y'].astype('i8')) - def _infer_daily_rule(self): + cdef _infer_daily_rule(self): annual_rule = self._get_annual_rule() if annual_rule: nyears = self.ydiffs[0] @@ -562,7 +575,7 @@ class _FrequencyInferer(object): if wom_rule: return wom_rule - def _get_annual_rule(self): + cdef _get_annual_rule(self): if len(self.ydiffs) > 1: return None @@ -573,7 +586,7 @@ class _FrequencyInferer(object): return {'cs': 'AS', 'bs': 'BAS', 'ce': 'A', 'be': 'BA'}.get(pos_check) - def _get_quarterly_rule(self): + cdef _get_quarterly_rule(self): if len(self.mdiffs) > 1: return None @@ -584,14 +597,14 @@ class _FrequencyInferer(object): return {'cs': 'QS', 'bs': 'BQS', 'ce': 'Q', 'be': 'BQ'}.get(pos_check) - def _get_monthly_rule(self): + cdef _get_monthly_rule(self): if len(self.mdiffs) > 1: return None pos_check = self.month_position_check() return {'cs': 'MS', 'bs': 'BMS', 'ce': 'M', 'be': 'BM'}.get(pos_check) - def _is_business_daily(self): + cdef bint _is_business_daily(self): # quick check: cannot be business daily if self.day_deltas != [1, 3]: 
return False @@ -604,7 +617,7 @@ class _FrequencyInferer(object): return np.all(((weekdays == 0) & (shifts == 3)) | ((weekdays > 0) & (weekdays <= 4) & (shifts == 1))) - def _get_wom_rule(self): + cdef _get_wom_rule(self): # wdiffs = unique(np.diff(self.index.week)) # We also need -47, -49, -48 to catch index spanning year boundary # if not lib.ismember(wdiffs, set([4, 5, -47, -49, -48])).all(): @@ -627,9 +640,9 @@ class _FrequencyInferer(object): return 'WOM-{week}{weekday}'.format(week=week, weekday=wd) -class _TimedeltaFrequencyInferer(_FrequencyInferer): +cdef class _TimedeltaFrequencyInferer(_FrequencyInferer): - def _infer_daily_rule(self): + cdef _infer_daily_rule(self): if self.is_unique: days = self.deltas[0] / _ONE_DAY if days % 7 == 0:
Moving things lower-level will help improve performance (due in part to better Cython compilation).
https://api.github.com/repos/pandas-dev/pandas/pulls/21452
2018-06-12T19:28:29Z
2018-06-14T10:12:02Z
2018-06-14T10:12:02Z
2018-06-22T03:27:55Z
PERF: Use ccalendar.get_days_in_month over tslib.monthrange
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx index 0f58cfa761f21..4f73f196b0d9d 100644 --- a/pandas/_libs/tslib.pyx +++ b/pandas/_libs/tslib.pyx @@ -25,9 +25,7 @@ from tslibs.np_datetime cimport (check_dts_bounds, _string_to_dts, dt64_to_dtstruct, dtstruct_to_dt64, pydatetime_to_dt64, pydate_to_dt64, - get_datetime64_value, - days_per_month_table, - dayofweek, is_leapyear) + get_datetime64_value) from tslibs.np_datetime import OutOfBoundsDatetime from tslibs.parsing import parse_datetime_string @@ -763,18 +761,6 @@ cdef inline bint _parse_today_now(str val, int64_t* iresult): # Some general helper functions -def monthrange(int64_t year, int64_t month): - cdef: - int64_t days - - if month < 1 or month > 12: - raise ValueError("bad month number 0; must be 1-12") - - days = days_per_month_table[is_leapyear(year)][month - 1] - - return (dayofweek(year, month, 1), days) - - cpdef normalize_date(object dt): """ Normalize datetime.datetime value to midnight. Returns datetime.date as a diff --git a/pandas/_libs/tslibs/resolution.pyx b/pandas/_libs/tslibs/resolution.pyx index d0a9501afe566..2f185f4142a09 100644 --- a/pandas/_libs/tslibs/resolution.pyx +++ b/pandas/_libs/tslibs/resolution.pyx @@ -25,6 +25,7 @@ from fields import build_field_sarray from conversion import tz_convert from conversion cimport tz_convert_utc_to_tzlocal from ccalendar import MONTH_ALIASES, int_to_weekday +from ccalendar cimport get_days_in_month from pandas._libs.properties import cache_readonly from pandas._libs.tslib import Timestamp @@ -487,7 +488,6 @@ class _FrequencyInferer(object): days = self.fields['D'] weekdays = self.index.dayofweek - from calendar import monthrange for y, m, d, wd in zip(years, months, days, weekdays): if calendar_start: @@ -496,7 +496,7 @@ class _FrequencyInferer(object): business_start &= d == 1 or (d <= 3 and wd == 0) if calendar_end or business_end: - _, daysinmonth = monthrange(y, m) + daysinmonth = get_days_in_month(y, m) cal = d == daysinmonth if 
calendar_end: calendar_end &= cal diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py index d44b13172f86d..66622814f172d 100644 --- a/pandas/core/indexes/datetimes.py +++ b/pandas/core/indexes/datetimes.py @@ -55,6 +55,7 @@ from pandas._libs import (lib, index as libindex, tslib as libts, join as libjoin, Timestamp) from pandas._libs.tslibs import (timezones, conversion, fields, parsing, + ccalendar, resolution as libresolution) # -------- some conversion wrapper functions @@ -1451,14 +1452,14 @@ def _parsed_string_to_bounds(self, reso, parsed): Timestamp(datetime(parsed.year, 12, 31, 23, 59, 59, 999999), tz=self.tz)) elif reso == 'month': - d = libts.monthrange(parsed.year, parsed.month)[1] + d = ccalendar.get_days_in_month(parsed.year, parsed.month) return (Timestamp(datetime(parsed.year, parsed.month, 1), tz=self.tz), Timestamp(datetime(parsed.year, parsed.month, d, 23, 59, 59, 999999), tz=self.tz)) elif reso == 'quarter': qe = (((parsed.month - 1) + 2) % 12) + 1 # two months ahead - d = libts.monthrange(parsed.year, qe)[1] # at end of month + d = ccalendar.get_days_in_month(parsed.year, qe) # at end of month return (Timestamp(datetime(parsed.year, parsed.month, 1), tz=self.tz), Timestamp(datetime(parsed.year, qe, d, 23, 59, diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py index 5369b1a94a956..a1c5a825054ec 100644 --- a/pandas/tests/tseries/offsets/test_offsets.py +++ b/pandas/tests/tseries/offsets/test_offsets.py @@ -41,12 +41,6 @@ from .common import assert_offset_equal, assert_onOffset -def test_monthrange(): - import calendar - for y in range(2000, 2013): - for m in range(1, 13): - assert tslib.monthrange(y, m) == calendar.monthrange(y, m) - #### # Misc function tests #### diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py index c294110d89ec5..a5a983bf94bb8 100644 --- a/pandas/tseries/offsets.py +++ b/pandas/tseries/offsets.py @@ -1140,7 +1140,7 @@ def 
apply(self, other): # shift `other` to self.day_of_month, incrementing `n` if necessary n = liboffsets.roll_convention(other.day, self.n, self.day_of_month) - days_in_month = tslib.monthrange(other.year, other.month)[1] + days_in_month = ccalendar.get_days_in_month(other.year, other.month) # For SemiMonthBegin on other.day == 1 and # SemiMonthEnd on other.day == days_in_month, @@ -1217,7 +1217,7 @@ class SemiMonthEnd(SemiMonthOffset): def onOffset(self, dt): if self.normalize and not _is_normalized(dt): return False - _, days_in_month = tslib.monthrange(dt.year, dt.month) + days_in_month = ccalendar.get_days_in_month(dt.year, dt.month) return dt.day in (self.day_of_month, days_in_month) def _apply(self, n, other): diff --git a/setup.py b/setup.py index 90ec8e91a0700..d6890a08b09d0 100755 --- a/setup.py +++ b/setup.py @@ -603,6 +603,7 @@ def pxd(name): 'pyxfile': '_libs/tslibs/resolution', 'pxdfiles': ['_libs/src/util', '_libs/khash', + '_libs/tslibs/ccalendar', '_libs/tslibs/frequencies', '_libs/tslibs/timezones'], 'depends': tseries_depends,
asv looks like a wash, probably to be expected since these calls make up such a small part of the methods they are used in. tslibs.resolution was using the stdlib calendar.monthrange rather than tslib.monthrange, so the perf bump should be bigger there.
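The stdlib function being replaced here returns a 2-tuple; `get_days_in_month` only needs the day count, which is why the pandas call sites index `[1]`. A small sketch of the equivalence the deleted `test_monthrange` used to assert:

```python
import calendar

# calendar.monthrange(year, month) -> (weekday of the 1st, days in month)
weekday_of_first, days_in_month = calendar.monthrange(2020, 2)  # leap year
print(days_in_month)  # 29
```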
https://api.github.com/repos/pandas-dev/pandas/pulls/21451
2018-06-12T19:19:24Z
2018-06-13T10:32:55Z
2018-06-13T10:32:55Z
2018-06-13T14:43:49Z
DOC: Fixed warning in doc build [ci skip]
diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst index 56fea1ccfd9dc..f683fd6892ea5 100644 --- a/doc/source/ecosystem.rst +++ b/doc/source/ecosystem.rst @@ -39,7 +39,7 @@ Use pandas DataFrames in your `scikit-learn <http://scikit-learn.org/>`__ ML pipeline. `Featuretools <https://github.com/featuretools/featuretools/>`__ -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Featuretools is a Python library for automated feature engineering built on top of pandas. It excels at transforming temporal and relational datasets into feature matrices for machine learning using reusable feature engineering "primitives". Users can contribute their own primitives in Python and share them with the rest of the community.
[ci skip]
https://api.github.com/repos/pandas-dev/pandas/pulls/21449
2018-06-12T18:03:10Z
2018-06-13T07:55:54Z
2018-06-13T07:55:54Z
2018-06-13T08:18:33Z
perf improvements in tslibs.period
diff --git a/pandas/_libs/src/util.pxd b/pandas/_libs/src/util.pxd index d8249ec130f4d..2c1876fad95d2 100644 --- a/pandas/_libs/src/util.pxd +++ b/pandas/_libs/src/util.pxd @@ -161,3 +161,18 @@ cdef inline bint _checknull(object val): cdef inline bint is_period_object(object val): return getattr(val, '_typ', '_typ') == 'period' + + +cdef inline bint is_offset_object(object val): + """ + Check if an object is a DateOffset object. + + Parameters + ---------- + val : object + + Returns + ------- + is_date_offset : bool + """ + return getattr(val, '_typ', None) == "dateoffset" diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx index 4f73f196b0d9d..6588b5476e2b9 100644 --- a/pandas/_libs/tslib.pyx +++ b/pandas/_libs/tslib.pyx @@ -125,7 +125,7 @@ def ints_to_pydatetime(ndarray[int64_t] arr, tz=None, freq=None, elif box == "datetime": func_create = create_datetime_from_ts else: - raise ValueError("box must be one of 'datetime', 'date', 'time' or" + + raise ValueError("box must be one of 'datetime', 'date', 'time' or" " 'timestamp'") if tz is not None: diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx index 008747c0a9e78..cc2fb6e0617cb 100644 --- a/pandas/_libs/tslibs/period.pyx +++ b/pandas/_libs/tslibs/period.pyx @@ -19,7 +19,8 @@ from pandas.compat import PY2 cimport cython -from cpython.datetime cimport PyDateTime_Check, PyDateTime_IMPORT +from cpython.datetime cimport (PyDateTime_Check, PyDelta_Check, + PyDateTime_IMPORT) # import datetime C API PyDateTime_IMPORT @@ -1058,18 +1059,21 @@ cdef class _Period(object): return hash((self.ordinal, self.freqstr)) def _add_delta(self, other): - if isinstance(other, (timedelta, np.timedelta64, offsets.Tick)): + cdef: + int64_t nanos, offset_nanos + + if (PyDelta_Check(other) or util.is_timedelta64_object(other) or + isinstance(other, offsets.Tick)): offset = frequencies.to_offset(self.freq.rule_code) if isinstance(offset, offsets.Tick): nanos = delta_to_nanoseconds(other) offset_nanos = 
delta_to_nanoseconds(offset) - if nanos % offset_nanos == 0: ordinal = self.ordinal + (nanos // offset_nanos) return Period(ordinal=ordinal, freq=self.freq) msg = 'Input cannot be converted to Period(freq={0})' raise IncompatibleFrequency(msg.format(self.freqstr)) - elif isinstance(other, offsets.DateOffset): + elif util.is_offset_object(other): freqstr = other.rule_code base = get_base_alias(freqstr) if base == self.freq.rule_code: @@ -1082,8 +1086,8 @@ cdef class _Period(object): def __add__(self, other): if is_period_object(self): - if isinstance(other, (timedelta, np.timedelta64, - offsets.DateOffset)): + if (PyDelta_Check(other) or util.is_timedelta64_object(other) or + util.is_offset_object(other)): return self._add_delta(other) elif other is NaT: return NaT @@ -1109,8 +1113,8 @@ cdef class _Period(object): def __sub__(self, other): if is_period_object(self): - if isinstance(other, (timedelta, np.timedelta64, - offsets.DateOffset)): + if (PyDelta_Check(other) or util.is_timedelta64_object(other) or + util.is_offset_object(other)): neg_other = -other return self + neg_other elif util.is_integer_object(other):
Removes an unnecessary call to `frequencies.to_offset`, one of the last few non-Cython dependencies in the file. Will post asv results when available.
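The new `util.is_offset_object` helper avoids an `isinstance` check against `offsets.DateOffset` (and the Python-level import it implies) by duck-typing on the `_typ` attribute, the same trick the existing `is_period_object` uses. A pure-Python sketch of the idea (the real version is a `cdef inline` in `util.pxd`):

```python
def is_offset_object(val):
    # DateOffset instances carry _typ = "dateoffset"; anything else,
    # including objects with no _typ attribute at all, fails the check.
    return getattr(val, "_typ", None) == "dateoffset"

class FakeOffset:
    _typ = "dateoffset"

print(is_offset_object(FakeOffset()), is_offset_object(object()))  # True False
```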
https://api.github.com/repos/pandas-dev/pandas/pulls/21447
2018-06-12T16:49:49Z
2018-06-15T17:19:04Z
2018-06-15T17:19:04Z
2018-06-22T03:27:54Z
DOC: 0.23.1 release
diff --git a/doc/source/release.rst b/doc/source/release.rst index fa03d614ed42c..7bbd4ba43e66f 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -37,10 +37,57 @@ analysis / manipulation tool available in any language. * Binary installers on PyPI: https://pypi.org/project/pandas * Documentation: http://pandas.pydata.org +pandas 0.23.1 +------------- + +**Release date**: June 12, 2018 + +This is a minor release from 0.23.0 and includes a number of bug fixes and +performance improvements. + +See the :ref:`full whatsnew <whatsnew_0231>` for a list of all the changes. + +Thanks +~~~~~~ + +A total of 30 people contributed to this release. People with a "+" by their +names contributed a patch for the first time. + +* Adam J. Stewart +* Adam Kim + +* Aly Sivji +* Chalmer Lowe + +* Damini Satya + +* Dr. Irv +* Gabe Fernando + +* Giftlin Rajaiah +* Jeff Reback +* Jeremy Schendel + +* Joris Van den Bossche +* Kalyan Gokhale + +* Kevin Sheppard +* Matthew Roeschke +* Max Kanter + +* Ming Li +* Pyry Kovanen + +* Stefano Cianciulli +* Tom Augspurger +* Uddeshya Singh + +* Wenhuan +* William Ayd +* chris-b1 +* gfyoung +* h-vetinari +* nprad + +* ssikdar1 + +* tmnhat2001 +* topper-123 +* zertrin + + pandas 0.23.0 ------------- -**Release date**: May 15, 2017 +**Release date**: May 15, 2018 This is a major release from 0.22.0 and includes a number of API changes, new features, enhancements, and performance improvements along with a large number
https://api.github.com/repos/pandas-dev/pandas/pulls/21446
2018-06-12T16:42:40Z
2018-06-12T16:42:56Z
2018-06-12T16:42:56Z
2018-06-12T17:00:56Z
Resolves Issue 21344: provide a timedelta in a non-string format
diff --git a/doc/source/whatsnew/v0.23.1.txt b/doc/source/whatsnew/v0.23.1.txt index db25bcf8113f5..51dcb324f538c 100644 --- a/doc/source/whatsnew/v0.23.1.txt +++ b/doc/source/whatsnew/v0.23.1.txt @@ -133,3 +133,4 @@ Bug Fixes - Tab completion on :class:`Index` in IPython no longer outputs deprecation warnings (:issue:`21125`) - Bug preventing pandas being used on Windows without C++ redistributable installed (:issue:`21106`) +- Add `resolution_timedelta` to :class:`Timedelta` to get non-string representations of resolution (:issue: `21344`) diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx index 87dc371195b5b..91c90263b0f43 100644 --- a/pandas/_libs/tslibs/timedeltas.pyx +++ b/pandas/_libs/tslibs/timedeltas.pyx @@ -795,7 +795,11 @@ cdef class _Timedelta(timedelta): @property def resolution(self): - """ return a string representing the lowest resolution that we have """ + """ + Return a string representing the lowest resolution that we have. + Note that this is nonstandard behavior. + To retrieve a timedelta object use the resolution_timedelta property + """ self._ensure_components() if self._ns: @@ -813,6 +817,32 @@ cdef class _Timedelta(timedelta): else: return "D" + @property + def resolution_timedelta(self): + """ + Return a timedelta object (rather than a string) + representing the lowest resolution we have. + to retrieve a string use the resolution property. + """ + + self._ensure_components() + if self._ns: + # At time of writing datetime.timedelta doesn't + # support nanoseconds as a keyword argument. 
+ return timedelta(microseconds=0.1) + elif self._us: + return timedelta(microseconds=1) + elif self._ms: + return timedelta(milliseconds=1) + elif self._s: + return timedelta(seconds=1) + elif self._m: + return timedelta(minutes=1) + elif self._h: + return timedelta(hours=1) + else: + return timedelta(days=1) + @property def nanoseconds(self): """ diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py index 205fdf49d3e91..509de841b7730 100644 --- a/pandas/tests/scalar/timedelta/test_timedelta.py +++ b/pandas/tests/scalar/timedelta/test_timedelta.py @@ -588,3 +588,31 @@ def test_components(self): result = s.dt.components assert not result.iloc[0].isna().all() assert result.iloc[1].isna().all() + + def test_resolution(self): + # GH 21344 + assert Timedelta(nanoseconds=30).resolution == 'N' + # Note that datetime.timedelta doesn't offer + # finer resolution than microseconds + assert Timedelta(nanoseconds=30).resolution_timedelta.resolution == \ + timedelta(0, 0, 1) + + assert Timedelta(microseconds=30).resolution == 'U' + assert Timedelta(microseconds=30).resolution_timedelta.resolution == \ + timedelta(0, 0, 1) + + assert Timedelta(milliseconds=30).resolution == 'L' + assert Timedelta(milliseconds=30).resolution_timedelta.resolution == \ + timedelta(0, 0, 1) + + assert Timedelta(seconds=30).resolution == 'S' + assert Timedelta(seconds=30).resolution_timedelta.resolution == \ + timedelta(0, 0, 1) + + assert Timedelta(minutes=30).resolution == 'T' + assert Timedelta(minutes=30).resolution_timedelta.resolution == \ + timedelta(0, 0, 1) + + assert Timedelta(hours=2).resolution == 'H' + assert Timedelta(hours=2).resolution_timedelta.resolution == \ + timedelta(0, 0, 1)
- [x] closes #21344 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [x] whatsnew entry This extends the API - and there is a judgement call about how to treat nanosecond-resolution pd.Timedelta() objects in the context of datetime.timedelta() objects.
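A stdlib-only sketch of the return values this PR proposes (hypothetical helper name; the PR itself adds this as a property on `Timedelta`). Note that `datetime.timedelta` cannot represent nanoseconds, which is the judgement call the description mentions, so `'N'` is omitted here:

```python
from datetime import timedelta

# Single-letter pandas resolution codes mapped to the timedelta the
# proposed resolution_timedelta property would return.  'N' (nanoseconds)
# is left out because datetime.timedelta bottoms out at microseconds.
_RESOLUTION_TO_TIMEDELTA = {
    "U": timedelta(microseconds=1),
    "L": timedelta(milliseconds=1),
    "S": timedelta(seconds=1),
    "T": timedelta(minutes=1),
    "H": timedelta(hours=1),
    "D": timedelta(days=1),
}

def resolution_timedelta(code):
    """Hypothetical helper: resolution code string -> datetime.timedelta."""
    return _RESOLUTION_TO_TIMEDELTA[code]
```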
https://api.github.com/repos/pandas-dev/pandas/pulls/21444
2018-06-12T15:45:49Z
2018-11-23T03:29:45Z
null
2018-11-23T03:29:45Z
0.23.1 backports 2
diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst index 30cdb06b28487..6714398084186 100644 --- a/doc/source/ecosystem.rst +++ b/doc/source/ecosystem.rst @@ -38,7 +38,10 @@ Statsmodels leverages pandas objects as the underlying data container for comput Use pandas DataFrames in your `scikit-learn <http://scikit-learn.org/>`__ ML pipeline. +`Featuretools <https://github.com/featuretools/featuretools/>`__ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Featuretools is a Python library for automated feature engineering built on top of pandas. It excels at transforming temporal and relational datasets into feature matrices for machine learning using reusable feature engineering "primitives". Users can contribute their own primitives in Python and share them with the rest of the community. .. _ecosystem.visualization: diff --git a/doc/source/io.rst b/doc/source/io.rst index aa2484b0cb5c3..d818f486ad62d 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -4719,14 +4719,6 @@ writes ``data`` to the database in batches of 1000 rows at a time: data.to_sql('data_chunked', engine, chunksize=1000) -.. note:: - - The function :func:`~pandas.DataFrame.to_sql` will perform a multivalue - insert if the engine dialect ``supports_multivalues_insert``. This will - greatly speed up the insert in some cases. - -SQL data types -++++++++++++++ :func:`~pandas.DataFrame.to_sql` will try to map your data to an appropriate SQL data type based on the dtype of the data. When you have columns of dtype diff --git a/doc/source/whatsnew/v0.23.1.txt b/doc/source/whatsnew/v0.23.1.txt index b3c1dbc86525d..db25bcf8113f5 100644 --- a/doc/source/whatsnew/v0.23.1.txt +++ b/doc/source/whatsnew/v0.23.1.txt @@ -10,19 +10,69 @@ and bug fixes. We recommend that all users upgrade to this version. :local: :backlinks: none -.. _whatsnew_0231.enhancements: - -New features -~~~~~~~~~~~~ - - -.. _whatsnew_0231.deprecations: - -Deprecations -~~~~~~~~~~~~ - -- -- +.. 
_whatsnew_0231.fixed_regressions: + +Fixed Regressions +~~~~~~~~~~~~~~~~~ + +**Comparing Series with datetime.date** + +We've reverted a 0.23.0 change to comparing a :class:`Series` holding datetimes and a ``datetime.date`` object (:issue:`21152`). +In pandas 0.22 and earlier, comparing a Series holding datetimes and ``datetime.date`` objects would coerce the ``datetime.date`` to a datetime before comapring. +This was inconsistent with Python, NumPy, and :class:`DatetimeIndex`, which never consider a datetime and ``datetime.date`` equal. + +In 0.23.0, we unified operations between DatetimeIndex and Series, and in the process changed comparisons between a Series of datetimes and ``datetime.date`` without warning. + +We've temporarily restored the 0.22.0 behavior, so datetimes and dates may again compare equal, but restore the 0.23.0 behavior in a future release. + +To summarize, here's the behavior in 0.22.0, 0.23.0, 0.23.1: + +.. code-block:: python + + # 0.22.0... Silently coerce the datetime.date + >>> Series(pd.date_range('2017', periods=2)) == datetime.date(2017, 1, 1) + 0 True + 1 False + dtype: bool + + # 0.23.0... Do not coerce the datetime.date + >>> Series(pd.date_range('2017', periods=2)) == datetime.date(2017, 1, 1) + 0 False + 1 False + dtype: bool + + # 0.23.1... Coerce the datetime.date with a warning + >>> Series(pd.date_range('2017', periods=2)) == datetime.date(2017, 1, 1) + /bin/python:1: FutureWarning: Comparing Series of datetimes with 'datetime.date'. Currently, the + 'datetime.date' is coerced to a datetime. In the future pandas will + not coerce, and the values not compare equal to the 'datetime.date'. + To retain the current behavior, convert the 'datetime.date' to a + datetime with 'pd.Timestamp'. + #!/bin/python3 + 0 True + 1 False + dtype: bool + +In addition, ordering comparisons will raise a ``TypeError`` in the future. 
+ +**Other Fixes** + +- Reverted the ability of :func:`~DataFrame.to_sql` to perform multivalue + inserts as this caused regression in certain cases (:issue:`21103`). + In the future this will be made configurable. +- Fixed regression in the :attr:`DatetimeIndex.date` and :attr:`DatetimeIndex.time` + attributes in case of timezone-aware data: :attr:`DatetimeIndex.time` returned + a tz-aware time instead of tz-naive (:issue:`21267`) and :attr:`DatetimeIndex.date` + returned incorrect date when the input date has a non-UTC timezone (:issue:`21230`). +- Fixed regression in :meth:`pandas.io.json.json_normalize` when called with ``None`` values + in nested levels in JSON, and to not drop keys with value as `None` (:issue:`21158`, :issue:`21356`). +- Bug in :meth:`~DataFrame.to_csv` causes encoding error when compression and encoding are specified (:issue:`21241`, :issue:`21118`) +- Bug preventing pandas from being importable with -OO optimization (:issue:`21071`) +- Bug in :meth:`Categorical.fillna` incorrectly raising a ``TypeError`` when `value` the individual categories are iterable and `value` is an iterable (:issue:`21097`, :issue:`19788`) +- Fixed regression in constructors coercing NA values like ``None`` to strings when passing ``dtype=str`` (:issue:`21083`) +- Regression in :func:`pivot_table` where an ordered ``Categorical`` with missing + values for the pivot's ``index`` would give a mis-aligned result (:issue:`21133`) +- Fixed regression in merging on boolean index/columns (:issue:`21119`). .. _whatsnew_0231.performance: @@ -30,82 +80,56 @@ Performance Improvements ~~~~~~~~~~~~~~~~~~~~~~~~ - Improved performance of :meth:`CategoricalIndex.is_monotonic_increasing`, :meth:`CategoricalIndex.is_monotonic_decreasing` and :meth:`CategoricalIndex.is_monotonic` (:issue:`21025`) -- -- - -Documentation Changes -~~~~~~~~~~~~~~~~~~~~~ +- Improved performance of :meth:`CategoricalIndex.is_unique` (:issue:`21107`) -- -- .. 
_whatsnew_0231.bug_fixes: Bug Fixes ~~~~~~~~~ -Groupby/Resample/Rolling -^^^^^^^^^^^^^^^^^^^^^^^^ +**Groupby/Resample/Rolling** - Bug in :func:`DataFrame.agg` where applying multiple aggregation functions to a :class:`DataFrame` with duplicated column names would cause a stack overflow (:issue:`21063`) - Bug in :func:`pandas.core.groupby.GroupBy.ffill` and :func:`pandas.core.groupby.GroupBy.bfill` where the fill within a grouping would not always be applied as intended due to the implementations' use of a non-stable sort (:issue:`21207`) - Bug in :func:`pandas.core.groupby.GroupBy.rank` where results did not scale to 100% when specifying ``method='dense'`` and ``pct=True`` +- Bug in :func:`pandas.DataFrame.rolling` and :func:`pandas.Series.rolling` which incorrectly accepted a 0 window size rather than raising (:issue:`21286`) -Strings -^^^^^^^ +**Data-type specific** - Bug in :meth:`Series.str.replace()` where the method throws `TypeError` on Python 3.5.2 (:issue: `21078`) - -Timedelta -^^^^^^^^^ - Bug in :class:`Timedelta`: where passing a float with a unit would prematurely round the float precision (:issue: `14156`) +- Bug in :func:`pandas.testing.assert_index_equal` which raised ``AssertionError`` incorrectly, when comparing two :class:`CategoricalIndex` objects with param ``check_categorical=False`` (:issue:`19776`) -Categorical -^^^^^^^^^^^ - -- Bug in :func:`pandas.util.testing.assert_index_equal` which raised ``AssertionError`` incorrectly, when comparing two :class:`CategoricalIndex` objects with param ``check_categorical=False`` (:issue:`19776`) -- Bug in :meth:`Categorical.fillna` incorrectly raising a ``TypeError`` when `value` the individual categories are iterable and `value` is an iterable (:issue:`21097`, :issue:`19788`) - -Sparse -^^^^^^ +**Sparse** - Bug in :attr:`SparseArray.shape` which previously only returned the shape :attr:`SparseArray.sp_values` (:issue:`21126`) -Conversion -^^^^^^^^^^ - -- -- - -Indexing -^^^^^^^^ +**Indexing** - Bug in 
:meth:`Series.reset_index` where appropriate error was not raised with an invalid level name (:issue:`20925`) - Bug in :func:`interval_range` when ``start``/``periods`` or ``end``/``periods`` are specified with float ``start`` or ``end`` (:issue:`21161`) - Bug in :meth:`MultiIndex.set_names` where error raised for a ``MultiIndex`` with ``nlevels == 1`` (:issue:`21149`) -- +- Bug in :class:`IntervalIndex` constructors where creating an ``IntervalIndex`` from categorical data was not fully supported (:issue:`21243`, issue:`21253`) +- Bug in :meth:`MultiIndex.sort_index` which was not guaranteed to sort correctly with ``level=1``; this was also causing data misalignment in particular :meth:`DataFrame.stack` operations (:issue:`20994`, :issue:`20945`, :issue:`21052`) -I/O -^^^ +**Plotting** -- Bug in IO methods specifying ``compression='zip'`` which produced uncompressed zip archives (:issue:`17778`, :issue:`21144`) -- Bug in :meth:`DataFrame.to_stata` which prevented exporting DataFrames to buffers and most file-like objects (:issue:`21041`) -- +- New keywords (sharex, sharey) to turn on/off sharing of x/y-axis by subplots generated with pandas.DataFrame().groupby().boxplot() (:issue: `20968`) -Plotting -^^^^^^^^ +**I/O** -- -- +- Bug in IO methods specifying ``compression='zip'`` which produced uncompressed zip archives (:issue:`17778`, :issue:`21144`) +- Bug in :meth:`DataFrame.to_stata` which prevented exporting DataFrames to buffers and most file-like objects (:issue:`21041`) +- Bug in :meth:`read_stata` and :class:`StataReader` which did not correctly decode utf-8 strings on Python 3 from Stata 14 files (dta version 118) (:issue:`21244`) +- Bug in IO JSON :func:`read_json` reading empty JSON schema with ``orient='table'`` back to :class:`DataFrame` caused an error (:issue:`21287`) -Reshaping -^^^^^^^^^ +**Reshaping** - Bug in :func:`concat` where error was raised in concatenating :class:`Series` with numpy scalar and tuple names (:issue:`21015`) -- +- Bug in 
:func:`concat` warning message providing the wrong guidance for future behavior (:issue:`21101`) -Other -^^^^^ +**Other** - Tab completion on :class:`Index` in IPython no longer outputs deprecation warnings (:issue:`21125`) -- Bug preventing pandas from being importable with -OO optimization (:issue:`21071`) +- Bug preventing pandas being used on Windows without C++ redistributable installed (:issue:`21106`) diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx index 17453d8af1297..0f58cfa761f21 100644 --- a/pandas/_libs/tslib.pyx +++ b/pandas/_libs/tslib.pyx @@ -77,7 +77,7 @@ cdef inline object create_time_from_ts( int64_t value, pandas_datetimestruct dts, object tz, object freq): """ convenience routine to construct a datetime.time from its parts """ - return time(dts.hour, dts.min, dts.sec, dts.us, tz) + return time(dts.hour, dts.min, dts.sec, dts.us) def ints_to_pydatetime(ndarray[int64_t] arr, tz=None, freq=None, diff --git a/pandas/conftest.py b/pandas/conftest.py index b09cb872a12fb..d5f399c7cd63d 100644 --- a/pandas/conftest.py +++ b/pandas/conftest.py @@ -105,6 +105,16 @@ def compression(request): return request.param +@pytest.fixture(params=['gzip', 'bz2', 'zip', + pytest.param('xz', marks=td.skip_if_no_lzma)]) +def compression_only(request): + """ + Fixture for trying common compression types in compression tests excluding + uncompressed case + """ + return request.param + + @pytest.fixture(scope='module') def datetime_tz_utc(): from datetime import timezone @@ -149,3 +159,14 @@ def tz_aware_fixture(request): Fixture for trying explicit timezones: {0} """ return request.param + + +@pytest.fixture(params=[str, 'str', 'U']) +def string_dtype(request): + """Parametrized fixture for string dtypes. 
+ + * str + * 'str' + * 'U' + """ + return request.param diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py index e4ed6d544d42e..ebc7a13234a98 100644 --- a/pandas/core/dtypes/cast.py +++ b/pandas/core/dtypes/cast.py @@ -1227,3 +1227,45 @@ def construct_1d_object_array_from_listlike(values): result = np.empty(len(values), dtype='object') result[:] = values return result + + +def construct_1d_ndarray_preserving_na(values, dtype=None, copy=False): + """ + Construct a new ndarray, coercing `values` to `dtype`, preserving NA. + + Parameters + ---------- + values : Sequence + dtype : numpy.dtype, optional + copy : bool, default False + Note that copies may still be made with ``copy=False`` if casting + is required. + + Returns + ------- + arr : ndarray[dtype] + + Examples + -------- + >>> np.array([1.0, 2.0, None], dtype='str') + array(['1.0', '2.0', 'None'], dtype='<U4') + + >>> construct_1d_ndarray_preserving_na([1.0, 2.0, None], dtype='str') + + + """ + subarr = np.array(values, dtype=dtype, copy=copy) + + if dtype is not None and dtype.kind in ("U", "S"): + # GH-21083 + # We can't just return np.array(subarr, dtype='str') since + # NumPy will convert the non-string objects into strings, + # including NA values. So we have to go + # string -> object -> update NA, which requires an + # additional pass over the data. + na_values = isna(values) + subarr2 = subarr.astype(object) + subarr2[na_values] = np.asarray(values, dtype=object)[na_values] + subarr = subarr2 + + return subarr diff --git a/pandas/core/indexes/api.py b/pandas/core/indexes/api.py index f9501cd2f9ddf..6f4fdfe5bf5cd 100644 --- a/pandas/core/indexes/api.py +++ b/pandas/core/indexes/api.py @@ -24,9 +24,9 @@ Sorting because non-concatenation axis is not aligned. A future version of pandas will change to not sort by default. -To accept the future behavior, pass 'sort=True'. +To accept the future behavior, pass 'sort=False'.
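The NA-preserving cast added in ``construct_1d_ndarray_preserving_na`` above can be sketched as a standalone function. This is a hedged illustration, not the pandas implementation: the helper name ``preserve_na_cast`` is made up for this sketch, and pandas' ``isna`` is replaced by a simplified ``None``/NaN check so the example is self-contained.

```python
import numpy as np

def preserve_na_cast(values, dtype=None, copy=False):
    # A plain np.array(values, dtype='str') stringifies NA values too
    # (None becomes the literal string 'None'), so for string dtypes we
    # cast once, then restore NA values through an object-dtype pass.
    subarr = np.array(values, dtype=dtype, copy=copy)
    if dtype is not None and dtype.kind in ("U", "S"):
        # simplified stand-in for pandas.isna: catches None and float NaN
        mask = np.array([v is None or v != v for v in values])
        subarr = subarr.astype(object)
        subarr[mask] = np.asarray(values, dtype=object)[mask]
    return subarr

# NumPy alone turns None into the string 'None':
print(np.array([1.0, 2.0, None], dtype=np.dtype("str")))
# the extra object-dtype pass keeps it as a real missing value:
print(preserve_na_cast([1.0, 2.0, None], dtype=np.dtype("str")))
```

The second pass is the cost of correctness here: flexible string dtypes cannot hold a missing-value sentinel, so the result necessarily falls back to ``object`` dtype when NA values are present.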
-To retain the current behavior and silence the warning, pass sort=False +To retain the current behavior and silence the warning, pass 'sort=True'. """) diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py index 78b7ae7054248..150eca32e229d 100644 --- a/pandas/core/indexes/category.py +++ b/pandas/core/indexes/category.py @@ -378,7 +378,7 @@ def _engine(self): # introspection @cache_readonly def is_unique(self): - return not self.duplicated().any() + return self._engine.is_unique @property def is_monotonic_increasing(self): diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py index 83950f1d71633..0ddf33cdcae73 100644 --- a/pandas/core/indexes/datetimes.py +++ b/pandas/core/indexes/datetimes.py @@ -2032,7 +2032,16 @@ def time(self): """ Returns numpy array of datetime.time. The time part of the Timestamps. """ - return libts.ints_to_pydatetime(self.asi8, self.tz, box="time") + + # If the Timestamps have a timezone that is not UTC, + # convert them into their i8 representation while + # keeping their timezone and not using UTC + if (self.tz is not None and self.tz is not utc): + timestamps = self._local_timestamps() + else: + timestamps = self.asi8 + + return libts.ints_to_pydatetime(timestamps, box="time") @property def date(self): @@ -2040,7 +2049,16 @@ def date(self): Returns numpy array of python datetime.date objects (namely, the date part of Timestamps without timezone information). 
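The ``DatetimeIndex.time``/``DatetimeIndex.date`` change above switches non-UTC indexes to their local i8 representation before boxing, so the accessors report wall-clock values in the index's own timezone instead of UTC. A minimal check of the fixed behavior (GH 21230 / GH 21267), assuming a pandas build that includes this fix:

```python
import pandas as pd
from datetime import date, time

# A tz-aware index; before the fix, .time/.date were computed from the
# UTC i8 values, so a US/Eastern timestamp could report a UTC wall clock.
idx = pd.DatetimeIndex(["2018-06-04 10:20:30"], tz="US/Eastern")

print(idx.time[0])  # wall-clock time in US/Eastern
print(idx.date[0])  # wall-clock date in US/Eastern
```

With the fix, both accessors agree with ``Timestamp("2018-06-04 10:20:30", tz="US/Eastern")`` rather than with its UTC equivalent (14:20:30).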
""" - return libts.ints_to_pydatetime(self.normalize().asi8, box="date") + + # If the Timestamps have a timezone that is not UTC, + # convert them into their i8 representation while + # keeping their timezone and not using UTC + if (self.tz is not None and self.tz is not utc): + timestamps = self._local_timestamps() + else: + timestamps = self.asi8 + + return libts.ints_to_pydatetime(timestamps, box="date") def normalize(self): """ diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py index 8f8d8760583ce..eb9d7efc06c27 100644 --- a/pandas/core/indexes/interval.py +++ b/pandas/core/indexes/interval.py @@ -112,6 +112,10 @@ def maybe_convert_platform_interval(values): ------- array """ + if is_categorical_dtype(values): + # GH 21243/21253 + values = np.array(values) + if isinstance(values, (list, tuple)) and len(values) == 0: # GH 19016 # empty lists/tuples get object dtype by default, but this is not diff --git a/pandas/core/ops.py b/pandas/core/ops.py index e14f82906cd06..540ebeee438f6 100644 --- a/pandas/core/ops.py +++ b/pandas/core/ops.py @@ -5,7 +5,10 @@ """ # necessary to enforce truediv in Python 2.X from __future__ import division +import datetime import operator +import textwrap +import warnings import numpy as np import pandas as pd @@ -1197,8 +1200,35 @@ def wrapper(self, other, axis=None): if is_datetime64_dtype(self) or is_datetime64tz_dtype(self): # Dispatch to DatetimeIndex to ensure identical # Series/Index behavior + if (isinstance(other, datetime.date) and + not isinstance(other, datetime.datetime)): + # https://github.com/pandas-dev/pandas/issues/21152 + # Compatibility for difference between Series comparison w/ + # datetime and date + msg = ( + "Comparing Series of datetimes with 'datetime.date'. " + "Currently, the 'datetime.date' is coerced to a " + "datetime. In the future pandas will not coerce, " + "and {future}. 
" + "To retain the current behavior, " + "convert the 'datetime.date' to a datetime with " + "'pd.Timestamp'." + ) + + if op in {operator.lt, operator.le, operator.gt, operator.ge}: + future = "a TypeError will be raised" + else: + future = ( + "'the values will not compare equal to the " + "'datetime.date'" + ) + msg = '\n'.join(textwrap.wrap(msg.format(future=future))) + warnings.warn(msg, FutureWarning, stacklevel=2) + other = pd.Timestamp(other) + res_values = dispatch_to_index_op(op, self, other, pd.DatetimeIndex) + return self._constructor(res_values, index=self.index, name=res_name) diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py index 4d8897fb7c811..d69d79ca9b098 100644 --- a/pandas/core/reshape/merge.py +++ b/pandas/core/reshape/merge.py @@ -28,6 +28,7 @@ is_int_or_datetime_dtype, is_dtype_equal, is_bool, + is_bool_dtype, is_list_like, is_datetimelike, _ensure_int64, @@ -974,9 +975,14 @@ def _maybe_coerce_merge_keys(self): # Check if we are trying to merge on obviously # incompatible dtypes GH 9780, GH 15800 - elif is_numeric_dtype(lk) and not is_numeric_dtype(rk): + + # boolean values are considered as numeric, but are still allowed + # to be merged on object boolean values + elif ((is_numeric_dtype(lk) and not is_bool_dtype(lk)) + and not is_numeric_dtype(rk)): raise ValueError(msg) - elif not is_numeric_dtype(lk) and is_numeric_dtype(rk): + elif (not is_numeric_dtype(lk) + and (is_numeric_dtype(rk) and not is_bool_dtype(rk))): raise ValueError(msg) elif is_datetimelike(lk) and not is_datetimelike(rk): raise ValueError(msg) diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py index e02420323704e..9a2ad5d13d77a 100644 --- a/pandas/core/reshape/pivot.py +++ b/pandas/core/reshape/pivot.py @@ -1,8 +1,10 @@ # pylint: disable=E1103 -from pandas.core.dtypes.common import is_list_like, is_scalar +from pandas.core.dtypes.common import ( + is_list_like, is_scalar, is_integer_dtype) from pandas.core.dtypes.generic import 
ABCDataFrame, ABCSeries +from pandas.core.dtypes.cast import maybe_downcast_to_dtype from pandas.core.reshape.concat import concat from pandas.core.series import Series @@ -79,8 +81,22 @@ def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean', pass values = list(values) - grouped = data.groupby(keys, observed=dropna) + # group by the cartesian product of the grouper + # if we have a categorical + grouped = data.groupby(keys, observed=False) agged = grouped.agg(aggfunc) + if dropna and isinstance(agged, ABCDataFrame) and len(agged.columns): + agged = agged.dropna(how='all') + + # gh-21133 + # we want to down cast if + # the original values are ints + # as we grouped with a NaN value + # and then dropped, coercing to floats + for v in [v for v in values if v in data and v in agged]: + if (is_integer_dtype(data[v]) and + not is_integer_dtype(agged[v])): + agged[v] = maybe_downcast_to_dtype(agged[v], data[v].dtype) table = agged if table.index.nlevels > 1: diff --git a/pandas/core/series.py b/pandas/core/series.py index c5caafa07fb8e..6975dd8fc918e 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -40,6 +40,7 @@ maybe_convert_platform, maybe_cast_to_datetime, maybe_castable, construct_1d_arraylike_from_scalar, + construct_1d_ndarray_preserving_na, construct_1d_object_array_from_listlike) from pandas.core.dtypes.missing import ( isna, @@ -4047,7 +4048,8 @@ def _try_cast(arr, take_fast_path): isinstance(subarr, np.ndarray))): subarr = construct_1d_object_array_from_listlike(subarr) elif not is_extension_type(subarr): - subarr = np.array(subarr, dtype=dtype, copy=copy) + subarr = construct_1d_ndarray_preserving_na(subarr, dtype, + copy=copy) except (ValueError, TypeError): if is_categorical_dtype(dtype): # We *do* allow casting to categorical, since we know diff --git a/pandas/core/strings.py b/pandas/core/strings.py index 5d50c45fe7eca..44811781837bc 100644 --- a/pandas/core/strings.py +++ b/pandas/core/strings.py @@ -2172,9 +2172,9 
@@ def cat(self, others=None, sep=None, na_rep=None, join=None): Returns ------- - concat : str if `other is None`, Series/Index of objects if `others is - not None`. In the latter case, the result will remain categorical - if the calling Series/Index is categorical. + concat : str or Series/Index of objects + If `others` is None, `str` is returned, otherwise a `Series/Index` + (same type as caller) of objects is returned. See Also -------- diff --git a/pandas/core/window.py b/pandas/core/window.py index 015e7f7913ed0..9d0f9dc4f75f9 100644 --- a/pandas/core/window.py +++ b/pandas/core/window.py @@ -602,8 +602,8 @@ def validate(self): if isinstance(window, (list, tuple, np.ndarray)): pass elif is_integer(window): - if window < 0: - raise ValueError("window must be non-negative") + if window <= 0: + raise ValueError("window must be > 0 ") try: import scipy.signal as sig except ImportError: diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py index 29b8d29af0808..7f660e2644fa4 100644 --- a/pandas/io/formats/csvs.py +++ b/pandas/io/formats/csvs.py @@ -9,6 +9,7 @@ import numpy as np from pandas.core.dtypes.missing import notna +from pandas.core.dtypes.inference import is_file_like from pandas.core.index import Index, MultiIndex from pandas import compat from pandas.compat import (StringIO, range, zip) @@ -127,14 +128,19 @@ def save(self): else: encoding = self.encoding - if hasattr(self.path_or_buf, 'write'): - f = self.path_or_buf - close = False + # PR 21300 uses string buffer to receive csv writing and dump into + # file-like output with compression as option. 
GH 21241, 21118 + f = StringIO() + if not is_file_like(self.path_or_buf): + # path_or_buf is path + path_or_buf = self.path_or_buf + elif hasattr(self.path_or_buf, 'name'): + # path_or_buf is file handle + path_or_buf = self.path_or_buf.name else: - f, handles = _get_handle(self.path_or_buf, self.mode, - encoding=encoding, - compression=None) - close = True if self.compression is None else False + # path_or_buf is file-like IO objects. + f = self.path_or_buf + path_or_buf = None try: writer_kwargs = dict(lineterminator=self.line_terminator, @@ -151,18 +157,16 @@ def save(self): self._save() finally: - # GH 17778 handles compression for byte strings. - if not close and self.compression: - f.close() - with open(self.path_or_buf, 'r') as f: - data = f.read() - f, handles = _get_handle(self.path_or_buf, self.mode, + # GH 17778 handles zip compression for byte strings separately. + buf = f.getvalue() + if path_or_buf: + f, handles = _get_handle(path_or_buf, self.mode, encoding=encoding, compression=self.compression) - f.write(data) - close = True - if close: + f.write(buf) f.close() + for _fh in handles: + _fh.close() def _save_header(self): diff --git a/pandas/io/json/normalize.py b/pandas/io/json/normalize.py index 549204abd3caf..b845a43b9ca9e 100644 --- a/pandas/io/json/normalize.py +++ b/pandas/io/json/normalize.py @@ -80,8 +80,6 @@ def nested_to_record(ds, prefix="", sep=".", level=0): if level != 0: # so we skip copying for top level, common case v = new_d.pop(k) new_d[newkey] = v - if v is None: # pop the key if the value is None - new_d.pop(k) continue else: v = new_d.pop(k) diff --git a/pandas/io/json/table_schema.py b/pandas/io/json/table_schema.py index 01f7db7d68664..5cea64388bdd7 100644 --- a/pandas/io/json/table_schema.py +++ b/pandas/io/json/table_schema.py @@ -296,7 +296,7 @@ def parse_table_schema(json, precise_float): """ table = loads(json, precise_float=precise_float) col_order = [field['name'] for field in table['schema']['fields']] - df = 
DataFrame(table['data'])[col_order] + df = DataFrame(table['data'], columns=col_order)[col_order] dtypes = {field['name']: convert_json_field_to_pandas_type(field) for field in table['schema']['fields']} diff --git a/pandas/io/sql.py b/pandas/io/sql.py index ccb8d2d99d734..a582d32741ae9 100644 --- a/pandas/io/sql.py +++ b/pandas/io/sql.py @@ -572,29 +572,8 @@ def create(self): else: self._execute_create() - def insert_statement(self, data, conn): - """ - Generate tuple of SQLAlchemy insert statement and any arguments - to be executed by connection (via `_execute_insert`). - - Parameters - ---------- - conn : SQLAlchemy connectable(engine/connection) - Connection to recieve the data - data : list of dict - The data to be inserted - - Returns - ------- - SQLAlchemy statement - insert statement - *, optional - Additional parameters to be passed when executing insert statement - """ - dialect = getattr(conn, 'dialect', None) - if dialect and getattr(dialect, 'supports_multivalues_insert', False): - return self.table.insert(data), - return self.table.insert(), data + def insert_statement(self): + return self.table.insert() def insert_data(self): if self.index is not None: @@ -633,9 +612,8 @@ def insert_data(self): return column_names, data_list def _execute_insert(self, conn, keys, data_iter): - """Insert data into this table with database connection""" data = [{k: v for k, v in zip(keys, row)} for row in data_iter] - conn.execute(*self.insert_statement(data, conn)) + conn.execute(self.insert_statement(), data) def insert(self, chunksize=None): keys, data_list = self.insert_data() diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py index 87b7d13251f28..d1a2121597dd6 100644 --- a/pandas/plotting/_core.py +++ b/pandas/plotting/_core.py @@ -811,7 +811,7 @@ class PlanePlot(MPLPlot): def __init__(self, data, x, y, **kwargs): MPLPlot.__init__(self, data, **kwargs) if x is None or y is None: - raise ValueError(self._kind + ' requires and x and y column') + raise 
ValueError(self._kind + ' requires an x and y column') if is_integer(x) and not self.data.columns.holds_integer(): x = self.data.columns[x] if is_integer(y) and not self.data.columns.holds_integer(): diff --git a/pandas/tests/dtypes/test_cast.py b/pandas/tests/dtypes/test_cast.py index 20cd8b43478d2..4a19682e2c558 100644 --- a/pandas/tests/dtypes/test_cast.py +++ b/pandas/tests/dtypes/test_cast.py @@ -23,6 +23,7 @@ maybe_convert_scalar, find_common_type, construct_1d_object_array_from_listlike, + construct_1d_ndarray_preserving_na, construct_1d_arraylike_from_scalar) from pandas.core.dtypes.dtypes import ( CategoricalDtype, @@ -440,3 +441,15 @@ def test_cast_1d_arraylike_from_scalar_categorical(self): tm.assert_categorical_equal(result, expected, check_category_order=True, check_dtype=True) + + +@pytest.mark.parametrize('values, dtype, expected', [ + ([1, 2, 3], None, np.array([1, 2, 3])), + (np.array([1, 2, 3]), None, np.array([1, 2, 3])), + (['1', '2', None], None, np.array(['1', '2', None])), + (['1', '2', None], np.dtype('str'), np.array(['1', '2', None])), + ([1, 2, None], np.dtype('str'), np.array(['1', '2', None])), +]) +def test_construct_1d_ndarray_preserving_na(values, dtype, expected): + result = construct_1d_ndarray_preserving_na(values, dtype=dtype) + tm.assert_numpy_array_equal(result, expected) diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index 6dd38187f7277..70dd358248bc4 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -151,6 +151,17 @@ def test_constructor_complex_dtypes(self): assert a.dtype == df.a.dtype assert b.dtype == df.b.dtype + def test_constructor_dtype_str_na_values(self, string_dtype): + # https://github.com/pandas-dev/pandas/issues/21083 + df = DataFrame({'A': ['x', None]}, dtype=string_dtype) + result = df.isna() + expected = DataFrame({"A": [False, True]}) + tm.assert_frame_equal(result, expected) + assert df.iloc[1, 0] is None + + 
df = DataFrame({'A': ['x', np.nan]}, dtype=string_dtype) + assert np.isnan(df.iloc[1, 0]) + def test_constructor_rec(self): rec = self.frame.to_records(index=False) diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py index 4c9f8c2ea0980..1eeeec0be3b8b 100644 --- a/pandas/tests/frame/test_dtypes.py +++ b/pandas/tests/frame/test_dtypes.py @@ -794,22 +794,26 @@ def test_arg_for_errors_in_astype(self): @pytest.mark.parametrize('input_vals', [ ([1, 2]), - ([1.0, 2.0, np.nan]), (['1', '2']), (list(pd.date_range('1/1/2011', periods=2, freq='H'))), (list(pd.date_range('1/1/2011', periods=2, freq='H', tz='US/Eastern'))), ([pd.Interval(left=0, right=5)]), ]) - def test_constructor_list_str(self, input_vals): + def test_constructor_list_str(self, input_vals, string_dtype): # GH 16605 # Ensure that data elements are converted to strings when # dtype is str, 'str', or 'U' - for dtype in ['str', str, 'U']: - result = DataFrame({'A': input_vals}, dtype=dtype) - expected = DataFrame({'A': input_vals}).astype({'A': dtype}) - assert_frame_equal(result, expected) + result = DataFrame({'A': input_vals}, dtype=string_dtype) + expected = DataFrame({'A': input_vals}).astype({'A': string_dtype}) + assert_frame_equal(result, expected) + + def test_constructor_list_str_na(self, string_dtype): + + result = DataFrame({"A": [1.0, 2.0, None]}, dtype=string_dtype) + expected = DataFrame({"A": ['1.0', '2.0', None]}, dtype=object) + assert_frame_equal(result, expected) class TestDataFrameDatetimeWithTZ(TestData): diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py index e4829ebf48561..60dc336a85388 100644 --- a/pandas/tests/frame/test_to_csv.py +++ b/pandas/tests/frame/test_to_csv.py @@ -919,29 +919,45 @@ def test_to_csv_path_is_none(self): recons = pd.read_csv(StringIO(csv_str), index_col=0) assert_frame_equal(self.frame, recons) - def test_to_csv_compression(self, compression): - - df = DataFrame([[0.123456, 0.234567, 0.567567], - 
[12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) + @pytest.mark.parametrize('df,encoding', [ + (DataFrame([[0.123456, 0.234567, 0.567567], + [12.32112, 123123.2, 321321.2]], + index=['A', 'B'], columns=['X', 'Y', 'Z']), None), + # GH 21241, 21118 + (DataFrame([['abc', 'def', 'ghi']], columns=['X', 'Y', 'Z']), 'ascii'), + (DataFrame(5 * [[123, u"你好", u"世界"]], + columns=['X', 'Y', 'Z']), 'gb2312'), + (DataFrame(5 * [[123, u"Γειά σου", u"Κόσμε"]], + columns=['X', 'Y', 'Z']), 'cp737') + ]) + def test_to_csv_compression(self, df, encoding, compression): with ensure_clean() as filename: - df.to_csv(filename, compression=compression) + df.to_csv(filename, compression=compression, encoding=encoding) # test the round trip - to_csv -> read_csv - rs = read_csv(filename, compression=compression, - index_col=0) - assert_frame_equal(df, rs) + result = read_csv(filename, compression=compression, + index_col=0, encoding=encoding) + + with open(filename, 'w') as fh: + df.to_csv(fh, compression=compression, encoding=encoding) + + result_fh = read_csv(filename, compression=compression, + index_col=0, encoding=encoding) + assert_frame_equal(df, result) + assert_frame_equal(df, result_fh) # explicitly make sure file is compressed with tm.decompress_file(filename, compression) as fh: - text = fh.read().decode('utf8') + text = fh.read().decode(encoding or 'utf8') for col in df.columns: assert col in text with tm.decompress_file(filename, compression) as fh: - assert_frame_equal(df, read_csv(fh, index_col=0)) + assert_frame_equal(df, read_csv(fh, + index_col=0, + encoding=encoding)) def test_to_csv_date_format(self): with ensure_clean('__tmp_to_csv_date_format__') as path: diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py index 09210d8b64d1b..573940edaa08f 100644 --- a/pandas/tests/indexes/datetimes/test_timezones.py +++ b/pandas/tests/indexes/datetimes/test_timezones.py @@ -2,7 +2,7 @@ """ Tests 
for DatetimeIndex timezone-related methods """ -from datetime import datetime, timedelta, tzinfo +from datetime import datetime, timedelta, tzinfo, date, time from distutils.version import LooseVersion import pytest @@ -706,6 +706,32 @@ def test_join_utc_convert(self, join_type): assert isinstance(result, DatetimeIndex) assert result.tz.zone == 'UTC' + @pytest.mark.parametrize("dtype", [ + None, 'datetime64[ns, CET]', + 'datetime64[ns, EST]', 'datetime64[ns, UTC]' + ]) + def test_date_accessor(self, dtype): + # Regression test for GH#21230 + expected = np.array([date(2018, 6, 4), pd.NaT]) + + index = DatetimeIndex(['2018-06-04 10:00:00', pd.NaT], dtype=dtype) + result = index.date + + tm.assert_numpy_array_equal(result, expected) + + @pytest.mark.parametrize("dtype", [ + None, 'datetime64[ns, CET]', + 'datetime64[ns, EST]', 'datetime64[ns, UTC]' + ]) + def test_time_accessor(self, dtype): + # Regression test for GH#21267 + expected = np.array([time(10, 20, 30), pd.NaT]) + + index = DatetimeIndex(['2018-06-04 10:20:30', pd.NaT], dtype=dtype) + result = index.time + + tm.assert_numpy_array_equal(result, expected) + def test_dti_drop_dont_lose_tz(self): # GH#2621 ind = date_range("2012-12-01", periods=10, tz="utc") diff --git a/pandas/tests/indexes/interval/test_construction.py b/pandas/tests/indexes/interval/test_construction.py index 5fdf92dcb2044..b1711c3444586 100644 --- a/pandas/tests/indexes/interval/test_construction.py +++ b/pandas/tests/indexes/interval/test_construction.py @@ -6,8 +6,9 @@ from pandas import ( Interval, IntervalIndex, Index, Int64Index, Float64Index, Categorical, - date_range, timedelta_range, period_range, notna) + CategoricalIndex, date_range, timedelta_range, period_range, notna) from pandas.compat import lzip +from pandas.core.dtypes.common import is_categorical_dtype from pandas.core.dtypes.dtypes import IntervalDtype import pandas.core.common as com import pandas.util.testing as tm @@ -111,6 +112,22 @@ def test_constructor_string(self, 
constructor, breaks): with tm.assert_raises_regex(TypeError, msg): constructor(**self.get_kwargs_from_breaks(breaks)) + @pytest.mark.parametrize('cat_constructor', [ + Categorical, CategoricalIndex]) + def test_constructor_categorical_valid(self, constructor, cat_constructor): + # GH 21243/21253 + if isinstance(constructor, partial) and constructor.func is Index: + # Index is defined to create CategoricalIndex from categorical data + pytest.skip() + + breaks = np.arange(10, dtype='int64') + expected = IntervalIndex.from_breaks(breaks) + + cat_breaks = cat_constructor(breaks) + result_kwargs = self.get_kwargs_from_breaks(cat_breaks) + result = constructor(**result_kwargs) + tm.assert_index_equal(result, expected) + def test_generic_errors(self, constructor): # filler input data to be used when supplying invalid kwargs filler = self.get_kwargs_from_breaks(range(10)) @@ -238,6 +255,8 @@ def get_kwargs_from_breaks(self, breaks, closed='right'): tuples = lzip(breaks[:-1], breaks[1:]) if isinstance(breaks, (list, tuple)): return {'data': tuples} + elif is_categorical_dtype(breaks): + return {'data': breaks._constructor(tuples)} return {'data': com._asarray_tuplesafe(tuples)} def test_constructor_errors(self): @@ -286,6 +305,8 @@ def get_kwargs_from_breaks(self, breaks, closed='right'): if isinstance(breaks, list): return {'data': ivs} + elif is_categorical_dtype(breaks): + return {'data': breaks._constructor(ivs)} return {'data': np.array(ivs, dtype=object)} def test_generic_errors(self, constructor): diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py index 0e630f69b1a32..a2a4170256088 100644 --- a/pandas/tests/indexes/test_category.py +++ b/pandas/tests/indexes/test_category.py @@ -581,6 +581,15 @@ def test_is_monotonic(self, data, non_lexsorted_data): assert c.is_monotonic_increasing assert not c.is_monotonic_decreasing + @pytest.mark.parametrize('values, expected', [ + ([1, 2, 3], True), + ([1, 3, 1], False), + (list('abc'), 
True), + (list('aba'), False)]) + def test_is_unique(self, values, expected): + ci = CategoricalIndex(values) + assert ci.is_unique is expected + def test_duplicates(self): idx = CategoricalIndex([0, 0, 0], name='foo') diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py index 49b39c17238ae..b6483d0e978ba 100644 --- a/pandas/tests/io/json/test_json_table_schema.py +++ b/pandas/tests/io/json/test_json_table_schema.py @@ -560,3 +560,16 @@ def test_multiindex(self, index_names): out = df.to_json(orient="table") result = pd.read_json(out, orient="table") tm.assert_frame_equal(df, result) + + @pytest.mark.parametrize("strict_check", [ + pytest.param(True, marks=pytest.mark.xfail), False]) + def test_empty_frame_roundtrip(self, strict_check): + # GH 21287 + df = pd.DataFrame([], columns=['a', 'b', 'c']) + expected = df.copy() + out = df.to_json(orient='table') + result = pd.read_json(out, orient='table') + # TODO: When DF coercion issue (#21345) is resolved tighten type checks + tm.assert_frame_equal(expected, result, + check_dtype=strict_check, + check_index_type=strict_check) diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py index 0fabaf747b6de..395c2c90767d3 100644 --- a/pandas/tests/io/json/test_normalize.py +++ b/pandas/tests/io/json/test_normalize.py @@ -238,15 +238,16 @@ def test_non_ascii_key(self): tm.assert_frame_equal(result, expected) def test_missing_field(self, author_missing_data): - # GH20030: Checks for robustness of json_normalize - should - # unnest records where only the first record has a None value + # GH20030: result = json_normalize(author_missing_data) ex_data = [ - {'author_name.first': np.nan, + {'info': np.nan, + 'author_name.first': np.nan, 'author_name.last_name': np.nan, 'info.created_at': np.nan, 'info.last_updated': np.nan}, - {'author_name.first': 'Jane', + {'info': None, + 'author_name.first': 'Jane', 'author_name.last_name': 'Doe', 
'info.created_at': '11/08/1993', 'info.last_updated': '26/05/2012'} @@ -351,9 +352,8 @@ def test_json_normalize_errors(self): errors='raise' ) - def test_nonetype_dropping(self): - # GH20030: Checks that None values are dropped in nested_to_record - # to prevent additional columns of nans when passed to DataFrame + def test_donot_drop_nonevalues(self): + # GH21356 data = [ {'info': None, 'author_name': @@ -367,7 +367,8 @@ def test_nonetype_dropping(self): ] result = nested_to_record(data) expected = [ - {'author_name.first': 'Smith', + {'info': None, + 'author_name.first': 'Smith', 'author_name.last_name': 'Appleseed'}, {'author_name.first': 'Jane', 'author_name.last_name': 'Doe', @@ -375,3 +376,61 @@ def test_nonetype_dropping(self): 'info.last_updated': '26/05/2012'}] assert result == expected + + def test_nonetype_top_level_bottom_level(self): + # GH21158: If inner level json has a key with a null value + # make sure it doesnt do a new_d.pop twice and except + data = { + "id": None, + "location": { + "country": { + "state": { + "id": None, + "town.info": { + "id": None, + "region": None, + "x": 49.151580810546875, + "y": -33.148521423339844, + "z": 27.572303771972656}}} + } + } + result = nested_to_record(data) + expected = { + 'id': None, + 'location.country.state.id': None, + 'location.country.state.town.info.id': None, + 'location.country.state.town.info.region': None, + 'location.country.state.town.info.x': 49.151580810546875, + 'location.country.state.town.info.y': -33.148521423339844, + 'location.country.state.town.info.z': 27.572303771972656} + assert result == expected + + def test_nonetype_multiple_levels(self): + # GH21158: If inner level json has a key with a null value + # make sure it doesnt do a new_d.pop twice and except + data = { + "id": None, + "location": { + "id": None, + "country": { + "id": None, + "state": { + "id": None, + "town.info": { + "region": None, + "x": 49.151580810546875, + "y": -33.148521423339844, + "z": 27.572303771972656}}} 
+ } + } + result = nested_to_record(data) + expected = { + 'id': None, + 'location.id': None, + 'location.country.id': None, + 'location.country.state.id': None, + 'location.country.state.town.info.region': None, + 'location.country.state.town.info.x': 49.151580810546875, + 'location.country.state.town.info.y': -33.148521423339844, + 'location.country.state.town.info.z': 27.572303771972656} + assert result == expected diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py index 4530cc9d2fba9..f3ab74d37a2bc 100644 --- a/pandas/tests/io/test_sql.py +++ b/pandas/tests/io/test_sql.py @@ -1665,29 +1665,6 @@ class Temporary(Base): tm.assert_frame_equal(df, expected) - def test_insert_multivalues(self): - # issues addressed - # https://github.com/pandas-dev/pandas/issues/14315 - # https://github.com/pandas-dev/pandas/issues/8953 - - db = sql.SQLDatabase(self.conn) - df = DataFrame({'A': [1, 0, 0], 'B': [1.1, 0.2, 4.3]}) - table = sql.SQLTable("test_table", db, frame=df) - data = [ - {'A': 1, 'B': 0.46}, - {'A': 0, 'B': -2.06} - ] - statement = table.insert_statement(data, conn=self.conn)[0] - - if self.supports_multivalues_insert: - assert statement.parameters == data, ( - 'insert statement should be multivalues' - ) - else: - assert statement.parameters is None, ( - 'insert statement should not be multivalues' - ) - class _TestSQLAlchemyConn(_EngineToConnMixin, _TestSQLAlchemy): @@ -1702,7 +1679,6 @@ class _TestSQLiteAlchemy(object): """ flavor = 'sqlite' - supports_multivalues_insert = True @classmethod def connect(cls): @@ -1751,7 +1727,6 @@ class _TestMySQLAlchemy(object): """ flavor = 'mysql' - supports_multivalues_insert = True @classmethod def connect(cls): @@ -1821,7 +1796,6 @@ class _TestPostgreSQLAlchemy(object): """ flavor = 'postgresql' - supports_multivalues_insert = True @classmethod def connect(cls): diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py index 8e639edd34b18..037bd9cc7cd18 100644 --- 
a/pandas/tests/reshape/merge/test_merge.py +++ b/pandas/tests/reshape/merge/test_merge.py @@ -1526,6 +1526,27 @@ def test_merge_on_ints_floats_warning(self): result = B.merge(A, left_on='Y', right_on='X') assert_frame_equal(result, expected[['Y', 'X']]) + def test_merge_incompat_infer_boolean_object(self): + # GH21119: bool + object bool merge OK + df1 = DataFrame({'key': Series([True, False], dtype=object)}) + df2 = DataFrame({'key': [True, False]}) + + expected = DataFrame({'key': [True, False]}, dtype=object) + result = pd.merge(df1, df2, on='key') + assert_frame_equal(result, expected) + result = pd.merge(df2, df1, on='key') + assert_frame_equal(result, expected) + + # with missing value + df1 = DataFrame({'key': Series([True, False, np.nan], dtype=object)}) + df2 = DataFrame({'key': [True, False]}) + + expected = DataFrame({'key': [True, False]}, dtype=object) + result = pd.merge(df1, df2, on='key') + assert_frame_equal(result, expected) + result = pd.merge(df2, df1, on='key') + assert_frame_equal(result, expected) + @pytest.mark.parametrize('df1_vals, df2_vals', [ ([0, 1, 2], ["0", "1", "2"]), ([0.0, 1.0, 2.0], ["0", "1", "2"]), @@ -1538,6 +1559,8 @@ def test_merge_on_ints_floats_warning(self): pd.date_range('20130101', periods=3, tz='US/Eastern')), ([0, 1, 2], Series(['a', 'b', 'a']).astype('category')), ([0.0, 1.0, 2.0], Series(['a', 'b', 'a']).astype('category')), + # TODO ([0, 1], pd.Series([False, True], dtype=bool)), + ([0, 1], pd.Series([False, True], dtype=object)) ]) def test_merge_incompat_dtypes(self, df1_vals, df2_vals): # GH 9780, GH 15800 diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py index d2cf3fc11e165..3ec60d50f2792 100644 --- a/pandas/tests/reshape/test_pivot.py +++ b/pandas/tests/reshape/test_pivot.py @@ -1,3 +1,4 @@ +# -*- coding: utf-8 -*- from datetime import datetime, date, timedelta @@ -16,6 +17,11 @@ from pandas.api.types import CategoricalDtype as CDT +@pytest.fixture(params=[True, False]) +def 
dropna(request): + return request.param + + class TestPivotTable(object): def setup_method(self, method): @@ -109,7 +115,6 @@ def test_pivot_table_categorical(self): index=exp_index) tm.assert_frame_equal(result, expected) - @pytest.mark.parametrize('dropna', [True, False]) def test_pivot_table_dropna_categoricals(self, dropna): # GH 15193 categories = ['a', 'b', 'c', 'd'] @@ -137,6 +142,25 @@ def test_pivot_table_dropna_categoricals(self, dropna): tm.assert_frame_equal(result, expected) + def test_pivot_with_non_observable_dropna(self, dropna): + # gh-21133 + df = pd.DataFrame( + {'A': pd.Categorical([np.nan, 'low', 'high', 'low', 'high'], + categories=['low', 'high'], + ordered=True), + 'B': range(5)}) + + result = df.pivot_table(index='A', values='B', dropna=dropna) + expected = pd.DataFrame( + {'B': [2, 3]}, + index=pd.Index( + pd.Categorical.from_codes([0, 1], + categories=['low', 'high'], + ordered=True), + name='A')) + + tm.assert_frame_equal(result, expected) + def test_pass_array(self): result = self.data.pivot_table( 'D', index=self.data.A, columns=self.data.C) diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py index ec0d7296e540e..95836f046195a 100644 --- a/pandas/tests/series/test_arithmetic.py +++ b/pandas/tests/series/test_arithmetic.py @@ -88,6 +88,46 @@ def test_ser_cmp_result_names(self, names, op): class TestTimestampSeriesComparison(object): + def test_dt64_ser_cmp_date_warning(self): + # https://github.com/pandas-dev/pandas/issues/21359 + # Remove this test and enble invalid test below + ser = pd.Series(pd.date_range('20010101', periods=10), name='dates') + date = ser.iloc[0].to_pydatetime().date() + + with tm.assert_produces_warning(FutureWarning) as m: + result = ser == date + expected = pd.Series([True] + [False] * 9, name='dates') + tm.assert_series_equal(result, expected) + assert "Comparing Series of datetimes " in str(m[0].message) + assert "will not compare equal" in str(m[0].message) + + with 
tm.assert_produces_warning(FutureWarning) as m: + result = ser != date + tm.assert_series_equal(result, ~expected) + assert "will not compare equal" in str(m[0].message) + + with tm.assert_produces_warning(FutureWarning) as m: + result = ser <= date + tm.assert_series_equal(result, expected) + assert "a TypeError will be raised" in str(m[0].message) + + with tm.assert_produces_warning(FutureWarning) as m: + result = ser < date + tm.assert_series_equal(result, pd.Series([False] * 10, name='dates')) + assert "a TypeError will be raised" in str(m[0].message) + + with tm.assert_produces_warning(FutureWarning) as m: + result = ser >= date + tm.assert_series_equal(result, pd.Series([True] * 10, name='dates')) + assert "a TypeError will be raised" in str(m[0].message) + + with tm.assert_produces_warning(FutureWarning) as m: + result = ser > date + tm.assert_series_equal(result, pd.Series([False] + [True] * 9, + name='dates')) + assert "a TypeError will be raised" in str(m[0].message) + + @pytest.mark.skip(reason="GH-21359") def test_dt64ser_cmp_date_invalid(self): # GH#19800 datetime.date comparison raises to # match DatetimeIndex/Timestamp. 
This also matches the behavior diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py index 7e59325c32ddc..906d2aacd5586 100644 --- a/pandas/tests/series/test_constructors.py +++ b/pandas/tests/series/test_constructors.py @@ -137,6 +137,17 @@ def test_constructor_no_data_index_order(self): result = pd.Series(index=['b', 'a', 'c']) assert result.index.tolist() == ['b', 'a', 'c'] + def test_constructor_dtype_str_na_values(self, string_dtype): + # https://github.com/pandas-dev/pandas/issues/21083 + ser = Series(['x', None], dtype=string_dtype) + result = ser.isna() + expected = Series([False, True]) + tm.assert_series_equal(result, expected) + assert ser.iloc[1] is None + + ser = Series(['x', np.nan], dtype=string_dtype) + assert np.isnan(ser.iloc[1]) + def test_constructor_series(self): index1 = ['d', 'b', 'a', 'c'] index2 = sorted(index1) @@ -164,22 +175,25 @@ def test_constructor_list_like(self): @pytest.mark.parametrize('input_vals', [ ([1, 2]), - ([1.0, 2.0, np.nan]), (['1', '2']), (list(pd.date_range('1/1/2011', periods=2, freq='H'))), (list(pd.date_range('1/1/2011', periods=2, freq='H', tz='US/Eastern'))), ([pd.Interval(left=0, right=5)]), ]) - def test_constructor_list_str(self, input_vals): + def test_constructor_list_str(self, input_vals, string_dtype): # GH 16605 # Ensure that data elements from a list are converted to strings # when dtype is str, 'str', or 'U' + result = Series(input_vals, dtype=string_dtype) + expected = Series(input_vals).astype(string_dtype) + assert_series_equal(result, expected) - for dtype in ['str', str, 'U']: - result = Series(input_vals, dtype=dtype) - expected = Series(input_vals).astype(dtype) - assert_series_equal(result, expected) + def test_constructor_list_str_na(self, string_dtype): + result = Series([1.0, 2.0, np.nan], dtype=string_dtype) + expected = Series(['1.0', '2.0', np.nan], dtype=object) + assert_series_equal(result, expected) + assert np.isnan(result[2]) def 
test_constructor_generator(self): gen = (i for i in range(10)) diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py index 0b0d4334c86a3..76dd4bc1f3d4a 100644 --- a/pandas/tests/series/test_io.py +++ b/pandas/tests/series/test_io.py @@ -138,29 +138,45 @@ def test_to_csv_path_is_none(self): csv_str = s.to_csv(path=None) assert isinstance(csv_str, str) - def test_to_csv_compression(self, compression): - - s = Series([0.123456, 0.234567, 0.567567], index=['A', 'B', 'C'], - name='X') + @pytest.mark.parametrize('s,encoding', [ + (Series([0.123456, 0.234567, 0.567567], index=['A', 'B', 'C'], + name='X'), None), + # GH 21241, 21118 + (Series(['abc', 'def', 'ghi'], name='X'), 'ascii'), + (Series(["123", u"你好", u"世界"], name=u"中文"), 'gb2312'), + (Series(["123", u"Γειά σου", u"Κόσμε"], name=u"Ελληνικά"), 'cp737') + ]) + def test_to_csv_compression(self, s, encoding, compression): with ensure_clean() as filename: - s.to_csv(filename, compression=compression, header=True) + s.to_csv(filename, compression=compression, encoding=encoding, + header=True) # test the round trip - to_csv -> read_csv - rs = pd.read_csv(filename, compression=compression, - index_col=0, squeeze=True) - assert_series_equal(s, rs) + result = pd.read_csv(filename, compression=compression, + encoding=encoding, index_col=0, squeeze=True) + + with open(filename, 'w') as fh: + s.to_csv(fh, compression=compression, encoding=encoding, + header=True) + + result_fh = pd.read_csv(filename, compression=compression, + encoding=encoding, index_col=0, + squeeze=True) + assert_series_equal(s, result) + assert_series_equal(s, result_fh) # explicitly ensure file was compressed with tm.decompress_file(filename, compression) as fh: - text = fh.read().decode('utf8') + text = fh.read().decode(encoding or 'utf8') assert s.name in text with tm.decompress_file(filename, compression) as fh: assert_series_equal(s, pd.read_csv(fh, index_col=0, - squeeze=True)) + squeeze=True, + encoding=encoding)) class 
TestSeriesIO(TestData): diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index bb7ee1b911fee..3443331e3d4ba 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -241,3 +241,26 @@ def test_compression_size(obj, method, compression): getattr(obj, method)(filename, compression=None) uncompressed = os.path.getsize(filename) assert uncompressed > compressed + + +@pytest.mark.parametrize('obj', [ + DataFrame(100 * [[0.123456, 0.234567, 0.567567], + [12.32112, 123123.2, 321321.2]], + columns=['X', 'Y', 'Z']), + Series(100 * [0.123456, 0.234567, 0.567567], name='X')]) +@pytest.mark.parametrize('method', ['to_csv']) +def test_compression_size_fh(obj, method, compression_only): + + with tm.ensure_clean() as filename: + with open(filename, 'w') as fh: + getattr(obj, method)(fh, compression=compression_only) + assert not fh.closed + assert fh.closed + compressed = os.path.getsize(filename) + with tm.ensure_clean() as filename: + with open(filename, 'w') as fh: + getattr(obj, method)(fh, compression=None) + assert not fh.closed + assert fh.closed + uncompressed = os.path.getsize(filename) + assert uncompressed > compressed diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py index c2d09c6d49e86..afd7993fefc70 100644 --- a/pandas/tests/test_downstream.py +++ b/pandas/tests/test_downstream.py @@ -103,7 +103,6 @@ def test_pandas_datareader(): 'F', 'quandl', '2017-01-01', '2017-02-01') -@pytest.mark.xfail(reaason="downstream install issue") def test_geopandas(): geopandas = import_module('geopandas') # noqa diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py index 74f2c977e0db2..cfd88f41f855e 100644 --- a/pandas/tests/test_window.py +++ b/pandas/tests/test_window.py @@ -389,8 +389,8 @@ def test_constructor(self, which): c(window=2, min_periods=1, center=False) # GH 13383 - c(0) with pytest.raises(ValueError): + c(0) c(-1) # not valid @@ -409,7 +409,6 @@ def 
test_constructor_with_win_type(self, which): # GH 13383 o = getattr(self, which) c = o.rolling - c(0, win_type='boxcar') with pytest.raises(ValueError): c(-1, win_type='boxcar') diff --git a/setup.py b/setup.py index 6febe674fb2a1..90ec8e91a0700 100755 --- a/setup.py +++ b/setup.py @@ -453,10 +453,10 @@ def pxd(name): return pjoin('pandas', name + '.pxd') -# args to ignore warnings if is_platform_windows(): extra_compile_args = [] else: + # args to ignore warnings extra_compile_args = ['-Wno-unused-function'] lib_depends = lib_depends + ['pandas/_libs/src/numpy_helper.h', @@ -733,7 +733,7 @@ def pxd(name): maintainer=AUTHOR, version=versioneer.get_version(), packages=find_packages(include=['pandas', 'pandas.*']), - package_data={'': ['data/*', 'templates/*'], + package_data={'': ['data/*', 'templates/*', '_libs/*.dll'], 'pandas.tests.io': ['data/legacy_hdf/*.h5', 'data/legacy_pickle/*/*.pickle', 'data/legacy_msgpack/*/*.msgpack',
https://api.github.com/repos/pandas-dev/pandas/pulls/21442
2018-06-12T15:12:13Z
2018-06-12T16:29:56Z
2018-06-12T16:29:56Z
2018-06-12T16:30:42Z
DOC: Add favicon to doc pages
diff --git a/doc/source/_static/favicon.ico b/doc/source/_static/favicon.ico new file mode 100644 index 0000000000000..d15c4803b62e6 Binary files /dev/null and b/doc/source/_static/favicon.ico differ diff --git a/doc/source/conf.py b/doc/source/conf.py index 5534700f0734a..29f947e1144ea 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -213,16 +213,16 @@ # of the sidebar. # html_logo = None -# The name of an image file (within the static path) to use as favicon of the -# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 -# pixels large. -# html_favicon = None - # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] +# The name of an image file (within the static path) to use as favicon of the +# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 +# pixels large. +html_favicon = os.path.join(html_static_path[0], 'favicon.ico') + # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y'
This is a suggestion for adding a favicon to the doc pages. Having a favicon makes the pandas browser tabs easier to locate, especially with many open tabs or many pinned tabs. I used the image from the pandas-dev GitHub organization, cropped some of the bars to make it appear less cluttered in the small favicon format, and resized it to a 32x32 `.ico`-file (the cropped png is attached at the end of this issue). A few screenshots (pinned and normal tabs, alternating with and without the favicon): Firefox Quantum ![image](https://user-images.githubusercontent.com/4560057/41294884-19082544-6e27-11e8-90d7-db2992f5d1bd.png) Chromium ![image](https://user-images.githubusercontent.com/4560057/41294942-40a370e0-6e27-11e8-94a7-29ae473bdc5a.png) Firefox pre-Quantum ![image](https://user-images.githubusercontent.com/4560057/41295070-9c1c7d22-6e27-11e8-9b4b-04d6e0e9fc57.png) This is somewhat related to #21376, in that the favicon would change if a new logo is decided upon. Cropped png ![favicon](https://user-images.githubusercontent.com/4560057/41295398-83b5e16e-6e28-11e8-9daf-353e700e6365.png) - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21440
2018-06-12T14:20:38Z
2018-06-13T07:57:30Z
2018-06-13T07:57:30Z
2018-06-13T07:58:37Z
Fix flake8 in conf.py
diff --git a/doc/source/conf.py b/doc/source/conf.py index 909bd5a80b76e..5534700f0734a 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -78,7 +78,7 @@ ] try: - import sphinxcontrib.spelling + import sphinxcontrib.spelling # noqa except ImportError as err: logger.warn(('sphinxcontrib.spelling failed to import with error "{}". ' '`spellcheck` command is not available.'.format(err)))
Sorry, I missed this in @datapythonista's PR (for some reason Travis was not run there)
https://api.github.com/repos/pandas-dev/pandas/pulls/21438
2018-06-12T09:12:50Z
2018-06-12T09:15:13Z
2018-06-12T09:15:13Z
2018-06-12T09:15:17Z
DOC: follow 0.23.1 template for 0.23.2 whatsnew
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index ec2eddcfd4d41..c636e73fbd6c2 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -10,16 +10,11 @@ and bug fixes. We recommend that all users upgrade to this version. :local: :backlinks: none -.. _whatsnew_0232.enhancements: -New features -~~~~~~~~~~~~ +.. _whatsnew_0232.fixed_regressions: - -.. _whatsnew_0232.deprecations: - -Deprecations -~~~~~~~~~~~~ +Fixed Regressions +~~~~~~~~~~~~~~~~~ - - @@ -43,40 +38,41 @@ Documentation Changes Bug Fixes ~~~~~~~~~ +**Groupby/Resample/Rolling** + - - -Conversion -^^^^^^^^^^ +**Conversion** + - - -Indexing -^^^^^^^^ +**Indexing** - - -I/O -^^^ +**I/O** - - -Plotting -^^^^^^^^ +**Plotting** - - -Reshaping -^^^^^^^^^ +**Reshaping** - - -Categorical -^^^^^^^^^^^ +**Categorical** + +- + +**Other** -
xref https://github.com/pandas-dev/pandas/pull/21433 (@gfyoung we changed some things since the v0.23.1.txt file was added)
https://api.github.com/repos/pandas-dev/pandas/pulls/21435
2018-06-12T07:45:09Z
2018-06-12T07:54:12Z
2018-06-12T07:54:12Z
2018-06-29T14:46:19Z
DOC: Add 0.23.2 whatsnew template
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt new file mode 100644 index 0000000000000..ec2eddcfd4d41 --- /dev/null +++ b/doc/source/whatsnew/v0.23.2.txt @@ -0,0 +1,82 @@ +.. _whatsnew_0232: + +v0.23.2 +------- + +This is a minor bug-fix release in the 0.23.x series and includes some small regression fixes +and bug fixes. We recommend that all users upgrade to this version. + +.. contents:: What's new in v0.23.2 + :local: + :backlinks: none + +.. _whatsnew_0232.enhancements: + +New features +~~~~~~~~~~~~ + + +.. _whatsnew_0232.deprecations: + +Deprecations +~~~~~~~~~~~~ + +- +- + +.. _whatsnew_0232.performance: + +Performance Improvements +~~~~~~~~~~~~~~~~~~~~~~~~ + +- +- + +Documentation Changes +~~~~~~~~~~~~~~~~~~~~~ + +- +- + +.. _whatsnew_0232.bug_fixes: + +Bug Fixes +~~~~~~~~~ + +- +- + +Conversion +^^^^^^^^^^ + +- +- + +Indexing +^^^^^^^^ + +- +- + +I/O +^^^ + +- +- + +Plotting +^^^^^^^^ + +- +- + +Reshaping +^^^^^^^^^ + +- +- + +Categorical +^^^^^^^^^^^ + +-
Title is self-explanatory. Copied (almost) directly from #21001.
https://api.github.com/repos/pandas-dev/pandas/pulls/21433
2018-06-11T23:56:19Z
2018-06-12T00:15:30Z
2018-06-12T00:15:30Z
2018-06-29T14:45:57Z
BUG: Fix Series.nlargest for integer boundary values
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 79a4c3da2ffa4..b8d865195cddd 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -82,4 +82,5 @@ Bug Fixes **Other** +- Bug in :meth:`Series.nlargest` for signed and unsigned integer dtypes when the minimum value is present (:issue:`21426`) - diff --git a/pandas/conftest.py b/pandas/conftest.py index d5f399c7cd63d..9d806a91f37f7 100644 --- a/pandas/conftest.py +++ b/pandas/conftest.py @@ -129,6 +129,14 @@ def join_type(request): return request.param +@pytest.fixture(params=['nlargest', 'nsmallest']) +def nselect_method(request): + """ + Fixture for trying all nselect methods + """ + return request.param + + @pytest.fixture(params=[None, np.nan, pd.NaT, float('nan'), np.float('NaN')]) def nulls_fixture(request): """ @@ -170,3 +178,66 @@ def string_dtype(request): * 'U' """ return request.param + + +@pytest.fixture(params=["float32", "float64"]) +def float_dtype(request): + """ + Parameterized fixture for float dtypes. + + * float32 + * float64 + """ + + return request.param + + +UNSIGNED_INT_DTYPES = ["uint8", "uint16", "uint32", "uint64"] +SIGNED_INT_DTYPES = ["int8", "int16", "int32", "int64"] +ALL_INT_DTYPES = UNSIGNED_INT_DTYPES + SIGNED_INT_DTYPES + + +@pytest.fixture(params=SIGNED_INT_DTYPES) +def sint_dtype(request): + """ + Parameterized fixture for signed integer dtypes. + + * int8 + * int16 + * int32 + * int64 + """ + + return request.param + + +@pytest.fixture(params=UNSIGNED_INT_DTYPES) +def uint_dtype(request): + """ + Parameterized fixture for unsigned integer dtypes. + + * uint8 + * uint16 + * uint32 + * uint64 + """ + + return request.param + + +@pytest.fixture(params=ALL_INT_DTYPES) +def any_int_dtype(request): + """ + Parameterized fixture for any integer dtypes. 
+ + * int8 + * uint8 + * int16 + * uint16 + * int32 + * uint32 + * int64 + * uint64 + """ + + return request.param diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py index b33c10da7813e..9e34b8eb55ccb 100644 --- a/pandas/core/algorithms.py +++ b/pandas/core/algorithms.py @@ -1133,9 +1133,12 @@ def compute(self, method): return dropped[slc].sort_values(ascending=ascending).head(n) # fast method - arr, _, _ = _ensure_data(dropped.values) + arr, pandas_dtype, _ = _ensure_data(dropped.values) if method == 'nlargest': arr = -arr + if is_integer_dtype(pandas_dtype): + # GH 21426: ensure reverse ordering at boundaries + arr -= 1 if self.keep == 'last': arr = arr[::-1] diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py index b8f1acc2aa679..6dc24ed856017 100644 --- a/pandas/tests/frame/test_analytics.py +++ b/pandas/tests/frame/test_analytics.py @@ -12,7 +12,7 @@ from numpy.random import randn import numpy as np -from pandas.compat import lrange, product, PY35 +from pandas.compat import lrange, PY35 from pandas import (compat, isna, notna, DataFrame, Series, MultiIndex, date_range, Timestamp, Categorical, _np_version_under1p12, _np_version_under1p15, @@ -2260,54 +2260,49 @@ class TestNLargestNSmallest(object): # ---------------------------------------------------------------------- # Top / bottom - @pytest.mark.parametrize( - 'method, n, order', - product(['nsmallest', 'nlargest'], range(1, 11), - [['a'], - ['c'], - ['a', 'b'], - ['a', 'c'], - ['b', 'a'], - ['b', 'c'], - ['a', 'b', 'c'], - ['c', 'a', 'b'], - ['c', 'b', 'a'], - ['b', 'c', 'a'], - ['b', 'a', 'c'], - - # dups! - ['b', 'c', 'c'], - - ])) - def test_n(self, df_strings, method, n, order): + @pytest.mark.parametrize('order', [ + ['a'], + ['c'], + ['a', 'b'], + ['a', 'c'], + ['b', 'a'], + ['b', 'c'], + ['a', 'b', 'c'], + ['c', 'a', 'b'], + ['c', 'b', 'a'], + ['b', 'c', 'a'], + ['b', 'a', 'c'], + + # dups! 
+ ['b', 'c', 'c']]) + @pytest.mark.parametrize('n', range(1, 11)) + def test_n(self, df_strings, nselect_method, n, order): # GH10393 df = df_strings if 'b' in order: error_msg = self.dtype_error_msg_template.format( - column='b', method=method, dtype='object') + column='b', method=nselect_method, dtype='object') with tm.assert_raises_regex(TypeError, error_msg): - getattr(df, method)(n, order) + getattr(df, nselect_method)(n, order) else: - ascending = method == 'nsmallest' - result = getattr(df, method)(n, order) + ascending = nselect_method == 'nsmallest' + result = getattr(df, nselect_method)(n, order) expected = df.sort_values(order, ascending=ascending).head(n) tm.assert_frame_equal(result, expected) - @pytest.mark.parametrize( - 'method, columns', - product(['nsmallest', 'nlargest'], - product(['group'], ['category_string', 'string']) - )) - def test_n_error(self, df_main_dtypes, method, columns): + @pytest.mark.parametrize('columns', [ + ('group', 'category_string'), ('group', 'string')]) + def test_n_error(self, df_main_dtypes, nselect_method, columns): df = df_main_dtypes + col = columns[1] error_msg = self.dtype_error_msg_template.format( - column=columns[1], method=method, dtype=df[columns[1]].dtype) + column=col, method=nselect_method, dtype=df[col].dtype) # escape some characters that may be in the repr error_msg = (error_msg.replace('(', '\\(').replace(")", "\\)") .replace("[", "\\[").replace("]", "\\]")) with tm.assert_raises_regex(TypeError, error_msg): - getattr(df, method)(2, columns) + getattr(df, nselect_method)(2, columns) def test_n_all_dtypes(self, df_main_dtypes): df = df_main_dtypes @@ -2328,15 +2323,14 @@ def test_n_identical_values(self): expected = pd.DataFrame({'a': [1] * 3, 'b': [1, 2, 3]}) tm.assert_frame_equal(result, expected) - @pytest.mark.parametrize( - 'n, order', - product([1, 2, 3, 4, 5], - [['a', 'b', 'c'], - ['c', 'b', 'a'], - ['a'], - ['b'], - ['a', 'b'], - ['c', 'b']])) + @pytest.mark.parametrize('order', [ + ['a', 'b', 
'c'], + ['c', 'b', 'a'], + ['a'], + ['b'], + ['a', 'b'], + ['c', 'b']]) + @pytest.mark.parametrize('n', range(1, 6)) def test_n_duplicate_index(self, df_duplicates, n, order): # GH 13412 diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py index aba472f2ce8f9..b9c7b837b8b81 100644 --- a/pandas/tests/series/test_analytics.py +++ b/pandas/tests/series/test_analytics.py @@ -1944,6 +1944,15 @@ def test_mode_sortwarning(self): tm.assert_series_equal(result, expected) +def assert_check_nselect_boundary(vals, dtype, method): + # helper function for 'test_boundary_{dtype}' tests + s = Series(vals, dtype=dtype) + result = getattr(s, method)(3) + expected_idxr = [0, 1, 2] if method == 'nsmallest' else [3, 2, 1] + expected = s.loc[expected_idxr] + tm.assert_series_equal(result, expected) + + class TestNLargestNSmallest(object): @pytest.mark.parametrize( @@ -2028,6 +2037,32 @@ def test_n(self, n): expected = s.sort_values().head(n) assert_series_equal(result, expected) + def test_boundary_integer(self, nselect_method, any_int_dtype): + # GH 21426 + dtype_info = np.iinfo(any_int_dtype) + min_val, max_val = dtype_info.min, dtype_info.max + vals = [min_val, min_val + 1, max_val - 1, max_val] + assert_check_nselect_boundary(vals, any_int_dtype, nselect_method) + + def test_boundary_float(self, nselect_method, float_dtype): + # GH 21426 + dtype_info = np.finfo(float_dtype) + min_val, max_val = dtype_info.min, dtype_info.max + min_2nd, max_2nd = np.nextafter( + [min_val, max_val], 0, dtype=float_dtype) + vals = [min_val, min_2nd, max_2nd, max_val] + assert_check_nselect_boundary(vals, float_dtype, nselect_method) + + @pytest.mark.parametrize('dtype', ['datetime64[ns]', 'timedelta64[ns]']) + def test_boundary_datetimelike(self, nselect_method, dtype): + # GH 21426 + # use int64 bounds and +1 to min_val since true minimum is NaT + # (include min_val/NaT at end to maintain same expected_idxr) + dtype_info = np.iinfo('int64') + min_val, max_val = 
dtype_info.min, dtype_info.max + vals = [min_val + 1, min_val + 2, max_val - 1, max_val, min_val] + assert_check_nselect_boundary(vals, dtype, nselect_method) + class TestCategoricalSeriesAnalytics(object):
- [X] closes #21426 - [X] tests added / passed - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [X] whatsnew entry Also added some similar tests for float and datetimelike dtypes to ensure that the behavior is as desired.
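Editorial sketch (not part of the original PR description): the boundary case the fix targets can be illustrated with `int8` limits. The fast path for `nlargest` negates the values, and negating the minimum of a signed integer dtype overflows back onto itself (e.g. `-int64_min == int64_min`), so the minimum could be ranked as if it were the maximum; the `arr -= 1` adjustment restores the ordering.

```python
import pandas as pd

# int8 spans -128..127; include both boundaries to exercise the fix.
s = pd.Series([-128, -127, 126, 127], dtype="int8")

# With the fix, the three largest values come back in descending order,
# with the dtype minimum correctly ranked last overall.
print(s.nlargest(3))
```

Running this against a fixed pandas returns the values `127, 126, -127` (indices 3, 2, 1).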
https://api.github.com/repos/pandas-dev/pandas/pulls/21432
2018-06-11T23:37:11Z
2018-06-15T17:21:37Z
2018-06-15T17:21:37Z
2018-06-29T14:50:52Z
BUG: Nrows cannot be zero for read_csv. Fixes #21141
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py index 2c8f98732c92f..897019c5d4065 100755 --- a/pandas/io/parsers.py +++ b/pandas/io/parsers.py @@ -1027,7 +1027,7 @@ def _failover_to_python(self): raise com.AbstractMethodError(self) def read(self, nrows=None): - nrows = _validate_integer('nrows', nrows) + nrows = _validate_integer('nrows', nrows, min_val=1) if nrows is not None: if self.options.get('skipfooter'): diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py index 2b7ff1f5a9879..482acdca24f81 100644 --- a/pandas/tests/io/parser/common.py +++ b/pandas/tests/io/parser/common.py @@ -364,7 +364,7 @@ def test_read_nrows(self): df = self.read_csv(StringIO(self.data1), nrows=3.0) tm.assert_frame_equal(df, expected) - msg = r"'nrows' must be an integer >=0" + msg = r"'nrows' must be an integer >=1" with tm.assert_raises_regex(ValueError, msg): self.read_csv(StringIO(self.data1), nrows=1.2) @@ -375,6 +375,9 @@ def test_read_nrows(self): with tm.assert_raises_regex(ValueError, msg): self.read_csv(StringIO(self.data1), nrows=-1) + with tm.assert_raises_regex(ValueError, msg): + self.read_csv(StringIO(self.data1), nrows=0) + def test_read_chunksize(self): reader = self.read_csv(StringIO(self.data1), index_col=0, chunksize=2) df = self.read_csv(StringIO(self.data1), index_col=0) diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py index 05423474f330a..b5d955c5309ea 100644 --- a/pandas/tests/io/test_excel.py +++ b/pandas/tests/io/test_excel.py @@ -995,11 +995,18 @@ def test_read_excel_nrows_greater_than_nrows_in_file(self, ext): def test_read_excel_nrows_non_integer_parameter(self, ext): # GH 16645 - msg = "'nrows' must be an integer >=0" + msg = "'nrows' must be an integer >=1" with tm.assert_raises_regex(ValueError, msg): pd.read_excel(os.path.join(self.dirpath, 'test1' + ext), nrows='5') + def test_read_excel_nrows_zero_parameter(self, ext): + # GH 21141 + msg = "'nrows' must be an integer >=1" + with 
tm.assert_raises_regex(ValueError, msg): + pd.read_excel(os.path.join(self.dirpath, 'test1' + ext), + nrows=0) + def test_read_excel_squeeze(self, ext): # GH 12157 f = os.path.join(self.dirpath, 'test_squeeze' + ext)
- [x] closes #21141 - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/21431
2018-06-11T22:24:38Z
2018-06-12T23:02:36Z
null
2018-06-13T00:49:17Z
disallow normalize=True with Tick classes
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt index 68c1839221508..43c75cde74b42 100644 --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -24,6 +24,41 @@ Other Enhancements Backwards incompatible API changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +.. _whatsnew_0240.api.datetimelike.normalize + +Tick DateOffset Normalize Restrictions +-------------------------------------- + +Creating a ``Tick`` object (:class:``Day``, :class:``Hour``, :class:``Minute``, +:class:``Second``, :class:``Milli``, :class:``Micro``, :class:``Nano``) with +`normalize=True` is no longer supported. This prevents unexpected behavior +where addition could fail to be monotone or associative. (:issue:`21427`) + +.. ipython:: python + + ts = pd.Timestamp('2018-06-11 18:01:14') + ts + tic = pd.offsets.Hour(n=2, normalize=True) + tic + +Previous Behavior: + +.. code-block:: ipython + + In [4]: ts + tic + Out [4]: Timestamp('2018-06-11 00:00:00') + + In [5]: ts + tic + tic + tic == ts + (tic + tic + tic) + Out [5]: False + +Current Behavior: + +.. ipython:: python + + tic = pd.offsets.Hour(n=2) + ts + tic + tic + tic == ts + (tic + tic + tic) + + .. 
_whatsnew_0240.api.datetimelike: Datetimelike API Changes diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py index 8bf0d9f915d04..33e5a70c4c30b 100644 --- a/pandas/tests/tseries/offsets/test_offsets.py +++ b/pandas/tests/tseries/offsets/test_offsets.py @@ -28,7 +28,7 @@ YearEnd, Day, QuarterEnd, BusinessMonthEnd, FY5253, Nano, Easter, FY5253Quarter, - LastWeekOfMonth) + LastWeekOfMonth, Tick) from pandas.core.tools.datetimes import format, ole2datetime import pandas.tseries.offsets as offsets from pandas.io.pickle import read_pickle @@ -270,6 +270,11 @@ def test_offset_freqstr(self, offset_types): def _check_offsetfunc_works(self, offset, funcname, dt, expected, normalize=False): + + if normalize and issubclass(offset, Tick): + # normalize=True disallowed for Tick subclasses GH#21427 + return + offset_s = self._get_offset(offset, normalize=normalize) func = getattr(offset_s, funcname) @@ -458,6 +463,9 @@ def test_onOffset(self, offset_types): assert offset_s.onOffset(dt) # when normalize=True, onOffset checks time is 00:00:00 + if issubclass(offset_types, Tick): + # normalize=True disallowed for Tick subclasses GH#21427 + return offset_n = self._get_offset(offset_types, normalize=True) assert not offset_n.onOffset(dt) @@ -485,7 +493,9 @@ def test_add(self, offset_types, tz): assert isinstance(result, Timestamp) assert result == expected_localize - # normalize=True + # normalize=True, disallowed for Tick subclasses GH#21427 + if issubclass(offset_types, Tick): + return offset_s = self._get_offset(offset_types, normalize=True) expected = Timestamp(expected.date()) @@ -3098,6 +3108,14 @@ def test_require_integers(offset_types): cls(n=1.5) +def test_tick_normalize_raises(tick_classes): + # check that trying to create a Tick object with normalize=True raises + # GH#21427 + cls = tick_classes + with pytest.raises(ValueError): + cls(n=3, normalize=True) + + def test_weeks_onoffset(): # GH#18510 Week with weekday = 
None, normalize = False should always # be onOffset diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py index a5a983bf94bb8..ecd15bc7b04b8 100644 --- a/pandas/tseries/offsets.py +++ b/pandas/tseries/offsets.py @@ -2217,8 +2217,10 @@ class Tick(SingleConstructorOffset): _attributes = frozenset(['n', 'normalize']) def __init__(self, n=1, normalize=False): - # TODO: do Tick classes with normalize=True make sense? self.n = self._validate_n(n) + if normalize: + raise ValueError("Tick offset with `normalize=True` are not " + "allowed.") # GH#21427 self.normalize = normalize __gt__ = _tick_comp(operator.gt)
The problem: allowing `Tick` objects with `normalize=True` causes addition to lose monotonicity/associativity.

```
ts = pd.Timestamp.now()
tick = pd.offsets.Minute(n=4, normalize=True)
>>> ts
Timestamp('2018-06-11 10:50:14.419655')
>>> ts + tick
Timestamp('2018-06-11 00:00:00')
```

- [x] closes #21434
- [x] tests added/passed
- [x] passes flake8
- [x] whatsnew note
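The change boils down to rejecting `normalize=True` at construction time. A minimal stand-in sketch of that validation (an illustration only, not the actual pandas `Tick` class, which carries much more machinery):

```python
class Tick:
    """Toy stand-in for a fixed-frequency offset, for illustration."""

    def __init__(self, n=1, normalize=False):
        if normalize:
            # GH#21427: normalizing a fixed-frequency offset makes addition
            # non-monotone (e.g. ts + 2 hours could snap back to midnight),
            # so construction is rejected outright.
            raise ValueError("Tick offset with `normalize=True` are not "
                             "allowed.")
        self.n = n
        self.normalize = normalize
```

With this guard, the surprising `Timestamp('2018-06-11 00:00:00')` result above can no longer occur, because the offending offset cannot be created in the first place.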
https://api.github.com/repos/pandas-dev/pandas/pulls/21427
2018-06-11T17:53:10Z
2018-06-14T10:18:24Z
2018-06-14T10:18:24Z
2018-06-22T03:27:57Z
API: re-allow duplicate index level names
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index 9c4b408a1d24b..2df10592ab1af 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -53,6 +53,7 @@ Fixed Regressions ~~~~~~~~~~~~~~~~~ - Fixed regression in :meth:`to_csv` when handling file-like object incorrectly (:issue:`21471`) +- Re-allowed duplicate level names of a ``MultiIndex``. Accessing a level that has a duplicate name by name still raises an error (:issue:`19029`). - Bug in both :meth:`DataFrame.first_valid_index` and :meth:`Series.first_valid_index` raised for a row index having duplicate values (:issue:`21441`) - diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py index f9f3041bef073..a2322348e1caa 100644 --- a/pandas/core/indexes/multi.py +++ b/pandas/core/indexes/multi.py @@ -671,30 +671,18 @@ def _set_names(self, names, level=None, validate=True): if level is None: level = range(self.nlevels) - used = {} else: level = [self._get_level_number(l) for l in level] - used = {self.levels[l].name: l - for l in set(range(self.nlevels)) - set(level)} # set the name for l, name in zip(level, names): if name is not None: - # GH 20527 # All items in 'names' need to be hashable: if not is_hashable(name): raise TypeError('{}.name must be a hashable type' .format(self.__class__.__name__)) - - if name in used: - raise ValueError( - 'Duplicated level name: "{}", assigned to ' - 'level {}, is already used for level ' - '{}.'.format(name, l, used[name])) - self.levels[l].rename(name, inplace=True) - used[name] = l names = property(fset=_set_names, fget=_get_names, doc="Names of levels in MultiIndex") @@ -2893,6 +2881,13 @@ def isin(self, values, level=None): else: return np.lib.arraysetops.in1d(labs, sought_labels) + def _reference_duplicate_name(self, name): + """ + Returns True if the name refered to in self.names is duplicated. + """ + # count the times name equals an element in self.names. 
+ return sum(name == n for n in self.names) > 1 + MultiIndex._add_numeric_methods_disabled() MultiIndex._add_numeric_methods_add_sub_disabled() diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py index 2757e0797a410..3d9e84954a63b 100644 --- a/pandas/core/reshape/reshape.py +++ b/pandas/core/reshape/reshape.py @@ -115,6 +115,12 @@ def __init__(self, values, index, level=-1, value_columns=None, self.index = index.remove_unused_levels() + if isinstance(self.index, MultiIndex): + if index._reference_duplicate_name(level): + msg = ("Ambiguous reference to {level}. The index " + "names are not unique.".format(level=level)) + raise ValueError(msg) + self.level = self.index._get_level_number(level) # when index includes `nan`, need to lift levels/strides by 1 @@ -528,6 +534,12 @@ def factorize(index): N, K = frame.shape + if isinstance(frame.columns, MultiIndex): + if frame.columns._reference_duplicate_name(level): + msg = ("Ambiguous reference to {level}. The column " + "names are not unique.".format(level=level)) + raise ValueError(msg) + # Will also convert negative level numbers and check if out of bounds. 
level_num = frame.columns._get_level_number(level) diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py index 164d6746edec0..21961906c39bb 100644 --- a/pandas/tests/frame/test_alter_axes.py +++ b/pandas/tests/frame/test_alter_axes.py @@ -130,19 +130,27 @@ def test_set_index2(self): result = df.set_index(df.C) assert result.index.name == 'C' - @pytest.mark.parametrize('level', ['a', pd.Series(range(3), name='a')]) + @pytest.mark.parametrize( + 'level', ['a', pd.Series(range(0, 8, 2), name='a')]) def test_set_index_duplicate_names(self, level): - # GH18872 + # GH18872 - GH19029 df = pd.DataFrame(np.arange(8).reshape(4, 2), columns=['a', 'b']) # Pass an existing level name: df.index.name = 'a' - pytest.raises(ValueError, df.set_index, level, append=True) - pytest.raises(ValueError, df.set_index, [level], append=True) - - # Pass twice the same level name: - df.index.name = 'c' - pytest.raises(ValueError, df.set_index, [level, level]) + expected = pd.MultiIndex.from_tuples([(0, 0), (1, 2), (2, 4), (3, 6)], + names=['a', 'a']) + result = df.set_index(level, append=True) + tm.assert_index_equal(result.index, expected) + result = df.set_index([level], append=True) + tm.assert_index_equal(result.index, expected) + + # Pass twice the same level name (only works with passing actual data) + if isinstance(level, pd.Series): + result = df.set_index([level, level]) + expected = pd.MultiIndex.from_tuples( + [(0, 0), (2, 2), (4, 4), (6, 6)], names=['a', 'a']) + tm.assert_index_equal(result.index, expected) def test_set_index_nonuniq(self): df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'], @@ -617,6 +625,19 @@ def test_reorder_levels(self): index=e_idx) assert_frame_equal(result, expected) + result = df.reorder_levels([0, 0, 0]) + e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']], + labels=[[0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0]], + names=['L0', 'L0', 'L0']) + expected = DataFrame({'A': np.arange(6), 'B': 
np.arange(6)}, + index=e_idx) + assert_frame_equal(result, expected) + + result = df.reorder_levels(['L0', 'L0', 'L0']) + assert_frame_equal(result, expected) + def test_reset_index(self): stacked = self.frame.stack()[::2] stacked = DataFrame({'foo': stacked, 'bar': stacked}) diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py index d05321abefca6..ebf6c5e37b916 100644 --- a/pandas/tests/frame/test_reshape.py +++ b/pandas/tests/frame/test_reshape.py @@ -560,6 +560,16 @@ def test_unstack_dtypes(self): assert left.shape == (3, 2) tm.assert_frame_equal(left, right) + def test_unstack_non_unique_index_names(self): + idx = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')], + names=['c1', 'c1']) + df = DataFrame([1, 2], index=idx) + with pytest.raises(ValueError): + df.unstack('c1') + + with pytest.raises(ValueError): + df.T.stack('c1') + def test_unstack_unused_levels(self): # GH 17845: unused labels in index make unstack() cast int to float idx = pd.MultiIndex.from_product([['a'], ['A', 'B', 'C', 'D']])[:-1] diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py index 0fec6a8f96a24..cb76195eacf40 100644 --- a/pandas/tests/groupby/test_categorical.py +++ b/pandas/tests/groupby/test_categorical.py @@ -555,15 +555,11 @@ def test_as_index(): columns=['cat', 'A', 'B']) tm.assert_frame_equal(result, expected) - # another not in-axis grouper - s = Series(['a', 'b', 'b'], name='cat2') + # another not in-axis grouper (conflicting names in index) + s = Series(['a', 'b', 'b'], name='cat') result = df.groupby(['cat', s], as_index=False, observed=True).sum() tm.assert_frame_equal(result, expected) - # GH18872: conflicting names in desired index - with pytest.raises(ValueError): - df.groupby(['cat', s.rename('cat')], observed=True).sum() - # is original index dropped? 
group_columns = ['cat', 'A'] expected = DataFrame( diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py index c925c4c403960..1dc44677ab3ad 100644 --- a/pandas/tests/indexes/test_multi.py +++ b/pandas/tests/indexes/test_multi.py @@ -656,22 +656,27 @@ def test_constructor_nonhashable_names(self): # With .set_names() tm.assert_raises_regex(TypeError, message, mi.set_names, names=renamed) - @pytest.mark.parametrize('names', [['a', 'b', 'a'], ['1', '1', '2'], - ['1', 'a', '1']]) + @pytest.mark.parametrize('names', [['a', 'b', 'a'], [1, 1, 2], + [1, 'a', 1]]) def test_duplicate_level_names(self, names): - # GH18872 - pytest.raises(ValueError, pd.MultiIndex.from_product, - [[0, 1]] * 3, names=names) + # GH18872, GH19029 + mi = pd.MultiIndex.from_product([[0, 1]] * 3, names=names) + assert mi.names == names # With .rename() mi = pd.MultiIndex.from_product([[0, 1]] * 3) - tm.assert_raises_regex(ValueError, "Duplicated level name:", - mi.rename, names) + mi = mi.rename(names) + assert mi.names == names # With .rename(., level=) - mi.rename(names[0], level=1, inplace=True) - tm.assert_raises_regex(ValueError, "Duplicated level name:", - mi.rename, names[:2], level=[0, 2]) + mi.rename(names[1], level=1, inplace=True) + mi = mi.rename([names[0], names[2]], level=[0, 2]) + assert mi.names == names + + def test_duplicate_level_names_access_raises(self): + self.index.names = ['foo', 'foo'] + tm.assert_raises_regex(KeyError, 'Level foo not found', + self.index._get_level_number, 'foo') def assert_multiindex_copied(self, copy, original): # Levels should be (at least, shallow copied) diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py index 29063b64221c1..865cab7a1596e 100644 --- a/pandas/tests/io/test_pytables.py +++ b/pandas/tests/io/test_pytables.py @@ -1893,6 +1893,12 @@ def make_index(names=None): 'a', 'b'], index=make_index(['date', 'a', 't'])) pytest.raises(ValueError, store.append, 'df', df) + # dup within level + 
_maybe_remove(store, 'df') + df = DataFrame(np.zeros((12, 2)), columns=['a', 'b'], + index=make_index(['date', 'date', 'date'])) + pytest.raises(ValueError, store.append, 'df', df) + # fully names _maybe_remove(store, 'df') df = DataFrame(np.zeros((12, 2)), columns=[ diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py index ca95dde1a20c9..7e7e081408534 100644 --- a/pandas/tests/reshape/test_pivot.py +++ b/pandas/tests/reshape/test_pivot.py @@ -1747,9 +1747,15 @@ def test_crosstab_with_numpy_size(self): tm.assert_frame_equal(result, expected) def test_crosstab_dup_index_names(self): - # GH 13279, GH 18872 + # GH 13279 s = pd.Series(range(3), name='foo') - pytest.raises(ValueError, pd.crosstab, s, s) + + result = pd.crosstab(s, s) + expected_index = pd.Index(range(3), name='foo') + expected = pd.DataFrame(np.eye(3, dtype=np.int64), + index=expected_index, + columns=expected_index) + tm.assert_frame_equal(result, expected) @pytest.mark.parametrize("names", [['a', ('b', 'c')], [('a', 'b'), 'c']])
One possible solution for https://github.com/pandas-dev/pandas/issues/19029

WIP (need to clean up tests and possibly re-add some of the ones that were removed in https://github.com/pandas-dev/pandas/pull/18882)
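The core of the diff's "raise only on ambiguous access" approach is a small duplicate-name check (`MultiIndex._reference_duplicate_name` in the patch). A standalone sketch of that predicate, taking the list of level names explicitly rather than living on the index object:

```python
def reference_duplicate_name(names, name):
    """Return True if `name` appears more than once in `names`.

    Sketch of the check the patch adds: duplicate level names are now
    allowed at construction, but operations that look a level up *by
    name* (unstack, stack, ...) use a test like this to raise on
    ambiguous references.
    """
    # count how many times `name` equals an element of `names`
    return sum(name == n for n in names) > 1
```

For example, an index with `names=['c1', 'c1']` constructs fine, but `df.unstack('c1')` hits this check and raises because the reference is ambiguous.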
https://api.github.com/repos/pandas-dev/pandas/pulls/21423
2018-06-11T14:18:25Z
2018-06-29T00:39:46Z
2018-06-29T00:39:46Z
2018-07-02T15:43:56Z
DOC: fix grammar of deprecation message
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py index 6b55554cdc941..7d5753d03f4fc 100644 --- a/pandas/util/_decorators.py +++ b/pandas/util/_decorators.py @@ -140,8 +140,8 @@ def wrapper(*args, **kwargs): if new_arg_name is None and old_arg_value is not None: msg = ( "the '{old_name}' keyword is deprecated and will be " - "removed in a future version " - "please takes steps to stop use of '{old_name}'" + "removed in a future version. " + "Please take steps to stop the use of '{old_name}'" ).format(old_name=old_arg_name) warnings.warn(msg, FutureWarning, stacklevel=stacklevel) kwargs[old_arg_name] = old_arg_value
https://api.github.com/repos/pandas-dev/pandas/pulls/21421
2018-06-11T12:42:58Z
2018-06-11T15:14:48Z
2018-06-11T15:14:48Z
2018-06-11T15:19:23Z
Doc Fixes
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 2c40be17ce781..0e4f040253560 100755 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -46,6 +46,15 @@ class _IndexSlice(object): """ Create an object to more easily perform multi-index slicing + See Also + -------- + MultiIndex.remove_unused_levels : New MultiIndex with no unused levels. + + Notes + ----- + See :ref:`Defined Levels <advanced.shown_levels>` + for further info on slicing a MultiIndex. + Examples --------
Closes <#21308>

Note: the Defined Levels link was added in the "Notes" section as opposed to "See Also". The description of the "See Also" section [here](https://numpydoc.readthedocs.io/en/latest/format.html) suggests it should really link to other functions etc.
https://api.github.com/repos/pandas-dev/pandas/pulls/21415
2018-06-10T21:45:11Z
2018-06-12T11:28:38Z
2018-06-12T11:28:38Z
2018-06-24T22:28:33Z
Doc Fixes
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py index 20805e33bb1d3..75b6be96feb78 100644 --- a/pandas/core/indexes/multi.py +++ b/pandas/core/indexes/multi.py @@ -1407,7 +1407,7 @@ def _sort_levels_monotonic(self): This is an *internal* function. - create a new MultiIndex from the current to monotonically sorted + Create a new MultiIndex from the current to monotonically sorted items IN the levels. This does not actually make the entire MultiIndex monotonic, JUST the levels. @@ -1465,8 +1465,8 @@ def _sort_levels_monotonic(self): def remove_unused_levels(self): """ - create a new MultiIndex from the current that removing - unused levels, meaning that they are not expressed in the labels + Create a new MultiIndex from the current that removes + unused levels, meaning that they are not expressed in the labels. The resulting MultiIndex will have the same outward appearance, meaning the same .values and ordering. It will also
Minor doc fixes - make casing and tense consistent
https://api.github.com/repos/pandas-dev/pandas/pulls/21414
2018-06-10T20:03:53Z
2018-06-11T11:28:55Z
2018-06-11T11:28:55Z
2018-06-11T22:43:09Z
add dropna=False to crosstab example
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py index 9a2ad5d13d77a..3390451c60c0f 100644 --- a/pandas/core/reshape/pivot.py +++ b/pandas/core/reshape/pivot.py @@ -446,7 +446,18 @@ def crosstab(index, columns, values=None, rownames=None, colnames=None, >>> foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c']) >>> bar = pd.Categorical(['d', 'e'], categories=['d', 'e', 'f']) >>> crosstab(foo, bar) # 'c' and 'f' are not represented in the data, - ... # but they still will be counted in the output + # and will not be shown in the output because + # dropna is True by default. Set 'dropna=False' + # to preserve categories with no data + ... # doctest: +SKIP + col_0 d e + row_0 + a 1 0 + b 0 1 + + >>> crosstab(foo, bar, dropna=False) # 'c' and 'f' are not represented + # in the data, but they still will be counted + # and shown in the output ... # doctest: +SKIP col_0 d e f row_0
```
>>> foo = pd.Categorical(['a', 'b'], categories=['a', 'b', 'c'])
>>> bar = pd.Categorical(['d', 'e'], categories=['d', 'e', 'f'])
>>> crosstab(foo, bar)  # 'c' and 'f' are not represented in the data,
...                     # but they still will be counted in the output
col_0  d  e  f
row_0
a      1  0  0
b      0  1  0
c      0  0  0
```

The above example code does not produce the output shown because `dropna=True` is the default. Changing `crosstab(foo, bar)` to `crosstab(foo, bar, dropna=False)` fixes that and produces the shown output (which is also the expected and correct output).

- [ ] closes #xxxx
- [x] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
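The `dropna` behavior being documented can be illustrated without pandas. This pure-Python sketch of the counting (names and structure are illustrative, not the pandas implementation) shows why `'c'` and `'f'` vanish from the default output but survive with `dropna=False`:

```python
from collections import Counter

def crosstab_counts(foo, bar, foo_cats, bar_cats, dropna=True):
    """Toy cross-tabulation over declared categories.

    With dropna=True (the pandas default), categories that never occur
    in the data are dropped; with dropna=False every declared category
    pair is kept, zero-filled.
    """
    counts = Counter(zip(foo, bar))
    if dropna:
        # drop categories with no observations, as pandas does by default
        foo_cats = [c for c in foo_cats if c in foo]
        bar_cats = [c for c in bar_cats if c in bar]
    return {(f, b): counts.get((f, b), 0)
            for f in foo_cats for b in bar_cats}
```

Run on the example data, the default produces only the 2x2 `a/b` x `d/e` grid, while `dropna=False` yields the full 3x3 grid including the empty `c` row and `f` column.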
https://api.github.com/repos/pandas-dev/pandas/pulls/21413
2018-06-10T20:03:30Z
2018-06-12T11:31:11Z
2018-06-12T11:31:11Z
2018-06-12T11:50:42Z
BUG: Categorical.__setitem__ allows for tuple assignment
diff --git a/doc/source/whatsnew/v0.23.2.txt b/doc/source/whatsnew/v0.23.2.txt index c636e73fbd6c2..1e52fd83d2bd4 100644 --- a/doc/source/whatsnew/v0.23.2.txt +++ b/doc/source/whatsnew/v0.23.2.txt @@ -43,8 +43,11 @@ Bug Fixes - - -**Conversion** +**Data-type specific** + +- Bug in :meth:`Categorical.__setitem__` where error was raised when trying to set value to a tuple (:issue:`20439`) +**Conversion** - - diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py index d466198b648ef..07b58596f8430 100644 --- a/pandas/core/arrays/categorical.py +++ b/pandas/core/arrays/categorical.py @@ -1971,10 +1971,14 @@ def __setitem__(self, key, value): raise ValueError("Cannot set a Categorical with another, " "without identical categories") - rvalue = value if is_list_like(value) else [value] - from pandas import Index - to_add = Index(rvalue).difference(self.categories) + if isinstance(value, tuple): + rvalue = [value] + to_add = Index(rvalue, + tupleize_cols=False).difference(self.categories) + else: + rvalue = value if is_list_like(value) else [value] + to_add = Index(rvalue).difference(self.categories) # no assignments of values not in categories, but it's always ok to set # something to np.nan diff --git a/pandas/tests/categorical/test_indexing.py b/pandas/tests/categorical/test_indexing.py index 9c27b1101e5ca..398ab429c2b25 100644 --- a/pandas/tests/categorical/test_indexing.py +++ b/pandas/tests/categorical/test_indexing.py @@ -103,3 +103,22 @@ def f(): s.categories = [1, 2] pytest.raises(ValueError, f) + + def test_setitem_with_tuple_categories(self): + # GH 20439 + + # change element in Categorical of tuples + s = Categorical([('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')]) + s[0] = ('b', 'b') + expected = Categorical( + [('b', 'b'), ('a', 'b'), ('b', 'a'), ('b', 'b')], + categories=[('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')] + ) + tm.assert_categorical_equal(s, expected) + + # change element in Categorical to use new category + msg = 
("Cannot setitem on a Categorical with a new category, set the " + "categories first") + with tm.assert_raises_regex(ValueError, msg): + s = Categorical([('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')]) + s[0] = ('c', 'c')
- [x] closes #20439
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry

Pretty straightforward fix in the `__setitem__` method.

While I was looking into this, I discovered a bug that does not allow for the creation of mixed-dtype Categoricals if the first element is not a tuple. Should I create an issue?

```console
In [20]: s = pd.Categorical([('a', 'a'), ('a', 'b'), ('b', 'a'), 'c'])

In [21]: s
Out[21]:
[(a, a), (a, b), (b, a), c]
Categories (4, object): [(a, a), (a, b), (b, a), c]

In [22]: s = pd.Categorical(['c', ('a', 'b'), ('b', 'a'), 'c'])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-22-0f4b5f338532> in <module>()
----> 1 s = pd.Categorical(['c', ('a', 'b'), ('b', 'a'), 'c'])

~/Documents/siv-dev/projects/open-source/pandas/pandas/core/arrays/categorical.py in __init__(self, values, categories, ordered, dtype, fastpath)
    328         # _sanitize_array coerces np.nan to a string under certain versions
    329         # of numpy
--> 330         values = maybe_infer_to_datetimelike(values, convert_dates=True)
    331         if not isinstance(values, np.ndarray):
    332             values = _convert_to_list_like(values)

~/Documents/siv-dev/projects/open-source/pandas/pandas/core/dtypes/cast.py in maybe_infer_to_datetimelike(value, convert_dates)
    893     if not is_list_like(v):
    894         v = [v]
--> 895     v = np.array(v, copy=False)
    896
    897     # we only care about object dtypes

ValueError: setting an array element with a sequence
```
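The essence of the `__setitem__` fix is that a tuple must be treated as a single scalar category rather than unpacked element-wise (which is what pandas' `Index` constructor does by default via `tupleize_cols=True`, and why the diff passes `tupleize_cols=False`). A standalone sketch of the membership check, with the categories passed in explicitly (names here are illustrative, not pandas internals):

```python
def new_values(value, categories):
    """Return the elements of `value` missing from `categories`.

    A tuple counts as one scalar category, so it is wrapped in a list
    just like any other scalar instead of being unpacked.
    """
    rvalue = value if isinstance(value, list) else [value]
    return [v for v in rvalue if v not in categories]
```

Setting an element to an existing tuple category yields no new values and is allowed; a tuple outside the categories shows up as new, which is the case where `Categorical.__setitem__` raises "Cannot setitem on a Categorical with a new category".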
https://api.github.com/repos/pandas-dev/pandas/pulls/21412
2018-06-10T19:16:09Z
2018-07-16T22:48:17Z
null
2018-07-16T22:48:17Z